Custom Concept Text-to-Image Using Stable Diffusion Model in Generative Artificial Intelligence
Keywords:
Custom Concept, Generative Artificial Intelligence, Stable Diffusion, Text-to-Image

Abstract
Generative artificial intelligence (Gen AI) has revolutionized several fields through the ability of algorithms to produce content that closely mimics human work. However, these developments also raise questions about the transparency, predictability, and behavior of generative models. Given the relevance of this topic and the expanding influence of AI on society, research into it is imperative. This paper empirically explores the nuances of model behavior in generative AI, using the Stable Diffusion model as a case study. A deeper grasp of this phenomenon will better equip us to handle obstacles and ensure the ethical and responsible application of generative AI in a rapidly changing world. The research was carried out in several stages: dataset collection, modeling, testing, and analysis of results. The results show that generative artificial intelligence can create realistic images that resemble the originals. However, several challenges remain, including the need for a reasonably large training dataset and high, lengthy computation times. Likewise, the Fréchet Inception Distance (FID) score was still quite large, namely 1284.4430, indicating that the quality of this model is still poor.
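The FID score reported above compares the feature statistics of generated images against real ones. In practice the features come from an Inception-v3 network; the sketch below, a simplified illustration rather than the paper's actual evaluation code, shows only the Fréchet distance formula itself, applied to arbitrary feature vectors (one row per image).

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_real: np.ndarray, feat_gen: np.ndarray) -> float:
    """Fréchet distance between two sets of feature vectors (rows).

    FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 * (S_r S_g)^(1/2))
    where mu and S are the mean and covariance of each feature set.
    """
    mu1, mu2 = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    sigma1 = np.cov(feat_real, rowvar=False)
    sigma2 = np.cov(feat_gen, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the covariance product; small imaginary
    # components from numerical error are discarded.
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Two identical feature sets give a distance near zero, while a large shift between the real and generated distributions, as in the 1284.4430 result reported here, signals that the generated images' statistics are far from the originals.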
Published: 2024-05-30
Section: Articles
License
Copyright (c) 2024 International Journal of Informatics and Computing
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.