Custom Concept Text-to-Image Using Stable Diffusion Model in Generative Artificial Intelligence

Authors

  • Alam Rahmatulloh Department of Informatics, Faculty of Engineering, Siliwangi University

Keywords:

Custom Concept, Generative Artificial Intelligence, Stable Diffusion, Text-to-image

Abstract

Generative artificial intelligence (Gen AI) has revolutionized several fields through the ability of algorithms to produce content that closely mimics human work. However, these developments also raise questions about the transparency, predictability, and behavior of generative models. Given the relevance of this topic and the expanding influence of AI on society, research into it is imperative. This paper empirically explores the nuances of generative model behavior, using the Stable Diffusion model as an example. A deeper grasp of this phenomenon will better equip us to handle obstacles and to guarantee the ethical and responsible application of generative AI in a rapidly changing world. The research was carried out in several stages: dataset collection, modeling, testing, and analysis of results. The results show that generative artificial intelligence can create realistic images that resemble the originals. However, several challenges remain, including the need for a reasonably large training dataset and the high computational cost and long training times. Likewise, the Fréchet Inception Distance (FID) score was still quite large, namely 1284.4430, indicating that the quality of this model is still not good.
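For reference, the FID score reported above compares the statistics of real and generated images in an Inception feature space: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). The sketch below computes this from pre-extracted feature vectors; the function name and array shapes are illustrative assumptions, not the paper's actual implementation, which is not shown here.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feats_real, feats_gen):
    """FID between two sets of feature vectors, each of shape (n_samples, dim).

    In practice the features come from an Inception network; any fixed
    feature extractor yields a comparable distance.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary components from numerical error
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical feature sets give a score near zero, and the score grows as the generated distribution drifts from the real one, which is why a value in the thousands signals poor sample quality.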

Published

2024-05-30

Issue

Section

Articles