A Comprehensive Review of Cross-Domain Image Translation: A Framework Using Generative Adversarial Networks and Variational Autoencoders


Sahar Jabbar Mohammed

Abstract




Within the extensive array of image generative models, two are particularly notable: Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). GANs can generate realistic images; nevertheless, they are prone to mode collapse and lack a straightforward way to obtain the latent representation of an image. Conversely, VAEs do not suffer from these issues, yet they frequently produce images that are less realistic than those generated by GANs. This article argues that this lack of realism is partly attributable to a common overestimation of the dimensionality of the natural image manifold. To address this, we propose a new framework that integrates a VAE with a GAN in a complementary manner, yielding an auto-encoding model that retains the properties of VAEs while generating images of GAN quality. We assess our methodology with both qualitative and quantitative analyses across five image datasets.


We introduce a comprehensive learning system that integrates a deep convolutional GAN with a variational autoencoder. We first identify a technique that addresses the blurriness and distortion often seen in generated images; in this setting, combining a GAN with a VAE proves the more advantageous option.
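The abstract does not give the training objective, but hybrid VAE-GAN systems of this kind typically optimise a combined loss: a VAE reconstruction term, a KL regulariser on the latent posterior, and an adversarial term from the discriminator. A minimal NumPy sketch of that combined loss (with tiny linear maps standing in for the encoder, decoder, and discriminator networks; all weights and dimensions here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy linear "networks": encoder -> (mu, logvar), decoder, discriminator.
# Dimensions are arbitrary placeholders for illustration.
D_IN, D_LAT = 8, 2
W_enc_mu = rng.normal(scale=0.1, size=(D_LAT, D_IN))
W_enc_lv = rng.normal(scale=0.1, size=(D_LAT, D_IN))
W_dec    = rng.normal(scale=0.1, size=(D_IN, D_LAT))
w_disc   = rng.normal(scale=0.1, size=D_IN)

def vae_gan_loss(x):
    # Encode x to a Gaussian posterior q(z|x) = N(mu, exp(logvar)).
    mu, logvar = W_enc_mu @ x, W_enc_lv @ x
    # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I).
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=D_LAT)
    x_hat = W_dec @ z
    # VAE terms: reconstruction error + KL(q(z|x) || N(0, I)).
    recon = np.mean((x - x_hat) ** 2)
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    # GAN term: the generator is rewarded when the discriminator
    # scores the reconstruction x_hat as "real".
    adv = -np.log(sigmoid(w_disc @ x_hat) + 1e-12)
    return recon + kl + adv

x = rng.normal(size=D_IN)
loss = vae_gan_loss(x)
```

In a real system each linear map would be a deep convolutional network and the three terms would usually be weighted; the sketch only shows how the VAE and GAN objectives coexist in one loss.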




Article Details

How to Cite
Sahar Jabbar Mohammed. (2026). A Comprehensive Review of Cross-Domain Image Translation: A Framework Using Generative Adversarial Networks and Variational Autoencoders. Iraqi Journal of Intelligent Computing and Informatics (IJICI), 4(2), 103-112. Retrieved from http://ijici.edu.iq/index.php/1/article/view/85
Section
Articles