Abstract
Deepfake technology has progressed rapidly alongside the development of Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and multi-encoder synthesis methods. These advances have made the generation of hyperrealistic synthetic media possible, raising serious concerns about misinformation, identity theft, and cyberthreats. To address these risks, research on deepfake detection has continued to evolve, employing CNNs, RNNs, transformers, and hybrid architectures to detect manipulated content. This survey offers a detailed overview of emerging deepfake generation techniques, ranging from face swapping and reenactment to lip-syncing models, along with an in-depth analysis of current state-of-the-art deepfake detection methods. It evaluates the strengths and limitations of spatial, temporal, and multimodal detection approaches. In addition, generalization issues, adversarial robustness, computational costs, and ethical challenges are discussed in detail. Self-supervised learning is poised to become a game changer as new algorithms and models emerge, while explainable AI (XAI) and adaptive adversarial training improve the interpretability and robustness of forensic models. Practical deployment nevertheless remains a major challenge, particularly for real-time detection and large-scale operation. The paper also maps out a path for further development, emphasizing lightweight and efficient detection models, multimodal approaches, and regulatory frameworks. Finally, it outlines future research directions centered on responsible AI practices and the development of socially responsible AI governance architectures. 
Ultimately, the main purpose of this review is to integrate the latest scientific findings into a solid reference base for researchers, practitioners, and policymakers, directing their work toward scalable, interpretable, and ethically responsible deepfake detection solutions in an era of rapidly evolving synthetic media technologies.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright (c) 2025 Iraqi Journal of Intelligent Computing and Informatics (IJICI)