Generative Adversarial Networks: Techniques, Area Covered, and Challenges

Choubey Aakanksha S.*, Gajbhiye Samta**, Tiwari Rajesh***
*-** Department of Computer Science & Engineering, Shri Shankaracharya Institute of Engineering and Technology, Bhilai, Chhattisgarh, India.
*** CMR Engineering College, Medchal, Hyderabad, Telangana, India.
Periodicity: July–December 2023
DOI: https://doi.org/10.26634/jaim.1.2.20025

Abstract

Generative Adversarial Networks (GANs) have emerged as powerful frameworks for generating realistic and diverse data samples across domains including computer vision, natural language processing, and audio synthesis. This study presents a comprehensive overview of GANs, covering their key components, the training process, and the evolution of the field. Variations in GAN architectures and their applications across domains are discussed. Furthermore, open challenges and research directions are analysed, including training stability, mode collapse, evaluation metrics, and ethical considerations. The aim of this review is to provide a comprehensive understanding of GANs, their strengths and limitations, and potential future developments.
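
As background for the training process the abstract refers to, the adversarial objective in the original GAN formulation (Goodfellow et al., 2014) pits a generator G against a discriminator D in a minimax game. The standard form of this objective is reproduced here for reference; it is not quoted from the article body:

\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

Here the discriminator is trained to distinguish real samples x from generated samples G(z), while the generator is trained to make G(z) indistinguishable from real data.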

Keywords

Generative Adversarial Networks (GANs), Natural Language Processing, Training, Fréchet Inception Distance (FID), Dataset.
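
Among the keywords, the Fréchet Inception Distance (FID) is the evaluation metric introduced by Heusel et al. (2017). As a reference point (again, not quoted from the article body), it compares the means and covariances of Inception-network features extracted from real and generated samples:

\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\big)

where (\mu_r, \Sigma_r) and (\mu_g, \Sigma_g) are the feature mean and covariance of the real and generated distributions, respectively; lower values indicate that the generated distribution is closer to the real one.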

How to Cite this Article?

Aakanksha, S. C., Samta, G., and Rajesh, T. (2023). Generative Adversarial Networks: Techniques, Area Covered, and Challenges. i-manager’s Journal on Artificial Intelligence & Machine Learning, 1(2), 49-55. https://doi.org/10.26634/jaim.1.2.20025
