Open Access Review Article

A Mini-Review on Current Difficulties of Self-attention Generative Adversarial Networks

Cheng Xu*

School of Computer Science, University College Dublin, Dublin, Ireland

*Corresponding author

Received Date: November 11, 2022; Published Date: November 23, 2022

Abstract

With the rapid development of the Vision Transformer, its application to Generative Adversarial Networks (GANs) has attracted growing attention. However, while Transformer-based GANs have outperformed traditional convolution-based GANs in some cases, a clear performance gap remains between the two approaches. In addition, several problems can be observed in the diversity of the generated content. In this work, I review recent research on self-attention generative adversarial models and present some observations.

Keywords: Transformer; Self-attention; GAN; Generative model

Abbreviations: Generative Adversarial Networks (GAN); Variational Auto-encoder (VAE); Convolutional Neural Network (CNN); Natural Language Processing (NLP); Vision Transformer (ViT); Conditional GAN (CGAN); Wasserstein GAN (WGAN); Multilayer Perceptron (MLP); Fréchet Inception Distance (FID); Inception Score (IS)
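As background for the review, the sketch below illustrates the kind of self-attention block that such models insert into a GAN's generator and discriminator, in the spirit of SAGAN (Zhang et al., 2019). It is a minimal illustration assuming PyTorch; the layer names, channel reduction factor, and shapes are illustrative choices, not the exact module of any specific work discussed here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces;
        # the 8x channel reduction for q/k is the choice used in SAGAN.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable gate, initialised at zero so the block starts as an identity.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)  # (b, n, c//8)
        k = self.key(x).view(b, -1, n)                     # (b, c//8, n)
        v = self.value(x).view(b, -1, n)                   # (b, c, n)
        # Attention map over all pairs of spatial positions.
        attn = F.softmax(torch.bmm(q, k), dim=-1)          # (b, n, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                        # gated residual

# Example: attend over an 8x8 feature map inside a generator block.
feat = torch.randn(4, 64, 8, 8)
print(SelfAttention2d(64)(feat).shape)  # torch.Size([4, 64, 8, 8])

Initialising the gate gamma at zero lets the network rely on its convolutional features at first and learn to weight the non-local attention branch gradually, which is the design rationale given in the SAGAN paper.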
