Generating MNIST Handwritten Digits
using
Generative Adversarial Network (GAN) & Variational Autoencoder (VAE)
Saurabh Kumar Prasad
Department of Computer Science
University of Florida
[email protected]
Abstract— In this report, I intend to do a comparative analysis of the performance of a Generative Adversarial Network (GAN) versus a Variational Autoencoder (VAE) for generating MNIST handwritten digits.

Index Terms— Handwriting Recognition, Adversarial Network, Variational Autoencoder, Unsupervised Learning

I. PROJECT PLAN

Generative adversarial networks (GANs) have proven hugely successful in a variety of image-processing applications. A GAN consists of a generator model and a discriminator model: the generator tries to produce "fake" samples as close to the real ones as possible, while the discriminator tries to distinguish the fake samples from the real ones.

A VAE, on the other hand, learns a complicated data distribution, such as that of images, using neural networks, and aims to maximize a lower bound on the data log-likelihood.

A. Initial Literature Search

I plan to go through articles on towardsdatascience.com to learn the basics of GANs and VAEs. After that, I will read research papers on these topics for a more in-depth understanding. I am also learning tools such as Keras and TensorFlow through Udemy.

B. Dataset Preparation

I will be using the Modified National Institute of Standards and Technology (MNIST) dataset (http://yann.lecun.com/exdb/mnist/). I will either download it from that website or, if I use the Keras library, load it with the mnist.load_data() function. Additionally, I would like to use a facial dataset called 'CelebA' from Kaggle: https://www.kaggle.com/jessicali9530/celeba-dataset

C. Language and Libraries Selection

I will be using Python for this comparison, as I am comfortable with the language and it has all the libraries I need. The libraries I plan to use for this project are NumPy, Matplotlib, Keras, and TensorFlow.

D. Computational Resource

I will be using an Nvidia GPU for training the models. The system configuration is as follows: an Intel i7-9750H processor, 16 GB of RAM, and an Nvidia RTX 2070 with 2304 CUDA cores and a 1410 MHz clock speed.

REFERENCES

[1] Alec Radford, Luke Metz, and Soumith Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks," arXiv:1511.06434.
[2] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, "Generative Adversarial Networks."
[3] https://medium.com/@prakashpandey9/deep-generative-models-e0f149995b7c
[4] H. Hu, M. Liao, W. Mao, W. Liu, C. Zhang and Y. Jing, "Variational Auto-Encoder for Text Generation," 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 2020, pp. 595-598, doi: 10.1109/ITOEC49072.2020.9141571.
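The generator-versus-discriminator game described in the project plan can be made concrete through its loss functions. The sketch below is illustrative only, not the project's implementation: it shows the standard binary cross-entropy discriminator loss and the non-saturating generator loss in plain NumPy, with the example score arrays being assumed values.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    # Binary cross-entropy: push D(real) toward 1 and D(fake) toward 0.
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-8):
    # Non-saturating generator loss: push D(fake) toward 1.
    return -np.mean(np.log(d_fake + eps))

# A confident, correct discriminator yields a small discriminator loss,
# while the generator's loss stays large because its fakes are being caught.
d_real = np.array([0.9, 0.95])   # hypothetical D scores on real digits
d_fake = np.array([0.1, 0.05])   # hypothetical D scores on generated digits
print(discriminator_loss(d_real, d_fake))
print(generator_loss(d_fake))
```

Training alternates gradient steps on these two losses, which is what drives the generator's samples toward the real data distribution.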
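The "lower bound on the data log-likelihood" that the VAE maximizes is the evidence lower bound (ELBO): a reconstruction term minus the KL divergence between the encoder's posterior and a standard normal prior. A minimal NumPy sketch under the usual Gaussian-encoder / Bernoulli-decoder assumptions (function names and the toy inputs are illustrative, not from the project):

```python
import numpy as np

def kl_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def bernoulli_log_likelihood(x, x_hat, eps=1e-8):
    # Log-probability of binary pixels x under decoder outputs x_hat in (0, 1).
    return np.sum(x * np.log(x_hat + eps) + (1 - x) * np.log(1 - x_hat + eps))

def elbo(x, x_hat, mu, logvar):
    # Evidence lower bound: reconstruction term minus KL regularizer.
    # Training maximizes this (equivalently, minimizes -elbo).
    return bernoulli_log_likelihood(x, x_hat) - kl_gaussian(mu, logvar)

x = np.array([1.0, 0.0, 1.0])            # a tiny three-pixel "image"
mu, logvar = np.zeros(2), np.zeros(2)    # encoder output matching the prior
print(elbo(x, np.array([0.9, 0.1, 0.9]), mu, logvar))  # good reconstruction
print(elbo(x, np.array([0.5, 0.5, 0.5]), mu, logvar))  # poor reconstruction
```

A better reconstruction yields a higher ELBO, while the KL term keeps the latent codes close to the prior so that sampling from N(0, I) produces plausible digits at generation time.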