8. Applying Generative Adversarial Networks for image generation and unsupervised tasks

Aim: To implement Generative Adversarial Networks for image generation and unsupervised tasks.

ALGORITHM:
1. Define the GAN Architecture
 Step 1.1: Design the Generator:
o Input: Random noise vector (latent space) of size z, the latent dimension.
o Layers: Fully connected, batch normalization, and activation layers to upscale
the noise into an image of the desired size.
o Output: Synthetic image (same size as the real image).
 Step 1.2: Design the Discriminator:
o Input: An image (real or generated).
o Layers: Convolutional or fully connected layers to classify the input as real or
fake.
o Output: A scalar (1 for real, 0 for fake).
 Step 1.3: Combine both into a GAN:
o The discriminator is connected to the generator.
o The discriminator weights are frozen while training the GAN.
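
Step 1.3 is the least obvious part, so a minimal sketch of the idea is given here, using tiny stand-in networks (the real Generator and Discriminator classes appear in the PROGRAM section below). Freezing the discriminator in this sketch is done by toggling requires_grad; the program below achieves the same effect by stepping only the generator's optimizer and detaching the fake images during the discriminator update.

import torch
import torch.nn as nn

# Toy stand-ins for the Generator and Discriminator defined later in this document.
G = nn.Sequential(nn.Linear(100, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 1), nn.Sigmoid())

def set_requires_grad(model, flag):
    # Enable or disable gradient computation for every parameter of a model.
    for p in model.parameters():
        p.requires_grad_(flag)

# Combined GAN step: the generator is updated while the discriminator is frozen.
set_requires_grad(D, False)                 # freeze the critic
z = torch.randn(8, 100)                     # batch of latent noise vectors
g_loss = nn.BCELoss()(D(G(z)), torch.ones(8, 1))
g_loss.backward()                           # gradients reach G only
set_requires_grad(D, True)                  # unfreeze before the next D update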

2. Preprocess the Data


 Step 2.1: Load the real image dataset (e.g., MNIST, CIFAR-10).
 Step 2.2: Normalize pixel values to the range [-1, 1] (see the sketch after this list).
 Step 2.3: Prepare batches of training data.
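
A small sketch of Step 2.2, assuming single-channel MNIST-style images: ToTensor() scales pixels to [0, 1], and Normalize((0.5,), (0.5,)) then maps each pixel x to (x - 0.5) / 0.5, i.e. into [-1, 1], which matches the Tanh output range of the generator.

import numpy as np
import torchvision.transforms as transforms

# Preprocessing pipeline used later in the program:
# ToTensor() -> values in [0, 1]; Normalize((0.5,), (0.5,)) -> values in [-1, 1].
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),
])

# Quick check on a fake single-channel 28x28 image (uint8 values in [0, 255]).
fake_img = (np.random.rand(28, 28) * 255).astype(np.uint8)
x = transform(fake_img)                     # shape (1, 28, 28)
print(x.min().item(), x.max().item())       # roughly -1.0 and 1.0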

3. Define Loss Functions and Optimizers


 Step 3.1: Use the Binary Cross-Entropy Loss for both generator and discriminator.
 Step 3.2: Use optimizers like Adam or RMSProp with appropriate learning rates for
each model.
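
In other words, Step 3 amounts to the standard non-saturating GAN objective expressed through binary cross-entropy. A minimal sketch with placeholder discriminator outputs (the optimizer comment mirrors the settings used in the program below):

import torch
import torch.nn as nn

bce = nn.BCELoss()

# Suppose d_real = D(x) and d_fake = D(G(z)), both of shape (batch, 1) in (0, 1).
d_real = torch.rand(4, 1)   # placeholder discriminator outputs on real images
d_fake = torch.rand(4, 1)   # placeholder discriminator outputs on fake images

# Discriminator objective: push D(x) toward 1 and D(G(z)) toward 0.
d_loss = (bce(d_real, torch.ones(4, 1)) + bce(d_fake, torch.zeros(4, 1))) / 2

# Generator objective (non-saturating form): push D(G(z)) toward 1.
g_loss = bce(d_fake, torch.ones(4, 1))

# Adam with betas=(0.5, 0.999) and lr=0.0002 is a common choice for GAN training,
# and is what the program below uses for both models.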
4. Training the GAN
 Step 4.1: Repeat for a predefined number of epochs:
o Step 4.1.1: Train the Discriminator:
 Select a batch of real images.
 Generate a batch of fake images using the generator.
 Compute the discriminator's loss on:
 Real images (label = 1).
 Fake images (label = 0).
 Update the discriminator weights based on the loss.
o Step 4.1.2: Train the Generator:
 Generate a batch of noise vectors.
 Pass these through the generator to produce fake images.
 Compute the discriminator's output for these fake images, using a target label of 1 (so the generator is rewarded when the discriminator is fooled).
 Compute the generator loss based on the discriminator's feedback.
 Update the generator weights to improve its ability to fool the
discriminator.
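
One detail worth making explicit in Step 4.1.1: the fake images are detached from the generator's computation graph before the discriminator scores them, so the discriminator update cannot push gradients back into the generator. A minimal sketch with toy networks:

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 28 * 28), nn.Tanh())      # toy generator
D = nn.Sequential(nn.Linear(28 * 28, 1), nn.Sigmoid())     # toy discriminator
bce = nn.BCELoss()

fake_imgs = G(torch.randn(8, 100))

# Discriminator update: detach() cuts the graph, so gradients stop at the fake
# images and never reach the generator's parameters.
d_loss = bce(D(fake_imgs.detach()), torch.zeros(8, 1))
d_loss.backward()
print(G[0].weight.grad is None)   # True: the generator received no gradient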

5. Monitor the Progress


 Step 5.1: Periodically save and display generated images to evaluate the generator's
performance.
 Step 5.2: Log the discriminator and generator losses to track training stability.
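
A minimal sketch of Step 5, using a toy generator and placeholder loss values; a fixed noise batch is assumed so that saved image grids are comparable across epochs, and the file names are illustrative.

import torch
import torch.nn as nn
import torchvision.utils as vutils

# Toy generator standing in for the Generator class defined in the program below.
G = nn.Sequential(nn.Linear(100, 28 * 28), nn.Tanh())

# A fixed noise batch makes visual progress easy to compare across epochs.
fixed_noise = torch.randn(16, 100)

losses = {"d": [], "g": []}                          # simple loss log (Step 5.2)

for epoch in range(3):                               # stand-in for the real training loop
    d_loss, g_loss = torch.rand(1), torch.rand(1)    # placeholder loss values
    losses["d"].append(d_loss.item())
    losses["g"].append(g_loss.item())

    # Step 5.1: periodically save a grid of samples generated from the fixed noise.
    with torch.no_grad():
        samples = G(fixed_noise).view(-1, 1, 28, 28)
    vutils.save_image(samples, f"samples_epoch_{epoch + 1}.png", nrow=4, normalize=True)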

6. Save the Models


 Step 6.1: Save the trained generator model for generating new synthetic images.
 Step 6.2: Optionally save the discriminator model for transfer learning or further
experiments.
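
A minimal sketch of Step 6, using a toy stand-in for the trained generator; in the actual program one would pass gan.generator.state_dict() instead, and the file names here are illustrative.

import torch
import torch.nn as nn

# Toy module standing in for the trained Generator from the program below.
generator = nn.Sequential(nn.Linear(100, 28 * 28), nn.Tanh())

# Step 6.1: save only the state_dict (recommended PyTorch practice).
torch.save(generator.state_dict(), "generator.pth")

# Later / elsewhere: rebuild the same architecture and load the saved weights.
restored = nn.Sequential(nn.Linear(100, 28 * 28), nn.Tanh())
restored.load_state_dict(torch.load("generator.pth"))
restored.eval()
with torch.no_grad():
    new_images = restored(torch.randn(16, 100)).view(-1, 1, 28, 28)
print(new_images.shape)   # torch.Size([16, 1, 28, 28])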

PROGRAM:
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# Generator Network
class Generator(nn.Module):
    def __init__(self, latent_dim):
        super(Generator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, 28*28),
            nn.Tanh()
        )

    def forward(self, z):
        img = self.model(z)
        img = img.view(img.size(0), 1, 28, 28)
        return img

# Discriminator Network
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(28*28, 1024),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.3),
            nn.Linear(1024, 512),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.3),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid()
        )

    def forward(self, img):
        img_flat = img.view(img.size(0), -1)
        validity = self.model(img_flat)
        return validity

# GAN Training Setup
class GAN:
    def __init__(self, latent_dim=100):
        self.latent_dim = latent_dim
        self.generator = Generator(latent_dim)
        self.discriminator = Discriminator()
        # Loss and Optimizers
        self.adversarial_loss = nn.BCELoss()
        self.g_optimizer = optim.Adam(self.generator.parameters(), lr=0.0002,
                                      betas=(0.5, 0.999))
        self.d_optimizer = optim.Adam(self.discriminator.parameters(), lr=0.0002,
                                      betas=(0.5, 0.999))

    def train(self, dataloader, epochs):
        for epoch in range(epochs):
            for real_imgs, _ in dataloader:
                batch_size = real_imgs.size(0)

                # Ground truths
                valid = torch.ones(batch_size, 1)
                fake = torch.zeros(batch_size, 1)

                # Train Generator
                self.g_optimizer.zero_grad()
                z = torch.randn(batch_size, self.latent_dim)
                generated_imgs = self.generator(z)
                g_loss = self.adversarial_loss(
                    self.discriminator(generated_imgs), valid)
                g_loss.backward()
                self.g_optimizer.step()

                # Train Discriminator
                self.d_optimizer.zero_grad()
                real_loss = self.adversarial_loss(
                    self.discriminator(real_imgs), valid)
                fake_loss = self.adversarial_loss(
                    self.discriminator(generated_imgs.detach()), fake)
                d_loss = (real_loss + fake_loss) / 2
                d_loss.backward()
                self.d_optimizer.step()

            print(f"Epoch [{epoch+1}/{epochs}], "
                  f"D Loss: {d_loss.item():.4f}, "
                  f"G Loss: {g_loss.item():.4f}")

    def generate_images(self, num_images):
        # No gradients are needed when sampling from the trained generator
        with torch.no_grad():
            z = torch.randn(num_images, self.latent_dim)
            generated_imgs = self.generator(z)
        return generated_imgs

# Data Preparation
def prepare_mnist_dataloader():
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,))
    ])
    dataset = torchvision.datasets.MNIST(
        root='./data',
        train=True,
        download=True,
        transform=transform
    )
    dataloader = DataLoader(dataset, batch_size=64, shuffle=True)
    return dataloader

# Main Execution
def main():
    # Prepare data
    dataloader = prepare_mnist_dataloader()

    # Initialize and train GAN
    gan = GAN()
    gan.train(dataloader, epochs=50)

    # Generate sample images
    generated_images = gan.generate_images(16)

    # Optionally save or visualize generated images
    torchvision.utils.save_image(
        generated_images,
        'generated_images.png',
        normalize=True
    )

if __name__ == "__main__":
    main()
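
As an optional follow-up to the "save or visualize" step in main(), the generated tensor can also be displayed as a grid with matplotlib. A minimal sketch, using a placeholder tensor in place of the real generator output (matplotlib is assumed to be installed):

import matplotlib.pyplot as plt
import torch

# Placeholder standing in for gan.generate_images(16): shape (16, 1, 28, 28), values in [-1, 1].
generated_images = torch.tanh(torch.randn(16, 1, 28, 28))

fig, axes = plt.subplots(4, 4, figsize=(4, 4))
for ax, img in zip(axes.flat, generated_images):
    # Rescale pixels from [-1, 1] back to [0, 1] for display.
    ax.imshow(((img.squeeze() + 1) / 2).numpy(), cmap="gray")
    ax.axis("off")
plt.tight_layout()
plt.savefig("generated_grid.png")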
OUTPUT:
Downloading train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz
100%|██████████| 9.91M/9.91M [00:01<00:00, 5.23MB/s]
Extracting ./data/MNIST/raw/train-images-idx3-ubyte.gz to ./data/MNIST/raw

Downloading train-labels-idx1-ubyte.gz to ./data/MNIST/raw/train-labels-idx1-ubyte.gz
100%|██████████| 28.9k/28.9k [00:00<00:00, 155kB/s]
Extracting ./data/MNIST/raw/train-labels-idx1-ubyte.gz to ./data/MNIST/raw

Downloading t10k-images-idx3-ubyte.gz to ./data/MNIST/raw/t10k-images-idx3-ubyte.gz
100%|██████████| 1.65M/1.65M [00:01<00:00, 1.24MB/s]
Extracting ./data/MNIST/raw/t10k-images-idx3-ubyte.gz to ./data/MNIST/raw

Downloading t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz
100%|██████████| 4.54k/4.54k [00:00<00:00, 3.83MB/s]
Extracting ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw

Epoch [1/5], D Loss: 0.0782, G Loss: 2.3916
Epoch [2/5], D Loss: 0.0842, G Loss: 2.3884
Epoch [3/5], D Loss: 0.0757, G Loss: 3.7717
Epoch [4/5], D Loss: 0.0669, G Loss: 2.7846
Epoch [5/5], D Loss: 0.2120, G Loss: 1.8788

RESULT:
The trained GAN successfully generates realistic images resembling the handwritten
digits in the input MNIST dataset.
