GAN Algorithm
🌟 Objective Function (Minimax Game):
min_G max_D V(D, G) = Ex∼pdata [log D(x)] + Ez∼pz [log(1 − D(G(z)))]
Where:
- D(x): probability the discriminator assigns to real data being real
- D(G(z)): probability the discriminator assigns to fake data being real
- G(z): fake data generated from random noise z
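The value function can be estimated directly from a batch of discriminator outputs. A minimal numpy sketch (the helper name `value_fn` and the `eps` clamp are our own choices):

```python
import numpy as np

def value_fn(d_real, d_fake, eps=1e-12):
    """Batch estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: discriminator outputs D(x) on real samples, each in (0, 1)
    d_fake: discriminator outputs D(G(z)) on fake samples, each in (0, 1)
    eps keeps the logs away from -inf at 0 and 1.
    """
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# A confident, correct discriminator drives V toward 0;
# a confused one (both outputs ≈ 0.5) gives V = 2 * log(0.5) ≈ -1.386.
```

D takes gradient steps to increase this quantity while G takes steps to decrease it, which is exactly what the min-max notation expresses.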
🔄 GAN Training Algorithm (Step-by-Step)
Step 1: Initialize
Create two neural networks:
- Generator G: maps noise z → fake data
- Discriminator D: distinguishes real vs. fake data
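As a concrete illustration, both networks can be small MLPs. The layer sizes, initialization, and helper names below are our own hypothetical choices, not part of the algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out, rng):
    # One-hidden-layer MLP parameters with small random weights
    return {
        "W1": rng.normal(0.0, 0.1, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.1, (n_hidden, n_out)), "b2": np.zeros(n_out),
    }

def mlp(params, x):
    h = np.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

G = init_mlp(2, 16, 2, rng)       # Generator: noise z -> fake 2-D sample
D = init_mlp(2, 16, 1, rng)       # Discriminator: sample -> real/fake logit

z = rng.normal(size=(5, 2))       # a batch of 5 noise vectors
fake = mlp(G, z)                  # G(z), shape (5, 2)
p_real = sigmoid(mlp(D, fake))    # D(G(z)), probabilities in (0, 1)
```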
Step 2: Train the Discriminator D
Sample:
- Real data x ∼ pdata
- Noise z ∼ pz, then generate G(z)
The discriminator maximizes:
Ex∼pdata [log D(x)] + Ez∼pz [log(1 − D(G(z)))]
Goal:
- D(x) → 1 for real data
- D(G(z)) → 0 for fake data
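In practice this maximization is usually implemented as gradient descent on a binary cross-entropy loss with label 1 for real samples and label 0 for fakes, which is the same objective up to sign. A sketch with made-up discriminator outputs (the helper name `bce` is our own):

```python
import numpy as np

def bce(pred, target, eps=1e-12):
    # Binary cross-entropy: minimizing with target=1 pushes pred toward 1,
    # with target=0 pushes pred toward 0.
    return -np.mean(target * np.log(pred + eps)
                    + (1 - target) * np.log(1.0 - pred + eps))

d_real = np.array([0.9, 0.8])   # D(x): should approach 1
d_fake = np.array([0.2, 0.1])   # D(G(z)): should approach 0
loss_D = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
# Descending on loss_D is ascending on E[log D(x)] + E[log(1 - D(G(z)))].
```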
Step 3: Train the Generator G
Sample noise z ∼ pz and generate fake data G(z). The generator minimizes:
Ez∼pz [log(1 − D(G(z)))]
Goal: fool the discriminator, i.e. make D(G(z)) → 1.
Practical improvement (non-saturating loss): early in training the discriminator easily rejects fakes, so D(G(z)) ≈ 0 and the gradient of log(1 − D(G(z))) nearly vanishes. In practice the generator instead maximizes:
max_G Ez∼pz [log D(G(z))]
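The benefit of the non-saturating variant shows up in the gradients with respect to p = D(G(z)) when the discriminator easily rejects fakes (p near 0):

```python
p = 1e-4  # D(G(z)) early in training: fakes are easily spotted

# Original (saturating) generator loss: minimize log(1 - p)
# d/dp log(1 - p) = -1 / (1 - p)  -> about -1: a weak learning signal
grad_saturating = -1.0 / (1.0 - p)

# Non-saturating loss: maximize log p, i.e. minimize -log p
# d/dp (-log p) = -1 / p  -> about -10000: a strong learning signal
grad_non_saturating = -1.0 / p
```

Both losses pull D(G(z)) toward 1, but only the second keeps a large gradient exactly where the generator is worst.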
Step 4: Repeat
Alternate:
- One step to update the discriminator
- One step to update the generator
Continue for many iterations.
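The alternating loop can be sketched end-to-end on a toy 1-D problem. Everything below is our own illustrative setup: a linear generator G(z) = a·z + b, a logistic discriminator D(x) = sigmoid(w·x + c), and hand-derived gradients; one ascent step on D's objective alternates with one ascent step on the non-saturating generator objective:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

mu = 3.0                 # real data distribution: N(mu, 1)
a, b = 1.0, 0.0          # generator parameters: G(z) = a*z + b
w, c = 0.0, 0.0          # discriminator parameters: D(x) = sigmoid(w*x + c)
lr, n_batch = 0.05, 64

for step in range(2000):
    x = rng.normal(mu, 1.0, n_batch)    # real data x ~ pdata
    z = rng.normal(0.0, 1.0, n_batch)   # noise z ~ pz

    # --- Discriminator step: ascend E[log D(x)] + E[log(1 - D(G(z)))] ---
    g = a * z + b
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    # d/dt log sigmoid(t) = 1 - sigmoid(t);  d/dt log(1 - sigmoid(t)) = -sigmoid(t)
    w += lr * (np.mean((1.0 - d_real) * x) - np.mean(d_fake * g))
    c += lr * (np.mean(1.0 - d_real) - np.mean(d_fake))

    # --- Generator step: ascend the non-saturating E[log D(G(z))] ---
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    # chain rule through g = a*z + b: d/dg log D(g) = (1 - D(g)) * w
    a += lr * np.mean((1.0 - d_fake) * w * z)
    b += lr * np.mean((1.0 - d_fake) * w)

# After training, the generator's output mean b should have drifted toward mu.
```

This is a didactic sketch, not a recipe: real GANs use deep networks, minibatch optimizers such as Adam, and automatic differentiation, but the alternating structure is the same.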
🏁 Goal of GAN Training:
• Generator creates realistic fake data
• Discriminator gets confused:
D(x) ≈ D(G(z)) ≈ 0.5
This means real and fake data are indistinguishable: the GAN has reached equilibrium.