1-GAN Mnist.ipynb
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Reshape, Conv2DTranspose, Conv2D, LeakyReLU, Flatten
from tensorflow.keras.optimizers import Adam
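The data-loading cell is not visible in this export; below is a minimal sketch of one way to obtain x_train with the fetch_openml import above. The OpenML dataset name 'mnist_784' is an assumption (the assignment text also mentions Fashion MNIST, for which the analogous OpenML dataset would be used); pixels are scaled to [-1, 1] to match the generator's tanh output.
# Load MNIST from OpenML (assumed dataset name; Fashion-MNIST would be analogous)
mnist = fetch_openml('mnist_784', as_frame=False)
x_train = mnist.data.astype(np.float32)

# Scale pixels from [0, 255] to [-1, 1] to match the generator's tanh output
x_train = (x_train - 127.5) / 127.5

# Reshape flat 784-pixel rows into (28, 28, 1) images for the convolutional discriminator
x_train = x_train.reshape(-1, 28, 28, 1)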
2. Model Architecture
Generator Network: The generator takes a random noise vector and generates an image similar to Fashion MNIST.
- Input Layer: Latent vector (random noise) of size 100.
- Dense Layer: Fully connected layer to project the latent vector to a larger dimension (e.g., 7x7x128).
- Reshape Layer: Reshape the output to a 7x7x128 array.
- Transposed Convolutional Layers:
  - Layer 1: 128 filters, kernel size 3x3, strides of 2, ReLU activation, upsample to (14, 14, 128).
  - Layer 2: 64 filters, kernel size 3x3, strides of 2, ReLU activation, upsample to (28, 28, 64).
- Output Layer: Transposed convolution to produce a (28, 28, 1) image, with tanh activation for values in range [-1, 1].
generator = Sequential([
    Dense(7*7*128, input_dim=100),          # project 100-dim noise to 7*7*128 units
    Reshape((7, 7, 128)),                   # reshape to a 7x7 feature map with 128 channels
    Conv2DTranspose(128, kernel_size=3, strides=2, padding='same', activation='relu'),  # upsample to 14x14
    Conv2DTranspose(64, kernel_size=3, strides=2, padding='same', activation='relu'),   # upsample to 28x28
    Conv2DTranspose(1, kernel_size=3, padding='same', activation='tanh')                # final 28x28x1 image in [-1, 1]
])
Discriminator Network: The discriminator takes an image (real or generated) and outputs a probability of whether the image is real or fake.
- Input Layer: Image of shape (28, 28, 1).
- Convolutional Layers:
  - Layer 1: 64 filters, kernel size 3x3, Leaky ReLU activation (slope 0.2).
  - Layer 2: 128 filters, kernel size 3x3, Leaky ReLU activation (slope 0.2).
- Pooling: Optional max pooling or strided convolution for down-sampling.
- Fully Connected Layer: Dense layer with a single output node.
- Output Layer: Sigmoid activation to output a probability score.
discriminator = Sequential([
    Conv2D(64, kernel_size=3, strides=2, padding='same', input_shape=(28, 28, 1)),  # downsample to 14x14
    LeakyReLU(alpha=0.2),
    Conv2D(128, kernel_size=3, strides=2, padding='same'),                           # downsample to 7x7
    LeakyReLU(alpha=0.2),
    Flatten(),
    Dense(1, activation='sigmoid')                                                   # probability that the input is real
])
3. Training Parameters
- Optimizers: Use separate Adam optimizers for the generator and discriminator with learning rate 0.0002 and beta_1 of 0.5.
- Loss Function: Binary cross-entropy for both generator and discriminator loss.
- Epochs: Train for 100 epochs.
- Batch Size: Set batch size to 64 for stable training.
- Training Process: For each batch, train the discriminator on real and fake images, then train the generator via the discriminator's gradients to improve its ability to generate realistic images (a compilation sketch follows below).
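The compilation cell is not shown in this export; the following is a minimal sketch of how the discriminator and the combined gan model used in the training loop below could be wired up with the parameters above. The name gan and the stacked-model construction are assumptions, not taken from the original notebook.
# Compile the discriminator with its own Adam optimizer
discriminator.compile(optimizer=Adam(learning_rate=0.0002, beta_1=0.5),
                      loss='binary_crossentropy')

# Combined model: noise -> generator -> (frozen) discriminator -> real/fake probability.
# Freezing the discriminator here only affects the combined model's training step.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(optimizer=Adam(learning_rate=0.0002, beta_1=0.5),
            loss='binary_crossentropy')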
4. Performance Metrics
- Discriminator Loss: Track the discriminator loss for both real and fake images.
- Generator Loss: Track the generator loss during training.
- Inception Score (optional): Use this advanced metric, if available, to assess generated image quality.
5. Evaluation
- Generated Samples: Generate and visualize a grid of images at various training epochs (e.g., every 10 epochs); a plotting sketch follows the training loop below.
discriminator_losses = []
generator_losses = []
batch_size = 10

for epoch in range(10):
    for batch in range(len(x_train) // batch_size):
        # Slice a batch of real images; x_train is assumed to have shape (n_samples, 28, 28, 1)
        real_images = x_train[batch * batch_size: (batch + 1) * batch_size]

        # Generate a batch of fake images from random noise
        noise = np.random.normal(0, 1, (batch_size, 100))
        fake_images = generator.predict(noise, verbose=0)

        # Train the discriminator on real (label 1) and fake (label 0) images
        d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
        d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
        d_loss = 0.5 * (d_loss_real + d_loss_fake)

        # Train the generator through the combined model, asking the (frozen)
        # discriminator to classify generated images as real
        noise = np.random.normal(0, 1, (batch_size, 100))
        g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))

        # Append losses
        discriminator_losses.append(d_loss)
        generator_losses.append(g_loss)
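A minimal sketch of the sample-grid visualization referred to in the evaluation step above; the 4x4 grid size and the single post-training call are illustrative choices, not taken from the original notebook (in practice this would be run every few epochs).
# Generate a 4x4 grid of samples from the trained generator
noise = np.random.normal(0, 1, (16, 100))
samples = generator.predict(noise, verbose=0)

fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for img, ax in zip(samples, axes.flat):
    # Rescale from [-1, 1] back to [0, 1] for display
    ax.imshow((img.squeeze() + 1) / 2, cmap='gray')
    ax.axis('off')
plt.tight_layout()
plt.show()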
Training Curves: Plot generator and discriminator losses over epochs for performance analysis.
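A short sketch of the loss-curve plot, using the per-batch lists collected in the training loop above (plotting per batch rather than per epoch is an assumption).
# Plot the per-batch discriminator and generator losses collected during training
plt.figure(figsize=(8, 4))
plt.plot(discriminator_losses, label='Discriminator loss')
plt.plot(generator_losses, label='Generator loss')
plt.xlabel('Training batch')
plt.ylabel('Binary cross-entropy loss')
plt.legend()
plt.show()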
6. Result Interpretation
- Image Quality: Analyze the visual quality and diversity of the generated images over training epochs.
- Model Behavior: Discuss notable trends in the loss functions, such as mode collapse (where the generator repeatedly produces very similar images). Address whether the discriminator is too strong, causing the generator to struggle to produce realistic images.

Submission Requirements
- Code: Submit a well-documented Jupyter Notebook with all code, including comments explaining each step.
- Report: Describe the model architecture and the reasons for the chosen hyperparameters. Analyze the training results, mentioning any observations or issues encountered.
- Plots and Visualizations: Ensure all required plots, including generated images, loss curves, and any additional evaluations, are present in the notebook and report.

This assignment will help you understand GAN architecture, train models to generate synthetic data, and interpret GAN performance metrics and outcomes.