AI Document


https://colab.research.google.com/drive/10xJYhBvEtITo7rVZ_f6Id3pv4yJ1ShhD?usp=sharing

This code builds the GAN block by block from various neural-network components. First, the dataset: Fashion-MNIST loads as 60,000 training images and 10,000 test images of 28 x 28 pixels, and the asserts document the shape (the format) of the data:

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
assert x_train.shape == (60000, 28, 28)  # shape of the data / format of the data
assert x_test.shape == (10000, 28, 28)
assert y_train.shape == (60000,)
assert y_test.shape == (10000,)
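For context, the raw Fashion-MNIST arrays hold 8-bit grayscale pixels in the range 0-255, which is why the training step later rescales them to [-1, 1]. A quick check (illustrative, not in the original notebook):

print(x_train.dtype)                  # uint8
print(x_train.min(), x_train.max())   # 0 255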

1) Importing Python Packages for GAN

from keras.datasets import fashion_mnist

from keras.models import Sequential
from keras.layers import BatchNormalization
from keras.layers import Dense, Reshape, Flatten
from keras.layers.advanced_activations import LeakyReLU
from tensorflow.keras.optimizers import Adam

import matplotlib.pyplot as plt  # needed later by save_imgs
import numpy as np
!mkdir generated_images

NumPy is used for holding numbers as matrices or vectors.

Press Shift+Enter to run the cell; the !mkdir command creates the generated_images folder for the output images.

2) Creating the variables for the neural network.

img_width = 28
img_height = 28
channels = 1
img_shape = (img_width, img_height, channels)
latent_dim = 100  # noise = latent: 100 random values passed into the generator to reconstruct the latent vector into an actual image
adam = Adam(lr=0.0001)  # Adam is a gradient-descent algorithm, a variant of stochastic gradient descent

The optimizer's job is to optimize the parameters with respect to the loss value, which is what allows the machine to learn.
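To make the latent vector concrete, here is a small illustrative snippet (not from the original notebook) showing what the generator will receive as input:

noise = np.random.normal(0, 1, (1, latent_dim))  # one latent vector of 100 random values
print(noise.shape)  # (1, 100)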
3) Building Generator: reconstruct the noise into an actual image.

def build_generator():
    model = Sequential()  # 99% of neural networks are sequential (a sequence of layers)
    model.add(Dense(512, input_dim=latent_dim))  # Dense layer holds the trainable parameters (more parameters = more patterns)
    model.add(LeakyReLU(alpha=0.2))  # LeakyReLU gives the best results here
    model.add(BatchNormalization(momentum=0.8))  # normalization gives better results
    model.add(Dense(np.prod(img_shape), activation='tanh'))  # np.prod squishes the image shape into one value: 784
    model.add(Reshape(img_shape))
    model.summary()
    return model

generator = build_generator()

28 x 28 x 1 = 784 output values, reshaped back into an image to make the model.
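The 784 figure is exactly what np.prod computes by collapsing the image shape into a single number; a quick check:

print(np.prod((28, 28, 1)))  # 784 = 28 * 28 * 1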

4) Building Discriminator

def build_discriminator():
    model = Sequential()
    model.add(Flatten(input_shape=img_shape))  # Flatten = collapse the image into one dimension
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(1, activation='sigmoid'))  # squash the output between 0 and 1: 0 = fake, 1 = real
    model.summary()
    return model

discriminator = build_discriminator()

Flatten = 784 inputs; total params = 533,505.

We have built two neural networks; next we need to connect them together.

5) Connecting the neural networks to build the GAN (the generator's output shape and the discriminator's input shape must match)

discriminator.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])  # accuracy metric so the training loop can report it

GAN = Sequential()
discriminator.trainable = False  # we are not going to train the discriminator through the combined model
GAN.add(generator)  # generator first, so noise flows generator -> discriminator
GAN.add(discriminator)
GAN.compile(loss='binary_crossentropy', optimizer='adam')

binary_crossentropy is the loss that gradient descent minimizes here.
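For intuition about what binary_crossentropy computes, here is a small NumPy sketch (illustrative only; Keras' built-in loss is what the model actually uses):

def bce(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(bce(np.array([1.0, 0.0]), np.array([0.9, 0.1])))  # ~0.105: confident, correct predictions -> low loss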

import PIL

save_name = 0.00000000

def save_imgs(epoch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, latent_dim))
    gen_imgs = generator.predict(noise)

    global save_name
    save_name += 0.00000001
    print("%.8f" % save_name)

    # rescale images from [-1, 1] back to [0, 1]
    gen_imgs = 0.5 * gen_imgs + 0.5

    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            # axs[i, j].imshow(gen_imgs[cnt])
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig("generated_images/%.8f.png" % save_name)
    print('saved')
    plt.close()
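The saved PNGs can later be stitched into the GIF mentioned in the training step below. A minimal sketch using Pillow (the PIL import above); the filename pattern is an assumption based on save_imgs:

import glob
from PIL import Image

frames = [Image.open(f) for f in sorted(glob.glob('generated_images/*.png'))]
frames[0].save('training.gif', save_all=True, append_images=frames[1:],
               duration=100, loop=0)  # 100 ms per frame, loop forever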

6) Training our GAN

def train(epochs, batch_size=64, save_interval=200):
    # rule of thumb from the notes: 1,000 images -> batch 16; 30,000 -> 32; 60,000 -> 64
    # save_interval=200 gives us frames to generate a GIF; the epochs variable drives the loop
    (X_train, _), (_, _) = fashion_mnist.load_data()
    print(X_train.shape)  # (60000, 28, 28) = (images, width, height)

    X_train = X_train / 127.5 - 1.  # normalise the data: 0 -> -1, 127.5 -> 0, 255 -> 1
    X_train = np.expand_dims(X_train, axis=3)  # add the channel axis so shapes match img_shape

    # loss value = predicted value - actual value
    valid = np.ones((batch_size, 1))   # vector full of ones (labels for real images)
    fakes = np.zeros((batch_size, 1))  # vector full of zeros (labels for generated images) to train our neural network

    for epoch in range(epochs):
        # random numbers from 0 to 60000; grab batch_size (64) of them
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        imgs = X_train[idx]  # 64 random real images loaded into this variable

        # grab 64 latent vectors for 64 noisy images
        noise = np.random.normal(0, 1, (batch_size, latent_dim))
        gen_imgs = generator.predict(noise)

        # train the discriminator: real images paired with valid, generated images with fakes,
        # so it learns to classify real images and fake images
        d_loss_real = discriminator.train_on_batch(imgs, valid)
        d_loss_fake = discriminator.train_on_batch(gen_imgs, fakes)
        d_loss = np.add(d_loss_real, d_loss_fake) * 0.5  # average

        # train the GAN with inverted y labels so the generator learns to produce high-quality images
        noise = np.random.normal(0, 1, (batch_size, latent_dim))
        g_loss = GAN.train_on_batch(noise, valid)

        print("********* %d [D loss: %f, acc: %.2f] [G loss: %f]" % (epoch, d_loss[0], d_loss[1] * 100, g_loss))

        if epoch % save_interval == 0:
            save_imgs(epoch)

train(50)  # quick test run before the full training below

train(30000, batch_size=64, save_interval=200)
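With epochs = 30,000 and save_interval = 200, save_imgs fires 30,000 / 200 = 150 times, so the GIF is assembled from 150 snapshot grids.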
https://colab.research.google.com/drive/114cbfMkf56RQRGEpO7FYmss-2_YKpENc?usp=sharing

1) Importing Python Packages for GAN

from keras.datasets import cifar10, mnist
from keras.models import Sequential
from keras.layers import Reshape
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Conv2D
from keras.layers import Conv2DTranspose
from keras.layers import Dropout
from keras.layers.advanced_activations import LeakyReLU
from tensorflow.keras.optimizers import Adam
import numpy as np
!mkdir generated_images

2) Parameters for Neural Networks & Data


img_width = 32
img_height = 32
channels = 3  # RGB
img_shape = (img_width, img_height, channels)
latent_dim = 100
adam = Adam(lr=0.0002)

3) Building Generator
def build_generator():
    model = Sequential()
    # latent_dim is the noise; 4x4 is the initial shape we are going to reconstruct;
    # 256*4*4 sets how many parameters you want in the output shape
    model.add(Dense(256 * 4 * 4, input_dim=latent_dim))
    model.add(LeakyReLU(alpha=0.2))  # activation function
    model.add(Reshape((4, 4, 256)))

    # transposed convolutional layers (strided convolutions "understand" images)
    # upsample the image: 4x4 -> 8x8 -> 16x16 -> 32x32
    model.add(Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Conv2D(3, (3, 3), activation='tanh', padding='same'))  # tanh gives the best results
    model.summary()

    return model

generator = build_generator()
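A quick sanity check (illustrative, not from the notebook) that the three stride-2 transposed convolutions upsample 4x4 all the way to 32x32:

noise = np.random.normal(0, 1, (1, latent_dim))
img = generator.predict(noise)
print(img.shape)  # expected: (1, 32, 32, 3)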

4) Building Discriminator (classifies images; architecture inspired by AlexNet)
def build_discriminator():
    model = Sequential()
    model.add(Conv2D(64, (3, 3), padding='same', input_shape=img_shape))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Conv2D(128, (3, 3), padding='same'))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Conv2D(128, (3, 3), padding='same'))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Conv2D(256, (3, 3), padding='same'))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Flatten())
    model.add(Dropout(0.4))  # drop 40% of the neurons and keep 60% to pass into the final layer (increases stabilization, decreases overfitting)
    model.add(Dense(1, activation='sigmoid'))

    model.summary()
    return model

discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
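And a matching check (again illustrative) that the discriminator squashes any 32x32x3 image down to a single real/fake score:

noise = np.random.normal(0, 1, (1, latent_dim))
fake_img = generator.predict(noise)
score = discriminator.predict(fake_img)
print(score.shape, float(score[0, 0]))  # (1, 1), a value in [0, 1]: 0 = fake, 1 = real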
