Lab Manual_R20A6683 Deep Learning_Year-IV_Semester-I
DEEP LEARNING
LAB MANUAL
B. TECH
Vision
To be a premier center for academic excellence and research through innovative interdisciplinary collaborations, making significant contributions to the community, organizations, and society as a whole.
Mission
To impart cutting-edge Artificial Intelligence technology in accordance with
industry norms.
To instill in students a desire to conduct research in order to tackle challenging
technical problems for industry.
To develop effective graduates who take responsibility for their professional growth, demonstrate leadership qualities, and are committed to lifelong learning.
Quality Policy
Program Educational Objectives (PEOs)
PEO1: To possess knowledge and analytical abilities in areas such as Maths, Science, and fundamental engineering.
PEO2: To analyse, design, create products, and provide solutions to problems in Computer Science and Engineering.
PEO3: To leverage the professional expertise to enter the workforce, seek higher education, and conduct research on AI-based problem resolution.
PEO4: To be solution providers and business owners in the field of computer science and
engineering with an emphasis on artificial intelligence and machine learning.
Program Specific Outcomes (PSOs)
After successful completion of the program, a student is expected to have specific abilities to:
PSO1: To understand and examine the fundamental issues with AI and ML applications.
PSO2: To apply machine learning, deep learning, and artificial intelligence approaches to address
issues in social computing, healthcare, vision, language processing, speech recognition, and other
domains.
PSO3: To use cutting-edge AI and ML tools and technologies for further study and research.
Department of Computer Science & Engineering
(Artificial Intelligence & Machine Learning)
Lab Objectives:
To introduce the basic concepts and techniques of Deep Learning and the need of Deep Learning techniques in real-world problems.
To provide understanding of various Deep Learning algorithms and the way to evaluate
performance of the Deep Learning algorithms.
To apply Deep Learning to learn, predict and classify the real-world problems.
To understand, learn and design Artificial Neural Networks of Supervised Learning for the selected problems and vary the different parameters.
To understand the concept of CNN, RNN, GANs, Auto-encoders.
Lab Outcomes:
Upon successful completion of this course, the students will be able to:
Understand the basic concepts and techniques of Deep Learning and the need of Deep
Learning techniques in real-world problems.
Understand CNN algorithms and the way to evaluate performance of the CNN architectures.
Apply RNN and LSTM to learn, predict and classify the real-world problems in the
paradigms of Deep Learning.
Guidelines to students
A. Standard operating procedure
a) Explanation on today’s experiment by the concerned faculty using PPT covering
the following aspects:
1) Name of the experiment
2) Aim
3) Software/Hardware requirements
4) Writing the python programs by the students
5) Commands for executing programs
Writing of the experiment in the Observation Book
The students will write the day's experiment in the Observation book as per the following format:
a) Name of the experiment
b) Aim
c) Writing the program
d) Viva-Voce Questions and Answers
e) Errors observed (if any) during compilation/execution
Students are required to carry their lab observation book and record book with
completed experiments while entering the lab.
Students must use the equipment with care; any damage caused makes the student liable to be penalized.
Students are not allowed to use their cell phones/pen drives/ CDs in labs.
Students need to maintain proper dress code along with ID Card.
Students are supposed to occupy the computers allotted to them and are not supposed to talk or make noise in the lab.
After completion of each experiment, students need to update their observation notes and the same is to be updated in the record.
Lab records need to be submitted after completion of each experiment and corrected by the concerned lab faculty.
If a student is absent for any lab, they need to complete the same experiment in their free time before attending the next lab.
Steps to perform experiments in the lab by the student
Step 1: Students have to write the date and aim of that experiment in the observation book.
Step 2: Students have to listen and understand the experiment explained by the faculty and
note down the important points in the observation book.
Step 3: Students need to write procedure/algorithm in the observation book.
Step 4: Analyze and develop/implement the logic of the program by the student in the respective platform.
Step 5: After approval of the logic of the experiment by the faculty, the experiment has to be executed on the system.
Step 6: After successful execution, the results are to be shown to the faculty and the same noted in the observation book.
Step 7: Students need to attend the Viva-Voce on that experiment and write the same in the observation book.
Step 8: Update the completed experiment in the record and submit to the
concerned faculty in-charge.
Regularity 3 Marks
Program written 3 Marks
Execution & Result 3 Marks
Viva-Voce 3 Marks
Dress Code 3 Marks
Viva-Voce 10 Marks
Record 10 Marks
INDEX
1. Design a single unit perceptron for classification of a linearly separable binary dataset using the Perceptron() from sklearn; classify OR-, AND- and XOR-ed data.
2. Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets. Vary the activation functions used and compare the results.
3. Build a Deep Feed Forward ANN by implementing the Backpropagation algorithm with 4 or more hidden layers.
4. Design and implement an image classification model using a Deep Feed Forward NN on the MNIST dataset. Record the accuracy corresponding to the number of epochs.
5. Design and implement a CNN model (with 2 layers of convolutions) to classify the MNIST and CIFAR-10 datasets.
6. Design and implement a CNN model (with 4+ layers of convolutions) to classify the Fashion MNIST dataset; record the run time on CPU and on GPU in Colab.
7. Design and implement a CNN model using padding and Batch Normalization on the Fashion MNIST dataset.
8. Design and implement a CNN model (with 4+ layers of convolutions) using regularization and dropout on the Fashion MNIST dataset.
9. Use the concept of Data Augmentation to increase the data size from a single image.
10. Design and implement a CNN model to classify the CIFAR10 image dataset. Use the concept of Data Augmentation while designing the CNN model. Record the accuracy corresponding to the number of epochs.
11. Implement the standard LeNet CNN architecture model to classify the MNIST dataset and check the accuracy.
12. Implement the standard VGG-16 & 19 CNN architecture model to classify a multi category image dataset and check the accuracy.
13. Implement RNN for sentiment analysis on movie reviews.
14. Implement Bi-directional LSTM for sentiment analysis on movie reviews.
15. Implement Generative Adversarial Networks to generate realistic images using the MNIST dataset.
16. Implement Auto encoders for image denoising on MNIST, Fashion MNIST or any suitable dataset.
Week-1
a. Design a single unit perceptron for classification of a linearly separable binary dataset (placement.csv), without building the model from scratch: use the Perceptron() from sklearn.
Program
# Single unit perceptron
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import Perceptron
df=pd.read_csv('/content/gdrive/My Drive/ML_lab/placement.csv')
X = df.iloc[:,0:2]
y = df.iloc[:,-1]
p = Perceptron()
p.fit(X,y)
print(p.coef_)
print(p.intercept_)
z=p.score(X,y)
print("accuracy score is",z)
from mlxtend.plotting import plot_decision_regions
plot_decision_regions(X.values, y.values, clf=p, legend=2)
OUTPUT:
b. Design single unit perceptrons for classification of OR-, AND- and XOR-ed data using the Perceptron() from sklearn, and analyze the results.
Program
# Perceptron on OR-, AND- and XOR-ed data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import Perceptron
or_data = pd.DataFrame()
and_data = pd.DataFrame()
xor_data = pd.DataFrame()
or_data['input1'] = [1,1,0,0]
or_data['input2'] = [1,0,1,0]
or_data['output'] = [1,1,1,0]
and_data['input1'] = [1,1,0,0]
and_data['input2'] = [1,0,1,0]
and_data['output'] = [1,0,0,0]
xor_data['input1'] = [1,1,0,0]
xor_data['input2'] = [1,0,1,0]
xor_data['output'] = [0,1,1,0]
# Fit a perceptron on each dataset and report the accuracy
# (XOR is not linearly separable, so its accuracy stays below 1.0)
for name, data in [('OR', or_data), ('AND', and_data), ('XOR', xor_data)]:
    p = Perceptron()
    p.fit(data[['input1','input2']], data['output'])
    print(name, 'accuracy:', p.score(data[['input1','input2']], data['output']))
OUTPUT:
Exercise
Identify the problem with single unit Perceptron. Classify using Not- and XNOR-ed data and analyze the result.
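For reference, a minimal sketch of this exercise (assuming sklearn is installed): the NOT gate takes a single input and is linearly separable, while XNOR, like XOR, is not, so a single unit perceptron cannot classify it perfectly.
# Sketch: perceptron on NOT- and XNOR-ed data
import numpy as np
from sklearn.linear_model import Perceptron
# NOT gate: one input, linearly separable
X_not = np.array([[0],[1]])
y_not = np.array([1,0])
p_not = Perceptron().fit(X_not, y_not)
print("NOT accuracy:", p_not.score(X_not, y_not))
# XNOR gate: not linearly separable, so accuracy stays below 1.0
X_xnor = np.array([[0,0],[0,1],[1,0],[1,1]])
y_xnor = np.array([1,0,0,1])
p_xnor = Perceptron().fit(X_xnor, y_xnor)
print("XNOR accuracy:", p_xnor.score(X_xnor, y_xnor))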
Week-2
Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same
using appropriate data sets. Vary the activation functions used and compare the results.
Program:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
import pandas as pd
from sklearn import datasets
X, y = datasets.load_iris(return_X_y=True)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40)
# Define the network model and its arguments.
# Set the number of neurons/nodes for each layer:
model = Sequential()
model.add(Dense(2, input_shape=(4,)))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
#sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
#model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# Compile the model and calculate its accuracy:
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
#model.fit(X_train, y_train, batch_size=32, epochs=3)
# Print a summary of the Keras model:
model.summary()
#model.fit(X_train, y_train)
#model.fit(X_train, y_train, batch_size=32, epochs=300)
model.fit(X_train, y_train, epochs=5)
score = model.evaluate(X_test, y_test)
print(score)
OUTPUT:
Exercise:
Note down the accuracies for the following set of experiments on the given NN and compare the results.
Make the required modifications. Take the training data percentage as 30% and the test data percentage as 70%.
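As a starting point for these experiments, the sketch below (one reasonable setup, assuming the same Iris data as above) loops over a few activation functions with a 30%/70% train/test split and records the test accuracy of each:
# Sketch: compare activation functions on the same network
from keras.models import Sequential
from keras.layers import Dense
from sklearn import datasets
from sklearn.model_selection import train_test_split
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.70)
for act in ['sigmoid', 'tanh', 'relu']:
    model = Sequential()
    model.add(Dense(8, input_shape=(4,), activation=act))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=5, verbose=0)
    loss, acc = model.evaluate(X_test, y_test, verbose=0)
    print(act, 'test accuracy:', acc)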
Week-3
Build a Deep Feed Forward ANN by implementing the Backpropagation algorithm and test the same using appropriate data sets. Use the number of hidden layers >= 4.
Program:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
import pandas as pd
from sklearn import datasets
iris = datasets.load_iris()
X, y = datasets.load_iris(return_X_y = True)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40)
# Define the network model and its arguments.
# Set the number of neurons/nodes for each layer:
model = Sequential()
# four hidden layers make this a deep feed-forward network
model.add(Dense(2, input_shape=(4,)))
model.add(Activation('sigmoid'))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
# Compile the model, train it and report the test score:
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
model.summary()
model.fit(X_train, y_train, epochs=5)
score = model.evaluate(X_test, y_test)
print(score)
OUTPUT:
Exercise:
Modify the above NN model to run on the Ionosphere dataset with the number of hidden layers >= 4. Take the training data percentage as 30% and the test data percentage as 70%, number of epochs = 100, activation function ReLU, optimizer Adam.
Week-4
Design and implement an Image classification model to classify a dataset of images using Deep Feed Forward
NN. Record the accuracy corresponding to the number of epochs. Use the MNIST datasets.
Program
#load required packages
import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential
from keras import Input
from keras.layers import Dense
import pandas as pd
import numpy as np
import sklearn
from sklearn.metrics import classification_report
import matplotlib
import matplotlib.pyplot as plt
# Load the MNIST dataset
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
# Print shapes
print("Shape of X_train: ", X_train.shape)
print("Shape of y_train: ", y_train.shape)
print("Shape of X_test: ", X_test.shape)
print("Shape of y_test: ", y_test.shape)
# Display images of the first 10 digits in the training set and their true labels
fig, axs = plt.subplots(2, 5, sharey=False, tight_layout=True, figsize=(12,6), facecolor='white')
n=0
for i in range(0,2):
    for j in range(0,5):
        axs[i,j].matshow(X_train[n])
        axs[i,j].set(title=y_train[n])
        n=n+1
plt.show()
# Flatten the 28x28 images into vectors of 784 features and scale them to [0, 1]
X_train = X_train.reshape(X_train.shape[0], 784).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 784).astype('float32') / 255
# Print shapes
print("New shape of X_train: ", X_train.shape)
print("New shape of X_test: ", X_test.shape)
# Define, compile and train the model (the hidden-layer sizes here are one reasonable choice)
model_d1 = Sequential(name="DFF-Model")
model_d1.add(Input(shape=(784,)))
model_d1.add(Dense(128, activation='relu'))
model_d1.add(Dense(128, activation='relu'))
model_d1.add(Dense(10, activation='softmax'))
model_d1.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model_d1.fit(X_train, y_train, batch_size=32, epochs=5)
# Printing the parameters: this Deep Feed Forward Neural Network contains more than 100K parameters
#print(' Weights and Biases ')
#for layer in model_d1.layers:
#    print("Layer: ", layer.name) # print layer name
#    print(" --Kernels (Weights): ", layer.get_weights()[0]) # kernels (weights)
#    print(" --Biases: ", layer.get_weights()[1]) # biases
# Predicted class labels on the training data
pred_labels_tr = np.argmax(model_d1.predict(X_train), axis=1)
print("")
print('---------- Evaluation on Training Data ----------- ')
print(classification_report(y_train, pred_labels_tr))
print("")
OUTPUT:
Exercise:
Design and implement an Image classification model to classify a dataset of images using Deep Feed Forward
NN. Record the accuracy corresponding to the number of epochs 5, 50. Use the CIFAR10/Fashion MNIST
datasets. [You can use CIFAR10 available in keras package]. Make the necessary changes whenever required.
Below note down only the changes made and the accuracies obtained.
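As a hint, the main data-loading changes for CIFAR10 would look roughly like this (a sketch; CIFAR10 images are 32x32 colour images, so the flattened input has 32*32*3 = 3072 features instead of 784):
# Sketch of the data-loading changes for CIFAR10
from keras.datasets import cifar10
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.reshape(-1, 32*32*3).astype('float32') / 255
X_test = X_test.reshape(-1, 32*32*3).astype('float32') / 255
# the first layer of the network must then take input of shape (3072,)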
Week-5
Design and implement a CNN model (with 2 layers of convolutions) to classify multi category image datasets.
Record the accuracy corresponding to the number of epochs. Use the MNIST, CIFAR-10 datasets.
Program
import keras
from keras.datasets import mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
(train_X,train_Y), (test_X,test_Y) = mnist.load_data()
train_X = train_X.reshape(-1, 28,28, 1)
test_X = test_X.reshape(-1, 28,28, 1)
train_X.shape
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(64, (3,3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
plt.imshow(test_X[0].reshape(28, 28), cmap = plt.cm.binary)
plt.show()
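Since the task asks to record the accuracy against the number of epochs, one convenient approach (a sketch; fit() returns a History object holding the per-epoch metrics) is:
# Sketch: record and plot accuracy per epoch using the History object
history = model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10,
                    validation_data=(test_X, test_Y_one_hot))
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='test accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()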
OUTPUT:
Exercise:
Design and implement a CNN model (with 2 layers of convolutions) to classify multi category image datasets.
Record the accuracy corresponding to the number of epochs 10, 100. Use the CIFAR10/Fashion MNIST datasets.
Make the necessary changes whenever required. Below note down only the changes made and the accuracies
obtained.
Week-6
Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets.
Record the accuracy corresponding to the number of epochs. Use the Fashion MNIST datasets. Record the time
required to run the program, using CPU as well as using GPU in Colab.
Program-
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load the Fashion MNIST dataset and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
# four convolutional layers, as the task requires (the filter counts are one reasonable choice)
model.add(Conv2D(128, (3,3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3)))
model.add(Activation('relu'))
model.add(Conv2D(28, (3,3)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
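To record the time required on CPU and on GPU in Colab, one simple approach (a sketch) is to time the training call once on a CPU runtime and once on a GPU runtime (Runtime -> Change runtime type):
# Sketch: measure training time; run once per runtime type
import time
import tensorflow as tf
print('GPUs visible:', tf.config.list_physical_devices('GPU'))
start = time.time()
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
print('Training time (seconds):', time.time() - start)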
OUTPUT:
Exercise:
Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets.
Use the MNIST/ CIFAR-10 datasets. Set the No. of Epoch as 5, 10 and 20. Make the necessary changes whenever
required. Record the accuracy corresponding to the number of epochs. Record the time required to run the
program, using CPU as well as using GPU in Colab. Below note down only the changes made and the accuracies
obtained.
Week-7
Design and implement a CNN model (with 2+ layers of convolutions) to classify multi category image datasets.
Use the concept of padding and Batch Normalization while designing the CNN model. Record the accuracy
corresponding to the number of epochs. Use the Fashion MNIST datasets.
Program
# Batch-Normalization and padding
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load the Fashion MNIST dataset and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(128, (3,3), padding='same', input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), padding='same'))
model.add(Conv2D(28, (3,3)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), padding='same'))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
OUTPUT:
Exercise:
Design and implement a CNN model (with 2+ layers of convolutions) to classify multi category image datasets.
Use the concept of Batch-Normalization and padding while designing the CNN model. Record the accuracy
corresponding to the number of epochs 5, 25, 225. Make the necessary changes whenever required. Use the
MNIST/CIFAR-10 datasets. Below note down only the changes made and the accuracies obtained.
Week-8
Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets. Use
the concept of regularization and dropout while designing the CNN model. Use the Fashion MNIST datasets.
Record the Training accuracy and Test accuracy corresponding to the following architectures:
a. Base Model
b. Model with L1 Regularization
c. Model with L2 Regularization
d. Model with Dropout
e. Model with both L2 (or L1) and Dropout
Program
a. Base Model: use the program from (b) with the kernel_regularizer=l1(0.01) arguments commented out. See the program below for reference.
b.
# L1 Regularizer
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.regularizers import l1
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load the Fashion MNIST dataset and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(128, (3,3), input_shape=(28, 28, 1), kernel_regularizer=l1(0.01)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(28, (3,3)))  # optionally regularize this layer too: kernel_regularizer=l1(0.01)
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
c.
# L2 regularizer
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.regularizers import l2
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load the Fashion MNIST dataset and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(256,(3,3),input_shape=(28,28,1), kernel_regularizer=l2(0.01)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(128, (3,3)))  # optionally regularize this layer too: kernel_regularizer=l2(0.01)
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(28, (3,3)))  # optionally regularize this layer too: kernel_regularizer=l2(0.01)
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
d.
#Dropout
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, Dropout
from keras.models import Sequential
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load the Fashion MNIST dataset and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(128, (3,3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.20))
model.add(Conv2D(28, (3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.20))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
e.
# L2 regularization combined with Dropout
import keras
from keras.datasets import fashion_mnist
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, Dropout
from keras.models import Sequential
from keras.regularizers import l2
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load the Fashion MNIST dataset and add the channel dimension
(train_X, train_Y), (test_X, test_Y) = fashion_mnist.load_data()
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
model = Sequential()
model.add(Conv2D(128, (3,3), input_shape=(28, 28, 1), kernel_regularizer=l2(0.01)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.20))
model.add(Conv2D(28, (3,3), kernel_regularizer=l2(0.01)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.20))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_X, train_Y_one_hot, batch_size=64, epochs=10)
test_loss, test_acc = model.evaluate(test_X, test_Y_one_hot)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
predictions = model.predict(test_X)
print(np.argmax(np.round(predictions[0])))
Exercise:
Design and implement a CNN model (with 4+ layers of convolutions) to classify multi category image datasets. Use
the concept of regularization and dropout while designing the CNN model. Use the MNIST dataset. Modify the
program as and when needed. Record the Training accuracy and Test accuracy corresponding to the following
architectures:
a. Base Model
b. Model with both L2 (or L1) and Dropout
Week-9
Use the concept of data augmentation to increase the data size from a single image.
Program-
#data augmentation on a single image
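The listing below is a minimal sketch of the idea, assuming an image file named 'bird.jpg' is available (substitute any image of your choice):
from numpy import expand_dims
from keras.preprocessing.image import load_img, img_to_array, ImageDataGenerator
import matplotlib.pyplot as plt
img = load_img('bird.jpg')                     # load the single source image
data = expand_dims(img_to_array(img), 0)       # shape (1, height, width, 3)
datagen = ImageDataGenerator(rotation_range=90, horizontal_flip=True,
                             width_shift_range=0.2, height_shift_range=0.2)
it = datagen.flow(data, batch_size=1)          # generator of augmented variants
plt.figure(figsize=(8,8))
for i in range(9):                             # draw nine augmented samples
    plt.subplot(3, 3, i+1)
    plt.imshow(next(it)[0].astype('uint8'))
    plt.axis('off')
plt.show()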
OUTPUT:
Exercise:
Use the concept of data augmentation to increase the data size from a single image. Use any random image of your
choice. Apply variations of ImageDataGenerator () function on arguments height_shift_range=0.5,
horizontal_flip=True, rotation_range=90, brightness_range=[0.2,1.0], zoom_range=[0.5,1.0] etc. and analyze the
output images.
Week-10
Design and implement a CNN model to classify CIFAR10 image dataset. Use the concept of Data Augmentation
while designing the CNN model. Record the accuracy corresponding to the number of epochs.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
# The data, shuffled and split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
num_classes = 10
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# a small CNN for CIFAR10 (the layer sizes here are one reasonable choice)
model_1 = Sequential()
model_1.add(Conv2D(32, (3,3), padding='same', activation='relu', input_shape=(32, 32, 3)))
model_1.add(MaxPooling2D(pool_size=(2,2)))
model_1.add(Conv2D(64, (3,3), padding='same', activation='relu'))
model_1.add(MaxPooling2D(pool_size=(2,2)))
model_1.add(Flatten())
model_1.add(Dropout(0.5))
model_1.add(Dense(512, activation='relu'))
model_1.add(Dense(num_classes, activation='softmax'))
model_1.summary()
batch_size = 32
datagen = ImageDataGenerator(
    featurewise_center=False,             # set input mean to 0 over the dataset
    samplewise_center=False,              # set each sample mean to 0
    featurewise_std_normalization=False,  # divide inputs by std of the dataset
    samplewise_std_normalization=False,   # divide each input by its std
    zca_whitening=False,                  # apply ZCA whitening
    rotation_range=0,                     # randomly rotate images in the range (degrees, 0 to 180)
    width_shift_range=0.1,                # randomly shift images horizontally (fraction of total width)
    height_shift_range=0.1,               # randomly shift images vertically (fraction of total height)
    horizontal_flip=True,                 # randomly flip images horizontally
    vertical_flip=False)                  # do not flip images vertically
datagen.fit(x_train)
model_1.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# train on batches drawn from the augmentation generator
model_1.fit(datagen.flow(x_train, y_train, batch_size=batch_size), epochs=10, validation_data=(x_test, y_test))
OUTPUT:
Exercise:
Can you make the above model do better on the same dataset? Can you make it do worse? Experiment with
different settings of the data augmentation while designing the CNN model. Record the accuracy mentioning the
modified settings of data augmentation.
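For example, one variation to try (a hypothetical setting with stronger geometric and brightness jitter; whether it helps or hurts depends on the model and the training budget) is:
# Sketch: a stronger augmentation setting to compare against the baseline
datagen = ImageDataGenerator(rotation_range=30,
                             width_shift_range=0.2,
                             height_shift_range=0.2,
                             brightness_range=[0.5, 1.5],
                             horizontal_flip=True)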
Week-11
Implement the standard LeNet CNN architecture model to classify multi category image dataset (MNIST) and
check the accuracy.
Program-
# LeNet
import tensorflow as tf
from tensorflow import keras
import numpy as np
(train_x, train_y), (test_x, test_y) = keras.datasets.mnist.load_data()
train_x = train_x / 255.0
test_x = test_x / 255.0
train_x = tf.expand_dims(train_x, 3)
test_x = tf.expand_dims(test_x, 3)
val_x = train_x[:5000]
val_y = train_y[:5000]
lenet_5_model = keras.models.Sequential([
    keras.layers.Conv2D(6, kernel_size=5, strides=1, activation='tanh', input_shape=train_x[0].shape, padding='same'),  #C1
    keras.layers.AveragePooling2D(),  #S2
    keras.layers.Conv2D(16, kernel_size=5, strides=1, activation='tanh', padding='valid'),  #C3
    keras.layers.AveragePooling2D(),  #S4
    keras.layers.Conv2D(120, kernel_size=5, strides=1, activation='tanh', padding='valid'),  #C5
    keras.layers.Flatten(),  #Flatten
    keras.layers.Dense(84, activation='tanh'),  #F6
    keras.layers.Dense(10, activation='softmax')  #Output layer
])
lenet_5_model.compile(optimizer='adam', loss=keras.losses.sparse_categorical_crossentropy, metrics=['accuracy'])
lenet_5_model.fit(train_x, train_y, epochs=5, validation_data=(val_x, val_y))
lenet_5_model.evaluate(test_x, test_y)
OUTPUT:
Exercise:
Implement the standard LeNet CNN architecture model to classify multi category image dataset (Fashion
MNIST) and check the accuracy. Below note down only the changes made and the accuracies obtained for epochs
5, 50, 250.
Week-12
Implement the standard VGG 16 CNN architecture model to classify cat and dog image dataset and check the
accuracy.
Program-
# VGG16
import keras,os
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten
from keras.preprocessing.image import ImageDataGenerator
import numpy as np
trdata = ImageDataGenerator()
traindata = trdata.flow_from_directory(directory="/content/gdrive/My Drive/training_set", target_size=(224,224))
tsdata = ImageDataGenerator()
testdata = tsdata.flow_from_directory(directory="/content/gdrive/My Drive/test_set", target_size=(224,224))
model = Sequential()
model.add(Conv2D(input_shape=(224,224,3), filters=64, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=64,kernel_size=(3,3),padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Flatten())
model.add(Dense(units=4096,activation="relu"))
model.add(Dense(units=4096,activation="relu"))
model.add(Dense(units=2, activation="softmax"))
# Compile and train (training VGG-16 from scratch is slow; a small number of epochs is used here)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(traindata, epochs=5, validation_data=testdata)
OUTPUT:
Exercise:
Implement the standard VGG 19 CNN architecture model to classify cat and dog image dataset and check the
accuracy. Make the necessary changes whenever required.
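Hint: VGG-19 differs from VGG-16 only in carrying a fourth convolution in each of the last three blocks. A sketch of one such modified block (block 3; blocks 4 and 5 change the same way with 512 filters):
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))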
Week-13
Implement RNN for sentiment analysis on movie reviews. Use the concept of Embedding layer.
Program-
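The listing below picks up at model.summary(); the lines that follow are a minimal sketch of the data loading and model definition (assuming the IMDB movie-review dataset from keras and a SimpleRNN over an Embedding layer; the vocabulary size and review length are one reasonable choice):
# Sketch: RNN with an Embedding layer for IMDB sentiment analysis
import numpy as np
from keras.datasets import imdb
from keras.utils import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense
vocab_size, maxlen = 10000, 200
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocab_size)
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)
model = Sequential()
model.add(Embedding(vocab_size, 32, input_length=maxlen))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))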
model.summary()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(X_train, y_train,epochs=5,validation_data=(X_test,y_test))
test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
OUTPUT:
Exercise:
Implement RNN for sentiment analysis on movie reviews. Use the concept of Embedding layer.
Week-14
Implement Bi-directional LSTM for sentiment analysis on movie reviews.
Program-
# Bi directional LSTM
import numpy as np
from keras.preprocessing import sequence
from keras.utils import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb
# vocabulary size, review length and batch size (these values are one reasonable choice)
n_unique_words = 10000
maxlen = 200
batch_size = 128
# load the IMDB movie-review dataset and pad all reviews to the same length
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=n_unique_words)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)
model = Sequential()
model.add(Embedding(n_unique_words, 128, input_length=maxlen))
model.add(Bidirectional(LSTM(64)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=10, validation_data=(x_test, y_test))
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test loss', test_loss)
print('Test accuracy', test_acc)
print(history.history['loss'])
print(history.history['accuracy'])
from matplotlib import pyplot
pyplot.plot(history.history['loss'])
pyplot.plot(history.history['accuracy'])
pyplot.title('model loss vs accuracy')
pyplot.xlabel('epoch')
pyplot.legend(['loss', 'accuracy'], loc='upper right')
pyplot.show()
OUTPUT:
Exercise:
Implement Bi-directional LSTM on a suitable dataset of your choice. Modify the program as needed.
Week-15
Implement Generative Adversarial Networks to generate realistic Images. Use MNIST dataset.
Program-
# loading the mnist dataset
from tensorflow.keras.datasets.mnist import load_data
#plot of 25 images from the MNIST training dataset, arranged in a 5×5 square.
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
import tensorflow as tf
# load and prepare the MNIST training images: scale to [-1, 1] and batch them
(train_images, train_labels), (_, _) = load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5
BUFFER_SIZE = 60000
BATCH_SIZE = 256
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256)  # Note: None is the batch size
    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    # upsample to 14x14
    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    # upsample to 28x28
    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)
    return model
# sample image generated by the (untrained) generator
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)

def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same', input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Flatten())
    model.add(layers.Dense(1))
    return model

discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print(decision)
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
                                 discriminator_optimizer=discriminator_optimizer,
                                 generator=generator,
                                 discriminator=discriminator)
EPOCHS = 5
noise_dim = 100
num_examples_to_generate = 16
seed = tf.random.normal([num_examples_to_generate, noise_dim])

@tf.function
def train_step(images):
    noise = tf.random.normal([BATCH_SIZE, noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)
        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def generate_and_save_images(model, epoch, test_input):
    predictions = model(test_input, training=False)
    plt.figure(figsize=(4, 4))
    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i+1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')
    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()

def train(dataset, epochs):
    for epoch in range(epochs):
        for image_batch in dataset:
            train_step(image_batch)
        generate_and_save_images(generator, epoch + 1, seed)

train(train_dataset, EPOCHS)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Display a single image using the epoch number
def display_image(epoch_no):
    return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
# stitch the saved per-epoch images into an animated GIF and display it
anim_file = 'dcgan.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
    for filename in sorted(glob.glob('image_at_epoch_*.png')):
        writer.append_data(imageio.imread(filename))
import tensorflow_docs.vis.embed as embed
embed.embed_file(anim_file)
OUTPUT:
Exercise:
Implement Generative Adversarial Networks to generate realistic Images. Use Fashion MNIST or any
human face datasets.
Week-16
Implement Auto encoders for image denoising on MNIST dataset.
Program-
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Conv2D,MaxPool2D, UpSampling2D,Dropout
from keras.datasets import mnist
(x_train,y_train),(x_test,y_test) = mnist.load_data()
# to get the shape of the data
print("x_train shape:",x_train.shape)
print("x_test shape", x_test.shape)
plt.figure(figsize = (8,8))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.title(str(y_train[i]), fontsize = 16, color = 'black', pad = 2)
    plt.imshow(x_train[i], cmap = plt.cm.binary)
    plt.xticks([])
    plt.yticks([])
plt.show()
# split the test data into validation and test sets and scale everything to [0, 1]
val_images = x_test[:9000]
test_images = x_test[9000:]
train_images = x_train.astype('float32') / 255.0
train_images = np.reshape(train_images, (train_images.shape[0], 28, 28, 1))
val_images = val_images.astype('float32') / 255.0
val_images = np.reshape(val_images, (val_images.shape[0], 28, 28, 1))
test_images = test_images.astype('float32') / 255.0
test_images = np.reshape(test_images, (test_images.shape[0], 28, 28, 1))
# add Gaussian noise to the images (the noise factor is one reasonable choice)
factor = 0.5
train_noisy_images = train_images + factor * np.random.normal(size = train_images.shape)
val_noisy_images = val_images + factor * np.random.normal(size = val_images.shape)
test_noisy_images = test_images + factor * np.random.normal(size = test_images.shape)
# here the maximum pixel value of our images may exceed 1, so we have to clip the images
train_noisy_images = np.clip(train_noisy_images, 0., 1.)
val_noisy_images = np.clip(val_noisy_images, 0., 1.)
test_noisy_images = np.clip(test_noisy_images, 0., 1.)
plt.figure(figsize = (8,8))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.title(str(y_train[i]), fontsize = 16, color = 'black', pad = 2)
    plt.imshow(train_noisy_images[i].reshape(1,28,28)[0], cmap = plt.cm.binary)
    plt.xticks([])
    plt.yticks([])
plt.show()
model = Sequential()
# encoder network
model.add(Conv2D(filters = 128, kernel_size = (2,2), activation = 'relu', padding = 'same', input_shape = (28,28,1)))
model.add(tf.keras.layers.BatchNormalization())
model.add(Conv2D(filters = 128, kernel_size = (2,2), activation = 'relu', padding = 'same'))
model.add(tf.keras.layers.BatchNormalization())
model.add(Conv2D(filters = 256, kernel_size = (2,2), strides = (2,2), activation = 'relu', padding = 'same'))
model.add(tf.keras.layers.BatchNormalization())
model.add(Conv2D(filters = 256, kernel_size = (2,2), activation = 'relu', padding = 'same'))
model.add(tf.keras.layers.BatchNormalization())
model.add(Conv2D(filters = 512, kernel_size = (3,3), activation = 'relu', padding = 'same'))
model.add(tf.keras.layers.BatchNormalization())
model.add(Conv2D(filters = 512, kernel_size = (2,2), strides = (2,2), activation = 'relu', padding = 'same'))
# decoder network: upsample back to 28x28 and reconstruct a single-channel image
model.add(Conv2D(filters = 512, kernel_size = (2,2), activation = 'relu', padding = 'same'))
model.add(UpSampling2D(size = (2,2)))
model.add(Conv2D(filters = 256, kernel_size = (2,2), activation = 'relu', padding = 'same'))
model.add(UpSampling2D(size = (2,2)))
model.add(Conv2D(filters = 1, kernel_size = (2,2), activation = 'sigmoid', padding = 'same'))
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
model.summary()
EPOCHS = 5
BATCH_SIZE = 256
VALIDATION = (val_noisy_images, val_images)
history = model.fit(train_noisy_images, train_images, batch_size = BATCH_SIZE, epochs = EPOCHS, validation_data = VALIDATION)
plt.subplot(2,1,1)
plt.plot(history.history['loss'], label = 'loss')
plt.plot(history.history['val_loss'], label = 'val_loss')
plt.legend(loc = 'best')
plt.subplot(2,1,2)
plt.plot(history.history['accuracy'], label = 'accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.legend(loc = 'best')
plt.show()
plt.figure(figsize = (18,18))
for i in range(10,19):
    plt.subplot(9,9,i)
    if(i == 14):
        plt.title('Real Images', fontsize = 25, color = 'Green')
    plt.imshow(test_images[i].reshape(1,28,28)[0], cmap = plt.cm.binary)
plt.show()
plt.figure(figsize = (18,18))
for i in range(10,19):
    plt.subplot(9,9,i)
    if(i == 15):
        plt.title('Noised Images', fontsize = 25, color = 'red')
    plt.imshow(test_noisy_images[i].reshape(1,28,28)[0], cmap = plt.cm.binary)
plt.show()
plt.figure(figsize = (18,18))
for i in range(10,19):
    plt.subplot(9,9,i)
    if(i == 15):
        plt.title('Denoised Images', fontsize = 25, color = 'Blue')
    plt.imshow(model.predict(test_noisy_images[i].reshape(1,28,28,1)).reshape(1,28,28)[0], cmap = plt.cm.binary)
plt.show()
OUTPUT:
Exercise:
Implement Auto encoders for image denoising on Fashion MNIST dataset or on any suitable dataset of
your choice.