DL_0801CS223D04_Assignment5.ipynb - Colab

The document provides a detailed explanation of building and training autoencoders using the Keras library with the MNIST dataset. It covers the process of loading the dataset, normalizing the images, creating a simple single-layer autoencoder, and then expanding to a deep autoencoder with multiple layers. The document also includes code snippets for training the models and visualizing the results, emphasizing the differences between single-layer and deep autoencoders.

Autoencoders

from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from matplotlib import pyplot as plt
from keras.datasets import mnist
import numpy as np

# Loading the MNIST dataset. This is a dataset of 60,000 28x28 grayscale images of the 10 digits,
# along with a test set of 10,000 images.

# Loads the MNIST dataset. Returns the (train subset) and (test subset)
# Each subset consists of (input features, outputs)
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Prints the number of images in the training and testing subsets


print('Number of images in training subset: ', len(x_train))
print('Number of images in testing subset: ', len(x_test))

# Prints the shape of a single image


# x_train[0] gives the first element and .shape prints the shape of that element (which is a 2D array)
print('Shape of the input is: ', x_train[0].shape)

# The shape is (28, 28), that is, the height and width of the image are 28 and 28 respectively.
# Moreover, note that it is a grayscale image. We will discuss this more in the upcoming classes.

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-dataset


11490434/11490434 ━━━━━━━━━━━━━━━━━━━━ 1s 0us/step
Number of images in training subset: 60000
Number of images in testing subset: 10000
Shape of the input is: (28, 28)

# Prints the range of the image intensities (or values)


print("min value: ", np.min(x_train), " max value: ", np.max(x_train))

min value: 0 max value: 255

# Normalizes the range of the images from (0,255) to (0,1.0)


x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

# Why? The pixel values can range from 0 to 255.


# When using the image as it is and passing through a Neural Network,
# the computation of high numeric values may become more complex.
# To reduce this we can normalize the values to range from 0 to 1.

# Prints the new range of the image intensities (or values)


print("min value: ", np.min(x_train), " max value: ", np.max(x_train))
min value: 0.0 max value: 1.0

# Adds Gaussian noise to the images (used for denoising experiments)
trainNoise = np.random.normal(loc=0.5, scale=0.5, size=x_train.shape)
testNoise = np.random.normal(loc=0.5, scale=0.5, size=x_test.shape)

# Clips the noisy images back to the valid intensity range (0, 1)
trainXNoisy = np.clip(x_train + trainNoise, 0, 1)
testXNoisy = np.clip(x_test + testNoise, 0, 1)
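These noisy copies are not used again below, but they are the standard ingredient of a denoising autoencoder: the model is fit with the noisy images as inputs and the clean images as targets. A minimal sketch (here `autoencoder` stands for any of the models built later in this notebook; the flattened variants would need the noisy arrays reshaped the same way):

# A denoising autoencoder maps noisy inputs to clean targets, so x and y
# differ, unlike the plain autoencoders trained later in this notebook:
# autoencoder.fit(trainXNoisy, x_train,
#                 epochs=50,
#                 batch_size=512,
#                 validation_data=(testXNoisy, x_test))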

# Prints the shape of the training subset


print('Shape of the training subset is: ', x_train.shape)

# (60000, 28, 28) == (Batch_size, Height, Width) [or]


# (60000, 28, 28) == (Number of samples, Height, Width)

Shape of the training subset is: (60000, 28, 28)

# We want to flatten each image in the dataset.


# That is, we want to convert the images from 2D matrix to 1D vector.
# For doing this we will use .reshape() operator

# Syntax
# ======
# array = numpy.reshape(array, newshape)
# Gives a new shape to an array without changing its data.
# [or]
# array.reshape(newshape)
# for example: arr.reshape((5,5))

# Calculate the height and width of the input


height, width = x_train[0].shape

# Reshape the training subset


x_train = x_train.reshape((len(x_train), height * width))
# Reshape the testing subset
x_test = x_test.reshape((len(x_test), height * width))

# Prints the shape of the flattened training subset


print('Shape of the training subset is: ', x_train.shape)

Shape of the training subset is: (60000, 784)

Single-layer

The simplest possible autoencoder has a single hidden layer of n nodes, where n is smaller than the
number of input pixels. The output layer of this model mirrors the input layer in size. In the next few
code cells we train just such a simple autoencoder and demonstrate its output.

# Let us now build our model.

# Let us build a single hidden layered network.

# This is the size of our encoded representations. It is also called the bottleneck size.
encoding_dim = 32 # This is a hyperparameter. A lower value means higher compression.
# 784 -> 32 (compression by a factor of 784/32 = 24.5, assuming the input is 784-dimensional).

# Input() is used to instantiate a Keras tensor.


# The shape parameter expects the shape of the input tensor.
input_img = Input(shape=(784,))

# Assignment 1: Figure out why the value assigned to shape is (784,) and not simply (784).
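A hint for Assignment 1 — in Python it is the trailing comma, not the parentheses, that makes a tuple:

# (784) is just the integer 784 in parentheses; (784,) is a 1-element tuple.
# shape expects a tuple, with one entry per axis of a single sample.
print(type((784)))   # <class 'int'>
print(type((784,)))  # <class 'tuple'>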

# Creates the first layer of the network (which will be the encoder in this case)
# Dense() creates a densely-connected NN layer.

# The parameters denote the following:
# units: Positive integer, dimensionality of the output space (encoding_dim in this case).
# activation: Activation function to use. If we do not specify anything, no activation is applied.
encoded = Dense(units=encoding_dim, activation='relu')(input_img)

# Creates the second layer of the network (which will be the decoder in this case)
# Note that the activation here is 'sigmoid'. This is because we want the output to be in the
# range (0, 1), matching the normalized input.
decoded = Dense(784, activation='sigmoid')(encoded)

# Model() groups layers into an object with training/inference features.


# Arguments
# ========
# inputs: The input(s) of the model, generally a keras.Input object
# outputs: The output(s) of the model
autoencoder = Model(input_img, decoded)

# compile(), configures the model for training.


# Arguments
# ========
# optimizer: name of the optimizer
# loss: loss function
# metrics: list of metrics to be evaluated by the model during training and testing. (not used here)

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

# Assignment 2: Study the various optimizers and loss functions

# Trains the model for a fixed number of epochs.


# Model.fit()

# Arguments
# ========
# x: Input data.
# y: Target data.
# batch_size: Number of samples per gradient update.
# epochs: Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
# validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch.
# verbose: Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch.

# Returns
# ========
# A History object. Its History.history attribute is a record of training loss values and metrics values
# at successive epochs, as well as validation loss values and validation metrics values (if applicable).

history_100 = autoencoder.fit(x=x_train,
                              y=x_train,
                              epochs=100,
                              batch_size=512,
                              validation_data=(x_test, x_test),
                              verbose=1)

# Note that x and y are the same (i.e., x_train) here, as the input and output of the autoencoder are the same.
Epoch 97/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 16ms/step - loss: 0.6843 - val_loss: 0.6840
Epoch 98/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 16ms/step - loss: 0.6841 - val_loss: 0.6839
Epoch 99/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 17ms/step - loss: 0.6840 - val_loss: 0.6837
Epoch 100/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 22ms/step - loss: 0.6838 - val_loss: 0.6836

# History object.
# History.history returns a dictionary

# Prints the keys of the dictionary


print(history_100.history.keys())

dict_keys(['loss', 'val_loss'])

# Access the values of training loss at every epoch using the key 'loss'

print(history_100.history['loss'])
print("\nlength of the list: ", len(history_100.history['loss']), ", which is equal to the n

[0.6945615410804749, 0.6944591403007507, 0.6943569779396057, 0.694255530834198,

length of the list: 100 , which is equal to the number of epochs.

# Plots the training and the validation loss


plt.plot(history_100.history['loss'])
plt.plot(history_100.history['val_loss'])

# Provides a title to the plot


plt.title('model loss')

# Provides a label to the y axis


plt.ylabel('loss')

# Provides a label to the x axis


plt.xlabel('epochs')

# Provides a legend to the plot,


# so as we can identify which color line denotes which loss.
plt.legend(['train', 'val'], loc='upper right')
plt.show()
# Generates output predictions for the input samples.
# Syntax: model.predict(input)

decoded_imgs = autoencoder.predict(x_test)

313/313 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step
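The 32-dimensional codes themselves can also be inspected by wrapping just the encoder half in its own Model — a minimal sketch reusing the input_img and encoded tensors defined above (the encoder/codes names are illustrative):

# Standalone encoder: maps each 784-dim image to its 32-dim code.
encoder = Model(input_img, encoded)

codes = encoder.predict(x_test)
print('Shape of the encoded test set: ', codes.shape)  # expected: (10000, 32)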

# Plots a comparison between the original and the reconstructed images

# The number of images to be displayed


n = 10

# Sets the plot size


plt.figure(figsize=(20, 4))

# Runs the loop for n samples


for i in range(n):
    # Block to display the original images
    # ====================================

    # subplot(nrows, ncols, index)
    # Arguments
    # =========
    # nrows: total number of rows, 2 in our case.
    # ncols: total number of columns, n in our case.
    # index: the index of the grid at which the current element is to be placed (starts with 1).
    ax = plt.subplot(2, n, i + 1)

    # Shows an image; we reshape the 1D vector back to a 2D matrix
    plt.imshow(x_test[i].reshape(28, 28))

    # For displaying grayscale images
    plt.gray()

    # To turn off the axis and its ticks
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Block to display the reconstructed images
    # =========================================
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

plt.show()

We see here that the decoder trained in our autoencoder learns grainy (lossy) versions of the
original images.

This form of simple one-layer autoencoder learns a representation of the underlying dataset that is
very close to what is learned by PCA. You can thus think of this simple autoencoder as being a kind
of stochastic approximation of the more deterministic PCA algorithm.
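To make the analogy concrete, one can compare the reconstruction error of a 32-component PCA against the autoencoder trained above — a minimal sketch, assuming scikit-learn is available (PCA minimizes MSE, so the comparison is indicative rather than exact):

from sklearn.decomposition import PCA

# Fit PCA with the same bottleneck size as the autoencoder (32 components).
pca = PCA(n_components=32)
pca.fit(x_train)

# Project the test set to 32 dimensions and reconstruct it.
recon_pca = pca.inverse_transform(pca.transform(x_test))

# Mean squared reconstruction error of PCA vs. the autoencoder above.
print('PCA MSE:         ', np.mean((x_test - recon_pca) ** 2))
print('Autoencoder MSE: ', np.mean((x_test - decoded_imgs) ** 2))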

Design the following autoencoders:

1. Deep autoencoder
2. Under-complete autoencoder
3. Over-complete autoencoder

Deep Autoencoders

We can also use deeper autoencoders.

These have all the benefits and tradeoffs we would expect of increasing the number of layers in our
neural networks, namely: longer training times and less easily decoded representations, but more
accurate reconstructions.

A good default choice for a deep autoencoder is to scale the image down progressively, then scale
it back up again, using the same number of nodes on each layer on either side of the "most
compressed" representation at the center of the hidden layers.

# Input dimension is 784
input_img = Input(shape=(784,))
# 1st hidden layer of the encoder, with fewer nodes than the input dimension
encoded = Dense(128, activation='relu')(input_img)
# 2nd hidden layer of the encoder, with fewer nodes than the 1st hidden layer
encoded = Dense(64, activation='relu')(encoded)
# Bottleneck layer with the minimum number of nodes
encoded = Dense(32, activation='relu')(encoded)
# 1st hidden layer of the decoder, with as many nodes as the 2nd hidden layer of the encoder
decoded = Dense(64, activation='relu')(encoded)
# 2nd hidden layer of the decoder, with as many nodes as the 1st hidden layer of the encoder
decoded = Dense(128, activation='relu')(decoded)
# Output layer with as many nodes as the input layer
decoded = Dense(784, activation='sigmoid')(decoded)

autoencoder = Model(input_img, decoded)


# Compiling the autoencoder with the adam optimizer and the binary_crossentropy loss
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

history_200 = autoencoder.fit(x_train, x_train,
                              epochs=200,
                              batch_size=512,
                              shuffle=True,
                              validation_data=(x_test, x_test))

Epoch 1/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 29ms/step - loss: 0.4094 - val_loss: 0.2038
Epoch 2/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 6s 35ms/step - loss: 0.1876 - val_loss: 0.1539
Epoch 3/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 31ms/step - loss: 0.1505 - val_loss: 0.1371
Epoch 4/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 27ms/step - loss: 0.1364 - val_loss: 0.1292
Epoch 5/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 7s 40ms/step - loss: 0.1289 - val_loss: 0.1235
Epoch 6/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 39ms/step - loss: 0.1235 - val_loss: 0.1189
Epoch 7/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 27ms/step - loss: 0.1196 - val_loss: 0.1154
Epoch 8/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 6s 35ms/step - loss: 0.1159 - val_loss: 0.1125
Epoch 9/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 27ms/step - loss: 0.1133 - val_loss: 0.1100
Epoch 10/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 27ms/step - loss: 0.1106 - val_loss: 0.1077
Epoch 11/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 38ms/step - loss: 0.1084 - val_loss: 0.1053
Epoch 12/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 27ms/step - loss: 0.1063 - val_loss: 0.1043
Epoch 13/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 27ms/step - loss: 0.1050 - val_loss: 0.1025
Epoch 14/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 39ms/step - loss: 0.1035 - val_loss: 0.1016
Epoch 15/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 27ms/step - loss: 0.1026 - val_loss: 0.1011
Epoch 16/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 27ms/step - loss: 0.1018 - val_loss: 0.1007
Epoch 17/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 37ms/step - loss: 0.1008 - val_loss: 0.0992
Epoch 18/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 28ms/step - loss: 0.0999 - val_loss: 0.0988
Epoch 19/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 26ms/step - loss: 0.0992 - val_loss: 0.0975
Epoch 20/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 30ms/step - loss: 0.0984 - val_loss: 0.0973
Epoch 21/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 35ms/step - loss: 0.0980 - val_loss: 0.0966
Epoch 22/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 27ms/step - loss: 0.0969 - val_loss: 0.0955
Epoch 23/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 30ms/step - loss: 0.0967 - val_loss: 0.0952
Epoch 24/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 35ms/step - loss: 0.0960 - val_loss: 0.0949
Epoch 25/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 27ms/step - loss: 0.0954 - val_loss: 0.0940
Epoch 26/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 28ms/step - loss: 0.0949 - val_loss: 0.0939
Epoch 27/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 37ms/step - loss: 0.0944 - val_loss: 0.0935
Epoch 28/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 26ms/step - loss: 0.0939 - val_loss: 0.0932
Epoch 29/200
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 26ms/step - loss: 0.0939 - val_loss: 0.0924

plt.plot(history_200.history['loss'])
plt.plot(history_200.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

Design a deep autoencoder using the optimizer above, and explore other loss functions such as
hinge loss (a sketch follows).
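One possible starting point is sketched below: the same deep architecture rebuilt from scratch so it trains with fresh weights, compiled with a different loss (the deep_ae_mse and history_mse names are illustrative). Note that Keras' 'hinge' loss expects targets in {-1, 1}, so with pixel targets in [0, 1] its values need careful interpretation; reconstruction losses such as 'mse' or 'mae' are more natural drop-in choices.

# Rebuild the deep autoencoder so training starts from fresh weights.
inp = Input(shape=(784,))
h = Dense(128, activation='relu')(inp)
h = Dense(64, activation='relu')(h)
h = Dense(32, activation='relu')(h)
h = Dense(64, activation='relu')(h)
h = Dense(128, activation='relu')(h)
out = Dense(784, activation='sigmoid')(h)

deep_ae_mse = Model(inp, out)

# Swap the loss here, e.g. 'mse', 'mae', or 'hinge'.
deep_ae_mse.compile(optimizer='adam', loss='mse')

history_mse = deep_ae_mse.fit(x_train, x_train,
                              epochs=50,
                              batch_size=512,
                              shuffle=True,
                              validation_data=(x_test, x_test))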
Convolutional autoencoder

Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as
encoders and decoders. In practical settings, autoencoders applied to images are always
convolutional autoencoders since they simply perform much better.

Let's implement one. The encoder will consist of a stack of Conv2D and MaxPooling2D layers (max
pooling being used for spatial down-sampling), while the decoder will consist of a stack of Conv2D
and UpSampling2D layers.

import keras
from keras import layers

input_img = Input(shape=(28, 28, 1))

x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)


x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D((2, 2), padding='same')(x)

# at this point the representation is (4, 4, 8) i.e. 128-dimensional

x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)


x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)  # no padding='same' here: 16x16 -> 14x14, which upsamples back to 28x28
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = keras.Model(input_img, decoded)


autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
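A quick way to verify the (4, 4, 8) bottleneck noted above is to print the model summary:

# Lists every layer with its output shape; the last MaxPooling2D
# should report (None, 4, 4, 8), i.e. the 128-dimensional code.
autoencoder.summary()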

from keras.datasets import mnist


import numpy as np

(x_train, _), (x_test, _) = mnist.load_data()

x_train = x_train.astype('float32') / 255.


x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))

history_100 = autoencoder.fit(x_train, x_train,
                              epochs=100,
                              batch_size=256,
                              shuffle=True,
                              validation_data=(x_test, x_test))

Epoch 1/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 72s 288ms/step - loss: 0.3514 - val_loss: 0.1655
Epoch 2/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 69s 293ms/step - loss: 0.1591 - val_loss: 0.1429
Epoch 3/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 83s 299ms/step - loss: 0.1399 - val_loss: 0.1317
Epoch 4/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 70s 297ms/step - loss: 0.1309 - val_loss: 0.1259
Epoch 5/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 81s 292ms/step - loss: 0.1254 - val_loss: 0.1212
Epoch 6/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 84s 301ms/step - loss: 0.1214 - val_loss: 0.1180
Epoch 7/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 82s 302ms/step - loss: 0.1185 - val_loss: 0.1155
Epoch 8/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 80s 295ms/step - loss: 0.1162 - val_loss: 0.1142
Epoch 9/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 72s 309ms/step - loss: 0.1141 - val_loss: 0.1115
Epoch 10/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 81s 304ms/step - loss: 0.1123 - val_loss: 0.1100
Epoch 11/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 84s 312ms/step - loss: 0.1108 - val_loss: 0.1087
Epoch 12/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 71s 303ms/step - loss: 0.1097 - val_loss: 0.1075
Epoch 13/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 70s 299ms/step - loss: 0.1084 - val_loss: 0.1064
Epoch 14/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 83s 303ms/step - loss: 0.1077 - val_loss: 0.1061
Epoch 15/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 81s 299ms/step - loss: 0.1066 - val_loss: 0.1047
Epoch 16/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 81s 293ms/step - loss: 0.1056 - val_loss: 0.1040
Epoch 17/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 83s 299ms/step - loss: 0.1049 - val_loss: 0.1039
Epoch 18/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 81s 296ms/step - loss: 0.1043 - val_loss: 0.1027
Epoch 19/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 74s 314ms/step - loss: 0.1038 - val_loss: 0.1022
Epoch 20/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 77s 294ms/step - loss: 0.1034 - val_loss: 0.1017
Epoch 21/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 82s 295ms/step - loss: 0.1027 - val_loss: 0.1015
Epoch 22/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 84s 302ms/step - loss: 0.1023 - val_loss: 0.1008
Epoch 23/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 84s 311ms/step - loss: 0.1017 - val_loss: 0.1006
Epoch 24/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 78s 294ms/step - loss: 0.1013 - val_loss: 0.1000
Epoch 25/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 85s 305ms/step - loss: 0.1012 - val_loss: 0.1000
Epoch 26/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 80s 297ms/step - loss: 0.1007 - val_loss: 0.0993
Epoch 27/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 84s 306ms/step - loss: 0.1002 - val_loss: 0.0992
Epoch 28/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 70s 299ms/step - loss: 0.0999 - val_loss: 0.0987
Epoch 29/100
235/235 ━━━━━━━━━━━━━━━━━━━━ 83s 305ms/step - loss: 0.0998 - val_loss: 0.0985

plt.plot(history_100.history['loss'])
plt.plot(history_100.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

UNDERCOMPLETE AUTOENCODERS

# Reshaping the input data from (28, 28, 1) back to (784,)
x_train_flat = x_train.reshape(-1, 784)
x_test_flat = x_test.reshape(-1, 784)

# Undercomplete Autoencoder

# Input Dimension is 784


input_img = Input(shape=(784,))
# Encoder with fewer neurons in the bottleneck layer
encoded = Dense(64, activation='relu')(input_img)
encoded = Dense(32, activation='relu')(encoded) # Bottleneck with fewer neurons
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(784, activation='sigmoid')(decoded)

undercomplete_autoencoder = Model(input_img, decoded)


undercomplete_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

history_undercomplete = undercomplete_autoencoder.fit(x_train_flat, x_train_flat,
                                                      epochs=100,
                                                      batch_size=512,
                                                      shuffle=True,
                                                      validation_data=(x_test_flat, x_test_flat))

# Plotting the loss
plt.plot(history_undercomplete.history['loss'])
plt.plot(history_undercomplete.history['val_loss'])
plt.title('Undercomplete Autoencoder Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
Epoch 1/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 31ms/step - loss: 0.4423 - val_loss: 0.2317
Epoch 2/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 21ms/step - loss: 0.2167 - val_loss: 0.1748
Epoch 3/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 20ms/step - loss: 0.1694 - val_loss: 0.1521
Epoch 4/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 29ms/step - loss: 0.1495 - val_loss: 0.1392
Epoch 5/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 25ms/step - loss: 0.1388 - val_loss: 0.1319
Epoch 6/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 20ms/step - loss: 0.1321 - val_loss: 0.1257
Epoch 7/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 24ms/step - loss: 0.1259 - val_loss: 0.1200
Epoch 8/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 32ms/step - loss: 0.1200 - val_loss: 0.1155
Epoch 9/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 30ms/step - loss: 0.1167 - val_loss: 0.1128
Epoch 10/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 20ms/step - loss: 0.1138 - val_loss: 0.1109
Epoch 11/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 20ms/step - loss: 0.1119 - val_loss: 0.1089
Epoch 12/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 26ms/step - loss: 0.1101 - val_loss: 0.1073
Epoch 13/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 23ms/step - loss: 0.1084 - val_loss: 0.1060
Epoch 14/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 20ms/step - loss: 0.1070 - val_loss: 0.1048
Epoch 15/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 25ms/step - loss: 0.1059 - val_loss: 0.1038
Epoch 16/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 27ms/step - loss: 0.1047 - val_loss: 0.1029
Epoch 17/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 22ms/step - loss: 0.1042 - val_loss: 0.1023
Epoch 18/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 21ms/step - loss: 0.1035 - val_loss: 0.1018
Epoch 19/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 22ms/step - loss: 0.1029 - val_loss: 0.1009
Epoch 20/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 31ms/step - loss: 0.1022 - val_loss: 0.1006
Epoch 21/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 20ms/step - loss: 0.1015 - val_loss: 0.1000
Epoch 22/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 25ms/step - loss: 0.1013 - val_loss: 0.0996
Epoch 23/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 21ms/step - loss: 0.1006 - val_loss: 0.0991
Epoch 24/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 24ms/step - loss: 0.1003 - val_loss: 0.0990
Epoch 25/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 25ms/step - loss: 0.1000 - val_loss: 0.0983
Epoch 26/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 23ms/step - loss: 0.0994 - val_loss: 0.0978
Epoch 27/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 6s 28ms/step - loss: 0.0990 - val_loss: 0.0976
Epoch 28/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 28ms/step - loss: 0.0988 - val_loss: 0.0971
Epoch 29/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 22ms/step - loss: 0.0981 - val_loss: 0.0965
Epoch 30/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 26ms/step - loss: 0.0977 - val_loss: 0.0962
Epoch 31/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 29ms/step - loss: 0.0974 - val_loss: 0.0962
Epoch 32/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 22ms/step - loss: 0.0972 - val_loss: 0.0954
Epoch 33/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 21ms/step - loss: 0.0967 - val_loss: 0.0951
Epoch 34/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 23ms/step - loss: 0.0964 - val_loss: 0.0947
Epoch 35/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 32ms/step - loss: 0.0961 - val_loss: 0.0944
Epoch 36/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 21ms/step - loss: 0.0955 - val_loss: 0.0943
Epoch 37/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 21ms/step - loss: 0.0953 - val_loss: 0.0939
Epoch 38/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 21ms/step - loss: 0.0949 - val_loss: 0.0935
Epoch 39/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 30ms/step - loss: 0.0947 - val_loss: 0.0934
Epoch 40/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 24ms/step - loss: 0.0944 - val_loss: 0.0936
Epoch 41/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 20ms/step - loss: 0.0943 - val_loss: 0.0928
Epoch 42/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 21ms/step - loss: 0.0938 - val_loss: 0.0926
Epoch 43/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 21ms/step - loss: 0.0936 - val_loss: 0.0925
Epoch 44/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 26ms/step - loss: 0.0935 - val_loss: 0.0922
Epoch 45/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 20ms/step - loss: 0.0932 - val_loss: 0.0919
Epoch 46/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 20ms/step - loss: 0.0929 - val_loss: 0.0918
Epoch 47/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 20ms/step - loss: 0.0928 - val_loss: 0.0918
Epoch 48/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 24ms/step - loss: 0.0927 - val_loss: 0.0915
Epoch 49/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 5s 21ms/step - loss: 0.0922 - val_loss: 0.0912
Epoch 50/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 20ms/step - loss: 0.0921 - val_loss: 0.0910
Epoch 51/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 20ms/step - loss: 0.0918 - val_loss: 0.0908
Epoch 52/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 24ms/step - loss: 0.0917 - val_loss: 0.0909
Epoch 53/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 28ms/step - loss: 0.0916 - val_loss: 0.0905
Epoch 54/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 4s 20ms/step - loss: 0.0914 - val_loss: 0.0902
Epoch 55/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 3s 20ms/step - loss: 0.0912 - val_loss: 0.0902
Epoch 56/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 2s 20ms/step - loss: 0.0910 - val_loss: 0.0901
Epoch 57/100

OVERCOMPLETE AUTOENCODERS

# Overcomplete Autoencoder

# Input dimension is 784
input_img = Input(shape=(784,))
# Encoder with more neurons than the input dimension
encoded = Dense(1024, activation='relu')(input_img)
# Bottleneck with more neurons
encoded = Dense(512, activation='relu')(encoded)
decoded = Dense(1024, activation='relu')(encoded)
decoded = Dense(784, activation='sigmoid')(decoded)

overcomplete_autoencoder = Model(input_img, decoded)
overcomplete_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

history_overcomplete = overcomplete_autoencoder.fit(x_train_flat, x_train_flat,
                                                    epochs=100,
                                                    batch_size=512,
                                                    shuffle=True,
                                                    validation_data=(x_test_flat, x_test_flat))

# Plotting the loss
plt.plot(history_overcomplete.history['loss'])
plt.plot(history_overcomplete.history['val_loss'])
plt.title('Overcomplete Autoencoder Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

Epoch 1/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 25s 198ms/step - loss: 0.2783 - val_loss: 0.1105
Epoch 2/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 23s 192ms/step - loss: 0.1004 - val_loss: 0.0845
Epoch 3/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 22s 187ms/step - loss: 0.0838 - val_loss: 0.0777
Epoch 4/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 41s 185ms/step - loss: 0.0776 - val_loss: 0.0745
Epoch 5/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 42s 193ms/step - loss: 0.0744 - val_loss: 0.0731
Epoch 6/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 41s 197ms/step - loss: 0.0725 - val_loss: 0.0712
Epoch 7/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 41s 197ms/step - loss: 0.0711 - val_loss: 0.0703
Epoch 8/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 41s 199ms/step - loss: 0.0702 - val_loss: 0.0706
Epoch 9/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 22s 188ms/step - loss: 0.0693 - val_loss: 0.0685
Epoch 10/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 25s 211ms/step - loss: 0.0689 - val_loss: 0.0682
Epoch 11/100
118/118 ━━━━━━━━━━━━━━━━━━━━ 38s 186ms/step - loss: 0.0683 - val_loss: 0.0685
