Case Study - AP23322130042
Report on
Submitted by
Submitted to:
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

# Load and normalize the MNIST dataset (pixel values scaled to [0, 1])
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Simple fully connected baseline: Flatten -> Dense(84) -> Dense(10)
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(84, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Model summary
model.summary()

# Train the model, keeping the history object for the plots below
history = model.fit(x_train, y_train, epochs=10,
                    validation_data=(x_test, y_test))

plt.figure(figsize=(12, 5))
# Accuracy plot
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Training accuracy')
plt.plot(history.history['val_accuracy'], label='Validation accuracy')
plt.legend()
# Loss plot
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Training loss')
plt.plot(history.history['val_loss'], label='Validation loss')
plt.legend()
plt.show()

# Show predictions for a few random test images
num_samples = 5
plt.figure(figsize=(10, 5))
for i in range(num_samples):
    index = np.random.randint(0, len(x_test))
    img = x_test[index]
    prediction = np.argmax(model.predict(img[np.newaxis, ...], verbose=0))
    plt.subplot(1, num_samples, i + 1)
    plt.imshow(img.squeeze(), cmap='gray')
    plt.title(f'Predicted: {prediction}')
    plt.axis('off')
plt.show()
Output of the Code:
https://colab.research.google.com/drive/1NGsvuhjkmro_ND8CBf6HYYmA1gdL1e92?usp=sharing
Altering the filter size or the number of filters in a CNN affects both its performance and its complexity. A larger filter size, for example moving from a 3×3 to a 5×5 kernel, allows the network to detect larger patterns, though it requires more computation. Increasing the number of filters enables the model to learn a broader range of features, which may improve accuracy but also adds to the model's size and processing time. This can enhance performance, but if the model becomes too complex, it risks overfitting, as the short sketch after this paragraph illustrates on parameter counts.
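A minimal sketch of the parameter-count side of this trade-off (illustrative only, not part of the assignment code) counts the trainable parameters of a single Conv2D layer for different kernel sizes and filter counts, assuming a 28×28×1 MNIST-style input:

from tensorflow import keras

def conv_params(filters, kernel_size):
    # Build a throwaway one-layer model just to count its trainable parameters
    probe = keras.Sequential([
        keras.layers.Input(shape=(28, 28, 1)),
        keras.layers.Conv2D(filters, kernel_size, activation='relu')
    ])
    return probe.count_params()

print(conv_params(64, (3, 3)))    # 64*(3*3*1) + 64 biases   = 640 parameters
print(conv_params(64, (5, 5)))    # 64*(5*5*1) + 64 biases   = 1664 parameters (bigger kernel)
print(conv_params(128, (3, 3)))   # 128*(3*3*1) + 128 biases = 1280 parameters (more filters)

Within one layer the count grows with the square of the kernel size but only linearly with the number of filters, although extra filters also enlarge every following layer because they increase its number of input channels.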
Pooling layers, such as MaxPooling, reduce the spatial size of the data as it passes through the network. By decreasing the number of parameters and operations in subsequent layers, pooling helps the model generalize better while also lowering computational requirements. This downsampling retains the essential features and discards unnecessary detail, promoting efficiency and reducing overfitting; the shape check below shows the effect directly.
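The sketch below (illustrative only, fed with a random dummy input rather than MNIST data) passes a 28×28×1 image through one convolution and one 2×2 max-pooling layer and prints the resulting tensor shapes:

import tensorflow as tf
from tensorflow import keras

x = tf.random.normal((1, 28, 28, 1))               # one dummy MNIST-sized image
conv = keras.layers.Conv2D(64, (3, 3), activation='relu')
pool = keras.layers.MaxPooling2D((2, 2))

features = conv(x)
pooled = pool(features)
print(features.shape)   # (1, 26, 26, 64): a 3x3 convolution shrinks 28 -> 26
print(pooled.shape)     # (1, 13, 13, 64): 2x2 max pooling halves height and width

Halving the height and width cuts the number of spatial positions the later layers must process to roughly a quarter, which is where most of the computational saving comes from.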
Adding batch normalization after convolutional or dense layers helps the model learn faster by normalizing the layer inputs within each mini-batch, which can enhance model stability and generalization.
Dropout, as used in the current model, deactivates a random fraction of nodes during training to minimize overfitting. Using batch normalization together with dropout improves training stability and robustness, potentially leading to better overall performance, especially on more complex data; one possible placement of the two layers is sketched below.
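The sketch that follows shows one way to combine the two in the given architecture; the placement (BatchNormalization after the first two convolutions, with the existing Dropout kept before the output layer) is an assumption for illustration, not the only valid arrangement:

from tensorflow import keras

model_bn = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.BatchNormalization(),       # normalize activations within each mini-batch
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.BatchNormalization(),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.2),               # randomly deactivate 20% of units during training
    keras.layers.Dense(10, activation='softmax')
])
model_bn.compile(optimizer='adam',
                 loss='sparse_categorical_crossentropy',
                 metrics=['accuracy'])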
Given Code:
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# Load MNIST dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale pixels to [0, 1] and add a channel axis so the data matches the
# Conv2D input_shape of (28, 28, 1)
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0
# plot 4 images as gray scale
#plt.subplot(221)
#plt.imshow(x_train[999], cmap=plt.get_cmap('gray'))
#plt.subplot(222)
#plt.imshow(x_train[100], cmap=plt.get_cmap('gray'))
#plt.subplot(223)
#plt.imshow(x_train[10], cmap=plt.get_cmap('gray'))
#plt.subplot(224)
#plt.imshow(x_train[77], cmap=plt.get_cmap('gray'))
# show the plot
#plt.show()
# Define CNN model
model = tf.keras.Sequential([
    keras.layers.Conv2D(64, (3, 3), activation='relu',
                        input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation='softmax')
])
# Compile model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
#model.count_params()
# Train model
model.fit(x_train, y_train, epochs=10,
          validation_data=(x_test, y_test))
# Evaluate model
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f'Test accuracy: {test_acc:.2f}')
import numpy as np
import cv2
from google.colab.patches import cv2_imshow
# Load an external digit image in grayscale and display it
img = cv2.imread('//content/sample_data/download.jpg',
                 cv2.IMREAD_GRAYSCALE)
cv2_imshow(img)
# Preprocess to match the training data: 28x28, scaled to [0, 1], shape (1, 28, 28, 1)
img = cv2.resize(img, (28, 28)) / 255.0
img = img.reshape(1, 28, 28, 1)
# Make a prediction
prediction = model.predict(img)
predicted_digit = np.argmax(prediction)
print(f'Predicted digit: {predicted_digit}')
https://colab.research.google.com/drive/1jQGo4ByRf4qdhrxZi7ElXyu_Eacp7k0i?usp=sharing