TensorFlow and Keras APIs: Computer Vision, Neural Networks and Deep Learning
May 9, 2020
To enrich the dataset, we use data augmentation via the Keras ImageDataGenerator. We can also improve the models by adjusting their layers and tuning the hyper-parameters.
[0]: import os
import random
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import cv2
import shutil
import pickle
import datetime
from xml.etree import cElementTree as ElementTree
import tensorflow as tf
from tensorflow.keras.layers import concatenate
from tensorflow.keras.layers import Conv2D, SeparableConv2D
from tensorflow.keras.layers import Flatten, Dense, Activation
from tensorflow.keras.layers import BatchNormalization, Dropout
from tensorflow.keras.layers import MaxPooling2D, AveragePooling2D
from tensorflow.keras.layers import GlobalMaxPooling2D, GlobalAveragePooling2D
from tensorflow.python.client import device_lib
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping
Environment
[0]: tf.keras.__version__
[0]: '2.3.0-tf'
[0]: tf.__version__
[0]: '2.2.0-rc3'
[0]: device_lib.list_local_devices()
[0]: class XmlListConfig(list):
    def __init__(self, aList):
        for element in aList:
            if element:
                # treat like dict
                if len(element) == 1 or element[0].tag != element[1].tag:
                    self.append(XmlDictConfig(element))
                # treat like list
                elif element[0].tag == element[1].tag:
                    self.append(XmlListConfig(element))
            elif element.text:
                text = element.text.strip()
                if text:
                    self.append(text)

class XmlDictConfig(dict):
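    # NOTE: the body of XmlDictConfig is cut off in this export. Below is a
    # minimal sketch of the usual ElementTree-to-dict recipe (assumed, not
    # necessarily the author's exact code):
    def __init__(self, parent_element):
        if parent_element.items():
            self.update(dict(parent_element.items()))
        for element in parent_element:
            if element:
                # nested elements become dicts (or lists of dicts)
                if len(element) == 1 or element[0].tag != element[1].tag:
                    aDict = XmlDictConfig(element)
                else:
                    aDict = {element[0].tag: XmlListConfig(element)}
                if element.items():
                    aDict.update(dict(element.items()))
                self.update({element.tag: aDict})
            elif element.items():
                self.update({element.tag: dict(element.items())})
            else:
                self.update({element.tag: element.text})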
[0]: dir_test = 'drive/My Drive/Colab Notebooks/P6/data/test'
def load_obj(name):
    with open(root_data + 'obj/' + name + '.pkl', 'rb') as f:
        return pickle.load(f)
1 Prepare Data
[0]: filenames = []
categories = []
widths = []
heights = []
count = 0
for dir_an_name in directory_annotation_names:
    directory = dir_an_name.split('/')[-1]
    filenames_dir = os.listdir(dir_an_name)
    path = root_images + "/" + directory
    xml_string = open(root_annotation + "/" + directory +
                      "/" + filenames_dir[0], "r+").read()
    root_xml = ElementTree.XML(xml_string)
    xmldict = XmlDictConfig(root_xml)
    for filename_dir in filenames_dir:
        if os.path.isfile(path + "/" + filename_dir + ".jpg"):
            filenames.append(filename_dir + ".jpg")
            categories.append(xmldict['object']['name'])
            widths.append(xmldict['size']['width'])
            heights.append(xmldict['size']['height'])
        else:
            count += 1
data = pd.DataFrame({
    'filename': filenames,    # column names partly assumed; the start of this cell is truncated
    'category': categories,
    'widths': widths,
    'heights': heights
})
[0]: data
[0]: plt.figure(figsize=(20, 6))
sns.distplot(data['heights'], hist=False, label="distribution heights")
sns.distplot(data['widths'], hist=False, label="distribution widths")
[0]: 16440
[0]: 4200
[0]: 12240
[0]: 4140
if os.path.isdir(dir_train):
    shutil.rmtree(dir_train)
if os.path.isdir(dir_test):
    shutil.rmtree(dir_test)
os.mkdir(dir_train)
os.mkdir(dir_test)
file_error_test = []
for i, row in test_df.iterrows():
    path = root_images + "/" + row['directory'] + "/" + row['filename']
    if os.path.isfile(path):
        shutil.copy(path, dir_test)
    else:
        file_error_test.append(row['filename'])
print(file_error_test)
[0]: plt.imshow(img)
2 Preprocessing
Define constants
[0]: IMAGE_SIZE = (299, 299)
img = cv2.merge(channels)
return img
[0]: plt.imshow(whitening(img_to_array(img)))
Clipping input data to the valid range for imshow with RGB data ([0..1] for
floats or [0..255] for integers).
[0]: plt.imshow(equalHist(img_to_array(img)))
[0]: <matplotlib.image.AxesImage at 0x7f599610e6d8>
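The definitions of `whitening` and `equalHist` are only partially visible above; here is a minimal sketch of per-channel preprocessing functions consistent with how they are called (the exact steps are assumptions, not the author's full code):

def whitening(img):
    # Per-channel standardization (zero mean, unit variance) -- assumed variant;
    # only the tail of the original function is visible above.
    channels = []
    for channel in cv2.split(img.astype("float32")):
        channels.append((channel - channel.mean()) / (channel.std() + 1e-7))
    img = cv2.merge(channels)
    return img

def equalHist(img):
    # Histogram equalization applied to each channel independently.
    channels = [cv2.equalizeHist(c) for c in cv2.split(img.astype("uint8"))]
    return cv2.merge(channels)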
[0]: validation_generator = validation_datagen.\
flow_from_dataframe(dataframe=validate_df,
directory=dir_train,
x_col="filename",
y_col="category",
target_size=IMAGE_SIZE,
class_mode="categorical",
batch_size=32
)
[0]: plt.subplot(1, 2, 1)
plt.imshow(img, aspect="auto")
plt.title('Original image')
plt.subplot(1, 2, 2)
plt.imshow(X_batch[0], aspect="auto")
plt.title('preprocessing')
plt.tight_layout()
plt.show()
Clipping input data to the valid range for imshow with RGB data ([0..1] for
floats or [0..255] for integers).
3 Callbacks: ReduceLROnPlateau, EarlyStopping, TensorBoard
[0]: # Stop training when the loss metric has stopped improving for 5 epochs
earlystop = EarlyStopping(monitor='val_loss', patience=5)
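The `learning_rate_reduction` and TensorBoard callbacks used in the fit calls below are created off-screen; here is a minimal sketch with assumed parameters (the factor of 0.5 matches the halving of the learning rate visible in the training logs; patience, min_lr and the log directory are assumptions):

learning_rate_reduction = ReduceLROnPlateau(monitor='val_loss',
                                            factor=0.5,     # halves the lr, as seen in the logs
                                            patience=3,     # assumed
                                            min_lr=1e-5,    # assumed
                                            verbose=1)

log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = TensorBoard(log_dir=log_dir, histogram_freq=1)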
Optimizer: RMSprop. See the [Keras optimizers](https://keras.io/optimizers/) documentation. The optimizer controls how the weights are updated from the gradients computed during back-propagation.
4 Transfer Learning
[0]: # Load InceptionResNetV2 with the trained weights for the feature representation layers
inceptionResNetV2_model = tf.keras.applications.InceptionResNetV2(weights='imagenet',
                                                                  include_top=False)
[0]: x = inceptionResNetV2_model.output
[0]: x = GlobalAveragePooling2D()(x)
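The rest of the classification head, the layer freezing, and the compile/fit cells are not visible in this export. A minimal sketch consistent with the logged settings follows (120 breed classes, RMSprop at the 0.01 learning rate shown in the logs; the 1024-unit layer, the variable names `model`/`history`, the fit arguments, and `train_generator` are assumptions):

# Sketch of the missing head / training cells (names and layer sizes assumed).
x = Dense(1024, activation='relu')(x)
predictions = Dense(120, activation='softmax')(x)

model = tf.keras.Model(inputs=inceptionResNetV2_model.input, outputs=predictions)

# Freeze the pretrained feature-extraction layers; only the new head is trained.
for layer in inceptionResNetV2_model.layers:
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.01),
              loss='categorical_crossentropy', metrics=['accuracy'])

history = model.fit(train_generator,
                    epochs=30,
                    steps_per_epoch=train_df.shape[0] // 32,
                    validation_data=validation_generator,
                    validation_steps=validate_df.shape[0] // 32,
                    callbacks=[learning_rate_reduction, earlystop])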
Epoch 1/30
382/382 [==============================] - 8900s 23s/step - loss: 1.2136 -
accuracy: 0.7956 - val_loss: 0.9366 - val_accuracy: 0.8638 - lr: 0.0100
Epoch 2/30
382/382 [==============================] - 146s 383ms/step - loss: 0.8820 -
accuracy: 0.8670 - val_loss: 1.1164 - val_accuracy: 0.8593 - lr: 0.0100
Epoch 3/30
382/382 [==============================] - 147s 384ms/step - loss: 0.7621 -
accuracy: 0.8842 - val_loss: 1.0991 - val_accuracy: 0.8745 - lr: 0.0100
Epoch 4/30
382/382 [==============================] - 146s 383ms/step - loss: 0.7119 -
accuracy: 0.8915 - val_loss: 1.1188 - val_accuracy: 0.8721 - lr: 0.0100
Epoch 5/30
382/382 [==============================] - 146s 382ms/step - loss: 0.6227 -
accuracy: 0.9019 - val_loss: 1.2700 - val_accuracy: 0.8733 - lr: 0.0100
Epoch 6/30
382/382 [==============================] - 146s 381ms/step - loss: 0.6019 -
accuracy: 0.9060 - val_loss: 1.2158 - val_accuracy: 0.8748 - lr: 0.0100
[0]: fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot(111, autoscale_on=True)
ax.plot(history.history['loss'], color='b', label="Training loss")
ax.plot(history.history['val_loss'], color='r', label="validation loss")
ax.set_xticks(np.arange(1, 30, 1))
legend = plt.legend(loc='best', shadow=True)
plt.tight_layout()
plt.show()
Evaluation
[0]: test_datagen = ImageDataGenerator(rescale=1./255)
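The cells that build the test generator and compute `loss_and_metrics` are not included in this export; presumably something along these lines (arguments assumed to mirror the validation generator; `model` is the transfer-learning model sketched above):

test_generator = test_datagen.flow_from_dataframe(dataframe=test_df,
                                                  directory=dir_test,
                                                  x_col='filename',
                                                  y_col='category',
                                                  target_size=IMAGE_SIZE,
                                                  class_mode='categorical',
                                                  batch_size=32,
                                                  shuffle=False)

loss_and_metrics = model.evaluate(test_generator,
                                  steps=test_df.shape[0] // 32)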
[0]: loss_and_metrics
5 Extract misclassifications
[0]: test_df
20575 n02093647_120.jpg Bedlington_terrier 237 360 6
20576 n02093647_2585.jpg Bedlington_terrier 237 360 6
20577 n02093647_2068.jpg Bedlington_terrier 237 360 6
20578 n02093647_3219.jpg Bedlington_terrier 237 360 6
20579 n02093647_2349.jpg Bedlington_terrier 237 360 6
[0]: test_df
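The `predicted` column used below is added in a cell that is not visible; here is a minimal sketch of how it could be produced from the test generator (variable names are assumptions):

# Map softmax outputs back to breed names via the generator's class indices.
predictions = model.predict(test_generator,
                            steps=int(np.ceil(test_df.shape[0] / 32)))
index_to_class = {v: k for k, v in test_generator.class_indices.items()}
test_df['predicted'] = [index_to_class[i] for i in np.argmax(predictions, axis=1)]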
[0]: # Misclassifications
error_df = test_df[test_df['category'] != test_df['predicted']]
error_df.shape[0]
[0]: 485
[0]: error_df[['category', 'predicted']].groupby('category').describe()
[0]: predicted
count unique top freq
category
Afghan_hound 1 1 Newfoundland 1
Airedale 2 2 Irish_terrier 1
American_Staffordshire_terrier 14 4 Staffordshire_bullterrier 11
Appenzeller 2 2 EntleBucher 1
Australian_terrier 8 3 silky_terrier 4
… … … … …
toy_poodle 1 1 miniature_poodle 1
toy_terrier 4 3 basenji 2
vizsla 2 2 Weimaraner 1
whippet 3 2 Italian_greyhound 2
wire-haired_fox_terrier 1 1 Lakeland_terrier 1
ignore_index=True)
[0]: display(errors_sypnosis)
category list_errors
0 bull_mastiff | Labrador_retriever | Brabancon_griffon | cho...
1 Norwich_terrier | Scottish_deerhound | West_Highland_white_ter...
2 Tibetan_terrier | otterhound | soft-coated_wheaten_terrier | L...
3 Cardigan | Pembroke | Pembroke | Pembroke |
4 Irish_wolfhound | Scottish_deerhound | Scottish_deerhound | Sc...
.. ... ...
92 Siberian_husky | Eskimo_dog | toy_poodle | Eskimo_dog | Eskim...
93 Eskimo_dog | malamute | Siberian_husky |
94 Japanese_spaniel | Shih-Tzu | Blenheim_spaniel |
95 golden_retriever | Great_Pyrenees | cocker_spaniel | Great_Pyre...
96 Lhasa | Shih-Tzu | Tibetan_mastiff | Shih-Tzu | Shih...
x_col='filename',
y_col='category',
target_size=IMAGE_SIZE,
class_mode='categorical',
batch_size=32
)
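The ImageDataGenerator feeding this augmented flow is configured in a cell not shown here; a minimal sketch with typical augmentation settings (all parameter values are assumptions):

# Assumed augmentation settings; the author's exact values are not shown.
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=15,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   zoom_range=0.2,
                                   horizontal_flip=True)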
Clipping input data to the valid range for imshow with RGB data ([0..1] for
floats or [0..255] for integers).
epochs=30,
steps_per_epoch=train_df.shape[0]//32,
validation_data=validation_generator,
validation_steps=validate_df.shape[0]//32,
callbacks=[learning_rate_reduction, earlystop]
)
Epoch 1/30
382/382 [==============================] - 406s 1s/step - loss: 1.4635 -
accuracy: 0.8377 - val_loss: 1.2315 - val_accuracy: 0.8700 - lr: 0.0100
Epoch 2/30
382/382 [==============================] - 399s 1s/step - loss: 1.4446 -
accuracy: 0.8372 - val_loss: 1.1172 - val_accuracy: 0.8771 - lr: 0.0100
Epoch 3/30
382/382 [==============================] - 396s 1s/step - loss: 1.3201 -
accuracy: 0.8495 - val_loss: 1.2262 - val_accuracy: 0.8717 - lr: 0.0100
Epoch 4/30
382/382 [==============================] - 393s 1s/step - loss: 1.2951 -
accuracy: 0.8462 - val_loss: 1.2033 - val_accuracy: 0.8700 - lr: 0.0100
Epoch 5/30
382/382 [==============================] - ETA: 0s - loss: 1.2972 - accuracy:
0.8508
Epoch 00005: ReduceLROnPlateau reducing learning rate to 0.004999999888241291.
382/382 [==============================] - 394s 1s/step - loss: 1.2972 -
accuracy: 0.8508 - val_loss: 1.1955 - val_accuracy: 0.8702 - lr: 0.0100
Epoch 6/30
382/382 [==============================] - 397s 1s/step - loss: 0.8653 -
accuracy: 0.8822 - val_loss: 0.9745 - val_accuracy: 0.8876 - lr: 0.0050
Epoch 7/30
382/382 [==============================] - 406s 1s/step - loss: 0.7608 -
accuracy: 0.8858 - val_loss: 0.9190 - val_accuracy: 0.8850 - lr: 0.0050
Epoch 8/30
382/382 [==============================] - 408s 1s/step - loss: 0.7355 -
accuracy: 0.8842 - val_loss: 0.9485 - val_accuracy: 0.8831 - lr: 0.0050
Epoch 9/30
382/382 [==============================] - ETA: 0s - loss: 0.6819 - accuracy:
0.8912
Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.0024999999441206455.
382/382 [==============================] - 409s 1s/step - loss: 0.6819 -
accuracy: 0.8912 - val_loss: 0.9270 - val_accuracy: 0.8848 - lr: 0.0050
Epoch 10/30
382/382 [==============================] - 407s 1s/step - loss: 0.5759 -
accuracy: 0.9051 - val_loss: 0.8837 - val_accuracy: 0.8886 - lr: 0.0025
Epoch 11/30
382/382 [==============================] - 408s 1s/step - loss: 0.5444 -
accuracy: 0.9053 - val_loss: 0.8894 - val_accuracy: 0.8915 - lr: 0.0025
Epoch 12/30
382/382 [==============================] - 407s 1s/step - loss: 0.5221 -
accuracy: 0.9072 - val_loss: 0.8669 - val_accuracy: 0.8896 - lr: 0.0025
Epoch 13/30
382/382 [==============================] - 405s 1s/step - loss: 0.4721 -
accuracy: 0.9120 - val_loss: 0.8999 - val_accuracy: 0.8862 - lr: 0.0025
Epoch 14/30
382/382 [==============================] - ETA: 0s - loss: 0.5005 - accuracy:
0.9092
Epoch 00014: ReduceLROnPlateau reducing learning rate to 0.0012499999720603228.
382/382 [==============================] - 407s 1s/step - loss: 0.5005 -
accuracy: 0.9092 - val_loss: 0.9126 - val_accuracy: 0.8802 - lr: 0.0025
Epoch 15/30
382/382 [==============================] - 412s 1s/step - loss: 0.4344 -
accuracy: 0.9134 - val_loss: 0.8271 - val_accuracy: 0.8915 - lr: 0.0012
Epoch 16/30
382/382 [==============================] - 413s 1s/step - loss: 0.4216 -
accuracy: 0.9168 - val_loss: 0.8398 - val_accuracy: 0.8898 - lr: 0.0012
Epoch 17/30
382/382 [==============================] - 413s 1s/step - loss: 0.4210 -
accuracy: 0.9184 - val_loss: 0.8195 - val_accuracy: 0.8934 - lr: 0.0012
Epoch 18/30
382/382 [==============================] - 415s 1s/step - loss: 0.4030 -
accuracy: 0.9196 - val_loss: 0.8186 - val_accuracy: 0.8905 - lr: 0.0012
Epoch 19/30
382/382 [==============================] - 415s 1s/step - loss: 0.3890 -
accuracy: 0.9246 - val_loss: 0.8350 - val_accuracy: 0.8900 - lr: 0.0012
Epoch 20/30
382/382 [==============================] - ETA: 0s - loss: 0.4068 - accuracy:
0.9171
Epoch 00020: ReduceLROnPlateau reducing learning rate to 0.0006249999860301614.
382/382 [==============================] - 414s 1s/step - loss: 0.4068 -
accuracy: 0.9171 - val_loss: 0.8211 - val_accuracy: 0.8922 - lr: 0.0012
Epoch 21/30
382/382 [==============================] - 415s 1s/step - loss: 0.3534 -
accuracy: 0.9270 - val_loss: 0.7882 - val_accuracy: 0.8938 - lr: 6.2500e-04
Epoch 22/30
382/382 [==============================] - 411s 1s/step - loss: 0.3533 -
accuracy: 0.9250 - val_loss: 0.8018 - val_accuracy: 0.8936 - lr: 6.2500e-04
Epoch 23/30
382/382 [==============================] - 409s 1s/step - loss: 0.3751 -
accuracy: 0.9231 - val_loss: 0.8093 - val_accuracy: 0.8941 - lr: 6.2500e-04
Epoch 24/30
382/382 [==============================] - 408s 1s/step - loss: 0.3989 -
accuracy: 0.9219 - val_loss: 0.7877 - val_accuracy: 0.8934 - lr: 6.2500e-04
Epoch 25/30
382/382 [==============================] - 409s 1s/step - loss: 0.3875 -
accuracy: 0.9226 - val_loss: 0.8017 - val_accuracy: 0.8922 - lr: 6.2500e-04
Epoch 26/30
382/382 [==============================] - ETA: 0s - loss: 0.3833 - accuracy:
0.9235
Epoch 00026: ReduceLROnPlateau reducing learning rate to 0.0003124999930150807.
382/382 [==============================] - 416s 1s/step - loss: 0.3833 -
accuracy: 0.9235 - val_loss: 0.8019 - val_accuracy: 0.8922 - lr: 6.2500e-04
Epoch 27/30
382/382 [==============================] - 422s 1s/step - loss: 0.3431 -
accuracy: 0.9256 - val_loss: 0.7896 - val_accuracy: 0.8948 - lr: 3.1250e-04
Epoch 28/30
382/382 [==============================] - 421s 1s/step - loss: 0.3444 -
accuracy: 0.9291 - val_loss: 0.7907 - val_accuracy: 0.8931 - lr: 3.1250e-04
Epoch 29/30
382/382 [==============================] - 423s 1s/step - loss: 0.3542 -
accuracy: 0.9258 - val_loss: 0.7938 - val_accuracy: 0.8924 - lr: 3.1250e-04
Evaluation
[0]: fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot(111)
ax.plot(history_aug.history['accuracy'], color='b',
label="Training accuracy")
ax.plot(history_aug.history['val_accuracy'], color='r',
label="Validation accuracy")
ax.set_xticks(np.arange(1, 29, 1))
legend = plt.legend(loc='best', shadow=True)
plt.tight_layout()
plt.show()
[0]: fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot(111, autoscale_on=True)
ax.plot(history_aug.history['loss'], color='b', label="Training loss")
ax.plot(history_aug.history['val_loss'], color='r', label="Validation loss")
ax.set_xticks(np.arange(1, 30, 1))
legend = plt.legend(loc='best', shadow=True)
plt.tight_layout()
plt.show()
[0]: loss_and_metrics_aug
preprocessing function
Modify the input image size
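The cell that creates `my_VGG16` is not shown; presumably a plain Sequential model (a sketch):

my_VGG16 = tf.keras.Sequential(name='my_VGG16')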
[0]: # Block 1
my_VGG16.add(Conv2D(64,(3, 3), input_shape=(224, 224, 3), padding='same',
activation='relu'))
my_VGG16.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
[0]: # Block2
my_VGG16.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
my_VGG16.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
[0]: # Block3
my_VGG16.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
my_VGG16.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
[0]: # Block4
my_VGG16.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
my_VGG16.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
[0]: # Block5
my_VGG16.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
my_VGG16.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
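The classifier head between Block 5 and the compile call is not visible; a minimal sketch of a VGG-style head adapted to the 120 breeds (layer widths and dropout rate are assumptions):

my_VGG16.add(Flatten())
my_VGG16.add(Dense(4096, activation='relu'))
my_VGG16.add(Dropout(0.5))
my_VGG16.add(Dense(4096, activation='relu'))
my_VGG16.add(Dropout(0.5))
my_VGG16.add(Dense(120, activation='softmax'))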
[0]: my_VGG16.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
loss='categorical_crossentropy', metrics=['accuracy'])
Epoch 1/30
382/382 [==============================] - 273s 713ms/step - loss: 81.5608 -
accuracy: 0.0074 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 0.0010
Epoch 2/30
382/382 [==============================] - 271s 708ms/step - loss: 4.8113 -
accuracy: 0.0064 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 0.0010
Epoch 3/30
382/382 [==============================] - 272s 712ms/step - loss: 4.8540 -
accuracy: 0.0072 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 0.0010
Epoch 4/30
382/382 [==============================] - ETA: 0s - loss: 4.8599 - accuracy:
0.0070
Epoch 00004: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
382/382 [==============================] - 273s 716ms/step - loss: 4.8599 -
accuracy: 0.0070 - val_loss: 4.7877 - val_accuracy: 0.0083 - lr: 0.0010
Epoch 5/30
382/382 [==============================] - 273s 715ms/step - loss: 4.7883 -
accuracy: 0.0084 - val_loss: 4.7878 - val_accuracy: 0.0083 - lr: 5.0000e-04
Epoch 6/30
382/382 [==============================] - 272s 711ms/step - loss: 4.6768 -
accuracy: 0.0162 - val_loss: 4.7853 - val_accuracy: 0.0093 - lr: 5.0000e-04
Epoch 7/30
382/382 [==============================] - 271s 710ms/step - loss: 4.5277 -
accuracy: 0.0313 - val_loss: 4.7874 - val_accuracy: 0.0074 - lr: 5.0000e-04
Epoch 8/30
382/382 [==============================] - 274s 716ms/step - loss: 4.3475 -
accuracy: 0.0424 - val_loss: 5.0457 - val_accuracy: 0.0095 - lr: 5.0000e-04
Epoch 9/30
382/382 [==============================] - 274s 718ms/step - loss: 4.2178 -
accuracy: 0.0598 - val_loss: 4.7801 - val_accuracy: 0.0122 - lr: 5.0000e-04
Epoch 10/30
382/382 [==============================] - 274s 718ms/step - loss: 4.0927 -
accuracy: 0.0702 - val_loss: 4.8033 - val_accuracy: 0.0115 - lr: 5.0000e-04
Epoch 11/30
382/382 [==============================] - 274s 718ms/step - loss: 3.9952 -
accuracy: 0.0866 - val_loss: 5.4929 - val_accuracy: 0.0055 - lr: 5.0000e-04
Epoch 12/30
382/382 [==============================] - ETA: 0s - loss: 3.8918 - accuracy:
0.1011
Epoch 00012: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
382/382 [==============================] - 274s 719ms/step - loss: 3.8918 -
accuracy: 0.1011 - val_loss: 5.3590 - val_accuracy: 0.0086 - lr: 5.0000e-04
Epoch 13/30
382/382 [==============================] - 274s 717ms/step - loss: 3.5799 -
accuracy: 0.1453 - val_loss: 5.9678 - val_accuracy: 0.0086 - lr: 2.5000e-04
Epoch 14/30
382/382 [==============================] - 274s 718ms/step - loss: 3.4416 -
accuracy: 0.1651 - val_loss: 5.9237 - val_accuracy: 0.0081 - lr: 2.5000e-04
Evaluation
[0]: vgg16_test_datagen = ImageDataGenerator(rescale=1./255)
[0]: vgg16_test_generator = vgg16_test_datagen.\
flow_from_dataframe(dataframe=test_df,
directory=dir_test,
x_col='filename',
y_col='category',
target_size=(224, 224),
class_mode='categorical',
batch_size=32
)
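The evaluation cell that produces `loss_and_metrics_v` is not shown; presumably (a sketch):

loss_and_metrics_v = my_VGG16.evaluate(vgg16_test_generator,
                                       steps=test_df.shape[0] // 32)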
[0]: loss_and_metrics_v
[0]: # Entry flow
main_input = tf.keras.Input(shape=(299, 299, 3), name='main_input')
x = Activation('relu')(x)
x = SeparableConv2D(128, (3, 3), padding='same')(x)
x = Activation('relu')(x)
x = SeparableConv2D(256, (3, 3), padding="same")(x)
x = Activation('relu')(x)
x = SeparableConv2D(256, (3, 3), padding="same")(x)
x = Activation('relu')(x)
x = SeparableConv2D(768, (3, 3), padding="same")(x)
x = Activation('relu')(x)
x = SeparableConv2D(768, (3, 3), padding="same")(x)
y = Activation('relu')(y)
y = SeparableConv2D(728, (3, 3), padding="same")(y)
y = Activation('relu')(y)
y = SeparableConv2D(728, (3, 3), padding="same")(y)
x = concatenate([x, y])
x = Activation('relu')(x)
x = SeparableConv2D(728, (3, 3), padding="same")(x)
x = Activation('relu')(x)
x = SeparableConv2D(1024, (3, 3), padding="same")(x)
x = concatenate([x, tower_4])
x = GlobalAveragePooling2D()(x)
x = Flatten()(x)
x = Dense(2048, activation='relu')(x)
x = Dense(1024, activation='relu')(x)
x = Dense(120, activation='softmax')(x)
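The cell that wires this graph into `my_Xception` is not visible; presumably (a sketch):

my_Xception = tf.keras.Model(inputs=main_input, outputs=x, name='my_Xception')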
[0]: my_Xception.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
loss='categorical_crossentropy', metrics=['accuracy'])
Epoch 1/30
382/382 [==============================] - 429s 1s/step - loss: 4.7893 -
accuracy: 0.0069 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 0.0010
Epoch 2/30
382/382 [==============================] - 416s 1s/step - loss: 4.7889 -
accuracy: 0.0056 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 0.0010
Epoch 3/30
382/382 [==============================] - 415s 1s/step - loss: 4.7885 -
accuracy: 0.0065 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 0.0010
Epoch 4/30
382/382 [==============================] - ETA: 0s - loss: 4.7885 - accuracy:
0.0079
Epoch 00004: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
382/382 [==============================] - 414s 1s/step - loss: 4.7885 -
accuracy: 0.0079 - val_loss: 4.7875 - val_accuracy: 0.0081 - lr: 0.0010
Epoch 5/30
382/382 [==============================] - 417s 1s/step - loss: 4.7880 -
accuracy: 0.0072 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 5.0000e-04
Epoch 6/30
382/382 [==============================] - 415s 1s/step - loss: 4.7880 -
accuracy: 0.0073 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 5.0000e-04
Epoch 7/30
382/382 [==============================] - ETA: 0s - loss: 4.7880 - accuracy:
0.0072
Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
382/382 [==============================] - 417s 1s/step - loss: 4.7880 -
accuracy: 0.0072 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 5.0000e-04
Epoch 8/30
382/382 [==============================] - 418s 1s/step - loss: 4.7878 -
accuracy: 0.0079 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 2.5000e-04
Epoch 9/30
382/382 [==============================] - 419s 1s/step - loss: 4.7878 -
accuracy: 0.0071 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 2.5000e-04
Epoch 10/30
382/382 [==============================] - ETA: 0s - loss: 4.7878 - accuracy:
0.0072
Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.
382/382 [==============================] - 421s 1s/step - loss: 4.7878 -
accuracy: 0.0072 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 2.5000e-04
Epoch 11/30
382/382 [==============================] - 423s 1s/step - loss: 4.7876 -
accuracy: 0.0077 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 1.2500e-04
Epoch 12/30
382/382 [==============================] - 424s 1s/step - loss: 4.7876 -
accuracy: 0.0066 - val_loss: 4.7875 - val_accuracy: 0.0083 - lr: 1.2500e-04
Evaluation
[0]: loss_and_metrics_x = my_Xception.evaluate(test_generator, batch_size=32,
steps=test_df.shape[0]//32)
129/129 [==============================] - 28s 218ms/step - loss: 4.7874 -
accuracy: 0.0041
loss_and_metrics_x