Univariate and Multivariate Time Series Forecasting
February 3, 2024
Starting from index 5, take every 6th row, i.e. every hour; the original df has a 10-minute interval
[3]: df = df[5::6]
df
[70091 rows x 3 columns]
The resulting dataframe, df, now consists of rows that represent data at one-hour intervals from
the original dataframe, which had a 10-minute interval. The displayed output shows the “Date
Time,” “T (degC)” (temperature), and “p (mbar)” (pressure) columns for each selected hour. The
dataframe has a total of 70091 rows and 3 columns.
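If the 'Date Time' column has been parsed and set as a DatetimeIndex, an equivalent selection can be written against the timestamps themselves rather than by position; a minimal sketch, not from the original notebook:

# Hypothetical alternative: keep only the readings taken exactly on the hour,
# assuming df has a DatetimeIndex at 10-minute intervals
hourly = df[df.index.minute == 0]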
Making each row a matrix input This function creates input matrices (X) and output vectors (y) for a time series prediction task. Each row of the input matrix represents a window of window_size consecutive rows from the original DataFrame. The corresponding output is the value in the next time step after the window. The function returns the input matrix (X) and output vector (y).
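The df_to_X_y function itself did not survive this export; below is a minimal sketch consistent with the description above (the exact loop structure and NumPy conversion are assumptions):

import numpy as np

def df_to_X_y(df, window_size=5):
    # Sketch reconstructed from the description above
    df_as_np = df.to_numpy()
    X, y = [], []
    for i in range(len(df_as_np) - window_size):
        # Each input is a window of window_size consecutive values, one feature each
        X.append([[a] for a in df_as_np[i:i + window_size]])
        # The target is the value immediately after the window
        y.append(df_as_np[i + window_size])
    return np.array(X), np.array(y)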
[6]: # [[[1], [2], [3], [4], [5]]] [6], take last five to predict the 6th,,,, 1-5 is␣
↪appended to X, 6 to y
[7]: WINDOW_SIZE = 5
X1, y1 = df_to_X_y(temp, WINDOW_SIZE)
X1.shape, y1.shape
Explanation:
WINDOW_SIZE = 5: sets a window size of 5 for creating the input matrices and output vectors.
X1, y1 = df_to_X_y(temp, WINDOW_SIZE): calls the df_to_X_y function with the temperature data (temp) and the specified window size.
X1.shape, y1.shape: displays the shapes of the generated input matrix (X1) and output vector (y1).
Result: ((70086, 5, 1), (70086,)). X1 is a 3D array of 70086 windows, each with 5 time steps and 1 feature, and y1 is a 1D array of the 70086 corresponding targets (70091 rows minus the window of 5 leaves 70086 samples).
[8]: # Split the data into training, validation and testing sets
# Training set
X_train1, y_train1 = X1[:60000], y1[:60000]
# Validation set
X_val1, y_val1 = X1[60000:65000], y1[60000:65000]
# Testing set
X_test1, y_test1 = X1[65000:], y1[65000:]
X_train1.shape, y_train1.shape, X_val1.shape, y_val1.shape, X_test1.shape, y_test1.shape
[8]: ((60000, 5, 1), (60000,), (5000, 5, 1), (5000,), (5086, 5, 1), (5086,))
Explanation:
X_train1, y_train1 = X1[:60000], y1[:60000]: Splits the input matrix X1 and output vector y1 into
training data with the first 60,000 samples.
X_val1, y_val1 = X1[60000:65000], y1[60000:65000]: Creates a validation set with samples from
60,000 to 64,999 from the input matrix X1 and output vector y1.
X_test1, y_test1 = X1[65000:], y1[65000:]: Forms a test set using samples from 65,000 onwards in
the input matrix X1 and output vector y1.
X_train1.shape, y_train1.shape, X_val1.shape, y_val1.shape, X_test1.shape, y_test1.shape:
Prints the shapes of the resulting training, validation, and testing sets.
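Note that the split is chronological, not shuffled, so the validation and test periods come strictly after the training period. A small hypothetical helper doing the same split by fraction (60,000/70,086 is roughly 86% train, 7% validation):

def chronological_split(X, y, train_frac=0.85, val_frac=0.07):
    # Split sequentially so the validation/test periods follow the training period
    n = len(X)
    i, j = int(n * train_frac), int(n * (train_frac + val_frac))
    return (X[:i], y[:i]), (X[i:j], y[i:j]), (X[j:], y[j:])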
[9]: # Import necessary libraries
import tensorflow as tf
import os
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer, LSTM, Dense
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.metrics import RootMeanSquaredError
from tensorflow.keras.optimizers import Adam
# Build the univariate LSTM model
model1 = Sequential()
model1.add(InputLayer((5, 1)))
model1.add(LSTM(64))
model1.add(Dense(8, activation='relu'))
model1.add(Dense(1, activation='linear'))
model1.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm (LSTM) (None, 64) 16896
=================================================================
Total params: 17425 (68.07 KB)
Trainable params: 17425 (68.07 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Explanation:
model1 = Sequential(): Initializes a sequential model, allowing the addition of layers in a linear
stack.
model1.add(InputLayer((5, 1))): Adds an input layer with a shape of (5, 1), indicating the input
data has 5 time steps and 1 feature.
model1.add(LSTM(64)): Adds an LSTM layer with 64 units.
model1.add(Dense(8, activation='relu')): Adds a Dense layer with 8 units and a Rectified Linear Unit (ReLU) activation function.
model1.add(Dense(1, activation='linear')): Adds another Dense layer with 1 unit and a linear activation function.
model1.summary(): Prints a summary of the model architecture, including information about the
layers, output shapes, and parameters.
The summary indicates that the model has a total of 17,425 parameters, all of which are trainable.
The model uses an LSTM layer followed by Dense layers for processing and prediction.
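As a sanity check on the summary, the parameter counts can be derived by hand (assuming the standard Keras LSTM parameterization with four gates over the concatenated input and hidden state, plus biases):

units, features = 64, 1
lstm_params = 4 * units * (features + units + 1)  # 4 gates x (input + recurrent + bias) = 16896
dense1_params = 64 * 8 + 8                        # Dense(8) after the LSTM = 520
dense2_params = 8 * 1 + 1                         # Dense(1) output layer = 9
print(lstm_params, lstm_params + dense1_params + dense2_params)  # 16896, 17425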
# Compile the model with Mean Squared Error as the loss function, the Adam
# optimizer with a learning rate of 0.0001, and RMSE as a tracked metric.
# The ModelCheckpoint callback (cp1) saves the best model during training.
cp1 = ModelCheckpoint('model1/', save_best_only=True)
model1.compile(loss=MeanSquaredError(), optimizer=Adam(learning_rate=0.0001), metrics=[RootMeanSquaredError()])
model1.fit(X_train1, y_train1, validation_data=(X_val1, y_val1), epochs=10, callbacks=[cp1])
Epoch 1/10
1875/1875 [==============================] - 11s 6ms/step - loss: 2.8609 - root_mean_squared_error: 1.6914 - val_loss: 0.6485 - val_root_mean_squared_error: 0.8053
Epoch 3/10
1875/1875 [==============================] - 12s 6ms/step - loss: 1.0171 - root_mean_squared_error: 1.0085 - val_loss: 0.5360 - val_root_mean_squared_error: 0.7321
Epoch 4/10
1875/1875 [==============================] - 12s 7ms/step - loss: 0.7514 - root_mean_squared_error: 0.8668 - val_loss: 0.5299 - val_root_mean_squared_error: 0.7280
Epoch 5/10
1875/1875 [==============================] - 14s 8ms/step - loss: 0.6905 - root_mean_squared_error: 0.8310 - val_loss: 0.4987 - val_root_mean_squared_error: 0.7062
Epoch 6/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.6690 - root_mean_squared_error: 0.8179 - val_loss: 0.5249 - val_root_mean_squared_error: 0.7245
Epoch 7/10
1875/1875 [==============================] - 12s 7ms/step - loss: 0.6592 - root_mean_squared_error: 0.8119 - val_loss: 0.4922 - val_root_mean_squared_error: 0.7016
Epoch 8/10
1875/1875 [==============================] - 10s 6ms/step - loss: 0.6534 - root_mean_squared_error: 0.8083 - val_loss: 0.5103 - val_root_mean_squared_error: 0.7143
Epoch 9/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.6496 - root_mean_squared_error: 0.8060 - val_loss: 0.5215 - val_root_mean_squared_error: 0.7221
Epoch 10/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.6470 - root_mean_squared_error: 0.8044 - val_loss: 0.4990 - val_root_mean_squared_error: 0.7064
Loss, RMSE and, most importantly, validation loss all decreased over training. With save_best_only=True, the checkpoint kept the weights from the lowest validation loss, 0.4922 (epoch 7).
[13]: # Load the model with the lowest validation loss into memory
from tensorflow.keras.models import load_model
# Load the best model saved during training from the 'model1/' directory
model1 = load_model('model1/')
# The following line generates predictions using the trained model on the training data
train_predictions = model1.predict(X_train1).flatten()
# Create a DataFrame to display the predicted values alongside the actual values for the training data
train_results = pd.DataFrame(data={'Train Predictions': train_predictions, 'Actuals': y_train1})
train_results
       Train Predictions  Actuals
…                      …        …
59995           6.118703     6.07
59996           7.174312     9.88
59997          12.004978    13.53
59998          15.653346    15.43
59999          16.419474    15.54
Plotting
[15]: # Plotting
import matplotlib.pyplot as plt
plt.plot(train_results['Train Predictions'][50:100])  # Plot predictions on training data from index 50 to 100
# The following line generates predictions using the trained model on the validation data
val_predictions = model1.predict(X_val1).flatten()
# Create a DataFrame to display the predicted values alongside the actual values for the validation data
val_results = pd.DataFrame(data={'Val Predictions': val_predictions, 'Actuals': y_val1})
# Display the DataFrame with the predicted and actual values for the validation data
val_results
[17]: # Plotting
plt.plot(val_results['Val Predictions'][:100])
plt.plot(val_results['Actuals'][:100])
The same is repeated on the test dataset:
[18]: # Predictions on the test dataset using the trained model (model1)
test_predictions = model1.predict(X_test1).flatten()
# Create and display the DataFrame with the predicted and actual values for the test data
test_results = pd.DataFrame(data={'Test Predictions': test_predictions, 'Actuals': y_test1})
test_results
      Test Predictions  Actuals
…                    …        …
5082         -1.500765    -1.40
5083         -1.675806    -2.75
5084         -3.374628    -2.89
5085         -3.235152    -3.93
[19]: # Plotting
plt.plot(test_results['Test Predictions'][:100])
plt.plot(test_results['Actuals'][:100])
from sklearn.metrics import mean_squared_error as mse

def plot_predictions1(model, X, y, start=0, end=100):
    predictions = model.predict(X).flatten()
    df = pd.DataFrame(data={'Predictions': predictions, 'Actuals': y})
    # Plot the Predictions and Actuals within the specified range
    plt.plot(df['Predictions'][start:end])
    plt.plot(df['Actuals'][start:end])
    # Return the DataFrame and Mean Squared Error (MSE) for further analysis
    return df, mse(y, predictions)
temp_df = pd.DataFrame({'Temperature':temp})
temp_df['Seconds'] = temp_df.index.map(pd.Timestamp.timestamp)
temp_df
# Add new columns, converting the periodic 'Seconds' signal to sin and cos signals for day and year
day = 60 * 60 * 24
year = 365.2425 * day
temp_df['Day sin'] = np.sin(temp_df['Seconds'] * (2 * np.pi / day))
temp_df['Day cos'] = np.cos(temp_df['Seconds'] * (2 * np.pi / day))
temp_df['Year sin'] = np.sin(temp_df['Seconds'] * (2 * np.pi / year))
temp_df['Year cos'] = np.cos(temp_df['Seconds'] * (2 * np.pi / year))
temp_df
day and year are defined as the number of seconds in a day and a year, respectively. New columns
(‘Day sin’, ‘Day cos’, ‘Year sin’, ‘Year cos’) are added to temp_df. These new columns contain
sine and cosine transformations of the ‘Seconds’ column, representing the periodicity of the day
and year.
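A quick check of the encoding: one hour into the day is 1/24 of the daily cycle, which reproduces the 'Day sin' and 'Day cos' values at 01:00 in the table below (assuming NumPy is imported as np):

# One hour is 1/24 of the daily cycle
print(np.sin(2 * np.pi / 24))  # 0.258819..., the 'Day sin' value at 01:00
print(np.cos(2 * np.pi / 24))  # 0.965926..., the 'Day cos' value at 01:00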
[24]: # Now drop seconds since it does not loop around but is ever increasing
temp_df = temp_df.drop('Seconds', axis=1)
temp_df.head()
[24]: Temperature Day sin Day cos Year sin Year cos
Date Time
2009-01-01 01:00:00 -8.05 0.258819 0.965926 0.010049 0.999950
2009-01-01 02:00:00 -8.88 0.500000 0.866025 0.010766 0.999942
2009-01-01 03:00:00 -8.81 0.707107 0.707107 0.011483 0.999934
2009-01-01 04:00:00 -9.05 0.866025 0.500000 0.012199 0.999926
2009-01-01 05:00:00 -9.63 0.965926 0.258819 0.012916 0.999917
[25]: def df_to_X_y2(df, window_size=6):
    # Convert the DataFrame to a NumPy array
    df_as_np = df.to_numpy()
    X = []  # List to store input features
    y = []  # List to store labels
    for i in range(len(df_as_np) - window_size):
        # Extract a window of rows from the DataFrame as an input feature
        row = [r for r in df_as_np[i:i + window_size]]
        X.append(row)
        # Extract the label (temperature, column 0) from the next row after the window
        label = df_as_np[i + window_size][0]
        y.append(label)
    return np.array(X), np.array(y)
[26]: # Call the df_to_X_y2 function to convert temp_df into input features (X2) and labels (y2)
X2, y2 = df_to_X_y2(temp_df)
X2.shape, y2.shape
X2 has a shape of (70085, 6, 5), indicating it’s a 3D array with 70085 samples, each containing a
window of 6 rows and 5 columns. y2 has a shape of (70085,), representing the labels corresponding
to each sample in X2. This code is preparing input-output pairs (X2 and y2) for a machine learning
model, where X2 is a window of historical data, and y2 is the label to predict.
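The shapes follow directly from the window construction; a quick check (hypothetical assertions, using the names above):

assert X2.shape == (len(temp_df) - 6, 6, 5)  # one window per row, minus the final 6 rows
assert y2.shape == (len(temp_df) - 6,)       # 70091 - 6 = 70085 samples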
[27]: # Split the data into training, validation and testing sets
X2_train, y2_train = X2[:60000], y2[:60000]
X2_val, y2_val = X2[60000:65000], y2[60000:65000]
X2_test, y2_test = X2[65000:], y2[65000:]
X2_train.shape, y2_train.shape, X2_val.shape, y2_val.shape, X2_test.shape, y2_test.shape
[27]: ((60000, 6, 5), (60000,), (5000, 6, 5), (5000,), (5085, 6, 5), (5085,))
[28]: # Standardise
temp_training_mean = np.mean(X2_train[:, :, 0])
temp_training_std = np.std(X2_train[:, :, 0])

def preprocess(X):
    X[:, :, 0] = (X[:, :, 0] - temp_training_mean) / temp_training_std
    return X
This code standardizes the first column (temperature) of the input features using the mean and standard deviation calculated from the training data; the preprocess function performs this standardization and is applied to the input features before they are fed to the model.
temp_training_mean: the mean of the first column of the training data (X2_train[:, :, 0]). temp_training_std: the standard deviation of the same column. preprocess(X): standardizes the first column of the input features by subtracting the mean and dividing by the standard deviation. This standardization ensures the input data has a consistent scale, which can be important for gradient-based learning.
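Only the temperature column is standardized; the sin/cos features are already bounded in [-1, 1]. A quick hypothetical check that the transform behaves as expected (the .copy() avoids mutating X2_train before the real pass below):

# After preprocess, the temperature channel should have roughly zero mean and unit std
Xs = preprocess(X2_train.copy())
print(np.mean(Xs[:, :, 0]), np.std(Xs[:, :, 0]))  # ~0.0, ~1.0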
[29]: # The result of this code is the standardized input features for each dataset,
# which are then ready to be used in a machine learning model.
preprocess(X2_train)  # Standardize the first column of the input features for each split
preprocess(X2_val)
preprocess(X2_test)
# Build the multivariate-input LSTM model
model2 = Sequential()
model2.add(InputLayer((6, 5)))  # Add an input layer with the shape (6, 5)
model2.add(LSTM(64))  # Add an LSTM layer with 64 units
model2.add(Dense(8, activation='relu'))  # Add a dense layer with 8 units and ReLU activation
model2.add(Dense(1, activation='linear'))  # Add a linear output layer with 1 unit
model2.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_1 (LSTM) (None, 64) 17920
=================================================================
Total params: 18449 (72.07 KB)
Trainable params: 18449 (72.07 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
[32]: # Train the LSTM model on the provided training data (X2_train, y2_train) for 10 epochs
cp2 = ModelCheckpoint('model2/', save_best_only=True)
model2.compile(loss=MeanSquaredError(), optimizer=Adam(learning_rate=0.0001), metrics=[RootMeanSquaredError()])
model2.fit(X2_train, y2_train, validation_data=(X2_val, y2_val), epochs=10, callbacks=[cp2])
Epoch 1/10
1875/1875 [==============================] - 14s 6ms/step - loss: 39.4553 - root_mean_squared_error: 6.2813 - val_loss: 4.0008 - val_root_mean_squared_error: 2.0002
Epoch 2/10
1875/1875 [==============================] - 11s 6ms/step - loss: 4.9160 - root_mean_squared_error: 2.2172 - val_loss: 1.7913 - val_root_mean_squared_error: 1.3384
Epoch 3/10
1875/1875 [==============================] - 13s 7ms/step - loss: 1.9609 - root_mean_squared_error: 1.4003 - val_loss: 0.9441 - val_root_mean_squared_error: 0.9716
Epoch 4/10
1875/1875 [==============================] - 13s 7ms/step - loss: 1.0670 - root_mean_squared_error: 1.0329 - val_loss: 0.7166 - val_root_mean_squared_error: 0.8465
Epoch 5/10
1875/1875 [==============================] - 11s 6ms/step - loss: 0.8240 - root_mean_squared_error: 0.9078 - val_loss: 0.6084 - val_root_mean_squared_error: 0.7800
Epoch 6/10
1875/1875 [==============================] - 12s 6ms/step - loss: 0.6799 - root_mean_squared_error: 0.8245 - val_loss: 0.4883 - val_root_mean_squared_error: 0.6988
Epoch 7/10
1875/1875 [==============================] - 14s 7ms/step - loss: 0.5938 - root_mean_squared_error: 0.7706 - val_loss: 0.4477 - val_root_mean_squared_error: 0.6691
Epoch 8/10
1875/1875 [==============================] - 12s 6ms/step - loss: 0.5529 - root_mean_squared_error: 0.7436 - val_loss: 0.4291 - val_root_mean_squared_error: 0.6550
Epoch 9/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.5382 - root_mean_squared_error: 0.7336 - val_loss: 0.4319 - val_root_mean_squared_error: 0.6572
Epoch 10/10
1875/1875 [==============================] - 14s 7ms/step - loss: 0.5314 - root_mean_squared_error: 0.7290 - val_loss: 0.4148 - val_root_mean_squared_error: 0.6441
The same pipeline is then repeated for a multivariate model: pressure ('p (mbar)') is kept alongside temperature and the four periodic signals, giving 6 input variables, a window of 7 hours, and a two-column label (pressure and temperature). The data is split 60,000 / 5,000 / remainder as before; a sketch of the windowing follows.
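The windowing function for this step did not survive the export; below is a minimal sketch consistent with the shapes shown next (the function name df_to_X_y3 and the frame name p_temp_df are assumptions):

def df_to_X_y3(df, window_size=7):
    # Like df_to_X_y2, but the label keeps the first two columns
    # (pressure and temperature) of the row that follows the window
    df_as_np = df.to_numpy()
    X, y = [], []
    for i in range(len(df_as_np) - window_size):
        X.append([r for r in df_as_np[i:i + window_size]])
        y.append([df_as_np[i + window_size][0], df_as_np[i + window_size][1]])
    return np.array(X), np.array(y)

X3, y3 = df_to_X_y3(p_temp_df, 7)  # p_temp_df: pressure + temperature + sin/cos columns (assumed name)
X3_train, y3_train = X3[:60000], y3[:60000]
X3_val, y3_val = X3[60000:65000], y3[60000:65000]
X3_test, y3_test = X3[65000:], y3[65000:]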
In the shapes below, 7 is the window length in hours, 6 is the number of input variables, and 2 is the number of target variables (pressure and temperature).
[37]: ((60000, 7, 6), (60000, 2), (5000, 7, 6), (5000, 2), (5084, 7, 6), (5084, 2))
Standardizing
[38]: # Calculate the mean and standard deviation for pressure (column 0) and temperature (column 1) separately in the training data
p_training_mean3 = np.mean(X3_train[:, :, 0])
p_training_std3 = np.std(X3_train[:, :, 0])
temp_training_mean3 = np.mean(X3_train[:, :, 1])
temp_training_std3 = np.std(X3_train[:, :, 1])

def preprocess3(X):
    # Standardize the pressure and temperature values in the input data using the calculated mean and standard deviation
    X[:, :, 0] = (X[:, :, 0] - p_training_mean3) / p_training_std3
    X[:, :, 1] = (X[:, :, 1] - temp_training_mean3) / temp_training_std3

def preprocess_output3(y):
    # Standardize the output labels using the same statistics (labels keep the same column order: pressure, then temperature)
    y[:, 0] = (y[:, 0] - p_training_mean3) / p_training_std3
    y[:, 1] = (y[:, 1] - temp_training_mean3) / temp_training_std3
    return y
Applying the preprocess functions produces standardized input features and labels for each dataset.
[39]: preprocess3(X3_train)
preprocess3(X3_val)
preprocess3(X3_test)
[40]: preprocess_output3(y3_train)
preprocess_output3(y3_val)
preprocess_output3(y3_test)
# Build the multivariate-input, multi-output LSTM model
model3 = Sequential()
model3.add(InputLayer((7, 6)))
model3.add(LSTM(64))
model3.add(Dense(8, activation='relu'))
model3.add(Dense(2, activation='linear'))
model3.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_2 (LSTM) (None, 64) 18176
=================================================================
Total params: 18714 (73.10 KB)
Trainable params: 18714 (73.10 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
# Compile the model using Mean Squared Error as the loss function and the Adam optimizer with a learning rate of 0.0001
cp3 = ModelCheckpoint('model3/', save_best_only=True)
model3.compile(loss=MeanSquaredError(), optimizer=Adam(learning_rate=0.0001), metrics=[RootMeanSquaredError()])
[43]: # Fit the model to the training data for 10 epochs, validating on the validation data and using the ModelCheckpoint callback
model3.fit(X3_train, y3_train, validation_data=(X3_val, y3_val), epochs=10, callbacks=[cp3])
Epoch 1/10
1875/1875 [==============================] - 33s 16ms/step - loss: 0.1558 - root_mean_squared_error: 0.3948 - val_loss: 0.0310 - val_root_mean_squared_error: 0.1760
Epoch 2/10
1875/1875 [==============================] - 22s 12ms/step - loss: 0.0234 - root_mean_squared_error: 0.1528 - val_loss: 0.0154 - val_root_mean_squared_error: 0.1242
Epoch 3/10
1875/1875 [==============================] - 18s 10ms/step - loss: 0.0122 - root_mean_squared_error: 0.1105 - val_loss: 0.0083 - val_root_mean_squared_error: 0.0910
Epoch 4/10
1875/1875 [==============================] - 20s 11ms/step - loss: 0.0081 - root_mean_squared_error: 0.0898 - val_loss: 0.0058 - val_root_mean_squared_error: 0.0761
Epoch 5/10
1875/1875 [==============================] - 18s 10ms/step - loss: 0.0065 - root_mean_squared_error: 0.0808 - val_loss: 0.0050 - val_root_mean_squared_error: 0.0704
Epoch 6/10
1875/1875 [==============================] - 16s 8ms/step - loss: 0.0058 - root_mean_squared_error: 0.0761 - val_loss: 0.0045 - val_root_mean_squared_error: 0.0670
Epoch 7/10
1875/1875 [==============================] - 13s 7ms/step - loss: 0.0054 - root_mean_squared_error: 0.0734 - val_loss: 0.0042 - val_root_mean_squared_error: 0.0645
Epoch 8/10
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0052 - root_mean_squared_error: 0.0719 - val_loss: 0.0038 - val_root_mean_squared_error: 0.0619
Epoch 9/10
1875/1875 [==============================] - 14s 7ms/step - loss: 0.0050 - root_mean_squared_error: 0.0710 - val_loss: 0.0038 - val_root_mean_squared_error: 0.0615
Epoch 10/10
1875/1875 [==============================] - 13s 7ms/step - loss: 0.0050 - root_mean_squared_error: 0.0705 - val_loss: 0.0038 - val_root_mean_squared_error: 0.0620
# Helper that predicts both targets, tabulates them against the actuals, and plots them
def plot_predictions2(model, X, y, start=0, end=100):
    predictions = model.predict(X)
    # Column 0 is pressure and column 1 is temperature, matching preprocess3
    p_preds, temp_preds = predictions[:, 0], predictions[:, 1]
    p_actuals, temp_actuals = y[:, 0], y[:, 1]
    # Creating a DataFrame to store and display the predictions and actual values
    df = pd.DataFrame(data={'Temperature Predictions': temp_preds,
                            'Temperature Actuals': temp_actuals,
                            'Pressure Predictions': p_preds,
                            'Pressure Actuals': p_actuals})
    # Plotting the temperature and pressure predictions and actual values within the specified range
    plt.plot(df['Temperature Predictions'][start:end], label='Temperature Predictions')
    plt.plot(df['Temperature Actuals'][start:end], label='Temperature Actuals')
    plt.plot(df['Pressure Predictions'][start:end], label='Pressure Predictions')
    plt.plot(df['Pressure Actuals'][start:end], label='Pressure Actuals')
    plt.legend()
    return df[start:end]
[45]: # Calling the plot_predictions2 function with the trained model and test data
plot_predictions2(model3, X3_test, y3_test)
[45]:     Temperature Predictions  Temperature Actuals  Pressure Predictions  Pressure Actuals
0                        0.394915             0.412451             -0.770675         -0.793439
1                        0.351745             0.353683             -0.761016         -0.763123
2                        0.315301             0.323123             -0.727488         -0.721893
3                        0.319730             0.250251             -0.675293         -0.652773
4                        0.278681             0.254952             -0.587432         -0.652773
..                            ...                  ...                   ...               ...
95                       0.685219             0.716869             -0.397875         -0.416310
96                       0.648846             0.687485             -0.402577         -0.399333
97                       0.636048             0.663978             -0.388477         -0.399333
98                       0.638984             0.641646             -0.380351         -0.400546
99                       0.651502             0.653400             -0.371565         -0.369018
Post-processing
[46]: # Post-process predictions by rescaling the standardized values back to their
# original units, using the training mean and standard deviation
def postprocess_temp(arr):
    return (arr * temp_training_std3) + temp_training_mean3

def postprocess_p(arr):
    return (arr * p_training_std3) + p_training_mean3
These rescaling functions are applied to the predictions and actuals before the results DataFrame is built, so the values below are back in degrees Celsius and millibars.
[48]: # Running the plot_predictions2 function with model3, X3_test, and y3_test
post_processed_df = plot_predictions2(model3, X3_test, y3_test)
post_processed_df
Pressure Actuals
0 982.43
1 982.68
2 983.02
3 983.59
4 983.59
.. …
95 985.54
96 985.68
97 985.68
98 985.67
99 985.93
[49]: # Setting the start and end indices for the subset of data to be visualized
start, end = 0, 100
plt.plot(post_processed_df['Temperature Predictions'][start:end], label='Temperature Predictions')
plt.plot(post_processed_df['Temperature Actuals'][start:end], label='Temperature Actuals')
plt.legend()
[50]: # Plotting the subset of Pressure Predictions and Actuals from the post_processed_df DataFrame
plt.plot(post_processed_df['Pressure Predictions'][start:end])
plt.plot(post_processed_df['Pressure Actuals'][start:end])