Stock Prediction Using Recurrent Neural Network (RNN)
RECURRENT NEURAL NETWORK (RNN)
WHAT IS A NEURAL NETWORK?
• A system of hardware or software that mimics the way neurons in the human brain operate.
• The basic concept of how an artificial neural network works is this: the network is first trained on, or fed, a certain amount of data; based on that data and the inputs we set, it learns to produce the output we desire.
• Neural networks are often described in terms of their depth: the number of layers between the input and output layers, known as hidden layers.
FEED FORWARD NEURAL NETWORK (FNN)
• The most basic learning model is the feed-forward neural network (FNN). This type of artificial neural network only passes information straight from the input layer to the output layer.
• It may or may not have hidden layers, which makes its behaviour comparatively easy to interpret.
• Mathematically speaking, the network takes some input, performs weighted calculations, and generates output by passing the result through the hidden layers.
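To make this concrete, here is a minimal sketch of a single forward pass with one hidden layer, written in NumPy. The layer sizes, random weights, and sigmoid activation are illustrative assumptions, not values from these slides.

import numpy as np

def sigmoid(z):
    # Squashes values into (0, 1); a common activation choice
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 3 inputs, 4 hidden units, 1 output
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))   # hidden -> output weights
b2 = np.zeros(1)

x = np.array([0.5, -0.2, 0.1])   # some input vector
h = sigmoid(W1 @ x + b1)         # hidden layer: weighted sum + activation
y = sigmoid(W2 @ h + b2)         # output: information flows strictly forward
print(y)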
RECURRENT NEURAL NETWORK
• In general, an RNN works by saving the outputs of its processing nodes and feeding the results back into the model.
• In other words, the model learns by cycling information through a loop.
• Therefore, an RNN takes as input not just the current example it sees, but also what it has perceived previously in time.
RECURRENT NEURAL NETWORK
• To simplify, an RNN takes the concept of a neural network and adds some state to it.
• For example, an input Xt comes into neuron A and produces output ht, but at the same time a process stores some state and feeds it back into the neuron.
• This allows the neuron to use the stored state over time to drive some of the weight calculations and give context to future inputs.
• Therefore, input X1 will use some of the information from X0, and this process repeats at every following time step (see the sketch below).
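A minimal sketch of this state feedback in NumPy follows; the tanh activation and the dimensions are illustrative assumptions. Note how the state h computed for X0 feeds into the computation for X1, and so on.

import numpy as np

rng = np.random.default_rng(1)
W_x = rng.normal(size=(4, 3))  # input -> state weights
W_h = rng.normal(size=(4, 4))  # state -> state weights (the feedback loop)
b = np.zeros(4)

h = np.zeros(4)                # initial state before any input arrives
inputs = [rng.normal(size=3) for _ in range(3)]  # X0, X1, X2

for x_t in inputs:
    # The new state mixes the current input with the stored previous state,
    # so information from X0 influences the processing of X1, and so on.
    h = np.tanh(W_x @ x_t + W_h @ h + b)
    print(h)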
EXAMPLE
• Both of these neural networks operate on a sequence that is fed into the model.
• Memory is important when it comes to sequences, but an FNN only receives a fixed-size vector as input.
• For example, referring to the figure below, say we want to use an FNN and an RNN to predict the next frame of a video.
• An FNN cannot predict the next step in a video, because that requires a sequence of image vectors as input, not just one, since the probability of a certain event happening depends on every frame that came before it.
• Therefore, to predict where the motorcycle jumper will land, the neural network needs the context of the sequence of motorcycle-jumper image vectors from before he jumps off in order to make an accurate prediction.
RECURRENT NEURAL NETWORK
• An RNN accepts sequences of vectors as input so that information can persist.
• Recall that in an FNN the hidden layer is computed from the input data alone.
• In an RNN, the hidden layer is a combination of the input data at the current time step and the hidden layer from the previous time step.
RECURRENT NEURAL NETWORK
• The hidden layer is constantly changing as it receives more inputs, and the only way to reach a particular hidden state is with the correct sequence of inputs.
• This is how 'memory' is incorporated into an RNN, and we can model the process mathematically.
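A common way to write this update (assuming a tanh activation; W_x, W_h, and b denote the input weights, recurrent weights, and bias, which are not named in these slides):

h_t = tanh(W_x · x_t + W_h · h_{t-1} + b)

The previous hidden state h_{t-1} appearing inside the update is exactly the 'memory' described above: reaching a given h_t requires the whole preceding sequence of inputs.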
RECURRENT NEURAL NETWORK HIDDEN STATE
• Because the feedback loop recurs at every time step in the series, each hidden state carries traces not only of the previous hidden state but of all the hidden states that preceded it.
• That is why the network is called recurrent.
• From a different perspective, we can think of an RNN as copies of the same network, each passing a message to the next.
VANISHING GRADIENT PROBLEM
• The advantage of an RNN is that the model can connect previous data with the present task.
• The disadvantage of the RNN model is that its memory of early inputs fades as time passes, because the error signal from later time steps does not make it far enough back in time to influence the network at earlier time steps during backpropagation through time.
• Yoshua Bengio named this the vanishing gradient problem in his paper "Learning Long-Term Dependencies with Gradient Descent is Difficult".
• A well-known solution to this problem is a modification of the RNN called Long Short-Term Memory (LSTM); a small numeric illustration of the problem follows.
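To illustrate the vanishing gradient effect numerically: backpropagation through time multiplies in roughly one factor per time step, and if that factor is below 1 the error signal decays exponentially. The 0.9 factor below is an arbitrary assumption for demonstration.

# One multiplicative factor per backward step through time;
# a factor below 1 shrinks the error signal exponentially.
factor = 0.9
for steps in (10, 50, 100):
    print(steps, factor ** steps)
# 10 steps -> ~0.35, 50 -> ~0.0052, 100 -> ~0.000027:
# almost no signal reaches the earliest time steps.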
LONG SHORT TERM MEMORY (LSTM)
• Normally, the neurons of a neural network are units that apply an activation function, such as a sigmoid, to a linear combination of their inputs. In an LSTM RNN, we instead replace each neuron with a memory cell.
• Each cell has an input gate, an output gate, and an internal state that feeds into itself across time steps with a constant weight of one.
• This eliminates the vanishing gradient problem, because any gradient that flows into the self-recurrent unit during backpropagation is preserved indefinitely: an error multiplied by 1 keeps its value.
• Each gate has an activation function, such as a sigmoid.
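Below is a minimal NumPy sketch of one step of such a memory cell, following the description above: an input gate, an output gate, and an internal state carried forward with a constant weight of one. The forget gate of later LSTM variants is omitted, and the dimensions and random weights are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n_in, n_cell = 3, 4
W_i, W_o, W_g = (rng.normal(size=(n_cell, n_in)) for _ in range(3))

def lstm_step(x_t, c_prev):
    i = sigmoid(W_i @ x_t)    # input gate: how much new activation enters the cell
    o = sigmoid(W_o @ x_t)    # output gate: how much activation leaves the cell
    g = np.tanh(W_g @ x_t)    # candidate activation for this step
    c = 1.0 * c_prev + i * g  # internal state recurs with constant weight 1
    h = o * np.tanh(c)        # gated output of the cell
    return h, c

c = np.zeros(n_cell)
for x_t in [rng.normal(size=n_in) for _ in range(3)]:
    h, c = lstm_step(x_t, c)
print(h, c)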
LONG SHORT TERM MEMORY (LSTM)
• During the forward pass, the input gate learns when to let activation into the cell, and the output gate learns when to let activation out of it.
• During the backward pass, the output gate learns when to let error flow into the cell, and the input gate learns when to let it flow out of the cell into the rest of the network.
• By doing this, the resulting hidden state lets the network remember long-term dependencies.
SUMMARY
To sum everything up:
• RNNs can model sequential data because the hidden state is affected by the current input and the previous hidden state.
• A solution to the vanishing gradient problem is to use LSTM cells to remember long-term dependencies.
• Therefore, based on the information gathered above, we can use LSTM networks to make predictions on time-series data.
APPLICATION
• One application of an RNN with LSTM cells is predicting stock market prices using a deep learning library such as Keras or TensorFlow.
• It works by providing the stock's price history as input in the form of a time series: we take the price of a specific stock over time and use that prior history to predict the stock's next price.
• The steps for using an RNN to predict a stock price are (a sketch of the windowing step follows the list):
1. Get historic stock price data
2. Extract price to predict
3. Normalize
4. Create Time Series
5. Build model
6. Fit model
7. Test and evaluate
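As noted above, step 4 is the key data-shaping move: turn a single price series into (window, next price) training pairs. A minimal sketch follows; the function name and the 60-day lookback are assumptions for illustration.

import numpy as np

def create_windows(prices, lookback=60):
    """Turn a 1-D price series into (X, y) pairs: each X holds `lookback`
    consecutive prices and y is the single price that follows them."""
    X, y = [], []
    for i in range(len(prices) - lookback):
        X.append(prices[i:i + lookback])
        y.append(prices[i + lookback])
    # Keras LSTMs expect input shaped (samples, time steps, features)
    return np.array(X).reshape(-1, lookback, 1), np.array(y)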
BUILDING RNN FOR PREDICTING STOCK MARKET PRICE
1. Historic stock price data
• The input data is the Microsoft Corporation (MSFT) stock price for the last 10 years, in the form of a CSV file.
• The data was obtained from: https://www.nasdaq.com/symbol/msft/historical
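A hedged sketch of step 1 with pandas; the file name msft_10yr.csv and the close column name are assumptions, since the actual export's naming may differ.

import pandas as pd

# Hypothetical file name for the CSV downloaded from the Nasdaq page above
df = pd.read_csv("msft_10yr.csv")
# If the export lists the newest rows first, reverse into chronological order
df = df.iloc[::-1].reset_index(drop=True)
# Step 2: extract the price column we want to predict (column name assumed)
prices = df["close"].astype(float).values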
BUILDING RNN FOR PREDICTING STOCK MARKET PRICE
2. Preparing the environment for training and testing the data with the RNN.
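A minimal sketch of that environment, covering steps 3 through 7 with Keras (via TensorFlow), as the slides suggest. It reuses the prices array and the create_windows helper sketched earlier; the 50-unit layer, 20 epochs, and 80/20 chronological split are illustrative assumptions, not tuned values.

from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Step 3: normalize prices into [0, 1] so the LSTM trains stably
scaler = MinMaxScaler()
scaled = scaler.fit_transform(prices.reshape(-1, 1)).ravel()

# Step 4: build windows with the helper sketched earlier
X, y = create_windows(scaled, lookback=60)
split = int(0.8 * len(X))  # simple chronological train/test split
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Steps 5-6: build and fit a small LSTM regressor
model = Sequential([
    LSTM(50, input_shape=(60, 1)),  # 50 units is an arbitrary choice
    Dense(1),                       # predict the next (scaled) price
])
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(X_train, y_train, epochs=20, batch_size=32)

# Step 7: evaluate on held-out data and undo the scaling for real prices
print("test MSE (scaled):", model.evaluate(X_test, y_test))
predictions = scaler.inverse_transform(model.predict(X_test))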