Neural Networks

• Basics of Neural Networks
• What is a Neural Network?
• Neural Network Classifier
• Data Normalization
• Neuron and bias of a neuron
• Single Layer Feed Forward
• Limitation
• Multi Layer Feed Forward
• Back propagation
• Applications I-II
• Summary
What is a Neural Network?
• Biologically motivated approach to machine learning.
Data Normalization
Min-max normalization maps a value $v$ of attribute $A$ to $v'$ in the new range $[new\_min_A, new\_max_A]$:
$$v' = \frac{v - min_A}{max_A - min_A}\,(new\_max_A - new\_min_A) + new\_min_A$$
Say $max_A$ was 100 and $min_A$ was 20 (the maximum and minimum values for the attribute). Then, mapping onto $[0, 1]$, a value $v = 60$ becomes $v' = (60 - 20)/(100 - 20) = 0.5$.
Normalization by decimal scaling normalizes by moving the decimal point of values of attribute A:
$$v' = \frac{v}{10^j}$$
where j is the smallest integer such that max|v'| < 1.
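Both normalizations are easy to express in code. A minimal sketch (the function names min_max_normalize and decimal_scale are illustrative, not from the slides):

```python
import numpy as np

def min_max_normalize(v, new_min=0.0, new_max=1.0):
    """Map values of an attribute linearly onto [new_min, new_max]."""
    v = np.asarray(v, dtype=float)
    min_a, max_a = v.min(), v.max()
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

def decimal_scale(v):
    """Divide by the smallest power of 10 that makes max |v'| < 1."""
    v = np.asarray(v, dtype=float)
    j = int(np.floor(np.log10(np.abs(v).max()))) + 1
    return v / (10 ** j)

# Using the slide's example: min_A = 20, max_A = 100
print(min_max_normalize([20, 60, 100]))  # -> [0.  0.5 1. ]
print(decimal_scale([20, 60, 100]))      # -> [0.02 0.06 0.1 ]
```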
One Neuron as a Network
Example: the values $x_1$ and $x_2$, multiplied by the weight values $w_1$ and $w_2$, are input to the neuron.
• The neuron receives the weighted sum as input and calculates the output as a function of this input (see the sketch below).
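The exact output function on the original slide did not survive extraction; here is a minimal sketch of a two-input neuron, assuming a sigmoid activation (consistent with the activation function used later in these slides):

```python
import math

def sigmoid(v):
    """Logistic (squashing) function, as defined later in the slides."""
    return 1.0 / (1.0 + math.exp(-v))

def neuron(x1, x2, w1, w2, bias=0.0):
    """One neuron: weighted sum of the inputs, then the activation."""
    v = w1 * x1 + w2 * x2 + bias
    return sigmoid(v)

print(neuron(0.5, 0.8, w1=0.4, w2=-0.7))  # output in (0, 1)
```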
Bias of a Neuron
The bias shifts the decision boundary away from the origin: instead of $x_1 - x_2 = 0$, the neuron can realize boundaries such as $x_1 - x_2 = -1$ or $x_1 - x_2 = 1$.
[Figure: the lines $x_1 - x_2 = -1$, $x_1 - x_2 = 0$, and $x_1 - x_2 = 1$ in the $(x_1, x_2)$ plane.]
Bias as extra input
[Figure: a neuron with input attribute values $x_1, \dots, x_m$ and weights $w_1, \dots, w_m$, plus an extra fixed input $x_0 = +1$ with weight $w_0$; a summing function feeds the activation function $\varphi(\cdot)$, producing the output class $y$.]
The bias can be treated as just another weight by adding the constant input $x_0 = +1$:
$$v = \sum_{j=0}^{m} w_j x_j, \qquad w_0 = b$$
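A small sketch of this trick (the variable names are illustrative): prepend a constant 1 to the input vector and fold the bias into the weight vector.

```python
import numpy as np

def induced_field(x, w, b):
    """v = sum_j w_j x_j with the bias folded in as w_0 = b, x_0 = +1."""
    x_aug = np.concatenate(([1.0], x))   # x_0 = +1
    w_aug = np.concatenate(([b], w))     # w_0 = b
    return float(w_aug @ x_aug)

x = np.array([0.5, 0.8])
print(induced_field(x, w=np.array([0.4, -0.7]), b=0.1))  # 0.1 + 0.2 - 0.56 = -0.26
```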
Neuron with Activation
The neuron is the basic information processing unit of a NN. It consists of:
1. A set of links, with associated weights $w_1, w_2, \dots, w_m$
2. An adder (summing function) that computes the weighted sum of the inputs: $v = \sum_{j=1}^{m} w_j x_j$
3. An activation function $\varphi$ (squashing function) that limits the amplitude of the neuron's output: $y = \varphi(v + b)$
Linearly separable:
[Figure: $x \wedge y$ and $x \vee y$ — a single line separates the two classes.]
Linearly inseparable:
[Figure: $x \oplus y$ (XOR) — no single line separates the two classes.]
Solution? A network with a hidden layer, as sketched below.
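XOR becomes separable once a hidden layer re-maps the inputs. A minimal sketch with hand-picked threshold units (the weights here are illustrative assumptions, not from the slides):

```python
def step(v):
    """Threshold activation: fire iff v > 0."""
    return 1 if v > 0 else 0

def xor_net(x, y):
    """Hidden unit h1 computes OR, h2 computes AND; the output combines them."""
    h1 = step(x + y - 0.5)          # x OR y
    h2 = step(x + y - 1.5)          # x AND y
    return step(h1 - h2 - 0.5)      # h1 AND NOT h2  ->  XOR

for x in (0, 1):
    for y in (0, 1):
        print(x, y, '->', xor_net(x, y))
# prints 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```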
A Multilayer Feed-Forward Neural Network
[Figure: the input record $x_i$ enters at the input nodes; weights $w_{ij}$ connect the input nodes to the hidden nodes, whose outputs are $O_j$; weights $w_{jk}$ connect the hidden nodes to the output nodes, whose outputs $O_k$ give the output class. The network is fully connected.]
Neural Network Learning
The outputs are $O_k$, $k = 1, 2, \dots, \#classes$.
• The network is fully connected, i.e. each unit provides input to each unit in the next forward layer.
Classification by Back propagation
For a unit j in a hidden or output layer, the net input $I_j$ is the weighted sum of its inputs plus its bias:
$$I_j = \sum_i w_{ij}\, O_i + \theta_j$$
where $w_{ij}$ is the weight of the connection from unit i in the previous layer to unit j, $O_i$ is the output of unit i from the previous layer, and $\theta_j$ is the bias of unit j.
Each unit in the hidden and output layers takes its net input
and then applies an activation function. The function
symbolizes the activation of the neuron represented by the
unit. It is also called a logistic, sigmoid, or squashing
function.
Given the net input $I_j$ to unit j, the output $O_j$ of unit j is computed as
$$O_j = f(I_j) = \frac{1}{1 + e^{-I_j}}$$
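The net input and activation can be computed for a whole layer at once. A minimal sketch (the array shapes are illustrative):

```python
import numpy as np

def layer_forward(O_prev, W, theta):
    """I_j = sum_i w_ij * O_i + theta_j, then O_j = sigmoid(I_j), per layer.

    O_prev: outputs of the previous layer, shape (n_prev,)
    W:      weights, W[i, j] connects unit i to unit j, shape (n_prev, n)
    theta:  biases of this layer, shape (n,)
    """
    I = O_prev @ W + theta
    return 1.0 / (1.0 + np.exp(-I))

O_hidden = layer_forward(np.array([1.0, 0.0, 1.0]),
                         W=np.random.uniform(-1, 1, size=(3, 2)),
                         theta=np.random.uniform(-1, 1, size=2))
print(O_hidden)  # two hidden-unit outputs, each in (0, 1)
```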
Back propagate the error
When the output layer is reached, the error is computed and propagated backwards.
• For a unit k in the output layer, the error is computed as
$$Err_k = O_k (1 - O_k)(T_k - O_k)$$
where $O_k$ is the actual output of unit k, computed by the activation function $O_k = \frac{1}{1 + e^{-I_k}}$, and $T_k$ is the true (target) output value.
• For a unit j in a hidden layer, the error is
$$Err_j = O_j (1 - O_j) \sum_k Err_k\, w_{jk}$$
where $w_{jk}$ is the weight of the connection from unit j to unit k in the next layer, and $Err_k$ is the error of unit k.
Update weights and biases
Weights and biases are updated using the learning rate $(l)$:
$$\Delta w_{ij} = (l)\, Err_j\, O_i, \qquad w_{ij} = w_{ij} + \Delta w_{ij}$$
$$\Delta \theta_j = (l)\, Err_j, \qquad \theta_j = \theta_j + \Delta \theta_j$$
We are updating the weights and biases after the presentation of each sample. This is called case updating.
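Putting the forward pass, error back-propagation, and case updating together: a minimal sketch of one training step for a single-hidden-layer network (names such as train_step are illustrative):

```python
import numpy as np

def sigmoid(I):
    return 1.0 / (1.0 + np.exp(-I))

def train_step(x, t, W1, th1, W2, th2, l=0.9):
    """One case update: forward pass, back-propagate errors, adjust in place."""
    # Forward pass
    O_h = sigmoid(x @ W1 + th1)                      # hidden outputs O_j
    O_o = sigmoid(O_h @ W2 + th2)                    # output-layer outputs O_k
    # Errors (computed with the old weights, before any update)
    err_o = O_o * (1 - O_o) * (t - O_o)              # Err_k
    err_h = O_h * (1 - O_h) * (W2 @ err_o)           # Err_j
    # Case updating with learning rate (l)
    W2 += l * np.outer(O_h, err_o); th2 += l * err_o
    W1 += l * np.outer(x, err_h);   th1 += l * err_h
    return O_o

rng = np.random.default_rng(0)
W1, th1 = rng.uniform(-1, 1, (3, 2)), rng.uniform(-1, 1, 2)
W2, th2 = rng.uniform(-1, 1, (2, 1)), rng.uniform(-1, 1, 1)
print(train_step(np.array([1.0, 0.0, 1.0]), np.array([1.0]), W1, th1, W2, th2))
```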
[Figure: back propagation at a glance. The input vector $x_i$ feeds the hidden nodes, each computing $O_j = \frac{1}{1 + e^{-I_j}}$; at the output nodes the error is $Err_k = O_k(1 - O_k)(T_k - O_k)$, and it propagates back to the hidden nodes as $Err_j = O_j(1 - O_j)\sum_k Err_k\, w_{jk}$, yielding the output vector.]
Example of Back propagation
Input units = 3, hidden neurons = 2, output units = 1.
Initialize weights: random numbers from -1.0 to 1.0; the biases $\theta_4$, $\theta_5$, $\theta_6$ are also random. We assume the target output $T_6 = 1$. After the forward pass, the error of each unit is computed, starting from the output unit:

Unit j | Err_j
6      | 0.474 (1 - 0.474)(1 - 0.474) = 0.1311

… and similarly for the hidden units.
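A sketch reproducing this example end to end. The input sample and initial weights below are illustrative assumptions (the slide's weight table did not survive extraction), chosen so that the output unit's error comes out to the 0.1311 shown above:

```python
import numpy as np

def sigmoid(I):
    return 1.0 / (1.0 + np.exp(-I))

# 3 input units (1-3), 2 hidden units (4-5), 1 output unit (6)
x = np.array([1.0, 0.0, 1.0])                    # assumed input sample
W1 = np.array([[0.2, -0.3],                      # w14, w15
               [0.4,  0.1],                      # w24, w25
               [-0.5, 0.2]])                     # w34, w35
th1 = np.array([-0.4, 0.2])                      # theta4, theta5
W2 = np.array([[-0.3], [-0.2]])                  # w46, w56
th2 = np.array([0.1])                            # theta6
T6 = 1.0                                         # target output

O_h = sigmoid(x @ W1 + th1)                      # O4 ~ 0.332, O5 ~ 0.525
O6 = round(float(sigmoid(O_h @ W2 + th2)[0]), 3) # 0.474 (rounded, as on the slide)
Err6 = O6 * (1 - O6) * (T6 - O6)
print(O6, round(Err6, 4))                        # 0.474 0.1311
```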
Advanced Features of Neural Networks
• Control
• Function approximation
• Associative memory
Modular Neural Network
[Figure: the whole dataset is split into subsets 1, 2, 3, …, n that can fit into memory; each subset i is used to train its own network NN i, and the trained networks are combined into a single neural network model.]
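A minimal sketch of the idea, assuming scikit-learn's MLPClassifier as the per-subset learner and simple probability averaging as the combination step (the combination rule is an illustrative choice, not specified on the slide):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_modular(X, y, n_subsets=4):
    """Split the whole dataset into subsets and train one NN per subset."""
    nets = []
    for X_sub, y_sub in zip(np.array_split(X, n_subsets),
                            np.array_split(y, n_subsets)):
        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500)
        nets.append(net.fit(X_sub, y_sub))
    return nets

def predict_modular(nets, X):
    """Combine the subnetworks by averaging their class probabilities.

    Assumes every subset contains examples of every class, so the
    predict_proba columns of all subnetworks line up.
    """
    probs = np.mean([net.predict_proba(X) for net in nets], axis=0)
    return probs.argmax(axis=1)
```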
[Figure: the error as a function of the N-dimensional weight vector W, showing local minima and the global minimum; gradient descent can get trapped in a local minimum.]
Faster Convergence
• Adding a momentum term to the weight update speeds up convergence and helps the search escape local minima (see the update rule below).
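A common form of the momentum update, given here as an assumption about what the slide intended ($\alpha$ is the momentum coefficient, typically $0 < \alpha < 1$; it adds a fraction of the previous weight change to the current one):
$$\Delta w_{ij}(t) = (l)\, Err_j\, O_i + \alpha\, \Delta w_{ij}(t-1)$$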
Applications-I
• Handwritten Digit Recognition
• Face recognition
• Process identification
• Process control
THANKS