Highly Interconnected Processing Elements: 1. Neural Network Architectures
[Figure: an example directed graph with vertices V = { v1, v2, v3, v4, v5 } and edges E = { e1, e2, e3, e4, e5 }.]
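The vertex and edge sets can be written down directly as a small data structure. The sketch below is only illustrative: the edge endpoints are assumptions, since the figure's exact connectivity is not recoverable from the text.

```python
# Minimal sketch: a network as a directed graph of vertices and edges.
# The edge endpoints below are assumed for illustration only.
vertices = {"v1", "v2", "v3", "v4", "v5"}
edges = {
    "e1": ("v1", "v2"),   # each edge maps to an (origin, destination) pair
    "e2": ("v1", "v3"),
    "e3": ("v2", "v4"),
    "e4": ("v3", "v5"),
    "e5": ("v4", "v5"),
}

# Successors of a vertex, i.e. the nodes it sends signals to.
def successors(v):
    return [dst for (src, dst) in edges.values() if src == v]

print(successors("v1"))  # -> ['v2', 'v3'] with the assumed connectivity
```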
A weighted sum of the inputs is calculated in each neuron node, and if the value is above some threshold (typically 0) the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1).
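A minimal sketch of this firing rule is given below; the particular weights, input pattern, and threshold are assumed for illustration.

```python
import numpy as np

def bipolar_neuron(x, w, threshold=0.0):
    """Fire (+1) if the weighted sum exceeds the threshold, else -1."""
    net = np.dot(w, x)              # weighted sum of inputs
    return 1 if net > threshold else -1

# Example with made-up weights and a bipolar input pattern.
x = np.array([1, -1, 1])
w = np.array([0.5, 0.2, -0.4])
print(bipolar_neuron(x, w))         # net = 0.5 - 0.2 - 0.4 = -0.1 -> -1
```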
2. Multi Layer Feed-forward Network
[Figure: a multilayer feed-forward network. Input layer neurons x1 … xℓ are connected to hidden layer neurons y1 … ym through weights vij, and the hidden layer neurons are connected to output layer neurons z1 … zn through weights wjk.]
- Signals flow in one direction only, from the input layer to the output layer.
- The input layer neurons are linked to the hidden layer neurons, and the hidden layer neurons are linked to the output layer neurons.
- A multilayer feed-forward network with ℓ input neurons, m1 neurons in the first hidden layer, m2 neurons in the second hidden layer, and n output neurons in the output layer is written as (ℓ - m1 - m2 - n).
- The Fig. above illustrates a multilayer feed-forward network with a configuration (ℓ - m - n); a forward pass through such a configuration is sketched below.
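A forward pass through an (ℓ - m - n) configuration amounts to two weighted sums, one per layer of links. The sketch below is a minimal illustration; the layer sizes, the tanh activation, and the random weights are assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed configuration (l - m - n) = (3 - 4 - 2) for illustration.
l, m, n = 3, 4, 2
V = rng.standard_normal((l, m))   # weights v_ij: input layer  -> hidden layer
W = rng.standard_normal((m, n))   # weights w_jk: hidden layer -> output layer

def forward(x):
    y = np.tanh(x @ V)            # hidden layer activations y_j
    z = np.tanh(y @ W)            # output layer activations z_k
    return z

x = np.array([1.0, -1.0, 0.5])    # one input pattern
print(forward(x))                  # n output values
```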
3. Recurrent Networks
- A recurrent network contains at least one feedback loop, so the output of a neuron can be fed back as an input to the network (a minimal sketch follows the figure).
Example :
[Figure: a recurrent network with inputs x1, x2, …, xℓ and outputs y1, y2, …, ym, in which the outputs are fed back into the network through feedback links.]
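To make the feedback idea concrete, the sketch below feeds each output back in alongside the next input. It is an assumed minimal example, not the specific network shown in the figure.

```python
import numpy as np

rng = np.random.default_rng(1)

l, m = 2, 3                         # assumed input and output sizes
W_in = rng.standard_normal((l, m))  # input  -> output weights
W_fb = rng.standard_normal((m, m))  # feedback: previous output -> output

def run(inputs):
    y = np.zeros(m)                 # previous output, initially zero
    outputs = []
    for x in inputs:
        y = np.tanh(x @ W_in + y @ W_fb)   # feedback loop: y depends on past y
        outputs.append(y)
    return outputs

sequence = [np.array([1.0, -1.0]), np.array([0.5, 0.5]), np.array([-1.0, 1.0])]
for step, y in enumerate(run(sequence)):
    print(step, y)
```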
• Supervised Learning
- A teacher is present during the learning process and presents the expected output.
- Every input pattern is used to train the network.
- The error between the computed output and the expected output is used to adjust the network parameters, resulting in improved performance.
• Unsupervised Learning
- No teacher is present.
• Reinforced Learning
- A teacher is present but does not present the expected or desired output.
- A reward is given for a correct computed answer and a penalty for a wrong answer.
• Note: Supervised and Unsupervised learning are the most popular forms of learning compared to Reinforced learning.
• Hebbian Learning
Hebb proposed a rule based on correlative weight adjustment.
In this rule, the input-output pattern pairs (Xi, Yi) are associated by the weight matrix W, known as the correlation matrix, computed as
W = Σ_{i=1}^{n} Xi Yi^T
where Yi^T denotes the transpose of the output pattern Yi (see the sketch below).
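The correlation matrix is simply a sum of outer products of the pattern pairs, as the short sketch below shows; the two bipolar pattern pairs are made up for illustration.

```python
import numpy as np

# Assumed bipolar input-output pattern pairs (X_i, Y_i) for illustration.
X = [np.array([1, -1, 1]), np.array([-1, 1, 1])]
Y = [np.array([1, -1]),    np.array([-1, 1])]

# Correlation matrix W = sum_i X_i Y_i^T (outer product of each pair).
W = sum(np.outer(x, y) for x, y in zip(X, Y))
print(W)
```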
The Hebb network was proposed by Donald Hebb in 1949. According to Hebb's rule, the weights increase in proportion to the product of input and output: if two interconnected neurons are active together, the weight between them (the strength of their synaptic connection) is increased. This network is best suited to bipolar data, and the Hebbian learning rule is commonly applied to logic gates, as in the AND-gate sketch below.
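As an example of applying the Hebb rule to a logic gate, the sketch below trains the weights and bias for the AND function with bipolar inputs and targets; the variable names and the check at the end are choices of the sketch, not prescribed by the text.

```python
import numpy as np

# AND gate with bipolar inputs and targets (Hebb nets suit bipolar data).
patterns = [
    (np.array([ 1,  1]),  1),
    (np.array([ 1, -1]), -1),
    (np.array([-1,  1]), -1),
    (np.array([-1, -1]), -1),
]

w = np.zeros(2)   # weights
b = 0.0           # bias

# Hebb rule: weight change is proportional to the product of input and output.
for x, y in patterns:
    w = w + x * y
    b = b + y

print("weights:", w, "bias:", b)   # -> weights: [2. 2.] bias: -2.0

# Check: the trained net reproduces AND with a bipolar threshold at 0.
for x, y in patterns:
    out = 1 if np.dot(w, x) + b > 0 else -1
    print(x, "->", out, "(target", y, ")")
```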
• Gradient Descent Learning
This is based on the minimization of the error E, defined in terms of the weights and the activation function of the network.
- If ΔWij is the weight update of the link connecting the i-th and the j-th neuron of two neighbouring layers, then ΔWij is defined as
ΔWij = -η (∂E / ∂Wij)
where η is the learning rate and ∂E / ∂Wij is the gradient of the error E with respect to the weight Wij; a worked single-weight example is sketched below.
- Note: the Widrow-Hoff Delta rule and the Back-propagation learning rule are examples of Gradient Descent learning.
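A single-weight view of this update is sketched below for a linear unit with a squared error, the setting of the Delta rule; the learning rate, data, and initial weights are assumed for illustration.

```python
import numpy as np

# Assumed toy data: one input pattern x with target t, linear output unit.
x = np.array([1.0, -1.0, 0.5])
t = 0.8
w = np.array([0.1, 0.2, -0.1])     # initial weights (assumed)
eta = 0.1                          # learning rate

for step in range(20):
    out = np.dot(w, x)             # network output
    E = 0.5 * (t - out) ** 2       # error to be minimized
    grad = -(t - out) * x          # dE/dw for the squared error
    w = w - eta * grad             # gradient descent: move against the gradient

print("final output:", np.dot(w, x), "target:", t)
```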