Single Layer Feedforward Networks
• Implement the threshold by a node x0
  – Constant output 1
  – Weight w0 = -threshold
  – A common practice in NN design
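The threshold-as-weight trick above can be sketched as follows. This is a minimal illustration (the function names are hypothetical, not from the slides): instead of comparing the weighted sum against a threshold t, we prepend a constant input x0 = 1 with weight w0 = -t and simply test whether the net input is nonnegative.

```python
# Sketch of the threshold trick: a unit that fires when the weighted
# sum reaches a threshold t is equivalent to a unit with an extra
# constant input x0 = 1, weight w0 = -t, that fires when net >= 0.
def fires_with_threshold(x, w, t):
    return sum(wk * xk for wk, xk in zip(w, x)) >= t

def fires_with_bias(x, w, t):
    x_aug = [1] + list(x)        # constant input x0 = 1
    w_aug = [-t] + list(w)       # weight w0 = -threshold
    return sum(wk * xk for wk, xk in zip(w_aug, x_aug)) >= 0

x, w, t = [1, -1], [2, 1], 1
print(fires_with_threshold(x, w, t), fires_with_bias(x, w, t))  # True True
```

Both formulations give the same decision; the bias form is preferred in practice because the threshold can then be learned like any other weight.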
Perceptrons
• Linear separability
  – A set of (2D) patterns (x1, x2) of two classes is linearly separable if there exists a line on the (x1, x2) plane
    • w0 + w1 x1 + w2 x2 = 0
    • that separates all patterns of one class from those of the other
  – A perceptron can be built with
    • 3 inputs x0 = 1, x1, x2 with weights w0, w1, w2
  – For n-dimensional patterns (x1,…, xn)
    • Hyperplane w0 + w1 x1 + w2 x2 + … + wn xn = 0 divides the space into two regions
  – Can we get the weights from a set of sample patterns?
    • If the problem is linearly separable, then YES (by perceptron learning)
• Examples of linearly separable classes
  - Logical AND function (bipolar patterns)

      x1   x2   output
      -1   -1     -1
      -1    1     -1
       1   -1     -1
       1    1      1

    Weights: w1 = 1, w2 = 1, w0 = -1
    Decision boundary: -1 + x1 + x2 = 0
    x: class I (output = 1), o: class II (output = -1)

  - Logical OR function (bipolar patterns)

      x1   x2   output
      -1   -1     -1
      -1    1      1
       1   -1      1
       1    1      1

    Weights: w1 = 1, w2 = 1, w0 = 1
    Decision boundary: 1 + x1 + x2 = 0
    x: class I (output = 1), o: class II (output = -1)
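The AND and OR weights given above can be checked directly. This is a small sketch (helper name is ours, and we assume the convention sign(0) = +1 for ties, which no pattern here actually hits):

```python
# Verify that the slide's weights implement bipolar AND and OR
# with the sign activation: output = +1 if net >= 0, else -1.
def perceptron_out(w0, w1, w2, x1, x2):
    net = w0 + w1 * x1 + w2 * x2
    return 1 if net >= 0 else -1   # tie convention sign(0) = +1 (assumed)

patterns = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
and_targets = [-1, -1, -1, 1]
or_targets  = [-1,  1,  1, 1]

print([perceptron_out(-1, 1, 1, x1, x2) for x1, x2 in patterns])  # AND outputs
print([perceptron_out( 1, 1, 1, x1, x2) for x1, x2 in patterns])  # OR outputs
```

For AND, only (1, 1) puts the net input -1 + x1 + x2 above zero; for OR, only (-1, -1) keeps 1 + x1 + x2 below zero, matching the target columns.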
Perceptron Learning
• The network
  – Input vector i_j (including the threshold input i0 = 1)
  – Weight vector w = (w0, w1,…, wn)
  – Net input: net = w · i_j = Σ_{k=0}^{n} wk ik,j
  – Output: bipolar (-1, 1) using the sign node function
• Why the update works: with weight change Δw = η·class(i_j)·i_j (learning rate η > 0),
    (w + Δw) · i_j = (w + η·class(i_j)·i_j) · i_j
                   = w · i_j + η·class(i_j)·(i_j · i_j)
  Since i_j · i_j > 0, the change in net input is
    > 0 if class(i_j) = 1
    < 0 if class(i_j) = -1
  i.e., each update moves the net input for i_j toward the sign of its target class.
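The update rule above can be sketched end to end. This is a minimal illustration, assuming a learning rate η = 1 and zero initial weights (neither is specified in the slides), trained on the bipolar AND patterns from the earlier example:

```python
# Minimal perceptron learning sketch: on a misclassified pattern i_j,
# update w <- w + eta * class(i_j) * i_j. The input is augmented with
# the constant threshold input i0 = 1.
def sign(net):
    return 1 if net >= 0 else -1

def train(patterns, targets, eta=1.0, max_epochs=100):
    w = [0.0, 0.0, 0.0]                        # (w0, w1, w2), zero init (assumed)
    for _ in range(max_epochs):
        errors = 0
        for (x1, x2), t in zip(patterns, targets):
            i_j = [1.0, x1, x2]                # i0 = 1 implements the threshold
            if sign(sum(wk * ik for wk, ik in zip(w, i_j))) != t:
                w = [wk + eta * t * ik for wk, ik in zip(w, i_j)]
                errors += 1
        if errors == 0:                        # all patterns correct: converged
            break
    return w

patterns = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
targets = [-1, -1, -1, 1]                      # bipolar AND
w = train(patterns, targets)
print(all(sign(w[0] + w[1] * x1 + w[2] * x2) == t
          for (x1, x2), t in zip(patterns, targets)))  # True
```

Because AND is linearly separable, the loop terminates with a separating weight vector, consistent with the convergence claim on the previous slide.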
[Figure: a two-layer network with inputs x1, x2, hidden units z1, z2, and output unit Y; connection weights of 2 and -2, node thresholds of 2]