
INTRODUCTION TO PERCEPTRON

PERCEPTRON

A Perceptron is an algorithm used for supervised learning of binary classifiers. A binary classifier decides whether an input, usually represented by a vector of numerical features, belongs to a specific class.
•Input Nodes or Input Layer:
This is the primary component of the Perceptron, which accepts the initial data into the system for further processing. Each input node holds a real numerical value.
•Weight and Bias:
The weight parameter represents the strength of the connection between units and is one of the most important parameters of the Perceptron. A weight is directly proportional to the influence of its associated input neuron on the output. The bias can be thought of as the intercept term in a linear equation.
•Activation Function:
This is the final and most important component; it determines whether the neuron will fire or not. In a Perceptron, the activation function is essentially a step function.
Types of Activation functions:
•Sign function
•Step function, and
•Sigmoid function
Weights (wi) are adjustable parameters that the perceptron
learns during the training process. The goal of training is to find
the optimal weights that allow the perceptron to correctly
classify or approximate the desired function. The training
algorithm adjusts these weights iteratively based on the error
between the perceptron's output and the target output.
A Perceptron can be defined as a single artificial neuron that computes its weighted input with the help of a threshold activation function or step function. It is also called a TLU (Threshold Logic Unit).
In a classification task, the weights determine the orientation and slope of the decision boundary that separates the different classes of data points. By adjusting the weights, the perceptron can learn to classify input data correctly.
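The iterative weight-update procedure described above can be sketched in a few lines of Python. This is a minimal sketch, not the slides' own code: the helper name train_perceptron, the learning rate, and the epoch count are all illustrative assumptions.

```python
def train_perceptron(samples, targets, lr=0.1, epochs=20):
    """Learn weights w and bias b with the classic perceptron rule (sketch)."""
    n = len(samples[0])
    w = [0.0] * n          # start from zero weights
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            # step activation on the weighted sum
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = t - y    # error between target and output drives the update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Example: learn the AND function, which is linearly separable
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]
w, b = train_perceptron(X, T)
print(w, b)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on weights that classify every row correctly.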
PERCEPTRON ALGORITHM WORKING
Criteria             Input          Weight
Artist is Good       x1 = 0 or 1    w1 = 0.7
Weather is Good      x2 = 0 or 1    w2 = 0.6
Friend will Come     x3 = 0 or 1    w3 = 0.5
Food is Served       x4 = 0 or 1    w4 = 0.3
Alcohol is Served    x5 = 0 or 1    w5 = 0.4

1. Set a threshold value:
•Threshold = 1.5
2. Multiply all inputs by their weights:
•x1 * w1 = 1 * 0.7 = 0.7
•x2 * w2 = 0 * 0.6 = 0
•x3 * w3 = 1 * 0.5 = 0.5
•x4 * w4 = 0 * 0.3 = 0
•x5 * w5 = 1 * 0.4 = 0.4
3. Sum all the results:
•0.7 + 0 + 0.5 + 0 + 0.4 = 1.6 (the weighted sum)
4. Activate the output:
•Return true if the sum > 1.5 ("Yes, I will go to the concert")
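The four steps above can be written out directly in Python, using the weights and threshold from the concert example:

```python
weights = [0.7, 0.6, 0.5, 0.3, 0.4]
inputs = [1, 0, 1, 0, 1]   # artist good, weather bad, friend comes, no food, alcohol served
threshold = 1.5

# Steps 2 and 3: multiply each input by its weight and sum the results
weighted_sum = sum(w * x for w, x in zip(weights, inputs))   # 0.7 + 0.5 + 0.4 = 1.6

# Step 4: activate the output by comparing against the threshold
go_to_concert = weighted_sum > threshold
print(go_to_concert)   # the weighted sum exceeds 1.5, so the answer is "Yes"
```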
PERCEPTRON FUNCTION

The Perceptron is a function that maps its input "x", multiplied by the learned weight coefficients, to an output value "f(x)":

f(x) = 1 if w.x + b >= 0
f(x) = 0 otherwise

In the equation given above:
"w" = vector of real-valued weights
"b" = bias (an element that shifts the decision boundary away from the origin without depending on any input value)
"x" = vector of input values
"m" = number of inputs to the Perceptron
The output can be represented as "1" or "0". It can also be represented as "1" or "-1", depending on which activation function is used.
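The formula above translates directly into code. The weight and bias values below are illustrative (they happen to be the AND-gate values derived later in these slides), not learned parameters:

```python
def f(x, w, b):
    """Step-activated perceptron: 1 if w.x + b >= 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

w = [1.0, 1.0]   # illustrative weight vector
b = -1.5         # illustrative bias
print(f([1, 1], w, b))   # 1 + 1 - 1.5 = 0.5 >= 0, so 1
print(f([0, 1], w, b))   # 0 + 1 - 1.5 = -0.5 < 0, so 0
```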
AND Gate
Row 1
• From w1x1 + w2x2 + b, initializing w1 and w2 as 1 and b as -1, we get: x1(1) + x2(1) - 1
• Passing the first row of the AND logic table (x1=0, x2=0), we get: 0 + 0 - 1 = -1
• From the Perceptron rule, if Wx + b < 0, then y' = 0. Therefore, this row is correct, and no weight update is needed.

Row 2
• Passing (x1=0 and x2=1), we get: 0 + 1 - 1 = 0
• From the Perceptron rule, if Wx + b >= 0, then y' = 1. This row is incorrect, as the output should be 0 for the AND gate.
• So we want values that make the combination x1=0, x2=1 give y' a value of 0. If we change b to -1.5, we have: 0 + 1 - 1.5 = -0.5
• From the Perceptron rule, this works (for rows 1, 2 and 3).

Row 4
• Passing (x1=1 and x2=1), we get: 1 + 1 - 1.5 = 0.5
• Again, from the Perceptron rule, this is still valid.
• Therefore, we can conclude that the model that achieves an AND gate using the Perceptron algorithm is: x1 + x2 - 1.5
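The derived model x1 + x2 - 1.5 can be checked against the full AND truth table:

```python
def and_gate(x1, x2):
    """AND via the perceptron model x1 + x2 - 1.5 with a step activation."""
    return 1 if x1 + x2 - 1.5 >= 0 else 0

# Evaluate every row of the truth table
truth_table = [(x1, x2, and_gate(x1, x2)) for x1 in (0, 1) for x2 in (0, 1)]
print(truth_table)   # only the (1, 1) row crosses the threshold
```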
OR Gate
From the truth table, the OR gate is 0 only if both inputs are 0.

Row 1
• From w1x1 + w2x2 + b, initializing w1 and w2 as 1 and b as -1, we get: x1(1) + x2(1) - 1
• Passing the first row of the OR logic table (x1=0, x2=0), we get: 0 + 0 - 1 = -1
• From the Perceptron rule, if Wx + b < 0, then y' = 0. Therefore, this row is correct.

Row 2
• Passing (x1=0 and x2=1), we get: 0 + 1 - 1 = 0
• From the Perceptron rule, if Wx + b >= 0, then y' = 1. This row is again correct (as are rows 1 and 3).

Row 4
• Passing (x1=1 and x2=1), we get: 1 + 1 - 1 = 1
• Again, from the Perceptron rule, this is still valid. Quite easy!
• Therefore, we can conclude that the model that achieves an OR gate using the Perceptron algorithm is: x1 + x2 - 1
XOR Gate
• The Boolean expression for an XOR gate is: x1x2' + x1'x2
• We first rewrite the Boolean expression:
• x1'x2 + x1x2' + x1'x1 + x2'x2 (the last two terms are always 0)
• x1(x1' + x2') + x2(x1' + x2')
• (x1 + x2)(x1' + x2')
• (x1 + x2)(x1x2)'
• From the simplified expression, we can say that the XOR gate consists of an OR gate (x1 + x2), a NAND gate (-x1 - x2 + 1) and an AND gate (x1 + x2 - 1.5).
• This means we have to combine three perceptrons, arranged in two layers:
• OR (x1 + x2 - 1)
• NAND (-x1 - x2 + 1)
• AND (x1 + x2 - 1.5)
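Combining the three perceptron models listed above reproduces XOR. In this sketch, step is the threshold activation, and the OR and NAND perceptrons feed their outputs into the AND perceptron:

```python
def step(z):
    """Threshold (step) activation: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def xor_gate(x1, x2):
    o = step(x1 + x2 - 1)      # OR perceptron
    n = step(-x1 - x2 + 1)     # NAND perceptron
    return step(o + n - 1.5)   # AND perceptron combining the two

print([xor_gate(x1, x2) for x1 in (0, 1) for x2 in (0, 1)])   # [0, 1, 1, 0]
```

A single perceptron cannot represent XOR, because its inputs are not linearly separable; the two-layer arrangement is what makes it possible.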
Characteristics of Perceptron
The perceptron model has the following characteristics.
1.Perceptron is a machine learning algorithm for supervised learning of binary classifiers.
2.In Perceptron, the weight coefficient is automatically learned.
3.Initially, weights are multiplied with input features, and the decision is made whether the neuron is fired or not.
4.The activation function applies a step rule to check whether the weight function is greater than zero.
5.The linear decision boundary is drawn, enabling the distinction between the two linearly separable classes +1 and -1.
6.If the added sum of all input values is more than the threshold value, it must have an output signal; otherwise, no
output will be shown.

Limitations of Perceptron Model


A perceptron model has the following limitations:
•The output of a perceptron can only be a binary number (0 or 1) due to the hard-limit transfer function.
•A perceptron can only classify linearly separable sets of input vectors. If the input vectors are not linearly separable, it cannot classify them correctly.
ACTIVATION FUNCTION

The activation function decides whether a neuron should be activated or not by calculating the weighted sum and further adding the bias to it. The purpose of the activation function is to introduce non-linearity into the output of a neuron.
The activation function applies a step rule (converting the numerical output into +1 or -1) to check whether the output of the weighting function is greater than zero.
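The three activation functions named earlier (sign, step, sigmoid) can be written side by side as a minimal sketch:

```python
import math

def sign(z):
    return 1 if z >= 0 else -1       # output in {+1, -1}

def step(z):
    return 1 if z >= 0 else 0        # output in {1, 0}

def sigmoid(z):
    return 1 / (1 + math.exp(-z))    # smooth output in (0, 1)

print(sign(-2), step(-2), sigmoid(0))   # -1 0 0.5
```

Sign and step are hard limits, which is why a plain perceptron can only output two values; the sigmoid's smooth output is what multi-layer networks use to propagate gradients.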
TYPES OF PERCEPTRON
Single Layer Perceptron Model:
This is one of the simplest types of artificial neural network (ANN). A single-layer perceptron model consists of a feed-forward network and includes a threshold transfer function inside the model. The main objective of the single-layer perceptron model is to analyze linearly separable objects with binary outcomes.
A single-layer perceptron model has no recorded data to start from, so it begins with randomly allocated initial weight parameters. It then computes the weighted sum of all inputs. If this sum exceeds a pre-determined threshold value, the model is activated and shows the output value as +1.
If the outcome matches the pre-determined or threshold value, the performance of the model is considered satisfactory, and the weights are left unchanged. However, discrepancies arise when multiple weighted input values are fed into the model. Hence, to obtain the desired output and minimize error, some changes to the weights are necessary.

Multi-Layered Perceptron Model:


A multi-layer perceptron model has the same basic structure as a single-layer perceptron model but a greater number of hidden layers.
The multi-layer perceptron model is typically trained with the Backpropagation algorithm, which executes in two stages as follows:
•Forward Stage: activations propagate from the input layer through the network and terminate at the output layer.
•Backward Stage: weight and bias values are modified according to the model's error. The error between the actual output and the desired output is propagated backward, starting at the output layer and ending at the input layer.
Hence, a multi-layer perceptron model can be considered as multiple artificial neural network layers in which the activation function does not have to be linear, unlike in a single-layer perceptron model. Instead, non-linear activation functions such as sigmoid, TanH, ReLU, etc., can be used.
A multi-layer perceptron model has greater processing power and can process both linear and non-linear patterns. Further, it can implement logic gates such as AND, OR, XOR, NAND, NOT, XNOR, and NOR.
