Artificial Neural Networks: Part 1/3
Berrin Yanikoglu
DA514– Machine Learning
Biological Inspirations
Humans perform complex tasks such as vision, motor
control, and language understanding very well.
Adaptivity
– changing the connection strengths to learn things
Non-linearity
– the non-linear activation functions are essential
Fault tolerance
– if one of the neurons or connections is damaged, the whole
network still works quite well
[Figure: an artificial neuron i. Inputs x1, ..., xm arrive over synaptic
weights wi1, ..., wim; a fixed input x0 = +1 carries the bias bi; the net
input passes through an activation function f to produce the output ai.]
An artificial neuron:
- computes the weighted sum of its input (called its net input)
- adds its bias
- passes this value through an activation function
We say that the neuron “fires” (i.e. becomes active) if its output is
above zero.
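The three steps above (weighted sum, bias, activation) can be sketched as a small function; the sigmoid activation and the example numbers are illustrative choices, not part of the slides:

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs (the net input) plus bias,
    passed through a sigmoid activation."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid squashes net into (0, 1)

# Example with three inputs and arbitrary weights
a = neuron_output([0.5, -1.0, 0.25], [0.4, 0.3, -0.2], bias=0.1)
```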
Bias
Bias can be incorporated as another weight clamped to a fixed
input of +1.0.
Activation Functions
– sigmoid: a = 1/(1 + e^(-n))
– ...
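A minimal sketch of two common activation functions (sigmoid and tanh, both mentioned later in these slides); the function names are ours:

```python
import math

def sigmoid(n):
    # squashes the net input n into (0, 1)
    return 1.0 / (1.0 + math.exp(-n))

def tanh(n):
    # squashes the net input n into (-1, 1)
    return math.tanh(n)

# Both are non-linear and saturate for large |n|
print(sigmoid(0.0))  # 0.5
print(tanh(0.0))     # 0.0
```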
Artificial Neural Networks
A neural network is a massively parallel, distributed processor
made up of simple processing units (artificial neurons).
[Figure: a single-layer network connecting an input layer directly to an
output layer.]
Different Network Topologies
Multi-layer feed-forward networks
– One or more hidden layers.
– Each layer receives input only from previous layers,
typically just the layer immediately before it.
[Figure: a 2-layer (1-hidden-layer) fully connected network with input,
hidden, and output layers.]
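A forward pass through such a 1-hidden-layer fully connected network can be sketched as repeated application of one layer computation; the sizes and weight values below are arbitrary illustrations:

```python
import math

def layer_forward(inputs, weights, biases):
    """One fully connected layer with sigmoid activations.
    weights[j] holds the incoming weights of unit j."""
    return [1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, ws)) + b)))
            for ws, b in zip(weights, biases)]

# 2 inputs -> 3 hidden units -> 1 output
x = [1.0, 0.5]
hidden = layer_forward(x, [[0.1, -0.2], [0.4, 0.3], [-0.5, 0.2]], [0.0, 0.1, -0.1])
output = layer_forward(hidden, [[0.3, -0.6, 0.9]], [0.05])
```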
Different Network Topologies
Recurrent networks
– A network with feedback, where some of its inputs
are connected to some of its outputs (discrete time).
[Figure: a recurrent network in which some of the outputs feed back into
the inputs.]
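The feedback idea can be sketched as a single recurrent unit in discrete time: at each step the unit sees the current input plus its own previous output. The tanh activation and the weight values are illustrative assumptions:

```python
import math

def recurrent_step(x, h_prev, w_in, w_rec, bias):
    """One discrete time step: combine the current input with
    feedback from the previous output, then apply tanh."""
    return math.tanh(w_in * x + w_rec * h_prev + bias)

# Unroll over a short input sequence; h carries the feedback signal
h = 0.0
for x in [1.0, 0.5, -0.5]:
    h = recurrent_step(x, h, w_in=0.8, w_rec=0.5, bias=0.0)
```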
Applications of ANNs
ANNs have been widely used in various domains for:
– Pattern recognition
– Function approximation
– Associative memory
– ...
Artificial Neural Networks
Early ANN Models:
– Perceptron, ADALINE, Hopfield Network
Current Models:
– Deep Learning Architectures
– Multilayer feedforward networks (Multilayer perceptrons)
– Radial Basis Function networks
– Self Organizing Networks
– ...
How to Decide on a Network Topology?
– # of input nodes?
• Number of features
– # of output nodes?
• Suitable to encode the output representation
– transfer function?
• Suitable to the problem
– # of hidden nodes?
• No exact rule; typically chosen empirically
Multilayer Perceptron
Each layer may have a different number of nodes and different
activation functions.
But commonly:
– Same activation function within one layer
• sigmoid/tanh activation function is used in the hidden
units, and
• sigmoid/tanh or linear activation functions are used in
the output units, depending on the problem
(sigmoid/tanh for classification, linear for function
approximation)
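The output-unit choice above can be sketched as follows; the function name and the `task` parameter are ours, introduced for illustration:

```python
import math

def output_unit(net, task):
    """Output activation chosen by task, as in the slides:
    sigmoid for classification (score in (0, 1)), linear
    (identity) for function approximation, where targets
    are unbounded."""
    if task == "classification":
        return 1.0 / (1.0 + math.exp(-net))
    return net  # linear output for function approximation

print(output_unit(2.0, "classification"))  # ~0.88
print(output_unit(2.0, "regression"))      # 2.0
```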
Neural Networks Resources
References:
– Neural network textbooks
Journals:
– IEEE Transactions on Neural Networks
– Neural Networks
– Neural Computation
– Biological Cybernetics
– ...