η_i(s) = −η₀ exp(−(s − Δ^abs)/τ) H(s − Δ^abs) − K H(s) H(Δ^abs − s)    (1.3)

with the Heaviside step function

H(s) = 0 if s ≤ 0, 1 if s > 0    (1.4)
The duration of the absolute refractoriness is set by Δ^abs, during which the large constant K keeps the membrane potential far below the threshold value. Constant η₀ scales the amplitude of the negative after-potential, and τ its decay time. Having described what happens to a neuron when it fires, we also need a description of the effect of incoming postsynaptic potentials:
ε_ij(s) = [ exp(−(s − Δ_ij)/τ_m) − exp(−(s − Δ_ij)/τ_s) ] H(s − Δ_ij)    (1.5)
In equation 1.5, Δ_ij defines the transmission delay (axons and dendrites are fast, synapses relatively slow) and 0 < τ_s < τ_m are time constants defining the duration of the effect of the postsynaptic potential. The kernel ε by default describes the effect of an excitatory postsynaptic potential; by using its negative value, we can model an IPSP from an inhibitory synapse (see fig. 1c). We use the variable w_ij to model the synaptic efficacy or weight, with which we can also model inhibitory connections by using values lower than zero. It should be noted that real synapses are either excitatory or inhibitory; we know of no synapses that change their effect during their lifetime.
Neurons of the second generation work in the iterative, clock-based manner of digital computers, but can deal with analog input values; we can quite easily feed input neurons with digitised values from a dataset or a robot sensor. Due to their iterative nature these networks are not very well suited for temporal tasks; they do not use time in their computation, whereas spiking neural networks do. However, such values cannot just be fed into a spiking neuron: we will either have to convert this information into spikes, or have to employ a method that alters the membrane potential directly. A general approach to achieve the latter is to use an extra function h^ext to describe the effect of an external influence on the membrane potential. Such functions are usually too task-specific to be covered in this paper, so this leaves us with
h_i(t) = ∑_j ∑_{t_j^(f) ∈ F_j} w_ij ε_ij(t − t_j^(f)) + h_i^ext(t)    (1.6)
For non-hardware solutions it may prove handier to convert analog signals into spikes that can be fed to the network directly. An often-used solution is to apply a Poisson process for spike generation by the sensor neuron: a higher input signal corresponds to a higher chance of a spike. Such a spike will then be processed and affect the membrane potential of neurons normally (see the sketch below). The current excitation of a neuron is described by
u_i(t) = ∑_{t_i^(f) ∈ F_i} η_i(t − t_i^(f)) + h_i(t)    (1.7)
where the refractory state, the effects of incoming postsynaptic potentials and any external events are combined. Together with equation 1.2 this forms the spike-response model, a powerful yet easy-to-implement model for working with spiking neural networks.
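To make equations 1.3 to 1.7 concrete, the Python sketch below evaluates the spike-response model for a single neuron, with input spikes generated by the Poisson scheme described above. All constants, names and the integration step are illustrative assumptions, not values from the text.

```python
import math
import random

# Illustrative constants (assumptions, not values from the text).
ETA_0 = 1.0        # amplitude of the negative after-potential
TAU = 4.0          # decay time of the after-potential (ms)
D_ABS = 2.0        # duration of absolute refractoriness Delta^abs (ms)
K = 1000.0         # large constant keeping u far below threshold
TAU_M, TAU_S = 10.0, 2.0   # membrane and synaptic time constants (ms)
DELAY = 1.0        # transmission delay Delta_ij (ms)
THETA = 0.5        # firing threshold

def heaviside(s):                       # eq. 1.4
    return 1.0 if s > 0 else 0.0

def eta(s):                             # refractory kernel, eq. 1.3
    return (-ETA_0 * math.exp(-(s - D_ABS) / TAU) * heaviside(s - D_ABS)
            - K * heaviside(s) * heaviside(D_ABS - s))

def epsilon(s):                         # postsynaptic-potential kernel, eq. 1.5
    s = s - DELAY
    return (math.exp(-s / TAU_M) - math.exp(-s / TAU_S)) * heaviside(s)

def poisson_spikes(rate_hz, t_max_ms, dt=0.1):
    """Rate-code an analog value: a higher rate gives a higher spike chance."""
    return [i * dt for i in range(int(t_max_ms / dt))
            if random.random() < rate_hz * dt / 1000.0]

# One presynaptic neuron j with weight w_ij driving one postsynaptic neuron i.
w_ij = 1.5
input_spikes = poisson_spikes(rate_hz=300.0, t_max_ms=100.0)
own_spikes, dt, t = [], 0.1, 0.0
while t < 100.0:
    h = sum(w_ij * epsilon(t - tf) for tf in input_spikes if tf <= t)  # eq. 1.6 (h_ext omitted)
    u = sum(eta(t - tf) for tf in own_spikes) + h                      # eq. 1.7
    if u >= THETA:                      # threshold crossed: the neuron fires
        own_spikes.append(t)
    t += dt
print(len(input_spikes), "input spikes ->", len(own_spikes), "output spikes")
```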
Short-term memory neurons
Analysis of neural networks has always been difficult, and is even more so in a spatio-temporal domain such as that of networks of spiking neurons. An often-used simplification of the spike-response model takes only the refractory effect of the last pulse sent into account. Mathematically speaking, by replacing equation 1.7 with
u_i(t) = η_i(t − t̂_i) + h_i(t)    (1.8)

where t̂_i denotes the most recent firing time of neuron i,
we are already finished. Forgetting about the effects of earlier refractory periods is not a capital crime; for normal operation the model is still quite realistic, while analysis is made considerably easier. Because of the poor memory of this model, such neurons are called short-term memory neurons.
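Under the same hypothetical kernels and names as in the sketch above, the simplification of equation 1.8 amounts to keeping only the most recent firing time:

```python
def u_short_term(t, own_spikes, h_t):
    """Membrane potential of a short-term memory neuron (eq. 1.8):
    only the refractory effect of the last spike sent is kept."""
    refractory = eta(t - own_spikes[-1]) if own_spikes else 0.0
    return refractory + h_t    # h_t: current input term h(t), e.g. from eq. 1.6
```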
Integrate-and-fire neurons
The most widely used and best-known model of threshold-
fire neurons, and spiking neurons in general, is the integrate-
and-fire neuron [5,6]. This model is based on, and most easily
explained by, principles of electronics. Figure 3 shows sche-
matic drawings of both a real and an integrate-and-fire neu-
ron. A spike travels down the axon and is transformed by a
low-pass filter, which converts the short pulse into a current
pulse I(t − t_j^(f)) that charges the integrate-and-fire circuit. The resulting increase in voltage there can be seen as the postsynaptic potential ε(t − t_j^(f)). Once the voltage over the capacitor rises above the threshold value, the neuron sends out a pulse itself.
Fig. 3. Schematic drawing of the integrate-and-fire neuron. On the left side, the low-pass filter that transforms a spike to a current pulse I(t) that charges the capacitor. On the right, the schematic version of the soma, which generates a spike when the voltage u over the capacitor crosses threshold [10].
Mathematically we write
τ_m du/dt = −u(t) + R I(t)    (1.9)
to describe the effect on the membrane potential u over time, with τ_m being the membrane time constant with which voltage leaks away. As with the spike-response model, the neuron fires once u crosses the threshold and a short pulse is generated. To force a refractory period after firing we set u to K < 0 for a period of Δ^abs. The input current is given by
I_i(t) = ∑_j ∑_{t_j^(f) ∈ F_j} c_ij α(t − t_j^(f))    (1.10)

where α(s) describes the short current pulse caused by a presynaptic spike.
The input current I_i for neuron i will often be 0, as incoming pulses have a finite, short length. Once a spike arrives, it is multiplied by the synaptic efficacy factor c_ij, forming the postsynaptic potential that charges the capacitor. This model is computationally simple and can easily be implemented in hardware. It is closely linked to the more general spike-response model and can be used like it by rewriting it into the correct kernels η and ε [8].
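As an illustration of equations 1.9 and 1.10, the sketch below integrates a single integrate-and-fire neuron with a simple Euler scheme. The parameter values, the rectangular shape of the current pulse and all names are assumptions made for the example, not prescriptions from the text or from any particular hardware design.

```python
import random

# Illustrative parameters (assumptions).
TAU_M = 10.0       # membrane time constant (ms)
R = 1.0            # membrane resistance
THETA = 1.0        # firing threshold
K_RESET = -0.2     # after a spike, u is set to K < 0 ...
D_ABS = 2.0        # ... for the absolute refractory period (ms)
PULSE_LEN = 1.0    # assumed length of an incoming current pulse (ms)

def lif_run(input_spikes, c_ij, t_max=100.0, dt=0.1):
    """Leaky integrate-and-fire neuron: tau_m * du/dt = -u + R*I (eq. 1.9)."""
    u, t, refractory_until, out_spikes = 0.0, 0.0, -1.0, []
    while t < t_max:
        # Eq. 1.10: the current is zero except shortly after a presynaptic spike,
        # scaled by the synaptic efficacy c_ij.
        I = sum(c_ij for tf in input_spikes if 0.0 <= t - tf < PULSE_LEN)
        if t >= refractory_until:
            u += dt / TAU_M * (-u + R * I)       # Euler step of eq. 1.9
            if u >= THETA:                       # threshold crossed: emit a pulse
                out_spikes.append(t)
                u = K_RESET                      # clamp below zero
                refractory_until = t + D_ABS     # enforce refractoriness
        t += dt
    return out_spikes

# Example: merged input spike train (as if from many presynaptic neurons), ~500 Hz.
spikes_in = [i * 0.1 for i in range(1000) if random.random() < 0.05]
print(len(lif_run(spikes_in, c_ij=2.5)), "output spikes")
```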
Spiking neurons in hardware
Very Large-Scale Integration (VLSI) technology integrates
many powerful features into a small microchip like a micro-
processor. Such systems can use data representations of ei-
ther binary (digital VLSI) or continuous (analog VLSI) volt-
ages. Progress in digital technology has been tremendous,
providing us with ever faster, more precise and smaller
equipment. In digital systems an energy-hungry synchronisation clock makes certain that all parts are ready for action. Analog systems consume much less power and space on silicon than digital systems (by many orders of magnitude) and are easily interfaced with the analog real world. However, their design is hard: due to noise, computation is fundamentally (if slightly) inaccurate, and sufficiently reliable non-volatile analog memory does not (yet) exist [20,4].
Noise is the influence of random effects that affect everything in the real world operating at normal (that is, above absolute zero) temperatures. For digital systems this is not much of a problem, as extra precision can be acquired by using more bits for more precise data encoding. In analog systems such a simple counter-measure is not at hand: there are no practical ways of eliminating noise, so at normal temperatures noise has to be accepted as a fact of life. Our brain is a perfect example of an analog system that operates quite well with noise, as neural networks do in general.
In fact, the performance of neural networks increases when noise is present [11]. Spiking neuron models can easily be equipped with noise models such as a noisy threshold, noisy reset or noisy integration. The interested reader can find more details on the modelling of noise in spiking neurons in Gerstner's excellent review on neuron models [8].
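As a minimal example of one such noise model, a noisy threshold can be obtained by jittering the firing threshold at every membrane-potential check; the snippet below sketches the idea (the Gaussian spread SIGMA is an assumption, and THETA refers to the hypothetical threshold used in the earlier sketches).

```python
import random

SIGMA = 0.05   # assumed standard deviation of the threshold noise

def noisy_threshold(theta):
    """Return a threshold value jittered around the nominal theta."""
    return random.gauss(theta, SIGMA)

# In the integrate-and-fire loop above one would then test
#     if u >= noisy_threshold(THETA): ...
# instead of comparing against the fixed THETA.
```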
Hybrid systems can provide a possibly perfect solution, operating with reliable digital communication and memory while using fast, reliable and cheap analog computation and interfacing. In such a solution, neurons can send short digital pulses, much like we have seen before in the integrate-and-fire model. This model can be implemented in VLSI systems quite well [2]. VLSI systems usually work in parallel, a very welcome fact for the simulation of neural systems, which are inherently massively parallel. A significant speed gain can be acquired by using a continuous hardware solution; by definition, digital simulation has to recalculate each time-slice iteratively [20]. Though computer simulations have an advantage in adaptability, scaling a network up to more neurons (1000+) often means leaving the domain of real-time simulation. VLSI systems can be specifically designed to link up, easily forming a scalable set-up that consists of many parts operating as if they were one big system [2,11,20].
Synaptic plasticity
We saw that synapses are very complex signal pre-processors and that they play an important role in the development, memory and learning of neural structures. Synaptic plasticity denotes a change in this pre-processing; it is the preferred term for learning, as it better describes what is at hand: long- or short-term change in synaptic efficacy [1,4,24].
Hebbian plasticity is a local form of long-term potentiation
(LTP) and depression (LTD) of synapses and is based on the
correlation of firing activity between pre- and postsynaptic
neurons. This is usually, and easily, implemented with rate
coding; similar neuron activity means a strong correlation.
As we are using pulse-coding schemes, we have to think
about how to define correlations in neural activity using
single spikes. Pure Hebbian plasticity acts locally at each
individual synapse, making it both very powerful and diffi-
cult to control; it is a positive-feedback process that can
destabilize postsynaptic firing rates by endlessly strengthen-
ing effective and weakening ineffective synapses. Such behaviour has to be avoided if possible, most desirably by a biologically plausible local rule.
Spike-timing dependent synaptic plasticity (STDP) is a
form of competitive Hebbian learning that uses the exact
spike timing information [1,23]. Experiments in neuroscience
have shown that long-term strengthening occurs if presynap-
tic action potentials arrive within 50 ms before a postsynaptic spike, and a weakening if they arrive after it. Due to this mechanism STDP can lead to stable distributions of LTP and LTD, making postsynaptic neurons sensitive to the timing of incoming action potentials. This sensitivity leads to competition among the presynaptic neurons, resulting in shorter latencies, spike synchronisation and faster information propagation through the network [1,23].
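To make the learning window concrete, the sketch below implements an exponential pair-based STDP rule in the spirit of [23]; the time constants, learning rates and weight bounds are illustrative assumptions, not values taken from the text.

```python
import math

# Assumed parameters of the exponential STDP window.
A_PLUS, A_MINUS = 0.005, 0.00525   # learning rates for potentiation / depression
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants (ms)
W_MIN, W_MAX = 0.0, 1.0            # hard bounds on the synaptic efficacy

def stdp_update(w, t_pre, t_post):
    """Pair-based STDP: strengthen if the presynaptic spike precedes the
    postsynaptic one, weaken if it arrives after it."""
    delta = t_post - t_pre
    if delta > 0:
        w += A_PLUS * math.exp(-delta / TAU_PLUS)    # LTP, strongest for small delays
    else:
        w -= A_MINUS * math.exp(delta / TAU_MINUS)   # LTD for late presynaptic spikes
    return min(W_MAX, max(W_MIN, w))

# A presynaptic spike 10 ms before the postsynaptic spike strengthens the synapse.
print(stdp_update(0.5, t_pre=40.0, t_post=50.0))
```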
Hebbian plasticity is a form of unsupervised learning,
which is useful for clustering input data but less appropriate
when a desired outcome for the network is known in ad-
vance. Back-propagation [21] is a widely known and often
used supervised learning algorithm. Due to the very complex
spatio-temporal dynamics and continuous operation it cannot be directly applied to spiking neural networks, but adaptations [12] exist in which individual spikes and their timing are taken into account.
Discussion
Neural structures are very well suited for complex informa-
tion processing. Animals show an incredible ease in coping
with dynamic environments, raising interest for the use of
artificial neural networks in tasks that deal with real-world
interactions. Over the years, three generations of artificial
neural networks have been proposed, each generation
biologically more realistic and computationally more power-
ful. Spiking neural networks use the element of time in com-
municating by sending out individual pulses. Spiking neu-
rons can therefore multiplex information into a single stream
of signals, like the frequency and amplitude of sound in the
auditory system [9].
We have covered the very general and realistic spike-re-
sponse model as well as the more common integrate-and-fire
model for using pulse coding in neurons. Both models are
powerful, realistic and easy to implement in both computer
simulation and hardware VLSI systems. Standard neural net-
work training algorithms use rate coding and cannot be used directly and satisfactorily for spiking neural networks. Spike-timing dependent synaptic plasticity uses exact spike timing to optimise the information flow through the network, and imposes competition between neurons in the process of unsupervised Hebbian learning.
Pulse coding is computationally powerful [15,16,17] and
very promising for tasks in which temporal information
needs to be processed. We conclude with the remark that this
is the case for virtually any task or application that deals with
in- or output from the real world.
References
1. Abbott, L. F. & Nelson, S. B. Synaptic Plasticity: taming the beast, Nature
Neuroscience Review, vol. 3 p.1178-1183 (2000).
2. Christodoulou, C., Bugmann, G. & Clarkson, T. G. A spiking neuron model:
applications and learning, Neural Networks, in press (2002).
3. DasGupta, B. & Schnitger, G. The power of approximating: a comparison of
activation functions, Advances in Neural Information Processing Systems,
vol. 5 p.363-374 (1992).
4. Elias, J. G. & Northmore, D. P. M. Building Silicon Nervous Systems with
Dendritic Tree Neuromorphs in Maass, W. & Bishop, C. M. (eds.) Pulsed Neural
Networks, MIT-press (1999).
5. Feng, J. & Brown, D. Integrate-and-fire Models with Nonlinear Leakage, Bulletin
of Mathematical Biology vol. 62, p.467-481 (2000).
6. Feng, J. Is the integrate-and-fire model good enough? a review, Neural
Networks vol. 14, p.955-975 (2001).
7. Ferster, D. & Spruston, N. Cracking the neuronal code, Science, vol. 270 p.756-
757 (1995).
8. Gerstner, W. Spiking Neurons in Maass, W. & Bishop, C. M. (eds.) Pulsed
Neural Networks, MIT-press (1999).
9. Gerstner, W., Kempter, R., van Hemmen, J. L. & Wagner, H. Hebbian
Learning of Pulse Timing in the Barn Owl Auditory System in Maass, W. &
Bishop, C. M. (eds.) Pulsed Neural Networks, MIT-press (1999).
10. Gerstner, W., Kistler, W. Spiking Neuron Models, Cambridge University Press
(2002).
11. Horn, D. & Opher, I. Collective Excitation Phenomena and Their Applications in
Maass, W. & Bishop, C. M. (eds.) Pulsed Neural Networks, MIT-press (1999).
12. Kempter, R., Gerstner, W. & van Hemmen, J. L. Hebbian Learning and Spiking Neurons,
Physical Review E, vol. 59, p.4498-4514 (1999).
13. Krüger, J. & Aiple, F. Multimicroelectrode investigation of monkey striate cortex:
spike train correlations in the infra-granular layers, Journal of Neurophysiology
vol. 60, p.798-828 (1988).
14. Maass, W., Schnitger, G. & Sontag, E. On the computational power of sigmoid
versus boolean threshold circuits, Proc. of the 32nd Annual IEEE Symposium on
Foundations of Computer Science, p. 767-776 (1991).
15. Maass, W. The Third Generation of Neural Network Models, Technische
Universität Graz (1997).
16. Maass, W. Computation with Spiking Neurons in Maass, W. & Bishop, C. M.
(eds.) Pulsed Neural Networks, MIT-press (1999).
17. Maass, W. On the Computational Power of Recurrent Circuits of Spiking
Neurons, Technische Universität Graz (2000).
18. Maass, W. Synapses as Dynamic Memory Buffers, Technische Universität
Graz (2000).
19. Maass, W. Computing with Spikes, Technische Universität Graz
(2002).
20. Murray, A. F. Pulse-Based Computation in VLSI Neural Networks in
Maass, W. & Bishop, C. M. (eds.) Pulsed Neural Networks, MIT-press (1999).
21. Rumelhart, D.E., Hinton, G.E. & Williams, R.J. Learning representations by
back-propagating errors, Nature Vol. 323 (1986).
22. Sala, D., Cios, K. & Wall, J. Self-Organization in Networks of Spiking Neurons,
University of Toledo, Ohio (1998).
23. Song, S., Miller, K. D. & Abbott, L. F. Competitive Hebbian Learning
through spike-timing-dependent synaptic plasticity, Nature
Neuroscience, vol. 3 p. 919-926 (2000).
24. Thorpe, S., Delorme, A., Van Rullen, R. Spike based strategies for
rapid processing, Neural Networks, vol. 14(6-7), p.715-726 (2001).