Sequence Learning and Generation Using A Spiking Neural Network
Abstract— A new neural network that uses the ReSuMe supervised learning algorithm for generating a desired output sequence in response to a given input spike sequence is proposed. Possible advantages of the proposed new neural network system compared to the Liquid State Machine based ReSuMe network system include better learning convergence and a smaller neural network size.

Keywords: spiking neuron, sequence learning, sequence generation, Liquid State Machine.

I. INTRODUCTION

An artificial neural network is a circuit made up of interconnected neurons that mimics the behavior of a biological neural network present in the brain. Each neuron in a neural network responds to the sum total of the inputs that arrive at its weighted synapses; once a certain input sum threshold is reached, the neuron produces an output spike.
In this paper, a neural network consisting of leaky integrate-and-fire spiking neurons is considered. Input spikes that arrive at a leaky integrate-and-fire spiking neuron are summed, and the neuron membrane potential (charge) increases with a certain time constant; if no inputs occur, the accumulated membrane potential decreases (i.e. the charge leaks out) with a certain time constant. When the membrane potential reaches a certain threshold value, a pulse-like action potential output is generated. This action potential is considered to be the output spike of the neuron. Immediately after an output spike occurs, the membrane potential is reset to a resting potential. There is an interval of time right after the neuron fires, called the "refractory time", during which the neuron does not respond to inputs. The leaky integrate-and-fire spiking neuron responds to input spike arrival time information (inter-spike time coding) in order to produce an output spike at a certain time [1]. The traditional artificial neuron model, on the other hand, responds to the number of input spikes it receives at a certain time (rate coding).
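As a concrete illustration, a minimal simulation sketch of such a leaky integrate-and-fire neuron is given below (in Python; the paper's own simulations used MATLAB [3]). All numeric constants here are illustrative assumptions, not values taken from the paper.

```python
import math

# Minimal leaky integrate-and-fire (LIF) neuron sketch. The time constant,
# threshold, weight, and refractory time are illustrative assumptions.

def simulate_lif(input_spike_times, t_end=100.0, dt=0.1, tau_m=10.0,
                 v_thresh=1.0, v_rest=0.0, w=0.4, refractory=0.5):
    """Return the output spike times of a LIF neuron driven by input spikes."""
    v = v_rest                      # membrane potential
    last_spike = -refractory        # time of the most recent output spike
    inputs = sorted(input_spike_times)
    i = 0
    out = []
    t = 0.0
    while t < t_end:
        # Charge leaks toward the resting potential between inputs.
        v = v_rest + (v - v_rest) * math.exp(-dt / tau_m)
        # Sum any input spikes falling in this step; inputs arriving
        # during the refractory time are ignored.
        while i < len(inputs) and inputs[i] < t + dt:
            if t - last_spike >= refractory:
                v += w
            i += 1
        # Threshold crossing: emit an action potential and reset.
        if v >= v_thresh:
            out.append(t)
            v = v_rest
            last_spike = t
        t += dt
    return out

# Example: three closely spaced input spikes push the neuron past threshold.
print(simulate_lif([5.0, 5.5, 6.0]))
```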
Generating a desired output sequence in response to a particular input sequence is a relatively new area in the field of neural networks. ReSuMe [4, 5] is a supervised learning method for spiking neural networks that can recognize a particular input sequence of inter-spike times and can generate a desired sequence of spike outputs, at desired spike times, in response to an input sequence of spikes. ReSuMe optimizes the synaptic weights of the learning neuron using the input and desired output spike time sequence information.

In this paper, the neural network used in the ReSuMe algorithm [4, 5] is modified in order to aid the convergence of the learning algorithm and to simplify the neural network.

II. RESUME ALGORITHM

A. Algorithm Description

There are three types of neurons used in the ReSuMe algorithm: input, learning, and teaching neurons. The sets of input, learning, and teaching neurons are defined as follows:

Input neuron set: $N^{in} = \{N_1^{in}, N_2^{in}, \ldots\}$
Learning neuron set: $N^{o} = \{N_1^{o}, N_2^{o}, \ldots\}$
Teaching neuron set: $n^{d} = \{n_1^{d}, n_2^{d}, \ldots\}$

[Figure 1. ReSuMe Neural Network: input spike sequences $S_i^{in}(t)$ enter a Liquid State Machine (LSM), whose outputs feed the learning neuron.]
[Figure: membrane potential of a spiking neuron, showing the neuron threshold and the action potential.]

The synaptic weights take positive values for excitatory synapses and negative values for inhibitory synapses; the remaining model parameters are time constants.
The outputs of the LSM are then input into the learning neuron synapses. Each synapse of the learning neuron receives an input from only one of the outputs of the LSM. The synaptic weights of the learning neuron are adjusted using the desired output firing times provided by the teaching neuron $n_i^d$, the input spike times, and the actual output spike times $S_i^o(t)$, so that the desired output spike sequence can be generated.
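A minimal discrete-time sketch of such an update is given below, following the general shape of the ReSuMe rule in [4, 5]: the weight of a synapse is strengthened around desired firing times and weakened around actual firing times, with an exponential window over the preceding input spikes. The constants a, A, and tau, and the per-spike formulation, are illustrative assumptions rather than a reproduction of the paper's Equations (6) and (7).

```python
import math

# Sketch of a ReSuMe-style weight update for one learning-neuron synapse.
# a (non-Hebbian term), A (window amplitude), and tau (window time
# constant) are illustrative assumptions, not the paper's values.

def resume_update(w, input_spikes, desired_spikes, actual_spikes,
                  a=0.01, A=1.0, tau=10.0):
    """Return the updated weight of a synapse carrying `input_spikes`."""
    def window_sum(t_ref):
        # Exponential learning window over input spikes preceding t_ref.
        return sum(A * math.exp(-(t_ref - t_in) / tau)
                   for t_in in input_spikes if t_in <= t_ref)

    for t_d in desired_spikes:       # potentiate toward desired firing times
        w += a + window_sum(t_d)
    for t_o in actual_spikes:        # depress around actual firing times
        w -= a + window_sum(t_o)
    return w
```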
According to [5], the ReSuMe learning algorithm has difficulty in accurately generating the desired output sequence times unless the number of LSM outputs is large (e.g. 300). The fact that, during learning, the synaptic weights of the learning neuron have to be adjusted simultaneously by considering a large number of spike time values (all the spikes arriving at the learning neuron synapses) can contribute to convergence and output spike sequence accuracy problems. Furthermore, in order to produce a large number of outputs, the LSM must be built with a large number of inter-connected spiking neurons.

… uniformly spaced in time. Just like the $N^{in}$ neurons described earlier, these neurons each output a single spike. … Neuron $N_i^{in}$ outputs a single spike at time $(t_i^s)' = t_i^s + \delta$, where $\delta$ is a delay; in other words, neuron $N_i^{in}$ only outputs a spike in response to the $i$th input spike, which occurs at time $t_i^s$.
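Under the single-spike relay description above, the input stage of the modified network can be sketched as follows; the delay value delta is an illustrative assumption.

```python
# Sketch of the modified network's input stage: the i-th N_in neuron fires
# exactly once, relaying the i-th input spike at time (t_i)' = t_i + delta.
# The value of delta is an assumption, not a value from the paper.

def n_in_spike_times(input_spike_times, delta=1.0):
    """One output spike per N_in neuron: the i-th input spike, delayed."""
    return [t_i + delta for t_i in sorted(input_spike_times)]

# Example: three input spikes yield three N_in neurons, one spike each.
print(n_in_spike_times([2.0, 7.5, 13.0]))
```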
Simulations were done using MATLAB [3]. Desired output spike sequences $S_i^d(t)$ were selected for a given set of input spike sequences $S_i^{in}(t)$. The learning neuron was trained to produce the desired sequence $S_i^d(t)$; its synaptic weights were adjusted using Equations (6) and (7). The parameter values tried in the simulations were as follows: $\alpha \in \{0, 0.01, 0.05, 0.1\}$, a second parameter in $\{0.5, 1.0, 1.5, 2.0\}$, and a time constant in $\{6.0, 8.0, 10.0\}$ ms. The postsynaptic membrane potential was modeled using Equation (9). The input spike sequence $S_i^{in}(t)$ consisted of 20 manually selected spike times. For the desired output spike sequence $S_i^d(t)$, spike sequences of length 4, 6, and 8 were manually made. The supplementary input consisted of uniformly distributed spikes spaced 3.5 ms apart. The pulse width of the postsynaptic current was set to 3 ms. The refractory time for a spiking neuron was set to 0.5 ms. It was assumed that all the input spikes in the input spike sequence $S_i^{in}(t)$ would enter the system within 100 ms. A network training session consisted of 1000 learning epochs.

[Table 2. Learning Convergence, Output Sequence Length = 6: four panels, one per $\alpha \in \{0, 0.01, 0.05, 0.1\}$; rows $\{0.5, 1.0, 1.5, 2.0\}$, columns $\{6.0, 8.0, 10.0\}$ ms; an 'o' marks a parameter combination for which learning converged.]

The output firing time error was defined using the following equation:

$e_f = t_f^d - t_f^o$   (9)

where $t_f^d$ is a desired output firing time and $t_f^o$ is the corresponding actual output firing time.
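A small helper (ours, not the paper's) makes the error computation concrete; it assumes the desired and actual output sequences have equal length and are time-ordered.

```python
# Output firing time error e_f = t_f^d - t_f^o of Equation (9), computed
# for each desired/actual output spike pair.

def firing_time_errors(desired_times, actual_times):
    return [t_d - t_o for t_d, t_o in
            zip(sorted(desired_times), sorted(actual_times))]
```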
2) Use the spike firing times of the 7 groups as inputs to 7 learning neuron synapses.

3) The components of the neural network for the experiments were: a) the seven $N^{in}$ neurons shown in Figure 1, each of which received an input from one of the seven groups; thus, the multiple spike inputs from each group were input into the 7 synapses of the learning neuron; b) the learning neuron, which receives input spikes from the 7 groups through its seven synapses; c) the $N^s$ neurons shown in Figure 3, which provide uniformly spaced (in time) inputs to the learning neuron; and d) the teaching neuron.

4) Train the learning neuron using the combinations of parameters that resulted in learning convergence for an output spike sequence of length 6, i.e. the Table 2 entries that have 'o' marks. For every such combination of parameters, train the network for 1,000 epochs.

Results of the above experiments have shown that when multiple input spikes per learning neuron synapse were used, learning convergence could not be achieved regardless of the parameter combination used. These results clearly show that having only one spike input per learning neuron synapse from the $N^{in}$ neurons makes the synaptic weight adjustment during learning simpler and, as a consequence, makes learning convergence possible.
2) Testing

The successfully trained networks were tested for a) recognition capability, using sequences that had noise added to the original input sequences; and b) discrimination capability, using input spike sequences that were quite different from the original input spike sequences. Noise for each input spike sequence consisted of shifting five of the original input spike times within a range of 0.5 ms per spike. For all test cases, the total number of spikes was kept at 20 (the same number of spikes as in the original input spike sequence). All parameter combinations that had resulted in learning convergence were used for the tests.

Four noise-added input spike sequences were used to test the recognition capability of the network. When the absolute values of all the $e_f$ values were 1 ms or less, it was assumed that the network had successfully recognized the noisy input sequence as being similar to the original noise-free input spike sequence.

Five input spike sequences that were different from the original input spike patterns were used to test the discrimination capability of the network. When the absolute values of one or more $e_f$ value(s) exceeded 1 ms, it was assumed that the network had correctly discriminated the input spike sequence as being different from the original input spike sequence.
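The testing procedure can be sketched as follows, reusing the firing_time_errors helper shown with Equation (9); the noise generator and the two predicate names are our assumptions, while the five-spike shift, the 0.5 ms range, and the 1 ms criterion come from the text above.

```python
import random

# Sketch of the testing procedure: jitter five randomly chosen spike
# times by up to 0.5 ms, then apply the 1 ms criterion to the |e_f|
# values computed by firing_time_errors (defined earlier).

def add_noise(spike_times, n_shift=5, max_shift=0.5, seed=None):
    """Shift n_shift randomly chosen spikes by at most +/- max_shift ms."""
    rng = random.Random(seed)
    noisy = list(spike_times)
    for k in rng.sample(range(len(noisy)), n_shift):
        noisy[k] += rng.uniform(-max_shift, max_shift)
    return sorted(noisy)

def recognized(errors, tol=1.0):
    """Recognition: all |e_f| values are 1 ms or less."""
    return all(abs(e) <= tol for e in errors)

def discriminated(errors, tol=1.0):
    """Discrimination: one or more |e_f| values exceed 1 ms."""
    return any(abs(e) > tol for e in errors)
```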
2.1) Testing Results

In the following tables, a combination of parameters that resulted in correct recognition of all four noisy input sequences has an R; a combination of parameters that resulted in a successful discrimination of all five input sequences has a D.

The network trained with $\alpha$ = 0.01, second parameter 1.5, and time constant 10.0 ms could not recognize noisy input sequences. The network trained with $\alpha$ = 0.1, second parameter 1.5, and time constant 6.0 ms could not discriminate the original input signal from the others. In general, recognition and discrimination performance had a 97% success rate.

 α     (2nd param.)  (ms)   Len 4   Len 6   Len 8
 0         1.5        8.0    R,D     R,D     R,D
 0         1.5       10.0    R,D     R,D     R,D
 0         2.0        8.0    R,D     R,D     R,D
 0         2.0       10.0    R,D     R,D     R,D
 0.01      1.5       10.0    R,D     R,D     D
 0.01      2.0        6.0    R,D     R,D     R,D
 0.01      2.0        8.0    R,D     R,D     R,D
 0.01      2.0       10.0    R,D     R,D     R,D
 0.05      1.0       10.0    R,D     R,D     R,D
 0.05      1.5        6.0    R,D     R,D     R,D
 0.05      1.5        8.0    R,D     R,D     R,D
 0.05      1.5       10.0    R,D     R,D     R,D
 0.05      2.0        6.0    R,D     R,D     R,D
 0.05      2.0        8.0    R,D     R,D     R,D
 0.05      2.0       10.0    R,D     R,D     R,D
 0.1       1.0        8.0    R,D     R,D     R,D
 0.1       1.0       10.0    R,D     R,D     R,D
 0.1       1.5        6.0    R       R,D     R,D
 0.1       1.5        8.0    R,D     R,D     R,D
 0.1       1.5       10.0    R,D     R,D     R,D
 0.1       2.0        6.0    R,D     R,D     R,D
 0.1       2.0        8.0    R,D     R,D     R,D
 0.1       2.0       10.0    R,D     R,D     R,D

Table 4. Recognition and Discrimination Performance of Trained Network, by output sequence length (R = all four noisy sequences recognized; D = all five different sequences discriminated)

V. CONCLUSIONS

The learning convergence and testing results have shown that the new network can be made to recognize and discriminate input spike sequences consisting of 20 spikes and can generate a desired sequence of spikes for a given input spike sequence.

Having only one spike input per learning neuron synapse made the synaptic weight assignment during learning simpler because the adjustment became a localized (in time) task around a single input spike time. The learning convergence experiments in Section IV, part 1.2) have shown that having only one spike input per learning neuron synapse resulted in much better learning convergence than a method that uses multiple input spikes per learning neuron synapse.
In order to be able to generate any arbitrary desired output spike sequence, input spike times should be distributed uniformly throughout the time window during which input spikes are received. However, in general the $S^{in}(t)$ spikes are scattered in time with various time gaps and/or clustered. Thus, in order to provide an additional set of input spikes, supplementary input neurons $N_1^s, N_2^s, \ldots$ were used. These neurons fired only one spike each and provided uniformly spaced (in time) input spikes to the learning neuron. The supplementary spike inputs were essential in achieving accurate output spike sequence times.
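A sketch of the supplementary spike generation follows; the 3.5 ms spacing and 100 ms input window are fixed by the text above, while the choice of placing the first spike one spacing after t = 0 is our assumption.

```python
# Sketch of the supplementary N_s input: single-spike neurons firing at
# uniformly spaced times across the input window. Starting the first
# spike at one spacing after t = 0 is an assumption.

def supplementary_spike_times(window=100.0, spacing=3.5):
    """Uniformly spaced firing times; one N_s neuron per returned time."""
    times = []
    t = spacing
    while t < window:
        times.append(round(t, 6))
        t += spacing
    return times

# With a 100 ms window and 3.5 ms spacing this yields 28 N_s neurons.
print(len(supplementary_spike_times()))  # -> 28
```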
The number of $N^{in}$ neurons depends on the number of input spikes in $S^{in}(t)$. The number of $N^s$ neurons depends on the input spike window and the time spacing between spikes. Thus an overall network size comparison between the newly proposed neural network and the original LSM-based ReSuMe network depends on the number of neurons used for $N^{in}$ and $N^s$ in the new network and on the size of the LSM.

The accuracy of the output spike sequence generated by the new network is believed to be good. A direct accuracy comparison with the LSM-based ReSuMe network will be made using the same test data and the same criteria for gauging error. A detailed analysis of the effects of ReSuMe parameter value variations will also be investigated.

REFERENCES

[1] W. Gerstner and W. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, Cambridge, 2002.
[2] W. Maass, T. Natschlaeger, and H. Markram, "Computational models for generic cortical microcircuits", in J. Feng (ed.), Computational Neuroscience: A Comprehensive Approach, Chapman and Hall/CRC, Boca Raton, 2004, pp. 575-605.
[3] MATLAB, The MathWorks, Inc., http://www.mathworks.com.
[4] F. Ponulak, "ReSuMe - new supervised learning method for Spiking Neural Networks", Technical Report, Institute of Control and Information Engineering, Poznań University of Technology, 2005, http://d1.cie.put.poznan.pl/~fp/.
[5] F. Ponulak, "Supervised learning in spiking neural networks with ReSuMe method", Ph.D. thesis, Institute of Control and Information Engineering, Poznań University of Technology, 2006, http://d1.cie.put.poznan.pl/~fp/.
[6] F. Ponulak and A. Kasiński, "Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence Learning, Classification and Spike-Shifting", Institute of Control and Information Engineering, Poznań University of Technology, 2009.
[7] Y. Konishi and R. H. Fujii, "Incremental learning of temporal sequences using state memory and a resource allocating network", 2004 IEEE International Joint Conference on Neural Networks, vol. 4, pp. 25-29.
[8] H. H. Amin and R. H. Fujii, "Spiking neural network inter-spike time based decoding scheme", IEICE Transactions on Information and Systems, vol. E88-D, no. 8, August 2005, pp. 1893-1902.
[9] R. H. Fujii and T. Ichishita, "Effect of synaptic weight assignment on spiking neuron response", 6th International Conference on Machine Learning and Applications (ICMLA), Cincinnati, Ohio, USA, 2007, pp. 217-222.
[10] R. H. Fujii and K. Oozeki, "Temporal data encoding and sequence learning with spiking neural networks", 16th International Conference on Artificial Neural Networks (ICANN 2006), Athens, Greece, pp. 780-789.
[11] J. L. Elman, "Finding structure in time", Cognitive Science, vol. 14, 1990, pp. 179-211.
[12] D. L. Wang and B. Yuwono, "Anticipation-based temporal pattern generation", IEEE Trans. Sys. Man Cybern., vol. 25, 1995, pp. 615-628.