NeuroSimV3.0 User Manual
Index
1. Introduction
2. New Feature Highlights in Version 3.0
3. System Requirements (Linux)
4. Installation and Usage (Linux)
5. Device Level: Synaptic Device Characteristics
   5.1 Non-ideal Analog eNVM Device Properties
   5.2 Fitting by MATLAB script (nonlinear_fit.m)
   5.3 Device Types and Parameters
6. Circuit Level: Synaptic Cores and Array Architectures
   6.1 Analog Synaptic Array Architectures
   6.2 Digital Synaptic Array Architectures
   6.3 Array Peripheral Circuits
7. Algorithm Level: Multilayer Perceptron (MLP) Neural Network Architecture
8. How to run MLP simulator (+NeuroSim)
9. The Benchmark Table for Different Synaptic Devices
1. Introduction
MLP simulator (+NeuroSim) is developed in C++ to emulate the online learning/offline classification
scenario with the MNIST handwritten digit dataset in a 2-layer multilayer perceptron (MLP) neural network
based on SRAM, emerging non-volatile memory (eNVM) and ferroelectric FET (FeFET) array architectures. The
eNVM in this simulator refers to a special subset of resistive memory devices that can tune the conductance
into multilevel states with voltage stimulus. NeuroSim is a circuit-level macro model for benchmarking
neuro-inspired architectures in terms of circuit-level performance metrics, such as chip area, latency,
dynamic energy and leakage power. Without NeuroSim, MLP simulator can be regarded as a standalone
functional simulator that is able to evaluate the learning accuracy and the circuit-level performance (but
only for the synapse array) during learning. With NeuroSim, MLP simulator (+NeuroSim) becomes an
integrated framework with hierarchical organization from the device level (transistor, eNVM and FeFET
device properties) to the circuit level (array architectures with periphery circuit modules) and then to the
algorithm level (neural network topology), enabling instruction-accurate evaluation on the learning
accuracy as well as the circuit-level performance metrics at the run-time of learning.
The target users for this simulator are device engineers who wish to quickly estimate the system-level
performance with their own analog synaptic device data. The users are expected to have the weight update
characteristics (conductance vs. number of pulses) ready in hand. Device-level parameters such as the number
of levels, weight update nonlinearity, device-to-device variation, and cycle-to-cycle variation can be extracted
using the provided MATLAB script. At the circuit level, several design options are available, such
as the analog synaptic array architecture (eNVM crossbar, eNVM pseudo-crossbar or FeFET), or digital
synaptic array architecture (eNVM crossbar, eNVM 1T1R, or SRAM). At the algorithm level, a simple 2-
layer MLP neural network is provided for evaluation, thus only limited options are available to the users to
modify, such as the size of each layer and the size of weight matrices.
off ratio. The detailed parallel read-out scheme is illustrated in section 6.2. To enable the parallel
read-out, please set the parameter “parallel” to “true” in the constructor for digitalNVM under Cell.cpp.
Fig. 0 (a) A parallel read-out scheme for eNVM-based digital synapses
4) Add more training algorithms into the MLP simulator (optimization_type in param.cpp)
Training algorithms such as "Momentum", "Adagrad", "RMSprop" and "Adam" have been added into the MLP
simulator. These algorithms may show better convergence and accuracy.
This feature can be specified by changing the parameter "optimization_type" in param.cpp (a minimal
example is given after the list below). The available options are as follows:
a. "SGD": stochastic gradient descent, which is the algorithm used in V1.0 and V2.0.
b. "Momentum": the momentum method.
c. "Adagrad": the self-adaptive gradient descent method. The learning rate monotonically decays as the
training goes on.
d. "RMSprop": an unpublished adaptive learning rate method proposed by Geoffrey Hinton. The learning
rate of weight wi is divided by the average of wi's recent gradients.
e. “Adam”: Adaptive Moment Estimation.
The details for the above algorithms can be found in the following link:
http://ruder.io/optimizing-gradient-descent/index.html#momentum
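For example, a minimal sketch of this setting in param.cpp could look as follows (the exact declaration in
the released code may differ; only the parameter name and the option strings above are taken from the
simulator):

optimization_type = "Adam";   // one of "SGD", "Momentum", "Adagrad", "RMSprop", "Adam"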
The accuracy with these algorithms is shown below in Fig. 00. It can be noticed that the convergence for
analog NVMs with poor linearity (such as Ag-Si) can be improved by properly choosing the training
algorithm. For analog NVMs with good linearity (such as EpiRAM), the accuracy can be further improved.
EpiRAM refers to this paper: S. Choi, S. H. Tan, Z. Li, Y. Kim, C. Choi, P.-Y. Chen, H. Yeon,
S. Yu, J. Kim, “SiGe epitaxial memory for neuromorphic computing with reproducible high
performance based on engineered dislocations,” Nature Materials, vol.17, pp. 335-340, 2018.
Fig. 00 Training accuracy vs. epochs for Ag-Si and EpiRAM
It should be noted that the hardware support for these algorithms has not been added into
NeuroSimV3.0. This feature is only at the algorithm level. Hardware support may be added in a
future version.
※ The tool may not run correctly (it may get stuck forever) if compiled with gcc 4.5 or below, because some
C++11 features are not well supported.
Command       Description
make          Compile the code and build the "main" program
make clean    Clean up the directory by removing the object files and the "main" executable
./main        Run the simulation (after make)
make run      Run the simulation (after make) and save the results to a log file (filename appended with
              the current time info). This command does not work if "stdbuf" is not found.
※ The simulation uses OpenMP for multithreading, and it will use up all the CPU cores by default.
Fig. 2 Analog eNVM device behavioral model of the nonlinear weight update with the nonlinearity
labeled from -6 to 6.
2) Limited precision
The precision of an analog eNVM device is determined by the number of conductance states it has, which
is Pmax in Eq. (1)-(3).
3) Device-to-device weight update variation
The effect of device-to-device weight update variation can be analyzed by introducing the variation into
the nonlinearity baseline. This variation is defined as the nonlinearity baseline's standard deviation (σ)
with respect to 1 step of the 6 steps in Fig. 2.
4) Cycle-to-cycle weight update variation
The cycle-to-cycle weight update variation refers to the variation in conductance change at every
programming pulse. This variation (σ) is expressed as a percentage of the entire conductance range.
5) Dynamic range (ON/OFF ratio)
Ideally, the weight values are represented by a normalized conductance of analog eNVM devices with a
range from 0 to 1. However, the minimum conductance can be regarded as 0 only when the ratio between
the maximum and minimum conductance (ON/OFF ratio) approaches infinity. With a limited ON/OFF ratio,
the cells with weight=0 still have leakage.
6) Conductance variation
Different devices may exhibit different ON/OFF ratios if the conductance range has a variation. The
conductance variation (σ) is typically expressed as a percentage of the highest conductance state
(ON state), since variation there changes the conductance range the most.
Fig. 3 Fitting of Ag:a-Si weight update data with normalized A in the plot of normalized conductance vs.
normalized number of pulses.
IdealDevice (Analog): a subset of RealDevice that does not have the non-ideal synaptic device properties
in weight update.
MeasuredDevice (Analog): uses the look-up table method rather than parameters to reproduce the weight
update curves, thus the weight update variations are not applicable here.
DigitalNVM (Digital): the binary eNVM class.
SRAM (Digital): the binary SRAM class.
All the parameters in Cell.cpp have comments describing their meaning; here we introduce the
important or common ones that are used in more than one device class. For resistive synaptic devices, the
maxConductance and minConductance are defined as 1/RON and 1/ROFF, respectively. readVoltage and
readPulseWidth are on-chip read voltage (V) and read pulse width (s). The specified value of
readPulseWidth does not matter because it will be modified later by the read circuit module when it
calculates the required pulse width for each integration cycle based on the ADC precision.
writeVoltageLTP and writePulseWidthLTP are the write voltage (V) and the write pulse width (s) during
LTP or weight increase. writeVoltageLTD and writePulseWidthLTD are also defined in the same way.
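As an illustration, these common parameters might appear in a device class in Cell.cpp roughly as follows
(the parameter names are those described above; the numerical values are placeholders rather than
recommended settings):

maxConductance = 5e-6;          // 1/RON (S)
minConductance = 5e-8;          // 1/ROFF (S); together they set the ON/OFF ratio
readVoltage = 0.5;              // on-chip read voltage (V)
readPulseWidth = 5e-9;          // read pulse width (s); later overridden by the read circuit module
writeVoltageLTP = 3.2;          // write voltage during LTP / weight increase (V)
writePulseWidthLTP = 300e-6;    // write pulse width during LTP (s)
writeVoltageLTD = 2.8;          // write voltage during LTD / weight decrease (V)
writePulseWidthLTD = 300e-6;    // write pulse width during LTD (s)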
For the non-ideal device properties, we describe their implementation below one by one with the associated
parameters:
1) Nonlinear weight update
To enable this property, the users need to make sure nonlinearWrite=true. Then, the users have to provide
the value of NL_LTP and NL_LTD, which represent the weight update nonlinearity for LTP and LTD,
respectively. In the example of Ag:a-Si, NL_LTP and NL_LTD are set to 2.40 and -4.88, as obtained from
the MATLAB fitting results.
2) Limited precision
The number of conductance states can be specified in maxNumLevelLTP and maxNumLevelLTD for the
LTP and LTD of the analog synaptic device, respectively. On the other hand, the weight precision with
digital synaptic devices is specified in numWeightBit in Param.cpp.
3) Device-to-device weight update variation
The standard deviation (σ) of device-to-device variation is specified in sigmaDtoD. If this property is not
considered, sigmaDtoD should be set to 0.
4) Cycle-to-cycle weight update variation
The standard deviation (σ) of cycle-to-cycle variation is specified in sigmaCtoC. It is multiplied with
(maxConductance - minConductance) because it is expressed in terms of the percentage of entire
conductance range as mentioned earlier. Currently the simulator only takes one value of the cycle-to-cycle
weight update variation for sigmaCtoC; the user is encouraged to select the larger of the LTP and LTD
values for a conservative estimate. If this property is not considered, sigmaCtoC should be set to 0.
5) Dynamic range (ON/OFF ratio)
The dynamic range is solely determined by maxConductance and minConductance. There is no
additional parameter to enable this property. However, if the users do not want to take this effect into
account, they can set minConductance=0 to obtain an infinite ON/OFF ratio.
6) Conductance variation
To enable this property, the users need to make sure conductanceRangeVar=true. Then, the users have to
provide the values of maxConductanceVar and minConductanceVar, which represent the standard
deviations (σ) of conductance variation at the maximum and minimum conductance states, in terms of
percentage, respectively. If the ON/OFF ratio is large, setting maxConductanceVar alone is good enough.
7) Read noise
To enable this property, the users need to make sure readNoise=true. Then, the users have to provide the
value of sigmaReadNoise, which is the standard deviation of the read noise, assuming a Gaussian distribution.
For other modes or parameters, cmosAccess is used to choose the cell structure, or synaptic core type in
other words. cmosAccess=true means the pseudo-crossbar/1T1R array, while cmosAccess=false means
the true crossbar array. If the cell is pseudo-crossbar/1T1R, we need to define resistanceAccess, which is
the turn-on resistance value of the transistor in 1T1R array. The FeFET option is for the ferroelectric FET
configuration. We do not have a dedicated device class for FeFET because it is similar to the analog eNVM
from the non-ideal device properties’ point of view. Its default configuration is FeFET=false. If
FeFET=true, we need to provide the value of gateCapFeFET, which is the gate capacitance of the FeFET. If
the cell is a crossbar, the I-V nonlinearity NL can be specified as the current ratio between the write voltage
and half of the write voltage, taking into account whether a selector is added. To enable this property, the
users have to set nonlinearIV=true. The nonIdenticalPulse option is for the non-identical write pulse
scheme where the write
pulse amplitude or width linearly increases or decreases with the pulse number. As shown in Fig. 5,
VinitLTP, VstepLTP, VinitLTD, VstepLTD, PWinitLTP, PWstepLTP, PWinitLTD and PWstepLTD
are essential parameters that need to be defined by the users when nonIdenticalPulse=true.
Fig. 5 Non-identical write pulse scheme: the pulse amplitudes are defined by VinitLTP, VstepLTP, VinitLTD
and VstepLTD, and the pulse widths by PWinitLTP, PWstepLTP, PWinitLTD and PWstepLTD.
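As an illustration, the structural options described above might be set in the same device class roughly as
follows (parameter names from the description above; the values are assumptions for illustration only):

cmosAccess = true;          // true: pseudo-crossbar/1T1R array; false: true crossbar array
resistanceAccess = 15e3;    // turn-on resistance (ohm) of the access transistor, used when cmosAccess=true
FeFET = false;              // true: ferroelectric FET configuration (then gateCapFeFET must be given)
nonlinearIV = false;        // true: consider the I-V nonlinearity NL for crossbar cells
nonIdenticalPulse = false;  // true: non-identical write pulses (then define VinitLTP, VstepLTP, PWinitLTP, ... as in Fig. 5)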
Fig. 6 Synaptic cores based on (a) analog eNVM crossbar parallel read-out, (b) analog eNVM pseudo-
crossbar parallel read-out, (c) analog FeFET parallel read-out, (d) digital eNVM crossbar row-by-row
read-out, (e) digital eNVM pseudo-crossbar row-by-row read-out, (f) digital SRAM array row-by-row
read-out and (g) digital eNVM pseudo-crossbar parallel read-out
Fig. 7 Voltage bias scheme in the write operation of analog eNVM crossbar array. Two separate phases
for weight increase and decrease are required. In this example, the left cell of the selected cells will be
updated in phase 1, while the right one will be updated in phase 2.
2) Analog eNVM pseudo-crossbar array
Another common design solution to the write disturbance and sneak path problem is to add a cell selection
transistor in series with the eNVM device, forming the one-transistor one-resistor (1T1R) array architecture,
as shown in Fig. 8(a). The WL controls the gate of the transistor, which can be viewed as a switch for the
cell. The source line (SL) connects to the source of the transistor. The eNVM cell’s top electrode connects
to the BL, while its bottom electrode connects to the drain of the transistor through a contact via. In this
case, the cell area of the 1T1R array is determined by the transistor size, which is typically >6F² depending
on the maximum current that must be delivered into the eNVM cell. A larger current needs a larger transistor
gate width/length (W/L). However, the conventional 1T1R array is not able to perform the parallel weighted
sum operation. To solve this problem, we modify the conventional 1T1R array by rotating the BLs by 90°,
which is known as the pseudo-crossbar array architecture, as shown in Fig. 8(b). In the weighted sum operation,
all the transistors will be transparent when all WLs are turned on. Thus, the input vector voltages are
provided to the BLs, and the weighted sum currents are read out through SLs in parallel. The weight update
operation in the pseudo-crossbar array is similar to that in the crossbar array, as shown in Fig. 9. As the
unselected WLs turn off the transistors on unselected rows, no voltage bias is required for the unselected BLs;
thus, the pseudo-crossbar array can save a lot of weight update energy compared to the crossbar array. In
the simulator, the pseudo-crossbar array architecture can be designated by setting cmosAccess=true in
Cell.cpp.
Fig. 8 Transformation from (a) conventional 1T1R array to (b) pseudo-crossbar array by 90° rotation of
BL to enable weighted sum operation.
Fig. 9 Voltage bias scheme in the write operation of analog eNVM pseudo-crossbar array. Two separate
phases for weight increase and decrease are required. In this example, the left cell of the selected cells
will be updated in phase 1, while the right one will be updated in phase 2.
3) Analog FeFET array
As shown in Fig. 6(c), the analog FeFET array is in the pseudo-crossbar fashion, which is similar to the analog
eNVM pseudo-crossbar one. It also has an access transistor for each cell to prevent programming on other
unselected rows during row-by-row weight update. As FeFET is a three-terminal device, it needs two
separate SLs for the weighted sum (SLS) and weight update (SLN), respectively. Its weight update
operation is shown in Fig. 10. In the simulator, the FeFET array architecture can be designated by setting
cmosAccess=true and FeFET=true in Cell.cpp.
Fig. 10 Voltage bias scheme in the write operation of analog FeFET pseudo-crossbar array. Two separate
phases for weight increase and decrease are required. In this example, the left cell of the selected cells
will be updated in phase 1, while the right one will be updated in phase 2.
4) Digital eNVM crossbar/1T1R array with row-by-row read-out
Both digital eNVM crossbar and 1T1R array have very similar architectures, as shown in Fig. 6(d)-(e).
Multiple digital eNVM devices are grouped along the row to represent one weight. Unlike the analog
eNVM array architecture, the weighted sum operation in the digital one is essentially row-by-row based;
thus it requires adders and registers to accumulate the partial weighted sums in a row-by-row fashion. On
the other hand, the weight update operation in digital eNVM crossbar/1T1R array is also row-by-row based
and is similar to the write operation in conventional eNVM array for memory, which is shown in Fig. 11.
In the simulator, the crossbar array architecture is selected by setting cmosAccess=false, and the 1T1R
array architecture is selected by setting cmosAccess=true.
Fig. 11 Voltage bias scheme in the write operation of (a) digital eNVM crossbar and (b) digital eNVM
1T1R array. Two separate phases for SET and RESET are required. In this example, the left cell of the
selected cells will be programmed to 1 in SET, while the right one will be programmed to 0 in RESET.
5) Digital eNVM 1T1R array with parallel read-out
The parallel read-out scheme for digital eNVMs is based on the 1T1R pseudo-crossbar array. The WL-BL
switch matrix turns on all the WLs with input "1"s and applies the read pulse at the BLs. The actual partial
sum of each column is obtained by subtracting the partial sum bits of the reference column from the
column's own partial sum bits through the subtractor. The partial sums of different numerical significance
are added up through the adder and shift register. The weight update operation in this architecture is still
conducted row by row. Since the partial sum of a column is read out in parallel, this scheme eliminates the
adder and register required in the row-by-row read-out scheme described above, but the adder and shift
register are still needed to add up the partial sums of different numerical significance.
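As a minimal sketch of the read-out arithmetic described above (an illustration, not the simulator's actual
code), the subtractor recovers the signed partial sum of a column from the parallel read-out:

// Illustrative only: the partial sum bits of the reference column, read in the same
// cycle, are subtracted from the partial sum bits of the selected column.
int RecoverPartialSum(int columnPartialSum, int referenceColumnPartialSum) {
    return columnPartialSum - referenceColumnPartialSum;
}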
6.3 Array Peripheral Circuits
The periphery circuit modules used in the analog synaptic cores in Fig. 6 are described below:
1) Switch matrix
Switch matrices are used for fully parallel voltage input to the array rows or columns. Fig. 12(a) shows the
BL switch matrix for example. It consists of transmission gates that are connected to all the BLs, with
control signals (B1 to Bn) of the transmission gates stored in the registers (not shown here). In the weighted
sum operation, the input vector signal is loaded to B1 to Bn, which decide the BLs to be connected to either
the read voltage or ground. In this way, the read voltage that is applied at the input of transmission gates
can pass to the BLs and the weighted sums are read out through SLs in parallel. If the input vector is not 1
bit, it should be encoded using multiple clock cycles, as shown in Fig. 12(b). The reason why we do not
use analog voltage to represent the input vector precision is the I-V nonlinearity of eNVM cell, which will
cause weighted sum distortion or inaccuracy. In the simulator, all the switch matrices (slSwitchMatrix,
blSwitchMatrix, wlSwitchMatrix) are instantiated from the SwitchMatrix class in SwitchMatrix.cpp. A
new WL-BL switch matrix is designed for the parallel read-out architecture for digital eNVM-based arrays,
as shown in Fig. 12(c). It can be instantiated from the NewSwitchMatrix class in SwitchMatrix.cpp.
Fig. 12 (a) Transmission gates of the BL switch matrix in the weighted sum operation. A vector of control
signals (B1 to Bn) from the registers (not shown here) decide the BLs to be connected to either a voltage
source or ground. (b) Control signals in a bit stream to represent the precision of the input vector. (c)
Design of the new WL-BL switch matrix.
2) Crossbar WL decoder
The crossbar WL decoder is modified from the traditional WL decoder. It has an additional feature to
activate all the WLs for making all the transistors transparent for weighted sum. The crossbar WL decoder
is constructed by attaching the follower circuits to every output row of the traditional decoder, as shown in
Fig. 13. If ALLOPEN=1, the crossbar WL decoder will activate all the WLs no matter what input address
is given, otherwise it will function as a traditional WL decoder. In the simulator, the crossbar WL decoder
contains a traditional WL decoder (wlDecoder) instantiated from RowDecoder class in RowDecoder.cpp
and a collection of follower circuits (wlDecoderOutput) instantiated from WLDecoderOutput class in
WLDecoderOutput.cpp.
Fig. 13 Circuit diagram of the crossbar WL decoder. A follower circuit is attached to every row of the
decoder to enable activation of all WLs when ALLOPEN=1.
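The logical behavior of the crossbar WL decoder can be summarized by the following sketch (illustrative
pseudo-logic, not the simulator's code):

#include <vector>

// Each WL follows the ordinary n:2^n decoder output unless ALLOPEN forces it high.
void DriveWordLines(bool ALLOPEN, int decodedAddress, std::vector<bool> &WL) {
    for (int i = 0; i < (int)WL.size(); i++)
        WL[i] = ALLOPEN || (i == decodedAddress);
}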
3) Multiplexer (Mux) and Mux decoder
The Multiplexer (Mux) is used for sharing the read periphery circuits among synaptic array columns,
because the array cell size is much smaller than the size of read periphery circuits and it will not be area-
efficient to put all the read periphery circuits underneath the array. However, sharing the read periphery
circuits among synaptic array columns inevitably increases the latency of weighted sum as time
multiplexing is needed, which is controlled by the Mux decoder. In the simulator, the Mux (mux) is
instantiated from Mux class in Mux.cpp and the Mux decoder (muxDecoder) is instantiated from
RowDecoder class in RowDecoder.cpp.
4) Analog-to-digital read circuit
To convert the analog weighted sum currents to digital outputs, we use the read circuit from the reference
below, which employs the principle of the integrate-and-fire neuron model, as shown in Fig. 14(a). The read
circuit integrates the weighted sum current on the finite capacitance of the array column. Once the voltage
charges up above a certain threshold, the read circuit fires an output pulse and the capacitance is discharged
back. The simulated waveform of integrated input voltage and the digital output spikes of the read circuit
is shown in Fig. 14(b). The number of output spikes is proportional to the weighted sum current. The precision
required for this analog-to-digital conversion (ADC) determines the pulse width in each bit of the input
vector. In the simulator, a collection of read circuits (readCircuit) is instantiated from ReadCircuit class
in ReadCircuit.cpp.
D. Kadetotad, Z. Xu, A. Mohanty, P.-Y. Chen, B. Lin, J. Ye, S. Vrudhula, S. Yu, Y. Cao, J.-S. Seo, “Parallel
architecture with resistive crosspoint array for dictionary learning acceleration,” IEEE J. Emerg. Sel. Topics
Circuits Syst. (JETCAS), vol. 5, no. 2, pp. 194-204, 2015.
Fig. 14 (a) Design of a read circuit that employs the principle of the integrate-and-fire neuron model. (b)
Simulated waveform of integrated input voltage and the digital output spikes of the read circuit.
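As a rough, simplified model of this integrate-and-fire principle (an assumption for illustration, not the
NeuroSim read-circuit implementation), the spike count grows with the weighted sum current:

// Simplified model: the column current I charges the column capacitance C up to the
// threshold Vth, a spike is fired and the capacitance is discharged, and this repeats
// for the whole integration window Tint (the discharge time is neglected).
int CountOutputSpikes(double I, double C, double Vth, double Tint) {
    double timePerSpike = C * Vth / I;              // time to charge from 0 V to Vth
    return static_cast<int>(Tint / timePerSpike);   // spike count, proportional to I
}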
5) WL decoder and column decoder
Both the traditional WL decoder (wlDecoder) and column decoder (colDecoder) are instantiated from
RowDecoder class in RowDecoder.cpp. Their only difference is the connection to the array rows or
columns, which will be determined in the initialization. If REGULAR_ROW is specified, then it will be
a WL decoder. If REGULAR_COL is specified, then it will be a column decoder.
6) Decoder driver
The decoder driver helps provide the voltage bias scheme for the write operation when its decoder selects
the cells to be programmed. As the digital eNVM crossbar array has the write voltage bias scheme for both
WLs and BLs, it needs the WL decoder driver (wlDecoderDriver) and column decoder driver
(colDecoderDriver). These decoder drivers can be instantiated from DecoderDriver class in
DecoderDriver.cpp.
7) Adder and register
As mentioned earlier, the adders and registers are used to accumulate the partial weighted sum results during
the row-by-row weighted sum operation in digital synaptic array architectures. The group of adders is
instantiated from Adder class in Adder.cpp and the group of registers (dff) is instantiated from DFF class
in DFF.cpp.
8) Adder and shift register
The adder and shift register pair at the bottom of synaptic core performs shift and add of the weighted sum
result at each input vector bit cycle (B1 to Bn in Fig. 12(b)) to get the final weighted sum. The bit-width of
the adder and shift register needs to be further extended depending on the precision of input vector. If the
values in the input vector are only 1 bit, then the adder and shift register pair is not required. In the simulator,
a collection of the adder and shift register pairs (ShiftAdd) is instantiated from ShiftAdd class in
ShiftAdd.cpp, where ShiftAdd further contains a group of adders (adder) instantiated from Adder class
in Adder.cpp and a group of registers (dff) instantiated from DFF class in DFF.cpp.
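A minimal sketch of the shift-and-add principle (illustrative only, not the ShiftAdd implementation in the
simulator):

#include <cstdint>
#include <vector>

// The weighted sum obtained for each input-vector bit, processed from the most to the
// least significant bit, is accumulated by shifting the running total left by one bit
// and adding the new partial result, yielding the final multi-bit weighted sum.
int64_t ShiftAndAdd(const std::vector<int64_t> &weightedSumPerInputBit) {
    int64_t result = 0;
    for (int64_t partial : weightedSumPerInputBit)
        result = (result << 1) + partial;
    return result;
}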
Fig. 15 (a) The 2-layer MLP neural network. (b) Schematic of a neuron node. (c) Circuit block diagram
for hardware implementation of the 2-layer MLP network.
In the back propagation phase, the weight update values (ΔW) will be translated to the number of LTP or
LTD write pulses (Fig. 15(b)) and applied to the synaptic array following the voltage bias scheme in Fig.
7, Fig. 9, Fig. 10 or Fig. 11. In the previous 1.0 version, we used a naïve weight update scheme, where all
the selected cells in each write batch have to wait for the full number of write pulse cycles regardless of
their ΔW. This naïve weight update scheme effectively reduces the hardware design complexity, but also
greatly increases the weight update latency and energy due to the redundant write pulse cycles. Thus, since
the 2.0 version, we use an optimized weight update scheme, where the cells only need to go through the
maximum ΔW's number of cycles in each write batch. If all the cells in a write batch do not need an update
(ΔW=0), this entire write batch can even be skipped. This could bring significant reduction in weight update
latency and energy, especially considering that ΔW will usually become very small or even zero after the
first few epochs of learning.
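As a minimal illustration of this optimized scheme (an assumption-level sketch, not the simulator's actual
implementation), the cost of a write batch is set by its largest |ΔW| expressed in programming pulses:

#include <algorithm>
#include <cstdlib>
#include <vector>

// A write batch only occupies as many pulse cycles as its largest |ΔW| requires;
// a batch whose ΔW values are all zero can be skipped entirely.
int PulseCyclesForWriteBatch(const std::vector<int> &deltaWInPulses) {
    int maxPulses = 0;
    for (int d : deltaWInPulses)
        maxPulses = std::max(maxPulses, std::abs(d));
    return maxPulses;    // 0 means the whole batch is skipped
}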
8. How to run MLP simulator (+NeuroSim)
1) Select the synaptic device type in main.cpp
First, the users have to select the synaptic device type for the two synaptic cores. Available device types
are RealDevice, IdealDevice, MeasuredDevice, DigitalNVM and SRAM, as listed in Section 5.3. The
default configuration is RealDevice for both synaptic cores, as shown below in main.cpp:
arrayIH->Initialization<RealDevice>();
arrayHO->Initialization<RealDevice>();
2) Modify the device parameters in Cell.cpp
After selecting the synaptic device type, the users may wish to modify the device parameters in the
corresponding synaptic device class in Cell.cpp. Essential parameters have been described in Section 5.3.
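For example, a user fitting their own analog device might edit the RealDevice class roughly as follows
(parameter names from Section 5.3; the values are placeholders to be replaced with the user's own fitted
data):

NL_LTP = 2.40;          // weight update nonlinearity for LTP (from the MATLAB fitting)
NL_LTD = -4.88;         // weight update nonlinearity for LTD
maxNumLevelLTP = 100;   // number of conductance states for LTP
maxNumLevelLTD = 100;   // number of conductance states for LTD
sigmaDtoD = 0;          // device-to-device weight update variation (σ); 0 = not considered
sigmaCtoC = 0.035;      // cycle-to-cycle variation (σ), expressed as a fraction of the conductance range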
3) Modify the network and hardware parameters in Param.cpp
The users may also wish to modify the network and hardware parameters in Param.cpp. For the network
side, numMnistTrainImages and numMnistTestImages are the number of images in MNIST during
training and testing respectively, numTrainImagesPerEpoch means the number of training images per
epoch, while interNumEpochs represents the internal number of epochs within each printed epoch shown
on the screen. In addition, nInput, nHide and nOutput are the number of neurons in input, hidden and
output layers in the 2-layer MLP neural network, respectively.
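A sketch of the network-side settings in Param.cpp (parameter names from the text above; the values are
examples, not necessarily the released defaults):

numMnistTrainImages = 60000;    // number of MNIST training images
numMnistTestImages = 10000;     // number of MNIST test images
numTrainImagesPerEpoch = 8000;  // training images per epoch
interNumEpochs = 1;             // internal epochs within each printed epoch
nInput = 400;                   // input neurons (20 x 20 cropped images)
nHide = 100;                    // hidden-layer neurons
nOutput = 10;                   // output neurons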
For the hardware side, the first four hardware parameters determine the learning configuration, which can
be the following cases:
1. Online learning in hardware: useHardwareInTrainingFF, useHardwareInTrainingWU and
useHardwareInTestingFF are all true
2. Offline learning in software and then classification only in hardware: useHardwareInTrainingFF and
useHardwareInTrainingWU are false, while useHardwareInTestingFF is true
3. Pure learning in software: useHardwareInTrainingFF, useHardwareInTrainingWU and
useHardwareInTestingFF are all false
For other hardware parameters, numBitInput means the number of bits of the input data. The hardware
architecture design in this released version only allows numBitInput=1 (black and white data), which
should not be changed. numBitPartialSum represents the number of bits in the digital output (partial
weighted sum output) of read circuit (ADC) at the periphery of the synaptic array. numWeightBit means
the number of weight bits for pure algorithm without consideration of hardware, and numColMuxed means
the number of columns of the synaptic array sharing one read circuit in the array. Time-multiplexing is
required if numColMuxed is greater than 1. For example, the total weighted sum latency will be increased
by roughly 16 times if numColMuxed=16. In the weight update, there might also be limited throughput
for the weight update information to be provided from outside. In this case, time-multiplexing is
implemented by setting numWriteColMuxed. For example, numWriteColMuxed=16 means updating
every row will need roughly 16 weight update operations.
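Similarly, the hardware-side settings might look as follows (illustrative values only; the parameter names
are those described above):

useHardwareInTrainingFF = true;   // all three flags true: online learning in hardware
useHardwareInTrainingWU = true;
useHardwareInTestingFF = true;
numBitInput = 1;          // must remain 1 (black and white input) in this release
numBitPartialSum = 8;     // ADC precision of the read circuit output
numWeightBit = 6;         // weight precision for the pure-algorithm case
numColMuxed = 16;         // columns sharing one read circuit (roughly 16x weighted sum latency)
numWriteColMuxed = 16;    // roughly 16 weight update operations per row update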
4) Compilation of the program
Whenever any change is made in the files, the code has to be recompiled using the make command as
stated in the Installation and Usage (Linux) section. If the compilation is successful, the following screenshot
of Fig. 16 can be expected:
Fig. 16 Output of make
9. The Benchmark Table for Different Synaptic Devices
This section shows the benchmark table for different synaptic devices. In V3.0, the technology node is
shifted from 14 nm to 32 nm for all devices, which is more realistic considering that the current industry
integration of eNVMs is mostly at 40 nm or 28 nm. In addition, more devices are added into the benchmark
table in V3.0. Among analog eNVM devices, the EpiRAM proposed by the MIT team and the TaOx/HfO2
stack proposed by the Tsinghua team are included. The MUX transistor sizes are relaxed for those devices
with small on-state resistance (Ron) to allow sufficient voltage transfer to the synaptic arrays; therefore, the
area of those arrays significantly increases. For digital synapses, STT-MRAM is added as a comparison to
SRAM. Both row-by-row read-out and parallel read-out are simulated for the STT-MRAM device.
Ref. P.-Y. Chen, S. Yu, “Technological benchmark of analog synaptic devices for neuro-inspired
architectures,” IEEE Design & Test, DOI: 10.1109/MDAT.2018.2890229
ON/OFF ratio: 12.5 | 10 | 6.84 | 4.43 | 19.8 | 50.2 | 45 | -- | 2.3
Weight increase pulse: 3.2V/300µs | 1.6V/50ns | -2V/1ms | 0.9V/100µs | 0.7V (avg.)/6µs | 5V/5µs | 3.65V (avg.)/75ns | -- | 1V/10ns
Weight decrease pulse: -2.8V/300µs | 1.6V/50ns | 2V/1ms | -1V/100µs | 3V (avg.)/125ns | -3V/5µs | -2.95V (avg.)/75ns | -- | 1V/10ns
Cycle-to-cycle variation (σ): 3.5% | 3.7% | <1% | 5% | 1.5% | 2% | <0.5% | -- | --
Online learning accuracy: ~72% | ~80% | ~33% | ~20% | 89% | 92% | 88% | ~94% | ~94% | ~94%
Area: 6292.3 µm² | 8663.1 µm² | 21760 µm² | 46565 µm² | 9144.3 µm² | 7032.6 µm² | 6292.4 µm² | 65728 µm² | 70254 µm² | 66632 µm²
Latency (optimized): 31997 s | 10.15 s | 12218 s | 470.42 s | 203.0 s | 229.6 s | 2.73 s | 5.98 s | 90.1 s (row-by-row) | 6.9 s (parallel)
Energy (optimized): 13.44 mJ | 4.01 mJ | 2.53 mJ | 15.26 mJ | 35.0 mJ | 31.01 mJ | 1.9 mJ | 15.56 mJ | 0.1467 J | 0.1462 J
Leakage power: 105.65 µW | 105.65 µW | 105.65 µW | 105.65 µW | 105.65 µW | 105.65 µW | 105.65 µW | 2.80 mW | 124.8 µW | 84.0 µW
Some remarks
1. The area cost increases by about 6 times at the 32 nm node, which is slightly larger than (32 nm/14 nm)² =
5.22. The difference can be explained by the additional periphery modules needed to support the reference weight.
2. Latency and energy consumption are determined by both the operations of the periphery circuits and the
read/write operations within the synaptic array. The latency and energy consumption of the periphery
circuits are degraded by shifting the technology from 14 nm to 32 nm. The latency and energy consumption
within the synaptic array are mainly determined by the write pulse width and write voltage, which are
assumed here to be independent of the technology node. Therefore, three trends are observed for analog
eNVM-based synaptic devices:
a. For devices with reasonably good linearity and long programming pulse widths (e.g. EpiRAM, Ag-Si,
and PCM with 1 µs-100 µs programming pulses), both latency and energy consumption are reduced
because the weight range is changed from 0 to 1 to -1 to 1. In this case, the same ΔW corresponds to
half the number of programming pulses compared with V2.0, so the write latency in the synaptic array
is reduced significantly, while the increase in latency and energy consumption of the periphery is
comparatively small. Therefore, both latency and energy consumption are reduced.
b. For devices with reasonably good linearity and short programming pulse widths (e.g. FeFET with
~75 ns programming pulses), both energy consumption and latency are increased due to the impact of
the technology node shift on the periphery circuits.
c. For devices with relatively bad linearity (e.g. PCMO), both latency and energy consumption are
increased. In V2.0, devices with bad linearity barely learn (~10% learning accuracy) and therefore
relatively few programming pulses are applied. In V3.0, the learning accuracy is slightly improved,
which means more programming pulses are applied to change the weights of the synaptic devices.
3. For online learning accuracy, good accuracy can be obtained if the following three conditions are met:
a. Good linearity. In general, devices with good linearity show good online learning accuracy. For
example, EpiRAM and PCM maintain their accuracy in both V2.0 and V3.0.
b. Small cycle-to-cycle variation. For TaOx/HfOx, even though it has good linearity, the online accuracy
is low because of its relatively large cycle-to-cycle variation.
c. Enough conductance states. This is not obvious in V2.0, but in V3.0, since the weight range is extended
from (0, 1) to (-1, 1), the weight distance between adjacent conductance states becomes larger. For a
device with good linearity and small variation but a relatively small number of conductance states
(e.g. HZO FeFET), an accuracy drop is observed in V3.0: the online learning accuracy of HZO FeFET
drops from 90% to 88%. The accuracy of AlOx/HfOx drops from 41% to 20% due to its limited number
of conductance states.
4. For devices with a low ON/OFF ratio, the online training accuracy can be improved by introducing the
reference current in V3.0.
5. For digital eNVM-based synapses, by parallelizing the read-out operation, STT-MRAM can achieve
latency comparable to SRAM and much lower leakage power than SRAM. However, the energy
consumption of STT-MRAM is still larger due to its relatively large write current.