Roadmap To Neuromorphic Computation
41 The Danish Council on Ethics, Denmark
42 Department of Food and Resource Economics, University of Copenhagen, Denmark
43 Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IU.NET, 20133 Milano, Italy
∗ Author to whom any correspondence should be addressed.
E-mail: nipr@dtu.dk
Keywords: neuromorphic computation, spiking neural networks, robotics, memristor, convolutional neural networks, self-driving cars,
deep learning
Abstract
Modern computation based on the von Neumann architecture is now a mature, cutting-edge technology.
In the von Neumann architecture, processing and memory units are implemented as separate
blocks interchanging data intensively and continuously. This data transfer is responsible for a large
part of the power consumption. The next generation of computer technology is expected to solve
problems at the exascale with 10¹⁸ calculations each second. Even though these future computers
will be incredibly powerful, if they are based on von Neumann type architectures, they will
consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in
capabilities to learn or deal with complex data as our brain does. These needs can be addressed by
neuromorphic computing systems which are inspired by the biological concepts of the human
brain. This new generation of computers has the potential to be used for the storage and processing
of large amounts of digital information with much lower power consumption than conventional
processors. Among their potential future applications, an important niche is moving the control
from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present
state of neuromorphic technology and provide an opinion on the challenges and opportunities that
the future holds in the major areas of neuromorphic technology, namely materials, devices,
neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a
collection of perspectives where leading researchers in the neuromorphic community provide their
own view about the current state and the future challenges for each research area. We hope that this
roadmap will be a useful resource by providing a concise yet comprehensive introduction to
readers outside this field, for those who are just entering the field, as well as providing future
perspectives for those who are well established in the neuromorphic computing community.
Contents
Introduction
Section 1. Materials and devices
1. Phase-change memory devices
2. Ferroelectric devices
3. Valence change memory
4. Electrochemical metallization cells
5. Nanowire networks
6. 2D materials
7. Organic materials
Section 2. Neuromorphic circuits
8. Spintronics
9. Deep learning
10. Spiking neural networks
11. Emerging hardware approaches for optimization
12. Enabling technologies for the future heterogeneous neuromorphic accelerators
13. Photonics
14. Large-scale neuromorphic computing platforms
Section 3. Neuromorphic algorithms
15. Learning in spiking neural networks
16. Learning-to-learn for neuromorphic hardware
17. Computational neuroscience
18. Stochastic computing
19. Convolutional spiking neural networks
Neuromorph. Comput. Eng. 2 (2022) 022501 Roadmap
Introduction
N Pryds¹, Dennis V Christensen¹, Bernabe Linares-Barranco², Daniele Ielmini³ and Regina Dittmann⁴
¹ Technical University of Denmark
² Instituto de Microelectrónica de Sevilla, CSIC and University of Seville
³ Politecnico di Milano and IU.NET
⁴ Forschungszentrum Jülich GmbH
Computers have become essential to all aspects of modern life and are omnipresent all over the globe.
Today, data-intensive applications have placed a high demand on hardware performance, in terms of short
access latency, high capacity, large bandwidth, low cost, and ability to execute artificial intelligence (AI) tasks.
However, the ever-growing pressure of big data creates additional challenges. On the one hand, energy con-
sumption has become a major challenge, due to the rapid development of sophisticated algorithms and
architectures. Currently, about 5%–15% of the world’s energy is spent in some form of data manipulation,
such as transmission or processing [1], and this fraction is expected to rapidly increase due to the exponential
increase of data generated by ubiquitous sensors in the era of internet of things. On the other hand, data pro-
cessing is increasingly limited by the memory bandwidth due to the von Neumann architecture, with physical
separation between processing and memory units. While the von Neumann computer architecture has made an
incredible contribution to the world of science and technology for decades, its performance is largely inefficient
due to the relatively slow and energy demanding data movement.
Conventional von Neumann computers based on complementary metal oxide semiconductor (CMOS)
technology do not possess the intrinsic capabilities to learn or deal with complex data as the human brain
does. To address the limits of digital computers, there are significant research efforts worldwide in developing
profoundly different approaches inspired by biological principles. One of these approaches is the development
of neuromorphic systems, namely computing systems mimicking the type of information processing in the
human brain.
The term ‘neuromorphic’ was originally coined in the 1990s by Carver Mead to refer to mixed signal
analog/digital very large scale integration computing systems that take inspiration from the neuro-biological
architectures of the brain [2]. ‘Neuromorphic engineering’ emerged as an interdisciplinary research field that
focused on building electronic neural processing systems to directly ‘emulate’ the bio-physics of real neurons
and synapses [3]. More recently, the definition of the term neuromorphic has been extended in two addi-
tional directions [4]. Firstly, the term neuromorphic was used to describe spike-based processing systems
engineered to explore large-scale computational neuroscience models. Secondly, neuromorphic computing
comprises dedicated electronic neural architectures that implement neuron and synapse circuits. Note that
this concept is distinct from AI machine learning approaches which are based on pure software algorithms
developed to minimize the recognition error in pattern recognition tasks [5]. However, a precise definition of
neuromorphic computing is somewhat debated. It can range from very strict high-fidelity mimicking of neu-
roscience principles where very detailed synaptic chemical dynamics are mandatory, to very vague high-level
loosely brain-inspired principles, such as the simple vector (input) times matrix (synapses) multiplication. In
general, as of today, there is a wide consensus that neuromorphic computing should at least encompass some
time-, event-, or data-driven computation. In this sense, systems like spiking neural networks (SNN), some-
times referred to as the third generation of neural networks [6], are strongly representative. However, there
is an important cross-fertilization between the technologies required to develop efficient SNNs and those for
more traditional non-SNN, referred to as artificial neural networks (ANN), which are typically more time-
step-driven. While the former definition of neuromorphic computing is more plausible, in this roadmap we
aim at broadening the scope to emphasize the cross-fertilization between ANN and SNN.
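To make the event-driven character of SNNs concrete, a minimal leaky integrate-and-fire neuron can be sketched in a few lines. This is an illustrative toy model only; the weight, leak, and threshold values are invented and do not correspond to any particular hardware platform.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates weighted input spikes, decays ("leaks") between events, and
# emits an output spike when it crosses a threshold. All parameter values
# are illustrative only.
def lif_run(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """input_spikes: sequence of 0/1 per time step; returns output spike train."""
    v = 0.0
    out = []
    for s in input_spikes:
        v = leak * v + weight * s      # leaky integration of the input
        if v >= threshold:             # threshold crossing -> emit a spike
            out.append(1)
            v = 0.0                    # reset after firing
        else:
            out.append(0)
    return out

spikes = lif_run([1, 1, 0, 0, 1, 1, 1, 0])
```

The output depends on the timing of the input spikes, not only their count, which is the spatio-temporal coding property that distinguishes SNNs from the time-step-driven ANNs discussed above.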
Nature is a vital inspiration for the advancement toward a more sustainable computing scenario, where neuro-
morphic systems display much lower power consumption than conventional processors, due to the integration
of non-volatile memory and analog/digital processing circuits as well as the dynamic learning capabilities in
the context of complex data. Building ANNs that mimic a biological counterpart is one of the remaining chal-
lenges in computing. If the fundamental technical issues are solved in the next few years, the neuromorphic
computing market is projected to rise from $0.2 billion in 2025 to $22 billion in 2035 [7] as neuromorphic
computers with ultra-low power consumption and high speed advance and drive demands for neuromorphic
devices.
In line with these increasingly pressing issues, the general aim of the roadmap on neuromorphic computing
and engineering is to provide an overview of the different fields of research and development that contribute to
the advancement of the field, to assess the potential applications of neuromorphic technology in cutting edge
technologies and to highlight the necessary advances required to reach these. The roadmap addresses:
• Neuromorphic materials and devices
• Neuromorphic circuits
• Neuromorphic algorithms
• Applications
• Ethics
Neuromorphic materials and devices: To advance the field of neuromorphic computing and engineering,
the exploration of novel materials and devices will be of key relevance in order to improve the power effi-
ciency and scalability of state-of-the-art CMOS solutions in a disruptive manner [4, 8]. Memristive devices,
which can change their conductance in response to electrical pulses [9–11], are promising candidates to act
as energy- and space-efficient hardware representation for synapses and neurons in neuromorphic circuits.
Memristive devices have originally been proposed as binary non-volatile random-access memory and research
in this field has been mainly driven by the search for higher performance in solid-state drive technologies (e.g.,
flash replacement) or storage class memory [12]. However, thanks to their analog tunability and complex
switching dynamics, memristive devices also enable novel computing functions such as analog computing
or the realisation of brain-inspired learning rules. A large variety of different physical phenomena has been
reported to exhibit memristive behaviour, including electronic effects, ionic effects as well as structural or
ferroic ordering effects. The material classes range from magnetic alloys, metal oxides and chalcogenides to 2D
van der Waals materials or organic materials. Within this roadmap, we cover a broad range of materials and
phenomena with different maturity levels with respect to their use in neuromorphic circuits. We consider
emerging memory devices that are already commercially available as binary non-volatile memory such as
phase-change memory (PCM), magnetic random-access memory, ferroelectric memory as well as redox-based
resistive random-access memory and review their prospects for neuromorphic computing and engineering. We
complement it with nanowire networks, 2D materials, and organic materials that are less mature but may offer
extended functionalities and new opportunities for flexible electronics or 3D integration.
Neuromorphic circuits: Neuromorphic devices can be integrated with conventional CMOS transistors to
develop fully functional neuromorphic circuits. A key element in neuromorphic circuits is their non-von Neu-
mann architecture, for instance consisting of multiple cores each implementing distributed computing and
memory. Both SNNs, adopting spikes to represent, exchange and compute data in analogy to action potentials
in the brain, as well as circuits that are only loosely inspired by the brain, such as ANNs, are generally included
in the roster of neuromorphic circuits, thus will be covered in this roadmap. Regardless of the specific learning
and processing algorithm, a key processing element in neuromorphic circuits is the neural network, including
several synapses and neurons. Given the central role of the neural network, a significant research effort is cur-
rently aimed at technological solutions to realize dense, fast, and energy-efficient neural networks by in-memory
computing [13]. For instance, a memory array can accelerate the matrix-vector multiplication (MVM) [14].
This is a common feature of many neuromorphic circuits, including spiking and non-spiking networks, and
takes advantage of Ohm’s and Kirchhoff’s laws to implement multiplication and summation in the network.
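The Ohm/Kirchhoff mechanism just described can be sketched in a few lines of code: each device contributes a current given by Ohm's law, and Kirchhoff's current law sums the contributions along each row, so the array computes a matrix-vector product in a single step. The conductance and voltage values below are invented purely for illustration, and the model is idealized (no wire resistance, no device variability).

```python
# Idealized memristive crossbar: each cross-point stores a conductance
# G[i][j]. Applying voltages V[j] to the columns, Ohm's law gives the
# per-device currents G[i][j]*V[j], and Kirchhoff's current law sums them
# along each row, i.e. the array physically computes I = G @ V in one step.
def crossbar_mvm(G, V):
    return [sum(g * v for g, v in zip(row, V)) for row in G]

G = [[1e-6, 2e-6],   # conductances in siemens (illustrative values)
     [3e-6, 4e-6]]
V = [0.1, 0.2]       # input voltages in volts
I = crossbar_mvm(G, V)   # row currents in amperes
```

In hardware, the multiply-accumulate happens in the analog domain at every cross-point simultaneously; the loop here only mimics the physics.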
The MVM crosspoint circuit allows for the straightforward hardware implementation of synaptic layers with
high density, high real-time processing speed, and high energy efficiency, although the accuracy is challenged
by stochastic variations in memristive devices in particular, and analog computing in general. An additional cir-
cuit challenge is the mixed analog-digital computation, which results in the need for large and energy-hungry
analog-digital converter circuits at the interface between the analog crosspoint array and the digital system.
Finally, neuromorphic circuits seem to take the most benefit from hybrid integration, combining front-end
CMOS technology with novel memory devices that can implement MVM and neuro-biological functions,
such as spike integration, short-term memory, and synaptic plasticity [15]. Hybrid integration may also need
to extend, in the long term, to alternative nanotechnology concepts, such as bottom-up nanowire networks
[16], and alternative computing concepts, such as photonic [17] and even quantum computing [18], within a
single system or even a single chip with 3D integration. In this scenario, a roadmap for the development and
assessment of each of these individual innovative concepts is essential.
Neuromorphic algorithms: A fundamental challenge in neuromorphic engineering for real application sys-
tems is to train them directly in the spiking domain in order to be more energy-efficient, more precise, and
also be able to continuously learn and update the knowledge on the portable devices themselves without rely-
ing on heavy cloud computing servers. Spiking data tend to be sparse with some stochasticity and embedded
noise, interacting with non-ideal non-linear synapses and neurons. Biology knows how to use all this to its
advantage to efficiently acquire knowledge from the surrounding environment. In this sense, computational
neuroscience can be a key ingredient to inspire neuromorphic engineering, and learn from this discipline how
brains perform computations at a variety of scales, from small neuron ensembles, through mesoscale aggregations, up
to full tissues, brain regions and the complete brain interacting with peripheral sensors and motor actuators.
On the other hand, fundamental questions arise on how information is encoded in the brain using nervous
spikes. To maximize energy efficiency for both processing and communication, the brain is believed to maxi-
mize the information carried per spike [19]. The challenge is therefore to unravel this encoding and processing by
exploiting spatio-temporal signal processing, maximizing information while minimizing energy, latency, and
resources.
Applications: The realm of applications for neuromorphic computing and engineering continues to grow
at a steady rate, although remaining within the boundaries of research and development. While it is becom-
ing clear that many applications are well suited to neuromorphic computing and engineering, it is also
important to identify new potential applications to further understand how neuromorphic materials and
hardware can address them. The roadmap includes some of these emerging applications as examples of
biologically-inspired computing approaches for implementation in robots, autonomous transport capabil-
ity or in perception engineering where the applications are based on integration with sensory modalities of
humans.
Ethics: While the future development and application of neuromorphic systems offer possibilities beyond
the state of the art, the progress should also be addressed from an ethical point of view where, e.g., lack of
transparency in complex neuromorphic systems and autonomous decision making can be a concern. The
roadmap thus ends with a final section addressing some of the key ethical questions that may arise in the wake
of advancements in neuromorphic computation.
We hope that this roadmap provides an overview and updated picture of the current state of the art,
as well as a projection of the future, in these exciting research areas. Each contribution, written by leading
researchers in their topic, provides the current state of the field, the open challenges, and a future perspective.
This should guide the expected transition towards efficient neuromorphic computations and highlight the
opportunities for societal impact in multiple fields.
Acknowledgements
DVC and NP acknowledge funding from the Novo Nordisk Foundation Challenge Programme for the BioMag
project (Grant No. NNF21OC0066526), Villum Fonden, for the NEED project (00027993), Danish Coun-
cil for Independent Research Technology and Production Sciences for the DFF Research Project 3 (Grant
No. 00069B), the European Union’s Horizon 2020, Future and Emerging Technologies (FET) programme
(Grant No. 801267) and Danish Council for Independent Research Technology and Production Sciences for
the DFF-Research Project 2 (Grant No. 48293). RD acknowledges funding from the German Science foun-
dation within the SFB 917 ‘Nanoswitches’, by the Helmholtz Association Initiative and Networking Fund
under Project Number SO-092 (Advanced Computing Architectures, ACA), the Federal Ministry of Edu-
cation and Research (project NEUROTEC Grant No. 16ES1133K) and the Marie Sklodowska-Curie H2020
European Training Network, ‘Materials for neuromorphic circuits’ (MANIC), grant Agreement No. 861153.
BLB acknowledges funding from the European Union’s Horizon 2020 (Grants 824164, 871371, 871501, and
899559). DI acknowledges funding from the European Union’s Horizon 2020 (Grants 824164, 899559 and
101007321).
1. Phase-change memory devices

1.1. Status
PCM exploits the behaviour of certain phase-change materials, typically compounds of Ge, Sb and Te, that
can be switched reversibly between amorphous and crystalline phases of different electrical resistivity [20]. A
PCM device consists of a certain nanometric volume of such phase change material sandwiched between two
electrodes (figure 1).
In recent years, PCM devices have been explored for brain-inspired or neuromorphic computing, mostly by
exploiting the physical attributes of these devices to perform certain associated computational primitives in-
place in the memory itself [13, 21]. One of the key properties of PCM that enables such in-memory computing
(IMC) is simply the ability to store two levels of resistance/conductance values in a non-volatile manner and to
reversibly switch from one level to the other (binary storage capability). This property facilitates in-memory
logical operations enabled through the interaction between the voltage and resistance state variables [21].
Applications of in-memory logic include database query [22] and hyper-dimensional computing [23].
Another key property of PCM that enables IMC is its ability to achieve not just two levels but a continuum of
resistance values (analogue storage capability) [20]. This is typically achieved by creating intermediate phase
configurations through the application of partial RESET pulses. The analogue storage capability facilitates
the realization of MVM operations in O(1) time complexity by exploiting Kirchhoff’s circuit laws. The most
prominent application for this is DNN inference [24]. It is possible to map each synaptic layer of a DNN to a
crossbar array of PCM devices. There is a widening industrial interest in this application owing to the promise
of significantly improved latency and energy consumption with respect to existing solutions. These in-memory
MVM operations also enable non-neuromorphic applications such as linear solvers and compressed sensing
recovery [21].
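Since device conductances are non-negative while DNN weights are signed, mapping a synaptic layer onto a crossbar typically uses a pair of devices per weight, with the weight encoded as the difference of the two conductances. The sketch below illustrates this common differential scheme under a simple linear scaling; the conductance window `G_MIN`/`G_MAX` and the mapping are assumptions for illustration, not the scheme of any particular chip.

```python
# Signed weights cannot be stored in a single non-negative conductance, so a
# common scheme (sketched here under assumed parameters) uses a device pair
# per weight: w is proportional to G_plus - G_minus, with conductances held
# inside an assumed programmable window [G_MIN, G_MAX].
G_MIN, G_MAX = 0.0, 25e-6   # siemens, illustrative window

def weight_to_pair(w, w_max):
    """Map w in [-w_max, w_max] onto a (G_plus, G_minus) conductance pair."""
    g = min(abs(w) / w_max, 1.0) * (G_MAX - G_MIN) + G_MIN
    return (g, G_MIN) if w >= 0 else (G_MIN, g)

def pair_to_weight(gp, gm, w_max):
    """Recover the effective weight from a differential conductance pair."""
    return (gp - gm) / (G_MAX - G_MIN) * w_max

gp, gm = weight_to_pair(-0.5, w_max=1.0)
w_back = pair_to_weight(gp, gm, w_max=1.0)   # close to -0.5
```

In a real array the two devices sit in adjacent columns whose currents are subtracted, so the differential encoding also cancels part of the common-mode drift.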
The third key property that enables IMC is the accumulative property arising from the crystallization kinet-
ics. This property can be utilized to implement DNN training [25, 26]. It is also the central property that is
exploited for realizing local learning rules like spike-timing-dependent plasticity in SNN [27, 28]. In both cases,
the accumulative property is exploited to implement the synaptic weight update in an efficient manner. It has
also been exploited to emulate neuronal dynamics [29].
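The accumulative property can be pictured with a toy model in which each partial SET pulse crystallizes a little more of the cell, pushing its conductance toward a saturation level so that identical pulses produce progressively smaller increments (the nonlinearity figure 2(c) refers to). The saturation level and update rate below are invented for illustration and omit the stochasticity of real devices.

```python
# Toy model of the accumulative property: each SET pulse partially
# crystallizes the cell and moves its conductance toward a saturation
# level g_max, so equal pulses yield progressively smaller increments.
# g_max and rate are illustrative, and device stochasticity is omitted.
def apply_set_pulses(g, n_pulses, g_max=20e-6, rate=0.2):
    history = [g]
    for _ in range(n_pulses):
        g = g + rate * (g_max - g)   # saturating, nonlinear increment
        history.append(g)
    return history

h = apply_set_pulses(0.0, 10)   # conductance after each of 10 pulses
```

The shrinking increments are exactly what makes blind (open-loop) weight updates imprecise, motivating the mitigation strategies discussed in the challenges below.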
Note that PCM is at a very high maturity level of development, with products already on the market and a
well-established roadmap for scaling. This fact, together with the ease of embedding PCM on logic platforms
(embedded PCM) [30], makes this technology of unique interest for neuromorphic computing and IMC in
general.
Figure 1. Key physical attributes that enable neuromorphic computing. (a) Non-volatile binary storage facilitates in-memory
logical operations relevant for applications such as hyper-dimensional computing. (b) Analog storage enables efficient
matrix-vector multiply (MVM) operations that are key to applications such as deep neural network (DNN) inference. (c) The
accumulative behaviour facilitates applications such as DNN training and emulation of neuronal and synaptic dynamics in SNN.
Figure 2. Key challenges associated with PCM devices. (a) The SET/RESET conductance values exhibit broad distributions, which
is detrimental for applications such as in-memory logic. (b) The drift and noise associated with analogue conductance values
result in imprecise matrix-vector multiply operations. (c) The nonlinear and stochastic accumulative behaviour results in
imprecise synaptic weight updates.
integration density is also limited by the access device, which could be a selector in the backend-of-the-line
(BEOL) or front-end bipolar junction transistors (BJT) or metal-oxide-semiconductor field effect transistors
(MOSFET). The threshold voltage must be overcome when SET operations are performed, so the access device
must be able to manage voltages at least as high as the threshold voltage. While MOSFET selector size is mainly
determined by the PCM RESET current, the BJT and BEOL selectors can guarantee a minimum cell size of
4F², leading to very high density [34]. However, BEOL selector-based arrays have some drawbacks in terms of
precise current control, while the management of parasitic drops is more complex for BJT-based arrays [35].
conductance is mostly determined by the projection segment that appears parallel to the amorphous phase-
change segment. Recently, it was shown that it is possible to achieve remarkably high precision in-memory
scalar multiplication (equivalent to 8 bit fixed point arithmetic) using projected PCM devices [38]. These
projected PCM devices also facilitate array-level temperature compensation schemes. Alternate multi-layered
PCM devices have also been proposed that exhibit substantially lower drift [39].
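Conductance drift is commonly modelled as a power law, G(t) = G(t₀)(t/t₀)^(−ν); because the drift exponent ν is similar across devices, a single time-dependent scale factor applied to the array output can compensate much of it, which is the idea behind the array-level compensation schemes mentioned above. The sketch below uses an illustrative ν; the exact value varies with the device state.

```python
# Power-law model of PCM conductance drift: G(t) = G(t0) * (t/t0)**(-nu).
# Since nu is roughly shared across devices, one global scale factor applied
# to the array readout can largely undo the drift. nu = 0.05 is illustrative.
def drifted(g0, t, t0=1.0, nu=0.05):
    return g0 * (t / t0) ** (-nu)

g0 = 10e-6                               # programmed conductance, siemens
g_late = drifted(g0, t=1e6)              # conductance after drift (t in s)
g_corr = g_late * (1e6 / 1.0) ** 0.05    # global compensation factor
```

Residual errors come from device-to-device spread in ν, which is why projected PCM devices, with their much weaker drift, improve MVM precision at the source rather than by correction.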
There is a perennial focus on reducing the RESET current by scaling the switchable volume of the
PCM device, either by shrinking the overall dimensions of the device in a confined geometry or by scaling the
bottom electrode dimensions of a mushroom-type device. The exploration of new material classes such as
single elemental antimony could help with the scaling challenge [40].
The limited endurance and various other non-idealities associated with the accumulative behaviour, such as
limited dynamic range, nonlinearity and stochasticity, can be partially circumvented with multi-PCM synaptic
architectures. Recently, a multi-PCM synaptic architecture was proposed that employs an efficient counter-
based arbitration scheme [41]. However, to improve the accumulation behaviour at the device level, more
research is required on the effect of device geometries as well as the randomness associated with crystal growth.
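The multi-PCM idea can be illustrated with a short sketch: the effective weight is the summed conductance of several devices, and a simple modulo counter decides which device receives each update pulse, spreading programming stress and averaging out device-level nonlinearity. This is a simplified illustration of the concept, not the exact arbitration scheme of the cited work.

```python
# Sketch of a multi-PCM synapse: the effective weight is the summed
# conductance of N devices, and a modulo counter arbitrates which device
# receives each update pulse. Simplified illustration of the concept only,
# not the exact published counter-based arbitration scheme.
class MultiPCMSynapse:
    def __init__(self, n_devices=4):
        self.g = [0.0] * n_devices
        self.counter = 0

    def update(self, delta_g):
        i = self.counter % len(self.g)   # counter-based device selection
        self.g[i] += delta_g
        self.counter += 1

    def weight(self):
        return sum(self.g)               # devices contribute in parallel

syn = MultiPCMSynapse()
for _ in range(8):
    syn.update(1e-6)                     # updates spread evenly over devices
```

Because each device absorbs only a fraction of the updates, the dynamic range and endurance of the composite synapse scale roughly with the number of devices.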
Besides conventional electrical PCM devices, photonic memory devices based on phase-change materials,
which can be written, erased, and accessed optically, are rapidly bridging a gap towards all-photonic chip-scale
information processing. By integrating phase-change materials onto an integrated photonics chip, the analogue
multiplication of an incoming optical signal by a scalar value encoded in the state of the phase change material
was achieved [42]. It was also shown that by exploiting wavelength division multiplexing, it is possible to
perform convolution operations in a single time step [43]. This creates opportunities to design phase-change
materials that undergo faster phase transitions and have a higher optical contrast between the crystalline and
amorphous phases [44].
This work was supported in part by the European Research Council through the European Union’s Horizon
2020 Research and Innovation Programme under Grant No. 682675.
2. Ferroelectric devices
2.1. Status
Ferroelectricity was first discovered in 1920 by Valasek in Rochelle salt [45] and describes the ability of a
non-centrosymmetric crystalline material to exhibit a permanent and switchable electrical polarization due
to the formation of stable electric dipoles. Historically, the term ferroelectricity stems from the analogous
behavior with the magnetization hysteresis of ferromagnets when plotting the ferroelectric polarization versus
the electrical field. Regions of opposing polarization are called domains. The polarization direction of such
domains can be switched typically by 180◦ but, based on the crystal structure, other angles are also possible.
Since the discovery of the stable ferroelectric barium titanate in 1943, ferroelectrics have found application
in capacitors in the electronics industry. As early as the 1950s, ferroelectric capacitor (FeCAP) based memories
(FeRAM) were proposed [46], where the information is stored as the polarization state of the ferroelectric
material. Read and write operations are performed by applying an electric field larger than the coercive field EC.
The destructive read operation determines the switching current of the FeCAP upon polarization reversal, thus
requiring a write-back operation after readout. Thanks to the development of mature processing techniques
for ferroelectric lead zirconate titanate (PZT), FeRAMs have been commercially available since the early 1990s
[47]. However, the need for a sufficiently large capacitor, together with the limited thin-film manufacturability
of perovskite materials has so far restricted their use to niche applications [48].
The ferroelectric field-effect transistor (FeFET), proposed in 1957 [49], features a FeCAP as gate
insulator, modulating the transistor's threshold voltage, which can be sensed non-destructively by measuring the
drain-source current. Perovskite-based FeFET memory arrays with up to 64 kbit have been demonstrated [50].
However, due to difficulties in the technological implementation, limited scalability and data retention issues,
no commercial devices became available.
The ferroelectric tunneling junction (FTJ) was proposed by Esaki et al in the 1970s as a 'polar switch' [51]
and was first demonstrated in 2009 using a BaTiO3 ferroelectric layer [52]. The FTJ features a ferroelectric
layer sandwiched between two electrodes, thus modifying the tunneling electro-resistance. A polarization-
dependent current is measured non-destructively when applying electrical fields smaller than EC .
Since the fortuitous discovery of ferroelectricity in hafnium oxide (HfO2) in 2008 and its first publication
in 2011 [53], the well-established and CMOS-compatible fluorite-structure material has been extensively stud-
ied and has recently gained a lot of interest in the field of nonvolatile memories and beyond von-Neumann
computing [54, 55] (figure 3).
Figure 3. The center shows two typical ferroelectric crystals and the corresponding P–V hysteresis curve. The top figure illustrates
(a) FeCAP based FeRAM, the figure on the bottom left shows a FeFET and the bottom right an FTJ.
larger cross-bar structures [60]. However, increasing the ratio between the on-current density and the self-
capacitance of FTJ devices turns out to be one of the main challenges to increasing the reading speed for these
devices. The tunneling current densities depend strongly on the thickness of the ferroelectric layer and the
composition of the multi-layer stacks. The formation of very thin ferroelectric layers is hindered by uninten-
tional formation of interfacial dead layers towards the electrodes and increasing leakage currents due to defects
and grain-boundaries in the poly-crystalline thin films.
Figure 4. Main elements of a neural network. Neurons can be realized using scaled-down FeFETs [55] while synapses can be realized using FTJs [54] or medium- to large-scale FeFETs. Adapted with permission from [54]. Copyright (2020) American Chemical Society and [55]. Copyright (2018) the Royal Society of Chemistry.
fractions. Moreover, ferroelectric grains that differ in size or orientation of the polarization axis, electronically
active defects as well as grain size dependent surface energy effects give rise to the formation of ferroelectric
domains that possess different electrical properties in terms of coercive field EC (typical values ∼1 MV cm−1 )
or remnant polarization Pr (typical values 10–40 μC cm−2 ) with impact on the device-to-device variability and
the gradual switching properties that are important especially for analog synaptic devices. Some drawbacks of
the poly-crystallinity of ferroelectric HfO2 - and ZrO2 -based thin films could be tackled by the development of
epitaxial growth of monocrystalline ferroelectric layers [62] where domains might extend over a larger area.
The case of FTJs in particular demonstrates the effect of domain-wall motion, which might allow a more gradual and analogue switching behavior even in small-scale devices. The utilization of an anti-ferroelectric hysteretic
switching that was demonstrated in ZrO2 thin films bears the potential to overcome some limitations that are
related to the high coercive field of ferroelectric HfO2 , such as operation voltages being larger than the typical
core voltages in modern CMOS technologies or the limited cycling endurance [63].
Finally, in addition to the very encouraging results obtained with ferroelectric HfO2, another promising material emerged in 2019: AlScN, a piezoelectric material already used in and compatible with semiconductor processing, was shown to be ferroelectric [64] (figure 4).
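As figure 4 suggests, an array of analogue ferroelectric synapses naturally computes a vector-matrix product: Ohm's law multiplies each input voltage by a device conductance, and Kirchhoff's current law sums the contributions along each column. A minimal sketch with entirely hypothetical conductance and voltage values:

```python
# Synaptic weights stored as device conductances G[i][j] (in siemens) at the
# crossing of input row i and output column j; all values are hypothetical.
G = [[1.0e-6, 5.0e-7],
     [2.0e-6, 1.0e-6],
     [5.0e-7, 3.0e-6]]

V_in = [0.10, 0.20, 0.05]   # read voltages applied to the rows (V)

# Kirchhoff's current law: each column current is an analogue multiply-accumulate,
# I_out[j] = sum_i V_in[i] * G[i][j], computed in a single read step in hardware.
I_out = [sum(v * G[i][j] for i, v in enumerate(V_in)) for j in range(len(G[0]))]
```

The same principle underlies any memristive or ferroelectric crossbar accelerator: the multiply-accumulate happens in the physics of the array rather than in a separate arithmetic unit.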
Acknowledgements
This work was financially supported out of the State budget approved by the delegates of the Saxon State
Parliament.
3.1. Status
Resistive random access memories (RRAMs), also named memristive devices, change their resistance state
upon electrical stimuli. They can store and compute information at the same time, thus enabling in-memory
and brain-inspired computing [13, 65]. RRAM devices relying on oxygen ion migration effects and subsequent
valence changes are named valence change memory (VCM) [66]. They have been proposed to implement
electronic synapses in hardware neural networks, due to the ability to adapt their strength (conductance)
in an analogue fashion as a function of incoming electrical pulses (synaptic plasticity), leading to long-term (and short-term) potentiation and depression. In addition, learning rules such as spike-time- or spike-rate-dependent plasticity, paired-pulse facilitation or voltage-threshold-based plasticity have been demonstrated;
the stochasticity of the switching process has been exploited for stochastic update rules [67–69]. Most of the
VCM devices are based on a two-terminal configuration, and the switching geometry involves either confined
filamentary, or interfacial regions (figure 5(A)). Filamentary VCMs are today the most advanced in terms of
integration and scaling. Their switching mechanism relies on the creation and rupture of conductive filaments
(CF), formed by a localized concentration of defects, shorting the two electrodes. The modulation/control of
the CF diameter and/or CF dissolution can lead to two or multiple stable resistance states [70, 71]. Proto-
types of neuromorphic chips have been recently shown, integrating HfOx and TaOx -based filamentary-VCM
as synaptic nodes in combination with CMOS neurons [72–74]. In interfacial VCM devices, the conductance
scales with the junction area of the device, and the mechanism is related to a homogeneous oxygen ion movement through the oxides, either at the electrode/oxide or oxide/oxide interface. Reference material systems are based on complex oxides, such as bismuth ferrite [75] and praseodymium calcium manganite [76], or bilayer stacks, e.g. TiO2/TaO2 [77] and a-Si/TiO2 [78]. Finally, three-terminal VCM redox transistors have been
recently studied (figure 5(A) right), where the switching mechanism is related to the control of the oxygen
vacancy concentration in the bulk of the transistor channel [79, 80]. While interfacial and redox-transistor devices are today at a low technological readiness level, and most studies are reported at the single-device level, they promise future advances in neuromorphic computing in terms of analogue control, higher resistance values, improved reliability and reduced stochasticity with respect to filamentary devices [81]. To design neuromorphic circuits including VCM devices, compact models are required. For filamentary devices, compact models including variability are available [81, 82], but they are lacking for interfacial VCM and redox-based transistors.
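To illustrate what such a compact model looks like, the sketch below follows the common gap-based behavioural approach for filamentary VCM (tunneling-like transport, field-driven modulation of a tunneling gap). All parameter values are hypothetical placeholders, not calibrated to any device in [81, 82].

```python
import math

class FilamentaryVCM:
    """Minimal behavioural compact model of a filamentary VCM cell (sketch)."""

    def __init__(self, gap_nm=1.5):
        self.gap = gap_nm                  # tunneling gap (nm): the state variable
        self.gap_min, self.gap_max = 0.2, 1.8

    def current(self, v):
        # Transport: exponential in the gap, sinh-shaped in the voltage.
        i0, g0, v0 = 1e-4, 0.4, 0.3        # fitting constants (hypothetical)
        return i0 * math.exp(-self.gap / g0) * math.sinh(v / v0)

    def apply_pulse(self, v, dt=1e-8):
        # Field-driven ion migration: positive bias grows the filament
        # (shrinks the gap); negative bias dissolves it. Kinetics hypothetical.
        rate_nm_per_s, v_act = 1e7, 0.5
        self.gap -= rate_nm_per_s * math.sinh(v / v_act) * dt
        self.gap = min(max(self.gap, self.gap_min), self.gap_max)

cell = FilamentaryVCM()
i_hrs = cell.current(0.1)          # read in the high-resistance state
for _ in range(5):
    cell.apply_pulse(+1.0)         # SET pulses grow the filament
i_lrs = cell.current(0.1)          # read after potentiation: higher current
```

Calibrated versions of such models add temperature dependence and variability terms; for interfacial and redox-transistor devices an analogous state-variable formulation still has to be established.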
Figure 5. (A) Sketch of the three types of VCM devices (filamentary, interfacial and redox transistor). (B) Possible functionalities
that can be implemented by VCM devices, namely binary memory (left), analog/multilevel (centre) and stochastic (right)
memory. In the figures, the device resistance evolution is plotted as a function of applied electrical stimuli (pulses). (C) Schematic
drawing of some of the interesting properties of VCM for neuromorphic applications, i.e. synaptic plasticity dynamics and type of
memory with different long or short retention scales (LTM, STM). Many experimental VCM devices show a non-linear and
asymmetric modulation of the conductance (G) update, but the plasticity dynamics can also be modulated by programming strategies or materials engineering.
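The non-linear, asymmetric conductance update sketched in figure 5(C) is often captured with a phenomenological saturating update rule, in which the step size shrinks as the device approaches its conductance bound. A sketch with hypothetical parameters:

```python
# Saturating potentiation/depression model for an analogue VCM synapse.
# G_MIN, G_MAX and ALPHA are illustrative values, not fitted to any device.
G_MIN, G_MAX = 1e-6, 1e-5   # conductance bounds (S)
ALPHA = 0.03                # fractional step towards the bound per pulse

def potentiate(g):
    # Step proportional to the remaining headroom: early pulses change G most.
    return g + ALPHA * (G_MAX - g)

def depress(g):
    return g - ALPHA * (g - G_MIN)

g = G_MIN
trace = []
for _ in range(50):         # 50 identical potentiating pulses
    g = potentiate(g)
    trace.append(g)

first_step = trace[1] - trace[0]
last_step = trace[-1] - trace[-2]   # much smaller: the update is non-linear
```

Because identical pulses produce unequal conductance steps, weight updates in learning algorithms have to compensate for this non-linearity, which is one reason linearized programming strategies are actively studied.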
VCM arrays need to be further addressed. Simulation models for interfacial VCM are not available yet and
need to be developed.
Redox-based VCM transistors have so far been shown only at the single-device level [79, 80]. Thus, reliable statistical data on cycle-to-cycle variability, device-to-device variability and the stability of the programmed states are not yet available. Moreover, the trade-off between switching speed and voltage has not been studied in detail.
Another challenge is the understanding of the switching mechanism and the development of suitable models
for circuit design.
The open challenges for all three types of VCM devices are summarized in table 1.
Table 1. Summary of status and open challenges of the three types of VCM devices.
Binary
- Filamentary: available, with good endurance (>10⁶–10⁹ cycles) and retention (>years).
- Interfacial: possible, but lack of statistical data; endurance and long retention to be optimized.
- Redox transistor: very new devices, mostly proposed for multilevel applications; lack of statistical data.
Long-term memory (LTM)
- Filamentary: yes; retention at high T and for 6–10 years, depending on the R levels.
- Interfacial: possible; lack of statistical data on arrays, single-device retention up to years for some material stacks.
- Redox transistor: possible; few studies, lack of statistical data.
Short-term memory (STM)
- Filamentary: usually difficult to achieve a controlled decay.
- Interfacial: possible, to be further addressed and optimized.
- Redox transistor: possible, to be further addressed; few studies.
desirable to identify a reference material system with a robust switching mechanism supported by a comprehensive understanding and modelling, from the underlying physics to compact and circuit models. Indeed, the modelling of these devices is still in its infancy. One open question for both device types is the trade-off between data retention and switching speed. In contrast to filamentary devices, the motion of the ions is probably not accelerated by Joule heating. Thus, the voltage needs to be increased further than in filamentary devices to operate them at high speed [85]. This might limit the application of these devices to a certain time domain, as the CMOS might not be able to provide the required voltage. Using thinner device layers or material engineering could address this issue.
Acknowledgements
This work was partially supported by the Horizon 2020 European projects MeM-Scales (Grant No. 871371),
MNEMOSENE (Grant No. 780215), and NEUROTECH (Grant No. 824103); in part by the Deutsche
Forschungsgemeinschaft (SFB 917); in part by the Helmholtz Association Initiative and Networking Fund
under Project Number SO-092 (Advanced Computing Architectures, ACA) and in part by the Federal Min-
istry of Education and Research (BMBF, Germany) in the project NEUROTEC (Project Numbers 16ES1134
and 16ES1133K).
Ilia Valov
Research Centre Juelich
4.1. Status
Electrochemical metallization memories (ECMs) were introduced into nanoelectronics by Kozicki et al [86, 87] under the name programmable metallization cells, with the intention of being used as memory, optical and programmable resistor/capacitor devices, sensors, crossbar arrays and rudimentary neuromorphic circuits. These types of devices are also termed conductive bridging random access memories or atomic switches [88]. The principles of operation of these two-electrode devices, which use thin layers as ion-transporting media, are schematically shown in figure 6. Ag, Cu, Fe or Ni are mostly used as electrochemically active electrodes, while Pt, Ru, Pd, TiN or W are preferred as counter electrodes. Electrochemical reactions at the electrodes and ionic transport within the device are triggered by internal [89] or applied voltage, causing the formation of a metallic filament (bridge) that short-circuits the electrodes and defines the low resistance state (LRS). A voltage of opposite polarity is used to dissolve the filament, returning the device to the high resistance state (HRS). The LRS and HRS are used to define Boolean 1 and 0, respectively.
Apart from the prospect of a paradigm shift in computing and information technology offered by memristive devices in general [8], ECMs provide particular advantages compared to other redox-based resistive memories. They operate at low voltages (∼0.2 V to ∼1 V) and currents (in the nA to μA range), allowing for low power consumption. A huge spectrum of materials can be used as solid electrolytes: ionic conductors, mixed conductors, semiconductors, macroscopic insulators and even high-k materials such as SiO2, HfO2, Ta2O5, etc, predominantly in the amorphous but also in the crystalline state [90]. The spectrum of these materials
includes 1D and 2D materials but also different polymers, bioinspired/bio-compatible materials, proteins and
other organic and composite materials [91, 92]. The metallic filament can vary in thickness and may either completely bridge the device or be only partially dissolved, providing multilevel to analog behaviour. Very thin filaments are extremely unstable and dissolve fast (on timescales down to 10⁻¹⁰ s) [93]. The devices are stable against radiation/cosmic rays, high-energy particles and electromagnetic waves, and can operate over a large temperature
range [94, 95]. Due to these properties, ECMs can be implemented in various environments, systems and tech-
nologies. The typical applications are as selector devices, volatile, non-volatile digital and analog memories,
transparent and flexible devices, sensors, artificial neurons and synapses [96–98]. The devices can combine
more functions and are thought of as basic units for the fields of autonomous systems, beyond von Neumann
computing and AI. Further development in the field is essential to realise the full potential of this technology.
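The bipolar switching cycle described above (SET into the LRS at positive bias, RESET back to the HRS at opposite polarity) can be sketched as a simple threshold model traced along a voltage sweep. The thresholds and resistance values below are order-of-magnitude placeholders, not measured ECM parameters.

```python
# Minimal behavioural sketch of bipolar ECM switching along a voltage sweep.
V_SET, V_RESET = 0.3, -0.1     # switching thresholds (V), hypothetical
R_HRS, R_LRS = 1e8, 1e4        # high/low resistance states (ohm), hypothetical

def sweep(voltages):
    r, trace = R_HRS, []       # device starts with the filament dissolved
    for v in voltages:
        if v >= V_SET:
            r = R_LRS          # filament bridges the electrodes -> LRS
        elif v <= V_RESET:
            r = R_HRS          # opposite polarity dissolves it -> HRS
        trace.append((v, v / r))
    return trace

# Triangular sweep 0 V -> +0.5 V -> -0.3 V -> 0 V
ramp = ([i * 0.01 for i in range(51)]
        + [0.5 - i * 0.01 for i in range(1, 81)]
        + [-0.3 + i * 0.01 for i in range(1, 31)])
iv = sweep(ramp)               # pinched hysteresis: two branches per voltage
```

At +0.2 V the upward branch still sees the HRS while the downward branch, after SET, sees the LRS, reproducing the hysteretic I-V loop of figure 6.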
Figure 6. Principle operation and current–voltage characteristics of electrochemical metallization devices. The individual
physical processes are related to the corresponding part of the I –V dependence. The figure is reproduced from [99].
Figure 7. Schematic differences between ideal cells (left) and real cells accounting for interface interactions occurring due to
sputtering conditions, chemical interactions or environmental influences. Physical instabilities/dissolution of the electrode,
leading to clustering and formation of conductive oxides in ECM devices (middle). Chemical dissolution of the electrode and formation of insulating oxides (right). The figure is modified from [101].
layer and inhibit or support reliable operation [101]. All these effects have their origin in the nanosize of the
devices and highly non-equilibrium operating conditions.
5. Nanowire networks
5.1. Status
The human brain is a complex network of about 10¹¹ neurons connected by 10¹⁴ synapses, anatomically organized over multiple scales of space and functionally interacting over multiple scales of time [106]. Synaptic
plasticity, i.e. the ability of synaptic connections to strengthen or weaken over time depending on exter-
nal stimulation, is at the root of information processing and memory capabilities of neuronal circuits. As
building blocks for the realization of artificial neurons and synapses, memristive devices organized in large
crossbar arrays with a top-down approach have recently been proposed [107]. Although the state of the art of this rapidly growing technology has demonstrated hardware implementations of supervised and unsupervised learning paradigms in artificial neural networks (ANNs), the rigid top-down, grid-like architecture of crossbar arrays fails to emulate the topology, connectivity and adaptability of biological neural networks, where the principle of self-organization governs both structure and function [106]. Inspired by biological systems (figure 8(a)), more biologically
plausible nanoarchitectures based on self-organized memristive nanowire (NW) networks have been pro-
posed [16, 108–112] (figures 8(b) and (c)). Here, the main goal is to focus on the emergent behaviour of
the system arising from complexity rather than on learning schemes that require addressing of single ANN
elements. Indeed, in this case the main players are not individual nano-objects but their interactions [113]. In this framework, the cross-talk between individual devices, which represents an unwanted source of sneak currents in conventional crossbar architectures, here becomes an essential component of the emergent network behaviour needed for the implementation of unconventional computing paradigms. NW networks can be
fabricated by randomly dispersing NWs with a metallic core and an insulating shell layer on a substrate by a
low-cost drop casting technique that does not require nanolithography or cleanroom facilities. The obtained
NW network topology shows a small-world architecture, similar to biological systems [114]. Both single NW
junctions and single NWs show memristive behaviour due to the formation/rupture of a metallic filament
across the insulating shell layer and to breakdown events followed by electromigration effects in the formed
nanogap, respectively (figures 8(e) and (h)) [16]. Emergent network-wide memristive dynamics were observed to arise from the mutual electrochemical interaction between NWs, where the information is encoded in 'winner-takes-all' conductivity pathways that depend on the spatial location and temporal sequence of stimulation [115–117]. By exploiting these dynamics, NW networks in a multiterminal configuration can exhibit
homosynaptic, heterosynaptic and structural plasticity with spatiotemporal processing of input signals [16].
Also, nanonetworks have been reported to exhibit fingerprints of self-organized criticality, similar to our brain [108, 118, 119], a feature that is considered responsible for the optimization of information transfer and
processing in biological circuits. Because of both topological structure and functionalities, NW networks are
considered as very promising platforms for hardware realization of biologically plausible intelligent systems.
Figure 8. Bio-inspired memristive NW networks. (a) Biological neural networks where synaptic connections between neurons
are represented by bright fluorescent boutons (image of primary mouse hippocampal neurons); (b) self-organizing memristive
Ag NW networks realized by drop-casting (scale bar, 500 nm). Adapted from [16] under the terms of Creative Commons
Attribution 4.0 License, copyright 2020, Wiley-VCH. (c) Atomic switch network of Ag wires. Adapted from [112], copyright 2013,
IOP Publishing. (d) and (e) Single NW junction device where the memristive mechanism rely on the formation/rupture of a
metallic conductive filament in between metallic cores of intersecting NWs under the action of an applied electric field and
(f) and (g) single NW device where the switching mechanism, after the formation of a nanogap along the NW due to an electrical
breakdown, is related to the electromigration of metal ions across this gap. Adapted from [16] under the terms of Creative
Commons Attribution 4.0 License, copyright 2020, Wiley-VCH.
reference [121], the software/hardware for interfacing the NW network with the ReRAM readout represents
a challenge from an electronic engineering point of view. To fully investigate the computing capabilities of
these self-organized systems, modelling of the emergent behaviour is required for understanding the interplay
in between network topology and functionalities. This relationship can be explored with a complex network
approach by means of graph theory metrics. Current challenges in understanding and modelling the emergent behaviour of NW networks involve the experimental investigation of the resistive switching mechanism in single network elements, including a statistical analysis of the inherent stochastic switching features of individual memristive elements. Also, combined experiments and modelling are essential to investigate hallmarks of criticality, including short- and long-range correlations among network elements, power-law distributions of events and
avalanche effects, by means of an information theory approach. Although scale-free networks operating near the critical point, similarly to cortical tissue, are expected to enhance information processing, understanding how critical phenomena affect the computational capabilities of self-organized NW networks remains an open challenge.
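As a toy illustration of the complex-network approach, the sketch below models a drop-cast network as random line segments in a unit square, builds a graph from their crossings (each crossing being a memristive junction) and computes the average clustering coefficient, one of the standard small-world metrics. The wire count and length are arbitrary choices, not values from any fabricated network.

```python
import math
import random
from itertools import combinations

random.seed(0)

def random_wire(length=0.3):
    # A nanowire as a random segment of fixed length in the unit square.
    x, y = random.random(), random.random()
    a = random.uniform(0.0, math.pi)
    return (x, y, x + length * math.cos(a), y + length * math.sin(a))

def crosses(s, t):
    # Standard orientation test: do two segments properly intersect?
    def orient(ax, ay, bx, by, cx, cy):
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    (x1, y1, x2, y2), (x3, y3, x4, y4) = s, t
    d1 = orient(x3, y3, x4, y4, x1, y1)
    d2 = orient(x3, y3, x4, y4, x2, y2)
    d3 = orient(x1, y1, x2, y2, x3, y3)
    d4 = orient(x1, y1, x2, y2, x4, y4)
    return d1 * d2 < 0 and d3 * d4 < 0

wires = [random_wire() for _ in range(120)]
adj = {i: set() for i in range(len(wires))}
for i, j in combinations(range(len(wires)), 2):
    if crosses(wires[i], wires[j]):
        adj[i].add(j)
        adj[j].add(i)

def clustering(v):
    # Fraction of a node's neighbour pairs that are themselves connected.
    nb = list(adj[v])
    if len(nb) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nb, 2) if b in adj[a])
    return 2.0 * links / (len(nb) * (len(nb) - 1))

avg_clustering = sum(clustering(v) for v in adj) / len(adj)
n_junctions = sum(len(s) for s in adj.values()) // 2
```

Comparing such metrics (clustering, path length, degree distribution) between the segment-intersection graph and equivalent random graphs is one way to quantify the small-world character reported for real NW networks.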
Acknowledgements
This work was supported by the European project MEMQuD, code 20FUN06. This project (EMPIR 20FUN06
MEMQuD) received funding from the EMPIR programme co-financed by the participating states and from
the European Union’s Horizon 2020 research and innovation programme.
6. 2D materials
6.1. Status
With more and more edge devices deployed, huge volumes of data are generated each day and await real-time analysis. Before being processed, these raw data have to be collected and stored, tasks accomplished in sensors, memory units and computing units, respectively. This usually gives rise to large delays and high energy consumption, which become more severe with the explosive growth in data generation. Computing in sensory or memory devices reduces the latency and power consumption associated with data transfer [128] and is promising for real-time analysis. The functional diversity and performance of these two distinct computing paradigms are largely determined by the type of functional material. Two-dimensional (2D) materials
represent a novel class of materials and show many promising properties, such as atomically thin geometry,
excellent electronic properties, electrostatic doping, gate-tuneable photoresponse, superior thermal stability,
exceptional mechanical flexibility and strength, etc. Stacking distinct 2D materials on top of each other enables
creation of diverse van der Waals (vdW) heterostructures with different combinations and stacking orders, not only retaining the properties of the individual 2D components but also exhibiting additional intriguing properties beyond those of the individual 2D materials.
2D materials and vdW heterostructures have recently shown great potential for achieving in-sensor computing and in-memory computing (IMC), as shown in figure 10. There has been intense interest in exploring the unique properties of 2D materials and their vdW heterostructures for designing computational sensing devices. For example, the photovoltaic properties of a gate-tuneable p–n homojunction based on the ambipolar material WSe2 were exploited for an ultrafast vision sensor capable of processing images within 50 ns [129]. Employing the gate-tuneable optoelectronic response of a WSe2/h-BN vdW heterostructure, the hierarchical architecture and biological functionalities of the human retina can be emulated in a reconfigurable retinomorphic sensor array [130].
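The principle behind such computational sensors can be condensed into a few lines: each pixel's gate-programmed photoresponsivity acts as a synaptic weight, so the summed photocurrents realize the matrix-vector product of a classifier layer during light detection itself. The array size and responsivity values below are arbitrary illustrations, not parameters of the devices in [129, 130].

```python
import random

random.seed(1)
n_pixels, n_classes = 9, 3

# Gate-programmed photoresponsivities (A/W); gating allows both signs.
R = [[random.uniform(-1.0, 1.0) for _ in range(n_pixels)]
     for _ in range(n_classes)]
# Optical power falling on each pixel (W): the "input image".
P = [random.uniform(0.0, 1.0) for _ in range(n_pixels)]

# Photocurrent of output line i: I_i = sum_j R[i][j] * P[j].
# The multiply-accumulate happens inside the sensor during detection,
# so no image ever needs to be digitized and moved to a processor.
I = [sum(r * p for r, p in zip(row, P)) for row in R]
predicted = max(range(n_classes), key=lambda i: I[i])
```

In the reported devices the weights R are set electrically through the gates, which is what makes the sensor array trainable rather than a fixed optical filter.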
2D materials and their associated vdW heterostructures have also been introduced in IMC devices and circuits to improve the switching characteristics and to offer additional functionalities. Several switching mechanisms, such as conductive filaments [131], charging–discharging [132–134], grain boundary migration [135], ionic intercalation [136, 137] and lattice phase transitions [138], have been reported in 2D-material-based planar and vertical devices. Owing to strict limitations on the available space and the number of references, only a few representative works are mentioned in this roadmap; interested readers are encouraged to refer to a previous review article [139]. Based on the superior thermal stability and atomically sharp interfaces of a graphene/MoS2−x Ox /graphene vdW heterostructure, a robust memristive device was reported to exhibit an endurance of 10⁷ cycles at room temperature and stable switching performance at a record-high operating temperature of 340 °C [140]. Unlike oxide-based memristive devices, metal/2D-material/metal vertical devices with a layered switching medium have been used to mimic high-performance electronic synapses with good energy efficiency [141], which holds promise for modelling artificial neural networks in high-density memristive crossbar arrays [142]. Reducing the thickness of the switching medium down to a monolayer allows for the fabrication of the thinnest resistive switching devices, featuring a conductive-point resistive switching mechanism [143, 144].
Figure 10. 2D and vdW heterostructure materials for neuromorphics. The in-sensor computing devices include WSe2 -based
homojunction for ultrafast machine vision (adapted with permission [129], copyright 2020, Springer Nature) and WSe2 /h-BN
vdW heterostructure for reconfigurable vision sensor (adapted with permission [130], copyright 2020, the American Association
for the Advancement of Science); the IMC devices include self-selective vdW memristor (adapted with permission [131],
Copyright 2019, Springer Nature), electrically tuneable homojunction based synaptic circuit (adapted with permission [132],
copyright 2020, Springer Nature), vdW semi-floating gate memory (adapted with permission [133], copyright 2018, Springer
Nature), gate-tuneable heterostructure electronic synapse (adapted with permission [134], copyright 2017, American Chemical Society), grain-boundary-mediated MoS2 planar memristor (adapted with permission [135], copyright 2015, Springer Nature),
ionic intercalation memristive device (adapted with permission [137], copyright 2019, Springer Nature), phase change
memristive devices (adapted with permission [138], copyright 2019, Springer Nature), robust graphene/MoS2−x Ox /graphene
vdW memristor (adapted with permission [140], copyright 2018, Springer Nature), multilayer h-BN electronic synapse (adapted
with permission [141], copyright 2018, Springer Nature), atomristor (adapted with permission [143], copyright 2018, American Chemical Society).
However, crossbar studies of MIM vertical devices are limited by the difficulty of synthesizing large-area 2D materials with controllable thickness and high-quality vdW heterostructures with controllable interfaces. From an electrical point of view, most 2D-material-based MIM devices cannot achieve endurances larger than 10⁶ cycles, and the stability of the resistive states has not always been demonstrated in the multilevel resistive switching devices reported so far. Besides, unified criteria for yield and variability have not yet been established, which makes it challenging to evaluate the maturity of 2D materials technology for circuit- and system-level applications of IMC. Clearly stating yield-pass criteria and variability windows of memristive devices is especially important for 2D materials, given the large number of local defects intrinsic to scalable synthesis methods as well as the extrinsic defects introduced during integration.
recent advances in device physics and arrays as well as peripheral circuits would offer unprecedented oppor-
tunities to realize devices arrays on wafer scale with 2D materials that are suitable for in-sensor and IMC
applications.
For practical applications of in-sensor computing, further exploration of novel device physics related to 2D materials is required. For example, in the case of vision sensors, a few distinct types of visual information (e.g. orientation, colour, etc) have to be sensed and processed simultaneously with low power consumption and low latency. Notably, significant progress has been achieved in anisotropic optoelectronics based on low-lattice-symmetry 2D materials and in bandgap engineering by electric field and quantum confinement in 2D materials. This would facilitate device designs with new mechanisms to sense and process visual information related to orientation, colour and other attributes. Recently, a 32 × 32 optoelectronic machine vision array has been fabricated with large-area monolayer MoS2 synthesized by metal-organic chemical vapor deposition, propelling the functional complexity in visual processing to an unprecedented level [146]. Together with the advances in industrial foundry synthesis of large-area ambipolar WS2 directly on dielectrics by plasma-enhanced ALD, these promising demonstrations of in-sensor computing should be extended to larger-scale arrays to benchmark against the performance of conventional-material-based technology.
Traditionally, IMC is implemented in a 1T1R crossbar array to avoid the sneak path issue. Similarly, 2D-material-based resistive switching devices should be organized in the same way. To that end, radically new growth processes are needed to achieve all-2D-material 1T1R integrated circuits. Furthermore, to fabricate a large-scale crossbar array with high yield and low variance, it is necessary to spatially engineer precise atomic vacancy patterns on the surface of wafer-scale single-crystal 2D semiconductors or insulators, in particular in monolayer form.
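The sneak-path issue that motivates the 1T1R cell can be quantified with a worst-case 2 × 2 example: when reading a high-resistance cell in a passive array, the read circuit also sees a parasitic series path through three low-resistance neighbours. The resistance values below are hypothetical.

```python
R_HRS, R_LRS = 1e7, 1e4            # resistance states (ohm), hypothetical

def parallel(a, b):
    return a * b / (a + b)

target = R_HRS                     # the cell we intend to read
sneak = 3 * R_LRS                  # series path through three LRS neighbours

# Passive crossbar: the sneak path shunts the target cell, so the HRS cell
# appears almost as conductive as its LRS neighbours -- a read error.
apparent = parallel(target, sneak)

# 1T1R: the access transistor of every unselected cell is off, the sneak
# branch is cut, and the read circuit sees the target resistance directly.
apparent_1t1r = target
```

With these numbers the apparent resistance of the HRS cell collapses to roughly the sneak-path resistance, which is why selector devices (or self-selective stacks) are considered indispensable for large passive arrays.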
Beyond individual 2D materials, vdW heterostructures formed by stacking 2D materials with distinct electronic properties can retain the properties of each component and exhibit additional properties inaccessible in the individual 2D materials. With breakthroughs in material synthesis and in the fabrication of large-scale integrated arrays as well as peripheral circuits, the use of 2D vdW heterostructures for in-sensor computing and IMC would provide a disruptive technology to overcome the challenges of traditional electronics based on the von Neumann architecture.
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (61625402, 62034004,
61921005, 61974176), and the Collaborative Innovation Center of Advanced Microstructures. FM would like
to acknowledge the support from AIQ foundation.
7. Organic materials
7.1. Status
Organic semiconductors (OSCs) have emerged as candidate materials for artificial synaptic devices owing to their low switching energies, wide range of tunability, and facile ion migration due to the large free volume within the material. OSCs emulate neuroplasticity at the single-unit level with a wide range of synaptic switching mechanisms demonstrated for both two-terminal devices, which utilize filament formation [148], charge
trapping [149], and ion migration, as well as three-terminal transistor-like architectures such as ion-gated elec-
trochemical transistors [150] and charge trapping transistors. In most cases, the resistive switching of polymers
is either via metal-ion migration to form conductive pathways (figures 11(a) and (b)) or by reversible doping
where the oxidation state of the OSC is modulated via charge trapping on defect sites (such as implanted
nanoparticles), redox reactions (e.g. protonation/deprotonation), or ion intercalation (figures 11(c) and (d)).
The ability to tailor the properties of OSCs makes them a particularly promising class of materials for
neuromorphic devices since both chemical and microstructural control over the materials can dramatically
influence device performance (figure 11(e)). Side-chain engineering of OSCs can enhance ionic mobility in the
materials, enabling relatively high-speed device operation [151], whereas modification of chemical moieties on
the polymer backbone can be used to tune energy levels and electronic conductivity [152]. The crystallinity
and microstructure of these materials allow for yet another degree of freedom which can be exploited to further
optimize them to emulate synaptic behavior [153]. Lastly, their relatively low cost and solution processability make OSCs particularly attractive where large-area or printable devices are desired, such as when interfacing with biological systems.
Thus far, OSC neuromorphic devices have demonstrated a variety of synaptic functionality, including the
representation of synaptic weight as electrical resistance [150], excitatory postsynaptic currents (EPSCs), global
connectivity [154], and pulse shaping [155]. This broad functionality makes OSCs promising for applications
ranging from high-performance computing to biological interfacing of neuromorphic systems. Recently, three-
terminal electrochemical devices with low switching energy have been demonstrated which can overcome
several challenges associated with parallel operation of a hardware neural network in a crossbar architecture
[156], showing the promise of organic materials in neuromorphic engineering. In this work, however, we will
discuss the general challenges and outlook for using OSCs in neuromorphic computing without focusing on
any single device, application, or architecture.
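Short-term synaptic behaviours such as the EPSCs mentioned above, and the related paired-pulse facilitation (PPF), are commonly described by an exponentially decaying residual that accumulates over successive spikes. The sketch below uses this standard phenomenology; the time constant and amplitude are hypothetical, not measured OSC values.

```python
import math

TAU = 50e-3   # residual decay time constant (s), hypothetical
A = 1.0       # per-spike EPSC increment (arbitrary units), hypothetical

def epsc_amplitudes(spike_times):
    """EPSC amplitude at each presynaptic spike, with exponential residual decay."""
    residual, amps, last_t = 0.0, [], None
    for t in spike_times:
        if last_t is not None:
            residual *= math.exp(-(t - last_t) / TAU)   # decay since last spike
        residual += A                                   # contribution of this spike
        amps.append(residual)
        last_t = t
    return amps

# Paired-pulse facilitation: a closely spaced pair gives a larger second EPSC.
ppf_short = epsc_amplitudes([0.0, 0.02])[1] / A   # 20 ms interval
ppf_long = epsc_amplitudes([0.0, 0.50])[1] / A    # 500 ms interval
```

Fitting such a model to measured pulse responses is a common way to extract effective time constants and to compare the short-term dynamics of different organic device chemistries.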
Figure 11. Organic neuromorphic device operation. (a) Schematic of filament formation and (b) corresponding read current vs.
voltage response. (c) Schematic of three-terminal neuromorphic device based on modulating the channel carrier concentration
and (d) the corresponding programming curve. (e) Schematic of organic semiconductor structure showing backbone represented
by a conjugated thiophene (green), the molecular packing distance (gold), and the tunable sidechains (purple).
currents often requires an access device, increasing the complexity of the array and providing an additional
integration challenge.
Environmental and electronic stability. A final remaining challenge for OSCs is to achieve long-term device
stability and resistance state retention. Interfaces of OSCs and dielectrics are susceptible to formation of traps
resulting from exposure to oxygen or moisture, leading to irreversible changes in device performance. Addi-
tionally, because of the inherently low switching energy found in many organic neuromorphic devices, ‘SET’
OSCs are susceptible to leakage due to parasitic reactions with the surrounding atmosphere [159]. Finally,
both the charge transport and doping reactions in OSCs must be stable at the typical operating temperatures
of computers (∼85 ◦ C) without suffering from changes in morphology due to thermal annealing.
Figure 12. State-of-the-art organic neuromorphic devices. (a) Analog resistance tuning of an electrochemical neuromorphic
device under ±2 V 200 ns write pulses (gray shaded area), followed by 100 ns write-read delay and +0.3 V 500 ns readout (orange
shaded area). The horizontal dashed lines are a guide to the eye to represent tunable conductance states. (b) The volumetric
scaling of electrochemical doping enables channel conductance of devices to be tuned with increasingly lower write energies and
shorter write pulses as device sizes are reduced. (c) Cross-sectional schematic of fabrication procedure of densely packed
ion-gel-gated vertical P3HT synapses and (d) optical microscopy images of a crossbar array. (e) A non-volatile ionic floating gate
(IFG) memory consisting of a filament-forming access device (green) attached to a PEDOT:PSS organic synapse (blue).
(f) Schematic of a parallel-programmable neuromorphic array using IFG memory divided into a two-layer neural network, as
indicated by orange and green. Analog network inputs V_i^R are applied across the source-drain rows, while programming inputs
V_i^W and V_j^W are applied along the gate row and drain column, respectively. Adapted from reference [160], AAAS, (a) and (b);
reproduced from reference [161], Springer Nature Ltd, (c) and (d); adapted from reference [156], AAAS, (e) and (f).
can enable nanopatterning of OSC channels with resolutions limited by conventional lithographic techniques
[164], but defining gate and electrolyte geometries with similar precision for complete three-terminal devices
introduces additional complexity. Choi et al recently demonstrated vertical three-terminal electrochemical
neuromorphic devices which reduced the single cell footprint to ca 100 μm by 100 μm in a crossbar architecture
using photo-crosslinked P3HT as the channel material (figures 12(c) and (d)). In principle, this cell could be
reduced significantly using the same general technique with the use of advanced photolithography.
Integration. Advancements in non-traditional chip manufacturing (BEOL alternatives) [165] are neces-
sary for seamless integration of OSCs with silicon technology. Sneak currents in neuromorphic arrays can be
avoided by using filament-forming access devices coupled to three-terminal memories, as shown by Fuller et al
(figures 12(e) and (f)) [156]. Increasing the temperature stability of OSCs also helps enable complete integra-
tion with conventional BEOL processing. Recently, Gumyusenge et al demonstrated that nanoconfined OSCs
Acknowledgements
AS and STK acknowledge support from the National Science Foundation and the Semiconductor Research
Corporation (Award NSF E2CDA #1507826). TJQ acknowledges support from the National Science Founda-
tion Graduate Research Fellowship Program under Grant DGE-1656518.
8. Spintronics
8.1. Status
Spintronics, or spin electronics, manipulates the spin of electrons in addition to their charge. This brings mul-
tiple interesting features for neuromorphic computing: the non-volatile memory provided by nanomagnets
and the non-linear dynamics of magnetization induced by fields or currents [168]. These two aspects allow the
same materials to be used to mimic the essential operations of synapses and neurons. Important experimental
results have thus been obtained in recent years.
Synapses. The first way to realize spintronic synapses is to store the weights in digital spin torque magnetic
random access memories (ST-MRAMs) [169]. Gigabit ST-MRAM devices are now commercially available
in several large foundries. They consist of magnetic tunnel junctions (MTJ), formed by an ultra-thin (∼1 nm)
insulator sandwiched between magnetic layers, integrated in the CMOS process. The main advantage of ST-
MRAMs over their competitors is their endurance, which is more than two orders of magnitude higher, a very
important factor for chips dedicated to learning, which will require a very large number of read/write cycles. Indeed, the
resistance change mechanism comes from a reversal of magnetization by current pulses of the order of nanosec-
onds and a hundred millivolts, a purely electronic phenomenon that does not require the movement of ions
or atoms in a nanostructure as in ReRAMs or PCMs. Moreover, they are non-volatile, retaining information
even when the power is switched off. Associative memories integrating ST-MRAMs (figure 13(a)) have enabled
significant gains in power consumption, with only 600 μW per recognition operation, i.e. a 91.2% reduction
compared to a twin chip using conventional static random access memory [169].
The second way to realize spintronic synapses is to directly imitate a synapse with a magnetic tunnel junc-
tion. In this case, the junction acts as a memristor device, which takes as input a current and multiplies it by
its resistance, which thus plays the role of the synaptic weight. The stability of magnetization in MTJ allows
them to retain the value of the weight. Since magnetization is naturally bistable, MTJ are very good candidates
for neural networks with binary weights [170]. It is also possible to modify the materials or geometry so that
the magnetization changes orientation via non-uniform states. This has enabled the experimental realization of analog
synapses (figure 13(b)) [171–173], as well as the training of a small neural network with magnetic multi-state
synapses (figure 13(c)) [174].
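Since the two stable magnetization states of an MTJ map naturally onto ±1, a binary-weight layer built from such junctions can be sketched as below. The parallel/antiparallel resistance values and the array contents are hypothetical, chosen only to illustrate the mapping, and are not taken from the cited works.

```python
import numpy as np

# Hypothetical junction resistances: the parallel (low-resistance) state
# encodes weight +1, the antiparallel (high-resistance) state encodes -1.
R_P, R_AP = 2e3, 4e3  # ohms, illustrative values only

def mtj_weights(resistances):
    # Threshold halfway between the two states to decode the stored weight.
    return np.where(resistances < (R_P + R_AP) / 2, 1.0, -1.0)

def binary_layer(x, resistances):
    # Binarized-network style layer: binary weights, sign activation.
    return np.sign(mtj_weights(resistances) @ x)

R = np.array([[R_P, R_AP, R_P],
              [R_AP, R_AP, R_P]])
x = np.array([1.0, -1.0, 1.0])
y = binary_layer(x, R)   # each output is the sign of a +/-1-weighted sum
```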
Neurons. In most neural network algorithms, neurons simply apply a non-linear function to the real-valued
synaptic inputs they receive. The characteristics of the nonlinear dynamics of spintronics can be exploited to
mimic biology more closely, which could lead to increased computing functionalities such as local and unsu-
pervised learning. Biological neurons transform the voltage on their membrane into electrical spike trains,
with a mean frequency that is non-linearly dependent on the voltage. MTJ transform DC inputs into an oscil-
lating voltage with a frequency that depends non-linearly on the injected current. This property can be used
to imitate neurons. In stable junctions such as those used for ST-MRAMs, the spin torque can induce oscilla-
tions between about ten MHz and ten GHz depending on the materials and geometry. These oscillations have
been used with a single device to recognize spoken digits with a time-multiplexed reservoir [175]. Four
coupled spintronic nano-oscillators were also trained to recognize vowels via their synchronization patterns
to RF inputs (figure 14(a)) [176]. In unstable junctions, thermal fluctuations may be sufficient to induce tele-
graphic voltage behavior, allowing the mimicking of stochastic neurons with minimal energy consumption.
Neuromorphic tasks have been performed by small experimental systems composed of such junctions, using
neural networks [177, 178] or probabilistic algorithms [179].
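A behavioural sketch of such a stochastic neuron (a 'probabilistic bit') is given below: the junction's state is sampled with a sigmoidal probability set by its input. This is a toy statistical model for illustration, not a physical description of a superparamagnetic junction.

```python
import numpy as np

def pbit_sample(bias, rng):
    # Probability of the "up" state follows a sigmoid of the input bias,
    # mimicking thermally induced telegraphic switching under a drive.
    p_up = 1.0 / (1.0 + np.exp(-bias))
    return 1 if rng.random() < p_up else -1

rng = np.random.default_rng(1)
samples = [pbit_sample(2.0, rng) for _ in range(10_000)]
mean_state = np.mean(samples)  # should approach 2*sigmoid(2) - 1, about 0.76
```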
Figure 13. Spintronic synapses. (a) Schematic of an associative memory circuit with ST-MRAM cell, reproduced from [169].
(b) R–I hysteresis loop of a spintronic memristor based on current-induced domain wall displacement in a magnetic tunnel
junction, reproduced from [171]. (c) R–I hysteresis loop of a spintronic memristor exploiting spin–orbit torques in a
ferromagnetic/antiferromagnetic bilayer, reproduced from [173].
Figure 14. Spintronic neurons. (a) Principle of vowel recognition with four coupled spintronic nano-oscillators, reproduced from
[176]. Left: schematic of the implemented neural network. Right: schematic of the experimental set-up and associated microwave
emissions in free (light blue) and phase-locked (navy) states. (b) Superparamagnetic tunnel junction behaviour under different
input voltage (time traces at the bottom, average resistance top right) and circuit implementing a probabilistic bit (top left) [179].
(c) Schematic of a population of superparamagnetic tunnel junctions assembled in a neural network, reproduced from [177].
[180, 181]. On the CMOS design side, the development of low-power circuits allowing efficient reading of the
state of the junctions, such as sense amplifiers, is crucial. As for all technologies, device reliability and scaling are
challenges, especially in analog implementations. The first demonstrations will certainly rely on binarization
of resistance values for the inference phase and implementation of hardware binary neural networks, before
end-to-end on-chip learning solutions are developed.
Combining ionic and spintronic effects will be one of the keys to efficient learning of neuromorphic chips.
It was recently demonstrated that strong magnetoelectric effects enable control of magnetic dynamics by the
electric field created at the interface, more efficiently than previous methods [182].
A critical challenge for the development of hardware neural networks is to achieve a high density of connec-
tions. Spintronics offers several opportunities to tackle this issue. Long-range connections can be implemented
via spin currents and magnetic waves or by physically moving magnetic textures such as skyrmions and soli-
tons [168, 183]. Furthermore, the multilayer nature of spintronic devices allows them to naturally stack in
three dimensions, opening the path to vertical communication [184].
Spintronic neuromorphic chips will be able to receive as inputs fast signals compatible with digital elec-
tronics (classical binary junctions), radio-frequency inputs (GHz oscillator), as well as inputs varying at the
speed of the living world, thanks to superparamagnetic junctions or magneto-electric effects that can oper-
ate at timescales between seconds and milliseconds. There is active research on developing spintronic devices
for on-chip communication (using their capability to emit and receive microwaves), magnetic sensing (with
promising biomedical applications) and energy harvesting, all of which could benefit neuromorphic chips
[168].
Taking full advantage of the dynamical behavior of spintronic devices will require the development of ded-
icated learning algorithms, inspired by advances in both machine learning and computational neuroscience.
The fact that the behavior of spintronic devices relies on purely physical phenomena that can be predictively
described and integrated into neural network programming libraries is a key enabler for this task [185].
9. Deep learning
9.1. Status
The development of deep learning (DL) has brought AI to the spotlight of broad research communities.
The brain-inspired neural network models with different structures and configurations have made significant
progress in a variety of complex tasks [5]. However, in conventional von Neumann architecture, the physi-
cally separated computing unit and memory unit require frequent data shuttling between them, which results
in considerable power consumption and latency costs. One promising approach to tackle this issue is to realize
the in-memory computing (IMC) paradigm, in which each underlying device functions as a memory and computation element
simultaneously. Non-volatile devices based on resistive switching phenomena [13, 188], such as redox memristors,
phase change, magnetic and ferroelectric devices, could support such computing systems and show greatly
improved performance in data-centric computation tasks.
Analogue resistive-switching-memory-based IMC promises orders-of-magnitude improvements
in energy efficiency compared to conventional von Neumann hardware. The devices are assembled in a
crossbar structure to conduct vector-matrix multiplication (VMM) operations: the input vectors are encoded as voltage amplitudes,
pulse widths, pulse numbers, or sequential pulses with different significances, and the matrix elements are
mapped to tunable cell conductances, with each matrix element often represented in differential form by a pair of
devices. Thanks to Ohm's law for multiplication and Kirchhoff's current law for accumulation, the dense
crossbar can conduct multiply-accumulate (MAC) operations fully in parallel, with the computation occurring at
the data location. Since VMM calculations account for the majority of computation during inference and
training of deep learning algorithms, this IMC paradigm could help hardware meet stringent demands
for low power dissipation and high computing throughput.
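The row-voltage/column-current MAC described above can be captured in a short numerical sketch; the voltages, conductances and array shape below are illustrative values, not taken from any reported device.

```python
import numpy as np

# Crossbar VMM sketch: inputs are encoded as row voltages and weights as cell
# conductances G[i, j].  Ohm's law gives the per-cell current V_i * G[i, j],
# and Kirchhoff's current law sums the currents along each column, so the
# column-current vector I = V @ G is a full vector-matrix multiply obtained
# in one parallel "read" step, at the location of the data.
def crossbar_vmm(voltages, conductances):
    return voltages @ conductances

rng = np.random.default_rng(0)
V = rng.uniform(0.0, 0.2, size=4)          # read voltages (V), illustrative
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # cell conductances (S), illustrative
I = crossbar_vmm(V, G)                     # three column currents, in parallel
```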
Major progress has been made in this area, spanning from device optimization to system demonstration
[13, 21, 188]. Oxide-memristor devices have been scaled down to 2 nm in an array [189] and 3D
stacked architectures have been fabricated in the laboratory to enhance network connectivity [190]. In addition,
various DNN models, including perceptrons [191, 192], multilayer perceptrons (MLPs) [193], long
short-term memory (LSTM) [194] based recurrent neural networks (RNNs), and convolutional neural
networks (CNNs) [74], have been demonstrated based on non-volatile resistive-switching crossbars or macro
circuits. These demonstrations have covered the typical learning algorithms for supervised learning, unsu-
pervised learning and reinforcement learning. More recently, a multiple-array based memristor system [74]
and some monolithically integrated memristor chips have been demonstrated [195, 196], and it is encourag-
ing to see that this kind of IMC system could achieve an accuracy comparable to software results and reach
>10 TOPS/W energy efficiency using 8 bit input precision [74]. However, despite the fast development of
hardware prototypes and demonstrations, a monolithically integrated IMC chip with large and tiled crossbars
(shown in figure 15) for practical and sophisticated DL models (e.g. ResNet-50) is still under-explored, and the
accomplished tasks are limited to relatively small datasets (e.g. MNIST, CIFAR-10) rather than large
workloads (e.g. ImageNet).
Figure 15. Schematic of underlying IMC hardware for deep learning acceleration, presented from crossbar level, macro circuit
level, and monolithic system level, respectively.
commercially available, its large operation voltages and slow speeds, together with its limited endurance
and scalability, make it at best an interim solution, to be replaced by emerging devices. The oxide memristor is
promising for dense integration, given demonstrations of 2 nm feature sizes and 3D stacking in the laboratory.
However, only 130 nm analog-switching [195, 196] and 22 nm digital-switching [197]
foundry technologies have been reported. Many other kinds of devices require back-end processes with high temperatures,
complex layer deposition or special handling, which are obstacles to monolithic
integration with mainstream CMOS technology. The absence of high-uniformity, high-yield processes in
mainstream foundries for large-scale, small-footprint integration of analogue-switching devices has been
slowing the development of IMC circuits.
Errors in analog IMC and the inefficiency of periphery circuits also impose serious challenges for practical
hardware. Analog computing that directly utilizes physical laws is superior in energy efficiency, but so far it is
only suited to low-precision tasks. Although DL algorithms place loose constraints on parameter precision
(such as 4 bit weights for regular inference tasks), state-of-the-art models still demand accurate digitized
value representations. However, the conductance states of analog devices always follow a certain distribution
and deviate from the target mapping values, which introduces weight-representation errors. In addition, at
the array/crossbar level, parasitic effects along the metal wires lead to inevitable IR drop and result
in inaccurate programming and computing. This effect becomes more severe as the array size is increased
for higher performance. Such systematic errors may be mitigated through algorithm and architecture
co-design, such as compensation in the mapping algorithms. The periphery circuits also introduce
computing errors due to voltage loss on analogue switches, transistor mismatch, unfixed clamping voltages
and environmental fluctuations. All these together would substantially lower analogue computing accuracy
and prevent IMC systems from reaching realistic applications if not appropriately addressed.
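One of these error sources, conductance deviation from the target mapping, can be illustrated with a toy Monte Carlo model; the 5% relative write error and the array size below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
G_target = rng.uniform(1e-6, 1e-4, size=(64, 16))  # target conductances (S)
sigma = 0.05                                       # assumed 5% relative write error
G_actual = G_target * (1 + sigma * rng.standard_normal(G_target.shape))

V = rng.uniform(0.0, 0.2, size=64)                 # read voltages (V)
I_ideal = V @ G_target
I_noisy = V @ G_actual
# Per-column relative MAC error.  Because each column current sums 64 weakly
# correlated terms, the relative error of the sum is well below the per-cell
# error -- one reason analog IMC tolerates moderate device noise.
rel_err = np.abs(I_noisy - I_ideal) / I_ideal
```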
To take the full advantage of IMC features, all necessary functional blocks should be integrated mono-
lithically with device crossbars (as shown in figure 15), including buffers, interfacial circuits (mainly ADCs,
converting the accumulated current to digital signals), routing units, control logic and digital processing. These
circuits are expected to match the device operating requirements, such as programming voltages and driving
currents. In such a complete on-chip system with tiled crossbars, the auxiliary periphery circuits might
consume much more power, area and latency than the analog-domain VMM calculation. Although the IMC
paradigm eliminates the movement of DL weights, data must still flow between different layers, which
requires on-chip memory accesses. Meanwhile, the parallel MAC calculations require multiple ADCs located
at the end of each column to carry out fast conversions frequently. According to the profiling of a design
instance, the ADCs account for the majority of power and area overhead (shown in figure 16) [74]. Exploiting a
larger crossbar to conduct VMM is beneficial to boost system performance by amortizing the periphery circuit
overhead in the whole system, which, however, would lead to larger parasitic capacitance and resistance, higher
dynamic range of the output current and lower device utilization ratio. The inefficiency of periphery circuits,
especially the ADCs, is becoming the system bottleneck of IMC hardware, where innovations are needed in
the co-design of device and architecture.
Figure 16. The breakdown of area and power consumption in a macro-circuitry instance [74]. (a) Area overhead. (b) Power
overhead.
First of all, research in materials engineering and device optimization should be conducted, either based
on present analogue-switching non-volatile devices or in exploration of novel devices, aiming at enhanced
reliability and improved programming linearity and symmetry while maintaining high switching speed, low
programming power and strong scaling potential. In addition, a stable stack process for large-scale heterogeneous
integration of highly uniform crossbars is needed for practical applications. The development of 3D processes
could drive device density to the next level and bring an extra dimension for exploring more efficient systems. Even
more importantly, 3D structures enable the massive connectivity and low-loss communication required for
complex neural networks.
Second, at the macro-circuit level, there is plenty of room to optimize the crossbar structure and the periphery
circuits. For example, a basic two-transistor-two-memristor (2T2M) configuration [195] could be utilized as
a signed-weight unit to construct IMC arrays, where an in situ subtraction is conducted in the analog
domain, with the differential current being accumulated subsequently. Such a configuration reduces the total flowing
current and mitigates the IR-drop effect, making it possible to build larger crossbars. Apart from this, encoding
the input signal by pulse width or a low voltage-amplitude range might bypass the nonlinear current–voltage
characteristic, at the expense of increased system latency or circuit complexity. On the other hand,
novel periphery circuitry design customized for IMC is required, including fast, low-power ADC and high-
throughput routing scheme with little on-chip memory. For example, time-domain interfaces could be used
to replace conventional ADC-based interfaces [199]. Furthermore, some emerging devices with rich nonlin-
earities [96] could potentially replace circuitry blocks directly, such as implementation of device-wise ReLU
function [200, 201].
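The 2T2M signed-weight idea can be sketched numerically: each signed weight is split across a positive and a negative conductance, and the subtraction happens as a differential current. The decomposition scheme and all values here are illustrative assumptions, not taken from [195].

```python
import numpy as np

def signed_vmm(V, G_pos, G_neg):
    # In-array subtraction: the two column currents are read differentially.
    return V @ G_pos - V @ G_neg

rng = np.random.default_rng(2)
W = rng.uniform(-1.0, 1.0, size=(8, 4))   # target signed weights
g_max = 1e-4                              # full-scale conductance (S), assumed
G_pos = np.clip(W, 0.0, None) * g_max     # positive part -> one device of the pair
G_neg = np.clip(-W, 0.0, None) * g_max    # negative part -> its partner

V = rng.uniform(0.0, 0.2, size=8)
I = signed_vmm(V, G_pos, G_neg)           # proportional to the signed VMM V @ W
```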
Finally, system-level innovations are critical to expedite the development of IMC hardware. From an architectural
perspective, time-division multiplexing of ADCs and replicating the same weights across different crossbars
are key techniques for optimizing the system dataflow and boosting computing parallelism. In addition,
despite the difficulties of storing and transmitting data in the analogue domain, interfacing, transferring and
processing information in analogue format is intriguing because of the potential for huge efficiency benefits.
From the algorithmic point of view, configuring and optimizing DL models to fit IMC device features and
reduce hardware cost is needed. On-chip learning, hardware-aware learning and hybrid learning are
representative approaches to mitigating device non-idealities and computing errors.
9.4. Concluding remarks
IMC based on analogue-switching non-volatile devices offers exceptional computing throughput and energy
efficiency compared with conventional von Neumann hardware, and is well suited to data-centric problems and
brain-inspired deep learning algorithms. In spite of significant advances in device exploration and system
demonstration, non-ideal device behaviors, difficulties in large-scale heterogeneous integration, inaccuracies
of analog computing and the inefficiency of periphery circuits pose great challenges to promoting IMC
technologies to practical application. Monolithic integration of a complete system that unleashes the full
potential of IMC with a tiled crossbar architecture and smooth dataflow is still missing. Consequently,
extensive co-design efforts spanning device optimization, circuit design, architecture exploration and
algorithm tailoring are needed. With the adoption of more emerging devices and advanced 3D integration
processes, IMC promises a bright future for deep learning hardware.
Giacomo Indiveri
University of Zurich and ETH Zurich, Switzerland
10.1. Status
The design of neuromorphic circuits for implementing spiking neural networks (SNNs) represents one of the main activities of
Neuromorphic Computing and Engineering. Currently, these activities can be divided into two main classes:
(i) the design of large-scale general-purpose spiking neural network simulation platforms using digital cir-
cuits and advanced complementary metal-oxide semiconductor (CMOS) fabrication processes [202–204], and
(ii) the design of analog biophysically realistic synaptic and neural processing circuits for the real-time emulation
of neural dynamics applied to specific sensory-motor online processing tasks [3, 205–210]. This latter
effort pursues the original goal of Neuromorphic Engineering, set forth over thirty years ago by Carver Mead
and colleagues [211, 212], to use the physics of electronic devices for understanding the principles of compu-
tation used by neural processing systems. While the strategy of building artificial neural processing systems
using CMOS technologies to physically emulate cortical structures and neural processing systems was mainly
restricted to academic investigations for basic research in the past, the recent advent of emerging memory
technologies based on memristive devices spurred renewed interest in this approach, also for applied research
and practical applications. One of the main reasons is that the analog and mixed-signal analog/digital neuromorphic
processing architectures that implement adaptation, learning, and homeostatic mechanisms are, by
construction, robust to device variability [4, 213]. This is a very appealing feature that enables the exploitation
of the intricate physics of nanoscale memristive devices, which have a high degree of variability, for carry-
ing out complex sensory processing, pattern recognition, and computing tasks. Another appealing feature of
these mixed-signal neuromorphic computing architectures, that enables a perfect symbiosis with memristive
devices, is their ‘IMC’ nature: these architectures are typically implemented as large crossbar arrays of synapse
circuits that represent at the same time the site of memory and of computation. The synapses in each row
of these arrays are connected to integrate-and-fire (I & F) soma circuits, located on the side of the array. The
soma circuits sum spatially all the weighted currents produced by the synapses, integrate them over time, and
produce an output pulse (spike) when the integrated signal crosses a set threshold. In turn, the synapses are
typically stimulated with input spikes (e.g., arriving from other soma circuits in the network), and convert the
digital pulse into a weighted analog current [3, 213]. Depending on the complexity of the synapse and soma
circuits, it is possible to design systems that can exhibit complex temporal dynamics, for example to create
spatiotemporal filters matched to the signals and patterns of interest, or to implement adaptive and learning
mechanisms that can be used to ‘train’ the network to carry out specific tasks.
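The soma behaviour described above (spatial summation, leaky temporal integration, threshold, reset) can be sketched as a discrete-time leaky integrate-and-fire model. All parameters below are illustrative and not tied to any particular circuit.

```python
def lif_run(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire soma: integrate the summed synaptic current
    with a leak, emit a spike when the membrane variable crosses threshold,
    then reset."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v += dt * (-v / tau + i_t)   # leaky integration of the input current
        if v >= v_th:
            spikes.append(True)
            v = v_reset              # reset after emitting the spike
        else:
            spikes.append(False)
    return spikes

# Constant suprathreshold drive produces a regular spike train; the firing
# rate depends non-linearly on the drive, as for biological neurons.
spikes = lif_run([60.0] * 100)
```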
To exploit the features of neuromorphic spiking hardware to their fullest extent, there are therefore two
tightly interlinked critical challenges that need to be addressed in conjunction: (i) the development of a radi-
cally different theory of computation that combines the use of fading memory traces and non-linear dynamics
with local spike-based learning mechanisms, and (ii) the development of both volatile and nonvolatile memory
technologies, compatible with CMOS analog circuits, that support the theories developed.
Acknowledgements
This paper is supported in part by the European Union’s Horizon 2020 ERC project NeuroAgents (Grant No.
724295), and in part by the European Union’s Horizon 2020 research and innovation programme under Grant
Agreement No. 871371 (project MeMScales).
11.1. Status
This perspective outlines a roadmap of emerging hardware approaches that utilize neuromorphic and physics-
inspired principles to solve combinatorial optimization problems faster and more efficiently than traditional
CMOS in von Neumann architectures. Optimization problems are ubiquitous in modern society, needed in
training ANN, building optimal schedules (e.g., airlines), allocating finite resources, drug discovery, path plan-
ning (VLSI and shipping), cryptography, and graph analytics problems (social networks, internet search).
Such problems are often extremely challenging, requiring compute resources that scale exponentially with the
problem size (i.e., NP-complete or NP-hard complexity). Mathematically, in a combinatorial optimization
problem [223] one has a pre-defined cost function, c(x), that maps from a discrete domain X (nodes, vectors,
graph objects) to the real numbers, and the goal is to find the x_opt that achieves the globally optimal
cost value c_min = c(x_opt).
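As a concrete toy instance of this setup, the sketch below defines a cost c(x) over binary strings that counts equal neighbouring bits on a ring (an antiferromagnetic-Ising flavour) and finds x_opt by brute force. The instance is invented for illustration, and exhaustive enumeration is exactly what becomes infeasible as the problem size grows.

```python
from itertools import product

def c(x):
    # Cost: number of neighbouring equal bits on a ring of length n.
    n = len(x)
    return sum(x[i] == x[(i + 1) % n] for i in range(n))

n = 6
# Brute force over the discrete domain X = {0,1}^n -- feasible only for tiny n,
# since |X| = 2**n grows exponentially with the problem size.
x_opt = min(product([0, 1], repeat=n), key=c)
c_min = c(x_opt)   # alternating bit strings achieve zero cost on an even ring
```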
While exact methods for solving optimization problems have been developed, these can be too time-
consuming for challenging or even modest-sized instances. Instead, there is steadily rising popularity for faster
meta-heuristic approaches, such as simulated annealing [224] and evolutionary algorithms [225], computing
models such as Boltzmann machines [226], Ising models [227, 228], and variations of Hopfield networks [229].
These take inspiration from physical and biological systems which solve optimization problems (figure 17)
spontaneously. Many naturally-occurring phenomena, including the trajectories of baseballs and shapes taken
by amoeba, are driven to extrema of objective functions by following simple principles (e.g., least action or
minimum power dissipation [230]). In one example, proteins, which are long chains of amino acids, can contort
into an exponentially large number of different shapes, yet they repeatably stabilize into a fixed shape on
the time-scale of milliseconds. For a protein composed of only 100 peptide bonds, it is estimated that there are
over 10³⁰⁰ different conformational shapes. Even at one conformation every picosecond (10⁻¹² s), exploring them
all would take far longer than the age of the Universe. Instead, nature uses efficient dynamics to arrive at a solution in less than a
second.
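The arithmetic behind that claim is easy to check (the age of the Universe, roughly 13.8 billion years, is about 4.4 × 10¹⁷ seconds):

```python
# One conformation per picosecond over 10^300 conformations:
seconds_needed = 10**300 * 1e-12       # about 1e288 seconds
age_of_universe_s = 4.4e17             # ~13.8 billion years, approximate
ratio = seconds_needed / age_of_universe_s
# The exhaustive search exceeds the age of the Universe by some 270 orders
# of magnitude.
```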
Figure 17. Optimization in society and nature. Top row: important application areas include flight scheduling, VLSI routing,
training ANN. Middle row: optimization in nature includes protein folding (see text), object motion obeying the principle of least
action, and the orientation of magnetic spins in a crystal. Bottom row: some highlighted emerging hardware approaches include
probabilistic logic bits implemented with MTJ [243], CIM [234], coupled oscillators [241], and analog IMC [239].
ferromagnetic and antiferromagnetic coupling using simple electrical elements such as resistance and capaci-
tance, and can achieve highly parallelized all-to-all connectivity. The analog or continuous-time dynamics of
these Ising solvers has an inherent advantage of parallelism which lowers the time to solution compared to
CMOS annealers and CPUs operating in discrete time. The time-to-solution (or cycles-to-solutions) remains
similar for both the IMT solver and RRAM-based hardware accelerator.
With increasing scale and fan-out there arises the inevitable challenge of significant device parasitics and
variability. Non-idealities include interconnect/wire parasitics in terms of line-to-ground capacitance, line-to-
line capacitance and frequency variability for the oscillator approaches. With increasing problem size and the
concurrent increase in the size of the network, it will be increasingly difficult to find the globally optimal solu-
tion. The reduction in success probability can be mitigated by increasing the number of anneal cycles and/or
executing larger trial batches, but only at the expense of time-to-solution. An alternate approach could be to
exploit emerging monolithic three-dimensional integration technology that provides multiple tiers of inter-
connect that can be dynamically configured to provide an efficient, scalable and dense network on chip. This
promising direction will provide new architectural opportunities for on-chip implementation of large dense
networks with programmable connections that are beyond the capabilities of existing process and packaging
technologies today.
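The trade-off between success probability and repeated anneal cycles mentioned above follows from independent trials: if a single anneal finds the optimum with probability p, then n runs succeed with probability 1 − (1 − p)^n. The small helper below (hypothetical, for illustration) makes the time-to-solution cost of shrinking p explicit.

```python
import math

def runs_needed(p_single, p_target=0.99):
    # Smallest n with 1 - (1 - p_single)**n >= p_target.
    return math.ceil(math.log(1.0 - p_target) / math.log(1.0 - p_single))

# Hard instances (small per-run success probability) drive the repeat count,
# and hence the time-to-solution, up rapidly.
runs_easy = runs_needed(0.5)    # 7 runs for 99% overall success
runs_hard = runs_needed(0.01)   # 459 runs for the same target
```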
We stress that optimization problems are highly diverse, and even within a problem category (e.g., schedul-
ing) specific instances can have different traits and levels of difficulty, such as the characteristic scale of barriers
between minima, the density of saddle points, or the relative closeness in value between local and global min-
ima. Consequently, domain experts have developed techniques highly tailored to their problem class. This
could entail anything from parameter choices, such as different noise distributions or cooling schedules (simulated
annealing), to algorithmic variations, such as ensembles of models exchanging temperatures (parallel tempering)
or populations exchanging and mutating characteristics (genetic algorithms). Thus, it is desirable for any
emerging hardware to support these rich variations as much as possible, exposing internal parameters to the
user for control, as well as provisioning the architecture to efficiently realize the more promising algorithmic
variations. Many optimization problems may also involve substantial pre- and post-processing computations.
For example, transforming a practical airline crew scheduling problem into the prototypical NP-hard ‘set-cover’ problem first involves constructing subsets from viable rotations. Such pre- and post-processing, let alone mid-stream processing (replica exchange in parallel tempering), requires flexible and complex architectures that
include traditional digital units in addition to neuromorphic and physics-based optimization solvers.
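As a minimal illustration of the parameter choices discussed above, the sketch below implements plain simulated annealing with a pluggable cooling schedule, mirroring the kind of internal parameters the text argues hardware should expose to the user. The toy energy landscape, proposal distribution and geometric schedule are purely illustrative assumptions.

```python
import math
import random

def simulated_anneal(energy, neighbor, x0, schedule, steps=10_000, seed=0):
    """Minimise `energy` by simulated annealing with a user-supplied schedule.

    `schedule(k)` returns the temperature at step k, so geometric, linear,
    or problem-specific cooling schedules can be swapped in freely.
    """
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for k in range(steps):
        t = schedule(k)
        cand = neighbor(x, rng)
        e_cand = energy(cand)
        # Metropolis acceptance: always accept improvements, occasionally
        # accept uphill moves to escape local minima (more often when hot).
        if e_cand <= e or rng.random() < math.exp(-(e_cand - e) / t):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy instance: a rough 1D landscape with several local minima.
energy = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
neighbor = lambda x, rng: x + rng.gauss(0, 0.5)
geometric = lambda k: 2.0 * (0.999 ** k)   # one of many possible schedules
x_best, e_best = simulated_anneal(energy, neighbor, 0.0, geometric)
```

Exposing `schedule` and `neighbor` as parameters is the software analogue of the hardware controllability argued for above.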
The above challenges highlight the need for hardware designs to be algorithm and ‘software aware’. Equally
important is the development of algorithms and tools that are strongly ‘hardware aware’. These must be
designed to exploit the strengths of the underlying processing units—such as cheap stochasticity or certain
Neuromorph. Comput. Eng. 2 (2022) 022501 Roadmap
Acknowledgements
The authors acknowledge helpful input for figure and table content from Thomas Van Vaerenbergh and Suhas
Kumar.
12.1. Status
AI and in particular ANNs have demonstrated amazing results in a wide range of pattern recognition tasks
including machine vision, natural language processing, and speech recognition. ANN hardware accelerators
place significant demands on both storage and computation. Today’s computing architectures cannot effi-
ciently handle AI tasks: the energy costs of transferring data between memory and processor at the highest
possible rates are unsustainably high. As a result, the development of radically different chip architectures
and device technologies is fundamental to bring AI to power-constrained applications, such as combining edge analytics with the Internet of Things (IoT).
Figure 18. Modular AI systems composed of heterogeneous components—each optimized for a specific task and exploiting different technology solutions.
and their interactions. Deep learning frameworks will be complemented with export capabilities on those
heterogeneous platforms.
Acknowledgements
We acknowledge funding support from the H2020 MeM-Scales project (871371), the ECSEL TEMPO project
(826655) and the ECSEL ANDANTE project (876925).
13. Photonics
13.1. Status
The field of optical computing began with the development of the laser in 1960 and has since been followed
by many inventions, especially from the 1980s, demonstrating optical pattern recognition and optical Fourier-transform processing [266]. Although these optical processors never evolved into commercial products due to a
limited application space and the high competition with emerging electronic computers, photonic computing
again gained much interest in recent years to overcome the bottlenecks of electronic computing in the field of
AI, where large datasets must be processed energy efficiently and at high speeds [267]. Optical computers are
able to seriously challenge electronic implementations in these domains, particularly in throughput. Photonics has further allowed optics to be integrated on-chip, giving optical neuromorphic processors several advantages over their electronic counterparts. One of them is based on the fact that photons
are bosons and are able to occupy the same physical location (i.e. not subject to the Pauli exclusion princi-
ple). Thus, many can be transmitted through the same channel without mutual interference. This offers an
intrinsically high degree of parallelization by wavelength and mode multiplexing techniques, enabling the use
of the same physical processor to carry out multiple operations in parallel leading to high computing densities.
Additionally, the data transport problem that is apparent in electronics at high signal speeds is easily addressed
using photonic waveguides that serve as low power data links. Taken together with the fact that linear oper-
ations can be implemented in the optical domain with very high energy efficiency [268], photonics offers a
promising platform for high speed and highly parallelised neuromorphic computing [269].
Many non-von Neumann photonic computing techniques have been demonstrated using integrated, fibre-
based and free-space optics [268], showing a large variety of different approaches ranging from coherent
neural networks [270], RC [271] and phase-change photonics [272–274] to hardware accelerators for the main
computational bottlenecks (usually matrix multiplications) in conventional AI solutions [43, 275]. Most of these are analogous to IMC, which has most prominently been developed by IBM [276, 277]. Further advances
in photonic computing might first lead to optical co-processors that accelerate specific operations such as
VMMs and are implemented together with conventional electronic processors. The next step could be photonic
neuromorphic computers avoiding electro-optic conversions (figure 19).
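To make the accelerated operation concrete, the toy model below mimics a wavelength-multiplexed photonic vector-matrix multiply: each input rides its own wavelength channel, tunable transmissions act as weights, and a photodetector sums the incoherent channel powers. All values are hypothetical placeholders, not a specific device.

```python
import numpy as np

# Toy model of a wavelength-multiplexed photonic vector-matrix multiply.
rng = np.random.default_rng(42)
W = rng.uniform(0.0, 1.0, size=(3, 4))   # transmissions per (output, wavelength)
x = rng.uniform(0.0, 1.0, size=4)        # input optical powers per wavelength

per_channel_power = W * x                # weighting happens in parallel, per channel
y = per_channel_power.sum(axis=1)        # photodetector integrates all wavelengths

# The optics performs the same linear algebra as a standard VMM.
assert np.allclose(y, W @ x)
```

The point of the sketch is that the weighting of all wavelength channels happens simultaneously in the optical domain, which is the source of the parallelism and computing density described above.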
Figure 19. Different implementations of neuromorphic photonic circuits. (a) Coherent matrix multiplication unit based on
MZIs [270]. (b) Diffractive DNN [268]. (c) All-optical neural network using phase-change materials [272].
domain have to be performed. Especially analogue to digital converters (ADC) can make up a huge part of
the power budget and scale badly in terms of energy with the number of bits and operation speed [278]. To
be able to use the high modulation speeds accessible in modulating and detecting optical signals, significant
improvements in digitizing the results of the computation have to be made.
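The unfavourable scaling of ADC energy with resolution can be illustrated with the commonly used Walden-style figure of merit, which models power as P ≈ FOM · 2^ENOB · f_s. The FOM value below is a hypothetical placeholder, not a measured device.

```python
def adc_power_watts(fom_j_per_step: float, enob_bits: float, f_sample_hz: float) -> float:
    """Walden-style estimate: P ≈ FOM · 2^ENOB · f_s.

    Power grows linearly with sampling rate but exponentially with the
    effective number of bits, which is why high-speed, high-resolution
    ADCs dominate the power budget of analogue photonic processors.
    """
    return fom_j_per_step * (2 ** enob_bits) * f_sample_hz

# Hypothetical 10 fJ/conversion-step converter at 10 GS/s.
p8 = adc_power_watts(10e-15, 8, 10e9)    # 8 effective bits
p12 = adc_power_watts(10e-15, 12, 10e9)  # 12 effective bits
```

Under this model, four extra bits of resolution cost a sixteen-fold increase in converter power, which is the scaling problem the text refers to.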
with good performance, the existing capabilities in photonic foundries are well behind those that exist in the
more mature and established electronics. Improvements in fabrication techniques will give way to less variation
in device specifications, e.g. in the wavelength specification of certain components like resonators or multi-
plexers and reduction of optical loss. This will be important to improve on the parameters that make photonic
neuromorphic processors more advantageous, specifically the ability to wavelength multiplex. A useful tool in
the fabrication process could be an additional tuning step after fabrication, to match the designed specifica-
tions such as measuring the resonance wavelength of a resonator and adjusting it to the desired wavelength as
a post-processing correction. Advances in standard components such as modulators and detectors as well as the
addition of new components to the libraries of photonic foundries and the development of new materials for
non-volatile optical storage will enhance this field and bring these circuits closer to commercialization.
Yet another crucial component is efficient light sources that can be integrated on photonic chips alongside
reliable many-channel on-chip multiplexers. Integrated optical frequency combs that provide a wide opti-
cal spectrum with a fixed channel spacing that can be exploited for computing as a coherent light source
are a prime example of this [279]. Photonic neuromorphic circuits rely on electronic control and therefore
improvements in high-speed electronic components such as digital-to-analogue (DAC) and analogue-to-digital (ADC) converters are also very important. Further research could also lead to all-optical DACs and ADCs
circumventing the need for electro-optic conversions. In general, photonic neuromorphic processors that min-
imize conversions between the digital and analogue domains are preferable. A specific class of neural networks
that could prove especially suitable for low-power photonic processing are SNNs, which reduce digital-to-analogue conversions by using binary spikes and their time dependence as information carriers.
As the non-linear optical coefficients of silicon are small, functional materials that provide such non-linearity or other added functionality are also important [280]. A promising class of materials is phase-change materials (PCMs), which switch their optical properties upon excitation and therefore effectively resemble a non-linear element [281, 282]. Although PCMs can be switched with low optical powers, significant improvements
have to be made in increasing the switching speed in order to keep up with high modulation speeds enabled by
photonics. Another class of materials considered for low power optical non-linearities are epsilon-near-zero
materials [283].
Operating with analogue signals results in a higher sensitivity to noise; recent advances in reducing the precision of neural networks to lower numbers of bits with little loss in prediction accuracy are one step towards overcoming this challenge [284], and further research in this area is also required.
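A rough sketch of why reduced precision helps: uniformly quantizing the weights of a random linear layer to a few bits perturbs its outputs only modestly, hinting at the headroom available for the analogue noise of a photonic processor. The layer size, bit width and error bound below are illustrative assumptions, not results from [284].

```python
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantisation of weights to `bits` of precision."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))   # toy linear layer
x = rng.normal(size=64)         # toy input vector

y_full = w @ x
y_q = quantize(w, 4) @ x        # 4-bit weights, as in aggressively quantised NNs

# Relative output error stays modest despite the drastic precision reduction.
rel_err = np.linalg.norm(y_full - y_q) / np.linalg.norm(y_full)
```

The same tolerance that allows coarse digital quantisation is what an analogue optical implementation can spend on shot noise, crosstalk and device variation.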
As photonic integrated circuits become more and more complex, similar to electronics, a three-dimensional implementation seems necessary to avoid crosstalk and loss when routing the signals and to avoid waveguide crossings, which also requires investigation (figure 20).
ability to carry out linear operations at very low energies. This makes photonic neuromorphic processors a
very promising route to tackle the upcoming challenges in AI applications.
In spite of the challenges, photonic computing concepts that can overcome the limitations of electronic processors have been demonstrated in recent years, and a roadmap guiding their march towards commercialization would be a huge benefit to society.
Acknowledgements
This research was supported by EPSRC via Grants EP/J018694/1, EP/M015173/1, and EP/M015130/1 in
the United Kingdom and the Deutsche Forschungsgemeinschaft (DFG) Grant PE 1832/5-1 in Germany.
WHPP gratefully acknowledges support by the European Research Council through Grant 724707. We fur-
ther acknowledge funding for this work from the European Union’s Horizon 2020 Research and Innovation
Program (Fun-COMP project, #780848).
Steve Furber
The University of Manchester, United Kingdom
14.1. Status
The last decade has seen the development of a number of large-scale neuromorphic computing platforms.
Notable among these are the SpiNNaker [204] and BrainScaleS [285] systems, developed prior to, but sup-
ported under the auspices of, the EU Flagship Human Brain Project, and somewhat later the Intel Loihi [202]
system. These systems all have large-scale implementations and widespread user communities.
All three systems are based upon conventional CMOS technology but with different architectural
approaches. SpiNNaker uses a large array of conventional small, embedded processors connected through a
bespoke packet-switched fabric designed to support large-scale SNNs in biological real time and optimised for brain modelling applications. BrainScaleS uses above-threshold analogue circuits to model neurons running
10 000 times faster than biology, implemented on a wafer-scale substrate, optimised for experiments involving
accelerated learning. Loihi sits somewhere between these two, using a large array of asynchronous digital hard-
ware engines for modelling and generally running somewhat faster than biological real time, with the primary
purpose of accelerating research to enable the commercial adoption of future neuromorphic technology.
In order to support their respective user communities these systems have extensive software stacks, allow-
ing users to describe their models in a high-level neural modelling language such as PyNN [286] (used for
both SpiNNaker and BrainScaleS) so that straightforward applications can be developed without a detailed
understanding of the underlying hardware.
These large-scale systems have been up and running reliably for some time, supporting large user commu-
nities, and offer readily accessible platforms for experiments in neuromorphic computing. Access to neuro-
morphic technology is no longer a limiting factor for those who wish to explore its potential and capabilities,
including using these existing platforms to model future technologies (figure 21).
Figure 21. The million-core SpiNNaker machine at the University of Manchester, occupying 10 rack cabinets with an 11th
cabinet (shown on the right) containing the associated servers.
why neuromorphic engineers track advances in brain science, exploring the potential of advances such as new
insights into dendritic computation to improve the capabilities of engineered systems.
Similarly, the explosion over the last decade of applications of AI based upon ANN offers insights into the
effective organisation of neurons, whether spiking or not. There is a strong sense that the success of ANN must
be telling us something about how brains work, despite the absence of evidence in biology for, for example, the
error backpropagation learning mechanism that is so effective in ANN. Some form of gradient descent (the
principle underlying backprop) must be at work in biological learning, and recent developments in algorithms
such as e-prop [260] offer a glimpse of how that could work.
The prospect of the convergence of neuromorphic engineering with brain science and mainstream AI is
tantalising for all three branches of science/engineering.
Acknowledgements
The design and construction of the SpiNNaker machine was supported by EPSRC (the UK Engineering and
Physical Sciences Research Council) under Grants EP/D07908X/1 and EP/G015740/1. Ongoing development
of the software is supported by the EU ICT Flagship Human Brain Project (FP7-604102, H2020-720270,
H2020-785907, H2020-945539).
Emre Neftci
15.1. Status
The dynamical nature of SNN circuits and their spatiotemporal sparsity, supported by asynchronous technologies, make them particularly promising for fast and efficient processing of dynamical signals (section 10). Here,
we discuss learning in SNNs, which refers to the tuning of their states and parameters to learn new behaviors,
achieve homeostasis and other basic computations. In biology, this is achieved via local plasticity mechanisms
that operate at various spatial and temporal scales. While several neural and synaptic plasticity rules investi-
gated in neurosciences have been implemented in neuromorphic hardware [307], recent work has shown that
many of these rules can be captured through three-factor (3F) rules of the type [259, 299, 307]

ΔW ∝ M^post · F^pre · F^post,

where the factors F^pre and F^post correspond to functions over presynaptic and postsynaptic states, respectively, and the factor M^post is a post-synaptic modulation term (see also section 18 for a specific example). The modulation is a task-dependent function, which can for example represent error in a supervised learning task, surprise in an unsupervised learning task, or reward in reinforcement learning. Given the generality of the three-factor
rule in representing existing learning rules and paradigms, this section focuses on the requirements for imple-
menting 3F plasticity in neuromorphic hardware. By analogy to the brain, the most intuitive implementation
of synaptic plasticity is on-chip, i.e. plasticity is achieved at the synapse circuit or an equivalent circuit near
the SNN (figure 22, top). Neuromorphic engineers have extensively implemented learning dynamics derived
from computational neurosciences, such as STDP variants [3, 291, 310] and more recently, 3F rules [296].
On-chip learning requires precious memory and routing resources [309], which hinders scalability. On digi-
tal technologies, this problem can be sidestepped by time-multiplexing a dedicated local plasticity processor
[295, 301]. The time-multiplexing approach however suffers from the same caveats as a von Neumann computer
due to the separation between the SNN and the associated plasticity processor. Other promising alternatives for
true local plasticity are emerging devices (section 1) and related architectures (section 9), which allow storage
and computation for plasticity to occur at the same place.
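A generic three-factor update of the kind discussed above can be sketched in a few lines. The traces, modulation signal and learning rate below are illustrative placeholders, not any specific hardware rule.

```python
import numpy as np

def three_factor_update(w, pre_trace, post_trace, modulation, lr=1e-3):
    """Generic three-factor update: dW_ij ∝ M^post_i · F^post_i · F^pre_j.

    `pre_trace`/`post_trace` stand in for local functions of pre-/post-
    synaptic activity (e.g. low-pass filtered spikes); `modulation` is the
    task-dependent third factor (error, surprise, or reward).
    """
    return w + lr * np.outer(modulation * post_trace, pre_trace)

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(5, 10))       # 10 pre -> 5 post synapses
pre = rng.random(10)                          # filtered presynaptic spikes
post = rng.random(5)                          # filtered postsynaptic spikes
err = np.array([0.0, 0.2, -0.1, 0.0, 0.5])    # per-neuron modulation signal

w_new = three_factor_update(w, pre, post, err)
```

Note that all three factors are local to the synapse or its postsynaptic neuron, which is what makes such rules attractive for on-chip implementation: synapses whose modulation term is zero are left untouched.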
A more practical approach to learning in SNNs is off-chip (figure 22, bottom), which relies on a separate
general-purpose computer to train a model of the SNN, where memory and computational resources are potentially
more abundant. In this approach, once the SNN is trained, the parameters are then mapped to the hardware.
Provided a suitable model of the hardware substrate or a method to convert parameters from a conven-
tional network to an SNN, off-chip learning generally achieves the best inference accuracy on practical tasks
[298, 315]. Heterogeneous approaches combining on-chip and off-chip learning (also called chip-in-the-loop, figure 22, middle) have been successful at smaller scales [300], although scalability there remains hindered by access to the local states necessary for plasticity on the chip. The suitability of on-chip or off-chip learning
is highly dependent on the task. The former is best for continual learning (section 18) and the latter is best
when a large dataset is already available and the SNN model and parameter conversion are near-exact. If the
model is not exact, hybrid learning is often the most suitable method. On-chip and hybrid learning also have
the advantage that learning can occur online, i.e. during task performance.
Although Hebbian STDP variants have been instrumental for modeling in neuroscience, mathematically
rigorous rules derived from task objectives such as 3F rules have a clear upper hand in terms of practical
performance [304, 315, 322]. This is arguably because some forms of spatial credit assignment are necessary to
learn in non-shallow networks [292]. Thus, we anticipate that mathematically motivated (top-down driven)
rules grounded in neuroscience are likely to drive the majority of future research in SNN learning and their
neuromorphic implementation. Already today, the success of top-down modeling of learning to efficiently
train SNNs ushered in a new wave of inspiration from machine learning (ML) [313], and accelerated the quest
to build neuromorphic learning machines. In the following, we focus on specific challenges of 3F learning
approaches.
Figure 22. Implementation strategies and roadmap of SNN learning. Learning in SNNs can be achieved on-chip, off-chip or a combination of both (chip-in-the-loop). In off-chip learning, the parameters trained on a general-purpose computer (pink box) are mapped onto the neuromorphic device (blue). In the chip-in-the-loop approach, updates are computed partially off-chip, but using states recorded from the chip. In the on-chip implementation, the updates are computed locally to the neurons or synapses. While brain-like, efficient, continual learning can only be achieved using on-chip learning, off-chip approaches also play an
important role in pre-training the model and prototyping new algorithms, circuits and devices, or when learning is not necessary
(fast SNN inference).
breakthroughs will take place when software and algorithms are designed specifically for the constraints and
dynamics of neuromorphic hardware. This involves moving beyond the concept of mapping conventional neu-
ral network architectures and operations to SNNs, and instead modeling and exploiting the computational
properties of biological neurons and circuits. Beyond advances in the emerging devices themselves (section 1),
one key enabler of such breakthroughs will be a differentiable programming library (e.g. TensorFlow) operating at the level of spike events and temporal dynamics that facilitates the scalable composition and tracing
of operations [293]. Such a framework can in turn facilitate the computation of the necessary learning fac-
tors. While recent work demonstrated SNN learning with ML frameworks [314, 316], the mapping of the 3F
computations on dedicated hardware does not yet exist. This is due to a lack of applications and the more strin-
gent requirements for learning. Additionally, current technologies are not optimized for training large-scale SNNs, which today remains very slow and memory-intensive due to the high complexity of the underlying
dynamics and gradients [320, 321]. However, provided that SNN models capture key features of the brain,
namely that the average spike rate of neurons in the brain is at most 2 Hz and that connectivity is locally dense but
globally sparse [297], specialized computers capable of sparse matrix operations can greatly accelerate offline
training compared to conventional computers. This is because a neuron that does not spike does not elicit
any additional computations or learning at the afferent neurons. Spurred by the hardware efficiency of binarized neural networks [312], some ML hardware now supports efficient sparse operations [303], which could be
exploited in SNN computations. A community-wide effort in these directions (software and general-purpose hardware) is likely to boost several research areas, including the discovery of new (spatial) credit assignment
solutions, the identification and control of the distinctive dynamics of SNNs (multiple compartments, den-
drites, feedback dynamics, reward circuits etc), and the evaluation of new materials and devices, all in the light
of community-accepted benchmarks. Undertaking such device evaluations prior to the design and fabrication
cycle, for instance via a suitable surrogate model of the device, can save precious resources and dramatically
accelerate the development of emerging devices.
The ability to cross-compile models in a software library can blur the line between hardware and soft-
ware. This resonates well with the idea of on-chip and off-chip learning working in concert. That approach
is attractive because the difficulties of online learning can be mitigated in hardware with multiple stages of
training, for example by first training offline and then fine-tuning online [317]. Furthermore, fewer learning cycles entail fewer perturbations of the network, thus mitigating the problems of sequential learning.
At the same time, learning is achieved after a much smaller number of observations (e.g. few-shot learning),
which is essential in continual learning tasks (section 3.4). The success of such meta-learning hinges on a good
task set definition and is compute- and memory-intensive. Once again, general-purpose computers supporting
sparse matrix operations, associated ML libraries and community-wide efforts are essential to achieve this at
scale. Although ML is not the only approach for SNN learning, the tools developed to enable ML-style learning
algorithms are central to other learning models and approaches. These include hyperdimensional computing,
variational inference algorithms, and neural Monte Carlo sampling, all of which rely on well-controlled models
and stochasticity that can be supported by such tools.
Acknowledgements
This work was supported by the National Science Foundation under Grant 1652159 and 1823366.
16.1. Status
An important goal for neuromorphic hardware is to support fast on-chip learning in the hands of a user. Two
problems need to be solved for that:
(a) A sufficiently powerful learning method has to run on the chip, such as stochastic gradient descent.
(b) This on-chip learning needs to converge fast, ideally requiring just a single example (one-shot learning).
Evolution has found methods that enable brains to learn a new class from a single or very few examples.
For instance, we can recognize a new face in many orientations, scales, and lighting conditions after seeing
it just once, or at least after seeing it a few times. But this fast learning is supported by a long series of prior
optimization processes of the neural networks in the brain during evolution, development, and prior learning.
In addition, insight from cognitive science suggests that the learning and generalization capability of our brains
is supported by innate knowledge, e.g. about basic properties of objects, 3D space, and physics. Hence, in
contrast to most prior on-chip learning experiments in neuromorphic engineering, neural networks in the
brain do not start from a tabula rasa state when they learn something new.
Learning from few examples has already been addressed in modern machine learning and AI [323]. Of particular interest for neuromorphic applications are methods that enable recurrently connected neural networks (RNNs) to learn from single or few examples. RNNs are usually needed for online temporal processing—an
application domain of particular interest for energy-efficient neuromorphic hardware. The gold standard for
RNN-learning is backpropagation through time (BPTT). While BPTT is inherently an offline learning method
that appears to be off-limits for online on-chip learning, it has recently been shown that BPTT can typically be
approximated quite well by computationally efficient online approximations. In particular, one can port the
online broadcast alignment heuristic from feedforward to recurrent neural networks [324]. In addition, one
can emulate the common LSTM (long short-term memory) units of RNNs in machine learning by neuromor-
phic hardware-friendly adapting spiking neurons. Finally, a computationally efficient online approximation of
BPTT—called e-prop—exists that also works well for recurrent networks of spiking neurons (RSNNs) with
such adapting neurons [260]. The resulting algorithm for on-chip training of the weights W_ji from neuron i to neuron j of an RSNN—for reducing some arbitrary but differentiable loss function E—there takes the form

dE/dW_ji = Σ_t L_j^t · e_ji^t.
The so-called learning signal L_j^t at time t is some online-available approximation to the derivative of the loss function E with regard to the spike output of neuron j, and e_ji^t is an online and locally computable eligibility trace. While this would usually require even more training examples than BPTT, one can speed it up substantially by optimizing the learning signal L_j^t and the initial values of the synaptic weights W_ji to enable learning from few examples for a large—in general even infinitely large—family F of on-chip
learning tasks [325]. This can be achieved through learning-to-learn (L2L) [326]. A scheme for the application
of L2L to enable fast on-chip learning is shown in figure 23.
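Under the factorization above, the e-prop gradient can be accumulated online from the two locally available factors. The synthetic signals and traces below merely illustrate the bookkeeping of dE/dW_ji = Σ_t L_j^t · e_ji^t, not a trained network.

```python
import numpy as np

def eprop_gradient(learning_signals, eligibility_traces):
    """Accumulate dE/dW_ji = sum_t L_j^t * e_ji^t.

    learning_signals:   (T, n_post) online approximations L_j^t
    eligibility_traces: (T, n_post, n_pre) locally computed e_ji^t
    Both factors are available online, so the sum can be accumulated
    step by step without unrolling the network as BPTT would.
    """
    return np.einsum('tj,tji->ji', learning_signals, eligibility_traces)

rng = np.random.default_rng(3)
T, n_post, n_pre = 50, 4, 6
L = rng.normal(size=(T, n_post))              # synthetic learning signals
e = rng.normal(size=(T, n_post, n_pre))       # synthetic eligibility traces

grad = eprop_gradient(L, e)

# The same result, accumulated one time step at a time (the online view).
online = np.zeros((n_post, n_pre))
for t in range(T):
    online += L[t][:, None] * e[t]
```

The step-by-step loop is the form a chip would implement: at each time step it needs only the current learning signal and the locally stored eligibility traces, never the full history that BPTT requires.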
Figure 23. Scheme for the application of L2L for offline priming of a neuromorphic chip. Hyperparameters Θ of the RSNN on
the chip are optimized for supporting fast learning of arbitrary tasks C from a family F that captures learning challenges that may
arise in the hands of a user. The resulting hyperparameters are then loaded onto the chip. Note that the desired generalization
capability is here more demanding than usually: we want that the chip can also learn tasks C from the family F very fast that did
not occur during offline priming (but share structural properties with other tasks in the family F ).
Figure 24. (Left) Learning architecture for fast on-chip learning with e-prop. A learning signal generator produces online
learning signals for fast on-chip learning. The weights of the learning signal generator as well as the initial weights of the learning
network result from offline priming. (Right) Example application for fast learning. In this task C, the learning network has to
learn the new command ‘connect’ from a single utterance, so that it recognizes it also from other speakers. The learning signal
generator is activated when the new command is learnt (leftmost green segment).
for storing information from the few training examples that are needed for fast learning. In the case of machine
learning, these hidden variables are the values of memory cells of LSTM units. In SNNs these are the current values of the firing thresholds of adapting neurons. An alternative is to choose only some synaptic weights to be
hyperparameters, and to leave others open for fast on-chip learning [335]. Option 3 is used by the MAML
approach of [336], where only very few updates of synaptic weights via BPTT are required in the inner loop of
L2L. It also occurs in [325] in conjunction with option 4; see figure 24 for an illustration.
One common challenge that underlies the success of all mentioned options, is the efficacy of the training
algorithm for the offline priming phase, the outer loop of L2L. While option 1 can often be carried out by
gradient-free methods, the more demanding network optimizations of the other options tend to require BPTT
for offline priming of the RNN.
plasticity are required in the inner loop of L2L, as in option 1. In the case of option 2, spike-based neuromorphic hardware just needs to be able to emulate adapting spiking neurons. This can be done for example on
SpiNNaker [337] and Intel’s Loihi chip [202]. Using BPTT for on-chip learning appears to be currently infea-
sible, but on-chip learning with e-prop is supported by SpiNNaker and the next generation of Loihi. Then
option 4 can be used for enabling more powerful fast on-chip learning. The only additional requirement for
the hardware is that an offline primed learning signal generator can be downloaded onto the chip (once and
for all), and that the chip supports communication of its learning signal for gating local synaptic plasticity
rules according to e-prop.
An illustration of a sample application which then becomes realistic is shown in figure 24: on-chip learning
of a new spoken command from a single example in such a way that the same command can then also be
recognized under different acoustic conditions and from different speakers.
Future advances need to address the challenge of training extended learning problems during the offline phase. Besides improved gradient-based algorithms, gradient-free training methods such as evolution
strategies [338] are attractive for that. In fact, since the latter paradigm allows one to employ neuromorphic hardware directly for evaluating the learning performance, this approach can benefit from the speed and efficiency of fast neuromorphic devices, as in [329]. Particularly fast neuromorphic hardware such as BrainScaleS [339] might then support even more powerful offline priming with training algorithms that could not be carried out on GPU-based hardware, thereby providing the basis for superior hybrid systems.
Acknowledgements
This research/project was supported by the Human Brain Project (Grant Agreement Number 785907) of the
European Union and a Grant from Intel.
Srikanth Ramaswamy
Newcastle University
17.1. Status
Understanding the brain is probably the final frontier of modern science. Rising to this challenge can provide fundamental insights into what makes us human, yield new therapeutic treatments for brain disorders, and inspire revolutionary information and communication tools. Recent years have witnessed phenomenal strides
in employing mathematical models, theoretical analyses and computer simulations to understand the multi-
scale principles governing brain function and dysfunction—a field referred to as ‘computational neuroscience’
[340, 341]. Computational neuroscience aims at distilling the necessary properties and features of a bio-
logical system across multiple spatio-temporal scales—from membrane currents, firing properties, neuronal
morphology, synaptic responses, structure and function of microcircuits and brain regions, to higher-order
cognitive functions such as memory, learning and behavior. Computational models enable the formulation
and testing of hypotheses, which can be validated by further experiments.
The multidisciplinary foundations of computational neuroscience can be broadly attributed to neurophys-
iology, and the interface of experimental psychology and computer science. The first school of thought, neu-
rophysiology, is exemplified by the model of action potential initiation and propagation proposed by Hodgkin
and Huxley [342] and theoretical models of neural population dynamics [229]. The second school of thought,
at the interface of experimental psychology and computer science, focuses on information processing and
learning, and can be traced back to the ANN models developed about half a century ago [343]. Computational
neuroscience emerged as a field in its own right about three decades ago and has rapidly evolved ever
since [344].
In its early stages, computational neuroscience focused almost entirely on sensory processing, mainly
because studies of cognitive function were restricted to the domain of psychology and lay beyond what
empirical neuroscience could offer. Since then, however, rapid strides in tools and techniques have enabled
tremendous advances in our knowledge of the neural mechanisms underlying cognitive states such as learning
and memory, reward and decision-making, and arousal and attention [345–347]. Consequently, the dynamic
field of neuroscience offers many opportunities and challenges. A recent development is the symbiosis
between computational neuroscience and deep learning [313]. Deep learning models
provide efficient means to analyze vast amounts of data, catalyzing computational modeling in brain
research. However, the current framework of deep learning is mostly restricted to tasks such as object
recognition and language translation. Identifying the fundamental mechanisms responsible for the emergence
of higher cognitive functions such as attention and decision making, and recapitulating them appropriately
in computational models and algorithms, will influence the next generation of intelligent devices.
brain function, which, while consuming only a few watts of power, still manages to solve complex problems
that appear intractable with current computing resources. The brain is robust, reliable and resilient in its
operation even though its individual building blocks can fail. All of these are highly advantageous features
to inform the design of the next generation of computing hardware [356].
In the future, computational models and simulations of brain function and dysfunction will be better
informed by the brain's unique capabilities to model and predict the outside environment, its underlying
design principles and mechanisms, and its multi-scale organization and operation. The interface of experimental and
computational neuroscience will shed new light on the unique biological architecture of the brain and help
translate this knowledge into the development of brain-inspired technologies.
We have only begun to deal with and solve these diverse challenges. It is possible that
‘neuromorphic’ computing systems of the future will comprise billions of artificial neurons and the devel-
opment, design, configuration and testing of radically different hardware systems will require new software
compatible with the organizing principles of brain function. This will require a deep theoretical understand-
ing of the way the brain implements its computational principles. Knowledge of the cognitive architectures
underlying capabilities such as attention, visual and sensory perception can enable us to implement biological
features that current computing systems lack.
Recent advances in ‘connectomics’ allow unprecedented reconstructions of biological brain networks.
These connectomes display rich structural properties, including heavy-tailed degree distributions,
segregated ensembles and small-world networks [357]. Despite these advances, how network structure
determines computation and function remains unknown. Going forward, approaches from computational
modelling and neuroscience could better inform the design of neuromorphic systems to unravel how structure leads to function,
how the same network configuration could result in a spectrum of cognitive tasks depending on the network
state and how different network architectures could support the emergence of similar cognitive states.
Acknowledgements
S.R. acknowledges support from the European Union’s Horizon 2020 research and innovation programme
under the Marie Skłodowska-Curie grant agreement No. 842492 and a Newcastle University Academic Track
(NUAcT) Fellowship.
Jonathan Tapson
18.1. Status
The human brain is extraordinarily efficient in computation, using at least five orders of magnitude less power
than the best neuromorphic silicon circuits [359]. Nonetheless, it still consumes approximately 20%–25% of
a human’s available metabolic energy, and it is safe to assume that the evolutionary pressure to optimize for
power efficiency in the brain was extremely severe [360]. It therefore comes as a surprise that the
transmission of signals through the brain's synaptic junctions is apparently noisy and inefficient, with
probabilities of 0.4–0.8 for transmission of an axonal spike being typical (see [361] for a detailed
review). This raises the question: is this transmission variability a bug or a feature? And can any
brain-inspired computational system that does not include synaptic stochasticity capture the essence of
human thought? Perhaps stochasticity serves to regularize biological neural networks, in the same way that
machine learning techniques such as dropout are used to make ANNs more robust (figure 25).
These and many other similar questions drive the field of stochastic computation [362, 363]. The field covers
a large number of techniques in which some kind of probabilistic function, filter or network is used to create a
computational output which would not be possible with deterministic systems. For example, in neuromorphic
neural networks, the use of nonlinear random projections has become a commonplace method for raising the
dimensionality of an input space prior to a learned solution layer. Technologies as diverse as silicon device mis-
match, memristor mismatch, and even random networks of conductive fibres have been proposed and tested
for this purpose [364]. Generally, stochastic computation methodologies fall into a number of categories:
(a) Systems where noise or randomness is used to add energy to a system, enabling it to traverse or estab-
lish states which were otherwise inaccessible. Energy in this sense means potential or kinetic energy (in
mechanical, electrical or chemical form), rather than general system power consumption. The various
phenomena of stochastic resonance and stochastic facilitation [362] are typical examples of these systems.
(b) Systems where the data or input streams are intrinsically random or noisy, and rather than filter or other-
wise reduce the uncertainty in the signals, a computational system is devised which processes the raw signal
to produce a computationally optimal output. Recently, many of these systems have applied Bayesian models
[365], particularly when derived from biological principles.
(c) Systems in which it is required to project the input space nonlinearly to a higher dimension, in order to
facilitate linear or other numerical solutions to some regression or classification problem. This obviously
includes conventional neural networks; however, there is an increasing body of research in both human
neuroscience and machine learning in which random nonlinear projections are found to be optimal for
some function.
Figure 25. Illustration of a simple stochastic computation, using logic gates (AND or MUX) to compute on strings of bits
representing probabilistic coding of numbers. Note that single-bit errors in the computation (as a result of noise) will not
significantly change the output. After [362].
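The trick the figure illustrates can be sketched in a few lines: if two independent bit streams encode numbers as the probability of a 1, a plain AND gate on the streams yields a stream encoding their product, and a single flipped bit moves the decoded value by only 1/n. A minimal sketch in Python (the stream length and example values are illustrative):

```python
import random

def to_stream(p, n, rng):
    """Encode a number p in [0, 1] as a bit stream where P(bit = 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(stream):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

rng = random.Random(42)
n = 100_000
a, b = to_stream(0.5, n, rng), to_stream(0.4, n, rng)

# an AND gate multiplies the two encoded probabilities: 0.5 * 0.4 = 0.2
product = decode([x & y for x, y in zip(a, b)])

# a single-bit error shifts the decoded value by only 1/n
noisy = a[:]
noisy[0] ^= 1
```

Because every bit carries the same tiny weight, the representation degrades gracefully under noise, which is exactly the property the figure highlights.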
19.2. Challenges
The majority of previous works on convolutional SNNs classify static images. SNNs, however, process spike
trains in the temporal domain, so various coding schemes have been proposed for converting static images to
spikes [386]. Selecting a proper coding strategy is important, since the energy consumption of asynchronous
neuromorphic hardware is approximately proportional to the number of spikes. Currently, rate coding is the
most widely used scheme since it yields high application-level accuracy, but it generates a number of spikes
proportional to the pixel intensity. This causes multiple (and sometimes redundant) spikes per neuron and
therefore reduces the energy-efficiency of the overall system. To gain further energy advantages, coding
schemes with fewer spikes should be explored.
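The energy issue is easy to see in a sketch: a rate-coded pixel behaves as a Bernoulli (Poisson-like) spike generator whose firing probability per time-step is the normalized intensity, so brighter pixels cost proportionally more spikes. All values below are illustrative:

```python
import random

def rate_code(intensity, timesteps, rng):
    """Rate coding: emit a spike at each time-step with probability
    proportional to the normalized 8-bit pixel intensity."""
    p = intensity / 255.0
    return [1 if rng.random() < p else 0 for _ in range(timesteps)]

rng = random.Random(0)
T = 1000
bright = rate_code(200, T, rng)   # high-intensity pixel
dark = rate_code(20, T, rng)      # low-intensity pixel
# the spike count (and hence the energy) scales with intensity
```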
Another challenge is directly training deep convolutional SNNs. From the ANN literature, it is well known that
network depth is a crucial factor for achieving high accuracy on vision tasks. ANN-to-SNN conversion enables
deep SNNs with competitive accuracy, but emulating floating-point activations with multiple binary spikes
requires a large number of time-steps, which in turn increases overall energy and latency. Surrogate gradient
learning allows short latency and can be used with flexible input representations, but it suffers from
convergence issues as depth is scaled up. Therefore, convolutional SNNs with surrogate learning are still
restricted to shallow networks on simple datasets. Overall, effective spike-based training techniques for
deep convolutional SNNs are necessary to reap the full energy-efficiency advantages of SNNs.
Finally, there is a need to investigate SNNs beyond the perspectives of accuracy and energy-efficiency.
Goodfellow et al [376] showed that imperceptible noise can induce a significant accuracy drop in ANNs. This
questions the reliability of ANNs, since humans do not misclassify such perturbed adversarial inputs. In
this light, there is a need to analyze the robustness of SNNs. Furthermore, the internal spike behavior of
SNNs remains as much of a 'black box' as that of conventional ANNs. In the ANN domain, several
interpretation tools have been proposed and provide cues for advanced computer vision applications such
as visual-question answering. In a similar vein, an SNN interpretation tool should be explored because of
its potential usage for real-world applications where interpretability in addition to high energy-efficiency is
crucial.
Figure 26. Illustration of the architectural difference between (a) a convolutional ANN and (b) a
convolutional SNN. Both architectures are based on spatial convolution operations; however, convolutional
SNNs convey information using binary spikes across multiple time-steps. To this end, most convolutional
SNNs use a Poisson spike generator to encode an RGB image into temporal spikes. Also, the rectified linear
unit (ReLU) neuron is replaced by a leaky integrate-and-fire (LIF) spiking neuron, in which an output spike
is generated whenever the membrane potential exceeds a firing threshold (Vth).
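The LIF dynamics in the caption reduce to a few lines: leak the membrane potential, integrate the input, and spike-and-reset on crossing Vth. A minimal discrete-time sketch (the parameter values are illustrative):

```python
def lif_step(v, current, v_th=1.0, leak=0.9):
    """One discrete time-step of a leaky integrate-and-fire neuron.
    Returns the updated membrane potential and 1 if a spike was emitted."""
    v = leak * v + current          # leak, then integrate the input
    if v >= v_th:                   # threshold crossing: spike and reset
        return 0.0, 1
    return v, 0

v, spikes = 0.0, []
for _ in range(10):                 # constant sub-threshold input current
    v, s = lif_step(v, current=0.3)
    spikes.append(s)
# the neuron fires periodically as charge accumulates over time-steps:
# spikes == [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```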
crossbar utilization (see [386] for more details). To address the mapping problem caused by weight sharing,
the authors in [371, 372] present a mapping protocol for convolutional ANNs. They also provide a simulation
tool for crossbar implementation, which evaluates the energy and performance of a network during inference
on crossbars. We believe that similar mapping protocols can be extended to convolutional SNNs.
Acknowledgements
The research was funded in part by C-BRIC, one of six centres in JUMP, a Semiconductor Research Corporation
(SRC) program sponsored by DARPA, the National Science Foundation, the Technology Innovation Institute
(Abu Dhabi) and the Amazon Research Award.
Gouhei Tanaka
The University of Tokyo
20.1. Status
Reservoir computing (RC) is a machine learning framework capable of fast learning, suited mainly for
temporal/sequential information processing [387]. The general concept of RC is to transform sequential input
data into a high-dimensional dynamical state using a 'reservoir' and then perform a pattern analysis on the
reservoir state in a 'readout'. This concept was originally conceived with a special class of recurrent
neural network (RNN) models (see figure 27(a)), such as echo state networks (ESNs) [388] and liquid state
machines (LSMs) [389]. The main characteristic is that the reservoir is fixed and only the readout is
adapted or optimized using a simple (mostly linear) learning algorithm, thereby enabling fast model
training. Owing to this computational efficiency, software-based RC on general-purpose digital computers has
been widely applied to tasks such as classification, prediction, system control, and anomaly detection for
various time series data. To improve computational performance, many variants of RC models have been
actively studied [390].
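The 'fixed reservoir, trained readout' idea can be sketched in a few lines: a small random tanh network is driven by the input and left untouched, while only a linear readout is adapted, here with an online least-mean-squares rule. All sizes, weight ranges and the toy sine-prediction task are illustrative, not a tuned ESN:

```python
import math
import random

rng = random.Random(1)
N = 30                                             # reservoir size
w_in = [rng.uniform(-0.5, 0.5) for _ in range(N)]  # fixed input weights
w_res = [[rng.uniform(-0.2, 0.2) for _ in range(N)]
         for _ in range(N)]                        # fixed recurrent weights

def reservoir_step(x, u):
    """x(t+1) = tanh(W_res x(t) + W_in u(t)); never trained."""
    return [math.tanh(sum(w_res[i][j] * x[j] for j in range(N)) + w_in[i] * u)
            for i in range(N)]

# only the linear readout w_out is adapted (online LMS update)
w_out = [0.0] * N
x = [0.0] * N
lr = 0.1
errs = []
for t in range(3000):
    u = math.sin(0.3 * t)                          # task: predict the next sample
    x = reservoir_step(x, u)
    target = math.sin(0.3 * (t + 1))
    y = sum(w * xi for w, xi in zip(w_out, x))
    err = target - y
    errs.append(abs(err))
    w_out = [w + lr * err * xi for w, xi in zip(w_out, x)]
```

In practice the readout is usually fitted in one shot by (ridge) linear regression over collected reservoir states, which is what makes RC training fast; the online update above merely keeps the sketch dependency-free.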
On the other hand, hardware-based RC is an attractive option for realizing efficient machine learning
devices. A reservoir can be constructed not only with RNNs but also with other nonlinear systems. In fact,
a rich variety of physical reservoirs have been demonstrated using electrical, photonic, spintronic, mechani-
cal, material, biological, and many other systems (see figure 27(b)) [391]. Such physical RC is promising for
developing novel machine learning devices as well as for finding unconventional physical substrates available
for computation. The system architectures of hardware-based reservoirs can be broadly classified into several
types, including network-type reservoirs consisting of nonlinear nodes, single-nonlinear-node reservoirs with
time-delayed feedback [392], and continuous medium reservoirs [393]. Many efforts are currently underway
to improve computational performance, enhance energy efficiency, reduce computational cost, and promote
implementation efficiency of the physical reservoirs. They are often combined with a software-based readout
or a readout device based on reconfigurable hardware capable of multiply-accumulate operation.
Further advances in physical RC would contribute to realizing novel AI chips, which are distinguished
from AI chips for deep learning. One of their potential targets is edge computing [394]. High-speed machine
learning computation on data streams obtained from sensors and terminal devices would lead to reduced data
traffic and enhanced data security in the Internet of Things (IoT) society.
Figure 27. RC frameworks where the reservoir is fixed and only the readout weights Wout are trained. (a) A conventional RC
system with an RNN-based reservoir as in ESNs and LSMs. (b) A physical RC system in which the reservoir is realized using a
physical system or device. Figure reproduced from [391]. CC BY 4.0.
Table 3. Examples of subjects in RC applications. Table reproduced from Tanaka et al (2019). CC BY 4.0.
Category Examples
Biomedical EEG, fMRI, ECG, EMG, heart rates, biomarkers, BMI, eye movement, mammogram, lung images
Visual Images, videos
Audio Speech, sounds, music, bird calls
Machinery Vehicles, robots, sensors, motors, compressors, controllers, actuators
Engineering Power plants, power lines, renewable energy, engines, fuel cells, batteries, gas flows, diesel oil, coal mines, hydraulic excavators, steam generators, roller mills, footbridges, air conditioners
Communication Radio waves, telephone calls, internet traffic
Environmental Wind power and speed, ozone concentration, PM2.5, wastewater, rainfall, seismicity
Security Cryptography
Financial Stock price, stock index, exchange rate
Social Language, grammar, syntax, smart phone
which physical RC system meets a specific purpose. It is also important to promote the integration of RC-based
machine learning devices with IoT devices.
are required for development of physical RC. Therefore, the progress of physical RC would be accelerated by
interdisciplinary collaborations between experts in different research areas.
Acknowledgements
This work was partially based on results obtained from a project, JPNP16007, commissioned by the New
Energy and Industrial Technology Development Organization (NEDO), and supported in part by JSPS KAK-
ENHI Grant Number 20K11882, JST CREST Grant Number JPMJCR19K2, and JST-Mirai Program Grant
Number JPMJMI19B1.
Simon Thorpe
CerCo-CNRS
21.1. Status
Deep learning architectures now dominate AI. But although they are superficially neurally inspired, they
differ significantly from biology. The 'neurons' in such systems typically send floating-point numbers,
whereas real neurons send spikes. Attempting to model a system with the complexity of the human brain with
floating-point numbers seems doomed to failure. The brain has 86 billion neurons, with around 7000 synapses
each on average. Real-time simulation of such a system with a resolution of 1 millisecond would require
(8.6 × 10^10) × (7.0 × 10^3) × (1.0 × 10^3) floating-point operations a second—over 600 PetaFLOPS, even
without worrying about the details of individual neurons. This would saturate the most powerful
supercomputer on the planet and require 30 Megawatts of power—over one million times the brain's remarkable
20 W budget. How does the brain achieve such a low energy budget? It seems very likely that spikes could be
a key to this efficiency and a reason why, since the late 1990s, SNNs have attracted increasing
interest [6, 409, 410].
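The back-of-envelope estimate above is easy to verify:

```python
neurons = 8.6e10            # ~86 billion neurons
synapses = 7.0e3            # ~7000 synapses per neuron on average
updates_per_s = 1.0e3       # 1 ms simulation resolution

flops = neurons * synapses * updates_per_s
petaflops = flops / 1e15    # 602 PetaFLOPS, i.e. just over 600
```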
A first critical advantage is that computation only occurs when there are spikes to process. The AER proto-
col (address event representation), first proposed in 1992 [411], communicates by sending lists of spikes. It is
used in many neuromorphic systems, including DVS (see section 21) and the multi-million-processor SpiNNaker
project [203, 412]. An early event-driven spiking neuron simulator was the original version of SpikeNet
[413, 414]. At the time, the joke was that such a system could simulate the entire human brain in real-time—as
long as none of the neurons spiked!
Second, spikes allow the development of far more efficient coding schemes. Researchers in both neuro-
science and neural networks typically assume that neurons send information using a firing rate code. And yet,
the very first recordings of optic nerve responses by Lord Adrian in Cambridge in the 1920s demonstrated
that while increasing the luminosity of a flashed stimulus increased both the peak and maintained firing
rates of fibres, there was also a striking reduction in latency [415]. Thus, even with a flashed stimulus,
response latency is not fixed. Sensory neurons effectively act as intensity-to-delay converters—a fact
effectively ignored for over six decades. But in 1990, it was proposed that spike-arrival times across a population of
neurons could be a highly efficient code [416], an idea confirmed experimentally for the retina in 2008 [417].
Figure 28. Comparison of three different spike-based coding strategies. Top: conventional rate coding using
counts of spikes in a relatively long observation window. Middle: rank order coding uses the order of firing
of just the first spike in a shorter window. Bottom: N-of-M coding, which limits the number of spikes that
are transmitted, allows very rapid and efficient transmission of large amounts of data.
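The intensity-to-latency conversion underlying the middle scheme of figure 28 is simple to sketch: stronger inputs fire earlier, and the resulting firing order across the population is itself the code. The linear intensity-to-delay mapping and the example values are illustrative:

```python
def latency_code(intensities, t_max=100.0):
    """Intensity-to-delay conversion: each input fires once, with a
    latency that decreases as its intensity increases. Returns
    (index, time) pairs sorted by firing time, i.e. the firing order."""
    events = [(i, t_max * (1.0 - v)) for i, v in enumerate(intensities)]
    return sorted(events, key=lambda e: e[1])

# normalized intensities in [0, 1]: the strongest input fires first
order = latency_code([0.2, 0.9, 0.5, 0.7])
rank = [i for i, _ in order]    # firing order of the four inputs
```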
but requires huge numbers of training trials with labelled data. Fortunately, a human infant’s brain does not
need to be trained with millions of images of dogs and cats to categorize new images correctly! Instead, they
can learn about new objects very rapidly. There is now good evidence that humans learn to detect virtually
anything new that repeats, with no need for labelled data. If humans listen to meaningless Gaussian noise
containing sections that repeat, they rapidly notice the repeating structure and form memories lasting for weeks
[421]. And in experiments where random images from the ImageNet database are flashed at rates of up to 120
frames per second, humans notice images that repeat, even with only 2–5 presentations [422]. None of the
existing floating-point (or rate-based) supervised learning schemes could explain such learning. In contrast,
a simple spike-timing-dependent plasticity (STDP) rule that reinforces synapses activated just before the
target neuron fires makes neurons develop selectivity to patterns of input spikes that repeat [423], and will even find the start of the
pattern [424]. Similar methods have also been used to generate selectivity to repeating patterns in the output
of a dynamic vision sensor corresponding to cars going by on a freeway—again in a totally unsupervised way
[425].
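A toy version of this unsupervised mechanism, assuming a much-simplified all-or-none STDP rule rather than the exact models of [423–425], takes only a few lines: afferents that fire just before the postsynaptic spike are potentiated, all others are depressed, and the weights of a repeating sub-pattern embedded in background noise climb while the rest decay. All sizes, rates and learning constants below are illustrative:

```python
import random

rng = random.Random(3)
N = 100
pattern = set(range(30))        # afferents active during the repeating pattern
w = [0.5] * N                   # synaptic weights, clipped to [0, 1]
v, v_th = 0.0, 8.0              # LIF membrane potential and threshold
a_plus, a_minus = 0.05, 0.02    # potentiation / depression steps

for t in range(2000):
    in_pattern = (t % 10) < 3   # the pattern recurs every 10 time-steps
    active = [i for i in range(N)
              if (in_pattern and i in pattern and rng.random() < 0.9)
              or rng.random() < 0.05]            # background noise spikes
    v = 0.8 * v + sum(w[i] for i in active)
    if v >= v_th:               # postsynaptic spike -> STDP update
        v = 0.0
        for i in range(N):
            if i in active:
                w[i] = min(1.0, w[i] + a_plus)   # pre just before post
            else:
                w[i] = max(0.0, w[i] - a_minus)

mean_pattern = sum(w[i] for i in pattern) / len(pattern)
mean_noise = sum(w[i] for i in range(30, N)) / (N - 30)
# the neuron becomes selective: pattern synapses strengthen, the rest fade
```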
These STDP-based learning rules use continuously variable synaptic weights and typically require tens of
repeats to find the repeating pattern, even when all parameters are optimized. But we have recently developed a
new learning rule called JAST using binary weights [426] that can match our ability to spot repeating patterns
in as few as 2–5 presentations. The target neuron starts with a fixed number of binary connections. Then,
instead of varying the strength of the synapses (as in conventional STDP learning), the algorithm effectively
swaps the locations of the connections to match the repeating input pattern.
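A toy illustration of the general idea, not the actual JAST rule of [426], can be written in a few lines: the neuron keeps a fixed budget of K binary connections and, on each presentation of the repeating pattern, re-points a few connections from inactive inputs onto active ones. After a handful of presentations the connections sit almost entirely on the pattern. All sizes and the swap count are illustrative:

```python
import random

rng = random.Random(7)
N, K = 256, 32                            # input lines, connection budget
pattern = set(rng.sample(range(N), 40))   # inputs active in the repeating pattern
conn = set(rng.sample(range(N), K))       # initial random binary connections

def present(active, conn, n_swap=8):
    """Re-point up to n_swap connections from inactive inputs onto active
    ones; the number of binary connections stays fixed, unlike in
    weight-based STDP learning."""
    misses = list(conn - active)          # connections on silent inputs
    free = list(active - conn)            # active inputs not yet connected
    rng.shuffle(misses)
    rng.shuffle(free)
    for old, new in zip(misses[:n_swap], free[:n_swap]):
        conn.discard(old)
        conn.add(new)
    return conn

for _ in range(5):                        # a handful of presentations suffices
    conn = present(pattern, conn)

overlap = len(conn & pattern)             # connections now sitting on the pattern
```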
The algorithm was originally implemented on a low-cost Spartan-6 FPGA, already capable of implementing
a network with 4096 inputs and 1024 output neurons, and calculating the activation level and updating all the
outputs 100 000 times a second. The circuit also included the learning algorithm on-chip.
Acknowledgements
Section 4. Applications
22. Robotics
Chiara Bartolozzi
Istituto Italiano di Tecnologia (IIT)
22.1. Status
Neuromorphic systems, being inspired by how the brain computes, are a key technology for implementing
artificial systems that solve the problems the brain solves, under very similar constraints and
challenges. As such, they hold the promise of efficiently implementing autonomous systems capable of robustly
understanding the external world in relation to themselves, and of planning and executing appropriate actions.
The first neuromorphic robots were proofs of concept based on ad hoc hardware devices that emulated
biological motion perception [427]. They relied on the know-how of chip designers, who had to manually
turn knobs to tune the chip behaviour. That seed could grow into a mature field thanks to the availability of
hardware that could be more easily tuned by nonexperts with standard software tools [428] and of quality DVS
[429] and neuromorphic computing chips and systems [412, 430] featuring many instances of neurons and
(learning) synapses that could be used as computational primitives for perception and decision making. Since
then, neuromorphic robotics followed three main paths, with the development of visual perception for robots
using event-driven (dynamic) vision sensors [375, 431], proof-of-concept systems linking sensing to control
[432] and SNN for the control of motors [433, 434]. At the same time, the neurorobotics community started
developing models of perception, cognition and behaviour based on SNN, with recent attempts to imple-
ment those on neuromorphic platforms [435–437]. Finally, the computational neuroscience community has
developed learning theories to reconcile DNNs with biologically inspired spike-based learning and to directly
develop spiking neural models for motor control that in the future could be implemented on neuromorphic
hardware [438–440].
In this rich and lively scenario, the multiple contributing communities and research fields have the potential
to lead to the next breakthrough, whereby neuromorphic sensing and computing support the development of
smart, efficient and robust robots. This research is timely and necessary: as robots move from extremely
controlled environments to spaces where they collaborate with humans, they must dynamically adapt,
borrowing from neural computational principles (figure 29).
Figure 29. Neuromorphic robots: on the left the tracker chip mounted on a pan-tilt unit [427], on the right the iCub humanoid
platform featuring event-driven vision sensors.
Figure 30. Timeline of a possible development roadmap. In green, required theoretical advancements in order
of increasing complexity; in red, the technological roadmap highlighting the path for new circuit and device
development, as well as the infrastructure needed for integration on robotic platforms.
The signature of neuromorphic robots will be continuous learning and adaptation to different environments,
different tasks, changes in the robot plant, and different collaborators. This must be supported by hardware
capable of handling plasticity at multiple temporal scales and by a strong knowledge of how the brain
implements such mechanisms.
At the technological level, it is paramount to develop neuromorphic devices that can be embedded on
robots, increasing the neurons and synapses count and fan-in fan-out capabilities, while maintaining a low
power budget. Ideally, those devices would have standard interfaces that do not require additional
components to connect to the software infrastructure of the robots. With growing task complexity, and the
need for multiple hardware platforms to run different computational modules, the neuromorphic paradigm could
take advantage of robotic middleware, such as ROS or YARP, which is currently seamlessly integrated with
neuromorphic sensors and computing devices.
An advancement in the understanding of the role of different brain areas, their working principles and their
interaction with other areas across different temporal and spatial scales shall guide the design of artificial archi-
tectures using spiking neuron models, synaptic plasticity mechanisms, connectivity structures to implement
specific functionalities. It is crucial to find the right level of detail and abstraction of each neural computational
primitive and develop a principled methodology to combine them. Starting from highly detailed models of
brain areas, the community shall find reduced models that capture their basic functionality and that can be
implemented on neuromorphic hardware.
As the community is now developing SNN to extract information from a single sensory modality, the next
step would be to take into account information from other sensory modalities, so that decisions depend on
the state of the environment, of the robot and of the ongoing task. Among many others, a key area to take
inspiration from is the cerebellum, which supports the acquisition of motor plans and their adaptation to
the current (sensed) conditions [442]. The resulting computational frameworks shall therefore include dynamic
and continuous learning and adaptation.
On the other hand, progress is necessary in the neuromorphic hardware supporting those new frameworks.
New circuits for the emulation of additional computational primitives are needed, as well as the possibility to
support dynamic, continuous learning and adaptation at multiple timescales.
Specific to the robotic domain, neuromorphic devices should be truly embeddable. To this aim,
standardisation of communication protocols, programming tools and online integration with the robot's
middleware must be developed. The miniaturisation necessary to pack more computational resources onto a
single system that can be mounted on a robot goes through the integration of silicon devices with memristive
devices. In the longer term, nanotechnology and flexible electronics could offer a viable solution to
miniaturize further, to distribute computational substrates that de-localise computation to the periphery,
or to create folded structures similar to the cortex, which through folding increased the surface available
for computation, achieving higher computational capabilities (figure 30).
Acknowledgements
The author would like to thank E Donati for fun and insightful discussions and brainstorming on the topic.
Jonathan Tapson
23.1. Status
Self-driving cars have been a staple of science fiction for decades; more recently, they have seemed like an
attainable goal in the near future. The machine-learning (ML) boom of the period 2015–2020 gave great cause
for optimism, with experts such as US Secretary of Transportation Anthony Foxx declaring in 2016 [443] that
‘By 2021, we will see autonomous vehicles in operation across the country in ways that we [only] imagine
today. . . My daughter, who will be 16 in 2021, will not have her driver’s license. She will be using a service’.
This optimism has faded away in the last three years, with the recognition that while it is straightforward
to make cars autonomous in simple environments such as freeway driving, there are a multitude of situations
where driving becomes too complex for current solutions to achieve autonomy. It is tempting to refer to these
as ‘corner cases’ or ‘edge cases’—in the sense of being a highly unlikely combination of circumstances, at a
‘corner’ or ‘edge’ of the feature space, which produces a situation where a machine learning algorithms fails to
operate correctly—except that, in the real-world of driving, these situations appear to be far more common
than was originally expected.
It may be helpful to use the industry terminology when discussing self-driving cars. Self-driving is more
formally known as Advanced Driver Assistance Systems (ADAS), and the industry generally uses the Society of
Automotive Engineers' (SAE) six-level ADAS model (levels 0–5), illustrated below, when discussing autonomous
driving capabilities (figure 31).
The more recent perception of ADAS progress can be summed up in a quote from Prof Mary Cummings,
Director of Duke University’s Humans and Autonomy Laboratory [444]: ‘there are basically two camps. First
are those who understand that full autonomy is not really achievable on any large scale, but are pretending
they are still in the game to keep investors happy. Second are those who are in denial and really believe it is
going to happen’.
Between the optimistic and pessimistic extremes, there is a consensus view amongst ADAS researchers that
while full level 5 ADAS is unlikely to be available in the next five years, level 4 ADAS is both an attainable and
useful target.
Figure 31. The SAE ADAS model. Note that levels 0–2 depend on continuous monitoring by the driver, whereas 3–5 do not. The
customary vision of an autonomous car would be ADAS level 5—a car which is able to be autonomous in all environments
without any human supervision or intervention.
ADAS level Approximate compute requirement
L2 2 TOPS
L3 24 TOPS
L4 320 TOPS
L5 4000+ TOPS
control systems for level 4 ADAS [446], which is not insignificant, particularly for electric vehicles where range
is a critical issue.
Given the real-time nature of ADAS computation, and the necessity to process correlated streams of visual
and 3D point-cloud data (from lidar systems), there is some expectation that event-based neuromorphic com-
putation may be more suitable than current GPU-type computational hardware. At least one neuromorphic
event-based hardware startup is focused on real-time vision processing for this purpose [449].
In terms of building cognitive models of the world, we are reaching a point where brute-force approaches to
ML are producing diminishing returns. Language models such as GPT-3 [450] can produce impressive passages
of text, but it is becoming clear that no real insight is generated; moreover, these models are trained on orders
of magnitude more text than any human could assimilate in a lifetime, suggesting a fundamental
deficiency in the approach. Neuromorphic approaches such as Legendre memory units (LMUs) [451] offer performance
equal to GPT-3 architectures with 10× lower training and memory requirements, suggesting that this
may help to close this gap. Similarly, the use of neuromorphic hardware such as Intel’s Loihi [202] and GML’s
GrAIOne chips [449], which are strictly event-based and intrinsically sparse, may provide a computational
platform that enables these more biologically realistic machine learning methods.
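The LMU's memory cell can be sketched as a small linear state-space system whose matrices are fixed by the Legendre construction rather than learned. The following numpy sketch builds those matrices and Euler-integrates them over an input signal; the state dimension, window length and time step are illustrative choices, not values taken from [451].

```python
import numpy as np

def lmu_matrices(d):
    """Build the d-dimensional LMU (A, B) state-space matrices."""
    A = np.zeros((d, d))
    B = np.zeros(d)
    for i in range(d):
        B[i] = (2 * i + 1) * (-1) ** i
        for j in range(d):
            if i < j:
                A[i, j] = -(2 * i + 1)
            else:
                A[i, j] = (2 * i + 1) * (-1) ** (i - j + 1)
    return A, B

def run_lmu(u, d=8, theta=1.0, dt=1e-3):
    """Euler-integrate dm/dt = (A m + B u)/theta over an input signal u."""
    A, B = lmu_matrices(d)
    m = np.zeros(d)
    states = []
    for u_t in u:
        m = m + (dt / theta) * (A @ m + B * u_t)
        states.append(m.copy())
    return np.array(states)

# The memory state compresses the recent history of u over a sliding
# window of length theta onto the Legendre polynomial basis.
u = np.sin(2 * np.pi * np.linspace(0, 1, 1000))
states = run_lmu(u)
```

In a full LMU layer this linear memory is combined with a learned nonlinear hidden state; only the memory dynamics are sketched here.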
Thomas A Cleland
Cornell University
24.1. Status
Artificial olfactory systems were early adopters of biologically-inspired design principles. Persaud and Dodd
constructed an electronic nose in 1982 based explicitly on the principles of the mammalian olfactory sys-
tem—specifically, the deployment of a diverse set of broadly-tuned chemosensors, with odorant selectivity
arising from a convergent feature detection process based on the pattern of sensor responses to each odor-
ant [452]. Such cross-sensor patterns, inclusive of sampling error and other sources of noise, invite machine
learning strategies for classification. Gardner and colleagues subsequently trained artificial neural networks (ANNs) to recognize odorant-associated response patterns from chemosensor arrays [453], and constructed a portable, field-deployable
system for this purpose [454].
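The cross-sensor pattern idea can be illustrated with synthetic data: broadly tuned sensors yield a response vector per odorant, and even a simple nearest-template classifier recognizes noisy samples. All tuning parameters, odorant positions and noise levels below are illustrative, not drawn from [452, 453].

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, width = 12, 0.35                  # broadly tuned sensors
centres = np.linspace(0, 1, n_sensors)       # tuning centres in a 1D quality space
odorants = {"A": 0.2, "B": 0.5, "C": 0.8}    # synthetic odorant positions

def response(q, noise=0.0):
    """Cross-sensor response pattern to an odorant at quality q."""
    r = np.exp(-((centres - q) ** 2) / (2 * width ** 2))
    return r + noise * rng.standard_normal(n_sensors)

# One clean template pattern per odorant; classify noisy samples by
# highest cosine similarity to a template.
templates = {name: response(q) for name, q in odorants.items()}

def classify(sample):
    sims = {name: sample @ tmpl / (np.linalg.norm(sample) * np.linalg.norm(tmpl))
            for name, tmpl in templates.items()}
    return max(sims, key=sims.get)

hits = sum(classify(response(q, noise=0.05)) == name
           for name, q in odorants.items() for _ in range(20))
```

Despite each sensor being only partially selective, the joint pattern across the array separates the three odorants reliably.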
The biomimetic principle of chemical sensing by arrays of partially selective chemosensors has remained
the state of the art [455, 456]. Arrays obviate the need to develop highly selective sensors for analytes of interest,
as high specificity can be readily achieved by the deployment of larger numbers of partially selective sensors
[457]. Moreover, such systems are responsive to a wide range of chemical diversity (odorant quality space;
figure 32), enabling the identification of multiple chemical species and diagnostic odorant mixtures and effec-
tively representing their similarity relationships. The intrinsic redundancy of such chemosensor arrays also
renders their responses more robust to contamination or interference, provided the analysis method is able to
use the redundant information effectively. In contrast, strategies for post-sampling signal processing and anal-
ysis have varied. Typically, chemosensor array responses are conditioned by electronic preprocessors and then
analyzed by one of a range of methods including linear discriminant analysis, principal components analysis,
similarity-based cluster analyses, and support vector machines, along with a variety of artificial neural-network-based techniques [455, 456, 458, 459]. However, more directly brain-inspired techniques have also been
applied to both the conditioning and analysis stages of processing. For example, the biological olfactory bulb
(OB) network (figure 33) decorrelates similar inputs using contrast enhancement [460]. When applied as sig-
nal conditioning to artificial sensor data, this operation improved the performance of a naïve Bayes classifier
[461]. Similarly, inhibitory circuit elements inspired by the analogous insect antennal lobe (AL) have been
deployed to enhance the performance of support vector machines [459, 462]. Finally, fully neuromorphic cir-
cuits for analysis and classification have been developed that are based directly on OB/AL circuit architectures
(first by [463]; reviewed in [464]; more recently [465, 466]). These approaches are discussed below.
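As a toy illustration of contrast enhancement used as signal conditioning, the sketch below suppresses each channel by the mean array activity and rectifies the result, which decorrelates two similar sensor patterns. The inhibition gain and the patterns are arbitrary illustrative choices, not a model of the actual OB circuit of [460].

```python
import numpy as np

def contrast_enhance(x, gain=1.2):
    """Subtract scaled mean activity (global inhibition), then rectify."""
    return np.maximum(x - gain * x.mean(), 0.0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Two similar sensor-array response patterns.
x1 = np.array([0.9, 0.8, 0.5, 0.3, 0.2])
x2 = np.array([0.8, 0.9, 0.4, 0.3, 0.3])

before = cosine(x1, x2)
after = cosine(contrast_enhance(x1), contrast_enhance(x2))
# After enhancement only the strongest channels survive, so the two
# patterns overlap less (their cosine similarity drops).
```

The decorrelated patterns are easier for a downstream classifier to separate, which is the effect reported for the naïve Bayes classifier in [461].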
Figure 32. Illustration of the capacities of chemosensor arrays to distinguish small changes in odorant quality. (A) Axes denote a
2D quality (Q-) space of physicochemical similarity, ellipses depict the selectivities of three different chemosensors sampling that
space. Sensors with broader receptive fields cover a given sensory space more effectively. Discrimination capacity (denoted by hot
colors) is maximized where the dropoff of sensitivity is steepest, and where the chemoreceptive fields of multiple sensors overlap.
(B) Example two-dimensional Q-space with 30 sensors (ellipse pairs, distinguishing selectivity from sensitivity) and 40 chemical
ligands (points) deployed. (C) Mean discrimination capacity depends on the number of sensors deployed into a Q-space, shown
as a function of the number of competing ligands deployed into the Q-space illustrated in (B). Deploying additional
chemosensors reliably improves system performance. Adapted from [457].
Figure 33. Annotated circuit diagram of mammalian olfactory bulb with three sensor classes (denoted by color). Human
olfactory bulbs exhibit roughly 400 sensor classes, whereas those of rats and mice express roughly 1200. Glomerular layer (GL)
circuitry performs signal conditioning, whereas the formation of target representations depends on synaptic plasticity between
principal neurons (MT) and granule cell interneurons (Gr) in the external plexiform layer (EPL). Principal neurons then project
to multiple target structures including piriform cortex, which feeds back excitation onto granule cell interneurons. Adapted from
[460].
capacity for expansion can exhibit lifelong learning capabilities. We have referred to this collection of prop-
erties as learning in the wild [460, 467], and focused on the capacity of such olfaction-inspired algorithms
to learn targets from one- or few-shot learning and identify known targets amidst unpredictable interfer-
ence [466], function in statistically unpredictable environments [468], and mitigate the effects of sensor drift
and decay [467]. Notably, working neuromorphic olfaction algorithms have been deployed on diverse edge-compatible hardware platforms including Intel Loihi, IBM TrueNorth, field-programmable gate arrays, and
custom neuromorphic devices [458, 463, 466, 469, 470].
other odorant sources. The actual capacity for such signal restoration under noise depends on the development
of circuits that leverage this capacity, and while early efforts are promising [466], there is substantial room for
algorithm improvement, such as integrating the pattern completion and clustering capabilities of piriform
cortex circuitry, developing cognitive computing methods such as hierarchical category learning to optimize
speed-precision tradeoff decisions, and improving performance in the wild. Making these capacities robust
requires the development of larger-scale chemosensor arrays, including compact, high-density arrays that can
be deployed in the field. Different sensor technologies, optimized both for different sample phases (gas, liquid)
and different chemoreceptive ranges of sample quality (e.g., food odors for quality control, toxic gases for
safety), will be required. Large libraries of candidate sensors can be screened [455, 471], reducing the need for
predictive models of sensors’ chemoreceptive fields in this process. However, molecular imprinting technology,
developed to produce highly specific chemosensors, now provides this capacity [472], and in principle could
be adapted to produce broader receptive fields by imprinting analyte mixtures.
Neuromorphic circuits are not readily adaptable to arbitrary tasks; the domain-specific architectures that
underlie their efficient operation also delimit the range of their applications. Olfaction-inspired networks
are not limited to chemosensory applications [467], but they are not likely to be effective when tasks do not
match their structural priors. However, the characterization and analysis of such fully functional neuromorphic
circuits enables the identification and extraction of computational motifs, yielding toolkits that can be intelli-
gently applied to new functional circuits. Moreover, new techniques for spike-based gradient descent learning
have successfully demonstrated few-shot learning in neuromorphic circuits preconfigured for the task domain
by transfer learning [317]. The design of task-specific neuromorphic circuits in the future is likely to depend on
combinations of these strategies, with qualitative circuit elements drawn from theory and generalized domains
established therein via emerging optimization strategies.
Acknowledgements
Christoph Posch
Prophesee
25.1. Status
Neuromorphic event-based (EB) vision sensors take inspiration from the functioning of the human retina,
trying to recreate its visual information acquisition and processing operations on VLSI silicon chips. The
first device of this kind out of C Mead’s group at Caltech, named the ‘Silicon Retina’, made it on the cover
of Scientific American in 1991 [473]. Early, more biologically faithful models often reproduced many different
cell types and signalling pathways, leading to very complex designs with limited practical usability; in recent
years, more focus has been put on creating practical sensor designs usable in real-world artificial vision
applications. A comprehensive history and state-of-the-art review of neuromorphic vision sensors is presented
in [474].
Today, the majority of EB sensor devices are based on the ‘temporal contrast’ or ‘change detection’ (CD)
type of operation, loosely mimicking the transient magno-cellular pathway of the human visual system
(figure 34). In contrast to conventional image sensors, CD sensors do not use one common sampling rate
(=frame rate) for all pixels, but each pixel defines the timing of its own sampling points in response to its
visual input by reacting to changes of the amount of incident light [429, 477, 478]. Consequently, the entire
sampling process is no longer governed by an artificial timing source but by the signal to be sampled itself, or
more precisely by the variations over time of the signal. The output generated by such a sensor is not a sequence
of images but a quasi-time-continuous stream of pixel-individual contrast events, generated and transmitted
conditionally, based on the dynamics happening in the scene. Acquired information is encoded and trans-
mitted in the form of data packets containing the originating pixel’s X, Y coordinate, time stamp, and often
contrast polarity. Other families of EB devices complement the pure asynchronous temporal contrast func-
tion with the additional acquisition of sustained intensity information, either pixel individually [475] or in the
form of frames like in conventional image sensors [476].
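The CD sampling principle can be sketched in a few lines: a pixel emits an ON or OFF event whenever its log intensity has changed by more than a contrast threshold since that pixel's last event. The threshold and the synthetic stimulus below are illustrative; real sensors implement this per-pixel comparison asynchronously in analog circuitry.

```python
import numpy as np

def frames_to_events(frames, times, threshold=0.2):
    """Emit (x, y, t, polarity) events from a frame sequence."""
    ref = np.log(frames[0] + 1e-6)           # per-pixel reference log level
    events = []
    for frame, t in zip(frames[1:], times[1:]):
        logI = np.log(frame + 1e-6)
        diff = logI - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((x, y, t, int(np.sign(diff[y, x]))))
            ref[y, x] = logI[y, x]           # reset reference at each event
    return events

# Synthetic stimulus: a bright vertical bar moving over a dark background.
T, H, W = 10, 8, 8
frames = np.full((T, H, W), 0.1)
for t in range(T):
    frames[t, :, t % W] = 1.0
events = frames_to_events(frames, np.arange(T))
```

Only pixels at the bar's leading edge (ON) and trailing edge (OFF) produce output; static pixels stay silent, which is the data-sparsity property exploited downstream.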
Due to the high temporal precision of the acquired visual dynamics, inherent data sparsity, and robust
high-dynamic-range operation, EB sensors are gaining prevalence as visual transducers for artificial vision systems
in applications where high-speed or low-latency operation, uncontrolled lighting conditions, and
limited resources (power budget, post-processing capability, transmission bandwidth) coincide,
e.g. in various automotive, IoT, surveillance, mobile or industrial use cases [375].
Figure 34. (a) Simplified three-layer retina model and (b) corresponding CD pixel circuitry; in (c) typical signal waveforms of
the pixel circuit are shown. The upper trace represents an arbitrary voltage waveform at the node V p tracking the photocurrent
through the photoreceptor. The bipolar cell circuit responds with spike events of different polarity to positive and negative
gradients of the photocurrent, while being monitored by the ganglion cell circuit that also transports the spikes to the next
processing stage; the rate of change is encoded in inter-event intervals; (d) shows the response of an array of CD pixels to a natural
scene (person moving in the field-of-view of the sensor). Spikes, also called 'events', have been collected for some tens of
milliseconds and are displayed as an image with ON (going brighter) and OFF (going darker) events drawn as white and black
dots.
vision task such as e.g. object detection, classification, tracking, optic flow, etc. The first group is preferably
implemented close to where the raw data are generated, i.e. near-sensor or in-sensor. Typically implemented
in an on-chip HW data pipeline, algorithms pre-process the raw pixel data for more efficient transmission
and post-processing, also with respect to memory access and processing-algorithm requirements. This data
conditioning pipeline can include functions such as recoding, formatting, rearranging, compressing, thinning,
filtering, binning, histogramming, framing etc. The latter group includes all application-specific vision pro-
cessing using computer-vision and/or ML-based algorithms and compute models, typically running on some
form of application processor. The question of the optimal compute fabric and architecture to be used with
EB sensors is unresolved today, and the optimal choice is application-dependent. However, as discussed widely in
other parts of this review, emerging non-von Neumann architectures, in particular neuromorphic approaches
such as SNNs, are better suited to realizing an efficient EB system than e.g. general-purpose CPUs. Much progress
is being made in this area; however, challenges remain around the absence of well-established deep learning
architectures and training techniques for event data, and the lack of large-scale datasets.
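As a minimal illustration of the first group of near-sensor pre-processing functions, the sketch below accumulates an event stream into a two-channel, per-polarity event-count histogram over a time window ('framing'/'histogramming'); the resolution, window and example events are illustrative.

```python
import numpy as np

def events_to_histogram(events, width, height, t_start, t_end):
    """Accumulate (x, y, t, polarity) events into per-polarity count maps."""
    hist = np.zeros((2, height, width), dtype=np.int32)  # [ON, OFF] channels
    for x, y, t, p in events:
        if t_start <= t < t_end:
            hist[0 if p > 0 else 1, y, x] += 1
    return hist

events = [(3, 2, 0.001, +1), (3, 2, 0.002, +1), (5, 1, 0.004, -1),
          (7, 7, 0.020, +1)]                   # last event falls outside the window
hist = events_to_histogram(events, width=8, height=8,
                           t_start=0.0, t_end=0.010)
```

Such event frames give frame-based downstream algorithms (e.g. conventional CNNs) a fixed-size input while preserving the sparsity of the original stream.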
Figure 35. Evolution over time of pixel pitch and array size of CD-based EB sensors.
Following the CMOS technology and integration roadmaps will yield EB devices with increasing indus-
trial applicability. Further advances in production and packaging technologies like triple wafer stacking, die-
stacking system-in-package and wafer-level optics will support the trend to autonomous ultra-low power/small
form-factor edge perception devices and AI systems where the sensor is highly integrated and tightly packaged
with pre-processing and application processing, thereby significantly reducing power consumption and trans-
mission bandwidth requirements of an artificial vision system, e.g. in IoT, mobile or perception networks
applications.
A big impact on the usability and competitiveness of EB systems is expected to come from future advances
in neuromorphic computing and event-based processing techniques. SNNs are a natural fit for post-processing
the data generated by EB sensors [202, 479]. But the sparse data output of EB sensors is also a good match
to future hardware accelerators for conventional DNN that exploit activation and network sparsity [480].
Recently, new kinds of neuromorphic vision devices beyond CMOS have been demonstrated, exploiting
different electro-optical material properties and fabrication techniques to further advance the tight integration
of sensing and processing, often combining photon transduction and analog neural network (ANN) functions
into a single fabric [129, 130, 481, 482]. Even though these devices are in their early proof-of-concept phase,
interesting and promising results have already been demonstrated.
Shih-Chii Liu
University of Zurich and ETH Zurich
26.1. Status
Neuromorphic audition technology is inspired by the amazing capability of human hearing. Humans understand speech even in difficult auditory scenarios while using a tiny fraction of the brain's entire 10 W. Matching
the capability of human hearing is an important goal of the development of algorithms, hardware technology
and applications for artificial hearing devices.
Brief history: human hearing starts with the biological cochlea which uses a space-to-rate encoding. The
incoming sound is encoded as asynchronous output pulses generated by a set of broadly frequency-selective
channels [484]. For frequencies below 3 kHz, these pulses are phase locked to the frequency [485]. This encod-
ing scheme leads to sparser sampling of frequency information from active frequency channels instead of the
maximal sampling rate used on a single audio input. The first silicon cochlea designs, starting with the work of
Lyon and Mead (the electronic cochlea) [486], model the basilar membrane (BM) of the cochlea as a set of coupled
filter stages. Subsequent designs include those with better matching properties for the filter stages and using
coupled filter architectures ranging from the originally proposed cascaded type modeling the phenomenolog-
ical output of the cochlea [486], to a resistively-coupled bank of bandpass filters that models the role of the
BM and the cochlear fluid more explicitly [487, 488].
Later designs include models of the inner hair cells on the BM, which transduce the BM and fluid vibrations
into an electrical signal; they are frequently modelled as half-wave rectifiers in silicon designs. Some designs
include the automatic gain control mechanism of the outer hair cells, which is useful for dealing with large sound
volume ranges of 60–120 dB. Cochlea designs starting from the early 2000s include circuits that generate
asynchronous binary outputs (or spikes) encoded using the address-event representation. Details and historical
evolution of these VLSI designs are described in [487, 488]. Recent spiking cochlea designs in more advanced
technologies such as 65 nm and 180 nm CMOS demonstrate better power efficiency (e.g., <1 µW/channel in
[489]). These new designs show competitive power efficiency compared to other audio front end designs that
compute spectrogram features from regular samples of a single audio source.
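The bandpass-filter-bank idea can be sketched in software as a bank of second-order resonant filters on log-spaced centre frequencies, followed by half-wave rectification as a crude inner-hair-cell stage. Channel count, Q, sample rate and frequency range are illustrative, and the inter-channel coupling of the resistively-coupled architectures is not modelled.

```python
import numpy as np

fs = 16000
centre_freqs = np.geomspace(100, 4000, 16)   # 16 log-spaced channels
Q = 4.0

def bandpass(x, f0):
    """Second-order (biquad) bandpass filter, unity peak gain at f0."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * Q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * np.cos(w0), 1 - alpha
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        y[n] = yn
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y

t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)                       # 1 kHz test tone
channels = np.array([np.maximum(bandpass(tone, f), 0.0)   # half-wave rectify
                     for f in centre_freqs])
best = centre_freqs[np.argmax(channels.mean(axis=1))]     # best-responding channel
```

A pure tone excites the channel whose centre frequency is closest to it most strongly, which is the space-to-rate (place) code that spiking cochleas then convert into address events.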
Importance of field: in the early 2000s, cochlea circuits were developed for audio bionic applications [490]
and models of biological auditory localization circuits [491]. With the increasing prevalence of voice-controlled
devices in everyday life, neuromorphic and bio-inspired solutions can potentially be interesting because of the
need for low-latency and energy-efficient design solutions in audio edge application domains.
Figure 36. (A) Top subfigure shows the 64-channel cochlea spike rasters corresponding to speech sample waveform of spoken
digits, ‘3–5’ in bottom subfigure. (B) Architecture for an example audio keyword spotting task. Figure shows an ASIC block that
combines the dynamic audio sensor (DAS) front-end with a continuous-valued DNN [495, 497] for an example ‘wake-up’
keyword spotting task. Spike outputs of the local filter channels are generated using asynchronous delta modulation [489]. The
spike events can be used to drive an SNN directly [496].
to configure SNNs to reach similar accuracy compared to continuous-valued artificial neural network (ANN)
solutions on a specific task, conversion techniques that map trained ANNs to SNNs [499] and global supervised
training methods for SNNs have been effective for this goal [500].
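The rate-coding intuition behind ANN-to-SNN conversion can be sketched with a single integrate-and-fire (IF) neuron: with reset-by-subtraction, its firing rate over a long window approximates a ReLU activation. The threshold, step count and test activations are illustrative; real conversion pipelines such as [499] additionally normalize weights layer by layer.

```python
import numpy as np

def if_rate(drive, threshold=1.0, steps=1000):
    """Firing rate (spikes per step) of an IF neuron with constant drive."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive
        if v >= threshold:
            spikes += 1
            v -= threshold       # reset by subtraction keeps residual charge
    return spikes / steps

relu = lambda a: max(a, 0.0)
for a in [0.0, 0.25, 0.5, 0.9]:
    rate = if_rate(a)
    # rate approximates relu(a) for activations in [0, threshold]
```

Because the approximation error shrinks with the observation window, converted SNNs trade inference latency against accuracy relative to the source ANN.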
metric for always-on audio devices. Other challenges include algorithms that are hardware aware, e.g., to the
variability of the network parameters after the ASIC fabrication, and approaches to reduce memory access or
to create predictable memory access patterns to reduce energy loss from the unpredictable memory accesses
of SNNs. The emerging large-scale availability of high-density local memory is also an interesting component
of future research for the ASIC development.
Acknowledgements
We acknowledge the Sensors Group members and colleagues who have worked on the Dynamic Audio Sensor
design and audio systems. Partial funding provided by the Swiss National Science Foundation, HEAR-EAR,
200021172553.
27.1. Status
Biohybrid systems are formed by biological and artificial components interacting in a unidirectional or
bidirectional fashion. In this section, we specifically refer to neurons or brain tissue as the biological component
of biohybrid systems for brain repair.
The first demonstration of a biohybrid dialogue was achieved in vitro at the beginning of the 1990s by
Renaud-LeMasson and colleagues, who established a communication between a biological neuronal network
and a computational model neuron [504]. Soon after, Chapin and colleagues brought the biohybrid paradigm
to the in vivo setting by providing the first proof of concept of interfacing the brain with a robotic end-effector
[505], a paradigm that has recently become a reality in clinical research [506].
Biohybrid systems are now a widespread approach to address brain dysfunction and devise novel treatments
for it [507]. Representative examples are electronic devices coupled to biological neurons in vitro [219] or to the
brain in vivo [508], establishing bidirectional communication through a closed-loop architecture. A key
feature of such systems is the real-time processing and decoding of neural signals to drive an actuator for brain
function modulation or replacement. To this end, enhancement of biohybrid systems with AI is the emerging
strategy to achieve an adaptive interaction between the biological and artificial counterparts. Neuromorphic
engineering represents the latest frontier for enhancing biohybrid systems with hardware intelligence [509]
and distributed computing [510], offering unprecedented brain-inspired computational capability, dynamic
learning of and adaptation to ongoing brain activity, power-efficiency, and miniaturization to the micro-scale.
In particular, the intrinsic learning and adaptive properties of neuromorphic devices offer a way to bypass
the typical trial-and-error programming and the rigid pre-programmed behaviour of current brain-implantable
devices, such as those used for deep-brain stimulation. In turn, such a unique potential enables
surpassing the drawbacks of current mechanistic approaches with a phenomenological (evidence-based) oper-
ating mode. Overall, these features serve as an asset to attain a physiologically-plausible interaction between
the biological and artificial counterparts.
The latest avenue for biomedical applications is neuromorphic-based functional biohybrids for brain
regeneration. These are hybridized brain tissue grafts (figure 37), wherein the neuromorphic counterpart(s)
emulate and integrate brain function, aiming at guiding the integration of the biological graft into the host
brain. This crucial aspect cannot be attained by a purely biological regenerative approach. Further advances in
neuromorphic biohybrids are thus expected to bring unparalleled strategies in regenerative medicine for the
brain: by providing symbiotic artificial counterparts capable of autonomous and safe operation for controlled
brain regeneration, they herald a paradigm shift in biomedical interventions for brain repair, from interaction
to integration.
Figure 37. Concept of functional biohybrids for brain regeneration. Functional biohybrids merge concepts from regenerative
medicine (rebuild of brain matter) and neuromorphic neuroprosthetics (adaptive control of brain function). The symbiotic
interaction between the biological and artificial counterparts in the biohybrid graft is expected to achieve a controlled brain
regeneration process.
and physically inaccessible). Further, the operation of an autonomous system, by definition, should not
depend on external components.
(b) Wireless operation: this is required to follow the graft's evolving function during the regeneration process,
and to enable wireless device re-programming and hardware failure monitoring.
(c) On-chip learning, supported by application-specific integrated circuits for advanced signal processing, to
follow the evolving temporal dynamics of the graft during its integration within the host brain, without
the aid of an external controller.
(d) Bioresorbable property: since the aim is to heal brain damage, the neuromorphic counterparts should be
regarded as a temporary aid in the process. Thus, they should be removable upon completion of brain
repair. While non-invasive micro-surgery techniques, such as high-intensity focused ultrasound, may per-
mit removal of mm-sized devices, this is not technically feasible in the case of ultrasmall (and, even more
so, intracellular) devices. Thus, particularly relevant to functional biohybrids is that the neuromorphic
counterparts should be bioresorbable.
Wireless operation. While autonomous operation is a key feature of the neuromorphic dust, the need for
patient monitoring, device fine-tuning, and hardware failure checks should not be underestimated in guaranteeing
the patient's safety. To this end, wireless access to the device is fundamental. Thus, dedicated integrated
circuits are required, which must be ultrasmall so as not to introduce bottlenecks in device miniaturization. As
stated above, advanced CMOS technology holds promise to enable these wireless features in neuromorphic
dust. Further, protocols tailored to energy efficient wireless communication are needed.
On-chip learning. Understanding spatiotemporal patterns is a key capability for addressing the evolving dynamics
of neuronal networks and reverse-engineer brain dysfunction. So far, these features have been achieved by
pre-programming and the use of a microcontroller [520]. Further advances must be made in order to achieve
the same level of performance through on-chip learning. This would make it possible to address the inter-individual
variability of the human brain while overcoming the drawbacks of trial-and-error (re)programming and of
the need for a wired controller.
Bioresorbable materials. The device materials must be fully biocompatible so as not to release cytotoxic compounds
into the patient's brain. While outstanding advances have been made in the biosensors field [521],
major efforts must still be made in the field of neuromorphic engineering, where device performance
depends strongly on materials. In this regard, organic materials may hold the key to meeting this challenge.
Acknowledgements
This work was funded by the European Union under the Horizon 2020 framework programme through the
FET-PROACTIVE project HERMES - Hybrid Enhanced Regenerative Medicine Systems, Grant Agreement No.
824164.
28.1. Status
Neuromorphic computing aims to mimic the brain to create energy-efficient devices capable of handling
complicated tasks. In this regard, analysis of multivariate time-series signals has led to advancements in differ-
ent application areas ranging from speech recognition and human activity classification to electronic health
evaluation. Exploration of this domain has led to unique bio-inspired commercial off-the-shelf device imple-
mentations in the form of fitness monitoring devices, sleep tracking gadgets, and EEG-based brain trauma
marker identifying devices. Even with this deluge of work over the years, it remains pivotal to evolve the research
direction in step with day-to-day needs in this sphere. The key idea behind the wealth of research
in these domains comes from the fact that it is very difficult to generalize human abilities and activities, and it
is even more difficult to create devices that can operate with an accuracy approaching human-level perception. This
is where contemporary machine learning and the more modern deep learning frameworks shine. The cur-
rent scenario of using automated devices for a variety of health-related applications requires that these devices
become more sensitive, specific, user-friendly, and lastly accurate for their intended tasks. This relates to further
advancements in algorithm construction and constraint-based design of implementable hardware architectures.
Current research in this area investigates DNN architectures for feature extraction, object detection, and
classification. DNN models utilize the capacity of CNNs, recurrent neural networks (RNNs), and even to
some extent fully connected layers to extract spatial features for time-series
assessment, which was previously calculated exhaustively via hand-engineered feature extraction techniques
coupled with simple classification algorithms. Along with this, RNNs and their advanced equivalents,
long short-term memory (LSTM) networks and gated recurrent units (GRUs), have also been integrated
into deep learning architectures to handle time-series signals. The idea behind this integration stems from
the fact that RNNs and LSTMs are modeled in such a way that they can keep track of previous instances of
the input data in order to make a prediction, which makes these architectures very effective for pattern and
dependency detection within the time-series data. The other aspect of developing these diverse DNN models
is to make them readily implementable in terms of hardware accelerators and therein lies the issue of hardware
constrained efficient designs. As a consequence, the computation and model size specifications of different
hardware-oriented approaches will result in the advancement of application-oriented software designs which
will, in turn, increase the reliability and efficiency of these embedded devices.
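The windowed-convolution pipeline described above (convolution, pooling, flattening, fully connected output) can be sketched in plain numpy. All weights are random and untrained, and the layer widths, kernel size and class count are illustrative; the point is only the data flow.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, kernels):
    """Valid-mode 1D convolution: output shape (n_filters, L - k + 1)."""
    k = kernels.shape[1]
    return np.array([[np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)]
                     for w in kernels])

def max_pool(fmap, size=2):
    """Non-overlapping max pooling along the time axis."""
    L = fmap.shape[1] // size * size
    return fmap[:, :L].reshape(fmap.shape[0], -1, size).max(axis=2)

window = rng.standard_normal(64)             # one windowed signal segment
kernels = rng.standard_normal((4, 5))        # 4 filters of width 5
fmap = np.maximum(conv1d(window, kernels), 0.0)   # convolution + ReLU
pooled = max_pool(fmap)                      # shrink feature map, keep features
flat = pooled.reshape(-1)                    # flatten for the dense layer
dense_w = rng.standard_normal((3, flat.size))
logits = dense_w @ flat                      # 3 output classes
```

The multi-input variant of figure 39 would simply concatenate a second feature vector onto `flat` before the dense layer.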
Figure 39. The deep learning framework takes in windowed images of the raw multimodal time-series signals as input to the
convolutional layers. Correspondingly, feature extraction is achieved in convolutional layers which results in a two-dimensional
feature map. The pooling layers contribute to reducing the feature map size while keeping the spatial features intact. This
two-dimensional pooled feature map is reshaped to have one-dimensional form so that it can be forwarded to the next fully
connected layers. Finally, the last fully connected layer will have neurons equal to the number of outputs as desired by the
application. Furthermore, with regard to multi-input models, supplementary information coming from a separate model can be
concatenated with the one-dimensional feature map to bolster the inference accuracy.
Thus, a critical challenge in hardware design is to maintain a high operating frequency and high energy
efficiency while keeping power consumption low.
Figure 40. This figure illustrates the trend of energy efficiency against model size of different deep learning architectures
deployed on the low-power Artix-7 100t FPGA platform, which has a memory of 1.65 Mb. The applications covered
detection [532], human activity recognition [523], stress detection [523], tongue drive systems [523] along with cough and
dyspnea detection as part of respiratory symptoms recognition [531]. Depending on the model size, the frameworks can be tiny
or large whereas the energy efficiency is dictated by the performance of the design. In the same vein, the plot also shows the device
inference accuracy for the different models ranging from 86% up to 98% which further justifies that these architectures are
specific enough for low power embedded deployment.
designer has to find the sweet spot between the accuracy of the model and the practicality of its size being
suitable for low-power embedded platforms, while also ensuring that the energy efficiency of the target device
is satisfactory. Figure 40 shows a comparison among different models with a variety of applications for their
model size, classification/detection accuracy, and energy efficiency which establishes that depending on the
application, deep learning models can fit on low-power embedded devices with standard performance. Also,
a modification to these frameworks can take in additional information in the form of vectors from a separate
model to enhance the overall accuracy of the model as demonstrated in figure 39.
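The size side of this trade-off can be checked with simple arithmetic. The sketch below estimates whether a model's weights fit in the 1.65 Mb on-chip memory quoted for the Artix-7 100t; the layer shapes, the 8-bit quantization, and the helper `model_size_bits` are invented for illustration:

```python
# Back-of-the-envelope check of the model-size side of the trade-off:
# do the weights fit in the 1.65 Mb of on-chip memory quoted for the
# Artix-7 100t? Layer shapes and 8-bit quantization are assumptions.

BUDGET_BITS = 1_650_000  # 1.65 Mb block RAM budget

def model_size_bits(layer_params, bits_per_weight=8):
    """Total weight storage in bits for the given per-layer parameter counts."""
    return sum(layer_params) * bits_per_weight

# hypothetical tiny CNN: two conv layers and two dense layers
layers = [3 * 3 * 1 * 8,   # conv1: 3x3 kernels, 1 -> 8 channels
          3 * 3 * 8 * 16,  # conv2: 3x3 kernels, 8 -> 16 channels
          16 * 64,         # dense1
          64 * 5]          # dense2 (5 output classes)

size = model_size_bits(layers)  # 20544 bits for this toy network
fits = size <= BUDGET_BITS      # comfortably within the budget
```

Scaling any layer up (more channels, wider dense layers) or raising the weight precision quickly consumes the budget, which is why the figure plots energy efficiency against model size rather than accuracy alone.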
Elisa Donati
University of Zurich and ETH Zurich, Switzerland
29.1. Status
Electromyography (EMG) is a neurophysiological technique for recording muscle activity. It is based on
the principle that whenever a muscle contracts, a burst of electrical activity propagates through the nearby tissue.
The source of the electrical signal in EMG is the summation of the action potentials of motor units (MUs) [533].
An MU is composed of the muscle fibers innervated by the axonal branches of a motor neuron, and its fibers are intermingled
with those of other MUs. The recorded electrical activity is linearly correlated with the strength of the contraction
and the number of recruited MUs. EMG signals can be acquired both invasively, using needle electrodes, and
superficially, by placing electrodes on the skin, called surface EMG (sEMG).
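The summation model described above can be illustrated with a toy simulation; the action potential waveform, the firing times, and the number of samples below are invented for the sketch:

```python
# Toy simulation of the sEMG generation model: the recorded signal is the
# summation of motor unit action potential (MUAP) trains. The MUAP waveform
# and firing instants here are invented, not physiological data.

MUAP = [0.0, 0.5, 1.0, -0.8, 0.2]  # stylized single action potential

def muap_train(firing_times, n_samples):
    """Superimpose one MU's action potentials at its firing instants."""
    signal = [0.0] * n_samples
    for t in firing_times:
        for k, v in enumerate(MUAP):
            if t + k < n_samples:
                signal[t + k] += v
    return signal

def semg(mu_firings, n_samples):
    """Surface EMG as the sum over all recruited motor units."""
    total = [0.0] * n_samples
    for firing_times in mu_firings:
        for i, v in enumerate(muap_train(firing_times, n_samples)):
            total[i] += v
    return total

# two recruited MUs with different firing patterns
sig = semg([[10, 40, 70], [25, 55]], n_samples=100)
```

Recruiting more MUs, or firing them faster, raises the amplitude of `sig`, matching the linear relation between the recorded activity, contraction strength, and the number of recruited MUs noted above.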
EMG signals are relevant in several clinical and biomedical applications. In particular, they
are extensively employed in myoelectric prosthetic control for classifying muscle movements. Wearable solutions
for this application already exist, but they leave a large margin for improvement, from increasing the
granularity of movement classification to reducing the computational resources needed and, consequently, the power
consumption.
Like any other signal, EMG is susceptible to various types of noise and interference, such as signal acquisition
noise and electrode displacement. Hence, a pre-processing phase is the first step towards proper
signal analysis; it involves filtering, amplification, compression, and feature extraction in both the time and
frequency domains [534]. The mainstream approach for movement classification is machine learning (ML),
which delivers algorithms with very high accuracy [535], although the high variability in test conditions and
the high computational load limit their deployment to controlled environments. These drawbacks can be
partially addressed by deep learning techniques, which generalize better to unseen conditions but
remain computationally expensive, requiring bulky, power-hungry hardware that hinders wearable solutions
[536].
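As a minimal illustration of the time-domain side of feature extraction, the sketch below computes three features commonly used before classification (mean absolute value, root mean square, and zero crossings) on one analysis window; the window values are invented:

```python
# Time-domain feature extraction on one EMG analysis window, of the kind
# fed to an ML classifier. Window values are toy data; real pipelines also
# apply filtering and extract frequency-domain features.
import math

def mav(window):
    """Mean absolute value, a standard EMG amplitude feature."""
    return sum(abs(x) for x in window) / len(window)

def rms(window):
    """Root mean square amplitude."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def zero_crossings(window):
    """Number of sign changes, a crude frequency-content indicator."""
    return sum(1 for a, b in zip(window, window[1:]) if a * b < 0)

window = [0.2, -0.1, 0.4, -0.3, 0.1, 0.0, -0.2, 0.5]  # toy samples
features = [mav(window), rms(window), zero_crossings(window)]
```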
Neuromorphic technologies offer a solution to this problem by processing data with low latency and low
power consumption, mimicking the key computational principles of the brain [3]. Compared to state-of-the-art
ML approaches, neuromorphic EMG processing shows a reduction of up to three orders of magnitude in
power consumption and latency [537–539], with a limited loss in accuracy (5%–7%) [540, 541].
New approaches have been proposed that directly extract motor neuron activity from EMG signals
as spike trains [543]. They represent a more natural and intuitive interface with muscles, but current
implementations limit themselves to processing the spikes with traditional ML techniques and do not consider
more appropriate frameworks such as SNNs.
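A minimal sketch of the kind of SNN building block such frameworks rely on is a discrete-time leaky integrate-and-fire (LIF) neuron; the leak, weight, and threshold values below are illustrative, not taken from the cited works:

```python
# Discrete-time leaky integrate-and-fire (LIF) neuron, a basic SNN element
# that processes spike trains directly. All parameter values are illustrative.

def lif(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    """Leaky integration of weighted input spikes; emit 1 at threshold, then reset."""
    v = 0.0                    # membrane potential
    out = []
    for s in input_spikes:     # s is 0 or 1 per time step
        v = leak * v + weight * s
        if v >= threshold:
            out.append(1)
            v = 0.0            # reset after the output spike
        else:
            out.append(0)
    return out

# a dense input burst drives the neuron above threshold; a sparse train does not
burst_out = lif([1, 1, 1, 1, 0, 0, 1, 0])
sparse_out = lif([1, 0, 0, 0, 1, 0, 0, 0])
```

The neuron thus acts as a leaky coincidence and rate detector on the incoming spike train, which is why spike-based representations of motor neuron activity map so naturally onto SNN hardware.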
of MU action potentials can be assessed with activation maps obtained from HD-EMG signals [543]. Nevertheless,
current implementations are still computationally expensive, and only recently has their real-time deployment
become possible. After the decomposition, the spike trains are translated and processed using
ML methods instead of the better-suited SNNs [546].
Designing neuromorphic systems able to extract and process motor neuron activity from EMG signals
will pave the way to a new class of wearable devices that can be miniaturized and interfaced directly with the
electrodes.
online processing, which is optimal for real-time closed-loop applications and less vulnerable to interference
caused either by humans or by the environment.
Smart electrodes. Another long-term game-changer would be a technological breakthrough allowing a single
electrode to record directly the activity of a single MU, removing the need for decomposition algorithms.
30.1. Status
Collaborative autonomous systems (CAS) (see figure 42) are entities that can cooperate among themselves
and with humans, with variable levels of human intervention (depending on the level of autonomy) in per-
forming complex tasks in unknown environments. Their behaviour is driven by the availability of perception,
communication, cognitive and motor skills and improved computational capabilities (on/off-board systems).
The high level of autonomy enables the execution of dependable actions under changing internal or external
conditions. Therefore, CAS are expected to be able to: (1) perceive and understand their own condition and
the environment they operate in; (2) dependably interact with the physical world despite sudden changes;
(3) intelligently evolve through learning and adaptation to unforeseen operational conditions; (4) self-decide
their actions based on their understanding of the environment.
Currently, CAS (e.g., collaborative robots, or cobots) show limited performance when accomplishing physical
interaction tasks in complex scenarios [552]. Recent studies have demonstrated that autonomous robots
can perform well the task they are programmed for, but they are limited in their ability to adapt to unexpected
situations [553] and to different levels of human–robot cooperation [552]. These limitations are mainly due to
the lack of generalization capabilities, i.e., cobots cannot transfer knowledge across multiple situations (environments,
tasks, and interactions). One of the most viable pathways to solve this issue is to build intelligent
autonomous cobots by incorporating AI-based methods into the control systems [554]. These bio-inspired
controllers [555] take a different perspective from classical control approaches, which require a deep
understanding of the mechanics of the interactions and of the intrinsic limitations of the systems
beforehand. The main current research directions [556] focus on understanding the biological working
principles of the central nervous system in order to build innovative neuromorphic computing algorithms
and hardware that will bring significant advances to this field; in particular, they will provide computational
efficiency and powerful control strategies for robust and adaptive behaviours.
In the next decades, there will be significant developments in CAS related to self-capabilities such as self-
inspection, -configuration, -adaptation, -healing, -optimization, -protection, and -assembly. This will be a
great enabler of systems acting in real-world unstructured scenarios, such as in remote applications (deep sea
or space), in hazardous situations (disasters), in healthcare interventions (assistive, rehabilitation, or diagnosis),
and in proximity to people.
Figure 42. Overall idea of a collaborative autonomous control system. The supervisor manages the entire system, observes and
analyses the whole situation and provides information to each agent to improve their autonomous actions and optimize the
operations.
suffer from an extremely high energy demand that is not sustainable, and they cannot be easily scaled.
Additionally, a limited number of processes can run simultaneously, and the speed of the response is still low.
Consequently, new neuromorphic architectures are the most promising alternative for addressing the increasing
demand for CAS capable of seamless interaction with human beings.
Section 5. Ethics
Like the development of other forms of AI, the development of neuromorphic technology may raise a
number of ethical questions [569, 570] (figure 43).
One issue concerns privacy and surveillance. The development of most forms of AI depends upon access
to data, and in so far as these data can be seen as private or personally identifiable, the question arises of
when it is (ethically) defensible to use such data. On the one hand, some argue that persons have a right to be
let alone and to exercise full control over information about themselves, so that any use of such data presupposes
fully informed consent. On the other hand, others recognize the importance of privacy but argue that it may
sometimes be outweighed by the fact that reliable applications for the good of everyone presuppose access to
high quality representative data [571].
Another issue concerns opacity. Many forms of AI support decision making based on complex patterns
extracted from huge data sets. Often, however, it will be impossible not only for the person who makes the
final decision but also for the developer to know what the system's recommendations are based on, and it is in
this sense that the system is said to be opaque. For some, such opacity does not matter as long as there are independent
ways of verifying that the system delivers an accurate result, but others argue that it is important that the system
is explainable [572]. In this way, a tension is often created between accuracy and transparency, and what the
right trade-off is may often depend upon the concrete context.
Opacity is closely connected with the question of bias since opacity may hide certain biases. There are
different forms of bias but in general, bias arises when automated AI decision support systems are based on
data that is not representative of all the individuals that the system supports decisions in relation to [573].
There are different opinions as to when the existence of bias in automated decision support systems poses a
serious problem. Some argue that ‘traditional’ unsupported human decision-making is biased, too, and that
the existence of bias in automated AI decision support systems only poses a serious problem if the bias is more
significant than the pre-existing human bias. Others argue that features such as opacity or the lack of suitable
institutional checks and balances may tend to make the existence of bias in automated decision support systems
more problematic than ‘ordinary’ human bias [574]. A separate problem is created by the fact that it sometimes
will be easier to identify and quantify bias in AI systems than in humans, making a direct comparison more
difficult.
The development of forms of AI based on neuromorphic technology also raises questions about the manipulation
of human behavior, online as well as offline. One context in which such questions arise is advertising and
political campaigning, where AI can generate deep knowledge about individuals' preferences and beliefs, which
may be used to influence them in a way that escapes the individuals' own awareness. Similar issues may also
arise in connection with other forms of AI such as chatbots and care or sex robots that simulate certain forms of
human behavior without being ‘the real deal’. Even if persons develop some form of emotional attachment to
such systems, some argue that there is something deeply problematic and deceptive about such systems [575],
while others point out that there is nothing intrinsically wrong with such systems as long as they help satisfy
human desires [576]. If, as described in section 4.1, neuromorphic technologies will make it possible for robots
to move from extremely controlled environments to spaces where they collaborate with humans and exhibit
continuous learning and adaptation, it may make such questions more pressing.
A distinct set of issues is raised by the possibility of developing AI systems that do not just support human
decision making but operate in a more or less autonomous way such as ‘self-driving’ cars and autonomous
weapons. One question that such systems raise concerns the way in which they should be programmed in order
to make sure that they make ethically justifiable decisions (in most foreseeable situations). Another question
concerns how responsibility and risk should be distributed in the complex social system they are a part of. If, as
described in section 4.2, neuromorphic engineering offers the kind of technological leaps required for achiev-
ing truly autonomous vehicles, the development of neuromorphic technologies may make such questions more
pressing than at present.
A distinct issue relates to sustainability. As pointed out in the introduction, 5%–15% of the world’s energy
is spent on some form of data manipulation (transmission or processing), and as long as a substantial amount of
that energy comes from sources that contribute to climate change through the emission of greenhouse gases,
Figure 43. Some of the most salient ethical issues raised by the development of neuromorphic technology.
it raises the question of whether all that data manipulation is really necessary or could be done in a more
energy-efficient way. And in so far as neuromorphic technologies, as pointed out in, e.g., section 28, show a
reduction of up to three orders of magnitude in power consumption compared to state-of-the-art ML
approaches, this seems to provide robust ethical support for their development.
As mentioned at the beginning of this section, the ethical questions raised by the development of neuromorphic
technology are not unique to this technology but relate to the development of AI as such. The
successful development of neuromorphic technology may make some of the issues more pressing, and a central
task for future work on the ethics of neuromorphic technology will, accordingly, be to inquire into the exact
way in which the issues are raised by the development of neuromorphic technology. But the existing forms of
AI already raise many of the questions described so far. Besides these questions, however, the development of
neuromorphic technology (as well as other forms of AI) may also raise a number of questions that are more
speculative, either because it is unclear whether the development will take place, when it will happen, or what
the precise consequences will be.
One such issue has to do with automation and unemployment. AI systems have already replaced humans in
certain job functions (e.g., customer service), but it has been suggested that most job functions will be affected
by the development of AI at one point [577]. Because such a development has the potential to disrupt the social
order (e.g., through mass unemployment) it raises an important ethical (and political) question as to how AI
systems should be introduced into society [578].
Another more speculative issue relates to artificial moral agents and so-called robot rights. If the develop-
ment of neuromorphic (and other) forms of AI leads to the creation of systems that possess some or all the
traits that make us ascribe rights and responsibilities to humans, it may thus raise a question about whether
such rights and responsibilities should be ascribed to artificially intelligent systems [579, 580].
Thirdly, some have also pointed out that the development of neuromorphic (and other) forms of AI may
create issues related to the so-called singularity. The idea is that the technological development may lead to
the creation of general forms of AI that surpass the human level of intelligence and then begin to control the
further development of AI in ways that may not be in the interests of the human species and perhaps even
threaten its very existence. Whether such a scenario is likely has been questioned [581], but some argue that
even a slight risk should be taken seriously given the potentially devastating consequences [582].
No matter what one thinks is the right answer to the ethical questions raised by the development of neu-
romorphic technology, it is, finally, worth noticing that it still leaves an important practical question: how
best to make sure that the actual development and implementation of neuromorphic technology will take
place in an ethically defensible way. For some questions, governmental regulation may be the best means. For
others, the best solution may be to trust the community of developers to make the right, value-based decisions
when designing systems, while some questions, perhaps, should be left to the enlightened citizenry. In
the end, however, it will probably be up to an inquiry into the concrete situation to decide when one or the
other approach—or combination of approaches—provides the best means of securing an ethically defensible
development of neuromorphic technology.
The data that support the findings of this study are available upon reasonable request from the authors.
References
[1] Vidal J 2017 ‘Tsunami of Data’ Could Consume One Fifth of Global Electricity by 2025 (Climate Home News)
[2] Mead C 1990 Neuromorphic electronic systems Proc. IEEE 78 1629–36
[3] Chicca E, Stefanini F, Bartolozzi C and Indiveri G 2014 Neuromorphic electronic circuits for building autonomous cognitive
systems Proc. IEEE 102 1367–88
[4] Chicca E and Indiveri G 2020 A recipe for creating ideal hybrid memristive-CMOS neuromorphic processing systems Appl. Phys.
Lett. 116 120501
[5] LeCun Y, Bengio Y and Hinton G 2015 Deep learning Nature 521 436–44
[6] Maass W 1997 Networks of spiking neurons: the third generation of neural network models Neural Netw. 10 1659–71
[7] Yole 2021 Neuromorphic computing and sensing 2021 Yole Reports www.yole.fr
[8] Zidan M A, Strachan J P and Lu W D 2018 The future of electronics based on memristive systems Nat. Electron. 1 22–9
[9] Chua L 1971 Memristor-the missing circuit element IEEE Trans. Circuit Theory 18 507–19
[10] Strukov D B, Snider G S, Stewart D R and Williams R S 2008 The missing memristor found Nature 453 80–3
[11] Yang J J, Strukov D B and Stewart D R 2013 Memristive devices for computing Nat. Nanotechnol. 8 13–24
[12] Dittmann R and Strachan J P 2019 Redox-based memristive devices for new computing paradigm APL Mater. 7 110903
[13] Ielmini D and Wong H-S P 2018 In-memory computing with resistive switching devices Nat. Electron. 1 333–43
[14] Li C et al 2018 Analogue signal and image processing with large memristor crossbars Nat. Electron. 1 52–9
[15] Ielmini D, Wang Z and Liu Y 2021 Brain-inspired computing via memory device physics APL Mater. 9 050702
[16] Milano G, Pedretti G, Fretto M, Boarino L, Benfenati F, Ielmini D, Valov I and Ricciardi C 2020 Brain-inspired structural
plasticity through reweighting and rewiring in multi-terminal self-organizing memristive nanowire networks Adv. Intell. Syst. 2
2000096
[17] Shastri B J, Tait A N, Ferreira de Lima T, Pernice W H P, Bhaskaran H, Wright C D and Prucnal P R 2021 Photonics for artificial
intelligence and neuromorphic computing Nat. Photon. 15 102–14
[18] Marković D and Grollier J 2020 Quantum neuromorphic computing Appl. Phys. Lett. 117 150501
[19] Thorpe S, Fize D and Marlot C 1996 Speed of processing in the human visual system Nature 381 520–2
[20] Le Gallo M and Sebastian A 2020 An overview of phase-change memory device physics J. Phys. D: Appl. Phys. 53 213002
[21] Sebastian A, Le Gallo M, Khaddam-Aljameh R and Eleftheriou E 2020 Memory devices and applications for in-memory
computing Nat. Nanotechnol. 15 529–44
[22] Giannopoulos I, Singh A, Le Gallo M, Jonnalagadda V P, Hamdioui S and Sebastian A 2020 Adv. Intell. Syst. 2 2000141
[23] Karunaratne G, Le Gallo M, Cherubini G, Benini L, Rahimi A and Sebastian A 2020 In-memory hyperdimensional computing
Nat. Electron. 3 327–37
[24] Joshi V et al 2020 Accurate deep neural network inference using computational phase-change memory Nat. Commun. 11 2473
[25] Tsai H, Ambrogio S, Narayanan P, Shelby R M and Burr G W 2018 Recent progress in analog memory-based accelerators for
deep learning J. Phys. D: Appl. Phys. 51 283001
[26] Nandakumar S R et al 2020 Mixed-precision deep learning based on computational memory Front. Neurosci. 14 406
[27] Sebastian A, Le Gallo M, Burr G W, Kim S, BrightSky M and Eleftheriou E 2018 Tutorial: brain-inspired computing using
phase-change memory devices J. Appl. Phys. 124 111101
[28] Ambrogio S, Ciocchini N, Laudato M, Milo V, Pirovano A, Fantini P and Ielmini D 2016 Unsupervised learning by spike timing
dependent plasticity in phase change memory (PCM) synapses Front. Neurosci. 10 56
[29] Tuma T, Pantazi A, Le Gallo M, Sebastian A and Eleftheriou E 2016 Stochastic phase-change neurons Nat. Nanotechnol. 11 693
[30] Zuliani P, Conte A and Cappelletti P 2019 The PCM way for embedded non volatile memories applications 2019 Symp. VLSI
Technology T192–3
[31] Boniardi M, Ielmini D, Lavizzari S, Lacaita A L, Redael A and Pirovano A 2010 Statistics of resistance drift due to structural
relaxation in phase-change memory arrays IEEE Trans. Electron Devices 57 2690–6
[32] Crespi L et al 2015 Modeling of atomic migration phenomena in phase change memory devices 2015 IEEE Int. Memory Workshop
(IMW) 1–4
[33] BrightSky M et al 2015 Crystalline-as-deposited ALD phase change material confined PCM cell for high density storage class
memory 2015 IEEE Int. Electron Devices Meeting (IEDM) 13.6.–4
[34] Kau D et al 2009 A stackable cross point phase change memory 2009 IEEE Int. Electron Devices Meeting (IEDM) 127.1.–4
[35] Arnaud F et al 2020 High density embedded PCM cell in 28 nm FDSOI technology for automotive micro-controller applications
2020 IEEE Int. Electron Devices Meeting (IEDM) 124.2.–4
[36] Redaelli A, Pellizer F and Pirovano A 2012 Phase change memory device for multibit storage EP Patent 2034536
[37] Koelmans W W, Sebastian A, Jonnalagadda V P, Krebs D, Dellmann L and Eleftheriou E 2015 Projected phase-change memory
devices Nat. Commun. 6 8181
[38] Giannopoulos I et al 2018 8 bit precision in-memory multiplication with projected phase-change memory 2018 IEEE Int.
Electron Devices Meeting (IEDM) 127.7.–4
[39] Ding K et al 2019 Phase-change heterostructure enables ultralow noise and drift for memory operation Science 366 210–5
[40] Salinga M et al 2018 Monatomic phase change memory Nat. Mater. 17 681–5
[41] Boybat I et al 2018 Neuromorphic computing with multimemristive synapses Nat. Commun. 9 2514
[42] Rios C, Youngblood N, Cheng Z, Le Gallo M, Pernice W H, Wright C D, Sebastian A and Bhaskaran H 2019 In-memory
computing on a photonic platform Sci. Adv. 5 eaau5759
[43] Feldmann J et al 2021 Parallel convolutional processing using an integrated photonic tensor core Nature 589 52–8
[44] Cheng Z, Milne T, Salter P, Kim J S, Humphrey S, Booth M and Bhaskaran H 2021 Antimony thin films demonstrate
programmable optical nonlinearity Sci. Adv. 7 eabd7097
[45] Valasek J 1921 Piezo-electric and allied phenomena in Rochelle salt Phys. Rev. 17 475
[46] Buck D A 1952 Ferroelectrics for digital information storage and switching MIT Digital Computer Laboratory Report
[47] Bondurant D 1990 Ferroelectronic RAM memory family for critical data storage Ferroelectrics 112 273–82
[48] Mikolajick T, Schroeder U and Slesazeck S 2020 The past, the present, and the future of ferroelectric memories IEEE Trans.
Electron Devices 67 1434–43
[49] Ross I 1957 Semiconductive translating device USA Patent 2791760A
[50] Zhang X, Takahashi M, Takeuchi K and Sakai S 2012 64 kbit ferroelectric-gate-transistor-integrated NAND flash memory with
7.5 V program and long data retention Japan. J. Appl. Phys. 51 04DD01
[51] Esaki L, Laibowitz R and Stiles P 1971 Polar Switch (IBM Technical Disclosure Bulletin) vol 13 p 2161
[52] Garcia V, Fusil S, Bouzehouane K, Enouz-Vedrenne S, Mathur N, Barthelemy A and Bibes M 2009 Giant tunnel electroresistance
for non-destructive readout of ferroelectric states Nature 460 81–4
[53] Böscke T, Müller J, Bräuhaus D, Schröder U and Böttger U 2011 Ferroelectricity in hafnium oxide thin films Appl. Phys. Lett. 99
102903
[54] Max B, Hoffmann M, Mulaosmanovic H, Slesazeck S and Mikolajick T 2020 Hafnia-based double-layer ferroelectric tunnel
junctions as artificial synapses for neuromorphic computing ACS Appl. Electron. Mater. 2 4023–33
[55] Mulaosmanovic H, Chicca E, Bertele M, Mikolajick T and Slesazeck S 2018 Mimicking biological neurons with a nanoscale
ferroelectric transistor Nanoscale 10 21755–63
[56] Beyer S et al 2020 FeFET: a versatile CMOS compatible device with game-changing potential IEEE Int. Memory Workshop (IMW)
1–4
[57] Sally A 2004 Reflections on the memory wall Proc. Conf. Comput. Front. p 162
[58] Okuno J et al 2020 SoC compatible 1T1C FeRAM memory array based on ferroelectric Hf0.5Zr0.5O2 Symp. VLSI Technology 1–4
[59] Pešić M et al 2016 Physical mechanisms behind the field-cycling behavior of HfO2 based ferroelectric capacitors Adv. Funct.
Mater. 26 4601–12
[60] Slesazeck S and Mikolajick T 2019 Nanoscale resistive switching memory devices: a review Nanotechnology 30 352003
[61] Deng S, Jiang Z, Dutta S, Ye H, Chakraborty W, Kurinec S, Datta S and Ni K 2020 Examination of the interplay between
polarization switching and charge trapping in ferroelectric FET Int. Electron Device Meeting (IEDM) (San Francisco)
[62] Wei Y et al 2018 A rhombohedral ferroelectric phase in epitaxially strained Hf0.5Zr0.5O2 thin films Nat. Mater. 17 1095–100
[63] Pesic M, Schroeder U, Slesazeck S and Mikolajick T 2018 Comparative study of reliability of ferroelectric and anti-ferroelectric
memories IEEE Trans. Device Mater. Relib. 18 154–62
[64] Fichtner S, Wolff N, Lofink F, Kienle L and Wagner B 2019 AlScN: a III–V semiconductor based ferroelectric J. Appl. Phys. 125
114103
[65] Spiga S, Sebastian A, Querlioz D and Rajendran B 2020 Memristive Device for Brain-Inspired Computing: From Materials, Devices,
and Circuits to Applications-Computational Memory, Deep Learning and Spiking Neural Networks (Oxford: Woodhead Publishing)
[66] Waser R, Dittmann R, Staikov G and Szot K 2009 Redox-based resistive switching memories—nanoionic mechanisms,
prospects, and challenges Adv. Mater. 21 2632–63
[67] Brivio S, Ly D R B, Vianello E and Spiga S 2021 Non-linear memristive synaptic dynamics for efficient unsupervised learning in
spiking neural networks Front. Neurosci. 15 580909
[68] Payvand M, Demirag Y, Dalgaty T, Vianello E and Indiveri G 2020 Analog weight updates with compliance current modulation
of binary ReRAMs for on-chip learning pp 1–5
[69] Covi E, George R, Frascaroli J, Brivio S, Mayr C, Mostafa H, Indiveri G and Spiga S 2018 Spike-driven threshold-based learning
with memristive synapses and neuromorphic silicon neurons J. Phys. D: Appl. Phys. 51 344003
[70] Zhang W, Gao B, Tang J, Li X, Wu W, Qian H and Wu H 2019 Analog-type resistive switching devices for neuromorphic
computing Phys. Status Solidi RRL 13 1900204
[71] Covi E, Brivio S, Serb A, Prodromakis T, Fanciulli M and Spiga S 2016 Analog memristive synapse in spiking networks
implementing unsupervised learning Front. Neurosci. 10 6–13
[72] Mochida R et al 2018 A 4M synapses integrated analog ReRAM based 66.5 TOPS/W neural-network processor with cell current
controlled writing and flexible network architecture 2018 IEEE Symp. VLSI Technology (Honolulu, HI, USA 18–22 June 2018)
[73] Valentian A, Rummens F, Vianello E, Mesquida T, de Boissac C L M, Bichler O and Reita C 2019 Fully integrated spiking neural
network with analog neurons and RRAM synapses 2019 IEEE Int. Electron Devices Meeting (IEDM)
[74] Yao P, Wu H, Gao B, Tang J, Zhang Q, Zhang W, Yang J J and Qian H 2020 Fully hardware-implemented memristor
convolutional neural network Nature 577 641–6
[75] You T et al 2015 Engineering interface-type resistive switching in BiFeO3 thin film switches by Ti implantation of bottom
electrodes Sci. Rep. 5 18623
[76] Moon K, Fumarola A, Sidler S, Jang J, Narayanan P, Shelby R M, Burr G W and Hwang H 2018 Bidirectional non-filamentary
RRAM as an analog neuromorphic synapse: I. Al/Mo/Pr0.7 Ca0.3 MnO3 material improvements and device measurements IEEE J.
Electron Devices Soc. 6 146–55
[77] Gao L, Wang I-T, Chen P-Y, Vrudhula S, Seo J-S, Cao Y, Hou T-H and Yu S 2015 Fully parallel write/read in resistive synaptic
array for accelerating on-chip learning Nanotechnology 26 455204
[78] Govoreanu B et al 2015 a-VMCO: a novel forming-free, selfrectifying, analog memory cell IEEE 2015 Symp. VLSI Technology
T132–3
[79] Kim S et al 2019 Metal-oxide based, CMOS compatible ECRAM for deep learning accelerator 2019 IEEE Int. Electron Device
Meeting (San Francisco, USA 07 November 2019)
[80] Li Y et al 2020 Filament-free bulk resistive memory enables deterministic analogue switching Adv. Mater. 32 2003984
[81] Bengel C, Siemon A, Cüppers F, Hoffmann-Eifert S, Hardtdegen A, von Witzleben M, Hellmich L, Waser R and Menzel S 2020
Variability-aware modeling of filamentary oxide based bipolar resistive switching cells using SPICE level compact models
IEEE Trans. Circuits Syst. I 67 4618–30
[82] Puglisi F M, Larcher L, Padovani A and Pavan P 2016 Bipolar resistive RAM based on HfO2 : physics, compact modeling, and
variability control IEEE J. Emerg. Sel. Top. Circuits Syst. 6 171–84
[83] Zhao M, Gao B, Tang J, Qian H and Wu H 2020 Reliability of analog resistive switching memory for neuromorphic computing
Appl. Phys. Rev. 7 011301
[84] Cueppers F, Menzel S, Bengel C, Hardtdegen A, von Witzleben M, Boettger U, Waser R and Hoffmann-Eifert S 2019 Exploiting
the switching dynamics of HfO2 -based ReRAM devices for reliable analog memristive behavior APL Mater. 7 091105
[85] Menzel S, Böttger U, Wimmer M and Salinga M 2015 Physics of the switching kinetics in resistive memories Adv. Funct. Mater.
25 6306–25
[86] Kozicki M N and West W C 1998 Programmable metallization cell structure and method of making same US Patent 5761115A
[87] Swaroop B, West W C, Martinez G, Kozicki M N and Akers L A 1998 Programmable current mode Hebbian learning neural
network using programmable metallization cell IEEE Int. Symp. Circuits and Systems (ISCAS) 33–6
[88] Hasegawa T, Terabe K, Tsuruoka T and Aono M 2012 Atomic switch: atom/ion movement controlled devices for beyond von
Neumann computers Adv. Mater. 24 252–67
[89] Valov I, Linn E, Tappertzhofen S, Schmelzer S, van den Hurk J, Lentz F and Waser R 2013 Nanobatteries in redox-based resistive
switches require extension of memristor theory Nat. Commun. 4 1771
[90] Kozicki M N, Mitkova M and Valov I 2016 Electrochemical Metallization Memories (New York: Wiley) pp 483–514
[91] Raeis-Hosseini N and Lee J-S 2017 Resistive switching memory using biomaterials J. Electroceram. 39 223–38
[92] Gao S, Yi X, Shang J, Liu G and Li R-W 2019 Organic and hybrid resistive switching materials and devices Chem. Soc. Rev. 48
1531–65
[93] Midya R et al 2017 Anatomy of Ag/Hafnia-based selectors with 1010 nonlinearity Adv. Mater. 29 1604457
[94] Gonzalez-Velo Y, Barnaby H J and Kozicki M N 2017 Review of radiation effects on ReRAM devices and technology Semicond.
Sci. Technol. 32 083002
[95] Chen W, Chamele N, Gonzalez-Velo Y, Barnaby H J and Kozicki M N 2017 Low-temperature characterization of Cu–Cu:
silica-based programmable metallization cell IEEE Electron Device Lett. 38 1244–7
[96] Wang Z et al 2016 Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing Nat. Mater. 16 101–8
[97] Ohno T, Hasegawa T, Tsuruoka T, Terabe K, Gimzewski J K and Aono M 2011 Short-term plasticity and long-term potentiation
mimicked in single inorganic synapses Nat. Mater. 10 591–5
[98] Cha J-H, Yang S Y, Oh J, Choi S, Park S, Jang B C, Ahn W and Choi S-Y 2020 Conductive-bridging random-access memories for
emerging neuromorphic computing Nanoscale 12 14339–68
[99] Valov I, Waser R, Jameson J R and Kozicki M N 2011 Electrochemical metallization memories-fundamentals, applications,
prospects Nanotechnology 22 254003
[100] Valov I 2014 Redox-based resistive switching memories (ReRAMs): electrochemical systems at the atomic scale
ChemElectroChem 1 26–36
[101] Valov I 2017 Interfacial interactions and their impact on redox-based resistive switching memories (ReRAMs) Semicond. Sci.
Technol. 32 093006
[102] Lübben M, Cüppers F, Mohr J, von Witzleben M, Breuer U, Waser R, Neumann C and Valov I 2020 Design of defect-chemical
properties and device performance in memristive systems Sci. Adv. 6 eaaz9079
[103] Belmonte A, Celano U, Degraeve R, Fantini A, Redolfi A, Vandervorst W, Houssa M, Jurczak M and Goux L 2015
Operating-current dependence of the Cu-mobility requirements in oxide-based conductive-bridge RAM IEEE Electron Device
Lett. 36 775–7
[104] Yeon H et al 2020 Alloying conducting channels for reliable neuromorphic computing Nat. Nanotechnol. 15 574–9
Neuromorph. Comput. Eng. 2 (2022) 022501 Roadmap
[105] Valov I and Tsuruoka T 2018 Effects of moisture and redox reactions in VCM and ECM resistive switching memories J. Phys. D:
Appl. Phys. 51 413001
[106] Kandel E R, Schwartz J H, Jessell T M, Siegelbaum S A and Hudspeth A J 2013 Principles of Neural Science 5th edn (New York:
McGraw-Hill)
[107] Xia Q and Yang J J 2019 Memristive crossbar arrays for brain-inspired computing Nat. Mater. 18 309–23
[108] Stieg A Z, Avizienis A V, Sillin H O, Martin-Olmos C, Aono M and Gimzewski J K 2012 Emergent criticality in complex Turing
B-type atomic switch networks Adv. Mater. 24 286–93
[109] Stieg A Z, Avizienis A V, Sillin H O, Martin-Olmos C, Lam M-L, Aono M and Gimzewski J K 2014 Self-organized atomic switch
networks Japan. J. Appl. Phys. 53 01AA02
[110] Diaz-Alvarez A, Higuchi R, Sanz-Leon P, Marcus I, Shingaya Y, Stieg A Z, Gimzewski J K, Kuncic Z and Nakayama T 2019
Emergent dynamics of neuromorphic nanowire networks Sci. Rep. 9 14920
[111] Milano G, Porro S, Valov I and Ricciardi C 2019 Recent developments and perspectives for memristive devices based on metal
oxide nanowires Adv. Electron. Mater. 5 1800909
[112] Sillin H O, Aguilera R, Shieh H-H, Avizienis A V, Aono M, Stieg A Z and Gimzewski J K 2013 A theoretical and experimental
study of neuromorphic atomic switch networks for reservoir computing Nanotechnology 24 384004
[113] Aono M and Ariga K 2016 The way to nanoarchitectonics and the way of nanoarchitectonics Adv. Mater. 28 989–92
[114] Loeffler A et al 2020 Topological properties of neuromorphic nanowire networks Front. Neurosci. 14 184
[115] Manning H G et al 2018 Emergence of winner-takes-all connectivity paths in random nanowire networks Nat. Commun. 9 3219
[116] Li Q et al 2020 Dynamic electrical pathway tuning in neuromorphic nanowire networks Adv. Funct. Mater. 30 2003679
[117] Li Q, Diaz-Alvarez A, Tang D, Higuchi R, Shingaya Y and Nakayama T 2020 Sleep-dependent memory consolidation in a
neuromorphic nanowire network ACS Appl. Mater. Interfaces 12 50573–80
[118] Mallinson J B, Shirai S, Acharya S K, Bose S K, Galli E and Brown S A 2019 Avalanches and criticality in self-organized nanoscale
networks Sci. Adv. 5 eaaw8438
[119] Pike M D et al 2020 Atomic scale dynamics drive brain-like avalanches in percolating nanostructured networks Nano Lett. 20
3935–42
[120] Fu K et al 2020 Reservoir computing with neuromemristive nanowire networks 2020 Int. Joint Conf. Neural Networks (IJCNN)
1–8
[121] Milano G et al 2021 In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire
networks Nat. Mater. 21 195–202
[122] Usami Y et al 2021 In-materio reservoir computing in a sulfonated polyaniline network Adv. Mater. 33 2102688
[123] Lilak S et al 2021 Spoken digit classification by in-materio reservoir computing with neuromorphic atomic switch networks
Front. Nanotechnol. 3 1–11
[124] Nirmalraj P N et al 2012 Manipulating connectivity and electrical conductivity in metallic nanowire networks Nano Lett. 12
5966–71
[125] Sannicolo T et al 2018 Electrical mapping of silver nanowire networks: a versatile tool for imaging network homogeneity and
degradation dynamics during failure ACS Nano 12 4648–59
[126] Milano G et al 2020 Mapping time-dependent conductivity of metallic nanowire networks by electrical resistance tomography
toward transparent conductive materials ACS Appl. Nano Mater. 3 11987–97
[127] Diederichsen K M, Brow R R and Stoykovich M P 2015 Percolating transport and the conductive scaling relationship in lamellar
block copolymers under confinement ACS Nano 9 2465–76
[128] Zhou F and Chai Y 2020 Near-sensor and in-sensor computing Nat. Electron. 3 664–71
[129] Mennel L, Symonowicz J, Wachter S, Polyushkin D K, Molina-Mendoza A J and Mueller T 2020 Ultrafast machine vision with
2D material neural network image sensors Nature 579 62–6
[130] Wang C-Y et al 2020 Gate-tunable van der Waals heterostructure for reconfigurable neural network vision sensor Sci. Adv. 6
eaba6173
[131] Sun L et al 2019 Self-selective van der Waals heterostructures for large scale memory array Nat. Commun. 10 3161
[132] Pan C et al 2020 Reconfigurable logic and neuromorphic circuits based on electrically tunable two-dimensional homojunctions
Nat. Electron. 3 383–90
[133] Liu C, Yan X, Song X, Ding S, Zhang D W and Zhou P 2018 A semi-floating gate memory based on van der Waals
heterostructures for quasi-non-volatile applications Nat. Nanotechnol. 13 404–10
[134] Tian H et al 2017 Emulating bilingual synaptic response using a junction-based artificial synaptic device ACS Nano 11 7156–63
[135] Sangwan V K, Jariwala D, Kim I S, Chen K-S, Marks T J, Lauhon L J and Hersam M C 2015 Gate-tunable memristive phenomena
mediated by grain boundaries in single-layer MoS2 Nat. Nanotechnol. 10 403
[136] Zhu J et al 2018 Ion gated synaptic transistors based on 2D van der Waals crystals with tunable diffusive dynamics Adv. Mater. 30
1800195
[137] Zhu X, Li D, Liang X and Lu W D 2019 Ionic modulation and ionic coupling effects in MoS2 devices for neuromorphic
computing Nat. Mater. 18 141–8
[138] Zhang F et al 2019 Electric-field induced structural transition in vertical MoTe2- and Mo1−xWxTe2-based resistive memories Nat.
Mater. 18 55–61
[139] Wang C Y, Wang C, Meng F, Wang P, Wang S, Liang S J and Miao F 2020 2D layered materials for memristive and neuromorphic
applications Adv. Electron. Mater. 6 1901107
[140] Wang M et al 2018 Robust memristors based on layered two-dimensional materials Nat. Electron. 1 130
[141] Shi Y et al 2018 Electronic synapses made of layered two-dimensional materials Nat. Electron. 1 458
[142] Chen S et al 2020 Wafer-scale integration of two-dimensional materials in high-density memristive crossbar arrays for artificial
neural networks Nat. Electron. 3 638–45
[143] Ge R, Wu X, Kim M, Shi J, Sonde S, Tao L, Zhang Y, Lee J C and Akinwande D 2018 Atomristor: nonvolatile resistance switching
in atomic sheets of transition metal dichalcogenides Nano Lett. 18 434–41
[144] Hus S M et al 2021 Observation of single-defect memristor in an MoS2 atomic sheet Nat. Nanotechnol. 16 58–62
[145] Liu C, Wang L, Qi J and Liu K 2020 Designed growth of large-size 2D single crystals Adv. Mater. 32 2000046
[146] Jang H, Liu C, Hinton H, Lee M H, Kim H, Seol M, Shin H J, Park S and Ham D 2020 An atomically thin optoelectronic machine
vision processor Adv. Mater. 32 2002431
[147] Wang S et al 2021 Networking retinomorphic sensor with memristive crossbar for brain-inspired visual perception Natl Sci. Rev.
8 nwaa172
[148] Zeng F, Li S, Yang J, Pan F and Guo D 2014 Learning processes modulated by the interface effects in a Ti/conducting polymer/Ti
resistive switching cell RSC Adv. 4 14822–8
[149] Ouyang J, Chu C-W, Szmanda C R, Ma L and Yang Y 2004 Programmable polymer thin film and non-volatile memory device
Nat. Mater. 3 918
[150] van de Burgt Y et al 2017 A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic
computing Nat. Mater. 16 414–8
[151] Giovannitti A et al 2016 Controlling the mode of operation of organic transistors through side-chain engineering Proc. Natl
Acad. Sci. USA 113 12017–22
[152] Giovannitti A et al 2020 Energetic control of redox-active polymers toward safe organic bioelectronic materials Adv. Mater. 32
1908047
[153] Go G-T, Lee Y, Seo D-G, Pei M, Lee W, Yang H and Lee T-W 2020 Achieving microstructure-controlled synaptic plasticity and
long-term retention in ion-gel-gated organic synaptic transistors Adv. Intell. Syst. 2 2000012
[154] Gkoupidenis P, Koutsouras D A and Malliaras G G 2017 Neuromorphic device architectures with global connectivity through
electrolyte gating Nat. Commun. 8 15448
[155] Kim Y et al 2018 A bioinspired flexible organic artificial afferent nerve Science 360 998–1003
[156] Fuller E J et al 2019 Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing Science
364 570–4
[157] DeFranco J A, Schmidt B S, Lipson M and Malliaras G G 2006 Photolithographic patterning of organic electronic materials Org.
Electron. 7 22–8
[158] Zakhidov A A, Lee J-K, DeFranco J A, Fong H H, Taylor P G, Chatzichristidi M, Ober C K and Malliaras G G 2011 Orthogonal
processing: a new strategy for organic electronics Chem. Sci. 2 1178–82
[159] Keene S T, Melianas A, van de Burgt Y and Salleo A 2018 Mechanisms for enhanced state retention and stability in redox-gated
organic neuromorphic devices Adv. Electron. Mater. 5 1800686
[160] Melianas A et al 2020 Temperature-resilient solid-state organic artificial synapses for neuromorphic computing Sci. Adv. 6
eabb2958
[161] Choi Y, Oh S, Qian C, Park J H and Cho J H 2020 Vertical organic synapse expandable to 3D crossbar array Nat. Commun. 11
4595
[162] Spyropoulos G D, Gelinas J N and Khodagholy D 2019 Internal ion-gated organic electrochemical transistor: a building block for
integrated bioelectronics Sci. Adv. 5 eaau7378
[163] Bischak C G et al 2020 A reversible structural phase transition by electrochemically-driven ion injection into a conjugated
polymer J. Am. Chem. Soc. 142 7434–42
[164] Lenz J, del Giudice F, Geisenhof F R, Winterer F and Weitz R T 2019 Vertical, electrolyte-gated organic transistors show
continuous operation in the MA cm−2 regime and artificial synaptic behaviour Nat. Nanotechnol. 14 579–85
[165] Shulaker M M et al 2015 Monolithic 3D integration of logic and memory: carbon nanotube FETs, resistive RAM, and silicon
FETs Technical Digest—Int. Electron Devices Meeting (IEDM) pp 27.4.1–27.4.4
[166] Gumyusenge A et al 2018 Semiconducting polymer blends that exhibit stable charge transport at high temperatures Science 362
1131–4
[167] Keene S T et al 2020 A biohybrid synapse with neurotransmitter-mediated plasticity Nat. Mater. 19 969–73
[168] Grollier J, Querlioz D, Camsari K Y, Everschor-Sitte K, Fukami S and Stiles M D 2020 Neuromorphic spintronics Nat. Electron. 3
360–70
[169] Ma Y, Miura S, Honjo H, Ikeda S, Hanyu T, Ohno H and Endoh T 2016 A 600 μW ultra-low-power associative processor for
image pattern recognition employing magnetic tunnel junction-based nonvolatile memories with autonomic intelligent
power-gating scheme Japan. J. Appl. Phys. 55 04EF15
[170] Vincent A F et al 2015 Spin-transfer torque magnetic memory as a stochastic memristive synapse for neuromorphic systems IEEE
Trans. Biomed. Circuits Syst. 9 166–74
[171] Lequeux S, Sampaio J, Cros V, Yakushiji K, Fukushima A, Matsumoto R, Kubota H, Yuasa S and Grollier J 2016 A magnetic
synapse: multilevel spin-torque memristor with perpendicular anisotropy Sci. Rep. 6 31510
[172] Mansueto M et al 2019 Realizing an isotropically coercive magnetic layer for memristive applications by analogy to dry friction
Phys. Rev. Appl. 12 044029
[173] Fukami S, Zhang C, DuttaGupta S, Kurenkov A and Ohno H 2016 Magnetization switching by spin–orbit torque in an
antiferromagnet–ferromagnet bilayer system Nat. Mater. 15 535–41
[174] Borders W A, Akima H, Fukami S, Moriya S, Kurihara S, Horio Y, Sato S and Ohno H 2016 Analogue spin–orbit torque device
for artificial-neural-network-based associative memory operation Appl. Phys. Express 10 013007
[175] Torrejon J et al 2017 Neuromorphic computing with nanoscale spintronic oscillators Nature 547 428–31
[176] Romera M et al 2018 Vowel recognition with four coupled spin-torque nano-oscillators Nature 563 230
[177] Mizrahi A, Hirtzlin T, Fukushima A, Kubota H, Yuasa S, Grollier J and Querlioz D 2018 Neural-like computing with populations
of superparamagnetic basis functions Nat. Commun. 9 1533
[178] Daniels M W, Madhavan A, Talatchian P, Mizrahi A and Stiles M D 2020 Energy-efficient stochastic computing with
superparamagnetic tunnel junctions Phys. Rev. Appl. 13 034016
[179] Borders W A, Pervaiz A Z, Fukami S, Camsari K Y, Ohno H and Datta S 2019 Integer factorization using stochastic magnetic
tunnel junctions Nature 573 390–3
[180] Yuasa S, Nagahama T, Fukushima A, Suzuki Y and Ando K 2004 Giant room-temperature magnetoresistance in single-crystal
Fe/MgO/Fe magnetic tunnel junctions Nat. Mater. 3 868–71
[181] Parkin S S P, Kaiser C, Panchula A, Rice P M, Hughes B, Samant M and Yang S-H 2004 Giant tunnelling magnetoresistance at
room temperature with MgO (100) tunnel barriers Nat. Mater. 3 862–7
[182] Zahedinejad M, Fulara H, Khymyn R, Houshang A, Fukami S, Kanai S, Ohno H and Åkerman J 2020 Memristive control of
mutual SHNO synchronization for neuromorphic computing (arXiv:2009.06594)
[183] Pinna D, Bourianoff G and Everschor-Sitte K 2020 Reservoir computing with random skyrmion textures Phys. Rev. Appl. 14
054020
[184] Fernández-Pacheco A, Streubel R, Fruchart O, Hertel R, Fischer P and Cowburn R 2017 Three-dimensional nanomagnetism Nat.
Commun. 8 15756
[185] Papp A, Porod W and Csaba G 2020 Nanoscale neural network using non-linear spin-wave interference (arXiv:2012.04594)
[186] Khymyn R, Lisenkov I, Voorheis J, Sulymenko O, Prokopenko O, Tiberkevich V, Akerman J and Slavin A 2018 Ultra-fast artificial
neuron: generation of picosecond-duration spikes in a current-driven antiferromagnetic auto-oscillator Sci. Rep. 8 15727
[187] Zázvorka J et al 2019 Thermal skyrmion diffusion used in a reshuffler device Nat. Nanotechnol. 14 658–61
[188] Wang Z, Wu H, Burr G W, Hwang C S, Wang K L, Xia Q and Yang J J 2020 Resistive switching materials for information
processing Nat. Rev. Mater. 5 173–95
[189] Pi S, Li C, Jiang H, Xia W, Xin H, Yang J J and Xia Q 2019 Memristor crossbar arrays with 6 nm half-pitch and 2 nm critical
dimension Nat. Nanotechnol. 14 35–9
[190] Lin P et al 2020 Three-dimensional memristor circuits as complex neural networks Nat. Electron. 3 225–32
[191] Prezioso M, Merrikh-Bayat F, Hoskins B D, Adam G C, Likharev K K and Strukov D B 2015 Training and operation of an
integrated neuromorphic network based on metal-oxide memristors Nature 521 61–4
[192] Yao P et al 2017 Face classification using electronic synapses Nat. Commun. 8 15199
[193] Li C et al 2018 Efficient and self-adaptive in situ learning in multilayer memristor neural networks Nat. Commun. 9 2385
[194] Li C et al 2019 Long short-term memory networks in memristor crossbar arrays Nat. Mach. Intell. 1 49–57
[195] Liu Q et al 2020 33.2 A fully integrated analog ReRAM based 78.4 TOPS/W compute-in-memory chip with fully parallel MAC
computing 2020 IEEE Int. Solid-State Circuits Conf. (ISSCC) 500–2
[196] Wan W et al 2020 33.1 A 74 TMACS/W CMOS-RRAM neurosynaptic core with dynamically reconfigurable dataflow and in situ
transposable weights for probabilistic graphical models 2020 IEEE Int. Solid-State Circuits Conf. (ISSCC) 498–500
[197] Xue C-X et al 2021 A CMOS-integrated compute-in-memory macro based on resistive random-access memory for AI edge
devices Nat. Electron. 4 81–90
[198] Wang W, Song W, Yao P, Li Y, Van Nostrand J, Qiu Q, Ielmini D and Yang J J 2020 Integration and co-design of memristive
devices and algorithms for artificial intelligence iScience 23 101809
[199] Li W, Xu P, Zhao Y, Li H, Xie Y and Lin Y 2020 TIMELY: pushing data movements and interfaces in PIM accelerators towards
local and in time domain 2020 ACM/IEEE 47th Annual Int. Symp. Computer Architecture (ISCA) 832–45
[200] Oh S, Shi Y, del Valle J, Salev P, Lu Y, Huang Z, Kalcheim Y, Schuller I K and Kuzum D 2021 Energy-efficient Mott
activation neuron for full-hardware implementation of neural networks Nat. Nanotechnol. 16 680–7
[201] Zhang X et al 2019 Experimental demonstration of conversion-based SNNs with 1T1R Mott neurons for neuromorphic
inference 2019 IEEE Int. Electron Devices Meeting (IEDM) pp 6–7
[202] Davies M et al 2018 Loihi: a neuromorphic manycore processor with on-chip learning IEEE Micro 38 82–99
[203] Mayr C, Hoeppner S and Furber S 2019 SpiNNaker 2: a 10 million core processor system for brain simulation and machine
learning (arXiv:1911.02385)
[204] Furber S and Bogdan P (ed) 2020 SpiNNaker: A Spiking Neural Network Architecture (Boston-Delft: Now Publishers)
[205] Rasche C, Douglas R and Mahowald M 1997 Characterization of a pyramidal silicon neuron Neuromorphic Systems: Engineering
Silicon from Neurobiology ed L S Smith and A Hamilton (Singapore: World Scientific)
[206] van Schaik A 2001 Building blocks for electronic spiking neural networks Neural Netw. 14 617–28
[207] Maldonado Huayaney F L, Nease S and Chicca E 2016 Learning in silicon beyond STDP: a neuromorphic implementation of
multi-factor synaptic plasticity with calcium-based dynamics IEEE Trans. Circuits Syst. I 63 2189–99
[208] Levi T, Nanami T, Tange A, Aihara K and Kohno T 2018 Development and applications of biomimetic neuronal networks toward
brainmorphic artificial intelligence IEEE Trans. Circuits Syst. II 65 577–81
[209] Abu-Hassan K, Taylor J D, Morris G, Donati E, Bortolotto Z A, Indiveri G, Paton J F R and Nogaret A 2019 Optimal solid state
neurons Nat. Commun. 10 5309
[210] Rubino A, Livanelioglu C, Qiao N, Payvand M and Indiveri G 2020 Ultra-low-power FDSOI neural circuits for extreme-edge
neuromorphic intelligence IEEE Trans. Circuits Syst. I 68 45–56
[211] Mead C 1989 Analog VLSI and Neural Systems (Reading, MA: Addison-Wesley)
[212] Douglas R, Mahowald M and Mead C 1995 Neuromorphic analogue VLSI Annu. Rev. Neurosci. 18 255–81
[213] Payvand M, Nair M V, Müller L K and Indiveri G 2019 A neuromorphic systems approach to in-memory computing with
non-ideal memristive devices: from mitigation to exploitation Faraday Discuss. 213 487–510
[214] Lillicrap T P and Santoro A 2019 Backpropagation through time and the brain Curr. Opin. Neurobiol. 55 82–9
[215] Backus J 1978 Can programming be liberated from the von Neumann style? Commun. ACM 21 613–41
[216] Indiveri G and Liu S-C 2015 Memory and information processing in neuromorphic systems Proc. IEEE 103 1379–97
[217] Ganguli S, Huh D and Sompolinsky H 2008 Memory traces in dynamical systems Proc. Natl Acad. Sci. 105 18970–5
[218] Park S, Chu M, Kim J, Noh J, Jeon M, Lee B H, Hwang H, Lee B and Lee B-g 2015 Electronic system with memristive synapses for
pattern recognition Sci. Rep. 5 10123
[219] Buccelli S et al 2019 A neuromorphic prosthesis to restore communication in neuronal networks iScience 19 402–14
[220] Bauer F C, Muir D R and Indiveri G 2019 Real-time ultra-low power ECG anomaly detection using an event-driven
neuromorphic processor IEEE Trans. Biomed. Circuits Syst. 13 1575–82
[221] Donati E, Payvand M, Risi N, Krause R and Indiveri G 2019 Discrimination of EMG signals using a neuromorphic
implementation of a spiking neural network IEEE Trans. Biomed. Circuits Syst. 13 795–803
[222] Burelo K, Sharifshazileh M, Krayenbühl N, Ramantani G, Indiveri G and Sarnthein J 2021 A spiking neural network (SNN) for
detecting high frequency oscillations (HFOs) in the intraoperative ECoG Sci. Rep. 11 6719
[223] Papadimitriou C H and Steiglitz K 1998 Combinatorial Optimization: Algorithms and Complexity (Courier Corporation)
[224] Kirkpatrick S, Gelatt C D and Vecchi M P 1983 Optimization by simulated annealing Science 220 671–80
[225] Fogel D B 2006 Evolutionary Computation: Toward a New Philosophy of Machine Intelligence vol 1 (New York: Wiley)
[226] Ackley D H, Hinton G E and Sejnowski T J 1985 A learning algorithm for Boltzmann machines Cogn. Sci. 9 147–69
[227] Lucas A 2014 Ising formulations of many NP problems Front. Phys. 2 5
[228] Barahona F 1982 On the computational complexity of Ising spin glass models J. Phys. A: Math. Gen. 15 3241
[229] Hopfield J J 1982 Neural networks and physical systems with emergent collective computational abilities Proc. Natl Acad. Sci. 79
2554–8
[230] Vadlamani S K, Xiao T P and Yablonovitch E 2020 Physics successfully implements Lagrange multiplier optimization Proc. Natl
Acad. Sci. USA 117 26639–50
[231] Kadowaki T and Nishimori H 1998 Quantum annealing in the transverse Ising model Phys. Rev. E 58 5355
[232] Johnson M W et al 2011 Quantum annealing with manufactured spins Nature 473 194–8
[233] Inagaki T et al 2016 A coherent Ising machine for 2000-node optimization problems Science 354 603–6
[234] McMahon P L et al 2016 A fully programmable 100-spin coherent Ising machine with all-to-all connections Science 354 614–7
[235] Hamerly R et al 2019 Experimental investigation of performance differences between coherent Ising machines and a quantum
annealer Sci. Adv. 5 eaau0823
[236] Aramon M, Rosenberg G, Valiante E, Miyazawa T, Tamura H and Katzgraber H G 2019 Physics-inspired optimization for
quadratic unconstrained problems using a digital annealer Front. Phys. 7 48
[237] Fujitsu 2018 Fujitsu quantum-inspired digital annealer cloud service to rapidly resolve combinatorial optimization problems
https://fujitsu.com/global/about/resources/news/press-releases/2018/0515-01.html (accessed 05 February 2021)
[238] Takemoto T, Hayashi M, Yoshimura C and Yamaoka M 2020 A 2 × 30 k-spin multi-chip scalable CMOS annealing processor
based on a processing-in-memory approach for solving large-scale combinatorial optimization problems IEEE J. Solid-State
Circuits 55 145–56
[239] Mahmoodi M R, Prezioso M and Strukov D B 2019 Versatile stochastic dot product circuits based on nonvolatile memories for
high performance neurocomputing and neurooptimization Nat. Commun. 10 5113
[240] Cai F et al 2020 Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks Nat.
Electron. 3 409–18
[241] Chou J, Bramhavar S, Ghosh S and Herzog W 2019 Analog coupled oscillator based weighted Ising machine Sci. Rep. 9 14786
[242] Dutta S, Khanna A, Gomez J, Ni K, Toroczkai Z and Datta S 2019 Experimental demonstration of phase transition
nano-oscillator based ising machine 2019 IEEE Int. Electron Devices Meeting (IEDM) pp 37–8
[243] Xiao T 2019 Optoelectronics for Refrigeration and Analog Circuits for Combinatorial Optimization PhD Thesis (University of
California, Berkeley)
[244] Camsari K Y, Sutton B M and Datta S 2019 p-bits for probabilistic spin logic Appl. Phys. Rev. 6 011305
[245] Abbink E J W, Albino L, Dollevoet T, Huisman D, Roussado J and Saldanha R L 2011 Solving large scale crew scheduling
problems in practice Public Transp. 3 149–64
[246] Hamerly R et al 2018 Scaling advantages of all-to-all connectivity in physical annealers: the coherent Ising machine vs D-Wave
2000Q (arXiv:1805.05217)
[247] Kalehbasti R, Ushijima-Mwesigwa H, Mandal A and Ghosh I 2020 Ising-based Louvain method: clustering large graphs with
specialized hardware (arXiv:2012.11391)
[248] Strubell E, Ganesh A and Mccallum A 2019 Energy and policy considerations for deep learning in NLP Proc. 57th Annual Meeting
of the Association for Computational Linguistics (ACL) 3645–50
[249] Rumelhart D E, Hinton G E and Williams R J 1986 Learning representations by back-propagating errors Nature 323 533–6
[250] Hsieh E R et al 2019 High-density multiple bits-per-cell 1T4R RRAM array with gradual SET/RESET and its effectiveness for
deep learning Proc. IEEE Int. Electron Devices Meeting (IEDM)
[251] Esmanhotto E et al 2020 High-density 3D monolithically integrated multiple 1T1R multi-level-cell for neural networks Proc.
IEEE Int. Electron Devices Meeting (IEDM)
[252] Barraud S et al 2020 3D RRAMs with gate-all-around stacked nanosheet transistors for in-memory-computing Proc. IEEE Int.
Electron Devices Meeting (IEDM)
[253] Alfaro Robayo D et al 2019 Integration of OTS based back-end selector with HfO2 OxRAM for crossbar arrays Proc. IEEE Int.
Electron Devices Meeting (IEDM)
[254] Le B Q, Grossi A, Vianello E, Wu T, Lama G, Beigne E, Wong H-S P and Mitra S 2019 Resistive RAM with multiple bits per cell:
array-level demonstration of 3 bits per cell IEEE Trans. Electron Devices 66 641–6
[255] Valentian A et al 2019 Fully integrated spiking neural network with analog neurons and RRAM synapses Proc. IEEE Int. Electron
Devices Meeting (IEDM)
[256] Ambrogio S et al 2018 Equivalent-accuracy accelerated neural-network training using analogue memory Nature 558 60–7
[257] Payvand M and Indiveri G 2019 Spike-based plasticity circuits for always-on on-line learning in neuromorphic systems IEEE Int.
Symp. Circuits and Systems (ISCAS)
[258] Ly D R B et al 2018 Role of synaptic variability in resistive memory-based spiking neural networks with unsupervised learning J.
Phys. D: Appl. Phys. 51 444002
[259] Gerstner W, Lehmann M, Liakoni V, Corneil D and Brea J 2018 Eligibility traces and plasticity on behavioral time scales:
experimental support of neohebbian three-factor learning rules Front. Neural Circuits 12 53
[260] Bellec G, Scherr F, Subramoney A, Hajek E, Salaj D, Legenstein R and Maass W 2020 A solution to the learning dilemma for
recurrent networks of spiking neurons Nat. Commun. 11 3625
[261] Crafton B, Parihar A, Gebhardt E and Raychowdhury A 2019 Direct feedback alignment with sparse connections for local
learning Front. Neurosci. 13 525
[262] Ernoult M et al 2019 Updates of equilibrium prop match gradients of backprop through time in an RNN with static input
NeurIPS 2019 Proc. (arXiv:1905.13633)
[263] Demirag Y et al 2021 PCM-trace: scalable synaptic eligibility traces with resistivity drift of phase-change materials Proc. IEEE Int.
Symp. Circuits and Systems (ISCAS) (to be published)
[264] Dalgaty T, Castellani N, Turck C, Harabi K-E, Querlioz D and Vianello E 2021 In situ learning using intrinsic memristor
variability via Markov chain Monte Carlo sampling Nat. Electron. 4 151–61
[265] Vivet P et al 2020 A 220GOPS 96-core processor with 6 chiplets 3D-stacked on an active interposer offering 0.6 ns mm−1 latency,
3-Tb/s/mm2 inter-chiplet interconnects and 156 mW mm−2 @ 82%-peak-efficiency DC–DC converters Proc. IEEE Int.
Solid-State Circuits Conf. (ISSCC) 46–8
[266] Ambs P 2010 Optical computing: a 60 year adventure Adv. Opt. Technol. 2010 372652
[267] Nahmias M A, De Lima T F, Tait A N, Peng H-T, Shastri B J and Prucnal P R 2020 Photonic multiply-accumulate operations for
neural networks IEEE J. Sel. Top. Quantum Electron. 26 1–18
[268] Lin X, Rivenson Y, Yardimci N T, Veli M, Luo Y, Jarrahi M and Ozcan A 2018 All-optical machine learning using diffractive deep
neural networks Science 361 1004–8
[269] Shastri B J et al 2020 Photonics for artificial intelligence and neuromorphic computing Nat. Photon. 15 102–14
[270] Shen Y et al 2017 Deep learning with coherent nanophotonic circuits Nat. Photon. 11 441–6
[271] Tanaka G et al 2019 Recent advances in physical reservoir computing: a review Neural Netw. 115 100–23
[272] Feldmann J, Youngblood N, Wright C D, Bhaskaran H and Pernice W H P 2019 All-optical spiking neurosynaptic networks with
self-learning capabilities Nature 569 208–14
[273] Cheng Z, Ríos C, Pernice W H, Wright C D and Bhaskaran H 2017 On-chip photonic synapse Sci. Adv. 3 e1700160
[274] Wu C, Yu H, Lee S, Peng R, Takeuchi I and Li M 2021 Programmable phase-change metasurfaces on waveguides for multimode
photonic convolutional neural network Nat. Commun. 12 96
[275] Xu X et al 2021 11 TOPS photonic convolutional accelerator for optical neural networks Nature 589 44–51
[276] Sebastian A et al 2017 Temporal correlation detection using computational phase-change memory Nat. Commun. 8 1115
[277] Ríos C et al 2019 In-memory computing on a photonic platform Sci. Adv. 5 eaau5759
[278] Murmann B 2015 The race for the extra decibel: a brief review of current ADC performance trajectories IEEE Solid-State Circuits
Mag. 7 58–66
[279] Marin-Palomo P et al 2017 Microresonator-based solitons for massively parallel coherent optical communications Nature 546
274–9
[280] Huang C, de Lima T F, Jha A, Abbaslou S, Shastri B J and Prucnal P R 2019 Giant enhancement in signal contrast using integrated
all-optical nonlinear thresholder Optics InfoBase Conf. Papers
[281] Wuttig M, Bhaskaran H and Taubner T 2017 Phase-change materials for non-volatile photonic applications Nat. Photon. 11
465–76
[282] Pernice W H and Bhaskaran H 2012 Photonic non-volatile memories using phase change materials Appl. Phys. Lett. 101 171101
[283] Reshef O, De Leon I, Alam M Z and Boyd R W 2019 Nonlinear optical effects in epsilon-near-zero media Nat. Rev. Mater. 4
535–51
[284] Gupta S, Agrawal A, Gopalakrishnan K and Narayanan P 2015 Deep learning with limited numerical precision 32nd Int. Conf.
Machine Learning (ICML 2015)
[285] Schemmel J, Brüderle D, Grübl A, Hock M, Meier K and Millner S 2010 A wafer-scale neuromorphic hardware system for
large-scale neural modelling Proc. IEEE Int. Symp. Circuits and Systems (ISCAS) 1947–50
[286] Davison A P, Brüderle D, Eppler J M, Kremkow J, Muller E, Pecevski D A, Perrinet L and Yger P 2009 PyNN: a common interface
for neuronal network simulators Front. Neuroinform. 2 11
[287] Knight J C and Nowotny T 2018 GPUs outperform current HPC and neuromorphic solutions in terms of speed and energy when
simulating a highly-connected cortical model Front. Neurosci. 12 941
[288] van Albada S J, Rowley A G, Senk J, Hopkins M, Schmidt M, Stokes A B, Lester D R, Diesmann M and Furber S B 2018
Performance comparison of the digital neuromorphic hardware SpiNNaker and the neural network simulation software NEST
for a full-scale cortical microcircuit model Front. Neurosci. 12 291
[289] Rhodes O, Peres L, Rowley A G D, Gait A, Plana L A, Brenninkmeijer C and Furber S B 2020 Real-time cortical simulation on
neuromorphic hardware Phil. Trans. R. Soc. A 378 20190160
[290] Blouw P, Choo X, Hunsberger E and Eliasmith C 2019 Benchmarking keyword spotting efficiency on neuromorphic hardware
Proc. NICE’19
[291] Arthur J and Boahen K 2006 Learning in silicon: timing is everything Advances in Neural Information Processing Systems 18 ed Y
Weiss, B Schölkopf and J Platt (Cambridge, MA: MIT Press)
[292] Baldi P, Sadowski P and Lu Z 2017 Learning in the machine: the symmetries of the deep learning channel Neural Netw. 95 110–33
[293] Baydin A G, Pearlmutter B A, Radul A A and Siskind J M 2017 Automatic differentiation in machine learning: a survey J. Mach.
Learn. Res. 18 5595–637
[294] Bellec G, Scherr F, Hajek E, Salaj D, Legenstein R and Maass W 2019 Biologically inspired alternatives to backpropagation
through time for learning in recurrent neural nets (arXiv:1901.09049)
[295] Davies M, Srinivasa N, Lin T H, Chinya G, Joshi P, Lines A, Wild A and Wang H 2018 Loihi: a neuromorphic manycore processor
with on-chip learning IEEE Micro 38 82–99
[296] Demirag Y, Moro F, Dalgaty T, Navarro G, Frenkel C, Indiveri G, Vianello E and Payvand M 2021 PCM-trace: scalable synaptic
eligibility traces with resistivity drift of phase-change materials (arXiv:2102.07260)
[297] Ercsey-Ravasz M, Markov N T, Lamy C, Van Essen D C, Knoblauch K, Toroczkai Z and Kennedy H 2013 A predictive network
model of cerebral cortical connectivity based on a distance rule Neuron 80 184–97
[298] Esser S K et al 2016 Convolutional networks for fast, energy-efficient neuromorphic computing Proc. Natl Acad. Sci. USA 113
11441–6
[299] Fouda M E, Kurdahi F, Eltawil A and Neftci E 2019 Spiking Neural Networks for Inference and Learning: A Memristor-Based Design
Perspective (Elsevier) ch 19 (accepted)
[300] Friedmann S, Schemmel J, Grübl A, Hartel A, Hock M and Meier K 2017 Demonstrating hybrid learning in a flexible neuromorphic
hardware system IEEE Trans. Biomed. Circuits Syst. 11 128–42
[301] Galluppi F, Lagorce X, Stromatias E, Pfeiffer M, Plana L A, Furber S B and Benosman R B 2014 A framework for plasticity
implementation on the SpiNNaker neural architecture Front. Neurosci. 8 429
[302] Jaderberg M, Czarnecki W M, Osindero S, Vinyals O, Graves A and Kavukcuoglu K 2016 Decoupled neural interfaces using
synthetic gradients (arXiv:1608.05343)
[303] Jia Z, Tillman B, Maggioni M and Scarpazza D P 2019 Dissecting the graphcore IPU architecture via microbenchmarking
(arXiv:1912.03413)
[304] Kaiser J, Mostafa H and Neftci E 2019 Synaptic plasticity for deep continuous local learning Front. Neurosci. 14 424
[305] Kumaran D, Hassabis D and McClelland J L 2016 What learning systems do intelligent agents need? Complementary learning
systems theory updated Trends Cognit. Sci. 20 512–34
[306] Lillicrap T P, Santoro A, Marris L, Akerman C J and Hinton G 2020 Backpropagation and the brain Nat. Rev. Neurosci. 21 335–46
[307] Neftci E O 2018 Data and power efficient intelligence with neuromorphic learning machines iScience 5 52–68
[308] Payvand M, Fouda M E, Kurdahi F, Eltawil A and Neftci E O 2020 Error-triggered three-factor learning dynamics for crossbar
arrays 2020 2nd IEEE Int. Conf. Artificial Intelligence Circuits and Systems (AICAS) 218–22
[309] Pedroni B U, Joshi S, Deiss S R, Sheik S, Detorakis G, Paul S, Augustine C, Neftci E O and Cauwenberghs G 2019
Memory-efficient synaptic connectivity for spike-timing- dependent plasticity Front. Neurosci. 13 357
[310] Pfeil T, Potjans T C, Schrader S, Potjans W, Schemmel J, Diesmann M and Meier K 2012 Is a 4 bit synaptic weight resolution
enough? Constraints on enabling spike-timing dependent plasticity in neuromorphic hardware Front. Neurosci. 6 90
[311] Prezioso M, Mahmoodi M R, Merrikh Bayat F, Nili H, Kim H, Vincent A and Strukov D B 2018 Spike-timing dependent
plasticity learning of coincidence detection with passively integrated memristive circuits Nat. Commun. 9 5311
[312] Rastegari M, Ordonez V, Redmon J and Farhadi A 2016 XNOR-Net: ImageNet classification using binary convolutional neural
networks European Conf. Computer Vision (Berlin: Springer) 525–42
[313] Richards B A et al 2019 A deep learning framework for neuroscience Nat. Neurosci. 22 1761–70
[314] Rueckauer B, Bybee C, Goettsche R, Singh Y, Mishra J and Wild A 2021 NxTF: an API and compiler for deep spiking neural
networks on Intel Loihi (arXiv:2101.04261)
105
Neuromorph. Comput. Eng. 2 (2022) 022501 Roadmap
[315] Shrestha S B and Orchard G 2018 SLAYER: spike layer error reassignment in time Advances in Neural Information Processing
Systems 1412–21
[316] Spilger P et al 2020 hxtorch: PyTorch for BrainScaleS-2 IoT Streams for Data-Driven Predictive Maintenance and IoT, Edge, and
Mobile for Embedded Machine Learning (Berlin: Springer) pp 189–200
[317] Stewart K, Orchard G, Shrestha S B and Neftci E 2020 Online few-shot gesture learning on a neuromorphic processor IEEE J.
Emerg. Sel. Top. Circuits Syst. 10 512–21
[318] Thiele J C, Bichler O and Dupret A 2019 SpikeGrad: an ANN-equivalent computation model for implementing backpropagation
with spikes (arXiv:1906.00851)
[319] Wang Z et al 2018 Fully memristive neural networks for pattern classification with unsupervised learning Nat. Electron. 1 137
[320] Zenke F and Gerstner W 2014 Limits to high-speed simulations of spiking neural networks using general-purpose computers
Front. Neuroinf. 8 76
[321] Zenke F and Neftci E O 2021 Brain-inspired learning on neuromorphic substrates Proc. IEEE 109 935–50
[322] Zenke F and Ganguli S 2017 Superspike: supervised learning in multi-layer spiking neural networks (arXiv:1705.11146)
[323] Lake B M, Ullman T D, Tenenbaum J B and Gershman S J 2017 Building machines that learn and think like people Behav. Brain
Sci. 40 e253
[324] Murray J M 2019 Local online learning in recurrent networks with random feedback eLife 8 e43299
[325] Scherr F, Stöckl C and Maass W 2020 One-shot learning with spiking neural networks (bioRxiv)
[326] Hochreiter S, Younger A S and Conwell P R 2001 Learning to learn using gradient descent Int. Conf. Artificial Neural Networks
(Berlin: Springer) pp 87–94
[327] Confavreux B, Zenke F, Agnes E J, Lillicrap T and Vogels T 2020 A meta-learning approach to (re) discover plasticity rules that
carve a desired function into a neural network (bioRxiv)
[328] Jordan J, Schmidt M, Senn W and Petrovici M A 2020 Evolving to learn: discovering interpretable plasticity rules for spiking
networks (arXiv:2005.14149)
[329] Bohnstingl T, Scherr F, Pehle C, Meier K and Maass W 2019 Neuromorphic hardware learns to learn Front. Neurosci. 13 483
[330] Wang J X, Kurth-Nelson Z, Tirumala D, Soyer H, Leibo J Z, Munos R and Botvinick M 2016 Learning to reinforcement learn
(arXiv:1611.05763)
[331] Duan Y, Schulman J, Chen X, Bartlett P L, Sutskever I and Abbeel P 2016 RL2: fast reinforcement learning via slow reinforcement
learning (arXiv:1611.02779)
[332] Wang J X, Kurth-Nelson Z, Kumaran D, Tirumala D, Soyer H, Leibo J Z, Hassabis D and Botvinick M 2018 Prefrontal cortex as a
meta-reinforcement learning system Nat. Neurosci. 21 860–8
[333] Bellec G, Salaj D, Subramoney A, Legenstein R and Maass W 2018 Long short-term memory and learning-to-learn in networks
of spiking neurons Advances in Neural Information Processing Systems 787–97
[334] Subramoney A, Bellec G, Scherr F, Legenstein R and Maass W 2020 Revisiting the role of synaptic plasticity and network
dynamics for fast learning in spiking neural networks (bioRxiv)
[335] Subramoney A, Scherr F and Maass W 2019 Reservoirs learn to learn (arXiv:1909.07486)
[336] Finn C, Abbeel P and Levine S 2017 Model-agnostic meta-learning for fast adaptation of deep networks Proc. 34th Int. Conf.
Machine Learning (PMLR) vol 70 pp 1126–35
[337] Furber S B, Lester D R, Plana L A, Garside J D, Painkras E, Temple S and Brown A D 2012 Overview of the SpiNNaker system
architecture IEEE Trans. Comput. 62 2454–67
[338] Salimans T, Ho J, Chen X, Sidor S and Sutskever I 2017 Evolution strategies as a scalable alternative to reinforcement learning
(arXiv:1703.03864)
[339] Grübl A, Billaudelle S, Cramer B, Karasenko V and Schemmel J 2020 Verification and design methods for the BrainScaleS
neuromorphic hardware system J. Signal Process. Syst. 92 1277–92
[340] Sejnowski T J, Koch C and Churchland P S 1988 Computational neuroscience Science 241 1299–306
[341] Abbott L F 2008 Theoretical neuroscience rising Neuron 60 489–95
[342] Hodgkin A L and Huxley A F 1952 A quantitative description of membrane current and its application to conduction and
excitation in nerve J. Physiol. 117 500–44
[343] Hopfield J J 1988 Artificial neural networks IEEE Circuits Dev. Mag. 4 3–10
[344] Schwartz E L 1993 Computational Neuroscience (Cambridge, MA: MIT Press)
[345] Thiele A and Bellgrove M A 2018 Neuromodulation of attention Neuron 97 769–85
[346] Baddeley A 1992 Working memory Science 255 556–9
[347] Hollerman J R and Schultz W 1998 Dopamine neurons report an error in the temporal prediction of reward during learning Nat.
Neurosci. 1 304–9
[348] Markram H et al 2015 Reconstruction and simulation of neocortical microcircuitry Cell 163 456–92
[349] Koch C and Reid R C 2012 Observatories of the mind Nature 483 397–8
[350] Koroshetz W et al 2018 The state of the NIH BRAIN initiative J. Neurosci. 38 6427–38
[351] Amunts K, Ebell C, Muller J, Telefont M, Knoll A and Lippert T 2016 The human brain project: creating a European Research
infrastructure to decode the human brain Neuron 92 574–81
[352] Adams A et al 2020 International brain initiative: an innovative framework for coordinated global brain Research efforts Neuron
105 212–6
[353] Okano H, Miyawaki A and Kasai K 2015 Brain/MINDS: brain-mapping project in Japan Phil. Trans. R. Soc. B 370 20140310
[354] Davison A P 2012 Collaborative modelling: the future of computational neuroscience? Netw. Comput. Neural Syst. 23 157–66
[355] Ramaswamy S et al 2015 The neocortical microcircuit collaboration portal: a resource for rat somatosensory cortex Front. Neural
Circuits 9 44
[356] Markram H 2013 Seven challenges for neuroscience Funct. Neurol. 28 145–51
[357] Suárez L E, Richards B A, Lajoie G and Misic B 2021 Learning function from structure in neuromorphic networks Nat. Mach.
Intell. 3 771–86
[358] Tsodyks M 2008 Computational neuroscience grand challenges—a humble attempt at future forecast Front. Neurosci. 2 21
[359] Hasler J and Marr B 2013 Finding a roadmap to achieve large neuromorphic hardware systems Front. Neurosci. 7 118
[360] Niven J E and Laughlin S B 2008 Energy limitation as a selective pressure on the evolution of sensory systems J. Exp. Biol. 211
1792–804
[361] Branco T and Staras K 2009 The probability of neurotransmitter release: variability and feedback control at single synapses Nat.
Rev. Neurosci. 10 373–83
[362] Hamilton T J, Afshar S, van Schaik A and Tapson J C 2014 Stochastic electronics: a neuro-inspired design paradigm for
integrated circuits Proc. IEEE 102 843–59
[363] Alawad M and Lin M 2019 Survey of stochastic-based computation paradigms IEEE Trans. Emerg. Top. Comput. 7 98–114
[364] Pantone R D, Kendall J D and Nino J C 2018 Memristive nanowires exhibit small-world connectivity Neural Netw. 106 144–51
[365] Rigotti M, Barak O, Warden M R, Wang X-J, Daw N D, Miller E K and Fusi S 2013 The importance of mixed selectivity in
complex cognitive tasks Nature 497 585–90
[366] Thakur C S, Afshar S, Wang R M, Hamilton T J, Tapson J and van Schaik A 2016 Bayesian estimation and inference using
stochastic electronics Front. Neurosci. 10 104
[367] Maass W 2014 Noise as a resource for computation and learning in networks of spiking neurons Proc. IEEE 102 860–80
[368] Hubel D H and Wiesel T N 1962 Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex J.
Physiol. 160 106–54
[369] Fukushima K and Miyake S 1982 Neocognitron: a self-organizing neural network model for a mechanism of visual pattern
recognition Competition and Cooperation in Neural Nets (Berlin: Springer) pp 267–85
[370] Kim Y and Panda P 2021 Visual explanations from spiking neural networks using interspike intervals (arXiv:2103.14441)
[371] Jain S, Sengupta A, Roy K and Raghunathan A 2020 RxNN: a framework for evaluating deep neural networks on resistive
crossbars IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 40 326–38
[372] Chen P-Y, Peng X and Yu S 2018 NeuroSim: a circuit-level macro model for benchmarking neuro-inspired architectures in
online learning IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 37 3067–80
[373] Diehl P U, Neil D, Binas J, Cook M, Liu S C and Pfeiffer M 2015 Fast-classifying, high-accuracy spiking deep networks through
weight and threshold balancing 2015 Int. Joint Conf. Neural Networks (IJCNN) (Piscataway, NJ: IEEE) pp 1–8
[374] Lee J H, Delbruck T and Pfeiffer M 2016 Training deep spiking neural networks using backpropagation Front. Neurosci. 10 508
[375] Gallego G et al 2020 Event-based vision: a survey IEEE Trans. Pattern Anal. Mach. Intell. 44 154–80
[376] Goodfellow I J, Shlens J and Szegedy C 2014 Explaining and harnessing adversarial examples (arXiv:1412.6572)
[377] Mostafa H 2017 Supervised learning based on temporal coding in spiking neural networks IEEE Trans. Neural Netw. Learn. Syst.
29 3227–35
[378] Han B and Roy K 2020 Deep spiking neural network: energy efficiency through time based coding European Conf. Computer
Vision p 388
[379] Montemurro M A, Rasch M J, Murayama Y, Logothetis N K and Panzeri S 2008 Phase-of-firing coding of natural visual stimuli
in primary visual cortex Curr. Biol. 18 375–80
[380] Kim J, Kim H, Huh S, Lee J and Choi K 2018 Deep neural networks with weighted spikes Neurocomputing 311 373–86
[381] Rathi N, Srinivasan G, Panda P and Roy K 2020 Enabling deep spiking neural networks with hybrid conversion and spike timing
dependent backpropagation (arXiv:2005.01807)
[382] Wu Y, Deng L, Li G, Zhu J and Shi L 2018 Spatio-temporal backpropagation for training high-performance spiking neural
networks Front. Neurosci. 12 331
[383] Kim Y and Panda P 2020 Revisiting batch normalization for training low-latency deep spiking neural networks from scratch
(arXiv:2010.01729)
[384] Sharmin S, Panda P, Sarwar S S, Lee C, Ponghiran W and Roy K 2019 A comprehensive analysis on adversarial robustness of
spiking neural networks 2019 Int. Joint Conf. Neural Networks (IJCNN) (Piscataway, NJ: IEEE) pp 1–8
[385] Sharmin S, Rathi N, Panda P and Roy K 2020 Inherent adversarial robustness of deep spiking neural networks: effects of discrete
input encoding and non-linear activations European Conf. Computer Vision (Berlin: Springer) pp 399–414
[386] Roy K, Jaiswal A and Panda P 2019 Towards spike-based machine intelligence with neuromorphic computing Nature 575 607–17
[387] Lukoševičius M and Jaeger H 2009 Reservoir computing approaches to recurrent neural network training Comput. Sci. Rev. 3
127–49
[388] Jaeger H 2001 The ‘echo state’ approach to analysing and training recurrent neural networks, with an erratum note GMD
Technical Report 148.34 (Bonn: German National Research Center for Information Technology) p 13
[389] Maass W, Natschläger T and Markram H 2002 Real-time computing without stable states: a new framework for neural
computation based on perturbations Neural Comput. 14 2531–60
[390] Jaeger H 2007 Echo state network Scholarpedia 2 2330
[391] Tanaka G, Yamane T, Héroux J B, Nakane R, Kanazawa N, Takeda S, Numata H, Nakano D and Hirose A 2019 Recent advances in
physical reservoir computing: a review Neural Netw. 115 100–23
[392] Appeltant L et al 2011 Information processing using a single dynamical node as complex system Nat. Commun. 2 468
[393] Fernando C and Sojakka S 2003 Pattern recognition in a bucket European Conf. Artificial Life (Berlin: Springer)
[394] Shi W, Cao J, Zhang Q, Li Y and Xu L 2016 Edge computing: vision and challenges IEEE Internet Things J. 3 637–46
[395] Jaeger H 2002 Tutorial on Training Recurrent Neural Networks, Covering BPPT, RTRL, EKF and the ‘Echo State Network’ Approach
vol 5 (Bonn: GMD-Forschungszentrum Informationstechnik)
[396] Bertschinger N and Natschläger T 2004 Real-time computation at the edge of chaos in recurrent neural networks Neural Comput.
16 1413–36
[397] Lukoševičius M 2012 A practical guide to applying echo state networks Neural Networks: Tricks of the Trade (Berlin: Springer) pp
659–86
[398] Grigoryeva L and Ortega J-P 2018 Echo state networks are universal Neural Netw. 108 495–508
[399] Nakajima K and Fischer I (ed) 2021 Reservoir Computing: Theory, Physical Implementations, and Applications (Berlin: Springer)
[400] Ma C et al 2021 Addressing limited weight resolution in a fully optical neuromorphic reservoir computing readout Sci. Rep. 11
3102
[401] Araujo F A et al 2020 Role of non-linear data processing on speech recognition task in the framework of reservoir computing Sci.
Rep. 10 328
[402] Nakane R et al Spin waves propagating through a stripe magnetic domain structure and their applications to reservoir computing
Phys. Rev. Res. (accepted)
[403] Pathak J, Lu Z, Hunt B R, Girvan M and Ott E 2017 Using machine learning to replicate chaotic attractors and calculate
Lyapunov exponents from data Chaos 27 121102
[404] Gallicchio C, Micheli A and Pedrelli L 2017 Deep reservoir computing: a critical experimental analysis Neurocomputing 268
87–99
[405] Hadaeghi F 2021 Neuromorphic Electronic Systems for Reservoir Computing (Reservoir Computing Natural Computing Series) ed
K Nakajima and I Fischer (Berlin: Springer)
[406] Van der Sande G, Brunner D and Soriano M C 2017 Advances in photonic reservoir computing Nanophotonics 6 561–76
[407] Cramer B et al 2020 Control of criticality and computation in spiking neuromorphic networks with plasticity Nat. Commun. 11
2853
[408] Tanaka G et al 2020 Guest editorial: special issue on new frontiers in extremely efficient reservoir computing IEEE Trans. Neural
Netw. Learn. Syst. (unpublished)
[409] Gerstner W and Kistler W M 2002 Spiking Neuron Models: Single Neurons, Populations, Plasticity (Cambridge: Cambridge
University Press)
[410] Paugam-Moisy H and Bohte S M 2012 Computing with spiking neuron networks Handbook of Natural Computing vol 1 pp 1–47
[411] Mahowald M 1992 VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function PhD Thesis California Institute of Technology
[412] Furber S B, Galluppi F, Temple S and Plana L A 2014 The SpiNNaker project Proc. IEEE 102 652–65
[413] Delorme A, Gautrais J, Van Rullen R and Thorpe S 1999 SpikeNET: a simulator for modeling large networks of integrate and fire
neurons Neurocomputing 26–27 989–96
[414] Delorme A and Thorpe S J 2003 SpikeNET: an event-driven simulation package for modelling large networks of spiking neurons
Netw. Comput. Neural Syst. 14 613–27
[415] Adrian E D and Matthews R 1927 The action of light on the eye J. Physiol. 63 378–414
[416] Thorpe S J 1990 Spike arrival times: a highly efficient coding scheme for neural networks Parallel Processing in Neural Systems and
Computers ed R Eckmiller, G Hartmann and G Hauske (Amsterdam: North-Holland) pp 91–4
[417] Gollisch T and Meister M 2008 Rapid neural coding in the retina with relative spike latencies Science 319 1108–11
[418] Thorpe S and Gautrais J 1998 Rank order coding Computational Neuroscience: Trends in Research ed J Bower (New York: Plenum)
pp 113–8
[419] Furber S B, Bainbridge W J, Cumpstey J M and Temple S 2004 Sparse distributed memory using N-of-M codes Neural
Netw. 17 1437–51
[420] Thorpe S J, Guyonneau R, Guilbaud N, Allegraud J-M and Vanrullen R 2004 SpikeNet: real-time visual processing with one spike
per neuron Neurocomputing 58–60 857–64
[421] Agus T R, Thorpe S J and Pressnitzer D 2010 Rapid formation of robust auditory memories: insights from noise Neuron 66 610–8
[422] Thunell E and Thorpe S J 2019 Memory for repeated images in rapid-serial-visual-presentation streams of thousands of images
Psychol. Sci. 30 989–1000
[423] Masquelier T and Thorpe S J 2007 Unsupervised learning of visual features through spike timing dependent plasticity PLoS
Comput. Biol. 3 e31
[424] Masquelier T, Guyonneau R and Thorpe S J 2008 Spike timing dependent plasticity finds the start of repeating patterns in
continuous spike trains PLoS One 3 e1377
[425] Bichler O, Querlioz D, Thorpe S J, Bourgoin J-P and Gamrat C 2012 Extraction of temporally correlated features from dynamic
vision sensors with spike-timing-dependent plasticity Neural Netw. 32 339–48
[426] Thorpe S, Yousefzadeh A, Martin J and Masquelier T 2017 Unsupervised learning of repeating patterns using a novel STDP based
algorithm J. Vis. 17 1079
[427] Indiveri G 1999 Neuromorphic analog VLSI sensor for visual tracking: circuits and application examples IEEE Trans. Circuits
Syst. II 46 1337–47
[428] http://jaerproject.org
[429] Lichtsteiner P, Posch C and Delbruck T 2008 A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor
IEEE J. Solid-State Circuits 43 566–76
[430] Qiao N et al 2015 A re-configurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128 K
synapses Front. Neurosci. 9 141
[431] Monforte M, Arriandiaga A, Glover A and Bartolozzi C 2020 Where and when: event-based spatiotemporal trajectory prediction
from the iCub’s point-of-view 2020 IEEE Int. Conf. Robotics and Automation (ICRA) (Piscataway, NJ: IEEE) pp 9521–7
[432] Kreiser R, Renner A, Leite V R, Serhan B, Bartolozzi C, Glover A and Sandamirskaya Y 2020 An on-chip spiking neural network
for estimation of the head pose of the iCub robot Front. Neurosci. 14 551
[433] Gutierrez-Galan D, Dominguez-Morales J P, Perez-Peña F, Jimenez-Fernandez A and Linares-Barranco A 2020 NeuroPod: a
real-time neuromorphic spiking CPG applied to robotics Neurocomputing 381 10–9
[434] Zhao J, Risi N, Monforte M, Bartolozzi C, Indiveri G and Donati E 2020 Closed-loop spiking control on a neuromorphic
processor implemented on the iCub IEEE J. Emerg. Sel. Top. Circuits Syst. 10 546–56
[435] Naveros F, Luque N R, Ros E and Arleo A 2020 VOR adaptation on a humanoid iCub robot using a spiking cerebellar model
IEEE Trans. Cybern. 50 4744–57
[436] García D H, Adams S, Rast A, Wennekers T, Furber S and Cangelosi A 2018 Visual attention and object naming in humanoid
robots using a bio-inspired spiking neural network Robot. Auton. Syst. 104 56–71
[437] Rapp H and Nawrot M P 2020 A spiking neural program for sensorimotor control during foraging in flying insects Proc. Natl Acad. Sci. USA 117 28412–21
[438] Kaiser J, Mostafa H and Neftci E 2020 Synaptic plasticity dynamics for deep continuous local learning (DECOLLE) Front.
Neurosci. 14 424
[439] Illing B, Gerstner W and Brea J 2019 Biologically plausible deep learning—but how far can we go with shallow networks? Neural
Netw. 118 90–101
[440] Klos C, Kossio Y F K, Goedeke S, Gilra A and Memmesheimer R M 2020 Dynamical learning of dynamics Phys. Rev. Lett. 125
088103
[441] Panzeri S, Harvey C D, Piasini E, Latham P E and Fellin T 2017 Cracking the neural code for sensory perception by combining
statistics, intervention, and behavior Neuron 93 491–507
[442] Yang S, Wang J, Zhang N, Deng B, Pang Y and Azghadi M R 2021 CerebelluMorphic: large-scale neuromorphic model and
architecture for supervised motor learning IEEE Trans. Neural Netw. Learn. Syst. 1–15
[443] 2016 Video interview with Anthony Foxx available online at https://theverge.com/a/verge2021/secretary-anthony-foxx
[444] Adams E 2020 The revolution will not be self-driven: the real future of autonomous cars available online at
https://robbreport.com/motors/cars/self-driving-cars-explainer-2901586/
[445] Jason Eichenholz, Luminar Technologies Inc 2019 personal communication
[446] Gawron J H, Keoleian G A, De Kleine R D, Wallington T J and Kim H C 2018 Life cycle assessment of connected and automated
vehicles: sensing and computing subsystem and vehicle level effects Environ. Sci. Technol. 52 3249–56
[447] Lichtsteiner P and Delbruck T 2005 64 × 64 event-driven logarithmic temporal derivative silicon retina 2005 IEEE Workshop on
Charge-Coupled Devices and Advanced Image Sensors (Nagano, Japan) pp 157–60
[448] Itti L, Koch C and Niebur E 1998 A model of saliency-based visual attention for rapid scene analysis IEEE Trans. Pattern Anal.
Machine Intell. 20 1254–9
[449] Moreira O et al 2020 NeuronFlow: a hybrid neuromorphic—dataflow processor architecture for AI workloads 2nd IEEE Int.
Conf. Artificial Intelligence Circuits (AICAS 2020)
[450] Brown T et al 2020 Language models are few-shot learners (arXiv:2005.14165v4)
[451] Voelker A, Rasmussen D and Eliasmith C 2020 A spike in performance (arXiv:2002.03553)
[452] Persaud K and Dodd G 1982 Analysis of discrimination mechanisms in the mammalian olfactory system using a model nose
Nature 299 352–5
[453] Gardner J W, Hines E L and Wilkinson M 1990 Application of artificial neural networks to an electronic olfactory system Meas.
Sci. Technol. 1 446–51
[454] Hines E L and Gardner J W 1994 An artificial neural emulator for an odour sensor array Sensors Actuators B 19 661–4
[455] Persaud K C, Marco S and Gutierrez-Galvez A 2013 Neuromorphic Olfaction (Frontiers in Neuroengineering) (Boca Raton, FL:
CRC Press)
[456] Pearce T C, Schiffman S S, Nagle H T and Gardner J W 2003 Handbook of Machine Olfaction: Electronic Nose Technology (New
York: Wiley)
[457] Gronowitz M E, Liu A, Qiu Q, Yu C R and Cleland T A A physicochemical model of odor sampling (bioRxiv)
[458] Pearce T C, Karout S, Rácz Z, Capurro A, Gardner J W and Cole M 2013 Rapid processing of chemosensor transients in a
neuromorphic implementation of the insect macroglomerular complex Front. Neurosci. 7 119
[459] Vergara A, Fonollosa J, Mahiques J, Trincavelli M, Rulkov N and Huerta R 2013 On the performance of gas sensor arrays in open
sampling systems using inhibitory support vector machines Sensors Actuators B 185 462–77
[460] Cleland T A and Borthakur A 2020 A systematic framework for olfactory bulb signal transformations Front. Comput. Neurosci.
14 579143
[461] Schmuker M and Schneider G 2007 Processing and classification of chemical data inspired by insect olfaction Proc. Natl. Acad.
Sci. USA 104 20285–9
[462] Huerta R, Vembu S, Amigó J M, Nowotny T and Elkan C 2012 Inhibition in multiclass classification Neural Comput. 24 2473–507
[463] Koickal T J, Hamilton A, Tan S L, Covington J A, Gardner J W and Pearce T C 2007 Analog VLSI circuit implementation of an
adaptive neuromorphic olfaction chip IEEE Trans. Circuits Syst. I 54 60–73
[464] Vanarse A, Osseiran A and Rassau A 2017 An investigation into spike-based neuromorphic approaches for artificial olfactory
systems Sensors 17 2591
[465] Diamond A, Schmuker M and Nowotny T 2019 An unsupervised neuromorphic clustering algorithm Biol. Cybern. 113 423–37
[466] Imam N and Cleland T A 2020 Rapid online learning and robust recall in a neuromorphic olfactory circuit Nat. Mach. Intell. 2
181–91
[467] Borthakur A and Cleland T A 2019 Signal conditioning for learning in the wild Presented at the Neuro-inspired Computational
Elements Workshop (Albany, NY, USA) available: https://doi.org/10.1145/3320288.3320293
[468] Borthakur A and Cleland T A 2017 A neuromorphic transfer learning algorithm for orthogonalizing highly overlapping sensor
array responses Presented at the ISOCS/IEEE Int. Symp. Olfaction and Electronic Nose (ISOEN) (Montreal, Canada)
[469] Guerrero-Rivera R and Pearce T C 2007 Attractor-based pattern classification in a spiking FPGA implementation of the olfactory
bulb 2007 3rd Int. IEEE/EMBS Conf. Neural Engineering (Piscataway, NJ: IEEE) pp 593–9
[470] Imam N, Cleland T A, Manohar R, Merolla P A, Arthur J V, Akopyan F and Modha D S 2012 Implementation of olfactory bulb
glomerular-layer computations in a digital neurosynaptic core Front. Neurosci. 6 83
[471] Marco S et al 2014 A biomimetic approach to machine olfaction, featuring a very large-scale chemical sensor array and
embedded neuro-bio-inspired computation Microsyst. Technol. 20 729–42
[472] BelBruno J J 2019 Molecularly imprinted polymers Chem. Rev. 119 94–119
[473] Mahowald M and Mead C 1991 The silicon retina Sci. Am. 264 76–82
[474] Posch C, Serrano-Gotarredona T, Linares-Barranco B and Delbruck T 2014 Retinomorphic event-based vision sensors:
bioinspired cameras with spiking output Proc. IEEE 102 1470–84
[475] Posch C, Matolin D and Wohlgenannt R 2011 A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless
pixel-level video compression and time-domain CDS IEEE J. Solid-State Circuits 46 259–75
[476] Brandli C, Berner R, Yang M, Liu S-C and Delbruck T 2014 A 240 × 180 130 dB 3 μs latency global shutter spatiotemporal vision
sensor IEEE J. Solid-State Circuits 49 2333–41
[477] Son B et al 2017 A 640 × 480 dynamic vision sensor with a 9 μm pixel and 300 Meps address-event representation 2017 IEEE Int.
Solid-State Circuits Conf. (ISSCC) (San Francisco, CA) pp 66–7
[478] Finateu T et al 2020 A 1280 × 720 back-illuminated stacked temporal contrast event-based vision sensor with 4.86 μm pixels,
1.066 GEPS readout, programmable event-rate controller and compressive data-formatting pipeline 2020 IEEE Int. Solid-State
Circuits Conf. (ISSCC) (San Francisco, CA, USA) pp 112–4
[479] Moradi S, Qiao N, Stefanini F and Indiveri G 2018 A scalable multicore architecture with heterogeneous memory structures for
dynamic neuromorphic asynchronous processors (DYNAPs) IEEE Trans. Biomed. Circuits Syst. 12 106–22
[480] Moreira O et al 2020 NeuronFlow: a neuromorphic processor architecture for live AI applications
[481] Zhou F et al 2019 Optoelectronic resistive random access memory for neuromorphic vision sensors Nat. Nanotechnol. 14 776
[482] Tian H, Wang X, Wu F, Yang Y and Ren T-L 2018 High performance 2D perovskite/graphene optical synapses as artificial eyes
2018 IEEE Int. Electron Devices Meeting (IEDM)
[483] 2021 Neuromorphic Sensing & Computing Report (Yole Développement) www.yole.fr
[484] Shamma S A 1985 Speech processing in the auditory system: I. The representation of speech sounds in the responses of the
auditory nerve J. Acoust. Soc. Am. 78 1612–21
[485] Palmer A R and Russell I J 1986 Phase-locking in the cochlear nerve of the guinea-pig and its relation to the receptor potential of
inner hair-cells Hear. Res. 24 1–15
[486] Lyon R F and Mead C 1988 An analog electronic cochlea IEEE Trans. Acoust. Speech Signal Process. 36 1119–34
[487] Lyon R F, Katsiamis A G and Drakakis E M 2010 History and future of auditory filter models Proc. 2010 IEEE Int. Symp. Circuits
and Systems pp 3809–12
[488] Liu S-C, Delbruck T, Indiveri G, Whatley A and Douglas R 2015 Event-Based Neuromorphic Systems (New York: Wiley)
[489] Yang M, Chien C-H, Delbruck T and Liu S-C 2016 A 0.5 V 55 μW 64 × 2 channel binaural silicon cochlea for event-driven
stereo-audio sensing IEEE J. Solid-State Circuits 51 2554–69
[490] Sarpeshkar R, Salthouse C, Sit J-J, Baker M W, Zhak S M, Lu T K-T, Turicchia L and Balster S 2005 An ultra-low-power
programmable analog bionic ear processor IEEE Trans. Biomed. Eng. 52 711–27
[491] Horiuchi T K 2009 A spike-latency model for sonar-based navigation in obstacle fields IEEE Trans. Circuits Syst. I 56 2393–401
[492] Chan V, Jin C and van Schaik A 2010 Adaptive sound localisation with a silicon cochlea pair Front. Neurosci. 4 196
[493] Liu S-C, Rueckauer B, Ceolini E, Huber A and Delbruck T 2019 Event-driven sensing for efficient perception: vision and audition
algorithms IEEE Signal Process. Mag. 36 29–37
[494] Uysal I, Sathyendra H and Harris J 2008 Towards spike-based speech processing: a biologically plausible approach to simple
acoustic classification Int. J. Appl. Math. Comput. Sci. 18 129–37
[495] Gao C, Braun S, Kiselev I, Anumula J, Delbruck T and Liu S 2019 Real-time speech recognition for IoT purpose using a delta
recurrent neural network accelerator 2019 IEEE Int. Symp. Circuits and Systems (ISCAS) pp 1–5
[496] Tsai W Y et al 2016 Always-on speech recognition using TrueNorth, a reconfigurable, neurosynaptic processor IEEE Trans.
Comput. 66 996–1007
[497] Yang M, Yeh C-H, Zhou Y, Cerqueira J P, Lazar A A and Seok M 2019 Design of an always-on deep neural network-based 1 μW
voice activity detector aided with a customized software model for analog feature extraction IEEE J. Solid-State Circuits 54
1764–77
[498] Neftci E, Mostafa H and Zenke F 2019 Surrogate gradient learning in spiking neural networks IEEE Signal Process. Mag. 36 51–63
[499] Rueckauer B, Lungu I, Hu Y, Pfeiffer M and Liu S-C 2017 Conversion of continuous-valued deep networks to efficient
event-driven networks for image classification Front. Neurosci. 11 682
[500] Wu J, Yılmaz E, Zhang M, Li H and Tan K C 2020 Deep spiking neural networks for large vocabulary automatic speech
recognition Front. Neurosci. 14 199
[501] Zai A, Bhargava S, Mesgarani N and Liu S-C 2015 Reconstruction of audio waveforms from spike trains of artificial cochlea
models Front. Neurosci. 9 347
[502] Ceolini E, Anumula J, Braun S and Liu S 2019 Event-driven pipeline for low-latency low-compute keyword spotting and speaker
verification system 2019 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP) 7953–7
[503] Shan W, Yang M-H, Xu J, Lu Y, Zhang S, Wang T, Yang J, Shi L and Seok M 2020 14.1 A 510 nW 0.41 V low-memory
low-computation keyword-spotting chip using serial FFT-based MFCC and binarized depthwise separable convolutional neural
network in 28 nm CMOS 2020 IEEE Int. Solid-State Circuits Conf. (ISSCC) 230–2
[504] Renaud-LeMasson S, LeMasson G, Marder E and Abbott L F 1993 Hybrid circuits of interacting computer model and biological
neurons Presented at the Advances in Neural Information Processing Systems 5 (NIPS Conf.)
[505] Chapin J K, Moxon K A, Markowitz R S and Nicolelis M A L 1999 Real-time control of a robot arm using simultaneously
recorded neurons in the motor cortex Nat. Neurosci. 2 664–70
[506] Hochberg L R et al 2012 Reach and grasp by people with tetraplegia using a neurally controlled robotic arm Nature 485 372–5
[507] Panuccio G, Semprini M and Chiappalone M 2016 Intelligent biohybrid systems for functional brain repair New Horiz. Transl.
Med. 3 162–74
[508] Guggenmos D J, Azin M, Barbay S, Mahnken J D, Dunham C, Mohseni P and Nudo R J 2013 Restoration of function after brain
damage using a neural prosthesis Proc. Natl Acad. Sci. 110 21177
[509] Vassanelli S and Mahmud M 2016 Trends and challenges in neuroengineering: toward ‘intelligent’ neuroprostheses through
brain-‘brain inspired systems’ communication Front. Neurosci. 10 438
[510] Serb A et al 2020 Memristive synapses connect brain and silicon spiking neurons Sci. Rep. 10 2590
[511] Pautot S, Wyart C and Isacoff E Y 2008 Colloid-guided assembly of oriented 3D neuronal networks Nat. Methods 5 735–40
[512] Gómez-Martínez R, Hernández-Pinto A M, Duch M, Vázquez P, Zinoviev K, de la Rosa E J, Esteve J, Suárez T and Plaza J A 2013
Silicon chips detect intracellular pressure changes in living cells Nat. Nanotechnol. 8 517–21
[513] Chen L Y, Parizi K B, Kosuge H, Milaninia K M, McConnell M V, Wong H-S P and Poon A S Y 2013 Mass fabrication and
delivery of 3D multilayer μtags into living cells Sci. Rep. 3 2295
[514] Desai S B et al 2016 MoS2 transistors with one-nanometer gate lengths Science 354 6308
[515] van de Burgt Y, Melianas A, Keene S T, Malliaras G and Salleo A 2018 Organic electronics for neuromorphic computing Nat.
Electron. 1 386–97
[516] Tran L-G, Cha H-K and Park W-T 2017 RF power harvesting: a review on designing methodologies and applications Micro Nano
Syst. Lett. 5 14
[517] Basaeri H, Christensen D B and Roundy S 2016 A review of acoustic power transfer for bio-medical implants Smart Mater. Struct.
25 123001
[518] Rebel G, Estevez F, Gloesekoetter P and Castillo-Secilla J M 2015 Energy harvesting on human bodies Smart Health: Open
Problems and Future Challenges ed A Holzinger, C Röcker and M Ziefle (Berlin: Springer) pp 125–59
[519] Roseman J M, Lin J, Ramakrishnan S, Rosenstein J K and Shepard K L 2015 Hybrid integrated biological-solid-state system
powered with adenosine triphosphate Nat. Commun. 6 10070
[520] Wang W et al 2018 Learning of spatiotemporal patterns in a spiking neural network with resistive switching synapses Sci. Adv. 4
eaat4752
[521] Huang X 2018 Materials and applications of bioresorbable electronics J. Semicond. 39 011003
[522] Lecomte A, Giantomasi L, Rancati S, Boi F, Angotzi G N and Berdondini L 2020 Surface-functionalized self-standing
microdevices exhibit predictive localization and seamless integration in 3D neural spheroids Adv. Biosyst. 4 2000114
[523] Jafari A, Ganesan A, Thalisetty C S K, Sivasubramanian V, Oates T and Mohsenin T 2019 SensorNet: a scalable and low-power
deep convolutional neural network for multimodal data classification IEEE Trans. Circuits Syst. I 66 274–87
[524] Hosseini M, Ren H, Rashid H-A, Mazumder A N, Prakash B and Mohsenin T 2020 Neural networks for pulmonary disease
diagnosis using auditory and demographic information epiDAMIK 2020: 3rd epiDAMIK ACM SIGKDD Int. Workshop on
Epidemiology Meets Data Mining and Knowledge Discovery
[525] Dauphin Y N and Bengio Y 2013 Big neural networks waste capacity (arXiv:1301.3583)
[526] Gong Y, Liu L, Yang M and Bourdev L 2014 Compressing deep convolutional networks using vector quantization
(arXiv:1412.6115)
[527] Alemdar H, Leroy V, Prost A B and Pétrot F 2017 Ternary neural networks for resource-efficient AI applications Int. Joint Conf.
Neural Networks (IJCNN) (Anchorage, AK, USA)
[528] Courbariaux M, Bengio Y and David J 2015 BinaryConnect: training deep neural networks with binary weights during
propagations Advances in Neural Information Processing Systems vol 28 pp 3123–31
Neuromorph. Comput. Eng. 2 (2022) 022501 Roadmap
[529] Lo C Y, Lau F C M and Sham C W 2018 Fixed-point implementation of convolutional neural networks for image classification
Int. Conf. Advanced Technologies for Communications (ATC) (Ho Chi Minh City, Vietnam)
[530] Umuroglu Y, Fraser N J, Gambardella G, Blott M, Leong P, Jahre M and Vissers K 2017 FINN: a framework for fast, scalable
binarized neural network inference Proc. 2017 ACM/SIGDA Int. Symp. Field-Programmable Gate Arrays (New York: ACM) pp
65–74
[531] Ren H, Mazumder A N, Rashid H-A, Chandrareddy V, Shiri A and Mohsenin T 2020 End-to-end scalable and low power
multi-modal CNN for respiratory-related symptoms detection IEEE 33rd Int. System-on-Chip Conf. (SOCC)
[532] Khatwani M, Rashid H-A, Paneliya H, Horton M, Waytowich N, Hairston W D and Mohsenin T 2020 A flexible multichannel
EEG artifact identification processor using depthwise-separable convolutional neural networks ACM J. Emerg. Technol. Comput.
Syst. 17 1–23
[533] Enoka R M 1995 Morphological features and activation patterns of motor units J. Clin. Neurophysiol. 12 538–59
[534] Rodriguez-Tapia B, Soto I, Martinez D M and Arballo N C 2020 Myoelectric interfaces and related applications: current state of
EMG signal processing—a systematic review IEEE Access 8 7792–805
[535] Arteaga M V, Castiblanco J C, Mondragon I F, Colorado J D and Alvarado-Rojas C 2020 EMG-driven hand model based on the
classification of individual finger movements Biomed. Signal Process. Control 58 101834
[536] Park K H and Lee S W 2016 Movement intention decoding based on deep learning for multiuser myoelectric interfaces 2016 4th
Int. Winter Conf. Brain-Computer Interface (BCI) (Piscataway, NJ: IEEE) pp 1–2
[537] Donati E, Payvand M, Risi N, Krause R, Burelo K, Indiveri G, Dalgaty T and Vianello E 2018 Processing EMG signals using
reservoir computing on an event-based neuromorphic system 2018 IEEE Biomedical Circuits and Systems Conf. (BioCAS)
(Piscataway, NJ: IEEE) pp 1–4
[538] Donati E, Payvand M, Risi N, Krause R and Indiveri G 2019 Discrimination of EMG signals using a neuromorphic
implementation of a spiking neural network IEEE Trans. Biomed. Circuits Syst. 13 795–803
[539] Ceolini E, Frenkel C, Shrestha S B, Taverni G, Khacef L, Payvand M and Donati E 2020 Hand-gesture recognition based on EMG
and event-based camera sensor fusion: a benchmark in neuromorphic computing Front. Neurosci. 14 637
[540] Behrenbeck J et al 2019 Classification and regression of spatio-temporal signals using NeuCube and its realization on SpiNNaker
neuromorphic hardware J. Neural Eng. 16 026014
[541] Ma Y, Chen B, Ren P, Zheng N, Indiveri G and Donati E 2020 EMG-based gestures classification using a mixed-signal
neuromorphic processing system IEEE J. Emerg. Sel. Top. Circuits Syst. 10 578–87
[542] Azghadi M R, Lammie C, Eshraghian J K, Payvand M, Donati E, Linares-Barranco B and Indiveri G 2020 Hardware
implementation of deep network accelerators towards healthcare and biomedical applications IEEE Trans. Biomed. Circuits Syst.
14 1138–59
[543] Del Vecchio A, Germer C M, Elias L A, Fu Q, Fine J, Santello M and Farina D 2019 The human central nervous system transmits
common synaptic inputs to distinct motor neuron pools during non-synergistic digit actions J. Physiol. 597 5935–48
[544] Valle G et al 2018 Biomimetic intraneural sensory feedback enhances sensation naturalness, tactile sensitivity, and manual
dexterity in a bidirectional prosthesis Neuron 100 37–45
[545] Moin A et al 2020 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition Nat.
Electron. 4 54–63
[546] Urh F, Strnad D, Clarke A, Farina D and Holobar A 2020 On the selection of neural network architecture for supervised motor
unit identification from high-density surface EMG 2020 42nd Annual Int. Conf. IEEE Engineering in Medicine & Biology Society
(EMBC) (Piscataway, NJ: IEEE) pp 736–9
[547] Kleine B U, van Dijk J P, Lapatki B G, Zwarts M J and Stegeman D F 2007 Using two-dimensional spatial information in
decomposition of surface EMG signals J. Electromyogr. Kinesiol. 17 535–48
[548] Holobar A and Zazula D 2007 Multichannel blind source separation using convolution kernel compensation IEEE Trans. Signal
Process. 55 4487–96
[549] Rossi D et al 2015 PULP: a parallel ultra low power platform for next generation IoT applications 2015 IEEE Hot Chips 27 Symp.
(HCS) (Piscataway, NJ: IEEE) pp 1–39
[550] Chatterjee S, Roy S S, Bose R and Pratiher S 2020 Feature extraction from multifractal spectrum of electromyograms for
diagnosis of neuromuscular disorders IET Sci. Meas. Technol. 14 817–24
[551] Trigili E, Grazi L, Crea S, Accogli A, Carpaneto J, Micera S, Vitiello N and Panarese A 2019 Detection of movement onset using
EMG signals for upper-limb exoskeletons in reaching tasks J. Neuroeng. Rehabil. 16 45
[552] Ajoudani A, Zanchettin A M, Ivaldi S, Albu-Schäffer A, Kosuge K and Khatib O 2018 Progress and prospects of the human-robot
collaboration Auton. Robots 42 957–75
[553] Ingrand F and Ghallab M 2017 Deliberation for autonomous robots: a survey Artif. Intell. 247 10–44
[554] Kunze L, Hawes N, Duckett T, Hanheide M and Krajnik T 2018 Artificial intelligence for long-term robot autonomy: a survey
IEEE Robot. Autom. Lett. 3 4023–30
[555] Capolei M C, Angelidis E, Falotico E, Lund H H and Tolu S 2019 A biomimetic control method increases the adaptability of a
humanoid robot acting in a dynamic environment Front. Neurorobot. 13 70
[556] Indiveri G and Sandamirskaya Y 2019 The importance of space and time for signal processing in neuromorphic agents: the
challenge of developing low-power, autonomous agents that interact with the environment IEEE Signal Process. Mag. 36 16–28
[557] Thompson F and Galeazzi R 2020 Robust mission planning for autonomous marine vehicle fleets Robot. Auton. Syst. 124 103404
[558] Zool H I and Nohaidda S 2019 A survey and analysis of cooperative multi-agent robot systems: challenges and directions
Applications of Mobile Robots vol 1 (IntechOpen)
[559] Yang H, Han Q-L, Ge X, Ding L, Xu Y, Jiang B and Zhou D 2020 Fault-tolerant cooperative control of multiagent systems: a
survey of trends and methodologies IEEE Trans. Ind. Inf. 16 4–17
[560] Fardet T and Levina A 2020 Simple models including energy and spike constraints reproduce complex activity patterns and
metabolic disruptions PLoS Comput. Biol. 16 e1008503
[561] Naveros F, Garrido J A, Carrillo R R, Ros E and Luque N R 2017 Event- and time-driven techniques using parallel CPU-GPU
co-processing for spiking neural networks Front. Neuroinform. 11 7
[562] Furber S 2016 Large-scale neuromorphic computing systems J. Neural Eng. 13 051001
[563] Thompson F and Guihen D 2019 Review of mission planning for autonomous marine vehicle fleets J. Field Robot. 36 333–54
[564] Atyabi A, MahmoudZadeh S and Nefti-Meziani S 2018 Current advancements on autonomous mission planning and
management systems: an AUV and UAV perspective Annu. Rev. Control 46 196–215
[565] Li J, Li Z, Chen F, Bicchi A, Sun Y and Fukuda T 2019 Combined sensing, cognition, learning, and control for developing future
neuro-robotics systems: a survey IEEE Trans. Cognit. Dev. Syst. 11 148–61
[566] Woo J, Kim J H, Im J-P and Moon S E 2020 Recent advancements in emerging neuromorphic device technologies Adv. Intell.
Syst. 2 2070101
[567] Hauser S, Mutlu M, Léziart P-A, Khodr H, Bernardino A and Ijspeert A J 2020 Roombots extended: challenges in the next
generation of self-reconfigurable modular robots and their application in adaptive and assistive furniture Robot. Auton. Syst. 127
103467
[568] Galin R, Meshcheryakov R, Kamesheva S and Samoshina A 2020 Cobots and the benefits of their implementation in intelligent
manufacturing IOP Conf. Ser.: Mater. Sci. Eng. 862 032075
[569] Müller V C 2020 Ethics of artificial intelligence and robotics The Stanford Encyclopedia of Philosophy (Winter 2020 Edition) ed
E N Zalta (https://plato.stanford.edu/archives/win2020/entries/ethics-ai/)
[570] Coeckelbergh M 2020 AI Ethics (Cambridge, MA: MIT Press)
[571] Topol E 2019 Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (New York: Basic Books)
[572] Cohen I 2018 Is there a duty to share healthcare data? Big Data, Health Law, and Bioethics ed I Cohen, H Lynch, E Vayena and
U Gasser (Cambridge: Cambridge University Press) pp 209–22
[573] Binns R 2018 Fairness in machine learning: lessons from political philosophy Proc. 1st Conf. Fairness, Accountability and
Transparency, Proc. Machine Learning Research vol 81 pp 149–59
[574] Ploug T and Holm S 2020 The right to refuse diagnostics and treatment planning by artificial intelligence Med. Health Care
Philos. 23 107–14
[575] Nyholm S and Frank L 2017 From sex robots to love robots: is mutual love with a robot possible? Robot Sex: Social and Ethical
Implications (Cambridge, MA: MIT Press) pp 219–43
[576] Danaher J 2019 The philosophical case for robot friendship J. Posthuman Stud. 3 5–24
[577] Baldwin R 2019 The Globotics Upheaval: Globalisation, Robotics and the Future of Work (New York: Oxford University Press)
[578] Goos M 2018 The impact of technological progress on labour markets: policy challenges Oxford Rev. Econ. Pol. 34 362–75
[579] Turner J 2019 Robot Rules: Regulating Artificial Intelligence (Berlin: Springer)
[580] Coeckelbergh M 2016 Care robots and the future of ICT-mediated elderly care: a response to doom scenarios AI Soc. 31 455–62
[581] Bostrom N 2014 Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press)
[582] Floridi L 2016 Should we be afraid of AI? Machines seem to be getting smarter and smarter and much better at human jobs, yet
true AI is utterly implausible. Why? (Aeon) https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible