SEMINAR REPORT ON NEUROMORPHIC ENGINEERING
on
NEUROMORPHIC ENGINEERING
Submitted in partial fulfilment of the requirement for the award of the degree of
Bachelor of Technology
by
K. SAI TEJA - 21J21A0538
CERTIFICATE
Assistant Professor, and submitted to Joginpally B.R. Engineering College, is original and has
not been submitted in part or whole for a Bachelor's degree to any other university.
1 INTRODUCTION
1.1 Objectives
1.2 Importance of Neuromorphic Engineering
2 ARTIFICIAL NEURAL NETWORKS
3 LITERATURE REVIEW
3.1 Historical Viewpoint of Neuromorphic Computing
3.2 Neuromorphic Hardware & Software
4 NEUROMORPHIC HARDWARE
4.1 Advantages of Spiking Neural Networks
4.2 Neuromorphic Hardware Implementations
4.3 Neuromorphic Hardware Leaders
4.4 Neuromorphic Hardware Chips
5 APPLICATION AREAS
6 WORKING PRINCIPLE
7 MARKET TRENDS OF AI CHIPS
7.1 Cloud / Datacenter
7.2 Edge Computing
7.3 Future Market of Neuromorphic Chips
8 ADVANTAGES AND DISADVANTAGES
9 CONCLUSION
The core of neuromorphic systems lies in technologies such as neuromorphic chips (e.g.,
IBM TrueNorth, Intel Loihi, SpiNNaker), event-based sensors (such as dynamic vision
sensors), and memristors, which emulate synaptic behavior. These systems promise
breakthroughs in computational efficiency, low power consumption, and real-time data
processing, making them ideal for applications in robotics, healthcare, autonomous systems,
and artificial intelligence.
1.1 Objectives
The primary objectives of this project include:
- Discuss potential future use cases, including brain-machine interfaces and low-power AI solutions.
- Advancements and challenges.
- Future implications.
CHAPTER 2
ARTIFICIAL NEURAL NETWORKS
An Artificial Neural Network (ANN) is a collection of interconnected nodes inspired by the
biological human brain. The objective of an ANN is to perform cognitive functions such as
problem-solving and machine learning. Mathematical models of ANNs date back to the
1940s; however, the field remained quiet for a long time (Maass, 1997). Nowadays, ANNs
have become very popular with the success of ImageNet in 2009, driven by developments in
ANN models and in the hardware systems that can implement them. ANNs can be separated
into three generations based on their computational units and performance (Figure 1).
Artificial Neural Networks
The first generation of ANNs started in 1943 with the work of McCulloch and Pitts. Their
work was based on a computational model for neural networks in which each neuron is
called a "perceptron". Their model was later improved with extra hidden layers (the
Multi-Layer Perceptron, called MADALINE) for better accuracy by Widrow and his students
in the 1960s. However, the first-generation ANNs were far from biological models and gave
only digital outputs; essentially, they were decision trees based on if-else conditions.
The second generation of ANNs built on the first by applying functions to the decision trees
of the first-generation models. These functions operate between the visible and hidden layers
of perceptrons, creating the structure called "deep neural networks".
Comparison Table
CHAPTER 3
LITERATURE REVIEW
Neuromorphic systems are outputs of neuromorphic engineering; these systems usually
exhibit the following key features: two basic components, neurons and synapses; colocated
memory and computation; simple communication between components; and learning in the
components. Other features are nonlinear dynamics, high fan-in/fan-out components, spiking
behavior, the ability to adapt and learn through plasticity of parameters, events, and
structure, robustness, and the ability to handle noisy or incomplete input. Biological neural
networks are used as inspiration for integrated circuit design in neuromorphic devices. The
goal is to achieve equivalent learning capacity while being adaptive and fault-tolerant, and
allowing processing and memory/storage to occur within the same network. Neuromorphic
hardware, also known as neuromorphic architecture, is illustrated in Figure 1; its goal is to
create electrical devices that replicate neural mechanisms in electronic hardware and can
execute complex computations. Neuromorphic software refers to algorithms used to design a
machine that imitates the mammalian brain (Schuman et al., 2017; Kelsey, 2019).
CHAPTER 4
NEUROMORPHIC HARDWARE
Traditional von Neumann systems are multi-module systems consisting of three different
units: a processing unit, an I/O unit, and a storage unit. These modules communicate with
each other through various logical units in a sequential way, and they are very powerful in
bit-precise computing.
However, neural networks are data-centric; most of the computations are based on dataflow,
and the constant shuffling of data between the processing unit and the storage unit creates a
bottleneck. Since data needs to be processed in sequential order, this bottleneck causes
rigidity.
GPUs have massively parallel computing power compared to CPUs (Zheng & Mazumder,
2020); therefore, they quickly became the dominant chips for implementing neural networks.
Currently, data centres mainly use millions of interconnected GPUs to parallelise
processing, but this solution increases power consumption.
GPUs have expedited deep learning research, supported the development of algorithms, and
managed to enter the markets. However, future edge applications such as robotics or
autonomous cars will require more complex artificial networks working with real-time,
low-latency, and low-energy inference (Zheng & Mazumder, 2020).
ASICs are costly to design and are not reconfigurable because they are hard-wired, but this
hard-wired nature also contributes to their optimization. Through data-flow optimization,
they can perform better and more energy-efficiently than FPGAs. Therefore, FPGAs serve as
prototype chips in the design of costly deep learning ASICs (Zheng & Mazumder, 2020).
Deep learning accelerators are energy-efficient and effective for current data sizes. However,
they are still limited by the bottleneck of the architecture, i.e., the internal data link between
the processor and the global memory units (Kasabov, et al., 2016), as data sizes are growing
faster than Moore's Law predicts. This makes it difficult to build edge systems that can
process such data (Pelé, 2019). Novel approaches beyond the von Neumann architecture are
therefore needed to cope with the shuttling of data between memory and processor.
4.1 Advantages of Spiking Neural Networks
Learning in SNNs on neuromorphic chips can be handled both by native SNN algorithms
and by converting 2nd generation ANN algorithms into SNNs.
Native SNN algorithms are theoretically promising in efficiency and effectiveness; however,
practical issues with them remain. Huge efforts are being made to improve these algorithms
so that they can compete with 2nd generation ANN algorithms and eventually surpass them
in inference speed, accuracy, and efficiency in the area of artificial intelligence.
Another advantage of SNNs is the possibility of obtaining the benefits of 2nd generation
ANN algorithms by conversion: deep learning networks (2nd generation ANN models) are
mapped, either empirically or mathematically, onto SNN neurons. Therefore, successful deep
learning operations can be converted into SNNs without any training algorithm (Zheng &
Mazumder, 2020). Through this method, SNNs can reach the inference accuracy of cognitive
applications with low energy consumption (Figure 4) (Figure 5).
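As a rough sketch of why rate-based conversion works, an integrate-and-fire neuron driven by a constant input fires at a rate proportional to that input (and stays silent for negative input), approximating a ReLU activation. The parameters below are arbitrary illustrative values, not figures from the studies cited here:

```python
def if_firing_rate(input_current, threshold=1.0, steps=1000, dt=0.001):
    """Integrate-and-fire neuron under constant input: the membrane
    potential accumulates the input and is reduced by `threshold` each
    time it spikes. The resulting firing rate approximates
    max(0, input_current) / threshold, i.e. a scaled ReLU."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current * dt
        if v >= threshold:
            spikes += 1
            v -= threshold
    return spikes / (steps * dt)   # spikes per unit of simulated time

print(if_firing_rate(5.0))   # 5.0: the rate tracks the ReLU of the input
print(if_firing_rate(-2.0))  # 0.0: negative input never reaches threshold
```

This correspondence between activation values and firing rates is what lets a trained deep network be mapped onto spiking neurons without retraining.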
The Applied Brain Research group, owner of the brain simulator Nengo, published a paper in
2018 comparing the Intel neuromorphic chip Loihi Wolf Mountain with conventional CPUs
and GPUs (Blouw, et al., 2018). The methodology consisted of applying a 2nd generation
ANN on GPUs and CPUs, and converting it to an SNN to run on Loihi. According to their
results, for real-time inference (batch size = 1) the neuromorphic Intel chip consumes 100x
less energy than GPUs, the most common chips for implementing 2nd generation ANNs
(Figure 4). Moreover, compared to Movidius, Loihi preserves both inference speed and
energy consumption per inference as the number of neurons increases (Figure 5).
Neuromorphic chips can be designed in digital, analog, or mixed form. All these designs
have their pros and cons.
Analog chips resemble the biological properties of neural networks better than digital ones.
In the analog architecture, a few transistors are used to emulate the differential equations of
neurons; therefore, theoretically, they consume less energy than digital neuromorphic chips
(Furber, 2016) (Schemmel, et al., 2010). Besides, they can extend processing beyond its
allocated time slot; thanks to this feature, processing can be accelerated to run faster than
real-time. However, the analog architecture leads to higher noise, which lowers precision.
The analog nature of the architecture also causes signal leakage, which limits long-term
learning with STDP (Indiveri, 2002).
Digital chips, on the other hand, are more precise than analog chips. Their digital structure
enhances on-chip programming. This flexibility allows artificial intelligence researchers to
implement various kinds of algorithms accurately, with low energy consumption compared
to GPUs.
Mixed chips try to combine the advantages of analog chips, i.e. lower energy consumption,
with the advantages of digital ones, i.e. precision (Milde, et al., 2017).
Although analog chips are more biologically faithful and promising, digital neuromorphic
chips are in higher demand because they are easier to deploy in real-world applications. As
learning algorithms for SNNs and hardware technology improve, analog architectures could
eventually take the position of digital ones.
IBM, in collaboration with the DARPA SyNAPSE program, built the chip "TrueNorth".
TrueNorth is a digital chip produced to speed up research on SNNs and commercialize the
technology. It is not on-chip programmable, so it can be used only for inference (Liu, et al.,
2019). This is a disadvantage for on-chip training research and at the same time limits the
usage of the chip in critical applications (such as autonomous driving, which needs
continuous training). Efficient training - as mentioned before - is an advantage of
neuromorphic hardware that the TrueNorth unfortunately lacks.
One of IBM's objectives is to use the chip in cognitive applications such as robotics,
classification, action classification, audio processing, stereo vision, etc. The chip has proven
useful in terms of low energy consumption compared to GPUs (DeBole, et al., 2019).
However, the TrueNorth is not yet on sale for end-users; it can only be requested for
research purposes.
The chip is relatively old (5 years), and IBM presently seems not to be planning any new
chip design but rather to be scaling the existing one. IBM aims to invest in research focused
on learning algorithms for SNNs and to take real-world applications to the market. With this
goal, IBM is not only funding research (at IBM labs around the world) but also sponsoring
the main neuromorphic hardware workshops (the Neuro Inspired Computational Elements
Workshop (NICE) and the Telluride Workshop).
IBM also has agreements with US security organisations; in 2018 it entered a partnership
with the Air Force Research Laboratory.
There is a mutual relationship and interdependency between neuroscience and neuromorphic
hardware. As neuroscience researchers discover the brain's physical communication,
mapping, and learning mechanisms, these findings are designed into and implemented in
neuromorphic hardware. In turn, neuromorphic chips contribute to the effectiveness of
neuroscience by emulating brain models and allowing neuroscientists to run more complex
and efficient experiments.
Different visualizations of the contribution of neuromorphic chips, key people and actors,
integrated circuits in the market, and interconnections in the area have been included in the
annex (elaborated with the road-mapping software tool 'Sharpcloud') to facilitate an
overview of the area.
Within the Human Brain Project, the focus of the University of Manchester is the study of
brain neurons rather than cognitive applications. However, as the chip is flexible, it can
serve cognitive applications too; hence the Neurorobotics Platform of the HBP is taking
advantage of the SpiNNaker chip as hardware for robotic applications. The first-generation
SpiNNaker can be used in the cloud through the HBP Collaboratory portal, and the physical
boards can be sold for research purposes. Currently, SpiNNaker-1 is the world's largest
neuromorphic computing platform and will assist EBRAINS. It has around 100 external
users who can access the machine through the HBP Collaboratory, and there are around 100
SpiNNaker-1 systems in use by research groups around the world.
Even though the CMOS technology of the first SpiNNaker chip dates from the last decade, a
recent study by Steve Furber and his team reveals that it manages to simulate the Jülich
cortical microcircuit model in real-time and in an energy-efficient way. SpiNNaker-1
surpasses HPC (which runs 3 times slower than real-time) and GPU (which runs 2 times
slower than real-time) in terms of processing speed.
CHAPTER 5
APPLICATION AREAS
1) Medicine:
Neuromorphic devices are extremely effective at receiving and responding to data from their
environment. When coupled with organic materials, these devices become compatible with
the human body. Hence, neuromorphic systems could, now or in the future, be used to
improve drug delivery systems. The use of neuromorphic computing instead of traditional
devices could create a more realistic, seamless experience for those with prosthetics
(Tuchman et al., 2020). Neuromorphic devices that can emulate the bionic sensory and
perceptual functions of neural systems have great applications in personal healthcare
monitoring and neuro-prosthetics (Zeng, He, Zhang, & Wan, 2021).
2) Large-Scale Projects and Product Customization:
Large-scale projects and product customization could also benefit from neuromorphic
computing. It could be used to more easily process large sets of data from environmental
sensors. These sensors could measure water content, temperature, radiation, and other
parameters depending on the needs of the industry. The neuromorphic computing structure
could help recognize patterns in this data, making it easier to reach effective and efficient
conclusions. Neuromorphic devices could also be applied to product customization due to
the nature of their building materials, which can be transformed into easily manipulated
fluids. In liquid form, they can be processed through additive manufacturing to create
devices specifically fit for the user's needs (Tuchman et al., 2020).
3) Artificial Intelligence:
The way the brain's neurons receive, process, and send signals is extremely fast and
energy-efficient. As such, it is natural that professionals in technology, especially those in
the field of Artificial Intelligence (AI), would be intrigued by neuromorphic devices that
mimic the human nervous system. As the name suggests, researchers in the field of AI focus
on a particular element of the brain: intelligence (Lutkevich, 2020).
4) Cloud Computing (the Edge), Driverless Cars, and Smart Technology:
CHAPTER 6
WORKING PRINCIPLE
Neuromorphic systems use artificial neurons and synapses to replicate the behavior of
biological ones. These components communicate using spikes or events, similar to how
information is transmitted in the brain. This spike-based communication reduces power
consumption and enables real-time processing of dynamic data.
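The spike-based behaviour described above can be illustrated with a leaky integrate-and-fire (LIF) neuron, a model that most neuromorphic chips implement in some form; the time constant and threshold below are illustrative assumptions:

```python
def lif_step(v, i_in, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    The membrane potential leaks toward zero and integrates the input;
    crossing the threshold emits a spike and resets the potential."""
    v = v + dt * (-v / tau + i_in)
    if v >= v_thresh:
        return v_reset, True
    return v, False

# Drive the neuron with a constant input and record the spike train --
# between spikes the neuron is silent, so no data is transmitted.
v, spike_times = 0.0, []
for t in range(100):
    v, spiked = lif_step(v, i_in=0.1)
    if spiked:
        spike_times.append(t)
print(spike_times)
```

Because information travels only at spike times, communication and power costs scale with activity rather than with clock rate.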
2) Spike-Based Communication
Neuromorphic chips, such as IBM TrueNorth, Intel Loihi, and SpiNNaker, serve as the
backbone of these systems. These chips feature dense arrays of artificial neurons and
synapses capable of performing billions of operations per second while consuming minimal
power.
5) Event-Based Sensors
Dynamic vision sensors (DVS) and other event-based sensors play a crucial role in
neuromorphic systems. These sensors detect changes in the environment and generate spikes
only when significant events occur, ensuring efficient data handling and processing.
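A minimal sketch of this event-driven principle, assuming simple per-pixel intensity thresholding (real DVS pixels respond to log-intensity changes in analog circuitry):

```python
def dvs_events(prev_frame, frame, threshold=0.2):
    """Compare two intensity frames and emit an (x, y, polarity) event
    only for pixels whose change exceeds the threshold; unchanged
    pixels produce no output at all."""
    events = []
    for y, (row_prev, row_cur) in enumerate(zip(prev_frame, frame)):
        for x, (p, c) in enumerate(zip(row_prev, row_cur)):
            diff = c - p
            if abs(diff) >= threshold:
                events.append((x, y, +1 if diff > 0 else -1))
    return events

prev = [[0.0, 0.0], [0.0, 0.9]]
cur  = [[0.0, 0.5], [0.0, 0.1]]
print(dvs_events(prev, cur))   # [(1, 0, 1), (1, 1, -1)] -- only 2 of 4 pixels report
```

A static scene therefore produces no events at all, which is where the bandwidth and power savings over frame-based cameras come from.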
Once information is received from sensors or other input sources, the neuromorphic system
processes the data using its spiking neural network (SNN). The network performs complex
computations in parallel and makes decisions based on patterns and adaptive learning.
7) Real-Time Applications
The output of neuromorphic systems is used in real-time applications such as robotics,
brain-machine interfaces, autonomous vehicles, and low-power AI systems. The ability to
process information efficiently and adaptively makes these systems ideal for dynamic,
real-world scenarios.
CHAPTER 7
MARKET TRENDS OF AI CHIPS
AI is very popular today, and the AI chip market is receiving increasing interest and
attention. A list of companies can be found in tables 6 and 7 of annex 2. Many applications
have already been adopted by end-users, and numerous emerging applications are expected
in the short term. This rising demand will affect the plans of semiconductor companies.
According to McKinsey's report "Artificial-intelligence hardware: New opportunities for
semiconductor companies", the estimated CAGR (compound annual growth rate) of
AI-supported semiconductors will be 18-19% between 2017 and 2025, compared to 3-4%
for non-AI semiconductors. A report from TMT Analytics correlates with McKinsey's and
expects the market for AI-supported semiconductors to reach 66 billion dollars by 2025.
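As a quick arithmetic check of what such growth rates mean, compounding works out as follows (the rates are the report's cited estimates, used here only as inputs):

```python
def compound_growth(start_value, cagr, years):
    """Project a value forward under a constant compound annual growth rate."""
    return start_value * (1.0 + cagr) ** years

# An 18% CAGR over the 8 years from 2017 to 2025 multiplies the market ~3.8x,
# versus only ~1.3x at the 3-4% rate quoted for non-AI semiconductors.
print(round(compound_growth(1.0, 0.18, 8), 2))   # 3.76
print(round(compound_growth(1.0, 0.035, 8), 2))  # 1.32
```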
Currently available semiconductors for AI applications are the CPUs and the AI accelerators.
The AI accelerators are leading the market because of the computing limitations of CPUs.
Available AI accelerators are the GPUs, ASICs, and FPGAs, as mentioned in chapter 3.
Figure: AI-supported semiconductors market volume
FPGA research and development is mainly supported by Intel and Microsoft. However,
FPGAs' lower performance compared to ASICs or GPUs will limit their opportunities in a
market that will only demand efficient AI solutions; their expected market share will be
around 9 billion dollars by 2024 (Bloomberg Business, 2018). Inference is presently
dominated by traditional CPU datacenters (Figure 16). CPU dominance will gradually be
replaced by ASICs as the utilization of the latter becomes widespread. As the complexity of
tasks increases and datasets become larger, the inference cost of CPUs will be much higher.
ASICs can provide the solution to this dilemma through "increased parallelism, memory
transfer reductions, and workload reductions".
Edge computing represents the future of AI; however, the amount of data transactions is
increasing tremendously. Much of this is unnecessary bulky data, which could make data
centers inadequate in the near future. Moreover, latency and real-time processing are crucial
in some applications (health, space, robotics, etc.). Edge inference is essential to solving
these issues.
According to Chetan Sharma Consulting, the edge market size is expected to reach 4.1
trillion dollars by 2030, and half a trillion of this will be in the edge hardware market, which
also includes the chip sector (Figure 17) (Chetan Sharma Consulting, 2019).
By 2018 the AI edge chip market was less than 100 million dollars; however, the demand
will be huge. The top mobile chip producers using the ARM architecture, e.g. Apple,
Qualcomm, and Huawei, already offer edge inference and will certainly continue to invest.
McKinsey believes that the edge inference chip market will reach around 5 billion dollars by
2025 and might surpass the data-center inference market by 2030 (Batra, et al., 2018).
At present, the dominant processors in the edge market are CPUs. However, for large-scale,
real-time applications, CPUs will not be enough, and they will be replaced by ASICs by
2025 (Figure 18). On the other hand, edge training - even though it is a very important area -
is not efficient yet. There are some methodologies, such as federated learning, which boost
privacy and limit the data size; unfortunately, this solution does not yet address
latency-related issues.
Yole and TMT Analytics expect that the market size of neuromorphic chips can reach a
billion dollars by the mid-2020s, with growth of 51% between 2017 and 2023 (Yole
Development, 2019) (Kennis, 2019). If they can manage to get ahead and demonstrate their
potential under the pressure of the currently successful AI accelerators, neuromorphic chips
are expected to take a solid place in the market by the mid-2020s and possibly achieve
market domination by 2030.
CHAPTER 8
ADVANTAGES AND DISADVANTAGES
Advantages:
Energy Efficiency:
Real-Time Processing:
Adaptive Learning:
These systems can learn and adapt to new inputs using biologically inspired learning
mechanisms, such as spike-timing-dependent plasticity (STDP), which allows continuous
improvement without retraining.
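A pair-based STDP rule of the kind referred to above can be sketched as follows; the learning rates and time constant are illustrative assumptions, not values from any particular chip:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based spike-timing-dependent plasticity: strengthen the
    synapse when the pre-synaptic spike precedes the post-synaptic one,
    weaken it otherwise; the effect decays exponentially with the
    spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)    # causal pairing -> potentiation
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau)    # anti-causal pairing -> depression
    return max(0.0, min(1.0, w))             # clip weight to [0, 1]

print(stdp_update(0.5, t_pre=10.0, t_post=15.0) > 0.5)   # True: pre led post
print(stdp_update(0.5, t_pre=15.0, t_post=10.0) < 0.5)   # True: post led pre
```

Because each update depends only on local spike times, learning can run continuously on-chip without a separate retraining phase.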
Parallel Processing:
Neuromorphic systems process multiple data streams simultaneously, emulating the brain's
ability to handle vast amounts of information in parallel.
Scalability:
Neuromorphic hardware, such as memristors and spiking neural networks (SNNs), is
inherently scalable, allowing for the development of compact and efficient systems.
Low Latency:
Bio-Inspired Functionality:
Mimicking the neural processes of the brain allows neuromorphic systems to perform tasks
like pattern recognition, sensory integration, and decision-making more efficiently.
Application Diversity:
Disadvantages:
Complexity of Development:
Hardware Limitations:
Neuromorphic chips like Intel Loihi and IBM TrueNorth are still in development and face
challenges such as limited memory, scalability issues, and high production costs.
Current software tools for developing neuromorphic applications are not as mature or widely
supported as those for traditional computing systems.
Learning Constraints:
While neuromorphic systems excel at unsupervised and reinforcement learning, they may
struggle with tasks requiring complex supervised learning due to their spike-based
architecture.
Cost of Implementation:
The production of neuromorphic chips and sensors is expensive, limiting their accessibility
for smaller organizations and startups.
Hardware-Software Integration:
Neuromorphic engineering is still in its nascent stages, meaning many of its potential
applications and benefits are theoretical and yet to be fully realized.
CHAPTER 9
CONCLUSION
In our emerging and dynamic AI-based society, research and development on AI is to a large
extent focused on the improvement and utilisation of deep neural networks and AI
accelerators. However, there is a limit in the architecture of traditional von Neumann
systems, and the exponential increase in data size and processing requires more innovative
and powerful solutions. Spiking neural networks and neuromorphic computing, which are
well-developed and known areas among neuroscientists and neuro-computing researchers,
are part of a trend of very recent and novel technologies that already contribute to enabling
the exploration and simulation of the learning structures of the human brain.
This report has explained the evolution of artificial neural networks, the emergence of SNNs,
and their impact on the development of neuromorphic chips. The limitations of traditional
chips have been discussed, along with the eventual influence of neuromorphic chips on
demanding AI applications. The main players in the area have been identified and related to
current and future applications. The study has also described the market advantages of
neuromorphic chips compared with other AI semiconductors. Neuromorphic chips are
compatible with event-based sensor applications and emerging technologies such as
photonics, graphene, and non-volatile memories. They have huge potential in the
development of AI and could well become a dominant technology in the next decade.
Hopefully, this report has served to shed some light on the complexity of this challenging
computing area. While staying loyal to our objective of offering a practical description of the
most recent advances, we have also tried to be instructive enough to increase the interest in
and visibility of the topic for a non-specialised audience. For other readers, the study may
represent a promising and challenging step towards a more profound understanding of the
area that could eventually support the creation of roadmaps, the exploration of new
industrial applications, or the analysis of synergies between these novel chips and other
related emerging trends.
APPENDIX: SEMINAR PRESENTATION SLIDES