SEMINAR REPORT ON NEUROMORPHIC ENGINEERING


Seminar Report

on

NEUROMORPHIC ENGINEERING
Submitted in partial fulfilment of the requirements for the award of the degree of

Bachelor of Technology

in

Computer Science and Engineering

by
K.SAI TEJA - 21J21A0538

Under the Supervision

of

Mr. P. Naveen Kumar, B.Tech., M.Tech.,


Assistant Professor

Department of Computer Science and Engineering

JOGINPALLY B.R. ENGINEERING COLLEGE


Accredited by NAAC with A+ Grade, Recognized under Sec. 2(f) of UGC Act. 1956
Approved by AICTE, Affiliated to JNTUH, Hyderabad and ISO 9001:2015 Certified
Bhaskar Nagar, Yenkapally, Moinabad (Mandal)
R.R (Dist)-500075. T.S., India

CERTIFICATE

The seminar entitled “NEUROMORPHIC ENGINEERING”, submitted by K.SAI TEJA – 21J21A0538 in partial fulfilment of the requirements for the award of the degree of Bachelor of Technology in Computer Science and Engineering to Jawaharlal Nehru Technological University Hyderabad, is a record of bonafide work carried out under my supervision. In my opinion, this report is of the standard required for the degree of Bachelor of Technology.

SEMINAR SUPERVISOR HEAD OF THE DEPARTMENT

Mr. P. Naveen Kumar, B.Tech., M.Tech. Dr. T. Prabakaran, B.E., M.E., Ph.D.


Assistant Professor Professor
DECLARATION OF THE STUDENT

I hereby declare that the Technical Seminar entitled “NEUROMORPHIC ENGINEERING”, presented under the supervision of Mr. P. Naveen Kumar, B.Tech., M.Tech., Assistant Professor, and submitted to Joginpally B.R. Engineering College, is original and has not been submitted in part or in whole for a Bachelor's degree to any other university.

K.SAI TEJA – 21J21A0538


TABLE OF CONTENTS

CHAPTER CONTENTS PAGE No.

1 INTRODUCTION 1
1.1 Objectives
1.2 Importance of Neuromorphic Engineering
2 ARTIFICIAL NEURAL NETWORKS 3
3 LITERATURE REVIEW 5
3.1 Historical Viewpoint of Neuromorphic Computing
3.2 Neuromorphic Hardware & Software
4 NEUROMORPHIC HARDWARE 7
4.1 Advantages of Spiking Neural Networks
4.2 Neuromorphic Hardware implementations
4.3 Neuromorphic Hardware Leaders
4.4 Neuromorphic Hardware Chips
5 APPLICATION AREAS 12
6 WORKING PRINCIPLE 14
7 MARKET TRENDS OF AI CHIPS 16
7.1 Cloud / Datacenter
7.2 Edge Computing
7.3 Future Market of Neuromorphic Chips
8 ADVANTAGES AND DISADVANTAGES 20
9 CONCLUSION 23

APPENDIX SEMINAR PRESENTATION SLIDES 20


CHAPTER 1
INTRODUCTION

Neuromorphic Engineering is an emerging interdisciplinary field that mimics the structure and functioning of biological neural networks to revolutionize computing systems. This innovative approach is inspired by the human brain's remarkable efficiency, adaptability, and ability to process complex data in real time. By integrating concepts from artificial intelligence, neuroscience, and materials science, Neuromorphic Engineering seeks to develop computing systems that are not only faster and more efficient but also capable of learning and adapting to dynamic environments.

The core of neuromorphic systems lies in technologies such as neuromorphic chips (e.g., IBM TrueNorth, Intel Loihi, SpiNNaker), event-based sensors (such as dynamic vision sensors), and memristors, which emulate synaptic behavior. These systems promise breakthroughs in computational efficiency, low power consumption, and real-time data processing, making them ideal for applications in robotics, healthcare, autonomous systems, and artificial intelligence.

1.1 Objectives
The primary objectives of this project include:

1. Understanding Neuromorphic Systems:

• Study the architecture and principles of neuromorphic computing.

• Explore how these systems emulate biological neural processes.

2. Exploration of Core Technologies:

• Investigate the use of neuromorphic chips, event-based sensors, and memristors.

• Evaluate their role in enhancing computational efficiency and adaptability.

3. Applications in Real-World Scenarios:

• Analyze current implementations of neuromorphic systems in industries such as healthcare, robotics, and smart sensors.

• Discuss potential future use cases, including brain-machine interfaces and low-power AI solutions.

4. Advancements and Challenges:

• Identify the technical advancements driving the field forward.

• Highlight challenges such as scalability, integration, and hardware limitations.

5. Future Implications:

• Assess the transformative potential of neuromorphic engineering in shaping next-generation intelligent systems.

1.2 Importance of Neuromorphic Engineering


Neuromorphic Engineering has the potential to redefine the landscape of intelligent computing by overcoming the limitations of traditional von Neumann architectures. Its ability to process information in parallel, adapt to new data, and operate with low energy consumption makes it critical for solving modern-day challenges in automation, artificial intelligence, and real-time decision-making. Furthermore, as the world moves toward edge computing and energy-efficient solutions, neuromorphic systems stand out as a sustainable and scalable solution.

By exploring this transformative technology, this project seeks to provide a comprehensive understanding of the principles, applications, and future direction of Neuromorphic Engineering, offering valuable insights into its role in the evolution of computing and artificial intelligence.

CHAPTER 2
ARTIFICIAL NEURAL NETWORKS

An Artificial Neural Network (ANN) is a collection of interconnected nodes inspired by the biological human brain. The objective of an ANN is to perform cognitive functions such as problem-solving and machine learning. Mathematical models of ANNs date back to the 1940s; however, the field stayed dormant for a long time (Maass, 1997). ANNs became very popular after the success of ImageNet in 2009. The reason behind this is the developments in ANN models and in the hardware systems that can handle and implement these models. ANNs can be separated into three generations based on their computational units and performance (Figure 1).

Figure 1: Generations of Artificial Neural Networks (1st generation: “Perceptrons”; 2nd generation: “Deep Neural Networks”; 3rd generation: “Spiking Neural Networks”)

The first generation of ANNs started in 1943 with the work of McCulloch and Pitts. Their work was based on a computational model for neural networks in which each neuron is called a “perceptron”. Their model was later improved with extra hidden layers (the Multi-Layer Perceptron) for better accuracy, called MADALINE, by Widrow and his students in the 1960s. However, first-generation ANNs were far from biological models and gave only digital outputs; essentially, they were decision trees based on if-else conditions.

The second generation of ANNs built on the previous generation by applying activation functions within the decision structures of the first-generation models. These functions operate between the visible and hidden layers of perceptrons and create the structure called “deep neural networks”.
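To make the distinction between the two generations concrete, the following minimal Python sketch (illustrative only; the weights are made up, not learned) contrasts the hard-threshold output of a first-generation perceptron with the continuous output of a second-generation sigmoid unit:

    import math

    def perceptron(inputs, weights, bias):
        """1st generation: hard threshold, binary ("digital") output."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if total > 0 else 0

    def sigmoid_unit(inputs, weights, bias):
        """2nd generation: smooth activation, continuous (graded) output."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    # Example inputs and illustrative (not trained) weights
    x = [0.5, 0.8]
    w = [0.7, -0.4]
    b = 0.1

    print(perceptron(x, w, b))    # 1 or 0 only
    print(sigmoid_unit(x, w, b))  # e.g. ~0.53, a graded value

The step function can only answer yes or no, whereas the smooth activation produces graded values that can be trained with gradient-based methods, which is what makes deep networks possible.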

The advantages of SNNs over ANNs are (Kasabov, 2019):

• Efficient modeling of temporal (spatio-temporal or spectro-temporal) data
• Efficient modeling of processes that involve different time scales
• Bridging higher-level functions and “lower”-level genetics
• Integration of modalities, such as sound and vision, in one system
• Predictive modeling and event prediction
• Fast and massively parallel information processing
• Compact information processing

Table 2: ANN-SNN Comparison Table (Tsinghua University, 2018)

CHAPTER 3

LITERATURE REVIEW

3.1 Historical Viewpoint of Neuromorphic Computing

In recent years, neuromorphic computing has evolved as a complementary architecture to von Neumann computers. Mead (1990) coined the phrase “neuromorphic computing”. Turing, widely known for his pioneering work in computation and the key publication 'On Computable Numbers', made a formal argument in 1936 that a machine could be built to perform any imaginable mathematical computation if it could be represented as an algorithm. Turing's work quickly evolved into the modern computing industry. Few people realize that, in addition to developing the digital computer, Turing foresaw connectionism and neuron-like computing. In his article 'Intelligent Machinery' (Turing, 1936 & 1948), written in 1948 but not published until well after his death, in 1968, Turing described a machine consisting of artificial neurons connected in any pattern with modifier devices. Modifier devices could be used to pass or delete a signal, and the neurons were made up of NAND gates, which Turing chose because they can be used to generate any other logic function (James, 2012; Hebb, 2002). The first person to use the term “neural plasticity” appears to have been the Polish neuroscientist Jerzy Konorski. Ever since the concept of brain plasticity emerged, several research efforts have been carried out, and many more are in the pipeline, towards mimicking the human brain by computer (Nugent, Kenyon, & Porter, 2004; Nugent, 2008; Yang, Pickett, Li, Ohlberg, & Stewart, 2008; Snider, 2008; Jackson, Rajendran, Corrado, Breitwisch, & Burr, 2013; Snider, 2011).

3.2 Neuromorphic Hardware & Software

Neuromorphic systems are outputs of neuromorphic engineering; these systems usually exhibit the following key features: two basic components, namely neurons and synapses; colocated memory and computation; simple communication between components; and learning in the components. Other features are nonlinear dynamics, high fan-in/fan-out components, spiking behavior, the ability to adapt and learn through plasticity of parameters, events, and structure, robustness, and the ability to handle noisy or incomplete input. Biological neural networks are used as inspiration for integrated circuit design in neuromorphic devices. The goal is to achieve equivalent learning capacity while being adaptive and fault-tolerant, and allowing processing and memory/storage to occur within the same network. Neuromorphic hardware, also known as neuromorphic architecture, is illustrated in Figure 1, with the goal of creating electronic devices that replicate neural mechanisms in hardware and can execute complex computations. Neuromorphic software refers to algorithms that are used to design a machine to imitate the mammalian brain (Schuman et al., 2017; Kelsey, 2019).

CHAPTER 4

NEUROMORPHIC HARDWARE
Traditional von Neumann systems are multi-module systems consisting of three different units: a processing unit, an I/O unit, and a storage unit. These modules communicate with each other through various logical units in a sequential way. They are very powerful in bit-precise computing.

However, neural networks are data-centric; most of the computations are based on dataflow, and the constant shuffling of data between the processing unit and the storage unit creates a bottleneck. Since data needs to be processed in sequential order, this bottleneck causes rigidity.

GPUs have massively parallel computing power compared to CPUs (Zheng & Mazumder, 2020). Therefore, they quickly became the dominant chips for implementing neural networks. Currently, data centres mainly use millions of interconnected GPUs to provide parallelism, but this solution causes increased power consumption.

GPUs have expedited deep learning research, supported the development of algorithms, and managed to enter the markets. However, future edge applications such as robotics or autonomous cars will require more complex artificial networks capable of real-time, low-latency, and low-energy-consumption inference (Zheng & Mazumder, 2020).

ASICs are costly to design and are not reconfigurable because they are hard-wired, but this hard-wired nature also contributes to their optimization. Through data-flow optimization, they can perform better and more energy-efficiently than FPGAs. FPGAs therefore serve as prototype chips for designing the more costly deep learning ASICs (Zheng & Mazumder, 2020).

Deep learning accelerators are energy-efficient and effective for current data sizes. However, they are still limited by the bottleneck of the architecture, i.e. the internal data link between the processor and the global memory units (Kasabov, et al., 2016), as the data load is increasing faster than predicted by Moore's Law. It would be difficult to build an edge system able to process these data (Pelé, 2019). Novel approaches beyond the von Neumann architecture are therefore needed to cope with the shuttling issue between memory and processor.

4.1 Advantages of Spiking Neural Networks

Learning in SNNs on neuromorphic chips can be handled both by native SNN algorithms and by converting 2nd-generation ANN algorithms into SNNs.

Native SNN algorithms are theoretically promising in efficiency and effectiveness; however, practical issues with them remain. Huge efforts are being made to improve these algorithms so that they can compete with 2nd-generation ANN algorithms and eventually surpass them in inference speed, accuracy, and efficiency in the area of artificial intelligence.

Another advantage of SNNs is the possibility of obtaining the benefits of 2nd-generation ANN algorithms by conversion: deep learning networks (2nd-generation ANN models) are mapped, either empirically or mathematically, onto SNN neurons. In this way, successful deep learning models can be converted into SNNs without any training algorithm (Zheng & Mazumder, 2020). Through this method, SNNs can reach the inference accuracy of cognitive applications with low energy consumption (Figures 4 and 5).
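The following Python sketch illustrates the rate-coding intuition behind such conversions (a simplified toy example, not the actual procedure described by Zheng & Mazumder): a bounded ReLU activation from a trained ANN is reproduced as the firing rate of a spike train, so the analog value survives the mapping to spikes.

    import random

    def relu(x):
        return max(0.0, x)

    def spike_rate_approximation(activation, timesteps=1000, max_rate=1.0):
        """Approximate a bounded ReLU activation by a stochastic spike train:
        the fraction of timesteps carrying a spike converges to the activation."""
        p = min(max(activation, 0.0), max_rate)  # clip to a valid firing probability
        spikes = sum(1 for _ in range(timesteps) if random.random() < p)
        return spikes / timesteps

    a = relu(0.42)                      # analog activation from a trained ANN
    print(a)                            # 0.42
    print(spike_rate_approximation(a))  # ~0.42, now encoded as a spike rate

Longer time windows give a more faithful approximation at the cost of inference latency, which is one of the practical trade-offs of the conversion approach.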

The Applied Brain Research group, the owner of the brain simulator Nengo, published a paper in 2018 comparing the Intel neuromorphic chip Loihi (Wolf Mountain) with conventional CPUs and GPUs (Blouw, et al., 2018). The methodology consisted of running the 2nd-generation ANN on GPUs and CPUs, and converting it to an SNN to run on Loihi. According to their results, for real-time inference (batch size = 1), the neuromorphic Intel chip consumes 100x less energy than GPUs, which are the most common chips for implementing 2nd-generation ANNs (Figure 4). Moreover, compared to Movidius, Loihi preserves inference speed and energy consumption per inference as the number of neurons increases (Figure 5).

4.2 Neuromorphic Hardware Implementations

Neuromorphic chips can have digital, analog, or mixed-signal designs. All these designs have their pros and cons.

Analog chips resemble the biological properties of neural networks better than digital ones. In an analog architecture, a few transistors are used to emulate the differential equations of neurons. Therefore, theoretically, they consume less energy than digital neuromorphic chips (Furber, 2016; Schemmel, et al., 2010). Besides, they can extend processing beyond the allocated time slot; thanks to this feature, processing can be accelerated to run faster than real time. However, the analog architecture leads to higher noise, which lowers precision. Also, the analog nature of the architecture causes signal leakage, which limits long-term learning with STDP (Indiveri, 2002).

Digital chips, on the other hand, are more precise than analog chips. Their digital structure enables on-chip programming. This flexibility allows artificial intelligence researchers to implement various kinds of algorithms accurately, with low energy consumption compared to GPUs.

Mixed-signal chips try to combine the advantage of analog chips, i.e. lower energy consumption, with the advantage of digital ones, i.e. precision (Milde, et al., 2017).

Although analog chips are more biologically faithful and promising, digital neuromorphic chips are in higher demand because they are easier to deploy in real-world applications. As the learning algorithms for SNNs and the hardware technology improve, analog architectures could eventually take the position of digital ones.

4.3 Neuromorphic Hardware Leaders

IBM, in collaboration with the DARPA SyNAPSE program, built the chip “TrueNorth”. TrueNorth is a digital chip produced to speed up research on SNNs and to commercialize them. It is not on-chip programmable, so it can be used only for inference (Liu, et al., 2019). This is a disadvantage for on-chip training research and at the same time limits the usage of the chip in critical applications (such as autonomous driving, which needs continuous training). Efficient training, as mentioned before, is an advantage of neuromorphic hardware that unfortunately TrueNorth does not offer.

One of IBM's objectives is to use the chip in cognitive applications such as robotics, classification, action classification, audio processing, stereo vision, etc. The chip has proven its usefulness in terms of low energy consumption compared to GPUs (DeBole, et al., 2019). However, TrueNorth is not yet on sale for end-users; it can only be requested for research purposes.

The chip is relatively old (five years), and IBM presently seems not to be planning any new chip design, but rather to be scaling the existing one. IBM aims to invest in research focused on learning algorithms for SNNs and to take real-world applications to the market. With this goal, IBM is not only funding research (IBM labs around the world) but also sponsoring the main neuromorphic hardware workshops (the Neuro-Inspired Computational Elements (NICE) Workshop and the Telluride Workshop). IBM also has agreements with US security organizations, including a 2018 partnership with the Air Force Research Laboratory.

4.4 Neuromorphic Hardware Research Chips

There are mutual relationships and interdependencies between neuroscience and neuromorphic hardware. As neuroscience researchers discover the brain's physical communication, mapping, and learning mechanisms, these findings are designed into and implemented in neuromorphic hardware. In turn, neuromorphic chips contribute to the effectiveness of neuroscience by emulating brain models and allowing neuroscientists to carry out more complex and efficient experiments.

Different visualizations of the contribution of neuromorphic chips, key people and actors, integrated circuits in the market, and interconnections in the area have been included in the annex (elaborated with the road-mapping software tool 'Sharpcloud') to facilitate an overview of the area.

The University of Manchester - SpiNNaker

SpiNNaker is digital, on-chip programmable hardware designed by the University of Manchester, under the supervision of Steve Furber (Figure 24) (Furber, et al., 2013). SpiNNaker was the first on-chip programmable digital chip, so a large variety of research has been conducted around it.

Within the Human Brain Project (HBP), the focus of the University of Manchester is the study of brain neurons rather than cognitive applications. However, since the chip is flexible, it can serve as a chip for cognitive applications too. The Neurorobotics Platform of the HBP is therefore taking advantage of the SpiNNaker chip as hardware for robotic applications. The first-generation SpiNNaker can be used in the cloud through the HBP collaboration portal, and the physical boards can be sold for research purposes. Currently, SpiNNaker-1 is the world's largest neuromorphic computing platform and will assist EBRAINS. It has around 100 external users who can access the machine through the HBP Collaboratory, and there are around 100 SpiNNaker-1 systems in use by research groups around the world.

Even though the CMOS technology of the first SpiNNaker chip dates from the last decade, a recent study by Steve Furber and his team reveals that it manages to simulate the Jülich cortical microcircuit in real time and in an energy-efficient way. SpiNNaker-1 surpasses HPC (which runs 3 times slower than real time) and GPU (which runs 2 times slower than real time) implementations in terms of processing speed and energy efficiency.

CHAPTER 5

APPLICATION AREAS

A review of the literature makes it clear that the application areas of neuromorphic systems are countless, because all aspects of life involve the human brain; hence, there are diverse needs for neuromorphic engineering. We limit our areas of application to the following:

1) Medicine:

Neuromorphic devices are extremely effective at receiving and responding to data from their environment. When coupled with organic materials, these devices become compatible with the human body. Hence, neuromorphic systems, now or in the future, could be used to improve drug delivery systems. The use of neuromorphic computing instead of traditional devices could create a more realistic, seamless experience for those with prosthetics (Tuchman et al., 2020). Neuromorphic devices that can emulate the bionic sensory and perceptual functions of neural systems have great applications in personal healthcare monitoring and neuro-prosthetics (Zeng, He, Zhang, & Wan, 2021).

2) Large Scale Operations and Product Customization:

Large-scale projects and product customization could also benefit from neuromorphic computing. It could be used to more easily process large sets of data from environmental sensors. These sensors could measure water content, temperature, radiation, and other parameters, depending on the needs of the industry. The neuromorphic computing structure could help recognize patterns in this data, making it easier to reach effective and efficient conclusions. Neuromorphic devices could also be applied to product customization due to the nature of their building materials. These materials can be transformed into easily manipulated fluids. In liquid form, they can be processed through additive manufacturing to create devices specifically fit for the user's needs (Tuchman et al., 2020).

3) Artificial Intelligence:

The way the brain's neurons receive, process, and send signals is extremely fast and energy-efficient. As such, it is natural that professionals in technology, especially those in the field of Artificial Intelligence (AI), would be intrigued by neuromorphic devices that mimic the human nervous system. As the name suggests, researchers in the field of AI focus on a particular element of the brain: intelligence (Lutkevich, 2020).
4) Cloud Computing (The edge), Driverless Car, and Smart Technology:

Neuromorphic systems have low energy consumption. Hence, neuromorphic computing is suitable for “the edge”. “The edge” refers to the outskirts of a network that allow devices to be connected to a cloud platform. Since driverless cars must operate in this space, neuromorphic computing could help them respond more effectively and efficiently to their surroundings; when a vehicle is not connected to a stable internet source, a system employing neuromorphic computing could take over. This could make driverless cars safer and more suitable for varying environments. The advanced sensing capabilities of neuromorphic computing could also improve existing “smart technology” (Jayaraman, 2020).

CHAPTER 6

WORKING PRINCIPLE

The working principle of Neuromorphic Engineering is based on the emulation of biological neural networks to design computing systems that mimic the structure and functionality of the human brain. Unlike traditional von Neumann architectures, which separate memory and computation, neuromorphic systems integrate these processes into a unified framework, allowing for parallel, adaptive, and energy-efficient computation.

At its core, the system operates as follows:

1) Neuron and Synapse Emulation

Neuromorphic systems use artificial neurons and synapses to replicate the behavior of biological ones. These components communicate using spikes or events, similar to how information is transmitted in the brain. This spike-based communication reduces power consumption and enables real-time processing of dynamic data.
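As an illustration, the leaky integrate-and-fire (LIF) model is one widely used description of an artificial spiking neuron. The minimal Python sketch below (with arbitrary illustrative parameters, not those of any particular chip) shows the basic cycle of integration, leak, threshold crossing, and reset:

    def simulate_lif(currents, v_thresh=1.0, leak=0.9, v_reset=0.0):
        """Leaky integrate-and-fire: the membrane potential integrates input,
        decays by a leak factor each step, and emits a spike at threshold."""
        v = v_reset
        spikes = []
        for i in currents:
            v = leak * v + i          # leaky integration of input current
            if v >= v_thresh:         # threshold crossing -> emit a spike
                spikes.append(1)
                v = v_reset           # reset membrane potential after spiking
            else:
                spikes.append(0)
        return spikes

    # A constant weak input: the neuron integrates, spikes, resets, repeats
    print(simulate_lif([0.3] * 20))   # e.g. [0, 0, 0, 1, 0, 0, 0, 1, ...]

Note that the neuron produces output only at threshold crossings; between spikes it transmits nothing, which is the source of the power savings described above.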

2) Spike-Based Communication

Information in neuromorphic systems is processed using event-driven architectures, where data is transmitted only when necessary (spike-based communication). This differs from traditional clock-driven systems, leading to energy efficiency and faster processing.

3) Adaptive Learning Mechanisms

Neuromorphic systems incorporate learning algorithms such as Hebbian learning, spike-timing-dependent plasticity (STDP), and other biologically inspired mechanisms. These allow the system to adapt to changes in input data and improve performance over time, similar to how the brain learns from experience.
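A minimal sketch of the STDP rule follows (the constants are illustrative, not taken from any cited source): a synapse is strengthened when the presynaptic spike precedes the postsynaptic one and weakened otherwise, with an exponential dependence on the timing difference.

    import math

    def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
        """Spike-timing-dependent plasticity: the weight change depends on
        the relative timing (in ms) of pre- and postsynaptic spikes."""
        dt = t_post - t_pre
        if dt > 0:    # pre fires before post -> potentiation (strengthen)
            w += a_plus * math.exp(-dt / tau)
        elif dt < 0:  # pre fires after post -> depression (weaken)
            w -= a_minus * math.exp(dt / tau)
        return w

    w = 0.5
    print(stdp_update(w, t_pre=10.0, t_post=15.0))  # > 0.5 (potentiated)
    print(stdp_update(w, t_pre=15.0, t_post=10.0))  # < 0.5 (depressed)

Because the update uses only locally available spike times, it can run continuously on-chip, which is what enables the adaptation without retraining mentioned above.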

4) Integration of Neuromorphic Hardware

Neuromorphic chips, such as IBM TrueNorth, Intel Loihi, and SpiNNaker, serve as the backbone of these systems. These chips feature dense arrays of artificial neurons and synapses capable of performing billions of operations per second while consuming minimal power.

5) Event-Based Sensors

Dynamic vision sensors (DVS) and other event-based sensors play a crucial role in neuromorphic systems. These sensors detect changes in the environment and generate spikes only when significant events occur, ensuring efficient data handling and processing.
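The event-generation principle can be sketched as follows (real DVS pixels operate asynchronously in analog hardware; this frame-based Python version is only a didactic approximation): a pixel emits an ON or OFF event when its log-intensity change since its last event exceeds a threshold.

    import math

    def dvs_events(prev_log, frame, threshold=0.2):
        """Emit (x, y, polarity) events for pixels whose log-intensity change
        exceeds the threshold; unchanged pixels produce no data at all."""
        events = []
        for y, row in enumerate(frame):
            for x, intensity in enumerate(row):
                log_i = math.log(intensity + 1e-6)
                diff = log_i - prev_log[y][x]
                if abs(diff) >= threshold:
                    events.append((x, y, +1 if diff > 0 else -1))
                    prev_log[y][x] = log_i  # reference updates only on an event
        return events

    # Two tiny 2x2 frames: only the brightened pixel generates an event
    prev = [[math.log(0.5), math.log(0.5)], [math.log(0.5), math.log(0.5)]]
    frame = [[0.5, 0.9], [0.5, 0.5]]
    print(dvs_events(prev, frame))  # [(1, 0, 1)]

A static scene therefore produces no output at all, which is why event-based sensors pair so naturally with the spike-driven processing described in this chapter.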

6) Data Processing and Decision Making

Once information is received from sensors or other input sources, the neuromorphic system
processes the data using its spiking neural network (SNN). The network performs complex
computations in parallel and makes decisions based on patterns and adaptive learning.

7) Real-Time Applications

The output of neuromorphic systems is used in real-time applications such as robotics, brain-machine interfaces, autonomous vehicles, and low-power AI systems. The ability to process information efficiently and adaptively makes these systems ideal for dynamic, real-world scenarios.

CHAPTER 7

MARKET TRENDS OF AI CHIPS

AI is very popular today, and the chip market is receiving increasing interest and attention. A list of companies can be found in tables 6 and 7 of annex 2. Many applications have already been adopted by end-users, and numerous emerging applications are expected in the short term. This rising demand will affect the plans of semiconductor companies. According to McKinsey's report “Artificial-intelligence hardware: New opportunities for semiconductor companies”, the estimated CAGR (compound annual growth rate) of AI-supported semiconductors will be 18-19% between 2017 and 2025, compared to 3-4% for non-AI semiconductors. A report from TMT Analytics correlates with McKinsey's and expects the market for AI-supported semiconductors to reach 66 billion dollars by 2025.
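To put such growth rates in perspective, the short calculation below (an illustrative computation, not a figure taken from the cited reports) shows that an 18% CAGR multiplies a market roughly 3.8x between 2017 and 2025, versus about 1.4x at 4%:

    def compound_growth(start_value, cagr, years):
        """Project a market size forward at a fixed compound annual growth rate."""
        return start_value * (1 + cagr) ** years

    # Illustrative: a market indexed to 1.0 in 2017
    print(compound_growth(1.0, 0.18, 2025 - 2017))  # ~3.76x for AI chips at 18%
    print(compound_growth(1.0, 0.04, 2025 - 2017))  # ~1.37x for non-AI chips at 4%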

The currently available semiconductors for AI applications are CPUs and AI accelerators. AI accelerators are leading the market because of the computing limitations of CPUs. The available AI accelerators are GPUs, ASICs, and FPGAs, as mentioned in chapter 3.

Figure: Market of AI and non-AI semiconductors between 2017 and 2025

Figure: AI-supported semiconductor market volume

7.1 Cloud / Datacenter


GPUs currently dominate the cloud market for training services. Nvidia is the main actor in the field, and this trend will likely continue for a couple of years. On the other hand, ASICs are emerging in the market, and Google is already using its specialized ASIC system, “Tensor Processing Units”, in its data centers. ASICs are expected to reach a 28-billion-dollar chip market by 2026, which will be almost half of the total AI chip market (Bloomberg Business, 2019).

FPGA research and development is mainly supported by Intel and Microsoft. However, FPGAs' lower performance compared to ASICs or GPUs will limit their opportunities in a market that will only demand efficient AI solutions. Their expected market share will be around 9 billion dollars by 2024 (Bloomberg Business, 2018). Inference is presently dominated by traditional CPU datacenters (Figure 16). CPU dominance will gradually be replaced by ASICs as soon as the utilization of the latter becomes widespread. As the complexity of tasks increases and datasets become larger, the inference cost of CPUs will be much higher. ASICs can provide the solution to this dilemma through “increased parallelism, memory transfer reductions, and workload reductions”.

7.2 Edge Computing

Edge computing represents the future of AI; however, the amount of data transactions is increasing tremendously. Much of this data is just unnecessary bulk data, which could make data centers inadequate in the near future. Moreover, latency and real-time processing are crucial in some applications (health, space, robotics, etc.). Edge inference is inevitable to solve these issues.

According to Chetan Sharma Consulting, the edge market size is expected to reach 4.1 trillion dollars by 2030, and half a trillion of this will be located in the edge hardware market, which also includes the chip sector (Figure 17) (Chetan Sharma Consulting, 2019).

By 2018 the AI edge chip market was worth less than 100 million dollars; however, the demand will be huge. The top mobile chip producers, e.g. those building on the ARM architecture such as Apple, Qualcomm, and Huawei, have already implemented edge inference and will certainly continue to invest. McKinsey believes that the edge inference chip market will reach around 5 billion dollars by 2025 and might surpass the data-center inference market by 2030 (Batra, et al., 2018).

At present, the dominant processors in the edge market are CPUs. However, for large-scale, real-time applications, CPUs will not be enough, and they will be replaced by ASICs by 2025 (Figure 18). On the other hand, edge training, even though it is a very important area, is not yet efficient. There are some methodologies, such as federated learning, which boost privacy and limit the data size. Unfortunately, this solution does not yet cover the latency-related issues.

7.3 Future Market of Neuromorphic Chips

Compared to AI accelerators, neuromorphic chips seem to be the best option in relation to “parallelism”, “energy efficiency”, and “performance”. They can handle both AI inference and training in real time. Moreover, edge training is possible with neuromorphic chips (Kendall & Kumar, 2020). However, the learning methodologies still need to improve in accuracy. In addition, there are not yet market-ready neuromorphic chips able to spread widely and cope with the potential user base. The start-ups mentioned in chapter 3.5 are expected to release their chips to the market in 2020. The success of aiCTX and BrainChip can be determinant for the future of neuromorphic computing. The hybrid research chip “Tianjic” has also been tested on real-world applications, and it could be a good sample for the transition period.

Yole and TMT Analytics expect that the market size of neuromorphic chips can reach a billion dollars by the mid-2020s, with a growth of 51% between 2017 and 2023 (Yole Development, 2019; Kennis, 2019). If they can manage to get ahead and demonstrate their potential under the pressure of the currently successful AI accelerators, neuromorphic chips are expected to take a solid place in the market by the mid-2020s and possibly achieve market domination by 2030.

CHAPTER 8

ADVANTAGES AND DISADVANTAGES

Advantages:

Energy Efficiency:

Neuromorphic systems consume significantly less power compared to traditional computing architectures, making them ideal for energy-constrained applications such as edge devices and IoT.

Real-Time Processing:

Event-driven architectures enable these systems to process information in real time, making them suitable for applications like robotics, autonomous vehicles, and healthcare devices.

Adaptive Learning:

These systems can learn and adapt to new inputs using biologically inspired learning mechanisms, such as spike-timing-dependent plasticity (STDP), which allows continuous improvement without retraining.

Parallel Processing:

Neuromorphic systems process multiple data streams simultaneously, emulating the brain's ability to handle vast amounts of information in parallel.

Scalability:

Neuromorphic hardware, such as memristors and spiking neural networks (SNNs), is inherently scalable, allowing for the development of compact and efficient systems.

Low Latency:

The spike-based communication mechanism reduces delays in decision-making, critical for time-sensitive applications like autonomous navigation and real-time AI systems.

Bio-Inspired Functionality:

Mimicking the neural processes of the brain allows neuromorphic systems to perform tasks like pattern recognition, sensory integration, and decision-making more efficiently.

Application Diversity:

Neuromorphic engineering is already showing promise in fields such as robotics, prosthetics, smart sensors, and adaptive AI for IoT.

Disadvantages:

Complexity of Development:

Designing and programming neuromorphic systems require expertise in neuroscience, machine learning, and hardware engineering, posing a steep learning curve for developers.

Hardware Limitations:

Neuromorphic chips like Intel Loihi and IBM TrueNorth are still in development and face challenges such as limited memory, scalability issues, and high production costs.

Limited Software Ecosystem:

Current software tools for developing neuromorphic applications are not as mature or widely supported as those for traditional computing systems.

Learning Constraints:

While neuromorphic systems excel at unsupervised and reinforcement learning, they may struggle with tasks requiring complex supervised learning due to their spike-based architecture.

Cost of Implementation:

The production of neuromorphic chips and sensors is expensive, limiting their accessibility for smaller organizations and startups.

Hardware-Software Integration:

Synchronizing neuromorphic hardware with conventional computing systems remains a challenge, as traditional systems use fundamentally different architectures.

Early Stage of Development:

Neuromorphic engineering is still in its nascent stages, meaning many of its potential applications and benefits are theoretical and yet to be fully realized.

CHAPTER 9

CONCLUSION

In our emerging and dynamic AI-based society, research and development in AI is to a large extent focused on the improvement and utilisation of deep neural networks and AI accelerators. However, the architecture of traditional von Neumann systems has its limits, and the exponential increase in data size and processing requirements calls for more innovative and powerful solutions. Spiking Neural Networks and neuromorphic computing, which are well-developed and well-known areas among neuroscientists and neuro-computing researchers, are part of a trend of very recent and novel technologies that already contribute to enabling the exploration and simulation of the learning structures of the human brain.

This report has explained the evolution of artificial neural networks, the emergence of SNNs, and their impact on the development of neuromorphic chips. It has discussed the limitations of traditional chips and the eventual influence of neuromorphic chips on demanding AI applications. The main players in the area have been identified and related to current and future applications. The study has also described the market advantages of neuromorphic chips compared with other AI semiconductors. Neuromorphic chips are compatible with event-based sensor applications and with emerging technologies such as photonics, graphene, and non-volatile memories. They have huge potential in the development of AI and can certainly become a dominant technology in the next decade.

Hopefully, this report has served to shed some light on the complexity of this challenging computing area. While staying loyal to our objective of offering a practical description of the most recent advances, we have also tried to be instructive enough to increase the interest in and visibility of the topic for the non-specialised audience. For other readers, the study may represent a promising and challenging step towards a more profound understanding of the area that could eventually support the creation of roadmaps, the exploration of new industrial applications, or the analysis of synergies between these novel chips and other related emerging trends.

APPENDIX SEMINAR PRESENTATION SLIDES

