CONTENTS

SL No.  Particulars
1       ABSTRACT
2       INTRODUCTION
3       SCOPE AND OBJECTIVE
4       DESIGN AND IMPLEMENTATION
8       CONCLUSION
9       REFERENCES
EDGE COMPUTING
ABSTRACT
The proliferation of the Internet of Things (IoT) and the success of rich cloud services have
pushed the horizon of a new computing paradigm, edge computing, which calls for processing
data at the edge of the network. Edge computing has the potential to address concerns about
response time requirements, battery life constraints, bandwidth cost savings, as well as data
safety and privacy. In this paper, we introduce the definition of edge computing, followed by
several case studies, ranging from cloud offloading to the smart home and city, as well as
collaborative edge computing, and we hope this paper will gain attention from the community
and inspire further research in this direction.
INTRODUCTION
The Internet of Things (IoT) was first introduced to the community in 1999 for supply chain
management, and the concept of "making a computer sense information without the aid
of human intervention" was then widely adopted in other fields such as healthcare and home
automation. Now, with IoT, we are entering the post-cloud era, in which a large quantity of data
will be generated by things immersed in our daily lives, and many applications will also be
deployed at the edge to consume these data.
Some IoT applications might require very short response time, some might involve private
data, and some might produce a large quantity of data which could be a heavy load for
networks. Cloud computing is not efficient enough to support these applications. With the
push from cloud services and pull from IoT, we envision that the edge of the network is
changing from data consumer to data producer as well as data consumer. In this paper, we
attempt to contribute the concept of edge computing. We start from the analysis of why we
need edge computing, then we give our definition and vision of edge computing. Several case
studies such as cloud offloading, the smart home and city, and collaborative edge are then
introduced, followed by the challenges and opportunities in programmability, naming, data
abstraction, service management, privacy and security, as well as optimization metrics that
are worth future research.
Data is increasingly produced at the edge of the network; therefore, it would be more efficient
to also process the data at the edge of the network. Previous work, such as micro datacenters,
cloudlets, and fog computing, has been introduced to the community because cloud computing
is not always efficient for data processing when the data is produced at the edge of the
network. In this section, we list some reasons why edge computing is more efficient than
cloud computing for some computing services, then we give our definition and understanding
of edge computing.
Processing data at the edge allows it to be acted on without latency. It allows smart
applications and devices to respond to data almost instantaneously, as it is being created,
eliminating lag time. This is critical for technologies such as self-driving cars, and has
equally important benefits for business.
Edge computing is expected to act as a strategic brain behind IoT, and identifying the role of
edge computing in IoT is the main research issue at present. Edge computing is utilized to
reduce the amount of data sent to the cloud and to decrease service access latency.
Scope
Push from Cloud Services: Putting all computing tasks on the cloud has proved to be an
efficient way of processing data, since the computing power of the cloud outclasses the
capability of the things at the edge. However, compared with the rapidly growing speed of
data processing, the bandwidth of the network has come to a standstill. With the growing
quantity of data generated at the edge, the speed of data transportation is becoming the
bottleneck for the cloud-based computing paradigm.
Pull from IoT: Almost all kinds of electrical devices will become part of IoT, playing the
role of data producers as well as consumers, such as air quality sensors and smart
streetlights. It is safe to infer that the number of things at the edge of the network will
grow to more than billions within a few years. The raw data they produce will be enormous,
making conventional cloud computing too inefficient to handle all of it. This means that most
of the data produced by IoT will never be transmitted to the cloud; instead, it will be
consumed at the edge of the network.
Change from Data Consumer to Producer: In the cloud computing paradigm, the end devices at
the edge usually act as data consumers, for example when watching a YouTube video on a
smartphone. However, people are now also producing data from their mobile devices. This
change from data consumer to data producer/consumer requires more functionality to be
placed at the edge of the network.
Objective
The different objectives for edge computing in the context of IoT are as follows:
Efficient Network Utilization: Issues such as a lack of seamless connectivity and inefficient
congestion control degrade the overall network performance. Therefore, the efficient usage of
network resources in edge computing is vital for IoT.
Cost Optimization: The use of an adequate platform for enabling edge computing necessitates
extensive infrastructure deployment that involves substantial upfront investment and operational
expenses. Most of these expenses are related to network node placement, which requires
deliberate planning and optimization to minimize the overall cost. Deployment of an optimal
number of nodes at appropriate positions can significantly reduce capital expenditure, and
optimal resource provisioning can keep operational expenses down.
Energy Management: Energy is a scarce resource for the battery-powered devices that dominate
edge computing. Subscribers need to have strict control over power management, and energy-
efficient IoT devices and applications are desirable in edge computing. According to a study,
one trillion IoT nodes will need sensing platforms that support various applications using
power harvesting to ensure long-term operation.
Data Management: The large number of IoT devices at present are expected to generate large
amounts of data that need to be managed in a timely manner. Efficient and effective data
management mechanisms are desirable in edge computing. The transmission and aggregation of
IoT-generated data must therefore be handled carefully to avoid overloading the network.
A taxonomy of IoT-based edge computing considers particular features such as the network
technologies, computing nodes, and computing paradigms involved, which are discussed in turn below.
Network Technologies: IoT devices send collected data to a locally available edge server for
processing. These devices communicate with edge computing platforms through either
wireless networking technologies, such as WiFi and cellular networking (e.g., 3G, 4G, and
5G), or wired technologies, such as Ethernet. These network technologies vary in terms of
data rate, transmission range, and number of supported devices. Wireless networks provide
flexibility and mobility to users who execute their applications on the edge server. However,
wired connections typically offer higher and more stable data rates.
Computing Nodes: IoT devices have limited processing capabilities, which makes it necessary
for them to augment their capabilities by leveraging the resources of edge servers. The edge computing
paradigm relies on different computational devices to provide services to IoT users. These
computational devices are the core element of IoT-based edge computing. Computing nodes
include servers, base stations (BS), routers, and vehicles that can provide resources and
various services to IoT devices. The use of these devices is specific to the computing
paradigm.
Computing Paradigms: Various computing paradigms are used in IoT to provide different services
depending on diverse application requirements. These paradigms can be categorized into cloud
computing, edge computing (i.e., MEC, fog, and cloudlet), mobile ad hoc cloud (MAC), and hybrid
computing. Cloud computing offers interruption-free access to powerful cloud servers. These
servers can rapidly process large amounts of data upon receipt from remote IoT devices and send
back the results. However, real-time, delay-sensitive applications cannot afford the long delays
induced by a wide area network. Continuous transmission of voluminous raw data through
unreliable wireless links may also be impractical.
Edge computing addresses these limitations by providing cloud computing capabilities near IoT
devices, that is, at the network edge. An important type of edge computing platform is MEC,
which brings cloud computing capabilities to the edge of a cellular network [10]. Computational
and storage services in MEC are provided at the BS. Unlike MEC, fog computing employs local
fog nodes (i.e., local network devices such as routers, gateways, and switches) to provide
computing and storage services. Fog computing is considered a premier technology following the
success of IoT. In the cloudlet model, computation-intensive tasks from IoT devices are
performed on a server deployed in the local area network. Unlike cloud and edge computing
platforms that rely on infrastructure deployment, MAC capitalizes on the shared resources of
available mobile devices within local proximity to process computation-intensive tasks. Cloud
and edge computing are used together in hybrid
computing. Such infrastructure is usually adopted when we require the large computing
resources of cloud computing but cannot tolerate the latency of the cloud. Variants of edge
computing can be employed in such applications to overcome the latency problems of cloud
computing.
Edge computing is a distributed architecture, simply defined as the processing of data close
to where it is collected. It has emerged to minimize both the bandwidth used and the response
time in an IoT system. An edge computing technique is required when latency must be optimized
to avoid network saturation, as well as when the data processing burden is high. Edge computing
is closely related to fog computing, an architecture that makes use of edge gadgets to
accomplish a considerable amount of computation on the output gathered from the real world, a
process referred to as transduction. Fog nodes determine whether to process the data locally
from several data sources or send the data out to the cloud.
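To make this concrete, the following is a minimal Python sketch of how a fog node might make that local-versus-cloud decision. The latency and capacity thresholds, the Reading fields, and the place_computation helper are all illustrative assumptions, not part of any specific platform.

```python
# Hypothetical illustration: a fog node deciding whether to process a reading
# locally or forward it to the cloud. Thresholds are assumed values.
from dataclasses import dataclass


@dataclass
class Reading:
    device_id: str
    payload_kb: float          # size of the raw data
    max_latency_ms: float      # latency the application can tolerate


LOCAL_LATENCY_MS = 20          # assumed round trip to the fog node
CLOUD_LATENCY_MS = 150         # assumed round trip to the remote cloud
LOCAL_CAPACITY_KB = 512        # assumed largest payload the fog node handles well


def place_computation(reading: Reading) -> str:
    """Return 'fog' or 'cloud' for a single reading."""
    # Delay-sensitive data must stay at the edge if the cloud cannot meet the deadline.
    if reading.max_latency_ms < CLOUD_LATENCY_MS:
        return "fog"
    # Large payloads are also kept local to save upstream bandwidth.
    if reading.payload_kb > LOCAL_CAPACITY_KB:
        return "fog"
    return "cloud"


if __name__ == "__main__":
    for r in [Reading("camera-1", 2048, 500), Reading("thermo-7", 0.2, 50)]:
        print(r.device_id, "->", place_computation(r))
```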
The tasks that edge computing carries out on a daily basis involve three basic components,
sketched after this list:
Data sources: as the input, any endpoint that records and collects data from clients or its
surroundings.
Artificial intelligence: as the processing function, it is the main facet after data is
collected; it is used to derive recommendations and improve performance based on machine
learning or data analytics models.
Actionable insights: the results from the previous stage succeed only when an individual can
act on them and make an informed selection. Thus, within this stage, the insights appear in a
transparent manner in the form of control panels, visualizations, alerts, and so on, which
closes the loop.
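As an illustration of the three components above, here is a minimal Python sketch of the data source, artificial intelligence, and actionable insight loop. The simulated sensor, the threshold-based anomaly check, and the alert printout are hypothetical stand-ins for real devices, models, and dashboards.

```python
# Minimal sketch of the three-stage edge loop (data source -> AI -> actionable insight).
import random
from typing import Iterator


def data_source(samples: int = 10) -> Iterator[float]:
    """Stage 1: an endpoint that records raw readings (simulated here)."""
    for _ in range(samples):
        yield random.uniform(15.0, 45.0)   # e.g. temperature in Celsius


def analyse(reading: float, limit: float = 40.0) -> dict:
    """Stage 2: the processing step; a real deployment might run an ML model."""
    return {"value": reading, "anomaly": reading > limit}


def act(insight: dict) -> None:
    """Stage 3: surface the insight so an operator (or actuator) can respond."""
    if insight["anomaly"]:
        print(f"ALERT: reading {insight['value']:.1f} exceeds the safe limit")


if __name__ == "__main__":
    for sample in data_source():
        act(analyse(sample))
```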
An organization should oversee and ensure the privacy and security of its IoT framework. The
key privacy and security requirements include the following:
Pseudonymity: an individual or device may utilize a resource under an alias (e.g., a
pseudonym) without revealing the source's real identity.
Unobservability: a resource or service may be used without other third parties having the
ability to observe that the resource or service is being used.
Unlinkability: ensuring that a third party (e.g., an attacker) cannot identify whether two or
more items of interest, such as requests or messages, originate from the same source.
Anonymity: an individual may make use of a resource without revealing his identity.
Confidentiality: assuring that only the data proprietor and authorized individuals can access
the data. This includes preventing unauthorized access when the individual's data is
transferred and collected in the edge or core network framework, as well as when the data is
kept or handled in edge or cloud nodes.
Integrity: assuring the proper and consistent transmission of data to the accredited party,
without undetected modification.
Availability: ensuring that the accredited party can access the edge services in any region
based on the individual's needs. This also implies that an individual's data held in edge or
cloud nodes, including data in ciphertext form, can be handled under various practical needs.
Access control and authentication: access control acts as a linking point for all privacy and
security demands, enforced through the access control technique. Authentication ensures that
only legitimate users and devices can interact with edge resources and services.
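The sketch below illustrates two of these requirements, pseudonymity and authentication, using a keyed hash. The HMAC construction, the registry key, and the function names are assumptions made for illustration, not a mechanism prescribed above.

```python
# Illustrative sketch only: deriving a pseudonym for a device so it can use an
# edge service without revealing its real identity, and verifying it with a
# shared secret held by a trusted edge registry.
import hashlib
import hmac
import secrets

REGISTRY_KEY = secrets.token_bytes(32)   # assumed secret held by the registry


def pseudonym(device_identity: str) -> str:
    """Map a real identity to a stable alias (pseudonymity)."""
    return hmac.new(REGISTRY_KEY, device_identity.encode(), hashlib.sha256).hexdigest()[:16]


def authenticate(device_identity: str, presented_pseudonym: str) -> bool:
    """Authentication: only a party knowing the registry key can mint valid pseudonyms."""
    return hmac.compare_digest(pseudonym(device_identity), presented_pseudonym)


if __name__ == "__main__":
    alias = pseudonym("thermostat-serial-0042")
    print("pseudonym:", alias)
    print("accepted:", authenticate("thermostat-serial-0042", alias))
```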
Two case studies are presented in this section to illustrate the edge computing vision
comprehensively. First, we analyse a smart parking system that lessens traffic when an
individual is navigating for a car parking space. Second, we explore utilizing the CDN to
minimize the latency of transmitted data as well as enhance Internet content availability.
Consider a system that allows individuals to promptly find an available parking place with one
click of a key on a smart device. This system will significantly decrease the time required to
find a parking space. The smart parking system is usually powered via RFID, ultrasonic
detectors, and infrared sensing units.
a) Flow of Execution: A common flow of execution for a smart parking system works as follows
(a minimal sketch of this flow appears after the list):
3) Browse the available parking slots, then select a preferable slot.
6) When a customer parks the car via navigation and confirms his parking, the time
countdown starts.
7) On departure, the customer can pay any additional charge if he exceeds the allowed time.
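The following Python sketch mirrors the flow above (browse slots, confirm parking, start the countdown, charge on overstay). The ParkingLot class, the allowed time, and the overstay rate are invented for illustration.

```python
# A minimal sketch of the smart-parking flow; slot data, rates, and timings are assumed.
import time


class ParkingLot:
    def __init__(self, slots: int, allowed_minutes: int = 60, overstay_rate: float = 0.5):
        self.free = set(range(1, slots + 1))
        self.allowed = allowed_minutes * 60
        self.rate = overstay_rate                 # charge per extra minute
        self.started: dict[int, float] = {}

    def available_slots(self) -> list[int]:
        """Step 3: browse the currently available slots."""
        return sorted(self.free)

    def confirm_parking(self, slot: int) -> None:
        """Step 6: the customer parks and the countdown starts."""
        self.free.discard(slot)
        self.started[slot] = time.time()

    def depart(self, slot: int) -> float:
        """Step 7: on departure, compute any additional charge for overstaying."""
        elapsed = time.time() - self.started.pop(slot)
        self.free.add(slot)
        extra_minutes = max(0.0, (elapsed - self.allowed) / 60)
        return round(extra_minutes * self.rate, 2)


if __name__ == "__main__":
    lot = ParkingLot(slots=5)
    slot = lot.available_slots()[0]
    lot.confirm_parking(slot)
    print("extra charge:", lot.depart(slot))
```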
b) Benefits: Smart parking may minimize the traffic caused by automobiles searching for a slot,
can be useful for many people, and can decrease vehicle emissions, making for a more
environmentally friendly city. It can also boost accessibility for businesses and grocery
stores near the parking facilities.
c) Future Scope:
The system may be adjusted to integrate future self-driving automobiles and to support their
automated parking.
More efficient parking algorithms could be established for the optimal use of resources, such
as slot availability and parking durations. For example, a deep learning model could be
trained to predict slot availability in advance.
A CDN is one of the most promising solutions to address the issue of massive web traffic, by
delivering web content to users in a faster way. The CDN is a special case of edge computing.
Today, many Internet websites, such as Facebook, eBay, and Netflix, leverage the CDN
architecture to serve users spread across different geographical areas. Serving every user's
requests from a single central location can easily overload the origin server and increase
latency; service level agreement (SLA) violations can also occur. The CDN addresses precisely
this issue in this use case. The origin server is connected to several Internet exchange
points (IXPs), and caching servers, named points of presence (PoPs), are distributed
throughout different geographical regions so that content is delivered from a location close
to the user.
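A minimal sketch of the routing idea behind a CDN is shown below: requests are served from the PoP nearest to the user rather than from the origin. The PoP coordinates and the haversine-based selection are illustrative assumptions.

```python
# Hypothetical sketch of CDN request routing: pick the point of presence (PoP)
# geographically closest to the user. PoP list and distance metric are invented.
import math

POPS = {
    "frankfurt": (50.11, 8.68),
    "singapore": (1.35, 103.82),
    "virginia": (38.95, -77.45),
}


def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = math.sin((lat2 - lat1) / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))


def nearest_pop(user_location: tuple[float, float]) -> str:
    """Route the request to the closest PoP instead of the origin server."""
    return min(POPS, key=lambda name: haversine_km(user_location, POPS[name]))


if __name__ == "__main__":
    print(nearest_pop((12.97, 77.59)))   # a user in Bengaluru -> "singapore"
```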
2) Advantages: There are many advantages for consumers under the CDN
architecture.
Website security improvement: a CDN, with the help of distributed denial of service (DDoS)
mitigation, can enhance and maintain website security against DDoS attacks.
Faster website page loading: a CDN can serve static web content from servers close to the
user, reducing page load times.
Botnet and spam defence: a CDN can be set up with firewall policies that block traffic
originating from known botnets and spam sources.
Enhancing global content availability: a CDN can manage massive traffic and keep content
available to users across different regions.
Handling website traffic spikes: a CDN provides better load balancing between servers,
absorbing sudden spikes in traffic.
We have described potential applications of edge computing in the preceding sections. To
realize the vision of edge computing, we argue that the systems and network community need
to work together. In this section, we will further summarize these challenges in detail and
bring forward some potential solutions and opportunities worth further research, including
programmability, naming, data abstraction, service management, privacy and security, and
optimization metrics.
Programmability
In cloud computing, users write their code and deploy it on the cloud. The cloud
provider is in charge of deciding where the computing is conducted in the cloud. Users have zero
or partial knowledge of how the application runs. This is one of the benefits of cloud
computing that the infrastructure is transparent to the user. Usually, the program is written in
one programming language and compiled for a certain target platform, since the program only
runs in the cloud. However, in edge computing, computation is offloaded from the cloud,
and the edge nodes are most likely heterogeneous platforms. In this case, the runtimes of these
nodes differ from each other, and the programmer faces huge difficulties in writing an
application that may be deployed in the edge computing paradigm. To address this, we introduce
the concept of a computing stream, defined as a series of functions/computations applied to the
data along the data propagation path; the computing can occur anywhere on the path as long as
the application defines where the computing should be conducted. The computing stream is a
software-defined computing flow such that data can be processed in a distributed and efficient
fashion on data-generating
devices, edge nodes, and the cloud environment. As defined in edge computing, a lot of
computing can be done at the edge instead of in the central cloud. In this case, the computing
stream can help the user to determine what functions/computing should be done and how the
data is propagated after the computing happened at the edge. The function/computing
distribution metric could be latency, energy cost, total cost of ownership (TCO), or hardware/
software-specific limitations. The detailed cost model is discussed under Optimization Metrics
below. By deploying a computing stream, we expect that data is computed as close as possible
to the data source, which reduces the data transmission cost.
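The sketch below illustrates one possible, latency-driven interpretation of a computing stream: each stage declares a deadline, and a simple placement rule keeps every stage as close to the data source as that deadline allows. The stage names, cost tables, and placement rule are assumptions for illustration only.

```python
# A sketch of the "computing stream" idea: a chain of functions over the data,
# each placed on the device, the edge node, or the cloud by a latency budget.
from dataclasses import dataclass
from typing import Callable

# Assumed per-stage compute latency (ms) on each layer and the uplink cost to reach it.
COMPUTE_MS = {"device": 80, "edge": 20, "cloud": 5}
UPLINK_MS = {"device": 0, "edge": 15, "cloud": 120}


@dataclass
class Stage:
    name: str
    func: Callable[[float], float]   # the computation applied to the data (not run here)
    deadline_ms: float               # latency budget for this stage


def place(stage: Stage) -> str:
    """Pick the layer closest to the data source that still meets the stage's deadline."""
    for layer in ("device", "edge", "cloud"):
        if COMPUTE_MS[layer] + UPLINK_MS[layer] <= stage.deadline_ms:
            return layer
    return "cloud"  # fall back if no layer meets the deadline


if __name__ == "__main__":
    stream = [
        Stage("filter", lambda x: x * 0.9, deadline_ms=100),
        Stage("aggregate", lambda x: x + 1, deadline_ms=40),
        Stage("daily_report", lambda x: x ** 2, deadline_ms=500),
    ]
    for s in stream:
        print(s.name, "->", place(s))
```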
Naming
In edge computing, one important assumption is that the number of things is tremendously
large. On top of the edge nodes, there are many applications running, and each
application has its own structure for how the service is provided. Similar to all computer
systems, the naming scheme in edge computing is very important for programming, addressing,
thing identification, and data communication. However, an efficient naming mechanism for
the edge computing paradigm has not been built and standardized yet. Edge practitioners
usually need to learn various communication and network protocols in order to communicate
with the heterogeneous things in their system. The naming scheme for edge computing needs
to handle the mobility of things, highly dynamic network topology, privacy and security
protection, as well as the scalability targeting the tremendously large amount of unreliable
things. Traditional naming mechanisms such as DNS and uniform resource identifier satisfy
most of the current networks very well. However, they are not flexible enough to serve the
dynamic edge network since sometimes most of the things at edge could be highly mobile
and resource constrained. Moreover, for some resource-constrained things at the edge of the
network, an IP-based naming scheme could be too heavy to support, considering its complexity
and overhead. New naming mechanisms such as named data networking (NDN) and MobilityFirst
could also be applied to edge computing. NDN provides a hierarchically structured name for a
content/data-centric network; it is human friendly for service management and provides
good scalability for the edge. However, it would need an extra proxy in order to fit into other
communication protocols such as Bluetooth or ZigBee. Another issue associated
with NDN is security, since it is very hard to isolate the hardware information of things from
service providers. MobilityFirst separates names from network addresses in order to provide
better mobility support, and it would be very efficient if applied to edge services where things
are highly mobile. Nevertheless, a globally unique identifier (GUID) needs to be used for
naming in MobilityFirst, and this is not required in relatively fixed information aggregation
services at the edge of the network, such as the home environment. Another disadvantage of
MobilityFirst for the edge is the difficulty of service management, since a GUID is not human
friendly.
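To illustrate what a hierarchical, human-friendly naming scheme of the NDN flavour could look like in a smart-home edge deployment, here is a small sketch. The /home/room/device/attribute layout and the prefix matching are assumptions chosen for illustration, not a standardized scheme.

```python
# Sketch of a hierarchical, human-readable name for smart-home edge resources.
from dataclasses import dataclass


@dataclass(frozen=True)
class EdgeName:
    home: str
    room: str
    device: str
    attribute: str

    def __str__(self) -> str:
        return f"/{self.home}/{self.room}/{self.device}/{self.attribute}"

    @classmethod
    def parse(cls, name: str) -> "EdgeName":
        home, room, device, attribute = name.strip("/").split("/")
        return cls(home, room, device, attribute)


def matches(prefix: str, name: EdgeName) -> bool:
    """Prefix matching lets a consumer subscribe to, e.g., every sensor in one room."""
    return str(name).startswith(prefix.rstrip("/"))


if __name__ == "__main__":
    n = EdgeName("home42", "kitchen", "thermometer1", "temperature")
    print(n)                              # /home42/kitchen/thermometer1/temperature
    print(matches("/home42/kitchen", n))  # True
```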
Data Abstraction
Various applications can run on the edge OS, consuming data or providing services by
communicating through the application programming interfaces (APIs) exposed by the service
management layer. Data
abstraction has been well discussed and researched in the wireless sensor network and cloud
computing paradigm. However, in edge computing, this issue becomes more challenging.
With IoT, there would be a huge number of data generators in the network, and here we take the
smart home as an example. In a smart home, almost all of the things will report data to the edge OS, not to
mention the large number of things deployed all around the home. However, most of the
things at the edge of the network only periodically report sensed data to the gateway. For
example, the thermometer could report the temperature every minute, but this data will most
likely only be consumed by the real user several times a day. Another example could be a
security camera in the home which might keep recording and sending the video to the
gateway, but the data will just be stored in the database for a certain time with nobody ever
accessing it.
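A minimal sketch of such gateway-side data abstraction is given below: raw minute-level readings are folded into compact summaries and then discarded. The window size and the summary fields are illustrative assumptions.

```python
# Sketch: the thermometer reports every minute, but the gateway keeps only a
# compact summary per window, discarding raw samples after abstraction.
from statistics import mean


class GatewayAbstraction:
    def __init__(self, window: int = 60):
        self.window = window          # number of raw samples per summary
        self.buffer: list[float] = []
        self.summaries: list[dict] = []

    def report(self, value: float) -> None:
        """Called once per raw reading; raw data never leaves the gateway."""
        self.buffer.append(value)
        if len(self.buffer) >= self.window:
            self.summaries.append({
                "min": min(self.buffer),
                "max": max(self.buffer),
                "avg": round(mean(self.buffer), 2),
            })
            self.buffer.clear()       # drop raw samples after abstraction


if __name__ == "__main__":
    gw = GatewayAbstraction(window=5)
    for v in [21.0, 21.2, 21.1, 22.0, 21.8, 23.0]:
        gw.report(v)
    print(gw.summaries)               # one summary covering the first five readings
```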
Service Management
In terms of service management at the edge of the network, we argue that the following four
features are fundamental for a reliable system: differentiation, extensibility, isolation, and
reliability. Differentiation: With the fast growth of
IoT deployment, we expect that multiple services will be deployed at the edge of the network,
such as in the smart home. These services will have different priorities. For example, critical
services such as thing diagnosis and failure alarms should be processed earlier than ordinary
services. Health-related services, such as fall detection or heart failure detection, should
also have a higher priority than other services such as entertainment.
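The sketch below shows one simple way to realize such differentiation at an edge node with a priority queue. The service names and priority values are assumed for illustration.

```python
# Sketch of service differentiation: critical requests (alarms, health monitoring)
# are dispatched before ordinary ones such as entertainment.
import heapq
import itertools

PRIORITY = {"failure_alarm": 0, "fall_detection": 1, "entertainment": 9}
_counter = itertools.count()          # tie-breaker keeps FIFO order within a class


class EdgeScheduler:
    def __init__(self):
        self._queue: list[tuple[int, int, str]] = []

    def submit(self, service: str, request: str) -> None:
        heapq.heappush(self._queue, (PRIORITY.get(service, 5), next(_counter), request))

    def dispatch(self) -> str:
        """Always serve the highest-priority (lowest number) pending request."""
        return heapq.heappop(self._queue)[2]


if __name__ == "__main__":
    sched = EdgeScheduler()
    sched.submit("entertainment", "stream movie")
    sched.submit("failure_alarm", "boiler over-temperature")
    print(sched.dispatch())           # the alarm is handled first
```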
Optimization Metrics
In edge computing, we have multiple layers with different computation capability. Workload
allocation becomes a big issue. We need to decide which layer should handle the workload, or
how many tasks to assign to each layer. There are multiple allocation strategies to complete a
workload: for instance, evenly distributing the workload across the layers, or completing as
much as possible at each layer. The extreme cases are running fully on the endpoint or fully on
the cloud. A sketch of such an allocation decision follows the metric list below.
Latency: Latency is one of the most important metrics for evaluating performance, especially
in interactive applications/services.
Bandwidth: From the latency point of view, high bandwidth can reduce transmission time,
especially for large volumes of data; handling data at the edge also saves bandwidth between
the edge and the cloud.
Energy: Battery life is the most precious resource for things at the edge of the network. For
the endpoint layer, offloading workload to the edge can be treated as an energy-free method
from the device's perspective.
Cost: From the service providers' perspective, e.g., YouTube, Amazon, etc., edge computing
offers lower latency and energy consumption, potentially increased throughput, and an improved
user experience.
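Below is a toy sketch of such a workload-allocation decision that weights the latency, energy, and cost metrics listed above. All numbers and weights are invented and would have to be measured in a real deployment.

```python
# Toy cost model for allocating one unit of workload across the endpoint, edge,
# and cloud layers. All figures are assumed for illustration.
LAYERS = {
    #            latency_ms  energy_j  cost_cents  (per unit of work, assumed)
    "endpoint": (80,         4.0,      0.0),
    "edge":     (30,         0.5,      0.1),
    "cloud":    (150,        0.2,      0.4),
}


def score(layer: str, w_latency: float, w_energy: float, w_cost: float) -> float:
    latency, energy, cost = LAYERS[layer]
    return w_latency * latency + w_energy * energy + w_cost * cost


def allocate(w_latency: float = 1.0, w_energy: float = 10.0, w_cost: float = 50.0) -> str:
    """Pick the layer with the lowest weighted cost for the workload."""
    return min(LAYERS, key=lambda layer: score(layer, w_latency, w_energy, w_cost))


if __name__ == "__main__":
    print(allocate())                  # balanced weights -> "edge"
    print(allocate(w_cost=1000.0))     # cost-dominated workload stays on the endpoint
```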
CONCLUSION
Nowadays, more and more services are pushed from the cloud to the edge of the network
because processing data at the edge can ensure shorter response time and better reliability.
Moreover, bandwidth could also be saved if a larger portion of data could be handled at the
edge rather than uploaded to the cloud. The burgeoning of IoT and the universal adoption of
mobile devices have changed the role of the edge in the computing paradigm from data consumer
to data producer/consumer. It is therefore more efficient to process, or at least pre-process,
data at the edge of the network.
In this paper, we presented our understanding of edge computing, with the rationale that
computing should happen in the proximity of data sources. We investigated, highlighted, and
reported recent premier advances in edge computing technologies (e.g., fog computing, MEC, and
cloudlets) with respect to their effect on IoT. We then categorized the edge computing
literature by devising a taxonomy, which was used to uncover the premium features of edge
computing that can benefit the IoT paradigm. Finally, we outlined a few key requirements for
the deployment of edge computing in IoT and identified and discussed the challenges to its
successful deployment.
We conclude that although the deployment of edge computing in IoT provides numerous
benefits, the convergence of these two computing paradigms brings about new issues that must
be addressed before those benefits can be fully realized.
REFERENCES
[1] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, "Edge Computing: Vision and Challenges," IEEE
Internet of Things Journal, 2016.
[2] Mohammed et al., "Secure Edge Computing in IoT Systems: Review and Case Studies."
[3] N. Hassan, S. Gillani, et al., "The Role of Edge Computing in Internet of Things."
[4] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, et al.
[5] "Multi-Access Edge Computing," IEEE Commun. Mag., vol. 55, no. 11, Nov. 2017.
[6] T. Taleb et al., "Mobile Edge Computing Potential in Making Cities Smarter," IEEE Commun.
Mag.
[7] M. Satyanarayanan et al., "Edge Analytics in the Internet of Things," IEEE Pervasive
Computing.