Article
Brainware Computing: Concepts, Scopes and Challenges
Eui-Nam Huh and Md Imtiaz Hossain *
Department of Computer Science and Engineering, Kyung Hee University, Global Campus,
Yongin-si 17104, Korea; johnhuh@khu.ac.kr
* Correspondence: hossain.imtiaz@khu.ac.kr; Tel.: +82-31-201-2454
Abstract: Over the decades, robotics technology has advanced substantially through the progression of the 5G Internet, Artificial Intelligence (AI), the Internet of Things (IoT), Cloud, and Edge Computing. Although cobots and Service-Oriented Architecture (SOA)-supported robots with edge computing paradigms have achieved remarkable performance in diverse applications, existing SOA robotics technology fails to produce high-performing, multi-domain expert robots; it demands a Service-Oriented Brain, SOB (comprising an AI model, driving service application, and metadata), that enables a robot to deploy a brain, together with a new computing model with more scalability and flexibility. In this paper, instead of focusing on the SOA and Robot as a Service (RaaS) models, we propose a novel computing architecture, termed Brainware Computing, for driving multiple domain-specific brains one at a time in a single hardware robot according to the service, termed Brain as a Service (BaaS). In Brainware Computing, each robot can install and remove a virtual machine, which contains the SOB and operating applications, from the nearest edge cloud. Secondly, we provide an extensive explanation of the scope and possibilities of Brainware Computing. Finally, we discuss several challenges and opportunities and conclude with future research directions in the field of Brainware Computing.
Keywords: brainware computing; service oriented architecture (SOA); service oriented brain (SOB); brain as a service (BaaS)
1. Introduction
Having Service-Oriented Architecture (SOA) [1] and Collaborative Robots (COBOT) [2], the automation industry has grown to be a vast equalizer in diverse applications such as healthcare [3], medicine [4,5], e-commerce [6], surveillance systems [7], smart city [8], smart home [9], UAV [10], manufacturing [11], and so on [12]. Recently, the proliferation of the 5G Internet, Artificial Intelligence (AI), the Internet of Things (IoT), Cloud, and Edge Computing has further matured this automation technology over the years, yielding novel services and technologies, for instance, Software as a Service (SaaS) [13], Infrastructure as a Service (IaaS) [14], Platform as a Service (PaaS) [15], Robot as a Service (RaaS) [16], etc. Moreover, these emerging scalable and flexible infrastructures drive the existing automation industry to accelerate performance significantly in terms of perfection and resource utilization in business and defense.
The robotic concept was first introduced by Czech playwright Karel Čapek in 1921 to denote a fictional humanoid [17,18]. Later, in 1942, Isaac Asimov first used the term robotics [19]. Since then, robotics technology has gradually improved in terms of performance, scalability, and flexibility through the progression of software and hardware technologies along with networking, the IoT, AI, Cloud, and Edge Computing. To solve the ergonomic and productivity issues in the automation industry, Michael A. Peshkin et al. proposed a more flexible and scalable robot architecture called the Collaborative Robot (COBOT) in 2001 [2]. Furthermore, Service-Oriented Architecture (SOA), Cloud Computing, and IoT have pushed the boundaries of automation industry services and scopes, offering Robot as a Service (RaaS) [16] and Infrastructure as a Service (IaaS) [14]. In the last decade, with these service infrastructures and vast advances in Artificial Intelligence (AI), more specifically deep learning, robotics technologies have attained remarkable capability in terms of accuracy and perfection in diverse applications [3–12].
Modern AI technology has achieved tremendous performance, but it is almost entirely domain-specific, and hence so is state-of-the-art robotics technology. Every specific service or application demands a distinct domain-specific brain, i.e., an artificial intelligence model and driving application. In this proposal, we use the keyword “brain” to denote a container image that includes an AI-trained model (if necessary, capable of being updated through service-aware learning), a driving service application, and metadata. Developing a high-performing multi-domain expert system is extremely challenging. For example, a robot that is designed and trained for caring for a child is not capable of performing rationally as a nurse, and vice versa. Yara Rizk et al. [20] explained the difficulties and challenges of multi-domain adaptive systems to describe the heterogeneity of existing robotic systems. As a single robot has a limited capacity for heterogeneity, Yunfei Shi et al. [21] considered the problem of a heterogeneous team of robots in terms of modeling and sampling, and proposed a technique to facilitate multiple robots working together. The aforementioned articles indicate that existing single-robot systems struggle to fully support heterogeneity in terms of domain adaptation and to perform well across diverse domains [20,21]. Deploying multiple brains inside a single robotic system might be one possible solution for building a multi-domain adaptive robot. However, this solution is not practically and economically feasible because of the limited storage capacity and computational capability of robots such as household robots, self-driving cars, etc.
To deal with the aforementioned issue, leveraging the potential of the 5G internet and the possibilities of state-of-the-art AI, edge, and cloud computing, we propose a computing architecture named Brainware Computing. Brainware Computing denotes encapsulating an AI model and service application with corresponding metadata in the cloud, sharing domain-specific brains with the edges, deploying and enabling requested brain images in the robot based on the requested service, and combining and composing multiple brains without any external programming required. The existing edge computing paradigm is considered the backbone computing paradigm for Brainware Computing. Current robot architecture and working principles can be divided into four concerns: (1) Communication, (2) Storage, (3) Sensors and Actuators, and (4) Software. The software includes the AI model and driving application, which perform learning from the environment and inference of the next state by interacting with the real-world environment [22] through sensors and actuators. In the existing robotics architecture, these four operations are standalone and fixed together. Though the Robot as a Service (RaaS) [16] architecture introduced a strategy to increase flexibility, it demands external control by the developers and offers very limited variability. In the Brainware Computing platform, a large portion of the software component (learning, model updating, and the brain) is decoupled from the robot, equipping the robot to install and remove a virtual machine based on service demands (see Figure 1). The virtual machine includes a service-oriented AI model, the corresponding operating applications, and metadata. These virtual machines are stored in the form of container images in the Intelligent Hierarchical Brain Store at the nearest edge cloud. Based on service demands, the edge and core cloud can install and remove the service-specific brain image into and from the robots. Thus, a single robot can achieve excellent performance in any concerned domain by switching the intelligent brain inside it, supervised by the edge cloud.
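To make the decoupling concrete, the following is a minimal Python sketch of how a brain, as defined above, might be described as a container-image manifest; every field name and value here is an illustrative assumption, not part of a fixed specification.

```python
from dataclasses import dataclass, field

@dataclass
class BrainImage:
    """A 'brain': a container image bundling an AI-trained model,
    its driving service application, and metadata."""
    service: str                    # service the brain is specialized for
    model_uri: str                  # trained model weights inside the image
    app_entrypoint: str             # driving service application
    metadata: dict = field(default_factory=dict)  # version, size, hardware needs

# Example entry as it might sit in the Intelligent Hierarchical Brain Store:
cleaning_brain = BrainImage(
    service="house-cleaning",
    model_uri="models/cleaning_policy.pt",
    app_entrypoint="apps/cleaning_service:main",
    metadata={"version": "1.2", "size_mb": 210, "gpu_required": False},
)
```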
We elaborately discuss the need for Brainware Computing in Section 2. In Section 3, we introduce the concepts of Brainware Computing with case studies. Section 4 presents the scopes and possibilities. Challenges and opportunities of the proposed Brainware Computing are demonstrated in Section 5. Lastly, the conclusion, along with research directions, is presented in Section 6.
Figure 1. Overview of Brainware Computing: the edge cloud contains data and brains, ensures services, coordinates communication, and performs service-aware learning; end devices (robots/cobots) (1) upload data, weights, and service requests and (2) download brains.
Despite the benefits available through 5G high-speed connectivity, recent state-of-the-art robots and cobots are not efficient enough to utilize them. In this section, we elaborate on the limitations of existing robotic systems along with possible solutions.
2.1.1. Invariability
In the Robot as a Service (RaaS) [16] infrastructure, the service robot is usually designed from a multi-domain perspective, but with limited variety in terms of domain adaptability, and it still demands external maintenance to perform as a multi-domain expert. The rapid rise of advanced robotics technology has opened many new areas. A robot interacts with the environment based on predefined tasks and a fixed-domain artificial intelligence model inside it. If the environment demands a new service, the robot fails to interact with the environment. Although recently advanced robots update their learning through deep reinforcement learning, they have insufficient capacity to accommodate all services due to their limited computational and storage capability. For example, a robot that is designed for cleaning the house in the afternoon fails to perform as a child-caring robot or teacher at night, and vice versa. Deploying and enabling different service-oriented brains at different times in the robot is one of the best solutions for making the robot perform from multi-domain perspectives, as sketched below.
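A minimal sketch of this brain-swapping behavior on the robot side follows; the edge-cloud interface (download_brain, report_release) is a hypothetical stand-in for the protocol detailed later.

```python
class GeneralPurposeRobot:
    """Sketch: the robot holds at most one service-oriented brain at a
    time and swaps it according to the demanded service."""

    def __init__(self, edge_cloud):
        self.edge_cloud = edge_cloud
        self.active_brain = None

    def switch_service(self, service):
        if self.active_brain is not None:
            self.remove_brain()                  # free limited storage first
        self.active_brain = self.edge_cloud.download_brain(service)
        self.active_brain.start()                # boot the VM containing the brain

    def remove_brain(self):
        self.active_brain.stop()
        self.edge_cloud.report_release(self.active_brain)  # edge supervises lifecycle
        self.active_brain = None

# The same robot can clean in the afternoon and tutor at night:
#   robot.switch_service("house-cleaning")
#   robot.switch_service("child-tutoring")
```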
2.1.2. Inefficiency
Recent advancements in the 5G internet, edge, and cloud computing allow high-speed data transmission and processing at a very large scale [26,27]. Though existing service-oriented architectures such as IaaS, SaaS, and RaaS utilize the data-processing advantages of cloud computing, they overlook the possibilities of edge computing and the 5G internet [13,14,16]. For example, RaaS provides services based on cloud computing and external control by the developers, without utilizing edge computing advancements and high data transmission rates [16]. Let us consider a robot that is designed to recognize human action in videos using a deep learning model. If 3D-ResNet101 [28] is selected as the backbone network for this task, then the trained model size will be 365.1 MB, considering 64 frames per clip [29]. The 5G internet allows a data transmission rate of 1+ Gbps at peak and 100+ Mbps on average, with very low latency [30]. So, at peak rates, the recent high-speed 5G internet requires approximately 3 seconds to deploy the brain image for the action recognition task in the robot (roughly 29 seconds at the average rate). Moreover, considering the task-processing and data-sharing capacity of edge computing, this latency can be reduced with improved reliability [31]. Existing robotics technology, including service-oriented structures, is not yet capable of reaping these benefits. This high data transmission and computational capacity in edge and cloud computing allows deploying and enabling different service-oriented brains at different times in the robot, opening up a new era in robotics technology. These brains are stored in the core and edge clouds.
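To make the arithmetic above explicit, here is a small sketch of the transmission-time estimate (values taken from the example in the text; protocol overhead ignored):

```python
def brain_transfer_seconds(model_mb, rate_mbps):
    """Time to transmit a brain image of model_mb megabytes
    over a link of rate_mbps megabits per second."""
    return model_mb * 8 / rate_mbps

# The 365.1 MB 3D-ResNet101 action-recognition brain from the example:
print(brain_transfer_seconds(365.1, 1000))  # ~2.9 s at a 1 Gbps 5G peak rate
print(brain_transfer_seconds(365.1, 100))   # ~29 s at a 100 Mbps 5G average rate
```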
Moreover, a core concept is to replace all possible sensors with a camera and a microphone. Advanced emerging computer vision and time-series signal analysis techniques are used to perceive the environment and surroundings. The robots cooperate with humans using vision and voice commands. So, instead of using numerous sensors, deep reinforcement learning models are able to perceive the real world using camera and audio sensors. The suggested general-purpose robots contain camera and sound sensors, and tasks and services are performed using image and audio data. The proposed Brainware Computing infrastructure provides only those services that can be performed using images, videos, audio signals, and the information obtained by processing them. Utilizing and processing these data, robots estimate and augment other possible information for understanding their surroundings. For example, Minakshi et al. [33] proposed the idea of rainfall prediction by analyzing cloud images, and Nilay et al. [34] proposed a technique for weather forecasting using satellite images, avoiding other sensors. Motivated by these works, we suggest replacing the other sensors with camera and audio sensors. For any particular task, the dedicated corresponding brain images are responsible for extracting the necessary environmental information from the image, video, and audio data. Thus, both sensor and storage costs are reduced in the Brainware Computing infrastructure.
The overall flow of execution of Brainware Computing can be divided into several concerns, illustrated in Figures 2 and 3.
Figure 2. A single operational flow between edge devices and edge cloud with architecture.
Figure 3. Service-oriented brain searching, learning updates, and deployment (different colors represent different services).
In 2010, Yinong Chen et al. proposed Robot as a Service (RaaS) [16] based on the SOA architecture. These service-oriented architectures and robots are equipped to perform multi-domain tasks, but only in a fixed domain at a time. RaaS robots demand explicit programming by the developers to make them experts across multi-domain service spaces. In our Brainware Computing, BaaS does not require explicit programming: the whole end-to-end procedure for enabling, deploying, and removing brains inside the robots is handled by the edge cloud. A general-purpose robot with all the necessary hardware arrangements can perform any short- or long-term task. A present-day robot can perform multiple tasks, such as acting as a cop [35], waiter [36], pet [37], child carer [38], or autonomous car [39], but it needs to be explicitly programmed, and its flexibility in shifting domains is limited. In Brainware Computing, a general-purpose robot can do all the necessary tasks with high flexibility in terms of domain adaptation, serving as a car, cleaning robot, pet, UAV, etc. Brainware Computing offers the complete architecture of the Service-Oriented Brain (SOB). Based on the request of a service broker, the robot asks the nearest edge cloud for the service-specific brain in the form of a virtual machine that includes the corresponding service application. The edge cloud contains most of the brains appropriate for the corresponding services, for example, brains for UAVs, cars, pets, cleaning, cops, surveillance, etc. Upon receiving a request from the service broker, the general-purpose robot conveys the service request with metadata to the coordinating communication unit of the responsible edge. After processing the service demand, searching, and updating the brain, the coordinating communication unit transmits the requested brain image as a service to that particular robot, as sketched below.
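The following Python sketch traces that end-to-end BaaS flow; every object and method name (brain_store.search, service_aware_learning.update, and so on) is an illustrative assumption about the units described in this section, not a defined API.

```python
def serve_brain_request(robot, service, edge, core_cloud):
    """Sketch: service broker -> robot -> edge coordinating
    communication -> brain search/update -> deployment."""
    request = {"robot_id": robot.id, "service": service, "metadata": robot.metadata}
    edge.coordinating_communication.receive(request)

    brain = edge.brain_store.search(service)       # 1. search the local brain store
    if brain is None:
        brain = edge.search_other_edges(service)   # 2. ask neighboring edges
    if brain is None:
        brain = core_cloud.fetch_brain(service)    # 3. fall back to the core cloud

    brain = edge.service_aware_learning.update(brain)  # refresh weights if needed
    edge.coordinating_communication.transmit(brain, to=robot)
    robot.deploy(brain)                            # robot boots the brain VM
    return brain
```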
Figure 4. Semantic environment understanding via deep-learning-based image-to-text translation: segmentation answers “what” (e.g., “three cars are running on a road”) and depth estimation answers “where” (relative distances between vehicles); after linguistic analysis and translation, coordinating communication searches for the matching brain (e.g., an “Autonomous Driving Brain”).
The inter-edge coordinating communication unit is the controller of Brainware Computing communication. Requests from robots to edges, control of communications between edges, the protocol for sharing a brain, sending models to the core cloud for updating, and receiving data and brains for computing and storage in the brain store are all performed by the inter-edge coordinating communication unit (see Figure 2).
multiple tasks. Furthermore, based on the needs, the edge can install a collaborative brain, which allows the robot to work with a human, sensing human responses and commands and reacting in a shared environment.
A self-driving car is a prime example of an autonomous mobile robot; in a factory, such robots detect, collect, and move products. These kinds of robots need various brains to operate for various purposes at different times. In industry, collaborative robots nowadays perform better: humans and robots work together in a shared environment. Humans have limitations in terms of speed, mistakes, tiredness, emotion, etc., and the accuracy of intelligent systems is not 100% because of the diversity and uncertainty of the real-world environment. Combining the performances of humans and cobots increases service performance. In our proposed technique, Brainware Computing installs the brain in the cobot, which takes commands, observes humans, learns from human intelligence, and works together with humans for different purposes at different times.
brain, etc. In this case, multiple brains collaborate with each other; these modules together create a driving model. After the task is performed, the driving brain, which consists of two modules, is removed from the robot. In the case of atomic tasks, on the other hand, consider a COBOT serving as a packaging machine, which helps humans with product packaging in a factory; here, a single computer vision model is installed to accomplish the whole task. So, for hybrid tasks, multiple brains work side-by-side, but for atomic tasks, a single brain works alone. Based on the service, different brains or a collection of corresponding brains are installed on and removed from the robot at different times.
In all of these scenarios, the necessary brains are installed, and after the task is performed, the brains are removed from the robot. Moreover, in some cases, for a successful execution, a collection of virtual machines must run side-by-side for the same purpose, controlled by the hypervisor, as sketched below.
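Here is a sketch of this atomic-versus-hybrid composition under the hypervisor; the task and edge objects are hypothetical placeholders for the units described above.

```python
def compose_brains(task, edge):
    """Sketch: an atomic task gets one brain; a hybrid task gets a set
    of brains launched side-by-side, one virtual machine per brain."""
    if task.is_atomic:
        services = [task.service]            # e.g., a packaging COBOT
    else:
        services = task.sub_services         # e.g., perception + control for driving
    brains = [edge.brain_store.search(s) for s in services]
    return [edge.hypervisor.launch(b) for b in brains]

def release_brains(vms, edge):
    for vm in vms:                           # after the task, every brain is removed
        edge.hypervisor.destroy(vm)
```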
5.2.1. Differentiation
With the increasing impact of robotics in diverse applications and the need to provide heterogeneous services in multiple sectors, robots are requested to perform multiple services at different times, such as cleaning robots, self-driving cars, UAVs, smart homes, etc. These service requests from multiple service brokers have different priorities and importance. For example, a service request for a patient-caring robot should be served before a service request for a cleaning robot, as the sketch below illustrates.
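One simple way to realize such differentiation is a priority queue over incoming requests; the priority table below is a made-up example, not a proposed ranking.

```python
import heapq

# Lower number = higher priority; patient care preempts cleaning.
PRIORITY = {"patient-care": 0, "self-driving": 1, "uav": 2, "cleaning": 3}
queue = []

def submit(service, request_id):
    heapq.heappush(queue, (PRIORITY.get(service, 9), request_id, service))

submit("cleaning", "req-1")
submit("patient-care", "req-2")
print(heapq.heappop(queue))  # (0, 'req-2', 'patient-care') is served first
```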
5.2.2. Extensibility
The purpose of Brainware Computing is to gradually provide any service requested by users. Existing robotic systems usually operate on multiple domains through Robot as a Service (RaaS), but demand explicit programming of the robots. As Brainware Computing contains all the necessary brains for all the services, the category of services may grow, and a new service may need to be registered with the core and edge cloud. The demands from service brokers are unknown, and the service spaces are uncertain and unseen. Robots need to increase the number of services and also update their learning. In Brainware Computing, we propose service-oriented learning to add new services and also to update the learning of existing models. This registry and discovery task is accomplished by the registry and discovery unit of the edge clouds, sketched below.
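A minimal sketch of such a registry and discovery unit, assuming brains are addressed by service name:

```python
class BrainRegistry:
    """Sketch of the registry and discovery unit: new services are
    registered with the core/edge cloud and discovered by name."""

    def __init__(self):
        self._brains = {}

    def register(self, service, brain_image):
        self._brains[service] = brain_image   # add a new service category

    def discover(self, service):
        return self._brains.get(service)      # None -> unknown; escalate to core cloud

registry = BrainRegistry()
registry.register("window-cleaning", "window_cleaning_brain_v1")  # extend services
assert registry.discover("window-cleaning") == "window_cleaning_brain_v1"
```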
5.2.3. Isolation
To ensure isolation in Brainware Computing, we suggest designing the architecture using a virtual machine for each unit of execution and container images for the brains. The overall core and edge cloud are designed on virtual-machine working principles, and all service brains and applications are contained in the form of container images. The execution flow for any particular service is performed across all the units, with a well-designed collection of virtual machine units sharing the processing. The processing task of one unit does not depend on other units. As every individual service brain and execution-block unit is isolated, using container images and virtual machines, respectively, the performance of one service does not depend on another brain. The execution blocks and brain images share processing capability. If a flow of execution is interrupted at an edge because the semantic environment understanding unit of that particular edge fails, then the nearest edge shares its semantic environment understanding unit to process the failed unit's task and returns the brain-searching decision to the searching unit of the first edge.
5.2.4. Reliability
There are two kinds of reliability issues in Brainware Computing: (1) Brainware reliability in terms of seamless service deployment, and (2) robot performance during execution.
• Service flow and deployment: Performing the overall service flow, from the service broker's request to service deployment, is quite challenging, as many unit operations must be performed. The individual, atomic, isolated systems should be designed so that the whole system remains maintainable. We prefer a virtual machine for every unit, in an isolated manner, where the controller and inter-edge coordinating communications are able to detect problems and notify all other edges, after which the controller offloads the task to the nearest edge or directly to the cloud. For example, if a service is requested from edge A and edge A fails to operate, then the units of A can detect the reason and notify the cloud, and the cloud shifts that particular service request to the nearest edge, ensuring seamless service (a minimal failover sketch follows this list).
• Robot performance during execution: Robot performance depends on the learning strategy and scale, and the performance of the brain is what concerns the user. Considering deep learning, for example, all the service brains are obtained after centralized training on large-scale data, such as ImageNet [49], CIFAR [50], or custom datasets for image processing, the Kinetics dataset [51] for action recognition, Cityscapes [52] for semantic segmentation, etc., so the performance of the trained deep learning model is remarkable. Furthermore, knowledge updating using federated and reinforcement learning is performed by service-aware learning to keep the models adaptive to uncertain scenarios.
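The failover behavior described in the first bullet could look like the following sketch; EdgeFailure, distance_to, and serve are hypothetical names standing in for the units above.

```python
class EdgeFailure(Exception):
    """Raised by an edge's units when a stage of the service flow fails."""

def handle_request_with_failover(request, edges, cloud):
    """Sketch: serve from the nearest edge; on failure, notify the
    cloud, which shifts the request to the next-nearest edge."""
    for edge in sorted(edges, key=lambda e: e.distance_to(request.robot)):
        try:
            return edge.serve(request)       # normal path: nearest edge serves
        except EdgeFailure as err:
            cloud.notify(edge, err)          # failed units report the reason
    return cloud.serve(request)              # last resort: the core cloud serves
```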
5.3. Optimization
In Brainware Computing, there are different levels of execution, at the service broker, robots, edge, and cloud, which are divided into multiple layers. These levels and layers perform different kinds of computation. In this proposed computing model, request allocation, task offloading, and synchronization are the core concerns. We need to design the infrastructure by specifying the computation and task processing at the different levels and layers. In the case of service-request execution, processing, service management, and deployment, there are some key factors that need to be considered and optimized. Based on the services, we need to focus on the trade-offs: some services require speedy deployment of the brain images, efficient use of energy for accurate services while interacting with the environment, and low latency using less bandwidth and energy.
5.3.1. Latency
There are four basic kinds of interactions in Brainware Computing: (1) robot–environment interactions, (2) robot–edge interactions, (3) inter-edge interactions, and (4) edge–cloud interactions. Considering performance, latency is one of the key metrics for evaluating the quality of service (QoS) and the response time of service requests. As the core and edge clouds have high computational capability, and the AI model's parameter estimation is done by the edges and core cloud during training, little computation time is needed at the robot. By the grace of the 5G internet, the transmission delay is drastically reduced. However, deploying or sharing brains among the edges and from cloud to edges, along with the computation for searching and preparing brain images, may add some latency; since the core and edge clouds have high computational capacity, high-speed networking keeps this latency low. In Brainware Computing, robots interact with the real-world environment driven by the brain image. As the brain model can be heavy (usually a deep learning model), robots only perform inference to react to the environment using the model; all training and model updating is performed on the core and edge clouds, respectively. Interaction between edges also incurs latency, but thanks to high-speed connectivity among the cloud and edges and their computation power, this problem can be mitigated. The latency is handled by dividing the computation across layers: for speedy interaction with the real-world environment, Brainware Computing places the majority of the workload on the cloud and edges, while robots only run inference with the trained model.
5.3.2. Bandwidth
Sharing and deploying brain images to robots with high bandwidth can reduce transmission delay, especially for a heavy deep learning model. Since, in Brainware Computing, service requests may arrive from remote places where internet connectivity is neither smooth nor seamless, the robot first deploys the whole brain and then performs offline; at service execution time, no bandwidth is required. While deploying the brain, the transmission latency depends on bandwidth: lower bandwidth increases transmission delay. In Brainware Computing, we prefer a trade-off. As all the workloads are performed in the core and edge cloud, little transmission is required afterward. The bandwidth between the core and edge cloud, or between the edge cloud and robot, is expected to be high for sharing the heavy model during deployment and weight sharing. We prefer model optimization and encoding–decoding techniques for sharing brain images to reduce the required bandwidth and transmission delay. As the distances between robots and edges are usually uncertain, to reduce bandwidth we prefer to compress the trained model and transmit it using a packet system, as sketched below. In summary, we focus on bandwidth and latency by training in the cloud, updating the model at the edges, performing inference only on robots, and sharing the brain using compression and decoding across the layers.
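A minimal sketch of the compress-and-packetize idea, using gzip as a stand-in for whatever model-compression codec is actually chosen:

```python
import gzip

def pack_brain(weights, chunk_size=64_000):
    """Compress serialized model weights, then split the result
    into fixed-size packets for transmission."""
    compressed = gzip.compress(weights)          # reduce the bandwidth needed
    return [compressed[i:i + chunk_size]
            for i in range(0, len(compressed), chunk_size)]

def unpack_brain(packets):
    return gzip.decompress(b"".join(packets))    # reassembled at the robot

payload = bytes(1_000_000)                       # stand-in for model weights
assert unpack_brain(pack_brain(payload)) == payload
```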
5.3.3. Energy
The robot runs on battery energy, its most important resource. To save the robot's battery energy, we move the whole workload of training and learning updates (federated and reinforcement learning) to the core and edge cloud. The robot's layer only shares metadata, installs brain images, and performs inference with the trained model, using low energy compared to related systems. This approach of installing the brain and performing only inference comparatively reduces the robot's energy consumption. We assign most of the computation workload to the edges and cloud to save the robot's battery energy.
Figure 6. Encoder–decoder example for transferring the encoded features of images instead of
actual data.
For security purposes, there is a unit called registry and discovery in the edge clouds. The following concerns should be addressed to ensure security in Brainware Computing. Saad Khan et al. [55] and Shalin Parikh et al. [56] introduced techniques to ensure security for fog and edge computing, respectively. Motivated by these concepts, in Brainware Computing we suggest the following steps (a minimal federated-averaging sketch follows the list):
• Federated learning to preserve data privacy in edge computing;
• Encryption of metadata along with access verification and authorization: the registry and discovery block is responsible for this task;
• To detect unauthorized access, the block includes intrusion detection systems (IDS) and multi-step verification systems;
• Analysis and identification via User Behaviour Profiling (UBP).
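For the first step, here is a minimal aggregation sketch in the spirit of federated learning [53]: edges contribute locally trained weights, never raw data, and the aggregator averages them weighted by local data volume.

```python
def federated_average(local_weights, sizes):
    """FedAvg-style aggregation: average each parameter across edges,
    weighted by the number of local samples at each edge."""
    total = sum(sizes)
    n_params = len(local_weights[0])
    return [sum(w[i] * s for w, s in zip(local_weights, sizes)) / total
            for i in range(n_params)]

# Two edges with 3-parameter models and different data volumes:
print(federated_average([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]], sizes=[100, 300]))
# -> [2.5, 3.5, 4.5]
```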
6. Conclusions
Recently, numerous service-oriented infrastructures have been proposed with the help of edge and cloud computing, which ensure strong connectivity and data processing and sharing with low latency and high reliability. Existing robotics technology cannot properly utilize all of these resources; moreover, existing robots are not multi-domain experts. Though Robot as a Service (RaaS) provides some flexibility and programmability, it involves no edges, hence it incurs high latency and demands external programming by the developers. Leveraging the advancements of edge and cloud computing, Brainware Computing pushes the boundary of possibilities of robotics technology: it allows robots to be experts in multi-domain systems using different brains at different times, without any external coding required by the developers, while needing low latency and bandwidth. We proposed a computing infrastructure, termed Brainware Computing, along with its scopes and possibilities. Furthermore, we provided an extensive explanation of the challenges and opportunities together with resource optimization. In introducing Brainware Computing, we explained every unit extensively, each of which indicates a new research direction. We likewise believe that the challenges and opportunities discussed in this paper point to new research areas. We hope that the proposed Brainware Computing will revolutionize the automation industry with the help of Brain as a Service (BaaS).
References
1. Perrey, R.; Lycett, M. Service-oriented architecture. In Proceedings of the Symposium on Applications and the Internet Workshops, Orlando, FL, USA, 27–31 January 2003; pp. 116–119.
2. Peshkin, M.A.; Colgate, J.E.; Wannasuphoprasit, W.; Moore, C.A.; Gillespie, R.B.; Akella, P. Cobot architecture. IEEE Trans. Robot.
Autom. 2001, 17, 377–390.
3. Butter, M.; Rensma, A.; Kalisingh, S.; Schoone, M.; Leis, M.; Gelderblom, G.J.; Korhonen, I. Robotics for Healthcare; European
Commission EC: Brussels, Belgium, 2008.
4. Dario, P.; Guglielmelli, E.; Allotta, B. Robotics in medicine. In Proceedings of the IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS’94), Munich, Germany, 1 January 1994; Volume 2, pp. 739–752.
5. Taylor, R.H.; Kazanzides, P.; Fischer, G.S.; Simaan, N. Medical robotics and computer-integrated interventional medicine. Biomed.
Inf. Technol. 2008, 73, 617–672.
6. Demir, S.; Paksoy, T. AI, Robotics and Autonomous Systems in SCM. In Logistics 4.0: Digital Transformation of Supply Chain
Management; CRC Press: Boca Raton, FL, USA, 2020; p. 156.
7. Ahmed, I.; Din, S.; Jeon, G.; Piccialli, F.; Fortino, G. Towards collaborative robotics in top view surveillance: A framework for
multiple object tracking by detection using deep learning. IEEE/CAA J. Autom. Sin. 2020, 8, 1253–1270.
8. Macrorie, R.; Marvin, S.; While, A. Robotics and automation in the city: A research agenda. Urban Geogr. 2020, 42, 1–21.
9. Golubchikov, O.; Thornbush, M. Artificial Intelligence and Robotics in Smart City Strategies and Planned Smart Development.
Smart Cities 2020, 3, 1133–1144.
10. Petrlík, M.; Báča, T.; Heřt, D.; Vrba, M.; Krajník, T.; Saska, M. A robust uav system for operations in a constrained environment.
IEEE Robot. Autom. Lett. 2020, 5, 2169–2176.
11. Bhatt, P.M.; Malhan, R.K.; Shembekar, A.V.; Yoon, Y.J.; Gupta, S.K. Expanding capabilities of additive manufacturing through use
of robotics technologies: A survey. Addit. Manuf. 2020, 31, 100933.
12. Grimble, M.J.; Majecki, P. Nonlinear Automotive, Aerospace, Marine and Robotics Applications. In Nonlinear Industrial Control
Systems; Springer: London, UK, 2020; pp. 699–759.
13. Dubey, A.; Wagle, D. Delivering software as a service. Mckinsey Q. 2007, 2007, 6.
14. Dawoud, W.; Takouna, I.; Meinel, C. Infrastructure as a service security: Challenges and solutions. In Proceedings of the 7th
International Conference on Informatics and Systems (INFOS), Cairo, Egypt, 28–30 March 2010; pp. 1–8.
15. Keller, E.; Rexford, J. The “Platform as a Service” Model for Networking. INM/WREN 2010, 10, 95–108.
16. Chen, Y.; Du, Z.; Garcia-Acosta, M. Robot as a service in cloud computing. In Proceedings of the Fifth IEEE International
Symposium on Service Oriented System Engineering, Nanjing, China, 4–5 June 2010; pp. 151–158.
17. Kurfess, T.R. (Ed.) Robotics and Automation Handbook; CRC Press: Boca Raton, FL, USA, 2018.
18. Zunt, D. Who did actually invent the word “robot” and what does it mean? The Karel Čapek Website, 2005. Retrieved 9 November 2011.
19. Clarke, R. Asimov's laws of robotics: Implications for information technology, Part 2. Computer 1994, 27, 57–66.
20. Shi, Y.; Wang, N.; Zheng, J.; Zhang, Y.; Yi, S.; Luo, W.; Sycara, K. Adaptive Informative Sampling with Environment Partitioning
for Heterogeneous Multi-Robot Systems. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Las Vegas, NV, USA, 24–30 October 2020; pp. 11718–11723.
21. Buehler, J. Capabilities in heterogeneous multi-robot systems. In Proceedings of the AAAI Conference on Artificial Intelligence,
Toronto, ON, Canada, 22 July 2012; Volume 26, No. 1.
22. McKerrow, P.J.; McKerrow, P. Introduction to Robotics; Addison-Wesley: Sydney, Australia, 1991; Volume 3.
23. Caruana, R. Multitask learning. Mach. Learn. 1997, 28, 41–75.
24. Zhang, Y.; Yang, Q. A survey on multi-task learning. IEEE Trans. Knowl. Data Eng. 2021.
25. Sener, O.; Koltun, V. Multi-task learning as multi-objective optimization. arXiv 2018, arXiv:1810.04650.
26. De Looper, C. What Is 5G? The Next-Generation Network Explained. Digital Trends. 5 May 2020. Available online: https:
//www.digitaltrends.com/mobile/what-is-5g/ (accessed on 6 June 2021).
27. Armbrust, M.; Fox, A.; Griffith, R.; Joseph, A. D.; Katz, R.; Konwinski, A.; Zaharia, M. A view of cloud computing. Commun.
ACM 2010, 53, 50–58.
28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
29. Crasto, N.; Weinzaepfel, P.; Alahari, K.; Schmid, C. Mars: Motion-augmented rgb stream for action recognition. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7882–7891.
30. Gopal, B.G.; Kuppusamy, P.G. A comparative study on 4G and 5G technology for wireless applications. IOSR J. Electron. Commun.
Eng. 2015, 10, 67–72.
31. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge computing: Vision and challenges. IEEE Internet Things J. 2016, 3, 637–646.
32. Zhang, Y.; Lu, M. A review of recent advancements in soft and flexible robots for medical applications. Int. J. Med. Robot. Comput.
Assist. Surg. 2020, 16, e2096.
33. Gogoi, M.; Devi, G. Cloud Image Analysis for Rainfall Prediction: A Survey. Adv. Res. Electr. Electron. Eng. 2015, 2, 13–17.
34. Kapadia, N.S.; Rana, D.P.; Parikh, U. Weather Forecasting using Satellite Image Processing and Artificial Neural Networks. Int. J. Comput. Sci. Inf. Secur. 2016, 14, 1069.
35. Wired Blog, Robot Cops to Patrol Korean Streets. 17 January 2006. Available online: https://www.wired.com/2006/01/robot-
cops-to-p/ (accessed on 22 March 2021).
36. Cheong, A.; Lau, M.W.S.; Foo, E.; Hedley, J.; Bo, J.W. Development of a robotic waiter system. IFAC-PapersOnLine 2016, 49, 681–686.
37. Robot Pets. Available online: http://en.wikipedia.org/wiki/AIBO (accessed on 22 March 2021).
38. Ian Hamilton, Robot to be Added at Hoag Hospital Irvine. InTouch News, 8 October 2009. Available online: http://www.
intouchhealth.com/ (accessed on 22 May 2021).
39. Liu, S.; Liu, L.; Tang, J.; Yu, B.; Wang, Y.; Shi, W. Edge Computing for Autonomous Driving: Opportunities and Challenges. Proc.
IEEE 2019, 7, 1697–1716, doi:10.1109/JPROC.2019.2915983.
40. Varghese, B.; Wang, N.; Barbhuiya, S.; Kilpatrick, P.; Nikolopoulos, D.S. Challenges and opportunities in edge computing. In
Proceedings of the 2016 IEEE International Conference on Smart Cloud (SmartCloud), New York, NY, USA, 18–20 November 2016;
pp. 20–26.
41. Wang, J.; Balazinska, M. Elastic Memory Management for Cloud Data Analytics. In Proceedings of the 2017 USENIX Annual
Technical Conference (USENIXATC 17), Santa Clara, CA, USA, 12–14 July 2017; pp. 745–758.
42. Deshmukh, P.P.; Amdani, S.Y. Virtual Memory Optimization Techniques in Cloud Computing. In Proceedings of the 2018 Interna-
tional Conference on Research in Intelligent and Computing in Engineering (RICE), San Salvador, El Salvador, 22–24 August 2018;
pp. 1–4.
43. Docker Container. Available online: https://www.docker.com/ (accessed on 25 February 2021).
44. Kubernetes. Available online: http://kubernetes.io/ (accessed on 25 February 2021).
45. Vavilapalli, V.K.; Murthy, A.C.; Douglas, C.; Agarwal, S.; Konar, M.; Evans, R.; Graves, T.; Lowe, J.; Shah, H.; Seth, S.; et al. Apache
Hadoop YARN: Yet another resource negotiator. In Proceedings of the 4th Annual Symposium on Cloud Computing (SOCC’13),
New York, NY, USA, 1–3 October 2013; pp. 5:1–5:16.
46. Forouzan, B.A. TCP/IP Protocol Suite; McGraw-Hill Higher Education: New York, NY, USA, 2002.
47. López, J.M.; Díaz, J.L.; Entrialgo, J.; García, D. Stochastic analysis of real-time systems under preemptive priority-driven
scheduling. Real-Time Syst. 2008, 40, 180.
48. Jang, J.; Jung, J.; Cho, Y.; Choi, S.; Shin, S.Y. Design of a lightweight TCP/IP protocol stack with an event-driven scheduler. J. Inf.
Sci. Eng. 2012, 28, 1059–1071.
49. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the
2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
50. Krizhevsky, A.; Hinton, G. Convolutional deep belief networks on CIFAR-10. Unpublished manuscript, 2010, 40, 1–9.
51. Carreira, J.; Zisserman, A. Quo vadis, action recognition? A new model and the kinetics dataset. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6299–6308.
52. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Schiele, B. The cityscapes dataset for semantic urban
scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA,
27–30 June 2016; pp. 3213–3223.
53. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol. (TIST)
2019, 10, 1–19.
54. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation.
IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
55. Khan, S.; Parkinson, S.; Qin, Y. Fog computing security: A review of current applications and security solutions. J. Cloud Comput.
2017, 6, 1–22.
56. Parikh, S.; Dave, D.; Patel, R.; Doshi, N. Security and privacy issues in cloud, fog and edge computing. Procedia Comput. Sci. 2019,
160, 734–739.