
DIGICOSME PhD proposal

VehiCloud - How can Vehicles increase Cloud intelligence?

Supervision team
Lila Boukhatem (LISN, Université Paris Saclay, France)
Andrea Araldo (Télécom SudParis – SAMOVAR, France)
Nadjib Achir (INRIA Saclay TRiBE, France)
Collaborator: Aline Carneiro Viana (INRIA Saclay TRiBE, France)

Summary
The idea of using computation resources deployed on vehicles to execute tasks generated by mobile users’
devices has attracted the interest of the research community. However, it is still not clear under which
conditions it is beneficial to integrate such resources with the entire computation infrastructure. The objective
of the PhD project can be summarized as answering the following questions: what is Vehicular Cloud Computing
(VCC) good for? And under which conditions?
Many works consider VCC in isolation or together with Cloud Computing (CC). However, some of the benefits
expected from VCC could be already (and better) satisfied by Edge Computing (EC). For this reason, we take a
holistic view, considering the entire Computation Ecosystem (CoE), composed of CC, VCC and EC. Most of the
work on VCC proposes yet-another-offloading strategy. Since we aim to answer the fundamental questions
above, we take instead a different approach. We measure the overall performance of the CoE, in terms of
mobile device energy consumption and QoS metrics, with and without VCC, with a heterogeneous set of
application classes. We systematically study under which conditions (vehicle mobility, VCC penetration rate,
vehicle density, user load, application traffic class composition) adding VCC into the CoE brings performance
gain.
By doing so, we can understand in which cases VCC can benefit the CoE and reduce the need for EC
infrastructure (which is costly to deploy), and in which cases, instead, such a benefit would be marginal. To this
aim, we use simulation and analytical models of connectivity, mobility and application task inter-dependency.
Instrumental to answering our fundamental questions, we seek a system-optimal offloading strategy, resorting
to learning-based optimization.
Keywords: Vehicular Cloud; Cloud Computing; Vehicular Networks; Edge Computing.

PhD Research Program


Introduction
Around 250 million connected vehicles were expected to be on our roads by 2025 [1], increasingly equipped
with powerful computation devices. Interest has recently grown around Vehicular Cloud Computing (VCC) [2],
based on the observation that the underutilized computational resources in vehicles can serve as a computing
infrastructure that extends edge and cloud capacities. The benefits are (i) vicinity to end-users and (ii) no need
to invest in dedicated infrastructure. Indeed, both Cloud Computing (CC) and Edge Computing (EC) may be
severely constrained by the high cost of infrastructure deployment, which is not the case for vehicular
computing, which is infrastructure-less. For all these reasons, VCC is an excellent candidate to complement the
CC and EC ecosystem at low cost. However, the benefits of such an integration have not been
systematically studied. With this PhD, we aim to fill this gap. Here, we take a holistic view, defining the
Computation Ecosystem (CoE) as the union of CC, EC and VCC.
State of the art
Mobile applications generate a set of computation tasks, corresponding to their constituent methods and
functions [3]. Recently, interest has grown around the idea of offloading some of the “offloadable” tasks onto
external resources, e.g., EC or CC. A huge literature exists on offloading tasks on static edge servers [4], in
particular on the decision of whether to offload, resorting to Lyapunov Optimization [5, 6] or game theory [7].
The idea of offloading tasks on vehicles is explored by [8, 9], where, however, vehicles are assumed to be static.
More recent work tackles the more challenging dynamic and stochastic case of moving vehicles [2], where
however the offloading can only occur on VCC. In [10, 11] EC, CC and VCC are jointly considered. However, in
the former, a task is offloaded on VCC only if it fails to be offloaded on CC or EC. By contrast, we do not
consider VCC exclusively as a backup: our offloading policies may choose, under some contextual conditions
(for instance, a high request rate), to preferentially offload some classes of tasks (see Table 1) on VCC, to keep EC
and CC resources free for further computation. In [11], multiple inter-dependent tasks are considered, but they
are picked from the ones already enqueued in the EC node, while our tasks are directly produced by devices.
Objectives
While different offloading strategies have been proposed, to the best of our knowledge the following questions
are still unanswered: What is Vehicular Cloud (VCC) good for? And under which conditions? The PhD project
aims to systematically answer them. We evaluate the benefits of integrating VCC into future network
architectures, together with CC and EC. We translate these high-level questions into concrete operational
objectives as follows. We assume mobile users generate tasks of different traffic classes, defined in terms of
latency constraints (high vs. low responsiveness) and bandwidth requirements (data-intensive vs. data-light).
Examples are listed below:

                      Data-intensive            Data-light
High responsiveness   Augmented Reality [12]    Online Gaming (when only game commands transit over the net [4])
Low responsiveness    Video Livecast [13]       Antivirus [5, 7]

Table 1: Examples of applications

In simulation, we propose to measure the performance of the computing system in terms of (i) application
latency (and the probability of violating the latency constraints of the applications considered), (ii) mobile user
energy consumption, and (iii) the amount of traffic generated in the backhaul. We aim to study the performance
of the CoE with and without VCC. We define a set of contextual conditions (vehicle mobility, VCC penetration
rate, vehicle density, amount of resources deployed in EC, load of the different task classes) and systematically
study how the performance varies with them. The overall goal is to understand in which contexts VCC brings relevant
performance gain, to guide operators in planning the evolution of their access network infrastructure.
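
As an illustration of the systematic study we have in mind, the following Python sketch sweeps a grid of contextual conditions; all parameter names, value ranges and the simulate() stub are hypothetical placeholders, not the actual evaluation setup.

```python
# Hypothetical sketch of the systematic evaluation grid; parameter names,
# value ranges and simulate() are illustrative placeholders only.
from itertools import product

contexts = {
    "vcc_penetration": [0.0, 0.1, 0.3, 0.5],  # 0.0 = CoE without VCC (baseline)
    "vehicle_density": [10, 50, 100],         # vehicles per km^2
    "ec_capacity":     [4, 16, 64],           # CPU cores per edge site
    "user_load":       [100, 500, 1000],      # tasks per second
}

def simulate(**scenario):
    """Placeholder: the real version would run the mobility/network
    simulation and return the metrics (i)-(iii) listed above."""
    return {"p_violation": 0.0, "energy_j": 0.0, "backhaul_mb": 0.0}

results = []
for combo in product(*contexts.values()):
    scenario = dict(zip(contexts, combo))
    results.append({**scenario, **simulate(**scenario)})
```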
Challenges
The key challenges that we need to tackle are: (i) Extremely large scale of the Computation Ecosystem (CoE)
and heterogeneity of its computation nodes, in terms of mobility, connectivity and amount of computation
resources. (ii) The CoE is highly dynamic and stochastic: resources available and quality of connections change
over time and cannot be exactly known. (iii) Application diversity, in terms of characteristics, structure and
constraints.

Research work and methodology


Model of the CoE
We represent the entire CoE in a unified model, in which devices produce tasks and nodes, such as cloud
servers, micro-servers deployed at the network edge, or vehicles, expose computation capabilities. Nodes
have different mobility (only vehicle-nodes actually move). Many works on task placement in CC, VCC or EC
have neglected the practical communication constraints between devices, which clearly leads to impractical
solutions, as the reliability, quality and availability of wireless communication links are highly variable,
especially in vehicular and dense mobile urban environments [14]. We instead assume nodes communicate
with 5G connectivity (in particular eMBB [15], since tasks are generated by smartphones) and model the
corresponding constraints.
An important challenge will be to define the optimal level of abstraction for characterizing the
communication resources in the system model, striking a balanced tradeoff between accuracy and
tractability. Apart from wireless resources, the computational resources available in a node vary due to several
tasks being executed concurrently, some of which are critical and high priority (e.g., tasks in a vehicle-node
related to navigation) and may take up all the resources. Since resource availability and connectivity cannot be
exactly predicted, we model the CoE as a stochastic dynamic network [16]. Due to the large scale of the CoE,
our offloading decisions are distributed. A device generating a task can decide either to (i) execute it locally or
(ii) offload it. In the latter case, the task will be received by an edge agent, which decides either to (i) execute it
in one edge server or (ii) forward it to one of the connected vehicles or (iii) forward it to the remote cloud.
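
As a purely illustrative example, the following self-contained Python sketch mimics the second decision level described above, where an edge agent places an offloaded task on EC, VCC or CC; the greedy latency-based rule and all numbers are assumptions of ours, not the learning-based strategy the PhD will develop.

```python
# Minimal sketch of the edge agent's placement decision. All names, numbers
# and the greedy feasibility rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    input_bytes: int       # data sent to the executing node
    output_bytes: int      # data received back as result
    mega_cycles: float     # required computation
    deadline_ms: float     # QoS latency constraint

@dataclass
class Node:
    kind: str              # "edge", "vehicle" or "cloud"
    mcycles_per_ms: float  # compute speed currently available
    rtt_ms: float          # current round-trip time to the device

    def estimated_latency(self, t: Task) -> float:
        # Crude estimate: network round trip plus execution time.
        return self.rtt_ms + t.mega_cycles / self.mcycles_per_ms

class EdgeAgent:
    """Second decision level: place an offloaded task on EC, VCC or CC."""
    def __init__(self, nodes):
        self.nodes = nodes

    def place(self, t: Task) -> Node:
        feasible = [n for n in self.nodes
                    if n.estimated_latency(t) <= t.deadline_ms]
        if feasible:  # greedy placeholder policy: fastest feasible node
            return min(feasible, key=lambda n: n.estimated_latency(t))
        return next(n for n in self.nodes if n.kind == "cloud")  # fallback

agent = EdgeAgent([Node("edge", 50, 2), Node("vehicle", 30, 5),
                   Node("cloud", 200, 40)])
print(agent.place(Task(1_000, 100, 300.0, 20.0)).kind)  # -> "edge"
```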
Learning for offloading
We need to solve a placement problem, i.e., to decide in which node to run each task: into the mobile device
itself or in one of the CC, EC or VCC nodes. We aim to optimize the overall performance, while satisfying all
application requirements (see Table 1). Since the availability of computation resources on nodes and the
quality of the communication links continuously change, we resort to learning strategies. However, the
“learning” of previous work [2] has several limitations: (i) each device learned whether it is convenient to
offload on a node by interacting with it multiple times; however, in our case, such knowledge would soon
become unusable, since a vehicle-node may be visible for just a few seconds; (ii) each device learned by itself
and could not benefit from the experience of the others; (iii) the calculation of the offloading policy was too
computation- and communication-intensive for the device. We seek instead the following features for our offloading strategy. It
must be Global: a single offloading policy will be computed via distributed Reinforcement Learning (dRL) [17],
by all edge agents, which collect offloading experiences (delay, success) of multiple devices in proximity and
collaboratively train a single RL policy. Our strategy will be Long-term, in the sense that the offloading decisions
will not be based on the specific nodes that happened to be around a device at a certain point in time. They will
instead be based on a more general state, which statistically describes, from the viewpoint of an edge agent,
the ensemble of the reachable nodes in the CoE, in terms of resource utilization, mobility and channel quality.
Since such a general state would be experienced multiple times, the learned policy actions would be valid in the
long term. Our strategy will have no computation cost for the device as all the RL training will occur in the edge
agents. A single trained RL policy will be periodically distributed across edge agents and devices, which will then
simply apply the prescribed decisions on forthcoming tasks. In this way, our offloading decisions can be
instantaneous, and thus able to support low-latency constraints.
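
The sketch below illustrates the "general state" idea in Python: the edge agent aggregates statistics over the ensemble of reachable nodes and learns one value per (state, action) pair. The chosen features, discretization and bandit-style tabular update are illustrative assumptions; the actual work targets full distributed RL [17].

```python
# Sketch of the "general state" idea: an edge agent describes the ensemble
# of reachable nodes statistically instead of tracking specific vehicles.
# Features, discretization and the tabular update are illustrative only.
import random
from collections import defaultdict

def general_state(nodes):
    """Aggregate, coarsely discretized view of the reachable nodes."""
    util = sum(n["utilization"] for n in nodes) / len(nodes)
    chan = sum(n["channel_quality"] for n in nodes) / len(nodes)
    speed = sum(n["speed_mps"] for n in nodes) / len(nodes)
    return (round(util, 1), round(chan, 1), int(speed // 5))

ACTIONS = ["EC", "VCC", "CC"]
q_values = defaultdict(float)  # shared by all edge agents

def decide(state, eps=0.1):
    if random.random() < eps:  # occasional exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[(state, a)])

def update(state, action, reward, alpha=0.1):
    # Incremental average of observed rewards (delay/success feedback
    # collected from nearby devices); a full RL agent would also
    # bootstrap on the value of the next state.
    q_values[(state, action)] += alpha * (reward - q_values[(state, action)])

nodes = [{"utilization": 0.4, "channel_quality": 0.8, "speed_mps": 12},
         {"utilization": 0.7, "channel_quality": 0.5, "speed_mps": 0}]
s = general_state(nodes)
a = decide(s)
update(s, a, reward=1.0)  # e.g., the task met its deadline
```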
Task models
Concerning the application model and constraints, we first consider simple single-task applications, which can
be modeled by (1) the amount of data that needs to be sent, (2) the amount of data received as results, (3) the
amount of required computation resources, and (4) the related QoS requirements. However, we believe
that the CoE we consider is a perfect match for modern multi-task applications, which can take advantage of
task parallelization to reduce application completion time. In addition to their QoS requirements, multi-task
applications can be modeled as a Directed Acyclic Graph (DAG) of tasks under precedence constraints and
inter-dependencies. Unfortunately, most works in the literature [18] assume either a single edge
server or independent tasks, in addition to a non-dynamic ecosystem, which does not correspond to the CoE
that we are considering (i.e., a large number of edge nodes and high mobility of users and vehicles). To take
into account these aspects, we propose to adapt the rich literature on graph-based task scheduling on
multiprocessing systems [19]. Unfortunately, even though these algorithms can be very efficient, they cannot
be directly applied in our CoE, since they mostly assume that all computation happens in the same cluster, and
thus that inter-task communication delays are low or constant, which would be unrealistic in our case.
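
For concreteness, the following Python sketch models a hypothetical multi-task application as a DAG and computes a precedence-respecting execution order; it deliberately ignores the variable inter-task communication delays that, as argued above, make scheduling hard in our CoE.

```python
# Minimal sketch of a multi-task application as a DAG under precedence
# constraints. The four-task application is hypothetical.
from collections import deque

# task -> list of tasks that must complete before it can start
deps = {"decode": [], "detect": ["decode"], "track": ["decode"],
        "render": ["detect", "track"]}

def topological_order(deps):
    indegree = {t: len(parents) for t, parents in deps.items()}
    children = {t: [] for t in deps}
    for task, parents in deps.items():
        for p in parents:
            children[p].append(task)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for c in children[task]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return order  # e.g., "detect" and "track" could run in parallel

print(topological_order(deps))  # ['decode', 'detect', 'track', 'render']
```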
We aim to show that offloading may or may not be beneficial for single-task and multi-task applications,
depending on the context, i.e., the application parameters and the CoE configuration.
Expected results
We will combine extensive simulation campaigns and theoretical analysis. In particular, the milestones will
be:
(1) Definition of CoE model and framing into the SOTA.
(2) Creation of a simulation model (based on open-source frameworks, e.g., OMNeT++, INET, SimuLTE and their
evolutions) to simulate both the movement of vehicles and the network communications.
(3) Development of learning strategies for calculating offloading policies and comparison with the SOTA. Study
of the analytical properties of the strategies. Consideration of multi-task applications.
(4) Performance evaluation; drawing conclusions on the contextual conditions in which VCC is beneficial.
References
[1] Alfonso Velosa et al. Predicts 2015: The internet of things. Gartner Inc, 2015.
[2] Y. Sun et al. Adaptive Learning-Based Task Offloading for Vehicular Edge Computing. IEEE Tr. on Veh.
Technol., 2019.
[3] A. Zanni et al. Automated selection of offloadable tasks for mobile computation offloading in edge
computing. In CNSM, ’17.
[4] F. Messaoudi et al. Toward a mobile gaming based-computation offloading. In IEEE ICC. IEEE, 2018.
[5] Y. Kim et al. Mobile Computation Offloading for Application Throughput Fairness and Energy. IEEE
Tr.Wirel.Comm., ’19.
[6] Ouyang et al. Follow me at the edge: Mobility-aware dynamic service placement for mobile edge computing.
IEEE JSAC, ’18.
[7] T.Q. Dinh et al. Learning for computation offloading in mobile edge computing. IEEE Tr. on Commun., 66(12),
2018.
[8] A. A. Alahmadi, Ahmed Q. Lawey, et al. Distributed processing in vehicular cloud networks. In NOF, 2017.
[9] S. Arif et al. Datacenter at the airport: Reasoning about time-dependent parking lot occupancy. IEEE TPDS,
23(11), 2012.
[10] H. Zhang et al. Toward vehicle-assisted cloud computing for smartphones. IEEE Tr. on Veh. Technol.,
64(12), 2015.
[11] F. Sun et al. Cooperative Task Scheduling for Computation Offloading in Vehicular Cloud. IEEE Tr. on Veh.
Technol., 2018.
[12] A. Ben-Ameur, A. Araldo, et al. Deployability of Aug. Reality Using Embedded Edge Devices. In IEEE CCNC,
2021.
[13] Q. He et al. Fog-Based Transcoding for Crowdsourced Video Livecast. IEEE Commun. Mag., 55(4), 2017.
[14] X. Hou et al. Vehicular Fog Computing: A Viewpoint of Vehicles as the Infrastructure. IEEE Tr. on Veh. Tech.,
65(6), 2016.
[15] NR and NG-RAN Overall Description-Release 15. 3GPP, document TS 38.300, 2018.
[16] Amelina et al. Approximate Consensus in Stochastic Networks with Application to Load Balancing. IEEE
Tr.Inf.Theory, ’15.
[17] Sartoretti et al. Distributed Reinforcement Learning for Multi-robot Decentralized Collective Construction. In
DARS, 2019.
[18] Chen et al. Multi-user multi-task computation offloading in green mobile edge cloud computing. IEEE
Tr.Serv.Comp., ’19.
[19] Özkaya et al. A scalable clustering-based task scheduler for homogeneous processors using DAG partitioning.
In IEEE IPDPS, ’19.

Required background for applicants


The position is suited to a student with an M.S. in computer science (ideally with a specialization in
networking and optimization) and the theoretical background needed to perform the theoretical
analysis (optimization and algorithm design). Good programming skills (Matlab, Python or NS3)
are required. Knowledge and skills in learning-based algorithms would be appreciated.

Application
Required documents are:
- CV.
- A cover/motivation letter describing the interest in this topic.
- Degree certificates and transcripts for Bachelor and Master (or the last 5 years).
- A recommendation letter from the Master's thesis supervisor (or research project or internship
supervisor), to be sent directly to the contact person.

Application deadline 10/05/2021.

The PhD student will be funded through a Digicosme PhD allocation (three years' duration). More details
can be provided on request.

Contact
Lila Boukhatem: Lila.Boukhatem@universite-paris-saclay.fr
Andrea Araldo: andrea.araldo@telecom-sudparis.eu
Nadjib Achir: nadjib.achir@inria.fr
