Abstract—This paper proposes a novel routing protocol for OppNets called MLProph, which uses machine learning (ML) algorithms, namely decision tree and neural networks, to determine the probability of successful deliveries. The ML model is trained using various factors such as the predictability value inherited from the PROPHET routing scheme, node popularity, node power consumption, speed, and location. Simulation results show that MLProph outperforms PROPHET+, a probabilistic routing protocol for OppNets, in terms of number of successful deliveries, dropped messages, overhead, and hop count, at the cost of small increases in buffer time and buffer occupancy values.

Index Terms—Decision tree, delay-tolerant networks, machine learning (ML), neural networks, opportunistic networks (OppNets), PROPHET+.

Manuscript received July 15, 2016; revised September 20, 2016; accepted October 4, 2016. This work was supported in part by Finep, with resources from Funttel under Grant 01.14.0231.00, under the Centro de Referência em Radiocomunicações (CRR) project of the Instituto Nacional de Telecomunicações (Inatel), Brazil, in part by the Government of Russian Federation under Grant 074-U01, and in part by Instituto de Telecomunicações, NetGNA, Covilhã Delegation. The work of J. J. P. C. Rodrigues was also supported by the Fundação para a Ciência e a Tecnologia funding Project UID/EEA/500008/2013.
D. K. Sharma and S. K. Dhurandher are with the CAITFS, Division of Information Technology, Netaji Subhas Institute of Technology, University of Delhi, Delhi 110021, India (e-mail: dk.sharma1982@yahoo.com; s.dhurandher.in@ieee.org).
I. Woungang is with the Department of Computer Science, Ryerson University, Toronto, ON M5B 2K3, Canada (e-mail: iwoungan@scs.ryerson.ca).
R. K. Srivastava is with the Ohio State University, Columbus, OH 43210 USA (e-mail: srivastava.141@osu.edu).
A. Mohananey is with the Department of Software Development Engineering, Infibeam.com, Nehrunagar, Ahmedabad 380015, Gujarat, India (e-mail: anhadmohananey@gmail.com).
J. J. P. C. Rodrigues is with the National Institute of Telecommunications (Inatel), Santa Rita do Sapucaí-MG 37540-000, Brazil, also with the Instituto de Telecomunicações, Universidade da Beira Interior, Covilhã 6201-001, Portugal, and also with the ITMO University, St. Petersburg 197101, Russia (e-mail: joeljr@ieee.org).
Digital Object Identifier 10.1109/JSYST.2016.2630923

I. INTRODUCTION

OPPORTUNISTIC networks (OppNets) [1] are a type of challenged networks in which link performance is highly variable and network contacts are intermittent. As such, routing in OppNets is a challenge, since nodes are required to buffer their packets until they find suitable forwarders that can eventually carry these packets to their destinations with minimal delay.

Routing protocols for OppNets [2] can be broadly categorized into: 1) infrastructure-based routing protocols, for which the presence of some form of infrastructure that assists in forwarding the packets to their destinations is assumed; and 2) infrastructure-less routing protocols, where no such assumption prevails and only the contact opportunities are used to route the packets within the network. In this paper, a new infrastructure-less routing protocol for OppNets (called MLProph) is introduced as an improvement of the PROPHET+ routing protocol [4]. MLProph uses a machine learning (ML) technique to train itself based on various factors such as buffer capacity, hop count, node energy, node movement speed, popularity parameter, and number of successful deliveries. The ML algorithm is trained on past network routing data in order to yield an equation that computes the probability that the node in contact will eventually be able to deliver the message to its intended destination. This value is then used to decide on the next hop for the buffered message.

The rest of the paper is organized as follows. In Section II, some related work on routing protocols for infrastructure-less OppNets is presented. In Section III, the proposed MLProph routing protocol is described. In Section IV, the ML process in MLProph is described. In Section V, the simulation results are presented. Finally, Section VI concludes the paper.
II. RELATED WORK

Various routing protocols for OppNets have been proposed in the literature. The most representative ones are described as follows. In [3], Vahdat et al. proposed the Epidemic protocol, where the sender node floods the network with multiple copies of the message it intends to deliver to the destination node. It does so by distributing a copy of the message to every node that it comes in contact with, which in turn distributes its copy to every neighboring node. This process continues until one of the copies of the message gets delivered to the destination node. This protocol has a high eventual delivery rate at the cost of high network resource consumption. In [2], Boldrini et al. proposed HiBOp, a context-based opportunistic routing protocol which involves the use of an Identity table that stores the context of the node and a History table that records the attributes from the Identity tables of previously encountered neighboring nodes. These data structures are used to determine the number of initial copies of the original message to be distributed by the sender node and to calculate the delivery predictability based on which the next hop for a message can be selected. In [5], Dhurandher et al. proposed a routing protocol for OppNets (called HBPR) that utilizes contextual information for next hop selection. In their scheme, the behavioral information of a node is taken into account when determining the next best hop that the node should pass the message to, based on the calculation of a direction predictor of the destination node by using a Markov predictor and a utility metric.
In [6], Lindgren et al. proposed PROPHET, a routing protocol for OppNets that relies on the calculation of a delivery predictability table that records the probabilities of successful delivery of messages from source to destination nodes. Whenever a node encounters other nodes, their delivery predictability values are exchanged, and the message is forwarded to those nodes that have higher delivery predictability values. In [4], Huang et al. introduced PROPHET+, an improvement of PROPHET, which consists of using a weighted function to calculate the node's delivery probability when performing the routing as instructed by PROPHET. In [7], the encounter and distance routing protocol is proposed, which depends on two parameters for next hop selection: the number of encounters and the distance of the node from the destination. The ratio of these parameters is used to decide on the next hop selection.

III. PROPOSED MLPROPH PROTOCOL

The proposed MLProph protocol uses an ML technique to perform the next hop selection for a message. Whenever a connection is established between two nodes and the buffer of one node contains a message that needs to be transmitted, a decision needs to be made on whether or not the message should be forwarded to the other node (termed next hop selection). Intuitively, the message must be forwarded from the sender to the neighboring receiver node only if the intermediate node has a high enough probability of forwarding it, directly or indirectly, to the destination node. Forwarding the message too frequently can lead to excessive packet loss and higher buffer overhead. On the other hand, less frequent forwarding is a definite precursor to fewer delivered messages. The probability of successful delivery depends on various factors that represent the history and the capability of nodes to successfully deliver messages. The delivery probability at the next hop selection is computed by a trained ML model involving the following attributes: PROPHET probability, buffer occupancy, successful deliveries, success ratio, from node speed/energy, to node speed/energy, distance from message source, distance to message destination, current hop count, and message live time. The message live time parameter represents the duration of time from the creation of the message to the current time. The message is forwarded from the sender node if Pm > K × Pr, where Pm is the probability of the final delivery (called the ML probability) computed using ML techniques, Pr is the probability of the sender node delivering the message to the destination obtained using PROPHET (the so-called PROPHET probability), and K ∈ [0, 1] is a normalization factor.

TABLE I
INPUT ATTRIBUTES USED FOR THE MLPROPH PROTOCOL

Attribute | Symbol | Description
Prophet probability | x1 | Probability used for deciding the next hop selection in the PROPHET routing protocol; described in detail in Section III-A3.
Buffer occupancy | x2 | Amount of capacity left in the buffer to keep more packets; refer to Section III-A3 for details.
Successful deliveries | x3 | Number of successful message transfers between the two given nodes from the start of the simulation to the current time.
Success ratio | x4 | Ratio of successful message transfers to the total number of transfers initiated between the two nodes.
From node speed | x5 | Speed at which the sender node is traveling.
To node speed | x6 | Speed at which the receiver node is traveling.
Distance from message source | x7 | Distance of the location of contact between the two nodes from the point of origin of the message.
Distance to message destination | x8 | Distance of the location of contact to the final destination of the message.
Message live time | x9 | Duration of time from the creation of the message to the current time.
From node energy | x10 | Energy of the sender node.
To node energy | x11 | Energy of the receiver node.
Current hop count | x12 | Number of hops the message has traveled before reaching the current sender node.
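For concreteness, the twelve attributes of Table I can be carried as a single ordered feature vector that is handed to the trained model at every contact. The following sketch is purely illustrative: the class and field names are ours and are not identifiers from the MLProph implementation.

```java
/** Illustrative container for the twelve MLProph input attributes of Table I. */
public class ContactFeatures {
    public double prophetProbability;     // x1: PROPHET delivery predictability
    public double bufferOccupancy;        // x2: free buffer space left for new packets
    public int    successfulDeliveries;   // x3: successful transfers between the two nodes so far
    public double successRatio;           // x4: successful transfers / transfers initiated
    public double fromNodeSpeed;          // x5: speed of the sender node
    public double toNodeSpeed;            // x6: speed of the receiver node
    public double distanceFromSource;     // x7: contact location to message origin
    public double distanceToDestination;  // x8: contact location to message destination
    public double messageLiveTime;        // x9: time since the message was created
    public double fromNodeEnergy;         // x10: residual energy of the sender
    public double toNodeEnergy;           // x11: residual energy of the receiver
    public int    currentHopCount;        // x12: hops traveled so far

    /** Returns the attributes as the ordered vector (x1, ..., x12) expected by the models. */
    public double[] toVector() {
        return new double[] {
            prophetProbability, bufferOccupancy, successfulDeliveries, successRatio,
            fromNodeSpeed, toNodeSpeed, distanceFromSource, distanceToDestination,
            messageLiveTime, fromNodeEnergy, toNodeEnergy, currentHopCount
        };
    }
}
```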
A. Calculation of the ML Probability Pm

Two ML models, namely a neural network and a decision tree, are used to calculate the value of Pm and to assess the performance of the proposed MLProph routing protocol.

1) Neural Network Model: A neural network model with multiple hidden layers is considered, where (x1, ..., x12) are the input parameters described in Table I constituting the input layer, and p1 and p2 are the outputs forming the output layer (p1 and p2 are, respectively, the probabilities of successful and unsuccessful delivery). The value p1 is the ML probability Pm, which is the predicted probability of successful delivery based on the given input values (x1, ..., x12). The value at each node in the neural network is a function of a linear combination of the values of the nodes in the previous layer. The value at node hi is given as

h_i = F\left(\sum_{j=1}^{n} x_j w_{ji}\right)    (1)

where xj is the value of the jth node of the previous layer, F is the activation function, and w is the weight matrix. Let us consider the value of node h1, which is a linear combination of (x1, ..., x12) passed through an activation function F, i.e.,

h_1 = F(w_{11}x_1 + w_{21}x_2 + w_{31}x_3 + \cdots + w_{12,1}x_{12}).    (2)
Thus, moving from the input layer, the value at each node can be computed, and thereby the values of the output layer, which produce Pm (from p1). This process is referred to as the feedforward operation. To perform this step, we must first calculate the appropriate values of the weight matrix wij for each possible linear combination. This is performed by training on the set of data gathered by running the simulation on the training scenario, and the neural network is built by finding the corresponding weights.

Training: The neural network is trained by using the backpropagation algorithm [8] and the training data obtained prior to simulation. Each training data sample represents the input values of (x1, ..., x12) for a next hop selection, and the associated output indicates whether the forwarded message eventually reached the destination node or not. If the delivery was successful, p1 = 1 and p2 = 0; otherwise, p1 = 0 and p2 = 1. Prior to training, a sample neural network is considered, i.e., initialized with random values; subsequently, the network is learned by iterating over the training set so that it provides optimum predictions on whether a successful delivery will take place or not. To compute the value of Pm, the ML model must first be trained or built based on the data captured in a scenario called the training scenario. The corresponding data is called the training data, and a particular entry in the data is called a training example. For each training example in the training data, the following actions are taken.

1) The input is propagated forward to generate the output of the activation functions at each layer. Let us call this NN_Prediction. Let NN_Actual be the actual values from the training set.
2) The training error is calculated, which is the sum over the output units of the squared difference between the desired output (NN_Actual) and the prediction (NN_Prediction), as defined in the LMS algorithm [9]. The training error of weights w is obtained as

J(w) = \frac{1}{2}\,(NN_{Actual} - NN_{Prediction})^2.    (3)

3) The weights are corrected to reduce the training error J(w). The change δ(w) that needs to be made in the values of the weights w is computed using the equation

\delta(w) = -\eta\,\frac{dJ}{dw}    (4)

where η is the learning rate representing the relative change in the weights based on the training error.
4) The value of w is updated as follows:

w_{new} = w_{old} + \delta(w).    (5)

The above steps are carried out for all the training examples, resulting in a trained neural network with learned weights, which gives a sufficiently good prediction, and hence, a low prediction error.
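As a concrete illustration of steps 1)-4), the sketch below trains a single-hidden-layer network with sigmoid activations by per-example gradient descent. The layer size, learning rate, and initialization are assumed values chosen for illustration; the actual MLProph models were built with the Weka library rather than hand-written code.

```java
import java.util.Random;

/** Minimal one-hidden-layer backpropagation sketch for the (x1..x12) -> (p1, p2) mapping. */
public class BackpropSketch {
    static final int IN = 12, HID = 8, OUT = 2;   // layer sizes (hidden size is an assumption)
    static double[][] wIH = new double[IN][HID];  // input-to-hidden weights
    static double[][] wHO = new double[HID][OUT]; // hidden-to-output weights
    static final double ETA = 0.05;               // learning rate of eq. (4), assumed value

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    /** One training example: forward pass, error J(w) of (3), weight updates of (4)-(5). */
    static double trainOne(double[] x, double[] target) {
        double[] h = new double[HID], o = new double[OUT];
        for (int j = 0; j < HID; j++) {                  // hidden layer, eq. (1)
            double s = 0;
            for (int i = 0; i < IN; i++) s += x[i] * wIH[i][j];
            h[j] = sigmoid(s);
        }
        for (int k = 0; k < OUT; k++) {                  // output layer
            double s = 0;
            for (int j = 0; j < HID; j++) s += h[j] * wHO[j][k];
            o[k] = sigmoid(s);
        }
        double err = 0;
        double[] deltaO = new double[OUT];
        for (int k = 0; k < OUT; k++) {
            double e = target[k] - o[k];
            err += 0.5 * e * e;                          // training error J(w), eq. (3)
            deltaO[k] = e * o[k] * (1 - o[k]);           // output-layer gradient term
        }
        for (int j = 0; j < HID; j++) {                  // propagate the error back to the hidden layer
            double back = 0;
            for (int k = 0; k < OUT; k++) back += deltaO[k] * wHO[j][k];
            double deltaH = back * h[j] * (1 - h[j]);
            for (int k = 0; k < OUT; k++) wHO[j][k] += ETA * deltaO[k] * h[j]; // eq. (5)
            for (int i = 0; i < IN; i++)  wIH[i][j] += ETA * deltaH * x[i];    // eq. (5)
        }
        return err;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int i = 0; i < IN; i++) for (int j = 0; j < HID; j++) wIH[i][j] = rnd.nextGaussian() * 0.1;
        for (int j = 0; j < HID; j++) for (int k = 0; k < OUT; k++) wHO[j][k] = rnd.nextGaussian() * 0.1;
        double[] x = new double[IN];                     // one dummy training example
        for (int i = 0; i < IN; i++) x[i] = rnd.nextDouble();
        double[] target = {1.0, 0.0};                    // delivered: p1 = 1, p2 = 0
        for (int epoch = 0; epoch < 100; epoch++) {
            System.out.printf("epoch %d, J(w) = %.6f%n", epoch, trainOne(x, target));
        }
    }
}
```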
Computation of Pm (Feed Forward): The trained neural network is then used to compute Pm, based on the input parameters (x1, ..., x12) obtained at run time during the next hop selection process. The process of moving from the input to the output layer is done by calculating the value at each node, which is the linear combination of the values of the nodes of the previous layer with an activation function applied to this combination. Equation (2) is used to perform the feedforward process, where w has been obtained from the training step. Moving from the input to the output layer across all the hidden layers, the value of each neuron is computed using (2), which is subsequently also used in the calculation of the node values of the next layer. The final output layer gives p1 and p2, where the value of p1 corresponds to Pm.
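A minimal sketch of the feedforward evaluation itself: starting from the contact attributes, (1)/(2) is applied layer by layer and the first output unit is read off as Pm. The weight values and layer sizes below are placeholders standing in for whatever the training step produced.

```java
/** Feedforward evaluation of a trained multilayer network: returns p1 (= Pm). */
public class FeedForwardSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    /** Applies eq. (2) layer by layer; layers[l][j][i] is the weight from unit i to unit j. */
    static double[] forward(double[] input, double[][][] layers) {
        double[] values = input;
        for (double[][] w : layers) {                   // one pass per layer
            double[] next = new double[w.length];
            for (int j = 0; j < w.length; j++) {
                double s = 0;
                for (int i = 0; i < values.length; i++) s += w[j][i] * values[i];
                next[j] = sigmoid(s);                   // h_j = F(sum_i x_i w_ij), eq. (1)
            }
            values = next;
        }
        return values;                                  // final layer: {p1, p2}
    }

    public static void main(String[] args) {
        double[] x = new double[12];                    // the (x1..x12) contact attributes
        java.util.Arrays.fill(x, 0.5);                  // placeholder input values
        double[][][] layers = {
            new double[8][12],                          // hidden-layer weights (placeholder zeros)
            new double[2][8]                            // output-layer weights (placeholder zeros)
        };
        double pm = forward(x, layers)[0];              // p1 is the ML probability Pm
        System.out.println("Pm = " + pm);
    }
}
```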
Fig. 1. Sample decision tree.

2) Decision Tree Model: The considered decision tree model is shown in Fig. 1, where w1 and w2 are the output classes representing successful delivery and unsuccessful delivery, respectively, (x1, x2, ..., x12) are the input attributes, and the internal nodes represent the decisions made based on the input attributes. As an example, in Fig. 1, starting at the root, for a given set of input values, we move towards the leaf nodes. Say the value of x1 is 0.2; then we move along the left subtree from the root node. On the other hand, if the value of x1 is 0.8, then the input is predicted to belong to class w2. The value of Pm is the probability of the decisions falling into the category of the particular class (w1 or w2). Assume that we have reached node N with a value of x9 of 1.8; then the predicted class becomes w1, and Pm is defined as the probability of examples falling into class w1 in the training set given that node N was reached.
Building the Decision Tree: The decision tree is built using the training data in a recursive manner as follows.

BuildTree(S):
1) Find the most appropriate attribute and the corresponding value to use for the decision at the root node. In the above example, x1 is the attribute chosen in the first recursive call and x1 > 0.3 is the corresponding decision. The most popular measure for choosing the attribute used to split the dataset is the entropy impurity [9]

i(N) = -\sum_j P(w_j)\,\log P(w_j)    (6)

where P(wj) is the fraction of the patterns at a particular node that are in category wj. For splitting, we must choose the query that decreases the impurity as much as possible. The drop in impurity is defined by

\delta(i) = i(N) - P(x)\,i(N_x) - (1 - P(x))\,i(N_y)    (7)

where Nx and Ny are the left and right descendant nodes and i(Nx) and i(Ny) are their impurities. For example, if the attributes are x1, ..., x12, then the attribute that has the highest value of δ(i) will correspond to the root of S.
2) Based on the attribute computed in step 1), split the dataset S into Sx and Sy, where Sx corresponds to the left subtree of the root of S and Sy corresponds to the right subtree.
3) Call BuildTree(Sx) and BuildTree(Sy) recursively to build the descendants of the root.

The above procedure is stopped when the maximum value of δ(i) falls below a certain threshold. In this paper, the C4.5 implementation of decision trees [11] and the gain ratio impurity equation are used to compute the value of δ(i), where the change in impurity is scaled by dividing it by the entropy of the parent node. The optimized J48 decision tree from the Weka library [11] is also utilized.
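The recursion above can be made concrete in a few dozen lines. The sketch below uses the plain entropy impurity of (6) and the impurity drop of (7) with threshold splits; it is a simplified stand-in for the C4.5/J48 learner actually used (no gain ratio, no pruning), and the stopping threshold is an assumed value. The last method already anticipates the Pm read-out described next.

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified recursive decision-tree builder using entropy impurity (6) and impurity drop (7). */
public class DecisionTreeSketch {
    static class Example { double[] x; int label; Example(double[] x, int label) { this.x = x; this.label = label; } }
    static class Node {
        int attribute = -1; double threshold;   // internal-node split: x[attribute] <= threshold
        Node left, right;
        double pDelivered;                      // leaf: fraction of class-w1 examples, used as Pm
    }
    static final double MIN_DROP = 0.01;        // assumed stopping threshold on delta-i

    static double entropy(List<Example> s) {    // i(N) = -sum_j P(wj) log2 P(wj), eq. (6)
        if (s.isEmpty()) return 0;
        double p1 = s.stream().filter(e -> e.label == 1).count() / (double) s.size();
        double i = 0;
        if (p1 > 0) i -= p1 * Math.log(p1) / Math.log(2);
        if (p1 < 1) i -= (1 - p1) * Math.log(1 - p1) / Math.log(2);
        return i;
    }

    static Node buildTree(List<Example> s) {
        Node node = new Node();
        node.pDelivered = s.stream().filter(e -> e.label == 1).count() / (double) Math.max(1, s.size());
        double parent = entropy(s), bestDrop = 0, bestThr = 0;
        int bestAttr = -1;
        int nAttrs = s.isEmpty() ? 0 : s.get(0).x.length;
        for (int a = 0; a < nAttrs; a++) {
            for (Example cand : s) {            // candidate thresholds taken from the data itself
                double thr = cand.x[a];
                List<Example> lo = new ArrayList<>(), hi = new ArrayList<>();
                for (Example e : s) (e.x[a] <= thr ? lo : hi).add(e);
                if (lo.isEmpty() || hi.isEmpty()) continue;
                double pLo = lo.size() / (double) s.size();
                double drop = parent - pLo * entropy(lo) - (1 - pLo) * entropy(hi); // eq. (7)
                if (drop > bestDrop) { bestDrop = drop; bestAttr = a; bestThr = thr; }
            }
        }
        if (bestAttr < 0 || bestDrop < MIN_DROP) return node;   // stop: no useful split left
        node.attribute = bestAttr; node.threshold = bestThr;
        List<Example> lo = new ArrayList<>(), hi = new ArrayList<>();
        for (Example e : s) (e.x[bestAttr] <= bestThr ? lo : hi).add(e);
        node.left = buildTree(lo);              // recursive calls, step 3)
        node.right = buildTree(hi);
        return node;
    }

    /** Walks the grown tree and returns the leaf's class-w1 fraction as Pm. */
    static double predictPm(Node n, double[] x) {
        while (n.attribute >= 0) n = (x[n.attribute] <= n.threshold) ? n.left : n.right;
        return n.pDelivered;
    }

    public static void main(String[] args) {
        List<Example> train = new ArrayList<>();
        train.add(new Example(new double[]{0.9, 0.7}, 1));   // toy two-attribute examples
        train.add(new Example(new double[]{0.8, 0.2}, 1));
        train.add(new Example(new double[]{0.1, 0.6}, 0));
        train.add(new Example(new double[]{0.2, 0.3}, 0));
        Node root = buildTree(train);
        System.out.println("Pm = " + predictPm(root, new double[]{0.85, 0.5}));
    }
}
```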
Calculation of Pm: Once the decision tree is fully grown, the decision tree conditions are applied to the input attributes until a leaf node is reached. The class computed at the leaf node is used to determine the value of Pm using the probability distribution from the training set. As an example, if class w1 is predicted, Pm is obtained as the number of elements in the training set for which w1 was reached from the predecessor node divided by the total number of times the predecessor was reached. The attributes (x1, x2, ..., x12) that are used by the MLProph protocol to compute Pm for the next hop selection in the neural network and decision tree techniques are given in Table I.
3) Calculation of the Normalization Factor K and PROPHET Probability Pr: Although the PROPHET router has not been used directly for next hop selection, the PROPHET delivery probabilities are continuously updated and used as parameters for the ML algorithm. PROPHET works by updating the probabilities in such a way that the nodes that have frequent interactions have high delivery probabilities

P(x,y) = P(x,y)_{old} + (1 - P(x,y)_{old}) \times P_{init}.    (8)

If some nodes have not been connected recently, their delivery probabilities must also age. This is taken care of by the aging factor as follows:

P(x,y) = P(x,y)_{old} \times \gamma^{k}    (9)

where γ is the aging factor and k is the number of time units since the last aging occurred.

PROPHET also considers a transitive nature among nodes. As an example, assume that there are three nodes n1, n2, and n3 such that nodes n1 and n2 frequently interact, as well as n2 and n3. In this case, a message to be delivered to n3, if forwarded from n1, would also have a high delivery probability

P(n_1,n_3) = P(n_1,n_3)_{old} + (1 - P(n_1,n_3)_{old}) \times P(n_1,n_2) \times P(n_2,n_3) \times \beta.    (10)

This delivery probability will then be used as a parameter for the ML algorithm since it gives a good indication of the node delivery and contact histories, which is essential for the next hop selection.
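Rules (8)-(10) map directly onto a small bookkeeping structure kept per node. The constants below are the commonly cited PROPHET defaults (P_init = 0.75, γ = 0.98, β = 0.25) and are shown only as plausible values; the paper does not state which settings were used.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the PROPHET delivery-predictability bookkeeping of eqs. (8)-(10). */
public class ProphetPredictability {
    static final double P_INIT = 0.75;   // assumed defaults, not values from the paper
    static final double GAMMA  = 0.98;
    static final double BETA   = 0.25;

    private final Map<String, Double> p = new HashMap<>();   // P(thisNode, other), keyed by node id

    double get(String other) { return p.getOrDefault(other, 0.0); }

    /** Eq. (8): called whenever this node meets 'other'. */
    void onContact(String other) {
        double old = get(other);
        p.put(other, old + (1 - old) * P_INIT);
    }

    /** Eq. (9): age all predictabilities, k = elapsed time units since the last aging. */
    void age(int k) {
        p.replaceAll((node, old) -> old * Math.pow(GAMMA, k));
    }

    /** Eq. (10): transitivity through the encountered node b, given b's own table.
     *  (A full router would skip entries referring to this node itself.) */
    void applyTransitivity(String b, ProphetPredictability tableOfB) {
        for (Map.Entry<String, Double> e : tableOfB.p.entrySet()) {
            String c = e.getKey();
            double old = get(c);
            p.put(c, old + (1 - old) * get(b) * e.getValue() * BETA);
        }
    }

    public static void main(String[] args) {
        ProphetPredictability a = new ProphetPredictability(), b = new ProphetPredictability();
        a.onContact("B");            // A meets B: eq. (8)
        b.onContact("C");            // B meets C: eq. (8)
        a.age(5);                    // time passes: eq. (9)
        a.applyTransitivity("B", b); // A learns about C through B: eq. (10)
        System.out.println("P(A,C) = " + a.get("C"));
    }
}
```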
The normalization factor K lies between 0 and 1. Its final value is determined by considering different values of K between 0 and 1 and observing the corresponding number of delivered messages. The probability of final delivery plotted against the value of K follows a Gaussian-shaped curve, and the point corresponding to the maximum value of the delivery probability gives K. This value of K is the one considered in this work.

The buffer occupancy parameter is indicative of the capacity (in terms of storage) of the potential receiver node to absorb and subsequently forward the message. It is defined as

\mathrm{Buffer\ Occupancy} = bufferSize_{available} - messageSize_{toBeForwarded}.    (11)

IV. ML PROCESS IN MLPROPH

Our proposed MLProph routing protocol solves the problem of next hop selection by computing a delivery probability in each next hop decision situation, which depends on parameters such as node speed, buffer size, contact history, and PROPHET probability. The ML process creates an optimum model where the input is a vector of the parameters at run time and the output is a delivery probability. The training of the ML model happens prior to simulation, based on the same input and output attributes, in an environment indicative of real-world scenarios. The ML process is capable of computing the relative importance of the above-mentioned parameters for the next hop decision, yielding optimum calculations of the delivery probabilities Pm.

A. MLProph Algorithm

The MLProph algorithm for next hop selection is divided into the following parts.

Training: The data is generated for the next hop by running the simulation on the training scenario. Each entry in the data indicates whether the final delivery has occurred or not for the hop selection in question, i.e., after sending the message from the sender to the receiver given the parameters described in Table I at that time. In order to ensure that a message is transmitted in all possible cases, this simulation is run using the Epidemic routing protocol [3]. After the data has been generated, it is used to build the above-mentioned neural network-based and decision tree-based learning models.

Real Simulation: The simulation is started, and every time a message can be transmitted, or a next hop selection decision is to be made, the following actions are taken: 1) capture the
PROPHET probability (Pr); 2) compute the ML probability (Pm) based on the considered trained ML model; and 3) forward the message from the sender to the receiver if Pm > K × Pr.
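Taken together, steps 1)-3) amount to the following check at every contact. The interface, the method names, and the value of K are illustrative assumptions; in the actual implementation the trained Weka classifiers would play the role of the model below.

```java
/** Per-contact next-hop decision of MLProph: forward only if Pm > K * Pr. */
public class NextHopDecision {
    interface TrainedModel {
        /** Returns Pm, the predicted probability of eventual delivery, for one feature vector. */
        double predictPm(double[] features);
    }

    static final double K = 0.6;   // normalization factor, assumed value in [0, 1]

    static boolean shouldForward(double[] features, double prophetProbability, TrainedModel model) {
        double pm = model.predictPm(features);          // step 2): ML probability from the trained model
        return pm > K * prophetProbability;             // step 3): forwarding condition Pm > K * Pr
    }

    public static void main(String[] args) {
        TrainedModel dummy = features -> 0.7;           // placeholder model returning a fixed Pm
        double[] x = new double[12];                    // step 1): (x1..x12) captured at contact time
        double pr = 0.5;                                // PROPHET probability Pr for this destination
        System.out.println("forward? " + shouldForward(x, pr, dummy));
    }
}
```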
The simulations are configured through several parameters, such as the movement model, transmit speed, transmit range, buffer size, message TTL, and number of nodes, to name a few. Some simulation parameters, such as buffer size, transmit range, Time-to-Live (TTL), and transmit speed, are kept common across all groups. The router can vary across groups; however, for validating the MLProph algorithm, the same router has been used for all groups. The following constitute the main configuration settings.

1) Movement Model: This defines the motion of nodes inside a community with respect to a particular scenario.
2) Transmit Speed: For an interface, this represents the speed at which the messages move in the network.
3) Transmit Range: This is the maximum distance at which a sender node can send data to a receiver.
4) Buffer Size: This is the space available in a node to store packets before eventually forwarding or dropping them.
5) Speed: This defines the range of speeds that the nodes in the movement model can take.
6) Number of Hosts: This is the number of hosts in the group.
7) Message TTL: This defines the lifespan of a message in the network.
8) Wait Time: This defines the time a group has to wait after the simulation has started before initiating any movement.

For ML purposes, the training data is gathered by using the simulation configurations provided in Table II.

TABLE II

No. of Groups: 4
Interface: Bluetooth
Transmit speed: 250 K
Transmit range: 10
Buffer size: 10 M
Message TTL: 1200

TABLE III

Group No. | No. of Nodes | Speed | Wait Time | Movement Model
1 | 45 | 7–10 | 10–30 | BusMovement
2 | 150 | 0.8–6 | 0 | BusMovement
3 | 80 | 7–10 | 10–30 | WorkingDayMovement
4 | 50 | 17–25 | 0 | WorkingDayMovement

After the simulation data has been gathered, the neural network-based and decision tree-based learning models are trained separately by using the Weka library in Java. The obtained models are then used to build two routers, namely MLProph_NN (from the neural network model) and MLProph_DT (from the decision tree model).
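For this step, training the two learners from a dump of the generated data takes only a few Weka calls, along the lines of the sketch below. The ARFF file name is a placeholder and the learners are used with default parameters here; the actual MLProph configuration may differ.

```java
import weka.classifiers.Classifier;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.classifiers.trees.J48;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

/** Training the neural-network and J48 models with the Weka library (file name is a placeholder). */
public class TrainModels {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("training_scenario.arff").getDataSet(); // hypothetical ARFF dump
        data.setClassIndex(data.numAttributes() - 1);         // last attribute: delivered / not delivered

        Classifier nn = new MultilayerPerceptron();            // MLProph_NN model
        Classifier dt = new J48();                             // MLProph_DT model (C4.5 / J48)
        nn.buildClassifier(data);
        dt.buildClassifier(data);

        // At routing time, Pm is read from the class distribution of a fresh instance.
        Instance contact = data.firstInstance();               // placeholder for a run-time feature vector
        double pmNN = nn.distributionForInstance(contact)[0];  // probability of the first class value (assumed "delivered")
        double pmDT = dt.distributionForInstance(contact)[0];
        System.out.println("Pm(NN) = " + pmNN + ", Pm(DT) = " + pmDT);
    }
}
```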
The results obtained using the PROPHET+, MLProph_NN, and MLProph_DT routers are then compared against each other based on the simulation environment described in Table IV.

TABLE IV
CONFIGURATION PARAMETERS USED FOR SIMULATION OF GROUPS

Group No. | No. of Nodes | Speed | Wait Time | Movement Model
1 | 150 | 7–17 | 10–30 | BusMovement
2 | 50 | 0.2–1.8 | 0 | WorkingDayMovement
3 | 40 | 7–10 | 10–30 | BusMovement
4 | 25 | 1.7–10 | 0 | WorkingDayMovement
5 | 14 | 2–6 | 5–30 | BusMovement
6 | 50 | 0.2–1.8 | 0 | WorkingDayMovement
7 | 40 | 7–10 | 10–30 | BusMovement

To test the MLProph protocol, the following varying parameters are considered: 1) varying the buffer size: the buffer size for all groups in the network is varied from 20 M → 40 M → 60 M → 80 M → 100 M; 2) varying the TTL: the message TTL is varied from 50 → 100 → 150 → 200 → 250; and 3) varying the number of nodes: the number of nodes in the network is varied from 130 → 180 → 230 → 280 → 330.

The considered performance metrics are: buffer time, delivery probability, number of packets dropped, overhead ratio, latency, and hop count.

B. Simulation Results

In this section, the performances of MLProph_NN, MLProph_DT, and PROPHET+ are compared under the aforementioned varying parameters. Table III describes the individual group configuration parameters used for the training of the MLProph protocol, whereas Table V describes the individual group configuration parameters used for running the simulation of the proposed MLProph protocol.

1) Varying the Buffer Size: The buffer size for all groups in the network is varied and the aforementioned performance metrics are evaluated. The results are captured in Figs. 2–4. It can be observed that the average buffer time increases as the buffer size is increased. This is understandable, since a large buffer means
MLProph_DT for the same. It can also be observed that MLProph yields a significantly lower overhead ratio compared to that obtained using PROPHET+. Fig. 9 shows that MLProph yields a higher average latency compared to PROPHET+. This is due to the fact that the criteria for forwarding a packet are more constrained in MLProph compared to PROPHET+.

Fig. 8. Overhead ratio with varying number of nodes.
Fig. 9. Average latency with varying number of nodes.

4) Confidence Interval of Results: A unique confidence interval was computed for each observable parameter on the combined dataset of values observed when the buffer size, TTL, number of nodes, and transmit range are varied. The 95% confidence intervals are summarized in Table VI. From this table, it can be seen that MLProph in both implementations (neural network and decision tree) yields substantially lower values of the number of packets dropped and overhead ratio

TABLE VI
95% CONFIDENCE INTERVAL FOR DIFFERENT RESULTS

VI. CONCLUSION

This paper has proposed MLProph, a novel ML-based routing protocol for OppNets. Simulation results have shown that the performance of MLProph is superior to that of PROPHET+ in terms of delivery probability and overhead ratio, with a payoff in the average latency and buffer usage due to the constraints imposed on the next hop selection process by MLProph. It has also been shown that, in terms of delivery probability, packet loss, and overhead ratio, the MLProph_NN model is preferable to the MLProph_DT model. In future work, we will simulate MLProph using real mobility traces. We will also investigate the use of other ML classifiers such as support vector machines.

REFERENCES

[1] L. Lilien, Z. H. Kamal, V. Bhuse, and A. Gupta, "Opportunistic networks: The concept and research challenges in privacy and security," in Proc. NSF WSPWN, Miami, FL, USA, Mar. 2006, pp. 134–14.
[2] C. Boldrini, M. Conti, I. Iacopini, and A. Passarella, "HiBOp: A history based routing protocol for opportunistic networks," in Proc. IEEE WoWMoM, Espoo, Finland, Jun. 2007, pp. 1–12.
[3] A. Vahdat and D. Becker, "Epidemic routing for partially connected ad hoc networks," Dept. Comput. Sci., Duke Univ., Durham, NC, USA, Tech. Rep. CS-2000-06, 2000.
[4] T. K. Huang, C. K. Lee, and L.-J. Chen, "PROPHET+: An adaptive PROPHET-based routing protocol for opportunistic network," in Proc. 24th IEEE Int. Conf. Adv. Inf. Netw. Appl., Perth, Australia, Apr. 2010, pp. 112–119.
[5] S. K. Dhurandher, D. K. Sharma, I. Woungang, and S. Bhati, "HBPR: History based prediction for routing in infrastructure-less opportunistic networks," in Proc. 27th IEEE Int. Conf. Adv. Inf. Netw. Appl., Barcelona, Spain, Mar. 2013, pp. 931–936.
[6] A. Lindgren, A. Doria, and O. Schelén, "Probabilistic routing in intermittently connected networks," ACM SIGMOBILE Mobile Comput. Commun. Rev., vol. 7, no. 3, pp. 19–20, 2003.
[7] S. K. Dhurandher, S. J. Borah, D. K. Sharma, I. Woungang, K. Arora, and D. Agarwal, "EDR: An encounter and distance based routing protocol for opportunistic networks," in Proc. 30th IEEE Int. Conf. Adv. Inf. Netw. Appl., 2016, pp. 297–302.
[8] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, "Efficient BackProp," in Neural Networks: Tricks of the Trade (Lecture Notes in Computer Science, vol. 1524), pp. 9–50, Mar. 2002.
[9] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification. Hoboken, NJ, USA: Wiley, Oct. 2012.
[10] A. Keranen, "Opportunistic network environment simulator," Dept. Commun. Netw., Helsinki Univ. Technol., Espoo, Finland, Special Assignment Rep., ONE v.1.4.1, May 2008.
[11] S. Drazin and M. Montag, "Decision tree analysis using Weka," Machine Learning-Project II, Univ. Miami, Miami, FL, USA, 2012. [Online]. Available: http://www.samdrazin.com/classes/een548/