Dynamic Networks: Models and Algorithms
1 Introduction
The study of dynamic networks has gained popularity in recent years, and many models and algorithms for such
networks have been suggested. In this column we survey some recent work on dynamic network algorithms,
focusing on the effect that model parameters such as the type of adversary, the network diameter, and the
graph expansion can have on the performance of algorithms. We focus here on high-level models that are
not induced by some specific mobility pattern or geographic model (although much work has gone into
geographic models of dynamic networks, and we touch upon them briefly in Section 2).
Dynamic network behavior has long been studied in the distributed computing literature, but initially it was
modeled as a fault in the network; as such, it was typically bounded, either in duration or in the number of
nodes affected (or both). For example, in the general omission-fault model, if two nodes that could once
communicate can no longer send messages to each other, this is treated as a failure of one of the nodes, and
the number of faulty nodes is assumed to be bounded. Another example is self-stabilizing algorithms, which
are guaranteed to function correctly only when changes to the network have stopped [16]. These models
are appropriate for modeling unreliable static networks, but they are not appropriate for mobile and ad hoc
networks, where changes are unbounded in number and occur continually.
In the sequel we survey several models for dynamic networks, both random and adversarial, and al-
gorithms for these models. The literature on dynamic networks is vast, and this column is not intended
as a comprehensive survey. We have chosen to focus on models and algorithms that exhibit the following
properties.
Unceasing changes. Changes to the network never stop, and are not restricted in frequency.
Dynamic topology not subject to the algorithm’s control. We concentrate solely on algorithms that
execute in a dynamic network, but have no control over how the topology changes.
Non-geographical models. With the exception of one model discussed in Section 3.2, the network models
we discuss here are not motivated by a geographical distribution of the nodes on the plane or by an underly-
ing mobility pattern for the nodes. Rather, we focus on abstract models, where the dynamic communication
topology can reflect any communication network — including both wireless mobile networks and unreliable
traditional point-to-point networks.
The rest of the column is organized as follows. In the next section we introduce the basic dynamic
graph model used in the work we survey, and point out several basic parameters of the model that can affect
algorithmic performance. In Section 3 we define the dynamic diameter of a dynamic graph, a generalization
of the notion of the diameter in a static graph; we illustrate the role it plays and give some examples of
how it is affected by basic model parameters. In Section 4 we discuss a recent result on computation and
information dissemination in adversarial dynamic networks, and in Section 5 we survey some of the work
on broadcast in dynamic radio networks. Finally, in Section 6 we briefly discuss a few alternative models to
the ones discussed in this column, and we conclude in Section 7 with some directions for future research.
Adversary model. The adversary model determines the method by which the dynamic edge function is
generated. We focus on three types of adversaries: an adaptive worst-case adversary (“adaptive adversary”
for short) generates the dynamic graph on-the-fly; in each round r, the edges E(r) are chosen based on
the execution so far, including the states of all the participants at time r − 1 and their randomness up to
time r − 1 (exclusive). An oblivious worst-case adversary (“oblivious adversary”) commits to a dynamic
graph in advance, choosing an edge function E that does not depend on the randomness of the participants
or their states during the execution. (For deterministic algorithms, adaptive and oblivious adversaries are
equivalent.)
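To make the round structure concrete, the following minimal Python sketch (illustrative code only; the harness, its names, and the max-computation task are not taken from the cited papers) models the adversary as a function that returns E(r): an oblivious adversary ignores the node states it is handed, an adaptive adversary may inspect them, and the random-graph “adversary” discussed next simply samples E(r) from a distribution.

import random
from typing import Callable, List, Set, Tuple

Edge = Tuple[int, int]   # an undirected edge, stored as (min, max)

def run(n: int, rounds: int,
        adversary: Callable[[int, List[int]], Set[Edge]]) -> List[int]:
    """Synchronous execution: in round r the adversary chooses E(r), every node
    broadcasts its current state to its neighbors, and all nodes update.
    As a toy task, the nodes compute the maximum of the initial inputs."""
    states = [random.randrange(1000) for _ in range(n)]       # initial inputs
    for r in range(1, rounds + 1):
        edges = adversary(r, states)                          # E(r)
        incoming = [[] for _ in range(n)]
        for u, v in edges:
            incoming[u].append(states[v])                     # v's broadcast reaches u
            incoming[v].append(states[u])                     # and vice versa
        states = [max([states[v]] + incoming[v]) for v in range(n)]
    return states

# An oblivious adversary: commits in advance to a star whose center moves each
# round (in the spirit of the dynamic star of Figure 1 below), ignoring states.
def oblivious_star(r: int, states: List[int]) -> Set[Edge]:
    n = len(states)
    c = r % n
    return {(min(c, v), max(c, v)) for v in range(n) if v != c}

if __name__ == "__main__":
    # An adaptive adversary would instead inspect `states` when choosing E(r).
    print(run(n=8, rounds=8, adversary=oblivious_star))       # all entries equal

Any synchronous dynamic-network algorithm discussed below fits this same loop; only the node update rule and the adversary change.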
Finally, a random-graph adversary is not an adversary at all, but simply a distribution according to which
the instantaneous graph at each time is chosen. Random dynamic graph models are typically Markovian
and often synchronous. In that case, the dynamic graph for round r + 1 is chosen according to some
probability distribution that only depends on G(r) (see, e.g., [3]). A special class of Markovian graphs has
been considered in [5, 11], where it is assumed that for all r ≥ 1, each edge e ∉ E(r) appears in round
r + 1 (that is, e ∈ E(r + 1)) with probability p independent of all other edges, and each edge e ∈ E(r)
disappears in round r + 1 (that is, e ∉ E(r + 1)) with probability q independent of other edges. We refer
to such graphs as Markovian dynamic graphs with birth rate p and death rate q. If we choose q = 1 − p,
we obtain an interesting special case, where the graph of each round is an independent Erdős–Rényi random
graph G_{n,p} (see, e.g., [23]).
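For concreteness, here is a minimal sampler for this edge-Markovian model (illustrative code, not from [5, 11]; E(0) is taken to be the empty graph purely for concreteness).

import itertools
import random

def edge_markovian(n: int, p: float, q: float, rounds: int, seed: int = 0):
    """Yield E(1), E(2), ... of a Markovian dynamic graph with birth rate p and
    death rate q, starting from the empty graph E(0)."""
    rng = random.Random(seed)
    all_edges = list(itertools.combinations(range(n), 2))
    current = set()                            # E(0): no edges
    for _ in range(rounds):
        nxt = set()
        for e in all_edges:
            if e in current:
                if rng.random() >= q:          # a present edge survives with prob. 1 - q
                    nxt.add(e)
            elif rng.random() < p:             # an absent edge is born with prob. p
                nxt.add(e)
        current = nxt
        yield current

if __name__ == "__main__":
    for r, edges in enumerate(edge_markovian(n=8, p=0.3, q=0.7, rounds=3), start=1):
        print("round", r, ":", len(edges), "edges")

With q = 1 − p, a present edge survives with probability 1 − q = p and an absent edge is born with probability p, so every edge is present independently with probability p in every round — exactly the independent G_{n,p} special case above.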
Instantaneous connectivity and expansion. Real communication networks are often dense and well-
connected. To capture such properties it is often assumed that each instantaneous graph (V, E(r)) has some
desirable connectivity properties; for example, it may be assumed that each instantaneous graph is a vertex
expander, or that it is k-vertex connected for some k > 1, and so on. In adversarial models, it is usually
assumed that the graph is at least connected (or strongly connected for directed graphs) in each round. In
random graph models the distribution is typically one that guarantees that each instantaneous graph or the
union of a few consecutive graphs is with very high probability connected (and often has good expansion to
boot).
To formally capture the connectivity of each instantaneous graph, we define a vertex growth function
g : N → N, which generalizes the usual notion of vertex expansion. We say that a (static) graph G = (V, E)
has vertex growth g if for every vertex set S ⊂ V with |S| ≤ |V|/2, the neighbor set N(S) :=
{v ∈ V \ S : ∃u ∈ S : (u, v) ∈ E} satisfies |N(S)| ≥ g(|S|).
For example, if G is connected, then the vertex growth of G is at least 1; if G is k-connected, the vertex
growth of G is at least k (that is, the vertex growth is at least the constant function g(s) = k); and if G has
vertex expansion α > 0, then G has vertex growth g(s) = α · s.
We use g^{(f)} to denote the vertex growth of a set after f consecutive rounds, counting members of
the original set as well as new neighbors: the base case is g^{(0)}(s) = s, and for f ≥ 1, the growth is
g^{(f)}(s) = g^{(f−1)}(s) + g(g^{(f−1)}(s)).
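The iterated growth is easy to compute directly; the following sketch (illustrative helper code with two toy growth functions) contrasts the examples above by finding the smallest d with g^{(d)}(1) > n/2, the quantity that will control the dynamic diameter in Lemma 3.1 below.

def iterate_growth(g, s: int, f: int) -> int:
    """Compute g^(f)(s) via g^(0)(s) = s and g^(f)(s) = g^(f-1)(s) + g(g^(f-1)(s)):
    the guaranteed size of a set of s nodes plus the new neighbors accumulated
    over f consecutive rounds."""
    size = s
    for _ in range(f):
        size += g(size)
    return size

if __name__ == "__main__":
    n = 1024
    growth_3_connected = lambda s: 3            # vertex growth of a 3-connected graph
    growth_expander = lambda s: max(1, s // 2)  # vertex growth alpha * s with alpha = 1/2
    for name, g in [("3-connected", growth_3_connected), ("expander", growth_expander)]:
        d = 0
        while iterate_growth(g, 1, d) <= n // 2:
            d += 1
        print(name, ": smallest d with g^(d)(1) > n/2 is", d)
    # prints 171 (linear in n) for the 3-connected case and 16 (logarithmic in n)
    # for the expander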
Long-term stability. To model stability inherent in the dynamic network, one may assume for example
that topology changes only occur once every certain number of rounds [3]. Alternatively, in [28] we intro-
duce a property called T -interval connectivity, which asserts that over any window of T consecutive rounds
there is some connected spanning subgraph that remains stable during those T rounds. (This graph is not
Figure 1: The dynamic star graph from [3] with n = 4. Self-loops are generally not shown. Bold nodes and edges indicate the shortest walk from node 0 to node 3 in the dynamic graph.
It is further shown in [3] that the cover time is bounded from above by n^{O(n)}, although in some special cases it is polynomial in n (for example, when the instantaneous graph is always d-regular). However, these upper bounds hold only for oblivious adversaries; an adaptive adversary can prevent a random walk or any other token-circulation mechanism from ever completing. In fact, the adversary can restrict the token to two nodes u_0, u_1, where u_0 is the node that has the token initially and u_1 is some arbitrarily chosen node: to do this, whenever node u_i holds the token, the adversary connects u_i only to u_{1−i}, and connects u_{1−i} to the remainder of the graph. Thus u_i has no choice except to either keep the token or pass it to u_{1−i}.

The dynamic star described above also illustrates why the instantaneous diameter does not reflect the time required to flood a message through the network. If node 0 broadcasts a message on all its links in the first round, and in each round this message is re-broadcast by all nodes that know it (that is, all nodes that have heard it in the past), then n − 1 rounds are required for the message to reach node n − 1 (in Fig. 1, node 3 does not receive the message until round 3). The dynamic diameter of the network is n − 1, even though the instantaneous diameter is always bounded by 2.

Some instantaneous properties of the communication graph do translate into bounds on the dynamic diameter. For instance, if the instantaneous graph in each round is a vertex expander, then the dynamic diameter is O(log n); if each instantaneous graph is k-connected, the dynamic diameter is O(n/k) [28]. The key is that unlike a small instantaneous diameter, these properties guarantee that a large number of new nodes are causally influenced by each node in each round. In general we can state the following.

Lemma 3.1. Let G = (V, E) be a dynamic graph such that for every round r, the static graph G(r) has vertex growth g. The dynamic diameter of G is at most 2d, where d is the smallest integer such that g^{(d)}(1) > n/2.

Proof sketch. We must show that for any two nodes u, v ∈ V and time t ≥ 0, we have (u, t) ⇝ (v, t + 2d). The vertex growth of G tells us that at first, the set of nodes causally influenced by u grows quickly. Formally, let U_{t′} := {w ∈ V | (u, t) ⇝ (w, t′)} denote the set of nodes causally influenced at time t′ by the state of u at time t (see Fig. 2). If |U_{t′}| ≤ n/2, then the vertex growth of G guarantees that at least g(|U_{t′}|) nodes outside U_{t′} have a neighbor in U_{t′}; therefore, the causal influence of u spreads to at least g(|U_{t′}|) new nodes, and |U_{t′+1}| ≥ |U_{t′}| + g(|U_{t′}|) = g^{(1)}(|U_{t′}|). This process continues until we reach a time t′ where |U_{t′}| > n/2; because we chose d so that g^{(d)}(1) > n/2, we have |U_{t+d}| > n/2.

Once the causal influence of u has reached n/2 nodes, we can no longer apply vertex growth to show that it continues to spread quickly. Instead, we now turn our attention to node v, and consider the inverse question: how many nodes causally influence v at time t + 2d? Let V_{t′} := {w ∈ V | (w, t′) ⇝ (v, t + 2d)} denote the nodes whose state at time t′ influences v at time t + 2d. We trace the growth of this set going back in time, starting from time t + 2d. Just as before, as long as |V_{t′}| ≤ n/2 we have |V_{t′−1}| ≥ g^{(1)}(|V_{t′}|). Therefore, |V_{t+d}| > n/2. But now we are done: since |U_{t+d}| > n/2 and |V_{t+d}| > n/2, there is some node in the intersection, w ∈ U_{t+d} ∩ V_{t+d}. This node satisfies both (u, t) ⇝ (w, t + d) and (w, t + d) ⇝ (v, t + 2d). Consequently, (u, t) ⇝ (v, t + 2d).

Similar arguments to the one in Lemma 3.1 have also been applied in slightly different contexts, e.g., [8, 25].
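As a numeric sanity check of the dynamic-star example (illustrative code; following the description above, the star of round r is centered at node r), flooding from node 0 indeed needs n − 1 rounds even though every instantaneous star has diameter 2.

def flood_time_dynamic_star(n: int) -> int:
    """Flood a message from node 0 over the dynamic star of Figure 1 (in round r
    the star is centered at node r); return the first round by which every node
    is informed."""
    informed = {0}
    r = 0
    while len(informed) < n:
        r += 1
        center = r
        start = set(informed)                  # node states at the beginning of round r
        if start - {center}:                   # some informed leaf broadcasts to the center
            informed.add(center)
        if center in start:                    # an informed center reaches every leaf
            informed.update(range(n))
    return r

if __name__ == "__main__":
    for n in (4, 8, 16):
        print(n, flood_time_dynamic_star(n))   # prints 3, 7, 15: always n - 1

This is also consistent with Lemma 3.1: each star is merely connected (vertex growth 1), so the lemma guarantees a dynamic diameter of at most roughly n, and the true value is n − 1.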
Figure 2: Illustration for the proof of Lemma 3.1, with n = 9 and t = 0. Each instantaneous graph is 2-connected (vertex growth 2), the smallest d satisfying g^{(d)}(1) > 9/2 is d = 2, and the dynamic diameter is 4. The shaded areas indicate U_{t′} for t′ = 0, 1, 2 in light gray, and V_{t′} for t′ = 2, 3, 4 in darker gray (with U_2 ∩ V_2 ≠ ∅, as argued in the proof). Only edges from nodes in U_{t′} for t′ = 0, 1 and edges into nodes in V_{t′} for t′ = 3, 4 are shown.

The dynamic diameter in partially-synchronous networks. The majority of this column discusses synchronous dynamic networks, but we now make a brief foray into partially-synchronous networks. There are many different versions of partial synchrony in the literature; here we are interested in the model where nodes operate in real time (that is, time is modeled as a real number, rather than a natural number), and each node is equipped with a hardware clock, which is subject to some bounded inaccuracy (called clock drift). It is also assumed in this setting that messages suffer from unpredictable delay bounded above by U, and that each node broadcasts at least once every ∆H real time units. The dynamic diameter for this partially-synchronous model is defined similarly to the synchronous case: it is an upper bound on the time required for a single message to be flooded throughout the entire network. The technical details of the definition are more complex than for synchronous networks, and we refer to [26] for the precise definition.

In [26] we considered the problem of clock synchronization in this model: the goal is for each node to output a logical clock, such that the logical clocks are always well-synchronized (unlike the hardware clocks, which can drift apart over time). It is well-known that the quality of clock synchronization that can be achieved in a static network is closely related to its diameter: no algorithm can avoid a clock difference of Θ(D) between two distant nodes in the graph, and a clock difference of Θ(log D) even between neighboring nodes [31].

For dynamic graphs, the situation is almost the same, now replacing D by the dynamic diameter. There is, however, an additional caveat: when a new communication link appears, it takes Θ(D) time before the clock difference between its endpoints can be reduced to its final value of Θ(log D). Informally, this adjustment time corresponds to the time it takes for information about the new edge to propagate through the network, so that all nodes can adjust their behavior accordingly. In [26] we show that in networks with dynamic diameter D, there is a dynamic clock synchronization algorithm that achieves a clock difference of no more than O(D) between any two nodes at any time in the execution, and a clock difference of O(log D) between nodes that remain neighbors of each other for Θ(D) time. The algorithm is based on the optimal static algorithm from [31], but it requires very careful handling of new edges, in order to reduce the clock difference on such edges while at the same time not creating large clock differences on “older” edges that have existed for longer than Θ(D) time.
dynamic diameter of O(log_{1+np} n). Using standard probability-theoretic arguments, it can be shown that
indeed this is true with high probability (w.h.p.). Note that for p = Ω(1)/n, we get a logarithmic dynamic
diameter; even though the graph is not connected in each round w.h.p. (as p is below the connectivity
threshold), still over O(log n) rounds every node causally influences every other node w.h.p. For constant
p, the dynamic diameter becomes a constant. (Closely related to the dynamic diameter of a sequence of
independent random graphs is the basic problem of spreading a rumor by a push-pull gossip protocol, as
introduced in [25]. In such protocols each node chooses a random neighbor to connect to, which induces an
independent random graph for that round.)
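A quick experiment (illustrative code, not from the cited work) makes this scaling visible: flood a single message when every round's graph is a fresh G_{n,p}. Only the edges between informed and uninformed nodes matter, and each such edge is present independently with probability p, so an uninformed node is reached in a given round with probability 1 − (1 − p)^{|informed|}.

import math
import random

def flood_time_iid_gnp(n: int, p: float, rng: random.Random) -> int:
    """Rounds needed to flood one message when each round's graph is an
    independent G(n, p); only the informed/uninformed cut matters."""
    informed = 1
    rounds = 0
    while informed < n:
        rounds += 1
        reach_prob = 1.0 - (1.0 - p) ** informed
        informed += sum(1 for _ in range(n - informed) if rng.random() < reach_prob)
    return rounds

if __name__ == "__main__":
    rng = random.Random(1)
    n, p = 1000, 0.01                              # np = 10
    trials = sorted(flood_time_iid_gnp(n, p, rng) for _ in range(20))
    print("observed flood times:", trials)
    print("log_(1+np)(n) ~", round(math.log(n) / math.log(1 + n * p), 2))

The observed times stay within a small constant factor of log_{1+np}(n), as the asymptotic bound suggests.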
In [11], Clementi et al. show that the above result generalizes to Markovian dynamic graphs with birth
rate p. In addition, they give a lower bound on the dynamic diameter for that case.
Theorem 3.2. [11] For all 0 ≤ p, q ≤ 1, the dynamic diameter of a Markovian dynamic graph with birth
rate p and any death rate q is at most O(log_{1+pn} n), independent of q and the initial graph G(0). If G(0)
is empty and p ≥ ln(n)/n is above the connectivity threshold of the random graph G_{n,p}, this result is
asymptotically tight. For general p > 0, the dynamic diameter is lower bounded by Ω(log(n)/(np)).
Proof sketch. The upper bound can be shown as for the simpler case where the graph is an independent
random graph G_{n,p} in each round. To see this, consider two consecutive rounds r and r + 1. For an
edge e ∉ E(r − 1), the probability that e ∈ E(r) is p; for an edge e ∈ E(r − 1), the probability that
e ∈ E(r) ∪ E(r + 1) is (1 − q) + qp ≥ p. Hence, in the union of any two consecutive rounds, each
edge independently occurs with probability at least p. It follows that the dynamic diameter of a Markovian
random graph with birth rate p is at most twice as large as the dynamic diameter when the graph in each
round is an independent random graph G_{n,p}. We have already seen above that the dynamic diameter of such
dynamic graphs is w.h.p. O(log_{1+pn} n).
For the lower bound, we note that if E(0) = ∅, the union of the instantaneous graphs in the first d
rounds is itself a (static) random graph G_{n,p̂}, where p̂ = 1 − (1 − p)^d (since (1 − p)^d is the probability that
an edge never appears in any of the d rounds). If p ≥ ln(n)/n and d = Θ(log_{np} n), then the diameter of this random graph is
Θ(log_{np̂} n) = Θ(log_{np} n) [7]. This implies that the diameter of the dynamic graph is Ω(log_{np} n), because
in the first d rounds a node can only causally influence nodes at distance at most d from it in the union of the
graphs during those d rounds. A somewhat similar, but more involved argument can be made for the case
np < ln n. For details, we refer to [11].
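The key step of this lower bound — that the union of the first d rounds, started from the empty graph, is a G_{n,p̂} with p̂ = 1 − (1 − p)^d — is easy to check empirically (illustrative code; the parameter values are arbitrary).

import itertools
import random

def union_of_first_d_rounds(n: int, p: float, q: float, d: int, rng) -> set:
    """Run the edge-Markovian process (birth rate p, death rate q) for d rounds
    starting from E(0) = empty, and return the union of E(1), ..., E(d)."""
    current, union = set(), set()
    for _ in range(d):
        nxt = set()
        for e in itertools.combinations(range(n), 2):
            stays = e in current and rng.random() >= q
            born = e not in current and rng.random() < p
            if stays or born:
                nxt.add(e)
        current = nxt
        union |= current
    return union

if __name__ == "__main__":
    rng = random.Random(2)
    n, p, q, d = 200, 0.002, 0.5, 5
    density = len(union_of_first_d_rounds(n, p, q, d, rng)) / (n * (n - 1) // 2)
    print("empirical edge density of the union:", round(density, 5))
    print("predicted p_hat = 1 - (1 - p)^d    :", round(1 - (1 - p) ** d, 5))

The death rate is irrelevant for the union (an edge that was ever born has appeared), and independence across edges holds by construction, so the union is indeed distributed as G_{n,p̂}.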
Subsequent work in [5, 15] analyzes the flooding time of Markovian random graphs with birth rate p
and death rate q in the stationary distribution. In that case, the graphs in all rounds are random graphs G_{n,p̂}
with edge probability p̂ = p/(p + q). If p̂ > ln(n)/n, i.e., if p̂ is above the connectivity threshold, w.h.p.,
Since each node can only invite at most k − 1 other nodes to join its committee, the size of each committee
is at most k. If k ≥ n, then we have just one leader, whose invitations all reach their destinations; in
k − 1 ≥ n − 1 cycles this leader successfully identifies and invites all the other nodes in the graph.
Notice that when k ≥ n, each node has n − 1 rounds where it is “singled out” and has its UID forwarded
to all nodes in the graph. If nodes remember all the UIDs they hear, they can collect the UIDs of all the
participants this way; if nodes have initial inputs, they can attach them to their UIDs, and in this manner we
can achieve full information dissemination and exact counting.
The protocol above requires O(n^2) rounds for full information dissemination in 1-interval connected
graphs. If the graph enjoys good expansion, then it can take fewer than n − 1 rounds to invite each node, and
the process can be sped up; if the graph is T -interval connected for T > 1, then we can pipeline invitations
and invite T nodes in O(n) rounds, gaining a factor of T in the round complexity. We refer to [28] for the
full details.
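For contrast, with unbounded message sizes the task is easy: if every node rebroadcasts all UIDs it has heard, then any UID not yet known to everyone crosses the cut around its current holders in each round (by 1-interval connectivity), so n − 1 rounds always suffice for full dissemination and exact counting. A minimal sketch of this baseline (illustrative code; the protocol of [28] described above instead sends only one UID per message):

import random
from typing import List, Set, Tuple

Edge = Tuple[int, int]

def random_connected_graph(n: int, rng: random.Random) -> Set[Edge]:
    """An arbitrary connected graph on {0, ..., n-1}: a random spanning tree plus
    a few extra edges (standing in for the adversary's choice of E(r))."""
    order = list(range(n))
    rng.shuffle(order)
    edges = {tuple(sorted((order[i], order[rng.randrange(i)]))) for i in range(1, n)}
    for _ in range(n // 2):
        u, v = rng.sample(range(n), 2)
        edges.add(tuple(sorted((u, v))))
    return edges

def full_dissemination_rounds(n: int, rng: random.Random) -> int:
    """Every node rebroadcasts the set of all UIDs it knows, every round; return
    the number of rounds until every node knows all n UIDs."""
    known: List[Set[int]] = [{v} for v in range(n)]
    r = 0
    while any(len(k) < n for k in known):
        r += 1
        edges = random_connected_graph(n, rng)        # the connected graph of round r
        snapshot = [set(k) for k in known]            # states at the start of the round
        for u, v in edges:
            known[u] |= snapshot[v]
            known[v] |= snapshot[u]
    return r

if __name__ == "__main__":
    rng = random.Random(0)
    # 1-interval connectivity guarantees at most n - 1 rounds; random graphs
    # typically finish much sooner.
    print([full_dissemination_rounds(32, rng) for _ in range(5)])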
Liveness and termination. The algorithms we focused on in this column always terminate, even when
the network changes continually. There are (at least) two popular alternative modes of computation.
The first alternative is to use self-stabilizing algorithms. The basic assumption here is that changes are
bounded in time and eventually cease, or that changes are infrequent enough that the algorithm can stabilize
between them. While changes continue to occur, the algorithm can behave badly; however, if changes
cease for sufficiently long, a self-stabilizing algorithm will eventually gravitate to a correct configuration
and resume normal operation. We refer to [16] for a comprehensive treatment of self-stabilizing algorithms.
Some algorithms provide a two-part correctness guarantee: a safety guarantee, which holds even while
the network is dynamic, and a liveness (or termination) guarantee, which only holds when the network
stops changing. For example, algorithms based on Gafni and Bertsekas’ link reversal algorithm from [19]
frequently have this flavor; the link reversal algorithm eventually orients a directed network so that all nodes
have a path to some fixed sink node, but the time required is Θ(n^2) rounds [19, 9], and during this time
nodes cannot necessarily route messages to the sink. One example that uses link-reversal is [37], which
gives a mutual exclusion algorithm in which nodes pass a single token among themselves to determine who
can enter the critical section; link reversal is used to route the token from node to node. Safety (mutual
exclusion) is always guaranteed because only one node has the token, but liveness (in this case starvation-
freedom) is only guaranteed when the network stops changing.
A second alternative to always-terminating computation is eventually stabilizing computation. Here it is
not assumed that the network eventually stops changing. However, the algorithm never terminates; instead,
the outputs of the nodes are continually updated, and they eventually stabilize to a correct answer. (Note that
the nodes themselves do not know when a correct answer has been reached, so nodes cannot stop executing
the algorithm.) A prominent example is population protocols [1], where participants are anonymous and
Dynamic overlay networks. The models we discussed here assume that the network topology is com-
pletely adversarially controlled. There is a vast literature on algorithms that choose the communication
topology themselves, typically in the form of an overlay network that the algorithm superimposes on an
underlying fully-connected network. One very popular application domain is peer-to-peer protocols, where
participants must maintain a small number of connections to peers they select, e.g., [35, 36]. The algorithm
must deal with nodes joining and leaving the network (churn) by adjusting the overlay network. Examples
of protocols that deal with continual concurrent joins and leaves controlled by an adaptive worst-case adver-
sary are given, for instance, in [29, 32]. In the examples given above, the overlay changes only as a response
to changes in the set of participating peers; other algorithms, such as those given in [20, 22, 33], induce a
constantly-changing overlay even when the set of participants remains static.
Geographic models. Much of the work on dynamic networks is motivated by an underlying distribution
of the nodes on the two- or three-dimensional plane, with some geometric constraint governing which nodes
have a communication link to each other (perhaps the most popular assumption is that only nodes within
some fixed distance of each other can directly communicate). From time to time the nodes change their
positions according to some mobility pattern, either adversarially controlled or random (e.g., the random
way-point model [6, 24, 30]; we discussed one variant of this model in Section 3.2). We refer to [10] for a
survey.
7 Conclusion
In this column we briefly surveyed several models for dynamic networks. The main characteristic shared
by these models is that they do not assume that the network stops changing at some point, and they do
not restrict the number or frequency of changes. Nevertheless, recent results show that always-terminating
computation is possible in such networks, even when the network is adversarially controlled.
There are many open research questions in dynamic networks. One major concern is dealing with node
failures, or with nodes joining and leaving the network; most of the algorithms we discussed here are not
resilient to such changes. For example, in some of the network-wide broadcast algorithms described in
Section 5, nodes stop forwarding the message after a certain number of rounds. At some points in the
execution there might only be a few nodes still forwarding the message, and if these nodes were to crash,
the broadcast would fail. Can broadcast algorithms be made resilient to node crashes, while still completing
in a timely manner and allowing nodes to eventually stop sending? As another example, the information
dissemination algorithm from Section 4 cannot withstand node failures, nor can it handle new nodes joining
the network. Can information dissemination be solved in networks that enjoy good vertex growth (e.g.,
networks that are always k-connected), despite changes to the set of nodes participating in the computation?
Another interesting direction is testing and adapting to network conditions. Throughout this column
we saw that if the network enjoys good expansion or long-term stability, algorithms can be made to run
faster. But how do we know if the network is a good expander, or even if it is 1-interval connected? How
do we know if it is fairly stable over time? Lightweight tests for such properties, which can run alongside
an algorithm and inform it of the condition of the network in the recent past, could allow algorithms to take
advantage of a well-behaved network.
Acknowledgement. The authors would like to thank the Editor, Idit Keidar, for many insightful comments
and suggestions.
References
[1] D. Angluin, J. Aspnes, Z. Diamadi, M. J. Fischer, and R. Peralta. Computation in networks of passively
mobile finite-state sensors. In Proc. of 23rd ACM Symp. on Principles of Distributed Computing
(PODC), pages 290–299, 2004.
[3] C. Avin, M. Koucký, and Z. Lotker. How to explore a fast-changing world (cover time of a simple
random walk on evolving graphs). In Proc. of 35th Coll. on Automata, Languages and Programming
(ICALP), pages 121–132, 2008.
[4] R. Bar-Yehuda, O. Goldreich, and A. Itai. On the time-complexity of broadcast in multi-hop radio
networks: An exponential gap between determinism and randomization. J. of Computer and System
Sciences, 45(1):104–126, 1992.
[5] H. Baumann, P. Crescenzi, and P. Fraigniaud. Parsimonious flooding in dynamic graphs. In Proc. of
28th ACM Symp. on Principles of Distributed Computing (PODC), pages 260–269, 2009.
[6] C. Bettstetter, G. Resta, and P. Santi. The node distribution of the random waypoint mobility model
for wireless ad hoc networks. IEEE Transactions on Mobile Computing, 2:257–269, 2003.
[7] B. Bollobás. The diameter of random graphs. Trans. of the American Mathematical Society, 267(1):41–
52, 1981.
[8] E. Bortnikov, M. Gurevich, I. Keidar, G. Kliot, and A. Shraer. Brahms: Byzantine resilient random
membership sampling. Computer Networks, 53(13):2340–2359, 2009.
[9] C. Busch and S. Tirthapura. Analysis of link reversal routing algorithms. SIAM J. on Computing,
35:305–326, 2005.
[10] T. Camp, J. Boleng, and V. Davies. A survey of mobility models for ad hoc network research. Wireless
Communication and Mobile Computing, 2(5):483–502, 2002.
[11] A. Clementi, C. Macci, A. Monti, F. Pasquale, and R. Silvestri. Flooding time in edge-Markovian
dynamic graphs. In Proc. of 27th ACM Symp. on Principles of Distributed Computing (PODC), pages
213–222, 2008.
[13] A. Clementi, A. Monti, and R. Silvestri. Round robin is optimal for fault-tolerant broadcasting on
wireless networks. J. of Parallel and Distributed Computing, 64(1):89–96, 2004.
[14] A. Clementi, A. Monti, and R. Silvestri. Fast flooding over Manhattan. In Proc. of 29th ACM Symp.
on Principles of Distributed Computing (PODC), pages 375–383, 2010.
[15] A. Clementi, F. Pasquale, A. Monti, and R. Silvestri. Information spreading in stationary Markovian
evolving graphs. In Proc. of IEEE Symp. on Parallel & Distributed Processing (IPDPS), 2009.
[17] A. Ferreira. Building a reference combinatorial model for MANETs. IEEE Network Magazine,
18(5):24–29, 2004.
[18] A. Ferreira, A. Goldman, and J. Monteiro. Performance evaluation of routing protocols for MANETs
with known connectivity patterns using evolving graphs. Wireless Networks, 16(3):627–640, 2009.
[19] E. M. Gafni and D. P. Bertsekas. Distributed algorithms for generating loop-free routes in networks
with frequently changing topology. IEEE Transactions on Communications, 29:11–18, 1981.
[20] A. J. Ganesh, A.-M. Kermarrec, and L. Massoulié. SCAMP: Peer-to-peer lightweight membership
service for large-scale group communication. Networked Group Communication, pages 44–55, 2001.
[21] P. Grindrod and D. J. Higham. Evolving graphs: dynamical models, inverse problems and propagation.
Proc. of the Royal Society A: Mathematical, Physical and Engineering Sciences, 466(2115):753–770,
2009.
[22] M. Gurevich and I. Keidar. Correctness of gossip-based membership under message loss. SIAM J. of
Computing, 39(8):3830–3859, 2010.
[23] S. Janson, T. Łuczak, and A. Rucinski. Random graphs. Wiley-Interscience Series in Discrete Mathe-
matics and Optimization. Wiley-Interscience, New York, 2000.
[24] D. B. Johnson and D. A. Maltz. Dynamic source routing in ad hoc wireless networks. In Mobile
Computing, chapter 5, pages 153–181. Kluwer Academic, 1996.
[25] R. Karp, C. Schindelhauer, S. Shenker, and B. Vöcking. Randomized rumor spreading. In Proc. of 41st
Symp. on Foundations of Computer Science (FOCS), pages 565–574, 2000.
[26] F. Kuhn, C. Lenzen, T. Locher, and R. Oshman. Optimal gradient clock synchronization in dynamic
networks. In Proc. of 29th ACM Symp. on Principles of Distributed Computing (PODC), pages 430–
439, 2010.
[27] F. Kuhn, N. Lynch, C. Newport, R. Oshman, and A. Richa. Broadcasting in unreliable radio networks.
In Proc. of 29th ACM Symp. on Principles of Distributed Computing (PODC), pages 336–345, 2010.
[28] F. Kuhn, N. Lynch, and R. Oshman. Distributed computation in dynamic networks. In Proc. of 42nd
Symp. on Theory of Computing (STOC), pages 513–522, 2010.
[30] J.-Y. Le Boudec and M. Vojnovic. The random trip model: Stability, stationarity regime, and perfect
simulation. IEEE/ACM Transactions on Networking, 16(6):1153–1166, 2006.
[31] C. Lenzen, T. Locher, and R. Wattenhofer. Tight bounds for clock synchronization. In Proc. of 28th
ACM Symp. on Principles of Distributed Computing (PODC), pages 46–55, 2009.
[32] X. Li, J. Misra, and C. G. Plaxton. Maintaining the ranch topology. J. of Parallel and Distributed
Computing, 70(11):1142–1158, 2010.
[33] P. Mahlmann and C. Schindelhauer. Peer-to-peer networks based on random transformations of con-
nected regular undirected graphs. In Proc. of 17th ACM Symp. on Parallelism in Algorithms and
Architectures (SPAA), pages 155–164, 2005.
[34] R. O’Dell and R. Wattenhofer. Information dissemination in highly dynamic graphs. In Proc. of
Workshop on Foundations of Mobile Computing (DIALM-POMC), pages 104–110, 2005.
[35] C. Plaxton, R. Rajaraman, and A. W. Richa. Accessing nearby copies of replicated objects in a dis-
tributed environment. In Proc. of 9th ACM Symp. on Parallel Algorithms and Architectures (SPAA),
pages 311–320, 1997.
[36] I. Stoica, R. Morris, D. Karger, F. Kaashoek, and H. Balakrishnan. Chord: A scalable peer-to-peer
lookup service for internet applications. In Proc. ACM SIGCOMM Conference on Applications, Tech-
nologies, Architectures, and Protocols for Computer Communications, 2001.
[37] J. E. Walter, J. L. Welch, and N. H. Vaidya. A mutual exclusion algorithm for ad hoc mobile networks.
Wireless Networks, 7:585–600, 2001.