
Personalized Federated Learning: A Meta-Learning Approach

Alireza Fallah∗, Aryan Mokhtari†, Asuman Ozdaglar∗

arXiv:2002.07948v4 [cs.LG] 23 Oct 2020

Abstract
In Federated Learning, we aim to train models across multiple computing units (users),
while users can only communicate with a common central server, without exchanging their
data samples. This mechanism exploits the computational power of all users and allows users
to obtain a richer model as their models are trained over a larger set of data points. However,
this scheme only develops a common output for all the users, and, therefore, it does not adapt
the model to each user. This is an important missing feature, especially given the heterogeneity
of the underlying data distribution for various users. In this paper, we study a personalized
variant of federated learning in which our goal is to find an initial shared model that
current or new users can easily adapt to their local dataset by performing one or a few steps
of gradient descent with respect to their own data. This approach keeps all the benefits of
the federated learning architecture, and, by structure, leads to a more personalized model for
each user. We show this problem can be studied within the Model-Agnostic Meta-Learning
(MAML) framework. Inspired by this connection, we study a personalized variant of the well-
known Federated Averaging algorithm and evaluate its performance in terms of gradient norm
for non-convex loss functions. Further, we characterize how this performance is affected by the
closeness of underlying distributions of user data, measured in terms of distribution distances
such as Total Variation and 1-Wasserstein metric.

1 Introduction
In Federated Learning (FL), we consider a set of n users that are all connected to a central node
(server), where each user has access only to its local data [1]. In this setting, the users aim to come
up with a model that is trained over all the data points in the network without exchanging their
local data with other users or the central node due to privacy issues or communication limitations.
More formally, if we define $f_i : \mathbb{R}^d \to \mathbb{R}$ as the loss corresponding to user $i$, the goal is to solve
$$\min_{w \in \mathbb{R}^d} f(w) := \frac{1}{n}\sum_{i=1}^{n} f_i(w). \tag{1}$$

In particular, consider a supervised learning setting, where $f_i$ represents the expected loss over the
data distribution of user $i$, i.e.,
$$f_i(w) := \mathbb{E}_{(x,y)\sim p_i}\left[ l_i(w; x, y) \right], \tag{2}$$
where $l_i(w; x, y)$ measures the error of model $w$ in predicting the true label $y \in \mathcal{Y}_i$ given the input
$x \in \mathcal{X}_i$, and $p_i$ is the distribution over $\mathcal{X}_i \times \mathcal{Y}_i$. The focus of this paper is on a data-heterogeneous
setting where the probability distributions $p_i$ of the users are not identical. To illustrate this formulation,
consider the example of training a Natural Language Processing (NLP) model over the devices of
a set of users. In this problem, $p_i$ represents the empirical distribution of words and expressions
used by user $i$. Hence, $f_i(w)$ can be expressed as $f_i(w) = \sum_{(x,y)\in \mathcal{S}_i} p_i(x, y)\, l_i(w; x, y)$, where $\mathcal{S}_i$ is
the dataset corresponding to user $i$ and $p_i(x, y)$ is the probability that user $i$ assigns to a specific
word, which is proportional to the frequency with which user $i$ uses this word.
∗ Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA. {afallah@mit.edu, asuman@mit.edu}.
† Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA. mokhtari@austin.utexas.edu.

Indeed, each user can solve its local problem defined in (2) without any exchange of information
with other users; however, the resulting model may not generalize well to new samples, as it has
been trained over a small number of samples. If users cooperate and exploit the data available at
all users, then their local models could obtain stronger generalization guarantees. A conventional
approach for achieving this goal is minimizing the aggregate of the local functions, as defined in (1).
However, this scheme only develops a common output for all the users, and therefore it does not
adapt the model to each user. In particular, in heterogeneous settings where the underlying
data distributions of the users are not identical, the resulting global model obtained by minimizing the
average loss could perform arbitrarily poorly once applied to the local dataset of each user. In
other words, the solution of problem (1) is not personalized for each user. To highlight this point,
recall the NLP example: although the distribution over words and expressions varies
from one person to another, the solution to problem (1) provides a shared answer for all users,
and, therefore, it is not fully capable of achieving a user-specific model.
In this paper, we overcome this issue by considering a modified formulation of the federated
learning problem which incorporates personalization (Section 2). Building on the Model-Agnostic
Meta-Learning (MAML) problem formulation introduced in [2], the goal of this new formulation
is to find an initial point shared between all users which performs well after each user updates it
with respect to its own loss function, potentially by performing a few steps of a gradient-based
method. This way, while the initial model is derived in a distributed manner over all users, the
final model implemented by each user differs from other ones based on her or his own data. We
study a Personalized variant of the FedAvg algorithm, called Per-FedAvg, designed for solving the
proposed personalized FL problem (Section 3). In particular, we elaborate on its connections with
the original FedAvg algorithm [3], and also, discuss a number of considerations that one needs
to take into account for implementing Per-FedAvg. We also establish the convergence properties
of the proposed Per-FedAvg algorithm for minimizing non-convex loss functions (Section 4). In
particular, we characterize the role of data heterogeneity and closeness of data distribution of
different users, measured by distribution distances, such as Total Variation (TV) or 1-Wasserstein,
on the convergence of Per-FedAvg.
Related Work. Recently we have witnessed significant progress in developing novel methods that
address different challenges in FL; see [4, 5]. In particular, there have been several works on various
aspects of FL, including preserving the privacy of users [6, 7, 8, 9] and lowering communication
cost [10, 11, 12, 13]. Several works develop algorithms for the homogeneous setting, where the data
points of all users are sampled from the same probability distribution [14, 15, 16, 17]. More related
to our paper, there are several works that study statistical heterogeneity of users’ data points in
FL [19, 20, 21, 22, 24, 25], but they do not attempt to find a personalized solution for each user.
The centralized version of model-agnostic meta-learning (MAML) problem was first proposed
in [2] and followed by a number of papers studying its empirical characteristics [26, 27, 28, 29, 30, 31]
as well as its convergence properties [32, 33]. In this work, we focus on the convergence of MAML
methods for the FL setting that is more challenging as nodes perform multiple local updates
before sending their updates to the server, which is not considered in previous theoretical works
on meta-learning.
Recently, the idea of personalization in FL and its connections with MAML has gained a lot
of attention. In particular, [34] considers a formulation and algorithm similar to our paper, and
elaborates on the empirical success of this framework. Also, recently, there has been a number of
other papers that have studied different combinations of MAML-type methods with FL architec-
ture from an empirical point of view [35, 36]. However, our main focus is on developing a theoretical
understanding of this formulation, where we characterize the convergence of Per-FedAvg
and the role of this algorithm's parameters in its performance. Besides, in our numerical exper-
iment section, we show how the method studied in [34] may not perform well in some cases, and
propose another algorithm which addresses this issue. In addition, an independent and concurrent
work [37] studies a similar formulation theoretically for the case of strongly convex functions. The
results in [37] are completely different from ours, as they study the case that the functions are
strongly convex and exact gradients are available, while we study nonconvex functions, and also
address gradient stochasticity.
Using meta-learning and multi-task learning to achieve personalization is not limited to the MAML
framework. In particular, [38] proposes ARUBA, a meta-learning algorithm inspired by online
convex optimization, and shows that applying it to FedAvg improves its performance. A similar
idea is later used in [39] to design differentially private algorithms with applications in FL. Also
in [40], the authors use a multi-task learning framework and propose a new method, MOCHA, to
address the statistical and systems challenges, including data heterogeneity and communication
efficiency. Their proposed multi-task learning scheme also leads to a set of solutions that are more
user-specific. A detailed survey on the connections of FL and multi-task and meta-learning can be
found in [4, 5]. Also, in [18], the authors consider a framework for training a mixture of a single
global model and local models, leading to a personalized solution for each user. A similar idea
has been studied in [41], where the authors propose an adaptive federated learning algorithm that
learns a mixture of local and global models as the personalized model.

2 Personalized Federated Learning via Model-Agnostic Meta-Learning
As we stated in Section 1, our goal in this section is to show how the fundamental idea behind the
Model-Agnostic Meta-Learning (MAML) framework in [2] can be exploited to design a personalized
variant of the FL problem. To do so, let us first briefly recap the MAML formulation. Given a set
of tasks drawn from an underlying distribution, in MAML, in contrast to the traditional supervised
learning setting, the goal is not finding a model which performs well on all the tasks in expectation.
Instead, in MAML, we assume we have a limited computational budget to update our model after
a new task arrives, and in this new setting, we look for an initialization which performs well after
it is updated with respect to this new task, possibly by one or a few steps of gradient descent. In
particular, if we assume each user takes the initial point and updates it using one step of gradient
descent with respect to its own loss function, then problem (1) changes to
$$\min_{w \in \mathbb{R}^d} F(w) := \frac{1}{n}\sum_{i=1}^{n} f_i\big(w - \alpha \nabla f_i(w)\big), \tag{3}$$

where $\alpha \ge 0$ is the stepsize. The strength of this formulation is that it not only allows us to
maintain the advantages of FL, but also captures the difference between users, as either existing
or new users can take the solution of this new problem as an initial point and slightly update it
with respect to their own data. Going back to the NLP example, this means that the users could
take this resulting initialization and update it by going over their own data $\mathcal{S}_i$, performing just
one or a few steps of gradient descent to obtain a model that works well on their own dataset.
As mentioned earlier, for the considered heterogeneous model of data distribution, solving
problem (1) is not the ideal choice, as it returns a single model that, even after a few steps of local
gradient descent, may not quickly adjust to each user's local data. On the other hand, by solving (3) we
find an initial model (meta-model) which is trained in a way that one step of local gradient descent
leads to a good model for each individual user. This formulation can also be extended to the case
where users run a few steps of gradient updates, but to simplify our notation we focus on the single
gradient update case. We would like to mention that the problem formulation in (3) for FL
has been proposed independently in another work [34] and studied numerically. In this work, we
focus on the theoretical aspects of this problem and seek a provably convergent method for the case
where the functions $f_i$ are nonconvex; a small sketch of the objective follows below.
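To make (3) concrete, here is a minimal NumPy sketch that evaluates the personalized objective $F$ for toy quadratic user losses $f_i(w) = \frac{1}{2}\|w - w_i^\star\|^2$, whose gradients are available in closed form; these losses and all parameter values are illustrative assumptions, not from the paper.

    # Evaluate F(w) = (1/n) * sum_i f_i(w - alpha * grad f_i(w)) from eq. (3)
    # for toy quadratic losses f_i(w) = 0.5 * ||w - w_star_i||^2, so that
    # grad f_i(w) = w - w_star_i in closed form.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, alpha = 10, 3, 0.4
    w_star = rng.normal(size=(n, d))    # heterogeneous per-user optima

    f = lambda w, ws: 0.5 * np.sum((w - ws) ** 2)
    grad_f = lambda w, ws: w - ws

    def F(w):
        # one gradient step per user, then average the post-adaptation losses
        return np.mean([f(w - alpha * grad_f(w, ws), ws) for ws in w_star])

    w0 = np.zeros(d)
    print("average loss without adaptation:", np.mean([f(w0, ws) for ws in w_star]))
    print("personalized objective F(w0):   ", F(w0))  # smaller: users adapt first

Running the snippet shows that the post-adaptation loss $F(w_0)$ is strictly smaller than the plain average loss $f(w_0)$, which is exactly the advantage the formulation in (3) is designed to capture.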

3 Personalized FedAvg
In this section, we present the Personalized FedAvg (Per-FedAvg) method to solve (3). This
algorithm is inspired by FedAvg, but it is designed to find the optimal solution of (3) instead
of (1). In FedAvg, at each round, the server chooses a fraction of users with size rn (r ∈ (0, 1]) and
sends its current model to these users. Each selected user i updates the received model based on
its own loss function fi and by running τ ≥ 1 steps of stochastic gradient descent. Then, the active
users return their updated models to the server. Finally, the server updates the global model by
computing the average of the models received from these selected users, and then the next round
follows. Per-FedAvg follows the same principles. First, note that function F in (3) can be written
as the average of meta-functions F1 , . . . , Fn where the meta-function Fi associated with user i is
defined as
$$F_i(w) := f_i\big(w - \alpha \nabla f_i(w)\big). \tag{4}$$

Algorithm 1: The proposed Personalized FedAvg (Per-FedAvg) Algorithm
Input: Initial iterate $w_0$, fraction of active users $r$.
for $k = 0$ to $K-1$ do
    Server chooses a subset $\mathcal{A}_k$ of users uniformly at random and with size $rn$;
    Server sends $w_k$ to all users in $\mathcal{A}_k$;
    for all $i \in \mathcal{A}_k$ do
        Set $w^i_{k+1,0} = w_k$;
        for $t = 1$ to $\tau$ do
            Compute the stochastic gradient $\tilde{\nabla} f_i(w^i_{k+1,t-1}, \mathcal{D}^i_t)$ using dataset $\mathcal{D}^i_t$;
            Set $\tilde{w}^i_{k+1,t} = w^i_{k+1,t-1} - \alpha \tilde{\nabla} f_i(w^i_{k+1,t-1}, \mathcal{D}^i_t)$;
            Set $w^i_{k+1,t} = w^i_{k+1,t-1} - \beta \big(I - \alpha \tilde{\nabla}^2 f_i(w^i_{k+1,t-1}, \mathcal{D}''^i_t)\big)\, \tilde{\nabla} f_i(\tilde{w}^i_{k+1,t}, \mathcal{D}'^i_t)$;
        end for
        Agent $i$ sends $w^i_{k+1,\tau}$ back to the server;
    end for
    Server updates its model by averaging over received models: $w_{k+1} = \frac{1}{rn}\sum_{i \in \mathcal{A}_k} w^i_{k+1,\tau}$;
end for

To follow a similar scheme as FedAvg for solving problem (3), the first step is to compute the
gradient of the local functions, which in this case is the gradient $\nabla F_i$, given by
$$\nabla F_i(w) = \big(I - \alpha \nabla^2 f_i(w)\big)\, \nabla f_i\big(w - \alpha \nabla f_i(w)\big). \tag{5}$$
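As a sanity check, (5) is simply the chain rule applied to the definition $F_i(w) = f_i(w - \alpha \nabla f_i(w))$ in (4):
$$\nabla F_i(w) = \Big(\nabla_w\big[w - \alpha \nabla f_i(w)\big]\Big)^{\!\top} \nabla f_i\big(w - \alpha \nabla f_i(w)\big) = \big(I - \alpha \nabla^2 f_i(w)\big)\nabla f_i\big(w - \alpha \nabla f_i(w)\big),$$
where the transpose can be dropped since the Jacobian $I - \alpha \nabla^2 f_i(w)$ is symmetric.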


Computing the gradient $\nabla f_i(w)$ at every round is often computationally costly. Hence, we take
a batch of data $\mathcal{D}^i$ with respect to distribution $p_i$ to obtain an unbiased estimate $\tilde{\nabla} f_i(w, \mathcal{D}^i)$ given by
$$\tilde{\nabla} f_i(w, \mathcal{D}^i) := \frac{1}{|\mathcal{D}^i|} \sum_{(x,y) \in \mathcal{D}^i} \nabla l_i(w; x, y). \tag{6}$$
Similarly, the Hessian $\nabla^2 f_i(w)$ in (5) can be replaced by its unbiased estimate $\tilde{\nabla}^2 f_i(w, \mathcal{D}^i)$.
At round $k$ of Per-FedAvg, similar to FedAvg, the server first sends the current global model
$w_k$ to a fraction $\mathcal{A}_k$ of users chosen uniformly at random with size $rn$. Each user $i \in \mathcal{A}_k$ performs
$\tau$ steps of stochastic gradient descent locally with respect to $F_i$. In particular, these local
updates generate a local sequence $\{w^i_{k+1,t}\}_{t=0}^{\tau}$, where $w^i_{k+1,0} = w_k$ and, for $1 \le t \le \tau$,
$$w^i_{k+1,t} = w^i_{k+1,t-1} - \beta\, \tilde{\nabla} F_i(w^i_{k+1,t-1}), \tag{7}$$

where $\beta$ is the local learning rate (stepsize) and $\tilde{\nabla} F_i(w^i_{k+1,t-1})$ is an estimate of $\nabla F_i(w^i_{k+1,t-1})$
in (5). Note that the stochastic gradient $\tilde{\nabla} F_i(w^i_{k+1,t-1})$ for all local iterates is computed using
independent batches $\mathcal{D}^i_t$, $\mathcal{D}'^i_t$, and $\mathcal{D}''^i_t$ as follows:
$$\tilde{\nabla} F_i(w^i_{k+1,t-1}) := \Big(I - \alpha \tilde{\nabla}^2 f_i\big(w^i_{k+1,t-1}, \mathcal{D}''^i_t\big)\Big)\, \tilde{\nabla} f_i\Big(w^i_{k+1,t-1} - \alpha \tilde{\nabla} f_i\big(w^i_{k+1,t-1}, \mathcal{D}^i_t\big),\; \mathcal{D}'^i_t\Big). \tag{8}$$

Note that $\tilde{\nabla} F_i(w^i_{k+1,t-1})$ is a biased estimator of $\nabla F_i(w^i_{k+1,t-1})$ due to the fact that
$\tilde{\nabla} f_i\big(w^i_{k+1,t-1} - \alpha \tilde{\nabla} f_i(w^i_{k+1,t-1}, \mathcal{D}^i_t),\; \mathcal{D}'^i_t\big)$ is a stochastic gradient that contains another stochastic gradient inside.
Once the local updates $w^i_{k+1,\tau}$ are evaluated, users send them to the server, and the server
updates its global model by averaging over the received models, i.e., $w_{k+1} = \frac{1}{rn}\sum_{i \in \mathcal{A}_k} w^i_{k+1,\tau}$.
Note that, as in other MAML methods [2, 33], the update in (7) can be implemented in two
stages: first, we compute $\tilde{w}^i_{k+1,t} = w^i_{k+1,t-1} - \alpha \tilde{\nabla} f_i(w^i_{k+1,t-1}, \mathcal{D}^i_t)$, and then we evaluate $w^i_{k+1,t}$
as $w^i_{k+1,t} = w^i_{k+1,t-1} - \beta \big(I - \alpha \tilde{\nabla}^2 f_i(w^i_{k+1,t-1}, \mathcal{D}''^i_t)\big)\, \tilde{\nabla} f_i(\tilde{w}^i_{k+1,t}, \mathcal{D}'^i_t)$. Indeed, it can be verified that the
outcome of these two steps is equivalent to the update in (7). To simplify the notation,
throughout the paper we assume that the sizes of $\mathcal{D}^i_t$, $\mathcal{D}'^i_t$, and $\mathcal{D}''^i_t$ are equal to $D$, $D'$, and $D''$,
respectively, for any $i$ and $t$. The steps of Per-FedAvg are depicted in Algorithm 1; an illustrative code sketch follows below.
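To make the round structure concrete, the following is a minimal NumPy sketch of Algorithm 1 for synthetic linear-regression users, chosen so that the batch gradients and Hessians of $f_i$ have simple closed forms. The data generator, dimensions, and hyperparameter values are illustrative assumptions, not taken from the paper's experiments.

    # A minimal NumPy sketch of Per-FedAvg (Algorithm 1) for linear-regression
    # losses l_i(w; x, y) = 0.5 * (x @ w - y)^2, whose batch gradient and Hessian
    # estimates have closed forms. All names below are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 20, 5            # number of users, model dimension
    alpha, beta = 0.05, 0.1
    K, tau, r = 50, 4, 0.5
    D = Dp = Dpp = 32       # batch sizes |D|, |D'|, |D''|

    # Heterogeneous users: each has its own ground-truth parameter.
    w_star = [rng.normal(size=d) + i * 0.1 for i in range(n)]

    def sample_batch(i, size):
        X = rng.normal(size=(size, d))
        y = X @ w_star[i] + 0.1 * rng.normal(size=size)
        return X, y

    def grad(w, X, y):      # batch gradient estimate, as in eq. (6)
        return X.T @ (X @ w - y) / len(y)

    def hess(X):            # batch Hessian estimate for the quadratic loss
        return X.T @ X / len(X)

    w = np.zeros(d)
    for k in range(K):
        active = rng.choice(n, size=int(r * n), replace=False)
        received = []
        for i in active:
            wi = w.copy()
            for t in range(tau):
                X1, y1 = sample_batch(i, D)    # D_t^i
                X2, y2 = sample_batch(i, Dp)   # D_t'^i
                X3, _ = sample_batch(i, Dpp)   # D_t''^i
                w_tilde = wi - alpha * grad(wi, X1, y1)                # inner step
                meta_grad = (np.eye(d) - alpha * hess(X3)) @ grad(w_tilde, X2, y2)  # eq. (8)
                wi = wi - beta * meta_grad                             # eq. (7)
            received.append(wi)
        w = np.mean(received, axis=0)   # server averaging
    print("trained initialization:", w)

Note how the three independent batches play exactly the roles described above: one for the inner adaptation step, one for the outer gradient, and one for the Hessian correction.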

4 Theoretical Results
In this section, we study the convergence properties of the Personalized FedAvg (Per-FedAvg)
method. We focus on nonconvex settings, and characterize the overall number of communication rounds
between the server and users required to find an $\epsilon$-approximate first-order stationary point, whose formal
definition follows.
Definition 4.1. A random vector $w_\epsilon \in \mathbb{R}^d$ is called an $\epsilon$-approximate First-Order Stationary Point
(FOSP) for problem (3) if it satisfies $\mathbb{E}\big[\|\nabla F(w_\epsilon)\|^2\big] \le \epsilon$.
Next, we formally state the assumptions required for proving our main results.
Assumption 1. Functions $f_i$ are bounded below, i.e., $\min_{w \in \mathbb{R}^d} f_i(w) > -\infty$.
Assumption 2. For every $i \in \{1, \dots, n\}$, $f_i$ is twice continuously differentiable and $L_i$-smooth,
and also its gradient is bounded by a nonnegative constant $B_i$, i.e.,
$$\|\nabla f_i(w)\| \le B_i, \qquad \|\nabla f_i(w) - \nabla f_i(u)\| \le L_i\, \|w - u\| \qquad \forall\, w, u \in \mathbb{R}^d. \tag{9}$$

As we discussed in Section 3, the second-order derivatives of all functions appear in the update
rule of the Per-FedAvg algorithm. Hence, in the next assumption, we impose a regularity condition
on the Hessian of each $f_i$, which is also a customary assumption in the analysis of second-order
methods.
Assumption 3. For every $i \in \{1, \dots, n\}$, the Hessian of function $f_i$ is $\rho_i$-Lipschitz continuous, i.e.,
$$\|\nabla^2 f_i(w) - \nabla^2 f_i(u)\| \le \rho_i\, \|w - u\| \qquad \forall\, w, u \in \mathbb{R}^d. \tag{10}$$
To simplify the analysis, in the rest of the paper we define $B := \max_i B_i$, $L := \max_i L_i$, and
$\rho := \max_i \rho_i$, which can be, respectively, considered as a bound on the norm of the gradient of $f_i$,
the smoothness parameter of $f_i$, and the Lipschitz continuity parameter of the Hessian $\nabla^2 f_i$, for $i = 1, \dots, n$.
Our next assumption provides upper bounds on the variances of the gradient and Hessian estimates.
Assumption 4. For any $w \in \mathbb{R}^d$, the stochastic gradient $\nabla l_i(x, y; w)$ and Hessian $\nabla^2 l_i(x, y; w)$,
computed with respect to a single data point $(x, y) \in \mathcal{X}_i \times \mathcal{Y}_i$, have bounded variance, i.e.,
$$\mathbb{E}_{(x,y)\sim p_i}\left[ \|\nabla l_i(x, y; w) - \nabla f_i(w)\|^2 \right] \le \sigma_G^2, \tag{11}$$
$$\mathbb{E}_{(x,y)\sim p_i}\left[ \|\nabla^2 l_i(x, y; w) - \nabla^2 f_i(w)\|^2 \right] \le \sigma_H^2. \tag{12}$$

Finally, we state our last assumption, which characterizes the similarity between the tasks of users.
Assumption 5. For any $w \in \mathbb{R}^d$, the gradients and Hessians of the local functions $f_i(w)$ and of the average
function $f(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w)$ satisfy the following conditions:
$$\frac{1}{n}\sum_{i=1}^{n} \|\nabla f_i(w) - \nabla f(w)\|^2 \le \gamma_G^2, \qquad \frac{1}{n}\sum_{i=1}^{n} \|\nabla^2 f_i(w) - \nabla^2 f(w)\|^2 \le \gamma_H^2. \tag{13}$$

Assumption 5 captures the diversity between the gradients and Hessians of users. Note that
under Assumption 2, the conditions in Assumption 5 are automatically satisfied for γG = 2B
and γH = 2L. However, we state this assumption separately to highlight the role of similarity of
functions corresponding to different users in convergence analysis of Per-FedAvg. In particular, in
the following subsection, we highlight the connections between this assumption and the similarity
of distributions pi for the case of supervised learning (2) under two different distribution distances.

4.1 On the Connections of Task Similarity and Distribution Distances


Recall the definition of $f_i$ in (2). Note that Assumption 5 captures the similarity of the loss functions
of different users. Hence, a fundamental question here is whether this has any connection with
the closeness of the distributions $p_i$. We study this connection by considering two different distances:
the Total Variation (TV) distance and the 1-Wasserstein distance. Throughout this subsection, we assume
all users have the same loss function $l(\cdot\,;\cdot)$ over the same set of inputs and labels, i.e., $f_i(w) :=
\mathbb{E}_{z \sim p_i}[l(z; w)]$ where $z := (x, y) \in \mathcal{Z} := \mathcal{X} \times \mathcal{Y}$. Also, let $p = \frac{1}{n}\sum_i p_i$ denote the average of all
users' distributions.
• Total Variation (TV) Distance: For distributions $q_1$ and $q_2$ over a countable set $\mathcal{Z}$, their
TV distance is given by $\|q_1 - q_2\|_{TV} = \frac{1}{2}\sum_{z \in \mathcal{Z}} |q_1(z) - q_2(z)|$. If we assume a stronger version of
Assumption 2 holds, where for any $z \in \mathcal{Z}$ and $w \in \mathbb{R}^d$ we have $\|\nabla_w l(z; w)\| \le B$ and $\|\nabla^2_w l(z; w)\| \le L$,
then Assumption 5 holds with (check Appendix B)
$$\gamma_G^2 = 4B^2\, \frac{1}{n}\sum_{i=1}^{n} \|p_i - p\|_{TV}^2, \qquad \gamma_H^2 = 4L^2\, \frac{1}{n}\sum_{i=1}^{n} \|p_i - p\|_{TV}^2. \tag{14a}$$
This simple derivation shows that $\gamma_G$ and $\gamma_H$ exactly capture the difference between the probability
distributions of the users in a heterogeneous setting (see the code sketch at the end of this subsection).
• 1-Wasserstein Distance: The 1-Wasserstein distance between two probability distributions $q_1$
and $q_2$ over a metric space $\mathcal{Z}$ is defined as $W_1(q_1, q_2) := \inf_{q \in Q(q_1, q_2)} \int_{\mathcal{Z} \times \mathcal{Z}} d(z_1, z_2)\, dq(z_1, z_2)$, where
$d(\cdot, \cdot)$ is a distance function over the metric space $\mathcal{Z}$ and $Q(q_1, q_2)$ denotes the set of all measures
on $\mathcal{Z} \times \mathcal{Z}$ with marginals $q_1$ and $q_2$ on the first and second coordinates, respectively. Here, we
assume all $p_i$ have bounded support (note that this assumption holds in many cases, as either $\mathcal{Z}$
itself is bounded or because we normalize the data). Also, we assume that for any $w$, the gradient
$\nabla_w l(z; w)$ and the Hessian $\nabla^2_w l(z; w)$ are both Lipschitz with respect to the parameter $z$ and distance
$d(\cdot, \cdot)$, i.e.,
$$\|\nabla_w l(z_1; w) - \nabla_w l(z_2; w)\| \le L_{\mathcal{Z}}\, d(z_1, z_2), \qquad \|\nabla^2_w l(z_1; w) - \nabla^2_w l(z_2; w)\| \le \rho_{\mathcal{Z}}\, d(z_1, z_2). \tag{15}$$
Then, Assumption 5 holds with (check Appendix B)
$$\gamma_G^2 = L_{\mathcal{Z}}^2\, \frac{1}{n}\sum_{i=1}^{n} W_1(p_i, p)^2, \qquad \gamma_H^2 = \rho_{\mathcal{Z}}^2\, \frac{1}{n}\sum_{i=1}^{n} W_1(p_i, p)^2. \tag{16}$$
This derivation does not require Assumption 2 and holds whenever (15) is satisfied. Finally, consider a
special case where the data distributions are homogeneous, and each $p_i$ is an empirical distribution
drawn from a distribution $p_u$ with sample size $m$. In this case, we have $W_1(p_i, p_u) = O(1/\sqrt{m})$
[42].¹ Hence, since $W_1$ is a distance, it is easy to verify that $\gamma_G, \gamma_H = O(1/\sqrt{m})$.
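As an illustrative check of these two distances, the following hedged Python sketch computes the TV distance between small probability mass functions and the one-dimensional 1-Wasserstein distance between equal-size empirical samples (for which $W_1$ reduces to the mean absolute difference of the sorted samples). The toy distributions and the assumed gradient bound $B$ are made up for illustration.

    # Toy check of the distribution distances used in (14a) and (16).
    # The distributions below are illustrative, not from the paper.
    import numpy as np

    # TV distance between two pmfs over a common finite alphabet.
    def tv_distance(q1, q2):
        return 0.5 * np.abs(np.asarray(q1) - np.asarray(q2)).sum()

    # 1-Wasserstein distance between two equal-size empirical samples on R:
    # with the metric d(z1, z2) = |z1 - z2|, it equals the mean absolute
    # difference of the sorted samples.
    def w1_empirical(s1, s2):
        return np.mean(np.abs(np.sort(s1) - np.sort(s2)))

    p1 = [0.7, 0.2, 0.1]          # user 1's pmf over 3 "words"
    p2 = [0.3, 0.4, 0.3]          # user 2's pmf
    p_bar = 0.5 * (np.array(p1) + np.array(p2))

    # gamma_G from (14a): gamma_G^2 = 4 B^2 * avg_i ||p_i - p||_TV^2
    B = 1.0                       # assumed gradient bound
    avg_tv_sq = 0.5 * (tv_distance(p1, p_bar)**2 + tv_distance(p2, p_bar)**2)
    print("gamma_G^2 =", 4 * B**2 * avg_tv_sq)

    rng = np.random.default_rng(1)
    m = 1000
    s1, s2 = rng.normal(0, 1, m), rng.normal(0, 1, m)
    print("W1 between two samples of the same Gaussian:", w1_empirical(s1, s2))  # O(1/sqrt(m))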

4.2 Convergence Analysis of Per-FedAvg Algorithm


In this subsection, we derive the overall complexity of Per-FedAvg for achieving an $\epsilon$-first-order
stationary point. To do so, we first prove the following intermediate result, which shows that under
Assumptions 2 and 3, the local meta-functions $F_i(w)$ defined in (4) and their average function
$F(w) = \frac{1}{n}\sum_{i=1}^{n} F_i(w)$ are smooth.
Lemma 4.2. Recall the definition of $F_i(w)$ in (4) with $\alpha \in [0, 1/L]$. If Assumptions 2 and 3
hold, then $F_i$ is smooth with parameter $L_F := 4L + \alpha \rho B$. As a consequence, the average function
$F(w) = \frac{1}{n}\sum_{i=1}^{n} F_i(w)$ is also smooth with parameter $L_F$.
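As an illustrative instantiation, if $\alpha$ takes its largest admissible value $1/L$, this smoothness constant becomes
$$L_F = 4L + \frac{\rho B}{L},$$
so the additional smoothness cost of personalization is governed by the Hessian-Lipschitz constant $\rho$ and the gradient bound $B$.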
Assumption 4 provides upper bounds on the variances of gradient and Hessian estimation for
functions fi . To analyze the convergence of Per-FedAvg, however, we require upper bounds on the
bias and variance of gradient estimation of Fi . We derive these bounds in the following lemma.
Lemma 4.3. Recall the definition of the gradient estimate $\tilde{\nabla} F_i(w)$ in (8), which is computed
using $\mathcal{D}$, $\mathcal{D}'$, and $\mathcal{D}''$ that are independent batches with sizes $D$, $D'$, and $D''$, respectively. If
Assumptions 2-4 hold, then for any $\alpha \in [0, 1/L]$ and $w \in \mathbb{R}^d$ we have
$$\left\| \mathbb{E}\left[ \tilde{\nabla} F_i(w) - \nabla F_i(w) \right] \right\| \le \frac{2\alpha L \sigma_G}{\sqrt{D}},$$
$$\mathbb{E}\left[ \big\| \tilde{\nabla} F_i(w) - \nabla F_i(w) \big\|^2 \right] \le \sigma_F^2 := 12\left[ B^2 + \sigma_G^2\left( \frac{1}{D'} + \frac{(\alpha L)^2}{D} \right) \right]\left[ 1 + \sigma_H^2\, \frac{\alpha^2}{4 D''} \right] - 12 B^2.$$
To measure the tightness of this result, we consider two special cases. First, if exact
gradients and Hessians are available, i.e., $\sigma_G = \sigma_H = 0$, then $\sigma_F = 0$ as well, which is expected,
as in that case we can compute $\nabla F_i$ exactly. Second, for the classic federated learning problem, i.e., $\alpha = 0$ and
$F_i = f_i$, we have $\sigma_F^2 = O(1)\, \sigma_G^2 / D'$, which is tight up to constants.
Next, we use the similarity conditions for the functions $f_i$ in Assumption 5 to study the similarity between the gradients of the functions $F_i$.
¹While our focus here is to elaborate on the dependence of the Wasserstein distance on the number of samples, it is
worth noting that one drawback of this bound is that the convergence speed of the Wasserstein distance deteriorates
exponentially with the dimension.

Lemma 4.4. Recall the definition of $F_i(w)$ in (4) and assume that $\alpha \in [0, 1/L]$. Suppose that the
conditions in Assumptions 2, 3, and 5 are satisfied. Then, for any $w \in \mathbb{R}^d$, we have
$$\frac{1}{n}\sum_{i=1}^{n} \|\nabla F_i(w) - \nabla F(w)\|^2 \le \gamma_F^2 := 3 B^2 \alpha^2 \gamma_H^2 + 192\, \gamma_G^2.$$
To check the tightness of this result, we focus on two special cases, as we did for Lemma 4.3.
First, if the $\nabla f_i$ are all equal, i.e., $\gamma_G = \gamma_H = 0$, then $\gamma_F = 0$. This is indeed expected, as all the $\nabla F_i$
are equal to each other in this case. Second, for the classic federated learning problem, i.e., $\alpha = 0$
and $F_i = f_i$, we have $\gamma_F = O(1)\, \gamma_G$, which is optimal up to a constant factor given the conditions in
Assumption 5.
Theorem 4.5. Consider the objective function $F$ defined in (3) for the case that $\alpha \in (0, 1/L]$.
Suppose that the conditions in Assumptions 1-4 are satisfied, and recall the definitions of $L_F$, $\sigma_F$,
and $\gamma_F$ from Lemmas 4.2-4.4. Consider running Algorithm 1 for $K$ rounds with $\tau$ local updates in
each round and with $\beta \le 1/(10\tau L_F)$. Then, the following first-order stationary condition holds:
$$\frac{1}{\tau K}\sum_{k=0}^{K-1}\sum_{t=0}^{\tau-1}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^2\big] \le \frac{4\big(F(w_0)-F^*\big)}{\beta\tau K} + O(1)\left[\beta L_F\big(1+\beta L_F\,\tau(\tau-1)\big)\,\sigma_F^2 + \beta L_F\Big(\beta L_F\,\tau(\tau-1)+\frac{1-r}{r(n-1)}\Big)\gamma_F^2 + \frac{\alpha^2 L^2\sigma_G^2}{D}\right],$$
where $\bar{w}_{k+1,t}$ is the average of the iterates of users in $\mathcal{A}_k$ at time $t$, i.e., $\bar{w}_{k+1,t} = \frac{1}{rn}\sum_{i \in \mathcal{A}_k} w^i_{k+1,t}$,
and in particular, $\bar{w}_{k+1,0} = w_k$ and $\bar{w}_{k+1,\tau} = w_{k+1}$.
Note that $\sigma_F$ is not a constant, and as expressed in Lemma 4.3, we can make it arbitrarily small
by choosing the batch sizes $D$, $D'$, or $D''$ large enough. To see how tight our result is, we again focus
on special cases. Let $\alpha = 0$, $\tau = 1$, and $r = 1$. In this case, Per-FedAvg reduces to stochastic
gradient descent, where the only source of stochasticity is the batches of gradients. In this case,
the second term on the right-hand side reduces to $O(\beta L_F \sigma_F^2)$, where, here, $\sigma_F^2$ itself is equal to
$\sigma_G^2/D'$. This is the classic result for stochastic gradient descent for nonconvex functions, and we
recover the lower bounds [43]. Also, it is worth noting that the term $\alpha^2 L^2 \sigma_G^2/D$ appears in the
upper bound due to the fact that $\tilde{\nabla} F_i(w)$ is a biased estimator of $\nabla F_i(w)$. This bias term would
be eliminated if we assumed access to exact gradients at training time (see the
discussion after Lemma 4.3), which is, for instance, the case in [37], where the authors focus on
the deterministic case.
Next, we characterize the choices of $\tau$, $K$, and $\beta$ in terms of the required accuracy $\epsilon$ to obtain
the best possible complexity bound for the result in Theorem 4.5.
Corollary 4.6. Suppose the conditions in Theorem 4.5 are satisfied. If we set the number of local
updates as $\tau = O(\epsilon^{-1/2})$, the number of communication rounds with the server as $K = O(\epsilon^{-3/2})$, and the
stepsize of Per-FedAvg as $\beta = \epsilon$, then we find an $O(\epsilon + \alpha^2\sigma_G^2/D)$-first-order stationary point of $F$.
The result in Corollary 4.6 shows that to achieve an $O(\epsilon + \alpha^2\sigma_G^2/D)$-first-order stationary point
of $F$, the Per-FedAvg algorithm requires $K = O(\epsilon^{-3/2})$ rounds of communication between the users
and the server. Indeed, by setting $D = O(\epsilon^{-1})$ or setting the meta-step stepsize as $\alpha = O(\epsilon^{1/2})$,
Per-FedAvg can find an $\epsilon$-first-order stationary point of $F$ for any arbitrary $\epsilon > 0$.
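As an illustrative instantiation of these rates, for a target accuracy $\epsilon = 10^{-2}$ the prescribed choices scale as
$$\tau = O(\epsilon^{-1/2}) = O(10), \qquad K = O(\epsilon^{-3/2}) = O(10^3), \qquad D = O(\epsilon^{-1}) = O(10^2),$$
which matches the order of magnitude of the choices $\tau = 10$ and $K = 1000$ used in the experiments of Section 5.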

Remark 4.7. The results of Theorem 4.5 and Corollary 4.6 provide an upper bound on the average
of $\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^2\big]$ for all $k \in \{0, 1, \dots, K-1\}$ and $t \in \{0, 1, \dots, \tau-1\}$. However, one concern
here is that, due to the structure of Algorithm 1, for any $k$ we only have access to $\bar{w}_{k+1,t}$ for $t = 0$.
To address this issue, at any iteration $k$, the server can choose $t_k \in \{0, 1, \dots, \tau-1\}$ uniformly at
random and ask all the users in $\mathcal{A}_k$ to send $w^i_{k+1,t_k}$ back to the server, in addition to $w^i_{k+1,\tau}$. By
following this scheme, we can ensure that the same upper bound also holds for the expected average
models at the server, i.e., $\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t_k})\|^2\big]$.

Remark 4.8. It is worth noting that it is possible to achieve the same complexity bound using a
diminishing stepsize. We will further discuss this at the end of Appendix G.

5 Numerical Experiments
In this section, we numerically study the role of personalization when the data distributions are
heterogeneous. In particular, we consider the multi-class classification problem over the MNIST [44]
and CIFAR-10 [45] datasets and distribute the training data between $n$ users as follows: (i) half
of the users each have $a$ images of each of the first five classes; (ii) the rest each have $a/2$
images from only one of the first five classes and $2a$ images from only one of the other five classes
(see Appendix I for an illustration). We set the parameter $a$ as $a = 196$ and $a = 68$ for the MNIST
and CIFAR-10 datasets, respectively. This way, we create an example where the distributions of
images over the users are different. Similarly, we divide the test data over the nodes with
the same distribution as the one for the training data. Note that for this particular example, in
which the users' distributions are significantly different, our goal is not to achieve state-of-the-art
accuracy. Rather, we aim to provide an example to compare the various approaches for obtaining
personalization in the heterogeneous setting. Indeed, by using more complex neural networks the
results for all the considered algorithms would improve; however, their relative performance would
stay the same.
We focus on three algorithms: The first method that we consider is the FedAvg method, and,
to do a fair comparison, we take the output of the FedAvg method, and update it with one step
of stochastic gradient descent with respect to the test data, and then evaluate its performance.
The second and third algorithms that we consider are two different efficient approximations of
Per-FedAvg. Similarly, we evaluate the performance of these methods for the case that one step
of local stochastic gradient descent is performed during test time. To formally explain these two
approximate versions of Per-FedAvg, note that the implementation of Per-FedAvg requires access
to second-order information which is computationally costly. To address this issue, we consider
two different approximations:
(i) First, we replace the gradient estimate with its first-order approximation, which ignores the
Hessian term; i.e., $\tilde{\nabla} F_i(w^i_{k+1,t-1})$ in (8) is approximated by $\tilde{\nabla} f_i\big(w^i_{k+1,t-1} - \alpha \tilde{\nabla} f_i(w^i_{k+1,t-1}, \mathcal{D}^i_t),\; \mathcal{D}'^i_t\big)$.
This is the same idea deployed in First-Order MAML (FO-MAML) in [2], and it has been studied
empirically for the federated learning setting in [34]. We refer to this algorithm as Per-FedAvg
(FO).
(ii) Second, we use the idea of HF-MAML, proposed in [33], in which the Hessian-vector
product in the MAML update is replaced by a difference of gradients, using the approximation
$\nabla^2 \phi(w)\, v \approx \big(\nabla \phi(w + \delta v) - \nabla \phi(w - \delta v)\big)/(2\delta)$. We refer to this algorithm as Per-FedAvg
(HF).
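The following is a minimal sketch of this Hessian-vector approximation for a function with a closed-form gradient; the quadratic test function is illustrative, not from the paper.

    # Finite-difference approximation of a Hessian-vector product, as in HF-MAML:
    # hvp(w, v) ~= (grad(w + delta*v) - grad(w - delta*v)) / (2 * delta).
    # The quadratic phi(w) = 0.5 * w @ A @ w is a toy example with known Hessian A.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 5))
    A = A @ A.T                      # symmetric PSD Hessian
    grad = lambda w: A @ w           # exact gradient of phi

    def hvp_fd(grad, w, v, delta=1e-4):
        return (grad(w + delta * v) - grad(w - delta * v)) / (2 * delta)

    w, v = rng.normal(size=5), rng.normal(size=5)
    print(np.max(np.abs(hvp_fd(grad, w, v) - A @ v)))  # ~0 for a quadratic

For a quadratic, the central difference is exact up to floating-point error; for general losses the error scales with $\delta$, which is why $\delta$ is taken small.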
As shown in [33], for a small test-time stepsize $\alpha$, both FO-MAML and HF-MAML perform
well, but as $\alpha$ becomes large, HF-MAML outperforms FO-MAML in the centralized setting. A
more detailed discussion of Per-FedAvg (FO) and Per-FedAvg (HF) is provided in Appendix H.
Moreover, there we discuss how our analysis can be extended to these two methods. Note that
the model obtained by any of these three methods is later updated using one step of stochastic
gradient descent at test time, and hence they have the same budget at test time.
We use a neural network with two hidden layers of sizes 80 and 60, with the Exponential
Linear Unit (ELU) activation function. We take $n = 50$ users in the network and run all three
algorithms for $K = 1000$ rounds. At each round, $rn$ agents with $r = 0.2$ are chosen to
run $\tau$ local updates. The batch sizes are $D = D' = 40$ and the learning rate is $\beta = 0.001$. Part
of the code is adapted from [46]. Note that the reported results for all the considered methods
correspond to the average test accuracy among all users, after running one step of local stochastic
gradient descent.
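For reference, a hedged Python sketch of this experimental configuration follows; the names are illustrative and not taken from the authors' code.

    # Illustrative experiment configuration matching the setup described above.
    config = dict(
        n_users=50,          # number of users
        K=1000,              # communication rounds
        r=0.2,               # fraction of active users per round
        tau=10,              # local updates per round (4 in some runs)
        alpha=0.01,          # test-time / inner stepsize (0.001 in some runs)
        beta=0.001,          # local learning rate
        D=40, D_prime=40,    # batch sizes
        hidden_sizes=(80, 60),
        activation="elu",
    )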
The test accuracy results, along with 95% confidence intervals, are reported in Table 1.
For the MNIST dataset, both Per-FedAvg methods achieve a marginal gain compared to FedAvg.
However, the achieved gain from using Per-FedAvg (HF) compared to FedAvg is more significant
for the CIFAR-10 dataset. In particular, we have three main observations here: (i) For $\alpha = 0.001$
and $\tau = 10$, Per-FedAvg (FO) and Per-FedAvg (HF) perform similarly, and better than
FedAvg. In addition, decreasing $\tau$ leads to a decrease in the performance of all three algorithms,
which is expected, as the total number of iterations decreases. (ii) Next, we study the role of $\alpha$.
By increasing $\alpha$ from 0.001 to 0.01, for $\tau = 4$, the performance of Per-FedAvg (HF) improves,
which could be due to the fact that the model adapts better to user data at test time. However,
as discussed above, for larger $\alpha$, the performance of Per-FedAvg (FO) drops significantly.
(iii) Third, we examine the effect of changing the level of data heterogeneity. To do so, we change the data
distribution of the half of the users that have $a/2$ images from one of the first five classes by removing
these images from their dataset. As the last line of Table 1 shows, Per-FedAvg (HF) performs
significantly better than FedAvg under these new distributions, while Per-FedAvg (FO) still suffers
from the issue we discussed in (ii). In summary, the more accurate implementation of Per-FedAvg,
i.e., Per-FedAvg (HF), outperforms FedAvg in all cases and leads to a more personalized solution.

Table 1: Comparison of test accuracy of different algorithms given different parameters

Dataset    Parameters                          FedAvg + update    Per-FedAvg (FO)    Per-FedAvg (HF)
MNIST      τ = 10, α = 0.01                    75.96% ± 0.02%     78.00% ± 0.02%     79.85% ± 0.02%
MNIST      τ = 4,  α = 0.01                    60.18% ± 0.02%     64.55% ± 0.02%     70.94% ± 0.03%
CIFAR-10   τ = 10, α = 0.001                   40.49% ± 0.07%     46.98% ± 0.10%     50.44% ± 0.15%
CIFAR-10   τ = 4,  α = 0.001                   38.38% ± 0.07%     34.04% ± 0.08%     43.73% ± 0.11%
CIFAR-10   τ = 4,  α = 0.01                    35.97% ± 0.17%     25.32% ± 0.18%     46.32% ± 0.12%
CIFAR-10   τ = 4,  α = 0.01, diff. hetero.     58.59% ± 0.11%     37.71% ± 0.23%     71.25% ± 0.05%

6 Conclusion
We considered the Federated Learning (FL) problem in the heterogeneous case, and studied a
personalized variant of the classic FL formulation in which our goal is to find a proper initialization
model for the users that can be quickly adapted to the local data of each user after the training
phase. We highlighted the connections of this formulation with Model-Agnostic Meta-Learning
(MAML), and showed how the decentralized implementation of MAML, which we called Per-
FedAvg, can be used to solve the proposed personalized FL problem. We also characterized the
overall complexity of Per-FedAvg for achieving first-order optimality in nonconvex settings. Finally,
we provided a set of numerical experiments illustrating the performance of two different first-order
approximations of Per-FedAvg in comparison with the FedAvg method, and showed that
Per-FedAvg leads to a more personalized solution than FedAvg.

7 Acknowledgment
Research was sponsored by the United States Air Force Research Laboratory and was accomplished
under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained
in this document are those of the authors and should not be interpreted as representing the offi-
cial policies, either expressed or implied, of the United States Air Force or the U.S. Government.
The U.S. Government is authorized to reproduce and distribute reprints for Government pur-
poses notwithstanding any copyright notation herein. Alireza Fallah acknowledges support from
MathWorks Engineering Fellowship. The research of Aryan Mokhtari is supported by NSF Award
CCF-2007668.

References
[1] J. Konečnỳ, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon,
“Federated learning: Strategies for improving communication efficiency,” arXiv preprint
arXiv:1610.05492, 2016.
[2] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep
networks,” in Proceedings of the 34th International Conference on Machine Learning, (Sydney,
Australia), 06–11 Aug 2017.
[3] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-
Efficient Learning of Deep Networks from Decentralized Data,” in Proceedings of the 20th
International Conference on Artificial Intelligence and Statistics, vol. 54 of Proceedings of
Machine Learning Research, (Fort Lauderdale, FL, USA), pp. 1273–1282, PMLR, 20–22 Apr
2017.
[4] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz,
Z. Charles, G. Cormode, R. Cummings, et al., “Advances and open problems in federated
learning,” arXiv preprint arXiv:1912.04977, 2019.

[5] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods,
and future directions,” IEEE Signal Process. Mag., vol. 37, no. 3, pp. 50–60, 2020.
[6] J. C. Duchi, M. I. Jordan, and M. J. Wainwright, “Privacy aware learning,” Journal of the
ACM (JACM), vol. 61, no. 6, p. 38, 2014.

[7] H. B. McMahan, D. Ramage, K. Talwar, and L. Zhang, “Learning differentially private recur-
rent language models,” arXiv preprint arXiv:1710.06963, 2017.
[8] N. Agarwal, A. T. Suresh, F. X. X. Yu, S. Kumar, and B. McMahan, “cpsgd: Communication-
efficient and differentially-private distributed sgd,” in Advances in Neural Information Pro-
cessing Systems, pp. 7564–7575, 2018.

[9] W. Zhu, P. Kairouz, B. McMahan, H. Sun, and W. Li, “Federated heavy hitters discovery
with differential privacy,” in International Conference on Artificial Intelligence and Statistics,
pp. 3837–3847, 2020.
[10] A. Reisizadeh, A. Mokhtari, H. Hassani, A. Jadbabaie, and R. Pedarsani, “Fedpaq: A
communication-efficient federated learning method with periodic averaging and quantization,”
in International Conference on Artificial Intelligence and Statistics, pp. 2021–2031, 2020.
[11] X. Dai, X. Yan, K. Zhou, K. K. Ng, J. Cheng, and Y. Fan, “Hyper-sphere quantization:
Communication-efficient sgd for federated learning,” arXiv preprint arXiv:1911.04655, 2019.
[12] D. Basu, D. Data, C. Karakus, and S. Diggavi, “Qsparse-local-sgd: Distributed sgd with quan-
tization, sparsification and local computations,” in Advances in Neural Information Processing
Systems, pp. 14668–14679, 2019.
[13] Z. Li, D. Kovalev, X. Qian, and P. Richtárik, “Acceleration for compressed gradient descent
in distributed and federated optimization,” arXiv preprint arXiv:2002.11364, 2020.
[14] S. U. Stich, “Local sgd converges fast and communicates little,” arXiv preprint
arXiv:1805.09767, 2018.
[15] J. Wang and G. Joshi, “Cooperative sgd: A unified framework for the design and analysis of
communication-efficient sgd algorithms,” arXiv preprint arXiv:1808.07576, 2018.
[16] F. Zhou and G. Cong, “On the convergence properties of a k-step averaging stochastic gradient
descent algorithm for nonconvex optimization,” in Proceedings of the 27th International Joint
Conference on Artificial Intelligence, pp. 3219–3227, 2018.
[17] T. Lin, S. U. Stich, K. K. Patel, and M. Jaggi, “Don’t use large mini-batches, use local SGD,”
in 8th International Conference on Learning Representations, ICLR, 2020.

[18] F. Hanzely and P. Richtárik, “Federated learning of a mixture of global and local models,”
arXiv preprint arXiv:2002.05516, 2020.

[19] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, “Federated learning with non-iid
data,” arXiv preprint arXiv:1806.00582, 2018.
[20] A. K. Sahu, T. Li, M. Sanjabi, M. Zaheer, A. Talwalkar, and V. Smith, “On the convergence
of federated optimization in heterogeneous networks,” arXiv preprint arXiv:1812.06127, 2018.

[21] S. P. Karimireddy, S. Kale, M. Mohri, S. J. Reddi, S. U. Stich, and A. T. Suresh,


“Scaffold: Stochastic controlled averaging for on-device federated learning,” arXiv preprint
arXiv:1910.06378, 2019.
[22] F. Haddadpour and M. Mahdavi, “On the convergence of local descent methods in federated
learning,” arXiv preprint arXiv:1910.14425, 2019.

[23] A. Khaled, K. Mishchenko, and P. Richtárik, “Tighter theory for local sgd on identical and
heterogeneous data,” arXiv preprint arXiv:1909.04746, 2019.
[24] X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang, “On the convergence of fedavg on non-iid
data,” arXiv preprint arXiv:1907.02189, 2019.

[25] A. K. R. Bayoumi, K. Mishchenko, and P. Richtarik, “Tighter theory for local sgd on identical
and heterogeneous data,” in International Conference on Artificial Intelligence and Statistics,
pp. 4519–4529, 2020.
[26] A. Antoniou, H. Edwards, and A. Storkey, “How to train your MAML,” in International
Conference on Learning Representations, 2019.

[27] Z. Li, F. Zhou, F. Chen, and H. Li, “Meta-SGD: Learning to learn quickly for few-shot
learning,” arXiv preprint arXiv:1707.09835, 2017.
[28] E. Grant, C. Finn, S. Levine, T. Darrell, and T. Griffiths, “Recasting gradient-based meta-
learning as hierarchical bayes,” in International Conference on Learning Representations, 2018.

[29] A. Nichol, J. Achiam, and J. Schulman, “On first-order meta-learning algorithms,” arXiv
preprint arXiv:1803.02999, 2018.
[30] L. Zintgraf, K. Shiarli, V. Kurin, K. Hofmann, and S. Whiteson, “Fast context adaptation
via meta-learning,” in Proceedings of the 36th International Conference on Machine Learning,
pp. 7693–7702, 2019.

[31] H. S. Behl, A. G. Baydin, and P. H. S. Torr, “Alpha MAML: adaptive model-agnostic meta-
learning,” 2019.
[32] P. Zhou, X. Yuan, H. Xu, S. Yan, and J. Feng, “Efficient meta learning via minibatch proximal
update,” in Advances in Neural Information Processing Systems 32, pp. 1534–1544, Curran
Associates, Inc., 2019.

[33] A. Fallah, A. Mokhtari, and A. Ozdaglar, “On the convergence theory of gradient-based model-
agnostic meta-learning algorithms,” in International Conference on Artificial Intelligence and
Statistics, pp. 1082–1092, 2020.
[34] F. Chen, M. Luo, Z. Dong, Z. Li, and X. He, “Federated meta-learning with fast convergence
and efficient communication,” arXiv preprint arXiv:1802.07876, 2018.
[35] Y. Jiang, J. Konečnỳ, K. Rush, and S. Kannan, “Improving federated learning personalization
via model agnostic meta learning,” arXiv preprint arXiv:1909.12488, 2019.
[36] T. Li, M. Sanjabi, and V. Smith, “Fair resource allocation in federated learning,” arXiv preprint
arXiv:1905.10497, 2019.

[37] S. Lin, G. Yang, and J. Zhang, “A collaborative learning framework via federated meta-
learning,” arXiv preprint arXiv:2001.03229, 2020.

[38] M. Khodak, M.-F. F. Balcan, and A. S. Talwalkar, “Adaptive gradient-based meta-learning
methods,” in Advances in Neural Information Processing Systems, pp. 5915–5926, 2019.

[39] J. Li, M. Khodak, S. Caldas, and A. Talwalkar, “Differentially private meta-learning,” arXiv
preprint arXiv:1909.05830, 2019.
[40] V. Smith, C.-K. Chiang, M. Sanjabi, and A. S. Talwalkar, “Federated multi-task learning,” in
Advances in Neural Information Processing Systems, pp. 4424–4434, 2017.

[41] Y. Deng, M. M. Kamani, and M. Mahdavi, “Adaptive personalized federated learning,” arXiv
preprint arXiv:2003.13461, 2020.
[42] E. del Barrio, E. Giné, and C. Matrán, “Central limit theorems for the wasserstein distance
between the empirical and the true distributions,” Annals of Probability, pp. 1009–1071, 1999.

[43] Y. Arjevani, Y. Carmon, J. C. Duchi, D. J. Foster, N. Srebro, and B. Woodworth, “Lower


bounds for non-convex stochastic optimization,” arXiv preprint arXiv:1912.02365, 2019.
[44] Y. LeCun, “The mnist database of handwritten digits,” http://yann.lecun.com/exdb/mnist/, 1998.
[45] A. Krizhevsky, G. Hinton, et al., “Learning multiple layers of features from tiny images,” 2009.

[46] J. Langelaar, “Mnist neural network training and testing,” MATLAB Central File Exchange,
2019.
[47] C. Villani, Optimal transport: old and new, vol. 338. Springer Science & Business Media,
2008.

Appendix
A Intermediate Notes
Note that the gradient-Lipschitz assumption, i.e., the second inequality in (9), also implies that $f_i$
satisfies the following conditions for all $w, u \in \mathbb{R}^d$:
$$-L_i I_d \preceq \nabla^2 f_i(w) \preceq L_i I_d, \tag{17a}$$
$$\big| f_i(w) - f_i(u) - \nabla f_i(u)^\top (w - u) \big| \le \frac{L_i}{2}\,\|w - u\|^2. \tag{17b}$$

B Proofs of results in Subsection 4.1

B.1 TV Distance
Note that
$$\|\nabla f_i(w) - \nabla f(w)\| = \Big\| \sum_{z \in \mathcal{Z}} \nabla_w l(z; w)\,\big(p_i(z) - p(z)\big) \Big\| \le \sum_{z \in \mathcal{Z}} \|\nabla_w l(z; w)\|\, |p_i(z) - p(z)| \le B \sum_{z \in \mathcal{Z}} |p_i(z) - p(z)| = 2B\, \|p_i - p\|_{TV}, \tag{18}$$
where the second inequality holds due to the assumption that $\|\nabla_w l(z; w)\| \le B$ for any $w$ and
$z$. Plugging (18) into $\frac{1}{n}\sum_{i=1}^{n}\|\nabla f_i(w) - \nabla f(w)\|^2$ gives us the desired result. The other result, on
Hessians, can be proved similarly.

B.2 1-Wasserstein Distance
We claim that for any $i$ and $w \in \mathbb{R}^d$, we have
$$\|\nabla f_i(w) - \nabla f(w)\| \le L_{\mathcal{Z}}\, W_1(p_i, p), \tag{19}$$
which immediately gives us one of the two results. To show this, first note that
$$\|\nabla f_i(w) - \nabla f(w)\| = \sup_{v \in \mathbb{R}^d : \|v\| \le 1} v^\top\big(\nabla f_i(w) - \nabla f(w)\big) = \sup_{v \in \mathbb{R}^d : \|v\| \le 1}\Big( \mathbb{E}_{z \sim p_i}\big[v^\top \nabla l(z; w)\big] - \mathbb{E}_{z \sim p}\big[v^\top \nabla l(z; w)\big] \Big).$$
Thus, we need to show that for any $v \in \mathbb{R}^d$ with $\|v\| \le 1$, we have
$$\mathbb{E}_{z \sim p_i}\big[v^\top \nabla l(z; w)\big] - \mathbb{E}_{z \sim p}\big[v^\top \nabla l(z; w)\big] \le L_{\mathcal{Z}}\, W_1(p_i, p). \tag{20}$$
Next, note that since $p_i$ and $p$ both have bounded support, by Kantorovich-Rubinstein duality
[47], we have
$$W_1(p_i, p) = \sup\big\{ \mathbb{E}_{z \sim p_i}[g(z)] - \mathbb{E}_{z \sim p}[g(z)] \;\big|\; \text{continuous } g : \mathcal{Z} \to \mathbb{R},\; \mathrm{Lip}(g) \le 1 \big\}. \tag{21}$$
Using this result, to show (20) it suffices to show that $g(z) = v^\top \nabla l(z; w)$ is $L_{\mathcal{Z}}$-Lipschitz. Note that
the Cauchy-Schwarz inequality implies
$$\|v^\top \nabla l(z_1; w) - v^\top \nabla l(z_2; w)\| \le \|v\|\,\|\nabla l(z_1; w) - \nabla l(z_2; w)\| \le L_{\mathcal{Z}}\, d(z_1, z_2), \tag{22}$$
where the last inequality is obtained using $\|v\| \le 1$ along with (15).
Finally, note that we can similarly show the result for $\gamma_H$ by considering the fact that
$$\|\nabla^2 f_i(w) - \nabla^2 f(w)\| = \max_{\xi \in \{1,-1\}} \sup_{v \in \mathbb{R}^d : \|v\| \le 1} \xi\, v^\top\big(\nabla^2 f_i(w) - \nabla^2 f(w)\big) v = \max_{\xi \in \{1,-1\}} \sup_{v \in \mathbb{R}^d : \|v\| \le 1} \xi\Big( \mathbb{E}_{z \sim p_i}\big[v^\top \nabla^2 l(z; w) v\big] - \mathbb{E}_{z \sim p}\big[v^\top \nabla^2 l(z; w) v\big] \Big),$$
and taking the functions $g(z) = v^\top \nabla^2 l(z; w) v$ and $g(z) = -v^\top \nabla^2 l(z; w) v$ along with using the
Kantorovich-Rubinstein duality theorem again.

C Proof of Lemma 4.2
Recall that
$$\nabla F_i(w) = \big(I - \alpha \nabla^2 f_i(w)\big)\, \nabla f_i\big(w - \alpha \nabla f_i(w)\big). \tag{23}$$
Given this, note that
$$\|\nabla F_i(w_1) - \nabla F_i(w_2)\| = \Big\| \big(I - \alpha \nabla^2 f_i(w_1)\big)\nabla f_i\big(w_1 - \alpha \nabla f_i(w_1)\big) - \big(I - \alpha \nabla^2 f_i(w_2)\big)\nabla f_i\big(w_2 - \alpha \nabla f_i(w_2)\big) \Big\|$$
$$= \Big\| \big(I - \alpha \nabla^2 f_i(w_1)\big)\Big(\nabla f_i\big(w_1 - \alpha \nabla f_i(w_1)\big) - \nabla f_i\big(w_2 - \alpha \nabla f_i(w_2)\big)\Big) + \Big(\big(I - \alpha \nabla^2 f_i(w_1)\big) - \big(I - \alpha \nabla^2 f_i(w_2)\big)\Big)\nabla f_i\big(w_2 - \alpha \nabla f_i(w_2)\big) \Big\| \tag{24}$$
$$\le \big\|I - \alpha \nabla^2 f_i(w_1)\big\|\, \big\|\nabla f_i\big(w_1 - \alpha \nabla f_i(w_1)\big) - \nabla f_i\big(w_2 - \alpha \nabla f_i(w_2)\big)\big\| + \alpha\, \big\|\nabla^2 f_i(w_1) - \nabla^2 f_i(w_2)\big\|\, \big\|\nabla f_i\big(w_2 - \alpha \nabla f_i(w_2)\big)\big\|, \tag{25}$$
where (24) is obtained by adding and subtracting $\big(I - \alpha \nabla^2 f_i(w_1)\big)\nabla f_i\big(w_2 - \alpha \nabla f_i(w_2)\big)$, and the
last inequality follows from the triangle inequality and the definition of the matrix norm. Now, we
bound the two terms of (25) separately.
First, note that by (17a), $\|I - \alpha \nabla^2 f_i(w_1)\| \le 1 + \alpha L$. Using this along with the smoothness of $f_i$, we
have
$$\big\|I - \alpha \nabla^2 f_i(w_1)\big\|\, \big\|\nabla f_i\big(w_1 - \alpha \nabla f_i(w_1)\big) - \nabla f_i\big(w_2 - \alpha \nabla f_i(w_2)\big)\big\| \le (1 + \alpha L) L\, \big\|w_1 - \alpha \nabla f_i(w_1) - w_2 + \alpha \nabla f_i(w_2)\big\| \le (1 + \alpha L) L\big( \|w_1 - w_2\| + \alpha\|\nabla f_i(w_1) - \nabla f_i(w_2)\| \big) \le (1 + \alpha L)^2 L\, \|w_1 - w_2\| \le 4L\, \|w_1 - w_2\|, \tag{26}$$
where we used the smoothness of $f_i$ along with $\alpha \le 1/L$.
For the second term, using (9) in Assumption 2 along with Assumption 3 implies
$$\alpha\, \big\|\nabla^2 f_i(w_1) - \nabla^2 f_i(w_2)\big\|\, \big\|\nabla f_i\big(w_2 - \alpha \nabla f_i(w_2)\big)\big\| \le \alpha \rho B\, \|w_1 - w_2\|. \tag{27}$$
Putting (26) and (27) together, we obtain the desired result.

D Proof of Lemma 4.3
Recall that the expression for the stochastic gradient $\tilde{\nabla} F_i(w)$ is given by
$$\tilde{\nabla} F_i(w) = \Big(I - \alpha \tilde{\nabla}^2 f_i(w, \mathcal{D}'')\Big)\, \tilde{\nabla} f_i\Big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D}),\; \mathcal{D}'\Big), \tag{28}$$
which can be written as
$$\tilde{\nabla} F_i(w) = \big(I - \alpha \nabla^2 f_i(w) + e_1\big)\big(\nabla f_i(w - \alpha \nabla f_i(w)) + e_2\big). \tag{29}$$
Note that in the above expression $e_1$ and $e_2$ are given by
$$e_1 = \alpha\Big( \nabla^2 f_i(w) - \tilde{\nabla}^2 f_i(w, \mathcal{D}'') \Big), \qquad e_2 = \tilde{\nabla} f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D}),\, \mathcal{D}'\big) - \nabla f_i\big(w - \alpha \nabla f_i(w)\big).$$
Based on Assumption 4, it can be easily shown that
$$\mathbb{E}[e_1] = 0, \tag{30a}$$
$$\mathbb{E}\big[\|e_1\|^2\big] \le \alpha^2\, \frac{\sigma_H^2}{D''}. \tag{30b}$$
Next, we proceed to bound the first and second moments of $e_2$. To do so, first note that $e_2$ can
also be written as
$$e_2 = \Big( \tilde{\nabla} f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D}),\, \mathcal{D}'\big) - \nabla f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D})\big) \Big) + \Big( \nabla f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D})\big) - \nabla f_i\big(w - \alpha \nabla f_i(w)\big) \Big). \tag{31}$$
Note that, conditioned on $\mathcal{D}$, the first term is zero-mean and the second term is deterministic.
Therefore,
$$\|\mathbb{E}[e_2]\| = \Big\| \mathbb{E}\Big[ \nabla f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D})\big) - \nabla f_i\big(w - \alpha \nabla f_i(w)\big) \Big] \Big\| \le \mathbb{E}\Big[ \big\| \nabla f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D})\big) - \nabla f_i\big(w - \alpha \nabla f_i(w)\big) \big\| \Big] \le \alpha L\, \mathbb{E}\Big[ \big\| \tilde{\nabla} f_i(w, \mathcal{D}) - \nabla f_i(w) \big\| \Big] \tag{32}$$
$$\le \frac{\alpha L \sigma_G}{\sqrt{D}}, \tag{33}$$
where (32) is obtained using the smoothness of $f_i$. The last inequality is also obtained using
$$\mathbb{E}\Big[ \big\| \tilde{\nabla} f_i(w, \mathcal{D}) - \nabla f_i(w) \big\|^2 \Big] \le \frac{\sigma_G^2}{D}. \tag{34}$$
In addition, we have
$$\mathbb{E}\big[\|e_2\|^2\big] = \mathbb{E}\Big[ \mathbb{E}\big[\|e_2\|^2 \,\big|\, \mathcal{D}\big] \Big] = \mathbb{E}\Big[ \big\| \tilde{\nabla} f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D}),\, \mathcal{D}'\big) - \nabla f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D})\big) \big\|^2 \Big] + \mathbb{E}\Big[ \big\| \nabla f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D})\big) - \nabla f_i\big(w - \alpha \nabla f_i(w)\big) \big\|^2 \Big]$$
$$\le \frac{\sigma_G^2}{D'} + L^2 \alpha^2\, \mathbb{E}\Big[ \big\| \tilde{\nabla} f_i(w, \mathcal{D}) - \nabla f_i(w) \big\|^2 \Big] \tag{35}$$
$$\le \sigma_G^2\left( \frac{1}{D'} + \frac{(\alpha L)^2}{D} \right), \tag{36}$$
where (36) follows from (34), and (35) is obtained using the smoothness of $f_i$ along with the fact that
$$\mathbb{E}\Big[ \big\| \tilde{\nabla} f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D}),\, \mathcal{D}'\big) - \nabla f_i\big(w - \alpha \tilde{\nabla} f_i(w, \mathcal{D})\big) \big\|^2 \Big] \le \frac{\sigma_G^2}{D'}.$$
Next, note that, by comparing (29) and (5), along with the fact that $e_1$ and $e_2$ are independent
and $e_1$ is zero-mean (30a), we have
$$\mathbb{E}\Big[ \tilde{\nabla} F_i(w) - \nabla F_i(w) \Big] = \big(I - \alpha \nabla^2 f_i(w)\big)\, \mathbb{E}[e_2]. \tag{37}$$
Hence, by taking the norm of both sides, we obtain
$$\Big\| \mathbb{E}\Big[ \tilde{\nabla} F_i(w) - \nabla F_i(w) \Big] \Big\| = \big\| \big(I - \alpha \nabla^2 f_i(w)\big)\, \mathbb{E}[e_2] \big\| \le \big\| I - \alpha \nabla^2 f_i(w) \big\|\, \|\mathbb{E}[e_2]\|, \tag{38}$$
where the last inequality follows from the definition of the matrix norm. Now, using (33) along with
the fact that $\|I - \alpha \nabla^2 f_i(w)\| \le 1 + \alpha L \le 2$ gives us the first result in Lemma 4.3.
To show the other result, note that, by comparing (29) and (5), along with the matrix norm
definition, we have
$$\big\| \tilde{\nabla} F_i(w) - \nabla F_i(w) \big\| \le \big\|I - \alpha \nabla^2 f_i(w)\big\|\, \|e_2\| + \|e_1\|\, \big\|\nabla f_i\big(w - \alpha \nabla f_i(w)\big)\big\| + \|e_1\|\, \|e_2\|. \tag{39}$$
As a result, by the Cauchy-Schwarz inequality $(a + b + c)^2 \le 3(a^2 + b^2 + c^2)$ for $a, b, c \ge 0$, we have
$$\big\| \tilde{\nabla} F_i(w) - \nabla F_i(w) \big\|^2 \le 3\,\big\|I - \alpha \nabla^2 f_i(w)\big\|^2 \|e_2\|^2 + 3\,\|e_1\|^2\, \big\|\nabla f_i\big(w - \alpha \nabla f_i(w)\big)\big\|^2 + 3\,\|e_1\|^2 \|e_2\|^2. \tag{40}$$
By taking expectations, and using the facts that $\|I - \alpha \nabla^2 f_i(w)\| \le 1 + \alpha L \le 2$ and
$\|\nabla f_i(w - \alpha \nabla f_i(w))\| \le B$, we have
$$\mathbb{E}\Big[ \big\| \tilde{\nabla} F_i(w) - \nabla F_i(w) \big\|^2 \Big] \le 3 B^2\, \mathbb{E}\big[\|e_1\|^2\big] + 12\, \mathbb{E}\big[\|e_2\|^2\big] + 3\, \mathbb{E}\big[\|e_1\|^2\big]\, \mathbb{E}\big[\|e_2\|^2\big], \tag{41}$$
where we also used the fact that $e_1$ and $e_2$ are independent, as $\mathcal{D}''$ is independent of $\mathcal{D}$ and $\mathcal{D}'$.
Plugging (30b) and (36) into (41), we obtain
$$\mathbb{E}\Big[ \big\| \tilde{\nabla} F_i(w) - \nabla F_i(w) \big\|^2 \Big] \le 3 B^2 \alpha^2\, \frac{\sigma_H^2}{D''} + 12\, \sigma_G^2\left( \frac{1}{D'} + \frac{(\alpha L)^2}{D} \right) + 3\, \alpha^2 \sigma_G^2 \sigma_H^2\left( \frac{1}{D' D''} + \frac{(\alpha L)^2}{D D''} \right),$$
which gives us the desired result.

E Proof of Lemma 4.4
Recall that
$$\nabla F_i(w) = \big(I - \alpha \nabla^2 f_i(w)\big)\, \nabla f_i\big(w - \alpha \nabla f_i(w)\big), \tag{42}$$
which can be expressed as
$$\nabla F_i(w) = \big(I - \alpha \nabla^2 f(w) + E_i\big)\big(\nabla f(w - \alpha \nabla f(w)) + r_i\big), \tag{43}$$
where
$$E_i = \alpha\big(\nabla^2 f(w) - \nabla^2 f_i(w)\big), \tag{44}$$
$$r_i = \nabla f_i\big(w - \alpha \nabla f_i(w)\big) - \nabla f\big(w - \alpha \nabla f(w)\big). \tag{45}$$
First, note that, by Assumption 5, we have
$$\frac{1}{n}\sum_{i=1}^{n} \|E_i\|^2 \le \alpha^2 \gamma_H^2. \tag{46}$$
Second, note that
$$\|r_i\| \le \big\|\nabla f_i\big(w - \alpha \nabla f_i(w)\big) - \nabla f_i\big(w - \alpha \nabla f(w)\big)\big\| + \big\|\nabla f_i\big(w - \alpha \nabla f(w)\big) - \nabla f\big(w - \alpha \nabla f(w)\big)\big\| \le \alpha L\, \|\nabla f_i(w) - \nabla f(w)\| + \big\|\nabla f_i\big(w - \alpha \nabla f(w)\big) - \nabla f\big(w - \alpha \nabla f(w)\big)\big\|, \tag{47}$$
where the last inequality is obtained using (9) in Assumption 2. Now, by using $(a+b)^2 \le 2(a^2 + b^2)$,
we have
$$\frac{1}{n}\sum_{i=1}^{n}\|r_i\|^2 \le \frac{2}{n}\sum_{i=1}^{n}\Big( (\alpha L)^2 \|\nabla f_i(w) - \nabla f(w)\|^2 + \big\|\nabla f_i\big(w - \alpha \nabla f(w)\big) - \nabla f\big(w - \alpha \nabla f(w)\big)\big\|^2 \Big) \le 2\big(1 + (\alpha L)^2\big)\big(\gamma_G^2 + \gamma_G^2\big) \tag{48}$$
$$\le 8\,\gamma_G^2, \tag{49}$$
where the second inequality follows from Assumption 5 and the last inequality is obtained using
$\alpha L \le 1$. Next, recall that the goal is to bound the variance of $\nabla F_i(w)$ when $i$ is drawn from a
uniform distribution. We know that by subtracting a constant from a random variable, its variance
does not change. Thus, the variance of $\nabla F_i(w)$ is equal to the variance of $\nabla F_i(w) - \big(I - \alpha \nabla^2 f(w)\big)\nabla f\big(w - \alpha \nabla f(w)\big)$. Also, the variance of the latter is bounded by its second moment, and hence,
$$\frac{1}{n}\sum_{i=1}^{n}\|\nabla F_i(w) - \nabla F(w)\|^2 \le \frac{1}{n}\sum_{i=1}^{n}\Big\| E_i\,\nabla f\big(w - \alpha \nabla f(w)\big) + \big(I - \alpha \nabla^2 f(w)\big) r_i + E_i r_i \Big\|^2 \le \frac{1}{n}\sum_{i=1}^{n}\Big( \big\|E_i\,\nabla f\big(w - \alpha \nabla f(w)\big)\big\| + \big\|\big(I - \alpha \nabla^2 f(w)\big) r_i\big\| + \|E_i r_i\| \Big)^2. \tag{50}$$
Therefore, using $\|\nabla f(w - \alpha \nabla f(w))\| \le B$ along with $\|I - \alpha \nabla^2 f(w)\| \le 1 + \alpha L$ and the Cauchy-Schwarz
inequality $(a + b + c)^2 \le 3(a^2 + b^2 + c^2)$ for $a, b, c \ge 0$, we obtain
$$\frac{1}{n}\sum_{i=1}^{n}\|\nabla F_i(w) - \nabla F(w)\|^2 \le 3\left( B^2\, \frac{1}{n}\sum_{i=1}^{n}\|E_i\|^2 + (1 + \alpha L)^2\, \frac{1}{n}\sum_{i=1}^{n}\|r_i\|^2 + \frac{1}{n}\sum_{i=1}^{n}\|E_i\|^2\|r_i\|^2 \right) \le 3\left( B^2\, \frac{1}{n}\sum_{i=1}^{n}\|E_i\|^2 + 4\, \frac{1}{n}\sum_{i=1}^{n}\|r_i\|^2 + \frac{1}{n}\sum_{i=1}^{n}\|E_i\|^2\|r_i\|^2 \right), \tag{51}$$
where the last inequality is obtained using $\alpha L \le 1$ along with $\|E_i r_i\| \le \|E_i\|\|r_i\|$, which comes
from the definition of the matrix norm. Finally, to complete the proof, notice that we have
$$\frac{1}{n}\sum_{i=1}^{n}\|E_i\|^2\|r_i\|^2 \le \Big(\max_i \|E_i\|^2\Big)\left(\frac{1}{n}\sum_{i=1}^{n}\|r_i\|^2\right) \tag{52}$$
$$\le \Big(\max_i \|E_i\|^2\Big)\big(8\,\gamma_G^2\big) \tag{53}$$
$$\le 32\,(\alpha L)^2\gamma_G^2 \le 32\,\gamma_G^2, \tag{54}$$
where (53) follows from (49) and the last line is obtained using $\alpha L \le 1$ along with the fact that
$\|\nabla^2 f_i(w)\| \le L$, and thus
$$\frac{\|E_i\|}{\alpha} = \|\nabla^2 f(w) - \nabla^2 f_i(w)\| \le 2L. \tag{55}$$
Plugging (53) into (51), along with (46) and (49), we obtain the desired result.

F An Intermediate Result
Proposition F.1. Recall from Section 3 that at any round k ≥ 1, and for any agent i ∈ {1, .., n},
i
we can define a sequence of local updates {wk,t }τt=0 where wk,0
i
= wk−1 and, for τ ≥ t ≥ 1,

i
wk,t i
= wk,t−1 ˜ i (wk,t−1
− β ∇F i
). (56)
Pn i
We further define the average of these local updates at round k and time t as wk,t = 1/n i=1 wk,t .
Suppose that the conditions in Assumptions 2-4 are satisfied. Then, for any α ∈ [0, 1/L] and any
t ≥ 0, we have
" n #
1X i
E kw − wk,t k ≤ 2βt(1 + 2βLF )t−1 (σF + γF ), (57a)
n i=1 k,t
" n #  t−1
1X i 1 1 2 2
E 2 2
kw − wk,t k ≤ 4β (1 + )t 1 + φ + 16(1 + )β LF (2σF2 + γF2 ) (57b)
n i=1 k,t φ φ

where φ > 0 is an arbitrary positive constant and LF , σF , and γF are given in Lemmas 4.2, 4.3,
and 4.4, respectively.
Before stating the proof, note that an immediate consequence of this result is the following
corollary:

17
Corollary F.2. Under the same assumptions as Proposition F.1, and for any β ≤ 1/(10τ LF ), we
have
" n #
1X i
E kw − wk,t k ≤ 4βt(σF + γF ), (58a)
n i=1 k,t
" n #
1X i
E kw − wk,t k ≤ 35β 2 tτ (2σF2 + γF2 )
2
(58b)
n i=1 k,t

for any 0 ≤ t ≤ τ .
Proof. Let
n
1X  i
(59)

St := E kwk,t − wk,t k
n i=1

where S0 = 0 since wk,0


i
= wk−1 for any i. Note that
n
1X  i 
St+1 = E kwk,t+1 − wk,t+1 k
n i=1
 
n n 
1 X
i ˜ i (wk,t
i 1 X j ˜ j (wj ) 

= E  wk,t − β ∇F )− wk,t − β ∇F k,t
n i=1 n j=1
   
n n n n
1 X 1 X 1 X 1 X
≤ i
E kwk,t − w j k + β ˜ i (wk,t
E k∇F i
)− ˜ j (wj )k .
∇F (60)
n i=1 n j=1 k,t n i=1 n j=1 k,t

Note that the first term in (60) is in fact St and the second one can be upper bounded as follows
 
n n
1 X
˜ i (wi ) − 1
X
˜ j (wj )k
E k∇F k,t ∇F k,t
n i=1 n j=1
 
n n n
1 X
i 1 X j 1X h i ˜ i (wi )k
i
≤ E k∇Fi (wk,t )− ∇Fj (wk,t )k + E k∇Fi (wk,t ) − ∇F k,t
n i=1 n j=1 n i=1
 
n n
1 X 1 X j ˜ j (wj )k
+ E k∇Fj (wk,t ) − ∇F k,t
n i=1 n j=1
 
n n
1X  i 1 X j
≤ E k∇Fi (wk,t )− ∇Fj (wk,t )k + 2βσF
n i=1 n j=1

where the last inequality is obtained using Lemma 4.3. By substituting this in (60), we obtain
 
n n
1X  1 X j
St+1 ≤ St + 2βσF + β i
E k∇Fi (wk,t )− ∇Fj (wk,t )k . (61)
n i=1 n j=1

If we define ηi := ∇Fi (wk,t


i
) − ∇Fi (wk,t ), using (61), we obtain
 
n n
1X  1X
St+1 ≤ St + 2βσF + β E k∇Fi (wk,t ) − ∇Fj (wk,t )k
n i=1 n j=1
 
n n
1X  1X
+β E kηi − ηj k . (62)
n i=1 n j=1

Note that, by Lemma 4.2,


i
kηi k ≤ LF kwk,t − wk,t k, (63)

18
and thus,
n
1X
kηi k ≤ LF St . (64)
n i=1

As a result, and by using (62), we have


 
n n
1 X 1 X
St+1 ≤ (1 + 2βLF )St + 2βσF + β E k∇Fi (wk,t ) − ∇Fj (wk,t )k .
n i=1 n j=1

≤ (1 + 2βLF )St + 2β(σF + γF ) (65)

where the last inequality is obtained using Lemma 4.4. Using (65) recursively, we obtain
 
Xt
St+1 ≤  (1 + 2βLF )j  2β(σF + γF ) ≤ 2β(t + 1)(1 + 2βLF )t (σF + γF ) (66)
j=0

which completes the proof of (57a). To prove (57b), let


n
1X  i
E kwk,t − wk,t k2 . (67)

Σt :=
n i=1

Similarly Σ0 = 0. Note that


n
1X  i
E kwk,t+1 − wk,t+1 k2

Σt+1 =
n i=1
 
2
n n
1X  i ˜ i (wk,t
i 1 X j ˜ j (wj )

= E  wk,t − β ∇F )− wk,t − β ∇F

k,t
n i=1 n j=1

 
n n
1+φX  i 1 X j 2
≤ E kwk,t − w k
n i=1 n j=1 k,t
 
n n
1 + 1/φ X 1 X
˜ j (wj )k2 
+ β2 E k∇F˜ i (wk,t
i
)− ∇F k,t (68)
n i=1
n j=1
 
n n
1 + 1/φ X 1 X
˜ j (wj )k2 
≤ (1 + φ)Σt + β 2 ˜ i (wk,t
E k∇F i
)− ∇F k,t (69)
n i=1
n j=1

where (68) is obtained using ka + bk2 ≤ (1 + φ)kak2 + (1 + 1/φ)kbk2 for any arbitrary positive real
number φ. To bound the second term in (69), note that
   
n n
˜ i (wk,t
i 1 X
˜ j (wj )k2  ≤ 2E k∇Fi (wk,t
i 1 X j
E k∇F )− ∇F k,t )− ∇Fj (wk,t )k2 
n j=1 n j=1
 
2
  1X n  
j ˜ j (wj ) (70)
 ˜ i i
+ 2E  ∇F i (wk,t ) − ∇Fi (wk,t ) + ∇Fj (wk,t ) − ∇F .

k,t
n j=1

Now, we bound the second term in (70). Using Cauchy-Schwarz inequality

n+1 2 n+1
! n+1
!
X X X
al bl ≤ 2
kal k kbl k 2
(71)
l=1 l=1 l=1

˜ i (wi ) − ∇Fi (wi ), b1 = 1 and al = 1/√n (∇F


with a1 = ∇F ˜ l−1 (wl−1 ) − ∇Fl−1 (wl−1 )), bl = 1/√n,
k,t k,t k,t k,t

19
for l = 2, ..., n + 1, implies
 
\[
\begin{aligned}
\mathbb{E}\left[\bigg\|\Big(\nabla\tilde{F}_i(w_{k,t}^{i})-\nabla F_i(w_{k,t}^{i})\Big)+\frac{1}{n}\sum_{j=1}^{n}\Big(\nabla F_j(w_{k,t}^{j})-\nabla\tilde{F}_j(w_{k,t}^{j})\Big)\bigg\|^{2}\right]
&\le 2\,\mathbb{E}\left[\big\|\nabla\tilde{F}_i(w_{k,t}^{i})-\nabla F_i(w_{k,t}^{i})\big\|^{2}+\frac{1}{n}\sum_{j=1}^{n}\big\|\nabla F_j(w_{k,t}^{j})-\nabla\tilde{F}_j(w_{k,t}^{j})\big\|^{2}\right] \\
&\le 4\sigma_F^{2}, \tag{72}
\end{aligned}
\]

where the last inequality is obtained using Lemma 4.3. Plugging (72) in (70) and using (69), we obtain
\[
\Sigma_{t+1} \le (1+\phi)\Sigma_t + 8\Big(1+\frac{1}{\phi}\Big)\beta^{2}\sigma_F^{2} + 2\Big(1+\frac{1}{\phi}\Big)\beta^{2}\,\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[\bigg\|\nabla F_i(w_{k,t}^{i})-\frac{1}{n}\sum_{j=1}^{n}\nabla F_j(w_{k,t}^{j})\bigg\|^{2}\right]. \tag{73}
\]

Now, it remains to bound the last term in (73). Recall that η_i = ∇F_i(w^i_{k,t}) − ∇F_i(w_{k,t}). First, note that, using ‖a+b‖² ≤ 2‖a‖² + 2‖b‖², we have
\[
\bigg\|\nabla F_i(w_{k,t}^{i})-\frac{1}{n}\sum_{j=1}^{n}\nabla F_j(w_{k,t}^{j})\bigg\|^{2} \le 2\bigg\|\nabla F_i(w_{k,t})-\frac{1}{n}\sum_{j=1}^{n}\nabla F_j(w_{k,t})\bigg\|^{2}+2\bigg\|\eta_i-\frac{1}{n}\sum_{j=1}^{n}\eta_j\bigg\|^{2}. \tag{74}
\]

Substituting this bound in (73) and using Lemma 4.4 yields
\[
\Sigma_{t+1} \le (1+\phi)\Sigma_t + 4\Big(1+\frac{1}{\phi}\Big)\beta^{2}(2\sigma_F^{2}+\gamma_F^{2}) + 4\Big(1+\frac{1}{\phi}\Big)\beta^{2}\,\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[\bigg\|\eta_i-\frac{1}{n}\sum_{j=1}^{n}\eta_j\bigg\|^{2}\right]. \tag{75}
\]


Note that using the Cauchy-Schwarz inequality (71) with a_1 = η_i, b_1 = 1, and a_l = (1/√n)η_{l−1}, b_l = 1/√n for l = 2, ..., n+1, implies
\[
\bigg\|\eta_i-\frac{1}{n}\sum_{j=1}^{n}\eta_j\bigg\|^{2} \le 2\left(\|\eta_i\|^{2}+\frac{1}{n}\sum_{j=1}^{n}\|\eta_j\|^{2}\right) \le 2L_F^{2}\left(\big\|w_{k,t}^{i}-w_{k,t}\big\|^{2}+\frac{1}{n}\sum_{j=1}^{n}\big\|w_{k,t}^{j}-w_{k,t}\big\|^{2}\right), \tag{76}
\]
where the last inequality is obtained using Lemma 4.2, which states that
\[
\|\eta_i\| \le L_F\big\|w_{k,t}^{i}-w_{k,t}\big\|. \tag{77}
\]

Plugging (76) in (75) implies
\[
\Sigma_{t+1} \le \left(1+\phi+16\Big(1+\frac{1}{\phi}\Big)\beta^{2}L_F^{2}\right)\Sigma_t + 4\Big(1+\frac{1}{\phi}\Big)\beta^{2}(2\sigma_F^{2}+\gamma_F^{2}). \tag{78}
\]
As a result, similar to (66), we obtain
\[
\Sigma_{t+1} \le 4\beta^{2}\Big(1+\frac{1}{\phi}\Big)(t+1)\left(1+\phi+16\Big(1+\frac{1}{\phi}\Big)\beta^{2}L_F^{2}\right)^{t}(2\sigma_F^{2}+\gamma_F^{2}), \tag{79}
\]
which gives us the desired result (57b).


Finally, to show (58), first note that for any n ≥ 1, we know
\[
\Big(1+\frac{1}{n}\Big)^{n} \le e. \tag{80}
\]
Using this, along with the assumption β ≤ 1/(10L_F τ) and the fact that e^{0.2} ≤ 2, we immediately obtain (58a).
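For completeness, the calculation behind (58a) can be spelled out as follows: since β ≤ 1/(10τ L_F) and 0 ≤ t ≤ τ, the growth factor in (57a) satisfies
\[
(1+2\beta L_F)^{t-1} \le \Big(1+\frac{1}{5\tau}\Big)^{\tau} = \left[\Big(1+\frac{1}{5\tau}\Big)^{5\tau}\right]^{1/5} \le e^{1/5} \le 2,
\]
so the right-hand side of (57a) is at most 2βt · 2 · (σ_F+γ_F) = 4βt(σ_F+γ_F).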
To show the other bound, (58b), we use (57b) with φ = 1/(2τ):
\[
\phi + 16\Big(1+\frac{1}{\phi}\Big)\beta^{2}L_F^{2} = \frac{1}{2\tau} + 16(1+2\tau)\beta^{2}L_F^{2} \le \frac{1}{2\tau} + \frac{16(1+2\tau)}{100\tau^{2}} \le \frac{1}{\tau}, \tag{81}
\]
where the first inequality follows from the assumption β ≤ 1/(10L_F τ) and the last inequality is obtained using the trivial bound 1+2τ ≤ 3τ. Finally, using (81) along with (80) completes the proof.

G Proof of Theorem 4.5

Although we only ask a fraction of agents to compute their local updates in Algorithm 1, here, and just for the sake of analysis, we assume that all agents perform local updates; we will not use all agents' updates in computing w_{k+1}. Also, from Proposition F.1, recall that w_{k,t} = (1/n) Σ_{i=1}^{n} w^i_{k,t}.
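Before proceeding, the bookkeeping of this proof can be made concrete with a small simulation. The sketch below is purely illustrative (toy quadratic surrogates stand in for the meta-objectives F_i, and all sizes are hypothetical; this is not the paper's code). It tracks the three objects used throughout: the local iterates w^i_{k+1,t}, the sampled average w̄_{k+1,t} over A_k, and the all-agent average w_{k+1,t}.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, tau, beta, r = 20, 5, 4, 0.05, 0.25   # hypothetical sizes, not the paper's values

# Toy quadratic surrogates: grad_F_tilde(i, w) stands in for agent i's stochastic
# estimate of the gradient of F_i; its exact form is irrelevant to the bookkeeping.
A = [np.eye(d) * (1.0 + 0.1 * i) for i in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

def grad_F_tilde(i, w):
    return A[i] @ w - b[i] + 0.01 * rng.standard_normal(d)

w_k = np.zeros(d)                     # shared model at the start of round k+1
W = np.tile(w_k, (n, 1))              # w^i_{k+1,0} = w_k for every agent i (analysis runs all agents)
for t in range(tau):                  # tau local steps: w^i_{k+1,t+1} = w^i_{k+1,t} - beta * gradient
    for i in range(n):
        W[i] = W[i] - beta * grad_F_tilde(i, W[i])

Ak = rng.choice(n, size=int(r * n), replace=False)  # sampled agents A_k, |A_k| = rn
w_bar = W[Ak].mean(axis=0)            # \bar{w}_{k+1,tau}: what the server actually averages
w_avg = W.mean(axis=0)                # w_{k+1,tau}: all-agent average, used only in the analysis
print(np.linalg.norm(w_bar - w_avg))  # deviation between the two averages, controlled via (93)-(95)
```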
Let F^t_{k+1} denote the σ-field generated by {w^i_{k+1,t}}_{i=1}^{n}. Note that, by Lemma 4.2, we know F is smooth with gradient-Lipschitz parameter L_F, and thus, by (17b), we have
\[
\begin{aligned}
F(\bar{w}_{k+1,t+1}) &\le F(\bar{w}_{k+1,t}) + \nabla F(\bar{w}_{k+1,t})^{\top}(\bar{w}_{k+1,t+1}-\bar{w}_{k+1,t}) + \frac{L_F}{2}\|\bar{w}_{k+1,t+1}-\bar{w}_{k+1,t}\|^{2} \\
&\le F(\bar{w}_{k+1,t}) - \beta\,\nabla F(\bar{w}_{k+1,t})^{\top}\left(\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\right) + \frac{L_F}{2}\beta^{2}\bigg\|\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\bigg\|^{2}, \tag{82}
\end{aligned}
\]

where the last inequality is obtained using the fact that
\[
\bar{w}_{k+1,t+1} = \frac{1}{rn}\sum_{i\in\mathcal{A}_k}w_{k+1,t+1}^{i} = \frac{1}{rn}\sum_{i\in\mathcal{A}_k}\Big(w_{k+1,t}^{i}-\beta\nabla\tilde{F}_i(w_{k+1,t}^{i})\Big) = \bar{w}_{k+1,t}-\beta\,\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i}).
\]

Taking expectations on both sides of (82) yields
\[
\mathbb{E}[F(\bar{w}_{k+1,t+1})] \le \mathbb{E}[F(\bar{w}_{k+1,t})] - \beta\,\mathbb{E}\left[\nabla F(\bar{w}_{k+1,t})^{\top}\left(\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\right)\right] + \frac{L_F}{2}\beta^{2}\,\mathbb{E}\left[\bigg\|\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\bigg\|^{2}\right]. \tag{83}
\]

Next, note that
\[
\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i}) = X+Y+Z+\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t}), \tag{84}
\]
where
\[
X = \frac{1}{rn}\sum_{i\in\mathcal{A}_k}\Big(\nabla\tilde{F}_i(w_{k+1,t}^{i})-\nabla F_i(w_{k+1,t}^{i})\Big), \tag{85}
\]
\[
Y = \frac{1}{rn}\sum_{i\in\mathcal{A}_k}\Big(\nabla F_i(w_{k+1,t}^{i})-\nabla F_i(w_{k+1,t})\Big), \tag{86}
\]
\[
Z = \frac{1}{rn}\sum_{i\in\mathcal{A}_k}\Big(\nabla F_i(w_{k+1,t})-\nabla F_i(\bar{w}_{k+1,t})\Big). \tag{87}
\]

We next bound the moments of X, Y, and Z, conditioned on F^t_{k+1}. First, recall the Cauchy-Schwarz inequality
\[
\bigg\|\sum_{i=1}^{rn}a_i b_i\bigg\|^{2} \le \left(\sum_{i=1}^{rn}\|a_i\|^{2}\right)\left(\sum_{i=1}^{rn}b_i^{2}\right). \tag{88}
\]
• Using this inequality with a_i = (∇F̃_i(w^i_{k+1,t}) − ∇F_i(w^i_{k+1,t}))/√(rn) and b_i = 1/√(rn), we obtain
\[
\|X\|^{2} \le \frac{1}{rn}\sum_{i\in\mathcal{A}_k}\big\|\nabla\tilde{F}_i(w_{k+1,t}^{i})-\nabla F_i(w_{k+1,t}^{i})\big\|^{2}, \tag{89}
\]
and hence, by using Lemma 4.3 along with the tower rule, we have
\[
\mathbb{E}[\|X\|^{2}] = \mathbb{E}\big[\mathbb{E}[\|X\|^{2}\mid\mathcal{F}_{k+1}^{t}]\big] \le \sigma_F^{2}. \tag{90}
\]

• Regarding Y, note that by using the Cauchy-Schwarz inequality (similar to what we did above) along with the smoothness of F_i, we obtain
\[
\|Y\|^{2} \le \frac{1}{rn}\sum_{i\in\mathcal{A}_k}\big\|\nabla F_i(w_{k+1,t}^{i})-\nabla F_i(w_{k+1,t})\big\|^{2} \le \frac{L_F^{2}}{rn}\sum_{i\in\mathcal{A}_k}\big\|w_{k+1,t}^{i}-w_{k+1,t}\big\|^{2}. \tag{91}
\]
Again, taking expectations and using the fact that A_k is chosen uniformly at random implies
\[
\mathbb{E}[\|Y\|^{2}] = \mathbb{E}\big[\mathbb{E}[\|Y\|^{2}\mid\mathcal{F}_{k+1}^{t}]\big] \le L_F^{2}\,\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\big\|w_{k+1,t}^{i}-w_{k+1,t}\big\|^{2}\right] \le 35\beta^{2}L_F^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2}), \tag{92}
\]
where the last step follows from (58b) in Corollary F.2 along with the fact that t ≤ τ−1.
• Regarding Z, first recall that if we have n numbers a_1, ..., a_n with mean μ = (1/n)Σ_i a_i and variance σ² = (1/n)Σ_i |a_i − μ|², and we take a subset {a_i}_{i∈A} of size |A| = rn by sampling without replacement, then we have
\[
\mathbb{E}\left[\bigg|\frac{\sum_{i\in A}a_i}{rn}-\mu\bigg|^{2}\right] = \frac{\sigma^{2}}{rn}\left(1-\frac{rn-1}{n-1}\right) = \frac{\sigma^{2}(1-r)}{r(n-1)} \tag{93}
\]
(a quick numerical sanity check of this identity is sketched after this list). Using this, we have
\[
\mathbb{E}\big[\|\bar{w}_{k+1,t}-w_{k+1,t}\|^{2}\mid\mathcal{F}_{k+1}^{t}\big] \le \frac{1-r}{r(n-1)}\cdot\frac{1}{n}\sum_{i=1}^{n}\big\|w_{k+1,t}^{i}-w_{k+1,t}\big\|^{2}, \tag{94}
\]
and hence, by taking expectations on both sides and using the tower rule along with (58b) in Corollary F.2, we obtain
\[
\mathbb{E}\big[\|\bar{w}_{k+1,t}-w_{k+1,t}\|^{2}\big] \le \frac{35(1-r)\beta^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2})}{r(n-1)}. \tag{95}
\]

Next, note that by using the Cauchy-Schwarz inequality (88) with a_i = (∇F_i(w_{k+1,t}) − ∇F_i(w̄_{k+1,t}))/√(rn) and b_i = 1/√(rn), we have
\[
\|Z\|^{2} \le \frac{1}{rn}\sum_{i\in\mathcal{A}_k}\big\|\nabla F_i(w_{k+1,t})-\nabla F_i(\bar{w}_{k+1,t})\big\|^{2} \le \frac{L_F^{2}}{rn}\sum_{i\in\mathcal{A}_k}\big\|w_{k+1,t}-\bar{w}_{k+1,t}\big\|^{2} = L_F^{2}\,\|\bar{w}_{k+1,t}-w_{k+1,t}\|^{2}, \tag{96}
\]
where the second inequality is obtained using the smoothness of F_i (Lemma 4.2). Now, taking expectations on both sides and using (95) yields
\[
\mathbb{E}[\|Z\|^{2}] \le \frac{35(1-r)\beta^{2}L_F^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2})}{r(n-1)}. \tag{97}
\]

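As promised above, here is a quick sanity check of the identity (93), which is the standard variance formula for sampling without replacement from a finite population. The snippet (our illustration, not part of the paper's code) estimates the left-hand side by Monte Carlo for an arbitrary choice of the numbers a_i and compares it with the right-hand side:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 50, 0.2                     # population size and sampling fraction (illustrative values)
a = rng.standard_normal(n)         # arbitrary numbers a_1, ..., a_n
mu, var = a.mean(), a.var()        # their mean and (population) variance

m = int(r * n)                     # subset size |A| = rn
trials = 50_000
est = np.mean([
    (rng.choice(a, size=m, replace=False).mean() - mu) ** 2
    for _ in range(trials)
])
pred = var * (1 - r) / (r * (n - 1))   # right-hand side of (93)
print(est, pred)                       # the two values should agree up to Monte Carlo error
```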
Now, getting back to (83), we first lower bound the term
\[
\mathbb{E}\left[\nabla F(\bar{w}_{k+1,t})^{\top}\left(\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\right)\right].
\]
To do so, note that, by (84), we have
\[
\begin{aligned}
\mathbb{E}\left[\nabla F(\bar{w}_{k+1,t})^{\top}\left(\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\right)\right]
&= \mathbb{E}\left[\nabla F(\bar{w}_{k+1,t})^{\top}\left(X+Y+Z+\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t})\right)\right] \\
&\ge \mathbb{E}\left[\nabla F(\bar{w}_{k+1,t})^{\top}\left(\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t})\right)\right] - \mathbb{E}\big[\nabla F(\bar{w}_{k+1,t})^{\top}X\big] \\
&\quad - \frac{1}{4}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] - \mathbb{E}\big[\|Y+Z\|^{2}\big], \tag{98}
\end{aligned}
\]
where the last inequality is obtained using the fact that
\[
\mathbb{E}\big[\nabla F(\bar{w}_{k+1,t})^{\top}(Y+Z)\big] \le \frac{1}{4}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] + \mathbb{E}\big[\|Y+Z\|^{2}\big].
\]
Now, we bound the terms in (98) separately. First, note that by the tower rule we have
\[
\begin{aligned}
\mathbb{E}\left[\nabla F(\bar{w}_{k+1,t})^{\top}\left(\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t})\right)\right]
&= \mathbb{E}\left[\mathbb{E}\left[\nabla F(\bar{w}_{k+1,t})^{\top}\left(\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t})\right)\,\middle|\,\mathcal{F}_{k+1}^{t}\right]\right] \\
&= \mathbb{E}\left[\nabla F(\bar{w}_{k+1,t})^{\top}\,\mathbb{E}\left[\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t})\,\middle|\,\mathcal{F}_{k+1}^{t}\right]\right] \\
&= \mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big], \tag{99}
\end{aligned}
\]
where the last equality is obtained using the fact that A_k is chosen uniformly at random, and thus
\[
\mathbb{E}\left[\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t})\,\middle|\,\mathcal{F}_{k+1}^{t}\right] = \frac{1}{n}\sum_{i=1}^{n}\nabla F_i(\bar{w}_{k+1,t}) = \nabla F(\bar{w}_{k+1,t}).
\]

Second, note that
\[
\mathbb{E}\big[\nabla F(\bar{w}_{k+1,t})^{\top}X\big] = \mathbb{E}\Big[\mathbb{E}\big[\nabla F(\bar{w}_{k+1,t})^{\top}X\mid\mathcal{F}_{k+1}^{t}\big]\Big] = \mathbb{E}\Big[\nabla F(\bar{w}_{k+1,t})^{\top}\,\mathbb{E}\big[X\mid\mathcal{F}_{k+1}^{t}\big]\Big].
\]
As a result, we have
\[
\mathbb{E}\big[\nabla F(\bar{w}_{k+1,t})^{\top}X\big] \le \frac{1}{4}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] + \mathbb{E}\Big[\big\|\mathbb{E}[X\mid\mathcal{F}_{k+1}^{t}]\big\|^{2}\Big] \le \frac{1}{4}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] + \frac{4\alpha^{2}L^{2}\sigma_G^{2}}{D}, \tag{100}
\]
where the last inequality follows from Lemma 4.3. Third, note that by the Cauchy-Schwarz inequality,
\[
\mathbb{E}\big[\|Y+Z\|^{2}\big] \le 2\big(\mathbb{E}[\|Y\|^{2}]+\mathbb{E}[\|Z\|^{2}]\big) \le 70\beta^{2}L_F^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2})\left(1+\frac{1-r}{r(n-1)}\right) \le 140\beta^{2}L_F^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2}), \tag{101}
\]
where the second inequality is obtained using (92) and (97). Plugging (99), (100), and (101) in (98) implies
\[
\mathbb{E}\left[\nabla F(\bar{w}_{k+1,t})^{\top}\left(\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\right)\right] \ge \frac{1}{2}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] - 140\beta^{2}L_F^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2}) - \frac{4\alpha^{2}L^{2}\sigma_G^{2}}{D}. \tag{102}
\]
Next, we characterize an upper bound for the other term in (83), namely
\[
\mathbb{E}\left[\bigg\|\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\bigg\|^{2}\right].
\]
Note that, by (84), we have
\[
\bigg\|\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\bigg\|^{2} \le 2\|X+Y+Z\|^{2} + 2\bigg\|\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t})\bigg\|^{2}, \tag{103}
\]
and thus, by (101) along with (90), we have
\[
\mathbb{E}\left[\bigg\|\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\bigg\|^{2}\right] \le 2\,\mathbb{E}\left[\bigg\|\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t})\bigg\|^{2}\right] + 4\sigma_F^{2} + 560\beta^{2}L_F^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2}). \tag{104}
\]

Note that E[(1/rn) Σ_{i∈A_k} ∇F_i(w̄_{k+1,t}) | F^t_{k+1}] = ∇F(w̄_{k+1,t}), since A_k is chosen uniformly at random. Also, by Lemma 4.4, we have
\[
\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\Big[\big\|\nabla F_i(\bar{w}_{k+1,t})-\nabla F(\bar{w}_{k+1,t})\big\|^{2}\,\Big|\,\mathcal{F}_{k+1}^{t}\Big] \le \gamma_F^{2},
\]
and thus, by (93), we have
\[
\mathbb{E}\left[\bigg\|\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla F_i(\bar{w}_{k+1,t})\bigg\|^{2}\right] \le \mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] + \frac{\gamma_F^{2}(1-r)}{r(n-1)}. \tag{105}
\]

Plugging (105) in (104), we obtain
\[
\mathbb{E}\left[\bigg\|\frac{1}{rn}\sum_{i\in\mathcal{A}_k}\nabla\tilde{F}_i(w_{k+1,t}^{i})\bigg\|^{2}\right] \le 2\,\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] + \frac{2\gamma_F^{2}(1-r)}{r(n-1)} + 4\sigma_F^{2} + 560\beta^{2}L_F^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2}). \tag{106}
\]

Substituting (106) and (102) in (83) implies
\[
\begin{aligned}
\mathbb{E}[F(\bar{w}_{k+1,t+1})] &\le \mathbb{E}[F(\bar{w}_{k+1,t})] - \beta\Big(\frac{1}{2}-\beta L_F\Big)\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] \\
&\quad + 140(1+2\beta L_F)\beta^{3}L_F^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2}) + \beta^{2}L_F\left(2\sigma_F^{2}+\frac{\gamma_F^{2}(1-r)}{r(n-1)}\right) + \frac{4\beta\alpha^{2}L^{2}\sigma_G^{2}}{D} \\
&\le \mathbb{E}[F(\bar{w}_{k+1,t})] - \frac{\beta}{4}\,\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] + \beta\sigma_T^{2}, \tag{107}
\end{aligned}
\]
where
\[
\sigma_T^{2} := 280(\beta L_F)^{2}\tau(\tau-1)(2\sigma_F^{2}+\gamma_F^{2}) + \beta L_F\left(2\sigma_F^{2}+\frac{\gamma_F^{2}(1-r)}{r(n-1)}\right) + \frac{4\alpha^{2}L^{2}\sigma_G^{2}}{D}, \tag{108}
\]
and the last inequality is obtained using β ≤ 1/(10τ L_F). Summing up (107) for all t = 0, ..., τ−1, we obtain
\[
\mathbb{E}[F(w_{k+1})] \le \mathbb{E}[F(w_k)] - \frac{\beta\tau}{4}\left(\frac{1}{\tau}\sum_{t=0}^{\tau-1}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big]\right) + \beta\tau\sigma_T^{2}, \tag{109}
\]
where we used the fact that w̄_{k+1,τ} = w_{k+1}. Finally, summing up (109) for k = 0, ..., K−1 implies
\[
\mathbb{E}[F(w_K)] \le F(w_0) - \frac{\beta\tau K}{4}\left(\frac{1}{\tau K}\sum_{k=0}^{K-1}\sum_{t=0}^{\tau-1}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big]\right) + \beta\tau K\sigma_T^{2}. \tag{110}
\]
As a result, we have
\[
\frac{1}{\tau K}\sum_{k=0}^{K-1}\sum_{t=0}^{\tau-1}\mathbb{E}\big[\|\nabla F(\bar{w}_{k+1,t})\|^{2}\big] \le \frac{4}{\beta\tau K}\Big(F(w_0)-\mathbb{E}[F(w_K)]+\beta\tau K\sigma_T^{2}\Big) \le \frac{4(F(w_0)-F^{*})}{\beta\tau K} + 4\sigma_T^{2}, \tag{111}
\]
which gives us the desired result.
Remark G.1. As stated in Remark 4.8, we could easily extend our analysis to the case of a diminishing stepsize. In particular, by using β_k as the stepsize at iteration k, the descent result (109) holds with β = β_k. Hence, summing up this inequality for k = 0, ..., K−1, we recover the same complexity bounds using β_k = O(1/√(τk)).

H On First-Order Approximations of Per-FedAvg

As we stated previously, the Per-FedAvg method, like MAML, requires computing a Hessian-vector product, which is computationally costly in some applications. As a result, one may consider using a first-order approximation of the update rule of the Per-FedAvg algorithm. The main goal of this section is to show how our analysis can be extended to the case where we either drop the second-order term or approximate the Hessian-vector product using first-order techniques.
To do so, we show that it suffices to extend the result in Lemma 4.3 to the first-order approximation setting, i.e., to find m_F and σ̃_F such that
\[
\Big\|\mathbb{E}\big[\nabla\tilde{F}_i(w)-\nabla F_i(w)\big]\Big\| \le m_F, \qquad \mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-\nabla F_i(w)\big\|^{2}\Big] \le \tilde{\sigma}_F^{2}.
\]
One can easily check that the rest of the analysis does not change, and the final result (Theorem 4.5) holds if we simply replace σ_F by σ̃_F and α²L²σ_G²/D by m_F².
We next focus on two different approaches, developed for the MAML formulation, for approximating the Hessian-vector product, and show how we can characterize m_F and σ̃_F in both cases:

• Ignoring the second-order term: The authors in [2] suggested simply ignoring the second-order term in the update of MAML to reduce its computational cost, i.e., replacing ∇F̃_i(w) with
\[
\nabla\tilde{f}_i\Big(w-\alpha\nabla\tilde{f}_i(w,\mathcal{D}),\,\mathcal{D}'\Big). \tag{112}
\]
This approach is known as First-Order MAML (FO-MAML), and it has been shown to perform relatively well in many cases [2]. In particular, [33] characterized the convergence properties of FO-MAML for the centralized MAML problem. Next, we characterize the mean and variance of this gradient approximation.
Lemma H.1. Assume that we estimate ∇F_i(w) by (112), where D and D′ are independent batches of size D and D′, respectively. Suppose that the conditions in Assumptions 2-4 are satisfied. Then, for any α ∈ [0, 1/L] and w ∈ ℝ^d, we have
\[
\Big\|\mathbb{E}\big[\nabla\tilde{F}_i(w)-\nabla F_i(w)\big]\Big\| \le m_F^{\mathrm{FO}} := \alpha L\left(\frac{\sigma_G}{\sqrt{D}}+B\right),
\]
\[
\mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-\nabla F_i(w)\big\|^{2}\Big] \le (\tilde{\sigma}_F^{\mathrm{FO}})^{2} := 2\sigma_G^{2}\left(\frac{1}{D'}+\frac{(\alpha L)^{2}}{D}\right)+2(\alpha LB)^{2}.
\]
Proof. In fact, in this case, ∇F̃_i(w) is approximating
\[
G_i(w) := \nabla f_i\big(w-\alpha\nabla f_i(w)\big). \tag{113}
\]
To bound m_F^{FO}, note that
\[
\Big\|\mathbb{E}\big[\nabla\tilde{F}_i(w)-\nabla F_i(w)\big]\Big\| \le \Big\|\mathbb{E}\big[\nabla\tilde{F}_i(w)-G_i(w)\big]\Big\| + \big\|\mathbb{E}\big[G_i(w)-\nabla F_i(w)\big]\big\| \tag{114}
\]
\[
\le \frac{\alpha L\sigma_G}{\sqrt{D}} + \alpha LB, \tag{115}
\]
where the bound on the first term follows from (33) in the proof of Lemma 4.3 in Appendix D, and the bound on the second term is obtained using
\[
\|G_i(w)-\nabla F_i(w)\| = \alpha\big\|\nabla^{2}f_i(w)\,\nabla f_i\big(w-\alpha\nabla f_i(w)\big)\big\| \le \alpha\|\nabla^{2}f_i(w)\|\cdot\big\|\nabla f_i\big(w-\alpha\nabla f_i(w)\big)\big\| \le \alpha LB, \tag{116}
\]
where the first inequality follows from the definition of the matrix norm and the last inequality is obtained using Assumption 2.
To characterize σ̃_F^{FO}, note that
\[
\mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-\nabla F_i(w)\big\|^{2}\Big] \le 2\,\mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-G_i(w)\big\|^{2}\Big] + 2\,\mathbb{E}\Big[\big\|G_i(w)-\nabla F_i(w)\big\|^{2}\Big]. \tag{117}
\]
We bound these two terms separately. Note that we have already bounded the first term in Appendix D (see (36)):
\[
\mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-G_i(w)\big\|^{2}\Big] \le \sigma_G^{2}\left(\frac{1}{D'}+\frac{(\alpha L)^{2}}{D}\right). \tag{118}
\]
Plugging (118) and (116) into (117), we obtain the desired result.

Note that while the first term in σ̃_F^{FO} can be made arbitrarily small by choosing D and D′ large enough, this is not the case for the second term. However, the second term is also negligible if α is small enough. Still, this bound suggests that the approximation introduces a non-vanishing error term, which is carried directly to the final result (Theorem 4.5). A toy numerical illustration of this bias is sketched below.
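The following sketch (our own toy example, not the paper's code) makes the dropped second-order term visible. For a deterministic quadratic f_i there is no sampling noise, so the entire gap between the FO-MAML direction (112) and the exact expression ∇F_i(w) = (I − α∇²f_i(w))∇f_i(w − α∇f_i(w)), which follows from (113) and (116), is the bias of order αLB discussed above:

```python
import numpy as np

rng = np.random.default_rng(2)
d, alpha = 5, 0.1
Q = np.diag(rng.uniform(0.5, 2.0, size=d))   # Hessian of a toy quadratic f_i
c = rng.standard_normal(d)

grad = lambda w: Q @ w - c                   # exact gradient of f_i(w) = w^T Q w / 2 - c^T w
w = rng.standard_normal(d)
w_adapted = w - alpha * grad(w)              # inner adaptation step w - alpha * grad f_i(w)

g_exact = (np.eye(d) - alpha * Q) @ grad(w_adapted)  # exact (I - alpha H) grad f_i(w_adapted)
g_fo = grad(w_adapted)                               # FO-MAML estimate (112): Hessian term dropped
print(np.linalg.norm(g_fo - g_exact))                # bias of order alpha*L*B, cf. Lemma H.1
```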

• Estimating the Hessian-vector product using gradient differences: In the context of the MAML problem, it has been shown that the update of FO-MAML leads to an additive error that does not vanish as time progresses. To resolve this matter, [33] introduced another variant of MAML, called HF-MAML, which approximates the Hessian-vector product by gradient differences. More formally, the idea behind their method is that, for any function g, the product of the Hessian ∇²g(w) with any vector v can be approximated by
\[
\frac{\nabla g(w+\delta v)-\nabla g(w-\delta v)}{2\delta} \tag{119}
\]
with an error of at most ρδ‖v‖², where ρ is the parameter for Lipschitz continuity of the Hessian of g. Building on this idea, in the Per-FedAvg update rule, we can replace ∇F̃_i(w) by
\[
\nabla\tilde{f}_i\Big(w-\alpha\nabla\tilde{f}_i(w,\mathcal{D}),\,\mathcal{D}'\Big) - \alpha\,\tilde{d}_i(w), \tag{120}
\]
where
\[
\tilde{d}_i(w) := \frac{\nabla\tilde{f}_i\Big(w+\delta\,\nabla\tilde{f}_i\big(w-\alpha\nabla\tilde{f}_i(w,\mathcal{D}),\mathcal{D}'\big),\,\mathcal{D}''\Big)-\nabla\tilde{f}_i\Big(w-\delta\,\nabla\tilde{f}_i\big(w-\alpha\nabla\tilde{f}_i(w,\mathcal{D}),\mathcal{D}'\big),\,\mathcal{D}''\Big)}{2\delta}. \tag{121}
\]

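The central-difference estimate (119) is easy to check numerically. The sketch below (our illustration, not the paper's code) compares it against the exact Hessian-vector product for the simple separable function g(x) = Σ_i log cosh(x_i), whose gradient is tanh(x) and whose Hessian is diag(sech²(x)):

```python
import numpy as np

rng = np.random.default_rng(3)
d, delta = 5, 1e-3
w, v = rng.standard_normal(d), rng.standard_normal(d)

grad_g = lambda x: np.tanh(x)              # gradient of g(x) = sum_i log cosh(x_i)
hess_diag = 1.0 / np.cosh(w) ** 2          # Hessian of g at w is diag(sech^2(w))

hv_exact = hess_diag * v                                                   # exact Hessian-vector product
hv_approx = (grad_g(w + delta * v) - grad_g(w - delta * v)) / (2 * delta)  # estimate (119)
print(np.linalg.norm(hv_approx - hv_exact))   # error is of order rho * delta * ||v||^2
```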
For this approximation, we have the following result, which shows that we have an additional degree of freedom (δ) with which to control the error term that does not decrease with increasing batch sizes.
Lemma H.2. Assume that we estimate ∇F_i(w) by (120), where D, D′, and D″ are independent batches of size D, D′, and D″, respectively. Suppose that the conditions in Assumptions 2-4 are satisfied. Then, for any α ∈ [0, 1/L] and w ∈ ℝ^d, we have
\[
\Big\|\mathbb{E}\big[\nabla\tilde{F}_i(w)-\nabla F_i(w)\big]\Big\| \le m_F^{\mathrm{HF}} := \alpha\left(\frac{2L\sigma_G}{\sqrt{D}}+\frac{L\sigma_G}{\sqrt{D'}}+\rho\delta B^{2}\right),
\]
\[
\mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-\nabla F_i(w)\big\|^{2}\Big] \le (\tilde{\sigma}_F^{\mathrm{HF}})^{2} := 6\sigma_G^{2}\left(\frac{2(\alpha L)^{2}}{D}+\frac{2}{D'}+\frac{\alpha^{2}}{2\delta^{2}D''}\right)+2(\alpha\rho\delta)^{2}B^{4}.
\]

Proof. Note that, this time, ∇F̃_i(w) is approximating
\[
G_i'(w) := \nabla f_i\big(w-\alpha\nabla f_i(w)\big) - \alpha\,d_i(w), \tag{122}
\]
where
\[
d_i(w) := \frac{\nabla f_i\Big(w+\delta\nabla f_i\big(w-\alpha\nabla f_i(w)\big)\Big)-\nabla f_i\Big(w-\delta\nabla f_i\big(w-\alpha\nabla f_i(w)\big)\Big)}{2\delta} \tag{123}
\]
is the term approximating ∇²f_i(w)∇f_i(w − α∇f_i(w)). Below, we characterize σ̃_F^{HF}; the bound on m_F^{HF} can be derived similarly.
Similar to (117), we have
\[
\mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-\nabla F_i(w)\big\|^{2}\Big] \le 2\,\mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-G_i'(w)\big\|^{2}\Big] + 2\,\mathbb{E}\Big[\big\|G_i'(w)-\nabla F_i(w)\big\|^{2}\Big]. \tag{124}
\]
We again bound both terms separately. To simplify the notation, let us define
\[
g_i(w) := \nabla f_i\big(w-\alpha\nabla f_i(w)\big), \qquad \tilde{g}_i(w) := \nabla\tilde{f}_i\big(w-\alpha\nabla\tilde{f}_i(w,\mathcal{D}),\,\mathcal{D}'\big). \tag{125}
\]

First, note that, using (a+b+c)² ≤ 3(a²+b²+c²) for a, b, c ≥ 0, we have
\[
\big\|\nabla\tilde{F}_i(w)-G_i'(w)\big\|^{2} \le 3\|\tilde{g}_i(w)-g_i(w)\|^{2} + \frac{3\alpha^{2}}{4\delta^{2}}\Big\|\nabla\tilde{f}_i\big(w+\delta\tilde{g}_i(w),\mathcal{D}''\big)-\nabla f_i\big(w+\delta g_i(w)\big)\Big\|^{2} + \frac{3\alpha^{2}}{4\delta^{2}}\Big\|\nabla\tilde{f}_i\big(w-\delta\tilde{g}_i(w),\mathcal{D}''\big)-\nabla f_i\big(w-\delta g_i(w)\big)\Big\|^{2}. \tag{126}
\]
Taking expectations on both sides, along with using (118), we have
\[
\begin{aligned}
\mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-G_i'(w)\big\|^{2}\Big]
&\le 3\sigma_G^{2}\left(\frac{1}{D'}+\frac{(\alpha L)^{2}}{D}\right) + \frac{3\alpha^{2}}{4\delta^{2}}\Big(\mathbb{E}\Big[\big\|\nabla\tilde{f}_i\big(w+\delta\tilde{g}_i(w),\mathcal{D}''\big)-\nabla f_i\big(w+\delta g_i(w)\big)\big\|^{2}\Big] + \mathbb{E}\Big[\big\|\nabla\tilde{f}_i\big(w-\delta\tilde{g}_i(w),\mathcal{D}''\big)-\nabla f_i\big(w-\delta g_i(w)\big)\big\|^{2}\Big]\Big) \\
&\le 3\sigma_G^{2}\left(\frac{\alpha^{2}}{2\delta^{2}D''}+\frac{1}{D'}+\frac{(\alpha L)^{2}}{D}\right) + \frac{3\alpha^{2}}{4\delta^{2}}\Big(\mathbb{E}\big[\|\nabla f_i(w+\delta\tilde{g}_i(w))-\nabla f_i(w+\delta g_i(w))\|^{2}\big] + \mathbb{E}\big[\|\nabla f_i(w-\delta\tilde{g}_i(w))-\nabla f_i(w-\delta g_i(w))\|^{2}\big]\Big), \tag{127}
\end{aligned}
\]
where (127) is obtained using the fact that D″ is independent of D and D′, which implies
\[
\mathbb{E}\Big[\big\|\nabla\tilde{f}_i\big(w\pm\delta\tilde{g}_i(w),\mathcal{D}''\big)-\nabla f_i\big(w\pm\delta g_i(w)\big)\big\|^{2}\Big] \le \frac{\sigma_G^{2}}{D''} + \mathbb{E}\big[\|\nabla f_i(w\pm\delta\tilde{g}_i(w))-\nabla f_i(w\pm\delta g_i(w))\|^{2}\big].
\]
Next, note that Assumption 2 yields
\[
\|\nabla f_i(w\pm\delta\tilde{g}_i(w))-\nabla f_i(w\pm\delta g_i(w))\| \le \delta L\,\|\tilde{g}_i(w)-g_i(w)\|.
\]


Plugging this bound into (127) and using (118) implies
\[
\mathbb{E}\Big[\big\|\nabla\tilde{F}_i(w)-G_i'(w)\big\|^{2}\Big] \le 3\sigma_G^{2}\left(\frac{\alpha^{2}}{2\delta^{2}D''}+\Big(1+\frac{(\alpha L)^{2}}{2}\Big)\left(\frac{1}{D'}+\frac{(\alpha L)^{2}}{D}\right)\right) \le 3\sigma_G^{2}\left(\frac{2(\alpha L)^{2}}{D}+\frac{2}{D'}+\frac{\alpha^{2}}{2\delta^{2}D''}\right), \tag{128}
\]

where the last inequality is obtained using αL ≤ 1.
Bounding the second term in (124) is more straightforward, as we have
\[
\big\|G_i'(w)-\nabla F_i(w)\big\| = \alpha\big\|d_i(w)-\nabla^{2}f_i(w)\,\nabla f_i\big(w-\alpha\nabla f_i(w)\big)\big\| \le \alpha\rho\delta\,\|g_i(w)\|^{2} \le \alpha\rho\delta B^{2}. \tag{129}
\]
Plugging (128) and (129) into (124) gives us the desired result.

I More on Numerical Experiments

In this section, we present additional numerical results. We thank the anonymous reviewers for their suggestions on adding these results, and we look forward to further exploring our method numerically in future work.
First, in Table 2, we provide an illustration of the numerical setting in Section 5.

Table 2: Illustration of our experiment's setting

                               Image Classes
  Groups of Users     1     2    ...    5      6     7    ...   10
         1            a     a    ...    a      0     0    ...    0
         2            a     a    ...    a      0     0    ...    0
         :            :     :           :      :     :           :
         5            a     a    ...    a      0     0    ...    0
         6           a/2    0    ...    0     2a     0    ...    0
         7            0    a/2   ...    0      0    2a    ...    0
         :            :     :           :      :     :           :
        10            0     0    ...   a/2     0     0    ...   2a
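A minimal sketch of how a per-user class-count matrix matching Table 2 could be materialized (our illustration; the paper does not provide code, and the value of a below is hypothetical):

```python
import numpy as np

a = 40                                    # hypothetical value of the count "a" in Table 2
n_groups, n_classes = 10, 10

# counts[g, c] = number of images of class c held by a user in group g, per Table 2
counts = np.zeros((n_groups, n_classes), dtype=int)
counts[:5, :5] = a                        # groups 1-5: a images of each of classes 1-5
for g in range(5, 10):
    counts[g, g - 5] = a // 2             # groups 6-10: a/2 images of one of classes 1-5 ...
    counts[g, g] = 2 * a                  # ... and 2a images of the matching class among 6-10
print(counts)
```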
Second, in Figure 1a, we illustrate the average test accuracy of all studied algorithms with respect to time. As this figure shows, Per-FedAvg (HF) achieves a higher level of accuracy than regular FedAvg with local updates within the same computation time.
Third, we also compare our method with ARUBA [38]. To do so, we report the output of FedAvg+ARUBA after refinement for each user. In particular, we consider τ = 4 and K = 1000, and tune the hyper-parameters of ARUBA for a fair comparison. The final accuracy of all algorithms is as follows: Per-FedAvg (FO): 34.04 ± 0.08, FedAvg+ARUBA (with refinement): 36.74 ± 0.1, Per-FedAvg (HF): 43.73 ± 0.11. In Figure 1b, we also depict one realization of the training path, to provide intuition on the convergence speed of these methods.

[Figure 1. a: Comparison in terms of runtime; b: Comparison in terms of number of iterations.]
