Improved Maximin Share Approximations For Chores by Bin Packing
Jugal Garg∗ Xin Huang† Erel Segal-Halevi‡
jugal@illinois.edu huangxin@inf.kyushu-u.ac.jp erelsgl@gmail.com
Abstract
We study fair division of indivisible chores among n agents with additive cost functions
using the popular fairness notion of maximin share (MMS). Since MMS allocations do not
always exist for more than two agents, the goal has been to improve its approximations and
identify interesting special cases where MMS allocations exist. We show the existence of
• 1-out-of-⌊(9/11)n⌋ MMS allocations, which improves the state-of-the-art factor of 1-out-of-⌊(3/4)n⌋.
• MMS allocations for factored instances, which resolves an open question posed by Ebadian
et al. (2021).
• 15/13-MMS allocations for personalized bivalued instances, improving the state-of-the-art
factor of 13/11.
We achieve these results by leveraging the HFFD algorithm of Huang and Lu (2021). Our ap-
proach also provides polynomial-time algorithms for computing an MMS allocation for factored
instances and a 15/13-MMS allocation for personalized bivalued instances.
1 Introduction
Fair division of indivisible tasks (or chores) has garnered significant attention recently due to its
applications in various multi-agent settings; see recent surveys [Amanatidis et al., 2023, Liu et al.,
2024]. The problem is to find a fair partition of a set M of m indivisible chores among n agents
with preferences. We assume that each agent i has additive preferences represented by a cost function vi(·), such that the cost of a set of chores S is given by vi(S) = Σc∈S vi(c), where vi(c) represents the cost of chore c for agent i.
A natural and popular fairness notion in the context of indivisible items is called maximin
share (MMS), introduced by Budish [2011]. It appears to be also favored by participating agents
in real-life experiments [Gates et al., 2020]. An agent’s MMS is defined as the minimum cost they
can ensure by partitioning all the chores into n bundles (one for each agent) and then receiving a
bundle with the highest cost. Formally, for a set S of chores and an integer d, let Πd (S) denote the
set of all partitions of S into d bundles. Then
MMSdi(S) := min_{P ∈ Πd(S)} max_{j ∈ [d]} vi(Pj).
∗ University of Illinois at Urbana-Champaign, USA
† Kyushu University, Fukuoka, Japan
‡ Ariel University, Ariel 40700, Israel
Let us denote the MMS of an agent i by µi := MMSni (M). In an MMS allocation, each agent’s
bundle cost does not exceed their MMS. However, for more than two agents, MMS allocations
are not guaranteed to exist [Aziz et al., 2017, Feige et al., 2021]. Therefore, the focus shifted to
exploring approximations of MMS and identifying interesting special classes where MMS allocations
can be achieved. Two natural relaxations are multiplicative and ordinal approximations.
α-MMS This approach involves multiplying the MMS by a factor α > 1 to raise each agent’s
threshold. An allocation is said to be α-MMS if the cost of each agent’s bundle is at most α times
their MMS. Research has progressed to demonstrate the existence of 13/11-MMS allocations [Huang
and Segal-Halevi, 2023].
1-out-of-d-MMS Another way to adjust the threshold is by considering the MMS when the
chores are divided into d < n bundles. An allocation is 1-out-of-d-MMS if each agent’s bundle
cost is no more than MMSdi (M). This relaxation, initially introduced in [Budish, 2011] for the
case of goods, is valued for its resilience to small perturbations in chore costs; see [Hosseini et al., 2021a] for more details. The current best-known factor for which existence is established is 1-out-of-⌊(3/4)n⌋ [Hosseini et al., 2022].
1.2 Further related work
Given the intense study of MMS notion and its variants, we focus on closely related work here.
Computing the MMS value of an agent is NP-hard, even for two agents, using a straightforward
reduction from the Partition problem. However, a Polynomial Time Approximation Scheme (PTAS)
exists for this computation using a PTAS for the job scheduling problem [Hochbaum and Shmoys,
1987]. In contrast, for factored instances, MMS values can be computed in polynomial time [Ebadian
et al., 2021].
MMS allocations are not guaranteed to exist for more than two agents [Aziz et al., 2017, Feige
et al., 2021], which has motivated the exploration of approximate MMS allocations to ensure their
existence. For multiplicative approximation, a series of works [Aziz et al., 2017, Barman and
Krishnamurthy, 2020, Huang and Lu, 2021, Huang and Segal-Halevi, 2023] have established the
current best approximation factor of 13/11. On the other hand, for ordinal approximation, research
has progressed to show the existence of 1-out-of-⌊(3/4)n⌋ MMS allocations [Aigner-Horev and Segal-
Halevi, 2022, Hosseini et al., 2022]. For the special case of (non-personalized) bivalued instances,
MMS allocations are known to exist [Feige, 2022].
Goods The MMS notion can similarly be defined for the fair division of goods. Like the case of
chores, MMS allocations for goods do not always exist [Kurokawa et al., 2018]. Extensive research
has been dedicated to approximate MMS allocations for goods. Notable works [Amanatidis et al.,
2017, Ghodsi et al., 2018, Garg and Taki, 2021, Akrami et al., 2023, Hosseini and Searns, 2021,
Hosseini et al., 2021a] have led to a multiplicative approximation factor of 3/4 + 3/3836 [Akrami
and Garg, 2024] and an ordinal approximation factor of 1-out-of-4⌊n/3⌋ [Akrami et al., 2024].
2 Preliminaries
We denote [t] = {1, . . . , t}. An instance of the problem of dividing a set M of m indivisible items
(chores) among n agents is given by a cost function vector v = (v1 , . . . , vn ), where vi denotes the
additive cost function of agent i. We assume that all chore costs are positive, i.e., vi (c) > 0 for all
c ∈ M.
IDO instances It is without loss of generality to assume that the instance has identical order
preferences (IDO), meaning there exists a universal ordering of all chores c1 , . . . , cm such that, for
any agent i, vi(c1) ≥ vi(c2) ≥ · · · ≥ vi(cm) [Huang and Lu, 2021]. Thus, we will assume a universal
ordering over the chores, which is also used for tie-breaking.
FFD algorithm The chores division problem has a strong connection with the well-studied bin
packing problem. Huang and Lu [2021] were the first to apply the First Fit Decreasing (FFD)
algorithm, a technique from the bin packing problem, to the fair division of chores, a method
further utilized in Huang and Segal-Halevi [2023].
Given a set M of chores, a cost function v on M, and a threshold τ (the “bin size”), the FFD
algorithm first orders the chores by descending cost, based on the cost function v. It then packs
each chore in the order into the first available bin where it fits, meaning the total cost in the bin
does not exceed the threshold τ . If no existing bin can accommodate the chore without surpassing
the threshold, a new empty bin is opened, and the chore is placed in it. We denote the sequence of bins output by the FFD algorithm for a given set of inputs as FFD(M, v, τ).
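For concreteness, the following Python sketch implements FFD as just described (the representation of chores as a plain list of costs, and the function name, are our own choices):

```python
def ffd(costs, tau):
    """First Fit Decreasing: pack chores (given by their costs) into bins of size tau.

    Returns the list of bins, each bin being a list of chore costs.
    A sketch of the textbook procedure described above.
    """
    bins = []    # each bin is a list of chore costs
    loads = []   # loads[i] = total cost currently in bins[i]
    for c in sorted(costs, reverse=True):    # process chores by descending cost
        for i, load in enumerate(loads):
            if load + c <= tau:              # first bin where the chore still fits
                bins[i].append(c)
                loads[i] += c
                break
        else:                                # no existing bin fits: open a new one
            bins.append([c])
            loads.append(c)
    return bins
```

Since Python's sort is stable, chores of equal cost keep their input order, which plays the role of the universal tie-breaking order mentioned above.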
MultiFit algorithm To ensure that the number of bins exactly equals n (one for each agent),
we employ the MultiFit algorithm [Coffman et al., 1978]. This algorithm runs the FFD algorithm
multiple times with different threshold values τ . The goal is to find a τ that allows F F D(M, v, τ )
to allocate all the chores into exactly n bins. It uses binary search to efficiently determine the
threshold.
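A minimal sketch of MultiFit in the same spirit, reusing the ffd sketch above (the search bounds and the fixed number of iterations are standard choices of ours, not taken from the paper):

```python
def multifit(costs, n, iterations=40):
    """Binary search for a threshold tau such that FFD packs all chores into at most n bins."""
    lo = max(max(costs), sum(costs) / n)      # no feasible tau can be smaller
    hi = max(max(costs), 2 * sum(costs) / n)  # standard MultiFit upper bound
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if len(ffd(costs, mid)) <= n:
            hi = mid                          # mid works; try a smaller threshold
        else:
            lo = mid                          # mid fails; a larger threshold is needed
    return hi
```

Note that FFD is not monotone in τ in general, so this search only returns some threshold for which FFD succeeds, not necessarily the smallest one; monotonicity for factored and personalized bivalued instances is exactly the content of Corollary 4.4 below.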
HFFD algorithm Huang and Lu [2021] extended the FFD algorithm from bins to agents to
accommodate the scenario where agents have distinct cost functions vi ’s and different thresholds
τi ’s. This generalized algorithm is known as Heterogeneous FFD (HFFD). The HFFD algorithm
processes chores by the universal ordering (from largest to smallest cost). It proceeds by filling one
bin at a time as follows: for each chore, the algorithm checks if the chore fits into the current bin
according to the threshold τi and the cost function vi of at least one remaining agent i. If it does
fit, the chore is added to the bin; otherwise, it is skipped. Once no more chores fit into the current
bin, the bin is closed and allocated to one of the remaining agents for whom the last chore fitted.
This process continues until all chores are allocated or until no remaining agents can accommodate
the chores. If all chores are successfully allocated, the algorithm is said to have succeeded; if there
are remaining chores that cannot be allocated to any agent, the algorithm failed.
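The following Python sketch reflects our reading of this description (data structures and names are ours; chores are assumed to be listed in the universal order, largest cost first):

```python
def hffd(chores, costs, thresholds):
    """Heterogeneous FFD, a sketch of the procedure described above.

    chores:      chore identifiers in the universal order (largest first)
    costs:       costs[i][c] = cost of chore c for agent i
    thresholds:  thresholds[i] = threshold (bin size) of agent i

    Returns (bundles, owners, leftover): bundles[k] is the k-th bundle, owners[k] the
    agent who received it, and leftover the chores that could not be allocated.
    """
    remaining_agents = set(costs)
    remaining_chores = list(chores)
    bundles, owners = [], []
    while remaining_chores and remaining_agents:
        bundle = []
        fitting = set(remaining_agents)          # agents for whom the bundle still fits
        for c in list(remaining_chores):
            fits = {i for i in fitting
                    if sum(costs[i][x] for x in bundle) + costs[i][c] <= thresholds[i]}
            if fits:                             # the chore fits for at least one agent
                bundle.append(c)
                remaining_chores.remove(c)
                fitting = fits                   # only these agents can still take the bundle
        if not bundle:                           # no remaining chore fits any remaining agent,
            break                                # so HFFD fails with leftover chores
        owner = min(fitting)                     # give the bundle to an agent it fits (any tie-break)
        bundles.append(bundle)
        owners.append(owner)
        remaining_agents.remove(owner)
    return bundles, owners, remaining_chores
```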
We next describe the concept of First-Fit-Valid, as defined in [Huang and Segal-Halevi, 2023], which
will be useful throughout the paper. We note that all the missing proofs are in the appendix.
Intuitively, once the bundles A1 , . . . , Ak−1 have been allocated, the benchmark bundle is the
lexicographically maximal feasible subset of the remaining items. The following proposition shows
that this is exactly the outcome of running the FFD algorithm for generating the next bundle Ak .
Proposition 2.4 (Huang and Segal-Halevi 2023). Let D be the bundle collection which is output by FFD(M, v, τ). Then, for all i ∈ [n], we have Di =_lex Bi, where Bi is the i-th benchmark bundle of D.
For completeness, we provide a proof in the Appendix.
Next, we show that when using HFFD to generate the bundle in some iteration k, the resulting
bundle is guaranteed to be lexicographically greater than or equal to the k-th benchmark bundle.
This property is called First Fit Valid, which is formally defined below.
Definition 2.5 (First Fit Valid (FFV)). Given a bundle collection A, a cost function v and a threshold τ, we call the tuple (M, A, v, τ) First Fit Valid if for any k ∈ [n], we have Ak ≥_lex Bk, where Bk is the k-th benchmark bundle of A,1 and the lexicographic comparison uses the cost function v.
We emphasize that the definition of a tuple as FFV depends on the entire set of chores M, not
only on the chores allocated in A. This is because, if M contains chores not allocated in A, this
may affect the benchmark bundles.
With the notion of First Fit Valid, the next proposition characterizes the output of the HFFD
algorithm, where vω , τω are the cost function and threshold of the last agent ω in HFFD.
Proposition 2.6 (Huang and Segal-Halevi 2023). For any IDO instance I and any threshold
vector (τ1 , . . . , τn ), if Algorithm HFFD produces a bundle collection A, then for any τ ≤ τω , the
tuple (M, A, vω , τ ) is a First Fit Valid tuple.
3 1-out-of-⌊(9/11)n⌋ MMS
In this section, we demonstrate that the HFFD algorithm yields a 1-out-of-⌊(9/11)n⌋ MMS allocation.
Our approach combines the results of FFD for bin packing [Dósa et al., 2013] with a reduction from
HFFD to FFD, closely following the method described in [Huang and Segal-Halevi, 2023].
Lemma 3.1. Let n ≥ 2 be an integer, M a set of chores, and v any cost function. Let d := ⌊(9/11)n⌋, and suppose
MMSd(M) = 1.
If the tuple (M, A, v, 1) is First Fit Valid, then the allocation A must contain all chores in M.
Proof. We assume by contradiction that there is a chore c− ∈ M that is not in any bundle in A.
Let us first consider the case where v(c−) ≤ 2/11. Given that MMSd(M) = 1, the total cost of all chores in M is at most d. This implies there exists at least one bundle Aj whose cost is at most d/n ≤ 9/11. Consequently, we have v(Aj ∪ {c−}) ≤ 1. However, Aj ∪ {c−} is lexicographically larger than Aj, thus contradicting the definition of First Fit Valid.
We can now assume that v(c−) > 2/11. We may also assume that no chore is smaller than c−, because removing all chores smaller than c− does not affect the First Fit Valid property; this operation is discussed as the Tidy-up Procedure in [Huang and Segal-Halevi, 2023].
1 Note that Bk is lexicographically-maximal among all subsets of M \ ∪j<k Aj with cost at most τ, but Ak may be lexicographically larger than Bk since its cost may be larger than τ.
We will construct another instance M′ by decreasing the cost of some chores and removing some chores from M. We then obtain another allocation A′, which is the output of FFD(M′, v, 1). The unallocated chore c− will still be in M′, and not in the output of FFD. Notice that for this new instance, the maximin share for ⌊(9/11)n⌋ bins is still at most 1. This yields a contradiction: by the tight bound FFD(L) ≤ (11/9) OPT(L) + 6/9 of Dósa et al. [2013], FFD with bin size 1 needs at most (11/9)⌊(9/11)n⌋ + 6/9 ≤ n + 2/3 bins, and since the number of bins is an integer, it allocates all chores of M′ into at most n bins.
We now show how to construct such an instance, using notions from [Huang and Segal-Halevi,
2023].
We first define redundant chores. Given a bundle Aj, a chore Aj[p] is called redundant if p > k, where k is the smallest integer such that v(∪q≤k Aj[q]) ≥ 1. We remove all redundant chores from M and A at the beginning. We keep the notation M and A, but assume that there are no redundant chores in the rest of the proof.
To see why they are called redundant, go back to the algorithm: with threshold 1, FFD never adds a chore to a bundle whose cost has already reached 1, so the redundant chores would not have been allocated to these bundles by FFD anyway. Hence, removing them does not influence the fact that there is an unallocated chore.
Let k be the smallest integer such that v(Ak) > 1. Applying Proposition 2.4 to the first k − 1 bundles of A, they are the same as the first k − 1 bundles output by FFD(M, v, 1). To fix the k-th bundle, we introduce the following concept from [Huang and Segal-Halevi, 2023].
Definition 3.2 (Suitable reduced chore). Let (M, A, v, 1) be a First Fit Valid tuple without any redundant chores. Let k be the smallest integer such that v(Ak) > 1. Let c∗k be the smallest chore in Ak. Let c◦k be a chore with v(c◦k) < v(c∗k) and M′ = M \ {c∗k} ∪ {c◦k}. Let P be the output of FFD(M′, v, 1). The chore c◦k is a suitable reduced chore of c∗k if
∪j≤k Pj = (∪j≤k Aj) \ {c∗k} ∪ {c◦k}.     (1)
In other words: replacing c∗k by c◦k has no effect on the allocation of bundles i > k.
Note that, compared to [Huang and Segal-Halevi, 2023], we replace the Tidy-up tuple with a First Fit Valid tuple in the above definition. If we can always find a suitable reduced chore, then we can fix the allocation to the output of the FFD algorithm one bundle at a time. It remains to prove that a suitable reduced chore exists.
Roughly, when we want to find a suitable reduced chore for bundle Ak , its reduced cost should
be such that the new cost of Ak is at most 1. This puts an upper bound on the cost of suitable
reduced chores. This motivates the following definition.
Definition 3.3 (Fit-in space [Huang and Segal-Halevi, 2023]). Given an index k, let c∗k be the last (smallest) chore in bundle Ak. The fit-in space for Ak is ft(k) := 1 − v(Ak \ {c∗k}).
Note that since there are no redundant chores, the fit-in space is non-negative. The following
claim is useful for simplifying our proof.
Claim 3.4 (Huang and Segal-Halevi 2023 (Lemmas 11 and 12)). Let γ = 2/11. If there are no redundant chores, no chore is smaller than the unallocated chore c−, and ft(k) ≤ 2γ = 4/11, then a suitable reduced chore exists.
Example 3.6 after the proof gives some details on how this reduction works; it can help in understanding the whole proof.
Now we only need to consider the case of a large fit-in space ft(k) > 2γ = 4/11. This is the main point in which our proof differs from theirs. In their proof, they used a procedure called
“Tidy-Up”, which makes the instance easier to work with. We cannot use this procedure here, as
it might reduce the number of bundles. Therefore, we need a whole different proof for the case of
a large fit-in space.
Let c∗k be the smallest chore in Ak . We prove that, for the case of large fit-in space, there are
exactly 2 chores in bundle Ak .
No chore could be larger than 1, as this is the maximin share for ⌊(9/11)n⌋ bins. On the other hand, there cannot be three chores in bundle Ak. Otherwise, there are at least two chores in Ak besides c∗k. When v(Ak) > 1, the smallest chore is larger than the fit-in space by definition, so the cost of each chore in Ak is at least 4/11. Hence, the total cost of Ak \ {c∗k} is at least 2 · 4/11 = 8/11. By the definition of fit-in space, we have v(Ak \ {c∗k}) = 1 − ft(k) < 7/11; a contradiction. Therefore, bundle Ak contains exactly 2 chores, namely Ak[1] and Ak[2] = c∗k.
Let c◦k be a chore with cost v(c◦k ) = f t(k). We prove it is a suitable reduced chore. Let us run
FFD for the first k bundles on the instance after replacing chore c∗k with c◦k .
Suppose first that c◦k is not allocated to a bundle Pi with i < k. By the selection of k, the first k − 1 bundles generated by FFD are equal to those of A (Pi = Ai for all i < k). Then, when filling Pk, the chores Ak[1] and c◦k are available. Because A is FFV, Ak[1] must be the largest remaining chore, so it will be allocated to Pk first. Now, the remaining room in Pk is exactly equal to the fit-in space, which is the cost of c◦k. Therefore, c◦k will be allocated to Pk, and we are done.
Now we consider the case that c◦k is allocated to a bundle Pi where i < k. We have the following
claim.
Claim 3.5. Suppose that c◦k is allocated to a bundle Pi where i < k. Then v(Pi[1]) = v(Pk[1]) = v(Ak[1]) = v(Pj[1]) for all i < j < k.
Proof. We first show v(Ai[1]) = v(Pi[1]) for i ≤ k. As allocation A is First Fit Valid w.r.t. M, and the costs of the first k − 1 bundles are at most 1, these bundles coincide with the first k − 1 bundles of FFD(M, v, 1) by Proposition 2.4.
As FFD processes large chores before small chores, any chore larger than c∗k that is allocated in Aj where j < k will also be allocated in Pj, as its allocation is not influenced by c∗k.
We also have v(Ak [1]) = v(Pk [1]) by FFV, as it is the largest chore remaining after removing
the chores in the first k − 1 bundles.
By the properties of FFD, we have v(Pi[1]) ≥ v(Pk[1]) = v(Ak[1]) = 1 − ft(k), as Ak[1] is the only chore in Ak except the last one.
If c◦k is allocated to Pi for i < k, we have v(Pi[1]) + v(c◦k) ≤ 1, which means v(Pi[1]) ≤ 1 − v(c◦k) = 1 − ft(k).
Combining the two inequalities on v(Pi[1]) leads to v(Pi[1]) = 1 − ft(k) = v(Ak[1]). As v(Pk[1]) ≤ v(Pj[1]) ≤ v(Pi[1]) for i < j < k, the costs of all these chores must be equal. ■
With Claim 3.5, we have an easy description of the relation between P and A. We prove that there is a shift: Pi = {Ai[1]} ∪ {c◦k}, and for all i < j ≤ k, we have Pj = {Aj[1]} ∪ (Aj−1 \ {Aj−1[1]}). The argument is, basically: given the same space and the same set of chores, the FFD algorithm produces the same result. We prove the claim for bundle i + 1; the same argument works for the other indices. Let S be the set of chores Ai \ {Ai[1]}. In A, the chores of S are allocated into a space of 1 − v(Ai[1]) = ft(k). After allocating the chore Pi+1[1], whose cost equals 1 − ft(k) by Claim 3.5, the remaining space in Pi+1 is ft(k) as well. When the FFD algorithm generates Pi+1, consider the set of available chores of cost at most ft(k), namely L = {c ∈ M′ \ (∪j≤i Pj) | v(c) ≤ ft(k)}. By Proposition 2.4, the first i bundles of allocation A are the same as the output of FFD(M, v, 1). Consider the set of chores of cost at most ft(k) available when Ai is generated, namely R = {c ∈ M \ (∪j<i Aj) | v(c) ≤ ft(k)}. We have R = L. Therefore, the chores allocated to Ai besides Ai[1] are also allocated to Pi+1. For the other bundles, the argument is similar.
The chores in bundles with index larger than k are not affected by this change to the first k bundles. So c◦k is a suitable reduced chore.
Example 3.6. This example illustrates the reduction from a FFV instance to a valid output for
the FFD algorithm. Let us assume that the FFD threshold is set at 1. There are four bundles with
chore costs as follows: A1 = {.70, .25, .25}, A2 = {.60, .35}, A3 = {.60, .50}, A4 = {.54, .24, .23}.
There is also one unallocated chore with a cost of .19.
We can see that bundles A1, A3, A4 are invalid for the FFD algorithm, as their total costs are larger than 1. The reduction will fix them one by one.
• For bundle A1, the fit-in space is ft(1) = .05, which is less than γ = 2/11. Therefore, we just remove the last chore (with cost .25).
• For bundle A3, the fit-in space is ft(3) = .40, which is greater than 2γ = 4/11. Here, reducing the chore of cost .50 to cost .40 gives a suitable reduced chore. The suitable reduced chore will be allocated to A2 and will kick out the last chore in bundle A2; that chore will in turn be allocated in bundle A3.
• For bundle A4, the fit-in space is ft(4) = .22. By the result from Huang and Segal-Halevi [2023], we can always find a suitable reduced chore here. In this particular case, any cost between .19 and .22 works; let us choose .20 for the suitable reduced chore.
After all these operations, we get the following bundles. We have A′1 = {.70, .25}, A′2 = {.60, .40}, A′3 =
{.60, .35}, A′4 = {.54, .24, .20}. The unallocated chore with cost .19 is still there. The resulting al-
location is the same as the output of the FFD algorithm on the set of chores.
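This can be checked with the ffd sketch from Section 2; the snippet below (an illustration only, with all costs scaled by 100 to avoid floating-point issues) reproduces the reduced bundles:

```python
# Chore costs of the reduced instance in Example 3.6, scaled by 100; the threshold 1 becomes 100.
costs = [70, 25, 60, 40, 60, 35, 54, 24, 20, 19]
print(ffd(costs, 100))
# [[70, 25], [60, 40], [60, 35], [54, 24, 20], [19]]
# i.e. the four bundles A'_1,...,A'_4, while the chore of cost .19 is put in a bin of its own.
```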
Theorem 3.7. Suppose that the MMS value for ⌊(9/11)n⌋ bundles (bins) is 1 for every agent. If the HFFD algorithm is executed with a threshold of at least 1 for each agent, then all chores are allocated.
Proof. Let A be the output of HFFD. By Proposition 2.6, since τω ≥ 1, the tuple (M, A, vω , 1) is
First-Fit-Valid.
Given that the 1-out-of-⌊(9/11)n⌋ MMS of vω is 1 by assumption, we can apply Lemma 3.1 with v = vω, and thus conclude that A must contain all chores.
SWAP(A, Ti, Tj) = A′.
When the subscripts i and j are not crucial or their meaning is clear from the context, we will
use symbols without subscripts. Additionally, there may be cases where one set of chores could be
empty. For example, the operation SWAP (A, ∅j , T ) denotes moving the set of chores T to bundle
Aj . The subscript j specifies the bundle receiving the chores in T .
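A minimal sketch of how we read this notation (the list representation and explicit bundle indices are ours): in SWAP(A, Ti, Tj), the chores of Tj move into bundle Ai and the chores of Ti move into bundle Aj.

```python
def swap(A, Ti, i, Tj, j):
    """Sketch of SWAP(A, T_i, T_j) as we read it: T_i ⊆ A_i and T_j ⊆ A_j exchange
    bundles, so bundle A_i receives T_j and bundle A_j receives T_i.  Either set may be
    empty; e.g. swapping an empty set in bundle j with T just moves T into A_j.

    A is a list of bundles (lists of chores); a new allocation is returned.
    """
    B = [list(bundle) for bundle in A]
    B[i] = [c for c in B[i] if c not in Ti] + list(Tj)
    B[j] = [c for c in B[j] if c not in Tj] + list(Ti)
    return B
```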
Lemma 4.2 (Reduction Lemma). For any threshold τ > 0 and any cost function v that is either factored or bivalued: if FFD(M, v, τ) allocates all the chores into n bins, and (M, Q, v, τ) is a First-Fit Valid tuple, then Q contains all chores in M.
We will prove the above lemma for factored cost functions in Section 5 and for bivalued cost
functions in Section 6.
This lemma implies two things:
Corollary 4.3. For both factored and personalized bivalued instances, we can reduce the multiple-agent case to the single-agent case. That is, the approximation ratio of HFFD is equal to that of MultiFit.
Proof. This follows from Lemma 4.2 as the outcome of HFFD when all agents have threshold τ is
First Fit Valid for vω and τ .
Corollary 4.4. For both factored and personalized-bivalued instances, FFD is monotone in the
following sense: if FFD(M, v, τ ) allocates all chores, then FFD(M, v, β) allocates all chores for
any β > τ .
Proof. This follows from Lemma 4.2, as the outcome of FFD(M, v, β) is First Fit Valid for v and τ by Proposition 2.6.
Polynomial-time computation Corollary 4.4 implies that we can use binary search to de-
sign polynomial-time algorithms for factored and personalized bivalued instances. The process
involves performing a binary search to determine a minimal threshold τi for each agent i, such that
FFD(M, vi , τi ) can allocate all chores. These thresholds τi are then used in the HFFD algorithm.
By Corollary 4.4, the HFFD algorithm will allocate all the chores, ensuring that each agent i gets
a bundle no larger than τi .
However, this approach does not extend to the general case, as discussed in [Huang and Segal-
Halevi, 2023]. In general, combining all thresholds into the HFFD algorithm may not guarantee
the allocation of all chores.
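For integer chore costs, one way to realize this procedure is sketched below, reusing the ffd and hffd sketches above (the search routine and its bounds are ours; the correctness of the binary search relies on the monotonicity of Corollary 4.4, so it applies to factored and personalized bivalued instances):

```python
def minimal_threshold(costs_i, n):
    """Smallest integer threshold tau such that FFD(M, v_i, tau) uses at most n bins.

    Assumes integer chore costs.  Binary search is valid only when FFD is monotone
    in tau, e.g. for factored or personalized bivalued costs (Corollary 4.4).
    """
    values = list(costs_i.values())
    lo = max(max(values), -(-sum(values) // n))  # ceil(total / n) and the largest chore
    hi = sum(values)                             # with this threshold one bin suffices
    while lo < hi:
        mid = (lo + hi) // 2
        if len(ffd(values, mid)) <= n:
            hi = mid
        else:
            lo = mid + 1
    return lo

def hffd_with_minimal_thresholds(chores, costs, n):
    """Run HFFD, giving each agent their individually computed minimal FFD threshold."""
    thresholds = {i: minimal_threshold(costs[i], n) for i in costs}
    return hffd(chores, costs, thresholds)
```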
5 Factored Instance
A cost function is called factored if any smaller cost is a factor of the next-larger cost, i.e., if
v(cm) ≤ · · · ≤ v(c1) then v(cm) | v(cm−1) | · · · | v(c2) | v(c1). The notion first appeared in relation to bin packing in [Coffman et al., 1987], where it is called “divisible item sizes”. It was
introduced into fair division in [Ebadian et al., 2021].
Claim 5.1 (Coffman et al. [1987]). For a factored instance, given an index j and a set S of indices
such that
(1) v(cj ) ≥ v(ci ) for any i ∈ S and
(2) v(S) ≥ v(cj ),
there is a subset S ′ ⊆ S such that v(S ′ ) = v(cj ).
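Claim 5.1 is constructive. Since the instance is factored and v(cj) ≥ v(ci) for every i ∈ S, every cost in S divides v(cj); hence the greedy sketch below (the function name and the representation by costs are ours) finds the required subset, because the remaining deficit is always a multiple of every cost not yet considered and is therefore driven exactly to zero:

```python
def equal_cost_subset(S_costs, target):
    """Given costs that all divide `target` (a factored instance) with sum(S_costs) >= target,
    return a sub-multiset of S_costs summing to exactly `target` (Claim 5.1)."""
    subset, deficit = [], target
    for c in sorted(S_costs, reverse=True):   # largest costs first
        if c <= deficit:                      # c divides the deficit, so it fits whenever deficit > 0
            subset.append(c)
            deficit -= c                      # the deficit stays a multiple of all smaller costs
        if deficit == 0:
            break
    assert deficit == 0, "needs a factored instance with sum(S_costs) >= target"
    return subset
```

For instance, in Example 5.3 below, equal_cost_subset([2, 2, 1, 1], 4) returns [2, 2], matching the set S = {b, c}.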
Theorem 5.2 (Coffman et al. [1987]). In job scheduling with a factored cost function, FFD with
threshold exactly equal to the minimum makespan always allocates all jobs.
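As a quick illustration (an example of ours, not from the paper): the costs {8, 4, 4, 2, 2, 2, 1, 1} are factored, the minimum makespan on n = 3 machines is 8, and ffd([8, 4, 4, 2, 2, 2, 1, 1], 8) with the sketch above indeed returns exactly three bins, [[8], [4, 4], [2, 2, 2, 1, 1]], allocating all jobs.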
We now extend Theorem 5.2 from FFD to HFFD. The following is a special case of Lemma 4.2
for factored instances.
Algorithm 1: Reduction by swapping for a factored cost function
Data: Q a First-Fit Valid allocation; P output of FFD on same items.
1 for k=1 to N do
2 if Qk =_lex Pk then
3 Continue to next k.
4 end
/* Else, Qk >_lex Pk by FFV */
/* Compare Qk with Pk from largest to smallest chore */
5 for j= 1 to |Qk | do
6 if v(Qk [j]) > v(Pk [j]) then
7 Let T = ∪w≥j Pk [w];
8 if v(T ) ≥ v(Qk [j]) then
9 Find a set S ⊆ T such that v(S) = v(Qk [j]);
/* Claim 5.1 guarantees existence of S */
10 Update P = SWAP (P, S, Qk [j])
11 end
12 else
13 Update P = SWAP (P, T, Qk [j])
14 end
15 end
16 end
17 end
Lemma (Reduction Lemma for Factored Instances). For any threshold τ > 0, and any factored
cost function v, if FFD(M, v, τ ) allocates all the chores into n bins, and (M, Q, v, τ ) is a First-Fit
Valid tuple, then Q contains all chores in M.
Proof. Here is a high-level idea for the proof. Given an allocation P produced by the FFD algorithm
and a First-Fit Valid allocation Q, our goal is to transform P into Q using a series of swapping
operations.
We describe the swapping operations in Algorithm 1 and provide an illustrative example below.
Example 5.3. Suppose the threshold is 10 and P1 = {a, b, c, d, e} with chore costs 4, 2, 2, 1, 1, and
Q1 = {w, x, y, z} with chore costs 4, 4, 4, 1. Note that the instance is factored and Q1 >_lex P1. At the
inner for loop, the condition v(Qk [j]) > v(Pk [j]) is first satisfied for j = 2. The set T = {b, c, d, e}
and its total cost is 6 which is larger than v(Qk [2]). We find the set S = {b, c}, remove it from P1
and swap it with {x}. Now we have P1 = {a, x, d, e} with chore costs 4, 4, 1, 1.
Next, after the swap, v(Qk [j]) > v(Pk [j]) for j = 3. We have T = {d, e}, its total cost is 2
which is now smaller than v(Qk [j]). So we remove T from P1 and swap it with {y}. The new
P1 = {a, x, y} with chore costs 4, 4, 4.
Next, for j = 4 we have v(Qk [j]) > v(Pk [j]) as v(Pk [j]) = 0 by definition. T = ∅, and we swap
it with z. So the final P1 is {a, x, y, z}, which is lex-equivalent to Q1 .
Claim 5.4. At each iteration k of the main loop, the cost of any bundle Pi with i > k does not
increase.
Proof. The proof is by induction. The base k = 0 is straightforward. For k ≥ 1, assume that the
claim holds for all iterations before k. We have v(Pk ) ≤ τ since τ is the threshold used by FFD.
By the First-Fit Valid property, if we check the chores of Pk and Qk from largest to smallest, the
first different place j must satisfy v(Qk [j]) > v(Pk [j]).
In the swap of line 10, the swap is between two sets of equal cost, v(S) = v(Qk[j]), so the costs of all bundles do not change.
In the swap of line 13, the swap is between sets with different costs, v(T ) < v(Qk [j]), so the
cost of Pk increases and the cost of some bundle Pi with i > k decreases.
By Claim 5.4, at the start of each iteration k, v(Pk) ≤ τ, so by the First Fit Valid definition, it still holds that Qk ≥_lex Pk.
During iteration k, after each iteration j of the inner loop, the set Pk[1, . . . , j] is lex-equivalent to Qk[1, . . . , j]. If, for some j ≤ |Qk|, we have |Pk| < j, then T = ∅, and the algorithm will move some chores from some later bundles into Pk. Thus, at the end of iteration k, we will have Pk =_lex Qk. Consequently, the entire process ensures that P =_lex Q. As allocation P contains all chores, allocation Q must also include all chores. This completes the proof of Lemma 4.2 for factored instances.
Theorem 5.5. In chore allocation with factored cost functions, HFFD with the thresholds of all
agents equal to their maximin share always allocates all chores.
6 Bivalued Instance
In this section, we prove Lemma 4.2 for bivalued instances. Most proofs of the auxiliary claims are
deferred to the Appendix.
Lemma (Reduction Lemma for Bivalued Instances). For any threshold τ > 0, and any bivalued
cost function v, if FFD(M, v, τ ) allocates all the chores into n bins, and (M, Q, v, τ ) is a First-Fit
Valid tuple, then Q contains all chores in M.
The reduction process for this case is described in Algorithm 2, and we provide an illustrative
example below. For any bundle A, we denote by L(A) the set of large chores in A, and by S(A)
the set of small chores in A.
Example 6.1. Suppose the threshold is 20 and P1 = {a, b, c, d, e} with chore costs 7, 7, 2, 2, 2 and
Q1 = {u, v, w, x, y, z} with chore costs 7, 7, 7, 7, 2, 2. We have |L(Q1)| = 4 > 2 = |L(P1)|, so there
must be a large chore in some bundle Pi with i > 1; we take the last such chore (say x) and swap
it with c, d, e, so now P1 = {a, b, x} with chore costs 7, 7, 7.
Now, we can assume w.l.o.g. that the large chores in P1 are the same as some large chores in
Q1 , say u, v, x, so P1 = {u, v, x} and P1 ⊆ Q1 . In the next loop, we simply add to P1 the missing
chores w, y, z or some other chores with the same costs.
Claim 6.2. 1. In iteration k, the cost of any bundle Pi with i > k does not increase.
2. If initially the number of large chores in Qk is greater than in Pk , then the number of large
chores in Pk does not change in iterations 1 to k − 1.
Algorithm 2: Reduction by swapping for a bivalued cost function
Data: Q a First-Fit Valid allocation; P output of FFD on same items.
1 for k=1 to N do
2 if Qk =_lex Pk then
3 Continue to next k.
4 end
/* Else Qk >_lex Pk by FFV, so either |L(Qk)| > |L(Pk)|, or |L(Qk)| = |L(Pk)|
and |S(Qk)| > |S(Pk)|. */
5 if |L(Qk )| > |L(Pk )| then
6 Let cl be the large chore which appears last in allocation P;
/* We prove later that chore cl must be in some bundle Pi for
i>k */
7 Update P = SWAP (P, cl, S(Pk ));
8 end
/* At this point we can assume wlog that Qk ⊇ Pk . */
9 for c ∈ Qk \ Pk do
10 Let chore cl be the one that appears last in allocation P with the same cost as c;
11 Update P = SWAP (P, cl, ∅k );
12 end
13 end
6.1 Lower bound
The lower bound of 15/13 is shown using a simple example.
Example 6.3. There are n = 3 agents. All agents have the same valuation. There are three large
chores with cost l = 4, and nine small chores with cost s = 3. The MMS is 13, as the chores
can be partitioned into three bundles containing 1 large chore and 3 small chores each. However,
FFD with any bin size in [13, 15) packs the three large chores into a single bin, and eight small
chores into the following two bins, so one small chore remains unallocated.
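This can be checked with the ffd sketch from Section 2 (an illustration only; the bin size 14 stands for an arbitrary value in [13, 15)):

```python
# Example 6.3: three large chores of cost l = 4 and nine small chores of cost s = 3.
costs = [4] * 3 + [3] * 9
print(ffd(costs, 14))
# [[4, 4, 4], [3, 3, 3, 3], [3, 3, 3, 3], [3]]  -- a fourth bin is opened,
# so with only three agents one small chore would remain unallocated.
```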
Theorem 6.4. For personalized bivalued instances of chores, the HFFD algorithm achieves a tight approximation ratio of 15/13 for MMS.
Proof sketch. Algorithm 3 accepts as input an allocation Q with maximum cost µ, and an allocation P generated by FFD with threshold τ := (15/13)µ. The algorithm processes both allocations bin by
bin. In each iteration k, it modifies Q such that Qk becomes equal to Pk . The modification is done
by swapping some chores in Qk with some chores from bundles Qz for z > k. Using a detailed
case analysis we show that, even after the swap, the cost of Qz remains at most τ . At the end of
iteration n, Qi = Pi for all i ∈ [n]. As Q contains all chores, this implies that P allocates all chores
in n bins, which concludes the proof.
Lemma 6.5. For any l and s, FFD with bin size τ ≥ µ + s packs all chores in at most n bins.
Proof. FFD processes the large chores first. Every bin (except maybe the last one) receives at least
⌊µ/l⌋ large chores, whereas every bin in the optimal packing receives at most ⌊µ/l⌋ large chores.
Hence, all large chores are packed into at most n bins.
FFD then processes the small chores. As the total cost of all chores is at most n · µ, at least
one of the first n bins has a total cost of at most µ; this bin has at least τ − µ ≥ s remaining space.
Therefore, every small chore can be packed into one of the first n bins.
Corollary 6.6. If µ ≥ 6.5s = (13/2)s, then FFD with bin size τ = (15/13)µ packs all items into n bins.
Based on the corollary, from now on we focus on the case µ < 6.5s.
Let P be the output of FFD(v, τ ). We prove that, by performing some swapping operations
on Q, we can make Q equal to P. The swapping process is described as Algorithm 3. Below, we
provide a detailed explanation of the process.
First note that, in the FFD outcome, the bundles are naturally ordered by descending order of
the number of large chores, so that the first bundles contain ⌈τ /l⌉ large chores each, then there is
Algorithm 3: Transforming an MMS partition to an output of FFD
Data: v — a cost function;
Q — an n-maximin-share partition of v, with MMS value µ;
τ — a threshold for FFD, τ := (15/13) · µ;
P — the output of FFD with cost function v and bin size τ .
1
2 Order the bundles in Q by the number of large chores, from many to few. Break ties by
the number of small chores.
3 for k=1 to n do
4 // INVARIANT 1: Qi = Pi for all i < k.
5 // INVARIANT 2: v(Qk ) ≤ µ, and v(Qi ) ≤ τ for all i > k.
6 // INVARIANT 3: one of the following holds for all i > k: (a) v(Qi ) ≤ µ, or (b) Qi contains
no large chores, or (c) Qi contains one large chore and some bi small chores and (bi + 2)s ≤ τ .
7 if |Qk | > |Pk | then
8 Let S be a set of two small chores in Qk .
9 // Lemma 6.7 shows that they exist.
10 Let z := max index of a bundle in Q that contains a large chore.
11 // Lemma 6.7 shows that z > k.
12 Let cl be a large chore in Qz ;
13 Update Q = SWAP (Q, cl, S);
14 // In two special cases, another swap is needed:
15 if |Qk | > |Pk | then
16 if Qk contains exactly 2 large and 1 small chore, and Pk contains exactly 2 large
chores then
17 Move one small chore from Qk to Qz .
18 end
19 else if Qk contains exactly 2 large and 2 small chores, and Pk contains exactly
3 large chores then
20 Let S ′ be a set of two small chores in Qk .
21 Let z ′ := max index of a bundle in Q that contains a large chore.
22 Let cl′ be a large chore in Qz ′ ;
23 Update Q = SWAP (Q, cl′ , S ′ );
24 end
25 end
26 end
27 // Here, |Qk | ≤ |Pk |.
28 for j=1 to |Pk | do
29 if v(Qk [j]) < v(Pk [j]) then
30 Let z := max index of a bundle in Q that contains a chore with the same cost as
Pk [j], and let cl be such a chore in Qz ;
31 // either Pk [j] is large and Qk [j] is small/empty, or Pk [j] is small and Qk [j] is empty
(it is possible only if |Qk | < |Pk |) ;
32 Update Q = SWAP (Q, cl, Qk [j]);
33 end
34 end
35 end
possibly a single bundle that contains fewer large chores, and the remaining bundles contain only
small chores. In Line 2, we order the given MMS partition in the same way, by descending number of large chores, and then by descending number of small chores.
In the following lines, we process the bundles in Q and P in this order. For every k in 1, . . . , n,
we modify Q by swapping some chores, so that Qk becomes equal to Pk . Therefore, after the main
loop ends, bundles Q1 , . . . , Qn are equal to bundles P1 , . . . , Pn , which means that in P, all items
are packed into n bins.
We maintain the following invariants at the start of each iteration k:
• All bundles in Q processed in previous iterations are equal to their counterparts in P, that
is Qi = Pi for all i < k. This holds vacuously in iteration k = 1, and holds in the following
iterations due to the swapping process.
• All bundles in Q not processed yet have a cost of at most τ . Moreover, each of these bundles
satisfies one of the following conditions: (a) A cost of at most µ, or — (b) No large chores
at all, or — (c) One large chore and some number bi of small chores, such that bi + 2 small
chores can fit into a single FFD bin.
Note that, in iteration k = 1, invariant (a) holds, as the costs of all bundles in Q are at most
µ by definition of MMS.
Our main challenge will be to prove that the invariant is maintained in the following iterations.
Note that, during the swap in Line 13, the cost of Qz might increase. We will prove later that,
at the end of the iteration, it remains at most τ , and also satisfies Invariant 3. Before that, we note
two things:
• The algorithm only swaps a large chore from the last bundle Qz containing a large chore.
Therefore, the order of bundles in Qk , . . . , Qn is maintained: we can still assume that the
following bundles are ordered by descending order of the number of large chores.
• The cost of the bundle Qz might grow above µ, but the costs of other bundles remain at most
µ. By Lemma 6.7(c), the last bundle Qz will never satisfy the condition |Qz | > |Pz |, and will
never arrive at Line 13. Therefore, all bundles Qk that arrive at Line 13 satisfy v(Qk ) ≤ µ.
In most cases, |Qk | = |Pk | after a single swap, but in two special cases, another swap is needed.
To analyze these cases, we divide to cases based on the numbers of large and small chores in Qk
and Pk . Due to Lemma 6.5, the total number of chores in Qk is at most 6. Due to Lemma 6.7, if
Line 13 runs, then there are only 35 cases for the numbers of chores; they are all listed in Figure 1, columns A–D. For example, the first case (in row 3) is the case in which Qk contains one large
chore and two small chores, which is the minimum number required by Lemma 6.7. In this case,
by Lemma 6.7, Pk must contain two large chores and no small chores.
Given the numbers of large and small chores in Pk and Qk , we can derive a lower bound on the
ratio l/s; that is, we derive an inequality of the kind l > X · s, for some positive number X. The
bound is proved below, and shown in Figure 1 column E.
Lemma 6.8. Suppose τ = (15/13)µ. For any aq < ap and bq > bp, if Qk contains aq large and bq small chores, and Pk contains ap large and bp small chores, then
l > (15bq − 13bp − 13)s/(13ap − 15aq).
By substituting the lower bound on l in the inequality µ ≥ aq l + bq s, we can get a lower bound on µ in terms of s. That is, we can get an inequality of the form µ > X · s, for some positive number X. These bounds are shown in Figure 1, column F. If the lower bound is at least 6.5s, then by Corollary 6.6 FFD packs all items into n bins; therefore, we can ignore all the rows in which column F is at least 6.5 (the relevant values in column F are italicized).
Similarly, by substituting the lower bound on l in the inequality τ ≥ (15/13) · (aq l + bq s), we can
get a lower bound on the threshold τ in terms of l and s. In particular, we can get lower bounds
of the forms τ > X · s or τ > l + X · s, for some positive numbers X. These lower bounds are shown in Figure 1, columns G and H.
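As an illustration of how these columns are derived (our own arithmetic for a single case; the actual table entries appear in Figure 1), consider row 3, where Qk contains 1 large and 2 small chores and Pk contains 2 large chores. Lemma 6.8 gives l > (15·2 − 13·0 − 13)s/(13·2 − 15·1) = (17/11)s (column E). Substituting this into µ ≥ l + 2s gives µ > (39/11)s ≈ 3.55s (column F); since this is below 6.5s, the case is not eliminated by Corollary 6.6. Substituting it into τ ≥ (15/13)(l + 2s) gives τ > (15/13)·(39/11)s ≈ 4.09s (column G), and τ ≥ (15/13)(l + 2s) = l + (2l + 30s)/13 > l + (364/143)s ≈ l + 2.55s (column H).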
Now we can see that, in almost all cases in which µ < 6.5s, the difference |Qk | − |Pk | = 1. There
are only two special cases, and they are handled by the inner condition starting at Line 15:
• Qk initially contained 1 large and 3 small chores, and Pk contained 2 large and no small
chores (row 4). After the swap in Line 13, Qk contains 2 large and 1 small chore; we move
the small chore to Qz and get Qk = Pk .
• Qk initially contained 1 large and 4 small chores, and Pk contained 3 large and no small
chores (row 10). After the swap in Line 13, Qk contains 2 large and 2 small chores. As it
still contains fewer large chores than Pk , there must be some later bundle Qz ′ for z ′ > k that
contains a large chore; we do another swap of two small chores from Qk with one large chore
from Qz ′ , and get Qk = Pk .
If one of the two special cases happens, Pk = Qk holds, and we can move to the next iteration.
Otherwise, Qk received at most one new large chore, and |Qk| ≤ |Pk|; therefore, Pk ≥_lex Qk holds.
Now we start the second phase of swaps, which is the loop starting at Line 28. For convenience,
we add to Qk some dummy chores with cost 0, so that |Qk| = |Pk|. We process the chores in Qk and Pk in order, from large to small. As Pk ≥_lex Qk, we always have v(Pk[j]) ≥ v(Qk[j]). If
v(Pk [j]) = v(Qk [j]) then there is nothing to do. Otherwise, v(Pk [j]) > v(Qk [j]), which means that
either (a) Pk[j] is large and Qk[j] is small or dummy, or (b) Pk[j] is small and Qk[j] is dummy. In both cases, the chore Pk[j] is missing from Qk. As for i < k the bundles Qi and Pi are identical,
the missing chores must be found in some later bundle, say Qz . We swap the smaller chore Qk [j]
with the larger chore from Qz . The cost of Qz decreases, so the invariant is not affected.
Once the second loop ends Qk = Pk , so invariant 1 holds at the start of the next iteration.
It remains to prove that invariants 2 and 3 hold too. For this we have to prove that, for every
bundle Qz involved in a swap, its cost after the end of the iteration is at most τ , and in addition,
it satisfies one of the conditions (a) (b) or (c) in invariant 3.
Now we are ready to prove the loop invariants 2 and 3. We assume that the invariants hold at
the iteration start, and prove that they hold at the iteration end.
Figure 1: Columns A–D describe all possible cases for numbers of large and small chores in Pk
and Qk , that satisfy Lemma 6.7.
Column E describes a lower bound on l/s derived from Lemma 6.8.
Column F describes a lower bound on µ/s derived from substituting column E in µ ≥ aq l + bq s.
Columns G and H describe lower bounds on τ derived from substituting column E in τ ≥ (15/13) ·
(aq l + bq s).
Lemma 6.9. If invariants 2 and 3 hold at the start of iteration k, then they also hold at the end
of iteration k, so that the following holds for all i > k:
• v(Qi ) ≤ τ .
• Either (a) v(Qi ) ≤ µ, or (b) Qi contains no large chores, or (c) Qi contains one large chore
and some bi small chores and (bi + 2)s ≤ τ .
We have proved that the invariants hold throughout Algorithm 3, so they also hold when the algorithm ends. Therefore, when the algorithm ends, Qk = Pk for all k ∈ {1, . . . , n}. Since Q contains all chores, this shows that FFD packs all chores into n bundles. Combined with Example 6.3, we get the tight bound declared in Theorem 6.4.
7 Conclusion
Huang and Lu [2021] first explored the connection between the bin packing problem and the fair
allocation of chores, an approach that was further utilized in [Huang and Segal-Halevi, 2023].
Building on this foundation, our paper delves deeper into this connection. We demonstrated that the
HFFD algorithm could improve the state-of-the-art of maximin share (MMS) in factored instances,
personalized bivalued instances, and 1-out-of-d MMS allocations. The central question we explore
is: Can we discover additional connections between bin packing and the fair allocation of chores?
An intriguing open question in this area is whether the HFFD (and FFD) algorithm is monotonic
in a certain range. In this paper, we demonstrate the monotonicity of the algorithm for two special
classes of chores costs. This question is closely tied to the possibility of designing an efficient
algorithm for MMS allocations that achieves the exact approximation ratio of the bin packing
problem. The monotonicity in the general case remains unclear. Another open problem is whether
MMS allocations always exist for personalized bivalued instances. To the best of our knowledge,
there are no known counterexamples demonstrating the non-existence of MMS for these instances.
Acknowledgments
Jugal Garg was supported by NSF Grants CCF-1942321 and CCF-2334461. Xin Huang was sup-
ported by JST ERATO Grant Number JPMJER2301, Japan. Erel Segal-Halevi was funded by
Israel Science Foundation grant no. 712/20.
References
Elad Aigner-Horev and Erel Segal-Halevi. Envy-free matchings in bipartite graphs and their ap-
plications to fair division. Information Sciences, 587:164–187, 2022.
Hannaneh Akrami and Jugal Garg. Breaking the 3/4 barrier for approximate maximin share. In
Proc. 35th Symp. Discrete Algorithms (SODA), pages 74–91, 2024.
Hannaneh Akrami, Jugal Garg, Eklavya Sharma, and Setareh Taki. Simplification and improvement
of MMS approximation. In Proc. 32nd Intl. Joint Conf. Artif. Intell. (IJCAI), 2023.
Hannaneh Akrami, Jugal Garg, Eklavya Sharma, and Setareh Taki. Improving approximation
guarantees for maximin share. In Proc. 25th Conf. Economics and Computation (EC), 2024.
G. Amanatidis, H. Aziz, G. Birmpas, A. Filos-Ratsikas, B. Li, H. Moulin, A. A. Voudouris, and
X. Wu. Fair division of indivisible goods: Recent progress and open questions. Artificial Intelli-
gence, 322:103965, 2023.
Georgios Amanatidis, Evangelos Markakis, Afshin Nikzad, and Amin Saberi. Approximation algo-
rithms for computing maximin share allocations. ACM Transactions on Algorithms (TALG), 13
(4):1–28, 2017. doi: 10.1145/3147173.
Haris Aziz, Gerhard Rauchecker, Guido Schryen, and Toby Walsh. Algorithms for max-min share
fair allocation of indivisible chores. In Proceedings of the 31st AAAI Conference on Artificial
Intelligence (AAAI), pages 335–341, 2017.
Haris Aziz, Péter Biró, Jérôme Lang, Julien Lesca, and Jérôme Monnot. Efficient reallocation
under additive and responsive preferences. Theor. Comput. Sci., 790:1–15, 2019.
Siddharth Barman and Sanath Kumar Krishnamurthy. Approximation algorithms for maximin
fair division. ACM Transactions on Economics and Computation (TEAC), 8(1):1–28, 2020. doi:
10.1145/3381525.
Eric Budish. The combinatorial assignment problem: Approximate competitive equilibrium from
equal incomes. Journal of Political Economy, 119(6):1061 – 1103, 2011.
Edward G. Coffman, Jr., Michael R Garey, and David S Johnson. Bin packing with divisible item
sizes. Journal of Complexity, 3(4):406–428, 1987.
György Dósa, Rongheng Li, Xin Han, and Zsolt Tuza. Tight absolute bound for first fit decreasing
bin-packing: FFD(L) ≤ 11/9 OPT(L) + 6/9. Theor. Comput. Sci., 510:13–61, 2013.
Soroush Ebadian, Dominik Peters, and Nisarg Shah. How to fairly allocate easy and difficult chores.
arXiv preprint arXiv:2110.11285, 2021.
Uriel Feige. Maximin fair allocations with two item values, 2022. https://www.wisdom.weizmann.ac.il/~feige/mypapers/MMSab.pdf.
Uriel Feige, Ariel Sapir, and Laliv Tauber. A tight negative example for MMS fair allocations. In
Proc. 17th Conf. Web and Internet Economics (WINE), pages 355–372, 2021.
Jugal Garg and Setareh Taki. An improved approximation algorithm for maximin shares. Artificial
Intelligence, page 103547, 2021.
Vael Gates, Thomas L. Griffiths, and Anca D. Dragan. How to be helpful to multiple people at
once. Cognitive Science, 44, 2020.
Mohammad Ghodsi, MohammadTaghi HajiAghayi, Masoud Seddighin, Saeed Seddighin, and Hadi
Yami. Fair allocation of indivisible goods: Improvements and generalizations. In Proc. 19th
Conf. Economics and Computation (EC), pages 539–556, 2018.
Dorit S. Hochbaum and David B. Shmoys. Using dual approximation algorithms for scheduling
problems theoretical and practical results. J. ACM, 34(1):144–162, 1987.
Hadi Hosseini and Andrew Searns. Guaranteeing maximin shares: Some agents left behind. In
Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI), pages
238–244, 2021.
Hadi Hosseini, Andrew Searns, and Erel Segal-Halevi. Ordinal maximin share approximation for
goods. J. Artif. Intell. Res., 74, 2021a. doi: 10.1613/jair.1.13317.
Hadi Hosseini, Sujoy Sikdar, Rohit Vaish, and Lirong Xia. Fair and efficient allocations under
lexicographic preferences. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI,
pages 5472–5480. AAAI Press, 2021b.
Hadi Hosseini, Andrew Searns, and Erel Segal-Halevi. Ordinal maximin share approximation for
chores. In 21st International Conference on Autonomous Agents and Multiagent Systems, AA-
MAS 2022, pages 597–605, 2022.
Xin Huang and Pinyan Lu. An algorithmic framework for approximating maximin share allocation
of chores. In EC ’21: The 22nd ACM Conference on Economics and Computation, pages 630–631,
2021.
Xin Huang and Erel Segal-Halevi. A reduction from chores allocation to job scheduling. In Proc.
24th Conf. Economics and Computation (EC), 2023.
D. S. Johnson. Near-optimal bin packing algorithms. Phd thesis, Massachusetts Institute of Tech-
nology, 1973.
David Kurokawa, Ariel D. Procaccia, and Junxing Wang. Fair enough: Guaranteeing approximate
maximin shares. J. ACM, 65(2):8:1–8:27, 2018. doi: 10.1145/3140756.
S. Liu, X. Lu, M. Suzuki, and T. Walsh. Mixed fair division: A survey. In Proceedings of the AAAI
Conference on Artificial Intelligence, volume 38, pages 22641–22649, 2024.
A Proofs for Section 2 (Preliminaries)
Proposition 2.4 (Huang and Segal-Halevi 2023). Let D be the bundle collection which is output by FFD(M, v, τ). Then, for all i ∈ [n], we have Di =_lex Bi, where Bi is the i-th benchmark bundle of D.
Proof. By the description of FFD, we have v(Di) ≤ τ, so Di ∈ S≤τ, where S = M \ ∪j<i Dj. By the definition of a lexicographically maximal bundle, we have Bi ≥_lex Di. To show equality, it is sufficient to prove Di ≥_lex Bi for all i ∈ [n]. Suppose the contrary, and let i be the smallest integer for which Di <_lex Bi. Let q be the smallest integer for which v(Di[q]) < v(Bi[q]). When FFD constructs bundle Di, the chore Bi[q] is available, it is processed before Di[q], and the cost of the current bundle would not exceed τ if Bi[q] were put into it at that moment, since v(Bi) ≤ τ. Therefore, FFD would put Bi[q] into the current bundle before Di[q], a contradiction. So we have Di ≥_lex Bi, which gives us Di =_lex Bi as claimed.
Proposition 2.6 (Huang and Segal-Halevi 2023). For any IDO instance I and any threshold
vector (τ1 , . . . , τn ), if Algorithm HFFD produces a bundle collection A, then for any τ ≤ τω , the
tuple (M, A, vω , τ ) is a First Fit Valid tuple.
Proof. For all k ≥ 1, we have to prove that Ak ≥_lex Bk, where Bk is the k-th benchmark bundle of A w.r.t. threshold τ, and the lexicographic comparison uses the cost function vω.
If Ak =_lex Bk, we are done. Otherwise, let q be the smallest index such that vω(Ak[q]) ̸= vω(Bk[q]).
Suppose for contradiction that vω (Ak [q]) < vω (Bk [q]). This implies that Bk [q] precedes Ak [q] in
the universal ordering. In addition, if we add chore Bk [q] to the current bundle before chore Ak [q]
is added, the cost of the current bundle would be acceptable by agent ω, since vω (Bk ) ≤ τ by
definition of a benchmark bundle, and τ ≤ τω by assumption. Therefore, HFFD would insert chore
Bk [q] to the current bundle before chore Ak [q], which is a contradiction.
This implies that vω(Ak[q]) > vω(Bk[q]). Since q is the first index in which Ak and Bk differ, Ak >_lex Bk.
Proof. (a) By the loop invariant, Pi = Qi for all i < k, and v(Qk) ≤ τ. Therefore, by FFD properties, Pk ≥_lex Qk. As Qk contains more chores than Pk, Pk must contain more large chores than Qk.
(b) As Pk contains more large chores and Qk contains more chores overall, Qk must contain at
least two small chores more than Pk .
(c) As Pk contains some large chore that is not in Qk , that large chore must be allocated
elsewhere in Q. As Pi = Qi for all i < k, this large chore must be allocated in some Qz with z > k.
(d) As the bundles in Q have been ordered by the number of large chores, Qk must contain at
least as many large chores as Qz .
Lemma 6.8. Suppose τ = (15/13)µ. For any aq < ap and bq > bp, if Qk contains aq large and bq small chores, and Pk contains ap large and bp small chores, then
l > (15bq − 13bp − 13)s/(13ap − 15aq).
Proof. The condition on Qk implies
aq l + bq s ≤ µ = (13/15)τ.
On the other hand, Qk contains small chores that are unallocated in Pk , so by FFD properties, Pk
has no room for an additional small chore. Hence:
ap l + (bp + 1)s > τ.
Combining these inequalities gives:
ap l + (bp + 1)s > τ ≥ (15/13)(aq l + bq s)
(13ap − 15aq )l > (15bq − 13bp − 13)s
l > (15bq − 13bp − 13)s/(13ap − 15aq ).
This completes the proof.
Lemma 6.9. If invariants 2 and 3 hold at the start of iteration k, then they also hold at the end
of iteration k, so that the following holds for all i > k:
• v(Qi ) ≤ τ .
• Either (a) v(Qi ) ≤ µ, or (b) Qi contains no large chores, or (c) Qi contains one large chore
and some bi small chores and (bi + 2)s ≤ τ .
Proof. If the conditions of Lemma 6.7 are not satisfied in iteration k, then Line 13 does not run,
and thus the cost of Qi remains the same or decreases. Also, the number of large chores in Qi
remains the same or decreases. Therefore the invariants still hold.
Otherwise, the numbers of chores in Qk and Pk must be one of the cases in Figure 1, that is,
one of the rows 3–37 where the value in column F is less than 6.5. We have to prove that, in all
these cases, the modified bundle Qz still satisfies the invariants. We consider three cases based on
the contents of Qk .
Case 1. Qk contains exactly one large chore, and some bq small chores (rows 3–22 in the figure).
By the ordering on Q, Qz contains exactly one large chore, and at most bq small chores.
After the swap in Line 13, Qz contains no large chores and at most bq + 2 small chores, so
v(Qz ) ≤ (bq + 2)s. By looking at Column G it can be verified that, in all rows 3–22, τ ≥ (bq + 2) · s.
Therefore, v(Qz ) ≤ τ in all these cases, and Qz satisfies invariant (b).
We now handle the two special cases, where another swap happens.
• In the special case of row 4 (where Qk initially contains 1 large and 3 small chores), τ > 6s.
Therefore, v(Qz ) < τ even after one more small chore is moved.
• In the special case of row 10 (where Qk initially contains 1 large and 4 small chores), Qz after
the first swap contains no large chores, so the second swap will occur with another bundle
Qz ′ . Similarly to Qz , this bundle too contains 1 large and at most 4 small chores, so after
the swap it contains at most 6 small chores, so v(Qz ′ ) ≤ 6s < τ .
Case 2. Qk contains 2 large and 2 small chores, and Pk contains 3 large chores (row 23 in the
figure). In this case τ > 6.67s (see column G). We now split to sub-cases based on the number of
large chores in Qz .
Subcase 2.1(a). Qz contains 1 large chore and satisfies invariant (a), that is, v(Qz ) ≤ µ. This
means that Qz contains some bz small chores such that l + bz s ≤ µ. After the swap, Qz contains
no large chores and (bz + 2) small chores. If bz ≤ 4, then v(Qz ) ≤ 6s < τ , so Qz satisfies invariant
(b). Otherwise, l + 5s ≤ µ. As Pk contains 3 large chores, the situation is as in row 17, where
7.58s < µ, so the case is handled by Corollary 6.6.
Subcase 2.1(c). Qz contains 1 large chore and satisfies invariant (c), that is, it contains some
bz small chores such that (bz + 2)s ≤ τ . After the swap, Qz contains no large chores and (bz + 2)
small chores, so v(Qz ) ≤ τ , and Qz satisfies invariant (b).
Subcase 2.2. Qz contains 2 large chores. By the ordering on Q, it contains at most 2 small chores. After the swap in Line 13, Qz contains 1 large chore and some bz ≤ 4 small chores. In column H it can be seen that τ ≥ l + 4 · s, so v(Qz) ≤ τ. Moreover, (bz + 2)s ≤ 6s < τ, so Qz satisfies invariant (c).
Case 3. Qk contains 2 large and 3 small chores, and Pk contains 4 large chores (row 26 in the
figure). In this case τ > 6.82s (see column G).
Again we split to sub-cases based on the number of large chores in Qz .
Subcase 3.1(a). Qz contains 1 large chore and satisfies invariant (a), that is, v(Qz ) ≤ µ. This
means that Qz contains some bz small chores such that l + bz s ≤ µ. After the swap, Qz contains
no large chores and (bz + 2) small chores. If bz ≤ 4, then v(Qz ) ≤ 6s < τ , so Qz satisfies invariant
(b). Otherwise, l + 5s ≤ µ. As Pk contains 4 large chores, the situation is as in row 20, where
6.68s < µ, so the case is handled by Corollary 6.6.
Subcase 3.1(c). This case is handled exactly like Subcase 2.1(c) above.
Subcase 3.2. Qz contains 2 large chores. By the ordering on Q, it contains at most 3 small
chores. After the swap in Line 13, Qz contains 1 large chore and at most 5 small chores. At the
same time, Qk contains 3 large chores and 1 small chore. As a result, |Qk | = |Pk | but Qk ̸= Pk ,
so the algorithm moves to the second loop. In Line 32, the algorithm swaps one small chore from
Qk with one large chore from Qz . At the end of the iteration, Qz contains no large chores and at
most 6 small chores. So v(Qz ) ≤ 6s < τ , and Qz satisfies invariant (b).