Lecture 11
Lecturer: Alantha Newman Scribes: Farah Charab
1 LP-Duality

Consider the following linear program:

min 7x + 3y
subject to x + y ≥ 2
           3x + y ≥ 4
           x, y ≥ 0
Suppose that an oracle tells you that x = y = 1 is a feasible solution and that the optimal value is 10. For a
minimization LP, any feasible solution provides an upper bound on the optimal value; the crucial question is
how to obtain a good lower bound on the optimal value. Specifically, given a claimed optimal solution, can we
prove that it is indeed optimal? How do we prove that there is no feasible
solution with a smaller value? Consider the following: Multiply the second constraint by 2, and then
solution with a smaller value? Consider the following: Multiply the second constraint by 2, and then
add it to the first constraint. We get the following:
7x + 3y = (x + y) + 2(3x + y) ≥ 2 + 2(4) = 10.
The above argument shows that no feasible solution has value less than 10, and hence x = y = 1 (with value 10)
is optimal for the considered LP. The method of LP duality generalizes this approach. The basic idea is to find
multipliers for the constraints that yield the largest lower bound (smallest upper bound) for a minimization
(maximization) LP. How do we do this? In this lecture, we will show how to use the “dual” of the LP to obtain
the desired bounds. In order to present the LP-Duality Theorem formally, we will first show how to
transform a given “primal” LP to its “dual”.
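Before developing the theory, it is easy to check the oracle's claim numerically. The following is a minimal sketch (not part of the lecture) that solves the example LP with SciPy's linprog, assuming SciPy is installed; the "≥" constraints are negated to fit linprog's "≤" convention.

# Sketch: solve  min 7x + 3y  s.t.  x + y >= 2,  3x + y >= 4,  x, y >= 0  with SciPy.
from scipy.optimize import linprog

c = [7, 3]                     # objective coefficients
A_ub = [[-1, -1], [-3, -1]]    # ">=" constraints rewritten as "<=" by negation
b_ub = [-2, -4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)          # expected: x = y = 1 with optimal value 10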
The dual LP D is obtained as follows. Suppose the primal LP P has the form min c^T x subject to Ax ≥ b,
x ≥ 0, where A is an m × n matrix, so that P has n variables and m constraints. Then the dual is

max b^T y
subject to A^T y ≤ c
           y ≥ 0

or, written coordinate-wise,

max Σ_{j=1}^{m} b_j y_j
subject to Σ_{j=1}^{m} A_{ji} y_j ≤ c_i   ∀ i ≤ n
           y_j ≥ 0                        ∀ j ≤ m
The correspondence between P and D can be summarized as follows:
1. For each constraint in P, there is a corresponding variable (a multiplier) in D.
2. For each variable in P, there is a corresponding constraint in D that upper bounds the coefficient of that
variable. (That is, after multiplying each constraint in P by its corresponding multiplier and then summing
all of these constraints, the resulting coefficient of each variable in P should be upper bounded by its
coefficient in the primal objective function.)
3. The dual objective function maximizes the lower bound on P by maximizing b^T y.
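For example, applying this transformation to the LP from the beginning of the lecture (two constraints, hence two dual variables y_1 and y_2) gives the dual

max 2y_1 + 4y_2
subject to y_1 + 3y_2 ≤ 7
           y_1 + y_2 ≤ 3
           y_1, y_2 ≥ 0.

The multipliers used earlier, y_1 = 1 and y_2 = 2, form a feasible solution of this dual with objective value 2 · 1 + 4 · 2 = 10.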
It might seem that there is no benefit in calculating the solution to the dual LP. That is, if obtaining
a lower bound by solving the dual LP requires us to solve an LP, why not simply solve the primal LP?
However, note that once we have solved the dual LP, the multipliers obtained (i.e. the y_j values) can
be used to prove that a proposed solution to the primal LP is actually optimal. Given these multipliers,
the process of checking for optimality could be much faster than (re)solving the primal LP.
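As a small illustration (a sketch with assumed variable names, not taken from the lecture), checking such a certificate of optimality for the example LP amounts to a few vector operations:

import numpy as np

# Primal:  min c^T x  s.t.  Ax >= b, x >= 0.   Dual:  max b^T y  s.t.  A^T y <= c, y >= 0.
A = np.array([[1.0, 1.0], [3.0, 1.0]])
b = np.array([2.0, 4.0])
c = np.array([7.0, 3.0])

x = np.array([1.0, 1.0])   # claimed optimal primal solution
y = np.array([1.0, 2.0])   # dual multipliers, one per primal constraint

primal_feasible = bool(np.all(A @ x >= b - 1e-9) and np.all(x >= 0))
dual_feasible = bool(np.all(A.T @ y <= c + 1e-9) and np.all(y >= 0))

# By Weak Duality b^T y <= c^T x for any such pair, so equality certifies optimality.
print(primal_feasible, dual_feasible, b @ y, c @ x)   # True True 10.0 10.0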
Next, we prove the “Weak Duality Theorem”, which states that the value of any feasible solution to the dual LP
is a lower bound on the optimal value of the corresponding primal LP.
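In the notation above, the argument is a short chain of inequalities: for any primal-feasible x and dual-feasible y,

c^T x ≥ (A^T y)^T x = y^T (Ax) ≥ y^T b = b^T y,

where the first inequality uses A^T y ≤ c together with x ≥ 0, and the second uses Ax ≥ b together with y ≥ 0.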
For completeness, we also state the “Strong Duality Theorem”: if the primal LP has a finite optimal value, then
so does its dual, and the two optimal values are equal. However, for the purpose of obtaining
lower bounds for the approximation algorithms that we will see in the rest of this lecture, Weak Duality
suffices.
2 Dual-Fitting
One way to use the dual in the design of approximation algorithms is the method of dual fitting. In this
method, we find an integer solution x for the primal LP and show that

c^T x ≤ α · b^T y,    (1)

for some dual-feasible solution y. Note that by Weak Duality, b^T y is a lower bound on the value of an
optimal solution. Thus, such a solution x would result in an α-approximation algorithm. Suppose we
can obtain x and y without solving an LP, and we can show that x and y are feasible for the primal
and dual LPs, respectively, and that (1) holds. Then we never have to actually solve an LP! We will
demonstrate the method of Dual Fitting using the following examples.
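In symbols, what we will show in each example is the chain

c^T x ≤ α · b^T y ≤ α · OPT_LP ≤ α · OPT,

where OPT_LP denotes the optimal value of the LP relaxation and OPT the optimal integral value; the middle inequality is Weak Duality.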
Consider the following greedy algorithm for the (unweighted) set cover problem, with universe U and collection of sets S:
• C ← ∅
• while C ≠ U do
•   choose the set s ∈ S covering the largest number of uncovered elements, i.e. maximizing |s ∩ (U \ C)|
•   for each newly covered element e ∈ s ∩ (U \ C), set its price p(e) ← 1/|s ∩ (U \ C)|
•   add s to the cover and set C ← C ∪ s
• end while
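The following is a minimal Python sketch of this greedy procedure (the representation of U as a Python set, S as a list of sets, and the dictionary of prices are illustrative assumptions); it assumes the sets in S together cover U.

# Sketch of the greedy set cover with element prices p(e) (illustrative only).
def greedy_set_cover(universe, sets):
    """universe: set of elements; sets: list of sets covering it. Returns (cover, prices)."""
    covered = set()
    cover = []            # indices of chosen sets
    price = {}            # price p(e) paid by each element
    while covered != universe:
        # choose the set covering the largest number of uncovered elements
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        newly = sets[best] - covered
        for e in newly:
            price[e] = 1.0 / len(newly)   # each chosen set distributes a total price of 1
        covered |= newly
        cover.append(best)
    return cover, price

# Example: the cost of the cover equals the total price P = sum of p(e).
cover, price = greedy_set_cover({1, 2, 3, 4}, [{1, 2, 3}, {3, 4}, {4}])
print(len(cover), sum(price.values()))    # both equal 2, the number of chosen sets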
Note that the set cover produced by the algorithm is a feasible integral solution to the covering LP, and its
cost (the number of sets chosen) equals the total price

P = Σ_{e_i ∈ U} p(e_i),

since each chosen set distributes a total price of exactly 1 among the elements it newly covers.
The vector of prices has the same form as a solution appearing in the objective function of the dual LP, although
it may not itself be dual-feasible. However, we will show that if we shrink it by a suitable factor α, i.e. use the
values p(e)/α, then the resulting “shrunk dual” is a feasible solution for the dual LP, with objective value P/α.
By Weak Duality this gives P/α ≤ OPT, and thus we will have shown that our
algorithm is an α-approximation for the unweighted set cover problem.
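For reference, the covering LP for (unweighted) set cover and its dual, used in what follows, are the standard ones (with n = |U|):

min Σ_{s∈S} x_s
subject to Σ_{s: e∈s} x_s ≥ 1   ∀ e ∈ U
           x_s ≥ 0              ∀ s ∈ S

and its dual is

max Σ_{e∈U} y_e
subject to Σ_{e∈s} y_e ≤ 1   ∀ s ∈ S
           y_e ≥ 0           ∀ e ∈ U.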
Lemma 3  The vector y = (y_1, y_2, . . . , y_{|U|}), where y_e = p(e)/H_n for all e ∈ U, is a dual-feasible solution. (Recall
that H_n is the nth harmonic number.)
Proof  To show that {y_e} is a feasible solution, we need to show that for each set s ∈ S, the following
holds:

Σ_{e_i ∈ s} y_{e_i} = Σ_{e_i ∈ s} p(e_i)/H_n ≤ 1.    (2)
Consider a set s ∈ S consisting of k elements, and let e_k, e_{k−1}, . . . , e_1 be its elements listed in the order in
which they were covered by the algorithm, i.e. e_k was covered first and e_1 was covered last.
Claim 4  p(e_j) ≤ 1/j.
Why? Because at the moment element e_j was covered, the set s still contained at least j uncovered elements
(namely e_1, . . . , e_j), i.e. |s ∩ (U \ C)| ≥ j. Since the algorithm chooses a set covering the largest number of
uncovered elements, the set it chose in that iteration covered at least j new elements, and hence each of them,
in particular e_j, received a price of at most 1/j. The claim follows.
So now, we can upper bound the cost of the elements in the set s as follows:
Σ_{e_i ∈ s} p(e_i) ≤ Σ_{i=1}^{k} 1/i = H_k ≤ H_n.
Since Σ_{e_i ∈ s} p(e_i) ≤ H_n holds for every s ∈ S, we obtain (2), and the lemma follows.
Since we have shown that the values {y_e} = {p(e)/H_n} are dual-feasible, it follows from Weak Duality
that Σ_{e_i ∈ U} y_{e_i} = Σ_{e_i ∈ U} p(e_i)/H_n = P/H_n is a lower bound on the cost of an optimal set cover.
Hence P ≤ H_n · OPT, and the greedy algorithm is an H_n-approximation for the unweighted set cover problem.
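We now turn to our second example, vertex cover. For reference, the vertex cover LP and its dual, the fractional matching LP, are the standard ones (with a variable z_i for each vertex i ∈ V; the dual variables are written x_{ij} to match the notation used below):

min Σ_{i∈V} z_i
subject to z_i + z_j ≥ 1   ∀ (i, j) ∈ E
           z_i ≥ 0         ∀ i ∈ V

and its dual is

max Σ_{(i,j)∈E} x_{ij}
subject to Σ_{j: (i,j)∈E} x_{ij} ≤ 1   ∀ i ∈ V
           x_{ij} ≥ 0                  ∀ (i, j) ∈ E.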
Let M be a maximal matching in the graph, and set x_{ij} = 1 for every edge (i, j) ∈ M and x_{ij} = 0 for every
other edge. Now we construct a vertex cover as follows: for every (i, j) ∈ M, we add both vertices i and j to the
vertex cover. This set covers all edges of the graph, since M was chosen to be maximal. Hence, the cost of the
vertex cover is

cost(VC) = 2 · Σ_{(i,j)∈E} x_{ij} = 2 · |M|.
Since M is a matching, every vertex is an endpoint of at most one edge of M, so {x_{ij}} is a feasible solution to
the dual LP. Hence, by Weak Duality, |M| = Σ_{(i,j)∈E} x_{ij} is a lower bound on the optimal value of the vertex
cover LP, and therefore on the size of a minimum vertex cover. Thus, we obtain a 2-approximation algorithm
for the vertex cover problem.
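A minimal Python sketch of this 2-approximation, assuming the graph is given simply as a list of edges (all names are illustrative):

# Sketch of the maximal-matching 2-approximation for vertex cover.
def vertex_cover_2approx(edges):
    matching = []           # greedily constructed maximal matching M
    matched = set()         # endpoints of edges already placed in M
    for (i, j) in edges:
        if i not in matched and j not in matched:
            matching.append((i, j))
            matched.update((i, j))
    # Both endpoints of every matching edge form the cover; its size is 2|M|.
    return matched, matching

cover, M = vertex_cover_2approx([(1, 2), (2, 3), (3, 4)])
print(sorted(cover), M)     # [1, 2, 3, 4] and [(1, 2), (3, 4)]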
Algorithm “RandomFAS”
Input: G = (V,A).
Output: Ordering of vertices in V .
• Choose a pivot i ∈ V uniformly at random.
• For every other vertex j: if (j, i) ∈ A, place j in the left part L; if (i, j) ∈ A, place j in the right part R.
• Recurse on L and R, and output the ordering (RandomFAS(L), i, RandomFAS(R)).
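A minimal Python sketch of RandomFAS, assuming the tournament is given as a vertex list and a set of arcs (u, v) meaning u → v (names are illustrative):

import random

# Sketch of RandomFAS: quicksort-style random pivoting on a tournament.
def random_fas(vertices, arcs):
    """Order `vertices`; `arcs` is a set of pairs (u, v) meaning the arc u -> v."""
    if not vertices:
        return []
    i = random.choice(list(vertices))                           # random pivot
    left = [j for j in vertices if j != i and (j, i) in arcs]   # j -> i goes left
    right = [j for j in vertices if j != i and (i, j) in arcs]  # i -> j goes right
    return random_fas(left, arcs) + [i] + random_fas(right, arcs)

# The arcs pointing backwards in the returned ordering form the feedback arc set.
arcs = {(1, 2), (2, 3), (3, 1)}                                 # a directed triangle
order = random_fas([1, 2, 3], arcs)
pos = {v: k for k, v in enumerate(order)}
print(order, [(u, v) for (u, v) in arcs if pos[u] > pos[v]])    # exactly one backward arc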
We now prove that RandomFAS is a 3-approximation algorithm for the problem of Feedback Arc Set
on Tournaments.
Let T be the set of directed triangles of G, i.e. triples t = {i, j, k} with i → j → k → i, and note that T ⊂ C,
where C denotes the set of all directed cycles. Let A_t be the event that some vertex of t = {i, j, k} is chosen as
the pivot while all of i, j, k are still in the same recursive call, i.e. before triangle t is broken by the algorithm.
Let p_t be the probability of event A_t. Then we have:

E[cost of the solution] = E[number of backward edges] = Σ_{t∈T} p_t.
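For reference, the LP relaxation of minimum feedback arc set and its dual, used below, are the standard cycle-covering LP and its packing dual (with a variable x_e for every arc e ∈ A and a dual variable y_c for every directed cycle c ∈ C):

min Σ_{e∈A} x_e
subject to Σ_{e∈c} x_e ≥ 1   ∀ c ∈ C
           x_e ≥ 0           ∀ e ∈ A

and its dual is

max Σ_{c∈C} y_c
subject to Σ_{c: e∈c} y_c ≤ 1   ∀ e ∈ A
           y_c ≥ 0              ∀ c ∈ C.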
In the next lemma, we will show that setting y_c = 0 for all c ∉ T, and y_t = p_t/3 for all t ∈ T, gives a
dual-feasible solution.
Lemma 6  Setting y_c = y_t = p_t/3 if c = t ∈ T, and y_c = 0 otherwise, is dual feasible.
Proof  Let B_e be the event that edge e is backwards in the output ordering, and let B_e ∧ A_t be the event
that edge e is backwards due to A_t. For example, suppose vertices i, j, k form a triangle t that is not yet
broken when vertex k is chosen as the pivot; then we say that edge e = (i, j) is backwards due to event A_t.
Given event A_t, each of the three edges of t is equally likely to be the backwards edge, so Pr[B_e ∧ A_t] = p_t/3.
Moreover, for a fixed edge e the events B_e ∧ A_t over the triangles t containing e are disjoint, since an edge
becomes backwards due to at most one triangle. Hence, for every edge e,

Σ_{c: e∈c} y_c = Σ_{t∈T: e∈t} p_t/3 = Σ_{t∈T: e∈t} Pr[B_e ∧ A_t] ≤ Pr[B_e] ≤ 1,

so every dual constraint is satisfied.
Since we have shown that the values {y_c} = {p_t/3} are dual-feasible, it follows from Weak Duality that
Σ_{t∈T} p_t/3 = Σ_{c∈C} y_c is a lower bound on the size of a minimum feedback arc set. Since the expected cost
of the solution is Σ_{t∈T} p_t = 3 · Σ_{t∈T} p_t/3 ≤ 3 · OPT, we obtain a 3-approximation algorithm (in expectation)
for the problem of Feedback Arc Set on Tournaments.