
Approximation Algorithms and Hardness of Approximation March 26, 2013

Lecture 11
Lecturer: Alantha Newman Scribes: Farah Charab

1 Linear Programming Duality


In this lecture, we will show how to provide lower (upper) bounds on the value of the optimal solution of
a given LP corresponding to a minimization (maximization) problem. Such methods are advantageous
for several reasons:
• It is often expensive to compute the exact solution for an LP.
• Recall that the first step in designing an approximation algorithm for a minimization (maximization) problem is to efficiently compute a “tight” lower (upper) bound on the optimal value.
• A feasible solution itself is not a lower (upper) bound on the optimal value of the minimization (maximization) problem.
To demonstrate the idea of duality, we consider the following LP:

min 7x + 3y
subject to x+y ≥ 2
3x + y ≥ 4
x, y ≥ 0
Suppose that an oracle tells you that x = y = 1 is an optimal solution, with objective value 10. For a
minimization LP, any feasible solution provides an upper bound on the optimal value, but the
crucial question is how to provide a good lower bound on the optimal value. Specifically, given an optimal
solution, can we prove that it is indeed optimal? How do we prove that there is no feasible
solution with a smaller value? Consider the following: multiply the second constraint by 2, and then
add it to the first constraint. We get the following:

(x + y) + 2(3x + y) = 7x + 3y ≥ 2 + 2(4) = 10.

The above approach shows that 10 is the minimum objective value of the considered LP. The method
of LP duality generalizes this approach. The basic idea is to find multipliers for the constraints
that yield the largest lower bound (smallest upper bound) for the minimization (maximization) LP.
How do we do this? In this lecture, we will show how to use the “dual” of the LP to obtain
the desired bounds. In order to present the LP-Duality theorem formally, we will first show how to
transform a given “primal” LP to its “dual”.
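To make this concrete, here is a small sketch in plain Python (no LP solver): the multipliers (1, 2) found above are exactly a feasible solution of the dual LP, and matching primal and dual objective values certify optimality. The function names and the explicitly written-out dual below are illustrative, not part of the lecture.

```python
# A sketch: weak duality as an optimality certificate for the example LP
#   min 7x + 3y   s.t.  x + y >= 2,  3x + y >= 4,  x, y >= 0.
# The dual written out below (max 2u + 4v s.t. u + 3v <= 7, u + v <= 3) is an
# assumption here, obtained by the primal-to-dual recipe.

def primal_value(x, y):
    """Objective at a point (x, y); raises if (x, y) is not primal-feasible."""
    assert x + y >= 2 and 3 * x + y >= 4 and x >= 0 and y >= 0
    return 7 * x + 3 * y

def dual_value(u, v):
    """Objective at multipliers (u, v); raises if (u, v) is not dual-feasible."""
    assert u + 3 * v <= 7 and u + v <= 3 and u >= 0 and v >= 0
    return 2 * u + 4 * v

upper = primal_value(1, 1)   # any feasible point gives an upper bound: 10
lower = dual_value(1, 2)     # the multipliers (1, 2) give a lower bound: 10
assert upper == lower == 10  # matching bounds prove that (1, 1) is optimal
```

Note that the certificate (1, 2) lets anyone verify optimality of (1, 1) with two feasibility checks, without re-solving the LP.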

1.1 Primal(P) to Dual(D)


From now on, we will consider the case in which the primal LP is a minimization problem. The case in
which the primal LP is a maximization LP is analogous. The primal LP can be written in the following
equivalent forms:
min c^T x                →    min Σ_{i=1}^{n} c_i x_i
subject to: Ax ≥ b            subject to: Σ_{i=1}^{n} A_{ji} x_i ≥ b_j   ∀ j ≤ m
x ≥ 0                         x_i ≥ 0   ∀ i ≤ n

The Dual LP is obtained as follows:
max b^T y                →    max Σ_{j=1}^{m} b_j y_j
subject to A^T y ≤ c          subject to Σ_{j=1}^{m} A_{ji} y_j ≤ c_i   ∀ i ≤ n
y ≥ 0                         y_j ≥ 0   ∀ j ≤ m

1. For each constraint in P, we assign a multiplier variable yj .

2. For each variable in P, we have a corresponding constraint in D that upper bounds the value of the
coefficient of that variable. (That is, after multiplying each constraint in P with its corresponding
multiplier and then summing all of these constraints, the coefficient corresponding to each variable
in P should be upper bounded by its corresponding coefficient in the primal objective function.)
3. The dual objective function maximizes the lower bound on P by maximizing bT y.
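Step-by-step, the transformation just swaps the roles of the data (c, A, b). A minimal sketch (representing an LP by its data triple is an assumption made here for illustration):

```python
# Illustrative sketch: the dual of the minimization LP  min c^T x, Ax >= b,
# x >= 0  is the maximization LP  max b^T y, A^T y <= c, y >= 0.  An LP is
# represented here simply by its data triple (objective, matrix, right-hand side).

def dual_of(c, A, b):
    """Return the data (b, A^T, c) of the dual of  min c^T x s.t. Ax >= b, x >= 0."""
    A_T = [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]
    return b, A_T, c

# The example LP from Section 1:  min 7x + 3y,  x + y >= 2,  3x + y >= 4.
c, A, b = [7, 3], [[1, 1], [3, 1]], [2, 4]
obj, M, rhs = dual_of(c, A, b)
# Dual: max 2u + 4v subject to u + 3v <= 7, u + v <= 3, u, v >= 0.
assert obj == [2, 4] and M == [[1, 3], [1, 1]] and rhs == [7, 3]
```

Applying the transformation twice returns the original data, reflecting the fact that the dual of the dual is the primal.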

It might seem that there is no benefit in calculating the solution to the dual LP. That is, if obtaining
a lower bound by solving the dual LP requires us to solve an LP, why not simply solve the primal LP?
However, note that once we have solved the dual LP, the multipliers obtained (i.e. the yj values) can
be used to prove that an optimal solution to the primal LP is actually optimal. Given these multipliers,
the process of checking for optimality could be much faster than (re)solving the primal LP.
Next, we prove the “Weak Duality Theorem” which states that any feasible solution to the dual LP
is a lower bound on the optimal value of the corresponding primal LP.

Theorem 1 (Weak Duality) If x is a primal-feasible solution and y is a dual-feasible solution, then

c^T x ≥ b^T y.
Proof

b^T y = Σ_j b_j y_j ≤ Σ_j (Σ_i A_{ji} x_i) y_j = Σ_i (Σ_j A_{ji} y_j) x_i ≤ Σ_i c_i x_i = c^T x.

The first inequality uses primal feasibility (Ax ≥ b) together with y ≥ 0, and the second uses dual feasibility (A^T y ≤ c) together with x ≥ 0.

For completeness, we will state the “Strong Duality Theorem”. However, for the purpose of obtaining
lower bounds for the approximation algorithms that we will see in the rest of this lecture, Weak Duality
suffices.

Theorem 2 (Strong Duality) If x is an optimal primal-feasible solution and y is an optimal dual-feasible solution, then c^T x = b^T y, i.e. the primal and the dual have the same optimal objective value.

2 Dual-Fitting
One way to use the dual in the design of approximation algorithms is the method of dual fitting. In this
method, we find an integer solution x for the primal LP and show that:

c^T x ≤ α · b^T y,    (1)

for some dual-feasible solution y. Note that by Weak Duality, bT y is a lower bound on the value of an
optimal solution. Thus, such a solution x would result in an α-approximation algorithm. Suppose we
can obtain x and y without solving an LP, and we can show that x and y are feasible for the primal
and dual LPs, respectively, and that (1) holds. Then we never have to actually solve an LP! We will
demonstrate the method of Dual Fitting using the following examples.

2.1 Unweighted Set Cover


We are given a universe of n elements U = {e_1, e_2, . . . , e_n} and a collection S = {s_1, s_2, . . . , s_k}
of subsets of U. The goal is to find the minimum number of sets from S required to cover all the
elements of the universe. Let us formulate this problem as an LP, and then obtain the dual.
Primal (Covering) LP:

min Σ_{s∈S} x_s
subject to Σ_{s: e∈s} x_s ≥ 1   ∀ e ∈ U
x_s ≥ 0
Dual (Packing) LP:

max Σ_{e∈U} y_e
subject to Σ_{e: e∈s} y_e ≤ 1   ∀ s ∈ S
y_e ≥ 0
Recall the greedy algorithm for Set Cover from Lecture 2:

Algorithm “Most Bang for the Buck”


Input: A collection of subsets S = {s1 , s2 . . . sk } of the set U = {e1 , e2 . . . en }.
Output: A collection of subsets that cover U.

• C ← ∅
• while C ≠ U do

– Find a set s ∈ S that maximizes |s ∩ (U \ C)|
– Set p(e) = 1/|s ∩ (U \ C)| for each e ∈ s \ C
– C ← C ∪ s

• end while
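A direct implementation of this greedy rule, including the price bookkeeping p(e), might look as follows (a sketch in plain Python; exact fractions are used so the prices can be summed without rounding error, and the sets are assumed to cover the universe):

```python
from fractions import Fraction

def greedy_set_cover(universe, sets):
    """Greedy "most bang for the buck" set cover.

    Returns the chosen sets and the price p(e) assigned to each element;
    by construction, the number of chosen sets equals the sum of all prices.
    Assumes the union of `sets` covers `universe`.
    """
    uncovered = set(universe)
    chosen, price = [], {}
    while uncovered:
        # Pick a set covering the most still-uncovered elements.
        s = max(sets, key=lambda t: len(set(t) & uncovered))
        newly = set(s) & uncovered
        for e in newly:
            price[e] = Fraction(1, len(newly))  # p(e) = 1/|s ∩ (U \ C)|
        chosen.append(s)
        uncovered -= newly
    return chosen, price

U = [1, 2, 3, 4]
S = [[1, 2, 3], [3, 4], [4]]
cover, p = greedy_set_cover(U, S)
# Sum of prices equals the number of sets used (here 2: {1,2,3} then {3,4}).
assert sum(p.values()) == len(cover) == 2
```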

Note that the set cover produced by the algorithm is a feasible integral solution to the covering LP,
and its cost (the number of sets) is

P = Σ_{e_i ∈ U} p(e_i).
This has the same form seen in the objective function of the dual LP, although this particular solution
may not be dual-feasible. However, we will show that if we divide this solution by a suitable factor α,
then the resulting “shrunk” solution, with value P/α, is feasible for the dual LP. Thus, we will have
shown that our algorithm is an α-approximation for the unweighted set cover problem.
Lemma 3 The vector y = (y_1, y_2, . . . , y_{|U|}), where y_e = p(e)/H_n for all e ∈ U, is a dual-feasible
solution. (Recall that H_n is the nth harmonic number.)

Proof To show that {y_e} is a feasible solution, we need to show that for each set s ∈ S, the following
holds:

Σ_{e_i ∈ s} y_{e_i} = Σ_{e_i ∈ s} p(e_i)/H_n ≤ 1.    (2)
Consider a set s ∈ S consisting of k elements. Let e_k, e_{k−1}, . . . , e_1 be the elements of s listed
in the order in which the algorithm covered them, i.e., e_1 was covered last and e_k was covered first.

Claim 4 p(e_j) ≤ 1/j.

Why? Because just before element e_j was covered, s contained at least j uncovered elements (namely
e_j, . . . , e_1), i.e., |s ∩ (U \ C)| ≥ j. Since the algorithm chooses a set covering the largest number of
uncovered elements, the set it picks covers at least j new elements, and hence p(e_j) ≤ 1/j.
So now, we can upper bound the total price of the elements in the set s as follows:

Σ_{e_i ∈ s} p(e_i) ≤ Σ_{i=1}^{k} 1/i = H_k ≤ H_n.

Since Σ_{e_i ∈ s} p(e_i) ≤ H_n for every s ∈ S, we obtain (2), and the lemma follows.
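The bound is tight in the following sense: if the k elements of a single set receive the worst-case prices 1/k, 1/(k−1), . . . , 1, they sum to exactly H_k, so the shrunk values p(e)/H_n fill the packing constraint completely. A quick check with exact arithmetic (the price assignment is an assumption chosen for illustration):

```python
from fractions import Fraction

def H(n):
    """n-th harmonic number, computed exactly."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

# Hypothetical worst-case prices inside one set of size k: 1/k, ..., 1/2, 1.
k = 5
prices = [Fraction(1, j) for j in range(k, 0, -1)]
# The shrunk values p(e)/H_k sum to exactly 1, matching inequality (2) tightly.
assert sum(p / H(k) for p in prices) == 1
```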

In summary, we have the following theorem:


Theorem 5 The approximation guarantee of the greedy set cover algorithm is Hn .
Proof

Σ_{e_i ∈ U} p(e_i) = Σ_{e_i ∈ U} H_n · y_{e_i} = H_n · Σ_{e_i ∈ U} y_{e_i} ≤ H_n · OPT.

Since we have shown that the values y_{e_i} = p(e_i)/H_n are dual-feasible, it follows from Weak Duality
that Σ_{e_i ∈ U} y_{e_i} = Σ_{e_i ∈ U} p(e_i)/H_n is a lower bound on the cost of an optimal set cover.

2.2 Vertex Cover


The problem of Vertex Cover is as follows. Given a graph G = (V, E), the goal is to find a minimum-cardinality
subset of vertices such that every edge has at least one endpoint in the subset. The LP for this problem
and its dual are formulated below:

Primal (Vertex Cover) LP:

min Σ_{v∈V} x_v
subject to x_u + x_v ≥ 1   ∀ (u, v) ∈ E
x_v ≥ 0
Dual (Matching) LP:

max Σ_{e∈E} y_e
subject to Σ_{e: v∈e} y_e ≤ 1   ∀ v ∈ V
y_e ≥ 0
Let M be a maximal matching of the graph G, and let x_ij be defined as follows:

x_ij = 1 if e = (i, j) ∈ M, and x_ij = 0 otherwise.

Now we construct a vertex cover as follows. For all (i, j) ∈ M, we add both vertices i and j to the vertex
cover. This set covers all edges of the graph, since M was chosen to be maximal. Hence, the cost of the
vertex cover is:

cost(VC) = 2 · Σ_{(i,j)∈E} x_ij = 2 · |M|.

Since M is a matching, {x_ij} is a feasible solution for the dual (matching) LP. Hence, by Weak Duality,
|M| is a lower bound on the optimal value of the vertex cover LP. Thus, we obtain a 2-approximation
algorithm for the vertex cover problem.
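The construction is simple enough to sketch directly (plain Python; a single greedy pass over the edges yields a maximal, though not necessarily maximum, matching):

```python
def vertex_cover_2approx(vertices, edges):
    """2-approximate vertex cover: take both endpoints of a maximal matching."""
    matched = set()
    matching = []
    # Greedily build a maximal matching: take any edge with both endpoints free.
    for (u, v) in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matched, matching

# Path graph 1-2-3-4: matching {(1,2),(3,4)} gives cover {1,2,3,4};
# the optimum cover here is {2,3}, within the factor-2 guarantee.
cover, M = vertex_cover_2approx([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
assert all(u in cover or v in cover for (u, v) in [(1, 2), (2, 3), (3, 4)])
assert len(cover) == 2 * len(M)
```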

2.3 Feedback Arc Set on Tournaments


Let G = (V, A) be a tournament, i.e., a complete graph in which each edge is oriented in one direction.
The goal of the Feedback Arc Set Problem is to find a minimum-cardinality subset of arcs S ⊂ A
such that G′ = (V, A \ S) is acyclic. The primal LP and its dual are shown below. Note that C is the
set of directed cycles in G.

Primal (Covering) LP:

min Σ_{(i,j)∈A} x_ij
subject to: Σ_{(i,j)∈c} x_ij ≥ 1,   ∀ c ∈ C
x_ij ≥ 0
Dual (Packing) LP:

max Σ_{c∈C} y_c
subject to: Σ_{c: (i,j)∈c} y_c ≤ 1,   ∀ (i, j) ∈ A
y_c ≥ 0
We consider the following algorithm for the Feedback Arc Set Problem on Tournaments. Note that
this problem is equivalent to finding an ordering of the vertices that minimizes the number of backward
edges, i.e. the set S comprises the backward edges in the ordering.

Algorithm “RandomFAS”
Input: G = (V, A).
Output: Ordering of vertices in V .

• Choose a pivot i ∈ V uniformly at random.
• For each j ∈ V \ {i}: if (j, i) ∈ A, put j in L; if (i, j) ∈ A, put j in R.

• return (RandomFAS(L, A_L), i, RandomFAS(R, A_R))
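The recursion can be sketched as follows (plain Python; the helper names are illustrative, and the pivot is placed between the two recursive calls):

```python
import random

def random_fas_order(vertices, arcs):
    """Order vertices by recursive random pivoting, quicksort-style.

    `arcs` is a set of directed pairs (u, v) meaning u -> v; the feedback
    arc set is the set of arcs going backwards in the returned ordering.
    """
    if not vertices:
        return []
    pivot = random.choice(vertices)
    left = [j for j in vertices if j != pivot and (j, pivot) in arcs]
    right = [j for j in vertices if j != pivot and (pivot, j) in arcs]
    return random_fas_order(left, arcs) + [pivot] + random_fas_order(right, arcs)

def backward_arcs(order, arcs):
    """Arcs (u, v) whose head v appears before the tail u in the ordering."""
    pos = {v: k for k, v in enumerate(order)}
    return [(u, v) for (u, v) in arcs if pos[v] < pos[u]]

# Directed triangle 0 -> 1 -> 2 -> 0: whichever pivot is chosen, the
# resulting ordering has exactly one backward arc.
arcs = {(0, 1), (1, 2), (2, 0)}
order = random_fas_order([0, 1, 2], arcs)
assert len(backward_arcs(order, arcs)) == 1
```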

We now prove that RandomFAS is a 3-approximation algorithm for the problem of Feedback Arc Set
on Tournaments.
Let T ⊂ C be the set of directed triangles in G, i.e., cycles of the form i → j → k → i. Let A_t be the
event that some vertex of t = {i, j, k} is chosen as a pivot while triangle t is still unbroken, i.e., while
i, j, k all occur in the same recursive call. Let p_t be the probability of event A_t. Then, we have:

E[cost of the solution] = E[number of backward edges] = Σ_{t∈T} p_t.

In the next lemma, we will show that setting y_c = 0 for all c ∉ T, and y_t = p_t/3 for all t ∈ T, gives a
dual-feasible solution.

Lemma 6 Setting y_c = p_t/3 if c = t ∈ T, and y_c = 0 otherwise, is dual feasible.

Proof Let B_e be the event that edge e is backwards in the output ordering. Let B_e ∧ A_t be the event
that edge e is backwards due to A_t. For example, suppose vertices i, j, k form a triangle t which is still
unbroken when vertex k is chosen as the pivot. Then we say that edge e = (i, j) is backwards due to
event A_t. Since, given event A_t, each edge of t is equally likely to be the backwards edge, we have:

Pr(B_e ∧ A_t) = Pr(B_e | A_t) · Pr(A_t) = (1/3) · p_t = p_t/3.
Note that for any t ≠ t′ ∈ T such that e ∈ t and e ∈ t′, the events B_e ∧ A_t and B_e ∧ A_{t′} are
disjoint. Hence, Σ_{t: e∈t} Pr(B_e ∧ A_t) ≤ 1. This implies that, for all e ∈ A:

Σ_{c: e∈c} y_c = Σ_{t: e∈t} p_t/3 ≤ 1.

We can therefore conclude that {y_c} is a dual-feasible solution.

Thus, we obtain a 3-approximation algorithm for the problem of Feedback Arc Set on Tournaments.

Theorem 7 The approximation guarantee of RandomFAS is 3.


Proof

E[cost of solution] = Σ_{t∈T} p_t = Σ_{c∈C} 3y_c = 3 · Σ_{c∈C} y_c ≤ 3 · OPT.

Since we have shown that the values y_t = p_t/3 (and y_c = 0 for c ∉ T) are dual-feasible, it follows from
Weak Duality that Σ_{t∈T} p_t/3 = Σ_{c∈C} y_c is a lower bound on the size of a minimum feedback arc set.
