
Chapter 7

Approximation algorithms II
By Sariel Har-Peled, December 7, 2009.¹ Version: 0.3

7.1 Max Exact 3SAT


We remind the reader that an instance of 3SAT is a boolean formula, for example F = (x1 + x2 + x3)(x4 + x1 + x2), and the decision problem is to decide if the formula has a satisfying assignment. Interestingly, we can turn this into an optimization problem.

Problem: Max 3SAT


Instance: A collection of clauses: C1, . . . , Cm.
Output: An assignment to x1, . . . , xn that satisfies the maximum number of clauses.

Clearly, since 3SAT is NP-Complete, it follows that Max 3SAT is NP-Hard. In particular, the formula F becomes the following set of two clauses:

x1 + x2 + x3 and x4 + x1 + x2.

Note that Max 3SAT is a maximization problem.

Definition 7.1.1 Algorithm Alg for a maximization problem achieves an approximation factor α if, for all inputs G, we have

Alg(G) / Opt(G) ≥ α.

In the following, we present a randomized algorithm – it is allowed to consult a source of random numbers in making decisions. A key property we need about random variables is the linearity of expectation property, which is easy to derive directly from the definition of expectation.
¹ This work is licensed under the Creative Commons Attribution-Noncommercial 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

Definition 7.1.2 (Linearity of expectation.) Given two random variables X, Y (not necessarily independent), we have that E[X + Y] = E[X] + E[Y].

Theorem 7.1.3 One can achieve (in expectation) (7/8)-approximation to Max 3SAT in polynomial
time. Namely, if the instance has m clauses, then the generated assignment satisfies (7/8)m clauses
in expectation.
Proof: Let x1, . . . , xn be the n variables used in the given instance. The algorithm assigns each of the variables x1, . . . , xn, independently, the value 0 or 1 with equal probability.
Let Yi be the indicator variable which is 1 if (and only if) the ith clause is satisfied by the generated random assignment, and 0 otherwise, for i = 1, . . . , m. Formally, we have

Yi = 1 if Ci is satisfied by the generated assignment, and Yi = 0 otherwise.
Now, the number of clauses satisfied by the given assignment is Y = Y1 + · · · + Ym. We claim that E[Y] = (7/8)m, where m is the number of clauses in the input. Indeed, we have

E[Y] = E[Y1 + · · · + Ym] = E[Y1] + · · · + E[Ym]

by linearity of expectation. Now, what is the probability that Yi = 0? This is the probability that all three literals appearing in the clause Ci evaluate to FALSE. Since the three literals are instances of three distinct variables, these three events are independent, and as such the probability of this happening is

Pr[Yi = 0] = (1/2) · (1/2) · (1/2) = 1/8.
(Another way to see this is to observe that since Ci has exactly three literals, there is only one possible assignment to the three variables appearing in it such that the clause evaluates to FALSE. Now, there are eight (8) possible assignments to this clause, and thus the probability of picking a FALSE assignment is 1/8.) Thus,

Pr[Yi = 1] = 1 − Pr[Yi = 0] = 7/8,

and

E[Yi] = Pr[Yi = 0] · 0 + Pr[Yi = 1] · 1 = 7/8.

Namely, E[# of satisfied clauses] = E[Y] = E[Y1] + · · · + E[Ym] = (7/8)m. Since the optimal solution satisfies at most m clauses, the claim follows.
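
For concreteness, here is a minimal Python sketch of the algorithm. The encoding of a clause as a triple of nonzero integers, with a negative integer standing for a negated literal, is an assumption made for illustration (in the style of the DIMACS format), not something fixed by the algorithm.

    import random

    def random_assignment(num_vars, clauses):
        # Assign each variable the value 0 or 1 independently and with equal
        # probability, and count how many clauses the assignment satisfies.
        # A clause is a triple of nonzero integers (assumed, DIMACS-style
        # encoding): the literal v stands for x_v, and -v for its negation.
        assignment = {v: random.randint(0, 1) for v in range(1, num_vars + 1)}
        satisfied = sum(
            1 for clause in clauses
            if any(assignment[abs(lit)] == (1 if lit > 0 else 0)
                   for lit in clause))
        return assignment, satisfied

    # For example, on the two clauses of the formula F:
    assignment, satisfied = random_assignment(4, [(1, 2, 3), (4, 1, 2)])
    print(satisfied)

Averaging the value of satisfied over many runs converges to (7/8)m, i.e. 7/4 for this two-clause instance, matching the expectation computed in the proof.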
Curiously, Theorem 7.1.3 is stronger than what one would usually be able to get for an approximation algorithm. Here, the approximation quality is independent of how well the optimal solution does (the optimal solution can satisfy at most m clauses, and as such we get a (7/8)-approximation). Curiouser and curiouser², the algorithm does not even look at the input when generating the random assignment.
² “Curiouser and curiouser!” cried Alice (she was so much surprised, that for the moment she quite forgot how to speak good English). – Alice in Wonderland, Lewis Carroll.

7.2 Approximation Algorithms for Set Cover
7.2.1 Guarding an Art Gallery
You are given the floor plan of an art gallery, which is a two-dimensional simple polygon. You would like to place guards that together see the whole polygon. A guard is a point, which can see all points around it, but it cannot see through walls. Formally, a point p can see a point q if the segment pq is contained inside the polygon. The visibility polygon at p is the region inside the polygon that p can see. [Figure: an example input polygon, with the visibility polygon of a point p shaded in yellow.] We would like to find the minimal number of guards needed to guard the given art gallery; that is, all the points in the art gallery should be visible from at least one guard we place.
The art gallery problem is a set cover problem: we have a ground set (the polygon), a family of sets (the set of all visibility polygons), and the target is to find a minimal number of sets covering the whole polygon.
It is known that finding the minimum number of guards needed is NP-Hard, and no approximation algorithm is currently known. It is also known that a polygon with n corners can be guarded using ⌊n/3⌋ + 1 guards. Note that this problem is harder than the classical set cover problem, because the number of subsets is infinite and the underlying base set is also infinite.
An interesting open problem is to find a polynomial time approximation algorithm such that, given P, it computes a set of guards with #guards ≤ √n · kopt, where n is the number of vertices of the input polygon P, and kopt is the number of guards used by the optimal solution.

7.2.2 Set Cover


The optimization version of Set Cover is the following:

Problem: Set Cover

Instance: (S, F)
    S: a set of n elements
    F: a family of subsets of S, such that the union ⋃X∈F X = S.
Output: A subfamily X ⊆ F such that X contains as few sets as possible, and X covers S.

Note that Set Cover is a minimization problem, which is also NP-Hard.

Example 7.2.1 Consider the set S = {1, 2, 3, 4, 5} and the following family of subsets

F = { {1, 2, 3}, {2, 5}, {1, 4}, {4, 5} }.

Clearly, the smallest cover of S is Xopt = { {1, 2, 3}, {4, 5} }.

3
The greedy algorithm GreedySetCover for this problem is depicted below. The algorithm always picks the set in the family that covers the largest number of elements not covered yet. Clearly, the algorithm is polynomial in the input size. Indeed, we are given a set S of n elements, and m subsets. As such, the input size is at least Ω(m + n), and the algorithm takes time polynomial in m and n. Let Xopt = {V1, . . . , Vk} be the optimal solution.

    GreedySetCover(S, F)
        X ← ∅; T ← S
        while T is not empty do
            U ← the set in F covering the largest number of elements of T
            X ← X ∪ {U}
            T ← T \ U
        return X
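
The following is a direct Python transcription of the pseudocode (a sketch; representing the family as a list of Python sets, and breaking ties arbitrarily, are implementation choices not fixed by the algorithm):

    def greedy_set_cover(S, F):
        # GreedySetCover: repeatedly pick the set in F covering the largest
        # number of elements not covered yet, until all of S is covered.
        X = []        # the cover constructed so far
        T = set(S)    # the elements not covered yet
        while T:
            U = max(F, key=lambda Y: len(Y & T))  # ties broken arbitrarily
            X.append(U)
            T -= U
        return X

    # On Example 7.2.1 the greedy algorithm happens to find the optimal cover:
    S = {1, 2, 3, 4, 5}
    F = [{1, 2, 3}, {2, 5}, {1, 4}, {4, 5}]
    print(greedy_set_cover(S, F))  # [{1, 2, 3}, {4, 5}]

In general, of course, the greedy cover need not be optimal; Theorem 7.2.4 below bounds how far from optimal it can be.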
Let Ti denote the set of elements not yet covered at the beginning of the ith iteration of GreedySetCover, where T1 = S. Let Ui be the set added to the cover in the ith iteration, and let αi = |Ui ∩ Ti| be the number of new elements covered in the ith iteration.

Claim 7.2.2 We have α1 ≥ α2 ≥ . . . ≥ αk ≥ . . . ≥ αm .


Proof: If αi < αi+1, then Ui+1 covers more new elements in the (i + 1)th iteration than Ui does in the ith. But Ti+1 ⊆ Ti, so Ui+1 covers at least αi+1 > αi elements of Ti as well, and the algorithm would have used Ui+1 instead of Ui in the ith iteration. This contradicts the greediness of GreedySetCover in choosing the set covering the largest number of elements not covered yet.

Claim 7.2.3 We have αi ≥ |Ti|/k. Namely, |Ti+1| ≤ (1 − 1/k)|Ti|.


Proof: Consider the optimal solution. It is made of k sets and it covers S, and as such it covers Ti ⊆ S. This implies that one of the subsets in the optimal solution covers at least a 1/k fraction of the elements of Ti. The greedy algorithm picks the set that covers the largest number of elements of Ti, and thus Ui covers at least |Ti|/k elements; namely, αi ≥ |Ti|/k.
As for the second claim, we have that |Ti+1| = |Ti| − αi ≤ (1 − 1/k)|Ti|.

Theorem 7.2.4 The algorithm GreedySetCover generates a cover of S using at most O(k log n)
sets of F , where k is the size of the cover in the optimal solution.
Proof: We have that |Ti| ≤ (1 − 1/k)|Ti−1| ≤ (1 − 1/k)^i |T0| = (1 − 1/k)^i n. In particular, for M = ⌈2k ln n⌉ we have

|TM| ≤ (1 − 1/k)^M · n ≤ exp(−M/k) · n = exp(−⌈2k ln n⌉/k) · n ≤ 1/n < 1,

since 1 − x ≤ e^(−x) for x ≥ 0. Namely, |TM| = 0. As such, the algorithm terminates before reaching the Mth iteration, and it outputs a cover of size O(k log n), as claimed.
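
The last chain of inequalities is easy to check numerically; the following small sketch (the values of n and k are arbitrary choices for illustration) computes M = ⌈2k ln n⌉ and verifies that the bound (1 − 1/k)^M · n indeed drops below 1:

    import math

    # For a few (arbitrary) values of n and k, compute M = ceil(2k ln n) and
    # the resulting upper bound (1 - 1/k)^M * n on |T_M| from the proof.
    for n, k in [(10, 2), (1000, 5), (10**6, 50)]:
        M = math.ceil(2 * k * math.log(n))
        bound = (1 - 1 / k) ** M * n
        print(n, k, M, bound)  # the bound is always below 1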

7.3 Biographical Notes


Max 3SAT remains hard even in the “easier” MAX 2SAT variant, where every clause has 2 variables. MAX 2SAT is known to be NP-Hard; it is approximable within a factor of 1.0741 [FG95], and it is not approximable within 1.0476 [Has97]. Notice that the fact that MAX 2SAT is hard to approximate is surprising if one considers the fact that 2SAT can be solved in polynomial time.

4
Bibliography
[FG95] U. Feige and M. Goemans. Approximating the value of two prover proof systems, with applications to MAX 2SAT and MAX DICUT. In ISTCS ’95: Proceedings of the 3rd Israel Symposium on the Theory of Computing Systems, page 182, Washington, DC, USA, 1995. IEEE Computer Society.

[Has97] J. Håstad. Some optimal inapproximability results. In Proc. 29th Annu. ACM Sympos. Theory Comput., pages 1–10, 1997.
