Improved Approximation Algorithms For Maximum Cut and Satisfiability Problems Using Semidefinite Programming
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481, or permissions@acm.org.
Abstract
We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of 1/2 for MAX CUT and 3/4 for MAX 2SAT. Slight extensions of our analysis lead to a .79607-approximation algorithm for the maximum directed cut problem (MAX DICUT) and a .758-approximation algorithm for MAX SAT, where the best previously known approximation algorithms had performance guarantees of 1/4 and 3/4 respectively. Our algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.
Introduction
Given an undirected graph $G = (V, E)$ and non-negative weights $w_{ij} = w_{ji}$ on the edges $(i, j) \in E$, the maximum cut problem (MAX CUT) is that of finding the set of vertices $S$ that maximizes the weight of the edges in the cut $(S, \bar S)$; that is, the weight of the edges with one endpoint in $S$ and the other in $\bar S$. For simplicity, we usually set $w_{ij} = 0$ for $(i, j) \notin E$ and denote the weight of a cut $(S, \bar S)$ by $w(S, \bar S) = \sum_{i \in S, j \notin S} w_{ij}$. The MAX CUT problem is one of Karp's original NP-complete problems [37], and has long been known to be NP-complete even if the problem is unweighted; that is, if $w_{ij} = 1$ for all $(i, j) \in E$ [19]. The MAX CUT problem is solvable in polynomial time for some special classes of graphs (e.g. if the graph is planar [52, 27]). Besides its theoretical importance, the MAX CUT problem has applications in circuit layout design and statistical physics (Barahona et al. [4]). For a comprehensive survey of the MAX CUT problem, the reader is referred to Poljak and Tuza [62].
Because it is unlikely that there exist efficient algorithms for NP-hard maximization problems, a typical approach to solving such a problem is to find a $\rho$-approximation algorithm; that is, a

A preliminary version has appeared in the Proceedings of the 26th Annual ACM Symposium on Theory of Computing, Montreal, pages 422-431, 1994.
† Address: Dept. of Mathematics, Room 2-382, M.I.T., Cambridge, MA 02139. Email: goemans@math.mit.edu. Research supported in part by NSF contract 9302476-CCR and DARPA contract N00014-92-J-1799.
‡ Address: IBM T.J. Watson Research Center, Room 33-219, P.O. Box 218, Yorktown Heights, NY, 10598. Email: dpw@watson.ibm.com. Research supported by an NSF Postdoctoral Fellowship. This research was conducted while the author was visiting MIT.
polynomial-time algorithm that delivers a solution of value at least $\rho$ times the optimal value. The constant $\rho$ is sometimes called the performance guarantee of the algorithm. We will also use the term ``$\rho$-approximation algorithm'' for randomized polynomial-time algorithms that deliver solutions whose expected value is at least $\rho$ times the optimal. In 1976, Sahni and Gonzalez [66] presented a 1/2-approximation algorithm for the MAX CUT problem. Their algorithm iterates through the vertices and decides whether or not to assign vertex $i$ to $S$ based on which placement maximizes the weight of the cut of vertices 1 to $i$. This algorithm is essentially equivalent to the randomized algorithm that flips an unbiased coin for each vertex to decide which vertices are assigned to the set $S$. Since 1976, a number of researchers have presented approximation algorithms for the unweighted MAX CUT problem with performance guarantees of $\frac12 + \frac{1}{2m}$ [69], $\frac12 + \frac{n-1}{4m}$ [61], $\frac12 + \frac{1}{2n}$ [30], and $\frac12 + \frac{1}{2\Delta}$ [33] (where $n = |V|$, $m = |E|$, and $\Delta$ denotes the maximum degree), but no progress was made in improving the constant in the performance guarantee beyond that of Sahni and Gonzalez's straightforward algorithm.
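The Sahni-Gonzalez greedy placement is easy to state in code. The following is a minimal Python sketch (our own illustration; the dictionary-of-edges encoding and the function name are assumptions of the sketch, not the paper's):

```python
def greedy_max_cut(n, w):
    """Greedy 1/2-approximation in the spirit of Sahni and Gonzalez:
    scan the vertices in order and place each one on the side that
    maximizes the weight of the cut among the vertices placed so far.
    w: dict {(i, j): weight} with i < j and non-negative weights."""
    S, T = set(), set()

    def weight_to(i, side):
        return sum(v for (a, b), v in w.items()
                   if (a == i and b in side) or (b == i and a in side))

    for i in range(n):
        # placing i opposite its heavier-connected side cuts at least
        # half of i's weight to the earlier vertices
        if weight_to(i, T) >= weight_to(i, S):
            S.add(i)
        else:
            T.add(i)
    cut = sum(v for (a, b), v in w.items() if (a in S) != (b in S))
    return S, cut
```

On the unweighted 4-cycle, for instance, the greedy placement recovers the entire edge weight as a cut.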
We present a simple, randomized $(\alpha - \varepsilon)$-approximation algorithm for the maximum cut problem, where

$$\alpha = \min_{0 < \theta \le \pi} \frac{2}{\pi}\,\frac{\theta}{1 - \cos\theta} > 0.87856,$$

and $\varepsilon$ is any positive scalar. The algorithm represents the first substantial progress in approximating the MAX CUT problem in nearly twenty years. The algorithm for MAX CUT also leads directly to a randomized $(\alpha - \varepsilon)$-approximation algorithm for the maximum 2-satisfiability problem (MAX 2SAT). The best previously known algorithm for this problem has a performance guarantee of 3/4 and is due to Yannakakis [71]. A somewhat simpler 3/4-approximation algorithm was given in Goemans and Williamson [22]. The improved 2SAT algorithm leads to a .7584-approximation algorithm for the overall MAX SAT problem, fractionally better than Yannakakis's 3/4-approximation algorithm for MAX SAT. Finally, a slight extension of our analysis yields a $(\beta - \varepsilon)$-approximation algorithm for the maximum directed cut problem (MAX DICUT), where

$$\beta = \min_{0 < \theta < \arccos(-1/3)} \frac{2}{\pi}\,\frac{2\pi - 3\theta}{1 + 3\cos\theta} > 0.79607.$$

The best previously known algorithm for MAX DICUT has a performance guarantee of 1/4 [55].
Our algorithm depends on a means of randomly rounding a solution to a nonlinear relaxation of the MAX CUT problem. This relaxation can either be seen as a semidefinite program or as an eigenvalue minimization problem. To our knowledge, this is the first time that semidefinite programs have been used in the design and analysis of approximation algorithms. The relaxation plays a crucial role in allowing us to obtain a better performance guarantee: previous approximation algorithms compared the value of the solution obtained to the total sum of the weights $\sum_{i<j} w_{ij}$. This sum can be arbitrarily close to twice the value of the maximum cut.
A semidefinite program is the problem of optimizing a linear function of a symmetric matrix subject to linear equality constraints and the constraint that the matrix be positive semidefinite. Semidefinite programming is a special case of convex programming and also of the so-called linear programming over cones, or cone-LP, since the set of positive semidefinite matrices constitutes a convex cone. To some extent, semidefinite programming is very similar to linear programming; see Alizadeh [1] for a comparison. It inherits the very elegant duality theory of cone-LP (see Wolkowicz [70] and the exposition by Alizadeh [1]). The simplex method can be generalized to semidefinite programs (Pataki [57]). Given any $\varepsilon > 0$, semidefinite programs can be solved within an additive error of $\varepsilon$ in polynomial time ($\varepsilon$ is part of the input, so the running time dependence on $\varepsilon$ is polynomial in $\log \frac{1}{\varepsilon}$). This can be done through the ellipsoid algorithm (Grötschel et al. [26]) and other polynomial-time algorithms for convex programming (Vaidya [67]) as well as
interior-point methods (Nesterov and Nemirovskii [50, 51] and Alizadeh [1]). To terminate in polynomial time, these algorithms implicitly assume some requirement on the feasible space or on the size of the optimum solution; for details see Grötschel et al. [26] and Section 3.3 of Alizadeh [1]. Since the work of Nesterov and Nemirovskii, and Alizadeh, there has been much development in the design and analysis of interior-point methods for semidefinite programming; for several references available at the time of writing of this paper, see the survey paper by Vandenberghe and Boyd [68].
Semidefinite programming has many interesting applications in a variety of areas including control theory, nonlinear programming, geometry and combinatorial optimization; see [51, 9, 68, 1], the references therein and the references below. In combinatorial optimization, the importance of semidefinite programming is that it leads to tighter relaxations than the classical linear programming relaxations for many graph and combinatorial problems. A beautiful application of semidefinite programming is the work of Lovász [43] on the Shannon capacity of a graph. In conjunction with the polynomial-time solvability of semidefinite programs, this leads to the only known polynomial-time algorithm for finding the largest stable set in a perfect graph (Grötschel et al. [25]). More recently, there has been increased interest in semidefinite programming from a combinatorial point of view [46, 47, 1, 58, 17, 45]. This started with the work of Lovász and Schrijver [46, 47], who developed a machinery to define tighter and tighter relaxations of any integer program based on quadratic and semidefinite programming. Their papers demonstrated the wide applicability and the power of semidefinite programming for combinatorial optimization problems. Our use of semidefinite programming relaxations was inspired by these papers, and by the paper of Alizadeh [1].
For MAX CUT, the semidefinite programming relaxation we consider is equivalent to an eigenvalue minimization problem proposed by Delorme and Poljak [13, 12]. An eigenvalue minimization problem consists of minimizing a decreasing sum of the eigenvalues $\lambda_i$ of a matrix subject to equality constraints on the matrix; that is, minimizing $\sum_i m_i \lambda_i$, where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ and $m_1 \ge m_2 \ge \cdots \ge m_n \ge 0$. The equivalence of the semidefinite program we consider and the eigenvalue bound of Delorme and Poljak was established by Poljak and Rendl [58]. Building on work by Overton and Womersley [54, 53], Alizadeh [1] has shown that eigenvalue minimization problems can in general be formulated as semidefinite programs. This is potentially quite useful, since there is an abundant literature on eigenvalue bounds for combinatorial optimization problems; see the survey paper by Mohar and Poljak [49].
As shown by Poljak and Rendl [60, 59] and Delorme and Poljak [14], the eigenvalue bound provides a very good bound on the maximum cut in practice. Delorme and Poljak [13, 12] study the worst-case ratio between the maximum cut and their eigenvalue bound. The worst instance they are aware of is the 5-cycle, for which the ratio is $\frac{32}{25 + 5\sqrt{5}} = 0.88445\ldots$, but they were unable to prove a bound better than 0.5 in the worst case. Our result implies a worst-case bound of $\alpha$, very close to the bound for the 5-cycle.
The above discussion on the worst-case behavior indicates that straightforward modifications of our technique will not lead to significant improvements in the MAX CUT result. Furthermore, MAX CUT, MAX 2SAT, and MAX DICUT are MAX SNP-hard [55], and so it is known that there exists a constant $c < 1$ such that a $c$-approximation algorithm for any of these problems would imply that P = NP [2]. Bellare, Goldreich, and Sudan [6] have shown that $c$ is as small as 83/84 for MAX CUT and 95/96 for MAX 2SAT. Since bidirected instances of MAX DICUT are equivalent to instances of MAX CUT, the bound for MAX CUT also applies to MAX DICUT.
Since the appearance of an abstract of this paper, Feige and Goemans [16] have extended our technique to yield a .931-approximation algorithm for MAX 2SAT and a .859-approximation algorithm for MAX DICUT. By using semidefinite programming and similar rounding ideas, Karger, Motwani, and Sudan [36] have been able to show how to color a $k$-colorable graph with $\tilde{O}(n^{1 - \frac{3}{k+1}})$ colors in polynomial time. Frieze and Jerrum [18] have used the technique to devise approximation algorithms for the maximum $k$-way cut problem that improve on the previously best known $1 - 1/k$ performance guarantee. Chor and Sudan [10] apply ideas from this paper to the ``betweenness'' problem. Thus it seems likely that the techniques in this paper will continue to prove useful in designing approximation algorithms.
We expect that in practice the performance of our algorithms will be much better than the worst-case bounds. We have performed some preliminary computational experiments with the MAX CUT algorithm which show that on a number of different types of random graphs the algorithm is usually within .96 of the optimal solution.
A preliminary version of this paper [21] presented a method to obtain deterministic versions
of our approximation algorithm with the same performance guarantees. However, the method
given had a subtle error, as was pointed out to us by several researchers. Mahajan and Ramesh
[48] document the error and propose their own derandomization scheme for our algorithms.
The paper is structured as follows. We present the randomized algorithm for MAX CUT in Section 1, and its analysis in Section 2. We elaborate on our semidefinite programming bound and its relationship with other work on the MAX CUT problem in Section 3. The quality of the relaxation is investigated in Section 4, and computational results are presented in Section 5. In Section 6, we show how to extend the algorithm to an algorithm for MAX 2SAT, MAX SAT, MAX DICUT, and other problems. We conclude with a few remarks and open problems in Section 7.
1 The Randomized Algorithm for MAX CUT

The MAX CUT problem is equivalent to the following integer quadratic program:
$$\text{(Q)}\quad \text{Maximize } \frac12 \sum_{i<j} w_{ij}(1 - y_i y_j) \quad \text{subject to: } y_i \in \{-1, 1\} \ \forall i \in V.$$
To see this, note that the set $S = \{i \mid y_i = 1\}$ corresponds to a cut of weight $w(S, \bar S) = \frac12 \sum_{i<j} w_{ij}(1 - y_i y_j)$.
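The equivalence between the quadratic objective and the cut weight is easy to check numerically; below is a small Python check (our own illustration, not part of the paper):

```python
def quadratic_objective(w, y):
    """(1/2) * sum_{i<j} w_ij * (1 - y_i * y_j): each cut edge has
    y_i * y_j = -1 and contributes w_ij; each uncut edge contributes 0.
    w: dict {(i, j): weight}; y: dict {vertex: +1 or -1}."""
    return 0.5 * sum(wij * (1 - y[i] * y[j]) for (i, j), wij in w.items())

def cut_weight(w, y):
    """Directly computed weight of the cut induced by the signs y."""
    return sum(wij for (i, j), wij in w.items() if y[i] != y[j])
```

For the unit-weight triangle with one vertex on its own side, both expressions evaluate to the two cut edges.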
Since solving this integer quadratic program is NP-complete, we consider relaxations of $(Q)$. Relaxations of $(Q)$ are obtained by relaxing some of the constraints of $(Q)$, and extending the objective function to the larger space; thus all possible solutions of $(Q)$ are feasible for the relaxation, and the optimal value of the relaxation is an upper bound on the optimal value of $(Q)$. We can interpret $(Q)$ as restricting $y_i$ to be a 1-dimensional vector of unit norm. Some very interesting relaxations can be defined by allowing $y_i$ to be a multi-dimensional vector $v_i$ of unit Euclidean norm. Since the linear space spanned by the vectors $v_i$ has dimension at most $n$, we can assume that these vectors belong to $\mathbf{R}^n$ (or $\mathbf{R}^m$ for some $m \le n$), or more precisely to the $n$-dimensional unit sphere $S_n$ (or $S_m$ for $m \le n$). To ensure that the resulting optimization problem is indeed a relaxation, we need to define the objective function in such a way that it reduces to $\frac12 \sum_{i<j} w_{ij}(1 - y_i y_j)$ in the case of vectors lying in a 1-dimensional space. There are several natural ways of guaranteeing this property. For example, one can replace $(1 - y_i y_j)$ by $(1 - v_i \cdot v_j)$, where $v_i \cdot v_j$ represents the inner product (or dot product) of $v_i$ and $v_j$. The resulting relaxation is denoted by $(P)$:
$$\text{(P)}\quad \text{Maximize } \frac12 \sum_{i<j} w_{ij}(1 - v_i \cdot v_j) \quad \text{subject to: } v_i \in S_n \ \forall i \in V.$$
We will show in Section 3 that we can solve this relaxation using semidefinite programming. We can now present our simple randomized algorithm for the MAX CUT problem.

1. Solve $(P)$, obtaining an optimal set of vectors $v_i$.
2. Let $r$ be a vector uniformly distributed on the unit sphere $S_n$.
3. Set $S = \{i \mid v_i \cdot r \ge 0\}$.

In other words, we choose a random hyperplane through the origin (with $r$ as its normal) and partition the vertices into those vectors that lie ``above'' the plane (i.e. have a non-negative inner product with $r$) and those that lie ``below'' it (i.e. have a negative inner product with $r$). The motivation for this randomized step comes from the fact that $(P)$ is independent of the coordinate system: applying any orthonormal transformation to a set of vectors results in a solution with the same objective value.
Let $W$ denote the value of the cut produced in this way, and $E[W]$ its expectation. We will show in Theorem 2.1 in Section 2 that, given any set of vectors $v_i \in S_n$, the expected weight of the cut defined by a random hyperplane is
$$E[W] = \frac{1}{\pi} \sum_{i<j} w_{ij} \arccos(v_i \cdot v_j).$$
We will also show in Theorem 2.3 that
$$E[W] \ge \alpha \cdot \frac12 \sum_{i<j} w_{ij} (1 - v_i \cdot v_j),$$
where $\alpha = \min_{0 < \theta \le \pi} \frac{2}{\pi}\,\frac{\theta}{1 - \cos\theta} > .878$. If $Z_{MC}$ is the optimal value of the maximum cut and $Z_P$ is the optimal value of the relaxation $(P)$, then since the expected weight of the cut generated by the algorithm is $E[W] \ge \alpha Z_P \ge \alpha Z_{MC}$, the algorithm has a performance guarantee of $\alpha$ for the MAX CUT problem.
We must argue that the algorithm can be implemented in polynomial time. We assume that the weights are integral. In Section 3 we show that the program $(P)$ is equivalent to a semidefinite program. Then we will show that by using an algorithm for semidefinite programming we can obtain, for any $\varepsilon > 0$, a set of vectors $v_i$ of value greater than $Z_P - \varepsilon$ in time polynomial in the input size and $\log\frac{1}{\varepsilon}$. On these approximately optimal vectors, the randomized algorithm will produce a cut of expected value greater than or equal to $\alpha(Z_P - \varepsilon) \ge (\alpha - \varepsilon)Z_{MC}$. The random vector $r$ uniformly distributed on the unit sphere $S_n$ can be generated by drawing $n$ values $x_1, x_2, \ldots, x_n$ independently from the standard normal distribution, and normalizing the vector obtained (see Knuth [38, p. 130]); for our purposes, there is no need to normalize the resulting vector $x$. The standard normal distribution can be simulated using the uniform distribution between 0 and 1 (see Knuth [38, p. 117]). The algorithm is thus a randomized $(\alpha - \varepsilon)$-approximation algorithm for MAX CUT.
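The rounding step is only a few lines of code once the vectors $v_i$ are in hand. The sketch below (our own illustration, which assumes the vectors are given rather than produced by an SDP solver) draws $r$ with i.i.d. standard normal coordinates; as noted above, there is no need to normalize $r$, since only the sign of $v_i \cdot r$ matters:

```python
import random

def round_by_random_hyperplane(vectors, w):
    """Random-hyperplane rounding: vectors is a list of unit vectors v_i
    (one per vertex), w a dict {(i, j): weight}. Returns the side S and
    the weight of the induced cut."""
    d = len(vectors[0])
    # i.i.d. Gaussian coordinates give a uniformly random direction
    r = [random.gauss(0.0, 1.0) for _ in range(d)]
    S = {i for i, v in enumerate(vectors)
         if sum(vk * rk for vk, rk in zip(v, r)) >= 0}
    cut = sum(wij for (i, j), wij in w.items() if (i in S) != (j in S))
    return S, cut
```

Two antipodal vectors are separated by every hyperplane through the origin, so a single edge between them is cut with probability 1, in agreement with $\arccos(-1)/\pi = 1$.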
2 Analysis of the Algorithm
In this section, we analyze the performance of the algorithm. We first analyze the general case and then consider the case in which the maximum cut is large and the generalization to negative edge weights. We conclude this section with a new formulation for the MAX CUT problem.
Let $\{v_1, \ldots, v_n\}$ be any vectors belonging to $S_n$, and let $E[W]$ be the expected value of the cut $w(S, \bar S)$ produced by the randomized algorithm given in the previous section. We start by characterizing the expected value of the cut.
Theorem 2.1
$$E[W] = \frac{1}{\pi} \sum_{i<j} w_{ij} \arccos(v_i \cdot v_j).$$

Given a vector $r$ drawn uniformly from the unit sphere $S_n$, we know by the linearity of expectation that
$$E[W] = \sum_{i<j} w_{ij} \Pr[\mathrm{sgn}(v_i \cdot r) \ne \mathrm{sgn}(v_j \cdot r)],$$
where $\mathrm{sgn}(x) = 1$ if $x \ge 0$, and $-1$ otherwise. Theorem 2.1 is thus implied by the following lemma.

Lemma 2.2
$$\Pr[\mathrm{sgn}(v_i \cdot r) \ne \mathrm{sgn}(v_j \cdot r)] = \frac{1}{\pi} \arccos(v_i \cdot v_j).$$
Proof: A restatement of the lemma is that the probability the random hyperplane separates the two vectors is directly proportional to the angle between the two vectors; that is, it is proportional to the angle $\theta = \arccos(v_i \cdot v_j)$. By symmetry, $\Pr[\mathrm{sgn}(v_i \cdot r) \ne \mathrm{sgn}(v_j \cdot r)] = 2\Pr[v_i \cdot r \ge 0,\ v_j \cdot r < 0]$. The set $\{r : v_i \cdot r \ge 0,\ v_j \cdot r < 0\}$ corresponds to the intersection of two half-spaces whose dihedral angle is precisely $\theta$; its intersection with the sphere is a spherical digon of angle $\theta$ and, by symmetry of the sphere, thus has measure equal to $\frac{\theta}{2\pi}$ times the measure of the full sphere. In other words, $\Pr[v_i \cdot r \ge 0,\ v_j \cdot r < 0] = \frac{\theta}{2\pi}$, and the lemma follows.
Our main theorem is the following. As we have argued above, this theorem applied to vectors $v_i$ of value greater than $Z_P - \varepsilon$ implies that the algorithm has a performance guarantee of $\alpha - \varepsilon$. Recall that we defined
$$\alpha = \min_{0 < \theta \le \pi} \frac{2}{\pi}\,\frac{\theta}{1 - \cos\theta}.$$
Theorem 2.3
$$E[W] \ge \alpha \cdot \frac12 \sum_{i<j} w_{ij} (1 - v_i \cdot v_j).$$

By using Theorem 2.1 and the nonnegativity of $w_{ij}$, the result follows from the following lemma applied to $y = v_i \cdot v_j$.

Lemma 2.4 For $-1 \le y \le 1$, $\frac{1}{\pi}\arccos(y) \ge \alpha \cdot \frac12(1 - y)$.

Proof: The expression for $\alpha$ follows straightforwardly by using the change of variables $\cos\theta = y$. See Figure 1, part (i).
The quality of the performance guarantee is established by the following lemma.
Lemma 2.5 $\alpha > .87856$.
Proof: Using simple calculus, one can see that $\alpha$ achieves its minimum value for $\theta = 2.331122\ldots$, the non-zero root of $\cos\theta + \theta\sin\theta = 1$. To formally prove the bound of 0.87856, we first observe that $\frac{2}{\pi}\,\frac{\theta}{1-\cos\theta} \ge 1$ for $0 < \theta \le \pi/2$. The concavity of $f(\theta) = 1 - \cos\theta$ in the range $\theta \ge \pi/2$ implies that, for any $\theta_0$, we have $f(\theta) \le f(\theta_0) + (\theta - \theta_0)f'(\theta_0)$, or $1 - \cos\theta \le 1 - \cos\theta_0 + (\theta - \theta_0)\sin\theta_0 = \theta\sin\theta_0 + (1 - \cos\theta_0 - \theta_0\sin\theta_0)$. Choosing $\theta_0 = 2.331122$, for which $1 - \cos\theta_0 - \theta_0\sin\theta_0 < 0$, we derive that $1 - \cos\theta < \theta\sin\theta_0$, implying that $\alpha > \frac{2}{\pi\sin\theta_0} > 0.87856$.
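The constant is also straightforward to evaluate numerically. A small sketch (our own check, not part of the paper) minimizes $\frac{2}{\pi}\,\theta/(1-\cos\theta)$ over a fine grid of $(0, \pi]$:

```python
import math

def ratio(theta):
    """The function whose minimum over (0, pi] defines alpha."""
    return (2.0 / math.pi) * theta / (1.0 - math.cos(theta))

# the minimum is attained near theta = 2.331122, the non-zero root of
# cos(theta) + theta * sin(theta) = 1; a grid search recovers it
alpha = min(ratio(k * math.pi / 100_000) for k in range(1, 100_001))
```

Since the function blows up as $\theta \to 0$ and at the endpoint $\theta = \pi$ equals $2/\pi \cdot \pi/2 = 1$, the grid minimum lands at the interior critical point, giving a value just above 0.87856.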
We have analyzed the expected value of the cut returned by the algorithm. For randomized algorithms, one can often also give high probability results. In this case, however, even computing the variance of the cut value analytically is difficult. Nevertheless, one could use the fact that $W$ is bounded (by $\sum_{i<j} w_{ij}$) to give lower bounds on the probability that $W$ is less than $(1 - \epsilon)E[W]$.
Figure 1: (i) Plot of $z = \arccos(y)/\pi$ as a function of $t = \frac12(1 - y)$. The ratio $z/t$ is thus the slope of the line between $(0, 0)$ and $(t, z)$. The minimum slope is precisely $\alpha = 0.742/0.844$ and corresponds to the dashed line. (ii) The plot also represents $h(t) = \arccos(1 - 2t)/\pi$ as a function of $t$. As $t$ approaches 1, $h(t)/t$ also approaches 1. (iii) The dashed line corresponds to $\tilde{h}$ between 0 and .844.
2.1 Analysis when the maximum cut is large

Let $W_{tot} = \sum_{i<j} w_{ij}$, let $h(t) = \arccos(1 - 2t)/\pi$, and let $A = \frac{1}{W_{tot}}\sum_{i<j} w_{ij}\,\frac{1 - v_i \cdot v_j}{2}$.

Theorem 2.6
$$E[W] \ge W_{tot}\, h(A) = \frac{h(A)}{A} \sum_{i<j} w_{ij}\,\frac{1 - v_i \cdot v_j}{2}.$$
The theorem implies a performance guarantee of $h(A)/A - \varepsilon$ when $A \ge \alpha$. As $A$ varies between $\alpha$ and 1, one easily verifies that $h(A)/A$ varies between $\alpha$ and 1 (see Figure 1, part (ii)).

Proof: Letting $\lambda_e = w_{ij}/W_{tot}$ and $x_e = \frac{1 - v_i \cdot v_j}{2}$ for $e = (i, j)$, we can rewrite $A$ as $A = \sum_e \lambda_e x_e$. By Theorem 2.1, $E[W] = W_{tot}\sum_e \lambda_e h(x_e)$; using the concavity of $\tilde{h}$ (see Figure 1, part (iii)), one obtains that
$$E[W] \ge W_{tot}\, h(A) = \frac{h(A)}{A}\sum_{i<j} w_{ij}\,\frac{1 - v_i \cdot v_j}{2},$$
proving the result.
2.2 Analysis for negative edge weights
So far, we have assumed that the weights are non-negative. In several practical problems, some edge weights are negative [4]. In this case the definition of the performance guarantee has to be modified since the optimum value could be positive or negative. We now give a corresponding generalization of Theorem 2.3 to arbitrary weights.
Theorem 2.7 Let $W_- = \sum_{i<j} w_{ij}^-$, where $x^- = \min(0, x)$. Then
$$E[W] - W_- \ge \alpha \left( \frac12 \sum_{i<j} w_{ij}(1 - v_i \cdot v_j) - W_- \right).$$

Proof: The quantity $E[W] - W_-$ can be written as
$$\sum_{i<j:\, w_{ij} > 0} w_{ij}\,\frac{\arccos(v_i \cdot v_j)}{\pi} + \sum_{i<j:\, w_{ij} < 0} |w_{ij}| \left(1 - \frac{\arccos(v_i \cdot v_j)}{\pi}\right).$$
Similarly, $\frac12\sum_{i<j} w_{ij}(1 - v_i \cdot v_j) - W_-$ is equal to
$$\sum_{i<j:\, w_{ij} > 0} w_{ij}\,\frac{1 - v_i \cdot v_j}{2} + \sum_{i<j:\, w_{ij} < 0} |w_{ij}|\,\frac{1 + v_i \cdot v_j}{2}.$$
The result therefore follows from Lemma 2.4 and the following variation of it.
Lemma 2.8 For $-1 \le z \le 1$, $1 - \frac{1}{\pi}\arccos(z) \ge \alpha \cdot \frac12(1 + z)$.

Proof: The lemma follows from Lemma 2.4 by using the change of variables $z = -y$ and noting that $\pi - \arccos(z) = \arccos(-z)$.
2.3 A new formulation of MAX CUT
An interesting consequence of our analysis is a new nonlinear formulation of the maximum cut problem. Consider the following nonlinear program:
$$\text{(R)}\quad \text{Maximize } \sum_{i<j} w_{ij}\,\frac{\arccos(v_i \cdot v_j)}{\pi} \quad \text{subject to: } v_i \in S_n \ \forall i \in V.$$
Let $Z_R$ denote the optimal value of this program.

Theorem 2.9 $Z_R = Z_{MC}$.

Proof: We first show that $Z_R \ge Z_{MC}$. This follows since $(R)$ is a relaxation of $(Q)$: the objective function of $(R)$ reduces to $\frac12\sum_{i<j} w_{ij}(1 - y_i y_j)$ in the case of vectors $v_i$ lying in a 1-dimensional space.

To see that $Z_R \le Z_{MC}$, let the vectors $v_i$ denote the optimal solution to $(R)$. From Theorem 2.1, we see that the randomized algorithm gives a cut whose expected value is exactly $Z_R$, implying that there must exist a cut of value at least $Z_R$.
$$\text{(SD)}\quad \text{Maximize } \frac12\sum_{i<j} w_{ij}(1 - y_{ij}) \quad \text{subject to: } y_{ii} = 1 \ \forall i \in V, \quad Y \text{ symmetric positive semidefinite},$$
where $Y = (y_{ij})$. The feasible solutions to $(SD)$ are often referred to as correlation matrices [24].
Strictly speaking, we cannot solve $(SD)$ to optimality in polynomial time; the optimal value $Z_P$ might in fact be irrational. However, using an algorithm for semidefinite programming, one can obtain, for any $\varepsilon > 0$, a solution of value greater than $Z_P - \varepsilon$ in time polynomial in the input size and $\log\frac{1}{\varepsilon}$. For example, Alizadeh's adaptation of Ye's interior-point algorithm to semidefinite programming [1] performs $O(\sqrt{n}(\log W_{tot} + \log\frac{1}{\varepsilon}))$ iterations. By exploiting the simple structure of the problem $(SD)$ as is indicated in [64] (see also [68, Section 7.4]), each iteration can be implemented in $O(n^3)$ time. Once an almost optimal solution to $(SD)$ is found, one can use an incomplete Cholesky decomposition to obtain vectors $v_1, \ldots, v_n \in S_m$ for some $m \le n$ such that $\frac12\sum_{i<j} w_{ij}(1 - v_i \cdot v_j) \ge Z_P - \varepsilon$.
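The decomposition step can be sketched in a few lines. The paper uses an incomplete Cholesky decomposition; the code below is a plain unpivoted Cholesky for the full-rank case (our own illustration — production code would pivot and truncate the factorization when the matrix is rank-deficient):

```python
import math

def cholesky_vectors(Y):
    """Given a symmetric PSD matrix Y with unit diagonal, return the rows
    v_i of a lower-triangular V with V V^T = Y, so that v_i . v_j = Y[i][j]
    and each v_i is a unit vector."""
    n = len(Y)
    V = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = Y[i][j] - sum(V[i][k] * V[j][k] for k in range(j))
            if i == j:
                V[i][j] = math.sqrt(max(s, 0.0))  # clamp tiny negatives
            elif V[j][j] > 1e-12:
                V[i][j] = s / V[j][j]
    return V
```

For the $2 \times 2$ correlation matrix with off-diagonal entry $1/2$, this returns $v_1 = (1, 0)$ and $v_2 = (1/2, \sqrt{3}/2)$, two unit vectors with inner product $1/2$.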
Among all optimum solutions to $(SD)$, one can show the existence of a solution of low rank. Grone, Pierce and Watkins [24] show that any extreme solution of $(SD)$ (i.e. one which cannot be expressed as the strict convex combination of other feasible solutions) has rank at most $\ell$ where $\frac{\ell(\ell+1)}{2} \le n$, i.e. $\ell \le \frac{\sqrt{8n+1}-1}{2} < \sqrt{2n}$. For related results, see [41, 11, 42, 40]. This means that there exists a primal optimum solution $Y$ to $(SD)$ of rank less than $\sqrt{2n}$, and that the optimum vectors $v_i$ of $(P)$ can be embedded in $\mathbf{R}^m$ with $m < \sqrt{2n}$. This result also follows from a more general statement about semidefinite programs due to Barvinok [5] and implicit in Pataki [56]: any extreme solution of a semidefinite program with $k$ linear equalities has rank at most $\ell$ where $\frac{\ell(\ell+1)}{2} \le k$.
3.2 The Semidefinite Dual
As mentioned in the introduction, there is an elegant duality theory for semidefinite programming. We now turn to discussing the dual of the program $(SD)$. It is typical to assume that the objective function of a semidefinite program is symmetric. For this purpose, we can rewrite the objective function of $(SD)$ as $\frac14\sum_{i=1}^n\sum_{j=1}^n w_{ij}(1 - y_{ij})$, or even as $\frac12 W_{tot} - \frac14\sum_i\sum_j w_{ij}y_{ij}$. In matrix form, the objective function can be conveniently written as $\frac12 W_{tot} - \frac14 \mathrm{Tr}(WY)$, where $W = (w_{ij})$ and $\mathrm{Tr}$ denotes the trace.
The dual of $(SD)$ has a very simple description:
$$\text{(D)}\quad Z_D = \frac12 W_{tot} + \frac14 \min \sum_i \gamma_i \quad \text{subject to: } W + \mathrm{diag}(\gamma) \text{ positive semidefinite},$$
where $\mathrm{diag}(\gamma)$ denotes the diagonal matrix whose $i$th diagonal entry is $\gamma_i$. The dual has a simple interpretation. Since $W + \mathrm{diag}(\gamma)$ is positive semidefinite, it can be expressed as $C^T C$; in other words, the weight $w_{ij}$ can be viewed as $c_i \cdot c_j$ for some vectors $c_i$, and $\gamma_i = c_i \cdot c_i = \|c_i\|^2$. The weight of any cut is thus $w(S, \bar S) = \left(\sum_{i \in S} c_i\right) \cdot \left(\sum_{j \notin S} c_j\right)$, which is never greater than
$$\left\| \sum_{i \in V} \frac{c_i}{2} \right\|^2 = \frac12 W_{tot} + \frac14 \sum_{i \in V} \gamma_i.$$
Showing weak duality between $(P)$ and $(D)$, namely that $Z_P \le Z_D$, is easy. Consider any primal feasible solution $Y$ and any dual vector $\gamma$. Since both $Y$ and $W + \mathrm{diag}(\gamma)$ are positive semidefinite, we derive that $\mathrm{Tr}((\mathrm{diag}(\gamma) + W)Y) \ge 0$ (see [39, p. 218, ex. 14]). But $\mathrm{Tr}((\mathrm{diag}(\gamma) + W)Y) = \mathrm{Tr}(\mathrm{diag}(\gamma)Y) + \mathrm{Tr}(WY) = \sum_i \gamma_i + \mathrm{Tr}(WY)$, implying that the difference of the dual objective function value and the primal objective function value is non-negative.
For semidefinite programs in their full generality, there is no guarantee that the primal optimum value is equal to the dual optimum value. Also, the maximum (resp. minimum) is in fact a supremum (resp. infimum) and there is no guarantee that the supremum (resp. infimum) is attained. These, however, are pathological cases. Our programs $(SD)$ and $(D)$ behave nicely; both programs attain their optimum values, and these values are equal (i.e. $Z_P = Z_D$). This can be shown in a variety of ways (see [58, 1, 5]).
Given that strong duality holds in our case, the argument showing weak duality implies that, for the optimum primal solution $Y$ and the optimum dual solution $\gamma$, we have $\mathrm{Tr}((\mathrm{diag}(\gamma) + W)Y) = 0$. Since both $\mathrm{diag}(\gamma) + W$ and $Y$ are positive semidefinite, we derive that $(\mathrm{diag}(\gamma) + W)Y = 0$ (see [39, p. 218, ex. 14]). This is the strong form of complementary slackness for semidefinite programs (see Alizadeh [1]); the component-wise product expressed by the trace is replaced by matrix multiplication. This implies, for example, that $Y$ and $\mathrm{diag}(\gamma) + W$ share a system of eigenvectors and that $\mathrm{rank}(Y) + \mathrm{rank}(\mathrm{diag}(\gamma) + W) \le n$.
3.3 The Eigenvalue Bound of Delorme and Poljak
The relaxation $(D)$ (and thus $(P)$) is also equivalent to an eigenvalue upper bound on the value of the maximum cut $Z_{MC}$ introduced by Delorme and Poljak [13, 12]. To describe the bound, we first introduce some notation. The Laplacian matrix $L = (l_{ij})$ is defined by $l_{ij} = -w_{ij}$ for $i \ne j$ and $l_{ii} = \sum_{k=1}^n w_{ik}$. The maximum eigenvalue of a matrix $A$ is denoted by $\lambda_{max}(A)$.
$\ldots = \frac{4 Z_{MC}}{n}$, proving the lemma.

A vector $u$ satisfying $\sum_{i=1}^n u_i = 0$ is called a correcting vector. Let $g(u) = \frac{n}{4}\,\lambda_{max}(L + \mathrm{diag}(u))$. The bound proposed by Delorme and Poljak [13] is to optimize $g(u)$ over all correcting vectors:
$$\text{(EIG)}\quad Z_{EIG} = \inf g(u) \quad \text{subject to: } \sum_{i=1}^n u_i = 0.$$
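Even the trivial correcting vector $u = 0$ is informative. The sketch below (power iteration in plain Python, our own illustration) evaluates $\frac{n}{4}\lambda_{max}(L)$ for the 5-cycle, for which by symmetry $u = 0$ is already a best correcting vector; against the 5-cycle's maximum cut of 4 this recovers the worst-case ratio $0.88445\ldots$ mentioned in the introduction:

```python
import math

def eigen_bound(L):
    """n/4 * lambda_max(L): the Delorme-Poljak bound g(u) with the
    correcting vector u = 0, via plain power iteration (L is positive
    semidefinite, so the iteration converges to the top eigenvalue)."""
    n = len(L)
    x = [1.0 + 0.1 * i for i in range(n)]  # generic starting vector
    for _ in range(500):
        y = [sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
    # Rayleigh quotient of the converged unit vector
    lam = sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))
    return 0.25 * n * lam

# Laplacian of the unweighted 5-cycle: degree 2 on the diagonal,
# -1 between cyclically adjacent vertices
L5 = [[2.0 if i == j else -1.0 if (i - j) % 5 in (1, 4) else 0.0
       for j in range(5)] for i in range(5)]
```

Here $\lambda_{max}(L) = 2 + 2\cos(\pi/5)$, so the bound is about 4.5225, and $4/4.5225\ldots = 32/(25 + 5\sqrt{5})$.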
As mentioned in the introduction, eigenvalue minimization problems can be formulated as semidefinite programs. For MAX CUT, the equivalence between $(SD)$ and $(EIG)$ was established by Poljak and Rendl [58].

For completeness, we derive the equivalence between $(EIG)$ and the dual $(D)$. For the optimum dual vector $\gamma$, the smallest eigenvalue of $\mathrm{diag}(\gamma) + W$ must be 0, since otherwise we could decrease all $\gamma_i$ by some $\varepsilon > 0$ and thus reduce the dual objective function. This can be rewritten as $\lambda_{max}(-W - \mathrm{diag}(\gamma)) = 0$. Define $\mu$ as $(\sum_i \gamma_i + 2W_{tot})/n$. By definition, $Z_D = n\mu/4$. Moreover, defining $u_i = -\gamma_i - \sum_j w_{ij} + \mu$, one easily verifies that $\sum_i u_i = 0$ and that $-W - \mathrm{diag}(\gamma) = L + \mathrm{diag}(u) - \mu I$, implying that $\lambda_{max}(L + \mathrm{diag}(u)) = \mu$. This shows that $Z_{EIG} \le Z_D$. The converse inequality follows by reversing the argument.
Figure 2: Graph on 11 vertices for which the ratio $E[W]/Z_P$ is less than .8796 in the unweighted case. The convex hull of the optimum vectors is depicted on the right; the circle represents the center of the sphere.
5 Computational Results
In practice, we expect that the algorithm will perform much better than the worst-case bound of $\alpha$. Poljak and Rendl [60, 59] (see also Delorme and Poljak [14]) report computational results showing that the bound $Z_{EIG}$ is typically less than 2-5% and, in the instances they tried, never worse than 8% away from $Z_{MC}$. We also performed our own computational experiments, in which the cuts computed by the algorithm were usually within 4% of the semidefinite bound $Z_P$, and never more than 9% away from the bound. To implement the algorithm, we used code supplied by R. Vanderbei [64] for a special class of semidefinite programs. We use Vanderbei's code to solve the semidefinite program, then we generate 50 random vectors $r$. We output the best of the 50 cuts induced. We applied our code to a small subset of the instances considered by Poljak and Rendl [60]. In particular, we considered several different types of random graphs, as well as complete geometric graphs defined by Traveling Salesman Problem (TSP) instances from the TSPLIB (see Reinelt [63]).
For four different types of random graphs, we ran 50 instances on graphs of 50 vertices, 20 on graphs of size 100, and 5 on graphs of size 200. In the Type A random graph, each edge $(i, j)$ is included with probability 1/2. In the Type B random graph, each edge is given a weight drawn uniformly from the interval $[-50, 50]$; the ratio of Theorem 2.7 is used in reporting nearness to the semidefinite bound. In the Type C random graph of size $n \ge 10$, an edge $(i, j)$ is included with probability $10/n$, leading to constant expected degree. Finally, in the Type D random graphs, an edge $(i, j)$ is included with probability .1 if $i \le n/2$ and $j > n/2$, and probability .05 otherwise, leading to a large cut between the vertices in $[1, \ldots, n/2]$ and those in $[n/2 + 1, \ldots, n]$.
We summarize the results of our experiments in Table 1 below. CPU Times are given in CPU
seconds on a Sun SPARCstation 1.
In the case of the TSP instances, we used Euclidean instances from the TSPLIB, and set
the edge weight wij to the Euclidean distance between the points i and j . We summarize our
Type of Graph Size Num Trials Ave Int Gap Ave CPU Time
50 50 .96988 36.28
Type A 100 20 .96783 323.08
200 5 .97209 4629.62
50 50 .97202 23.06
Type B 100 20 .97097 217.42
200 5 .97237 2989.00
50 50 .95746 23.53
Type C 100 20 .94214 306.84
200 5 .92362 2546.42
50 50 .95855 27.35
Type D 100 20 .93984 355.32
200 5 .93635 10709.42
Table 1: Summary of results of the algorithm on random instances. Ave Int Gap is the average
ratio of the value of the cut generated to the semidefinite bound, except for Type B graphs, where
it is the ratio of the value of the cut generated minus the negative edge weights to the semidefinite
bound minus the negative edge weights.
results in Table 2. In all 10 instances, we compute the optimal solution; for 5 instances, the
value of the cut produced is equal to Z_P, and for the others, we have been able to exploit
additional information from the dual semidefinite program to prove optimality (for the problems
dantzig42, gr48 and hk48, Poljak and Rendl [60] also show that our solution is optimal). For all
TSPLIB instances, the maximum cut value is within .995 of the semidefinite bound. Given these
computational results, it is tempting to speculate that a much stronger bound can be proven for
these Euclidean instances. However, the instance defined by a unit-length equilateral triangle
has a maximum cut value of 2, but Z_P = 9/4, for a ratio of 8/9 = 0.8889.
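The triangle example can be verified directly: three unit vectors at mutual angles of 2π/3 are feasible for the relaxation and give objective value 9/4, while the best cut of the triangle has value 2. (This quick check only exhibits a feasible solution of value 9/4; the text states that this is in fact the optimal value Z_P.)

```python
import math

# Three unit vectors in the plane at mutual angles of 120 degrees; these are
# feasible for the relaxation on the unit-weight triangle.
vectors = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
           for k in range(3)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Relaxation objective: sum over edges of (1 - v_i . v_j) / 2.
relax_value = sum((1 - dot(vectors[i], vectors[j])) / 2
                  for i in range(3) for j in range(i + 1, 3))

max_cut = 2          # the best cut of a triangle separates one vertex from two
ratio = max_cut / relax_value   # 8/9, as stated in the text
```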
Homer and Peinado [34] have implemented our algorithm on a CM-5, and have shown that it
produces optimal or very nearly optimal solutions to a number of MAX CUT instances derived
from via minimization problems. These instances were provided by Michael Jünger [35] and have
between 828 and 1366 vertices.
6 Generalizations
We can use the same technique as in Section 1 to approximate several other problems. In the
next section we describe a variation of MAX CUT and give an (α − ε)-approximation algorithm
for it. In Section 6.2 we give an (α − ε)-approximation algorithm for the MAX 2SAT problem,
and show that it leads to a slightly improved algorithm for MAX SAT. Finally, in Section 6.3, we
give a (β − ε)-approximation algorithm for the maximum directed cut problem (MAX DICUT),
where β > .79607. In all cases, we will show how to approximate more general integer quadratic
programs that can be used to model these problems.
6.1 MAX RES CUT
The MAX RES CUT problem is a variation of MAX CUT in which pairs of vertices are forced
to be on either the same side of the cut or on different sides of the cut. The extension of
Instance     Size    SD Val    Cut Val     Time
dantzig42      42     42638      42638    43.35
gr120         120   2156775    2156667   754.87
gr48           48    321815     320277    26.17
gr96           96    105470     105295   531.50
hk48           48    771712     771712    66.52
kroA100       100   5897392    5897392   420.83
kroB100       100   5763047    5763047   917.47
kroC100       100   5890760    5890760   398.78
kroD100       100   5463946    5463250   469.48
kroE100       100   5986675    5986591   375.68
Table 2: Summary of results of the algorithm on TSPLIB instances. SD Val is the value produced
by the semidefinite relaxation. Cut Val is the value of the best cut output by the algorithm.
the algorithm to the MAX RES CUT problem is trivial. We merely need to add the following
constraints to (P): v_i · v_j = 1 for (i, j) ∈ E+ and v_i · v_j = −1 for (i, j) ∈ E−, where E+ (resp.
E−) corresponds to the pairs of vertices forced to be on the same side (resp. different sides) of the
cut. Using the randomized algorithm of Section 1 and setting y_i = 1 if r · v_i ≥ 0 and y_i = −1
otherwise gives a feasible solution to MAX RES CUT, assuming that a feasible solution exists.
Indeed, it is easy to see that if v_i · v_j = 1, then the algorithm will produce a solution such that
y_i y_j = 1. If v_i · v_j = −1 then the only case in which the algorithm produces a solution such that
y_i y_j ≠ −1 is when v_i · r = v_j · r = 0, an event that happens with probability 0. The analysis of the
expected value of the cut is unchanged and, therefore, the resulting algorithm is a randomized
(α − ε)-approximation algorithm.
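The rounding step just described can be sketched as follows; `round_hyperplane` is an illustrative name, and the toy instance encodes one forced-together pair and one forced-apart pair:

```python
import random

def round_hyperplane(vectors, seed=None):
    """Round unit vectors to signs y_i in {-1, +1} by a random hyperplane.

    If v_i . v_j = 1 (i.e. v_i = v_j) the two vertices always land on the same
    side; if v_i . v_j = -1 (i.e. v_i = -v_j) they land on opposite sides with
    probability 1.  (Illustrative helper, not the paper's code.)
    """
    rng = random.Random(seed)
    dim = len(vectors[0])
    # A random direction; only its sign pattern matters, so a spherically
    # symmetric Gaussian vector suffices for r.
    r = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    return [1 if sum(ri * vi for ri, vi in zip(r, v)) >= 0 else -1
            for v in vectors]

# Toy instance: vertices 0 and 1 forced together (same vector),
# vertices 0 and 2 forced apart (opposite vectors).
v, neg_v = (1.0, 0.0), (-1.0, 0.0)
y = round_hyperplane([v, v, neg_v], seed=1)
```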
Another approach to the problem is to use a standard reduction of MAX RES CUT to MAX
CUT based on contracting edges and "switching" cuts (see e.g. [60]). This reduction introduces
negative edge weights and so we do not discuss it here, although Theorem 2.7 can be used to
show that our MAX CUT algorithm applied to a reduced instance has a performance guarantee
of (α − ε) for the original MAX RES CUT instance. In fact, a more general statement can be
made: any ρ-approximation algorithm (in the sense of Theorem 2.7) for MAX CUT instances
possibly having negative edge weights leads to a ρ-approximation algorithm for MAX RES CUT.
6.2 MAX 2SAT and MAX SAT
An instance of the maximum satisfiability problem (MAX SAT) is defined by a collection C
of boolean clauses, where each clause is a disjunction of literals drawn from a set of variables
{x_1, x_2, ..., x_n}. A literal is either a variable x or its negation x̄. The length l(C_j) of a clause
C_j is the number of distinct literals in the clause. In addition, for each clause C_j ∈ C there
is an associated non-negative weight w_j. An optimal solution to a MAX SAT instance is an
assignment of truth values to the variables x_1, ..., x_n that maximizes the sum of the weights
of the satisfied clauses. MAX 2SAT consists of MAX SAT instances in which each clause has
length at most two. MAX 2SAT is NP-complete [19]; the best approximation algorithm known
previously has a performance guarantee of 3/4 and is due to Yannakakis [71] (see also Goemans
and Williamson [22]). As with the MAX CUT problem, MAX 2SAT is known to be MAX SNP-
hard [55]; thus there exists some constant c < 1 such that the existence of a c-approximation
algorithm implies that P = NP [2]. Bellare, Goldreich, and Sudan [6] have shown that a
95/96-approximation algorithm for MAX 2SAT would imply P = NP. Haglin [28, 29] has
shown that any ρ-approximation algorithm for MAX RES CUT can be translated into a ρ-
approximation algorithm for MAX 2SAT, but we will show a direct algorithm here. Haglin's
observation together with the reduction from MAX RES CUT to MAX CUT mentioned in the
previous section shows that any ρ-approximation for MAX CUT with negative edge weights
translates into a ρ-approximation algorithm for MAX 2SAT.
6.2.1 MAX 2SAT
In order to model MAX 2SAT, we consider the integer quadratic program

    Maximize  Σ_{i<j} [a_ij (1 − y_i y_j) + b_ij (1 + y_i y_j)]

    (Q')  subject to:  y_i ∈ {−1, 1}   ∀ i ∈ V,

where the a_ij and b_ij are non-negative. The objective function of (Q') is thus a non-negative linear
form in 1 ± y_i y_j. To model MAX 2SAT using (Q'), we introduce a variable y_i in the quadratic
program for each boolean variable x_i in the 2SAT instance; we also introduce an additional
variable y_0. The value of y_0 will determine whether −1 or 1 will correspond to "true" in the MAX
2SAT instance. More precisely, x_i is true if y_i = y_0 and false otherwise. Given a boolean formula
C, we define its value v(C) to be 1 if the formula is true and 0 if the formula is false. Thus,
v(x_i) = (1 + y_0 y_i)/2 and v(x̄_i) = 1 − v(x_i) = (1 − y_0 y_i)/2. Observe that

    v(x_i ∨ x_j) = 1 − v(x̄_i) v(x̄_j)
                = 1 − ((1 − y_0 y_i)/2) ((1 − y_0 y_j)/2)
                = (1 + y_0 y_i)/4 + (1 + y_0 y_j)/4 + (1 − y_i y_j)/4.
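The identity above holds for every assignment of y_0, y_i, y_j ∈ {−1, 1}, which a brute-force check confirms (helper names are illustrative):

```python
from itertools import product

def clause_value(y0, yi, yj):
    """v(x_i or x_j) = 1 - v(not x_i) v(not x_j), with x_i true iff y_i = y0."""
    return 1 - ((1 - y0 * yi) / 2) * ((1 - y0 * yj) / 2)

def linear_form(y0, yi, yj):
    """The same value written as a non-negative combination of 1 +/- products."""
    return (1 + y0 * yi) / 4 + (1 + y0 * yj) / 4 + (1 - yi * yj) / 4

# Compare the two expressions over all eight sign assignments.
checks = [(clause_value(a, b, c), linear_form(a, b, c))
          for a, b, c in product([-1, 1], repeat=3)]
```

Every assignment yields the same value from both expressions, and that value is 0 or 1 as a clause value should be (0 exactly when both literals are false).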
The value of other clauses with 2 literals can be similarly expressed; for instance, if x_i is negated
one only needs to replace y_i by −y_i. Therefore, the value v(C) of any clause with at most two
literals per clause can be expressed in the form required in (Q'). As a result, the MAX 2SAT
problem can be modelled as

    Maximize  Σ_{C_j ∈ C} w_j v(C_j)

    (SAT)  subject to:  y_i ∈ {−1, 1}   ∀ i ∈ {0, 1, ..., n},

where the v(C_j) are non-negative linear combinations of 1 + y_i y_j and 1 − y_i y_j. The (SAT) program
is in the same form as (Q').
We relax (Q') to:

    Maximize  Σ_{i<j} [a_ij (1 − v_i · v_j) + b_ij (1 + v_i · v_j)]

    (P')  subject to:  v_i ∈ S_n   ∀ i ∈ V.

Let E[V] be the expected value of the solution produced by the randomized algorithm. By the
linearity of expectation,

    E[V] = 2 Σ_{i<j} a_ij Pr[sgn(v_i · r) ≠ sgn(v_j · r)] + 2 Σ_{i<j} b_ij Pr[sgn(v_i · r) = sgn(v_j · r)].
16
Using the analysis of the MAX CUT algorithm, we note that Pr[sgn(v_i · r) = sgn(v_j · r)] =
1 − (1/π) arccos(v_i · v_j), and thus the approximation ratio for the more general program follows from
Lemmas 2.4 and 2.8.
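The closed form 1 − (1/π) arccos(v_i · v_j) is easy to sanity-check by simulation. The sketch below is an illustrative helper, not the paper's code; it exploits the fact that only the two-dimensional span of v_i and v_j matters, so a planar Gaussian direction suffices for r:

```python
import math
import random

def same_side_prob(angle, trials=200_000, seed=0):
    """Monte Carlo estimate of Pr[sgn(v_i . r) = sgn(v_j . r)] for unit vectors
    at the given angle, with r a Gaussian random direction in their span."""
    rng = random.Random(seed)
    vi = (1.0, 0.0)
    vj = (math.cos(angle), math.sin(angle))
    hits = 0
    for _ in range(trials):
        r = (rng.gauss(0, 1), rng.gauss(0, 1))
        si = vi[0] * r[0] + vi[1] * r[1] >= 0
        sj = vj[0] * r[0] + vj[1] * r[1] >= 0
        hits += (si == sj)
    return hits / trials

# The closed form predicts 1 - angle/pi, e.g. 2/3 at angle pi/3.
```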
Hence we can show the following theorem, which implies that the algorithm is an (α − ε)-
approximation algorithm for (Q') and thus for MAX 2SAT.

Theorem 6.1

    E[V] ≥ α Σ_{i<j} [a_ij (1 − v_i · v_j) + b_ij (1 + v_i · v_j)].
    0 ≤ y_i ≤ 1      1 ≤ i ≤ n
    0 ≤ z_j ≤ 1      ∀ C_j ∈ C.

By associating y_i = 1 with x_i set true, y_i = 0 with x_i set false, z_j = 1 with clause C_j satisfied, and
z_j = 0 with clause C_j not satisfied, the program exactly corresponds to the MAX SAT problem.
We showed that for any feasible solution (y, z), if x_i is set to be true with probability y_i, then
the probability that clause j will be satisfied is at least (1 − (1 − 1/k)^k) z_j, for k = l(C_j). We then
considered choosing randomly between the following two algorithms: (1) set x_i true independently
with probability y_i; (2) set x_i true independently with probability 1/2. Given this combined
algorithm, the probability that a length-k clause is satisfied is at least (1/2)(1 − 2^{−k}) + (1/2)(1 − (1 − 1/k)^k) z_j.
This expression can be shown to be at least (3/4) z_j for all lengths k. Thus if an optimal solution to
the linear program is used, the algorithm results in a 3/4-approximation algorithm for MAX SAT,
since the expected value of the algorithm is at least (3/4) Σ_j w_j z_j.
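The claim that the mixed bound is at least (3/4) z_j for every clause length can be checked directly (the worst case is z_j = 1; the function name below is illustrative):

```python
def combined_bound(k):
    """Coefficient of z_j in the bound (1/2)(1 - 2**-k) + (1/2)(1 - (1 - 1/k)**k):
    the guaranteed satisfaction probability, per unit of z_j, for a length-k
    clause under the fifty-fifty mix of the two randomized assignments."""
    return 0.5 * (1 - 2.0 ** (-k)) + 0.5 * (1 - (1 - 1.0 / k) ** k)

# The bound equals 3/4 exactly at k = 1 and k = 2, and is strictly larger for
# k >= 3, approaching 1/2 + (1/2)(1 - 1/e) as k grows.
bounds = [combined_bound(k) for k in range(1, 51)]
```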
We formulate a slightly different relaxation of the MAX SAT problem. Let u(C) denote a
relaxed version of the expression v used in the previous section in which the products y_i y_j are
replaced by inner products of vectors v_i · v_j. Thus u(x_i) = (1 + v_i · v_0)/2, u(x̄_i) = (1 − v_i · v_0)/2,
and u(x_i ∨ x_j) = (1 + v_i · v_0)/4 + (1 + v_j · v_0)/4 + (1 − v_i · v_j)/4. We then consider the following
relaxation,

    Max  Σ_{C_j ∈ C} w_j z_j

    subject to:  Σ_{i ∈ I_j^+} u(x_i) + Σ_{i ∈ I_j^−} u(x̄_i) ≥ z_j   ∀ C_j ∈ C,

where I_j^+ (resp. I_j^−) indexes the unnegated (resp. negated) variables appearing in C_j.
If at least one of the literals in C_j is set true, then at least l(C_j) − 1 of the clauses in P_j will be
satisfied. Thus the following program is a relaxation of the MAX SAT problem:

    Max  Σ_{C_j ∈ C} w_j z_j

    subject to:
        Σ_{i ∈ I_j^+} u(x_i) + Σ_{i ∈ I_j^−} u(x̄_i) ≥ z_j   ∀ C_j ∈ C
        ...                                         ≥ (2 / l(C_j)) z_j,
so that the overall expectation of the algorithm will be at least

    Σ_{j: l(C_j)=1} w_j (.5 p_1 + (p_2 + p_3) z_j) + Σ_{j: l(C_j)=2} w_j (.75 p_1 + (.75 p_2 + p_3) z_j)
        + Σ_{j: l(C_j)≥3} w_j ((1 − 2^{−l(C_j)}) p_1 + ((1 − (1 − 1/l(C_j))^{l(C_j)}) p_2 + (2/l(C_j)) p_3) z_j).
Consider any term in the sum, say d_ijk Pr[sgn(v_i · r) = sgn(v_j · r) = sgn(v_k · r)]. The c_ijk terms
can be dealt with similarly by simply replacing v_i by −v_i. The performance guarantee follows
from the proof of the following two lemmas.

Lemma 6.2

    Pr[sgn(v_i · r) = sgn(v_j · r) = sgn(v_k · r)] = 1 − (1/(2π)) (arccos(v_i · v_j) + arccos(v_i · v_k) + arccos(v_j · v_k)).
Lemma 6.3 For any v_i, v_j, v_k ∈ S_n,

    1 − (1/(2π)) (arccos(v_i · v_j) + arccos(v_i · v_k) + arccos(v_j · v_k)) ≥ (β/4) [1 + v_i · v_j + v_i · v_k + v_j · v_k],

where β = min_{0 < θ ≤ arccos(−1/3)} (2/π) · (2π − 3θ)/(1 + 3 cos θ) > 0.79607.
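The constant β can be evaluated numerically; the grid scan below is a sketch (not a certified global minimization) of the one-dimensional problem defining β:

```python
import math

def beta_objective(theta):
    """(2/pi) * (2*pi - 3*theta) / (1 + 3*cos(theta)); minimizing this over
    0 < theta <= arccos(-1/3) gives the MAX DICUT constant beta."""
    return (2 / math.pi) * (2 * math.pi - 3 * theta) / (1 + 3 * math.cos(theta))

upper = math.acos(-1.0 / 3.0)
# Scan strictly inside the interval; the objective blows up at the right
# endpoint, where the denominator 1 + 3 cos(theta) vanishes.
thetas = [upper * i / 200_001 for i in range(1, 200_001)]
beta = min(beta_objective(t) for t in thetas)   # approximately 0.7961
```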
Proof of Lemma 6.2: A very short proof can be given relying on spherical geometry. The desired
probability can be seen to be equal to twice the area of the spherical triangle polar to the spherical
triangle defined by v_i, v_j and v_k. Stated this way, the result is a corollary to Girard's formula
(1629 [20], see [65]) expressing the area of a spherical triangle with angles θ_1, θ_2 and θ_3 as its
excess θ_1 + θ_2 + θ_3 − π.

We also present a proof of the lemma from first principles. In fact, our proof parallels Euler's
proof (1781 [15], see [65]) of Girard's formula. We define the following events:

    A   : sgn(v_i · r) = sgn(v_j · r) = sgn(v_k · r)
    B_i : sgn(v_i · r) ≠ sgn(v_j · r) = sgn(v_k · r)
    C_i : sgn(v_j · r) = sgn(v_k · r)
    C_j : sgn(v_i · r) = sgn(v_k · r)
    C_k : sgn(v_i · r) = sgn(v_j · r).
Note that B_i = C_i − A. We define B_j and B_k similarly, so that B_j = C_j − A and B_k = C_k − A.
Clearly,

    Pr[A] + Pr[B_i] + Pr[B_j] + Pr[B_k] = 1.     (1)

Also, Pr[C_i] = Pr[A] + Pr[B_i], and similarly for j and k. Adding up these equalities and subtracting
(1), we obtain

    Pr[C_i] + Pr[C_j] + Pr[C_k] = 1 + 2 Pr[A].     (2)

By Lemma 2.2, Pr[C_i] = 1 − (1/π) arccos(v_j · v_k), and similarly for j and k. Together with (2), we
derive

    Pr[A] = 1 − (1/(2π)) (arccos(v_i · v_j) + arccos(v_i · v_k) + arccos(v_j · v_k)),

proving the lemma.
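Lemma 6.2 can also be sanity-checked by simulation for vectors in R³ (illustrative helpers, using a Gaussian random direction for r):

```python
import math
import random

def all_same_sign_prob(vs, trials=200_000, seed=0):
    """Monte Carlo estimate of Pr[sgn(v_i . r) = sgn(v_j . r) = sgn(v_k . r)]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        r = [rng.gauss(0, 1) for _ in range(3)]
        signs = {sum(a * b for a, b in zip(v, r)) >= 0 for v in vs}
        hits += (len(signs) == 1)   # all three inner products share a sign
    return hits / trials

def lemma_value(vs):
    """1 - (1/(2*pi)) * (sum of the three pairwise arccos(v . w))."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    vi, vj, vk = vs
    s = math.acos(dot(vi, vj)) + math.acos(dot(vi, vk)) + math.acos(dot(vj, vk))
    return 1 - s / (2 * math.pi)
```

For the three standard basis vectors the pairwise angles are all π/2, so the lemma predicts probability 1 − (3π/2)/(2π) = 1/4.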
Proof of Lemma 6.3: One can easily verify that the defined value of β is greater than 0.79607.
Let a = arccos(v_i · v_j), b = arccos(v_i · v_k) and c = arccos(v_j · v_k). From the theory of spherical
triangles, it follows that the possible values for (a, b, c) over all possible vectors v_i, v_j and v_k
define the set

    S = {(a, b, c) : 0 ≤ a ≤ π, 0 ≤ b ≤ π, 0 ≤ c ≤ π, c ≤ a + b, b ≤ a + c, a ≤ b + c, a + b + c ≤ 2π}

(see [7, Corollary 18.6.12.3]). The claim can thus be restated as 1 − (1/(2π))(a + b + c) ≥ (β/4)(1 + cos(a) +
cos(b) + cos(c)) for all (a, b, c) ∈ S.

Let (a, b, c) minimize h(a, b, c) = 1 − (1/(2π))(a + b + c) − (β/4)(1 + cos(a) + cos(b) + cos(c)) over all
(a, b, c) ∈ S. We consider several cases:
1. a + b + c = 2π. We have 1 − (1/(2π))(a + b + c) = 0. On the other hand,

        1 + cos(a) + cos(b) + cos(c)
            = 1 + cos(a) + cos(b) + cos(a + b)
            = 1 + cos(a + b) + 2 cos((a + b)/2) cos((a − b)/2)
            = 2 cos²((a + b)/2) + 2 cos((a + b)/2) cos((a − b)/2)
            = 2 cos((a + b)/2) [cos((a + b)/2) + cos((a − b)/2)].     (3)

    We now derive that h(a, b, c) = −(β/2) cos((a + b)/2) [cos((a + b)/2) + cos((a − b)/2)] ≥ 0, the last inequality
    following from the fact that π/2 ≤ (a + b)/2 ≤ π − |(a − b)/2|, and thus cos((a + b)/2) ≤ 0 and [cos((a + b)/2) +
    cos((a − b)/2)] ≥ 0.
2. a = b + c or b = a + c or c = a + b. By symmetry, assume that c = a + b. Observe that
   1 − (1/(2π))(a + b + c) = 1 − (a + b)/π. On the other hand, by (3) we have that

        1 + cos(a) + cos(b) + cos(c)
            = 2 cos((a + b)/2) (cos((a − b)/2) + cos((a + b)/2))
            ≤ 2 cos((a + b)/2) (1 + cos((a + b)/2)).

    Letting x = (a + b)/2, we observe that the claim reduces to 1 − 2x/π ≥ (β/2) cos(x)(1 + cos(x))
    for any 0 ≤ x ≤ π/2. One can in fact verify that 1 − 2x/π ≥ (0.81989/2) cos(x)(1 + cos(x)) for any
    0 ≤ x ≤ π/2, implying the claim since β < 0.81989.
3. a = 0 or b = 0 or c = 0. Without loss of generality, let a = 0. The definition of S implies
   that b = c, and thus b = a + c. This case therefore reduces to the previous one.

4. a = π or b = π or c = π. Assume a = π. This implies that b + c = π and, thus, a + b + c = 2π.
   We have thus reduced the problem to case 1.

5. In the last case, (a, b, c) belongs to the interior of S. This implies that the gradient of h
   must vanish and the Hessian of h must be positive semidefinite at (a, b, c). In other words,
   sin a = sin b = sin c = 2/(πβ), and cos a ≥ 0, cos b ≥ 0 and cos c ≥ 0. From this, we derive that
   a = b = c. But h(a, a, a) = 1 − 3a/(2π) − (β/4)(1 + 3 cos(a)). The lemma now follows from the fact
   that a ≤ 2π/3, the definition of β and the fact that 1 + 3 cos a ≤ 0 for a ≥ arccos(−1/3).
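As a numerical sanity check on Lemma 6.3, one can scan a grid over S and confirm that h(a, b, c) ≥ 0 when β is rounded down to 0.79607 (a sketch, not a proof):

```python
import math

BETA = 0.79607   # the stated lower bound on beta

def h(a, b, c):
    """h(a,b,c) = 1 - (a+b+c)/(2*pi) - (BETA/4)*(1 + cos a + cos b + cos c)."""
    return 1 - (a + b + c) / (2 * math.pi) - (BETA / 4) * (
        1 + math.cos(a) + math.cos(b) + math.cos(c))

def in_S(a, b, c):
    """Membership in the spherical-triangle region S of Lemma 6.3
    (the box constraints are enforced by the grid itself)."""
    return (c <= a + b and b <= a + c and a <= b + c
            and a + b + c <= 2 * math.pi)

n = 60
grid = [math.pi * i / n for i in range(n + 1)]
min_h = min(h(a, b, c) for a in grid for b in grid for c in grid
            if in_S(a, b, c))
```

The grid minimum is nonnegative up to floating-point noise, consistent with the lemma.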
Thus we obtain a (β − ε)-approximation algorithm for (Q'') and for the MAX DICUT problem.
7 Concluding Remarks
Our motivation for studying semidenite programming relaxations came from a realization that
the standard tool of using linear programming relaxations for approximation algorithms had
limits which might not be easily surpassed (see the conclusion of Goemans and Williamson [22]).
In fact, a classical linear programming relaxation for the maximum cut problem can be shown
to be arbitrarily close to twice the value of the maximum cut in the worst case. Given the
work of Lovász and Schrijver [46, 47], which showed that tighter and tighter relaxations could
be obtained through semidefinite programming, it seemed worthwhile to investigate the power of
such relaxations from a worst-case perspective. The results of this paper constitute a first step
in this direction. As we mentioned in the introduction, further steps have already been made,
with improved results for MAX 2SAT and MAX DICUT by Feige and Goemans, and for coloring
by Karger, Motwani, and Sudan. We think that the continued investigation of these methods is
promising.
While this paper leaves many open questions, we think there are two especially interesting
problems. The first question is whether a .878-approximation algorithm for MAX CUT
can be obtained without explicitly solving the semidefinite program. For example, the first 2-
approximation algorithms for weighted vertex cover involved solving a linear program [32], but
later Bar-Yehuda and Even [3] devised a primal-dual algorithm in which linear programming was
used only in the analysis of the algorithm. Perhaps a semidefinite analog is possible for MAX
CUT. The second question is whether adding additional constraints to the semidefinite program
leads to a better worst-case bound. There is some reason to think this might be true. Linear
constraints are known for which the program would find an optimal solution on any planar graph,
whereas there is a gap of 32/(25 + 5√5) for the current semidefinite program on the 5-cycle.
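The quoted gap corresponds to a relaxation value of (25 + 5√5)/8 for the 5-cycle against its maximum cut value of 4, as the following arithmetic check confirms:

```python
import math

# Relaxation value of C5 implied by the stated gap; the maximum cut of C5 is 4.
sdp_value = (25 + 5 * math.sqrt(5)) / 8
max_cut = 4
gap = max_cut / sdp_value   # equals 32 / (25 + 5*sqrt(5)), about 0.8845
```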
One consequence of this paper is that the situation with several MAX SNP problems is no
longer clear-cut. When the best-known approximation results for MAX CUT and MAX SAT had
such long-standing and well-defined bounds as 1/2 and 3/4, it was tempting to believe that perhaps
no further work could be done in approximating these problems, and that it was only a matter
of time before matching hardness results would be found. The improved results in this paper
should rescue algorithm designers from such fatalism. Although MAX SNP problems cannot be
approximated arbitrarily closely, there still is work to do in designing improved approximation
algorithms.
Acknowledgements
Since the appearance of an extended abstract of this paper, we have benefited from discussions
with a very large number of colleagues. In particular, we would like to thank Don Coppersmith
and David Shmoys for pointing out to us that our MAX 2SAT algorithm could lead to an im-
proved MAX SAT algorithm, Jon Kleinberg for bringing reference [44] to our attention, Bob
Vanderbei for kindly giving us his semidefinite code, Francisco Barahona and Michael Jünger
for providing problem instances, Joel Spencer for motivating Theorem 2.6, Farid Alizadeh, Ga-
bor Pataki and Rob Freund for results on semidefinite programming, and Shang-Hua Teng for
bringing reference [38] to our attention. We received other useful comments from Farid Alizadeh,
Joseph Cheriyan, Jon Kleinberg, Monique Laurent, Colin McDiarmid, Giovanni Rinaldi, David
Shmoys, Éva Tardos, and the two anonymous referees.
References
[1] F. Alizadeh. Interior point methods in semidefinite programming with applications to
combinatorial optimization. SIAM Journal on Optimization, 5:13–51, 1995.
[2] S. Arora, C. Lund, R. Motwani, M. Sudan, and M. Szegedy. Proof verification and hardness
of approximation problems. In Proceedings of the 33rd Annual Symposium on Foundations
of Computer Science, pages 14–23, 1992.
[3] R. Bar-Yehuda and S. Even. A linear time approximation algorithm for the weighted vertex
cover problem. Journal of Algorithms, 2:198–203, 1981.
[4] F. Barahona, M. Grötschel, M. Jünger, and G. Reinelt. An application of combinatorial
optimization to statistical physics and circuit layout design. Operations Research, 36:493–513, 1988.
[5] A. I. Barvinok. Problems of distance geometry and convex properties of quadratic maps.
Discrete and Computational Geometry, 13:189–202, 1995.
[6] M. Bellare, O. Goldreich, and M. Sudan. Free bits, PCP and non-approximability – towards
tight results. Unpublished manuscript, 1995.
[7] M. Berger. Geometry II. Springer-Verlag, Berlin, 1987.
[8] J. A. Bondy and U. S. R. Murty. Graph Theory with Applications. American Elsevier
Publishing, New York, NY, 1976.
[9] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System
and Control Theory. SIAM, Philadelphia, PA, 1994.
[10] B. Chor and M. Sudan. A geometric approach to betweenness. In Proceedings of the European
Symposium on Algorithms, 1995.
[11] J. Christensen and J. Vesterstrøm. A note on extreme positive definite matrices. Mathematische
Annalen, 244:65–68, 1979.
[12] C. Delorme and S. Poljak. Combinatorial properties and the complexity of a max-cut
approximation. European Journal of Combinatorics, 14:313–333, 1993.
[13] C. Delorme and S. Poljak. Laplacian eigenvalues and the maximum cut problem. Mathematical
Programming, 62:557–574, 1993.
[14] C. Delorme and S. Poljak. The performance of an eigenvalue bound on the max-cut problem
in some classes of graphs. Discrete Mathematics, 111:145–156, 1993.
[15] L. Euler. De mensura angulorum solidorum. Petersburg, 1781.
[16] U. Feige and M. X. Goemans. Approximating the value of two prover proof systems, with
applications to MAX 2SAT and MAX DICUT. In Proc. of the Third Israel Symposium on
Theory of Computing and Systems, pages 182–189, 1995.
[17] U. Feige and L. Lovász. Two-prover one-round proof systems: Their power and their
problems. In Proceedings of the 24th Annual ACM Symposium on the Theory of Computing,
pages 733–744, 1992.
[18] A. Frieze and M. Jerrum. Improved approximation algorithms for MAX k-CUT and MAX
BISECTION. In Proceedings of the Fourth MPS Conference on Integer Programming and
Combinatorial Optimization. Springer-Verlag, 1995. To appear in Algorithmica.
[19] M. Garey, D. Johnson, and L. Stockmeyer. Some simplified NP-complete graph problems.
Theoretical Computer Science, 1:237–267, 1976.
[20] A. Girard. De la mesure de la superficie des triangles et polygones sphériques. Appendix to
"Invention nouvelle en l'algèbre", Amsterdam, 1629.
[21] M. X. Goemans and D. P. Williamson. .878-approximation algorithms for MAX CUT and
MAX 2SAT. In Proceedings of the 26th Annual ACM Symposium on the Theory of Computing,
pages 422–431, 1994.
[22] M. X. Goemans and D. P. Williamson. New 3/4-approximation algorithms for the maximum
satisfiability problem. SIAM Journal on Discrete Mathematics, 7:656–666, 1994.
[23] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University
Press, Baltimore, MD, 1983.
[24] R. Grone, S. Pierce, and W. Watkins. Extremal correlation matrices. Linear Algebra and
its Applications, 134:63–70, 1990.
[25] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in
combinatorial optimization. Combinatorica, 1:169–197, 1981.
[26] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization.
Springer-Verlag, Berlin, 1988.
[27] F. Hadlock. Finding a maximum cut of a planar graph in polynomial time. SIAM Journal
on Computing, 4:221–225, 1975.
[28] D. J. Haglin. Approximating maximum 2-CNF satisfiability. Parallel Processing Letters,
2:181–187, 1992.
[29] D. J. Haglin. Personal communication, 1994.
[30] D. J. Haglin and S. M. Venkatesan. Approximation and intractability results for the maximum
cut problem and its variants. IEEE Transactions on Computers, 40:110–113, 1991.
[31] L. Hall. Personal communication, 1994.
[32] D. S. Hochbaum. Approximation algorithms for the set covering and vertex cover problems.
SIAM Journal on Computing, 11:555–556, 1982.
[33] T. Hofmeister and H. Lefmann. A combinatorial design approach to MAXCUT. Unpublished
manuscript, 1995.
[34] S. Homer and M. Peinado. A highly parallel implementation of the Goemans/Williamson
algorithm to approximate MaxCut. Manuscript, 1994.
[35] M. Jünger. Personal communication, 1994.
[36] D. Karger, R. Motwani, and M. Sudan. Approximate graph coloring by semidefinite
programming. In Proceedings of the 35th Annual Symposium on Foundations of Computer
Science, pages 2–13, 1994.
[37] R. M. Karp. Reducibility among combinatorial problems. In R. Miller and J. Thatcher,
editors, Complexity of Computer Computations, pages 85–103. Plenum Press, New York,
NY, 1972.
[38] D. E. Knuth. Seminumerical Algorithms, volume 2 of The Art of Computer Programming.
Addison-Wesley, Reading, MA, second edition, 1981.
[39] P. Lancaster and M. Tismenetsky. The Theory of Matrices. Academic Press, Orlando, FL,
1985.
[40] M. Laurent and S. Poljak. On the facial structure of the set of correlation matrices.
Unpublished manuscript, 1995.
[41] C.-K. Li and B.-S. Tam. A note on extreme correlation matrices. SIAM Journal on Matrix
Analysis and Applications, 15:903–908, 1994.
[42] R. Loewy. Extreme points of a convex subset of the cone of positive definite matrices.
Mathematische Annalen, 253:227–232, 1980.
[43] L. Lovász. On the Shannon capacity of a graph. IEEE Transactions on Information Theory,
IT-25:1–7, 1979.
[44] L. Lovász. Self-dual polytopes and the chromatic number of distance graphs on the sphere.
Acta Scientiarum Mathematicarum, 45:317–323, 1983.
[45] L. Lovász. Combinatorial optimization: Some problems and trends. DIMACS Technical
Report 92-53, 1992.
[46] L. Lovász and A. Schrijver. Matrix cones, projection representations, and stable set
polyhedra. In Polyhedral Combinatorics, volume 1 of DIMACS Series in Discrete Mathematics
and Theoretical Computer Science, pages 1–17. American Mathematical Society, 1989.
[47] L. Lovász and A. Schrijver. Cones of matrices and set-functions, and 0-1 optimization. SIAM
Journal on Optimization, 1:166–190, 1990.
[48] S. Mahajan and H. Ramesh. Correctly derandomizing Goemans and Williamson's Max CUT
algorithm. Unpublished manuscript, 1995.
[49] B. Mohar and S. Poljak. Eigenvalue methods in combinatorial optimization. In R. Brualdi,
S. Friedland, and V. Klee, editors, Combinatorial and Graph-Theoretic Problems in Linear
Algebra, volume 50 of The IMA Volumes in Mathematics and its Applications, pages 107–151.
Springer-Verlag, 1993.
[50] Y. Nesterov and A. Nemirovskii. Self-Concordant Functions and Polynomial Time Methods
in Convex Programming. Central Economic and Mathematical Institute, USSR Academy of
Science, Moscow, 1989.
[51] Y. Nesterov and A. Nemirovskii. Interior Point Polynomial Methods in Convex Programming.
Society for Industrial and Applied Mathematics, Philadelphia, PA, 1994.
[52] G. I. Orlova and Y. G. Dorfman. Finding the maximal cut in a graph. Engineering Cybernetics,
pages 502–506, 1972.
[53] M. L. Overton and R. S. Womersley. On the sum of the largest eigenvalues of a symmetric
matrix. SIAM Journal on Matrix Analysis and Applications, 13:41–45, 1992.
[54] M. L. Overton and R. S. Womersley. Optimality conditions and duality theory for minimizing
sums of the largest eigenvalues of symmetric matrices. Mathematical Programming, 62:321–357, 1993.
[55] C. H. Papadimitriou and M. Yannakakis. Optimization, approximation, and complexity
classes. Journal of Computer and System Sciences, 43:425–440, 1991.
[56] G. Pataki. On the multiplicity of optimal eigenvalues. Management Science Research Report
MSRR-604, GSIA, Carnegie Mellon University, 1994.
[57] G. Pataki. On cone-LP's and semi-definite programs: Facial structure, basic solutions, and
the simplex method. GSIA Working Paper WP 1995-03, Carnegie Mellon University, 1995.
[58] S. Poljak and F. Rendl. Nonpolyhedral relaxations of graph-bisection problems. DIMACS
Technical Report 92-55, 1992. To appear in SIAM Journal on Optimization.
[59] S. Poljak and F. Rendl. Node and edge relaxations of the max-cut problem. Computing,
52:123–137, 1994. Also appears as Report 266, Technische Universität Graz.
[60] S. Poljak and F. Rendl. Solving the max-cut problem using eigenvalues. In M. Lucertini,
G. Rinaldi, A. Sassano, and B. Simeone, editors, Partitioning and Decomposition in
Combinatorial Optimization, Discrete Applied Mathematics. 1995. To appear.
[61] S. Poljak and D. Turzík. A polynomial algorithm for constructing a large bipartite subgraph,
with an application to a satisfiability problem. Canadian Journal of Mathematics, 34:519–524, 1982.
[62] S. Poljak and Z. Tuza. The max-cut problem – a survey. In W. Cook, L. Lovász, and
P. Seymour, editors, Special Year on Combinatorial Optimization, DIMACS Series in Discrete
Mathematics and Theoretical Computer Science. American Mathematical Society, 1995. To
be published.
[63] G. Reinelt. TSPLIB – A traveling salesman problem library. ORSA Journal on Computing,
3:376–384, 1991.
[64] F. Rendl, R. Vanderbei, and H. Wolkowicz. Interior point methods for max-min eigenvalue
problems. Report 264, Technische Universität Graz, 1993.
[65] B. Rosenfeld. A History of Non-Euclidean Geometry. Springer-Verlag, Berlin, 1988.
[66] S. Sahni and T. Gonzalez. P-complete approximation problems. Journal of the ACM,
23:555–565, 1976.
[67] P. M. Vaidya. A new algorithm for minimizing convex functions over convex sets. In
Proceedings of the 30th Annual Symposium on Foundations of Computer Science, pages
338–343, 1989.
[68] L. Vandenberghe and S. Boyd. Semidefinite programming. Submitted to SIAM Review,
1994.
[69] P. M. Vitányi. How well can a graph be n-colored? Discrete Mathematics, 34:69–80, 1981.
[70] H. Wolkowicz. Some applications of optimization in matrix theory. Linear Algebra and Its
Applications, 40:101–118, 1981.
[71] M. Yannakakis. On the approximation of maximum satisfiability. Journal of Algorithms,
17:475–502, 1994.