Algorithms Fieldguides

1. The document defines basic properties of arithmetic series, including formulas for the sum of the first n terms and the sum of squares of the first n natural numbers. It also introduces basic concepts of discrete Fourier transforms.
2. The fast Fourier transform algorithm is described as running in O(n log n) time, an improvement over the naive O(n^2) algorithm. It works by recursively breaking down the DFT into even and odd parts.
3. Applications of graph algorithms like topological sorting, shortest paths, minimum spanning trees, and strongly connected components are discussed. Key graph algorithms like depth-first search, breadth-first search, and Dijkstra's are summarized with their time complexities.


Algorithms

A Mini Field Guide

Written by Krishna Parashar

Basic Properties of Logarithms

    y = log_b(x) iff x = b^y

    x^a * x^b = x^(a+b)
    (x^a)^b = x^(ab)
    x^(a-b) = x^a / x^b
    x^(1/2) = sqrt(x)
    log_b(xy) = log_b(x) + log_b(y)
    log_b(x^n) = n log_b(x)
    log_b(x) = log_c(x) / log_c(b)

Basic Series

Arithmetic Series (Sequential Integers):
    sum_{k=1}^{n} k = n(n+1)/2

Arithmetic Series (Squared):
    sum_{k=1}^{n} k^2 = n(n+1)(2n+1)/6

Sum of the first n odd integers:
    sum_{k=1}^{n} (2k - 1) = n^2

Finite Geometric Series:
    sum_{k=1}^{n} a r^(k-1) = a(1 - r^n)/(1 - r)

Infinite Geometric Series (|r| < 1):
    sum_{k=1}^{infinity} a r^(k-1) = a/(1 - r)

Formal Limit Definition of O, Theta, and Omega

If lim_{n -> infinity} f(n)/g(n) = c with 0 < c < infinity, then f(n) is in Theta(g(n)).
If the limit is 0, then f(n) is in O(g(n)); if the limit is infinity, then f(n) is in Omega(g(n)).

Fast Fourier Transform O(n log n)

The Fast Fourier Transform, in the scope of our knowledge thus far, is used for Polynomial Multiplication. We want to know the coefficients of (p*q)(x) knowing only the coefficients of p(x) and q(x). The naive way is the one we all used in algebra, which runs in O(n^2). Here is a faster algorithm:
1. Take the coefficients of p(x) and q(x) and plug them into the FFT with the roots of unity; this evaluates p(x) and q(x) at 2n + 1 points. O(n log n)
2. Multiply the respective values from each root together to get 2n + 1 respective pairs of values of (p*q)(x). O(n)
3. Finally, we interpolate these points using the inverse FFT to get the coefficients of (p*q)(x). O(n log n)

Evaluation:    <values> = FFT(<coefficients>, w)
Interpolation: <coefficients> = (1/n) FFT(<values>, w^-1)

    function FFT(A, w)
    Input: Coefficient representation of a polynomial A(x)
        of degree less than or equal to n-1, where n is a power of 2;
        w, an nth root of unity
    Output: Value representation A(w^0), ..., A(w^(n-1))
    if w = 1: return A(1)
    express A(x) in the form A_e(x^2) + x A_o(x^2)
    call FFT(A_e, w^2) to evaluate A_e at even powers of w
    call FFT(A_o, w^2) to evaluate A_o at even powers of w
    for j = 0 to n-1:
        compute A(w^j) = A_e(w^2j) + w^j A_o(w^2j)
    return A(w^0), ..., A(w^(n-1))

This process happens recursively.

Topological Sort O(V + E)

Constraints: A directed graph G is acyclic if and only if a depth-first search of G yields no back edges. Formally, we say a topological sort of a directed acyclic graph G is an ordering of the vertices of G such that for every edge (v_i, v_j) of G we have i < j. If G is cyclic, then no linear ordering is possible.
Topological Sort returns a list ordered such that all parents come before any of their children (excluding sources). We order a graph from the highest post number in decreasing order. Thus we create a singly connected component from a DAG with several strongly connected components. Each DAG has a source and a sink, and there are multiple valid topological sortings. Used for runtime compiling and scheduling.

Strongly Connected Components O(V + E)

Used for most edge-removal problems: use this to check whether the graph is still strongly connected.
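The recursive FFT and the evaluate/multiply/interpolate pipeline above can be sketched in runnable Python. This is an illustrative version under our own naming (fft, poly_multiply are not names from the guide); the sign convention on the root of unity is an arbitrary choice.

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 FFT; len(a) must be a power of 2.
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for j in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * j / n)
        out[j] = even[j] + w * odd[j]
        out[j + n // 2] = even[j] - w * odd[j]
    return out

def poly_multiply(p, q):
    # Evaluate both polynomials, multiply pointwise, then interpolate
    # (the inverse FFT divides by n on the way back).
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fp = fft(p + [0] * (n - len(p)))
    fq = fft(q + [0] * (n - len(q)))
    prod = fft([x * y for x, y in zip(fp, fq)], invert=True)
    return [round((c / n).real) for c in prod[:len(p) + len(q) - 1]]
```

For example, multiplying 1 + 2x by 3 + 4x yields coefficients [3, 10, 8].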

Search Algorithms

Depth First Search O(V + E) (Stack)

Explores all the way down to a leaf, then climbs back up and explores alternate paths. Used for Topological Sort and for finding Strongly Connected Components.
Nesting property of pre/post numbers for a tree edge from u to v:
    pre(u) < pre(v) < post(v) < post(u)

Recursion-tree view of divide and conquer:
    Branching factor: a
    Depth of tree: log_b(n)
    Width of tree: a^(log_b n) = n^(log_b a)

Strongly Connected Components

Big Picture: Topologically sort the graph, reverse the edges, and topologically sort again. Each time we reach a new source, the resulting traversal is a new strongly connected component; continue until the end of the list.
Algorithm:
1. Run a depth-first search on G^R (G with its edges reversed).
2. Run the undirected connected components algorithm (which uses a counter to count the number of components) on G, and during the depth-first search, process the vertices in decreasing order of their post numbers from step 1.
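The two-pass SCC procedure above can be sketched as a short Python function (graph as an adjacency dict; the helper names are our own):

```python
def sccs(graph):
    # Pass 1: DFS on the reversed graph, recording postorder.
    rev = {u: [] for u in graph}
    for u in graph:
        for v in graph[u]:
            rev[v].append(u)
    seen, order = set(), []

    def dfs(adj, u, acc):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs(adj, v, acc)
        acc.append(u)  # postorder

    for u in rev:
        if u not in seen:
            dfs(rev, u, order)
    # Pass 2: DFS on G, processing vertices in decreasing post number;
    # each tree found is one strongly connected component.
    seen.clear()
    components = []
    for u in reversed(order):
        if u not in seen:
            comp = []
            dfs(graph, u, comp)
            components.append(sorted(comp))
    return components
```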

Breadth First Search O(V + E) (Queue)


Input: Graph G = (V, E), directed or undirected; vertex s in V
Output: For all vertices u reachable from s, dist(u) is set
    to the distance from s to u.
def bfs(G,s):
    for all u in V:
        dist(u) = infinity
    dist(s) = 0
    Q = [s] (queue containing just s)
    while Q is not empty:
        u = eject(Q)
        for all edges (u,v) in E:
            if dist(v) = infinity:
                inject(Q,v)
                dist(v) = dist(u) + 1
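A minimal runnable version of the bfs pseudocode above, with inject/eject played by a deque (adjacency-dict graph; names are our own):

```python
from collections import deque

def bfs(graph, s):
    dist = {u: float('inf') for u in graph}
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()              # eject
        for v in graph[u]:
            if dist[v] == float('inf'):
                q.append(v)          # inject
                dist[v] = dist[u] + 1
    return dist
```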

Dijkstra's Algorithm O((V + E) log V) (Binary Heap)

Objective is to find shortest paths when edge lengths are nonnegative. Standard data structure is a Priority Queue.

    def dijkstra(G,l,s):
        for all u in V:
            dist(u) = infinity
            prev(u) = nil
        dist(s) = 0
        H = makequeue(V) # using dist values as keys
        while H is not empty:
            u = deletemin(H)
            for all edges (u,v) in E:
                if dist(v) > dist(u)+l(u,v):
                    dist(v) = dist(u)+l(u,v)
                    prev(v) = u
                    decreasekey(H,v)

Depth First Search subroutines:

    def explore(G,v): # Where G = (V,E) is a Graph
    Input: G = (V,E) is a graph; v in V
    Output: visited(u) is set to true for all nodes u reachable from v
        visited(v) = true
        previsit(v)
        for each edge (v,u) in E:
            if not visited(u):
                explore(u)
        postvisit(v)

    def dfs(G):
        for all v in V:
            if not visited(v):
                explore(v)

Previsit: the counter value when the node is added to the stack.
Postvisit: the counter value when you finally leave the given node.
If the explore subroutine is started at node u, then it will terminate precisely when all nodes reachable from u have been visited.

Edge types from pre/post numbers:
    [u [v v] u] is a Tree/Forward Edge
    [v [u u] v] is a Back Edge
    [v v] [u u] is a Cross Edge
A directed graph has a cycle if and only if a back edge is found during a depth-first search.

Properties of Strongly Connected Components:
If C and C' are strongly connected components and there is an edge from a node in C to a node in C', then the highest post number in C is bigger than the highest post number in C'.
The node that receives the highest post number in a depth-first search must lie in a source strongly connected component.

Master's Theorem
    T(n) = aT(n/b) + O(n^d), for a > 0, b > 1, and d >= 0:
    T(n) = O(n^d)          if d > log_b(a)
    T(n) = O(n^d log n)    if d = log_b(a)
    T(n) = O(n^(log_b a))  if d < log_b(a)
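A runnable heap-based Dijkstra corresponding to the pseudocode above. Python's heapq has no decreasekey, so this sketch uses the common idiom of pushing duplicate entries and skipping stale ones:

```python
import heapq

def dijkstra(graph, s):
    # graph: dict mapping u -> list of (v, weight) pairs
    dist = {u: float('inf') for u in graph}
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry, already improved
        for v, w in graph[u]:
            if dist[v] > d + w:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```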

Bellman-Ford Algorithm O(V * E)

Objective is to find shortest paths while allowing for negative edges.

    procedure shortest-paths(G, l, s)
    Input: Directed graph G = (V, E);
        edge lengths {l_e: e in E} with no negative cycles;
        vertex s in V
    Output: For all vertices u reachable from s,
        dist(u) is set to the distance from s to u.
    for all u in V:
        dist(u) = infinity
        prev(u) = nil
    dist(s) = 0
    repeat |V|-1 times:
        for all e in E:
            update(e)

Disjoint Sets Data Structure

Used inside Kruskal's algorithm:
    if find(u) != find(v):
        add edge {u,v} to X
        union(u,v)
The above utilizes disjoint sets to determine whether adding a given edge creates a cycle, basically by checking whether or not both endpoints have the same root ancestor.

Directed Acyclic Graphs

In a DAG, every edge leads to a vertex with a lower post number. Every directed graph is a DAG of its strongly connected components. Every DAG has a source and a sink. In any path of a DAG, the vertices appear in increasing linearized order, allowing you to run Dijkstra's Algorithm in O(n).

Trees

Trees are undirected acyclic graphs; a tree with n nodes has n-1 edges. Any connected undirected graph G with |E| = |V| - 1 is a tree. An undirected graph is a tree if and only if there is a unique path between any pair of nodes.

Prim's Algorithm O(E log E)

Objective is to also find the MST; an alternative to Kruskal's Algorithm, similar to Dijkstra's. Standard data structure is a Priority Queue. On each iteration, the subtree defined by X grows by one edge, the lightest edge between a vertex in S and a vertex outside S. If G is dense, |E| is at most |V|^2.

Greedy Algorithms

Definitions:
A Greedy Algorithm always takes the cheapest (least-weight) edge for its next step, no matter the future consequences.
A tree is an acyclic, undirected, connected graph with |V| - 1 edges (for an MST, exactly |V| - 1 edges).
The fringe is all the vertices exactly one hop from your current vertex.

Kruskal's MST Algorithm O(E log E)

    def makeset(x): # O(1)
        pi(x) = x
        rank(x) = 0

    def find(x): # O(E log V) total over a run of Kruskal's
        while x != pi(x):
            x = pi(x)
        return x

Cut Property

Suppose edges X are part of a minimum spanning tree of G = (V, E). Pick any subset of nodes S for which X does not cross between S and V-S, and let e be the lightest edge across the partition. Then X together with e is part of some Minimum Spanning Tree.

Disjoint sets contain a function find that returns the root of a given set. pi refers to the parent node; rank refers to the height of the subtree hanging from that node (the number of levels below it).

Properties of rank:
For any x, rank(x) < rank(pi(x)).
Any root node of rank k has at least 2^k nodes in its tree.
If there are n elements overall, there can be at most n/2^k nodes of rank k.
The maximum rank is log n.

procedure prim(G, w)
Input: A connected undirected graph G = (V, E) with weights
Output: A minimum spanning tree defined by the array prev
for all u in V :
cost(u) = infinity
prev(u) = nil
Pick any initial node u_0
cost(u_0) = 0
H = makequeue (V) (priority queue with cost-values as keys)
while H is not empty:
v = deletemin(H)
for each {v, z} in E:
if cost(z) > w(v, z):
cost(z) = w(v, z)
prev(z) = v
decreasekey(H, z)

    def union(x,y): # O(E log V) total over a run of Kruskal's
        if find(x) == find(y):
            return
        elif rank(find(x)) > rank(find(y)):
            pi(find(y)) = find(x)
        else:
            pi(find(x)) = find(y)
            if rank(find(x)) == rank(find(y)):
                rank(find(y)) = rank(find(y)) + 1

Path Compression

    function find(x):
        if x != pi(x): pi(x) = find(pi(x))
        return pi(x)

Using path compression allows for an amortized cost of O(log* n) for our Disjoint Sets union(x, y) and find(x) operations.

Union Find

Uses the Disjoint Sets data structure. Runs in O(log* n) per operation, where log* n is the number of times you can take a log of n before it becomes 1 or less. It is a very slow-growing function, and for all practical cases it is constant.
Basically, if find(x) and find(y) return the same value, they are in the same component, so do nothing; else add the edge, then union(x, y).
Union: worst case is O(log N); over all operations the total is O(n log* n), where n is the number of elements in the data structure.
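The makeset/find/union trio with union by rank and path compression can be written as a small runnable module (global dicts for pi and rank, mirroring the pseudocode above):

```python
parent, rank_ = {}, {}

def makeset(x):
    parent[x] = x
    rank_[x] = 0

def find(x):
    if parent[x] != x:
        parent[x] = find(parent[x])   # path compression
    return parent[x]

def union(x, y):
    rx, ry = find(x), find(y)
    if rx == ry:
        return
    if rank_[rx] < rank_[ry]:
        rx, ry = ry, rx
    parent[ry] = rx                   # hang the shorter tree under the taller
    if rank_[rx] == rank_[ry]:
        rank_[rx] += 1
```

Used in Kruskal's algorithm, find(u) == find(v) is exactly the "would this edge create a cycle?" test.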

Huffman Encoding

A means to encode data using the optimal number of bits for each character given a distribution.

    func huffman(f):
    Input: An array f[1...n] of frequencies
    Output: An encoding tree with n leaves
    let H be a priority queue of integers, ordered by f
    for i = 1 to n: insert(H,i)
    for k = n+1 to 2n-1:
        i = deletemin(H), j = deletemin(H)
        create a node numbered k with children i,j
        f[k] = f[i]+f[j]
        insert(H,k)

Kruskal's Algorithm

Sort the edges using a sorting algorithm, then repeatedly add the next lightest edge that doesn't produce a cycle (i.e. only visit new vertices).

    Input: A connected undirected graph G = (V,E) with edge weights w
    Output: A minimum spanning tree defined by the edges X
    for all u in V:
        makeset(u)
    X = {}
    Sort the edges E by weight
    for all edges {u,v} in E, in increasing order of weight:
        if find(u) != find(v):
            add edge {u,v} to X
            union(u,v)

Set Cover Algorithm (Greedy, Polynomial Time)

    Input: A set of elements B; sets S_1,...,S_m
    Output: A selection of the S_i whose union is B.
    Cost: Number of sets picked.
    Repeat until all elements of B are covered:
        Pick the set S_i with the largest number of uncovered elements.
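Since each Huffman merge adds one level to both merged subtrees, the total encoding cost (sum over characters of frequency times code length) is just the sum of the merged weights. A minimal sketch of that observation, using a heap as in the pseudocode above:

```python
import heapq

def huffman_cost(freqs):
    # Returns the total weighted code length of an optimal Huffman tree.
    heap = list(freqs)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        i = heapq.heappop(heap)
        j = heapq.heappop(heap)
        cost += i + j           # this merge deepens both subtrees by one level
        heapq.heappush(heap, i + j)
    return cost
```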

Dynamic Programming

Fundamentally, DP is carefully brute-forcing the solutions to a problem by turning it into smaller and smaller nested subproblems that remember useful information about their bigger (parent) subproblems, so that they can eventually reconstruct themselves to solve the original problem in a reasonable amount of time. This remembrance is often done using memoization or parent pointers.
Dynamic Programming has two approaches, which both have the same asymptotic runtime (they differ by a constant):
Top Down: The top-down approach uses the recursive idea of breaking the problem into trivially smaller (but still helpful) subproblems, and finding a way, through brute force and memoization, to take the maximum or minimum over every permutation of the subproblems. This is unlike Divide and Conquer, which garners its efficiency from reducing its subproblems to massively smaller problems.
Bottom Up: The bottom-up approach uses the opposite approach of breaking down the problem into its smallest subproblems and iteratively using the smaller problems to solve bigger and bigger problems until it solves the original problem. A table is often used to keep track of these values. Bottom-up is more space efficient than top-down, unless you use tail recursion.
You can solve most DP problems by the following steps:
1. Define the subproblems; know the number of subproblems.
2. Guess a solution for what is not the subproblem (max/min of brute-forced permutations); know the number of guesses.
3. Relate the subproblems to the parent problem. This is the recursive/iterative step.
4. Do the recursion and memoize, or iteratively build a table to keep track of previously used values. These can be used to form a DAG. Ensure the DAG is acyclic (i.e. it has a valid topological order, with no dependencies on parent problems).
5. Solve the original problem. Runtime = (number of subproblems) x (time per subproblem).
Choosing Subproblems (i is the current problem): common choices are suffixes x[i:], prefixes x[:i], and substrings x[i:j], detailed below.

Parenthesization O(n^3)

Choose the best split points for a chain of matrix multiplications; its recurrence C(i, j) appears below.

Fibonacci, Bottom Up O(n):

    fib = []
    fib(n):
        for k = 1 to n:
            if k <= 2: f = 1
            else: f = fib[k-1] + fib[k-2]
            fib[k] = f
        return fib[n]

Floyd-Warshall O(|V|^3)

Used for finding shortest paths in a weighted graph with positive or negative weights, but with no negative cycles.

    for i = 1 to n:
        for j = 1 to n:
            dist(i,j,0) = infinity
    for all (i,j) in E:
        dist(i,j,0) = l(i,j)
    for k = 1 to n:
        for i = 1 to n:
            for j = 1 to n:
                dist(i,j,k) = min{dist(i,k,k-1) + dist(k,j,k-1),
                                  dist(i,j,k-1)}

Shortest Paths O(V * E)

For the shortest path from s to v that uses a limited number of edges: take the min over incoming edges into v, say from a node u, of the edge weight added to the prefix subproblem, the shortest path from s to u.
More generally, let S_k(s, v) = weight of the shortest path from s to v that uses k edges:
    S_k(s, v) = min over (u,v) in E of (S_{k-1}(s, u) + w(u, v))
This is Bellman-Ford.

Longest Increasing Subsequence O(n^2)

Please note that subsequences are any sequences found in another sequence, not necessarily next to each other (noncontiguous); contiguous subsequences are substrings. The following algorithm starts at one side of the list and finds the max length of sequences terminating at each given node, recursively following back-links. Then, given all the lengths of paths terminating at that given node, choose the max length. Without memoization, this solution would take exponential time.
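The longest-increasing-subsequence recurrence described above, as a runnable O(n^2) function (our own name; the implicit edges (i, j) are the pairs i < j with a[i] < a[j]):

```python
def lis_length(a):
    n = len(a)
    L = [1] * n                      # L[j]: best LIS ending at index j
    for j in range(n):
        for i in range(j):
            if a[i] < a[j]:          # an "edge" from i to j
                L[j] = max(L[j], L[i] + 1)
    return max(L) if L else 0
```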

Traveling Salesman Problem O(n^2 2^n)

Shortest tour for visiting all nodes.

    C({1},1) = 0
    for s = 2 to n:
        for all subsets S of {1,2,...,n} of size s containing 1:
            C(S,1) = infinity
            for all j in S, j != 1:
                C(S,j) = min{C(S-{j},i)+d_ij : i in S, i != j}
    return min over j of C({1,...,n},j)+d_j1

Longest Increasing Subsequence algorithm:

    L = {}
    for j = 1,2,...,n:
        L[j] = 1+max{L[i]:(i,j) in E}
        # The (i,j) represents all the edges that go from
        # a node to j.
    return max(L)

Linear Programming

Feed into an LP solver (like Simplex): an Objective Function, which states whether you want to maximize or minimize an equation (e.g. max(x + 2y)), and Constraints, which are limitations on the variables of the Objective Function (e.g. x >= 0, y <= 600).
To turn a maximization problem into a minimization (or vice versa), just multiply the coefficients of the objective function by -1.
To turn an inequality constraint like sum_{i=1}^{n} a_i x_i <= b into an equation, introduce a new variable s and use
    sum_{i=1}^{n} a_i x_i + s = b,  s >= 0.
s is known as a slack variable.
To change an equality constraint into inequalities, rewrite ax = b as ax <= b and ax >= b.
If a linear program has an unbounded value, then its dual must be infeasible.

Simplex: typically polynomial time; worst case exponential.

Choosing Subproblems

Suffixes x[i:], for all i: O(n) subproblems, broken from i to n. Topological order is right to left.
Prefixes x[:i], for all i: O(n) subproblems, broken from the start to i. Topological order is left to right.
Substrings x[i:j], for all i, j: O(n^2) subproblems; fragments of the problem, a combination of suffix and prefix. Topological order is increasing substring size.

Parenthesization recurrence:
    C(i, j) = min over i <= k < j of {C(i, k) + C(k + 1, j) + m_{i-1} m_k m_j}

Edit Distance (Spelling Suggestions)

This algorithm works by basically choosing the min of the options for every given letter: the options being adding a gap in between letters of one of the strings, or matching the two letters and moving on. For example, SNOWY and SUNNY have an edit distance of 3 with this configuration:

    S _ N O W Y
    S U N N _ Y

    E(i,0) = i
    E(0,j) = j
    for i = 1,2,...,m:
        for j = 1,2,...,n:
            E(i,j) = min{E(i-1,j)+1, E(i,j-1)+1, E(i-1,j-1)+diff(i,j)}
    return E(m,n)
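The edit-distance table above is directly runnable; here diff(i, j) is 0 when the characters match and 1 otherwise:

```python
def edit_distance(x, y):
    m, n = len(x), len(y)
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        E[i][0] = i                  # delete all of x[:i]
    for j in range(n + 1):
        E[0][j] = j                  # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diff = 0 if x[i - 1] == y[j - 1] else 1
            E[i][j] = min(E[i - 1][j] + 1,       # gap in y
                          E[i][j - 1] + 1,       # gap in x
                          E[i - 1][j - 1] + diff)
    return E[m][n]
```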

Recursive (memoized) Fibonacci:

    memo = {}
    fib(n):
        if n in memo: return memo[n]
        if n <= 2: f = 1
        else: f = fib(n-1) + fib(n-2)
        memo[n] = f
        return f

Knapsack O(nW)

Items have a weight and a value; you want to maximize the value within a given weight W (the amount you can carry in your knapsack).
With repetition:
    K(w) = max over items j of {K(w - w_j) + v_j}
Without repetition:
    K(w, j) = max{K(w - w_j, j - 1) + v_j, K(w, j - 1)}

Simplex, iteratively:

    let v be any vertex of the feasible region
    while there is a neighbor v' of v with a better value:
        set v = v'
    return v
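The without-repetition recurrence K(w, j) above, filled in bottom-up as a runnable table:

```python
def knapsack(weights, values, W):
    # 0/1 knapsack: K[w][j] = best value using items 1..j with capacity w.
    n = len(weights)
    K = [[0] * (n + 1) for _ in range(W + 1)]
    for j in range(1, n + 1):
        for w in range(W + 1):
            K[w][j] = K[w][j - 1]                       # skip item j
            if weights[j - 1] <= w:                     # or take item j
                K[w][j] = max(K[w][j],
                              K[w - weights[j - 1]][j - 1] + values[j - 1])
    return K[W][n]
```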

Max Flow

Construct a graph G that is a simple directed graph with source s and sink t; no negative capacities are possible. Construct a Residual Graph with forward edges for the amount of capacity that is not being used (capacity minus current use) and a back edge for what is currently being used.

Ford-Fulkerson

Start with no flow on your graph. Find a path using DFS, then create a residual graph G_R and use it for finding a path from s to t. If one exists, we are not optimally using our flow. Find the edge on that path with the smallest capacity: this is our bottleneck. Add flow onto all the edges in that path up to the capacity of the bottleneck edge, thereby maximizing the flow of the path. Our new flow is guaranteed to be better. Create a new residual graph and repeat until no path in the residual graph can be found from s to t. This will happen when capacity equals current use: we lose our forward edge and only have a back edge.
This algorithm will sometimes decrease flow in one area to increase flow in another. Max flow is the total weight of incoming edges to the sink. The runtime would be O(E * M), where M is the number of iterations and O(E) is the time to find a path; this can also be stated as O(maxflow * E). However, this may not terminate.
If you use BFS instead of DFS, you are using the Edmonds-Karp Algorithm, which will terminate and has complexity O(V * E^2), which is better for large networks.
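The BFS variant (Edmonds-Karp) above can be sketched compactly; cap is a dict-of-dicts of remaining capacities, and residual back edges are created on the fly:

```python
from collections import deque

def max_flow(cap, s, t):
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in cap.get(u, {}):
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck along the path.
        bottleneck, v = float('inf'), t
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        # Push flow: reduce forward capacities, grow back edges.
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap.setdefault(v, {})
            cap[v][u] = cap[v].get(u, 0) + bottleneck
            v = u
        flow += bottleneck
```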

NP-Hard

Any problem in NP can be reduced to this problem in polynomial time, so it is at least as difficult (or hard) as any NP problem. The Halting Problem is NP-hard and also impossible to solve in a finite amount of time, so this idea is not always practically useful for reductions. Most NP-hard problems are not in NP, but those that are are NP-Complete.

Rudrata (Hamiltonian) Path: a path that goes through each vertex exactly once.

NP-Complete

A problem that is not only as hard as every problem in NP, but is also itself in NP. Any NP problem can be reduced to one of these in polynomial time, which is often useful to prove the difficulty of problems. These are the hardest problems in NP, and the reason why P = NP would revolutionize things.
 ) - 1*'0/$*)$5 /#$)".

Independent Set

Given a graph and a number g, the aim is to find g vertices that are independent, meaning that no two of them share an edge.

Graph Coloring

Given an undirected graph G, find a valid coloring of G such that no two vertices sharing the same edge have the same color, or report that such a coloring doesn't exist.

Hard problems (NP-complete)      Easy problems (in P)
3SAT                             2SAT, Horn SAT
Traveling Salesman Problem       Minimum Spanning Tree
Longest Path                     Shortest Path
3D Matching                      Bipartite Matching
Knapsack                         Unary Knapsack
Independent Set                  Independent Set on trees
Integer Linear Programming       Linear Programming
Rudrata Path                     Euler Path
Balanced Cut                     Minimum Cut

Max Flow Min Cut Theorem

The size of the maximum flow in a network equals the capacity of the smallest (s,t)-cut, where an (s,t)-cut partitions the vertices into two disjoint groups L and R such that s (the start) is in L and t (the goal) is in R.

Bipartite Matching

Explained by example: take a list of boys and a list of girls; if a boy likes a girl, a directed edge exists from the boy to the girl. Is there a perfect matching? Create a source node s and a sink node t; s has outgoing edges to all the boys, and t has incoming edges from all the girls. Give every edge a capacity of one. Obviously, a perfect matching exists if there is a flow into t with size equal to the number of couples.

Computational Complexity

We use computational complexity to determine classifications for our algorithms, to know whether they are feasible.

Decision vs. Search Problems

Decision Problem: a computational problem that answers Yes or No. Our input can be any possible (binary) string, and the problem will answer either Yes or No depending upon whether the proposed solution is correct or not.
Search Problem: a computational problem tasked not only with whether a solution exists, but with what that solution is. Decision problems can be derived from search problems, which are generally always more difficult.

Classifications

P: The set of all search problems that are solvable in a reasonable amount of time (Polynomial time).
NP (Nondeterministic Polynomial): The set of all search problems whose solution can be checked in Polynomial time (this includes P); there might exist search problems whose solutions cannot be checked in polynomial time. A solution may not necessarily be found in a reasonable amount of time. 2^n and n! algorithms can be in NP. It is called NP because, if you had the power to guess correctly every time, it would work in Polynomial time, making it nondeterministic; it could just as well be called Guessing in P.


Common NP-Complete Problems

Satisfiability (SAT): This is the prototypical NP-Complete problem that everything started from. Say you have some Boolean expressions written using only AND, OR, NOT, variables, and parentheses (example: x1 AND x2 OR NOT x3). The SAT problem is: given any one of these expressions, is there some assignment of TRUE and FALSE values to the variables that will make the entire expression TRUE?

3SAT: This is a stricter version of the SAT problem in which the statement is divided into clauses, where each clause can have exactly 3 literals (example: (x1 OR x2 OR x3) AND (x4 OR x5 OR x6)). For these, you want to find whether there exist values for x1...x6 such that the Boolean expression evaluates to TRUE.

Circuit SAT: Given a circuit of logic gates with a single output and no loops, is there a setting of the inputs that causes the circuit to output TRUE?

Integer Linear Programming: Solve a problem using a linear objective function and linear inequalities, while constraining the values of the variables to integers.

Traveling Salesman Problem: Find the shortest path in a graph that visits all the vertices exactly once before returning home. This comes from the idea of a traveling salesman wanting to efficiently visit all the cities to sell his wares.

Rudrata (Hamiltonian) Path: Given a graph, find a path starting at s and ending at t that goes through each vertex exactly once, or report that one doesn't exist.

Rudrata (Hamiltonian) Cycle: Given a graph, find if there is a cycle that passes through each vertex exactly once, or report that one doesn't exist.

Reductions

Reductions are an incredibly useful tool for either turning a search problem we don't know how to solve into one we do, or proving that a search problem cannot be solved (or is hard to solve) by reducing to it a problem that is one of those two things.
We know how to solve B in a reasonable amount of time, and we want to use this knowledge to solve A.
We denote a reduction from A to B as A -> B. Difficulty flows in the direction of the arrow.
If we can reduce our unknown problem A into a known problem B, then B must be as hard, if not even harder, to solve than A. A way to write this mathematically is A <=_p B; A thereby provides a lower bound of hardness for B.
Reductions can be composed as well: if you can reduce A -> B and B -> C, then A -> C.
Any search problem in NP can be reduced to an NP-Hard problem in polynomial time, but this is not always useful (consider the Halting Problem).
Any search problem in NP can also be reduced to an NP-Complete problem in polynomial time.
Any problem that is NP-Complete is reducible to any other problem that is also NP-Complete, which is very useful.

Notation: OPT(I) is the optimum value on instance I, and A(I) is the value algorithm A returns on I.

Dealing with NP-Completeness

Branch-and-Bound

Another technique, in which you don't process a whole portion of the search tree because, due to some formula, you know it will not offer a more efficient solution. For example, if you need to find a Hamiltonian path and you have thus far a path A with some weight and another path B with a larger weight, and you are given a lower bound on the cost of completing each path, you can disregard the brute-force exploration of the worse path, since it won't be better than what you already have. Finding the lower bound is the really tricky part.

    Start with some problem P_0
    Let S = {P_0}, the set of active subproblems
    bestsofar = infinity
    Repeat while S is nonempty:
        choose a subproblem (partial solution) P in S and remove it from S
        expand it into smaller subproblems P_1, P_2, ..., P_k
        For each P_i:
            If P_i is a complete solution: update bestsofar
            else if lowerbound(P_i) < bestsofar: add P_i to S
    return bestsofar

Vertex Cover

Input: an undirected graph G = (V, E). Output: a subset of the vertices S contained in V that touches every edge. Goal: minimize |S|. A special case of Set Cover; the greedy approach is within a factor of O(log n).

Backtracking

Backtracking is the idea of rejecting a portion of a solution that in some manner proves to be useless, without fully processing it. Say you find a violation of a certain criterion set in your algorithm: then you can stop, no longer process the children of that node, and move up to another node to try its luck; it's not going to work anyway, since it didn't work for the parent. More abstractly, a backtracking algorithm requires a test that looks at a subproblem and quickly declares one of three outcomes, one of which is Success: a solution to the subproblem is found.

    Start with some problem P_0
    Let S = {P_0}, the set of active subproblems
    Repeat while S is nonempty:
        choose a subproblem P in S and remove it from S
        expand it into smaller subproblems P_1, P_2, ..., P_k
        For each P_i:
            If test(P_i) succeeds: halt and announce this solution
            If test(P_i) fails: discard P_i
            Otherwise: add P_i to S
    Announce that there is no solution

The other two outcomes of the backtracking test are Failure: the subproblem has no solution (backtrack), and Uncertainty: branch again and retry.

Nearest Neighbors

    def Classify(x):
        set i* = 1
        for i = 2, 3, ..., n:
            if ||x-x_i|| < ||x-x_i*||, set i* = i
        return y_i*

This is our Classify algorithm. It runs in O(nd).

Approximation Ratio

Finding OPT(I) is the challenging part, but the ratio is always positive.
For Maximization: alpha_A = max over I of OPT(I)/A(I)
For Minimization: alpha_A = max over I of A(I)/OPT(I)

How do we make classification more accurate? By using a voting system. Given x, we compute the distance from x to each observation in the training set and then keep the k closest observations. Intuitively, the class of each of them gives a plausible suggestion for the class of x; therefore we can treat each as a vote for the class of x. We take these k votes, find which class received the most votes, and classify x with this class. In many applications this improves the accuracy of the nearest-neighbor classifier by smoothing out the classifier's predictions.
For Boolean classification it is often convenient to choose k to be odd, to avoid ties.
Runs in O(n(d + lg k)), but for small values of k this is almost as fast as the basic classifier.
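The Classify loop above, as a runnable 1-nearest-neighbor function over a labeled training set (tuple points; names are our own):

```python
def classify(x, train):
    # train: list of (point, label) pairs; returns the label of the
    # training point nearest to x (squared Euclidean distance).
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    best_point, best_label = min(train, key=lambda pt: dist2(x, pt[0]))
    return best_label
```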

9.2.3 TSP

By the triangle inequality, these bypasses can only make the overall tour shorter.

The triangle inequality played a crucial role in making the k-CLUSTER problem approximable.
It also helps with the TRAVELING SALESMAN PROBLEM: if the distances between cities satisfy
the metric properties, then there is an algorithm that outputs a tour of length at most 1.5
times optimal. We'll now look at a slightly weaker result that achieves a factor of 2.
Continuing with the thought processes of our previous two approximation algorithms, we
can ask whether there is some structure that is easy to compute and that is plausibly related
to the best traveling salesman tour (as well as providing a good lower bound on OPT). A little
thought and experimentation reveals the answer to be the minimum spanning tree.
Let's understand this relation. Removing any edge from a traveling salesman tour leaves
a path through all the vertices, which is a spanning tree. Therefore,
TSP cost >= cost of this path >= MST cost.

Now, we somehow need to use the MST to build a traveling salesman tour. If we can use each
edge twice, then by following the shape of the MST we end up with a tour that visits all the
cities, some of them more than once. Here's an example, with the MST on the left and the
resulting tour on the right (the numbers show the order in which the edges are taken).
[Figure: the MST (left) and the resulting doubled-edge tour (right) over Wichita, Tulsa, Little Rock, Albuquerque, Amarillo, Dallas, El Paso, Houston, and San Antonio; the numbers label the order in which the edges are taken.]

Therefore, this tour has a length at most twice the MST cost, which as we've already seen is
at most twice the TSP cost.
This is the result we wanted, but we aren't quite done, because our tour visits some cities
multiple times and is therefore not legal. To fix the problem, the tour should simply skip any
city it is about to revisit, and instead move directly to the next new city in its list:
[Figure: the legal tour obtained by skipping revisited cities, over the same nine cities.]

Finally, there is another class of problems, between the first two given here, for which
the approximation ratio is about log n. SET COVER is an example.
(A humbling reminder: All this is contingent upon the assumption P != NP. Failing this,
this hierarchy collapses down to P, and all NP-complete optimization problems can be solved
exactly in polynomial time.)
A final point on approximation algorithms: often these algorithms, or their variants, perform much better on typical instances than their worst-case approximation ratio would have
you believe.

9.3 Local search heuristics


Our next strategy for coping with NP-completeness is inspired by evolution (which is, after
all, the world's best-tested optimization procedure): by its incremental process of introducing
small mutations, trying them out, and keeping them if they work well. This paradigm is
called local search and can be applied to any optimization task. Here's how it looks for a
minimization problem.
    let s be any initial solution
    while there is some solution s' in the neighborhood of s
        for which cost(s') < cost(s): replace s by s'
    return s
On each iteration, the current solution is replaced by a better one close to it, in its neighborhood. This neighborhood structure is something we impose upon the problem and is the
central design decision in local search. As an illustration, let's revisit the traveling salesman
problem.
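For TSP, a natural neighborhood is the set of tours reachable by reversing one segment (a 2-change). A minimal sketch of that local search, under our own naming:

```python
def tour_length(tour, D):
    return sum(D[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, D):
    # Repeatedly reverse a segment whenever doing so shortens the tour;
    # stops at a local optimum (which need not be globally optimal).
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, D) < tour_length(tour, D):
                    tour, improved = cand, True
    return tour
```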

9.3.1

v1 = 117,586,003, v2 = 738,493,291, v3 = 238,827,453,

But what if we are interested in instances of TSP that do not satisfy the triangle inequality?
It turns out that this is a much harder problem to approximate.
Here is why: Recall that on page 259 we gave a polynomial-time reduction which given
any graph G and integer any C > 0 produces an instance I(G, C) of the TSP such that:
(i) If G has a Rudrata path, then

OPT (I(G, C))

(ii) If G has no Rudrata path, then

Traveling salesman, once more

Discard any item with weight > W


Let vmax = maxi vi
Rescale values v!i = vi vnmax
Run the dynamic programming algorithm with values {!
vi}
Output the resulting choice of items

n + C.

This means that even an approximate solution to TSP would enable us to solve R UDRATA
PATH ! Lets work out the details.
Consider an approximation algorithm A for TSP and let A denote its approximation ratio.
From any instance G of R UDRATA PATH, we will create an instance I(G, C) of TSP using the
specific constant C = nA . What happens when algorithm A is run on this TSP instance? In
case (i), it must output a tour of length at most A OPT(I(G, C)) = nA , whereas in case (ii) it
must output a tour of length at least OPT(I(G, C)) > n A . Thus we can figure out whether G
has a Rudrata path! Here is the resulting procedure:
Given any graph G:
compute I(G, C) (with C = n A ) and run algorithm A on it
if the resulting tour has length nA :
conclude that G has a Rudrata path
else: conclude that G has no Rudrata path
This tells us whether or not G has a Rudrata path; by calling the procedure a polynomial
number of times, we can find the actual path (Exercise 8.2).
Weve shown that if TSP has a polynomial-time approximation algorithm, then there is
a polynomial algorithm for the NP-complete R UDRATA PATH problem. So, unless P = NP,
there cannot exist an efficient approximation algorithm for the TSP.

9.2.4 Knapsack

Our last approximation algorithm is for a maximization problem and has a very impressive
guarantee: given any ε > 0, it will return a solution of value at least (1 − ε) times the optimal
value, in time that scales only polynomially in the input size and in 1/ε.
The problem is KNAPSACK, which we first encountered in Chapter 6. There are n items,
with weights w1, . . . , wn and values v1, . . . , vn (all positive integers), and the goal is to pick the
most valuable combination of items subject to the constraint that their total weight is at most
W.
Earlier we saw a dynamic programming solution to this problem with running time O(nW).
Using a similar technique, a running time of O(nV) can also be achieved, where V is the sum
of the values. Neither of these running times is polynomial, because W and V can be very
large, exponential in the size of the input.
Let's consider the O(nV) algorithm. In the bad case when V is large, what if we simply
scale down all the values in some way? For instance, if the values were nine-digit numbers
such as

  v1 = 117,586,003   v2 = 738,493,291   v3 = 238,827,453

we could simply knock off some precision and instead use 117, 738, and 238. This doesn't
change the problem all that much and will make the algorithm much, much faster!
Now for the details. Along with the input, the user is assumed to have specified some
approximation factor ε > 0.

  Discard any item with weight greater than W
  Let vmax = max_i vi
  Rescale the values: v̂i = ⌊vi · n/(ε·vmax)⌋
  Run the O(nV) dynamic programming algorithm with values v̂1, . . . , v̂n
  Output the resulting choice of items

Let's see why this works. First of all, since the rescaled values v̂i are all at most n/ε, the
dynamic program is efficient, running in time O(n³/ε).
Now suppose the optimal solution to the original problem is to pick some subset of items
S, with total value K∗. The rescaled value of this same assignment is

  ∑_{i∈S} v̂i  =  ∑_{i∈S} ⌊vi · n/(ε·vmax)⌋  ≥  ∑_{i∈S} (vi · n/(ε·vmax) − 1)  ≥  K∗ · n/(ε·vmax) − n.

Therefore, the optimal assignment for the shrunken problem, call it Ŝ, has a rescaled value of
at least this much. In terms of the original values, assignment Ŝ has a value of at least

  ∑_{i∈Ŝ} vi  ≥  (ε·vmax/n) · ∑_{i∈Ŝ} v̂i  ≥  (ε·vmax/n) · (K∗ · n/(ε·vmax) − n)  =  K∗ − ε·vmax  ≥  K∗(1 − ε),

where the final inequality uses vmax ≤ K∗ (any single item of weight at most W is by itself a
feasible solution).
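Putting the rescaling step and its guarantee together, the whole scheme might be sketched in Python as follows. The value-indexed dynamic program (minimum weight needed to reach each total value) stands in for the O(nV) algorithm mentioned above; names like knapsack_fptas are illustrative.

```python
from math import floor

def value_dp(weights, values, W):
    # O(nV) dynamic program: A[v] = minimum weight achieving total value v.
    # Returns the chosen item indices for the best value reachable within W.
    V = sum(values)
    INF = float('inf')
    A = [0] + [INF] * V
    pick = [[] for _ in range(V + 1)]
    for i, (w, v) in enumerate(zip(weights, values)):
        for total in range(V, v - 1, -1):  # downward: each item used once
            if A[total - v] + w < A[total]:
                A[total] = A[total - v] + w
                pick[total] = pick[total - v] + [i]
    best = max(v for v in range(V + 1) if A[v] <= W)
    return pick[best]

def knapsack_fptas(weights, values, W, eps):
    # Discard items that cannot fit, rescale values, run the DP on them.
    items = [i for i in range(len(values)) if weights[i] <= W]
    if not items:
        return []
    n, vmax = len(items), max(values[i] for i in items)
    scaled = [floor(values[i] * n / (eps * vmax)) for i in items]
    chosen = value_dp([weights[i] for i in items], scaled, W)
    return [items[i] for i in chosen]  # map back to original indices
```

Since every rescaled value is at most n/ε, the table has at most n²/ε + 1 entries and the sketch runs in the O(n³/ε) time claimed above, while the returned set is guaranteed a value of at least (1 − ε) times optimal.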

9.2.5 The approximability hierarchy

Given any NP-complete optimization problem, we seek the best approximation algorithm
possible. Failing this, we try to prove lower bounds on the approximation ratios that are
achievable in polynomial time (we just carried out such a proof for the general TSP). All told,
NP-complete optimization problems are classified as follows:
• Those for which, like the TSP, no finite approximation ratio is possible.

• Those for which an approximation ratio is possible, but there are limits to how small
this can be. VERTEX COVER, k-CLUSTER, and the TSP with triangle inequality belong
here. (For these problems we have not established limits to their approximability, but
these limits do exist, and their proofs constitute some of the most sophisticated results
in this field.)

• Down below we have a more fortunate class of NP-complete problems for which approximability has no limits, and polynomial approximation algorithms with error ratios
arbitrarily close to zero exist. KNAPSACK resides here.


Assume we have all interpoint distances between n cities, giving a search space of (n − 1)!
different tours. What is a good notion of neighborhood?
The most obvious notion is to consider two tours as being close if they differ in just a few
edges. They can't differ in just one edge (do you see why?), so we will consider differences of
two edges. We define the 2-change neighborhood of tour s as being the set of tours that can be
obtained by removing two edges of s and then putting in two other edges. Here's an example
of a local move: remove two edges of the tour and reconnect the two resulting paths the other
way around, reversing one of them.
We now have a well-defined local search procedure. How does it measure up under our two
standard criteria for algorithms: what is its overall running time, and does it always return
the best solution?
Embarrassingly, neither of these questions has a satisfactory answer. Each iteration is
certainly fast, because a tour has only O(n²) neighbors. However, it is not clear how many
iterations will be needed: whether, for instance, there might be an exponential number of
them. Likewise, all we can easily say about the final tour is that it is locally optimal; that
is, it is superior to the tours in its immediate neighborhood. There might be better solutions
further away. For instance, local search can settle on a final answer that is clearly
suboptimal; the range of local moves is simply too limited to improve upon it.
To overcome this, we may try a more generous neighborhood, for instance 3-change, consisting of tours that differ on up to three edges. And indeed, the preceding bad case gets
fixed.
But there is a downside, in that the size of a neighborhood becomes O(n³), making each
iteration more expensive. Moreover, there may still be suboptimal local minima, although
fewer than before. To avoid these, we would have to go up to 4-change, or higher. In this
manner, efficiency and quality often turn out to be competing considerations in a local search.
Efficiency demands neighborhoods that can be searched quickly, but smaller neighborhoods
can increase the abundance of low-quality local optima. The appropriate compromise is typically determined by experimentation.

Figure 9.7 (a) Nine American cities: Wichita, Tulsa, Albuquerque, Amarillo, Little Rock,
Dallas, El Paso, Houston, and San Antonio. (b) Local search, starting at a random tour and
using 3-change. The traveling salesman tour is found after three moves.