
Massachusetts Institute of Technology Handout 8

6.046J/18.410J: Introduction to Algorithms February 15, 2001


Professors Piotr Indyk and Madhu Sudan

Problem Set 1 Solutions


Problem 1-1. Ranking Functions by Order of Growth
The ranking is based on the following facts:
• exponential functions grow faster than polynomial functions, which grow faster than logarithmic functions;
• the base of a logarithm does not matter asymptotically, but the base of an exponential and the degree of a polynomial do matter;
• Stirling's approximation for n! is useful for dealing with factorials asymptotically.
The ranking, from slowest-growing to fastest-growing (functions grouped in braces are Θ of one another), is:

(2/3)^n
1
lg* n
lg lg n
{ ln n , lg n^2 }
√(2^(lg n))
{ n , 2^⌊lg n⌋ , 2^⌈lg n⌉ }
{ n lg n , lg(n!) }
4^(lg n)
{ n^3 , n^3 + n^2 }
(lg n)!
n^(lg lg n)
n^(lg n)
(3/2)^n
e^n
2^(2n)
(n − 1)!
n!
(2n)!
2^(2^n)
Some notes:

• (2/3)^n approaches zero as n → ∞, so it grows more slowly than the constant function 1.

• n = 2^(lg n) ≥ 2^⌊lg n⌋ ≥ 2^(lg n − 1) = (1/2) · 2^(lg n) = n/2, and so 2^⌊lg n⌋ = Θ(n). Similarly, n ≤ 2^⌈lg n⌉ ≤ 2n, and so also 2^⌈lg n⌉ = Θ(n).

• √(2^(lg n)) = 2^((1/2) lg n) = n^(1/2) = √n, and similarly 4^(lg n) = 2^(2 lg n) = n^2.

• By Stirling's approximation, lg(n!) ≈ lg(√(2πe) (n/e)^(n + 1/2)) = Θ(n lg n).

• (lg n)! ≈ √(2πe) ((lg n)/e)^(lg n + 1/2) = √(2π) (lg n)^(lg n) √(lg n) / n^(lg e). Now notice that (lg n)^(lg n) = (2^(lg lg n))^(lg n) = (2^(lg n))^(lg lg n) = n^(lg lg n). Using this we get (lg n)! ≈ √(2π) n^(lg lg n − lg e) √(lg n). This is asymptotically greater than any polynomial in n, since for any polynomial n^d there is some n beyond which lg lg n − lg e > d. However, it is strictly less than n^(lg lg n), since the two differ by a factor of n^(lg e)/√(lg n) = ω(1).
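The ranking above can be sanity-checked numerically. The sketch below (an illustration, not a proof) compares lg f(n) at n = 2^20, using one representative per Θ-group; working in log space keeps the huge values representable as floats. It omits lg* n, whose advantage over lg lg n only appears for tower-sized n, and checks the final pair in lg lg space, since lg(2^(2^n)) = 2^n itself overflows a float at this n.

```python
import math

n = 2 ** 20
lg = math.log2
lg_factorial = lambda m: math.lgamma(m + 1) / math.log(2)  # lg(m!)

# One representative per Theta-group, paired with lg f(n).
ranked = [
    ("(2/3)^n",     n * lg(2 / 3)),
    ("1",           0.0),
    ("lg lg n",     lg(lg(lg(n)))),
    ("ln n",        lg(math.log(n))),
    ("sqrt(n)",     lg(n) / 2),            # sqrt(2^(lg n)) = sqrt(n)
    ("n",           lg(n)),
    ("n lg n",      lg(n) + lg(lg(n))),
    ("4^(lg n)",    2 * lg(n)),            # = n^2
    ("n^3",         3 * lg(n)),
    ("(lg n)!",     lg_factorial(lg(n))),
    ("n^(lg lg n)", lg(n) * lg(lg(n))),
    ("n^(lg n)",    lg(n) ** 2),
    ("(3/2)^n",     n * lg(3 / 2)),
    ("e^n",         n * lg(math.e)),
    ("2^(2n)",      2 * n),
    ("(n-1)!",      lg_factorial(n - 1)),
    ("n!",          lg_factorial(n)),
    ("(2n)!",       lg_factorial(2 * n)),
]
values = [v for _, v in ranked]
assert values == sorted(values), "ranking violated at this n"

# 2^(2^n) dominates (2n)!: compare lg lg of both.
assert n > lg(lg_factorial(2 * n))
```

A check at one finite n can never establish an asymptotic ordering, but it catches transcription errors in the list.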

Problem 1-2. Asymptotic Notation

(a) Sometimes true: for f(n) = n it is true, while for f(n) = 1/n it is not. (The statement is always true for f(n) = Ω(1), and hence for most functions with which we will be working in this course, in particular all time and space complexity functions.)
(b) Sometimes true: for f(n) = 1 and g(n) = |n · sin(n)| it is true, while for any f(n) = O(g(n)), e.g. f(n) = g(n) = 1, it is not.
(c) Always true: max(f(n), g(n)) ≤ f(n) + g(n) ≤ 2 max(f(n), g(n)).
(d) Always true: consider f(n) + g(n) where g(n) = o(f(n)), and let c be a constant such that g(n) < c f(n) for large enough n. Then f(n) ≤ f(n) + g(n) ≤ (1 + c) f(n) for large enough n.
(e) Never true: if f(n) = Ω(g(n)), then there exist a positive constant c_Ω and a constant n_Ω such that for all n > n_Ω, c_Ω g(n) ≤ f(n). But if f(n) = o(g(n)), then for every positive constant c there exists an n_o(c) such that for all n > n_o(c), f(n) < c g(n). If both f(n) = Ω(g(n)) and f(n) = o(g(n)) held, then for n > max(n_Ω, n_o(c_Ω)) we would have f(n) < c_Ω g(n) ≤ f(n), which is impossible.
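The pointwise inequality underlying (c) can be spot-checked numerically for arbitrary nonnegative values (a quick illustration, not a proof):

```python
import random

random.seed(1)
# max(f, g) <= f + g <= 2 max(f, g) holds pointwise for nonnegative values.
for _ in range(1000):
    f = random.uniform(0, 100)
    g = random.uniform(0, 100)
    m = max(f, g)
    assert m <= f + g <= 2 * m
```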

Problem 1-3. Insertion sort on small arrays in merge sort

(a) Insertion sort takes O(k^2) time per k-element list. Therefore, sorting n/k such k-element lists takes O(k^2 · n/k) = O(nk) time.
(b) Merging requires O(n) work at each level, since we are still working on n elements even if they are partitioned among sublists. The number of levels, starting with n/k k-element lists and finishing with one n-element list, is ⌈lg(n/k)⌉. Therefore, the total running time for the merging is O(n lg(n/k)).
(c) The largest asymptotic value of k for which the modified algorithm has the same asymptotic running time as standard merge sort is k = Θ(lg n). The combined running time is then O(nk + n lg(n/k)) = O(n lg n + n lg n − n lg lg n), which is O(n lg n). For asymptotically larger values of k, the nk term would dominate O(n lg n).
(d) In practice, k should be chosen to minimize the actual running time of the combined algorithm, cnk + dn lg(n/k), where c and d are positive constants that determine the actual costs of the two phases. Setting the derivative with respect to k to zero gives k = d/(c ln 2), so the best choice of k does not depend on n, but only on the ratio of the constants d and c.
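The two phases of the modified algorithm can be sketched as follows (a minimal Python illustration with hypothetical helper names, not the course's reference code):

```python
def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place by insertion sort -- O((hi - lo)^2) time."""
    for i in range(lo + 1, hi):
        x = a[i]
        j = i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def merge(left, right):
    """Standard O(len(left) + len(right)) merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def hybrid_merge_sort(a, k):
    """Insertion-sort n/k sublists of length <= k, then merge them pairwise."""
    a = list(a)
    if not a:
        return []
    # Phase 1: O(nk) total insertion-sort work over the n/k short runs.
    runs = []
    for lo in range(0, len(a), k):
        hi = min(lo + k, len(a))
        insertion_sort(a, lo, hi)
        runs.append(a[lo:hi])
    # Phase 2: ceil(lg(n/k)) rounds of merging, O(n) work per round.
    while len(runs) > 1:
        runs = [merge(runs[i], runs[i + 1]) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0]
```

Per part (c), choosing k proportional to lg n keeps the total O(nk + n lg(n/k)) = O(n lg n).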

Problem 1-4. Recurrences


(a) T(n) = 4T(n/2) + n^3 ⟹ T(n) = Θ(n^3)
Using the third case of the Master Theorem:

n^3 = Ω(n^(log₂ 4 + 0.1)) = Ω(n^2.1),

and the regularity condition holds, since 4(n/2)^3 = (1/2) n^3 ≤ c n^3 for c = 1/2.
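Unrolling the recurrence numerically is consistent with this: with T(1) = 1, the ratio T(n)/n^3 converges to the constant 2 (a quick check, not part of the proof):

```python
def t(k):
    """T(2^k) for T(n) = 4 T(n/2) + n^3, with T(1) = 1."""
    if k == 0:
        return 1
    n = 2 ** k
    return 4 * t(k - 1) + n ** 3

ratio = t(20) / (2 ** 20) ** 3
assert 1.99 < ratio < 2.01  # T(n)/n^3 tends to 2, so T(n) = Theta(n^3)
```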

(b) T(n) = T(n/2) + T(n/3) + n ⟹ T(n) = Θ(n)

Indeed,

T(n) = T(n/2) + T(n/3) + n ≥ n,

so T(n) = Ω(n). Now let c ≥ 6 and suppose by induction that T(k) ≤ ck for all k < n (the base case T(1) = 1 < c is true). Then

T(n) ≤ c(n/2) + c(n/3) + n = (5c/6 + 1) n ≤ cn,

given the assumption on c (5c/6 + 1 ≤ c if and only if c ≥ 6). So we have T(n) = O(n) as well.
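The bounds n ≤ T(n) ≤ 6n can be checked numerically on the integer version of the recurrence (floors instead of exact division; the inductive argument above goes through unchanged, since floors only shrink the subproblems):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def t(n):
    """T(n) = T(n/2) + T(n/3) + n with integer floors and T(1) = 1."""
    if n <= 1:
        return 1
    return t(n // 2) + t(n // 3) + n

for n in [10, 100, 10**4, 10**6]:
    assert n <= t(n) <= 6 * n  # the Omega(n) and O(n) bounds proved above
```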
(c) T(n) = 3T(n^(1/3)) + log₃ n ⟹ T(n) = Θ(lg n lg lg n)

Let n = 3^k, so that T(3^k) = 3T(3^(k/3)) + k. Let S(k) = T(3^k); then S(1) = 1 and S(k) = 3S(k/3) + k, so the second case of the Master Theorem (k = Θ(k^(log₃ 3))) gives S(k) = Θ(k lg k). Going back to n, we get T(n) = S(log₃ n) = Θ(log₃ n · lg log₃ n) = Θ(lg n lg lg n).
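For powers of three, the transformed recurrence S(k) = 3S(k/3) + k with S(1) = 1 even unrolls to an exact closed form, S(3^j) = 3^j (j + 1) = k(log₃ k + 1), which is Θ(k lg k). A quick check:

```python
def s(k):
    """S(k) = 3 S(k/3) + k with S(1) = 1, for k an exact power of 3."""
    if k == 1:
        return 1
    return 3 * s(k // 3) + k

for j in range(1, 12):
    k = 3 ** j
    assert s(k) == k * (j + 1)  # S(k) = k (log3 k + 1) = Theta(k lg k)
```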
(d) T(n) = T(n − 1) + n^4 ⟹ T(n) = Θ(n^5)

By iteration, T(n) = Σ_{i=1}^{n} i^4. Then use approximation by integrals (page 50 in the textbook) to get Θ(n^5).
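The integral approximation gives Σ_{i=1}^{n} i^4 ≈ n^5/5, which a direct computation confirms:

```python
n = 10 ** 4
total = sum(i ** 4 for i in range(1, n + 1))
# The integral bound gives sum ~ n^5 / 5, i.e. T(n) = Theta(n^5).
assert abs(total / n ** 5 - 0.2) < 1e-3
```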
(e) T(n) = T(n/2 + 5) + n^2 ⟹ T(n) = Θ(n^2)

It is reasonable to guess that T(n) has the same solution as S(n) = S(n/2) + n^2, since the difference between n/2 and n/2 + 5 becomes negligible as n → ∞. By case 3 of the Master Theorem, S(n) = Θ(n^2).

Thus we guess T(n) = Ω(n^2) and prove it by substitution. Assume that T(m) ≥ cm^2 for an appropriate constant c and for all m < n. Then we have

T(n) ≥ c(n/2 + 5)^2 + n^2 = c(n^2/4 + 5n + 25) + n^2.

With simple algebraic manipulation, c(n^2/4 + 5n + 25) + n^2 ≥ cn^2 is equivalent to (1 − 3c/4) n^2 + c(5n + 25) ≥ 0. Since c(5n + 25) is always positive, it is enough to find a value of c for which (1 − 3c/4) n^2 is positive. Any c < 4/3 makes the inequality true (notice that it may hold also for some c > 4/3). Hence we have T(n) = Ω(n^2).

Now we guess T(n) = O(n^2) and prove it by substitution. Assume that T(m) ≤ am^2 for an appropriate constant a and for all m < n. Then

T(n) ≤ a(n/2 + 5)^2 + n^2 = a(n^2/4 + 5n + 25) + n^2.


Thus T(n) ≤ an^2 whenever a(n^2/4 + 5n + 25) + n^2 ≤ an^2, which is equivalent to a(n^2/4 + 5n + 25) ≤ (a − 1) n^2. This inequality is true, for any fixed a > 4/3, once n is sufficiently large; e.g., for a = 2 it holds for all n ≥ 100. Thus we have T(n) ≤ an^2 for n ≥ 100, and hence T(n) = O(n^2).
Combining T(n) = O(n^2) and T(n) = Ω(n^2), we get T(n) = Θ(n^2).
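An integer version of the recurrence bears this out. Note that n = 10 is a fixed point of n ↦ n/2 + 5, so the base case must cover it; with T(n) = 1 for n ≤ 10, the ratio T(n)/n^2 settles near 4/3 (the limit of the geometric series 1 + 1/4 + 1/16 + ···), comfortably inside the Θ(n^2) bounds:

```python
def t(n):
    """Iteratively unroll T(n) = T(n/2 + 5) + n^2, with T(n) = 1 for n <= 10.
    (n = 10 is a fixed point of n -> n/2 + 5, so it must be a base case.)"""
    total = 1
    while n > 10:
        total += n * n
        n = n // 2 + 5
    return total

for n in [10**3, 10**4, 10**6]:
    ratio = t(n) / n**2
    assert 1.0 <= ratio <= 2.0  # consistent with T(n) = Theta(n^2)
```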
(f) T(n) = 2T(n/2) + n lg n ⟹ T(n) = Θ(n lg^2 n)

Unfortunately, the Master Method (in its basic three-case form) cannot be used, since the added term n lg n is within a logarithmic factor of n^(log₂ 2) = n, but not polynomially larger. Instead, by iteration:

T(n) = n lg n + 2 (n/2) lg(n/2) + 4 (n/4) lg(n/4) + ···
     = n (lg n + lg(n/2) + lg(n/4) + ···)
     = n Σ_{k=0}^{lg n} lg(n/2^k)
     = n Σ_{k=0}^{lg n} lg 2^k
     = n Σ_{k=0}^{lg n} k
     = n · lg n (lg n + 1)/2
     = Θ(n lg^2 n)
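For powers of two, the iteration above is exact: taking T(1) = 0 (since 1 · lg 1 = 0), the recurrence satisfies T(n) = n · lg n (lg n + 1)/2 with equality, which a short check confirms:

```python
def t(k):
    """T(2^k) for T(n) = 2 T(n/2) + n lg n, with T(1) = 0 (since lg 1 = 0)."""
    if k == 0:
        return 0
    n = 2 ** k
    return 2 * t(k - 1) + n * k  # n lg n = 2^k * k

for k in range(1, 20):
    assert t(k) == (2 ** k) * k * (k + 1) // 2  # n * lg n (lg n + 1) / 2
```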

(g) T(n) = 4T(n/2) + n^2 + n ⟹ T(n) = Θ(n^2 lg n)

This time it is fine to use the second case of the Master Method, since n^2 + n = Θ(n^(log₂ 4)) = Θ(n^2).
