CS 161 Summer 2009 Homework #2 Sample Solutions: Problem 1 (24 Points)
Regrade Policy: If you believe an error has been made in the grading of your homework,
you may resubmit it for a regrade. If the error consists of more than an error in addition of
points, please include with your homework a detailed explanation of which problems you
think you deserve more points on and why. We reserve the right to regrade your entire
homework, so your final grade may either increase or decrease.
The last line holds for n ≥ 1 and d1 > 3/log2 3. Thus we can prove by induction that
T(n) < cn log n for all n, where c = max{d1, 3d1/log2 3}.
Therefore, T (n) = O(n log n) and T (n) = Θ(n log n).
(e) (3 points) T (n) = 2T (n/2) + n/ log n
Answer: T (n) = Θ(n log log n).
Similar to part (b).
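A sketch of the recursion-tree argument (our own, assuming n is a power of 2 and writing lg for log base 2): level i has 2^i subproblems of size n/2^i, so

```latex
T(n) = \sum_{i=0}^{\lg n - 1} 2^i \cdot \frac{n/2^i}{\lg(n/2^i)}
     = n \sum_{i=0}^{\lg n - 1} \frac{1}{\lg n - i}
     = n \sum_{j=1}^{\lg n} \frac{1}{j}
     = n \cdot \Theta(\log \lg n)
     = \Theta(n \log \log n).
```

The inner sum is a harmonic series over lg n terms, which is where the extra log log n factor comes from.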
(f) (3 points) T (n) = T (n/2) + T (n/4) + T (n/8) + n
Answer: T (n) = Θ(n).
We solve this problem by guess-and-check. The total size on each level of the recurrence tree is less than n, so we guess that f(n) = n will dominate.
Base case: for n0 = 1, we have T(n) = n.
Inductive step: Assume for all i < n that c1 i ≤ T(i) ≤ c2 i. Then,
c1 n/2 + c1 n/4 + c1 n/8 + kn ≤ T(n) ≤ c2 n/2 + c2 n/4 + c2 n/8 + kn
c1 n(7/8 + k/c1) ≤ T(n) ≤ c2 n(7/8 + k/c2)
If c1 ≤ 8k and c2 ≥ 8k, then c1 n ≤ T(n) ≤ c2 n. So, T(n) = Θ(n).
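The guess can also be sanity-checked numerically. A small sketch (our own, taking the floor-division variant with k = 1 and T(0) = T(1) = 1 as a concrete instance):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # concrete instance of T(n) = T(n/2) + T(n/4) + T(n/8) + n (k = 1)
    if n <= 1:
        return 1
    return T(n // 2) + T(n // 4) + T(n // 8) + n

# T(n)/n stays within constant bounds, consistent with T(n) = Theta(n);
# the induction above suggests the ratio approaches 8k = 8 from below
ratios = [T(2 ** p) / 2 ** p for p in range(1, 30)]
assert all(1 <= r <= 8 for r in ratios)
```
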
(g) (3 points) T (n) = T (n − 1) + 1/n
Answer: T (n) = Θ(log n).
We solve this problem by algebraic substitution.
T(n) = T(n − 1) + 1/n
= T(n − 2) + 1/(n − 1) + 1/n
...
= Θ(1) + Σ_{i=1}^{n} 1/i
= Θ(1) + Θ(ln n)
= Θ(log n)
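Numerically, T(n) − ln n settles to a constant (Euler's γ plus the base-case constant), consistent with Θ(log n). A quick sketch, assuming T(1) = 1:

```python
import math

def T(n):
    # iterative unrolling of T(n) = T(n-1) + 1/n with T(1) = 1
    total = 1.0
    for i in range(2, n + 1):
        total += 1.0 / i
    return total

# T(n) - ln(n) converges (to 1 + the Euler-Mascheroni constant here),
# so T(n) = Theta(log n)
diffs = [T(10 ** p) - math.log(10 ** p) for p in (3, 4, 5)]
assert max(diffs) - min(diffs) < 1e-3
```
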
(j) (3 points) T(n) = √n T(√n) + n
Answer: T(n) = Θ(n log log n).
We solve it by algebraic substitution.
T(n) = n^(1/2) T(n^(1/2)) + n
= n^(1/2) (n^(1/4) T(n^(1/4)) + n^(1/2)) + n
= n^(3/4) T(n^(1/4)) + 2n
= n^(3/4) (n^(1/8) T(n^(1/8)) + n^(1/4)) + 2n
= n^(7/8) T(n^(1/8)) + 3n
...
= n^(1 − 1/2^k) T(n^(1/2^k)) + kn
When n^(1/2^k) falls below 2, we have k > log log n. We then have T(n) = n^(1 − 1/log n) T(2) + n log log n = Θ(n log log n).
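A numeric sanity check of the Θ(n log log n) claim (our own sketch, taking T(n) = 1 for n < 2):

```python
import math

def T(n):
    # the recurrence T(n) = sqrt(n) * T(sqrt(n)) + n, cut off at n < 2
    if n < 2:
        return 1.0
    return math.sqrt(n) * T(math.sqrt(n)) + n

# T(n) / (n * lg lg n) stays within constant bounds as n grows
for n in (2.0 ** 8, 2.0 ** 16, 2.0 ** 32):
    ratio = T(n) / (n * math.log2(math.log2(n)))
    assert 0.5 < ratio < 3.0
```
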
Problem 2 [5 points]
Answer: a = 48.
A: T(n) = 7T(n/2) + n². We have a = 7, b = 2, f(n) = n². Applying case 1 of the Master Theorem, we get T(n) = Θ(n^(log_2 7)).
A′: T′(n) = aT′(n/4) + n². We have b = 4, f(n) = n². If log_2 7 > log_4 a > 2, then applying case 1 of the Master Theorem we get T′(n) = Θ(n^(log_4 a)) = o(n^(log_2 7)), so A′ is faster than A. Solving the inequality log_4 a < log_2 7 gives a < 4^(log_2 7) = 7² = 49, so the largest integer choice is a = 48.
When log_4 a ≤ 2, by cases 2 and 3 of the Master Theorem, we do not get a faster algorithm than A.
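The threshold can be double-checked numerically; a small sketch of our own:

```python
import math

# A runs in Theta(n^(log_2 7)); A' runs in Theta(n^(log_4 a)) when log_4 a > 2.
# A' is asymptotically faster exactly when log_4 a < log_2 7,
# i.e. a < 4^(log_2 7) = 7^2 = 49, so the largest integer choice is a = 48.
assert math.isclose(4 ** math.log2(7), 49)
assert math.log(48, 4) < math.log(7, 2)   # a = 48 beats A
assert math.log(48, 4) > 2                # and case 1 still applies
```
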
Now consider the outer loop of the Insertion-Sort algorithm. It iterates n times
as the index traverses the array and it is independent of the number of inversions.
Therefore the total running time of Insertion-Sort is O(I + n) = O(kn).
Merge-Sort: Θ(n log n). Same recurrence as usual: T(n) = 2T(n/2) + Θ(n).
QuickSort:
Best-case scenario: rewrite the recurrence as T(n) = T(k) + T(n − k) + Θ(n). The depth of the recurrence tree is n/k, so T(n) = O(n²/k). When k << n, this is still close to O(n²).
Worst-case scenario (already sorted input): the recurrence becomes T(n) = T(1) + T(n − 1) + Θ(n), so T(n) = O(n²).
(b) (5 points) O(kn). The idea of the bubble-sort algorithm is, starting from the left, to compare adjacent items and keep "bubbling" the larger one to the right until all items reach their proper positions.
From 3(a), we know there are at most O(kn) inversions. Each swap operation in line 6 decreases the number of inversions by one. Thus line 6 is executed at most O(kn) times, which is the bottleneck of the running time.
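Since the homework's pseudocode is not reproduced here, the following is a generic bubble-sort sketch of our own; its swap count equals the number of inversions in the input, which is the fact the O(kn) bound rests on:

```python
def bubble_sort(a):
    # returns a sorted copy and the number of swaps performed;
    # each swap of an adjacent out-of-order pair removes exactly one inversion
    a = list(a)
    swaps = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return a, swaps

# a 1-sorted array of 6 elements has 3 inversions, hence exactly 3 swaps
arr, s = bubble_sort([2, 1, 4, 3, 6, 5])
assert arr == [1, 2, 3, 4, 5, 6] and s == 3
```
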
(c) (10 points) For the solution to this problem, we are going to use a heap. We assume
that we have a heap with the following subroutines:
We consider A as a set of sorted lists. In particular, notice that A[i] < A[i + 2k + 1],
for all i ≤ n − 2k − 1. That’s because the element at slot i can at most move forwards
during sorting to slot i + k and the element at slot i + 2k + 1 can at most move
backwards during sorting to slot i + k + 1. For simplicity, assume n is divisible by 2k.
We divide the array A into 2k lists defined as follows:
Each of these lists is sorted and of size n/(2k). We can sort the whole array in O(n log k) time using the Sort-K-Sorted algorithm below.
Algorithm 1 Sort-K-Sorted(A, n, k)
1: H = Make-Heap()
2: for i = 1 to 2k do
3: Insert(H, A[i], i)
4: end for
5: for i = 1 to n do
6: j=Extract-Min(H)
7: B[i] = A[j]
8: if j + 2k ≤ n then
9: Insert(H, A[j + 2k], j + 2k)
10: end if
11: end for
12: return B
Running time analysis: Since there are never more than 2k elements in the heap, each Extract-Min and Insert operation requires O(log k) time. The second loop is repeated n times, resulting in an O(n log k) running time.
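The same idea can be sketched in Python with the standard heapq module. Rather than tracking indices, this variant (our own) keeps a sliding window of 2k elements in the heap; since every element sits within k slots of its sorted position, the true minimum of the unsorted remainder is always inside the window, and the running time is likewise O(n log k):

```python
import heapq

def sort_k_sorted(a, k):
    """Sort a list in which every element is at most k slots from its
    sorted position, in O(n log k) time using a heap of size <= 2k."""
    n = len(a)
    heap = a[:min(2 * k, n)]      # seed the heap with the first 2k elements
    heapq.heapify(heap)
    out = []
    for i in range(len(heap), n): # slide the window across the array
        out.append(heapq.heappop(heap))
        heapq.heappush(heap, a[i])
    while heap:                   # drain the remaining window
        out.append(heapq.heappop(heap))
    return out
```

For example, `sort_k_sorted([2, 1, 4, 3, 6, 5], 1)` returns the fully sorted list while never holding more than two elements in the heap.
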
(d) (5 points) The key invariant is to show that after every iteration of the loop, the heap
contains the smallest element in every list. A formal induction proof is required here.
Alternative Solution: Recall that we already know how to merge two sorted lists containing n total elements in O(n) time. It is possible, then, to merge the lists in a tournament. For example, for 2k = 8, where A + B means merging lists A and B:
Round 1: (A1 + A2 ), (A3 + A4 ), (A5 + A6 ), (A7 + A8 )
Round 2: (A1 + A2 + A3 + A4 ), (A5 + A6 + A7 + A8 )
Round 3: (A1 + A2 + A3 + A4 + A5 + A6 + A7 + A8 )
Notice that there are log k merge steps, each of which merges n elements and hence has a
cost of O(n). This leads to the total running time of O(n log k).
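The tournament can be sketched as follows (function names are ours):

```python
def merge(xs, ys):
    # standard O(len(xs) + len(ys)) merge of two sorted lists
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out + xs[i:] + ys[j:]

def tournament_merge(lists):
    # log(#lists) rounds of pairwise merges, O(n) total work per round
    while len(lists) > 1:
        lists = [merge(lists[i], lists[i + 1]) if i + 1 < len(lists) else lists[i]
                 for i in range(0, len(lists), 2)]
    return lists[0]
```

With 2k sorted lists of total size n, there are log(2k) rounds at O(n) each, matching the O(n log k) bound above.
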
(b) (5 points) Split the array into m = 2^i subarrays. Call Merge-Sort on each subarray on a separate machine. Once machines start returning answers, use free machines to merge subarrays that are adjacent to each other.
Alternative Solution: At the Merge step, we can use m/2 machines to merge all pairs
of sorted subarrays, then m/4 machines for the next merge step, etc. This will give the
same Big-O running time.
Problem 5 [8 points]
Insertion-Sort: Stable. We can prove it using the following loop invariant: At the start of each iteration of the for loop of lines 1-8 (page 17 of CLRS), if A[a] = A[b] with a < b ≤ j − 1 and a, b distinct, then A[a] appeared before A[b] in the initial array. Proof by induction is omitted here.
Merge-Sort: Stable. We can prove it by induction on the fact that Merge-Sort (page
29 and 32 on CLRS) is stable.
Base case: When we call Merge-Sort with indices p and r such that p ≥ r (therefore p == r), we return the same array. So calling Merge-Sort on an array of size one returns the same array, which is stable.
Induction: We assume that calling Merge-Sort on an array of size less than n returns a
stably sorted array. We then show that if we call Merge-Sort on an array of size n, we
also return a stably sorted array. Each call of Merge-Sort contains two calls to Merge-Sort on half arrays, and in addition, merges these two subarrays and returns. Since we assume that the calls to Merge-Sort on smaller arrays return stably sorted arrays, we only need to show that the Merge step on two stably sorted arrays returns a stable array. If A[i] = A[j], i < j in the initial array, we need f(i) < f(j) in the new array, where f is the function which gives the new positions in the sorted array. If i and j are in the same half of the recursive Merge-Sort call, then by assumption it holds. If i and j are in different subarrays, then we know that i is in the left subarray and j is in the right subarray, since i < j. In the Merge step, we take elements from the left subarray while they are less than or equal to the current right-subarray element (line 13). Therefore, we will take element i before taking element j, so f(i) < f(j), which is the claim we are trying to prove.
Bubble-Sort: Stable. Formal proof by induction is omitted here. The key point is
that equal value elements are not swapped (line 5 in the pseudocode in Problem 3(b)).
QuickSort: NOT stable. To make it stable, we can add a new field to each array el-
ement, which indicates the index of that element in the original array. Then, when we
sort, we can sort on the condition (line 4 on page 146 of CLRS) A[j] < x OR (A[j] == x AND index(A[j]) < index(x)). At the end of the sort, we are guaranteed to have all the elements in sorted order, and for any i < j such that A[i] equals A[j], we will have index(A[i]) < index(A[j]), so the sort is stable.
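The decorate-with-index idea can be sketched as follows (our own simple out-of-place variant, not CLRS's in-place Partition): each element is tagged with its original index, and ties on the sort key are broken by that index, so equal elements keep their input order.

```python
def stable_quicksort(a, key=lambda v: v):
    """Quicksort made stable by tagging each element with its original
    index and breaking ties on that index (decorate / sort / undecorate)."""
    def qs(pairs):
        if len(pairs) <= 1:
            return pairs
        pivot = pairs[len(pairs) // 2]
        # (key, index) pairs are distinct, so comparisons never reach the value
        return (qs([p for p in pairs if p < pivot])
                + [p for p in pairs if p == pivot]
                + qs([p for p in pairs if p > pivot]))
    decorated = [(key(v), i, v) for i, v in enumerate(a)]
    return [v for _, _, v in qs(decorated)]
```

For example, sorting [("b", 1), ("a", 2), ("b", 0)] by first component keeps ("b", 1) before ("b", 0), as stability requires.
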
≥ log((k!)^⌊n/k⌋)
= ⌊n/k⌋ · log(k!)
≥ ⌊n/k⌋ · c1 k log k
≥ c1 (n − k) log k
≥ c1 (n/2) log k      (for k ≤ n/2)
= Ω(n log k)