Divide and Conquer

The document discusses two divide-and-conquer algorithms: merge sort and quicksort. Merge sort divides the problem into two equal subproblems, sorts them recursively, and then merges the results. This takes O(n log n) time. Quicksort chooses a pivot element and partitions the array into two subarrays based on element values relative to the pivot, before recursively sorting the subarrays. In the average case this also takes O(n log n) time, but the worst case is O(n^2).

Uploaded by

Ian Sikobe

DIVIDE AND CONQUER
selinao@uonbi.ac.ke
Divide-and-Conquer
• Divide the problem into a number of subproblems that are smaller instances of the same problem.
• Conquer the subproblems by solving them recursively. If the subproblems are small enough, solve them directly.
• Combine the solutions to the subproblems into the solution for the original problem.
Merge Sort
• Follows the divide-and-conquer approach.
• Divide: divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
• Conquer: sort the two subsequences recursively using merge sort.
• Combine: merge the two sorted subsequences to produce the sorted answer.
• The key operation of merge sort is the merging of the two sorted subsequences.
Merge sort Pseudocode
MergeSort(A, p, r)
  if p < r
    q = ⌊(p + r)/2⌋
    MergeSort(A, p, q)
    MergeSort(A, q+1, r)
    Merge(A, p, q, r)

Merge(A, p, q, r)
  n1 = q - p + 1
  n2 = r - q
  let L[1..n1+1] and R[1..n2+1] be new arrays
  for i = 1 to n1
    L[i] = A[p+i-1]
  for j = 1 to n2
    R[j] = A[q+j]
  L[n1+1] = ∞
  R[n2+1] = ∞
  i = 1
  j = 1
  for k = p to r
    if L[i] ≤ R[j]
      A[k] = L[i]
      i = i + 1
    else
      A[k] = R[j]
      j = j + 1
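The pseudocode above translates directly into runnable Python; a minimal sketch (function names are illustrative, and math.inf plays the role of the ∞ sentinels):

```python
import math

def merge(A, p, q, r):
    # Copy the two sorted halves, appending infinity sentinels so
    # neither copy is ever exhausted before the loop ends.
    L = A[p:q + 1] + [math.inf]
    R = A[q + 1:r + 1] + [math.inf]
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p, r):
    if p < r:
        q = (p + r) // 2          # midpoint of A[p..r]
        merge_sort(A, p, q)       # sort left half
        merge_sort(A, q + 1, r)   # sort right half
        merge(A, p, q, r)         # merge the two sorted halves
```

Example usage: for `data = [5, 2, 4, 7, 1, 3, 2, 6]`, calling `merge_sort(data, 0, len(data) - 1)` sorts `data` in place.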
Loop invariant for merge sort
• At the start of each iteration of the for loop, the subarray A[p..k-1] contains the k-p smallest elements of L and R, in sorted order.

• Initialization: k = p, so the subarray A[p..k-1] is empty and the invariant holds trivially.

• Maintenance: the smaller of L[i] and R[j] is copied into A[k]; it is at least as large as every element already in A[p..k-1], so incrementing k preserves the invariant.
Analyzing divide and conquer algorithms
• When an algorithm contains a recursive call, its running time is described by a recurrence.
• A recurrence is an equation that describes the overall running time on a problem of size n in terms of the running time on smaller inputs.
• Let T(n) be the running time on a problem of size n.
• If the problem size is small enough, the straightforward solution takes constant time, which we write as Θ(1).
• Suppose that the subdivision of the problem yields a subproblems, each of which is 1/b the size of the original problem.
• It takes time T(n/b) to solve one subproblem of size n/b, and so it takes time aT(n/b) to solve a of them.
Analyzing divide and conquer algorithms
• If we take D(n) time to divide the problem into subproblems and C(n) time to combine their solutions into the solution of the original problem, we get the recurrence:

T(n) = Θ(1)                     if n is small
T(n) = aT(n/b) + D(n) + C(n)    otherwise
Analysis of merge sort
• We assume the original problem size is a power of 2.
• Each divide step then yields two subsequences of size n/2.
Worst-case running time of merge sort on n numbers:
• Merge sort on just one element takes constant time.
When n > 1, the running time breaks down as follows:
• Divide: just compute the middle of the array, which takes constant time. Thus D(n) = Θ(1).
• Conquer: we recursively solve two subproblems, each of size n/2, which contributes 2T(n/2).
• Combine: the Merge procedure takes time Θ(n).
• Adding D(n) = Θ(1) and C(n) = Θ(n) gives Θ(n) of work per call outside the recursion.
Analysis of merge sort
• Adding it all up gives the recurrence:

T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1
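Taking the base cost as 1 and the combine cost as exactly n (concrete stand-ins for Θ(1) and Θ(n)), this recurrence can be evaluated directly; for n a power of two it satisfies the closed form T(n) = n·log₂(n) + n, which a short check confirms:

```python
def T(n):
    # Merge sort recurrence: constant work at the base case,
    # two half-size subproblems plus linear divide/combine work.
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k, T(n) equals n*k + n = n*log2(n) + n exactly.
for k in range(11):
    n = 2 ** k
    assert T(n) == n * k + n
```

This is the recursion-tree argument in miniature: log₂(n) levels each costing n, plus n for the leaves.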
Analysis of merge sort
• The height h of the merge-sort tree is O(log n):
• at each recursive call we divide the sequence in half.
• The overall amount of work done at all the nodes at depth i is O(n):
• we partition and merge 2^i sequences of size n/2^i
• we make 2^(i+1) recursive calls
• every element appears in exactly one subsequence at each depth, so each level costs O(n).
• Thus, the total running time of merge sort is O(n log n).
Quick sort
Motivation

• In merge sort the 'divide' is simple, and the 'merge' is (relatively) complicated.
• Can we make the 'merge' simple?
• Answer: make the 'divide' more complicated, so that the merge becomes 'concatenate'.

When is 'merge' simple?

• When the lists A and B are sorted and known to lie in disjoint ordered ranges:
• all elements of A are smaller than all elements of B.
• If A and B are stored as consecutive subarrays, then the merge needs no work at all:
• just "forget the boundary".
Description of quicksort
• Quicksort, like merge sort, applies the divide-and-conquer paradigm introduced above. Here is the three-step divide-and-conquer process for sorting a typical subarray A[p..r].
• Divide: choose a pivot A[q] and partition (rearrange) the array into two subarrays A[p..q-1] and A[q+1..r], such that every element of A[p..q-1] is less than or equal to A[q] and every element of A[q+1..r] is greater than A[q].
• Conquer: sort the two subarrays A[p..q-1] and A[q+1..r] by making recursive calls to quicksort.
• Combine: because the subarrays are sorted in place, no work is needed to combine them: the entire array A[p..r] is now sorted.
Pseudo code for quicksort
Quicksort(A, p, r)
  if p < r
    q = Partition(A, p, r)
    Quicksort(A, p, q-1)
    Quicksort(A, q+1, r)

Partition(A, p, r)
  x = A[r]
  i = p - 1
  for j = p to r-1
    if A[j] ≤ x
      i = i + 1
      exchange A[i] with A[j]
  exchange A[i+1] with A[r]
  return i + 1
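The pseudocode above (this is the Lomuto partition scheme, with the last element as pivot) can be sketched as runnable Python:

```python
def partition(A, p, r):
    # Lomuto partition: pivot is the last element A[r].
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]   # grow the "<= pivot" region
    A[i + 1], A[r] = A[r], A[i + 1]   # put the pivot in its final place
    return i + 1

def quicksort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)
```

Example usage: for `data = [2, 8, 7, 1, 3, 5, 6, 4]`, calling `quicksort(data, 0, len(data) - 1)` sorts `data` in place.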
Worst case running time of quicksort
• The worst case for quicksort occurs when the pivot is the unique minimum or maximum element.

• One of the subarrays L and R then has size n-1 and the other has size 0.

• The running time is proportional to the sum n + (n-1) + ... + 2 + 1.

• Thus, the worst-case running time of quicksort is O(n^2).
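The quadratic worst case can be observed concretely: with a last-element pivot, an already-sorted input always picks the maximum as pivot, so every partition is maximally lopsided. A minimal sketch that counts element comparisons (the helper name is illustrative):

```python
def quicksort_count(A, p, r):
    # Lomuto-partition quicksort that returns the number of
    # element comparisons performed.
    if p >= r:
        return 0
    x, i, comps = A[r], p - 1, 0
    for j in range(p, r):
        comps += 1                    # one comparison per loop iteration
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    q = i + 1
    return comps + quicksort_count(A, p, q - 1) + quicksort_count(A, q + 1, r)

n = 50
comps = quicksort_count(list(range(n)), 0, n - 1)
assert comps == n * (n - 1) // 2      # 1225: the full n+(n-1)+...+1 sum
```

Each call compares all remaining elements against the pivot and then recurses on a subarray only one element smaller, reproducing the sum n + (n-1) + ... + 1 = n(n-1)/2.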


Best-case Running Time

• The best case for quicksort occurs when the pivot is the median element.
• The L and R parts are then equal: the sequence is split into halves, as in merge sort.
• Thus, the best-case running time of quicksort is O(n log n).

Average-case Running Time


• The average case for quick-sort: half of the times, the pivot is
roughly in the middle
• Thus, the average-case running time of quick-sort is O(n log n)
again
Advantages of quicksort
• It can be done "in place", using only a small amount of extra workspace.
• This is possible because the "merge" step is now trivial.
• The "divide" (partition) is more complicated and the merge much easier, but the quicksort partition turns out to be easier to do in place than the merge-sort merge.
