CHAPTER 3: DIVIDE AND CONQUER
Fundamentals of the Divide & Conquer Strategy:
There are two fundamentals of the Divide & Conquer strategy:
⮚ Relational Formula
⮚ Stopping Condition
1. Relational Formula: This is the recurrence we derive for the given technique. After generating the formula, we apply the D&C strategy, i.e. we break the problem recursively and solve the resulting subproblems.
2. Stopping Condition: When we break a problem using the Divide & Conquer strategy, we need to know how long to keep dividing. The condition at which the recursive steps of D&C stop is called the stopping condition.
Divide and Conquer algorithm:
DAC(a, i, j)
{
    if (small(a, i, j))           // stopping condition
        return solution(a, i, j)
    else
        mid = divide(a, i, j)     // f1(n)
        b = DAC(a, i, mid)        // T(n/2)
        c = DAC(a, mid+1, j)      // T(n/2)
        d = combine(b, c)         // f2(n)
        return d
}
Recurrence relation for the DAC algorithm:
This is the recurrence relation for the above program:
T(n) = O(1), if n is small
T(n) = f1(n) + 2T(n/2) + f2(n), otherwise
For example, in merge sort the divide step costs f1(n) = O(1) and the combine (merge) step costs f2(n) = O(n), giving T(n) = 2T(n/2) + O(n) = O(nlogn).
Some standard problems solved using Divide & Conquer:
⮚ Binary Search
⮚ Merge Sort
⮚ Quick Sort
⮚ Selection Procedure
⮚ Strassen's matrix multiplication
⮚ Finding inversions
1. Max-Min Problem:
Problem: find the maximum and minimum elements of an array using the minimum number of comparisons, and analyze the algorithm.
In this approach, the array is divided into two halves. The maximum and minimum of each half are then found recursively. Finally, the algorithm returns the larger of the two maxima and the smaller of the two minima.
In this problem, the number of elements in the array is y − x + 1, where y ≥ x.
Max-Min(x, y) will return the maximum and minimum values of the array numbers[x...y].
Algorithm: Max-Min(x, y)
if y − x ≤ 1 then
    return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))
else
    (max1, min1) := Max-Min(x, ⌊(x + y)/2⌋)
    (max2, min2) := Max-Min(⌊(x + y)/2⌋ + 1, y)
    return (max(max1, max2), min(min1, min2))
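Below is a minimal runnable C sketch of this algorithm. The Pair struct, the function name max_min, and the sample array in main are illustrative assumptions, not part of the original algorithm statement.

#include <stdio.h>

typedef struct { int max; int min; } Pair;   /* assumed helper type */

/* Returns the maximum and minimum of numbers[x..y], y >= x. */
Pair max_min(int numbers[], int x, int y)
{
    Pair result, left, right;
    if (y - x <= 1) {                        /* stopping condition: 1 or 2 elements */
        if (numbers[x] > numbers[y]) { result.max = numbers[x]; result.min = numbers[y]; }
        else                         { result.max = numbers[y]; result.min = numbers[x]; }
        return result;
    }
    int mid = (x + y) / 2;
    left  = max_min(numbers, x, mid);        /* T(n/2) */
    right = max_min(numbers, mid + 1, y);    /* T(n/2) */
    result.max = (left.max > right.max) ? left.max : right.max;  /* 1 comparison */
    result.min = (left.min < right.min) ? left.min : right.min;  /* 1 comparison */
    return result;
}

int main(void)
{
    int a[] = {22, 13, -5, 7, 8, 35, 4, 10};        /* sample data (assumed) */
    Pair p = max_min(a, 0, 7);
    printf("max = %d, min = %d\n", p.max, p.min);   /* prints max = 35, min = -5 */
    return 0;
}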
Analysis
Let T(n) be the number of comparisons made by Max-Min(x, y), where the number of elements is n = y − x + 1.
The recurrence relation for the number of comparisons is:
T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2, for n > 2
T(n) = 1, for n = 2
T(n) = 0, for n = 1
Let us assume that n is a power of 2, i.e. n = 2^k, where k is the height of the recursion tree.
So,
T(n) = 2T(n/2) + 2 = 2(2T(n/4) + 2) + 2 = ... = 3n/2 − 2
• Recurrence Relation:
T(n) = 0, if n = 1
T(n) = 1, if n = 2
T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2, if n > 2
Time complexity with DAC = O(n)
Number of comparisons:
(a) When n is even: (3n/2) − 2
(b) When n is odd: (3n − 1)/2
Example
Consider the problem of computing min-max in an unsorted array where min and max
are minimum and maximum elements of the array. Algorithm A1 can compute min-max
in a1 comparisons without divide and conquer. Algorithm A2 can compute min-max in a2
comparisons by scanning the array linearly. What could be the relation between a1 and
a2 considering the worst case scenarios?
(A) a1 < a2
(B) a1 > a2
(C) a1 = a2
(D) Depends on the input
Answer: (B)
Explanation: When divide and conquer is used to find the minimum and maximum elements of an array, the recurrence relation for the number of comparisons is
T(n) = 2T(n/2) + 2, where the 2 accounts for comparing the minimums as well as the maximums of the left and right subarrays.
On solving, T(n) = 1.5n − 2.
A linear scan that checks every element against both the running minimum and the running maximum takes 2(n − 1) comparisons in the worst case to find both in one pass. Since 2(n − 1) > 1.5n − 2 for n > 2, the approach without divide and conquer makes more comparisons, i.e. a1 > a2.
2. Power of an element-
• Recurrence Relation:
T(n) = 1, if n = 0
T(n) = a, if n = 1
T(n) = T(n/2) + c, otherwise
Time complexity = O(log₂n)
C program for calculating the power of an element:
/* Extended version of the power function that can work
   for float x and negative y */
#include <stdio.h>

float power(float x, int y)
{
    float temp;
    if (y == 0)
        return 1;
    temp = power(x, y/2);             /* one subproblem of half the size */
    if (y % 2 == 0)
        return temp * temp;           /* even y: x^y = (x^(y/2))^2 */
    else
    {
        if (y > 0)
            return x * temp * temp;   /* odd positive y: one extra factor of x */
        else
            return (temp * temp) / x; /* odd negative y: divide once more by x */
    }
}
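A short driver like the following (assumed here for illustration) exercises both positive and negative exponents:

int main(void)
{
    printf("%f\n", power(2.0f, 10));   /* prints 1024.000000 */
    printf("%f\n", power(2.0f, -3));   /* prints 0.125000 */
    return 0;
}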
3. Binary Search –
● Binary search is one of the fastest searching algorithms.
● It is used for finding the location of an element in a linear array.
● It works on the principle of the divide and conquer technique.
Binary search can be applied only to sorted arrays.
So, the elements must be arranged in-
● Either ascending order, if the elements are numbers.
● Or dictionary order, if the elements are strings.
To apply binary search on an unsorted array,
● First, sort the array using some sorting technique.
● Then, use the binary search algorithm.
Binary Search Algorithm-
Consider-
● There is a linear array ‘a’ of size ‘n’.
● Binary search algorithm is being used to search an element ‘item’ in this linear array.
● If the search ends in success, it sets loc to the index of the element; otherwise, it sets loc to -1.
● Variables beg and end keep track of the indices of the first and last elements of the array or sub-array in which the element is being searched at that instant.
● Variable mid keeps track of the index of the middle element of that array or sub-array in which the element is being searched at that instant.
Explanation
Binary Search Algorithm searches an element by comparing it with the middle most element of
the array.
Then, following three cases are possible-
Case-01
If the element being searched is found to be the middle most element, its index is returned.
Case-02
If the element being searched is found to be greater than the middle most element,
then its search is further continued in the right sub array of the middle most element.
Case-03
If the element being searched is found to be smaller than the middle most element,
then its search is further continued in the left sub array of the middle most element.
This process keeps repeating on the sub-arrays until the desired element is found or the size of the sub-array reduces to zero.
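The three cases above translate directly into code. Below is a minimal iterative C sketch; the function name binary_search and the driver values are assumptions for illustration. The variable names match the description above.

#include <stdio.h>

/* Returns the index of item in the sorted array a[0..n-1], or -1. */
int binary_search(int a[], int n, int item)
{
    int beg = 0, end = n - 1;
    while (beg <= end) {
        int mid = beg + (end - beg) / 2;   /* avoids overflow of (beg + end) */
        if (a[mid] == item)
            return mid;                    /* Case-01: found, return index */
        else if (a[mid] < item)
            beg = mid + 1;                 /* Case-02: search right sub-array */
        else
            end = mid - 1;                 /* Case-03: search left sub-array */
    }
    return -1;                             /* sub-array reduced to zero */
}

int main(void)
{
    int a[] = {5, 10, 15, 20, 25, 30, 35};
    int loc = binary_search(a, 7, 15);
    printf("loc = %d\n", loc);             /* prints loc = 2 */
    return 0;
}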
Time Complexity Analysis-
Binary Search time complexity analysis is done below-
● In each iteration or recursive call, the search gets reduced to half of the array.
● So for n elements in the array, there are log₂n iterations or recursive calls.
Thus, we have-
Time complexity of binary search = O(log₂n)
This time complexity of binary search remains unchanged irrespective of the element's position, even if it is not present in the array.
Example: suppose we are searching for the element 15 in a sorted array of size 7; for illustration, assume a = {5, 10, 15, 20, 25, 30, 35} (any sorted array with a[1] = 10, a[2] = 15, a[3] = 20 traces the same way).
Step-01:
● To begin with, we take beg = 0 and end = 6.
● We compute the location of the middle element as-
mid = (beg + end) / 2 = (0 + 6) / 2 = 3
● Here, a[mid] = a[3] = 20 ≠ 15 and beg < end.
● So, we start the next iteration.
Step-02:
● Since a[mid] = 20 > 15, we take end = mid – 1 = 3 – 1 = 2, whereas beg remains unchanged.
● We compute the location of the middle element as-
mid = (beg + end) / 2 = (0 + 2) / 2 = 1
● Here, a[mid] = a[1] = 10 ≠ 15 and beg < end.
● So, we start the next iteration.
Step-03:
● Since a[mid] = 10 < 15, we take beg = mid + 1 = 1 + 1 = 2, whereas end remains unchanged.
● We compute the location of the middle element as-
mid = (beg + end) / 2 = (2 + 2) / 2 = 2
● Here, a[mid] = a[2] = 15, which matches the element being searched.
● So, our search terminates in success and index 2 is returned.
Important Note-
For in-memory searching, if the interval to be searched is small,
● Linear search may exhibit better performance than binary search.
● This is because it exhibits better locality of reference.
4. Merge Sort-
● Merge sort is a famous sorting algorithm.
● It uses a divide and conquer paradigm for sorting.
● It divides the problem into sub problems and solves them individually.
● It then combines the results of sub problems to get the solution of the original problem.
• Comparison-based sorting.
• Stable sorting algorithm, but not in-place (out-of-place).
• Recurrence Relation:
T(n) = c, if n = 1
T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + cn, if n > 1
Time complexity: (i) Worst case = O(nlogn),
(ii) Best case = O(nlogn),
(iii) Average case = O(nlogn).
Space complexity: S(n) = 2n + logn + c; worst case = O(n)
Properties-
Some of the important properties of merge sort algorithm are-
● Merge sort uses a divide and conquer paradigm for sorting.
● Merge sort is a recursive sorting algorithm.
● Merge sort is a stable sorting algorithm.
● Merge sort is not an in-place sorting algorithm.
● The time complexity of merge sort algorithm is Θ(nlogn).
● The space complexity of merge sort algorithm is Θ(n).
NOTE
Merge sort is the best sorting algorithm in terms of time complexity Θ(nlogn)
if we are not concerned with auxiliary space used.
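A compact runnable C sketch of merge sort follows, mirroring the recurrence above (divide at the midpoint, sort each half, merge in cn time). The identifiers merge_sort/merge and the sample array are illustrative assumptions.

#include <stdio.h>
#include <string.h>

/* Merge the two sorted halves a[lo..mid] and a[mid+1..hi]. */
void merge(int a[], int lo, int mid, int hi)
{
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  /* <= keeps the sort stable */
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(&a[lo], tmp, k * sizeof(int));             /* copy merged run back */
}

/* Sort a[lo..hi]: divide at the midpoint, sort each half, merge. */
void merge_sort(int a[], int lo, int hi)
{
    if (lo >= hi) return;            /* stopping condition: <= 1 element */
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, lo, mid);          /* T(n/2) */
    merge_sort(a, mid + 1, hi);      /* T(n/2) */
    merge(a, lo, mid, hi);           /* cn */
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};   /* sample data (assumed) */
    int n = sizeof a / sizeof a[0];
    merge_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}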
Problem-1:
Consider an implementation of merge sort whose running time is k × nlogn and which sorts an input of size 64 in 30 seconds. What is the maximum input size it can sort in 6 minutes?
Step-01:
For n = 64: k × 64 × log₂64 = 30, i.e. 384k = 30, so k = 5/64.
Step-02:
Let n be the maximum input size of a problem that can be solved in 6 minutes (or 360 seconds).
Then, we have-
k × nlogn = 360
(5/64) × nlogn = 360 { using the result of Step-01 }
nlogn = 72 × 64
nlogn = 4608
On solving this equation, we get n = 512 (check: 512 × log₂512 = 512 × 9 = 4608).
Option B is correct.
Problem-2:
T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n
     = 2[2[2T(n/8) + n/4] + n/2] + n
     = 2³T(n/8) + 3n
     ...
     = 2^k T(n/2^k) + kn
The expansion stops when n/2^k = 1, i.e. k = log₂n, so
T(n) = nT(1) + n·log₂n = O(nlogn)
Number of levels = k = log₂n, and the work at each level is n, so the time complexity is n × logn = O(nlogn).
5. Quick Sort-
• Comparison-based sorting.
• In practice, typically 2 to 3 times faster than merge sort and heap sort.
• Not a stable sorting algorithm, but in-place.
Choosing the pivot is the most important factor:
As the start element: T(n) = O(n²); as the end element: T(n) = O(n²);
As the middle element: T(n) = O(n²); as the median-of-three element: T(n) = O(n²) in the worst case;
As the median of medians: T(n) = O(nlogn).
Time complexity:
(i) Best case: each partition splits the array into two halves; then
T(n) = T(n/2) + T(n/2) + n = O(nlogn)
(ii) Worst case: each partition gives unbalanced splits; then
T(n) = T(n − k) + T(k − 1) + cn = O(n²)
(iii) Average case: in the average case we do not know where the split happens, so we average over all possible split locations:
T(n) = (1/n) Σᵢ₌₁ⁿ [T(i − 1) + T(n − i)] + (n + 1) = O(nlogn)
Partition_Array(a, beg, end, loc)
Begin
    Set left = beg, right = end, loc = beg
    Set done = false
    While (not done) do
        While ((a[loc] <= a[right]) and (loc ≠ right)) do
            Set right = right - 1
        end while
        if (loc = right) then
            Set done = true
        else if (a[loc] > a[right]) then
            Interchange a[loc] and a[right]
            Set loc = right
        end if
        if (not done) then
            While ((a[loc] >= a[left]) and (loc ≠ left)) do
                Set left = left + 1
            end while
            if (loc = left) then
                Set done = true
            else if (a[loc] < a[left]) then
                Interchange a[loc] and a[left]
                Set loc = left
            end if
        end if
    end while
End
Quick sort is also based on the concept of divide and conquer, just like merge sort. But in quick sort, all the heavy lifting (major work) is done while dividing the array into subarrays, whereas in merge sort all the real work happens while merging the subarrays. In quick sort, the combine step does absolutely nothing.
It is also called partition-exchange sort. This algorithm divides the list into three main
parts:
1. Elements less than the Pivot element
2. Pivot element (central element)
3. Elements greater than the pivot element
The pivot can be any element of the array: the first element, the last element, or any random element. Here we take the rightmost (last) element as the pivot.
For example, in the array {52, 37, 63, 14, 17, 8, 6, 25}, we take 25 as the pivot. After the first pass, the list is rearranged like this (the exact order within each side depends on the partitioning scheme used):
{6 8 17 14 25 63 37 52}
Hence after the first pass, the pivot is set at its final position, with all elements smaller than it on its left and all elements larger than it on its right. Now {6, 8, 17, 14} and {63, 37, 52} are considered as two separate subarrays, the same recursive logic is applied to them, and we keep doing this until the complete array is sorted.
How Quick Sort Works
The following steps are involved in the quick sort algorithm (a C sketch follows this list):
1. After selecting an element as the pivot, which is the last index of the array in our case, we divide the array for the first time.
2. In quick sort, we call this partitioning. It is not a simple breaking down of the array into 2 subarrays; rather, the array elements are positioned so that all elements smaller than the pivot are on the left side of the pivot and all elements greater than the pivot are on its right side.
3. The pivot element is then at its final sorted position.
4. The elements to its left and right may still be unsorted.
5. We then pick the subarrays (elements on the left of the pivot and elements on the right of the pivot) and perform partitioning on each of them by choosing a pivot within the subarray.
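Here is a minimal runnable C sketch of these steps, using the Lomuto partition scheme with the last element as pivot. The scheme, identifiers, and sample array are assumptions for illustration (the structured pseudocode earlier uses a different partitioning method).

#include <stdio.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: uses a[hi] as pivot, places it at its final
   sorted position, and returns that position. */
int partition(int a[], int lo, int hi)
{
    int pivot = a[hi];
    int i = lo - 1;                     /* boundary of the "< pivot" region */
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot)
            swap(&a[++i], &a[j]);
    swap(&a[i + 1], &a[hi]);            /* pivot to its final position */
    return i + 1;
}

void quick_sort(int a[], int lo, int hi)
{
    if (lo < hi) {
        int p = partition(a, lo, hi);   /* all the work happens while dividing */
        quick_sort(a, lo, p - 1);       /* elements left of the pivot */
        quick_sort(a, p + 1, hi);       /* elements right of the pivot; no combine step */
    }
}

int main(void)
{
    int a[] = {52, 37, 63, 14, 17, 8, 6, 25};   /* sample data (assumed) */
    quick_sort(a, 0, 7);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}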
Worst Case-
Quick sort is sensitive to the order of the input data.
It gives its worst performance when the elements are already in ascending order.
It then divides the array into sections of 1 and (n − 1) elements in each call.
Then, there are (n − 1) divisions in all.
Therefore, the total number of comparisons is f(n) = (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2 = O(n²).
Order of quick sort in the worst case = O(n²)
Advantages of Quick Sort-
The advantages of the quick sort algorithm are-
Quick sort is an in-place sort, so apart from a small auxiliary stack it requires no temporary memory.
Quick sort is typically faster in practice than other comparison-based algorithms
(because its inner loop can be efficiently implemented on most architectures).
Quick sort tends to make excellent use of the memory hierarchy, such as virtual memory and caches.
Quick sort can be easily parallelized due to its divide and conquer nature.
6. Matrix Multiplication-
• Using DAC (8 multiplications of half-size matrices):
T(n) = O(1), for n = 1
T(n) = 8T(n/2) + O(n²), for n > 1
which solves to T(n) = O(n³).
• Using Strassen's method (7 multiplications of half-size matrices):
T(n) = O(1), for n = 1
T(n) = 7T(n/2) + an², for n > 1
which solves to T(n) = O(n^(log₂7)) ≈ O(n^2.81).
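Both bounds can be checked with the Master Theorem (the case a > b^k), written here in standard notation:

T(n) = aT(n/b) + O(n^k), \quad a > b^k \;\Rightarrow\; T(n) = O(n^{\log_b a})
\text{Naive D\&C: } a = 8,\ b = 2,\ k = 2,\ 8 > 2^2 \Rightarrow T(n) = O(n^{\log_2 8}) = O(n^3)
\text{Strassen: } a = 7,\ b = 2,\ k = 2,\ 7 > 2^2 \Rightarrow T(n) = O(n^{\log_2 7}) \approx O(n^{2.81})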
• The Karatsuba algorithm for fast multiplication of two n-digit numbers requires at most n^(log₂3) ≈ n^1.585 single-digit multiplications (when n is a power of 2), using divide and conquer.
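The idea can be sketched in C for machine-sized integers. The helper names (karatsuba, num_digits, ipow10), the base-10 splitting, and the driver values are illustrative assumptions; real implementations operate on arbitrary-precision integers.

#include <stdio.h>

/* Number of decimal digits in n (n >= 0). */
static int num_digits(long n)
{
    int d = 1;
    while (n >= 10) { n /= 10; d++; }
    return d;
}

static long ipow10(int k)
{
    long p = 1;
    while (k-- > 0) p *= 10;
    return p;
}

/* Karatsuba: x*y = (a*10^m + b)(c*10^m + d)
   = ac*10^(2m) + ((a+b)(c+d) - ac - bd)*10^m + bd,
   i.e. 3 recursive multiplications instead of 4. */
long karatsuba(long x, long y)
{
    if (x < 10 || y < 10)               /* base case: a single digit */
        return x * y;
    int n = num_digits(x) > num_digits(y) ? num_digits(x) : num_digits(y);
    long p = ipow10(n / 2);
    long a = x / p, b = x % p;          /* split x into high/low halves */
    long c = y / p, d = y % p;          /* split y into high/low halves */
    long ac  = karatsuba(a, c);
    long bd  = karatsuba(b, d);
    long mid = karatsuba(a + b, c + d) - ac - bd;
    return ac * p * p + mid * p + bd;
}

int main(void)
{
    printf("%ld\n", karatsuba(1234, 5678));   /* prints 7006652 */
    return 0;
}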
Sorting Techniques
1. Bubble Sort-
Bubble sort is the easiest sorting algorithm to implement.
It is inspired by observing the behaviour of air bubbles over foam.
It is an in-place sorting algorithm.
It uses no auxiliary data structures (extra space) while sorting.
Bubble sort repeatedly compares each pair of adjacent elements.
It then swaps the two elements if they are in the wrong order.
In each pass, bubble sort places the next largest element at its proper position.
In short, it bubbles the largest element up to its correct position at the end of the array.
Best Case-
In the best case, the array is already sorted, but bubble sort still performs O(n) comparisons to detect this.
Hence, the best case time complexity of bubble sort is O(n).
Average Case-
In the average case, bubble sort may require (n/2) passes and O(n) comparisons for each pass.
Hence, the average case time complexity of bubble sort is O(n/2 × n) = Θ(n²).
The following table summarizes the time complexities of bubble sort in each case-
Time Complexity:
Best Case: O(n)
Average Case: Θ(n²)
Worst Case: O(n²)
From this, it is clear that bubble sort is not efficient in terms of time complexity.
Properties-
Some of the important properties of bubble sort algorithm are-
Bubble sort is a stable sorting algorithm.
Bubble sort is an in-place sorting algorithm.
The worst case time complexity of bubble sort algorithm is O(n2).
The space complexity of bubble sort algorithm is O(1).
Number of swaps in bubble sort = Number of inversion pairs present in the given array.
Bubble sort is beneficial when the number of elements is small and the array is nearly sorted.
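A runnable C sketch of bubble sort with the standard early-exit optimization (which yields the O(n) best case noted above); the function name and sample array are assumptions, and the sample matches Problem-01 below.

#include <stdio.h>
#include <stdbool.h>

void bubble_sort(int a[], int n)
{
    for (int pass = 0; pass < n - 1; pass++) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - pass; j++) {
            if (a[j] > a[j + 1]) {              /* adjacent pair in wrong order */
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                swapped = true;
            }
        }
        if (!swapped) break;                    /* no swaps: array already sorted */
    }
}

int main(void)
{
    int a[] = {8, 22, 7, 9, 31, 5, 13};         /* sample data (assumed) */
    bubble_sort(a, 7);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}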
Problem-01:
The number of swaps needed to sort the numbers 8, 22, 7, 9, 31, 5, 13 in ascending order using bubble sort is-
A. 11
B. 12
C. 13
D. 10
Solution-
In bubble sort, Number of swaps required = Number of inversion pairs.
Here, there are 10 inversion pairs present which are-
(8,7)
(22,7)
(22,9)
(8,5)
(22,5)
(7,5)
(9,5)
(31,5)
(22,13)
(31,13)
Thus, Option (D) is correct.
Problem-02:
When does bubble sort exhibit its worst-case time complexity?
A. The array is sorted in ascending order.
B. The array is sorted in descending order.
C. Only the first half of the array is sorted.
D. Only the second half of the array is sorted.
Solution-
In bubble sort, Number of swaps required = Number of inversion pairs.
When an array is sorted in descending order, the number of inversion pairs = n(n-1)/2 which
is maximum for any permutation of array.
Thus, Option (B) is correct.
2. Selection Sort-
● Selection sort is one of the easiest approaches to sorting.
● It is inspired by the way in which we sort things in day-to-day life.
● It is an in-place sorting algorithm because it uses no auxiliary data structures while sorting.
Selection Sort Algorithm (loop for an array A of size n; the opening of the loop is completed here from the variable descriptions that follow):

for (i = 0; i < n - 1; i++)
{
    index = i;                    // assume the first unsorted element is the minimum
    for (j = i + 1; j < n; j++)
        if (A[j] < A[index])
            index = j;            // remember the index of the smaller element
    temp = A[i];                  // swap the minimum into position i
    A[i] = A[index];
    A[index] = temp;
}
Here,
● i = variable to traverse the array A
● index = variable to store the index of minimum element
● j = variable to traverse the unsorted sub-array
● temp = temporary variable used for swapping
Time Complexity:
Best Case: Θ(n²)
Average Case: Θ(n²)
Worst Case: Θ(n²)
3. Insertion Sort-
● Insertion sort builds a sorted sub-array A[0..i-1] at the front of the array and inserts each new element into it.
Insertion Sort Algorithm (loop for an array A of size n; the opening of the loop is completed here from the variable descriptions that follow):

for (i = 1; i < n; i++)
{
    key = A[i];                    // new number to insert into the sorted sub-array
    j = i - 1;
    while (j >= 0 && A[j] > key)   // shift larger elements one position right
    {
        A[j+1] = A[j];
        j--;
    }
    A[j+1] = key;                  // insert key at its correct position
}
Here,
● i = variable to traverse the array A
● key = variable to store the new number to be inserted into the sorted sub-array
● j = variable to traverse the sorted sub-array
Time Complexity Analysis-
● The insertion sort algorithm consists of two nested loops.
● Owing to the two nested loops, it has O(n²) worst-case time complexity.
Time Complexity:
Best Case: O(n)
Average Case: Θ(n²)
Worst Case: O(n²)
Important Notes-
● Insertion sort is not a very efficient algorithm when data sets are large.
● This is indicated by the average and worst case complexities.
● Insertion sort is adaptive: the number of comparisons is smaller if the array is partially sorted.
Example-
12, 11, 13, 5, 6
Let us loop for i = 1 (second element of the array) to 4 (last element of the array)
i = 1. Since 11 is smaller than 12, move 12 and insert 11 before 12
11, 12, 13, 5, 6
i = 2. 13 will remain at its position as all elements in A[0..i-1] are smaller than 13.
11, 12, 13, 5, 6
i = 3. 5 will move to the beginning and all other elements from 11 to 13 will move one position
ahead of their current position.
5, 11, 12, 13, 6
i = 4. 6 will move to position after 5, and elements from 11 to 13 will move one position ahead
of their current position.
5, 6, 11, 12, 13
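The trace above corresponds to the following minimal runnable C program (a sketch; the function name is assumed, and the driver array matches the example):

#include <stdio.h>

/* Insertion sort: grows a sorted prefix A[0..i-1] and inserts A[i] into it. */
void insertion_sort(int A[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = A[i];
        int j = i - 1;
        while (j >= 0 && A[j] > key) {   /* shift larger elements right */
            A[j + 1] = A[j];
            j--;
        }
        A[j + 1] = key;                  /* insert key at its position */
    }
}

int main(void)
{
    int A[] = {12, 11, 13, 5, 6};
    insertion_sort(A, 5);
    for (int i = 0; i < 5; i++) printf("%d ", A[i]);   /* prints 5 6 11 12 13 */
    printf("\n");
    return 0;
}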
****