Daa Unit-2 (Mid-1)


Explain about the general abstraction for the Divide and Conquer Strategy?

The Divide and Conquer strategy is a fundamental algorithm design paradigm


that breaks a problem into smaller, more manageable sub-problems, solves
each sub-problem independently, and then combines their solutions to solve
the original problem. This approach is used to tackle a variety of problems
efficiently, especially those that can be recursively divided into similar sub-
problems.
General Abstraction of Divide and Conquer Strategy
Algorithm:
Algorithm DandC(P)
{
    if Small(P) then
        return S(P);
    else
    {
        Divide P into smaller instances P1, P2, …, Pk, k ≥ 1;
        Apply DandC to each of these sub-problems;
        return Combine(DandC(P1), DandC(P2), …, DandC(Pk));
    }
}
1. Divide:
• Objective: Break the original problem into smaller sub-problems.
• Approach: Decompose the problem into smaller instances of the same
problem or into fundamentally different sub-problems. The division is
often based on a recursive approach, where the problem size is reduced
with each recursive call. The decomposition usually involves a
partitioning of the input into smaller chunks.
2. Conquer:
• Objective: Solve each of the smaller sub-problems.
• Approach: Recursively apply the divide and conquer strategy to each
sub-problem until reaching a base case. The base case is a simple version
of the problem that can be solved directly without further division.
3. Combine:
• Objective: Merge the solutions of the sub-problems to form a solution
for the original problem.
• Approach: Integrate the results of the solved sub-problems to form a
complete solution. This step usually involves combining or merging the
sub-problems' solutions in a way that adheres to the problem's
constraints and requirements.
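As a concrete (assumed, not from the original notes) instance of this abstraction, the following Python sketch finds the maximum of an array; Small(P), Divide, Conquer and Combine map directly onto the code:

```python
def dandc_max(a, low, high):
    """Return the maximum of a[low..high] by divide and conquer."""
    if low == high:                      # Small(P): one element, solve directly
        return a[low]
    mid = (low + high) // 2              # Divide into two sub-problems
    left = dandc_max(a, low, mid)        # Conquer each half recursively
    right = dandc_max(a, mid + 1, high)
    return max(left, right)              # Combine the two partial answers
```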
Examples of Divide and Conquer Algorithms:
• Merge Sort:
o Divide: Split the array into two halves.
o Conquer: Recursively sort each half.
o Combine: Merge the two sorted halves into a single sorted array.
• Binary Search:
o Divide: Split the search range into two halves.
o Conquer: Determine which half contains the target value.
o Combine: Repeat the process in the chosen half until the target
value is found or the range is exhausted.
Advantages of Divide and Conquer:
• Efficiency: Often leads to efficient algorithms with reduced time
complexity compared to straightforward approaches.
• Simplicity: Can simplify complex problems by breaking them into
manageable sub-problems.
• Parallelism: Sub-problems can often be solved in parallel, leveraging
multi-core processors.
Disadvantages:
• Overhead: Recursive calls and combining results can introduce overhead.
• Complexity: Designing and implementing divide and conquer solutions
can be complex, especially in ensuring correct combination of sub-
solutions.
Explain the Binary Search Algorithm? Calculate Time Complexity of it? Find
element -4 from the below set by using the above technique: { -12, -10, -8, -6,
-3, 0, 10, 20, 30}

Binary Search is an efficient algorithm for finding an element in a sorted array.


It works by repeatedly dividing the search interval in half and narrowing down
the potential location of the target element.
Steps of Binary Search:
1. Initialize:
o Set two pointers, low and high, to the start and end of the array,
respectively.
2. Find the Middle:
o Compute the middle index: mid = (low + high) // 2.
3. Compare:
o Compare the target value with the element at the middle index:
▪ If the target is equal to the middle element, the search is
complete.
▪ If the target is less than the middle element, narrow the
search to the left half by setting high = mid - 1.
▪ If the target is greater than the middle element, narrow the
search to the right half by setting low = mid + 1.
4. Repeat:
o Repeat the above steps until the low pointer exceeds the high
pointer, indicating that the target is not in the array.

5. Return:
o If the target is found, return the index of the target element.
o If the target is not found, return an indicator of failure (e.g., -1).
Pseudo Code:
FUNCTION BinarySearch(array, size, target)
{
    low := 0;
    high := size - 1;
    while (low ≤ high) do
    {
        mid := ⌊(low + high) / 2⌋;
        if (array[mid] = target) then
            return mid;
        else if (array[mid] < target) then
            low := mid + 1;
        else
            high := mid - 1;
    }
    return -1; // Target not found
}
Time Complexity
Binary Search has a time complexity of O(log n), where n is the number of
elements in the array. This is because the search interval is halved with each
step, leading to a logarithmic reduction in the number of elements to be
considered.
Explanation:
Let T(n) be the time needed to search among n elements. Each step discards half of
the remaining elements and performs a constant amount of work c, so the
recurrence relation is: T(n) = T(n/2) + c
In the same way: T(n/2) = T(n/4) + c,
So, T(n) = T(n/4) + 2c.
Continuing in this way, after k expansions:
T(n) = T(n/2^k) + k·c
Taking n = 2^k (i.e., k = log₂ n) reaches the base case:
T(n) = T(1) + c·log₂ n = O(log n).
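The pseudocode above translates directly into runnable Python; searching the sample array for -4 returns -1, matching the worked example below:

```python
def binary_search(array, target):
    """Iterative binary search; returns the index of target, or -1 if absent."""
    low, high = 0, len(array) - 1
    while low <= high:
        mid = (low + high) // 2          # middle of the current interval
        if array[mid] == target:
            return mid
        elif array[mid] < target:
            low = mid + 1                # target lies in the right half
        else:
            high = mid - 1               # target lies in the left half
    return -1                            # interval exhausted: not found

arr = [-12, -10, -8, -6, -3, 0, 10, 20, 30]
print(binary_search(arr, -4))            # -1: not present in the array
```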

Example: Finding Element -4


Array: { -12, -10, -8, -6, -3, 0, 10, 20, 30}
Target: -4
1. Initialization:
o low = 0
o high = 8 (length of array - 1)
2. First Iteration:
o mid = (0 + 8) // 2 = 4
o Element at index 4 is -3
o Since -4 < -3, update high = mid - 1 = 3
3. Second Iteration:
o low = 0
o high = 3
o mid = (0 + 3) // 2 = 1
o Element at index 1 is -10
o Since -4 > -10, update low = mid + 1 = 2
4. Third Iteration:
o low = 2
o high = 3
o mid = (2 + 3) // 2 = 2
o Element at index 2 is -8
o Since -4 > -8, update low = mid + 1 = 3
5. Fourth Iteration:
o low = 3
o high = 3
o mid = (3 + 3) // 2 = 3
o Element at index 3 is -6
o Since -4 > -6, update low = mid + 1 = 4
6. Termination:
o Now low = 4 and high = 3
o Since low is greater than high, the loop terminates.
o The target -4 is not found in the array.

Result: The element -4 is not in the array, so the function would return -1.
Explain the Merge Sort Algorithm? Calculate Time Complexity of it? Draw the
tree of calls of merge sort for the following set 310, 285, 179, 652, 351, 423,
861, 254, 450, 520 using merge sort algorithm and draw the tree of calls of
merge sort.

Merge Sort Algorithm


Merge Sort is a divide-and-conquer algorithm that recursively divides an array
into two halves, sorts each half, and then merges the sorted halves to produce
a sorted array. It is particularly effective for large datasets and provides a
consistent O(n log n) time complexity.
Steps of Merge Sort:
1. Divide:
o Split the array into two halves until each sub-array contains a
single element or is empty.
2. Conquer:
o Recursively sort each sub-array.
3. Combine:
o Merge the sorted sub-arrays to produce a sorted array.
Merge Sort Algorithm:
Dividing:
Algorithm MergeSort(low, high)
{
    // Small(P) is true if there is only one element to sort;
    // in that case the list is already sorted.
    if (low < high) then
    {
        // If there is more than one element, divide P into sub-problems:
        mid := ⌊(low + high)/2⌋;
        // Solve the sub-problems:
        MergeSort(low, mid);
        MergeSort(mid + 1, high);
        Merge(low, mid, high); // Combine the solutions.
    }
}
Merging in Merge Sort:
Algorithm Merge(low, mid, high)
{
    // a[low : high] is a global array containing two sorted subsets,
    // in a[low : mid] and in a[mid + 1 : high]. The goal is to merge these
    // two sets into a single sorted set residing in a[low : high].
    // b[] is an auxiliary global array.
    h := low; i := low; j := mid + 1;
    while ((h ≤ mid) and (j ≤ high)) do
    {
        if (a[h] ≤ a[j]) then
        {
            b[i] := a[h]; h := h + 1;
        }
        else
        {
            b[i] := a[j]; j := j + 1;
        }
        i := i + 1;
    }
    if (h > mid) then // left half exhausted: copy the rest of the right half
    {
        for k := j to high do
        {
            b[i] := a[k]; i := i + 1;
        }
    }
    else // right half exhausted: copy the rest of the left half
    {
        for k := h to mid do
        {
            b[i] := a[k]; i := i + 1;
        }
    }
    for k := low to high do // copy the merged result back into a[]
    {
        a[k] := b[k];
    }
}
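A runnable Python version of the two procedures above (a minimal sketch using an auxiliary list in place of the global array b[]):

```python
def merge_sort(a, low, high):
    """Sort a[low..high] in place, mirroring the MergeSort pseudocode."""
    if low < high:
        mid = (low + high) // 2
        merge_sort(a, low, mid)          # sort the left half
        merge_sort(a, mid + 1, high)     # sort the right half
        merge(a, low, mid, high)         # combine the two sorted halves

def merge(a, low, mid, high):
    """Merge the sorted runs a[low..mid] and a[mid+1..high]."""
    b = []
    h, j = low, mid + 1
    while h <= mid and j <= high:
        if a[h] <= a[j]:
            b.append(a[h]); h += 1
        else:
            b.append(a[j]); j += 1
    b.extend(a[h:mid + 1])               # leftover of whichever run remains
    b.extend(a[j:high + 1])
    a[low:high + 1] = b                  # copy the merged result back

numbers = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
merge_sort(numbers, 0, len(numbers) - 1)
```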
Time Complexity of Merge Sort
Merge Sort has a time complexity of O(n log n), where n is the number of
elements in the array.
• Divide Step: The array is divided into two halves recursively, which takes
O(log n) time.
• Merge Step: Merging two sorted halves takes O(n) time. This merging
process is done at each level of recursion.
Let T(n) be the time used to sort n elements. As we can perform separation and
merging in linear time, these two steps take c·n time, for some constant c.
So, the recurrence relation is: T(n) = 2T(n/2) + c·n.
In the same way: T(n/2) = 2T(n/4) + (c·n)/2,
So, T(n) = 4T(n/4) + 2·c·n.
Continuing in this way, after k expansions:
T(n) = 2^k · T(n/2^k) + k·c·n
Taking n = 2^k (i.e., k = log₂ n):

T(n) = n·T(1) + c·n·log₂ n

T(n) = O(n log n).

Try the task below on your own (drawing the tree of calls for merge sort is not
shown in this pdf):
Draw the tree of calls of merge sort for the following set
310, 285, 179, 652, 351, 423, 861, 254, 450, 520
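As an aid for drawing it, here is a small (assumed) helper that prints the tree of calls, indenting each recursive call by its depth, and returns the sorted list:

```python
def merge_sort_tree(a, depth=0):
    """Print each MergeSort call's input (indentation = recursion depth)
    and return the sorted list."""
    print("    " * depth + str(a))
    if len(a) <= 1:                      # base case: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort_tree(a[:mid], depth + 1)
    right = merge_sort_tree(a[mid:], depth + 1)
    merged, i, j = [], 0, 0              # merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

merge_sort_tree([310, 285, 179, 652, 351, 423, 861, 254, 450, 520])
```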
Explain the Quick Sort Algorithm? Calculate Time Complexity of it? Draw the
tree of calls of Quicksort for the following set of elements, (20, 30, 10, 40, 5,
60, 90, 45, 35, 25, 15, 55)
Quick Sort is a divide-and-conquer sorting algorithm that works by selecting a
"pivot" element from the array and partitioning the other elements into two
sub-arrays according to whether they are less than or greater than the pivot.
The sub-arrays are then sorted recursively.
Steps of Quick Sort:
1. Choose a Pivot:
o Select an element from the array to act as the pivot. The choice of
pivot can be random, the first element, the last element, or the
median.
2. Partition:
o Rearrange the elements in the array so that all elements less than
the pivot come before it and all elements greater than the pivot
come after it. The pivot is then in its final position.
3. Recursively Apply:
o Apply the same process recursively to the sub-arrays formed by
dividing the array at the pivot.
Algorithm Pseudo Code:
Algorithm quicksort(array, low, high) {
if low < high {
pivotIndex = partition(array, low, high);
quicksort(array, low, pivotIndex - 1);
quicksort(array, pivotIndex + 1, high);
}
}
Algorithm partition(array, low, high) {
pivot = array[high];
i = low - 1;
for j = low to high - 1 {
if array[j] ≤ pivot {
i = i + 1;
swap(array[i], array[j]);
}
}
swap(array[i + 1], array[high]);
return i + 1;
}
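A runnable Python sketch of the pseudocode above (Lomuto partition, with the last element as the pivot):

```python
def quicksort(a, low, high):
    """In-place Quick Sort mirroring the pseudocode above."""
    if low < high:
        p = partition(a, low, high)
        quicksort(a, low, p - 1)         # sort elements left of the pivot
        quicksort(a, p + 1, high)        # sort elements right of the pivot

def partition(a, low, high):
    """Lomuto partition: a[high] is the pivot; returns its final index."""
    pivot = a[high]
    i = low - 1
    for j in range(low, high):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]      # grow the "≤ pivot" region
    a[i + 1], a[high] = a[high], a[i + 1]  # place pivot in its final position
    return i + 1

data = [20, 30, 10, 40, 5, 60, 90, 45, 35, 25, 15, 55]
quicksort(data, 0, len(data) - 1)
```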
Time Complexity of Quick Sort
• Best Case: O(n log n) when the pivot divides the array into two nearly
equal halves.
• Average Case: O(n log n) on average, when the pivot is reasonably
balanced.
• Worst Case: O(n^2) when the pivot is the smallest or largest element,
leading to unbalanced partitions.
Tree of Calls for Quick Sort
For the array {20, 30, 10, 40, 5, 60, 90, 45, 35, 25, 15, 55}, taking the last
element of each sub-array as the pivot (Lomuto partition, as in the pseudocode
above):
Initial Call:
• Pivot: 55 (last element)
• Partition Result: {20, 30, 10, 40, 5, 45, 35, 25, 15} | 55 | {90, 60} (pivot in
its correct place)
Left Sub-array: {20, 30, 10, 40, 5, 45, 35, 25, 15}
• Pivot: 15
• Partition Result: {10, 5} | 15 | {40, 30, 45, 35, 25, 20}
Sub-array {10, 5}:
• Pivot: 5
• Partition Result: { } | 5 | {10}; both sides have at most one element, so the
recursion stops here.
Sub-array {40, 30, 45, 35, 25, 20}:
• Pivot: 20
• Partition Result: { } | 20 | {30, 45, 35, 25, 40}
Sub-array {30, 45, 35, 25, 40}:
• Pivot: 40
• Partition Result: {30, 35, 25} | 40 | {45}
Sub-array {30, 35, 25}:
• Pivot: 25
• Partition Result: { } | 25 | {35, 30}
Sub-array {35, 30}:
• Pivot: 30
• Partition Result: { } | 30 | {35}
Right Sub-array of the Initial Call: {90, 60}
• Pivot: 60
• Partition Result: { } | 60 | {90}
Final Sorted Array: {5, 10, 15, 20, 25, 30, 35, 40, 45, 55, 60, 90}
Explain about the Process of Tiling the Defective Chess Board with neat
Diagrams.

The defective chessboard problem is a classic application of the divide and
conquer strategy: a 2^n × 2^n chessboard has one defective (missing) square, and
the goal is to cover every remaining square exactly once using L-shaped tiles
(trominoes), each of which covers exactly three squares.
Problem Description
Consider a 2^n × 2^n board, for example an 8×8 board, in which one square is
defective. The remaining 4^n − 1 squares must be covered with trominoes. Since
4^n − 1 is always divisible by 3, the count works out exactly: (4^n − 1)/3 tiles
are needed. For an 8×8 board this is (64 − 1)/3 = 21 trominoes.
Process of Tiling
1. Base Case:
If the board is 1×1, it consists only of the defective square, so nothing remains
to be tiled.
2. Divide:
Split the 2^n × 2^n board into four quadrants, each of size 2^(n−1) × 2^(n−1).
The defective square lies in exactly one of these quadrants.
3. Place a Tromino at the Centre:
Place one tromino so that it covers the three central squares belonging to the
three quadrants that do not contain the defect. After this placement, every
quadrant contains exactly one square that is either defective or already covered.
4. Conquer:
Recursively tile each of the four quadrants, treating its defective-or-covered
square as that quadrant's "defect".
5. Combine:
No extra work is needed: the four recursive tilings, together with the central
tromino, cover the whole board.

The below diagrams show the working:

An example input:
After placing the first tile:

Recurring for the first sub-square:

The first step applied in all four sub-squares:

Conclusion
• Correctness: At every level of the recursion, each quadrant is a smaller board
with exactly one uncoverable square, so the same step always applies. Hence a
2^n × 2^n board with any single defective square can always be tiled with
(4^n − 1)/3 trominoes.
• Complexity: Each call does constant work and makes four recursive calls on
boards of half the side length, giving the recurrence T(n) = 4T(n/2) + c for side
length n, which solves to O(n^2) — proportional to the number of squares tiled.
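The divide-and-conquer tiling shown in the diagrams (placing one L-shaped tromino at the centre and recurring into the four sub-squares) can be sketched in Python; the board is a 2D list and each tromino placed is marked with a distinct id:

```python
from itertools import count

def tile(board, top, left, size, dr, dc, ids=count(1)):
    """Cover the size x size sub-board at (top, left) with L-trominoes,
    leaving the defective cell (dr, dc) uncovered. size is a power of two."""
    if size == 1:                        # base case: only the defect remains
        return
    t = next(ids)                        # id of the tromino placed at the centre
    half = size // 2
    quadrants = [(top, left), (top, left + half),
                 (top + half, left), (top + half, left + half)]
    centres = [(top + half - 1, left + half - 1), (top + half - 1, left + half),
               (top + half, left + half - 1), (top + half, left + half)]
    defects = []
    for (qr, qc), (cr, cc) in zip(quadrants, centres):
        if qr <= dr < qr + half and qc <= dc < qc + half:
            defects.append((dr, dc))     # the real defect lies in this quadrant
        else:
            board[cr][cc] = t            # central tromino covers this corner
            defects.append((cr, cc))
    for (qr, qc), (nr, nc) in zip(quadrants, defects):
        tile(board, qr, qc, half, nr, nc)   # conquer each quadrant

n = 8
board = [[0] * n for _ in range(n)]
tile(board, 0, 0, n, 0, 0)               # defect at the top-left corner
```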
Discuss and write the General Control Abstraction for Greedy Technique and
Find the optimal solution of the knapsack instance n = 7, M = 15,
(p1, p2, …p7) = (10, 5, 15, 7, 6, 18, 3) and (w1, w2, ……w7) = (2, 3, 5, 7, 1, 4, 1)

General Control Abstraction for Greedy Technique


The Greedy Technique is a problem-solving approach that builds up a solution
piece by piece, always choosing the next piece that offers the most immediate
benefit. It is particularly useful for optimization problems where a local
optimum choice leads to a global optimum solution. The general control
abstraction for the Greedy Technique can be outlined as follows:
Greedy method consists of 3 functions (steps):
• Select: selects an input from the array a[ ] (the candidate set) and puts it in
the variable x.
• Feasible: a Boolean function that checks whether the selected input satisfies
the constraints.
• Union: if the selected input x keeps the solution feasible, then x is included
in the solution and the objective function gets updated.
Pseudo-Code for Greedy Technique:
Algorithm Greedy(a, n) // a[1..n] contains the n inputs
{
    solution := ∅; // initialize the solution to empty
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
Knapsack Problem
The Knapsack Problem is a classic example of a problem that can be solved
using a greedy approach. In this problem, you have a knapsack with a
maximum weight capacity and a set of items, each with a weight and value.
The goal is to maximize the total value of the items in the knapsack without
exceeding its weight capacity.
Greedy Approach for Knapsack Problem
The greedy method applies to the fractional knapsack problem, in which a fraction
of an item may be taken. The common strategy is to use the value-to-weight ratio
to decide which items to include in the knapsack. The steps are:
1. Calculate the value-to-weight ratio (p/w) for each item.
2. Sort the items based on this ratio in descending order.
3. Iterate through the sorted items, adding each whole item to the knapsack as
long as the remaining capacity allows.
4. When the next item no longer fits whole, add the fraction of it that exactly
fills the remaining capacity, then stop.
Instance Details
• Number of items (n): 7
• Maximum weight capacity (M): 15
• Values (p): (10, 5, 15, 7, 6, 18, 3)
• Weights (w): (2, 3, 5, 7, 1, 4, 1)
Implementation
1. Calculate the value-to-weight ratios:

Item     Value (p)   Weight (w)   Value-to-Weight Ratio (p/w)
Item 1      10           2                  5
Item 2       5           3                  1.67
Item 3      15           5                  3
Item 4       7           7                  1
Item 5       6           1                  6
Item 6      18           4                  4.5
Item 7       3           1                  3

2. Sort items based on the ratio:


1. Item 5: Ratio = 6
2. Item 1: Ratio = 5
3. Item 6: Ratio = 4.5
4. Item 3: Ratio = 3
5. Item 7: Ratio = 3
6. Item 2: Ratio = 1.67
7. Item 4: Ratio = 1
3. Greedily select items:
• Start with an empty knapsack and a remaining capacity of 15.
• Add Item 5: Weight = 1, Value = 6. Remaining capacity = 14.
• Add Item 1: Weight = 2, Value = 10. Remaining capacity = 12.
• Add Item 6: Weight = 4, Value = 18. Remaining capacity = 8.
• Add Item 3: Weight = 5, Value = 15. Remaining capacity = 3.
• Add Item 7: Weight = 1, Value = 3. Remaining capacity = 2.
• Item 2 (Weight = 3) does not fit whole; add the fraction 2/3 of it, contributing
value (2/3) × 5 = 10/3 ≈ 3.33. Remaining capacity = 0.
• Item 4 is not taken (x4 = 0).
4. Calculate the total value:
• Total Value = 6 + 10 + 18 + 15 + 3 + 10/3 = 55.33 (i.e., 166/3)
Solution:
• Selected fractions: (x1, x2, x3, x4, x5, x6, x7) = (1, 2/3, 1, 0, 1, 1, 1)
• Total Value: 55.33
• Total Weight: 2 + 2 + 5 + 0 + 1 + 4 + 1 = 15
This is the optimal solution of the fractional knapsack instance. (If fractions
were not allowed, the same greedy selection would stop after Item 7, giving
items 5, 1, 6, 3, 7 with total value 52 and total weight 13.)
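The greedy computation can be sketched in Python for the fractional knapsack (the variant that greedy-by-ratio solves optimally); the function name is illustrative:

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy fractional knapsack: take items in decreasing p/w order,
    taking a fraction of the first item that no longer fits whole."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, remaining = 0.0, capacity
    fractions = [0.0] * len(values)
    for i in order:
        if remaining == 0:
            break
        take = min(weights[i], remaining)    # whole item, or whatever fits
        fractions[i] = take / weights[i]
        total += values[i] * fractions[i]
        remaining -= take
    return total, fractions

p = [10, 5, 15, 7, 6, 18, 3]
w = [2, 3, 5, 7, 1, 4, 1]
total, x = greedy_knapsack(p, w, 15)         # total ≈ 55.33 for this instance
```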
Discuss about the Minimum Cost Spanning tree? Explain how to find the
minimum cost spanning tree by using Prim’s algorithm with an example.

Minimum Cost Spanning Tree (MST)


A Minimum Cost Spanning Tree (MST) is a subset of the edges of a connected,
undirected graph that forms a tree (a connected graph with no cycles) and has
the minimum possible total edge weight. In other words, among all the
spanning trees of a graph, an MST has the smallest sum of edge weights.
Properties of MST
1. Spanning Tree: A spanning tree is a subgraph that includes all the
vertices of the graph and is a tree (i.e., it is connected and has no cycles).
2. Minimum Cost: The total weight (sum of edge weights) of the spanning
tree is minimized.
Prim’s Algorithm
Prim’s Algorithm is a greedy algorithm used to find the MST of a connected,
undirected graph. It works by growing the MST one edge at a time, starting
from an arbitrary vertex, and always choosing the smallest edge that connects
a vertex in the tree to a vertex outside the tree.
Steps of Prim’s Algorithm
1. Initialization:
o Start with an arbitrary vertex and add it to the MST.
o Initialize the MST with this single vertex.
o Initialize the set of vertices not yet in the MST.
2. Edge Selection:
o Find the minimum weight edge that connects a vertex in the MST
to a vertex outside the MST.
o Add this edge and the vertex it connects to the MST.
3. Repeat:
o Repeat the process until all vertices are included in the MST.
4. Termination:
o The algorithm terminates when all vertices are included in the
MST.
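The steps above can be sketched in Python with a priority queue. Since the figure is not reproduced here, the adjacency map below uses assumed weights reconstructed from the narrative (B-D = 4, C-D = 1, D-E = 2, A-C = 3, B-C = 10, C-E = 5); with them the MST cost is 10, as stated in the example:

```python
import heapq

def prim(graph, start):
    """Prim's algorithm over an adjacency map {vertex: [(neighbour, weight)]}."""
    visited = {start}
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)    # cheapest edge leaving the tree
        if v in visited:
            continue                     # skip: it would create a cycle
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for nxt, nw in graph[v]:
            if nxt not in visited:
                heapq.heappush(heap, (nw, v, nxt))
    return mst, total

# Assumed weights (the original figure is not included):
graph = {
    'A': [('C', 3)],
    'B': [('C', 10), ('D', 4)],
    'C': [('A', 3), ('B', 10), ('D', 1), ('E', 5)],
    'D': [('B', 4), ('C', 1), ('E', 2)],
    'E': [('C', 5), ('D', 2)],
}
mst, total = prim(graph, 'B')
```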
Example:

Suppose, a weighted graph is -

Step 1 - First, we have to choose a vertex from the above graph. Let's choose B.
Step 2 - Now, we have to choose and add the shortest edge from vertex B.
There are two edges from vertex B that are B to C with weight 10 and edge B to
D with weight 4. Among the edges, the edge BD has the minimum weight. So,
add it to the MST.

Step 3 - Now, again, choose the edge with the minimum weight among the remaining
candidate edges. In this case, the candidates are the edges DE and CD. Select the
edge DE and add it to the MST.
Step 4 - Now, select the edge CD, and add it to the MST.

Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE, as it would
create a cycle in the graph. So, choose the edge CA and add it to the
MST.

So, the graph produced in step 5 is the minimum spanning tree of the given
graph. The cost of the MST is given below –
Cost of MST = 4 + 2 + 1 + 3 = 10 units.
Discuss about the Minimum Cost Spanning tree? Explain how to find the
minimum cost spanning tree by using Kruskal's algorithm with your own
example.

Minimum Cost Spanning Tree (MST)


A Minimum Cost Spanning Tree (MST) is a subset of the edges of a connected,
undirected graph that forms a tree (a connected graph with no cycles) and has
the minimum possible total edge weight. In other words, among all the
spanning trees of a graph, an MST has the smallest sum of edge weights.
Properties of MST
1. Spanning Tree: A spanning tree is a subgraph that includes all the
vertices of the graph and is a tree (i.e., it is connected and has no cycles).
2. Minimum Cost: The total weight (sum of edge weights) of the spanning
tree is minimized.
Kruskal's Algorithm
Kruskal's Algorithm is a greedy algorithm used to find the MST of a connected,
undirected graph. The algorithm operates by sorting all the edges of the graph
by their weight and then adding edges to the MST in order of increasing
weight, provided they do not form a cycle.
Steps of Kruskal's Algorithm
1. Initialization:
o Start with an empty MST.
o Sort all the edges of the graph in non-decreasing order of their
weight.
2. Edge Selection:
o Iterate through the sorted edges, and for each edge, check if
adding it to the MST will form a cycle.
o If adding the edge does not form a cycle, include it in the MST.
3. Cycle Detection:
o Use a data structure such as Union-Find (Disjoint Set Union) to
efficiently check for cycles.
4. Termination:
o Repeat the process until the MST includes all the vertices of the
graph (i.e., when the MST has V−1 edges for V vertices).

Example:

As the first step, sort all the edges in the given graph in an ascending order and
store the values in an array.

Edge   B→D   A→B   C→F   F→E   B→C   G→F   A→G   C→D   D→E   C→G
Cost    5     6     9     10    11    12    15    17    22    25

Then, construct a forest of the given graph on a single plane.


From the list of sorted edge costs, select the least cost edge and add it onto the
forest in output graph.
B→D=5
Minimum cost = 5
Visited array, v = {B, D}
Similarly, the next least cost edge is B → A = 6; so we add it onto the output
graph.
Minimum cost = 5 + 6 = 11
Visited array, v = {B, D, A}

The next least cost edge is C → F = 9; add it onto the output graph.
Minimum Cost = 5 + 6 + 9 = 20
Visited array, v = {B, D, A, C, F}
The next edge to be added onto the output graph is F → E = 10.
Minimum Cost = 5 + 6 + 9 + 10 = 30
Visited array, v = {B, D, A, C, F, E}

The next edge from the least cost array is B → C = 11, hence we add it in the
output graph.
Minimum cost = 5 + 6 + 9 + 10 + 11 = 41
Visited array, v = {B, D, A, C, F, E}
The last edge from the least cost array to be added in the output graph is F → G
= 12.
Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53
Visited array, v = {B, D, A, C, F, E, G}

The obtained result is the minimum spanning tree of the given graph,
with cost = 5 + 6 + 9 + 10 + 11 + 12 = 53.
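The run above can be reproduced with a short union-find sketch of Kruskal's algorithm, using the edge list from the cost table:

```python
def kruskal(vertices, edges):
    """Kruskal's algorithm with a simple union-find over the given edge list."""
    parent = {v: v for v in vertices}

    def find(v):                         # find root, compressing the path
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                     # different components: no cycle formed
            parent[ru] = rv              # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [('B', 'D', 5), ('A', 'B', 6), ('C', 'F', 9), ('F', 'E', 10),
         ('B', 'C', 11), ('G', 'F', 12), ('A', 'G', 15), ('C', 'D', 17),
         ('D', 'E', 22), ('C', 'G', 25)]
mst, total = kruskal('ABCDEFG', edges)   # MST of the 7-vertex example graph
```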
