Unit - III R23
Divide and conquer: General method, Quick sort, Merge sort, Strassen's matrix multiplication
Greedy method: General method, Job sequencing with deadlines, Knapsack problem, Minimum cost
spanning trees, Single source shortest paths.
The computing time of divide and conquer (DAndC) is described by the recurrence
T(n) = g(n), when n is small enough to be solved directly
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n), otherwise
where T(n) is the time for DAndC on any input of size n,
g(n) is the time to compute the answer directly for small inputs, and
f(n) is the time for dividing P and combining the solutions to the sub problems.
Example:
1) Consider the case in which a = 2 and b = 2. Let T(1) = 2 and f(n) = n.
We have,
T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n
     = 4T(n/4) + 2n
     = 4[2T(n/8) + n/4] + 2n
     = 8T(n/8) + 3n
     .
     .
     .
In general, we see that T(n) = 2^i T(n/2^i) + i*n, for any i with 1 <= i <= log n.
In particular, when 2^i = n (i.e. i = log n), T(n) = n T(1) + n log n = n log n + 2n.
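As a quick check of this closed form, the recurrence can be evaluated directly and compared with n log n + 2n; a minimal sketch in C (the function and variable names are illustrative only):

#include <stdio.h>
#include <math.h>

/* T(n) for the recurrence T(1) = 2, T(n) = 2*T(n/2) + n, with n a power of 2 */
static long T(long n)
{
    if (n == 1)
        return 2;
    return 2 * T(n / 2) + n;
}

int main(void)
{
    /* compare the recurrence with the closed form n*log2(n) + 2n */
    for (long n = 1; n <= 1024; n *= 2)
        printf("n = %4ld   T(n) = %6ld   n*log2(n) + 2n = %6.0f\n",
               n, T(n), n * log2((double)n) + 2.0 * n);
    return 0;
}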
QUICK SORT :
The divide-and-conquer approach can be used to arrive at an efficient sorting method different
from merge sort.
In merge sort, the file a[1:n] was divided at its midpoint into sub arrays which were
independently sorted & later merged.
In Quick sort, the division into 2 sub arrays is made so that the sorted sub arrays do not need to
be merged later.
This is accomplished by rearranging the elements in a[1:n] such that a[i] <= a[j] for all i
between 1 and m and all j between m+1 and n, for some m, 1 <= m <= n.
Thus the elements in a[1:m] and a[m+1:n] can be independently sorted.
Algorithm Partition(a, m, p)
// Within a[m], a[m+1], ..., a[p-1] the elements are rearranged in such a manner that if
// initially t = a[m], then after completion a[q] = t for some q between m and p-1,
// a[k] <= t for m <= k < q, and a[k] >= t for q < k < p. q is returned.
// Set a[p] = infinity.
{
    v = a[m]; i = m; j = p;
    repeat
    {
        repeat
            i = i + 1;
        until (a[i] >= v);
        repeat
            j = j - 1;
        until (a[j] <= v);
        if (i < j) then Interchange(a, i, j);
    } until (i >= j);
    a[m] = a[j]; a[j] = v;
    return j;
}
Algorithm Interchange(a, i, j)
// Exchange a[i] with a[j].
{
    p = a[i];
    a[i] = a[j];
    a[j] = p;
}
Algorithm: Sorting by Partitioning
Algorithm QuickSort(p, q)
// Sorts the elements a[p], ..., a[q], which reside in the global array a[1:n], into
// ascending order; a[n+1] is considered to be defined and must be >= all the
// elements in a[1:n].
{
    if (p < q) then   // if there is more than one element
    {
        // divide P into two subproblems:
        j = Partition(a, p, q+1);
        // j is the position of the partitioning element.
        // solve the subproblems:
        QuickSort(p, j-1);
        QuickSort(j+1, q);
        // there is no need for combining the solutions.
    }
}
/* a[] is the global array being sorted; a[p] is assumed to be >= all of a[m..p-1] */
extern int a[];
void interchange(int i, int j);

int partition(int m, int p)
{
    int v, i, j;
    v = a[m];
    i = m;
    j = p;
    do
    {
        do
            i = i + 1;
        while (a[i] < v);
        do
            j = j - 1;
        while (a[j] > v);
        if (i < j)
            interchange(i, j);
    } while (i < j);
    a[m] = a[j];
    a[j] = v;
    return j;
}

void interchange(int i, int j)
{
    /* exchange a[i] with a[j] */
    int p;
    p = a[i];
    a[i] = a[j];
    a[j] = p;
}
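A complete program can be obtained by wrapping a quicksort routine and a small driver around these functions; the following is one possible C sketch (the global array a[], its size limit, and the sentinel handling with INT_MAX are assumptions made here for illustration):

#include <stdio.h>
#include <limits.h>

int a[102];   /* global array: a[1..n] holds the data, a[n+1] is a sentinel */

void interchange(int i, int j)
{
    int t = a[i];
    a[i] = a[j];
    a[j] = t;
}

int partition(int m, int p)
{
    int v = a[m], i = m, j = p;
    do
    {
        do { i = i + 1; } while (a[i] < v);
        do { j = j - 1; } while (a[j] > v);
        if (i < j) interchange(i, j);
    } while (i < j);
    a[m] = a[j];
    a[j] = v;
    return j;
}

void quicksort(int p, int q)
{
    if (p < q)                        /* more than one element */
    {
        int j = partition(p, q + 1);  /* j is the position of the partitioning element */
        quicksort(p, j - 1);
        quicksort(j + 1, q);
    }
}

int main(void)
{
    int n, i;
    printf("Enter the no. of elements ");
    scanf("%d", &n);
    printf("Enter the array elements\n");
    for (i = 1; i <= n; i++)
        scanf("%d", &a[i]);
    a[n + 1] = INT_MAX;               /* sentinel, >= all elements in a[1:n] */
    quicksort(1, n);
    printf("The sorted elements are,\n");
    for (i = 1; i <= n; i++)
        printf("%d\n", a[i]);
    return 0;
}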
Output:
Enter the no. of elements 5
Enter the array elements
3
8
1
5
2
The sorted elements are,
1
2
3
5
8
MERGE SORT:
As another example of divide-and-conquer, we investigate a sorting algorithm that has the nice
property that in the worst case its complexity is O(n log n).
This algorithm is called merge sort.
We assume throughout that the elements are to be sorted in non-decreasing order.
Given a sequence of n elements a[1], ..., a[n], the general idea is to imagine them split into 2
sets a[1], ..., a[n/2] and a[n/2 + 1], ..., a[n].
Each set is individually sorted, and the resulting sorted sequences are merged to produce a
single sorted sequence of ‘n’ elements.
Thus, we have another ideal example of the divide-and-conquer strategy in which the splitting
is into 2 equal-sized sets & the combining operation is the merging of 2 sorted sets into one.
Algorithm For Merge Sort:
Algorithm MergeSort(low, high)
// a[low:high] is a global array to be sorted.
// Small(P) is true if there is only one element to sort; in this case the list is already sorted.
{
    if (low < high) then   // if there is more than one element
    {
        // Divide P into subproblems:
        // find where to split the set
        mid = [(low + high)/2];
        // solve the subproblems
        MergeSort(low, mid);
        MergeSort(mid + 1, high);
        // combine the solutions
        Merge(low, mid, high);
    }
}
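The MergeSort pseudocode above relies on a Merge routine that combines two sorted sub arrays; that routine is not listed here. A runnable C sketch of both routines, applied to the ten-element array used in the example that follows, might look like this (the auxiliary array b[] and the MAXN limit are assumptions made for illustration):

#include <stdio.h>

#define MAXN 100

int a[MAXN + 1];   /* global array a[1..n] to be sorted */
int b[MAXN + 1];   /* auxiliary array used by merge()   */

/* merge the sorted runs a[low..mid] and a[mid+1..high] into a[low..high] */
void merge(int low, int mid, int high)
{
    int h = low, i = low, j = mid + 1, k;
    while (h <= mid && j <= high)
        b[i++] = (a[h] <= a[j]) ? a[h++] : a[j++];
    while (h <= mid)  b[i++] = a[h++];
    while (j <= high) b[i++] = a[j++];
    for (k = low; k <= high; k++)
        a[k] = b[k];
}

void mergesort(int low, int high)
{
    if (low < high)
    {
        int mid = (low + high) / 2;   /* find where to split the set */
        mergesort(low, mid);          /* solve the subproblems       */
        mergesort(mid + 1, high);
        merge(low, mid, high);        /* combine the solutions       */
    }
}

int main(void)
{
    int data[] = {310, 285, 179, 652, 351, 423, 861, 254, 450, 520};
    int n = 10, i;
    for (i = 1; i <= n; i++)
        a[i] = data[i - 1];
    mergesort(1, n);
    for (i = 1; i <= n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}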
Consider the array of 10 elements a[1:10] =(310, 285, 179, 652, 351, 423, 861, 254, 450, 520)
Algorithm Mergesort begins by splitting a[] into 2 sub arrays each of size five (a[1:5] and
a[6:10]).
The elements in a[1:5] are then split into two sub arrays of size three (a[1:3]) and two (a[4:5]).
Then the items in a[1:3] are split into sub arrays of size two (a[1:2]) and one (a[3:3]).
The two values in a[1:2] are split a final time into one-element sub arrays, and now the merging
begins.
(310 | 285 | 179 | 652, 351 | 423, 861, 254, 450, 520)
where the vertical bars indicate the boundaries of the sub arrays.
STRASSEN'S MATRIX MULTIPLICATION:
1. Let A and B be two n*n matrices. The product matrix C = AB is calculated by using the formula
   C(i, j) = sum over k of A(i, k) * B(k, j), for all i and j between 1 and n.
3. The divide and conquer method suggests another way to compute the product of two n*n matrices.
4. We assume that n is a power of 2. In case n is not a power of 2, enough rows and columns of
zeros can be added to both A and B so that the resulting dimensions are a power of two.
5. If n = 2, then the formula above is computed directly using scalar multiplication operations on
the elements of A and B.
6. If n > 2, then the matrices are partitioned into sub matrices of size n/2 * n/2. Since n is a
power of 2, these products can be computed recursively using the same formulas. The algorithm keeps
applying itself to smaller sub matrices until n becomes suitably small (n = 2), at which point the
product is computed directly.
7. The formulas for the sub matrices are
   C11 = A11 B11 + A12 B21
   C12 = A11 B12 + A12 B22
   C21 = A21 B11 + A22 B21
   C22 = A21 B12 + A22 B22
For example, with n = 2,
   | 2 2 |     | 1 1 |     | 4 4 |
   | 2 2 |  *  | 1 1 |  =  | 4 4 |
Example
Multiply the following two 2*2 matrices using Strassen's formulas:
   | 4 4 |     | 4 4 |
   | 4 4 |  *  | 4 4 |
P = (4+4)(4+4) = 64
Q = (4+4)4 = 32
R = 4(4-4) = 0
S = 4(4-4) = 0
T = (4+4)4 = 32
U = (4-4)(4+4) = 0
V = (4-4)(4+4) = 0
C11 = 64 + 0 - 32 + 0 = 32
C12 = 0 + 32 = 32
C21 = 32 + 0 = 32
C22 = 64 + 0 - 32 + 0 = 32
So the product is
   | 32 32 |
   | 32 32 |
Since two n/2 * n/2 matrices can be added in Cn^2 time for some constant C, the overall computing
time T(n) of the resulting divide and conquer algorithm is given by the recurrence
   T(n) = b                  for n <= 2
   T(n) = 8T(n/2) + Cn^2     for n > 2
where b and C are constants. That is, T(n) = O(n^3).
* Matrix multiplications are more expensive than matrix additions (O(n^3) versus O(n^2)), so we can
attempt to reformulate the equations for Cij so as to have fewer multiplications and possibly more
additions.
12. Strassen has discovered a way to compute the Cij of equation (2) using only 7 multiplications
and 18 additions or subtractions.
13. Strassen's formulas are
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)B11
R = A11(B12 - B22)
S = A22(B21 - B11)
T = (A11 + A12)B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)
C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U
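The worked example above is a direct transcription of these formulas; the small C sketch below (scalar 2*2 matrices only, written here purely as an illustration) reproduces the same values of C11, C12, C21 and C22.

#include <stdio.h>

int main(void)
{
    /* 2 x 2 input matrices from the worked example: all entries equal to 4 */
    int A11 = 4, A12 = 4, A21 = 4, A22 = 4;
    int B11 = 4, B12 = 4, B21 = 4, B22 = 4;

    /* Strassen's seven products */
    int P = (A11 + A22) * (B11 + B22);
    int Q = (A21 + A22) * B11;
    int R = A11 * (B12 - B22);
    int S = A22 * (B21 - B11);
    int T = (A11 + A12) * B22;
    int U = (A21 - A11) * (B11 + B12);
    int V = (A12 - A22) * (B21 + B22);

    /* combine the products into the result matrix */
    int C11 = P + S - T + V;
    int C12 = R + T;
    int C21 = Q + S;
    int C22 = P + R - Q + U;

    printf("C11=%d C12=%d C21=%d C22=%d\n", C11, C12, C21, C22);  /* prints 32 32 32 32 */
    return 0;
}

With only 7 multiplications per level, the recurrence becomes T(n) = 7T(n/2) + an^2 for n > 2 (a a constant), which solves to T(n) = O(n^log2 7), approximately O(n^2.81), an improvement over O(n^3).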
GREEDY METHOD:
* The function Select selects an input from a[] and removes it. The selected input value is
assigned to x.
* Feasible is a Boolean-valued function that determines whether x can be included into the solution
vector.
* The function Union combines x with the solution and updates the objective function.
* The function Greedy describes the essential way that a greedy algorithm works, once a particular
problem is chosen and the functions Select, Feasible and Union are properly implemented.
Example
Suppose we have in a country the following coins are available :
Dollars(100 cents)
Quarters(25 cents)
Dimes( 10 cents)
Nickel(5 Cents)
Pennies(1 cent)
Our aim is paying a given amount to a customer using the smallest possible number of coins.
For example, if we must pay 276 cents, some possible solutions are:
1 dollar + 7 quarters + 1 penny = 9 coins
2 dollars + 3 quarters + 1 penny = 6 coins
2 dollars + 7 dimes + 1 nickel + 1 penny = 11 coins
The greedy choice (always take the largest coin that still fits) yields the 6-coin solution, as the
sketch below illustrates.
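A minimal greedy coin-change sketch in C for these denominations (the amount 276 is taken from the example; the variable names are illustrative only):

#include <stdio.h>

int main(void)
{
    /* denominations in cents: dollar, quarter, dime, nickel, penny */
    int value[] = {100, 25, 10, 5, 1};
    const char *name[] = {"dollar(s)", "quarter(s)", "dime(s)", "nickel(s)", "penny/pennies"};
    int amount = 276;   /* amount to pay, from the example above */
    int total = 0;

    /* greedy rule: repeatedly take as many of the largest remaining coin as possible */
    for (int i = 0; i < 5; i++)
    {
        int count = amount / value[i];
        amount = amount % value[i];
        if (count > 0)
            printf("%d %s\n", count, name[i]);
        total += count;
    }
    printf("Total coins used: %d\n", total);   /* prints 6 for 276 cents */
    return 0;
}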
KNAPSACK PROBLEM:
We are given n objects and a knapsack (bag) of capacity M. Object i has a weight wi and a profit
pi, where i varies from 1 to n.
The problem is to fill the bag with (fractions of) the n objects so that the resulting profit is
maximum.
Formally the problem can be stated as
   maximize the sum of pi*xi, subject to the sum of wi*xi <= M,
where xi is the fraction of object i placed in the bag and it lies between 0 and 1.
There are many ways to solve this problem, each giving a feasible solution, and among these we have
to find the optimal one. The greedy algorithm below generates only one solution, which is both
feasible and optimal. First, we find the profit/weight ratio of each object and sort the objects in
descending order of these ratios. Select the object with the highest p/w ratio and check whether
its weight is less than the remaining capacity of the bag. If so, place one full unit of the object
and decrement the capacity of the bag by the weight of the object you have placed.
Repeat the above steps until the remaining capacity of the bag becomes less than the weight of the
object you have selected; in that case place only a fraction of the object and come out of the
loop. A sketch of this procedure is given below.
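The following C sketch follows that description, applied to the instance from Part-A question 4(a) below (n = 7, profits (10,5,15,7,6,18,3), weights (2,3,5,7,1,4,1), capacity m = 15); the selection-sort ordering step and the array names are assumptions made for illustration.

#include <stdio.h>

#define N 7

int main(void)
{
    /* instance from Part-A question 4(a): n = 7, capacity m = 15 */
    double p[N] = {10, 5, 15, 7, 6, 18, 3};
    double w[N] = {2, 3, 5, 7, 1, 4, 1};
    double x[N] = {0};          /* fraction of each object placed in the bag */
    double m = 15, profit = 0;
    int order[N], i, j;

    /* sort object indices by decreasing p/w ratio (simple selection sort) */
    for (i = 0; i < N; i++) order[i] = i;
    for (i = 0; i < N - 1; i++)
        for (j = i + 1; j < N; j++)
            if (p[order[j]] / w[order[j]] > p[order[i]] / w[order[i]])
            { int t = order[i]; order[i] = order[j]; order[j] = t; }

    /* greedy rule: take whole objects while they fit, then a fraction of the next one */
    for (i = 0; i < N && m > 0; i++)
    {
        int k = order[i];
        if (w[k] <= m) { x[k] = 1.0; m -= w[k]; }
        else           { x[k] = m / w[k]; m = 0; }
        profit += p[k] * x[k];
    }

    for (i = 0; i < N; i++)
        printf("x[%d] = %.2f\n", i + 1, x[i]);
    printf("Total profit = %.2f\n", profit);
    return 0;
}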
MINIMUM COST SPANNING TREES:
Let G = (V, E) be an undirected connected graph with vertex set V and edge set E.
A sub-graph t = (V, E') of G is a spanning tree of G iff t is a tree.
The problem is to generate a sub-graph G' = (V, E'), where E' is a subset of E, such that G' is a
minimum cost spanning tree.
Each edge has a given non-negative length (cost); we must connect all the nodes using edges from E'
so that the total weight is minimum.
NOTE:
We have to visit all the nodes.
Any connected graph with N vertices must have at least N-1 edges; a spanning tree has exactly N-1
edges and does not form a cycle.
Definition:
A spanning tree of a graph is an undirected tree consisting of only those edges that are necessary
to connect all the vertices in the original graph. A spanning tree has the property that for any
pair of vertices there exists exactly one path between them, and the insertion of any edge into a
spanning tree forms a unique cycle.
Applications of spanning trees:
1. Analysis of electrical circuits.
2. Shortest route problems.
METHODS OF MINIMUM COST SPANNING TREE:
The cost of a spanning tree is the sum of the costs of the edges in that tree.
There are two methods to determine a minimum cost spanning tree:
1. Kruskal's Algorithm
2. Prim's Algorithm
KRUSKAL'S ALGORITHM:
In Kruskal's algorithm the selection function chooses edges in increasing order of length without
worrying too much about their connection to previously chosen edges, except that it never forms a
cycle. The result is a forest of trees that grows until all the trees in the forest (all the
components) merge into a single tree.
In this algorithm, a minimum cost spanning tree T is built edge by edge.
Edges are considered for inclusion in T in increasing order of their cost.
Algorithm:
Algorithm Kruskal(E, cost, n, t)
// E is the set of edges in G; G has n vertices; cost[u,v] is the cost of edge (u,v);
// t is the set of edges in the minimum cost spanning tree; the final cost is returned.
{
    Construct a heap out of the edge costs;
    for i = 1 to n do parent[i] = -1;   // each vertex is in a different set
    i = 0; mincost = 0.0;
    while ((i < n-1) and (heap not empty)) do
    {
        Delete a minimum cost edge (u,v) from the heap and reheapify;
        j = Find(u); k = Find(v);
        if (j != k) then
        {
            i = i + 1;
            t[i,1] = u; t[i,2] = v;
            mincost = mincost + cost[u,v];
            Union(j,k);
        }
    }
    if (i != n-1) then write("No spanning tree");
    else return mincost;
}
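For reference, a concrete C version of the same strategy, with the edges sorted by cost in place of the heap and a simple array-based Find/Union (the five-edge sample graph and the helper names are assumptions made for illustration):

#include <stdio.h>
#include <stdlib.h>

typedef struct { int u, v, cost; } Edge;

static int parent[100];

/* find the representative of the set containing vertex x */
static int find(int x)
{
    while (parent[x] >= 0)
        x = parent[x];
    return x;
}

/* compare edges by cost, for qsort (used here instead of a heap) */
static int by_cost(const void *a, const void *b)
{
    return ((const Edge *)a)->cost - ((const Edge *)b)->cost;
}

int main(void)
{
    /* a small illustrative graph: 4 vertices, 5 edges */
    Edge e[] = {{1, 2, 10}, {2, 3, 15}, {1, 3, 5}, {3, 4, 20}, {1, 4, 30}};
    int m = 5, n = 4;
    int i, count = 0, mincost = 0;

    for (i = 1; i <= n; i++)
        parent[i] = -1;                  /* each vertex starts in its own set */
    qsort(e, m, sizeof(Edge), by_cost);  /* edges in increasing order of cost */

    for (i = 0; i < m && count < n - 1; i++)
    {
        int j = find(e[i].u), k = find(e[i].v);
        if (j != k)                      /* accept the edge only if it joins two trees */
        {
            printf("edge (%d,%d) cost %d\n", e[i].u, e[i].v, e[i].cost);
            mincost += e[i].cost;
            parent[j] = k;               /* union of the two sets */
            count++;
        }
    }
    if (count != n - 1) printf("No spanning tree\n");
    else printf("Minimum cost = %d\n", mincost);
    return 0;
}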
Analysis
14. The time complexity of the minimum cost spanning tree algorithm in the worst case is
O(|E| log |E|), where E is the edge set of G.
Step 1. In the graph, the edge (g, h) is the shortest. Either vertex g or vertex h could be the
representative; let us choose vertex g arbitrarily.
Step 2. The edge (c, i) creates the second tree. Choose vertex c as representative for second
tree.
Step 3. Edge (g, g) is the next shortest edge. Add this edge and choose vertex g as
representative.
Step 5. Add edge (c, f) and merge two trees. Vertex c is chosen as the representative.
Step 6. Edge (g, i) is the next cheapest, but if we add this edge a cycle would be created.
Vertex c is the representative of both.
Step 7. Instead, add edge (c, d).
Step 10. Again, if we add edge (b, c), it would create a cycle. Add edge (d, e) instead to
complete the spanning tree. In this spanning tree all trees joined and vertex c is a sole
representative.
4.8.3. PRIM'S ALGORITHM
Start from an arbitrary vertex (root). At each stage, add a new branch (edge) to the tree already
constructed; the algorithm halts when all the vertices in the graph have been reached.
Algorithm Prim(E, cost, n, t)
// E is the set of edges in G; cost[1:n, 1:n] is the cost adjacency matrix of an n-vertex graph;
// t is the set of edges in the minimum cost spanning tree; the final cost is returned.
{
    Let (k, l) be an edge of minimum cost in E;
    mincost := cost[k, l];
    t[1,1] := k; t[1,2] := l;
    for i := 1 to n do        // initialize near[]
        if (cost[i, l] < cost[i, k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n-1 do      // find n-2 additional edges for t
    {
        Let j be an index such that near[j] ≠ 0 and cost[j, near[j]] is minimum;
        t[i,1] := j; t[i,2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do    // update near[]
            if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k, j])) then
                near[k] := j;
    }
    return mincost;
}
15. Prim's algorithm will start with a tree that includes only a minimum cost edge of G.
16. Then, edges are added to the tree one by one. The next edge (i, j) to be added is such that i
is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i, j),
cost[i, j], is minimum among all such edges.
17. The working of Prim's algorithm is illustrated by the following diagrams.
(Diagrams: Steps 1 through 6 of the spanning tree construction.)
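A compact C sketch of the same greedy idea over a cost adjacency matrix (the five-vertex example graph and the INF sentinel below are assumptions chosen only for illustration):

#include <stdio.h>

#define N   5
#define INF 9999

int main(void)
{
    /* cost adjacency matrix of a small illustrative 5-vertex graph (vertices 0..4) */
    int cost[N][N] = {
        {INF, 2,   INF, 6,   INF},
        {2,   INF, 3,   8,   5  },
        {INF, 3,   INF, INF, 7  },
        {6,   8,   INF, INF, 9  },
        {INF, 5,   7,   9,   INF}
    };
    int intree[N] = {0};   /* intree[v] = 1 once v has been added to the tree */
    int i, v, edges, mincost = 0;

    intree[0] = 1;         /* start from an arbitrary vertex (vertex 0) */
    for (edges = 0; edges < N - 1; edges++)
    {
        int bu = -1, bv = -1, best = INF;
        /* choose the cheapest edge (u, v) with u in the tree and v outside it */
        for (i = 0; i < N; i++)
            if (intree[i])
                for (v = 0; v < N; v++)
                    if (!intree[v] && cost[i][v] < best)
                    { best = cost[i][v]; bu = i; bv = v; }
        intree[bv] = 1;
        mincost += best;
        printf("edge (%d,%d) cost %d\n", bu, bv, best);
    }
    printf("Minimum cost = %d\n", mincost);
    return 0;
}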
SINGLE SOURCE SHORTEST PATHS:
Graphs can be used to represent the highway structure of a state or country, with vertices
representing cities and edges representing sections of highway. The edges can then be assigned
weights which may be either the distance between the two cities connected by the edge or the
average time to drive along that section of highway. A motorist wishing to drive from city A to
city B would be interested in answers to the following questions:
1. Is there a path from A to B?
2. If there is more than one path from A to B, which is the shortest path?
The problems defined by these questions are special cases of the path problem we study in this
section. The length of a path is now defined to be the sum of the weights of the edges on that path.
The starting vertex of the path is referred to as the source and the last vertex the destination. The
graphs are digraphs representing streets. Consider a digraph G=(V,E), with the distance to be
traveled as weights on the edges. The problem is to determine the shortest path from v0 to all the
remaining vertices of G. It is assumed that all the weights associated with the edges are positive.
The shortest path between v0 and some other node v is an ordering among a subset of the edges.
Hence this problem fits the ordering paradigm.
Example:
Consider the digraph of fig 7-1. Let the numbers on the edges be the costs of travelling along that
route. If a person is interested in travelling from v1 to v2, then he encounters many paths. Some
of them are
o v1 → v2 = 50 units
o v1 → v3 → v4 → v2 = 10 + 15 + 20 = 45 units
o v1 → v5 → v4 → v2 = 45 + 30 + 20 = 95 units
o v1 → v3 → v4 → v5 → v4 → v2 = 10 + 15 + 35 + 30 + 20 = 110 units
The cheapest path among these is the path v1 → v3 → v4 → v2. The cost of this path is
10 + 15 + 20 = 45 units. Even though there are three edges on this path, it is cheaper than
travelling along the path connecting v1 and v2 directly, i.e., the path v1 → v2 that costs 50
units. One can also notice that it is not possible to travel to v6 from any other node.
To formulate a greedy-based algorithm to generate the cheapest paths, we must conceive a multistage
solution to the problem and also an optimization measure. One possibility is to build the shortest
paths one by one. As an optimization measure we can use the sum of the lengths of all paths so far
generated. For this measure to be minimized, each individual path must be of minimum length. If we
have already constructed i shortest paths, then using this optimization measure, the next path to
be constructed should be the next shortest minimum length path. The greedy way to generate these
paths is in non-decreasing order of path length: first, a shortest path to the nearest vertex is
generated; then a shortest path to the second nearest vertex is generated, and so on.
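This greedy strategy, growing shortest paths in non-decreasing order of length, is exactly what Dijkstra's single source shortest path algorithm does. A compact C sketch over a cost adjacency matrix is given below; the matrix is reconstructed only from the edge costs quoted in the example above (edges not mentioned in the text are set to INF), so it is an assumption made to keep the sketch runnable.

#include <stdio.h>

#define N   6
#define INF 9999

int main(void)
{
    /* cost adjacency matrix for v1..v6 (indices 0..5), built from the example's edge costs */
    int cost[N][N] = {
        {INF,  50,  10, INF,  45, INF},   /* v1 */
        {INF, INF, INF, INF, INF, INF},   /* v2 */
        {INF, INF, INF,  15, INF, INF},   /* v3 */
        {INF,  20, INF, INF,  35, INF},   /* v4 */
        {INF, INF, INF,  30, INF, INF},   /* v5 */
        {INF, INF, INF, INF, INF, INF}    /* v6 */
    };
    int dist[N], done[N] = {0};
    int i, j, u, step;

    /* initialize distances from the source v1 (index 0) */
    for (i = 0; i < N; i++)
        dist[i] = cost[0][i];
    dist[0] = 0;
    done[0] = 1;

    /* greedily pick the nearest unfinished vertex and relax its outgoing edges */
    for (step = 1; step <= N - 2; step++)
    {
        u = -1;
        for (i = 0; i < N; i++)
            if (!done[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        done[u] = 1;
        for (j = 0; j < N; j++)
            if (!done[j] && dist[u] + cost[u][j] < dist[j])
                dist[j] = dist[u] + cost[u][j];
    }

    for (i = 1; i < N; i++)
        printf("shortest distance v1 -> v%d = %d\n", i + 1, dist[i]);   /* 9999 means unreachable */
    return 0;
}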
A much simpler method would be to solve it using a matrix representation. The steps that should be
followed are as follows.
Step 1: find the adjacency matrix for the given graph. The adjacency matrix for fig 7.1 is given
below
V1 V2 V3 V4 V5 V6
V1 - 50 10 Inf 45 Inf
Step 2: consider v1 to be the source and choose the minimum entry in the row v1. In the above
table the minimum in row v1 is 10.
Step 3: find out the column in which the minimum is present, for the above example it is column
v3. Hence, this is the node that has to be next visited.
Step 4: compute a matrix by eliminating v1 and v3 columns. Initially retain only row v1. The
second row is computed by adding 10 to all values of row v3.
The resulting matrix is
V2 V4 V5 V6
V1 Vw 50 Inf 45 Inf
Minimum 50 25 45 inf
Step 5: find the minimum in each column. Now select the minimum from the resulting row. In the
above example the minimum is 25. Repeat step 3 followed by step 4 till all vertices are covered or
single column is left.
V2 V5 V6
V1 Vw 50 45 Inf
Minimum 45 45 inf
V5 V6
V1 Vw 45 Inf
V1 V3 V4 V2 Vw 45+10 45+inf
Minimum 45 Inf
V6
V1 Vw Inf
V1 V3 V4 V2 V5 Vw 45+inf
Minimum inf
Finally, the order in which the cheapest paths from v1 reach the other vertices is V1 → V3 → V4 → V2 → V5.
PART-A (2 Marks)
1. Explain the solution to the problem of job sequencing with deadlines by Greedy technique.
2. Explain how travelling salesman problem is solved by using dynamic programming with an
example.
3. Describe the Knapsack problem. Find an optimal solution for the Greedy Knapsack instance for
n=3, m=6, profits are (P1, P2, P3)=(1, 2, 5) weights are (w1, w2, w3)=(2,3,4).
4. (a)Solve for Knapsack problem using Greedy method for the instance n = 7, profits (p1...p7) =
(10,5,15,7,6,18,3) and weights (W1...W7) = ( 2,3,5,7,1,4,1) and capacity of sack is m = 15.
(b) Explain the problem Job sequencing with Deadline and solve for n = 5, (P1...P5) = (20, 15,
10, 5, 1) and Deadlines (D1...D5) = (2, 2, 1, 3, 3).
5. Write the merge sort algorithm. Find out the best, worst and average cases of this algorithm. Sort
the following numbers using merge sort: 10, 12, 1, 5, 18, 28, 38, 39, 2, 4, 7
6. Explain quick sort algorithm with an example. Derive its time complexity.
7. Mention the advantage of Strassen's matrix multiplication. Explain how Strassen's matrix
multiplication works.
8. Write an algorithm for finding the Maximum and Minimum values using Divide and Conquer
method and solve for: 22, 13, -5, -8, 15, 60, 17, 31, 47.
9. Explain Binary search algorithm and derive its time complexity.