
MRCET CAMPUS (AUTONOMOUS INSTITUTION – UGC, GOVT. OF INDIA)

Department of CSE (Emerging Technologies)
(Data Science, Cyber Security and Internet of Things)

DESIGN AND ANALYSIS OF ALGORITHMS
(R20A0505)

LECTURE NOTES

B.Tech – CSE (Emerging Technologies), R-20 Regulation
(II YEAR – II SEM) (2022-23)

MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY
(Autonomous Institution – UGC, Govt. of India)
Recognized under 2(f) and 12 (B) of UGC ACT 1956
(Affiliated to JNTUH, Hyderabad, Approved by AICTE - Accredited by NBA & NAAC – ‘A’ Grade - ISO 9001:2015 Certified)
Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad – 500100, Telangana State, India

Department of Computer Science and Engineering

(EMERGING TECHNOLOGIES)
Vision

 To be at the forefront of Emerging Technologies and to evolve as a Centre of Excellence in Research, Learning and Consultancy, to foster students into globally competent professionals useful to society.

Mission

The department of CSE (Emerging Technologies) is committed to:

 To offer the highest professional and academic standards in terms of personal growth and satisfaction.

 To make society the hub of emerging technologies and thereby capture opportunities in new-age technologies.

 To create a benchmark in the areas of Research, Education and Public Outreach.

 To provide students a platform where independent learning and scientific study are
encouraged with emphasis on latest engineering techniques.

Quality Policy

 To vigorously pursue continual improvement of the teaching-learning process of the Undergraduate and Post Graduate programs in Engineering & Management.

 To provide state-of-the-art infrastructure and expertise to impart quality education and a research environment to students for a complete learning experience.

 To develop students with a disciplined and integrated personality.

 To offer relevant, cost-effective, quality programmes that produce engineers as per the requirements of industry.

For more information: www.mrcet.ac.in



MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY


II Year B.Tech. CSE (Emerging Technologies - Cyber Security), II Sem        L T/P/D C: 3 -/-/- 3
(R20A0505) DESIGN AND ANALYSIS OF ALGORITHMS

COURSE OBJECTIVES:
1. To analyze performance of algorithms.
2. To choose the appropriate data structure and algorithm design method for a specified
application.
3. To understand how the choice of data structures and algorithm design methods impacts the
performance of programs.
4. To solve problems using algorithm design methods such as the greedy method, divide and
conquer, dynamic programming, backtracking and branch and bound.
5. To understand the differences between tractable and intractable problems and to introduce P
and NP classes.

UNIT-I Introduction: Algorithms, Pseudo code for expressing algorithms, performance analysis-
Space complexity, Time Complexity, Asymptotic notation- Big oh notation, omega notation,
theta notation and little oh notation. Divide and Conquer: General method. Applications- Binary
search, Quick sort, merge sort, Strassen’s matrix multiplication.

UNIT-II Disjoint set operations, Union and Find algorithms, AND/OR graphs, Connected
components, Bi-connected components.
Greedy method: General method, applications- Job sequencing with deadlines, Knapsack
problem, Spanning trees, Minimum cost spanning trees, Single source shortest path problem.

UNIT-III Dynamic Programming: General method, applications- Matrix chain multiplication,


Optimal binary search trees, 0/1 Knapsack problem, All pairs shortest path problem, Traveling
sales person problem, Reliability design.

UNIT-IV Backtracking: General method, Applications- n-queen problem, Sum of subsets


problem, Graph coloring, Hamiltonian cycles.

UNIT-V Branch and Bound: General method, applications- Travelling sales person problem, 0/1
Knapsack problem- LC branch and Bound solution, FIFO branch and Bound solution.
NP-Hard and NP-Complete Problems: Basic concepts, Non-deterministic algorithms, NP-Hard
and NP-Complete classes, NP-Hard problems, Cook's theorem.

TEXT BOOKS:
1. Fundamentals of Computer Algorithms, Ellis Horowitz, Sartaj Sahni and S. Rajasekaran, Universities Press.
2. Design and Analysis of Algorithms, P. H. Dave, 2nd edition, Pearson Education.

REFERENCES:
1. Introduction to the Design and Analysis of Algorithms, A. Levitin, Pearson Education.
2. Algorithm Design: Foundations, Analysis and Internet Examples, M. T. Goodrich and R. Tamassia, John Wiley and Sons.
3. Design and Analysis of Algorithms, S. Sridhar, Oxford Univ. Press.
4. Design and Analysis of Algorithms, Aho, Ullman and Hopcroft, Pearson Education.
5. Foundations of Algorithms, R. Neapolitan and K. Naimipour, 4th edition.

UNIT-I
Introduction: Algorithms, Pseudo code for expressing algorithms, performance analysis-
Space complexity, Time Complexity, Asymptotic notation- Big oh notation, omega notation,
theta notation and little oh notation.
Divide and Conquer: General method. Applications- Binary search, Quick sort, merge sort,
Strassen’s matrix multiplication.

INTRODUCTION TO ALGORITHM

What is an Algorithm?
An algorithm is a set of steps to complete a task.

For example,

Task: to make a cup of tea.

Algorithm:

· add water and milk to the kettle,
· boil it, add tea leaves,
· add sugar, and then serve it in a cup.

"a set of steps to accomplish or complete a task that is described precisely enough that a
computer can run it ".

• An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In


addition, all algorithms must satisfy the following criteria:
• Input. Zero or more quantities are externally supplied.
• Output. At least one quantity is produced.
• Definiteness. Each instruction is clear and unambiguous.
• Finiteness. The algorithm terminates after a finite number of steps.
• Effectiveness. Every instruction must be basic enough to be carried out, and must be feasible.
• Algorithms that are definite and effective are also called computational procedures.
• A program is the expression of an algorithm in a programming language
PSEUDOCODE:

• Algorithm can be represented in Text mode and Graphic mode


• Graphical representation is called Flowchart
• Text mode most often represented in close to any High level language
such as C, Pascal Pseudocode
• Pseudocode: High-level description of an algorithm.
 More structured than plain English.
 Less detailed than a program.
 Preferred notation for describing algorithms.
 Hides program design issues.
• Example of Pseudocode:

• To find the max element of an array

Algorithm arrayMax(A, n)
    Input: array A of n integers
    Output: maximum element of A
    currentMax ← A[0]
    for i ← 1 to n − 1 do
        if A[i] > currentMax then
            currentMax ← A[i]
    return currentMax
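A minimal C rendering of the arrayMax pseudocode above may help make the notation concrete (the driver and data here are illustrative, not from the notes):

#include <stdio.h>

/* Returns the maximum element of A[0..n-1], mirroring the pseudocode. */
int arrayMax(int A[], int n)
{
    int currentMax = A[0];
    int i;
    for (i = 1; i <= n - 1; i++)
        if (A[i] > currentMax)
            currentMax = A[i];
    return currentMax;
}

int main()
{
    int a[] = {12, 5, 40, 7};
    printf("max = %d\n", arrayMax(a, 4)); /* prints: max = 40 */
    return 0;
}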



RULES FOR PSEUDOCODE
1. Write only one statement per line. Each statement in your pseudocode should express just one action for the computer. If the task list is properly drawn, then in most cases each task will correspond to one line of pseudocode.
2. Capitalize the initial keyword. In the example above, READ and WRITE are in caps. There are just a few keywords we will use: READ, WRITE, IF, ELSE, ENDIF, WHILE, ENDWHILE, REPEAT, UNTIL.
3. Indent to show hierarchy. We will use a particular indentation pattern in each of the design structures:
SEQUENCE: keep statements that are "stacked" in sequence all starting in the same column.
SELECTION: indent the statements that fall inside the selection structure, but not the keywords that form the selection.
LOOPING: indent the statements that fall inside the loop, but not the keywords that form the loop.
4. Keep statements language independent. Resist the urge to write in whatever language you are most comfortable with; in the long run, you will save time. There may be special features available in the language you plan to eventually write the program in; if you are SURE it will be written in that language, then you can use those features. If not, then avoid using the special features.

PERFORMANCE ANALYSIS:

• What are the Criteria for judging algorithms that have a more direct relationship to performance?
• computing time and storage requirements.

Performance evaluation can be loosely divided into two major phases:


• a priori estimates and
• a posteriori testing.

• The space complexity of an algorithm is the amount of memory it needs to run to completion.
• The time complexity of an algorithm is the amount of computer time it needs to run to
completion.

Space Complexity:

• Space Complexity Example:


Algorithm abc(a,b,c)
{
    return a+b+b*c+(a+b-c)/(a+b)+4.0;
}

The space needed by each of these algorithms is seen to be the sum of the following components:

1. A fixed part that is independent of the characteristics (e.g., number, size) of the inputs and outputs. This part typically includes the instruction space (i.e., space for the code), space for simple variables and fixed-size component variables (also called aggregates), space for constants, and so on.

2. A variable part that consists of the space needed by component variables whose size is dependent on the particular problem instance being solved, the space needed by referenced variables (to the extent that it depends on instance characteristics), and the recursion stack space.

The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics),
where 'c' is a constant.

Example 2:
Algorithm sum(a,n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
 The problem instances for this algorithm are characterized by n, the number of elements to be summed. The space needed by 'n' is one word, since it is of type integer.
 The space needed by 'a' is the space needed by variables of type array of floating point numbers.
 This is at least 'n' words, since 'a' must be large enough to hold the 'n' elements to be summed.
• So, we obtain Ssum(n) >= (n+3) [n for a[], one word each for n, i and s].

Time Complexity:

• The time T(P) taken by a program P is the sum of the compile time and the run time (execution time).

• The compile time does not depend on the instance characteristics. Also, we may assume that a compiled program will be run several times without recompilation. This run time is denoted by tp (instance characteristics).

• The number of steps any program statement is assigned depends on the kind of statement.

• For example, comments count as 0 steps.

An assignment statement which does not involve any calls to other algorithms counts as 1 step.

For iterative statements such as for, while and repeat-until, we count steps only for the control part of the statement.

We introduce a variable, count, into the program, with initial value 0. Statements to increment count by the appropriate amount are introduced into the program.

This is done so that each time a statement in the original program is executed, count is incremented by the step count of that statement.



Algorithm:
Algorithm sum(a,n)
{
    s := 0.0;
    count := count + 1;   // count is global; for the assignment
    for i := 1 to n do
    {
        count := count + 1;   // for the for statement
        s := s + a[i];
        count := count + 1;   // for the assignment
    }
    count := count + 1;   // for the last execution of the for statement
    count := count + 1;   // for the return
    return s;
}

1. If the count is zero to start with, then it will be 2n+3 on termination. So each invocation of sum executes a total of 2n+3 steps.
2. The second method to determine the step count of an algorithm is to build a table in which we list the total number of steps contributed by each statement.

o First determine the number of steps per execution (s/e) of the statement and the total number of times (i.e., frequency) each statement is executed.

o By combining these two quantities, the total contribution of all statements, the step count for the entire algorithm, is obtained.

Statement                       s/e    Frequency    Total
1. Algorithm Sum(a,n)            0         -          0
2. {                             0         -          0
3.    s := 0.0;                  1         1          1
4.    for i := 1 to n do         1        n+1        n+1
5.        s := s + a[i];         1         n          n
6.    return s;                  1         1          1
7. }                             0         -          0
                                          Total      2n+3



We usually consider one algorithm to be more efficient than another if its worst-case running
time has a smaller order of growth.

Complexity of Algorithms

The complexity of an algorithm M is the function f(n) which gives the running time and/or
storage space requirement of the algorithm in terms of the size ‘n’ of the input data. Mostly,
the storage space required by an algorithm is simply a multiple of the data size ‘n’.

Complexity shall refer to the running time of the algorithm.

The function f(n), gives the running time of an algorithm, depends not only on the size ‘n’ of
the input data but also on the particular data. The complexity function f(n) for certain cases
are:

1. Best Case : The minimum possible value of f(n) is called the best case.

2. Average Case : The expected value of f(n).

3. Worst Case : The maximum possible value of f(n) for any input.

ASYMPTOTIC NOTATION

Asymptotic notation is a formal way to speak about functions and classify them.

The following notations are commonly use notations in performance analysis and used to
characterize the complexity of an algorithm:

1. Big–OH (O) ,
2. Big–OMEGA (Ω),
3. Big–THETA (Θ) and
4. Little–OH (o)

Asymptotic Analysis of Algorithms:

Our approach is based on the asymptotic complexity measure. This means that we don’t try to
count the exact number of steps of a program, but how that number grows with the size of the
input to the program. That gives us a measure that will work for different operating systems,
compilers and CPUs. The asymptotic complexity is written using big-O notation.

· It is a way to describe the characteristics of a function in the limit.


· It describes the rate of growth of functions.
· Focus on what’s important by abstracting away low-order terms and constant factors.
· It is a way to compare “sizes” of functions:

O ≈ ≤
Ω ≈ ≥
Θ ≈ =
o ≈ <
ω ≈ >

Time complexity    Name                 Example
O(1)               Constant             Adding an element to the front of a linked list
O(log n)           Logarithmic          Finding an element in a sorted array
O(n)               Linear               Finding an element in an unsorted array
O(n log n)         Linear logarithmic   Sorting n items by divide-and-conquer (Mergesort)
O(n^2)             Quadratic            Shortest path between two nodes in a graph
O(n^3)             Cubic                Matrix multiplication
O(2^n)             Exponential          The Towers of Hanoi problem

Big 'oh': the function f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) <= c*g(n) for all n, n >= n0.
Omega: the function f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that f(n) >= c*g(n) for all n, n >= n0.
Theta: the function f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that c1*g(n) <= f(n) <= c2*g(n) for all n, n >= n0.

Big-O Notation

This notation gives the tight upper bound of the given function. Generally we represent it as f(n) = O(g(n)). That means, at larger values of n, the upper bound of f(n) is g(n). For example, if f(n) = n^4 + 100n^2 + 10n + 50 is the given algorithm, then n^4 is g(n). That means g(n) gives the maximum rate of growth for f(n) at larger values of n.

O-notation is defined as O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0}. g(n) is an asymptotic tight upper bound for f(n). Our objective is to give some rate of growth g(n) which is greater than the given algorithm's rate of growth f(n).

In general, we do not consider lower values of n. That means the rate of growth at lower values of n is not important. In the figure below, n0 is the point from which we consider the rates of growth for a given algorithm; below n0 the rates of growth may be different.

Note: Analyze algorithms at larger values of n only. What this means is, below n0 we do not care about rates of growth.
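As an illustrative worked example (not from the notes): take f(n) = 3n + 2. Since 3n + 2 <= 4n for all n >= 2, choosing c = 4 and n0 = 2 in the definition above shows that 3n + 2 = O(n).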

Omega— Ω notation

Similar to the above discussion, this notation gives the tight lower bound of the given algorithm, and we represent it as f(n) = Ω(g(n)). That means, at larger values of n, the tight lower bound of f(n) is g(n).
For example, if f(n) = 100n^2 + 10n + 50, then g(n) is Ω(n^2).
The Ω notation is defined as Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0}. g(n) is an asymptotic lower bound for f(n); that is, g(n) has smaller or the same order of growth as f(n).

Theta- Θ notation
This notation decides whether the upper and lower bounds of a given function are the same or not. The average running time of an algorithm is always between the lower bound and the upper bound.



If the upper bound (O) and lower bound (Ω) give the same result, then the Θ notation will also have the same rate of growth. As an example, let us assume that f(n) = 10n + n is the expression. Then its tight upper bound g(n) is O(n) and its tight lower bound is Ω(n). In this case, the rates of growth in the best case and worst case are the same. As a result, the average case will also be the same.

Note: For a given function (algorithm), if the rates of growth (bounds) for O and Ω are not the same, then the rate of growth for the Θ case may not be the same either.

Now consider the definition of Θ notation. It is defined as Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}. g(n) is an asymptotic tight bound for f(n). Θ(g(n)) is the set of functions with the same order of growth as g(n).

Important Notes

For analysis (best case, worst case and average case) we try to give the upper bound (O), the lower bound (Ω) and the average running time (Θ). From the above examples, it should also be clear that, for a given function (algorithm), getting the upper bound (O), the lower bound (Ω) and the average running time (Θ) may not always be possible.
For example, if we are discussing the best case of an algorithm, then we try to give the upper bound (O), the lower bound (Ω) and the average running time (Θ) of the best case.
In the remaining chapters we generally concentrate on the upper bound (O), because knowing the lower bound (Ω) of an algorithm is of little practical importance, and we use the Θ notation when the upper bound (O) and lower bound (Ω) are the same.

Little Oh Notation

The little oh is denoted as o. It is defined as follows: let f(n) and g(n) be non-negative functions; then f(n) = o(g(n)) iff for every positive constant c there exists an n0 such that f(n) < c*g(n) for all n >= n0, i.e., the limit of f(n)/g(n) as n → ∞ is 0.


DIVIDE AND CONQUER

General Method

In the divide and conquer method, a given problem is:

i) divided into smaller subproblems,
ii) these subproblems are solved independently,
iii) all the solutions of the subproblems are combined into a solution of the whole.

If the subproblems are still large, then divide and conquer is reapplied. The generated subproblems are usually of the same type as the original problem.

Hence recursive algorithms are used in divide and conquer strategy.



Algorithm DAndC(P)
{
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk;
        apply DAndC to each of these subproblems; // i.e., DAndC(P1), DAndC(P2), ..., DAndC(Pk)
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}
// P → the problem to be solved
// Small(P) → a Boolean-valued function; if it is true, then the function S(P) is invoked

Time complexity of the DAndC algorithm:

T(n) = T(1)              if n = 1
       a*T(n/b) + f(n)   if n > 1

where a and b are constants. This is called the general divide-and-conquer recurrence.

Example for the GENERAL METHOD:

As an example, let us consider the problem of computing the sum of n numbers a0, ..., an-1. If n > 1, we can divide the problem into two instances of the same problem: compute the sum of the first ⌊n/2⌋ numbers, and then compute the sum of the remaining numbers. Combine the answers of the two subproblems:

a0 + ... + an-1 = (a0 + ... + a⌊n/2⌋-1) + (a⌊n/2⌋ + ... + an-1)

Assuming that the size n is a power of b, to simplify our analysis we get the following recurrence for the running time T(n):

T(n) = a*T(n/b) + f(n)

This is called the general divide-and-conquer recurrence. f(n) is a function that accounts for the time spent on dividing the problem into smaller ones and on combining their solutions. (For the summation example, a = b = 2 and f(n) = 1.)

Advantages of DAndC:
The time taken to execute a problem using DAndC is often smaller than with other methods.
This technique is ideally suited for parallel computation.
This approach frequently yields efficient algorithms in computer science.

Master Theorem for Divide and Conquer


In all efficient divide and conquer algorithms we will divide the problem into subproblems,
each of which is some part of the original problem, and then perform some additional work to
compute the final answer. As an example, if we consider merge sort [for details, refer Sorting
chapter], it operates on two problems, each of which is half the size of the original, and then
uses O(n) additional work for merging. This gives the running time equation:



T(n) = 2T(n/2) + O(n)

The following theorem can be used to determine the running time of divide and conquer
algorithms. For a given program or algorithm, first we try to find the recurrence relation for
the problem. If the recurrence is of below form then we directly give the answer without
fully solving it.

If the recurrence is of the form T(n) = a*T(n/b) + Θ(n^k log^p n), where a >= 1, b > 1, k >= 0 and p is a real number, then we can directly give the answer as:

1) If a > b^k, then T(n) = Θ(n^(log_b a)).
2) If a = b^k:
   a) If p > -1, then T(n) = Θ(n^(log_b a) * log^(p+1) n).
   b) If p = -1, then T(n) = Θ(n^(log_b a) * log log n).
   c) If p < -1, then T(n) = Θ(n^(log_b a)).
3) If a < b^k:
   a) If p >= 0, then T(n) = Θ(n^k log^p n).
   b) If p < 0, then T(n) = O(n^k).
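For instance, applying this theorem to the merge sort recurrence T(n) = 2T(n/2) + Θ(n): here a = 2, b = 2, k = 1 and p = 0, so a = b^k and p > -1, giving T(n) = Θ(n^(log_2 2) * log n) = Θ(n log n).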

Applications of Divide and conquer rule or algorithm:


 Binary search,
 Quick sort,
 Merge sort,
 Strassen’s matrix multiplication.

Binary search or Half-interval search algorithm:


1. This algorithm finds the position of a specified input value (the search "key") within
an array sorted by key value.
2. In each step, the algorithm compares the search key value with the key value of the
middle element of the array.
3. If the keys match, then a matching element has been found and its index, or position,
is returned.
4. Otherwise, if the search key is less than the middle element's key, then the algorithm
repeats its action on the sub-array to the left of the middle element or, if the search
key is greater, then the algorithm repeats on sub array to the right of the middle
element.
5. If the search element is less than the minimum position element or greater than the
maximum position element then this algorithm returns not found.



// A recursive binary search function. It returns the
// location of x if x is present in the given array arr[l..r],
// otherwise -1.
int binarySearch(int arr[], int l, int r, int x)
{
if (r >= l) {
int mid = l + (r - l) / 2;

// If the element is present at the middle


// itself
if (arr[mid] == x)
return mid;

// If element is smaller than mid, then


// it can only be present in left subarray
if (arr[mid] > x)
return binarySearch(arr, l, mid - 1, x);

// Else the element can only be present


// in right subarray
return binarySearch(arr, mid + 1, r, x);
}

// We reach here when element is not


// present in array
return -1;
}
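A minimal driver for the function above (illustrative; it assumes binarySearch is defined earlier in the same file):

#include <stdio.h>

int main()
{
    int arr[] = {2, 3, 4, 10, 40};
    int n = sizeof(arr) / sizeof(arr[0]);
    int pos = binarySearch(arr, 0, n - 1, 10);
    if (pos != -1)
        printf("found at index %d\n", pos); /* prints: found at index 3 */
    else
        printf("not found\n");
    return 0;
}

Since each step halves the range still being searched, the running time satisfies T(n) = T(n/2) + Θ(1), which solves to O(log n).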

Merge Sort:
The merge sort splits the list to be sorted into two equal halves, and places them in separate
arrays. This sorting method is an example of the DIVIDE-AND-CONQUER paradigm i.e. it
breaks the data into two halves and then sorts the two half data sets recursively, and finally
merges them to obtain the complete sorted list. The merge sort is a comparison sort and has an
algorithmic complexity of O (n log n). Elementary implementations of the merge sort make use of
two arrays - one for each half of the data set. The following image depicts the complete procedure
of merge sort.



Advantages of Merge Sort:
1. Marginally faster than heap sort for larger sets.
2. Merge sort always does fewer comparisons than quick sort: its worst case does about 39% fewer comparisons than quick sort's average case.
3. Merge sort is often the best choice for sorting a linked list, because the slow random-access performance of a linked list makes some other algorithms (such as quick sort) perform poorly, and others (such as heap sort) completely impractical.

Program for Merge sort:


#include <stdio.h>

int n;

void mergesort(int a[10], int low, int high);
void combine(int a[10], int low, int mid, int high);
void display(int a[10]);

int main()
{
    int i, low, high;
    int a[10];
    printf("\n \t\t mergesort \n");
    printf("\n enter the length of the list:");
    scanf("%d", &n);
    printf("\n enter the list elements:");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    low = 0;
    high = n - 1;
    mergesort(a, low, high);
    display(a);
    return 0;
}

void mergesort(int a[10], int low, int high)
{
    int mid;
    if (low < high)
    {
        mid = (low + high) / 2;      /* split the list */
        mergesort(a, low, mid);      /* sort the left half */
        mergesort(a, mid + 1, high); /* sort the right half */
        combine(a, low, mid, high);  /* merge the two sorted halves */
    }
}

void combine(int a[10], int low, int mid, int high)
{
    int i, j, k;
    int temp[10];
    k = low;
    i = low;
    j = mid + 1;
    while (i <= mid && j <= high)    /* merge while both halves have elements */
    {
        if (a[i] <= a[j])
            temp[k++] = a[i++];
        else
            temp[k++] = a[j++];
    }
    while (i <= mid)                 /* copy any leftovers of the left half */
        temp[k++] = a[i++];
    while (j <= high)                /* copy any leftovers of the right half */
        temp[k++] = a[j++];
    for (k = low; k <= high; k++)    /* copy the merged run back */
        a[k] = temp[k];
}

void display(int a[10])
{
    int i;
    printf("\n \n the sorted array is \n");
    for (i = 0; i < n; i++)
        printf("%d \t", a[i]);
    printf("\n");
}



Algorithm for Merge sort:

Algorithm MergeSort(low, high)
{
    if (low < high) then    // dividing the problem into subproblems
    {
        mid := (low + high)/2;   // "mid" is where the set is split
        MergeSort(low, mid);
        MergeSort(mid+1, high);  // solve the subproblems
        Merge(low, mid, high);   // combine the solutions
    }
}

Algorithm Merge(low, mid, high)
{
    k := low; i := low; j := mid + 1;
    while (i <= mid and j <= high) do
    {
        if (a[i] <= a[j]) then
        {
            temp[k] := a[i]; i := i + 1; k := k + 1;
        }
        else
        {
            temp[k] := a[j]; j := j + 1; k := k + 1;
        }
    }
    while (i <= mid) do
    {
        temp[k] := a[i]; i := i + 1; k := k + 1;
    }
    while (j <= high) do
    {
        temp[k] := a[j]; j := j + 1; k := k + 1;
    }
    for k := low to high do
        a[k] := temp[k];
}



Computing Time for Merge sort:

T(n) = a                 if n = 1
       2T(n/2) + cn      if n > 1

The time for the merging operation is proportional to n; the computing time for merge sort is then described by this recurrence relation. Here c and a are constants.

If n is a power of 2, n = 2^k, we can unfold the recurrence:

T(n) = 2T(n/2) + cn
     = 2[2T(n/4) + cn/2] + cn
     = 2^2 T(n/4) + 2cn
     = 2^3 T(n/8) + 3cn
     = 2^4 T(n/16) + 4cn
     ...
     = 2^k T(1) + kcn
     = an + cn log n

Expressing this in asymptotic notation:

T(n) = O(n log n)

Quick Sort
Quick Sort is an algorithm based on the DIVIDE-AND-CONQUER paradigm that selects a pivot
element and reorders the given list in such a way that all elements smaller to it are on one side
and those bigger than it are on the other. Then the sub lists are recursively sorted until the list gets
completely sorted. The time complexity of this algorithm is O (n log n).

 Auxiliary space used in the average case for implementing recursive function calls is O(log n), and hence it proves to be a bit space-costly, especially when it comes to large data sets.

 Its worst case has a time complexity of O(n^2), which can prove fatal for large data sets when compared with competitive sorting algorithms.



Quick sort program

#include <stdio.h>

int partition(int a[10], int low, int high);
void quick(int a[10], int low, int high);

int main()
{
    int i, low, high;
    int a[10], n;
    printf("\n \t\t quicksort \n");
    printf("\n enter the length of the list:");
    scanf("%d", &n);
    printf("\n enter the list elements:");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    low = 0;
    high = n - 1;
    quick(a, low, high);
    printf("\n sorted array is:");
    for (i = 0; i < n; i++)
        printf(" %d", a[i]);
    printf("\n");
    return 0;
}

/* Hoare-style partition using the middle element's value as the pivot. */
int partition(int a[10], int low, int high)
{
    int pivot = a[(low + high) / 2];
    int i = low - 1, j = high + 1, temp;
    while (1)
    {
        do i++; while (a[i] < pivot);  /* scan right for an element >= pivot */
        do j--; while (a[j] > pivot);  /* scan left for an element <= pivot */
        if (i >= j)
            return j;                  /* a[low..j] <= pivot <= a[j+1..high] */
        temp = a[i];                   /* exchange the out-of-place pair */
        a[i] = a[j];
        a[j] = temp;
    }
}

void quick(int a[10], int low, int high)
{
    if (low < high)
    {
        int m = partition(a, low, high);
        quick(a, low, m);       /* sort the left part */
        quick(a, m + 1, high);  /* sort the right part */
    }
}
Algorithm for Quick sort

Algorithm QuickSort(a, low, high)
{
    if (high > low) then
    {
        m := Partition(a, low, high);
        QuickSort(a, low, m);
        QuickSort(a, m+1, high);
    }
}

Algorithm Partition(a, low, high)
{
    pivot := a[(low + high)/2];
    i := low - 1; j := high + 1;
    while (true) do
    {
        repeat i := i + 1; until (a[i] >= pivot);
        repeat j := j - 1; until (a[j] <= pivot);
        if (i >= j) then return j;
        temp := a[i]; a[i] := a[j]; a[j] := temp;
    }
}

Time Complexity

Name        Best Case     Average Case   Worst Case    Space Complexity
Bubble      O(n)          -              O(n^2)        O(n)
Insertion   O(n)          O(n^2)         O(n^2)        O(n)
Selection   O(n^2)        O(n^2)         O(n^2)        O(n)
Quick       O(n log n)    O(n log n)     O(n^2)        O(n + log n)
Merge       O(n log n)    O(n log n)     O(n log n)    O(2n)
Heap        O(n log n)    O(n log n)     O(n log n)    O(n)

Comparison between Merge and Quick Sort:

 Both follow the divide and conquer rule.
 Statistically, both merge sort and quick sort have the same average case time, i.e., O(n log n).
 Merge sort requires additional memory. The pros of merge sort are: it is a stable sort, and there is no worst case (its average case and worst case time complexity are the same).
 Quick sort is often implemented in place, thus saving performance and memory by not creating extra storage space.
 But in quick sort, the performance falls on already sorted or almost sorted lists if the pivot is not randomized. That is why the worst case time is O(n^2).

Randomized Sorting Algorithm (random quick sort):

 While sorting the subarray a[p:q], instead of picking a fixed element, pick a random element (from among a[p], a[p+1], ..., a[q]) as the partition element.
 The resulting randomized algorithm works on any input and runs in an expected O(n log n) time.

Algorithm for Random Quick sort

Algorithm RQuickSort(a, p, q)
// Sorts a[p..q]; Partition here follows Horowitz-Sahni, using a[p] as the pivot.
{
    if (q > p) then
    {
        if ((q - p) > 5) then
            Interchange(a, Random() mod (q - p + 1) + p, p); // a random element becomes the pivot a[p]
        m := Partition(a, p, q + 1);
        RQuickSort(a, p, m - 1);
        RQuickSort(a, m + 1, q);
    }
}
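A minimal C sketch of the same idea, written against the partition() function from the quick sort program above (the function name randomizedQuick is illustrative): the randomly chosen element is moved to the middle position, which is where that partition() reads its pivot from.

#include <stdlib.h> /* rand(), srand() */
#include <time.h>   /* time(), used to seed the generator */

void randomizedQuick(int a[10], int low, int high)
{
    if (low < high)
    {
        int mid = (low + high) / 2;
        int r = low + rand() % (high - low + 1); /* random index in [low, high] */
        int t = a[mid]; a[mid] = a[r]; a[r] = t; /* the random element becomes the pivot */
        int m = partition(a, low, high);
        randomizedQuick(a, low, m);
        randomizedQuick(a, m + 1, high);
    }
}
/* Call srand(time(NULL)) once in main() before the first use. */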



Strassen's Matrix Multiplication

To multiply two n × n matrices A and B, divide each into four n/2 × n/2 submatrices. The straightforward divide and conquer approach needs 8 submatrix multiplications, giving T(n) = 8T(n/2) + O(n^2) = O(n^3), which is no better than the classical method. Strassen showed that the product can be formed using only 7 submatrix multiplications together with O(n^2) additions and subtractions, so the recurrence becomes T(n) = 7T(n/2) + O(n^2), which solves to T(n) = O(n^(log2 7)) ≈ O(n^2.81).


UNIT- II:

Disjoint set operations, Union and Find algorithms, AND/OR graphs, Connected components, Bi-
connected components. Greedy method: General method, applications- Job sequencing with deadlines,
Knapsack problem, Spanning trees, Minimum cost spanning trees, Single source shortest path problem.

Disjoint Sets: If Si and Sj, i ≠ j, are two sets, then there is no element that is in both Si and Sj.
For example: n=10 elements can be partitioned into three disjoint sets,
S1= {1, 7, 8, 9}
S2= {2, 5, 10}
S3= {3, 4, 6}
Tree representation of sets:

[Figure: one tree per set — 1 is the root of S1 with children 7, 8, 9; 5 is the root of S2 with children 2, 10; 3 is the root of S3 with children 4, 6.]

Disjoint set Operations:


Disjoint set Union
Find(i)

Disjoint set Union: the combination of the elements of two disjoint sets. From the above example, S1 U S2 = {1, 7, 8, 9, 5, 2, 10}.
For the tree representation of S1 U S2, simply make one of the trees a subtree of the other.

[Figure: S1 U S2 drawn two ways — with root 5 of S2 made a child of root 1 (S1 U S2), or with root 1 of S1 made a child of root 5 (S2 U S1).]

Find: Given an element i, find the set containing i.

From the above example:
Find(4) → S3
Find(1) → S1
Find(10) → S2



Data representation of sets:

The sets can be processed easily if, with each set name, we keep a pointer to the root of the tree representing that set.

For presenting the union and find algorithms, we ignore the set names and identify sets just by the roots of the trees representing them.
For example: if we determine that element 'i' is in a tree with root 'j', and 'j' has a pointer to entry 'k' in the set name table, then the set name is just name[k].

For uniting (adding or combining) sets we use the FindPointer function.


Example: if you wish to unite Si and Sj, then we unite the trees with roots FindPointer(Si) and FindPointer(Sj).
FindPointer is a function that takes a set name and determines the root of the tree that represents it.
For determining the operations:
Find(i) → first determine the root of the tree containing i and find its pointer to the entry in the set name table.
Union(i, j) → the union of two trees whose roots are i and j.

If the set contains the numbers 1 through n, we represent the tree nodes as an array P[1:n], where n is the maximum number of elements. Each node is represented in the array:

i:    1    2    3    4    5    6    7    8    9   10
P:   -1    5   -1    3   -1    3    1    1    1    5



find(i): follow the indices starting at i until we reach a node whose parent value is negative; that node is the root.
Example: Find(6) starts at 6 and then moves to 6's parent, 3. Since P[3] is negative, we have reached the root.

Algorithm for Union(i, j):

Algorithm SimpleUnion(i, j)
{
    P[i] := j; // accomplishes the union
}

Algorithm for Find(i):

Algorithm SimpleFind(i)
{
    while (P[i] >= 0) do i := P[i];
    return i;
}

The above algorithms perform poorly when many unions and finds are done. The sequence of n-1 unions Union(1,2), Union(2,3), Union(3,4), ..., Union(n-1,n) produces a degenerate tree, and the finds Find(1), Find(2), ..., Find(n) on it become expensive.

The time taken for a simple union is O(1) (constant), so the n-1 unions take O(n).

The time taken to find an element at level i of a tree is O(i), so the n finds can take O(n^2).

To improve the performance of our union and find algorithms by avoiding the creation of
degenerate trees. For this we use a weighting rule for union(i, j)

Weighting rule for Union(i, j):


If the number of nodes in the tree with root ‘i’ is less than the tree with root ‘j’, then make ‘j’
the parent of ‘i’; otherwise make ‘i’ the parent of ‘j’.



Algorithm for weightedUnion(i, j)
Algorithm WeightedUnion(i,j)
//Union sets with roots i and j, i≠j
// The weighting rule, p[i]= -count[i] and p[j]= -count[j].
{
temp := p[i]+p[j];
if (p[i]>p[j]) then
{ // i has fewer nodes.
P[i]:=j;
P[j]:=temp;
}
else
{ // j has fewer or equal nodes.
P[j] := i;
P[i] := temp;
}
}

For implementing the weighting rule, we need to know how many nodes there are in every tree. For this we maintain a count field in the root of every tree: if 'i' is a root node, count[i] is the number of nodes in that tree.
The time required by the above algorithm is O(1); the time for the remaining operations is bounded using the following lemma.

Lemma: Let T be a tree with m nodes created as a result of a sequence of unions, each performed using WeightedUnion. The height of T is no greater than ⌊log2 m⌋ + 1.



Collapsing rule: If 'j' is a node on the path from 'i' to its root and p[i] ≠ root(i), then set p[j] to root(i).

Algorithm for collapsing find:

Algorithm CollapsingFind(i)
// Find the root of the tree containing element i.
// Use the collapsing rule to collapse all nodes from i to the root.
{
    r := i;
    while (p[r] > 0) do r := p[r]; // find the root
    while (i ≠ r) do // collapse nodes from i to root r
    {
        s := p[i];
        p[i] := r;
        i := s;
    }
    return r;
}

The collapsing find algorithm is used to perform the find operation on the tree created by WeightedUnion.

For example, consider the tree created by WeightedUnion and process the following eight finds: Find(8), Find(8), ..., Find(8).
If SimpleFind is used, each Find(8) requires going up three parent link fields, for a total of 24 moves to process all eight finds. When CollapsingFind is used, the first Find(8) requires going up three links and then resetting two links; in total, 13 moves are required to process all eight finds.
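A compact C sketch of the two operations as described above (the array size MAXN and the driver are illustrative):

#include <stdio.h>

#define MAXN 101

/* p[i] holds the parent of i; a root holds -(number of nodes in its tree). */
int p[MAXN];

void makeSets(int n)
{
    int i;
    for (i = 1; i <= n; i++)
        p[i] = -1; /* each element starts as a one-node tree */
}

int collapsingFind(int i)
{
    int r = i, s;
    while (p[r] > 0) r = p[r]; /* find the root */
    while (i != r)             /* collapse nodes from i up to r */
    {
        s = p[i];
        p[i] = r;
        i = s;
    }
    return r;
}

void weightedUnion(int i, int j) /* i and j must be distinct roots */
{
    int temp = p[i] + p[j];      /* -(total node count) */
    if (p[i] > p[j])             /* i has fewer nodes */
    {
        p[i] = j;
        p[j] = temp;
    }
    else                         /* j has fewer or equal nodes */
    {
        p[j] = i;
        p[i] = temp;
    }
}

int main()
{
    makeSets(8);
    weightedUnion(1, 2);
    weightedUnion(3, 4);
    weightedUnion(collapsingFind(2), collapsingFind(3));
    printf("%d\n", collapsingFind(4) == collapsingFind(1)); /* prints 1 */
    return 0;
}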



AND-OR GRAPHS
The AND-OR graph (or tree) is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved. This decomposition, or reduction, generates arcs that we call AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution. Just as in an OR graph, several arcs may emerge from a single node, indicating a variety of ways in which the original problem might be solved. This is why the structure is called not simply an AND-graph but rather an AND-OR graph (which also happens to be an AND-OR tree).

EXAMPLE FOR AND-OR GRAPH

[Figure: an example AND-OR graph.]

ALGORITHM:
1. Let G be a graph with only the starting node INIT.
2. Repeat the following until INIT is labeled SOLVED or h(INIT) > FUTILITY:
a) Select an unexpanded node from the most promising path from INIT (call it NODE).
b) Generate the successors of NODE. If there are none, set h(NODE) = FUTILITY (i.e.,
NODE is unsolvable); otherwise, for each SUCCESSOR that is not an ancestor of
NODE do the following:
i. Add SUCCESSOR to G.
ii. If SUCCESSOR is a terminal node, label it SOLVED and set h(SUCCESSOR) = 0.
iii. If SUCCESSOR is not a terminal node, compute its h value.
c) Propagate the newly discovered information up the graph by doing the following: let S
be the set of SOLVED nodes or nodes whose h values have been changed and need to have
the values propagated back to their parents. Initialize S to NODE. Until S is empty, repeat the
following:
i. Remove a node from S and call it CURRENT.
ii. Compute the cost of each of the arcs emerging from CURRENT. Assign the
minimum cost over its successors as its h.
iii. Mark the best path out of CURRENT by marking the arc that had the minimum
cost in step ii.
iv. Mark CURRENT as SOLVED if all of the nodes connected to it through the newly
labeled arc have been labeled SOLVED.
v. If CURRENT has been labeled SOLVED or its cost was just changed, propagate
its new cost back up through the graph, so add all of the ancestors of CURRENT to S.



EXAMPLE: 1
STEP 1:
A is the only node, so it is at the end of the current best path. It is expanded, yielding nodes B, C, D. The arc to D is labeled as the most promising one emerging from A, since it costs 6 compared to the AND arc to B and C, which costs 9.

STEP 2:

Node D is chosen for expansion. This process produces one new arc, the AND arc to E and F, with a combined cost estimate of 10, so we update the f' value of D to 10. Going back one more level, we see that this makes the AND arc B-C better than the arc to D, so it is labeled as the current best path.

STEP 3:

We traverse the arc from A and discover the unexpanded nodes B and C. If we are going to find a solution along this path, we will have to expand both B and C eventually, so let us choose to explore B first. This generates two new arcs, the ones to G and to H. Propagating their f' values backward, we update f' of B to 6 (since that is the best we think we can do, which we can achieve by going through G). This requires updating the cost of the AND arc B-C to 12 (6+4+2). After doing that, the arc to D is again the better path from A, so we record that as the current best path, and either node E or node F will be chosen for expansion at step 4.
STEP 4:

Connected Component:
A connected component of a graph can be obtained by using BFST (breadth first search and traversal) or DFST (depth first search and traversal). The resulting tree is also called a spanning tree.

BFST (Breadth first search and traversal):

In BFS we start at a vertex V and mark it as reached (visited).
The vertex V is at this time said to be unexplored (not yet discovered).
A vertex is said to have been explored (discovered) when all vertices adjacent from it have been visited.
All unvisited vertices adjacent from V are visited next. The first vertex on this list is the next to be explored. Exploration continues until no unexplored vertex is left. These operations can be performed by using a queue.

DESIGN AND ANALYSIS OF ALGORITHMS Page 33


Algorithm for BFS to convert an undirected graph G to a connected component or spanning tree:

Algorithm BFS(v)
// A BFS of G begins at vertex v.
// For any node i, visited[i] = 1 if i has already been visited.
// The graph G and the array visited[] are global; q is a queue of unexplored vertices.
{
    u := v;
    visited[v] := 1;
    repeat
    {
        for all vertices w adjacent from u do
        {
            if (visited[w] = 0) then
            {
                Add w to q; // w is unexplored
                visited[w] := 1;
            }
        }
        if q is empty then return; // no unexplored vertex
        Delete u from q; // get the first unexplored vertex
    } until (false);
}

The maximum time and space complexity for G(n, e) with the nodes kept in an adjacency list:
T(n, e) = Θ(n + e)
S(n, e) = Θ(n)

If the nodes are in an adjacency matrix, then
T(n, e) = Θ(n^2)
S(n, e) = Θ(n)
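A small runnable C sketch of BFS over an adjacency matrix, following the algorithm above (the 5-vertex graph and all names are illustrative):

#include <stdio.h>

#define N 5 /* number of vertices (illustrative) */

int g[N][N] = { /* adjacency matrix of an example undirected graph */
    {0,1,1,0,0},
    {1,0,0,1,0},
    {1,0,0,1,0},
    {0,1,1,0,1},
    {0,0,0,1,0}
};
int visited[N];

void bfs(int v)
{
    int q[N], front = 0, rear = 0, u, w;
    visited[v] = 1;
    q[rear++] = v;
    while (front < rear)
    {
        u = q[front++];         /* delete u from q */
        printf("%d ", u);
        for (w = 0; w < N; w++) /* all vertices adjacent from u */
            if (g[u][w] && !visited[w])
            {
                visited[w] = 1; /* w is no longer unexplored */
                q[rear++] = w;  /* add w to q */
            }
    }
}

int main()
{
    bfs(0); /* prints the connected component: 0 1 2 3 4 */
    printf("\n");
    return 0;
}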



DFST (Depth first search and traversal):
DFS differs from BFS:
The exploration of a vertex v is suspended (stopped) as soon as a new vertex w is reached.
At that point the exploration of the new vertex w begins; once this new vertex has been explored, the exploration of v continues.
Note: exploration starts at a new vertex that has not been visited while exploring other vertices, choosing the nearest path for exploring the next or adjacent vertex.



Algorithm for DFS to convert an undirected graph G to a connected component or spanning tree:

Algorithm DFS(v)
// A DFS of G begins at vertex v.
// Initially the array visited[] is set to zero.
// This algorithm visits all vertices reachable from v.
// The graph G and the array visited[] are global.
{
    visited[v] := 1;
    for each vertex w adjacent from v do
    {
        if (visited[w] = 0) then DFS(w);
    }
}

The maximum time and space complexity for G(n, e) with the nodes kept in an adjacency list:
T(n, e) = Θ(n + e)
S(n, e) = Θ(n)

If the nodes are in an adjacency matrix, then
T(n, e) = Θ(n^2)
S(n, e) = Θ(n)

A biconnected component of G is a maximal set of edges such that any two edges in the set lie on a common
simple cycle

Greedy Method:
The greedy method is perhaps the most straightforward design technique; it is used to determine a feasible solution that may or may not be optimal.

Feasible solution:- Most problems have n inputs and its solution contains a subset of inputs
that satisfies a given constraint(condition). Any subset that satisfies the constraint is called
feasible solution.

Optimal solution: To find a feasible solution that either maximizes or minimizes a given
objective function. A feasible solution that does this is called optimal solution.

The greedy method suggests that an algorithm works in stages, considering one input at a
time. At each stage, a decision is made regarding whether a particular input is in an optimal
solution.

Greedy algorithms neither postpone nor revise their decisions (i.e., no backtracking).



Example: Kruskal’s minimal spanning tree. Select an edge from a sorted list, check, decide,
and never visit it again.
Application of Greedy Method:
Job sequencing with deadline
0/1 knapsack problem
Minimum cost spanning trees
Single source shortest path problem.

Algorithm for Greedy method


Algorithm Greedy(a, n)
// a[1:n] contains the n inputs.
{
    solution := 0; // initialize to empty
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}

Select → a function that selects an input from a[] and removes it. The selected input's value is assigned to x.
Feasible → a Boolean-valued function that determines whether x can be included in the solution vector.
Union → a function that combines x with the solution and updates the objective function.

Knapsack problem

The knapsack problem or rucksack (bag) problem is a problem in combinatorial optimization: Given a set of
items, each with a mass and a value, determine the number of each item to include in a collection so that the
total weight is less than or equal to a given limit and the total value is as large as possible

There are two versions of the problems

1. 0/1 knapsack problem


2. Fractional Knapsack problem
a. Bounded Knapsack problem.
b. Unbounded Knapsack problem.



Solutions to knapsack problems

Brute-force approach: solve the problem with a straightforward algorithm that tries all possibilities.

Greedy algorithm: keep taking the most valuable items until the maximum weight is reached, ranking each item by its value density vi = value_i / size_i.

Dynamic programming: solve each subproblem once and store the solutions in an array.

0/1 knapsack problem:

Let there be n items, 1 to n, where item i has a value vi and a weight wi. The maximum weight that we can carry in the bag is W. It is common to assume that all values and weights are nonnegative. To simplify the representation, we also assume that the items are listed in increasing order of weight.

Maximize Σ (i=1 to n) vi*xi subject to Σ (i=1 to n) wi*xi <= W, with xi ∈ {0, 1}.

That is, maximize the sum of the values of the items in the knapsack so that the sum of the weights is less than or equal to the knapsack's capacity.
Greedy algorithm for knapsack

Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] contain the profits and weights respectively of the
// n objects, ordered such that p[i]/w[i] >= p[i+1]/w[i+1].
// m is the size of the knapsack and x[1:n] is the solution vector.
{
    for i := 1 to n do x[i] := 0.0; // initialize x
    U := m;
    for i := 1 to n do
    {
        if (w[i] > U) then break;
        x[i] := 1.0;
        U := U - w[i];
    }
    if (i <= n) then x[i] := U/w[i];
}

Ex: Consider 3 objects whose profits and weights are defined as
(P1, P2, P3) = (25, 24, 15)
(W1, W2, W3) = (18, 15, 10)
n = 3 (number of objects), m = 20 (bag capacity).
Consider a knapsack of capacity 20. Determine the optimum strategy for placing the objects into the knapsack. The problem can be solved by the greedy approach, wherein the inputs are arranged according to a selection process (greedy strategy) and the problem is solved in stages. The various greedy strategies for the problem could be as follows.



(x1, x2, x3)       Σ xi*wi                                   Σ xi*pi
(1, 2/15, 0)       18*1 + 15*(2/15) = 20                     25*1 + 24*(2/15) = 28.2
(0, 2/3, 1)        15*(2/3) + 10*1 = 20                      24*(2/3) + 15*1 = 31
(0, 1, 1/2)        15*1 + 10*(1/2) = 20                      24*1 + 15*(1/2) = 31.5
(1/2, 1/3, 1/4)    18*(1/2) + 15*(1/3) + 10*(1/4) = 16.5     25*(1/2) + 24*(1/3) + 15*(1/4) = 12.5 + 8 + 3.75 = 24.25

Analysis: If we do not consider the time taken to sort the inputs, then all three greedy strategies have complexity O(n).
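A short C sketch of the GreedyKnapsack algorithm above, run on the same 3-object instance reordered by p[i]/w[i] (names and driver are illustrative):

#include <stdio.h>

/* Assumes objects are already ordered so that p[i]/w[i] >= p[i+1]/w[i+1]. */
void greedyKnapsack(float m, int n, float p[], float w[], float x[])
{
    float U = m;
    int i;
    for (i = 0; i < n; i++) x[i] = 0.0f;
    for (i = 0; i < n; i++)
    {
        if (w[i] > U) break;    /* the next object no longer fits whole */
        x[i] = 1.0f;
        U -= w[i];
    }
    if (i < n) x[i] = U / w[i]; /* take a fraction of the next object */
}

int main()
{
    /* objects 2, 3, 1 of the example: p/w = 24/15 > 15/10 > 25/18 */
    float p[] = {24, 15, 25}, w[] = {15, 10, 18}, x[3];
    greedyKnapsack(20, 3, p, w, x);
    /* prints x = (1.00, 0.50, 0.00): profit 24 + 7.5 = 31.5, the optimal row above */
    printf("x = (%.2f, %.2f, %.2f)\n", x[0], x[1], x[2]);
    return 0;
}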

Job Sequence with Deadline:


There is a set of n jobs. For any job i there is an integer deadline di >= 0 and a profit Pi > 0; the profit Pi is earned iff the job is completed by its deadline.

To complete a job, one has to process the job on a machine for one unit of time. Only one machine is available for processing jobs.

A feasible solution for this problem is a subset J of jobs such that each job in this subset can
be completed by its deadline.

The value of a feasible solution J is the sum of the profits of the jobs in J, i.e., Σ (i∈J) Pi.
An optimal solution is a feasible solution with maximum value.

The problem involves identifying a subset of jobs which can be completed by their deadlines. Therefore the problem fits the subset methodology and can be solved by the greedy method.



Ex: Obtain the optimal sequence for the following jobs.

             j1   j2   j3   j4
Profits:    (P1, P2, P3, P4) = (100, 10, 15, 27)
Deadlines:  (d1, d2, d3, d4) = (2, 1, 2, 1)
n = 4

    Feasible     Processing          Value
    solution     sequence
1.  (1, 2)       (2, 1)              100+10 = 110
2.  (1, 3)       (1, 3) or (3, 1)    100+15 = 115
3.  (1, 4)       (4, 1)              100+27 = 127
4.  (2, 3)       (2, 3)              10+15 = 25
5.  (3, 4)       (4, 3)              15+27 = 42
6.  (1)          (1)                 100
7.  (2)          (2)                 10
8.  (3)          (3)                 15
9.  (4)          (4)                 27

In the example, solution 3 is optimal. In this solution only jobs 1 and 4 are processed, and the value is 127. These jobs must be processed in the order j4 followed by j1: the processing of job 4 begins at time 0 and ends at time 1, and the processing of job 1 begins at time 1 and ends at time 2. Therefore both jobs are completed within their deadlines. The optimization measure for determining the next job to be selected into the solution is according to profit: the next job to include is the one that increases Σpi the most, subject to the constraint that the resulting J is a feasible solution. Therefore the greedy strategy is to consider the jobs in decreasing order of profits.
The greedy algorithm is used to obtain an optimal solution.
We must formulate an optimization measure to determine how the next job is chosen.



Algorithm JS(d, j, n)
// d → deadlines, j → subset of jobs, n → total number of jobs
// d[i] >= 1, 1 <= i <= n, are the deadlines.
// The jobs are ordered such that p[1] >= p[2] >= ... >= p[n].
// j[i] is the ith job in the optimal solution, 1 <= i <= k.
{
    d[0] := j[0] := 0;
    j[1] := 1;
    k := 1;
    for i := 2 to n do
    {
        r := k;
        while ((d[j[r]] > d[i]) and (d[j[r]] ≠ r)) do
            r := r - 1;
        if ((d[j[r]] <= d[i]) and (d[i] > r)) then
        {
            for q := k to (r+1) step -1 do j[q+1] := j[q];
            j[r+1] := i;
            k := k + 1;
        }
    }
    return k;
}

Note: The size of the subset j must be less than or equal to the maximum deadline in the given list. A C rendering of this algorithm is sketched below.
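A C rendering of the JS algorithm (1-indexed arrays; the jobs are assumed pre-sorted so that p[1] >= p[2] >= ... >= p[n]; the driver data is the worked example above with its jobs renumbered in profit order):

#include <stdio.h>

int js(int d[], int J[], int n)
{
    int i, k, r, q;
    d[0] = J[0] = 0;
    J[1] = 1; /* the most profitable job is always included */
    k = 1;
    for (i = 2; i <= n; i++)
    {
        r = k; /* find where job i would go, and check feasibility */
        while (d[J[r]] > d[i] && d[J[r]] != r)
            r--;
        if (d[J[r]] <= d[i] && d[i] > r)
        {
            for (q = k; q >= r + 1; q--) J[q + 1] = J[q];
            J[r + 1] = i;
            k++;
        }
    }
    return k; /* number of jobs in the optimal subset J[1..k] */
}

int main()
{
    /* profits (100, 27, 15, 10) with deadlines (2, 1, 2, 1):
       jobs 1 and 2 here are the original jobs 1 and 4. */
    int d[] = {0, 2, 1, 2, 1}, J[6];
    int k = js(d, J, 4), i;
    for (i = 1; i <= k; i++)
        printf("job %d ", J[i]); /* prints: job 2 job 1 (value 127) */
    printf("\n");
    return 0;
}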

Single Source Shortest Paths:

Graphs can be used to represent the highway structure of a state or country with
vertices representing cities and edges representing sections of highway.
The edges have assigned weights which may be either the distance between the 2
cities connected by the edge or the average time to drive along that section of
highway.
For example, if a motorist wishes to drive from city A to B, then we must answer the following questions:
o Is there a path from A to B
o If there is more than one path from A to B which is the shortest path
The length of a path is defined to be the sum of the weights of the edges on that path.

Given a directed graph G(V,E) with weighted edges w(u,v), we have to find a shortest path from the source vertex S ∈ V to every other vertex v ∈ V − {S}.



To find SSSP for directed graphs G(V,E) there are two different algorithms.

 Bellman-Ford Algorithm
 Dijkstra’s algorithm

Bellman-Ford Algorithm: allows negative-weight edges in the input graph. This algorithm either finds a shortest path from the source vertex S ∈ V to every other vertex v ∈ V, or detects a negative-weight cycle in G, in which case there is no solution. If no negative-weight cycle is reachable from the source vertex S, shortest paths exist to every other vertex v ∈ V.
Dijkstra's algorithm: allows only positive-weight edges in the input graph and finds a shortest path from the source vertex S ∈ V to every other vertex v ∈ V.

Consider the above directed graph: if node 1 is the source vertex, then the shortest path from 1 to 2 is 1, 4, 5, 2. Its length is 10+15+20 = 45.

To formulate a greedy based algorithm to generate the shortest paths, we must


conceive of a multistage solution to the problem and also of an optimization measure.

This is possible by building the shortest paths one by one.

As an optimization measure we can use the sum of the lengths of all paths so far
generated.

If we have already constructed ‘i’ shortest paths, then using this optimization measure,
the next path to be constructed should be the next shortest minimum length path.

The greedy way to generate the shortest paths from Vo to the remaining vertices is to
generate these paths in non-decreasing order of path length.

For this, first a shortest path to the nearest vertex is generated; then a shortest path to the 2nd nearest vertex is generated, and so on.



Algorithm for finding the shortest path

Algorithm ShortestPath(v, cost, dist, n)
// dist[j], 1 <= j <= n, is set to the length of the shortest path from vertex v to
// vertex j in a graph G with n vertices; dist[v] is set to zero.
{
    for i := 1 to n do
    {
        s[i] := false;
        dist[i] := cost[v, i];
    }
    s[v] := true;
    dist[v] := 0.0; // put v in s
    for num := 2 to n do
    {
        // determine n-1 paths from v
        choose u from among those vertices not in s such that dist[u] is minimum;
        s[u] := true; // put u in s
        for (each w adjacent to u with s[w] = false) do
            if (dist[w] > (dist[u] + cost[u, w])) then
                dist[w] := dist[u] + cost[u, w];
    }
}
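A runnable C sketch of the algorithm over a cost adjacency matrix (0-indexed; INF stands in for ∞; the example graph is an illustrative reconstruction consistent with the figure referenced above, where the shortest path from vertex 1 to vertex 2 is 1-4-5-2 with length 45):

#include <stdio.h>

#define N 5
#define INF 9999 /* stands in for "no edge" (illustrative) */

void shortestPath(int v, int cost[N][N], int dist[N])
{
    int s[N] = {0}, i, num, u, w;
    for (i = 0; i < N; i++) dist[i] = cost[v][i];
    s[v] = 1;
    dist[v] = 0;
    for (num = 2; num <= N; num++)
    {
        u = -1; /* choose u not in s with minimum dist[u] */
        for (i = 0; i < N; i++)
            if (!s[i] && (u == -1 || dist[i] < dist[u])) u = i;
        s[u] = 1; /* put u in s */
        for (w = 0; w < N; w++) /* relax the edges leaving u */
            if (!s[w] && dist[u] + cost[u][w] < dist[w])
                dist[w] = dist[u] + cost[u][w];
    }
}

int main()
{
    int cost[N][N] = { /* vertices 1..5 of the example, stored as 0..4 */
        {0, 50, 45, 10, INF},
        {INF, 0, 10, 15, INF},
        {INF, INF, 0, INF, 30},
        {20, INF, INF, 0, 15},
        {INF, 20, 35, INF, 0}
    };
    int dist[N], i;
    shortestPath(0, cost, dist);
    for (i = 0; i < N; i++)
        printf("dist[%d] = %d\n", i, dist[i]); /* dist[1] = 45 via 0-3-4-1 */
    return 0;
}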

SPANNING TREE: A subgraph 'T' of a graph 'G' is called a spanning tree if
(i) it includes all the vertices of 'G', and
(ii) it is a tree.



Minimum cost spanning tree: For a given graph 'G' there can be more than one spanning tree. If weights are assigned to the edges of 'G', then the spanning tree with the minimum total cost of edges is called the minimal spanning tree.

The greedy method suggests that a minimum cost spanning tree can be obtained by constructing the tree edge by edge. The next edge to be included in the tree is the edge that results in the minimum increase in the sum of the costs of the edges included so far.

There are two basic algorithms for finding minimum-cost spanning trees, and both are greedy
Algorithms

 Prim’s Algorithm
 Kruskal’s Algorithm

Prim’s Algorithm: Start with any one node in the spanning tree, and repeatedly add the
cheapest edge, and the node it leads to, for which the node is not already in the spanning tree.



PRIM'S ALGORITHM:
i) Select an edge with minimum cost and include it in the spanning tree.
ii) Among all the edges which are adjacent to the selected edges, select the one with minimum cost.
iii) Repeat step 2 until 'n' vertices and (n-1) edges have been included, and the subgraph obtained does not contain any cycles.

Notes: At every stage a decision is made about an edge of minimum cost to be included in the spanning tree, chosen from the edges which are adjacent to the edges already included; i.e., at every stage the subgraph obtained is a tree.

Prim's minimum spanning tree algorithm

Algorithm Prim(E, cost, n, t)
// E is the set of edges in G. cost[1:n, 1:n] is the cost
// adjacency matrix of an n-vertex graph such that cost[i,j] is
// either a positive real number or ∞ if no edge (i,j) exists.
// A minimum spanning tree is computed and stored in the array t[1:n-1, 1:2];
// (t[i,1], t[i,2]) is an edge in the minimum cost spanning tree.
// The final cost is returned.
{
    Let (k, l) be an edge with minimum cost in E;
    mincost := cost[k, l];
    t[1,1] := k; t[1,2] := l;
    for i := 1 to n do // initialize near[]
        if (cost[i, l] < cost[i, k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n-1 do
    { // find n-2 additional edges for t
        Let j be an index such that near[j] ≠ 0 and cost[j, near[j]] is minimum;
        t[i,1] := j; t[i,2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do // update near[]
            if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k, j])) then
                near[k] := j;
    }
    return mincost;
}



The algorithm takes four arguments: E, the set of edges; cost, an n×n adjacency matrix with cost[i,j] a positive number if an edge exists between i and j, and ∞ otherwise; n, the number of vertices; and t, an (n-1)×2 matrix which holds the edges of the spanning tree.
E = { (1,2), (1,6), (2,3), (3,4), (4,5), (4,7), (5,6), (5,7), (2,7) }
n = {1, 2, 3, 4, 5, 6, 7}

i) The algorithm starts with a tree that includes only the minimum cost edge of G. Then edges are added to this tree one by one.
ii) The next edge (i,j) to be added is such that i is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i,j) is minimum among all edges adjacent to 'i'.
iii) With each vertex 'j' not yet included in the tree, we associate a value near[j]; near[j] is a vertex in the tree such that cost[j, near[j]] is minimum among all choices for near[j].
iv) We define near[j] := 0 for all vertices 'j' that are already in the tree.
v) The next edge to include is defined by the vertex 'j' such that near[j] ≠ 0 and cost[j, near[j]] is minimum.

Analysis:
The time required by Prim's algorithm is directly proportional to the number of vertices. If a graph 'G' has 'n' vertices, then the time required by Prim's algorithm is O(n^2).



Kruskal's Algorithm: Start with no nodes or edges in the spanning tree, and repeatedly add the cheapest edge that does not create a cycle.
In Kruskal's algorithm for determining the spanning tree, we arrange the edges in increasing order of cost.
i) All the edges are considered one by one in that order; each edge considered is deleted from the graph and, if acceptable, included in the spanning tree.
ii) At every stage an edge is included; the subgraph at a stage need not be a tree. In fact, it is a forest.
iii) At the end, if we have included 'n' vertices and n-1 edges without forming cycles, then we get a single connected component without any cycles, i.e., a tree with minimum cost.
At every stage, as we include an edge in the spanning tree, we get disconnected trees represented by various sets. While including an edge in the spanning tree we need to check that it does not form a cycle. Inclusion of an edge (i,j) will form a cycle if i and j are both in the same set. Otherwise, the edge can be included in the spanning tree.
Kruskal minimum spanning tree algorithm

Algorithm Kruskal(E, cost, n, t)
// E is the set of edges in G; 'G' has 'n' vertices.
// cost[u,v] is the cost of edge (u,v); t is the set
// of edges in the minimum cost spanning tree.
// The final cost is returned.
{
    Construct a heap out of the edge costs using Heapify;
    for i := 1 to n do parent[i] := -1; // each vertex is in a different set
    i := 0; mincost := 0.0;
    while ((i < n-1) and (heap not empty)) do
    {
        Delete a minimum cost edge (u,v) from the heap and reheapify using Adjust;
        j := Find(u); k := Find(v);
        if (j ≠ k) then
        {
            i := i + 1;
            t[i,1] := u; t[i,2] := v;
            mincost := mincost + cost[u,v];
            Union(j, k);
        }
    }
    if (i ≠ n-1) then write("No spanning tree");
    else return mincost;
}

Consider the above graph. Using Kruskal's method, the edges of this graph are considered for inclusion in the minimum cost spanning tree in the order (1,2), (3,6), (4,6), (2,6), (1,4), (3,5), (2,5), (1,5), (2,3), and (5,6). This corresponds to the cost sequence 10, 15, 20, 25, 30, 35, 40, 45, 50, 55. The first four edges are included in T. The next edge to be considered is (1,4). This edge connects two vertices already connected in T and so it is rejected. Next, the edge (3,5) is selected, and that completes the spanning tree.
