ADA Lec 001-005

The document provides an outline for a lecture series on advanced analysis and design of algorithms. It covers topics such as introduction to algorithms, asymptotic notations, recursion, sorting algorithms, graph algorithms, the greedy approach, dynamic programming, and NP-complete problems. Example pseudocode is also provided to demonstrate algorithm analysis based on time and space complexity. The goal of the lectures is to teach students how to analyze and compare algorithms to determine their efficiency.

Advanced Analysis and Designing of Algorithms

Lecture No.01
Prepared by : Afroj Alam
Course Outline
• Introduction to algorithms and their notation
• Basic algorithms and their analysis
• Asymptotic notations
• Recursion and recurrence relations
• Divide-and-conquer approach
• Sorting; Search trees
• Hashing

Course Outline (Cont !!!)
• Binary Search Tree
• Graph algorithms; Shortest paths; Network flow; Disjoint Sets;
Polynomial and matrix calculations;
• Greedy approach
• Dynamic programming
• String matching algorithms
• NP complete problems

What is an algorithm
• An algorithm is a step-by-step procedure to solve a problem
• Platform independent
• Can be implemented in any procedural language
• Its simplest form (for experts) is pseudocode
• Pseudocode can be written in any language syntax, but in this
course we will use C/C++ syntax
• Two important factors
• Space trade-off (refers to using less memory, i.e. RAM)
• Time trade-off (refers to using less processor time for execution)
Features of algorithm
• Complete
• Finite
• At least one output
• Zero or more inputs
• Correct
• Clear
Example-1.
• Write an algorithm or pseudocode to read two numbers and
display the larger number (a C++ sketch follows).
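A minimal C++ sketch of Example-1, following the C/C++ syntax used in this course (the variable names a and b are illustrative):

#include <iostream>
using namespace std;

int main()
{
    int a, b;
    cin >> a >> b;                     // read the two numbers
    if (a > b)                         // selection: compare the values
        cout << a << " is larger";
    else
        cout << b << " is larger";
    return 0;
}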
Algorithms Notations
• Input Statements
• Read
• Output statement
• Write

• Selection statement
• If – Then – End If
• If – Then – Else – End If

Example:
If (a > b) then
    write (a + "Is Large")
Else
    write (b + "Is Large")
End if
Algorithms Notations (Cont !!!)
• Loops (For, While, Until)
• Example-1
Repeat Step 2 For i = 1, N, 1
• Example-2
Repeat Step 2 While i <= 10
• Example-3
Repeat Step 2 Until i > 10
Finish (Exit)
A sketch of how these notations map onto C++ loops follows.
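As a rough illustration (not part of the original notation), the three loop forms above could be written as C++ loops; the bounds N and 10 are taken from the examples:

#include <iostream>
using namespace std;

int main()
{
    int N = 10;

    // Repeat Step 2 For i = 1, N, 1  ->  counted for loop
    for (int i = 1; i <= N; i++)
        cout << i << " ";
    cout << endl;

    // Repeat Step 2 While i <= 10  ->  while loop (test before the body)
    int i = 1;
    while (i <= 10) {
        cout << i << " ";
        i++;
    }
    cout << endl;

    // Repeat Step 2 Until i > 10  ->  do-while loop (body first, test after)
    i = 1;
    do {
        cout << i << " ";
        i++;
    } while (!(i > 10));
    cout << endl;

    return 0;
}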
Example-2.
• Write an algorithm or pseudocode to read a number and
check whether it is odd or even (a C++ sketch follows).
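One possible C++ sketch for Example-2 (the variable name n is illustrative):

#include <iostream>
using namespace std;

int main()
{
    int n;
    cin >> n;                          // read a number
    if (n % 2 == 0)                    // remainder 0 when divided by 2 means even
        cout << n << " is even";
    else
        cout << n << " is odd";
    return 0;
}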
Example-3.
• Write an algorithm or pseudocode to print all the natural
numbers from 1 to 20.
Algorithm Example-3
Algorithm NumberSeries(n)
{ This algorithm is used to print the numbers from 1 to 20 }
Step 1: Start
Step 2: Initialize variable counter as integer
        counter = 1
Step 3: Repeat Step 4 while counter <= 20
Step 4: a. Print the value of counter
        b. counter = counter + 1
Step 5: Stop
Example-4.
• Write an algorithm or pseudocode to print the series of even
numbers from 1 to 20.
Example-5.
• Write an algorithm or pseudocode to read an array of N
integer values, calculate their sum, and then print the sum (a C++ sketch follows the pseudocode).
Algorithm Example-5.
Algorithm SUM(Ary[ ], i, total, N)
Step-1 Repeat Step 2 for i = 1 to N, 1
Step-2     Read(Ary[i])
Step-3 [Initialize variables i and total]
       a. i = 1
       b. total = 0
Step-4 [Perform a loop to add all the elements of array Ary]
       Repeat Step 5 while i <= N
Step-5 a. total = total + Ary[i]
       b. i = i + 1
Step-6 Write(total)
Step-7 Exit
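A C++ sketch that follows the steps of Algorithm SUM; the array size N = 10 is an assumption made only for this sketch:

#include <iostream>
using namespace std;

int main()
{
    const int N = 10;                  // assumed size for this sketch
    int Ary[N];
    int total = 0;

    for (int i = 0; i < N; i++)        // Steps 1-2: read all N elements
        cin >> Ary[i];

    for (int i = 0; i < N; i++)        // Steps 4-5: add every element to total
        total = total + Ary[i];

    cout << total;                     // Step 6: write the sum
    return 0;
}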

Assignment
• Write an algorithm which reads three numbers and prints the
smallest number. Also write a C language program.
• Write an algorithm which reads an array of 10 integers and
counts the even numbers. Also write a C language program.

Analysis and designing of algorithm

Lecture No.02
Prepared by : Afroj Alam
Criteria for analysis of algorithms
• Why analyze algorithms?
• evaluate algorithm performance
• compare different algorithms
• Analyze what about them?
• running time, memory usage, solution quality
• worst-case and best-case
Complexity analysis
• What is the "best" algorithm?
• You can consider an algorithm to be the best on the basis of the
answers to the following questions:
• How "complicated" is the algorithm?
• Time Complexity
• Space Complexity
Important Point to note
• Programs depend on the operating system, machine,
compiler/interpreter used, etc.
• Analysis of algorithms compares algorithms, not programs.
• The performance of an algorithm should be measured before it is
implemented as a program in any language.
• Algorithms should be checked on different machines.
Example-1.
• Consider the following four pseudocodes (in C)
• The purpose of all four pseudocodes is the same.
• We analyse them on the basis of time and space
complexity.
PseudoCode-1

main()
{
    int a, b, c;
    a = 2;
    b = 3;
    c = a + b;
    cout << c;
}

Facts:
1. Three variables (6 bytes)
2. Three assignments
3. One calculation
4. One output statement
PseudoCode-2

main()
{
    int a, b;
    a = 2;
    b = 3;
    cout << a + b;
}

Facts:
1. Two variables (4 bytes)
2. Two assignments
3. One calculation
4. One output statement
PseudoCode-3

main()
{
    int a, b, c;
    cin >> a >> b;
    c = a + b;
    cout << c;
}

Facts:
1. Three variables (6 bytes)
2. One input statement
3. One assignment
4. One calculation
5. One output statement
PseudoCode-4

main()
{
    int a, b;
    cin >> a >> b;
    cout << a + b;
}

Facts:
1. Two variables (4 bytes)
2. One input statement
3. One calculation
4. One output statement
Comparison of Pseudo codes
Facts                Pseudo 1     Pseudo 2     Pseudo 3     Pseudo 4
Variables            3 (6 bytes)  2 (4 bytes)  3 (6 bytes)  2 (4 bytes)
No. of assignments   3            2            1            0
No. of calculations  1            1            1            1
Input statements     0            0            1            1
Output statements    1            1            1            1
Which Pseudo code is best, and why?

• With respect to the space trade-off, Pseudo code 2 and Pseudo code 4 are candidates for best, because these pseudocodes use only 2 variables.
• But when we focus on the time trade-off/complexity, Pseudo code 2 is best.
• Why is Pseudo code 4 not best, even though it accepts dynamic data (through cin)?
Home Work
• Write at least three pseudocodes to find the largest of three numbers
and analyse these pseudocodes on the basis of space and time
complexity.
Lecture 3.

Order of growth and asymptotic analysis
Asymptotic Analysis
 To compare two algorithms with running
times f(n) and g(n), we need to find how
fast each function grows.
 Hint: use rate of growth
 Compare functions in the limit, that is,
asymptotically!
(i.e., for large values of n)

Functions
 Algorithms can be analyzed mathematically by
functions. Therefore, the actual analysis is to
establish a relative order among functions.
 Given two functions, there are usually points where
one function is smaller than the other function, so it
does not make sense to claim, for instance, f(N)
<g(N).
 Thus, we compare their relative rates of growth.
Graph of linear and binary search

[Figure: growth of f(N) and g(N), comparing linear and binary search]
Functions
 Although 1000 N is larger than N^2 for small values
of N, N^2 grows at a faster rate, and thus N^2 will
eventually be the larger function.
 There may be some points in which f(N) > g(N).
Rate/order of Growth
 Consider the example of buying elephants and
goldfish:
Cost: cost_of_elephants + cost_of_goldfish
Cost ~ cost_of_elephants (approximation)
 The low-order terms in a function are relatively
unimportant for large n
n^4 + 100n^2 + 10n + 50 ~ n^4

i.e., we say that n^4 + 100n^2 + 10n + 50 and n^4 have the
same rate of growth
Rate/order of Growth
 Let f(n) = 3n^2 + 5n + 6
 Important points to note when you are finding
the order of growth:
 Remove all constants of the function f(n) for an
algorithm, such as
 remove 6 from the above f(n)
 Remove low-order terms, such as
 remove or ignore 5n from the above f(n)
 Remove the coefficient of the high-order term, such as
 remove or ignore 3 from the above f(n)
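 Applying these three steps to f(n) = 3n^2 + 5n + 6 leaves n^2, so the order of growth of f(n) is n^2, written O(n^2).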

Rate of Growth ≡ Asymptotic Analysis

 Using rate of growth as a measure to compare
different functions implies comparing them
asymptotically.
 If f(x) is faster growing than g(x), then f(x) always
eventually becomes larger than g(x) in the limit (for
large enough values of x).
Example
 Suppose you are designing a web site to process
user data (e.g., financial records).
 Suppose program A takes fA(n) = 30n + 8
microseconds to process any n records, while
program B takes fB(n) = n^2 + 1 microseconds to
process the n records.
 Which program would you choose, knowing
you'll want to support millions of users? (A sketch comparing the two cost functions follows.)
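As a small illustrative check (the sample values of n are arbitrary), the two cost functions can be tabulated to see where program B overtakes program A:

#include <iostream>
using namespace std;

int main()
{
    long long sizes[] = {10, 100, 1000, 1000000};
    for (long long n : sizes) {
        long long fA = 30 * n + 8;     // program A: 30n + 8 microseconds
        long long fB = n * n + 1;      // program B: n^2 + 1 microseconds
        cout << "n = " << n << "  fA = " << fA << "  fB = " << fB << endl;
    }
    return 0;
}

For n = 1,000,000 this gives fA = 30,000,008 microseconds but fB = 1,000,000,000,001 microseconds, so program A is the better choice for large inputs.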

Visualizing Orders of Growth
 On a graph, as you go to the right, a faster
growing function eventually becomes larger...

[Figure: value of function versus increasing n, showing fB(n) = n^2 + 1 eventually exceeding fA(n) = 30n + 8]
Common asymptotic notation or
Big-O Notation
 We say fA(n) = 30n + 8 is order n, or O(n).
It is, at most, roughly proportional to n.
 fB(n) = n^2 + 1 is order n^2, or O(n^2). It is, at most,
roughly proportional to n^2.
 In general, an O(n^2) algorithm will be slower than
an O(n) algorithm.
 Warning: an O(n^2) function will grow faster than
an O(n) function.
Examples (Order of Growth)

 We say that n4 + 100n2 + 10n + 50 is of the


order of n4 or O(n4)
 We say that 10n3 + 2n2 is O(n3)
 We say that n3 - n2 is O(n3)
 We say that 10 is O(1),
 We say that 1273 is O(1)

12
Common orders of magnitude

[Table/figure of common orders of magnitude]
Constant time algorithm
 Constant Time
 If the time taken by the algorithm is O(C), then
the algorithm will run in a constant time, C,
regardless of the size of its input.
 An example of such an algorithm is one which
would accept an input and then halt without
doing anything else.
 Array: accessing any element
 Fixed-size stack: push and pop methods
 Fixed-size queue: enqueue and dequeue
methods
Linear Time Algorithm

 Linear Time
 The running time of the algorithm grows linearly
in proportion to the size of the input.
 Such an algorithm is said to be “big-O of n”,
written O(n), and the algorithm is called linear.
 In a linear algorithm, doubling the input size
makes the time increase approximately
twofold (or less).
 Examples of such algorithms are linear search,
traversal, and finding the minimum (see the sketch below).
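For instance, a find-minimum routine visits each element exactly once, so its work grows linearly with n (the sample data below is illustrative):

#include <iostream>
using namespace std;

// One pass over n elements: O(n) running time.
int findMin(const int a[], int n)
{
    int min = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] < min)
            min = a[i];
    return min;
}

int main()
{
    int data[] = {5, 7, 2, 6, 9, 12, 1};
    cout << findMin(data, 7);          // prints 1
    return 0;
}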
Quadratic Time Algorithm

 Quadratic Time
 If the largest term in a formula is no more than
a constant times n^2, then the algorithm is
said to be “big-O of n^2”, written O(n^2), and
the algorithm is called quadratic.
 In a quadratic algorithm, doubling the input
size makes the number of operations increase
approximately fourfold (or less); a nested-loop sketch follows.
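A minimal sketch of a quadratic algorithm: printing every ordered pair of indices needs n * n steps, so doubling n roughly quadruples the work (n = 4 is an arbitrary choice here):

#include <iostream>
using namespace std;

int main()
{
    int n = 4;                         // illustrative input size
    for (int i = 0; i < n; i++)        // outer loop: n iterations
        for (int j = 0; j < n; j++)    // inner loop: n iterations each time
            cout << "(" << i << "," << j << ") ";
    // Total pairs printed: n * n, so the running time is O(n^2).
    return 0;
}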
Logarithmic Time algorithm

 Logarithmic Time
 An algorithm whose running time grows
logarithmically with the size of the input. An
example of an algorithm that runs in logarithmic time is the
binary search algorithm.
 If the largest term in the formula is a constant
times a logarithm of n, then the algorithm is
“big-O of the logarithm of n,” written O(log n),
and the algorithm is called logarithmic.
Exponential Time algorithm

 Exponential Time
 Exponential functions grow very quickly, so
exponential algorithms are only useful for
small problems.
 All exponential functions belong to the
same order of growth regardless of the base
of the exponent.
Why is exponential time bad?

Time (sec)   Interpretation
2^10         17 minutes
2^20         12 days
2^30         32 years
2^51         Existence of dinosaurs
2^57         Creation of Earth
2^60         Origin of universe
 Examples of exponential algorithm
 Tower of Hanoi (recursive solution)
 Fibonacci sequence (recursive solution)
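As a sketch of the recursive Fibonacci example listed above, each call below spawns two further calls, so the number of calls grows roughly like 2^n:

#include <iostream>
using namespace std;

// Naive recursive Fibonacci: exponential time, because every call
// above the base cases makes two more recursive calls.
long long fib(int n)
{
    if (n <= 1)
        return n;                      // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);
}

int main()
{
    cout << fib(10);                   // prints 55; already very slow near n = 45-50
    return 0;
}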
Analysis and designing of algorithm

Lecture No.04
Prepared by : Afroj Alam
Running Time of Algorithms
• Running time
• depends on input size n
• size of an array

• # of elements in a matrix

• vertices and edges in a graph

Analysis of non recursive algorithm
• for Loops
• The running time of a for loop is at most the running time of the
statements inside the for loop (including tests) times the number of
iterations.
• Example:
for (i = 0; i < n; i++)
    a[i] = 0;

Let:
1. the running time of the basic operations be a constant c, and
2. the loop be iterated n times;
then the total running time will be n * c.

Note: total time = (number of basic steps in the loop body) * (number of iterations)
Nested for Loops
• The total running time of a statement inside a nested for loops is the
running time of the statement multiplied by the product of the sizes of
all the for loops.
• Example:
for (i = 0; i < n; i++)
    for (j = 0; j <= n; j++)
        k++;

Let:
1. the running time of the basic operations be a constant c,
2. the running time of the outer loop (i.e. i) be n, and
3. the running time of the inner loop (i.e. j) be n;
4. then the total running time will be n * n * c = n^2 * c.
Consecutives for Loops
• The total running time of consecutive loops is the sum of
running of all consecutives loops
• Example:
for (i = 0; i < n; i++)
    k++;
for (j = 0; j < m; j++)
    k++;

Let:
1. the running time of the basic operations be a constant c,
2. the running time of the first loop (i.e. i) be n, and
3. the running time of the second loop (i.e. j) be m;
4. then the total running time will be n * c + m * c.
Recursive Analysis

Recursive Function
• The recursive function is
• a kind of function that calls itself, or
• a function that is part of a cycle in the sequence of function calls.

f1 f1 f2 … fn
Problems Suitable for Recursive Functions
• Some problem have a straightforward solution.
• Other problems can be redefined in terms of problems
that are closer to the simple cases.
• The problem can be reduced entirely to simple cases by calling
the recursive function.
• If this is a simple case
solve it
else
redefine the problem using recursion
Splitting a Problem into Smaller Problems

• Assume that the problem of size 1 can be solved


easily (i.e., the simple case).
• We can recursively split the problem into a
problem of size 1 and another problem of size n-
1.
Example 1: Recursive Factorial
• The following shows the recursive and iterative
versions of the factorial function:
Recursive version:

int factorial(int n)
{
    if (n == 0)
        return 1;
    else
        return n * factorial(n - 1);   // recursive call
}

Iterative version:

int factorial(int n)
{
    int i, product = 1;
    for (i = n; i > 1; --i)
        product = product * i;
    return product;
}
Tracing Recursive Functions

• Executing recursive algorithms goes through two phases:


• Expansion in which the recursive step is applied until hitting the base step
• “Substitution” in which the solution is constructed backwards starting with the base step

Expansion phase:
factorial(4) = 4 * factorial(3)
             = 4 * (3 * factorial(2))
             = 4 * (3 * (2 * factorial(1)))
             = 4 * (3 * (2 * (1 * factorial(0))))

Substitution phase:
             = 4 * (3 * (2 * (1 * 1)))
             = 4 * (3 * (2 * 1))
             = 4 * (3 * 2)
             = 4 * 6
             = 24
The complete recursive factorial example

#include <stdio.h>

int factorial(int n);

int main(void)
{
    int num, fact;
    printf("Enter an integer between 0 and 7> ");
    scanf("%d", &num);
    fact = factorial(num);
    printf("The factorial of %d is %d\n", num, fact);
    return 0;
}

int factorial(int n)
{
    if (n == 0)  // base case
        return 1;
    else
        return n * factorial(n - 1);  // recursive case
}
Example 2: Multiplication

• Suppose we wish to write a recursive function to multiply an integer m by


another integer n using addition. [We can add, but we only know how to
multiply by 1].
• The best way to go about this is to formulate the solution by identifying the
base case and the recursive case.
• The base case is: if n is 1, the answer is m.
• The recursive case is: m * n = m + m * (n - 1).

m * n = m,                if n = 1
m * n = m + m * (n - 1),  if n > 1
Example 2: Multiplication …
#include <stdio.h>
#include <stdlib.h>

int multiply(int m, int n);

int main(void) {
    int num1, num2;

    printf("Enter two integer numbers to multiply: ");
    scanf("%d%d", &num1, &num2);

    printf("%d x %d = %d\n", num1, num2, multiply(num1, num2));

    system("pause");
    return 0;
}

int multiply(int m, int n) {
    if (n == 1)
        return m;                      /* simple case */
    else
        return m + multiply(m, n - 1); /* recursive step */
}
Example 2: Multiplication …

Expansion phase:
multiply(5,4) = 5 + multiply(5, 3)
              = 5 + (5 + multiply(5, 2))
              = 5 + (5 + (5 + multiply(5, 1)))

Substitution phase:
              = 5 + (5 + (5 + 5))
              = 5 + (5 + 10)
              = 5 + 15
              = 20
Analysis of Running time of algorithms
(Best, Average and Worst cases)

Types of Algorithm Complexity
• Worst Case Complexity:
• the function defined by the maximum number of steps taken on any
instance(array) of size n
• Best Case Complexity:
• the function defined by the minimum number of steps taken on any
instance(Array) of size n
• Average Case Complexity:
• the function defined by the average number of steps taken on any
instance(Array) of size n

Types of Algorithm Complexity
(Example: Linear Search)
5 7 2 6 9 12 1
• Worst Case Complexity:
• You want to search 1 in the above array, which is at location N
• You need N steps
• Best Case Complexity:
• You want to search 5 in the above array, which is at location 1
• You need 1 step
• Average Case Complexity:
• You want to search 2 or 9, etc., in the above array
• You need 3 steps for 2 and 5 steps for 9

Best, Worst, and Average Case Complexity
[Figure: number of steps versus input size N, with curves for worst-case, average-case, and best-case complexity (worst highest, best lowest)]
Assignment

• Exercise-1.
• Write a pseudocode which finds the sum of two 3*3 matrices and then calculate its
running time.
• Exercise-2.
• Write a pseudocode which reads a number N and prints whether it is prime or not.
After that, calculate its run-time complexity.
Searching Algorithms
Lecture Objectives
 Learn how to implement the sequential search algorithm

 Learn how to implement the binary search algorithm

 Learn how to estimate and compare the performance of


algorithms

 Learn how to measure the running time of a program


Searching Algorithms
 Necessary components to search a list of data
 Array containing the list
 Length of the list
 Item for which you are searching

 After search completed


 If item found, report “success,” return location in
array
 If item not found, report “not found” or “failure”
Linear Search Algorithm
Linear search code:

int linSearch(int[] list, int Length, int key) {
    int loc;
    boolean found = false;

    for (loc = 0; loc < Length; loc++) {  // loc is declared once, so it is still valid after the loop
        if (list[loc] == key) {
            found = true;
            break;
        }
    }
    if (found)
        return loc;
    else
        return -1;
}
Linear search code (C++ version):

void search(int a[], int len, int item)
{
    int search = 0;
    int i, l;
    for (i = 0; i < len; i++) {
        if (a[i] == item) {
            search = 1;
            l = i;
            break;
        }
    }
    if (search)
        cout << "Search is a success and its location is " << l;
    else
        cout << "Searched item is not available in the array.";
}
Performance of the Linear Search

• The best case is: 1 comparison
• The worst case is: N comparisons (the key is found at the end of the array list)
• The average case:

    Average number of comparisons = (1 + N) / 2
Binary Search Algorithm
 Can only be performed on a sorted list !!!

 Uses divide and conquer technique to search list


Binary Search Algorithm (Cont’d)
 Search item is compared with middle element of list
 If search item < middle element of list, search is restricted
to first half of the list
 If search item > middle element of list, search second half
of the list
 If search item = middle element, search is complete
Binary Search Algorithm (Cont’d)
• Determine whether 75 is in the list

Figure 2: Array list with twelve (12) elements

Figure 3: Search list, list[0] … list[11]


Binary Search Algorithm (Cont’d)

Figure 4: Search list, list[6] … list[11]


Binary Search Algorithm (Cont’d)
int binarySearch(int[] list, int Length, int key) {
    int first = 0, last = Length - 1;
    int mid = 0;
    boolean found = false;
    while (first <= last && !found)
    {
        mid = (first + last) / 2;
        if (list[mid] == key)
        {   found = true;   }
        else
        {   if (list[mid] > key)
                last = mid - 1;
            else
                first = mid + 1;
        }
    }
    if (found)
        return mid;
    else
        return -1;
} //end binarySearch
Binary Search Algorithm (Cont’d)

Figure 5: Sorted list for binary search

key = 89

key = 34
Binary Search Algorithm (Cont’d)

Figure 6: Sorted list for binary search

key = 22
Performance of Binary Search Algorithm

Figure 7: A Sorted list for binary search

key ≠ List[499] key < List[499]

Figure 8: Search list after first iteration


Performance of Binary Search Algorithm (Cont’d)

key > List[249]

Figure 9: Search list after second iteration


Performance of Binary Search Algorithm (Cont’d)

• Suppose that L is a list of size 1000000

• Since 1000000 < 1048576 = 2^20, it follows that the while loop in
binary search will have at most 21 iterations to determine
whether an element is in L

• Every iteration of the while loop makes two key (that is,
item) comparisons
Performance of Binary Search Algorithm (Cont’d)

 To determine whether an element is in L, binary search


makes at most 42 item comparisons
 On the other hand, on average, a linear search will
make 500,000 key (item) comparisons to determine
whether an element is in L
 In general, if L is a sorted list of size N, to determine
whether an element is in L, the binary search makes at
most log2N key (item) comparisons
Run time complexity
 After the first iteration, N/2 items remain
 After the second iteration, N/4 = N/2^2 items remain
 After the k-th iteration, N/2^k items remain

 The search continues while N/2^k >= 1, i.e. while at least one element remains

 Taking log (base 2) on both sides gives
 k = log2(N)
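A small check of this bound (the value N = 1,000,000 matches the earlier example): repeatedly halving the remaining range until one element is left takes about log2(N) steps.

#include <iostream>
using namespace std;

int main()
{
    long long n = 1000000;             // list size from the earlier example
    int k = 0;
    while (n > 1) {                    // each binary-search iteration halves the range
        n = n / 2;
        k++;
    }
    cout << "Halvings needed: " << k;  // prints 19; log2(1000000) is about 19.9
    return 0;
}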
