DAA - Basics


Design and Analysis of Algorithms

Algorithm
● An algorithm is a process or a set of rules. It contains a finite set of
instructions that are carried out in a specific order to perform a specific
task.
● An algorithm is composed of a finite set of steps, each of which may require
one or more operations.
● The word algorithm comes from the name of a Persian author, Abu Ja'far
Mohammed ibn Musa al-Khowarizmi (c. 825 A.D.), who wrote a textbook on
mathematics.
● It is not the complete program or code; it is just the solution (logic) of a
problem, which can be represented either as an informal description, as a
flowchart, or as pseudocode.
Algorithm vs Pseudocode vs Program:
● An algorithm is defined as a well-defined sequence of steps that provides
a solution for a given problem, whereas a pseudocode is one of the
methods that can be used to represent an algorithm. Pseudocode does
not consist of any specific syntax like a programming language.
● While algorithms are generally written in a natural language or plain
English language, pseudocode is written in a format that is similar to the
structure of a high-level programming language. Program on the other
hand allows us to write a code in a particular programming language.
Algorithm vs Pseudocode vs Program:

Meaning and Definition
● Algorithm: A well-defined, systematic, logical approach that comes with a
step-by-step procedure for computers to solve any given problem.
● Pseudocode: Basically a simplified version of programming code. It is
written in plain English and uses various short phrases to sketch the
program code before implementing it in any programming language.
● Program: The code (written by programmers) for any program that follows
the basic rules of the concerned programming language.

Expression and Use
● Algorithm: Can be expressed using flowcharts, natural language, and many
more.
● Pseudocode: Lets you include various control structures, such as
repeat-until, if-then-else, while, for and case.
● Program: No device can directly read a program's text. Instead, you write
it in a computer language and then use a compiler or interpreter so that it
becomes understandable to the computer system.
Characteristics of an Algorithm
● There are five different characteristics that deal with various aspects of
the algorithm.

1. Input specified
2. Output specified
3. Definiteness
4. Finiteness
5. Effectiveness
Contd…
● An algorithm should have 0 or more well-defined inputs.
● An algorithm should have 1 or more well-defined outputs.
● Definiteness means specifying the sequence of operations for turning
input into output. Each step should be clear and unambiguous.
● Finiteness means the algorithm must terminate after a finite number of steps.
● Effectiveness means all the steps required to get to the output must be
feasible with the available resources.
Advantages of an Algorithm
● Effective Communication: Since it is written in a natural language like
English, it becomes easy to understand the step-by-step delineation of a
solution to any particular problem.
● Easy Debugging: A well-designed algorithm facilitates easy debugging to
detect the logical errors that occurred inside the program.
● Easy and Efficient Coding: An algorithm is nothing but a blueprint of a
program that helps develop a program.
● Independent of Programming Language: Since an algorithm is language-
independent, it can easily be implemented in any high-level
language.
Example
● Algorithms for making tea/coffee

● Step 1: Start
● Step 2: Boil water in a saucepan
● Step 3: Add tea to boiling water
● Step 4: Add sugar to boiling water
● Step 5: Add milk to boiling water
● Step 6: Boil this water with all the ingredients for 2 mins
● Step 7: Sieve the tea in a cup
● Step 8: Stop
Example
● Write algorithms to go for a class picnic

● Step 1: Start
● Step 2: Decide the picnic venue, date and time
● Step 3: Decide the picnic activities
● Step 4: Hire a vehicle to reach the venue and come back
● Step 5: Go to the picnic venue on the decided date
● Step 6: Do the activities planned for the picnic
● Step 7: Come back to school in the hired vehicle
● Step 8: Stop
Example
● Write an algorithm to add two numbers entered by the user.

● Step 1: Start
● Step 2: Enter the values of a and b.
● Step 3: Add the values of a and b and store the result in the sum variable,
i.e., sum=a+b.
● Step 4: Print the sum
● Step 5: Stop
Example
● Determining the Largest Number Among 2 Integers

● Step 1: Read the Integer A.


● Step 2: Read Integer B.
● Step 3: If B is greater than A, then print B, else A.
Example
● Algorithm to print the numbers between 1 to 10

● Step 1: Start
● Step 2: Set i = 1
● Step 3: if i <= 10 then go to step 4 otherwise go to step 6
● Step 4: Print i
● Step 5: Set i ← i + 1 go to step 3
● Step 6: Stop
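The numbered steps above map one-to-one onto a counted loop. A minimal C sketch (the function name `print_one_to_ten` and the returned count are mine, for illustration):

```c
#include <stdio.h>

/* Prints the numbers 1 through 10, mirroring steps 2-6 of the algorithm;
   returns how many numbers were printed. */
int print_one_to_ten(void)
{
    int count = 0;
    for (int i = 1; i <= 10; i++) { /* step 2: i = 1; step 3: test; step 5: i = i + 1 */
        printf("%d\n", i);          /* step 4: print i */
        count++;
    }
    return count;
}
```

Steps 3 and 5 of the algorithm become the loop's condition and increment; step 6 (Stop) is simply falling out of the loop.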
Example
● Algorithm for factorial of a number

● Step 1: Start
● Step 2: Read a number n
● Step 3: Initialize variables: i = 1, fact = 1
● Step 4: if i <= n go to step 5 otherwise go to step 7
● Step 5: Calculate fact = fact * i
● Step 6: Increment the i by 1 (i=i+1) and go to step 4
● Step 7: Print fact
● Step 8: Stop
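The same steps can be written as a loop in C. This is a sketch under my own naming (`factorial`); `unsigned long long` is used because it can hold n! up to n = 20:

```c
/* Iterative factorial following steps 3-7 of the algorithm. */
unsigned long long factorial(int n)
{
    unsigned long long fact = 1;    /* step 3: i = 1, fact = 1 */
    for (int i = 1; i <= n; i++)    /* steps 4 and 6: loop while i <= n */
        fact *= i;                  /* step 5: fact = fact * i */
    return fact;                    /* step 7: the value to print */
}
```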
Example
● Algorithm to find and returns the maximum of n given numbers:

1. Algorithm Max(A, n)
2. // A is an array of size n.
3. {
4. Result:=A[1];
5. for i :=2 to n do
6. if A[i] >Result then Result:=A[i];
7. return Result;
8. }
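The pseudocode above uses 1-based array indexing. A C translation (0-based, assuming n >= 1; the name `max_of` is mine) might look like:

```c
/* C version of Algorithm Max(A, n); assumes n >= 1, 0-based indexing. */
int max_of(const int A[], int n)
{
    int result = A[0];              /* Result := A[1] in the 1-based pseudocode */
    for (int i = 1; i < n; i++)     /* for i := 2 to n */
        if (A[i] > result)
            result = A[i];          /* keep the larger value seen so far */
    return result;
}
```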
Write an Algorithm

● Algorithm to make a lemon juice


● Write algorithms to celebrate New Year
● Calculate the Interest of a Bank Deposit
● Determine and Output Whether Number N is Even or Odd
● Algorithm to find a Maximum of 3 numbers
● Algorithm to calculate the area of a rectangle or circle or triangle by taking
the user’s choice.
● Algorithm to find all numbers between 1 to 200 divisible by 5
● Algorithm to check prime number
● Algorithm of Fibonacci series
● Algorithm to find the sum of first n numbers
● Algorithm to find a max element in an array
Need of Algorithm

● To understand the basic idea of the problem.
● To find an approach to solve the problem.
● To improve the efficiency of existing techniques.
● To understand the basic principles of designing algorithms.
● To compare the performance of the algorithm with respect to other techniques.
● It is the best method of description without describing the implementation detail.
● The algorithm gives a clear description of the requirements and goal of the
problem to the designer.
● A good design can produce a good solution.
● To understand the flow of the problem.
● To measure the behavior (or performance) of the methods in all cases
(best case, worst case, average case).
● With the help of an algorithm, we can also identify the resources
(memory, input-output cycles) required by the algorithm.
● With the help of an algorithm, we convert art into science.
● To understand the principle of designing.
● We can measure and analyze the complexity (time and space) of a problem with
respect to input size without implementing and running it; this reduces the
cost of design.
Analysis of algorithm

● Analysis is the process of estimating the efficiency of an algorithm.
● It is the process of finding the computational complexity of algorithms: the
amount of time, storage, or other resources needed to execute them.
● The term "analysis of algorithms" was coined by Donald Knuth.
● In the theoretical analysis of algorithms it is common to estimate their
complexity in the asymptotic sense, i.e., to estimate the complexity
function for arbitrarily large input. Big-O notation, Big-Omega notation
and Big-Theta notation are used to this end.
Criteria by which we can judge an algorithm

● Does it do what we want it to do?


● Does it work correctly according to the original specifications of the task?
● Is there documentation that describes how to use it and how it works?
● Are procedures created in such a way that they perform logical sub-
functions?
● Is the code readable?
Contd…

● There are two fundamental parameters based on which we can analyze


the algorithm:

● Space Complexity:
● Space complexity can be understood as the amount of space required by
an algorithm to run to completion.

● Time Complexity:
● Time complexity is a function of input size n that refers to the amount of
time needed by an algorithm to run to completion.
Algorithm Analysis

● Efficiency of an algorithm can be analyzed at two different stages, before


implementation and after implementation. They are the following −

● A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency


of an algorithm is measured by assuming that all other factors, for
example, processor speed, are constant and have no effect on the
implementation.
● A Posteriori Analysis − This is an empirical analysis of an algorithm. The
selected algorithm is implemented in a programming language and
executed on a target computer. In this analysis, actual
statistics like running time and space required are collected.
Types of analysis

● Worst-case time complexity: For 'n' input size, the worst-case time
complexity can be defined as the maximum amount of time needed by
an algorithm to complete its execution.

● Average case time complexity: For 'n' input size, the average-case time
complexity can be defined as the average amount of time needed by an
algorithm to complete its execution.

● Best case time complexity: For 'n' input size, the best-case time
complexity can be defined as the minimum amount of time needed by
an algorithm to complete its execution.
Contd…

● For example: In bubble sort, when the input array is already sorted, the
time taken by the algorithm is linear i.e. the best case.

● But, when the input array is in reverse condition, the algorithm takes the
maximum time (quadratic) to sort the elements i.e. the worst case.

● When the input array is neither sorted nor in reverse order, then it takes
average time.
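The three cases can be observed directly if the sort stops early when a pass makes no swap. A hedged C sketch (the function name `bubble_sort` and the returned pass count are mine): on an already-sorted array it makes one pass (linear best case); on a reversed array it makes n-1 passes (quadratic worst case).

```c
#include <stdbool.h>

/* Bubble sort with an early-exit flag; returns the number of passes made. */
int bubble_sort(int a[], int n)
{
    int passes = 0;
    bool swapped = true;
    for (int i = 0; i < n - 1 && swapped; i++) {
        swapped = false;            /* if a pass swaps nothing, the array is sorted */
        passes++;
        for (int j = 0; j < n - 1 - i; j++) {
            if (a[j] > a[j + 1]) {
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                swapped = true;
            }
        }
    }
    return passes;
}
```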
How to approximate the time taken by the
Algorithm?
● There are two types of algorithms:

● Iterative Algorithm: In the iterative approach, the function repeatedly


runs until the condition is met or it fails. It involves the looping construct.
● Recursive Algorithm: In the recursive approach, the function calls itself
until the condition is met. It integrates the branching structure.
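The two approaches can be contrasted on the same task; here, the sum 1+2+…+n, with both function names my own:

```c
/* Iterative: a loop runs until its condition fails. */
int sum_iterative(int n)
{
    int s = 0;
    for (int i = 1; i <= n; i++)
        s += i;
    return s;
}

/* Recursive: the function calls itself until the base case n == 0 is met. */
int sum_recursive(int n)
{
    if (n == 0)
        return 0;                   /* base case stops the recursion */
    return n + sum_recursive(n - 1);
}
```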
Complexity

for(i=1;i<=n;i++)
{
    stmt;        // executes n times
}
O(n)

for(i=1;i<=n;i=i+2)
{
    stmt;        // executes n/2 times
}
O(n)             // constant factors are dropped

for(i=1;i<=n;i++)
{
    for(j=1;j<=n;j++)
    {
        stmt;    // executes n*n times
    }
}
O(n2)            // degree of the polynomial
Contd…

for(i=0;i<n;i++)
{
    for(j=0;j<i;j++)
    {
        stmt;    // executes 0+1+2+...+(n-1) times
    }
}
1+2+3+…+n = n(n+1)/2 = O(n2)

for(i=1;i<n;i=i*2)
{
    stmt;
}
// i takes the values 1, 2, 22, 23, ..., 2k
// the loop stops when i >= n: 2k = n, so k = log2n
O(log2n)

for(i=n;i>=1;i=i/2)
{
    stmt;
}
// i takes the values n, n/2, n/22, ..., n/2k
// the loop stops when i < 1: n/2k = 1, so n = 2k, k = log2n
O(log2n)
Complexity

for(i=1;i<=n;i++)
{
    stmt;        // n times
}
for(j=1;j<=n;j++)
{
    stmt;        // n times
}
// n + n = 2n times
O(n)

for(i=0;i<n;i++)
{
    for(j=1;j<n;j=j*2)
    {
        stmt;    // n * log2n times
    }
}
O(nlogn)
Contd…

● for(i=1;i<=n;i++)    →  O(n)
● for(i=1;i<=n;i=i+2)  →  n/2 iterations = O(n)   // likewise n/200 = O(n)
● for(i=n;i>=1;i--)    →  O(n)
● for(i=1;i<n;i=i*2)   →  O(log2n)
● for(i=1;i<n;i=i*3)   →  O(log3n)
● for(i=n;i>=1;i=i/2)  →  O(log2n)
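These iteration counts can be checked empirically. For instance, a halving loop like the last one runs about log2(n)+1 times; a small C helper (the name `halving_iterations` is mine) counts it directly:

```c
/* Counts iterations of the halving loop  for(i=n; i>=1; i=i/2);
   for a power of two n this is exactly log2(n) + 1. */
int halving_iterations(int n)
{
    int count = 0;
    for (int i = n; i >= 1; i = i / 2)
        count++;
    return count;
}
```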
Asymptotic Notation

● In Asymptotic Analysis, we evaluate the performance of an algorithm in terms


of input size (we don’t measure the actual running time). We calculate, how
the time (or space) taken by an algorithm increases with the input size.
● There are three common asymptotic notations: Big O, Big Theta θ and Big
Omega Ω are used to calculate the running time complexity of an algorithm.
● Asymptotic notations are mathematical tools to represent the time
complexity of algorithms for asymptotic analysis.
● Asymptotic Notation is a way of comparing function that ignores constant
factors and small input sizes.
Big Oh Notation, Ο

● (Asymptotic Upper bound)


● The notation Ο(n) is the formal way to express the upper bound of an
algorithm's running time. It measures the worst case time complexity or
the longest amount of time an algorithm can possibly take to complete.
Contd…

● For a function f(n):
● Say f(n) is your algorithm's runtime, and g(n) is an arbitrary
time complexity you are trying to relate to your algorithm.
● The function f(n) = O(g(n)) if and only if there exist positive
constants C and K such that f(n) ≤ C·g(n) for all n ≥ K.
● Example: 3n+2 = O(n), since 3n+2 ≤ 4n for all n ≥ 2.
● Example: 3n+3 = O(n), since 3n+3 ≤ 4n for all n ≥ 3.
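The witness constants in the first example (C = 4, K = 2 for f(n) = 3n+2, g(n) = n) can be spot-checked mechanically; the helper name `bound_holds` is mine:

```c
/* Checks the Big-O witness 3n+2 <= 4n, i.e. f(n) <= C*g(n)
   with C = 4; it should hold exactly for n >= K = 2. */
int bound_holds(int n)
{
    return 3 * n + 2 <= 4 * n;
}
```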
Omega Notation, Ω

● (Asymptotic Lower bound)


● The notation Ω(n) is the formal way to express the lower bound of an
algorithm's running time. It measures the best-case time complexity, i.e.,
the minimum amount of time an algorithm can take to complete.
Contd…

● For a function f(n):
● The function f(n) = Ω(g(n)) if and only if there exist positive
constants C and K such that f(n) ≥ C·g(n) for all n ≥ K.
● Example: 3n+2 = Ω(n), since 3n+2 ≥ 3n for all n ≥ 1.
● Example: 3n+3 = Ω(n), since 3n+3 ≥ 3n for all n ≥ 1.
Theta Notation, θ

● (Asymptotic Tight bound)


● The notation θ(n) is the formal way to express both the lower bound and
the upper bound of an algorithm's running time.
● Big Theta, commonly written as Θ(n), is an asymptotically tight bound:
it bounds the running time from above and below at the same time.
Contd…

● The function f(n) = θ(g(n)) if and only if there exist positive
constants C1, C2 and K such that
C1·g(n) ≤ f(n) ≤ C2·g(n) for all n ≥ K.

● Example: 3n+2 = θ(n), since
3n+2 ≥ 3n for all n ≥ 2 and
3n+2 ≤ 4n for all n ≥ 2,
so C1 = 3, C2 = 4, and K = 2.
Recursive Algorithms

● A recursive function is a function that is defined in terms of itself. Similarly,


an algorithm is said to be recursive if the same algorithm is invoked in the
body.
● An algorithm that calls itself is direct recursive. Algorithm A is said to be
indirect recursive if it calls another algorithm which in turn calls A.

● Direct recursive:
A( ) { A( ); }

● Indirect recursive:
A( ) { B( ); }
B( ) { A( ); }
Recurrence Relation

● In an Analysis of Algorithm, recurrence relations are used to analyze the


running time of a recursive function.
● The running time of a recursive function is denoted by T(n) where n is the
size of the input. For example
● T(n)=T(n−1)+O(1)
● The simplest form of a recurrence relation is the case where the next
term depends only on the immediately previous term.
Recursive Algorithms

void Test (int n)
{
    if(n>0)
    {
        printf("%d",n);
        Test(n-1);
    }
}

● Call Test(3): with n = 3, the function is called 4 times in total.
● In general, Test(n) produces f(n) = n+1 calls, so the running time is O(n).
● Recurrence: T(n) = T(n-1)+1
Recurrence Tree
Recurrence Relation

T(n) = 1          if n = 0
     = T(n-1)+1   if n > 0

T(n)   = T(n-1)+1
T(n-1) = T(n-2)+1
T(n-2) = T(n-3)+1

Substitute:
T(n) = T(n-2)+1+1 = T(n-2)+2
T(n) = T(n-3)+1+2 = T(n-3)+3

Continue for k steps:
T(n) = T(n-k)+k

Assume n-k = 0, so n = k:
T(n) = T(n-n)+n = T(0)+n = 1+n
O(n)
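The n+1 call count behind T(n) = T(n-1)+1 can be confirmed by instrumenting the slide's function with a counter (the `calls` variable is my addition):

```c
#include <stdio.h>

static int calls = 0;    /* added counter: number of invocations of Test */

/* The recursive Test from the slide; Test(n) leads to n+1 calls in total,
   matching T(n) = T(n-1)+1 with T(0) = 1. */
void Test(int n)
{
    calls++;
    if (n > 0) {
        printf("%d ", n);
        Test(n - 1);
    }
}
```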
Example

void Test (int n)            // T(n)
{
    if(n>0)                  // 1
    {
        for(i=0;i<n;i++)     // n+1 condition tests
        {
            printf("%d",n);  // n prints
        }
        Test(n-1);           // T(n-1)
    }
}

T(n) = T(n-1)+2n+2, which is T(n-1)+n asymptotically.

T(n) = 1          if n = 0
     = T(n-1)+n   if n > 0


Recurrence Tree
Contd…

T(n) = 1          if n = 0
     = T(n-1)+n   if n > 0

T(n)   = T(n-1)+n
T(n-1) = T(n-2)+(n-1)
T(n-2) = T(n-3)+(n-2)

Substitute:
T(n) = T(n-2)+(n-1)+n
T(n) = T(n-3)+(n-2)+(n-1)+n
...
T(n) = T(n-k)+(n-(k-1))+(n-(k-2))+...+(n-1)+n

Assume n-k = 0, so n = k:
T(n) = T(n-n)+(n-(n-1))+(n-(n-2))+...+(n-1)+n
     = T(0)+1+2+3+...+(n-1)+n = 1+n(n+1)/2
     = O(n2)
Example

void Test (int n)            // T(n)
{
    if(n>0)                  // 1
    {
        for(i=1;i<n;i=i*2)   // log n iterations
        {
            printf("%d",i);
        }
        Test(n-1);           // T(n-1)
    }
}

T(n) = T(n-1)+logn

T(n) = 1             if n = 0
     = T(n-1)+logn   if n > 0
Recurrence Tree
Contd…

T(n) = 1             if n = 0
     = T(n-1)+logn   if n > 0

T(n) = T(n-1)+logn
T(n) = T(n-2)+log(n-1)+logn
T(n) = T(n-3)+log(n-2)+log(n-1)+logn
T(n) = T(n-k)+log1+log2+...+log(n-1)+logn

Assume n-k = 0, so n = k:
T(n) = T(0)+log(n!) = 1+log(n!) ≈ 1+nlogn
O(nlogn)
Contd…

● T(n) = T(n-1)+1     →  O(n)
● T(n) = T(n-1)+n     →  O(n2)
● T(n) = T(n-1)+logn  →  O(nlogn)
● T(n) = T(n-1)+n2    →  O(n3)
● T(n) = T(n-2)+1     →  n/2 steps, O(n)
● T(n) = T(n-100)+n   →  O(n2)
Example

void Test (int n)        // T(n)
{
    if(n>0)
    {
        printf("%d",n);  // 1
        Test(n-1);       // T(n-1)
        Test(n-1);       // T(n-1)
    }
}

T(n) = 2T(n-1)+1

T(n) = 1           if n = 0
     = 2T(n-1)+1   if n > 0
Recurrence Tree
Contd…

T(n) = 1           if n = 0
     = 2T(n-1)+1   if n > 0

T(n) = 2T(n-1)+1
     = 2[2T(n-2)+1]+1
     = 22T(n-2)+2+1
     = 22[2T(n-3)+1]+2+1
     = 23T(n-3)+22+2+1
     ...
     = 2kT(n-k)+2k-1+2k-2+...+22+2+1

Assume n-k = 0, so n = k:
T(n) = 2nT(0)+2n-1+2n-2+...+22+2+1
     = 2n+(2n-1) = 2n+1-1
O(2n)
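The 2^(n+1)-1 call count can again be checked with an instrumented version of the slide's function (the counter `calls2` and the name `Test2` are mine):

```c
static long calls2 = 0;  /* added counter: number of invocations */

/* Two recursive calls per level: the call count satisfies
   C(n) = 2*C(n-1) + 1 with C(0) = 1, i.e. C(n) = 2^(n+1) - 1. */
void Test2(int n)
{
    calls2++;
    if (n > 0) {
        Test2(n - 1);
        Test2(n - 1);
    }
}
```

For example, Test2(3) produces 2^4 - 1 = 15 calls.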
Contd…

● T(n) = T(n-1)+1     →  O(n)
● T(n) = T(n-1)+n     →  O(n2)
● T(n) = T(n-1)+logn  →  O(nlogn)
● T(n) = 2T(n-1)+1    →  O(2n)
● T(n) = 2T(n-2)+1    →  O(2n/2), which is also O(2n)
● T(n) = 3T(n-1)+1    →  O(3n)
● T(n) = 2T(n-1)+n    →  O(n·2n)
Example

● Tower of Hanoi
● Ackerman Problem
Different types of time complexities

● Constant time – O (1)


● Linear time – O (n)
● Logarithmic time – O (log n)
● Quadratic time – O (n^2)
● Cubic time – O (n^3)
● Exponential time – O(2^n)
● Factorial time – O(n!)

● 1 < log n < √n < n < n log n < n^2 < n^3 < ⋯ < 2^n < 3^n < ⋯ < n^n
Contd…

● An algorithm is said to have constant time with order O (1) when it is not
dependent on the input size n. Irrespective of the input size n, the runtime
will always be the same.
● An algorithm is said to have a linear time complexity when the running time
increases linearly with the length of the input. When the function involves
checking all the values in input data, with this order O(n).
● An algorithm is said to have a logarithmic time complexity when it reduces
the size of the input data in each step. This indicates that the number of
operations is not the same as the input size. Algorithms are found in binary
trees or binary search functions O(logn).
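Binary search is the standard example of that logarithmic behaviour: each step halves the remaining range. A self-contained C sketch (the function name `binary_search` is mine):

```c
/* Binary search over a sorted array a[0..n-1]; O(log n) because
   the search range halves on every iteration.
   Returns the index of key, or -1 if it is absent. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo+hi)/2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid - 1;               /* discard the upper half */
    }
    return -1;
}
```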
Contd…

● An algorithm is said to have a non-linear time complexity where the


running time increases non-linearly (n^2) with the length of the input.
Generally, nested loops come under this order. Similarly, if there are ‘m’
loops defined in the function, then the order is given by O (n ^ m), which
are called polynomial time complexity functions.
Contd…
Algorithm Design Techniques
Contd…

1. Divide and Conquer Approach: It is a top-down approach. The algorithms which follow the
divide & conquer techniques involve three steps:
● Divide the original problem into a set of subproblems.
● Solve every subproblem individually, recursively.
● Combine the solution of the subproblems (top level) into a solution of the whole original
problem.

2. Greedy Technique: The greedy method is used to solve optimization problems. An
optimization problem is one in which we are given a set of input values and an
objective that is to be either maximized or minimized, subject to some constraints
or conditions.
● A greedy algorithm always makes the choice (the greedy criterion) that looks best
at the moment, to optimize the given objective.
● A greedy algorithm doesn't always guarantee the optimal solution; however, it
generally produces a solution that is very close in value to the optimal one.
Contd…

3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all
possible small problems and then combine their solutions to obtain solutions for
bigger problems.
● This is particularly helpful when the number of overlapping subproblems is
exponentially large. Dynamic Programming is frequently related to optimization
problems.

4. Branch and Bound: In a Branch & Bound algorithm, a given subproblem that cannot
be bounded has to be divided into at least two new restricted subproblems.
● Branch and Bound algorithms are methods for global optimization in non-convex
problems. They can be slow: in the worst case they require effort that grows
exponentially with problem size, but in some cases we are lucky and the method
converges with much less effort.
Contd…

5. Randomized Algorithms: A randomized algorithm is an algorithm that is allowed to
access a source of independent, unbiased random bits and to use these random bits
to influence its computation; it uses a random number at least once during the
computation to make a decision.
● Example 1: In Quick Sort, using a random number to choose a pivot.
● Example 2: Trying to factor a large number by choosing random numbers as possible
divisors.

6. Backtracking Algorithm: A backtracking algorithm tries each possibility until it
finds the right one. It is a depth-first search of the set of possible solutions.
During the search, if an alternative doesn't work, the algorithm backtracks to the
choice point (the place which presented different alternatives) and tries the next
alternative.
From
Gurpreet Singh Chhabra
Data Structure
● A data structure is a way to store and organize data so that it can be used
efficiently.
● It is a way of organizing data in memory.
Types of Data Structures
Algorithm vs Pseudocode vs Program:
● Algorithm – It is a well-defined, systematic logical approach that comes
with a step-by-step procedure for computers to solve any given program.
● Pseudocode refers to a method using which we represent any algorithm
for any program. Pseudocode does not consist of any specific syntax like
a programming language.
● Program – It refers to the code (written by programmers) for any
program that follows the basic rules of the concerned programming
language.
Example
● Algorithm to make a lemon juice

● Step 1: First, we will cut the lemon in half.


● Step 2: Squeeze the lemon as much as you can and take out its juice in a
container.
● Step 3: Add two tablespoons of sugar to it.
● Step 4: Stir the container until the sugar gets dissolved.
● Step 5: When sugar gets dissolved, add some water and ice in it.
● Step 6: Store the juice in a fridge for 5 to minutes.
● Step 7: Now, it's ready to drink.
Example
● Algorithm for Baking a Cake

● Preheat the oven


● Gather the ingredients
● Measure out the ingredients
● Mix together the ingredients to make the batter
● Grease a pan
● Pour the batter into the pan
● Put the pan in the oven
● Set a timer
● When the timer goes off, take the pan out of the oven
● Enjoy!
Example
● Write algorithms to celebrate New Year

● Step 1: Start
● Step 2: Prepare a guest list for New Year party
● Step 3: Decide the venue, food menu, games and fun activities for the
party
● Step 4: Invite the guests for the party
● Step 5: On New Year's Eve, get ready and enjoy the party
● Step 6: Stop
Example
● Calculate the Interest of a Bank Deposit

● Step 1: Read the amount.


● Step 2: Read years.
● Step 3: Read rate.
● Step 4: Calculate the interest with the formula
Interest=Amount*Years*Rate/100.
● Step 5: Print interest.
Example
● Determine and Output Whether Number N is Even or Odd

● Step 1: Read number N.


● Step 2: Set remainder as N modulo 2.
● Step 3: If the remainder is equal to 0 then number N is even, else number
N is odd.
● Step 4: Print output.
Example
● Algorithm to print the numbers between 1 to 10 (while loop)

● Step 1: Start
● Step 2: Initialize variable i to 1 (i=1)
● Step 3: Output i
● Step 4: Increment i by 1 (i=i+1)
● Step 5: Check if the value of i is less than or equal to 10.
if i<=10 then go to step 3 otherwise go to Step 6
● Step 6: Stop.
Example
● Problem: Suppose there are 60 students in the class. How will you calculate
the number of absentees in the class?

● Pseudo approach:
● Initialize a variable called Count to zero, absent to zero, total to 60.
● FOR EACH student present DO the following:
● Increase the Count by one.
● Then subtract Count from total and store the result in absent.
● Display the number of absent students.

● Algorithmic approach:
● Count <- 0, absent <- 0, total <- 60
● REPEAT till all students counted:
●     Count <- Count + 1
● absent <- total - Count
● Print "Number absent is:", absent
Time Complexity

● Time complexity is defined as the amount of time taken by an algorithm to


run, as a function of the length of the input. It measures the time taken to
execute each statement of code in an algorithm.
Small-o

● Small-o, commonly written as o, is an Asymptotic Notation to denote the


upper bound (that is not asymptotically tight) on the growth rate of runtime
of an algorithm.

● f(n) is o(g(n)) if for every real constant c (c > 0) there exists an n0
(n0 > 0) such that f(n) < c·g(n) for every input size n (n > n0).

● The definitions of O-notation and o-notation are similar. The main difference
is that in f(n) = O(g(n)), the bound f(n) ≤ c·g(n) holds for some constant c > 0,
but in f(n) = o(g(n)), the bound f(n) < c·g(n) holds for all constants c > 0.
Small-omega

● Small-omega, commonly written as ω, is an Asymptotic Notation to denote


the lower bound (that is not asymptotically tight) on the growth rate of
runtime of an algorithm.

● f(n) is ω(g(n)) if for every real constant c (c > 0) there exists an n0
(n0 > 0) such that f(n) > c·g(n) for every input size n (n > n0).

● The definitions of Ω-notation and ω-notation are similar. The main difference
is that in f(n) = Ω(g(n)), the bound f(n) ≥ c·g(n) holds for some constant c > 0,
but in f(n) = ω(g(n)), the bound f(n) > c·g(n) holds for all constants c > 0.
Recurrence Relation

● A recurrence relation can be solved using the following methods −

● Substitution Method − In this method, we guess a bound and using


mathematical induction we prove that our assumption was correct.
● Recursion Tree Method − In this method, a recurrence tree is formed
where each node represents the cost.
● Master’s Theorem − This is another important technique to find the
complexity of a recurrence relation.
