DSA - Module
Algorithms
Mrs. Pushpanjali Y Jillanda
Department of MCA, DSATM
Terminologies
▪ Program: Expression of an algorithm in a programming
language
▪ Testing: Process of executing a program on sample data sets to
determine whether faulty results occur
▪ Debugging: Checking for the presence of errors in a program
and correcting them
▪ Profiling: Executing the correct program and measuring its time
and space complexities
Algorithms
Introduction
▪ Algorithm is a sequence of unambiguous instructions for
solving a problem
▪ For obtaining a required output for any legitimate input in a
finite amount of time
▪ Step by step procedure with the input to solve the problem in a
finite amount of time to obtain the required output
▪ Named after Persian Mathematician, Muhammad ibn Mūsā al-
Khwārizmī
Definition
▪ An Algorithm is a finite set of instructions that, if followed,
accomplishes a particular task
Characteristics
▪ Input: Zero or more quantities are externally supplied
▪ Output: At least one quantity is produced
▪ Definiteness: Each instruction is clear and unambiguous
▪ Finiteness: If the instructions of an algorithm are traced, then for
all cases the algorithm must terminate after a finite number of
steps
▪ Effectiveness: Every instruction must be very basic to be
carried out in principle, by a person using only pencil and paper
Example – Euclid’s Algorithm
Euclid’s Algorithm for computing gcd(m,n)
▪ Step 1: If n = 0, return the value of m as the answer and stop;
otherwise proceed to Step 2
▪ Step 2: Divide m by n and assign the value of remainder to r
▪ Step 3: Assign the value of n to m and the value of r to n. Go to
Step 1
Pseudocode – Euclid’s
Algorithm
Computing gcd(m,n)
// Computes gcd(m,n) by Euclid’s Algorithm
// Input: Two non-negative, not both zero integers m and n
// Output: Greatest Common Divisor of m and n
while n ≠ 0 do
 r ← m mod n
 m ← n
 n ← r
return m
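As a minimal Python sketch of the pseudocode above (the function name is illustrative):

```python
def gcd(m, n):
    """Compute gcd(m, n) by Euclid's algorithm."""
    while n != 0:
        m, n = n, m % n  # r <- m mod n; m <- n; n <- r
    return m

print(gcd(60, 24))  # -> 12
```

Note that when n = 0 the loop never runs and m is returned directly, matching Step 1 of the algorithm.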
Fundamentals of
Algorithmic Problem
Solving
Understand the Problem
Design an Algorithm
Prove Correctness
[Flowchart notation: processing/assignment boxes (e.g., c = a + b), input/output (e.g., input values a, b), sub-function calls, flow connectivity arrows, and start/stop states]
Proving algorithms correctness
▪ We have to prove that the algorithm yields required result for
every legitimate input in a finite amount of time
▪ For some algorithms, a proof of correctness is quite easy; for
others, it can be quite complex
▪ An algorithm is incorrect if it fails on even one legitimate input;
in that case the algorithm has to be redesigned
Analyzing an Algorithm
▪ Time Efficiency: How fast the algorithm runs
▪ Space Efficiency: How much extra space the algorithm needs
▪ Simplicity: Simple algorithms are easier to understand
▪ Generality: It is easier to design an algorithm for a problem in
more general terms
Coding an Algorithm
▪ Coding / implementation of an algorithm is done in a suitable
programming language
▪ The transition from algorithm to program should be done correctly
to ensure the program remains efficient
▪ Standard tricks such as moving a loop's invariant computations
outside the loop, collecting common subexpressions, and replacing
expensive operations with cheap ones, as well as the choice of
programming language, should be known to the programmer
▪ Using a better algorithm can speed up the program considerably
▪ It is essential to write optimized code to reduce the burden on the
compiler
Coding an Algorithm
▪ Good Algorithm is a result of repeated effort and rework
▪ Most algorithms are destined to be implemented as computer
programs
▪ Not only implementation but also optimization of the code is
necessary; this increases the speed of operation
▪ A working program provides an opportunity for empirical
analysis of the underlying algorithm
Important Problem
Types
Important Problem Types
▪ Sorting
▪ Searching
▪ String Processing
▪ Graph Problems
▪ Combinatorial Problems
▪ Geometric Problems
▪ Numerical Problems
Sorting
▪ Refers to re-arranging the items of a given list in ascending /
descending order
▪ Ex: We may need to sort numbers, characters, strings, records etc.
▪ We need to choose a piece of information by which to order the
items. This piece of information is called a key
▪ Many algorithms exist. Some are indeed better than others, but there
is no algorithm that would be the best algorithm in all situations
▪ A sorting algorithm is called stable if it preserves the relative order
of any two equal elements in its input
▪ An algorithm is said to be in place if it does not require extra
memory, except possibly a few memory units
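The stability property above can be seen directly in Python, whose built-in sort is stable (the record names here are just illustrative data):

```python
# Python's built-in sort (Timsort) is stable: records with equal keys
# keep their original relative order.
records = [("alice", 3), ("bob", 1), ("carol", 3), ("dave", 1)]
by_key = sorted(records, key=lambda r: r[1])
print(by_key)  # [('bob', 1), ('dave', 1), ('alice', 3), ('carol', 3)]
```

Both pairs with key 1 and both pairs with key 3 stay in their input order.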
Searching
▪ Refers to finding a given value, called the search key, in a given set
▪ Several algorithms ranging from Sequential Search to Binary
Search
▪ Some algorithms are based on representing the underlying set
in a different form more conducive to searching. They are used
in large databases
▪ Some algorithms work faster than others but require more
memory
▪ Some are very fast only in sorted arrays
String Processing
▪ String – Sequence of characters
▪ String Processing algorithms have been important for a long
time in conjunction with computer languages and compiling
issues
▪ String matching is one kind of such problem
Graph Problems
▪ Graph – Collection of points called Vertices, some of which are
connected by line segments called Edges
▪ They can be used for modeling a wide variety of real-life
applications
▪ Basic graph algorithms include graph traversal algorithms,
shortest-path algorithms, and topological sorting for graphs
with directed edges
Combinatorial Problems
▪ Refers to combinatorial objects such as permutations,
combinations, or subsets that satisfy certain constraints and
have some desired property
▪ These are the most difficult problems, as the number of
combinatorial objects grows extremely fast with a problem's
size, reaching unimaginable magnitudes even for moderate-sized
instances
▪ For most such problems, there are no known algorithms that
solve them exactly in an acceptable amount of time
Geometric Problems
▪ Deals with geometric objects such as points, lines and polygons
▪ These algorithms are used in developing applications for
Computer Graphics, Robotics and Tomography
Numeric Problems
▪ Involves mathematical objects of continuous nature
▪ Solving equations, system of equations, computing definite
integrals, evaluating functions and so on
▪ Majority of such problems can be solved only approximately
▪ Such problems require manipulating real numbers, which can
be represented in computer only approximately
▪ A large number of arithmetic operations leads to round-off
errors, which can drastically distort the output
Fundamentals of the
Analysis of Algorithm
Efficiency
Algorithm Efficiency
▪ Efficiency of an algorithm can be in terms of time and space
▪ Can be analyzed by the following ways:
▪ Analysis Framework
▪ Asymptotic Notations and its properties
▪ Mathematical Analysis for Recursive Algorithms
▪ Mathematical Analysis for Non-Recursive Algorithms
Notion of Algorithm
[Figure: Problem → Algorithm]
For sequential search, where p is the probability of a successful search (with each of the n positions equally likely):
C_avg(n) = (p/n) · n(n + 1)/2 + n(1 − p)
C_avg(n) = p(n + 1)/2 + n(1 − p)
Average-Case Efficiencies
▪ If p = 1 (successful search), the average number of key
comparisons is (n + 1)/2, i.e., about half the list's elements
▪ If p = 0 (unsuccessful search), the average number of key
comparisons is n
▪ Finding the average-case efficiency is difficult, but for some
algorithms it is important
Amortized Efficiency
▪ It applies not to a single run of an algorithm but rather to a
sequence of operations performed on the same data structure
▪ It turns out that in some situations a single operation can be
expensive, but the total for an entire sequence of n operations is
always significantly better than the worst-case efficiency of that
single operation multiplied by n
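A classic illustration of this idea (not from the slides; a hypothetical model of a dynamic array that doubles its capacity when full) is counting element copies over a sequence of appends:

```python
def append_cost(n):
    """Count element copies made by n appends to a doubling dynamic array."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:   # array full: double capacity, copy everything
            copies += size
            capacity *= 2
        size += 1
    return copies

# A single append can cost up to n copies, yet the total for n appends
# stays below 2n, so the amortized cost per append is constant.
print(append_cost(1000))  # total copies stay below 2000
```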
Analysis Framework-Summary
▪ Both time and space efficiencies are measured as functions of
the algorithm’s input size
▪ Time efficiency is measured by counting the number of times
the algorithm’s basic operation is executed.
▪ Space efficiency is measured by counting the number of extra
memory units consumed by the algorithm
Analysis Framework-Summary
▪ The efficiencies of some algorithms may differ significantly for
inputs of the same size. For such algorithms, we need to
distinguish between the worst-case, average-case, and best-case
efficiencies
▪ The framework's primary interest lies in the order of growth of
the algorithm's running time (or extra memory units consumed)
as its input size goes to infinity
Asymptotic Notations
Asymptotic Notations
▪ Used to express and compare the growth rate of functions
▪ In our case, the functions will typically represent the running
time of algorithms
▪ Allows us to express the behavior of a function as the input
approaches infinity
▪ We use three notations:
▪ O – Big O Notation
▪ Ω – Big Omega Notation
▪ Θ – Big Theta Notation
Big O Notation
▪ Mathematical Notation used to find an upper bound on time
taken by an algorithm or data structure
▪ It provides a way to compare the performance of different
algorithms and data structures, and to predict how they will
behave as the input size increases
▪ O(f(n)), where f(n) is a function that represents the number of
operations (steps) that an algorithm performs to solve a
problem of size n.
Big O – Definition
A function f(n) is said to be in O(g(n)), denoted f(n) ∈ O(g(n)), if f(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some non-negative integer n0 such that
f(n) ≤ c·g(n) for all n ≥ n0
[Graph: time complexity t(n) vs. input size, with c·g(n) bounding f(n) from above for n ≥ n0]
Big O Notation
▪ Important because it helps analyze the efficiency of algorithms
▪ Provides a way to describe how the runtime or space
requirements of an algorithm grow as the input size increases
▪ Allows programmers to compare different algorithms and
choose the most efficient one for a specific problem
▪ Helps in understanding the scalability of algorithms and
predicting how they will perform as the input size grows
▪ Enables developers to optimize code and improve overall
performance
O(n)
▪ Linear Time Complexity means that the running time of an
algorithm grows linearly with the size of the input
▪ Example: Linear Search / Sequential Search
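A minimal Python sketch of sequential search, which performs at most n comparisons and so runs in O(n) time:

```python
def sequential_search(a, key):
    """Return the index of key in list a, or -1 if absent; O(n) comparisons."""
    for i, x in enumerate(a):
        if x == key:
            return i
    return -1

print(sequential_search([4, 8, 15, 16, 23, 42], 23))  # -> 4
```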
O(log n)
▪ Logarithmic Time Complexity means that the running time of
an algorithm is proportional to the logarithm of the input size
▪ Example: Binary Search
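Binary search halves the remaining range on each comparison, giving the O(log n) behavior described above. A sketch on a sorted list:

```python
def binary_search(a, key):
    """Return an index of key in sorted list a, or -1; O(log n) comparisons."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # inspect the middle element
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1           # key can only be in the right half
        else:
            hi = mid - 1           # key can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 9))  # -> 4
```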
Big Omega Ω – Definition
A function f(n) is said to be in Ω(g(n)), denoted f(n) ∈ Ω(g(n)), if f(n) is bounded below by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some non-negative integer n0 such that
f(n) ≥ c·g(n) for all n ≥ n0
[Graph: time complexity t(n) vs. input size, with c·g(n) bounding f(n) from below for n ≥ n0]
Big Theta Θ Notation
▪ Specifies both the upper and lower bounds of a function f(n)
▪ Provides a tight bound (exact order of growth) of an algorithm's running time
Big Θ – Definition
A function f(n) is said to be in Θ(g(n)), denoted f(n) ∈ Θ(g(n)), if f(n) is bounded both above and below by some constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some non-negative integer n0 such that
c2·g(n) ≤ f(n) ≤ c1·g(n) for all n ≥ n0
[Graph: time complexity t(n) vs. input size, with c1·g(n) above and c2·g(n) below f(n) for n ≥ n0]
Summary
O: f(n) ≤ c·g(n) for all n ≥ n0. Describes the upper bound of the algorithm's running time. Used most of the time.
Ω: f(n) ≥ c·g(n) for all n ≥ n0. Describes the lower bound of the algorithm's running time. Used less often.
Θ: c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0. Describes both the upper and lower bounds of the algorithm's running time. Also used a lot, and preferred over Big O when an exact bound can be found.
• f(n) represents the function being analyzed, typically the algorithm's time complexity
• g(n) represents a specific function that bounds f(n)
• c, c1, c2 are constants
• n0 is the minimum input size beyond which the inequality holds
Basic Efficiency
Classes
1 (Constant): Short of best-case efficiencies, very few reasonable examples can be given, since an algorithm's running time typically goes to infinity when its input size grows infinitely large
log n (Logarithmic): Typically a result of cutting a problem's size by a constant factor on each iteration of the algorithm. Note that a logarithmic algorithm cannot take into account all its input (or even a fixed fraction of it): any algorithm that does so will have at least linear running time
n (Linear): Algorithms that scan a list of size n (e.g., sequential search) belong to this class
n log n (n-log-n): Many divide-and-conquer algorithms, including merge sort and quick sort in the average case, fall into this category
n² (Quadratic): Typically characterizes the efficiency of algorithms with two embedded loops. Elementary sorting algorithms and certain operations on n-by-n matrices are standard examples
n³ (Cubic): Typically characterizes the efficiency of algorithms with three embedded loops. Several non-trivial algorithms from linear algebra fall into this class
2ⁿ (Exponential): Typical for algorithms that generate all subsets of an n-element set. Often, the term "exponential" is used in a broader sense to include this and faster orders of growth as well
n! (Factorial): Typical for algorithms that generate all permutations of an n-element set
Basic Efficiency Classes
Examples
1 (Constant)
log n (Logarithmic): Halving the problem size each step, e.g., Binary Search
n (Linear): Linear Search
n log n (n-log-n): Merge Sort, Quick Sort
n² (Quadratic): Two nested loops
n³ (Cubic): Three nested loops
2ⁿ (Exponential): Generating all subsets
n! (Factorial): Generating all permutations
Using limits for comparing
order of growth
lim_{n→∞} T(n)/g(n) = 0: T(n) has a smaller order of growth than g(n)
lim_{n→∞} T(n)/g(n) = c > 0: T(n) has the same order of growth as g(n)
lim_{n→∞} T(n)/g(n) = ∞: T(n) has a larger order of growth than g(n)
Useful Property involving the
Asymptotic Notations
Theorem: If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)})
Proof:
Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some non-negative integer n1 such that t1(n) ≤ c1·g1(n) for all n ≥ n1
Since t2(n) ∈ O(g2(n)), there exist some positive constant c2 and some non-negative integer n2 such that t2(n) ≤ c2·g2(n) for all n ≥ n2
Let us denote c3 = max{c1, c2} and consider n ≥ max{n1, n2}, so that both inequalities hold.
Adding the two inequalities yields:
t1(n) + t2(n) ≤ c1·g1(n) + c2·g2(n)
t1(n) + t2(n) ≤ c3·g1(n) + c3·g2(n) = c3[g1(n) + g2(n)]
t1(n) + t2(n) ≤ c3 · 2 · max{g1(n), g2(n)}
Hence, t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with the constants c and n0 required by the definition of O being 2c3 = 2·max{c1, c2} and max{n1, n2}, respectively
This property implies that the algorithm’s overall efficiency will be
determined by the part with a higher order of growth, i.e., its least
efficient part
∴ If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)),
then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)})
Basic Rules of Sum Manipulation
Σ_{i=l}^{u} c·aᵢ = c · Σ_{i=l}^{u} aᵢ
Σ_{i=l}^{u} (aᵢ ± bᵢ) = Σ_{i=l}^{u} aᵢ ± Σ_{i=l}^{u} bᵢ
Summation Formulas
Σ_{i=l}^{u} 1 = u − l + 1, where l ≤ u are some lower and upper integer limits
Σ_{i=0}^{n} i = Σ_{i=1}^{n} i = 1 + 2 + 3 + … + n = n(n + 1)/2 ≈ n²/2 ∈ Θ(n²)
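These two closed forms are easy to check numerically, e.g., for n = 100:

```python
# Verify the summation formulas for n = 100.
n = 100
count = sum(1 for _ in range(1, n + 1))   # sum of 1 for i = 1..n -> n
total = sum(range(1, n + 1))              # 1 + 2 + ... + n -> n(n+1)/2
print(count, total)  # -> 100 5050
```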
Mathematical
Analysis of Non-
Recursive Algorithms
General Plan for Analyzing
Time Efficiency
▪ Decide on a parameter(s) indicating an input's size.
▪ Identify the algorithm's basic operation.
▪ Check whether the number of times the basic operation is
executed depends only on the input size. If it also depends on
some additional property, the worst-case, average-case, and
best-case efficiencies have to be investigated separately.
▪ Set up a sum expressing the number of times the algorithm's
basic operation is executed.
▪ Using standard formulas and rules of sum manipulation either
find a closed form formula for the count, or at the very least,
establish the order of growth.
Example 1
▪ Consider the problem of finding the value of the largest element in a
list of n numbers. Assume that the list is implemented as an array for
simplicity
Algorithm: MaxElement(A[0,…n-1])
// Determines the value of the largest element in a given array
// Input: An array A[0,…n-1] of real numbers
// Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n-1 do
 if A[i] > maxval
  maxval ← A[i]
return maxval
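A direct Python rendering of MaxElement (the function name is illustrative):

```python
def max_element(a):
    """Return the largest value in a non-empty list (MaxElement)."""
    maxval = a[0]
    for i in range(1, len(a)):   # n - 1 comparisons for a list of n items
        if a[i] > maxval:
            maxval = a[i]
    return maxval

print(max_element([3, 9, 2, 7]))  # -> 9
```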
Algorithm Analysis
▪ The measure of an input’s size here is the number of elements
in the array i.e., n
▪ There are two operations in the for loop's body:
▪ The comparison A[i] > maxval and
▪ The assignment maxval ← A[i]
▪ The comparison is executed on every repetition of the loop, so it is the algorithm's basic operation; i.e., we sum up 1 repeated n-1 times:
C(n) = Σ_{i=1}^{n−1} 1 = n − 1 ∈ Θ(n)
Example 2
▪ Consider the element uniqueness problem. Check whether all the
elements in a given array of n elements are distinct
Algorithm: UniqueElements(A[0,…n-1])
// Determines whether all the elements in a given array are distinct
// Input: An array A[0,…n-1]
// Output: Returns true if all the elements in A are distinct and false
otherwise
for i ← 0 to n-2 do
 for j ← i+1 to n-1 do
  if A[i] = A[j]
   return false
return true
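The same two nested loops in Python (the function name is illustrative):

```python
def unique_elements(a):
    """Return True if all elements of a are distinct (UniqueElements)."""
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):   # compare A[i] with every later element
            if a[i] == a[j]:
                return False
    return True

print(unique_elements([1, 2, 3]))  # -> True
print(unique_elements([1, 2, 1]))  # -> False
```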
Algorithm Analysis
▪ The natural measure of the input’s size here is again n (the
number of elements in the array)
▪ Since the innermost loop contains a single operation (the
comparison of two elements), we should consider it the
algorithm's basic operation
▪ The number of element comparisons depends not only on n but
also on whether there are equal elements in the array and, if
there are, which array positions they occupy. We will limit our
investigation to the worst case only.
Algorithm Analysis
One comparison is made for each repetition of the innermost loop, i.e., for each
value of the loop variable j between its limits i+1 and n-1; this is repeated for each
value of the outer loop, i.e., for each value of the loop variable i between its limits 0
and n-2
C_worst(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} [(n − 1) − (i + 1) + 1]
= Σ_{i=0}^{n−2} (n − 1 − i)
= Σ_{i=0}^{n−2} (n − 1) − Σ_{i=0}^{n−2} i = (n − 1) Σ_{i=0}^{n−2} 1 − (n − 2)(n − 1)/2
= (n − 1)² − (n − 2)(n − 1)/2 = (n − 1)n/2 ≈ n²/2 ∈ Θ(n²)
Example 3
▪ Consider the multiplication of two n-by-n matrices, with the innermost multiplication as the basic operation. The total number of multiplications M(n) is expressed by the following triple sum:
M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1 = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} n = Σ_{i=0}^{n−1} n² = n³
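The three embedded loops and the n³ multiplication count can be seen directly in a straightforward implementation (instrumented here with a counter for illustration):

```python
def matrix_multiply(a, b):
    """Multiply two n-by-n matrices; return (product, multiplication count)."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
                mults += 1          # one multiplication per innermost step
    return c, mults

product, m = matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(m)  # -> 8, i.e. n^3 for n = 2
```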
Algorithm Analysis
To estimate the running time of the algorithm on a particular machine, we
can use the product
T(n) ≈ c_m·M(n) = c_m·n³
where c_m is the execution time of one multiplication on that machine
Mathematical Analysis of Recursive Algorithms
Example 1
▪ Consider the recursive computation of the factorial function n!, where the number of multiplications M(n) satisfies:
M(n) = M(n − 1) + 1 for n > 0
M(0) = 0
Method of Backward
substitutions
M(n) = M(n − 1) + 1    Substitute M(n − 1) = M(n − 2) + 1
= [M(n − 2) + 1] + 1
= M(n − 2) + 2    Substitute M(n − 2) = M(n − 3) + 1
= [M(n − 3) + 1] + 2
= M(n − 3) + 3
…
= M(n − i) + i
…
= M(n − n) + n
= n
∴ M(n) = n
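The result M(n) = n can be checked by instrumenting the recursive factorial with a multiplication counter (the pairing of value and count is just for illustration):

```python
def factorial(n):
    """Recursive n!; return (value, number of multiplications M(n))."""
    if n == 0:
        return 1, 0                  # M(0) = 0
    value, mults = factorial(n - 1)
    return value * n, mults + 1      # M(n) = M(n-1) + 1

print(factorial(5))  # -> (120, 5), confirming M(n) = n
```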
Example 2
▪ Consider the educational workhorse of recursive algorithms: the
Tower of Hanoi puzzle. We have n disks of different sizes that
can slide onto any of three pegs. Call the pegs A (source), B
(auxiliary), and C (destination). Initially, all the disks are on the
first peg in order of size, the largest on the bottom and the
smallest on top. The goal is to move all the disks to the third
peg, using the second one as an auxiliary.
Algorithm: TOH(n, A, C, B)
// Move disks from source to destination recursively
// Input: n disks (n>0)and 3 pegs A, B, and C
// Output: Disks moved to destination as in the source order
if n = 1
 Move disk from A to C //Source to Destination
else
 TOH(n-1, A, B, C) //Move top n-1 disks from A to B using C
 Move disk from A to C //Move the largest disk from A to C
 TOH(n-1, B, C, A) //Move the n-1 disks from B to C using A
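A runnable Python version that records each move (the parameter names mirror the pegs above):

```python
def toh(n, source, dest, aux, moves):
    """Tower of Hanoi: move n disks from source to dest, recording each move."""
    if n == 1:
        moves.append((source, dest))
    else:
        toh(n - 1, source, aux, dest, moves)   # n-1 disks: source -> aux
        moves.append((source, dest))           # largest disk: source -> dest
        toh(n - 1, aux, dest, source, moves)   # n-1 disks: aux -> dest

moves = []
toh(3, "A", "C", "B", moves)
print(len(moves))  # -> 7, i.e. 2^n - 1 moves for n = 3
```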
Algorithm Analysis
▪ The number of moves M(n) depends on n only, and we get the
following recurrence equation for it:
M(n) = 1 for n = 1
M(n) = M(n − 1) + 1 + M(n − 1) for n > 1
i.e.,
M(n) = 1 for n = 1
M(n) = 2M(n − 1) + 1 for n > 1
Method of Backward
substitutions
M(n) = 2M(n − 1) + 1    Substitute M(n − 1) = 2M(n − 2) + 1
= 2[2M(n − 2) + 1] + 1
= 2²M(n − 2) + 2 + 1    Substitute M(n − 2) = 2M(n − 3) + 1
= 2²[2M(n − 3) + 1] + 2 + 1
= 2³M(n − 3) + 2² + 2 + 1    Substitute M(n − 3) = 2M(n − 4) + 1
= 2⁴M(n − 4) + 2³ + 2² + 2 + 1
…
= 2ⁱM(n − i) + 2ⁱ⁻¹ + 2ⁱ⁻² + ⋯ + 2 + 1
= 2ⁱM(n − i) + 2ⁱ − 1
Method of Backward
substitutions
▪ Since the initial condition is specified for n=1, which is achieved
for i=n-1,
M(n) = 2ⁿ⁻¹M(n − (n − 1)) + 2ⁿ⁻¹ − 1
M(n) = 2ⁿ⁻¹M(1) + 2ⁿ⁻¹ − 1
M(n) = 2ⁿ⁻¹ + 2ⁿ⁻¹ − 1
M(n) = 2·2ⁿ⁻¹ − 1
M(n) = 2ⁿ − 1 ∈ Θ(2ⁿ)
▪ Thus, we have an exponential time algorithm