
Module 3: Complexity of an Algorithm (Week 2)

3.1 Introduction and Definition


Computational complexity, or simply the complexity of an algorithm, is a
measure of the amount of time and/or space required by an algorithm for an
input of a given size. It evaluates the order of the count of operations executed
by an algorithm as a function of the input data size. To assess the complexity,
the order (approximation) of the count of operations is always considered
instead of counting the exact steps.

The analysis of algorithms is the process of finding the computational
complexity of algorithms.

Usually, this involves determining a function that relates the length of an
algorithm's input to the number of steps it takes (its time complexity) or the
number of storage locations it uses (its space complexity). An algorithm is said
to be efficient when this function's values are small, or grow slowly compared
to a growth in the size of the input.

Suppose X is an algorithm and n is the size of input data, the time and space
used by the algorithm X are the two main factors, which decide the efficiency
of X.
(a) Time Complexity: Time is measured by counting the number of key
operations such as comparisons in the sorting algorithm.
(b) Space Complexity: Space is measured by counting the maximum
memory space required by the algorithm.

The complexity of an algorithm f(n) gives the running time and/or the storage
space required by the algorithm in terms of n as the size of input data.

3.2 Types of Algorithm Analysis


Algorithm analysis deals with the execution or running time of various
operations involved in the algorithm. The running time of an operation can be
defined as the number of computer instructions executed per operation.
Efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation. They are the following:

(a) A Priori Analysis: This is a theoretical analysis of an algorithm.
Efficiency of an algorithm is measured by assuming that all other
factors, for example, processor speed, are constant and have no effect
on the implementation.

(b) A Posteriori Analysis: This is an empirical analysis of an algorithm. The
selected algorithm is implemented in a programming language and
executed on the target machine. In this analysis, actual statistics,
such as running time and space required, are collected.

Usually, there are three types of algorithm analysis:


(a) Best Case: Minimum time required for algorithm execution.
(b) Average Case: Average time required for algorithm execution.
(c) Worst Case: Maximum time required for algorithm execution.

3.3 Best Case, Average Case, and Worst Case Algorithm Analysis
In algorithm analysis, different cases are used to describe how an algorithm
performs under different conditions of input. These cases help evaluate the
efficiency and limitations of the algorithm in terms of time and sometimes
space complexity.

Each case describes the number of operations or execution time required
depending on the nature of the input. Understanding all three cases is
essential for analyzing and comparing algorithms, especially when choosing
an algorithm for a specific application. In practical use, the worst-case and
average-case complexities are the most important.

The reasons for using best, average and worst case analysis are:
(a) Choosing the Right Algorithm: For a critical system where consistent
performance is paramount, an algorithm with a good worst-case
complexity might be preferred, even if its average case is slightly worse
than another algorithm. For general-purpose applications where inputs
vary widely, an algorithm with an excellent average-case complexity
might be chosen, even if it has a rare but poor worst case.
(b) Algorithm Design and Optimization: Designers often focus on
improving the worst-case performance of algorithms, as this provides
the strongest guarantee. Techniques like randomization (e.g.,
randomized Quick Sort) are used to make the worst-case scenario
extremely improbable, effectively pushing the worst case towards the
average case.
(c) Benchmarking and Performance Evaluation: When testing
algorithms, it is good practice to include test cases that trigger best,
average, and worst scenarios to get a comprehensive understanding of
their performance envelope.

3.3.1 Best Case


The best case describes the scenario where the algorithm performs the
minimum number of operations (i.e., it runs under ideal conditions). This
scenario describes the situation where an algorithm performs optimally,
requiring the minimum amount of time or resources. This usually happens
with a specific input that allows the algorithm to complete its task in the
fewest possible steps. The inputs to the algorithm are arranged in a way that
makes the algorithm finish as fast as possible. The best case is usually not
sufficient for performance evaluation because it does not reflect typical
behavior.

Mathematically, for the best case scenario, we seek an input Ibest of size n such
that the cost C(Ibest) is minimized:

T_best = min_{|I| = n} C(I)

where C(I) is the cost (number of operations) for a specific input I, and |I| = n
means the input I has size n.

For a Linear Search algorithm, the best case is when the target element is at
the first position. The algorithm finds it immediately with just one
comparison.

3.3.2 Average Case


The average case is the expected running time of the algorithm over all
possible inputs of size n, assuming all inputs are equally likely (or based on
a known probability distribution). It involves calculating the average number
of operations over all possible inputs of a given size, often assuming a uniform
distribution of inputs. It provides a more realistic estimate of an algorithm's
performance in practice and is often derived using probability and
mathematical expectation. It requires knowledge or assumptions about the
distribution of input data and can be more complex to analyze mathematically
than the best or worst cases.

Mathematically, the average case is often the most complex to analyze, and it
requires defining a probability distribution over the set of all possible inputs
of size n. Let P(I) be the probability of a specific input I occurring and C(I)
the running time for input I. The average-case time is:

T_avg = Σ_{|I| = n} C(I) · P(I)

If we assume a uniform probability distribution (where all inputs of size n are
equally likely), then

P(I) = 1 / (number of inputs of size n)

For example, in a linear search, the average case involves finding the element
somewhere in the middle of the list. On average, the algorithm might have to
check about half of the elements.
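
As a worked illustration of the formula above (a sketch assuming the target is
present and equally likely to be at any of the n positions), linear search makes
i comparisons when the target sits at position i, so

T_avg = Σ_{i=1}^{n} i · (1/n) = (1/n) · n(n + 1)/2 = (n + 1)/2

that is, roughly half of the elements are examined on average, in line with the
observation above.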

3.3.3 Worst Case


The worst case describes the scenario where the algorithm performs the
maximum number of operations for an input of size n (i.e., it runs under the
least favourable conditions). The worst case scenario describes the situation
where an algorithm performs at its slowest, requiring the maximum amount
of time or resources. This happens with a specific input that forces the
algorithm to take the most computationally expensive path. It is the most
pessimistic scenario and assumes the input that causes the algorithm to take
the most time or space. The worst case provides a guarantee that the
algorithm will not run longer than this, regardless of the input, and is critical
in real-time and safety-critical systems (such as medical devices and avionics).

Mathematically, in the worst case analysis, we seek an input Iworst of size n
such that the cost C(Iworst) is maximized:

T_worst = max_{|I| = n} C(I)

For example, in a linear search, the worst case occurs if the element we are
searching for is at the very end of the list, or if it is not in the list at all. In
either situation, the algorithm has to check every single element in the list.
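
To make the three cases concrete, the minimal C++ sketch below (the function
countComparisons and the sample values are illustrative, not taken from this
module) instruments a linear search with a comparison counter: the first
element costs one comparison (best case), a middle element costs about n/2
(average case), and a missing element costs all n comparisons (worst case).

#include <iostream>
#include <vector>
using namespace std;

// Illustrative sketch: count how many comparisons a linear search performs.
int countComparisons(const vector<int>& a, int key) {
    int comparisons = 0;
    for (int x : a) {
        comparisons++;            // one comparison per element examined
        if (x == key) break;      // stop as soon as the key is found
    }
    return comparisons;
}

int main() {
    vector<int> a = {10, 20, 30, 40, 50, 60, 70, 80};
    cout << countComparisons(a, 10) << endl;  // best case: 1 comparison
    cout << countComparisons(a, 50) << endl;  // middle element: 5 comparisons
    cout << countComparisons(a, 99) << endl;  // worst case (absent): 8 comparisons
    return 0;
}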

The worst case analysis is important for the following reasons:


(a) Guaranteed Performance: The worst-case complexity provides a
crucial performance guarantee. If an algorithm takes X time in the worst
case, it is clear that it will never take longer than X time for any input
of that size. This is vital for critical systems where performance
deadlines must be met (e.g., operating systems, flight control software).
(b) Simplicity: Often, the worst-case input structure is easier to identify
and analyze than the average case. You deliberately try to "break" the
algorithm.
(c) Resource Allocation: Knowing the upper bound helps in allocating
sufficient computational resources (CPU time, memory) for an
application.

3.4 Rate of Growth of an Algorithm
The rate of growth of an algorithm is defined as the rate at which the running
time of the algorithm increases when the input size is increased. It gives an
idea of how fast the time or space complexity grows as the size of the input
increases. It is a fundamental concept in algorithm analysis because it allows
us to predict how an algorithm will perform for very large inputs, regardless
of the specific machine or programming language used. Instead of measuring
exact time in seconds (which varies depending on hardware, compiler, etc.),
we count the number of "elementary operations" an algorithm performs as a
function of the input size, often denoted by 'n'. Then, we simplify this function
to its dominant term, ignoring constant factors and lower-order terms,
because for large 'n', the dominant term dictates the growth.

To describe the rate of growth, asymptotic notations are used (especially
Big O notation), which allow algorithms to be classified according to their
growth rates. Big O focuses on the dominant term of the complexity
function while disregarding constant factors and lower-order terms.

The Rate of Growth is Important for the following reasons:


(a) Scalability: It tells us how well an algorithm will scale. An algorithm
that works fine for a small input might become impossibly slow for a
large input if its rate of growth is too high.
(b) Comparison: It provides a standardized way to compare the efficiency
of different algorithms designed to solve the same problem. An
algorithm with a slower rate of growth is generally preferred for large
inputs.
(c) Resource Prediction: It helps in predicting the resources (time,
memory) required for an algorithm as the input size grows, which is
crucial for system design and planning.

Commonly used growth rates and their Big O notation are shown in the table
below:

Type                            Big O notation    Example
Constant time                   O(1)              Accessing an array element
Logarithmic time                O(log n)          Binary Search
Linear time                     O(n)              Traversing an array
Linearithmic/log-linear time    O(n log n)        Merge Sort, Quick Sort
Quadratic time                  O(n²)             Nested loops (Bubble Sort)
Cubic time                      O(n³)             Floyd-Warshall Algorithm
Polynomial time                 n^O(1)            Matrix Multiplication
Exponential time                O(2ⁿ)             Recursive Fibonacci
Factorial time                  O(n!)             Permutations (Brute force)

The most common rates of growth, from fastest (most efficient) to slowest
(least efficient) as the input size 'n' increases, are explained below:

(a) Constant Time: O(1): The algorithm runs in constant time when it
takes the same amount of time to complete, regardless of the size of the
input. This means that the execution time does not grow as the input
size increases. A good example of this is accessing an element in an
array using its index or performing push and pop operations on a stack.
These operations take the same number of steps regardless of the number
of elements involved.

(b) Logarithmic Time: O(log n): An algorithm has logarithmic time


complexity when the running time grows in proportion to the logarithm
of the input size. This typically happens in algorithms that solve
problems by repeatedly dividing the input in half. A classic example is
binary search, which searches for an element in a sorted array by
halving the search range with each step. As the input size increases,
the time required increases very slowly. Even if the input doubles, only
a small and consistent amount of additional time is needed.

(c) Linear Time: O(n): An algorithm runs in linear time if its running time
increases directly in proportion to the size of the input. This means that
if the input size doubles, the time taken will also roughly double. An
example of a linear time algorithm is linear search, where each element
in an unsorted list is examined one by one. Another example is a loop
that iterates through all elements of an array.

(d) Linearithmic (or Log-Linear) Time: O(n log n): When an algorithm's
time complexity grows proportionally to the input size multiplied by the
logarithm of the input size, it is said to run in linearithmic time. This is
commonly seen in efficient sorting algorithms such as merge sort, heap
sort, and the average-case performance of quick sort. These algorithms
are faster than quadratic sorts for large inputs, and their performance
is generally considered optimal for general-purpose sorting. The time
increases more quickly than linear time but far more slowly than
quadratic time, making it an excellent trade-off between performance
and complexity.
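
As a rough sketch of where the n log n behaviour comes from (one possible
implementation, not prescribed by this module), merge sort halves the range
about log n times, and the merging at each level touches all n elements:

#include <vector>
using namespace std;

// Merge the two sorted halves a[lo..mid] and a[mid+1..hi] via a temporary buffer.
void merge(vector<int>& a, int lo, int mid, int hi) {
    vector<int> tmp;
    int i = lo, j = mid + 1;
    while (i <= mid && j <= hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid) tmp.push_back(a[i++]);
    while (j <= hi)  tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); k++) a[lo + k] = tmp[k];
}

// Each call splits its range in half (about log n levels of recursion),
// and merging costs O(n) per level, giving O(n log n) overall.
void mergeSort(vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;              // ranges of size 0 or 1 are already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);
    mergeSort(a, mid + 1, hi);
    merge(a, lo, mid, hi);
}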

(e) Quadratic Time: O(n²): Quadratic time complexity occurs when the
running time increases in proportion to the square of the input size.
This usually happens in algorithms with nested loops, where each loop
runs over the entire input set. Examples include bubble sort, selection
sort, and the worst-case scenario of insertion sort. As the input size
doubles, the running time quadruples, which can make these
algorithms inefficient for large datasets.
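
A short bubble sort sketch makes the nested-loop structure behind O(n²)
visible; for a reverse-sorted input both loops run close to n times, so roughly
n × n comparisons are made:

// Bubble sort: the outer loop makes up to n-1 passes and the inner loop
// up to n-1 comparisons per pass, i.e. about n * n operations in the worst case.
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                int tmp = arr[j];        // swap the adjacent out-of-order pair
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}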

(f) Cubic Time: O(n³): An algorithm has cubic time complexity when its
running time grows in proportion to the cube of the input size. This
often results from having three levels of nested loops, each iterating over
the input data. A common example is the naive implementation of
matrix multiplication. In such cases, doubling the input size increases
the running time by a factor of eight, which becomes computationally
expensive very quickly.
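
The naive matrix multiplication mentioned above can be sketched as three
nested loops over n (a minimal illustration, not an optimized routine): each of
the n² entries of the result needs n multiplications, giving about n³ operations.

#include <vector>
using namespace std;

// Naive n x n matrix multiplication: three nested loops => O(n^3) operations.
vector<vector<int>> multiplyMatrices(const vector<vector<int>>& A,
                                     const vector<vector<int>>& B, int n) {
    vector<vector<int>> C(n, vector<int>(n, 0));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];   // n units of work per output cell
    return C;
}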

(g) Exponential Time: O(2ⁿ): Exponential time complexity describes


algorithms whose running time doubles with each additional element
in the input. These algorithms are typically only practical for very small
input sizes due to their rapid growth in computation time. Brute-force
solutions to complex problems like the Traveling Salesperson Problem
or solving the Towers of Hanoi exhibit exponential behavior. A small
increase in input size leads to a disproportionately large increase in the
time required to execute.
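
One direct way to see O(2ⁿ) growth is brute-force subset enumeration (an
illustrative sketch, not drawn from the examples above): a set of n items has
2ⁿ subsets, so any brute-force method that examines them all must do
exponential work.

#include <vector>
using namespace std;

// Count every subset of items[index..n-1]: each element is either excluded
// or included, so the recursion has 2^n leaves in total.
long long countSubsets(const vector<int>& items, int index) {
    if (index == (int)items.size()) return 1;   // one fully decided subset
    return countSubsets(items, index + 1)       // branch 1: exclude items[index]
         + countSubsets(items, index + 1);      // branch 2: include items[index]
}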

(h) Factorial Time: O(n!): Factorial time complexity means that the
running time increases in proportion to the factorial of the input size.
This type of complexity is even more extreme than exponential and is
generally only acceptable for very small input sizes. Algorithms that
involve generating all possible permutations of a set, such as exhaustive
approaches to the Traveling Salesperson Problem, fall into this
category. The time taken becomes unmanageable extremely quickly,
making such algorithms impractical except for minimal input sizes.
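
A brute-force permutation sweep, of the kind an exhaustive Traveling
Salesperson solver would perform, can be sketched with std::next_permutation
(the function name countPermutations is illustrative); the loop body executes
n! times for n distinct elements.

#include <algorithm>
#include <vector>
using namespace std;

// Visit every ordering of the vector: n! iterations for n distinct elements.
long long countPermutations(vector<int> v) {
    sort(v.begin(), v.end());          // next_permutation requires a sorted start
    long long count = 0;
    do {
        count++;                       // a brute-force TSP would evaluate v here
    } while (next_permutation(v.begin(), v.end()));
    return count;                      // equals n! for n distinct elements
}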

3.5 Asymptotic Analytical Tools and Features


The efficiency of an algorithm depends on the amount of time, storage and
other resources required to execute the algorithm. The efficiency is measured
with the help of asymptotic notations. An algorithm may not have the same
performance for different types of inputs, and its performance will change as
the input size increases. The study of the change in performance of an
algorithm with the change in the order of the input size is defined as asymptotic
analysis.

Asymptotic notations are the mathematical notations used to describe the
running time of an algorithm when the input tends towards a particular value
or a limiting value. For example: In bubble sort, when the input array is
already sorted, the time taken by the algorithm is linear i.e., the best case.
But when the input array is in reverse order, the algorithm takes the
maximum time (quadratic) to sort the elements, i.e., the worst case. When the
input array is neither sorted nor in reverse order, it takes average time.
These durations are denoted using asymptotic notations.

Asymptotic analysis refers to computing the running time of any operation in
mathematical units of computation. For example, the running time of one
operation may be computed as f(n) = n and that of another operation as
g(n) = n². This means the first operation's running time increases linearly
with n, while the running time of the second operation increases quadratically
as n increases. The running times of both operations will be nearly the same
when n is sufficiently small.

There are mainly three asymptotic notations:


✓ Big-O notation
✓ Omega notation
✓ Theta notation

3.5.1 Big-O Notation (O-notation)


Big-O notation represents the upper bound of the running time of an
algorithm. Thus, it gives the worst-case complexity of an algorithm.

Big-O gives the upper bound of a function

For a function g(n), O(g(n)) is given by the relation:


O(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}

The above expression can be described as a function f(n) belongs to the set
O(g(n)) if there exists a positive constant c such that it lies between 0 and cg(n),
for sufficiently large n. For any value of n, the running time of an algorithm
does not cross the time provided by O(g(n)).

Since it gives the worst-case running time of an algorithm, it is widely used
to analyze an algorithm as we are always interested in the worst-case
scenario.
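
As a small worked example of the definition above (one possible choice of
constants), take f(n) = 3n + 2 and g(n) = n. Choosing c = 4 and n0 = 2 gives
0 ≤ 3n + 2 ≤ 4n for all n ≥ 2, so f(n) = O(n).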

3.5.2 Omega Notation (Ω-notation)


Omega notation represents the lower bound of the running time of an
algorithm. Thus, it provides the best-case complexity of an algorithm.

Omega gives the lower bound of a function


For a function g(n), Ω(g(n)) is given by the relation:
Ω(g(n)) = {f(n): there exist positive constants c and n0
such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0}

The above expression can be described as a function f(n) belongs to the set
Ω(g(n)) if there exists a positive constant c such that it lies above cg(n), for
sufficiently large n. For any value of n, the minimum time required by the
algorithm is given by Omega Ω(g(n)).

3.5.3 Theta Notation (Θ-notation)


Theta notation encloses the function from above and below. Since it
represents the upper and the lower bound of the running time of an algorithm,
it is used for analyzing the average-case complexity of an algorithm.

For a function g(n), Θ(g(n)) is given by the relation:


Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0
such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }

Theta bounds the function within constant factors

The above expression can be described as a function f(n) belongs to the set
Θ(g(n)) if there exist positive constants c1 and c2 such that it can be
sandwiched between c1g(n) and c2g(n), for sufficiently large n. If a function f(n)
lies anywhere between c1g(n) and c2g(n) for all n ≥ n0, then g(n) is said to be
an asymptotically tight bound for f(n).
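
Continuing the same worked example, f(n) = 3n + 2 is also Θ(n): choosing
c1 = 3, c2 = 4 and n0 = 2 gives 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so f(n) is
sandwiched between two constant multiples of g(n) = n.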

3.6 Space Complexity and Analysis


Space complexity of an algorithm represents the amount of memory space
required by the algorithm in its life cycle. If an algorithm takes an input of
size n, and requires f(n) additional memory units (apart from input), then its
space complexity is O(f(n)).

Space complexity includes all memory used by the program, including:
▪ Input storage: memory to hold the input data.
▪ Auxiliary space: temporary space or extra variables used by the
algorithm (e.g., counters, temporary arrays).
▪ Function call stack: memory used for recursive function calls.

The space required by an algorithm is equal to the sum of the following two
components:
(a) A fixed part, that is, the space required to store certain data and
variables that are independent of the size of the problem. For example,
simple variables and constants used, algorithm size, etc. The fixed part
includes the space required to store:
▪ Constant variables
▪ Simple data types (e.g., integer, float, boolean)
▪ Static structures whose size doesn't change with input
▪ Program code or compiled instructions

(b) A variable part, that is, the space required to store variables, whose size
depends on the size of the problem. For example, dynamic memory
allocation, recursion stack space, etc. The variable part includes the
space required to store:
▪ The input sizes
▪ Data structures used (arrays, vectors, hash tables)
▪ Dynamic memory allocations
▪ Recursive call stack size

Space complexity S(P) of any algorithm P is S(P) = C + S_P(I), where C is the fixed
part and S_P(I) is the variable part of the algorithm, which depends on instance
characteristic I.

Example1: Summation
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop

Here we have three variables (A, B, and C) and one constant (10). Hence
S(P) = 3 + 1 = 4 memory units. The actual space depends on the data types of
the variables and constants, and is multiplied accordingly.

Example 2: Consider the C++ program below to compute the factorial of a
number:

int factorial(int n) {
    if (n == 0) return 1;
    return n * factorial(n - 1);
}

The Space complexity analysis is shown below:


▪ Fixed part: space for n, return value → C = O(1)
▪ Variable part: recursive calls → S_P(I) = O(n)
▪ Total Space Complexity: S(P) = O(1) + O(n) = O(n)

Example 3: Linear Search


int linearSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key) return i;
    }
    return -1;
}

Space complexity Analysis:


▪ Fixed Part (C): int i, int key → O(1)
▪ Variable Part (S_P(I)): No extra space is allocated beyond the input; the
input array is given and is not counted as auxiliary space
Total Space Complexity = O(1)

Example 4: Reversing an Array Using Extra Space


void reverseArray(int arr[], int n) {
    int* temp = new int[n];   // dynamic memory allocation
    for (int i = 0; i < n; i++) {
        temp[i] = arr[n - i - 1];
    }
    for (int i = 0; i < n; i++) {
        arr[i] = temp[i];
    }
    delete[] temp;
}

Space Complexity Analysis:


▪ Fixed Part (C): Loop variables → O(1)
▪ Variable Part (S_P(I)): int* temp = new int[n] → O(n)
Total Space Complexity = O(n)

3.7 Time Complexity Analysis of an Algorithm


The time complexity of an algorithm represents the amount of time required
by the algorithm to run to completion. It describes how the runtime grows as
the input size increases, rather than the actual time (which depends on
hardware, compiler, etc.).

Time requirements can be defined as a numerical function T(n), where T(n)
can be measured as the number of steps, provided each step consumes
constant time. For example, addition of two n-bit integers takes n steps.
Consequently, the total computational time is T(n) = c ∗ n, where c is the time
taken for the addition of two bits. Here, we observe that T(n) grows linearly as
the input size increases.

The Time Complexity of an algorithm is important for the following reasons:


▪ It helps you predict performance before running the program.
▪ It allows for fair comparison between algorithms regardless of machine
speed.
▪ It ensures your solution is scalable for large inputs.

When analysing the time complexity, the focus should be on:


▪ Loops (each adds a factor to the complexity)
▪ Nested loops (multiply complexities)
▪ Recursive calls (solve recurrence relations like T(n) = T(n-1) + 1)
▪ Operations on data structures (e.g., arrays, stacks, trees)
▪ Ignore constants (e.g., O(2n) is just O(n))
▪ Drop lower-order terms (e.g., O(n + log n) is O(n))
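
As a brief illustration of the recurrence rule in the list above, T(n) = T(n - 1) + 1
with T(0) = 1 expands to T(n) = T(n - 2) + 2 = ... = T(0) + n = n + 1, which after
dropping the constant term is O(n); this matches the linear recursion in the
factorial example of Section 3.6.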

Example 1: Constant Time – O(1)


int getFirstElement(int arr[]) {
    return arr[0];   // takes the same time regardless of array size
}

Explanation:
▪ Only one operation is performed: accessing the first element of the
array.
▪ This takes constant time because it doesn't depend on the size of the
input array.
▪ Whether the array has 1 or 1,000,000 elements, arr[0] is fetched in the
same time.
Time Complexity: O(1)

Example 2: Linear Time – O(n)


int sum(int arr[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += arr[i];
    }
    return total;
}

Explanation:
▪ The loop runs from i = 0 to i = n - 1, meaning n iterations.
▪ In each iteration, one addition operation is performed.
▪ The number of operations increases linearly with the size of the input
array.
Time Complexity: O(n)

Example 3: Quadratic Time – O(n²)


void printPairs(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            cout << arr[i] << ", " << arr[j] << endl;
        }
    }
}

Explanation:
▪ There are two nested loops, each running n times.
▪ For each i, the inner loop executes n times, resulting in n × n = n²
iterations.
▪ The number of operations grows quadratically as the input size
increases.
Time Complexity: O(n²)

Example 4: Logarithmic Time – O(log n)


int binarySearch(int arr[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (arr[mid] == target) return mid;
        else if (arr[mid] < target) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}

Explanation:
▪ In each iteration, the search space is cut in half.

▪ If the array has 16 elements, binary search performs at most log₂(16) =
4 comparisons.
▪ This is called logarithmic time because the number of steps is
proportional to the logarithm of the input size n.
▪ Binary search assumes that the array is already sorted.
Time Complexity: O(log n)

3.8 Algorithmic Trade-offs (Resources)


A trade-off is a situation where one thing increases and another thing
decreases. An algorithmic trade-off is a way to solve a problem either in less
time by using more space, or in very little space by spending more time. The
best algorithm is the one that solves a problem using little space in memory
while also taking little time to generate the output. In general, however, it is
not always possible to achieve both of these conditions at the same time. Types
of space-time trade-offs include the following:

(a) Compressed or Uncompressed data: A space-time trade-off can be
applied to the problem of data storage. If data is stored uncompressed,
it takes more space but less time. If the data is stored compressed, it
takes less space but more time to run the decompression algorithm.
There are instances where it is possible to work directly with
compressed data; compressed bitmap indices are one example, where
it is faster to work with the compressed form than without compression.
(b) Re-rendering or Stored images: Storing only the source and re-rendering
the image each time takes less space but more time, whereas storing the
rendered image in a cache is faster than re-rendering but requires more
space in memory.
(c) Smaller code or loop unrolling: Smaller code occupies less space in
memory but requires extra computation time for jumping back to the
beginning of the loop at the end of each iteration. Loop unrolling can
optimize execution speed at the cost of increased binary size: it occupies
more space in memory but requires less computation time.
(d) Lookup tables or Recalculation: An implementation can include the
entire precomputed lookup table, which reduces computing time but
increases the amount of memory needed, or it can recalculate table
entries as they are needed, increasing computing time but reducing
memory requirements; a sketch of this trade-off follows below.
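
A minimal sketch of the lookup-table trade-off in (d) (the function names and
the factorial example are illustrative): precomputing the table spends O(n)
memory once so that each later query is O(1), while recalculating on demand
keeps memory at O(1) but pays O(n) time per query.

#include <vector>
using namespace std;

// Option 1: build the table once -- O(n) extra memory, O(1) time per later lookup.
// (n should stay at or below 20, since 21! overflows a 64-bit integer.)
vector<long long> buildFactorialTable(int n) {
    vector<long long> table(n + 1, 1);
    for (int i = 1; i <= n; i++) table[i] = table[i - 1] * i;
    return table;
}

// Option 2: recompute on every call -- O(1) extra memory, O(n) time per query.
long long factorialOnDemand(int n) {
    long long result = 1;
    for (int i = 2; i <= n; i++) result *= i;
    return result;
}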

3.9 Assignment on Time and Space Complexities


Exercise 1: What is the time and space complexity of this function?
int getLastElement(int arr[], int n) {
    return arr[n - 1];
}

Exercise 2: Compute the time and space complexity in C++ program below.
How does time complexity change with the size of the array? What about space
complexity?
int findMax(int arr[], int n) {
    int maxVal = arr[0];
    for (int i = 1; i < n; i++) {
        if (arr[i] > maxVal)
            maxVal = arr[i];
    }
    return maxVal;
}

Exercise 3: How many function calls are made for Fibonacci(5)? What is the
space complexity due to recursion?

int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}
