Ch1 Algorithm Complexity
The word "algorithm" derives from the name of the Muslim scholar Al-Khwarizmi (see Figure 1). An algorithm is a step-by-step set of instructions that must be followed to accomplish a specific task. It defines the logic and the sequence of operations needed to solve a problem. It is crucial to understand that an algorithm does not constitute the entire program or code for a problem; rather, it serves as the underlying logic used to solve it. It can be expressed as pseudo-code or visualized through a flowchart.
Figure 1: Evolution of the word "algorithm": the Arabic name Al-Khwarizmi (الخوارزمي, 164–232 h / 780–850) was rendered in the Latin translations of the 12th–15th centuries as Alghoarismi / algorismus, giving Algorism / Algorithme in the 17th century, and finally the current English word "algorithm".
The qualities and characteristics of an algorithm are the metrics by which we evaluate its excellence. Correctness, efficiency, simplicity and robustness are the basic indicators of a well-designed algorithm:
Correctness is a key quality in algorithms. It means the algorithm consistently gives the right
output for valid inputs, building trust. It ensures the algorithm reliably solves the problem, even
when it's complex.
Efficiency measures the algorithm's ability to execute tasks with speed and economy. An
efficient algorithm can process vast datasets, make real-time decisions, and optimize resources,
all while minimizing time and computational resources.
However, simplicity should not be sacrificed for efficiency. Simplicity is the art of keeping algorithms clear, understandable, and maintainable. It ensures that the algorithm's logic remains transparent, making it easier to troubleshoot, adapt, and enhance over time.
Finally, robustness is the algorithm's ability to behave sensibly on unexpected or edge-case inputs (an empty list, invalid values) instead of failing unpredictably.
Figure 2: Important indicators of an algorithm's quality: correctness, efficiency, simplicity and robustness.
Algorithm CorrectAlgorithm:
  Input: List of integers nums (of size n)
  Output: Sum of all elements in nums
  result = 0
  for i from 0 to n-1:    # Correct loop range
    result = result + nums[i]
  return result
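As a sanity check, the correct pseudocode above translates directly into Python (a sketch; the name `correct_sum` is ours):

```python
def correct_sum(nums):
    """Return the sum of all elements, visiting every valid index exactly once."""
    result = 0
    for i in range(len(nums)):  # i runs from 0 to n-1, the correct range
        result = result + nums[i]
    return result
```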
Here are four versions of pseudo-code for the same problem, each emphasizing the absence of one of the
four key qualities: correctness, efficiency, simplicity, and robustness.
Algorithm IncorrectAlgorithm:
  Input: List of integers nums (of size n)
  Output: Sum of all elements in nums
  result = 0
  for i from 1 to n:    # Incorrect loop range
    result = result + nums[i]
  return result
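In Python, where list indices run from 0 to n-1, the incorrect loop range above both skips the first element and runs past the end of the list (a sketch; `incorrect_sum` is our name):

```python
def incorrect_sum(nums):
    """Buggy: iterates over indices 1..n, skipping index 0 and overrunning the list."""
    result = 0
    for i in range(1, len(nums) + 1):
        result = result + nums[i]  # raises IndexError when i == len(nums)
    return result
```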
Algorithm InefficientAlgorithm:
  Input: List of integers nums (of size n)
  Output: Sum of all elements in nums
  result = 0
  for i from 0 to n-1:
    for j from 0 to n-1:    # Nested loop, highly inefficient
      if i = j then result = result + nums[j]
  return result
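A Python sketch of the inefficient version makes the waste visible: the result is still correct, but the nested loop performs n·n iterations where n would suffice (the name `inefficient_sum` is ours):

```python
def inefficient_sum(nums):
    """Correct result, but the nested loop performs n * n iterations."""
    result = 0
    n = len(nums)
    for i in range(n):
        for j in range(n):
            if i == j:  # true only n times out of n * n iterations
                result = result + nums[j]
    return result
```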
Algorithm ComplexAlgorithm:
  Input: List of integers nums (of size n)
  Output: Sum of all elements in nums
  result = 0
  i = 0
  while i <= n-1:
    ind = i             # Unnecessary intermediate variables ind, x and temp
    x = nums[ind]
    temp = x + result
    result = temp
    i = ind + 1
  return result
Algorithm NonRobustAlgorithm:
  Input: List of integers nums (of size n)
  Output: Average of all elements in nums
  sum = 0
  for i from 0 to n-1:
    sum = sum + nums[i]
  result = sum / n    # Division by zero if nums is empty
  return result
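A robust variant in Python guards against the empty-list case before dividing; raising an explicit error rather than returning a default is one possible design choice (the name `robust_average` is ours):

```python
def robust_average(nums):
    """Average of nums; rejects the empty list instead of dividing by zero."""
    if len(nums) == 0:
        raise ValueError("cannot average an empty list")
    total = 0
    for x in nums:
        total = total + x
    return total / len(nums)
```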
These versions illustrate the consequences of missing one of the four key qualities in algorithm design when calculating the sum of all elements in a list of integers: incorrectness, inefficiency, unnecessary complexity, and a lack of robustness.
Time Complexity: Time complexity is a function of input size n that refers to the amount of
time needed by an algorithm to run to completion.
Space Complexity: The space complexity can be understood as the amount of space required by
an algorithm to run to completion.
Example:
We explore two distinct algorithms for calculating the sum of 'n' real numbers, each with its own
approach to space efficiency. The first algorithm utilizes a single variable to store the cumulative sum
without any additional memory overhead. This results in constant space complexity, making it suitable
for scenarios where memory efficiency is paramount. The second algorithm employs an array to store
the input real numbers, providing flexibility in handling the input but at the cost of increased memory
usage. This approach exhibits linear space complexity as the size of the array grows with 'n.'
Algorithm SumOfRealNumbersSingleVar:
  Input: Real numbers r1, r2, ..., rn
  Output: Sum of the real numbers
  sum = 0
  for i from 1 to n:
    read(r)          # Read the next real number into a single variable
    sum = sum + r    # Accumulate the sum; r is reused at each iteration
  return sum

Algorithm SumOfRealNumbersWithArray:
  Input: Real numbers r1, r2, ..., rn
  Output: Sum of the real numbers
  sum = 0
  for i from 0 to n-1:
    read(reals[i])          # Read the next real number into the array
    sum = sum + reals[i]    # Add each real number from the array to the sum
  return sum
These pseudocode examples demonstrate two approaches to solving the problem while considering
space complexity and memory efficiency.
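The same contrast can be sketched in Python: the first function keeps only a running total (constant extra space), while the second materializes all n values in a list first (linear extra space). The function names are ours:

```python
def sum_single_var(numbers):
    """O(1) extra space: only the running total is kept in memory."""
    total = 0.0
    for x in numbers:  # consumes values one at a time
        total += x
    return total

def sum_with_array(numbers):
    """O(n) extra space: all n values are stored before summing."""
    reals = list(numbers)  # the array grows with n
    total = 0.0
    for x in reals:
        total += x
    return total
```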
Theoretical Complexity Analysis is a mathematical approach used to understand how algorithms behave
in an abstract sense. It allows us to describe and compare algorithms based on their efficiency without
the need for actual execution. In contrast, empirical analysis takes a practical approach. It involves
running algorithms on real computers using actual data and measuring their real-world performance. It's
akin to test-driving a car on real roads, considering hardware, data size, and practical considerations.
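A minimal sketch of empirical analysis in Python, using the standard-library `time.perf_counter` clock; taking the best of several runs is one common way to reduce measurement noise (the helper name `empirical_time` is ours):

```python
import time

def empirical_time(fn, data, repeats=5):
    """Return the best-of-N wall-clock time of fn(data), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(data)
        elapsed = time.perf_counter() - start
        best = min(best, elapsed)  # best-of-N reduces measurement noise
    return best
```

Measured times depend on the machine, the interpreter, and the data, which is precisely why theoretical analysis is needed for machine-independent comparisons.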
b) Average case time complexity:
When dealing with an input size 'n,' the average case time complexity provides an estimation of the
typical time an algorithm requires for execution. It is represented as a function describing the average
number of steps an algorithm takes across various potential inputs of size 'n.'
Example: Consider an unordered array with n elements, and the problem is to find a given element within this array. If the element is equally likely to be at any position, the search examines about n/2 elements on average, so the average case time complexity is O(n).
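A Python sketch of this linear search (the name `linear_search` is ours); in the worst case all n elements are compared, and on average about half of them:

```python
def linear_search(nums, target):
    """Scan an unordered list; return the index of target, or -1 if absent."""
    for i in range(len(nums)):
        if nums[i] == target:
            return i  # found after i + 1 comparisons
    return -1         # worst case: all n elements were compared
```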
Asymptotic analysis simplifies the evaluation of algorithm efficiency by focusing on how it behaves with large inputs. Resources are expressed as functions of the input size, but we reduce them to the dominant terms for simplicity. The simplest example is the function f(n) = n² + 3n: the term 3n becomes insignificant compared to n² when n is very large. The function f(n) is said to be asymptotically equivalent to n² as n → ∞, written symbolically as f(n) ~ n².
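This can be checked numerically: the ratio f(n) / n² approaches 1 as n grows, confirming that the 3n term becomes negligible:

```python
def f(n):
    return n**2 + 3 * n

# The ratio f(n) / n**2 tends to 1 as n grows, so the 3n term is negligible
ratios = [f(n) / n**2 for n in (10, 1_000, 1_000_000)]
```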
These asymptotic notations provide a concise way to estimate algorithm complexity without the need for
extensive cost analysis. This is essential for defining algorithm efficiency and simplifying performance
comparisons between algorithms.
a) Big O notation
Big O (capital letter O, not a zero) is the formal method to gauge the maximum amount of time that an algorithm may consume when processing inputs of varying sizes.
For the formal definition, suppose f(n) and g(n) are two functions defined on some subset of the real numbers. We write
f(n) = O(g(n))
(or f(n) = O(g(n)) for n → ∞ to be more precise)
if and only if there exist constants c > 0 and n0 such that
|f(n)| ≤ c · |g(n)| for all n ≥ n0.
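The definition can be witnessed with concrete constants. For f(n) = n² + 3n and g(n) = n², the choice c = 2 and n0 = 3 works, since n² + 3n ≤ 2n² is equivalent to 3n ≤ n², i.e. n ≥ 3:

```python
def f(n):
    return n**2 + 3 * n

def g(n):
    return n**2

# Witness constants for f(n) = O(g(n)): c = 2 and n0 = 3,
# because n**2 + 3n <= 2*n**2  <=>  3n <= n**2  <=>  n >= 3.
c, n0 = 2, 3
ok = all(f(n) <= c * g(n) for n in range(n0, 10_000))
```

Note that the bound only needs to hold from n0 onward: at n = 2, f(2) = 10 already exceeds 2·g(2) = 8.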
notation      name
O(1)          constant
O(log n)      logarithmic
O(n)          linear
O(n²)         quadratic
O(nᶜ)         polynomial
O(cⁿ)         exponential
Figure 4: Visual representation of the common growth functions used in complexity analysis
i. Constant Complexity
It imposes a complexity of O(1). The algorithm executes a constant number of steps (1, 5, 10, etc.) to solve a given problem; the count of operations is independent of the input data size.
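Two minimal Python examples of O(1) operations (the function names are ours): each performs a fixed number of steps no matter how large the input is.

```python
def get_first(nums):
    """O(1): a single index operation, regardless of the size of nums."""
    return nums[0]

def is_even(n):
    """O(1): a fixed number of steps for any value of n."""
    return n % 2 == 0
```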
v. Quadratic Complexity
It imposes a complexity of O(n²). For an input of size n, the algorithm performs on the order of n² operations on the n elements to solve a given problem. If n = 100, it will take 10 000 steps. In other words, whenever the number of operations has a quadratic relation to the input data size, the result is quadratic complexity. For example, for n elements, the step count may be on the order of 3n²/2.
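A Python sketch counting the iterations of a doubly nested loop confirms the n² growth; with n = 100 it performs exactly the 10 000 steps mentioned above (the name `count_operations` is ours):

```python
def count_operations(n):
    """Count the iterations of a doubly nested loop over n elements."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1  # executed n * n times
    return steps
```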
a) Complexity approximation
Calculating the exact complexity is not reasonable given the quantity of instructions in most programs
and is not useful for comparing two algorithms. Therefore, we will settle for an approximate value. To
achieve this, we use a series of approximations:
b) Input Size
It is necessary to assess the size of the data required by the algorithm; these data are referred to as inputs or instances. The size n depends on the encoding of these inputs. In practice, we choose the most significant dimensions as the size, for example the number of elements of an array, the number of rows and columns of a matrix, or the number of vertices and edges of a graph.
c) Rules
For calculating the complexity of an algorithm, the following rules are used:
• Any elementary operation has O(1) complexity, because its execution time is constant.
• The complexity of a sequence of two blocks of instructions B1 (of complexity O(f1(n))) and B2 (of complexity O(f2(n))) is the greater of the two complexities: O(max(f1(n), f2(n))).
• The complexity of a conditional statement (If cond Then B1 Else B2) is, in the worst case, the maximum of the complexities of the three parts of the conditional (cond, B1 and B2).
• The complexity of a loop is the sum, over all iterations, of the complexity of the loop body.
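These rules can be applied to a small Python function (a sketch; the name `total_and_head` is ours), with the derivation written as comments next to each construct:

```python
def total_and_head(nums):
    total = 0                 # elementary operation: O(1)
    if len(nums) > 0:         # conditional: max of cond and then-branch, both O(1)
        total = nums[0]
    for x in nums:            # loop: n iterations of an O(1) body -> O(n)
        total = total + x
    return total              # sequence rule: max(O(1), O(1), O(n)) = O(n)
```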