
Chapter 1: Algorithmic Complexity

1.1. Review of an Algorithm

The word "algorithm" derives from the name of the scientist Al-Khwarizmi (see Figure 1). An algorithm is a step-by-step set of instructions that must be followed to accomplish a specific task. It defines the logic and the sequence of operations needed to solve a problem. It is crucial to understand that an algorithm does not constitute the entire program or code for a problem. Instead, it serves as the underlying logic used to solve the problem. It can be expressed as pseudo-code or visualized through a flowchart.

Al-Khwarizmi (‫ )الخوارزمي‬— Arabic name (164–232 AH / 780–850)
→ Alghoarismi / Algorismus — Latin translation, 12th–15th century
→ Algorism / Algorithme — 17th century
→ algorithm — current English word

Figure 1: Evolution of the word "algorithm"

1.2. Qualities and Characteristics of an Algorithm

The qualities and characteristics of an algorithm are the metrics by which we evaluate its quality. Correctness, efficiency, simplicity, and robustness are the basic indicators of a well-designed algorithm:

 Correctness is a key quality in algorithms. It means the algorithm consistently gives the right output for valid inputs, building trust. It ensures the algorithm reliably solves the problem, even when it is complex.

 Efficiency measures the algorithm's ability to execute tasks with speed and economy. An
efficient algorithm can process vast datasets, make real-time decisions, and optimize resources,
all while minimizing time and computational resources.

 However, simplicity should not be sacrificed for efficiency. Simplicity is the art of keeping
algorithms clear, understandable, and maintainable. It ensures that the algorithm's logic remains
transparent, making it easier to troubleshoot, adapt, and enhance over time.

 Robustness is another characteristic that distinguishes great algorithms. Robust algorithms handle unexpected input or exceptional conditions, ensuring they do not break or produce incorrect results.

Figure 2: Important indicators of algorithm quality: correctness, efficiency, simplicity, and robustness

Example: Calculate the sum of all elements in a list of integers.

Algorithm CorrectAlgorithm:
    Input: List of integers nums
    Output: Sum of all elements in nums

    result = 0
    for i from 0 to n-1:        # correct loop range over all n elements
        result = result + nums[i]
    return result

Here are four versions of pseudo-code for the same problem, each emphasizing the absence of one of the
four key qualities: correctness, efficiency, simplicity, and robustness.

Version 1: Lacking Correctness (Incorrect Algorithm)

Algorithm IncorrectAlgorithm:
    Input: List of integers nums
    Output: Sum of all elements in nums

    result = 0
    for i from 1 to n:          # incorrect loop range
        result = result + nums[i]
    return result

Version 2: Lacking Efficiency (Inefficient Algorithm)

Algorithm InefficientAlgorithm:
    Input: List of integers nums
    Output: Sum of all elements in nums

    result = 0
    for i from 0 to n-1:
        for j from 0 to n-1:    # nested loop, highly inefficient
            if i = j:
                result = result + nums[j]
    return result

Version 3: Lacking Simplicity (Complex Algorithm)

Algorithm ComplexAlgorithm:
    Input: List of integers nums
    Output: Sum of all elements in nums

    result = 0
    i = 0
    while i <= n-1:
        ind = i                 # unnecessary intermediate variables ind, x and temp
        x = nums[ind]
        temp = x + result
        result = temp
        i = ind + 1
    return result

Version 4: Lacking Robustness (Non-Robust Algorithm)

Algorithm NonRobustAlgorithm:
    Input: List of integers nums
    Output: Average of all elements in nums

    sum = 0
    for i from 0 to n-1:
        sum = sum + nums[i]
    result = sum / n            # division by zero if nums is empty
    return result

These versions illustrate the consequences of missing each of the four key qualities in algorithm design when calculating the sum of all elements in a list of integers: incorrectness, inefficiency, unnecessary complexity, and a lack of robustness.
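As an illustration, the correct version above can be translated into a short Python sketch that also adds the robustness the last version lacks (the function name robust_sum is ours, not part of the chapter):

```python
def robust_sum(nums):
    """Sum a list of integers, guarding against invalid input."""
    if nums is None:
        raise ValueError("nums must be a list, not None")
    result = 0
    for x in nums:          # correct range: every element is visited exactly once
        result += x
    return result

print(robust_sum([1, 2, 3, 4]))   # 10
print(robust_sum([]))             # 0 (empty list handled gracefully)
```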

1.3. Definition of Algorithmic Complexity

Algorithmic complexity, also known as computational complexity, is a fundamental concept in computer science that deals with the analysis and measurement of the resources required by an algorithm to solve a specific problem. It provides a quantitative understanding of how the efficiency and performance of an algorithm scale with the size of the input data. In essence, it allows us to evaluate and compare algorithms based on their efficiency in terms of time and space. Algorithmic complexity answers questions like "How long does it take for an algorithm to execute as the input size grows?" and "How much memory or storage space does it consume?" By quantifying these aspects, we can make informed decisions about which algorithm to choose for a particular task, taking into account factors such as speed, memory usage, and computational resources.

 Time Complexity: Time complexity is a function of input size n that refers to the amount of
time needed by an algorithm to run to completion.

 Space Complexity: The space complexity can be understood as the amount of space required by
an algorithm to run to completion.

Example:

We explore two distinct algorithms for calculating the sum of 'n' real numbers, each with its own
approach to space efficiency. The first algorithm utilizes a single variable to store the cumulative sum
without any additional memory overhead. This results in constant space complexity, making it suitable
for scenarios where memory efficiency is paramount. The second algorithm employs an array to store
the input real numbers, providing flexibility in handling the input but at the cost of increased memory
usage. This approach exhibits linear space complexity as the size of the array grows with 'n.'

Algorithm 1: Calculating Sum with a Single Variable (Constant Space Complexity - O(1))

Algorithm SumOfRealNumbersSingleVar:
    Input: Real numbers r1, r2, ..., rn
    Output: Sum of the real numbers

    sum = 0                 # initialize a variable to store the sum
    for i from 1 to n:
        read(x)             # read the next real number
        sum = sum + x       # add the real number to the sum
    return sum

Algorithm 2: Calculating Sum with an Array (Linear Space Complexity - O(n))

Algorithm SumOfRealNumbersWithArray:
    Input: Real numbers r1, r2, ..., rn
    Output: Sum of the real numbers

    sum = 0                     # initialize a variable to store the sum
    for i from 1 to n:
        read(reals[i])          # read the next real number into the array
        sum = sum + reals[i]    # add each real number from the array to the sum
    return sum

These pseudocode examples demonstrate two approaches to solving the problem while considering
space complexity and memory efficiency.
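The two strategies can be sketched in Python, reading from a list instead of interactive input (the function names are ours, chosen for illustration):

```python
def sum_single_var(stream):
    """O(1) extra space: keep only the running total."""
    total = 0.0
    for x in stream:        # each value is consumed and discarded
        total += x
    return total

def sum_with_array(stream):
    """O(n) extra space: store every value before summing."""
    reals = list(stream)    # the array grows with n
    total = 0.0
    for x in reals:
        total += x
    return total

data = [1.5, 2.5, 3.0]
print(sum_single_var(data))   # 7.0
print(sum_with_array(data))   # 7.0
```

Both functions return the same sum; only the memory they hold at any moment differs.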

1.4. Theoretical Complexity and Empirical Analysis of Algorithms

Theoretical Complexity Analysis is a mathematical approach used to understand how algorithms behave
in an abstract sense. It allows us to describe and compare algorithms based on their efficiency without
the need for actual execution. In contrast, empirical analysis takes a practical approach. It involves
running algorithms on real computers using actual data and measuring their real-world performance. It's
akin to test-driving a car on real roads, considering hardware, data size, and practical considerations.

Table 1: Algorithmic Analysis: Theoretical vs. Empirical

Aspect                | Theoretical Complexity   | Empirical Analysis of Algorithms
Definition            | Mathematical approach    | Practical approach
Focus                 | Abstract behavior        | Real-world performance
Input Size Prediction | Predicts growth patterns | Observes real-time behavior
Notation              | Big O notation           | Measurement and metrics
Use Case              | Algorithm selection      | Performance optimization
Considerations        | Idealized scenarios      | Real-world factors
Precision             | General trends           | Hardware and data-dependent
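A minimal empirical measurement in Python illustrates the right-hand column; the exact timings are hardware- and data-dependent, as the table notes (measure and linear_sum are our own illustrative names):

```python
import time

def measure(f, arg):
    """Return the wall-clock time of one call to f(arg)."""
    start = time.perf_counter()
    f(arg)
    return time.perf_counter() - start

def linear_sum(n):
    """Sum the integers 0..n-1 with a simple O(n) loop."""
    total = 0
    for i in range(n):
        total += i
    return total

# Empirically, a 10x larger input should take roughly 10x longer for O(n) code.
t_small = measure(linear_sum, 100_000)
t_large = measure(linear_sum, 1_000_000)
print(f"n=100000: {t_small:.4f}s, n=1000000: {t_large:.4f}s")
```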

1.5. Types of Time Complexity Analysis

Time complexity analysis can be categorized into three main types:

a) Worst-case time complexity:


This type of analysis centers on determining the maximum time required for an algorithm to execute when given an input of size 'n'. In essence, it defines a function representing the utmost number of steps an algorithm performs for an input of size 'n'. Computer scientists are usually most interested in this type of analysis.

b) Average case time complexity:
When dealing with an input size 'n,' the average case time complexity provides an estimation of the
typical time an algorithm requires for execution. It is represented as a function describing the average
number of steps an algorithm takes across various potential inputs of size 'n.'

c) Best-case time complexity:


The best-case time complexity determines the minimum time an algorithm needs to complete its
execution. It is expressed as a function denoting the fewest number of steps an algorithm performs for an
input of size 'n.'

Example: Consider an unordered array of n elements, and the problem is to find a given element within this array. In the best case the element sits in the first position (one comparison); in the worst case it sits in the last position or is absent (n comparisons); on average, about n/2 comparisons are needed.
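A Python sketch of this linear search, counting comparisons to make the three cases concrete:

```python
def linear_search(arr, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

arr = [7, 3, 9, 1, 5]
print(linear_search(arr, 7))   # (0, 1)  best case: first element
print(linear_search(arr, 5))   # (4, 5)  worst case: last element
print(linear_search(arr, 8))   # (-1, 5) worst case: element absent
```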

1.6. Asymptotic Notations

Asymptotic analysis simplifies the evaluation of algorithm efficiency by focusing on how it behaves with large inputs. Resources are expressed as functions of the input size, but we reduce them to their dominant terms for simplicity. The simplest example is the function f(n) = n² + 3n: the term 3n becomes insignificant compared to n² when n is very large. The function f(n) is said to be asymptotically equivalent to n² as n → ∞, written symbolically as f(n) ~ n².

These asymptotic notations provide a concise way to estimate algorithm complexity without the need for
extensive cost analysis. This is essential for defining algorithm efficiency and simplifying performance
comparisons between algorithms.

a) Big O notation
Big O (capital letter O, not a zero) is the formal method to gauge the maximum amount of time that an algorithm may consume when processing inputs of varying sizes.

For the formal definition, suppose f(n) and g(n) are two functions defined on some subset of the real numbers. We write

f(n) = O(g(n))
(or f(n) = O(g(n)) for n → ∞ to be more precise)

if and only if there exist constants n0 and c such that

|f(n)| ≤ c |g(n)| for all n > n0

Figure 3: Big O notation, f(n) = O(g(n))
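A small numeric check of the definition for f(n) = n² + 3n and g(n) = n², using c = 2 and n0 = 3 (our choice of constants; any c ≥ 2 with a suitable n0 works, since n² + 3n ≤ 2n² exactly when n ≥ 3):

```python
def f(n):
    return n * n + 3 * n

def g(n):
    return n * n

c, n0 = 2, 3
# Verify |f(n)| <= c * |g(n)| for all n > n0 over a sample range.
ok = all(abs(f(n)) <= c * abs(g(n)) for n in range(n0 + 1, 1000))
print(ok)   # True, so f(n) = O(n^2) with these witnesses
```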

b) Typical classes of algorithm complexities


We now take a look at the different classes of algorithm complexity. Here is a list of function classes commonly encountered when analyzing algorithms. The slower-growing functions are listed first; c is an arbitrary constant.

Table 2: List of common classes of complexity

Notation    | Name
O(1)        | constant
O(log(n))   | logarithmic
O(n)        | linear
O(n log(n)) | linearithmic
O(n²)       | quadratic
O(nᶜ)       | polynomial
O(cⁿ)       | exponential

Figure 4: Visual representation of used functions in complexity

i. Constant Complexity
It imposes a complexity of O(1). The algorithm executes a constant number of steps (e.g., 1, 5, or 10) to solve a given problem; the count of operations is independent of the input data size.

ii. Logarithmic Complexity


This complexity class involves an algorithm executing approximately log(n) steps. It is commonly employed for operations on n elements, often using a logarithm of base 2. For n = 1 000 000, an algorithm with a complexity of O(log(n)) performs about 20 steps (up to a constant factor).
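The logarithmic behavior can be observed by repeatedly halving n, as binary search does; the loop below counts the halving steps:

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 0 (~ log2(n))."""
    steps = 0
    while n > 0:
        n //= 2
        steps += 1
    return steps

print(halving_steps(1_000_000))   # 20, since 2^19 <= 1_000_000 < 2^20
```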

iii. Linear Complexity


Linear complexity is a complexity of O(n): the number of steps is proportional to the number of elements being processed. For example, if there are 500 elements, the algorithm takes on the order of 500 steps. In linear complexity, the number of steps depends linearly on the number of elements; for n elements, the step count may be, for instance, n/2 or 3*n.

iv. Linearithmic Complexity


Linearithmic complexity imposes a running time of O(n log(n)): the algorithm executes on the order of n*log(n) operations on n elements to solve the given problem. For 1000 elements, a linearithmic algorithm executes about 10 000 steps (1000 × log₂(1000) ≈ 1000 × 10).

v. Quadratic Complexity
It imposes a complexity of O(n²). For an input of size n, the algorithm performs on the order of n² operations to solve the problem. If n = 100, it executes about 10 000 steps. In other words, whenever the number of operations has a quadratic relation with the input size, the result is quadratic complexity. For example, for n elements, the step count may be on the order of 3*n²/2.

vi. Polynomial Complexity


In this complexity class, the algorithm's execution involves a number of operations that is a polynomial function of the input size n, denoted O(nᶜ). Here, c represents a positive constant, and the algorithm's cost grows as the input raised to a fixed power. For instance, when c = 2 the algorithm has quadratic complexity (O(n²)), while c = 3 results in cubic complexity (O(n³)). The polynomial complexity class is characterized by its sensitivity to the power of the input size, yet it remains tractable for a wide range of practical scenarios. For example, with 100 elements and c = 4, the algorithm executes 100 000 000 steps.

vii. Exponential Complexity


Exponential complexity represents the class of algorithms whose execution time grows exponentially, e.g. O(2ⁿ), with the size of the input n. In this case, the number of operations increases as 2 raised to the power of n. For example, if n = 10, the exponential function yields 1 024. Similarly, if n = 20 it yields 1 048 576, and if n = 100 it yields a 31-digit number.
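Exponential growth appears naturally when enumerating all subsets of a set of n elements, since there are 2ⁿ of them (all_subsets is our illustrative helper):

```python
from itertools import combinations

def all_subsets(items):
    """Generate every subset of items: 2^n subsets for n elements."""
    subsets = []
    for k in range(len(items) + 1):
        subsets.extend(combinations(items, k))
    return subsets

print(len(all_subsets(range(10))))   # 1024 = 2^10
```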

1.7. Complexity Calculation

a) Complexity approximation
Calculating the exact complexity is not practical given the quantity of instructions in most programs, nor is it useful for comparing two algorithms. Therefore, we settle for an approximate value. To achieve this, we use a series of approximations:

 First approximation: We typically focus on worst-case complexity.

 Second approximation: We simplify the complexity to its general form.

 Third approximation: We analyze only the asymptotic behavior of the complexity.

b) Input Size
It is necessary to assess the size of the data required by the algorithm; these data are referred to as inputs or instances. The size 'n' depends on the encoding of these inputs. In practice, we choose the most significant dimensions as the size. We can cite some examples:

 Numbers: the numbers themselves

 Polynomials: degree, number of coefficients

 m x n matrices: max(m, n), m.n, m+n

 Graphs: order, size, product of the two

 Trees: like graphs and height, arity

 Lists, arrays, files: number of cells, elements

 Words: their length


The choice of a particular data structure significantly impacts the efficiency of an algorithm.

c) Rules
For calculating the complexity of an algorithm, the following rules are used:

 Any elementary operation has O(1) complexity, because its execution time is constant.

 The complexity of a sequence of two blocks of instructions B1 (of complexity O(f1(n))) and B2 (of complexity O(f2(n))) is the greater of the two complexities: O(max(f1(n), f2(n))).

 The complexity of a conditional statement (If cond Then B1 Else B2) is, in the worst case, the maximum of the complexities of the three parts of the conditional (cond, B1, and B2).

 The complexity of a loop is the sum, over all iterations, of the complexity of the loop body.
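Applying the loop rule twice, a doubly nested loop over n elements executes its O(1) body n × n times; a quick Python count confirms this (an illustrative sketch):

```python
def count_nested_steps(n):
    """Count executions of the innermost body of two nested loops."""
    steps = 0
    for i in range(n):          # outer loop: n iterations
        for j in range(n):      # inner loop: n iterations each
            steps += 1          # elementary operation: O(1)
    return steps

print(count_nested_steps(100))   # 10000 = 100 * 100, i.e. O(n^2)
```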
