Commit 58538d6

Merge branch 'main' into main

2 parents 1f4f3ab + 33ef1e2

18 files changed: +846 -2 lines changed

contrib/ds-algorithms/index.md (+1)
```diff
@@ -1,5 +1,6 @@
 # List of sections
 
+- [Time & Space Complexity](time-space-complexity.md)
 - [Queues in Python](Queues.md)
 - [Graphs](graph.md)
 - [Sorting Algorithms](sorting-algorithms.md)
```
contrib/ds-algorithms/time-space-complexity.md (+243)

@@ -0,0 +1,243 @@
# Time and Space Complexity

We can solve a problem using one or more algorithms. It's essential to learn how to compare the performance of different algorithms and select the best one for a specific task.

Therefore, we need a method for comparing solutions in order to judge which one is more optimal.

The method must be:

- Independent of the machine and its configuration on which the algorithm runs.
- Directly related to the number of inputs.
- Able to distinguish between two algorithms clearly and precisely.

Two such methods used to analyze algorithms are `time complexity` and `space complexity`.

## What is Time Complexity?

Time complexity measures the _number of operations an algorithm performs in proportion to the size of the input_. It lets us investigate how the algorithm's performance scales as the input size grows. Note that **_time complexity does not refer to the wall-clock time a machine takes to execute a particular piece of code_**.
## Order of Growth and Asymptotic Notations
20+
21+
The Order of Growth explains how an algorithm's space or running time expands as the amount of the input does. This increase is described via asymptotic language, such Big O notation, which concentrates on the dominating term as the input size approaches infinity and is independent of lower-order terms and machine-specific constants.
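
For example, suppose an algorithm performs `T(n) = 5n^2 + 3n + 20` basic operations on an input of size `n` (an illustrative operation count, not tied to a specific algorithm). As `n` grows, the quadratic term dominates the lower-order term and the constants, so the order of growth is quadratic:

$$
T(n) = 5n^2 + 3n + 20 = O(n^2)
$$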

### Common Asymptotic Notations

1. `Big Oh (O)`: Describes the upper bound of an algorithm's running time, i.e., the worst-case scenario.
2. `Big Omega (Ω)`: Describes the lower bound, i.e., the best-case scenario.
3. `Big Theta (Θ)`: Describes both the upper and lower bounds, giving a tight bound on the running time.

### 1. Big Oh (O) Notation

Big O notation describes how an algorithm behaves as the input size approaches infinity and provides an upper bound on the time or space complexity of the algorithm. It lets developers and computer scientists evaluate the efficiency of various algorithms without regard to the software or hardware environment.

To denote an asymptotic upper bound, we use O-notation. For a given function `g(n)`, we denote by `O(g(n))` (pronounced "big-oh of g of n") the set of functions:

$$
O(g(n)) = \{ f(n) : \exists \text{ positive constants } c \text{ and } n_0 \text{ such that } 0 \leq f(n) \leq c \cdot g(n) \text{ for all } n \geq n_0 \}
$$

Graphical representation of Big Oh:

![BigOh Notation Graph](images/Time-And-Space-Complexity-BigOh.png)
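
As a concrete instance, take `f(n) = 3n + 2` (an illustrative function, not from the original text). Choosing the constants `c = 4` and `n_0 = 2` satisfies the definition, so `f(n)` is in `O(n)`:

$$
0 \leq 3n + 2 \leq 4n \quad \text{for all } n \geq 2
$$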

### 2. Big Omega (Ω) Notation

Big Omega (Ω) notation is used to describe the lower bound of an algorithm's running time. It provides a way to express the minimum time an algorithm will take to complete. In other words, Big Omega gives us a guarantee that the algorithm will take at least a certain amount of time to run, regardless of other factors.

To denote an asymptotic lower bound, we use Omega-notation. For a given function `g(n)`, we denote by `Ω(g(n))` (pronounced "big-omega of g of n") the set of functions:

$$
\Omega(g(n)) = \{ f(n) : \exists \text{ positive constants } c \text{ and } n_0 \text{ such that } 0 \leq c \cdot g(n) \leq f(n) \text{ for all } n \geq n_0 \}
$$

Graphical representation of Big Omega:

![BigOmega Notation Graph](images/Time-And-Space-Complexity-BigOmega.png)
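
Using the same illustrative function, `f(n) = 3n + 2` is also in `Ω(n)`: choosing `c = 3` and `n_0 = 1` gives

$$
0 \leq 3n \leq 3n + 2 \quad \text{for all } n \geq 1
$$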

### 3. Big Theta (Θ) Notation

Big Theta (Θ) notation provides a way to describe the asymptotically tight bound of an algorithm's running time. It offers a precise measure of the time complexity by establishing both an upper and a lower bound, indicating that the running time of an algorithm grows at the same rate as a given function, up to constant factors.

To denote an asymptotic tight bound, we use Theta-notation. For a given function `g(n)`, we denote by `Θ(g(n))` (pronounced "big-theta of g of n") the set of functions:

$$
\Theta(g(n)) = \{ f(n) : \exists \text{ positive constants } c_1, c_2, \text{ and } n_0 \text{ such that } 0 \leq c_1 \cdot g(n) \leq f(n) \leq c_2 \cdot g(n) \text{ for all } n \geq n_0 \}
$$

Graphical representation of Big Theta:

![Big Theta Notation Graph](images/Time-And-Space-Complexity-BigTheta.png)
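
Combining the two bounds above for the same illustrative function: with `c_1 = 3`, `c_2 = 4`, and `n_0 = 2`,

$$
0 \leq 3n \leq 3n + 2 \leq 4n \quad \text{for all } n \geq 2
$$

so `f(n) = 3n + 2` is in `Θ(n)`.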

## Best Case, Worst Case and Average Case

### 1. Best-Case Scenario:

The best-case scenario refers to the situation where an algorithm performs optimally, achieving the lowest possible time or space complexity. It represents the most favorable conditions under which an algorithm operates.

#### Characteristics:

- Represents the minimum time or space required by an algorithm to solve a problem.
- Occurs when the input data is structured in such a way that the algorithm can fully exploit its strengths.
- Often used to analyze the lower bound of an algorithm's performance.

#### Example:

Consider the `linear search algorithm`, where we're searching for a `target element` in an array. The best case occurs when the target element is found `at the very beginning of the array`. In this case, the algorithm needs to make only one comparison, resulting in a time complexity of `O(1)`.

### 2. Worst-Case Scenario:

The worst-case scenario refers to the situation where an algorithm performs at its poorest, incurring the highest possible time or space complexity. It represents the most unfavorable conditions under which an algorithm operates.

#### Characteristics:

- Represents the maximum time or space required by an algorithm to solve a problem.
- Occurs when the input data is structured in such a way that the algorithm encounters the most challenging conditions.
- Often used to analyze the upper bound of an algorithm's performance.

#### Example:

Continuing with the `linear search algorithm`, the worst case occurs when the `target element` is either not present in the array or located `at the very end`. In this case, the algorithm must iterate through the entire array, resulting in a time complexity of `O(n)`, where `n` is the size of the array.

### 3. Average-Case Scenario:

The average-case scenario refers to the expected performance of an algorithm over all possible inputs, typically calculated as the mean of the time or space cost weighted by how likely each input is.

#### Characteristics:

- Represents the typical performance of an algorithm across a range of input data.
- Takes into account the distribution of inputs and their likelihood of occurrence.
- Provides a more realistic measure of an algorithm's performance compared to the best-case or worst-case scenarios.

#### Example:

For the `linear search algorithm`, the average case considers the probability distribution of the target element's position within the array. If the `target element is equally likely to be found at any position in the array`, the algorithm on average searches about halfway through the array, making roughly `n/2` comparisons. Since constant factors are dropped in asymptotic notation, the average-case time complexity is still `O(n)`.
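
The following minimal sketch (the `count_comparisons` helper is illustrative, not part of the original text) makes the three cases concrete by counting the comparisons linear search performs for each possible target position:

```python
def count_comparisons(arr, target):
    """Count the comparisons linear search makes before finding `target`."""
    comparisons = 0
    for x in arr:
        comparisons += 1
        if x == target:
            break
    return comparisons

arr = list(range(1, 11))  # [1, 2, ..., 10], so n = 10

# Best case: target at the front -> 1 comparison, O(1).
print(count_comparisons(arr, 1))   # 1

# Worst case: target at the end -> n comparisons, O(n).
print(count_comparisons(arr, 10))  # 10

# Average case: mean over all equally likely positions -> (n + 1) / 2.
avg = sum(count_comparisons(arr, t) for t in arr) / len(arr)
print(avg)                         # 5.5
```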

## Space Complexity

The memory space that a piece of code uses as it runs is referred to as its space complexity. Because the raw memory footprint depends on the machine, we express space complexity using Big O notation rather than typical memory units like MB or GB.

#### Examples of Space Complexity

1. `Constant Space Complexity (O(1))`: Algorithms that operate on a fixed-size array or use a constant number of variables have O(1) space complexity.
2. `Linear Space Complexity (O(n))`: Algorithms that store each element of the input array in a separate variable or data structure have O(n) space complexity.
3. `Quadratic Space Complexity (O(n^2))`: Algorithms that create a two-dimensional array or matrix with dimensions based on the input size have O(n^2) space complexity; all three cases are sketched below.
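
A minimal sketch of the three cases (the function names are illustrative, not from the original text):

```python
def constant_space(arr):
    # O(1): only a fixed number of variables, regardless of input size
    total = 0
    for x in arr:
        total += x
    return total

def linear_space(arr):
    # O(n): builds a new list whose size grows with the input
    return [x * 2 for x in arr]

def quadratic_space(n):
    # O(n^2): builds an n-by-n matrix
    return [[0] * n for _ in range(n)]
```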

#### Analyzing Space Complexity

To analyze space complexity:

- Identify the variables, data structures, and recursive calls used by the algorithm.
- Determine how the space requirements scale with the input size.
- Express the space complexity using Big O notation, considering the dominant terms that contribute most to the overall space usage.

## Examples to calculate time and space complexity

#### 1. Print all elements of given array

Assume each line takes one unit of time to run. Then simply iterating over an array to print all its elements takes `O(n)` time, where `n` is the size of the array.

Code:

```python
arr = [1, 2, 3, 4]  # 1
for x in arr:       # 2
    print(x)        # 3
```

Here, the 1st statement executes only once, so it takes one unit of time. The for loop, consisting of the 2nd and 3rd statements, executes 4 times here, and `n` times in general for an array of size `n`.
Also, since the code doesn't take any additional space apart from the input `arr`, its space complexity is `O(1)`, i.e., constant.

#### 2. Linear Search

Linear search is a simple algorithm for finding an element in an array by sequentially checking each element until a match is found or the end of the array is reached. Here's an example of calculating the time and space complexity of linear search:

```python
def linear_search(arr, target):
    for x in arr:        # n iterations in worst case
        if x == target:  # 1
            return True  # 1
    return False         # If element not found

# Example usage
arr = [1, 3, 5, 7, 9]
target = 5
print(linear_search(arr, target))
```

**Time Complexity Analysis**

The for loop iterates through the entire array, which takes O(n) time in the worst case, where n is the size of the array.
Inside the loop, each operation takes constant time (O(1)).
Therefore, the time complexity of linear search is `O(n)`.

**Space Complexity Analysis**

The space complexity of linear search is `O(1)`, since it only uses a constant amount of additional space for variables, regardless of the input size.

#### 3. Binary Search

Binary search is an efficient algorithm for finding an element in a sorted array by repeatedly dividing the search interval in half. Here's an example of calculating the time and space complexity of binary search:

```python
def binary_search(arr, target):
    left = 0                       # 1
    right = len(arr) - 1           # 1

    while left <= right:           # log(n) iterations in worst case
        mid = (left + right) // 2  # log(n)

        if arr[mid] == target:     # 1
            return mid             # 1
        elif arr[mid] < target:    # 1
            left = mid + 1         # 1
        else:
            right = mid - 1        # 1

    return -1                      # If element not found

# Example usage
arr = [1, 3, 5, 7, 9]
target = 5
print(binary_search(arr, target))
```

**Time Complexity Analysis**

The initialization of left and right takes constant time (O(1)).
Each iteration of the while loop halves the search interval, so after k iterations only about n / 2^k elements remain; the loop therefore runs for log(n) iterations in the worst case, where n is the size of the array.
Inside the loop, each operation takes constant time (O(1)).
Therefore, the time complexity of binary search is `O(log n)`.
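
To see this halving concretely, here is a minimal sketch (the `binary_search_steps` helper is illustrative, not part of the original text) that counts worst-case loop iterations for growing input sizes and compares them with log2(n):

```python
import math

def binary_search_steps(n):
    """Count binary search iterations for a worst-case miss on n elements."""
    left, right, steps = 0, n - 1, 0
    while left <= right:
        steps += 1
        mid = (left + right) // 2
        left = mid + 1  # always go right: simulates a target larger than all elements
    return steps

for n in [10, 100, 1000, 10000]:
    print(n, binary_search_steps(n), round(math.log2(n), 1))
```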

**Space Complexity Analysis**

The space complexity of binary search is `O(1)`, since it only uses a constant amount of additional space for variables, regardless of the input size.

#### 4. Fibonacci Sequence

Let's consider an example of a function that generates Fibonacci numbers up to a given index and stores them in a list. In this case, the space complexity is not constant, because the size of the list grows with the Fibonacci sequence.

```python
def fibonacci_sequence(n):
    fib_list = [0, 1]  # Initial Fibonacci sequence with first two numbers

    while len(fib_list) < n:                    # O(n) iterations in worst case
        next_fib = fib_list[-1] + fib_list[-2]  # Calculating next Fibonacci number
        fib_list.append(next_fib)               # Appending next Fibonacci number to list

    return fib_list[:n]  # Slicing also handles n < 2 correctly

# Example usage
n = 10
fib_sequence = fibonacci_sequence(n)
print(fib_sequence)
```

**Time Complexity Analysis**

The while loop iterates until the length of the Fibonacci sequence list reaches n, so it takes `O(n)` iterations in the `worst case`. Inside the loop, each operation takes constant time (O(1)).

**Space Complexity Analysis**

The space complexity of this function is not constant, because it creates and stores a list of Fibonacci numbers.
As n grows, the size of the list also grows, so the space complexity is O(n), where n is the index of the last Fibonacci number generated.
