Commit 5323d96

Merge branch 'main' into introduction-to-pie-charts-in-matplotlib

2 parents: ac4cca2 + 165639d

23 files changed: +1185 -26 lines

contrib/ds-algorithms/index.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -1,5 +1,6 @@
 # List of sections
 
+- [Time & Space Complexity](time-space-complexity.md)
 - [Queues in Python](Queues.md)
 - [Graphs](graph.md)
 - [Sorting Algorithms](sorting-algorithms.md)
```
contrib/ds-algorithms/recursion.md

Lines changed: 44 additions & 24 deletions

```diff
@@ -2,7 +2,7 @@
 
 Recursion occurs when a function calls itself to solve smaller instances of the same problem until a specified condition is fulfilled. It is used for tasks that can be divided into smaller sub-tasks.
 
-# How Recursion Works
+## How Recursion Works
 
 To solve a problem using recursion, we must define:
 - Base condition: the condition under which the recursion ends.
@@ -17,43 +17,63 @@ When a recursive function is called, the following sequence of events occurs:
 - Stack Management: Each recursive call is placed on the call stack. The stack keeps track of each function call, its arguments, and the point to return to once the call completes.
 - Unwinding the Stack: When the base case is eventually met, the function returns a value, and the stack starts unwinding, returning values to previous function calls until the initial call is resolved.
 
-# What is Stack Overflow in Recursion
+## Python Code: Factorial using Recursion
 
-Stack overflow is an error that occurs when the call stack memory limit is exceeded. During execution, recursive calls are stored on the call stack until the recursive function completes. Without a base case, the function would call itself indefinitely, leading to a stack overflow.
+```python
+def fact(n):
+    if n == 0 or n == 1:
+        return 1
+    return n * fact(n - 1)
+
+if __name__ == "__main__":
+    n = int(input("Enter a positive number: "))
+    print("Factorial of", n, "is", fact(n))
+```
 
-# Example
+### Explanation
 
-- Factorial of a Number
+This Python script calculates the factorial of a given number using recursion.
 
-The factorial of the first i natural numbers is the i-th integer multiplied by the factorial of the first (i - 1) numbers. The base case is i = 0: we return 1, as the factorial of 0 is 1.
-
-```python
-def factorial(i):
-    # base case
-    if i == 0:
-        return 1
-    # recursive case
-    else:
-        return i * factorial(i - 1)
-i = 6
-print("Factorial of i is:", factorial(i))  # Output: Factorial of i is: 720
-```
-# What is Backtracking
+- **Function `fact(n)`:**
+  - The function takes an integer `n` as input and calculates its factorial.
+  - It checks if `n` is 0 or 1. If so, it returns 1 (since the factorial of 0 and 1 is 1).
+  - Otherwise, it returns `n * fact(n - 1)`, recursively calling itself with `n - 1` until it reaches either 0 or 1.
+
+- **Main Section:**
+  - The main section prompts the user to enter a positive number.
+  - It then calls the `fact` function with the input number and prints the result.
+
+#### Example: Let n = 4
+
+The recursion unfolds as follows:
+1. When `fact(4)` is called, it computes `4 * fact(3)`.
+2. Inside `fact(3)`, it computes `3 * fact(2)`.
+3. Inside `fact(2)`, it computes `2 * fact(1)`.
+4. `fact(1)` returns 1 (the `if` branch executes), which is received by `fact(2)`, resulting in `2 * 1`, i.e. `2`.
+5. Back in `fact(3)`, it receives the value from `fact(2)`, giving `3 * 2`, i.e. `6`.
+6. `fact(4)` receives the value from `fact(3)`, resulting in `4 * 6`, i.e. `24`.
+7. Finally, `fact(4)` returns 24 to the caller.
+
+So, the result is 24.
+
+## What is Stack Overflow in Recursion?
+
+Stack overflow is an error that occurs when the call stack memory limit is exceeded. During execution, recursive calls are stored on the call stack until the recursive function completes. Without a base case, the function would call itself indefinitely, leading to a stack overflow.
+
+## What is Backtracking
 
 Backtracking is a recursive algorithmic technique used to solve problems by exploring all possible solutions and discarding those that do not meet the problem's constraints. It is particularly useful for problems involving combinations, permutations, and finding paths in a grid.
 
-# How Backtracking Works
+## How Backtracking Works
 
 - Incremental Solution Building: Solutions are built one step at a time.
 - Feasibility Check: At each step, a check is made to see if the current partial solution is valid.
 - Backtracking: If a partial solution is found to be invalid, the algorithm backtracks by removing the last added part of the solution and trying the next possibility.
 - Exploration of All Possibilities: The process continues recursively, exploring all possible paths, until a solution is found or all possibilities are exhausted.
 
-# Example
-
-- Word Search
+## Example: Word Search
 
 Given a 2D grid of characters and a word, determine if the word exists in the grid. The word can be constructed from letters of sequentially adjacent cells, where "adjacent" cells are horizontally or vertically neighboring. The same letter cell may not be used more than once.
 
 Algorithm for Solving the Word Search Problem with Backtracking:
 - Start at each cell: Attempt to find the word starting from each cell.
```
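For reference, a minimal Python sketch of the backtracking word search described above; the function name, grid, and example words are illustrative, not part of the diff:

```python
def exist(board, word):
    # Return True if `word` can be traced through horizontally or
    # vertically adjacent cells of `board`, using each cell at most once.
    rows, cols = len(board), len(board[0])

    def backtrack(r, c, i):
        if i == len(word):                        # every character matched
            return True
        if (r < 0 or r >= rows or c < 0 or c >= cols
                or board[r][c] != word[i]):       # infeasible partial solution
            return False
        saved, board[r][c] = board[r][c], "#"     # mark the cell as used
        found = (backtrack(r + 1, c, i + 1) or backtrack(r - 1, c, i + 1)
                 or backtrack(r, c + 1, i + 1) or backtrack(r, c - 1, i + 1))
        board[r][c] = saved                       # undo the choice (backtrack)
        return found

    # Try to start the word from every cell.
    return any(backtrack(r, c, 0) for r in range(rows) for c in range(cols))

# Example usage
grid = [list("ABCE"), list("SFCS"), list("ADEE")]
print(exist(grid, "ABCCED"))  # True
print(exist(grid, "ABCB"))    # False
```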
contrib/ds-algorithms/time-space-complexity.md

Lines changed: 243 additions & 0 deletions

@@ -0,0 +1,243 @@

# Time and Space Complexity

We can solve a problem using one or more algorithms. It's essential to learn how to compare the performance of different algorithms and select the best one for a specific task.

We therefore need a method to compare the solutions and judge which one is more optimal. The method must:

- be independent of the machine and its configuration on which the algorithm runs;
- show a direct correlation with the number of inputs;
- distinguish two algorithms clearly and precisely.

Two such methods used to analyze algorithms are `time complexity` and `space complexity`.

## What is Time Complexity?

Time complexity measures the _number of operations an algorithm performs in proportion to the size of its input_. It lets us study how an algorithm's performance scales as the input size increases. Note that **_time complexity does not refer to the actual time a machine takes to execute a particular piece of code_**.

## Order of Growth and Asymptotic Notations

The order of growth describes how an algorithm's running time or space requirement expands as the input size grows. This growth is expressed in asymptotic notation, such as Big O notation, which concentrates on the dominant term as the input size approaches infinity and is independent of lower-order terms and machine-specific constants.

### Common Asymptotic Notations

1. `Big Oh (O)`: Describes the upper bound of an algorithm's running time; the worst-case scenario.
2. `Big Omega (Ω)`: Describes the lower bound; the best-case scenario.
3. `Big Theta (Θ)`: Describes both the upper and lower bounds, giving a tight bound on the running time.

### 1. Big Oh (O) Notation

Big O notation provides an upper bound on the time or space complexity of an algorithm and describes how the algorithm behaves as the input size approaches infinity. It lets developers and computer scientists evaluate the efficiency of different algorithms without regard to the software or hardware environment.

To denote an asymptotic upper bound, we use O-notation. For a given function `g(n)`, we denote by `O(g(n))` (pronounced "big-oh of g of n") the set of functions:

$$
O(g(n)) = \{ f(n) : \exists \text{ positive constants } c \text{ and } n_0 \text{ such that } 0 \leq f(n) \leq c \cdot g(n) \text{ for all } n \geq n_0 \}
$$

Graphical representation of Big Oh:

![BigOh Notation Graph](images/Time-And-Space-Complexity-BigOh.png)
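As a quick worked example (added here for illustration, not part of the original file), take $f(n) = 3n^2 + 5n$ and $g(n) = n^2$. Then

$$
3n^2 + 5n \leq 3n^2 + n^2 = 4n^2 \quad \text{for all } n \geq 5,
$$

so the constants $c = 4$ and $n_0 = 5$ witness $f(n) \in O(n^2)$.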
### 2. Big Omega (Ω) Notation

Big Omega (Ω) notation describes the lower bound of an algorithm's running time. It expresses the minimum time complexity an algorithm needs to complete. In other words, Big Omega guarantees that the algorithm will take at least a certain amount of time to run, regardless of other factors.

To denote an asymptotic lower bound, we use Ω-notation. For a given function `g(n)`, we denote by `Ω(g(n))` (pronounced "big-omega of g of n") the set of functions:

$$
\Omega(g(n)) = \{ f(n) : \exists \text{ positive constants } c \text{ and } n_0 \text{ such that } 0 \leq c \cdot g(n) \leq f(n) \text{ for all } n \geq n_0 \}
$$

Graphical representation of Big Omega:

![BigOmega Notation Graph](images/Time-And-Space-Complexity-BigOmega.png)

### 3. Big Theta (Θ) Notation

Big Theta (Θ) notation describes an asymptotically tight bound on an algorithm's running time. It gives a precise measure of time complexity by establishing both an upper and a lower bound, indicating that the running time grows at the same rate as a given function, up to constant factors.

To denote an asymptotic tight bound, we use Θ-notation. For a given function `g(n)`, we denote by `Θ(g(n))` (pronounced "big-theta of g of n") the set of functions:

$$
\Theta(g(n)) = \{ f(n) : \exists \text{ positive constants } c_1, c_2, \text{ and } n_0 \text{ such that } 0 \leq c_1 \cdot g(n) \leq f(n) \leq c_2 \cdot g(n) \text{ for all } n \geq n_0 \}
$$

Graphical representation of Big Theta:

![Big Theta Notation Graph](images/Time-And-Space-Complexity-BigTheta.png)
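Continuing the illustrative example from above: for $f(n) = 3n^2 + 5n$,

$$
3n^2 \leq 3n^2 + 5n \leq 4n^2 \quad \text{for all } n \geq 5,
$$

so with $c_1 = 3$, $c_2 = 4$, and $n_0 = 5$ we get $f(n) \in \Theta(n^2)$; the left inequality alone also shows $f(n) \in \Omega(n^2)$.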
## Best Case, Worst Case and Average Case

### 1. Best-Case Scenario:

The best-case scenario refers to the situation where an algorithm performs optimally, achieving the lowest possible time or space complexity. It represents the most favorable conditions under which an algorithm operates.

#### Characteristics:

- Represents the minimum time or space required by an algorithm to solve a problem.
- Occurs when the input data is structured in such a way that the algorithm can exploit its strengths fully.
- Often used to analyze the lower bound of an algorithm's performance.

#### Example:

Consider the `linear search algorithm`, where we're searching for a `target element` in an array. The best-case scenario occurs when the target element is found `at the very beginning of the array`. In this case, the algorithm only needs to make one comparison, resulting in a time complexity of `O(1)`.

### 2. Worst-Case Scenario:

The worst-case scenario refers to the situation where an algorithm performs at its poorest, reaching the highest possible time or space complexity. It represents the most unfavorable conditions under which an algorithm operates.

#### Characteristics:

- Represents the maximum time or space required by an algorithm to solve a problem.
- Occurs when the input data is structured in such a way that the algorithm encounters the most challenging conditions.
- Often used to analyze the upper bound of an algorithm's performance.

#### Example:

Continuing with the `linear search algorithm`, the worst-case scenario occurs when the `target element` is either not present in the array or located `at the very end`. In this case, the algorithm needs to iterate through the entire array, resulting in a time complexity of `O(n)`, where `n` is the size of the array.

### 3. Average-Case Scenario:

The average-case scenario refers to the expected performance of an algorithm over all possible inputs, typically calculated as the arithmetic mean of the time or space complexity.

#### Characteristics:

- Represents the typical performance of an algorithm across a range of input data.
- Takes into account the distribution of inputs and their likelihood of occurrence.
- Provides a more realistic measure of an algorithm's performance compared to the best-case or worst-case scenarios.

#### Example:

For the `linear search algorithm`, the average-case scenario considers the probability distribution of the target element's position within the array. If the `target element is equally likely to be found at any position in the array`, the algorithm will, on average, search about halfway through the array, i.e. about `n/2` comparisons, which is `O(n)` once the constant factor is dropped.
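That `n/2` figure follows from averaging over the `n` equally likely positions; a short derivation, added here for completeness:

$$
E[\text{comparisons}] = \frac{1}{n} \sum_{i=1}^{n} i = \frac{n+1}{2} \in O(n)
$$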
## Space Complexity

Space complexity refers to the memory a program uses while it runs. Since actual memory usage depends on the machine, we express space complexity in Big O notation rather than in conventional memory units like MB or GB.

#### Examples of Space Complexity

1. `Constant Space Complexity (O(1))`: Algorithms that operate on a fixed-size array or use a constant number of variables have O(1) space complexity.
2. `Linear Space Complexity (O(n))`: Algorithms that store each element of the input array in a separate variable or data structure have O(n) space complexity.
3. `Quadratic Space Complexity (O(n^2))`: Algorithms that create a two-dimensional array or matrix with dimensions based on the input size have O(n^2) space complexity.
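A minimal sketch contrasting these three cases (the function names are illustrative, not from the original file):

```python
def constant_space(arr):
    # O(1) space: a few scalar variables, regardless of len(arr)
    total = 0
    for x in arr:
        total += x
    return total

def linear_space(arr):
    # O(n) space: builds a new list that grows with the input
    return [x * 2 for x in arr]

def quadratic_space(n):
    # O(n^2) space: allocates an n-by-n matrix
    return [[0] * n for _ in range(n)]

# Example usage
data = [1, 2, 3]
print(constant_space(data))   # 6
print(linear_space(data))     # [2, 4, 6]
print(quadratic_space(3))     # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```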
#### Analyzing Space Complexity

To analyze space complexity:

- Identify the variables, data structures, and recursive calls used by the algorithm.
- Determine how the space requirements scale with the input size.
- Express the space complexity using Big O notation, considering the dominant terms that contribute most to the overall space usage.

## Examples to calculate time and space complexity

#### 1. Print all elements of a given array

Assume each line takes one unit of time to run. Then simply iterating over an array to print all its elements takes `O(n)` time, where n is the size of the array.

Code:

```python
arr = [1, 2, 3, 4]  # 1
for x in arr:       # 2
    print(x)        # 3
```

Here, the 1st statement executes only once, so it takes one unit of time to run. The for loop, consisting of the 2nd and 3rd statements, executes 4 times (n times for an array of size n).
Also, since the code doesn't take any additional space beyond the input `arr`, its space complexity is O(1), i.e. constant.

#### 2. Linear Search

Linear search is a simple algorithm for finding an element in an array by sequentially checking each element until a match is found or the end of the array is reached. Here's an example of calculating the time and space complexity of linear search:

```python
def linear_search(arr, target):
    for x in arr:          # n iterations in the worst case
        if x == target:    # 1
            return True    # 1
    return False           # element not found

# Example usage
arr = [1, 3, 5, 7, 9]
target = 5
print(linear_search(arr, target))
```

**Time Complexity Analysis**

The for loop iterates through the entire array, which takes O(n) time in the worst case, where n is the size of the array.
Inside the loop, each operation takes constant time (O(1)).
Therefore, the time complexity of linear search is `O(n)`.

**Space Complexity Analysis**

The space complexity of linear search is `O(1)`, since it only uses a constant amount of additional space for variables regardless of the input size.

#### 3. Binary Search

Binary search is an efficient algorithm for finding an element in a sorted array by repeatedly dividing the search interval in half. Here's an example of calculating the time and space complexity of binary search:

```python
def binary_search(arr, target):
    left = 0                       # 1
    right = len(arr) - 1           # 1

    while left <= right:           # log(n) iterations in the worst case
        mid = (left + right) // 2  # 1 per iteration

        if arr[mid] == target:     # 1
            return mid             # 1
        elif arr[mid] < target:    # 1
            left = mid + 1         # 1
        else:
            right = mid - 1        # 1

    return -1                      # element not found

# Example usage
arr = [1, 3, 5, 7, 9]
target = 5
print(binary_search(arr, target))
```

**Time Complexity Analysis**

The initialization of left and right takes constant time (O(1)).
The while loop runs for log(n) iterations in the worst case, where n is the size of the array.
Inside the loop, each operation takes constant time (O(1)).
Therefore, the time complexity of binary search is `O(log n)`.

**Space Complexity Analysis**

The space complexity of binary search is `O(1)`, since it only uses a constant amount of additional space for variables regardless of the input size.
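For contrast, a recursive variant (a sketch added here; the original example is iterative) is no longer O(1) in space: each halving adds a stack frame, so it uses O(log n) space for the call stack.

```python
def binary_search_recursive(arr, target, left, right):
    if left > right:
        return -1                 # element not found
    mid = (left + right) // 2
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:       # search the right half
        return binary_search_recursive(arr, target, mid + 1, right)
    else:                         # search the left half
        return binary_search_recursive(arr, target, left, mid - 1)

# Example usage
arr = [1, 3, 5, 7, 9]
print(binary_search_recursive(arr, 5, 0, len(arr) - 1))  # 2
```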
#### 4. Fibonacci Sequence

Let's consider an example of a function that generates Fibonacci numbers up to a given index and stores them in a list. In this case, the space complexity is not constant, because the size of the list grows with the Fibonacci sequence.

```python
def fibonacci_sequence(n):
    fib_list = [0, 1][:n]  # first two Fibonacci numbers (truncated if n < 2)

    while len(fib_list) < n:                    # O(n) iterations in the worst case
        next_fib = fib_list[-1] + fib_list[-2]  # calculate the next Fibonacci number
        fib_list.append(next_fib)               # append it to the list

    return fib_list

# Example usage
n = 10
fib_sequence = fibonacci_sequence(n)
print(fib_sequence)
```

**Time Complexity Analysis**

The while loop iterates until the length of the Fibonacci sequence list reaches n, so it takes `O(n)` iterations in the `worst case`. Inside the loop, each operation takes constant time (O(1)). Therefore, the time complexity is `O(n)`.

**Space Complexity Analysis**

The space complexity of this function is not constant, because it creates and stores a list of Fibonacci numbers.
As n grows, the size of the list also grows, so the space complexity is O(n), where n is the index of the last Fibonacci number generated.

contrib/machine-learning/index.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -9,3 +9,4 @@
 - [TensorFlow.md](tensorFlow.md)
 - [PyTorch.md](pytorch.md)
 - [Types of optimizers](Types_of_optimizers.md)
+- [Logistic Regression](logistic-regression.md)
```
