Lecture 1
• What is an Algorithm?
• Characteristics of an Algorithm
• Concept of Algorithmic Efficiency
• Run-time Analysis of an Algorithm
• Growth of Functions
• Asymptotic Notations
What is an Algorithm?
In computer science, an algorithm is a detailed plan for solving a problem using a computer. It takes
an input, processes it through a series of computational steps, and produces an output. Algorithms
are the foundation of all computer programs and are essential for tasks such as sorting data,
searching for information, and performing calculations.
Characteristics of an Algorithm
1. Input: An algorithm takes zero or more inputs.
2. Output: An algorithm produces at least one output.
3. Definiteness: Each step must be precisely and unambiguously defined.
4. Finiteness: The algorithm must terminate after a finite number of steps.
5. Effectiveness: Each step of the algorithm must be basic enough to be carried out.
Algorithms can vary in complexity, efficiency, and design, and they are often evaluated based on their
time complexity (how the execution time grows with input size) and space complexity (how much
memory is required as the input size grows).
Concept of Algorithmic Efficiency
Algorithm efficiency refers to the measurement of the performance of an algorithm in terms of its
consumption of computational resources, such as time and space (memory). Efficient algorithms
perform tasks in less time and with less memory compared to less efficient ones. The efficiency of
an algorithm is critical, especially when dealing with large datasets or time-sensitive applications.
1. Time Complexity:
- Definition: Time complexity is a measure of the amount of time an algorithm takes to complete as
a function of the length of the input.
- Notation: Often expressed using Big O notation (e.g., O(n), O(log n), O(n^2)), which describes the
upper bound on the time required by the algorithm in the worst-case scenario.
- Types:
- Constant Time (O(1)): The execution time does not change with the size of the input.
- Logarithmic Time (O(log n)): The execution time grows logarithmically with the input size.
- Linear Time (O(n)): The execution time grows linearly with the input size.
- Quadratic Time (O(n^2)): The execution time grows quadratically with the input size.
- Exponential Time (O(2^n)): The execution time grows exponentially with the input size.
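To make these classes concrete, here is a minimal Python sketch (the function names are illustrative, not from the lecture):

    def constant_time(arr):      # O(1): one operation regardless of input size
        return len(arr)

    def linear_time(arr):        # O(n): touches each element once
        total = 0
        for x in arr:
            total += x
        return total

    def quadratic_time(arr):     # O(n^2): nested loop examines every pair
        pairs = 0
        for x in arr:
            for y in arr:
                pairs += 1
        return pairs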
2. Space Complexity:
- Definition: Space complexity is a measure of the amount of memory an algorithm uses as a
function of the length of the input.
- Notation: Also expressed using Big O notation (e.g., O(1), O(n), O(n^2)).
- Types:
- Constant Space (O(1)): The amount of memory used does not change with the size of the input.
- Linear Space (O(n)): The amount of memory used grows linearly with the input size.
- Quadratic Space (O(n^2)): The amount of memory used grows quadratically with the input size.
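Similarly, a minimal sketch of the space classes (illustrative names):

    def constant_space(arr):     # O(1) extra space: a single accumulator
        total = 0
        for x in arr:
            total += x
        return total

    def linear_space(arr):       # O(n) extra space: builds a new list of size n
        return [x * 2 for x in arr]

    def quadratic_space(n):      # O(n^2) extra space: an n-by-n table
        return [[0] * n for _ in range(n)]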
Complexity can also be characterized by the scenario being analyzed:
1. Worst-case Complexity: The maximum time or space an algorithm can take, which provides an
upper bound.
2. Average-case Complexity: The expected time or space an algorithm will take on average for
random input.
3. Best-case Complexity: The minimum time or space an algorithm can take, which provides a lower
bound.
4. Amortized Complexity: The average time per operation over a sequence of operations, which is
useful for algorithms that have occasional expensive operations.
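To make amortized complexity concrete, here is a minimal sketch of a dynamic array that doubles its capacity when full (the class name DynamicArray is illustrative). A single append can cost O(n) when it triggers a resize, but any sequence of n appends costs O(n) in total, so each append is O(1) amortized:

    class DynamicArray:
        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.data = [None] * self.capacity

        def append(self, value):
            if self.size == self.capacity:
                # Occasional expensive operation: copy everything, O(n)
                self.capacity *= 2
                new_data = [None] * self.capacity
                new_data[:self.size] = self.data[:self.size]
                self.data = new_data
            self.data[self.size] = value
            self.size += 1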
Example:
Consider two algorithms for searching a list: Linear Search and Binary Search.
- Linear Search:
- Time Complexity: O(n) - In the worst case, it checks each element once.
- Space Complexity: O(1) - It uses a constant amount of extra memory.
- Binary Search (on a sorted list):
- Time Complexity: O(log n) - It halves the search range at each step.
- Space Complexity: O(1) - The iterative version uses a constant amount of extra memory.
In this example, Binary Search is more efficient in terms of time complexity than Linear Search for
large datasets, although both have the same space complexity.
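As a rough illustration using Python's standard bisect module (the list size and target here are arbitrary):

    import bisect

    data = list(range(1_000_000))   # a large sorted list
    target = 999_999
    # Linear search: may scan all n elements -> O(n)
    found_linear = target in data
    # Binary search: halves the range each step -> O(log n)
    index = bisect.bisect_left(data, target)
    found_binary = index < len(data) and data[index] == target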
Run-time Analysis of an Algorithm
Runtime analysis of an algorithm involves determining how the execution time of an algorithm grows
with the size of the input. This analysis helps in understanding the efficiency and scalability of an
algorithm. The key concepts used in runtime analysis include asymptotic notation, worst-case,
average-case, and best-case scenarios.
1. Asymptotic Notation:
- Big O Notation (O): Describes an upper bound on the time complexity, giving the worst-case
scenario. For example, O(n) indicates that the runtime grows linearly with the input size.
- Omega Notation (Ω): Describes a lower bound, giving the best-case scenario. For example, Ω(n)
indicates that the runtime grows at least linearly with the input size.
- Theta Notation (Θ): Describes a tight bound, giving both the upper and lower bounds. For example,
Θ(n) indicates that the runtime grows exactly linearly with the input size.
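For instance, f(n) = 3n + 5 is Θ(n): taking c1 = 3, c2 = 4, and n0 = 5 gives 3n ≤ 3n + 5 ≤ 4n for all n ≥ 5, so the bound is tight on both sides.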
2. Types of Complexity:
- Worst-case Complexity: The maximum time an algorithm can take to complete for any input of
size n.
- Average-case Complexity: The expected time an algorithm takes on average for inputs of size n.
- Best-case Complexity: The minimum time an algorithm takes to complete for any input of size n.
Steps in Runtime Analysis
1. Identify the Basic Operations: Determine the fundamental operations whose execution time will
be counted. This could be comparisons, assignments, arithmetic operations, etc.
2. Count the Number of Basic Operations: Analyze how the number of basic operations grows with
the input size. This usually involves looping constructs, recursive calls, and conditionals.
3. Express the Count Using Asymptotic Notation: Represent the growth rate using Big O, Omega, or
Theta notation.
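As a sketch of steps 1 and 2, the snippet below counts the comparisons made by a nested loop over all pairs (the count variable is illustrative):

    def count_pair_operations(arr):
        n = len(arr)
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                count += 1       # one basic operation per pair
        return count             # n*(n-1)/2 operations, i.e. O(n^2)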
Examples
1. Linear Search:
    def linear_search(arr, x):
        # Scan the array from left to right until x is found
        for i in range(len(arr)):
            if arr[i] == x:
                return i
        return -1
- Best-case Complexity: O(1) (when x is the first element)
- Average-case Complexity: O(n) (assuming x is equally likely to be any element or not in the array)
- Worst-case Complexity: O(n) (when x is the last element or absent)
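For example, linear_search([4, 2, 7], 7) returns 2, and linear_search([4, 2, 7], 9) returns -1.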
2. Binary Search:
    def binary_search(arr, x):
        # Requires arr to be sorted; repeatedly halve the search range
        left, right = 0, len(arr) - 1
        while left <= right:
            mid = (left + right) // 2
            if arr[mid] == x:
                return mid
            elif arr[mid] < x:
                left = mid + 1
            else:
                right = mid - 1
        return -1
- Worst-case Complexity: O(log n) (when the array is divided in half each time)
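For example, binary_search([1, 3, 5, 7, 9], 7) returns 3; note that the input list must already be sorted.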
3. Merge Sort:
    def merge_sort(arr):
        if len(arr) > 1:
            mid = len(arr) // 2
            left_half = arr[:mid]
            right_half = arr[mid:]
            # Recursively sort each half
            merge_sort(left_half)
            merge_sort(right_half)
            # Merge the two sorted halves back into arr
            i = j = k = 0
            while i < len(left_half) and j < len(right_half):
                if left_half[i] <= right_half[j]:
                    arr[k] = left_half[i]
                    i += 1
                else:
                    arr[k] = right_half[j]
                    j += 1
                k += 1
            # Copy any remaining elements of left_half
            while i < len(left_half):
                arr[k] = left_half[i]
                i += 1
                k += 1
            # Copy any remaining elements of right_half
            while j < len(right_half):
                arr[k] = right_half[j]
                j += 1
                k += 1
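For example, given data = [5, 2, 9, 1], calling merge_sort(data) rearranges it in place to [1, 2, 5, 9].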
- Recursive Step: Divides the array into two halves, leading to T(n/2) for each half.
- Merge Step: Combines the two sorted halves in O(n) time.
- Overall: The recurrence T(n) = 2T(n/2) + O(n) solves to O(n log n).
Thus, runtime analysis helps in understanding how an algorithm's performance scales with input
size, which is crucial for choosing the right algorithm for a given problem, especially when dealing
with large datasets or performance-critical applications.
Growth of Functions
The growth of functions is a fundamental concept in algorithm analysis that helps us understand how
the running time or space requirements of an algorithm increase with the size of the input. This is
crucial for comparing the efficiency of different algorithms, especially as the input size becomes
large.
- Constant Time (O(1)): The running time does not change with the input size.
- Log-linear Time (O(n log n)): Example: Efficient sorting algorithms like Merge Sort and Quick Sort.
- Quadratic Time (O(n^2)): Example: Simple sorting algorithms like Bubble Sort, Insertion Sort, and
Selection Sort.
When comparing the growth rates of different functions, we can see how they scale as the input size
n increases. Here is a list of common growth functions in increasing order of their growth rates:
1. Constant: O(1)
2. Logarithmic: O(log n)
3. Linear: O(n)
4. Log-linear: O(n log n)
5. Quadratic: O(n^2)
6. Cubic: O(n^3)
7. Exponential: O(2^n)
8. Factorial: O(n!)
n:           1     2     3     4     5     10      15      20       25       30
-------------------------------------------------------------------------------
O(1):        1     1     1     1     1     1       1       1        1        1
O(log n):    0     1     1.6   2     2.3   3.3     3.9     4.3      4.6      4.9
O(n):        1     2     3     4     5     10      15      20       25       30
O(n log n):  0     2     4.8   8     11.6  33.2    58.6    86.4     116.1    147.2
O(n^2):      1     4     9     16    25    100     225     400      625      900
O(2^n):      2     4     8     16    32    1024    32768   ~10^6    ~3x10^7  ~10^9
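The values above can be reproduced with a short Python sketch (assuming base-2 logarithms, as is conventional in algorithm analysis):

    import math

    for n in [1, 2, 3, 4, 5, 10, 15, 20, 25, 30]:
        print(f"n={n:>2}: log n={math.log2(n):.1f}, n log n={n * math.log2(n):.1f}, "
              f"n^2={n**2}, 2^n={2**n}")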
- Efficient Algorithms: Algorithms with slower growth rates (e.g., O(1), O(log n), O(n)) are generally
more efficient and scalable.
- Inefficient Algorithms: Algorithms with faster growth rates (e.g., O(2^n), O(n!)) are generally less
efficient and may become impractical for large input sizes.
Understanding the growth of functions is essential for analyzing the efficiency of algorithms. By
comparing growth rates, we can predict how algorithms will perform as the input size increases and
choose the most appropriate algorithm for a given problem.
Asymptotic Notations
Asymptotic notations are mathematical tools used to describe the running time or space complexity
of algorithms in terms of their input size. They provide a way to classify algorithms according to their
performance and scalability, especially for large input sizes. The most commonly used asymptotic
notations are Big O, Big Omega, and Big Theta; little o and little omega describe strict (non-tight)
versions of these bounds.
1. Big O Notation (O):
- Definition: Describes an upper bound on the time complexity of an algorithm. It represents the
worst-case scenario.
- Example: If an algorithm has a time complexity of O(n), it means that in the worst case, the running
time grows linearly with the size of the input.
- Mathematical Representation: f(n) = O(g(n)) if there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
2. Big Omega Notation (Ω):
- Definition: Describes a lower bound on the time complexity of an algorithm. It represents the best-
case scenario.
- Example: If an algorithm has a time complexity of Ω(n), it means that in the best case, the running
time grows linearly with the size of the input.
- Mathematical Representation: f(n) = Ω(g(n)) if there exist positive constants c and n0 such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n0.
3. Big Theta Notation (Θ):
- Definition: Describes a tight bound on the time complexity of an algorithm. It represents both the
upper and lower bounds.
- Example: If an algorithm has a time complexity of Θ(n), it means that the running time grows
linearly with the size of the input, within constant factors.
- Mathematical Representation: f(n) = Θ(g(n)) if there exist positive constants c1, c2, and n0 such
that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
4. Little o Notation (o):
- Definition: Describes an upper bound that is not tight. It indicates that the growth rate of the
function is strictly less than that of the comparison function.
- Example: If an algorithm has a time complexity of o(n^2), it means that the running time grows
strictly slower than quadratic time.
- Mathematical Representation: f(n) = o(g(n)) if for every positive constant c there exists an n0 such
that 0 ≤ f(n) < c·g(n) for all n ≥ n0.
5. Little omega Notation (ω):
- Definition: Describes a lower bound that is not tight. It indicates that the growth rate of the function
is strictly greater than that of the comparison function.
- Example: If an algorithm has a time complexity of ω(n), it means that the running time grows
strictly faster than linear time.
- Mathematical Representation: f(n) = ω(g(n)) if for every positive constant c there exists an n0 such
that 0 ≤ c·g(n) < f(n) for all n ≥ n0.
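As a cross-check (assuming f and g are positive and the limit exists): f(n) = o(g(n)) exactly when lim n→∞ f(n)/g(n) = 0, and f(n) = ω(g(n)) exactly when that ratio grows without bound.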
Examples
1. Linear Search:
- Best-case (Ω): Ω(1) - When the element is found at the first position.
- Worst-case (O): O(n) - When the element is not in the array and all elements are checked.
2. Binary Search:
- Best-case (Ω): Ω(1) - When the middle element is the target on the first comparison.
- Worst-case (O): O(log n) - When the element is not in the array and the array is halved each time.