
Lecture 1: Analysis and Design of Algorithms

• What is Algorithm?
• Characteristics of an Algorithm
• Concept of Algorithmic efficiency
• Run-time Analysis of Algorithm
• Growth of Functions
• Asymptotic Notations

An algorithm is a step-by-step procedure or set of rules designed to solve a specific problem or perform a particular task. It is a sequence of well-defined instructions that can be followed to achieve a desired outcome. Algorithms are used in various fields, including mathematics, computer science, and everyday life.

In computer science, an algorithm is a detailed plan for solving a problem using a computer. It takes
an input, processes it through a series of computational steps, and produces an output. Algorithms
are the foundation of all computer programs and are essential for tasks such as sorting data,
searching for information, and performing calculations.

Here are some key characteristics of algorithms:

1. Definiteness: Each step of the algorithm is precisely defined.

2. Input: An algorithm has zero or more inputs.

3. Output: An algorithm produces at least one output.

4. Finiteness: An algorithm must terminate after a finite number of steps.

5. Effectiveness: Each step of the algorithm must be basic enough to be carried out.
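To make these characteristics concrete, here is a minimal Python sketch (an illustrative example, not part of the original notes) of an algorithm that finds the largest element in a list, annotated with the characteristic each step illustrates:

def find_max(numbers):
    # Input: one input, a non-empty list of numbers
    largest = numbers[0]
    for value in numbers[1:]:    # Finiteness: the loop runs exactly len(numbers) - 1 times
        if value > largest:      # Definiteness: each comparison is precisely specified
            largest = value      # Effectiveness: every step is a basic, executable operation
    return largest               # Output: exactly one result is produced

print(find_max([3, 41, 12, 9, 74, 15]))  # prints 74
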

Algorithms can vary in complexity, efficiency, and design, and they are often evaluated based on their
time complexity (how the execution time grows with input size) and space complexity (how much
memory is required as the input size grows).

Concept of Algorithm Efficiency

Algorithm efficiency refers to the measurement of the performance of an algorithm in terms of its
consumption of computational resources, such as time and space (memory). Efficient algorithms
perform tasks in less time and with less memory compared to less efficient ones. The efficiency of
an algorithm is critical, especially when dealing with large datasets or time-sensitive applications.

Key Concepts in Algorithm Efficiency:

1. Time Complexity:

- Definition: Time complexity is a measure of the amount of time an algorithm takes to complete as
a function of the length of the input.
- Notation: Often expressed using Big O notation (e.g., O(n), O(log n), O(n^2)), which describes the
upper bound on the time required by the algorithm in the worst-case scenario.

- Types:

- Constant Time (O(1)): The execution time does not change with the size of the input.

- Logarithmic Time (O(log n)): The execution time grows logarithmically with the input size.

- Linear Time (O(n)): The execution time grows linearly with the input size.

- Quadratic Time (O(n^2)): The execution time grows quadratically with the input size.

- Exponential Time (O(2^n)): The execution time grows exponentially with the input size.

2. Space Complexity:

- Definition: Space complexity is a measure of the amount of memory an algorithm uses as a function of the length of the input.

- Notation: Also expressed using Big O notation (e.g., O(1), O(n), O(n^2)).

- Types:

- Constant Space (O(1)): The amount of memory used does not change with the size of the input.

- Linear Space (O(n)): The amount of memory used grows linearly with the input size.

- Quadratic Space (O(n^2)): The amount of memory used grows quadratically with the input size.
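The difference between constant and linear space can be seen in a short sketch (illustrative; the function names are ours): the first function keeps a single running total no matter how long the input is, while the second builds a result list that grows with the input.

def sum_in_place(numbers):
    # O(1) extra space: one accumulator, regardless of input size
    total = 0
    for x in numbers:
        total += x
    return total

def squares_list(numbers):
    # O(n) extra space: the result list has one entry per input element
    return [x * x for x in numbers]
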

Evaluating Algorithm Efficiency:

When evaluating the efficiency of an algorithm, consider the following factors:

1. Worst-case Complexity: The maximum time or space an algorithm can take, which provides an
upper bound.

2. Average-case Complexity: The expected time or space an algorithm will take on average for
random input.

3. Best-case Complexity: The minimum time or space an algorithm can take, which provides a lower
bound.

4. Amortized Complexity: The average time per operation over a sequence of operations, which is
useful for algorithms that have occasional expensive operations.
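A standard illustration of amortized complexity is a dynamic array that doubles its capacity whenever it fills up. The sketch below is illustrative (Python's built-in list already uses a similar over-allocation strategy internally): an individual append is occasionally O(n) because of the copy, but over n appends the total copying work is 1 + 2 + 4 + ... + n < 2n, so the amortized cost per append is O(1).

class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, value):
        if self.size == self.capacity:
            # Occasional expensive step: copy everything into a buffer
            # twice as large, costing O(n) for this one operation.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.data = new_data
        self.data[self.size] = value  # the common case is O(1)
        self.size += 1
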
Example:

Consider two algorithms for searching a list: Linear Search and Binary Search.

- Linear Search:

- Time Complexity: O(n) - In the worst case, it checks each element once.

- Space Complexity: O(1) - It uses a constant amount of extra space.

- Binary Search (requires a sorted list):

- Time Complexity: O(log n) - It repeatedly divides the search interval in half.

- Space Complexity: O(1) - It uses a constant amount of extra space.

In this example, Binary Search is more efficient in terms of time complexity than Linear Search for
large datasets, although both have the same space complexity.
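The gap can also be seen by counting worst-case comparisons rather than timing code. The sketch below is illustrative: it uses the standard worst-case formulas, n comparisons for linear search and about log2(n) + 1 for binary search.

import math

def linear_worst_case(n):
    return n  # every element is checked once

def binary_worst_case(n):
    return math.ceil(math.log2(n)) + 1  # the interval is halved each step

for n in [10, 1000, 1_000_000]:
    print(n, linear_worst_case(n), binary_worst_case(n))
# 10       -> 10 vs 5
# 1000     -> 1000 vs 11
# 1000000  -> 1000000 vs 21
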

Run-Time Analysis of an Algorithm

Runtime analysis of an algorithm involves determining how the execution time of an algorithm grows
with the size of the input. This analysis helps in understanding the efficiency and scalability of an
algorithm. The key concepts used in runtime analysis include asymptotic notation, worst-case,
average-case, and best-case scenarios.

Key Concepts in Runtime Analysis

1. Asymptotic Notation:

- Big O Notation (O): Describes an upper bound on the time complexity, giving the worst-case
scenario. For example, O(n) indicates that the runtime grows linearly with the input size.

- Omega Notation (Ω): Describes a lower bound, giving the best-case scenario. For example, Ω(n)
indicates that the runtime grows at least linearly with the input size.

- Theta Notation (Θ): Describes a tight bound, giving both the upper and lower bounds. For example,
Θ(n) indicates that the runtime grows exactly linearly with the input size.

2. Types of Complexity:

- Worst-case Complexity: The maximum time an algorithm can take to complete for any input of
size n.

- Average-case Complexity: The expected time an algorithm takes on average for inputs of size n.

- Best-case Complexity: The minimum time an algorithm takes to complete for any input of size n.

Steps in Runtime Analysis

1. Identify the Basic Operations: Determine the fundamental operations whose execution time will
be counted. This could be comparisons, assignments, arithmetic operations, etc.

2. Count the Number of Basic Operations: Analyze how the number of basic operations grows with
the input size. This usually involves looping constructs, recursive calls, and conditionals.

3. Express the Count Using Asymptotic Notation: Represent the growth rate using Big O, Omega, or
Theta notation.
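These steps can be made concrete with a small instrumented function. The sketch below (illustrative) counts the basic operation, one comparison per inner iteration of two nested loops; step 3 would then summarize the resulting count n*(n-1)/2 as O(n^2).

def count_pair_comparisons(arr):
    count = 0
    n = len(arr)
    for i in range(n):             # outer loop runs n times
        for j in range(i + 1, n):  # inner loop runs n-1, n-2, ..., 1, 0 times
            count += 1             # one basic operation per iteration
    return count

print(count_pair_comparisons(list(range(10))))  # 45 = 10 * 9 / 2
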

Examples

1. Linear Search:

def linear_search(arr, x):
    # Scan the list from left to right; return the index of the first match.
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1  # x is not in the list

- Basic Operation: Comparison (arr[i] == x)

- Worst-case Complexity: O(n) (when x is not in the array)

- Best-case Complexity: O(1) (when x is the first element)

- Average-case Complexity: O(n) (assuming x is equally likely to be any element or not in the array)

2. Binary Search:

def binary_search(arr, x):
    # Requires arr to be sorted; halves the search interval each iteration.
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            left = mid + 1   # x can only be in the upper half
        else:
            right = mid - 1  # x can only be in the lower half
    return -1  # x is not in the list

- Basic Operation: Comparison (arr[mid] == x)

- Worst-case Complexity: O(log n) (when x is not in the array, the interval is halved until it is empty)

- Best-case Complexity: O(1) (when x is at the middle index)

- Average-case Complexity: O(log n)
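A quick usage check of the two functions above (illustrative sample data; binary_search assumes the list is already sorted):

data = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(linear_search(data, 23))  # 5: found after checking six elements
print(binary_search(data, 23))  # 5: found after inspecting three midpoints
print(binary_search(data, 7))   # -1: not present
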

Analyzing a More Complex Algorithm

Consider the example of Merge Sort:

def merge_sort(arr):
    # Recursively sort each half, then merge the sorted halves back into arr.
    if len(arr) > 1:
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]

        merge_sort(left_half)
        merge_sort(right_half)

        i = j = k = 0

        # Merge: repeatedly copy the smaller front element into arr.
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1

        # Copy any elements remaining in either half.
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1

        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1
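Usage is straightforward, since the function sorts the list in place (illustrative sample data):

data = [38, 27, 43, 3, 9, 82, 10]
merge_sort(data)
print(data)  # [3, 9, 10, 27, 38, 43, 82]
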

- Basic Operations: Comparisons and assignments

- Divide Step: Takes O(1) time to compute the midpoint

- Recur Step: Divides the array into two halves, leading to T(n/2) for each half

- Merge Step: Takes O(n) time to merge the two sorted halves

Combining these steps gives the recurrence T(n) = 2T(n/2) + O(n), which solves to O(n log n).

Thus, runtime analysis helps in understanding how an algorithm's performance scales with input
size, which is crucial for choosing the right algorithm for a given problem, especially when dealing
with large datasets or performance-critical applications.

Growth of Functions

The growth of functions is a fundamental concept in algorithm analysis that helps us understand how
the running time or space requirements of an algorithm increase with the size of the input. This is
crucial for comparing the efficiency of different algorithms, especially as the input size becomes
large.

Common Growth Functions

1. Constant Time (O(1)):

- The running time does not change with the input size.

- Example: Accessing a specific element in an array.


2. Logarithmic Time (O(log n)):

- The running time grows logarithmically as the input size increases.

- Example: Binary search in a sorted array.

3. Linear Time (O(n)):

- The running time grows linearly with the input size.

- Example: Scanning all elements in an array.

4. Log-Linear Time (O(n log n)):

- The running time grows log-linearly with the input size.

- Example: Efficient sorting algorithms like Merge Sort and Quick Sort.

5. Quadratic Time (O(n^2)):

- The running time grows quadratically with the input size.

- Example: Simple sorting algorithms like Bubble Sort, Insertion Sort, and Selection Sort.

6. Cubic Time (O(n^3)):

- The running time grows cubically with the input size.

- Example: Algorithms that involve three nested loops.

7. Exponential Time (O(2^n)):

- The running time grows exponentially with the input size.

- Example: Recursive algorithms that solve the Towers of Hanoi problem (see the sketch after this list).

8. Factorial Time (O(n!)):

- The running time grows factorially with the input size.

- Example: Brute force algorithms for the Traveling Salesman Problem.
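As promised in item 7 above, here is the classic recursive Towers of Hanoi solution, a minimal sketch of exponential growth: each call spawns two recursive calls on n - 1 disks, so moving n disks takes T(n) = 2T(n-1) + 1 = 2^n - 1 moves.

def hanoi(n, source, target, spare):
    # Returns the number of moves: T(n) = 2*T(n-1) + 1 = 2^n - 1
    if n == 0:
        return 0
    moves = hanoi(n - 1, source, spare, target)   # move n-1 disks out of the way
    print(f"move disk {n} from {source} to {target}")
    moves += 1
    moves += hanoi(n - 1, spare, target, source)  # move them back on top
    return moves

print(hanoi(3, "A", "C", "B"))  # prints 7 moves: 2^3 - 1
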

Comparing Growth Rates

When comparing the growth rates of different functions, we can see how they scale as the input size
n increases. Here is a list of common growth functions in increasing order of their growth rates:

1. Constant: O(1)

2. Logarithmic: O(log n)

3. Linear: O(n)

4. Log-linear: O(n log n)

5. Quadratic: O(n^2)

6. Cubic: O(n^3)

7. Exponential: O(2^n)

8. Factorial: O(n!)

Visualizing Growth Functions

To better understand the differences in growth rates, let's visualize them:

n:          1    2    3     4    5     10         15     20       25     30
---------------------------------------------------------------------------
O(1):       1    1    1     1    1     1          1      1        1      1
O(log n):   0    1    1.6   2    2.3   3.3        3.9    4.3      4.6    5
O(n):       1    2    3     4    5     10         15     20       25     30
O(n log n): 0    2    4.8   8    11.6  33         58.5   86       115    147
O(n^2):     1    4    9     16   25    100        225    400      625    900
O(n^3):     1    8    27    64   125   1000       3375   8000     15625  27000
O(2^n):     2    4    8     16   32    1024       32768  1048576  -      -
O(n!):      1    2    6     24   120   3.63*10^6  -      -        -      -
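The table above can be regenerated with a few lines of Python (an illustrative sketch using the standard math module; like the table, it omits the exponential and factorial values once they become unwieldy):

import math

for n in [1, 2, 3, 4, 5, 10, 15, 20, 25, 30]:
    print(n,
          1,                                      # O(1)
          round(math.log2(n), 1),                 # O(log n)
          n,                                      # O(n)
          round(n * math.log2(n), 1),             # O(n log n)
          n ** 2,                                 # O(n^2)
          n ** 3,                                 # O(n^3)
          2 ** n if n <= 20 else "-",             # O(2^n)
          math.factorial(n) if n <= 10 else "-")  # O(n!)
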

Implications of Growth Rates

- Efficient Algorithms: Algorithms with slower growth rates (e.g., O(1), O(log n), O(n)) are generally
more efficient and scalable.

- Inefficient Algorithms: Algorithms with faster growth rates (e.g., O(2^n), O(n!)) are generally less
efficient and may become impractical for large input sizes.

Understanding the growth of functions is essential for analyzing the efficiency of algorithms. By
comparing growth rates, we can predict how algorithms will perform as the input size increases and
choose the most appropriate algorithm for a given problem.

Asymptotic Notations

Asymptotic notations are mathematical tools used to describe the running time or space complexity
of algorithms in terms of their input size. They provide a way to classify algorithms according to their
performance and scalability, especially for large input sizes. The most commonly used asymptotic
notations are Big O, Big Omega, and Big Theta.

Common Asymptotic Notations

1. Big O Notation (O)

- Definition: Describes an upper bound on the time complexity of an algorithm. It represents the
worst-case scenario.

- Purpose: To provide an upper limit on the growth rate of the function.

- Example: If an algorithm has a time complexity of O(n), it means that in the worst case, the running
time grows linearly with the size of the input.

- Mathematical Representation: f(n) = O(g(n)) if there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0.

2. Big Omega Notation (Ω)

- Definition: Describes a lower bound on the time complexity of an algorithm. It represents the best-case scenario.

- Purpose: To provide a lower limit on the growth rate of the function.

- Example: If an algorithm has a time complexity of Ω(n), it means that in the best case, the running
time grows linearly with the size of the input.

- Mathematical Representation: f(n) = Ω(g(n)) if there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0.

3. Big Theta Notation (Θ)

- Definition: Describes a tight bound on the time complexity of an algorithm. It represents both the
upper and lower bounds.

- Purpose: To provide an exact asymptotic behavior of the function.

- Example: If an algorithm has a time complexity of Θ(n), it means that the running time grows
linearly with the size of the input.

- Mathematical Representation: f(n) = Θ(g(n)) if there exist positive constants c1, c2, and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.

4. Little o Notation (o)

- Definition: Describes an upper bound that is not tight. It indicates that the growth rate of the
function is strictly less than the comparison function.

- Purpose: To show that a function grows slower than another function.

- Example: If an algorithm has a time complexity of o(n^2), it means that the running time grows
slower than quadratic time.

- Mathematical Representation: f(n) = o(g(n)) if for every positive constant c there exists an n0 such that 0 <= f(n) < c*g(n) for all n >= n0.

5. Little Omega Notation (ω)

- Definition: Describes a lower bound that is not tight. It indicates that the growth rate of the function
is strictly greater than the comparison function.

- Purpose: To show that a function grows faster than another function.

- Example: If an algorithm has a time complexity of ω(n), it means that the running time grows faster
than linear time.

- Mathematical Representation: f(n) = ω(g(n)) if for every positive constant c there exists an n0 such that 0 <= c*g(n) < f(n) for all n >= n0.
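As a worked check of these definitions, the short sketch below (illustrative; the witnesses c = 4 and n0 = 2 are one valid choice among many) verifies numerically that f(n) = 3n + 2 is O(n):

def f(n):
    return 3 * n + 2

c, n0 = 4, 2
# Big O definition with g(n) = n: 0 <= f(n) <= c*g(n) for all n >= n0
print(all(0 <= f(n) <= c * n for n in range(n0, 10_000)))  # True
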

Examples

1. Linear Search:

- Best-case (Ω): Ω(1) - When the element is found at the first position.

- Worst-case (O): O(n) - When the element is not in the array and all elements are checked.

- Tight-bound (Θ): Θ(n) - Reflects the average case.

2. Binary Search:

- Best-case (Ω): Ω(1) - When the element is at the middle position.

- Worst-case (O): O(log n) - When the element is not in the array and the array is halved each time.

- Tight-bound (Θ): Θ(log n) - Reflects the average case.
