
DESIGN LAB

The document outlines a laboratory course on Design and Analysis of Algorithms, detailing various experiments focused on sorting algorithms, factorial computation, and optimization techniques. It includes aims, theoretical explanations, algorithms, and comparative analyses of different methods such as Insertion Sort, Selection Sort, Iterative and Recursive methods, and Quick Sort. Each experiment concludes with a summary of findings and performance metrics, emphasizing the efficiency of different algorithms under various conditions.

Design and Analysis of Algorithm Lab

CSE3005L

School of Engineering & Sciences

Department of Computer Science and Engineering

Submitted by

Student Name: Jaiprakash Biswal
Enrollment No.: 220160203032
Programme: B Tech CSE
Department: SOES
Session/Semester: 2022-26/V

Submitted to
Faculty Name: Ms. Sunita Sahu
INDEX

S No.  Aim of the experiment
1      Implementation and comparative analysis of selection sort and insertion sort on the basis of comparisons required to sort large lists.
2      Implement and analyse the factorial of a number using iterative and recursive methods.
3      Write a program to implement Strassen’s matrix multiplication algorithm and analyse its complexity.
4      Write a program to implement the divide-and-conquer strategy Merge Sort and analyse its complexity.
5      Write a program to implement the divide-and-conquer strategy Quick Sort and analyse its complexity.
6      Write a program to implement the DFS and BFS algorithms.
7      Implement the fractional knapsack problem using the greedy strategy.
8      Apply the greedy method to compress the given data using Huffman encoding.
9      Implement a minimum spanning tree using Prim’s algorithm and analyse its time complexity.
10     Apply dynamic programming methodology to implement the 0/1 knapsack problem.
Experiment No. 1

Aim: Comparative analysis of the comparisons required to sort lists for large values of n using Selection Sort and Insertion Sort.

Theory:

i) Insertion Sort: Insertion Sort is a simple sorting algorithm that sorts a list by taking one element
at a time and placing it in its correct position. It works by comparing the current element with the
elements before it and moving them one position ahead if they are larger. This process is repeated
until the list is sorted.

Algorithm:

Step 1. Start from the second element (first element is already sorted).
Step 2. Compare the current element with elements in the sorted part.
Step 3. Shift elements that are greater than the current element one position to the right.
Step 4. Insert the current element into its correct position.
Step 5. Repeat until the entire array is sorted

Example:

Initial Array: [ 12, 11, 13, 5, 6]

Step 1. Compare 11 with 12. Since 11 < 12, move 12 to the right. Insert 11 at the beginning.
Hence Array becomes [ 11, 12, 13, 5, 6]

Step 2. Compare 13 with 12. Since 13 > 12, no movement needed.

Hence the array remains the same: [ 11, 12, 13, 5, 6]

Step 3.

• Compare 5 with 13. Since 5 < 13, move 13 to the right.


Now array = [11,12,5,13,6]

• Compare 5 with 12. Since 5 < 12, move 12 to the right.


Now array = [11,5,12,13,6]

• Compare 5 with 11. Since 5 < 11, move 11 to the right. Insert 5 at the beginning.
Hence array becomes [5,11,12,13,6]

Step 4.

• Compare 6 with 13. Since 6 < 13, move 13 to the right.


Now array = [5,11,12,6,13]

• Compare 6 with 12. Since 6 < 12, move 12 to the right.
Now array = [5,11,6,12,13]

• Compare 6 with 11. Since 6 < 11, move 11 to the right.
Now array = [5,6,11,12,13]

• Compare 6 with 5. Since 6 > 5, no movement is required. Insert 6 after 5.

Now Final Sorted Array: [5,6,11,12,13]

Source Code:
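The original listing is an image; the following is a minimal Python sketch of the insertion sort described above. The comparison counter is an addition of this sketch (not part of the original program) to support the counts reported in Table 1.2:

```python
def insertion_sort(arr):
    """Sort arr in place and return the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements of the sorted part that are greater than key
        while j >= 0:
            comparisons += 1
            if arr[j] > key:
                arr[j + 1] = arr[j]
                j -= 1
            else:
                break
        arr[j + 1] = key  # insert key into its correct position
    return comparisons

data = [12, 11, 13, 5, 6]
insertion_sort(data)
print(data)  # [5, 6, 11, 12, 13]
```

On an already sorted list the inner loop stops after one comparison per element, giving the O(n) best case noted below.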

Output:

ii) Selection Sort: Selection Sort is a simple sorting algorithm that sorts a list by repeatedly finding
the smallest (or largest) element from the unsorted part and swapping it with the first unsorted
element. This process is repeated until the entire list is sorted.

Algorithm:

Step 1. Start from the first element.


Step 2. Find the smallest element in the unsorted part of the array.
Step 3. Swap it with the first element of the unsorted part.
Step 4. Move the boundary between sorted and unsorted parts.
Step 5. Repeat until the entire array is sorted.

Example:

Initial Array: [ 12, 11, 13, 5, 6]


Step 1. Find the minimum element in the array from index 0 to 4.

• Minimum element is 5 at index 3.


• Swap 5 with the first element (12).
Array becomes [5,12,11,13,6]

Step 2. Find the minimum element in the array from index 1 to 4.

• Minimum element is 6 at index 4.


• Swap 6 with the second element (11).
Hence array = [5,6,12,11,13]

Step 3. Find the minimum element in the array from index 2 to 4.

• Minimum element is 11 at index 4.


• Swap 11 with the third element (13).
Array = [5,6,11,12,13]

Step 4. Find the minimum element in the array from index 3 to 4.

• Minimum element is 12 at index 3.


• No swap needed as 12 is already in the correct position.
Now array = [5,6,11,12,13]

Step 5. Only one element left, no need to find the minimum.

• The array is already sorted.

Final Sorted Array: [ 5, 6, 11, 12, 13]

Source Code :
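The original listing is an image; a minimal Python sketch of the selection sort described above, again with an added comparison counter for the analysis that follows:

```python
def selection_sort(arr):
    """Sort arr in place and return the number of comparisons made."""
    comparisons = 0
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        # Find the smallest element in the unsorted part
        for j in range(i + 1, n):
            comparisons += 1
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]  # swap it into place
    return comparisons

data = [12, 11, 13, 5, 6]
selection_sort(data)
print(data)  # [5, 6, 11, 12, 13]
```

Selection sort always performs n(n-1)/2 comparisons regardless of the input order, which is why its best case is still O(n²).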

Output :
Comparative Analysis:

Basis of Comparison              Insertion Sort   Selection Sort
Best Case (Time Complexity)      O(n)             O(n²)
Average Case (Time Complexity)   O(n²)            O(n²)
Worst Case (Time Complexity)     O(n²)            O(n²)
Space Complexity                 O(1)             O(1)

Table 1.1: Comparison between Insertion Sort and Selection Sort (both sort in place, so the auxiliary space is O(1))

Array Size   Case           Insertion Sort   Selection Sort
n=10         Best Case      10               100
             Average Case   100              100
             Worst Case     100              100
n=100        Best Case      100              10000
             Average Case   10000            10000
             Worst Case     10000            10000
n=1000       Best Case      1000             1000000
             Average Case   1000000          1000000
             Worst Case     1000000          1000000

Table 1.2: Comparison between Insertion Sort and Selection Sort on the basis of approximate comparison counts

Conclusion: Insertion sort is often preferred for smaller or partially sorted datasets due to its better
performance in the best-case scenario while selection sort is simpler but less efficient for larger
datasets.
Experiment No. 2

Aim: Implement and analyse a factorial-of-a-number program using iterative and recursive methods.

Theory:

i) What is Iterative Method?

The Iterative method involves solving a problem by repeatedly applying a set of instructions using
loops like for or while. Instead of recursion, iteration continuously updates variables and checks for
a termination condition, like reaching a maximum number of iterations or achieving a desired level
of accuracy. It’s commonly used for tasks like summing numbers, searching, or optimizing functions,
providing an efficient way to handle repetitive tasks without excessive memory usage.

Algorithm:

Step 1. Input: Read the input number n.


Step 2. Initialize: Set result =1 to store the product.
Step 3. Iterate: Use a loop to multiply result by each number from 2 to n.
Step 4. Return: After the loop ends, return result as the factorial of n.
Step 5. Base Case: For n=0, return 1 (since 0!=1).

Example:

Step 1. Input: The user enters n=5.

Step 2. Initialize: We initialize a variable result = 1 to store the product of numbers.

Step 3. Iteration: Start a loop from 2 to n (i.e., 2 to 5):

• Multiply result (which is 1 initially) by 2:


result = 1×2=2

• Multiply result by 3:
result = 2×3=6

• Multiply result by 4:
result= 6×4=24

• Multiply result by 5:
result= 24×5=120

Step 4. Return: After the loop finishes, result holds the value 120, which is 5!.

Step 5. Base Case (For n=0): If n=0 , the program would return 1 directly without entering the
loop since 0!=1 by definition.
Source Code:
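The original listing is an image; a minimal Python sketch of the iterative method described above:

```python
def factorial_iterative(n):
    """Compute n! with a simple loop; for n = 0 the loop body never runs, so 1 is returned."""
    result = 1
    for i in range(2, n + 1):
        result *= i  # result accumulates 2 * 3 * ... * n
    return result

print(factorial_iterative(5))  # 120
print(factorial_iterative(0))  # 1
```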

Output:

ii) What is Recursion?

It is a programming technique where a function calls itself to solve a problem. This approach typically
involves defining a base case to stop the recursive calls and a recursive case that reduces the problem
size. Each call adds a new layer to the call stack, which can lead to elegant solutions for problems
like tree traversals or calculating factorials. However, recursion can consume more memory due to
stack frames and may result in a Recursion Error if the maximum recursion depth is exceeded.

Algorithm:

Step 1. Input: Read the input number n.


Step 2. Base Case: If n=0 or n=1, return 1 (since 0!=1 and 1!=1).
Step 3. Recursive Step: If n>1, return n × factorial(n−1) .
Step 4. Return: The recursive call will continue until the base case is reached, and then it will
return the result.

Example:

Step 1. Input: The user enters n=5.

Step 2. Recursive Calls:

• 5!= 5×4!

• 4!= 4×3!
• 3!= 3×2!

• 2!= 2×1!

• Base case: 1!=1

Step 3. Return from Base Case : The base case is reached when n=1, and the function returns 1.

Step 4. Backtracking and Multiplication: Now the recursion unwinds, and the results from each
recursive call are multiplied:

• 2!= 2×1=2

• 3!= 3×2=6

• 4!= 4×6=24

• 5!= 5×24=120

Step 5. Final Output: The result, 120, is returned as the factorial of 5.

Source Code:
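The original listing is an image; a minimal Python sketch of the recursive method described above:

```python
def factorial_recursive(n):
    """Compute n! recursively; the base case stops the chain of calls."""
    if n <= 1:  # base case: 0! = 1! = 1
        return 1
    return n * factorial_recursive(n - 1)  # recursive case: n * (n-1)!

print(factorial_recursive(5))  # 120
```

Each call adds a stack frame, so very large n would hit Python's recursion limit, which is the practical issue noted in the analysis below.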

Output:
Comparative Analysis:

Basis of Comparison   Iteration                          Recursion
Time Complexity       O(n): one multiplication per       O(n): one recursive call per
                      loop pass from 2 to n              value from n down to 1
Space Complexity      O(1): only a few loop              O(n): one stack frame per call
                      variables are needed               (O(1) only in languages that
                                                         optimise tail calls)
Stack Overflow Risk   None, since there are no           Prone to stack overflow if the
                      recursive calls                    recursion depth is too large or
                                                         the base case is missing

Table 2: Comparison between iteration and recursion for computing n! on the basis of time and space complexity

Conclusion:

• Iterative is more memory efficient for large numbers.

• Recursive is easier to implement and understand but may face issues with large input due to
recursion limits.
Experiment No. 5

Aim: Write a program to implement Divide and conquer strategy (Quick Sort) and analyze its
complexity. Repeat the experiment for different values of n (the number of elements in the list to be
sorted) and plot a graph of the time taken versus n.

Theory:

i) What is Quick Sort?

Quick Sort is a highly efficient, comparison-based divide-and-conquer sorting algorithm. It is commonly used due to its average-case efficiency and is one of the most popular sorting algorithms in practice.

Algorithm:

Step 1. Choose a Pivot: Select an element from the array as the pivot. Various strategies can be
used to select the pivot, such as choosing the first element, the last element, the middle element, or a
random element.

Step 2. Partitioning: Rearrange the elements in the array so that all elements less than the pivot are
on the left, and all elements greater than the pivot are on the right. The pivot will be in its final
position after this step.

Step 3. Recursion: Recursively apply the above steps to the sub-arrays formed by splitting the
array at the pivot.

Step 4. Base Case: The recursion ends when the size of the array is 1 or 0, at which point the array
is already sorted.

Example

Initial Array: [10, 7, 8, 9, 1, 5]

Step 1. Choose a pivot, say 5 (last element).

Step 2. Partitioning: Compare each element with the pivot:

- 10 > 5 (do nothing)


- 7 > 5 (do nothing)
- 8 > 5 (do nothing)
- 9 > 5 (do nothing)
- 1 < 5 (swap 1 with 10)

• After the swaps, the array looks like this: [1, 7, 8, 9, 10, 5]
• Finally, swap the pivot 5 with 7 (the first element greater than 5).
• Now, the array is [1, 5, 8, 9, 10, 7], and the pivot 5 is in the correct position.
Step 3. Recursive Calls:

• Left sub-array: [1] (already sorted)


• Right sub-array: [8, 9, 10, 7]
- Choose a pivot 7, and partition:
- After partitioning, we get [7, 8, 9, 10].

Final Sorted Array: [1, 5, 7, 8, 9, 10]

Source Code:
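The original listing is an image; a minimal Python sketch of Quick Sort with the last-element pivot (Lomuto partition) used in the example above, plus a small timing loop for the time-versus-n measurement the aim asks for (the timings printed will vary by machine):

```python
import random
import time

def partition(arr, low, high):
    """Lomuto partition: last element is the pivot; returns its final index."""
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # place pivot in final position
    return i + 1

def quick_sort(arr, low=0, high=None):
    """Recursively sort arr[low..high] in place."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)

data = [10, 7, 8, 9, 1, 5]
quick_sort(data)
print(data)  # [1, 5, 7, 8, 9, 10]

# Time taken versus n on random inputs
for n in (100, 500, 1000):
    sample = random.sample(range(10 * n), n)
    start = time.perf_counter()
    quick_sort(sample)
    print(n, round(time.perf_counter() - start, 5))
```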

Output:
Comparative Analysis:

Number of      Time Taken     Best Case         Average Case      Worst Case        Space
Elements (n)   (in seconds)   Time Complexity   Time Complexity   Time Complexity   Complexity
100            0.0001         O(n log n)        O(n log n)        O(n²)             O(log n)
200            0.0002         O(n log n)        O(n log n)        O(n²)             O(log n)
300            0.0003         O(n log n)        O(n log n)        O(n²)             O(log n)
400            0.0004         O(n log n)        O(n log n)        O(n²)             O(log n)
500            0.0005         O(n log n)        O(n log n)        O(n²)             O(log n)
600            0.0006         O(n log n)        O(n log n)        O(n²)             O(log n)
700            0.0007         O(n log n)        O(n log n)        O(n²)             O(log n)
800            0.0008         O(n log n)        O(n log n)        O(n²)             O(log n)
900            0.0010         O(n log n)        O(n log n)        O(n²)             O(log n)
1000           0.0007         O(n log n)        O(n log n)        O(n²)             O(log n)

Table 3. Comparison on the basis of time and space complexity for different values of n

• Best Case Time Complexity:


- The best case occurs when the pivot selected divides the list into two equal halves consistently.
This results in balanced partitions and the time complexity is O(n log n). This is the same as the
average case for Quick Sort because even in the best scenario, the algorithm still performs log n
recursive calls with each call handling approximately half of the elements.

• Average Case Time Complexity:


- The average case also remains O(n log n) because most input distributions (random data) result in
reasonably balanced partitions.

• Worst Case Time Complexity:


- The worst case, O(n²), occurs when the pivot consistently ends up being the smallest or largest element, leading to highly unbalanced partitions.

• Space Complexity:
- The space complexity is O(log n), mainly due to the recursive calls made during the sorting process.
This depth depends on the partitioning, but in the average and best cases, it stays manageable.

Graph:
Fig 3.1. Time taken versus n graph

Note: The actual time taken may vary depending on the system and the input data.

Conclusion:

For practical purposes, QuickSort is one of the most efficient sorting algorithms with an average time
complexity of O(nlogn) and relatively low space complexity, making it highly suitable for large
datasets.
Experiment No. 4

Aim: Write a program to implement Divide and Conquer Strategy Merge sort and analyse its
complexity.

Merge Sort: Merge sort is a sorting algorithm that follows the divide-and-conquer approach. It works by recursively dividing the input array into smaller sub-arrays, sorting those sub-arrays, and then merging them back together to obtain the sorted array.

Steps of Merge Sort:

1. Divide: If the array has more than one element, split it into two subarrays.

2. Conquer: Recursively apply merge sort to both subarrays.

3. Combine: Merge the two sorted subarrays into a single sorted array.

Example: Array = [38, 27, 43, 3, 9, 82, 10]

Step 1. Divide: [38, 27, 43, 3] and [9, 82, 10]

Step 2. Conquer:

Left sub-array: [38, 27, 43, 3]
• Split into [38, 27] and [43, 3]
• Sort each half: [27, 38] and [3, 43]
• Merge: [3, 27, 38, 43]

Right sub-array: [9, 82, 10]
• Split into [9, 82] and [10]
• Merge: [9, 10, 82]

Step 3. Combine: merge [3, 27, 38, 43] and [9, 10, 82] to get [3, 9, 10, 27, 38, 43, 82]

Source Code:
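The original listing is an image; a minimal Python sketch of the divide, conquer, and combine steps described above:

```python
def merge(left, right):
    """Combine: merge two sorted lists into one sorted list."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps the sort stable
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # leftover elements are already sorted
    out.extend(right[j:])
    return out

def merge_sort(arr):
    """Divide the array in half, sort each half recursively, then merge."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```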

Output:
Time Complexity: O(n log n). The time complexity of Merge Sort remains the same in all cases (best, average, and worst) due to its consistent approach to dividing and merging the array.

Space Complexity: O(n). Merge Sort requires additional space proportional to the size of the array being sorted because it needs to store the subarrays during the merge process.

Conclusion:

• Stable Sort: Maintains the relative order of equal elements.

• Consistent Performance: Efficient for large datasets and linked lists.


Experiment No. 3

Aim: Write a program to implement Strassen’s matrix multiplication algorithm and analyse its complexity.

Strassen’s Matrix Multiplication:

Strassen's matrix multiplication is an optimized algorithm for multiplying two square matrices.
Standard matrix multiplication has a time complexity of O(n^3), where n is the number of rows (or
columns) of the matrices being multiplied. Strassen's algorithm reduces this complexity to O(n^2.81),
making it faster for large matrices.

How Strassen's Algorithm works:

Strassen's algorithm divides each matrix into four equal-sized sub-matrices:

Divide: For two matrices A and B, both of order n × n, divide each into four smaller submatrices:

A = | A11  A12 |      B = | B11  B12 |
    | A21  A22 |          | B21  B22 |

Recursive Multiplication: The product of the two matrices is computed using seven products
of sub-matrices:

1. P = (A11 + A22)(B11 + B22)

2. Q = (A21 + A22) B11

3. R = A11 (B12 - B22)

4. S = A22 (B21 - B11)

5. T = (A11 + A12) B22

6. U = (A21 - A11)(B11 + B12)

7. V = (A12 - A22)(B21 + B22)

(Matrix multiplication is not commutative, so the order of the factors matters.)

Recombination: The final product is obtained by combining these seven products:

C11 = P + S – T + V

C12 = R + T

C21 = Q + S

C22 = P + R – Q + U
Conquer: Recursively apply the same process to the smaller submatrices until the base
case(typically 1 X 1) is reached.

Source Code:
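The original listing is an image; a minimal Python sketch of the divide, recursive multiplication, and recombination steps above, using nested lists and assuming the matrices are square with order a power of two:

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) with 7 recursive products."""
    n = len(A)
    if n == 1:  # base case: 1 x 1 matrices
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Divide each matrix into four m x m submatrices
    A11 = [row[:m] for row in A[:m]]; A12 = [row[m:] for row in A[:m]]
    A21 = [row[:m] for row in A[m:]]; A22 = [row[m:] for row in A[m:]]
    B11 = [row[:m] for row in B[:m]]; B12 = [row[m:] for row in B[:m]]
    B21 = [row[:m] for row in B[m:]]; B22 = [row[m:] for row in B[m:]]
    # The seven products
    P = strassen(add(A11, A22), add(B11, B22))
    Q = strassen(add(A21, A22), B11)
    R = strassen(A11, sub(B12, B22))
    S = strassen(A22, sub(B21, B11))
    T = strassen(add(A11, A12), B22)
    U = strassen(sub(A21, A11), add(B11, B12))
    V = strassen(sub(A12, A22), add(B21, B22))
    # Recombination
    C11 = add(sub(add(P, S), T), V)
    C12 = add(R, T)
    C21 = add(Q, S)
    C22 = add(sub(add(P, R), Q), U)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Matrices whose order is not a power of two would need padding with zero rows and columns, which this sketch omits.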

Output:

Conclusion: Strassen's algorithm multiplies matrices efficiently (O(n^2.81)), boosting performance in linear algebra, machine learning, and computer graphics.
Experiment No. 6

Aim: Write a program to implement DFS and BFS algorithm.

DFS: Depth-First Search (DFS) is a fundamental graph traversal algorithm used to explore nodes
(vertices) in a graph or tree data structure. It uses a stack to keep track of vertices.

How DFS Works:


1. Choose a starting node (root).
2. Explore as far as possible along each branch before backtracking.
3. Visit each node and mark it as visited.

DFS Steps:
1. Selection: Choose a starting node (root).
2. Exploration: Explore the neighbors of the current node.
3. Visitation: Mark the current node as visited.
4. Backtracking: Return to the previous node and explore its remaining neighbors.
5. Repeat: Continue steps 2-4 until all nodes are visited.
Suppose we have the following graph:
A
/\
B C
/\ \
D E F

Starting from node A, a pre-order DFS traversal would visit nodes in the following order:
A, B, D, E, C, F
Source code:
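The original listing is an image; a minimal Python sketch of DFS on the example graph above, written recursively (the call stack plays the role of the explicit stack mentioned in the theory):

```python
def dfs(graph, start, visited=None):
    """Recursive depth-first traversal; returns vertices in visit order."""
    if visited is None:
        visited = []
    visited.append(start)  # mark the current node as visited
    for neighbour in graph[start]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited)  # explore as far as possible, then backtrack
    return visited

# Adjacency-list form of the example graph above
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'E', 'C', 'F']
```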
Output:

BFS: Breadth-First Search (BFS) is a fundamental graph traversal algorithm used to explore nodes (vertices) in a graph or tree data structure. It uses a queue to track the vertices to visit.

How BFS Works:

1. Choose a starting node (root).


2. Explore all nodes at the current level before moving to the next level.
3. Visit each node and mark it as visited.

BFS Steps:

1. Selection: Choose a starting node (root).


2. Initialization: Create an empty queue and add the root node.
3. Exploration: Dequeue a node, visit it, and enqueue its unvisited neighbors.
4. Repeat: Continue step 3 until the queue is empty.

Suppose we have the following graph:


A
/\
B C
/ \ \
D E F

Starting from node A, a BFS traversal would visit nodes in the following order:
A, B, C, D, E, F
Source Code:
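The original listing is an image; a minimal Python sketch of BFS on the same example graph, using a queue as the steps above describe:

```python
from collections import deque

def bfs(graph, start):
    """Level-by-level traversal using a queue; returns vertices in visit order."""
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()  # dequeue the next node
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.append(neighbour)  # mark visited when enqueued
                queue.append(neighbour)
    return visited

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D', 'E', 'F']
```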

Output:
Analysis of DFS and BFS on the basis of complexities:

Complexity         Case           DFS      BFS
Time Complexity    Best Case      O(V+E)   O(V+E)
                   Average Case   O(V+E)   O(V+E)
                   Worst Case     O(V+E)   O(V+E)
Space Complexity   Best Case      O(V)     O(V)
                   Average Case   O(V)     O(V)
                   Worst Case     O(V)     O(V)

Conclusion: -

Depth-First Search (DFS): DFS explores graphs depth-first. Useful in searching, topological
sorting, and social networks.

Breadth-First Search (BFS): BFS explores graphs level-by-level. Useful in shortest paths,
minimum spanning tree, and web crawlers.
Experiment No. 7

Aim: Implement fractional knapsack problem using Greedy Strategy.

Knapsack problem: The Knapsack Problem is an optimization problem where you are given a set
of items, each with a weight and a value. The objective is to select a combination of items to maximize
the total value without exceeding a given weight limit (capacity of the knapsack).

Imagine we have a knapsack (backpack) with a limited capacity (weight or volume). We are given a
set of items, each with:
1. Weight (w)
2. Value (v)
Our goal is to select a subset of items to include in the knapsack such that:
1. The total weight does not exceed the knapsack capacity.
2. The total value is maximized.

Types of Knapsack:

1. Fractional Knapsack
2. 0/1 Knapsack

Fractional Knapsack: In the Fractional Knapsack Problem, items can be included partially
(fractional).

Greedy Algorithm:
1. Calculate the value-to-weight ratio (v/w) for each item.
2. Sort items by v/w ratio in descending order.
3. Include items with the highest v/w ratio until knapsack is full.
4. If an item doesn't fit entirely, include a fraction of it.

Example: Capacity of the knapsack: 20

Items          Item 1   Item 2   Item 3
Value          25       24       15
Weight         18       15       10
Value/Weight   ≈1.39    1.6      1.5

Greedy selection: Item 2 has the highest value-to-weight ratio, so take it entirely (weight 15, profit 24), leaving capacity 5; then take 5/10 of Item 3.

Maximum Profit = P2 + (5/10) × P3
               = 24 + (5/10) × 15
               = 31.5

Source Code:
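The original listing is an image; a minimal Python sketch of the greedy algorithm described above:

```python
def fractional_knapsack(values, weights, capacity):
    """Maximise total value; items may be taken fractionally."""
    # Sort items by value-to-weight ratio, highest first
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity >= weight:       # item fits entirely
            total += value
            capacity -= weight
        else:                        # take only the fraction that fits
            total += value * capacity / weight
            break
    return total

print(fractional_knapsack([25, 24, 15], [18, 15, 10], 20))  # 31.5
```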
Output:

Space Complexity: O(n) for storing items.


Time Complexity: O(n log n) due to sorting.

Conclusion: The Fractional Knapsack Problem maximizes value within capacity constraints, solved
using a greedy algorithm (O(n log n)). Useful in resource allocation, financial portfolio optimization,
and inventory management.
Experiment No. 10

Aim: Apply dynamic programming methodology to implement 0/1 Knapsack problem.

0/1 Knapsack Problem: In the 0/1 Knapsack Problem, items can only be included or excluded
entirely.
1. A set of items, each with:
- Weight (w)
- Value (v)
2. A knapsack with a limited capacity (W)

Goal: Maximize the total value while staying within the knapsack capacity.

Constraints:
1. Each item can either be included (1) or excluded (0).
2. No item can be included partially.

Dynamic Programming Solution:


1. Initialize a 2D table dp of size (n+1) x (W+1).
2. Fill the table using the recurrence relation:

dp[i][j] = max(dp[i-1][j], dp[i-1][j - w[i]] + v[i])   if w[i] ≤ j
dp[i][j] = dp[i-1][j]                                  otherwise

where w[i] is the weight of item i and v[i] is its value.

Example:
n = 4, M = 5 (knapsack capacity)
W1, W2, W3, W4 = 2, 3, 4, 5
V1, V2, V3, V4 = 3, 4, 5, 6

Using the recurrence relation:

Vi   Wi   i \ j   0   1   2   3   4   5
          0       0   0   0   0   0   0
3    2    1       0   0   3   3   3   3
4    3    2       0   0   3   4   4   7
5    4    3       0   0   3   4   5   7
6    5    4       0   0   3   4   5   7

Maximum value = dp[4][5] = 7.
Source Code:
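The original listing is an image; a minimal Python sketch of the tabulation described above (Python's 0-based indexing shifts w[i] and v[i] to weights[i-1] and values[i-1]):

```python
def knapsack_01(values, weights, capacity):
    """Return the maximum value achievable within the capacity; items are taken whole or not at all."""
    n = len(values)
    # dp[i][j]: best value using the first i items with capacity j
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(capacity + 1):
            dp[i][j] = dp[i - 1][j]  # exclude item i
            if weights[i - 1] <= j:  # include item i if it fits
                dp[i][j] = max(dp[i][j],
                               dp[i - 1][j - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack_01([3, 4, 5, 6], [2, 3, 4, 5], 5))  # 7
```

The printed value matches dp[4][5] = 7 in the worked table above.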

Output:

Time Complexity: O(nW)


Space Complexity: O(nW)

Conclusion: The 0/1 Knapsack Problem optimizes resource allocation, selecting items with
maximum value within capacity constraints (O(nW)). Useful in logistics, supply chain management,
financial portfolio optimization, and inventory control.
