DESIGN LAB
CSE3005L
Submitted by
Submitted to
Faculty Name: Ms. Sunita Sahu
INDEX
Aim: Comparative analysis of the number of comparisons required by Selection Sort and Insertion Sort when sorting lists for large values of n.
Theory:
i) Insertion Sort: Insertion Sort is a simple sorting algorithm that sorts a list by taking one element
at a time and placing it in its correct position. It works by comparing the current element with the
elements before it and moving them one position ahead if they are larger. This process is repeated
until the list is sorted.
Algorithm:
Step 1. Start from the second element (first element is already sorted).
Step 2. Compare the current element with elements in the sorted part.
Step 3. Shift elements that are greater than the current element one position to the right.
Step 4. Insert the current element into its correct position.
Step 5. Repeat until the entire array is sorted.
Example: Consider the array [12, 11, 13, 5, 6].
Step 1. Compare 11 with 12. Since 11 < 12, move 12 to the right and insert 11 at the beginning. The array becomes [11, 12, 13, 5, 6].
Step 2. Compare 13 with 12. Since 13 > 12, it stays in place. The array remains [11, 12, 13, 5, 6].
Step 3. Compare 5 with 13, 12, and 11, moving each one position to the right. Insert 5 at the beginning. The array becomes [5, 11, 12, 13, 6].
Step 4. Compare 6 with 13, 12, and 11, moving each one position to the right. Insert 6 after 5. The array becomes [5, 6, 11, 12, 13], which is sorted.
Source Code:
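A minimal Python sketch of the insertion sort steps described above (the function name `insertion_sort` is our own choice, not from the original listing):

```python
def insertion_sort(arr):
    # Start from the second element; the first is trivially sorted.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift sorted elements greater than key one position right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        # Insert key into its correct position.
        arr[j + 1] = key
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```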
Output:
ii) Selection Sort: Selection Sort is a simple sorting algorithm that sorts a list by repeatedly finding
the smallest (or largest) element from the unsorted part and swapping it with the first unsorted
element. This process is repeated until the entire list is sorted.
Algorithm:
Step 1. Treat the first position as the boundary of the sorted part (initially empty).
Step 2. Find the smallest element in the unsorted part.
Step 3. Swap it with the first unsorted element.
Step 4. Move the boundary one position to the right.
Step 5. Repeat until the entire array is sorted.
Example:
Source Code:
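A minimal Python sketch of selection sort as described above (function name `selection_sort` is illustrative):

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted part.
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it with the first unsorted element.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```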
Output :
Comparative Analysis:
Table 1.2: Comparison between Insertion sort and Selection sort on the basis of Time Complexity
Conclusion: Insertion sort is often preferred for smaller or partially sorted datasets due to its better performance in the best-case scenario, while selection sort is simpler but less efficient for larger datasets.
Experiment No. 2
Aim: Implement and analyse a factorial-of-a-number program using iterative and recursive methods.
Theory:
The Iterative method involves solving a problem by repeatedly applying a set of instructions using
loops like for or while. Instead of recursion, iteration continuously updates variables and checks for
a termination condition, like reaching a maximum number of iterations or achieving a desired level
of accuracy. It’s commonly used for tasks like summing numbers, searching, or optimizing functions,
providing an efficient way to handle repetitive tasks without excessive memory usage.
Algorithm:
Example: Compute 5! iteratively.
Step 1. Input: n = 5.
Step 2. Initialize: result = 1.
Step 3. Loop from i = 2 to n, multiplying result by i each time:
• Multiply result by 2: result = 1 × 2 = 2
• Multiply result by 3: result = 2 × 3 = 6
• Multiply result by 4: result = 6 × 4 = 24
• Multiply result by 5: result = 24 × 5 = 120
Step 4. Return: After the loop finishes, result holds the value 120, which is 5!.
Step 5. Base Case (for n = 0): If n = 0, the program returns 1 directly without entering the loop, since 0! = 1 by definition.
Source Code:
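A minimal Python sketch of the iterative method above (function name `factorial_iterative` is illustrative):

```python
def factorial_iterative(n):
    # result = 1 also covers the base case 0! = 1.
    result = 1
    # Multiply result by every integer from 2 up to n.
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_iterative(5))  # 120
```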
Output:
The Recursive method is a programming technique where a function calls itself to solve a problem. This approach typically
involves defining a base case to stop the recursive calls and a recursive case that reduces the problem
size. Each call adds a new layer to the call stack, which can lead to elegant solutions for problems
like tree traversals or calculating factorials. However, recursion can consume more memory due to
stack frames and may result in a Recursion Error if the maximum recursion depth is exceeded.
Algorithm:
Example: Compute 5! recursively.
Step 1. Call: factorial(5).
Step 2. Recursive Expansion: each call reduces the problem size by one:
• 5! = 5 × 4!
• 4! = 4 × 3!
• 3! = 3 × 2!
• 2! = 2 × 1!
Step 3. Return from Base Case: The base case is reached when n = 1, and the function returns 1.
Step 4. Backtracking and Multiplication: Now the recursion unwinds, and the results from each recursive call are multiplied:
• 2! = 2 × 1 = 2
• 3! = 3 × 2 = 6
• 4! = 4 × 6 = 24
• 5! = 5 × 24 = 120
Source Code:
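A minimal Python sketch of the recursive method above (function name `factorial_recursive` is illustrative):

```python
def factorial_recursive(n):
    # Base case: 0! = 1! = 1 stops the recursion.
    if n <= 1:
        return 1
    # Recursive case: n! = n * (n-1)!
    return n * factorial_recursive(n - 1)

print(factorial_recursive(5))  # 120
```

Note that very large n will hit Python's default recursion limit (about 1000 frames), which is exactly the drawback the theory section mentions.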
Output:
Comparative Analysis:
Table 2. Comparison between Iteration and Recursion on the basis of Time and Space Complexity
Conclusion:
• Recursion is easier to implement and understand but may face issues with large inputs due to recursion-depth limits.
• Iteration avoids the extra stack frames, so it handles large inputs with less memory.
Experiment No. 3
Aim: Write a program to implement Divide and conquer strategy (Quick Sort) and analyze its
complexity. Repeat the experiment for different values of n (the number of elements in the list to be
sorted) and plot a graph of the time taken versus n.
Theory:
Quick Sort is a divide-and-conquer sorting algorithm: it selects a pivot element, partitions the array so that smaller elements lie to the pivot's left and larger ones to its right, and then recursively sorts the two partitions.
Algorithm:
Step 1. Choose a Pivot: Select an element from the array as the pivot. Various strategies can be
used to select the pivot, such as choosing the first element, the last element, the middle element, or a
random element.
Step 2. Partitioning: Rearrange the elements in the array so that all elements less than the pivot are
on the left, and all elements greater than the pivot are on the right. The pivot will be in its final
position after this step.
Step 3. Recursion: Recursively apply the above steps to the sub-arrays formed by splitting the
array at the pivot.
Step 4. Base Case: The recursion ends when the size of the array is 1 or 0, at which point the array
is already sorted.
Example: Consider the array [10, 7, 8, 9, 1, 5], taking the last element 5 as the pivot.
Step 1. Choose a Pivot: pivot = 5.
Step 2. Partitioning: scan the array, swapping elements smaller than the pivot toward the front (here 1 is swapped with 10).
• After the swaps, the array looks like this: [1, 7, 8, 9, 10, 5]
• Finally, swap the pivot 5 with 7 (the first element greater than 5).
• Now, the array is [1, 5, 8, 9, 10, 7], and the pivot 5 is in its correct position.
Step 3. Recursive Calls: repeat the same process on the left sub-array [1] and the right sub-array [8, 9, 10, 7].
Source Code:
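A minimal Python sketch of Quick Sort with a last-element pivot (Lomuto-style partition), plus a small timing loop for the time-versus-n measurement the aim asks for; the function names and the chosen n values are illustrative:

```python
import random
import time

def partition(arr, low, high):
    pivot = arr[high]  # last element as the pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Place the pivot in its final position.
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)   # sort left partition
        quick_sort(arr, p + 1, high)  # sort right partition
    return arr

print(quick_sort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]

# Measure time taken for different values of n (random inputs).
for n in (1000, 2000, 4000):
    data = random.sample(range(10 * n), n)
    t0 = time.perf_counter()
    quick_sort(data)
    print(n, time.perf_counter() - t0)
```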
Output:
Comparative Analysis:
Table 3. Comparison on the basis of time and space complexity for different values of n
• Space Complexity:
- The space complexity is O(log n) on average, mainly due to the recursive calls made during sorting. The recursion depth depends on the partitioning: in the average and best cases it stays at O(log n), but with consistently unbalanced partitions it can grow to O(n).
Graph:
Fig 3.1. Time taken versus n graph
Note: The actual time taken may vary depending on the system and the input data.
Conclusion:
For practical purposes, Quick Sort is one of the most efficient sorting algorithms, with an average time complexity of O(n log n) and relatively low space complexity, making it highly suitable for large datasets.
Experiment No. 4
Aim: Write a program to implement Divide and Conquer Strategy Merge sort and analyse its
complexity.
Theory:
Merge Sort: Merge Sort is a sorting algorithm that follows the divide-and-conquer approach. It works by recursively dividing the input array into smaller sub-arrays, sorting those sub-arrays, and then merging them back together to obtain the sorted array.
1. Divide: If the array has more than one element, split it into two subarrays.
2. Conquer: Recursively sort each subarray.
3. Combine: Merge the two sorted subarrays into a single sorted array.
Example: Sort the array [38, 27, 43, 3, 9, 82, 10].
Step 1. Divide:
[38, 27, 43, 3] and [9, 82, 10]
Step 2. Conquer:
Left sub-array [38, 27, 43, 3]:
[38, 27] and [43, 3]
[27, 38] and [3, 43]
[3, 27, 38, 43]
Right sub-array [9, 82, 10]:
[9, 82] and [10]
[9, 10, 82]
Step 3. Combine:
[3, 27, 38, 43] and [9, 10, 82]
[3, 9, 10, 27, 38, 43, 82]
Source Code:
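A minimal Python sketch of the divide/conquer/combine steps above (function names `merge_sort` and `merge` are illustrative):

```python
def merge(left, right):
    # Combine two already-sorted lists into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # leftover elements are already sorted
    merged.extend(right[j:])
    return merged

def merge_sort(arr):
    # Base case: arrays of size 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    # Divide, conquer each half, then combine.
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```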
Output:
Time Complexity: O(n log n). The time complexity of Merge Sort remains the same in all cases (best, average, and worst) due to its consistent approach to dividing and merging the array.
Space Complexity: O(n). Merge Sort requires additional space proportional to the size of the array being sorted, because it needs to store the subarrays during the merge process.
Conclusion:
Merge Sort guarantees O(n log n) time in every case at the cost of O(n) extra space, making it a dependable choice when worst-case performance matters.
Aim: Write a program to implement Strassen’s matrix multiplication algorithm and analyse its complexity.
Theory:
Strassen's matrix multiplication is an optimized algorithm for multiplying two square matrices.
Standard matrix multiplication has a time complexity of O(n^3), where n is the number of rows (or
columns) of the matrices being multiplied. Strassen's algorithm reduces this complexity to O(n^2.81),
making it faster for large matrices.
Divide: For two matrices A and B, both of order n × n, divide each into four smaller submatrices.
Recursive Multiplication: The product of the two matrices is computed using seven products of sub-matrices:
1. P = (A11 + A22)(B11 + B22)
2. Q = B11(A21 + A22)
3. R = A11(B12 - B22)
4. S = A22(B21 - B11)
5. T = B22(A11 + A12)
6. U = (A21 - A11)(B11 + B12)
7. V = (A12 - A22)(B21 + B22)
Combine: The four quadrants of the result C are then:
C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U
Conquer: Recursively apply the same process to the smaller submatrices until the base case (typically 1 × 1) is reached.
Source Code:
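A minimal Python sketch of Strassen's algorithm using the seven products P through V listed above, for matrices whose order is a power of 2 (padding for other sizes is omitted; all function names are illustrative):

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n == 1:  # base case: 1 x 1 matrices
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Divide each matrix into four m x m submatrices.
    A11 = [row[:m] for row in A[:m]]; A12 = [row[m:] for row in A[:m]]
    A21 = [row[:m] for row in A[m:]]; A22 = [row[m:] for row in A[m:]]
    B11 = [row[:m] for row in B[:m]]; B12 = [row[m:] for row in B[:m]]
    B21 = [row[:m] for row in B[m:]]; B22 = [row[m:] for row in B[m:]]
    # The seven Strassen products.
    P = strassen(add(A11, A22), add(B11, B22))
    Q = strassen(add(A21, A22), B11)
    R = strassen(A11, sub(B12, B22))
    S = strassen(A22, sub(B21, B11))
    T = strassen(add(A11, A12), B22)
    U = strassen(sub(A21, A11), add(B11, B12))
    V = strassen(sub(A12, A22), add(B21, B22))
    # Combine into the four quadrants of C.
    C11 = add(sub(add(P, S), T), V)
    C12 = add(R, T)
    C21 = add(Q, S)
    C22 = add(sub(add(P, R), Q), U)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```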
Output:
DFS: Depth-First Search (DFS) is a fundamental graph traversal algorithm used to explore nodes
(vertices) in a graph or tree data structure. It uses a stack to keep track of vertices.
DFS Steps:
1. Selection: Choose a starting node (root).
2. Exploration: Explore the neighbors of the current node.
3. Visitation: Mark the current node as visited.
4. Backtracking: Return to the previous node and explore its remaining neighbors.
5. Repeat: Continue steps 2-4 until all nodes are visited.
Suppose we have the following graph:
    A
   / \
  B   C
 / \   \
D   E   F
Starting from node A, a pre-order DFS traversal would visit nodes in the following order:
A, B, D, E, C, F
Source code:
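A minimal Python sketch of recursive DFS on the example graph above (the adjacency-list dictionary and function name `dfs` are illustrative):

```python
def dfs(graph, node, visited=None):
    # Visit the node, then recurse into each unvisited neighbour.
    if visited is None:
        visited = []
    visited.append(node)
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited)
    return visited

# The example graph: A -> B, C; B -> D, E; C -> F.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'E', 'C', 'F']
```

The recursion itself plays the role of the stack mentioned in the theory; an explicit stack gives an equivalent iterative version.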
Output:
BFS: Breadth-First Search (BFS) is a fundamental graph traversal algorithm used to explore nodes
(vertices) in a graph or tree data structure. It uses a queue to track the vertices to visit.
BFS Steps:
1. Selection: Choose a starting node (root), mark it visited, and enqueue it.
2. Dequeue: Remove the front node from the queue.
3. Exploration: Visit each unvisited neighbour of that node, mark it visited, and enqueue it.
4. Repeat: Continue steps 2-3 until the queue is empty.
Starting from node A, a BFS traversal would visit nodes in the following order:
A, B, C, D, E, F
Source Code:
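A minimal Python sketch of BFS on the same example graph, using `collections.deque` as the queue (the dictionary and function name are illustrative):

```python
from collections import deque

def bfs(graph, start):
    # Visit nodes level by level, tracking them with a FIFO queue.
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D', 'E', 'F']
```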
Output:
Analysis of DFS and BFS on the basis of complexities:
Both DFS and BFS visit every vertex and edge once, so each runs in O(V + E) time for a graph with V vertices and E edges, using O(V) auxiliary space (the stack for DFS, the queue for BFS).
Conclusion:
Depth-First Search (DFS): DFS explores graphs depth-first. Useful in searching, topological
sorting, and social networks.
Breadth-First Search (BFS): BFS explores graphs level-by-level. Useful in shortest paths,
minimum spanning tree, and web crawlers.
Experiment No. 7
Aim: Write a program to implement the Fractional Knapsack problem using a greedy method and analyse its complexity.
Theory:
Knapsack problem: The Knapsack Problem is an optimization problem where you are given a set
of items, each with a weight and a value. The objective is to select a combination of items to maximize
the total value without exceeding a given weight limit (capacity of the knapsack).
Imagine we have a knapsack (backpack) with a limited capacity (weight or volume). We are given a
set of items, each with:
1. Weight (w)
2. Value (v)
Our goal is to select a subset of items to include in the knapsack such that:
1. The total weight does not exceed the knapsack capacity.
2. The total value is maximized.
Types of Knapsack:
1. Fractional Knapsack
2. 0/1 Knapsack
Fractional Knapsack: In the Fractional Knapsack Problem, items can be included partially
(fractional).
Greedy Algorithm:
1. Calculate the value-to-weight ratio (v/w) for each item.
2. Sort items by v/w ratio in descending order.
3. Include items with the highest v/w ratio until knapsack is full.
4. If an item doesn't fit entirely, include a fraction of it.
Example: with item profits P2 = 24 and P3 = 15 (item 3 has weight 10, of which 5 units fit):
Maximum Profit = P2 + (5/10) × P3
= 24 + (5/10) × 15
= 31.5
Source Code:
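A minimal Python sketch of the greedy algorithm above. The original example's full item data isn't shown, so the demo uses a commonly seen illustrative data set (weights 10/20/30, values 60/100/120, capacity 50), which is an assumption, not from the original:

```python
def fractional_knapsack(weights, values, capacity):
    # Sort items by value-to-weight ratio, highest first.
    items = sorted(zip(weights, values),
                   key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if capacity >= w:
            # Whole item fits: take all of it.
            total += v
            capacity -= w
        else:
            # Item doesn't fit entirely: take a fraction.
            total += v * (capacity / w)
            break
    return total

print(fractional_knapsack([10, 20, 30], [60, 100, 120], 50))  # 240.0
```

The sort dominates, giving the O(n log n) running time stated in the conclusion.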
Output:
Conclusion: The Fractional Knapsack Problem maximizes value within capacity constraints, solved
using a greedy algorithm (O(n log n)). Useful in resource allocation, financial portfolio optimization,
and inventory management.
Experiment No. 10
0/1 Knapsack Problem: In the 0/1 Knapsack Problem, items can only be included or excluded
entirely.
1. A set of items, each with:
- Weight (w)
- Value (v)
2. A knapsack with a limited capacity (W)
Goal: Maximize the total value while staying within the knapsack capacity.
Constraints:
1. Each item can either be included (1) or excluded (0).
2. No item can be included partially.
Example:
n = 4, M = 5
W1, W2, W3, W4 = 2, 3, 4, 5
V1, V2, V3, V4 = 3, 4, 5, 6

DP table (row i uses the first i items, each with value Vi and weight Wi; column w is the capacity):

Vi  Wi  i | w=0  1  2  3  4  5
 -   -  0 |   0  0  0  0  0  0
 3   2  1 |   0  0  3  3  3  3
 4   3  2 |   0  0  3  4  4  7
 5   4  3 |   0  0  3  4  5  7
 6   5  4 |   0  0  3  4  5  7

Maximum value = 7 (bottom-right cell).
Source Code:
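A minimal Python sketch of the dynamic-programming solution that builds the table above (function name `knapsack_01` is illustrative):

```python
def knapsack_01(weights, values, capacity):
    n = len(weights)
    # dp[i][w] = best value using the first i items with capacity w.
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # exclude item i
            if weights[i - 1] <= w:  # include item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

# The example from the table: n = 4, M = 5.
print(knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7
```

The two nested loops give the O(nW) time and space complexity stated in the conclusion.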
Output:
Conclusion: The 0/1 Knapsack Problem optimizes resource allocation, selecting items with
maximum value within capacity constraints (O(nW)). Useful in logistics, supply chain management,
financial portfolio optimization, and inventory control.