
Design and Analysis of Algorithms

1. Algorithm & Types by Structure


• Definition: An algorithm is a finite set of clear instructions to solve a problem [1].
• Types (design paradigms):
• Brute-Force: Straightforward; tries all possibilities (O(n!) or exponential for some problems) [2].
• Recursive: Solves a problem by solving smaller instances of the same problem [3].
• Divide & Conquer: Breaks a problem into independent subproblems, solves each, then combines the results [4]. Example: Merge Sort.
• Dynamic Programming: Stores (memoizes) results of overlapping subproblems to avoid recomputation [5]. Example: Fibonacci DP, Knapsack (see the sketch after this list).
• Greedy: Builds a solution step by step, choosing the best local option at each step [6]. Example: Fractional Knapsack, Prim's MST.
• Backtracking: Builds a solution incrementally and backtracks when constraints fail [7]. Example: N-Queens, Sudoku.
• Randomized: Uses random choices to improve efficiency or simplicity [8]. Example: randomized QuickSort or randomized selection.
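
To make the recursive vs dynamic-programming contrast concrete, here is a minimal Python sketch (our own illustration, not from the source): the naive recursion recomputes overlapping subproblems and takes exponential time, while the memoized version stores each result once and runs in linear time.

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    """Plain recursion: recomputes fib(k) many times -> exponential calls."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    """Top-down DP: each fib(k) is computed once and cached -> O(n) time."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025, instant; fib_naive(50) would take hours
```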

2. Notations & Complexity


• Asymptotic Notation: Classifies the growth of runtime as input size grows [9].
• Big-O (O): Upper bound on the growth of running time (commonly quoted for the worst case); ignores constant factors [10].
• Big-Ω (Ω): Lower bound on growth (commonly quoted for the best case).
• Big-Θ (Θ): Tight bound (both upper and lower) [10].
• Rule: Drop constant factors and lower-order terms, e.g. 3n³ + 6n² + 6000 = Θ(n³) [11] (see the worked bound after this list).
• Complexity Types:
• Worst-case: Maximum time over any input of size n.
• Average-case: Expected time over all inputs of size n [12].
• Best-case: Minimum time (rarely used for guarantees).
• Example of the average vs worst-case distinction [12]: QuickSort takes Θ(n log n) on average but Θ(n²) in the worst case.
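
As a worked instance of the rule above (the particular constants are our own choice for illustration): Θ(n³) requires constants c₁, c₂ > 0 and n₀ with c₁n³ ≤ f(n) ≤ c₂n³ for all n ≥ n₀. For f(n) = 3n³ + 6n² + 6000:

```latex
3n^3 \;\le\; 3n^3 + 6n^2 + 6000 \;\le\; 3n^3 + 6n^3 + 6000n^3 \;=\; 6009\,n^3
\qquad \text{for all } n \ge 1,
```

so c₁ = 3, c₂ = 6009, n₀ = 1 witness f(n) = Θ(n³); the lower-order terms only change the constant, not the growth rate.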

3. Arrays (1D & 2D)

Diagram: One-dimensional array with indexed slots. An array is a contiguous block of memory storing
elements of the same type. You can access any element by its index in O(1) time. Common operations:
- Traverse: Visit all elements sequentially, O(n) time.
- Search – Linear: Check each element (unsorted), worst-case O(n) [13].
- Search – Binary: On a sorted array; repeatedly halve the search range, O(log n) [14] (see the sketch after this list).
- Insertion/Deletion: Inserting or removing at an arbitrary position shifts all elements after that point, O(n). E.g. deleting an element costs O(n) [15]. Inserting at the end (if space is available) is O(1).
- 2D Arrays: Think of a matrix (rows × columns) stored in row-major order. Example: a 3×3 matrix is a 2D array [16]. Common for grid or table data.
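
A minimal binary-search sketch in Python (our own illustration), assuming the input list is already sorted:

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1. O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1      # target can only be in the right half
        else:
            hi = mid - 1      # target can only be in the left half
    return -1

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # 5
```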

4. Stacks (LIFO)

Diagram: Stack operations (push and pop on top). A stack is Last-In-First-Out (LIFO). Only the top element is
accessible. Operations:
- Push(x): Add element x on top (O(1)).
- Pop(): Remove the top element (O(1)).
- Peek/Top(): Read top element without removing it (O(1)).
- Applications: Expression evaluation and conversion. In postfix (Reverse Polish) notation, operators come after their operands ("operand operand operator") [17]. For example, to evaluate postfix 2 3 4 * +: push 2, 3, 4; pop 4 and 3 for '*' and push the result 12; then pop 12 and 2 for '+', yielding 14 (a sketch follows below). Stacks also implement function-call history (the call stack) and undo functionality.
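
A minimal postfix evaluator in Python (our own illustration of the walkthrough above):

```python
def eval_postfix(tokens):
    """Evaluate a postfix expression given as a list of tokens. O(n)."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # right operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_postfix("2 3 4 * +".split()))  # 14.0
```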

5. Sorting Algorithms (Quick & Bubble)


• Quick Sort: Divide-and-conquer. Pick a pivot, partition the array into <pivot and ≥pivot, then recurse. Average-case Θ(n log n), worst-case Θ(n²) (e.g. an already-sorted array with a naive pivot choice) [18]. Often fast in practice; typically implemented in-place.
• Bubble Sort: Simple comparison sort. Repeatedly sweep through the array, swapping adjacent out-of-order elements; after each pass the largest unsorted element bubbles to the end. Worst- and average-case Θ(n²); best-case Θ(n) if already sorted [19]. Used for teaching or small n. (Sketches of both follow below.)
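
Minimal Python sketches of both sorts (our own illustrations). Note this quicksort copies sublists for clarity rather than partitioning in place; the random pivot makes the Θ(n²) worst case unlikely on sorted input.

```python
import random

def quick_sort(a):
    """Divide and conquer: partition around a random pivot, then recurse."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less  = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    more  = [x for x in a if x > pivot]
    return quick_sort(less) + equal + quick_sort(more)

def bubble_sort(a):
    """In-place; the early exit gives the Θ(n) best case on sorted input."""
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:      # no swaps in a full pass -> already sorted
            break
    return a

print(quick_sort([5, 2, 9, 1, 5]))   # [1, 2, 5, 5, 9]
print(bubble_sort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```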

6. Queues (FIFO)

Diagram: Queue (first-in-first-out) showing enqueue at rear, dequeue at front. A queue is First-In-First-Out (FIFO).
Operations:
- Enqueue(x): Insert element x at the rear/tail (O(1)).
- Dequeue(): Remove element from the front/head (O(1)).
- Traverse: Visit elements front to rear (O(n)).
- Usage: CPU process scheduling (the ready queue) and breadth-first search on graphs (a BFS sketch follows below). A queue supports FIFO: the earliest enqueued element is the first to be dequeued [20][21].
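
A minimal breadth-first-search sketch in Python (our own illustration), using the standard collections.deque as the FIFO queue:

```python
from collections import deque

def bfs(graph, start):
    """Visit vertices in order of distance from start.
    graph is an adjacency-list dict; each vertex and edge is
    handled once -> O(V + E)."""
    visited = {start}
    order = []
    q = deque([start])          # the FIFO queue drives the traversal
    while q:
        u = q.popleft()         # dequeue from the front
        order.append(u)
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                q.append(v)     # enqueue at the rear
    return order

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D']
```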

7. Trees (Binary, BST, B-tree, AVL)

Diagram: A binary tree.
- Binary Tree: Each node has at most two children (left/right) [22]. A complete binary tree has all levels full except possibly the last, which is filled left-to-right [22].
- Binary Search Tree (BST): A binary tree where left-subtree values < node < right-subtree values [23]. Allows search/insertion/deletion in O(log n) on average. Insertion follows the BST property (go left or right from the root until reaching a leaf). Deletion has three cases (leaf, one child, two children) requiring tree adjustment. (A BST sketch follows after this list.)
- Tree Traversals: In-order (Left→Node→Right), Pre-order (Node→Left→Right), Post-order (Left→Right→Node). In-order traversal of a BST yields sorted order [24]. (Pre-order is used for copying a tree; post-order can delete a tree or produce postfix expressions.)
- B-tree: A self-balancing multi-way search tree used in databases and file systems [25]. It generalizes the BST by allowing nodes with many children, keeping the height low. Search/insert/delete in O(log n) [25]. All leaves stay at the same level.
- AVL Tree: A self-balancing BST where, for every node, the heights of the left and right subtrees differ by at most 1 [26]. Ensures O(log n) search/insert/delete by performing rotations to rebalance after inserts and deletions.
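
A minimal (unbalanced) BST sketch in Python (our own illustration): insertion and search follow the BST property, and in-order traversal returns the keys sorted.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Walk down from the root, going left/right by the BST property."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Average O(log n) on a balanced tree; O(n) if the tree degenerates."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

def inorder(root):
    """Left -> Node -> Right: yields the keys of a BST in sorted order."""
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(inorder(root))      # [1, 3, 6, 8, 10]
print(search(root, 6))    # True
```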

8. Heaps (Binary Heap)

Diagram: A binary heap (complete tree). Top element is largest (max-heap).


A heap is a complete binary tree satisfying the heap property: in a max-heap each parent ≥ its children; in a min-heap each parent ≤ its children [27]. The largest (or smallest) element is at the root, making heaps useful for priority queues.
- Insert: Add the new element at the bottom (next available leaf), then "bubble up" (swap with its parent until the heap property holds). O(log n) time.
- Delete (Extract): Remove the root, replace it with the last element, then "heapify" (bubble down) to restore the heap property. O(log n).
- Build-Heap: Given n elements, a heap can be built in O(n) time by heapifying bottom-up.
- Heap Sort: Build a max-heap from the array, then repeatedly extract the max into the end. Sorts in O(n log n) time in-place. (A priority-queue sketch using a heap follows below.)
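
A minimal priority-queue sketch using Python's standard heapq module, which implements a binary min-heap (for max-heap behavior a common trick is to negate the keys):

```python
import heapq

nums = [9, 4, 7, 1, 3]
heapq.heapify(nums)              # bottom-up build-heap: O(n)
heapq.heappush(nums, 2)          # insert + bubble up: O(log n)
smallest = heapq.heappop(nums)   # extract root + bubble down: O(log n)
print(smallest)                  # 1

# Heap sort via repeated extraction: O(n log n) overall.
print([heapq.heappop(nums) for _ in range(len(nums))])  # [2, 3, 4, 7, 9]
```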

9. Asymptotic Rules
• Order-of-Growth: Focus on how runtime scales. Big-O describes the upper-bound growth rate [10]; Big-Θ gives tight bounds.
• Ignoring Constants: Drop constants and lower-order terms. E.g. 3n² + 10n + 7 is Θ(n²) [11].
• Combining Costs: For sequential code, add complexities (e.g. two loops: O(n) + O(n) = O(n)). For nested loops, multiply (e.g. a double loop gives O(n²) [28]). The Master Theorem or recursion trees solve divide-and-conquer recurrences.

10. Greedy vs Dynamic Programming; Knapsack


• Greedy Algorithms: Make a locally optimal choice at each step without revisiting it (fast, but not always globally optimal) [29]. Example: always take the highest-value item that fits.
• Dynamic Programming (DP): Exploits optimal substructure and overlapping subproblems by storing intermediate results [30]. Guarantees optimal solutions for problems with these properties (though often slower than greedy).
• Knapsack Problem: Given a set of items (weight, value) and a weight limit, choose items to maximize total value without exceeding the capacity [31].
• 0/1 Knapsack: Each item must be taken whole or left behind. Solved optimally by DP (building a table of best values) in pseudo-polynomial time [32]. A greedy choice by value/weight ratio is not optimal here.
• Fractional Knapsack: Items can be divided; solved greedily: sort by value/weight ratio and take the highest first [32]. This yields an optimal solution quickly (O(n log n) due to sorting). (Sketches of both variants follow after this list.)
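
Minimal Python sketches of both knapsack variants (our own illustrations): a space-optimized DP table for 0/1, and the greedy ratio rule for the fractional case.

```python
def knapsack_01(items, capacity):
    """0/1 knapsack by DP. items: list of (weight, value).
    Table of size O(capacity) per item -> pseudo-polynomial time."""
    best = [0] * (capacity + 1)          # best[c] = max value within capacity c
    for w, v in items:
        for c in range(capacity, w - 1, -1):   # reverse: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

def knapsack_fractional(items, capacity):
    """Greedy by value/weight ratio; optimal only because items divide."""
    total = 0.0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        take = min(w, capacity)
        total += v * take / w
        capacity -= take
        if capacity == 0:
            break
    return total

items = [(10, 60), (20, 100), (30, 120)]
print(knapsack_01(items, 50))          # 220 (take items 2 and 3)
print(knapsack_fractional(items, 50))  # 240.0 (all of 1 and 2, 2/3 of 3)
```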

11. Divide & Conquer; Other Sorting Algorithms


• Divide & Conquer: Solve by dividing the problem into smaller subproblems, solving each, then combining the results [33]. E.g. Merge Sort (divide the array, sort the halves, merge).
• Merge Sort: D&C sort. Always Θ(n log n) time; stable. (A sketch follows after this list.)
• Insertion Sort: Builds the sorted array one element at a time by shifting. Worst/average Θ(n²); best Θ(n) for nearly-sorted data. Simple and efficient for small n.
• Selection Sort: Repeatedly select the minimum element and swap it into place. Θ(n²) always; simple but not adaptive.
• Shell Sort: Improved insertion sort using a gap sequence. Complexity depends on the gap strategy (e.g. Θ(n log² n) for Pratt's sequence); generally better than O(n²) in practice.
• Bucket Sort: Distribute elements into buckets (based on value range), sort each bucket, then concatenate. Average-case ~O(n + k) if the distribution is uniform (k = number of buckets) [34]; worst-case O(n²) if all elements fall in one bucket.
• Radix Sort: Sort by individual digits or letters (least or most significant first). Non-comparison sort; runs in O(d·(n + b)) where d = number of digits and b = base. E.g. sorting integers digit by digit in base 10 in linear time.
• Quick Sort: (See above.) Often the fastest general-purpose sort.
• Heap Sort: (See above.) Guaranteed O(n log n); not stable.
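
A minimal merge-sort sketch in Python (our own illustration of the divide/sort/merge steps):

```python
def merge_sort(a):
    """Divide, sort the halves recursively, merge. Θ(n log n) always; stable."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal elements stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # append whichever half remains

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```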

12. Calculating Time Complexity
• Loop Analysis: Count how many times the key operations run. A single loop from 1..n does O(n) operations. Nested loops multiply (e.g. two nested loops give ~n² operations [28]). Consecutive loops add (e.g. two separate O(n) loops give O(2n) = O(n)) [35]. (See the sketch after this list.)
• Recurrences: For recursive algorithms, set up a recurrence, e.g. T(n) = T(n/2) + O(1) ⇒ O(log n); use the Master Theorem for divide-and-conquer recurrences.
• Combine Costs: Sum or multiply as above, then apply big-O simplifications (drop lower-order terms).
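
A small Python sketch (our own illustration) that literally counts how often the innermost statement runs for each loop pattern above:

```python
def count_ops(n):
    """Return the operation counts for the three loop patterns."""
    single = nested = consecutive = 0
    for i in range(n):                 # single loop: n iterations -> O(n)
        single += 1
    for i in range(n):                 # nested loops: n * n -> O(n^2)
        for j in range(n):
            nested += 1
    for i in range(n):                 # two separate loops: n + n = 2n -> O(n)
        consecutive += 1
    for j in range(n):
        consecutive += 1
    return single, nested, consecutive

print(count_ops(100))  # (100, 10000, 200)
```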

13. P, NP, NP-Complete (NPC)


• P: The class of decision problems solvable in polynomial time by a deterministic algorithm [36] (e.g. sorting, shortest paths with Dijkstra).
• NP: Problems verifiable (or solvable by a nondeterministic machine) in polynomial time [37]. Given a candidate solution, its correctness can be checked in polynomial time (e.g. a filled-in Sudoku grid). (A verification sketch follows after this list.)
• NP-hard: At least as hard as the hardest problems in NP: every NP problem reduces to an NP-hard problem in polynomial time [38]. NP-hard problems may not even be in NP (they might not have efficiently verifiable solutions). Example: the Halting Problem.
• NP-Complete (NPC): Problems that are both in NP and NP-hard [39]. These are the "hardest" problems in NP. If any NP-complete problem can be solved in polynomial time, then all NP problems are in P. Classic NPC problems: SAT (Boolean satisfiability), Hamiltonian Cycle, Vertex Cover, 0/1 Knapsack (decision version).
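
To make "verifiable in polynomial time" concrete, here is a sketch (our own illustration) that checks a claimed Hamiltonian cycle in O(V + E) time; finding such a cycle is NP-complete, but checking a given certificate is easy:

```python
def verify_hamiltonian_cycle(graph, cycle):
    """graph: adjacency-set dict. cycle: proposed vertex order.
    Checks that every vertex appears exactly once and that consecutive
    vertices (wrapping around) are adjacent. Polynomial time."""
    if set(cycle) != set(graph) or len(cycle) != len(graph):
        return False
    return all(cycle[(i + 1) % len(cycle)] in graph[cycle[i]]
               for i in range(len(cycle)))

g = {'A': {'B', 'D'}, 'B': {'A', 'C'}, 'C': {'B', 'D'}, 'D': {'C', 'A'}}
print(verify_hamiltonian_cycle(g, ['A', 'B', 'C', 'D']))  # True
print(verify_hamiltonian_cycle(g, ['A', 'C', 'B', 'D']))  # False (A-C not an edge)
```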

14. Real-World Examples


• Navigation (Dijkstra/A*): Google Maps and GPS navigation use shortest-path algorithms (Dijkstra or A*) on a road-network graph [40] (a Dijkstra sketch follows after this list).
• Database Indexing (B-trees): B-tree indices in databases and file systems allow fast lookup/insert/delete in logarithmic time [25].
• Compression (Huffman Coding): Popular compression formats (ZIP, JPEG) use Huffman coding, a greedy algorithm for optimal prefix codes [41].
• Scheduling (Priority Queues): Operating systems and event simulators use heaps or priority queues to schedule tasks by priority.
• Parsing and Compilers: Stacks are used to parse arithmetic expressions (operator precedence) and to manage function calls (the call stack).
• Search Utilities: Binary search (on sorted data) powers quick lookup in large datasets (e.g. library database lookup).
• Hashing: Hash tables (not covered above) use computed indices for near-constant-time lookup; used in caches, spell-checkers, etc.
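
A minimal Dijkstra sketch in Python (our own illustration, tying together the graph and heap material above) using heapq as the priority queue:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; assumes non-negative weights.
    graph: dict mapping vertex -> list of (neighbor, weight).
    With a binary heap: O((V + E) log V)."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                       # (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                      # stale entry; skip it
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:              # found a shorter path to v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {'A': [('B', 4), ('C', 1)], 'B': [('D', 1)],
     'C': [('B', 2), ('D', 5)], 'D': []}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```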

Sources: Textbook concepts and tutorials [1], [10], [29], [11], [34], [28], [35], [40], [25], [41], etc. (Illustrations from Wikimedia Commons.)

[1] Algorithm - Wikipedia. https://en.wikipedia.org/wiki/Algorithm
[2][3][4][5][6][7][8][29][30][33] Most important type of Algorithms | GeeksforGeeks. https://www.geeksforgeeks.org/most-important-type-of-algorithms/
[9] Big O notation - Wikipedia. https://en.wikipedia.org/wiki/Big_O_notation
[10] Big O vs Theta Θ vs Big Omega Ω Notations | GeeksforGeeks. https://www.geeksforgeeks.org/difference-between-big-oh-big-omega-and-big-theta/
[11] Types of Asymptotic Notations in Complexity Analysis of Algorithms | GeeksforGeeks. https://www.geeksforgeeks.org/types-of-asymptotic-notations-in-complexity-analysis-of-algorithms/
[12] Average-case complexity - Wikipedia. https://en.wikipedia.org/wiki/Average-case_complexity
[13] Linear search - Wikipedia. https://en.wikipedia.org/wiki/Linear_search
[14] Binary search - Wikipedia. https://en.wikipedia.org/wiki/Binary_search
[15] Process of deleting an array in C++ | OpenGenus IQ. https://iq.opengenus.org/delete-array-in-cpp/
[16] Array (data structure) - Wikipedia. https://en.wikipedia.org/wiki/Array_(data_structure)
[17] Evaluation of Postfix Expression | GeeksforGeeks. https://www.geeksforgeeks.org/evaluation-of-postfix-expression/
[18] Quick Sort | GeeksforGeeks. https://www.geeksforgeeks.org/quick-sort-algorithm/
[19] Time and Space Complexity Analysis of Bubble Sort | GeeksforGeeks. https://www.geeksforgeeks.org/time-and-space-complexity-analysis-of-bubble-sort/
[20][21] Queue (abstract data type) - Wikipedia. https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
[22] Complete Binary Tree | GeeksforGeeks. https://www.geeksforgeeks.org/complete-binary-tree/
[23] Binary Search Tree | GeeksforGeeks. https://www.geeksforgeeks.org/binary-search-tree-data-structure/
[24] Tree Traversal Techniques | GeeksforGeeks. https://www.geeksforgeeks.org/tree-traversals-inorder-preorder-and-postorder/
[25] B-tree - Wikipedia. https://en.wikipedia.org/wiki/B-tree
[26] AVL Tree Data Structure | GeeksforGeeks. https://www.geeksforgeeks.org/introduction-to-avl-tree/
[27] Heap Data Structure | GeeksforGeeks. https://www.geeksforgeeks.org/heap-data-structure/
[28][35] How to Analyse Loops for Complexity Analysis of Algorithms | GeeksforGeeks. https://www.geeksforgeeks.org/how-to-analyse-loops-for-complexity-analysis-of-algorithms/
[31][32] Difference between 0/1 Knapsack problem and Fractional Knapsack problem | GeeksforGeeks. https://www.geeksforgeeks.org/difference-between-0-1-knapsack-problem-and-fractional-knapsack-problem/
[34] Bucket sort - Wikipedia. https://en.wikipedia.org/wiki/Bucket_sort
[36][37][38][39] P, NP, CoNP, NP hard and NP complete | Complexity Classes | GeeksforGeeks. https://www.geeksforgeeks.org/types-of-complexity-classes-p-np-conp-np-hard-and-np-complete/
[40] The Algorithms Behind The Working Of Google Maps | Sachin Singh, Medium. https://medium.com/@sachin28/the-algorithms-behind-the-working-of-google-maps-73c379bcc9b9
[41] Huffman Coding | Greedy Algo-3 | GeeksforGeeks. https://www.geeksforgeeks.org/huffman-coding-greedy-algo-3/
