Unit - 1
1. Basic Notations in Algorithm Analysis
Question: What are the basic notations used in algorithm analysis, and how do they help in
evaluating algorithm performance?
Answer: The main notations used are Big O, Omega, and Theta. Big O gives an upper bound
on the runtime, Omega gives a lower bound, and Theta provides both an upper and lower
bound. These notations help in evaluating the efficiency and performance of algorithms,
especially when comparing them in terms of time or space complexity.
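The three notations can be seen concretely by counting comparisons in a simple search. A minimal Python sketch (the function name and the instrumentation are illustrative, not part of the notes): the best case takes one comparison, matching the Ω(1) lower bound, while the worst case takes n comparisons, matching the O(n) upper bound.

```python
def linear_search_count(arr, target):
    """Return (index, comparisons) so the best and worst cases are visible."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

# Best case: target at index 0 -> 1 comparison (Omega(1) lower bound).
# Worst case: target absent -> n comparisons (O(n) upper bound).
```

Because the best and worst cases differ, linear search has no single Θ bound over all inputs; an algorithm whose best and worst cases grow at the same rate would.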
2. Time Complexity, Space Complexity, and Trade-offs
Question: Explain the concepts of time complexity, space complexity, and the trade-offs
between them.
Answer: Time complexity measures the amount of time an algorithm takes to execute relative to
the input size, often denoted as O(f(n)). Space complexity measures the amount of memory an
algorithm uses. The trade-off arises when optimizing one of these complexities (e.g., reducing
time complexity) may increase the other (e.g., increasing memory usage).
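A classic illustration of this trade-off is computing Fibonacci numbers: spending extra memory on a cache turns an exponential-time computation into a linear-time one. A minimal Python sketch (function names are illustrative):

```python
from functools import lru_cache

def fib_slow(n):
    # Naive recursion: O(2^n) time, but only O(n) stack space.
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # Memoized recursion: O(n) time, at the cost of O(n) extra
    # memory for the cache of previously computed results.
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)
```

Here time complexity was reduced by increasing space complexity, exactly the trade-off described above.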
3. Omega Notation
Question: What is Omega notation, and how does it differ from Big O notation?
Answer: Omega (Ω) notation represents the lower bound of an algorithm's running time,
meaning it describes the best-case scenario for an algorithm's execution time. Unlike Big O,
which provides an upper bound (worst-case scenario), Omega provides a guarantee that the
algorithm will take at least a certain amount of time.
4. Theta Notation
Question: What is Theta notation, and what does it represent?
Answer: Theta (Θ) notation represents both an upper and lower bound on the running time of
an algorithm. It describes the exact asymptotic behavior, meaning the algorithm's runtime grows
at the same rate both in the best and worst-case scenarios, providing a tight bound on the
performance.
5. Big O Notation
Question: What is Big O notation, and why is it important in algorithm analysis?
Answer: Big O notation describes the upper bound of an algorithm's time or space complexity,
showing the worst-case growth rate. It is essential in analyzing how the algorithm performs as
the input size grows, allowing for a comparison of different algorithms' efficiencies.
6. Basic Data Structures
Question: What are the basic data structures, and why are they important in algorithm design?
Answer: Basic data structures include arrays, linked lists, stacks, queues, hash tables, trees,
and graphs. They are fundamental in algorithm design because they provide efficient ways to
store and organize data, enabling optimized algorithm performance for searching, sorting, and
manipulating data.
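Several of these structures map directly onto everyday language features. A brief Python sketch (the variable names are illustrative) showing a stack, a queue, and a hash table:

```python
from collections import deque

stack = []                 # stack: LIFO, via list append/pop at the end
stack.append(1)
stack.append(2)
top = stack.pop()          # removes the most recently added element

queue = deque()            # queue: FIFO, via deque for O(1) popleft
queue.append('a')
queue.append('b')
front = queue.popleft()    # removes the oldest element

table = {'key': 42}        # hash table: average O(1) lookup by key
```

Choosing the structure that matches the access pattern (LIFO, FIFO, keyed lookup) is what enables the optimized performance mentioned above.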
7. Linear Arrays: Memory Representation and Basic Operations
Question: How are linear arrays represented in memory, and what are the basic operations on
them?
Answer: Linear arrays are stored in contiguous memory locations, with every element occupying
the same fixed size; the address of the element at index i is base_address + i × element_size,
which is why access by index takes constant time. Basic operations include traversal (visiting
each element), insertion (adding an element), deletion (removing an element), searching (finding
an element), and sorting (ordering elements), each with a time complexity that depends on the
operation.
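The shifting behind insertion and deletion can be sketched explicitly. A minimal Python version (function names are illustrative; a Python list stands in for a fixed-size array):

```python
def insert_at(arr, index, value):
    # Shift elements right to make room: O(n) in the worst case.
    arr.append(None)                       # grow by one slot
    for i in range(len(arr) - 1, index, -1):
        arr[i] = arr[i - 1]
    arr[index] = value
    return arr

def delete_at(arr, index):
    # Shift elements left to close the gap: O(n) in the worst case.
    for i in range(index, len(arr) - 1):
        arr[i] = arr[i + 1]
    arr.pop()                              # drop the duplicated last slot
    return arr
```

Traversal and searching simply walk the same contiguous block, visiting one element per step.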
8. Time Complexities of Array Operations
Question: Explain the time complexities of array operations such as insertion, deletion, and
searching.
Answer:
● Access: O(1), since the address of any element can be computed directly from its index.
● Insertion: O(n) in the worst case, because elements after the insertion point must be
shifted one position to the right; inserting at the end is O(1).
● Deletion: O(n) in the worst case, because elements after the deleted position must be
shifted one position to the left.
● Searching: O(n) with linear search on an unsorted array; O(log n) with binary search if
the array is sorted.
9. Sorting and Searching: Bubble Sort, Insertion Sort, Selection Sort, Linear
and Binary Search
Question: Compare the time complexities of Bubble Sort, Insertion Sort, and Selection Sort.
Answer:
● Bubble Sort: O(n^2) in the worst case, as it repeatedly swaps adjacent elements.
● Insertion Sort: O(n^2) in the worst case, but O(n) in the best case when the array is
already sorted.
● Selection Sort: O(n^2) in all cases, as it repeatedly selects the minimum element from
the unsorted portion and swaps it into the current position.
These three algorithms all have quadratic worst-case time complexity, making them
inefficient for large datasets.
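The three sorts compared above can be sketched side by side in Python (function names are illustrative; each returns a new sorted list rather than sorting in place):

```python
def bubble_sort(a):
    # Repeatedly swap adjacent out-of-order elements: O(n^2) worst case.
    a = list(a)
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def insertion_sort(a):
    # Insert each element into the sorted prefix: O(n^2) worst,
    # O(n) best when the input is already sorted.
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def selection_sort(a):
    # Select the minimum of the unsorted suffix each pass: O(n^2) always.
    a = list(a)
    for i in range(len(a) - 1):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a
```

Note how insertion sort's inner loop exits early on sorted input, which is the source of its O(n) best case; the other two always perform their full passes.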
Question: What is the difference between Linear Search and Binary Search in terms of time
complexity?
Answer:
● Linear Search: O(n) time complexity, as it checks each element sequentially until the
target is found or the end is reached.
● Binary Search: O(log n) time complexity, but it requires the array to be sorted. It
repeatedly divides the search space in half, significantly reducing the number of
comparisons.
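Both searches can be sketched briefly in Python (function names are illustrative); note that binary search assumes its input is already sorted, while linear search does not:

```python
def linear_search(arr, target):
    # O(n): scan sequentially until the target is found or the end is reached.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(sorted_arr, target):
    # O(log n): halve the search space each step; requires sorted input.
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        if sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

On a million sorted elements, binary search needs at most about 20 comparisons, versus up to a million for linear search, which is why sorting the data first can pay off when many searches follow.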