
Data Structures and Algorithms

Shalini Rajasingham
Introduction to Algorithms
• Definition: Searching algorithms are a set of instructions or a
procedure used to find a specific item or group of items from a
collection of items. These algorithms are fundamental in the field of
computer science and are used to locate elements within data
structures like arrays, lists, trees, and graphs.
• Purpose: The primary purpose of searching algorithms is to
efficiently locate and retrieve information from a dataset. They are
designed to minimize the time and resources required to find an item,
making data retrieval faster and more efficient.
Real-World Examples of Searching Algorithms
1. Looking Up a Contact in a Phone Book: This is similar to a linear search, where you start from the beginning of the phone book and flip through each page until you find the contact you're looking for.
2. Searching for a File on a Computer: When you use the search function in your file explorer or operating system, it uses a searching algorithm to quickly locate the file or files that match your search criteria.
3. Finding a Product in an Online Store: When you search for a product on an e-commerce website, a searching algorithm is used to sift through the vast inventory and present you with relevant results.
4. Autocomplete in Search Engines: As you type a search query, searching algorithms work behind the scenes to predict and display possible completions or related searches.
5. Database Queries: When you query a database, searching algorithms are used to find and retrieve the specific records or data that match your query criteria.
Linear Search
• Linear search, also known as sequential search, is one of the simplest searching algorithms.
• It works by starting at the beginning of a list and checking each element one by one until the desired element is found or the end of the list is reached.
Linear Search Example
Consider the list [3, 5, 7, 9, 11], in which we want to find the element 7.
• Step 1: Compare 3 with 7 (not a match)
• Step 2: Compare 5 with 7 (not a match)
• Step 3: Compare 7 with 7 (match found)
• The algorithm returns the index 2, where the element 7 is located.
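The steps above can be sketched in Python (the slides name no language, so this is one possible rendering; returning -1 for a missing element is an assumed convention):

```python
def linear_search(items, target):
    """Scan items left to right; return the index of target, or -1 if absent."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

print(linear_search([3, 5, 7, 9, 11], 7))  # → 2
```

In the worst case the loop touches every element, which is where the O(n) bound comes from.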

Time Complexity
• The time complexity of linear search is O(n), where n is the number of
elements in the list.
• In the worst-case scenario, the algorithm has to check every element in the list,
so the time taken is proportional to the length of the list.
Binary Search
• Binary search is an efficient algorithm used to find a target element in
a sorted list or array.
• It works by repeatedly dividing the search interval in half. If the value
of the target is less than the value in the middle of the interval, the
algorithm narrows the interval to the lower half. Otherwise, it narrows
it to the upper half.
• This process continues until the target value is found or the interval is
empty.
Example
Initial List: [2, 4, 6, 8, 10] Target Element: 8
1. First Step: The middle element is 6 (at index 2), which is less than 8. So we ignore the left half of the list and consider only the right half [8, 10].
2. Second Step: In the new list [8, 10], the middle element is 8 (at index 0 of the new list, but index 3 of the original list), which equals our target. So we have found the element 8.
• The algorithm returns the index 3, where the element 8 is located in the
original list.
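The halving procedure in this walk-through can be sketched in Python as (a minimal version, assuming a sorted list and -1 for "not found"):

```python
def binary_search(items, target):
    """Repeatedly halve the sorted search interval [lo, hi]."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1      # target can only be in the upper half
        else:
            hi = mid - 1      # target can only be in the lower half
    return -1                 # interval is empty: not found

print(binary_search([2, 4, 6, 8, 10], 8))  # → 3
```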
Time Complexity
• The time complexity of binary search is O(log n), where n is the number of
elements in the list.
• This is because with each iteration, the size of the search interval is halved,
leading to a logarithmic number of comparisons.
Jump Search
• Works on sorted arrays.
• Jumps ahead by a fixed number of steps (usually sqrt(n)) until it reaches an element greater than or equal to the target.
• It then performs a linear search backward from that point until the target is found or shown to be absent.
• The basic idea behind Jump Search is to "jump" ahead in fixed-size steps and then search linearly only within the block where the target value was surpassed. This reduces the number of comparisons compared with a plain linear search.
• Time complexity: O(√n).
Jump Search
• Determine the Jump Size: The optimal jump size is typically chosen to be the square root
of the size of the array (let's denote this as m = sqrt(n), where n is the size of the array).
This jump size balances the number of steps jumped and the size of the subarray to be
searched linearly.

• Jump Ahead in Steps: Start at the beginning of the array and jump ahead by m elements
at a time, until either the target value is found or the next element in the array is greater
than the target value.

• Perform Linear Search: Once the range is narrowed down, perform a linear search
backwards from the current position to the last position checked (or the beginning of the
array) to find the target value.

• Return the Index or Indicate Not Found: If the target value is found, return its index in
the array. If the end of the array is reached without finding the target value, indicate that
it is not present in the array.
Introduction to Hash Tables
• Definition: A hash table is a data structure that stores key-value pairs.
It uses a hash function to compute an index into an array of buckets or
slots, from which the desired value can be found.
• Purpose: Hash tables are used for fast data retrieval. They provide
efficient lookup, add, and delete operations.
• Components: Key, value, hash function.
• Real-world Analogy: A library index system where books are indexed
by a unique code (key) and can be quickly retrieved using that code.
Introduction to Hash Tables
Hash Functions
• Role: A hash function maps keys to indices in the hash table,
determining where the key-value pair should be stored.
• Characteristics of a Good Hash Function:
• Consistency: The same key always maps to the same index.
• Efficiency: The function should be fast to compute.
• Uniform distribution: Keys should be distributed uniformly across the table to
minimize collisions.
Simple Hash Function Example
• Suppose we have a hash table with 10 slots (indices 0 to 9), and we want to store integer keys in this table. A simple hash function for this scenario could be:
hash(key) = key % 10
• This hash function takes the key, divides it by 10, and returns the remainder. This ensures that the result is always in the range 0 to 9, which corresponds to the indices of the hash table.
Example Usage
• If the key is 25, hash(25) would return 5, so the key would be stored in slot 5 of
the hash table.
• If the key is 42, hash(42) would return 2, so the key would be stored in slot 2 of
the hash table.
• If the key is 103, hash(103) would return 3, so the key would be stored in slot 3
of the hash table.
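The mappings above can be checked with a one-line Python version of this modulo hash (hash_key is an illustrative name, chosen to avoid shadowing Python's built-in hash):

```python
def hash_key(key, table_size=10):
    """Illustrative modulo hash: the remainder of key divided by table_size."""
    return key % table_size

print(hash_key(25), hash_key(42), hash_key(103))  # → 5 2 3
```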
• This is a very basic example of a hash function. In practice, hash functions can be more complex, both to ensure a more uniform distribution of keys and to handle a wider range of key types, such as strings.
Handling Hash Collisions

• Hash collisions occur when two keys hash to the same index.
• Collision Resolution Techniques:
• Open Hashing or Separate Chaining: Store multiple elements at the same index using a linked list.
• Closed Hashing or Open Addressing: Find another open slot within the table using methods like linear probing or quadratic probing.
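As an illustration of open addressing, a minimal linear-probing insert might look like this (probe_insert is a hypothetical helper, not from the slides; the table is a fixed-size list with None marking empty slots):

```python
def probe_insert(table, key):
    """Insert key using linear probing; return the slot used, or -1 if the table is full."""
    size = len(table)
    start = key % size
    for offset in range(size):
        slot = (start + offset) % size  # step to the next slot, wrapping around
        if table[slot] is None:
            table[slot] = key
            return slot
    return -1                           # every slot was occupied

table = [None] * 5
print([probe_insert(table, k) for k in (1, 6, 11, 16, 21)])  # → [1, 2, 3, 4, 0]
```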
Handling a Full Hash Table
• Chaining (Using a Linked List)
• Scenario: Even if all slots in the hash table are occupied, chaining allows for
multiple elements to be stored at the same index by using a linked list.
• Example: Suppose we have a hash table with 5 slots, and the hash function is
hash(key) = key % 5. If we insert keys 5, 10, 15, and 20, they all hash to index 0. In
this case, the linked list at index 0 would contain all four keys.
• Open Addressing (Linear Probing)
• Scenario: If all slots are occupied in a hash table using linear probing, the table is
considered full, and no more elements can be inserted unless the table is resized or
an element is removed.
• Example: Suppose we have a hash table with 5 slots, and the hash function is
hash(key) = key % 5. If we insert keys 1, 6, 11, 16, and 21, they will occupy all the
slots. Attempting to insert another key, such as 26, would result in a failure to find
an empty slot, indicating that the table is full.
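The chaining scenario above, with keys 5, 10, 15, and 20 all landing in slot 0 of a 5-slot table, can be sketched as follows (ChainedHashTable is an illustrative class; Python lists stand in for linked lists):

```python
class ChainedHashTable:
    """Illustrative separate-chaining table: each slot holds a chain of [key, value] pairs."""

    def __init__(self, size=5):
        self.size = size
        self.buckets = [[] for _ in range(size)]

    def insert(self, key, value):
        bucket = self.buckets[key % self.size]
        for pair in bucket:
            if pair[0] == key:       # key already present: update its value
                pair[1] = value
                return
        bucket.append([key, value])  # otherwise chain a new pair onto the slot

    def get(self, key):
        for k, v in self.buckets[key % self.size]:
            if k == key:
                return v
        return None

table = ChainedHashTable(5)
for k in (5, 10, 15, 20):
    table.insert(k, k * 2)
print(len(table.buckets[0]))  # → 4
```

All four keys end up chained in the same bucket yet remain retrievable, whereas the linear-probing table in the second scenario simply runs out of slots.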
Handling a Full Hash Table
• When a hash table becomes full, there are a couple of common strategies to handle this situation:
• Resizing: Increase the size of the hash table and rehash all existing keys. This is a common
approach to maintain the load factor (the ratio of the number of elements to the number of slots)
within a desirable range.
• Rehashing: Use a secondary hash function to find another slot when the primary slot is occupied.
This is more specific to open addressing.
• External Storage: For chaining, if the linked lists become too long, it might be time to consider
resizing the hash table or using a different data structure to store the elements at each index.
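The resizing strategy can be sketched for a chaining layout (assuming each slot holds a list of [key, value] pairs and the hash is key % size; resize is a hypothetical helper):

```python
def resize(buckets, new_size):
    """Rehash every [key, value] pair from a chaining table into a larger one."""
    new_buckets = [[] for _ in range(new_size)]
    for bucket in buckets:
        for key, value in bucket:
            # Every key must be rehashed, because its slot depends on the table size.
            new_buckets[key % new_size].append([key, value])
    return new_buckets
```

After resizing, keys that previously collided (such as 5, 10, and 15 in a 5-slot table) typically spread across different slots, bringing the load factor back down.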
Introduction to Sorting Algorithms
• Definition: Sorting algorithms arrange the elements of a list in a certain order (typically ascending or descending).
• Importance: Sorting is a fundamental operation in computer science, used in various applications like data analysis, searching, and organizing data.
• Overview of Common Sorting Algorithms: Bubble sort, selection sort, insertion sort, merge sort, quick sort.
Insertion Sort
• Insertion sort builds a sorted array one element at a time by
inserting each element into its correct position within the
sorted part of the array.
• Simple Example
• Sort the array [8, 4, 3, 5, 2].
• Time Complexity
• O(n^2) in the worst case.
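A straightforward Python rendering of insertion sort (the slides name no language, so this is one possible sketch):

```python
def insertion_sort(arr):
    """Grow a sorted prefix by inserting each element into its correct position."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements of the sorted prefix right to open a slot for key.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([8, 4, 3, 5, 2]))  # → [2, 3, 4, 5, 8]
```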
Selection Sort
• Selection sort repeatedly finds the minimum (or
maximum) element from the unsorted part of the array
and moves it to the beginning (or end).
• Simple Example: Sort the array [29, 10, 14, 37, 13].
• Time Complexity: O(n^2)
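A minimal Python sketch of selection sort, using the minimum-element variant described above:

```python
def selection_sort(arr):
    """Repeatedly move the minimum of the unsorted part to the front."""
    for i in range(len(arr)):
        min_idx = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap the found minimum into position i.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([29, 10, 14, 37, 13]))  # → [10, 13, 14, 29, 37]
```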
Bubble Sort
• Bubble sort repeatedly swaps adjacent elements if they
are in the wrong order, effectively "bubbling" the
largest unsorted element to its correct position in each
iteration.
• Simple Example: Sort the array [5, 3, 8, 4, 2].
• Time Complexity: O(n^2)
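A minimal Python sketch of bubble sort, with a common early-exit optimization (stop once a full pass makes no swaps) that the slide does not mention:

```python
def bubble_sort(arr):
    """Repeatedly swap adjacent out-of-order elements, bubbling the largest up."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):  # the last i elements are already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:             # no swaps means the array is sorted
            break
    return arr

print(bubble_sort([5, 3, 8, 4, 2]))  # → [2, 3, 4, 5, 8]
```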
Merge Sort
Merge sort is a divide-and-conquer algorithm that divides the
array into halves, sorts each half, and then merges the sorted
halves back together.
• Sort the array [38, 27, 43, 3, 9, 82, 10].
• The array is split and sorted recursively, then merged back
together.
• Time Complexity
• O(n log n) in all cases.
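The split-sort-merge pattern can be sketched in Python as (this version returns a new list rather than sorting in place):

```python
def merge_sort(arr):
    """Split the list in half, sort each half recursively, then merge."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves by repeatedly taking the smaller front element.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # append whatever remains of either half
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```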
Quick Sort
Quick sort is a highly efficient sorting algorithm that uses a pivot
element to partition the array into two parts, then recursively sorts
the partitions.
Simple Example
• Sort the array [10, 80, 30, 90, 40, 50, 70].
• Choose 70 as the pivot, partition the array, and then recursively sort the
partitions.
• Time Complexity
• O(n log n) on average, O(n^2) in the worst case.
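The example above, with the last element as pivot, matches the Lomuto partition scheme; one Python sketch of it (in-place, assuming distinct handling is not needed for equal keys):

```python
def partition(arr, lo, hi):
    """Lomuto partition: use arr[hi] as pivot, return its final index."""
    pivot = arr[hi]
    i = lo - 1
    for j in range(lo, hi):
        if arr[j] <= pivot:          # move elements <= pivot to the left side
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]  # place the pivot between the parts
    return i + 1

def quick_sort(arr, lo=0, hi=None):
    """Partition around a pivot, then recursively sort both parts."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)
        quick_sort(arr, p + 1, hi)
    return arr

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))  # → [10, 30, 40, 50, 70, 80, 90]
```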
