DSA Unit 5


Unit-5

Sorting & Searching


Selection Sort
• The selection sort algorithm arranges a list of
elements in a particular order (ascending or
descending).
• In selection sort, the first element in the list is
selected and compared repeatedly with all the
remaining elements in the list.
• If any element is smaller than the selected element
(for ascending order), the two are swapped, so that
the first position ends up holding the smallest
element.
Selection Sort
• Next, we select the element at the second position in
the list and compare it with all the remaining
elements in the list.
• If any element is smaller than the selected element,
the two are swapped. This procedure is repeated
until the entire list is sorted.
Step by Step Process Selection Sort
• Step 1 - Select the first element of the list (i.e., the
element at the first position).
• Step 2 - Compare the selected element with all the
other elements in the list.
• Step 3 - In every comparison, if any element is found
smaller than the selected element (for ascending
order), the two are swapped.
• Step 4 - Repeat the same procedure with the element
at the next position until the entire list is sorted.
Selection Sort Algorithm
for(i = 0; i < size; i++)
{
    for(j = i + 1; j < size; j++)
    {
        if(list[i] > list[j])
        {
            temp = list[i];
            list[i] = list[j];
            list[j] = temp;
        }
    }
}
Complexity of the Selection Sort
• Worst Case : O(n²)
• Best Case : Ω(n²)
• Average Case : Θ(n²)
Insertion Sort Algorithm
• The insertion sort algorithm arranges a list of
elements in a particular order.
• In insertion sort, every iteration moves one element
from the unsorted portion to the sorted portion
until all the elements in the list are sorted.
Step by Step Process
• Step 1 - Assume that the first element in the list is in
the sorted portion and all the remaining elements are
in the unsorted portion.
• Step 2 - Take the first element from the unsorted
portion and insert it into the sorted portion in the
order specified.
• Step 3 - Repeat the above process until all the
elements from the unsorted portion are moved into
the sorted portion.
Insertion Sort Algorithm
for i = 2 to size          // 1-based pseudocode
{
    temp = list[i];
    j = i - 1;
    while ((j >= 1) && (temp < list[j]))
    {
        list[j+1] = list[j];
        j = j - 1;
    }
    list[j+1] = temp;
}
Complexity of the Insertion Sort
• Worst Case : O(n²)
• Best Case : Ω(n)
• Average Case : Θ(n²)
Quick Sort
• Quick sort is a fast sorting algorithm used to sort a
list of elements.
• The quick sort algorithm was invented by C. A. R. Hoare.
It separates the list of elements into two parts and
then sorts each part recursively.
• That means it uses a divide and conquer strategy.
Quick Sort
• In quick sort, the partition of the list is performed
based on the element called pivot.
• Here pivot element is one of the elements in the list.
The list is divided into two partitions such that "all
elements to the left of pivot are smaller than the
pivot and all elements to the right of pivot are
greater than or equal to the pivot".
Step by Step Process
• Step 1 - Consider the first element of the list as pivot
(i.e., Element at first position in the list).
• Step 2 - Define two variables i and j. Set i and j to first
and last elements of the list respectively.
• Step 3 - Increment i until list[i] > pivot, then stop.
• Step 4 - Decrement j until list[j] <= pivot, then stop.
• Step 5 - If i < j, then exchange list[i] and list[j].
• Step 6 - Repeat steps 3, 4 & 5 until i >= j.
• Step 7 - Exchange the pivot element with the list[j]
element.
Complexity of the Quick Sort
• Worst Case : O(n²)
• Best Case : O(n log n)
• Average Case : O(n log n)
Quick Sort Algorithm
void quickSort(int list[10], int first, int last)
{
    int pivot, i, j, temp;
    if(first < last)
    {
        pivot = first;
        i = first;
        j = last;
        while(i < j)
        {
            while(list[i] <= list[pivot] && i < last)
                i++;
            while(list[j] > list[pivot])
                j--;
            if(i < j)
            {
                temp = list[i];
                list[i] = list[j];
                list[j] = temp;
            }
        }
        temp = list[pivot];
        list[pivot] = list[j];
        list[j] = temp;
        quickSort(list, first, j-1);
        quickSort(list, j+1, last);
    }
}
Searching
• Search is a process of finding a value in a list of
values.
• In other words, searching is the process of locating a
given value's position in a list of values.
Linear / Sequential Search Algorithm
• This search process starts comparing search element with
the first element in the list.
• If both match, the result is "element found";
otherwise the search element is compared with the
next element in the list.
• This is repeated until the search element has been
compared with the last element; if that element also
doesn't match, the result is "element not found in the list".
Linear / Sequential Search Algorithm
• That means the search element is compared with the
elements of the list one by one.
• Linear search algorithm finds a given element in a list
of elements with O(n) time complexity where n is
total number of elements in the list.
Binary Search Algorithm
• The binary search algorithm can be used with only a
sorted list of elements.
• That means binary search is used only with a list of
elements that are already arranged in order.
• The search process starts by comparing the search
element with the middle element in the list.
• If both are matched, then the result is "element
found".
• Otherwise, we check whether the search element is
smaller or larger than the middle element in the list.
Binary Search Algorithm
• If the search element is smaller, then we repeat the
same process for the left sublist of the middle
element.
• If the search element is larger, then we repeat the
same process for the right sublist of the middle
element.
• We repeat this process until we find the search
element in the list or until we are left with a sublist
of only one element.
Binary Search Algorithm
• And if that element also doesn't match with the
search element, then the result is "Element not
found in the list".
• Binary search algorithm finds a given element in a list
of elements with O(log n) time complexity where n is
total number of elements in the list.
Example Binary Search
Hashing
• In all search techniques like linear search, binary
search and search trees, the time required to search
an element depends on the total number of
elements present in that data structure.
• In all these search techniques, as the number of
elements increases, the time required to search an
element also increases.
Hashing
• Hashing is another approach in which time required
to search an element doesn't depend on the total
number of elements.
• Using a hashing data structure, a given element can be
searched with constant time complexity on average.
• Hashing is an effective way to reduce the number of
comparisons required to search for an element in a
data structure.
Hashing
• Hashing is the process of indexing and retrieving an element (data)
in a data structure so as to provide a faster way of finding the
element using a hash key.
• Here, the hash key is a value which provides the index value
where the actual data is likely to be stored in the data structure.
• In this data structure, we use a concept called Hash table to store
data.
• All the data values are inserted into the hash table based on the
hash key value.

Hashing
• The hash key value is used to map the data with an
index in the hash table.
• And the hash key is generated for every data item
using a hash function.
• That means every entry in the hash table is based on
the hash key value generated using the hash
function.
Hash function
• A hash function is a function that takes a piece of
data (i.e., a key) as input and produces an integer (i.e.,
a hash value) as output, which maps the data to a
particular index in the hash table.
Types of hash function
• Division method
• In this method the hash function depends on the
remainder of a division.
• For example, if the records 52, 68, 99, 84 are to be
placed in a hash table of size 10, then:
• h(key) = key % table size
• h(52) = 52 % 10 = 2    h(68) = 68 % 10 = 8
• h(99) = 99 % 10 = 9    h(84) = 84 % 10 = 4
Types of hash function
• Mid square method
• In this method the key is first squared and then the
middle part of the result is taken as the index.
• For example, to place a record with key 3101 in a
table of size 1000:
• 3101 * 3101 = 9616201
• h(3101) = 162 (the middle 3 digits)
Types of hash function
• Digit folding method
• In this method the key is divided into separate parts,
and these parts are combined using simple operations
to produce the hash key.
• For example, the record 12465512 is divided into the
parts 124, 655 and 12.
• The parts are then combined by adding them:
• h(key) = 124 + 655 + 12 = 791
Hash Table
• A hash table is just an array that maps a key (data)
into the data structure with the help of a hash function
such that insertion, deletion, and search operations are
performed with constant time complexity (i.e., O(1)).
• Generally, every hash table makes use of a function
called a hash function to map the data into the hash
table.
Collision in Hashing
• When the hash value of a key maps to an already
occupied bucket of the hash table, it is called a
collision.
• Collision resolution techniques are the techniques
used for resolving or handling collisions.
Separate Chaining
• This technique creates a linked list at the slot for
which the collision occurs.
• The new key is then inserted into that linked list.
• These linked lists attached to the slots look like chains.
• That is why this technique is called separate
chaining.
Example
• Let us consider a simple hash function “key
mod 7” and the sequence of keys 50, 700, 76,
85, 92, 73, 101.
Advantages
• Simple to implement.
• The hash table never fills up; we can always add more
elements to the chains.
• Less sensitive to the hash function or load factor.
• Mostly used when it is unknown how many keys may
be inserted or deleted, and how frequently.
Disadvantages
• Cache performance of chaining is not good because keys
are stored in a linked list; open addressing provides
better cache performance, as everything is stored in
the same table.
• Wastage of space (some parts of the hash table are
never used).
• If a chain becomes long, search time can become O(n)
in the worst case.
• Uses extra space for links.
Open addressing
• Unlike separate chaining, all the keys are stored
inside the hash table.
• No key is stored outside the hash table.
• Techniques used for open addressing are:
• Linear Probing
• Quadratic Probing
• Double Hashing
Linear Probing
• When collision occurs, we linearly probe for the next
bucket.
• We keep probing until an empty bucket is found.
Advantage-
• It is easy to compute.
Disadvantage-
• The main problem with linear probing is clustering:
many consecutive elements form groups, and
searching through them takes longer.
Example Linear Probing
Quadratic Probing-
• When a collision occurs, the i-th probe checks the
bucket at offset i² from the original hash position.
• We keep probing until an empty bucket is found.
Example Quadratic Probing
Double Hashing
• Double hashing is a collision resolution technique for
open-addressed hash tables.
• Double hashing uses the idea of applying a second
hash function to the key when a collision occurs.
• The first hash function is typically hash1(key) = key %
TABLE_SIZE.
• A popular second hash function is:
• hash2(key) = PRIME – (key % PRIME), where PRIME
is a prime smaller than TABLE_SIZE.