Pseudocode
We observe in the algorithm that bubble sort compares each pair of adjacent array elements until the whole array is completely sorted in ascending order. This may cause a few complexity issues, for example: what if the array needs no more swapping, as all the elements are already ascending?
To ease out the issue, we use one flag variable, swapped, which will help us see if any swap has happened or not. If no swap has occurred, i.e. the array requires no more processing to be sorted, it will come out of the loop. And when there is no swap required, bubble sort learns that the array is completely sorted.
Pseudocode of the BubbleSort algorithm can be written as follows −
procedure bubbleSort( list : array of items )
   loop = list.count;
   for i = 0 to loop-1 do:
      swapped = false
      for j = 0 to loop-2 do:
         /* compare the adjacent elements */
         if list[j] > list[j+1] then
            /* swap them */
            swap( list[j], list[j+1] )
            swapped = true
         end if
      end for
      /* if no number was swapped, the array is sorted; break the loop */
      if (not swapped) then
         break
      end if
   end for
   return list
end BubbleSort
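As a concrete illustration, the pseudocode above translates to Java roughly as follows (a sketch only; the in-place int[] interface and the method name are our own choices):
static void bubbleSort(int[] list) {
    for (int i = 0; i < list.length - 1; i++) {
        boolean swapped = false;
        for (int j = 0; j < list.length - 1 - i; j++) {
            // compare adjacent elements and swap them if out of order
            if (list[j] > list[j + 1]) {
                int tmp = list[j];
                list[j] = list[j + 1];
                list[j + 1] = tmp;
                swapped = true;
            }
        }
        // no swap in a full pass means the array is already sorted
        if (!swapped)
            break;
    }
}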
Insertion Sort
Insertion sort picks one element at a time and inserts it into its correct position in a growing sorted sub-list. Taking the same unsorted array as an example, insertion sort compares the first two elements. It finds that both 14 and 33 are already in ascending order. For now, 14 is in the sorted sub-list.
It swaps 33 with 27. It also checks with all the elements of the sorted sub-list. Here we see that the sorted sub-list has only one element, 14, and 27 is greater than 14. Hence, the sorted sub-list remains sorted after swapping.
By now we have 14 and 27 in the sorted sub-list. Next, it compares 33 with 10. These values are not in sorted order, so we swap them.
We swap them again. By the end of the third iteration, we have a sorted sub-list of 4 items.
This process goes on until all the unsorted values are covered in a sorted sub-list. Now
we shall see some programming aspects of insertion sort.
Algorithm
Now we have a bigger picture of how this sorting technique works, so we can derive
simple steps by which we can achieve insertion sort.
Step 1 − If it is the first element, it is already sorted.
Step 2 − Pick next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all the elements in the sorted sub-list that are greater than the value to be sorted
Step 5 − Insert the value
Step 6 − Repeat until list is sorted
Pseudocode
procedure insertionSort( A : array of items )
   int holePosition
   int valueToInsert
   for i = 1 to length(A)-1 inclusive do:
      valueToInsert = A[i]
      holePosition = i
      /* shift larger elements of the sorted sub-list to the right */
      while holePosition > 0 and A[holePosition-1] > valueToInsert do:
         A[holePosition] = A[holePosition-1]
         holePosition = holePosition - 1
      end while
      /* insert the value at the hole position */
      A[holePosition] = valueToInsert
   end for
end procedure
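The same steps can be sketched in Java (assuming an in-place int[] array; variable names follow the pseudocode):
static void insertionSort(int[] A) {
    for (int i = 1; i < A.length; i++) {
        int valueToInsert = A[i];
        int holePosition = i;
        // shift elements of the sorted sub-list that are greater
        // than valueToInsert one position to the right
        while (holePosition > 0 && A[holePosition - 1] > valueToInsert) {
            A[holePosition] = A[holePosition - 1];
            holePosition--;
        }
        // insert the value into the hole left behind
        A[holePosition] = valueToInsert;
    }
}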
Lecture-26
Selection Sort
Consider the following depicted array as an example.
For the first position in the sorted list, the whole list is scanned sequentially. 14 is stored at the first position at present; we search the whole list and find that 10 is the lowest value.
So we replace 14 with 10. After one iteration 10, which happens to be the minimum
value in the list, appears in the first position of the sorted list.
For the second position, where 33 is residing, we start scanning the rest of the list in a
linear manner.
We find that 14 is the second lowest value in the list and it should appear at the second
place. We swap these values.
After two iterations, two least values are positioned at the beginning in a sorted
manner.
The same process is applied to the rest of the items in the array.
Following is a pictorial depiction of the entire sorting process −
Now, let us learn some programming aspects of selection sort.
Algorithm
procedure selectionSort( list : array of items )
   n = list.count
   for i = 1 to n - 1
      /* set current element as minimum */
      min = i
      /* find the minimum element in the unsorted part */
      for j = i+1 to n
         if list[j] < list[min] then
            min = j;
         end if
      end for
      /* swap the minimum element with the current element */
      if min != i then
         swap list[min] and list[i]
      end if
   end for
end procedure
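For comparison, a minimal Java sketch of the same procedure (an in-place int[] version; the method name is our own):
static void selectionSort(int[] list) {
    for (int i = 0; i < list.length - 1; i++) {
        // assume the current element is the minimum
        int min = i;
        // scan the unsorted rest of the list for a smaller value
        for (int j = i + 1; j < list.length; j++) {
            if (list[j] < list[min])
                min = j;
        }
        // swap the minimum into position i
        if (min != i) {
            int tmp = list[i];
            list[i] = list[min];
            list[min] = tmp;
        }
    }
}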
Lecture-27
Merge Sort
To understand merge sort, we take an unsorted array as the following −
We know that merge sort first divides the whole array iteratively into equal halves until atomic values are achieved. We see here that an array of 8 items is divided into two arrays of size 4.
This does not change the sequence of appearance of items in the original. Now we
divide these two arrays into halves.
We further divide these arrays and we achieve atomic values which can no longer be divided.
Now, we combine them in exactly the same manner as they were broken down. Please
note the color codes given to these lists.
We first compare the elements of each list and then combine them into another list in a sorted manner. We see that 14 and 33 are in sorted positions. We compare 27 and 10, and in the target list of 2 values we put 10 first, followed by 27. We change the order of 19 and 35, whereas 42 and 44 are placed sequentially.
In the next iteration of the combining phase, we compare lists of two data values, and merge them into a list of four data values, placing all in a sorted order.
After the final merging, the list should look like this −
Algorithm
Merge sort keeps on dividing the list into equal halves until it can no more be divided.
By definition, if there is only one element in the list, it is sorted. Then, merge sort combines
the smaller sorted lists keeping the new list sorted too.
Step 1 − if there is only one element in the list, it is already sorted; return.
Step 2 − divide the list recursively into two halves until it can no more be divided.
Step 3 − merge the smaller lists into a new list in sorted order.
Merge sort works with recursion and we shall see our implementation in the same way.
procedure mergesort( var a as array )
   var n as integer = length( a )
   if ( n == 1 ) return a
   var l1 as array = a[0] ... a[n/2 - 1]
   var l2 as array = a[n/2] ... a[n - 1]
   l1 = mergesort( l1 )
   l2 = mergesort( l2 )
   return merge( l1, l2 )
end procedure
procedure merge( var a as array, var b as array )
   var c as array
   while ( a and b have elements )
      if ( a[0] > b[0] )
         add b[0] to the end of c
         remove b[0] from b
      else
         add a[0] to the end of c
         remove a[0] from a
      end if
   end while
   /* copy whichever list still has elements */
   while ( a has elements )
      add a[0] to the end of c
      remove a[0] from a
   end while
   while ( b has elements )
      add b[0] to the end of c
      remove b[0] from b
   end while
   return c
end procedure
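A hedged Java sketch of the same divide-and-merge scheme (Arrays.copyOfRange splits the array; method names mirror the pseudocode):
import java.util.Arrays;

static int[] mergeSort(int[] a) {
    // a one-element list is sorted by definition
    if (a.length <= 1)
        return a;
    int mid = a.length / 2;
    int[] l1 = mergeSort(Arrays.copyOfRange(a, 0, mid));
    int[] l2 = mergeSort(Arrays.copyOfRange(a, mid, a.length));
    return merge(l1, l2);
}

static int[] merge(int[] a, int[] b) {
    int[] c = new int[a.length + b.length];
    int i = 0, j = 0, k = 0;
    // repeatedly take the smaller of the two list heads
    while (i < a.length && j < b.length)
        c[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    // copy any leftovers
    while (i < a.length)
        c[k++] = a[i++];
    while (j < b.length)
        c[k++] = b[j++];
    return c;
}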
Lecture-28
Quick Sort
Quick sort picks an element as a pivot and partitions the given array around it. The pivot value divides the list into two parts. And recursively, we find the pivot for each sub-list until all lists contain only one element.
Quick Sort Pivot Algorithm
Based on our understanding of partitioning in quick sort, we will now try to write an
algorithm for it, which is as follows.
Step 1 − Choose the highest index value as pivot
Step 2 − Take two variables to point left and right of the list excluding pivot
Step 3 − left points to the low index
Step 4 − right points to the high index
Step 5 − while value at left is less than pivot move right
Step 6 − while value at right is greater than pivot move left
Step 7 − if both step 5 and step 6 do not match, swap the values at left and right
Step 8 − if left ≥ right, the point where they met is the new pivot
Quick Sort Pivot Pseudocode
The pseudocode for the above algorithm can be derived as −
function partitionFunc(left, right, pivot)
   leftPointer = left - 1
   rightPointer = right
   while True do
      /* scan from the left for an element not smaller than the pivot */
      while A[++leftPointer] < pivot do
         //do-nothing
      end while
      /* scan from the right for an element not greater than the pivot */
      while rightPointer > left && A[--rightPointer] > pivot do
         //do-nothing
      end while
      if leftPointer >= rightPointer then
         break
      else
         swap A[leftPointer] and A[rightPointer]
      end if
   end while
   /* place the pivot at its final position */
   swap A[leftPointer] and A[right]
   return leftPointer
end function
Quick Sort Algorithm
Using the pivot algorithm recursively, we end up with smaller and smaller partitions. Each partition is then processed for quick sort. We define the recursive algorithm for quicksort as follows −
Step 1 − Make the right-most index value pivot
Step 2 − partition the array using pivot value
Step 3 − quicksort left partition recursively
Step 4 − quicksort right partition recursively
Quick Sort Pseudocode
To get more into it, let us see the pseudocode for the quick sort algorithm −
procedure quickSort(left, right)
if right-left <= 0
return
else
pivot = A[right]
partition = partitionFunc(left, right, pivot)
quickSort(left,partition-1)
quickSort(partition+1,right)
end if
end procedure
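The same scheme in Java might look as follows (a sketch; the array is passed explicitly, whereas the pseudocode leaves A implicit):
static void quickSort(int[] A, int left, int right) {
    if (right - left <= 0)
        return;                                 // zero or one element: done
    int pivot = A[right];                       // right-most value as pivot
    int p = partitionFunc(A, left, right, pivot);
    quickSort(A, left, p - 1);                  // sort the left partition
    quickSort(A, p + 1, right);                 // sort the right partition
}

static int partitionFunc(int[] A, int left, int right, int pivot) {
    int leftPointer = left - 1;
    int rightPointer = right;
    while (true) {
        // scan from the left for an element not smaller than the pivot
        while (A[++leftPointer] < pivot) { }
        // scan from the right for an element not greater than the pivot
        while (rightPointer > left && A[--rightPointer] > pivot) { }
        if (leftPointer >= rightPointer)
            break;                              // pointers have met
        int t = A[leftPointer];                 // swap the out-of-place pair
        A[leftPointer] = A[rightPointer];
        A[rightPointer] = t;
    }
    int t = A[leftPointer];                     // move the pivot to its final slot
    A[leftPointer] = A[right];
    A[right] = t;
    return leftPointer;
}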
Lecture-29
Heap Sort
Heap sort is a comparison-based sorting technique based on the Binary Heap data structure. It is similar to selection sort, where we first find the maximum element and place the maximum element at the end. We repeat the same process for the remaining elements.
What is Binary Heap?
Let us first define a Complete Binary Tree. A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible.
A Binary Heap is a Complete Binary Tree where items are stored in a special order such that the value in a parent node is greater (or smaller) than the values in its two children nodes. The former is called a max heap and the latter a min heap. The heap can be represented by a binary tree or an array.
Why array based representation for Binary Heap?
Since a Binary Heap is a Complete Binary Tree, it can be easily represented as an array, and the array-based representation is space-efficient. If the parent node is stored at index I, the left child can be calculated by 2 * I + 1 and the right child by 2 * I + 2 (assuming the indexing starts at 0).
Heap Sort Algorithm for sorting in increasing order:
1. Build a max heap from the input data.
2. At this point, the largest item is stored at the root of the heap. Replace it with the last item of the heap and reduce the size of the heap by 1. Finally, heapify the root of the tree.
3. Repeat step 2 while the size of the heap is greater than 1.
Heapify procedure can be applied to a node only if its children nodes are heapified. So the heapification must be performed in bottom-up order.
Let us understand with the help of an example:
Input data: 4, 10, 3, 5, 1
4(0)
/ \
10(1) 3(2)
/ \
5(3) 1(4)
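The whole procedure, build a max heap bottom-up and then repeatedly move the root to the end, can be sketched in Java as follows (the method names are our own):
static void heapSort(int[] arr) {
    int n = arr.length;
    // build a max heap: heapify every non-leaf node, bottom up
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    // repeatedly swap the maximum (root) to the end and re-heapify
    for (int i = n - 1; i > 0; i--) {
        int t = arr[0]; arr[0] = arr[i]; arr[i] = t;
        heapify(arr, i, 0);
    }
}

// sift the value at index i down until the subtree rooted at i
// is a max heap; n is the current size of the heap
static void heapify(int[] arr, int n, int i) {
    int largest = i;
    int left = 2 * i + 1, right = 2 * i + 2;   // children, 0-based indexing
    if (left < n && arr[left] > arr[largest]) largest = left;
    if (right < n && arr[right] > arr[largest]) largest = right;
    if (largest != i) {
        int t = arr[i]; arr[i] = arr[largest]; arr[largest] = t;
        heapify(arr, n, largest);
    }
}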
Lecture-30
Radix Sort
What if the input values can be as large as n^2? We can't use counting sort alone, because counting sort would take O(n^2) time, which is worse than comparison-based sorting algorithms. Can we sort such an array in linear time? Radix Sort is the answer. The idea of Radix Sort is to do a digit-by-digit sort, starting from the least significant digit and moving to the most significant digit. Radix sort uses counting sort as a subroutine to sort.
1) Do the following for each digit i, where i varies from the least significant digit to the most significant digit:
      a) Sort the input array using counting sort (or any stable sort) according to the i'th digit.
Example:
Original, unsorted list:
170, 45, 75, 90, 802, 24, 2, 66
Sorting by least significant digit (1s place) gives: [*Notice that we keep 802 before 2,
because 802 occurred before 2 in the original list, and similarly for pairs 170 & 90 and
45 & 75.]
170, 90, 802, 2, 24, 45, 75, 66
Sorting by next digit (10s place) gives: [*Notice that 802 again comes before 2 as 802
comes before 2 in the previous list.]
802, 2, 24, 45, 66, 170, 75, 90
Sorting by most significant digit (100s place) gives:
2, 24, 45, 66, 75, 90, 170, 802
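A Java sketch of this procedure for non-negative decimal integers (the helper name countingSortByDigit is our own; each pass is a stable counting sort keyed on one digit):
static void radixSort(int[] arr) {
    int max = 0;
    for (int v : arr)                   // the largest value fixes the digit count
        max = Math.max(max, v);
    // one stable counting-sort pass per digit, least significant first
    for (int exp = 1; max / exp > 0; exp *= 10)
        countingSortByDigit(arr, exp);
}

static void countingSortByDigit(int[] arr, int exp) {
    int n = arr.length;
    int[] output = new int[n];
    int[] count = new int[10];
    for (int v : arr)                   // histogram of the current digit
        count[(v / exp) % 10]++;
    for (int d = 1; d < 10; d++)        // prefix sums give end positions
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--) {  // right-to-left scan keeps the sort stable
        int d = (arr[i] / exp) % 10;
        output[--count[d]] = arr[i];
    }
    System.arraycopy(output, 0, arr, 0, n);
}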
What is the running time of Radix Sort?
Let there be d digits in the input integers. Radix Sort takes O(d*(n+b)) time, where b is the base for representing numbers; for example, for the decimal system, b is 10. What is the value of d? If k is the maximum possible value, then d would be O(log_b(k)). So the overall time complexity is O((n+b) * log_b(k)), which looks like more than the time complexity of comparison-based sorting algorithms for a large k. Let us first limit k. Let k <= n^c where c is a constant. In that case, the complexity becomes O(n log_b(n)). But it still doesn't beat comparison-based sorting algorithms.
Linear Search
Linear search checks each element one by one, in sequence. The following method linearSearch() searches for a target in an array and returns the index of the target; if it is not found, it returns -1, which indicates an invalid index.
int linearSearch(int arr[], int target)
{
    for (int i = 0; i < arr.length; i++)
    {
        if (arr[i] == target)
            return i;
    }
    return -1;
}
Linear search loops through each element in the array; each loop body takes constant
time. Therefore, it runs in linear time O(n).
Lecture-31
Binary Search
For sorted arrays, binary search is more efficient than linear search. The process starts
from the middle of the input array:
If the target equals the element in the middle, return its index.
If the target is larger than the element in the middle, search the right half.
If the target is smaller, search the left half.
In the following binarySearch() method, the two index variables first and last indicate the searching boundary at each round.
int binarySearch(int arr[], int target)
{
    int first = 0, last = arr.length - 1;

    while (first <= last)
    {
        int mid = (first + last) / 2;
        if (target == arr[mid])
            return mid;
        if (target > arr[mid])
            first = mid + 1;
        else
            last = mid - 1;
    }
    return -1;
}
arr: {3, 9, 10, 27, 38, 43, 82}

target: 10
first: 0, last: 6, mid: 3, arr[mid]: 27 -- go left
first: 0, last: 2, mid: 1, arr[mid]: 9 -- go right
first: 2, last: 2, mid: 2, arr[mid]: 10 -- found

target: 40
first: 0, last: 6, mid: 3, arr[mid]: 27 -- go right
first: 4, last: 6, mid: 5, arr[mid]: 43 -- go left
first: 4, last: 4, mid: 4, arr[mid]: 38 -- go right
first: 5, last: 4 -- not found
Binary search divides the array in the middle at each round of the loop. Suppose the array has length n and the loop runs for t rounds; then we have n * (1/2)^t = 1, since at each round the array length is divided by 2. Thus t = log(n). At each round, the loop body takes constant time. Therefore, binary search runs in logarithmic time O(log n).
The following code implements binary search using recursion. To call the method, we need to provide the boundary indexes, for example,
binarySearch(arr, 0, arr.length - 1, target);
int binarySearch(int arr[], int first, int last, int target)
{
    if (first > last)
        return -1;

    int mid = (first + last) / 2;

    if (target == arr[mid])
        return mid;
    if (target > arr[mid])
        return binarySearch(arr, mid + 1, last, target);
    // target < arr[mid]
    return binarySearch(arr, first, mid - 1, target);
}
Lecture-32
Hashing
Introduction
When we put objects into a hashtable, it is possible that different objects (by the equals() method) might have the same hashcode. This is called a collision. Here is an example of a collision: the two different strings "Aa" and "BB" have the same hashcode:
"Aa" = 'A' * 31 + 'a' = 2112
"BB" = 'B' * 31 + 'B' = 2112
The big attraction of using a hash table is constant-time performance for the basic operations add, remove, contains, size. However, because of collisions, we cannot guarantee constant runtime in the worst case. Why? Imagine that all our objects collide into the same index. Then searching for one of them will be equivalent to searching in a list, which takes linear runtime. However, we can guarantee an expected constant runtime if we make sure that our lists won't become too long. This is usually implemented by maintaining a load factor that keeps track of the average length of lists. If the load factor approaches a threshold set in advance, we create a bigger array and rehash all elements from the old table into the new one.
Another technique of collision resolution is linear probing. If we cannot insert at index k, we try the next slot k+1. If that one is occupied, we go to k+2, and so on.
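A minimal sketch of a linear-probing insert (our simplification: an Integer[] table with null marking empty slots, assuming the table never becomes completely full):
static void insert(Integer[] table, int key) {
    int m = table.length;
    int k = Math.floorMod(key, m);   // home slot for this key
    while (table[k] != null)         // slot occupied: probe the next one
        k = (k + 1) % m;             // wrap around at the end of the table
    table[k] = key;
}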
Lecture-33
Hashing Functions
Choosing a good hashing function, h(k), is essential for hash-table based
searching. h should distribute the elements of our collection as uniformly as possible to
the "slots" of the hash table. The key criterion is that there should be a minimum
number of collisions.
If the probability that a key, k, occurs in our collection is P(k), then if there are m slots in our hash table, a uniform hashing function, h(k), would ensure:

   sum of P(k) over all keys k with h(k) = j  =  1/m,  for each slot j

Sometimes, this is easy to ensure. For example, if the keys are randomly distributed in (0,r], then,
h(k) = floor((mk)/r)
will provide uniform hashing.
Mapping keys to natural numbers
Most hashing functions will first map the keys to some set of natural numbers, say (0,r].
There are many ways to do this, for example if the key is a string of ASCII characters, we can simply add the ASCII representations of the characters mod 255 to produce a number in (0,255) - or we could xor them, or we could add them in pairs mod 2^16 - 1, or ...
Having mapped the keys to a set of natural numbers, we then have a number of
possibilities.
1. Use a mod function:
h(k) = k mod m.
When using this method, we usually avoid certain values of m. Powers of 2 are
usually avoided, for k mod 2^b simply selects the b low-order bits of k. Unless we
know that all the 2^b possible values of the lower-order bits are equally likely, this
will not be a good choice, because some bits of the key are not used in the hash
function.
Prime numbers which are close to powers of 2 seem to be generally good
choices for m.
For example, if we have 4000 elements, and we have chosen an overflow table
organization, but wish to have the probability of collisions quite low, then we
might choose m = 4093. (4093 is the largest prime less than 4096 = 2^12.)
2. Use the multiplication method:
o Multiply the key by a constant A, 0 < A < 1,
o Extract the fractional part of the product,
o Multiply this value by m.
Thus the hash function is:
h(k) = floor(m * (kA - floor(kA)))
In this case, the value of m is not critical and we typically choose a power of 2 so
that we can get the following efficient procedure on most digital computers:
o Choose m = 2^p.
o Multiply the w bits of k by floor(A * 2^w) to obtain a 2w-bit product.
o Extract the p most significant bits of the lower half of this product.
It seems that:
A = (sqrt(5)-1)/2 = 0.6180339887
is a good choice (see Knuth, "Sorting and Searching", v. 3 of "The Art of
Computer Programming").
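For w = 32 and m = 2^p, this recipe reduces to one multiplication and a shift. A hedged Java sketch (0x9E3779B9 is floor(A * 2^32) for the A above; we assume 1 <= p <= 31):
static int multHash(int k, int p) {
    // int multiplication keeps the lower 32 bits of the 64-bit product;
    // the unsigned shift then extracts its p most significant bits
    return (k * 0x9E3779B9) >>> (32 - p);
}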
3. Use universal hashing:
A malicious adversary can always choose the keys so that they all hash to the same slot, leading to an average O(n) retrieval time. Universal hashing seeks to
avoid this by choosing the hashing function randomly from a collection of hash
functions (cf Cormen et al, p 229- ). This makes the probability that the hash
function will generate poor behaviour small and produces good average
performance.