Bubble Sort Algorithm
In this algorithm, adjacent elements are compared repeatedly and swapped if they are in the wrong order. In general, it takes n-1 passes until all the
elements are in their correct places.
Some essential characteristics of the bubble sort algorithm are given here.
There are a total of n-1 passes.
The total number of comparisons is O(n^2).
It is a stable algorithm.
It is not an adaptive algorithm.
The worst-case time complexity of the bubble sort algorithm is O(n^2).
It can be made adaptive by using flags.
Pseudocode
Bubble Sort Algorithm
bubbleSort(arr):
    n = length(arr)
    for i from 0 to n-2:
        swapped = false
        for j from 0 to n-i-2:
            if arr[j] > arr[j+1]:
                swap(arr[j], arr[j+1])
                swapped = true
        if swapped is false:
            break
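A minimal runnable sketch of this pseudocode in Python is given below; the function name bubble_sort and the sample list are illustrative assumptions, not part of the source.

    def bubble_sort(arr):
        n = len(arr)
        for i in range(n - 1):
            swapped = False
            # Each pass bubbles the largest remaining element to the end.
            for j in range(n - i - 1):
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
                    swapped = True
            # No swaps in a full pass means the array is already sorted (flag-based adaptive behaviour).
            if not swapped:
                break
        return arr

    print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]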
Insertion Sort Algorithm
Insertion sort divides the array into two parts: a sorted part and an unsorted part. On each pass it takes the next element from the unsorted part, compares it with the elements of the sorted part from right to left, and shifts larger elements one position to the right until the element can be inserted at its correct position within the sorted part.
The worst-case time complexity of insertion sort is O(n^2).
It is a stable algorithm.
It is an adaptive algorithm.
Pseudocode
Insertion Sort Algorithm
insertionSort(arr):
    n = length(arr)
    for i from 1 to n-1:
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j+1] = arr[j]
            j = j - 1
        arr[j+1] = key
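A minimal runnable sketch of this pseudocode in Python is given below; the function name insertion_sort and the sample list are illustrative assumptions.

    def insertion_sort(arr):
        for i in range(1, len(arr)):
            key = arr[i]
            j = i - 1
            # Shift larger elements of the sorted part one position to the right.
            while j >= 0 and arr[j] > key:
                arr[j + 1] = arr[j]
                j -= 1
            arr[j + 1] = key  # insert key at its correct position in the sorted part
        return arr

    print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]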
Selection Sort Algorithm
In selection sort, each pass finds the smallest element of the current unsorted subarray and swaps it into its final position at the front of that subarray. Each pass therefore reduces the size of the unsorted part by 1 and increases the size of the sorted part by 1.
The time complexity of the selection sort algorithm is O(n^2).
It is not a stable algorithm.
It is a non-recursive algorithm.
It is also not an adaptive algorithm.
It requires the fewest swaps (at most n-1) to sort the array, unlike insertion sort, which may move elements many times.
Pseudocode
Selection Sort Algorithm
selectionSort(arr):
    n = length(arr)
    for i from 0 to n-1:
        min_index = i
        for j from i+1 to n-1:
            if arr[j] < arr[min_index]:
                min_index = j
        swap(arr[i], arr[min_index])
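A minimal runnable sketch of this pseudocode in Python is given below; the function name selection_sort and the sample list are illustrative assumptions.

    def selection_sort(arr):
        n = len(arr)
        for i in range(n - 1):
            min_index = i
            # Find the smallest element in the unsorted part arr[i..n-1].
            for j in range(i + 1, n):
                if arr[j] < arr[min_index]:
                    min_index = j
            # One swap per pass moves it to its final position.
            arr[i], arr[min_index] = arr[min_index], arr[i]
        return arr

    print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]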
Quick Sort Algorithm
Quick sort uses a divide-and-conquer strategy. It selects a pivot element and partitions the array into two subarrays, arranging the elements so that those less than the pivot lie on one side of the pivot and those greater than the pivot lie on the other side. It then sorts the subarrays recursively.
Some important characteristics of the quick sort algorithm are given here.
The worst case occurs when the array is already sorted (or reverse-sorted) and the first or last element is chosen as the pivot.
The worst-case time complexity of the quick sort algorithm is O(n^2); its average-case time complexity is O(n log n).
It is also a recursive algorithm.
It also follows the divide-and-conquer approach.
It works by selecting a pivot element and then partitioning the array into two
major parts.
Pseudocode
Quick Sort Algorithm
quickSort(arr, low, high):
    if low < high:
        pivot_index = partition(arr, low, high)
        quickSort(arr, low, pivot_index - 1)
        quickSort(arr, pivot_index + 1, high)

partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j from low to high-1:
        if arr[j] <= pivot:
            i = i + 1
            swap(arr[i], arr[j])
    swap(arr[i+1], arr[high])
    return i + 1
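A minimal runnable sketch of this pseudocode in Python is given below; the function names quick_sort and partition and the sample list are illustrative assumptions.

    def partition(arr, low, high):
        pivot = arr[high]          # last element as the pivot
        i = low - 1
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        return i + 1               # final position of the pivot

    def quick_sort(arr, low, high):
        if low < high:
            p = partition(arr, low, high)
            quick_sort(arr, low, p - 1)   # sort elements left of the pivot
            quick_sort(arr, p + 1, high)  # sort elements right of the pivot
        return arr

    print(quick_sort([10, 7, 8, 9, 1, 5], 0, 5))  # [1, 5, 7, 8, 9, 10]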
Merge Sort
Merge sort is a divide-and-conquer algorithm. It divides the unsorted list into n sublists, each containing one element, and then repeatedly merges pairs of sorted sublists to produce new sorted sublists until only one remains.
Some of the important Merge sort characteristics are given here.
It works on a divide-and-conquer approach.
It is a stable algorithm.
It is not an adaptive algorithm in its standard form; it performs the same splits and merges regardless of the input order.
The time complexity of merge sort is O(n log n) in all cases.
It is a simple and reliable algorithm whose worst-case time complexity is better than that of the O(n^2) algorithms described above.
It is a recursive sorting algorithm.
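A minimal top-down merge sort sketch in Python, following the description above, is given below; the function name merge_sort and the sample list are illustrative assumptions.

    def merge_sort(arr):
        if len(arr) <= 1:
            return arr
        mid = len(arr) // 2
        left = merge_sort(arr[:mid])    # sort each half recursively
        right = merge_sort(arr[mid:])
        # Merge the two sorted halves into one sorted list.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:     # <= keeps equal keys in order (stable)
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]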