Algorithm & Data Structure Lec2 (BET)
Fall 2011
Efficiency of an Algorithm
An algorithm's growth rate is a measure of its efficiency.
Growth rate: measures an algorithm's time requirement as a function of the problem size N, that is, how quickly the algorithm's time requirement grows as the problem size grows.
Example
Suppose we have two algorithms A and B for solving the same problem. Which one do we use?
Big O Notation
If algorithm A requires time proportional to f(N), algorithm A is said to be of order f(N), denoted O(f(N)).
Constant : O(1)
Logarithmic : O(log N)
Time requirement increases slowly for a logarithmic function.
Typical algorithm: solves a problem by solving a smaller constant fraction of the problem.
Example: binary search
Linear : O(N)
Time requirement increases directly with the size of the problem.
Example: retrieving an element from a linked list
O(N log N)
Quadratic : O(N²)
Time requirement increases rapidly with the size of the problem.
Typical algorithms: algorithms that use two nested loops.
Practical for only small problems.
Cubic : O(N³)
Time requirement increases rapidly with the size of the problem.
Typical algorithms: algorithms that use three nested loops.
Exponential : O(2^N)
Very, very, very bad, even for small N. Usually increases too rapidly to be practical.
Sorting Algorithms
Sorting is the process of rearranging data elements/items in ascending or descending order.
Unsorted Data
Sorting Algorithms
Bubble Sort, Selection Sort, Insertion Sort, Shell Sort, Comb Sort, Merge Sort, Heap Sort, Quick Sort, Counting Sort, Bucket Sort, Radix Sort, Distribution Sort, Timsort
Source: Wikipedia
Bubble Sort
Compares Adjacent Items and Exchanges Them if They are Out of Order
When you order successive pairs of elements, the largest element bubbles to the top (end) of the array.
Bubble Sort
Pass 1: 29 10 14 37 13 → 10 29 14 37 13 → 10 14 29 37 13 → 10 14 29 37 13 → 10 14 29 13 37
Pass 2: 10 14 29 13 37 → 10 14 29 13 37 → 10 14 29 13 37 → 10 14 13 29 37
Selection Sort
To sort an array into ascending order, first search it for the largest element. Because you want the largest element to be in the last position of the array, you swap the last item with the largest item, even if these items appear to be identical.
Now, ignoring the last (largest) item of the array, search the rest of the array for its largest item and swap it with its last item, which is the next-to-last item of the original array.
You continue until you have selected and swapped N-1 of the N items in the array. The remaining item, which is now in the first position of the array, is in its proper order.
Selection Sort
Initial array:   29 10 14 37 13
After 1st swap:  29 10 14 13 37
After 2nd swap:  13 10 14 29 37
After 3rd swap:  13 10 14 29 37
After 4th swap:  10 13 14 29 37
Insertion Sort
Divide Array Into Sorted Section At Front (Initially Empty), Unsorted Section At End
Step Iteratively Through Array, Moving Each Element To Proper Place In Sorted Section
[Diagram: after i iterations, positions 0 through i-1 form the sorted section; positions i through N-1 are the unsorted section]
Insertion Sort
Initial array:                       29 10 14 37 13
Copy 10, shift 29, insert 10:        10 29 14 37 13
Copy 14, shift 29, insert 14:        10 14 29 37 13
Copy 37, insert 37 on itself:        10 14 29 37 13
Copy 13, shift 37 29 14, insert 13:  10 13 14 29 37  (sorted array)
Shell Sort
An improved, more efficient version of insertion sort. It steps through the data elements/items like insertion sort, but compares items that are a gap apart and swaps them instead of shifting; the gap shrinks each pass until it reaches 1.
Shell Sort
3 5 1 2 4 → 1 5 3 2 4 → 1 2 3 5 4 → 1 2 3 4 5
Merge Sort
Divide-and-conquer algorithm. Recursive steps: recursively split the input in half, sort each half, then recursively merge the pairs of sorted pieces.
Merge Sort
The recursive calls continue to divide the array into pieces until each piece contains only one item. An array of one item is obviously sorted. The algorithm then merges these small pieces until one sorted array results.
Merge Sort
[Diagram: merge sort of 38 16 27 39 12 27]
Recursive calls split the array: 38 16 27 | 39 12 27, then 38 | 16 27 and 39 | 12 27, down to single items.
Merge steps: 16 and 27 merge to 16 27, then with 38 to 16 27 38; 12 and 27 merge to 12 27, then with 39 to 12 27 39; the final merge gives 12 16 27 27 38 39.
Quick Sort
Divide-and-conquer algorithm. Quicksort works by partitioning the array into two pieces separated by a single element that is greater than all the elements in the left part and smaller than all the elements in the right part.
This guarantees that the single element, called the pivot element, is in its correct position.
Then the algorithm proceeds, applying the same method to the two parts separately.
Quick Sort
Partition (Divide)
all elements to the left are less; all elements to the right are greater than or equal
< pivot
pivot
>= pivot
Quick Sort
Conquer
Partitioning Method
Must arrange the items into two regions: S1, the set of items less than the pivot, and S2, the set of items greater than or equal to the pivot. Different algorithms exist for the choice of a pivot; here the pivot is retained in A[F]. The items that await placement are in another region, called the unknown region.
Region layout: A[F] holds the pivot P; S1 (items < P) ends at index LastS1; S2 (items >= P) follows; the unknown region (?) runs from FirstUnknown to L.
At each step of the partition algorithm you examine one item from the unknown region and place it in S1 or S2:
To place the item in S2, simply increment FirstUnknown by 1; the item becomes the rightmost member of S2.
To place the item in S1, swap A[FirstUnknown] with the first item of S2, which is A[LastS1+1], and then increment LastS1 by 1. Thus the item that was in A[FirstUnknown] ends up at the rightmost position of S1, and the item of S2 that was moved to A[FirstUnknown] becomes, once FirstUnknown is incremented by 1, the rightmost member of S2.
When no unknown items remain, interchange A[LastS1], the rightmost item in S1, with the pivot. Thus the pivot is in its actual location.
[Trace: partitioning 27 38 12 39 27 16; pivot 27 is kept in A[F], FirstUnknown = 1. Layout: pivot | S1 | S2 | unknown]
38 belongs in S2 (S1 is empty):        27 | 38 | 12 39 27 16
12 belongs in S1, swap 38 & 12:        27 | 12 | 38 | 39 27 16
39 belongs in S2:                      27 | 12 | 38 39 | 27 16
27 belongs in S2:                      27 | 12 | 38 39 27 | 16
16 belongs in S1, swap 38 & 16:        27 | 12 16 | 39 27 38
No more unknown; place pivot between S1 and S2 (swap 27 & 16):  16 12 | 27 | 39 27 38
Radix Sort (sort by each decimal digit, least significant first):
Original:              329 457 657 839 436 720 355
After ones digit:      720 355 436 457 657 329 839
After tens digit:      720 329 436 839 355 457 657
After hundreds digit:  329 355 436 457 657 720 839
Mergesort: N log N
Quicksort: N² (worst case)
Radix Sort: N
Searching Algorithms
Searching is the process of determining whether or not a given value exists in a data structure or a storage media.
Linear search starts from the beginning of an array/list and continues until the item is found or the entire array/list has been searched.
Sequentially scan the array, comparing each array item with the searched value.
If a match is found, return the index of the matched element; otherwise return -1.
Note: linear search can be applied to both sorted and unsorted arrays.
Linear Search
bool LinSearch(double x[], int n, double item)
{
    for (int i = 0; i < n; i++) {
        if (x[i] == item) {
            return true;   // found a match
        }
        // otherwise keep scanning; do not return yet
    }
    return false;          // searched the whole array without a match
}
Benefits
Easy to understand. Array can be in any order.
Disadvantages
Inefficient for an array of N elements: examines N/2 elements on average for a value in the array, N elements for a value not in the array.
Binary search looks for an item in an array/list using divide and conquer strategy
Binary Search
The binary search algorithm assumes that the items in the array being searched are sorted.
If the item for which we are searching is less than the item in the middle, we know that the item won't be in the second half of the array.
Once again we examine the middle element of the remaining portion. The process continues, with each comparison cutting in half the portion of the array where the item might be.
Binary Search
Binary search uses a recursive method to search an array for a specified value.
If the value is found, its index is returned; if the value is not found, -1 is returned.
Note: each execution of the recursive method reduces the search space by about a half.
1. There is no infinite recursion: on each recursive call, the value of first is increased or the value of last is decreased. If the chain of recursive calls does not end in some other way, then eventually the method will be called with first larger than last.
2. Each stopping case performs the correct action for that case: if first > last, there are no array elements between a[first] and a[last], so key is not in this segment of the array, and the result is correctly set to -1.
3. For each of the cases that involve recursion, if all recursive calls perform their actions correctly, then the entire case performs correctly. If key < a[mid], then key must be one of the elements a[first] through a[mid-1], or it is not in the array; the method should then search only those elements, which it does. If key > a[mid], then key must be one of the elements a[mid+1] through a[last], or it is not in the array; again, the method searches only those elements.
The method search passes all three tests: therefore, it is a good recursive method definition.
The binary search algorithm is extremely fast compared to an algorithm that tries all array elements in order
About half the array is eliminated from consideration right at the start
Then a quarter of the array, then an eighth of the array, and so forth
Given an array with 1,000 elements, the binary search will only need to compare about 10 array elements to the key value, as compared to an average of 500 for a serial search algorithm
The binary search algorithm has a worst-case running time that is logarithmic:
O(log n)
If desired, the recursive version of the method search can be converted to an iterative version that will run more efficiently