Algorithm & Data Structure Lec2 (BET)


Algorithm & Data Structure CSC-112

Fall 2011

Syed Muhammad Raza

Efficiency of an Algorithm

The Amount of Computer Time and Space that an Algorithm Requires

We are more interested in time complexity. Why?


Analyzing efficiency lets us compare different methods of solving the same problem

Algorithm Analysis Should be Independent of

Specific Implementations and Programming Languages

Computers

Data

Efficiency is a concern for large problems (large data size)

Algorithm Growth Rate


Algorithm growth rate is a measure of its efficiency

Growth Rate: Measure an Algorithm's Time Requirement as a Function of the Problem Size (N)

How Quickly the Algorithm's Time Requirement Grows as a Function of the Problem Size

The Way to Measure a Problem's Size Depends on the Application


Number of Nodes in a Linked List

The Size of an Array

Number of Items in the Stack


Etc.

Example

Suppose we have two algorithms A and B for solving the same problem. Which one do we use?

Suppose

Algorithm A Requires Time Proportional to N^2, expressed as O(N^2) (this is Big O Notation)

Algorithm B Requires Time Proportional to N, i.e. B is O(N)

Obviously B is a better choice

Big O Notation

If Algorithm A Requires Time Proportional to f(N), Algorithm A is Said to be Order f(N), Which is Denoted as O(f(N))

f(N) is Called the Algorithm's Growth-Rate Function

Growth Rates Comparison

Growth Rates Comparison (Tabular)

Common Growth Rates

Constant : O(1)

Time Requirement is Constant

Independent of Problem Size


Example: Accessing an array element

Logarithmic : O(log N)

Time Requirement Increases Slowly for a Logarithmic Function

Typical Algorithm: Solves a Problem by Solving a Smaller Constant Fraction of the Problem

Example: Binary Search

Linear : O(N)

Time Requirement Increases Directly with the Size of the Problem

Example: Retrieving an Element from a Linked List

Common Growth Rates

O(N log N)

Time Requirement Increases More Rapidly Than a Linear Algorithm

Typical Algorithms: Divide a Problem into Smaller Problems That Are Each Solved Separately (e.g. Merge Sort)

Quadratic: O(N^2)

Time Requirement Increases Rapidly with the Size of the Problem

Typical Algorithms: Algorithms That Use Two Nested Loops

Practical for Only Small Problems

Cubic: O(N^3)

Time Requirement Increases Rapidly with the Size of the Problem

Typical Algorithms: Algorithms That Use Three Nested Loops
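The loop structure alone often reveals the growth rate. A minimal C++ sketch (not from the original slides; the function names are illustrative):

// O(N): one loop, constant work per element
long sumArray(const int a[], int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

// O(N^2): two nested loops over the same data
int countPairsWithSum(const int a[], int n, int target) {
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)   // inner loop shrinks, but still ~N^2/2 pairs
            if (a[i] + a[j] == target)
                count++;
    return count;
}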

Common Growth Rates

Exponential Time: O(2^N)

Usually Increases Too Rapidly to Be Practical

Very, very, very bad, even for small N

Sorting Algorithms

Sorting is the process of rearranging your data elements/items in ascending or descending order

Unsorted Data

Sorted Data (Ascending)

Sorted Data (Descending)

Sorting Algorithms

There are many:

Bubble Sort, Selection Sort, Insertion Sort, Shell Sort, Comb Sort, Merge Sort, Heap Sort, Quick Sort, Counting Sort, Bucket Sort, Radix Sort, Distribution Sort, Timsort
Source: Wikipedia

Bubble Sort

Compares Adjacent Items and Exchanges Them if They are Out of Order

When You Order Successive Pairs of Elements, the Largest Element Bubbles to the Top (End) of the Array

Bubble Sort (Usually) Requires Several Passes Through the Array

Bubble Sort
Pass 1 (the largest item, 37, bubbles to the end):

29 10 14 37 13
10 29 14 37 13   (swap 29 & 10)
10 14 29 37 13   (swap 29 & 14)
10 14 29 37 13   (29 & 37 already in order)
10 14 29 13 37   (swap 37 & 13)

Pass 2:

10 14 29 13 37
10 14 13 29 37   (swap 29 & 13)
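The slides show only the trace; a minimal C++ implementation of the pass structure above (the early-exit flag is a common optimization, an assumption not shown on the slide):

void bubbleSort(int a[], int n) {
    for (int pass = 0; pass < n - 1; pass++) {
        bool swapped = false;
        // each pass bubbles the largest remaining item to the end
        for (int i = 0; i < n - 1 - pass; i++) {
            if (a[i] > a[i + 1]) {                               // out of order?
                int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp; // exchange them
                swapped = true;
            }
        }
        if (!swapped) break;   // no exchanges: the array is already sorted
    }
}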

Selection Sort

To Sort an Array into Ascending Order, First Search for the Largest Element

Because You Want the Largest Element to Be in the Last Position of the Array, You Swap the Last Item with the Largest Item, Even if These Items Appear to Be Identical

Now, Ignoring the Last (Largest) Item of the Array, Search the Rest of the Array for Its Largest Item and Swap It with Its Last Item, Which is the Next-to-Last Item in the Original Array

You Continue Until You Have Selected and Swapped N-1 of the N Items in the Array

The Remaining Item, Which is Now in the First Position of the Array, is in its Proper Order

Selection Sort

Initial Array:    29 10 14 37 13
After 1st Swap:   29 10 14 13 37
After 2nd Swap:   13 10 14 29 37
After 3rd Swap:   13 10 14 29 37
After 4th Swap:   10 13 14 29 37
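A C++ sketch of the selection sort just described (swapping the largest item into the last position of a shrinking region):

void selectionSort(int a[], int n) {
    for (int last = n - 1; last >= 1; last--) {
        // find the largest item in a[0..last]
        int largest = 0;
        for (int i = 1; i <= last; i++)
            if (a[i] > a[largest])
                largest = i;
        // swap it with the last item of the region, even if they are identical
        int tmp = a[largest]; a[largest] = a[last]; a[last] = tmp;
    }
}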

Insertion Sort

Divide Array Into Sorted Section At Front (Initially Empty), Unsorted Section At End

Step Iteratively Through the Array, Moving Each Element to Its Proper Place in the Sorted Section

[Diagram: after i iterations, positions 0..i-1 hold the sorted section and positions i..N-1 the unsorted section]

Insertion Sort
Initial Array:                          29 10 14 37 13
Copy 10, Shift 29, Insert 10:           10 29 14 37 13
Copy 14, Shift 29, Insert 14:           10 14 29 37 13
Copy 37, Insert 37 on Itself:           10 14 29 37 13
Copy 13, Shift 14, 29, 37, Insert 13:   10 13 14 29 37   (Sorted Array)
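In C++, the copy/shift/insert steps traced above look roughly like this (a sketch, not the slides' own code):

void insertionSort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int item = a[i];              // copy the first unsorted item
        int j = i;
        while (j > 0 && a[j - 1] > item) {
            a[j] = a[j - 1];          // shift larger sorted items to the right
            j--;
        }
        a[j] = item;                  // insert the item into the sorted section
    }
}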

Shell Sort

Improved, more efficient version of insertion sort

It iterates over the data elements/items like insertion sort, but instead of shifting them it makes swaps

Shell Sort

3 5 1 2 4  →  1 5 3 2 4  →  1 2 3 5 4  →  1 2 3 4 5
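A C++ sketch of shell sort; the slides do not specify a gap sequence, so this assumes the common N/2, N/4, ..., 1 sequence (the final gap-1 round is an ordinary insertion-style pass done with swaps):

void shellSort(int a[], int n) {
    for (int gap = n / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < n; i++) {
            // swap items that are 'gap' apart until this one is in place
            for (int j = i; j >= gap && a[j - gap] > a[j]; j -= gap) {
                int tmp = a[j]; a[j] = a[j - gap]; a[j - gap] = tmp;
            }
        }
    }
}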

Merge Sort

Divide and Conquer Algorithm: Recursively Split the Input in Half, Then Merge the Sorted Pieces

Recursive Steps:

Divide the Array in Half

Sort the Halves

Merge the Halves Inside a Temporary Array

Copy the Temporary Array to the Appropriate Locations in the Original Array

Merge Sort

The Recursive calls Continue to divide the Array into Pieces Until Each Piece Contains Only One Item

An Array of One Item is Obviously Sorted

The Algorithm Then Merges These Small Pieces Until One Sorted Array Results

Merge Sort
Recursive Calls to Mergesort (split until each piece has one item):

38 16 27 39 12 27
38 16 27 | 39 12 27
38 | 16 27 | 39 | 12 27
38 | 16 | 27 | 39 | 12 | 27

Merge Steps:

16 | 27 → 16 27            12 | 27 → 12 27
38 | 16 27 → 16 27 38      39 | 12 27 → 12 27 39
16 27 38 | 12 27 39 → 12 16 27 27 38 39
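A C++ sketch of the recursive steps; the caller supplies the temporary array mentioned on the earlier slide, e.g. mergeSort(a, tmp, 0, n - 1):

void merge(int a[], int tmp[], int first, int mid, int last) {
    int i = first, j = mid + 1, k = first;
    while (i <= mid && j <= last)              // merge the two sorted halves
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid)  tmp[k++] = a[i++];       // copy leftovers from the left half
    while (j <= last) tmp[k++] = a[j++];       // ...or from the right half
    for (k = first; k <= last; k++)            // copy back to the original array
        a[k] = tmp[k];
}

void mergeSort(int a[], int tmp[], int first, int last) {
    if (first >= last) return;                 // one item is obviously sorted
    int mid = (first + last) / 2;
    mergeSort(a, tmp, first, mid);             // sort the halves...
    mergeSort(a, tmp, mid + 1, last);
    merge(a, tmp, first, mid, last);           // ...then merge them
}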

Quick Sort

Divide and Conquer Algorithm

Quicksort Works by Partitioning the Array into Two Pieces Separated by a Single Element That is Greater Than All the Elements in the Left Part and Smaller Than or Equal to All the Elements in the Right Part

This Guarantees That the Single Element, Called the Pivot Element, is in its Correct Position

Then the Algorithm Proceeds, Applying the Same Method to the Two Parts Separately

Quick Sort

Partition (Divide)

Choose a Pivot

Find the Position for the Pivot So That

All Elements to the Left Are Less (< pivot)

All Elements to the Right Are Greater or Equal (>= pivot)

[Diagram: < pivot | pivot | >= pivot]

Quick Sort

Conquer

Apply the Same Algorithm to Each Part

[Diagram: each part, < pivot and >= pivot, is partitioned again around its own pivot p]

Partitioning Method

Must Arrange Items into Two Regions: S1, the Set of Items Less Than the Pivot, and S2, the Set of Items Greater Than or Equal to the Pivot

Different Algorithms Exist for the Choice of a Pivot; Retain the Pivot in A[F]

The Items That Await Placement Are in Another Region, Called the Unknown Region

[Layout of A[F..L]:  P (at F) | S1: < P (ends at LastS1) | S2: >= P | Unknown: ? (starts at FirstUnknown, ends at L)]

At Each Step of the Partition Algorithm You Examine One Item from the Unknown Region and Place It in S1 or S2

To Move A[FirstUnknown] into S1

Swap A[FirstUnknown] with the First Item of S2, Which is A[LastS1+1], and Then Increment LastS1 by 1

Thus the Item That Was in A[FirstUnknown] Will Be at the Rightmost Position of S1

The Item of S2 That Was Moved to A[FirstUnknown] Becomes the Rightmost Member of S2 Once You Increment FirstUnknown by 1

Thus, Statements for the Following Steps Are Required:

Swap A[FirstUnknown] with A[LastS1+1]

Increment LastS1

Increment FirstUnknown

To Move A[FirstUnknown] into S2

Region S2 and the Unknown Region Are Adjacent

Simply Increment FirstUnknown by 1; S2 Expands to the Right

To Move the Pivot Between S1 and S2

Interchange A[LastS1], the Rightmost Item in S1, with the Pivot

Thus, the Pivot Will Be in Its Actual Location
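Putting the three cases together in C++ (variable names follow the slides; this is one reasonable way to code the scheme, not necessarily the lecturer's exact version):

// Partition A[F..L] around the pivot kept in A[F]; returns the pivot's final index
int partition(int A[], int F, int L) {
    int pivot = A[F];
    int lastS1 = F;                            // S1 = A[F+1..lastS1] starts empty
    for (int firstUnknown = F + 1; firstUnknown <= L; firstUnknown++) {
        if (A[firstUnknown] < pivot) {
            // move A[firstUnknown] into S1: swap with the first item of S2
            lastS1++;
            int tmp = A[firstUnknown]; A[firstUnknown] = A[lastS1]; A[lastS1] = tmp;
        }
        // otherwise A[firstUnknown] joins S2 just by advancing firstUnknown
    }
    int tmp = A[F]; A[F] = A[lastS1]; A[lastS1] = tmp;   // place pivot between S1 and S2
    return lastS1;
}

void quickSort(int A[], int F, int L) {
    if (F >= L) return;                        // zero or one item: already sorted
    int p = partition(A, F, L);
    quickSort(A, F, p - 1);                    // apply the same method to each part
    quickSort(A, p + 1, L);
}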

Quick Sort Example (only one step)


Choose Pivot 27, Keep It in A[F]; FirstUnknown = 1 (Points to 38)

Pivot | S1    | S2       | Unknown
27    |       |          | 38 12 39 27 16
27    |       | 38       | 12 39 27 16      (38 belongs in S2; S1 is empty)
27    | 12    | 38       | 39 27 16         (12 belongs in S1, swap 38 & 12)
27    | 12    | 38 39    | 27 16            (39 belongs in S2)
27    | 12    | 38 39 27 | 16               (27 belongs in S2)
27    | 12 16 | 39 27 38 |                  (16 belongs in S1, swap 38 & 16)

No More Unknown; Place Pivot Between S1 and S2 (Interchange 27 & 16):

16 12 | 27 | 39 27 38     (S1 | Pivot | S2)

Radix Sort

Input:                   329 457 657 839 436 720 355
Pass 1 (ones digit):     720 355 436 457 657 329 839
Pass 2 (tens digit):     720 329 436 839 355 457 657
Pass 3 (hundreds digit): 329 355 436 457 657 720 839
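A C++ sketch of the least-significant-digit radix sort traced above; the slides show only the passes, so the stable counting pass per digit is an assumption about the implementation:

void radixSort(int a[], int n, int maxDigits) {
    int* out = new int[n];
    int exp = 1;                                     // 1, 10, 100, ...
    for (int d = 0; d < maxDigits; d++, exp *= 10) {
        int count[10] = {0};
        for (int i = 0; i < n; i++)                  // count each digit value
            count[(a[i] / exp) % 10]++;
        for (int k = 1; k < 10; k++)                 // prefix sums give end positions
            count[k] += count[k - 1];
        for (int i = n - 1; i >= 0; i--)             // right-to-left keeps the pass stable
            out[--count[(a[i] / exp) % 10]] = a[i];
        for (int i = 0; i < n; i++)                  // copy back for the next pass
            a[i] = out[i];
    }
    delete[] out;
}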

Comparison of Sorting Algorithms


Worst Case:

Selection Sort   N^2
Bubble Sort      N^2
Insertion Sort   N^2
Mergesort        N log N
Quicksort        N^2
Radix Sort       N

Searching Algorithms

Searching is the process of determining whether or not a given value exists in a data structure or a storage medium.

We will study two searching algorithms


Linear Search Binary Search

Linear Search: O(n)

The linear (or sequential) search algorithm on an array is:

Start from the beginning of the array/list and continue until the item is found or the entire array/list has been searched.

Sequentially scan the array, comparing each array item with the searched value.

If a match is found, return the index of the matched element; otherwise return -1.

Note: linear search can be applied to both sorted and unsorted arrays.

Linear Search
bool LinSearch(double x[ ], int n, double item)
{
    for (int i = 0; i < n; i++)
    {
        if (x[i] == item)
            return true;    // found: stop searching
        // no else-return here: returning false inside the loop
        // would stop the search after the first element
    }
    return false;           // scanned the whole array without a match
}

Linear Search Tradeoffs

Benefits

Easy to Understand

Array Can Be in Any Order

Disadvantages

Inefficient for an Array of N Elements: Examines N/2 Elements on Average for a Value in the Array, N Elements for a Value Not in the Array

Binary Search: O(log₂ n)

Binary search looks for an item in an array/list using divide and conquer strategy

Binary Search

Binary search assumes that the items in the array being searched are sorted

In a binary search, the algorithm begins at the middle of the array

If the item for which we are searching is less than the item in the middle, we know that the item won't be in the second half of the array

Once again we examine the middle element of the remaining portion

The process continues, with each comparison cutting in half the portion of the array where the item might be

Binary Search

Binary search uses a recursive method to search an array to find a specified value

The array must be a sorted array: a[0] ≤ a[1] ≤ a[2] ≤ . . . ≤ a[finalIndex]

If the value is found, its index is returned

If the value is not found, -1 is returned

Note: Each execution of the recursive method reduces the search space by about a half

Pseudocode for Binary Search
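The pseudocode itself did not survive the slide conversion; here is a C++ version of the recursive search described in the surrounding slides (returns the index of key in a[first..last], or -1):

int search(const int a[], int first, int last, int key) {
    if (first > last)
        return -1;                               // empty segment: key not present
    int mid = (first + last) / 2;
    if (key == a[mid])
        return mid;                              // found
    else if (key < a[mid])
        return search(a, first, mid - 1, key);   // search the lower half
    else
        return search(a, mid + 1, last, key);    // search the upper half
}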

Execution of Binary Search


Key Points in Binary Search


1. There is no infinite recursion

On each recursive call, the value of first is increased, or the value of last is decreased

If the chain of recursive calls does not end in some other way, then eventually the method will be called with first larger than last

2. Each stopping case performs the correct action for that case

If first > last, there are no array elements between a[first] and a[last], so key is not in this segment of the array, and result is correctly set to -1

If key == a[mid], result is correctly set to mid

Key Points in Binary Search


3. For each of the cases that involve recursion, if all recursive calls perform their actions correctly, then the entire case performs correctly

If key < a[mid], then key must be one of the elements a[first] through a[mid-1], or it is not in the array

The method should then search only those elements, which it does

The recursive call is correct, therefore the entire action is correct

Key Points in Binary Search

If key > a[mid], then key must be one of the elements a[mid+1] through a[last], or it is not in the array

The method should then search only those elements, which it does

The recursive call is correct, therefore the entire action is correct

The method search passes all three tests: Therefore, it is a good recursive method definition

Efficiency of Binary Search

The binary search algorithm is extremely fast compared to an algorithm that tries all array elements in order

About half the array is eliminated from consideration right at the start

Then a quarter of the array, then an eighth of the array, and so forth

Efficiency of Binary Search

Given an array with 1,000 elements, the binary search will only need to compare about 10 array elements to the key value, as compared to an average of 500 for a serial search algorithm

The binary search algorithm has a worst-case running time that is logarithmic:

O(log n)

A serial search algorithm is linear: O(n)

If desired, the recursive version of the method search can be converted to an iterative version that will run more efficiently

Binary Search

IT IS ALL ABOUT DOING THE RIGHT THING AT THE RIGHT TIME
