
COIT23001 Object Oriented Development

Week 9

 Reading task: Chapter 19


 This lecture covers the following topics.
 Algorithms and Implementations
 Overview of search and sorting algorithms
 Faster sorting algorithms
 Algorithms and Big-O notation

The lecture slides are based on the textbook Java How to Program (Deitel & Deitel)
© Copyright 1992-2012 by Pearson Education, Inc. All Rights Reserved.
Overview of search & sorting algorithms

 Searching data involves determining whether a value (referred to as the search key) is present in the data and, if so, finding its location.
 Two popular search algorithms are the simple linear search and the faster but more complex binary search.
 Sorting places data in ascending or descending order, based on one or more sort keys.
 This chapter introduces two simple sorting algorithms, the selection sort and the insertion sort, along with the more efficient but more complex merge sort.
 Figure 19.1 summarizes the searching and sorting algorithms discussed in the examples and exercises of the textbook. (You can find details of search algorithms in the course COIT29222/COIT11222.) A brief sketch of both searches follows below.
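The following is a minimal Java sketch of both search strategies, assuming a plain int array; the class and method names are illustrative only and are not the textbook's listings. Binary search additionally assumes the array is already sorted.

```java
// Sketch of linear search (O(n)) and binary search (O(log n)) over an int array.
public class SearchSketch
{
    // Linear search: examine each element in turn until the key is found.
    public static int linearSearch(int[] data, int searchKey)
    {
        for (int index = 0; index < data.length; index++)
            if (data[index] == searchKey)
                return index; // found: return its location
        return -1; // search key not present
    }

    // Binary search: requires sorted data; halves the search range each step.
    public static int binarySearch(int[] data, int searchKey)
    {
        int low = 0;
        int high = data.length - 1;
        while (low <= high)
        {
            int middle = (low + high) / 2;
            if (searchKey == data[middle])
                return middle;
            else if (searchKey < data[middle])
                high = middle - 1; // continue in the lower half
            else
                low = middle + 1;  // continue in the upper half
        }
        return -1; // search key not present
    }

    public static void main(String[] args)
    {
        int[] sorted = {2, 5, 8, 12, 17, 23, 31};
        System.out.println(linearSearch(sorted, 17)); // prints 4
        System.out.println(binarySearch(sorted, 23)); // prints 5
    }
}
```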
Faster Sorting Algorithms

 Sorting data (i.e., placing the data into some particular order,
such as ascending or descending) is one of the most important
computing applications.
 An important item to understand about sorting is that the end
result—the sorted array—will be the same no matter which
algorithm you use to sort the array.
 The choice of algorithm affects only the run time and memory
use of the program.
 The rest of this chapter introduces three common sorting
algorithms.
 The first two—selection sort and insertion sort—are easy to program
but inefficient.
 The last algorithm—merge sort—is much faster than selection sort and
insertion sort but harder to program.

Selection Sort

 Selection sort
 simple, but inefficient, sorting algorithm
 Its first iteration selects the smallest element in the array and swaps it
with the first element.
 The second iteration selects the second-smallest item (which is the
smallest item of the remaining elements) and swaps it with the second
element.
 The algorithm continues until the last iteration selects the second-
largest element and swaps it with the second-to-last index, leaving the
largest element in the last index.
 After the ith iteration, the smallest i items of the array will be sorted
into increasing order in the first i elements of the array.
 After the first iteration, the smallest element is in the first position.
After the second iteration, the two smallest elements are in order in the
first two positions, etc.
 The selection sort algorithm runs in O(n²) time (a Java sketch follows below).
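A minimal selection sort sketch in Java, assuming an int array; it illustrates the passes described above and is not the textbook's listing.

```java
// Selection sort sketch: on each pass, find the smallest remaining element
// and swap it into the next unsorted position. Runs in O(n^2) time.
public static void selectionSort(int[] data)
{
    for (int i = 0; i < data.length - 1; i++)
    {
        int smallest = i; // index of the smallest remaining element

        for (int index = i + 1; index < data.length; index++)
            if (data[index] < data[smallest])
                smallest = index;

        // swap the smallest remaining element into position i
        int temporary = data[i];
        data[i] = data[smallest];
        data[smallest] = temporary;
    }
}
```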

Insertion Sort

 Insertion sort
 another simple, but inefficient, sorting algorithm
 The first iteration takes the second element in the
array and, if it’s less than the first element, swaps it
with the first element.
 The second iteration looks at the third element and
inserts it into the correct position with respect to the
first two, so all three elements are in order.
 At the ith iteration of this algorithm, the first i
elements in the original array will be sorted.
 The insertion sort algorithm also runs in O(n²) time (a Java sketch follows below).
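A minimal insertion sort sketch in Java, assuming an int array; again illustrative only, not the textbook's listing.

```java
// Insertion sort sketch: take the next element and insert it into its
// correct position among the already-sorted elements to its left. O(n^2).
public static void insertionSort(int[] data)
{
    for (int next = 1; next < data.length; next++)
    {
        int insert = data[next]; // value to insert
        int moveItem = next;     // position where the value will be placed

        // shift larger sorted elements one position to the right
        while (moveItem > 0 && data[moveItem - 1] > insert)
        {
            data[moveItem] = data[moveItem - 1];
            moveItem--;
        }

        data[moveItem] = insert; // place the value in its sorted position
    }
}
```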
Merge Sort

 Merge sort
 efficient sorting algorithm
 conceptually more complex than selection sort and insertion sort
 Sorts an array by splitting it into two equal-sized subarrays,
sorting each subarray, then merging them into one larger
array.
 The implementation of merge sort in this example is
recursive.
 The base case is an array with one element, which is, of course,
sorted, so the merge sort immediately returns in this case.
 The recursion step splits the array into two approximately equal
pieces, recursively sorts them, then merges the two sorted arrays
into one larger, sorted array.
 Merge sort has an efficiency of O(n log n) (a Java sketch follows below).
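A minimal recursive merge sort sketch in Java, assuming an int array; the merge helper and its signature are illustrative, not the textbook's listing. Call it as mergeSort(data, 0, data.length - 1).

```java
// Merge sort sketch: recursively split the array in half, sort each half,
// then merge the two sorted halves. Overall efficiency is O(n log n).
public static void mergeSort(int[] data, int low, int high)
{
    if (low >= high)
        return; // base case: zero or one element is already sorted

    int middle = (low + high) / 2;
    mergeSort(data, low, middle);       // sort the first half
    mergeSort(data, middle + 1, high);  // sort the second half
    merge(data, low, middle, high);     // merge the two sorted halves
}

// Merge the two sorted subarrays data[low..middle] and data[middle+1..high].
private static void merge(int[] data, int low, int middle, int high)
{
    int[] combined = new int[high - low + 1];
    int left = low, right = middle + 1, index = 0;

    // copy the smaller front element of either subarray, until one is exhausted
    while (left <= middle && right <= high)
        combined[index++] = (data[left] <= data[right]) ? data[left++] : data[right++];

    while (left <= middle)               // copy any remaining left-half elements
        combined[index++] = data[left++];
    while (right <= high)                // copy any remaining right-half elements
        combined[index++] = data[right++];

    // copy the merged result back into the original array
    System.arraycopy(combined, 0, data, low, combined.length);
}
```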

Algorithms and Big-O notation
 Running-time efficiency measures how fast an algorithm runs on a particular machine. It is a measure of time complexity.
 Memory efficiency measures the amount of memory an algorithm occupies. It is a measure of space complexity.
Running time analysis method
 Machine-independent measures of efficiency involve counting the number of comparisons, assignment statements, etc.
 To analyze the complexity of an algorithm, identify the dominant operation and count the number of times it occurs.
Running time example (1)

 Counting the number of comparisons, T(n), required to locate the minimum of the elements in an n-element array gives

T(n) = n - 1
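A brief sketch (illustrative, not from the textbook) that counts the comparisons performed while finding the minimum, confirming T(n) = n - 1 for an n-element array.

```java
// Count the comparisons made while finding the minimum of an array.
// Assumes data has at least one element; the loop performs exactly n - 1 comparisons.
public static int findMinimum(int[] data)
{
    int comparisons = 0;
    int minimum = data[0];

    for (int index = 1; index < data.length; index++)
    {
        comparisons++; // one comparison per remaining element
        if (data[index] < minimum)
            minimum = data[index];
    }

    System.out.println("T(" + data.length + ") = " + comparisons); // prints n - 1
    return minimum;
}
```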
Running time example (2)

 Count the number of comparisons in the n-1 passes of the selection sort:

T(n) = (n-1) + (n-2) + ... + 2 + 1
T(n) = n(n-1)/2 = n²/2 - n/2

 Dropping the lower-order term and the constant factor gives O(n²).
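As a quick check: for n = 5, selection sort makes T(5) = 4 + 3 + 2 + 1 = 10 comparisons, which matches 5·4/2 = 10.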

© 2005 Pearson Education, Inc., Upper Saddle River, NJ. All rights reserved.
Formal definition of Big-O notation
Let T(n) represent the running time as a function of n (the number of data items processed). Formally,

T(n) = O(f(n))

means that there exist positive constants c and n0 such that T(n) ≤ c·f(n) for all n > n0.
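For example (a standard illustration, not from the slides): T(n) = 3n² + 5n + 2 is O(n²), since choosing c = 4 and n0 = 5 gives 3n² + 5n + 2 ≤ 4n² for all n > 5.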

Big-O notation: rules

 The following rules hold for Big-O notation (k denotes a positive constant):

 O(k f(n)) = O(f(n))
 O(f(n)) + O(g(n)) = O(f(n) + g(n))
 O(f(n)) O(g(n)) = O(f(n) g(n))
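For instance, applying these rules to a running time of 3n² + 2n: O(3n²) + O(2n) = O(n²) + O(n) = O(n² + n), and since the n² term dominates, the overall complexity is O(n²).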

Big-O notation: typical growth rates

[Figure omitted: typical growth-rate functions evaluated at increasing values of n.]
Acknowledgement: quoted from Data Structures and Abstractions with Java (Carrano, 2007).
Algorithms and common growth rates

Order of magnitude   Big-O notation   Example(s)
Constant time        O(1)             Find the minimum value in an ordered array
Linear               O(n)             Output the elements of an n-element array
Quadratic            O(n²)            Insertion sort
Cubic                O(n³)            Matrix multiplication
Logarithmic          O(log₂ n)        Binary search
Linearithmic         O(n log₂ n)      Merge sort
Exponential          O(2ⁿ)            Travelling salesperson problem

