Algorithmic Design 2
Advantages of top-down design:
i) Relatively easy
ii) Implementation details can be deferred (put in bottom modules)
iii) Top-down testing
Disadvantages:
o Hard to identify the top functions (is there a simple function at the top?)
o No natural hierarchy
Benefits of stepwise refinement:
i) By breaking the problem down into steps, it is easy to get a good understanding of the scale of the
problem.
ii) Many of the design decisions can be made during the stepwise refinement process, making
changes faster to carry out than after coding.
Modular programming is the process of breaking down a problem into smaller independent
tasks. These tasks can then be broken down into subtasks. Each module is designed to
perform a specific function. Note that each module is itself a small program, which can be
compiled and tested individually; all modules are then combined to build the whole program.
Modular programming is an important and beneficial approach to programming problems:
it makes program development easier, and it can also help with future development
projects.
In this light, a program is broken down into small independent tasks that are small enough to
be understood easily, without having to understand the whole program at once. Each task has
its own functionality and performs a specific part of the actual processing. These tasks are
developed independently, and each can carry out its specified function on its own. When all
tasks are completed, they are combined to solve the problem.
A module is a component of a larger system that interacts with the rest of the system in a
simple and well-defined manner.
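For instance, a minimal sketch in Python (the module and function names here are ours, for illustration only): the module exposes one well-defined function, can be tested on its own, and the rest of the program interacts with it only through that interface.

sorting.py, a module that performs one specific function:

    def sort_ascending(values):
        """Return a new list with the values in ascending order."""
        return sorted(values)

main.py, the rest of the system, uses the module only through its simple, well-defined interface:

    from sorting import sort_ascending
    print(sort_ascending([8, 4, 1, 5, 4]))   # [1, 4, 4, 5, 8]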
A sorting algorithm is an algorithm that puts the elements of a list in a certain order. We can
distinguish many sorting algorithms: Bubble sort, Insertion sort, Selection sort, Heapsort, Mergesort,
Quicksort, …
1) Selection sort
The principle of Selection Sort is to find the smallest element of the data sequence (from index 0 to
n-1) and move it to the beginning of the sequence. This procedure is then applied to the still unsorted
areas (1 to n-1, 2 to n-1, and so on) until the area from n-2 to n-1 has been sorted; at that point the
entire data sequence is in sorted form. To get the smallest element into the right position, simply
exchange it with the first element of the (sub-)sequence.
Example:
    Unsorted sequence: 8 4 1 5 4
    Step 1:            8 4 1 5 4
    Step 2:            1 4 8 5 4
    Step 3:            1 4 8 5 4
    Step 4:            1 4 4 5 8
    Result:            1 4 4 5 8

ALGORITHM:
    for i ← 0 to n-2 do
        min ← i
        for j ← (i + 1) to n-1 do
            if A[j] < A[min] then
                min ← j
        swap A[i] and A[min]
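As a concrete sketch, the same algorithm in Python (the function name selection_sort and the sample call are ours):

    def selection_sort(a):
        """Sort the list a in place by repeatedly selecting the minimum."""
        n = len(a)
        for i in range(n - 1):            # i = 0 .. n-2
            min_idx = i
            for j in range(i + 1, n):     # scan the still-unsorted area
                if a[j] < a[min_idx]:
                    min_idx = j
            a[i], a[min_idx] = a[min_idx], a[i]   # exchange with the first element
        return a

    selection_sort([8, 4, 1, 5, 4])   # -> [1, 4, 4, 5, 8]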
Complexity analysis
Selection sort is not difficult to analyze compared to other sorting algorithms, since none of the loops
depend on the data in the array. Selecting the lowest element requires scanning all n elements (this
takes n − 1 comparisons) and then swapping it into the first position. Finding the next lowest element
requires scanning the remaining n − 1 elements, and so on, for (n − 1) + (n − 2) + … + 2 + 1 = n(n − 1)/2
∈ Θ(n²) comparisons (an arithmetic progression). Each of these scans requires at most one swap, for
n − 1 swaps in total (the final element is already in place).
2) Insertion sort
The data sequence is divided into two sections: an already sorted target sequence and a still
unsorted source sequence. The target sequence initially consists of the first element only; it grows
by one element per iteration.
In every iteration the first element of the current source sequence is compared with the elements of
the target sequence in increasing order of index. As soon as this element is lower than or equal to an
element of the target sequence, that target element and all target elements to its right are moved
one place to the right, the element is inserted into the resulting gap, and the iteration stops. Sorting
is finished once all elements belong to the target sequence.
The actual run time can be better than this suggests, since the inner loop may terminate early (it need not always run the full p − 1 times, where p is the position of the element being inserted).
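A minimal Python sketch of this procedure (names are ours); note how the inner loop can stop early, which is exactly why the best case is linear:

    def insertion_sort(a):
        """Grow a sorted target sequence at the front of the list."""
        for p in range(1, len(a)):        # a[0..p-1] is the sorted target
            key = a[p]                    # first element of the source sequence
            j = p - 1
            while j >= 0 and a[j] > key:  # shift greater elements to the right
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key                # insert into the gap; loop may stop early
        return a

    insertion_sort([8, 4, 1, 5, 4])   # -> [1, 4, 4, 5, 8]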
3) Bubble sort
The name of this algorithm is supposed to indicate that the elements "rise" like bubbles. All elements
are compared with their successors; if the element with the lower index is the greater one, the
elements are exchanged. Once the entire sequence has been processed this way, the whole process
is repeated, until there has been an iteration during which no exchange took place. Then the data is
sorted.
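A Python sketch of this idea (names are ours); the swapped flag implements the "no exchange took place" stopping test:

    def bubble_sort(a):
        """Repeat full passes until one pass makes no exchange."""
        n = len(a)
        swapped = True
        while swapped:
            swapped = False
            for i in range(n - 1):
                if a[i] > a[i + 1]:       # lower index holds the greater element
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
        return a

    bubble_sort([8, 4, 1, 5, 4])   # -> [1, 4, 4, 5, 8]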
4) Merge sort
Example:
    Unsorted sequence: 8 4 1 5 4
    Division 1:        8 4 1 | 5 4
    Division 2:        8 4 | 1 | 5 | 4
    Division 3:        8 | 4 | 1 | 5 | 4
    Merge 1:           4 8 | 1 | 5 | 4
    Merge 2:           1 4 8 | 4 5
    Merge 3:           1 4 4 5 8
    Result:            1 4 4 5 8

ALGORITHM:
    MergeSort(Array(First..Last))
    Begin
        If Array contains only one element Then
            Return Array
        Else
            Mid = (First + Last) / 2
            LHA = MergeSort(Array(First..Mid))
            RHA = MergeSort(Array(Mid+1..Last))
            ResultArray = Merge(LHA, RHA)
            Return ResultArray
        EndIf
    End MergeSort
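A Python sketch of the algorithm above (the helper merge plays the role of the Merge routine; names are ours):

    def merge(left, right):
        """Merge two sorted lists into one sorted list."""
        result = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        result.extend(left[i:])           # at most one of these is non-empty
        result.extend(right[j:])
        return result

    def merge_sort(a):
        if len(a) <= 1:                   # an array of one element is sorted
            return a
        mid = len(a) // 2
        return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

    merge_sort([8, 4, 1, 5, 4])   # -> [1, 4, 4, 5, 8]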
5) Quicksort
Quicksort is a fast and recursive sorting algorithm, which is used not only for educational purposes,
but widely applied in practice. On the average, it has O(n log n) complexity, making quicksort suitable
for sorting big data volumes.
Algorithm
The divide-and-conquer strategy is used in quicksort. The recursion step is described below:

QUICKSORT(A, p, r)
{
    if p < r
    {
        q = PARTITION(A, p, r)
        QUICKSORT(A, p, q-1)
        QUICKSORT(A, q+1, r)
    }
}

PARTITION(A, p, r)
{
    x = A[r]                 // pivot: grab last element
    i = p - 1                // index of last element ≤ pivot; initially before the array
    for j = p to r-1         // inspect all elements but the pivot
    {
        if A[j] ≤ x          // move only elements ≤ pivot to the left region
        {
            i = i + 1
            swap A[i] and A[j]
        }
    }
    swap A[i+1] and A[r]     // put the pivot in its correct place
    return i+1
}
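The same procedure as a runnable Python sketch (a direct transcription of the pseudocode; names are ours):

    def partition(a, p, r):
        """Lomuto partition: place the pivot a[r] at its final position."""
        x = a[r]                          # pivot: last element
        i = p - 1                         # index of last element <= pivot
        for j in range(p, r):             # inspect all elements but the pivot
            if a[j] <= x:                 # move elements <= pivot to the left region
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[r] = a[r], a[i + 1]   # put the pivot in its correct place
        return i + 1

    def quicksort(a, p=0, r=None):
        if r is None:
            r = len(a) - 1
        if p < r:
            q = partition(a, p, r)
            quicksort(a, p, q - 1)
            quicksort(a, q + 1, r)
        return a

    quicksort([8, 4, 1, 5, 4])   # -> [1, 4, 4, 5, 8]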
II. SEARCHING
Searching for data is one of the fundamental fields of computing. Often, the difference between a
fast program and a slow one is the use of a good algorithm for the data set.
1) Linear searching
A linear search is the most basic search algorithm. It sequentially moves through the collection
(or data structure), looking for a matching value.
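A minimal Python sketch (names are ours):

    def linear_search(a, key):
        """Return the index of key in a, or -1 if it is absent."""
        for i, value in enumerate(a):
            if value == key:
                return i                  # best case: first element, 1 comparison
        return -1                         # worst case: n comparisons

    linear_search([8, 4, 1, 5, 4], 5)   # -> 3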
Some search problems are best handled by two-stage (transform-and-conquer) procedures. First,
the problem is modified to be more amenable to solution. In the second stage the modified problem
is solved.
Example: consider the problem of finding the two closest numbers in an array of numbers. A
brute-force approach compares all pairs in O(n²) time; but if the array is sorted first, the two closest
numbers must be adjacent, so a single linear scan after an O(n log n) sort suffices, as in the sketch
below.
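A sketch of this presorting approach in Python (it assumes the array has at least two elements; names are ours):

    def closest_numbers(a):
        """Presort, then scan adjacent pairs: O(n log n) overall."""
        s = sorted(a)                     # stage 1: transform the instance
        best = (s[0], s[1])
        for x, y in zip(s, s[1:]):        # stage 2: solve by one linear scan
            if y - x < best[1] - best[0]:
                best = (x, y)
        return best

    closest_numbers([8, 4, 1, 5, 4])   # -> (4, 4)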
III. RECURSION
A recursive function is a function that calls itself during its execution. The main
characteristics of a recursive algorithm are:
- Stopping condition: it specifies the situation in which the result can be obtained directly,
without the need for a recursive call.
- Recursive call: the algorithm calls itself at least once. The values of the parameters
along the cascade of calls should eventually lead to the stopping condition.
We now see why factorial is often the introductory example for recursion: the factorial
function is recursive, i.e. it is defined in terms of itself. Taking the factorial of n, we have:

    n! = 1               if n = 0
    n! = n ∗ (n − 1)!    if n > 0
Let's try writing our factorial function factorial(n). We want to code in the n! = n*(n - 1)!
functionality. Easy enough:
    factorial(n)
        if (n > 0) then
            return n * factorial(n-1)
        else
            return 1
        endif
    end
The Fibonacci sequence is often used to illustrate the power of dynamic programming. The
sequence is defined by the following recurrence relation:

    F(0) = 0, F(1) = 1
    F(n) = F(n − 1) + F(n − 2)    for n > 1
You may remember from your study of binary trees that a nearly complete one, such as the
recursion tree of this naive Fibonacci implementation, has around 2^n nodes, where n is the
height of the tree (number of levels − 1). That means the number of function calls is around
2^n, which is exponential, given that each function call itself is O(1). That's very expensive
for such a simple little function!
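To make the contrast concrete, here is the naive recursive version next to a memoized one in Python (a sketch; names are ours):

    from functools import lru_cache

    def fib(n):
        """Naive recursion: about 2^n calls."""
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    @lru_cache(maxsize=None)
    def fib_memo(n):
        """Dynamic programming via memoization: each value computed once."""
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    fib_memo(40)   # -> 102334155, instantly; fib(40) takes noticeably long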
c) Power function
Let us consider the problem of computing x^n for x > 0 and n a natural number. Starting from the
definition,

    x^n = x ∗ x ∗ … ∗ x = x ∗ x^(n−1).

Applying the brute-force and the recursive techniques respectively, we have:
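Sketches of both in Python (names are ours):

    def power_iter(x, n):
        """Brute force: multiply x by itself n times, O(n) multiplications."""
        result = 1
        for _ in range(n):
            result *= x
        return result

    def power_rec(x, n):
        """Recursive: x^n = x * x^(n-1), with x^0 = 1 as the stopping condition."""
        if n == 0:
            return 1
        return x * power_rec(x, n - 1)

    power_rec(2, 10)   # -> 1024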
d) Hanoi’s tower
Let’s consider three vertical pegs labelled as follows: src (source), dest (destination) and int
(intermediate). On the src peg there are n disc placed in decreasing order of their diameter (the disc
at the bottom is the largest one). We want to move all discs from the peg src to the rod dest by using
int as intermediate peg such that the following constraints are satisfied.
(i) On the destination peg the disc are placed in the same decreasing order
(ii) A disc can never be placed on a smaller disc
hanoi(n, src, dest, int)
    if n = 1 then
        src → dest
    else
        hanoi(n-1, src, int, dest)
        src → dest
        hanoi(n-1, int, dest, src)
    endif
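A direct Python transcription (a sketch; the parameter int is renamed inter to avoid shadowing Python's built-in int):

    def hanoi(n, src, dest, inter):
        """Print the moves that transfer n discs from src to dest."""
        if n == 1:
            print(src, "->", dest)             # move a single disc directly
        else:
            hanoi(n - 1, src, inter, dest)     # clear the n-1 smaller discs away
            print(src, "->", dest)             # move the largest disc
            hanoi(n - 1, inter, dest, src)     # rebuild the tower on top of it

    hanoi(3, "src", "dest", "int")   # prints the 2^3 - 1 = 7 moves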
Comparison
Insertion Sort:
Average case / worst case: Θ(n²); the worst case happens when the input is sorted in
descending order
Best case: Θ(n); when the input is already sorted
No. of comparisons: Θ(n²) in the worst case & Θ(n) in the best case
Selection Sort:
Average case / worst case / best case: Θ(n²)
No. of comparisons: Θ(n²)
No. of swaps: at most n − 1, i.e. O(n) (0 in the best case, if swapping an element with itself is skipped)
Merge Sort:
Average case / worst case / best case: Θ(n log n); it doesn't matter at all whether the input is
sorted or not
No. of comparisons (per merge): Θ(n + m) in the worst case & Θ(n) in the best case, assuming we
are merging two arrays of size n & m where n < m
No. of swaps: no swaps! (but it requires extra memory; not an in-place sort)
Bubble Sort:
Worst case: Θ(n²)
Best case: Θ(n); on already sorted input (with the early-exit check)
No. of comparisons: Θ(n²) in the worst case & Θ(n) in the best case
No. of swaps: Θ(n²) in the worst case & 0 in the best case
Quicksort:
Worst case: O(n²); this happens when the pivot is always the smallest (or the largest)
element
Best case: O(n log n); when the pivot is always the median of the (sub)array
Average case: O(n log n)
Linear Search:
Worst case: Θ(n); the search key is not present or is the last element
Best case: Θ(1); the key is the first element
No. of comparisons: Θ(n) in the worst case & 1 in the best case
Binary Search:
Worst case / average case: Θ(log n)
Best case: Θ(1); when the key is the middle element
No. of comparisons: Θ(log n) in the worst/average case & 1 in the best case
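For reference, a minimal iterative binary search in Python matching the complexities above (it assumes the list is already sorted; names are ours):

    def binary_search(a, key):
        """Return the index of key in the sorted list a, or -1 if absent."""
        low, high = 0, len(a) - 1
        while low <= high:
            mid = (low + high) // 2
            if a[mid] == key:
                return mid                # best case: key is the middle element
            elif a[mid] < key:
                low = mid + 1             # discard the left half
            else:
                high = mid - 1            # discard the right half
        return -1                         # Θ(log n) comparisons in the worst case

    binary_search([1, 4, 4, 5, 8], 5)   # -> 3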