Algorithmic Design 2

Notes on algorithmic design 2

I. Algorithm Design Strategies

An algorithm strategy is a general approach to solving problems algorithmically. Although more than one technique may be applicable to a specific problem, it is often the case that an algorithm constructed by one approach is clearly superior to equivalent solutions built using alternative techniques.

Different approaches are used to build an algorithm:

a) Top-down design: The top-down approach is essentially the breaking down of a complex problem or algorithm into multiple smaller parts or subproblems. These smaller parts are further decomposed until each resulting subproblem is a fundamental program that can be understood and cannot be decomposed further.

Advantages of top down

i) Relatively easy
ii) Defer implementation details (put in bottom modules)
iii) Top down testing

Disadvantages of top down

o Hard to identify top functions (is there a simple function at the top?)
o No natural hierarchy

b) Bottom-up design: This is the opposite of the top-down approach. In the bottom-up technique, the most fundamental parts are designed first and then combined into a solution to the whole complex problem. Here the fundamental programs are solved explicitly first, and their results are used to construct solutions to progressively larger problems.

Advantages of Bottom up design

i) Identifies system utilities that can be reused

Disadvantages of Bottom up design

o Hard to do entire system from bottom up

c) Stepwise refinement: Stepwise refinement refers to the progressive refinement (splitting), in small steps, of a program specification into a program. It is sometimes called top-down design.

Advantages of step-wise refinement

i) By breaking the problem down into steps it is easy to get a good understanding of the scale of the problem.
ii) Many of the design decisions can be made during the stepwise refinement process, making it
faster to make changes than after coding.

EKABOLE MARTIN 676560248 1


d) Modular programming

Modular programming is the process of breaking down a problem into smaller independent
tasks. These tasks can then be broken down into subtasks. Each module is designed to
perform a specific function. Note that each module in itself is a small program, which can be
compiled and tested individually and all modules are combined to build the whole program.
Modular programming is an important and beneficial approach to programming problems. It makes program development easier, and it can also help with future development projects.

In this light, a program is broken down into small independent tasks that are small enough to
be understood easily, without having to understand the whole program at once. Each task has
its own functionality and performs a specific part of the actual processing. These tasks are
developed independently, and each task can carry out a specified function on its own. When
these tasks are completed, they are combined together to solve the problem.

A module is a component of a larger system that interacts with the rest of the system in a
simple and well-defined manner.

Importance of modular programming


a) Distributed development: By breaking down the problem into multiple tasks, different developers can work in parallel, which shortens the development time.
b) Code reusability: A program module can be reused in other programs. Modules or procedures can also be reused in future projects.
c) Program readability: A program organized into functions is straightforward to follow, whereas a program with no functions can be very long and hard to follow.
d) Manageable tasks: Breaking down a programming project into modules makes it more manageable. These individual modules are easier to design, implement and test.
e) Maintainability: An error in the system can be traced back to a given module, and it is easier to maintain a single module than a whole program.

Advantages of Modular Programming

i) It is easy to use, since it allows easy debugging of the code and is less error-prone.
ii) It allows the user to reuse functionality with a different interface without typing the whole program again.
iii) Development can be shared between a team of programmers, with each person programming different modules according to their strengths.

Disadvantages of Modular Programming

o Modules must be linked, and additional testing must be carried out to make sure that the links work correctly.
o Programmers must ensure that cross-referencing is done.
o Interfaces between modules must be planned.



II. SOME STANDARD ALGORITHMS

II.1 Sorting algorithms

A sorting algorithm is an algorithm that puts the elements of a list in a certain order. We can distinguish many sorting algorithms: bubble sort, insertion sort, selection sort, heapsort, merge sort and quicksort.

1) Selection sort
The principle of selection sort is to find the smallest element of the data sequence (from index 0 to n-1) and move it to the beginning of the sequence. This procedure is then applied to the still unsorted areas (1 to n-1, 2 to n-1 and so on) until the area from n-2 to n-1 has been sorted; the entire data sequence is then in sorted form. To get the smallest element into the right position, simply exchange it with the first element of the (sub-)sequence.

Example:
Unsorted Sequence  8 4 1 5 4
Step 1             8 4 1 5 4
Step 2             1 4 8 5 4
Step 3             1 4 8 5 4
Step 4             1 4 4 5 8
Result             1 4 4 5 8

ALGORITHM
for i ← 0 to n-2 do
    min ← i
    for j ← (i + 1) to n-1 do
        if A[j] < A[min] then
            min ← j
    swap A[i] and A[min]

Complexity analysis

Selection sort is not difficult to analyze compared to other sorting algorithms, since none of the loops depend on the data in the array. Selecting the lowest element requires scanning all n elements (this takes n − 1 comparisons) and then swapping it into the first position. Finding the next lowest element requires scanning the remaining n − 1 elements, and so on, for (n − 1) + (n − 2) + ... + 2 + 1 = n(n − 1)/2 ∈ Θ(n²) comparisons (an arithmetic progression). Each of these scans ends with at most one swap, so at most n − 1 swaps are performed in total (the final element is already in place).
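The pseudocode above translates almost line for line into a runnable sketch; Python is used here purely for illustration, and the name selection_sort is my own.

```python
def selection_sort(a):
    """Sort the list a in place by repeatedly selecting the minimum."""
    n = len(a)
    for i in range(n - 1):                  # i = 0 .. n-2
        min_idx = i
        for j in range(i + 1, n):           # scan the unsorted tail for the minimum
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # swap the minimum into position i
    return a
```

Running selection_sort([8, 4, 1, 5, 4]) reproduces the worked example's result [1, 4, 4, 5, 8].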

2) Insertion sort
The data sequence is divided into two sections: an already sorted target sequence and a still
unsorted source sequence. The target sequence initially consists of the first element only. However,
it grows by one element per iteration.

In every iteration the first element of the current source sequence is compared with the elements of
the target sequence in incremental order. If the first element of the source sequence is lower than or
equal to an element of the target sequence, all elements to the left of the first element of the source
sequence and to the right of the current element of the target sequence are moved to the right, the
current element is inserted to the corresponding location in the target sequence and the iteration is
stopped. Sorting is finished once all elements belong to the target sequence.



ALGORITHM
insertionSort(array A)
begin
    for i := 1 to length[A]-1 do
    begin
        value := A[i];
        j := i-1;
        while j ≥ 0 and A[j] > value do
        begin
            A[j + 1] := A[j];
            j := j-1;
        end;
        A[j+1] := value;
    end;
end;

Example (the target sequence is in bold):
Unsorted Sequence  8 4 1 5 4
Step 1             8 4 1 5 4
Step 2             4 8 1 5 4
Step 3             1 4 8 5 4
Step 4             1 4 5 8 4
Result             1 4 4 5 8

Complexity analysis

Worst-case complexity: 2 + 3 + 4 + … + n = n(n+1)/2 − 1 = n²/2 + n/2 − 1 = O(n²) (for a reverse-sorted array).

The actual run time can be better: for element i, the inner loop may terminate before running i times.

The best-case scenario (a sorted array as input): Θ(n).
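A runnable Python sketch of the insertionSort pseudocode above (the name insertion_sort is my own):

```python
def insertion_sort(a):
    """Grow a sorted prefix (the target sequence) one element per iteration."""
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]   # shift larger elements one position to the right
            j -= 1
        a[j + 1] = value      # insert value into the sorted prefix
    return a
```

On a reverse-sorted input every element is shifted all the way left, which is the O(n²) worst case described above.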

3) Bubble sort
The name of this algorithm is supposed to indicate that the elements "rise" like bubbles. All elements
are compared with their successors; if the element with the lower index is the greater one, the
elements are exchanged. Once the entire sequence has been processed this way, the whole process
is repeated - until there's been an iteration during which no exchange took place. Then the data is
sorted. Example (the elements that are currently compared with each other are in bold print):

Example:
Unsorted Sequence  8 4 1 5 4
Step 1             8 4 1 5 4
                   4 8 1 5 4
                   4 1 8 5 4
                   4 1 5 8 4
Step 2             4 1 5 4 8
                   1 4 5 4 8
                   1 4 5 4 8 => no exchange
                   1 4 4 5 8
Step 3             1 4 4 5 8 => no exchange
                   1 4 4 5 8 => no exchange
                   1 4 4 5 8 => no exchange
                   1 4 4 5 8 => no exchange
Result             1 4 4 5 8

ALGORITHM
procedure bubbleSort(A: array)
    do
        swapped := false
        for i from 0 to n - 2 do:
            if A[i] > A[i+1] then
                swap(A[i], A[i+1])
                swapped := true
            end if
        end for
    while swapped
end procedure

Complexity analysis

Bubble sort is very slow most of the time, except if the unsorted sequence is in fact already sorted - then bubble sort works faster than the other algorithms: already after one iteration (consisting of n-1 comparisons) it realizes that the sequence doesn't need to be sorted and stops.

Complexity: Σ_{i=1..n} Σ_{j=i+1..n} 1 = Σ_{i=1..n} (n − i) = n² − n(n+1)/2 = n²/2 − n/2 = Θ(n²)
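The bubbleSort pseudocode, including the swapped flag that gives the Θ(n) best case, can be sketched in Python as follows (the name bubble_sort is my own):

```python
def bubble_sort(a):
    """Repeat passes over the list until a full pass makes no exchange."""
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]   # larger element "bubbles" right
                swapped = True
    return a
```

On an already sorted input the first pass finds nothing to exchange and the loop stops after n − 1 comparisons.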

4) Merge sort



The unsorted sequence is divided into two sub-sequences. These, again, are divided into two sub-
sub-sequences, and so on. The smallest partitions consist of just one element each; thus, each of
them can be regarded as sorted within its borders. Now neighbouring partitions are merged to larger
sub-sequences (that's the source of the name of this algorithm), however not before the elements
have been correctly ordered. The resulting sub-sequences are, again, merged until finally the entire
data sequence has been reunited.

Example:
Unsorted Sequence  8 4 1 5 4
Division 1         8 4 1 | 5 4
Division 2         8 4 | 1 | 5 | 4
Division 3         8 | 4 | 1 | 5 | 4
Merge 1            4 8 | 1 | 5 | 4
Merge 2            1 4 8 | 4 5
Merge 3            1 4 4 5 8
Result             1 4 4 5 8

ALGORITHM
MergeSort(Array(First..Last))
Begin
    If Array contains only one element Then
        Return Array
    Else
        Mid = (Last + First)/2
        LHA = MergeSort(Array(First..Mid))
        RHA = MergeSort(Array(Mid+1..Last))
        ResultArray = Merge(LHA, RHA)
        Return ResultArray
    EndIf
End MergeSort
(Figure: the divide step and the merge step of merge sort.)
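A Python sketch of the MergeSort pseudocode. For brevity it divides via list slicing and merges the two sorted halves inline (the pseudocode's Merge); the name merge_sort is my own.

```python
def merge_sort(a):
    """Return a sorted copy of a using divide-and-conquer."""
    if len(a) <= 1:                 # a one-element array is already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # sort the left half (LHA)
    right = merge_sort(a[mid:])     # sort the right half (RHA)
    # Merge: repeatedly take the smaller front element of the two halves.
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]
```

Unlike the in-place sorts above, this version allocates new lists, reflecting the extra memory merge sort needs.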

5) Quicksort

Quicksort is a fast and recursive sorting algorithm, which is used not only for educational purposes,
but widely applied in practice. On the average, it has O(n log n) complexity, making quicksort suitable
for sorting big data volumes.
Algorithm
The divide-and-conquer strategy is used in quicksort. Below the recursion step is described:



1. Choose a pivot value. The pivot can be any value in the range of the values being sorted, even if it is not present in the array.
2. Partition. Rearrange the elements in such a way that all elements less than the pivot go to the left part of the array and all elements greater than the pivot go to the right part. Values equal to the pivot can stay in either part. Notice that the array may be divided into unequal parts.
3. Sort both parts. Apply the quicksort algorithm recursively to the left and the right parts.
The algorithm details are as follows:

PARTITION(A, p, r)
{
    x = A[r]                 // pivot: grab last element
    i = p - 1                // index of last element ≤ pivot; initially before the array
    for j = p to r-1         // inspect all elements but the pivot
    {
        if A[j] ≤ x          // move only elements ≤ pivot to the left region
        {
            i = i + 1
            swap A[i] and A[j]
        }
    }
    swap A[i+1] and A[r]     // put the pivot in its correct place
    return i+1
}

QUICKSORT(A, p, r)
{
    if p < r
    {
        q = PARTITION(A, p, r)
        QUICKSORT(A, p, q-1)
        QUICKSORT(A, q+1, r)
    }
}

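The PARTITION/QUICKSORT pseudocode above maps directly to Python (names follow the pseudocode; the default arguments are my own convenience):

```python
def partition(a, p, r):
    """Lomuto partition: pivot is a[r]; returns the pivot's final index."""
    x = a[r]                     # pivot: grab last element
    i = p - 1                    # index of last element <= pivot
    for j in range(p, r):        # inspect all elements but the pivot
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]   # put the pivot in its correct place
    return i + 1

def quicksort(a, p=0, r=None):
    """Sort a[p..r] in place by recursive partitioning."""
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q - 1)
        quicksort(a, q + 1, r)
    return a
```

Because the pivot is always the last element, feeding this version an already sorted array triggers the O(n²) worst case mentioned later in the comparison section.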



II.2 Searching algorithms

Searching for data is one of the fundamental fields of computing. Often, the difference between a
fast program and a slow one is the use of a good algorithm for the data set.

1) Linear searching
A linear search is the most basic search algorithm. It sequentially moves through your collection (or data structure) looking for a matching value.

int function LinearSearch(Array A, int Lb, int Ub, int Key);
begin
    for i = Lb to Ub do
        if A[i] = Key then
            return i;
    return -1;
end;
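A Python sketch of the pseudocode; for simplicity the bounds Lb and Ub are fixed to cover the whole list (the name linear_search is my own):

```python
def linear_search(a, key):
    """Return the index of the first element equal to key, or -1 if absent."""
    for i in range(len(a)):
        if a[i] == key:
            return i
    return -1
```

A key in the first position is found after one comparison (the Θ(1) best case); a missing key costs n comparisons (the Θ(n) worst case).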
2) Binary search
Binary search (or half-interval search) locates the position of an item in a sorted array. It works by comparing an input value to the middle element of the array. The comparison determines whether the element equals the input, is less than the input, or is greater than it. When the element being compared equals the input, the search stops and typically returns the position of the element. If the element is not equal to the input, a comparison is made to determine whether the input is less than or greater than the element.

Recursive algorithm

BinarySearch(A[0..N-1], value, low, high)
{
    if (high < low)
        return -1                          // not found
    mid = low + ((high - low) / 2)
    if (A[mid] > value)
        return BinarySearch(A, value, low, mid-1)
    else if (A[mid] < value)
        return BinarySearch(A, value, mid+1, high)
    else
        return mid                         // found
}

Iterative algorithm

BinarySearch(A[0..N-1], value)
{
    low = 0
    high = N - 1
    while (low <= high)
    {
        mid = low + ((high - low) / 2)
        if (A[mid] > value)
            high = mid - 1
        else if (A[mid] < value)
            low = mid + 1
        else
            return mid                     // found
    }
    return -1                              // not found
}
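The iterative algorithm above, written as runnable Python (the name binary_search is my own; note the mid computation avoids overflow in languages with fixed-size integers, and is kept here for fidelity to the pseudocode):

```python
def binary_search(a, value):
    """Return an index of value in the sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = low + (high - low) // 2   # middle of the current half-interval
        if a[mid] > value:
            high = mid - 1              # value must be in the left half
        elif a[mid] < value:
            low = mid + 1               # value must be in the right half
        else:
            return mid                  # found
    return -1                           # interval empty: not found
```

Each iteration halves the interval, giving the Θ(log n) worst case listed in the comparison section.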
1. Divide-and-Conquer, Decrease-and-Conquer
These are methods of designing algorithms that (informally) proceed as follows: given an instance of the problem to be solved, split it into several smaller sub-instances (of the same problem), independently solve each of the sub-instances, and then combine the sub-instance solutions to yield a solution for the original instance.
With the divide-and-conquer method the size of the problem instance is reduced by a factor (e.g. half the input size), while with the decrease-and-conquer method the size is reduced by a constant.
Examples of divide-and-conquer algorithms: computing aⁿ by recursion, binary search in a sorted array (recursion), the merge sort algorithm, the quicksort algorithm (recursion).
Examples of decrease-and-conquer algorithms: insertion sort, topological sorting, binary tree traversals: inorder, preorder and postorder (recursion).
2. Transform-and-Conquer

These methods work as two-stage procedures. First, the problem is modified to be more
amenable to solution. In the second stage the problem is solved.
Example: consider the problem of finding the two closest numbers in an array of numbers. After sorting the array (the transform stage), the two closest numbers must be adjacent, so a single scan of consecutive pairs solves the problem.

III. RECURSION
A recursive function is a function that calls itself during its execution. The main
characteristics of a recursive algorithm are:
- Stopping condition: It specifies the situation when the result can be obtained directly
without the need of a recursive call.
- Recursive call: The algorithm calls itself at least once. The parameter values across the cascade of calls should eventually lead to the stopping condition.



III.1 An example of recursive function
In general, problems that are defined in terms of themselves are good candidates for recursive
techniques. The standard example used by many computer science textbooks is the factorial
function.
a) Factorial function
Let's consider the problem of computing n! for n a natural number. Starting from the definition n! = n × (n−1) × (n−2) × … × 2 × 1, and applying the brute-force (iterative) technique, we obtain the following algorithm.

factorial(n)
    f ← 1
    for i from n to 1 do
        f ← f * i
    endfor
    return f

The factorial function describes the operation of multiplying a number by all the positive integers smaller than it. For example, 5! = 5*4*3*2*1 and 9! = 9*8*7*6*5*4*3*2*1.

We now see why factorial is often the introductory example for recursion: the factorial function is recursive, i.e. it is defined in terms of itself. Taking the factorial of n, we have:

n! = 1               if n = 0
n! = n × (n − 1)!    if n > 0

Let's try writing our factorial function factorial(n). We want to code in the n! = n*(n - 1)! functionality. Easy enough:

factorial(int n)
    if (n = 0) then
        f ← 1;
    else
        f ← n * factorial(n-1);
    endif
    return f

Let's test it to make sure it is working properly:

factorial(3) = 3 * factorial(2)
             = 3 * 2 * factorial(1)
             = 3 * 2 * 1 * factorial(0)
             = 3 * 2 * 1 * 1
             = 6
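The recursive definition can be written directly in Python (used here only for illustration):

```python
def factorial(n):
    """Compute n! recursively for a natural number n."""
    if n == 0:                     # stopping condition: 0! = 1
        return 1
    return n * factorial(n - 1)    # recursive call: n! = n * (n-1)!
```

The trace above is exactly what this function does: factorial(3) unwinds to 3 * 2 * 1 * 1 = 6.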

III.2 Other examples of recursive functions

b) The Fibonacci sequence

The Fibonacci sequence is often used to illustrate the power of dynamic programming. The
sequence is defined by the following recurrence relation:



F0 = 0
F1 = 1
Fn = Fn-1 + Fn-2

This very easily translates into a recursive function:

Fibonacci(n)
    if ((n == 0) || (n == 1))
        fib ← n;
    else
        fib ← Fibonacci(n-1) + Fibonacci(n-2)
    return fib

Seems simple enough. What is the running time of Fibonacci()? Consider the call Fibonacci(4).

You may remember from your study of binary trees that a nearly complete one, like the recursion tree of Fibonacci(4), has around 2ⁿ nodes, where n is the height of the tree (number of levels - 1). That means the number of function calls is around 2ⁿ, which is exponential, given each function call is O(1). That's very expensive for such a simple little function!
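The exponential blow-up, and the dynamic-programming fix the text alludes to, can both be sketched in Python. The memoized variant below uses functools.lru_cache as one standard memoization tool; the names fib and fib_memo are my own.

```python
from functools import lru_cache

def fib(n):
    """Naive recursion: around 2^n calls, as explained above."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized version: each value is computed once, so only O(n) calls."""
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

fib(30) already takes over a million calls, while fib_memo(30) takes 31.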

c) Power function

Let's consider the problem of computing xⁿ for x > 0 and n a natural number. Starting from the definition,

xⁿ = x * x * … * x = x * xⁿ⁻¹.

Applying the brute-force and the recursive techniques respectively, we have

power1(real x, integer n)
    p ← 1
    for i from 1 to n do
        p ← p * x
    endfor
    return p

power2(real x, integer n)
    if n = 0 then
        return 1
    else
        return x * power2(x, n-1)
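Both versions are easy to check in Python (the names power_iter and power_rec are my own, corresponding to power1 and power2):

```python
def power_iter(x, n):
    """Brute-force technique: multiply by x a total of n times."""
    p = 1
    for _ in range(n):
        p = p * x
    return p

def power_rec(x, n):
    """Decrease-and-conquer: x^n = x * x^(n-1), with x^0 = 1."""
    if n == 0:
        return 1
    return x * power_rec(x, n - 1)
```

Both take n multiplications; the recursive form is the starting point for the faster divide-and-conquer version (squaring x^(n/2)) mentioned among the divide-and-conquer examples.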

d) Hanoi’s tower

Let's consider three vertical pegs labelled as follows: src (source), dest (destination) and int (intermediate). On the src peg there are n discs placed in decreasing order of diameter (the disc at the bottom is the largest one). We want to move all discs from the peg src to the peg dest, using int as an intermediate peg, such that the following constraints are satisfied:

(i) On the destination peg the discs are placed in the same decreasing order
(ii) A disc can never be placed on a smaller disc



The basic idea of solving this problem is: "move n-1 discs from src to int using dest as intermediate peg; move the largest disc from src to dest; move the n-1 discs from int to dest using src as intermediate peg". This idea is described very easily by a recursive algorithm:

hanoi(n, src, dest, int)
    if n = 1 then
        src → dest
    else
        hanoi(n-1, src, int, dest)
        src → dest
        hanoi(n-1, int, dest, src)
    endif
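A Python sketch of the hanoi pseudocode. The parameter int is renamed inter to avoid shadowing Python's built-in int, and instead of printing each move the function collects (from_peg, to_peg) pairs; both choices are mine.

```python
def hanoi(n, src, dest, inter, moves=None):
    """Return the list of (from_peg, to_peg) moves that solves the puzzle."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dest))               # move the single disc directly
    else:
        hanoi(n - 1, src, inter, dest, moves)   # n-1 discs: src -> inter
        moves.append((src, dest))               # largest disc: src -> dest
        hanoi(n - 1, inter, dest, src, moves)   # n-1 discs: inter -> dest
    return moves
```

For n discs the recursion produces 2ⁿ − 1 moves, e.g. 7 moves for hanoi(3, "src", "dest", "int").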

e) Recursive function using stack


The use of a stack is the best option in many systems for executing recursive functions. During program execution, as the function calls itself, a stack of pointers (addresses of the instructions of the caller function to which control must return after the called function has executed) is created in memory. For example, for the factorial function described above, the stack is shown below for n = 4.

factorial(n)
    if (n > 0) then
        return (n * factorial(n-1));
    else
        return (1)
    endif
end

Factorial(0) = 1. Factorial(0) is not put on the stack because it does not invoke any other function.

Comparison
Insertion Sort:
 Average case / Worst case: Θ(n²); the worst case happens when the input is sorted in descending order
 Best case: Θ(n); when the input is already sorted
 No. of comparisons: Θ(n²) in the worst case & Θ(n) in the best case
 No. of swaps: Θ(n²) in the worst/average case & 0 in the best case

Selection Sort:
 Average case / Worst case / Best case: Θ(n²)
 No. of comparisons: Θ(n²)
 No. of swaps: at most n − 1, i.e. Θ(n) in the worst/average case & 0 in the best case
Merge Sort:
 Average case / Worst case / Best case: Θ(n lg n); it doesn't matter at all whether the input is sorted or not
 No. of comparisons per merge: Θ(n+m) in the worst case & Θ(n) in the best case, assuming we are merging two arrays of size n & m where n < m
 No. of swaps: no swaps! [but it requires extra memory; not an in-place sort]
Bubble Sort:
 Worst case: Θ(n²)
 Best case: Θ(n); on an already sorted input
 No. of comparisons: Θ(n²) in the worst case & Θ(n) in the best case (thanks to the swapped flag)
 No. of swaps: Θ(n²) in the worst case & 0 in the best case
Quicksort:
 Worst case: O(n²); this happens when the pivot is always the smallest (or the largest) element
 Best case: O(n lg n); the best case is when the pivot is the median of the array
 Average case: O(n lg n)
Linear Search:
 Worst case: Θ(n); the search key is not present or is the last element
 Best case: Θ(1); the key is the first element
 No. of comparisons: Θ(n) in the worst case & 1 in the best case
Binary Search:
 Worst case / Average case: Θ(lg n)
 Best case: Θ(1); when the key is the middle element
 No. of comparisons: Θ(lg n) in the worst/average case & 1 in the best case

Comparing Time Complexities

Algorithm           Best case    Worst case   Average case
Binary search       O(1)         O(lg n)      O(lg n)
Sequential search   O(1)         O(n)         O(n)
Finding largest     O(n)         O(n)         O(n)
Pattern matching    O(n)         O(mn)        O(n)
Selection sort      O(n²)        O(n²)        O(n²)
Quicksort           O(n lg n)    O(n²)        O(n lg n)

