Analysis of Algorithms
Lecture-01
Introduction
What is an Algorithm
Definition, characteristics, Writing an algorithm
Algorithm and Basic Concept
Algorithm Complexity
Analysis of the algorithms
Computing run time of algorithms
Bounds
Important Problem Types
Searching and Sorting- linear and binary search
String processing
Fundamental Data Structures
Linear data structures
Non-linear data structures
Algorithm: Definition
An algorithm is a set of steps or operations that solves a problem by
performing calculation, data processing, and automated reasoning tasks.
An algorithm is an efficient method that can be expressed within a finite
amount of time and space.
• It can have zero or more inputs.
• It must have at least one output.
• It should be efficient both in terms of memory and time.
• It should be finite.
• Every statement should be unambiguous.
Algorithm is independent from any programming language.
Note: A program can run indefinitely; for example, a server program runs
24x7x365. But an algorithm must be finite.
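The definition above can be made concrete with a classic example, Euclid's greatest-common-divisor algorithm, shown here as a minimal Python sketch (the function name is illustrative):

```python
def gcd(a, b):
    """Euclid's algorithm: a finite, unambiguous procedure.

    Input:  two non-negative integers (not both zero).
    Output: their greatest common divisor.
    The loop terminates because b strictly decreases toward 0,
    so the algorithm satisfies the finiteness requirement.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```

Note how it has inputs (two), at least one output, unambiguous steps, and a guaranteed termination criterion.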
Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the
following characteristics −
Unambiguous − Algorithm should be clear and unambiguous. Each of its
steps (or phases), and their inputs/outputs should be clear and must lead to
only one meaning.
Input − An algorithm should have 0 or more well-defined inputs.
Output − An algorithm should have 1 or more well-defined outputs, and
should match the desired output.
Finiteness − Algorithms must terminate after a finite number of steps.
Feasibility − Should be feasible with the available resources.
Independent − An algorithm should have step-by-step directions, which
should be independent of any programming code.
Note: It must have a termination criterion.
Design and Analysis of Algorithm: Objectives
This subject focuses on the following points:
To Construct the algorithms for the problems to be solved or already
solved by other methods, the problems may be from diverse fields and not
restricted to Computer Science only.
To prove that a proposed algorithm solves the problem correctly.
To analyze the time and space requirements of an algorithm in standard
asymptotic notation, independent of any particular machine.
To prove that the proposed algorithm solves the problem faster than other
solutions.
Writing an Algorithm
An algorithm can be written in the following ways:
▪ Natural Language Representation: use of natural language in
writing an algorithm can be ambiguous, and therefore the algorithm
may lack the characteristic of being definite.
▪ Flowcharts: a graphical representation of algorithmic steps; flowcharts
are not suitable for writing solutions to complex problems.
▪ Pseudocode: has the advantage of being easily converted into any
programming language. A table of pseudocode conventions is given in
the textbook. Most widely used.
Flowchart
Flowcharts pictorially depict a process.
They are easy to understand and are commonly used in case
of simple problems.
Pseudo Code
Pseudo code has the advantage of being easily converted into
any programming language.
This way of writing algorithm is most acceptable and most widely
used.
In order to be able to write a pseudo code, one must be familiar
with the conventions of writing it.
Algorithm design strategies
▪ Brute force
▪ Try all possible combinations
▪ Divide and conquer
▪ breaking down a problem into two or more sub-problems of
the same (or related) type, until these become simple enough
to be solved directly.
▪ Decrease and conquer
▪ Change an instance into one smaller instance of the problem.
▪ Solve the smaller instance.
▪ Convert the solution of the smaller instance into a solution
for the larger instance.
Algorithm design strategies
▪ Transform and conquer
▪ Transform the instance into a simpler instance of the same problem, or
▪ a different representation of the same problem, or
▪ an instance of a different problem
▪ Greedy approach
▪ Build the solution step by step, taking the locally best choice at each step
▪ Dynamic programming
▪ An instance is solved using the solutions for smaller instances.
▪ The solution for a smaller instance might be needed multiple times.
▪ The solutions to smaller instances are stored in a table, so that each
smaller instance is solved only once.
▪ Additional space is used to save time.
▪ Backtracking and branch-and-bound
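The dynamic-programming strategy above can be sketched with a standard illustration, computing Fibonacci numbers with a table of stored sub-solutions (a textbook example, not taken from these slides):

```python
def fib(n):
    """Dynamic programming: each smaller instance is solved once,
    stored in a table, and reused by the larger instances."""
    table = [0, 1]                  # solutions to the two smallest instances
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])  # reuse stored solutions
    return table[n]

print(fib(10))  # 55
```

Without the table, a naive recursion would solve the same smaller instances many times; here additional space is used to save time, exactly as the bullet points describe.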
Algorithm: Design Considerations
The five most essential things to be considered while
writing an algorithm are as follows:
▪ Time taken
▪ Memory usage
▪ Input
▪ Process
▪ Output.
Algorithm Complexity
Suppose X is an algorithm and n is the size of the input data. The time and
space used by the algorithm X are the two main factors which decide the
efficiency of X.
Time Factor − Time is measured by counting the number of key
operations such as comparisons in the sorting algorithm.
Space Factor − Space is measured by counting the maximum memory
space required by the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the
storage space required by the algorithm in terms of n as the size of input
data.
Analysis of algorithms
Issues:
▪ Correctness
▪ Space efficiency
▪ Time efficiency
▪ Optimality
Approaches:
▪ Theoretical analysis
▪ Empirical analysis
*A priori analysis vs. a posteriori analysis
Types of the Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation:
A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency
of an algorithm is measured by assuming that all other factors, for example,
processor speed, are constant and have no effect on the implementation.
A Posteriori Analysis − This is an empirical analysis of an algorithm. The
selected algorithm is implemented in a programming language and executed
on a target computer. In this analysis, actual statistics, such as running time
and space required, are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with
the execution or running time of various operations involved. The running
time of an operation can be defined as the number of computer instructions
executed per operation.
Why Perform Analysis of Algorithms
The analysis of an algorithm is done because:
▪ The analysis of an algorithm can be more reliable than experiments.
▪ Experiments can only establish the behavior of an algorithm on certain test
cases, while theoretical analysis and established run-time bounds can
guarantee its behavior on the whole input domain.
▪ Analysis helps us to choose from many possible solutions.
▪ Performance of the program can be predicted before it is implemented.
▪ By analysis we can get an idea of the slow and fast parts of an algorithm, and
accordingly we can plan our implementation strategy.
Worst-Case/ Best-Case/ Average-Case Analysis
▪ Worst-Case Analysis – The maximum amount of time that an algorithm
requires to solve a problem of size n.
▪ Best-Case Analysis – The minimum amount of time that an algorithm
requires to solve a problem of size n. The best-case behavior of an algorithm
is NOT so useful.
▪ Average-Case Analysis – The average amount of time that an algorithm
requires to solve a problem of size n.
▪ Worst-case analysis is more common than average-case analysis.
Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining mathematical bounds
on its run-time performance. Using asymptotic analysis,
we can very well conclude the best case, average case, and worst case
scenario of an algorithm.
Asymptotic analysis is input bound i.e., if there's no input to the algorithm,
it is concluded to work in a constant time. Other than the "input" all other
factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation
in mathematical units of computation. For example, the running time of one
operation may be computed as f(n) while that of another operation is
computed as g(n²). This means the running time of the first operation will
increase linearly as n increases, while the running time of the second
operation will increase quadratically. Similarly, the running times of both
operations will be nearly the same if n is sufficiently small.
Asymptotic Notations
Following are the commonly used asymptotic notations for describing
the running-time complexity of an algorithm.
Ο Notation
Ω Notation
θ Notation
Big-O Notation
Definition: f(n) is in O(g(n)) if the order of growth of f(n) ≤ the order of
growth of g(n) (within a constant multiple).
Examples:
▪ 10n is O(n²)
▪ 5n+20 is O(n)
O(g(n)) = { f(n) : there exist positive constants c and n₀ such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
Omega Notation, Ω
The notation Ω is the formal way to express the lower bound of an
algorithm's running time.
Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }
Theta Notation, θ
The notation θ is the formal way to express both the lower bound and the
upper bound of an algorithm's running time. It is defined as follows −
θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }
Common Asymptotic Notations
A Comparison of Growth-Rate Functions
How to Compute the run time of an Algorithm
Cost of basic operations
Cost of Instructions
Growth-Rate Functions – Example1
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
= (c3+c4+c5)*n + (c1+c2+c3)
= a*n + b
➔ So, the growth-rate function for this algorithm is O(n)
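As a sanity check on the cost table above, a small Python sketch can count the executed statements directly (assuming each ci = 1, so T(n) = 3n + 3):

```python
def count_ops(n):
    """Count statement executions for the loop of Example 1,
    assuming every cost constant ci equals 1."""
    ops = 2                  # i = 1; sum = 0
    i, total = 1, 0
    while True:
        ops += 1             # the loop test (executes n+1 times)
        if not (i <= n):
            break
        i += 1
        total += i
        ops += 2             # the two statements in the body (n times each)
    return ops

# T(n) = 2 + (n+1) + 2n = 3n + 3: linear growth, i.e. O(n)
print(count_ops(10))  # 33
```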
Growth-Rate Functions – Example2
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5+n*n*c6+n*n*c7+n*c8
= (c5+c6+c7)*n² + (c3+c4+c5+c8)*n + (c1+c2+c3)
= a*n² + b*n + c
➔ So, the growth-rate function for this algorithm is O(n²)
Growth-Rate Functions – Example3
Cost Times
for (i=1; i<=n; i++)           c1   n+1
  for (j=1; j<=i; j++)         c2   Σ_{i=1..n} (i+1)
    for (k=1; k<=j; k++)       c3   Σ_{i=1..n} Σ_{j=1..i} (j+1)
      x = x + 1;               c4   Σ_{i=1..n} Σ_{j=1..i} j

T(n) = c1*(n+1) + c2*Σ_{i=1..n}(i+1) + c3*Σ_{i=1..n}Σ_{j=1..i}(j+1) + c4*Σ_{i=1..n}Σ_{j=1..i} j
     = a*n³ + b*n² + c*n + d
➔ So, the growth-rate function for this algorithm is O(n³)
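The cubic growth claimed above can be checked by counting how many times the innermost statement executes; a small Python sketch:

```python
def innermost_count(n):
    """Count executions of x = x + 1 in the triple nested loop of Example 3."""
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            for k in range(1, j + 1):
                count += 1
    return count

# The count equals n(n+1)(n+2)/6, a cubic polynomial, hence O(n^3).
print(innermost_count(10))  # 220
```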
Some Well-known Computational Problems
Searching and Sorting
Combinatorial problems
Geometrical Problems
Traveling salesman problem
Knapsack problem
Chess
Towers of Hanoi
Graph Problems
Algorithms Examples
Search Problems
Statement of problem:
Input: A sequence of n numbers <a1, a2, …, an>
Key = the item to be searched for
Output: the index of the key in the sequence
Instance: The sequence <5, 3, 2, 8, 3>
Algorithms:
Sequential Search
Binary Search(for sorted arrays)
Linear Search
Worst- and average-case time complexity is of order n, so we need a better
algorithm.
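A minimal sketch of sequential (linear) search in Python:

```python
def linear_search(items, key):
    """Return the index of key in items, or -1 if absent.
    Scans left to right: O(n) comparisons in the worst and average case."""
    for index, value in enumerate(items):
        if value == key:
            return index
    return -1

# The instance from the previous slide:
print(linear_search([5, 3, 2, 8, 3], 8))  # 3
```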
Binary Search
Binary Search Implementation
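Since the implementation slide is not reproduced here, the following is a standard iterative binary search sketch for a sorted array (not necessarily the exact version used in the lecture):

```python
def binary_search(sorted_items, key):
    """Return the index of key in sorted_items, or -1 if absent.
    Each comparison halves the search range: O(log n) worst case."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == key:
            return mid
        elif sorted_items[mid] < key:
            low = mid + 1          # key can only be in the right half
        else:
            high = mid - 1         # key can only be in the left half
    return -1

print(binary_search([2, 3, 3, 5, 8], 8))  # 4
```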
Analysis of Binary Search
Sorting Problems
Statement of problem:
Input: A sequence of n numbers <a1, a2, …, an>
Output: A reordering of the input sequence <a´1, a´2, …, a´n> so that a´i ≤
a´j whenever i < j
Instance: The sequence <5, 3, 2, 8, 3>
Algorithms:
Selection sort
Insertion sort
Merge sort
Quick Sort
(many others)
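As an illustration of one of the listed algorithms, a short selection sort sketch in Python:

```python
def selection_sort(items):
    """Sort a list in place by repeatedly selecting the minimum of the
    unsorted suffix and swapping it into place: O(n^2) comparisons."""
    n = len(items)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items

# The instance from the problem statement:
print(selection_sort([5, 3, 2, 8, 3]))  # [2, 3, 3, 5, 8]
```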
String Processing
A string is a sequence of characters from an alphabet.
Text strings: letters, numbers, and special characters.
String matching: searching for a given word/pattern in a text.
Examples:
(i) searching for a word or phrase on WWW or in a Word
document
(ii) searching for a short read in the reference genomic sequence
String Matching
Given a text string T of length n and a pattern string P
of length m, the exact string matching problem is to
find all occurrences of P in T.
Example: T=“AGCTTGA” P=“GCT”
Applications:
Searching keywords in a file
Search engines (like Google and Openfind)
Database searching (GenBank)
More string matching algorithms (with source codes):
http://www-igm.univ-mlv.fr/~lecroq/string/
String Matching
Brute Force algorithm
The brute force algorithm consists of checking, at every position in the text
between 0 and n-m, whether an occurrence of the pattern starts there or not.
Then, after each attempt, it shifts the pattern by exactly one position to the
right.
Time: O(mn), where m = |P| and n = |T|.
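The brute force method described above can be sketched directly in Python, using the example T = "AGCTTGA", P = "GCT" from the previous slide:

```python
def brute_force_match(text, pattern):
    """Return the starting indices of all occurrences of pattern in text.
    Checks every position 0..n-m, shifting by one after each attempt: O(mn)."""
    n, m = len(text), len(pattern)
    matches = []
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:   # character-by-character comparison
            matches.append(i)
    return matches

print(brute_force_match("AGCTTGA", "GCT"))  # [1]
```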
Graph Problems
Informal definition
A graph is a collection of points called vertices, some of which
are connected by line segments called edges.
Modeling Real-life Problems
Modeling WWW
Communication networks
Project scheduling …
Examples of Graph Algorithms
Graph traversal algorithms
Shortest-path algorithms
……..
Data Structures Review
Data Structures:
An implementation of an ADT is a translation into
statements of a programming language of:
─ the declarations that define a variable to be of that
ADT type
─ the operations defined on the ADT (using procedures
of the programming language)
Each data structure is built up from the basic data types of the
underlying programming language using the available data
structuring facilities, such as
arrays, records (structures in C), pointers, files, sets, etc.
Fundamental data structures
list
array
linked list
string
tree and binary tree
stack
queue
priority queue
graph
Linear Data Structures
Arrays
A sequence of n items of the same data type that are stored contiguously
in computer memory and made accessible by specifying a value of the
array's index.
◼ fixed length (needs preliminary reservation of memory)
◼ contiguous memory locations
◼ direct access
◼ insert/delete

Linked Lists
A sequence of zero or more nodes, each containing two kinds of
information: some data and one or more links called pointers to other
nodes of the linked list.
Singly linked list (next pointer)
Doubly linked list (next + previous pointers)
◼ dynamic length
◼ arbitrary memory locations
◼ access by following links
◼ insert/delete
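The linked-list description above can be sketched in Python (the Node class and helper function are illustrative, not from the slides):

```python
class Node:
    """One node of a singly linked list: some data plus a next pointer."""
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

def to_list(head):
    """Access is by following links: walk from the head, collecting data."""
    items = []
    while head is not None:
        items.append(head.data)
        head = head.next
    return items

# Build a1 -> a2 -> a3; nodes may live at arbitrary memory locations.
head = Node("a1", Node("a2", Node("a3")))
print(to_list(head))  # ['a1', 'a2', 'a3']
```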
Stacks and Queues
Stacks
A stack of plates
─ insertion/deletion can be done only at the top.
─ LIFO
Two operations (push and pop)
Queues
A queue of customers waiting for services
─ Insertion/enqueue from the rear and deletion/dequeue from the
front.
─ FIFO
Two operations (enqueue and dequeue)
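Both disciplines can be demonstrated with Python's built-in structures, a list used as a stack and collections.deque used as a queue:

```python
from collections import deque

stack = []              # LIFO: push/pop only at the top (the list's end)
stack.append("a")       # push
stack.append("b")       # push
print(stack.pop())      # pop returns "b": last in, first out

queue = deque()         # FIFO: enqueue at the rear, dequeue from the front
queue.append("a")       # enqueue
queue.append("b")       # enqueue
print(queue.popleft())  # dequeue returns "a": first in, first out
```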
Priority Queue and Heap
◼ Priority queues (implemented using heaps)
◼ A data structure for maintaining a set of elements, each
associated with a key/priority, with the following operations:
◼ Finding the element with the highest priority
◼ Deleting the element with the highest priority
◼ Inserting a new element
◼ Example application: scheduling jobs on a shared computer
Graphs
Formal definition
A graph G = <V, E> is defined by a pair of two sets: a finite set V of
items called vertices and a set E of vertex pairs called edges.
Undirected and directed graphs (digraphs).
Complete, dense, and sparse graphs
A graph with every pair of its vertices connected by an edge is called
complete, K|V|
In Dense graph, the number of edges is close to the maximal number
of edges.
[Figure: examples of a dense graph and a complete graph on vertices 1, 2, 3, 4]
Graph Representation
Adjacency matrix
n x n boolean matrix if |V| is n.
The element in the ith row and jth column is 1 if there is an edge from
the ith vertex to the jth vertex; otherwise 0.
The adjacency matrix of an undirected graph is symmetric.
Adjacency linked lists
A collection of linked lists, one for each vertex, that contain all the
vertices adjacent to the list’s vertex.
Which data structure would you use if the graph is a 100-node star
shape?
[Figure: a 4-vertex graph with its adjacency matrix and adjacency linked lists]
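For the 100-node star question above, a sketch of the adjacency-list answer: the list stores only the 99 edges, whereas an adjacency matrix would store 100 x 100 entries (the function name is illustrative):

```python
def star_adjacency_list(n):
    """Adjacency lists of an undirected star: vertex 0 joined to 1..n-1."""
    adj = {v: [] for v in range(n)}
    for v in range(1, n):
        adj[0].append(v)    # centre sees every leaf
        adj[v].append(0)    # each leaf sees only the centre
    return adj

adj = star_adjacency_list(100)
print(len(adj[0]))  # 99 neighbours at the centre
print(len(adj[1]))  # 1 neighbour at a leaf
```

An adjacency matrix for the same graph would be mostly zeros, which is why adjacency lists suit sparse graphs like this one.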
Weighted Graphs
Graphs or digraphs with numbers assigned to the edges.
[Figure: a weighted graph on vertices 1-4 with edge weights 5, 6, 7, 8, 9]
Graph Properties -- Paths and Connectivity
Paths
A path from vertex u to v of a graph G is defined as a sequence of
adjacent (connected by an edge) vertices that starts with u and ends
with v.
Simple paths: a path in a graph which does not have repeating
vertices.
Path lengths: the number of edges, or the number of vertices – 1.
Connected graphs
A graph is said to be connected if for every pair of its vertices u and v
there is a path from u to v.
Connected component
A maximal connected subgraph of a given graph.
Graph Properties – Acyclicity
Cycle
A simple path of a positive length that starts and ends at the
same vertex.
Acyclic graph
A graph without cycles
DAG (Directed Acyclic Graph)
[Figure: a DAG on vertices 1, 2, 3, 4]
Trees
Trees
A tree (or free tree) is a connected acyclic graph.
Forest: a graph that has no cycles but is not necessarily connected.
Properties of trees
─ For every two vertices in a tree there always exists exactly one simple
path from one of these vertices to the other.
─ Rooted trees: the above property makes it possible to select an
arbitrary vertex in a free tree and consider it as the root of the
so-called rooted tree.
─ Levels in a rooted tree.
[Figure: a free tree, the corresponding rooted tree, and a forest]
Rooted Trees (I)
Ancestors/predecessors
For any vertex v in a tree T, all the vertices on the simple path from the root to
that vertex are called ancestors.
Descendants/children
All the vertices for which a vertex v is an ancestor are said to be descendants of v.
Parent, child and siblings
If (u, v) is the last edge of the simple path from the root to vertex v, u is said to
be the parent of v and v is called a child of u.
Vertices that have the same parent are called siblings.
Leaves
A vertex without children is called a leaf.
Subtree
A vertex v with all its descendants is called the subtree of T rooted at v.
[Figure: a rooted tree]
Rooted Trees (II)
Depth of a vertex
The length of the simple path from the root to the vertex.
Height of a tree
The length of the longest simple path from the root to a
leaf.
[Figure: a rooted tree of height h = 2]
Ordered Trees
Ordered trees
An ordered tree is a rooted tree in which all the children of each vertex
are ordered.
Binary trees
A binary tree is an ordered tree in which every vertex has no more than
two children and each child is designated either a left child or a right
child of its parent.
Binary search trees
Each vertex is assigned a number.
A number assigned to each parental vertex is larger than all the
numbers in its left subtree and smaller than all the numbers in its right
subtree.
[Figure: a binary tree, and a binary search tree with root 6, left subtree
containing 2, 3, 5 and right subtree containing 8, 9]
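The binary-search-tree property can be sketched with the slide's example keys (root 6, with 2, 3, 5 to the left and 8, 9 to the right); a minimal Python sketch:

```python
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert key, keeping smaller keys in the left subtree
    and larger keys in the right subtree."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    """Search by comparing with the root and descending one side: O(height)."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for key in [6, 3, 9, 2, 5, 8]:
    root = insert(root, key)
print(contains(root, 5), contains(root, 7))  # True False
```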
Topics To be covered in next class
▪ Divide and Conquer
▪ General method,
▪ Applications-Binary search, Quick sort, Merge sort
▪ Selection Sort
▪ Finding maximum and minimum.
▪ Solving recurrence relations using the Master Theorem,
▪ Substitution method