ADA Merged
1. Introduction to Algorithms
2. Algorithm Specification
3. Performance Analysis (Space and Time Complexity)
4. Randomized Algorithms
5. Graph Traversal and search techniques: Breadth first search, Depth first search
What is an Algorithm?
Understanding Algorithms
▪ In mathematics and computer science, algorithms are powerful tools for solving
specific problems and automating tasks.
Algorithm Criteria:
▪ Output:
- Every algorithm must produce at least one meaningful output or result.
- The output should convey relevant information or serve a purpose.
▪ Definiteness:
- Each instruction within an algorithm must be precise, clear, and free from ambiguity.
- Clarity ensures that the algorithm's actions are well-understood and correctly executed.
▪ Finiteness:
- An algorithm must exhibit the property of finiteness, meaning it will complete its execution
within a finite number of well-defined steps.
- It should avoid infinite loops or indeterminate behavior.
▪ Effectiveness:
- Every instruction in the algorithm must be fundamental and executable.
- The instructions should be practical and achievable with available resources.
Example: Multiply two numbers
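To see the criteria in action, here is a minimal sketch (Python; the repeated-addition approach is our illustrative choice, not spelled out on the slide). It takes input, produces output, and terminates after a finite number of definite, effective steps:

def multiply(a, b):
    """Multiply two non-negative integers by repeated addition."""
    result = 0
    for _ in range(b):   # runs exactly b times, so the algorithm is finite
        result += a      # each step is definite and effective
    return result        # at least one meaningful output

print(multiply(6, 7))  # 42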
Why do we Analyze Algorithms?
▪ We need to understand the relationship between problem size and running time: when is a
running program not good enough?
▪ We need to learn how to analyze an algorithm's running time without coding it.
▪ We need to recognize bottlenecks in code, as well as which parts of code are easiest to optimize.
▪ The time complexity of a program is the amount of computer time it needs to run to
completion.
▪ The limiting behavior of the complexity as size increases is called the asymptotic time
complexity.
Space Complexity
▪ Instruction space: Instruction space is the space needed to store the compiled version
of the program instructions. The amount of instruction space that is needed depends
on factors such as:
✓ The compiler used to compile the program into machine code.
✓ The compiler options in effect at the time of compilation.
✓ The target computer.
▪ Data space: Data space is the space needed to store all constant and variable values.
Data space has two components:
✓ Space needed by constants and simple variables in the program.
✓ Space needed by dynamically allocated objects such as arrays and class instances.
▪ Environment stack space: The environment stack is used to save information needed
to resume execution of partially completed functions.
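As a small illustration of these components (Python; the function is a hypothetical example, not from the slides): a recursive sum needs instruction space for its compiled code, data space for `arr` and `i`, and one environment-stack frame per pending call.

def recursive_sum(arr, i=0):
    # Each pending call occupies one environment-stack frame, so the
    # stack grows to O(n) frames for an array of length n.
    if i == len(arr):       # data space: arr and i live in the frame
        return 0
    return arr[i] + recursive_sum(arr, i + 1)

print(recursive_sum([1, 2, 3, 4]))  # 10; recursion depth = 5 frames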
Algorithm Design Goals
- The three basic design goals that one should strive for in a program are:
1. Try to save Time
2. Try to save Space
3. Try to save Face
- A program that runs faster is a better program, so saving time is an obvious goal.
- It's important to note that the function f(n), which quantifies the algorithm's
running time, relies not only on the input data size 'n' but also on the specific
characteristics of the data being processed.
The complexity function f(n) is evaluated for the following cases:
▪ Best Case (Infrequently Utilized): This refers to the minimal conceivable value of
f(n) and is referred to as the "best case."
▪ Average Case (Seldom Employed): This pertains to the anticipated value of f(n) and
is known as the "average case."
▪ Worst Case (Most Commonly Employed): This signifies the maximum value of f(n)
achievable for any conceivable input and is commonly referred to as the "worst case."
▪ The domain within computer science that delves into the efficiency of algorithms is
formally recognized as the "analysis of algorithms."
Best Case:
▪ This denotes the specific input conditions under which an algorithm exhibits its
most efficient performance, resulting in the shortest possible execution time.
▪ The best case provides a lower bound for the algorithm's time complexity.
▪ For instance, in the case of a linear search, the best case occurs when the
sought-after data is found at the very beginning of a large dataset.
Worst Case
▪ This characterizes the input scenarios in which an algorithm experiences its
most time-consuming execution, resulting in the longest possible time.
▪ The worst case offers an upper bound for the algorithm's time complexity.
▪ As an illustration, in linear search, the worst case transpires when the desired
data is entirely absent from the dataset.
Average Case
▪ In the average case, we consider a spectrum of random input conditions and
calculate the algorithm's execution time for each of these conditions.
▪ Subsequently, we determine the average by summing up the execution times for
all random inputs and dividing it by the total number of inputs.
▪ This provides us with a realistic estimate of the algorithm's expected
performance under typical, random scenarios.
▪ The formula for average case computation is as follows:
▪ Average Case = (Sum of execution times for all random cases) / (Total number of
cases)
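▪ For instance (a standard worked case, not computed on the slide): in linear search over n
elements where the key is equally likely to be at each position, finding the key at position k
costs k comparisons, so:
Average Case = (1 + 2 + ... + n) / n = (n + 1) / 2, which grows as O(n).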
Asymptotic Analysis
▪ In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input
size.
▪ We don’t measure the actual running time. We calculate, how the time (or space)
taken by an algorithm increases with the input size.
▪ For example, let us consider the search problem (searching a given item) in a sorted
array.
▪ let us say:
▪ We run the Linear Search on a fast computer A and
▪ Binary Search on a slow computer B and
▪ pick the constant values for the two computers so that it tells us exactly how long it takes
for the given machine to perform the search in seconds.
▪ Let’s say the constant for A is 0.2 and the constant for B is 1000 which means that A is
5000 times more powerful than B.
▪ For small values of input array size n, the fast computer may take less time.
▪ But, after a certain value of input array size, the Binary Search will definitely start taking less
time compared to the Linear Search even though the Binary Search is being run on a slow
machine.
▪ The reason is the order of growth of Binary Search with respect to input size is
logarithmic while the order of growth of Linear Search is linear.
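A quick sketch (Python; it simply plugs in the constants 0.2 and 1000 assumed above) makes the crossover concrete:

import math

def linear_seconds(n):    # computer A running Linear Search: 0.2 * n
    return 0.2 * n

def binary_seconds(n):    # computer B running Binary Search: 1000 * log2(n)
    return 1000 * math.log2(n)

for n in [10, 10**4, 10**5, 10**6]:
    print(n, linear_seconds(n), round(binary_seconds(n)))
# For small n, the fast machine A wins. Beyond roughly n ≈ 8 * 10^4,
# binary search wins despite running on the 5000x slower machine.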
▪ It provides valuable insights into how algorithms perform as input sizes increase, simplifying
complex performance comparisons.
▪ An Example: Comparing Sorting Algorithms: Consider two sorting algorithms: one with a time
complexity of 1000nlog(n) and another with 2nlog(n). Both share the same asymptotic
complexity (n*log(n)). However, in asymptotic analysis, we disregard constant factors.
▪ Constant Factors Disregarded: Asymptotic analysis doesn't consider constant factors. Thus,
it doesn't allow us to determine which of the two sorting algorithms is better in practice, as
their real-world performance could differ due to these constants.
Omega Notation
▪ Unlike Big-O notation, which describes an upper bound, Omega notation focuses
on the lower bounds of an algorithm's performance.
Theta Notation
▪ Theta notation gives a tight bound on an algorithm's time complexity: when a set of
functions lies in both O(expression) and Omega(expression), Theta notation is used.
▪ Theta notation (Θ) is a mathematical notation used to describe the exact and balanced
behavior of an algorithm's time complexity.
The following are commonly used notations in performance analysis, used
to characterize the complexity of an algorithm:
▪ Big–OH (O)
▪ Big–OMEGA (Ω)
▪ Big–THETA (Θ)
▪ Little–OH (o)
Theta (Θ) Notation
- Let g and f be functions from the set of natural numbers to itself.
- The function f is said to be Θ(g), if there are constants c1, c2 > 0 and a natural number n0
such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
- Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤
f(n) ≤ c2 * g(n) for all n ≥ n0}
- The above expression can be described as if f(n) is theta of g(n), then the value f(n) is
always between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0).
- The definition of theta also requires that f(n) must be non-negative for values of n
greater than n0.
- A simple way to get the Theta notation of an expression is to drop low-order terms
and ignore leading constants.
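- A worked instance following directly from the definition (the constants are our choice, not
from the slides): f(n) = 3n^2 + 2n + 5 is Θ(n^2), since 3*n^2 ≤ 3n^2 + 2n + 5 ≤ 10*n^2 for
all n ≥ 1; here c1 = 3, c2 = 10, and n0 = 1. Dropping the low-order terms 2n and 5 gives the
same answer immediately.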
Big-O Notation
- If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a positive
constant c and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.
- O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all
n ≥ n0 }.
- The Big-O notation is useful when we only have an upper bound on the time complexity
of an algorithm.
- Many times we easily find an upper bound by simply looking at the algorithm.
Omega Notation (Ω-Notation)
- Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.
- Let g and f be the function from the set of natural numbers to itself. The function f is said to be
Ω(g), if there is a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥ n0.
- Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
Randomized Algorithms
▪ An algorithm that uses random numbers to decide what to do next anywhere in its
logic is called Randomized Algorithm.
▪ Output and Execution time may vary from run to run for the same input.
Randomized Algorithms
- A randomized algorithm is an algorithm that incorporates randomness or randomization as a
fundamental part of its design and operation.
- Unlike traditional algorithms, which produce the same output for a given input every time,
randomized algorithms introduce an element of chance into their computations.
- These algorithms use random numbers or random processes to make decisions, perform
computations, or achieve specific goals.
Key characteristics
▪ Probabilistic Behavior:
▪ Randomized algorithms produce results that are probabilistic or statistical in nature.
▪ The outcome of the algorithm may vary each time it is run, but it has a well-defined
probability distribution.
▪ They may guarantee that the solution is correct but not bound the running time precisely.
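A small hedged sketch (Python; the example choice is ours, not from the slides): a Las Vegas-style randomized quicksort whose output is always correct but whose running time varies from run to run with the random pivot choices.

import random

def randomized_quicksort(arr):
    # The random pivot makes the expected time O(n log n) regardless of
    # input order; the exact running time varies per run.
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less  = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    more  = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(more)

print(randomized_quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]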
Course Outline 1. Stacks
2. Queues
3. Priority Queues
4. Binary Trees
5. Dictionaries
6. Sets and Disjoint Set Union
Data Types
▪ Two important things about data types:
• Defines a certain domain of values.
• Defines Operations allowed on those values.
▪ Example: int type
• Takes only integer values
• Operations: addition, subtraction, multiplication, bitwise operations etc.
▪ We cannot perform bitwise or % operations on float values.
User Defined Data Types
▪ In contrast to primitive data types (int, float, char), there is a concept of a user defined data
types.
▪ The operations and values of user defined data types are not specified in the language
itself but are specified by the user.
▪ Example: Structure in C -> By using structures, we are defining our own type by combining
other data types.
▪ struct student {
    int id;
    char name[50];
};
Abstract Data Types (ADT)
▪ ADTs are like user defined data types which defines operations on values
using functions without specifying what is there inside the function and how
the operation are performed.
▪ An ADT specifies what operations can be performed on a particular data
structure, but it does not prescribe how these operations should be
implemented.
▪ Example: Stack ADT
▪ A stack consists of elements of same type arranged in a sequential order.
Abstract Data Types (ADT)
▪ The idea behind ADTs is to separate the logical view of a data structure
from its implementation details.
▪ This separation allows programmers to focus on the functionality and
behavior of the data structure without worrying about the specific
implementation.
▪ As a result, ADTs provide a way to abstract away the underlying
complexities of data structures and promote modularity and code
reusability.
Examples of ADT
▪ Stack: A data structure that follows the Last In, First Out (LIFO) principle,
where elements are added and removed from the same end (top).
▪ Queue: A data structure that follows the First In, First Out (FIFO) principle,
where elements are added at the rear and removed from the front.
▪ List: A collection of elements where each element has a position or index,
and operations can include adding, removing, or accessing elements at a
specific position.
Examples of ADT
▪ Set: A collection of unique elements with operations like adding,
removing, and checking for the presence of an element.
▪ Map (or Dictionary): A collection of key-value pairs, where each key is
associated with a value. Operations include inserting, deleting, and
looking up values based on their keys.
▪ Graph: A collection of nodes and edges, where nodes represent entities,
and edges represent relationships between entities.
Abstract Data Types (ADT)
▪ Initialization/Creation:
o Description: Initializes a new instance of the ADT.
o Purpose: Creates a new instance of the data structure.
▪ Insertion/Adding:
o Description: Adds a new element to the data structure.
o Purpose: Increases the size of the data structure by including a new element.
▪ Deletion/Removing:
o Description: Removes an element from the data structure.
o Purpose: Decreases the size of the data structure by eliminating an element.
Abstract Data Types (ADT)
▪ Access/Retrieval:
o Description: Retrieves the value of a specific element in the data structure.
o Purpose: Obtains the value of an element without modifying the data structure.
▪ Search:
o Description: Determines whether a particular element is present in the data
structure.
o Purpose: Checks for the existence of a specific element.
▪ Traversal:
o Description: Visits and processes each element in the data structure.
o Purpose: Iterates through all elements in the data structure.
Abstract Data Types (ADT)
▪ Update/Modification:
o Description: Modifies the value of an existing element in the data structure.
o Purpose: Changes the content of an element without adding or removing it.
▪ Size/Length:
o Description: Returns the number of elements in the data structure.
o Purpose: Provides information about the size or length of the data structure.
Abstract Data Types (ADT)
▪ Emptiness Check:
o Description: Determines whether the data structure is empty.
o Purpose: Indicates whether the data structure contains any elements.
▪ Clear:
o Description: Removes all elements from the data structure.
o Purpose: Resets the data structure to an empty state.
▪ The class provides the implementation details, while the interface defines
the set of operations that can be performed on the ADT.
Abstract Data Types (ADT)
▪ Think of ADT as a black box which hides the inner structure and design of the data
type from the user.
▪ There are multiple ways to implement an ADT.
▪ Example: A stack ADT can be implemented using arrays or linked lists.
▪ The program which uses data structure is called a client program. Client program
has access to the ADT i.e. interface.
▪ The program which implements the data structure is known as the implementation.
Stacks
▪ A stack is a linear data structure in which insertions and deletions are allowed
only at the end, called the top of the stack.
▪ When we define a stack as an ADT (or Abstract Data Type), then we are only
interested in knowing the stack operations from the user point of view.
▪ This means we are not interested in knowing the implementation details at this moment.
▪ We are only interested in knowing what types of operations we can perform on
stack.
Why Abstract Data Types?
▪ Let's say someone wants to use the stack in a program; then they can simply use
push and pop operations without knowing its implementation.
▪ Also, if in future, the implementation of stack is changed from array to linked list,
then the client program will work in the same way without being affected.
▪ Hence, ADT provides Abstraction.
Primary Stack Operations
▪ push (data) : Inserts data onto stack.
▪ pop ( ) : Deletes the last inserted element from the stack
▪ top ( ) : returns the last inserted element without removing it.
▪ size ( ) : returns the size or the number of elements in the stack.
▪ isEmpty ( ) : returns True if the stack is empty, else returns False.
▪ isFull ( ) : returns True if the stack is full, else returns False.
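A minimal array-backed sketch of these operations (Python; the `capacity` parameter and the error behavior are our assumptions, not prescribed by the ADT):

class Stack:
    def __init__(self, capacity):
        self.arr = []               # holds the elements
        self.capacity = capacity

    def push(self, data):
        if self.is_full():
            raise OverflowError("stack is full")
        self.arr.append(data)       # top moves up by one

    def pop(self):
        if self.is_empty():
            raise IndexError("stack is empty")
        return self.arr.pop()       # removes the last inserted element

    def top(self):
        return self.arr[-1]         # last inserted element, not removed

    def size(self):
        return len(self.arr)

    def is_empty(self):
        return len(self.arr) == 0

    def is_full(self):
        return len(self.arr) == self.capacity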
Push and Pop Operation
▪ For the push operation:
• Top is incremented by 1.
• New element is pushed at the position top
Pop an element
▪ How to delete the element at index 3?
▪ We cannot simply remove the array element.
▪ Still, we can give the user an illusion that the array element is deleted
by decrementing the top variable.
▪ Top variable always keeps track of the topmost element of the stack.
▪ If the top is decremented by 1 then the user will perceive that the topmost
element is deleted.
Queues
▪ A queue is defined as a linear data structure that is open at both ends and
the operations are performed in First In First Out (FIFO) order.
▪ We define a queue to be a list in which all additions to the list are made at
one end, and all deletions from the list are made at the other end.
▪ The element that is pushed first is the one on which the operation is first
performed.
FIFO Principle of Queue:
▪ A Queue is like a line waiting to purchase tickets, where the first person in line is
the first person served (i.e. first come, first served).
Queue Representation
Like stacks, Queues can also be represented in an array:
In this representation, the Queue is implemented using the array. Variables used in
this case are
▪ Queue: the name of the array storing queue elements.
▪ Front: the index where the first element is stored in the array representing the
queue.
▪ Rear: the index where the last element is stored in an array representing the
queue.
Array implementation of queue
To implement a queue using an array,
▪ create an array: array of size n and
▪ take two variables front and rear both of which will be initialized to 0 which
means the queue is currently empty.
▪ Elements:
• rear is the index up to which the elements are stored in the array, and
• front is the index of the first element of the array.
Enqueue
If rear < n which indicates that the array is not full then
store the element at arr[rear] and increment rear by 1
Dequeue
▪ Removal of an element from the queue.
▪ An element can only be deleted when there is at least an element to
delete i.e. rear > 0.
▪ Now, the element at array[front] can be deleted but all the remaining
elements have to shift to the left by one position in order for the dequeue
operation to delete the second element from the left on another dequeue
operation.
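A sketch matching this description (Python; note the O(n) left-shift on dequeue, which is exactly the inefficiency the circular queue later removes):

class ArrayQueue:
    def __init__(self, n):
        self.arr = [None] * n
        self.n = n
        self.rear = 0                 # index one past the last stored element

    def enqueue(self, x):
        if self.rear >= self.n:       # array full
            raise OverflowError("queue is full")
        self.arr[self.rear] = x
        self.rear += 1

    def dequeue(self):
        if self.rear <= 0:            # at least one element must exist
            raise IndexError("queue is empty")
        x = self.arr[0]               # front is fixed at index 0
        for i in range(1, self.rear): # shift remaining elements left by one
            self.arr[i - 1] = self.arr[i]
        self.rear -= 1
        return x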
Front and Display
Priority Queues
Priority Queue
▪ Priority Queues are abstract data structures where each data/value in the queue
has a certain priority.
▪ For example, In airlines, baggage with the title “Business” or “First-class” arrives
earlier than the rest.
▪ Various applications of the Priority queue in Computer Science are:
➢ Job Scheduling algorithms, CPU and Disk Scheduling, managing resources that are
shared among different processes, etc.
Priority Queue
▪ A priority Queue is an extension of the queue with the following properties:
o Every item has a priority associated with it.
o An element with high priority is dequeued before an element with low
priority.
o If two elements have the same priority, they are served according to their
order in the queue.
Insertion Operations of a Priority Queue
▪ Create a new element: Create a new element to be inserted into the queue.
This element should include the data value to be stored in the queue and its
associated priority value.
▪ Add the element to the queue: Append the new element to the end of the
queue. This temporarily violates the heap property of the queue.
▪ Restore the heap property: Perform heapify operations to restore the heap
property of the queue. This involves recursively comparing the newly added
element with its parent nodes and swapping them if their priorities are not in
order.
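A hedged sketch of these three steps (Python, using a max-heap stored in a list; `sift_up` is our name for the restore step):

def sift_up(heap, i):
    # Compare the new element with its parent and swap while out of order.
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] > heap[parent]:    # max-heap: child must not exceed parent
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break

def insert(heap, priority):
    heap.append(priority)             # add at the end (temporarily violates heap property)
    sift_up(heap, len(heap) - 1)      # restore the heap property

h = []
for p in [3, 9, 1, 7]:
    insert(h, p)
print(h[0])  # 9 — the highest priority sits at the root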
How is Priority assigned?
▪ In a priority queue, generally, the value of an element is considered for
assigning the priority.
▪ For example, the element with the highest value is assigned the highest priority
and the element with the lowest value is assigned the lowest priority.
▪ The reverse case can also be used i.e., the element with the lowest value can be
assigned the highest priority. Also, the priority can be assigned according to our
needs.
Deletion Operations of a Priority Queue
▪ Deleting an element from a priority queue involves removing the element with
the highest priority while maintaining the heap property of the queue.
▪ Identify the highest priority element: Locate the element with the highest
priority in the queue. This is the root node: the maximum in a max heap, or the
minimum in a min heap (when lower values mean higher priority).
▪ Remove the highest priority element: Extract the identified element from the
queue. This creates an empty slot in the heap structure.
Deletion Operations of a Priority Queue
▪ Fill the empty slot: To maintain the heap property, a new element needs to be
placed in the empty slot. This is done by moving the last element of the heap to
the empty slot and then performing heapify operations to ensure that the heap
property is restored.
▪ Heapify: The heapify operation involves recursively comparing the new element
with its parent nodes and swapping them if necessary until the heap property is
restored. This ensures that the heap remains sorted after the deletion.
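A matching deletion sketch (Python, same list-based max-heap as in the insertion sketch; `sift_down` is our name for the heapify step):

def sift_down(heap, i):
    n = len(heap)
    while True:
        largest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and heap[l] > heap[largest]:
            largest = l
        if r < n and heap[r] > heap[largest]:
            largest = r
        if largest == i:              # heap property restored
            break
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

def delete_max(heap):
    top = heap[0]                     # highest-priority element (root)
    heap[0] = heap[-1]                # move the last element into the empty slot
    heap.pop()
    if heap:
        sift_down(heap, 0)            # heapify from the root
    return top

h = [9, 7, 3, 1]                      # a valid max-heap
print(delete_max(h), h)               # 9 [7, 1, 3]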
Operations of a Priority Queue
▪ Peek in a Priority Queue
o This operation helps to return the maximum element from Max Heap or the
minimum element from Min Heap without deleting the node from the priority
queue.
Types of Priority Queue
▪ Ascending Order Priority Queue: As the name suggests, in ascending order
priority queue, the element with a lower priority value is given a higher priority in
the priority list.
▪ For example, if we have the following elements in a priority queue arranged in
ascending order like 4,6,8,9,10. Here, 4 is the smallest number, therefore, it will
get the highest priority in a priority queue.
Types of Priority Queue
▪ Descending order Priority Queue : The root node is the maximum element in a
max heap, as you may know. It will also remove the element with the highest
priority first.
▪ As a result, the root node is removed from the queue. This deletion leaves an
empty space, which will be filled with fresh insertions in the future.
▪ The heap invariant is then maintained by comparing the newly inserted element to
all other entries in the queue.
Difference between Priority Queue and
Normal Queue?
▪ In Queue, the oldest element is dequeued first. While, in Priority Queue, an
element based on the highest priority is dequeued.
▪ When elements are popped out of a priority queue the result obtained is either
sorted in Increasing order or in Decreasing Order. While, when elements are
popped from a simple queue, a FIFO order of data is obtained in the result.
Tree Data Structure
▪ A Tree is a Data structure in which data
items are connected using references in a
hierarchical manner.
▪ Each Tree consists of a root node from
which we can access each element of the
tree.
▪ Starting from the root node, each node
contains zero or more nodes connected to
it as children.
▪ A simple tree can be depicted as seen in
the following figure.
Parts of a Tree Data structure
▪ Root Node: Root node is the topmost node of a tree. It is always the first node created
while creating the tree and we can access each element of the tree starting from the
root node. In the above example, the node containing element 50 is the root node.
▪ Parent Node: The parent of any node is the node which references the current node.
In the above example, 50 is the parent of 20 and 45, 20 is parent of 11, 46 and 15.
Similarly 45 is the parent of 30 and 78.
▪ Child Node: Child nodes of a parent node are the nodes at which the parent node is
pointing using the references. In the example above, 20 and 45 are children of 50. The
nodes 11, 46, and 15 are children of 20, and 30 and 78 are children of 45.
Parts of a Tree Data structure
▪ Edge: The reference through which a parent node is connected to a child node is
called an edge. In the above example, each arrow that connects any two nodes is an
edge.
▪ Leaf Node: These are those nodes in the tree which have no children. In the above
example, 11, 46, 15, 30, and 78 are leaf nodes.
▪ Internal Nodes: Internal Nodes are the nodes which have at least one child. In the
above example, 50, 20 and 45 are internal nodes.
Binary Trees
▪ A Binary Tree is a tree data structure in which each node has at most 2
children. Since each element in a binary tree can have only 2 children, we
typically name them the left and right child.
Binary Tree Representation
Binary Tree
▪ A binary tree is a tree data structure in which each node can have a maximum
of 2 children.
▪ It means that each node in a binary tree can have either one, two, or no children.
▪ Each node in a binary tree contains data and references to its children. Both
children are named as the left child and the right child according to their position.
Basic Operation On Binary Tree
▪ Inserting an element.
▪ Removing an element.
▪ Searching for an element.
▪ Traversing the tree.
▪ Auxiliary Operation On Binary Tree:
▪ Finding the height of the tree
▪ Find the level of the tree
▪ Finding the size of the entire tree.
Arrays
▪ The array is a collection of the same type of elements at contiguous memory
locations under the same name.
▪ It is easier to access the element in the case of an array.
▪ The size is the key issue in the case of an array which must be known in
advance so as to store the elements in it.
▪ The size cannot be modified at runtime after the array is created, and
memory wastage can occur if the size of the array is greater than the
number of elements stored in the array.
Dictionaries
▪ A dictionary is a collection of data values.
▪ It holds a key: value pair in which we can easily access a value if the key is
known.
▪ It improves the readability of your code and makes it easier to debug.
▪ It is fast as the access of a value through a key is a constant time operation.
▪ A dictionary is also called a hash, a map, a HashMap in different programming
languages.
▪ Keys in a dictionary must be unique; an attempt to create a duplicate key will
typically overwrite the existing value for that key.
Dictionaries
▪ Dictionary is an abstract data structure that supports the following operations:
▪ Search (K key) (returns the value associated with the given key)
▪ Insert (K key, V value)
▪ Delete (K key)
▪ Each element stored in a dictionary is identified by a key of type K.
▪ Dictionary represents a mapping from keys to values.
▪ Other terms for keyed containers include the names map, table, search
table, associative array, or hash.
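A tiny illustration of the three dictionary operations (Python, whose built-in dict is one such keyed container):

d = {}                    # create an empty dictionary
d["alice"] = 25           # Insert(key, value)
d["bob"] = 30
print(d["alice"])         # Search(key) -> 25, expected constant time
del d["bob"]              # Delete(key)
d["alice"] = 26           # duplicate key: overwrites the existing value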
Comparison Between Array and Dictionary:
S.N. | Array | Dictionary
1. | Stores just a set of objects. | Represents the relationship between pairs of objects.
2. | Lookup time is more in the case of an array: O(N), where N is the size of the array. | Lookup time is less compared to an array; generally it is O(1).
3. | Elements are stored at contiguous memory locations. | Elements may or may not be stored at contiguous memory locations.
4. | Items are unordered, changeable, and do allow duplicates. | Items are ordered, changeable, and do not allow duplicates.
Comparison Between Array and Dictionary (continued):
S.N. | Array | Dictionary
5. | Items are not represented as key: value pairs. | Items are represented as key: value pairs.
6. | The values in the array are of the same data type. | The values in dictionary items can be of any data type.
7. | Values can be accessed randomly without the need for any key. | To access a value, the key is required.
Sets and disjoint sets union
▪ Set: A set is a collection of distinct elements. A set can be represented, for
example, as S1 = {1,2,5,10}.
▪ Disjoint Sets: Disjoint sets are those that do not have any common element. For
example, if S1 = {1,7,8,9} and S2 = {2,5,10}, then we can say that S1 and S2 are two
disjoint sets.
▪ Two sets are called disjoint sets if they don't have any element in common, i.e., the
intersection of the sets is a null set.
▪ Disjoint Set Operations: The disjoint set operations are:
▪ Union
▪ Find
Sets and disjoint sets union
▪ Disjoint set Union: If Si and Sj are two disjoint sets, then their union Si U Sj consists
of all the elements x such that x is in Si or Sj.
▪ Example: S1 = {1,7,8,9} and S2 = {2,5,10}, so S1 U S2 = {1,2,5,7,8,9,10}
▪ Find: Given the element i, find the set containing i.
▪ Example:
▪ S1={1,7,8,9} S2={2,5,10} S3={3,4,6}
▪ Find(4)= S3 Find(5)=S2 Find(7)=S1
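A minimal sketch of Union and Find (Python; the parent-array representation is a standard choice, not spelled out on the slides):

parent = {}

def make_set(x):
    parent[x] = x                 # each element starts in its own set

def find(x):
    while parent[x] != x:         # follow parent links up to the representative
        x = parent[x]
    return x

def union(x, y):
    rx, ry = find(x), find(y)
    if rx != ry:
        parent[ry] = rx           # merge the two sets

for v in [1, 7, 8, 9]: make_set(v)
for v in [2, 5, 10]:   make_set(v)
union(1, 7); union(1, 8); union(1, 9)   # build S1
union(2, 5); union(2, 10)               # build S2
print(find(7) == find(9))  # True  — same set
print(find(7) == find(5))  # False — disjoint sets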
Q1. Briefly Explain the Queue data structure. Write an algorithm
to add and remove an element from the circular queue and
compute the complexity of your algorithm.
Queue Data Structure
▪ A Queue is defined as a linear data structure that is open at both ends and
the operations are performed in First In First Out (FIFO) order.
▪ We define a queue to be a list in which all additions to the list are made at
one end, and all deletions from the list are made at the other end.
▪ The element that is pushed first is the one on which the operation is first
performed.
Write an algorithm to add and remove an element
from the circular queue and compute the complexity
of your algorithm.
▪ A circular queue is similar to a linear queue as it is also based on the FIFO (First
In First Out) principle except that the last position is connected to the first
position in a circular queue that forms a circle. It is also known as a Ring Buffer.
▪ The circular queue solves the major limitation of the normal queue. In a normal
queue, after a bit of insertion and deletion, there will be non-usable empty
space.
▪ Here, indexes 0 and 1 can only be used after resetting the queue (deletion of all
elements). This reduces the actual size of the queue.
Algorithm to insert an element in a circular queue
The steps of the enqueue operation are given below:
▪ First, we will check whether the Queue is full or not.
▪ Initially the front and rear are set to -1. When we insert the first element in a
Queue, front and rear are both set to 0.
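The algorithm itself appeared on the slide as a figure; a sketch consistent with the steps above (Python; modulo arithmetic supplies the wrap-around, and enqueue is O(1) since nothing is shifted):

class CircularQueue:
    def __init__(self, n):
        self.arr = [None] * n
        self.n = n
        self.front = self.rear = -1        # -1 means the queue is empty

    def enqueue(self, x):                  # O(1)
        if (self.rear + 1) % self.n == self.front:
            raise OverflowError("queue is full")
        if self.front == -1:               # first insertion: front and rear become 0
            self.front = 0
        self.rear = (self.rear + 1) % self.n   # wrap around the array end
        self.arr[self.rear] = x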
Algorithm to remove an element in a circular queue
The steps of the dequeue operation are given below:
▪ First, we check whether the Queue is empty or not. If the queue is empty, we
cannot perform the dequeue operation.
▪ When the element is deleted, the value of front gets incremented by 1
(wrapping around to 0 at the end of the array).
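The dequeue listing was likewise a figure; a matching sketch (Python, operating on the CircularQueue sketched above; also O(1), which answers the complexity part of the question):

def dequeue(q):                            # q is the CircularQueue above
    if q.front == -1:
        raise IndexError("queue is empty")
    x = q.arr[q.front]
    if q.front == q.rear:                  # last element removed: reset to empty
        q.front = q.rear = -1
    else:
        q.front = (q.front + 1) % q.n      # front moves forward, wrapping around
    return x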
Q1. Briefly Explain the Stack data structure. Write an algorithm
to add and remove an element from the stack and compute the
complexity of your algorithm.
Stack Data Structure
▪ Stack is a linear data structure which follows a particular order in which the
operations are performed. The order may be LIFO(Last In First Out) or
FILO(First In Last Out).
▪ Stack operations may involve initializing the stack, using it, and then de-
initializing it. Apart from these basics, a stack is used for the following
two primary operations:
▪ push() − Pushing (storing) an element on the stack.
▪ pop() − Removing (accessing) an element from the stack.
Stack Data Structure
▪ Step 1 − Checks if the stack is full.
▪ Step 2 − If the stack is full, produces
an error and exit.
▪ Step 3 − If the stack is not full,
increments top to point next empty
space.
▪ Step 4 − Adds data element to the
stack location, where top is pointing.
▪ Step 5 − Returns success.
SORTING, SELECTION AND SEQUENCING
By: Ashok Basnet
Outline
▪ General method of problem solving: Divide and Conquer Method
▪ Binary Search Technique
▪ Finding the Minimum and Maximum
▪ Merge Sort, Quick Sort and Selection Sort
▪ Strassen's Matrix Multiplication
▪ General method of problem solving: The Greedy Method
▪ Knapsack Problem: Fractional Knapsack Problem
▪ Tree Vertex Splitting
▪ Job Sequencing with Deadlines
▪ Minimum Cost Spanning Tree
▪ Optimal Storage Allocation on Magnetic Tapes
▪ Optimal Merge Patterns
Divide and Conquer Method
• Recursion:
o Divide and Conquer Method relies on recursion, where a problem is recursively
broken down into smaller sub-problems.
• Breaking Down the Problem:
o The algorithm divides the problem into two or more sub-problems, often of the
same or related type.
• Solving Sub-Problems:
o The sub-problems are solved independently, either directly if they are simple
enough or by further breaking them down recursively.
• Combining Solutions:
o The solutions to the sub-problems are combined to produce a solution for the
original problem.
• Efficiency:
o D&C often leads to efficient algorithms because it breaks down complex problems
into simpler, manageable sub-problems.
Divide and Conquer Method
• Applications: D&C is widely used in solving various types of problems, including:
o Sorting Algorithms: such as Quick Sort and Merge Sort.
o Discrete Fourier Transform (FFT): a fast algorithm for computing the DFT.
• Example Algorithms:
o Quick Sort: Sorts an array by partitioning and recursively sorting sub-arrays.
o Merge Sort: Divides the array into two halves, recursively sorts them, and then
merges them.
Divide and Conquer Method
Comparison of the three sorts (see the merge sort sketch after this table):
Selection Sort:
• Time Complexity: O(n^2) in the worst and average cases.
• In-Place: Yes.
• Stability: No.
• Use Cases: Small datasets, simple implementations, and situations where in-place sorting is crucial.
Merge Sort:
• Time Complexity: O(n log n) in all cases (worst, average, and best).
• In-Place: No (requires additional space for merging).
• Stability: Yes.
• Use Cases: Large datasets, situations where stability is important, linked lists, and external sorting.
Quick Sort:
• Time Complexity: O(n log n) in the average case, O(n^2) in the worst case (can be mitigated with randomization or careful pivot selection).
• In-Place: Yes (with the potential for a log n recursion stack in the worst case).
• Stability: No.
• Use Cases: General-purpose sorting, large datasets, situations where average-case performance is crucial.
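The slide's merge sort listing survived only as the two recursive calls merge_sort(left_half) and merge_sort(right_half); a fuller sketch along those lines (Python, our reconstruction rather than the slide's exact code):

def merge_sort(arr):
    if len(arr) <= 1:                    # base case: already sorted
        return arr
    mid = len(arr) // 2
    left_half = merge_sort(arr[:mid])    # recursively sort each half
    right_half = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left_half) and j < len(right_half):
        if left_half[i] <= right_half[j]:   # <= keeps the sort stable
            merged.append(left_half[i]); i += 1
        else:
            merged.append(right_half[j]); j += 1
    merged.extend(left_half[i:])         # append any leftovers
    merged.extend(right_half[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))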
Selection Sort, Merge Sort, and Quick Sort
• Time Complexity:
• Quick Sort has an average-case time complexity of O(n log n), making it more
efficient than Selection Sort for larger datasets. However, Quick Sort's worst-case
time complexity is O(n^2), which can be a concern in certain situations.
• Merge Sort consistently performs at O(n log n) in all cases, making it a reliable choice
when a stable sort is needed.
• Selection Sort consistently has a time complexity of O(n^2), making it less efficient for
large datasets.
• In-Place vs. Extra Space:
• Selection Sort and Quick Sort are in-place algorithms, meaning they don't require
additional memory proportional to the size of the input.
• Merge Sort, on the other hand, requires additional space for merging, proportional to
the size of the input. This can be a drawback in terms of space complexity.
Selection Sort, Merge Sort, and Quick Sort
• Stability:
• Quick Sort is not stable, meaning equal elements may not maintain their original
order.
• Selection Sort is not stable.
• Merge Sort is stable.
• Ease of Implementation:
• Selection Sort is straightforward to implement.
• Quick Sort involves partitioning and recursion and can be more challenging to
implement correctly.
• Merge Sort is relatively easy to implement but involves additional memory.
Finding the Maximum and Minimum
• The Max-Min Problem in algorithm analysis is finding the maximum and
minimum value in an array.
• To find the maximum and minimum numbers in a given array numbers[ ] of
size n, the following algorithm can be used.
• naive method
• divide and conquer approach.
Naïve Method
• Naïve method is a basic method to solve any problem. In this method, the
maximum and minimum number can be found separately.
• Algorithm:
• Naive_Max_Min (Python):
def naive_max_min(numbers):
    # Initialize max_value and min_value to the first element of the array
    max_value = min_value = numbers[0]
    for num in numbers[1:]:
        if num > max_value:
            max_value = num
        if num < min_value:
            min_value = num
    return (max_value, min_value)
• The number of comparisons in the naive method is 2(n - 1) = 2n - 2. The number of
comparisons can be reduced using the divide and conquer approach.
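A sketch of the divide and conquer version (Python; the interface and the roughly 3n/2 - 2 comparison count are standard for this problem, though the slides' own listing is not shown):

def dc_max_min(numbers, lo, hi):
    if lo == hi:                          # one element: it is both max and min
        return numbers[lo], numbers[lo]
    if hi == lo + 1:                      # two elements: a single comparison
        if numbers[lo] > numbers[hi]:
            return numbers[lo], numbers[hi]
        return numbers[hi], numbers[lo]
    mid = (lo + hi) // 2
    max1, min1 = dc_max_min(numbers, lo, mid)       # conquer left half
    max2, min2 = dc_max_min(numbers, mid + 1, hi)   # conquer right half
    return max(max1, max2), min(min1, min2)         # two comparisons to combine

print(dc_max_min([3, 9, 1, 7, 5], 0, 4))  # (9, 1)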
Time Complexity of a Recursive Function
void Test(int n) { // n
if (n > 0) {
printf("%d ", n); // 1
Test(n - 1); // n-1
}
}
▪ So, T (n) = T (n-1) + 1
▪ Hence, T (n) = 1 for n=0, and T (n-1) + 1 for n>0
Solving Recurrence Relation
• Definition of Recurrence Relation: T (n) = T (n−1)+1
• Initial Condition: T (n) = 1 for n = 0
• Substituting and Simplifying:
▪ T (n) = T (n−1) + 1
▪ T (n−1) = T (n−2) + 1
• Substitute T (n−1) into the first equation: T (n) = T (n−2) + 2
• Continue substituting T (n−k) for k times: T (n) = T (n−k) + k
• Assuming n-k=0: T (n) = T (0) + n = n + 1
Preparing Recurrence Relation
void Test(int n) {
if (n > 0) {
for (int i = 1; i < n; i = i + 1) {
printf("%d", n);
}
Test(n - 1);
}
}
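▪ For this version, each call prints n - 1 times before recursing (a step the slide leaves to its
figure), so T(n) = T(n-1) + (n-1); telescoping gives T(n) = n(n-1)/2 + T(0) = O(n^2).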
Knapsack Problem: Greedy Strategies (for the m = 20 example below)
1. Greedy about profit: Obj1 has the max profit, so include it first. Remaining
capacity: 20 - 18 = 2kg. Now only 2/15 of obj2 can be included. So overall profit
= 25 + 24 * 2/15 = 28.2
2. Greedy about weight: Here we put the lightest object first. The minimum weight
object is obj3, so we put it in the bag. Then 10/15 of obj2 is added.
So overall profit: 15 + 24 * 10/15 = 31
Knapsack Problem
• Say we have knapsack of capacity(m) = 20
Objects:  Obj1   Obj2   Obj3
Profit:   25     24     15
Weight:   18     15     10
p/w:      1.39   1.6    1.5
• But we are not getting optimal solution from the previous method.
• So compute profit/weight
• So highest proportion is of object 2. We put obj2 in knapsack.
• Remaining capacity: 20-15 = 5kg
• Then we select 5/10th of obj3. So, remaining capacity: 0kg
• Profit = 24 + 15 * ½ = 24+7.5 = 31.5
Knapsack Problem Algorithm
Object (O):  1   2   3   4   5   6   7
Profit (P):  10  5   15  7   6   18  3
Weight (w):  2   3   5   7   1   4   2
• 15-1 = 14kg
• 14-2 = 12kg and so on
Knapsack Problem
Object (O):  1   2   3   4   5   6   7
Profit (P):  10  5   15  7   6   18  3
Weight (w):  2   3   5   7   1   4   2
15-1 = 14kg
14-2 = 12kg
12-4 = 8kg
8-5 = 3kg
3-1 = 2kg
2-2 = 0
Knapsack Problem
Object (O):  1   2   3   4   5   6   7
Profit (P):  10  5   15  7   6   18  3
Weight (w):  2   3   5   7   1   4   2
o Compute Sum(xi * wi)
o Profit: Sum(xi * pi)
o Constraint: Sum(xi * wi) <= m
o Objective: Sum(xi * pi) needs to be maximum
o Hence the constraint must be satisfied while maximizing the objective function.
o Code Implementation (see the sketch below)
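A sketch of the greedy fractional knapsack (Python, sorting by profit/weight ratio as derived above; the function name and interface are our choices):

def fractional_knapsack(profits, weights, m):
    # Sort items by profit/weight ratio, highest first (the greedy rule).
    items = sorted(zip(profits, weights), key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for p, w in items:
        if m >= w:                # take the whole object
            total += p
            m -= w
        else:                     # take the fraction that still fits, then stop
            total += p * (m / w)
            break
    return total

print(fractional_knapsack([25, 24, 15], [18, 15, 10], 20))  # 31.5, as above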
Job Sequencing with deadlines
• We are given a set of ‘n’ jobs. Associated with each job i is a deadline di
>= 0 and a profit pi >= 0.
• For any job ‘i’ the profit pi is earned iff the job is completed by its deadline.
• Only one machine is available for processing jobs. An optimal solution is
the feasible solution with maximum profit.
• Sort the jobs in ‘j’ ordered by their deadlines. The array d [1 : n] is used to store
the deadlines in the order of their p-values.
• The set of jobs j [1 : k] such that j [r], 1 ≤ r ≤ k are the jobs in ‘j’ and d (j [1]) ≤ d
(j[2]) ≤ . . . ≤ d (j[k]).
Job Sequencing with deadlines
• There are n jobs to be processed on a machine.
• Each job i has a deadline di ≥ 0 and profit pi ≥ 0 .
• Pi is earned iff the job is completed by its deadline.
• The job is completed if it is processed on a machine for unit time.
• Only one machine is available for processing jobs.
• Only one job is processed at a time on the machine.
• A feasible solution is a subset of jobs J such that each job is completed
by its deadline.
• An optimal solution is a feasible solution with maximum profit value.
Job Sequencing with deadlines
Using Greedy Method
• Step-01:
• Sort all the given jobs in decreasing order of their profit.
• Step-02:
• Check the value of maximum deadline.
• Draw a Gantt chart where maximum time on Gantt chart is the value of maximum
deadline.
• Step-03:
• Pick up the jobs one by one.
• Put the job on Gantt chart as far as possible from 0 ensuring that the job gets
completed before its deadline.
Using Greedy Method
Jobs J1 J2 J3 J4 J5 J6
Deadlines 5 3 3 2 4 2
Profits 200 180 190 300 120 100
• Step-01: Sort all the given jobs in decreasing order of their profit.
• Step-02: Value of maximum deadline: 5. So, draw a Gantt chart with maximum time on Gantt
chart = 5.
• We take each job one by one in the order they appear in Step-01.
• We place the job on Gantt chart as far as possible from 0.
• We take job J4. Since its deadline is 2, so we place it in the first empty cell before deadline 2.
• We take job J1. Since its deadline is 5, so we place it in the first empty cell before deadline 5.
• We take job J3. Since its deadline is 3, so we place it in the first empty cell before deadline 3.
• We take job J2. Since its deadline is 3, so we place it in the first empty cell before deadline 3.
• Now, we take job J5. Since its deadline is 4, so we place it in the first empty cell before deadline
4.
• The only job left is job J6 whose deadline is 2. All the slots before deadline 2 are already
occupied. Thus, job J6 can not be completed.
Using Greedy Method
Job Considered | Slot Assigned | Solution | Profit
-              | -             | -        | 0
J1             | [4, 5]        | J1       | 200
J2             | [0, 1], [4, 5] | J2, J1  | 380
J3             | [0, 1], [2, 3], [4, 5] | J2, J3, J1 | 570
J4             | [0, 1], [1, 2], [2, 3], [4, 5] | J2, J4, J3, J1 | 870
J5             | [0, 1], [1, 2], [2, 3], [3, 4], [4, 5] | J2, J4, J3, J5, J1 | 990
J6             | -             | -        | -
Using Greedy Method
• Summary:
• The optimal schedule is: J2 , J4 , J3 , J5 , J1
• This is the required order in which the jobs must be completed in
order to obtain the maximum profit.
• All the jobs are not completed in optimal schedule.
• This is because job J6 could not be completed within its deadline.
• Maximum earned profit
▪ Sum of profit of all the jobs in optimal schedule
▪ Profit of job J2 + Profit of job J4 + Profit of job J3 + Profit of job J5
+ Profit of job J1
▪ 180 + 300 + 190 + 120 + 200 = 990 units
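A sketch of this greedy procedure (Python; jobs are placed in the latest free slot before their deadline, matching the Gantt-chart walkthrough):

def job_sequencing(jobs):
    # jobs: list of (name, deadline, profit)
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)  # decreasing profit
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * max_deadline          # slot t covers time [t, t+1]
    for name, deadline, profit in jobs:
        for t in range(min(deadline, max_deadline) - 1, -1, -1):
            if slots[t] is None:           # latest free slot before the deadline
                slots[t] = name
                break                      # otherwise the job is rejected
    schedule = [j for j in slots if j is not None]
    total = sum(p for name, d, p in jobs if name in schedule)
    return schedule, total

jobs = [("J1", 5, 200), ("J2", 3, 180), ("J3", 3, 190),
        ("J4", 2, 300), ("J5", 4, 120), ("J6", 2, 100)]
print(job_sequencing(jobs))  # (['J2', 'J4', 'J3', 'J5', 'J1'], 990)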
Exercise
Exercise 1:
Jobs:      J1   J2   J3   J4   J5
Deadlines: 2    1    3    2    1
Profits:   60   100  20   40   20

Exercise 2:
Jobs:      J1   J2   J3   J4   J5
Profits:   20   15   10   5    1
Deadlines: 2    2    1    3    3
Optimal Merge Patterns
• Let's say we want to merge two sorted arrays:
A: 3, 8, 12, 20
B: 5, 9, 11, 16
• Here we can compare elements from the two arrays and place them in the new array.
• The time taken to merge these lists is the sum of the number of elements in each list.
• Say we have 5 arrays with multiple elements to be merged; then we need an optimal
way to merge them.
Optimal Merge Patterns
• Given n number of sorted files, the task is to find the minimum computations
done to reach the Optimal Merge Pattern.
• When two or more sorted files are to be merged altogether to form a single file,
the minimum computations are done to reach this file are known as Optimal
Merge Pattern.
• If more than 2 files need to be merged then it can be done in pairs. For example,
if we need to merge 4 files A, B, C, D: first merge A with B to get X1, merge X1 with
C to get X2, and merge X2 with D to get X3 as the output file.
• If we have two files of sizes m and n, the total computation time will be m+n.
Here, we use the greedy strategy by merging the two smallest size files among
all the files present.
• Examples: Given 3 files with sizes 2, 3, 4 units. Find an optimal way to combine
these files
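• Worked answer (following the greedy rule just stated): merge the two smallest files first,
2 + 3 at cost 5; then merge 5 + 4 at cost 9; total = 5 + 9 = 14. The alternative pairings cost
6 + 9 = 15 and 7 + 9 = 16, so merging the smallest pair first is optimal.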
Optimal Merge Patterns
• Given ‘n’ sorted files, there are many ways to pair wise merge them into a
single sorted file.
• As, different pairings require different amounts of computing time, we
want to determine an optimal (i.e., one requiring the fewest comparisons)
way to pair wise merge ‘n’ sorted files together.
• This type of merging is called as 2-way merge patterns.
• To merge an n-record file and an m-record file possibly requires n + m
record moves, so the obvious choice is, at each step, to merge the two smallest
files together.
• The two-way merge patterns can be represented by binary merge trees.
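A sketch of the greedy 2-way merge cost (Python, using a min-heap to repeatedly merge the two smallest files; this matches the binary merge tree construction):

import heapq

def optimal_merge_cost(sizes):
    heapq.heapify(sizes)              # min-heap of file sizes
    total = 0
    while len(sizes) > 1:
        a = heapq.heappop(sizes)      # two smallest files
        b = heapq.heappop(sizes)
        total += a + b                # cost of this merge
        heapq.heappush(sizes, a + b)  # merged file goes back into the pool
    return total

print(optimal_merge_cost([2, 3, 4]))            # 14 (the worked example above)
print(optimal_merge_cost([20, 30, 10, 5, 30]))  # 205 (the X1..X5 exercise)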
Optimal Merge Patterns
• Solve:
• Given five files (X1, X2, X3, X4, X5) with sizes (20, 30, 10, 5, 30). Apply
greedy rule to find optimal way of pair wise merging to give an optimal
solution using binary merge tree representation.
Optimal Merge Patterns
• Solve:
• Consider the sequence {3, 5, 9, 11, 16, 18, 20}. Find the optimal merge pattern
for this data.
Optimal Storage Allocation on Magnetic Tapes
• In this case, we have to find the permutation of the program order which
minimizes the MRT after storing all programs on single tape only.
• There are many permutations of programs. Each gives a different MRT.
Consider three programs (P1, P2, P3) with a length of (L1, L2, L3) = (5, 10, 2).
• Let's find the MRT for different permutations. 6 permutations are possible
for 3 items.
• The Mean Retrieval Time for each permutation is listed in the following
table.
Optimal Storage Allocation on Magnetic Tapes
• It should be observed from the above table that the MRT is 26/3, which is
achieved by storing the programs in ascending order of their length.
• Thus, greedy algorithm stores the programs on tape in non-decreasing
order of their length, which ensures the minimum MRT.
Algorithm MRT_SINGLE_TAPE(L):
// Description: Find storage order of n programs such that mean retrieval
time is minimum
// Input: L is an array of program lengths sorted in ascending order
// Output: Minimum Mean Retrieval Time
Tj ← 0 // Initialize total retrieval time
for i ← 1 to n do
for j ← 1 to i do
Tj ← Tj + L[j] // Accumulate retrieval time for each program
end for
end for
MRT ← Tj / n // Calculate mean retrieval time
return MRT
Minimum Cost Spanning Tree
• A minimum spanning tree (MST) is a tree that spans all the vertices in a
connected, undirected graph while minimizing the sum of the weights
(costs) of the edges.
• In simpler terms, it is a subset of the edges of the graph that forms a tree
and connects all the vertices with the minimum possible total edge weight.
• A single graph can have multiple spanning trees.
• A Minimum Spanning Tree(MST) or minimum weight spanning tree for a
weighted, connected, undirected graph is a spanning tree having a weight
less than or equal to the weight of every other possible spanning tree.
• The weight of a spanning tree is the sum of weights given to each edge of
the spanning tree.
Minimum Cost Spanning Tree
• Key characteristics of a minimum spanning tree:
• Spanning Tree: It covers all the vertices of the graph without forming
any cycles.
• Acyclic: It does not contain any cycles since it is a tree.
• Connected: It connects all the vertices of the original graph.
• Minimum Weight: The sum of the edge weights in the spanning tree is
minimized.
Minimum Cost Spanning Tree
• A Graph is a non-linear data structure consisting of vertices and edges.
More formally a Graph is composed of a set of vertices( V ) and a set of
edges( E ). The graph is denoted by G(V, E).
Minimum Cost Spanning Tree
• Vertices: Vertices are the fundamental units of the graph; they are also known
as nodes. Every node/vertex can be labeled or unlabeled.
• Edges: Edges are drawn or used to connect two nodes of the graph. It can
be ordered pair of nodes in a directed graph. Edges can connect any two
nodes in any possible way. There are no rules. Sometimes, edges are also
known as arcs. Every edge can be labeled/unlabeled.
• Graphs are used to solve many real-life problems. Graphs are used to
represent networks. The networks may include paths in a city or telephone
network or circuit network.
• If G(V, E) is a graph then every spanning tree of graph G consists of (V – 1)
edges, where V is the number of vertices in the graph and E is the number
of edges in the graph.
Minimum Cost Spanning Tree
• Consider a complete graph of three vertices where all the edge weights are the
same: then three spanning trees (which are also minimal) of the same total
weight are possible.
Minimum Cost Spanning Tree
• For any cut C of the graph, if the weight of an edge E in the cut-set of C is
strictly smaller than the weights of all other edges of the cut-set of C, then
this edge belongs to all the MSTs of the graph.
Minimum Cost Spanning Tree
• Consider an example with its possible spanning trees.
Minimum Cost Spanning Tree
• Algorithms for finding Minimum Spanning Tree(MST):-
o Prim’s Algorithm
o Kruskal’s Algorithm
Prim’s Algorithm
• It starts with an empty spanning tree. The idea is to maintain two sets of
vertices.
• The first set contains the vertices already included in the MST, the other
set contains the vertices not yet included.
• At every step, it considers all the edges that connect the two sets and picks
the minimum weight edge from these edges.
• After picking the edge, it moves the other endpoint of the edge to the set
containing MST.
• A group of edges that connects two sets of vertices in a graph is called cut
in graph theory.
• So, at every step of Prim’s algorithm, find a cut (of two sets, one contains
the vertices already included in MST and the other contains the rest of the
vertices), pick the minimum weight edge from the cut, and include this
vertex to MST Set (the set that contains already included vertices).
Prim’s Algorithm
Prim's Algorithm(Graph G):
1. Initialize an empty priority queue Q.
2. Add an arbitrary starting vertex to the priority queue.
3. While Q is not empty:
a. Extract the edge with the smallest weight from Q.
b. If adding this edge doesn't form a cycle, add it to the minimum
spanning tree.
c. Add the vertex at the other end of the chosen edge to the priority
queue if not already present.
4. Return the minimum spanning tree.
Prim’s Algorithm (start with the minimum edge, then keep adding the smallest connected edge)
Kruskal’s Algorithm
Kruskal's Algorithm(Graph G):
1. Sort all edges in non-decreasing order of their weights.
2. Initialize an empty minimum spanning tree.
3. Initialize a data structure to keep track of connected components.
4. For each edge in the sorted order:
a. If adding the edge does not create a cycle, add it to the minimum
spanning tree and merge the connected components.
5. Return the minimum spanning tree.
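A runnable sketch of this procedure (Python; the edge list at the bottom is illustrative, not a graph from the slides — union-find detects cycles, tying back to the disjoint sets section):

def kruskal(n, edges):
    # edges: list of (weight, u, v); vertices numbered 0..n-1
    parent = list(range(n))

    def find(x):                       # union-find representative
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                   # adding (u, v) creates no cycle
            parent[rv] = ru            # merge the two components
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)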
Kruskal’s Algorithm
• Now pick all edges one by one from the sorted list of edges
Step 1: Pick edge 7-6: No cycle is formed, include it.
Aspect | Prim’s Algorithm | Kruskal’s Algorithm
Data Structures | Priority queue (or min-heap) | Disjoint set data structure (union-find)
Edge/Vertex Selection | Selects the edge with minimum weight connecting a vertex in the current tree to a vertex outside the tree | Selects the edge with minimum weight from the sorted list of edges, without a predetermined starting point
Complexity | O(E log V), where E is the number of edges and V the number of vertices | O(E log E), where E is the number of edges
Starting Point | Starts with a single vertex and grows the tree | Starts with an empty tree and adds edges
Applications | Suitable for dense graphs or graphs with dense clusters | Suitable for sparse graphs or graphs with edges of similar weights
Termination | Terminates when all vertices are included in the minimum spanning tree | Terminates when all vertices are included in the minimum spanning tree
Tree Vertex Splitting
• Directed and weighted binary tree
• Consider a network of power line transmission
• The transmission of power from one node to the other results in some
loss, such as drop in voltage
• Each edge is labeled with the loss that occurs (edge weight)
• Network may not be able to tolerate losses beyond a certain level
• You can place boosters in the nodes to account for the losses
• Definition 1 Given a network and a loss tolerance level, the tree
vertex splitting problem is to determine the optimal placement
of boosters.
• You can place boosters only in the vertices and nowhere else
Tree Vertex Splitting
• The tree vertex splitting problem involves optimizing the placement of boosters
in a directed and weighted binary tree, representing a power transmission
network.
• The objective is to minimize losses, which are associated with the edges
(transmission lines) in the tree.
• Network Description:
• The network is represented as a directed and weighted binary tree.
• Nodes in the tree correspond to locations in the power transmission network.
• Edges in the tree are labeled with weights that represent the losses occurring
during the transmission between nodes.
• Loss Tolerance Level:
• There is a specified loss tolerance level that the network can withstand.
• The total loss along a path from the source to any node should not exceed
this tolerance level.
Tree Vertex Splitting
• Objective:
• The goal is to determine the optimal placement of boosters in the vertices
(nodes) of the tree.
• Boosters are placed to compensate for losses and ensure that the total loss
along any path does not exceed the given tolerance level.
• Constraints:
• Boosters can only be placed in the vertices of the tree, and not along the
edges.
• Optimization:
• The optimization problem involves finding the best locations for boosters to
minimize losses in the network.
Tree Vertex Splitting
• Let T = (V, E, w) be a weighted directed tree
• V is the set of vertices
• E is the set of edges
• w is the weight function for the edges
• wij is the weight of the edge <i, j> ∈ E
• We say that wij = ∞ if <i, j> !∈ E
• A vertex with in-degree zero is called a source vertex
• A vertex with out-degree zero is called a sink vertex
• For any path P in T, its delay d(P) is defined to be the sum of the weights
(wij) of the edges on that path: d(P) = Σ wij over <i, j> ∈ P
Recurrence Relations
• E.g.: S = {2, 2^2, 2^3, ..., 2^n}, i.e. S = {Sn} = {2^n}
• Here the initial condition is S1 = 2, so the recurrence relation can be expressed as:
Sn = 2 * Sn-1, n ≥ 2
• E.g.: S = {1, 1, 2, 3, 5, ...}
o Sn = Sn-1 + Sn-2, n ≥ 3 with S1 = S2 = 1 (this is the Fibonacci sequence)
Linear Recurrence Relation with Constant Time Coefficient
• Iteration Method
• Method of characteristic roots
• Generating functions
Linear Recurrence Relation with Constant Time Coefficient
• The method of characteristic roots with constant coefficient can solve both
o Homogeneous Recurrence Relation [ f(n) = 0 ]
o Non-homogeneous Recurrence Relation [ f(n) != 0 ]
Homogeneous Recurrence Relation
• Equation: C0*an + C1*an-1 + ... + Ck*an-k = 0, where C0, C1, ..., Ck are
constants.
• Put an = r^n, an-1 = r^(n-1), and so on.
• Find an equation in terms of r. This is called as characteristic equation or
auxiliary equation.
• Solve characteristic equation and find characteristic roots.
• If the characteristic roots are:
o r = r1, r2, r3 (distinct), then the general solution is an = b1*r1^n + b2*r2^n + b3*r3^n,
where b1, b2, b3 are constants.
o r = r1, r1, then an = (b1 + n*b2) * r1^n
o r = r1, r1, r2, r3, then an = (b1 + n*b2)*r1^n + b3*r2^n + b4*r3^n
o r = r1, r1, r1, r2, then an = (b1 + n*b2 + n^2*b3)*r1^n + b4*r2^n
Non-Homogeneous Recurrence Relation
• Equation: [ C0*an + C1*an-1 + C2*an-2 + ... + Ck*an-k = f(n) ], where f(n) != 0 and
f(n) is a polynomial of degree s.
• General solution: an = an(h) + an(p), where h denotes the homogeneous solution and
p the particular solution.
• Case 1: Equate the equation to zero and find the homogeneous solution.
• Case 2:
o Let an = A0 + A1*n + A2*n^2 + ... + As*n^s
o Put an, an-1, and so on in the given recurrence relation.
o Compare the coefficients of like powers of n.
o Find A0, A1, A2, ..., As
o an(p) = A0 + A1*n + A2*n^2 + ... + As*n^s
Non-Homogeneous Recurrence Relation with Const. Coeff.
• Example: find the particular solution of y(n+2) - y(n+1) - 2*y(n) = n^2.
• To find yn(p), put yn = A0 + A1*n + A2*n^2:
{A0 + A1*(n+2) + A2*(n+2)^2} - {A0 + A1*(n+1) + A2*(n+1)^2} - 2*{A0 + A1*n + A2*n^2} = n^2
{A1 - 2*A0 + 3*A2} + {2*A2 - 2*A1}*n + {-2*A2}*n^2 = n^2
• Comparing coefficients:
A1 - 2*A0 + 3*A2 = 0
2*A2 - 2*A1 = 0
-2*A2 = 1
• Solving: A0 = -1, A1 = -1/2, A2 = -1/2
Non-Homogeneous Recurrence Relation with Const. Coeff.
• Solve an+2 + 2an+1 - 15an = 6n + 10, with the initial conditions a0 = 1, a1 = -1/2.
• Solve an+2 - 5an+1 + 6an = n2
• Solve an - 2an-1 = 6n, with a1 = 2
Case III: Non-Homogeneous Recurrence Relation with Const. Coeff.
• Equation: [ C0*an + C1*an-1 + C2*an-2 + ... + Ck*an-k = f(n) ], where f(n) != 0 and
f(n) = β^n * P(n), where β is not a characteristic root and P(n) is a polynomial.
• General solution: an = an(h) + an(p), where h denotes the homogeneous solution and
p the particular solution.
• Case 1: Equate the equation to zero and find the homogeneous solution.
• Case 2:
o Let an = β^n * (A0 + A1*n + A2*n^2 + ... + As*n^s)
o Put an, an-1, and so on in the given recurrence relation.
o Compare the coefficients of like powers of n.
o Find A0, A1, A2, ..., As
• an(p) = β^n * (A0 + A1*n + A2*n^2 + ... + As*n^s)
Non-Homogeneous Recurrence Relation with Const. Coeff. (Case IV)
• Equation: C_0 a_n + C_1 a_{n-1} + C_2 a_{n-2} + … + C_k a_{n-k} = f(n), where f(n) ≠ 0 and f(n) = β^n P(n), where β is a characteristic root of multiplicity t and P(n) is a polynomial.
• General solution: a_n = a_n^(h) + a_n^(p), where h denotes the homogeneous solution and p the particular solution.
• Step 1: Equate the equation to zero and find the homogeneous solution.
• Step 2:
  o Let a_n = n^t β^n (A_0 + A_1 n + A_2 n^2 + … + A_s n^s)
  o Substitute a_n, a_{n-1}, and so on into the given recurrence relation.
  o Compare the coefficients of like powers of n.
  o Find A_0, A_1, A_2, …, A_s
• Then a_n^(p) = n^t β^n (A_0 + A_1 n + A_2 n^2 + … + A_s n^s)
Recurrence Relations from Recursive Code: Fibonacci
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)
Steps to Establish the Recurrence Relation
• Identify the base cases: Determine the simplest cases where the function does not
call itself.
  if n <= 1:
      return n
• This means F(0) = 0 and F(1) = 1
• Identify the recursive case(s): Look at the part of the function where it calls itself
with smaller inputs.
  else:
      return fibonacci(n-1) + fibonacci(n-2)
• This translates to the recurrence relation: F(n) = F(n-1) + F(n-2) for n > 1
• Combine the base case and the recursive case to form the complete recurrence
relation.
Complete Recurrence Relation for Fibonacci
• Combining the above steps, the recurrence relation for the Fibonacci sequence is:
  F(n) = 0                  if n = 0
  F(n) = 1                  if n = 1
  F(n) = F(n-1) + F(n-2)    if n > 1
Another Example: Factorial
• Base case:
  if n == 0:
      return 1
• Recursive case:
  else:
      return n * factorial(n-1)
• Combining the two cases, the recurrence relation is:
  F(n) = 1              if n = 0
  F(n) = n · F(n-1)     if n > 0
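• Putting the two fragments together gives a minimal runnable version:

def factorial(n):
    # Base case: 0! = 1
    if n == 0:
        return 1
    # Recursive case: n! = n * (n-1)!
    else:
        return n * factorial(n - 1)

print(factorial(5))   # 120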
Find Recurrence Relation for:
• Binary Search
  T(n) = O(1)              if n ≤ 1
  T(n) = T(n/2) + O(1)     if n > 1
• Merge Sort
  T(n) = O(1)              if n ≤ 1
  T(n) = 2T(n/2) + O(n)    if n > 1   (the O(n) term is the cost of merging)
• Tower of Hanoi
  T(n) = O(1)              if n = 1
  T(n) = 2T(n-1) + O(1)    if n > 1
• Quick Sort (best/average case, with an O(n) partition step)
  T(n) = O(1)              if n ≤ 1
  T(n) = 2T(n/2) + O(n)    if n > 1
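• As a sanity check, iterating the Tower of Hanoi recurrence with the O(1) term taken as exactly one move reproduces the known closed form 2^n - 1 (a quick sketch):

def hanoi_moves(n):
    # T(1) = 1; T(n) = 2*T(n-1) + 1, counting individual disk moves
    return 1 if n == 1 else 2 * hanoi_moves(n - 1) + 1

for n in range(1, 11):
    assert hanoi_moves(n) == 2 ** n - 1
print("Tower of Hanoi: T(n) = 2^n - 1 verified for n = 1..10")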
General Steps to Derive Recurrence Relations
1. Define the problem size: Determine what 𝑛 represents (e.g., the size of the input).
2. Identify base case(s): Find the simplest cases where the function terminates
without recursion.
3. Identify recursive case(s): Determine how the function reduces the problem size
and combines the results of smaller subproblems.
4. Formulate the recurrence: Write the mathematical expression that represents the
function's behavior in terms of smaller problem sizes.
Multi Stage Graphs
• For each vertex we record cost (the minimum cost from that vertex to the sink) and d (the next vertex on a minimum-cost path), filling the table backward from the sink:
V     1    2    3    4    5    6    7    8    9   10   11   12
cost
d
Multi Stage Graphs
• For the fifth stage:
V     1    2    3    4    5    6    7    8    9   10   11   12
cost                                                         0
d                                                           12
• Cost(5, 12) = 0
Multi Stage Graphs
• For the fourth stage:
V     1    2    3    4    5    6    7    8    9   10   11   12
cost                                          4    2    5    0
d                                            12   12   12   12
• Cost(4, 9) = 4
• Cost(4, 10) = 2
• Cost(4, 11) = 5
Multi Stage Graphs
• For the third stage:
V     1    2    3    4    5    6    7    8    9   10   11   12
cost                            7    5    7    4    2    5    0
d                              10   10   10   12   12   12   12
• Cost(3, 6) = 7, Cost(3, 7) = 5 and Cost(3, 8) = 7, each computed as Cost(3, v) = min{ c(v, w) + Cost(4, w) } over the edges <v, w> into stage four.
Multi Stage Graphs
• For the second stage:
V     1    2    3    4    5    6    7    8    9   10   11   12
cost       7    9   18   15    7    5    7    4    2    5    0
d          7    6    8    8   10   10   10   12   12   12   12
• Cost(2, 2) = min{ c(2, 6) + Cost(3, 6), c(2, 7) + Cost(3, 7), c(2, 8) + Cost(3, 8) }
             = min{ 4+7, 2+5, 1+7 } = 7
• Cost(2, 3) = min{ c(3, 6) + Cost(3, 6), c(3, 7) + Cost(3, 7) }
             = min{ 2+7, 7+5 } = 9
• Cost(2, 4) = min{ c(4, 8) + Cost(3, 8) } = min{ 11+7 } = 18
• Cost(2, 5) = min{ c(5, 7) + Cost(3, 7), c(5, 8) + Cost(3, 8) }
             = min{ 11+5, 8+7 } = 15
Multi Stage Graphs
• For the first stage:
V     1    2    3    4    5    6    7    8    9   10   11   12
cost  16   7    9   18   15    7    5    7    4    2    5    0
d     2/3  7    6    8    8   10   10   10   12   12   12   12
• Cost(1, 1) = min{ c(1, 2) + Cost(2, 2), c(1, 3) + Cost(2, 3), c(1, 4) + Cost(2, 4), c(1, 5) + Cost(2, 5) }
             = min{ 9+7, 7+9, 3+18, 2+15 } = min{ 16, 16, 21, 17 } = 16
• The minimum 16 is achieved through both vertex 2 and vertex 3, so d(1, 1) = 2 or 3.
Multi Stage Graphs
• We now have complete data, so we can read the decisions off the table.
• For the forward approach, we move from source to sink following d:
• Taking d(1, 1) = 2: d(2, 2) = 7, d(3, 7) = 10, d(4, 10) = 12
• Path = 1, 2, 7, 10, 12
• Taking the other optimal choice d(1, 1) = 3: d(2, 3) = 6, d(3, 6) = 10, d(4, 10) = 12
• Path = 1, 3, 6, 10, 12
Multi Stage Graphs
• Algorithm for Multistage Graph (Shortest Path):
• Initialization:
• Create a table to store the optimal costs for each node in each stage.
• Initialize the last stage's costs as the weights of the corresponding nodes.
• Bottom-Up Computation:
• Starting from the penultimate stage and moving backward, compute the optimal costs for each
node in each stage based on the optimal costs of the next stage.
• Use the following recurrence relation for each node in a stage i:
cost[i] = min(weight(i, j) + cost[j]) for all j in the next stage
Here, weight(i, j) represents the weight of the edge from node i to node j.
• Traceback:
• After completing the bottom-up computation, the optimal cost of the starting node is obtained.
• Trace back through the computed costs to reconstruct the optimal path.
• Optimal Solution:
• The optimal solution is the path from the starting node to the ending node with the minimum
total cost.
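• The backward computation above translates directly into a few lines of code. A minimal sketch; the edge weights below were reconstructed from the worked tables, so treat them as assumed:

# Edges of the 5-stage example graph: c[(u, v)] = weight of edge u -> v
c = {
    (1, 2): 9, (1, 3): 7, (1, 4): 3, (1, 5): 2,
    (2, 6): 4, (2, 7): 2, (2, 8): 1,
    (3, 6): 2, (3, 7): 7,
    (4, 8): 11,
    (5, 7): 11, (5, 8): 8,
    (6, 9): 6, (6, 10): 5,
    (7, 9): 4, (7, 10): 3,
    (8, 10): 5, (8, 11): 6,
    (9, 12): 4, (10, 12): 2, (11, 12): 5,
}

n = 12                 # vertices are numbered 1..n; 12 is the sink
cost = {n: 0}          # cost[v] = minimum cost from v to the sink
d = {}                 # d[v] = next vertex on a minimum-cost path

# Process vertices in decreasing order so all successors are done first
# (every edge here goes from a lower- to a higher-numbered vertex)
for u in range(n - 1, 0, -1):
    cost[u], d[u] = min((w + cost[v], v)
                        for (x, v), w in c.items() if x == u)

# Reconstruct the path from source 1 to sink 12
path, v = [1], 1
while v != n:
    v = d[v]
    path.append(v)
print(cost[1], path)   # 16 [1, 2, 7, 10, 12]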
All pairs shortest paths
All pairs shortest paths (Floyd-Warshall)
• In the all pairs shortest path problem, we are to find a shortest path
between every pair of vertices in a directed graph G.
• That is, for every pair of vertices (i, j), we are to find a shortest path
from i to j as well as one from j to i.
• These two paths are the same when G is undirected.
• The all pairs shortest path problem is to determine a matrix A such that
A (i, j) is the length of a shortest path from i to j.
• The matrix A can be obtained by solving n single-source problems using
the algorithm shortest Paths.
All pairs shortest paths
• The shortest i to j path in G, i ≠ j originates at vertex i and goes through
some intermediate vertices (possibly none) and terminates at vertex j.
• If k is an intermediate vertex on this shortest path, then the sub paths from
i to k and from k to j must be shortest paths from i to k and k to j,
respectively.
• Otherwise, the i to j path is not of minimum length. So, the principle of
optimality holds.
• Let A^k(i, j) represent the length of a shortest path from i to j going through no vertex of index greater than k. We obtain:
  A^k(i, j) = min{ A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j) }, k ≥ 1,
  with A^0(i, j) = cost(i, j), the weight of the edge from i to j (∞ if there is no such edge, 0 if i = j).
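• A direct implementation of this recurrence (a minimal sketch; the 3-vertex graph at the bottom is an assumed example):

INF = float('inf')

def floyd_warshall(cost):
    # cost: n x n matrix; cost[i][j] = edge weight, INF if no edge, 0 on diagonal
    n = len(cost)
    A = [row[:] for row in cost]          # A starts as A^0 = cost
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

graph = [[0, 4, 11],
         [6, 0, 2],
         [3, INF, 0]]
print(floyd_warshall(graph))   # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]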
Single Source Shortest Path: General Weights (Bellman Ford
Algorithm)
• The Single-Source Shortest Path (SSSP) algorithm is used to find the shortest paths from
a single source vertex to all other vertices in a weighted graph.
• There are several algorithms to solve this problem, and two of the most commonly used
ones are Dijkstra's algorithm and Bellman-Ford algorithm.
• Given a graph and a source vertex src in the graph, find the shortest paths from src to all
vertices in the given graph. The graph may contain negative weight edges.
• Dijkstra's algorithm doesn't work for graphs with negative weights; Bellman-Ford works for such graphs.
• Bellman-Ford is also simpler than Dijkstra and suits distributed systems well.
• Initialization:
• Create an array dist[] to store the shortest distances from the source to each vertex.
• Initialize dist[] with infinity for all vertices except the source, and set dist[source] to 0.
• Relaxation Loop:
• Iterate |V| - 1 times, where |V| is the number of vertices.
• For each edge (u, v) in the graph, check if the distance from the source to v can be
shortened by going through u.
• If dist[u] + weight(u, v) < dist[v], update dist[v] with the new, shorter
distance.
• Negative Cycle Check:
• After the relaxation loop, check for the presence of negative cycles.
• If there is still an edge (u, v) with dist[u] + weight(u, v) < dist[v], then the graph contains a negative cycle.
• Result:
• The final distances in the dist[] array represent the shortest paths from the source to all
other vertices, or the presence of a negative cycle is detected.
def bellman_ford(vertices, edges, source):
    # edges is a list of (u, v, weight) triples
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    # Relax every edge |V| - 1 times
    for _ in range(len(vertices) - 1):
        for u, v, weight in edges:
            if dist[u] + weight < dist[v]:
                dist[v] = dist[u] + weight
    # One more pass: any further improvement means a negative cycle
    for u, v, weight in edges:
        if dist[u] + weight < dist[v]:
            raise ValueError("Graph contains a negative cycle")
    return dist
0/1 - Knapsack Problem
• Instance: n = 4 items with the weights and values below, and knapsack capacity m = 7.
Weight  1  3  4  5
Value   1  4  5  7
• Let V[i][j] be the best value achievable using the first i items with capacity j; then V[i][j] = max( V[i-1][j], V[i-1][j - w_i] + v_i ). Filling the table row by row:
V  Wt    j=0  1  2  3  4  5  6  7
0  0     0    0  0  0  0  0  0  0
1  1     0    1  1  1  1  1  1  1
4  3     0    1  1  4  5  5  5  5
5  4     0    1  1  4  5  6  6  9
7  5     0    1  1  4  5  7  8  9
• The optimal value is V[4][7] = 9, achieved by the items of weight 3 and weight 4.
• A second instance (weights 2, 3, 4, 5; values 1, 2, 5, 6; capacity m = 8):
Value  Wt    j=0  1  2  3  4  5  6  7  8
0      0     0    0  0  0  0  0  0  0  0
1      2     0    0  1  1  1  1  1  1  1
2      3     0    0  1  2  2  3  3  3  3
5      4     0    0  1  2  5  5  6  7  7
6      5     0    0  1  2  5  6  6  7  8
• Here the optimal value is V[4][8] = 8.
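• The same table-filling rule in code (a minimal sketch, run on both instances above):

def knapsack(weights, values, capacity):
    n = len(weights)
    # V[i][j] = best value using the first i items with capacity j
    V = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = weights[i - 1], values[i - 1]
        for j in range(capacity + 1):
            V[i][j] = V[i - 1][j]                            # skip item i
            if j >= w:
                V[i][j] = max(V[i][j], V[i - 1][j - w] + v)  # take item i
    return V[n][capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))   # 9
print(knapsack([2, 3, 4, 5], [1, 2, 5, 6], 8))   # 8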
String Editing Problem
• The String Editing Problem, also known as the Edit Distance or Levenshtein Distance
problem, involves determining the minimum number of operations required to
transform one string into another.
• The allowed operations typically include insertion, deletion, and substitution of
characters.
• The goal is to find the minimum cost or number of operations needed to make the two
strings identical.
• Example:
o Convert azced to abcdef
o Let's create a matrix to solve the String Editing Problem (Edit Distance problem)
using dynamic programming.
String Editing Problem
• Base cases: converting to or from the empty string (Null) costs one operation per character.
       Null  a  b  c  d  e  f
Null   0     1  2  3  4  5  6
a      1
z      2
c      3
e      4
d      5
String Editing Problem
• The completed table; each cell holds the minimum number of operations to convert the row prefix of "azced" into the column prefix of "abcdef":
       Null  a  b  c  d  e  f
Null   0     1  2  3  4  5  6
a      1     0  1  2  3  4  5
z      2     1  1  2  3  4  5
c      3     2  2  1  2  3  4
e      4     3  3  2  2  2  3
d      5     4  4  3  2  3  3
• The edit distance is the bottom-right entry: 3.
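• A sketch of the dynamic program that fills this table:

def edit_distance(s, t):
    m, n = len(s), len(t)
    # dp[i][j] = min operations to convert s[:i] into t[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                  # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                  # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]           # characters match
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # delete
                                   dp[i][j - 1],      # insert
                                   dp[i - 1][j - 1])  # substitute
    return dp[m][n]

print(edit_distance("azced", "abcdef"))   # 3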
Reliability Design
• The problem is to design a system that is composed of several devices
connected in series.
• Let r_i be the reliability of device D_i (that is, r_i is the probability that device i will function properly); then the reliability of the entire system is the product ∏ r_i.
• Even if the individual devices are very reliable, the reliability of the system may not be very good.
• For example, if n = 10 and r_i = 0.99, 1 ≤ i ≤ 10, then ∏ r_i ≈ 0.904.
• Hence, it is desirable to duplicate devices: multiple copies of the same device type are connected in parallel.
• If m_i copies of device D_i are connected in parallel, the stage functions unless all copies fail, so the stage reliability becomes 1 - (1 - r_i)^m_i.
Reliability Design
Device   Cost (c_i)   Reliability (r_i)   Max copies (u_i)
D1       30           0.9                 2
D2       15           0.8                 3
D3       20           0.5                 3
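• A brute-force sketch that enumerates copy counts using the table above (reading the last column as the maximum number of copies u_i). The cost budget of 105 is an assumed value for illustration; the slides' dynamic-programming formulation would prune this search:

from itertools import product

# (cost, reliability, max copies) for each device, from the table above
devices = [(30, 0.9, 2), (15, 0.8, 3), (20, 0.5, 3)]
budget = 105          # assumed cost budget, for illustration only

def stage_reliability(r, m):
    # A stage with m parallel copies fails only if all m copies fail
    return 1 - (1 - r) ** m

best = (0.0, None)
for copies in product(*(range(1, u + 1) for _, _, u in devices)):
    cost = sum(c * m for (c, _, _), m in zip(devices, copies))
    if cost <= budget:
        rel = 1.0
        for (_, r, _), m in zip(devices, copies):
            rel *= stage_reliability(r, m)
        best = max(best, (rel, copies))
print(best)   # approximately (0.648, (1, 2, 2)) under the assumed budget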
Optimal Binary Search Tree
• As we know that in binary search tree, the nodes in the left subtree have lesser
value than the root node and the nodes in the right subtree have greater value
than the root node.
• We know the key values of each node in the tree, and we also know the
frequencies of each node in terms of searching means how much time is required
to search a node.
• The frequency and key-value determine the overall cost of searching a node. The
cost of searching is a very important factor in various applications.
• The overall cost of searching a node should be small. The time required to search a node in an arbitrary BST can be greater than in a balanced binary search tree, since a balanced tree contains fewer levels.
• One way to reduce the cost of a binary search tree is to build what is known as an optimal binary search tree.
Optimal Binary Search Tree
• In the above tree, all the nodes on the left subtree are smaller
than the value of the root node, and all the nodes on the right
subtree are larger than the value of the root node.
• The maximum time required to search a node is proportional to the height of the tree, which for a balanced tree is log(n).
• Now we will see how many binary search trees can be made
from the given number of keys.
• For example: 10, 20, 30 are the keys, and the following are the
binary search trees that can be made out from these keys.
• The number of distinct binary search trees that can be made from n keys is the Catalan number C(2n, n) / (n + 1); for the 3 keys above this gives a total of 5 trees.
• The cost required for searching an element
depends on the comparisons to be made to
search an element. Now, we will calculate the
average cost of time of the above binary search
trees.
• So far, we have considered height-balanced binary search trees.
• To find the optimal binary search tree, we will determine the frequency of
searching a key.
• Let's assume that frequencies associated with the keys 10, 20, 30 are 3, 2,
5.
• The above trees have different search costs (the sum of frequency × depth over all keys). The tree with the lowest cost is considered the optimal binary search tree. Here the tree with cost 17 is the lowest, so it is the optimal binary search tree.
Dynamic Programming Approach for Optimal Binary Search Trees
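• A sketch of the O(n^3) dynamic program, run on the three-key example above (successful searches only; a key at depth k contributes frequency × k to the cost):

def optimal_bst_cost(freq):
    n = len(freq)
    # cost[i][j] = min search cost of a BST built from keys i..j (0-indexed)
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]                      # single-key tree
    for length in range(2, n + 1):                # interval length
        for i in range(n - length + 1):
            j = i + length - 1
            w = sum(freq[i:j + 1])                # every key gets one level deeper
            cost[i][j] = w + min(
                (cost[i][r - 1] if r > i else 0) +
                (cost[r + 1][j] if r < j else 0)
                for r in range(i, j + 1)          # try each key as the root
            )
    return cost[0][n - 1]

# Keys 10, 20, 30 with frequencies 3, 2, 5
print(optimal_bst_cost([3, 2, 5]))   # 17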
Articulation Points and Biconnected Components
• Let G = (V, E) be a connected undirected graph. Consider the following definitions:
• Articulation Point (or Cut Vertex): an articulation point in a connected graph is a vertex whose removal, together with the removal of all incident edges, would break the graph into two or more pieces.
• Bridge: Is an edge whose removal results in a disconnected graph.
• Biconnected: A graph is biconnected if it contains no articulation points. In a biconnected graph, every pair of vertices is connected by at least two vertex-disjoint paths. A graph that is not biconnected divides into biconnected components.
Compute discovery time
Make a table for identifying articulation points:
node                1  2  3  4  5  6
Discovery time (d)  1  6  3  2  4  5
Low value (L)       1  1  1  1  3  3
Compute articulation point
• Consider a DFS tree edge (u, v), where u is the parent and v is the child.
• If L[v] >= d[u], then there is an articulation point at u (except when u is the root); otherwise u is not an articulation point for this child.
• A different condition applies to the root vertex:
• if the root has more than one child in the DFS tree, then the root is an articulation point.
Compute articulation point
• Initialization:
• Perform a DFS traversal of the graph starting from any vertex (usually the root).
• Initialize variables to keep track of the discovery time and low-link values for each
vertex.
• Use a stack to keep track of the vertices in the current DFS traversal.
• DFS Traversal:
• During the DFS traversal, assign a unique discovery time to each vertex as you visit it
for the first time.
• Keep track of the lowest discovery time (low-link value) reachable from the current vertex, considering both tree edges and back edges.
Compute articulation point
• Identifying Articulation Points:
• An articulation point is identified based on the following conditions:
• If the current vertex is the root of the DFS tree and has more than one child, it is
an articulation point.
• For other vertices, if there exists a child v such that low[v] >=
discovery_time[current], then the current vertex is an articulation point.
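• A compact DFS implementation of these rules (a minimal sketch; the 5-vertex graph at the bottom is an assumed example, not the one from the slides):

def articulation_points(adj):
    # adj: dict mapping each vertex to a list of neighbours (undirected graph)
    time = [1]
    d, L, points = {}, {}, set()

    def dfs(u, parent):
        d[u] = L[u] = time[0]
        time[0] += 1
        children = 0
        for v in adj[u]:
            if v not in d:                       # tree edge
                children += 1
                dfs(v, u)
                L[u] = min(L[u], L[v])
                # Non-root u is an articulation point if no vertex in
                # v's subtree reaches back above u
                if parent is not None and L[v] >= d[u]:
                    points.add(u)
            elif v != parent:                    # back edge
                L[u] = min(L[u], d[v])
        # Root is an articulation point iff it has more than one DFS child
        if parent is None and children > 1:
            points.add(u)

    for s in adj:
        if s not in d:
            dfs(s, None)
    return points

# Assumed example: removing vertex 2 separates {0, 1} from {3, 4}
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(articulation_points(g))   # {2, 3}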
Thank You
Backtracking
By: Ashok Basnet
Lecture Outline
• Introduction to Backtracking method of problem solving
• The 8-queen problem
• Sum of Sub-set problem
• Graph coloring problem
• Hamilton Cycle
Introduction to Backtracking method of problem
solving
• Backtracking is used to solve problem in which a sequence of objects is chosen from a
specified set so that the sequence satisfies some criterion.
• Backtracking is a modified depth first search of a tree.
• Backtracking algorithms determine problem solutions by systematically searching the
solution space for the given problem instance.
• This search is facilitated by using a tree organization for the solution space.
• Backtracking is the procedure whereby, after determining that a node can lead to nothing but dead ends, we go back (backtrack) to the node's parent and proceed with the search on the next child.
Terminology
• Problem state is each node in the depth first search tree.
• Solution states are the problem states ‘S’ for which the path from the root node to ‘S’
defines a tuple in the solution space.
• Answer states are those solution states for which the path from the root node to ‘S’ defines a tuple that is a member of the set of solutions.
• State space is the set of paths from root node to other nodes. State space tree is
the tree organization of the solution space.
• Live node is a node that has been generated but whose children have not yet
been generated.
• E-node is a live node whose children are currently being explored. In other words, an E-
node is a node currently being expanded.
• Dead node is a generated node that is not to be expanded or explored any further. All
children of a dead node have already been expanded.
Planar Graphs
• When drawing a graph on a piece of paper, we often find it convenient to permit edges to intersect at points other than at vertices of the graph.
• These points of intersection are called crossovers.
• A graph G is said to be planar if it can be drawn on a plane without any crossovers; otherwise, G is said to be non-planar.
N-Queens Problem
• Let us consider, N = 8. Then 8-Queens Problem is to place eight queens on an 8 x 8
chessboard so that no two “attack”, that is, no two of them are on the same
row, column, or diagonal.
• All solutions to the 8-queens problem can be represented as 8-tuples (x1, . . . . ,
x8), where xi is the column of the ith row where the ith queen is placed.
• The explicit constraints using this formulation are S_i = {1, 2, 3, 4, 5, 6, 7, 8}, 1 ≤ i ≤ 8. Therefore the solution space consists of 8^8 8-tuples.
• The implicit constraints for this problem are that no two x_i's can be the same (i.e., all queens must be in different columns) and no two queens can be on the same diagonal.
• This realization reduces the size of the solution space from 8^8 tuples to 8! tuples.
N-Queens Problem
• The promising function must check whether two queens are in the same column
or diagonal:
• Suppose two queens are placed at positions (i, j) and (k, l). Then:
• Column conflict: the two queens conflict if their column values are identical, i.e., j = l.
• Diag 45° conflict: two queens are on the same 45° diagonal if i - j = k - l. This implies j - l = i - k.
• Diag 135° conflict: i + j = k + l. This implies j - l = k - i.
• Therefore, two queens lie on the same diagonal if and only if |j - l| = |i - k|,
• where j is the column of the queen in row i and l is the column of the queen in row k.
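• A sketch of the backtracking solver built on exactly this promising test, where x[i] is the column of the queen in row i (0-indexed here):

def n_queens(n):
    solutions, x = [], [0] * n

    def promising(i):
        # Queen i must not share a column or diagonal with queens 0..i-1
        return all(x[k] != x[i] and abs(x[k] - x[i]) != i - k
                   for k in range(i))

    def place(i):
        if i == n:
            solutions.append(x[:])        # all n queens placed
            return
        for col in range(n):
            x[i] = col
            if promising(i):
                place(i + 1)

    place(0)
    return solutions

print(len(n_queens(8)))   # 92 solutions to the 8-queens problem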
Sum of Sub-set problem
• Given a set of non-negative integers and a value sum, the task is to check if there is a
subset of the given set whose sum is equal to the given sum.
• Examples:
• Input: set[] = {3, 34, 4, 12, 5, 2}, sum = 9
Output: True
Explanation: There is a subset (4, 5) with sum 9.
• Input: set[] = {3, 34, 4, 12, 5, 2}, sum = 30
Output: False
Explanation: There is no subset that adds up to 30.
Sum of Sub-set problem
• Given positive numbers wi, 1 ≤ i ≤ n, and m, this problem requires finding all subsets of
wi whose sums are ‘m’.
• All solutions are k-tuples, 1 ≤ k ≤ n.
• For example, n = 4, w = (11, 13, 24, 7) and m = 31, the desired subsets are (11, 13, 7)
and (24, 7).
• Draw the tree for the given problem to find the solution.
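• A sketch of the backtracking search, pruning a node as soon as the running total overshoots m or the remaining weights cannot reach it; run on the example above:

def sum_of_subsets(w, m):
    solutions = []

    def backtrack(i, chosen, total, remaining):
        if total == m:
            solutions.append(chosen[:])          # answer state found
            return
        # Dead end: no items left, overshoot, or target now unreachable
        if i == len(w) or total > m or total + remaining < m:
            return
        # Include w[i]
        chosen.append(w[i])
        backtrack(i + 1, chosen, total + w[i], remaining - w[i])
        chosen.pop()
        # Exclude w[i]
        backtrack(i + 1, chosen, total, remaining - w[i])

    backtrack(0, [], 0, sum(w))
    return solutions

print(sum_of_subsets([11, 13, 24, 7], 31))   # [[11, 13, 7], [24, 7]]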
Graph coloring problem
• Let G be a graph and m be a given positive integer.
• We want to discover whether the nodes of G can be colored in such a way that no two
adjacent nodes have the same color, yet only m colors are used.
• This is termed the m-colorability decision problem.
• The m-colorability optimization problem asks for the smallest integer m for which the
graph G can be colored.
• Given any map, if the regions are to be colored in such a way that no two
adjacent regions have the same color, only four colors are needed.
• For many years it was known that five colors were sufficient to color any map, but
no map that required more than four colors had ever been found.
• After several hundred years, this problem was solved by a group of mathematicians
with the help of a computer. They showed that in fact four colors are sufficient for
planar graphs.
Graph coloring problem
• The function m-coloring will begin by first assigning the graph to its adjacency
matrix, setting the array x [] to zero.
• The colors are represented by the integers 1, 2, . . . , m and the solutions are given by
the n-tuple (x1, x2, . . ., xn), where xi is the color of node i.
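• A sketch of the m-coloring backtracker over an adjacency matrix, following this formulation (the 4-cycle at the bottom is an assumed example):

def m_coloring(G, m):
    # G: n x n adjacency matrix; returns all valid colourings (x_1, ..., x_n)
    n = len(G)
    x = [0] * n
    solutions = []

    def ok(k):
        # Colour x[k] must differ from every already-coloured neighbour
        return all(not (G[k][j] and x[j] == x[k]) for j in range(k))

    def color(k):
        if k == n:
            solutions.append(x[:])
            return
        for c in range(1, m + 1):      # colours are the integers 1..m
            x[k] = c
            if ok(k):
                color(k + 1)
        x[k] = 0

    color(0)
    return solutions

# Assumed example: a 4-cycle 0-1-2-3-0 is 2-colourable
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(m_coloring(C4, 2))   # [[1, 2, 1, 2], [2, 1, 2, 1]]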
Hamilton Cycle
• Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle (suggested by
William Hamilton) is a round-trip path along n edges of G that visits every vertex once
and returns to its starting position.
• The graph G1 contains the Hamiltonian cycle 1, 2, 8, 7, 6, 5, 4, 3, 1. The graph G2
contains no Hamiltonian cycle.
• Start with node 0.
• Apply DFS to find a Hamiltonian path.
• When the base case is reached (i.e., the total number of nodes traversed == V, the total number of vertices), check whether the current node is a neighbour of the starting node.
• As node 2 and node 0 are not neighbours of each other, return from it.
• As no cycle is found on the path {0, 3, 1, 4, 2}, return from node 2, then from node 4.
• Now explore another option for node 1 (i.e., node 2).
• When the base condition is hit again, check for a Hamiltonian cycle.
• As node 4 is not a neighbour of node 0, again no cycle is found; return.
• Return from node 4, node 2, node 1.
• Now explore other options for node 3.
• On the Hamiltonian path {0, 3, 4, 2, 1, 0} we get a cycle, since node 1 is a neighbour of node 0.
• So print this cyclic path: it is our Hamiltonian cycle.
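• A sketch of the DFS just traced; the adjacency list below was reconstructed to be consistent with the walk-through, so treat it as an assumption:

def hamiltonian_cycle(adj, start=0):
    n = len(adj)
    path = [start]
    visited = {start}

    def dfs(u):
        if len(path) == n:                      # base case: all vertices used
            return start in adj[u]              # cycle iff u connects back
        for v in adj[u]:
            if v not in visited:
                path.append(v)
                visited.add(v)
                if dfs(v):
                    return True
                path.pop()                      # backtrack
                visited.remove(v)
        return False

    return path + [start] if dfs(start) else None

# Adjacency reconstructed from the walk-through above (assumed)
adj = {0: [3, 1], 1: [3, 4, 2, 0], 2: [4, 1], 3: [0, 1, 4], 4: [2, 1, 3]}
print(hamiltonian_cycle(adj))   # [0, 3, 4, 2, 1, 0]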