ADSA UNIT - 3

The document provides an overview of graphs as non-linear data structures consisting of vertices and edges, detailing various terminologies and types of graphs, such as directed, undirected, and weighted graphs. It discusses graph representation methods, including adjacency matrices and lists, and outlines traversal techniques like Breadth-First Search (BFS) and Depth-First Search (DFS). Additionally, it covers concepts such as topological sorting, strongly connected components, and minimum spanning trees, along with their applications and algorithms.


DEPARTMENT OF COMPUTER APPLICATIONS

ADVANCED DATA STRUCTURES AND ALGORITHMS

INTRODUCTION TO GRAPHS

Definition:

Graph is a non-linear data structure. It is a collection of vertices and edges. Generally, a graph G is represented as G = (V, E), where V is the set of vertices and E is the set of edges.
Example:
The following is a graph with 5 vertices and 7 edges. This graph G can be defined as G = (V, E), where V = {A,B,C,D,E} and E = {(A,B),(A,C),(A,D),(B,D),(C,D),(B,E),(E,D)}.

Graph Terminology:
We use the following terms in graph data structure.
Vertex
Each individual data element of a graph is called a vertex. A vertex is also known as a node. In the above example graph, A, B, C, D and E are vertices.
Edge
An edge is a connecting link between two vertices. An edge is also known as an arc. An edge is represented as (startingVertex, endingVertex). For example, in the above graph the link between vertices A and B is represented as (A,B). In the above example graph, there are 7 edges, i.e., (A,B), (A,C), (A,D), (B,D), (B,E), (C,D), (D,E).

Edges are of three types:

• Undirected Edge - An undirected edge is a bidirectional edge. If there is an undirected edge between vertices A and B, then edge (A, B) is equal to edge (B, A).
• Directed Edge - A directed edge is a unidirectional edge. If there is a directed edge between vertices A and B, then edge (A, B) is not equal to edge (B, A).
• Weighted Edge - A weighted edge is an edge with a value (cost) on it.

Undirected Graph
A graph with only undirected edges is said to be an undirected graph.

Directed Graph
A graph with only directed edges is said to be a directed graph.

Adjacent
If there is an edge between vertices A and B then both A
and B are said to be adjacent. In other words, vertices A and B
are said to be adjacent if there is an edge between them.

Incident
An edge is said to be incident on a vertex if the vertex is one of the endpoints of that edge.

Degree
The total number of edges connected to a vertex is the degree of that vertex.

Indegree
The total number of incoming edges connected to a vertex is the indegree of that vertex.

Outdegree
The total number of outgoing edges connected to a vertex is the outdegree of that vertex.
Parallel edges or Multiple edges
If there are two undirected edges with the same end vertices, or two directed edges with the same origin and destination, such edges are called parallel edges or multiple edges.

Loop or cycle
An edge (undirected or directed) is a self-loop if its two endpoints coincide with each other.
Simple Graph
A graph is said to be simple if it has no parallel edges and no self-loops.

Path
A path is a sequence of alternating vertices and edges that starts at one vertex and ends at another vertex such that each edge is incident to its predecessor and successor vertex.

REPRESENTATION OF GRAPH
By graph representation, we simply mean the technique used to store a graph in the computer's memory. There are two ways to store a graph in the computer's memory.
• Sequential Representation
  • Adjacency Matrix
  • Incidence Matrix
• Linked Representation
  • Adjacency List

• Sequential Representation
• Adjacency Matrix
In sequential representation, we use an adjacency matrix to store the mapping represented by vertices and edges. In an adjacency matrix, the rows and columns represent the graph vertices, so a graph having n vertices will have an adjacency matrix of dimension n x n.
An entry Mij in the adjacency matrix representation of an undirected graph G will be 1 if there exists an edge between Vi and Vj.
An undirected graph and its adjacency matrix representation is shown in the following figure.

In the above figure, we can see that the mapping among the vertices (A, B, C, D, E) is represented by using the adjacency matrix, which is also shown in the figure.
The adjacency matrix differs for directed and undirected graphs. In a directed graph, an entry Aij will be 1 only when there is an edge directed from Vi to Vj.
A directed graph and its adjacency matrix representation is shown in the following figure.

The representation of a weighted directed graph is different. Instead of filling the entries with 1, the non-zero entries of the adjacency matrix hold the weights of the respective edges.
A weighted directed graph along with its adjacency matrix representation is shown in the following figure.
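As an illustration, a minimal Python sketch of building the adjacency matrices described above; the vertex labels, edge lists and weights are assumptions chosen for the example.

# Hypothetical example: build adjacency matrices for a small graph.
vertices = ["A", "B", "C", "D", "E"]                  # assumed vertex labels
index = {v: i for i, v in enumerate(vertices)}
n = len(vertices)

# Undirected graph: the matrix is symmetric and an entry is 1 if an edge exists.
undirected_edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "D"),
                    ("C", "D"), ("B", "E"), ("E", "D")]
M = [[0] * n for _ in range(n)]
for u, v in undirected_edges:
    M[index[u]][index[v]] = 1
    M[index[v]][index[u]] = 1

# Weighted directed graph: non-zero entries hold the edge weight instead of 1.
weighted_directed_edges = [("A", "B", 4), ("B", "D", 2), ("D", "E", 7)]   # assumed weights
W = [[0] * n for _ in range(n)]
for u, v, w in weighted_directed_edges:
    W[index[u]][index[v]] = w

for row in M:
    print(row)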

• Incidence Matrix
In this representation, the graph is represented using a matrix of size (total number of vertices) x (total number of edges). That means a graph with 4 vertices and 6 edges is represented using a matrix of size 4 x 6. In this matrix, rows represent vertices and columns represent edges. The matrix is filled with 0, 1 or -1. Here, 0 means the edge in that column is not incident on the vertex in that row, 1 means the edge leaves the row vertex (outgoing edge), and -1 means the edge enters the row vertex (incoming edge).

For example, consider the following directed graph representation...

• Linked Representation
In the linked representation, an adjacency list is used to store the graph in the computer's memory.

• Adjacency List
Consider the undirected graph shown in the following figure and check the adjacency list representation.
An adjacency list is maintained for each node present in the graph; it stores the node value and a pointer to the next adjacent node. If all the adjacent nodes have been traversed, NULL is stored in the pointer field of the last node of the list. The sum of the lengths of the adjacency lists is equal to twice the number of edges present in an undirected graph.
Consider the directed graph shown in the following figure and check the adjacency list representation of the graph.

In a directed graph, the sum of the lengths of all the adjacency lists is equal to the number of edges present in the graph.

In the case of a weighted directed graph, each list node contains an extra field that stores the weight of the edge. The adjacency list representation of a weighted directed graph is shown in the following figure.
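A minimal Python sketch of the adjacency-list idea described above, using dictionaries of lists rather than explicit linked nodes; the graph data is assumed.

# Hypothetical adjacency-list representation using a dict of lists.
from collections import defaultdict

undirected_edges = [("A", "B"), ("A", "C"), ("B", "C")]
adj = defaultdict(list)
for u, v in undirected_edges:
    adj[u].append(v)      # store the edge in both directions for an undirected graph
    adj[v].append(u)
# The sum of the list lengths is twice the number of edges, as stated above.
assert sum(len(lst) for lst in adj.values()) == 2 * len(undirected_edges)

# For a weighted directed graph, each list entry also carries the edge weight.
weighted_directed_edges = [("A", "B", 5), ("B", "C", 2)]
wadj = defaultdict(list)
for u, v, w in weighted_directed_edges:
    wadj[u].append((v, w))

print(dict(adj))
print(dict(wadj))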
GRAPH TRAVERSALS
Graph traversal is a technique used for searching a vertex in a graph. Graph traversal is also used to decide the order in which vertices are visited during the search process. A graph traversal finds the edges to be used in the search process without creating loops; that is, using graph traversal we visit all the vertices of the graph without getting into a looping path.

There are two graph traversal techniques and they are as follows...
• BFS (Breadth First Search)
• DFS (Depth First Search)
• BFS (BREADTH FIRST SEARCH)
BFS traversal of a graph produces a spanning tree as its final result. A spanning tree is a graph without loops. We use a queue data structure, with a maximum size equal to the total number of vertices in the graph, to implement BFS traversal.

ALGORITHM:
//Initially all vertices are unmarked (i.e. 0); after visiting they are marked (i.e. 1)
BFS(G)
    mark each vertex in V with 0 as a mark of being "unvisited"
    for each vertex v in V do
        if v is marked with 0
            bfs(v)

bfs(v)
    mark v and initialize a queue with v
    while the queue is not empty do
        for each vertex w in V adjacent to the front vertex do
            if w is marked with 0
                mark w
                add w to the queue
        remove the front vertex from the queue

Example:
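As an illustration, a runnable Python sketch of the queue-based BFS described above; the sample graph and names are assumptions.

from collections import deque

def bfs(graph, start):
    """Breadth-first search; returns vertices in the order they are visited."""
    visited = {start}                  # marked vertices
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()            # remove the front vertex
        order.append(v)
        for w in graph.get(v, []):     # vertices adjacent to the front vertex
            if w not in visited:       # w is still "unvisited"
                visited.add(w)         # mark w
                queue.append(w)        # add w to the queue
    return order

# Assumed example graph as an adjacency list.
graph = {"A": ["B", "C", "D"], "B": ["A", "D", "E"], "C": ["A", "D"],
         "D": ["A", "B", "C", "E"], "E": ["B", "D"]}
print(bfs(graph, "A"))    # ['A', 'B', 'C', 'D', 'E']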

• DFS (Depth First Search)
DFS traversal of a graph produces a spanning tree as its final result. A spanning tree is a graph without loops. We use a stack data structure, with a maximum size equal to the total number of vertices in the graph, to implement DFS traversal.
ALGORITHM:
//Initially all vertices are unmarked (i.e. 0); after visiting they are marked (i.e. 1)
DFS(G)
    for each vertex v in V do
        if v is marked with 0
            dfs(v)

dfs(v)
    mark v
    for each vertex w in V adjacent to v do
        if w is marked with 0
            dfs(w)

Example:
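As an illustration, a recursive Python sketch of the DFS described above; the sample graph is assumed.

def dfs(graph, start, visited=None, order=None):
    """Recursive depth-first search; returns vertices in visiting order."""
    if visited is None:
        visited, order = set(), []
    visited.add(start)                 # mark the vertex
    order.append(start)
    for w in graph.get(start, []):     # each vertex adjacent to the current one
        if w not in visited:           # w is still "unvisited"
            dfs(graph, w, visited, order)
    return order

# Assumed example graph as an adjacency list.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A"))    # ['A', 'B', 'D', 'C']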

TOPOLOGICAL SORT

Definition:

A topological sort or topological ordering of a directed acyclic graph is a linear ordering of its vertices such that for every directed edge uv from vertex u to vertex v, u comes before v in the ordering. Precisely, a topological sort is a graph traversal in which each node v is visited only after all its dependencies are visited. A topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph (DAG).
Any DAG has at least one topological ordering, and algorithms are known for constructing a topological ordering of any DAG in linear time. Topological sorting has many applications, especially in ranking problems such as feedback arc set.

For example, the topological sorting order of the graph given below is "4 5 2 3 1 0". One graph may have more than one order of topological sort; for the same graph, another valid order is "5 4 2 3 1 0". A vertex with in-degree 0 is taken as the first vertex in the topological sort.

Algorithm:
• Compute the in-degree of every vertex of the given graph G.
• Find a vertex 'u' having in-degree 0 and output it next in the order. If no such vertex is found, there must be a cycle and the vertices cannot be ordered, so stop the sort process and exit.
• Remove the vertex 'u' and all its outgoing edges (u,v) from the graph G.
• Update the in-degrees of the remaining vertices.
• Repeat steps 2 to 4 until all the vertices are processed (see the sketch below).
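The following Python sketch implements the in-degree based procedure above (often called Kahn's algorithm); the graph data and names are assumptions.

from collections import deque

def topological_sort(graph):
    """In-degree based topological sort; graph maps a vertex to its list of successors."""
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    queue = deque(v for v in graph if indegree[v] == 0)   # vertices with in-degree 0
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)                                   # output u
        for w in graph[u]:                                # remove edges (u, w)
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    if len(order) != len(graph):                          # a cycle prevents a full ordering
        raise ValueError("Graph has a cycle; topological sort is not possible")
    return order

# Assumed DAG matching the worked example that follows.
graph = {"V1": ["V2", "V3", "V4"], "V2": ["V4", "V5"], "V3": [], "V4": ["V3", "V5"], "V5": []}
print(topological_sort(graph))   # ['V1', 'V2', 'V4', 'V3', 'V5']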

Example:

• Compute the in-degrees of all the nodes in the graph.
V1: 0
V2: 1
V3: 2
V4: 2
V5: 2
• Find a vertex with in-degree 0, that is V1.
• Output V1 and put it in first place in the topological sort order, remove V1, and update the in-degrees of the remaining vertices.
V2: 0
V3: 1
V4: 1
V5: 2
• Now the vertex with in-degree 0 is V2. Output V2, put it into the topological sort order next to V1, remove V2 and update the in-degrees of the remaining vertices.
V3: 1
V4: 0
V5: 1
• Now the vertex with in-degree 0 is V4. Output V4, put it into the topological sort order next to V2, remove V4 and update the in-degrees of the remaining vertices.
V3: 0
V5: 0
• Now the vertices with in-degree 0 are V3 and V5; we can follow any order here, outputting either V3 or V5 first and putting the selected vertex into the topological sort order next to V4. Let us take V3 first, remove V3, and finally remove V5.
So the topological sorting order of the graph is: V1, V2, V4, V3, V5

But by taking V5 first and then V3 in step 6, another topological sort order would be: V1, V2, V4, V5, V3

Complexity Analysis:
Time Complexity: O(V+E).
The DFS-based version of topological sort is simply DFS with an extra stack, so its time complexity is the same as that of DFS, which is O(V+E); the in-degree based procedure above also runs in O(V+E).
Topological Sort Example:
Consider the following directed acyclic graph-

For this graph, the following 4 different topological orderings are possible-
• 123456

• 123465
• 132456
• 132465

Applications of Topological Sort:


Few important applications of topological sort are-
• Scheduling jobs from the given dependencies among jobs
• Instruction Scheduling
• Determining the order of compilation tasks to perform in make
files
• Data Serialization
STRONGLY CONNECTED COMPONENTS

Definition:

A strongly connected component is a portion of a directed graph in which there is a path from each vertex to every other vertex. It is applicable only to directed graphs.

Example:
Let us take the graph below.

Initial graph
The strongly connected components of the above graph are:

Kosaraju's Algorithm:
Kosaraju's Algorithm is based on the depth-first search algorithm. Three steps are involved.

• Perform a depth-first search on the whole graph.
Let us start from vertex-0, visit all of its child vertices, and mark the visited vertices as done. If a vertex leads to an already visited vertex, then push this vertex onto the stack.
For example: starting from vertex-0, go to vertex-1, vertex-2, and then to vertex-3. Vertex-3 leads to the already visited vertex-0, so push the source vertex (i.e. vertex-3) onto the stack.

DFS on the graph

Go to the previous vertex (vertex-2) and visit its child vertices, i.e. vertex-4, vertex-5, vertex-6 and vertex-7 sequentially. Since there is nowhere to go from vertex-7, push it onto the stack.

Go to the previous vertex (vertex-6) and visit its child vertices. But all of its child vertices are visited, so push it onto the stack.

Stacking
Final Stack
Final Stack
• Reverse the original graph.
DFS on reversed graph
• Perform a depth-first search on the reversed graph.
Start from the top vertex of the stack. Traverse through all of its child vertices. Once an already visited vertex is reached, one strongly connected component is formed. For example: pop vertex-0 from the stack. Starting from vertex-0, traverse through its child vertices (vertex-0, vertex-1, vertex-2, vertex-3 in sequence) and mark them as visited. The child of vertex-3 is already visited, so these visited vertices form one strongly connected component.
Start from the top and traverse through all the vertices.
Go to the stack and pop the top vertex if it is already visited. Otherwise, choose the top vertex from the stack and traverse through its child vertices as shown above.

Pop the top vertex if already visited.

Strongly connected component
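A compact Python sketch of the three steps above (DFS with a finish-time stack, edge reversal, DFS on the reversed graph); the example graph and names are assumptions.

def kosaraju_scc(graph):
    """Return the strongly connected components of a directed graph."""
    # Step 1: DFS on the original graph, pushing vertices onto a stack by finish time.
    visited, stack = set(), []
    def fill_order(v):
        visited.add(v)
        for w in graph.get(v, []):
            if w not in visited:
                fill_order(w)
        stack.append(v)                      # push after all descendants are done

    for v in graph:
        if v not in visited:
            fill_order(v)

    # Step 2: reverse every edge of the graph.
    reversed_graph = {v: [] for v in graph}
    for v in graph:
        for w in graph[v]:
            reversed_graph[w].append(v)

    # Step 3: DFS on the reversed graph in stack order; each DFS tree is one SCC.
    visited.clear()
    components = []
    def collect(v, comp):
        visited.add(v)
        comp.append(v)
        for w in reversed_graph[v]:
            if w not in visited:
                collect(w, comp)

    while stack:
        v = stack.pop()
        if v not in visited:
            comp = []
            collect(v, comp)
            components.append(comp)
    return components

# Assumed example: 0->1->2->3->0 is one cycle, 4->5->6->4 another, 6->7 a tail.
graph = {0: [1], 1: [2], 2: [3, 4], 3: [0], 4: [5], 5: [6], 6: [4, 7], 7: []}
print(kosaraju_scc(graph))   # [[0, 3, 2, 1], [4, 6, 5], [7]]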


Kosaraju's Algorithm Complexity:
Kosaraju's algorithm runs in linear time i.e. O(V+E)

Strongly Connected Components Applications


• Vehicle routing applications
• Maps
• Model-checking in formal verification

MINIMUM SPANNING TREES


Definition - spanning tree:
Given a connected and undirected graph, a spanning tree
of that graph is a subgraph that is a tree and connects all the
vertices together. A single graph can have many different
spanning trees.

Definition – minimum spanning tree:


A minimum spanning tree (MST) or minimum weight
spanning tree for a weighted, connected, undirected graph is a
spanning tree with a weight less than or equal to the weight of
every other spanning tree. The weight of a spanning tree is
the sum of weights given to each edge of the spanning tree. A
minimum spanning tree has (V – 1) edges where V is the number
of vertices in the given graph.

Applications of Spanning Tree:

• For implementing routing protocols in computer networks
• In civil network planning to build networks
• For clustering, i.e. grouping similar objects under one category and distinguishing them from other categories

Applications of Minimum Spanning Tree:

• In designing telecommunication networks and water supply networks
• For finding paths in a map
• In electrical grid systems

Minimum Spanning Tree Algorithms:


There are various algorithms in computer science that help us find the minimum spanning tree for a weighted graph. Some of these algorithms are: Prim's Algorithm and Kruskal's Algorithm.

PRIM'S ALGORITHM

Definition:

Prim's Algorithm also uses the greedy approach to find the minimum spanning tree. In Prim's Algorithm we grow the spanning tree from a starting position: at each step we add a vertex to the growing spanning tree.

Algorithm:
Prim(G)
//Prim's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
VT ← {v0}   //the set of tree vertices can be initialized with any vertex
ET ← ∅
for i ← 1 to |V| − 1 do
    find a minimum-weight edge e∗ = (v∗, u∗) among all the edges (v, u)
        such that v is in VT and u is in V − VT
    VT ← VT ∪ {u∗}
    ET ← ET ∪ {e∗}
return ET

Example:
In Prim’s Algorithm, we will start with an arbitrary node (it
doesn’t matter which one) and mark it. In each iteration we will
mark a new vertex that is adjacent to the one that we have
already marked. As a greedy algorithm, Prim’s algorithm will
select the cheapest edge and mark the vertex. So we will simply
choose the edge with weight 1. In the next iteration we have
three options, edges with weight 2, 3 and 4. So, we will select
the edge with weight 2 and mark the vertex. Now again we have
three options, edges with weight 3, 4 and 5. But we can’t choose
edge with weight 3 as it is creating a cycle. So we will select the
edge with weight 4 and we end up with the minimum spanning
tree of total cost 7 (= 1 + 2 +4).

Time Complexity:
The time complexity of Prim's Algorithm is O((V+E) log V), because each vertex is inserted in the priority queue only once and an insertion into a priority queue takes logarithmic time.
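A heap-based Python sketch corresponding to the pseudocode above; the weighted graph is an assumption chosen to reproduce the cost-7 example.

import heapq

def prim_mst(graph, start):
    """Prim's algorithm; graph maps a vertex to a list of (neighbor, weight) pairs."""
    in_tree = {start}                       # VT: vertices already in the tree
    mst_edges, total = [], 0
    heap = [(w, start, u) for u, w in graph[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, v, u = heapq.heappop(heap)       # cheapest edge leaving the tree
        if u in in_tree:                    # would create a cycle, skip it
            continue
        in_tree.add(u)                      # VT <- VT U {u*}
        mst_edges.append((v, u, w))         # ET <- ET U {e*}
        total += w
        for nxt, w2 in graph[u]:
            if nxt not in in_tree:
                heapq.heappush(heap, (w2, u, nxt))
    return mst_edges, total

# Assumed graph with edges A-B:1, B-C:2, A-C:3, A-D:4, C-D:5.
graph = {
    "A": [("B", 1), ("C", 3), ("D", 4)],
    "B": [("A", 1), ("C", 2)],
    "C": [("B", 2), ("A", 3), ("D", 5)],
    "D": [("A", 4), ("C", 5)],
}
print(prim_mst(graph, "A"))   # ([('A', 'B', 1), ('B', 'C', 2), ('A', 'D', 4)], 7)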

KRUSKAL'S ALGORITHM

Definition:

Kruskal's Algorithm builds the spanning tree by adding edges one by one into a growing spanning tree. Kruskal's algorithm follows the greedy approach: in each iteration it finds the edge with the least weight and adds it to the growing spanning tree.

Algorithm:
Kruskal(G)
//Kruskal's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
sort E in nondecreasing order of the edge weights w(ei1) ≤ . . . ≤ w(ei|E|)
ET ← ∅; ecounter ← 0   //initialize the set of tree edges and its size
k ← 0                  //initialize the number of processed edges
while ecounter < |V| − 1 do
    k ← k + 1
    if ET ∪ {eik} is acyclic
        ET ← ET ∪ {eik}; ecounter ← ecounter + 1
return ET

Checking whether two vertices are connected could be done using DFS: start from the first vertex and check whether the second vertex gets visited. But DFS would make the time complexity large, as each check has an order of O(V+E), where V is the number of vertices and E is the number of edges. So the best solution is the "Disjoint Sets" (union-find) data structure.

Example:

In Kruskal's algorithm, at each iteration we will select the edge with the lowest weight. So, we will start with the lowest
weighted edge first i.e., the edges with weight 1. After that we
will select the second lowest weighted edge i.e., edge with
weight 2. Notice these two edges are totally disjoint. Now, the
next edge will be the third lowest weighted edge i.e., edge with
weight 3, which connects the two disjoint pieces of the graph.
Now, we are not allowed to pick the edge with weight 4, that
will create a cycle and we can’t have any cycles. So we will
select the fifth lowest weighted edge i.e., edge with weight 5.
Now the other two edges will create cycles so we will ignore
them. In the end, we end up with a minimum spanning tree with
total cost 11 (= 1 + 2 + 3 + 5).

Time Complexity:
In Kruskal's algorithm, the most time-consuming operation is sorting the edges, which takes O(E log E) = O(E log V) time; the Disjoint-Set operations take O(E log V) as well, so the overall time complexity of the algorithm is O(E log V).
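A Python sketch of Kruskal's algorithm with a simple Disjoint-Set (union-find) structure, matching the pseudocode above; the vertex and edge data are assumptions chosen to reproduce the cost-11 example.

def kruskal_mst(vertices, edges):
    """Kruskal's algorithm; edges is a list of (weight, u, v) tuples."""
    parent = {v: v for v in vertices}       # Disjoint-Set forest

    def find(x):                            # find the root, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst_edges, total = [], 0
    for w, u, v in sorted(edges):           # nondecreasing order of edge weights
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding the edge keeps the tree acyclic
            parent[ru] = rv                 # union the two components
            mst_edges.append((u, v, w))
            total += w
            if len(mst_edges) == len(vertices) - 1:
                break
    return mst_edges, total

# Assumed graph whose MST has total cost 11 (= 1 + 2 + 3 + 5).
vertices = ["A", "B", "C", "D", "E"]
edges = [(1, "A", "B"), (2, "C", "D"), (3, "B", "C"), (4, "A", "C"),
         (5, "D", "E"), (6, "B", "D"), (7, "C", "E")]
print(kruskal_mst(vertices, edges))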

SINGLE SOURCE SHORTEST PATH ALGORITHMS
The shortest path problem is about finding a path between two vertices in a graph such that the total sum of the edge weights is minimum.

Some of the algorithms used for finding shortest paths are:


• Dijkstra’s algorithm
• Bellman-Ford algorithm
DIJKSTRA'S ALGORITHM

Definition:

It solves the single-source shortest-paths problem: for a given vertex called the source in a weighted connected graph, find the shortest paths to all its other vertices. It works only for connected graphs, and it requires non-negative edge weights. The algorithm predominantly follows the greedy approach, repeatedly making the locally optimal choice.
Dijkstra's algorithm, published in 1959, is named after its discoverer Edsger Dijkstra, who was a Dutch computer scientist.

Algorithm:
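As an illustrative sketch, a possible Python version of Dijkstra's algorithm using a binary-heap priority queue; the graph data and names are assumptions.

import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph maps a vertex to a list of (neighbor, weight) pairs."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)          # closest not-yet-finalized vertex
        if d > dist[u]:                     # stale heap entry, skip it
            continue
        for v, w in graph[u]:               # relax every edge leaving u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Assumed weighted graph with non-negative edge weights.
graph = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}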

Example - Algorithm Demonstration:


Time Complexity Analysis:
Following are the cases for calculating the time complexity of Dijkstra's Algorithm.
Case 1 - When the graph G is represented using an adjacency matrix. Since the implementation contains two nested for loops, each of complexity O(n), the complexity of Dijkstra's algorithm is O(n²). Please note that n here refers to the total number of vertices in the given graph.
Case 2 - When the graph G is represented using an adjacency list. The time complexity in this scenario reduces to O((V+E) log V), where E represents the number of edges and V represents the number of vertices in the graph.

Applications of Dijkstra's Algorithm:

• Traffic information systems use Dijkstra's Algorithm for tracking destinations from a given source location.
• Open Shortest Path First (OSPF), an Internet-based routing protocol, uses Dijkstra's Algorithm for finding the best route from a source router to other routers in the network.
• It is used by telephone and cellular networks for routing management.
• It is also used by Geographic Information Systems (GIS), such as Google Maps, for finding the shortest path from point A to point B.

BELLMAN-FORD ALGORITHM
Definition:

The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is similar to Dijkstra's algorithm, but it can work with graphs in which edges can have negative weights. The Bellman-Ford algorithm uses the bottom-up approach.

Algorithm:
Input: Graph and a source vertex src.
Output: Shortest distance to all vertices from src. If there is a negative weight cycle, then shortest distances are not calculated; the negative weight cycle is reported.

bellmanFordAlgorithm(G, s)          //G is the graph and s is the source vertex
    for each vertex V in G
        dist[V] <- infinite         //dist is the distance
        prev[V] <- NULL             //prev is the previous vertex
    dist[s] <- 0
    for each vertex V in G
        for each edge (u, v) in G
            temporaryDist <- dist[u] + edgeweight(u, v)
            if temporaryDist < dist[v]
                dist[v] <- temporaryDist
                prev[v] <- u
    for each edge (U, V) in G       //one more pass to detect a negative cycle
        if dist[U] + edgeweight(U, V) < dist[V]
            Error: Negative Cycle Exists
    return dist[], prev[]

Working of Algorithm:
Like other dynamic programming problems, the algorithm calculates the shortest paths in a bottom-up manner. It first calculates the shortest distances which have at most one edge in the path. Then, it calculates the shortest paths with at most 2 edges, and so on. After the i-th iteration of the outer loop, the shortest paths with at most i edges are calculated. There can be at most |V| - 1 edges in any simple path, which is why the outer loop runs |V| - 1 times. The idea is, assuming that there is no negative weight cycle, if we have calculated shortest paths with at most i edges, then an iteration over all edges guarantees to give the shortest paths with at most (i+1) edges.
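A runnable Python sketch of this edge-relaxation procedure; the vertex list and edge list are assumptions.

def bellman_ford(vertices, edges, source):
    """Bellman-Ford; edges is a list of (u, v, weight). Raises on a negative cycle."""
    dist = {v: float("inf") for v in vertices}
    prev = {v: None for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):      # |V| - 1 passes over all edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]:       # relax edge (u, v)
                dist[v] = dist[u] + w
                prev[v] = u
    for u, v, w in edges:                   # one more pass to detect a negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("Negative weight cycle exists")
    return dist, prev

# Assumed graph with one negative edge but no negative cycle.
vertices = ["A", "B", "C", "D"]
edges = [("A", "B", 4), ("A", "C", 5), ("B", "D", 3), ("C", "B", -2), ("C", "D", 4)]
dist, prev = bellman_ford(vertices, edges, "A")
print(dist)   # {'A': 0, 'B': 3, 'C': 5, 'D': 6}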
Example:
Consider the weighted graph below.

Choose path value 0 for the source vertex and infinity for
all other vertices.

If the new calculated path length is less than the previous path length, go to the source vertex's neighboring edge and relax the path length of the adjacent vertex.

This procedure must be repeated V-1 times, where V is the total number of vertices. This happens because, in the worst-case scenario, a vertex's path length may need to be shortened in each of these passes before it reaches its final value.

As a result, after V-1 iterations, you find the new path lengths and can determine whether or not the graph has a negative cycle.

DYNAMIC PROGRAMMING
Definition:

Dynamic Programming (commonly referred to as DP) is an algorithmic technique for solving a problem by recursively breaking it down into simpler subproblems and using the fact that the optimal solution to the overall problem depends upon the optimal solutions to its individual subproblems.
The technique was developed by Richard Bellman in the 1950s. A DP algorithm solves each subproblem just once and then remembers its answer, thereby avoiding re-computation of the answer for the same subproblem every time. It is the most powerful design technique for solving optimization related problems.

Example:
For example, if we write a simple recursive solution for Fibonacci numbers, we get exponential time complexity; if we optimize it by storing the solutions of the subproblems (memoization), the time complexity reduces to linear.
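A minimal Python sketch contrasting the plain recursive solution with a memoized dynamic-programming version (illustrative only):

from functools import lru_cache

def fib_naive(n):
    """Plain recursion: exponential time, because subproblems are recomputed."""
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    """Memoized recursion: each subproblem is solved once, so the time is linear."""
    if n <= 1:
        return n
    return fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(40))   # 102334155, computed almost instantly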

ALL-PAIRS SHORTEST PATHS – WARSHALL'S ALGORITHM
Transitive closure definition: The transitive closure of a
directed graph with n vertices can be defined as the n × n
boolean matrix T = {tij}, in which the element in the ith row and
the jth column is 1 if there exists a nontrivial path (i.e., directed
path of a positive length) from the ith vertex to the jth vertex;
otherwise, tij is 0.

All-pairs shortest paths algorithms return the shortest paths between every pair of vertices in the graph.

Warshall’s Algorithm definition:


Warshall's algorithm is used to determine the transitive
closure of a directed graph or all paths in a directed graph by
using the adjacency matrix. For this, it generates a sequence of
n matrices. Where, n is used to describe the number of vertices.
R(0), ..., R(k-1), R(k), ... , R(n)
Each of these matrices provides certain information about directed paths in the digraph. Specifically, the element r(k)ij in the ith row and jth column of matrix R(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to 1 if and only if there exists a directed path of positive length from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k. Thus, the series starts with R(0), which does not allow any intermediate vertices in its paths; hence, R(0) is nothing other than the adjacency matrix of the digraph.

ALGORITHM Warshall(A[1..n, 1..n])
//Implements Warshall's algorithm for computing the transitive closure
//Input: The adjacency matrix A of a digraph with n vertices
//Output: The transitive closure of the digraph
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
return R(n)
Working of algorithm:

Time efficiency is only O(n³).
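A direct Python transcription of the pseudocode above; the adjacency matrix used for the demonstration is an assumption.

def warshall(adjacency):
    """Warshall's algorithm: transitive closure of a digraph given as a 0/1 matrix."""
    n = len(adjacency)
    R = [row[:] for row in adjacency]       # R(0) is the adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # R[i][j] stays 1, or becomes 1 if both R[i][k] and R[k][j] are 1
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# Assumed digraph: vertex 0 -> 1, 1 -> 3, 3 -> 0, 3 -> 2.
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)   # rows of the transitive closure matrix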

ALL-PAIRS SHORTEST PATHS – FLOYD'S ALGORITHM

The Floyd–Warshall algorithm is an example of dynamic programming. It was published in its current form by Robert Floyd in 1962. Floyd's Algorithm finds the shortest path between every pair of vertices in the network.
Floyd’s algorithm computes the distance matrix of a
weighted graph with n vertices through a series of n × n
matrices:
D(0), . . . , D(k−1), D(k), . . . , D(n).
Each of these matrices contains the lengths of shortest
paths with certain constraints on the paths considered for the
matrix in question. Specifically, the element d(k)ij in the ith
row and the jth column of matrix D(k) (i, j = 1, 2, . . . , n, k =
0, 1, . . . , n) is equal to the length of the shortest path among
all paths from the ith vertex to the jth vertex with each
intermediate vertex, if any, numbered not higher than k. In particular, the series starts with D(0), which does not allow any intermediate vertices in its paths; hence, D(0) is simply the
weight matrix of the graph. The last matrix in the series, D(n),
contains the lengths of the shortest paths among all paths that
can use all n vertices as intermediate and hence is nothing
other than the distance matrix being sought.

ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths' lengths
D ← W   //is not necessary if W can be overwritten
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min{D[i, j], D[i, k] + D[k, j]}
return D
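A Python transcription of the pseudocode above; INF marks missing edges, and the weight matrix used here is an assumption.

def floyd(weights):
    """Floyd's algorithm: all-pairs shortest path lengths from a weight matrix."""
    n = len(weights)
    D = [row[:] for row in weights]         # D(0) is the weight matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # allow vertex k as an intermediate vertex
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

INF = float("inf")
# Assumed weight matrix of a 4-vertex digraph with no negative-length cycle.
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
for row in floyd(W):
    print(row)   # rows of the distance matrix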
Working of algorithm:

Example:

MATRIX MULTIPLICATION USING DYNAMIC PROGRAMMING

The first step is to create a matrix where the number of rows and
columns equals the number of vertices and then to populate it
with initial data. For each vertex that can be reached by one
hop, you will populate it with the edge weight. If a vertex cannot
be reached with one hop, you will populate it with infinity and if
a vertex is trying to reach itself, you will populate it with zero.
Let's walk through the algorithm to populate the initial matrix. We'll start by populating all the zeros. The zeros will be down the diagonal. Why? Looking at the first entry at A1(1,1), you're trying to go from vertex 1 to vertex 1. What's the weight of that? Zero. The same occurs for (2,2), (3,3), and (4,4).

Let's examine A1(1,2) next. You can get from vertex 1 to 2 in one hop. The edge weight is 10.

Next, we'll look at A1(1,3). Since you cannot get to vertex 3 from vertex 1 in one hop, you enter infinity.

Let's fill in the rest of the fields.
From 1 to 4, A1(1,4) = 5
From 2 to 1, A1(2,1) = ∞
From 2 to 3, A1(2,3) = ∞
From 2 to 4, A1(2,4) = 9
From 3 to 1, A1(3,1) = -2
From 3 to 2, A1(3,2) = 4
From 3 to 4, A1(3,4) = ∞
From 4 to 1, A1(4,1) = ∞
From 4 to 2, A1(4,2) = -3
From 4 to 3, A1(4,3) = 1
We start by trying to obtain the value for A2(1,1). If we follow the for loop, k will be equal to 1, then 2, then 3 and finally 4, while the values of i and j do not change.

Next, plug in the values for those entries.

The minimum value is 0. So, we take the value of zero and plug it into A2(1,1).

Next, we'll examine A2(1,2), but this time we'll do it visually.


What we did in the previous example is go through the row and
the column corresponding to the entry. Since we looked at entry
(1,1), we added the values from the first row and the first
column and then got the minimum of those entries. Let’s start
off by highlighting the first row and second column since we’re
working on entry (1,2).
If we walk through the algorithm we add (1,1) to (1,2), (1,2) to
(2,2), (1,3) to (3,2) and (1,4) to (4,2).

The minimum is 2 so we update A2(1,2) with 2.


Let's fill out the rest of the cells visually as well for the first row. A2(1,3) is next.

The minimum is 6 so we update A2(1,3) with 6. A2(1,4) is next.

The minimum is 5 so we update A2(1,4) with 5. A2(2,1) is next.

The value is still infinity, so we update A2(2,1) with ∞. A2(2,2) is next.


For the remainder of the iteration in constructing A2, I'll list the items and you can verify them by following through the matrix A1.

The final product looks like the following:

We keep repeating the procedure until we're able to observe one of two occurrences:
• The entries from the previous matrix to the current matrix don't change.
• There is a negative value in the diagonal. This indicates a negative cycle and the values will decrease indefinitely.
What exactly is the A2 matrix? It's the result of modified matrix multiplication of two A1 matrices. The next matrix to find is A4. That will be accomplished by multiplying two A2 matrices. Let's run through the entire process.
Since the matrix changed from the previous version, we're still not finished. Time to move to A8.

Since there are no changes between A4 and A8, no further action is necessary.
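A hedged Python sketch of the procedure this section describes: the one-hop matrix is repeatedly combined with itself using a "min-plus" (modified) matrix multiplication, doubling the number of allowed edges each time (A1 -> A2 -> A4 -> A8 ...), until the matrix stops changing or a negative diagonal entry reveals a negative cycle. The initial matrix below is an assumption chosen to match the worked example above.

INF = float("inf")

def min_plus_multiply(X, Y):
    """One step of modified matrix multiplication: result[i][j] = min over k of X[i][k] + Y[k][j]."""
    n = len(X)
    return [[min(X[i][k] + Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def all_pairs_by_squaring(A1):
    """Repeatedly square the one-hop matrix until it stabilises or a negative cycle shows up."""
    A = A1
    while True:
        nxt = min_plus_multiply(A, A)       # paths with twice as many allowed edges
        if nxt == A:                        # no change: the shortest paths are found
            return A
        if any(nxt[i][i] < 0 for i in range(len(A))):
            raise ValueError("Negative cycle detected")   # diagonal went negative
        A = nxt

# One-hop matrix A1 assumed to match the worked example above.
A1 = [[0,   10,  INF, 5],
      [INF, 0,   INF, 9],
      [-2,  4,   0,   INF],
      [INF, -3,  1,   0]]
for row in all_pairs_by_squaring(A1):
    print(row)   # rows of the final all-pairs shortest-path matrix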
