ADA Practicals - Greedy, Dynamic, Graph

The document covers implementing various greedy algorithms and dynamic programming problems. It includes the fractional knapsack problem, activity selection problem, job sequencing problem, Prim's algorithm, Kruskal's algorithm, the making change problem, the 0-1 knapsack problem, the longest common subsequence problem, chained matrix multiplication, and breadth first search. For each problem or algorithm it provides a high-level description and pseudo-code, and then asks the reader to implement the algorithm in the required programming language.


3: Implement the following greedy algorithms

3.1 Fractional Knapsack Problem


Given a set of items, each with a weight and a value, determine a subset of items to
include in a collection so that the total weight is less than or equal to a given limit and
the total value is as large as possible. In fractional knapsack, the items can be broken
in order to maximize the profit.

Algorithm:

Input: an integer n, positive values wi and vi for 1 ≤ i ≤ n, and a positive value W.

Output: n values xi such that 0 ≤ xi ≤ 1, Σ(i=1..n) xi·wi ≤ W, and Σ(i=1..n) xi·vi is maximized.

(1) Sort the n objects from large to small based on the ratios vi/wi . We assume the
arrays w[1..n] and v[1..n] store the respective weights and values after sorting.

(2) initialize array x[1..n] to zeros.

(3) weight = 0; i = 1

(4) while (i ≤ n and weight < W) do

(4.1) if weight + w[i] ≤ W then x[i] = 1

(4.2) else x[i] = (W – weight) / w[i]

(4.3) weight = weight + x[i] * w[i]

(4.4) i++

Implementation:
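One possible implementation, sketched in Python (the function name and list-based inputs are illustrative; adapt to the language required for the practical):

```python
def fractional_knapsack(weights, values, capacity):
    """Greedy fractional knapsack following the steps above.
    Returns (maximum total value, fraction taken of each item)."""
    n = len(weights)
    # Step 1: order items by value/weight ratio, largest first.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    x = [0.0] * n          # Step 2: fractions, initially zero
    weight = 0.0           # Step 3: weight packed so far
    total = 0.0
    for i in order:        # Step 4
        if weight + weights[i] <= capacity:
            x[i] = 1.0                                # 4.1: take the whole item
        else:
            x[i] = (capacity - weight) / weights[i]   # 4.2: take a fraction
        weight += x[i] * weights[i]                   # 4.3
        total += x[i] * values[i]
        if weight >= capacity:                        # knapsack is full
            break
    return total, x
```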
3.2 Activity Selection Problem

We are given a set of events, each with a start time and a finish time, and we need to
produce a subset of these events such that no two events intersect (that is, have
overlapping times) and the number of scheduled events is as large as possible.
Algorithm:

Input: Events: a set of intervals (si, fi), where si is the start time and fi is the finish
time.
Solution: A subset S of Events.
Constraint: No two events in S can intersect (start time exclusive). That is, for all pairs of
intervals (si, fi), (sj, fj) in S with i ≠ j, either fi ≤ sj or fj ≤ si.
Objective: Maximize the number of scheduled events, i.e. maximize the size of the
set S.
Algorithm Activity_Selection(S[1…n], F[1…n], n)

1. Sort the input activities by increasing finish time: f1 ≤ f2 ≤ . . . ≤ fn

2. i = 1
3. A = {i}
4. j = 1
5. for i = 2 to n
6. {
7.     if S[i] ≥ F[j]
8.     {
9.         A = A ∪ {i}
10.        j = i
11.    }
12. }
13. Return set A

Implementation:
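A minimal Python sketch of the pseudo-code above (0-based indices; the function name and list-based inputs are illustrative):

```python
def activity_selection(starts, finishes):
    """Greedy activity selection: returns the indices of a maximum-size
    set of mutually compatible activities (0-based)."""
    n = len(starts)
    # Step 1: sort activity indices by increasing finish time.
    order = sorted(range(n), key=lambda i: finishes[i])
    chosen = [order[0]]            # steps 2-4: take the earliest finisher
    last = order[0]                # index of the last selected activity (j)
    for i in order[1:]:            # steps 5-12
        if starts[i] >= finishes[last]:   # compatible: starts after last finishes
            chosen.append(i)
            last = i
    return chosen
```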
3.3 Job Sequencing / Scheduling Problem
Given a list of jobs where every job has a deadline and an associated profit earned if the
job is finished before its deadline. Every job takes a single unit of time, so the
minimum possible deadline for any job is 1. The problem is to maximize total
profit when only one job can be scheduled at a time.

Algorithm:

Job_Scheduling(jobs[1...n], n)
{
    Sort jobs in decreasing order of profit
    Initialize result[1...n] to zero
    Initialize slot[1...n] to false
    for (i = 1 to n) {
        // try the latest free slot at or before the job's deadline
        for (j = jobs[i].deadline down to 1) {
            if (slot[j] is false) {
                result[j] = i
                slot[j] = true
                break
            }
        }
    }
    Return result
}

Implementation:
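A Python sketch of the greedy schedule (the (deadline, profit) pair representation is an assumption, not part of the assignment):

```python
def job_sequencing(jobs):
    """jobs: list of (deadline, profit) pairs, one unit of time each.
    Returns (total profit, schedule), where schedule[t] is the job index
    run in unit-time slot t+1, or None if that slot is empty."""
    # Greedy: consider jobs in decreasing order of profit.
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][1], reverse=True)
    max_deadline = max(d for d, _ in jobs)
    schedule = [None] * max_deadline     # one unit-time slot per deadline value
    total = 0
    for i in order:
        deadline, profit = jobs[i]
        # Place the job in the latest free slot at or before its deadline.
        for t in range(min(deadline, max_deadline) - 1, -1, -1):
            if schedule[t] is None:
                schedule[t] = i
                total += profit
                break
    return total, schedule
```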
3.4 Prim’s Algorithm
Given a connected weighted undirected graph G, Prim’s algorithm finds a minimum
spanning tree. Prim’s algorithm maintains two disjoint sets of vertices: those already
in the growing spanning tree and those not yet in it. At each step it selects the
minimum-cost edge connecting the growing spanning tree to a vertex outside it and
adds that vertex to the tree. This can be done efficiently using a priority queue: the
edges leaving the growing spanning tree are inserted into the priority queue. To avoid
cycles, Prim’s algorithm marks the vertices that have already been selected and inserts
only unmarked vertices via the priority queue.

Algorithm:

Implementation:
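Since the pseudo-code is left to the reader, here is one possible priority-queue sketch in Python using heapq (the adjacency-list dict representation is an assumption):

```python
import heapq

def prim_mst(adj, start=0):
    """adj: dict mapping each vertex to a list of (weight, neighbour) pairs
    for a connected undirected graph. Returns (total weight, MST edges)."""
    visited = {start}                    # vertices already in the growing tree
    # Priority queue of (weight, tree_end, outside_end) candidate edges.
    pq = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(pq)
    total, edges = 0, []
    while pq and len(visited) < len(adj):
        w, u, v = heapq.heappop(pq)      # cheapest edge leaving the tree
        if v in visited:                 # both ends already in the tree: skip
            continue
        visited.add(v)                   # mark v to avoid cycles
        total += w
        edges.append((u, v, w))
        for w2, x in adj[v]:             # push the new frontier edges
            if x not in visited:
                heapq.heappush(pq, (w2, v, x))
    return total, edges
```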
3.5 Kruskal’s Algorithm
Given a connected, undirected and weighted graph, Kruskal’s algorithm uses the
greedy approach to find a minimum spanning tree. Kruskal’s algorithm treats
every node as an independent tree and connects two trees only via the
lowest-cost edge among all the options available. Kruskal’s algorithm first sorts all
the edges in non-decreasing order of their weight. It then picks the smallest remaining
edge and checks whether it forms a cycle with the spanning forest built so far. If no
cycle is formed the edge is included, otherwise it is discarded; this repeats until no
edge remains for consideration.

Algorithm:

Implementation:
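One possible Python sketch, using a union-find structure for the cycle check (the edge-list representation is an assumption):

```python
def kruskal_mst(n, edges):
    """n vertices (0..n-1); edges: list of (weight, u, v) tuples.
    Returns (total weight, MST edges) using union-find cycle detection."""
    parent = list(range(n))              # each vertex starts as its own tree

    def find(x):                         # find the tree root, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):        # edges in non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                     # different trees: no cycle is formed
            parent[ru] = rv              # union the two trees
            total += w
            mst.append((u, v, w))
    return total, mst
```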
4. Implement the following using the dynamic programming method

4.1 Making Change Problem


The input to the making change problem is a sequence of positive integers [d1, d2, d3,
..., dn] and a target amount T (called N in the algorithm below), where di represents a
coin denomination. Assuming an unlimited supply of coins of each denomination, the
making change problem finds the minimum number of coins required to form the given
amount. An extra step is to recover the exact coins that build up the amount.

Algorithm:
Making_Change_DynamicProg (d[1...n], n, N)
{
    Let C[1...n, 0...N] be a new table
    for (i = 1 to n) {
        C[i, 0] = 0
    }
    for (i = 1 to n) {
        for (j = 1 to N) {
            if (i == 1 and j < d[i]) then C[i, j] = +∞
            else if (i == 1) then C[i, j] = 1 + C[i, j – d[i]]
            else if (j < d[i]) then C[i, j] = C[i – 1, j]
            else C[i, j] = Minimum (C[i – 1, j], 1 + C[i, j – d[i]])
        }
    }
    Return C[n, N]
}

Making_Change_Select_Coins (C[1...n, 0...N], d[1...n], n, N)
{
    i = n
    j = N
    if (C[i, j] == +∞) then Return “No Solution”
    S = {}                            // Solution set, initially empty
    while (i != 0 and j != 0) {
        if (C[i, j] == C[i – 1, j]) { // check out-of-bound conditions while comparing
            i = i – 1                 // MOVE UP IN TABLE C
        }
        else {
            S = S ∪ {d[i]}            // SELECT ONE COIN OF d[i]
            j = j – d[i]              // MOVE LEFT IN TABLE C
        }
    }
    Return S
}
Implementation:
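A Python sketch combining both procedures above (0-based indices; float("inf") plays the role of +∞):

```python
def making_change(d, N):
    """Minimum-coin DP for denominations d[0..n-1] and target amount N.
    Returns (minimum coin count, list of coins used)."""
    n = len(d)
    INF = float("inf")
    C = [[0] * (N + 1) for _ in range(n)]     # C[i][0] = 0 for every row
    for i in range(n):
        for j in range(1, N + 1):
            if i == 0:
                C[i][j] = INF if j < d[0] else 1 + C[0][j - d[0]]
            elif j < d[i]:
                C[i][j] = C[i - 1][j]
            else:
                C[i][j] = min(C[i - 1][j], 1 + C[i][j - d[i]])
    # Trace back through the table to list the coins actually used.
    coins = []
    i, j = n - 1, N
    if C[i][j] == INF:
        return INF, coins                     # no solution
    while j != 0:
        if i > 0 and C[i][j] == C[i - 1][j]:
            i -= 1                            # move up: d[i] not used here
        else:
            coins.append(d[i])                # take one coin of denomination d[i]
            j -= d[i]                         # move left in the table
    return C[n - 1][N], coins
```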
4.2 0-1 Knapsack Problem
In the 0-1 knapsack problem using dynamic programming, we have n items, each with an
associated weight and value (benefit or profit). The objective is to fill the knapsack
with items so as to maximize the total profit without exceeding the weight limit of
the knapsack. Since this is a 0-1 knapsack problem, we either take an entire
item or reject it completely; we cannot break an item to fill the knapsack.

Algorithm:
Algorithm 0-1Knapsack_DynamicProg (w[1...n], v[1...n], n, W)
{
    Let array V[1...n, 0...W] be a new table
    for (i = 1 to n) V[i, 0] = 0
    for (i = 1 to n) {
        for (j = 1 to W) {
            if (i == 1 and j < w[i]) then V[i, j] = 0
            else if (i == 1) then V[i, j] = v[i]
            else if (j < w[i]) then V[i, j] = V[i – 1, j]
            else V[i, j] = Maximum (V[i – 1, j], v[i] + V[i – 1, j – w[i]])
        }
    }
    Return V[n, W]
}

0-1Knapsack_Select_Items (V[1...n, 0...W], w[1...n], v[1...n], n, W)
{
    i = n
    j = W
    S = {}                            // Solution set, initially empty
    while (i != 0 and j != 0) {
        if (V[i, j] == V[i – 1, j]) { // check out-of-bound conditions while comparing
            i = i – 1                 // MOVE UP IN TABLE V
        }
        else {
            S = S ∪ { (w[i], v[i]) }  // SELECT THE i-th ITEM (w[i], v[i])
            j = j – w[i]              // MOVE LEFT IN TABLE V (before changing i)
            i = i – 1                 // MOVE UP IN TABLE V
        }
    }
    Return S
}

Implementation:
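A Python sketch of both procedures (0-based indices; note that in the trace-back, j must be reduced using w[i] before i is decremented):

```python
def knapsack_01(w, v, W):
    """0-1 knapsack DP table plus item trace-back.
    Returns (best total value, indices of the items taken)."""
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n)]
    for i in range(n):
        for j in range(1, W + 1):
            if i == 0:
                V[i][j] = v[0] if j >= w[0] else 0
            elif j < w[i]:
                V[i][j] = V[i - 1][j]
            else:
                V[i][j] = max(V[i - 1][j], v[i] + V[i - 1][j - w[i]])
    # Trace back: reduce j with w[i] before moving up a row.
    items = []
    i, j = n - 1, W
    while j > 0 and i >= 0:
        if i > 0 and V[i][j] == V[i - 1][j]:
            i -= 1                        # item i not taken: move up
        elif i == 0 and V[i][j] == 0:
            break                         # nothing left to take
        else:
            items.append(i)               # item i taken
            j -= w[i]                     # move left, then up
            i -= 1
    return V[n - 1][W], items
```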
4.3 Longest Common Subsequence Problem
A subsequence is a sequence that appears in the same relative order but is not
necessarily contiguous. For example, “abc”, “abg”, “bdf”, “aeg”, “acefg”, etc. are
subsequences of “abcdefg”. The longest common subsequence (LCS) is defined as the
longest subsequence that is common to all the given sequences. Let X = < x1, x2,
x3, …, xm > and Y = < y1, y2, y3, …, yn > be the sequences. To compute the length of
their LCS, the following algorithm is used.

Algorithm:
LCS_Length (X, Y)
{
    m = X.length
    n = Y.length
    let b[1…m, 1…n] and c[0…m, 0…n] be new tables
    for i = 0 to m
        c[i, 0] = 0               // Initialize 0th column
    for j = 1 to n
        c[0, j] = 0               // Initialize 0th row
    for i = 1 to m
        for j = 1 to n
            if X[i] == Y[j]
                c[i, j] = c[i – 1, j – 1] + 1
                b[i, j] = “C”
            else if c[i – 1, j] >= c[i, j – 1]
                c[i, j] = c[i – 1, j]
                b[i, j] = “U”
            else
                c[i, j] = c[i, j – 1]
                b[i, j] = “L”
    Return c[m, n]
}

PRINT-LCS(b[1...m, 1...n], X, i, j)
{
    if i = 0 or j = 0
        then return
    if b[i, j] = “C”
        PRINT-LCS(b, X, i – 1, j – 1)
        print X[i]
    else if b[i, j] = “U”
        PRINT-LCS(b, X, i – 1, j)
    else
        PRINT-LCS(b, X, i, j – 1)
}

Implementation:
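A Python sketch of LCS_Length, with an iterative trace-back in place of the recursive PRINT-LCS:

```python
def lcs(X, Y):
    """Builds the length table c and direction table b for the LCS of
    strings X and Y, then returns (LCS length, the LCS itself)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 / column 0 stay zero
    b = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "C"               # match: came from the diagonal
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "U"               # came from above
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "L"               # came from the left
    # Iterative trace-back, collecting matched characters in reverse.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if b[i][j] == "C":
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif b[i][j] == "U":
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))
```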
4.4 Chained Matrix Multiplication
This is the matrix product parenthesization problem. Suppose we want to multiply three
matrices X, Y, and Z. We could compute the product as (XY)Z or as X(YZ). The order of
multiplication does not affect the final result, but it can affect the running time.
Matrix chain multiplication is the optimization problem of finding the most
efficient way to multiply a given sequence of matrices. The problem is not to actually
perform the multiplications but to decide the sequence of multiplications.

To solve the problem we maintain two tables m and s of size n×n, where n is the number of
matrices given. If matrix Ai has dimensions p(i–1) × p(i), table m (the minimum number of
scalar multiplications) is filled using the recurrence

m[i, j] = 0                                                      if i = j
m[i, j] = min over i ≤ k < j of ( m[i, k] + m[k+1, j] + p(i–1)·p(k)·p(j) )   otherwise

and s[i, j] records the split point k that achieves the minimum.

Algorithm:

Implementation:
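Since the pseudo-code is left to the reader, here is a possible Python sketch of the standard O(n³) dynamic program (the dimension-sequence input p is an assumption: matrix Ai is p[i-1] × p[i]):

```python
def matrix_chain_order(p):
    """p: dimension sequence; matrix A_i is p[i-1] x p[i] for i = 1..n.
    Fills m (minimum scalar multiplications) and s (split points) and
    returns (m[1][n], a fully parenthesized product string)."""
    n = len(p) - 1
    INF = float("inf")
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = INF
            for k in range(i, j):             # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k

    def paren(i, j):                          # rebuild the parenthesization from s
        if i == j:
            return f"A{i}"
        k = s[i][j]
        return f"({paren(i, k)}{paren(k + 1, j)})"

    return m[1][n], paren(1, n)
```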
5. Implement the following algorithms for graphs

5.1 Breadth First Search


Breadth First Search (BFS) is a graph traversal algorithm which starts at a selected
node (the source or starting node) and traverses the graph layer by layer, first exploring
the neighbour nodes (nodes directly connected to the source). BFS assigns
two values to each vertex: a distance and a predecessor.

BFS initializes the distance and predecessor of each vertex to a special value (null).
It starts the search at the source and assigns it a distance of 0. It then visits all the
neighbors of the source, giving each neighbor a distance of 1 and setting its predecessor
to the source. Next it visits all not-yet-visited neighbors of the vertices at distance
1, giving each of these vertices a distance of 2 and setting its predecessor to the
vertex from which it was visited. It keeps going until all vertices reachable from the
source have been visited, always visiting all vertices at distance k from the source
before any vertex at distance k+1. If there is no path from the source vertex to a vertex
v, then v's distance is infinite and its predecessor keeps the same special value as the
source's predecessor.

Algorithm:

Implementation:
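A possible Python sketch using a FIFO queue (the adjacency-list dict is an assumption; None stands in for the special null value):

```python
from collections import deque

def bfs(adj, source):
    """adj: dict mapping each vertex to a list of neighbours.
    Returns (distance, predecessor) dicts; unreachable vertices keep
    None for both values, as in the description above."""
    dist = {u: None for u in adj}        # None plays the role of infinity
    pred = {u: None for u in adj}
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()              # visit vertices in distance order
        for v in adj[u]:
            if dist[v] is None:          # first time v is reached
                dist[v] = dist[u] + 1
                pred[v] = u
                queue.append(v)
    return dist, pred
```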
5.2 Depth First Search
Depth First Search (DFS) is an algorithm for searching all the
vertices of a graph. It traverses the graph in a depthward motion and uses a
stack to remember where to resume the search when a dead end occurs in
any iteration. (The same traversal can also be written recursively, using the call stack.)
The DFS algorithm puts each vertex of the graph into one of two categories: visited and
not visited. The purpose of the algorithm is to mark each vertex as visited while
avoiding cycles. DFS starts by putting any one of the graph's vertices on top of a
stack. It then removes the top item of the stack and adds it to the visited list, builds
the list of that vertex's adjacent nodes, and pushes those which aren't in the
visited list onto the top of the stack. It keeps repeating these steps until the stack is
empty.

Algorithm:

Implementation:
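A possible Python sketch of the stack-based traversal described above (the adjacency-list dict is an assumption):

```python
def dfs(adj, start):
    """Iterative depth-first traversal using an explicit stack.
    adj: dict mapping each vertex to a list of neighbours.
    Returns the vertices in the order they are visited."""
    visited, order = set(), []
    stack = [start]
    while stack:
        u = stack.pop()                  # remove the top item of the stack
        if u in visited:
            continue
        visited.add(u)                   # mark u as visited
        order.append(u)
        # Push unvisited neighbours; reversed() keeps a left-to-right visit order.
        for v in reversed(adj[u]):
            if v not in visited:
                stack.append(v)
    return order
```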
