DAA Unit-III


UNIT III
Dynamic Programming
4.1 The General Method
4.2 Warshall’s Algorithm
4.3 Floyd’s Algorithm for the All-Pairs Shortest Paths Problem
4.4 Single-Source Shortest Paths
4.5 General Weights 0/1 Knapsack
4.6 The Traveling Salesperson problem.
4.1 The General Method
Definition
Dynamic programming (DP) is a general algorithm design technique for solving
problems with overlapping sub-problems. This technique was invented by the American
mathematician Richard Bellman in the 1950s.
Key Idea
The key idea is to save answers of overlapping smaller sub-problems to avoid re-
computation.
Dynamic Programming Properties
• An instance is solved using the solutions for smaller instances.
• The solutions for a smaller instance might be needed multiple times, so store their
results in a table.
• Thus each smaller instance is solved only once.
• Additional space is used to save time.
Dynamic Programming vs. Divide & Conquer
LIKE divide & conquer, dynamic programming solves problems by combining solutions
to sub-problems. UNLIKE divide & conquer, sub-problems are NOT independent in
dynamic programming.

Divide & Conquer:
1. Partitions a problem into independent smaller sub-problems.
2. Does not store solutions of sub-problems. (Identical sub-problems may arise, so the
   same computations are performed repeatedly.)
3. Top-down algorithms: logically progress from the initial instance down to the
   smallest sub-instances via intermediate sub-instances.

Dynamic Programming:
1. Partitions a problem into overlapping sub-problems.
2. Stores solutions of sub-problems, thus avoiding re-calculation of the same quantity.
3. Bottom-up algorithms: the smallest sub-problems are explicitly solved first, and
   their results are used to construct solutions to progressively larger sub-instances.


Dynamic Programming vs. Divide & Conquer: EXAMPLE


Computing Fibonacci Numbers

1. Using standard recursive formula:

F(n) = 0                    if n = 0
F(n) = 1                    if n = 1
F(n) = F(n-1) + F(n-2)      if n > 1

Algorithm F(n)
// Computes the nth Fibonacci number recursively by using its definition
// Input: A non-negative integer n
// Output: The nth Fibonacci number
if n == 0 || n == 1 then
    return n
else
    return F(n-1) + F(n-2)

Algorithm F(n): Analysis


• Is too expensive as it has repeated calculation of smaller Fibonacci numbers.
• Exponential order of growth.

[Recursion tree for F(n): F(n) calls F(n-1) and F(n-2); F(n-1) in turn calls F(n-2) and
F(n-3), F(n-2) calls F(n-3) and F(n-4), and so on, so the same sub-problems are
recomputed many times.]

2. Using Dynamic Programming:


Algorithm F(n)
// Computes the nth Fibonacci number by using dynamic programming method
// Input: A non-negative integer n
// Output: The nth Fibonacci number
A[0] ← 0
A[1] ← 1
for i ← 2 to n do
    A[i] ← A[i-1] + A[i-2]
return A[n]

Algorithm F(n): Analysis


• Since it caches previously computed values, it avoids repeated computation of the
same sub-instances and thus saves time
• Linear order of growth
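
As a concrete illustration (a Python sketch, not part of the original notes), the
bottom-up pseudocode above translates directly into the following function:

def fib(n):
    # Computes the nth Fibonacci number bottom-up, caching results in a table.
    if n < 2:
        return n
    a = [0] * (n + 1)                  # a[i] will hold F(i)
    a[1] = 1
    for i in range(2, n + 1):
        a[i] = a[i - 1] + a[i - 2]     # each sub-problem is solved exactly once
    return a[n]

print(fib(10))                         # 55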


Rules of Dynamic Programming


1. OPTIMAL SUB-STRUCTURE: An optimal solution to a problem contains
optimal solutions to sub-problems
2. OVERLAPPING SUB-PROBLEMS: A recursive solution contains a "small" number of
distinct sub-problems repeated many times
3. BOTTOM UP FASHION: Computes the solution in a bottom-up fashion in the
final step

Three basic components of Dynamic Programming solution


The development of a dynamic programming algorithm must have the following three
basic components
1. A recurrence relation
2. A tabular computation
3. A backtracking procedure

Example problems that can be solved using the Dynamic Programming method

1. Computing binomial co-efficient
2. Compute the longest common subsequence
3. Warshall‘s algorithm for transitive closure
4. Floyd‘s algorithm for all-pairs shortest paths
5. Some instances of difficult discrete optimization problems, such as the knapsack
problem and the traveling salesperson problem
4.2 Warshall’s Algorithm
Some useful definitions:
• Directed Graph: A graph in which every edge is directed is called a directed graph
or digraph
• Adjacency matrix: The adjacency matrix A = {aij} of a directed graph is the
boolean matrix that has
o 1 - if there is a directed edge from ith vertex to the jth vertex
o 0 - Otherwise
• Transitive Closure: The transitive closure of a directed graph with n vertices can be
defined as the n-by-n matrix T = {tij}, in which the element in the ith row (1 ≤ i ≤ n)
and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path (i.e., a
directed path of positive length) from the ith vertex to the jth vertex; otherwise
tij is 0. The transitive closure provides reachability information about a digraph.
Computing Transitive Closure:
• We can perform a DFS/BFS starting at each vertex (a short sketch is given below).
• A traversal starting at the ith vertex gives information about the vertices reachable
from the ith vertex.
• Drawback: this method traverses the same graph several times.
• Efficiency: O(n(n + m)).
• Alternatively, we can use dynamic programming: Warshall's algorithm.
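
The traversal-based approach in the first bullet above can be sketched as follows (an
illustrative Python sketch, not from the original notes; the function name and the
adjacency-list representation are assumptions):

def transitive_closure_dfs(adj_list):
    # Reachability by a DFS from every vertex; with adjacency lists this is O(n(n+m)).
    n = len(adj_list)
    t = [[0] * n for _ in range(n)]

    def dfs(start, v):
        for w in adj_list[v]:
            if not t[start][w]:
                t[start][w] = 1      # w is reachable from start by a nontrivial path
                dfs(start, w)

    for v in range(n):
        dfs(v, v)
    return t

# Digraph of the worked example later in this section: A->C, B->A, B->D, D->B
# (vertices numbered A=0, B=1, C=2, D=3)
print(transitive_closure_dfs([[2], [0, 3], [], [1]]))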
Underlying idea of Warshall’s algorithm:


• Let A denote the initial boolean matrix.


• The element R(k)[i, j] in the ith row and jth column of matrix R(k) (k = 0, 1, …, n)
is equal to 1 if and only if there exists a directed path from the ith vertex to the
jth vertex with intermediate vertices, if any, numbered not higher than k.
• Recursive Definition:
• Case 1: A path from vi to vj restricted to using only vertices from
{v1, v2, …, vk} as intermediate vertices does not use vk. Then
R(k)[i, j] = R(k-1)[i, j].
• Case 2: A path from vi to vj restricted to using only vertices from
{v1, v2, …, vk} as intermediate vertices does use vk. Then
R(k)[i, j] = R(k-1)[i, k] AND R(k-1)[k, j].
Combining both cases:
R(k)[i, j] = R(k-1)[i, j] OR (R(k-1)[i, k] AND R(k-1)[k, j])
Algorithm:
Algorithm Warshall(A[1..n, 1..n])
// Computes transitive closure matrix
// Input: Adjacency matrix A
// Output: Transitive closure matrix R
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k-1)[i, j] OR (R(k-1)[i, k] AND R(k-1)[k, j])
return R(n)
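
For illustration, here is a Python sketch of the same algorithm (not part of the
original notes). It updates a single matrix in place, a common space-saving variant
that yields the same transitive closure:

def warshall(adj):
    # Warshall's algorithm: transitive closure of a digraph given as a boolean adjacency matrix.
    n = len(adj)
    r = [row[:] for row in adj]            # R(0) is a copy of the adjacency matrix
    for k in range(n):                     # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# The digraph of the worked example below (vertices A, B, C, D):
a = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
print(warshall(a))   # rows B and D become [1, 1, 1, 1], matching R(4)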
Find Transitive closure for the given digraph using Warshall‘s algorithm.

[Digraph on vertices A, B, C, D with edges A→C, B→A, B→D, and D→B, as recorded in the
adjacency matrix below.]

Solution:
R(0) =      A  B  C  D
        A   0  0  1  0
        B   1  0  0  1
        C   0  0  0  0
        D   0  1  0  0


k = 1 (vertex A can be an intermediate node):
R(1)[2,3] = R(0)[2,3] OR (R(0)[2,1] AND R(0)[1,3]) = 0 OR (1 AND 1) = 1

R(1) =      A  B  C  D
        A   0  0  1  0
        B   1  0  1  1
        C   0  0  0  0
        D   0  1  0  0

k = 2 (vertices A, B can be intermediate nodes):
R(2)[4,1] = R(1)[4,1] OR (R(1)[4,2] AND R(1)[2,1]) = 0 OR (1 AND 1) = 1
R(2)[4,3] = R(1)[4,3] OR (R(1)[4,2] AND R(1)[2,3]) = 0 OR (1 AND 1) = 1
R(2)[4,4] = R(1)[4,4] OR (R(1)[4,2] AND R(1)[2,4]) = 0 OR (1 AND 1) = 1

R(2) =      A  B  C  D
        A   0  0  1  0
        B   1  0  1  1
        C   0  0  0  0
        D   1  1  1  1

k = 3 (vertices A, B, C can be intermediate nodes):
NO CHANGE, so R(3) = R(2).

k = 4 (vertices A, B, C, D can be intermediate nodes):
R(4)[2,2] = R(3)[2,2] OR (R(3)[2,4] AND R(3)[4,2]) = 0 OR (1 AND 1) = 1

R(4) =      A  B  C  D
        A   0  0  1  0
        B   1  1  1  1
        C   0  0  0  0
        D   1  1  1  1

R(4) is the TRANSITIVE CLOSURE for the given graph.

Efficiency:
• Time efficiency is Θ(n³)
• Space efficiency: Requires extra space for separate matrices for recording
intermediate results of the algorithm.

4.3 Floyd’s Algorithm for the All-Pairs Shortest Paths Problem


Some useful definitions:
• Weighted Graph: Each edge has a weight (associated numerical value). Edge
weights may represent costs, distance/lengths, capacities, etc. depending on the
problem.
• Weight matrix: W(i, j) is
o 0 if i = j
o ∞ if there is no edge between i and j
o the weight of the edge if there is an edge between i and j

Problem statement:
Given a weighted graph G = (V, E), the all-pairs shortest-paths problem is to find the
shortest path between every pair of vertices vi, vj ∈ V.
Solution:
A number of algorithms are known for solving All pairs shortest path problem
• Matrix multiplication based algorithm
• Dijkstra's algorithm
• Bellman-Ford algorithm
• Floyd's algorithm


Underlying idea of Floyd’s algorithm:


• Let W denote the initial weight matrix.
• Let D(k) [ i, j] denote cost of shortest path from i to j whose intermediate vertices
are a subset of {1,2,…,k}.
• Recursive Definition
Case 1:
A shortest path from vi to vj restricted to using only vertices from {v1,v2,…,vk}
as intermediate vertices does not use vk. Then
D(k) [ i, j ] = D(k-1) [ i, j ].
Case 2:
A shortest path from vi to vj restricted to using only vertices from {v1, v2, …, vk}
as intermediate vertices does use vk. Then
D(k)[i, j] = D(k-1)[i, k] + D(k-1)[k, j].
We conclude:
D(k)[ i, j ] = min { D(k-1) [ i, j ], D(k-1) [ i, k ] + D(k-1) [ k, j ] }

Algorithm:
Algorithm Floyd(W[1..n, 1..n])
// Implements Floyd‘s algorithm
// Input: Weight matrix W
// Output: Distance matrix of shortest paths‘ length
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min { D[i, j], D[i, k] + D[k, j] }
return D
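
A Python sketch of Floyd's algorithm (an illustration, not from the original notes),
using the same in-place update of a single distance matrix:

from math import inf

def floyd(w):
    # Floyd's algorithm: all-pairs shortest path distances from a weight matrix.
    n = len(w)
    d = [row[:] for row in w]              # D(0) is a copy of the weight matrix
    for k in range(n):                     # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

# Weight matrix of the worked example below (vertices A, B, C):
w = [[0, 2, 5],
     [4, 0, inf],
     [inf, 3, 0]]
print(floyd(w))   # [[0, 2, 5], [4, 0, 9], [7, 3, 0]]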
Example:
Find All pairs shortest paths for the given weighted connected graph using Floyd‘s
algorithm.
[Weighted digraph on vertices A, B, C with edges A→B of weight 2, A→C of weight 5,
B→A of weight 4, and C→B of weight 3, as recorded in the weight matrix below.]

Solution:

D(0) =      A  B  C
        A   0  2  5
        B   4  0  ∞
        C   ∞  3  0


k = 1 (vertex A can be an intermediate node):
D(1)[2,3] = min { D(0)[2,3], D(0)[2,1] + D(0)[1,3] } = min { ∞, 4 + 5 } = 9

D(1) =      A  B  C
        A   0  2  5
        B   4  0  9
        C   ∞  3  0

k = 2 (vertices A, B can be intermediate nodes):
D(2)[3,1] = min { D(1)[3,1], D(1)[3,2] + D(1)[2,1] } = min { ∞, 3 + 4 } = 7

D(2) =      A  B  C
        A   0  2  5
        B   4  0  9
        C   7  3  0

k = 3 (vertices A, B, C can be intermediate nodes):
NO CHANGE, so D(3) = D(2).

D(3) =      A  B  C
        A   0  2  5
        B   4  0  9
        C   7  3  0

D(3) gives the ALL-PAIRS SHORTEST PATH lengths for the given graph.


4.4 0/1 Knapsack Problem and Memory Functions


Definition:
Given a set of n items of known weights w1,…,wn and values v1,…,vn and a knapsack
of capacity W, the problem is to find the most valuable subset of the items that fit into the
knapsack.
The knapsack problem is an OPTIMIZATION PROBLEM.

Dynamic programming approach to solve knapsack problem


Step 1:
Identify the smaller sub-problems. If items are labeled 1..n, then a sub-problem would be
to find an optimal solution for Sk = {items labeled 1, 2, .. k}

Step 2:
Recursively define the value of an optimal solution in terms of solutions to smaller
problems.
Initial conditions:
V[ 0, j ] = 0 for j ≥ 0
V[ i, 0 ] = 0 for i ≥ 0

Recursive step:
V[i, j] = max { V[i-1, j], vi + V[i-1, j - wi] }     if j - wi ≥ 0
V[i, j] = V[i-1, j]                                  if j - wi < 0
Step 3:
Bottom up computation using iteration
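
The three steps above can be sketched in Python as follows (an illustrative sketch
with hypothetical names, not part of the original notes; it also performs the
trace-back described later in this section to recover the optimal subset):

def knapsack(weights, values, capacity):
    # Bottom-up 0/1 knapsack: returns the maximal value and the chosen items (1-based numbers).
    n = len(weights)
    v = [[0] * (capacity + 1) for _ in range(n + 1)]        # V[i][j] for i = 0..n, j = 0..W
    for i in range(1, n + 1):
        for j in range(1, capacity + 1):
            if j - weights[i - 1] >= 0:                     # item i fits into capacity j
                v[i][j] = max(v[i - 1][j],
                              values[i - 1] + v[i - 1][j - weights[i - 1]])
            else:                                           # item i does not fit
                v[i][j] = v[i - 1][j]
    items, j = [], capacity                                 # trace back the optimal subset
    for i in range(n, 0, -1):
        if v[i][j] != v[i - 1][j]:                          # item i was included
            items.append(i)
            j -= weights[i - 1]
    return v[n][capacity], sorted(items)

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))              # (7, [1, 2]) for the instance below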

Question:
Apply bottom-up dynamic programming algorithm to the following instance of the
knapsack problem Capacity W= 5

Item # Weight (Kg) Value (Rs.)


1 2 3
2 3 4
3 4 5
4 5 6

Solution:
Using dynamic programming approach, we have:


Step 1. Initial conditions: V[0, j] = 0 for all j ≥ 0 and V[i, 0] = 0 for all i ≥ 0
        (row 0 and column 0 of the table are filled with zeros).

Filling row i = 1 (w1 = 2, v1 = 3):
Step 2. j = 1: w1 > j, Case 1 holds: V[1,1] = V[0,1] = 0
Step 3. j = 2: w1 = j, Case 2 holds: V[1,2] = max { V[0,2], 3 + V[0,0] } = max { 0, 3 } = 3
Step 4. j = 3, 4, 5: w1 < j, Case 2 holds: V[1,3] = max { V[0,3], 3 + V[0,1] } = 3;
        similarly V[1,4] = 3 and V[1,5] = 3

Filling row i = 2 (w2 = 3, v2 = 4):
Step 5. j = 1: w2 > j, Case 1 holds: V[2,1] = V[1,1] = 0
Step 6. j = 2: w2 > j, Case 1 holds: V[2,2] = V[1,2] = 3
Step 7. j = 3: w2 = j, Case 2 holds: V[2,3] = max { V[1,3], 4 + V[1,0] } = max { 3, 4 } = 4
Step 8. j = 4: w2 < j, Case 2 holds: V[2,4] = max { V[1,4], 4 + V[1,1] } = max { 3, 4 } = 4
Step 9. j = 5: w2 < j, Case 2 holds: V[2,5] = max { V[1,5], 4 + V[1,2] } = max { 3, 7 } = 7

Filling row i = 3 (w3 = 4, v3 = 5):
Step 10. j = 1, 2, 3: w3 > j, Case 1 holds: V[3,1] = 0, V[3,2] = 3, V[3,3] = 4
Step 11. j = 4: w3 = j, Case 2 holds: V[3,4] = max { V[2,4], 5 + V[2,0] } = max { 4, 5 } = 5
Step 12. j = 5: w3 < j, Case 2 holds: V[3,5] = max { V[2,5], 5 + V[2,1] } = max { 7, 5 } = 7

Filling row i = 4 (w4 = 5, v4 = 6):
Step 13. j = 1, 2, 3, 4: w4 > j, Case 1 holds: V[4,1] = 0, V[4,2] = 3, V[4,3] = 4, V[4,4] = 5
Step 14. j = 5: w4 = j, Case 2 holds: V[4,5] = max { V[3,5], 6 + V[3,0] } = max { 7, 6 } = 7

Each step fills the corresponding entry V[i, j]; the completed table is:

V[i,j]   j=0  1  2  3  4  5
i=0        0  0  0  0  0  0
  1        0  0  3  3  3  3
  2        0  0  3  4  4  7
  3        0  0  3  4  5  7
  4        0  0  3  4  5  7

Maximal value is V [ 4, 5 ] = 7/-

What is the composition of the optimal subset?


The composition of the optimal subset is found by tracing back the computations
for the entries in the table.


Step 1. V[4, 5] = V[3, 5], so ITEM 4 is NOT included in the subset.
Step 2. V[3, 5] = V[2, 5], so ITEM 3 is NOT included in the subset.
Step 3. V[2, 5] ≠ V[1, 5], so ITEM 2 is included in the subset.
Step 4. Since item 2 (weight 3 kg) is included, the remaining capacity of the knapsack
        is 5 - 3 = 2 kg. V[1, 2] ≠ V[0, 2], so ITEM 1 is included in the subset.
Step 5. Since item 1 (weight 2 kg) is included, the remaining capacity of the knapsack
        is 2 - 2 = 0 kg.

Optimal subset: { item 1, item 2 }
Total weight: 5 kg (2 kg + 3 kg)
Total profit: 7/- (3/- + 4/-)


Efficiency:
• Running time of Knapsack problem using dynamic programming algorithm is:
O( n * W )
• Time needed to find the composition of an optimal solution is: O( n + W )
Memory function

• Memory function combines the strength of top-down and bottom-up approaches


• It solves ONLY sub-problems that are necessary and does it ONLY ONCE.

The method:
• Works in a top-down manner.
• Maintains a table as in the bottom-up approach.
• Initially, all the table entries are initialized with a special "null" symbol to
indicate that they have not yet been calculated.
• Whenever a new value needs to be calculated, the method checks the corresponding
entry in the table first:
• If the entry is NOT "null", it is simply retrieved from the table.
• Otherwise, it is computed by the recursive call, and the result is then recorded in
the table.

Algorithm:

Algorithm MFKnap(i, j)
// Input: Item number i and remaining knapsack capacity j
// Output: The value V[i, j] of an optimal subset of the first i items fitting into capacity j
if V[i, j] < 0
    if j < Weights[i]
        value ← MFKnap(i-1, j)
    else
        value ← max { MFKnap(i-1, j),
                      Values[i] + MFKnap(i-1, j - Weights[i]) }
    V[i, j] ← value
return V[i, j]
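
A Python sketch of the memory-function method under the same conventions (an
illustration, not from the original notes; -1 plays the role of the "null" marker and
the parameter names are assumptions):

def mf_knapsack(weights, values, capacity):
    # Memory-function (top-down) 0/1 knapsack: computes only the entries that are needed, each once.
    n = len(weights)
    v = [[0] * (capacity + 1)]                              # row 0 is all zeros
    v += [[0] + [-1] * capacity for _ in range(n)]          # column 0 is 0, the rest are "null" (-1)

    def mfknap(i, j):
        if v[i][j] < 0:                                     # entry not yet computed
            if j < weights[i - 1]:
                value = mfknap(i - 1, j)
            else:
                value = max(mfknap(i - 1, j),
                            values[i - 1] + mfknap(i - 1, j - weights[i - 1]))
            v[i][j] = value                                 # record the result in the table
        return v[i][j]

    return mfknap(n, capacity)

print(mf_knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))           # 7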

Example:
Apply memory function method to the following instance of the knapsack problem
Capacity W= 5

Item # Weight (Kg) Value (Rs.)


1 2 3
2 3 4
3 4 5
4 5 6

Solution:
Using memory function approach, we have:


Step 1. Initially, all table entries are initialized with a special "null" symbol to
        indicate that they have not yet been calculated; here null is represented by
        the value -1. Row 0 and column 0 are initialized to 0.

Step 2. MFKnap(4, 5) calls MFKnap(3, 5) and 6 + MFKnap(3, 0);
        MFKnap(3, 5) calls MFKnap(2, 5) and 5 + MFKnap(2, 1);
        MFKnap(2, 5) calls MFKnap(1, 5) and 4 + MFKnap(1, 2);
        MFKnap(1, 5) = max { MFKnap(0, 5), 3 + MFKnap(0, 3) } = max { 0, 3 + 0 } = 3,
        so V[1, 5] = 3 is recorded.

Step 3. MFKnap(1, 2) = max { MFKnap(0, 2), 3 + MFKnap(0, 0) } = max { 0, 3 + 0 } = 3,
        so V[1, 2] = 3 is recorded.

Step 4. MFKnap(2, 5) = max { MFKnap(1, 5), 4 + MFKnap(1, 2) } = max { 3, 4 + 3 } = 7,
        so V[2, 5] = 7 is recorded.

Step 5. MFKnap(2, 1) calls MFKnap(1, 1), which calls MFKnap(0, 1) = 0; hence
        V[1, 1] = 0 and V[2, 1] = 0 are recorded, and
        MFKnap(3, 5) = max { MFKnap(2, 5), 5 + MFKnap(2, 1) } = max { 7, 5 + 0 } = 7,
        so V[3, 5] = 7 is recorded.

Step 6. MFKnap(3, 0) = 0, so
        MFKnap(4, 5) = max { MFKnap(3, 5), 6 + MFKnap(3, 0) } = max { 7, 6 + 0 } = 7,
        and V[4, 5] = 7 is recorded.

The table after the computation (entries that were never needed remain -1):

V[i,j]   j=0  1  2  3  4  5
i=0        0  0  0  0  0  0
  1        0  0  3 -1 -1  3
  2        0  0 -1 -1 -1  7
  3        0 -1 -1 -1 -1  7
  4        0 -1 -1 -1 -1  7

The composition of the optimal subset is found by tracing back the computations for
the entries in the table, as was done for the earlier knapsack problem.

Conclusion:
Optimal subset: { item 1, item 2 }

Total weight is: 5kg (2kg + 3kg)


Total profit is: 7/- (3/- + 4/-)

Efficiency:
• Time efficiency is the same as for the bottom-up algorithm: O(n * W) + O(n + W)
• Only a constant-factor gain is achieved by using the memory function
• Less space efficient than a space-efficient version of the bottom-up algorithm
