
CSCE 411: Design and Analysis of Algorithms

More Dynamic Programming Examples


Fall 2014

(Based on Introduction to Algorithms, 2nd and 3rd Eds., by Cormen, Leiserson, Rivest and Stein.)

1 Assembly Line Scheduling


Consider the following situation:

• There are two assembly lines, each with n stations, for manufacturing some product.

• The time required at station i is not necessarily the same on both assembly lines.

• There is a time cost to switching between assembly lines.

In the following example, there are three stations on each line, the time to get from the entry to
line 1 is e1, the time to do station 1 on line 1 is a11, etc., the time to transfer from line 1 to line 2
between stations 1 and 2 is t11, etc., and the time to get from station 3 on line 1 to the exit is x1.
Similarly for line 2.

[Diagram: the entry splits to the two lines. Line 1 has entry time e1, station times a11, a12, a13, and exit time x1; line 2 has entry time e2, station times a21, a22, a23, and exit time x2. Between consecutive stations, transfer edges t11, t12 lead from line 1 to line 2, and t21, t22 lead from line 2 to line 1.]

Question: What is the fastest way to assemble the product (to get from the entry to the exit)?
In a brute force approach, we could consider all possibilities (e.g., use station i on line j, for
i = 1, . . . , n and j = 1, 2). But there are 2^n possible solutions, so this is not feasible.
Instead, let’s use dynamic programming.

1. Recursive expression for the solution. Let’s consider the fastest way to get from the entry
through station j on line 1. One possibility is that we get to this station via station j − 1 on
the same line (line 1), in which case the best we can do is to get to station j − 1 on line 1 as
fast as possible, and then spend a_{1,j} time at station j on line 1. The other possibility is that
we come via station j − 1 on the other line (line 2), in which case the best we can do is to get
to station j − 1 on line 2 as fast as possible, then change to line 1 at a cost of t_{2,j−1}, and then
spend a_{1,j} time at station j on line 1. We then take the faster of these two possibilities.

• Let C[i, j] be the fastest time to get from the entry through station j on line i, j = 1, . . . , n
and i = 1, 2.
• The goal (desired final answer) is:

min(C[1, n] + x1 , C[2, n] + x2 ).

• Formula for C:
Basis: C[i, 1] = ei + a_{i,1}, for i = 1, 2.
Recursion: For j > 1,
C[i, j] = min(C[i, j − 1] + a_{i,j}, C[i′, j − 1] + t_{i′,j−1} + a_{i,j})
where i′ is the “other” line, i.e., if i = 1, then i′ = 2, and if i = 2, then i′ = 1.

C   |    1      |  2  | . . . |   n
1   | e1 + a11  |     | . . . | (goal)
2   | e2 + a21  |     | . . . | (goal)

2. Determine Dependencies Among Subproblems. C[i, j] depends on C[i, j − 1] and C[i′, j − 1].
3. Determine Order for Solving Subproblems. C[1, 1], C[2, 1], C[1, 2], C[2, 2], . . . , C[1, n], C[2, n],
i.e., column-major order of the matrix C.
Writing the pseudocode is left as an exercise to the reader.
4. Running time: There are Θ(n) entries to compute and each one takes constant time, so the
total time is Θ(n).
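Since the pseudocode is left as an exercise, here is one possible runnable Python sketch of the recurrence above (the function name and the 0-indexed argument layout are illustrative choices, not from the notes):

```python
def fastest_assembly(e, a, t, x):
    """Fastest time from entry to exit through a two-line assembly.

    e[i]    : entry time onto line i                 (i = 0, 1)
    a[i][j] : time at station j on line i            (j = 0, ..., n-1)
    t[i][j] : cost to transfer off line i after station j (j = 0, ..., n-2)
    x[i]    : exit time from line i
    """
    n = len(a[0])
    # Basis: C[i][0] = e_i + a_{i,1}
    C = [[e[0] + a[0][0]], [e[1] + a[1][0]]]
    for j in range(1, n):
        for i in (0, 1):
            other = 1 - i
            # Either stay on line i, or arrive from the other line and pay the transfer cost.
            C[i].append(min(C[i][j - 1] + a[i][j],
                            C[other][j - 1] + t[other][j - 1] + a[i][j]))
    # Goal: min(C[1, n] + x1, C[2, n] + x2)
    return min(C[0][n - 1] + x[0], C[1][n - 1] + x[1])
```

Filling the 2 × n table in column-major order, as in step 3, gives the Θ(n) running time from step 4.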

2 Floyd-Warshall All-Pairs Shortest Path Algorithm


For this algorithm we will use the adjacency matrix representation of the graph. Suppose we are
given a weighted graph G = (V, E, w). Let n = |V |. G is represented with an n × n matrix A whose
(i, j) entry is

         w(i, j)   if (i, j) ∈ E
a_ij =   ∞         if i ≠ j and (i, j) ∉ E
         0         if i = j
Goal: compute an n × n matrix D whose (i, j) entry is
dij = shortest distance from i to j (minimum weight path, not fewest edges).
(See textbook for how to compute the actual shortest paths.)
The Floyd-Warshall algorithm uses dynamic programming to solve this problem.

1. Recursive expression for the solution. For k = 0, . . . , n, let D^k be the matrix whose elements
are

d^k_{ij} = shortest distance from i to j, when all intermediate vertices are in {1, 2, . . . , k}

Example:

[Graph on vertices 1, 2, 3, 4: edges 1 → 2 (weight 5), 2 → 3 (weight 2), 2 → 4 (weight 10), 3 → 4 (weight 3).]
• d^0_{14} = ∞, since there is no edge from 1 to 4.
• d^1_{14} = ∞, since letting vertex 1 be an intermediate does not help us get from 1 to 4.
• d^2_{14} = 15, since the path 1-2-4 has weight 15 (we are allowed to use any subset of {1, 2}
as intermediates).
• d^3_{14} = 10, since the path 1-2-3-4 has weight 10 (we are allowed to use any subset of
{1, 2, 3} as intermediates).
• d^4_{14} = 10: we are allowed to use any of the four vertices as intermediates, but this does
not result in any path better than 1-2-3-4.

The goal (desired final answer) is D^n. In this situation, all vertices may be used as intermediate
vertices. Next we derive the formula for D^k, k = 0, . . . , n:
Basis: D^0 = A. The adjacency matrix A gives the minimum weight of “paths” from i to j
with no intermediate vertices (i.e., there must be an edge from i to j).
Recursion: D^k has the entries

d^k_{ij} = min( d^{k−1}_{ij} , d^{k−1}_{ik} + d^{k−1}_{kj} )

Why? Consider vertices i and j and a path between them with shortest distance d^{k−1}_{ij}, i.e., a
minimum-weight path over all paths with intermediate nodes in {1, . . . , k − 1}.

[Figure: a path from i to j of weight d^{k−1}_{ij}; all intermediate nodes are in {1, . . . , k − 1}.]

Now consider k as a possible intermediate vertex.

[Figure: besides the old path of weight d^{k−1}_{ij}, there is a candidate route from i to k (weight d^{k−1}_{ik}) and then from k to j (weight d^{k−1}_{kj}); all intermediate nodes on each piece are in {1, . . . , k − 1}.]

Take whichever path from i to j has smaller weight: the old path (just using intermediate
vertices from 1 through k − 1) or the new path which consists of pasting together (a) a
minimum-weight path from i to k using intermediate vertices from 1 through k − 1 and (b) a
minimum-weight path from k to j using intermediate vertices from 1 through k − 1.

2. Determine Dependencies Among Subproblems. D^0 is needed for D^1, D^1 is needed for D^2, . . .,
D^{n−1} is needed for D^n.
3. Determine Order for Solving Subproblems: In linear order, D^0, D^1, . . . , D^n. Pseudocode:

Input: A // adjacency matrix for graph G


1. D^0 := A
2. for k := 1 to n do   /* compute D^k using D^{k-1} */
3.    for i := 1 to n do
4.       for j := 1 to n do
5.          d_{ij}^k := min(d_{ij}^{k-1}, d_{ik}^{k-1} + d_{kj}^{k-1})
6. return D^n

4. Running Time:
• Step 1 takes Θ(n^2) time
• Steps 2–4 consist of 3 nested loops, for a total of Θ(n^3) iterations
• Step 5, the inner body of the 3 nested loops, takes constant time
• Step 6 takes constant time
Total is Θ(n^3) time.
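The pseudocode translates directly into Python. This sketch keeps only the previous and current matrices rather than the whole sequence of n + 1 matrices (the function name and the 0-indexing of vertices are illustrative choices):

```python
INF = float("inf")

def floyd_warshall(A):
    """All-pairs shortest distances for an n x n adjacency matrix A,
    where A[i][j] is the edge weight, INF if there is no edge, and 0
    on the diagonal. Vertices are 0-indexed here."""
    n = len(A)
    prev = [row[:] for row in A]                        # D^0 := A
    for k in range(n):                                  # allow vertex k as an intermediate
        curr = [[min(prev[i][j], prev[i][k] + prev[k][j])
                 for j in range(n)]
                for i in range(n)]                      # compute D^{k+1} from D^k
        prev = curr
    return prev
```

Each pass builds the next matrix from the previous one, exactly as in line 5 of the pseudocode.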
5. Problem-Specific Optimizations. We can reduce the space usage from Θ(n^3) to Θ(n^2) by just
using two matrices, previous and current. In fact, we can even improve to just one matrix.
Example:

[Graph on vertices 1, 2, 3: edges 1 → 2 (weight 4), 1 → 3 (weight 11), 2 → 1 (weight 6), 2 → 3 (weight 2), 3 → 1 (weight 3).]

            | 0  4  11 |         | 0  4  11 |         | 0  4  6 |         | 0  4  6 |
D^0 = A =   | 6  0   2 | , D^1 = | 6  0   2 | , D^2 = | 6  0  2 | , D^3 = | 5  0  2 |
            | 3  ∞   0 |         | 3  7   0 |         | 3  7  0 |         | 3  7  0 |
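The one-matrix improvement works because, during iteration k, the entries in row k and column k cannot change: any update of D[i][k] would go through D[k][k] = 0 and leave the value unchanged. A Python sketch under that observation (function name illustrative):

```python
INF = float("inf")

def floyd_warshall_inplace(D):
    """One-matrix Floyd-Warshall: overwrites the n x n matrix D
    (edge weights, INF for non-edges, 0 on the diagonal) with
    all-pairs shortest distances, using Theta(n^2) space total.

    Safe to update in place: in iteration k, D[i][k] and D[k][j]
    cannot change, since an update would add D[k][k] = 0.
    """
    n = len(D)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```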

Sometimes, instead of wanting to know the shortest path between two vertices, we just need
to know whether there exists a path between them. That is, we want to compute the transitive
closure of the graph. Formally, the transitive closure of G = (V, E) is defined as G∗ = (V, E ∗ ),
where E ∗ = {(i, j) : there exists a path from i to j in G}.
One approach: Add some weights to the edges, do an APSP algorithm, and then check for each
i and j if the answer is finite (there exists a path) or infinite (there is no path).
A slightly more direct version of this idea is to modify Floyd-Warshall so that
• the D^k matrices are boolean: d^k_{ij} = 1 if there is a path from i to j whose intermediate nodes
are in {1, 2, . . . , k}, and 0 otherwise
• the calculation in line 5 becomes: d^k_{ij} := d^{k−1}_{ij} OR (d^{k−1}_{ik} AND d^{k−1}_{kj}).
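A minimal Python sketch of this boolean modification (assuming, as is conventional, that every vertex is taken to reach itself; the function name is illustrative):

```python
def transitive_closure(adj):
    """Boolean Floyd-Warshall: reach[i][j] is True iff there is a
    path from i to j. adj is an n x n boolean adjacency matrix;
    each vertex is taken to reach itself."""
    n = len(adj)
    reach = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # line 5 of the pseudocode, with OR/AND in place of min/+
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach
```

This is the same triple loop as before, so the running time is still Θ(n^3), but bitwise operations on packed rows can speed it up in practice.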
