7 Generalized assignment problem
7.1 INTRODUCTION
The Generalized Assignment Problem (GAP) can be described, using the
terminology of knapsack problems, as follows. Given n items and m knapsacks,
with

   p_ij = profit of item j if assigned to knapsack i,
   w_ij = weight of item j if assigned to knapsack i,
   c_i  = capacity of knapsack i,

assign each item to exactly one knapsack so as to maximize the total profit assigned,
without assigning to any knapsack a total weight greater than its capacity, i.e.

   maximize   z = Σ_{i=1..m} Σ_{j=1..n} p_ij x_ij                    (7.1)

   subject to Σ_{j=1..n} w_ij x_ij ≤ c_i,   i ∈ M = {1, …, m},       (7.2)

              Σ_{i=1..m} x_ij = 1,          j ∈ N = {1, …, n},       (7.3)

              x_ij = 0 or 1,                i ∈ M, j ∈ N,            (7.4)

where

   x_ij = 1 if item j is assigned to knapsack i, 0 otherwise.
The problem is frequently described in the literature as that of optimally assigning
n tasks to m processors (n jobs to m agents, and so on), given the profit p_ij and
the amount of resource w_ij corresponding to the assignment of task j to processor
i, and the total resource c_i available for each processor i.
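To make the model (7.1)-(7.4) concrete, the following Python sketch (the instance data are invented for illustration, not taken from the text) solves a tiny GAP by enumerating every assignment of items to knapsacks:

```python
from itertools import product

def gap_brute_force(p, w, c):
    """Exhaustively solve GAP: assign each item j to exactly one knapsack i,
    maximizing total profit (7.1) subject to the capacities (7.2).
    p[i][j], w[i][j]: profit/weight of item j if assigned to knapsack i.
    Returns (best_profit, best_assignment) or (None, None) if infeasible."""
    m, n = len(c), len(p[0])
    best_profit, best_assignment = None, None
    # assignment[j] = index of the knapsack receiving item j (constraint (7.3))
    for assignment in product(range(m), repeat=n):
        load = [0] * m
        for j, i in enumerate(assignment):
            load[i] += w[i][j]
        if all(load[i] <= c[i] for i in range(m)):
            profit = sum(p[i][j] for j, i in enumerate(assignment))
            if best_profit is None or profit > best_profit:
                best_profit, best_assignment = profit, assignment
    return best_profit, best_assignment

# invented instance: m = 2 knapsacks, n = 3 items
p = [[6, 9, 4], [7, 8, 5]]
w = [[3, 4, 2], [4, 3, 3]]
c = [6, 6]
z, x = gap_brute_force(p, w, c)   # z = 20, assignment (1, 0, 0)
```

The O(m^n) enumeration is of course only viable for toy instances; the algorithms of this chapter exist precisely to avoid it.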
The minimization version of the problem can also be encountered in the
literature: by defining c_ij as the cost required to assign item j to knapsack i,
MINGAP is

   minimize   v = Σ_{i=1..m} Σ_{j=1..n} c_ij x_ij                    (7.5)

   subject to (7.2), (7.3), (7.4).

GAP and MINGAP are equivalent: setting p_ij = −c_ij (or c_ij = −p_ij) for all i ∈ M
and j ∈ N immediately transforms one version into the other. If the numerical data
are restricted to positive integers (as frequently occurs), the transformation can be
obtained as follows. Given an instance of MINGAP, define any integer value t
such that

   t > max_{i∈M, j∈N} {c_ij}                                        (7.6)

and set

   p_ij = t − c_ij   for i ∈ M, j ∈ N.                               (7.7)
From (7.5)-(7.7) we then have

   v = Σ_{i=1..m} Σ_{j=1..n} (t − p_ij) x_ij = t n − Σ_{i=1..m} Σ_{j=1..n} p_ij x_ij,

where, from (7.3), the first term is independent of (x_ij). Hence the solution (x_ij)
of GAP also solves MINGAP. The same method transforms any instance of GAP
into an equivalent instance of MINGAP (by setting c_ij = t − p_ij for i ∈ M, j ∈ N,
with t > max_{i∈M, j∈N} {p_ij}).
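The cost-to-profit transformation (7.6)-(7.7) is easy to check numerically; in this sketch the function name and the data are illustrative:

```python
def mingap_to_gap(cost):
    """Turn a MINGAP cost matrix into an equivalent GAP profit matrix
    via p_ij = t - c_ij with t > max c_ij (eqs. (7.6)-(7.7))."""
    t = max(max(row) for row in cost) + 1
    profit = [[t - cij for cij in row] for row in cost]
    return t, profit

cost = [[4, 2, 7], [3, 6, 5]]
t, profit = mingap_to_gap(cost)

# For any assignment of the n items (one knapsack each, as (7.3) requires),
# total cost + total profit = n * t, so maximizing profit minimizes cost.
assignment = [1, 0, 1]            # item j -> knapsack assignment[j] (arbitrary)
n = len(cost[0])
tot_cost = sum(cost[i][j] for j, i in enumerate(assignment))
tot_profit = sum(profit[i][j] for j, i in enumerate(assignment))
```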
Because of constraints (7.3), an instance of the generalized assignment problem
does not necessarily have a feasible solution. Moreover, even the feasibility
question is NP-complete. In fact, given an instance (w_1, …, w_n) of PARTITION
(see Section 1.2), consider the instance of GAP (or MINGAP) having m = 2,
w_1j = w_2j = w_j and p_1j = p_2j = 1 for j = 1, …, n, and c_1 = c_2 = (1/2) Σ_{j=1..n} w_j.
Deciding whether a feasible solution (of value n) to such an instance exists is an
NP-complete problem, since the answer is yes if and only if the answer to the
instance of PARTITION is yes.
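The construction behind this argument can be sketched as follows (hypothetical helper names; the brute-force feasibility check is only workable for tiny instances):

```python
from itertools import product

def partition_to_gap(weights):
    """Build the GAP instance used in the NP-completeness argument:
    m = 2, w_1j = w_2j = w_j, p_1j = p_2j = 1, c_1 = c_2 = sum(w_j) / 2.
    Returns (p, w, c); assumes the total weight is even."""
    half = sum(weights) // 2
    w = [list(weights), list(weights)]
    p = [[1] * len(weights) for _ in range(2)]
    return p, w, [half, half]

def feasible(w, c):
    """True iff all items can be assigned without exceeding either capacity."""
    n = len(w[0])
    for assignment in product((0, 1), repeat=n):
        load = [0, 0]
        for j, i in enumerate(assignment):
            load[i] += w[i][j]
        if load[0] <= c[0] and load[1] <= c[1]:
            return True
    return False

# {3,1,1,2,2,1}: total 10, and {3,2} vs {1,1,2,1} is a valid partition
p1, w1, c1 = partition_to_gap([3, 1, 1, 2, 2, 1])
yes = feasible(w1, c1)           # True

# {5,1,1,1}: total 8, but no subset sums to 4
p2, w2, c2 = partition_to_gap([5, 1, 1, 1])
no = feasible(w2, c2)            # False
```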
The following version of the problem (LEGAP), instead, always admits a feasible
solution:

   maximize   ẑ = Σ_{i=1..m} Σ_{j=1..n} p̂_ij x_ij                    (7.8)

   subject to (7.2), (7.4) and

              Σ_{i=1..m} x_ij ≤ 1,   j ∈ N.                          (7.9)
LEGAP too is equivalent to GAP. Given any instance of LEGAP, an equivalent
instance of GAP will have an additional knapsack of capacity c_{m+1} = n, with
p_{m+1,j} = 0 and w_{m+1,j} = 1 for j ∈ N, while p_ij = p̂_ij for i ∈ M and j ∈ N.
(Knapsack m + 1 gives no extra profit and always allows a feasible solution, so
z = ẑ.) Conversely, given any instance of GAP, we can define an integer constant
q such that

   q > Σ_{j∈N} max_{i∈M} {p_ij}

and set

   p̂_ij = p_ij + q   for i ∈ M, j ∈ N.

With these profits, any set of n items has a higher value than any set of k < n items.
Hence, by solving LEGAP we obtain the solution for GAP (of value z = ẑ − nq)
if (7.3) is satisfied, or we know that the instance of GAP has no feasible solution
if Σ_{i=1..m} x̂_ij = 0 for some j.
LEGAP is a generalization of the 0-1 multiple knapsack problem (Chapter 6), in
which p_ij = p_j and w_ij = w_j for all i ∈ M and j ∈ N (i.e. the profit and weight of
each item are independent of the knapsack it is assigned to). Lagrangian relaxations
for LEGAP have been studied by Chalmet and Gelders (1977).
The best-known special case of the generalized assignment problem is the Linear
Min-Sum Assignment Problem (or Assignment Problem), which is a MINGAP
with n = m, c_i = 1 and w_ij = 1 for all i ∈ M and j ∈ N (so, because of (7.3),
constraints (7.2) can be replaced by Σ_{j=1..n} x_ij = 1 for i ∈ M). The problem can
be solved in O(n³) time through the classical Hungarian algorithm (Kuhn (1955),
Lawler (1976)); efficient Fortran codes can be found in Carpaneto, Martello and Toth
(1988). The assignment problem, however, is not used in general as a subproblem
in algorithms for the generalized case.
Another special case arises when w_ij = w_j for all i ∈ M and j ∈ N. Implicit
enumeration algorithms for this case have been presented by De Maio and Roveda
(1971) and Srinivasan and Thompson (1973).

Facets of the GAP polytope have been studied by Gottlieb and Rao (1989a,
1989b).
We will suppose, as is usual, that the weights w_ij of any GAP instance are
positive integers. Hence, without loss of generality, we will also assume that

   p_ij and c_i are positive integers,                               (7.10)

   |{i : w_ij ≤ c_i}| ≥ 1   for j ∈ N,                               (7.11)

   c_i ≥ min_{j∈N} {w_ij}   for i ∈ M.                               (7.12)

If assumption (7.10) is violated, (a) fractions can be handled by multiplying
through by a proper factor; (b) knapsacks with c_i ≤ 0 can be eliminated; (c) for
each item j having min_{i∈M} {p_ij} ≤ 0, we can add |min_{i∈M} {p_ij}| + 1 to each p_ij
for i ∈ M and subtract |min_{i∈M} {p_ij}| + 1 from the resulting objective function
value. As is the case for the 0-1 multiple knapsack problem, there is no easy way of
transforming an instance so as to handle negative weights, but all our considerations
easily extend to this case too. If an item violates assumption (7.11) then it cannot
be assigned, so the GAP instance is infeasible. Knapsacks violating assumption
(7.12) can be eliminated from the instance.
In Section 7.2 we introduce various types of relaxations. Exact and approximate
algorithms are described in Sections 7.3 and 7.4, and reduction procedures in Section
7.5. Section 7.6 presents the results of computational experiments.
7.2 RELAXATIONS AND UPPER BOUNDS
The continuous relaxation of GAP, C(GAP), given by (7.1)-(7.3) and

   x_ij ≥ 0,   i ∈ M, j ∈ N,                                         (7.13)

is rarely used in the literature, since it does not exploit the structure of the problem
and tends to give solutions a long way from feasibility.
7.2.1 Relaxation of the capacity constraints
Ross and Soland (1975) have proposed the following upper bound for GAP. First,
constraints (7.2) are relaxed to

   w_ij x_ij ≤ c_i,   i ∈ M, j ∈ N,

and the optimal solution x̄ of the resulting problem is obtained by determining, for
each j ∈ N,

   i(j) = arg max {p_ij : i ∈ M, w_ij ≤ c_i}

and setting x̄_{i(j),j} = 1 and x̄_ij = 0 for all i ∈ M\{i(j)}. The resulting upper bound,
of value

   U_0 = Σ_{j∈N} p_{i(j),j},                                         (7.14)

is then improved as follows. Let

   N_i = {j ∈ N : x̄_ij = 1},           i ∈ M,

   d_i = Σ_{j∈N_i} w_ij − c_i,          i ∈ M,

   M' = {i ∈ M : d_i > 0},

   N' = ∪_{i∈M'} N_i.

Given a set S of numbers, we denote by max_2 S (resp. min_2 S) the second
maximum (resp. minimum) value in S, and by arg max_2 S (resp. arg min_2 S) the
corresponding index. Since M' is the set of those knapsacks for which the relaxed
constraint (7.2) is violated,

   q_j = p_{i(j),j} − max_2 {p_ij : i ∈ M, w_ij ≤ c_i},   j ∈ N',

gives the minimum penalty that will be incurred if an item j currently assigned to
a knapsack in M' is reassigned. Hence, for each i ∈ M', a lower bound on the
loss of profit to be paid in order to satisfy constraint (7.2) is given by the solution
to the 0-1 single knapsack problem in minimization form (see Section 2.1), KP'_i
(i ∈ M'), defined by

   minimize   v_i = Σ_{j∈N_i} q_j y_ij

   subject to Σ_{j∈N_i} w_ij y_ij ≥ d_i,

              y_ij = 0 or 1,   j ∈ N_i,

where y_ij = 1 if and only if item j is removed from knapsack i. The resulting Ross
and Soland (1975) bound is thus

   U_1 = U_0 − Σ_{i∈M'} v_i.                                         (7.15)
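A compact sketch of this bound computation follows (the minimization knapsacks KP'_i are solved here by brute-force subset enumeration, adequate for illustration-sized instances; when an item fits into no other knapsack, its full profit is taken as the penalty, an assumption of this sketch):

```python
from itertools import combinations

def ross_soland_bounds(p, w, c):
    """Compute U0 (7.14) and the improved Ross-Soland bound U1 (7.15)."""
    m, n = len(c), len(p[0])
    # i(j): most profitable knapsack that can hold item j on its own
    i_of = [max((i for i in range(m) if w[i][j] <= c[i]),
                key=lambda i: p[i][j]) for j in range(n)]
    U0 = sum(p[i_of[j]][j] for j in range(n))
    U1 = U0
    for i in range(m):
        Ni = [j for j in range(n) if i_of[j] == i]
        d = sum(w[i][j] for j in Ni) - c[i]
        if d <= 0:
            continue                       # capacity (7.2) not violated
        def q(j):                          # penalty for reassigning item j
            alts = [p[k][j] for k in range(m) if k != i and w[k][j] <= c[k]]
            return p[i][j] - max(alts) if alts else p[i][j]
        # cheapest removal whose freed weight covers the excess d (KP'_i)
        v_i = min(sum(q(j) for j in S)
                  for r in range(len(Ni) + 1)
                  for S in combinations(Ni, r)
                  if sum(w[i][j] for j in S) >= d)
        U1 -= v_i
    return U0, U1

# invented instance: knapsack 1 attracts too much weight
p = [[8, 7, 6], [5, 9, 4]]
w = [[5, 4, 3], [4, 3, 4]]
c = [6, 6]
U0, U1 = ross_soland_bounds(p, w, c)      # U0 = 23, U1 = 21
```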
This bound can also be derived from the Lagrangian relaxation L(GAP, λ) of
the problem, obtained by dualizing constraints (7.3) in much the same way as
described in Section 6.2.2 for the 0-1 multiple knapsack problem. In this case too
the relaxed problem,

   maximize   Σ_{i=1..m} Σ_{j=1..n} (p_ij − λ_j) x_ij + Σ_{j=1..n} λ_j

   subject to (7.2), (7.4),

separates into m 0-1 single knapsack problems (KP_i^λ, i = 1, …, m) of the form

   maximize   z_i = Σ_{j=1..n} p̃_ij x_ij

   subject to Σ_{j=1..n} w_ij x_ij ≤ c_i,

              x_ij = 0 or 1,   j ∈ N,

where p̃_ij = p_ij − λ_j, and its solution value is

   z(L(GAP, λ)) = Σ_{i=1..m} z_i + Σ_{j=1..n} λ_j.

It is now easy to see that, by choosing for λ_j the value

   λ̄_j = max_2 {p_ij : i ∈ M, w_ij ≤ c_i},   j ∈ N,

we obtain z(L(GAP, λ̄)) = U_1. In fact, by transforming each KP'_i into an equivalent
maximization form (as described in Section 2.1), and noting that, in each KP_i^λ̄,
p̃_ij ≤ 0 if j ∉ N_i and w_ij ≤ c_i, we have z_i = Σ_{j∈N_i} q_j − v_i for i ∈ M' (and
z_i = Σ_{j∈N_i} q_j for i ∉ M'). Hence, from (7.14) and (7.15),

   z(L(GAP, λ̄)) = Σ_{i∈M} Σ_{j∈N_i} q_j − Σ_{i∈M'} v_i + Σ_{j∈N} λ̄_j = U_0 − Σ_{i∈M'} v_i = U_1.
Example 7.1

Consider the instance of GAP defined by n = 7, m = 2, profit matrix (p_ij),
weight matrix (w_ij) and capacities (c_i) [numerical data illegible in the source].
The initial bound is U_0 = 47. Then we have

   N_1 = {1, 2, 4, 5, 7},   N_2 = {3, 6},

   M' = {1},   N' = {1, 2, 4, 5, 7}.

Solving KP'_1 we obtain v_1 = 2, so the resulting bound is

   U_1 = U_0 − v_1 = 45. □
7.2.2 Relaxation of the semi-assignment constraints

Martello and Toth (1981c) have obtained an upper bound for GAP by removing
constraints (7.3). It is immediate to see that the resulting relaxed problem coincides
with L(GAP, 0), hence it decomposes into a series of 0-1 single knapsack problems,
KP_i^0 (i ∈ M), of the form

   maximize   z_i = Σ_{j=1..n} p_ij x_ij

   subject to Σ_{j=1..n} w_ij x_ij ≤ c_i,

              x_ij = 0 or 1,   j ∈ N.

In this case too, the resulting upper bound, of value

   Ū_0 = Σ_{i=1..m} z_i,                                             (7.16)

can be improved by computing a lower bound on the penalty to be paid in order
to satisfy the violated constraints. Let

   N^> = {j ∈ N : Σ_{i=1..m} x_ij > 1},

   N^0 = {j ∈ N : Σ_{i=1..m} x_ij = 0}

be the sets of those items for which (7.3) is violated, and define

   M_j = {i ∈ M : x_ij = 1}   for all j ∈ N;

we can compute, using any of the methods of Sections 2.2-2.3,

   u^0_ij = upper bound on z_i if x_ij = 0   (j ∈ N^>, i ∈ M_j),

   u^1_ij = upper bound on z_i if x_ij = 1   (j ∈ N^0, i ∈ M),

and determine, for each j ∈ N^0 ∪ N^>, a lower bound l_j on the penalty to be
paid for satisfying (7.3):

   l_j = Σ_{i∈M_j} (z_i − u^0_ij) − max_{i∈M_j} {z_i − u^0_ij}   if j ∈ N^>,

   l_j = min_{i∈M} {z_i − u^1_ij}                                 if j ∈ N^0.   (7.17)

The improved upper bound is thus

   U_2 = Ū_0 − max_{j∈N^0∪N^>} {l_j}.                                (7.18)
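The following sketch computes Ū_0 and U_2 for small instances, solving each KP_i^0 exactly by dynamic programming; the exact re-optimizations stand in for the quick upper bounds u^0_ij, u^1_ij of the text (an exact optimum is itself a valid, tightest choice of upper bound, so this is an assumption of the sketch, not the book's implementation):

```python
def knapsack(profits, weights, cap):
    """0-1 knapsack by DP on used capacity; returns (value, chosen items)."""
    best = {0: (0, frozenset())}           # used capacity -> (value, items)
    for j, (pj, wj) in enumerate(zip(profits, weights)):
        for used in sorted(best, reverse=True):   # descending: no item reuse
            nu = used + wj
            if nu <= cap:
                cand = (best[used][0] + pj, best[used][1] | {j})
                if nu not in best or cand[0] > best[nu][0]:
                    best[nu] = cand
    return max(best.values(), key=lambda t: t[0])

def martello_toth_bounds(p, w, c):
    """U0bar (7.16) from relaxing (7.3), and U2 (7.18) obtained by
    subtracting the largest penalty l_j of (7.17)."""
    m, n = len(c), len(p[0])
    z, sol = [], []
    for i in range(m):
        zi, Si = knapsack(p[i], w[i], c[i])
        z.append(zi)
        sol.append(Si)
    U0bar = sum(z)
    penalties = []
    for j in range(n):
        Mj = [i for i in range(m) if j in sol[i]]
        others = lambda row: [row[k] for k in range(n) if k != j]
        if len(Mj) > 1:                    # j in N^>: drop j from all but one
            drops = []
            for i in Mj:
                u0, _ = knapsack(others(p[i]), others(w[i]), c[i])
                drops.append(z[i] - u0)
            penalties.append(sum(drops) - max(drops))
        elif not Mj:                       # j in N^0: force j into a knapsack
            costs = [z[i] - (knapsack(others(p[i]), others(w[i]),
                                      c[i] - w[i][j])[0] + p[i][j])
                     for i in range(m) if w[i][j] <= c[i]]
            if costs:
                penalties.append(min(costs))
    U2 = U0bar - (max(penalties) if penalties else 0)
    return U0bar, U2

# invented instance: item 3 is taken by both knapsacks in the relaxation
p = [[10, 7, 4], [9, 6, 8]]
w = [[4, 3, 2], [4, 3, 3]]
c = [6, 6]
U0bar, U2 = martello_toth_bounds(p, w, c)   # U0bar = 28, U2 = 24
```

On this toy instance U_2 happens to coincide with the optimal solution value.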
Example 7.2

Consider the instance of GAP defined by n = 5, m = 2, matrices (p_ij), (w_ij)
and capacities (c_i) [numerical data illegible in the source]. The solutions to KP_1^0
and KP_2^0 have values z_1 = 17 and z_2 = 9, so Ū_0 = 26, and

   N^0 = {3},   N^> = {1},   M_1 = {1, 2}.

We compute the u^0_ij and u^1_ij through the Dantzig bound (Section 2.2.1), but, for
the u^1_ij, we skip those items k for which w_ik > c_i − w_ij. We obtain, in particular,
u^1_13 = 13 and u^1_23 = 8. It follows that l_1 = 0 and l_3 = min {17 − 13, 9 − 8} =
min {4, 1} = 1, so the resulting upper bound is

   U_2 = Ū_0 − 1 = 25.

For this instance the Ross-Soland bound initially gives U_0 = 33 and, after the
solution of KP'_1, U_1 = 31. Hence U_0 > U_1 > Ū_0 > U_2. On the other hand,
computing the Martello-Toth bound for Example 7.1 gives Ū_0 = 54 and penalties
l_j with maximum value 5, hence U_2 = 49, i.e. U_1 < U_0 < U_2 < Ū_0. Thus, while
obviously U_0 ≥ U_1 and Ū_0 ≥ U_2, no dominance exists between the other pairs of
these bounds. □
7.2.3 The multiplier adjustment method
Fisher, Jaikumar and Van Wassenhove (1986) have developed an upper bound
based on the Lagrangian relaxation L(GAP, λ) and dominating the bound proposed
by Ross and Soland (1975). Obviously, the continuous and integer solutions of a
knapsack problem may differ; this implies (see Fisher (1981)) that, for the optimal
Lagrangian multiplier λ*,

   z(L(GAP, λ*)) ≤ z(C(GAP));

there is no analytical way, however, to determine λ*. One possibility is the
classical subgradient optimization approach. The novelty of the Fisher-Jaikumar-
Van Wassenhove bound consists of a new technique (multiplier adjustment method)
for determining "good" multipliers. The method starts by setting

   λ_j = max_2 {p_ij : i ∈ M, w_ij ≤ c_i},   j ∈ N;

as shown in Section 7.2.1, the corresponding Lagrangian relaxation produces the
value U_1 of the Ross-Soland bound. Note, in addition, that, with this choice, we
have, for each j ∈ N, p̃_ij (= p_ij − λ_j) > 0 for at most one i ∈ M, so there is an
optimal Lagrangian solution for this λ which satisfies Σ_{i=1..m} x_ij ≤ 1 for all j ∈ N.
If some constraint (7.3) is not satisfied, it is, under certain conditions, possible to
select a j* for which Σ_{i=1..m} x_ij* = 0 and decrease λ_j* by an amount which ensures
that in the new Lagrangian solution Σ_{i=1..m} x_ij* = 1, while Σ_{i=1..m} x_ij ≤ 1 continues to
hold for all other j. This phase is iterated until either the solution becomes feasible
or the required conditions fail.

The following procedure, ADJUST, is an efficient implementation of the
multiplier adjustment method. After the initial solution has been determined, a
heuristic phase attempts to satisfy violated constraints (7.3) through pairs (i, j)
such that p_ij − λ_j = 0. The adjustment phase then considers items j* violating (7.3)
and computes, for i ∈ M, the least decrease Δ_ij* required in λ_j* for item j* to be
included in the optimal solution to KP_i^λ. If an item j* is found for which

(a) min {Δ_1j*, …, Δ_mj*} > 0, and

(b) after decreasing λ_j* by min {Δ_1j*, …, Δ_mj*}, the new Lagrangian solution
satisfies Σ_{i=1..m} x_ij ≤ 1 for all j ∈ N,

then such updating is performed (decreasing the current upper bound value by
min {Δ_1j*, …, Δ_mj*}) and a new heuristic phase is attempted. If no such j*
exists, the process terminates.

The output variables define the upper bound value

   U_3 = Σ_{i=1..m} z_i + Σ_{j=1..n} λ_j;                             (7.19)

if opt = "yes", this value is optimal and (x_ij) gives the corresponding solution.
procedure ADJUST:
input: n, m, (p_ij), (w_ij), (c_i);
output: (λ_j), (Δ_ij), (x_ij), U_3, opt;
begin
   comment: initialization;
   N := {1, …, n};
   M := {1, …, m};
   for i := 1 to m do for j := 1 to n do x_ij := 0;
   for j := 1 to n do λ_j := max_2 {p_ij : i ∈ M, w_ij ≤ c_i};
   U_3 := Σ_{j∈N} λ_j;
   for i := 1 to m do
   begin
      N_i := {j ∈ N : p_ij − λ_j > 0};
      set (x_ij), j ∈ N_i, to the solution to
         (KP_i^λ)  maximize z_i = Σ_{j∈N_i} (p_ij − λ_j) x_ij
                   subject to Σ_{j∈N_i} w_ij x_ij ≤ c_i,
                              x_ij = 0 or 1, j ∈ N_i;
      U_3 := U_3 + z_i
   end;
   opt := "no";
   if Σ_{i∈M} x_ij = 1 for all j ∈ N then opt := "yes"
   else
      repeat
         comment: heuristic phase;
         H := {(i, j) : Σ_{k∈M} x_kj = 0, p_ij − λ_j = 0};
         for each (i, j) ∈ H, in order of decreasing p_ij, do
            if Σ_{k∈M} x_kj = 0 and w_ij + Σ_{k∈N} w_ik x_ik ≤ c_i then x_ij := 1;
         comment: adjustment phase;
         if Σ_{i∈M} x_ij = 1 for all j ∈ N then opt := "yes"
         else
         begin
            J := {j ∈ N : Σ_{i∈M} x_ij = 0};
            found := "no";
            repeat
               let j* be any index in J;
               J := J \ {j*};
               M_j* := {i ∈ M : w_ij* ≤ c_i};
               for each i ∈ M_j* do
               begin
                  determine the solution (y_ij), j ∈ N_i \ {j*}, to
                     (KP'_i)  maximize z'_i = Σ_{j∈N_i\{j*}} (p_ij − λ_j) y_ij
                              subject to Σ_{j∈N_i\{j*}} w_ij y_ij ≤ c_i − w_ij*,
                                         y_ij = 0 or 1, j ∈ N_i \ {j*};
                  Δ_ij* := z_i − (z'_i + (p_ij* − λ_j*))
               end;
               if min {Δ_ij* : i ∈ M_j*} > 0 then
               begin
                  i* := arg min {Δ_ij* : i ∈ M_j*};
                  let (y_i*j), j ∈ N_i*, be the solution found for KP'_i*;
                  if y_i*j + Σ_{k∈M\{i*}} x_kj ≤ 1 for all j ∈ N then
                  begin
                     found := "yes";
                     λ_j* := λ_j* − min {Δ_ij* : i ∈ M_j*};
                     replace row i* of (x_ij) with (y_i*j) and set x_i*j* := 1;
                     z_i* := z'_i* + (p_i*j* − λ_j*);
                     U_3 := U_3 − min {Δ_ij* : i ∈ M_j*}
                  end
               end
            until J = ∅ or found = "yes"
         end
      until opt = "yes" or found = "no"
end.
Example 7.2 (continued)

The initial solution gives U_3 = 16 + (10 + 5) = 31 (= U_1). The heuristic phase
has no effect, hence J = {1, 2}. The first adjustment finds i* = 1: replacing (x_1j)
with (1, 0, 0, 0, 1), condition Σ_{i∈M} x_ij ≤ 1 continues to hold for all j ∈ N, and
the bound decreases to U_3 = 29. At the next iteration J = {4}, and for j* = 4 a
further adjustment gives U_3 = 26. For this instance we thus have U_3 < U_1 (= 31),
but U_3 (= 26) > U_2 (= 25). On the other hand, applying procedure ADJUST
to the instance of Example 7.1, we initially have U_3 = 45; the first adjustment
improves it to 43 and the second to 42 (with two further adjustments producing no
improvement). Hence U_3 = 42. □

7.2.4 The variable splitting method
Jörnsten and Näsberg (1986) have proposed an upper bound based on splitting the
variables. Introducing a copy (y_ij) of the variables, GAP is rewritten as XYGAP:

   maximize   α Σ_{i=1..m} Σ_{j=1..n} p_ij x_ij + β Σ_{i=1..m} Σ_{j=1..n} p_ij y_ij

   subject to Σ_{j=1..n} w_ij x_ij ≤ c_i,   i ∈ M,

              Σ_{i=1..m} y_ij = 1,          j ∈ N,

              x_ij = y_ij,                  i ∈ M, j ∈ N,

              x_ij, y_ij = 0 or 1,          i ∈ M, j ∈ N,

where α, β > 0. The Lagrangian relaxation L(XYGAP, μ), obtained by dualizing
the linking constraints x_ij = y_ij through multipliers μ_ij, decomposes into two
independent problems. The former,

   maximize   Σ_{i=1..m} Σ_{j=1..n} (α p_ij + μ_ij) x_ij

   subject to (7.2) and x_ij = 0 or 1, i ∈ M, j ∈ N,

has the same structure as L(GAP, λ) (Section 7.2.1), hence it separates into m 0-1
single knapsack problems (KP_i^μ, i = 1, …, m) of the form

   maximize   z_i = Σ_{j=1..n} (α p_ij + μ_ij) x_ij

   subject to Σ_{j=1..n} w_ij x_ij ≤ c_i,

              x_ij = 0 or 1,   j ∈ N;

the latter,

   maximize   Σ_{i=1..m} Σ_{j=1..n} (β p_ij − μ_ij) y_ij

   subject to Σ_{i=1..m} y_ij = 1,   j ∈ N,

              y_ij = 0 or 1,   i ∈ M, j ∈ N,

has the same structure as the initial Ross-Soland relaxation (Section 7.2.1), hence
its optimal solution is

   ȳ_{i(j),j} = 1,   ȳ_ij = 0 for i ∈ M\{i(j)},

where

   i(j) = arg max {β p_ij − μ_ij : i ∈ M, w_ij ≤ c_i}.

By solving problems KP_i^μ (i ∈ M), we obtain the solution to L(XYGAP, μ), of
value

   z(L(XYGAP, μ)) = Σ_{i=1..m} z_i + Σ_{j=1..n} (β p_{i(j),j} − μ_{i(j),j}),   (7.28)

hence the upper bound

   U_4 = ⌊z(L(XYGAP, μ)) / (α + β)⌋.                                 (7.29)

Jörnsten and Näsberg (1986) have proved that, for α + β = 1 and for the optimal
Lagrangian multipliers μ*,

   z(L(XYGAP, μ*)) ≤ z(L(GAP, λ*)).

However, there is no analytical way to determine μ*, and the multiplier adjustment
method of Section 7.2.3 does not appear adaptable to XYGAP. Jörnsten and
Näsberg have proposed using a subgradient optimization algorithm to determine a
"good" μ. At each iteration, the current μ_ij is updated by setting μ_ij := μ_ij + t (y_ij −
x_ij) (i ∈ M, j ∈ N), where t is an appropriate positive step.
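One iteration of this scheme can be sketched as follows (brute-force knapsacks, α = β = 1/2, invented data; the update rule is the one just described, everything else is an illustrative implementation choice):

```python
from itertools import combinations
import math

def knap_best(profits, weights, cap):
    """Brute-force 0-1 knapsack: (best value, set of chosen items)."""
    n = len(profits)
    best = (0.0, set())
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            if sum(weights[j] for j in S) <= cap:
                v = sum(profits[j] for j in S)
                if v > best[0]:
                    best = (v, set(S))
    return best

def xygap_iteration(p, w, c, mu, alpha=0.5, beta=0.5, step=1.0):
    """Evaluate L(XYGAP, mu) and apply one subgradient update
    mu_ij += step * (y_ij - x_ij). Returns (U4, new_mu)."""
    m, n = len(c), len(p[0])
    value = 0.0
    x = [[0] * n for _ in range(m)]
    for i in range(m):                     # x-problem: m knapsacks KP_i^mu
        v, S = knap_best([alpha * p[i][j] + mu[i][j] for j in range(n)],
                         w[i], c[i])
        value += v
        for j in S:
            x[i][j] = 1
    y = [[0] * n for _ in range(m)]
    for j in range(n):                     # y-problem: Ross-Soland structure
        i_best = max((i for i in range(m) if w[i][j] <= c[i]),
                     key=lambda i: beta * p[i][j] - mu[i][j])
        y[i_best][j] = 1
        value += beta * p[i_best][j] - mu[i_best][j]
    U4 = math.floor(value / (alpha + beta))
    new_mu = [[mu[i][j] + step * (y[i][j] - x[i][j]) for j in range(n)]
              for i in range(m)]
    return U4, new_mu

p = [[6, 8], [7, 5]]
w = [[3, 4], [4, 3]]
c = [5, 5]
mu = [[0.0, 0.0], [0.0, 0.0]]
U4, mu = xygap_iteration(p, w, c, mu)
```

On this toy instance the split solutions already coincide, so the multipliers stay at zero and the bound U_4 = 15 is tight (the optimum assigns item 1 to knapsack 2 and item 2 to knapsack 1, with profit 15).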
Example 7.2 (continued)

Using α = β = 1/2 and starting with μ_ij = 0 for all i ∈ M, j ∈ N, the x-problem
gives the same solution found for Ū_0 (Section 7.2.2), and the y-problem the same
solution found for U_0. The initial upper bound value is thus

   U_4 = ⌊13 + 16.5⌋ = 29.

Assuming that the initial step is t = 1, the multipliers are updated and the upper
bound becomes U_4 = 28. Further improvements could be obtained by iterating the
procedure. □
7.3 EXACT ALGORITHMS
The most commonly used methods in the literature for the exact solution of GAP
are depth-first branch-and-bound algorithms.

In the Ross and Soland (1975) scheme, upper bound U_1 (see Section 7.2.1)
is computed at each node of the branch-decision tree. The branching variable is
selected through the information determined in computing U_1. In fact, the variable
chosen to separate, x_i*j*, is the one, among those with y_ij = 0 (i ∈ M', j ∈ N_i) in
the optimal solution to the problems KP'_i (i ∈ M'), for which the quantity

   q_j w_ij / (c_i − Σ_{k∈N_i} w_ik y_ik)

is a maximum. This variable represents an item j* which is "well fit" into knapsack
i*, considering both the penalty for reassigning the item and the residual capacity
of the knapsack. Two branches are then generated by imposing x_i*j* = 1 and
x_i*j* = 0.
In the Martello and Toth (1981c) scheme, upper bound min (U_1, U_2) (see Sections
7.2.1, 7.2.2) is computed at each node of the branch-decision tree. In addition, at the
root node, a tighter upper bound on the global solution is determined by computing
min (U_2, U_3) (see Section 7.2.3). The information computed for U_2 determines the
branching as follows. The separation is performed on item

   j* = arg max {l_j : j ∈ N^0 ∪ N^>},

i.e. on the item whose re-assignment is likely to produce the maximum decrease
of the objective function. If j* ∈ N^0, m nodes are generated by assigning
j* to each knapsack in turn (as shown in Figure 7.1(a)); if j* ∈ N^>, with
M_j* = {i_1, i_2, …, i_s}, s − 1 nodes are generated by assigning j* to knapsacks
i_1, …, i_{s−1} in turn, and another node by excluding j* from knapsacks i_1, …, i_{s−1}
(as shown in Figure 7.1(b)). With this branching strategy, m single knapsack
problems KP_i^0 must be solved to compute the upper bound associated with the root
node, but only one new KP_i^0 for each other node of the tree. In fact, if j* ∈ N^0,
imposing x_ij* = 1 requires only the solution of problem KP_i^0, the solutions to
problems KP_k^0 (k ≠ i) being unchanged with respect to the generating node; if
j* ∈ N^>, the strategy is the same as that used in the Martello and Toth (1980a)
algorithm for the 0-1 multiple knapsack problem (see Section 6.4.1), for which
we have shown that the solution of s problems KP_i^0 produces the upper bounds
corresponding to the s generated nodes.

Figure 7.1(a) Branching strategy when j* ∈ N^0

Figure 7.1(b) Branching strategy when j* ∈ N^>
The execution of the above scheme is preceded by a preprocessing phase which: (a)
determines an approximate solution through a procedure, MTHG, described in the
next section; (b) reduces the size of the instance through two procedures, MTRG1
and MTRG2, described in Section 7.5. (Example 7.3 of Section 7.5 illustrates
the branching scheme.) At each decision node, a partial reduction is performed
by searching for unassigned items which can currently be assigned to only one
knapsack. The Fortran implementation of the resulting algorithm (MTG) is included
in the present volume.
In the Fisher, Jaikumar and Van Wassenhove (1986) scheme, upper bound U_3
(Section 7.2.3) is computed at each node of the branch-decision tree. The branching
variable is an x_i*j* corresponding to a w_i*j* which is maximum over all variables
that have not been fixed to 0 or 1 at previous branches. Two nodes are then
generated by fixing x_i*j* = 1 and x_i*j* = 0.

No enumerative scheme has been proposed by Jörnsten and Näsberg (1986).
7.4 APPROXIMATE ALGORITHMS

As seen in Section 7.1, determining whether an instance of GAP (or MINGAP) has
a feasible solution is an NP-complete problem. It follows that, unless P = NP,
these problems admit no polynomial-time approximate algorithm with fixed worst-
case performance ratio, hence also no polynomial-time approximation scheme.

The following polynomial-time algorithm (Martello and Toth, 1981c) provides
approximate solutions to GAP. Let f_ij be a measure of the "desirability" of assigning
item j to knapsack i. We iteratively consider all the unassigned items, and determine
the item j* having the maximum difference between the largest and the second
largest f_ij (i ∈ M); j* is then assigned to the knapsack for which f_ij* is a maximum.
In the second part of the algorithm the current solution is improved through local
exchanges. On output, if feas = "no", no feasible solution has been determined;
otherwise the solution found, of value z^h, is stored in y_j = (knapsack to which item
j is assigned), j = 1, …, n.
procedure MTHG:
input: n, m, (p_ij), (w_ij), (c_i), (f_ij);
output: z^h, (y_j), feas;
begin
   feas := "yes";
   z^h := 0;
   for i := 1 to m do c̄_i := c_i;
   U := {1, …, n};
   comment: initial solution;
   while U ≠ ∅ and feas = "yes" do
   begin
      d* := −∞;
      for each j ∈ U do
      begin
         F_j := {i ∈ M : w_ij ≤ c̄_i};
         if F_j = ∅ then feas := "no"
         else
         begin
            i' := arg max {f_ij : i ∈ F_j};
            if F_j \ {i'} = ∅ then d := +∞
            else d := f_i'j − max_2 {f_ij : i ∈ F_j};
            if d > d* then begin d* := d; i* := i'; j* := j end
         end
      end;
      if feas = "yes" then
      begin
         y_j* := i*;
         z^h := z^h + p_i*j*;
         c̄_i* := c̄_i* − w_i*j*;
         U := U \ {j*}
      end
   end;
   comment: improvement;
   if feas = "yes" then
      for j := 1 to n do
      begin
         A := {i ∈ M \ {y_j} : w_ij ≤ c̄_i};
         if A ≠ ∅ then
         begin
            let p_i'j = max {p_ij : i ∈ A};
            if p_i'j > p_{y_j,j} then
            begin
               z^h := z^h + p_i'j − p_{y_j,j};
               c̄_{y_j} := c̄_{y_j} + w_{y_j,j};
               c̄_i' := c̄_i' − w_i'j;
               y_j := i'
            end
         end
      end
end.
Procedure MTHG can be implemented efficiently by initially sorting in
decreasing order, for each item j, the values f_ij (i ∈ M) such that w_ij ≤ c̄_i (= c_i).
This requires O(nm log m) time, and makes immediately available, at each iteration
of the inner loop, the pointers to the maximum and the second maximum element
of {f_ij : i ∈ F_j}. Hence the main while loop performs the O(n) assignments
within a total of O(n²) time. Whenever an item is assigned, the decrease in c̄_i*
can make it necessary to update the pointers. Since, however, the above maxima
can only decrease during execution, a total of O(nm) operations is required by the
algorithm for these checks and updatings. By finally observing that the exchange
phase clearly takes O(nm) time, we conclude that the overall time complexity of
MTHG is O(nm log m + n²).

Computational experiments have shown that good results can be obtained using
the following choices for f_ij:

(a) f_ij = p_ij (with this choice the improvement phase can be skipped);

(b) f_ij = p_ij / w_ij;

(c) f_ij = −w_ij;

(d) f_ij = −w_ij / c_i.
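A Python rendering of MTHG may help fix ideas; the desirability f is passed as a function so that any of the choices (a)-(d) can be plugged in (this is a sketch of the procedure above, not the book's Fortran code):

```python
def mthg(p, w, c, f):
    """Greedy phase: repeatedly assign the unassigned item whose best and
    second-best desirability f(i, j) differ most, to its best knapsack.
    Improvement phase: move items where residual capacity allows a gain.
    Returns (feas, z_h, y) with y[j] = knapsack of item j."""
    m, n = len(c), len(p[0])
    cap = list(c)
    y = [None] * n
    unassigned = set(range(n))
    while unassigned:
        best = None                        # (regret, item, knapsack)
        for j in unassigned:
            F = sorted((i for i in range(m) if w[i][j] <= cap[i]),
                       key=lambda i: f(i, j), reverse=True)
            if not F:
                return False, None, None   # feas = "no"
            regret = float('inf') if len(F) == 1 else f(F[0], j) - f(F[1], j)
            if best is None or regret > best[0]:
                best = (regret, j, F[0])
        _, j, i = best
        y[j] = i
        cap[i] -= w[i][j]
        unassigned.remove(j)
    for j in range(n):                     # exchange phase
        moves = [i for i in range(m)
                 if i != y[j] and w[i][j] <= cap[i] and p[i][j] > p[y[j]][j]]
        if moves:
            i = max(moves, key=lambda i: p[i][j])
            cap[y[j]] += w[y[j]][j]
            cap[i] -= w[i][j]
            y[j] = i
    return True, sum(p[y[j]][j] for j in range(n)), y

# invented instance, run with choice (a): f_ij = p_ij
p = [[10, 4], [6, 5]]
w = [[3, 3], [3, 3]]
c = [3, 3]
feas, z_h, y = mthg(p, w, c, f=lambda i, j: p[i][j])
```

Here the large regret of item 1 (10 versus 6) makes it claim knapsack 1 first, after which item 2 can only go to knapsack 2, for a total profit of 15.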
Example 7.3

Consider the instance of GAP defined by n = 8, m = 3, matrices (p_ij), (w_ij)
and capacities (c_i) [numerical data illegible in the source]. Executing MTHG, the
first phase assigns the items one by one; the second phase then performs some
exchanges, and the solution found has value

   z^h = 232. □
A Fortran implementation of MTHG, which determines the best solution
obtainable with choices (a)-(d) for f_ij, is included in the present volume. A more
complex approximate algorithm, involving a modified subgradient optimization
approach and branch-and-bound, can be found in Klastorin (1979).

Mazzola (1989) has derived from MTHG an approximate algorithm for the
generalization of GAP arising when the capacity constraints (7.2) are non-linear.
7.5 REDUCTION ALGORITHMS

The following algorithms (Martello and Toth, 1981c) can be used to reduce the
size of an instance of GAP. Let (ŷ_j) define a feasible solution (determined, for
example, by procedure MTHG of the previous section) of value z* = Σ_{j=1..n} p_{ŷ_j,j}.

The first reduction algorithm receives in input the upper bound value U_0 of
Section 7.2.1 and the corresponding values i(j) = arg max {p_ij : i ∈ M, w_ij ≤ c_i}
(j ∈ N). The algorithm fixes to 0 those variables x_ij which, if set to 1, would
decrease the bound to a value not greater than z*. (We obviously assume z* < U_0.)
If, for some j, all x_ij but one, say x_{i(j),j}, are fixed to 0, then x_{i(j),j} is fixed to 1. We
assume that, on input, all entries of (x_ij) are preset to a dummy value other than 0
and 1. On output, k_j^0 (j ∈ N) has the value |{x_ij : i ∈ M, x_ij = 0}|, and ĉ_i gives
the residual capacity of knapsack i (i ∈ M); these values are used by the second
reduction algorithm. We also assume that, initially, ĉ_i = c_i for all i ∈ M. If, for
some j, all x_ij are fixed to 0, the feasible solution (ŷ_j) is optimal, hence the output
variable opt takes the value "yes".
procedure MTRG1:
input: n, m, (p_ij), (w_ij), z*, U_0, (i(j)), (ĉ_i);
output: (x_ij), (k_j^0), (ĉ_i), opt;
begin
   opt := "no";
   j := 0;
   repeat
      j := j + 1;
      k_j^0 := 0;
      for i := 1 to m do
         if x_ij ≠ 0 then
            if w_ij > ĉ_i or U_0 − p_{i(j),j} + p_ij ≤ z* then
            begin
               x_ij := 0;
               k_j^0 := k_j^0 + 1
            end;
      if k_j^0 = m then opt := "yes"
      else if k_j^0 = m − 1 then
      begin
         let i' be the index with x_i'j ≠ 0;
         x_i'j := 1;
         ĉ_i' := ĉ_i' − w_i'j
      end
   until j = n or opt = "yes"
end.
The time complexity of MTRG1 is clearly O(nm). When the execution fixes
some variable to 1, hence decreasing some capacity ĉ_i, further reductions can be
obtained by re-applying the procedure. Since at most n variables can be fixed to 1,
the resulting time complexity is O(n²m).
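The fixing rule of MTRG1 can be sketched as follows (illustrative Python; x[i][j] is None while the variable is still free, mirroring the "dummy value" of the text):

```python
def mtrg1(p, w, c, z_star):
    """Fix x_ij = 0 whenever item j does not fit in knapsack i or assigning
    it there drives the bound U0 down to z_star or below; if a single
    candidate knapsack survives for item j, fix that variable to 1 and
    update the residual capacity. Returns (x, opt).
    Assumes (7.11): every item fits in at least one knapsack."""
    m, n = len(c), len(p[0])
    cap = list(c)
    i_of = [max((i for i in range(m) if w[i][j] <= c[i]),
                key=lambda i: p[i][j]) for j in range(n)]
    U0 = sum(p[i_of[j]][j] for j in range(n))
    x = [[None] * n for _ in range(m)]
    opt = False
    for j in range(n):
        for i in range(m):
            if w[i][j] > cap[i] or U0 - p[i_of[j]][j] + p[i][j] <= z_star:
                x[i][j] = 0
        free = [i for i in range(m) if x[i][j] is None]
        if not free:
            opt = True                     # the incumbent z_star is optimal
        elif len(free) == 1:
            x[free[0]][j] = 1
            cap[free[0]] -= w[free[0]][j]
    return x, opt

# invented instance with an incumbent of value 17
p = [[10, 3], [9, 8]]
w = [[5, 4], [5, 4]]
c = [5, 5]
x, opt = mtrg1(p, w, c, z_star=17)         # both items end up fixed
```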
The second reduction algorithm receives in input (x_ij), (k_j^0), (ĉ_i), the upper bound
value Ū_0 of Section 7.2.2 and, for each problem KP_i^0 (i ∈ M), the corresponding
solution (x̄_ij), j = 1, …, n, and optimal value z_i. Computation of the upper bounds of
Section 7.2.2,

   u_ij^0 = current upper bound on the solution value of KP_i^0 if x_ij = 0,

   u_ij^1 = current upper bound on the solution value of KP_i^0 if x_ij = 1,

is then used to fix to x̄_ij those variables x_ij which, if set to 1 − x̄_ij, would give an
upper bound not greater than z*. We assume that MTRG1 is first iteratively run,
and that Ū_0 and the solutions to problems KP_i^0 are then determined using the
reductions obtained. Consequently, the new algorithm cannot take decisions
contradicting those of MTRG1. It can, however, fix to 1 more than one variable in a
column, or to 0 all the variables in a column. Such situations imply that the current
approximate solution is optimal, hence the output variable opt takes the value "yes".
procedure MTRG2:
input: n, m, (w_ij), (ĉ_i), z*, Ū_0, (z_i), (x̄_ij), (u_ij^0), (u_ij^1), (x_ij), (k_j^0);
output: (x_ij), opt;
begin
   opt := "no";
   j := 0;
   repeat
      j := j + 1;
      k1 := 0;
      for i := 1 to m do
         if x_ij ≠ 0 and x_ij ≠ 1 then
            if x̄_ij = 1 and z* ≥ Ū_0 − z_i + u_ij^0 then
            begin
               x_ij := 1;
               ĉ_i := ĉ_i − w_ij;
               k1 := k1 + 1
            end
            else if x̄_ij = 0 and z* ≥ Ū_0 − z_i + u_ij^1 then
            begin
               x_ij := 0;
               k_j^0 := k_j^0 + 1
            end;
      if k_j^0 = m or k1 > 1 then opt := "yes"
      else if k1 = 1 then
         comment: item j is assigned, exclude it from the other knapsacks;
         for i := 1 to m do if x_ij ≠ 1 then x_ij := 0
   until j = n or opt = "yes"
end.
If the u_ij^0 and u_ij^1 are computed through any of the O(n) methods of Sections 2.2
and 2.3, the time complexity of MTRG2 is O(n²m). In this case too, when a variable
has been fixed to 1, a new execution can produce further reductions.
Example 7.3 (continued)

Using the solution value z* = 232 found by MTHG and the upper bound value
U_0 = 263, MTRG1 fixes a number of variables. Solving KP_i^0 (i = 1, 2, 3) for
the reduced problem and executing MTRG2 fixes further variables [the reduced
matrices are illegible in the source].

The execution of the Martello and Toth (1981c) branch-and-bound algorithm
(see Section 7.3) follows. Computation of U_2 and U_3 for the root node gives
N^> = {5}, M_5 = {1, 2}, U_2 = 252 and U_3 = 245. The branching scheme is shown
in Figure 7.2: since j* = 5, nodes 1 and 2 are generated and the corresponding
bounds computed. The highest penalty is then l_3 = 14, hence j* = 3 and nodes 3,
4 and 5 are generated. The bounds of the resulting nodes do not exceed z* (at the
last node, U_2 = 218 < z*), so the approximate solution of value z* = 232 is
optimal.

Figure 7.2 Decision-tree for Example 7.3
7.6 COMPUTATIONAL EXPERIMENTS
Tables 7.1 to 7.4 compare the exact algorithms of Section 7.3 on four classes
of randomly generated problems. For the sake of uniformity with the literature
(Ross and Soland (1975), Martello and Toth (1981c), Fisher, Jaikumar and Van
Wassenhove (1986)), all generated instances are minimization problems of the
form (7.5), (7.2), (7.3), (7.4). All the algorithms we consider, except the Ross
and Soland (1975) one, solve maximization problems, so the generated instances
are transformed through (7.7), using for t the value

   t = max_{i∈M, j∈N} {c_ij} + 1.
The classes are:

(a) w_ij uniformly random in [5, 25],
    p_ij uniformly random in [1, 40],
    c_i = 9(n/m) + 0.4 max_{i∈M} {Σ_{j∈N_i} w_ij} for i = 1, …, m
    (where N_i is defined as in Section 7.2.1);

(b) w_ij and p_ij as for class (a),
    c_i = 7(n/m) + 0.4 max_{i∈M} {Σ_{j∈N_i} w_ij} for i = 1, …, m;

(c) w_ij and p_ij as for class (a),
    c_i = 0.8 Σ_{j=1..n} w_ij / m for i = 1, …, m;
Table 7.1 Data set (a). HP 9000/840 in seconds. Average times/average numbers of nodes over 10 problems

Table 7.2 Data set (b). HP 9000/840 in seconds. Average times/average numbers of nodes over 10 problems

Table 7.3 Data set (c). HP 9000/840 in seconds. Average times/average numbers of nodes over 10 problems

Table 7.4 Data set (d). HP 9000/840 in seconds. Average times/average numbers of nodes over 10 problems
(d) w_ij uniformly random in [1, 100],
    p_ij uniformly random in [w_ij, w_ij + 20],
    c_i = 0.8 Σ_{j=1..n} w_ij / m for i = 1, …, m.

Problems of class (a) have been proposed by Ross and Soland (1975) and generally
admit many feasible solutions. Problems of classes (b), (c) and (d) have tighter
capacity constraints; in addition, in problems of class (d) a correlation between
profits and weights (often found in real-world applications) has been introduced.
The entries in the tables give average running times (expressed in seconds) and
average numbers of nodes generated in the branch-decision tree. A time limit of
100 seconds was imposed on the running time spent by each algorithm for the
solution of a single instance. For data sets for which the time limit occurred,
the corresponding entry gives, in brackets, the number of instances solved within
100 seconds (the average values are computed by also considering the interrupted
instances). The cases where the time limit occurred for all the instances are denoted
as "time limit". The following algorithms have been coded in Fortran IV and run
on an HP 9000/840 computer, using option "-o" for the Fortran compiler:
RS = Algorithm of Ross and Soland (1975);

MTG = Algorithm of Martello and Toth (1981c), as described in Section 7.3;

FJV = Algorithm of Fisher, Jaikumar and Van Wassenhove (1986);

MTGFJV = Algorithm MTG with upper bound min (U_2, U_3) (see Sections 7.2.2,
7.2.3) computed at each node of the branch-decision tree;

MTGJN = Algorithm MTG with upper bound min (U_1, U_2, U_4) (see Sections
7.2.1, 7.2.2, 7.2.4) computed at each node of the branch-decision tree.

For all the algorithms, the solution of the 0-1 single knapsack problems was
obtained using algorithm MT1 of Section 2.5.2.
For the computation of U_4, needed by MTGJN, the number of iterations in the
subgradient optimization procedure was limited to 50 for the root node (as suggested
by the authors, Jörnsten and Näsberg (1986)) and to 10 for the other nodes.
The Lagrangian multipliers were initially set as suggested by the authors for the
root node, and to the corresponding values obtained at the end of the previous
computation for the other nodes. (Different choices of the number of iterations
and of the initial values of the multipliers produced worse computational results.)
The step used, at iteration k of the subgradient optimization procedure, to modify
the current μ_ij values was that proposed by the authors.
The tables show that the fastest algorithms are RS for the "easy" instances of
class (a), and MTG for the harder instances of classes (b), (c) and (d). Algorithms
MTGFJV and MTGJN generate fewer nodes than MTG, but the global running
times are larger (the computation of U_3 and U_4 being much heavier than that of
U_1 and U_2), mainly for problems of classes (b), (c) and (d).

Algorithm FJV is much worse than the other algorithms for all data sets,
contradicting, to a certain extent, the results presented for the same classes of
test problems in Fisher, Jaikumar and Van Wassenhove (1986). This could be
explained by observing that such results were obtained by comparing executions
on different computers and using different random instances. In addition, the current
implementation of MTG incorporates, for the root node, the computation of upper
bound U_3.
Table 7.5 gives the performance of the Fortran IV implementation of approximate
algorithm MTHG (Section 7.4) on large-size instances. The entries give average
running times (expressed in seconds) and, in brackets, upper bounds on the average
percentage errors. The percentage errors were computed as 100 (U − z^h)/U, where
U = min (U_1, U_2, U_3, U_4). Only data sets (a), (b) and (c) are considered, since the
computation of U for data set (d) required excessive running times. Errors of value
0.000 indicate that all the solutions found were exact. The table shows that the
running times are quite small and, with few exceptions, practically independent
Table 7.5 Algorithm MTHG. HP 9000/840 in seconds. Average times (average percentage errors) over 10 problems, for data sets (a), (b) and (c)
of the data set. For n = 500 and data set (a), the first execution of MTHG (with
f_ij = p_ij) almost always produced an optimal solution of value z^h = U_0, so the
computing times are considerably smaller than for the other data sets. The quality
of the solutions found by MTHG is very good for data set (a) and clearly worse for
the other data sets, especially for small values of m. However, it is not possible to
decide whether these high errors depend only on the approximate solution or also
on the upper bound values. Limited experiments indicated that the error computed
with respect to the optimal solution value tends to be about half that computed
with respect to U.