The Temporal Knapsack Problem and Its Solution
1 Introduction
This paper defines the temporal knapsack problem (TKP), presents some algo-
rithms for solving it and compares the performance of the algorithms on some
hard instances. TKP is a natural generalisation of the knapsack problem and a
natural specialisation of the multi-dimensional knapsack problem. Nonetheless,
it is—as far as we know—a new problem.
In the TKP a resource allocator is given bids for portions of a timeshared
resource — such as CPU time or communication bandwidth — or a shared-
space resource — such as computer memory, disk space, or equivalent rooms in
a hotel that handles block-booking. Each bid specifies the amount of resource
needed, the time interval throughout which it is needed, and a price offered for
the resource. The resource allocator will, in general, have more demand than
capacity, so it has the problem of selecting a subset of the bids that maximises
the total price obtained.
We were initially drawn to formulating the TKP from our interest in ap-
plying combinatorial optimisation techniques in the context of grid computing.
R. Barták and M. Milano (Eds.): CPAIOR 2005, LNCS 3524, pp. 34–48, 2005.
c Springer-Verlag Berlin Heidelberg 2005
Fig. 2. An instance of the temporal knapsack problem to which the reduce operator is
applied
Maximise:   Σ_{b∈bids} price(b) · X_b
Subject to: Σ_{b∈bids(t)} demand(b) · X_b ≤ capacity(t), for each t ∈ times
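The formulation above can be checked on a small instance by brute force. The following sketch (pure Python; the data shapes and the tiny example instance are illustrative, not from the paper) enumerates all subsets of bids and keeps the best one that respects every per-time capacity constraint:

```python
from itertools import combinations

def solve_tkp_bruteforce(bids, capacity):
    """Enumerate all subsets of bids; return the best feasible total price.

    `bids` maps a bid name to (demand, price, times), where `times` is the
    set of time points the bid spans; `capacity` maps each time point to
    the available resource.  (Names and shapes are illustrative only.)
    """
    best_value, best_set = 0, frozenset()
    names = list(bids)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            # Check the capacity constraint at every time point.
            feasible = all(
                sum(bids[b][0] for b in subset if t in bids[b][2]) <= capacity[t]
                for t in capacity
            )
            if feasible:
                value = sum(bids[b][1] for b in subset)
                if value > best_value:
                    best_value, best_set = value, frozenset(subset)
    return best_value, best_set

# Tiny example: b1 and b2 overlap at time 2 and cannot both fit there.
bids = {
    "b1": (3, 10, {1, 2}),   # demand 3, price 10, spans times 1-2
    "b2": (3, 7,  {2, 3}),   # demand 3, price 7,  spans times 2-3
    "b3": (1, 4,  {3}),      # demand 1, price 4,  spans time 3
}
capacity = {1: 4, 2: 4, 3: 4}
print(solve_tkp_bruteforce(bids, capacity))
```

Exhaustive enumeration is of course exponential in the number of bids; it serves only to make the objective and constraints concrete.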
such a case the time with the larger capacity imposes a weaker constraint and
therefore can be removed; if the two times have the same capacity, then either
time can be removed. In the instance of Fig. 2(c), t2 and t3 impose the same
constraint as do t8 and t9 . Thus, t3 and t9 can be removed from the instance,
resulting in the instance of Figure 2(d).
The split operator decomposes a problem instance into subproblems that can
be solved independently. A split can be performed between any two adjacent
times, t and t′, such that bids(t) ∩ bids(t′) = ∅. In the instance of Fig. 2(d),
a split can be made between t4 and t8, resulting in two subproblems: one
comprising times t2 and t4 and bids b1 , b3 and b4 ; and a second comprising time
t8 and bids b6 and b7 . Splitting is rarely used in constraint programming, though
two recent exceptions are the work of Walsh [6] and of Marinescu and Dechter
[7].
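Under an illustrative representation (times as a sorted list, with the set of bids active at each time stored explicitly; these data shapes are assumptions of this sketch, not the paper's own code), the split condition can be written as:

```python
def split_points(times, active_bids):
    """Return pairs of adjacent times (t, t') whose bid sets are disjoint,
    i.e. the positions at which the instance may be split.

    `times` is a sorted list of time points; `active_bids` maps each time
    point to the set of bids active at it.
    """
    return [
        (t, t_next)
        for t, t_next in zip(times, times[1:])
        if not (active_bids[t] & active_bids[t_next])  # no bid spans both
    ]

# Mirroring the running example: after reduction the times are t2, t4, t8,
# and no bid is active at both t4 and t8.
times = ["t2", "t4", "t8"]
active = {"t2": {"b1", "b3"}, "t4": {"b3", "b4"}, "t8": {"b6", "b7"}}
print(split_points(times, active))
```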
The branch operator is the familiar one from constraint programming, arti-
ficial intelligence and operations research. A bid b is selected, it is then removed
from the problem and two branches are generated: one in which b is labelled
“reject” and the other in which it is labelled “accept.” On the accept branch,
demand(b) must be subtracted from the capacity at all times in duration(b).
The decomposition algorithm, which applies the previous operations, is shown
as Algorithm 1. This performs some initialisation and then calls the recursive
procedure Solve.
Algorithm 1: Decomposition
  Input: P, an instance of TKP
  Reduce(P)
  Split(P) into a set of problems S
  for s ∈ S do Solve(s)
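The interplay of reduce, split and branch can be sketched as a single recursive function. This is a simplified illustration, not the paper's Algorithm 1: reduction is cut down to forced rejects only, splitting is done by grouping bids into connected components of overlapping time spans, and the data shapes are assumptions of the sketch:

```python
def solve(bids, capacity):
    """Depth-first sketch of the decomposition: reduce (forced rejects
    only), split into independent subproblems, then branch on one bid per
    subproblem.  `bids` maps a name to (demand, price, set_of_times);
    returns the optimal total price.
    """
    # Reduce (simplified): force-reject bids that no longer fit anywhere.
    bids = {b: v for b, v in bids.items()
            if all(v[0] <= capacity[t] for t in v[2])}
    if not bids:
        return 0
    # Split: connected components of bids linked by shared time points.
    components, remaining = [], dict(bids)
    while remaining:
        frontier, comp = {next(iter(remaining))}, {}
        while frontier:
            b = frontier.pop()
            comp[b] = remaining.pop(b)
            frontier |= {o for o, v in remaining.items() if v[2] & comp[b][2]}
        components.append(comp)
    # Branch: accept or reject one chosen bid in each component.
    total = 0
    for comp in components:
        b = next(iter(comp))
        demand, price, ts = comp[b]
        rest = {o: v for o, v in comp.items() if o != b}
        reject = solve(rest, capacity)
        # On accept, subtract the bid's demand throughout its duration.
        reduced_cap = {t: c - (demand if t in ts else 0)
                       for t, c in capacity.items()}
        accept = price + solve(rest, reduced_cap)
        total += max(reject, accept)
    return total

bids = {"b1": (3, 10, {1, 2}), "b2": (3, 7, {2, 3}), "b3": (1, 4, {3})}
capacity = {1: 4, 2: 4, 3: 4}
print(solve(bids, capacity))
```

Because two bids in different components never share a time point, their capacity constraints are disjoint, so the components really can be solved independently and their optima summed.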
Let us now turn our attention to the four procedures used in the de-
composition algorithm: RejectBid, AcceptBid, Reduce and Split. The algo-
rithms for these support procedures are given in Algorithm 2. In this discus-
sion, and throughout, let Demand(t) be the total demand at time t, that is,
Σ_{b∈bids(t)} demand(b). Also let next(t) be the smallest time strictly greater
than t; next(t) is undefined if t is the largest time.

Procedure Reduce(P):
  for b ∈ bids do TestForcedReject(b)
  for t ∈ times do if Demand(t) ≤ capacity(t) then RemoveTime(t)
  if |times| ≥ 2 then
    set ta to min(times)
    while ta ≠ max(times) do
      set tb to next(ta)
      if bids(ta) = bids(tb) then
        if capacity(ta) ≥ capacity(tb) then RemoveTime(ta); set ta to tb
        else RemoveTime(tb)
      else set ta to tb

Procedure Split(P):
  Let ta and tb be two times such that next(ta) = tb and bids(ta) ∩ bids(tb) = ∅
  if no such times exist then return({P})
  else
    Let P1 be the TKP instance with times {t | t ≤ ta} and bids {b | end(b) ≤ ta}
    Let P2 be the TKP instance with times {t | t ≥ tb} and bids {b | start(b) ≥ tb}
    return(Split(P1) ∪ Split(P2))

Procedure TestForcedReject(b ∈ bids):
  if for some t ∈ duration(b), demand(b) > capacity(t) then RejectBid(b)
RejectBid(b) and AcceptBid(b) are simple; both label bid b appropriately and
remove it from the problem instance. In addition, AcceptBid(b) must subtract
the demand of the bid from the resource capacities available. Recall that reduce
performs two kinds of simplification: (1) removing bids that must be rejected
(forced rejects) and (2) removing unnecessary times, which may lead to removing
bids that must be accepted (forced accepts). This is achieved by performing (1)
to completion and then performing (2) to completion. Once this is done, there is
no need to perform (1) again; doing so would not force any more rejects. In fact,
it can be shown (though space precludes doing so here) that every time Solve is
invoked it is given an instance such that
– ∀t ∈ times ∀b ∈ bids(t) demand(b) ≤ capacity(t),
– ∀t ∈ times capacity(t) < Demand(t),
– ∀b duration(b) ≠ ∅ and
– ∀t, t′ ∈ times t′ = next(t) =⇒ bids(t) ≠ bids(t′).
An instance that has these properties is said to be reduced since performing
the Reduce operator on the instance would have no effect. For each of the two
occurrences of Reduce in Solve we have implemented a significant simplification
of the operator by considering the context in which it occurs.
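The four properties can be stated as a small predicate. This is a sketch under assumed data shapes (a sorted list of times, plus dictionaries for capacity, demand and duration), included only to make the definition of a reduced instance operational:

```python
def is_reduced(times, capacity, demand, duration):
    """Check the four properties of a reduced instance listed above.
    `times`: sorted list of time points; `capacity[t]`: resource at t;
    `demand[b]`: a bid's demand; `duration[b]`: set of times the bid spans.
    """
    bids_at = {t: {b for b in duration if t in duration[b]} for t in times}
    total = {t: sum(demand[b] for b in bids_at[t]) for t in times}
    return (
        # (1) no forced rejects: every bid fits at each time it spans
        all(demand[b] <= capacity[t] for t in times for b in bids_at[t])
        # (2) no removable times: every time is over-demanded
        and all(capacity[t] < total[t] for t in times)
        # (3) every bid spans at least one time
        and all(duration[b] for b in duration)
        # (4) adjacent times impose different constraints
        and all(bids_at[t] != bids_at[u] for t, u in zip(times, times[1:]))
    )

demand = {"a": 2, "b": 2, "c": 2}
duration = {"a": {1, 2}, "b": {1}, "c": {2}}
print(is_reduced([1, 2], {1: 3, 2: 3}, demand, duration))  # reduced
print(is_reduced([1, 2], {1: 9, 2: 9}, demand, duration))  # times removable
```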
As presented, the decomposition algorithm defines an AND/OR search tree.1
Each node consists of a TKP instance. The root node is an AND node and
comprises the initial problem instance. Every leaf node comprises a TKP instance
with no bids. The children of an AND node are the (one or more) instances
generated by applying the Split operator to the AND node. The set of feasible
solutions to an AND node is the cross product of the feasible solutions of its
children. Each child of an AND node is an OR node. Each OR node, other than
the leaves, has two children generated by the branching in the algorithm—one
child in which a selected bid is accepted and one in which that bid is rejected.
The set of feasible solutions of an OR node is the union of the feasible solutions
of its children.
The Decomposition Algorithm is correct in that the feasible solutions of the
AND/OR search tree include all optimal solutions. The feasible solutions of the
tree generally contain non-optimal solutions, which is obvious once one notices
that the non-deterministic algorithm does not use the price of the bids. The next
section considers how to explore the tree to find an optimal solution and how to
use bounds on the objective function to prune the tree during the exploration.
1 The idea that a non-deterministic program implicitly defines an AND/OR tree was
used in the very first paper published in the journal Artificial Intelligence [8].
region for TKP without eliminating any potential integer solutions. To enhance
the given search strategy, we employ the well-known Gomory mixed-integer cut
(GMIC), which is considered one of the most important classes of cutting planes
(see [13]).
Using the results from the linear relaxed TKP model (0 ≤ Xb ≤ 1, ∀b ∈ bids)
and an objective function value z of a known feasible integer solution, a valid
GMIC [14] for the TKP model can be written as

zU − z ≥ Σ_{i∈N1, fi≤f0} ⌊−ri⌋ Xi + Σ_{j∈N2, fj≤f0} ⌊rj⌋ (1 − Xj)
       + Σ_{i∈N1, fi>f0} (⌊−ri⌋ + (fi − f0)/(1 − f0)) Xi
       + Σ_{j∈N2, fj>f0} (⌊rj⌋ + (fj − f0)/(1 − f0)) (1 − Xj)
       + Σ_{k∈S, fk≤f0} ⌊−rk⌋ sk + Σ_{k∈S, fk>f0} (⌊−rk⌋ + (fk − f0)/(1 − f0)) sk

where N1 (N2) is the set of indices for non-basic variables at their lower (upper)
bounds; S, the set of slack variables s; r, the reduced costs; and zU, the
objective value of the linear relaxation model. It is clear that zU provides an
upper bound for the linear mixed integer TKP. In this notation, f0 = zU − ⌊zU⌋;
fi = −ri − ⌊−ri⌋, ∀i ∈ N1; and fj = rj − ⌊rj⌋, ∀j ∈ N2.
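The fractional parts that drive the cut are cheap to compute. The following sketch (illustrative inputs; the variable names and numeric values are hypothetical) shows f0, fi and fj being derived from a relaxation value and reduced costs:

```python
import math

def gmic_fractions(z_u, reduced_costs_n1, reduced_costs_n2):
    """Compute the fractional parts used by the GMIC above: f0 from the
    relaxation value z_U, f_i for variables non-basic at their lower
    bound, f_j for those at their upper bound."""
    frac = lambda x: x - math.floor(x)
    f0 = frac(z_u)
    f_n1 = {i: frac(-r) for i, r in reduced_costs_n1.items()}
    f_n2 = {j: frac(r) for j, r in reduced_costs_n2.items()}
    return f0, f_n1, f_n2

# Hypothetical relaxation value 107.25 and two non-basic variables.
f0, f1, f2 = gmic_fractions(107.25, {"x1": 0.6}, {"x2": 2.4})
print(f0, f1, f2)
```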
The generated GMICs are added to the linear relaxed TKP instance and
all its descendants in the search tree. Each added GMIC removes some of the
non-integer solutions from the relaxed feasible region, but none that are integer.
This also helps to improve the upper bound zU .
We also employ the “reduced costs constraints” (RCC ) and “reverse-reduced
costs constraints” (R-RCC ) discussed by Oliva et al. [15]. Following their work,
the “pseudo-utility criterion” is used to obtain a reasonably good feasible so-
lution. This criterion is computationally cheap, especially once the solution to
the linear relaxation is known and the optimal values of the dual variables,
λ, are determined. In this criterion all Xb are sorted in non-increasing order of
price(b)/(demand(b) · Σ_{t∈duration(b)} λt) and the demands are satisfied in this
order, as long as there is enough capacity. The resulting objective function value,
denoted by zL , yields a lower bound for the optimum z. From
z − Σ_{i∈N1} ri Xi + Σ_{j∈N2} rj (1 − Xj) − Σ_{k∈S} rk sk = zU,
one can devise two useful constraints: RCC on the right, and R-RCC on the
left.
zU − zU+ ≤ −Σ_{i∈N1} ri Xi + Σ_{j∈N2} rj (1 − Xj) − Σ_{k∈S} rk sk ≤ zU − zL

where zU+ represents an upper bound which is stronger than the one provided
by the linear relaxation (zU ). Such an upper bound can be obtained by using a
“surrogate relaxation”. In our case, this relaxation consists of adding together
all of the constraints weighted with their associated dual values.
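The pseudo-utility criterion described above amounts to a greedy pass over the bids. This sketch assumes illustrative data shapes and hypothetical dual values λ; it is not the paper's implementation:

```python
def pseudo_utility_lower_bound(bids, capacity, duals):
    """Greedy feasible solution via the pseudo-utility criterion: sort
    bids by price / (demand * sum of duals over the bid's duration) and
    accept greedily while capacity permits.  Returns z_L and the
    accepted set of bids."""
    def utility(b):
        demand, price, ts = bids[b]
        return price / (demand * sum(duals[t] for t in ts))

    remaining = dict(capacity)
    accepted, z_l = set(), 0
    for b in sorted(bids, key=utility, reverse=True):
        demand, price, ts = bids[b]
        if all(remaining[t] >= demand for t in ts):
            for t in ts:
                remaining[t] -= demand  # commit the bid's demand
            accepted.add(b)
            z_l += price
    return z_l, accepted

bids = {"b1": (3, 10, {1, 2}), "b2": (3, 7, {2, 3}), "b3": (1, 4, {3})}
capacity = {1: 4, 2: 4, 3: 4}
duals = {1: 1.0, 2: 1.0, 3: 0.5}   # hypothetical dual values
print(pseudo_utility_lower_bound(bids, capacity, duals))
```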
The aforementioned cuts and constraints are used for propagation purposes.
By enforcing bounds consistency, the domains of the decision variables, including
the slacks, are filtered and in certain cases reduced to a singleton. Bounds
consistency on an RCC/R-RCC constraint of the form

a ≤ Σ_i bi xi + Σ_j cj (1 − xj) ≤ d

gives the following additional bounds on the domains of the variables xp ∈ N1 ∪ S
and xq ∈ N2:

(a − Σ_i bi − Σ_j cj + bp)/bp ≤ xp ≤ d/bp   and
1 − d/cq ≤ xq ≤ 1 − (a − Σ_i bi − Σ_j cj + cq)/cq.
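These filtering rules translate directly into code. The sketch below (hypothetical coefficient maps and numeric values; bounds are intersected with the original [0, 1] domains) computes the tightened intervals for a variable of each kind:

```python
def filter_xp(a, d, b, c, p):
    """Tightened [lo, hi] for x_p (p in N1 or S), from the constraint
    a <= sum_i b_i x_i + sum_j c_j (1 - x_j) <= d with x in [0, 1].
    `b` and `c` map variable names to their coefficients."""
    lo = (a - sum(b.values()) - sum(c.values()) + b[p]) / b[p]
    hi = d / b[p]
    return max(0.0, lo), min(1.0, hi)

def filter_xq(a, d, b, c, q):
    """Tightened [lo, hi] for x_q (q in N2), same constraint form."""
    lo = 1 - d / c[q]
    hi = 1 - (a - sum(b.values()) - sum(c.values()) + c[q]) / c[q]
    return max(0.0, lo), min(1.0, hi)

# Small numeric check: 4 <= 2*x1 + 3*(1 - x2) <= 4.5 with x1, x2 in [0, 1].
b, c = {"x1": 2.0}, {"x2": 3.0}
print(filter_xp(4.0, 4.5, b, c, "x1"))   # x1 is forced up
print(filter_xq(4.0, 4.5, b, c, "x2"))   # x2 is forced down
```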
The GMIC works in a manner similar to RCC.
4 Performance Comparison
This section compares the effectiveness of three algorithms at solving a range of
randomly-generated TKP instances. The algorithms considered are:
– the decomposition algorithm using AO* search with the forced-split variable-
selection strategy;
– the decomposition algorithm using depth-first search with the forced-split
variable-selection strategy; and
– the integer linear program solver provided by CPLEX version 8.1 with the
default settings. The solver uses a branch-and-cut algorithm — branch-and-
bound augmented by the use of cuts. After an initial “presolve” phase, which
removes redundant constraints and attempts to tighten the bounds on the
variables, the solver creates a tree, whose root contains the linear relaxation
of the problem, and proceeds to expand nodes of this tree until an optimal
integer solution has been found. At each node, the linear relaxation of the
problem at that node is solved. If this leads to a solution in which some
variables have fractional values, a selection of cuts are generated and added
to the problem. The problem is then solved again, and if some variables are
still non-integer, one is chosen to branch on, producing one child with the
chosen variable set to 1 and another with it set to 0.
Preliminary experiments showed that the decomposition algorithm consistently
performed better with forced-split selection than with demand selection. This
is the case for both AO* search and depth-first search. Consequently, extensive
experiments were not performed for the demand strategy.
Our method for randomly generating TKP instances is controlled by six
parameters: ntimes, max length, max demand, ucapacity, max rate and nbids.
These are all integers, except max rate, which is a floating point number. Given
these parameters, an instance is generated that has times = {t1 , . . . tntimes },
ordered in the obvious way, a uniform capacity of ucapacity, and nbids bids.
Each bid within an instance is generated by randomly choosing its start time,
end time, demand and rate from a uniform distribution over the following ranges:
start(b) ∈ [1, ntimes],
end(b) ∈ [start(b), min(ntimes, start(b) + max length − 1)],
demand(b) ∈ [1, max demand],
rate(b) ∈ [1, max rate].
All of these are integers except that rate(b) is a floating point number. From
these values, we set price(b) to round(rate(b)·demand(b)·(end(b)−start(b)+1)).
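The generation procedure can be sketched as follows. The `seed` argument is an addition for reproducibility, and the returned data shapes are this sketch's own choice, not the authors':

```python
import random

def generate_instance(ntimes, max_length, max_demand, ucapacity, max_rate,
                      nbids, seed=0):
    """Generate a random TKP instance from the six parameters above.
    Returns (capacity, bids) with each bid a (start, end, demand, price)
    tuple; rate is drawn uniformly and price rounded as in the text."""
    rng = random.Random(seed)
    capacity = {t: ucapacity for t in range(1, ntimes + 1)}
    bids = []
    for _ in range(nbids):
        start = rng.randint(1, ntimes)
        end = rng.randint(start, min(ntimes, start + max_length - 1))
        demand = rng.randint(1, max_demand)
        rate = rng.uniform(1.0, max_rate)
        price = round(rate * demand * (end - start + 1))
        bids.append((start, end, demand, price))
    return capacity, bids

capacity, bids = generate_instance(10, 3, 5, 8, 2.0, 4)
print(bids)
```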
Performance was assessed on a set of randomly-generated instances in which
most factors affecting complexity were kept static,
ntimes = 2880,
max length = 100,
max demand = 50,
ucapacity = 400,
while varying the values of two parameters: nbids and max rate. The value 2880
corresponds to the number of 15 minute slots in 30 days. By varying max rate
from 1.0 to 2.0 in increments of .2, and then further increasing its value to 4, 8
and 16, and by varying nbids from 400 to 700 in increments of 50, we generated
63 problem suites, each containing 20 instances. By using these parameter values,
we have focussed the majority of our experiments on instances generated with
max rate between 1 and 2, but also consider instances with larger values of
max rate in order to show the effect this parameter has over a greater range.
Figure 3 shows the mean solution time taken by each algorithm for all gen-
erated instances in which there are a given number of bids and max rate ≤ 2.
The graph reveals that for the problems with the lowest number of bids, both
decomposition algorithms outperform the CPLEX solver. However, as the prob-
lem size increases, the performance of the decomposition algorithms deteriorates
faster than that of CPLEX. The AO* algorithm is unable to solve some instances
Fig. 3. Mean time taken to solve instances with a given number of bids (mean
solution time in seconds on a logarithmic scale, plotted against the number of
bids from 400 to 700, with curves for AO*, depth-first search and CPLEX)
Fig. 4. Mean time taken to solve instances with a given range of rates (mean
solution time on a logarithmic scale, plotted against the maximum rate from 1
to 16, with curves for AO*, depth-first search and CPLEX)
with 600 bids as it encounters memory problems and thrashes badly. The depth-
first algorithm and CPLEX are both capable of solving all instances with up to
and including 650 bids, but encounter problems on some instances with 700 bids;
CPLEX through encountering memory-shortage problems and depth-first search
through the exceptionally long time required to solve some problem instances.
Figure 4 shows the performance of the three algorithms on all instances in
which max rate has a given value and nbids ≤ 550. The graph shows that for
smaller values of max rate, on average the CPLEX solver performs best, followed
by the AO* algorithm, with the depth-first exploration proving worse. However
for larger values of max rate the AO* and depth-first algorithms clearly out-
perform the CPLEX solver. It is worth noting that the performance of CPLEX,
unlike the other two programs, is barely affected by the value of max rate.
While we have reported mean solution time throughout, it should be men-
tioned that these are influenced strongly by the extremely long run times required
for a few of the instances. For most problem suites, the algorithms solve most
instances in very short times; however, a few instances take substantially longer
and so exert a large influence on the mean. Despite
this, we report mean times rather than median times (which would not be affected
by these extreme values) as the frequency of the very hard instances increases with
both increasing numbers of bids and decreasing max rate, and their frequency is
a significant component of the difficulty of a particular problem suite.
Acknowledgements
We thank Michael Trick, Michel Vasquez and Lucas Bordeaux for helpful discus-
sions. This project has been partly funded by Microsoft Research. Ian Miguel is
supported by a UK Royal Academy of Engineering/EPSRC Post-doctoral Re-
search Fellowship and S. Armagan Tarim by Science Foundation Ireland under
Grant No. 03/CE3/I405 as part of the Centre for Telecommunications Value-
Chain-Driven Research (CTVR) and Grant No. 00/PI.1/C075.
References
1. Foster, I., Kesselman, C., Tuecke, S.: The anatomy of the Grid: Enabling scalable
virtual organizations. The International Journal of High Performance Computing
Applications 15 (2001) 200–222
2. Roy, A., Sander, V.: Advanced reservation API. GFD-E5, Scheduling Working
Group, Global Grid Forum (GGF) (2003)
3. Foster, I., Kesselman, C., Nick, J., Tuecke, S.: The physiology of the grid: An open
grid services architecture for distributed systems integration (2002)
4. Martello, S., Toth, P.: Knapsack Problems: Algorithms and Computer Implemen-
tations. Wiley, New York (1990)
5. Fréville, A.: The multidimensional 0-1 knapsack problem: An overview. European
Journal of Operational Research 155 (2004) 1–21
6. Walsh, T.: Stochastic constraint programming. In: Proc. of the Fifteenth European
Conf. on Artificial Intelligence, IOS Press (2002) 1–5
7. Marinescu, R., Dechter, R.: AND/OR tree search for constraint optimization.
In: Proc. of the 6th International Workshop on Preferences and Soft Constraints.
(2004)
8. Manna, Z.: The correctness of nondeterministic programs. Artificial Intelligence 1
(1970) 1–26
9. Nilsson, N.J.: Principles of Artificial Intelligence. Tioga (1980)
10. Martelli, A., Montanari, U.: Additive AND/OR graphs. In: Proc. of the Fourth
Int. Joint Conf. on Artificial Intelligence. (1975) 345–350
11. Chang, C.L., Slagle, J.: An admissible and optimal algorithm for searching
AND/OR graphs. Artificial Intelligence 2 (1971) 117–128
12. Chvátal, V.: Linear Programming. W.H. Freeman, New York (1983)
13. Bixby, R.E., Fenelon, M., Gu, Z., Rothberg, E., Wunderling, R.: MIP: Theory and
practice — closing the gap. In Powell, M.J.D., Scholtes, S., eds.: System Mod-
elling and Optimization: Methods, Theory, and Applications. Kluwer Academic
Publishers (2000) 19–49
14. Wolsey, L.A.: Integer Programming. John Wiley and Sons, New York (1998)
15. Oliva, C., Michelon, P., Artigues, C.: Constraint and linear programming: Using
reduced costs for solving the zero/one multiple knapsack problem. In: Proc. of
the Workshop on Cooperative Solvers in Constraint Programming (CoSolv 01),
Paphos, Cyprus. (2001) 87–98
16. Russell, S.J.: Efficient memory-bounded search methods. In: Proc. of the Tenth
European Conf. on Artificial Intelligence, Vienna, Wiley (1992) 1–5