
On a Stochastic Knapsack Problem

2010

Abstract

We study the chance- (expectation-) constrained knapsack problem with random weights. A branch-and-bound algorithm searches the binary solution space, and its upper bounds are obtained by solving the relaxation with a stochastic gradient algorithm. An Integration by Parts method is used to overcome the non-differentiability of the constraint, and the resulting convergence issues are resolved with more careful algorithmic choices. We are able to solve the (binary) chance-constrained knapsack problem with up to 100 items in less than 1 h.

On a Stochastic Knapsack Problem

Stefanie Kosuch, Marc Letournel and Abdel Lisser
Laboratoire de Recherche en Informatique, Université Paris Sud, 91405 Orsay Cedex, France

Key words: stochastic knapsack, expectation constraint, stochastic gradient method, Arrow-Hurwicz

CTW2010, University of Cologne, Germany, May 25-27, 2010

1 Introduction

The deterministic knapsack problem is a well known and well studied NP-hard combinatorial optimization problem. It consists in filling a knapsack with items out of a given set such that the weight capacity of the knapsack is respected and the total reward is maximized. For a review of references on the stochastic knapsack problem, stochastic gradient algorithms and branch-and-bound methods see [4].

In the deterministic problem, all parameters (item weights, rewards, knapsack capacity) are known. In the stochastic counterpart, some (or all) of these parameters are assumed to be random, i.e. not known at the moment the decision has to be made. In this paper, we study the stochastic knapsack problem with expectation constraint. The item weights are assumed to be independently normally distributed. We solve the relaxed version of this problem using a stochastic gradient algorithm in order to provide upper bounds for a branch-and-bound framework. Two approaches to estimate the needed gradients are applied, one based on Integration by Parts and one using Finite Differences. Finite Differences is a robust and simple approach that gives good results in practice even though the estimated gradients are biased, whereas Integration by Parts rests on a more theoretical analysis and allows the field of applications to be enlarged.

2 Mathematical formulations

We consider a stochastic knapsack problem of the following form. Given a set of $n$ items, each item has a weight that is not known in advance, and the decision of which items to choose has to be made without exact knowledge of these weights. We therefore handle the weights as random variables and assume that the weight $\chi_i$ of item $i$ is independently normally distributed with mean $\mu_i > 0$ and standard deviation $\sigma_i$. Furthermore, each item has a fixed reward per weight unit $r_i > 0$. We denote by $\chi$, $\mu$, $\sigma$ and $r$ the corresponding $n$-dimensional vectors. The aim is to maximize the expected total gain $\mathbb{E}[\sum_{i=1}^{n} r_i \chi_i x_i]$. In addition, we assume that the knapsack has a fixed weight capacity $c > 0$. In this paper, we solve the following expectation constrained knapsack problem:

Expectation Constrained Knapsack Problem (ECKP)

$$\max_{x \in \{0,1\}^n} \; \mathbb{E}\Big[\sum_{i=1}^{n} r_i \chi_i x_i\Big] \qquad (1)$$
$$\text{s.t.} \quad \mathbb{E}\big[H_{\mathbb{R}^+}(c - g(x,\chi))\big] \ge p \qquad (2)$$

where $\mathbb{E}[\cdot]$ denotes the expectation, $g(x,\chi) = \sum_{i=1}^{n} \chi_i x_i$ is the total weight of the chosen items, $H_{\mathbb{R}^+}$ denotes the indicator function of the positive real interval (the Heaviside function), and $p \in (0.5, 1]$ is the prescribed probability. We refer to the function inside the expectation of the constraint as $\theta$, i.e. $\theta(x,\chi) = H_{\mathbb{R}^+}(c - g(x,\chi))$.
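As a side remark (the text only alludes to this via the SOCP reformulation mentioned in Section 4), the normality assumption gives constraint (2) a standard closed-form deterministic equivalent. For a fixed $x$, the total weight $g(x,\chi)$ is itself normally distributed, so

$$g(x,\chi) \sim \mathcal{N}\Big(\sum_{i=1}^{n} \mu_i x_i,\; \sum_{i=1}^{n} \sigma_i^2 x_i^2\Big),$$

and hence

$$\mathbb{E}\big[H_{\mathbb{R}^+}(c - g(x,\chi))\big] = \mathbb{P}\big(g(x,\chi) \le c\big) \ge p \iff c - \sum_{i=1}^{n} \mu_i x_i \ge \Phi^{-1}(p)\,\sqrt{\sum_{i=1}^{n} \sigma_i^2 x_i^2},$$

where $\Phi^{-1}$ is the quantile function of the standard normal distribution. Since $p > 0.5$ implies $\Phi^{-1}(p) > 0$, the right-hand side is a convex (second-order cone representable) function of $x$, which is what makes the feasible set of the relaxed problem convex and underlies the SOCP comparison of Section 4.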
3 Problem solving method

Due to its combinatorial nature, ECKP can be solved using a branch-and-bound framework as presented in [4]. To obtain upper bounds, the authors propose to solve the corresponding continuous optimization problem using a stochastic gradient type algorithm. A stochastic gradient algorithm combines Monte-Carlo techniques with the deterministic gradient method. More precisely, instead of computing the gradient of the objective function (which is a function in expectation) to determine the direction of descent, one uses the gradient of the function inside the expectation; the expectation itself is approximated by drawing independent samples of the random variables at each iteration. Applying a gradient method to solve the relaxed ECKP is promising since its objective function is concave and, in addition, constraint (2) defines a convex feasible set due to the assumption that the weights are independently normally distributed. The particular stochastic gradient algorithm used in this work is the Stochastic Arrow-Hurwicz algorithm (hereafter called SAH-algorithm), which uses Lagrangian multipliers to deal with the expectation constraint (for further details see [3]).

However, to use such an algorithm for ECKP, one has to estimate the gradient of the indicator function $H_{\mathbb{R}^+}(\cdot)$. In this paper, we apply two different approaches: the first one is an unbiased estimator based on Integration by Parts (hereafter called the IP-method), proposed in [1] to solve continuous stochastic optimization problems. The second approach is a Finite Differences estimator (FD-method) presented in [2]. Unlike the IP-method, the FD-method provides a biased estimator of the gradient. In subsection 3.0.1 we present the two methods; subsection 3.0.2 gives a first insight into the convergence analysis we conducted.

3.0.1 Gradient computation methods

In the FD-method, the $h$-th component of the gradient of $\theta$ is approximated by the corresponding difference quotient

$$\frac{\theta(x + \delta \nu^h, \chi) - \theta(x - \delta \nu^h, \chi)}{2\delta},$$

where $\delta > 0$ and $\nu^h \in \{0,1\}^n$ is such that $\nu^h_h = 1$ and $\nu^h_i = 0$ for $i \neq h$.

The basic idea of the IP-method consists in using Integration by Parts to reformulate $\mathbb{E}[\theta(x,\chi)]$, which gives rise to a function in expectation $\mathbb{E}[\tilde{\theta}(x,\chi)]$ such that $\mathbb{E}[\tilde{\theta}(x,\chi)] = \mathbb{E}[\theta(x,\chi)]$. The function $\tilde{\theta}$ is differentiable, and the idea is to use the gradient of $\tilde{\theta}$ in the SAH-algorithm. Andrieu et al. presented how to compute such a $\tilde{\theta}(x,\chi)$ using Integration by Parts (see Theorem 5.5 in [1]). We state and prove their theorem for the case of ECKP with normally distributed weights.

3.0.2 Convergence analysis

When using the IP-method, major adaptations had to be made to correctly verify all convergence hypotheses. Instead of replacing $\{0,1\}^n$ by $[0,1]^n$ when relaxing ECKP, the theoretical analysis compels us to consider the complement of a neighborhood of $0$ in $[0,1]^n$. However, assuming that an empty knapsack is not an optimal solution, it is convenient to consider that the optimal solution vector of the continuous problem contains at least one component $x_\kappa$ with $x_\kappa \ge 1/n$. We are thus allowed to replace $[0,1]^n$ by $X_{cont} = \{x \in [0,1]^n \mid \|x\|_\infty \ge 1/n\}$. Accordingly, we obtain the following admissible set of the relaxed ECKP:

$$X_{cont}^{ad} = \{x \in X_{cont} : \mathbb{E}\big[H_{\mathbb{R}^+}(c - g(x,\chi))\big] \ge p\}.$$

Checking that all steps of the algorithm stay in this subset is a central point of our work.
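To make the interplay between the FD-method and the SAH-algorithm concrete, the following Python sketch shows one primal-dual iteration. It is an illustrative sketch, not the authors' implementation: the fixed step sizes eps and rho, the simple box projection onto [0,1]^n (rather than onto the admissible set X_cont^ad of Section 3.0.2), and the single-sample draw per iteration are simplifying assumptions.

import numpy as np

rng = np.random.default_rng(0)

def theta(x, chi, c):
    """Indicator H_{R+}(c - g(x, chi)): 1 if the sampled total weight fits, else 0."""
    return 1.0 if c - chi @ x >= 0.0 else 0.0

def fd_gradient(x, chi, c, delta=0.1):
    """FD-method: central difference estimate of the gradient of theta at x (biased)."""
    grad = np.zeros_like(x)
    for h in range(len(x)):
        e = np.zeros_like(x)
        e[h] = 1.0
        grad[h] = (theta(x + delta * e, chi, c) - theta(x - delta * e, chi, c)) / (2.0 * delta)
    return grad

def sah_step(x, lam, mu, sigma, r, c, p, eps=1e-2, rho=1e-2, delta=0.1):
    """One schematic stochastic Arrow-Hurwicz iteration for the relaxed ECKP."""
    chi = rng.normal(mu, sigma)                    # draw one sample of the random weights
    grad_obj = r * chi                             # gradient of the sampled objective sum_i r_i chi_i x_i
    grad_con = fd_gradient(x, chi, c, delta)       # estimated gradient of theta(x, chi)
    x_new = np.clip(x + eps * (grad_obj + lam * grad_con), 0.0, 1.0)   # primal ascent + box projection
    lam_new = max(0.0, lam - rho * (theta(x, chi, c) - p))             # projected dual (multiplier) update
    return x_new, lam_new

# Example usage on hypothetical data:
# n = 10
# mu, sigma = rng.uniform(1, 10, n), rng.uniform(0.1, 1.0, n)
# r, c, p = rng.uniform(1, 5, n), 0.5 * mu.sum(), 0.9
# x, lam = np.full(n, 0.5), 1.0
# for _ in range(10_000):
#     x, lam = sah_step(x, lam, mu, sigma, r, c, p)

In practice, decreasing step sizes and the safeguards discussed above (staying inside X_cont^ad, averaging of iterates as in [3]) are needed for convergence; the sketch only illustrates the update directions.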
4 Numerical results for the relaxed and combinatorial ECKP

We tested our algorithms on an instance from the literature as well as on a large number of randomly generated instances. Numerical tests of the SAH-algorithm involving the above-mentioned adaptations have shown that the algorithm converges on all tested instances. We also compared our approach with a method that has previously been used to solve the relaxed ECKP. The idea of this method is to reformulate the problem as a deterministic equivalent second order cone problem (SOCP) and to solve it using an interior point algorithm. It turned out that, in terms of running time, our SAH-algorithm outperforms the SOCP approach for small and medium size instances (up to 1000 items). Concerning the resolution of the combinatorial problem using a branch-and-bound framework, we are able to solve problems with up to 250 items in an average computing time of 1 h. In comparison, the SOCP procedure can only solve problems with up to 75 items in comparable time.

References

[1] L. Andrieu. Optimisation sous contrainte en probabilité. École Nationale des Ponts et Chaussées, 2004.
[2] L. Andrieu, G. Cohen, and F. Vázquez-Abad. Stochastic programming with probability constraints. http://fr.arxiv.org/abs/0708.0281 (accessed 24 October 2008), 2007.
[3] J.-C. Culioli and G. Cohen. Optimisation stochastique sous contraintes en espérance. Comptes Rendus de l'Académie des Sciences, Paris, Série I, 320(6):753-758, 2008.
[4] S. Kosuch and A. Lisser. Upper bounds for the 0-1 stochastic knapsack problem and a B&B algorithm. Annals of Operations Research (Online First), 2009. http://dx.doi.org/10.1007/s10479-009-0577-5.