Nesting Problems - Exact and Heuristic Algorithms
PhD Thesis
Supervised by:
Ramón Álvarez-Valdés Olaguı́bel
José Manuel Tamarit Goerlich
To the women of my life: Natalia and Olaya...
ACKNOWLEDGEMENTS
This thesis would not have been possible without the generous support of Ramón Álvarez-Valdés Olaguı́bel and José Manuel Tamarit Goerlich.
Special appreciation to professor Enrique Benavent, who spent a lot of time introducing me to the world of Operational Research.
I want to thank all the examiners, Enriqueta Vercher, José Fernando Oliveira, Maria Antónia Carravilla, Fran Parreño, Rubén Ruiz and Enrique Benavent, for the great effort of reading the thesis carefully and for all their observations and corrections.
Many thanks to professors José Fernando Oliveira, Maria Antónia Carravilla and António
Miguel Gomes for supervising me in my research internship in Porto.
Special thanks to Julia Bennell, who spent a lot of her valuable time supervising me in my
research internship in Southampton.
Special thanks to Jonathon Scandrett, who has corrected and improved the English writing.
Finally, very special thanks to Natalia and Olaya for their infinite patience.
Contents
3.1.3 Branching on constraints (BC) . . . . . . . . . . . . . . . . . . . . . . 60
3.1.4 Computational results for the different branching strategies . . . . . . . 61
3.2 Updating the bounds on the pieces . . . . . . . . . . . . . . . . . . . . . . . . 66
3.2.1 Method I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.2.2 Method II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.3 Finding incompatible variables . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.3.1 Incompatibility using the bounds on the pieces . . . . . . . . . . . . . 69
3.3.2 Incompatibility using the transitivity of the pieces . . . . . . . . . . . . 70
3.3.3 Computational results of the strategies for updating bounds and finding
incompatible variables . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.4 Results of the complete algorithm . . . . . . . . . . . . . . . . . . . . . . . . 74
7.3.2 1-Compaction into two phases . . . . . . . . . . . . . . . . . . . . . . 142
7.4 Crossed objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
9 Two dimensional irregular bin packing problems with guillotine cuts 159
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9.2 Literature review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9.3 Problem description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
9.4 Mixed integer formulation for the insertion of one piece . . . . . . . . . . . . . 162
9.5 Guillotine cut structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
9.6 Rotations and reflections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
9.7 Constructive algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
9.8 Constructive algorithm with two phases . . . . . . . . . . . . . . . . . . . . . 172
9.9 Embedding an improvement procedure into the constructive algorithm . . . . . 173
9.10 Computational experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
9.11 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Chapter 1
1.1 Introduction
Nesting problems are two-dimensional cutting and packing problems involving irregular shapes.
This thesis is focused on real applications of Nesting problems, such as the garment industry or glass cutting. The aim is to study different mathematical methodologies to obtain good
lower bounds by exact procedures and upper bounds by heuristic algorithms. The core of the
thesis is a mathematical model, a Mixed Integer Programming model, which is adapted in each
one of the parts of the thesis.
This study has three main parts: first, an exact algorithm for Nesting problems when rotation
for the pieces is not allowed; second, an Iterated Greedy algorithm to deal with more complex
Nesting problems when pieces can rotate at several angles; third, a constructive algorithm to
solve the two-dimensional irregular bin packing problem with guillotine cuts. This thesis is
organized as follows.
The first part is focused on developing exact algorithms. In Chapter 2 we present two
Mixed Integer Programming (MIP) models, based on the Fischetti and Luzzi MIP model [27].
We consider horizontal lines in order to define the horizontal slices which are used to separate
each pair of pieces. The second model, presented in Section 2.3, uses the structure of the
horizontal slices in order to lift the bound constraints. Section 2.5 shows that if we solve
these formulations with CPLEX, we obtain better results than the formulation proposed by
Gomes and Oliveira [32]. The main objective is to design a Branch and Cut algorithm based
on the MIP model, but first a Branch and Bound algorithm is developed in Chapter 3. There we
study different branching strategies and design an algorithm which updates the bounds on the
coordinates of the reference point of the pieces in order to find incompatible variables which
are fixed to 0 in the current branch of the tree. The resulting Branch and Bound produces an
important improvement with respect to previous algorithms and is able to solve to optimality
problems with up to 16 pieces in a reasonable time.
In order to develop the Branch and Cut algorithm we have found several classes of valid
inequalities. Chapter 4 presents the different inequalities and in Chapter 5 we propose separa-
tion algorithms for some of these inequalities. However, our computational experience shows
that although the number of nodes is reduced, the computational time increases considerably
and the Branch and Cut algorithm becomes slower.
The second part is focused on building an Iterated Greedy algorithm for Nesting problems.
In Chapter 6 a constructive algorithm based on the MIP model is proposed. We study different
versions depending on the objective function and the number of pieces which are going to be
considered in the initial MIP. A new idea for the insertion is presented, trunk insertion, which
allows certain movements of the pieces already placed. Chapter 7 contains different movements
for the local search based on the reinsertion of a given number of pieces and compaction. In
Chapter 8 we present a math-heuristic algorithm, which is an Iterated Greedy algorithm because
we iterate over the constructive algorithm using a destructive algorithm. We have developed
a local search based on the reinsertion of one and two pieces. In the constructive algorithm,
for the reinsertion of the pieces after the destruction of the solution and the local search movements, we use several parameters that change along the algorithm, depending on the time required to prove optimality in the corresponding MIP models. That is, the parameters are adjusted to the difficulty of the current MIP model. The computational results
show that this algorithm is competitive with other algorithms and provides the best known results on several known instances.
The third part is included in Chapter 9. We present an efficient constructive algorithm for
the two dimensional irregular bin packing problem with guillotine cuts. This problem arises
in the glass cutting industry. We have used a similar MIP model with a new strategy to ensure
a guillotine cut structure. The results obtained improve on the best known results. Furthermore, the algorithm is competitive with state-of-the-art procedures for rectangular bin packing problems.
The most studied problem is the strip packing problem where the width of the strip is fixed
and the objective is based on reducing waste. As all the pieces have to be placed into the
strip without overlapping, minimizing the waste is equivalent to minimizing the total required
length. These problems arise in a wide variety of industries like garment manufacturing (see
Figure 1.1), sheet-metal cutting, furniture making and shoe manufacturing.
Two-dimensional Bin Packing Problems consist in placing a set of pieces into a finite number of bins in such a way that the total number of bins is minimized. In some applications, the
placement areas into which the pieces have to be packed or cut can have irregular shapes, as in
the case of leather hides for making shoes. The placement area can have uniform or varying
qualities depending on the region, sometimes including defective parts that cannot be used. In
these cases the objective is based on reducing the number of big items needed to place all the
pieces.
The leather nesting problem (LNP) consists in finding the best layouts for a set of irregular
Figure 1.1: An example from garment manufacturing (taken from Gomes and Oliveira [32])
Figure 1.2: An example of a leather cutting instance given by Baldacci et al. [7]
pieces within large natural leather hides, whose contours can be highly irregular and may include holes. According to the typology presented by Wäscher et al. [70], the LNP can be classified
as a two-dimensional residual cutting stock problem (2D-RCSP). Figure 1.2 shows an example
of a leather nesting problem.
Heistermann et al. [33] proposed a greedy algorithm for real instances in the automotive industry. They consider holes, defects and regions with different levels of quality (quality zones) in the leather hides. Recently, Alves et al. [3] have proposed several constructive algorithms, reporting an extensive computational experiment using two real data sets.
Baldacci et al. [7] use the raster representation to design a heuristic algorithm. They propose an iterative algorithm based on three different constructive procedures for the Irregular Single Knapsack Problem. They compare their algorithms on instances defined for two-dimensional strip packing problems, two-dimensional bin packing problems and real-world leather cutting instances.
Crispin et al. [21] propose two genetic algorithms using different coding methodologies
for shoe manufacturing. They consider directional constraints for the pieces in given regions of
the hides which reduce the solution space.
There are several publications on the leather nesting problem that do not take quality zones
into account. We can find a heuristic algorithm based on placing the shapes one by one on
multiple sheets in Lee et al. [39], and Yupins and Caijun [72] propose both genetic algorithms
and simulated annealing.
Nevertheless, in the literature on Nesting problems the variant with the most publications is the two-dimensional strip packing problem, where the width of the strip is fixed and minimizing the total required length is the common objective. This problem is categorized as the two-dimensional irregular Open Dimension Problem in Wäscher et al. [70]. Fowler and Tanimoto [28] demonstrate that this problem is NP-complete and, as a result, solution methodologies predominantly utilize heuristics. In Sections 1.4 and 1.5 we can find a literature review on exact
methods and heuristic algorithms, respectively. In this thesis we focus on that problem and
both exact and heuristic algorithms are proposed.
One important characteristic of the pieces is their shape. In Nesting problems pieces are
irregular and most frequently polygons. However, there are several publications that consider
circular edges. Burke et al. [18] propose a generalized bottom-left corner heuristic with hill climbing and tabu search algorithms for finding a good sequence of the given pieces. Scheithauer et al. [59] use φ-functions to deal with circular edges. In practice, circular edges of the
pieces could be approximated by polygons, so most of the publications consider the pieces as
polygons.
Another feature of the pieces is the allowed angles of rotation: rotation can be free; only
specific angles can be allowed (90°, 180°, ...); or rotation is not allowed at all. The angles of
rotation can be fixed because pieces have to respect a given pattern in the stock sheet or due to
the structural properties of the material being cut. In most of the published studies, rotation of
the pieces is not allowed or is restricted to several fixed angles, though there are several studies
dealing with free rotation. Xiao Liu and Jia-Wei Ye [41] propose a constructive heuristic algorithm and Stoyan et al. [63] use φ-functions.
The periodic packing of irregular shapes, also known as regular packing or lattice packing,
consists in packing congruent copies of one shape. Costa et al. [20] propose heuristic algorithms for large-scale periodic packing. They allow pieces to be rotated freely on the condition that all the pieces must follow the same orientation (single lattice) or can be rotated 180° (double lattice). According to the typology by Wäscher et al. [70], this problem is a two-dimensional irregular IIPP (Identical Item Packing Problem).
Another application of Nesting problems arises in the glass cutting industry. In this case
the cutting process divides the stock sheet (or the given part that is going to be cut) into two
different parts. These cuts are known as guillotine cuts and a direct consequence is that pieces
cannot have concavities. Pieces have to be placed into bins of a fixed width and length, so the
objective is to use as few bins as possible to pack all the pieces.
Finally, we can find several publications on three-dimensional nesting problems. Scheithauer et al. [67] extend φ-functions to three-dimensional objects and Egeblad et al. [26] propose a heuristic algorithm that can be applied to three-dimensional problems.
First we describe the pixel/raster method, which divides the stock sheet into pixels with
each pixel having a value to indicate whether it is used by a piece. This approach has the disadvantage of losing precision when the problem has irregular pieces. The second method is based on direct trigonometry, using the known D-functions. The third method uses the Non-Fit Polygon,
introduced by Art [5]. The Non-Fit Polygon (NFP) reduces the problem of identifying whether
two polygons overlap to the problem of checking if one point satisfies any of a subset of linear
inequalities. When pieces are allowed to be rotated, the NFPs can be used only by calculating the NFP of each pair of pieces for each combination of the allowed angles of rotation. Moreover, when the rotation of pieces is free, the NFP does not work. The Phi-function, introduced by Stoyan et al. [63], is a generalization of the NFP and gives a measure of the separation between a pair of pieces. The Phi-function tool is limited by the mathematical complexity needed for it to be defined. However, it considers the rotation of the pieces. A recent approach is the sentinel method proposed by Mascarenhas and Birgin [47], which also works for free rotation of the pieces but can be constructed only for pieces with simple (convex) shapes.
Oliveira and Ferreira [50] use a simple codification where each pixel of the matrix takes the value k = 0, 1, 2, . . . if there are k pieces covering it. Note that if a pixel takes the value 0, there is no piece covering it, and if a pixel takes a value greater than 1, there is overlapping in that pixel. In Figure 1.3 we can see the codification of one piece. It is important to mention that some pixels are not completely covered by the piece, but they have to be considered as covered in order to ensure that pieces have a non-overlapping configuration, so the corresponding value in the matrix is 1.
Figure 1.3: Oliveira and Ferreira representation (taken from Bennell and Oliveira [12]).
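As a small illustration of this codification (the helper names are ours, not from [50]), a sketch using a NumPy matrix:

```python
import numpy as np

# Sketch of the Oliveira and Ferreira codification: the strip is a pixel
# matrix and each placed piece adds 1 to every pixel it covers.

def place(board, piece_mask, row, col):
    """Add a piece (boolean mask) to the board, top-left pixel at (row, col)."""
    h, w = piece_mask.shape
    board[row:row + h, col:col + w] += piece_mask.astype(int)

def has_overlap(board):
    """A pixel with value greater than 1 is covered by two or more pieces."""
    return bool((board > 1).any())

board = np.zeros((5, 8), dtype=int)
square = np.ones((3, 3), dtype=bool)

place(board, square, 0, 0)
place(board, square, 0, 4)      # disjoint placement
print(has_overlap(board))       # False

place(board, square, 1, 1)      # overlaps the first square
print(has_overlap(board))       # True
```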
Segenreich and Braga [60] propose a codification which identifies not only overlapping, but also whether the pieces are in contact. Their codification uses a 1 for the boundary of the pieces and a 3 for the inner part. Then, in the pixel matrix, a pixel with a value greater than or equal to 4 indicates that there is overlapping, and a pixel with value 2 means that two pieces are in contact but do not overlap. If all the elements of the matrix are lower than 4, then the resulting solution is feasible. In Figure 1.4 we can see two pieces separately, and then their sum. Note that in the resulting sum we can observe some pixels with a value of 6, meaning that the inner parts of the pieces overlap, and pixels with the value 4 indicating that, in the respective pixels, the pieces are in a non-feasible position (the boundary of one piece over the inner part of the other piece).
Figure 1.4: Segenreich and Braga non-Boolean representation for irregular pieces (taken from Bennell and Oliveira [12]).
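A minimal sketch of this codification (the helper names are ours, and we assume solid rectangular pieces for simplicity):

```python
import numpy as np

# Sketch of the Segenreich and Braga codification: each piece is rasterized
# with value 1 on its boundary pixels and 3 in its interior. In the summed
# matrix, 2 = two boundaries in contact (feasible), while 4 = boundary over
# interior and 6 = interior over interior (both infeasible).

def coded_piece(h, w):
    """A solid h x w piece: boundary pixels = 1, interior pixels = 3."""
    m = np.ones((h, w), dtype=int)
    m[1:-1, 1:-1] = 3
    return m

def is_feasible(total):
    return bool((total < 4).all())

board = np.zeros((6, 10), dtype=int)
p = coded_piece(4, 4)

board[0:4, 0:4] += p
board[0:4, 3:7] += p            # shares one column: boundaries touch
print(is_feasible(board))       # True  (touching cells have value 2)

board[0:4, 5:9] += p            # interiors overlap the second piece
print(is_feasible(board))       # False
```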
Babu and Babu [6] propose a different idea. Oliveira and Ferreira’s codification, as with
Segenreich and Braga’s, uses 0 for representing the absence of pieces, and a number greater
than 0 for the presence of pieces. Babu and Babu use 0 for representing the inner part of the
pieces and a number greater than 0 for the adjacent pixels. These adjacent pixels are enumerated
from right to left in increasing order, starting with 1 at the right of the piece. In Figure 1.5 we
can observe how they represent a piece with a hole. The two central pixels with values 1 and 2
represent the hole in the piece.
The Babu and Babu codification generalizes the shape of the stock sheet. The big item
where we have to arrange the small items can be an irregular polygon and can also have holes
Figure 1.5: Babu and Babu representation (taken from Bennell and Oliveira [12]).
which cannot be used by any piece. One advantage of this codification is that the code of each pixel represents the number of pixels needed to push the represented piece to the right in order to make the solution feasible. In that way, in a given configuration it is easy to calculate the compaction of the pieces into the bottom-left corner of the strip. The disadvantage of this approach is the increase in the computational effort needed to update the pixel matrix, and its added complexity.
On the other hand, in the pixel/raster method the amount of required information depends
on the area of the pieces. When the pieces are larger, the pixel matrix is more complex and harder to update. If we use polygons, the amount of required information increases with the
number of vertices of the polygons used for representing the pieces.
The main problem of using polygons appears when we want to guarantee that pieces do not
overlap. The first idea that comes to mind is based on direct trigonometry. There are known
tools that allow us to identify when two segments intersect or if a given point is placed inside a
given polygon. These tools are more complex compared with the pixel/raster method. In fact,
the computational time of the pixel/raster method is quadratic in the size of the pixel matrix, whereas with direct trigonometry the computational time is exponential in the number of edges of the polygons.
In what follows we are going to present an approach using trigonometry to evaluate whether
two polygons overlap. In Figure 1.6(a) we can observe that if two polygons overlap, then their
respective enclosed rectangles also overlap. Figure 1.6(b) shows that if any two edges intersect,
then the rectangles whose diagonal is one of these edges also overlap.
Figure 1.6: (a) If two polygons overlap, then their respective enclosed rectangles also overlap. (b) If
two edges intersect, then the rectangles whose diagonal is one of the edges also overlap (taken from
Bennell and Oliveira [12]).
If the enclosing rectangles of two given polygons do not overlap, then the polygons are not in an overlapping position. This is much easier to check than whether two complex polygons overlap, so in order to know whether two pieces overlap the first step is to check whether their enclosing rectangles overlap. Ferreira et al. [4] study the reduction obtained when checking the feasibility of a given solution by first comparing the enclosing rectangles of each pair of pieces, a test which discards between 90.7% and 97.6% of the pairs. Furthermore, if the edges are also analyzed, the reduction ranges between 96% and 99.4%. Obviously, the complexity of the polygons reduces the effectiveness of the enclosing-rectangle test, but Ferreira et al. never obtain a reduction percentage lower than 90%.
In order to check whether two polygons overlap, Bennell and Oliveira [12] give a set of hierarchical tests:

Test 1: Do the bounding boxes of the polygons overlap? If not, the polygons do not overlap; otherwise go to Test 2.

Test 2: For each pair of edges from different polygons, do their respective bounding boxes overlap? If no pair does, the polygons do not overlap; otherwise go to Test 3.

Test 3: For each pair of edges from different polygons, does the edge analysis indicate an intersection? If so, the polygons overlap; otherwise go to Test 4.

Test 4: For one vertex of each polygon, does that vertex reside inside the other polygon? If so, the polygons overlap; otherwise they do not.
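The four tests can be sketched as follows (a simplified illustration with our own helper functions; Test 2 is used here only as a filter for the pairwise edge checks, containment is always checked, and degenerate touching cases are ignored):

```python
# Polygons are lists of (x, y) vertices in order; edges wrap around.

def bbox(pts):
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(a, b):
    ax1, ay1, ax2, ay2 = bbox(a)
    bx1, by1, bx2, by2 = bbox(b)
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def cross(o, a, b):                       # orientation of b w.r.t. line o->a
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def segments_intersect(p1, p2, q1, q2):   # proper intersection only
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1*d2 < 0 and d3*d4 < 0

def point_in_polygon(pt, poly):           # ray casting
    x, y = pt
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i+1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1)*(x2 - x1)/(y2 - y1):
            inside = not inside
    return inside

def edges(poly):
    return [(poly[i], poly[(i+1) % len(poly)]) for i in range(len(poly))]

def polygons_overlap(A, B):
    if not boxes_overlap(A, B):                          # Test 1
        return False
    for p1, p2 in edges(A):                              # Tests 2 and 3
        for q1, q2 in edges(B):
            if boxes_overlap([p1, p2], [q1, q2]) and \
               segments_intersect(p1, p2, q1, q2):
                return True
    # Test 4: no edge crossing, so overlap is only possible by containment
    return point_in_polygon(A[0], B) or point_in_polygon(B[0], A)

sq = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(polygons_overlap(sq, [(6, 6), (8, 6), (8, 8)]))   # False (Test 1)
print(polygons_overlap(sq, [(2, 2), (6, 2), (6, 6)]))   # True  (Test 3)
print(polygons_overlap(sq, [(1, 1), (2, 1), (2, 2)]))   # True  (Test 4)
```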
Figures 1.7 (a) and (b) show, respectively, negative answers for Tests 1 and 2. An example
of an affirmative answer is shown in Figures 1.7 (c) and (d), and in Figure 1.7 (e) we can see
both possibilities for Test 4.
Figure 1.7: Different cases of the relative position between two pieces (taken from Bennell and Oliveira
[12]).
Preparata and Shamos [54] propose a test to identify if a given point is inside a polygon.
This test is useful to solve Test 4. In order to answer Test 3, we can use the known D functions,
which are an efficient tool to obtain the relative position between two edges.
Figure 1.8: Interpretation of the D functions (taken from Bennell and Oliveira [12]).
These functions come from the equation of the distance from a point to a line; their interpretation is shown in Figure 1.8. We have to consider the origin of the coordinate axes in the bottom-left corner, i.e. the X axis always increases to the right and the Y axis increases upward. It is possible to use these functions to study the relative position between two segments with given orientations. Mahadevan [43] gives a complete study differentiating all the cases.
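A common formulation, under our own sign convention (Mahadevan [43] gives the full case analysis), is D(A, B, P) = (Bx − Ax)(Py − Ay) − (By − Ay)(Px − Ax), with D > 0 when P lies to the left of the directed line AB, D < 0 to the right, and D = 0 on the line:

```python
# D function for the directed line through A and B and a point P.
# Sign convention: positive = left of AB, negative = right, zero = collinear.

def D(a, b, p):
    return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])

A, B = (0.0, 0.0), (2.0, 0.0)       # directed line pointing right
print(D(A, B, (1.0, 1.0)))          # 2.0  -> left of AB
print(D(A, B, (1.0, -1.0)))         # -2.0 -> right of AB
print(D(A, B, (5.0, 0.0)))          # 0.0  -> collinear
```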
We denote by PA the reference point of polygon A and by PB the reference point of polygon B. Let us consider that both reference points are in the bottom-left corner of the enclosing rectangle of their respective polygons. The NFP of pieces A and B, denoted by NFPAB, is the region in which the reference point of polygon B cannot be placed because it would overlap polygon A. To build it, PA is placed at the origin and polygon B slides around polygon A in such a way that there is always a point of polygon B touching the border of polygon A. The left-hand side of Figure 1.9 shows several positions of polygon B (triangle) moving around polygon A (square). The right-hand side of the figure shows NFPAB, the forbidden region for placing PB relative to PA if overlapping is not allowed. Note that NFPBA matches NFPAB with a rotation of 180°.
When one or both polygons are non-convex, the construction of the NFP is more complex.
Figure 1.11, taken from Bennell and Oliveira [12], shows complicated cases. In Figure 1.11(a),
piece B has some feasible positions within the concavity of A and therefore NFPAB includes a
small triangle of feasible placements for the reference point of B. In Figure 1.11(b), the width
of B fits exactly into the concavity of A and its feasible positions in the concavity produce a
segment of feasible positions for the reference point of B in NFPAB . In Figure 1.11(c), there is
exactly one position in which B fits into the concavity of A and then NFPAB includes a single
feasible point for the reference point of B.
Figure 1.11: Special cases of NFP when non-convex pieces are involved (taken from Bennell and Oliveira [12])
In this thesis we assume that for each pair of pieces i, j, NFPij is given by a polygon and, where needed, by a set of points (as in Figure 1.11(c)), a set of segments (as in Figure 1.11(b)) or a set of enclosed polygons (as in Figure 1.11(a)).
Mahadevan [43] proposes a sliding algorithm in order to obtain the NFP. Let A and B be two
pieces and PA and PB be their respective reference points. In order to obtain NFPAB , we first
need an initial position of the pieces that guarantees the touching position without overlapping.
To ensure that, we match the maximum value of the Y coordinate of B with the minimum value
of the Y coordinate of A. The reference point of B in this position is going to be the first vertex
of NFPAB . In the case that there is more than one position satisfying the previous condition,
then the initial vertex will be the point located further to the left in order to ensure that the
initial vertex is a vertex of NFPAB .
Then, in Figure 1.12 we can see that there are three different possibilities for the first movement of B:
• Case (a): The slope of edge (aj, aj+1) is lower than the slope of edge (bj, bj+1). In that case, bj slides over the edge (aj, aj+1).

• Case (b): The slope of edge (bj, bj+1) is lower than the slope of edge (aj, aj+1). In that case, aj slides over the edge (bj, bj+1).

• Case (c): Both edges, (aj, aj+1) and (bj, bj+1), have the same slope. In that case, both slides of cases (a) and (b) are done.
Figure 1.12: Different cases to begin to slide piece B around piece A (taken from Bennell and Oliveira
[12]).
Mahadevan uses the D functions in order to identify the inner side of each segment of NFPAB, obtained by sliding a complete vertex-edge combination of pieces A and B. If either piece has a concavity, sliding one vertex completely over one edge could leave the polygons in an overlapping position; see Figure 1.13. In that case, the slide is limited to the minimum distance from the starting point to the intersection point of both polygons, which is the maximum distance available for sliding B along the edge without overlapping piece A.
Figure 1.13: Projection of B when it is moved over the edge of A (taken from Bennell and Oliveira [12]).
The sliding algorithm only works for pairs of relatively simple connected polygons. For example, in Figure 1.11 this algorithm does not find the hole in (a) or the point in (c).
Whitwell [71] proposes an extension of the sliding algorithm in order to identify the holes. He studies whether there is any edge of either polygon which is not traveled over, i.e. not used for sliding any point in the Mahadevan algorithm. If there is any relative position over the non-visited edges in which no overlap is produced, then there is a hole in NFPAB. This hole can be obtained in a similar way, by a slide as above, whose result is the border of the hole. The Whitwell algorithm first applies the Mahadevan sliding algorithm, flagging the edges considered. If there is any non-flagged edge, we can suppose, without loss of generality, that it belongs to A, and each vertex of B is checked to see if it can slide along this edge without producing an overlap. This can be studied by checking whether the two edges whose intersection defines the given vertex of B are both to the left of the given edge of A. If one or both of them are placed to the left of the edge of A, then there is no feasible slide between A and B in which the given vertex of B touches the given edge of A; see Figure 1.14.
Figure 1.14: (a) bi and bj are both on the right of edge a, (b) bi is on the left and bj is on the right of a, (c) bi and bj are both on the left of a (taken from Bennell and Oliveira [12]).
Let us suppose that there is one vertex of B such that the two incident edges are both to
the right of edge a, as shown in Figure 1.14 (a). In that case it is necessary to study whether
there are some positions in which to place this vertex over the edge a which produce an overlap
of pieces A and B. In order to check this, Whitwell uses a similar structure to the Mahadevan
algorithm. Figure 1.15 shows an example of how the Whitwell algorithm works in order to find
a feasible initial point starting from an overlapping position. Obviously, when the shapes of
the pieces are more complex, having more edges and concavities, this approach will need more
computational effort.
Figure 1.15: The Whitwell method for finding feasible initial points (taken from Bennell and Oliveira
[12]).
Let A and B be two arbitrary closed sets of points in R2. The Minkowski sum S of A and B is the set obtained by adding every point of A to every point of B:

S = A ⊕ B = {a + b | a ∈ A, b ∈ B}.
The union of geometric translations of sets can also define the Minkowski sum. If A_b denotes set A translated by vector b, then

S = A ⊕ B = ∪_{b∈B} A_b.
Stoyan and Ponomarenko [64] first formalized the relationship between Minkowski sums and NFPs, providing a rigorous proof. They termed the NFP the hodograph. Proof of this relationship is also discussed in Milenkovic et al. [48] and Bennell [8], who describe it as the Minkowski difference. In order to use Minkowski sums and their relationship with the NFP, the above definition needs to be turned into a procedure for obtaining the NFP. Ghosh [30] develops a set of Boundary Addition Theorems for both the convex and non-convex cases that underpin his method of obtaining the Minkowski sum. They show that there is sufficient information in the boundary of the polygons to obtain the boundary of the Minkowski sum. The theorems also support the use of slope diagrams, which form the basis of his approach, for representing the Minkowski sum. Ghosh [30] provides a detailed explanation of these theorems and Bennell [8] provides a discussion of the approach with respect to the NFP. In the tutorial by Bennell and Oliveira [12] we can find easy examples of Ghosh's idea. Recently, Bennell et al. [61] have proposed an efficient algorithm to calculate the NFPs using Minkowski sums based on the principles defined by Ghosh.
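For two convex polygons the relationship can be illustrated directly: NFPAB is the Minkowski sum A ⊕ (−B), where −B is B reflected through its reference point, and for convex inputs this reduces to the convex hull of all pairwise vertex differences. A brute-force sketch (our own illustration, not Ghosh's slope-diagram method):

```python
# NFP of two CONVEX polygons as the convex hull of pairwise vertex
# differences, i.e. the Minkowski sum A + (-B).

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counterclockwise."""
    pts = sorted(set(points))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and \
                  (h[-1][0]-h[-2][0])*(p[1]-h[-2][1]) - \
                  (h[-1][1]-h[-2][1])*(p[0]-h[-2][0]) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def nfp_convex(A, B):
    diffs = [(ax - bx, ay - by) for ax, ay in A for bx, by in B]
    return convex_hull(diffs)

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
triangle = [(0, 0), (1, 0), (0, 1)]
print(nfp_convex(square, triangle))
```

The boundary of the returned polygon is the set of placements of the reference point of B in which the two pieces touch without overlapping.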
1.3.4 φ functions
The φ function is a concept from topology. This tool can be viewed as a generalization of the NFPs, whose purpose is to represent all the mutual positions of two pieces, which can be polygons or pieces with curved edges. An extensive tutorial on φ functions and φ objects can be found in Bennell et al. [14].
The φ function is a mathematical expression which describes the interaction between two
geometrical objects in such a way that the positions of the objects are the input and a real value
is the output. This value can be viewed as the distance between the objects: if the value is 0,
the pieces are touching each other; if the value is positive, the pieces are separated; and if the
value is negative, the pieces overlap.
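The simplest case, the φ function for two circles (a standard base case in the φ-function literature; the code itself is our own sketch), illustrates this interpretation directly:

```python
import math

# φ function for two circles: centre distance minus the sum of the radii.
# Positive = separated, zero = touching, negative = overlapping.

def phi_circles(c1, r1, c2, r2):
    return math.dist(c1, c2) - (r1 + r2)

print(phi_circles((0, 0), 1.0, (3, 0), 1.0))   # 1.0  -> separated
print(phi_circles((0, 0), 1.0, (2, 0), 1.0))   # 0.0  -> touching
print(phi_circles((0, 0), 1.0, (1, 0), 1.0))   # -1.0 -> overlapping
```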
Stoyan et al. [63] use the φ functions to solve the non-overlapping problem for Nesting problems. They consider problems with several copies of one and two pieces. In Stoyan et al. [66] this idea is applied to 3-dimensional Nesting problems. In this case, they use the φ functions not only to know whether a given solution is feasible, but also to define the mathematical model.
For basic pieces in two dimensions, a φ function class is already defined (see Stoyan et al.
[68]). When dealing with more complex pieces, each piece can be decomposed into convex
polygons or basic pieces, which correspond to a wide set of fixed-orientation bi-dimensional
objects, see Stoyan et al. [65]. Scheithauer et al. [59] study the different combinations of
basic pieces with circular segments. Stoyan et al. [67] and [66] apply this concept to the 3-
dimensional case.
There is a close relation between the φ function, the NFP and the Minkowski sum. When the rotation of both pieces and the position of one of the pieces are fixed, the set of points in which the value of the φ function is 0 describes the boundary of the NFP. However, the φ function contains more information, because it is not necessary to fix one of the pieces, and it also gives an overlapping measure.
The only exact algorithm that can be found in the literature is proposed by Fischetti and
Luzzi [27]. They develop a Mixed Integer Programming (MIP) model and a branching strategy
which is used with CPLEX. The non-overlapping constraints of the Fischetti and Luzzi MIP use
tighter coefficients instead of big-M constants. The branching strategy is based on separating pieces
by blocks: when a block of n pieces is already separated, the next piece which overlaps
with it is separated from the pieces of the block. Their exact algorithm has been tested on
three instances, glass1, glass2 and glass3, with 5, 7 and 9 pieces, respectively. These instances
are broken-glass problems, for which the optimal solution has 100% utilization. Fischetti and
Luzzi [27] are able to prove optimality only for instances glass1 and glass2.
One of the first strategies to deal with the Nesting problem is based on the well-known
BL (Bottom-Left) placement rule. This rule iteratively moves each piece to the leftmost feasible
position with respect to the set of already placed polygons. Albano and Sapuppo [1] propose
this constructive algorithm, in which ties are broken by preferring the lowest feasible position.
In Blazewicz [16] we can find an extension which allows the polygons to be placed into a hole
surrounded by already placed polygons. Dowsland and Dowsland [24] and Gomes and Oliveira
[31] extend the constructive algorithm by considering the whole container instead of the envelope
formed by the pieces already placed. Gomes and Oliveira [31] add a local search to
find a good sequence of the given polygons, using a random weighted length criterion. Finally,
Burke et al. [18] propose an extension of the algorithm to deal with irregular pieces having
circular edges. They propose searching from the left side of the layout through the unfeasible
positions until a feasible position is found.
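As an illustration only — the thesis works with irregular polygons, whereas this sketch of ours simplifies to axis-aligned rectangles and scans a finite candidate grid derived from the placed pieces, so the function name and the candidate set are our own assumptions — the Bottom-Left rule can be sketched as:

```python
def bl_place(placed, w, h, strip_width):
    """Bottom-Left rule sketch for axis-aligned rectangles.
    placed: list of (x, y, w, h) rectangles already in the layout.
    Scans candidate x coordinates left to right and, within each x,
    candidate y coordinates bottom to top (leftmost position wins,
    ties broken by the lowest position, as in the BL rule)."""
    xs = sorted({0} | {px + pw for px, py, pw, ph in placed})
    ys = sorted({0} | {py + ph for px, py, pw, ph in placed})
    best = None
    for x in xs:
        for y in ys:
            if y + h > strip_width:     # piece would protrude from the strip
                continue
            overlap = any(x < px + pw and px < x + w and
                          y < py + ph and py < y + h
                          for px, py, pw, ph in placed)
            if not overlap and (best is None or (x, y) < best):
                best = (x, y)
        if best is not None:
            break  # xs is scanned left to right, so this x is the leftmost feasible
    return best

print(bl_place([(0, 0, 2, 2)], 2, 3, 4))  # (2, 0): next to the placed square
```

The candidate grid (corners of placed rectangles) is a simplification; with irregular polygons the feasible region is delimited by NFPs and the scan is continuous.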
Oliveira et al. [51] develop the TOPOS algorithm, which uses an alternative placement rule. The
positions of the pieces on the stock sheet are not fixed; only the relative positions between the
pieces already placed are fixed. They investigate different placement rules for growing
the solution, with the aim of keeping the layout as compact as possible while trying to
avoid an increase in length.
The JOSTLE algorithm, proposed by Dowsland et al. [25], alternates between packing from
the left end of the stock sheet and packing from its right end; the sequence of pieces is
determined by the x-coordinate of each piece in the previous iteration.
On the other hand, there are many publications that use algorithms based on linear programming
(LP) for compaction and separation. Milenkovic and Li [49] propose different algorithms
for the compaction and separation of pieces: a physically-based simulation method for compaction
and a position-based model that finds a local optimum for separation. Bennell and Dowsland
[10] propose LP-based compaction and separation algorithms combined with a tabu search, whose
original version was proposed without the integration of LP in Bennell and Dowsland [9]. The
tabu search is used to solve the overlap minimization problem (OMP), which minimizes the
overlap penalty over all pairs of polygons under the constraint that they are placed into a
container of a given width and length. The solution obtained by solving the OMP may be unfeasible.
Gomes and Oliveira [32] develop a simulated annealing algorithm and use an LP model
for compaction and separation. This model is, initially, a Mixed Integer Programming (MIP)
model where the relative position of each pair of polygons is determined by a set of binary
variables. Then they fix the binary variables and the model is transformed into an LP model
which is easy to solve and is used for compaction and separation. They use the constructive
TOPOS heuristic presented in [51] to build the initial solution. Song and Bennell [62] use the
TOPOS heuristic to design a heuristic algorithm based on the study of the permutations by
adopting a beam search.
Egeblad et al. [26] propose a guided local search algorithm for the OMP in which the
neighborhood consists of moving each polygon vertically and horizontally from its
current position. In this approach pieces cannot protrude from the stock sheet, and the
intersection area of each pair of polygons is used as the overlap penalty.
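For axis-aligned rectangles the intersection-area penalty has a closed form; the following sketch (our own simplification of the polygon case, with hypothetical function names) computes the pairwise penalty and the resulting OMP-style objective:

```python
def overlap_area(r1, r2):
    """Intersection area of two axis-aligned rectangles (x, y, w, h):
    the overlap penalty for one pair, in the rectangle simplification."""
    ax, ay, aw, ah = r1
    bx, by, bw, bh = r2
    dx = min(ax + aw, bx + bw) - max(ax, bx)   # width of the intersection
    dy = min(ay + ah, by + bh) - max(ay, by)   # height of the intersection
    return max(dx, 0) * max(dy, 0)             # 0 when the pieces are apart

def total_overlap(rects):
    """Sum of pairwise overlap penalties: the quantity an OMP solver drives to 0."""
    return sum(overlap_area(rects[i], rects[j])
               for i in range(len(rects)) for j in range(i + 1, len(rects)))

print(overlap_area((0, 0, 2, 2), (1, 1, 2, 2)))  # 1 (a unit square of overlap)
```

For general polygons the intersection area requires a polygon-clipping routine, but the role of the penalty in the objective is the same.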
Imamichi et al. [35] propose an iterated local search (ILS) algorithm for an OMP variant
which allows pieces to protrude from the stock sheet. They use a measure of overlap that they
call the penetration depth, which is the minimum translational distance needed to separate a given
pair of polygons, and they incorporate a nonlinear programming technique. Umetani et al. [69]
propose a guided local search for a similar OMP which adds a direction to the pieces in order
to calculate the overlap measure (the directional penetration depth). They develop an algorithm
which finds the position with minimum overlap penalty for each polygon when it is translated
in a specified direction.
Recently, Leung et al. [40] propose an extended local search based on the separation
algorithm used in Imamichi et al. [35]; a tabu search is used to avoid local minima and
a compaction algorithm is also applied. Kubagawa et al. [58] propose a two-level algorithm in which an
external level controls the value of the length (the open dimension) and an inner level controls
the initial temperature used for simulated annealing; the objective is to place all the items
inside the container. They use the collision-free region, which indicates the permitted placements
for the insertion of the pieces.
1.6 Instances
In this section we present the instances that can be found in the literature and a
set of smaller instances that we have created in order to test the exact algorithm developed for
Nesting problems. In the first subsection the best-known instances and their characteristics
are presented. In the second subsection we introduce the set of smaller instances for testing
the exact procedure. Finally, since in Chapter 9 we develop a constructive algorithm for the
two-dimensional irregular bin packing problem with guillotine cuts, the corresponding set of
instances is presented in the third subsection.
• No rotation allowed.
Most of the known instances in the literature consider one of the first two types of
instances; it is not common to allow free rotation of the pieces. Table 1.1 shows the set of
instances that can be found in the literature, ordered by non-decreasing number of pieces.
Fischetti and Luzzi [27] use the first three instances, glass1, glass2 and glass3, to test their
exact algorithm. The remaining instances have been used only for heuristic algorithms, due to
their large number of pieces.
In what follows we present pictures of the best results obtained with the algorithms presented
in this thesis. We use the notation Lbest to denote the best known solution for the
corresponding problem, which is not necessarily the optimal solution.
Table 1.1: Nesting instances from the literature
Columns: Instance | Types of pieces | Number of pieces | Average number of vertices | Allowed rotation angles | Plate width | Problem type | Authors
glass1 5 5 6 0 45 Puzzle,convex Fischetti and Luzzi [27]
glass2 7 7 5.42 0 45 Puzzle,convex Fischetti and Luzzi [27]
glass3 9 9 5.44 0 45 Puzzle Fischetti and Luzzi [27]
dighe2 10 10 4.7 0 100 Jigsaw puzzle Dighe and Jakiela [23]
fu 12 12 3.58 0-90-180-270 38 Artificial, convex Fujita et al. [29]
poly1a 15 15 4.6 0-90-180-270 40 Artificial Hopper [34]
poly1a0 15 15 4.6 0 40 Artificial Hopper [34]
dighe1 16 16 3.87 0 100 Jigsaw puzzle Dighe and Jakiela [23]
mao 9 20 9.22 0-90-180-270 2550 Garment Bounsaythip and Maouche [17]
blaz2 4 20 7.5 0-180 15 Artificial Oliveira and Ferreira [50]
marques 8 24 7.37 0-90-180-270 104 Garment Marques et al. [45]
albano 8 24 7.25 0-180 4900 Garment Albano and Sapuppo [1]
jakobs1 25 25 5.6 0-90-180-270 40 Artificial Jakobs [37]
jakobs2 25 25 5.36 0-90-180-270 70 Artificial Jakobs [37]
shapes2 7 28 6.29 0-180 15 Artificial Oliveira and Ferreira [50]
dagli 10 30 6.3 0-180 60 Garment Ratanapan and Dagli [55]
poly2a 15 30 4.6 0-90-180-270 40 Artificial Hopper [34]
poly2b 30 30 4.53 0-90-180-270 40 Artificial Hopper [34]
shapes1 4 43 8.75 0-180 40 Artificial Oliveira and Ferreira [50]
shapes0 4 43 8.75 0 40 Artificial Oliveira and Ferreira [50]
poly3a 15 45 4.6 0-90-180-270 40 Artificial Hopper [34]
poly3b 45 45 4.6 0-90-180-270 40 Artificial Hopper [34]
swim 10 48 21.9 0-180 5752 Garment Oliveira and Ferreira [50]
poly4a 15 60 4.6 0-90-180-270 40 Artificial Hopper [34]
poly4b 60 60 4.6 0-90-180-270 40 Artificial Hopper [34]
trousers 17 64 5.06 0-180 79 Garment Oliveira and Ferreira [50]
poly5a 15 75 4.6 0-90-180-270 40 Artificial Hopper [34]
poly5b 75 75 4.57 0-90-180-270 40 Artificial Hopper [34]
shirts 8 99 6.63 0-180 5752 Garment Dowsland et al. [25]
Best solutions found with the algorithms presented in this thesis:
Jakobs1: L = 11.31; Jakobs2: L = 23.76; Dagli: L = 57.54; (transformed) swim: L = 6164.40; shirts: L = 63.16; poly4a: Lbest = 54.14; poly5a: L = 70.56
Instances three, threep2, threep2w9, threep3 and threep3w9 are constructed with copies of
3 pieces with simple shapes: a square, a triangle and a diamond.
Pieces from instances shapes4 and shapes8 are taken from instance shapes0 in Table 1.1,
which has pieces with very irregular shapes.
Instance fu in Table 1.1 has 12 pieces and 4 allowed angles of rotation. We create instance
fu12 by using the same pieces as instance fu but with a fixed orientation for the pieces.
Furthermore, we have built instances fu5, fu6, fu7, fu8, fu9 and fu10, which consider, respectively,
the first 5, 6, 7, 8, 9 and 10 pieces of instance fu.
Instances glass1, glass2, glass3, dighe2 and dighe1ok are the broken glass instances used
by Fischetti and Luzzi [27].
We have built 15 instances from instance Jakobs1 by choosing 10, 12 and 14 pieces randomly.
Analogously, we have built 15 instances from instance Jakobs2. We call these instances
Ja-b-c-d, where a denotes the initial problem (Jakobs1 or Jakobs2); b denotes the number of
pieces; c represents the width of the strip; and d the creation order, used to distinguish between
the 5 instances of each type.
In what follows, we present pictures of the best results obtained with the exact algorithms.
We use the notation Lub to denote an upper bound, that is, a solution whose optimality has not
been proved, and L if the solution is proved optimal.
Table 1.2: Small instances used to test the exact algorithms
three: L = 6 threep2: L = 9.33 threep2w9: L = 8 threep3: L = 13.53 threep3w9: L = 11
J2-10-35-0: L = 23.66 J2-10-35-1: L = 21.30 J2-10-35-2: L = 19.95 J2-10-35-3: L = 20.37 J2-10-35-4: L = 19.4
J2-12-35-0: L = 26.21 J2-12-35-1: L = 24.22 J2-12-35-2: L = 21.5 J2-12-35-3: L = 21.73 J2-12-35-4: L = 23.21
J2-14-35-0: Lub = 29.53 J2-14-35-1: Lub = 29.75 J2-14-35-2: Lub = 26 J2-14-35-3: Lub = 26 J2-14-35-4: Lub = 25
1.6.3 Instances for the two-dimensional irregular bin packing problem with guillotine cuts
The two-dimensional irregular bin packing problem with guillotine cuts is a recent problem,
and Bennell et al. [11] is the only paper which has studied it so far. They propose 8 instances, 4
of them corresponding to real data from industry and the other 4 generated using properties
of the industrial data. The number of pieces ranges between 40 and 149. The instance
name is coded by a letter and a number: the letter is J if the data are provided by industry
or H if the data are generated; the number represents the total number of pieces to be packed
into the bins.
Table 1.3 provides details of the test data: the average and standard deviation of the number
of edges, and the average and standard deviation of the piece areas. This table has been obtained
from [11].
Table 1.3: Test instances for the problem with guillotine cuts
In what follows, we present pictures of the best solutions obtained on these instances with
the constructive algorithms presented in Chapter 9.
Figure 1.16: J40
Figure 1.18: J60
Figure 1.20: H80
Figure 1.22: H120
Figure 1.23: H149
Chapter 2
In this chapter we will first describe the formulation used by Gomes and Oliveira [32], and then
two new proposals based on the ideas of Fischetti and Luzzi [27]. In all cases the objective
function will be the minimization of L, the strip length required to accommodate all the pieces
without overlapping. Also, all formulations contain two types of constraints: those preventing
the pieces from exceeding the dimensions of the strip and those forbidding the pieces from
overlapping. The differences between formulations lie in the way these constraints are defined.
More specifically, the Gomes and Oliveira formulation, GO, and our proposals, HS1 and
HS2, differ in the way they use the NFP to define the non-overlapping constraints. Gomes and
Oliveira assign a binary variable to each edge or (convex) concavity of the NFP and then use a
big M constant to activate or deactivate each one of the non-overlapping constraints. In our HS1
and HS2 formulations we take the Fischetti and Luzzi idea of partitioning the outer region of
each NFP into convex polygons, called slices, and use it in a particular way, defining horizontal
slices. A binary variable is then assigned to each slice and all the variables corresponding to
the slices of one NFP are included in each one of the non-overlapping constraints constructed
from this NFP in order to eliminate the big M. The third formulation proposed, HS2, has the
same non-overlapping constraints as the HS1 formulation, but the containment constraints are
lifted by considering the interaction between each pair of pieces.
Let P = {p1 , . . . , pN } be the set of pieces to be arranged into the strip. We denote the length
and the width of the strip by L and W, respectively. The bottom-left corner of the strip is placed
at the origin. The placement of each piece is given by the coordinates of its reference point,
(xi , yi ), which is the bottom-left corner of the enclosing rectangle, see Figure 2.1. We denote by
w_i and l_i the width and length of piece i, respectively. Let i and j be two pieces. The number
of binary variables defined in order to separate pieces i and j is denoted by m_ij. The binary
variables defined over NFP_ij are denoted by b_ijk, k = 1, ..., m_ij. The set of binary variables
created from NFP_ij is denoted by VNFP_ij. The number of inequalities needed to describe
slice k of NFP_ij (the slice defined by variable b_ijk) is denoted by f_ij^k.
[Figure 2.1: the reference point (x_i, y_i) of a piece i. Figure: an NFP_ij whose outer region is partitioned into slices b_ij1, ..., b_ij4.]
y_j − y_i ≥ w_i b_ij1    (2.1)

where w_i is the width of piece i. However, this inequality is not valid if b_ij1 = 0. In order to transform
(2.1) into a valid inequality, Gomes and Oliveira [32] include a big-M term (where M is a large
positive number). The valid constraint would be:

y_j − y_i ≥ w_i − (1 − b_ij1) M    (2.2)

α_ijk (x_j − x_i) + β_ijk (y_j − y_i) ≤ q_ijk + M (1 − b_ijk)    (2.3)
Min L    (2.4)
s.t.  x_i ≤ L − l_i,  i = 1, ..., N    (2.5)
      y_i ≤ W − w_i,  i = 1, ..., N    (2.6)
      α_ijk (x_j − x_i) + β_ijk (y_j − y_i) ≤ q_ijk + M (1 − b_ijk),  1 ≤ i < j ≤ N,  k = 1, ..., m_ij    (2.7)
      Σ_{k=1}^{m_ij} b_ijk ≥ 1,  1 ≤ i < j ≤ N    (2.8)
      b_ijk ∈ {0, 1},  1 ≤ i < j ≤ N,  k = 1, ..., m_ij    (2.9)
      x_i, y_i ≥ 0,  1 ≤ i ≤ N    (2.10)
The objective function (2.4) minimizes the required length of the strip. Constraints (2.5),
(2.6) and (2.10) are bound constraints, keeping the pieces inside the strip. Constraints (2.7)
prevent overlapping. Variables b_ijk are binary (constraints (2.9)) and at least one of them must
take value 1 for each NFP (constraints (2.8)).
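To make the role of the big-M term concrete, the following sketch (our own illustration, not code from the thesis) checks constraints (2.7)-(2.8) for a single pair of pieces; the edge data in the usage example, the NFP of two unit squares, is a hypothetical construction of ours.

```python
def go_constraints_ok(edges, pos_i, pos_j, b, M=1e6):
    """Check GO non-overlapping constraints (2.7)-(2.8) for one pair of pieces.
    edges: list of (alpha, beta, q) coefficients, one per NFP edge; b: 0/1 per edge.
    The big-M term deactivates constraint k whenever b[k] = 0."""
    if sum(b) < 1:                         # constraint (2.8): at least one edge active
        return False
    dx = pos_j[0] - pos_i[0]
    dy = pos_j[1] - pos_i[1]
    return all(a * dx + be * dy <= q + M * (1 - bk)   # constraints (2.7)
               for (a, be, q), bk in zip(edges, b))

# Hypothetical example: two unit squares, so NFP_ij is the square [-1, 1]^2.
# Each edge constraint, when activated, pushes piece j outside one side of it.
edges = [(0, -1, -1),   # j above i:  y_j - y_i >= 1
         (0,  1, -1),   # j below i:  y_i - y_j >= 1
         (-1, 0, -1),   # j to the right of i
         (1,  0, -1)]   # j to the left of i
print(go_constraints_ok(edges, (0, 0), (0, 1.5), [1, 0, 0, 0]))  # True
```

Note that with b[k] = 0 the right-hand side becomes so large that the constraint is trivially satisfied, which is exactly the behaviour the slice-based formulations below avoid.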
One potential disadvantage of this formulation is that this definition of the non-overlapping
constraints does not limit the relative position of the pieces very strictly. Let us consider the
more complex situation in Figure 2.3. As NFP_12 has 8 edges, 8 binary variables are defined.
If p_2 is placed in the shaded region on the upper left-hand side, two variables, b_121 and b_122,
can take value 1. In fact, if p_2 is placed well to the left of p_1 in that zone, even variable b_123
can take value 1. If we use this formulation in a branch and bound procedure, many different
branches can contain the same solution, and that can slow down the search.
[Figure 2.4: three different ways of partitioning the outer region of an NFP into slices.]
that b_ijk = 1 if the reference point of piece j is placed in S_ij^k, and 0 otherwise. In this approach,
in order for the slices to be convex polygons, segments or points, an upper bound on the
length of the strip is needed. It can be obtained by a simple heuristic algorithm or simply by
adding up the lengths of all the pieces. Vertically, the maximum separation between
each pair of pieces is given by the width of the strip, W.
Fischetti and Luzzi do not specify the way to obtain the slices. In Figure 2.4 we can see
three different ways to define the slices. Note that the way in which the slices are defined
affects the number of slices corresponding to an NFP. For instance, the third case in Figure 2.4
requires one slice more than the others.
The main idea of our proposal is to define the slices through horizontal inequalities.
In a first step, if necessary, the complexities of the NFP are eliminated and one or more
binary variables are assigned to each complexity. Figure 1.11 shows the three types of
complexities that can be found in NFPs; if one of these complexities appears in the NFP, then
it ceases to be a polygon. So, in this step, we transform the NFP into a polygon.
In a second step, all the concavities of the NFP are closed, and each concavity
produces one or more binary variables, depending on the degree of concavity. After this
procedure the NFP has been transformed into a convex polygon.
Finally, in a third step, the slices of the outer region of the NFP (now transformed into a convex
polygon) are defined using horizontal inequalities. In this step a binary variable is assigned to
each of the slices defined.
The set of binary variables defined in the three steps for each pair of pieces is the set of all
the binary variables in the formulation.
Finally, the non-overlapping constraints are constructed without the big-M constants of
the GO formulation.
Once these complexities have been considered, NFP_ij is just a polygon, convex or not. We say
that the binary variables defined in this step define complex slices. We denote by B1 the set of
binary variables which define complex slices.
These complexities are not common in real-life problems, because the pieces must have
very specific shapes in order to produce an NFP with one of these types of complexities.
[Figure 2.6: an NFP with a concavity C = {v4, v5, v6, v7}. Figure 2.7: the iterative closing of the convex concavities of an NFP; binary variables b_ij1, ..., b_ij6 are associated with the closed regions.]
is not to the right-hand side of the line defined by v_{n+r−2} and v_{n+r−1}. In Figure 2.6 there is just
one concavity, C = {v4, v5, v6, v7}, but in the general case many concavities can appear.
We say that a given concavity is a convex concavity if the polygon defined by the vertices
of the concavity is convex. In the general case, from the list of convex concavities of the NFP,
we choose the one with the largest number of vertices and close it by adding a segment from its
first to its last vertex. Other convex concavities can also be closed if they are disjoint from those
already closed. The list of NFP_ij vertices is updated, eliminating the intermediate vertices,
and a variable b_ijk is associated with each closed concavity. The process is repeated until
the resulting polygon NFP^c_ij is convex. Figure 2.7 illustrates the process. In a first iteration
several convex concavities are found: C1 = {v2, v3, v4}, C2 = {v5, v6, v7}, C3 = {v8, v9, v10},
C4 = {v10, v11, v12, v13}, C5 = {v13, v14, v15}, C6 = {v16, v17, v18} and C7 = {v19, v20, v1}. C4, with 4
vertices, is chosen to be closed, as are C1, C2, C6 and C7, which are disjoint from C4 and from
each other. Binary variables b_ij1 to b_ij5 are associated with these closed regions. The updated
NFP appears in Figure 2.7(c). As this polygon is still non-convex, the process is repeated in
Figure 2.7(d) until a convex polygon is obtained.
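The vertices removed by this iterative closing are exactly the vertices that end up strictly inside the convex hull of the NFP. A compact one-shot sketch of that observation (our own simplification: it identifies those vertices via Andrew's monotone chain, without recording the per-concavity binary variables):

```python
def convex_hull(pts):
    """Andrew's monotone chain; returns the extreme vertices of the hull."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            # pop while the last turn is not strictly counter-clockwise
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def pockets(polygon):
    """Vertices of the polygon that are not extreme vertices of its hull:
    the vertices that the concavity-closing procedure eventually eliminates."""
    hull = set(convex_hull(polygon))
    return [v for v in polygon if v not in hull]

# Square with a notch at the top: the notch vertex (2, 2) is a concavity vertex.
print(pockets([(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]))  # [(2, 2)]
```

The thesis procedure is richer than this sketch: it closes the concavities one convex concavity at a time precisely so that each closed region yields a well-defined slice and binary variable.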
[Figure 2.8: an NFP with a non-convex concavity.]
In Figure 2.8 we can see an NFP with one non-convex concavity. In this case, we first
eliminate the only convex concavity, given by C1 = {v5, v6, v7}, and then we drop v6 from the list
of vertices. After that, the next convex concavity to eliminate is C2 = {v5 , v7 , v8 }, where vertex
v7 is eliminated. Finally we eliminate the resulting convex concavity given by C3 = {v4 , v5 , v8 },
where vertex v5 is eliminated.
A closed slice is defined by a binary variable associated with a concavity of the NFP. The
set of binary variables defined from a concavity of an NFP is denoted by B2.
The variables are defined in this way for two main reasons. First, using variables associated
with slices overcomes the disadvantages of the Gomes and Oliveira definition: each feasible
position of piece j with respect to piece i corresponds to a unique variable (except for the
unavoidable common border between slices). Second, defining the slices in a horizontal way
helps us to control the relative vertical position of the pieces. We will take advantage of this
when developing the branch and bound algorithm.
Finally, in order to obtain a valid inequality, each coefficient δ_ijh^{kf} is computed as the
maximal value of the left-hand side when variable b_ijh takes value 1:

δ_ijh^{kf} = max { α_ij^{kf} (x_j − x_i) + β_ij^{kf} (y_j − y_i) : (x_j − x_i, y_j − y_i) ∈ S_ij^h ∩ C }

where C is a box large enough to include all the possible placements of p_i and p_j, of
width 2W and length 2L̄, and where L̄ is an upper bound on L. This maximization problem can
be solved by evaluating the function at the vertices of the closed region S_ij^h ∩ C.
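Since the maximum of a linear function over a bounded convex region is attained at a vertex, computing one δ coefficient reduces to a vertex scan. A minimal sketch (the function name and the unit-square example are our own; the vertices of S_ij^h ∩ C are assumed to be given):

```python
def delta(alpha, beta, vertices):
    """Max of the linear function alpha*x + beta*y over a bounded convex region,
    attained at one of its vertices: how a delta coefficient can be evaluated
    once the vertices of the region S intersected with the box C are known."""
    return max(alpha * x + beta * y for x, y in vertices)

# Unit-square region: maximising x + 2y, the maximum is at the vertex (1, 1).
print(delta(1, 2, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # 3
```

The intersection with C matters: without it the slices on the boundary of the outer region would be unbounded and the maximum would not exist.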
Min L    (2.11)
s.t.  x_i ≤ L − l_i,  i = 1, ..., N    (2.12)
      y_i ≤ W − w_i,  i = 1, ..., N    (2.13)
      α_ij^{kf} (x_j − x_i) + β_ij^{kf} (y_j − y_i) ≤ Σ_{h=1}^{m_ij} δ_ijh^{kf} b_ijh,
            1 ≤ i < j ≤ N,  k = 1, ..., m_ij,  f = 1, ..., t_ij^k    (2.14)
      Σ_{k=1}^{m_ij} b_ijk = 1,  1 ≤ i < j ≤ N    (2.15)
      b_ijk ∈ {0, 1},  1 ≤ i < j ≤ N,  k = 1, ..., m_ij    (2.16)
      x_i, y_i ≥ 0,  1 ≤ i ≤ N    (2.17)
2.3 Formulation HS2 (Horizontal Slices 2)
The bound constraints (2.12), (2.13), (2.17) of the HS1 formulation are the same as those of
the GO formulation (2.5), (2.6), (2.10). In this section we lift these constraints by using the
interaction between pairs of pieces.
Let us consider slice k ∈ {1, ..., m_ij}. We denote by x̄_ijk (x_ijk) the maximum (minimum)
value x_j can take in the NFP_ij coordinate system when b_ijk = 1. Analogously, ȳ_ijk (y_ijk) is the
maximum (minimum) value y_j can take in the NFP_ij coordinate system when b_ijk = 1. In the
example in Figure 2.10, these values are represented for slice k = 2, considering W = 10 and
an upper bound on L, L̄ = 11. In this case, if the reference point of p_i is placed at the origin,
and as l_i = 4 and L̄ = 11, there is a space of 4 units to the left of NFP_ij and 3 units to its right
into which the reference point of piece p_j can be placed. Similarly, as w_i = 4 and W = 10,
the space above NFP_ij for the reference point of p_j has 3 units and below it also has 3 units.
Looking at NFP_ij, Ȳ_ij = 4, Y_ij = −3, X̄_ij = 4 and X_ij = −3. Looking at slice k = 2, x̄_ij2 = −1,
x_ij2 = −7, ȳ_ij2 = 4 and y_ij2 = 2.
The subset of variables associated with NFP_ij which force the reference point of p_j to
be above (below) the reference point of p_i is denoted by U_ij (D_ij). Analogously, the subset of
variables which force the reference point of p_j to be to the right (left) of the reference point of
p_i is denoted by R_ij (L_ij). If VNFP_ij is the set of all the variables associated with NFP_ij, these
sets can be described as follows:
[Figure 2.10: pieces p_i and p_j, and NFP_ij partitioned into slices b_ij1, ..., b_ij8, with the maximum and minimum values of x_j and y_j marked for slice k = 2.]
• Left-hand side bound
In the constraint we include the variables forcing p_j to be to the right of p_i, that is, the
variables in R_ij. The coefficient of each variable, that is, the forced displacement of p_j to
the right of p_i, is given by x_ijk. The lifted constraint will be:

x_j ≥ Σ_{k ∈ R_ij} x_ijk b_ijk    (2.18)

• Right-hand side bound
Analogously, the variables to be included are those forcing p_j to be to the left of p_i, those
of L_ij. The corresponding coefficient, the minimum distance from the reference point of
p_j to L forced by the variable, is λ_ijk = l_i − (x̄_ijk − X_ij). The constraint will be:

x_j ≤ L − l_j − Σ_{k ∈ LS_ij} λ_ijk b_ijk    (2.19)

where LS_ij := {k ∈ L_ij | λ_ijk > 0}. In the example in Figure 2.10: x_j ≤ L − 3 − 2 b_ij2 − 4 b_ij3 − 2 b_ij4.
In slice 3, p_j is completely to the left of p_i and the coefficient of b_ij3 is 4, corresponding
to l_i. In slice 2, 2 units of this initial separation can be eliminated if piece p_j is placed at
the rightmost point inside the slice. Then, the coefficient of b_ij2 is 2.
• Bottom-side bound
The variables forcing p_j to be above p_i are those in U_ij. The coefficient of each variable,
that is, the forced displacement of p_j above p_i, is given by y_ijk. The lifted constraint
will be:

y_j ≥ Σ_{k ∈ U_ij} y_ijk b_ijk    (2.20)

• Upper-side bound
The variables to be included are those forcing p_j to be below p_i, those of D_ij.
The corresponding coefficient, the minimum distance from the reference point of p_j to
W forced by the variable, is μ_ijk = w_i − (ȳ_ijk − Y_ij). The constraint will be:

y_j ≤ W − w_j − Σ_{k ∈ DS_ij} μ_ijk b_ijk    (2.21)
2.3.3 Formulation HS2
The complete HS2 formulation is:
Min L    (2.22)
s.t.  Σ_{k ∈ R_ij} x_ijk b_ijk ≤ x_j ≤ L − l_j − Σ_{k ∈ LS_ij} λ_ijk b_ijk,  1 ≤ i < j ≤ N    (2.23)
      Σ_{k ∈ U_ij} y_ijk b_ijk ≤ y_j ≤ W − w_j − Σ_{k ∈ DS_ij} μ_ijk b_ijk,  1 ≤ i < j ≤ N    (2.24)
      α_ij^{kf} (x_j − x_i) + β_ij^{kf} (y_j − y_i) ≤ Σ_{h=1}^{m_ij} δ_ijh^{kf} b_ijh    (2.25)
            1 ≤ i < j ≤ N,  k = 1, ..., m_ij,  f = 1, ..., t_ij^k    (2.26)
      Σ_{k=1}^{m_ij} b_ijk = 1,  1 ≤ i < j ≤ N    (2.27)
      b_ijk ∈ {0, 1},  1 ≤ i < j ≤ N,  k = 1, ..., m_ij    (2.28)
      x_i, y_i ≥ 0,  1 ≤ i ≤ N    (2.29)
where constraints (2.23) and (2.24) are the lifted bound constraints which substitute the
initial bound constraints (2.12) and (2.13) of the HS1 formulation.
To solve the instances we have used the Branch & Bound algorithm of CPLEX 12.2
(64-bit), running on one processor at 3.40 GHz. The stopping criterion is a time limit of
one hour of computational time.
In Table 2.1, for each formulation we show the lower bound (LB), the GAP, calculated as
(UB − LB)/UB, where UB denotes the upper bound given by CPLEX, and the running time
in seconds. The last two rows of the table contain the averages and the number of optimal
solutions obtained within the time limit.
It can be seen that the formulation HS2 clearly works better than GO and HS1, solving 29
instances to optimality and having an average GAP of 0.14.
A third alternative we have considered is solving a special case of the 1-Contiguous Bin
Packing Problem (1-CBPP), which was shown to be very effective for the Strip Packing Problem
with rectangular pieces (Martello et al. [46]; Alvarez-Valdes et al. [2]). Each piece is divided
into a set of vertical slices of width w. From each slice the maximum embedded rectangle
is obtained (see Figure 2.11). The problem is then to pack all these rectangles into the
minimum number of levels N, putting the rectangles corresponding to one piece into contiguous
levels. The value wN is a lower bound for the original problem. An integer formulation for the
1-CBPP appears in Alvarez-Valdes et al. [2]. We solve this formulation using CPLEX, with
a time limit, and obtain the corresponding lower bound.
Table 2.2 shows the relative performance of the three lower bounds, expressed as the average
percentage distance from each bound to the optimal or best known solution for each instance in
the test set. The 1-CBPP associated with each instance has been solved using CPLEX
12.1 with a time limit of 900 seconds. The quality of the area bound is, obviously, much higher
than that of the longest-piece bound, especially for large instances. The bound based on the
1-CBPP is much better than the area bound, but its computational cost is sometimes very
high and increases with the problem size. Moreover, none of these bounds evolves during the
search, because the integer variables in the formulation do not fix the absolute position of the
pieces, only their relative position. Therefore, we decided not to spend time at the beginning of
the algorithm calculating bounds, and only the longest-piece bound, included in the formulation,
is used.
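The two elementary bounds mentioned above can be computed in a couple of lines; as a hedged sketch (the helper and its input format, pieces given as (length, area) pairs, are our own assumptions):

```python
def simple_lower_bounds(pieces, W):
    """Two elementary lower bounds on the strip length L: the area bound
    (total piece area divided by the strip width W) and the longest-piece
    bound. pieces: list of (length, area) pairs; returns the best of the two."""
    area_bound = sum(area for _, area in pieces) / W
    longest_piece = max(length for length, _ in pieces)
    return max(area_bound, longest_piece)

# Hypothetical data: two pieces of lengths 3 and 2, areas 6 and 4, strip width 2.
print(simple_lower_bounds([(3, 6), (2, 4)], 2))  # 5.0 (the area bound dominates)
```

Both bounds are cheap but static, which is precisely the limitation discussed above: they do not improve as variables are fixed during the search.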
Table 2.1: Comparing formulations GO, HS1 and HS2
GO HS1 HS2
Instance Pieces LB GAP Time LB GAP Time LB GAP Time
three 3 6.00 0.00 0.02 6.00 0.00 0.22 6.00 0.00 0.75
shapes 4 4 24.00 0.00 0.08 24.00 0.00 0.03 24.00 0.00 0.00
fu 5 5 17.89 0.00 1.40 17.89 0.00 0.94 17.89 0.00 0.14
glass1 5 45.00 0.00 0.02 45.00 0.00 0.05 45.00 0.00 0.06
fu 6 6 23.00 0.00 0.11 23.00 0.00 0.12 23.00 0.00 0.48
threep2 6 9.33 0.00 20 9.33 0.00 3.2 9.33 0.00 3.9
threep2w9 6 8.00 0.00 96. 8.00 0.00 11.4 8.00 0.00 8.5
fu 7 7 24.00 0.00 0.67 24.00 0.00 1.5 24.00 0.00 1.0
glass2 7 45.00 0.00 0.72 45.00 0.00 7.0 45.00 0.00 2.8
fu 8 8 24.00 0.00 0.76 24.00 0.00 2.6 24.00 0.00 1.3
shapes 8 8 25.00 0.04 3600 26.00 0.00 126 26.00 0.00 272
fu 9 9 25.00 0.00 161 25.00 0.00 257 25.00 0.00 70
threep3 9 9.15 0.32 3600 13.53 0.00 2565 13.53 0.00 3394
threep3w9 9 7.00 0.36 3600 8.67 0.22 3600 10.00 0.09 3600
glass3 9 100.00 0.00 2909 100.00 0.00 175 100.00 0.00 324
fu 10 10 28.00 0.02 3600 28.00 0.03 3600 28.68 0.00 3064
dighe2 10 96.27 0.04 3600 100.00 0.00 359 100.00 0.00 177
J1-10-20-0 10 16.00 0.11 3600 18.00 0.00 82 18.00 0.00 45
J1-10-20-1 10 17.00 0.00 442 17.00 0.00 69 17.00 0.00 34
J1-10-20-2 10 20.00 0.00 428 20.00 0.00 124 20.00 0.00 304
J1-10-20-3 10 18.00 0.14 3600 19.00 0.10 3600 20.67 0.00 3600
J1-10-20-4 10 11.00 0.15 3600 12.50 0.00 3413 12.50 0.00 628
J2-10-35-0 10 20.00 0.18 3600 20.25 0.16 3600 23.66 0.00 2858
J2-10-35-1 10 20.00 0.12 3600 20.00 0.11 3600 21.30 0.00 1443
J2-10-35-2 10 18.00 0.10 3600 18.38 0.13 3600 19.95 0.00 452
J2-10-35-3 10 18.00 0.13 3600 20.00 0.03 3600 18.38 0.15 3600
J2-10-35-4 10 18.00 0.08 3600 18.00 0.10 3600 19.43 0.00 2721
J1-12-20-0 12 10.00 0.17 3600 11.56 0.04 3600 12.00 0.00 509
J1-12-20-1 12 9.00 0.18 3600 9.00 0.25 3600 10.00 0.00 2430
J1-12-20-2 12 10.00 0.23 3600 12.00 0.00 22056 12.00 0.00 2332
J1-12-20-3 12 7.00 0.22 3600 7.00 0.22 3600 8.00 0.00 214
J1-12-20-4 12 8.00 0.43 3600 9.01 0.40 3600 12.00 0.14 3600
J2-12-35-0 12 20.25 0.26 3600 20.00 0.35 3600 20.37 0.34 3600
J2-12-35-1 12 17.00 0.35 3600 16.37 0.49 3600 16.91 0.40 3600
J2-12-35-2 12 15.50 0.30 3600 14.00 0.43 3600 15.08 0.34 3600
J2-12-35-3 12 16.00 0.29 3600 16.00 0.30 3600 14.00 0.39 3600
J2-12-35-4 12 18.00 0.26 3600 15.20 0.42 3600 20.00 0.20 3600
fu 12 28.00 0.17 3600 24.00 0.29 3600 24.00 0.29 3600
J1-14-20-0 14 7.00 0.46 3600 10.00 0.33 3600 11.00 0.21 3600
J1-14-20-1 14 6.08 0.49 3600 8.00 0.43 3600 8.00 0.43 3600
J1-14-20-2 14 8.00 0.47 3600 8.30 0.48 3600 10.00 0.36 3600
J1-14-20-3 14 6.25 0.43 3600 7.00 0.42 3600 8.00 0.33 3600
J1-14-20-4 14 8.00 0.47 3600 10.00 0.33 3600 10.00 0.35 3600
J2-14-35-0 14 12.78 0.57 3600 16.67 0.52 3600 19.19 0.45 3600
J2-14-35-1 14 12.00 0.59 3600 14.50 0.59 3600 16.00 0.54 3600
J2-14-35-2 14 13.00 0.48 3600 14.18 0.53 3600 14.00 0.53 3600
J2-14-35-3 14 12.00 0.54 3600 13.00 0.59 3600 15.87 0.50 3600
J2-14-35-4 14 12.00 0.54 3600 14.00 0.50 3600 14.00 0.50 3600
poly1a0 15 13.00 0.21 3600 13.00 0.28 3600 13.00 0.28 3600
dighe1ok 16 56.00 0.58 3600 65.62 0.57 3600 71.00 0.54 3600
Average 0.21 2621 0.19 2302 0.14 1970
#opt 14 20 29
Chapter 3
The Branch & Bound algorithm is a well-known exact procedure. There are two important
elements to analyze in order to build an efficient algorithm: the branching, that is, the
way in which we select the binary variables to be set to 0/1 and the order in which the nodes are
explored; and the cutting, that is, which valid inequalities can be added to the relaxed linear
model in order to eliminate unfeasible solutions of the mixed integer model.
The formulations in the previous chapter can be used in a Branch & Bound algorithm in which, at each node of the search tree, the linear relaxation provides a lower bound and, if the node is not fathomed, branching consists in building two nodes, one with a variable bijk = 1 and the other with bijk = 0. The initial mixed integer model that we want to solve with a Branch & Bound algorithm is the one given by the HS2 formulation, presented in Section 2.3. In Section 2.5 we saw that the HS2 formulation provided the best results.
In this chapter we present a study of different ways of branching. We will see that branching is very important, because different strategies produce quite different results for a given instance. The study of valid inequalities to be added to the formulation and their corresponding separation procedures is left to Chapters 4 and 5 respectively.
The remainder of this chapter is organized in 5 sections. In Section 3.1 different branching strategies, based on Fischetti and Luzzi priorities [27], are presented. Furthermore, we propose a branching rule based on giving more priority to the binary variables which define larger slices. We also study a dynamic branching scheme in which the branching decisions are taken depending on the fractional solutions and, finally, a strategy of branching on constraints. In Section 3.1.4 we discuss the computational results obtained by each one of the strategies presented in Section 3.1. As in the previous chapter, we run the Branch & Bound algorithm developed by CPLEX with a time limit of one hour for each instance.
At each node of the Branch & Bound tree there is a subset of binary variables fixed to 1
and another subset fixed to 0. The binary variables fixed to 0 do not guarantee that the pair
of pieces which define the corresponding NFP do not overlap. On the other hand, if there is
a binary variable fixed to 1, we can ensure that the corresponding pair of pieces is completely
separated and the relative position is the one defined by the respective slice. Then, when a binary variable is fixed to 1, the horizontal and vertical bound constraints of the pieces can be updated, because the relative position of the pieces is given by the limits of the slice defined by the activated binary variable. In Section 3.2 we propose two different ways of updating the bounds of the reference points of the pieces.
In an advanced node of the Branch & Bound tree there are many pairs of pieces which are separated by binary variables fixed at previous nodes of the same branch. So, there may be non-fixed binary variables which are incompatible with the linear model of the current node; that is, binary variables which, if fixed to 1, make the linear model infeasible. In Section 3.3 we present two approaches for finding incompatible binary variables. The first approach combines the bounds on the pieces with the limits of the strip: the idea is to find the binary variables which cannot be activated because otherwise the corresponding slices would exceed the limits of the strip. The second approach is based on the transitivity of the pieces: if piece i is placed to the left of piece j which, at the same time, is placed to the left of piece k, then all the binary variables of NFPik which force piece k to be placed to the left of piece i are incompatible.
Finally, in Section 3.3.3 we discuss the behavior of the previous strategies on the set of test
instances and draw some conclusions.
Algorithm 1 Fischetti and Luzzi’s priorities
Require: P = set of pieces;
Require: ψ = number of binary variables;
Require: S = ∅;
while P ≠ ∅ do
  Select a piece pi ∈ P;
  for all pj ∈ S do
    for k = 1, . . . , mij do
      Assign priority ψ to bijk;
      ψ = ψ − 1;
    end for
  end for
  S = S ∪ {pi};
  P = P\{pi};
end while
When studying the behavior of the Fischetti and Luzzi branching strategy computationally, we will consider three alternatives:
• FL: Fischetti and Luzzi’s priorities without any initial ordering of the pieces
• FL L: Fischetti and Luzzi’s priorities, ordering the pieces by non-increasing length
• FL A: Fischetti and Luzzi’s priorities, ordering the pieces by non-increasing area
Even when we specify the order in which the pieces are considered for assigning priorities, there is still a degree of freedom concerning the order in which the variables of the corresponding NFP are considered. One possibility for ordering the variables of an NFP is the area of the corresponding slice, giving more priority to variables associated with larger slices. Then, a fourth strategy to be studied is FL A SA, in which we first order the pieces by area and then the variables by the area of the slice. A last strategy in this group could be SA, ordering all the variables according to the area of their slices, but not using Fischetti and Luzzi’s priorities.
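As an illustration, Algorithm 1 combined with the FL A ordering could be sketched as follows. The piece identifiers, area values and slice counts mij are hypothetical input data, not taken from the thesis implementation:

```python
def fl_priorities(pieces, areas, m):
    """Assign Fischetti-Luzzi priorities to the variables b[i][j][k].

    pieces: list of piece ids; areas: dict id -> piece area;
    m: dict (i, j) -> number of slices of NFP_ij (one key per pair).
    Returns a dict (i, j, k) -> priority, where a higher priority
    means the variable is branched on earlier.
    """
    # FL A ordering: consider the pieces by non-increasing area
    order = sorted(pieces, key=lambda p: -areas[p])
    psi = sum(m.values())          # total number of binary variables
    priority, placed = {}, []
    for pi in order:
        # assign priorities to the NFPs between pi and the pieces already considered
        for pj in placed:
            key = (pi, pj) if (pi, pj) in m else (pj, pi)
            for k in range(1, m[key] + 1):
                priority[key + (k,)] = psi
                psi -= 1
        placed.append(pi)
    return priority
```

With three pieces of areas 5, 9 and 2, the variables of NFP12 (between the two largest pieces) receive the highest priorities, matching the idea that the largest pieces are separated first.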
[Figure 3.1: The slices of NFP12, NFP13 and NFP23, defining the binary variables b121, . . . , b128, b131, . . . , b138 and b231, . . . , b238.]
ordered list of pieces until we find a piece pj overlapping with some of the previous pieces. Let S = {pi1, ..., pik} be the set of these pieces, 1 ≤ i1 < ... < ik. For each piece i ∈ S, we compute upi, downi, lefti, righti, the number of pieces separated from i from above, from below, from the left and from the right, respectively. We consider that a piece pk is separated from above piece pi if there is a variable bikl = 1 for some l such that yikl > 0. Similarly, it is separated from below if yikl < 0, to the right if xikl > 0 and to the left if xikl < 0.
When choosing a variable to separate piece pj from S, the lower the values of upi, downi, lefti, righti for some i ∈ S, the more adequate this position is for piece pj, and hence the branching variable should separate pj from pi in this direction. For instance, if for some i ∈ S, upi = 0, none of the other pieces in S is above pi and then that would be a good position for pj. Separating pj from pi in this direction could possibly separate pj from some other pieces in S.
Then, in one of the branches we set ∑_{k | xijk ≥ 0} bijk = 0 and in the other ∑_{k | xijk ≥ 0} bijk = 1. In the example, one branch would have bij3 + bij4 + bij5 + bij6 + bij7 + bij8 = 0 and the other bij3 + bij4 + bij5 + bij6 + bij7 + bij8 = 1. If no pair of pieces satisfies (3.1), we branch on variables using strategy FL A.
[Figure: NFPij and its slices b1, . . . , b8, illustrating the branching on constraints example.]
One advantage of this branching strategy is that the formulation can be locally enhanced.
In the example, if we are in the node in which we have set bi j3 + bi j4 + bi j5 + bi j6 + bi j7 + bi j8 = 1,
piece pi must be to the left of piece pj. Then, constraint xi ≤ xj is satisfied in all the successor nodes. This constraint can be lifted, considering the variables bijk for which xijk ≥ 0. The lifted constraint would be:

xi + ∑_{k | xijk ≥ 0} xijk bijk ≤ xj    (3.2)
In Table 3.1 we can find the computational results of the FL, FL L and FL A strategies.
For each strategy, the columns show the lower bound (LB) obtained for each instance, the
GAP=(UB-LB)/UB and the running time. The last two rows show the averages and the number
of optimal solutions. There are significant differences between the different methods, so the
initial order for the pieces in Fischetti and Luzzi’s priorities really matters. We can observe
that FL A obtains the best results and the best deviation of the lower bound with respect to the
upper bound. The average GAP obtained by FL A is 0.047 and it is able to solve 34 of the 50 instances to optimality.
In Table 3.2 we can observe that if in the FL A strategy we also order the slices by non-increasing area (FL A SA), the results are very similar. Furthermore, if we drop Fischetti and Luzzi’s priorities but keep the ordering of the slices by non-increasing area (SA strategy), we can see that the results are clearly worse, with an average GAP of 0.21.
Table 3.3 presents the comparison of the best strategy so far, FL A, with the dynamic branching (DB) defined in Section 3.1.2 and the branching on constraints (BC) described in Section 3.1.3. Note that the results of dynamic branching are slightly worse than those of the FL A strategy, with an average GAP of 0.065. On the other hand, branching on constraints obtains bad results.
Finally, Table 3.4 summarizes the comparison between all the branching strategies already
described. The performance of each strategy is summarized in three values: the number of
optimal solutions, the average GAP and the average CPU time. The strategies compared are:
• CPLEX: the default strategy provided by CPLEX, as shown in the final columns of Table
2.1
• FL: the Fischetti and Luzzi strategy, taking the pieces as they appear in the data file
• FL L: the Fischetti and Luzzi strategy, ordering the pieces by non-increasing length
• FL A: the Fischetti and Luzzi strategy, ordering the pieces by non-increasing area
• FL A SA: the Fischetti and Luzzi strategy, ordering the pieces by non-increasing area
and the variables of each NFP by non-increasing area of the corresponding slice
In summary, we can say that the Fischetti and Luzzi strategy works very well, but only if
the pieces have been previously ordered by length or, even better, by area. Doing that, the first
pieces to be separated are the largest ones and the lower bounds increase sharply. The table
also shows that adding the ordering of variables by slice area to the previous strategy neither
harms nor improves the algorithm and it is very poor when used alone. Dynamic branching
works quite well. Its slightly worse results are due to the fact that it needs to read the solution
at each node, which slows down the search and many fewer nodes are explored. Nevertheless,
the strategy seems promising for a more complex algorithm in which the solution at each node
has to be read, for instance in a Branch and Cut procedure in which the separation algorithms
run at each node require the current solution. Branching on constraints performs quite poorly.
It seems clear that only fixing a variable to 1, separating at least two pieces, has a strong effect
on the current solution. When branching on constraints, this effect is missing and the results
are much worse.
Table 3.1: Comparing branching strategies FL, FL L and FL A
Instances FL FL L FL A
LB GAP Time LB GAP Time LB GAP Time
three 6.00 0.00 0.75 6.00 0.00 0.53 6.00 0.00 0.73
shapes4 24.00 0.00 0.08 24.00 0.00 0.05 24.00 0.00 0.02
fu5 17.89 0.00 0.62 17.89 0.00 0.59 17.89 0.00 0.08
glass1 45.00 0.00 0.06 45.00 0.00 0.02 45.00 0.00 0.02
fu6 23.00 0.00 0.92 23.00 0.00 0.69 23.00 0.00 0.69
threep2 9.33 0.00 1.06 9.33 0.00 1.75 9.33 0.00 1.36
threep2w9 8.00 0.00 5.18 8.00 0.00 4.54 8.00 0.00 7.36
fu7 24.00 0.00 0.86 24.00 0.00 0.66 24.00 0.00 0.62
glass2 45.00 0.00 2.62 45.00 0.00 2.28 45.00 0.00 1.84
fu8 24.00 0.00 0.94 24.00 0.00 0.61 24.00 0.00 0.72
shapes8 26.00 0.00 4.85 26.00 0.00 4.57 26.00 0.00 4.43
fu9 25.00 0.00 124.43 25.00 0.00 12.12 25.00 0.00 32.87
threep3 13.53 0.00 955.26 13.53 0.00 2323.68 13.53 0.00 819.01
threep3w9 11.00 0.00 2344.16 10.33 0.06 3600.00 11.00 0.00 2822.93
glass3 100.00 0.00 1335.54 100.00 0.00 40.05 100.00 0.00 13.74
fu10 28.00 0.02 3600.00 28.69 0.00 2186.18 28.69 0.00 198.59
dighe2 77.82 0.35 3600.00 100.00 0.00 18.02 100.00 0.00 1.64
J1-10-20-0 14.67 0.19 3600.00 18.00 0.00 51.00 18.00 0.00 6.35
J1-10-20-1 11.65 0.35 3600.00 17.00 0.00 36.22 17.00 0.00 3.74
J1-10-20-2 20.00 0.00 2519.21 20.00 0.00 40.47 20.00 0.00 7.71
J1-10-20-3 14.36 0.32 3600.00 20.75 0.00 685.33 20.75 0.00 251.04
J1-10-20-4 9.22 0.29 3600.00 12.50 0.00 257.29 12.50 0.00 89.81
J2-10-35-0 19.80 0.21 3600.00 23.66 0.00 420.81 23.66 0.00 395.12
J2-10-35-1 18.00 0.20 3600.00 21.30 0.00 196.76 21.30 0.00 148.17
J2-10-35-2 19.95 0.00 993.63 19.95 0.00 156.30 19.95 0.00 95.18
J2-10-35-3 17.88 0.16 3600.00 20.38 0.00 1294.96 20.38 0.00 823.05
J2-10-35-4 17.38 0.13 3600.00 19.43 0.00 272.91 19.44 0.00 316.23
J1-12-20-0 10.00 0.17 3600.00 12.00 0.00 115.99 12.00 0.00 32.81
J1-12-20-1 7.00 0.42 3600.00 10.00 0.00 175.94 10.00 0.00 29.73
J1-12-20-2 10.00 0.23 3600.00 12.00 0.00 21.26 12.00 0.00 19.59
J1-12-20-3 7.00 0.22 3600.00 8.00 0.00 16.29 8.00 0.00 28.36
J1-12-20-4 9.00 0.36 3600.00 13.00 0.00 308.35 13.00 0.00 149.06
J2-12-35-0 20.02 0.29 3600.00 25.00 0.11 3600.00 24.50 0.13 3600.00
J2-12-35-1 16.85 0.33 3600.00 22.50 0.08 3600.00 22.00 0.15 3600.00
J2-12-35-2 18.00 0.23 3600.00 18.19 0.20 3600.00 20.00 0.09 3600.00
J2-12-35-3 16.00 0.28 3600.00 19.40 0.17 3600.00 20.00 0.11 3600.00
J2-12-35-4 15.63 0.35 3600.00 22.00 0.08 3600.00 22.00 0.07 3600.00
fu 24.00 0.29 3600.00 28.50 0.16 3600.00 32.11 0.03 3600.00
J1-14-20-0 6.25 0.55 3600.00 12.00 0.08 3600.00 12.00 0.00 446.69
J1-14-20-1 6.00 0.50 3600.00 10.00 0.17 3600.00 11.00 0.06 3600.00
J1-14-20-2 9.00 0.44 3600.00 12.15 0.19 3600.00 13.00 0.13 3600.00
J1-14-20-3 6.00 0.50 3600.00 10.00 0.00 277.20 10.00 0.00 97.81
J1-14-20-4 8.00 0.47 3600.00 13.00 0.10 3600.00 13.50 0.04 3600.00
J2-14-35-0 18.00 0.40 3600.00 24.00 0.21 3600.00 23.70 0.19 3600.00
J2-14-35-1 16.00 0.50 3600.00 21.34 0.31 3600.00 21.88 0.30 3600.00
J2-14-35-2 16.00 0.38 3600.00 18.63 0.31 3600.00 20.00 0.25 3600.00
J2-14-35-3 16.00 0.41 3600.00 20.00 0.23 3600.00 20.12 0.23 3600.00
J2-14-35-4 14.00 0.50 3600.00 19.82 0.24 3600.00 20.00 0.23 3600.00
poly1a0 13.00 0.20 3600.00 13.00 0.21 3600.00 13.00 0.19 3600.00
dighe1ok 74.00 0.48 3600.00 100.00 0.22 3600.00 100.00 0.13 3600.00
Average 0.21 2541.80 0.063 1474.47 0.047 1288.94
#opt 15 32 34
Table 3.2: Comparing branching strategies FL A, FL A SA and SA
Instances FL A FL A SA SA
LB GAP Time LB GAP Time LB GAP Time
three 6.00 0.00 0.73 6.00 0.00 0.67 6.00 0.00 0.41
shapes4 24.00 0.00 0.02 24.00 0.00 0.05 24.00 0.00 0.31
fu5 17.89 0.00 0.08 17.89 0.00 0.25 17.89 0.00 0.53
glass1 45.00 0.00 0.02 45.00 0.00 0.03 45.00 0.00 0.02
fu6 23.00 0.00 0.69 23.00 0.00 0.28 23.00 0.00 0.69
threep2 9.33 0.00 1.36 9.33 0.00 1.37 9.33 0.00 3.84
threep2w9 8.00 0.00 7.36 8.00 0.00 4.99 8.00 0.00 32.42
fu7 24.00 0.00 0.62 24.00 0.00 1.14 24.00 0.00 2.12
glass2 45.00 0.00 1.84 45.00 0.00 1.93 45.00 0.00 5.80
fu8 24.00 0.00 0.72 24.00 0.00 0.47 24.00 0.00 28.24
shapes8 26.00 0.00 4.43 26.00 0.00 4.87 18.21 0.30 3600.00
fu9 25.00 0.00 32.87 25.00 0.00 33.43 24.00 0.04 3600.00
threep3 13.53 0.00 819.01 13.53 0.00 757.34 9.86 0.34 3600.00
threep3w9 11.00 0.00 2822.93 11.00 0.00 3130.53 7.67 0.32 3600.00
glass3 100.00 0.00 13.74 100.00 0.00 16.36 100.00 0.17 2600.00
fu10 28.69 0.00 198.59 28.69 0.00 209.01 25.45 0.12 3600.00
dighe2 100.00 0.00 1.64 100.00 0.00 7.89 100.00 0.17 3600.00
J1-10-20-0 18.00 0.00 6.35 18.00 0.00 8.88 15.40 0.23 3600.00
J1-10-20-1 17.00 0.00 3.74 17.00 0.00 2.43 15.30 0.10 3600.00
J1-10-20-2 20.00 0.00 7.71 20.00 0.00 7.43 17.20 0.18 3600.00
J1-10-20-3 20.75 0.00 251.04 20.75 0.00 314.83 17.60 0.18 3600.00
J1-10-20-4 12.50 0.00 89.81 12.50 0.00 80.34 10.10 0.28 3600.00
J2-10-35-0 23.66 0.00 395.12 23.67 0.00 368.12 18.06 0.28 3600.00
J2-10-35-1 21.30 0.00 148.17 21.30 0.00 174.83 14.76 0.36 3600.00
J2-10-35-2 19.95 0.00 95.18 19.95 0.00 69.39 14.86 0.32 3600.00
J2-10-35-3 20.38 0.00 823.05 20.38 0.00 785.50 15.29 0.31 3600.00
J2-10-35-4 19.44 0.00 316.23 19.43 0.00 290.88 13.74 0.31 3600.00
J1-12-20-0 12.00 0.00 32.81 12.00 0.00 22.14 10.00 0.23 3600.00
J1-12-20-1 10.00 0.00 29.73 10.00 0.00 21.86 8.90 0.26 3600.00
J1-12-20-2 12.00 0.00 19.59 12.00 0.00 18.89 10.60 0.24 3600.00
J1-12-20-3 8.00 0.00 28.36 8.00 0.00 16.19 7.00 0.18 3600.00
J1-12-20-4 13.00 0.00 149.06 13.00 0.00 169.23 11.10 0.21 3600.00
J2-12-35-0 24.50 0.13 3600.00 24.25 0.13 3600.00 20.06 0.32 3600.00
J2-12-35-1 22.00 0.15 3600.00 22.00 0.15 3600.00 17.83 0.34 3600.00
J2-12-35-2 20.00 0.09 3600.00 20.00 0.09 3600.00 16.20 0.30 3600.00
J2-12-35-3 20.00 0.11 3600.00 20.00 0.13 3600.00 15.91 0.30 3600.00
J2-12-35-4 22.00 0.07 3600.00 22.00 0.07 3600.00 17.54 0.29 3600.00
fu 32.11 0.03 3600.00 32.25 0.04 3600.00 28.50 0.16 3600.00
J1-14-20-0 12.00 0.00 446.69 12.00 0.00 2157.54 10.75 0.28 3600.00
J1-14-20-1 11.00 0.06 3600.00 11.00 0.06 3600.00 10.00 0.23 3600.00
J1-14-20-2 13.00 0.13 3600.00 13.00 0.13 3600.00 12.15 0.29 3600.00
J1-14-20-3 10.00 0.00 97.81 10.00 0.00 420.80 8.80 0.27 3600.00
J1-14-20-4 13.50 0.04 3600.00 13.50 0.07 3600.00 11.95 0.30 3600.00
J2-14-35-0 23.70 0.19 3600.00 23.50 0.22 3600.00 21.54 0.38 3600.00
J2-14-35-1 21.88 0.30 3600.00 21.79 0.26 3600.00 21.34 0.30 3600.00
J2-14-35-2 20.00 0.25 3600.00 20.00 0.23 3600.00 18.63 0.29 3600.00
J2-14-35-3 20.12 0.23 3600.00 20.00 0.24 3600.00 18.83 0.32 3600.00
J2-14-35-4 20.00 0.23 3600.00 20.00 0.23 3600.00 18.97 0.32 3600.00
poly1a0 13.00 0.19 3600.00 13.00 0.19 3600.00 13.00 0.28 3600.00
dighe1ok 100.00 0.13 3600.00 100.00 0.14 3600.00 100.00 0.35 3600.00
Average 0.047 1288.942 0.048 1333.998 0.209 2861.487
#opt 34 34 10
Table 3.3: Comparing branching strategies FL A, DB and BC
Instances FL A DB BC
LB GAP Time LB GAP Time LB GAP Time
three 6.00 0.00 0.73 6.00 0.00 1.00 6.00 0.00 0.42
shapes4 24.00 0.00 0.02 24.00 0.00 0.20 24.00 0.00 0.08
fu5 17.89 0.00 0.08 17.89 0.00 0.51 17.89 0.00 3.62
glass1 45.00 0.00 0.02 45.00 0.00 0.03 45.00 0.00 0.02
fu6 23.00 0.00 0.69 23.00 0.00 0.45 23.00 0.00 1.67
threep2 9.33 0.00 1.36 9.33 0.00 1.70 9.33 0.00 10.00
threep2w9 8.00 0.00 7.36 8.00 0.00 5.44 8.00 0.00 22.79
fu7 24.00 0.00 0.62 24.00 0.00 1.12 24.00 0.00 5.93
glass2 45.00 0.00 1.84 45.00 0.00 3.01 45.00 0.00 42.76
fu8 24.00 0.00 0.72 24.00 0.00 0.67 24.00 0.00 17.94
shapes8 26.00 0.00 4.43 26.00 0.00 4.62 26.00 0.00 817.91
fu9 25.00 0.00 32.87 25.00 0.00 46.47 24.00 0.08 3600.00
threep3 13.53 0.00 819.01 13.53 0.00 852.70 10.87 0.21 3600.00
threep3w9 11.00 0.00 2822.93 11.00 0.00 2834.79 8.00 0.29 3600.00
glass3 100.00 0.00 13.74 100.00 0.00 31.11 100.00 0.26 3600.00
fu10 28.69 0.00 198.59 28.69 0.00 345.54 25.45 0.15 3600.00
dighe2 100.00 0.00 1.64 100.00 0.00 11.47 100.00 0.17 3600.00
J1-10-20-0 18.00 0.00 6.35 18.00 0.00 8.41 15.40 0.19 3600.00
J1-10-20-1 17.00 0.00 3.74 17.00 0.00 6.68 15.30 0.15 3600.00
J1-10-20-2 20.00 0.00 7.71 20.00 0.00 6.86 17.20 0.14 3600.00
J1-10-20-3 20.75 0.00 251.04 20.75 0.00 386.62 17.60 0.20 3600.00
J1-10-20-4 12.50 0.00 89.81 12.50 0.00 128.45 10.10 0.28 3600.00
J2-10-35-0 23.66 0.00 395.12 23.66 0.00 883.76 18.06 0.31 3600.00
J2-10-35-1 21.30 0.00 148.17 21.30 0.00 210.37 16.00 0.31 3600.00
J2-10-35-2 19.95 0.00 95.18 19.95 0.00 175.33 14.90 0.33 3600.00
J2-10-35-3 20.38 0.00 823.05 20.37 0.00 1081.21 15.29 0.31 3600.00
J2-10-35-4 19.44 0.00 316.23 19.43 0.00 389.47 13.74 0.35 3600.00
J1-12-20-0 12.00 0.00 32.81 12.00 0.00 54.40 10.00 0.23 3600.00
J1-12-20-1 10.00 0.00 29.73 10.00 0.00 63.12 8.90 0.26 3600.00
J1-12-20-2 12.00 0.00 19.59 12.00 0.00 53.06 10.60 0.26 3600.00
J1-12-20-3 8.00 0.00 28.36 8.00 0.00 86.63 7.00 0.22 3600.00
J1-12-20-4 13.00 0.00 149.06 13.00 0.00 466.86 11.10 0.28 3600.00
J2-12-35-0 24.50 0.13 3600.00 24.00 0.14 3600.00 20.06 0.35 3600.00
J2-12-35-1 22.00 0.15 3600.00 22.00 0.13 3600.00 17.83 0.41 3600.00
J2-12-35-2 20.00 0.09 3600.00 19.75 0.18 3600.00 16.20 0.38 3600.00
J2-12-35-3 20.00 0.11 3600.00 19.50 0.19 3600.00 15.91 0.39 3600.00
J2-12-35-4 22.00 0.07 3600.00 21.38 0.13 3600.00 17.54 0.36 3600.00
fu 32.11 0.03 3600.00 31.67 0.07 3600.00 28.50 0.16 3600.00
J1-14-20-0 12.00 0.00 446.69 12.00 0.00 949.36 10.75 0.28 3600.00
J1-14-20-1 11.00 0.06 3600.00 11.00 0.08 3600.00 10.00 0.23 3600.00
J1-14-20-2 13.00 0.13 3600.00 12.50 0.18 3600.00 12.15 0.29 3600.00
J1-14-20-3 10.00 0.00 97.81 10.00 0.00 381.16 8.80 0.27 3600.00
J1-14-20-4 13.50 0.04 3600.00 12.33 0.23 3600.00 11.95 0.34 3600.00
J2-14-35-0 23.70 0.19 3600.00 22.47 0.28 3600.00 21.54 0.38 3600.00
J2-14-35-1 21.88 0.30 3600.00 21.34 0.29 3600.00 21.34 0.38 3600.00
J2-14-35-2 20.00 0.25 3600.00 19.25 0.24 3600.00 18.63 0.38 3600.00
J2-14-35-3 20.12 0.23 3600.00 20.00 0.23 3600.00 18.83 0.41 3600.00
J2-14-35-4 20.00 0.23 3600.00 20.00 0.23 3600.00 18.97 0.32 3600.00
poly1a0 13.00 0.19 3600.00 13.00 0.28 3600.00 13.00 0.28 3600.00
dighe1ok 100.00 0.13 3600.00 100.00 0.35 3600.00 100.00 0.35 3600.00
Average 0.047 1288.942 0.065 1341.452 0.219 2826.463
#opt 34 34 11
Table 3.4: Comparing branching strategies
3.2 Updating the bounds on the pieces
At the beginning, in the root node, the lower and the upper bounds on the reference point
of the pieces are given by the width, W, and an upper bound of the length of the strip, Lub .
We denote by UXi (LXi) the upper (lower) bound on xi, and by UYi (LYi) the upper (lower) bound on yi. In Figure 3.3 we can observe the bounds of piece i at the root node, which are LXi = LYi = 0, UYi = 4 and UXi = 5. In the root node, the lower bounds are always 0 for all the pieces because of our definition of the reference point.
[Figure 3.3: Bounds of piece i at the root node: LXi = LYi = 0, UXi = 5, UYi = 4, within the strip of width W and length bound Lub.]
3.2.1 Method I
Let us suppose that a binary variable bijk ∈ NFPij is fixed to 1. This binary variable has f_ij^k inequalities associated to it, corresponding to the non-overlapping constraints of the HS2 formulation used for describing the slice. These inequalities have the following structure:

α_{ij}^{kf} (x_j − x_i) + β_{ij}^{kf} (y_j − y_i) ≤ δ_{ij}^{kfk} b_{ijk} + ∑_{h=1, h≠k}^{m_{ij}} δ_{ij}^{kfh} b_{ijh}    (3.3)

where ∑_{h=1, h≠k}^{m_{ij}} δ_{ij}^{kfh} b_{ijh} = 0 since b_{ijk} = 1.
Let us consider the case in which α_{ij}^{kf} > 0 and β_{ij}^{kf} > 0. The other cases are solved using similar arguments. Since b_{ijk} = 1, constraint (3.3) can be written as:

α_{ij}^{kf} x_j ≤ α_{ij}^{kf} x_i + β_{ij}^{kf} (y_i − y_j) + δ_{ij}^{kfk}

The maximum value the right-hand side can attain is β_{ij}^{kf} UY_i − β_{ij}^{kf} LY_j + α_{ij}^{kf} UX_i + δ_{ij}^{kfk} (where U stands for upper bound and L for lower bound). Then

α_{ij}^{kf} x_j ≤ β_{ij}^{kf} UY_i − β_{ij}^{kf} LY_j + α_{ij}^{kf} UX_i + δ_{ij}^{kfk}

and

UX_j = min{UX_j, (β_{ij}^{kf} UY_i − β_{ij}^{kf} LY_j + α_{ij}^{kf} UX_i + δ_{ij}^{kfk}) / α_{ij}^{kf}}

In a similar way, the upper bound on y_j can be updated. The lower bounds on x_j and y_j do not need to be updated because these variables have positive coefficients and the minimum value is 0 in both cases.
On the other hand, it is important to update the lower bounds of the real variables which have a negative coefficient, x_i and y_i. In what follows we explain how we update the lower bound on y_i. Note that x_j and y_j have positive coefficients. Therefore, we substitute these variables with their respective lower bounds, and we assign to variable x_i the maximum value it can attain, obtaining the following inequality:

LY_i = max{LY_i, (α_{ij}^{kf} LX_j − α_{ij}^{kf} UX_i + β_{ij}^{kf} LY_j − δ_{ij}^{kfk}) / β_{ij}^{kf}}
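The updates derived above can be sketched as follows. This is a sketch for the case α, β > 0 only; the remaining sign cases would follow the same pattern, and all names are illustrative:

```python
def update_bounds_method1(alpha, beta, delta, LX, UX, LY, UY, i, j):
    """Tighten the bounds after fixing b_ijk = 1, for one associated constraint
    alpha*(x_j - x_i) + beta*(y_j - y_i) <= delta, with alpha, beta > 0.
    LX, UX, LY, UY map a piece to its current lower/upper bounds (updated in place)."""
    # alpha*x_j <= alpha*x_i + beta*(y_i - y_j) + delta: maximize the right-hand side
    UX[j] = min(UX[j], (beta * UY[i] - beta * LY[j] + alpha * UX[i] + delta) / alpha)
    # Symmetric update for the upper bound on y_j
    UY[j] = min(UY[j], (alpha * UX[i] - alpha * LX[j] + beta * UY[i] + delta) / beta)
    # beta*y_i >= alpha*x_j - alpha*x_i + beta*y_j - delta: minimize the right-hand side
    LY[i] = max(LY[i], (alpha * LX[j] - alpha * UX[i] + beta * LY[j] - delta) / beta)
    # Symmetric update for the lower bound on x_i
    LX[i] = max(LX[i], (beta * LY[j] - beta * UY[i] + alpha * LX[j] - delta) / alpha)
```

With α = β = 1 and δ = 0, the constraint (x_j − x_i) + (y_j − y_i) ≤ 0 applied to the bounds of Figure 3.3 reduces UX_j from its initial value to UX_i + UY_i.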
In Figure 3.4 we can see an example where the slice defined by bijk is activated. Piece i is the triangle represented in Figure 3.3 and piece j is a square. This slice is the one defined by variable b233 in Figure 3.1. When this slice is used, piece i has to be placed to the left of piece j, so the lower bounds of piece j, LXj and LYj, and the upper bounds of piece i, UXi and UYi, can be updated.
Figure 3.4: Feasible zones to arrange pi and pj when the slice defined by bijk is activated.
3.2.2 Method II
Let us consider a pair of pieces pi and pj and one slice Sijk associated with a binary variable bijk. We define the extended slice S*ijk as the minimum rectangle enclosing Sijk. Let us denote the minimum and maximum coordinates of S*ijk by xmin, xmax, ymin, ymax. Then, if bijk = 1, the reference point of pj relative to pi must lie in S*ijk, and the bounds for piece pj can be updated as follows (see Figure 3.5):

LXj = max{LXj, LXi + xmin}    UXj = min{UXj, UXi + xmax}
LYj = max{LYj, LYi + ymin}    UYj = min{UYj, UYi + ymax}

The bounds of piece pi can be tightened symmetrically. Both methods are used in an iterative way, going through the list of pieces until no bounds are updated. The second method is less accurate, because the extended slice may allow overlapping and the calculated bounds are looser, but it is very fast because the values xmin, xmax, ymin, ymax are calculated just once, when NFPij is built.
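A sketch of Method II, under the assumption that the offset of the reference point of pj with respect to pi must lie inside the extended slice (names are illustrative):

```python
def extended_slice(slice_vertices):
    """Minimum enclosing rectangle (extended slice) of a slice given by its vertices."""
    xs = [x for x, _ in slice_vertices]
    ys = [y for _, y in slice_vertices]
    return min(xs), max(xs), min(ys), max(ys)

def update_bounds_method2(LX, UX, LY, UY, i, j, xmin, xmax, ymin, ymax):
    """If b_ijk = 1, then x_j - x_i lies in [xmin, xmax] and y_j - y_i in
    [ymin, ymax], so the bounds of both pieces can be tightened in place."""
    LX[j] = max(LX[j], LX[i] + xmin); UX[j] = min(UX[j], UX[i] + xmax)
    LY[j] = max(LY[j], LY[i] + ymin); UY[j] = min(UY[j], UY[i] + ymax)
    # The bounds of piece p_i are tightened symmetrically from those of p_j
    LX[i] = max(LX[i], LX[j] - xmax); UX[i] = min(UX[i], UX[j] - xmin)
    LY[i] = max(LY[i], LY[j] - ymax); UY[i] = min(UY[i], UY[j] - ymin)
```

Since the rectangle is computed only once per NFP, each bound update is a handful of comparisons, which is what makes this method much faster than Method I.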
Figure 3.5: Feasible zones to arrange pi and pj using extended slices when slice Sijk is activated.
3.3 Finding incompatible variables
3.3.1 Incompatibility using the bounds on the pieces
Let S*ijk = {(xmin, ymin), (xmax, ymax)} be the extended slice defined from bijk. If one of the following conditions holds, then bijk is incompatible and can be fixed to 0:

LXi + xmin > UXj        UXi + xmax < LXj
LYi + ymin > UYj        UYi + ymax < LYj

Note that the left-hand side of each one of the previous inequalities matches the expressions of Method II for updating the bounds of piece j (see Section 3.2.2). A similar argument can be applied to obtain the conditions on piece i.
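The four conditions can be checked directly from the current bounds and the extended slice; a sketch with illustrative names:

```python
def incompatible(LX, UX, LY, UY, i, j, xmin, xmax, ymin, ymax):
    """True if activating b_ijk cannot be feasible given the current bounds,
    where [xmin, xmax] x [ymin, ymax] is the extended slice for the offset
    of the reference point of p_j with respect to p_i."""
    return (LX[i] + xmin > UX[j] or UX[i] + xmax < LX[j] or
            LY[i] + ymin > UY[j] or UY[i] + ymax < LY[j])
```

For instance, if UX_j = 3 and the slice forces x_j ≥ x_i + 4 ≥ 4, the variable can be fixed to 0.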
3.3.2 Incompatibility using the transitivity of the pieces
Let us consider three pieces, pi, pj and pk, and let bij1, bik1, bjk1 be one variable of each non-fit polygon. A sufficient condition for these three variables to be incompatible is that one of these cases is satisfied:

1. x_min^{ik1} > x_max^{ij1} + x_max^{jk1}
2. x_max^{ik1} < x_min^{ij1} + x_min^{jk1}
We consider the extended slices associated with the three variables. On the one hand, if bij1 = 1, piece pj must have its reference point inside S*ij1 = {(x_min^{ij1}, y_min^{ij1}), (x_max^{ij1}, y_max^{ij1})}. If also bjk1 = 1, in the same coordinate system (with respect to pi), the possible positions for the reference point of pk are obtained by considering, for each point in S*ij1, the points in S*jk1, that is, all the points in the rectangle S*ij1,jk1 = {(x_min^{ij1} + x_min^{jk1}, y_min^{ij1} + y_min^{jk1}), (x_max^{ij1} + x_max^{jk1}, y_max^{ij1} + y_max^{jk1})}. On the other hand, if bik1 = 1, the reference point of pk must be in S*ik1 = {(x_min^{ik1}, y_min^{ik1}), (x_max^{ik1}, y_max^{ik1})}. Therefore, for the three variables to take value 1 simultaneously, S*ij1,jk1 and S*ik1 must intersect, and then none of the above conditions may hold.
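The test above amounts to checking whether the Minkowski sum of the two extended slices intersects the third one; a sketch in Python, with illustrative names:

```python
def transitivity_incompatible(s_ij, s_jk, s_ik):
    """Each argument is an extended slice (xmin, xmax, ymin, ymax), giving the
    offset of the second piece's reference point with respect to the first.
    The composed rectangle for p_k via p_j is the Minkowski sum of s_ij and
    s_jk; the three variables are incompatible if it does not intersect s_ik."""
    cx_min, cx_max = s_ij[0] + s_jk[0], s_ij[1] + s_jk[1]
    cy_min, cy_max = s_ij[2] + s_jk[2], s_ij[3] + s_jk[3]
    no_overlap_x = s_ik[0] > cx_max or s_ik[1] < cx_min
    no_overlap_y = s_ik[2] > cy_max or s_ik[3] < cy_min
    return no_overlap_x or no_overlap_y
```

The x-part of this check is exactly conditions 1 and 2 above; the y-coordinates are handled in the same way.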
Let us consider the following example. In Figure 3.6 we can see three pieces from instance shapes8. The respective NFPs and the binary variables are represented in Figure 3.7. If we assume that b122 is fixed to 1 in a given node, then the reference point of p2 must be placed, with respect to p1, in S*122, represented in Figure 3.8. Since S122 is rectangular, S*122 = S122. Furthermore, let us suppose that b236 is fixed to 1. Then, the possible positions for the reference point of p3 are obtained by considering, for each point in S*122, the points in S*236, that is, all the points in the rectangle S*122,236, the green rectangle at the bottom of Figure 3.10.
Let us now consider NFP13. Variables of NFP13 can take the value 1 only if they allow piece p3 to be placed in the rectangle defined by S*122,236. In Figure 3.10 we can see the case in which b236 = 1. The feasible zone to place p3 with respect to p1 is given by S*138. In this case, S*138 ≠ S138 because the slice is not rectangular. We can observe that S*138 does not intersect with S*122,236. Then, if in a given node of the Branch & Bound tree variables b122 and b138 are fixed to 1, b236 can be fixed to 0. In a similar way, we can fix variables b231, b232, b233, b234 and b235 of NFP23 to 0.
[Figure 3.6: The three pieces p1, p2 and p3 from instance shapes8. Figure 3.7: NFP12, NFP13 and NFP23 with their binary variables (b121–b123, b131–b139 and b231–b239).]
Figure 3.8: If b122 = 1 then the reference point of p2 must be placed in the rectangle.
Figure 3.9: If b122 = 1 and b236 = 1, then the reference points of p2 and p3 must be placed in their corresponding rectangles.
Figure 3.10: If b122 = b138 = b236 = 1, then the reference point of p3 should be placed in the rectangles defined by variables b138 and b236, but this is impossible because they have no intersection.
3.3.3 Computational results of the strategies for updating bounds and finding incompatible variables
Tables 3.5 and 3.6 show the effect on the performance of the algorithm of updating bounds
and fixing variables as described in Sections 3.2 and 3.3. Table 3.5 focuses on the 34 instances
solved to optimality and therefore the information of interest is the number of nodes and the
running times. Table 3.6 contains the relevant information for the 16 instances that could not
be solved to optimality within the limit of 3600 seconds, the GAP between the lower and the
upper bounds and the value of the lower bound. In each table, we compare the results obtained
by the Initial strategy (column 2), without updating bounds and fixing variables, with several
strategies developed for implementing these procedures.
In Section 3.2 two methods for updating bounds on variables were developed. Method
1, using the non-overlapping constraints, is exact but much slower. Method 2, based on the
extended slices, is slightly inexact but faster. A preliminary computational comparison between
them clearly showed that the second method performed much better for the same time limit.
Therefore, Method 2 is used. The first natural strategy was to update the bounds at each node in which a variable had been fixed to 1 and then to try to fix to 0 all the variables not yet fixed in the problem. The results of this Strategy 1 appear in column 3. Using these procedures
for all the variables in all the nodes in which a variable has been fixed to 1 has a positive
effect in reducing the number of nodes to be explored, but the required computational effort
slows down the algorithm. In Table 3.5 we observe a significant reduction in the number of
nodes, but the running time is more than doubled. In Table 3.6, with a time limit, the results are
worse than the initial ones in terms of GAP and lower bound. Therefore, it seemed necessary to
modify the strategy to reduce the computational effort of these procedures. Column 4 shows the results of Strategy 2, in which not all the variables but only those taking strictly positive values in the solution are considered for fixing. The results improve on those of the previous strategy in terms of
computing time, though the reduction of nodes is not so sharp, but they are still worse than the
initial ones. A further way of reducing the computational burden is not using the procedures at
every node in which one variable is fixed to 1, but only in some nodes, allowing the solution to
be changed more profoundly before using them again. That can be done in two ways: calling
the procedures after a given number of nodes, n0 , for instance after every n0 = 5, n0 = 10 or
n0 = 25 nodes, or calling them when, in the branch to which the node belongs, a given number
of variables has been fixed to 1 from the last call, t0 , for instance t0 = 3 or t0 = 5 variables. Both
alternatives have been tested for the values mentioned. Tables 3.7 and 3.8 show the general results of those alternatives; the results obtained for 10 nodes (Strategy 3 in Tables 3.5 and 3.6) and for 3 variables (Strategy 4 in Tables 3.5 and 3.6) seem to be the best alternatives.
For the instances solved, there is a small reduction in the number of nodes and a slight increase
in the running time. Therefore, its results are very similar to those of the Initial strategy. For
the unsolved instances, the results are slightly better, decreasing the GAP and increasing the
lower bound. In summary, updating bounds and fixing variables to 0 have a positive effect
only if the computational effort involved is carefully balanced with their advantages in terms
of reducing the size of the problem to be solved in successor nodes. According to our results, these procedures should be used every 10 nodes in which a variable has been fixed to 1 and only for variables which are strictly positive.
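The final calling scheme (running the procedures only every 10 nodes in which a variable has been fixed to 1) can be sketched as a simple counter; the class and parameter names are illustrative:

```python
class UpdateScheduler:
    """Gate the bound-updating and variable-fixing procedures so that they run
    only every n0-th node in which a variable has been fixed to 1
    (n0 = 10 in the final setting of the algorithm)."""

    def __init__(self, n0=10):
        self.n0 = n0
        self.count = 0

    def should_run(self, variable_fixed_to_one):
        """Called once per node; returns True when the procedures should run."""
        if not variable_fixed_to_one:
            return False
        self.count += 1
        if self.count >= self.n0:
            self.count = 0
            return True
        return False
```

The alternative scheme based on t0 variables fixed to 1 since the last call would only change the counting condition, not the structure.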
Table 3.5: Comparing strategies for updating bounds and fixing variables: Solved instances
Table 3.6: Comparing strategies for updating bounds and fixing variables: Unsolved instances
• Formulation: HS2 (horizontal slices, with lifted bound constraints, Section 2.3.3)
• Lower bounds on L: only the trivial bound of the length of the longest piece
• Branching strategy: Fischetti and Luzzi priorities, ordering the pieces by non-increasing area (Section 3.1.1)
• Updating bounds and fixing variables to 0: every 10 nodes in which a variable has been
fixed to 1, and only considering variables with strictly positive values (Sections 3.2.2-
3.3.2)
The complete results of the final version of the exact algorithm appear in Tables 3.9 and
3.10, separating its behavior for instances solved to optimality within the initial time limit of
3600 seconds from those not solved in that time. In Table 3.9, for each instance solved to
optimality, we show the optimal solution, number of nodes in the tree and running time. In
Table 3.10 we allow the algorithm to run much longer in order to study its evolution for harder
problems and show the lower and upper bounds obtained at each milestone (1 hour, 2 hours,
5 hours, 10 hours). The last two columns show the total running time if the instance has been
solved to optimality, and the optimal solution if it is known. If this is not the case, the value
corresponds to the best known solution and is marked with an asterisk.
The results of these two tables indicate that our branch and bound procedure is able to solve
optimally all the instances with up to 10 pieces, most of those with 12 pieces and some of those
with 14 pieces. Even for the solved instances, there are large differences in terms of the number
of nodes in the search tree and running times. For example, instances threep3 and threep3w9
have the same pieces and only differ in the strip width, 7 and 9 respectively, but this small
increase results in a very large increase in the solution time. Comparing sets J1 and J2, we
can observe that instances derived from J1 are much easier than those derived from J2. Pieces
in J1 are more regular and fit together more nicely, while pieces in J2 are more irregular and
there are always large amounts of waste between them (see Section 1.6). Table 3.10 shows that
long runs for J2 instances with 12 pieces can obtain solutions which are, if not optimal, then at
Table 3.7: Comparing the effect of studying every 5, 10 and 25 nodes in which a variable is fixed to 1
Table 3.8: Comparing the effect of studying the nodes when the number of variables fixed to 1 is a
multiple of 3 and 5
Table 3.9: Results of the Branch and Bound algorithm: Instances solved in less than 3600 seconds
least very close to optimality, while for instances of the same set with 14 pieces, even for
long runs the gaps between lower and upper bounds do not close.
In summary, even for the moderate number of pieces of the instances tested, our integer formulation, based on assigning variables to regions derived from the edges of the non-fit polygons, involves a large set of binary variables. Good branching strategies and reduction procedures, plus the power of the latest version of CPLEX, are not enough to speed up the search process and ensure an optimal solution for medium size problems. The optimal solutions for the tested instances appear in Section 1.6.2. When optimality has not been reached, the figure indicates that it corresponds to an upper bound.
Table 3.10: Results of the Branch and Bound algorithm: Instances not solved in 3600 seconds
Chapter 4
In this chapter we present several classes of valid inequalities for the mixed integer formulation
HS2 defined in Section 2.3. We discuss the separation algorithms in the next chapter.
In a branch and cut algorithm additional inequalities are used to cut fractional solutions
appearing in the nodes of the tree. Nevertheless, in many problems there are valid inequalities whose separation algorithms require so much computational effort that it does not pay to use them. Therefore, if the separation algorithms are too costly, it can be better to branch rather than to cut off the fractional solution. However, it is worth exploring ways of improving the cutting process in the Branch & Cut algorithm.
When we solve the linear relaxation of the HS2 formulation, with the binary variables relaxed to real variables bounded between 0 and 1, in many cases most of the pieces in the resulting solution are placed at the origin, overlapping each other. One possible reason is that the relationship between the real and the binary variables is too weak. In order to strengthen
the relation between variables xi and yi and variables bijk, pi, pj ∈ P, k ∈ {1, . . . , mij}, we use X-Y inequalities, defined in Section 4.1, and impenetrability inequalities, defined in Section 4.2.
These two types of inequalities use the bounds on the reference point of the pieces, so these
inequalities are only valid for the branch created from the current node, because the bounds on
the pieces change every time the set of binary variables fixed to 1 is modified (see Section 3.2).
In Sections 4.3 and 4.4 we define cover and LU-cover inequalities. We propose a classification of the set of binary variables depending on how many pieces are separated in a vertical direction. The idea is to find a set of binary variables such that the total width of the pieces being separated exceeds the width of the strip. Thus, not all of the variables in this set can take the value 1. LU-cover inequalities do not take into account how the pieces are sorted in the pile, whereas cover inequalities consider a specific ordering of the pieces in the pile.
Finally, in Section 4.5, we present the transitivity inequalities, which are an extension of
the idea presented in Section 3.3. The aim of these inequalities, as with LU-cover and cover
inequalities, is to find a set of binary variables which cannot take the value 1 simultaneously.
In this case we use the transitivity of the pieces instead of the width of the strip.
4.1 X-Y inequalities
These inequalities improve the relation between variables xi and yi and variables bijk, ∀pi ∈ P, ∀pj ∈ P \ {pi}, ∀k ∈ {1, . . . , mij}. We study two types of X-Y inequalities. The first type modifies the coefficients of the inequalities needed to describe the slices of the NFPs (Type I). The second type studies the relation between one of the coordinates of the reference point of a piece and the binary variables created from the NFPs of that piece and the rest of the pieces (Type II).
4.1.1 Type I
Let pi, pj ∈ P, i ≠ j, be a pair of pieces. An inequality used for describing a slice k ∈ {1, . . . , mij} of the NFPij has the following structure:

α(xj − xi) + β(yj − yi) ≤ Σ_{t=1}^{mij} qt bijt    (4.1)
In order to study the terms of the left-hand side separately, we define the following coefficients:

• α′ = −α if α > 0, and α′ = 0 otherwise
• α″ = α if α < 0, and α″ = 0 otherwise
• β′ = −β if β > 0, and β′ = 0 otherwise
• β″ = β if β < 0, and β″ = 0 otherwise
Now we drop the positive terms of the left-hand side of inequality (4.1). When bijk = 1, the following condition must hold:

α′xi + α″xj + β′yi + β″yj ≤ qk

because α′xi + α″xj + β′yi + β″yj ≤ α(xj − xi) + β(yj − yi). Let us consider the case in which α > 0 and β > 0 (the other cases are similar). That implies α″ = β″ = 0, α′ = −α and β′ = −β; then α′xi + α″xj + β′yi + β″yj = −αxi − βyi ≤ −αxi − βyi + αxj + βyj.
In the case that bijk = 0, the previous inequality may not be valid. However, if we multiply the right-hand side by bijk, then the resulting inequality is valid, because the coordinates of every piece, by definition, must be nonnegative. Therefore, inequality

α′xi + α″xj + β′yi + β″yj ≤ qk bijk

is valid.
This inequality can be improved by adding to its right-hand side the binary variables defined from the same NFP whose coefficients are negative, i.e. H := {t ≠ k | qt < 0}:

α′xi + α″xj + β′yi + β″yj ≤ qk bijk + Σ_{t∈H} qt bijt    (4.2)
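As an illustration, the derivation of the primed coefficients and of the index set H can be sketched as follows; the function names are illustrative, and H is taken here to be exactly the slices of the same NFP with negative coefficient qt, as described above:

```python
# Minimal sketch, not the thesis's implementation.

def primed_coefficients(alpha, beta):
    """(a', a'', b', b'') for an NFP edge coefficient pair (alpha, beta),
    following the case definitions given above."""
    a1 = -alpha if alpha > 0 else 0.0   # alpha'
    a2 = alpha if alpha < 0 else 0.0    # alpha''
    b1 = -beta if beta > 0 else 0.0     # beta'
    b2 = beta if beta < 0 else 0.0      # beta''
    return a1, a2, b1, b2

def rhs_terms(q, k):
    """Pairs (slice index, coefficient) on the right-hand side of (4.2):
    slice k plus the slices t != k of the same NFP with q[t] < 0."""
    return [(k, q[k])] + [(t, qt) for t, qt in enumerate(q) if qt < 0 and t != k]
```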
In the case that there are lower bounds greater than 0 on variables xi, xj, yi, yj, inequalities (4.2) can be improved. Let CXi, CXj, CYi, CYj be, respectively, those lower bounds. Inequality (4.2) can be lifted as follows:

α′xi + α″xj + β′yi + β″yj ≤ (qk + α″CXi + α′CXj + β″CYi + β′CYj) bijk + Σ_{t∈H} (qt + α″CXi + α′CXj + β″CYi + β′CYj) bijt    (4.3)

We call inequalities (4.3) X-Y inequalities of type I. These inequalities are valid because in
each solution exactly one slice of each NFP is used (one binary variable takes the value 1). Let
bijl = 1 with l ∈ H ∪ {k}. At most two terms of the right-hand side of inequality (4.3) take a negative value, the other two taking the value 0. Let us suppose that α″ = β″ = 0, α′ = −α and β′ = −β (that is, α > 0 and β > 0); the other cases are similar. Inequality (4.3) can thus be rewritten in the following way:

α′xi + β′yi ≤ ql + α′CXj + β′CYj
4.1.2 Type II
Let pi ∈ P. In what follows we study which binary variables of the entire problem are such that, when they take the value 1, coordinates xi or yi have to be increased.
Let pj ∈ P \ {pi} and let bijk ∈ VNFPij (VNFPij = {bijk | k = 1, . . . , mij}). The minimum value that xi or yi can take when bijk = 1 is defined by the limits of the slice in the NFPji-coordinate system, given by x̲jik, ȳjik, x̄jik and y̲jik. In Figure 4.1 we can see that x̲jik = −x̄ijk, x̄jik = −x̲ijk, y̲jik = −ȳijk and ȳjik = −y̲ijk.
Figure 4.1 shows that the slice defined by variable bijk is placed in the third quadrant. Then, when it is used, it forces piece i to be moved x̲jik units in a horizontal direction, because piece j protrudes from the left of piece i. Any inequality of the form xi ≥ x̲jik bijk is then valid. In the case that y̲jik > 0 (slices in the first and second quadrants), inequality yi ≥ y̲jik bijk is also valid. This idea is the same as the one we used to lift the bound constraints in the HS2 formulation (see Section 2.3), but in this case we consider the lower bounds on the coordinates of the pieces and we are going to include more binary variables from other NFPs.
Figure 4.1: Pieces pi and pj; the slice bijk represented in the NFPij-coordinate system (limits x̲ijk, x̄ijk, y̲ijk, ȳijk) and the same slice in the NFPji-coordinate system (limits x̲jik, x̄jik, y̲jik, ȳjik).
We denote by CXt and CYt, respectively, the lower bounds on xt and yt, ∀pt ∈ P. In Figure 4.1, if we consider CXj > 0, we can see that inequality xi ≥ x̲jik bijk can be improved in the following way:

xi ≥ (x̲jik + CXj) bijk    (4.4)
The inequality for yi can be obtained in a similar way. If we consider NFPji, we can add to inequality (4.4) any subset of binary variables each of which satisfies x̲jik > 0. We define Hij := {bijk | x̲jik > 0} and H′ij ⊆ Hij. Inequality (4.4) can be lifted in the following way:

xi ≥ Σ_{bijv ∈ H′ij} (x̲jiv + CXj) bijv    (4.5)
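A sketch of how the coefficients of a lifted type-II inequality (4.5) for xi can be collected, with x_low standing for the lower slice limits x̲jik; the names are illustrative, and here the whole set Hij is used as H′ij:

```python
# Minimal sketch, not the thesis's implementation.

def type2_coefficients_xi(x_low, CXj):
    """Coefficients of  xi >= sum_k (x_low[k] + CXj) * bijk  over the set
    Hij = {k : x_low[k] > 0}; slices with nonpositive limit are skipped."""
    return {k: v + CXj for k, v in x_low.items() if v > 0}
```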
Let pt ∈ P \ {pi, pj} be another piece and let us consider the slices whose associated variables bitk′, k′ ∈ {1, . . . , mit}, satisfy x̲tik′ > 0. These variables cannot simply be added to (4.5). The result may not be valid because pieces j and t could both be placed to the left of piece i, one on top of the other, in such a way that xi ≥ (x̲jik + CXj)bijk and xi ≥ (x̲tik + CXt)bitk hold, but xi ≥ (x̲jik + CXj)bijk + (x̲tik + CXt)bitk is not satisfied.
1. θijk ≤ x̲jik + CXj and θitk ≤ x̲tik + CXt
then inequality (4.6) is valid. If both variables take the value 0, the inequality is trivially valid. In the case that exactly one of them takes the value 1, then by Condition 1 inequality (4.6) is also valid. Finally, if both binary variables take the value 1, inequality (4.6) is also valid because of Condition 2.
Let b* be a subset of binary variables of ∪_{pj ∈ P\{pi}} Hij such that, when every variable of b* takes the value 1, it is possible to build a feasible solution for the problem. The set of all possible subsets of binary variables b* is denoted by B*. Note that two binary variables of the same NFP cannot both appear in any subset b*, because they cannot take the value 1 simultaneously. Let us consider the following constraints:

xi ≥ Σ_{pj ∈ P\{pi}} Σ_{bijv ∈ Hij} θijv bijv    (4.7)
We say that inequalities (4.7) are X-Y inequalities of type II.
Let us suppose that β = 0 (the case α = 0 is similar). The corresponding X-Y inequality of type I has the following structure:

α′xi + α″xj ≤ (qk + α″CXi + α′CXj) bijk + Σ_{t∈H} (qt + α″CXi + α′CXj) bijt

where just one of the coefficients α′ or α″ takes a value different from 0 and, furthermore, it is negative by definition. Let us suppose that α′ < 0 and α″ = 0. If we divide the previous inequality by α′, we obtain the following inequality:

xi ≥ μk bijk + Σ_{t∈H} μt bijt    (4.8)

where μt = (qt + α′CXj)/α′. Inequality (4.8) has the same structure as an X-Y inequality of type II for xi.
Figure 4.2: NFPij divided into eight slices, bij1, . . . , bij8.
In the case that α ≠ 0 and β ≠ 0, an X-Y inequality of type I is not dominated by X-Y inequalities of type II. If we look at the NFP represented in Figure 4.2, we can see that the slice defined by bij8 is not rectangular, so the extended slice does not match the original slice. The NFP-constraint obtained from the segment of NFPij which defines the limits of the slice associated with bij8 has the following form:

xi − xj + yi − yj ≤ 3bij1 + 5bij2 + 8bij3 + 10bij4 + 13bij5 + bij6 − 3bij7 − 6bij8
If we build the corresponding X-Y inequality of type I without taking into account lower bounds greater than 0, we get the following inequality:

xj + yj ≥ 3bij7 + 6bij8

The coefficient of variable bij8 in this inequality is 6. The only way to obtain this inequality using inequalities of type II is by considering the sum of two X-Y inequalities of type II, one for variable xj and another for variable yj. In both cases, the maximum coefficient of bij8 is 2, because x̲ij8 = y̲ij8 = 2, so the resulting coefficient of bij8 in the sum of these inequalities is at most 4, and it is impossible to obtain the value 6 of the coefficient in the corresponding X-Y inequality of type I.
However, if for each binary variable the associated slice has the same shape as its extended
slice, then X-Y inequalities of type I would be a particular case of X-Y inequalities of type II.
4.2 Impenetrability inequalities

Recall that each NFP-constraint has the structure of inequality (4.1),

α(xj − xi) + β(yj − yi) ≤ Σ_{t=1}^{mij} qt bijt
where i and j are the two pieces which are being separated, k ∈ {1, . . . , mi j } is the slice of the
NFPi j and the inequality defines one of its edges.
The coefficients of xi and xj have different signs, −α and α, in all the inequalities of the NFPs which have α ≠ 0 and β ≠ 0. The same situation applies to the coefficients of yi and yj, whose signs are, respectively, −β and β.
Let i and j be two pieces and let NFPi j be the associated Non-Fit Polygon. We are going to
study the minimum value of s := xi + x j + yi + y j in each one of the slices of the NFPi j . The
core of the impenetrability constraints is the separation of one piece from the origin because
all the fractional solutions obtained when the initial linear problem is solved place many pieces
at the origin or very close to it, with a lot of overlapping.
In Section 4.1 we presented the X-Y inequalities. In these inequalities we studied one or the
sum of two variables defined from the reference point of one or two pieces. Inequalities X-Y of
type I study the sum of two of these variables, where each one of the variables could be defined
from different pieces. In inequalities X-Y of type II, each variable associated with a coordinate
of the reference point of any piece is studied separately.
If we calculate the sum of four of the X-Y inequalities of type II, we can obtain a lower
bound for s. In the Impenetrability constraints this lower bound is improved and the resulting
inequalities will have better coefficients.
The Impenetrability constraints have the following structure:
s ≥ Σ_{l=1}^{mij} ql bijl    (4.9)

where

ql := min { s : bijl = 1 }    (4.10)
In the case that there are lower bounds on xi , yi , x j or y j greater than 0, these coefficients
would have a greater value. These inequalities are then valid only in the branch defined from the current node of the branch and cut tree (local constraints).
In what follows we present two approaches to dealing with problem (4.10). The first approach, an exact method, calculates the minimum value of s in such a way that there is a feasible placement for pieces i and j that attains this value. The second approach, an approximate method, uses a simplification of the slices in order to alleviate the computational effort. If we use the second approach, the resulting inequality could be slightly weaker, because coefficients ql could be lower than those given by the exact method.
Exact method for calculating coefficients ql
Let i and j be two pieces and let k ∈ {1, . . . , mi j } be a slice of NFPi j . We are going to calculate
the minimum value of s = xi + x j + yi + y j when bi jk = 1.
We use the notation described in Subsection 3.2.2. Let Sijk = {(x1, y1), . . . , (xr, yr)} be the set of r vertices sorted in anticlockwise order in the NFPij-coordinate system. We denote the four quadrants by Qt, t = 1, . . . , 4. The geometric space defined by slice k in the NFPij-coordinate system is given by the convex hull of Sijk, conv(Sijk).
First we calculate the coefficient qk in the case that there are no lower bounds greater than
0. After that we study how the lower bounds can be used to improve the coefficients.
We have to minimize |x| + |y| because any coordinate of piece j with a negative value in the
NFPi j -coordinate system has the following behavior in the stock sheet coordinate system (the
bottom-left corner of the stock sheet is located at the origin):
• If x < 0, then piece i has to be moved in a horizontal direction from the origin to the right
in order to place piece j in that position, in such a way that x j = 0 and xi = |x|.
• If y < 0, then piece i has to be moved in a vertical direction in order to place piece j such
that y j = 0 (and yi = |y|).
In the case that the coordinates of piece j are positive in the NFPij-coordinate system, the behavior in the stock sheet coordinate system satisfies xi = 0 and xj = x (and, analogously, yi = 0 and yj = y), so no displacement is needed.
For each point in the NFPi j -coordinate system there is an associated placement for pieces
i and j in the stock sheet coordinate system in such a way that s is minimized. The relative
position of both pieces is fixed and any other values of the coordinates of the pieces placed in
the same slice (respecting the relative position) would have a greater value of s.
An example is shown in Figure 4.3. The pair of pieces is drawn in Figure 4.1 and the corresponding NFPij is drawn in Figure 4.2. Let us suppose that we use the point R = (−3, 3) in order to separate pieces i and j. The reference points of pieces i and j are represented, respectively, in blue and green. In the stock sheet coordinate system we have to place pieces i and j as Figure 4.3 shows in order to minimize s, and the relative position described by point R is satisfied. Then xi = 3 and xj = 0, and the Y-coordinates keep the same value as in the NFPij-coordinate system (NFPij-CS).
Figure 4.3: Relative position of both pieces defined by point R = (−3, 3) such that s is minimized.
In the case that conv(Sijk) ∩ Qt = ∅ for some t, that quadrant is simply not taken into account when computing the minimum. If variable bijk is incompatible, i.e. bijk cannot take the value 1 because the pieces would exceed the limit of the strip, then we consider that qk = 0. That is, we do not include it in the inequality, because it can be dropped from the problem.
Using the intersections of slice k with the quadrants, we define, for all t = 1, . . . , 4, the following subsets:

Sijk^t := {(x1^t, y1^t), . . . , (x_{rt}^t, y_{rt}^t)},

in such a way that the convex hull of the points of Sijk^t matches conv(Sijk) ∩ Qt, i.e. conv(Sijk^t) = conv(Sijk) ∩ Qt. The intersection of two convex polygons is a convex polygon, so conv(Sijk^t) is the convex polygon which contains the part of the slice defined by bijk placed in Qt. Figure 4.4 shows a partition of slice k into two rectangles: the part corresponding to Q2 is drawn in green, conv(Sijk^2), and the part placed in Q3, conv(Sijk^3), is represented in blue. The sets of points are the following:

Sijk^2 := {(−3, 2), (−7, 2), (−7, 0), (−3, 0)}
Sijk^3 := {(−3, 0), (−7, 0), (−7, −1), (−3, −1)}
In this case there are no lower bounds on the coordinates of the pieces. In order to obtain the coefficient qk of an impenetrability constraint, we have to calculate the minimum of the sum of the coordinates of i and j, in absolute value, in the NFPij-coordinate system over all the relative positions given by the vertices of Sijk^t, ∀t ∈ {1, . . . , 4}. In the example shown in Figure 4.4 we can see that there are six points to study, defined by Sijk^2 and Sijk^3. The minimum value of the sum of the coordinates in absolute value is given by the point (−3, 0), and then qk = 3.
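The computation just described can be sketched in a few lines; the vertex lists are those of Sijk^2 and Sijk^3 above, and the function name is illustrative:

```python
# Minimal sketch of the exact computation of qk when all lower bounds are 0:
# qk is the minimum of |x| + |y| over the vertices of the intersections of
# the slice with each quadrant.  (Illustrative code, not the thesis's.)

def qk_no_bounds(quadrant_vertex_sets):
    """quadrant_vertex_sets: one vertex list per nonempty conv(Sijk) ∩ Qt."""
    return min(abs(x) + abs(y)
               for part in quadrant_vertex_sets
               for (x, y) in part)

S2 = [(-3, 2), (-7, 2), (-7, 0), (-3, 0)]    # Sijk^2, from the example
S3 = [(-3, 0), (-7, 0), (-7, -1), (-3, -1)]  # Sijk^3
```

With the six example vertices, the minimum is reached at (−3, 0), giving qk = 3, as in the text.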
If the coordinates of both pieces have positive lower bounds, denoted as CXi , CYi , CX j and
CY j , then coefficients qk of an impenetrability constraint can be improved. In order to modify
the coefficients we have to take into account the unfeasible configurations.
Figure 4.4: Slice bijk partitioned into conv(Sijk^2) (in green, quadrant Q2) and conv(Sijk^3) (in blue, quadrant Q3).
In this case, since xi ≥ CXi, yi ≥ CYi, xj ≥ CXj and yj ≥ CYj, the coefficient always satisfies

qk ≥ CXi + CYi + CXj + CYj

In the case that R = (CXj − CXi, CYj − CYi) belongs to slice k, i.e. if R ∈ conv(Sijk), then qk = CXi + CYi + CXj + CYj.
Using the same idea presented above, quadrants Qt, t = 1, . . . , 4, define a partition of R² such that

qk = min_{t ∈ {1,...,4}} { min_{(x,y) ∈ conv(Tijk) ∩ Qt} ω }.
In this case, instead of |x| + |y| we use ω, whose value will be at least |x| + |y|. Let us define ω = ω1 + ω2, where ω1 denotes the sum of the X-coordinates of both pieces, which can be calculated in the following way:

• If x < CXj − CXi, then we have to move piece i in a horizontal direction in order to place piece j in such a way that xj = CXj and xi = CXj − x. In this case, ω1 = 2CXj − x.

• If x > CXj, then pieces i and j satisfy xi = CXi and xj = x. In this case, ω1 = CXi + x.
Similarly, ω2 denotes the sum of the Y-coordinates of both pieces, which can be calculated in the following way:

• If y < CYj − CYi, then piece i has to be moved in a vertical direction in order to place piece j in such a way that yj = CYj and yi = CYj − y. In this case, ω2 = 2CYj − y.

• If y > CYj, then pieces i and j satisfy yi = CYi and yj = y. In this case, ω2 = CYi + y.
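The cases above can be sketched as a function of a relative position (x, y). The intermediate cases (CXj − CXi ≤ x ≤ CXj, and the analogous one for y) are not written out in the text, so the values used for them here (CXi + CXj and CYi + CYj) are inferred from the worked example that follows and should be read as an assumption:

```python
# Illustrative sketch, not the thesis's implementation.

def omega(x, y, CXi, CYi, CXj, CYj):
    """Sum of the coordinates of both pieces after transforming the relative
    position (x, y) into a placement respecting the lower bounds."""
    if x < CXj - CXi:
        w1 = 2 * CXj - x        # xi = CXj - x, xj = CXj
    elif x > CXj:
        w1 = CXi + x            # xi = CXi, xj = x
    else:
        w1 = CXi + CXj          # intermediate case (assumed from the example)
    if y < CYj - CYi:
        w2 = 2 * CYj - y        # yi = CYj - y, yj = CYj
    elif y > CYj:
        w2 = CYi + y            # yi = CYi, yj = y
    else:
        w2 = CYi + CYj          # intermediate case (assumed from the example)
    return w1 + w2
```

With the lower bounds of the example below (CXi = 2, CYi = 1, CXj = 1, CYj = 3), vertex (−3, 2) gives ω = 9 and point (−3, 0) gives ω = 11, matching the values in the text.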
Figure 4.5: Pieces i and j placed in the stock sheet coordinate system minimizing s when the lower bounds are considered.
Then, for each point in slice k in the NFPi j -coordinate system, we can obtain the coordi-
nates of both pieces in the stock sheet coordinate system (xi , yi , x j , y j ), for which ω = s. By
definition, lower bounds of pieces are always satisfied and, furthermore, any other placement of
both pieces satisfying the relative position given by the point of the slice will produce a greater
value of s.
The transformation of the relative positions given by two different points of the slice into feasible placements could produce the same value of s. Furthermore, the point which minimizes s in the NFPij-coordinate system need not be a vertex of Sijk^t for any t = 1, . . . , 4. Even in that case, there is a vertex of some Sijk^t whose transformation places both pieces in such a way that the value of s is minimum.
To obtain qk, we calculate ω for all the relative positions given by the vertices of Sijk^t, ∀t ∈ {1, . . . , 4}; the minimum value is qk. Figure 4.4 shows an example. The lower bounds are CXi = 2, CYi = 1, CXj = 1 and CYj = 3. When we consider these lower bounds, the configuration xi = 3, yi = 0, xj = 0 and yj = 0 is no longer valid, because the lower bounds would not be satisfied. The minimum coefficient is given by vertex (−3, 2) of Sijk^2, where:
• x < CXj − CXi, so ω1 = 2CXj − x = 5.
• y = CYj − CYi, so ω2 = CYi + CYj = 4.
As ω = ω1 + ω2 = 9, then qk = 9. The transformation of point (−3, 0) produces xi = 4, yi = 3,
x j = 1 and y j = 3, so at that point s = 11. In Figure 4.5, pieces i and j are drawn in the stock
sheet coordinate system in such a way that slice k is used and s is minimized. The dashed lines
in blue represent the lower bounds of piece i and the dashed lines in green represent the lower
bounds of piece j.
If we consider CXi = 5, then qk = 10, which is obtained at vertex (−3, 2). The pieces are placed satisfying xi = CXi, yi = CYi, xj = CXj and yj = CYj. If we represent the relative position in the NFPij-coordinate system, we can observe that the point is (−4, 2), which is not a vertex of Sijk^2.
Approximate method for calculating coefficients ql

In this case, the idea is to consider the transformation of complex slices (slices with more than 4 vertices) into extended slices (see Section 3.3.2). If the slice is approximated, there is no guarantee that the coefficient obtained is the best one, i.e. the coefficient could be lower, but the calculation is easier and faster.

Every slice is modified to have at most 4 vertices. We therefore have to study at most 4 + h points, where h = 2 if the slice crosses one of the axes in the NFPij-coordinate system and h = 4 if it crosses both axes.
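A sketch of the approximate method under the stated simplification: each slice is replaced by its enclosing rectangle (the extended slice) and |x| + |y| is evaluated at the 4 corners plus the points where the rectangle crosses the axes, the 4 + h points above. Function and variable names are illustrative:

```python
# Illustrative sketch, not the thesis's implementation.

def approx_q(slice_vertices):
    """Approximate coefficient q for one slice, using the enclosing
    rectangle of the slice instead of the slice itself."""
    xs = [x for x, _ in slice_vertices]
    ys = [y for _, y in slice_vertices]
    xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
    pts = [(xmin, ymin), (xmin, ymax), (xmax, ymin), (xmax, ymax)]
    if xmin < 0 < xmax:                 # rectangle crosses the Y-axis
        pts += [(0, ymin), (0, ymax)]
    if ymin < 0 < ymax:                 # rectangle crosses the X-axis
        pts += [(xmin, 0), (xmax, 0)]
    return min(abs(x) + abs(y) for x, y in pts)
```

Because the rectangle contains the slice, the minimum can only decrease, which is why the coefficients ql obtained this way can be lower than those of the exact method.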
In the following example we can see an NFPij in which the impenetrability inequality obtained by the approximate method is dominated by the inequality obtained by the exact method.
Let us consider the NFPi j drawn in Figure 4.6. If there are no lower bounds on the pieces,
the corresponding impenetrability constraint obtained by the exact method is:
On the other hand, if we calculate the coefficients using the approximate method, the extended slice obtained from slice 9 is:
where these vertices represent, respectively, the bottom-left corner and the top-right corner of
the enclosing rectangle, so q9 = 0.5 + 0.5 = 1. All the other coefficients remain equal, and we
obtain an impenetrability inequality which is dominated by the previous one.
Figure 4.6: NFPij divided into nine slices, bij1, . . . , bij9.
4.3 Cover inequalities

The inequalities studied in this section consider a specific order for the pieces. That is, it is
important to know which piece is placed at the bottom of the pile and in which order the rest
of the pieces are placed. If we place all the pieces one completely on top of the other (there is
no parallelism between any pair of pieces), then any order of the pieces in the set produces the
same width of the pile. In the next section we develop this idea.
If the binary variables that we consider in the inequality allow the pieces to match in a
vertical way, that is, when the segments obtained by the projection on the Y-axis of the pieces
overlap, then the given order of pieces could affect the total width of the pile, depending on the
quantity of overlap in the Y-axis which is allowed for each pair of pieces in the pile.
Let us take a pair of pieces i, j ∈ P and let bijk, for some k ∈ {1, . . . , mij}, be a binary variable which takes the value 1. In order to identify whether the separation of these two pieces forces one piece to protrude from the other in a vertical direction, we use the limits of the slice on the Y-axis. If we want the total width of the two pieces to increase, the minimum width of any arrangement of both pieces must be greater than the width of each piece separately.
Let {1, . . . , l} be a set of l pieces. We call a stack a set of ordered pieces 1 − 2 − · · · − l such that piece 1 is placed at the bottom of the pile, piece 2 protrudes from the top of piece 1 in a vertical direction, and so on, with piece l placed on the top of the pile. Let C(1, . . . , l) be the set of the l piles of pieces in which the cyclic order 1 − 2 − · · · − l is preserved but the starting piece can change. That is, the pile built as 2 − 3 − · · · − l − 1 also belongs to C(1, . . . , l). We say that C(1, . . . , l) is a chain.
Let us suppose that bijk = 1 and y̲ijk is strictly positive. That is, piece j is placed satisfying yj ≥ yi + y̲ijk. On the other hand, if ȳijk < 0 and bijk = 1, then piece j is placed satisfying yj ≤ yi + ȳijk (see Figure 2.10).
Now we use some notation from Section 2.3, with some changes. Let i and j be a pair of pieces. Uij is the set of variables whose associated slices separate piece j upwards from piece i, with piece i protruding from below piece j (note that this definition is somewhat different from that used in Section 2.3). In a similar way, Dij is the set of variables whose associated slices separate piece i upwards from piece j, with piece j protruding from below piece i.
Figure 4.7: NFP12 of pieces p1 and p2, divided into eight slices, b121, . . . , b128.
In what follows, we define a partition of Uij (Dij) classifying the binary variables according to the amount of overlap allowed in a vertical direction for both pieces.

We denote by Uij^0 (Dij^0) the subset of binary variables which separate piece j entirely on top of piece i (piece i entirely on top of piece j):

Uij^0 := {k ∈ Uij | Ȳij − y̲ijk = 0}
Dij^0 := {k ∈ Dij | |Y̲ij − ȳijk| = 0}

In the example appearing in Figure 4.7, U12^0 = {1} and D12^0 = {5}.

Let υij^1 := min_{k ∈ Uij \ Uij^0} (Ȳij − y̲ijk) (δij^1 := min_{k ∈ Dij \ Dij^0} |Y̲ij − ȳijk|). We define the following subsets of variables:

Uij^1 := {k ∈ Uij | Ȳij − y̲ijk = υij^1}
Dij^1 := {k ∈ Dij | |Y̲ij − ȳijk| = δij^1}
In the example represented in Figure 4.7, we can see that U12^1 = {2, 8}, D12^1 = {4, 6}, υ12^1 = 2 and δ12^1 = 2.
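The class partition can be sketched by grouping the variables of Uij by their gap value Ȳij − y̲ijk; the code is illustrative, and the gap values used in the test for classes beyond 1 are hypothetical:

```python
# Illustrative sketch, not the thesis's implementation.

def classes(gap):
    """gap: {slice index k in Uij: gap value Ybar_ij - ylow_ijk}.
    Returns (parts, values), where parts[t] is the class Uij^t and
    values[t] its common gap; class 0 always corresponds to gap 0,
    even if it turns out to be empty."""
    values = sorted(set(gap.values()) | {0.0})
    parts = [sorted(k for k, v in gap.items() if v == val) for val in values]
    return parts, values
```

For the example of Figure 4.7, gaps {1: 0, 2: 2, 8: 2} give Uij^0 = {1}, Uij^1 = {2, 8} and υij^1 = 2.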
Iteratively, if we have already defined Uij^{t−1}, Dij^{t−1} and υij^t := min_{k ∈ Uij \ ∪_{s=0}^{t−1} Uij^s} (Ȳij − y̲ijk) (δij^t := min_{k ∈ Dij \ ∪_{s=0}^{t−1} Dij^s} |Y̲ij − ȳijk|), then we define:

Uij^t := {k ∈ Uij | Ȳij − y̲ijk = υij^t}
Dij^t := {k ∈ Dij | |Y̲ij − ȳijk| = δij^t}
Let υij ∈ N (δij ∈ N) be the integer such that the following conditions hold:

• Uij^{υij} ≠ ∅ (Dij^{δij} ≠ ∅)

• Uij^{υij+1} = ∅ (Dij^{δij+1} = ∅)

in such a way that variables in Uij^t (Dij^t) allow pieces i and j to match in a vertical direction less than variables from Uij^s (Dij^s), where t < s ≤ υij (t < s ≤ δij). We say that Uij (Dij) has υij + 1 (δij + 1) classes. Variables from Uij^t (Dij^t) belong to class t.
Now we present a preliminary result. The idea is to fix to 0 any binary variable such that, if it were fixed to 1, the relative position of the pieces associated with the corresponding NFP would exceed the width of the strip.
Theorem 1
Let i, j ∈ P. If there is a variable bijk for some k such that y̲ijk > W − wj or y̲jik > W − wi, then every feasible solution of the problem satisfies bijk = 0.

Proof:
If bijk = 1, then W − wj ≥ yj ≥ y̲ijk > W − wj or W − wi ≥ yi ≥ y̲jik > W − wi, a contradiction. §
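The test of Theorem 1 is straightforward to sketch; the names are illustrative, with ylow_ijk and ylow_jik standing for the lower slice limits in the two coordinate systems:

```python
# Illustrative sketch, not the thesis's implementation.

def can_fix_to_zero(ylow_ijk, ylow_jik, wi, wj, W):
    """True if the variable bijk can be fixed to 0 by Theorem 1."""
    return ylow_ijk > W - wj or ylow_jik > W - wi
```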
Theorem 2
Let i, j, k ∈ P. If there are three subsets U′ij ⊆ Uij, U′jk ⊆ Ujk and U′ki ⊆ Uki in such a way that the following conditions are satisfied:

1. U′ij ≠ ∅, U′jk ≠ ∅ and U′ki ≠ ∅.

2. If a variable of U′jk belongs to class v (it lies in Ujk^v), then ∪_{l≤v} Ujk^l ⊆ U′jk, and the analogous condition holds for U′ij and U′ki.

We denote by τ*ij the highest class given by the variables of U′ij. Similarly, we define τ*jk and τ*ki, respectively, as the highest classes given by the variables of U′jk and U′ki.

3. Let ϖ1 ∈ R and ϖ2 ∈ R be, respectively, the largest and the second largest value of the following real numbers: {υij^{τ*ij}, υjk^{τ*jk}, υki^{τ*ki}}. Then,

wi + wj + wk − ϖ1 − ϖ2 > W

Then the following inequality is valid:

Σ_{s ∈ U′ij} bijs + Σ_{s ∈ U′jk} bjks + Σ_{s ∈ U′ki} bkis ≤ 1    (4.11)
We say that the previous inequality is a vertical clique (or just a clique) inequality associated
with chain C(i, j, k).
Proof:
The different feasible orderings for stacking three pieces with chain C(i, j, k) are:
(a) i − j − k: That implies that piece i is placed below piece j and, simultaneously, piece j is
placed below piece k.
(b) j − k − i: That implies that piece j is placed below piece k and, at the same time, piece k
is placed below piece i.
(c) k − i − j: That implies that piece k is placed below piece i and, at the same time, piece i
is placed below piece j.
The combination which produces the minimum total width is the one that allows more overlap in a vertical direction between the adjacent pieces. Let us suppose that ϖ1 = υij^{τ*ij} and ϖ2 = υjk^{τ*jk}, i.e. υij^{τ*ij} ≥ υjk^{τ*jk} ≥ υki^{τ*ki}. The other cases can be proved in a similar way.

As υij^{τ*ij} ≥ υki^{τ*ki} and υjk^{τ*jk} ≥ υki^{τ*ki}, the combination with the smallest width is given by chain (a). The amount of width used is wi + wj + wk − ϖ1 − ϖ2, which by hypothesis 3 exceeds the width of the strip. So if a variable bijs ∈ U′ij and a variable bjkt ∈ U′jk take the value 1 simultaneously, it is impossible to build a feasible solution. Thus:

Σ_{s ∈ U′ij} bijs + Σ_{s ∈ U′jk} bjks ≤ 1
In order to add the term Σ_{s ∈ U′ki} bkis to the left-hand side of the inequality, we have to prove that none of the variables in set U′ki can take the value 1 simultaneously with any variable of sets U′ij or U′jk.
• If both bkis ∈ Uki0 , bi jt ∈ Ui0j take the value 1, then we get the combination k − i − j which
produces a stack wider than i − j − k, so the width of the strip is exceeded.
• If $b_{kis} \in U'_{ki}$ and $b_{jkt} \in U'_{jk}$ take the value 1, then we get combination j − k − i, which produces a stack wider than i − j − k, so the width of the strip is exceeded.
In conclusion, inequality 4.11 is valid when $\varpi_1 = \upsilon_{ij}^{\tau^*_{ij}}$ and $\varpi_2 = \upsilon_{jk}^{\tau^*_{jk}}$. By applying this argument to the remaining cases, we obtain that inequality 4.11 is valid in all cases. §
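To make the clique condition concrete, here is a small worked check with hypothetical numbers (not taken from the thesis):

```latex
% Hypothetical data: w_i = 5, w_j = 6, w_k = 4, strip width W = 11,
% and largest allowed vertical overlaps
%   \upsilon_{ij}^{\tau^*_{ij}} = 2, \quad
%   \upsilon_{jk}^{\tau^*_{jk}} = 1, \quad
%   \upsilon_{ki}^{\tau^*_{ki}} = 0.
% Then \varpi_1 = 2, \varpi_2 = 1, and
\[
  w_i + w_j + w_k - \varpi_1 - \varpi_2 = 15 - 2 - 1 = 12 > 11 = W,
\]
% so hypothesis 3 holds and the clique inequality
\[
  \sum_{s \in U'_{ij}} b_{ijs} + \sum_{s \in U'_{jk}} b_{jks}
  + \sum_{s \in U'_{ki}} b_{kis} \le 1
\]
% is valid: even the tightest pile of the three pieces overflows the strip.
```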
Theorem 3
Let $i_1, \ldots, i_r \in P$ be $r$ pieces and let $U'_{s(s+1)} \subseteq U_{s(s+1)}$, $1 \le s \le r$, where $i_{r+1} = i_1$, in such a way that the following conditions hold:
• $U'_{s(s+1)} \neq \emptyset$, $1 \le s \le r$
• If $k \in U'_{s(s+1)}$ such that $k \in U^v_{s(s+1)}$, then $\bigcup_{l \le v} U^l_{s(s+1)} \subseteq U'_{s(s+1)}$.
• We denote by $t^*_{s(s+1)}$ the highest class given by variables of $U'_{s(s+1)}$. Then,
$$\sum_{l=1}^{r} w_l - \Big( \sum_{l=1}^{r} \upsilon_{l(l+1)}^{t^*_{l(l+1)}} - \upsilon_{\tau(\tau+1)}^{t^*_{\tau(\tau+1)}} \Big) > W, \qquad (4.12)$$
where $\tau \in \{1, \ldots, r\}$ satisfies $\upsilon_{\tau(\tau+1)}^{t^*_{\tau(\tau+1)}} \le \upsilon_{l(l+1)}^{t^*_{l(l+1)}}$, $\forall l \in \{1, \ldots, r\}$.
Therefore, inequality 4.13 is valid.
$$\sum_{l=1}^{r} \sum_{b_{l(l+1)k} \in U'_{l(l+1)}} b_{l(l+1)k} \le r - 1. \qquad (4.13)$$
We say that inequalities 4.13 are vertical covers, or just covers, associated with chain $C(1, 2, \ldots, r-1)$.
Proof:
If r = 3, we really have a clique inequality because the third condition is the same in both cases.
Let us suppose that the combination with the minimum total width is given by:
1 − 2 − ··· − r
that is, this is the combination which allows more overlap between adjacent pieces in a vertical
direction.
If a binary variable from each $U'_{s(s+1)}$, $1 \le s < r$, takes the value 1 then, by the third condition, the stack exceeds the width of the strip, so any solution satisfies:
$$\sum_{l=1}^{r-1} \sum_{b_{l(l+1)k} \in U'_{l(l+1)}} b_{l(l+1)k} \le r - 2$$
because for the left-hand side to take its maximum value $r-1$, it would be necessary to fix one binary variable of each subset $U'_{l(l+1)}$ to 1, which is infeasible.
To get inequality (4.13) we need to add the sum $\sum_{b_{1rk} \in U'_{r1}} b_{1rk}$ to the left-hand side. Note that any combination such that $\sum_{l=1}^{r} \sum_{b_{l(l+1)k} \in U'_{l(l+1)}} b_{l(l+1)k} = r$ exceeds the width of the strip, because any way of stacking the pieces produces a pile wider than the combination defined by $1 - 2 - \cdots - r$. Then inequality (4.13) is valid. §
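As a quick sanity check on (4.13), the left-hand side can be evaluated on a fractional LP solution; the following sketch (the function name and list-based interface are ours) flags a violated cover:

```python
def cover_cut_violated(b_vals_by_pair, r):
    """Evaluate the vertical cover cut (4.13) on a fractional solution.

    b_vals_by_pair -- r lists; entry l holds the (possibly fractional)
                      values of the variables chosen in U'_{l(l+1)}
    Returns (lhs, violated): the cut requires lhs <= r - 1.
    """
    lhs = sum(sum(vals) for vals in b_vals_by_pair)
    return lhs, lhs > r - 1
```

For example, with r = 3 and pair sums 1.0, 1.0 and 0.5, the left-hand side is 2.5 > 2, so the cover cut is violated.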
All the results presented above can be modified in order to build cliques and covers in a
horizontal direction. In order to build these inequalities we need an upper bound of the length
of the strip, Lub . The following definitions are similar to the vertical case.
Let i, j ∈ P be two pieces. We denote by Rij the set of variables whose associated slices
separate piece j to the right of piece i and, furthermore, piece i protrudes from the left of piece j.
Similarly, we define by Lij the set of variables whose associated slices separate piece i towards
the right of piece j and, furthermore, piece j protrudes from the left of piece i.
$$R_{ij} := \{ k \mid x_{ijk} \ge 0,\; \overline{X}_{ij} - x_{ijk} < l_j \}$$
$$L_{ij} := \{ k \mid x_{ijk} \le 0,\; |\underline{X}_{ij} - x_{ijk}| < l_j \}$$
Note that $R_{ij} = L_{ji}$. In what follows we define a partition of $R_{ij}$ ($L_{ij}$) in order to classify the binary variables, taking into account the amount of overlap allowed between the given pair of pieces in the horizontal direction.
Let $i, j \in P$ and let $k \in \{1, \ldots, m_{ij}\}$. We define $R^0_{ij}$ ($L^0_{ij}$) as the subset of variables whose associated slices separate piece j towards the right of piece i (piece i towards the left of piece j), described in the following way:
$$R^0_{ij} := \{ k \in R_{ij} \mid \overline{X}_{ij} - x_{ijk} = 0 \} \qquad L^0_{ij} := \{ k \in L_{ij} \mid |\underline{X}_{ij} - x_{ijk}| = 0 \}$$
$$R^1_{ij} := \{ k \in R_{ij} \mid \overline{X}_{ij} - x_{ijk} = \rho^1_{ij} \} \qquad L^1_{ij} := \{ k \in L_{ij} \mid |\underline{X}_{ij} - x_{ijk}| = \lambda^1_{ij} \}$$
$$\vdots$$
$$R^t_{ij} := \{ k \in R_{ij} \mid \overline{X}_{ij} - x_{ijk} = \rho^t_{ij} \} \qquad L^t_{ij} := \{ k \in L_{ij} \mid |\underline{X}_{ij} - x_{ijk}| = \lambda^t_{ij} \}$$
Then, we have built a partition of $R_{ij}$ ($L_{ij}$),
$$R_{ij} = \bigcup_{s=0}^{\rho_{ij}} R^s_{ij} \qquad \Big( L_{ij} = \bigcup_{s=0}^{\lambda_{ij}} L^s_{ij} \Big)$$
in such a way that variables from $R^t_{ij}$ ($L^t_{ij}$) allow pieces i and j to overlap in the horizontal direction less than variables from $R^s_{ij}$ ($L^s_{ij}$) with $t < s \le \rho_{ij}$ ($t < s \le \lambda_{ij}$).
We say that $R_{ij}$ ($L_{ij}$) has $\rho_{ij} + 1$ ($\lambda_{ij} + 1$) classes. Variables of $R^t_{ij}$ ($L^t_{ij}$) belong to class $\rho^t_{ij}$ ($\lambda^t_{ij}$).
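The partition into classes amounts to grouping slices by their allowed-overlap value; a minimal sketch in Python (the dictionary interface and function name are our own assumptions):

```python
from collections import defaultdict

def overlap_classes(slice_overlaps):
    """Group slice indices by their allowed overlap value.

    slice_overlaps -- dict mapping slice index k to the overlap value
                      it allows (e.g. the quantity X̄_ij - x_ijk).
    Returns the classes in increasing order of allowed overlap, so that
    class 0 allows the least overlap, mirroring the partition
    R_ij = R^0_ij ∪ ... ∪ R^{ρ_ij}_ij described in the text.
    """
    by_value = defaultdict(list)
    for k, value in slice_overlaps.items():
        by_value[value].append(k)
    return [sorted(by_value[v]) for v in sorted(by_value)]
```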
Theorem 4
Let $i, j, k \in P$. If there are three subsets $R'_{ij} \subseteq R_{ij}$, $R'_{jk} \subseteq R_{jk}$ and $R'_{ki} \subseteq R_{ki}$ such that the following conditions hold:
1. $R'_{ij} \neq \emptyset$, $R'_{jk} \neq \emptyset$ and $R'_{ki} \neq \emptyset$.
2. We denote by $\tau^*_{ij}$ the highest class given by variables from $R'_{ij}$. Similarly, $\tau^*_{jk}$ and $\tau^*_{ki}$ denote the highest classes given by $R'_{jk}$ and $R'_{ki}$, respectively.
3. $$l_i + l_j + l_k - \varpi_1 - \varpi_2 > L_{ub}$$
where $\varpi_1 \in \mathbb{R}$ and $\varpi_2 \in \mathbb{R}$ are, respectively, the largest and the second largest value of the following real numbers: $\{\rho_{ij}^{\tau^*_{ij}}, \rho_{jk}^{\tau^*_{jk}}, \rho_{ki}^{\tau^*_{ki}}\}$.
Then inequality (4.14) is valid.
$$\sum_{s \in R'_{ij}} b_{ijs} + \sum_{s \in R'_{jk}} b_{jks} + \sum_{s \in R'_{ki}} b_{kis} \le 1. \qquad (4.14)$$
We say that this inequality is a horizontal clique associated with chain C(i, j, k, i).
Proof:
Similar to Theorem 2.
Theorem 5
Let $i_1, \ldots, i_r \in P$ be $r$ pieces and let $R'_{s(s+1)} \subseteq R_{s(s+1)}$, $1 \le s \le r$, where $i_{r+1} = i_1$, such that the following conditions hold:
• $R'_{s(s+1)} \neq \emptyset$, $1 \le s \le r$
• If $k \in R'_{s(s+1)}$ such that $k \in R^v_{s(s+1)}$, then $\bigcup_{l \le v} R^l_{s(s+1)} \subseteq R'_{s(s+1)}$.
• We denote by $t^*_{s(s+1)}$ the highest class given by variables from $R'_{s(s+1)}$. Then:
$$\sum_{h=1}^{r} l_h - \Big( \sum_{h=1}^{r} \rho_{h(h+1)}^{t^*_{h(h+1)}} - \rho_{\tau(\tau+1)}^{t^*_{\tau(\tau+1)}} \Big) > L_{ub},$$
where $\tau \in \{1, \ldots, r\}$ satisfies $\rho_{\tau(\tau+1)}^{t^*_{\tau(\tau+1)}} \le \rho_{h(h+1)}^{t^*_{h(h+1)}}$, $\forall h \in \{1, \ldots, r\}$.
Then inequality (4.15) is valid. We say that this inequality is a horizontal cover associated with chain $1 - 2 - \cdots - r - 1$.
$$\sum_{h=1}^{r} \sum_{b_{h(h+1)k} \in R'_{h(h+1)}} b_{h(h+1)k} \le r - 1. \qquad (4.15)$$
Proof:
Similar to Theorem 3.
Let us consider the example with three pieces shown in Figure 3.1 and W = 7. We build a clique inequality as follows:
First, we identify the following sets of variables:
* $U'_{12} = \{b_{121}\}$ and $U^1_{12} = \{b_{122}, b_{128}\}$. Furthermore, $\upsilon^0_{12} = 1$ and $\upsilon^1_{12} = 2$.
* $U'_{23} = \{b_{231}\}$. Furthermore, $\upsilon^0_{23} = 0$.
* $U'_{31} = \{b_{135}\}$ and $U^1_{31} = \{b_{134}, b_{136}\}$. Furthermore, $\upsilon^0_{31} = 1$ and $\upsilon^1_{31} = 2$.
We denote by $\overline{Y}_{ij}$ ($\underline{Y}_{ij}$) the maximum (minimum) value of coordinate Y of the $NFP_{ij}$ in the $NFP_{ij}$-coordinate system.
In order to know the extent to which piece j protrudes from piece i in the vertical direction when a given slice is used, $b_{ijk} = 1$, we need to calculate $\overline{Y}_{ij} - \overline{y}_{ijk}$ if $\overline{y}_{ijk} > 0$, or $\underline{y}_{ijk} - \underline{Y}_{ij}$ if $\underline{y}_{ijk} < 0$. This is a measure of the overlap allowed for pieces i and j in the vertical direction, and we have to compare it with the minimum width of pieces i and j. If the difference is less than the minimum width of pieces i and j, then piece j must protrude from piece i. In the case that $\underline{y}_{ijk} < 0$ and $\overline{y}_{ijk} > 0$, the slice would allow piece j to be placed such that $y_j = y_i$ and piece j would not protrude from piece i.
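The protrusion test described above can be sketched as follows (a sketch with our own argument names; `Y_max`, `Y_min` stand for $\overline{Y}_{ij}$, $\underline{Y}_{ij}$, and `y_k` for the slice bound on the relevant side):

```python
def vertical_overlap_allowed(Y_max, Y_min, y_k):
    """Vertical overlap a slice allows between pieces i and j.

    Mirrors the text: Y̅_ij - y̅_ijk when the slice bound is positive,
    and y_ijk - Y_ij when it is negative.
    """
    if y_k > 0:
        return Y_max - y_k
    return y_k - Y_min

def piece_j_must_protrude(Y_max, Y_min, y_k, w_i, w_j):
    """Piece j protrudes from piece i whenever the allowed overlap is
    smaller than min(w_i, w_j)."""
    return vertical_overlap_allowed(Y_max, Y_min, y_k) < min(w_i, w_j)
```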
Let $i, j \in P$. We define by $U^*_{ij}$ ($D^*_{ij}$) the set of binary variables whose associated slices separate piece j on top of piece i (piece i on top of piece j), where $w_{ij} := \min\{w_i, w_j\}$.
In Figure 4.8 we present an example in which $w_i = 5$ and $w_j = 6$. Slices defined by variables $\{b_0, b_1, b_7, b_8\}$ separate piece j on top of piece i. Variable $b_0$ forces piece j to be placed entirely on top of piece i, i.e. if $b_0 = 1$, then pieces i and j cannot overlap in the vertical direction. If we look at the slice defined by $b_1$, the pieces can be placed in such a way that an overlap of 4 units in the vertical direction is allowed; then piece j would protrude just 2 units from piece i, because $w_j = 6$, and piece i would protrude 1 unit below piece j. Note that $U^*_{ji} = \{b_0, b_1, b_7, b_8\}$ and $D^*_{ji} = \{b_3, b_4, b_5\}$.
Let $C = \{i_1, \ldots, i_r\}$, $1 < r \le N$, be a set of $r$ pieces, and let $U'_{i_s i_t} \subseteq U^*_{i_s i_t}$, $U'_{i_s i_t} \neq \emptyset$, and $D'_{i_s i_t} \subseteq D^*_{i_s i_t}$, $D'_{i_s i_t} \neq \emptyset$, $\forall 1 \le s < t \le r$. We denote by $h$ the index of a variable $b_{i_s i_t h}$ which satisfies $\overline{y}_h = \overline{Y}_{ij}$, and by $h'$ the index of a variable $b_{i_s i_t h'}$ which satisfies $\underline{y}_{h'} = \underline{Y}_{ij}$. We always consider that $b_{i_s i_t h} \in U'_{i_s i_t}$ and $b_{i_s i_t h'} \in D'_{i_s i_t}$. We denote $UD'_{i_s i_t} := U'_{i_s i_t} \cup D'_{i_s i_t}$. Note that $U'_{i_s i_l} = D'_{i_l i_s}$ $\forall i_s, i_l \in C$.
Theorem 6
If inequality (4.16) is satisfied, then inequality (4.17) is valid. We say that this inequality is an
LU-cover inequality.
$$\sum_{s=1}^{r} w_{i_s} - \delta > W \qquad (4.16)$$
$$\sum_{s=1}^{r-1} \sum_{l=s+1}^{r} \sum_{k \in UD'_{i_s i_l}} B_{i_s i_l k} \le \sum_{s=1}^{r-1} (r - s) - 1, \qquad (4.17)$$
where
$$\delta := \max_{\tau \in \pi\{C\}} \Big\{ \sum_{t=1}^{r-1} \sum_{l \in U'_{\tau(t)\tau(t+1)}} q_{\tau(t)\tau(t+1)l} \, B_{\tau(t)\tau(t+1)l} \Big\}$$
$q_{\tau(t)\tau(t+1)l}$ being the amount of overlap over the Y axis of pieces $\tau(t+1)$ and $\tau(t)$ when $b_{\tau(t)\tau(t+1)l} = 1$. $\pi\{C\}$ is the set of all the permutations of the pieces in $C$. An element $\tau \in \pi\{C\}$ is defined by $\tau = \{\tau(1), \ldots, \tau(r)\}$ where $\forall t$, $\tau(t) = i_l$ for some $l \in \{1, \ldots, r\}$.
Proof:
From the equations $\sum_{k=1}^{m_{ij}} B_{ijk} = 1$ for each $NFP_{ij}$, $1 \le i < j \le N$, used in the HS2 formulation, we get:
$$\sum_{s=1}^{r-1} \sum_{l=s+1}^{r} \sum_{k \in UD'_{i_s i_l}} B_{i_s i_l k} \le \sum_{s=1}^{r-1} (r - s).$$
Let us suppose that there is a solution satisfying the previous inequality as an equality. Note that all pieces $i_1, \ldots, i_r$ have to be separated using slices whose associated binary variables belong to $UD'_{ij}$.
In order to calculate the minimum width given by any pile built with pieces $i_1, \ldots, i_r$, we add up the widths of the pieces as if each were placed on top of the adjacent piece, and then for each pair of adjacent pieces we subtract the maximum overlap allowed along the Y axis. The minimum width of the $r$ pieces is given by the left-hand side of inequality (4.16), which is greater than W. Then it is not possible that $\sum_{s=1}^{r-1} \sum_{l=s+1}^{r} \sum_{k \in UD'_{i_s i_l}} B_{i_s i_l k} = \sum_{s=1}^{r-1} (r - s)$, so inequality (4.17) holds. §
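The coefficient $\delta$ maximizes the total adjacent overlap over all orderings of the pieces in C. For small r this can be evaluated by brute force over permutations; a sketch (the pairwise-overlap dictionary is our own encoding of the q coefficients, taken at value $B = 1$):

```python
from itertools import permutations

def max_stack_overlap(pieces, overlap):
    """Largest total vertical overlap between adjacent pieces over all
    orderings, in the spirit of the coefficient delta of (4.16).

    pieces  -- list of piece identifiers
    overlap -- dict (a, b) -> overlap allowed when b sits on top of a;
               missing pairs are treated as 0
    """
    best = 0
    for order in permutations(pieces):
        total = sum(overlap.get((order[t], order[t + 1]), 0)
                    for t in range(len(order) - 1))
        best = max(best, total)
    return best
```

Since the maximum runs over all r! permutations, this brute-force evaluation is only sensible for the small chains considered here.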
In the example appearing in Figure 4.8, the coefficients are: $q_{ij0} = 0$, $q_{ij1} = 4$, $q_{ij7} = q_{ij8} = 2$, $q_{ji3} = q_{ji4} = 3$ and $q_{ji5} = 0$.
Note that variables b3 , b4 and b5 belong to U ∗ji = D∗i j and we have to calculate how much
piece j protrudes from piece i.
These inequalities could be more accurate without using the classification of the binary variables, but the computational effort would increase considerably. In order to reduce the effort needed to know which slices are incompatible, we will use the extended slices defined in Section 3.2.2.
Note that we can consider more variables in order to lift the previous inequality. If we
look at NFP23 , variables b232 , b233 , b234 , b235 , b236 , b238 are also incompatible with b122 = 1 and
b138 = 1, so the inequality b122 + b138 + b232 + b233 + b234 + b235 + b236 + b238 ≤ 2 is also valid.
In the previous inequality we consider more than one binary variable from $VNFP_{23}$. It is possible to add other binary variables from $VNFP_{12}$ and $VNFP_{13}$ instead of these variables from $VNFP_{23}$, but we would have to check that there is no combination of three binary variables taking the value 1 which produces a feasible solution. In Section 3.3.1 we presented several conditions to determine whether three binary variables are incompatible.
Chapter 5
Separation algorithms
In this Chapter we present separation algorithms for some of the inequalities presented in Chap-
ter 4 and some computational results.
In Section 5.1 we present the separation algorithms for both types of X-Y inequalities. For
X-Y inequalities of type I we use an exact procedure which is based on studying each one of
the inequalities (9.7) used in the HS2 formulation (non-overlapping inequalities). For X-Y in-
equalities of type II we propose an iterative procedure which builds either the most violated
inequality or the inequality which is closest to being violated by the current fractional solution.
Separation for the Impenetrability constraints is discussed in Section 5.2. Finally, in Sec-
tion 5.3 we present an algorithm to add all the cliques at the root node and two separation
algorithms for the cover inequalities. The computational experiments in all the cases show
that the separation algorithms require a great computational effort and, despite the fact that we
need to explore fewer nodes on the solved instances, the computational time increases and it
makes no sense to add the separation algorithms of these inequalities to the Branch & Bound
algorithm described in Chapter 3.
We do not present any separation algorithm for the LU-cover inequalities described in Sec-
tion 4.4 or the Transitivity inequalities defined in Section 4.5 because our computational ex-
perience shows that the separation algorithms that we have tried work in a similar way to the
cover inequalities and it is not appropriate to add these inequalities to the Branch & Bound
algorithm.
In order to separate X-Y inequalities of type I, we study each one of the inequalities (9.7) in the HS2 formulation defined from each NFP. These inequalities have the following structure:
$$\alpha^{kf}_{ij} (x_j - x_i) + \beta^{kf}_{ij} (y_j - y_i) \le \sum_{h=1}^{m_{ij}} \delta^{kf}_{ijh} b_{ijh} \qquad (5.1)$$
In order to separate X-Y inequalities of type II, we consider each one of the coordinates of each piece and, in each case, we build the X-Y inequality of type II which is most violated (or closest to being violated) by the current solution. Let $s_0$ be the solution given by the linear problem of the current node. We denote $B^+ := \{b_{ijk} \mid b^0_{ijk} > 0, \forall 1 \le i < j \le N, \forall 1 \le k \le m_{ij}\}$, where $b^0_{ijk}$ denotes the value of variable $b_{ijk}$ in the solution $s_0$. The set $B^+$ contains the binary variables of the HS2 formulation whose values in the current solution are positive.
Let i ∈ P. In what follows we build two inequalities, one for each coordinate of piece i. We
define the following sets of binary variables:
The order of the variables in vectors B+xi and B+yi is given by the value of the given variables
in the current solution in non-increasing order, i.e. the first positions are taken by the variables
with large values in the current solution (s0 ).
Let $t_x = |B^+_{x_i}|$ and $t_y = |B^+_{y_i}|$. We build the X-Y inequality of type II for coordinate $x_i$ in $t_x$ steps.
In a similar way, we build the X-Y inequality of type II for coordinate yi in ty steps.
Let $\lambda_j$ be the maximum value that the coefficient of any binary variable $b_{ijk} \in B^+_{x_i}$, $j \in \{1, \ldots, N\}$, $j \neq i$, can take. At the beginning we consider that $\lambda_j = 0$, $\forall p_j \in P \setminus \{p_i\}$.
In a first step we include the first variable of vector $B^+_{x_i}$, $b^x_{ij_1k_1}$, in the inequality. The corresponding coefficient is $\theta^x_1 := \underline{x}_{j_1ik_1} + CX_{j_1}$, where $\underline{x}_{j_1ik_1}$ denotes the minimum value defined by $b^x_{ij_1k_1}$ on the X-axis in the $NFP_{j_1i}$ coordinate system.
$$x_i \ge \theta^x_1 b^x_{ij_1k_1}$$
When $\theta^x_1 b^x_{ij_1k_1}$ is added to the inequality, we update the value $\lambda_{j_1} = \theta^x_1$. The previous inequality is going to be updated by adding more terms to the right-hand side until all the variables of $B^+_{x_i}$ have been studied. Let $b^x_{ij_2k_2} \in B^+_{x_i}$ ($b^x_{ij_2k_2} \in VNFP_{ij_2}$) be the next variable of $B^+_{x_i}$. The coefficient $\theta^x_2$ is calculated in the following way:
• If $j_1 = j_2 \Rightarrow \theta^x_2 = \underline{x}_{j_2ik_2} + CX_{j_2}$.
• If $j_1 \neq j_2 \Rightarrow \theta^x_2 = \underline{x}_{j_2ik_2} + CX_{j_2} - \sum_{r=1, r \neq i, j_2}^{N} \lambda_r$.
If $j_1 = j_2$, then variables $b^x_{ij_1k_1}$ and $b^x_{ij_2k_2}$ belong to the same NFP and, as it is not possible for both to take the value 1 at once, their coefficients do not interfere. However, if $j_1 \neq j_2$, then it is possible for these two variables to take the value 1 simultaneously, so the sum of both coefficients must be lower than or equal to the maximum value that each coefficient could take individually. Note that $\sum_{r=1, r \neq i, j_2}^{N} \lambda_r = \theta^x_1$. In the case that $\theta^x_2 < 0$, the variable is not included in the inequality. If $\theta^x_2 > \lambda_{j_2}$, we update the value $\lambda_{j_2} = \theta^x_2$.
where several coefficients $\theta^x_r$, $r = 1, \ldots, t$, may have taken the value 0. Let $b^x_{ij_{t+1}k_{t+1}}$ be the next variable of $B^+_{x_i}$ to be studied. The maximum coefficient with which $b^x_{ij_{t+1}k_{t+1}}$ can be added to constraint (5.2) is given by:
$$\theta^x_{t+1} = \underline{x}_{j_{t+1}ik_{t+1}} + CX_{j_{t+1}} - \sum_{r=1, r \neq i, j_{t+1}}^{N} \lambda_r$$
Once all the variables of $B^+_{x_i}$ have been studied, a new X-Y inequality of type II is built:
$$x_i \ge \sum_{r=1}^{t_x} \theta^x_r b^x_{ij_rk_r}$$
$\theta^y_r$ being the coefficients calculated in a similar way to $\theta^x_r$, and $b^y_{ij_rk_r}$ the variables belonging to $B^+_{y_i}$.
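The greedy coefficient computation for one coordinate can be sketched as follows (a simplified model of the procedure above: function and variable names, and the dictionary interface, are ours; the $\underline{x}$ and $CX$ values are assumed to be given):

```python
def build_xy2_coefficients(vars_sorted, x_min, CX, i, n_pieces):
    """Greedy coefficients for an X-Y type-II cut on coordinate x_i.

    vars_sorted -- variables (j, k) from B+_{x_i}, sorted by their
                   fractional value in non-increasing order
    x_min[(j, k)] -- minimum X value the slice defines in NFP_{ji} coords
    CX[j]         -- translation constant of piece j's coordinate system
    Returns {(j, k): theta}, with theta computed as in the text:
    theta = x_min + CX_j - sum of the lambdas of the other pieces.
    """
    lam = {j: 0.0 for j in range(n_pieces) if j != i}
    theta = {}
    for (j, k) in vars_sorted:
        t = x_min[(j, k)] + CX[j] - sum(lam[r] for r in lam if r != j)
        if t < 0:
            continue              # negative coefficient: variable skipped
        theta[(j, k)] = t
        lam[j] = max(lam[j], t)   # update lambda_j when theta exceeds it
    return theta
```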
Any X-Y inequality is added to the linear problem of the current node, and to every node created in the same branch, when it is violated by more than $\epsilon_1$. Initially we consider $\epsilon_1 = 0.1$. The inequalities which are violated by less than $\epsilon_1$ produce only a slight modification of the current solution, so we do not add them to the linear problem.
Table 5.1: Branch & Cut with XY inequalities
Problem XY1 + XY2 XY2 XY1
LB GAP Nnodes Time Separation Time FV XY LB GAP Nnodes Time Separation Time FV XY LB GAP Nnodes Time Separation Time FV XY
three 6.00 0.00 28 0.09 0.02 0 11 6.00 0.00 29 0.05 0.00 0 4 6.00 0.00 26 0.05 0.00 0 8
shapes4 24.00 0.00 10 0.11 0.00 0 4 24.00 0.00 10 0.09 0.00 0 3 24.00 0.00 16 0.09 0.00 0 1
fu5 17.89 0.00 465 0.25 0.11 9 149 17.89 0.00 474 0.22 0.06 9 126 17.89 0.00 470 0.14 0.00 7 86
glass1 45.00 0.00 0 0.09 0.00 0 0 45.00 0.00 0 0.11 0.00 0 0 45.00 0.00 0 0.11 0.00 0 0
fu6 23.00 0.00 191 0.23 0.09 7 133 23.00 0.00 182 0.20 0.05 6 95 23.00 0.00 212 0.11 0.02 14 60
threep2 9.33 0.00 6062 3.51 1.14 0 1973 9.33 0.00 6107 3.20 0.78 8 1545 9.33 0.00 6991 2.14 0.00 4 745
threep2w9 8.00 0.00 15756 9.86 3.21 1008 11510 8.00 0.00 16342 9.91 3.03 957 10443 8.00 0.00 15596 5.55 0.08 898 6495
fu7 24.00 0.00 265 0.42 0.09 29 400 24.00 0.00 203 0.30 0.12 13 280 24.00 0.00 280 0.22 0.00 24 145
glass2 45.00 0.00 1834 7.07 1.08 117 5979 45.00 0.00 1758 6.15 1.15 93 5276 45.00 0.00 1863 2.31 0.02 102 1478
fu8 24.00 0.00 1001 1.42 0.62 87 1378 24.00 0.00 413 0.62 0.23 26 560 24.00 0.00 908 0.55 0.00 67 550
shapes8 26.00 0.00 11402 12.68 3.00 630 11022 26.00 0.00 10677 11.86 3.20 662 10574 26.00 0.00 12305 7.69 0.03 748 5821
fu9 25.00 0.00 197463 202.57 76.63 13766 193212 25.00 0.00 202416 199.15 74.94 12903 187534 25.00 0.00 166032 82.20 0.90 9051 77885
threep3 13.53 0.00 2876976 2757.35 812.27 19749 1290068 13.53 0.00 3018257 2822.92 816.45 24183 1313935 13.53 0.00 3211906 1776.55 11.12 23613 889705
threep3w9 10.00 0.09 2825666 3565.96 1100.54 139968 3135650 10.00 0.11 2644912 3222.33 997.78 131288 2850692 10.00 0.09 5463455 3566.81 23.42 304902 3371871
fu10 28.69 0.00 976564 1445.32 503.09 76216 1266894 28.68 0.00 953524 1430.33 496.27 74569 1163739 28.69 0.00 906734 609.50 4.18 72063 517532
dighe2 100.00 0.00 4429 28.13 4.77 625 18370 100.00 0.00 3851 26.18 4.70 494 14611 100.00 0.00 4370 13.73 0.05 648 5001
fu 28.00 -1.00 796027 3591.05 1011.29 78982 2356059 28.00 -1.00 842213 3579.05 1017.81 78799 2230902 30.00 -1.00 2837758 3575.36 32.56 324573 2959123
poly1a0 13.00 -1.00 593367 3592.87 757.73 121052 3005951 13.00 0.25 748746 3576.34 880.77 136730 3317781 13.00 0.24 1686120 3582.19 25.72 284436 4328784
dighe1 85.00 -1.00 186955 3595.93 515.80 19888 1121163 85.19 -1.00 201722 3573.86 521.32 22047 1098912 90.33 -1.00 459234 3590.39 19.23 54199 646499
J2-10-35-9 18.62 0.00 877233 1394.71 388.38 100819 1555110 18.62 0.00 928350 1417.16 389.21 109727 1596151 18.62 0.00 890338 762.13 4.48 105693 1014935
J2-10-35-8 22.00 0.00 212680 359.19 129.96 22540 370836 22.00 0.00 185031 300.16 107.28 19097 296039 22.00 0.00 172418 118.79 0.84 15863 117172
J2-10-35-7 18.67 0.00 286026 418.39 126.59 32874 523229 18.67 0.00 344069 493.48 148.20 40556 625905 18.67 0.00 291302 224.25 1.44 35715 382301
J2-10-35-6 18.00 0.00 1988136 2979.57 861.36 227334 3041733 18.00 0.00 1877921 2684.54 766.26 223177 2761875 18.00 0.00 2007734 1599.65 8.02 229381 1915170
J2-10-35-5 20.00 0.00 325264 489.25 148.48 38561 513291 20.00 0.00 383017 559.28 167.47 46709 577987 20.00 0.00 366760 278.59 1.56 45744 349828
J2-10-35-4 19.44 0.00 1185856 2041.83 620.45 185875 2625428 19.43 0.00 1128749 1891.82 570.57 176203 2390953 19.44 0.00 1042240 786.76 4.06 176024 1445922
J2-10-35-3 20.00 0.02 2254313 3576.46 1163.08 200438 3736529 20.00 0.02 2360461 3571.19 1155.00 203082 3613464 20.38 0.00 2218086 1493.43 8.21 184861 1714825
J2-10-35-2 19.95 0.00 224637 427.94 124.97 29470 489019 19.95 0.00 251721 453.06 131.15 32594 515185 19.95 0.00 251027 222.61 1.37 33111 316382
J2-10-35-1 21.30 0.00 517025 870.55 282.69 53870 868337 21.30 0.00 643841 1038.17 330.19 66034 1021907 21.30 0.00 501411 385.49 2.50 49967 429790
J2-10-35-0 23.66 0.00 1208417 2526.53 755.03 157306 2528264 23.66 0.00 1078265 2208.49 656.59 140901 2156474 23.66 0.00 1135719 1011.57 6.94 159046 1264005
J1-12-15-9 12.00 0.00 32208 107.34 34.80 4054 86196 12.00 0.00 42808 130.60 42.79 5377 106936 12.00 0.00 35172 41.79 0.37 4113 38679
J1-12-15-8 17.00 0.00 152684 716.62 181.41 11255 389892 17.00 0.00 155021 696.45 176.69 12303 377710 17.00 0.00 162646 246.09 1.28 12548 117341
J1-12-15-7 13.00 0.00 148905 414.76 118.81 14945 339036 13.00 0.00 171525 470.81 135.44 17271 379013 13.00 0.00 121125 148.33 0.89 12288 110057
J1-12-15-6 14.00 0.07 916930 3586.48 934.66 118156 2823282 14.50 0.01 1019142 3572.73 908.08 129795 2757926 14.71 0.00 1031396 1447.14 7.46 142618 1426214
J1-12-15-5 13.00 0.07 1061438 3585.39 1041.76 110874 2550464 13.00 0.04 924123 3579.44 989.86 98772 2352623 13.55 0.00 1163219 1431.68 10.03 123902 1001055
J1-12-15-4 17.50 0.00 649672 2777.63 632.97 64289 1511026 17.50 0.00 655889 2704.71 616.05 65420 1438674 17.50 0.00 649191 977.07 5.90 63416 427769
J1-12-15-3 10.93 0.00 409258 1021.00 289.35 50144 791198 10.93 0.00 439052 1031.31 293.50 54388 786270 10.93 0.00 408227 529.56 3.35 52957 466977
J1-12-15-2 16.00 0.00 233520 894.57 243.33 23991 580773 16.00 0.00 199703 736.03 200.20 21004 476000 16.00 0.00 174545 244.97 1.28 17669 128764
J1-12-15-1 14.00 0.00 563893 1658.62 481.34 64917 1214376 14.00 0.00 1110920 3262.67 944.15 134133 2518901 14.00 0.00 648886 843.44 5.41 73488 650624
J1-12-15-0 15.00 0.00 67658 301.25 82.74 6631 191259 15.00 0.00 68951 320.24 88.73 6958 203237 15.00 0.00 70929 106.63 0.66 6932 59860
5.1.1 Computational results
In order to study the effect of the XY inequalities, we have considered three different versions
of the Branch & Cut algorithm. The first one separates both types of inequalities (XY1 + XY2),
the second version separates just the second type of XY inequalities and the third version only
tries to find XY inequalities of type I. In Table 5.1 we can see the computational results obtained
by these three versions on instances described in Table 1.2. The time limit on all the algorithms
is one hour.
We can see that, in general, the computational effort needed to separate XY inequalities of both types is not justified. The first column for each algorithm is the lower bound that the algorithm reaches in one hour. The second column represents the relative GAP, $(UB - LB)/UB$. A value of $-1$ indicates that the given algorithm is not able to find any feasible solution. Note that in
instances fu and dighe1, no algorithm is able to find a feasible solution, so the gap does not
make sense. The first algorithm also fails to find a feasible solution for instance poly1a0. If
we look at instances for which the gap is positive (unsolved instances), we can see that the best
results are obtained with XY1, probably because it is faster and the algorithm studies a larger
number of nodes.
The third and fourth columns represent, respectively, the computational time used by CPLEX
and the computational time required by the separation algorithm. We can see that the last algo-
rithm has a lower separation time than the first two, that is, the separation of the XY inequalities
of type 2 is very slow.
Finally, in the two last columns we include the number of binary variables fixed to 0 (in-
compatible variables, see Section 3.3), and the number of XY inequalities added in the whole
branch and cut tree.
We build the impenetrability constraint for each pair of pieces i, j ∈ P only in the following
cases:
• There is a variable bi jk ∈ V NFPi j such that the value in the current solution is fractional,
between 0 and 1. Note that if there are no variables of V NFPi j with a fractional value in
the current solution, then there is a variable with a value of 1, which means that pieces i
and j are separated and no impenetrability constraint is violated.
• The enclosing rectangles of both pieces have a non-empty intersection. In the case that the intersection is empty, the pieces are separated and no impenetrability constraint would be violated.
Once we have decided to build the inequality, its coefficients can be calculated by using ei-
ther the exact or the approximate methods described in Section 4.2. The exact method requires
a very high computational effort and the separation would be very slow.
Every time an inequality is built, it is added to the linear problem if it is violated by more than $\epsilon_2$, i.e. if:
$$s^* - \sum_{l=1}^{m_{ij}} q_l b^*_{ijl} \le -\epsilon_2 \qquad (5.3)$$
where $s^* = x^*_i + y^*_i + x^*_j + y^*_j$; $x^*_i$, $y^*_i$, $x^*_j$, $y^*_j$ and $b^*_{ijk}$, $\forall k \in \{1, \ldots, m_{ij}\}$, being the values in the current solution of the corresponding variables. Initially we consider that $\epsilon_2 = 0.1$.
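The violation test (5.3) is straightforward to evaluate; a sketch with our own argument names:

```python
def impenetrability_violated(sol_coords, q, b_vals, eps2=0.1):
    """Check whether an impenetrability cut is violated by more than eps2.

    sol_coords -- (x_i*, y_i*, x_j*, y_j*) in the fractional solution
    q          -- coefficients q_l of the cut, one per slice
    b_vals     -- fractional values b*_{ijl} of the slice variables
    Implements test (5.3): s* - sum_l q_l * b*_{ijl} <= -eps2.
    """
    s_star = sum(sol_coords)
    lhs = s_star - sum(ql * bl for ql, bl in zip(q, b_vals))
    return lhs <= -eps2
```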
We can see that the computational effort needed to separate the impenetrability constraints
is not justified. Branching is a better strategy than trying to find these types of inequalities.
where b0i jk denotes the value of the variable in the current solution.
Table 5.2: Branch & Cut + IMPENETRABILITY + XY1
(EX) The total width of the given pieces subtracting the maximum overlap of the pieces in a
vertical direction must exceed the width of the strip. That is, condition (4.12) must be
satisfied.
In the next subsection we present an exact method for the separation of the clique inequalities. In Subsections 5.3.2 and 5.3.3 we propose two different heuristic algorithms: the first performs a more exhaustive search, while the second is simpler and faster.
Let us consider the directed graph G = (V, E). The set of vertices V represents all the variables in the problem, i.e. each vertex is associated with a binary variable $b_{ijk}$, for any $i, j \in P$, $k \in VNFP_{ij}$. We add an arc for each pair of incompatible vertices $(b_{ij_1k_1}, b_{ij_2k_2})$, $i, j_1, j_2 \in P$.
Then, by construction, any maximal clique of G = (V, E) involves at most three pieces.
Note that it is possible for a clique to involve only two pieces. We do not take into account
these cliques. In order to build a clique inequality we consider the maximal cliques in graph
G = (V, E) which involve exactly three pieces.
Then, all the sets of three pieces and the respective subsets of binary variables needed
to build a clique inequality (see theorem 2) are given by all the maximal cliques in graph
G = (V, E). We use the algorithm of Östergård [52] to obtain all the maximal cliques in the graph.
We denote by Nmin the minimum number of pieces needed to exceed the width of the strip.
Let Pmin be an ordered list of pieces in a non-decreasing order of width. Then, we need the first
Nmin pieces of Pmin to exceed the width of the strip.
Let wmin be the minimum width such that, when we add up the width of the Nmin − 1 first
pieces of Pmin and wmin , then the width of the strip is exceeded. We denote by pmin the piece
which satisfies:
• There is no piece whose width is less than wmin and greater than w pmin .
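Following the ordering used in the text (pieces sorted in non-decreasing order of width), $N_{min}$ can be computed greedily; a sketch with hypothetical names:

```python
def pieces_needed_to_exceed(widths, W):
    """N_min following the text: sort the widths in non-decreasing order
    and count how many of the first pieces of P_min are needed before the
    summed width exceeds the strip width W (a sketch; names are ours)."""
    total = 0
    for n, w in enumerate(sorted(widths), start=1):
        total += w
        if total > W:
            return n
    return None  # even all pieces together do not exceed W
```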
In order to obtain a chain of pieces we build the following graph. Let G = (V, A) be the directed graph represented in Figure 5.1. We denote the set of vertices by V and the set of arcs by A.
[Figure 5.1: a layered graph in which the $N^*$ piece vertices are replicated in columns; copy $t$ contains vertices $tN^* + 1, \ldots, tN^* + N^*$, for $t = 0, \ldots, r+1$.]
t + 1 of the graph. Note that i and j have to refer to different pieces. The cost of arc (i, j) is given by:
$$1 - \sum_{k \in U_{i-tN^*,\, j-(t+1)N^*}} b^0_{i-tN^*,\, j-(t+1)N^*,\, k}$$
Once the graph G is generated, we calculate the $N^*$ shortest paths between each one of the vertices in the first column and the vertices in the last column (last copy), in such a way that the initial and the final vertices of the path represent the same piece. If there is a path whose length is lower than 2, then the chain of pieces given by the path satisfies condition (AP) and it would be interesting to study it.
Let γ = (i1 , . . . , ir , i1 ) be the shortest path. In order to reduce the notation, we consider
now that i1 , . . . , ir make reference to the pieces and not to the copies. One of the following
conditions hold:
(a) The chain given by path γ does not repeat any pieces with the exception of piece i1 .
(b) There is a piece ik , k , 1 such that it appears more than once in path γ, i.e. γ =
(i1 , . . . , ik , . . . , ik , . . . , ir , i1 ).
We do not study case (b) because the value of the binary variables is strongly reduced and
the given path is not promising to find a violated inequality. We will continue studying the
shortest path with other endpoints.
If case (a) holds, then we consider the chain given by the path in order to build a clique or
a cover inequality (condition (AP) holds). Then, we have to check if condition (EX) is satisfied.
Path $\gamma$ provides the chain $C_\gamma = C(i_1, \ldots, i_r)$. We know that the following condition holds:
1. $\sum_{s=1}^{r} w_{i_s} > W$
Condition 1 shows that the pieces exceed the width of the strip if they cannot overlap in the vertical direction. On the other hand, condition 2 shows that the pieces form a pile because the sum of
variables that we consider is greater than or equal to r − 2. Note that we consider all possible
matchings of pieces in the vertical direction because we consider all the binary variables whose
slices separate both pieces in the NFP in a vertical direction.
We define $h_{ex} := \sum_{s=1}^{r} w_{i_s} - W$ and $h_{ap} := \sum_{b_{ijk} \in BT} b_{ijk} - (r - 2)$. These two numbers measure the slack in conditions 1 and 2, respectively.
Let $S := (\upsilon_{12}^{t_{12}}, \ldots, \upsilon_{(r-1)r}^{t_{(r-1)r}}, \upsilon_{r1}^{t_{r1}})$ be the vector of allowed overlaps. Initially, as we consider all the variables of subsets $U_{s(s+1)}$, $s = 1, \ldots, r-1$, and $U_{r1}$, then $\upsilon_{s(s+1)}^{t_{s(s+1)}} = \upsilon_{s(s+1)}$ and $\upsilon_{r1}^{t_{r1}} = \upsilon_{r1}$.
The chain Cγ defines r feasible orderings of the pieces. Each one of them can be obtained
by changing the piece which is situated at the bottom of the pile, and the subsequent pieces are
given by the next pieces of Cγ until all the pieces are stacked. In order to obtain the ordering
with more overlap in the Y axis, i.e. the ordering of the r pieces in such a way that the width of
the pile is minimum, we have to add up all the elements of vector S with the exception of the
minimum value, which makes reference to the pair of pieces with less overlap allowed.
Let $s_0 = \min\{\min_{1 \le s < r} \upsilon_{s(s+1)}^{t_{s(s+1)}}, \upsilon_{r1}^{t_{r1}}\}$. The width of the ordering with maximum overlap, $a_m$, is given by:
$$a_m = \sum_{s=1}^{r} w_{i_s} - \Big( \sum_{s=1}^{r-1} \upsilon_{s(s+1)}^{t_{s(s+1)}} + \upsilon_{r1}^{t_{r1}} - s_0 \Big)$$
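The computation of $a_m$ can be sketched directly from the formula (the list-based interface and function name are ours):

```python
def min_pile_width(widths, overlaps):
    """Width a_m of the ordering with maximum overlap.

    widths   -- w_{i_1}, ..., w_{i_r}
    overlaps -- allowed overlaps of the consecutive pairs, cyclically:
                (i_1,i_2), ..., (i_{r-1},i_r), (i_r,i_1)  (the vector S)
    Every overlap counts except the smallest one, s_0, which corresponds
    to the pair left "open" in the pile, so
    a_m = sum(widths) - (sum(overlaps) - s_0).
    """
    s0 = min(overlaps)
    return sum(widths) - (sum(overlaps) - s0)
```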
• In the case that am < W, then the ordering with the minimum width fits on the strip. We
then try to eliminate some variables of BT in order to forbid certain overlaps of pieces in
the vertical direction.
When we are in the second case, we select one of the subsets $U^* \in U_{s(s+1)} \cup U_{r1}$ and eliminate variables in the following way:
(P1) We select a set $U^* \in U_{s(s+1)} \cup U_{r1}$ which has not been studied yet.
(P2) We calculate the overlap of the class of variables that we are going to eliminate as $s_{c^*} = \max\{s_0, W - a_m + \epsilon\}$, where $\epsilon > 0$. We denote by $c^*$ the class which allows the most overlap in such a way that the overlap is lower than or equal to $s_{c^*}$. Note that we eliminate the variables which allow placements with an overlap greater than $s_{c^*}$ in order to attain one of the following goals:
(P3) Let $U^*_1 = U^{*c^*}$. If it is satisfied that $\sum_{b \in U^*_1} b < h_{ap}$, i.e. when the variables which allow more overlap are eliminated and the pieces are still stacked, then we update the sets $BT \leftarrow (BT \setminus U^*) \cup U^*_1$ and $U^* \leftarrow U^*_1$. In the case that condition $\sum_{b \in U^*_1} b < h_{ap}$ holds, we label all the sets as not studied. If the previous condition is not satisfied, then set $U^*$ is labeled as studied.
(P4) If the new chain with minimum width exceeds the width of the strip, we have found a
violated inequality. Otherwise, we go back to step (P1).
This algorithm consists in building a tree for each piece. Let $i \in P$. The nodes of the tree represent the pieces. The root node is given by piece $i$. The successors of a given node $k$ are the pieces of $U^*_k$ such that the chain given by the nodes from the root node to $k$ can generate a violated clique or cover inequality. Let $C$ be the chain of pieces from the root node ($i$) to the node defined by piece $k' \in U^*_k$. Let $\alpha_{k'} = \sum_{p_t \in C} w_t$ and let $\delta_{k'}$ be the maximum overlap allowed.
1. $\beta_{k'} < \epsilon_2$
We consider that $\epsilon = 0.4$ and $\epsilon_2 = 1.7$. In a future study it could be interesting to tune these parameters.
In the case that Condition 1 is satisfied and Condition 2 is not, adding the root node to C we
obtain a chain that produces a violated clique or cover inequality. In the case that Condition 1
is not satisfied, we reject chain C and close node k0 .
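The tree construction can be sketched as a depth-first search over chains. The acceptance test below (accumulated width α minus the maximum allowed overlap δ exceeding W) is an illustrative stand-in for Conditions 1 and 2, and all names (width, successors, max_overlap) are assumptions, not the thesis' exact data structures.

```python
def enumerate_chains(root, width, successors, max_overlap, W):
    """Yield chains (lists of pieces) whose minimum stacked width exceeds W.
    successors[k] plays the role of U_k*; max_overlap(chain) of delta_k'."""
    violated = []

    def dfs(chain, alpha):
        for nxt in successors.get(chain[-1], []):
            if nxt in chain:                 # chains do not repeat pieces
                continue
            new_chain = chain + [nxt]
            a = alpha + width[nxt]           # alpha_k' accumulates widths
            if a - max_overlap(new_chain) > W:
                violated.append(new_chain)   # candidate clique/cover cut
            else:
                dfs(new_chain, a)            # keep extending the chain

    dfs([root], width[root])
    return violated
```

Once a chain exceeds the strip width it is recorded and not extended further, mirroring the closing of a node when its chain can no longer produce a violated inequality.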
The first column in each algorithm shows the lower bound that the algorithm reaches after
one hour. The gap is represented in the second column. The third column shows the number of
nodes of the branch and cut tree. The total time is represented in the fourth column. The fifth
and sixth columns show, respectively, the number of inequalities added at all the nodes and at
the root node.
As happened with the previous inequalities, these separation procedures require an excessive computational effort and it does not make sense to use them in the Branch & Cut algorithm.
Table 5.3: (A) Branch & Cut with all cliques added at the root node. (B) Branch & Cut with SC1 algorithm for cliques and covers. (C) Branch & Cut
with SC2 algorithm for cliques and covers.
Problem (A) (B) (C)
LB GAP Nnodes Time NR NR0 LB GAP Nnodes Time NR NR0 LB GAP Nnodes Time NR NR0
three 6 0 0 0.78 0 0 6.00 0.00 0 0.51 0 0 6.00 0.00 0 0.61 0 0
shapes4 24.00 0.00 12 0.09 0 0 24.00 0.00 12 0.05 0 0 24.00 0.00 12 0.03 0 0
fu5 17.89 0.00 492 0.41 1 1 17.89 0.00 462 0.09 11 10 17.89 0.00 506 0.11 6 1
glass1 45.00 0.00 0 0.09 0 0 45.00 0.00 0 0.08 0 0 45.00 0.00 0 0.03 0 0
fu6 23.00 0.00 153 0.34 0 0 23.00 0.00 154 0.66 8 7 23.00 0.00 155 0.78 7 7
threep2 9.33 0.00 5852 1.33 1 1 9.33 0.00 6200 3.00 20 11 9.33 0.00 5777 1.01 11 9
threep2w9 8.00 0.00 18181 9.45 153 153 8.00 0.00 16846 10.26 182 53 8.00 0.00 17875 7.00 148 36
fu7 24.00 0.00 171 0.87 0 0 24.00 0.00 155 1.19 16 16 24.00 0.00 211 0.72 8 8
glass2 45.00 0.00 2243 2.48 20 20 45.00 0.00 2043 5.63 49 30 45.00 0.00 2341 2.67 54 36
fu8 24.00 0.00 235 0.64 2 2 24.00 0.00 202 1.83 29 28 24.00 0.00 350 0.89 83 76
shapes8 26.00 0.00 12345 6.21 124 124 26.00 0.00 11255 33.79 250 127 26.00 0.00 11944 5.83 205 108
fu9 25.00 0.00 176826 31.42 2083 2083 25.00 0.00 198719 1766.38 4358 1556 25.00 0.00 228453 60.65 2360 450
threep3 13.53 0.00 2824979 936.65 4436 4436 12.00 0.11 357555 3597.09 646 141 13.53 0.00 2777515 850.99 2885 132
threep3w9 10.71 0.03 11398947 3745.38 113459 113459 9.00 0.20 397973 3608.04 4790 1317 10.00 0.09 7082761 3590.44 64018 1017
fu10 28.69 0.00 1103397 278.90 13881 13881 24.00 0.20 151555 3596.74 4767 2713 28.69 0.00 1062297 807.77 9841 1652
dighe2 100.00 0.00 4483 10.48 75 75 100.00 0.00 4883 127.44 418 353 100.00 0.00 4454 8.55 158 90
J1-10-20-0 18.00 0.00 7280 34.18 43 43 18.00 0.00 5569 123.54 83 44 18.00 0.00 8009 10.31 156 79
J1-10-20-1 17.00 0.00 8380 7.10 32 32 17.00 0.00 9217 182.10 31 2 17.00 0.00 12587 7.10 88 27
J1-10-20-2 20.00 0.00 12433 40.14 71 71 20.00 0.00 17450 313.14 188 50 20.00 0.00 11100 6.16 119 41
J1-10-20-3 20.75 0.00 768432 347.10 4533 4533 19.00 0.10 187520 3597.37 1517 301 20.75 0.00 616733 271.46 3458 282
J1-10-20-4 12.50 0.00 158951 102.01 1925 1925 12.50 0.00 189076 3190.22 3009 493 12.50 0.00 123961 86.02 1962 374
J2-10-35-0 23.66 0.00 1492983 607.84 31663 31663 21.75 0.13 166726 3594.23 8674 5534 22.00 0.12 734730 3594.17 19634 4244
J2-10-35-1 21.30 0.00 579497 206.08 9145 9145 20.00 0.09 184378 3595.40 5662 2930 21.30 0.00 732201 489.36 11114 1246
J2-10-35-2 19.95 0.00 286472 115.08 5991 5991 19.70 0.02 190436 3594.26 8553 5175 19.95 0.00 404291 1140.54 9836 3645
J2-10-35-3 20.37 0.00 2535369 706.93 33234 33234 19.75 0.06 179828 3592.70 6654 4111 20.00 0.02 2667239 3593.19 29351 3503
J2-10-35-4 19.43 0.00 1149119 343.08 25173 25173 18.00 0.10 171418 3628.07 10004 7132 18.50 0.05 790713 3639.67 28357 7599
J1-12-20-0 12.00 0.00 67950 49.56 1320 1320 12.00 0.00 31980 2405.08 4373 3740 12.00 0.00 29060 118.89 4335 3846
J1-12-20-1 10.00 0.00 59193 49.42 1080 1080 10.00 0.09 50874 3592.25 1620 808 10.00 0.00 149688 208.14 5108 3096
J1-12-20-2 12.00 0.00 30249 38.47 390 390 12.00 0.00 42161 3352.31 5044 4439 12.00 0.00 37217 730.77 7644 7095
J1-12-20-3 8.00 0.00 35007 40.06 271 271 8.00 0.00 34017 2622.24 1042 792 8.00 0.00 28290 932.89 18829 18584
J1-12-20-4 13.00 0.00 302181 185.02 5093 5093 12.00 0.20 50233 3594.76 5992 5033 12.46 0.11 205170 3561.25 16036 12450
J2-12-35-0 24.25 0.13 4925214 3574.87 105580 105580 21.30 0.29 36789 3589.10 6729 5879 22.00 0.21 501667 3509.21 15827 6232
J2-12-35-1 22.00 0.15 4935249 3582.19 93123 93123 19.90 0.23 50194 3596.37 7132 6533 20.00 0.23 146181 3639.14 9989 7517
J2-12-35-2 20.00 0.10 5137480 3572.49 119449 119449 18.00 0.27 46399 3591.61 7603 6977 18.00 0.25 185737 3556.45 15203 11374
J2-12-35-3 20.00 0.13 7022615 3566.65 170416 170416 17.00 0.29 40800 3603.94 9795 9090 18.00 0.25 165180 3695.35 19157 15303
J2-12-35-4 22.00 0.09 5805386 3579.22 113488 113488 18.75 0.29 39555 3592.97 8191 7337 19.82 0.23 110763 3579.51 12194 9973
fu 31.66 -1.00 8190820 3565.48 171464 171464 24.00 -1.00 28952 3591.00 5529 4889 25.94 -1.00 724740 3285.90 14409 4232
J1-14-20-0 12.00 0.00 239553 154.05 5205 5205 10.50 0.30 13122 3601.58 1235 1023 12.00 0.08 213500 3578.19 23536 18424
J1-14-20-1 11.00 0.08 5190579 3607.90 106048 106048 10.00 0.33 14452 3601.95 763 631 10.00 0.23 121948 3555.15 24070 21473
J1-14-20-2 13.00 0.13 4979617 3615.48 97156 97156 11.00 0.35 11500 3593.95 3715 3455 11.00 0.35 43760 3568.51 26454 25695
J1-14-20-3 10.00 0.00 174626 153.22 4294 4294 9.42 -1.00 11134 3597.84 1538 1337 10.00 0.09 159460 3581.00 24277 19904
J1-14-20-4 13.33 0.11 4871554 3664.15 103296 103296 12.00 0.28 13606 3597.96 2390 2144 12.00 0.23 110711 3552.21 23198 20747
J2-14-35-0 23.00 0.26 2753150 3585.51 63202 63202 20.00 0.41 9921 3601.97 4401 4110 21.88 0.34 148290 3559.79 15219 11799
J2-14-35-1 22.00 0.29 3181217 3575.76 55764 55764 18.00 -1.00 11794 3604.12 7556 7351 19.62 -1.00 42570 3580.19 14852 13892
J2-14-35-2 20.00 0.22 3406777 3619.97 74663 74663 18.00 -1.00 11452 3595.00 5541 5254 18.00 0.34 148501 3548.96 19983 15888
J2-14-35-3 20.00 0.23 4041754 3626.66 84032 84032 16.00 0.44 13076 3603.61 6176 5868 18.00 0.39 51798 3576.32 19908 18677
J2-14-35-4 20.00 0.21 3049800 4145.76 54661 54661 18.00 -1.00 12452 3599.77 3922 3745 18.85 -1.00 216982 3524.59 15075 12134
poly1a0 13.00 0.14 2866700 4441.72 54431 54431 13.00 -1.00 8358 3597.79 659 299 13.00 -1.00 36022 3573.61 48883 47702
dighe1 85.00 -1.00 238553 3572.00 3499 3499 65.71 -1.00 3734 3609.96 2973 2921 77.20 -1.00 20351 3588.60 8494 8213
Chapter 6
Constructive algorithms
Introduction
In Chapter 3 we presented an exact algorithm for solving Nesting problems. The instances used
in the tests were described in Table 1.2. Due to the great difficulty of proving optimality, the
number of pieces in these instances was not larger than 16. Furthermore, we did not consider
the rotations of the pieces, so the problem was easier than the general case in which some rotations can be allowed.
The Nesting instances that can be found in the literature usually have more (up to 99) pieces.
With respect to rotation, there are instances for which specific angles of rotation are allowed.
These angles are usually 0°-180° or 0°-90°-180°-270°. The difficulty of solving a nesting
instance increases when rotation is allowed. There are very few cases in which free rotation is
permitted.
In this chapter we study different constructive algorithms based on the insertion of the
pieces one at a time. In order to add a piece, a mixed integer problem is solved. We try to find
the optimal insertion of a given piece, keeping the relative position fixed between the pieces
already placed.
In Section 6.1 we present the core of the constructive algorithm and we make a comparison
between the formulations GO and HS2 described in Chapter 2. In Section 6.2 we do a compu-
tational experiment considering the initial insertion of more than one piece.
An interesting approach for the insertion of one piece is the trunk insertion, described in
Section 6.3. The idea is to allow certain movements on the pieces already placed while a new
piece is being inserted.
Finally, we propose two different objective functions for breaking ties when there is more than one placement of the current piece which minimizes L. We add the coordinates of the
pieces in the objective function with small coefficients. The computational tests show that the
results obtained are slightly better. However, computational time increases when the objective
function is more complex.
6.1 Initial models
In Chapter 2 we presented the mixed integer models GO, HS1 and HS2. The GO formulation
does not use Fischetti and Luzzi's slices, so when a binary variable takes the value 1 the relative position between the given pair of pieces is less restrictive. That gives more flexibility to the GO model compared with HS2 (or HS1). On the other hand, the GO model is harder to
solve to optimality, and when the set of binary variables is very large the corresponding MIP
can become impossible to solve.
The core of the constructive algorithms is the optimal insertion of the pieces, one at a time,
and the fixing of the relative position between all the pieces already placed. Let us suppose
that there are k pieces already placed into the strip and we want to place the next piece from a
given list. The relative position of the k pieces is fixed, so the corresponding binary variables
are fixed and the number of binary variables in the model is reduced. The only binary variables
that we are going to consider in order to add piece k + 1 to the current solution belong to the
NFPs defined by piece k + 1 and all the pieces already placed.
If the Nesting problem that we want to solve has a large number of pieces, then insertion
of the kth piece requires more computational effort than insertion of the jth piece if k > j. In
particular, the MIPs used for the insertion of the last pieces are much harder to solve because
the piece to be inserted must be separated from all the pieces already placed, so we have to
consider many NFPs and the number of binary variables increases considerably.
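The insertion loop can be sketched as follows. The solver callback solve_insertion_mip(placed, fixed, piece, time_limit) is a hypothetical stand-in for the CPLEX call: it is assumed to return the new positions and binary values, or None when no feasible solution is found within the time limit (the failure case described above).

```python
def constructive(order, solve_insertion_mip, time_limit=100):
    """Insert pieces one at a time, freezing the relative positions
    (the binary variables) found at each step."""
    placed, fixed = [], {}           # pieces in the strip, fixed b_ijk values
    for piece in order:
        result = solve_insertion_mip(placed, fixed, piece, time_limit)
        if result is None:
            return None              # the constructive algorithm fails
        positions, binaries = result
        fixed.update(binaries)       # fix the relative positions just decided
        placed.append(piece)
    return placed, fixed
```

Only the binaries relating the new piece to the already placed ones are free in each MIP; everything decided earlier enters the model as fixed values.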
The two models that we compare in the constructive algorithm are GO and HS2, because
HS2 works better than HS1 and uses the same slices. In both models we eliminate the inequalities defined to avoid symmetrical solutions (see Section 2.4). The objective is to insert the new
piece in the optimal position, and if these inequalities are used and a piece with the same shape
has already been placed, then the feasible region for the piece may be reduced considerably.
The initial MIP takes into account the first nini pieces and it is solved to optimality. The
value of nini when we use the HS model could be greater than in the GO model because of the behavior of the models (see Section 2.5). At the first step we include the pieces i_1, . . . , i_nini.
Then, when we have a solution with the first nini pieces, we fix the relative position of these
pieces by fixing the binary variables of the previous MIP and then we add the next piece from
π. The set of binary variables considered in the new MIP are the ones in relation to the newly
added piece. In order to include the new piece in the model we have to modify the bound
inequalities (see constraints (2.5), (2.6), (2.23) and (2.24)) and the non-overlapping constraints
(see constraints (2.7), (2.8), (9.7) and (2.27)). Iteratively, we add the pieces one by one until
piece iN is placed.
The MIPs become harder to solve as they consider more pieces, and the difficulty of
finding the optimal solution increases considerably when the instance has a large number of
pieces. If at the beginning we consider nini = 12, then the model has 66 active NFPs, which
means that it has to deal with the binary variables defined in 66 NFPs. On the other hand, if we
have already placed 66 pieces in a large instance, then the next MIP that we have to consider for inserting piece 67 will also have 66 active NFPs.
To solve the MIPs we use CPLEX with a time limit of 100 seconds. If CPLEX does not
prove optimality within this time, it returns the best feasible solution that it has found, but if no
feasible solution is found, then the constructive algorithm fails.
Let t1 , . . . , tν be the types of pieces, where ν denotes the number of different types of pieces.
Each type t j has a number of copies in order to represent pieces with the same shape. Let n j ,
j = 1, . . . , ν be the number of pieces of type t j . The probability of selecting a given piece is
$$P(t_j) = \frac{A(t_j)}{A(T)}$$

where A(t_j) denotes the area of t_j and A(T) denotes the total area of the different types of pieces.
The vector π is built iteratively, piece by piece, choosing the pieces by using the previous probability. When all the pieces of a given type are included in π, then the type is eliminated and
the probabilities are recalculated.
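The construction of π can be sketched as follows, sampling types with probability proportional to their area, P(t_j) = A(t_j)/A(T), and removing a type once all its copies have been used (the dictionaries area and copies are illustrative inputs):

```python
import random

def build_order(area, copies, seed=0):
    """area: {type: area of one piece}; copies: {type: number of pieces}.
    Returns the randomized order pi of piece types."""
    rng = random.Random(seed)
    remaining = dict(copies)
    pi = []
    while remaining:
        types = list(remaining)
        total = sum(area[t] for t in types)   # A(T) over the live types only
        pick = rng.choices(types, weights=[area[t] / total for t in types])[0]
        pi.append(pick)
        remaining[pick] -= 1
        if remaining[pick] == 0:
            del remaining[pick]               # probabilities are recalculated
    return pi
```

Because A(T) is recomputed over the surviving types, the probabilities are automatically renormalized each time a type is exhausted.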
The first computational test we have done compares the constructive algorithms with both
models (GO and HS2) and the bottom-left corner (BLC) on the set of instances presented in
Table 6.1. These instances have been obtained from Table 1.1, but we eliminate instances glass1, glass2 and glass3 because they are easy to solve and we also eliminate instances poly3b, poly4b and poly5b in these preliminary tests. On the other hand, we add other instances by considering different rotations of the pieces. Shapes2-0, swim0, trousers0, shirts0 correspond to shapes2,
swim, trousers and shirts with fixed orientation. With a similar notation we have used instances
poly2a, poly2b, poly3a and poly4a and poly5a without rotation. Instances have been grouped
into three sets depending on the rotation of the pieces. The instances of each group appear
ordered by a non-decreasing number of pieces.
Instances swimm0 and swimm1 have been created by reducing the number of vertices of
several pieces of instance swim in such a way that a feasible solution for instances swimm0 and swimm1 is also feasible for instance swim. The average number of vertices is reduced from
21.9 to 12.8. Instance swimm0 has the same pieces as swimm1, but rotation is not allowed.
Instances Types of Pieces Number of pieces Average of vertices Plate width Problem type
Rotation: 0
dighe2 10 10 4.7 100 Jigsaw puzzle
poly1a0 15 15 4.6 40 Artificial
dighe1 16 16 3.87 100 Jigsaw puzzle
shapes2-0 4 20 7.5 15 obtained from blaz1
poly2a0 15 30 4.6 40 Artificial
poly2b0 30 30 4.53 40 Artificial
shapes0 4 43 8.75 40 Artificial
poly3a0 15 45 4.6 40 Artificial
swimm0 10 48 12.8 5752 obtained from swim
poly4a0 15 60 4.6 40 Artificial
trousers0 17 64 5.06 79 obtained from trousers
poly5a0 15 75 4.6 40 Artificial
shirts0 8 99 6.63 5752 obtained from shirts
Rotation: 0-180
albano 8 24 7.25 4900 Garment
shapes2 4 20 7.5 15 Artificial
dagli 10 30 6.3 60 Garment
shapes1 4 43 8.75 40 Artificial
swimm1 10 48 12.8 5752 obtained from swim
trousers 17 64 5.06 79 Garment
shirts 8 99 6.63 5752 Garment
Rotation: 0-90-180-270
fu 12 12 3.58 38 Artificial, convex
mao 9 20 9.22 2550 Garment
marques 8 24 7.37 104 Garment
jakobs1 25 25 5.6 40 Artificial
jakobs2 25 25 5.36 70 Artificial
In Table 6.2 we have run each algorithm 20 times, building different orders of pieces (π)
and choosing the rotation of each piece (θ) randomly. For each order we call three different
constructive algorithms: the first one uses the HS2 formulation, called CHS2, the second one
uses the GO formulation (CGO) and the third one is the bottom-left corner (BLC) implemented
by A.M. Gomes and J.F. Oliveira (and kindly provided by the authors).
In the constructive algorithms CGO and CHS2 we have considered nini = 1. In Section 6.2
we study the effect of changing this parameter. The objective function considered in both cases
is L. In Section 6.4 we propose other objective functions.
We can see that the best results, in general, are obtained with the CGO (GO formulation),
but the computational time increases considerably when the instances have more than 30 pieces.
In 18 out of 28 instances the constructive algorithm CGO obtains the best average length and
in 13 instances obtains the best solution. On the other hand, CHS2 is faster than CGO and in
instances with a large number of pieces such as swim it is clearly better than CGO. With CGO, CPLEX often cannot prove optimality in 100 seconds, so it either returns an upper bound which can be very bad, or it finds no feasible solution at all and then the constructive algorithm fails. For instances swimm0 and swimm1, the constructive algorithm CGO only provides a feasible solution in 5 of the 20 runs and the quality is very low.
If we look at constructive algorithm CHS2, we can see that the results, on average and for
the best solutions, are better than the BLC and its computational time is lower than CGO. The
computational time of the BLC is not comparable because in many instances the time of one
iteration is less than one second. The next computational test that we present consists in increasing the number of BLC iterations in order to produce a computational effort similar to CHS2.
In Table 6.3 we can see the results for the BLC in 1000 runs, while the other two constructive
algorithms remain equal (20 runs, the same as Table 6.2). The third column in each algorithm
represents the total time.
The average length of the 1000 iterations of the BLC remains similar, but we can see an improvement in the best results. In Table 6.2 the BLC algorithm finds the best solution in 4 instances, among them shirts0, swimm1 and trousers. However, if we look at Table 6.3 we can see that BLC
provides the best result in 16 instances and the total computational time remains lower than
CHS2.
Note that CHS2 algorithm takes more than 100 seconds to build a feasible solution in the
following instances:
We call hard instances those instances for which CHS2 needs more than 100 seconds to build a feasible solution. In general, these instances have a large number of pieces. As the size of the corresponding MIP formulation increases, CPLEX needs more time to solve each of the corresponding MIPs.
In what follows we are going to explore the behavior and possibilities of the HS2 model.
The next sections study different options for the constructive algorithm CHS2.
6.2 Studying the initial number of pieces considered (nini)
The next computational test is focused on the nini parameter. We consider several instances
from Table 6.1 and we build 20 solutions for each instance and for each nini = 1, . . . , 12. We
present three tables, one for the average lengths, another one for the length of the best solutions
and finally a table for the average computational times.
In Table 6.4 we can see the average length obtained. We can see that if nini increases, then
the average length is reduced, so the quality of the solutions is better.
In Table 6.5 we can see the best solution obtained in each case. In 10 of the 20 instances
the best solution is found when nini = 12 and in 7 instances the best solution is found when
nini = 11.
In Table 6.6 we can see the average computational time needed in each case. The time limit
for the first MIP is 100 seconds.
In Figure 6.1 we show the relation between quality and computational time. Let us denote by AvL1 the average length of the 20 runs with nini = 1. For each instance and for each nini, we calculate AvL/AvL1. The X-axis represents nini, the left-hand side of the Y-axis represents the average of all the instances of AvL/AvL1 and its right-hand side the average time.
We can observe that when nini increases, the time increases in an exponential way and the
quality of the solutions improves slowly. For nini = 1, . . . , 6 the quality of the solution remains
practically at the same level, and when it begins to improve with nini = 7, . . . , 12, then the
computational time increases very sharply.
The computational time needed to prove optimality in a Nesting Problem with 12 pieces is
Table 6.5: Best solution obtained in 20 runs
nini = 1 nini = 2 nini = 3 nini = 4 nini = 5 nini = 6 nini = 7 nini = 8 nini = 9 nini = 10 nini = 11 nini = 12
Rotation: 0
poly1a0 16.1 16.1 16.1 16.1 16.1 16.1 16.1 16.1 16.7 16.4 16.2 15.8
dighe1 131.7 131.7 130.1 128.5 128.5 128.5 127 127.5 117.1 115.1 115 116.7
shapes2-0 28.7 28.7 28.7 28.2 28.2 28.7 28.5 27.9 28.4 28 28.2 28
poly2a0 31.7 31.7 31.7 31.7 31.7 31.7 31.7 31.2 31.1 31.2 31.4 30.4
poly2b0 34.6 34.6 34.6 34.6 34.6 34.6 34.6 33 33.5 33.4 33.2 32.6
shapes0 64.5 64.5 64.5 64.5 64.5 65 65 65 64 64 66 63
poly3a0 47.6 47.6 47.6 47.6 47.6 47.6 46.1 46.9 45.9 46.2 46.9 46.4
Rotation: 0-180
albano 10721.1 10721.1 10592.5 10721.1 10426.1 10792.8 10772.5 10461.7 10586.8 10641 10205.8 10508.1
shapes2 28.8 28.8 28.8 29.3 29.5 28.8 28.3 29.2 27.8 28.6 28.2 28.1
dagli 63.5 63.5 63.5 63.5 64.2 63.3 62.5 62.9 62.5 63.1 63.3 62.4
shapes1 60 60 60 60 60 60 60 60 61 62 61 61
trousers 263.5 262 264.5 261.5 258.6 259 270.3 262.1 261.9 262 257 256.2
shirts 64 65.1 63.2 64.4 64 64.2 64 63.9 64 63.7 64.5 64
Rotation: 0-90-180-270
fu 35.4 35.4 35.4 35 35.7 34.9 34.9 35.2 34.1 34.7 33 33
mao 2048.7 2048.7 2048.7 2048.7 2048.7 2066.4 1940.6 2134.8 2018.5 1976 1877 1877
marques 85.2 85.2 85.2 85.2 85 84.4 83.1 82.8 84.8 82 83.3 83.3
jakobs1 13 13 13 13 13 13 13 13 13 13 13 13
jakobs2 28 28 28 28 28 28 28 28 27.5 27.6 26.8 27.2
very high. We have considered a time limit of 100 seconds and in several problems there is no
guarantee that the solution given by CPLEX is optimal. There are instances such as jakobs1 or
jakobs2 for which CPLEX is able to prove optimality in less than 100 seconds, but there are
other instances, such as marques, where CPLEX provides a good solution but it is not usually
optimal. Therefore, in the computational tests of the next sections we are going to consider
nini = 1.
Instances with large and small pieces
Instances albano, trousers0, trousers, shirts0 and shirts have two definite sets of large and
small pieces. We can divide the pieces of these instances into two groups according to their
size. If we sort the pieces of these instances by area in a non-decreasing order, then there is a
big gap between large and small pieces. The area of the smallest piece in the set of large pieces is more than 1.5 times the area of the largest piece in the set of small pieces.
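The split can be sketched by scanning the pieces sorted by non-increasing area for the first jump larger than the 1.5 factor; the list of areas and the return convention are illustrative, since the thesis fixes the two groups per instance.

```python
def split_large_small(areas, ratio=1.5):
    """Return (large, small) as lists of piece indices, cutting the
    non-increasing area ordering at the first gap larger than `ratio`."""
    order = sorted(range(len(areas)), key=lambda i: areas[i], reverse=True)
    for pos in range(len(order) - 1):
        if areas[order[pos]] > ratio * areas[order[pos + 1]]:
            return order[:pos + 1], order[pos + 1:]
    return order, []          # no clear gap: treat every piece as large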
For these instances we test a slightly different strategy when sorting the pieces. In a first
step we are going to place the large pieces and in a second step we add the rest of the pieces.
We randomize the order of the pieces within each one of these two sets, as described in the
previous section.
The sets of large and small pieces in the respective instances are the following:
• albano: 10 large and 14 small pieces.
Table 6.6: Average time in 20 runs
nini = 1 nini = 2 nini = 3 nini = 4 nini = 5 nini = 6 nini = 7 nini = 8 nini = 9 nini = 10 nini = 11 nini = 12
Rotation: 0
poly1a0 2.4 2.5 2.3 2.2 2.4 2.2 2 2 1.9 3 22.1 70.7
dighe1 2.4 2.4 2.5 2.2 2.2 2.4 3.1 6.7 18 47.2 75.9 90.2
shapes2-0 18.6 18.6 18.5 17.2 17.7 17.8 38.5 92.7 112.8 115.7 116.4 115.2
poly2a0 31.9 31.9 31.8 31.6 31.9 32 30 33.8 51.3 59.6 103.9 124.2
poly2b0 37.5 37.4 37.2 38.4 37.6 36 36.2 38.8 49.6 100.4 121.2 132.3
shapes0 37.8 37.6 37.5 37 36.8 36.9 37 49 71 121 131.6 139.7
poly3a0 159.1 159 159.2 157.4 156.9 155.9 162.6 168.7 192.7 227.6 273.4 274.1
Rotation: 0-180
albano 16.2 15 15.9 14.4 15.4 23.7 53.1 85.9 105.1 113.4 113.5 113.4
shapes2 24.1 24.4 23.8 22.8 23.5 26.8 40.6 94.7 121.3 122.5 122.2 124
dagli 20.9 20.2 20.8 20.3 20.3 21.6 24.9 42.4 65.8 114.1 115.9 113.6
shapes1 62.7 58 59.2 58.7 60.2 59.1 73.5 69.3 113.2 139.6 154.6 158.4
trousers 184 187.7 178.1 183.5 187.8 190.4 185.9 196.6 209.3 211.6 237.4 277.6
shirts 966.4 1028.2 992.3 994.3 1009.1 936 1072.6 1034.9 1060.4 1059.9 1046.3 1066.2
Rotation: 0-90-180-270
fu 2.4 2.1 2.2 1.9 2 2.1 2.3 12.9 27.4 71.9 100.6 100.3
mao 12.8 12.3 11.5 10.6 11.5 10.7 67.2 85.3 98 113.4 109.9 109.2
marques 15 14.1 15.6 13.6 15.9 13.1 17.9 40 76.7 106.9 110.3 111.2
jakobs1 6.9 7 6.4 7.1 6.5 5.5 5.2 6 5.4 5.3 15.3 38.5
jakobs2 9.7 10.4 10 8.6 7.4 6.6 7 9.3 22.5 51.8 77.1 95.5
In Table 6.8 we present the best results. We can see that there are no improvements when nini increases. However, these results are slightly better than the ones presented in Table 6.5.
The average computational time of the 20 runs is presented in Table 6.9. We can see that
the algorithm requires a computational effort greater than that required by the previous version,
reflected in Table 6.6.
Figure 6.1: Representation of the quality of the constructed solutions (AvL/AvL1, left axis) and the computational time required (average time, right axis) for different values of parameter nini.
6.3 Trunk insertion

The idea of this approach is to limit the movements of the pieces already placed and not to fix them, in order to allow some flexibility while a new piece is being inserted.
Let us denote by LP_s the linear model that is solved in order to calculate X_s and Y_s when the binary variables (b_ijk) are eliminated from the model by fixing their values to those in solution s (b^s_ijk).
Let us denote the number of pieces already placed by n0 and let inext ∈ P be the
next piece to be placed. Let us denote the pieces already placed by i1 , . . . , in0 .
The relative position of each pair of pieces already placed is limited by the slice used, which
is activated by fixing the associated binary variable to 1. In order to make the relative position
of a given pair of pieces more flexible we allow the current slice to be changed for an adjacent
slice. In what follows, we define the concept of neighborhood of one piece in a given solution
and the set of adjacent slices of a given slice.
Let ir and it be two pieces already placed in a given solution s (1 ≤ r < t ≤ n0 ). We say
that piece it is a neighbor by movement of piece ir if the distance between the reference point
of piece it and the limits of the active slice in the NFPit ir -coordinate system (defined in section
2.3) is lower than ε_s. This distance is defined as follows:

$$\min_{p \in Fr(\tilde{S}^{tr}_{k})} \mathrm{dist}(q_t, p) \leq \epsilon_s$$

where q_t is the placement of the reference point of i_t in the NFP_{i_t i_r}-coordinate system.
We denote by NS_ir the set of pieces which are neighbors by movement of i_r. In Figure 6.3 we can see an example, where the pieces which belong to NS_ir are drawn in green.
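The neighbor-by-movement test can be sketched as a point-to-boundary distance check, assuming the active slice is available as a polygon in the NFP coordinate system (a geometric simplification of the definition above; names and inputs are illustrative).

```python
import math

def point_segment_dist(p, a, b):
    # Euclidean distance from point p to the segment [a, b]
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def is_neighbor(q_t, slice_polygon, eps_s):
    """True if the reference point q_t lies within eps_s of the boundary
    (frontier) of the active slice, given as a list of vertices."""
    n = len(slice_polygon)
    d = min(point_segment_dist(q_t, slice_polygon[k], slice_polygon[(k + 1) % n])
            for k in range(n))
    return d <= eps_s
```

The minimum is taken over all boundary edges, so the test works for any simple polygonal slice regardless of orientation.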
At each step of the constructive algorithm CHS2 we identify the pairs of pieces which are
neighbors by movement. For each pair of neighboring pieces, instead of keeping the variable
expressing their relative position fixed in the solution of the previous MIP, we consider three
or more binary variables defined from the respective NFP, such that their corresponding slices
share part of the boundary of the current slice, i.e. we allow their relative position to be changed
by an adjacent slice.
In Figure 6.4 we can see an example. Let us consider that pieces i and j are neighbors and let b_ijk1 be the binary variable whose associated slice S_ijk1 is used. In the next step of the constructive algorithm CHS2, instead of fixing variable b_ijk1 = 1 we add the equality b_ijk1 + b_ijk2 + b_ijk3 = 1.
We call the combination of the optimal insertion of one piece and the previous relaxation of the relative position of two neighboring pieces trunk insertion. The constructive algorithm
considering trunk insertion is called CHS2-TI.
Therefore, in a given iteration of the constructive algorithm CHS2-TI, we have more binary
variables than in CHS2 because additionally we allow some of the pieces already placed to be
reallocated. Thus the corresponding MIPs are harder to solve.
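The relaxation step can be sketched as follows. For each neighboring pair, the fixed assignment b_ijk = 1 is replaced by an equality over the active slice and its adjacent slices; the `adjacent` map (slice to the slices sharing part of its boundary) is an illustrative data structure, not the thesis' representation.

```python
def relax_constraints(fixed, neighbors, adjacent):
    """fixed: {(i, j): k} active slice per placed pair; neighbors: set of
    pairs that are neighbors by movement.  Returns (still_fixed, equalities),
    where each equality lists the slices whose binaries must sum to 1."""
    still_fixed, equalities = {}, {}
    for (i, j), k in fixed.items():
        if (i, j) in neighbors:
            # b_ijk plus the binaries of the adjacent slices sum to 1
            equalities[(i, j)] = [k] + list(adjacent[(i, j)][k])
        else:
            still_fixed[(i, j)] = k   # relative position stays frozen
    return still_fixed, equalities
```

Only the neighboring pairs contribute extra binary variables to the next MIP, which is why trunk insertion is costlier than plain CHS2 but still far cheaper than re-solving the full model.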
In Figure 6.5 we can see an example of a trunk insertion in the third iteration of the constructive algorithm CHS2-TI applied to instance fu. Initially, 3 pieces are placed. Then, in order to arrange the next piece from the list π, piece 9, we allow certain movements of pieces 4, 6 and 12 that CHS2 does not contemplate. We can see that the relative position of pieces 4 and 12 has changed while piece 9 is being inserted.

Figure 6.5: Instance fu: Trunk insertion of piece 9. The relative position between pieces 4 and 12 is modified while piece 9 is being inserted (the two panels show L = 14.0 and L = 17.3).
In Table 6.10 we can see the comparison between CHS2 and CHS2-TI. We have chosen 5
instances, from 12 to 43 pieces, with different types of rotations and with each constructive
algorithm we have built 20 solutions.
CHS2 CHS2-TI
Instances Av. L Best L Total Time Av. L Best L Total Time
Rotation: 0
poly1a 18.2 16.2 2.7 16.5 15.3 30.9
shapes0 70.6 65.0 36.7 64.6 62.0 1183.1
Rotation: 0-180
albano 11189.1 10721.1 9.7 10642.3 10314.1 128.3
shapes1 65.7 61.0 47.3 59.5 58.0 1100.6
Rotation: 0-90-180-270
fu 39.2 35.4 1.0 36.7 33.4 1.9
On the one hand, the solutions obtained by CHS2-TI are much better, both on average and in the best length, than the results obtained by CHS2. On the other hand, the time increases considerably, requiring more than 1000 seconds per iteration in instances with 43 pieces (shapes0 and shapes1).
6.4 Alternative objective functions
The objective function used in constructive algorithms CHS2 and CHS2-TI does not take into
account the coordinates of the reference point of the pieces, but only considers the length of the
strip (L). There are situations in which there are many places to insert a new piece such that the
solution is optimal. In such situations the constructive algorithm allows CPLEX to choose one
of the optimal solutions. In this section we are going to study different objective functions for
placing the pieces, not only considering the length of the strip, but also the coordinates of the
pieces.
The objective function that we are going to consider has the following form:
$$\min \; L + \epsilon_1 \sum_{i=1}^{N} x_i + \epsilon_2 \sum_{i=1}^{N} y_i \qquad (6.1)$$
In order to balance instances which have pieces with a length of more than 100 units (e.g., albano, mao), we transform the data by dividing the width and all the coordinates of the pieces
by a multiple of 10 in such a way that no length is greater than 100.
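The rescaling can be sketched as choosing the smallest power of 10 (a particular multiple of 10, assumed here for concreteness) that brings every length below or equal to 100:

```python
def scale_factor(max_length):
    """Smallest power of 10 such that max_length / factor <= 100."""
    factor = 1
    while max_length / factor > 100:
        factor *= 10
    return factor
```

Dividing the strip width and all piece coordinates by this factor leaves the instance geometrically identical while keeping the coordinate terms of objective (6.1) on a comparable scale across instances.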
We denote by FO0 the objective function which considers just the length of the strip (ε_1 = 0 and ε_2 = 0).
In Table 6.11 we make the comparison of the constructive algorithm CHS2 with the two
objective functions described above, FO1 and FO2. Table 6.2 shows the computational results
of CHS2 with objective function FO0. We can observe that objective function FO2 obtains the best average length in 7 of the 14 instances, followed by objective function FO1, which obtains the best average in 6 instances. Objective function FO0 obtains the best average length on instance
dagli. If we look at the best solution obtained in the 20 runs, objective functions FO1 and FO2
find the best solution in 8 instances, in contrast with considering just the length of the strip,
FO0, which finds the best solution in 2 instances.
In Table 6.12 we present the comparison of the constructive algorithm HS2-TI with the two
objective functions FO1 and FO2. Table 6.10 shows the computational results of HS2-TI with
objective function FO0.
We can see that in three instances, poly1a0, albano and fu, the best average length is ob-
tained by objective function FO2. However, the computational time increases with respect to
FO0. In the two remaining instances the best average is obtained with FO0 (see Table 6.10).
The best lengths in instances poly1a0 and fu are obtained with FO0, in instance shapes0 it is
obtained with FO1, L = 60.43, and in instance albano the best solution is obtained with FO2.
The best solution of instance shapes1, with L = 57.33, is obtained with FO1 and FO2.
In general, the computational time of the constructive algorithms when the objective functions are FO1 and FO2 is greater than that of the constructive algorithms with FO0.
6.5 Conclusions
The constructive algorithms presented in this chapter use a mathematical model which is hard
to solve and the required computational time increases considerably if we compare it with the
bottom-left corner algorithm (BLC).
On the other hand, the results show that the quality of the solutions of the constructive
algorithms using model HS2 is better than that of the BLC. Model GO obtains good solutions, but
the computational time and the complexity of solving the corresponding MIPs increase too much
in instances with a large number of pieces, and the algorithm can fail.
Trunk insertion is an interesting approach and the solutions obtained with it are the best
ones among all the constructive algorithms. For instance, the best known solution of instance
shapes0 is L = 58 and, with this algorithm, a solution of L = 60.43 is constructed. The
drawback of trunk insertion is the computational time required to build a solution.
Chapter 7
In Chapter 6 we studied constructive algorithms using different mathematical models with dif-
ferent objective functions. The computational study showed that the fastest algorithm is the
Bottom-Left Corner (BLC) implemented by Gomes and Oliveira. In this chapter we present
different movements with the objective of designing an efficient local search procedure.
Each section of this chapter presents a different local search movement based on the HS2
model. The initial solution is built with the BLC algorithm.
In Section 7.1 we present the n-insertion movement. The case in which n = 1, 1-insertion, is
similar to the optimal insertion used in the constructive algorithm CHS2 presented in Chapter 6.
In Section 7.2 we study a Compaction procedure. In this movement each piece maintains
its position relative to all the other pieces, though it can be modified slightly.
In Section 7.3 we study the k-compaction, which is the combination of the k-insertion and
the Compaction movements. There is a strong relation between the 1-compaction and trunk
insertion presented in Section 6.3. The 1-compaction requires a great computational effort, so
we propose a simplification of the movement in a two-step procedure: first we do the compaction
without the pieces selected for insertion, and then we add the pieces using the 1-insertion
movement.
Finally, in Section 7.4 we study different criteria based on the objective functions described
in the previous chapter in order to modify the current solution more frequently.
The set of instances that we are going to consider to test the different types of movements of
the local search is presented in Table 7.1. For each one of these instances we build 20 solutions
for which the order of the pieces is randomly chosen with probabilities weighted by area. The
rotation angles are also randomly chosen. The initial solution is the same in each one of the
iterations for each instance.
Table 7.1: Nesting instances used in the local search study
Instances Types of Pieces Number of pieces Average of vertices Plate width Problem type
Rotation: 0
dighe2 10 10 4.7 100 Jigsaw puzzle
poly1a 15 15 4.6 40 Artificial
dighe1 16 16 3.87 100 Jigsaw puzzle
shapes0 4 43 8.75 40 Artificial
Rotation: 0-180
albano 8 24 7.25 4900 Garment
shapes2 4 20 7.5 15 Artificial
dagli 10 30 6.3 60 Garment
shapes1 4 43 8.75 40 Artificial
Rotation: 0-90-180-270
fu 12 12 3.58 38 Artificial, convex
mao 9 20 9.22 2550 Garment
marques 8 24 7.37 104 Garment
jakobs1 25 25 5.6 40 Artificial
jakobs2 25 25 5.36 70 Artificial
7.1 n-insert
The core of this movement is the optimal re-insertion of a subset of n pieces. We are going to
consider the n-insertion with n = 1, n = 2 and n = 3. In each one of these cases we consider
different objective functions and different strategies for choosing the pieces to be re-inserted.
Each solution s = (Xs, Ys, Os) is associated with an HS2 model defined by the vector of rotations
Os. We denote by MIPs the model defined by the solution s. Furthermore, given s, for
each pair of pieces i, j, we can identify the slice in which the reference point of j lies relative
to i, and then we can determine which binary variable associated with NFPij takes the value 1.
The identification is not necessarily unique, because the reference point of piece j may lie on
the border of more than one slice. We denote this set of binary variables by Bs.
If we solve the corresponding linear problem with the binary variables of Bs set to the value
1, we obtain a solution s′ satisfying Ls′ ≤ Ls, where Ls and Ls′ denote the lengths of solutions s
and s′, respectively.
In the next subsection we explain the structure of the model MIPs(i1, . . . , in; o1, . . . , on). The
relative position between pieces i1, . . . , in and all the other pieces in the problem has to be free,
that is, the model is going to choose the best relative position of pieces i1, . . . , in. However, the
relative position between the pieces of PR = P \ {i1, . . . , in} is going to be fixed, so the binary
variables from Bs which separate pieces of PR are fixed to 1 in the model.
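The identification of the binary variables in Bs reduces to a point-in-slice test. The following is a minimal sketch, assuming each slice is described by a list of half-planes (the function name and this representation are illustrative, not part of the model as defined here); it also shows why the identification need not be unique when a reference point lies on a slice border:

```python
def active_slices(slices, point, eps=1e-9):
    # slices: for each slice, a list of half-planes (a, b, c) meaning a*x + b*y <= c.
    # Returns the indices of every slice containing the point; a point lying on
    # a border belongs to more than one slice, so the result may not be unique.
    x, y = point
    return [k for k, halfplanes in enumerate(slices)
            if all(a * x + b * y <= c + eps for (a, b, c) in halfplanes)]
```

For example, with two slices separated by the vertical line x = 0, a reference point on that line activates both slices.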
7.1.1 1-insertion
Let i ∈ P be a given piece and let us denote by N1(i) the optimal insertion of piece i, having studied
the different rotations. This movement is carried out by solving several mixed integer problems
(MIPs), as many as the allowed rotations of piece i. Each of the MIPs considered is denoted
by MIP(i, oi), where oi denotes the rotation angle of piece i.
In order to complete N1 (i), we have to solve the corresponding MIP(i, oi ) for each oi ∈ O.
The best solution obtained is the optimal insertion of piece i. Note that the placement of piece
i after the N1 (i) movement can be its previous position, which means that no change has been
made in the solution.
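The loop over the allowed rotations can be sketched as follows, assuming a hypothetical solver wrapper solve_mip(piece, angle) that returns the strip length of the optimal placement given by MIP(i, oi):

```python
def n1_insertion(solve_mip, piece, rotations, current_length):
    # Solve MIP(i, o_i) for each allowed rotation and keep the best placement.
    # If no rotation improves on the current solution, best_rotation stays None,
    # which corresponds to the piece keeping its previous position.
    best_length, best_rotation = current_length, None
    for angle in rotations:
        length = solve_mip(piece, angle)
        if length < best_length:
            best_length, best_rotation = length, angle
    return best_length, best_rotation
```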
Let us suppose that the rotation of piece i is changed from angle oi to o′i. The construction
of MIP(i, o′i) is not immediate, because we have to rotate piece i in the current model. Let us
denote the previous MIP by MIP(i; oi), where oi is the previous rotation of i. All the binary variables
in the entire problem are considered (no relative position between any pair of pieces is fixed).
Then, to build MIP(i, o′i), we have to modify the following components of MIP(i; oi):
• The lifted bound constraints (2.23) and (2.24). These constraints also consider the inter-
action between each pair of pieces and use the NFPs. Then, if an NFP is modified, the
corresponding lifted bound constraints have to be recalculated.
• It is possible that we need more binary variables. The new NFPs between piece i and
the other pieces can be more complex, their outer regions can have more slices and then
more binary variables would be needed for the MIP model.
The first computational experiment that we are going to do consists in applying N1 (i),
∀i ∈ P, stopping when no improvement is found, using the scheme in Algorithm 3.
In Table 7.2 we can see the computational results of the 1-insert movement with three different
objective functions. The first objective function, FO0, is the length of the strip (L).
The second objective function, FO1, also considers the X-coordinate of the pieces and tries to
place the pieces as close as possible to the beginning of the strip. The third objective function,
FO2, considers both coordinates of each piece. These objective functions are defined in Section 6.1,
where λ1 = 0.001 and λ2 = 0.001.
Algorithm 3 1-insertion movement
Require: sc = (Xsc, Ysc, Bsc, θsc);
while sc improves do
  Build π randomly (order of pieces to be re-inserted);
  for i = 1, . . . , N do
    PIECE = π(i);
    N1(PIECE);
    if a better solution is found then
      Update solution sc;
    end if
  end for
end while
The average percentage of improvement obtained using FO0 is slightly lower (7.91%) than
that obtained using objective functions FO1 and FO2, which get very similar results (8.34%
and 8.41% respectively). The use of these functions has a positive effect on the performance of
the insertion move. However, it increases the computational times, an effect already observed
in Chapter 6.
The next computational test consists in checking the linear problem in which the piece to be
re-inserted is eliminated from the model. If the solution of this linear problem does not improve
the solution of the initial complete model, then it is not necessary to re-insert the piece, because
the reinsertion cannot produce any improvement, and we consider other pieces for reinsertion. We
call the model without the piece to be re-inserted, and with all the binary variables fixed, the
reduced linear problem. The results are presented in Table 7.3. The solutions obtained do not
always match those given in Table 7.2, because CPLEX does not always return the same
solutions: they depend on the MIPs solved before the insertion.
We can see that checking the reduced linear problem before re-insertion reduces the computational effort considerably.
Table 7.2: Performance of 1-insert with different objective functions
Table 7.3: Performance of 1-insert when the reduced linear problem is previously solved
7.1.2 2-insertion
The 2-insertion tries the reallocation of a pair of pieces. We randomly choose a pair of pieces
and eliminate the relation of both pieces to all the other pieces. This means that we consider
more binary variables than in the 1-insertion and the corresponding MIPs are harder to solve to
optimality.
Let us denote by N2(i, j; oi, oj) the optimal insertion of pieces i and j at once with a random
rotation. The MIPs that we need to solve in order to complete the N2(i, j) movement are denoted
by MIP(i, j; oi, oj), (oi, oj) ∈ O × O.
In order to rotate a piece in the model we have to modify the constraints described in the
1-insertion. Since in the 2-insertion we have to rotate two pieces, we do it iteratively. The
rotation angles are obtained randomly.
We consider the three different objective functions defined in Section 6.4. In Table 7.4 we
can see that the best average results are obtained with the objective function FO2. In all cases
the improvement percentages are clearly higher than in the 1-insertion, but we need more computational time to complete the 2-insertion.
As in the 1-insertion, we are going to check whether the linear problem considered by
dropping the two pieces to be reinserted improves the given objective function. If there is no
improvement, we do not complete the 2-insertion of the given pair of pieces. In Table 7.5 we
can see that when checking the linear problems, the total computational time is reduced.
Table 7.5: Performance of 2-insert with FO0 when the reduced linear problem is solved
7.1.3 3-insertion
The idea is the same as the previous movements. We denote the optimal reinsertion of three
pieces by N3 (i, j, k).
We randomly choose 1% of all the sets of three pieces and we randomly choose the rotation
of each piece. In Table 7.6 we can see the computational results of the 3-insertion movement.
The average improvement is increased in relation to the 2-insertion, but the computational time
increases considerably.
7.2 Compaction
The idea of this movement is to allow certain changes in the position of the pieces by giving
them some flexibility to move to adjacent slices but without leaving any piece completely free.
This idea is the same as trunk insertion described in Section 6.3.
Let s be a solution. For each NFPi j such that pieces i, j are neighbors, we consider at least
three binary variables. We consider the neighborhood by movement defined in Section 6.3 and
another type of neighborhood, called neighborhood by placement, which considers the distance
between the pieces. The set of pieces which belong to the neighborhood by placement of piece
i in solution s is given by:
NPis = { j ∈ P | (xj, yj) ∈ Ri }
where Ri is the rectangle whose vertices are (xi − λli, yi − µwi), (xi + li + λli, yi − µwi), (xi + li +
λli, yi + wi + µwi) and (xi − λli, yi + wi + µwi). Initially, we consider λ = 1 and µ = 1.
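Membership in NPis reduces to a point-in-rectangle test. A minimal sketch, assuming each piece is stored as (x, y, l, w), with (x, y) its reference point and l, w the dimensions of its enclosing rectangle (this representation is an assumption made for illustration):

```python
def neighborhood_by_placement(pieces, i, lam=1.0, mu=1.0):
    # R_i extends the enclosing rectangle of piece i by lam*l_i on each side
    # horizontally and by mu*w_i vertically; j is a neighbor of i when its
    # reference point (x_j, y_j) falls inside R_i.
    xi, yi, li, wi = pieces[i]
    x0, x1 = xi - lam * li, xi + li + lam * li
    y0, y1 = yi - mu * wi, yi + wi + mu * wi
    return [j for j, (xj, yj, _, _) in enumerate(pieces)
            if j != i and x0 <= xj <= x1 and y0 <= yj <= y1]
```

Enlarging λ and µ widens the rectangle and therefore the neighborhood, which is how the size of the compaction models is controlled.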
Once the combined neighborhood of each piece is calculated, we consider the binary variables
whose corresponding slices are adjacent to the current slice in solution s, as shown in
Figure 6.4.
We do the compaction movement until no improvements are obtained. Table 7.7 shows the
computational results. We can see that this movement does not produce an important improvement.
In fact, it works worse than the 1-insertion movement.
7.3 1-Compaction
In the compaction movement there is no piece whose relative position with all the other pieces
is completely free. That is, none of the pieces can change its position dramatically.
The idea of the 1-Compaction movement is a combination of the 1-insertion with the compaction.
So, one piece is going to be completely free and not all the variables of the NFPs
of this piece with all the other pieces are fixed. The remaining pieces can only change their
relative positions locally by the compaction movement.
We study two ways of performing the 1-compaction. The first approach is based on solving
just one MIP, the 1-compaction in one level, which is harder than the 1-insertion and the compaction
because it considers both sets of binary variables at once. The second approach has two phases:
the first phase eliminates the selected piece and applies the compaction until there is no improvement,
and then the missing piece is reinserted.
We study all the pieces to be re-inserted, in a random order and considering all the allowed rotations,
until there is no improvement in the solution.
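The two-phase scheme can be sketched generically as follows; remove, compact and reinsert stand for the corresponding model manipulations (they are placeholders, not routines defined in this chapter), with compact returning a flag indicating whether the solution improved:

```python
def one_compaction_two_phases(solution, piece, remove, compact, reinsert):
    # Phase 1: eliminate the selected piece and compact until no improvement.
    partial = remove(solution, piece)
    improved = True
    while improved:
        improved, partial = compact(partial)
    # Phase 2: reinsert the missing piece with the 1-insertion movement.
    return reinsert(partial, piece)
```

Compared with a single MIP that frees one piece completely while relaxing all its neighbors, this decomposition solves a sequence of smaller, easier problems.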
7.3.2 1-Compaction in two phases
In Table 7.9 we can see that the computational time is reduced. On average this move is slightly
better than the 2-insertion.
7.4 Modifying the objective function
In this section we study alternative objective functions designed to modify the current
solution even when the length of the strip remains unchanged.
Let us consider the 1-insertion movement. When a piece is removed from a solution s, a
hole in the solution is created. To encourage the neighboring pieces to change their positions
and cover the hole, we use a crossed objective function. Let i be the piece which is going to be
reallocated. We denote by NPis the set of pieces which are neighbors of piece i in solution s.
We divide NPis into four subsets as follows:
• NP1(i, s) = { j ∈ NPis | xi ≤ xj, yi ≤ yj }
• NP2(i, s) = { j ∈ NPis | xi ≥ xj, yi ≥ yj }
• NP3(i, s) = { j ∈ NPis | xi ≤ xj, yi ≥ yj }
• NP4(i, s) = { j ∈ NPis | xi ≥ xj, yi ≤ yj }
Note that NPis = NP1(i, s) ∪ NP2(i, s) ∪ NP3(i, s) ∪ NP4(i, s). Then the crossed objective
function, COF, is defined as follows:

COF: min L + ∑j∈NP1(i,s) (xj + yj) + ∑j∈NP2(i,s) (−xj − yj) + ∑j∈NP3(i,s) (xj − yj) + ∑j∈NP4(i,s) (−xj + yj)
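The four subsets simply encode the sign pattern that pulls every neighbor towards the removed piece. A small sketch of the coefficients each neighbor receives in COF (ties, where xi = xj or yi = yj, are assigned to one subset arbitrarily here; the function name is illustrative):

```python
def cof_coefficients(ref, neighbors):
    # ref = (x_i, y_i): reference point of the piece being reinserted.
    # neighbors: {j: (x_j, y_j)}. Returns {j: (sx, sy)} so that the COF terms
    # are the sum over j of sx*x_j + sy*y_j, pushing each neighbor towards ref.
    xi, yi = ref
    coeffs = {}
    for j, (xj, yj) in neighbors.items():
        sx = 1.0 if xi <= xj else -1.0   # NP1/NP3 get +x_j, NP2/NP4 get -x_j
        sy = 1.0 if yi <= yj else -1.0   # NP1/NP4 get +y_j, NP2/NP3 get -y_j
        coeffs[j] = (sx, sy)
    return coeffs
```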
Figure 7.1 shows an example of a 1-insertion movement using the crossed objective function.
In Figure 7.1 (a) we can see a solution of instance shapes0 with L = 69. We randomly
select piece 36 to be reinserted. The neighborhood of piece 36 is drawn in blue. In Figure
7.1 (b) piece 36 is removed and the direction specified in the objective function for each piece
of the neighborhood is drawn. After the corresponding MIP model is solved, we obtain the
solution presented in Figure 7.1 (c), with L = 68.5. Note that the hole created when piece 36 is
removed is partially covered by piece 28.
We can use these objective functions in the 2-insertion and 3-insertion. In these cases it is
possible that the coordinates of one piece appear more than once in the objective function, because
this piece can be a neighbor of the two pieces being reinserted (or the three pieces in the
3-insertion). To avoid this situation, we assign a priority to the pieces being reinserted
and, for a piece which is a neighbor of more than one piece, we consider that it is a neighbor
only of the piece which has the greater priority.
In Table 7.10 we can see the effect of using the crossed objective function in the 1-insertion
and 2-insertion movements. The results are improved considerably with respect to those obtained
with the other objective functions.
Table 7.11 shows the results obtained by the 2-insertion and 3-insertion where all the rotations
are checked in the reinsertion and the crossed objective function is considered. In the
2-insertion we can see a strong improvement, but in the 3-insertion the improvement is rather
worse and the computational time is reduced. What happens in the 3-insertion is that CPLEX
returns upper bounds because it is not able to solve many of the MIPs to optimality. Then, when
no improvement is detected, the movement stops.
Table 7.10: Performance of 1-insert and 2-insert with crossed objective functions
1-insert-COF 2-insert-COF
Instances % Imp Best L Av. Time % Imp Best L Av. Time
Rotation: 0
dighe2 19.44 100.00 3.05 23.29 118.97 15.94
poly1a 17.32 15.53 13.93 22.94 15.19 67.12
dighe1 10.26 130.90 12.18 19.41 123.70 82.93
shapes0 2.49 67.00 89.53 7.25 63.00 535.66
Rotation: 0-180
albano 7.34 10255.62 191.27 10.41 10077.25 514.37
shapes2 7.36 28.02 95.40 12.20 27.10 336.18
dagli 11.86 61.93 210.19 15.87 60.02 633.85
shapes1 3.46 62.00 301.26 10.30 58.00 1385.21
Rotation: 0-90-180-270
fu 14.41 33.00 10.01 19.42 32.82 26.34
mao 10.75 1907.09 176.48 15.55 1852.82 606.66
marques 10.75 80.67 405.51 11.06 79.56 545.99
jakobs1 5.25 12.94 11.00 8.01 12.00 60.64
jakobs2 6.17 27.00 15.17 10.98 26.00 93.92
Average 9.76 110.99 14.36 377.29
Table 7.11: Performance of 2-insert and 3-insert with crossed objective functions and using the best
rotations
2-insert-COF-rotation 3-insert-COF-rotation
Instances % Imp L best Av. Time % Imp L best Av. Time
Rotation: 0
dighe2 26.06 100.00 35.95 26.49 100.00 37.44
poly1a 23.88 15.03 119.81 23.72 14.96 227.36
dighe1 19.84 117.47 114.56 19.48 113.79 113.48
shapes0 6.21 63.00 499.63 3.48 66.00 586.60
Rotation: 0-180
albano 9.53 10084.08 1025.85 9.29 10201.59 830.49
shapes2 13.71 26.72 981.08 11.96 27.42 1189.40
dagli 16.19 60.02 1388.36 8.71 60.64 1301.64
shapes1 10.15 58.00 2256.27 - - -
Rotation: 0-90-180-270
fu 20.25 32.17 134.63 19.59 31.89 69.56
mao 15.85 1852.73 1331.52 16.07 1844.49 1453.07
marques 13.28 77.37 2132.46 12.40 79.69 1854.55
jakobs1 8.13 12.00 102.40 8.07 12.17 74.81
jakobs2 9.77 26.00 142.68 9.20 26.00 186.31
Average 14.84 789.63 14.04 660.39
Figure 7.1: One iteration of a 1-insertion movement with a crossed objective function
Chapter 8
An Iterated Greedy Algorithm (IGA) generates a sequence of solutions by iterating over constructive
heuristics, alternating a destruction and a construction phase. It can be improved by a local search
applied after the construction phase. The difference from Iterated Local Search (ILS) is that the IGA
iterates over construction heuristics instead of iterating over a local search procedure.
Iterated Greedy Algorithms have been applied successfully to the Set Covering Problem
by Jacobs and Brusco [36] and Marchiori and Steenbeek [44]. Ruiz and Stützle [57] use an
IGA for the permutation flowshop problem. However, IGA has not yet been applied to nesting
problems.
The destruction procedure removes several pieces from either the current solution or the
best known solution. The strategy for choosing the number of pieces to be removed and the
selection criteria are presented in Section 8.2. As happens in ILS, an iteration combining
destruction, construction and local search could produce the same solution from
which the iteration started. In this case, the destructive algorithm changes along the IGA
iterative process, becoming more aggressive.
The construction phase is based on the insertion of the pieces removed in the destruction
phase, in a similar way to the constructive algorithm presented in Section 6.3. After this phase
we obtain high-quality solutions and, in many cases, the best solution of the algorithm. However,
in order to look for even better solutions, after the construction phase we apply a local search
procedure based on the movements presented in Chapter 7. In Section 8.2 we present the destructive
phase and in Section 8.3 we explain the constructive phase. The local search procedure
is defined in Section 8.4. The structure of the Iterated Greedy Algorithm is explained in Section 8.5.
Finally, the computational results and conclusions are presented in Section 8.6.
8.2 Destructive phase
The solution obtained after applying the local search procedure described in Chapter 7 is
usually tight. To determine the level of tightness of a given solution associated with the current
MIP model, we solve two linear problems which consist of fixing all the binary variables to
their current values and changing the objective function as follows:
FO1: L + ∑i xi
FO2: L − ∑i xi
where the sums run over all pieces i = 1, . . . , n.
When we solve the linear model with FO1, all the pieces are placed as close as possible to the
beginning of the strip. On the other hand, when we consider FO2, all the pieces are placed as
close as possible to the end of the strip. The total Euclidean distance between the X-coordinates
of each piece in both solutions (obtained by FO1 and FO2) is therefore a good indicator of the
tightness of the solution.
We divide the pieces into two groups. Let us denote by xi1 and xi2 the values of the x-coordinate
of piece pi in the solutions obtained by considering, respectively, objective functions
FO1 and FO2. We divide the pieces into the following two subsets:
• P1 = {i | dist(xi1, xi2) = 0}
• P2 = {i | dist(xi1, xi2) > 0}
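Given the x-coordinates of the two LP solutions, identifying P1 (the pieces that cannot move, later called the skeleton of the solution) is straightforward. A sketch, where the summed displacement is used as the tightness indicator described above (the function name and tolerance are assumptions):

```python
def split_by_tightness(x_fo1, x_fo2, tol=1e-6):
    # x_fo1[i] / x_fo2[i]: x-coordinate of piece i with all pieces pushed to the
    # beginning (FO1) or to the end (FO2) of the strip, binary variables fixed.
    dist = [abs(a - b) for a, b in zip(x_fo1, x_fo2)]
    p1 = [i for i, d in enumerate(dist) if d <= tol]   # pieces that do not move
    p2 = [i for i, d in enumerate(dist) if d > tol]
    return p1, p2, sum(dist)  # total displacement as a tightness indicator
```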
Therefore, we are going to choose the pieces from P1 randomly. At the beginning, the number
of pieces to be removed, n0, is going to be n0 = 3, but it can increase by one if the local search
procedure or the constructive algorithm reaches the same solution as in the previous iteration. If
the best solution is found in the current iteration, then n0 is reset to 3. We consider n0 ≤ 5
as the upper limit, because the constructive phase allows us to change the relative positions
between the pieces already placed, and that makes it computationally hard to rebuild the solution
if n0 > 5.
8.3 Constructive phase
In Chapter 6 we saw that the number of binary variables can be considered a good indicator
of the relative difficulty of the problems, so if there are a lot of binary variables in an MIP
problem, it is likely to be difficult to solve.
Furthermore, we have also seen that when an MIP model is solved by CPLEX and it stops before
optimality because of the time limit, the solution returned by CPLEX can be a bad one.
Bearing in mind all these lessons from Chapter 6, we have to control the computational
effort each time an MIP is solved. To do that, we use two parameters:
• nh : the number of pieces whose relative positions are relaxed in the next insertion. In
the constructive algorithm presented in Section 6.3, nh is always the number of pieces
already placed in the bin, but here this number will be controlled and adjusted by the
dynamic strategy described in the next paragraphs.
• s: determines the size of the neighborhood by movement of a piece and thus the number
of neighbors whose positions can be relaxed.
The values of these parameters, nh and s, are not fixed beforehand, but adjusted using the
information from the MIPs being solved, the quality of the solutions obtained and the time spent
obtaining them.
Let us denote the number of pieces already placed by n0 and let inext ∈ P be the next piece
to be placed. Let us denote the pieces already placed by i1 , . . . , in0 . Initially, nh is going to be
the number of pieces already placed in the bin (nh = n0 ). The time limit considered for each
insertion is 50 seconds, that is, if CPLEX is not able to prove optimality in 50 seconds then it
returns the best solution found. We denote by ti the computational time needed by CPLEX to
solve the MIP model.
In the case that ti ≤ 25, we consider that the MIP has been easy to solve and the parameters
are modified as follows:
• nh is incremented by one (in the case that it is less than the number of pieces already
placed in the bin). Since inext has been placed, n0 increases by one and, therefore, nh
is increased by one as well.
• s is multiplied by 2. If s increases, the neighborhood of a given piece is expanded. If s
were big enough, all the pairs of pieces in the bin would be considered as neighbors and
the structure of the solution could change completely after the new insertion is made.
If ti > 25, we consider that the corresponding MIP has been hard to solve and the parameters
are adjusted in the following way:
• nh is reduced by one
• s is divided by 2 in order to reduce the number of binary variables in the next MIP model.
Since we use 50 seconds as the time limit for CPLEX, in the worst case ti = 50. In this case
we focus on the GAP given by CPLEX and compare it with τ.
• If the GAP obtained is lower than τ, then the last insertion is accepted, nh = 0 and
s = 0.1 for the next insertion.
• In the case that the GAP obtained is greater than τ, then the given solution is not accepted
and there is an attempt to reinsert inext with nh = 0 and s = 0.1.
We are going to consider τ = 0.3, that is, we accept the feasible solution given by CPLEX
if the GAP is lower than 30%.
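The adjustment rules above can be summarized in a single update function. A sketch (the thresholds of 25 and 50 seconds and τ = 0.3 are the values used in the text; the returned flag says whether the insertion is accepted):

```python
def adjust_parameters(nh, s, t_solve, gap, n_placed, t_limit=50.0, tau=0.3):
    if t_solve <= t_limit / 2:                 # easy MIP: relax more next time
        return min(nh + 1, n_placed), s * 2.0, True
    if t_solve < t_limit:                      # hard MIP: tighten next time
        return max(nh - 1, 0), s / 2.0, True
    # Time limit reached: accept the CPLEX solution only if its gap is below
    # tau; in either case restart from the most restricted setting.
    return 0, 0.1, gap <= tau
```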
We denote this constructive algorithm by DCHS2-TI (Dynamic CHS2-TI). Table 8.1 compares
DCHS2-TI with the static version CHS2-TI, both with objective function FO1. We
can see that the DCHS2-TI algorithm produces similar results and that the computational time is
reduced considerably when the instance size increases. In fact, DCHS2-TI is able to deal with
large instances such as swim or shirts. Table 8.2 shows the average results of 20 runs for all the
instances of nesting problems that we can find in the literature (see Section 6.1).
The constructive algorithm DCHS2-TI is used to build the initial solution for the IGA
and to rebuild the partial solutions given by the destructive phase. When DCHS2-TI is
used after the destructive phase, we evaluate whether the completed solution is accepted
for the local search phase, depending on the length obtained. The idea is to send to
the time-consuming local search procedure only those solutions which are considered promising,
that is, those which have possibilities of improving the best known solution. In order to do
that, we use a threshold ρ which gives the maximum percentage of deviation with respect to
the best known solution that a solution can have in order to be considered for local search.
Initially, in each iteration of the IGA, we consider ρ = 0.05 (5%), and it increases by 0.01
every two times DCHS2-TI fails.
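The acceptance test and the relaxation of ρ can be sketched as follows (function names are illustrative; ρ grows by 0.01 every second consecutive failure, as stated above):

```python
def promising(length, best_length, rho):
    # Send the rebuilt solution to local search only if its deviation from the
    # best known length is at most rho (e.g. rho = 0.05 means within 5%).
    return length <= best_length * (1.0 + rho)

def relax_rho(rho, consecutive_failures):
    # Called after each failure of DCHS2-TI: every second failure relaxes rho.
    if consecutive_failures > 0 and consecutive_failures % 2 == 0:
        return rho + 0.01
    return rho
```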
Table 8.2: Average results of DCHS2-TI in all the nesting instances.
Set of DCHS2-TI
Instances Av. L Best L Av. Time
Rotation: 0
dighe2 126.17 100.00 2.23
poly1a0 16.23 15.39 69.19
dighe1 133.00 123.68 77.24
shapes2-0 28.48 27.63 255.57
shapes0 66.10 62.00 463.69
trousers0 266.05 256.97 1332.21
shirts0 66.18 63.44 2358.34
Rotation: 0-180
blaz2 21.56 21.10 74.19
albano 10587.49 10128.78 158.99
shapes2 28.55 27.41 256.31
dagli 63.48 61.27 217.48
shapes1 62.09 58.00 433.88
swimm1 7100.11 6750.09 2232.94
trousers 256.18 248.45 1074.28
shirts 66.07 63.94 1920.12
Rotation: 0-90-180-270
fu 35.69 34.10 4.87
mao 2042.89 1928.78 98.39
marques 82.25 80.29 91.95
jakobs1 12.47 12.00 147.93
jakobs2 27.08 26.00 222.67
poly1a 15.61 14.71 87.13
poly2a 30.44 28.45 412.24
poly2b 34.03 30.76 447.10
poly3a 45.01 43.19 802.95
poly3b 44.76 42.85 825.39
poly4a 60.47 56.92 1411.32
poly5a 80.06 76.68 2374.67
8.4 Local search
Some of the movements studied in Chapter 7 will not be used here in the design of the local search phase.
On the other hand, we have improved the 1-insertion and the 2-insertion in such a way
that the local search procedure produces good improvements in a reasonable time. The original
1-insertion is very rigid and does not produce a good improvement over the constructive
algorithm. Therefore, we propose a new version of the 1-insertion which changes dynamically
depending on the evolution of the Iterated Greedy algorithm. We use a similar idea
for the 2-insertion.
In both movements, 1-insertion and 2-insertion, we consider the crossed objective function
COF defined in Section 7.4. We have shown that this objective function works better than FO1
or FO2.
The local search procedure is based on applying either the 1-insertion or the 2-insertion while
the length is reduced. If the solution changes but the length remains the same, we repeat the
same movement one more time. If in the second iteration the length is not reduced, we change
to the other movement. The local search finishes when the two movements are performed
without changing the current solution, or when the length remains the same for two iterations
of each movement. The first movement to be applied is chosen randomly.
To choose the piece to be reinserted, we consider the pieces whose slack in the strip is 0,
that is, we use the strategy presented in Section 8.2 to identify the set of pieces
which belong to the skeleton of the current solution. If there are more than 20 pieces in the
skeleton, we randomly choose 20 of them to be reinserted; otherwise we try to reinsert all the
pieces of the skeleton.
The constructive algorithm presented in Section 8.3 uses a dynamic strategy to calibrate
the values of parameters nh and s. When the solution is completely built, the values of
nh0 and s0 used in the insertion of the last piece are considered a good starting point for the
values of the parameters nh and s in the local search procedure. However, in order to intensify
the local search movements during the IG algorithm, nh can increase.
Initially we consider the values nh = min{5, nh0 } and s = min{W/10, s0 }. If during three
iterations of the IG algorithm the best known solution does not change, nh is increased by 1
unit. The upper limit is nh ≤ 15. If in a given iteration of the local search the best known
solution is improved, nh is reset to its initial value.
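This schedule for nh can be written as a small update rule (a sketch; the limits 3 and 15 and the reset behaviour follow the text, while the function name is an assumption):

```python
def update_nh(nh, nh_init, improved, stalled_iterations):
    if improved:
        return nh_init                 # reset when the best solution improves
    if stalled_iterations >= 3:
        return min(nh + 1, 15)         # intensify after three stalled iterations
    return nh
```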
The modified 2-insertion is controlled by two parameters:
• np: the number of pairs of the skeleton which we attempt to reinsert.
• s: defines which pieces belong to the neighborhood of a given piece.
The computational test presented in Section 7.4 considered np to be 10% of all the pairs of
pieces. In the modified 2-insertion we consider several values for np, ranging between 5 and
80 without taking into account the total number of pieces. The second parameter, s, can take
the values 0.1, 1, W/10 and W/15. With these values we build different strategies (levels) for the 2-insertion.
L0 : np = 5 and s = 0.1.
L1 : np = 20 and s = 0.1.
L2 : np = 20 and s = 1.
L3 : np = 20 and s = W/10.
L4 : np = 40 and s = W/10.
L5 : np = 60 and s = W/10.
L6 : np = 80 and s = W/10.
The modified 2-insertion of level 0 (L0) is the fastest. When the IGA does not improve
the best solution in three iterations, then the level of the 2-insertion is incremented. If the best
known solution is improved in the current iteration of the IG algorithm, we reset the level of
the 2-insertion to L0.
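The level table and the escalation rule can be sketched as follows (W is the plate width; the pairs (np, s) are the levels L0 to L6 listed above, and the function names are illustrative):

```python
def level_parameters(level, W):
    # (np, s) for levels L0..L6.
    table = [(5, 0.1), (20, 0.1), (20, 1.0), (20, W / 10.0),
             (40, W / 10.0), (60, W / 10.0), (80, W / 10.0)]
    return table[level]

def next_level(level, improved, stalled_iterations):
    if improved:
        return 0                          # back to the fastest level, L0
    if stalled_iterations >= 3:
        return min(level + 1, 6)          # escalate when the IGA stalls
    return level
```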
8.5 IG Algorithm
The Iterated Greedy (IG) algorithm is based on the constructive algorithm DCHS2-TI presented
in Section 8.3. The IG algorithm is organized as follows:
S.1 Build an initial solution with the constructive algorithm DCHS2-TI.
S.2 Apply the local search procedure described in Section 8.4.
S.3 If the best solution is improved, then it is updated. If the distance between the solution
obtained and the best known solution is lower than 1%, we randomly choose the solution
to be used in the destruction phase (the best or the current solution). Otherwise, we
consider the best known solution in the destructive phase.
S.4 Destructive phase: remove n0 pieces from the chosen solution (Section 8.2).
S.5 Rebuild the partial solution with the constructive algorithm DCHS2-TI (Section 8.3).
S.6 If the threshold (ρ) is satisfied (the distance between the solution obtained and the best
solution is lower than ρ), go to (S.2). Otherwise, go to (S.4) with the best known solution.
We denote the best known solution by s∗. The solution obtained in the constructive phase
is denoted by s (initially s = s∗). In the second step (S.2) we apply the local search procedure
presented in Section 8.4. We denote by s′ the solution obtained after applying the local
search procedure to solution s. Then we differentiate the following cases:
• Ls′ ≤ Ls∗. In that case the best known solution is improved, so s∗ is updated and the
parameters are reset to their initial values (n0 = 3, the level of the 2-insertion to L0 and
nh to its initial value).
After the local search procedure we use the destructive phase described in Section 8.2 in
order to eliminate n0 pieces from either the current solution s or the best known solution s∗
((S.3) and (S.4)). If the distance between L(s) and L(s∗) is more than 1%, we consider s∗ in the destructive phase. Otherwise, we randomize the selection between s and s∗.
The destructive phase eliminates several pieces randomly chosen from the skeleton of the
solution. The partial solution obtained is rebuilt with the constructive algorithm (S.5). As we
mention in Section 8.3, we use a threshold (ρ) to eliminate bad solutions obtained in (S.5). If the distance between the solution given by (S.5) and the best known solution is lower than the threshold ρ, we apply the local search (go to S.2). Otherwise, we go to step (S.4) with the best known solution, and every second time this happens ρ is incremented by one unit.
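The flow described above (construct, improve, destroy either the current or the best solution depending on the 1% distance, rebuild, accept through the threshold ρ and relax ρ every second rejection) can be sketched generically. The callables `construct`, `local_search`, `destroy` and `length` are problem-specific stand-ins; the signatures are illustrative, not the thesis implementation.

```python
import random

def iterated_greedy(construct, local_search, destroy, length, rho=1.0,
                    n_iters=100, seed=0):
    """Generic sketch of the IG loop of Section 8.5 (names assumed)."""
    rng = random.Random(seed)
    best = construct(None)            # S.1: initial constructive solution
    cur = best
    failures = 0
    for _ in range(n_iters):
        cur = local_search(cur)       # S.2
        if length(cur) < length(best):
            best = cur                # S.3: best known solution improved
        # Choose the solution to destroy: if cur is within 1% of best,
        # pick one of the two at random; otherwise use best.
        gap = (length(cur) - length(best)) / length(best) * 100
        base = rng.choice([cur, best]) if gap < 1.0 else best
        partial = destroy(base)       # S.4: remove some pieces
        cand = construct(partial)     # S.5: rebuild the partial solution
        gap = (length(cand) - length(best)) / length(best) * 100
        if gap < rho:                 # S.6: threshold test
            cur = cand
        else:
            cur = best
            failures += 1
            if failures % 2 == 0:     # every second rejection, relax rho
                rho += 1.0
    return best
```

A toy run on a one-dimensional surrogate (the "solution" is just its length) illustrates the loop converging to the best reachable value.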
The stopping criteria are based on the computational time, the total number of iterations
and the number of iterations without improving the best known solution. We consider that one
iteration is completed after the local search procedure is used (S.2). The IG algorithm finishes
in the following cases:
• Total computational time is greater than 36000 seconds and the best known solution has not been modified during 20 iterations. In difficult instances such as swim, trousers, poly2a, poly2b, poly3a, poly3b, poly4a, poly5a and shirts, in which one iteration of the local search procedure requires more than 1000 seconds, we allow only 2 iterations without improving the best known solution.
• The total number of iterations is greater than or equal to 100, and for 20 iterations the
best known solution has not been modified.
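The two stopping rules above can be sketched as a single predicate. The reading that the reduced stall limit for hard instances applies within the same 36000-second rule is an assumption; the function name and signature are illustrative.

```python
def should_stop(elapsed, iters, stalled, hard_instance=False):
    """Sketch of the IGA stopping rule. An iteration is counted after the
    local search step (S.2); `stalled` counts iterations without
    improving the best known solution. `hard_instance` marks instances
    (swim, trousers, poly2a/b, poly3a/b, poly4a, poly5a, shirts) where
    one local-search iteration exceeds 1000 seconds."""
    stall_limit = 2 if hard_instance else 20
    if elapsed > 36000 and stalled >= stall_limit:
        return True
    if iters >= 100 and stalled >= 20:
        return True
    return False
```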
The next section shows the computational results and a comparison with state of the art
algorithms.
We use the instances presented in Section 1.6.1 with the following exceptions and modifi-
cations:
• Instances glass1, glass2, glass3, dighe2 and dighe1 are simple to solve. These are broken-glass instances and the pieces cannot rotate. The IGA obtains the optimal solution in less than 10 seconds in all of them, with the exception of instance dighe1, for which it proves optimality in 10567 seconds. Furthermore, all the algorithms that we are going to consider obtain the optimal solution of instances dighe1 and dighe2 (the glass instances are not considered).
• Instances poly4b and poly5b have only been used by Burke et al. [18] and, with the tools
we have available we had problems calculating all the non-fit polygons, as there are too
many different types of pieces (60 and 75 respectively).
• For the instance swim we have built a version reducing the number of vertices of the
pieces in such a way that each solution of the reduced version is a valid solution for the
original instance.
We are going to consider the following state-of-the-art algorithms from the literature:
SAHA The hybrid simulated annealing algorithm by Oliveira et al. [32] (2006).
2DNEST The fast neighborhood search for polygon placement using a bottom-left strategy by
Egeblad et al. [26] (2007).
ELS The extended local search algorithm based on nonlinear programming by Leung et al.
[40] (2012).
SA-SMT The two-level algorithm by Sato et al. [58], which uses the collision-free region and exact fitting placement. Since the inner level is a simulated annealing algorithm, we refer to this algorithm as SA-SMT (Simulated Annealing by Sato, Martins and Tsuzuki, 2012).
In Table 8.3 we compare the minimum length obtained by all the algorithms in all the instances. The table has two parts because some instances have been used by all the authors while others have only been used by Burke et al. [18].
In the first part of the table, we can see that none of the algorithms gets the best solution for
all the instances, showing that for this very difficult problem none of the proposed approaches is
consistently the best. Our algorithm IGA improves the best solution published in the literature
in instances shapes0 and shapes2 (in Section 1.6.1 we can see the solutions drawn). In general,
the results obtained with IGA are competitive with all the other algorithms. However, it seems
that our algorithm works slightly worse on instances with a high number of pieces, such as
swim, shirts and trousers. Furthermore, in these instances, the computational time increases considerably in order to complete 20 iterations without improvement (see Table 8.4). This behaviour indicates that the corresponding MIPs that we solve in each movement of the local search procedure are very hard to solve to optimality.
In the second part of the table, the results obtained with the IGA improve the best known solution in all these instances, with the exception of instance poly5a. However, instances poly2a, poly3a, poly4a, poly5a, poly2b and poly3b require a great computational effort, although the number of pieces is not too high.
Instance poly1a0 can only be compared with the result obtained with the exact algorithm
presented in Chapter 3. We can see that by giving 36000 seconds to both algorithms, the IGA
obtains a solution of L = 14.6, while the exact algorithm gets L = 15.9 (see Table 3.10).
Table 8.4 completes the information in Table 8.3, not only for IGA but for all the other algorithms. It can be seen that the good results of SA-SMT are obtained at a high computational
cost, much higher than those of ELS, ILS and FITS, which attain a very good balance between
quality and cost. In our case, we have allowed our algorithm a higher computational time than
these three algorithms, but not as high as SA-SMT.
Looking at the results obtained by IGA in more detail, we have observed that it obtains good
solutions after applying the destructive and constructive phases. In fact, these procedures often
obtain the best current solution before applying the local search procedure. Nevertheless, the
local search procedure usually improved the current solution. The movements used have been
extensively studied. The difficulty of the models that we have to solve each time any movement is done depends on the problem and even on the current solution. For this reason we have chosen a dynamic strategy, based on the gap obtained and the computational time needed by CPLEX to solve the corresponding models, and we have tried to adjust the parameters in order to build affordable MIP problems.
Despite the fact that instance shapes1 has 43 pieces and instance poly2b only 30, one iteration of the local search procedure requires more time on instance poly2b. This difference might arise because the proportion between the average width of the pieces and the width of the strip is greater on instance shapes1. That is, in general, the pieces are wider in proportion to the strip width on instance shapes1.
Table 8.3: Best length obtained by each algorithm
The computational results show that IGA is competitive with the state of the art algorithms
and improves the best known solution in several instances.
Table 8.4: Computational times of algorithms
Chapter 9
9.1 Introduction
The two-dimensional irregular-shape bin packing problem with guillotine cuts arises in the glass cutting industry, where each cut in the cutting process divides the stock sheet, or the given part that is going to be cut, into two different parts. Most of the algorithms that can be found in the literature on two-dimensional irregular-shape packing problems minimize the length of the strip required to accommodate the pieces and do not force a guillotine cut structure. On the
other hand, most of the algorithms including guillotine cuts deal with rectangles, so the guillotine cuts are orthogonal to the edges of the stock sheet. Therefore, the problem considered here combines three difficult components: the non-overlapping of the pieces, which with irregular polygons is a hard problem, especially when the pieces are allowed to rotate freely; the
bin packing problem in which pieces have to be associated with bins; and the guarantee that the
solution can be produced by a set of guillotine cuts. We propose a constructive algorithm which
inserts the pieces one at a time by using a mathematical model and two different guillotine cut
structures. This constructive algorithm outperforms the previous algorithms designed for this
problem.
Up to now, the only paper which has dealt with the irregular-pieces bin packing problem
with guillotine cuts is presented by Bennell et al. [11]. They propose two different construction heuristics.
Figure 9.1: Real example of a bin packing problem with guillotine cuts and non-rectangular pieces
The first heuristic, a one-stage algorithm, combines two pieces (or two sets of
pieces) by matching two of their edges using a dynamic solution evaluation to create a new
item which is transformed into its convex hull. Therefore, the guillotine cut structure is always
satisfied. This algorithm produces high-quality results in problems with a low number of pieces
per bin. When pieces are small and many pieces fit into one bin, this algorithm requires great computational effort. The second constructive heuristic, a two-stage algorithm, combines
pairs of pieces into rectangles by using the phi-functions presented by Romanova et al. [56],
and then uses the guillotine bin packing algorithm developed by Charalambous and Fleszar
[19] to pack the rectangles. The combination of the pieces into rectangles using phi-functions
requires great computational effort, up to eighteen hours in the largest instance (149 pieces),
although the packing algorithm by Charalambous and Fleszar [19] takes less than one second.
In this chapter we propose a constructive algorithm based on the insertion of pieces one at
a time. In order to add one piece to a given bin, a mixed integer model is solved to optimality.
The model is based on the formulation proposed in Chapter 2 for nesting problems but, in order to guarantee a guillotine cut structure, after the insertion of a new piece we identify a new guillotine cut which is going to be associated with one of the pieces already placed. The difference between both constructive algorithms lies in the way the association of guillotine cuts to pieces is done. The first guillotine cut structure associates each new guillotine cut with the
latest inserted piece. The second guillotine cut structure does not take into account the insertion
order of the pieces. The idea is to associate the new guillotine cut with the piece for which one
of the edges is concurrent with the guillotine cut. Note that each guillotine cut is defined in
order to separate two pieces, so it is important to associate the guillotine cut with one of those
pieces.
The mixed integer formulation (MIP formulation) used here is based on those by Fischetti
and Luzzi [27] and by Gomes and Oliveira [32], which use the Non-Fit Polygons (NFP) to obtain the non-overlapping constraints. We use the horizontal slices formulation (HSF) presented
in Chapter 2 to obtain the partition of the outer zone of each NFP.
For pieces of irregular shapes, in Bennell and Oliveira [12] we can find an interesting discussion on the different strategies for building the NFPs. In our case, as the pieces are convex, it is easy to obtain the NFPs. Bennell and Oliveira [12] describe an algorithm which just sorts the edges of both pieces, taking into account the angles, and the NFP is easily obtained. However, the pieces can be rotated continuously and can also be reflected (a mirror transformation).
Each NFP corresponds to a fixed rotation and a fixed reflection of two polygons, so each time
that a given piece changes its rotation or reflection, the MIP model has to be updated because
the NFPs of the pieces could be different. In this chapter we also present an algorithm to decide
the rotations and reflections of the pieces.
Due to the computational cost of the constructive algorithms it makes no sense to apply a
local search procedure like the tabu search presented by Lodi et al. [42], which tries to empty
less occupied bins by assigning pieces to sub-instances that include pieces from k bins. This
tabu search might improve the bin packing component of the problem because the association
of the pieces in the bins that we use is completely greedy, but the computational effort would
be excessive.
In the next section we give a detailed description of the problem and some notation. Section 9.4 presents the horizontal slices MIP model (HSF), which is going to be used for the insertion of each piece. The two different guillotine cut structures are presented in Section 9.5. In
Section 9.6 we describe the algorithm used to decide the rotations and reflections of the pieces
and in Section 9.7 we introduce the constructive algorithm scheme. In Section 9.8 we propose a different way of getting the guillotine cut structure and in Section 9.9 we embed an improvement procedure into the constructive process. Section 9.10 contains the computational study.
Finally, in Section 9.11 we draw some conclusions.
Let B denote the set of bins used. Each bin, bi (Pi , Xi , Yi , Ri , Mi , Gi ) ∈ B, has a set of pieces
associated, Pi ⊆ P. Each piece p ∈ Pi is given by an ordered list of vertices, p = (v1, . . . , vt), and its edges can be expressed by ei = (vi, vi+1), where i = 1, . . . , t − 1 and the t-th edge is et = (vt, v1). The coordinates of the reference points of the pieces are given by vectors
Xi ∈ R|Pi| and Yi ∈ R|Pi|. The rotation angle and the reflection (mirror transformation) of the pieces are given by Ri ∈ R|Pi| and Mi ∈ B|Pi|, where 1 represents that the mirror transformation of the original piece is done. Finally, G_i = (g_i^1, . . . , g_i^{|P_i|−1}) is an ordered set of guillotine cuts, in such a way that the first guillotine cut, g_i^1 ∈ Gi, divides bi into two parts. The second guillotine cut, g_i^2 ∈ Gi, is going to divide one of those parts, and so on. Note that the endpoints of a cut can lie on some of the previous cuts instead of on one of the edges of the bin.
Each guillotine cut, g_i^k(p, v_ini, v_end) ∈ Gi, where k ∈ {1, . . . , |Pi| − 1}, has a piece p ∈ Pi associated, and the endpoints of the cut, v_ini and v_end, are expressed in a coordinate system in which the reference point of piece p is placed at the origin. This means that we have to know the position of piece p in order to know where g_i^k is placed in the bin. We say that g_i^k has order k if it is the kth guillotine cut. The order in which the guillotine cuts are added is very important because the endpoints of the guillotine cuts are given by the intersection with either the closest guillotine cut with a lower order or the edges of the bin.
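Since a cut stores its endpoints relative to the reference point of its associated piece, recovering its position in the bin is a simple translation. A minimal illustrative helper (the function name is ours, not the thesis notation):

```python
def cut_in_bin(cut_rel_ini, cut_rel_end, piece_ref):
    """A guillotine cut g_i^k(p, v_ini, v_end) stores its endpoints
    relative to the reference point of its associated piece p, so the
    cut moves with the piece. Translating it to bin coordinates just
    adds the piece's reference-point coordinates."""
    (xi, yi), (xe, ye) = cut_rel_ini, cut_rel_end
    px, py = piece_ref
    return (xi + px, yi + py), (xe + px, ye + py)
```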
For each bi ∈ B we consider that the bottom-left corner of the boundary of bi is located
at the origin, and the position of each piece p ∈ Pi is given by the coordinates of its reference point (x_p, y_p), x_p ∈ Xi and y_p ∈ Yi, which corresponds to the bottom-left corner of the enclosing rectangle of the piece, even if p changes its rotation or reflection.
Objective
The objective is to minimize the total number of bins (stock sheets) used. The last bin is usually only fractionally used and then, if a horizontal or vertical cut is applied, the largest reusable rectangle is not considered as waste. So the objective is to minimize the fractional number of bins (F).
Either of the two measures, (U) and (F), is helpful for differentiating the quality of competing methods when they produce solutions with the same number of bins. There is a close relation between (F) and (U). If we consider two different solutions s1 and s2 such that U(s1) > U(s2), indicating that the usage obtained by solution s1 is better, then s1 has a smaller fractional number of bins, F(s1) < F(s2).
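A small numerical sketch makes the inverse relation concrete. The formulas below are assumed readings, not the thesis definitions: F as the number of full bins plus the used fraction of the last bin, and U as total piece area over the fractional bin area consumed.

```python
def fractional_bins(full_bins, last_bin_used_fraction):
    """F: completely used bins plus the fraction of the last bin that is
    actually consumed (hedged reading: after a horizontal or vertical
    cut, the largest reusable rectangle is not counted as waste)."""
    return full_bins + last_bin_used_fraction

def usage(total_piece_area, bin_area, F):
    """U as total piece area over the fractional bin area consumed --
    an assumed definition (U is introduced earlier in the thesis); with
    fixed piece and bin areas, a larger U means a smaller F."""
    return total_piece_area / (bin_area * F)
```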
must respect the guillotine cut structure already defined in previous steps. The solution for the
model provides the position of all the pieces involved in such a way that the new piece does not
overlap any of the other pieces and does not cross any of the existing guillotine cuts. The MIPs
become harder to solve when the number of pieces placed into the bin increases. The new piece
which is going to be inserted has a fixed rotation and a fixed reflection in order to calculate the
NFPs between the new piece and the pieces already placed.
Let Pi ⊆ P be the set of pieces already placed, let p ∈ P \ Pi be the piece which is going to be inserted into bin bi ∈ B, and let r_p and m_p be, respectively, the rotation and reflection of p. We first write the whole model and then explain each component in detail:
• Objective function
The objective function (9.2) is a weighted combination of the length and width used,
represented by Lc and Wc , respectively. With this objective function we try to pack the
pieces as tightly as possible. We consider three different alternatives for the relative
weight ω:
– FO0: ω = 1/((W/L) + 1)
In this case, the rectangle (Lc, Wc) grows keeping the proportions of the bin.
– FO1: ω = 0.01
The objective is to minimize the width, and the length is used as a tie-breaker.
– FO2: ω = 0.99
The objective is to minimize the length, and the width is used as a tie-breaker.
• Containment constraints
Inequalities (9.3) and (9.4) ensure that the length and width used by the pieces in the
bin do not exceed the bin dimensions. Inequalities (9.5) and (9.6) define Lc and Wc , that
is, all the pieces are placed into the rectangle whose bottom-left corner is the origin and
whose upper-right corner is (Lc , Wc ).
• Non-overlapping constraints
Inequalities (9.7) are defined to ensure that the new piece p does not overlap any other
piece already included in the bin. These constraints are taken from the horizontal slices formulation proposed in Section 2.3. In that case, the binary variables associated with slices are denoted by v_ijk, i, j ∈ P and k ∈ {1, . . . , m_ij}, where m_ij is the number of slices defined from the NFP_ij.
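The three weight choices for the objective function can be illustrated as follows. The additive form ω·Lc + (1 − ω)·Wc is an assumption consistent with the tie-breaking behaviour described above, since equation (9.2) is not reproduced in this excerpt.

```python
def objective(Lc, Wc, L, W, variant):
    """Sketch of the weighted objective omega*Lc + (1-omega)*Wc for the
    three weight choices FO0/FO1/FO2 described above (the additive form
    is an assumption; the thesis only states that omega weighs the
    length used against the width used)."""
    if variant == 'FO0':
        omega = 1.0 / (W / L + 1.0)   # rectangle grows with bin proportions
    elif variant == 'FO1':
        omega = 0.01                  # minimize width; length breaks ties
    else:                             # 'FO2'
        omega = 0.99                  # minimize length; width breaks ties
    return omega * Lc + (1.0 - omega) * Wc
```

With FO1, a placement that saves width always wins even if it uses more length, and symmetrically for FO2.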
Initially, all the pieces are sorted by a certain criterion in order to be inserted into the bins.
If one piece does not fit into a given bin, before creating a new bin we try the insertion of the
remaining pieces into that bin.
The rotation of the pieces and the reflection are obtained by the algorithm presented in
Section 9.6. This algorithm chooses r different rotations of both the original and the reflected
polygons, taking into account the pieces and the edges of the bin. Each one of the rotations
in the given reflection determined by the algorithm is tried by solving the corresponding MIP
model. The final position of the pieces is the one which produces the lowest value of the current objective function. Note that we solve several MIP models in order to make the insertion of one
piece, and for each different rotation or reflection (if the piece is not symmetrical) the NFPs
and the non-overlapping constraints have to be recalculated.
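The insertion step just described (try every candidate rotation and reflection by solving a MIP, keep the placement with the lowest objective value) can be sketched as a loop. `solve_mip` stands in for the model of Section 9.4; all names and signatures here are illustrative.

```python
def insert_piece(piece, bin_state, rotations, reflections, solve_mip):
    """Sketch of the insertion step: each candidate rotation (and
    reflection, for non-symmetric pieces) is tried by solving the
    corresponding MIP, and the placement with the lowest objective
    value is kept. `solve_mip` returns (objective_value, placement)
    or None if the piece does not fit."""
    best = None
    for mirrored in reflections:          # e.g. (False, True)
        for angle in rotations:
            # NFPs and non-overlapping constraints must be rebuilt for
            # every distinct rotation/reflection of the piece
            result = solve_mip(piece, bin_state, angle, mirrored)
            if result is not None and (best is None or result[0] < best[0]):
                best = result
    return best
```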
piece already placed. This new cut and the cuts from the previous iterations (constraints (9.9))
have to be included in the next models to ensure the separation of the pieces already included.
We consider two different associations between the guillotine cuts and the pieces. A first
structure, associated guillotine cuts (AGC), tries to associate the guillotine cut with the piece
which has an edge concurrent with it, in such a way that all the guillotine cuts have at least one
piece with one concurrent edge. The second guillotine cut structure, iterated guillotine cuts
(IGC), is based on the association of a new guillotine cut with the latest piece inserted.
AGC
In what follows we are going to use the example presented in Figure 9.2. When the first piece
is inserted into the bin, no guillotine cut is needed. So the model defined in Section 9.4 is
going to have only the containment constraints (9.3), (9.4), (9.5) and (9.6) because there is no
overlapping problem. Once the first piece p1 is inserted, we try to insert another piece, p2 .
The model for the second insertion is going to have the containment constraints for both pieces
and the non-overlapping constraints of both pieces. Note that there are still no guillotine cuts
defined. When the model with two pieces is solved, the solution provides the coordinates of
both pieces in the bin and we have to identify a guillotine cut.
Since pieces are convex polygons, there is at least one edge of one piece which can be used
as a valid guillotine cut. Thus the guillotine cut is associated with the piece for which one of
its edges is concurrent with the guillotine cut. In the example in Figure 9.2, the guillotine cut
is concurrent with an edge of p2 , so it is associated with p2 . Therefore, the relative position
between p2 and this guillotine cut is fixed throughout the construction process. In other words,
wherever piece p2 is moved after the solution of successive MIPs, the guillotine cut will be
moved with it.
The first guillotine cut in this example can be denoted as g1(p2, v_ini^1, v_end^1), where p2 is the associated piece and the line defined by points v_ini^1 and v_end^1 gives the relative position between p2 and g1. That is, in order to know the position of the cut g1 in the bin, we have to add the coordinates of p2 to v_ini^1 and v_end^1. The endpoints of this guillotine cut are determined by
the intersection between the cut and the edges of the bin. The inequality defined by g1 which
separates p1 and p2 (inequality (9.9)) has the following structure:
where a12 , b12 and c12 are the coefficients needed to define the inequality. Let p3 be the
third piece to be inserted with a given rotation and reflection. In this case the non-overlapping
position between p1 and p2 is ensured by an inequality (9.9) defined by the guillotine cut g1 .
The non-overlapping constraints (9.7) are included for separating p3 from p1 and p2 .
In order to guarantee inequality (9.10) for piece p3 and g1, we add the following three inequalities:
α_{p2,p3,1}(x_{p3} − x_{p2}) + β_{p2,p3,1}(y_{p3} − y_{p2}) ≤ γ^R_{p2,p3,1} + (1 − χ^R_{p2,p3,1})M (9.14)
α_{p2,p3,1}(x_{p3} − x_{p2}) + β_{p2,p3,1}(y_{p3} − y_{p2}) ≤ γ^L_{p2,p3,1} + (1 − χ^L_{p2,p3,1})M (9.15)
χ^R_{p2,p3,1} + χ^L_{p2,p3,1} = 1 (9.16)
Constraint (9.14) forces p3 to be placed to the right of g1 when the corresponding binary variable χ^R_{p2,p3,1} = 1. When the binary variable χ^R_{p2,p3,1} takes the value 0, the big-M constant deactivates the inequality. Similarly, in order to place p3 to the left of g1, we define a binary variable, χ^L_{p2,p3,1}, and inequality (9.15). Equation (9.16) forces p3 to be placed on one side of g1. We consider that the piece which has the guillotine cut associated is always placed to the left of the cut, and we maintain that notation in all cases, irrespective of the slope of the cut.
Coefficients α_{p2,p3,1} and β_{p2,p3,1} are the same in both inequalities (9.14) and (9.15) because the lines are parallel. However, coefficients γ^R_{p2,p3,1} and γ^L_{p2,p3,1} are different because the vertex which touches the guillotine cut on each side, keeping all the other vertices on the same side, is different.
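The pair of big-M inequalities (9.14)-(9.16) can be illustrated with a toy feasibility check: given the cut coefficients and a candidate relative placement, exactly one binary must be active, and the big-M term deactivates the inequality whose binary is 0. This is a sketch for intuition, not the CPLEX model itself; the function name is ours.

```python
def side_constraints_ok(alpha, beta, gamma_R, gamma_L, dx, dy,
                        chi_R, chi_L, M=1e6):
    """Checks a candidate placement against (9.14)-(9.16): dx, dy are
    the relative coordinates (x_p3 - x_p2, y_p3 - y_p2) and chi_R/chi_L
    the binary choice of side."""
    if chi_R + chi_L != 1:                       # (9.16): pick one side
        return False
    lhs = alpha * dx + beta * dy
    right_ok = lhs <= gamma_R + (1 - chi_R) * M  # (9.14)
    left_ok = lhs <= gamma_L + (1 - chi_L) * M   # (9.15)
    return right_ok and left_ok
```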
Once p3 is inserted into the bin in Figure 9.2, we have to identify the guillotine cut of order
2 which separates p3 from either p1 or p2 . We can observe that the position of pieces p1 and p2
and guillotine cut g1 have changed when piece p3 is inserted. Since p3 is placed on the same
side of g1 as p1 , the new guillotine cut is going to separate pieces p1 and p3 . The thick black
edge on the left of piece p1 is used as the new guillotine cut, g2 , which is associated with p3 .
In that case, the top and the bottom limits of g2 are given by the intersections with the edges of
the bin.
After the third insertion the model becomes somewhat different because of the guillotine
cuts. Let p4 be the piece which is trying to be inserted. Inequalities (9.9) defined by guillotine
cuts g1 and g2 ensure the non-overlapping configuration of pieces p1, p2 and p3. Note that there are two inequalities (9.9) associated with g1: one given by (9.13) and another given by fixing χ^R_{p2,p3,1} = 1, which reduces equations (9.14), (9.15) and (9.16) to one inequality of type (9.9). Both inequalities are needed for the separation of the pairs p1 − p2 and p2 − p3. Finally, there is one inequality which uses the last guillotine cut g2 to separate p1 and p3.
The non-overlapping constraints (9.7) are used to separate p4 from the rest of the pieces. In
order to guarantee that p4 is going to respect g1 and g2 , constraints (9.10), we add the following
constraints:
Inequalities (9.17), (9.18) and equality (9.19) have the same structure as constraints (9.14),
(9.15), (9.16), corresponding to the insertion of p3 , because the first guillotine cut must always
be satisfied, that is, the new inserted piece has to be either to the right or to the left of g1 .
In addition, inequalities (9.20), (9.21) and (9.22) are defined to force p4 to be placed on one side of g2, if that is necessary. If χ^L_{p2,p4,1} = 1, p4 and g2 are placed on opposite sides of g1, so g2 and p4 are separated by g1 and there is no need for a new constraint to separate them. Equation (9.22) allows the fixing of both binary variables, χ^R_{p3,p4,1} and χ^L_{p3,p4,1}, to 0, deactivating inequalities (9.20) and (9.21). If χ^R_{p2,p4,1} = 1, then p4 would be placed on the same side of g1 as g2 (also p1 and p3), and if we do not add inequality (9.22), then g2 could cross p4. In that case, one of the two binary variables χ^R_{p3,p4,1} or χ^L_{p3,p4,1} has to take the value 1, activating the corresponding inequality. If, instead of using this conditional structure, we had repeated the structure of constraints (9.17), (9.18) and (9.19), forcing the new inserted piece to satisfy all the guillotine cuts already defined, the model would have become unnecessarily restrictive.
Figure 9.2 shows the insertion of piece p4. We can see that the endpoints of the new guillotine cut needed to separate pieces p1 and p4 are defined by the intersection with previous guillotine cuts instead of with the edges of the bin. It can also be observed that, while maintaining their relative positions, pieces p2 and p3 have been moved upwards, taking the associated guillotine cuts with them and making room for piece p4.
We can add the rest of the pieces iteratively, as shown in the other drawings in Figure 9.2.
Let pl be the next piece to be inserted. The previous solution has already placed l − 1 pieces into the bin and there are l − 2 guillotine cuts defined. For each guillotine cut gt, t = 2, . . . , l − 2, we know which of the guillotine cuts with lower order, t′ = 1, . . . , t − 1, it is associated with. That is, gt can be placed on one side of g_{t′}, or can be separated from g_{t′} by another guillotine cut, having no relation with g_{t′}. We denote the previous guillotine cut with which gt is related by t∗ ∈ {1, . . . , t − 1}.
Inequalities (9.23) and (9.24) define the guillotine cuts, which are activated by binary variables χ^R_{pk,pt,s} and χ^L_{pk,pt,s}, respectively. The number of guillotine cuts associated with piece pk is denoted by sk.
Equality (9.25) ensures that the first guillotine cut, i.e. the guillotine cut with order 1, is
always satisfied by the new piece to be inserted (pt ). After that guillotine cut is satisfied, the
Figure 9.2: Example of the packing of a bin with the AGC structure.
remaining guillotine cuts are going to have a relation with the new inserted piece using equalities (9.26). The value of σ can be L (left) or R (right), depending on the relation between the guillotine cuts obtained in the previous steps of the constructive procedure. If the corresponding binary variable on the left-hand side of (9.26) takes the value 1, then a relation is needed between the sth guillotine cut associated with pk and piece pt.
Figure 9.3: Difference between AGC and IGC (real example on instance han120).
IGC
The Iterated Guillotine Cut structure associates each guillotine cut with the last piece inserted into the bin and does not take into account whether any edge of the piece is concurrent with the guillotine cut.
Inequalities (9.9) and (9.10) are the same in both structures. The IGC structure is simpler
than the AGC because each piece, except p1 , has one and only one cut associated (sk = 1 for
k = 2, . . . , t − 1 and s1 = 0).
9.6 Rotations and reflections
We define the reflection of a given piece p ∈ P as the polygon obtained by applying the mirror transformation. The rotation of the pieces is completely free, that is, they can be rotated continuously between 0 and 360 degrees.
In this section we present an algorithm which decides the best rotations of one piece to be
inserted into the bin, taking into account the slope of the edges of the pieces already placed and
the edges of the bin. Since the model presented in Section 9.4 allows the piece which is going
to be inserted to be placed in any part of the bin, it is interesting that the given rotation of the
piece produces as many matchings as possible between the new piece and the pieces already
placed. That is, the algorithm looks for rotations of the new piece which would allow it to fit
better with the pieces already placed.
The algorithm GR (Get Rotations) chooses a set of rotations for the piece which is going to
be inserted in the next iteration of the constructive algorithm. The number of rotations nr is an
input of the algorithm and the output is a set of nr rotations.
Let pi be the piece which is trying to be inserted. The set of rotation angles that we are
going to study is obtained by matching each edge of pi with each edge of the bin and each edge
of each piece already placed into the bin. If the number of angles that we obtain is lower than
nr , then algorithm GR returns all these angles. In the case that we obtain more than nr different
rotations, we sort the angles by the following criteria:
a) Non-increasing number of matchings between the edges of the polygon obtained by applying the given rotation to the piece and the edges of the bin and of all the pieces already placed in the bin.
b) In order to break ties in (a), we use the total length of the edges of all the matchings.
The first nr rotations are returned by the GR algorithm.
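A simplified sketch of algorithm GR: candidate angles come from matching each edge of the piece against each edge already in the bin (the bin edges included), ranked by number of matchings with total matched edge length as the tie-breaker. This is a simplified reading of the text: the real algorithm rotates the whole polygon and recounts matchings, whereas the sketch below counts matchings directly per candidate angle.

```python
import math
from collections import defaultdict

def edge_angles(polygon):
    """Orientation (degrees, mod 180) and length of each polygon edge."""
    out = []
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        out.append((round(ang, 6), math.hypot(x2 - x1, y2 - y1)))
    return out

def get_rotations(piece, placed_edges, nr):
    """Sketch of GR: rank candidate rotation angles by (a) number of
    edge matchings and (b) total matched edge length; return the nr
    best. `placed_edges` is a list of (angle, length) pairs for the bin
    edges and the edges of the pieces already placed."""
    candidates = defaultdict(lambda: [0, 0.0])   # angle -> [count, length]
    for pa, plen in edge_angles(piece):
        for ta, tlen in placed_edges:
            rot = (ta - pa) % 180.0
            candidates[rot][0] += 1
            candidates[rot][1] += plen + tlen
    ranked = sorted(candidates, key=lambda a: (-candidates[a][0],
                                               -candidates[a][1]))
    return ranked[:nr]
```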
The different strategies that we are going to use in the constructive algorithm are:
• 3R: We try the three best rotations given by algorithm GR.
• 3Rx3R: We try the three best rotations of the piece given by algorithm GR and the three
best rotations after applying the mirror movement.
• 6R: We try the six best rotations given by algorithm GR.
• 3R+1x3R+1: We try the three best rotations given by algorithm GR (also in the mirror
polygon) for the first piece of each bin and we increase the number of rotations by one
every time a new piece is inserted.
• 3R+3x3R+3: Similar to the previous one, but we increase the number of rotations by
three.
• 5R+5x5R+5: Similar to the two previous strategies, but we increase the number of rotations by five and we begin by studying five rotations for the first piece.
• E30: Every 30° in both polygons (the original one and the one obtained after applying the mirror). The first rotation is given by matching the longest edge of the piece with the bottom edge of the bin.
• The rotations of the pieces to be inserted. This determines the number of rotations and the criterion for choosing them. The different criteria are described in Section 9.6.
• The reflections. It would be interesting to know if using reflections produces better solu-
tions or whether with just the rotations it is enough to find good solutions.
• The objective function used in the MIP model. We consider the three different objective
functions defined in Section 9.4.
• The guillotine cut structure used, IGC or AGC, described in Section 9.5.
We are going to consider three different criteria for sorting the pieces at the beginning of
the process:
- Randomly.
- By shape. We pack first the pieces whose shape is similar to a rectangle and then the rest of the pieces. Pieces are therefore divided into two sets: one set contains the pieces whose shape is approximately rectangular and the other set contains the remaining pieces. In order to identify whether a piece has an approximately rectangular shape, we calculate its minimal enclosing rectangle by matching each edge of the piece with one of the edges of the rectangle, and if the usage, defined as U = Area(Piece)/Area(Rectangle), is greater than a given threshold µ, the piece is included in the first set. Once all the pieces are classified, we sort both sets by area and begin with the insertion of the first piece from the first group.
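The shape test above can be sketched as follows. This is a minimal illustration under our own assumptions: for a convex piece, a smallest enclosing rectangle always has one side collinear with some edge of the piece, so it suffices to try each edge orientation. All function names are ours.

```python
from math import atan2, cos, sin

def polygon_area(pts):
    # Shoelace formula; pts are vertices in order.
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2

def min_enclosing_rect_area(pts):
    """Smallest enclosing rectangle with one side parallel to some edge
    of the polygon (sufficient for convex pieces)."""
    best = float("inf")
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        t = atan2(y2 - y1, x2 - x1)
        c, s = cos(-t), sin(-t)          # rotate the edge onto the x-axis
        rot = [(x * c - y * s, x * s + y * c) for x, y in pts]
        xs = [p[0] for p in rot]
        ys = [p[1] for p in rot]
        best = min(best, (max(xs) - min(xs)) * (max(ys) - min(ys)))
    return best

def is_rectangle_like(pts, mu=0.9):
    """Usage U = Area(Piece) / Area(Rectangle); rectangle-like if U > mu."""
    return polygon_area(pts) / min_enclosing_rect_area(pts) > mu
```

A unit square gives U = 1 and is classified as rectangle-like, while a right triangle gives U = 0.5 and is not (for µ = 0.9).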
We solve many MIP problems throughout the algorithm. The computational effort of the constructive algorithm depends greatly on the number of rotations tried for each piece, and the computational time doubles when we consider the reflection. The benefit of using the reflection is shown in Section 9.10.
The first time the insertion of a new piece into a given bin is tried, we use as an upper bound the value of the objective function when Lc and Wc are substituted for L and W. However, if for a given rotation and reflection the MIP model is feasible, it provides a valid solution whose value can be used as an upper bound for the subsequent insertions of the same piece at the remaining rotation angles. That is, once we have found a good insertion of a given piece, we use its objective function value as an upper bound for the remaining polygons obtained by rotating and reflecting that piece.
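The bound-reuse idea can be sketched as a simple loop. This is an illustration only: `solve_mip` is a hypothetical wrapper around a MIP solver that accepts an upper-bound cutoff and returns the objective value of a feasible placement, or None when the model is infeasible under that cutoff.

```python
def insert_with_cutoff(candidate_polygons, solve_mip, initial_bound):
    """Try each rotation/reflection of a piece, reusing the best objective
    value found so far as an upper-bound cutoff for the next MIP solve."""
    best_value, best_poly = initial_bound, None
    for poly in candidate_polygons:
        # The incumbent prunes every later solve: any placement that cannot
        # improve on the best value found so far is rejected by the solver.
        value = solve_mip(poly, cutoff=best_value)
        if value is not None and value < best_value:
            best_value, best_poly = value, poly
    return best_poly, best_value
```
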
Once all the pieces are placed into bins, the least occupied bin is rebuilt with the objective functions FO1 and FO2 in order to find the cut, either horizontal or vertical, which produces the largest unused rectangle. This part is not considered as waste (see Section 9.3).
Since the guillotine cut inequalities reduce the feasible zone for placing the new piece, we
now propose to try first the insertion without taking into account the guillotine cut constraints
(inequalities 9.10 in Section 9.4). It may then be that the position of the new piece does not
respect the previous guillotine cut structure. In that case, we need to identify a new guillotine
cut structure if there is one. A procedure for doing that is presented in this section.
Let i ∈ P be the piece which is going to be inserted. We denote by MIP1 the MIP model used in the constructive algorithm defined in Section 9.4 without taking into account inequalities (9.10). The complete model is denoted by MIP2 . Then, if MIP1 is infeasible, MIP2 is also infeasible.
When MIP1 is feasible we obtain a position of the pieces with no overlap, but we do not know whether it could be obtained by guillotine cuts. We use a simple algorithm to find a guillotine cut structure. Each edge of every piece is considered as a possible first guillotine cut. If an edge can be used as the first guillotine cut, because it does not divide any piece and there are pieces on both sides, then the bin is divided into two parts and the same procedure is applied to each part until all the pieces are separated. If in any part of the bin there are three or more pieces for which none of the edges of these pieces can be used as a guillotine cut, we consider that the given solution cannot be obtained by guillotine cuts. The algorithm is called FGCS (Finding a Guillotine Cut Structure).
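The recursive structure of FGCS can be sketched as follows. This is a simplified illustration under our own assumptions, not the thesis procedure: pieces are axis-aligned rectangles (x1, y1, x2, y2), so every candidate cut is a vertical or horizontal line through a piece edge; the thesis version uses the edges of convex polygons.

```python
def fgcs(pieces):
    """Return True if the rectangle layout admits a guillotine cut structure."""
    if len(pieces) <= 1:
        return True
    # Candidate cuts: every piece edge coordinate, on both axes.
    for axis in (0, 1):                      # 0: vertical cuts, 1: horizontal cuts
        coords = {p[axis] for p in pieces} | {p[axis + 2] for p in pieces}
        for c in coords:
            left = [p for p in pieces if p[axis + 2] <= c]
            right = [p for p in pieces if p[axis] >= c]
            # A valid first cut splits no piece and leaves pieces on both sides.
            if left and right and len(left) + len(right) == len(pieces):
                if fgcs(left) and fgcs(right):
                    return True
    return False
```

A side-by-side pair is guillotinable; the classic five-piece pinwheel layout is not, since every candidate line splits some piece.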
Figure 9.4: Solutions obtained by a constructive algorithm and a constructive algorithm with two
phases.
Then, when MIP1 is solved and gives a feasible solution, we call FGCS to check whether there is a feasible guillotine cut structure. If FGCS fails, MIP2 is built and solved. The new guillotine cut structure found is used for the next insertion, building a new MIP model.
In the case that MIP1 is infeasible, we try the next rotation of the given piece and, when there are no rotations left, we consider the next piece to be inserted, as in the constructive algorithm (Section 9.7). Then, only in the case that MIP1 is feasible and FGCS fails to find a guillotine cut structure is MIP2 solved.
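The two-phase control flow reduces to a short decision chain. This sketch uses hypothetical callables (`solve_mip1`, `solve_mip2`, `find_guillotine_structure`) standing in for the relaxed model, the full model and FGCS.

```python
def two_phase_insert(solve_mip1, solve_mip2, find_guillotine_structure):
    """Two-phase insertion sketch: try the relaxed model MIP1 first (no
    guillotine-cut inequalities); solve the full model MIP2 only when a
    placement exists but no guillotine structure can be found for it."""
    placement = solve_mip1()                 # MIP1: overlap constraints only
    if placement is None:
        return None                          # MIP1 infeasible => MIP2 infeasible
    structure = find_guillotine_structure(placement)   # the FGCS step
    if structure is not None:
        return placement, structure          # cheap path: MIP2 never built
    return solve_mip2()                      # fall back to the full model
```
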
This algorithm with two phases is more flexible and can produce better solutions. Figure
9.4 shows a simple example with rectangular pieces, applying both constructive algorithms. On
the left-hand side we can see that with the initial algorithm the guillotine cuts separating pieces
p1 , p2 and p3 are too restrictive and it is impossible to place any other rectangle, while on the
right-hand side piece p4 is placed and then a guillotine cut structure is easily found.
Figure 9.5: A bin containing pieces p1 , . . . , p8 ; the containment polygon of the worst placed piece, p6 , appears in the top-right corner.
When a bin is closed, that is, when the constructive algorithm cannot place any more pieces into it, and if its usage is lower than a given threshold κ, we identify the piece which produces the most waste, and either the rotation of the piece or even the piece itself is changed. We propose a new criterion to assess the quality of the placement of each piece in a given bin and a new criterion to compare the quality of two different layouts of the bin with the same pieces.
Let nb be the number of pieces already placed into bin b. The guillotine cuts divide the bin into nb containment polygons. Each of these polygons contains exactly one piece, and the waste of each piece within its containment polygon can be calculated. We then consider the piece which produces the most waste, relative to its containment polygon, as the worst placed piece. Figure 9.5 shows the containment polygon of the worst placed piece, p6 , in the top-right corner of the bin. In this example, the usage of the piece in the containment polygon is lower than 0.5.
Once the worst placed piece is identified, the bin is rebuilt in the following way. The bin is
emptied, the worst piece is removed from the list and the other pieces are inserted again with
their same rotations and reflections. Once all the previous pieces are placed, we try to insert
the worst piece considering 10 different rotations for each reflected polygon. These rotations
are obtained by using a modified version of algorithm GR, in which the edges of the bin are
not considered and only the matchings with the previously placed pieces are computed. If we
succeed, the remaining pieces in the problem are tested for insertion.
This procedure is applied twice and we accept a new construction of the given bin if the
waste has been reduced. In this case we repeat the procedure until no improvements are found
in two iterations.
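The two criteria above fit in a few lines. This is an illustrative sketch, not the thesis code: `area` is a polygon-area callable, `rebuild` stands for the bin-reconstruction step just described, and `waste_of` measures the waste of a bin state.

```python
def worst_placed_piece(pieces, containment_polygons, area):
    """The worst placed piece is the one wasting the most area inside its
    containment polygon."""
    return max(pieces,
               key=lambda p: area(containment_polygons[p]) - area(p))

def improve_bin(rebuild, waste_of, bin_state):
    """Accept a rebuild only when it lowers the waste; stop after two
    consecutive iterations without improvement."""
    fails = 0
    while fails < 2:
        candidate = rebuild(bin_state)
        if candidate is not None and waste_of(candidate) < waste_of(bin_state):
            bin_state, fails = candidate, 0   # improvement: keep it, reset counter
        else:
            fails += 1                        # no improvement this iteration
    return bin_state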
9.10 Computational experiments
In this section we study several strategies in order to find the best constructive algorithm. All the different versions follow the structure described in Algorithm 4.
We have used the test data presented by Bennell et al. [11]. They consider eight instances,
four provided by a company in glass cutting for conservatories and another four generated
using properties of the industrial data. The number of pieces ranges between 40 and 149. The
instance name is coded by a letter and a number: the letter can be J or H depending on whether
the instance is provided by a company (J) or is generated (H); the number represents the total
number of pieces to be packed into the bins.
The first constructive algorithm that we study, CA1, uses as the initial permutation the one given by sorting the pieces by non-increasing area. The strategy for deciding the rotations is 3Rx3R, which considers 3 rotations of both polygons, original and reflected. The guillotine cut structure is AGC.
CA2: As CA1, but initially the pieces are sorted randomly (see Section 9.7).
CA3: As CA1, but initially the pieces are sorted by shape (see Section 9.7).
CA4: As CA1, but the objective function is FO1 (see Section 9.4).
CA5: As CA1, but the objective function is FO2 (see Section 9.4).
CA6: As CA1, but the strategy for obtaining the rotation of the pieces is 6R (see Section 9.6).
CA7: As CA1, but the strategy for obtaining the rotation of the pieces is 3R+1x3R+1 (see
Section 9.6).
CA8: As CA1, but the strategy for obtaining the rotation of the pieces is 3R+3x3R+3 (see
Section 9.6).
CA9: As CA1, but the strategy for obtaining the rotation of the pieces is 5R+5x5R+5 (see
Section 9.6).
CA10: As CA1, but the strategy for obtaining the rotation of the pieces is E30 (see Section 9.6).
CA11: As CA1, but the strategy for obtaining the rotation of the pieces is E10 (see Section 9.6).
CA12: As CA1, but the strategy used for the guillotine cuts is IGC (see Section 9.5).
Table 9.1 shows the total number of bins used to pack all the pieces using these versions of
the constructive algorithm. Tables 9.2 and 9.3 show, respectively, the fractional number of bins
used and the usage of the bins, while in Table 9.4 we can see the computational time in seconds.
The comparison between CA1, CA2 and CA3 in Table 9.1 shows that the best sorting crite-
rion is non-increasing area (CA1). We can see that CA2 is always worse than CA1 and CA3 is
Table 9.1: Number of bins used (N)
Instances CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8 CA9 CA10 CA11 CA12
J40 8 9 8 8 8 8 8 8 8 8 8 8
J50 10 11 10 10 10 10 10 10 10 10 10 10
J60 11 12 11 11 11 11 11 11 11 11 11 11
J70 12 14 12 12 12 13 12 12 12 12 12 12
H80 10 11 10 10 10 10 10 10 10 10 10 10
H100 16 18 17 16 16 17 16 16 16 16 16 16
H120 16 18 17 16 17 17 16 16 16 17 16 17
H149 22 25 23 23 23 23 22 22 22 22 22 22
Table 9.2: Fractional number of bins used
Instances CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8 CA9 CA10 CA11 CA12
J40 7.40 8.42 7.38 7.45 7.43 7.70 7.25 7.21 7.21 7.33 7.25 7.52
J50 9.27 10.51 9.37 9.24 9.31 9.53 9.31 9.16 9.16 9.36 9.25 9.38
J60 10.35 11.67 10.54 10.52 10.40 10.68 10.35 10.21 10.22 10.36 10.28 10.41
J70 11.63 13.60 11.85 11.80 11.78 12.18 11.57 11.66 11.62 11.79 11.74 11.79
H80 9.46 10.35 9.47 9.48 9.46 9.34 9.40 9.30 9.35 9.43 9.33 9.55
H100 15.56 17.48 16.12 15.87 15.82 16.16 15.62 15.43 15.47 15.94 15.54 15.75
H120 15.75 17.26 16.17 15.65 16.24 16.39 15.69 15.65 15.74 16.14 15.95 16.21
H149 21.83 24.53 22.39 22.09 22.24 22.09 21.88 21.72 21.77 21.87 21.75 21.83
worse in the last three instances. Furthermore, Table 9.2 shows that CA1 obtains better results
than CA3 in the first five instances with the exception of instance J40, in which CA3 is slightly
better.
In order to decide which objective function produces better results, we compare CA1, CA4 and CA5. Table 9.1 shows that CA1 is slightly better than CA4 and CA5. Only on instances J50 and H120 is the fractional number of bins used lower with CA4. It seems that CA4 produces better results than CA5 because the length of the bin is greater than the width in all the instances. However, the weighted objective function FO0 works better than both FO1 and FO2.
The advantages of using reflection can be seen by comparing CA1 and CA6. Each insertion tries 6 different shapes of a piece: CA1 considers the best three rotations of both the original and the reflected polygon, while CA6 does not take the reflection into account and uses six rotations of the original polygon. We can see that the results are clearly better if we consider the reflected polygons.
Algorithm CA1 considers 6 different polygons of the piece which is going to be inserted. This means that 6 MIP models are solved to optimality in order to decide the relative position between the newly inserted piece and the pieces and guillotine cuts already placed. Algorithms CA7, CA8, CA9, CA10 and CA11 consider more rotations of both polygons, original and reflected, of a given piece. Accordingly, Table 9.4 shows that the computational time increases, CA11 being the slowest algorithm (note that at each insertion of CA11, 72 MIPs are solved to optimality). Table 9.1 shows that all these algorithms produce results with the same number of bins, with the exception of CA10, which obtains a worse result on instance H120. The best results are given by CA8, which produces the best results for 6 of the 8 instances. However, the computational time of CA8 increases considerably in comparison with CA1.
Table 9.3: Usage (U)
Instances CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8 CA9 CA10 CA11 CA12
J40 0.82 0.72 0.82 0.82 0.82 0.79 0.84 0.84 0.84 0.83 0.84 0.81
J50 0.82 0.73 0.82 0.83 0.82 0.80 0.82 0.84 0.84 0.82 0.83 0.81
J60 0.84 0.74 0.82 0.82 0.83 0.81 0.84 0.85 0.85 0.84 0.84 0.83
J70 0.85 0.73 0.83 0.84 0.84 0.81 0.85 0.85 0.85 0.84 0.84 0.84
H80 0.86 0.79 0.86 0.86 0.86 0.87 0.87 0.88 0.87 0.87 0.87 0.85
H100 0.86 0.77 0.83 0.84 0.85 0.83 0.86 0.87 0.87 0.84 0.86 0.85
H120 0.87 0.80 0.85 0.88 0.85 0.84 0.88 0.88 0.87 0.85 0.86 0.85
H149 0.88 0.78 0.86 0.87 0.86 0.87 0.88 0.89 0.88 0.88 0.88 0.88
Table 9.4: Computational time in seconds
Instances CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8 CA9 CA10 CA11 CA12
J40 14 24 15 14 16 41 31 83 105 58 177 15
J50 18 36 22 20 19 24 52 89 108 95 237 22
J60 32 50 45 73 36 65 96 204 180 162 440 39
J70 62 84 121 93 129 44 142 334 305 281 609 55
H80 155 231 131 112 158 117 354 610 545 449 1257 99
H100 124 143 97 223 177 92 294 582 587 720 1286 108
H120 326 387 149 288 186 158 771 1198 1424 782 2769 194
H149 624 635 235 225 208 193 1042 1529 1525 1307 3369 250
Finally, since CA1 works slightly better than CA12, it seems that the AGC structure produces better solutions than the IGC structure.
Comparison of the constructive algorithms with two phases and the improvement procedure
Table 9.5 shows the comparison between the original CA1 and the CA1 configuration using the
two-phase constructive algorithm. The computational time remains similar and the quality of
the solutions is slightly improved.
On the other hand, when the improvement procedure presented in Section 9.9 is embedded
into the constructive algorithm CA1 with two phases (CA1M), the quality of the solutions is
even better. The number of bins in instances J40, J50 and J60 is reduced and in instances H100
and H120 the fractional number of bins is reduced. That is, on 5 instances this improvement
procedure obtains a better solution, though the computational times increase.
Table 9.5: Comparing the initial and the two-phase constructive algorithms
In Table 9.6 we can see that the number of bins used in every feasible solution is never more than two bins away from the simple one-dimensional lower bound. That gives an idea of the difficulty of reducing the number of required bins any further.
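One natural reading of this simple one-dimensional bound is the area-based (continuous) bound: no solution can use fewer bins than the total piece area divided by the bin area, rounded up. The sketch below illustrates that reading; the function name is ours and the exact bound used in Table 9.6 may differ.

```python
from math import ceil

def area_lower_bound(piece_areas, bin_length, bin_width):
    """Area-based lower bound on the number of bins: every feasible
    solution needs at least ceil(total piece area / bin area) bins."""
    return ceil(sum(piece_areas) / (bin_length * bin_width))
```
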
dynamic weighting scheme. The best two algorithms using the one-step approach are the combinations 1S-0.94-5 and 1S-0.97-3.
The two-step algorithm (2S) also proposed by Bennell et al. [11] works slightly worse than
the one-step algorithms, but there is one instance (see J70 in Table 9.7) where the two-step
algorithm found a better solution with fewer bins than all the one-step algorithms.
Table 9.7 shows the computational results obtained by the two-step algorithm 2S and the one-step algorithms 1S-0.94-5 and 1S-0.97-3. The last two columns correspond to CA1 and CA1M (CA1 with two phases and the improvement procedure). Algorithm CA1M produces the best known results for five of the eight instances (the best known solution of instance J70 is given by CA1 with two phases in Table 9.5). The behavior of algorithm CA1 is also interesting because on average it works better than the algorithms proposed by Bennell et al. [11], and it is faster. In fact, we can see that CA1 reduces the number of bins used by 1S-0.94-5 and 1S-0.97-3 on 4 instances. Algorithm 1S-0.97-3 produces the best result for instance J50.
Comparison with the state of the art algorithms in rectangular bin packing
problems
The constructive algorithms proposed in this paper deal with irregular pieces and use a mathematical model which is hard to solve to optimality at each step. Nevertheless, they can be applied to standard bin packing problems with rectangular pieces in order to assess their performance on this problem.
For the bin packing problem with rectangular pieces, there is a standard benchmark set composed of 500 instances divided into 10 classes. The first 6 classes were proposed by Berkey and Wang [15] and the last 4 classes by Lodi et al. [42]. We consider two rotations for the insertion of each piece (0° and 90°) and therefore we are solving the 2DBP|R|G problem.
Table 9.8 compares the total number of bins used by the constructive algorithm CA1 with fast heuristic algorithms: the Knapsack-Problem-based heuristics of Lodi et al. [42] (KP), the Guillotine Bottom-Left heuristic of Polyakovsky and M'Hallah [53] (GBL) and the Constructive Heuristic of Charalambous and Fleszar [19] (CH).
We can observe that the constructive algorithm CA1 is competitive, working better than GBL, slightly worse than KP and clearly worse than CH. Algorithm CA1 with two phases produces better results than any other constructive algorithm. However, the state of the art procedure for rectangular bin packing problems with guillotine cuts is the CHBP algorithm proposed by Charalambous and Fleszar [19], in which their constructive algorithm (CH) is followed by a post-optimization phase. CHBP requires only 7064 bins.
Table 9.7: Comparison with the algorithms proposed by Bennell et al. [11]
Table 9.9 shows the total number of bins used by algorithms CH, CA1, CA1 with two phases
and CHBP for each class of instances. The main differences between algorithms CH and CA1
appear in classes 7 and 8, the behavior in the rest of the classes being similar. The differences
disappear if we consider algorithm CA1 with two phases, which seems especially well fitted for
these types of instances. Note that CA1 and the other constructive algorithms presented in this
paper are carefully designed to decide the position of the pieces in a given bin and do not focus
on the assignment of pieces to bins. Nevertheless, they work well on rectangular bin packing
problems.
9.11 Conclusions
A new approach to ensuring a guillotine cut structure is proposed by using a mathematical model. To our knowledge, the literature on guillotine cut problems does not contain any mathematical model which handles guillotine cuts in this way.
Table 9.8: Total number of bins in the 10 classes.
Class 1 2 3 4 5 6 7 8 9 10
CH 997 127 705 126 894 115 792 792 2131 512
CA1 994 130 718 127 896 116 844 843 2124 511
CA1(2 phases) 984 128 695 126 878 116 792 795 2124 508
CHBP 975 124 687 125 872 113 770 776 2119 503
The rotations and reflections of the pieces make the mathematical model harder to define. A new algorithm (GR) for deciding the rotations is proposed, and it is shown to produce good results. In order to deal with reflection, we double the computational effort when pieces are not symmetric, trying the insertion of each piece for each reflected polygon.
The proposed constructive algorithm obtains high quality results on the bin packing problem with guillotine cuts and irregular convex pieces, improving the best known solutions on 6 of 8 instances, and it is competitive on the rectangular bin packing problem with guillotine cuts.
Algorithm 4 Constructive algorithm structure
Require: P, L, W;
Set P0 (initial permutation);
Set nr (number of rotations);
Set OF (objective function);
Set guillotine cut structure;
B = ∅, cont = 0;
while P0 ≠ ∅ do
    Create a new bin bcont .
    for i = 0, . . . , |P0 | − 1 do
        Set bestOFvalue = ωL + (1 − ω)W (ω is given by OF);
        IN = false;
        P∗ is the set of all polygons obtained by the different rotations of p0i = P0 [i];
        if p0i has no symmetries and reflection is allowed then
            P∗m is the set of all polygons obtained by the different rotations of m(p0i ) (reflected polygon);
        end if
        for each polygon p ∈ P∗ ∪ P∗m do
            Add p to the MIP model;
            Solve the MIP model using bestOFvalue as an upper bound;
            if the model is feasible then
                IN = true;
                Update the best rotation (and reflection) of p0i ;
                bestOFvalue = current objective function value;
            end if
            Remove p from the model.
        end for
        if IN = true then
            Add p0i to bcont ;
            Insert the piece into the MIP model with the best rotation (and reflection);
            Identify the new guillotine cut and update the guillotine cut constraints of the model;
            P0 = P0 \ {p0i };
        end if
    end for
    B = B ∪ {bcont };
    cont = cont + 1;
end while
Sort the bins in B by non-decreasing waste;
Rebuild the last bin of B with objective functions FO1 and FO2 and choose the best configuration;
return B;
Chapter 10
This thesis can be divided into three parts. The first part is formed by Chapters 2, 3, 4 and 5.
The second part includes Chapters 6, 7 and 8. Finally, the third part is included in Chapter 9.
First of all, we have developed an exact Branch & Bound algorithm for the two-dimensional irregular strip packing problem (Nesting Problem) in which pieces have a fixed rotation. This algorithm improves on the algorithm proposed by Fischetti and Luzzi [27] and is able to solve instances with up to 16 pieces to optimality. In order to add a cutting process to the Branch & Bound we propose several new kinds of valid inequalities. The computational results show that the separation algorithms require too much time, making the cutting process inefficient.
Secondly, we have designed an Iterated Greedy algorithm to solve the two-dimensional irregular strip packing problem where pieces can be rotated at several angles. This algorithm can be classified as a matheuristic because many MIP problems are solved to optimality. The computational results show that this algorithm obtains good results and is competitive with the state of the art procedures.
Thirdly, we have proposed an efficient constructive algorithm for the two-dimensional irregular bin packing problem with guillotine cuts which appears in the glass cutting industry. This algorithm outperforms the previous algorithms proposed by Bennell et al. [11] and is even competitive on the rectangular bin packing problem with guillotine cuts. A new and efficient approach is used to guarantee the guillotine cut structure.
Chapter 2 contains the MIP formulations. We have improved the Gomes and Oliveira model proposed in [32] by modifying the Fischetti and Luzzi [27] model. We propose two formulations based on defining the Fischetti and Luzzi slices in a horizontal way. The computational results show that the HS2 model works better than all the other formulations.
The exact procedure is based on a Branch & Bound algorithm. Chapter 3 studies different branching strategies. We have improved the Fischetti and Luzzi strategy, and the computational results show that the different branching strategies obtain a wide range of quality results.
In order to add a cutting process to the Branch & Bound algorithm, we have found different valid inequalities, which are presented in Chapter 4. Some of the inequalities combine real and binary variables (X-Y inequalities and Impenetrability constraints), and others try to find a set of binary variables which produces an infeasible solution if all the variables take the value 1 (cliques, covers, LU-covers and Transitivity inequalities). In Chapter 5 we propose separation algorithms for several of these inequalities. The computational results show that we have failed to design good separation algorithms, because the computational effort spent on separation is too high and it is preferable to branch rather than add inequalities.
In Chapter 6 we have designed Constructive Algorithms, based on the previous models, which insert the pieces one at a time. A new and interesting idea for the insertion of one piece is trunk insertion, which allows certain movements of the pieces already placed in order to place the new piece better. Despite the high computational effort required by these algorithms, the computational results show that they produce high quality results.
already placed. As the computational results show, this algorithm decides good rotations to be tested. Furthermore, a new approach to guaranteeing the guillotine cut structure by using linear inequalities is presented. We have also designed a strategy to find a guillotine cut structure for a given solution and a criterion to identify pieces whose placement produces too much waste. Taking these two elements into account, we have designed an improvement procedure, embedded into the constructive algorithm, which produces good results. The computational results show that this algorithm produces the best results and is competitive with algorithms designed specifically for the rectangular bin packing problem with guillotine cuts.
As future work, it could be interesting to combine the algorithm which decides the rotations
of the pieces presented in Chapter 9 with the Iterated Greedy algorithm presented in Chapter 8,
providing an algorithm for general Nesting Problems, allowing pieces to be freely rotated.
Since we have failed to design an efficient cutting process in the Branch & Bound algo-
rithm, it could be interesting to improve the separation algorithms presented in Chapter 5 and
to try to find new valid inequalities. Furthermore, if the formulation were improved, then the
constructive algorithms presented in Chapter 6 and the heuristic algorithm presented in Chapter
8 might also improve their computational results.
Bibliography
[1] A LBANO , A., AND S APUPPO , G. Optimal allocation of two-dimensional irregular shapes
using heuristic search methods. IEEE Transactions on Systems, Man and Cybernetics 10
(1980), 242–248.
[2] A LVAREZ -VALDES , R., PARRE ÑO , F., AND TAMARIT, J. A branch and bound algorithm
for the strip packing problem. OR Spectrum 31 (2009), 431–459.
[3] A LVES , C., B R ÁS , P., DE C ARVALHO , J. V., AND P INTO , T. New constructive algo-
rithms for leather nesting in the automotive industry. Computers & Operations Research
39(7) (2012), 1487–1505.
[4] A LVES , J., F ERREIRA , J., A LBUQUERQUE , C., O LIVEIRA , J., F ERREIRA , J., AND
M ATOS , J. A flexible custom computing machine for nesting problems. Proceedings of
XIII DCIS.
[5] A RT, J. An approach to the two-dimensional, irregular cutting stock problem. Tech. Rep.
36.Y08, IBM Cambridge Scientific Centre, 1966.
[6] BABU , A., AND BABU , N. A generic approach for nesting of 2-d parts in 2-d sheets
using genetic and heuristic algorithms. Computer-Aided Design 33 (2001), 879–891.
[7] BALDACCI , R., B OSCHETTI , M., G ANOVELLI , M., AND M ANIEZZO , V. Algorithms
for nesting with defects. Discrete Applied Mathematics doi:10.1016/j.dam.2012.03.026.
[8] B ENNELL , J. Incorporating problem specific knowledge into a local search framework
for the irregular shape packing problem. PhD thesis, University of Wales, UK, 1998.
[9] B ENNELL , J., AND D OWSLAND , K. A tabu search thresholding implementation for the
irregular stock cutting problem. International Journal of Production Research 37 (1999),
4259–4275.
[10] B ENNELL , J., AND D OWSLAND , K. Hybridising tabu search with optimisation tech-
niques for irregular stock cutting. Management Science 47 (2001), 1160–1172.
[11] B ENNELL , J., H AN , W., Z HAO , X., AND S ONG , X. Construction heuristics for two
dimensional irregular shape bin packing with guillotine constraints. Technical Report.
University of Southampton. http://eprints.soton.ac.uk/208137.
[12] B ENNELL , J., AND O LIVEIRA , J. The geometry of nesting problems: A tutorial. Euro-
pean Journal of Operational Research 184 (2008), 397–415.
[13] B ENNELL , J., AND O LIVEIRA , J. A tutorial in irregular shape packing problems. Journal
of the Operational Research Society 60 (2009), S93–S105.
[14] B ENNELL , J., S TOYAN , Y., S CHEITHAUER , G., G IL , N., AND ROMANOVA , T. Tools
of mathematical modeling of arbitrary object packing problems. Annals of Operations
Research 179 (2010), 343–368.
[15] B ERKEY, J., AND WANG , P. Two-dimensional finite bin-packing algorithms. Journal of
Operational Research Society 38 (1987), 423–429.
[16] B LAZEWICZ , J., H AWRYLUK , P., AND WALKOWIAK , R. Using a tabu search approach
for solving the two-dimensional irregular cutting problem. Annals of Operations Research
41 (1993), 313–325.
[17] B OUNSAYTHIP, C., AND M AOUCHE , S. Irregular shape nesting and placing with evolu-
tionary approach. In Proceedings of the IEEE International Conference On Systems, Man
and Cybernetics (1997), vol. 4, pp. 3425–3430.
[18] B URKE , E., H ELLIER , R., K ENDALL , G., AND W HITWELL , G. A new bottom-left-
fill heuristic algorithm for the two-dimensional irregular cutting problem. Operations
Research 54 (2006), 587–601.
[19] C HARALAMBOUS , C., AND F LESZAR , K. A constructive bin-oriented heuristic for the two-dimensional bin packing problem with guillotine cuts. Computers & Operations Research 38 (2011), 1443–1451.
[20] C OSTA , M., G OMES , A., AND O LIVEIRA , J. Heuristic approaches to large-scale periodic
packing of irregular shapes on a rectangular sheet. European Journal of Operational
Research 192 (2009), 29–40.
[21] C RISPIN , A., C LAY, P., TAYLOR , G., AND R EEDMAN , D. Genetic algorithm coding
methods for leather nesting. Applied Intelligence 5238(1) (2005), 9–20.
[22] C UNINGHAME -G REEN , R. Geometry, shoemaking and the milk tray problem. New
Scientist 1677 (1989), 50–53.
[23] D IGHE , R., AND JAKIELA , M. Solving pattern nesting problems with genetic algorithms
employing task decomposition and contact detection. Evolutionary Computation 3 (1996),
239–266.
[24] D OWSLAND , K., AND D OWSLAND , W. An algorithm for polygon placement using a
bottom-left strategy. European Journal of Operational Research 141 (2002), 371–381.
[25] D OWSLAND , K., D OWSLAND , W., AND B ENNELL , J. Jostling for position: Local
improvement for irregular cutting patterns. Journal of the Operational Research Society
49 (1998), 647–658.
[26] E GEBLAD , J., N IELSEN , B., AND O DGAARD , A. Fast neighborhood search for polygon
placement using a bottom-left strategy. European Journal of Operational Research 183
(2007), 1249–1266.
[27] F ISCHETTI , M., AND L UZZI , I. Mixed-integer programming models for nesting pro-
blems. J Heuristics 15 (2008), 201–226.
[28] F OWLER , R., AND TANIMOTO , S. Optimal packing and covering in the plane are np-
complete. Inform Process Lett 12(3) (1981), 133–137.
[29] F UJITA , K., A KAGI , S., AND H IROKAWA , N. Hybrid approach for optimal nesting
using a genetic algorithm and a local minimisation algorithm. Proceedings of the 19th
Annual ASME Design Automation Conference 65 (1993), 477–484.
[30] G HOSH , P. An algebra of polygons through the notion of negative shapes. CVGIP: Image
Understanding 54 (1) (1991), 119–144.
[31] G OMES , A., AND O LIVEIRA , J. A 2-exchange heuristic for nesting problems. European
Journal of Operational Research 171 (2002), 811–829.
[32] G OMES , A., AND O LIVEIRA , J. Solving irregular strip packing problems by hybridising
simulated annealing and linear programming. European Journal of Operational Research
171 (2006), 811–829.
[33] H EISTERMANN , J., AND L ENGAUER , T. The nesting problem in the leather manufactu-
ring industry. Annals of Operations Research 57 (1995), 147–173.
[34] H OPPER , E. Two-dimensional packing utilising evolutionary algorithms and other meta-
heuristic methods. PhD thesis, University of Wales, Cardiff, UK, 2000.
[35] I MAMICHI , T., YAGIURA , M., AND NAGAMOCHI , H. An iterated local search algo-
rithm based on nonlinear programming for the irregular strip packing problem. Discrete
Optimization 6 (2009), 346–361.
[36] JACOBS , L., AND B RUSCO , M. A local search heuristic for large set-covering problems.
Naval Research Logistics Quarterly 42 (7) (1995), 1129–1140.
[37] JAKOBS , S. On genetic algorithms for the packing of polygons. European Journal of Operational Research 88 (1996), 165–181.
[38] KONOPASEK , M. Mathematical treatments of some apparel marking and cutting pro-
blems. Tech. Rep. 99-26-90857-10, Department of Commerce Report, U.S, 1985.
[39] LEE, W., MA, H., AND CHENG, B. A heuristic for nesting problems of irregular shapes. Computer-Aided Design 40 (2008), 625–633.
[40] LEUNG, S., LIN, Y., AND ZHANG, D. Extended local search algorithm based on nonlinear programming for two-dimensional irregular strip packing problem. Computers & Operations Research 39 (2012), 678–686.
[41] LIU, X., AND YE, J. Heuristic algorithm based on the principle of minimum total potential energy (HAPE): a new algorithm for nesting problems. Journal of Zhejiang University-SCIENCE A (Applied Physics & Engineering) 12(11) (2011), 860–872.
[42] LODI, A., MARTELLO, S., AND VIGO, D. Heuristic and metaheuristic approaches for a class of two-dimensional bin packing problems. INFORMS Journal on Computing 11 (1999), 345–357.
[43] MAHADEVAN, A. Optimisation in computer aided pattern packing. PhD thesis, North Carolina State University, 1984.
[44] MARCHIORI, E., AND STEENBEEK, A. An evolutionary algorithm for large set covering problems with applications to airline crew scheduling. Lecture Notes in Computer Science 1803 (2000), 367–381.
[45] MARQUES, V., BISPO, C., AND SENTIEIRO, J. A system for the compaction of two-dimensional irregular shapes based on simulated annealing. In Proceedings of the 1991 International Conference on Industrial Electronics, Control and Instrumentation - IECON'91 (1991), vol. 99, pp. 1911–1916.
[46] MARTELLO, S., MONACI, M., AND VIGO, D. An exact approach to the strip packing problem. INFORMS Journal on Computing 15 (2003), 310–319.
[47] MASCARENHAS, W., AND BIRGIN, E. Using sentinels to detect intersections of convex and nonconvex polygons. Computational & Applied Mathematics 29 (2010), 247–267.
[48] MILENKOVIC, V., DANIELS, K., AND LI, Z. Automatic marker making. In Proceedings of the Third Canadian Conference on Computational Geometry. Simon Fraser University, Vancouver, BC, 1991, pp. 243–246.
[49] MILENKOVIC, V., AND LI, Z. Compaction and separation algorithms for non-convex polygons and their applications. European Journal of Operational Research 84(3) (1995), 539–561.
[50] OLIVEIRA, J., AND FERREIRA, J. Algorithms for nesting problems. In Applied Simulated Annealing, R. Vidal, Ed., vol. 396 of Lecture Notes in Economics and Mathematical Systems. Springer-Verlag, 1993, pp. 255–274.
[51] OLIVEIRA, J., GOMES, A., AND FERREIRA, J. TOPOS - A new constructive algorithm for nesting problems. OR Spectrum 22 (2000), 263–284.
[52] ÖSTERGÅRD, P. R. J. A fast algorithm for the maximum clique problem. Discrete Applied Mathematics 120 (2002), 197–207.
[55] RATANAPAN, K., AND DAGLI, C. An object-based evolutionary algorithm for solving irregular nesting problems. In Proceedings for Artificial Neural Networks in Engineering Conference (ANNIE'97) (1997), vol. 7, pp. 383–388.
[56] ROMANOVA, T., STOYAN, Y., AND PANKRATOV, A. Mathematical models and solution algorithm for nesting problem of arbitrary shaped objects. 8th Conference of the Special Interest Group on Cutting and Packing (ESICUP), Copenhagen, Denmark (2011).
[57] RUIZ, R., AND STÜTZLE, T. A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. European Journal of Operational Research 177 (2007), 2033–2049.
[58] SATO, A., MARTINS, T., AND TSUZUKI, M. An algorithm for the strip packing problem using collision free region and exact fitting placement. Computer-Aided Design 44 (2012), 766–777.
[59] SCHEITHAUER, G., STOYAN, Y., GIL, N., AND ROMANOVA, T. Phi-functions for circular segments. Tech. Rep. MATH-NM-7-2003, Technische Universität Dresden, Dresden, 2003.
[60] SEGENREICH, S., AND BRAGA, M. Optimal nesting of general plane figures: a Monte Carlo heuristical approach. Computers & Graphics 10 (1986), 229–237.
[61] SONG, X., AND BENNELL, J. A comprehensive and robust procedure for obtaining the no-fit polygon using Minkowski sums. Computers & Operations Research 35 (2008), 267–281.
[62] SONG, X., AND BENNELL, J. A beam search implementation for the irregular shape packing problem. Journal of Heuristics 16 (2010), 167–188.
[63] STOYAN, Y., NOVOZHILOVA, M., AND KARTASHOV, A. Mathematical model and method of searching for local extremum for the non-convex oriented polygons allocation problem. European Journal of Operational Research 92 (1996), 193–210.
[64] STOYAN, Y., AND PONOMARENKO, L. Minkowski sum and hodograph of the dense placement vector function. Tech. Rep. SER. A 10, SSR Academy of Science, 1977.
[65] STOYAN, Y., SCHEITHAUER, G., GIL, N., AND ROMANOVA, T. φ-functions for complex 2d-objects. 4OR 2 (2004), 69–84.
[66] STOYAN, Y., SCHEITHAUER, G., PANKRATOV, A., AND MAGDALINA, I. Packing of convex polytopes into a parallelepiped. Optimization 54(2) (2005), 215–235.
[67] STOYAN, Y., SCHEITHAUER, G., AND ROMANOVA, T. Mathematical modeling of interaction of primary geometric 3d objects. Cybernetics and Systems Analysis 41(3) (2005), 332–342.
[68] STOYAN, Y., TERNO, J., SCHEITHAUER, G., GIL, N., AND ROMANOVA, T. Phi-functions for primary 2d-objects. Studia Informatica Universalis 2(1) (2002), 1–32.
[69] UMETANI, S., YAGIURA, M., IMAHORI, S., IMAMICHI, T., NONOBE, K., AND IBARAKI, T. Solving the irregular strip packing problem via guided local search for overlap minimization. International Transactions in Operational Research 16 (2009), 661–683.
[70] WÄSCHER, G., HAUSSNER, H., AND SCHUMANN, H. An improved typology of cutting and packing problems. European Journal of Operational Research 183 (2007), 1109–1130.
[72] YUPINS, Z., AND CAIJUN, Y. A generic approach for leather nesting. In Fifth International Conference on Natural Computation (2009), vol. 5, pp. 303–307.