Cmpe536 L4
Outline
Variable neighbourhoods
Variable neighbourhood descent
Variable neighbourhood search
GRASP
Variable Neighborhood Search (VNS)
Based on the following central observations
A local minimum w.r.t. one neighborhood structure is not
necessarily locally minimal w.r.t. another neighborhood
structure
A global minimum is locally optimal w.r.t. all
neighborhood structures
For many problems, local minima with respect to one or
several neighborhoods are relatively close to each other
Variable Neighborhood Search
Basic principle: change the neighborhood during the
search
Variations:
Variable Neighborhood Descent
Basic Variable Neighborhood Search
Reduced Variable Neighborhood Search
Variable Neighborhood Decomposition Search
Variable Neighborhood Search
The first task is to generate the different neighborhood
structures
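Once the neighborhood structures are in place, Variable Neighborhood Descent simply cycles through them. Below is a minimal Python sketch under that assumption; the list neighborhoods of best-neighbor functions and the objective f are hypothetical placeholders, not part of the original slides.

def variable_neighborhood_descent(s, neighborhoods, f):
    # neighborhoods: hypothetical list of functions, each returning
    # the best neighbor of s under one neighborhood structure
    k = 0
    while k < len(neighborhoods):
        s_new = neighborhoods[k](s)   # best neighbor in the k-th neighborhood
        if f(s_new) < f(s):           # improvement: restart from the first neighborhood
            s, k = s_new, 0
        else:                         # no improvement: try the next neighborhood
            k += 1
    return s                          # locally optimal w.r.t. all neighborhoods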
Variations of Basic VNS
The order of the neighborhoods
Forward VNS: start with k=1 and increase
Backward VNS: start with k = kmax and decrease
Extended version:
Parameters kmin and kstep
Set k = kmin, and increase k by kstep if no improvement
Accepting worse solutions
With some probability
Skewed VNS: Accept if
f(s*) − α·d(s, s*) < f(s)
d(s, s*) measures the distance between the solutions
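A one-line Python sketch of this acceptance test, where s is the current solution, s* the candidate, and f, d, and alpha are assumed to be supplied by the caller:

def skewed_accept(s, s_star, f, d, alpha):
    # Accept s_star if its value, discounted by alpha times its
    # distance from the current solution s, beats f(s)
    return f(s_star) - alpha * d(s, s_star) < f(s)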
Final Notes on VNS
Other variations exist
Reduced VNS: same as Basic VNS, but no Local Search
procedure
Can be fast
Variable Neighborhood Decomposition Search
Fix some components of the solution, and perform Local
Search on the remaining ”free” components
Final Notes on VNS
ILS and VNS are based on different underlying
”philosophies”
ILS: Perturb and do Local Search
VNS: Exploit different neighborhoods
Guided Local Search
A general strategy, metaheuristic, used to guide a
neighborhood search
Tries to overcome local optima by ”removing” them:
Changes the ”topography” of the search space
Uses an extended move evaluation function
Original objective function + penalties
Focuses on promising parts of the search space
Features of a Solution
GLS assumes that we can find some features of
a solution that we can penalize
What is a ”feature”?
Something that is characteristic for the solution
Something that might be different in another
solution
Problem dependent
Examples:
TSP: whether city A follows immediately after city B
in the tour or not
Knapsack: whether item 5 is included or not
Features - Example: TSP
A solution (trip) is an ordered set of edges
An edge is a good choice for feature:
Is either in a solution or not
Feature cost = edge length
Let the set of all edges eij be features
[Figure: a tour visiting cities 1-7]
Extended Move Evaluation (1)
Until now we have only seen the use of the objective
function in move evaluations
We let f be the objective function
f(s) gives us the value of a solution s
We have always taken the best neighbor to be the
neighbor s for which f(s) has the best value
This will make the search stop when a local optimum
has been found
Extended Move Evaluation (2)
Let the set of features be denoted by
F = {1,…,G}
We have our indicator function:
Ii(s) = 1 if solution s has feature i
Ii(s) = 0 if solution s does not have feature i
We create a penalty vector p = [pi], i = 1…G
pi is the number of times feature i has been penalized
until now
The extended move evaluation function becomes
f*(s) = f(s) + λ · Σi=1…G pi · Ii(s)
Extended Move Evaluation (3)
The extended move evaluation function has two parts
The original objective function
A penalty term, which penalizes certain features of the
solution
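A minimal Python sketch of this extended evaluation, assuming an indicator function and a penalty vector as defined above (the parameter lam stands for the weight λ discussed later; all names are illustrative):

def extended_evaluation(s, f, features, indicator, penalties, lam):
    # f*(s) = f(s) + lambda * sum over features of p_i * I_i(s)
    penalty = sum(penalties[i] for i in features if indicator(s, i))
    return f(s) + lam * penalty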
Penalties
How to select which feature to penalize?
Define the utility of a feature i in solution s as
follows:
ui(s) = Ii(s) * ci / (1+pi)
Here, ci is the cost of the feature (in objective function)
and pi is the previous penalty
In a local optimum, s, increase the penalty for the
feature that has the highest utility value, ui(s)
NB: Penalties are only adjusted when the search
has reached a local optimum, and only for features
included in the local optimum
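A sketch of this penalty update at a local optimum, following the utility rule above; the cost function giving ci is a hypothetical placeholder:

def update_penalties(s, features, indicator, cost, penalties):
    # Utility of each feature present in the local optimum s:
    # u_i(s) = I_i(s) * c_i / (1 + p_i)
    utilities = {i: cost(i) / (1 + penalties[i])
                 for i in features if indicator(s, i)}
    if utilities:                                 # s has at least one feature
        best = max(utilities, key=utilities.get)  # highest-utility feature
        penalties[best] += 1                      # penalize only that feature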
Comments on GLS
Uses the extended move evaluation function when
deciding which neighbor is best
Could also use a ”First Improvement” strategy
Uses the normal objective function value to find the
best solution
if (f(current)<f(best) …
If all features are penalized equally, then f*(s)
describes the same ”landscape” as f(s)
How to Select Lambda
The parameter λ controls the influence of the penalty term relative to the original objective function; a suitable value is problem dependent
GLS - Example: TSP (1)
Features: edges
Cost of the features: edge length
The feature associated with e26 will be penalized in the solution on the right
In the next round of LS the move evaluation function is as before, f(s), except that a penalty is added if e26 is in the solution
[Figure: a tour over cities 1-7; penalty matrix over edges eij, all entries zero except p26 = 1]
GLS - Example: TSP (2)
After the next LS, e34 is penalized
After this, the move evaluation function adds penalties for both e26 and e34
[Figure: a tour over cities 1-7]
Applications of GLS
A fairly recent metaheuristic (evolved from another
method called GENET)
Some applications:
Workforce scheduling at BT (Tsang, Voudouris 97)
TSP (Voudouris, Tsang 98)
VRP (Kilby et al. 97)
Function optimization (Voudouris 98)
…
Possibilities and Extensions
Automatic regulation of penalties
Automatic choice of which features to penalize
[Plot: the F6 test function, F6(x, y) = 0.5 + (sin²(√(x² + y²)) − 0.5) / (1 + 0.001·(x² + y²))², shown along y = 0 for x ∈ [−100, 100]]
GRASP
Greedy Randomized Adaptive Search Procedures
A Metaheuristic that is based on Greedy Algorithms
A constructive approach
A multi-start approach (multiple initial solutions)
Includes (optionally) a local search to improve the
constructed solutions
Spelling out GRASP
Greedy: Select the best choice (or among the best
choices)
Randomized: Use some probabilistic selection to
prevent the same solution from being constructed every
time
Adaptive: Change the evaluation of choices after
making each decision
Search Procedure: It is a heuristic algorithm for
examining the solution space
Two Phases of GRASP
GRASP is an iterative process, in which each
iteration has two phases
Construction
Build a feasible solution (from scratch) in the same
way as using a Greedy Algorithm, but with some
randomization
Improvement
Improve the solution by using some Local Search
(Best/First Improvement)
The best overall solution is retained
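Putting the two phases together, a minimal GRASP loop might look as follows in Python (minimization assumed; the construct and local_search functions are placeholders for the problem-specific phases):

import math

def grasp(construct, local_search, f, n_iterations, alpha):
    best, best_val = None, math.inf
    for _ in range(n_iterations):
        s = construct(alpha)        # greedy randomized construction
        s = local_search(s)         # improvement phase
        if f(s) < best_val:         # retain the best overall solution
            best, best_val = s, f(s)
    return best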
GRASP can be divided in two phases:
Construction: a feasible solution is built greedily, with randomization
Local Search: from the solution obtained, a neighborhood is built and the search is then performed
Repeat for N iterations
References:
T. A. Feo and M. G. C. Resende. Greedy Randomized Adaptive Search Procedures. Journal of Global Optimization, no. 6, pp. 109-133, 1995.
M. G. C. Resende. Greedy Randomized Adaptive Search Procedures (GRASP). AT&T Labs Research Technical Report 98.41.1, Dec. 1998.
M. G. C. Resende and C. C. Ribeiro. Greedy Randomized Search Procedures. AT&T Labs Research Technical Report TD-53RSJY, version 2, Aug. 2002.
The Constructive Phase (1)
Construction Phase
If β is the best free element and Σ represents the set of free elements, the RCL is composed as:
RCL = {σ ∈ Σ : value(σ) ≥ α · value(β)}
Randomly choose an element from the RCL and update the solution
Construction Phase
After the element is added to the current solution, the cost of each free element is updated
The Constructive Phase (2)
Each step is both Greedy and Randomized
First, we build a Restricted Candidate List
The RCL contains the best elements that we can add
to the solution
Then we select randomly one of the elements in
the Restricted Candidate List
We then need to reevaluate the remaining
elements (their evaluation should change as a
result of the recent change in the partial solution),
and repeat
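A sketch of such a construction step in Python, assuming a maximization-type greedy criterion with nonnegative values; the value function is a hypothetical placeholder, and the RCL rule is the value-based one described on the next slide:

import random

def greedy_randomized_construction(elements, value, alpha):
    solution, free = [], list(elements)
    while free:
        best = max(value(e, solution) for e in free)
        # value-based RCL: candidates within a factor alpha of the best
        rcl = [e for e in free if value(e, solution) >= alpha * best]
        choice = random.choice(rcl)   # randomized selection from the RCL
        solution.append(choice)
        free.remove(choice)           # remaining values are re-evaluated next pass
    return solution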
The Restricted Candidate List (1)
Assume we have evaluated all the possible
elements that can be added to the solution
There are two ways of generating a restricted list
Based on rank
Based on value
In each case, we introduce a parameter α that
controls how large the RCL will be
Include the (1−α)% elements with highest rank
Include all elements that have a value within α% of
the best element
The Restricted Candidate List (2)
In general:
A small RCL leads to a small variance in the values of
the constructed solutions
A large RCL leads to worse average solution values, but
a larger variance
High values of α (α = 1) result in a purely greedy
construction
Low values of α (α = 0) result in a purely random
construction
The Restricted Candidate List (4)
The role of α is thus critical
Usually, a good choice will be to modify the value
of α during the search
Randomly
Based on results
The approach where α is adjusted based on
previous results is called ”Reactive GRASP”
The probability distribution of α changes based on
the performance of each value of α
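A sketch of the Reactive GRASP idea, assuming we track an average solution quality per α value (all names here are illustrative, not from the original slides):

import random

def pick_alpha(alphas, avg_quality):
    # Sample alpha with probability proportional to the average
    # quality observed when that value was used
    total = sum(avg_quality[a] for a in alphas)
    weights = [avg_quality[a] / total for a in alphas]
    return random.choices(alphas, weights=weights)[0]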
Local Search Phase
[Flowchart: Constructed Solution → Create Neighborhood → Select Best Solution → Compare with Existing Best → Improved Solution]
Reference:
F. Parreño, R. Alvarez-Valdes, J. F. Oliveira, and J. M. Tamarit. A hybrid GRASP/VND algorithm for two- and three-dimensional bin packing. Annals of Operations Research, vol. 179, no. 1, pp. 203-220, Oct. 2008.
GRASP Overview
[Flowchart: greedy randomized construction → local search → compare with best solution → update best]
GRASP vs. Other Methods (1)
GRASP is the first pure constructive method that
we have seen
However, GRASP can be compared to Local Search
based methods in some aspects
That is, a GRASP can sometimes be interpreted as a
Local Search where the entire solution is destroyed
(emptied) whenever a local optimum is reached
The construction reaches a local optimum when no
more elements can be added
GRASP vs. Other Methods (2)
In this sense, we can classify GRASP as
Memoryless (not using adaptive memory)
Randomized (not systematic)
Operating on 1 solution (not a population)
Potential improvements of GRASP would involve
adding some memory
Many improvements have been suggested, but not too
many have been implemented/tested
There is still room for doing research in this area
Large Neighbourhood Search
Partial restart heuristic
LNS – Construct
[Animation: a solution is built up element by element]
LNS – Destroy
[Animation: part of the solution is removed]
LNS – Construct
[Animation: the removed part is re-built]
LNS
The magic is in choosing which part of the solution to
destroy
Different problems (and different instances) need
different destroy heuristics
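A minimal LNS loop in Python (minimization assumed; the problem-specific destroy and repair functions are hypothetical placeholders):

def large_neighborhood_search(s, destroy, repair, f, n_iterations):
    best = s
    for _ in range(n_iterations):
        partial = destroy(best)       # remove part of the current best solution
        candidate = repair(partial)   # re-construct the destroyed part
        if f(candidate) < f(best):    # keep only improvements
            best = candidate
    return best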