Cmpe536 L4


Lecture IV

Local Search (Cont.)

1
Outline
Variable neighbourhoods
Variable neighbourhood descent
Variable neighbourhood search

Guided local search

GRASP

Large neighbourhood search

Simulated annealing algorithm

2
Variable Neighborhood Search (VNS)
Based on the following central observations
A local minimum w.r.t. one neighborhood structure is not
necessarily locally minimal w.r.t. another neighborhood
structure
A global minimum is locally optimal w.r.t. all
neighborhood structures
For many problems, local minima with respect to one or
several neighborhoods are relatively close to each other

3
Variable Neighborhood Search (VNS)

4
Variable Neighborhood Search
Basic principle: change the neighborhood during the
search
Variations:
Variable Neighborhood Descent
Basic Variable Neighborhood Search
Reduced Variable Neighborhood Search
Variable Neighborhood Decomposition Search

5
Variable Neighborhood Search
The first task is to generate the different neighborhood structures

For many problems, different neighborhood structures already exist
E.g., for the VRP: 2-Opt, Cross, Swap, Exchange, …

Find neighborhoods that depend on some parameter
k-Opt (k=2,3,…)
Flip-neighborhoods can be extended to double-flip, triple-flip, etc.

Some neighborhoods are associated with distance measures:
Can increase the distance
6
7
Variable Neighborhood Descent
The final solution is locally optimal with respect to all
neighborhoods N1, N2, …, Nkmax
”First Improvement” could be used instead of ”Best Improvement”

Typically, neighborhoods are ordered from smallest to largest

If Local Search algorithms can be treated as black-box procedures:
Sort the procedures
Apply them in the given order
Possibly reiterate, starting again from the first one
Advantage: solution quality and speed
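As a concrete illustration, here is a minimal sketch of VND in Python, treating each local search step as a black-box procedure that returns the best neighbor in its neighborhood; the objective f, the neighborhood procedures, and the toy example are hypothetical placeholders, not taken from the slides.

```python
def variable_neighborhood_descent(s, neighborhoods, f):
    """VND sketch: 'neighborhoods' is a list of black-box procedures,
    ordered from smallest to largest, each returning the best neighbor of s."""
    k = 0
    while k < len(neighborhoods):
        s_new = neighborhoods[k](s)   # best neighbor in the k-th neighborhood
        if f(s_new) < f(s):
            s, k = s_new, 0           # improvement: restart from the first (smallest) neighborhood
        else:
            k += 1                    # no improvement: try the next, larger neighborhood
    return s                          # locally optimal w.r.t. all neighborhoods

# Toy usage: minimize x**2 over the integers with step-1 and step-2 neighborhoods.
f = lambda x: x * x
n1 = lambda x: min((x - 1, x + 1), key=f)
n2 = lambda x: min((x - 2, x + 2), key=f)
print(variable_neighborhood_descent(10, [n1, n2], f))  # prints 0
```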
8
Basic Variable Neighborhood Search
Use neighborhood structures N1, N2, …, Nkmax

Standard (”Best Improvement”) Local Search is applied in N1

The other neighborhoods are explored only randomly

Exploration of the other neighborhoods serves as perturbation, as in ILS
The perturbation is systematically varied
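A minimal Python sketch of the Basic VNS loop described above; shake (a random move in the k-th neighborhood), local_search (a best-improvement descent), and the objective f are hypothetical black boxes supplied by the user, and the fixed iteration budget is an arbitrary stopping criterion.

```python
def basic_vns(s0, k_max, shake, local_search, f, iterations=100):
    """Basic VNS sketch: shake in N_k, descend with local search,
    move only on improvement, and vary k systematically."""
    s_best = s0
    for _ in range(iterations):
        k = 1
        while k <= k_max:
            s_shaken = shake(s_best, k)       # random solution in N_k(s_best): the perturbation
            s_local = local_search(s_shaken)  # best-improvement local search
            if f(s_local) < f(s_best):
                s_best, k = s_local, 1        # improvement: go back to the smallest neighborhood
            else:
                k += 1                        # no improvement: perturb in a larger neighborhood
    return s_best
```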

9
10
Variations of Basic VNS
The order of the neighborhoods
Forward VNS: start with k=1 and increase
Backward VNS: start with k = kmax and decrease
Extended version:
Parameters kmin and kstep
Set k = kmin, and increase k by kstep if no improvement
Accepting worse solutions
With some probability
Skewed VNS: Accept the new solution s* if
 f(s*) − α·d(s, s*) < f(s)
 d(s, s*) measures the distance between the solutions
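A one-line Python sketch of the skewed acceptance test; the distance function d between solutions is problem specific and assumed to be given.

```python
def skewed_accept(s_new, s_cur, f, d, alpha):
    """Skewed VNS: accept s_new, possibly slightly worse than s_cur,
    if it lies far enough away from s_cur in solution space."""
    return f(s_new) - alpha * d(s_new, s_cur) < f(s_cur)
```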

12
Final Notes on VNS
Other variations exist
Reduced VNS: same as Basic VNS, but no Local Search
procedure
 Can be fast
Variable Neighborhood Decomposition Search
 Fix some components of the solution, and perform Local
Search on the remaining ”free” components

13
Final Notes on VNS
ILS and VNS are based on different underlying
”philosophies”
ILS: Perturb and do Local Search
VNS: Exploit different neighborhoods

ILS and VNS are also similar in many respects

ILS can be more flexible w.r.t. optimizing the interaction between modules

VNS gives rise to approaches such as VND for obtaining more powerful Local Search procedures
14
Conclusions about ILS and VNS
Based on simple principles
Easy to understand
Basic versions are easy to implement
Robust
Highly effective

15
16
Guided Local Search
A general strategy (metaheuristic) used to guide a
neighborhood search
Tries to overcome local optima by ”removing” them:
Changes the ”topography” of the search space
Uses an extended move evaluation function
 Original objective function + penalties
Focuses on promising parts of the search space

17
Features of a Solution
GLS assumes that we can find some features of
a solution that we can penalize
What is a ”feature”?
Something that is characteristic of the solution
Something that might be different in another
solution
Problem dependent
Examples:
TSP: whether city A follows immediately after city B
in the tour or not
Knapsack: whether item 5 is included or not

18
Features - Example: TSP
[Figure: a tour through cities 1–7]
A solution (trip) is an ordered set of edges
An edge is a good choice for a feature:
Is either in a solution or not
Feature cost = edge length
Let the set of all edges eij be the features:
E = {eij}, i = 1…N, j = i+1…N, i ≠ j
The cost of a feature eij is given by dij in the distance matrix:
D = [dij], i = 1…N, j = 1…N
19
Features & GLS
The modification of the move evaluation in GLS is based on features
Each feature has a cost
Represents (directly or indirectly) the influence of a solution on the (extended) move evaluation function
Constant or variable (dependent on other features)
GLS tries to avoid costly features
We use an indicator function as follows:
Ii(s) = 1 if solution s has feature i
Ii(s) = 0 if solution s does not have feature i

20
Extended Move Evaluation (1)
Until now we have only seen the use of the objective
function in move evaluations
We let f be the objective function
f(s) gives us the value of a solution s
We have always taken the best neighbor to be the
neighbor s for which f(s) has the best value
This will make the search stop when a local optimum
has been found

21
Extended Move Evaluation (2)
Let the set of features be denoted by
F = {1,…,G}
We have our indicator function:
Ii(s) = 1 if solution s has feature i
Ii(s) = 0 if solution s does not have feature i
We create a penalty vector p=[pi ], i=1…G
pi is the number of times feature i has been penalized so far
The extended move evaluation function becomes
f*(s) = f(s) + λ · Σi=1…G pi · Ii(s)
22
Extended Move Evaluation (3)
The extended move evaluation function has two parts
The original objective function
A penalty term, which penalizes certain features of the
solution

The parameter λ adjusts the influence of the penalties

23
24
Penalties (1)

The penalties are initially equal to 0


When the search has reached a local optimum (with
respect to the extended move evaluation function)
The penalty is increased for some of the features of the
current (locally optimal) solution
This will make the current solution look worse in
comparison to some neighboring solutions (which do not
have the penalized features, and thus do not get the
penalty)

25
Penalties (2)
How to select which feature to penalize?
Define the utility of a feature i in solution s as
follows:
ui(s) = Ii(s) * ci / (1+pi)
Here, ci is the cost of the feature (in the objective function)
and pi is the penalty accumulated so far
In a local optimum, s, increase the penalty for the
feature that has the highest utility value, ui(s)
NB: Penalties are only adjusted when the search
has reached a local optimum, and only for features
included in the local optimum
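The following Python sketch puts the pieces together: the extended move evaluation f*(s) = f(s) + λ·Σ pi·Ii(s) and the utility-based penalty update applied in a local optimum. The feature representation (features_of returns the set of feature indices present in a solution), the feature costs c, and the local_search black box are hypothetical placeholders, not part of the original slides.

```python
def augmented_cost(s, f, features_of, penalties, lam):
    """Extended move evaluation: original objective plus lambda times the
    penalties of the features present in s (the indicator I_i is implicit)."""
    return f(s) + lam * sum(penalties[i] for i in features_of(s))

def penalize(s, features_of, cost, penalties):
    """In a local optimum, increase the penalty of the feature(s) of s with
    maximum utility u_i(s) = c_i / (1 + p_i)."""
    utilities = {i: cost[i] / (1.0 + penalties[i]) for i in features_of(s)}
    u_max = max(utilities.values())
    for i, u in utilities.items():
        if u == u_max:
            penalties[i] += 1

def guided_local_search(s0, f, features_of, cost, lam, local_search, rounds=100):
    """GLS sketch: local search is guided by the augmented cost, while the
    best solution is judged by the original objective f."""
    penalties = {i: 0 for i in cost}   # all penalties start at 0
    s, best = s0, s0
    for _ in range(rounds):
        s = local_search(s, lambda x: augmented_cost(x, f, features_of, penalties, lam))
        if f(s) < f(best):
            best = s                   # keep the best w.r.t. the ORIGINAL objective
        penalize(s, features_of, cost, penalties)
    return best
```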

26
27
Comments on GLS
Uses the extended move evaluation function when
deciding which neighbor is best
Could also use ”First Improvement”-strategy
Uses the normal objective function value to find the
best solution
if (f(current) < f(best)) …
If all features are penalized equally, then f*(s)
describes the same ”landscape” as f(s)

28
How to Select Lambda

The control parameter λ dictates the influence of the penalty on the extended move evaluation function
 Low value: intensification
 High value: diversification
Can be problematic to find values for λ, even
though the method is robust for some range of values
Generally: a fraction of the objective function value
at a local minimum
λ = α · f(a local minimum) / (#features in the local minimum)

29
GLS - Example : TSP (1)
[Figure: the tour through cities 1–7 and the 7×7 penalty matrix; all entries zero except p26 = 1]
Features: edges
Cost of the features: edge length
The feature associated with e26 will be penalized in the solution shown on the right
In the next round of LS the move evaluation function is as before, f(s), except when e26 is in the solution, in which case the value will be f(s) + λ
30
GLS - Example : TSP (2)
[Figure: the tour and the penalty matrix, now with p26 = 1 and p34 = 1]
After the next LS, e34 is penalized
After this, the move evaluation function is as before, f(s), except when e26 or e34 is in the solution
31
Applications of GLS
A fairly recent metaheuristic (evolved from another
method called GENET)
Some applications:
Workforce scheduling at BT (Tsang, Voudouris 97)
TSP (Voudouris, Tsang 98)
VRP (Kilby et al. 97)
Function optimization (Voudouris 98)
…

32
Possibilities and Extensions
Limited life time of penalties
Diminishing penalties
Awards in addition to penalties
Automatic regulation of λ
New utility functions to find features to penalize

• Has been used for function optimization, with good results on:
F6(x, y) = 0.5 + (sin²(√(x² + y²)) − 0.5) / (1 + 0.001·(x² + y²))²
[Figure: plot of F6(x, 0) for x in [−100, 100]]
33
34
GRASP
Greedy Randomized Adaptive Search Procedures
A Metaheuristic that is based on Greedy Algorithms
A constructive approach
A multi-start approach (multiple initial solutions)
Includes (optionally) a local search to improve the
constructed solutions

35
Spelling out GRASP
Greedy: Select the best choice (or among the best
choices)
Randomized: Use some probabilistic selection to
prevent the same solution from being constructed every
time
Adaptive: Change the evaluation of choices after
making each decision
Search Procedure: It is a heuristic algorithm for
examining the solution space

36
Two Phases of GRASP
GRASP is an iterative process, in which each
iteration has two phases
Construction
Build a feasible solution (from scratch) in the same
way as using a Greedy Algorithm, but with some
randomization
Improvement
Improve the solution by using some Local Search
(Best/First Improvement)
The best overall solution is retained
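A minimal Python sketch of the two-phase GRASP loop; construct (a greedy randomized construction), local_search, and the minimization objective f are hypothetical black boxes.

```python
def grasp(n_iterations, construct, local_search, f):
    """GRASP sketch: repeat construction + improvement, retain the best solution."""
    best = None
    for _ in range(n_iterations):
        s = construct()          # phase 1: build a feasible solution greedily, with randomization
        s = local_search(s)      # phase 2: improve it with local search
        if best is None or f(s) < f(best):
            best = s             # the best overall solution is retained
    return best
```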

37
GRASP can be divided into two phases:

 Construction: where a feasible solution is built.

 Local Search: from the solution obtained, a neighborhood is built and the search is then performed.

[Flow diagram: for N iterations, Construction Phase → Local Search Phase → Improved Solution, compared against the Best Solution]

References:
T. A. Feo, and M. G. C. Resende. Greedy Randomized Adaptive Search Procedures. Journal of Global Optimization, no. 6, pp. 109-133, 1995.
M. G. C. Resende. Greedy Randomized Adaptive Search Procedures (GRASP). AT&T Labs Research Technical Report, 98.41.1, Dec. 1998.
M. G. C. Resende, and Celso C. Ribeiro. Greedy Randomized Search Procedures. AT&T Labs Research Technical Report TD-53RSJY, version 2, Aug. 2002.
38
The Constructive Phase (1)

39
Construction Phase

The RCL (Restricted Candidate List) is built based on the α parameter:

 α varies between 0 and 1:
 0 makes the construction too random.
 1 makes the construction too greedy.

 If β is the (greedy value of the) best free element and Σ is the set of free elements, the RCL is composed as:
 RCL = {σ ∈ Σ : value(σ) ≥ α·β}

[Flow diagram: Initialize elements → Build RCL → Randomly choose an element → Update Solution → Update Candidate List → repeat until the element list is empty → Constructed Solution]

40
Construction Phase

The randomness of GRASP is present when an element is picked by chance from the RCL.

After the element is added to the current solution, the cost of each free element is updated:

 This is the adaptive part of the GRASP metaheuristic.

[Flow diagram: same construction loop as on the previous slide]

41
The Constructive Phase (2)
Each step is both Greedy and Randomized
First, we build a Restricted Candidate List
The RCL contains the best elements that we can add
to the solution
Then we select randomly one of the elements in
the Restricted Candidate List
We then need to reevaluate the remaining
elements (their evaluation should change as a
result of the recent change in the partial solution),
and repeat
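A minimal Python sketch of one greedy randomized construction using the value-based RCL from the earlier slides (keep the free elements whose greedy value is at least α times the best value, then pick one at random). The hooks elements, greedy_value, and add_to_solution are hypothetical and problem specific, and nonnegative greedy values (larger is better) are assumed.

```python
import random

def construct_greedy_randomized(elements, greedy_value, add_to_solution, alpha):
    """One GRASP construction.  alpha in [0, 1]: alpha = 1 is purely greedy,
    alpha = 0 is purely random (value-based RCL)."""
    solution = []
    candidates = list(elements)
    while candidates:
        # Adaptive: re-evaluate every remaining element against the partial solution.
        values = {e: greedy_value(e, solution) for e in candidates}
        best = max(values.values())
        rcl = [e for e, v in values.items() if v >= alpha * best]  # restricted candidate list
        chosen = random.choice(rcl)                                # randomized selection
        solution = add_to_solution(solution, chosen)               # greedy step
        candidates.remove(chosen)
    return solution
```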

42
The Restricted Candidate List (1)
Assume we have evaluated all the possible
elements that can be added to the solution
There are two ways of generating a restricted list
Based on rank
Based on value
In each case, we introduce a parameter α that
controls how large the RCL will be
Include the (1−α)% elements with highest rank
Include all elements that have a value within α% of
the best element

43
The Restricted Candidate List (2)
In general:
A small RCL leads to a small variance in the values of
the constructed solutions
A large RCL leads to worse average solution values, but
a larger variance
High values (=1) for α result in a purely greedy
construction
Low values (=0) for α result in a purely random
construction

44
The Restricted Candidate List (4)
The role of α is thus critical
Usually, a good choice will be to modify the value
of α during the search
Randomly
Based on results
The approach where α is adjusted based on
previous results is called ”Reactive GRASP”
The probability distribution of α changes based on
the performance of each value of α
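A small Python sketch of the Reactive GRASP idea: keep a discrete set of candidate α values, sample one per iteration, and bias the sampling probabilities towards the values of α that have produced good (low-cost) solutions so far. run_iteration(alpha) is a hypothetical hook that runs one construction-plus-local-search iteration and returns the objective value; strictly positive objective values are assumed for the simple weighting used here.

```python
import random

def reactive_grasp_alpha(run_iteration, alphas=(0.1, 0.3, 0.5, 0.7, 0.9), iterations=200):
    """Reactive GRASP sketch: the probability of picking each alpha is made
    inversely proportional to the average cost it has produced (minimization)."""
    totals = {a: run_iteration(a) for a in alphas}  # seed every alpha with one run
    counts = {a: 1 for a in alphas}
    best = min(totals.values())
    for _ in range(iterations):
        weights = [counts[a] / totals[a] for a in alphas]  # 1 / (average cost of alpha)
        a = random.choices(list(alphas), weights=weights)[0]
        value = run_iteration(a)
        totals[a] += value
        counts[a] += 1
        best = min(best, value)
    return best
```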

45
Local Search Phase

The ”Create Neighborhood” step can be implemented in several different ways:

 This is problem dependent.

 The stopping criterion varies with the implemented method.

[Flow diagram: Constructed Solution → Create Neighborhood → Select Best Solution → Compare with Existing Best → Improved Solution, repeated until the stopping criterion is satisfied]

References:
F. Parreño, R. Alvarez-Valdes, J. F. Oliveira, and J. M. Tamarit. A hybrid GRASP/VND algorithm for two- and three-dimensional bin packing. Annals of Operations Research, vol. 179, no. 1, pp. 203-220, Oct. 2008.
46
GRASP Overview

GRASP is easy to implement:

 Few parameters need to be set and tuned.

 Effort can be transferred to the implementation of efficient data structures.

GRASP is also easily implemented in parallel.

[Flow diagram: for N iterations, several independent Construction Phase → Local Search Phase → Improved Solution pipelines are run and compared to keep the Best Solution]

47
GRASP vs. Other Methods (1)
GRASP is the first pure constructive method that
we have seen
However, GRASP can be compared to Local Search
based methods in some aspects
That is, a GRASP can sometimes be interpreted as a
Local Search where the entire solution is destroyed
(emptied) whenever a local optimum is reached
The construction reaches a local optimum when no
more elements can be added

49
GRASP vs. Other Methods (2)
In this sense, we can classify GRASP as
Memoryless (not using adaptive memory)
Randomized (not systematic)
Operating on 1 solution (not a population)
Potential improvements of GRASP would involve
adding some memory
Many improvements have been suggested, but not too
many have been implemented/tested
There is still room for doing research in this area

50
Large Neighbourhood Search
 Partial restart heuristic

1. Create an initial solution
2. Remove a part of the solution
3. Complete the solution as in step 1
4. Repeat, saving the best solution found
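A minimal Python sketch of the four steps above; construct (which builds a complete solution, optionally from a partial one), destroy (which removes a part of the solution), and the minimization objective f are hypothetical, problem-specific hooks.

```python
def large_neighborhood_search(construct, destroy, f, iterations=1000):
    """LNS sketch: repeatedly destroy part of the incumbent and rebuild it
    with the same constructive heuristic, saving the best solution found."""
    current = construct(None)                    # step 1: create an initial solution from scratch
    best = current
    for _ in range(iterations):
        candidate = construct(destroy(current))  # steps 2-3: remove a part, then complete it
        if f(candidate) < f(current):
            current = candidate                  # simple acceptance: keep improvements
        if f(candidate) < f(best):
            best = candidate                     # step 4: repeat, saving the best
    return best
```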

51
LNS – Construct

52
LNS – Construct

53
LNS – Construct

54
LNS – Construct

55
LNS – Construct

56
LNS – Construct

57
LNS – Construct

58
LNS – Construct

59
LNS – Construct

60
LNS – Construct

61
LNS – Construct

62
LNS – Destroy

63
LNS – Destroy

64
LNS – Destroy

65
LNS – Destroy

66
LNS – Construct

67
LNS – Construct

68
LNS
The magic is choosing which part of the solution to
destroy
Different problems (and different instances) need
different heuristics

69
