Local Search Algorithms
1. Hill Climbing
2. Simulated Annealing
3. Local Beam Search
4. Genetic Algorithm
Local Search
● The search algorithms that we have seen so far are designed to explore search spaces
systematically.
● This systematicity is achieved by keeping one or more paths in memory and by recording
which alternatives have been explored at each point along the path.
● When a goal is found, the path to that goal also constitutes a solution to the problem.
● In many problems, however, the path to the goal is irrelevant. In the 8-queens problem,
for example, what matters is the final configuration of queens, not the order in which
they were placed.
● If the path to the goal does not matter, we might consider a different class of algorithms,
ones that do not worry about paths at all. Local search algorithms operate using a single
current node (rather than multiple paths) and generally move only to neighbors of that node.
● Local search algorithms are useful for solving pure optimization problems, in which the aim
is to find the best state according to an objective function.
1. Hill Climbing Algorithm
Key Points
○ The hill climbing algorithm is a local search algorithm that continuously moves in the direction of increasing
elevation/value to find the peak of the mountain, i.e., the best solution to the problem. It terminates when it
reaches a peak where no neighbor has a higher value.
○ The hill climbing algorithm is a technique used for optimizing mathematical problems. One of the widely
discussed examples of the hill climbing algorithm is the Traveling Salesman Problem, in which we need to
minimize the distance traveled by the salesman.
○ It is also called greedy local search because it only looks at a good immediate neighbor state and not beyond that.
○ A node of hill climbing algorithm has two components which are state and value.
○ Hill Climbing is mostly used when a good heuristic is available.
○ In this algorithm, we don't need to maintain and handle the search tree or graph as it only keeps a single current
state.
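The single-current-state loop described above can be sketched in Python. The objective function and the ±1 integer neighborhood below are illustrative toy choices, not part of the original notes:

```python
def hill_climb(state, value, neighbors):
    """Steepest-ascent hill climbing: move to the best neighbor
    until no neighbor has a higher value (a local maximum)."""
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state        # peak: no uphill move remains
        state = best

# Toy landscape: maximize f(x) = -(x - 7)**2 over the integers,
# with neighbors x - 1 and x + 1.
f = lambda x: -(x - 7) ** 2
print(hill_climb(0, f, lambda x: [x - 1, x + 1]))  # prints 7
```

Note that the algorithm keeps only the current state and its value; no search tree or path is stored, matching the last key point above.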
Types
❖ Stochastic hill climbing chooses at random from among the uphill moves; the probability
of selection can vary with the steepness of the uphill move. This usually converges more
slowly than steepest ascent, but in some state landscapes, it finds better solutions.
❖ First-choice hill climbing implements stochastic hill climbing by generating successors
randomly until one is generated that is better than the current state. This is a good strategy
when a state has many (e.g., thousands) of successors.
❖ Random-restart hill climbing does a series of hill-climbing searches from
randomly generated initial states until a goal is found. It is trivially complete with
probability approaching 1, because it will eventually generate a goal state as the initial
state. If each hill-climbing search has a probability p of success, then the expected number
of restarts required is 1/p.
❖ The success of hill climbing depends very much on the shape of the state-space landscape:
if there are few local maxima and plateaux, random-restart hill climbing will find a good
solution very quickly.
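Random-restart behavior can be sketched by rerunning plain hill climbing from random starting points and keeping the best local maximum found. The bumpy toy landscape and the number of restarts are illustrative assumptions:

```python
import random

def hill_climb(state, value, neighbors):
    # Plain steepest ascent: stops at the first local maximum.
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state
        state = best

def random_restart(value, neighbors, random_state, restarts=50):
    """Run hill climbing from many random initial states and keep
    the best local maximum found across all restarts."""
    return max((hill_climb(random_state(), value, neighbors)
                for _ in range(restarts)), key=value)

# Bumpy toy landscape: local peaks of height 3 and 9, global peak 12.
landscape = [1, 2, 3, 2, 1, 0, 5, 9, 5, 0, 2, 8, 12, 8, 2]
value = lambda i: landscape[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
random.seed(0)
best = random_restart(value, neighbors, lambda: random.randrange(len(landscape)))
print(landscape[best])  # with enough restarts, the global peak of height 12
```

Here roughly a third of the starting points climb to the global peak, so a single run succeeds with probability p ≈ 1/3 and the expected number of restarts needed is about 1/p = 3, matching the analysis above.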
2. Simulated Annealing
❖ A hill-climbing algorithm that never makes “downhill” moves toward states
with lower value (or higher cost) is guaranteed to be incomplete, because it
can get stuck on a local maximum.
❖ In contrast, a purely random walk—that is, moving to a successor chosen
uniformly at random from the set of successors—is complete but extremely
inefficient.
❖ Simulated annealing combines hill climbing with a random walk in some way
that yields both efficiency and completeness.
❖ In metallurgy, annealing is the process used to temper or harden metals and
glass by heating them to a high temperature and then gradually cooling them,
thus allowing the material to reach a low energy crystalline state.
Key points
❖ Simulated annealing maintains a current assignment of values to variables.
❖ At each step, it picks a variable at random, then picks a value at random. If assigning that value to
the variable is an improvement or does not increase the number of conflicts, the algorithm accepts
the assignment and there is a new current assignment.
❖ Otherwise, it accepts the assignment with some probability, depending on the temperature and how
much worse it is than the current assignment. If the change is not accepted, the current assignment
is unchanged.
❖ To control how many worsening steps are accepted, there is a positive real-valued temperature T.
Suppose A is the current assignment of a value to each variable.
❖ Suppose that h(A) is the evaluation of assignment A to be minimized. For solving constraints, h is
typically the number of conflicts.
❖ Simulated annealing selects a neighbor at random, which gives a new assignment A'. If h(A') ≤ h(A),
it accepts the assignment and A' becomes the new assignment. Otherwise, the assignment is only
accepted randomly, with probability e^((h(A) − h(A'))/T): the worse the move and the lower the
temperature, the smaller the chance of acceptance.
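The acceptance rule above can be sketched in Python. The toy objective h(x) = (x − 3)², the ±1 neighbor move, and the geometric cooling schedule are illustrative assumptions, not part of the notes:

```python
import math
import random

def simulated_annealing(state, h, neighbor, schedule, steps=10000):
    """Minimize h: always accept non-worsening moves; accept a
    worsening move with probability e^((h(A) - h(A')) / T)."""
    for t in range(steps):
        T = schedule(t)
        nxt = neighbor(state)
        delta = h(nxt) - h(state)       # positive delta = worse move
        if delta <= 0 or random.random() < math.exp(-delta / T):
            state = nxt
    return state

# Toy problem: minimize h(x) = (x - 3)**2 over the integers.
random.seed(1)
h = lambda x: (x - 3) ** 2
result = simulated_annealing(20, h,
                             lambda x: x + random.choice([-1, 1]),
                             lambda t: 10 * 0.99 ** t)
print(result)
```

Early on, T is high and nearly every move is accepted (a random walk); as T decays, worsening moves become ever less likely and the search settles into a minimum, which is exactly the efficiency-plus-completeness trade described above.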
3. Local Beam Search
❖ Keeping just one node in memory might seem to be an extreme reaction to the problem of
memory limitations.
❖ The local beam search algorithm keeps track of k states rather than just one. It begins with
k randomly generated states.
❖ At each step, all the successors of all k states are generated. If any one is a goal, the
algorithm halts.
❖ Otherwise, it selects the k best successors from the complete list and repeats.
❖ In its simplest form, local beam search can suffer from a lack of diversity among the k
states—they can quickly become concentrated in a small region of the state space, making
the search little more than an expensive version of hill climbing.
❖ A variant called stochastic beam search, analogous to stochastic hill climbing, helps
alleviate this problem. Instead of choosing the best k from the pool of candidate
successors, stochastic beam search chooses k successors at random, with the probability
of choosing a given successor being an increasing function of its value.
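The basic k-state loop can be sketched as follows; the one-dimensional toy landscape, the fixed starting states, and the goal test are illustrative assumptions:

```python
def local_beam_search(states, value, neighbors, is_goal, max_iters=100):
    """Local beam search: generate all successors of the current k
    states, stop if any is a goal, else keep the k best."""
    k = len(states)
    for _ in range(max_iters):
        pool = {s for st in states for s in neighbors(st)}
        for s in pool:
            if is_goal(s):
                return s
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)

# Toy landscape; the goal is the global peak of height 12 at index 12.
landscape = [1, 2, 3, 2, 1, 0, 5, 9, 5, 0, 2, 8, 12, 8, 2]
value = lambda i: landscape[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
found = local_beam_search([0, 5, 14], value, neighbors,
                          lambda i: landscape[i] == 12)
print(found)  # prints 12
```

Unlike k independent hill climbs, the beam pools all successors before selecting, so a start near the global peak (index 14 here) pulls the whole beam toward it.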
4. Genetic Algorithms
● A genetic algorithm (or GA) is a variant of stochastic beam search in which
successor states are generated by combining two parent states rather than by
modifying a single state.
● Like beam searches, GAs begin with a set of k randomly generated states, called
the population.
● Each state, or individual, is represented as a string over a finite alphabet—most
commonly, a string of 0s and 1s.
● The theory of genetic algorithms explains how this works using the idea of a
schema, which is a substring in which some of the positions can be left unspecified.
For example, the schema 246***** describes all 8-queens states in which the first
three queens are in positions 2, 4, and 6, respectively. Strings that match the
schema (such as 24613578) are called instances of the schema.
Key Points
1. A fitness function should return higher values for better states, so for the 8-queens problem
we use the number of nonattacking pairs of queens, which has a value of 28 for a solution.
2. In (c), two pairs are selected at random for reproduction, in accordance with the
probabilities in (b).
3. Notice that one individual is selected twice and one not at all.
4. For each pair to be mated, a crossover point is chosen randomly from the positions in the
string. The crossover points are after the third digit in the first pair and after the fifth digit in
the second pair.
5. In (d), the offspring themselves are created by crossing over the parent strings at the
crossover point.
6. Finally, in (e), each location is subject to random mutation with a small independent
probability. One digit was mutated in the first, third, and fourth offspring. In the 8-queens
problem, this corresponds to choosing a queen at random and moving it to a random
square in its column.
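The fitness, selection, crossover, and mutation steps above can be sketched as a minimal 8-queens GA. The population size, mutation rate, generation count, and the use of `random.choices` for fitness-proportional selection are illustrative choices, not part of the original notes:

```python
import random

def fitness(state):
    """Number of nonattacking pairs of queens (28 for a solution).
    state[i] is the row (1-8) of the queen in column i."""
    n = len(state)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return n * (n - 1) // 2 - attacks

def reproduce(x, y):
    c = random.randrange(1, len(x))            # random crossover point
    return x[:c] + y[c:]

def mutate(state, rate=0.1):
    if random.random() < rate:                 # move one random queen
        state = list(state)
        state[random.randrange(len(state))] = random.randint(1, 8)
        state = tuple(state)
    return state

def genetic_algorithm(pop_size=20, generations=200):
    pop = [tuple(random.randint(1, 8) for _ in range(8))
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(s) for s in pop]
        if max(fits) == 28:                    # a solution appeared
            break
        weights = [f + 1 for f in fits]        # fitness-proportional selection
        pop = [mutate(reproduce(*random.choices(pop, weights=weights, k=2)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

random.seed(0)
best = genetic_algorithm()
# A known solution such as (2, 4, 6, 8, 3, 1, 7, 5) has fitness 28.
```

Each generation mirrors panels (b)-(e): parents are drawn with probability proportional to fitness, children are built by single-point crossover, and each child is mutated with small probability.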