Informed Search Algorithms


Informed Search Strategies


• These strategies use problem-specific knowledge.
• They find solutions more efficiently than uninformed strategies.
• The general approach is best-first search.
Best-first search
• Idea: use an evaluation function f(n) to select a node for expansion.
• f(n) estimates "desirability": expand the most desirable unexpanded node.

• Implementation: a priority queue, a data structure that maintains the fringe in ascending order of f-values.
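As a rough illustration, here is a minimal Python sketch of the generic best-first loop. It assumes caller-supplied successors(node) and f(node) callables and hashable, comparable nodes; the names are illustrative, not from the slides.

import heapq

def best_first_search(start, goal, successors, f):
    # The fringe is a priority queue ordered by f-value (lower = more desirable).
    fringe = [(f(start), start, [start])]
    visited = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in successors(node):
            if nbr not in visited:
                heapq.heappush(fringe, (f(nbr), nbr, path + [nbr]))
    return None  # no goal reachable

Greedy best-first search and A* below are instances of this loop with different choices of f.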
Best-first search
• A family of best-first search algorithms exists, with different evaluation functions.
• A key component is a heuristic function h(n):
h(n) = estimated cost of the cheapest path from node n to
a goal node.
• If n is goal node, h(n)=0.
• Special cases:
– greedy best-first search
– A* search
Greedy best-first search
• Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal.
  e.g., hSLD(n) = straight-line distance from n to Bucharest.
• Greedy best-first search expands the node that appears to be closest to the goal.
Romania with step costs in km
Greedy best-first search example (figure sequence)
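The example slides step through greedy best-first search on the Romania map (in the textbook's version, from Arad to Bucharest). As a rough reproduction, here is a self-contained Python sketch; the road and straight-line-distance values follow the standard textbook map, trimmed to the fragment this run actually touches.

import heapq

ROADS = {  # road distances in km (fragment of the Romania map)
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
}
H_SLD = {  # straight-line distance to Bucharest, in km
    "Arad": 366, "Zerind": 374, "Timisoara": 329, "Oradea": 380, "Sibiu": 253,
    "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Craiova": 160,
    "Bucharest": 0,
}

def greedy_best_first(start, goal):
    # f(n) = h(n): always expand the node that looks closest to the goal.
    fringe = [(H_SLD[start], start, [start], 0)]
    visited = set()
    while fringe:
        _, node, path, cost = heapq.heappop(fringe)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for nbr, d in ROADS.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(fringe, (H_SLD[nbr], nbr, path + [nbr], cost + d))
    return None

print(greedy_best_first("Arad", "Bucharest"))
# -> (['Arad', 'Sibiu', 'Fagaras', 'Bucharest'], 450)

The 450 km route found here is not optimal (418 km via Rimnicu Vilcea and Pitesti exists), illustrating why greedy best-first search is not optimal.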
Properties of greedy best-first search
• Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
• Time? O(b^m), but a good heuristic can give dramatic improvement.
• Space? O(b^m) – keeps all nodes in memory.
• Optimal? No.
A* search
• Idea: avoid expanding paths that are already expensive.
• Evaluation function f(n) = g(n) + h(n).
• g(n) = cost so far to reach n.
• h(n) = estimated cost from n to goal.
• f(n) = estimated total cost of path through n to goal.
A* search example (figure sequence)
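The example slides step through A* on the same map. Here is a minimal self-contained sketch with f(n) = g(n) + h(n), reusing the same textbook road and straight-line-distance values as the greedy sketch above.

import heapq

ROADS = {  # road distances in km (fragment of the Romania map)
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
}
H_SLD = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Oradea": 380,
         "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100,
         "Craiova": 160, "Bucharest": 0}

def a_star(start, goal):
    # Fringe entries: (f, g, node, path); pop the lowest f = g + h first.
    fringe = [(H_SLD[start], 0, start, [start])]
    best_g = {start: 0}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        for nbr, d in ROADS.get(node, {}).items():
            g2 = g + d
            if g2 < best_g.get(nbr, float("inf")):  # found a cheaper path to nbr
                best_g[nbr] = g2
                heapq.heappush(fringe, (g2 + H_SLD[nbr], g2, nbr, path + [nbr]))
    return None

print(a_star("Arad", "Bucharest"))
# -> (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)

Unlike the greedy run, A* returns the optimal 418 km route, because g(n) penalizes the already-expensive path through Fagaras.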
Example (figure)
Admissible heuristics
• A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
• Example: hSLD(n) (never overestimates the actual road distance).
• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.
Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.

• f(G2) = g(G2) since h(G2) = 0
• g(G2) > g(G) since G2 is suboptimal
• f(G) = g(G) since h(G) = 0
• f(G2) > f(G) from above.
Optimality of A* (proof, continued)

Suppose, as before, that a suboptimal goal G2 is in the fringe, and let n be an unexpanded fringe node on a shortest path to an optimal goal G.

• f(G2) > f(G) from above
• h(n) ≤ h*(n) since h is admissible
• g(n) + h(n) ≤ g(n) + h*(n) = g(G) = f(G), since n is on a shortest path to G
• Hence f(n) ≤ f(G) < f(G2), and A* will never select G2 for expansion.
Properties of A*
• Complete? Yes (unless there are infinitely many nodes with f ≤ f(G)).
• Time? Exponential in the worst case.
• Space? Keeps all nodes in memory.
• Optimal? Yes.
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
• (i.e., the number of squares each tile is from its desired location)

• h1(S) = 8
• h2(S) = 3+1+2+2+2+3+3+2 = 18
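A short sketch that computes both heuristics. The start state S below is an assumption: the slide shows S only as a figure, so the standard textbook example state is used here, which matches the h1 = 8 and h2 = 18 values above.

GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)       # 0 denotes the blank
START = (7, 2, 4,      # assumed state S (standard textbook example)
         5, 0, 6,
         8, 3, 1)

def h1(state):
    # Number of misplaced tiles (the blank is not counted).
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    # Total Manhattan distance of each tile from its goal square.
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

print(h1(START), h2(START))   # -> 8 18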
Dominance
• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
• h2 is better for search: it expands no more nodes than h1.

• Heuristics can also be learned from experience.
Local search algorithms
• In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.
• State space = set of "complete" configurations.
• Find a configuration satisfying constraints, e.g., n-queens.
• In such cases, we can use local search algorithms: keep a single "current" state and try to improve it.
Example: n-queens
• Put n queens on an n × n board with no
two queens on the same row, column, or
diagonal.
Hill-climbing search
• "Like climbing Everest in thick fog with
amnesia”.
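A minimal steepest-ascent hill-climbing sketch for n-queens follows. The representation is an illustrative choice (one queen per column; a move changes one queen's row), not taken from the slides.

import random

def conflicts(state):
    # Number of attacking queen pairs (same row or same diagonal).
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(n=8):
    state = [random.randrange(n) for _ in range(n)]
    while True:
        best, best_cost = state, conflicts(state)
        for col in range(n):              # try moving each queen within its column
            for row in range(n):
                if row == state[col]:
                    continue
                neighbor = state[:col] + [row] + state[col + 1:]
                c = conflicts(neighbor)
                if c < best_cost:
                    best, best_cost = neighbor, c
        if best is state:                 # no uphill move left: local optimum
            return state, best_cost
        state = best

print(hill_climb())  # may be a solution (0 conflicts) or a local optimum

The possibility of stopping at a nonzero conflict count is exactly the local-maximum problem discussed next.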
Hill-climbing search
Problems
• Local maxima: a local maximum is a peak that is higher
than each of its neighboring states but lower than the
global maximum. Hill-climbing algorithms that reach the
vicinity of a local maximum will be drawn upward toward
the peak but will then be stuck with nowhere else to go.
• Ridges: Ridges result in a sequence of local maxima
that is very difficult for greedy algorithms to navigate.
• Plateaux: a plateau is a flat area of the state-space
landscape. It can be a flat local maximum, from which no
uphill exit exists, or a shoulder, from which progress is
possible. A hill-climbing search might get lost on the
plateau.
Exploring the landscape (figure: local maximum, plateau, ridge)
Hill Climbing: Disadvantages
Local maximum
A state that is better than all of its neighbors, but not better than some other states farther away.
Hill Climbing: Disadvantages
Plateau
A flat area of the search space in which all neighboring states have the same value.
Hill Climbing: Disadvantages
Ridge: the orientation of the high region, relative to the set of available moves, makes it impossible to climb up in a single move. However, two moves executed serially may increase the height.
Simulated Annealing search
• Idea: escape local maxima by allowing some
"bad" moves but gradually decrease their
frequency.
• Annealing is the process used to temper or
harden metals and glass by heating them to a
high temperature and then gradually cooling
them, thus allowing the material to coalesce into
a low energy crystalline state.
Simulated Annealing search
• Instead of always picking the best move, it picks a random move.
• If the move improves the situation, it is accepted.
• Otherwise, it is accepted with some probability less than 1.
• The probability decreases exponentially with the badness of the move:
– the amount ΔE by which the evaluation is worsened,
– and it also decreases as the temperature T goes down.
• Bad moves are thus likely to be allowed at the start and become rarer as T decreases.
Simulated annealing search
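The slides show the textbook pseudocode on this slide; the following is a hedged Python sketch of the same scheme. The geometric cooling schedule and its parameters are illustrative assumptions.

import math, random

def simulated_annealing(initial, value, random_neighbor,
                        t0=1.0, cooling=0.995, t_min=1e-3):
    # value(s) is to be maximized; random_neighbor(s) proposes a random move.
    current, t = initial, t0
    while t > t_min:
        nxt = random_neighbor(current)
        delta = value(nxt) - value(current)   # ΔE: > 0 means an improving move
        # Always accept improvements; accept a bad move with probability
        # e^(ΔE/T), which shrinks as the move gets worse and as T cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling                          # gradually lower the temperature
    return current

For example, for n-queens one could pass value=lambda s: -conflicts(s), with the conflicts function from the hill-climbing sketch above.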
Simulated annealing search
• One can prove: If T decreases slowly enough,
then simulated annealing search will find a
global optimum with probability approaching 1.

• Widely used in VLSI layout, airline scheduling, etc.
Local beam search
• Keep track of k states rather than just one.
• Start with k randomly generated states.
• At each iteration, all the successors of all k states are generated.
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
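A compact sketch of the loop just described; random_state, successors, score, and is_goal are assumed problem-specific callables (higher score is better), and max_iters is an illustrative safety bound.

def local_beam_search(k, random_state, successors, score, is_goal, max_iters=1000):
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        # Generate all successors of all k current states.
        pool = [s2 for s in states for s2 in successors(s)]
        for s in pool:
            if is_goal(s):
                return s
        # Keep only the k best successors from the complete list and repeat.
        states = sorted(pool, key=score, reverse=True)[:k]
    return max(states, key=score)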
Genetic algorithms
• A successor state is generated by combining two parent
states.
• Start with k randomly generated states (population).
• A state is represented as a string over a finite alphabet
(often a string of 0s and 1s).
• Evaluation function (fitness function): higher values for better states.
• Produce the next generation of states by selection, crossover, and mutation.
Genetic algorithms

• Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28).
• Each state is selected for reproduction with probability proportional to its fitness, e.g., 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
Genetic algorithms
Algorithm
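The slides show the textbook pseudocode here. Below is a hedged Python sketch of the same scheme for 8-queens, matching the slides: fitness = non-attacking pairs (max 28), selection proportional to fitness, single-point crossover, and random mutation. Population size, mutation rate, and generation limit are illustrative assumptions.

import random

N = 8  # board size; a state is a tuple of rows, one queen per column

def fitness(state):
    attacks = sum(1 for i in range(N) for j in range(i + 1, N)
                  if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return 28 - attacks             # 28 = 8 * 7 / 2 pairs in total

def select(population):
    # Roulette-wheel selection: probability proportional to fitness.
    weights = [fitness(s) for s in population]
    return random.choices(population, weights=weights, k=2)

def crossover(a, b):
    cut = random.randrange(1, N)    # single-point crossover
    return a[:cut] + b[cut:]

def mutate(state, rate=0.1):
    if random.random() < rate:      # occasionally change one queen's row
        s = list(state)
        s[random.randrange(N)] = random.randrange(N)
        return tuple(s)
    return state

def genetic_algorithm(pop_size=20, generations=1000):
    population = [tuple(random.randrange(N) for _ in range(N))
                  for _ in range(pop_size)]
    for _ in range(generations):
        if any(fitness(s) == 28 for s in population):
            break                   # a non-attacking configuration was found
        population = [mutate(crossover(*select(population)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

print(genetic_algorithm())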
