Informed/Heuristic Search Algorithms
Material
• Ch. 3 of Artificial Intelligence: A Systems Approach by Tim Jones
• Ch. 4 of Artificial Intelligence: A Modern Approach by Russell and Norvig
Informed Search Techniques
Best-first search
• method of exploring the node with the best “score” first
• Idea: use an evaluation function f(n) for each node as an estimate of
"desirability"
• The node with the lowest evaluation is considered the best node and is
selected for expansion
• maintains two lists: one containing the candidates yet to explore
(OPEN), and one containing the visited nodes (CLOSED)
• the algorithm always chooses the best of all unvisited nodes generated
so far, whereas other search strategies, such as depth-first and breadth-
first search, are restricted to a fixed expansion order
Algorithm:
1. Define a list, OPEN, consisting solely of a single node, the start node, s.
2. IF the list is empty, return failure.
3. Remove from the list the node n with the best score (the node where f is the minimum), and move it to a list, CLOSED.
4. Expand node n.
5. IF any successor to n is the goal node, return success and the solution (by tracing the path from the goal node to s).
6. FOR each successor node:
a) apply the evaluation function, f, to the node.
b) IF the node has not been in either list, add it to OPEN.
7. Loop back to step 2.
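
A minimal sketch of these steps in Python, assuming nodes are hashable, comparable labels (e.g., strings); successors(n) yields the neighbors of n, and f scores a node, lower being better:

import heapq

def best_first_search(s, goal, successors, f):
    open_list = [(f(s), s)]                     # step 1: OPEN holds only the start node s
    closed = set()
    parent = {s: None}                          # for tracing the solution path
    while open_list:                            # steps 2/7: loop until OPEN is empty
        _, n = heapq.heappop(open_list)         # step 3: remove the node with minimum f
        closed.add(n)                           #         ...and move it to CLOSED
        for succ in successors(n):              # step 4: expand node n
            if succ == goal:                    # step 5: goal test on the successors
                parent[succ] = n
                path = [succ]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))     # path traced from the goal back to s
            if succ not in closed and succ not in parent:    # step 6b: on neither list
                parent[succ] = n
                heapq.heappush(open_list, (f(succ), succ))   # step 6a: score with f
    return None                                 # step 2: OPEN is empty, failure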

• Special cases:
– greedy best-first search
– A* search
Romania with step costs in km
Greedy best-first search
• greedy best-first search uses only the heuristic, not the link costs,
to expand the node that appears to be closest to the goal
• Evaluation function f(n) = h(n) (heuristic)
• h(n) = estimate of cost from n to the goal
• disadvantage: if the heuristic is inaccurate, the search can go down
paths with high link cost, since a connecting node may have a low
heuristic value
• e.g., h_SLD(n) = straight-line distance from n to
Bucharest
Greedy best-first search example

First, add the Start node to the fringe.


Greedy best-first search example

Visit the Start node and add its neighbors to the fringe.
Greedy best-first search example

Visit node A and add its neighbors to the fringe.


Greedy best-first search example

Node D has the lowest heuristic value, so we visit that node and add its
neighbors to the fringe.
Greedy best-first search example

Node E has the lowest heuristic value in the fringe, so it is visited and its
neighbors are added to the fringe.
Greedy best-first search example

The Goal node is in the priority queue with a heuristic of 0, so it is visited
and a path to the goal is found.
The path found from Start to Goal is: Start -> A -> D -> E -> Goal. In this case, it
was the optimal path, but only because the heuristic values were fairly accurate.
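
The walkthrough above can be reproduced with the best_first_search sketch from earlier by setting f(n) = h(n). The graph edges and heuristic values below are hypothetical stand-ins chosen to be consistent with the walkthrough, not the numbers from the slide's figure:

graph = {
    'Start': ['A', 'B'],
    'A': ['C', 'D'],
    'B': [],
    'C': [],
    'D': ['E'],
    'E': ['Goal'],
    'Goal': [],
}
h = {'Start': 10, 'A': 6, 'B': 8, 'C': 7, 'D': 4, 'E': 2, 'Goal': 0}

# greedy best-first search is best-first search with f(n) = h(n)
path = best_first_search('Start', 'Goal', lambda n: graph[n], lambda n: h[n])
print(path)  # ['Start', 'A', 'D', 'E', 'Goal']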
Greedy best-first search: another illustration
Properties of greedy best-first search

• Complete? No – can get stuck in loops

• Time? O(b^m), but a good heuristic can give
dramatic improvement, though all nodes
may be visited in the worst case
• Space? Also O(b^m) -- keeps all nodes in
memory
• b is the average branching factor (the average number of
successors from a state), and m is the maximum depth
of the search tree
• Optimal? No
A* search
• Idea: avoid expanding paths that are
already expensive

• Evaluation function f(n) = g(n) + h(n)

• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through
n to goal
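
A minimal A* sketch along the same lines (the graph-search variant, which is optimal when h is also consistent), assuming neighbors(n) yields (successor, step_cost) pairs and h(n) is the heuristic estimate to the goal:

import heapq, itertools

def a_star(start, goal, neighbors, h):
    tie = itertools.count()                    # tie-breaker so the heap never compares nodes
    frontier = [(h(start), 0, next(tie), start, None)]   # entries: (f, g, tie, node, parent)
    expanded = {}                              # node -> parent, recorded when expanded
    best_g = {start: 0}                        # g(n): cheapest cost found so far to reach n
    while frontier:
        f, g, _, n, par = heapq.heappop(frontier)
        if n in expanded:                      # stale entry: n was already reached more cheaply
            continue
        expanded[n] = par
        if n == goal:                          # goal test at expansion preserves optimality
            path = [n]
            while expanded[path[-1]] is not None:
                path.append(expanded[path[-1]])
            return list(reversed(path)), g
        for succ, cost in neighbors(n):
            g2 = g + cost                      # g: cost so far to reach succ
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, next(tie), succ, n))
    return None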

A* search example
[a sequence of slides steps through the A* expansion; figures omitted]
Other A* Applications
• Pathing / routing problems
• Resource planning problems
• Robot motion planning
• Language analysis
• Machine translation
• Speech recognition
Admissible Heuristics
• A heuristic h is admissible (optimistic) if h(n) ≤ h*(n) for every node n,

• where h*(n) is the true cost from n to a nearest goal

Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from
n.
• E.g. Euclidean distance on a map problem
• An admissible heuristic never overestimates the cost to reach the
goal, i.e., it is optimistic
• Coming up with admissible heuristics is most of what’s involved in
using A* in practice.
• Inadmissible heuristics are often quite effective (especially when you
have no choice)
• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
Properties of A*
• Complete? Yes (unless there are infinitely
many nodes with f ≤ f(G) )

• Time? Exponential

• Space? Keeps all nodes in memory

• Optimal? Yes

Admissible heuristics
E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles


• h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)

• h1(S) = ?
• h2(S) = ?

Admissible heuristics
E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles


• h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)

• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
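
These two values can be checked in a few lines. The state below is assumed to be the standard start state from Russell and Norvig's figure, which is consistent with the answers above (0 marks the blank):

S    = (7, 2, 4,
        5, 0, 6,
        8, 3, 1)
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state):
    # number of misplaced tiles; the blank does not count
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h2(state):
    # total Manhattan distance of each tile from its goal square
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = GOAL.index(tile)                 # goal position of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1(S), h2(S))  # 8 18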
Relaxed problems
• A problem with fewer restrictions on the actions
is called a relaxed problem

• The cost of an optimal solution to a relaxed
problem is an admissible heuristic for the
original problem

• If the rules of the 8-puzzle are relaxed so that a
tile can move anywhere, then h1(n) gives the
shortest solution

• If the rules are relaxed so that a tile can move to
any adjacent square, then h2(n) gives the
shortest solution
Local search algorithms
• In many optimization problems, the path to the
goal is irrelevant; the goal state itself is the
solution
• State space = set of "complete" configurations
• Find configuration satisfying constraints, e.g., n-
queens
• In such cases, we can use local search
algorithms
• keep a single "current" state, try to improve it
• Example: N-Queens using Hill climbing
algorithm
Example: n-queens
• Put n queens on an n × n board with no
two queens on the same row, column, or
diagonal

Hill-climbing search
• "Like climbing Everest in thick fog with
amnesia"

Hill-climbing search
• iterative algorithm that starts with an arbitrary solution to a problem, then attempts
to find a better solution by incrementally changing a single element of the solution. If
the change produces a better solution, an incremental change is made to the new
solution, repeating until no further improvements can be found.
• Simple, general idea:
– Start wherever
– Always choose the best neighbor
– If no neighbors have better scores than current, quit
• Problem: depending on initial state, can get stuck in local maxima
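
A minimal sketch of this loop, assuming the caller supplies neighbors(x), which yields the successor states of x, and score(x), the objective to maximize:

def hill_climb(initial, neighbors, score):
    current = initial
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=score)    # always choose the best neighbor
        if score(best) <= score(current):
            return current                   # no neighbor improves: quit
        current = best                       # note: may end at a local maximum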

Hill climbing
• Application in the traveling salesman problem:
• It is easy to find an initial solution that visits all the cities, but it will likely be
very poor compared to the optimal solution. The algorithm starts with such a
solution and makes small improvements to it, such as switching the order in
which two cities are visited; eventually, a much shorter route is likely to be
obtained (see the sketch after this list).
• good for finding a local optimum (a good solution that lies relatively
near the initial solution)
• Not guaranteed to find the best possible solution (the global optimum)
out of all possible solutions (the search space).
• Its relative simplicity makes it a popular first choice amongst
optimizing algorithms
• more advanced algorithms such as simulated annealing or tabu
search may give better results, in some situations hill climbing works
just as well
• can often produce a better result than other algorithms when time is
limited
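
As a usage example, the hill_climb sketch above can be applied to a toy TSP instance, with the neighborhood defined by switching the order in which two cities are visited. The distance matrix is hypothetical:

import itertools

D = [[0, 2, 9, 10, 7],                       # hypothetical symmetric distances, 5 cities
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swap_neighbors(tour):
    # every tour reachable by switching the order of two cities
    for i, j in itertools.combinations(range(len(tour)), 2):
        t = list(tour)
        t[i], t[j] = t[j], t[i]
        yield tuple(t)

# hill climbing maximizes, so minimize tour length by negating the score
best = hill_climb(tuple(range(5)), swap_neighbors, lambda t: -tour_length(t))
print(best, tour_length(best))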
Hill-climbing search: 8-queens problem

• h = number of pairs of queens that are attacking each other, either directly
or indirectly
• h = 17 for the above state
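
A sketch of this h, with the usual complete-state encoding where board[c] is the row of the queen in column c (so there is exactly one queen per column):

def attacking_pairs(board):
    n = len(board)
    h = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row  = board[c1] == board[c2]
            same_diag = abs(board[c1] - board[c2]) == c2 - c1
            if same_row or same_diag:
                h += 1          # pairs count even with queens in between
    return h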

Hill-climbing search: 8-queens problem

• A local minimum with h = 1



Simulated annealing search
• Idea: escape local maxima by allowing some
"bad" moves but gradually decrease their
frequency
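
A minimal sketch of the idea (maximizing), where neighbor(x) returns one random neighbor of x; the temperature schedule parameters here are illustrative, not from the slides:

import math, random

def simulated_annealing(initial, neighbor, score, t0=1.0, cooling=0.995, steps=10000):
    current, t = initial, t0
    for _ in range(steps):
        nxt = neighbor(current)
        delta = score(nxt) - score(current)      # > 0 means nxt is an improvement
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt                        # a "bad" move is sometimes accepted
        t *= cooling                             # T decreases, so bad moves get rarer
    return current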

Properties of simulated
annealing search
• One can prove: If T decreases slowly enough,
then simulated annealing search will find a
global optimum with probability approaching 1

• Widely used in VLSI layout, airline scheduling,
etc.

Tabu search
• A metaheuristic algorithm that can be used for solving
combinatorial optimization problems, such as the
traveling salesman problem (TSP). Tabu search uses a
local or neighborhood search procedure to iteratively
move from a solution x to a solution x' in the
neighborhood of x, until some stopping criterion has
been satisfied
• To explore regions of the search space that would be
left unexplored by the local search procedure, tabu
search modifies the neighborhood structure of each
solution as the search progresses
• Example: Traveling salesman problem
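
A minimal sketch of the core loop (maximizing), assuming moves(x) lists candidate moves and apply_move(x, m) returns the neighboring solution; real implementations add aspiration criteria and richer tabu rules:

from collections import deque

def tabu_search(initial, moves, apply_move, score, tabu_size=10, iters=1000):
    current = best = initial
    tabu = deque(maxlen=tabu_size)               # fixed-length list of recently used moves
    for _ in range(iters):
        candidates = [m for m in moves(current) if m not in tabu]
        if not candidates:
            break
        m = max(candidates, key=lambda mv: score(apply_move(current, mv)))
        current = apply_move(current, m)         # best non-tabu neighbor, even if worse
        tabu.append(m)                           # forbid reusing this move for a while
        if score(current) > score(best):
            best = current                       # keep the best solution seen so far
    return best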
