Informed Search algorithms
Material
• Ch. 3 of Artificial Intelligence: A Systems Approach by Tim Jones
• Ch. 4 of Artificial Intelligence: A Modern Approach by Russell and Norvig
Informed Search Techniques
Best-first search
• Method of exploring the node with the best "score" first
• Idea: use an evaluation function f(n) for each node as an estimate of its "desirability"
• The node with the lowest evaluation is considered the best node and is selected for expansion
• Maintains two lists: one containing the candidate nodes yet to be explored (OPEN), and one containing the nodes already visited (CLOSED)
• The algorithm always chooses the best of all unvisited nodes generated so far, rather than being restricted to a small subset such as the most recently generated nodes, as other search strategies like depth-first and breadth-first search are
Algorithm:
1. Define a list, OPEN, consisting solely of a single node, the start node, s.
2. IF the list is empty, return failure.
3. Remove from the list the node n with the best score (the node where f is the minimum), and move it to a list, CLOSED.
4. Expand node n.
5. IF any successor of n is the goal node, return success and the solution (by tracing the path from the goal node back to s).
6. FOR each successor node:
a) Apply the evaluation function, f, to the node.
b) IF the node has not been on either list, add it to OPEN.
7. Loop by going back to step 2.
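As a concrete illustration of steps 1-7, here is a minimal Python sketch of best-first search; the successors and f functions are supplied by the caller (hypothetical names), and node names are assumed to be strings so the priority queue can break ties.

import heapq

def best_first_search(start, goal, successors, f):
    open_list = [(f(start), start)]        # step 1: OPEN holds only the start node s
    closed = set()                         # CLOSED: nodes already expanded
    parent = {start: None}                 # lets us trace the solution path back to s
    while open_list:                       # step 2: fail when OPEN is empty
        _, n = heapq.heappop(open_list)    # step 3: node with the minimum f ...
        closed.add(n)                      # ... is moved to CLOSED
        for child in successors(n):        # step 4: expand n
            if child in parent or child in closed:
                continue                   # step 6b: skip nodes already on either list
            parent[child] = n
            if child == goal:              # step 5: a successor is the goal node
                path = [child]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))
            heapq.heappush(open_list, (f(child), child))   # step 6a: score it and add to OPEN
        # step 7: loop back to step 2
    return None                            # failure

The special cases listed next differ only in the choice of f: greedy best-first search uses f(n) = h(n), while A* uses f(n) = g(n) + h(n), the cost so far plus the heuristic estimate.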
• Special cases:
– greedy best-first search
– A* search
Romania with step costs in km
Greedy best-first search
• Greedy best-first search uses only the heuristic, and not the link costs, to expand the node that appears to be closest to the goal
• Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal
• Disadvantage: if the heuristic is not accurate, the search can go down paths with high link cost, since the connecting node may have a low heuristic value
• e.g., hSLD(n) = straight-line distance from n to Bucharest
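A small sketch of greedy best-first search on part of the Romania map, using f(n) = hSLD(n). The road connections and straight-line distances below are assumed to follow the standard Russell and Norvig example, since the map figure is not reproduced in these notes.

import heapq

# Assumed subset of the Romania road map; only the edges needed for this sketch,
# listed one-way. Greedy search ignores the road lengths entirely.
roads = {
    'Arad':           ['Zerind', 'Sibiu', 'Timisoara'],
    'Sibiu':          ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras':        ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Pitesti':        ['Rimnicu Vilcea', 'Bucharest'],
    'Zerind':         ['Arad', 'Oradea'],
    'Timisoara':      ['Arad', 'Lugoj'],
    'Oradea':         ['Zerind', 'Sibiu'],
    'Lugoj':          ['Timisoara'],
    'Bucharest':      [],
}

# Assumed straight-line distances to Bucharest, hSLD(n).
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Zerind': 374, 'Timisoara': 329, 'Oradea': 380,
         'Lugoj': 244, 'Bucharest': 0}

def greedy_best_first(start, goal):
    fringe = [(h_sld[start], [start])]     # fringe ordered by h(n) only
    visited = set()
    while fringe:
        _, path = heapq.heappop(fringe)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in roads[node]:
            if neighbor not in visited:
                heapq.heappush(fringe, (h_sld[neighbor], path + [neighbor]))
    return None

print(greedy_best_first('Arad', 'Bucharest'))
# Expands Arad (366), Sibiu (253), Fagaras (176) and reaches Bucharest, even though
# the route through Rimnicu Vilcea and Pitesti is shorter in kilometres.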
Greedy best-first search example
Greedy best-first search example
Visit the Start node and add its neighbors to the fringe.
Greedy best-first search example
Node D has the lowest heuristic value, so we visit that node and add its neighbors to the fringe.
Greedy best-first search example
Node E now has the lowest heuristic value in the fringe, so it is visited and its neighbors are added to the fringe.
Greedy best-first search example
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (the number of squares each tile is from its desired location)
• h1(S) = ?
• h2(S) = ?
Admissible heuristics
E.g., for the 8-puzzle:
• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
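To make the two heuristics concrete, here is a small Python sketch that computes h1 and h2; the start and goal boards are assumed to be the ones from the Russell and Norvig 8-puzzle figure (7 2 4 / 5 _ 6 / 8 3 1, with goal _ 1 2 / 3 4 5 / 6 7 8), which reproduces the values 8 and 18 above.

# States are 9-tuples read row by row; 0 marks the blank square.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)          # in this goal state, tile t sits at index t

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != goal[i])

def h2(state):
    """Total Manhattan distance of the tiles from their goal squares."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = goal.index(t)                           # goal index of tile t
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

print(h1(start), h2(start))   # -> 8 18, matching the slide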
Relaxed problems
• A problem with fewer restrictions on the actions
is called a relaxed problem
• The cost of an optimal solution to a relaxed
problem is an admissible heuristic for the
original problem
• If the rules of the 8-puzzle are relaxed so that a
tile can move anywhere, then h1(n) gives the
shortest solution
• If the rules are relaxed so that a tile can move to
any adjacent square, then h2(n) gives the
shortest solution
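The reasoning behind the second bullet can be written out in one line (h*(n) below denotes the true optimal cost from n in the original problem, a symbol introduced here just for the derivation): because the relaxed problem only removes restrictions, every solution of the original problem from n is still a solution of the relaxed problem, so

\[
h_{\text{relaxed}}(n) \;=\; \min_{\text{relaxed solutions from } n} \mathrm{cost}
\;\le\; \min_{\text{original solutions from } n} \mathrm{cost} \;=\; h^{*}(n),
\]

which is exactly the admissibility condition h(n) ≤ h*(n).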
Local search algorithms
• In many optimization problems, the path to the
goal is irrelevant; the goal state itself is the
solution
• State space = set of "complete" configurations
• Find configuration satisfying constraints, e.g., n-
queens
• In such cases, we can use local search
algorithms
• keep a single "current" state, try to improve it
• Example: N-Queens using Hill climbing
algorithm
Example: n-queens
• Put n queens on an n × n board with no
two queens on the same row, column, or
diagonal
Hill-climbing search
• "Like climbing Everest in thick fog with
amnesia"
Hill-climbing search
• iterative algorithm that starts with an arbitrary solution to a problem, then attempts
to find a better solution by incrementally changing a single element of the solution. If
the change produces a better solution, an incremental change is made to the new
solution, repeating until no further improvements can be found.
• Simple, general idea:
– Start wherever
– Always choose the best neighbor
– If no neighbors have better scores than current, quit
• Problem: depending on the initial state, hill climbing can get stuck in a local maximum
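A minimal Python sketch of this loop, assuming the caller supplies a neighbors() function and a score() function to maximize (both hypothetical names):

def hill_climb(initial, neighbors, score):
    """Steepest-ascent hill climbing: always move to the best neighbor and
    stop as soon as no neighbor improves on the current state."""
    current = initial                      # start wherever
    while True:
        best = max(neighbors(current), key=score, default=None)
        if best is None or score(best) <= score(current):
            return current                 # local maximum, possibly not the global one
        current = best                     # incremental change to the new solution

A common remedy for the local-maximum problem is random-restart hill climbing: run the loop from several random initial states and keep the best result found.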
Hill climbing
• Application to the traveling salesman problem:
• It is easy to find an initial solution that visits all the cities, but it will usually be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as swapping the order in which two cities are visited. Eventually, a much shorter route is likely to be obtained (see the sketch after this list).
• good for finding a local optimum (a good solution that lies relatively
near the initial solution)
• Not guaranteed to find the best possible solution (the global optimum)
out of all possible solutions (the search space).
• Its relative simplicity makes it a popular first choice among optimization algorithms
• More advanced algorithms such as simulated annealing or tabu search may give better results, but in some situations hill climbing works just as well
• can often produce a better result than other algorithms when time is
limited
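A minimal sketch of this approach, using a hypothetical symmetric distance matrix and the swap of two cities as the "small improvement":

import itertools
import random

def tour_length(tour, dist):
    """Total length of a closed tour for a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hill_climb_tsp(dist, seed=0):
    random.seed(seed)
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)                   # easy to find, but usually a poor tour
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(n), 2):
            candidate = tour[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]   # swap two cities
            if tour_length(candidate, dist) < tour_length(tour, dist):
                tour, improved = candidate, True                      # keep the improvement
    return tour                            # local optimum, not necessarily the global one

# Hypothetical 5-city instance: dist[a][b] = distance between cities a and b.
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
print(hill_climb_tsp(dist))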
Hill-climbing search: 8-queens problem
• h = number of pairs of queens that are attacking each other, either directly
or indirectly
• h = 17 for the above state
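A minimal sketch of this heuristic and of hill climbing with it for 8-queens. A state is encoded as a tuple giving the row of the queen in each column (an assumed encoding, since the board figure is not reproduced in these notes):

import itertools
import random

N = 8

def h(state):
    """Number of pairs of queens attacking each other (same row or same diagonal);
    with one queen per column, column conflicts cannot occur."""
    return sum(1 for c1, c2 in itertools.combinations(range(N), 2)
               if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1)

def neighbors(state):
    """All states reachable by moving a single queen within its own column."""
    for col in range(N):
        for row in range(N):
            if row != state[col]:
                yield state[:col] + (row,) + state[col + 1:]

def hill_climb(state):
    while True:
        best = min(neighbors(state), key=h)
        if h(best) >= h(state):
            return state                   # stuck: a local minimum, possibly with h > 0
        state = best

random.seed(1)
start = tuple(random.randrange(N) for _ in range(N))
result = hill_climb(start)
print('h(start) =', h(start), '-> h(result) =', h(result))

Starting from a random board, this descent sometimes reaches h = 0 (a solution) and sometimes stops at a local minimum with h > 0, which is exactly the limitation discussed above.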
Hill-climbing search: 8-queens problem