Topic - 3 (Solving Problems by Searching)
Department of CSE
Daffodil International University
Topic Contents
Problem Solving Agents
Problem Formulation and Search Spaces
Example Problems
Tree Search Algorithm
Breadth-First Search
Uniform-Cost Search
Depth-First Search
Depth-Limited Search
Iterative Deepening Search
Bidirectional Search
Introduction
In this topic, we see how an agent can find
a sequence of actions that achieves its
goals, when no single action will do.
Problem-Solving Agent
Four general steps in problem solving:
• Goal formulation
– What are the successful world states
• Problem formulation
– What actions and states to consider given the goal
• Search
– Examine different possible sequences of actions that
lead to states of known value and then choose the
best sequence
• Execute
– Perform actions on the basis of the solution
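The four steps above can be sketched as a single function. This is only an illustrative skeleton, not code from the slides; the helper names (`formulate_goal`, `formulate_problem`, `search`) are placeholders for whatever the agent designer supplies.

```python
def problem_solving_agent(state, formulate_goal, formulate_problem, search):
    """One pass of the four-step loop: formulate the goal, formulate the
    problem, search for an action sequence, and return it for execution."""
    goal = formulate_goal(state)              # 1. goal formulation
    problem = formulate_problem(state, goal)  # 2. problem formulation
    return search(problem)                    # 3. search (step 4 executes the plan)

# Toy demo: reach the goal value 3 from state 0 by repeated "+1" actions.
plan = problem_solving_agent(
    0,
    formulate_goal=lambda s: 3,
    formulate_problem=lambda s, g: (s, g),
    search=lambda p: ["+1"] * (p[1] - p[0]),
)
# plan is ["+1", "+1", "+1"]
```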
Example: Romania
The agent is on holiday in Romania, currently in Arad.
Its flight leaves tomorrow from Bucharest.
• Formulate goal
– Be in Bucharest
• Formulate problem
– States: various cities
– Actions: drive between cities
• Find solution
– Sequence of cities; e.g. Arad, Sibiu, Fagaras,
Bucharest,…
Problem Type
Problem Formulation
A problem is defined by:
An initial state, e.g. In(Arad)
A successor function S(x) = set of action-state pairs
• S(In(Arad)) = {(Go(Sibiu), In(Sibiu)), (Go(Zerind), In(Zerind)),
(Go(Timisoara), In(Timisoara))}
• initial state + successor function = state space
Goal test
• Explicit, e.g. x=‘In(Bucharest)’
Path cost (assume additive)
• e.g. sum of distances, number of actions executed, …
A solution is a sequence of actions from initial to
goal state
the optimal solution has the lowest path cost
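The four components above can be written down directly for the Romania example. A minimal sketch, using a small fragment of the standard Romania map (the distances are the usual textbook values):

```python
# Fragment of the Romania road map: city -> {neighbor: road distance}.
ROMANIA = {
    "Arad":           {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

initial_state = "Arad"

def successors(state):
    """S(x): the set of (action, resulting state) pairs."""
    return [(f"Go({city})", city) for city in ROMANIA.get(state, {})]

def goal_test(state):
    """Explicit goal test: are we in Bucharest?"""
    return state == "Bucharest"

def path_cost(path):
    """Additive path cost: sum of road distances along the path."""
    return sum(ROMANIA[a][b] for a, b in zip(path, path[1:]))

# The solution Arad -> Sibiu -> Fagaras -> Bucharest costs 140 + 99 + 211 = 450.
cost = path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"])
```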
State Space
The state space of a problem is the set of all
states reachable from the initial state.
The state space forms a graph, called a state
space graph, in which the nodes are states and
the arcs (may be labeled with costs) between
nodes are actions.
The map of Romania shown in an earlier slide
can be interpreted as a state space graph if we
view each road as standing for two driving
actions, one in each direction.
Example Problems
The problem-solving approach has been applied to
a vast array of task environments.
Some of the best known are listed here,
distinguishing between toy and real-world problems.
A toy problem is intended to illustrate or exercise
various problem-solving methods.
It is usable by different researchers to compare the
performance of algorithms.
A real-world problem is one whose solutions
people actually care about.
We will mainly focus on toy problems in this topic.
Toy Problems: 8-Puzzle
Toy Problems: 8-Queens
Complete-state formulation
• States: Any arrangement of 0 to 8 queens on the board
• Initial state: No queens
• Actions: Add queen to any empty square
• Goal test: 8 queens on board and none attacked
• Path cost: N/A
64 · 63 · 62 · ... · 57 ≈ 1.8 × 10^14 possible sequences to investigate
Toy Problems: 8-Queens
Incremental formulation
• States: n (0 to 8) queens on the board, one per column
in the n leftmost columns with no queen attacking any
other
• Actions: Add queen in leftmost empty column such that
the queen is not attacking any other queen
• 2057 possible sequences to investigate; solutions are
easy to find
But with 100 queens the problem is much harder
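The incremental formulation is small enough to enumerate exhaustively. The sketch below counts the queen placements generated (which should be in the neighborhood of the 2,057 sequences quoted above, though the exact figure depends on whether the empty board is counted) and the complete solutions, of which 8-queens famously has 92.

```python
def count_queens(n=8):
    """Explore the incremental formulation: queens are added one per column,
    left to right, and a queen may never attack an earlier one."""
    nodes = 0       # placements generated
    solutions = 0   # complete non-attacking boards

    def attacked(rows, r):
        c = len(rows)   # column where the new queen would go
        return any(r == r2 or abs(r - r2) == c - c2
                   for c2, r2 in enumerate(rows))

    def extend(rows):
        nonlocal nodes, solutions
        if len(rows) == n:
            solutions += 1
            return
        for r in range(n):
            if not attacked(rows, r):
                nodes += 1
                extend(rows + [r])

    extend([])
    return nodes, solutions

nodes, solutions = count_queens(8)   # 8-queens has 92 complete solutions
```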
Real-World Problems
Route-finding problems
Touring problems
Traveling Salesman problem
VLSI layout problem
Robot navigation
Automatic assembly sequencing
Internet searching
Basic Search Algorithms
Simple Tree Search Example
State Space vs. Search Tree
• A state corresponds
to a configuration of
the world
• A node is a data
structure in a search
tree
– e.g. node = <state,
parent-node, action,
path-cost, depth>
Search Strategies
A search strategy is defined by picking the order of node
expansion
Problem-solving performance is measured in four ways:
• Completeness: Is a solution found if one exists?
• Optimality: Does the strategy find the optimal solution?
• Time Complexity: How long does it take to find a solution?
• Space Complexity: How much memory is needed to perform
the search?
Time and space complexity are measured in terms of
problem difficulty defined by:
• b - branching factor of the search tree
• d - depth of the shallowest goal node
• m - maximum length of any path in the state space
Uninformed Search Strategies
• Uninformed search (or blind search)
– Strategies have no additional information about states
beyond that provided in the problem definition
Informed Search Strategies
• Informed (or heuristic) search
– Search strategies know whether one state is more
promising than another
Breadth-First Search
• Expand shallowest unexpanded node
• The fringe (also called the frontier) is the set of nodes that
have been generated but not yet expanded: the leaf nodes
available for expansion at any given point.
• Implementation: fringe/frontier is a FIFO queue
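A minimal graph-based BFS sketch, using a FIFO queue for the frontier so the shallowest unexpanded node is always taken first (the graph and helper names are illustrative):

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """BFS sketch: frontier is a FIFO queue of paths; an explored set
    prevents re-generating states already seen."""
    if is_goal(start):
        return [start]
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest node first
        for state in successors(path[-1]):
            if state not in explored:
                if is_goal(state):       # goal test at generation time
                    return path + [state]
                explored.add(state)
                frontier.append(path + [state])
    return None

# Toy graph: D is reachable at depth 2 via either B or C; BFS finds it via B.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
found = breadth_first_search("A", lambda s: GRAPH[s], lambda s: s == "D")
# found is ["A", "B", "D"]
```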
Breadth-First Search
• Time complexity: Assume a state space where
every state has b successors
– Assume solution is at depth d
– Worst case: expand all but the last node at depth d
– Total number of nodes generated:
– b + b^2 + b^3 + ... + b^d + (b^(d+1) − b) = O(b^(d+1))
• Space complexity: every node generated must
remain in memory, so same as time complexity
Breadth-First Search
• Memory requirements are a bigger problem than
execution time.
Uniform-Cost Search
• Extension of BF-search:
– Expand node with lowest path cost
• Implementation: fringe = queue ordered by
path cost
• UC-search is the same as BF-search
when all step costs are equal
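A sketch of uniform-cost search with a priority queue ordered by path cost g(n). Note the goal test is applied when a node is expanded, not when it is generated, which is what guarantees optimality. The graph fragment below reuses the standard Romania distances for illustration.

```python
import heapq

def uniform_cost_search(start, successors, is_goal):
    """UCS sketch: frontier is a priority queue ordered by path cost.
    `successors(s)` yields (next_state, step_cost) pairs."""
    frontier = [(0, start, [start])]     # (path cost, state, path)
    best = {start: 0}                    # cheapest known cost per state
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if is_goal(state):               # goal test on expansion, not generation
            return cost, path
        if cost > best.get(state, float("inf")):
            continue                     # stale queue entry, skip it
        for nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

# Romania fragment: the cheaper route to Bucharest goes through Pitesti.
G = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
    "Timisoara": [], "Zerind": [], "Bucharest": [],
}
cost, route = uniform_cost_search("Arad", lambda s: G[s], lambda s: s == "Bucharest")
# cost is 418 via Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest,
# beating the 450 route through Fagaras.
```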
Uniform-Cost Search
Let us consider the following search tree, where A
and G are the initial and goal node, respectively.
Uniform-Cost Search
Let us consider another search tree, where A and G are the
initial and goal node, respectively.
Uniform-Cost Search
• Completeness:
– YES, if every step cost ≥ ε for some small positive constant ε
• Optimality:
– nodes expanded in order of increasing path cost
– YES, if complete
• Time and space complexity:
– Uniform-cost search is guided by path costs rather than
depths, so its complexity cannot easily be characterized
in terms of b and d.
– Instead, assume C* is the cost of the optimal solution
– Assume that every action costs at least ε
– Worst-case: O(b^(C*/ε))
Depth-First Search
• Expand deepest unexpanded node
• Implementation: fringe is a LIFO queue (= stack)
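A minimal tree-style DFS sketch, using a plain Python list as the LIFO stack; it assumes a finite, acyclic search space (no explored set), matching the tree-search setting of these slides.

```python
def depth_first_search(start, successors, is_goal):
    """DFS sketch: fringe is a LIFO stack of paths, so the deepest
    unexpanded node is always expanded next."""
    frontier = [[start]]                    # stack of paths
    while frontier:
        path = frontier.pop()               # deepest node first
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in reversed(successors(state)):
            frontier.append(path + [nxt])   # leftmost child ends up on top
    return None

# Toy tree: DFS dives into B's subtree before ever considering C.
TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}
found = depth_first_search("A", lambda s: TREE[s], lambda s: s == "E")
# found is ["A", "B", "E"]
```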
DF-Search: Evaluation
• Completeness:
– Is a solution always found if one exists?
– No (unless search space is finite and no loops
are possible)
• Optimality:
– Is the least-cost solution always found?
– No
DF-Search: Evaluation
• Time complexity: O(b^m)
• In general, time is terrible if m (maximal
depth) is much larger than d (depth of
shallowest solution)
– But if there are many solutions, DF-search can be
faster than BF-search
• Space complexity: O(bm)
– Backtracking search uses even less memory
(one successor instead of all b)
Depth-Limited Search
• DF-search with depth limit l
– i.e. nodes at depth l have no successors
– Problem knowledge can be used
• Solves the infinite-path problem
• If l < d then incompleteness results
• Time complexity: O(b^l)
• Space complexity: O(b·l)
• Depth-first search can be viewed as a special
case of depth-limited search with l = ∞.
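A recursive sketch of depth-limited search. Following the usual textbook convention, it distinguishes a genuine failure (no solution anywhere) from a cutoff (the depth limit was hit, so a solution may still exist deeper); the `"cutoff"` sentinel and graph below are illustrative.

```python
def depth_limited_search(state, successors, is_goal, limit, path=None):
    """DLS sketch: DFS that treats nodes at depth `limit` as having no
    successors. Returns a path, "cutoff" if the limit was hit, else None."""
    path = (path or []) + [state]
    if is_goal(state):
        return path
    if limit == 0:
        return "cutoff"                  # depth limit reached
    cutoff = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, successors, is_goal, limit - 1, path)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None

# Toy chain A -> B -> C: the goal C sits at depth 2.
CHAIN = {"A": ["B"], "B": ["C"], "C": []}
shallow = depth_limited_search("A", lambda s: CHAIN[s], lambda s: s == "C", limit=1)
deep = depth_limited_search("A", lambda s: CHAIN[s], lambda s: s == "C", limit=2)
# shallow is "cutoff" (l < d), deep is ["A", "B", "C"]
```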
Depth-Limited Search with l = 2
Depth-Limited Search
• Sometimes, depth limits can be based on knowledge
of the problem.
• For example, on the map of Romania there are 20
cities.
• Therefore, we know that if there is a solution, it must
be of length 19 at the longest, so l = 19 is a possible
choice.
Depth-Limited Search
• But in fact if we studied the map carefully, we would
discover that any city can be reached from any other
city in at most 9 steps.
• This number, known as the diameter of the state
space, gives us a better depth limit, which leads to a
more efficient depth-limited search.
• Diameter of state space is the maximum distance
from one state to another in the state space.
• For most problems, however, we will not know a
good depth limit until we have solved the problem.
Iterative Deepening Search
• A general strategy to find the best depth limit l
– Try limits 0, 1, 2, ... in turn; the goal is found at depth d,
the depth of the shallowest goal node
– Often used in combination with DF-search
• Combines benefits of DF- and BF-search
• Like depth-first search, its memory requirements are
very modest: O(bd) to be precise.
• Like breadth-first search, it is complete when the
branching factor is finite and optimal when the path
cost is a nondecreasing function of the depth of the
node.
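The strategy above can be sketched as a loop over depth limits, each round re-running a depth-limited DFS from scratch. A compact depth-limited helper is inlined here so the sketch stands alone; names and the toy graph are illustrative.

```python
def iterative_deepening_search(start, successors, is_goal, max_depth=50):
    """IDS sketch: run depth-limited DFS with limits 0, 1, 2, ... until a
    solution is found, combining DFS memory use with BFS completeness."""
    def dls(path, limit):
        state = path[-1]
        if is_goal(state):
            return path
        if limit == 0:
            return None
        for nxt in successors(state):
            result = dls(path + [nxt], limit - 1)
            if result is not None:
                return result
        return None

    for limit in range(max_depth + 1):   # upper levels are re-generated each round
        result = dls([start], limit)
        if result is not None:
            return result
    return None

# Toy tree: the goal E sits at depth 2, so limits 0 and 1 fail before 2 succeeds.
TREE = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
found = iterative_deepening_search("A", lambda s: TREE[s], lambda s: s == "E")
# found is ["A", "C", "E"]
```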
Iterative Deepening Search
• Iterative deepening search may seem wasteful,
because states are generated multiple times.
• It turns out this is not very costly.
• The reason is that in a search tree with the same
(or nearly the same) branching factor at each
level, most of the nodes are in the bottom level, so
it does not matter much that the upper levels are
generated multiple times.
Iterative Deepening Search
• In an iterative deepening search, the nodes on the bottom level
(depth d) are generated once, those on the next to bottom level
are generated twice, and so on, up to the children of the root,
which are generated d times.
• So the total number of nodes generated is
N(IDS) = d·b + (d − 1)·b^2 + ... + 1·b^d = O(b^d)
• We can compare this to the nodes generated by a breadth-first
search:
N(BFS) = b + b^2 + ... + b^d + (b^(d+1) − b) = O(b^(d+1))
• Notice that breadth-first search generates some nodes at depth
d+1, whereas iterative deepening does not.
• The result is that iterative deepening is actually faster than
breadth-first search, despite the repeated generation of states.
Iterative Deepening Search
• For example, if b = 10 and d = 5, then N(IDS) = 123,450
while N(BFS) = 1,111,100, roughly nine times as many nodes.
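Evaluating both formulas for b = 10 and d = 5 is a quick sanity check; this snippet simply computes the two sums from the expressions above.

```python
b, d = 10, 5

# N(IDS) = d*b + (d-1)*b^2 + ... + 1*b^d
n_ids = sum((d - i + 1) * b**i for i in range(1, d + 1))

# N(BFS) = b + b^2 + ... + b^d + (b^(d+1) - b)
n_bfs = sum(b**i for i in range(1, d + 1)) + (b**(d + 1) - b)

# n_ids is 123450; n_bfs is 1111100: IDS generates far fewer nodes.
```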
ID-Search: Example
Bidirectional Search
Two simultaneous searches run from start and goal.
– Motivation: b^(d/2) + b^(d/2) is much less than b^d
– One forward from the initial state
– Other backward from goal
– Stops
when the two searches meet in the middle
• Check whether the node belongs to the other fringe before
expansion
Bidirectional Search