Solving problems by searching
Uninformed search
CE417: Introduction to Artificial Intelligence
Sharif University of Technology
Spring 2016
Soleymani
“Artificial Intelligence: A Modern Approach”, Chapter 3
Outline
Problem-solving agents
Problem formulation and some examples of problems
Uninformed search algorithms
Using only the problem definition
Problem-Solving Agents
Problem Formulation: process of deciding what actions
and states to consider
States of the world
Actions as transitions between states
Goal Formulation: process of deciding what the next goal
to be sought will be
Agent must find out how to act now and in the future to
reach a goal state
Search: process of looking for a solution (a sequence of actions
that reaches the goal starting from the initial state)
Problem-Solving Agents
A goal-based agent adopts a goal and aims at satisfying it
(as a simple version of intelligent agent maximizing a performance measure)
“How does an intelligent system formulate its problem as
a search problem”
Goal formulation: specifying a goal (or a set of goals) that agent
must reach
Problem formulation: abstraction (removing detail)
Retaining validity and ensuring that the abstract actions are easy to
perform
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest
Map of Romania
Initial state
currently in Arad
Formulate goal
be in Bucharest
Formulate problem
states: various cities
actions: drive between cities
Solution
sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania (Cont.)
Assumptions about environment
Known
Observable
The initial state can be specified exactly.
Deterministic
Each action applied in a state results in a single, specified state.
Discrete
Given the first three assumptions above, starting from the initial state
and running a given sequence of actions, it is known exactly where the agent will end up
Percepts after each action provide no new information
The agent can search with closed eyes (open-loop)
Problem-solving agents
Formulate, Search, Execute
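As a rough Python sketch of this formulate-search-execute loop (the formulate_problem and search_fn callables, and the class itself, are placeholders of this sketch, not a fixed API from the course):

# A goal-based, problem-solving agent: formulate a problem, search offline
# for a whole action sequence, then execute it one action per step.
class SimpleProblemSolvingAgent:
    def __init__(self, formulate_problem, search_fn):
        self.formulate_problem = formulate_problem   # state -> problem
        self.search_fn = search_fn                   # problem -> list of actions (or None)
        self.plan = []                               # remaining actions to execute

    def __call__(self, percept):
        state = percept                              # fully observable: the percept is the state
        if not self.plan:                            # no plan (left): formulate and search
            problem = self.formulate_problem(state)
            self.plan = self.search_fn(problem) or []
        return self.plan.pop(0) if self.plan else None   # next action, or no-op on failure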
Problem types
Deterministic and fully observable (single-state problem)
Agent knows exactly its state even after a sequence of actions
Solution is a sequence
Non-observable or sensor-less (conformant problem)
Agent’s percepts provide no information at all
Solution is a sequence
Nondeterministic and/or partially observable (contingency
problem)
Percepts provide new information about current state
Solution can be a contingency plan (tree or strategy) and not a sequence
Often interleave search and execution
Unknown state space (exploration problem)
Belief State
In partially observable & nondeterministic environments,
a state is not necessarily mapped to a world configuration
A state represents the agent's conception of the world:
the agent's current belief (given the sequence of actions and percepts up
to that point) about the possible physical states it might be in.
(Figure: a sample belief state, shown as a set of possible world states)
Example: vacuum world
Single-state, start in {5}
Solution?
[Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8}
e.g., Right goes to {2,4,6,8}
Solution?
[Right,Suck,Left,Suck]
Contingency
Nondeterministic: Suck may dirty a clean carpet
Partially observable: location, dirt only at the current location
Percept: [L, Clean], i.e., start in {5} or {7}
Solution?
[Right, if dirt then Suck] or [Right, while dirt do Suck]
Single-state problem
In this lecture, we focus on single-state problems
Search for this type of problem is simpler
It also provides strategies that can serve as the basis for search in
more complex problems
Single-state problem formulation
A problem is defined by five items:
Initial state: e.g., In(Arad)
Actions: ACTIONS(s) is the set of actions that can be executed in state s
e.g., ACTIONS(In(Arad)) = {Go(Sibiu), Go(Timisoara), Go(Zerind)}
Transition model: RESULTS(s, a) is the state that results from doing action a in state s
e.g., RESULTS(In(Arad), Go(Zerind)) = In(Zerind)
Goal test: GOAL_TEST(s) tells whether a given state is a goal state
explicit, e.g., x = "at Bucharest"
abstract, e.g., Checkmate(x)
Path cost (additive): assigns a numeric cost to each path, reflecting the agent's
performance measure
e.g., sum of distances, number of actions executed, etc.
c(x, a, y) ≥ 0 is the step cost
Solution: a sequence of actions leading from the initial state to a goal state
An optimal solution has the lowest path cost among all solutions.
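As an illustration, here is a minimal Python sketch of these five items for the Romania route-finding problem (only a fragment of the map is encoded, and the class and method names are this sketch's own choices, not a standard interface):

# Fragment of the Romania road map: ROADS[c] maps city c to {neighbor: distance in km}.
ROADS = {
    'Arad':           {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Zerind':         {'Arad': 75, 'Oradea': 71},
    'Oradea':         {'Zerind': 71, 'Sibiu': 151},
    'Timisoara':      {'Arad': 118},
    'Sibiu':          {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras':        {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti':        {'Rimnicu Vilcea': 97, 'Bucharest': 101},
    'Bucharest':      {'Fagaras': 211, 'Pitesti': 101},
}

class RouteProblem:
    """Single-state problem: initial state, ACTIONS, RESULTS, goal test, step cost."""
    def __init__(self, initial, goal, roads=ROADS):
        self.initial, self.goal, self.roads = initial, goal, roads
    def actions(self, s):                 # ACTIONS(s): drive to any neighboring city
        return list(self.roads[s])
    def result(self, s, a):               # RESULTS(s, a): driving to city a ends in a
        return a
    def goal_test(self, s):               # GOAL_TEST(s)
        return s == self.goal
    def step_cost(self, s, a, s2):        # c(s, a, s2): road distance, always >= 0
        return self.roads[s][a]

problem = RouteProblem('Arad', 'Bucharest')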
State Space
State space: set of all reachable states from initial state
Initial state, actions, and transition model together define it
It forms a directed graph
Nodes: states
Links: actions
Constructing this graph on demand
Vacuum world state space graph
2 × 2^2 = 8 states
States? dirt locations & robot location
Actions? Left, Right, Suck
Goal test? no dirt at all locations
Path cost? one per action
Example: 8-puzzle
9!/2 = 181,440 states
States? locations of eight tiles and blank in 9 squares
Actions? move blank left, right, up, down (within the board)
Goal test? e.g., the above goal state
Path cost? one per move
Note: optimal solution of the n-puzzle family is NP-complete
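A possible Python sketch of this formulation; encoding states as 9-tuples in row-major order with 0 for the blank, and the particular GOAL configuration, are conventions of this sketch:

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)                         # one conventional goal configuration
MOVES = {'Up': -3, 'Down': +3, 'Left': -1, 'Right': +1}    # index offset of the blank per move

def actions(state):
    """Blank moves that stay on the 3x3 board."""
    i = state.index(0)                       # position of the blank
    acts = []
    if i >= 3:      acts.append('Up')
    if i < 6:       acts.append('Down')
    if i % 3 != 0:  acts.append('Left')
    if i % 3 != 2:  acts.append('Right')
    return acts

def result(state, action):
    """Swap the blank with the neighboring tile in the chosen direction."""
    i = state.index(0)
    j = i + MOVES[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def goal_test(state):
    return state == GOAL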
Example: 8-queens problem
64 × 63 × ⋯ × 57 ≈ 1.8 × 10^14 states
Initial State? no queens on the board
States? any arrangement of 0-8 queens on the board is a state
Actions? add a queen to the state (any empty square)
Goal test? 8 queens are on the board, none attacked
Path cost? of no interest
search cost vs. solution path cost
Example: 8-queens problem
(other formulation)
2,057 States
Initial state? no queens on the board
States? any arrangement of k queens one per column in the leftmost k
columns with no queen attacking another
Actions? add a queen to any square in the leftmost empty column such
that it is not attacked by any other queen
Goal test? 8 queens are on the board
Path cost? of no interest
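A possible Python sketch of this incremental formulation, with a state represented as a tuple of row indices, one per already-filled column (the helper names are this sketch's own):

N = 8

def attacks(r1, c1, r2, c2):
    """True if queens at (row r1, column c1) and (row r2, column c2) attack each other."""
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """Safe rows for a new queen in the leftmost empty column."""
    col = len(state)
    return [row for row in range(N)
            if not any(attacks(row, col, r, c) for c, r in enumerate(state))]

def result(state, row):
    return state + (row,)

def goal_test(state):
    return len(state) == N                  # 8 mutually non-attacking queens placed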
Example: Knuth problem
Knuth Conjecture: Starting with 4, a sequence of factorial,
square root, and floor operations will reach any desired
positive integer.
Example: ⌊√√√√√(4!)!⌋ = 5 (factorial, then repeated square roots, then floor)
States? Positive numbers
Initial State? 4
Actions? Factorial (for integers only), square root, floor
Goal test? State is the objective positive number
Path cost? Of no interest
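A rough Python sketch of this formulation; representing states as floats and bounding the factorial argument are assumptions of this sketch (to keep the numbers within floating-point range), not part of the original problem:

import math

def actions(x):
    """sqrt and floor always apply; factorial applies to integer values only."""
    acts = ['sqrt', 'floor']
    if x == int(x) and x <= 100:             # the bound 100 is this sketch's assumption
        acts.append('factorial')
    return acts

def result(x, action):
    if action == 'factorial':
        return float(math.factorial(int(x)))
    if action == 'sqrt':
        return math.sqrt(x)
    return float(math.floor(x))

def goal_test(x, target):
    return x == target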
Real-world problems
Route finding
Travelling salesman problem
VLSI layout
Robot navigation
Automatic assembly sequencing
Tree search algorithm
Basic idea
offline, simulated exploration of state space by generating successors of
already-explored states
function TREE-SEARCH( problem) returns a solution, or failure
initialize the frontier using the initial state of problem
loop do
if the frontier is empty then return failure
choose a leaf node and remove it from the frontier
if the node contains a goal state then return the corresponding solution
expand the chosen node, adding the resulting nodes to the frontier
Frontier: all leaf nodes available for expansion at any given point
Different data structures (e.g., FIFO, LIFO) for the frontier cause different
orders of node expansion and thus produce different search algorithms.
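A minimal Python rendering of this pseudocode, assuming a problem object with the actions/result/goal_test interface sketched earlier; the pop_index parameter is this sketch's way of choosing the frontier discipline (0 behaves like a FIFO queue, -1 like a LIFO stack):

def tree_search(problem, pop_index=0):
    """Generic TREE-SEARCH: returns a list of actions, or None on failure."""
    frontier = [(problem.initial, [])]             # leaf nodes as (state, actions so far)
    while frontier:
        state, path = frontier.pop(pop_index)      # choose a leaf node and remove it
        if problem.goal_test(state):
            return path                            # the corresponding solution
        for action in problem.actions(state):      # expand the chosen node
            child = problem.result(state, action)
            frontier.append((child, path + [action]))
    return None                                    # frontier empty: failure

Being tree search, the same state may be generated repeatedly through redundant paths.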
Tree search example: successive expansions of the search tree for the Romania route-finding problem (figures omitted)
Graph Search
Redundant paths in tree search: more than one way to get from
one state to another
may be due to a bad problem definition or the essence of the problem
can cause a tractable problem to become intractable
function GRAPH-SEARCH( problem) returns a solution, or failure
initialize the frontier using the initial state of problem
loop do
if the frontier is empty then return failure
choose a leaf node and remove it from the frontier
if the node contains a goal state then return the corresponding solution
add the node to the explored set
expand the chosen node, adding the resulting nodes to the frontier
only if not in the frontier or explored set
Explored set: remembers every expanded node
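The same skeleton extended with an explored set, again as a sketch over the hypothetical interface used above:

def graph_search(problem, pop_index=0):
    """GRAPH-SEARCH: like tree_search, but never expands or queues the same state twice."""
    frontier = [(problem.initial, [])]
    in_frontier = {problem.initial}                # states currently on the frontier
    explored = set()                               # remembers every expanded state
    while frontier:
        state, path = frontier.pop(pop_index)
        in_frontier.discard(state)
        if problem.goal_test(state):
            return path
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored and child not in in_frontier:
                frontier.append((child, path + [action]))
                in_frontier.add(child)
    return None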
Graph Search
Example: rectangular grid
(Figure: the frontier separates the explored region from the unexplored states)
Search for 8-puzzle Problem
(Figure: start and goal configurations; taken from http://iis.kaist.ac.kr/es/)
Implementation: states vs. nodes
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search tree
includes state, parent node, action, path cost g(x), depth
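A typical node data structure in Python; the fields follow the bullet above, and the solution helper (reading off the action sequence through parent pointers) is this sketch's addition:

class Node:
    """A search-tree node: a state plus bookkeeping about the path that reached it."""
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state
        self.parent = parent                       # node that generated this node
        self.action = action                       # action applied to the parent's state
        self.path_cost = path_cost                 # g(n): cost of the path from the root
        self.depth = 0 if parent is None else parent.depth + 1

    def solution(self):
        """Sequence of actions from the root to this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))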
Search strategies
Search strategy: order of node expansion
Strategies' performance evaluation:
Completeness: Does it always find a solution when there is one?
Time complexity: How many nodes are generated to find a solution?
Space complexity: Maximum number of nodes in memory during the search
Optimality: Does it always find a solution with minimum path cost?
Time and space complexity are expressed by
b (branching factor): maximum number of successors of any node
d (depth): depth of the shallowest goal node
m: maximum depth of any node in the search space (may be ∞)
Time & space are described for tree search
For graph search, analysis depends on redundant paths
Uninformed Search Algorithms
Uninformed (blind) search strategies
No additional information beyond the problem definition
Breadth-First Search (BFS)
Uniform-Cost Search (UCS)
Depth-First Search (DFS)
Depth-Limited Search (DLS)
Iterative Deepening Search (IDS)
Breadth-first search
Expand the shallowest unexpanded node
Implementation: FIFO queue for the frontier
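A breadth-first graph-search sketch using collections.deque as the FIFO frontier, built on the hypothetical problem and Node sketches above; as discussed in the properties below, the goal test is applied when a node is generated:

from collections import deque

def breadth_first_search(problem):
    """BFS graph search: returns a goal Node, or None on failure."""
    node = Node(problem.initial)
    if problem.goal_test(node.state):
        return node
    frontier = deque([node])                       # FIFO queue
    frontier_states = {node.state}
    explored = set()
    while frontier:
        node = frontier.popleft()                  # shallowest unexpanded node
        frontier_states.discard(node.state)
        explored.add(node.state)
        for action in problem.actions(node.state):
            s = problem.result(node.state, action)
            if s in explored or s in frontier_states:
                continue
            child = Node(s, node, action,
                         node.path_cost + problem.step_cost(node.state, action, s))
            if problem.goal_test(s):               # testing at generation saves one level
                return child
            frontier.append(child)
            frontier_states.add(s)
    return None

On the Romania fragment given earlier, breadth_first_search(problem).solution() yields ['Sibiu', 'Fagaras', 'Bucharest']: the shallowest solution, though not the cheapest one.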
Breadth-first search example: successive FIFO expansions (figures omitted)
Properties of breadth-first search
Complete?
Yes (for finite 𝑏 and 𝑑)
Time
b + b^2 + b^3 + ⋯ + b^d = O(b^d) total number of generated nodes
(the goal test is applied to each node when it is generated)
Space
O(b^(d-1)) for the explored set + O(b^d) for the frontier = O(b^d) (graph search)
Tree search does not save much space, while it may cause a great increase in time
Optimal?
Yes, if path cost is a non-decreasing function of d
e.g. all actions having the same cost
Properties of breadth-first search
Space complexity is a bigger problem than time complexity
Time is also prohibitive
Exponential-complexity search problems cannot be solved by
uninformed methods (except the smallest instances)
Assuming 1 million nodes generated per second and 1 KB (1,000 bytes) per node:
d     Time        Memory
10    3 hours     10 terabytes
12    13 days     1 petabyte
14    3.5 years   99 petabytes
16    350 years   10 exabytes
Uniform-Cost Search (UCS)
Expand node 𝑛 (in the frontier) with the lowest path cost 𝑔(𝑛)
Extension of BFS that is appropriate for any step cost function
Implementation: Priority queue (ordered by path cost) for
frontier
Equivalent to breadth-first if all step costs are equal
Two differences
Goal test is applied when a node is selected for expansion
A test is added when a better path is found to a node currently on the frontier
e.g., from Sibiu to Bucharest: 80 + 97 + 101 < 99 + 211, so the route via Rimnicu Vilcea and Pitesti is cheaper than the one via Fagaras
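A uniform-cost search sketch using heapq as the priority queue ordered by g(n), under the same hypothetical interfaces; the goal test is applied at expansion, and a cheaper path to a frontier state is handled by pushing a new entry and lazily skipping stale ones:

import heapq, itertools

def uniform_cost_search(problem):
    """UCS: expand the frontier node with the lowest path cost g(n)."""
    counter = itertools.count()                    # tie-breaker for equal costs
    root = Node(problem.initial)
    frontier = [(0.0, next(counter), root)]        # priority queue ordered by g
    best_g = {problem.initial: 0.0}                # cheapest known cost per state
    explored = set()
    while frontier:
        g, _, node = heapq.heappop(frontier)
        if node.state in explored:
            continue                               # stale entry: a cheaper path was expanded
        if problem.goal_test(node.state):          # goal test at expansion, not generation
            return node
        explored.add(node.state)
        for action in problem.actions(node.state):
            s = problem.result(node.state, action)
            g2 = g + problem.step_cost(node.state, action, s)
            if s not in explored and g2 < best_g.get(s, float('inf')):
                best_g[s] = g2                     # better path found: (re)queue with lower g
                heapq.heappush(frontier, (g2, next(counter), Node(s, node, action, g2)))
    return None

On the Romania fragment given earlier, this finds Arad-Sibiu-Rimnicu Vilcea-Pitesti-Bucharest with cost 418, rather than the shallower route through Fagaras with cost 450.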
Properties of uniform-cost search
Complete?
Yes, if step cost ≥ 𝜀 > 0 (to avoid infinite sequence of zero-cost
actions)
Time
Number of nodes with g ≤ cost of the optimal solution: O(b^(1+⌊C*/ε⌋)),
where C* is the optimal solution cost
O(b^(d+1)) when all step costs are equal
Space
Number of nodes with g ≤ cost of the optimal solution: O(b^(1+⌊C*/ε⌋))
Optimal?
Yes – nodes expanded in increasing order of 𝑔(𝑛)
Difficulty: many long paths may exist with cost ≤ 𝐶 ∗
Uniform-cost search (proof of optimality)
Lemma: If UCS selects a node 𝑛 for expansion, the optimal
solution to that node has been found.
Proof by contradiction: Another frontier node 𝑛′ must exist on the
optimal path from initial node to 𝑛 (using graph separation property).
Moreover, by the definition of path cost (since step costs are non-negative,
paths never get cheaper as nodes are added), we have g(n′) ≤ g(n),
and thus n′ would have been selected first.
⇒ Nodes are expanded in order of their optimal path cost.
Depth First Search (DFS)
Expand the deepest node in frontier
Implementation: LIFO queue (i.e., put successors at front)
for frontier
DFS example (figures omitted): at each step, the deepest unexpanded node in the frontier is expanded
Properties of DFS
Complete?
Tree-search version: not complete (repeated states & redundant paths)
Graph-search version: fails in infinite state spaces (with an infinite non-goal path),
but is complete in finite ones
Time
O(b^m): terrible if m is much larger than d
In the tree-search version, m can even be much larger than the size of the state space
Space
O(bm), i.e., linear space complexity for tree search
This is why depth-first tree search serves as the basis of many areas of AI
A recursive variant, called backtracking search, can be implemented in O(m) space
Optimal?
No (DFS: tree-search version)
Depth Limited Search
Depth-first search with depth limit 𝑙 (nodes at depth 𝑙 have no successors)
Solves the infinite-path problem
In some problems (e.g., route finding), using knowledge of problem to specify 𝑙
Complete?
If l ≥ d, it is complete
Time
O(b^l)
Space
O(bl)
Optimal?
No
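A recursive depth-limited sketch over the same hypothetical problem interface; the special value 'cutoff' distinguishes hitting the depth limit from genuine failure:

def depth_limited_search(problem, limit):
    """Returns a list of actions, 'cutoff' if the limit was hit, or None on failure."""
    def recurse(state, limit):
        if problem.goal_test(state):
            return []
        if limit == 0:
            return 'cutoff'
        cutoff_occurred = False
        for action in problem.actions(state):
            outcome = recurse(problem.result(state, action), limit - 1)
            if outcome == 'cutoff':
                cutoff_occurred = True
            elif outcome is not None:
                return [action] + outcome          # prepend the action that led to success
        return 'cutoff' if cutoff_occurred else None
    return recurse(problem.initial, limit)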
Iterative Deepening Search (IDS)
Combines benefits of DFS & BFS
DFS: low memory requirement
BFS: completeness & also optimality for special path cost functions
Not as wasteful as it might seem (most of the nodes are in the bottom level)
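Iterative deepening is then just a loop over increasing limits around the depth-limited sketch above:

import itertools

def iterative_deepening_search(problem):
    """Run depth-limited search with limits 0, 1, 2, ... until the result is not a cutoff."""
    for limit in itertools.count():
        outcome = depth_limited_search(problem, limit)
        if outcome != 'cutoff':
            return outcome                         # a solution (list of actions) or None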
IDS example (figures omitted): depth-limited searches with limits l = 0, 1, 2, 3
Properties of iterative deepening search
Complete?
Yes (for finite 𝑏 and 𝑑)
Time
d × b^1 + (d − 1) × b^2 + ⋯ + 2 × b^(d−1) + 1 × b^d = O(b^d)
Space
O(bd) (linear in d)
Optimal?
Yes, if path cost is a non-decreasing function of the node depth
IDS is the preferred method when the search space is large and
the depth of the solution is unknown
Iterative deepening search
Number of nodes generated to depth d:
N_IDS = d × b^1 + (d − 1) × b^2 + … + 2 × b^(d−1) + 1 × b^d = O(b^d)
For 𝑏 = 10, 𝑑 = 5, we compute number of generated nodes:
N_BFS = 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110
N_IDS = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
Overhead of IDS = (123,450 - 111,110)/111,110 = 11%
Bidirectional search
Simultaneous forward and backward search (hoping that they
meet in the middle)
Idea: b^(d/2) + b^(d/2) is much less than b^d
“Do the frontiers of two searches intersect?” instead of goal test
First solution may not be optimal
Implementation
Keep the frontier of one of the two searches in a hash table (to check intersection efficiently)
Space requirement: the most significant weakness
Computing predecessors?
May be difficult
A list of goal states? Add a new dummy goal state whose predecessors are those goals
An abstract goal (e.g., checkmate)?!
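A rough bidirectional breadth-first sketch, assuming an undirected graph given by a neighbors function (so predecessors equal successors, as in route finding); it reports only the number of steps of the path it finds, and, consistent with the note above, that first path need not be optimal:

from collections import deque

def bidirectional_bfs(neighbors, start, goal):
    """Alternate BFS layers from start and from goal; stop when the frontiers meet."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}         # distance of reached states, per direction
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # Expand one full layer on the side with the smaller frontier.
        frontier, dist, other = ((frontier_f, dist_f, dist_b)
                                 if len(frontier_f) <= len(frontier_b)
                                 else (frontier_b, dist_b, dist_f))
        for _ in range(len(frontier)):
            s = frontier.popleft()
            for t in neighbors(s):
                if t in other:                     # t was already reached from the other side
                    return dist[s] + 1 + other[t]
                if t not in dist:
                    dist[t] = dist[s] + 1
                    frontier.append(t)
    return None

For instance, bidirectional_bfs(lambda c: ROADS[c], 'Arad', 'Bucharest') on the earlier map fragment returns 3.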
Summary of algorithms (tree search)

Criterion    BFS        Uniform-cost            DFS       Depth-limited   Iterative deepening   Bidirectional
Complete?    Yes (a)    Yes (a,b)               No        No              Yes (a)               Yes (a,d)
Time         O(b^d)     O(b^(1+⌊C*/ε⌋))         O(b^m)    O(b^l)          O(b^d)                O(b^(d/2))
Space        O(b^d)     O(b^(1+⌊C*/ε⌋))         O(bm)     O(bl)           O(bd)                 O(b^(d/2))
Optimal?     Yes (c)    Yes                     No        No              Yes (c)               Yes (c,d)

(a) Complete if b is finite
(b) Complete if step cost ≥ ε > 0
(c) Optimal if step costs are equal
(d) If both directions use BFS
Iterative deepening search uses only linear space and not much
more time than other uninformed algorithms