Chapter 3 Searching and Planning
Bedasa Wayessa
Searching and Planning
Outline
Solving Problems by Searching and Planning
Problem Solving Agents
Problem spaces and search
Knowledge and rationality
Heuristic search strategies
Search and optimization (gradient descent)
Adversarial search
Constraint Satisfaction Problem
Planning and scheduling
Avoiding Repeated States
Dynamic game theory
Introduction
Reflex agents are limited.
Goal-based agents are more capable.
Such an agent is called a problem-solving agent, and the
computational process it undertakes is called search.
Problem-solving agents use atomic representations.
Goal-based agents that use factored or structured
representations of states are called planning agents.
A problem must be defined before a problem-solving agent
can search for solutions.
Problem-solving agent
Problem-solving process
Goal-based agents need a goal to guide them.
The agent can follow this four-phase problem-solving process:
1. Figuring out that goal (goal formulation) is the first step to solving
a problem.
A goal is a set of states.
2. Problem formulation is deciding what actions and states to consider,
given the goal.
If the agent lacks any other information, then the environment is
unknown.
Problem-solving agent
Problem-solving process
Example 1: a self-driving car with a map (known environment) or
without one (unknown environment).
With the map, the agent can decide what to do.
– It can consider future actions to decide among many possible
immediate actions.
If an environment is observable, discrete, known, and
deterministic, then the solution to any problem in that
environment will be a fixed sequence of actions.
Example: plot a course from one city to another.
Problem-solving agent
Problem-solving process
3. Search
Figuring out the sequence of actions that reaches the goal is called
search.
Problem-solving agent
Problem-solving process
4. The execution phase happens next, where the solution is put
into action.
Once that's done, the agent formulates a new goal.
An agent executing its sequence ignores percepts (it already knows
what it will do).
Its eyes are closed, so to speak; this is known as an open-loop system.
– Ignoring percepts breaks the loop between agent and environment.
– If the environment is nondeterministic, the agent is safer using a
closed-loop approach that monitors the percepts.
Problem-solving agent
Well-defined problems and solutions
A search problem can be defined formally by five components:
– initial state
– a description of possible actions
– the transition model
– the goal test
– a path cost function
Problem-solving agent
1. Initial state:
The state the agent starts in.
Example: In(Garage)
2. Actions:
What the agent can do.
Given a state s, the function ACTIONS(s) returns the set of actions
that can be executed in s; each of those actions is applicable in s.
{ Leave(Garage), Go(Street), Turn(Right)}
Problem-solving agent
3. Transition Model:
A description of each action.
Specified by a function, RESULT(s, a).
Returns the state that results when action a is performed in state s.
A successor is any state reachable from any given state
through one action.
RESULT(In(Garage), Go(Street)) = In(Street)
Initial state, actions, and transition model define the
problem's state space.
Problem-solving agent
The state space forms a graph whose nodes are states and whose
edges (arcs) are actions.
It is the set of all possible states that the environment can be in.
A path in the state space is a sequence of states connected
by a sequence of actions.
4. Goal test:
Determines whether a given state is a goal state.
If the goal is to be at the store, then the goal set could just be
{ At(Store) }
That set could contain more than one goal state.
Problem-solving agent
5. Path cost function:
Assigns a numeric cost to each path.
A problem-solving agent chooses a cost function that reflects its
own performance measure.
The step cost of taking an action is denoted:
ACTION-COST(s, a, s')
It returns the cost of applying action a in state s to reach
state s'.
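To make the five components concrete, here is a minimal Python sketch of a problem definition. The class and method names (Problem, actions, result, is_goal, action_cost) are illustrative choices, not a required API; the search sketches later in this chapter assume this interface.

class Problem:
    """Sketch of a search problem: initial state, actions, transition model,
    goal test, and step cost. Concrete problems subclass and fill these in."""

    def __init__(self, initial):
        self.initial = initial                 # the initial state, e.g. In(Garage)

    def actions(self, s):                      # ACTIONS(s): actions applicable in state s
        raise NotImplementedError

    def result(self, s, a):                    # RESULT(s, a): the transition model
        raise NotImplementedError

    def is_goal(self, s):                      # the goal test
        raise NotImplementedError

    def action_cost(self, s, a, s2):           # ACTION-COST(s, a, s'): step cost
        return 1                               # default: every step costs 1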
Problem-solving agent
Formulation of Problem
‒ The process of removing detail from a representation is called abstraction,
for example In(Garage).
‒ Both state descriptions and action descriptions need to be abstracted.
‒ An abstraction is valid if any abstract solution can be expanded into a
solution in the more detailed world (environment).
‒ The abstraction is useful if carrying out the actions in the solution is
easier than solving the original problem.
‒ A good abstraction removes as much detail as possible while staying valid
and useful.
‒ Abstraction is necessary for intelligent agents to actually act in the real
world, such as plotting a trip between cities for the driverless car.
Problem-solving agent
Example Problems
‒ Toy problems illustrate problem-solving methods.
• A toy problem is intended to illustrate or exercise various
problem-solving methods. Useful for explanation.
‒ Real-world problems have solutions that real people
care about.
• A real-world problem, such as robot navigation or the route-finding
problem, is one whose solutions people actually use.
• For example, each robot has different sensors that produce different
data.
Problem-solving agent
Toy Problems
‒ Example 1. A vacuum-cleaner world with just two locations.
‒ Each location can be clean or dirty, and the agent can move left
or right and can clean the square that it occupies.
‒ Goal state: both locations are clean.
Problem-solving agent
Toy Problems A Vacuum-cleaner
To formulate as a problem, we need:
‒ states: determined by the agent's location and the dirt locations.
• 2 agent locations × 2 dirt states for the left square × 2 dirt states
for the right square = 8 possible world states.
‒ initial state: any state will work.
‒ actions: Left, Right, and Suck.
‒ transition model: actions have their expected effects, except that
moving Left in the left square, moving Right in the right square, and
Sucking in a clean square do nothing.
‒ goal test: checks whether all squares are clean.
‒ path cost: each step costs 1.
Analysis: discrete locations, discrete dirt, reliable cleaning, and squares
never get dirtier.
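A possible encoding of this formulation in Python, building on the hypothetical Problem sketch shown earlier; a state is a pair (agent location, dirt status of the two squares), and the names are illustrative.

class VacuumWorld(Problem):
    """Two-location vacuum world: a state is (loc, (dirty_left, dirty_right)),
    where loc is 0 for the left square and 1 for the right square."""

    def actions(self, s):
        return ["Left", "Right", "Suck"]

    def result(self, s, a):
        loc, dirt = s
        if a == "Left":
            return (0, dirt)                   # moving Left in the left square changes nothing
        if a == "Right":
            return (1, dirt)                   # moving Right in the right square changes nothing
        cleaned = list(dirt)
        cleaned[loc] = False                   # Suck cleans the occupied square
        return (loc, tuple(cleaned))

    def is_goal(self, s):
        return not any(s[1])                   # goal: both squares are clean

With 2 agent locations and 2 × 2 dirt configurations, this problem has the 8 world states counted above.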
Problem-solving agent
Toy Problems
‒ Example 2. The 8-puzzle: a sliding-tile puzzle in which tiles are slid
into the blank space until the goal configuration is reached.
‒ A related puzzle is the Sokoban puzzle, in which the agent's goal is to
push a number of boxes to their destinations.
‒ There can be at most one box per cell.
Problem-solving agent
Toy Problems 8-queens
‒ Example 3. The 8-queens:
‒ The goal is to place 8 queens on a chessboard so that no queen attacks
another (a queen attacks any piece in the same row, column, or
diagonal). Useful for testing search algorithms.
Problem-solving agent
Toy Problems 8 Queens
There are two types of formulation: an incremental formulation and a
complete-state formulation.
Path cost does not matter in either, because only the final state
counts.
A first incremental formulation:
‒ states: an arrangement of 0 to 8 queens on the board.
‒ initial state: no queens on board.
‒ actions: add a queen to an empty square.
‒ transition model: return an updated board with a queen added to a square
‒ goal test: 8 queens on board, none attacked.
64 × 63 × … × 57 (≈ 1.8 × 10^14) sequences to evaluate.
Problem-solving agent
Toy Problems 8 Queens
An improved formulation
‒ states: all arrangements of n queens, (0 <= n <= 8), one per
column in leftmost n columns, with no attacks.
‒ actions: add a queen to any square in the leftmost empty column such
that it is not attacked by any other queen.
Reduces the number of sequences from
‒ 64 × 63 × … × 57 ≈ 1.8 × 10^14 to just 2,057.
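A small Python sketch that enumerates the states of this column-by-column formulation (each arrangement plus every non-attacking extension of it) and reproduces the 2,057 figure; the function names are illustrative.

def count_states(n=8):
    """Count arrangements of 0..n non-attacking queens, one per leftmost column."""

    def safe(rows, row):
        col = len(rows)                        # the new queen goes in column `col`
        return all(row != r and abs(row - r) != col - c
                   for c, r in enumerate(rows))

    def expand(rows):
        total = 1                              # count this (possibly partial) arrangement
        if len(rows) < n:
            for row in range(n):
                if safe(rows, row):
                    total += expand(rows + [row])
        return total

    return expand([])

print(count_states())                          # prints 2057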
Problem-solving agent
Example Problems Real-world problems
– Route-finding problems get defined in terms of locations
and transitions along links in between them.
– Commonly used in route-finding web sites and in-vehicle navigation
systems.
– Other applications include packet routing in computer networks, military
operations planning, and airline travel planning.
– Touring problems: related to route-finding problems, but
take locations already visited into consideration.
Problem-solving agent
Example Problems Real-world problems
– Traveling salesperson problem: a touring problem where
each city is visited exactly once.
– Goal: find the shortest tour.
– Used for planning the movements of circuit-board drills and of stocking
machines on shop floors.
Searching for Solutions
A search algorithm takes a search problem as input and
returns a solution, or an indication of failure.
Solutions are action sequences.
Search algorithms consider various possible action sequences.
These form a search tree, with the initial state at the root.
Branches are actions.
Nodes correspond to states in the state space of the problem.
Applying each legal action to the current state is called expanding
that state.
This generates a new set of successor states.
Searching for Solutions
The root of the tree corresponds to the initial state of the problem.
The state space describes the (possibly infinite) set of states in the world, and the
actions that allow transitions from one state to another.
The search tree describes paths between these states, reaching towards the goal.
The search tree may have multiple paths to any given state.
The figure below shows the first few steps in finding a path from Arad to Bucharest.
The root node of the search tree is the initial state, Arad. We expand the node by
applying each legal action, generating child nodes for its successors.
Searching for Solutions
Figure 3.6 Partial search trees for finding a route from Arad to Bucharest.
Searching for Solutions
The parent node In(Arad) leads to three child nodes:
• In(Sibiu), In(Timisoara), In(Zerind).
A node without children is a leaf node.
The set of all nodes available for expansion is called the frontier.
We continue expanding nodes on the frontier until either a solution is
found or there are no more states to expand.
All search algorithms follow this pattern; they differ in how they choose
which state to expand next, their search strategy.
Searching for Solutions
In(Arad) appears in the tree twice; it is said to be a repeated state,
generated here by a loopy path.
A complete search tree for this problem would be infinite.
Defense? Path costs are additive and step costs are nonnegative, so a
loopy path is never better than the same path with the loop removed and
can be disregarded.
Loopy paths are a special case of redundant paths, which exist whenever
there is more than one way to get from one state to another.
Redundant paths can often be avoided by a more careful formulation of the
problem (e.g., the column-by-column 8-queens formulation).
Searching for Solutions
Other times, can't be avoided: problems with reversible actions,
route-finding problems, sliding-block puzzles.
How to avoid redundant paths?
‒ Remember where you've been.
How?
‒ Keep an explored set that remembers every expanded node.
Newly generated nodes whose states have already been seen can be
discarded instead of being added to the frontier.
Uninformed Search Strategies
An uninformed search algorithm is given no clue about how
close a state is to the goal(s).
No additional information available to algorithm, other than
what's in the problem definition.
Known as a blind search, or uninformed search.
Order in which nodes are expanded sets each apart.
Searches that can tell whether one non-goal state is more promising than
another are informed searches, also known as heuristic searches (next
section).
Uninformed Search Strategies
1. Breadth-first search
2. Depth-first search
3. Uniform-cost search
4. Iterative deepening search
5. Bidirectional search
Breadth-first search
Breadth-First Search
Breadth-first search (BFS) finds a path between two nodes by taking one
step down every path and then immediately backtracking.
It is often implemented by maintaining a queue of vertices to visit.
Nodes adjacent to the start are expanded first, before moving on to
nodes farther away.
BFS uses a FIFO queue for the frontier.
New nodes go to the back of the queue; older nodes at the front are
expanded first.
The nodes are visited level by level.
Breadth-First Search Example
Example 1.
Figure 3.8 Breadth-first search on a simple binary tree. At each stage, the node to be
expanded next is indicated by the triangular marker.
Breadth-First Search Algorithm
function BREADTH-FIRST-SEARCH(problem) returns a solution node or failure
  node ← NODE(problem.INITIAL)
  if problem.IS-GOAL(node.STATE) then return node
  frontier ← a FIFO queue, with node as an element
  reached ← {problem.INITIAL}
  while not IS-EMPTY(frontier) do
    node ← POP(frontier)
    for each child in EXPAND(problem, node) do
      s ← child.STATE
      if problem.IS-GOAL(s) then return child
      if s is not in reached then
        add s to reached
        add child to frontier
  return failure
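A runnable Python version of the same idea, written against the hypothetical Problem interface sketched earlier; it returns the list of states from the initial state to a goal, or None on failure.

from collections import deque

def breadth_first_search(problem):
    node = problem.initial
    if problem.is_goal(node):
        return [node]
    frontier = deque([[node]])                 # FIFO queue of paths
    reached = {node}                           # states already generated
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        for action in problem.actions(state):
            child = problem.result(state, action)
            if problem.is_goal(child):
                return path + [child]          # goal test applied at generation time
            if child not in reached:
                reached.add(child)
                frontier.append(path + [child])
    return None

Called on the VacuumWorld sketch above with initial state (0, (True, True)), for instance, it returns the four-state sequence corresponding to Suck, Right, Suck.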
Breadth-First Search
Analysis:
‒ Complete: since all nodes are examined, a solution will be found
if one exists.
‒ The shallowest goal node is not necessarily the optimal one.
‒ It is optimal if the path cost is a nondecreasing function of the depth
of the node (for example, when all step costs are equal).
‒ Example: compare the farthest-right, bottom-most node with the
farthest-left, bottom-most node.
Breadth-First Search
Complexity:
‒ Time: In a tree where states have b successors, each level
generates more and more nodes:
Breadth-First Search
If we begin at the root, A
‒ then visit each of its children, say from left to right:
• A, B, C, D
‒ The children of these first-level nodes are then visited:
• A, B, C, D, E, F, G
Depth-first search
Depth-first search (DFS), or depth-first traversal, is a recursive
algorithm for visiting all the vertices of a graph or tree data
structure.
Traversal means visiting all the nodes of a graph.
DFS always expands the deepest node in the current frontier first.
When the nodes at the deepest level have been expanded, the search backs
up to the next-deepest node that still has unexpanded successors.
DFS is (usually) a graph-search algorithm, using a LIFO queue (stack): the
most recently generated node is expanded first.
It visits nodes in pre-order and uses backtracking.
Depth-first search
If we begin at the root, A,
‒ we then select one of its children, say B
• A, B
‒ select one of B's children, say E, and visit it
• A, B, E
‒ Before visiting the other child of B, we would visit E's children
‒ Because there are none, we backtrack to B, visit F, and then F's child H
• A, B, E, F, H
‒ All of B's descendants are visited, so we backtrack to the last previously
visited node, A
• select C and its descendants (of which there are none):
• A, B, E, F, H, C
‒ Finally, we visit D and its descendants
• A, B, E, F, H, C, D, G
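A short recursive sketch of this pre-order traversal; the adjacency map below is reconstructed from the example (A's children are B, C, D; B's are E, F; F's child is H; D's child is G) and is only illustrative.

def depth_first_traversal(graph, start):
    """Recursive pre-order DFS over a tree given as {node: [children]}."""
    order = [start]
    for child in graph.get(start, []):
        order += depth_first_traversal(graph, child)
    return order

tree = {"A": ["B", "C", "D"], "B": ["E", "F"], "F": ["H"], "D": ["G"]}
print(depth_first_traversal(tree, "A"))        # ['A', 'B', 'E', 'F', 'H', 'C', 'D', 'G']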
Depth-first search
DFS can be implemented iteratively (graph-based) or recursively
(tree-based).
Its properties vary with the implementation (graph-based or tree-based).
Graph-search version: avoids repeated states and redundant paths.
Complete in finite state spaces, since it eventually expands every
reachable node.
Tree-search version: not complete; it can loop forever. It can be modified
to address this (e.g., by marking states already on the current path), but
it still suffers from redundant paths and repeated states.
Depth-first search
If either version follows an infinite non-goal path, it fails.
Neither version is optimal. Consider two goal nodes along separate paths,
one at the bottom of the tree and one higher up in a different subtree:
DFS may return the deeper goal even though the shallower one is cheaper.
Complexity:
‒ Time: bounded by the size of the state space for a graph-search DFS.
‒ A tree-search DFS could generate all O(b^m) nodes, where m is the
maximum depth of the tree.
‒ This can be much larger than the size of the state space.
Depth-first search
For a graph-search DFS there is no space advantage over BFS.
A tree-search DFS, however, needs to store only the path from the root
to the current leaf node (plus the remaining unexpanded sibling nodes
along the way).
Once a node has been expanded, it can be removed from memory as soon
as all of its descendants have been explored.
For a state space with branching factor b and maximum depth m, tree-search
DFS requires storage of only O(bm) nodes.
This memory efficiency is why DFS is the go-to search in many areas of AI.
Uniform cost search
Uniform-cost search is a searching algorithm used for traversing
a weighted tree or graph.
When all step costs are equal, BFS is optimal because it always
expands the shallowest node.
Instead, uniform-cost search expands the node with the lowest path cost.
It uses a priority queue ordered by g(n), where n is a node and g(n)
is the cost of the path from the initial state to n.
It is also called Dijkstra's algorithm.
Uniform cost search
How else is it different from BFS?
‒ The goal test is applied to a node when it is selected for
expansion rather than when it is first generated.
Why? The first goal node that is generated may be on a
suboptimal path.
And, a test is added in case a better path is found to a node
currently on the frontier.
It is optimal because nodes are expanded in order of path cost
and path costs never decrease as nodes are added.
Uniform cost search
Sibiu → Bucharest
‒ Sibiu → Fagaras → Bucharest = 310
‒ Sibiu → Rimnicu Vilcea → Pitesti → Bucharest = 278
‒ The former (310) is discarded once the cheaper path is found, and the
latter (278) is kept.
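A Python sketch of uniform-cost search over the hypothetical Problem interface used earlier; it keeps the frontier in a priority queue ordered by g and applies the goal test at expansion time.

import heapq
import itertools

def uniform_cost_search(problem):
    counter = itertools.count()                # tie-breaker so the heap never compares states
    frontier = [(0, next(counter), [problem.initial])]
    best_g = {problem.initial: 0}              # cheapest known cost to each state
    while frontier:
        g, _, path = heapq.heappop(frontier)
        state = path[-1]
        if problem.is_goal(state):             # goal test on expansion, not on generation
            return g, path
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.action_cost(state, action, child)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2             # a better path to child was found
                heapq.heappush(frontier, (g2, next(counter), path + [child]))
    return None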
Depth-limited search
A depth-limited search (DLS) is similar to depth-first search but with a
predetermined depth limit l.
Nodes at the depth limit are treated as if they have no further
successors.
This addresses the problem of infinite state spaces for DFS by limiting
the depth to which the search may go.
Depth-limited search
DLS can be incomplete if the limit is less than the depth of the
shallowest goal.
What if we limit the search to 3 levels, but the goal is on the 5th level?
The goal won't be found.
It becomes nonoptimal if the limit l is greater than d, the depth of the
shallowest solution.
Time complexity is O(b^l), where l is the depth limit.
Space complexity is O(bl).
DFS can be viewed as DLS with l = ∞.
Depth-limited search
Prior knowledge can help choose the depth limit.
In route finding, for example, if any location is reachable from the start
in at most 4 steps, no matter the configuration of the map, we can set l = 4.
Depth-limited search
Depth-limited search can terminate with two kinds of failure:
‒ Standard failure value: indicates that the problem has no
solution at all.
‒ Cutoff failure value: indicates that there is no solution within the
given depth limit.
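A recursive Python sketch of depth-limited search with the two failure values, again assuming the hypothetical Problem interface; "cutoff" stands for the cutoff failure value and None for standard failure.

def depth_limited_search(problem, limit):
    def recurse(path, depth):
        state = path[-1]
        if problem.is_goal(state):
            return path
        if depth == limit:
            return "cutoff"                    # the depth limit was reached on this branch
        cutoff_occurred = False
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child in path:                  # skip cycles on the current path
                continue
            result = recurse(path + [child], depth + 1)
            if result == "cutoff":
                cutoff_occurred = True
            elif result is not None:
                return result
        return "cutoff" if cutoff_occurred else None   # None: no solution below this node
    return recurse([problem.initial], 0)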
Iterative deepening search
The iterative deepening algorithm is a combination of the DFS and BFS
algorithms.
It finds the best depth limit by gradually increasing the limit until a
goal is found.
The algorithm performs depth-first search up to a certain depth limit and
keeps increasing that limit after each iteration until the goal node is
found.
It combines breadth-first search's completeness with depth-first
search's memory efficiency.
Iterative deepening search
Iterative deepening is a useful uninformed search when the search space
is large and the depth of the goal node is unknown.
Disadvantage: its main drawback is that it repeats all the work of the
previous phases.
1st iteration: A
2nd iteration: A, B, C
3rd iteration: A, B, D, E, C, F, G
4th iteration: A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the
goal node.
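Iterative deepening is then just a short loop around the depth-limited sketch above; max_depth is an arbitrary safety bound for the sketch, not part of the algorithm.

def iterative_deepening_search(problem, max_depth=50):
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, limit)
        if result != "cutoff":
            return result                      # either a solution path or None (no solution)
    return "cutoff"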
Bidirectional search
The bidirectional search algorithm runs two simultaneous searches, one
from the initial state (the forward search) and one from the goal
(the backward search), to find the goal node.
Bidirectional search replaces a single search graph with two smaller
subgraphs: one search starts from the initial vertex and the other
starts from the goal vertex.
The search stops when the two frontiers intersect.
Bidirectional search can use search techniques such as BFS, DFS,
DLS, etc.
Bidirectional search
Advantages: Bidirectional search is fast. Bidirectional search
requires less memory.
Disadvantages: Implementation of the bidirectional search tree is
difficult. In bidirectional search, one should know the goal state in
advance.
Informed (Heuristic) Search Strategies
An informed search strategy uses problem-specific knowledge beyond the
definition of the problem itself.
‒ It is more efficient than uninformed search.
Informed search algorithms are especially useful for large search
spaces.
Informed search uses the idea of a heuristic, so it is also called
heuristic search.
Heuristic function: a function used in informed search that estimates
how promising a node is and so helps find the most promising path.
Informed (Heuristic) Search Strategies
Other characteristics of Informed Search Algorithm are:
‒ It is also known as Directed Search or Heuristic Search.
‒ Tries to reduce the amount of search.
‒ Dependent on the evaluation function to determine the ordering
of operations.
‒ Nodes that the heuristic rates as most promising are placed at the front
(and expanded first).
‒ Evaluation is often based on empirical observations.
‒ It is more useful for large search space.
Informed (Heuristic) Search Strategies
1. Best-first Search (Greedy search)
2. A∗ search
Best-first Search (Greedy search)
The greedy best-first search algorithm always selects the path that
appears best at the moment.
It combines aspects of the DFS and BFS algorithms.
It uses a heuristic function to guide the search.
Best-first search lets us take advantage of both algorithms: at each step,
we can choose the most promising node.
Best-first search: a general TREE-SEARCH or GRAPH-SEARCH
algorithm that selects a node for expansion based on an evaluation
function, f(n).
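A generic best-first search can be sketched in Python as a priority queue ordered by f; the evaluation function is passed in as a parameter, so greedy search, uniform-cost search, and A* (later in this section) differ only in their choice of f. The interface is the hypothetical Problem sketch used earlier.

import heapq
import itertools

def best_first_search(problem, f):
    """Expand the frontier node with the lowest f(g, state),
    where g is the path cost accumulated so far."""
    counter = itertools.count()                # tie-breaker for the heap
    start = problem.initial
    frontier = [(f(0, start), next(counter), 0, [start])]
    best_g = {start: 0}
    while frontier:
        _, _, g, path = heapq.heappop(frontier)
        state = path[-1]
        if problem.is_goal(state):
            return g, path
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.action_cost(state, action, child)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (f(g2, child), next(counter), g2, path + [child]))
    return None

Uniform-cost search corresponds to f(g, s) = g; the greedy and A* variants below simply plug in a heuristic h.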
Best-first Search (Greedy search)
Greedy best-first search tries to expand the node closest to the goal.
f(n) = h(n),
where h(n) = estimated cost from node n to the goal.
Note: if the current node n is a goal node, the value of h(n) will be 0.
The greedy best-first algorithm is implemented with a priority queue.
‒ A priority queue is a special type of queue in which each element is
associated with a priority value.
‒ Elements are served on the basis of their priority.
‒ That is, higher-priority elements are served first.
This strategy is quite similar to uniform-cost search.
‒ The difference is that it uses heuristic estimates rather than the cost
of paths from the start state.
Best-first Search (Greedy search)
Example: using the straight-line distance heuristic, called h_SLD.
h_SLD(In(Arad)) = 366.
Let's Examine:
‒ Arad → Sibiu → Fagaras → Bucharest
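In terms of the generic best_first_search sketched above, greedy best-first search simply ranks frontier nodes by the heuristic alone; h here would be a lookup such as a straight-line-distance table like h_SLD (an illustrative assumption).

def greedy_best_first_search(problem, h):
    # f(n) = h(n): ignore the path cost g and trust only the heuristic estimate
    return best_first_search(problem, lambda g, s: h(s))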
A* search:
Minimizing the total estimated solution cost
A* search is the most widely used informed search algorithm where
a node n is evaluated by combining values of the functions
g(n) and h(n).
The function g(n) is the path cost from the start/initial node to a
node n and h(n) is the estimated cost of the cheapest path from
node n to the goal node.
• Therefore, we have f(n) = g(n) + h(n)
‒ where f(n) is the estimated cost of the cheapest solution through n.
‒ So, in order to find the cheapest solution, try to find the lowest
values of f(n).
A* search:
Minimizing the total estimated solution cost
A* search is the best-known form of best-first search.
It expands the node with the lowest value of g(n) + h(n).
It combines features of uniform-cost search and greedy best-first search.
Assuming a good (admissible) h(n), A* is both complete and optimal.
It is similar to UNIFORM-COST-SEARCH, but A* uses g(n) + h(n)
instead of just g(n).
Pronounced as “A-star search”
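Using the same generic best_first_search sketch from the best-first slide, A* differs from the greedy version only in its evaluation function.

def a_star_search(problem, h):
    # f(n) = g(n) + h(n): path cost so far plus estimated cost to the goal
    return best_first_search(problem, lambda g, s: g + h(s))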
A* search:
Minimizing the total estimated solution cost
A* search Example. The goal of reaching Bucharest
Constraint Satisfaction Problem
In the previous search problems:
Problems are solved by searching in a space of states.
A state is a “black box” – any data structure that supports a successor
function, a heuristic function, and a goal test – all problem-specific.
Each state is atomic, or indivisible – a black box.
For each problem we need domain-specific code to describe the
transitions between states.
In a CSP, by contrast, a state is defined by variables with values, and a
problem is solved when each variable has a value that satisfies all the
constraints on that variable.
‒ A problem described this way is called a constraint satisfaction
problem, or CSP.
Constraint Satisfaction Problem
Definition: A constraint satisfaction problem consists of three
components, X, D, and C:
X is a set of variables, {X1, . . . ,Xn}.
D is a set of domains, {D1, . . . ,Dn}, one for each variable.
C is a set of constraints that specify allowable combinations of
values.
A domain, Di, consists of a set of allowable values, {v1, . . . ,vk}, for variable Xi.
Solution: assign a value to each variable such that no constraint is
violated.
The simplest algorithm is backtracking.
Constraint Satisfaction Problem
Example 1: Constraint Satisfaction Problem
Variables and their domains:
A = {1, 2, 3}
B = {1, 2, 3}
C = {1, 2, 3}
Constraints:
A > B
B ≠ C
A ≠ C
Solution steps (backtracking):
A = 1 (a partial assignment, because the other variables are still unassigned)
A = 1, B = 1: wrong (violates A > B), backtrack on B
A = 1, B = 2: wrong, backtrack on B
A = 1, B = 3: wrong, backtrack on A
A = 2, B = 1: first constraint satisfied, but no value for C yet
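The backtracking steps above can be reproduced with a minimal Python solver; the representation of constraints as (scope, predicate) pairs is an illustrative choice, not a standard library API.

def backtracking_search(variables, domains, constraints):
    """Assign variables in order, backtracking when a constraint whose
    variables are all assigned is violated."""

    def consistent(assignment):
        return all(pred(*(assignment[v] for v in scope))
                   for scope, pred in constraints
                   if all(v in assignment for v in scope))

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return dict(assignment)            # complete and consistent
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            assignment[var] = value
            if consistent(assignment):
                result = backtrack(assignment)
                if result is not None:
                    return result
            del assignment[var]                # undo and try the next value
        return None

    return backtrack({})

# The example above: A > B, B != C, A != C, each with domain {1, 2, 3}
print(backtracking_search(
    ["A", "B", "C"],
    {v: [1, 2, 3] for v in "ABC"},
    [(("A", "B"), lambda a, b: a > b),
     (("B", "C"), lambda b, c: b != c),
     (("A", "C"), lambda a, c: a != c)]))      # {'A': 2, 'B': 1, 'C': 3}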
Constraint Satisfaction Problem
Solution
Each state in a CSP is defined by an assignment of values to some
or all of the variables.
An assignment that does not violate any constraints is called a
consistent or legal assignment.
A complete assignment is one in which every variable is assigned a value.
A solution to a CSP is a consistent and complete assignment.
CSPs allow useful general-purpose algorithms with more power
than standard search algorithms.
Example: Map Coloring
Constraint Satisfaction Problem
Example: Map Coloring
Figure 6.1 (a) The principal states and territories of Australia. Coloring this map
can be viewed as a constraint satisfaction problem (CSP). The goal is to assign
colors to each region so that no neighboring regions have the same color.
Solution
{
WA = red, NT = green,
Q = red, NSW = green,
V = red, SA = blue,
T = red
}.
Constraint Satisfaction Problem
Constraint Graph
‒ Constraint graph: nodes are variables, arcs are constraints
‒ Binary CSP: each constraint relates two variables
‒ CSP conforms to a standard pattern
• a set of variables with assigned values
• generic successor function and goal test
• generic heuristics
• reduce complexity
Constraint Satisfaction Problem
CSP as a Search Problem
‒ Initial state: { } – all variables are unassigned
‒ Successor function:
• a value is assigned to one of the unassigned variables with no conflict
‒ Goal test: a complete assignment
‒ Path cost: a constant cost for each step
‒ Solution appears at depth n if there are n variables
‒ Depth-first or local search methods work well
Constraint Satisfaction Problem
Types of Variables
‒ Discrete variables
• finite domains:
– n variables, domain size d: O(d^n) complete assignments
– Worst case, can’t solve finite-domain CSPs in less than
exponential time
• infinite domains:
– integers, strings, etc.
– e.g., job scheduling, variables are start/end days for each job
– need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3
‒ Continuous variables
• e.g., start/end times for Hubble Space Telescope observations
• linear constraints solvable in polynomial time by linear programming
Constraint Satisfaction Problem
Types of Constraints
‒ Unary constraints involve a single variable, e.g., SA ≠ green
‒ Binary constraints involve pairs of variables, e.g., SA ≠ WA
‒ Higher-order constraints involve 3 or more variables
• e.g., cryptarithmetic column constraints
Constraint Satisfaction Problem
Real-World CSPs Problems
‒ Assignment problems e.g., who teaches what class
‒ Timetabling problems e.g., which class is offered when and where?
‒ Transportation scheduling problems
‒ Factory scheduling problems
Assignment 1 (worth 10%)
‒ Class scheduling CSP problem :
• There is a fixed number of professors and classrooms, a list of classes to
be offered, and a list of possible time slots for classes.
• Each professor has a set of classes that he or she can teach.
Heuristic Functions in AI
As we have already seen, informed search makes use of heuristic
functions in order to reach the goal node more effectively.
A heuristic function (or simply a heuristic) is a shortcut for solving a
problem when there is no exact method or when the time to obtain an exact
solution is too long.
There are typically several pathways in a search tree from the current
node to the goal node, and a heuristic helps choose among them.
The selection of a good heuristic function certainly matters.
A good heuristic function is judged by its efficiency.
Heuristic Functions in AI
The more information about the problem a heuristic uses, the more
processing time it takes to compute.
Some toy problems, such as the 8-puzzle, 8-queens, tic-tac-toe, etc., can
be solved more efficiently with the help of a heuristic function.
Consider the following 8-puzzle problem, where we have a start
state and a goal state.
‒ Our task is to slide the tiles of the current/start state into the
order given by the goal state.
‒ There are four possible moves: left, right, up, or down.
‒ There can be several ways to convert the current/start state to the
goal state, but we can use a heuristic function h(n) to solve the
problem more efficiently.
A heuristic function for the 8-puzzle problem is defined below:
‒ h(n) = number of tiles out of position.
‒ So, there is a total of three tiles out of position, i.e., 6, 5, and 4
(do not count the empty tile present in the goal state), i.e., h(n) = 3.
Now, we require to minimize the value of h(n) to 0.
We can construct a state-space tree to minimize the h(n) value to 0.
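As a small illustration, the misplaced-tiles heuristic is a few lines of Python; the start and goal layouts below are hypothetical (they are not the figure from the slides), with 0 marking the blank square.

def misplaced_tiles(state, goal):
    """h(n) = number of tiles out of position; the blank (0) is not counted."""
    return sum(1 for tile, target in zip(state, goal)
               if tile != 0 and tile != target)

start = (1, 2, 3,                              # read row by row
         8, 0, 4,
         7, 6, 5)
goal = (1, 2, 3,
        8, 6, 4,
        7, 5, 0)
print(misplaced_tiles(start, goal))            # 2 for these illustrative layouts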