Chapter 3
Problem Solving:
Problem solving, particularly in artificial intelligence, may be characterized as a
systematic search through a range of possible actions in order to reach some
predefined goal or solution.
•Search
-Determine the possible sequences of actions that lead to states of known value, and then choose the best sequence.
•Execute
-Given the solution, perform its actions.
Problem formulation:
A problem is defined by:
Initial state: the state from which the agent starts
Actions: the set of actions available to the agent in each state
Transition model: a description of what each action does, i.e. the state that results from doing an action in a state
Goal test: determines whether a given state is a goal state or not
Path cost: a function that assigns a numeric cost to each path
Example Problems
The problem-solving approach has been applied to a vast array of task
environments.
We list some of the best known here, distinguishing between toy and real world
problems.
Such problems tend not to have a single agreed-upon description, but we can give
the general flavor of their formulations.
Toy problems
The first example we examine is the vacuum-cleaner world, which is shown in the figure.
Initial state:
Any state can be designated as the initial state.
Actions:
In this simple environment, each state has just three actions: Left, Right and Suck.
Transition model:
The actions have their expected effects, except that moving Left in the leftmost
square, moving Right in the rightmost square, and Sucking in an already clean square
have no effect.
Path cost:
Each step costs 1, so the path cost is the number of steps in the path.
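The vacuum-world formulation above can be sketched in a few lines of Python; the two-square layout and the encoding of a state as a (location, dirt status) pair are assumptions made here for illustration.

# Minimal sketch of the two-square vacuum world formulation.
# A state is (agent_location, dirt_status), where dirt_status is a
# tuple of booleans for square 0 (left) and square 1 (right).

def actions(state):
    return ["Left", "Right", "Suck"]     # every state has the same three actions

def result(state, action):
    loc, dirt = state
    if action == "Left":
        return (max(loc - 1, 0), dirt)   # no effect in the leftmost square
    if action == "Right":
        return (min(loc + 1, 1), dirt)   # no effect in the rightmost square
    new_dirt = list(dirt)
    new_dirt[loc] = False                # Suck: no effect if already clean
    return (loc, tuple(new_dirt))

def goal_test(state):
    return not any(state[1])             # goal: both squares are clean

start = (0, (True, True))
print(result(start, "Suck"))             # -> (0, (False, True))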
8-puzzle
It consists of a 3x3 board with eight numbered tiles and a blank space.
A tile adjacent to the blank space can slide into the space.
The object is to reach a specified goal state, such as the one shown on the right of the
figure.
Initial state:
Any state can be designated as the initial state.
Actions:
The simplest formulation defines the actions as movements of the blank space Left,
Right, Up or Down.
Transition model:
Given a state and action, this returns the resulting state; for example, if we apply Left
to the start state in the figure, the resulting state has the 5 and the blank switched.
Goal test:
This checks whether the state matches the goal configuration shown in the figure.
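A sketch of the 8-puzzle transition model in the same style; the tuple encoding (row-major, with 0 standing for the blank) and the particular goal layout are assumptions for illustration, not taken from the figure.

# Sketch of the 8-puzzle: actions move the blank Left, Right, Up or Down.
MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def result(state, action):
    b = state.index(0)                       # position of the blank
    if action == "Left" and b % 3 == 0:      # blank already on the left edge
        return state
    if action == "Right" and b % 3 == 2:     # blank already on the right edge
        return state
    if action == "Up" and b < 3:             # blank already on the top row
        return state
    if action == "Down" and b > 5:           # blank already on the bottom row
        return state
    t = b + MOVES[action]
    s = list(state)
    s[b], s[t] = s[t], s[b]                  # slide the adjacent tile into the blank
    return tuple(s)

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)           # an assumed goal layout
print(result(goal, "Down"))                  # the blank and the 3 switch places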
Real-world problem: airline travel planning
States:
Each state includes a location (e.g., an airport) and the current time.
Initial state:
This is specified by the user’s query.
Transition model:
The state resulting from taking a flight will have the flight’s destination as the
current location and the flight’s arrival time as the current time.
Goal test:
Are we at the final destination specified by the user?
Path cost:
This depends on monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane and so on.
Other real-world problems:
-Robot navigation
-Touring problems
A search problem and its solution are described in terms of:
•State space: the set of all states reachable from the initial state
•Path: a sequence of states connected by a sequence of actions
•Path cost: a numeric cost assigned to a path
Importance of Search in AI
• Many of the tasks underlying AI can be phrased in terms of search for the solution
to the problem.
• Many goal-based agents are essentially problem-solving agents, which must decide
what to do by searching for a sequence of actions that leads to their solutions.
• For production systems, we need to search for a sequence of rule applications that
leads to the required fact or action.
• For a neural network system, we need to search for the set of connection weights that
will result in the required input-to-output mapping.
Searching
A search problem
The figure below contains a representation of a map.
The nodes represent cities and the links represent direct road connections between
cities.
The number associated with a link represents the length of the corresponding road.
[Figure: a road map whose nodes include the cities S, D, E, F and G, with road lengths such as 5, 4, 3 and 2 marked on the links.]
Uninformed Search
In the case of the uninformed (blind) search methods, the order in which potential solution
paths are considered is arbitrary; the search uses no domain-specific information to judge
where the solution is likely to lie.
•Time Complexity: How long (worst or average case) does it take to find a
solution? Usually measured in terms of the number of nodes expanded
•Space Complexity: How much space is used by the algorithm? Usually measured
in terms of the maximum number of nodes in memory at a time
b -- maximum branching factor (number of successors of any node) of the search tree
d -- depth of the least-cost (shallowest) solution
m -- maximum length of any path in the state space
Breadth First Search
All nodes are expanded at a given depth in the search tree before any nodes at the
next level are expanded, until the goal is reached.
Time complexity:
Assume a state space where every state has b successors.
The root has b successors, each node at the next level again has b successors
(b^2 in total), and so on; with the shallowest goal at depth d, the total number of
nodes generated is 1 + b + b^2 + ... + b^d + (b^(d+1) - b) = O(b^(d+1)).
Space complexity:
Each node that is generated must remain in memory.
Total no. of nodes in memory:
1 + b + b^2 + b^3 + ... + b^d + (b^(d+1) - b) = O(b^(d+1))
Optimality:
Yes, if all step costs are identical. Otherwise it is not optimal, but it still finds the
solution with the shortest path length (the shallowest solution).
If paths do not all have the same cost, the shallowest solution may not be optimal.
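A minimal sketch of breadth-first search in Python; the graph below is a small made-up example, not the map from the figure.

from collections import deque

GRAPH = {"S": ["A", "D"], "A": ["B"], "D": ["E"],
         "B": ["C"], "E": ["F"], "C": [], "F": ["G"], "G": []}

def bfs(start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # the shallowest solution
        for nbr in GRAPH[node]:
            if nbr not in visited:       # generate each node at most once
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

print(bfs("S", "G"))                     # ['S', 'D', 'E', 'F', 'G']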
Depth First Search
It always expands the deepest node in the current frontier of the search tree first, backing up only when a dead end (a node with no successors) is reached.
DFS Evaluation:
Completeness:
Does it always find a solution if one exists? NO.
If the search space is infinite or contains loops, then DFS may not find a solution.
Time complexity: O(b^m) in the worst case, where m is the maximum depth of the search tree.
Space complexity: O(bm), since only the current path and the unexpanded siblings along it need to be stored.
Optimality: No; DFS may not always give an optimal solution.
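For comparison, a sketch of depth-first search on the same made-up graph; the visited set makes this the graph-search variant, which avoids getting stuck in loops.

GRAPH = {"S": ["A", "D"], "A": ["B"], "D": ["E"],
         "B": ["C"], "E": ["F"], "C": [], "F": ["G"], "G": []}

def dfs(start, goal):
    stack = [[start]]                    # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path                  # first solution found, not necessarily shortest
        if node in visited:
            continue
        visited.add(node)
        for nbr in reversed(GRAPH[node]):    # reversed so successors pop left-to-right
            stack.append(path + [nbr])
    return None

print(dfs("S", "G"))                     # ['S', 'D', 'E', 'F', 'G'] (A's branch is a dead end)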
Uniform Cost Search
The search continues by visiting the node which has the least total cost from
the root.
In the queue, each node is associated with its total path cost from the root, where
the least-cost paths are given highest priority.
The node at the head of the queue is subsequently expanded, adding the next set
of connected nodes with the total path cost from the root to the respective node.
The uniform-cost search is complete and optimal if the cost of each step exceeds
some positive bound ε.
It does not care about the number of steps, only about the total path cost.
•Space? In the worst case, the same as BFS.
•Optimal? Yes
[Worked example: uniform-cost search on a small tree rooted at A, whose children B and C have path costs 4 and 11. Paths from the root are generated; since B has the least cost, it is expanded first, and expansion continues in order of total path cost until a goal is reached. Note: heuristic estimates are not used in this search!]
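A sketch of uniform-cost search with a priority queue keyed on total path cost; the graph and its edge costs are invented for illustration.

import heapq

GRAPH = {"S": [("A", 4), ("D", 5)], "A": [("B", 4)], "D": [("E", 2)],
         "B": [("C", 4)], "E": [("F", 4)], "C": [], "F": [("G", 3)], "G": []}

def ucs(start, goal):
    frontier = [(0, start, [start])]     # priority queue ordered by path cost g
    best = {start: 0}                    # cheapest cost found so far per node
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path               # the first goal popped has the least cost
        for nbr, cost in GRAPH[node]:
            new_g = g + cost
            if new_g < best.get(nbr, float("inf")):
                best[nbr] = new_g
                heapq.heappush(frontier, (new_g, nbr, path + [nbr]))
    return None

print(ucs("S", "G"))                     # (14, ['S', 'D', 'E', 'F', 'G'])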
Depth Limited Search
It works exactly like depth-first search, but avoids its drawbacks regarding
completeness by imposing a maximum limit l on the depth of the search.
Even if the search could still expand a vertex beyond that depth, it will not do so and
thereby it will not follow infinitely deep paths or get stuck in cycles.
Therefore depth-limited search will find a solution if one exists within the depth limit
l, which guarantees completeness in that case.
Yet it introduces another source of problems if we are unable to make a good guess
for l.
Time complexity: O(b^l)
Space complexity: O(bl), i.e. b times l (linear in the depth limit)
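A recursive sketch of depth-limited search; the toy graph and the chosen limits are illustrative.

GRAPH = {"S": ["A", "D"], "A": ["B"], "D": ["E"],
         "B": ["C"], "E": ["F"], "C": [], "F": ["G"], "G": []}

def dls(node, goal, limit, path=()):
    path = path + (node,)
    if node == goal:
        return path
    if limit == 0:
        return None                      # cutoff: the depth limit is reached
    for nbr in GRAPH[node]:
        found = dls(nbr, goal, limit - 1, path)
        if found:
            return found
    return None

print(dls("S", "G", 4))                  # ('S', 'D', 'E', 'F', 'G')
print(dls("S", "G", 2))                  # None: the goal lies beyond the limit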
Iterative Deepening Depth First Search (IDDFS)
On each iteration, IDDFS visits the nodes in the search tree in the same order as
depth-first search, but the cumulative order in which nodes are first visited,
assuming no pruning, is effectively breadth-first.
It works by setting a depth of search (say, depth 1) and doing depth-first search
to that depth.
If a solution is found then the process stops; otherwise, increase the depth by,
say, 1 and repeat until a solution is found.
It is a form of depth-first search with an upper bound on how deep the search can
go.
It can produce the same solution that breadth-first search would produce, but does
not require the same memory usage as breadth-first search.
Note that depth-first search achieves its efficiency by generating the next node to
explore only when it is needed.
The breadth-first search algorithm has to grow all the search paths available until a
solution is found, and this takes up memory.
Iterative deepening achieves its memory saving in the same way that depth-first
search does, at the expense of redoing some computations again and again (a
time cost rather than a memory one).
•This is the preferred method for large state spaces, where the solution path length
is unknown.
The overall idea is as follows: the depth limit is increased gradually until the goal
node is found.
Optimality: Yes, like breadth-first search, when all step costs are identical.
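A sketch of iterative deepening built on the depth-limited search above; the toy graph is repeated so the snippet runs on its own.

GRAPH = {"S": ["A", "D"], "A": ["B"], "D": ["E"],
         "B": ["C"], "E": ["F"], "C": [], "F": ["G"], "G": []}

def dls(node, goal, limit, path=()):
    path = path + (node,)
    if node == goal:
        return path
    if limit == 0:
        return None
    for nbr in GRAPH[node]:
        found = dls(nbr, goal, limit - 1, path)
        if found:
            return found
    return None

def iddfs(start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # deepen the cutoff one level at a time
        result = dls(start, goal, limit)
        if result:
            return result                # found at the shallowest possible depth
    return None

print(iddfs("S", "G"))                   # ('S', 'D', 'E', 'F', 'G')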
Bidirectional Search
It runs two simultaneous searches, expanding nodes from the start and goal states simultaneously.
Check at each stage if the nodes of one have been generated by the other, i.e, they
meet in the middle.
Optimality: yes (if done with the correct strategy, e.g. breadth-first)
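A sketch of bidirectional breadth-first search on an undirected toy graph (invented for illustration); it returns the solution length at the point where the two frontiers meet.

from collections import deque

GRAPH = {"S": ["A", "D"], "A": ["S", "B"], "D": ["S", "E"], "B": ["A", "C"],
         "E": ["D", "F"], "C": ["B"], "F": ["E", "G"], "G": ["F"]}

def bidirectional(start, goal):
    if start == goal:
        return 0
    fwd, bwd = {start: 0}, {goal: 0}          # depth reached from each side
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for frontier, other, q in ((fwd, bwd, qf), (bwd, fwd, qb)):
            node = q.popleft()                # advance one node on this side
            for nbr in GRAPH[node]:
                if nbr in other:              # the two searches meet in the middle
                    return frontier[node] + 1 + other[nbr]
                if nbr not in frontier:
                    frontier[nbr] = frontier[node] + 1
                    q.append(nbr)
    return None

print(bidirectional("S", "G"))                # 4 (S-D-E-F-G)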
Informed Search
•Domain-specific knowledge can be used for deciding that certain nodes should be discarded, or pruned, from the search space.
•Even if a blind search will work, we may want a more efficient search method.
Best-First Search
Idea: use an evaluation function f(n) that gives an indication of which node to
expand next.
-f(n) usually gives an estimate of the distance to the goal.
-the node with the lowest value is expanded first.
-the node with the lowest value is expanded first.
Typically, strategies use estimates of the cost of reaching the goal and try to
minimize it.
The node with the lowest evaluation is selected for expansion because that is the
best node, since it supposedly has the closest path to the goal (if the heuristic is
good).
The greedy best-first search algorithm is O(b^m) in terms of space and time
complexity, where b is the average branching factor (the average number of
successors of a state) and m is the maximum depth of the search tree.
Optimality:
– Greedy search is not optimal. Same as DFS.
Time complexity:
– In the worst case greedy search behaves like DFS, so its time complexity is
O(b^m).
Space Complexity:
– In the worst case the space complexity of greedy search is O(b^m), since no nodes can be deleted from memory.
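A sketch of greedy best-first search; the heuristic values in H are invented straight-line-distance estimates to the goal, used only to order the queue.

import heapq

GRAPH = {"S": ["A", "D"], "A": ["B"], "D": ["E"],
         "B": ["C"], "E": ["F"], "C": [], "F": ["G"], "G": []}
H = {"S": 10, "A": 9, "B": 7, "C": 8, "D": 8, "E": 6, "F": 3, "G": 0}

def greedy(start, goal):
    frontier = [(H[start], start, [start])]   # priority queue ordered by h(n) only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in GRAPH[node]:
            heapq.heappush(frontier, (H[nbr], nbr, path + [nbr]))
    return None

print(greedy("S", "G"))                       # ['S', 'D', 'E', 'F', 'G']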
A* Search
Like all informed search algorithms, it first searches the routes that appear to be
most likely to lead towards the goal.
Optimality:
– A* search gives optimal solution when the heuristic function is admissible
heuristic.
Time complexity:
– Exponential with path length, i.e. O(b^d), where d is the length of the path from
the start node to the goal node.
Space complexity:
– It keeps all generated nodes in memory. Hence space, rather than time, is the
major problem.
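A sketch of A* on the same toy graph, ordering the queue by f(n) = g(n) + h(n); the edge costs and heuristic table are illustrative, and the heuristic happens to be admissible for this graph.

import heapq

GRAPH = {"S": [("A", 4), ("D", 5)], "A": [("B", 4)], "D": [("E", 2)],
         "B": [("C", 4)], "E": [("F", 4)], "C": [], "F": [("G", 3)], "G": []}
H = {"S": 10, "A": 9, "B": 7, "C": 8, "D": 8, "E": 6, "F": 3, "G": 0}

def astar(start, goal):
    frontier = [(H[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                       # optimal if h is admissible
        for nbr, cost in GRAPH[node]:
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g + H[nbr], new_g, nbr, path + [nbr]))
    return None

print(astar("S", "G"))                           # (14, ['S', 'D', 'E', 'F', 'G'])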
Hill Climbing
It starts with a random (potentially poor) solution, and iteratively makes small
changes to the solution, each time improving it a little.
When no small change improves the solution any further, the algorithm terminates;
ideally, at that point the current solution is close to optimal, but it is not guaranteed
that hill climbing will ever come close to the optimal solution.
Hill climbing is depth-first search with a heuristic measurement that orders choices
as nodes are expanded.
It always selects the most promising successor of the node last expanded.
In the figure below, the straight-line distance between each city and the goal G is
indicated in square brackets, i.e. the heuristic.
[Figure: the road map from before, annotated with heuristic values in square brackets, e.g. h(S) = 10, h(D) = 8, h(E) = 6, h(F) = 3, together with a trace of the hill-climbing search tree.]
-Another type of problem we may find with hill-climbing searches is a plateau: an
area where the search space is flat, so that all neighbors return the same
evaluation.
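A sketch of hill climbing on a one-dimensional toy objective (the objective function and step size are assumptions for illustration); it stops as soon as no neighbor improves the value, which is exactly where local maxima and plateaus cause trouble.

import random

def objective(x):
    return -(x - 3) ** 2 + 9                 # a single peak at x = 3

def hill_climb(x, step=0.1, max_iters=1000):
    for _ in range(max_iters):
        neighbors = [x + step, x - step]     # the two small changes we can make
        best = max(neighbors, key=objective)
        if objective(best) <= objective(x):
            return x                         # local maximum or plateau: stop
        x = best
    return x

print(round(hill_climb(random.uniform(-10, 10)), 1))   # approximately 3.0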
Simulated Annealing:
Compared to hill climbing the main difference is that SA allows downwards steps.
Simulated annealing also differs from hill climbing in that a move is selected at random,
and the algorithm then decides whether to accept it.
If the move is better than its current position then simulated annealing will always take
it.
If the move is worse (i.e. lesser quality) then it will be accepted based on some
probability.
By analogy with this physical process, each step of the SA algorithm replaces the
current solution by a random "nearby" solution, chosen with a probability that depends
on the difference between the corresponding function values and on a global parameter
T (called the temperature), that is gradually decreased during the process.
The dependency is such that the current solution changes almost randomly when T is
large, but increasingly "downhill" as T goes to zero.
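A sketch of simulated annealing on the same style of toy objective; the cooling schedule, move distribution, and starting temperature are assumptions for illustration.

import math
import random

def objective(x):
    return -(x - 3) ** 2 + 9                 # a single peak at x = 3

def simulated_annealing(x, T=10.0, cooling=0.99, min_T=1e-3):
    while T > min_T:
        candidate = x + random.uniform(-1, 1)        # a random nearby move
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with probability
        # exp(delta / T), which shrinks towards zero as T cools.
        if delta > 0 or random.random() < math.exp(delta / T):
            x = candidate
        T *= cooling                                 # gradually decrease T
    return x

print(round(simulated_annealing(random.uniform(-10, 10))))   # typically about 3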