MCA AI Unit 2
UNIT-2
Introduction to Searching Methods in AI
Searching for solutions,
Uninformed search strategies
Informed search strategies
Local search algorithms and optimization problems
Adversarial Search
Search for games
Alpha Beta Pruning
In AI computing: Expert System = Knowledge Base + Control Strategies.
Ques 2. What are the main aspects considered before solving a complex AI problem? What
is state space representation in AI?
Ans : The following aspects are considered before solving any AI problem:
(i) What are the main assumptions about the problem?
(ii) What kind of AI techniques are required to solve it (e.g., game playing, theorem proving, robotics, expert systems, NLP)?
(iii) At what level does the intelligence have to be modeled?
(iv) How will we know when we have reached the goal state, i.e., the solution?
State Space Representation: A problem-solving system that uses forward reasoning, and whose operators each work by producing a single new object (a new state) in the knowledge base, is said to represent the problem in a state space structure. In backward reasoning we move from the goal state towards the given set of initial states.
Constituents of a state space problem search:
(i) Set of initial states (or a single initial state).
(ii) Operator function: a function which, when applied to a state, transforms it into another state.
(iii) State space: all states reachable from the initial state by any sequence of actions.
(iv) Path: a sequence of steps through the state space.
(v) Path cost: the cost (in terms of time or money) incurred while traversing the state space.
(vi) Pre-conditions: the values certain attributes must have before an operator can be applied in a state.
(vii) Post-conditions: the attributes of a state that are altered by applying an operator.
(viii) Goal state: the final state, in which the problem is solved.
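As a small illustration of these constituents, consider the classic water jug problem (a 4-gallon and a 3-gallon jug; goal: exactly 2 gallons in the 4-gallon jug). The following Python sketch assumes that standard formulation; the operator names are illustrative:

# State: (x, y) = gallons currently in the 4-gallon and 3-gallon jugs.
initial_state = (0, 0)

def is_goal(state):
    return state[0] == 2

def successors(state):
    # Operator function: each operator maps the current state to a new state.
    x, y = state
    to4 = min(4, x + y)   # level of the 4-gal jug after pouring the 3-gal into it
    to3 = min(3, x + y)   # level of the 3-gal jug after pouring the 4-gal into it
    return {
        'fill 4-gal':  (4, y),
        'fill 3-gal':  (x, 3),
        'empty 4-gal': (0, y),
        'empty 3-gal': (x, 0),
        'pour 3->4':   (to4, y - (to4 - x)),
        'pour 4->3':   (x - (to3 - y), to3),
    }

The state space is every (x, y) reachable from (0, 0) by some sequence of these operators; a path is such a sequence, and the path cost is, for example, the number of operator applications.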
Ques 3. What is beam search?
Ans : Beam search is a variation of breadth first search. Keeping just one node in memory (as hill climbing does) might seem an extreme reaction to memory limitations, so beam search keeps track of K states rather than just one.
(i) Beam search moves downward only through the best nodes at each level, chosen by a heuristic; all other nodes are ignored.
(ii) The width of the beam (K) is fixed.
(iii) The K parallel search threads share useful information among themselves; bad moves, when they occur, are halted and their resources are passed to good successors.
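A minimal sketch of beam search with beam width K (the successor generator, the heuristic h where lower is better, and the goal test are assumptions supplied by the caller):

import heapq

def beam_search(start, successors, h, is_goal, k=3):
    beam = [start]                                   # the K states kept per level
    while beam:
        for s in beam:
            if is_goal(s):
                return s
        # Expand every state in the beam, then keep only the K best children.
        children = [c for s in beam for c in successors(s)]
        beam = heapq.nsmallest(k, children, key=h)   # the heuristic prunes the rest
    return None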
Ques 4. Show that DFS is neither complete nor optimal search.
Ans : Since DFS explores depth first, if the search tree is infinite or the graph is cyclic, DFS may never terminate, i.e., it can search for infinite time, so it is not complete. Secondly, DFS may return a solution found at a certain depth even though a much better solution exists at an upper level of the tree space, so it is non-optimal as well.
Time complexity: O(b^l), where b is the branching factor and l is the depth limit. (Depth-limited DFS, which imposes such a cutoff l, solves the infinite-path problem.)
Ques10. Heuristic path algorithm is best first search in which objective function is
f(n) = (2-w) g(n) +w h(n) for what values of w is this algorithm guaranteed to be optimal ?
What kind of search does this perform when w = 0? When w = 1? When w = 2?
Ans : Rewrite f(n) = (2 - w) g(n) + w h(n) as f(n) = (2 - w) [g(n) + (w / (2 - w)) h(n)]. Multiplying f by a positive constant does not change the order in which nodes are expanded, so this behaves exactly like A* with the heuristic (w / (2 - w)) h(n). If h is admissible, this scaled heuristic remains admissible whenever w / (2 - w) ≤ 1, i.e., for 0 ≤ w ≤ 1; so the algorithm is guaranteed to be optimal for w ≤ 1.
When w = 0, f(n) = 2 g(n): this is uniform cost search (breadth first search when all step costs are equal). When w = 1, f(n) = g(n) + h(n): this is A* search. When w = 2, f(n) = 2 h(n): this is greedy best first search, which is not guaranteed to be optimal.
Ques 12. (a) How is AI related to knowledge? Differentiate between declarative and procedural knowledge.
(b) What are AI techniques? Explain their properties and purpose, with examples.
Ans : (a) Knowledge is defined as the body of facts and principles accumulated by humankind, or the act, fact or state of knowing. Knowledge includes familiarity with language, concepts, procedures, rules, ideas, abstractions and places, coupled with an ability to use these in modeling different aspects of a real-world problem.
Intelligence requires the possession of, and access to, knowledge. A characteristic of intelligent people is that they possess much knowledge. Knowledge has the following properties:
(i) It is voluminous
(ii) Knowledge continuously increases
(iii) Understanding knowledge in different contexts depends on the environment, e.g., understanding a visual scene requires knowledge of the kinds of objects in the scene.
(iv) Solving a particular problem requires knowledge of that domain.
(v) Knowledge must be represented effectively, in a meaningful way, without any ambiguity.
Declarative knowledge is passive knowledge expressed in the form of statements and facts about the real world. Examples: the employee database of an organization, a telephone directory.
Procedural knowledge is compiled knowledge related to the performance of some task, i.e., solving a problem systematically, step by step. Example: the steps used to solve a trigonometric problem.
(b) AI Techniques: These are methods for solving various types of tasks, e.g., game playing, theorem proving, robotics, expert systems. Simple data structures like arrays and queues are unable to represent the facts of the real world, so symbolic representations are required.
An AI technique is a method that exploits knowledge, which should be represented in such a way that:
(i) The knowledge captures generalization: it is not necessary to represent each individual situation separately; rather, situations that share important properties are grouped together.
(ii) It can be understood by the people who must provide it, depending on the problem domain (e.g., robotics, medicine, weather forecasting, biometrics, defense and military, aeronautics, space research).
(iii) It can easily be modified to correct errors.
(iv) It can be used in many situations even if it is not totally accurate or complete.
The knowledge base is tied to the inference process, through which an AI system can derive solutions that are not already in the KB. To make computers smarter we need to learn how the human brain works and which properties of the brain can be used to design an AI system; this study is called Cognitive Science.
Before solving a problem and selecting an appropriate AI domain we consider the following points:
What are the assumptions about the knowledge?
Which technique is most appropriate for reaching the goal state?
At what level is the intelligence to be modeled?
How will we know when we have succeeded in building an AI system?
Examples: Translating English sentences into Japanese (requires NLP and NLU). Teaching a child to subtract integers (requires supervised learning). Solving a crossword puzzle (requires logic theory).
Ques 13. What are the characteristics of AI problems? Explain with examples.
Ans : (i) Is the problem decomposable? Is there a way to divide a large, complex problem into sub-problems and then apply a simple approach to each sub-problem to obtain the solution of the main problem?
E.g.: Designing a large piece of software requires designing the individual modules first, and then starting the coding phase.
(ii) Can solution steps be ignored, or at least undone, if they prove to be irrelevant?
Based on this, there are the following categories of problem:
Ignorable: Solve the problem using a simple control strategy that never backtracks (theorem proving).
Recoverable: These problems can be solved using more complex control strategies that may make errors; solution steps can be undone (8-puzzle).
Irrecoverable: Solved by a system that expends a great deal of effort on each decision, since every decision must be final; solution steps cannot be undone (chess, card games, etc.). The control strategy requires a lot of effort, has strict rules, and needs good heuristic information.
(iii) Is the universe predictable? Can we predict in advance the plans and steps to be taken during problem solution? E.g., in the 8-puzzle we can predict future moves, while in a bridge game we cannot.
Certain-outcome problems: 8-puzzle. Uncertain-outcome problems: bridge.
The hardest problems are irrecoverable with uncertain outcomes.
Ques 14. What are Control Strategy and Production System? How are they helpful in AI? Give examples and types.
Ans: A control strategy is an interpreter program used to control the order in which production rules are fired, and to resolve conflicts if more than one rule becomes applicable simultaneously. This strategy repeatedly applies rules to the database until a description of the goal state is produced.
A control strategy must be systematic and must also cause motion.
Example: In the water jug problem we select rules from the given list in such a way that a new state is always generated in the state space (causing motion). Rules must be selected so that duplicate states are avoided.
Types of control strategies: (a) Irrevocable: a rule is selected and applied irrevocably, without provision for reconsideration later. (b) Tentative: a rule is selected and applied with a provision to return later to this point in the computation and apply some other rule.
Production Systems: These were proposed for modeling human problem-solving behavior by Newell & Simon in 1972. They are also referred to as inference systems or rule-based systems.
Roles of a production system:
(i) A powerful knowledge representation method with actions associated with it.
(ii) A bridge between AI research and expert systems.
(iii) A strongly data-driven form of intelligent action: when new input is given to the system, the behavior of the system changes.
(iv) New rules can easily be added to account for new situations without disturbing the rest of the system.
Expert systems have a module known as the Inference Engine, which works on the basis of a production system.
Knowledge Base: Dynamic KB + Static KB. The global database is the central data structure used by the production system. Applying a rule changes the database, so it is dynamic in nature, changing continuously as production rules are applied to system states. It is therefore also known as Working Memory or Short-Term Memory.
The static KB holds the complete set of facts and rules; it is fixed and never changes.
Ques 16. What is missionaries and cannibals problem? Give the production rules for its
solution.
Ans : In this problem, three missionaries and three cannibals must cross a river using a boat which can carry at most two people, under the constraint that, on both banks, the missionaries present can never be outnumbered by cannibals. The boat cannot cross the river by itself with no people on board.
Solution (ML/CL: missionaries/cannibals on the left bank; MB/CB: in the boat; MR/CR: on the right bank):
Rule 6:  1 ML, 1 CL | 2 MB, 0 CB | 0 MR, 2 CR
Rule 7:  1 ML, 1 CL | 1 MB, 1 CB | 1 MR, 1 CR
Rule 8:  0 ML, 2 CL | 2 MB, 0 CB | 1 MR, 1 CR
Rule 9:  0 ML, 2 CL | 0 MB, 1 CB | 3 MR, 0 CR
Rule 10: 0 ML, 1 CL | 0 MB, 2 CB | 3 MR, 0 CR
Rule 11: 0 ML, 1 CL | 0 MB, 2 CB | 3 MR, 0 CR
Rule 12: 0 ML, 1 CL | 0 MB, 1 CB | 3 MR, 1 CR
Rule 13: 0 ML, 0 CL | 0 MB, 0 CB | 3 MR, 3 CR
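A minimal breadth-first sketch that derives such a rule sequence automatically (the state encoding is an illustrative assumption):

from collections import deque

def solve_missionaries_cannibals():
    # State: (missionaries on left bank, cannibals on left bank, boat on left?).
    start, goal = (3, 3, True), (0, 0, False)
    boat_loads = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # (m, c) in the boat

    def safe(m, c):
        # On each bank, missionaries (if any) must not be outnumbered.
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        (m, c, boat), path = frontier.popleft()
        if (m, c, boat) == goal:
            return path
        for dm, dc in boat_loads:
            nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
            nxt = (nm, nc, not boat)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(solve_missionaries_cannibals())   # shortest sequence of bank states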
Ques 17. What are local search algorithms? Explain Hill climbing search.
Ans : Local search algorithms operate using a single current state , rather than multiple paths and
then move further to neighboring states. Paths followed by search are not retained i.e. not kept in
the memory.
Local search algorithms use very less memory. They can find reasonable solutions in large or
infinite state space. These are used to find solution of optimization problems.
Hill climbing technique is informed search strategy which works based on greedy approach.
Heuristic search / informed search sacrifice the claims of completeness. They behave like a tour
guide. They are good to the extent that they point in generally interesting directions and bad to
the extent that they miss the points of interest to individuals.
Simple Hill Climbing: This algorithm selects the FIRST BETTER MOVE (NODE) as a
new CURRENT STATE (next state).
[Figure: current state A with cost 5 and successors B, C, D]
A is the current state with cost 5. Compare the cost of A to its successor nodes one by one, starting from node B. If the optimization problem is to maximize the cost, then a child node with a higher cost than A is a better next node. Since cost(B) < cost(A), we move on to node C. Cost(C) > cost(A), and C is the first better node, so simple hill climbing makes node C the NEW CURRENT STATE (next state), and the successors of C are generated next.
Steepest Ascent Hill Climbing: This algorithm considers all the moves from the CURRENT
STATE and selects the BEST NODE as a new CURRENT STATE (next state).
In the same search tree above, if we apply the steepest-ascent approach, the best node among all the successors is selected as the next state. Comparing A with its successor nodes, D has the maximum cost among them, so D becomes the new current state.
Key Points:
(a) Hill climbing terminates when it reaches a peak where no neighbor has a higher value.
(b) Hill climbing is called greedy search because it selects a good neighbor state without thinking ahead about where to go next.
(c) An operator is applied to the current state to generate its child nodes, and an appropriate heuristic function is used to estimate the cost of each node.
(d) Both simple and steepest-ascent hill climbing may fail to find the solution: either algorithm may terminate not by finding a goal, but by getting to a state from which no better states can be generated.
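A minimal sketch of both variants (the successor generator and the scoring function are assumptions supplied by the caller; a higher score is better, as in the maximization example above):

def simple_hill_climbing(state, successors, score):
    # Move to the FIRST successor that scores better than the current state.
    while True:
        current = score(state)
        better = next((s for s in successors(state) if score(s) > current), None)
        if better is None:
            return state              # peak reached (possibly only a local maximum)
        state = better

def steepest_ascent_hill_climbing(state, successors, score):
    # Examine ALL successors and move to the best one, if it beats the current state.
    while True:
        candidates = list(successors(state))
        if not candidates:
            return state
        best = max(candidates, key=score)
        if score(best) <= score(state):
            return state              # no neighbor is higher: stop
        state = best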
Ques 18. Draw the state space graph of Hill climbing search. What are the draw backs of
this algorithm? Also discuss about time space complexity of this algorithm.
Ans : Hill climbing search generally falls into a trap for some of the following reasons, which are also the drawbacks of this method:
(a) Local maxima: A local maximum is a state in the space tree which is a peak, better than all its neighboring states but lower than the global maximum. Local maxima are particularly troublesome because they often occur almost within sight of a solution; in that case they are called FOOTHILLS.
(b) Plateau: This is an area of the state space where the evaluation function is flat, i.e., a whole set of neighboring states has the same value as the current state, and finding the best direction is difficult. From a flat local maximum no uphill move exists; from a shoulder plateau it is possible to make progress.
(c) Ridges: This is a sequence of local maxima with some slope. The orientation of the high region, compared to the set of available moves and the directions in which they move, makes it impossible to traverse a ridge by single moves.
A state space diagram is a graphical representation of the set of states our search algorithm can reach versus the value of the objective function (the function we wish to maximize).
X-axis: the state space, i.e., the states or configurations our algorithm may reach.
Y-axis: the value of the objective function corresponding to each state.
The best solution is the state where the objective function has its maximum value (the global maximum).
(a) To deal with local maxima, backtrack to some earlier node and select a different path.
(b) To deal with a plateau, make a BIG HOP in some direction to reach a new section of the search space; if the rules and operators describe only single small steps, apply them several times in the same direction.
(c) To deal with ridges, apply two or more rules before testing progress; this amounts to moving in several directions at once.
Hill climbing is not complete: whether it finds the goal state depends upon the quality of the heuristic function. If the function is good enough, the search will proceed towards the goal state.
Space complexity: This is its strongest feature. It requires constant space, because it keeps only a copy of the current state; in addition it may need a little extra memory for the previous state and each candidate successor.
Time complexity: Proportional to the length of the steepest-gradient path. In a finite domain the search follows this path and terminates.
Ques19. Explain Blocks World problem using heuristic function in Hill Climbing Search
strategy.
Ans : In this problem an initial arrangement of eight blocks is given, and we must reach the GOAL arrangement by moving blocks in a systematic order. States are evaluated using a heuristic, so that the next best node can be chosen by applying the steepest-ascent hill climbing technique.
Both functions try to maximize the score/cost of each state.
Local heuristic: Add 1 point for each block that is resting on the block it is supposed to rest on (as compared to the goal state); subtract 1 point for each block in an incorrect position. A point for resting on the table is also counted.
Global heuristic: For each block that has a correct support structure, add 1 point for every block in that support structure; for each block with an incorrect support structure, subtract 1 point for every block in it. Points for the table are not counted here.
As the score of a structure increases, we move nearer to the goal state.
The score of the goal state is 8 (using the local heuristic), because all eight blocks are in their correct positions.
Now J is the new current state, with score 6 > score of I (4). In step 2, three moves are possible from the best state J. All the neighbors of node J have a lower score than J, so J is a local maximum, and no further move is possible from states K, L and M: the search falls into a TRAP situation. To overcome this problem with the local heuristic, we can apply the GLOBAL heuristic. Now the goal state has a score of 28 and the initial state a score of -28; again the best node for the next move is the one with the maximum score. A sketch of both scoring functions follows.
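A minimal sketch of the two scoring functions (representing a state as a dict that maps each block to what it rests on, 'table' or another block, is an illustrative assumption):

def local_score(state, goal):
    # +1 per block on its correct support, -1 otherwise (the table counts as a support).
    return sum(1 if state[b] == goal[b] else -1 for b in state)

def support_chain(state, b):
    # The blocks beneath b, from its immediate support down to the table.
    chain = []
    while state[b] != 'table':
        b = state[b]
        chain.append(b)
    return chain

def global_score(state, goal):
    # +1 per block in a correct support structure, -1 per block in an incorrect one.
    total = 0
    for b in state:
        chain = support_chain(state, b)
        sign = 1 if chain == support_chain(goal, b) else -1
        total += sign * len(chain)
    return total

For the eight-block goal tower, local_score gives 1 × 8 = 8 and global_score gives 0 + 1 + ... + 7 = 28, matching the values quoted above.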
Ques 20. Explain Branch and bound search strategy in detail with an example.
Ans . Branch-and-Bound search is a way to combine the space saving of depth-first search with
heuristic information. It is particularly applicable when many paths to a goal exist and we want
an optimal path. As in A* search, we assume that h(n) is less than or equal to the cost of a lowest-
cost path from n to a goal node.
The idea of branch-and-bound search is to maintain the lowest-cost path to a goal found so far, together with its cost; call this cost bound. If the search encounters a path p such that cost(p) + h(p) ≥ bound, path p can be pruned. If a non-pruned path to a goal is found, it must be better than the previous best path; this new solution is remembered and bound is set to its cost. The search then keeps looking for a better solution.
Let us take a small example to illustrate the branch-and-bound algorithm.
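A minimal sketch of the pruning scheme described above (the example graph, edge costs and heuristic values are illustrative assumptions, not taken from the original figure):

def branch_and_bound(graph, h, start, goal):
    # graph: node -> list of (neighbor, step_cost); h: admissible heuristic per node.
    bound, best_path = float('inf'), None
    stack = [(0, [start])]                    # depth-first: (cost so far, path)
    while stack:
        cost, path = stack.pop()
        node = path[-1]
        if cost + h[node] >= bound:           # prune: cannot beat the best found so far
            continue
        if node == goal:
            bound, best_path = cost, path     # better solution found: tighten the bound
            continue
        for nbr, step in graph[node]:
            if nbr not in path:               # avoid cycles along the current path
                stack.append((cost + step, path + [nbr]))
    return best_path, bound

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
h = {'S': 2, 'A': 4, 'B': 1, 'G': 0}
print(branch_and_bound(graph, h, 'S', 'G'))   # (['S', 'B', 'G'], 5); S-A-G is pruned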
Ques 21. What is the best first search algorithm? Explain with an example, and compare best first search with the hill climbing approach.
Ans : BEST FIRST SEARCH: In breadth first and depth first search, when we are at a node we can take any adjacent node as the next node; both explore paths blindly, without any cost function. The idea of best first search is to use an evaluation function to decide which adjacent node is most promising, and then explore it. Best first search falls under the category of heuristic search, or informed search.
We use a priority queue to store the costs of nodes, so the implementation is a variation of breadth first search in which the queue is replaced by a priority queue. An evaluation function assigns a score to each candidate node. The algorithm maintains two lists: one containing the candidates yet to be explored (OPEN) and one containing the visited nodes (CLOSED). Since all unvisited successors of every visited node are placed on the OPEN list, the algorithm is not restricted to exploring only the successors of the most recently visited node. In other words, the algorithm always chooses the best of all unvisited nodes generated so far, rather than being restricted to a small subset such as the immediate neighbors; other strategies, such as depth-first and breadth-first, do have this restriction. The advantage of this strategy is that if the algorithm reaches a dead-end node, it will continue trying other nodes.
The evaluation function used is f(n) = g(n) + h(n), which estimates the cost of the cheapest path through node n to the goal. f(n) is the estimated total cost of node n; g(n) is the cost of getting from the initial state to the current node; h(n) is the estimated additional cost of getting from the current node to the goal state.
A directed graph (OR graph) is used in which each node is a point in the problem space. An alternative problem-solving path exists from each branch. A parent link always points back to the best node from which the current node was reached, together with the list of nodes generated from it; the parent link makes it possible to recover the path to the goal once the goal is found.
Step 1. Start from the source node S and search for the goal I using the given costs and best first search.
Step 2. The priority queue OPEN initially contains S. We move S to CLOSED and add the unvisited neighbors of S to OPEN: OPEN = {A, B, C}, CLOSED = {S}.
Step 3. We remove A from OPEN and add the unvisited neighbors of A to OPEN: OPEN = {C, B, E, D}, CLOSED = {S, A}.
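A minimal sketch of best first search, with OPEN as a priority queue ordered by the evaluation function and CLOSED as the visited set (the graph and the evaluation function f are assumptions supplied by the caller):

import heapq

def best_first_search(graph, f, start, goal):
    open_heap = [(f(start), start, [start])]        # OPEN, ordered by f
    closed = set()                                  # CLOSED
    while open_heap:
        _, node, path = heapq.heappop(open_heap)    # best of ALL generated, unvisited nodes
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nbr in graph[node]:
            if nbr not in closed:
                heapq.heappush(open_heap, (f(nbr), nbr, path + [nbr]))
    return None

Mirroring the steps above: after expanding S, OPEN holds {A, B, C}; after expanding A, OPEN holds {C, B, E, D}, always kept ordered by f.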
Ques 22. Explain A* search algorithm. Discuss about the admissibility of A* algorithm.
Ans : The A* algorithm is an extension of BEST FIRST SEARCH, proposed by Hart, Nilsson and Raphael in 1968. It combines features of branch and bound, Dijkstra's algorithm and best first search.
(i) At each step, A* picks the node with the lowest value of a parameter f, equal to the sum of two other parameters g and h, and processes that node.
(ii) f(n) = g(n) + h(n), where g(n) is the cost of getting from the initial state to the current node and h(n) is the estimated additional cost of getting from the current node to the goal state; f(n) thus gives the estimated total cost of reaching the goal through n.
(iii) A* maintains two lists, OPEN (a priority queue) and CLOSED, as in best first search. OPEN contains the nodes that have been generated but not yet expanded; CLOSED contains the nodes whose successors have been generated and whose costs have been evaluated using the heuristic function.
A* Search Algorithm
1. Initialize the OPEN list with the start node; set g = 0 and f = 0 + h, and set CLOSED = Φ.
2. Repeat until a goal is reached:
If OPEN == Φ, the search fails.
Else select the node in OPEN with the minimum f value and make it BESTNODE for the current path.
Remove BESTNODE from OPEN and add it to CLOSED.
If BESTNODE == goal node, the search succeeds.
Else generate the successors of BESTNODE, but do not set BESTNODE to point to them yet. For each SUCCESSOR:
(a) If SUCCESSOR is the goal, stop the search.
(b) Set SUCCESSOR to point back to BESTNODE, so that the current path can be recovered.
(c) If a node with the same state as SUCCESSOR is in the OPEN list with a lower f than SUCCESSOR, skip this SUCCESSOR.
(d) If a node with the same state as SUCCESSOR is in the CLOSED list with a lower f than SUCCESSOR, skip this SUCCESSOR; otherwise add SUCCESSOR to the OPEN list.
(e) If the current path to an already-seen node OLD through SUCCESSOR is cheaper than the current best path to OLD, reset OLD's parent link to point to BESTNODE, record the new g(OLD) and update f(OLD); otherwise do nothing.
(f) To propagate the new cost downwards through the graph, perform a DFS starting from OLD, updating each node's g value, and terminating each branch when you reach either a node with no successors or a node for which an equivalent or better path has already been found.
(g) If SUCCESSOR was already on OPEN or CLOSED, add it to the list of BESTNODE's successors.
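The following is a minimal runnable sketch of A* under the f = g + h scheme above (the successor function and heuristic are assumptions supplied by the caller; node labels are assumed comparable, e.g., strings):

import heapq

def a_star(start, goal, successors, h):
    # OPEN entries: (f, g, node, path); CLOSED maps node -> best g expanded so far.
    open_heap = [(h(start), 0, start, [start])]
    closed = {}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)   # BESTNODE = minimum f on OPEN
        if node == goal:
            return path, g
        if node in closed and closed[node] <= g:
            continue                                  # a cheaper path was already expanded
        closed[node] = g
        for nbr, step_cost in successors(node):
            g2 = g + step_cost
            heapq.heappush(open_heap, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None, float('inf')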
Admissibility of A*
An algorithm is said to be admissible if it is guaranteed to return an optimal solution , when
one exists.
A* is admissible if it satisfies the following assumptions:
(a) The branching factor is finite, that is, only a finite number of alternative paths emerge from every node.
Proof: In every cycle of its main loop, A* picks one node from OPEN and places it in CLOSED. Since there are only a finite number of nodes, the algorithm terminates in a finite number of cycles, even if it never reaches the goal.
(b) The cost of each move is greater than some arbitrarily small positive value ε, i.e., for all m, n: k(m, n) > ε.
(c) The heuristic function underestimates the cost to the goal node, i.e., for all n: h(n) ≤ h*(n).
Proof that A* is complete and optimal: Admissible heuristics are by nature optimistic, because they think the cost of solving the problem is less than it actually is. Since g(n) is the exact cost to reach n, an immediate consequence is that f(n) never overestimates the true cost of a solution through n. Let the cost of the optimal solution be C*, and let G2 be a suboptimal goal node; h(G2) = 0, since it is a goal node. Therefore f(G2) = g(G2) + h(G2) = g(G2) > C*. If h(n) does not overestimate the cost of completing the solution path, then f(n) = g(n) + h(n) ≤ C*, so A* expands every such n before G2 and never returns a suboptimal goal.
A heuristic is consistent if, for each node n and each successor n′ of n generated by an operator a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′:
h(n) ≤ c(n, a, n′), + h(n′) being the action/operator cost plus the successor's estimate; that is, h(n) ≤ c(n, a, n′) + h(n′).
If h(n) is consistent, then the values of f(n) along any path are non-decreasing: if n′ is a successor of n, then g(n′) = g(n) + c(n, a, n′), so
f(n′) = g(n′) + h(n′) = g(n) + c(n, a, n′) + h(n′) ≥ g(n) + h(n) = f(n).
Hence A* expands all nodes with f(n) < C*.
Ques 23. Mention some observations about the g(n) and h(n) values in A* search. Discuss underestimation and overestimation in the A* algorithm.
Ans : A* is optimal if h(n) is an admissible heuristic, i.e., provided h(n) never overestimates the cost of reaching the goal. The true costs g′(n) and h′(n) may not be known; what is known is an estimate of each. In general the true g′(n) will be lower than the estimate g(n), because the algorithm may not yet have found the optimal path to n, i.e., g(n) ≥ g′(n). The heuristic value h(n) estimates the distance to the goal from the current state. For the algorithm to guarantee an optimal solution, this function must underestimate the true distance to the goal, that is, h(n) ≤ h′(n). If this condition holds, A* is admissible.
Ques 24. What is the problem reduction technique? Using it, explain AO* search with an example.
Ans : When a problem can be divided into a set of sub-problems, where each sub-problem can be solved separately and a combination of the solutions solves the main problem, AND-OR graphs or AND-OR trees are used to represent the solution. The decomposition of the problem, or problem reduction, generates AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved for the arc to point to a solution.
The figure shows an AND-OR graph.
(i) In the figure, the top node A has been expanded, producing two arcs: one leading to B and one leading to C and D.
(ii) The numbers at each node represent the value of f′ at that node (the estimated cost of getting to the goal state from the current state). For simplicity it is assumed that every operation (i.e., applying a rule) has unit cost, so each arc to a single successor has a cost of 1, as does each arc to a group of successors.
(iii) With the information available so far, C appears to be the most promising node to expand, since its f′ = 3 is the lowest; but going through B would be better, since to use C we must also use D, and that path would cost 9 = (3 + 4 + 1 + 1), whereas through B it would cost 6 = (5 + 1).
(iv) Thus the choice of the next node to expand depends not only on its own f′ value but also on whether that node is part of the current best path from the initial node. Figure (b) makes this clearer: node G appears to be the most promising node, with the least f′ value, but G is not on the current best path, since using G demands the arc G-H with a cost of 9, which in turn demands that further arcs be used (with a cost of 27).
Observation: In the AO* algorithm we consider the best node together with the best path (a global view), rather than the best node with the best link (a local view).
The AO* algorithm uses a single structure, GRAPH, instead of the OPEN and CLOSED lists of A*. Each node in GRAPH points down to its immediate successors and up to its immediate predecessors, and carries the value h′, the estimated cost of a path from itself to a set of solution nodes. The cost g of getting from the start node to the current node is not stored, as it is in A*, because it is not possible to compute a single such value: there may be many paths to the same state. In AO*, h′ serves as the estimate of the goodness of a node. A threshold value called FUTILITY is also used: if the estimated cost of a solution exceeds FUTILITY, the search is abandoned as too expensive to be practical.
For representing the graphs above, the AO* algorithm is as follows:
AO* ALGORITHM
1. Let GRAPH consist only of the node representing the initial state; call this node INIT. Compute h′(INIT).
2. Until INIT is labeled SOLVED or h′(INIT) > FUTILITY, repeat the following procedure:
(I) Trace the marked arcs from INIT and select an unexpanded node NODE.
(II) Generate the successors of NODE. If there are no successors, assign h′(NODE) = FUTILITY; this means that NODE is not solvable. If there are successors, then for each one, called SUCCESSOR, that is not also an ancestor of NODE, compute h′(SUCCESSOR), and label SUCCESSOR SOLVED if it is a terminal node.
(III) Propagate the newly discovered information up the graph as follows. Let S be a set of nodes that have been marked SOLVED or whose h′ values have changed; initialize S to {NODE}. Until S is empty, repeat the following procedure:
(a) Select a node from S, call it CURRENT, and remove it from S.
(b) Compute the cost of each arc emerging from CURRENT: cost(ARC) = Σ (h′ values of the nodes the arc points to) + cost of the ARC itself. Assign the minimum of these costs as the new h′(CURRENT).
(c) Mark the minimum-cost path as the best path out of CURRENT.
(d) Mark CURRENT SOLVED if all of the nodes connected to it through the newly marked arc have been labeled SOLVED.
(e) If CURRENT has been marked SOLVED, or its h′ has just changed, its new status must be propagated back up the graph; hence all the ancestors of CURRENT are added to S.
Note: AO* will always find a minimum-cost solution; AO* is both admissible and complete.
Ques 26. For the graph given below, explain when the search would terminate if node F is expanded next and its successor is node A. Give the steps of the search.
Ans: If node F is expanded next with child node A, then the cost of A will be propagated upward in this graph with a cycle. Initially the cost of A through its AND arc is:
h(A) = h(C) + h(D) + cost of arc(A-C) + cost of arc(A-D) = 11 + 15 + 1 + 1 = 28.
This 28 is used to evaluate the revised cost of node F:
h(F) = h(A) + 1 (because of the OR path between nodes F and A), so h(F) = 28 + 1 = 29.
Comparing h(E) = 30 and h(F) = 29, the OR path C-F is better than C-E, so the revised cost of C is h_new(C) = h_new(F) + 1 = 29 + 1 = 30.
Now h_new(A) = h_new(C) + h(D) + 2 = 30 + 15 + 2 = 47, so h_new(F) = 47 + 1 = 48.
Now h(E) = 30 < h_new(F) = 48, so E is the better node and arc C-E is better again:
h_new(C) = h(E) + 1 = 30 + 1 = 31. Again revise the cost of node A through upward cost propagation:
h_new(A) = h_new(C) + h(D) + 2 = 31 + 15 + 2 = 48, so h_new(F) = h_new(A) + 1 = 48 + 1 = 49.
Node E is still better than F, so h(C) = 31, and so on: the search continues with no change in the chosen path. The cycle repeats until the cost of the search exceeds FUTILITY, at which point the search is terminated.
Ques 27. How is AI useful in game-playing techniques? Describe what adversarial search is.
Ans : Game playing is an important and growing field of AI which enables machines to play games that people play. A machine may play against another machine, against a robot, or against a human player. This requires a great deal of searching and knowledge.
Charles Babbage, the 19th-century computer architect, considered programming his Analytical Engine to play CHESS. In the 1950s and 1960s Arthur Samuel developed the first operational game-playing program. Mathematical game theory, a branch of economics, views any multi-agent environment as a game, provided that the impact of each agent on the others is significant, regardless of whether the agents are cooperative or competitive.
In AI, games are typically deterministic, zero-sum games between two agents whose actions alternate and in which the UTILITY values at the end of the game are always equal and opposite. E.g., if one player wins a game of chess (+1), the other loses (-1). A utility function assigns a numeric value to terminal states.
Adversarial Search: Two opponents (agents) playing a game compete in an environment, so that each move by agent A opposes agent B; each tries to gain an advantage over the other by maximizing its own UTILITY and minimizing the opponent's UTILITY.
E.g.: chess, bridge (playing cards), tic-tac-toe, etc.
Some games restrict agents to a small number of actions whose outcomes are defined by precise rules. Physical games like ice hockey require a more complex rule set, and a larger range of operators/actions is needed for good efficiency and results. Robot soccer is an especially popular game in AI research.
UTILITY FUNCTION: This assigns a numeric value to terminal states. In chess the outcome can be a win, loss or draw, with values +1, -1 and 0 respectively.
A game tree is generated showing the different states and the cost of each state, and the search for the goal is performed in this tree. In a game tree each half-move is called a PLY: for the maximizing player we have MAX plies, and for the minimizing player MIN plies.
In game playing, move generation and the terminal test must be efficient enough for fast searching. We can use a plausible-move generator, which generates only a small number of promising moves. Alan Turing proposed a utility function based on the material advantage of the pieces in chess: add up the values of the black pieces (B) and the white pieces (W), and compute W/B.
Factors considered in an agent's choice of move can be:
(i) Piece advantage. (ii) Capability of progress. (iii) Control of the center. (iv) Mobility. (v) Threat of a fork.
TERMINAL TEST: This test determines when the game is over; states where the game is over are called terminal states.
Ans : In MINIMAX search the score compares what is good for the MAX player against what is good for the MIN player.
(i) Nodes for the MAX player (max nodes) take on the value of their highest-scoring successors.
(ii) Nodes for the MIN player take on the value of their lowest-scoring successors.
(iii) The assumption is that both the MAX and MIN players play optimally at their end to win the game.
(iv) MINIMAX uses a simple recursive computation of the minimax values of each successor node in the game tree, in DFS order; the minimax values are backed up through the game tree.
In the game tree above, the branches of B are explored first in DFS order. Since B is on a MIN ply with respect to the MIN player, we take min(E, F, G) = min(3, 12, 8) = 3 and back this value up to node B as its score. Similarly, score(C) = min(2, 4, 6) = 2 and score(D) = min(14, 5, 2) = 2.
Now the score of the MAX player at A = max(3, 2, 2) = 3, so the winning path is A-B-E.
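A minimal sketch of this recursion, using a tiny Node class and the exact tree from the example:

class Node:
    def __init__(self, value=None, children=()):
        self.value, self.children = value, list(children)

def minimax_value(node, is_max):
    if not node.children:                       # leaf: return its static evaluation
        return node.value
    values = [minimax_value(c, not is_max) for c in node.children]
    return max(values) if is_max else min(values)

L = Node                                        # leaf shorthand
A = Node(children=[
    Node(children=[L(3), L(12), L(8)]),         # B -> min = 3
    Node(children=[L(2), L(4), L(6)]),          # C -> min = 2
    Node(children=[L(14), L(5), L(2)]),         # D -> min = 2
])
print(minimax_value(A, True))                   # -> 3: MAX at A picks the path A-B-E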
Ans : (i) Alpha-beta pruning is an optimization technique for the MINIMAX algorithm: it returns the same decision but reduces the computation time by a huge factor. The number of game states MINIMAX has to examine is exponential in the number of moves; alpha-beta pruning effectively cuts the exponent in half, allowing us to search much faster and go deeper into the game tree. It cuts off branches of the game tree which need not be searched because a better move is already available. It is called alpha-beta pruning because it passes two extra parameters, alpha and beta, to the minimax function; it is also called alpha-beta cut-off.
When applied, this technique returns the same move as MINIMAX, but prunes branches that cannot influence the final decision. It can be applied to a tree of any depth, in depth-first search order.
Alpha is the best (highest) value that the maximizer can currently guarantee at that level or above (along the max path); it is a lower bound at MAX nodes.
Beta is the best (lowest) value that the minimizer can currently guarantee at that level or above (along the min path); it is an upper bound at MIN nodes.
Note: Search below a MIN node can be terminated if the beta value of the MIN node is less than or equal to the alpha value of any of its ancestor MAX nodes. Search below a MAX node can be terminated if the alpha value of the MAX node is greater than or equal to the beta value of any of its ancestor MIN nodes.
The pseudocode, made runnable (assuming the same Node convention as the minimax sketch above: node.children holds the successors, and leaves carry their static evaluation in node.value):

import math

def minimax(node, depth, is_maximizing_player, alpha, beta):
    if not node.children:                 # leaf: return its static evaluation
        return node.value
    if is_maximizing_player:
        best_val = -math.inf
        for child in node.children:
            value = minimax(child, depth + 1, False, alpha, beta)
            best_val = max(best_val, value)
            alpha = max(alpha, best_val)
            if beta <= alpha:             # beta cut-off: MIN above will never allow this
                break
        return best_val
    else:
        best_val = math.inf
        for child in node.children:
            value = minimax(child, depth + 1, True, alpha, beta)
            best_val = min(best_val, value)
            beta = min(beta, best_val)
            if beta <= alpha:             # alpha cut-off: MAX above will never allow this
                break
        return best_val
Solution (ii):
Step 1. Initially apply DFS along the path A-B-D-I. The value of I is backed up at D as alpha = 3. Between nodes I and J, node J carries beta = 5. Since D is on a MAX ply, D can only take a backed-up value ≥ 3; the value from node J therefore raises it to alpha = 5.
Step 4. Now consider the other branch of node A, via the path A-C-F-K-P (DFS order). Node K is a MIN node, so min(P, Q) = min(0, 7) = 0; at P, alpha = 0 is backed up as beta = 0 to node K. Node F is a MAX node, so max(K, L) = max(0, 5) = 5, and alpha = 0 at node F is changed to alpha = 5. The search below Q is therefore pruned and node Q is not generated: at node K, beta = 0 while K's ancestor node F has alpha = 5, so K's remaining successors are pruned.
Ans : A constraint satisfaction problem (or CSP) is defined by a set of variables, X1, X2, . . . , Xn, and a set of constraints, C1, C2, . . . , Cm. Each variable Xi has a
nonempty domain Di of possible values. Each constraint Ci involves some subset of the
variables and specifies the allowable combinations of values for that subset. A state of the
problem is defined by an assignment of values to some or all of the variables, {Xi = vi,
Xj = vj, . . .}. An assignment that does not violate any constraints is called a consistent or legal
assignment. A complete assignment is one in which every variable is mentioned, and a solution
to a CSP is a complete assignment that satisfies all the constraints. Some CSPs also
require a solution that maximizes an objective function.
Varieties of CSPs
(A) Discrete variables
– Finite domains: n variables of domain size d give O(d^n) complete assignments.
– Infinite domains: integers, strings, etc. (e.g., job scheduling, where the variables are the start/end days of each job); these need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3.
(B) Continuous variables
– e.g., start/end times for Hubble Space Telescope observations.
– Linear constraints are solvable in polynomial time by linear programming.
Step 1: Propagate available constraints. Initially set OPEN = {the set of objects that must have values assigned in a complete solution}. Then:
Do until inconsistency is detected or OPEN = NULL:
1.1 Select an object OB from OPEN. Strengthen as much as possible the set of constraints that apply to OB.
1.2 If this set is different from the set assigned the last time OB was examined, or if this is the first time OB has been examined, then add to OPEN all objects that share any constraints with OB.
1.3 Remove OB from OPEN.
Applying this to the cryptarithmetic puzzle SEND + MORE = MONEY (columns numbered 1 to 5 from the right):
1. From column 5, M = 1, since 1 is the only carry-over possible from the sum of two single-digit numbers in column 4.
2. To produce a carry from column 4 to column 5, S + M must be at least 9; so S = 8 or 9, S + M = 9 or 10, and so O = 0 or 1. But M = 1, so O = 0.
3. If there were a carry from column 3 to column 4 then E = 9 and so N = 0; but O = 0, so there is no carry, and S = 9 and c3 = 0.
4. If there were no carry from column 2 to column 3 then E = N, which is impossible; therefore there is a carry, N = E + 1 and c2 = 1.
5. If there were no carry from column 1 to column 2 then (N + R) mod 10 = E; with N = E + 1 this gives R = 9, but S = 9, so there must be a carry from column 1 to column 2. Therefore c1 = 1 and R = 8.
6. To produce the carry c1 = 1 from column 1 to column 2, we must have D + E = 10 + Y. Y cannot be 0 or 1 (both are taken), so D + E is at least 12. D is at most 7 (it cannot be 8 or 9, which are already assigned), so E is at least 5; also N is at most 7 and N = E + 1, so E = 5 or 6.
7. If E were 6 then, for D + E to be at least 12, D would be 7; but N = E + 1 would also be 7, which is impossible. Therefore E = 5 and N = 6.
8. Since D + E is at least 12, we get D = 7 and Y = 2.
9 5 6 7
+ 1 0 8 5
-----------------
1 0 6 5 2
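A minimal brute-force check of this solution (it simply tries all digit assignments rather than propagating constraints as above):

from itertools import permutations

def solve_send_more_money():
    letters = 'SENDMORY'
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:            # leading digits cannot be zero
            continue
        send  = int(''.join(str(a[ch]) for ch in 'SEND'))
        more  = int(''.join(str(a[ch]) for ch in 'MORE'))
        money = int(''.join(str(a[ch]) for ch in 'MONEY'))
        if send + more == money:
            return send, more, money

print(solve_send_more_money())                    # (9567, 1085, 10652)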