IAI UNIT-II Games


UNIT-II

Games
Adversarial Search

Adversarial search is a kind of search in which we examine the problems that arise when we try to plan ahead in a world where other agents are planning against us.

o In previous topics, we have studied search strategies which involve only a single agent that aims to find a solution, often expressed in the form of a sequence of actions.
o But there might be some situations where more than one agent is searching for a solution in the same search space; this situation usually occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the other agents and plays against them. Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
o So, searches in which two or more players with conflicting goals are trying to explore the same search space for a solution are called adversarial searches, often known as games.
o Games are modeled as a search problem together with a heuristic evaluation function; these are the two main factors which help to model and solve games in AI.

Types of Games in AI:

                        Deterministic                    Chance moves
Perfect information     Chess, Checkers, Go, Othello     Backgammon, Monopoly
Imperfect information   Battleship, blind tic-tac-toe    Bridge, Poker, Scrabble, nuclear war

o Perfect information: A game with perfect information is one in which agents can see the complete board. Agents have all the information about the game, and they can also see each other's moves. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If the agents in a game do not have all the information about the game and are not aware of what is going on, such games are called games with imperfect information, for example Battleship, blind tic-tac-toe, Bridge, etc.

o Deterministic games: Deterministic games are those games which follow a strict pattern and set of rules, and there is no randomness associated with them. Examples are Chess, Checkers, Go, tic-tac-toe, etc.

o Non-deterministic games: Non-deterministic games are those games which have various unpredictable events and a factor of chance or luck. This factor of chance or luck is introduced by dice or cards. The outcomes are random, and each action's response is not fixed. Such games are also called stochastic games.

o Example: Backgammon, Monopoly, Poker, etc.

Zero-Sum Game
o Zero-sum games are adversarial searches which involve pure competition.

o In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of the other agent.

o One player of the game tries to maximize one single value, while the other player tries to minimize it.

o Each move by one player in the game is called a ply.

o Chess and tic-tac-toe are examples of zero-sum games.

Zero-sum game: Embedded thinking

A zero-sum game involves embedded thinking, in which one agent or player is trying to figure out:

o What to do.
o How to decide the move.
o It also needs to think about its opponent.
o The opponent, in turn, thinks about what to do.

Each of the players is trying to figure out their opponent's response to their actions. This requires embedded thinking or backward reasoning to solve game problems in AI.

Formalization of the problem:

A game can be defined as a type of search problem in AI which can be formalized with the following elements:

o Initial state: It specifies how the game is set up at the start.

o Player(s): It specifies which player has the move in a state.

o Actions(s): It returns the set of legal moves in a state.

o Result(s, a): It is the transition model, which specifies the result of a move in a state.

o Terminal-Test(s): The terminal test is true if the game is over, and false otherwise. States where the game has ended are called terminal states.

o Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called a payoff function. For Chess, the outcomes are a win, loss, or draw, with payoff values +1, 0, or ½. For tic-tac-toe, the utility values are +1, -1, and 0.
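As an illustration, here is a minimal Python sketch of how these elements could be written down for tic-tac-toe. The function names mirror the elements above, but the representation (a 9-cell tuple) and the code itself are assumptions made for this example, not part of the original notes.

# Tic-tac-toe formalized as a game: the state is a tuple of 9 cells,
# each 'X', 'O', or None. 'X' is MAX and moves first.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def initial_state():
    return (None,) * 9                       # empty 3x3 board

def player(s):
    return 'X' if s.count('X') == s.count('O') else 'O'

def actions(s):
    return [i for i, cell in enumerate(s) if cell is None]

def result(s, a):
    board = list(s)
    board[a] = player(s)
    return tuple(board)

def winner(s):
    for i, j, k in LINES:
        if s[i] is not None and s[i] == s[j] == s[k]:
            return s[i]
    return None

def terminal_test(s):
    return winner(s) is not None or all(cell is not None for cell in s)

def utility(s, p='X'):
    w = winner(s)
    if w is None:
        return 0                             # draw
    return +1 if w == p else -1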

Game tree:

A game tree is a tree in which the nodes are game states and the edges are the moves made by players. A game tree involves the initial state, the Actions function, and the Result function.
Example: Tic-Tac-Toe game tree:

The following figure shows part of the game tree for the tic-tac-toe game. Following are some key points of the game:

o There are two players, MAX and MIN.
o Players take alternate turns, starting with MAX.
o MAX maximizes the result of the game tree.
o MIN minimizes the result.

Example Explanation:

o From the initial state, MAX has 9 possible moves as he starts first. MAX places x and MIN places o, and both players play alternately until we reach a leaf node where one player has three in a row or all squares are filled.

o Both players compute the minimax value for each node, which is the best achievable utility against an optimal adversary.

o Suppose both players know tic-tac-toe well and play their best game. Each player does his best to prevent the other one from winning. MIN acts against MAX in the game.

o So, in the game tree, we have a layer of MAX and a layer of MIN, and each layer is called a ply. MAX places x, then MIN puts o to prevent MAX from winning, and the game continues until a terminal node is reached.

o In the end, either MIN wins, MAX wins, or it is a draw. This game tree is the whole search space of possibilities when MIN and MAX play tic-tac-toe, taking turns alternately.

Hence, adversarial search with the minimax procedure works as follows:

o It aims to find the optimal strategy for MAX to win the game.

o It follows the approach of depth-first search.

o In the game tree, the optimal leaf node could appear at any depth of the tree.

o Minimax values are propagated up the tree once the terminal nodes are reached.

In each game tree, the optimal strategy can be determined from the minimax value of each node, which can be written as MINIMAX(n). MAX prefers to move to a state of maximum value and MIN prefers to move to a state of minimum value. Then:
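The standard recursive definition of the minimax value is:

MINIMAX(s) =
    UTILITY(s)                                          if TERMINAL-TEST(s)
    max over a in Actions(s) of MINIMAX(RESULT(s, a))   if PLAYER(s) = MAX
    min over a in Actions(s) of MINIMAX(RESULT(s, a))   if PLAYER(s) = MIN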
Mini-Max Algorithm in Artificial Intelligence:

o The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.

o The mini-max algorithm uses recursion to search through the game tree.

o The mini-max algorithm is mostly used for game playing in AI, such as Chess, Checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.

o In this algorithm, two players play the game; one is called MAX and the other is called MIN.

o Both players fight it out, each trying to gain the maximum benefit while leaving the opponent with the minimum benefit.

o Both players of the game are opponents of each other: MAX will select the maximized value and MIN will select the minimized value.

o The minimax algorithm performs a depth-first search to explore the complete game tree.

o The minimax algorithm proceeds all the way down to the terminal nodes of the tree and then backtracks up the tree as the recursion unwinds.

Working of Min-Max Algorithm:

o The working of the minimax algorithm can be easily described using an example. Below we have taken an example of a game tree which represents a two-player game.
o In this example, there are two players: one is called Maximizer, and the other is called Minimizer.
o Maximizer will try to get the maximum possible score, and Minimizer will try to get the minimum possible score.
o This algorithm applies DFS, so in this game tree we must go all the way down to the leaves to reach the terminal nodes.
o At the terminal nodes, the terminal values are given, so we compare those values and backtrack up the tree until we reach the initial state. Following are the main steps involved in solving the two-player game tree:

Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram below, let A be the initial state of the tree. The Maximizer takes the first turn, with a worst-case initial value of -∞, and the Minimizer takes the next turn, with a worst-case initial value of +∞.

Step 2: Now, we first find the utility values for the Maximizer. Its initial value is -∞, so we compare each terminal value with the Maximizer's initial value and determine the values of the higher nodes. It takes the maximum among them all.

o For node D: max(-1, -∞) => max(-1, 4) = 4
o For node E: max(2, -∞) => max(2, 6) = 6
o For node F: max(-3, -∞) => max(-3, -5) = -3
o For node G: max(0, -∞) => max(0, 7) = 7

Step 3: In the next step, it is the Minimizer's turn, so it compares all node values with +∞ and determines the third-layer node values.

o For node B = min(4, 6) = 4

o For node C = min(-3, 7) = -3

Step 4: Now it is the Maximizer's turn, and it again chooses the maximum of all node values to find the value of the root node. In this game tree, there are only 4 layers, so we reach the root node immediately, but in real games there will be many more layers.

o For node A: max(4, -3) = 4

That was the complete workflow of the minimax two-player game.
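As a concrete check of this workflow, the following minimal Python sketch (not part of the original notes; the nested-list tree representation is an assumption chosen for brevity) computes the same value for the root:

# Minimax on the worked example: A -> (B, C), B -> (D, E), C -> (F, G),
# with terminal values D = (-1, 4), E = (2, 6), F = (-3, -5), G = (0, 7).
# Internal nodes are lists of children; leaves are plain integers.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]

def minimax(node, maximizing):
    if isinstance(node, int):                      # terminal node: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

print(minimax(tree, maximizing=True))              # prints 4, as found in Step 4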

Properties of Mini-Max algorithm:

o Complete: The min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.

o Optimal: The min-max algorithm is optimal if both opponents are playing optimally.

o Time complexity: As it performs DFS on the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.

o Space complexity: The space complexity of the mini-max algorithm is also like that of DFS, which is O(bm).
Limitation of the minimax Algorithm:

The main drawback of the minimax algorithm is that it becomes really slow for complex games such as Chess, Go, etc. These games have a huge branching factor, and the player has many choices to decide among. This limitation of the minimax algorithm can be improved upon with alpha-beta pruning.
Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.

o As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree, and this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta algorithm.

o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.

o The two parameters can be defined as:

a. Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.

b. Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.

Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it removes all the nodes which do not really affect the final decision but make the algorithm slow. By pruning these nodes, it makes the algorithm fast.

Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning is:

α >= β

Key points about alpha-beta pruning:
o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.
o While backtracking the tree, the node values will be passed to the upper nodes instead of the values of alpha and beta.
o We only pass the alpha and beta values down to the child nodes.

The Alpha-Beta Search Algorithm:
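The following is a minimal Python sketch of alpha-beta search (an illustration, not the original pseudocode figure), using the same nested-list tree representation as the minimax sketch above:

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    # Minimax with alpha-beta pruning; leaves are ints, internal nodes are lists.
    if isinstance(node, int):                          # terminal node
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)                  # MAX updates alpha only
            if alpha >= beta:                          # pruning condition
                break                                  # remaining children are pruned
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)                    # MIN updates beta only
            if alpha >= beta:
                break
        return value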


Working of Alpha-Beta Pruning:
Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.

Step 1: In the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 becomes the value of α at node D; the node value is also 3.

Step 3: Now the algorithm backtracks to node B, where the value of β changes, as it is Min's turn. β = +∞ is compared with the available successor node value, i.e. min(∞, 3) = 3, hence at node B now α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, and the algorithm does not traverse it; the value at node E is 5.

Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is changed to the maximum available value, 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child, which is 0, giving max(3, 0) = 3, and then with the right child, which is 1, giving max(3, 1) = 3, so α remains 3, but the node value of F becomes 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta is changed, as it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire sub-tree of G.

Step 8: C now returns the value 1 to A. Here the best value for A is max(3, 1) = 3.

The final game tree shows the nodes which were computed and the nodes which were never computed. Hence the optimal value for the maximizer is 3 for this example.


Move Ordering in Alpha-Beta pruning:
The effectiveness of alpha-beta pruning is highly dependent on the order in which the nodes are examined. Move ordering is an important aspect of alpha-beta pruning.

It can be of two types:

o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the leaves of the tree and works exactly like the minimax algorithm. In this case, it also consumes more time because of the alpha-beta bookkeeping; such an ordering is called worst ordering. Here the best move occurs on the right side of the tree. The time complexity for such an order is O(b^m).

o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. We apply DFS, so it searches the left side of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).

Rules to find good ordering:

Following are some rules for finding a good ordering in alpha-beta pruning (a small ordering sketch follows this list):

o Try the best move from the shallowest node first.
o Order the nodes in the tree such that the best nodes are checked first.
o Use domain knowledge while finding the best move. For example, in Chess, try this order: captures first, then threats, then forward moves, then backward moves.
o We can bookkeep the states, as there is a possibility that states may repeat.
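As an illustration of these rules, here is a minimal sketch (an assumption made for this example, not part of the original notes) of ordering children by a cheap heuristic evaluation before the alpha-beta recursion, so that the most promising moves are searched first:

def ordered_children(node, maximizing, evaluate):
    # Sort children by a cheap static evaluation so likely-best moves come first.
    # `evaluate` is a hypothetical heuristic supplied by the caller: higher scores
    # are better for MAX, lower for MIN.
    return sorted(node, key=evaluate, reverse=maximizing)

# Inside the alpha-beta loops one would then iterate over
#     for child in ordered_children(node, maximizing, evaluate): ...
# instead of the unordered children, which tends to trigger the α >= β cut-off
# earlier and moves the complexity toward O(b^(m/2)).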
Constraint Satisfaction Problems
Constraint satisfaction problems (CSPs) are mathematical questions defined as a set of objects whose state must satisfy several constraints or limitations.

A CSP represents the objects in a problem as a collection of finite constraints over variables, which is solved by constraint satisfaction methods.

Assignment and solution of a CSP:

➢ An assignment is complete when every variable is assigned a value.

➢ A solution to a CSP is a complete assignment that satisfies all constraints.

Applications:

➢ Map coloring
➢ Line drawing interpretation
➢ Scheduling problems
➢ Floor planning for VLSI

Formal Definition of CSP:

A CSP consists of:

➢ A finite set of variables X1, X2, ..., Xn.
➢ A non-empty domain of possible values for each variable: D1, D2, ..., Dn, where Di = {v1, v2, ..., vk}.
➢ A finite set of constraints C1, C2, ..., Cm.
➢ Each constraint Ci limits the values that the variables can take, e.g. X1 ≠ X2.
➢ A state is defined as an assignment of values to some or all variables.
➢ An assignment that does not violate any constraints is called a consistent or legal assignment.
➢ A complete assignment is one in which every variable is assigned a value; a solution to a CSP is a complete assignment that satisfies all the constraints.
➢ Some CSPs also require a solution that maximizes an objective function.

• Example (map coloring): color the regions of the Australian map with {red, green, blue} so that adjacent regions such as WA and NT receive different colors. So (WA, NT) must be in {(red, green), (red, blue), (green, red), ...}.

• Map coloring can also be represented as a graph coloring problem.
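As a sketch, the standard Australia map-coloring example can be written down in Python as follows. The region names, adjacency list, and helper names below are assumptions made for this illustration (they follow the usual textbook example, not the original notes):

# Map-coloring CSP: variables are regions, domains are colors, and each
# constraint says two adjacent regions must receive different colors.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

def consistent(assignment):
    # A (possibly partial) assignment is consistent if no adjacency is violated.
    return all(assignment[a] != assignment[b]
               for a, b in adjacent if a in assignment and b in assignment)

def complete(assignment):
    return len(assignment) == len(variables)

# e.g. consistent({"WA": "red", "NT": "green"}) -> True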
Constraint Propagation
Artificial Intelligence (AI) encompasses a variety of methods and techniques to solve complex problems efficiently. One such technique is constraint propagation, which plays a crucial role in areas like scheduling, planning, and resource allocation.

Constraint propagation is a fundamental concept in constraint satisfaction problems (CSPs). A CSP involves variables that must be assigned values from a given domain while satisfying a set of constraints. Constraint propagation aims to simplify these problems by reducing the domains of variables, thereby making the search for solutions more efficient.

Key Concepts

Variables: Elements that need to be assigned values.

Domains: Possible values that can be assigned to the variables.

Constraints: Rules that define permissible combinations of values for the variables.

Constraint Graph:

A constraint graph visually represents a CSP. Variables are nodes, and constraints are edges connecting the nodes.


How Constraint Propagation Works

Constraint propagation works by iteratively narrowing down the domains of variables based on the constraints. This process continues until no more values can be eliminated from any domain. The primary goal is to reduce the search space and make it easier to find a solution.

Example

Consider a simple CSP with two variables, X and Y, each with domain {1, 2, 3}, and a constraint X ≠ Y. Constraint propagation will iteratively reduce the domains as follows:

If X is assigned 1, then Y cannot be 1, so Y's domain becomes {2, 3}.

If Y is then assigned 2, X cannot be 2, so X's domain is reduced to {1, 3}.

This process continues until a stable state is reached.

Constraint Propagation Process

Constraint propagation involves iteratively applying constraints to reduce the domains of variables, thereby narrowing down the possible values and simplifying the problem. The process aims to make implicit constraints explicit, thus reducing the search space and making it easier to find a solution.
Steps in Constraint Propagation:

Initialization:

Start with all variables having their full domains.

Propagation:

Apply constraints to reduce the domains of the variables. When the domain of a variable is reduced, this new information can further reduce the domains of other connected variables.

Arc Consistency:

A common form of constraint propagation. A variable is arc-consistent if every value in its domain satisfies the binary constraints with some value in the domain of every connected variable.

Example: For variables X and Y with domains {1, 2, 3} and {2, 3} respectively, and a constraint X < Y, the domain of X can be reduced to {1, 2}.

Iteration:

The propagation continues iteratively until no further domain reductions can be made. This state is known as the fixed point.


Techniques in Constraint Propagation

Arc Consistency (AC) Algorithms:

➢ A variable in a CSP is arc-consistent if every value in its domain satisfies the variable's binary constraints.
➢ Xi is arc-consistent with respect to another variable Xj if for every value in the current domain Di there is some value in the domain Dj that satisfies the binary constraint on the arc (Xi, Xj).
➢ A network is arc-consistent if every variable is arc-consistent with every other variable.
➢ Arc consistency tightens the domains (unary constraints) using the arcs (binary constraints).

AC-3 algorithm:
➢ AC-3 maintains a queue of arcs which initially contains all the arcs in the CSP.
➢ AC-3 then pops an arbitrary arc (Xi, Xj) from the queue and makes Xi arc-consistent with respect to Xj.
➢ If this leaves Di unchanged, it just moves on to the next arc;
➢ but if this revises Di, then it adds to the queue all arcs (Xk, Xi) where Xk is a neighbor of Xi.
➢ If Di is revised down to nothing, then the whole CSP has no consistent solution, and AC-3 returns failure;
➢ otherwise, it keeps checking, trying to remove values from the domains of variables, until no more arcs are left in the queue.
➢ The result is an arc-consistent CSP that has the same solutions as the original one but smaller domains.

The complexity of AC-3:
Assume a CSP with n variables, each with domain size at most d, and with c binary constraints (arcs). Checking the consistency of an arc can be done in O(d²) time, so the total worst-case time is O(cd³).
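A minimal Python sketch of AC-3 following the description above (the data-structure choices here, domains as a dict of sets and binary constraints as predicates keyed by arc, are assumptions made for this illustration):

from collections import deque

def revise(domains, constraints, xi, xj):
    # Remove values from Di that have no supporting value in Dj.
    revised = False
    for vi in set(domains[xi]):
        if not any(constraints[(xi, xj)](vi, vj) for vj in domains[xj]):
            domains[xi].discard(vi)
            revised = True
    return revised

def ac3(domains, constraints, neighbors):
    # constraints[(xi, xj)] is a predicate over (value_of_xi, value_of_xj).
    queue = deque(constraints.keys())              # initially, all arcs in the CSP
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:                    # domain wiped out: no consistent solution
                return False
            for xk in neighbors[xi]:               # re-check arcs pointing at Xi
                if xk != xj:
                    queue.append((xk, xi))
    return True

# Example: X < Y with domains {1, 2, 3} and {2, 3} reduces X's domain to {1, 2}.
domains = {"X": {1, 2, 3}, "Y": {2, 3}}
constraints = {("X", "Y"): lambda a, b: a < b, ("Y", "X"): lambda a, b: b < a}
neighbors = {"X": ["Y"], "Y": ["X"]}
ac3(domains, constraints, neighbors)
print(domains)                                     # X's domain is now {1, 2}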

Node Consistency:

Ensures that each variable's value satisfies the unary constraints on that variable.

Path Consistency:

Extends the concept of arc consistency to consider triples of variables, ensuring that constraints are satisfied over paths in the constraint graph.

Generalized Arc Consistency (GAC):

Extends arc consistency to handle constraints involving more than two variables.

Example: Solving Sudoku with Constraint Propagation

In Sudoku, each cell is a variable, and the domain is the numbers 1 to 9. The constraints are:

Each row must contain unique numbers.

Each column must contain unique numbers.

Each 3x3 subgrid must contain unique numbers.

Constraint propagation helps by:

Initially removing numbers from domains based on the initial clues.

Iteratively refining the domains of each cell by applying the constraints of uniqueness.
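As a small sketch of the first of these steps (an illustration, not part of the original notes; the helper names are made up for this example), the following removes each given clue's value from the domains of all its peers, i.e. the cells sharing its row, column, or 3x3 box:

def peers(cell):
    # All cells sharing a row, column, or 3x3 box with the given (row, col) cell.
    r, c = cell
    same_row = {(r, j) for j in range(9)}
    same_col = {(i, c) for i in range(9)}
    same_box = {(3 * (r // 3) + i, 3 * (c // 3) + j)
                for i in range(3) for j in range(3)}
    return (same_row | same_col | same_box) - {cell}

def propagate_clues(clues):
    # clues maps (row, col) -> given digit; returns the reduced domains.
    domains = {(r, c): set(range(1, 10)) for r in range(9) for c in range(9)}
    for cell, digit in clues.items():
        domains[cell] = {digit}
        for p in peers(cell):
            domains[p].discard(digit)          # a clue's value cannot appear in any peer
    return domains

# With a clue 5 in the top-left corner, 5 disappears from row 0, column 0, and
# the top-left 3x3 box:
doms = propagate_clues({(0, 0): 5})
print(5 in doms[(0, 8)], 5 in doms[(8, 8)])    # False True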

Benefits of Constraint Propagation

Efficiency: Reduces the search space significantly, leading to faster solutions.

Pruning: Eliminates many impossible solutions early in the process.

Simplification: Simplifies the problem, making it easier to apply other solving techniques like backtracking.

Conclusion

Constraint propagation is a powerful technique in AI for efficiently solving CSPs. By iteratively applying constraints to reduce the possible values of variables, it narrows down the search space, making it easier to find solutions to complex problems. This technique is widely used in various applications, from scheduling and planning to solving puzzles and optimizing resources.
Backtracking search for CSPs

➢ Backtracking search, a form of depth-first search, is commonly used for solving CSPs. Inference can be interwoven with search. (A minimal sketch follows this list.)
➢ Commutativity: CSPs are all commutative. A problem is commutative if the order of application of any given set of actions has no effect on the outcome.
➢ Backtracking search: A depth-first search that chooses values for one variable at a time and backtracks when a variable has no legal values left to assign.
➢ The backtracking algorithm repeatedly chooses an unassigned variable and then tries all values in the domain of that variable in turn, trying to find a solution. If an inconsistency is detected, then BACKTRACK returns failure, causing the previous call to try another value.
➢ There is no need to supply BACKTRACKING-SEARCH with a domain-specific initial state, action function, transition model, or goal test.
➢ BACKTRACKING-SEARCH keeps only a single representation of a state and alters that representation rather than creating a new one.
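A minimal backtracking-search sketch in Python for the map-coloring CSP defined earlier (the variables, domains, and consistent names refer to that earlier sketch and are assumptions of this illustration, not a standard API):

def backtracking_search(assignment, variables, domains, consistent):
    # Depth-first search that assigns one variable at a time and backtracks on failure.
    if len(assignment) == len(variables):                      # complete assignment: solution
        return assignment
    var = next(v for v in variables if v not in assignment)    # pick an unassigned variable
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):                             # only extend consistent assignments
            result = backtracking_search(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]                                    # undo and try the next value
    return None                                                # no legal value left: backtrack

# e.g. backtracking_search({}, variables, domains, consistent) returns a coloring
# such as {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red', ...}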
Knowledge-Based Agents
o An intelligent agent needs knowledge about the real world to make decisions and reason in order to act efficiently.

o Knowledge-based agents are those agents which have the capability of maintaining an internal state of knowledge, reasoning over that knowledge, updating their knowledge after observations, and taking actions. These agents can represent the world with some formal representation and act intelligently.

o Knowledge-based agents are composed of two main parts:

o Knowledge base and

o Inference system.

A knowledge-based agent must be able to do the following:

o An agent should be able to represent states, actions, etc.

o An agent should be able to incorporate new percepts.

o An agent can update the internal representation of the world.

o An agent can deduce the internal representation of the world.

o An agent can deduce appropriate actions.


The architecture of a knowledge-based agent:

The generalized architecture of a knowledge-based agent is as follows. The knowledge-based agent (KBA) takes input from the environment by perceiving the environment. The input is taken by the inference engine of the agent, which also communicates with the KB to decide on an action according to the knowledge stored in the KB. The learning element of the KBA regularly updates the KB by learning new knowledge.

Knowledge base: The knowledge base is a central component of a knowledge-based agent; it is also known as the KB. It is a collection of sentences (here 'sentence' is a technical term, and it is not identical to a sentence in English). These sentences are expressed in a language which is called a knowledge representation language. The knowledge base of a KBA stores facts about the world.


Why use a knowledge base?

A knowledge base is required for updating knowledge, so that an agent can learn from experience and act according to its knowledge.

Inference system

Inference means deriving new sentences from old ones. The inference system allows us to add a new sentence to the knowledge base. A sentence is a proposition about the world. The inference system applies logical rules to the KB to deduce new information.

The inference system generates new facts so that an agent can update the KB. An inference system works mainly with two rules, which are given as:

o Forward chaining
o Backward chaining

Operations Performed by KBA


Following are the three operations which are performed by a KBA in order to show intelligent behavior:

1. TELL: This operation tells the knowledge base what it perceives from the environment.

2. ASK: This operation asks the knowledge base what action it should perform.

3. PERFORM: It performs the selected action.


A generic knowledge-based agent:
Following is the structure outline of a generic knowledge-based agent program:

function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action = ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t = t + 1
    return action

The knowledge-based agent takes a percept as input and returns an action as output. The agent maintains the knowledge base, KB, which initially contains some background knowledge of the real world. It also has a counter to indicate the time for the whole process, and this counter is initialized to zero.

Each time the function is called, it performs its three operations:

o Firstly, it TELLs the KB what it perceives.

o Secondly, it ASKs the KB what action it should take.

o Thirdly, the agent program TELLs the KB which action was chosen.

MAKE-PERCEPT-SENTENCE generates a sentence asserting that the agent perceived the given percept at the given time.

MAKE-ACTION-QUERY generates a sentence that asks which action should be done at the current time.

MAKE-ACTION-SENTENCE generates a sentence which asserts that the chosen action was executed.
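As a sketch only (the knowledge base and the sentence-construction helpers below are stand-ins invented for illustration, not a standard library), the same agent loop can be written in Python as:

class SimpleKBAgent:
    # A skeletal knowledge-based agent mirroring the KB-AGENT pseudocode above.
    def __init__(self, kb):
        self.kb = kb        # kb is assumed to provide tell(sentence) and ask(query)
        self.t = 0          # time counter

    def __call__(self, percept):
        self.kb.tell(self.make_percept_sentence(percept, self.t))
        action = self.kb.ask(self.make_action_query(self.t))
        self.kb.tell(self.make_action_sentence(action, self.t))
        self.t += 1
        return action

    # The sentence formats below are placeholders; a real agent would build
    # sentences in its knowledge representation language.
    def make_percept_sentence(self, percept, t):
        return ("percept", percept, t)

    def make_action_query(self, t):
        return ("action?", t)

    def make_action_sentence(self, action, t):
        return ("did", action, t)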

Various levels of knowledge-based agent:

A knowledge-based agent can be viewed at different levels, which are given below:

1. Knowledge level
The knowledge level is the first level of a knowledge-based agent. At this level, we need to specify what the agent knows and what the agent's goals are. With these specifications, we can fix its behavior. For example, suppose an automated taxi agent needs to go from station A to station B, and it knows the way from A to B; this comes at the knowledge level.

2. Logical level:
At this level, we understand how the knowledge is represented and stored. At this level, sentences are encoded into different logics; that is, an encoding of knowledge into logical sentences occurs. At the logical level, we can expect the automated taxi agent to reach destination B.

3. Implementation level:
This is the physical representation of logic and knowledge. At the implementation level, the agent performs actions as per the logical and knowledge levels. At this level, the automated taxi agent actually implements its knowledge and logic so that it can reach the destination.
Approaches to designing a knowledge-based agent:
There are mainly two approaches to building a knowledge-based agent:

1. Declarative approach: We can create a knowledge-based agent by starting with an empty knowledge base and telling the agent all the sentences with which we want it to start. This approach is called the declarative approach.

2. Procedural approach: In the procedural approach, we directly encode the desired behavior as program code, which means we just need to write a program that already encodes the desired behavior of the agent.

However, in the real world, a successful agent can be built by combining both the declarative and procedural approaches, and declarative knowledge can often be compiled into more efficient procedural code.
Propositional logic
Propositional logic (PL) is the simplest form of logic, where all statements are made using propositions. A proposition is a declarative statement which is either true or false. It is a technique of knowledge representation in logical and mathematical form.

Example:

a) It is Sunday.

b) The Sun rises in the West. (False proposition)

c) 3 + 3 = 7 (False proposition)

d) 5 is a prime number.

Following are some basic facts about propositional logic:

o Propositional logic is also called Boolean logic as it works on 0 and 1.

o In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.

o A proposition can be either true or false, but it cannot be both.

o Propositional logic consists of objects, relations or functions, and logical connectives.

o These connectives are also called logical operators.

o Propositions and connectives are the basic elements of propositional logic.

o A connective can be described as a logical operator which connects two sentences.

o A proposition formula which is always true is called a tautology; it is also called a valid sentence.

o A proposition formula which is always false is called a contradiction.

o A proposition formula which has both true and false values is called a contingency.

o Statements which are questions, commands, or opinions, such as "Where is Rohini?", "How are you?", and "What is your name?", are not propositions.

Syntax of propositional logic:

The syntax of propositional logic defines the allowable sentences for knowledge representation. There are two types of propositions:

a. Atomic propositions

b. Compound propositions

o Atomic propositions: Atomic propositions are simple propositions. They consist of a single proposition symbol. These are sentences which must be either true or false.

Example:

a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.

b) "The Sun is cold" is also a proposition, as it is a false fact.

o Compound propositions: Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.

Example:

a) "It is raining today, and the street is wet."

b) "Ankit is a doctor, and his clinic is in Mumbai."
Logical Connectives:
Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives, which are given as follows:

1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal.

2. Conjunction: A sentence which has the ∧ connective, such as P ∧ Q, is called a conjunction.
Example: "Rohan is intelligent and hardworking." It can be written as
P = Rohan is intelligent,
Q = Rohan is hardworking → P ∧ Q.

3. Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.

4. Implication: A sentence such as P → Q is called an implication. Implications are also known as if-then rules. For example:
If it is raining, then the street is wet.
Let P = It is raining and Q = The street is wet, so it is represented as P → Q.

5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. Example:
If I am breathing, then I am alive.
P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.
Following is a summary of the propositional logic connectives and their truth values.

Truth Table:

In propositional logic, we need to know the truth values of propositions in all possible scenarios. We can combine all the possible combinations with logical connectives, and the representation of these combinations in a tabular format is called a truth table. Following is the truth table for all logical connectives:
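For reference, the standard truth table for the five connectives is:

P       Q       ¬P      P ∧ Q   P ∨ Q   P → Q   P ⇔ Q
True    True    False   True    True    True    True
True    False   False   False   True    False   False
False   True    True    False   True    True    False
False   False   True    False   False   True    True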
Truth table with three propositions:

We can build a proposition composed of three propositions P, Q, and R. Its truth table is made up of 8 (2³) rows, as we have taken three proposition symbols.

Precedence of connectives:

Just like arithmetic operators, there is a precedence order for propositional connectives or logical operators. This order should be followed while evaluating a propositional formula. Following is the precedence order for the operators:

Precedence              Operators

First precedence        Parentheses

Second precedence       Negation

Third precedence        Conjunction (AND)

Fourth precedence       Disjunction (OR)

Fifth precedence        Implication

Sixth precedence        Biconditional


Logical equivalence:

Logical equivalence is one of the features of propositional logic. Two propositions are said to be logically equivalent if and only if their columns in the truth table are identical to each other.

Let's take two propositions A and B; for logical equivalence, we write A ⇔ B. In the truth table below we can see that the columns for ¬A ∨ B and A → B are identical, hence ¬A ∨ B is equivalent to A → B.
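Since the equivalence can also be checked mechanically, here is a small illustrative Python check (not part of the original notes) that enumerates every truth assignment:

from itertools import product

def implies(a, b):
    return (not a) or b              # truth-functional definition of A -> B

# Check that ¬A ∨ B and A → B agree on every row of the truth table.
for a, b in product([True, False], repeat=2):
    assert ((not a) or b) == implies(a, b)
print("¬A ∨ B is logically equivalent to A → B")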

Properties of Operators:
o Commutativity:
o P∧ Q= Q ∧ P, or
o P ∨ Q = Q ∨ P.
o Associativity:
o (P ∧ Q) ∧ R= P ∧ (Q ∧ R),
o (P ∨ Q) ∨ R= P ∨ (Q ∨ R)
o Identity element:
o P ∧ True = P,
o P ∨ True= True.
o Distributive:
o P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
o P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
o De Morgan's Laws:
o ¬ (P ∧ Q) = (¬P) ∨ (¬Q)
o ¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
o Double-negation elimination:
o ¬ (¬P) = P.

Limitations of Propositional logic:


o We cannot represent relations like ALL, some, or none with propositional logic. Example:
a. All the girls are intelligent.
b. Some apples are sweet.
o Propositional logic has limited expressive power.
o In propositional logic, we cannot describe statements in terms of their properties or logical
relationships.
Propositional Theorem Proving: Inference and Proofs

Inference:
In artificial intelligence, we need intelligent computers which can create new logic from old logic or from evidence, so generating conclusions from evidence and facts is termed inference.

Inference rules:
Inference rules are templates for generating valid arguments. Inference rules are applied to derive proofs in artificial intelligence, and a proof is a sequence of conclusions that leads to the desired goal.

In inference rules, the implication among all the connectives plays an important role. Following are some terminologies related to inference rules:

o Implication: It is one of the logical connectives, which can be represented as P → Q. It is a Boolean expression.

o Converse: The converse of an implication means that the right-hand side proposition goes to the left-hand side and vice versa. It can be written as Q → P.

o Contrapositive: The negation of the converse is termed the contrapositive, and it can be represented as ¬Q → ¬P.

o Inverse: The negation of the implication is called the inverse. It can be represented as ¬P → ¬Q.

From the above terms, some of the compound statements are equivalent to each other, which we can prove using a truth table:

Hence from the above truth table, we can prove that P → Q is equivalent to ¬Q → ¬P, and Q → P is equivalent to ¬P → ¬Q.

Types of Inference rules:

1. Modus Ponens:

The Modus Ponens rule is one of the most important rules of inference, and it states that if P and P → Q are true, then we can infer that Q will be true. It can be represented as P, P → Q ⟹ Q.

Example:

Statement-1: "If I am sleepy then I go to bed" ==> P → Q

Statement-2: "I am sleepy" ==> P

Conclusion: "I go to bed." ==> Q

Hence, we can say that if P → Q is true and P is true, then Q will be true.
Proof by Truth table:
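As an alternative to a written truth table, the following small Python check (an illustration, not part of the original notes) verifies Modus Ponens by enumerating all truth assignments; the same helper can be reused for the other rules below by changing the premises and conclusion:

from itertools import product

def implies(a, b):
    return (not a) or b

def rule_is_valid(premises, conclusion, num_vars):
    # A rule is valid if the conclusion is true whenever all premises are true.
    for values in product([True, False], repeat=num_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False
    return True

# Modus Ponens: from P -> Q and P, infer Q.
print(rule_is_valid(premises=[lambda p, q: implies(p, q), lambda p, q: p],
                    conclusion=lambda p, q: q,
                    num_vars=2))        # True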

2. Modus Tollens:

The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P will also be true. It can be represented as P → Q, ¬Q ⟹ ¬P.

Statement-1: "If I am sleepy then I go to bed" ==> P → Q

Statement-2: "I do not go to the bed." ==> ¬Q

Statement-3: Which infers that "I am not sleepy" ==> ¬P

Proof by truth table:


3. Hypothetical Syllogism:

The Hypothetical Syllogism rule states that if P → Q is true and Q → R is true, then P → R will also be true. It can be represented as P → Q, Q → R ⟹ P → R.

Example:

Statement-1: If you have my home key, then you can unlock my home. (P → Q)
Statement-2: If you can unlock my home, then you can take my money. (Q → R)

Conclusion: If you have my home key, then you can take my money. (P → R)

Proof by truth table:

4. Disjunctive Syllogism:

The Disjunctive Syllogism rule states that if P ∨ Q is true and ¬P is true, then Q will be true. It can be represented as P ∨ Q, ¬P ⟹ Q.

Example:

Statement-1: Today is Sunday or Monday. ==> P ∨ Q

Statement-2: Today is not Sunday. ==> ¬P

Conclusion: Today is Monday. ==> Q

Proof by truth table:

5. Addition:
The Addition rule is one of the common inference rules, and it states that if P is true, then P ∨ Q will be true. It can be represented as P ⟹ P ∨ Q.

Example:
Statement-1: I have vanilla ice-cream. ==> P

Statement-2: I have chocolate ice-cream. ==> Q

Conclusion: I have vanilla or chocolate ice-cream. ==> P ∨ Q

Proof by truth table:
6. Simplification:
The Simplification rule states that if P ∧ Q is true, then P or Q individually will also be true. It can be represented as P ∧ Q ⟹ P (and similarly P ∧ Q ⟹ Q).

Proof by truth table:

7. Resolution:
The Resolution rule states that if P ∨ Q and ¬P ∨ R are true, then Q ∨ R will also be true. It can be represented as P ∨ Q, ¬P ∨ R ⟹ Q ∨ R.

Proof by truth table:
Proof by Resolution

Resolution

The idea of resolution is simple: if we know that

• p is true or q is true
• and we also know that p is false or r is true
• then it must be the case that q is true or r is true.

This line of reasoning is formalized in the Resolution Tautology:

(p OR q) AND (NOT p OR r) -> q OR r

In order to apply resolution in a proof:

1. We express our hypotheses and conclusion as a product of sums (conjunctive normal form), such as those that appear in the Resolution Tautology.
2. Each maxterm in the CNF of the hypotheses becomes a clause in the proof.
3. We apply the resolution tautology to pairs of clauses, producing new clauses.
4. If we produce all the clauses of the conclusion, we have proven it.

Proof with Resolution

Given the following hypotheses:

1. If it rains, Joe brings his umbrella (r -> u)
2. If Joe has an umbrella, he doesn't get wet (u -> NOT w)
3. If it doesn't rain, Joe doesn't get wet (NOT r -> NOT w)

Prove that Joe doesn't get wet (NOT w).


We first put each hypothesis in CNF:

1. r -> u == (NOT r OR u)
2. u -> NOT w == (NOT u OR NOT w)
3. NOT r -> NOT w == (r OR NOT w)

We then use resolution on the hypotheses to derive the conclusion (NOT w):

1. NOT r OR u          Premise
2. NOT u OR NOT w      Premise
3. r OR NOT w          Premise
4. NOT r OR NOT w      L1, L2, resolution
5. NOT w OR NOT w      L3, L4, resolution
6. NOT w               L5, idempotence
7. QED

Proofs by Contradiction using Resolution

We can combine resolution with proof by contradiction (where we assert the negation of what
we wish to prove, and from that premise derive FALSE) to direct our search towards smaller and
smaller clauses, with the goal of producing FALSE.

Proof by contradiction:

(NOT p -> FALSE) == p

We use proof by contradiction to drive our search for a proof; we are looking for the smallest possible goal clause (FALSE), so any use of equivalences or resolution that brings us to simpler expressions is working towards that goal.

We can redo the previous proof (about Joe and his umbrella) using proof by contradiction with
resolution:

1. NOT r OR u          Premise
2. NOT u OR NOT w      Premise
3. r OR NOT w          Premise
4. w                   Negation of conclusion
5. NOT r OR NOT w      L1, L2, resolution
6. NOT w OR NOT w      L3, L5, resolution
7. NOT w               L6, idempotence
8. FALSE               L4, L7, resolution
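As a sketch (an illustration, not part of the original notes), the same refutation can be mechanized by representing clauses as frozensets of literals, where a literal is a (symbol, negated) pair, and resolving pairs of clauses until the empty clause (FALSE) appears:

def resolve(c1, c2):
    # Return all clauses obtained by resolving c1 and c2 on one complementary literal.
    resolvents = []
    for (sym, neg) in c1:
        if (sym, not neg) in c2:
            resolvents.append((c1 - {(sym, neg)}) | (c2 - {(sym, not neg)}))
    return resolvents

# Clauses for the umbrella example, plus the negated conclusion w.
# ('r', True) means NOT r; ('u', False) means u.
clauses = {
    frozenset({('r', True), ('u', False)}),    # NOT r OR u
    frozenset({('u', True), ('w', True)}),     # NOT u OR NOT w
    frozenset({('r', False), ('w', True)}),    # r OR NOT w
    frozenset({('w', False)}),                 # w (negation of the conclusion NOT w)
}

def refute(clauses):
    # Keep resolving until the empty clause (FALSE) is derived or nothing new appears.
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:                  # empty clause: contradiction reached
                        return True
                    new.add(r)
        if new <= clauses:                     # fixed point and no contradiction
            return False
        clauses = clauses | new

print(refute(clauses))   # True: the premises plus w are unsatisfiable, so NOT w follows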

Proof by Resolution: Example 1

If either C173 or C220 is required, then all students will take computer science. C173 and C240
are required. Prove that all students will take computer science.

We formalize the proof as follows:

P1. (C173 OR C220) -> ACS


P2. C173
P3. C240
Prove: ACS
We then rewrite our hypotheses in conjunctive normal form:
P1: (NOT C173 OR ACS) AND (NOT C220 OR ACS)
P2: C173
P3: C240
Then we use proof by contradiction, by asserting the clauses of the premises and the negation
of the conclusion, and deriving false.
1. NOT C173 OR ACS Premise
2. NOT C220 OR ACS Premise
3. C173 Premise
4. C240 Premise
5. NOT ACS Negation of conclusion
6. NOT C173 L1, L5, resolution
7. FALSE L3, L6, resolution
