Logical Agents


LOGICAL AGENTS

14S3205 - Artificial Intelligence


Guntur Petrus Boy Knight

Teknik Elektro
Fakultas Informatika dan Teknik Elektro
Institut Teknologi Del
OUTLINE
• Knowledge-based Agents
• Wampa World
• Logic
• Propositional Logic
• Knowledge Bases
• Theorem Proving
KNOWLEDGE-BASED AGENTS
• Knowledge-based agents use a process of reasoning over an internal
representation of knowledge to decide what action to take
• So far, our problem-solving agents have performed a search over states in
order to find a plan. The representation of states has been atomic
(remember uninformed and informed search)
KNOWLEDGE-BASED AGENTS
• A central component of a knowledge-based agent is a knowledge base or
KB
• A KB contains a set of sentences that are written in a knowledge
representation language. Each sentence expresses some assertion about the
world

Natural language sentence                        Knowledge representation language sentence

Hoth is a planet                                 planet(hoth)

Hoth is habitable                                habitable(hoth)

Hoth is far from its sun                         far_from(hoth, sol)

If a planet is far from its sun then it is cold  planet(x) and sun(y) and far_from(x,y) → cold(x)
KNOWLEDGE-BASED AGENTS
• There are two kinds of sentences:
• Axioms – sentences that are given
• Derived sentences – new sentences that are derived from other sentences
• The process of deriving new sentences from old sentences is called inference
• A KB can initially contain some background information about the world,
and a knowledge-based agent can add to the information in the KB through
its observations of the world.
• In addition to asserting new knowledge into its KB, a knowledge-based
agent can also query the KB and ask it to derive new knowledge in order to
select what action it should take.
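The TELL/ASK cycle described above can be sketched in code. This is a minimal illustration, not the textbook's pseudocode; the class name, the tuple encoding of percepts, and the stubbed action policy are assumptions made for the example.

```python
# Sketch of a knowledge-based agent's control loop: TELL the KB what was
# perceived, ASK it what to do, then TELL it which action was taken.

class KnowledgeBasedAgent:
    def __init__(self):
        self.kb = set()   # knowledge base: a set of sentences
        self.t = 0        # time counter

    def tell(self, sentence):
        """Assert a new sentence into the KB."""
        self.kb.add(sentence)

    def ask(self, query):
        """Return True if the KB entails the query (stubbed here)."""
        return query in self.kb  # a real agent runs an inference algorithm

    def agent_program(self, percept):
        # 1. TELL the KB what the agent perceives at time t
        self.tell(("percept", tuple(percept), self.t))
        # 2. ASK the KB which action to take
        action = self.choose_action()
        # 3. TELL the KB that the action was taken
        self.tell(("action", action, self.t))
        self.t += 1
        return action

    def choose_action(self):
        # Placeholder policy: a real agent would query the KB here
        return "Forward"

agent = KnowledgeBasedAgent()
print(agent.agent_program(["None"] * 5))  # "Forward"
```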
EXAMPLE : WAMPA WORLD
(MODIFICATION OF WUMPUS WORLD)
• Our knowledge-based agent, R2D2, explores a cave
consisting of rooms connected by passageways.
• Lurking somewhere in the cave is the Wampa (look at
the picture on the right), a beast that eats any agent
that enters its room
• Some rooms contain bottomless pits that trap any
agent that wanders into the room.
• In one room is Master Luke
• The goal is :
• Collect Luke
• Exit the world without being eaten
WAMPA WORLD
Environment:
• A 4x4 grid of rooms.
• The agent starts in the square [1,1].
• Wampa and Luke are randomly placed in
other squares.
• Each square can be a pit with 20% probability
WAMPA WORLD
Performance measure:
• +1000 points for rescuing Luke and leaving the
cave
• -1000 for falling into a pit or being eaten by the
Wampa
• -1 for each action taken
• -10 for using up your blaster fire
WAMPA WORLD
Actuators:
• R2 can move Forward, Turn Left, Turn Right
• The agent dies if it moves into a pit or a Wampa
square
• Grab can pick up Luke.
• Shoot fires blaster bolt in a straight line in the
direction that R2D2 is facing.
• If the blaster hits the Wampa, it dies. R2 only
has enough power for one shot
• Climb gets R2 out of the cave but only works
in [1, 1]
WAMPA WORLD
Sensors:
• In each square adjacent to the Wampa, R2D2’s
olfactory sensor perceives a Stench
• In each square adjacent to a pit, R2D2’s wind
sensor perceives a Breeze
• In the square with Luke, R2D2’s audio sensor
perceives a Gasp
• When R2D2 walks into a wall it perceives a
Bump
• When the Wampa is killed , R2D2’s audio sensor
perceives a Scream
• Percepts arrive as a vector [Stench, Breeze, Gasp, Bump, Scream],
e.g. Percept=[Stench, Breeze, None, None, None]
WAMPA WORLD
• Deterministic, discrete, static, single-agent
(Wampa doesn’t move)
• Sequential because reward doesn’t come for
many steps
• Partially observable because some parts of the
state are not directly perceivable:
Location of Luke, Wampa, and pits aren’t
directly observable
WAMPA WORLD WALKTHROUGH
• R2D2 starts in [1,1]
• Percept=[None, None, None, None, None]
• What can we conclude about [1,2] and [2,1]?
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [1,2]
• Percept=[Stench, None, None, None, None]
• What can we conclude about [1,3] and [2,2] from
the Stench?
WAMPA WORLD WALKTHROUGH
• R2D2 moves back to [1,1] and gets the same
percept vector as before
• Percept=[None, None, None, None, None]
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,1]
• Percept=[None, Breeze, None, None, None]
• What can we conclude about [3,1] and [2,2]
based on the Breeze?
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,1]
• Percept=[None, Breeze, None, None, None]
• What can we conclude about [2,2] and [1,3]
based on the lack of a Stench here?
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,2]
• Percept=[None, None, None, None, None]
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,3]
• Percept=[Stench, Breeze, Gasp, None, None]
• What can we conclude about [2,4] or [3,3]?
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,3]
• Percept=[Stench, Breeze, Gasp, None, None]
• We heard a Gasp, so Luke is here!
LOGIC
Logic can serve as a general class of representations for knowledge-based
agents. Here we are going to examine Propositional Logic
• KB consists of sentences in the representation language
• Representation language has a syntax that specifies sentences that are well-
formed
• A logic defines the semantics of the sentences, which is their meaning
• The semantics defines the truth of each sentence with respect to a possible
world, which we will often call a model.
POSSIBLE WORLDS AND MODELS
• A model is a mathematical abstraction that fixes a truth value, true or
false, for every sentence.
• If sentence 𝛂 is true in model m then we say
• m satisfies 𝛂, or
• m is a model of 𝛂
• We use the notation M(𝛂) to mean the set of all models of 𝛂

For instance, 𝛂 could be a sentence that means “there is no pit in [2,2]”. In


that case, M(𝛂) would be all instances of Wampa World where [2,2] doesn’t
have a pit.
LOGICAL ENTAILMENT
• Once we have a notion of truth, we can start to define logical reasoning. Logical
reasoning involves the entailment relation between sentences.
• In plain English, entailment is the idea that a sentence follows logically from another
sentence
• To write sentence 𝛂 entails sentence β in mathematical notation we use the ⊨
symbol:
𝛂⊨β
• The definition is
𝛂⊨β if and only if M(𝛂) ⊆ M(β)
This means that 𝛂 is more specific than, or stronger than, β. For instance, β could mean
that “The agent is a robot” and 𝛂 could mean “The agent is an astromech”.
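The definition can be checked mechanically for small sentences. The sketch below assumes a toy language with just the symbols P and Q; it builds M(α) and M(β) by enumeration and tests the subset relation:

```python
from itertools import product

# Each model assigns True/False to the symbols P and Q.
symbols = ["P", "Q"]
models = [dict(zip(symbols, values)) for values in product([True, False], repeat=2)]

alpha = lambda m: m["P"] and m["Q"]   # alpha = P AND Q (more specific)
beta  = lambda m: m["P"]              # beta  = P       (weaker)

def model_set(sentence):
    """M(sentence): the set of models in which the sentence is true."""
    return {frozenset(m.items()) for m in models if sentence(m)}

# alpha entails beta iff M(alpha) is a subset of M(beta)
print(model_set(alpha) <= model_set(beta))  # True:  P AND Q entails P
print(model_set(beta) <= model_set(alpha))  # False: P does not entail P AND Q
```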
KNOWLEDGE BASE
The KB can be thought of as a set of sentences
𝜶𝟏 =“There is no pit in [1,2]”
𝜶𝟐 =“There is a pit in [3,1]”
𝜶𝟑 =“There is a wampa in [1,3]”

The KB is false in models that contradict what the agent knows. For example,
the KB is false in any model m where [1,2] contains a pit.

Model checking is the process of enumerating all possible worlds that are
compatible with the KB. Here, M(KB) ⊆ 𝑴(𝜶𝟏 )
LOGICAL INFERENCE
• Entailment can be applied to derive conclusions, which is the process of
logical inference.
• We can think about the consequences of a KB as a large set of additional
sentences that are entailed given the sentences that have been added to
the KB
• We would like to design inference algorithms to enumerate these sentences.
• When an inference algorithm i allows us to conclude that 𝛂 is true, then we
write:
𝑲𝑩 ⊢𝒊 𝜶
“𝜶 is derived from KB by i”
PROPOSITIONAL LOGIC
• Atomic sentences are represented with a single propositional symbol. A
propositional symbol stands for a statement that can be true or false
• For example, 𝑾𝟏,𝟑 is a propositional symbol that we choose to stand for:
“There is a Wampa at location [1,3]”
• That statement can be true or false.
• The symbol FacingEast could stand for “The agent is currently facing East”.
PROPOSITIONAL LOGIC
Complex sentences are constructed from simpler ones using parentheses and
logical connectives
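One way to make this concrete is to encode sentences as nested tuples and evaluate them recursively. The encoding below (connective name first, operands after) is an illustrative design choice, not a standard interface:

```python
# A sentence is either a symbol (str) or a tuple (connective, operand, ...).

def pl_true(sentence, model):
    """Evaluate a propositional sentence in a model (dict: symbol -> bool)."""
    if isinstance(sentence, str):          # atomic sentence
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return all(pl_true(a, model) for a in args)
    if op == "or":
        return any(pl_true(a, model) for a in args)
    if op == "implies":
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "iff":
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown connective: {op}")

# B11 <=> (P12 v P21), evaluated in a model with a pit in [1,2] and a breeze in [1,1]
r2 = ("iff", "B11", ("or", "P12", "P21"))
model = {"B11": True, "P12": True, "P21": False}
print(pl_true(r2, model))  # True
```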
TRUTH TABLES: NEGATION

𝑷 ¬𝑷
True False
False True
TRUTH TABLES: CONJUNCTION

𝑷 𝑸 𝑷⋀𝑸
True True True
True False False
False True False
False False False
P and Q are true:
“Wampas smell bad and Tauntauns smell bad.” (This sentence is true)
P is true and Q is false:
“Wampas smell bad and Tauntauns are robots.” (This sentence is false)
P is false and Q is false:
“Wampas smell good and Tauntauns are robots.” (This sentence is false)
TRUTH TABLES: DISJUNCTION

𝑷 𝑸 𝑷⋁𝑸
True True True
True False True
False True True
False False False
P and Q are true:
“Wampas smell bad or Tauntauns smell bad.” (This sentence is true)
P is true and Q is false:
“Wampas smell bad or Tauntauns are robots.” (This sentence is true)
P is false and Q is false:
“Wampas smell good or Tauntauns are robots.” (This sentence is false)
TRUTH TABLES: CONDITIONAL

𝑷 𝑸 𝑷⇒𝑸
True True True
True False False
False True True
False False True
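The table above can be generated mechanically from the standard definition P ⇒ Q ≡ ¬P ∨ Q; a short sketch:

```python
from itertools import product

# Build the truth table for the conditional, defined as (not P) or Q.
rows = [(p, q, (not p) or q) for p, q in product([True, False], repeat=2)]

print(f"{'P':<7}{'Q':<7}{'P => Q'}")
for p, q, r in rows:
    print(f"{str(p):<7}{str(q):<7}{r}")
```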
TRUTH TABLES: CONDITIONAL
To understand why the conditional is defined this way assume that I tell you
this P ⟹ Q:
“If you join the dark side then we will rule the galaxy together”

In which of these four scenarios did I tell a lie?


• You join the dark side, and we rule the galaxy together. (Both P and Q are True)
• You join the dark side, but we don’t rule the galaxy together. (P is True, Q is False)
• You don’t join the dark side, but we still rule the galaxy together. (P is False, Q is True)
• You don’t join the dark side, and we don’t rule the galaxy together. (P is False, Q is False)
TRUTH TABLES: BICONDITIONAL

𝑷 𝑸 𝑷⇔𝑸
True True True
True False False
False True False
False False True
TRUTH TABLES: BICONDITIONAL
I tell you:
“The Death Star can be destroyed if and only if your missile hits its vulnerable spot.”
1. The Death Star is destroyed, and you hit the vulnerable spot. (Both P and Q
are True)
2. The Death Star is destroyed, but you didn’t hit its vulnerable spot. (P is True,
Q is False)
3. The Death Star isn’t destroyed, but you did hit its vulnerable spot. (P is False,
Q is True)
4. The Death Star isn’t destroyed, and you also didn’t hit its vulnerable spot. (P
is False, Q is False)
A SIMPLE KNOWLEDGE BASE
We can construct a knowledge base for Wampa World.
Let’s start with a set of symbols for each location [x,y]:

𝑃𝑥,𝑦 is true if there is a pit in [x,y]
𝑊𝑥,𝑦 is true if there is a Wampa in [x,y]
𝐵𝑥,𝑦 is true if there is a breeze in [x,y]
𝑆𝑥,𝑦 is true if there is a stench in [x,y]
𝐿𝑥,𝑦 is true if the agent is in [x,y]
A SIMPLE KNOWLEDGE BASE
We can construct sentences out of these symbols using our logical
connectives. We’ll label each sentence:
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )

These are true in all Wampa Worlds
A SIMPLE KNOWLEDGE BASE
We can construct sentences out of these symbols using our logical
connectives. We’ll label each sentence:
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )

What if we perceive the absence of a breeze in [1,1] and the presence of a
breeze in [2,1]?
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
A SIMPLE KNOWLEDGE BASE
We can construct sentences out of these symbols using our logical
connectives. We’ll label each sentence:
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1

Can we mechanically combine the sentences in our KB to prove that a pit
exists at (or is absent from) any location?
POSSIBLE WORLDS

A model is a mathematical abstraction that fixes
a truth value, true or false, for every sentence.
If sentence 𝛂 is true in model m then we say
• m satisfies 𝛂, or
• m is a model of 𝛂
We use the notation M(𝛂) to mean the set of
all models of 𝛂
POSSIBLE WORLDS
𝛂1 = “There is no pit in [1,2]”
POSSIBLE WORLDS
𝛂1 = “There is no pit in [1,2]”
POSSIBLE WORLDS
𝛂2 = “There is no pit in [2,2]”
POSSIBLE WORLDS

𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
KB = 𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
POSSIBLE WORLDS
β ⊨ 𝛼 if and only if M(β) ⊆ M(𝛼)
“β entails 𝛼 if and only if in every model
in which β is true, 𝛂 is also true”

Does our KB entail that there is no pit in [1,2]?


KB⊨ 𝛂𝟏 if and only if M(KB) ⊆ M(𝛂𝟏 )
POSSIBLE WORLDS
β ⊨ 𝛼 if and only if M(β) ⊆ M(𝛼)
“β entails 𝛼 if and only if in every model
in which β is true, 𝛂 is also true”

Does our KB entail that there is no pit in [1,2]?


KB⊨ 𝛂𝟏 if and only if M(KB) ⊆ M(𝛂𝟏 )

𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
KB= 𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1

𝛂1 = “There is no pit in [1,2]”


KB⊨ 𝛂𝟏
POSSIBLE WORLDS
β ⊨ 𝛼 if and only if M(β) ⊆ M(𝛼)
“β entails 𝛼 if and only if in every model
in which β is true, 𝛂 is also true”

Does our KB entail that there is no pit in [2,2]?


KB⊨ 𝛂𝟐 if and only if M(KB) ⊆ M(𝛂𝟐 )

𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
KB= 𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1

𝛂2 = “There is no pit in [2,2]”


The KB does not entail 𝛂𝟐 : in some models
where the KB is true, 𝛂𝟐 is false
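The two entailment checks above can be reproduced by brute-force model checking: enumerate every assignment to the symbols, keep the assignments that satisfy R1 through R5, and test whether α holds in all of them. A small sketch (symbol names are flattened, e.g. P12 for 𝑃1,2):

```python
from itertools import product

symbols = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]

def kb_true(m):
    """R1..R5 from the Wampa World knowledge base."""
    r1 = not m["P11"]
    r2 = m["B11"] == (m["P12"] or m["P21"])
    r3 = m["B21"] == (m["P11"] or m["P22"] or m["P31"])
    r4 = not m["B11"]
    r5 = m["B21"]
    return r1 and r2 and r3 and r4 and r5

# Enumerate all 2^7 assignments and keep the models of the KB.
kb_models = [dict(zip(symbols, v))
             for v in product([True, False], repeat=len(symbols))
             if kb_true(dict(zip(symbols, v)))]

alpha1 = lambda m: not m["P12"]   # "there is no pit in [1,2]"
alpha2 = lambda m: not m["P22"]   # "there is no pit in [2,2]"

print(len(kb_models))                     # 3 models satisfy the KB
print(all(alpha1(m) for m in kb_models))  # True:  KB entails alpha1
print(all(alpha2(m) for m in kb_models))  # False: KB does not entail alpha2
```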
SOUNDNESS AND COMPLETENESS

• Sound – the inference algorithm should only derive entailed sentences


• Complete – an inference algorithm is complete if it can derive all sentences
that are entailed
THEOREM PROVING
Review: Propositional Logic
• Complex sentences are constructed from simpler ones using parentheses
and logical connectives
LOGICAL EQUIVALENCE
• Two sentences 𝛂 and β are logically equivalent if they are true in the same
set of models. We write this as
𝜶≡𝜷

• An alternate definition of logical equivalence is that two sentences 𝛂 and β
are logically equivalent if and only if they entail each other:
𝜶 ≡ 𝜷 if and only if 𝜶 ⊨ 𝜷 and 𝜷 ⊨ 𝜶
STANDARD LOGICAL
EQUIVALENCES
INFERENCE RULES
Inference Rules can be used to derive proofs. Here’s the notation:
Whenever sentences of
the form 𝜶 ⇒ 𝜷 and 𝜶
are given…

𝜶⇒𝜷,𝜶
𝜷

…then the sentence 𝜷


can be inferred…
INFERENCE RULES
Modus Ponens:

𝜶⇒𝜷, 𝜶
𝜷

And Elimination:

𝜶⋀𝜷
𝜶

Biconditional Elimination:

𝜶⇔𝜷
(𝜶⇒𝜷)∧(𝜷⇒𝜶)

Biconditional Elimination also runs in the reverse direction:

(𝜶⇒𝜷)∧(𝜷⇒𝜶)
𝜶⇔𝜷
INFERENCE EXAMPLE
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
• KB = 𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1

Biconditional Elimination:
𝜶⇔𝜷
(𝜶⇒𝜷)∧(𝜷⇒𝜶)

• Apply Biconditional Elimination to 𝑅2 to get 𝑅6 :
𝑹𝟔 : (𝑩𝟏,𝟏 ⇒ (𝑷𝟏,𝟐 ⋁𝑷𝟐,𝟏 )) ∧ ((𝑷𝟏,𝟐 ⋁𝑷𝟐,𝟏 ) ⇒ 𝑩𝟏,𝟏 )
INFERENCE EXAMPLE
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
KB = 𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )) ∧ ((𝑃1,2 ⋁𝑃2,1 ) ⇒ 𝐵1,1 )

And Elimination:
𝜶∧𝜷
𝜶

Apply And Elimination to 𝑅6 to get 𝑅7 :
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )
INFERENCE EXAMPLE
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
KB = 𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )) ∧ ((𝑃1,2 ⋁𝑃2,1 ) ⇒ 𝐵1,1 )
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )

Logical equivalence for contrapositives:
(𝜶 ⇒ 𝜷)
(¬𝜷 ⇒ ¬𝜶)

Applying the contrapositive equivalence to 𝑅7 gives 𝑅8 :
𝑅8 : ¬𝐵1,1 ⇒ ¬(𝑃1,2 ⋁𝑃2,1 )


INFERENCE EXAMPLE
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
KB = 𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )) ∧ ((𝑃1,2 ⋁𝑃2,1 ) ⇒ 𝐵1,1 )
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )
𝑅8 : ¬𝐵1,1 ⇒ ¬(𝑃1,2 ⋁𝑃2,1 )

Modus Ponens:
𝜶 ⇒ 𝜷, 𝜶
𝜷

Apply Modus Ponens to 𝑅4 and 𝑅8 to get:
𝑅9 : ¬(𝑃1,2 ⋁𝑃2,1 )
INFERENCE EXAMPLE
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
KB = 𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )) ∧ ((𝑃1,2 ⋁𝑃2,1 ) ⇒ 𝐵1,1 )
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )
𝑅8 : ¬𝐵1,1 ⇒ ¬(𝑃1,2 ⋁𝑃2,1 )
𝑅9 : ¬(𝑃1,2 ⋁𝑃2,1 )

De Morgan’s Rule:
¬(𝜶 ∨ 𝜷)
(¬𝜶 ∧ ¬𝜷)

Apply De Morgan’s rule to 𝑅9 to get:
𝑅10 : ¬𝑃1,2 ∧ ¬𝑃2,1


INFERENCE EXAMPLE
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
KB = 𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )) ∧ ((𝑃1,2 ⋁𝑃2,1 ) ⇒ 𝐵1,1 )
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )
𝑅8 : ¬𝐵1,1 ⇒ ¬(𝑃1,2 ⋁𝑃2,1 )
𝑅9 : ¬(𝑃1,2 ⋁𝑃2,1 )
𝑅10 : ¬𝑃1,2 ∧ ¬𝑃2,1

And Elimination:
𝜶∧𝜷
𝜶

Apply And Elimination to 𝑅10 to get:
𝑅11 : ¬𝑃1,2
𝑅12 : ¬𝑃2,1
INFERENCE EXAMPLE
KB =
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )) ∧ ((𝑃1,2 ⋁𝑃2,1 ) ⇒ 𝐵1,1 )
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )
𝑅8 : ¬𝐵1,1 ⇒ ¬(𝑃1,2 ⋁𝑃2,1 )
𝑅9 : ¬(𝑃1,2 ⋁𝑃2,1 )
𝑅10 : ¬𝑃1,2 ∧ ¬𝑃2,1
𝑅11 : ¬𝑃1,2 (No pit in [1,2].)
𝑅12 : ¬𝑃2,1 (No pit in [2,1].)
SEARCH FOR A PROOF

We can use search algorithms to find a sequence of steps that constitutes a


proof!
• Initial State: the initial knowledge base.
• Actions: all inference rules applied to all sentences that match the top half
of the inference rule.
• Result: Add the sentence on the bottom half to the KB.
• Goal: The goal is a state that contains the sentence that we are trying to
prove.
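The search formulation above can be sketched as a breadth-first search whose states are sets of sentences. Only two inference rules (Modus Ponens and And-Elimination) are wired in here, and the tuple encoding of sentences is an assumption made for the example:

```python
from collections import deque

# States are frozensets of sentences; a sentence is a symbol (str) or a
# tuple such as ("implies", a, b) or ("and", a, b). Illustrative sketch only.

def applicable_inferences(kb):
    """All sentences derivable in one step by And-Elimination or Modus Ponens."""
    new = set()
    for s in kb:
        if isinstance(s, tuple) and s[0] == "and":          # And-Elimination
            new.update(s[1:])
        if isinstance(s, tuple) and s[0] == "implies" and s[1] in kb:
            new.add(s[2])                                   # Modus Ponens
    return new - kb

def prove(kb, query):
    """Breadth-first search from the initial KB to a state containing query."""
    frontier = deque([frozenset(kb)])
    seen = set()
    while frontier:
        state = frontier.popleft()
        if query in state:              # goal test
            return True
        if state in seen:
            continue
        seen.add(state)
        for s in applicable_inferences(state):
            frontier.append(state | {s})   # result: add the derived sentence
    return False

kb = {("implies", "A", ("and", "B", "C")), "A"}
print(prove(kb, "C"))  # True: from A and A => (B AND C), derive B AND C, then C
```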
SEARCH FOR A PROOF

• Initial State:
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1

• Action (Biconditional Elimination applied to 𝑅2 ):
𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
(𝐵1,1 ⇒(𝑃1,2 ⋁𝑃2,1 ))∧((𝑃1,2 ⋁𝑃2,1 )⇒𝐵1,1 )

• Result:
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )) ∧ ((𝑃1,2 ⋁𝑃2,1 ) ⇒ 𝐵1,1 )
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ⋁𝑃2,1 )

• Goal: ¬𝑃1,2
MONOTONICITY IMPLIES
SOUNDNESS
• Monotonicity says that the set of entailed sentences can only increase as
information is added to the KB:
if KB⊨𝛂 then KB ⋀ β ⊨ 𝛂
• If we add additional information to the KB, it doesn’t invalidate any other
info that we’ve already inferred
• Therefore the procedure is sound: it will only derive entailed sentences.
PROOF BY RESOLUTION
• But is it complete? Can this method derive all sentences that are entailed? If
we used iterative deepening search, then we would find any reachable
goal. But what if we were missing an important inference rule?
• There is a single inference rule, resolution, which yields a complete inference
algorithm when coupled with any complete search algorithm
• Let’s see our agent again
PROOF BY RESOLUTION
The agent returns from [2,1] to [1,1] and then goes to [1,2], where it perceives a
stench, but no breeze. We add the following facts to the knowledge base:
𝑅13 : ¬𝐵1,2
𝑅14 : 𝐵1,2 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃1,3 )
we can now derive the absence of pits in [2,2] and [1,3] (remember that [1,1] is
already known to be pitless)
𝑅15 : ¬𝑃2,2
𝑅16 : ¬𝑃1,3
We can also apply biconditional elimination to 𝑅3 , followed by Modus Ponens with 𝑅5 ,
to obtain the fact that there is a pit in [1,1], [2,2], or [3,1]:
𝑅17 : 𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1
PROOF BY RESOLUTION
• Now comes the first application of the resolution rule: the literal ¬𝑃2,2 in 𝑅15
resolves with the literal 𝑃2,2 in 𝑅17 to give the resolvent
𝑅18 : 𝑃1,1 ⋁𝑃3,1
• Using 𝑅18 and 𝑅1 we have
𝑅19 : 𝑃3,1
• In English: if there’s a pit in [1,1] or [3,1] and it’s not in [1,1], then it’s in [3,1].
• These last two inference steps are examples of the unit resolution inference
rule
RESOLUTION
The unit resolution rule can be generalized to the full resolution rule
• If 𝒍𝒊 and 𝒎𝒋 are complementary literals, which means that one is the negation
of the other, then we can eliminate them via resolution:

𝒍𝟏 ∨ ⋯ ∨ 𝒍𝒌 , 𝒎𝟏 ∨ ⋯ ∨ 𝒎𝒏
𝒍𝟏 ∨ ⋯ ∨ 𝒍𝒊−𝟏 ∨ 𝒍𝒊+𝟏 ∨ ⋯ ∨ 𝒍𝒌 ∨ 𝒎𝟏 ∨ ⋯ ∨ 𝒎𝒋−𝟏 ∨ 𝒎𝒋+𝟏 ∨ ⋯ ∨ 𝒎𝒏
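The rule can be implemented directly on clauses represented as sets of literals. The (symbol, polarity) encoding below is an assumption made for illustration:

```python
# A clause is a frozenset of literals; a literal is (symbol, polarity),
# e.g. ("P22", False) stands for the negative literal NOT P22.

def resolve(c1, c2):
    """Return all resolvents of clauses c1 and c2 (full resolution rule)."""
    resolvents = []
    for (sym, pos) in c1:
        if (sym, not pos) in c2:   # complementary pair l_i, m_j
            merged = (c1 - {(sym, pos)}) | (c2 - {(sym, not pos)})
            resolvents.append(frozenset(merged))
    return resolvents

# R15: NOT P22  resolved with  R17: P11 v P22 v P31  gives  P11 v P31
r15 = frozenset({("P22", False)})
r17 = frozenset({("P11", True), ("P22", True), ("P31", True)})
result = resolve(r15, r17)
print(result == [frozenset({("P11", True), ("P31", True)})])  # True
```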
CONJUNCTIVE NORMAL FORM
• The resolution rule applies only to clauses (that is, disjunctions of literals), so it
would seem to be relevant only to knowledge bases and queries consisting
of clauses. How, then, can it lead to a complete inference procedure for all
of propositional logic?
• The answer is that every sentence of propositional logic is logically
equivalent to a conjunction of clauses
• A sentence expressed as a conjunction of clauses is said to be in conjunctive
normal form or CNF
CONJUNCTIVE NORMAL FORM
We now describe a procedure for converting to CNF.
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α).
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
3. CNF requires ¬ to appear only in literals, so we “move ¬ inwards” by
repeated application of De Morgan’s rules and double-negation elimination
4. Now we have a sentence containing nested ∧ and ∨ operators applied to
literals; apply the distributivity of ∨ over ∧ to obtain a conjunction of clauses
CONJUNCTIVE NORMAL FORM
We illustrate the procedure by converting the sentence 𝑩𝟏,𝟏 ⇔(𝑷𝟏,𝟐 ⋁𝑷𝟐,𝟏 ) into
CNF. The steps are as follows:
1. (𝑩𝟏,𝟏 ⇒ (𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 )) ∧ ((𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ⇒ 𝑩𝟏,𝟏 )
2. (¬ 𝑩𝟏,𝟏 ∨ 𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ∧ (¬(𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ∨ 𝑩𝟏,𝟏 )
3. (¬ 𝑩𝟏,𝟏 ∨ 𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ∧ ((¬ 𝑷𝟏,𝟐 ∧ ¬ 𝑷𝟐,𝟏 ) ∨ 𝑩𝟏,𝟏 )
4. (¬ 𝑩𝟏,𝟏 ∨ 𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ∧ (¬ 𝑷𝟏,𝟐 ∨ 𝑩𝟏,𝟏 ) ∧ (¬ 𝑷𝟐,𝟏 ∨ 𝑩𝟏,𝟏 )
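The four steps can be sketched as code over a tuple encoding of sentences (connective name first, operands after). This converter handles binary connectives only, which is enough for the example above; it is an illustrative sketch, not production code:

```python
# CNF conversion: eliminate <=> and =>, push NOT inwards, distribute OR over AND.

def to_cnf(s):
    return distribute(move_not_in(eliminate(s)))

def eliminate(s):
    """Steps 1 and 2: rewrite <=> and => in terms of and/or/not."""
    if isinstance(s, str):
        return s
    op, *args = s
    args = [eliminate(a) for a in args]
    if op == "iff":
        a, b = args
        return ("and", ("or", ("not", a), b), ("or", ("not", b), a))
    if op == "implies":
        a, b = args
        return ("or", ("not", a), b)
    return (op, *args)

def move_not_in(s):
    """Step 3: push negations down to literals (De Morgan, double negation)."""
    if isinstance(s, str):
        return s
    op, *args = s
    if op == "not":
        (a,) = args
        if isinstance(a, str):
            return ("not", a)                                  # already a literal
        if a[0] == "not":
            return move_not_in(a[1])                           # double negation
        if a[0] == "and":                                      # De Morgan
            return ("or", *[move_not_in(("not", x)) for x in a[1:]])
        if a[0] == "or":                                       # De Morgan
            return ("and", *[move_not_in(("not", x)) for x in a[1:]])
    return (op, *[move_not_in(a) for a in args])

def distribute(s):
    """Step 4: distribute OR over AND."""
    if isinstance(s, str) or s[0] == "not":
        return s
    op, a, b = s[0], distribute(s[1]), distribute(s[2])
    if op == "or":
        if not isinstance(a, str) and a[0] == "and":
            return ("and", distribute(("or", a[1], b)), distribute(("or", a[2], b)))
        if not isinstance(b, str) and b[0] == "and":
            return ("and", distribute(("or", a, b[1])), distribute(("or", a, b[2])))
    return (op, a, b)

# B11 <=> (P12 v P21), the example from the slide
s = ("iff", "B11", ("or", "P12", "P21"))
print(to_cnf(s))
```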
RESOLUTION ALGORITHM
• Inference procedures based on resolution work using the principle of proof
by contradiction
To show KB ⊨ 𝛂, we show that KB ⋀ ¬𝛂 is unsatisfiable
• Here are the steps:
1. Convert (KB ⋀ ¬𝛂) into CNF
2. Apply the resolution rule to the resulting clauses
3. Resolve every pair that contains complementary literals to produce a new
clause
4. If that new clause isn’t already present, then add it to the set
5. Continue the process until one of two things happens:
1. There are no new clauses that can be added. In this case, the KB does not entail 𝛂
2. Two clauses resolve to yield the empty clause, in which case the KB does entail 𝛂
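The steps above can be sketched as follows. Clauses are sets of (symbol, polarity) literals, the CNF conversion is assumed to have been done already, and deriving the empty clause signals a contradiction:

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of (symbol, polarity) literals)."""
    out = []
    for (sym, pos) in c1:
        if (sym, not pos) in c2:
            out.append(frozenset((c1 - {(sym, pos)}) | (c2 - {(sym, not pos)})))
    return out

def pl_resolution(clauses):
    """Return True if the clause set is unsatisfiable (empty clause derivable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:               # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:              # no new clauses: no contradiction
            return False
        clauses |= new

# KB in CNF: NOT B11, plus the three clauses from B11 <=> (P12 v P21).
kb = [frozenset({("B11", False)}),
      frozenset({("B11", False), ("P12", True), ("P21", True)}),
      frozenset({("P12", False), ("B11", True)}),
      frozenset({("P21", False), ("B11", True)})]

# Query alpha = NOT P12: add its negation P12 and look for a contradiction.
neg_alpha = frozenset({("P12", True)})
print(pl_resolution(kb + [neg_alpha]))  # True: the KB entails NOT P12
```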
REFERENCES
• Russell, S. and Norvig, P. (2010). Artificial Intelligence: A Modern
Approach, 3rd Ed.
• Artificial Intelligence slide presentation, University of Pennsylvania
(CIS 391, Fall 2015)
