Logical Agents
Teknik Elektro
Fakultas Informatika dan Teknik Elektro
Institut Teknologi Del
OUTLINE
• Knowledge-based Agents
• Wampa World
• Logic
• Propositional Logic
• Knowledge Bases
• Theorem Proving
KNOWLEDGE-BASED AGENTS
• Knowledge-based agents use a process of reasoning over an internal
representation of knowledge to decide what action to take
• So far, our problem-solving agents have performed a search over states in
order to find a plan. The representation of states has been atomic
(remember uninformed and informed search)
KNOWLEDGE-BASED AGENTS
• A central component of a knowledge-based agent is a knowledge base or
KB
• A KB contains a set of sentences that are written in a knowledge
representation language. Each sentence expresses some assertion about the
world, for example:
If a planet is far from its sun then it is cold:
planet(x) ∧ sun(y) ∧ far_from(x, y) → cold(x)
KNOWLEDGE-BASED AGENTS
• There are two kinds of sentences:
• Axioms – sentences that are taken as given
• Derived sentences – new sentences that are derived from other sentences
• The process of deriving new sentences from old sentences is called inference
• A KB can initially contain some background information about the world,
and a knowledge-based agent can add to the information in the KB through
its observations of the world.
• In addition to asserting new knowledge into its KB, a knowledge-based
agent can also query the KB and ask it to derive new knowledge in order to
select what action it should take.
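This TELL/ASK cycle can be sketched in a few lines, following the generic knowledge-based agent from AIMA. This is a minimal, runnable illustration, not the Wampa World agent itself: the KB class below is a hypothetical stand-in whose ask method returns a canned action instead of doing real inference.

```python
# A minimal sketch of the TELL/ASK loop of a knowledge-based agent.
# KB is a hypothetical stand-in: it stores sentences and answers queries
# with a canned action rather than performing real inference.

class KB:
    def __init__(self, axioms):
        self.sentences = list(axioms)   # background knowledge (axioms)

    def tell(self, sentence):
        self.sentences.append(sentence)

    def ask(self, query):
        # A real KB would run inference here; this stub always says "Forward".
        return "Forward"

class KBAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0                       # time step counter

    def step(self, percept):
        self.kb.tell(("percept", percept, self.t))      # assert what was sensed
        action = self.kb.ask(("best-action", self.t))   # query for an action
        self.kb.tell(("action", action, self.t))        # record the chosen action
        self.t += 1
        return action

agent = KBAgent(KB(axioms=["no pit in [1,1]"]))
print(agent.step(["Stench", None, None, None, None]))   # -> "Forward"
```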
EXAMPLE: WAMPA WORLD
(A MODIFICATION OF WUMPUS WORLD)
• Our knowledge-based agent, R2D2, explores a cave
consisting of rooms connected by passageways.
• Lurking somewhere in the cave is the Wampa, a beast
that eats any agent that enters its room
• Some rooms contain bottomless pits that trap any
agent that wanders into the room.
• In one room is Master Luke
• The goals are:
• Collect Luke
• Exit the world without being eaten
WAMPA WORLD
Environment:
• A 4x4 grid of rooms.
• The agent starts in the square [1,1].
• Wampa and Luke are randomly placed in
other squares.
• Each square contains a pit with probability 0.2
WAMPA WORLD
Performance measure:
• +1000 points for rescuing Luke and leaving the
cave
• -1000 for falling into a pit or being eaten by the
Wampa
• -1 for each action taken
• -10 for using up your blaster fire
WAMPA WORLD
Actuators:
• R2 can move Forward, Turn Left, Turn Right
• The agent dies if it moves into a pit or into the
Wampa’s square
• Grab can pick up Luke.
• Shoot fires blaster bolt in a straight line in the
direction that R2D2 is facing.
• If the blaster hits the Wampa, it dies. R2 only
has enough power for one shot
• Climb gets R2 out of the cave but only works
in [1, 1]
WAMPA WORLD
Sensors:
• In each square adjacent to the Wampa, R2D2’s
olfactory sensor perceives a Stench
• In each square adjacent to a pit, R2D2’s wind
sensor perceives a Breeze
• In the square with Luke, R2D2’s audio sensor
perceives a Gasp
• When R2D2 walks into a wall it perceives a
Bump
• When the Wampa is killed , R2D2’s audio sensor
perceives a Scream
• Percepts arrive as a five-element vector
[Stench, Breeze, Gasp, Bump, Scream], e.g.
Percept=[Stench, Breeze, None, None, None]
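To make the percept vector concrete, here is a minimal sketch of how it could be computed from a world layout. The helper names and the way the world is passed in (wampa square, pit set, Luke's square, bump/scream flags) are assumptions for illustration, not part of the slides.

```python
# Sketch: compute the [Stench, Breeze, Gasp, Bump, Scream] percept vector.

def adjacent(square):
    """Squares orthogonally adjacent to (x, y) inside the 4x4 grid."""
    x, y = square
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in candidates if 1 <= a <= 4 and 1 <= b <= 4]

def percept(square, wampa, pits, luke, bumped=False, scream=False):
    stench = "Stench" if wampa in adjacent(square) else None
    breeze = "Breeze" if any(p in adjacent(square) for p in pits) else None
    gasp = "Gasp" if square == luke else None
    bump = "Bump" if bumped else None
    heard = "Scream" if scream else None
    return [stench, breeze, gasp, bump, heard]

# Example: Wampa in [1,3], pit in [3,1], Luke in [2,3]; standing in [1,2]:
print(percept((1, 2), (1, 3), {(3, 1)}, (2, 3)))
# -> ['Stench', None, None, None, None]
```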
WAMPA WORLD
• Deterministic, discrete, static, single-agent
(Wampa doesn’t move)
• Sequential, because rewards may arrive only after
many steps
• Partially observable because some parts of the
state are not directly perceivable:
Location of Luke, Wampa, and pits aren’t
directly observable
WAMPA WORLD WALKTHROUGH
• R2D2 starts in [1,1]
• Percept=[None, None, None, None, None]
• What can we conclude about [1,2] and [2,1]?
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [1,2]
• Percept=[Stench, None, None, None, None]
• What can we conclude about [1,3] and [2,2] from
the Stench?
WAMPA WORLD WALKTHROUGH
• R2D2 moves back to [1,1] and gets the same
percept vector as before
• Percept=[None, None, None, None, None]
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,1]
• Percept=[None, Breeze, None, None, None]
• What can we conclude about [3,1] and [2,2]
based on the Breeze?
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,1]
• Percept=[None, Breeze, None, None, None]
• What can we conclude about [2,2] and [1,3]
based on the lack of a Stench here?
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,2]
• Percept=[None, None, None, None, None]
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,3]
• Percept=[Stench, Breeze, Gasp, None, None]
• What can we conclude about [2,4] or [3,3]?
WAMPA WORLD WALKTHROUGH
• R2D2 moves to [2,3]
• Percept=[Stench, Breeze, Gasp, None, None]
• We heard a Gasp, so Luke is here!
LOGIC
Logic can serve as a general class of representations for knowledge-based
agents. Here we are going to examine Propositional Logic
• KB consists of sentences in the representation language
• Representation language has a syntax that specifies sentences that are well-
formed
• A logic defines the semantics of the sentences, which is their meaning
• The semantics defines the truth of each sentence with respect to a possible
world, which we will often call a model.
POSSIBLE WORLDS AND MODELS
• Models are mathematical abstractions that fix the truth value, true or
false, of every relevant sentence.
• If sentence 𝛂 is true in model m then we say
• m satisfies 𝛂, or
• m is a model of 𝛂
• We use the notation M(𝛂) to mean the set of all models of 𝛂
TRUTH TABLES: NEGATION
𝑷 ¬𝑷
True False
False True
TRUTH TABLES: CONJUNCTION
𝑷 𝑸 𝑷⋀𝑸
True True True
True False False
False True False
False False False
P and Q are true:
“Wampas smell bad and Tauntauns smell bad.” (This sentence is true)
P is true and Q is false:
“Wampas smell bad and Tauntauns are robots.” (This sentence is false)
P is false and Q is false:
“Wampas smell good and Tauntauns are robots.” (This sentence is false)
TRUTH TABLES: DISJUNCTION
𝑷 𝑸 𝑷⋁𝑸
True True True
True False True
False True True
False False False
P and Q are true:
“Wampas smell bad or Tauntauns smell bad.” (This sentence is true)
P is true and Q is false:
“Wampas smell bad or Tauntauns are robots.” (This sentence is true)
P is false and Q is false:
“Wampas smell good or Tauntauns are robots.” (This sentence is false)
TRUTH TABLES: CONDITIONAL
𝑷 𝑸 𝑷⇒𝑸
True True True
True False False
False True True
False False True
TRUTH TABLES: CONDITIONAL
To understand why the conditional is defined this way, suppose I tell you
P ⟹ Q:
“If you join the dark side then we will rule the galaxy together”
The only case in which I have lied is when you join (P is true) and we do not
rule the galaxy together (Q is false); if you never join, nothing I said is
falsified, so the sentence remains true.
TRUTH TABLES: BICONDITIONAL
𝑷 𝑸 𝑷⇔𝑸
True True True
True False False
False True False
False False True
I tell you:
“The Death Star can be destroyed if and only if your missile hits its vulnerable spot.”
1. The Death Star is destroyed, and you hit the vulnerable spot. (Both P and Q
are True, so the sentence is true)
2. The Death Star is destroyed, but you didn’t hit its vulnerable spot. (P is True,
Q is False, so the sentence is false)
3. The Death Star isn’t destroyed, but you did hit its vulnerable spot. (P is False,
Q is True, so the sentence is false)
4. The Death Star isn’t destroyed, and you also didn’t hit its vulnerable spot. (P
is False, Q is False, so the sentence is true)
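All four truth tables can be reproduced mechanically. A small sketch in plain Python, writing each connective as the Boolean function its table defines:

```python
# Brute-force reproduction of the truth tables above.
from itertools import product

connectives = {
    "P and Q":  lambda p, q: p and q,
    "P or Q":   lambda p, q: p or q,
    "P => Q":   lambda p, q: (not p) or q,   # false only when P true, Q false
    "P <=> Q":  lambda p, q: p == q,         # true exactly when P and Q agree
}

for name, connective in connectives.items():
    print(name)
    for p, q in product([True, False], repeat=2):
        print(f"  {p!s:5}  {q!s:5}  ->  {connective(p, q)}")
```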
A SIMPLE KNOWLEDGE BASE
We can construct a knowledge base for Wampa World.
Let’s start with a set of symbols for each location [x,y]:
• 𝑃𝑥,𝑦 is true if there is a pit in [x,y]
• 𝐵𝑥,𝑦 is true if the agent perceives a Breeze in [x,y]
A SIMPLE KNOWLEDGE BASE
We can construct sentences out of these symbols using our logical
connectives. We’ll label each sentence:
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔ (𝑃1,2 ∨ 𝑃2,1)
𝑅3 : 𝐵2,1 ⇔ (𝑃1,1 ∨ 𝑃2,2 ∨ 𝑃3,1)
What if we perceive the presence or absence of a breeze? The percepts in
[1,1] and [2,1] become sentences 𝑅4 and 𝑅5:
KB =
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔ (𝑃1,2 ∨ 𝑃2,1)
𝑅3 : 𝐵2,1 ⇔ (𝑃1,1 ∨ 𝑃2,2 ∨ 𝑃3,1)
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
POSSIBLE WORLDS
β ⊨ 𝛼 if and only if M(β) ⊆ M(𝛼)
“β entails 𝛼 if and only if in every model
in which β is true, 𝛂 is also true”
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔(𝑃1,2 ⋁𝑃2,1 )
KB= 𝑅3 : 𝐵2,1 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1 )
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
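Entailment can be checked by brute force: enumerate every assignment to the symbols in the KB and test M(KB) ⊆ M(𝛂) directly. A minimal sketch using 𝑅1–𝑅5, with the sentences hand-encoded as Python predicates (an illustration of model checking, not a general algorithm):

```python
# Model checking: KB entails alpha iff alpha holds in every model of KB.
from itertools import product

symbols = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

def kb_holds(m):
    return (not m["P11"]                                        # R1
            and m["B11"] == (m["P12"] or m["P21"])              # R2
            and m["B21"] == (m["P11"] or m["P22"] or m["P31"])  # R3
            and not m["B11"]                                    # R4
            and m["B21"])                                       # R5

def entails(alpha):
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb_holds(m) and not alpha(m):
            return False        # a model of KB that falsifies alpha
    return True

print(entails(lambda m: not m["P12"]))   # True:  KB ⊨ ¬P1,2
print(entails(lambda m: m["P22"]))       # False: KB does not entail P2,2
```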
INFERENCE RULES
Modus Ponens:
𝜶 ⇒ 𝜷,  𝜶
―――――――
𝜷
Biconditional Elimination:
𝜶 ⇔ 𝜷
―――――――
(𝜶 ⇒ 𝜷) ∧ (𝜷 ⇒ 𝜶)
And Elimination:
𝜶 ∧ 𝜷
―――――――
𝜶
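These three rules are easy to state as functions over a tiny sentence representation. A minimal sketch, assuming sentences are encoded as nested tuples such as ('=>', 𝛂, 𝛃) with string proposition symbols (an encoding chosen here purely for illustration):

```python
# Inference rules over sentences encoded as nested tuples, e.g. ('=>', 'P', 'Q').

def modus_ponens(implication, fact):
    """From (alpha => beta) and alpha, derive beta."""
    op, alpha, beta = implication
    assert op == '=>' and fact == alpha
    return beta

def and_elimination(conjunction):
    """From (alpha & beta), derive alpha and beta."""
    op, alpha, beta = conjunction
    assert op == '&'
    return alpha, beta

def biconditional_elimination(bicond):
    """From (alpha <=> beta), derive (alpha => beta) & (beta => alpha)."""
    op, alpha, beta = bicond
    assert op == '<=>'
    return ('&', ('=>', alpha, beta), ('=>', beta, alpha))
```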
INFERENCE EXAMPLE
KB =
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔ (𝑃1,2 ∨ 𝑃2,1)
𝑅3 : 𝐵2,1 ⇔ (𝑃1,1 ∨ 𝑃2,2 ∨ 𝑃3,1)
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
• Apply Biconditional Elimination to 𝑅2 to get 𝑅6:
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ∨ 𝑃2,1)) ∧ ((𝑃1,2 ∨ 𝑃2,1) ⇒ 𝐵1,1)
INFERENCE EXAMPLE
KB =
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔ (𝑃1,2 ∨ 𝑃2,1)
𝑅3 : 𝐵2,1 ⇔ (𝑃1,1 ∨ 𝑃2,2 ∨ 𝑃3,1)
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ∨ 𝑃2,1)) ∧ ((𝑃1,2 ∨ 𝑃2,1) ⇒ 𝐵1,1)
Apply And Elimination to 𝑅6 to get 𝑅7:
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ∨ 𝑃2,1)
INFERENCE EXAMPLE
KB =
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔ (𝑃1,2 ∨ 𝑃2,1)
𝑅3 : 𝐵2,1 ⇔ (𝑃1,1 ∨ 𝑃2,2 ∨ 𝑃3,1)
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ∨ 𝑃2,1)) ∧ ((𝑃1,2 ∨ 𝑃2,1) ⇒ 𝐵1,1)
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ∨ 𝑃2,1)
Logical equivalence for contrapositives:
(𝜶 ⇒ 𝜷)
―――――――
(¬𝜷 ⇒ ¬𝜶)
Applying this equivalence to 𝑅7 gives 𝑅8, and Modus Ponens with 𝑅4 then
gives 𝑅9:
𝑅8 : ¬𝐵1,1 ⇒ ¬(𝑃1,2 ∨ 𝑃2,1)
𝑅9 : ¬(𝑃1,2 ∨ 𝑃2,1)
INFERENCE EXAMPLE
KB =
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔ (𝑃1,2 ∨ 𝑃2,1)
𝑅3 : 𝐵2,1 ⇔ (𝑃1,1 ∨ 𝑃2,2 ∨ 𝑃3,1)
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ∨ 𝑃2,1)) ∧ ((𝑃1,2 ∨ 𝑃2,1) ⇒ 𝐵1,1)
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ∨ 𝑃2,1)
𝑅8 : ¬𝐵1,1 ⇒ ¬(𝑃1,2 ∨ 𝑃2,1)
𝑅9 : ¬(𝑃1,2 ∨ 𝑃2,1)
De Morgan’s Rule:
¬(𝜶 ∨ 𝜷)
―――――――
(¬𝜶 ∧ ¬𝜷)
Apply De Morgan’s rule to 𝑅9 to get 𝑅10, then And Elimination on 𝑅10 to get
𝑅11 and 𝑅12:
𝑅10 : ¬𝑃1,2 ∧ ¬𝑃2,1
𝑅11 : ¬𝑃1,2
𝑅12 : ¬𝑃2,1
INFERENCE EXAMPLE
KB =
𝑅1 : ¬𝑃1,1
𝑅2 : 𝐵1,1 ⇔ (𝑃1,2 ∨ 𝑃2,1)
𝑅3 : 𝐵2,1 ⇔ (𝑃1,1 ∨ 𝑃2,2 ∨ 𝑃3,1)
𝑅4 : ¬𝐵1,1
𝑅5 : 𝐵2,1
𝑅6 : (𝐵1,1 ⇒ (𝑃1,2 ∨ 𝑃2,1)) ∧ ((𝑃1,2 ∨ 𝑃2,1) ⇒ 𝐵1,1)
𝑅7 : 𝐵1,1 ⇒ (𝑃1,2 ∨ 𝑃2,1)
𝑅8 : ¬𝐵1,1 ⇒ ¬(𝑃1,2 ∨ 𝑃2,1)
𝑅9 : ¬(𝑃1,2 ∨ 𝑃2,1)
𝑅10 : ¬𝑃1,2 ∧ ¬𝑃2,1
𝑅11 : ¬𝑃1,2 (no pit in [1,2])
𝑅12 : ¬𝑃2,1 (no pit in [2,1])
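For illustration, the first steps of this derivation can be replayed with the rule sketches above (same hypothetical tuple encoding, with ∨ written as '|'):

```python
# Replaying the derivation R2 -> R6 -> R7 with the rule functions above.
R2 = ('<=>', 'B11', ('|', 'P12', 'P21'))

R6 = biconditional_elimination(R2)
# -> ('&', ('=>', 'B11', ('|', 'P12', 'P21')),
#          ('=>', ('|', 'P12', 'P21'), 'B11'))

R7, _ = and_elimination(R6)
# -> ('=>', 'B11', ('|', 'P12', 'P21'))    i.e. B1,1 => (P1,2 v P2,1)
```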
SEARCH FOR A PROOF
Finding a derivation like the one above can itself be cast as a search problem:
• Initial state: the initial KB (here 𝑅1–𝑅5)
• Actions: all inference rules applied to all sentences that match them (e.g.,
Biconditional Elimination on 𝑅2 yields 𝑅6; And Elimination on 𝑅6 yields 𝑅7)
• Result: add the derived sentence to the KB
• Goal: a state that contains the sentence we are trying to prove, such as ¬𝑃1,2
MONOTONICITY IMPLIES
SOUNDNESS
• Monotonicity says that the set of entailed sentences can only increase as
information is added to the KB:
if KB ⊨ 𝛂 then KB ∧ β ⊨ 𝛂
• Adding information to the KB therefore never invalidates anything we have
already inferred
• Hence the procedure is sound: it will only derive entailed sentences
PROOF BY RESOLUTION
• But is it complete? Can this method derive all sentences that are entailed? If
we used iterative deepening search, then we would find any reachable
goal. But what if we were missing an important inference rule?
• There is a single inference rule, resolution, which yields a complete inference
algorithm when coupled with any complete search algorithm
• Let’s see our agent again
PROOF BY RESOLUTION
The agent returns from [2,1] to [1,1] and then goes to [1,2], where it perceives a
stench but no breeze. We add the following facts to the knowledge base:
𝑅13 : ¬𝐵1,2
𝑅14 : 𝐵1,2 ⇔(𝑃1,1 ⋁𝑃2,2 ⋁𝑃1,3 )
We can now derive the absence of pits in [2,2] and [1,3] in the same way as
before (remember that [1,1] is already known to be pitless):
𝑅15 : ¬𝑃2,2
𝑅16 : ¬𝑃1,3
We can also apply biconditional elimination to 𝑅3 , followed by Modus Ponens with 𝑅5 ,
to obtain the fact that there is a pit in [1,1], [2,2], or [3,1]:
𝑅17 : 𝑃1,1 ⋁𝑃2,2 ⋁𝑃3,1
PROOF BY RESOLUTION
• Now comes the first application of the resolution rule: the literal ¬𝑃2,2 in 𝑅15
resolves with the literal 𝑃2,2 in 𝑅17 to give the resolvent
𝑅18 : 𝑃1,1 ⋁𝑃3,1
• Using 𝑅18 and 𝑅1 we have
𝑅19 : 𝑃3,1
• In English: if there’s a pit in [1,1] or [3,1] and it’s not in [1,1], then it’s in [3,1].
• These last two inference steps are examples of the unit resolution inference
rule
RESOLUTION
The unit resolution rule can be generalized to the full resolution rule.
• If 𝒍𝒊 and 𝒎𝒋 are complementary literals, which means that one is the negation
of the other, then we can eliminate them via resolution:
𝒍𝟏 ∨ ⋯ ∨ 𝒍𝒌 ,  𝒎𝟏 ∨ ⋯ ∨ 𝒎𝒏
―――――――
𝒍𝟏 ∨ ⋯ ∨ 𝒍𝒊−𝟏 ∨ 𝒍𝒊+𝟏 ∨ ⋯ ∨ 𝒍𝒌 ∨ 𝒎𝟏 ∨ ⋯ ∨ 𝒎𝒋−𝟏 ∨ 𝒎𝒋+𝟏 ∨ ⋯ ∨ 𝒎𝒏
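With clauses represented as sets of literals, the rule is only a few lines. A minimal sketch (the representation is an assumption for illustration: literals are strings, negation marked by a leading '~'), applied to 𝑅15 and 𝑅17 from the example above:

```python
# The resolution rule on clauses represented as frozensets of string literals.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (possibly none)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# R15 resolved with R17, as in the example above:
R15 = frozenset({"~P22"})
R17 = frozenset({"P11", "P22", "P31"})
print(resolve(R15, R17))   # [frozenset({'P11', 'P31'})]  i.e. R18
```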
CONJUNCTIVE NORMAL FORM
• The resolution rule applies only to clauses (that is, disjunctions of literals), so it
would seem to be relevant only to knowledge bases and queries consisting
of clauses. How, then, can it lead to a complete inference procedure for all
of propositional logic?
• The answer is that every sentence of propositional logic is logically
equivalent to a conjunction of clauses
• A sentence expressed as a conjunction of clauses is said to be in conjunctive
normal form or CNF
CONJUNCTIVE NORMAL FORM
We now describe a procedure for converting to CNF.
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α).
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
3. CNF requires ¬ to appear only in literals, so we “move ¬ inwards” by
repeated application of the following equivalences:
¬(¬α) ≡ α (double-negation elimination)
¬(α ∧ β) ≡ (¬α ∨ ¬β) (De Morgan)
¬(α ∨ β) ≡ (¬α ∧ ¬β) (De Morgan)
4. Now we have a sentence containing nested ∧ and ∨ operators applied to
literals; apply the distributivity law, distributing ∨ over ∧ wherever possible
CONJUNCTIVE NORMAL FORM
We illustrate the procedure by converting the sentence 𝑩𝟏,𝟏 ⇔(𝑷𝟏,𝟐 ⋁𝑷𝟐,𝟏 ) into
CNF. The steps are as follows:
1. (𝑩𝟏,𝟏 ⇒ (𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 )) ∧ ((𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ⇒ 𝑩𝟏,𝟏 )
2. (¬ 𝑩𝟏,𝟏 ∨ 𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ∧ (¬(𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ∨ 𝑩𝟏,𝟏 )
3. (¬ 𝑩𝟏,𝟏 ∨ 𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ∧ ((¬ 𝑷𝟏,𝟐 ∧ ¬ 𝑷𝟐,𝟏 ) ∨ 𝑩𝟏,𝟏 )
4. (¬ 𝑩𝟏,𝟏 ∨ 𝑷𝟏,𝟐 ∨ 𝑷𝟐,𝟏 ) ∧ (¬ 𝑷𝟏,𝟐 ∨ 𝑩𝟏,𝟏 ) ∧ (¬ 𝑷𝟐,𝟏 ∨ 𝑩𝟏,𝟏 )
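If sympy is available, this conversion can be sanity-checked against its to_cnf function (sympy is an external library and an assumption here, not part of the slides):

```python
# Sanity check of the CNF conversion above using sympy.
from sympy import symbols
from sympy.logic.boolalg import Equivalent, to_cnf

B11, P12, P21 = symbols("B11 P12 P21")
sentence = Equivalent(B11, P12 | P21)      # B1,1 <=> (P1,2 v P2,1)
print(to_cnf(sentence))
# Expected, up to clause ordering:
# (B11 | ~P12) & (B11 | ~P21) & (P12 | P21 | ~B11)
```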
RESOLUTION ALGORITHM
• Inference procedures based on resolution work using the principle of proof
by contradiction
To show KB ⊨ 𝛂, we show that KB ∧ ¬𝛂 is unsatisfiable
• Here are the steps:
1. Convert (KB ⋀ ¬𝛂) into CNF
2. Apply the resolution rule to the resulting clauses
3. Resolve every pair that contains complementary literals to produce a new
clause
4. If that new clause isn’t already present, then add it to the set
5. Continue the process until one of two things happens:
1. There are no new clauses that can be added. In this case, the KB does not entail 𝛂
2. Two clauses resolve to yield the empty clause, in which case the KB does entail 𝛂
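Putting the pieces together, here is a minimal sketch of this loop, reusing negate() and resolve() from the earlier sketch. For simplicity the query 𝛂 is restricted to a single literal, so ¬𝛂 is one unit clause; a general query would first need the full CNF conversion described above.

```python
# Resolution refutation: KB ⊨ alpha iff KB ∧ ¬alpha resolves to the empty clause.
# Reuses negate() and resolve() from the earlier sketch; clauses are frozensets.
from itertools import combinations

def pl_resolution(kb_clauses, alpha_literal):
    # Step 1: clauses of KB ∧ ¬alpha (alpha is a single literal here).
    clauses = set(kb_clauses) | {frozenset({negate(alpha_literal)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for resolvent in resolve(c1, c2):
                if not resolvent:      # empty clause: contradiction found
                    return True        # KB entails alpha
                new.add(resolvent)
        if new <= clauses:
            return False               # nothing new: KB does not entail alpha
        clauses |= new

# KB fragment in CNF: R1 (¬P1,1), R15 (¬P2,2), R17 (P1,1 ∨ P2,2 ∨ P3,1)
kb = [frozenset({"~P11"}), frozenset({"~P22"}), frozenset({"P11", "P22", "P31"})]
print(pl_resolution(kb, "P31"))   # True, matching R19 in the example
```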
REFERENCES
• Russell, S. and Norvig, P. (2010). Artificial Intelligence: A Modern Approach,
3rd Edition.
• Artificial Intelligence lecture slides, CIS 391, University of Pennsylvania,
Fall 2015.