AI_model_sols


1a. Define Total Turing test, logical positivism, tractable problems, decision theory, neurons. (5M)
Turing Test:

A computer passes the Turing test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.

The Total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch".

To pass the Total Turing Test, the computer will need computer vision to perceive objects and robotics to manipulate objects and move about, along with natural language processing, knowledge representation, automated reasoning, and machine learning.

Logical positivism:

This doctrine holds that all knowledge can be characterized by logical theories connected, ultimately, to observation sentences that correspond to sensory inputs.

Thus logical positivism combines rationalism and empiricism.

Tractability:

A problem is called tractable if the time required to solve instances of the problem does not grow exponentially with the size of the instances; rather, it grows with polynomial complexity.

Decision Theory:

A formal and complete framework that combines probability theory with utility theory for decisions (economic or otherwise) made under uncertainty, that is, in cases where probabilistic descriptions appropriately capture the decision maker's environment.

Neurons:

A neuron (also called a nerve cell) is an excitable cell that fires electrical signals called action potentials across a neural network in the nervous system.

1b. Explain the significant contributions of various branches in the foundations of AI

Philosophy:
7a. Explain various ambiguities in Natural Language Processing with examples and summarize in the form of a table about formal languages and their ontological and epistemological commitments.

i. Natural languages suffer from ambiguity, a problem for a representation language.
ii. From the viewpoint of formal logic, representing the same knowledge
in two different ways makes absolutely no difference; the same facts
will be derivable from either representation.
iii. In practice, however, one representation might require fewer steps to
derive a conclusion, meaning that a reasoner with limited resources
could get to the conclusion using one representation but not the other.
For nondeductive tasks such as learning from experience, outcomes
are necessarily dependent on the form of the representations used.
iv. Propositional logic is declarative and has a compositional semantics that is context-independent and unambiguous.
v. When we look at the syntax of natural language, the most obvious elements are nouns and noun phrases that refer to objects (squares, pits, wumpuses) and verbs and verb phrases that refer to relations among objects (is breezy, is adjacent to, shoots). Some of these relations are functions: relations in which there is only one "value" for a given "input." Almost any assertion can be thought of as referring to objects and properties or relations. Ex: "One plus two equals three." Objects: one, two, three, one plus two; Relation: equals; Function: plus. ("One plus two" is a name for the object that is obtained by applying the function "plus" to the objects "one" and "two." "Three" is another name for this object.)
vi. The language of first-order logic is built around objects and
relations. First-order logic can express facts about some or all of the
objects in the universe. This enables one to represent general laws or
rules, such as the statement “Squares neighboring the wumpus are
smelly.”
vii. The primary difference between propositional and first-order logic lies
in the ontological commitment made by each language—that is,
what it assumes about the nature of reality. Mathematically, this
commitment is expressed through the nature of the formal models
with respect to which the truth of sentences is defined.
viii. Propositional logic assumes that there are facts that either hold or do
not hold in the world. Each fact can be in one of two states: true or
false, and each model assigns true or false to each proposition
symbol. First-order logic assumes more; namely, that the world
consists of objects with certain relations among them that do or do not
hold.
ix. Temporal logic assumes that facts hold at particular times and that
those times (which may be points or intervals) are ordered.
x. Higher-order logic views the relations and functions referred to by
first-order logic as objects in themselves. This allows one to make
assertions about all relations—for example, one could wish to define
what it means for a relation to be transitive. Unlike most special-purpose logics, higher-order logic is strictly more expressive than first-order logic, in the sense that some sentences of higher-order logic cannot be expressed by any finite number of first-order logic sentences.
xi. A logic can also be characterized by its epistemological
commitments—the possible states of knowledge that it allows with
respect to each fact. In both propositional and first-order logic, a
sentence represents a fact and the agent either believes the sentence
to be true, believes it to be false, or has no opinion. These logics
therefore have three possible states of knowledge regarding any
sentence.
xii. Systems using probability theory can have any degree of belief,
ranging from 0 (total disbelief) to 1 (total belief). For example, a
probabilistic wumpus-world agent might believe that the wumpus is in
[1,3] with probability 0.75.
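The requested summary table of formal languages and their commitments (compiled from the points above; the fuzzy-logic row follows the standard textbook table rather than the text above):

Language            | Ontological commitment (what exists in the world) | Epistemological commitment (what an agent believes about facts)
Propositional logic | facts                                             | true / false / unknown
First-order logic   | facts, objects, relations                         | true / false / unknown
Temporal logic      | facts, objects, relations, times                  | true / false / unknown
Probability theory  | facts                                             | degree of belief in [0, 1]
Fuzzy logic         | facts with degree of truth in [0, 1]              | known interval value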

7b. Define Universal and Existential Instantiation and give examples for both. Prove the following using Backward and Forward chaining:
"As per the law, it is a crime for an American to sell weapons to hostile nations. Country E, an enemy of America, has some missiles, and all the missiles were sold to it by Solan, who is an American citizen."
Prove that "Solan is a criminal."

Universal Instantiation (UI): from a universally quantified sentence we can infer any sentence obtained by substituting a ground term for the variable. Eg: from ∀x King(x) ∧ Greedy(x) ⇒ Evil(x) we can infer King(John) ∧ Greedy(John) ⇒ Evil(John).

Existential Instantiation (EI): from an existentially quantified sentence we can infer the sentence obtained by substituting a single new constant symbol (a Skolem constant) for the variable. Eg: from ∃x Crown(x) ∧ OnHead(x, John) we can infer Crown(C1) ∧ OnHead(C1, John), provided C1 does not appear elsewhere in the knowledge base.

These rules rest on the semantics of the quantifiers, explained below.

Universal Quantification:

∀x P is true in a given model if P is true in all possible extended interpretations constructed from the interpretation given in the model, where each extended interpretation specifies a domain element to which x refers.

We can extend the interpretation in five ways:
 x → Richard the Lionheart,
 x → King John,
 x → Richard's left leg,
 x → John's left leg,
 x → the crown.

The universally quantified sentence ∀x King(x) ⇒ Person(x) is true in the original model if the sentence King(x) ⇒ Person(x) is true under each of the five extended interpretations. That is, the universally quantified sentence is equivalent to asserting the following five sentences:
 Richard the Lionheart is a king ⇒ Richard the Lionheart is a person.
 King John is a king ⇒ King John is a person.
 Richard's left leg is a king ⇒ Richard's left leg is a person.
 John's left leg is a king ⇒ John's left leg is a person.
 The crown is a king ⇒ the crown is a person.

Existential Quantification:

∃x P is true in a given model if P is true in at least one extended interpretation that assigns x to a domain element.

Ex: ∃x Crown(x) ∧ OnHead(x, John).

That is, at least one of the following is true:
 Richard the Lionheart is a crown ∧ Richard the Lionheart is on John's head;
 King John is a crown ∧ King John is on John's head;
 Richard's left leg is a crown ∧ Richard's left leg is on John's head;
 John's left leg is a crown ∧ John's left leg is on John's head;
 The crown is a crown ∧ the crown is on John's head.

The fifth assertion is true in the model, so the original existentially quantified sentence is true in the model.

Forward chaining is an inference technique that starts with known facts and applies inference rules to extract more facts until no new facts can be derived.

1. American(x): x is an American.
2. Weapon(y): y is a weapon.
3. Hostile(z): z is a hostile nation.

4. Criminal(x): x is a criminal.
5. Missile(y): y is a missile.
6. Owns(z, y): z owns y.
7. Sells(x, y, z): x sells y to z.
8. Enemy(z, America): z is an enemy of America.

1. "It is a crime for an American to sell weapons to hostile nations.”

Clause: American(x) ∧ Weapon(y) ∧ Hostile(z) ∧ Sells(x, y, z) ⇒


Criminal(x)
First-order definite clauses: Example

2. "Nono(a country) has some missiles."

Existential Instantiation: ∃x Owns(Nono, x) ∧ Missile(x) becomes:

Clause 1: Owns(Nono, M1) Clause 2: Missile(M1)

3. Missiles sold by Colonel West:
"All of its missiles were sold to it by Colonel West."
Clause: Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono)

4. Missiles are Weapons:

Clause: Missile(x) ⇒ Weapon(x)

5. "An enemy of America counts as hostile."

Clause: Enemy(x, America) ⇒ Hostile(x)


6. "West, who is American."

Clause: American(West)
7. "The country Nono, an enemy of America."

Clause: Enemy(Nono, America)


In backward chaining we start with a goal and recursively verify whether it can be proven true by checking known facts and applying rules in reverse.
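To make the forward-chaining proof concrete, here is a minimal Python sketch (not part of the original solution; predicate and constant names follow the clauses above) that repeatedly applies the definite clauses to the known facts until Criminal(West) is derived:

```python
# Naive forward chaining over the first-order definite clauses above (a sketch).
# Facts and atoms are tuples; variables are lowercase strings, constants capitalized.
from itertools import product

def is_var(term):
    return term[0].islower()

def match(premise, fact, subst):
    """Extend subst so that premise matches the ground fact, or return None."""
    if premise[0] != fact[0] or len(premise) != len(fact):
        return None
    subst = dict(subst)
    for p, f in zip(premise[1:], fact[1:]):
        if is_var(p):
            if p in subst and subst[p] != f:
                return None
            subst[p] = f
        elif p != f:
            return None
    return subst

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Try every combination of known facts against the rule premises.
            for combo in product(facts, repeat=len(premises)):
                subst = {}
                for premise, fact in zip(premises, combo):
                    subst = match(premise, fact, subst)
                    if subst is None:
                        break
                if subst is None:
                    continue
                derived = tuple([conclusion[0]] +
                                [subst.get(arg, arg) for arg in conclusion[1:]])
                if derived not in facts:
                    facts.add(derived)
                    changed = True
    return facts

facts = [("American", "West"), ("Enemy", "Nono", "America"),
         ("Owns", "Nono", "M1"), ("Missile", "M1")]
rules = [
    ([("American", "x"), ("Weapon", "y"), ("Sells", "x", "y", "z"),
      ("Hostile", "z")], ("Criminal", "x")),
    ([("Missile", "x"), ("Owns", "Nono", "x")], ("Sells", "West", "x", "Nono")),
    ([("Missile", "x")], ("Weapon", "x")),
    ([("Enemy", "x", "America")], ("Hostile", "x")),
]
print(("Criminal", "West") in forward_chain(facts, rules))  # True
```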

8a. Explain (i) Unification and (ii) Subsumption Lattice with examples. Write short notes on how First Order Logic can be applied to the Wumpus World.

Unification is the process of making two different logical atomic expressions identical by finding a substitution. Unification depends on the substitution process: it takes two literals as input and makes them identical using substitution.

Basic Concepts
1. Terms: Constants, variables, and function applications are the basic elements.
2. Substitution: Replacing variables with terms.
3. Unifier: A substitution that makes two terms or expressions identical.
4. Most General Unifier (MGU): The simplest unifier that can be applied to make the expressions identical.

Unification: Algorithm
1. Identify variables and terms: Extract variables and terms from the expressions.
2. Apply substitution: Replace variables with terms to make the expressions identical.
3. Repeat: Continue until no more substitutions are possible.

Eg: Consider two FOL expressions:
1. P(f(x), g(y))
2. P(f(a), g(z))
Unify these two expressions.

Step 1: Identify corresponding parts: f(x) should match with f(a), and g(y) should match with g(z).

Step 2: Unify the first arguments f(x) and f(a): here, x should be replaced with a. Substitution θ = {x/a}.

Step 3: Apply the substitution. After substitution: P(f(a), g(y)).

Step 4: Unify the second arguments g(y) and g(z): here, y should be replaced with z. Substitution θ = {y/z}.

Step 5: Apply the substitution. After substitution: P(f(a), g(z)).

Result: The unified expression is P(f(a), g(z)) and the unifier is {x/a, y/z}.
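A minimal Python sketch of the unification procedure described above, assuming terms are encoded as nested tuples and lowercase strings are variables (the encoding is illustrative, and the occurs check is omitted for brevity):

```python
# Minimal unification sketch: a term is a variable (lowercase string),
# a constant (capitalized string), or a function application (tuple like ("f", "x")).

def is_var(term):
    return isinstance(term, str) and term[0].islower()

def substitute(term, theta):
    """Apply the substitution theta to a term, following chains of bindings."""
    if is_var(term):
        return substitute(theta[term], theta) if term in theta else term
    if isinstance(term, tuple):
        return tuple([term[0]] + [substitute(a, theta) for a in term[1:]])
    return term

def unify(x, y, theta=None):
    """Return a most general unifier of x and y, or None if they do not unify."""
    if theta is None:
        theta = {}
    x, y = substitute(x, theta), substitute(y, theta)
    if x == y:
        return theta
    if is_var(x):
        return {**theta, x: y}
    if is_var(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and x[0] == y[0] and len(x) == len(y):
        for xa, ya in zip(x[1:], y[1:]):
            theta = unify(xa, ya, theta)
            if theta is None:
                return None
        return theta
    return None

# The example from the text: unify P(f(x), g(y)) with P(f(a), g(z)).
expr1 = ("P", ("f", "x"), ("g", "y"))
expr2 = ("P", ("f", "a"), ("g", "z"))
print(unify(expr1, expr2))  # {'x': 'a', 'y': 'z'}, a most general unifier
```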
Subsumption Lattice

A subsumption lattice is a hierarchical structure used in knowledge representation and reasoning.

It organizes concepts (or queries) based on their generality, where more general concepts subsume (or include) more specific ones.

Subsumption: a relationship where one concept A subsumes another concept B if A is more general than B.

Used in artificial intelligence and database systems to optimize query processing and reasoning.

Helps in indexing and retrieving information efficiently.

Eg: For the predicate Employs(x, y), queries can be organized as follows:

Employs(x, y) (most general)
Employs(IBM, y)
Employs(x, Richard)
Employs(IBM, Richard) (most specific)

1. Identify possible queries: For the fact Employs(IBM, Richard), identify all possible queries that can be constructed:
Employs(IBM, Richard)
Employs(x, Richard)
Employs(IBM, y)
Employs(x, y)

2. Organize queries by generality: Arrange the queries in a hierarchical order based on their generality:
Employs(x, y) (most general)
Employs(IBM, y)
Employs(x, Richard)
Employs(IBM, Richard) (most specific)

3. Construct the lattice: Place the most general query at the top. Connect queries with arrows indicating the subsumption relationship, moving from general to specific.
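A small illustrative Python sketch that generates the queries subsumed by the fact Employs(IBM, Richard) and orders them from most general to most specific, mirroring the lattice above (the encoding and variable names are assumptions, not part of the original answer):

```python
# Generate the lattice of queries for a ground fact such as Employs(IBM, Richard):
# each argument may stay concrete or be generalized to a variable, and a query
# subsumes another if every concrete argument of the first matches the second.
from itertools import product

VARS = ("x", "y", "z")

def queries_for(fact):
    """All queries obtained by replacing any subset of arguments with variables."""
    pred, args = fact[0], fact[1:]
    out = []
    for choice in product([False, True], repeat=len(args)):
        q = tuple([pred] + [v if general else a
                            for general, a, v in zip(choice, args, VARS)])
        out.append((sum(choice), q))          # number of variables = generality level
    return [q for _, q in sorted(out, key=lambda item: -item[0])]

def subsumes(general, specific):
    """True if 'general' subsumes 'specific'."""
    return all(g in VARS or g == s for g, s in zip(general[1:], specific[1:]))

for query in queries_for(("Employs", "IBM", "Richard")):
    print(query)
# ('Employs', 'x', 'y')         -- most general
# ('Employs', 'IBM', 'y')
# ('Employs', 'x', 'Richard')
# ('Employs', 'IBM', 'Richard') -- most specific
print(subsumes(("Employs", "x", "y"), ("Employs", "IBM", "Richard")))  # True
```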

The wumpus world

The wumpus agent receives a percept vector with five elements. The corresponding first-order sentence stored in the knowledge base must include both the percept and the time at which it occurred; otherwise, the agent will get confused about when it saw what.

A typical percept sentence would be Percept([Stench, Breeze, Glitter, None, None], 5), where the integer is the time step. Here, Percept is a binary predicate, and Stench and so on are constants placed in a list.

The actions in the wumpus world can be represented by logical terms: Turn(Right), Turn(Left), Forward, Shoot, Grab, Climb. To determine which is best, the agent program executes the query ASKVARS(∃a BestAction(a, 5)), which asks for a variable a such that a is the best action at time 5. It returns a binding list such as {a/Grab}, and the agent program can then return Grab as the action to take.

For all times t and for all values s, g, m, c: if the agent perceives [s, Breeze, g, m, c] at time t, then the agent concludes that there is a breeze at time t.

Eg:
∀t, s, g, m, c Percept([s, Breeze, g, m, c], t) ⇒ Breeze(t)
∀t, s, b, m, c Percept([s, b, Glitter, m, c], t) ⇒ Glitter(t)

Simple "reflex" behaviour can also be implemented by quantified implication sentences.
Eg: ∀t Glitter(t) ⇒ BestAction(Grab, t).
Given the percepts and rules, this yields the desired conclusion BestAction(Grab, 5), that is, Grab is the right action at time 5.

Instead of naming each square individually (like Square1,2 and Square1,3), which would require extra facts to state their adjacency, a simpler method can be used: represent squares by a complex term with row and column numbers as integers.

Eg: Represent the square in row 1, column 2 as [1, 2]. This approach eliminates the need for additional facts to describe relationships between squares, making the representation more straightforward.

∀x, y, a, b Adjacent([x, y], [a, b]) ⇔ (x = a ∧ (y = b − 1 ∨ y = b + 1)) ∨ (y = b ∧ (x = a − 1 ∨ x = a + 1)).

Meaning that two squares [x, y] and [a, b] are adjacent if and only if one of the following conditions is true: they are in the same row (x = a) and their columns are consecutive (y = b − 1 or y = b + 1); or they are in the same column (y = b) and their rows are consecutive (x = a − 1 or x = a + 1).

Eg: Squares [1, 2] and [1, 3]: [1, 2] is row 1, column 2; [1, 3] is row 1, column 3.
According to the formula:
x = a: here 1 = 1, so this part is true.
y = b − 1 or y = b + 1 (consecutive columns): 2 = 3 − 1, so this part is true.

Since both conditions of the first disjunct are satisfied, [1, 2] and [1, 3] are adjacent.

Similarly, [2, 3] and [3, 3] are adjacent because they are in the same column (3 = 3) and their rows are consecutive (2 = 3 − 1).
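A direct Python translation of the Adjacent definition above (a sketch for illustration; the function name is arbitrary):

```python
# Direct translation of Adjacent([x, y], [a, b]) as defined above.

def adjacent(square1, square2):
    x, y = square1
    a, b = square2
    return (x == a and (y == b - 1 or y == b + 1)) or \
           (y == b and (x == a - 1 or x == a + 1))

print(adjacent([1, 2], [1, 3]))  # True: same row, consecutive columns
print(adjacent([2, 3], [3, 3]))  # True: same column, consecutive rows
print(adjacent([1, 1], [2, 2]))  # False: diagonal squares are not adjacent
```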
The agent's location changes over time, so we write At(Agent, s, t) to mean that the agent is at square s at time t. The wumpus's location can be fixed with ∀t At(Wumpus, [2, 2], t), since the wumpus does not move. Objects can be at only one location at a time:
∀x, s1, s2, t At(x, s1, t) ∧ At(x, s2, t) ⇒ s1 = s2.

Given its current location, the agent can infer properties of the square from properties of its current percept.
Eg: if the agent is at a square and perceives a breeze, then that square is breezy:
∀s, t At(Agent, s, t) ∧ Breeze(t) ⇒ Breezy(s).

Having discovered which places are breezy (or smelly) and not breezy (or not smelly), the agent can deduce where the pits are (and where the wumpus is).

8b. Write appropriate quantifiers for the following:
(i) Some students read well
(ii) Some students like some books
(iii) Some students like all books
(iv) All students like some books
(v) All students like no books
Explain the concept of Resolution in First Order Logic with appropriate procedure.

Predicates: Student(x), Book(y), ReadWell(x), Likes(x, y)

i. ∃x Student(x) ∧ ReadWell(x)
ii. ∃x ∃y Student(x) ∧ Book(y) ∧ Likes(x, y)
iii. ∃x Student(x) ∧ (∀y Book(y) ⇒ Likes(x, y))
iv. ∀x Student(x) ⇒ (∃y Book(y) ∧ Likes(x, y))
v. ∀x Student(x) ⇒ (∀y Book(y) ⇒ ¬Likes(x, y))

i. Resolution yields a complete inference algorithm when coupled with any complete search algorithm. Resolution is a rule of inference used for automated theorem proving in First-Order Logic (FOL). It works by refutation (that is, by demonstrating that a certain statement or set of statements leads to a contradiction): we negate the goal we want to prove, and if we can derive a contradiction (an empty clause) from the negation of the goal and our knowledge base, then the goal is proved.
ii. As in the propositional case, first-order resolution requires that sentences be in conjunctive normal form (CNF), that is, a conjunction of clauses, where each clause is a disjunction of literals. Literals can contain variables, which are assumed to be universally quantified.
iii. The procedure for conversion to CNF is similar to the propositional case.
iv. The principal difference arises from the need to eliminate existential quantifiers.
v. Eg: We illustrate the procedure by translating the sentence "Everyone who loves all animals is loved by someone":
vi. ∀x [∀y Animal(y) ⇒ Loves(x, y)] ⇒ [∃y Loves(y, x)].
vii. The steps are as follows:

 Eliminate implications:
∀x [¬∀y (¬Animal(y) ∨ Loves(x, y))] ∨ [∃y Loves(y, x)].

 Move ¬ inwards: ¬∀x p becomes ∃x ¬p; ¬∃x p becomes ∀x ¬p. The sentence goes through the following transformations:
∀x [∃y ¬(¬Animal(y) ∨ Loves(x, y))] ∨ [∃z Loves(z, x)].
∀x [∃y ¬¬Animal(y) ∧ ¬Loves(x, y)] ∨ [∃z Loves(z, x)].
∀x [∃y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃z Loves(z, x)].

 Standardize variables: For sentences like (∃x P(x)) ∨ (∃x Q(x)) which use the same variable name twice, change the name of one of the variables. This avoids confusion when dropping quantifiers:
∀x [∃y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃z Loves(z, x)].

 Skolemize: Skolemization is the process of eliminating existential quantifiers (∃) by introducing Skolem functions or Skolem constants. This transformation is an essential step in converting a formula to a form suitable for resolution. When an existential quantifier is within the scope of one or more universal quantifiers, a Skolem function of the universally quantified variables is introduced. Here F and G are Skolem functions:
∀x [Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x).

 Drop universal quantifiers:
[Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x).


 Distribute ∨ over ∧:
[Animal(F(x)) ∨ Loves(G(x), x)] ∧ [¬Loves(x, F(x)) ∨ Loves(G(x), x)].

This step may also require flattening nested conjunctions and disjunctions. The sentence is now in CNF and consists of two clauses.

Eg: ∀x, y, z American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x) becomes, in CNF,
¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal(x).
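As a quick check of the propositional structure of this conversion (treating the ground literals as propositional symbols, since sympy's logic module does not handle quantifiers), one could use sympy's to_cnf:

```python
# Propositional check of the CNF conversion using sympy (quantifiers omitted).
from sympy import symbols, And, Implies
from sympy.logic.boolalg import to_cnf

American, Weapon, Sells, Hostile, Criminal = symbols(
    "American Weapon Sells Hostile Criminal")

rule = Implies(And(American, Weapon, Sells, Hostile), Criminal)
print(to_cnf(rule))
# Criminal | ~American | ~Hostile | ~Sells | ~Weapon   (ordering may vary)
```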

viii. Every sentence of first-order logic can be converted into an inferentially equivalent CNF sentence.
ix. The resolution rule for first-order clauses is a lifted version of the propositional resolution rule.
x. Two standardized clauses, sharing no variables, can be resolved if they contain complementary literals. Propositional literals are complementary if one is the negation of the other; first-order literals are complementary if one unifies with the negation of the other.
xi. The rule states that given two such clauses (written above the line in the usual inference-rule notation), you can resolve them by eliminating the complementary literals and applying the unifying substitution θ to the resulting clause. This rule is called the binary resolution rule because it resolves exactly two literals. The binary resolution rule by itself does not yield a complete inference procedure.
xii. The full resolution rule resolves subsets of literals in each clause that are unifiable.
xiii. An alternative approach is to extend factoring, the removal of redundant literals, to the first-order case. Propositional factoring reduces two literals to one if they are identical; first-order factoring reduces two literals to one if they are unifiable. The unifier must be applied to the entire clause.
xiv. The combination of binary resolution and factoring is complete.
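A minimal sketch of binary resolution on ground clauses (no unification or factoring, purely to illustrate how complementary literals are cancelled; the clause encoding is an assumption):

```python
# Binary resolution on ground clauses represented as frozensets of literal strings;
# a leading "~" marks negation (illustrative encoding only).

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(clause1, clause2):
    """Return every binary resolvent of the two clauses."""
    resolvents = []
    for lit in clause1:
        if negate(lit) in clause2:
            resolvents.append((clause1 - {lit}) | (clause2 - {negate(lit)}))
    return resolvents

c1 = frozenset({"~American(West)", "~Weapon(M1)", "~Sells(West,M1,Nono)",
                "~Hostile(Nono)", "Criminal(West)"})
c2 = frozenset({"American(West)"})
print(resolve(c1, c2))
# One resolvent: the first clause with ~American(West) removed (set order may vary).
```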

9a. Explain various reasons for failure of First Order Logic. Define a sample space for picking 2 tokens from 6 tokens of lab questions, with the token taken first not replaced.

Let us try to write rules for dental diagnosis using propositional logic, so that we can see how the logical approach breaks down. Consider the following simple rule: Toothache ⇒ Cavity.
The problem is that this rule is wrong. Not all patients with toothaches have cavities; some of them have gum disease, an abscess, or one of several other problems:

Toothache ⇒ Cavity ∨ GumProblem ∨ Abscess . . .

Unfortunately, in order to make the rule true, we have to add an almost unlimited list of possible problems. We could try turning the rule into a causal rule:

Cavity ⇒ Toothache.

But this rule is not right either; not all cavities cause pain. The only way to fix the rule is to make it logically exhaustive: to augment the left-hand side with all the qualifications required for a cavity to cause a toothache. Trying to use logic to cope with a domain like medical diagnosis thus fails for three main reasons:

Laziness: It is too much work to list the complete set of antecedents or consequents needed to ensure an exceptionless rule, and too hard to use such rules.

Theoretical ignorance: Medical science has no complete theory for the domain.

Practical ignorance: Even if we know all the rules, we might be uncertain about a particular patient because not all the necessary tests have been or can be run.

The agent's knowledge can at best provide only a degree of belief in the relevant sentences.
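The sample-space part of 9a is not worked out above. A minimal sketch, assuming the 6 lab-question tokens are labelled T1 to T6 and the two draws are recorded in order (the first token is not replaced):

```python
# Sample space for drawing 2 of 6 tokens without replacement (ordered draws).
from itertools import permutations

tokens = ["T1", "T2", "T3", "T4", "T5", "T6"]   # assumed token labels
sample_space = list(permutations(tokens, 2))    # ordered pairs of distinct tokens
print(len(sample_space))    # 30 equally likely outcomes (6 * 5)
print(sample_space[:3])     # [('T1', 'T2'), ('T1', 'T3'), ('T1', 'T4')]
# If the order of the two tokens is ignored, combinations(tokens, 2) gives 15 outcomes.
```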

9b. Prove Kolmogorov's axioms (probability axioms).

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

1. Definition of events:
• Let A and B be two events in a sample space S.
• A ∪ B represents the event that either A or B (or both) occurs.
• A ∩ B represents the event that both A and B occur.

2. Counting elements:
The event A ∪ B consists of three mutually exclusive parts:
1. Elements that are only in A (denoted A \ B).
2. Elements that are only in B (denoted B \ A).
3. Elements that are in both A and B (denoted A ∩ B).

3. Probability calculation:
The probability of A includes the probability of A \ B and of A ∩ B:
P(A) = P(A \ B) + P(A ∩ B)
The probability of B includes the probability of B \ A and of A ∩ B:
P(B) = P(B \ A) + P(A ∩ B)

4. Summing probabilities:
The probability of the union of A and B is the sum of the probabilities of the three mutually exclusive parts:
P(A ∪ B) = P(A \ B) + P(B \ A) + P(A ∩ B)

5. Applying the inclusion-exclusion principle:
Adding the probabilities of A and B, we have:
P(A) + P(B) = (P(A \ B) + P(A ∩ B)) + (P(B \ A) + P(A ∩ B))
P(A) + P(B) = P(A \ B) + P(B \ A) + 2 P(A ∩ B)
To find P(A ∪ B), we subtract P(A ∩ B) from this sum (to correct for double counting):
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

Thus, the inclusion-exclusion principle is proved.
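A quick numeric sanity check of the identity on a small uniform sample space (illustrative only; the events chosen are arbitrary):

```python
# Numeric check of P(A ∪ B) = P(A) + P(B) - P(A ∩ B) for one roll of a fair die.
S = set(range(1, 7))      # sample space
A = {2, 4, 6}             # event: even number
B = {4, 5, 6}             # event: number greater than 3

def P(event):
    return len(event) / len(S)    # uniform probability

print(P(A | B))                     # 0.666...
print(P(A) + P(B) - P(A & B))       # 0.666..., the same value
```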


9c. Given that the probability of the bus arriving late is 0.3 and the probability of a student oversleeping is 0.4, find the probability that the student gets late.
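No solution for 9c appears in the source. A worked computation, assuming the two events are independent and that the student is late whenever the bus is late or the student oversleeps:

```latex
% 9c: assuming independence, and that the student is late if the bus is late (B)
% or the student oversleeps (O).
\begin{align*}
P(\text{late}) &= P(B \cup O) = P(B) + P(O) - P(B)\,P(O) \\
               &= 0.3 + 0.4 - (0.3)(0.4) = 0.58
\end{align*}
```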

10a. Explain marginalization and normalization with a full joint distribution of (Toothache, Catch, Cavity).
The full joint distribution gives us a direct way to calculate the probability of any proposition, simple or complex: simply identify those possible worlds in which the proposition is true and add up their probabilities.

For example, there are six possible worlds in which cavity ∨ toothache holds:

P(cavity ∨ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28

Adding the entries in the first row gives the unconditional or marginal probability of cavity:

P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2

This process is called marginalization, or summing out, because we sum up the probabilities for each possible value of the other variables.

We can write the following general marginalization rule for any sets of variables Y and Z:

P(Y) = Σz∈Z P(Y, z)

where Σz∈Z means to sum over all possible combinations of values of the set of variables Z. It can also be abbreviated as Σz, leaving Z implicit. We just used the rule as P(Cavity) = Σz P(Cavity, z).

With normalization, we can calculate P(Cavity | toothache) even if we don't know the value of P(toothache):
P(Cavity | toothache) = α P(Cavity, toothache) = α [⟨0.108, 0.016⟩ + ⟨0.012, 0.064⟩] = α ⟨0.12, 0.08⟩ = ⟨0.6, 0.4⟩,
where α is a normalization constant that makes the entries sum to 1.

Normalization is useful in many probability calculations, both to make the computation easier and to allow us to proceed when some probability assessment (such as P(toothache)) is not available.
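A small Python sketch of the marginalization and normalization steps above, using the full joint distribution of (Toothache, Catch, Cavity); the six entries quoted in the answer come from the text, and the remaining two (0.144 and 0.576) are the standard textbook values that make the table sum to 1:

```python
# Full joint distribution P(Toothache, Catch, Cavity) as a dictionary.
# Keys are (toothache, catch, cavity) truth values.
joint = {
    (True,  True,  True):  0.108, (True,  False, True):  0.012,
    (False, True,  True):  0.072, (False, False, True):  0.008,
    (True,  True,  False): 0.016, (True,  False, False): 0.064,
    (False, True,  False): 0.144, (False, False, False): 0.576,
}

def prob(condition):
    """Add up the probabilities of the worlds where the condition holds."""
    return sum(p for world, p in joint.items() if condition(*world))

# Marginalization (summing out Toothache and Catch):
print(prob(lambda t, c, cav: cav))            # P(cavity) = 0.2
print(prob(lambda t, c, cav: cav or t))       # P(cavity or toothache) = 0.28

# Normalization: P(Cavity | toothache) without computing P(toothache) explicitly.
unnormalized = [prob(lambda t, c, cav: t and cav),
                prob(lambda t, c, cav: t and not cav)]     # [0.12, 0.08]
alpha = 1 / sum(unnormalized)
print([alpha * v for v in unnormalized])      # approximately [0.6, 0.4]
```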

10b. Write the representation of Bayes' Theorem. In a class, 70% of children fell sick due to viral fever and 30% due to bacterial fever. The probability of observing high temperature for viral fever is 0.78 and for bacterial fever is 0.31. If a child develops high temperature, find the probability of the child having a viral infection.
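Bayes' theorem states that P(A | B) = P(B | A) P(A) / P(B). No worked solution is given in the source; applying the theorem to the stated numbers (V = viral fever, B = bacterial fever, T = high temperature) gives approximately:

```latex
% Bayes' theorem applied to 10b: V = viral fever, B = bacterial fever, T = high temperature.
\begin{align*}
P(V \mid T) &= \frac{P(T \mid V)\,P(V)}{P(T \mid V)\,P(V) + P(T \mid B)\,P(B)} \\
            &= \frac{0.78 \times 0.7}{0.78 \times 0.7 + 0.31 \times 0.3}
             = \frac{0.546}{0.639} \approx 0.854
\end{align*}
```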
10c. Explain the role of probability in solving
problems of Wumpus world

Uncertainty arises in the wumpus world because the agent's sensors give only partial information about the world.

Eg: The figure shows a situation in which each of the three reachable squares, [1,3], [2,2], and [3,1], might contain a pit. Pure logical inference can conclude nothing about which square is most likely to be safe, so a logical agent might have to choose randomly.

Our aim is to calculate the probability that each of the three squares contains a pit. (For this example we ignore the wumpus and the gold.)

The relevant properties of the wumpus world are that (1) a pit causes breezes in all neighboring squares, and (2) each square other than [1,1] contains a pit with probability 0.2.

The first step is to identify the set of random variables we need. As in the propositional logic case, we want one Boolean variable Pi,j for each square, which is true iff square [i, j] actually contains a pit. We also have Boolean variables Bi,j that are true iff square [i, j] is breezy; we include these variables only for the observed squares, in this case [1,1], [1,2], and [2,1].

The next step is to specify the full joint distribution, P(P1,1, ..., P4,4, B1,1, B1,2, B2,1). Applying the product rule:

P(P1,1, ..., P4,4, B1,1, B1,2, B2,1) = P(B1,1, B1,2, B2,1 | P1,1, ..., P4,4) P(P1,1, ..., P4,4).

This decomposition makes it easy to see what the joint probability values should be.

The first term is the conditional probability distribution of a breeze configuration given a pit configuration; its values are 1 if the breezes are adjacent to the pits and 0 otherwise.

The second term is the prior probability of a pit configuration.

Each square contains a pit with probability 0.2, independently of the other squares; hence, for a particular configuration with exactly n pits,

P(P1,1, ..., P4,4) = 0.2^n × 0.8^(16−n).

In the situation in Figure 13.5(a), the evidence consists of the observed breeze (or its absence) in each square that is visited, combined with the fact that each such square contains no pit.

We abbreviate these facts as
b = ¬b1,1 ∧ b1,2 ∧ b2,1 and
known = ¬p1,1 ∧ ¬p1,2 ∧ ¬p2,1.

We are interested in answering queries such as P(P1,3 | known, b): how likely is it that [1,3] contains a pit, given the observations so far?
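A small Python sketch of this query for the situation described above (breeze in [1,2] and [2,1], none in [1,1]); it enumerates pit configurations of the three frontier squares, weights each by the 0.2 pit prior, and keeps only configurations consistent with the observed breezes. The ≈0.31 result matches the standard textbook answer:

```python
# Estimate P(P13 | known, b) by enumerating pit configurations of the frontier
# squares [1,3], [2,2], [3,1], keeping those consistent with the observed
# breezes (breeze in [1,2] and [2,1], no breeze in [1,1]).
from itertools import product

PIT_PRIOR = 0.2
consistent_total = 0.0
p13_total = 0.0

for p13, p22, p31 in product([True, False], repeat=3):
    # Prior probability of this pit configuration.
    prior = 1.0
    for pit in (p13, p22, p31):
        prior *= PIT_PRIOR if pit else (1 - PIT_PRIOR)
    # Breeze in [1,2] requires a pit in [1,3] or [2,2];
    # breeze in [2,1] requires a pit in [2,2] or [3,1].
    if (p13 or p22) and (p22 or p31):
        consistent_total += prior
        if p13:
            p13_total += prior

print(round(p13_total / consistent_total, 2))   # 0.31
```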
