
B.Tech 5th Sem


Artificial Intelligence
(KAI - 501)

UNIT 3
Knowledge
Representation
Topics

● Knowledge Representation
● First Order Logic
● Forward Chaining and Backward Chaining
● Reasoning in AI
What is Knowledge Representation?

● Knowledge Representation captures information from the real world in a form that
a computer can understand, and then uses this knowledge to solve complex
real-life problems such as communicating with human beings in natural language.

● It also describes how we can represent knowledge in artificial intelligence.

● The kinds of knowledge used to represent knowledge in AI are Objects, Events,
Performance, Facts, and Knowledge-bases.
Types of Knowledge

There are 5 types of knowledge, as follows:

1. Declarative Knowledge – Declarative knowledge includes concepts, facts, and objects,
and is expressed in declarative sentences.
2. Structural Knowledge – Basic problem-solving knowledge that describes the
relationships between objects and concepts.
3. Procedural Knowledge – Knowing how to do something; it includes strategies,
procedures, rules, etc.
4. Meta Knowledge – Knowledge about other types of knowledge.
5. Heuristic Knowledge – The knowledge of experts in a field or subject.
First Order Logic (FOL) / Predicate Logic

○ First-order logic is another way of knowledge representation in artificial intelligence. It is
an extension of propositional logic.

○ First-order logic is also known as Predicate logic or First-order predicate logic.
First-order logic is a powerful language that expresses information about objects in a
natural way and can also express the relationships between those objects.

○ Like a natural language, first-order logic has two main parts:

a. Syntax
b. Semantics
Atomic Sentences

○ Atomic sentences are the most basic sentences of first-order logic. These sentences are
formed from a predicate symbol followed by a parenthesized sequence of terms.

Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat(Chinky).

Complex Sentences
○ Complex sentences are made by combining atomic sentences using connectives.

First-order logic statements can be divided into two parts:

○ Subject: Subject is the main part of the statement.


○ Predicate: A predicate can be defined as a relation, which binds two atoms together in a
statement.
Quantifiers In FOL

○ A quantifier is a language element which generates quantification, and quantification specifies the
quantity of specimens in the universe of discourse.
○ These are the symbols that permit us to determine or identify the range and scope of a variable in a
logical expression. There are two types of quantifiers:
a. Universal Quantifier (for all, everyone, everything)
b. Existential Quantifier (for some, at least one)

Universal Quantifier:

Universal quantifier is a symbol of logical representation, which specifies that the statement within its
range is true for everything or every instance of a particular thing.

The universal quantifier is represented by the symbol ∀, which resembles an inverted A.

If x is a variable, then ∀x is read as:


○ For all x
○ For each x
○ For every x.

All men drink coffee.

∀x man(x) → drink(x, coffee).

It will be read as: For all x, if x is a man, then x drinks coffee.

Existential Quantifier:
Existential quantifiers are the type of quantifiers, which express that the statement within its scope is
true for at least one instance of something.

It is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a
predicate variable, it is called an existential quantifier.
If x is a variable, then existential quantifier will be ∃x or ∃(x). And it will be read as:

○ There exists a 'x.'


○ For some 'x.'
○ For at least one 'x.'

Some boys are intelligent.

∃x: boys(x) ∧ intelligent(x)

It will be read as: There exists an x such that x is a boy and x is intelligent.
Points to remember:

○ The main connective for universal quantifier ∀ is implication →.


○ The main connective for existential quantifier ∃ is and ∧.

Properties of Quantifiers:

○ In universal quantifier, ∀x∀y is similar to ∀y∀x.


○ In Existential quantifier, ∃x∃y is similar to ∃y∃x.
○ ∃x∀y is not similar to ∀y∃x.
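The last property can be checked concretely over a small finite domain. The sketch below (Python; the `loves` relation is a hypothetical example invented for illustration, chosen so that each x loves exactly one y) evaluates both quantifier orders:

```python
# Checking quantifier order over a small finite domain.
# "loves" is a hypothetical relation: each x loves exactly one y.
domain = [1, 2, 3]
loves = {(1, 2), (2, 3), (3, 1)}

# ∃x ∀y loves(x, y): some x loves everyone -- False for this relation
ex_all = any(all((x, y) in loves for y in domain) for x in domain)

# ∀y ∃x loves(x, y): everyone is loved by someone -- True for this relation
all_ex = all(any((x, y) in loves for x in domain) for y in domain)

print(ex_all, all_ex)  # False True -- swapping ∃ and ∀ changes the meaning
```

No single person loves everyone, yet every person is loved by someone, so ∃x∀y and ∀y∃x disagree on the same relation.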
Some Examples of FOL using quantifier:

1. All birds fly.


In this question the predicate is "fly(bird)."
And since there are all birds who fly so it will be represented as follows.
∀x bird(x) →fly(x).

2. Every man respects his parent.


In this question, the predicate is "respect(x, y)," where x=man, and y= parent.
Since there is every man so will use ∀, and it will be represented as follows:
∀x man(x) → respects (x, parent).

3. Some boys play cricket.


In this question, the predicate is "play(x, y)," where x = boys, and y = game. Since there are some boys, we will use ∃,
and it will be represented as:
∃x boys(x) ∧ play(x, cricket).

4. Not all students like both Mathematics and Science.


In this question, the predicate is "like(x, y)," where x= student, and y= subject.
Since it is not true for all students, we will use ∀ with negation, giving the following representation:
¬∀(x) [ student(x) → like(x, Mathematics) ∧ like(x, Science)].
Inference in First-Order Logic

Inference in First-Order Logic is used to deduce new facts or sentences from existing sentences.

Substitution:

Substitution is a fundamental operation performed on terms and formulas. It occurs in all inference
systems in first-order logic. Substitution is more complex in the presence of quantifiers in FOL. If we
write F[a/x], it refers to substituting the constant "a" in place of the variable "x".

Equality:

First-Order logic does not only use predicate and terms for making atomic sentences but also uses another way, which is
equality in FOL. For this, we can use equality symbols which specify that the two terms refer to the same object.

Example: Brother (John) = Smith.

As in the above example, the object referred to by Brother(John) is the same as the object referred to by Smith. The equality
symbol can also be used with negation to represent that two terms are not the same object.

Example: ¬(x=y) which is equivalent to x ≠y.
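The substitution F[a/x] described above can be sketched in a few lines of Python. This assumes a simple term representation, not taken from the slides: nested tuples for compound terms, plain strings for variables and constants.

```python
def substitute(term, binding):
    """Apply a substitution such as {'x': 'a'} (read: a/x) to a term."""
    if isinstance(term, str):
        # a variable is replaced if it is bound; constants pass through
        return binding.get(term, term)
    # compound term: keep the functor, substitute inside each argument
    return (term[0],) + tuple(substitute(arg, binding) for arg in term[1:])

# Brother(x, John)[a/x] becomes Brother(a, John)
result = substitute(('Brother', 'x', 'John'), {'x': 'a'})
print(result)  # ('Brother', 'a', 'John')
```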


FOL inference rules for quantifier:
As in propositional logic, we also have inference rules in first-order logic. Following are some basic inference rules in FOL:

○ Universal Generalization
○ Universal Instantiation
○ Existential Instantiation
○ Existential introduction

1. Universal Generalization:
○ Universal generalization is a valid inference rule which states that if premise P(c) is true for any arbitrary element
c in the universe of discourse, then we can have a conclusion as ∀ x P(x).
○ It can be represented as: from P(c), for an arbitrary element c, infer ∀x P(x).

○ This rule can be used if we want to show that every element has a similar property.
○ In this rule, x must not appear as a free variable.

Example: Let P(c) be "A byte contains 8 bits." Since this holds for an arbitrary byte c, ∀x P(x), "All bytes contain 8 bits," is also true.
2. Universal Instantiation:

○ Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can
be applied multiple times to add new sentences.
○ The new KB is logically equivalent to the previous KB.
○ As per UI, we can infer any sentence obtained by substituting a ground term for the variable.
○ The UI rule states that we can infer any sentence P(c) by substituting a ground term c (a constant
within the domain of x) into ∀x P(x), for any object in the universe of discourse.
○ It can be represented as: from ∀x P(x), infer P(c).

Example:

IF "Every person likes ice-cream" => ∀x P(x), we can infer that
"John likes ice-cream" => P(c).
3. Existential Instantiation:

○ Existential instantiation, also called existential elimination, is a valid inference rule in
first-order logic.
○ It can be applied only once to replace the existential sentence.
○ The new KB is not logically equivalent to old KB, but it will be satisfiable if old KB was satisfiable.
○ This rule states that one can infer P(c) from the formula given in the form of ∃x P(x) for a new constant
symbol c.
○ The restriction with this rule is that the c used in the rule must be a new term for which P(c) is true.
○ It can be represented as: from ∃x P(x), infer P(c), where c is a new constant symbol.

Example:

From ∃x Crown(x) ∧ OnHead(x, John), we can infer Crown(K) ∧ OnHead(K, John), where K is a new constant symbol.


4. Existential introduction

○ An existential introduction is also known as an existential generalization, which is a valid


inference rule in first-order logic.
○ This rule states that if there is some element c in the universe of discourse which has a
property P, then we can infer that there exists something in the universe which has the
property P.
○ It can be represented as: from P(c), infer ∃x P(x).

○ Example: Let's say that,


"Priyanka got good marks in English."
"Therefore, someone got good marks in English."
What is Unification?

○ Unification is a process of making two different logical atomic expressions identical by


finding a substitution. Unification depends on the substitution process.

E.g. Let's say there are two different expressions, P(x, y), and P(a, f(z)).

In this example, we need to make both above statements identical to each other. For this, we will perform
the substitution.

P(x, y)......... (i)


P(a, f(z))......... (ii)

○ Substitute x with a, and y with f(z) in the first expression, and it will be represented as a/x and f(z)/y.
○ With both the substitutions, the first expression will be identical to the second expression and the
substitution set will be: [a/x, f(z)/y].
Example

1)UNIFY(knows(Richard, x), knows(Richard, John))


Unifier: {John/x}.

2) UNIFY(Q(a, g(x, a), f(y)), Q(a, g(f(b), a), x))

Unifier: {f(b)/x, b/y}.

3) UNIFY(p(X, X), p(Z, f(Z)))

Unification Failed (X cannot be bound to both Z and f(Z); the occur check fails).

4)UNIFY(prime (11), prime(y))


Unifier: {11/y}.
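The examples above can be reproduced with a compact version of the standard unification algorithm. This is a sketch, not the slides' own code; it assumes the convention that lowercase strings are variables (exactly matching predicate names are unaffected), and it includes the occur check that makes p(X, X) and p(Z, f(Z)) fail:

```python
def is_variable(t):
    # convention for this sketch: lowercase strings are variables
    return isinstance(t, str) and t[0].islower()

def unify(x, y, s):
    """Return a substitution dict making x and y identical, or None on failure."""
    if s is None:
        return None
    if x == y:
        return s
    if is_variable(x):
        return unify_var(x, y, s)
    if is_variable(y):
        return unify_var(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y) and x:
        # unify the first elements, then the rest under the extended substitution
        return unify(x[1:], y[1:], unify(x[0], y[0], s))
    return None

def unify_var(var, t, s):
    if var in s:
        return unify(s[var], t, s)
    if occurs(var, t, s):   # occur check: a variable cannot unify with a term containing it
        return None
    return {**s, var: t}

def occurs(var, t, s):
    if var == t:
        return True
    if is_variable(t) and t in s:
        return occurs(var, s[t], s)
    if isinstance(t, tuple):
        return any(occurs(var, a, s) for a in t)
    return False

print(unify(('knows', 'Richard', 'x'), ('knows', 'Richard', 'John'), {}))  # {'x': 'John'}
print(unify(('p', 'x', 'x'), ('p', 'z', ('f', 'z')), {}))                  # None (occur check)
```

The first call reproduces example 1 ({John/x}); the second reproduces example 3, where x is first bound to z and then z would have to unify with f(z), which the occur check rejects.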
Resolution In FOL
Resolution is a theorem proving technique that proceeds by building refutation proofs, i.e., proofs by
contradictions.

Resolution is used when various statements are given and we need to prove a conclusion from those
statements. Unification is a key concept in proofs by resolution. Resolution is a single inference rule which
can efficiently operate on the conjunctive normal form or clausal form.

Clause: A disjunction of literals (atomic sentences or their negations) is called a clause. A clause with a single literal is known as a unit clause.

Conjunctive Normal Form: A sentence represented as a conjunction of clauses is said to be in conjunctive
normal form, or CNF.

We can resolve the two clauses given below:

[Animal(g(x)) V Loves(f(x), x)] and [¬Loves(a, b) V ¬Kills(a, b)]

where the two complementary literals are Loves(f(x), x) and ¬Loves(a, b).

These literals can be unified with the unifier θ = [f(x)/a, x/b], and resolution generates the resolvent clause:

[Animal(g(x)) V ¬Kills(f(x), x)].


Steps of Resolution

1. Conversion of facts into first-order logic.


2. Convert FOL statements into CNF
3. Negate the statement to be proved (proof by contradiction)
4. Draw resolution graph (unification).

Example:

a. John likes all kinds of food.


b. Apples and vegetables are food.
c. Anything anyone eats and is not killed by is food.
d. Anil eats peanuts and is still alive.
e. Harry eats everything that Anil eats.
Prove by resolution that:
f. John likes peanuts.
Step-1: Conversion of Facts into FOL
Step-2: Conversion of FOL into CNF

In first-order logic resolution, it is required to convert the FOL statements into CNF, as the CNF
form makes resolution proofs easier.

○ Eliminate all implications (→) and rewrite:


a. ∀x ¬ food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀x ∀y ¬ [eats(x, y) Λ ¬ killed(x)] V
food(y)
d. eats (Anil, Peanuts) Λ alive(Anil)
e. ∀x ¬ eats(Anil, x) V eats(Harry, x)
f. ∀x¬ [¬ killed(x) ] V alive(x)
g. ∀x ¬ alive(x) V ¬ killed(x)
h. likes(John, Peanuts).
○ Move negation (¬) inwards and rewrite:
a. ∀x ¬ food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀x ∀y ¬ eats(x, y) V killed(x) V food(y)
d. eats(Anil, Peanuts) Λ alive(Anil)
e. ∀x ¬ eats(Anil, x) V eats(Harry, x)
f. ∀x killed(x) V alive(x)
g. ∀x ¬ alive(x) V ¬ killed(x)
h. likes(John, Peanuts).

○ Rename variables (standardize variables):
a. ∀x ¬ food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀y ∀z ¬ eats(y, z) V killed(y) V food(z)
d. eats(Anil, Peanuts) Λ alive(Anil)
e. ∀w ¬ eats(Anil, w) V eats(Harry, w)
f. ∀g killed(g) V alive(g)
g. ∀k ¬ alive(k) V ¬ killed(k)
h. likes(John, Peanuts).
○ Eliminate existential quantifiers (Skolemization):
In this step, we eliminate the existential quantifier ∃; this process is known as Skolemization. In this
example, since there is no existential quantifier, all the statements remain the same in this step.

○ Drop universal quantifiers.

In this step we drop all universal quantifiers, since all the statements are implicitly universally
quantified and we do not need to write them explicitly.

a. ¬ food(x) V likes(John, x)
b. food(Apple)
c. food(vegetables)
d. ¬ eats(y, z) V killed(y) V food(z)
e. eats (Anil, Peanuts)
f. alive(Anil)
g. ¬ eats(Anil, w) V eats(Harry, w)
h. killed(g) V alive(g)
i. ¬ alive(k) V ¬ killed(k)
j. likes(John, Peanuts).
○ Distribute conjunction ∧ over disjunction ∨. This step makes no change in this example.

Step-3: Negate the statement to be proved

In this step, we apply negation to the conclusion statement, which is written as
¬likes(John, Peanuts)

Step-4: Draw Resolution graph:


Explanation of Resolution graph:
○ In the first step of resolution graph, ¬likes(John, Peanuts) , and likes(John, x) get
resolved(canceled) by substitution of {Peanuts/x}, and we are left with ¬ food(Peanuts)
○ In the second step of the resolution graph, ¬ food(Peanuts) , and food(z) get resolved (canceled)
by substitution of { Peanuts/z}, and we are left with ¬ eats(y, Peanuts) V killed(y) .
○ In the third step of the resolution graph, ¬ eats(y, Peanuts) and eats (Anil, Peanuts) get resolved by
substitution {Anil/y}, and we are left with Killed(Anil) .
○ In the fourth step of the resolution graph, killed(Anil) and ¬killed(k) get resolved by substitution
{Anil/k}, and we are left with ¬alive(Anil).
○ In the last step of the resolution graph ¬ alive(Anil) and alive(Anil) get resolved.
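The five steps above can be replayed mechanically once the clauses are grounded with the substitutions already found ({Peanuts/x}, {Peanuts/z}, {Anil/y}, {Anil/k}). The sketch below is illustrative, not part of the slides: clauses are encoded as frozensets of signed literal strings, with "~" marking negation.

```python
def resolve(c1, c2):
    """Resolve two clauses on a complementary pair of literals."""
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            return (c1 | c2) - {lit, comp}   # drop the complementary pair
    return None

# Ground instances of the CNF clauses used in the proof, starting from the negated goal
steps = [
    frozenset({"~likes(John,Peanuts)"}),                                  # negated goal
    frozenset({"~food(Peanuts)", "likes(John,Peanuts)"}),                 # (a) with {Peanuts/x}
    frozenset({"~eats(Anil,Peanuts)", "killed(Anil)", "food(Peanuts)"}),  # (d) with {Anil/y, Peanuts/z}
    frozenset({"eats(Anil,Peanuts)"}),                                    # (e)
    frozenset({"~alive(Anil)", "~killed(Anil)"}),                         # (i) with {Anil/k}
    frozenset({"alive(Anil)"}),                                           # (f)
]

clause = steps[0]
for nxt in steps[1:]:
    clause = resolve(clause, nxt)
print(clause)  # frozenset() -- the empty clause: contradiction, so likes(John, Peanuts) holds
```

Reaching the empty clause is exactly the contradiction the refutation proof needs.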
Forward Chaining and Backward chaining
Inference engine:
The inference engine is the component of an intelligent system in artificial intelligence which applies logical rules to
the knowledge base to infer new information from known facts. The first inference engines were part of expert
systems. An inference engine commonly proceeds in two modes:

a. Forward chaining
b. Backward chaining

A. Forward Chaining

Forward chaining is also known as forward deduction or the forward reasoning method when using an
inference engine. Forward chaining is a form of reasoning which starts with the atomic sentences in the
knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more
data until a goal is reached.

The forward-chaining algorithm starts from known facts, triggers all rules whose premises are
satisfied, and adds their conclusions to the known facts. This process repeats until the problem is
solved.
Properties of Forward-Chaining:

○ It is a bottom-up approach, as it moves from the bottom (known facts) to the top (goal).

○ It is a process of making a conclusion based on known facts or data, starting from the initial state and
reaching the goal state.
○ The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
○ The forward-chaining approach is commonly used in expert systems such as CLIPS, and in business and
production rule systems.
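The fire-rules-until-nothing-changes loop can be sketched in a few lines. The rule base and facts below are invented for illustration (they are not from the slides); each rule is a (premises, conclusion) pair:

```python
# A minimal data-driven (forward-chaining) loop.
rules = [
    ({"croaks", "eats_flies"}, "frog"),   # if it croaks and eats flies, it is a frog
    ({"frog"}, "green"),                  # if it is a frog, it is green
]
facts = {"croaks", "eats_flies"}          # the known facts we start from

changed = True
while changed:                            # repeat until no rule adds a new fact
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)         # premises satisfied: add the conclusion
            changed = True

print(sorted(facts))  # ['croaks', 'eats_flies', 'frog', 'green']
```

The loop never consults a goal: it simply keeps deriving whatever the data supports, which is why this mode is called data-driven.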

B. Backward Chaining:

Backward-chaining is also known as a backward deduction or backward reasoning method when using
an inference engine. A backward chaining algorithm is a form of reasoning, which starts with the goal and
works backward, chaining through rules to find known facts that support the goal.
Properties of backward chaining:

○ It is known as a top-down approach.

○ Backward chaining is based on the modus ponens inference rule.
○ In backward chaining, the goal is broken into sub-goals to prove the facts true.
○ It is called a goal-driven approach, as the list of goals decides which rules are selected and used.
○ The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference
engines, proof assistants, and various AI applications.
○ The backward-chaining method mostly uses a depth-first search strategy for proofs.
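For contrast, a minimal goal-driven sketch over the same style of rule base (again invented for illustration, not from the slides): the prover starts from the goal and recursively proves each sub-goal.

```python
# A minimal goal-driven (backward-chaining) prover.
rules = [
    (["croaks", "eats_flies"], "frog"),
    (["frog"], "green"),
]
facts = {"croaks", "eats_flies"}

def prove(goal):
    if goal in facts:                        # the goal is a known fact
        return True
    for premises, conclusion in rules:
        if conclusion == goal:               # a rule could conclude the goal:
            if all(prove(p) for p in premises):  # break it into sub-goals
                return True
    return False

print(prove("green"))  # True: green <- frog <- croaks, eats_flies
```

The recursion is the depth-first search mentioned above: each sub-goal is fully explored before the next one is tried.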
Differences

○ Forward chaining, as the name suggests, starts from the known facts and moves forward by applying
inference rules to extract more data, continuing until it reaches the goal, whereas backward
chaining starts from the goal and moves backward by using inference rules to determine the facts that satisfy
the goal.
○ Forward chaining is called a data-driven inference technique, whereas backward chaining is called a
goal-driven inference technique.
○ Forward chaining is known as a bottom-up approach, whereas backward chaining is known as a top-down
approach.
○ Forward chaining typically uses a breadth-first search strategy, whereas backward chaining uses a
depth-first search strategy.
○ Forward and backward chaining both apply the Modus Ponens inference rule.
○ Forward chaining can be used for tasks such as planning, design process monitoring, diagnosis, and
classification, whereas backward chaining is typically used for classification and diagnosis tasks.
Reasoning in AI

Reasoning is the mental process of deriving logical conclusions and making predictions
from available knowledge, facts, and beliefs. Or we can say, "Reasoning is a way to infer facts
from existing data." It is the general process of thinking rationally to find valid conclusions.

In artificial intelligence, reasoning is essential so that the machine can think rationally
like a human brain and perform like a human.

Types of Reasoning

○ Deductive reasoning
○ Inductive reasoning
○ Abductive reasoning
○ Common Sense Reasoning
○ Monotonic Reasoning
○ Non-monotonic Reasoning
Deductive Reasoning

Deductive reasoning is deducing new information from logically related known information. It is the form of
valid reasoning, which means the argument's conclusion must be true when the premises are true.

Deductive reasoning uses rules and facts, as in propositional logic in AI. It is
sometimes referred to as top-down reasoning, and it is the opposite of inductive reasoning.

In deductive reasoning, the truth of the premises guarantees the truth of the conclusion.

Example:

Premise-1: All humans eat veggies.

Premise-2: Suresh is human.

Conclusion: Suresh eats veggies.


Inductive Reasoning

Inductive reasoning is a form of reasoning that arrives at a conclusion from a limited set of facts by the
process of generalization. It starts with a series of specific facts or data and reaches a general statement
or conclusion.

Inductive reasoning is also known as cause-effect reasoning or bottom-up reasoning.

In inductive reasoning, we use historical data or various premises to generate a generic rule, for which premises
support the conclusion.

In inductive reasoning, premises provide probable supports to the conclusion, so the truth of premises
does not guarantee the truth of the conclusion.
Example:

Premise: All of the pigeons we have seen in the zoo are white.

Conclusion: Therefore, we can expect all the pigeons to be white.


Abductive Reasoning

Abductive reasoning is a form of logical reasoning which starts with one or more observations and then
seeks the most likely explanation or conclusion for the observations.

Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning, the premises do
not guarantee the conclusion.

Example:

Implication: The cricket ground is wet if it is raining.

Axiom: The cricket ground is wet.

Conclusion: It is raining.
Common Sense Reasoning

Common sense reasoning is an informal form of reasoning which can be gained through
experience.

Common sense reasoning simulates the human ability to make presumptions about events which
occur every day.

It relies on good judgment rather than exact logic and operates on heuristic knowledge and
heuristic rules.

Example:

1. A person can only be in one place at a time.


2. If I put my hand in a fire, then it will burn.
Monotonic Reasoning

In monotonic reasoning, once a conclusion is drawn, it remains the same even if we add more
information to the existing information in our knowledge base. In monotonic reasoning, adding knowledge
does not decrease the set of propositions that can be derived.

To solve monotonic problems, we can derive the valid conclusion from the available facts only, and it will not be
affected by new facts.

Monotonic reasoning is not useful for real-time systems, because in real time facts change, so we cannot
use monotonic reasoning there.

Monotonic reasoning is used in conventional reasoning systems, and a logic-based system is monotonic.

Any theorem proving is an example of monotonic reasoning.

Example:

○ Earth revolves around the Sun.

It is a true fact, and it cannot be changed even if we add other sentences to the knowledge base, like "The
moon revolves around the earth" or "The earth is not round."
Non-Monotonic Reasoning

In Non-monotonic reasoning, some conclusions may be invalidated if we add some more information to our
knowledge base.

Logic will be said as non-monotonic if some conclusions can be invalidated by adding more knowledge into
our knowledge base.

Non-monotonic reasoning deals with incomplete and uncertain models.

"Human perception of various things in daily life" is a general example of non-monotonic reasoning.

Example: Suppose the knowledge base contains the following knowledge:

○ Birds can fly
○ Penguins cannot fly
○ Pitty is a bird

From the above sentences, we can conclude that Pitty can fly. However, if we later add "Pitty is a
penguin" to the knowledge base, the conclusion "Pitty can fly" is invalidated.
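This defeasible behaviour can be sketched with a default rule in Python. The encoding (a set-based knowledge base and a `can_fly` default) is an illustrative assumption, not from the slides:

```python
# Sketch of a non-monotonic (default) rule: "birds can fly" holds
# until an exception appears in the knowledge base.
def can_fly(kb):
    # default conclusion, withdrawn if the KB also contains an exception
    return "bird" in kb and "penguin" not in kb

kb = {"bird"}              # Pitty is a bird
print(can_fly(kb))         # True  -- we conclude Pitty can fly

kb.add("penguin")          # new knowledge: Pitty is a penguin
print(can_fly(kb))         # False -- adding knowledge invalidated the conclusion
```

Adding a fact shrank the set of conclusions, which is exactly what monotonic logics forbid and non-monotonic reasoning allows.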