UNIT 3
Knowledge Representation
Topics
● Knowledge Representation
● First Order Logic
● Forward Chaining and Backward Chaining
● Reasoning in AI
What is Knowledge Representation?
Knowledge representation is the part of AI concerned with representing facts about the world in a form that a machine can use to reason and solve complex problems.
First Order Logic
First-order logic (FOL) represents knowledge in terms of objects, relations (predicates), and functions, and builds its sentences in two ways:
Atomic Sentences
○ Atomic sentences are the most basic sentences of first-order logic. They are formed from a predicate symbol followed by a parenthesized sequence of terms, e.g., Brothers(Ravi, Ajay).
Complex Sentences
○ Complex sentences are made by combining atomic sentences using connectives.
○ A quantifier is a language element that generates quantification; quantification specifies how many specimens in the universe of discourse a statement applies to.
○ Quantifiers are the symbols that determine the range and scope of a variable in a logical expression. There are two types of quantifier:
a. Universal quantifier, ∀ (for all, everyone, everything)
b. Existential quantifier, ∃ (for some, at least one)
Universal Quantifier:
A universal quantifier is a symbol of logical representation which specifies that the statement within its range is true for everything, or for every instance of a particular thing.
Example: "All men drink coffee" is written as ∀x man(x) → drink(x, coffee).
It is read as: for all x, if x is a man, then x drinks coffee.
Existential Quantifier:
Existential quantifiers express that the statement within their scope is true for at least one instance of something.
The existential quantifier is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.
If x is a variable, then the existential quantifier is written ∃x or ∃(x).
Example: "Some boys are intelligent" is written as ∃x boy(x) ∧ intelligent(x).
It is read as: there exists an x such that x is a boy and x is intelligent.
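As a rough programming analogy (not part of the formal logic above), over a finite universe of discourse the two quantifiers behave like Python's built-in all() and any(). The domain and the man/boy/drink/intelligent predicates below are invented purely for this illustration.

# Rough analogy: quantifiers over a small, finite universe of discourse.
# The domain and predicates here are made up for illustration only.

domain = ["John", "Ravi", "Ajay", "Priya"]

is_man = {"John": True, "Ravi": True, "Ajay": True, "Priya": False}
drinks_coffee = {"John": True, "Ravi": True, "Ajay": True, "Priya": False}
is_boy = {"John": False, "Ravi": True, "Ajay": True, "Priya": False}
is_intelligent = {"John": True, "Ravi": False, "Ajay": True, "Priya": True}

# ∀x man(x) → drink(x, coffee): true when every man in the domain drinks coffee.
all_men_drink_coffee = all((not is_man[x]) or drinks_coffee[x] for x in domain)

# ∃x boy(x) ∧ intelligent(x): true when at least one boy is intelligent.
some_boy_is_intelligent = any(is_boy[x] and is_intelligent[x] for x in domain)

print(all_men_drink_coffee)     # True
print(some_boy_is_intelligent)  # True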
Points to remember:
○ With the universal quantifier ∀, the main connective is usually implication (→).
○ With the existential quantifier ∃, the main connective is usually conjunction (∧).
Properties of Quantifiers:
○ ∀x∀y is the same as ∀y∀x, and ∃x∃y is the same as ∃y∃x.
○ ∃x∀y is not the same as ∀y∃x.
Inference in First-Order Logic
Inference in first-order logic is used to deduce new facts or sentences from existing sentences.
Substitution:
Substitution is a fundamental operation performed on terms and formulas, and it occurs in all inference systems in first-order logic. Substitution becomes more complex in the presence of quantifiers. If we write F[a/x], it means that the constant "a" is substituted in place of the variable "x" in the formula F.
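As a minimal sketch of how substitution can be carried out mechanically, the code below applies a substitution to terms written as nested tuples. The term representation (lowercase strings for variables, capitalised strings for constants, tuples for compound terms) is an assumption made only for this illustration.

# Minimal substitution sketch. Assumed representation:
#   - a variable is a lowercase string, e.g. "x"
#   - a constant is a capitalised string, e.g. "A"
#   - a compound term is a tuple ("f", arg1, arg2, ...)

def is_variable(term):
    return isinstance(term, str) and term[0].islower()

def substitute(term, subst):
    """Apply the substitution {variable: term} to a term, e.g. F[a/x]."""
    if is_variable(term):
        return subst.get(term, term)          # replace the variable if it is bound
    if isinstance(term, tuple):               # compound term: recurse into arguments
        return (term[0],) + tuple(substitute(arg, subst) for arg in term[1:])
    return term                               # constants are left unchanged

# Example: substitute the constant "A" for the variable "x" in P(x, f(x, y)).
print(substitute(("P", "x", ("f", "x", "y")), {"x": "A"}))
# -> ('P', 'A', ('f', 'A', 'y'))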
Equality:
First-order logic does not only use predicates and terms to make atomic sentences; it also provides equality. The equality symbol specifies that two terms refer to the same object. For example, Brother(John) = Smith states that the object referred to by Brother(John) is the same as the object referred to by Smith. The equality symbol can also be used with negation to state that two terms are not the same object, e.g., ¬(x = y), which is equivalent to x ≠ y.
The following inference rules handle quantifiers in FOL:
○ Universal Generalization
○ Universal Instantiation
○ Existential Instantiation
○ Existential Introduction
1. Universal Generalization:
○ Universal generalization is a valid inference rule which states that if premise P(c) is true for any arbitrary element c in the universe of discourse, then we can conclude ∀x P(x).
○ It can be represented as: P(c), for an arbitrary element c ∴ ∀x P(x).
○ This rule can be used when we want to show that every element has the same property.
○ In this rule, x must not appear as a free variable.
Example: Let P(c) be "A byte contains 8 bits." Since P(c) holds for an arbitrarily chosen byte c, we can conclude ∀x P(x): "All bytes contain 8 bits."
2. Universal Instantiation:
○ Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can be applied multiple times to add new sentences.
○ The new KB is logically equivalent to the previous KB.
○ As per UI, we can infer any sentence obtained by substituting a ground term for the variable.
○ The UI rule states that we can infer any sentence P(c) from ∀x P(x) by substituting a ground term c (a constant within the domain of x) for the universally quantified variable, for any object in the universe of discourse.
○ It can be represented as: ∀x P(x) ∴ P(c), for any ground term c.
Example: From ∀x P(x), read as "Every person likes ice-cream", we can infer P(John): "John likes ice-cream".
3. Existential Instantiation:
○ Existential instantiation, also called Existential Elimination, is a valid inference rule in first-order logic.
○ It can be applied only once to replace the existential sentence.
○ The new KB is not logically equivalent to the old KB, but it is satisfiable whenever the old KB was satisfiable.
○ This rule states that one can infer P(c) from a formula of the form ∃x P(x), for a new constant symbol c.
○ The restriction on this rule is that c must be a new constant symbol, one that does not appear elsewhere in the knowledge base, for which P(c) is asserted to be true.
○ It can be represented as: ∃x P(x) ∴ P(c), where c is a new constant symbol.
Example: From ∃x Crown(x) ∧ OnHead(x, John), we can infer Crown(K) ∧ OnHead(K, John), provided K is a new constant that does not appear elsewhere in the knowledge base.
Unification
Unification is the process of finding a substitution that makes two different logical expressions look identical; it is a key component of first-order inference algorithms.
Example: Let's say there are two different expressions, P(x, y) and P(a, f(z)).
In this example, we need to make both of the above expressions identical to each other. For this, we perform substitution.
○ Substitute x with a, and y with f(z), in the first expression; these substitutions are written as a/x and f(z)/y.
○ With both substitutions, the first expression becomes identical to the second expression, and the substitution set is: [a/x, f(z)/y].
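The following is a minimal sketch of the unification idea, assuming the same nested-tuple term representation used in the substitution sketch above; it omits the occurs-check and other refinements of a full unification algorithm, and the constant "A" stands in for the constant a of the example.

# Minimal unification sketch (assumed representation: lowercase strings are
# variables, capitalised strings are constants, tuples are compound terms).

def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def unify(t1, t2, subst=None):
    """Return a substitution that makes t1 and t2 identical, or None."""
    if subst is None:
        subst = {}
    if t1 == t2:
        return subst
    if is_variable(t1):
        return unify_var(t1, t2, subst)
    if is_variable(t2):
        return unify_var(t2, t1, subst)
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):          # unify the argument lists element-wise
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                           # clash: the expressions cannot be unified

def unify_var(var, t, subst):
    if var in subst:
        return unify(subst[var], t, subst)
    subst = dict(subst)
    subst[var] = t                        # bind the variable (occurs-check omitted)
    return subst

# Example from the text: P(x, y) and P(a, f(z)), with "A" as the constant a.
print(unify(("P", "x", "y"), ("P", "A", ("f", "z"))))
# -> {'x': 'A', 'y': ('f', 'z')}   i.e. the substitution set [A/x, f(z)/y]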
Resolution
Resolution is used when various statements are given and we need to prove a conclusion from those statements. Unification is a key concept in proofs by resolution. Resolution is a single inference rule which can efficiently operate on sentences in conjunctive normal form (CNF), also called clausal form.
Clause: A disjunction of literals is called a clause. A clause containing only a single literal is known as a unit clause.
For example, two complementary literals are Loves(f(x), x) and ¬Loves(a, b), where a and b are variables.
These literals can be unified with the unifier θ = [f(x)/a, x/b] (substituting f(x) for a and x for b), and resolving the clauses that contain them generates a resolvent clause consisting of the remaining literals with θ applied.
Example: Consider the following facts. John likes all kinds of food. Apple and vegetables are food. Anything anyone eats and is not killed is food. Anil eats peanuts and is still alive. Harry eats everything that Anil eats. Anyone is either killed or alive, and anyone who is alive is not killed. Prove by resolution that "John likes peanuts."
Converting these facts to first-order logic and then to CNF gives the following clauses:
a. ¬ food(x) V likes(John, x)
b. food(Apple)
c. food(vegetables)
d. ¬ eats(y, z) V killed(y) V food(z)
e. eats (Anil, Peanuts)
f. alive(Anil)
g. ¬ eats(Anil, w) V eats(Harry, w)
h. killed(g) V alive(g)
i. ¬ alive(k) V ¬ killed(k)
j. likes(John, Peanuts).
These clauses are obtained by eliminating implications, moving negation inwards, standardizing variables, and distributing conjunction ∧ over disjunction ∨.
To prove the conclusion by refutation, we negate it and add ¬likes(John, Peanuts) to the set of clauses.
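The resolution graph itself is not reproduced here; the following is one possible chain of resolution steps using the clauses above (the substitution applied at each step is shown in brackets, using the same t/x convention as the substitution section):
1. Resolve ¬likes(John, Peanuts) with clause (a) ¬food(x) ∨ likes(John, x) [Peanuts/x], giving ¬food(Peanuts).
2. Resolve ¬food(Peanuts) with clause (d) ¬eats(y, z) ∨ killed(y) ∨ food(z) [Peanuts/z], giving ¬eats(y, Peanuts) ∨ killed(y).
3. Resolve this with clause (e) eats(Anil, Peanuts) [Anil/y], giving killed(Anil).
4. Resolve killed(Anil) with clause (i) ¬alive(k) ∨ ¬killed(k) [Anil/k], giving ¬alive(Anil).
5. Resolve ¬alive(Anil) with clause (f) alive(Anil), giving the empty clause.
The empty clause is a contradiction, so the negated goal is unsatisfiable, and likes(John, Peanuts) is proved.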
An inference engine commonly applies one of two reasoning methods:
a. Forward chaining
b. Backward chaining
A. Forward Chaining
Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. Forward chaining is a form of reasoning which starts with the atomic sentences (facts) in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to derive more data until a goal is reached.
The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
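A minimal sketch of this loop over if-then rules, where each rule is a (premises, conclusion) pair and everything has been reduced to ground (propositional) facts. The "Robert is a criminal" rule set below is the standard textbook illustration, written out here as an assumed, simplified example.

# Minimal forward-chaining sketch over ground if-then rules.
# Each rule is a (premises, conclusion) pair; the rules and facts below
# are an assumed, propositionalized example for illustration only.

def forward_chaining(rules, facts, goal):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule if all of its premises are already known facts.
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)       # add the new conclusion to the facts
                changed = True
                if conclusion == goal:      # stop as soon as the goal is derived
                    return True
    return goal in facts

rules = [
    ({"american(Robert)", "weapon(T1)", "sells(Robert, T1, A)", "hostile(A)"},
     "criminal(Robert)"),
    ({"missile(T1)"}, "weapon(T1)"),
    ({"missile(T1)", "owns(A, T1)"}, "sells(Robert, T1, A)"),
    ({"enemy(A, America)"}, "hostile(A)"),
]
facts = {"american(Robert)", "missile(T1)", "owns(A, T1)", "enemy(A, America)"}

print(forward_chaining(rules, facts, "criminal(Robert)"))  # True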
Properties of Forward-Chaining:
○ It is a bottom-up, data-driven approach: reasoning moves from the data (facts) toward the goal.
○ It applies Modus Ponens with a breadth-first style search over the rules.
○ It may derive many facts that are not relevant to the goal.
○ It is commonly used in production-rule and expert systems.
B. Backward Chaining:
Backward chaining is also known as backward deduction or backward reasoning when using an inference engine. A backward-chaining algorithm is a form of reasoning which starts with the goal and works backward, chaining through rules to find known facts that support the goal.
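A matching sketch of backward chaining over the same (premises, conclusion) rule format as the forward-chaining example; the rules and facts are again an assumed illustration, and proved subgoals are cached by adding them to the fact set.

# Minimal backward-chaining sketch over the same rule format as above.
# For each rule concluding the goal, it asks whether every premise can be
# proved recursively; "visited" tracks the current proof path to avoid loops.

def backward_chaining(rules, facts, goal, visited=None):
    """Return True if `goal` follows from `facts` using `rules`.
    Note: proved subgoals are cached by adding them to `facts`."""
    if visited is None:
        visited = set()
    if goal in facts:                       # the goal is already a known fact
        return True
    if goal in visited:                     # goal is already on the current proof path
        return False
    visited.add(goal)
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chaining(rules, facts, p, visited) for p in premises
        ):
            facts.add(goal)                 # cache the proved subgoal
            visited.discard(goal)
            return True
    visited.discard(goal)                   # goal failed here; allow retries elsewhere
    return False

rules = [
    ({"missile(T1)"}, "weapon(T1)"),
    ({"missile(T1)", "owns(A, T1)"}, "sells(Robert, T1, A)"),
    ({"enemy(A, America)"}, "hostile(A)"),
    ({"american(Robert)", "weapon(T1)", "sells(Robert, T1, A)", "hostile(A)"},
     "criminal(Robert)"),
]
facts = {"american(Robert)", "missile(T1)", "owns(A, T1)", "enemy(A, America)"}

print(backward_chaining(rules, facts, "criminal(Robert)"))  # True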
Properties of backward chaining:
○ It is a top-down, goal-driven approach: reasoning moves from the goal back toward the supporting facts.
○ It uses a depth-first search strategy together with the Modus Ponens inference rule.
○ Only the part of the knowledge base relevant to the goal is explored.
○ It is commonly used in logic programming (e.g., Prolog), automated theorem proving, and inference engines.
Forward Chaining vs. Backward Chaining:
○ Forward chaining, as the name suggests, starts from the known facts and moves forward by applying inference rules to extract more data, and it continues until it reaches the goal, whereas backward chaining starts from the goal and moves backward through inference rules to determine the facts that satisfy the goal.
○ Forward chaining is called a data-driven inference technique, whereas backward chaining is called a goal-driven inference technique.
○ Forward chaining is known as a bottom-up approach, whereas backward chaining is known as a top-down approach.
○ Forward chaining uses a breadth-first search strategy, whereas backward chaining uses a depth-first search strategy.
○ Forward and backward chaining both apply the Modus Ponens inference rule.
○ Forward chaining can be used for tasks such as planning, design process monitoring, diagnosis, and
classification, whereas backward chaining can be used for classification and diagnosis tasks.
Reasoning in AI
Reasoning is the mental process of deriving logical conclusions and making predictions from available knowledge, facts, and beliefs. In other words, "reasoning is a way to infer facts from existing data." It is a general process of thinking rationally to find valid conclusions.
In artificial intelligence, reasoning is essential so that the machine can also think rationally, as a human brain does, and can perform like a human.
Types of Reasoning
○ Deductive reasoning
○ Inductive reasoning
○ Abductive reasoning
○ Common Sense Reasoning
○ Monotonic Reasoning
○ Non-monotonic Reasoning
Deductive Reasoning
Deductive reasoning is deducing new information from logically related known information. It is a form of valid reasoning: the conclusion of the argument must be true whenever the premises are true.
Deductive reasoning in AI is based on formal logic (such as propositional logic) and requires rules and facts. It is sometimes referred to as top-down reasoning, and it is the opposite of inductive reasoning.
In deductive reasoning, the truth of the premises guarantees the truth of the conclusion.
Example:
Premise 1: All humans eat vegetables.
Premise 2: Suresh is a human.
Conclusion: Suresh eats vegetables.
Inductive Reasoning
Inductive reasoning arrives at a conclusion from a limited set of facts by generalization. It starts with a series of specific facts or data and reaches a general statement or conclusion.
Inductive reasoning is also known as cause-effect reasoning or bottom-up reasoning.
In inductive reasoning, we use historical data or various premises to generate a generic rule for which the premises support the conclusion.
Because the premises provide only probable support to the conclusion, the truth of the premises does not guarantee the truth of the conclusion.
Example:
Premise: All of the pigeons we have seen in the zoo are white.
Conclusion: Therefore, we can expect all pigeons to be white.
Abductive Reasoning
Abductive reasoning is a form of logical reasoning which starts with one or more observations and then seeks the most likely explanation or conclusion for those observations.
Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning the premises do not guarantee the conclusion.
Example:
Implication: The cricket ground is wet if it is raining.
Observation: The cricket ground is wet.
Conclusion: It is raining.
Common Sense Reasoning
Common sense reasoning is an informal form of reasoning which is gained through experience.
It simulates the human ability to make presumptions about events that occur every day.
It relies on good judgment rather than exact logic, and it operates on heuristic knowledge and heuristic rules.
Example: "One person can be at only one place at a time" and "If I put my hand in a fire, it will burn" are conclusions drawn from common sense rather than from formal proof.
Monotonic Reasoning
In monotonic reasoning, once a conclusion is drawn, it remains the same even if we add more information to the existing knowledge base. In monotonic reasoning, adding knowledge does not decrease the set of propositions that can be derived.
To solve monotonic problems, we can derive valid conclusions from the available facts only, and they will not be affected by new facts.
Monotonic reasoning is not useful for real-time systems, because in real time facts change, so we cannot use monotonic reasoning.
Monotonic reasoning is used in conventional reasoning systems, and a logic-based system is monotonic.
Example: "The Earth revolves around the Sun." This is a true fact, and the conclusion does not change even if we add other sentences to the knowledge base, such as "The moon revolves around the Earth" or "The Earth is not round."
Non-Monotonic Reasoning
In Non-monotonic reasoning, some conclusions may be invalidated if we add some more information to our
knowledge base.
A logic is said to be non-monotonic if some conclusions can be invalidated by adding more knowledge to the knowledge base.
"Human perceptions of various things in daily life" is a general example of non-monotonic reasoning.
Example: Suppose the knowledge base contains the following knowledge:
○ Birds can fly
○ Penguins cannot fly
○ Pitty is a bird
From the above sentences, we can conclude that Pitty can fly.
However, if we later add "Pitty is a penguin" to the knowledge base, this conclusion is invalidated, and we instead conclude that Pitty cannot fly.
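A tiny sketch of how such a default rule behaves non-monotonically, using a hand-coded check rather than any particular non-monotonic logic; the predicates and names are taken from the example above purely for illustration.

# Tiny non-monotonic sketch: "birds fly by default, unless known to be a penguin".
# This is a hand-coded default rule, not a full non-monotonic logic.

def can_fly(kb, name):
    # Default rule: a bird is assumed to fly unless the KB says it is a penguin.
    return ("bird", name) in kb and ("penguin", name) not in kb

kb = {("bird", "Pitty")}
print(can_fly(kb, "Pitty"))       # True  -- concluded from the default

kb.add(("penguin", "Pitty"))      # new knowledge arrives
print(can_fly(kb, "Pitty"))       # False -- the earlier conclusion is withdrawn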