Unit 4
Uncertain Knowledge
Reasoning
1. Deductive reasoning:
Deductive reasoning is a form of valid reasoning in which the
conclusion follows necessarily from the premises: if the
premises are true, the conclusion is guaranteed to be true. It
moves from general rules to a specific conclusion and is also
known as top-down reasoning.
2. Inductive Reasoning:
Inductive reasoning is a form of reasoning that arrives at a
conclusion from a limited set of facts by the process of
generalization. It starts with a series of specific facts or data
and reaches a general statement or conclusion.
Inductive reasoning is also known as cause-and-effect
reasoning or bottom-up reasoning.
In inductive reasoning, we use historical data or various
premises to generate a generic rule that the premises
support.
In inductive reasoning, the premises provide only probable
support for the conclusion, so the truth of the premises does
not guarantee the truth of the conclusion.
Example:
Premise: All of the pigeons we have seen in the zoo are
white.
Conclusion: Therefore, we can expect all the pigeons to be
white.
3. Abductive reasoning:
Abductive reasoning is a form of logical reasoning that
starts with one or more observations and then seeks the
most likely explanation or conclusion for those observations.
Abductive reasoning is an extension of deductive reasoning,
but in abductive reasoning the premises do not guarantee
the conclusion.
Example:
Implication: The cricket ground is wet if it is raining.
Axiom: The cricket ground is wet.
Conclusion: It is raining.
4. Common Sense Reasoning
Common sense reasoning is an informal form of reasoning
that is gained through experience.
Common sense reasoning simulates the human ability to
make presumptions about events that occur every day.
It relies on good judgment rather than exact logic and
operates on heuristic knowledge and heuristic rules.
Example:
1. One person can be at one place at a time.
2. If I put my hand in a fire, then it will burn.
The above two statements are examples of common sense
reasoning, which a human mind can easily understand and
assume.
5. Monotonic Reasoning:
In monotonic reasoning, once a conclusion is drawn, it
remains valid even if we add further information to the
existing knowledge base. In monotonic reasoning, adding
knowledge does not decrease the set of propositions that can
be derived.
To solve monotonic problems, we derive valid conclusions
from the available facts only, and these conclusions are not
affected by new facts.
Monotonic reasoning is not useful for real-time systems,
because in real time facts change, so we cannot use
monotonic reasoning there.
Monotonic reasoning is used in conventional reasoning
systems, and a logic-based system is monotonic.
Any theorem proving is an example of monotonic reasoning.
Example:
o Earth revolves around the Sun.
6. Non-monotonic Reasoning
In Non-monotonic reasoning, some conclusions may be
invalidated if we add some more information to our
knowledge base.
A logic is said to be non-monotonic if some conclusions can
be invalidated by adding more knowledge to the knowledge
base.
Non-monotonic reasoning deals with incomplete and
uncertain models.
Human perception of various things in daily life is a
general example of non-monotonic reasoning.
Example: Suppose the knowledge base contains the
following knowledge:
o Birds can fly
o Penguins cannot fly
o Pitty is a bird
So from the above sentences, we can conclude that Pitty can
fly.
However, if we add another sentence to the knowledge
base, "Pitty is a penguin", which concludes "Pitty cannot fly",
it invalidates the above conclusion.
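The Pitty example can be sketched as a default rule in code. This is a minimal Python illustration of the idea, not a full non-monotonic logic; the function name and the shape of the knowledge base are assumptions of the sketch:

```python
def can_fly(bird, penguins):
    """Default rule: birds can fly, unless the knowledge base
    also records that this particular bird is a penguin."""
    return bird not in penguins

# Knowledge base: Pitty is a bird, and no known penguins yet,
# so we conclude that Pitty can fly.
print(can_fly("Pitty", set()))

# Adding "Pitty is a penguin" invalidates the earlier conclusion.
print(can_fly("Pitty", {"Pitty"}))
```

The same query returns a different answer after knowledge is added, which is exactly what monotonic logic forbids.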
Advantages of Non-monotonic reasoning:
o For real-world systems such as Robot navigation, we can
use non-monotonic reasoning.
o In Non-monotonic reasoning, we can choose
probabilistic facts or can make assumptions.
Deductive vs. Inductive reasoning:
o Starts from: Deductive reasoning starts from premises,
whereas inductive reasoning starts from the conclusion.
Monotonic Reasoning vs. Non-monotonic Reasoning:
1. Monotonic reasoning is the process which does not change
its direction, i.e., it moves in one direction only, whereas
non-monotonic reasoning is the process which changes its
direction or values as the knowledge base increases.
2. Monotonic reasoning deals with very specific types of
models, which have valid proofs, whereas non-monotonic
reasoning deals with incomplete or unknown facts.
3. In monotonic reasoning, results are always true; therefore,
the set of propositions will only increase. In non-monotonic
reasoning, results and the set of propositions will increase or
decrease based on the condition of the added knowledge.
A. Forward Chaining
Forward chaining is also known as forward deduction or the
forward reasoning method when using an inference engine.
Forward chaining is a form of reasoning which starts with
atomic sentences in the knowledge base and applies
inference rules (Modus Ponens) in the forward direction to
extract more data until a goal is reached.
The forward-chaining algorithm starts from known facts,
triggers all rules whose premises are satisfied, and adds their
conclusions to the known facts. This process repeats until the
problem is solved.
Properties of Forward-Chaining:
o It is a bottom-up approach, as it moves from the bottom
to the top.
o It is a process of making a conclusion based on known
facts or data, starting from the initial state and reaching
the goal state.
o The forward-chaining approach is also called data-driven,
as we reach the goal using the available data.
o The forward-chaining approach is commonly used in
expert systems, such as CLIPS, and in business and
production rule systems.
Step-2:
At the second step, we will add those facts which can be
inferred from the available facts whose premises are
satisfied.
Rule-(1) does not have all its premises satisfied, so it will not
be applied in the first iteration.
Rules (2) and (3) are already added.
Rule-(4) is satisfied with the substitution {p/T1}, so
Sells(Robert, T1, A) is added, which is inferred from the
conjunction of Rules (2) and (3).
Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A)
is added, which is inferred from Rule-(7).
Step-3:
At step-3, we can check that Rule-(1) is satisfied with the
substitution {p/Robert, q/T1, r/A}, so we can add
Criminal(Robert), which is inferred from all the available
facts. Hence we have reached our goal statement.
Thus it is proved that Robert is a criminal using the
forward-chaining approach.
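The steps above can be sketched in Python using a propositional flattening of the crime example's rules (the full first-order rules are listed in the backward-chaining section of these notes); the string encodings of the predicates are assumptions of this sketch:

```python
# Rules as (premises, conclusion); a propositional flattening of the example.
rules = [
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),                                    # Rule (1)
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),   # Rule (4)
    ({"Missile(T1)"}, "Weapon(T1)"),                         # Rule (5)
    ({"Enemy(A,America)"}, "Hostile(A)"),                    # Rule (6)
]

# Known facts: rules (2), (3), (7), and (8).
facts = {"Owns(A,T1)", "Missile(T1)", "Enemy(A,America)", "American(Robert)"}

def forward_chain(facts, rules):
    """Trigger every rule whose premises are all known and add its
    conclusion to the known facts, repeating until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived
```

Running `forward_chain(facts, rules)` adds Sells(Robert,T1,A), Weapon(T1), and Hostile(A) in the first pass and Criminal(Robert) in the next, mirroring Steps 2 and 3.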
B. Backward Chaining:
Backward chaining is also known as backward deduction or
the backward reasoning method when using an inference
engine.
A backward chaining algorithm is a form of reasoning, which
starts with the goal and works backward, chaining through
rules to find known facts that support the goal.
Properties of backward chaining:
o It is known as a top-down approach.
o Backward-chaining is based on modus ponens inference
rule.
o In backward chaining, the goal is broken into sub-goals
to prove the facts true.
o It is called a goal-driven approach, as the list of goals
decides which rules are selected and used.
o The backward-chaining algorithm is used in game theory,
automated theorem proving tools, inference engines,
proof assistants, and various AI applications.
o The backward-chaining method mostly uses a
depth-first search strategy for proofs.
Example:
In backward chaining, we will use the same example as
above and will rewrite all the rules.
o American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) →
Criminal(p) ...(1)
o Owns(A, T1) ...(2)
o Missile(T1) ...(3)
o Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ...(4)
o Missile(p) → Weapon(p) ...(5)
o Enemy(p, America) → Hostile(p) ...(6)
o Enemy(A, America) ...(7)
o American(Robert) ...(8)
Backward-Chaining proof:
In Backward chaining, we will start with our goal predicate,
which is Criminal(Robert), and then infer further rules.
Step-1:
At the first step, we will take the goal fact, and from the goal
fact we will infer other facts; at last, we will prove those
facts true. Our goal fact is "Robert is a criminal," and the
following is its predicate.
Step-2:
At the second step, we will infer other facts from the goal
fact which satisfy the rules. As we can see in Rule-(1), the
goal predicate Criminal(Robert) is present with the
substitution {p/Robert}. So we will add all the conjunctive
facts below the first level and replace p with Robert.
Here we can see that American(Robert) is a fact, so it is
proved.
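The goal-driven, depth-first search described above can be sketched in Python with the same propositional flattening used for forward chaining; the string encodings are assumptions of this sketch:

```python
# Rules as (premises, conclusion) and known facts, flattened to strings.
rules = [
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),
    ({"Missile(T1)"}, "Weapon(T1)"),
    ({"Enemy(A,America)"}, "Hostile(A)"),
]
facts = {"Owns(A,T1)", "Missile(T1)", "Enemy(A,America)", "American(Robert)"}

def backward_chain(goal, facts, rules):
    """Prove the goal depth-first: a known fact proves itself; otherwise
    find a rule concluding the goal and prove each premise as a sub-goal."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules)
```

`backward_chain("Criminal(Robert)", facts, rules)` succeeds by breaking the goal into the sub-goals of Rule-(1) and proving each one, exactly as in the proof above.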
Uncertainty:
Till now, we have learned knowledge representation using
first-order logic and propositional logic with certainty, which
means we were sure about the predicates. With this
knowledge representation, we might write A→B, which
means if A is true then B is true. But consider a situation
where we are not sure whether A is true or not; then we
cannot express this statement. This situation is called
uncertainty.
So to represent uncertain knowledge, where we are not sure
about the predicates, we need uncertain reasoning or
probabilistic reasoning.
Causes of uncertainty:
Following are some leading causes of uncertainty to occur in
the real world.
1. Information occurred from unreliable sources.
2. Experimental Errors
3. Equipment fault
4. Temperature variation
5. Climate change.
Probabilistic reasoning:
Probabilistic reasoning is a way of knowledge representation
where we apply the concept of probability to indicate the
uncertainty in knowledge. In probabilistic reasoning, we
combine probability theory with logic to handle the
uncertainty.
We use probability in probabilistic reasoning because it
provides a way to handle the uncertainty that is the result of
someone's laziness and ignorance.
In the real world, there are many scenarios where the
certainty of something is not confirmed, such as "it will rain
today," "the behavior of someone in some situation," or "a
match between two teams or two players." These are
probable sentences for which we can assume that they will
happen, but we are not sure about them, so here we use
probabilistic reasoning.
Need of probabilistic reasoning in AI:
o When there are unpredictable outcomes.
o When the specifications or possibilities of predicates
become too large to handle.
o When an unknown error occurs during an experiment.
Example:
In a class, 70% of the students like English and 40% of the
students like both English and mathematics. What percentage
of the students who like English also like mathematics?
Solution:
Let A be the event that a student likes mathematics and B be
the event that a student likes English. Then P(B) = 0.7 and
P(A ∧ B) = 0.4, so
P(A|B) = P(A ∧ B) / P(B) = 0.4 / 0.7 ≈ 0.5714
Hence, about 57% of the students who like English also like
mathematics.
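The conditional-probability computation for this example can be checked with a few lines of Python (the variable names are ours):

```python
p_b = 0.70        # P(B): student likes English
p_a_and_b = 0.40  # P(A and B): student likes both English and mathematics

# Conditional probability: P(A|B) = P(A and B) / P(B)
p_a_given_b = p_a_and_b / p_b
print(round(p_a_given_b * 100))  # about 57 percent
```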
Bayes' theorem:
Bayes' theorem gives the probability of an event A given
evidence B in terms of the inverse conditional probability:
P(A|B) = P(B|A) P(A) / P(B)
where P(A|B) is the posterior probability, P(B|A) is the
likelihood, P(A) is the prior, and P(B) is the marginal
probability of the evidence.
Example-1:
Question: What is the probability that a patient has the
disease meningitis, given that they have a stiff neck?
Given Data:
A doctor is aware that disease meningitis causes a patient to
have a stiff neck, and it occurs 80% of the time. He is also
aware of some more facts, which are given as follows:
o The Known probability that a patient has meningitis
disease is 1/30,000.
o The Known probability that a patient has a stiff neck is
2%.
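Applying Bayes' theorem to the given data, P(m|s) = P(s|m)·P(m) / P(s), can be sketched in Python (the variable names are ours):

```python
p_s_given_m = 0.8   # P(stiff neck | meningitis): 80% of the time
p_m = 1 / 30000     # prior probability P(meningitis)
p_s = 0.02          # P(stiff neck): 2%

# Bayes' theorem: P(meningitis | stiff neck)
p_m_given_s = p_s_given_m * p_m / p_s
print(p_m_given_s)  # about 0.00133, i.e. roughly 1 patient in 750
```

Even given a stiff neck, the posterior probability of meningitis stays very small, because the prior is tiny.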
Decision tree
o Decision Tree is a Supervised learning technique that
can be used for both classification and Regression
problems, but mostly it is preferred for solving
Classification problems. It is a tree-structured classifier,
where internal nodes represent the features of a
dataset, branches represent the decision rules and each
leaf node represents the outcome.
o In a decision tree, there are two types of nodes: the
Decision Node and the Leaf Node. Decision nodes are
used to make a decision and have multiple branches,
whereas leaf nodes are the outputs of those decisions
and do not contain any further branches.
o The decisions or the test are performed on the basis of
features of the given dataset.
o It is a graphical representation for getting all the
possible solutions to a problem/decision based on
given conditions.
o It is called a decision tree because, similar to a tree, it
starts with the root node, which expands on further
branches and constructs a tree-like structure.
o In order to build a tree, we use the CART
algorithm, which stands for Classification and
Regression Tree algorithm.
o A decision tree simply asks a question and, based on the
answer (Yes/No), further splits the tree into subtrees.
o The diagram below explains the general structure of a
decision tree.
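The structure just described can be sketched as a nested dictionary in Python: internal (decision) nodes test a feature, branches encode decision rules, and leaf nodes hold the outcome. The toy tree, its feature names, and its outcomes are made up for illustration:

```python
# A toy decision tree; feature names and outcomes are illustrative only.
tree = {
    "feature": "weather",
    "branches": {
        "sunny": {"leaf": "play"},
        "rainy": {
            "feature": "windy",
            "branches": {
                "yes": {"leaf": "stay home"},
                "no": {"leaf": "play"},
            },
        },
    },
}

def classify(node, sample):
    """Follow the decision rules from the root until a leaf is reached."""
    while "leaf" not in node:
        node = node["branches"][sample[node["feature"]]]
    return node["leaf"]
```

For example, `classify(tree, {"weather": "rainy", "windy": "yes"})` walks the rainy branch, tests the windy node, and returns the leaf outcome "stay home".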
Why use Decision Trees?
There are various algorithms in Machine learning, so
choosing the best algorithm for the given dataset and
problem is the main point to remember while creating a
machine learning model. Below are the two reasons for using
the Decision tree:
o Decision Trees usually mimic human thinking ability
while making a decision, so it is easy to understand.
o The logic behind the decision tree can be easily
understood because it shows a tree-like structure.
1. Information Gain:
o Information gain is the measurement of changes in
entropy after the segmentation of a dataset based on an
attribute.
o It calculates how much information a feature provides us
about a class.
o According to the value of information gain, we split the
node and build the decision tree.
o A decision tree algorithm always tries to maximize the
value of information gain, and the node/attribute having
the highest information gain is split first. It can be
calculated using the formula below:
Information Gain = Entropy(S) − [(Weighted Avg) ×
Entropy(each feature)]
where Entropy(S) = −P(yes) log2 P(yes) − P(no) log2 P(no)
measures the impurity of the set S.
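The entropy and information-gain formulas can be implemented directly in Python (the helper names are ours):

```python
import math

def entropy(labels):
    """Entropy(S) = -sum over classes of p * log2(p)."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def information_gain(labels, groups):
    """Entropy before the split minus the weighted average
    entropy of the groups produced by the split."""
    n = len(labels)
    weighted = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - weighted

# A split that separates the two classes perfectly removes all entropy,
# so its information gain equals the original entropy of the set.
gain = information_gain(["yes", "yes", "no", "no"],
                        [["yes", "yes"], ["no", "no"]])
```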
2. Gini Index:
o Gini index is a measure of impurity or purity used while
creating a decision tree in the CART(Classification and
Regression Tree) algorithm.
o An attribute with a low Gini index should be preferred
over one with a high Gini index.
o It only creates binary splits, and the CART algorithm
uses the Gini index to create them.
o The Gini index can be calculated using the formula below:
Gini Index = 1 − Σj Pj²
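The Gini formula is a one-liner in Python (the function name is ours):

```python
def gini_index(labels):
    """Gini = 1 - sum of squared class proportions; 0 means a pure node."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

# A pure node scores 0; an even 50/50 split scores 0.5,
# the maximum impurity for two classes.
```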
Common sense
In artificial intelligence (AI), commonsense reasoning is a
human-like ability to make presumptions about the type and
essence of ordinary situations humans encounter every day.
These assumptions include judgments about the nature of
physical objects, taxonomic properties, and peoples'
intentions.
Some definitions and characterizations of common sense
from different authors include:
● "Commonsense knowledge includes the basic facts
about events (including actions) and their effects,
facts about knowledge and how it is obtained, facts
about beliefs and desires. It also includes the basic
facts about material objects and their properties."
● "Commonsense knowledge differs from encyclopedic
knowledge in that it deals with general knowledge
rather than the details of specific entities."
● Commonsense knowledge is "real world knowledge
that can provide a basis for additional knowledge to
be gathered and interpreted automatically".
● The commonsense world consists of "time, space,
physical interactions, people, and so on".
● Common sense is "all the knowledge about the world
that we take for granted but rarely state out loud".
● Common sense is "broadly reusable background
knowledge that's not specific to a particular subject
area... knowledge that you ought to have."
What is a Plan?
We require a domain description, a task specification, and a
goal description for any planning system. A plan is a
sequence of actions; each action has preconditions that must
be satisfied before it can be executed, and effects that can
be positive or negative.
So, we have Forward State Space Planning
(FSSP) and Backward State Space Planning (BSSP) at the
basic level.