AI All Units
• Perfect information: A game with perfect information is one in which the agents can see the complete board. The agents have all the information about the game and can also see each other's moves. Examples are Chess, Checkers, Go, etc.
• Imperfect information: If the agents in a game do not have all the information about the game and are not aware of everything that is going on, it is called a game with imperfect information, such as Battleship, blind chess (Kriegspiel), Bridge, etc.
• Deterministic games: Deterministic games are games that follow a strict pattern and set of rules, with no randomness associated with them. Examples are Chess, Checkers, Go, tic-tac-toe, etc.
• Non-deterministic games: Non-deterministic games are games that have various unpredictable events and a factor of chance or luck, introduced by dice or cards. They are random, and the outcome of each action is not fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.
Zero-Sum Game
• Zero-sum games are adversarial search problems that involve pure competition.
• In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of the other agent.
• One player tries to maximize a single value, while the other player tries to minimize it.
• Each half-move by one player in the game is called a ply.
• Chess and tic-tac-toe are examples of zero-sum games.
One move deep: If a game ends after one MAX move and one MIN move, we say the tree is one move deep, consisting of two half-moves, each of which is called a ply.
Minimax value: A node's minimax value is the utility (for MAX) of being in the corresponding state, assuming that both players play optimally from there until the end of the game. The minimax value of a terminal state is simply its utility.
Given a game tree, the optimal strategy can be computed using the minimax value of each node,
i.e., MINIMAX (n).
MAX likes to get to the highest value state, whereas MIN prefers to go to the lowest value state.
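To make the computation concrete, here is a minimal Python sketch of minimax over a game tree given as nested lists, where leaves hold terminal utilities; the tree shape and values are purely illustrative, not taken from the text.

    def minimax(node, is_max):
        """Return the minimax value of a node.

        A node is either a number (terminal utility for MAX) or a list of
        child nodes. `is_max` is True when it is MAX's turn to move.
        """
        if not isinstance(node, list):          # terminal state: utility is its value
            return node
        values = [minimax(child, not is_max) for child in node]
        return max(values) if is_max else min(values)

    # Two-ply example tree: MAX to move at the root, MIN at the inner nodes.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(minimax(tree, True))                  # -> 3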
Fig 1: Two ply game tree
Multiplayer games
Each node's single value must be replaced by a vector of values. In a three-player game with players A, B, and C, for example, a vector ⟨vA, vB, vC⟩ is associated with each node.
For terminal states, this vector gives the utility of the state from each player's perspective.
In nonterminal states, the backed-up value of a node n is the utility vector of the successor state with the highest value for the player choosing at n.
Step 3: The algorithm now backtracks to node B, where the value of β will change, as this is MIN's turn. Now β = +∞ is compared with the value of the available successor node, i.e. min(∞, 3) = 3; hence at node B, α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down to it.
Step 4: At node E, MAX takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, and the algorithm does not traverse it; the value at node E is 5.
Step 5: In the next step, the algorithm backtracks the tree again, from node B to node A. At node A, the value of alpha is changed to the maximum available value, which is 3, since max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, which is 0, giving max(3, 0) = 3, and then with the right child, which is 1, giving max(3, 1) = 3. α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire subtree of G.
Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final game tree shows the nodes that were computed and the nodes that were never computed. Hence, the optimal value for the maximizer is 3 in this example.
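The same walk-through can be reproduced with a short alpha-beta routine. This is only a sketch, assuming the same nested-list tree representation as in the minimax example above; pruned branches are simply never visited.

    import math

    def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
        """Minimax with alpha-beta pruning over a nested-list game tree."""
        if not isinstance(node, list):       # terminal node: return its utility
            return node
        if is_max:
            value = -math.inf
            for child in node:
                value = max(value, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:            # beta cut-off: MIN will never allow this branch
                    break
            return value
        else:
            value = math.inf
            for child in node:
                value = min(value, alphabeta(child, True, alpha, beta))
                beta = min(beta, value)
                if alpha >= beta:            # alpha cut-off: MAX will never allow this branch
                    break
            return value

    # Same two-ply tree as before; the answer matches plain minimax.
    print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]], True))   # -> 3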
Move Ordering in Alpha-Beta pruning:
The effectiveness of alpha-beta pruning is highly dependent on the order in which the nodes are examined. Move ordering is therefore an important aspect of alpha-beta pruning.
It can be of two types:
● Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the leaves of the tree and works exactly like the minimax algorithm. It also consumes more time because of the alpha-beta bookkeeping; such an ordering is called the worst ordering. In this case, the best move occurs on the right side of the tree. The time complexity for such an ordering is O(b^m).
● Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. We apply DFS, so it searches the left of the tree first and goes twice as deep as the minimax algorithm in the same amount of time. The complexity for ideal ordering is O(b^(m/2)).
Simulation - In this step, a simulation is carried out by selecting moves or strategies until a desired result or condition is reached.
Backpropagation - After the value of the newly inserted node has been determined, the remaining tree must be updated. The backpropagation process is therefore carried out, in which the result is propagated from the new node back to the root node. The number of simulations stored in each node is incremented during this procedure, and if the new node's simulation results in a win, the number of wins is also incremented.
The actions outlined above can be visualised using the graphic below:
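A minimal sketch of the backpropagation step in code, assuming each tree node stores a parent pointer, a visit count, and a win count (the class layout is an illustrative assumption, not a prescribed implementation):

    class Node:
        def __init__(self, parent=None):
            self.parent = parent
            self.visits = 0      # number of simulations passing through this node
            self.wins = 0        # number of those simulations that ended in a win

    def backpropagate(node, won):
        """Propagate one simulation result from a new leaf back to the root."""
        while node is not None:
            node.visits += 1
            if won:
                node.wins += 1
            node = node.parent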
These algorithms are especially beneficial in turn-based games like Tic Tac Toe, Connect 4,
Checkers, Chess, Go, and others where there is no element of chance in the game rules. Artificial
Intelligence Programs, such as AlphaGo, have lately employed this to compete against the world's
best Go players. However, its use isn't confined to video games. It can be applied to any situation
with state-action pairings and simulations to predict outcomes.
Application
Economic theory, evolutionary biology, and computer networks all use stochastic games. Stochastic games generalize repeated games, which correspond to the special case of a single state.
A partially observable stochastic game (POSG) is a tuple ⟨I, S, {b^0}, {A^i}, {O^i}, P, {R^i}⟩, where
● I is a finite set of agents (or controllers) indexed 1, . . . , n.
● S is a finite set of states.
● b^0 is the initial state distribution.
● A^i is a finite set of actions available to agent i, and O^i is a finite set of observations for agent i.
● P is a set of state transition and observation probabilities.
● R^i : S × A → ℝ is the reward function for agent i.
A game unfolds over a finite or infinite number of stages, and the number of stages is referred to as the game's horizon. We focus on finite-horizon POSGs here; some of the issues involved in solving the infinite-horizon case are highlighted at the end. At each stage, all agents simultaneously choose an action and receive a reward and an observation. Each agent's goal is to maximize the expected sum of rewards it receives during the game. The reward functions determine whether the agents compete or cooperate in their quest for reward. A decentralized partially observable Markov decision process is the special case in which all agents share the same reward function.
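For concreteness, the components of such a tuple can be captured in a small data structure. The field names and dictionary layout below are illustrative assumptions, not part of any standard library:

    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class POSG:
        agents: List[int]                      # I: agent indices 1..n
        states: List[str]                      # S: finite set of states
        initial_belief: Dict[str, float]       # b^0: distribution over states
        actions: Dict[int, List[str]]          # A^i: actions per agent
        observations: Dict[int, List[str]]     # O^i: observations per agent
        transitions: Dict[Tuple[str, Tuple[str, ...]], Dict[str, float]]   # P(s' | s, joint action)
        rewards: Dict[int, Dict[Tuple[str, Tuple[str, ...]], float]]       # R^i(s, joint action)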
The aim is to choose a value for each variable so that the resulting possible world satisfies the
constraints; we want a model of the constraints.
A finite CSP has a finite set of variables and a finite domain for each variable. Many of the
methods considered in this chapter only work for finite CSPs, although some are designed for
infinite, even continuous, domains.
The multidimensional aspect of these problems, where each variable can be seen as a separate
dimension, makes them difficult to solve but also provides structure that can be exploited.
Given a CSP, there are a number of tasks that can be performed:
● Determine whether or not there is a model.
● Find a model.
● Find all of the models or enumerate the models.
● Count the number of models.
● Find the best model, given a measure of how good models are.
● Determine whether some statement holds in all models.
This mostly considers the problem of finding a model. Some of the methods can also determine if
there is no solution. What may be more surprising is that some of the methods can find a model if
one exists, but they cannot tell us that there is no model if none exists.
CSPs are very common, so it is worth trying to find relatively efficient ways to solve them.
Determining whether there is a model for a CSP with finite domains is NP-hard and no known
algorithms exist to solve such problems that do not use exponential time in the worst case.
However, just because a problem is NP-hard does not mean that all instances are difficult to solve.
Many instances have structure that can be exploited.
Local consistency: If we treat each variable as a node in a graph and each binary constraint as an
arc, the act of enforcing local consistency in each area of the graph eliminates inconsistent values
across the board.
Different types of local consistency exist:
Node consistency
● If all of the values in the variable's domain fulfil the variable's unary constraint, the variable is
node-consistent (a node in the CSP network).
● If every variable in a network is node-consistent, we call it a node-consistent network.
Arc consistency
● A variable is arc-consistent in a CSP if every value in its domain satisfies the variable's binary constraints.
● Xi is arc-consistent with regard to another variable Xj if, for every value in the current domain Di, there is some value in the domain Dj that satisfies the binary constraint on the arc (Xi, Xj).
● If every variable in a network is arc-consistent with every other variable, it is said to be arc-consistent.
● Arc consistency tightens down the domains (unary constraints) using the arcs (binary constraints).
AC-3 algorithm
Function AC-3(csp) returns false if an inconsistency is found and true otherwise
Inputs: csp, a binary CSP with components (X, D, C)
Local variables: queue, a queue of arcs, initially all the arcs in csp
While queue is not empty do
  (Xi, Xj) ← REMOVE-FIRST(queue)
  If REVISE(csp, Xi, Xj) then
    If size of Di = 0 then return false
    For each Xk in Xi.NEIGHBORS – {Xj} do
      Add (Xk, Xi) to queue
Return true

Function REVISE(csp, Xi, Xj) returns true iff we revise the domain of Xi
Revised ← false
For each x in Di do
  If no value y in Dj allows (x, y) to satisfy the constraint between Xi and Xj then
    Delete x from Di
    Revised ← true
Return revised
This is the arc consistency algorithm AC-3.
● AC-3 keeps an arc queue that initially contains all of the arcs in the CSP.
● Then, from the queue, AC-3 selects an arbitrary arc (Xi, Xj) and makes Xi arc-consistent with
respect to Xj.
● If Di remains unchanged, continue on to the next arc; if Di is revised, add any arcs (Xk, Xi)
where Xk is a neighbour of Xi to the queue.
● If Di is reduced to zero, the entire CSP will collapse since there is no consistent answer.
● Otherwise, keep checking and removing values from variable domains until there are no more
arcs in the queue.
● As a result, an arc-consistent CSP with smaller domains has the same solutions as the original.
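A compact Python rendering of AC-3 along the lines described above, assuming the CSP is supplied as per-variable domains, directed binary constraints given as predicates, and a neighbour map (this data layout is an assumption made for illustration):

    from collections import deque

    def ac3(domains, constraints, neighbors):
        """domains:     {var: set(values)}
           constraints: {(Xi, Xj): predicate(x, y)} for each directed arc
           neighbors:   {var: set of variables sharing a constraint with it}
           Returns False if some domain is wiped out, True otherwise."""
        queue = deque(constraints.keys())              # initially all arcs in the CSP
        while queue:
            xi, xj = queue.popleft()
            if revise(domains, constraints, xi, xj):
                if not domains[xi]:                    # empty domain: no consistent solution
                    return False
                for xk in neighbors[xi] - {xj}:        # re-check arcs pointing at Xi
                    queue.append((xk, xi))
        return True

    def revise(domains, constraints, xi, xj):
        """Remove values of Xi that have no supporting value in Dj."""
        revised = False
        check = constraints[(xi, xj)]
        for x in set(domains[xi]):
            if not any(check(x, y) for y in domains[xj]):
                domains[xi].discard(x)
                revised = True
        return revised

    # Example: constraint X < Y over small domains.
    doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
    cons = {("X", "Y"): lambda x, y: x < y, ("Y", "X"): lambda y, x: x < y}
    nbrs = {"X": {"Y"}, "Y": {"X"}}
    print(ac3(doms, cons, nbrs), doms)   # -> True {'X': {1, 2}, 'Y': {2, 3}}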
Path consistency
● A two-variable set Xi, Xj is path-consistent with regard to a third variable Xm if there is an
assignment to Xm that fulfils the constraints on Xi, Xm, and Xm, Xj for every assignment Xi = a,
Xj = b consistent with the constraint on Xi, Xj.
● The binary restrictions are tightened by path consistency, which uses implicit constraints inferred
from triples of variables.
K-consistency
● A CSP is k-consistent if a consistent value can always be assigned to any kth variable for any
collection of k-1 variables and for any consistent assignment to those variables.
● 1-consistency = node consistency; 2-consistency = arc consistency; 3-consistency = path
consistency.
● If a CSP is k-consistent and also (k – 1)-consistent, (k – 2)-consistent, ... all the way down to 1-consistent, it is strongly k-consistent.
● We can ensure that if we take a CSP with n nodes and make it strongly n-consistent, we will discover a solution in time O(n^2 d). However, in the worst case, the procedure for establishing n-consistency must take time exponential in n and demands space exponential in n.
Global constraints
A global constraint is one involving an arbitrary number of variables (but not necessarily all variables). Global constraints can be handled by special-purpose algorithms that are more efficient than general-purpose approaches.
1) inconsistency detection for Alldiff constraints
A straightforward algorithm is as follows: remove any variable with a singleton domain from the constraint and delete its value from the domains of the remaining variables. Repeat for as long as singleton variables exist. An inconsistency has been discovered if an empty domain is produced at any stage or if there are more variables than domain values left.
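A small sketch of this propagation rule in Python, assuming domains are given as sets per variable; the variable names in the example are illustrative:

    def alldiff_inconsistent(domains):
        """Return True if the Alldiff constraint over `domains` is provably inconsistent."""
        domains = {v: set(d) for v, d in domains.items()}   # work on a copy
        changed = True
        while changed:
            changed = False
            singletons = [v for v, d in domains.items() if len(d) == 1]
            for var in singletons:
                if var not in domains or len(domains[var]) != 1:
                    continue                                # may have changed this pass
                (value,) = domains.pop(var)                 # drop the singleton variable
                for dom in domains.values():
                    dom.discard(value)                      # delete its value elsewhere
                changed = True
        if any(len(dom) == 0 for dom in domains.values()):
            return True                                     # some domain was wiped out
        remaining_values = set().union(*domains.values()) if domains else set()
        return len(domains) > len(remaining_values)         # more variables than values left

    # WA, NT and SA all restricted to {red, blue}: three variables, two values -> inconsistent.
    print(alldiff_inconsistent({"WA": {"r", "b"}, "NT": {"r", "b"}, "SA": {"r", "b"}}))   # True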
Applying arc consistency to an analogous collection of binary constraints is sometimes more
effective than using a simple consistency approach for a higher-order constraint.
Backtracking search: A depth-first search that assigns values to one variable at a time and then
backtracks when there are no more legal values to assign to that variable.
The backtracking algorithm selects an unassigned variable each time and then attempts all of the
values in that variable's domain in order to discover a solution. If an inconsistency is found,
BACKTRACK fails, leading the previous call to try a different value.
There is no need to provide a domain-specific initial state, action function, transition model, or goal
test to BACKTRACKING-SEARCH.
BACKTRACKING-SEARCH maintains only one representation of a state and modifies it rather
than producing new ones.
Function BACKTRACKING-SEARCH(csp) returns a solution, or failure
  Return BACKTRACK({ }, csp)

Function BACKTRACK(assignment, csp) returns a solution, or failure
  If assignment is complete then return assignment
  Var ← SELECT-UNASSIGNED-VARIABLE(csp)
  For each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
    If value is consistent with assignment then
      Add {var = value} to assignment
      Inferences ← INFERENCE(csp, var, value)
      If inferences ≠ failure then
        Add inferences to assignment
        Result ← BACKTRACK(assignment, csp)
        If result ≠ failure then return result
      Remove {var = value} and inferences from assignment
  Return failure
This is the simple backtracking algorithm for CSPs.
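A runnable Python rendering of the same scheme, assuming variables with explicit domains and a consistency check supplied as a predicate; variable/value ordering heuristics and inference are left out to keep the sketch minimal:

    def backtracking_search(variables, domains, consistent):
        """variables:  list of variable names
           domains:    {var: list of candidate values}
           consistent: function(assignment) -> bool for the partial assignment
           Returns a complete assignment dict, or None on failure."""
        def backtrack(assignment):
            if len(assignment) == len(variables):
                return assignment                      # every variable assigned
            var = next(v for v in variables if v not in assignment)   # plain selection order
            for value in domains[var]:
                assignment[var] = value
                if consistent(assignment):
                    result = backtrack(assignment)
                    if result is not None:
                        return result
                del assignment[var]                    # undo and try the next value
            return None
        return backtrack({})

    # Tiny map-colouring instance: WA, NT, SA must all differ.
    neighbours = [("WA", "NT"), ("WA", "SA"), ("NT", "SA")]
    ok = lambda a: all(a[x] != a[y] for x, y in neighbours if x in a and y in a)
    doms = {v: ["red", "green", "blue"] for v in ["WA", "NT", "SA"]}
    print(backtracking_search(["WA", "NT", "SA"], doms, ok))
    # -> {'WA': 'red', 'NT': 'green', 'SA': 'blue'}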
Address the following questions to solve CSPs efficiently without domain-specific knowledge:
1) SELECT-UNASSIGNED-VARIABLE: Which variable should be assigned next?
ORDER-DOMAIN-VALUES: In what order should its values be tried?
2) INFERENCE: What inferences should be performed at each step of the search?
3) Can the search avoid repeating its failure when it comes across an assignment that violates a constraint?
SELECT-UNASSIGNED-VARIABLE
Variable selection—fail-first
Minimum-remaining-values (MRV) heuristic: The concept of selecting the variable having the
smallest number of "legal" values. The "most constrained variable" or "fail-first" heuristic selects a
variable that is most likely to cause a failure soon, allowing the search tree to be pruned. If a
variable X has no more permissible values, the MRV heuristic will select X, and failure will be
noticed instantly, avoiding the need for fruitless searches through other variables.
For example, since there is only one possible value for SA after WA=red and NT=green, it makes sense to assign SA=blue next rather than Q.
Degree heuristic: By selecting the variable that is implicated in the greatest number of constraints
on other unassigned variables, the degree heuristic seeks to lower the branching factor on future
decisions.
SA, for example, has the highest degree of 5; the other variables have degrees of 2 or 3; and T has a
degree of 0.
ORDER-DOMAIN-VALUES
Value selection—fail-last
If we're looking for all possible solutions to a problem (rather than simply the first one), the order
doesn't matter.
Least-constraining-value heuristic: chooses the value that eliminates the fewest options for the
constraint graph's surrounding variables. (Leave as much freedom as possible for subsequent
variable assignments.)
For example, suppose we have generated the partial assignment WA=red and NT=green, and Q is our next choice. Blue would be a bad choice, because it eliminates the last legal value left for Q's neighbour SA; the least-constraining-value heuristic therefore prefers red to blue.
In a backtracking search, the minimum-remaining-values and degree heuristics are domain-
independent approaches for determining which variable to choose next. The least-constraining-
value heuristic aids in determining which value for a given variable to try first.
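Both ideas are easy to express as ordering functions. The sketch below assumes the same domains/neighbours representation as the earlier CSP examples and a conflicts(var, value, other, other_value) predicate supplied by the caller; all names are illustrative:

    def select_unassigned_variable(assignment, domains, neighbours):
        """MRV: fewest remaining legal values; the degree heuristic breaks ties."""
        unassigned = [v for v in domains if v not in assignment]
        return min(unassigned,
                   key=lambda v: (len(domains[v]),                  # fewest legal values first
                                  -sum(1 for n in neighbours[v]     # then most constraining
                                       if n not in assignment)))

    def order_domain_values(var, assignment, domains, neighbours, conflicts):
        """Least-constraining-value: prefer values that rule out the fewest
        choices for the neighbouring unassigned variables."""
        def ruled_out(value):
            return sum(1 for n in neighbours[var] if n not in assignment
                         for w in domains[n] if conflicts(var, value, n, w))
        return sorted(domains[var], key=ruled_out)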
Advantage: Combining the MRV heuristic with forward checking will make the search more
effective for many issues.
Disadvantages: Forward checking only makes the current variable arc-consistent, not all the other
variables.
MAC (Maintaining Arc Consistency) algorithm: [Detects inconsistencies more effectively than forward checking.] After a variable Xi is assigned a value, the INFERENCE function calls AC-3, but instead of a queue of all arcs in the CSP, we start with only the arcs (Xj, Xi) for those Xj that are unassigned variables neighbouring Xi. The call to AC-3 then proceeds as usual, except that if any variable's domain is reduced to the empty set, the call to AC-3 fails and we know to backtrack immediately.
3. Intelligent backtracking
Fig 5: (a) Principal states and territories of Australia. Colouring this map can be viewed as CSP. (b)
the map coloring problem represented as a constraint graph
Intelligent backtracking: backtrack to a variable that was responsible for making one of the possible values of the next variable (e.g. SA) impossible.
Conflict set: the set of assignments for a variable that are in conflict with some value for that variable. (For example, the conflict set for SA is {Q=red, NSW=green, V=blue}.)
The backjumping method backtracks to the most recent assignment in the conflict set. (For example, backjumping would jump over T and try a new value for V.)
Forward checking can supply the conflict set with no extra work.
Add X=x to Y's conflict set whenever forward checking based on an assignment X=x deletes a
value from Y's domain.
If the last value in Y's domain is removed, the assignments in Y's conflict set are added to X's
conflict set.
In reality, forward checking prunes every branch cut by backjumping. In a forward-checking search
or a search that uses stronger consistency checking, basic backjumping is thus superfluous (such as
MAC).
The graphic above shows a general design for a knowledge-based agent. The knowledge-based
agent (KBA) collects input from the environment by monitoring it. The agent's inference engine
processes the data and communicates with KB to make judgments based on the knowledge
contained in KB. The learning component of KBA keeps the knowledge base current by learning
new material.
Q2) Describe the wumpus world?
A2) The Wumpus world is a simple world example that demonstrates the value of a knowledge-based agent and illustrates how knowledge can be represented and reasoned with. It was inspired by Gregory Yob's 1973 computer game Hunt the Wumpus.
The Wumpus world is a cave consisting of a 4 x 4 grid of rooms connected by passageways, so there are a total of 16 interconnected rooms. We have a knowledge-based agent that can advance through this world. One room in the cave contains a beast named the Wumpus, which eats anyone who enters. The agent can shoot the Wumpus, but it has only a single arrow. Some rooms in the Wumpus world contain bottomless pits, and if an agent falls into one, it will be stuck there indefinitely.
The intriguing thing about this cave is that there is a chance of finding a gold heap in one of the
rooms. So the agent's mission is to find the gold and get out of the cave without getting eaten by
Wumpus or falling into Pits. If the agent returns with gold, he will be rewarded, but if he is
devoured by Wumpus or falls into the pit, he will be penalised.
A sample diagram for portraying the Wumpus planet is shown below. It depicts some rooms with
Pits, one room with Wumpus, and one agent in the world's (1, 1) square position.
There are some elements that can assist the agent in navigating the cave. The following are the
components:
● The rooms adjacent to the Wumpus room are stinky, thus there is a stench there.
● The room next to PITs has a breeze, so if the agent gets close enough to PIT, he will feel it.
● If and only if the room contains gold, there will be glitter.
● If the agent is facing the Wumpus, the agent can kill it, and the Wumpus will cry horribly, which
can be heard throughout the cave.
Environment:
● There are 16(4x4) rooms in this cave.
● The rooms adjacent to the Wumpus (but not diagonally) stink.
● The rooms adjacent to a pit (but not diagonally) are breezy.
● The gold-studded chamber gleams.
● Agent's first position is in Room[1, 1], facing the right side.
● Wumpus, gold, and the three pits can be found anywhere except in Room[1, 1].
Actuators:
Devices that enable the agent to carry out the operations listed below in the surroundings.
● Move forward
● Turn right
● Turn left
● Shoot
● Grab
● Release
Sensors:
Devices that assist the agent in detecting the following from their surroundings.
● Breeze
● Stench
● Glitter
● Scream (When the Wumpus is killed)
● Bump (when the agent hits a wall)
Example of logic
Language of numerical constraints:
● A sentence:
x + 3 ≤ z
where x, z are variable symbols (primitives in the language)
● An interpretation:
I: x = 5, z = 2
Variables are mapped to specific real numbers
Fig 2: Logic
Types of logic
● Different types of logics possible
● Propositional logic
● First-order logic
● Temporal logic
● Numerical constraints logic
● Map-coloring logic
Example:
a) It is Sunday.
b) The Sun rises in the West. (False proposition)
c) 3 + 3 = 7 (False proposition)
d) 5 is a prime number.
1. Atomic Proposition: Atomic propositions are simple propositions. Each consists of a single proposition symbol. These are sentences that must be either true or false.
Example:
a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.
b) "The Sun is cold" is also a proposition, as it is a false fact.
Logical Connectives:
Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives, which are given as follows:
1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal.
2. Conjunction: A sentence which has a ∧ connective, such as P ∧ Q, is called a conjunction.
Example: Rohan is intelligent and hardworking. It can be written as
P = Rohan is intelligent, Q = Rohan is hardworking → P ∧ Q.
3. Disjunction: A sentence which has a ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q, is called an implication. Implications are also known as
if-then rules. It can be represented as
If it is raining, then the street is wet.
Let P= It is raining, and Q= Street is wet, so it is represented as P → Q
5. Biconditional: A sentence such as P⇔ Q is a Biconditional sentence, example If I am breathing,
then I am alive
P= I am breathing, Q= I am alive, it can be represented as P ⇔ Q.
Truth Table:
In propositional logic, we need to know the truth values of propositions in all possible scenarios.
We can combine all the possible combination with logical connectives, and the representation of
these combinations in a tabular format is called Truth table. Following are the truth table for all
logical connectives:
For negation
For conjunction
For disjunction
For implication
For Biconditional
Truth table with three propositions:
We can build a proposition composed of three propositions P, Q, and R. Its truth table is made up of 2^3 = 8 rows, as we have taken three proposition symbols.
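One quick way to enumerate such a table is to loop over all 2^3 combinations; the particular compound sentence used below is only an example:

    from itertools import product

    # Truth table for (P ∧ Q) → R over all 2**3 = 8 rows.
    print("P     Q     R     (P∧Q)→R")
    for p, q, r in product([True, False], repeat=3):
        value = (not (p and q)) or r          # material implication: ¬(P∧Q) ∨ R
        print(f"{p!s:<6}{q!s:<6}{r!s:<6}{value}")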
Precedence of connectives:
Just like arithmetic operators, there is a precedence order for propositional connectors or logical
operators. This order should be followed while evaluating a propositional problem. Following is
the list of the precedence order for operators:
Note: For better understanding use parenthesis to make sure of the correct interpretations. Such as
¬R∨ Q, It can be interpreted as (¬R) ∨ Q.
Logical equivalence:
Logical equivalence is one of the features of propositional logic. Two propositions are said to be
logically equivalent if and only if the columns in the truth table are identical to each other.
Let's take two propositions A and B; for logical equivalence, we can write A ⇔ B. In the truth table below, we can see that the columns for ¬A ∨ B and A → B are identical; hence ¬A ∨ B is equivalent to A → B.
Properties of Operators:
Commutativity:
• P∧ Q= Q ∧ P, or
• P ∨ Q = Q ∨ P.
Associativity:
• (P ∧ Q) ∧ R= P ∧ (Q ∧ R),
• (P ∨ Q) ∨ R= P ∨ (Q ∨ R)
Identity element:
• P ∧ True = P,
• P ∨ True= True.
Distributive:
• P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
• P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
DE Morgan's Law:
• ¬ (P ∧ Q) = (¬P) ∨ (¬Q)
• ¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
Double-negation elimination:
• ¬ (¬P) = P.
Rules of Inference
Remember that proofs, or chains of accurate inferences, can be created using a variety of inference
procedures.
Modus ponens, for example, is an inference rule that states:
A, A => B
--------- modus ponens
B
This rule states that if you're given a sentence A and a sentence A => B, you can infer B. For
example, and-elimination is a set of rules that states:
A&B
----- and-elimination
A
A&B
-----
B
These two rules encode the (obvious!) fact that if the sentence A & B is true, then both A and B are individually true.
Inference rules can also be used to express logical equivalences, such as:
A <==> B
---------------
(A=>B) & (B=>A)
There are many different inference rules, and selecting an acceptable set of rules for a theorem-
prover is crucial.
● The application of a rule of inference can be thought of as an action taken on the state of the
world, in which the rule of inference adds more facts to the knowledge base.
● If there are several inference rules to pick from, knowledge (i.e. heuristics) is required to assist in
the decision-making process.
○ Alternatively, we could use backtracking, which entails picking a rule at random, applying it,
and continuing until it "appears" that no proof is being achieved.
● Furthermore, how do we know that the inference rules we're employing are complete, i.e., how
do we know that if a proof exists, our set of inference rules will locate it?
○ Is it enough to prove any entailment in propositional logic if the two and-elimination rules are
our only rules of inference?
■ no!
■ for example, (P | P) -> P is clearly true, but and-elimination doesn’t apply to this sentence
DPLL incorporates three enhancements over the TT-ENTAILS scheme: early termination, the pure symbol heuristic, and the unit clause heuristic.
Component analysis, variable and value ordering, intelligent backtracking, random restarts, and
creative indexing are some of the tricks that SAT solvers use to scale up to huge problems.
This is the WALKSAT algorithm for checking satisfiability by randomly flipping the value of
variables
Every iteration, the algorithm selects an unsatisfied sentence and alternates between two methods
for selecting a symbol to flip:
Either a. a "min-conflicts" step that reduces the number of unsatisfied clauses in the new state; or b.
a "min-conflicts" step that reduces the number of unsatisfied clauses in the new state.
Alternatively, there is a "random walk" step that selects the symbol at random.
The input sentence is satisfied when the algorithm returns a model.
There are two possible reasons for the algorithm's failure:
Either a. The sentence is unsatisfiable;
Or b. We need to give the algorithm more time.
If we set max_flips = ∞ and p > 0, the algorithm will:
Either a. eventually return a model, if one exists;
Or b. never terminate, if the sentence is unsatisfiable.
As a result, WALKSAT is useful when we expect a solution to exist, but it cannot detect unsatisfiability.
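A compact WalkSAT sketch, assuming clauses are encoded as lists of signed integers (a positive integer stands for a symbol, a negative one for its negation); the parameter names follow the description above and the example formula is illustrative:

    import random

    def walksat(clauses, p=0.5, max_flips=10000):
        """Return a satisfying assignment {symbol: bool}, or None if none was found."""
        symbols = {abs(lit) for clause in clauses for lit in clause}
        model = {s: random.choice([True, False]) for s in symbols}
        satisfied = lambda c: any(model[abs(l)] == (l > 0) for l in c)
        for _ in range(max_flips):
            unsatisfied = [c for c in clauses if not satisfied(c)]
            if not unsatisfied:
                return model                          # all clauses satisfied
            clause = random.choice(unsatisfied)
            if random.random() < p:
                s = abs(random.choice(clause))        # random-walk step
            else:                                     # min-conflicts step
                def unsat_after_flip(sym):
                    model[sym] = not model[sym]
                    count = sum(not satisfied(c) for c in clauses)
                    model[sym] = not model[sym]
                    return count
                s = min((abs(l) for l in clause), key=unsat_after_flip)
            model[s] = not model[s]
        return None                                   # failure: unsatisfiable OR needs more flips

    # (A ∨ B) ∧ (¬A ∨ B) ∧ (A ∨ ¬B) is satisfied by A = B = True.
    print(walksat([[1, 2], [-1, 2], [1, -2]]))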
The notation CNF_k(m, n) denotes a k-CNF sentence with m clauses and n symbols (i.e., n variables and k literals per clause).
To generate a random sentence, choose clauses uniformly, independently, and without replacement from among all clauses with k distinct literals, each of which is positive or negative at random.
Hardness: problems right at the threshold > over constrained problems > under constrained
problems
Fig 3: (a) graph showing the probability that random 3-CNF sentence with n = 50 symbols is
satisfiable. (b) graph of the median run time on random 3-CNF sentences.
Satisfiability threshold conjecture: the conjecture that for every k ≥ 3 there is a threshold ratio rk such that, as n approaches infinity, the probability that CNF_k(n, rn) is satisfiable becomes 1 for all values of r below the threshold and 0 for all values of r above the threshold.
Fluent - A fluent refers to a changing aspect of the world (for example, L^t_{x,y}, the agent's location at time t).
Atemporal variables - Symbols that represent eternal characteristics of the world do not require a
time superscript.
Effect Axioms - define the result of an action at the following time step.
Frame problem - Because the effect axioms fail to define what remains unchanged as a result of
an action, significant information is lost.
Solution: explicitly assert all the premises that remain the same by adding frame axioms.
Representation frame problem: In a universe with m distinct actions and n fluents, the
proliferation of frame axioms is inefficient; the set of frame axioms will be O(mn).
Solution: Exploit locality — in most worlds, each action typically changes no more than some small number k of the fluents. Instead of a set of frame axioms of size O(mn), define the transition model with a set of axioms of size O(mk).
Inferential frame problem: the problem of projecting the consequences of a t-step plan of action forward in time O(kt) rather than O(nt).
Solution: Change your focus from writing axioms about actions to writing axioms about fluents as
a solution.
We will have an axiom for each fluent F that describes the truth value of F^(t+1) in terms of the fluents at time t and the actions that may have occurred at time t.
2. A hybrid agent
Hybrid agent: mixes condition-action rules and problem-solving algorithms to determine various
aspects of the state of the world.
The agent maintains and updates a KB as well as a current plan.
The initial KB contains the atemporal axioms (those that do not depend on t).
At each time step, a new percept sentence is added, along with all the axioms that depend on t (such as the successor-state axioms).
The agent then employs logical inference by ASKING the KB questions (to work out which
squares are safe and which have yet to be visited).
The agent program's main body creates a plan based on a diminishing priority of goals:
1. If there is a glint, devise a strategy for grabbing the gold, returning to the original place, and
climbing out of the cave;
2. If there is no current plan, plan a route (using A* search) to the nearest safe square that has not
yet been visited, ensuring that the path passes through only safe squares;
3. If there are no safe squares to investigate and you still have an arrow, try shooting at one of the
available wumpus spots to create a safe square.
4. If this doesn't work, look for a square to explore that isn't shown to be dangerous.
5. If the job is difficult because there is no such square, return to the original spot and climb out of
the cave.
If ASK(KB, Glitter^t) = true, the agent plans to grab the gold, return to the start square through safe squares only, and climb out of the cave, and it returns the first action of that plan.

Function PLAN-ROUTE(current, goals, allowed) returns an action sequence
Inputs: current, the agent's current position
  goals, a set of squares; try to plan a route to one of them
  allowed, a set of squares that can form part of the route
The proposition symbols linked with the current time step and the temporal symbols are used in a
logical phrase.
Maintaining a logical statement that represents the set of probable states consistent with the
observation history is required for logical state estimation. Each update step necessitates inference
using the environment's transition model, which is comprised of successor-state axioms that define
how each fluent changes.
State estimation: The process of updating one's belief state as new information becomes available.
Exact state estimation may necessitate logical formulations with exponentially increasing symbol
counts.
One typical method for approximating state estimation is to characterise belief states as literal
conjunctions (1-CNF formulas).
Given the belief state at t-1, the agent simply tries to prove X^t and ¬X^t for each symbol X^t.
The new belief state is formed by the conjunction of verifiable literals, and the previous belief state
is discarded.
(As time passes, this scheme may lose some information.)
Given the whole percept history, the set of feasible states represented by the 1-CNF belief state
comprises all states that are in fact possible. The 1-CNF belief state serves as a conservative
approximation or outer envelope.
Fig 4: Depiction of 1-CNF belief state as a simply representable, conservative approximation to the
exact belief state.
model ← SAT-SOLVER(cnf)
If model is not null then
  Return EXTRACT-SOLUTION(model)
Return failure
Precondition axioms: added to avoid creating plans containing illegal actions, saying that an action
occurrence requires the preconditions to be satisfied.
Action exclusion axioms were included to prevent the formation of plans that had many
simultaneous activities that interfered with one another.
Because it lacks the expressive power to deal concisely with time, space, and universal patterns of
relationships among things, propositional logic does not scale to unbounded settings.
To represent the above statements, PL logic is not sufficient, so we required some more powerful
logic, such as first-order logic.
First-Order logic:
● First-order logic is another way of knowledge representation in artificial intelligence. It is an
extension to propositional logic.
● FOL is sufficiently expressive to represent the natural language statements in a concise way.
● First-order logic is also known as predicate logic or first-order predicate logic. First-order logic is a powerful language that expresses information about objects in a natural way and can also express the relationships between those objects.
● First-order logic (like natural language) does not only assume that the world contains facts like
propositional logic but also assumes the following things in the world:
• Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ......
• Relations: These can be unary relations such as red, round, is adjacent, or n-ary relations such as the sister of, brother of, has color, comes between.
• Function: Father of, best friend, third inning of, end of, ......
● As a natural language, first-order logic also has two main parts:
• Syntax
• Semantics
Atomic sentences:
● Atomic sentences are the most basic sentences of first-order logic. These sentences are formed
from a predicate symbol followed by a parenthesis with a sequence of terms.
● We can represent atomic sentences as Predicate (term1, term2, ......, term n).
Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat (Chinky).
Complex Sentences:
● Complex sentences are made by combining atomic sentences using connectives.
Consider the statement "x is an integer." It consists of two parts: the first part, x, is the subject of the statement, and the second part, "is an integer," is known as the predicate.
Properties of Quantifiers:
● In universal quantifier, ∀x∀y is similar to ∀y ∀x.
● In Existential quantifier, ∃x∃y is similar to ∃y ∃x.
● ∃x∀y is not similar to ∀y∃x.
Universal Quantifier:
Universal quantifier is a symbol of logical representation, which specifies that the statement within
its range is true for everything or every instance of a particular thing.
The Universal quantifier is represented by a symbol ∀, which resembles an inverted A.
Note: In universal quantifier we use implication "→".
If x is a variable, then ∀x is read as:
● For all x
● For each x
● For every x.
Example:
All man drink coffee.
Let the variable x refer to a man, so all x can be represented in the UOD as below:
∀x man(x) → drink(x, coffee).
It will be read as: For all x, if x is a man, then x drinks coffee.
Existential Quantifier:
Existential quantifiers are the type of quantifiers, which express that the statement within its scope
is true for at least one instance of something.
It is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.
Note: In Existential quantifier we always use AND or Conjunction symbol ( ∧).
If x is a variable, then existential quantifier will be ∃x or ∃(x). And it will be read as:
● There exists a 'x.'
● For some 'x.'
● For at least one 'x.'
Example:
Some boys are intelligent.
Points to remember:
● The main connective for universal quantifier ∀ is implication →.
● The main connective for existential quantifier ∃ is and ∧.
Propositional logic lacks the expressive power to succinctly explain a complex environment with
numerous items.
For example, we were compelled to write a separate rule about breezes and pits for each square, such as
B1,1 ⇔ (P1,2 ∨ P2,1)
In English, though, it seems easy enough to say, once and for all, that squares adjacent to pits are breezy.
Natural languages are also non-compositional, in the sense that the meaning of a statement like
"Then she saw it" might be influenced by the context created by numerous preceding and following
sentences.
Finally, natural languages contain ambiguity, which can make thinking harder.
The following are the most obvious features of natural language syntax:
● Object-referential nouns and noun phrases ( squares , pits, wumpuses ).
● Verbs and verb phrases that allude to object relationships (is breezy, is adjacent to, shoots).
● Some of these relationships are functions, meaning they have only one value for each input. It's
simple to start a list of objects, relations, and functions:
Objects: People, houses, numbers, theories, Ronald McDonald, colors, baseball games, wars,
centuries….
Relation: These can be unary relations or properties like red, round, bogus, prime, multi-
storied............, or more general n-ary relations like brother of, bigger than, within, part of, has
colour, happened after, owns, comes between, etc.
Functions: father of, closest friend, third inning of, one more than, and the beginning of.
● The first-order logic language has a syntax and semantics based on objects and relations.
● First-order logic can also be used to convey facts about some or all of the universe's items.
● This allows generic laws or norms to be represented, such as the sentence "Squares adjacent to
the Wumpus are stinky."
The ontological commitment made by each language—that is, what it assumes about the nature of
reality—is the basic difference between propositional and first-order logic.
● Propositional logic, for example, assumes that there are facts in the world that either hold or do not hold. Each fact can be in one of two states: true or false.
● Temporal logic assumes that facts are true at specific moments and that those times (which can
be points or intervals) are in chronological sequence.
Higher-order logic considers the first-order logic's relations and functions to be objects in and of
themselves. This enables claims to be made about all object relations. A talent for dealing with
logical notation is required of an AI student.
Syntax
Syntax has to do with what ‘things’ (symbols, notations) one is allowed to use in the language and
in what way; there is/are a(n):
● Alphabet
● Language constructs
● Sentences to assert knowledge
Logical connectives (⇒, ∧, ∨, and ⇔), negation (¬), and parentheses. These will be used to recursively build complex formulas, just as was done for propositional logic.
Constants symbols are strings that will be interpreted as representing objects, e.g., Bob might be a
constant.
Variable symbols will be used as “place holders” for quantifying over objects.
Predicate symbols Each has an arity (number of arguments) associated with it, which can be zero
or some other finite value. Predicates will be used to represent object characteristics and
relationships.
Zero-arity predicate symbols are viewed as propositions, so propositional logic is subsumed by first-order logic. These assertions can be thought of as properties of the world.
Predicates with a single parameter can be thought of as specifying object attributes. If Rich is a
single-arity predicate, then Rich (Bob) is used to indicate that Bob is wealthy. Multi-arity
predicates are used to describe relationships between items.
Formula ::= PrimitiveFormula
  | (Formula Connective Formula)
  | ¬Formula
  | Quantifier Variable Formula
PrimitiveFormula ::= Predicate(Term, …, Term)
Term ::= Function(Term, …, Term)
  | Constant
  | Variable
Connective ::= ⇒ | ∧ | ∨ | ⇔
Quantifier ::= ∀ | ∃
Constant ::= any string that is not used as a variable, predicate, or function
Variable ::= any string that is not used as a constant, predicate, or function
Predicate ::= any string that is not used as a constant, variable, or function
Function ::= any string that is not used as a constant, variable, or predicate
Function symbols Each has a specific arity (number of input arguments) and is understood as a
function that maps the stated number of input objects to objects. Allowing FatherOf to be a single-
arity function symbol, the natural interpretation of FatherOf (Bob) is Bob's father.
Zero-arity function symbols are considered to be constants.
Universal and existential quantifier symbols will be used to quantify over objects. For example,
∀ x Alive(x) ⇒ Breathing(x) is a universally quantified statement that uses the variable x as a
placeholder.
• If p is a predicate symbol of arity n, then I(p) is interpreted as a set of n-tuples from the domain. If a tuple O = (o1, · · · , on) is in I(p), then we say that p is true for the object tuple O.
• If p is a predicate symbol of arity 0, i.e., a simple proposition, then I(p) is equal to either true or false.
Assume we have a single predicate TallerThan, a single function FatherOf, and a single constant
Bob. The following could be a model M1 for these symbols:
D = {BOB, JON, NULL}
I(Bob) = BOB
I(TallerThan) = {⟨BOB, JON⟩}
Because I(FatherOf) is a function, we'll only show the value for each argument to give the FatherOf
meaning.
I(FatherOf)(BOB) = JON
I(FatherOf)(JON) = NULL
I(FatherOf)(NULL) = NULL
M2 could also be interpreted as follows,
D = { BOB, JON }
I(Bob) = BOB
I(TallerThan) = { ⟨BOB, JON⟩, ⟨JON, BOB⟩ }
I(FatherOf)(BOB) = BOB
I(FatherOf)(JON) = JON
It's vital to highlight the difference between Bob, which is a constant (a syntactic entity), and BOB,
which is a domain object (a semantic entity). The second interpretation isn't exactly what we're
looking for (the objects are dads of themselves, and TallerThan is inconsistent), but it's still a viable
model. By imposing proper limitations on the symbols, the knowledge base can rule out such
unexpected models from consideration.
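As a small illustration, the model M1 above can be written down and queried directly; the dictionaries below mirror the interpretation I, and the term encoding (nested tuples for function applications) is an assumption made for this sketch:

    # Model M1 from the text: domain objects and the interpretation I.
    constants = {"Bob": "BOB"}
    taller_than = {("BOB", "JON")}                                 # I(TallerThan)
    father_of = {"BOB": "JON", "JON": "NULL", "NULL": "NULL"}      # I(FatherOf)

    def evaluate(term):
        """Map a term (a constant or a FatherOf(...) application) to a domain object."""
        if isinstance(term, tuple) and term[0] == "FatherOf":
            return father_of[evaluate(term[1])]
        return constants[term]

    def holds_taller_than(term1, term2):
        """Evaluate the atomic sentence TallerThan(term1, term2) in M1."""
        return (evaluate(term1), evaluate(term2)) in taller_than

    # TallerThan(Bob, FatherOf(Bob)) is true in M1, since FatherOf(Bob) denotes JON.
    print(holds_taller_than("Bob", ("FatherOf", "Bob")))   # -> True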
Equality:
Atomic sentences are generated in First-Order Logic not only through the employment of predicate
and words, but also through the application of equality. We can do this by using equality symbols,
which indicate that the two terms relate to the same thing.
Universal Generalization:
• Universal generalization is a valid inference rule that states that if premise P(c) is true for any arbitrary element c in the universe of discourse, we can arrive at the conclusion ∀x P(x).
Universal Instantiation:
• Universal instantiation, often known as universal elimination or UI, is a valid inference
rule. It can be used numerous times to add more sentences.
• The new knowledge base is logically equivalent to the previous one.
• We can infer any phrase by replacing a ground word for the variable, according to the UI.
• According to the UI rule, any sentence P(c) can be inferred by substituting a ground term c (a constant within the domain of x) for the variable in ∀x P(x), for any object c in the universe of discourse.
Existential Instantiation:
• Existential Elimination, also known as Existential Instantiation, is a valid first-order logic
inference rule.
• It can only be used once to substitute for the existential sentence.
• Despite the fact that the new KB is not conceptually identical to the previous KB, it will
suffice if the old KB was.
• This rule states that one can infer P(c) from the formula given in the form of ∃x P(x) for a
new constant symbol c.
• The only constraint with this rule is that c must be a new word for which P(c) is true.
Existential introduction
• An existential generalization, also known as an existential introduction, is a valid
inference rule in first-order logic.
• If some element c in the universe of discourse has the property P, we can infer that there exists something in the universe that has the property P, according to this rule.
Q16) Write the difference between propositional logic and first order logic?
A16) Key differences between PL and FOL
● Propositional logic converts a complete sentence into a symbol and makes it logical, whereas first-order logic represents the internal structure of a sentence in terms of relations, functions, constants, and variables.
● The limitation of PL is that it cannot represent individual entities, whereas FOL can easily represent individual objects, so a sentence about a particular individual can be expressed directly in FOL.
● PL does not signify or express the generalization, specialization or pattern for example
‘QUANTIFIERS’ cannot be used in PL but in FOL users can easily use quantifiers as it does
express the generalization, specialization, and pattern.
Q17) Write the difference between propositional logic and predicate logic?
A17) Difference between propositional logic and predicate logic
Unit - 5
Reasoning
Q1) What do you mean by inference in first order logic?
A1) Inference in First-Order Logic is used to deduce new facts or sentences from existing
sentences. Before understanding the FOL inference rule, let's understand some basic terminologies
used in FOL.
Substitution:
Substitution is a fundamental operation performed on terms and formulas. It occurs in all inference
systems in first-order logic. Substitution becomes more complex in the presence of quantifiers in FOL. If we write F[a/x], it means that the constant "a" is substituted in place of the variable "x" in F.
Note: First-order logic is capable of expressing facts about some or all objects in the universe.
Equality:
First-Order logic does not only use predicate and terms for making atomic sentences but also uses
another way, which is equality in FOL. For this, we can use equality symbols which specify that
the two terms refer to the same object.
Example: Brother (John) = Smith.
As in the above example, the object referred by the Brother (John) is similar to the object referred
by Smith. The equality symbol can also be used with negation to represent that two terms are not
the same objects.
Example: ¬(x=y) which is equivalent to x ≠y.
1. Universal Generalization:
● Universal generalization is a valid inference rule which states that if premise P(c) is true for any
arbitrary element c in the universe of discourse, then we can have a conclusion as ∀ x P(x).
2. Universal Instantiation:
● Universal instantiation is also called as universal elimination or UI is a valid inference rule. It
can be applied multiple times to add new sentences.
● The new KB is logically equivalent to the previous KB.
● As per UI, we can infer any sentence obtained by substituting a ground term for the variable.
● The UI rule state that we can infer any sentence P(c) by substituting a ground term c (a constant
within domain x) from ∀ x P(x) for any object in the universe of discourse.
Example: 2.
Let's take a famous example,
"All kings who are greedy are Evil." So let our knowledge base contains this detail as in the form
of FOL:
∀x king(x) ∧ greedy (x) → Evil (x),
So from this information, we can infer any of the following statements using
Universal Instantiation:
● King(John) ∧ Greedy (John) → Evil (John),
● King(Richard) ∧ Greedy (Richard) → Evil (Richard),
● King(Father(John)) ∧ Greedy (Father(John)) → Evil (Father(John)),
3. Existential Instantiation:
● Existential instantiation is also called as Existential Elimination, which is a valid inference rule
in first-order logic.
● It can be applied only once to replace the existential sentence.
● The new KB is not logically equivalent to old KB, but it will be satisfiable if old KB was
satisfiable.
● This rule states that one can infer P(c) from the formula given in the form of ∃x P(x) for a new
constant symbol c.
● The restriction with this rule is that c used in the rule must be a new term for which P(c ) is true.
Example:
From the given sentence: ∃x Crown(x) ∧ OnHead(x, John),
So we can infer: Crown(K) ∧ OnHead( K, John), as long as K does not appear in the knowledge
base.
● The above used K is a constant symbol, which is called Skolem constant.
● The Existential instantiation is a special case of Skolemization process.
4. Existential introduction
● An existential introduction is also known as an existential generalization, which is a valid
inference rule in first-order logic.
● This rule states that if there is some element c in the universe of discourse which has a property
P, then we can infer that there exists something in the universe which has the property P.
For any variable v and ground term g. For example, suppose the knowledge base contains a statement saying that all greedy kings are Evil.
For the variable x, with substitutions like {x/John} and {x/Richard}, the corresponding instantiated sentences can be inferred.
Thus, a universally quantified sentence can be replaced by the set of all possible instantiations.
An existentially quantified sentence can be replaced by one instantiation, as long as the new constant C1 does not appear elsewhere in the knowledge base.
Eliminating the universal and existential quantifiers in this way should give a new knowledge base that is inferentially equivalent to the old one, in the sense that it is satisfiable exactly when the original knowledge base is satisfiable.
• Let Ψ1 and Ψ2 be two atomic sentences and σ be a unifier such that Ψ1σ = Ψ2σ; then this can be expressed as UNIFY(Ψ1, Ψ2).
• Example: Find the MGU for UNIFY{King(x), King(John)}.
Example: Let's say there are two different expressions, P (x, y), and P (a, f(z)).
In this example, we need to make both above statements identical to each other. For this, we will
perform the substitution.
P (x, y)......... (i)
P (a, f(z))......... (ii)
• Substitute x with a, and y with f(z) in the first expression, and it will be represented as a/x
and f(z)/y.
• With both the substitutions, the first expression will be identical to the second expression
and the substitution set will be: [a/x, f(z)/y].
N + 1 premises = N atomic sentences + one implication. Applying SUBST (θ, q) yields the
conclusion we seek.
p1’ = King (John) p2’ = Greedy(y)
p1 = King(x) p2 = Greedy(x)
θ = {x / John, y/ John} q = Evil(x)
SUBST (θ, q) is Evil (John)
Unification Algorithms
Algorithm: Unify(L1, L2)
1. If L1 or L2 are both variables or constants, then:
   a. If L1 and L2 are identical, then return NIL.
   b. Else if L1 is a variable, then if L1 occurs in L2 return {FAIL}, else return (L2/L1).
   c. Else if L2 is a variable, then if L2 occurs in L1 return {FAIL}, else return (L1/L2).
   d. Else return {FAIL}.
2. If the initial predicate symbols in L1 and L2 are not identical, then return {FAIL}.
3. If L1 and L2 have a different number of arguments, then return {FAIL}.
4. Set SUBST to NIL. (At the end of this procedure, SUBST will contain all the substitutions used to unify L1 and L2.)
5. For i ← 1 to the number of arguments in L1:
   a. Call Unify with the i-th argument of L1 and the i-th argument of L2, putting the result in S.
   b. If S contains FAIL, then return {FAIL}.
   c. If S is not equal to NIL, then:
      i. Apply S to the remainder of both L1 and L2.
      ii. SUBST := APPEND(S, SUBST).
6. Return SUBST.
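A Python sketch in the same spirit, representing compound terms as tuples (symbol followed by arguments) and declaring explicitly which symbols are variables; this follows the general idea of the algorithm above rather than its exact bookkeeping:

    VARIABLES = {"x", "y", "z"}                      # symbols treated as variables in this example

    def is_variable(t):
        return t in VARIABLES

    def unify(a, b, subst=None):
        """Return a most general unifier {variable: term} for a and b, or None on failure."""
        if subst is None:
            subst = {}
        if a == b:
            return subst
        if is_variable(a):
            return unify_var(a, b, subst)
        if is_variable(b):
            return unify_var(b, a, subst)
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for ai, bi in zip(a, b):                 # same predicate/function symbol and arity
                subst = unify(ai, bi, subst)
                if subst is None:
                    return None
            return subst
        return None                                  # mismatched constants or structures

    def unify_var(var, term, subst):
        if var in subst:
            return unify(subst[var], term, subst)
        if occurs_in(var, term, subst):
            return None                              # occurs check
        return {**subst, var: term}

    def occurs_in(var, term, subst):
        if var == term:
            return True
        if isinstance(term, str) and term in subst:
            return occurs_in(var, subst[term], subst)
        if isinstance(term, tuple):
            return any(occurs_in(var, t, subst) for t in term)
        return False

    # P(x, y) unified with P(a, f(z)) gives the substitution {x: a, y: f(z)}.
    print(unify(("P", "x", "y"), ("P", "a", ("f", "z"))))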
Example: Consider determining the color of Fritz, a pet that croaks and eats flies. The Knowledge Base contains the following rules:
If X croaks and eats flies → then X is a frog.
If X sings → then X is a bird.
If X is a frog → then X is green.
If X is a bird → then X is blue.
The facts "croaks" and "eats flies" are in the KB, so rule 1 fires, since its antecedent matches our facts. The KB is updated with the consequent of rule 1, namely that X is a frog. The KB is scanned again, and rule 3 is selected because its antecedent (X is a frog) matches our newly confirmed fact. The new consequent (X is green) is then recorded in the knowledge base. We have thus determined the pet's color from the facts that it croaks and eats flies.
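A tiny forward-chaining loop capturing this example; rules are encoded as (antecedent set, conclusion) pairs and facts as plain strings, which is an encoding chosen just for illustration:

    def forward_chain(rules, facts):
        """Apply rules whose antecedents are all known until nothing new can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedents, conclusion in rules:
                if antecedents <= facts and conclusion not in facts:
                    facts.add(conclusion)            # fire the rule, record the new fact
                    changed = True
        return facts

    rules = [
        ({"croaks", "eats flies"}, "is a frog"),
        ({"sings"}, "is a bird"),
        ({"is a frog"}, "is green"),
        ({"is a bird"}, "is blue"),
    ]
    print(forward_chain(rules, {"croaks", "eats flies"}))
    # -> {'croaks', 'eats flies', 'is a frog', 'is green'}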
Example: Fritz, a pet that croaks and eats flies, needs his color determined. The Knowledge Base
contains the following rules:
If X croaks and eats flies → then X is a frog.
If X sings → then X is a bird.
If X is a frog → then X is green.
If X is a bird → then X is blue.
The third and fourth rules were chosen because they best fit our goal of determining the color of
the pet. For example, X may be green or blue. Both the antecedents of the rules, X is a frog and X
is a bird, are added to the target list. The first two rules were chosen because their implications
correlate to the newly added goals, namely, X is a frog or X is a bird.
Because the antecedent (If X croaks and eats flies) is true/given, we can derive that Fritz is a frog.
If it's a frog, Fritz is green, and if it's a bird, Fritz is blue, completing the goal of determining the
pet's color.
Example:
We can resolve the two clauses given below:
[Animal(g(x)) ∨ Loves(f(x), x)] and [¬Loves(a, b) ∨ ¬Kills(a, b)]
where the two complementary literals are Loves(f(x), x) and ¬Loves(a, b).
These literals can be unified with the unifier θ = [a/f(x), b/x], and resolution will generate the resolvent clause:
[Animal(g(x)) ∨ ¬Kills(f(x), x)].
To better understand all the above steps, we will take an example in which we will apply
resolution.
Example:
1. John likes all kinds of food.
2. Apples and vegetables are food.
3. Anything anyone eats and is not killed by is food.
4. Anil eats peanuts and is still alive.
5. Harry eats everything that Anil eats.
Prove by resolution that:
6. John likes peanuts.
What to Represent:
Following are the kind of knowledge which needs to be represented in that of the AI systems:
● Object: All the facts about objects in our world domain. For example: guitars contain strings; trumpets are brass instruments.
● Events: Events are the actions which occur in our world.
● Performance: It describes that of the behaviour which involves the knowledge about how to do
things.
● Meta-knowledge: It is knowledge about what we know.
● Facts: Facts are the truths about the real world and what we represent.
● Knowledge-Base: The central component of the knowledge-based agents is the knowledge
base. It is represented as KB. The Knowledgebase is a group of the Sentences (Here, sentences are
used as a technical term and not identical with the English language).
Knowledge: Knowledge is the awareness or familiarity gained by that of the experiences of facts,
data, and situations. Following are the types of knowledge in artificial intelligence:
1. Declarative Knowledge:
● Declarative knowledge is to know about something.
● It includes the concepts, the facts, and the objects.
● It is also called descriptive knowledge and expressed in declarative sentences.
● It is simpler than procedural knowledge.
2. Procedural Knowledge
● It is also known as imperative knowledge.
● Procedural knowledge is a type of knowledge which is responsible for knowing how to do
something.
● It can be directly applied to any task.
● It includes rules, strategies, procedures, agendas, etc.
● Procedural knowledge depends on the task on which it can be applied.
3. Meta-knowledge:
● Knowledge about the other types of knowledge is known as meta-knowledge.
4. Heuristic knowledge:
● Heuristic knowledge represents the knowledge of experts in a field or subject.
● Heuristic knowledge consists of rules of thumb based on previous experience and awareness of
approaches; such rules tend to work well but are not guaranteed.
5. Structural knowledge:
● Structural knowledge is basic knowledge used in problem-solving.
● It describes relationships between various concepts such as kind of, part of, and grouping of
something.
● It describes the relationship that exists between concepts or objects.
2. Inheritable knowledge:
● In the inheritable knowledge approach, all the data is stored in a hierarchy of classes.
● All classes are arranged in a generalized form or in a hierarchical manner.
● In this approach, we apply the inheritance property.
● Elements inherit values from the other members of a class.
● This approach captures inheritable knowledge showing the relation between an instance and its
class, which is known as the instance relation.
● Every individual frame represents a collection of attributes and their values.
● In this approach, objects and values are represented as boxed nodes.
● Arrows point from objects to their values.
Example:
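As a rough stand-in for the usual boxes-and-arrows figure, the following Python sketch shows how an attribute value is looked up along the instance and isa links; the frame names and attribute values are illustrative assumptions:

```python
# Minimal sketch of inheritable knowledge: instances and subclasses inherit
# attribute values from their parent classes unless they override them.
knowledge = {
    "Person":    {"isa": None,      "legs": 2},
    "Adult":     {"isa": "Person"},
    "Cricketer": {"isa": "Adult",   "height": 6.0},   # illustrative attribute
    "Virat":     {"instance": "Cricketer"},           # instance relation
}

def lookup(node, attribute):
    """Walk up the instance/isa chain until the attribute is found."""
    while node is not None:
        frame = knowledge[node]
        if attribute in frame:
            return frame[attribute]
        node = frame.get("instance") or frame.get("isa")
    return None

print(lookup("Virat", "legs"))   # 2, inherited from Person
```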
3. Inferential knowledge:
● The inferential knowledge approach represents knowledge in the form of formal logic.
● This approach can be used to derive more facts.
● It guarantees correctness.
● Example: Suppose there are two statements:
1. Marcus is a man
2. All men are mortal
Then they can be represented as:
Man(Marcus)
∀x man(x) → mortal(x)
4. Procedural knowledge:
● The procedural knowledge approach uses small programs and code which describe how to do
specific things and how to proceed.
● In this approach, one important rule is used: the If-Then rule.
● With this kind of knowledge, we can use various programming languages such as LISP and
Prolog.
● We can easily represent heuristic or domain-specific knowledge using this approach.
● However, it is not necessarily possible to represent all cases in this approach. A brief sketch of
the idea follows.
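A minimal sketch of the idea, with the If-Then rules written directly as program control flow (the rules reuse the frog/bird example from earlier and are illustrative only):

```python
# Sketch of procedural knowledge: the If-Then rules are encoded as a procedure
# rather than as declarative facts (the document mentions LISP/Prolog; Python is used here).
def classify_pet(observations):
    if "croaks" in observations and "eats flies" in observations:
        return "frog"
    if "sings" in observations:
        return "bird"
    return "unknown"

print(classify_pet({"croaks", "eats flies"}))   # frog
```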
The diagram above shows how an AI system interacts with the real world and which components
help it to exhibit intelligence. An AI system has a Perception component by which it retrieves
information from its environment; this can be visual, audio, or another form of sensory input. The
learning component is responsible for learning from the data captured by the Perception component.
In the complete cycle, the main components are knowledge representation and reasoning. These
two components are what allow the machine to show human-like intelligence. They are
independent of each other but also coupled together. Planning and execution depend on the
analysis performed by the knowledge representation and reasoning components.
Relation = inheritance:
● Food is edible in all forms; fruit is a subclass of food, and apples are a subclass of fruit, hence an
apple is edible.
Defines a taxonomy
Referential transparency
● In whatever situation, an expression always yields the same outcome.
● For example, if an agent knows that 2+2=4 and 4<5, it should also know that 2+2<5.
● Referential transparency is built into the inference rules of most formal logic languages.
Referential opacity
● The opposite of referential transparency: the meaning of an expression depends on the terms
used, not only on what those terms refer to.
● For propositional attitudes, we want referential opacity, because terms matter and not all agents
are aware of which terms are co-referential.
● In most formal logic languages, this is not directly achievable (Modal Logic is an exception).
Modal Logic
Created to allow for referential opacity in the knowledge base
Regular logic is concerned with a single modality (the truth modality), allowing us to say "P is
true."
Modal logic comprises modal operators that use sentences as arguments rather than terms (e.g. "A
knows P" is represented with notation KAP where K is the modal operator for knowledge, A is the
agent, and P is a sentence).
Modal logic has the same syntax as first-order logic, with the exception that modal operators can
also be used to build sentences.
Modal logic's semantics are more sophisticated. A model in first-order logic has a set of objects as
well as an interpretation that translates each name to the correct item, relation, or function. We
want to be able to consider both the possibility that Superman's secret identity is Clark and the
chance that it isn't in modal logic. As a result, a model in modal logic is a collection of possible
universes (instead of 1 true world). Accessibility relations connect the worlds in a graph (one
relation for each modal operator). If everything in w1 is consistent with what A knows in w0, we
say that world w1 is accessible from world w0 with respect to the modal operator KA, and we
express this as Acc (KA, w0, w1).
The figure depicts accessibility as arrows between possible worlds.
Fig: Possible worlds with accessibility relations K_Superman and K_Lois.
A knowledge atom K_A P is true in world w if and only if P is true in every world accessible from
w. Recursive application of this rule together with the standard rules of first-order logic yields the
truth of more complex sentences. That is, modal logic can be used to reason about nested
knowledge sentences, i.e. what one agent knows about the knowledge of another agent.
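A minimal Python sketch of this truth condition follows; the worlds, facts, and accessibility links are illustrative assumptions chosen to mirror the Superman/Lois figure:

```python
# Sketch of possible-worlds semantics: K_A(P) holds in world w iff P is true
# in every world accessible from w via agent A's accessibility relation.
facts = {
    "w0": {"IdentityIsClark"},
    "w1": {"IdentityIsClark"},
    "w2": set(),                                  # a world where the identity is not Clark
}
acc = {                                           # Acc(K_A, w, w') for each agent
    "Superman": {"w0": {"w0"}},                   # Superman knows his own identity
    "Lois":     {"w0": {"w1", "w2"}},             # Lois considers both possibilities
}

def knows(agent, proposition, world):
    """K_agent(proposition) is true in `world` iff it holds in all accessible worlds."""
    return all(proposition in facts[w] for w in acc[agent][world])

print(knows("Superman", "IdentityIsClark", "w0"))   # True
print(knows("Lois", "IdentityIsClark", "w0"))       # False
```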
We can claim, for example, that while Lois does not know whether Superman's secret identity is
Clark Kent, she does know that Clark knows it. Modal logic also lets us distinguish the two
readings of a sentence such as "Bond knows that someone is a spy." The first reading is that there
is a particular person whom Bond knows to be a spy:
∃x K_Bond Spy(x)
In modal logic, this indicates that there is an x that Bond knows to be a spy in all accessible
worlds.
The second interpretation is that Bond merely knows that at least one spy exists:
K_Bond ∃x Spy(x)
In modal logic, this indicates that in each accessible world there is an x who is a spy, but it does not
have to be the same x in each world.
We can also write down axioms about knowledge itself. For example, agents can draw deductions:
if an agent knows P and knows that P implies Q, then the agent knows Q:
(K_a P ∧ K_a(P ⇒ Q)) ⇒ K_a Q
We can declare that every agent knows the truth or falsity of every proposition P:
K_a(P ∨ ¬P)
What an agent knows is true (knowledge, unlike mere belief, implies truth):
K_a P ⇒ P
If an agent knows something, it knows that it knows it:
K_a P ⇒ K_a(K_a P)
Similar axioms can be defined for belief (commonly abbreviated as B) and other modal operators.
One issue with modal logic is that it presupposes agents have logical omniscience (that is, if an
agent knows a set of axioms, then it knows all consequences of those axioms).
For example, Figure below has a Member Of link between Mary and Female Persons,
corresponding to the logical assertion Mary ∈Female Persons; similarly, the Sister Of link
between Mary and John corresponds to the assertion Sister Of (Mary, John). We can connect
categories using Subset Of links, and so on.
It's easy to get carried away when painting bubbles and arrows.
Can we draw a Has Mother link from Persons to Female Persons, for example, because we know
that persons have female persons as mothers?
Has Mother is a relationship between a person and his or her mother, and categories do not have
mothers, hence the answer is no.
As a result, in Figure below, we've employed a specific notation—the double-boxed link.
According to this link,
∀x x∈ Persons ⇒ [∀ y HasMother (x, y) ⇒ y ∈ FemalePersons ] .
We might also want to say that persons have two legs; that is,
∀x x∈ Persons ⇒ Legs(x, 2) .
The single-boxed link in Figure below is used to assert properties for each category member.
One of the key attractions of semantic networks has been the simplicity and effectiveness of this
inference process when compared to logical theorem proving.
Fig: Semantic network with four objects and four categories. Relations are denoted by labeled links
Inverse links are another prominent method of inference. HasSister, for example, is the inverse
of SisterOf, implying that a HasSister(x, y) link holds exactly when SisterOf(y, x) holds.
When compared to first-order logic, the reader may have recognised an evident disadvantage of
semantic network notation: the fact that linkages between bubbles only reflect binary relations.
In a semantic network, for example, the sentence Fly(Shankar, New York, New Delhi,Yesterday)
cannot be affirmed explicitly.
Fig: Fragment of a semantic network showing the representation of the logical assertion
Fly(Shankar, New York, New Delhi, Yesterday)
The capacity of semantic networks to represent default values for categories is one of its most
essential features.
If you look closely at Figure, you'll find that John only has one leg, despite the fact that he's a
person, and all people have two legs.
The default value is said to be overridden by the more particular value. We might alternatively
override the default number of legs by adding a One Legged Persons category, which is a subset of
Persons that includes John.
If we say that the Legs assertion for Persons includes an exception for John, we can keep the
network's semantics purely logical:
∀x x ∈ Persons ∧ x ≠ John ⇒ Legs(x, 2) .
This is semantically appropriate for a fixed network.
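The default-and-override behaviour can be sketched in a few lines of Python; the network below is a toy version of the figure and is purely illustrative:

```python
# Sketch of default inheritance in a semantic network: a value attached directly
# to an object overrides the default inherited from its category (the John example).
network = {
    "Persons": {"subset_of": None,      "legs": 2},   # default for the category
    "John":    {"member_of": "Persons", "legs": 1},   # more specific value wins
    "Mary":    {"member_of": "Persons"},
}

def legs(node):
    """Follow MemberOf/SubsetOf links upwards until a Legs value is found."""
    while node is not None:
        frame = network[node]
        if "legs" in frame:
            return frame["legs"]
        node = frame.get("member_of") or frame.get("subset_of")
    return None

print(legs("John"))   # 1, the more specific value overrides the default
print(legs("Mary"))   # 2, inherited from Persons
```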
Description logics
First-order logic's syntax is intended to make it simple to declare things about objects.
Description logics are notations that make describing the definitions and properties of categories
easier.
Subsumption (testing whether one category is a subset of another by comparing their definitions)
and classification (checking whether an object belongs to a category) are the two main inference
tasks for description logics.
The consistency of a category definition—whether the membership criteria are logically satisfiable
—is also considered in some systems.
Fig: Syntax of descriptions in a subset of the CLASSIC language
A typical description logic is the CLASSIC language (Borgida et al., 1989). Figure depicts the
syntax of CLASSIC descriptions.
To say that bachelors are unmarried adult males, for example, we would write
Bachelor = And(Unmarried, Adult ,Male) .
In first-order logic, the equivalent would be
Bachelor (x) ⇔ Unmarried(x) ∧ Adult(x) ∧ Male(x).
Any CLASSIC description can be translated into an equivalent first-order sentence; however, some
descriptions are more straightforward to express in CLASSIC.
For example, to characterise a group of males who have at least three unemployed sons who are all
married to doctors, and at most two daughters who are all professors in physics or math
departments, we would use
And(Man, AtLeast(3, Son), AtMost(2, Daughter ),
All(Son, And(Unemployed,Married, All(Spouse, Doctor ))),
All(Daughter , And(Professor , Fills(Department , Physics,Math)))) .
Translating this into first-order logic is left as an exercise.
Negation and disjunction, for example, are generally absent from description logics.
In the Fills and OneOf constructions, CLASSIC only provides for a restricted type of disjunction.
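As a sketch of the classification and subsumption tasks for purely conjunctive descriptions, the fragment below models only the And constructor; it is not the CLASSIC language itself, and the individual john is a made-up example:

```python
# Sketch of classification and subsumption for conjunctive (And-only) descriptions.
Bachelor = ("And", "Unmarried", "Adult", "Male")

def classify(individual_concepts, description):
    """Classification: does the individual satisfy every part of the definition?"""
    op, *parts = description
    assert op == "And"
    return all(p in individual_concepts for p in parts)

def subsumes(general, specific):
    """Subsumption: And(C1..Cn) subsumes And(D1..Dm) if every Ci appears among the Dj."""
    return set(general[1:]) <= set(specific[1:])

john = {"Unmarried", "Adult", "Male", "Professor"}
print(classify(john, Bachelor))                       # True: John is a Bachelor
print(subsumes(("And", "Adult", "Male"), Bachelor))   # True: AdultMale subsumes Bachelor
```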
Plans - We define plans in which the action to be performed in a particular state may be influenced
by information about prior execution phases. Sequential plans, i.e., plans that are just sequences of
activities, conditional plans, i.e., plans that can pick a different action depending on the current
circumstance at execution time, and iterative plans that can run actions until a situation happens are
all covered by the definition. There are plans that depend on a finite number of previous execution
steps (finite-memory plans) and plans that do not depend on prior execution steps at all (memory-
less plans). Plan executions, in general, result in trees (called execution trees) whose nodes
correspond to domain states.
Goals - Goals are defined as sets of acceptable trees that correspond to intended planning domain
evolutions. They can be used to represent traditional reachability goals, which are requirements
expressed on the leaves of execution trees that specify the end states to be achieved once a plan is
performed. They can also represent more complicated types of "extended objectives," such as
temporally extended goals, that convey conditions across the entire execution tree.
Our framework is broad enough to encompass a wide range of relevant and important planning
issues. Deterministic domains, plans that are sequences of actions, and reachability goals can all be
used to mimic traditional planning. Furthermore, our framework is ideally adapted for modeling
specific types of planning under uncertainty and imperfect knowledge, which has recently been
addressed in the scientific literature and is relevant to a number of real-world applications.
Nondeterministic domains do, in fact, model uncertainty in action effects, whereas partial
observability does so for observations.
Nondeterministic domains, null observability, sequential plans, and reachability goals, for example,
can be used to model conformant planning. Nondeterministic domains, conditional plans, and
reachability goals can all be used to model contingent planning.
Nondeterministic domains, history-dependent plans, and objectives that describe desirable domain
evolutions can all be used to simulate planning for temporally extended goals.
For practical reasons, the framework cannot be too broad to encompass all of the various planning
issues that have been discussed in the literature thus far. For example, we do not represent
probabilities of action outcomes in action domains, nor do we represent goals as utility functions;
this distinguishes the framework from planning based on Markov Decision Processes (MDPs).
PDDL (Planning Domain Definition Language) is a language used to represent planning problems,
in which a family of related actions can be expressed with a single action schema.
The four basic things required in a search problem are described by PDDL:
● Initial state - Each state is represented as a conjunction of ground, function-free atoms.
● Actions - They are defined by a set of action schemas that implicitly define the ACTIONS() and
RESULT() functions.
● Result - The result produced by the collection of actions taken by the agent.
● Goal - The goal is just like a precondition: a conjunction of literals (each of which is either
positive or negative). A minimal sketch of these four ingredients follows.
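The sketch below represents states as sets of ground atoms; the Fly action schema, the atoms, and the airport names are illustrative assumptions, not taken from a real PDDL domain:

```python
# Sketch of the four PDDL ingredients as plain Python data (state = set of ground atoms).
initial_state = {"At(P1, SFO)", "Plane(P1)", "Airport(SFO)", "Airport(JFK)"}
goal          = {"At(P1, JFK)"}

def fly(state, plane, src, dst):
    """Action schema: preconditions must hold in `state`; effects add/delete atoms."""
    precond = {f"At({plane}, {src})", f"Plane({plane})",
               f"Airport({src})", f"Airport({dst})"}
    if not precond <= state:
        return None                                                   # action not applicable
    return (state - {f"At({plane}, {src})"}) | {f"At({plane}, {dst})"}  # RESULT(s, a)

state = fly(initial_state, "P1", "SFO", "JFK")
print(goal <= state)    # True: the goal (a conjunction of literals) is satisfied
```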
● Assume we're traveling from source "Mumbai" to destination "Goa" in the above hierarchical
planner diagram.
● Then you can decide how you want to travel: by plane, bus, or car. Assume you decide to travel
by "Bus."
● The "Take-Bus" strategy can then be broken down into a series of tasks, such as going to
Mumbai – Bus stop, purchasing a bus ticket, boarding the bus, and departing for Goa.
● The four actions in the previous point can now be broken down into their component parts. Take,
for example, "Buy-Ticket for Bus."
● It may be broken down into three steps: go to the bus stop counter, request a ticket, and pay for
the ticket.
● As a result, each of these actions can be further decomposed until we reach the level of actions
that can be performed without deliberation in order to generate the needed motor control
sequences, as in the sketch after this list.
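A minimal sketch of this decomposition in Python; the method table and task names follow the Mumbai-to-Goa example and are illustrative only:

```python
# Sketch of hierarchical decomposition: high-level tasks are refined into subtasks
# until only primitive actions remain.
methods = {
    "Travel(Mumbai, Goa)": ["Take-Bus"],
    "Take-Bus": ["GoTo(Mumbai bus stop)", "Buy-Ticket", "Board-Bus", "Depart(Goa)"],
    "Buy-Ticket": ["GoTo(ticket counter)", "Request ticket", "Pay for ticket"],
}

def decompose(task):
    """Expand a task depth-first into a sequence of primitive actions."""
    if task not in methods:          # primitive action: keep as-is
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("Travel(Mumbai, Goa)"))
```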
● Ordering & resource constraints: 2 jobs, each of form [AddEngine, AddWheels, Inspect] and
their individual action durations and its resource constraints
Fig 2: Future of AI
Healthcare: In the healthcare industry, AI will be critical in promptly and effectively diagnosing
ailments. With the help of AI, new drug discovery will be speedier and more cost-effective. It will
also improve patient engagement in their care by making appointment booking and bill payment
easier and with fewer errors. Apart from these helpful applications, one of AI's greatest challenges
in healthcare is ensuring its acceptance in daily clinical operations.
Cyber security: Without question, ensuring data security is a top responsibility for every
company. The following are some predictions about how cyber security will change as a result of AI:
● Security incidents will be tracked using AI techniques.
● NLP is used to identify the source of cyber-attacks.
● RPA bots are used to automate rule-based operations and procedures.
As a brilliant technology, however, it can also be used by attackers as a threat. They can employ AI
in an unethical manner by automating attacks that are difficult to defend.
Transportation: In the transportation industry, a fully autonomous vehicle has not yet been
produced, but researchers are working on it. In the cockpit, artificial intelligence and machine
learning are being used to help minimise workload, manage pilot stress and fatigue, and increase
on-time performance. There are a number of barriers to AI adoption in transportation, particularly
in public transit. There is a significant risk of becoming overly reliant on automated and
autonomous technologies.
E-commerce: In the near future, Artificial Intelligence will play a critical role in the e-commerce
industry. It will have a favorable impact on every part of the e-commerce industry, from user
experience to product marketing and delivery. In the future, we can expect e-commerce to include
automated warehouse and inventory management, shopper personalisation, and the employment of
chatbots.
Employment: Because of the use of Artificial Intelligence, finding work has become easier for
both job seekers and companies. In the job market, AI is already being employed through strict
rules and algorithms that automatically reject an applicant's résumé if it does not meet the
company's requirements. Most AI-enabled applications, ranging from marking written interviews
to conducting telephonic rounds, are expected to drive the hiring process in the future.
Various AI programmes, such as Rezi, Jobseeker, and others, are assisting job seekers in creating
great resumes and finding the perfect position for their skills.
Apart from the aforementioned industries, AI has a bright future in manufacturing, finance and
banking, entertainment, and other fields.
Fig 3: Components of AI
Learning
Computer programmes, like humans, learn in a variety of ways. When it comes to AI, this
platform's learning is divided into a variety of categories. Learning for AI comprises the trial-and-
error approach, which is one of the most important aspects of AI. The programme keeps trying to
solve a problem until it obtains the desired result. As a consequence, the programme keeps track of all
the movements that resulted in positive outcomes and stores them in its database for use the next
time the computer is presented with the same problem.
Individual things, such as multiple solutions to problems, vocabulary, foreign languages, and so on,
are memorized as part of AI's learning component, often known as rote learning. The generalization
method is then used to put this learning process into practice.
When it comes to artificial intelligence, there are several different types of learning. The most basic
method is to learn through trial and error. A rudimentary computer programme for solving mate-in-
one chess situations, for example, might try moves at random until it finds mate. The programme
could then save the solution along with the position so that it could be recalled the next time the
computer came across the same situation. On a computer, rote learning—the simple memorizing of
individual items and procedures—is relatively straightforward to accomplish. Implementing what
is known as generalization is more difficult. Generalization is the process of
adapting previous experience to similar new situations.
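A minimal sketch of rote learning over a trial-and-error search; the position string, candidate moves, and mate test are toy assumptions:

```python
# Sketch of rote learning: solutions found by trial and error are stored so the same
# position can be answered instantly the next time it appears.
from random import shuffle

memory = {}                                     # position -> remembered solution

def solve(position, candidate_moves, is_mate):
    if position in memory:                      # rote learning: recall the stored answer
        return memory[position]
    moves = list(candidate_moves)
    shuffle(moves)                              # trial and error: try moves in a random order
    for move in moves:
        if is_mate(position, move):
            memory[position] = move             # store the successful move for reuse
            return move
    return None

# Toy usage: the "mate" test simply checks for a checkmate symbol in the move string.
print(solve("pos-1", ["Qh5", "Nf3", "Qxf7#"], lambda p, m: m.endswith("#")))
print(memory)   # {'pos-1': 'Qxf7#'}
```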
Reasoning
Until about five decades ago, the art of reasoning was solely available to humans. Reasoning is one
of the most important components of artificial intelligence because of its ability to draw
distinctions and conclusions. To reason is to allow the platform to make inferences that are
appropriate for the situation at hand. These inferences are classified as deductive or inductive. The
difference is that in a deductive inference the truth of the premises guarantees the conclusion,
whereas in an inductive inference the premises only lend support to the conclusion, as in arguing
from past instrument failures to the cause of a new accident.
Programming computers to draw deductive inferences has seen considerable success. Reasoning, on the
other hand, always entails drawing meaningful inferences from the circumstance at hand.
To reason is to make inferences that are appropriate for the circumstances. Deductive and inductive
inferences are the two types of inference. An example of the former is, "Fred must be in either the
museum or the café; he is not in the café; therefore he is in the museum," and of the latter,
"Previous accidents of this nature were caused by instrument failure; therefore, this accident was
caused by instrument failure." The most important distinction between these two types of
reasoning is that in deductive reasoning, the truth of the premises guarantees the truth of the
conclusion, whereas inductive reasoning offers support to the conclusion without providing total
assurance.
Problem-solving
In its most basic form, AI's problem-solving component deals with problems in which data is given
and the answer requires discovering some unknown x. A wide range of problems is handled on an
AI platform, and problem-solving methods are commonly split into special-purpose and general-
purpose categories.
A solution to a given problem is tailor-made in the case of a special-purpose method, which often
takes advantage of some of the unique features supplied in the instance where a suggested problem
is incorporated. A general-purpose approach, on the other hand, is applicable to a wide variety of
problems. One such general-purpose technique lets programmes perform a step-by-step reduction of
the distance between the current state and a target state.
Problem solving, especially in artificial intelligence, can be defined as a methodical search through
a set of options in order to arrive at a predetermined goal or solution. There are two types of
problem-solving methods: special purpose and general purpose. A special-purpose approach is
designed specifically for a problem and typically takes advantage of extremely specific aspects of
the scenario in which the problem exists. A general-purpose method, on the other hand, can be used
to solve a wide range of issues.
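The "step-by-step reduction of the distance to the goal" idea can be sketched as a simple greedy loop; the numeric state, operators, and distance measure below are toy assumptions, not a full means-ends analysis system:

```python
# Sketch of a general-purpose method: repeatedly apply whichever operator most reduces
# the remaining distance between the current state and the goal state.
goal = 10

def distance(state):
    return abs(goal - state)

def add3(s): return s + 3
def add1(s): return s + 1
def sub1(s): return s - 1

def solve(state, operators):
    plan = []
    while distance(state) > 0:
        op = min(operators, key=lambda f: distance(f(state)))   # best distance reduction
        if distance(op(state)) >= distance(state):
            break                                               # no operator makes progress
        state = op(state)
        plan.append(op.__name__)
    return state, plan

print(solve(0, [add3, add1, sub1]))   # (10, ['add3', 'add3', 'add3', 'add1'])
```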
Perception
When the 'perception' component of Artificial Intelligence is used, it scans any given environment
utilizing various artificial or real sense organs. The perceived scene is then decomposed internally
into separate objects, allowing the perceiver to examine the scene and comprehend the objects'
relationships and characteristics. This analysis is frequently complicated by the fact that the same
object can present a wide range of appearances depending on the angle from which it is viewed.
Perception is one of the components of artificial intelligence that, in its current condition, can
propel self-driving cars at low speeds. FREDDY was one of the first robots to use perception to
recognize distinct objects and assemble various artifacts.
Perception involves scanning the surroundings with numerous sense organs, both real and artificial,
and decomposing the view into discrete objects in various spatial relationships. The fact that an
object can seem differently depending on the angle from which it is viewed, the direction and
intensity of illumination in the scene, and how much the object contrasts with the surrounding
field, makes analysis more difficult.
Artificial perception has progressed to the point that optical sensors can identify humans,
autonomous vehicles can drive at highway speeds, and robots can roam through buildings
collecting empty drink cans.
Language - understanding
Language can be defined as a system of signs whose meanings are established by convention.
Language understanding, one of the most extensively used artificial intelligence components, must
cope with the different kinds of meaning a language carries, since conventional (linguistic)
meaning differs from natural meaning.
The ability to distinguish between an enormous range of distinct things is one of the most
important qualities of a human language such as English. AI is therefore designed in such a way
that it can grasp English, one of the most widely spoken human languages, in addition to the many
computer programmes that run on the machine.
A language is a set of signs with predetermined meaning. Language does not have to be limited to
the spoken word in this sense. For example, traffic signs form a mini language, with "danger
ahead" being a matter of convention in some countries. What is distinctive about a language is that
its units have meaning by convention, and linguistic meaning differs significantly from natural
meaning, as seen in expressions like "Those clouds signify rain" and "The drop in pressure shows
the valve is malfunctioning."
Fig: Alan Turing, English mathematician and pioneer of computer science, at the Ferranti Mark I
computer at the University of Manchester in 1951. The earliest successful AI program, written by
Christopher Strachey to play checkers (draughts), ran on the Ferranti Mark I.
Machine learning has the ability to analyze our designs more rapidly and at a lower cost in the
context of architecture and design. The ARD team has been experimenting with two methods for
incorporating it into the design process, as well as some potentially innovative applications.
The first is surrogate modeling, which is a direct alternative for time-consuming analytical
engineering simulations (such as structural deformation, solar radiation, and pedestrian movement)
that might take hours or even days to complete.
Many architectural projects today are of immense scale and complexity, characterized by a diverse
array of overlapping talent and technologies. Machine learning may be able to assist us in this
diverse field by allowing us to solve problems and find patterns that were previously dependent
on complex analytical simulations and programmes. Due to the time and effort those simulations
require, design optimization based on their findings has been almost impossible, weakening the
value of these tools.
We need to give designers results in near real-time to tackle this problem. As a result, the
ARD team looked into surrogate models, in which a computationally 'cheaper' predictive model is
built based on a set of carefully selected simulation results. The surrogate model's approximation of
a simulation in real-time can then provide the designer with results that are good enough for them
to make a swift decision.
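A minimal sketch of surrogate modelling in Python; the expensive_simulation stand-in and the linear model are illustrative assumptions (a real surrogate would be trained on genuine solver output and would likely use a richer model):

```python
# Sketch of surrogate modelling: fit a cheap regression model to a small set of
# expensive simulation results, then query it in near real-time during design.
import numpy as np
from sklearn.linear_model import LinearRegression

def expensive_simulation(design_params):
    # Placeholder for an hours-long solver run (e.g. structural deformation).
    x, y = design_params
    return 3.0 * x + 0.5 * y

samples = np.array([[1, 2], [2, 1], [3, 4], [5, 2], [4, 6]], dtype=float)
results = np.array([expensive_simulation(p) for p in samples])

surrogate = LinearRegression().fit(samples, results)    # the cheap predictive model
print(surrogate.predict(np.array([[2.5, 3.0]])))         # instant approximate answer
```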
We call the second – and more cutting-edge – field of ARD's machine learning research 'design aid'
modeling, and these systems have the ability to function alongside designers' intuition in the
creative process.
This is referred to as design-assistance modeling because it aids architectural processes for which
we do not always have an analytical solution – one that can be determined from simulations. This
could include optimizing the spatial layout of furniture within a space or providing designers with
real-time document control support while they work.