Lectures 2


Artificial Intelligence

Search
Solving Problems by Searching

1
What is Intelligence?
• The ability of a system to calculate, reason, perceive relationships and analogies, learn from experience, store and retrieve information from memory, solve problems, comprehend complex ideas, use natural language fluently, classify, generalize, and adapt to new situations.

2
Types of Intelligence

3
Types….

4
What is Intelligence Composed Of?

5
Reasoning

• It is the set of processes that enable us to provide the basis for judgement, decision making, and prediction. There are broadly two types.
6
Learning

• The ability to learn is possessed by humans, particular species of animals, and AI-enabled systems. Learning is categorized as follows:
• Auditory Learning
• Episodic Learning
• Motor Learning
• Observational Learning
• Perceptual Learning
• Relational Learning
7
Types of Learning

8
Types of Learning

9
Problem Solving

• It is the process in which one perceives and tries to arrive at a desired solution from a present situation by taking some path, which is blocked by known or unknown hurdles.
• Problem solving also includes decision
making, which is the process of selecting
the best suitable alternative out of multiple
alternatives to reach the desired goal.
10
Perception

• It is the process of acquiring, interpreting, selecting, and organizing sensory information.
• Perception presumes sensing. In humans, perception is aided by the sensory organs. In the domain of AI, the perception mechanism puts the data acquired by the sensors together in a meaningful manner.

11
Linguistic Intelligence

• It is one’s ability to use, comprehend, speak, and write verbal and written language. It is important in interpersonal communication.

12
AI Involved in….

13
What’s Involved in AI

14
Searching
• In which we see how an agent can find a sequence of actions that achieves its goals when no single action will do.
• This chapter describes one kind of goal-based agent called a problem-solving agent.
• Problem-solving agents use atomic representations, that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
• Goal-based agents that use more advanced factored or structured representations are usually called planning agents.

15
Problem formulation
• Problem formulation is the process of deciding
what actions and states to consider, given a
goal.
• A search algorithm takes a problem as input
and returns a solution in the form of an action
sequence.
• Execution: Once a solution is found, the actions it recommends can be carried out. This phase is called execution.
16
Well Defined Problem & Solutions
• A problem can be defined formally by five
components
• The initial state
• Actions
• Transition Model
• Path cost
• Goal Test

17
Types of Searching
• Informed Searching
Informed search algorithms can do quite well given some guidance on where to look for solutions.
• Uninformed Searching
Uninformed search algorithms are given no information about the problem other than its definition.

18
Search
• Search permeates all of AI
• What choices are we searching through?
– Problem solving
Action combinations (move 1, then move 3, then move 2...)
– Natural language
Ways to map words to parts of speech
– Computer vision
Ways to map features to object model
– Machine learning
Possible concepts that fit examples seen so far
– Motion planning
Sequence of moves to reach goal destination
• An intelligent agent is trying to find a set or sequence of actions to
achieve a goal
• This is a goal-based agent
19
20
Uninformed/Blind Search:

• The uninformed search does not use any domain knowledge, such as the closeness or location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
• Uninformed search applies a way in which the search tree is searched without any information about the search space, like the initial state, operators, and test for the goal, so it is also called blind search. It examines each node of the tree until it reaches a goal node.
21
Breadth-first Search:

• Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general-graph search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure, as in the sketch below.
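A minimal Python sketch of BFS, assuming the graph is given as an adjacency dictionary; the graph, node names, and goal below are illustrative, not taken from the slides:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level using a FIFO queue."""
    frontier = deque([start])        # FIFO queue of nodes waiting to be expanded
    parents = {start: None}          # also acts as the visited set
    while frontier:
        node = frontier.popleft()    # shallowest unexpanded node
        if node == goal:             # reconstruct the path via parent links
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for successor in graph.get(node, []):
            if successor not in parents:
                parents[successor] = node
                frontier.append(successor)
    return None                      # no solution exists

graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E'], 'E': ['K']}
print(bfs(graph, 'S', 'K'))          # ['S', 'B', 'E', 'K']
```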
22
Advantages/Disadvantages
• Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e. the one which requires the least number of steps.
• Disadvantages:
• It requires lots of memory, since each level of the tree must be saved in memory in order to expand the next level.
• BFS needs lots of time if the solution is far away from the root node.

23
Example

24
Breadth First Search
• Example:
• In the tree structure below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted arrow, and the traversed path will be:
1. S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
25
Depth-first Search

• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• It is called depth-first search because it starts from the root node and follows each path to its greatest depth node before moving to the next path.
• DFS uses a stack data structure for its implementation (or, equivalently, recursion, as in the sketch below).
• The process of the DFS algorithm is similar to the BFS algorithm.
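A minimal recursive Python sketch of DFS; the example graph is illustrative only:

```python
def dfs(graph, node, goal, visited=None):
    """Depth-first search: follow one path to its greatest depth, then backtrack."""
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for successor in graph.get(node, []):
        if successor not in visited:
            path = dfs(graph, successor, goal, visited)
            if path is not None:
                return [node] + path
    return None                      # dead end: backtrack

graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs(graph, 'S', 'G'))          # ['S', 'C', 'G'] after backtracking out of the A branch
```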

26
Advantages/Disadvantages
• Advantage:
• DFS requires much less memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
• Disadvantage:
• There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
• The DFS algorithm goes deep down in its search, and it may sometimes go into an infinite loop.

27
Example

28
Example
• In the above search tree, we show the flow of depth-first search, which follows the order:
• Root node ---> Left node ---> Right node.
• It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack the tree, as E has no other successor and the goal node has still not been found. After backtracking it will traverse node C and then G, and there it will terminate, as it has found the goal node.

29
Depth-Limited Search Algorithm:

• A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search can solve the drawback of the infinite path in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
• Depth-limited search can terminate with two conditions of failure:
• Standard failure value: it indicates that the problem does not have any solution.
• Cutoff failure value: it indicates that there is no solution for the problem within the given depth limit (see the sketch below).
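A minimal Python sketch that distinguishes the two failure values described above (the adjacency-dictionary graph representation is an assumption, as in the earlier sketches):

```python
def depth_limited_search(graph, node, goal, limit):
    """DFS with a depth limit; returns a path, 'cutoff', or None (standard failure)."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                           # cutoff failure: depth limit reached
    cutoff_occurred = False
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None  # None: standard failure (no solution)
```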

30
Advantages/Disadvantages
• Advantages:
• Depth-limited search is memory efficient.
• Disadvantages:
• Depth-limited search also has a disadvantage
of incompleteness.
• It may not be optimal if the problem has more
than one solution.

31
Example

32
Uniform-cost Search Algorithm:

• Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost.
• Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where the optimal cost is in demand. A uniform-cost search algorithm is implemented using a priority queue, which gives maximum priority to the lowest cumulative cost (see the sketch below). Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.
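A minimal Python sketch using a priority queue ordered by cumulative path cost; the weighted example graph is illustrative:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the node with the lowest cumulative path cost first.
    `graph` maps a node to a list of (successor, step_cost) pairs."""
    frontier = [(0, start, [start])]              # (path_cost, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for successor, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(successor, float('inf')):
                best_cost[successor] = new_cost
                heapq.heappush(frontier, (new_cost, successor, path + [successor]))
    return None

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)]}
print(uniform_cost_search(graph, 'S', 'G'))       # (5, ['S', 'B', 'G'])
```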

33
• Advantages:
• Uniform-cost search is optimal because at every state the path with the least cost is chosen.
• Disadvantages:
• It does not care about the number of steps involved in the search and is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop.

34
Example

35
Iterative deepening depth-first Search:

• The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
• This algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing the depth limit after each iteration until the goal node is found.
• This search algorithm combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency.
• The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown (see the sketch below).
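A minimal Python sketch: depth-limited DFS run repeatedly with limits 0, 1, 2, ...; the max_depth safety bound is an assumption of this sketch, not part of the algorithm as described:

```python
def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Gradually increase the depth limit until the goal is found."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for successor in graph.get(node, []):
            result = dls(successor, limit - 1)
            if result is not None:
                return [node] + result
        return None

    for depth in range(max_depth + 1):            # 1st iteration: depth 0, 2nd: depth 1, ...
        path = dls(start, depth)
        if path is not None:
            return path
    return None
```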

36
• Advantages:
• It combines the benefits of BFS and DFS
search algorithm in terms of fast search and
memory efficiency.
• Disadvantages:
• The main drawback of IDDFS is that it repeats
all the work of the previous phase.

37
Example

38
Example
• 1st Iteration -----> A
2nd Iteration ----> A, B, C
3rd Iteration ------> A, B, D, E, C, F, G
4th Iteration ------> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find
the goal node.

39
Informed Search Algorithms

• So far we have talked about the uninformed search algorithms, which looked through the search space for all possible solutions of the problem without having any additional knowledge about the search space. An informed search algorithm, in contrast, uses knowledge such as how far we are from the goal, path cost, how to reach the goal node, etc. This knowledge helps agents to explore less of the search space and find the goal node more efficiently.
• The informed search algorithm is more useful for large search spaces. Informed search algorithms use the idea of a heuristic, so they are also called heuristic search.

40
Heuristic is a function
• A heuristic is a function which is used in informed search to find the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal.
• The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the pair of states. The value of the heuristic function is always positive.
41
Pure Heuristic Search:

• Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, an OPEN and a CLOSED list. In the CLOSED list, it places those nodes which have already been expanded, and in the OPEN list, it places nodes which have not yet been expanded.
• On each iteration, the node n with the lowest heuristic value is expanded, all its successors are generated, and n is placed in the CLOSED list. The algorithm continues until a goal state is found.

42
Best-first Search Algorithm (Greedy Search):

• The greedy best-first search algorithm always selects the path which appears best at that moment. It is the combination of depth-first search and breadth-first search algorithms. It uses the heuristic function to guide the search. Best-first search allows us to take the advantages of both algorithms. With the help of best-first search, at each step, we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by a heuristic function.

43
Best first search algorithm:

• Step 1: Place the starting node into the OPEN list.
• Step 2: If the OPEN list is empty, stop and return failure.
• Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.
• Step 4: Expand the node n, and generate the successors of node n.
• Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any successor node is a goal node, then return success and terminate the search, else proceed to Step 6.
• Step 6: For each successor node, the algorithm checks its evaluation function f(n), and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, then add it to the OPEN list.
• Step 7: Return to Step 2. (A Python sketch of these steps follows.)
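A minimal Python sketch of the steps above, assuming `h` is a dictionary of heuristic values and the graph is an adjacency dictionary (both illustrative, not from the slides):

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Expand the node with the smallest heuristic value h(n),
    keeping an OPEN priority queue and a CLOSED set."""
    open_list = [(h[start], start, [start])]      # OPEN list ordered by h(n)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)  # Step 3: lowest h(n)
        if node == goal:                          # Step 5: goal test
            return path
        if node in closed:
            continue
        closed.add(node)
        for successor in graph.get(node, []):     # Step 4: expand n
            if successor not in closed:           # Step 6: add unseen nodes to OPEN
                heapq.heappush(open_list, (h[successor], successor, path + [successor]))
    return None                                   # Step 2: OPEN empty -> failure
```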

44
Advantages
• Best first search can switch between BFS and
DFS by gaining the advantages of both the
algorithms.
• This algorithm is more efficient than BFS and
DFS algorithms.

45
Example

46
Disadvantages:

• It can behave as an unguided depth-first search in the worst-case scenario.
• It can get stuck in a loop, like DFS.
• This algorithm is not optimal.

47
• Example:
• Consider the below search problem, and we
will traverse it using greedy best-first search.
At each iteration, each node is expanded using
evaluation function f(n)=h(n) , which is given
in the below table.

48
A* Algorithm
• A* search is the most commonly known form of best-first search.
• The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands fewer nodes of the search tree and provides an optimal result faster.
• In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as f(n) = g(n) + h(n), and this sum is called the fitness number.
49
50
Algorithm
• Step 1: Place the starting node in the OPEN list.
• Step 2: Check if the OPEN list is empty or not; if the list is empty then return failure and stop.
• Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h); if node n is the goal node then return success and stop, otherwise:

51
Algo…..
• Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the evaluation function for n' and place it into the OPEN list.
• Step 5: Else, if node n' is already in OPEN or CLOSED, then attach it to the back pointer which reflects the lowest g(n') value.
• Step 6: Return to Step 2. (A Python sketch follows.)
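A minimal Python sketch of A*, assuming `graph` maps a node to (successor, step_cost) pairs and `h` is a dictionary of heuristic values (illustrative names, not from the slides):

```python
import heapq

def a_star_search(graph, h, start, goal):
    """Expand the node with the smallest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for successor, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(successor, float('inf')):
                best_g[successor] = new_g         # keep the lowest g(n') found so far
                heapq.heappush(open_list,
                               (new_g + h[successor], new_g, successor, path + [successor]))
    return None
```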

52
Advantages:

• The A* search algorithm performs better than other search algorithms.
• The A* search algorithm is optimal and complete.
• This algorithm can solve very complex problems.

53
Disadvantages:

• It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
• The A* search algorithm has some complexity issues.
• The main drawback of A* is its memory requirement, as it keeps all generated nodes in memory, so it is not practical for various large-scale problems.

54
Example

55
Hill Climbing Algorithm

• The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing value to find the peak of the mountain or best solution to the problem. It terminates when it reaches a peak value where no neighbor has a higher value.
• The hill climbing algorithm is a technique which is used for optimizing mathematical problems. One of the widely discussed examples of the hill climbing algorithm is the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman.
• It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.

56
Hill ….
• A node of hill climbing algorithm has two
components which are state and value.
• Hill Climbing is mostly used when a good
heuristic is available.
• In this algorithm, we don't need to maintain
and handle the search tree or graph as it only
keeps a single current state.

57
Features of hill climbing Algo
• Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
• Greedy approach: The hill-climbing search moves in the direction which optimizes the cost.
• No backtracking: It does not backtrack the search space, as it does not remember the previous states.

58
State-space Diagram for Hill Climbing:

• The state-space landscape is a graphical representation of the hill-climbing algorithm which shows a graph between the various states of the algorithm and the objective function/cost.
• On the Y-axis we have the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum (and avoid getting trapped in a local minimum).
• If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum (and avoid getting trapped in a local maximum).
59
State Diagram

60
• Local Maximum: A local maximum is a state which is better than its neighbor states, but there is also another state which is higher than it.
• Global Maximum: The global maximum is the best possible state of the state-space landscape. It has the highest value of the objective function.
• Current state: It is the state in the landscape diagram where an agent is currently present.
• Flat local maximum: It is a flat space in the landscape where all the neighbor states of the current state have the same value.
• Shoulder: It is a plateau region which has an uphill edge.

61
Types of Hill Climbing Algorithm:

• Simple hill Climbing:


• Steepest-Ascent hill-climbing:
• Stochastic hill Climbing:

62
Simple hill climbing
• Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbor node state at a time, selects the first one which improves the current cost, and sets it as the current state. It checks only one successor state, and if that is better than the current state, it moves there, else it stays in the same state. This algorithm has the following features:
• Less time consuming
• Less optimal solution, and the solution is not guaranteed

63
Algorithm for Simple Hill Climbing:

• Step 1: Evaluate the initial state; if it is the goal state then return success and stop.
• Step 2: Loop until a solution is found or there is no new operator left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check the new state:
• If it is the goal state, then return success and quit.
• Else, if it is better than the current state, then assign the new state as the current state.
• Else, if it is not better than the current state, then return to Step 2.
• Step 5: Exit. (A Python sketch follows.)
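A minimal Python sketch of simple hill climbing; `neighbours` and `value` are assumed problem-specific functions supplied by the caller (illustrative names):

```python
def simple_hill_climbing(initial_state, neighbours, value, max_steps=1000):
    """Move to the first neighbour that improves the objective value;
    stop when no generated neighbour is better than the current state."""
    current = initial_state
    for _ in range(max_steps):
        improved = False
        for candidate in neighbours(current):      # generate-and-test, one at a time
            if value(candidate) > value(current):  # first improving neighbour wins
                current = candidate
                improved = True
                break
        if not improved:                           # local maximum or plateau reached
            return current
    return current
```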
64
Steepest-Ascent hill climbing:

• The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state. This algorithm consumes more time, as it searches for multiple neighbors.

65
• Step 1: Evaluate the initial state, if it is goal state then return
success and stop, else make current state as initial state.
• Step 2: Loop until a solution is found or the current state
does not change.
• Let SUCC be a state such that any successor of the current state
will be better than it.
• For each operator that applies to the current state:
• Apply the new operator and generate a new state.
• Evaluate the new state.
• If it is goal state, then return it and quit, else compare it to the SUCC.
• If it is better than SUCC, then set new state as SUCC.
• If the SUCC is better than the current state, then set the current state to SUCC. (A compact Python sketch follows.)
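A compact Python sketch of the steepest-ascent variant, with the same assumed `neighbours` and `value` helpers as before:

```python
def steepest_ascent_hill_climbing(initial_state, neighbours, value):
    """Examine all neighbours and move to the best one (SUCC);
    stop when no neighbour is better than the current state."""
    current = initial_state
    while True:
        succ = max(neighbours(current), key=value, default=current)
        if value(succ) <= value(current):          # no improvement: stop
            return current
        current = succ
```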

66
Stochastic hill climbing:

• Stochastic hill climbing does not examine all its neighbors before moving. Rather, this search algorithm selects one neighbor node at random and decides whether to take it as the current state or examine another state.

67
Adversarial Search

• Adversarial search is a search in which we examine the problem which arises when we try to plan ahead in a world where other agents are planning against us.
• In previous topics, we have studied search strategies which are only associated with a single agent that aims to find the solution, often expressed in the form of a sequence of actions.
68
Adversarial Search
• But there might be some situations where more than one agent is searching for the solution in the same search space; this situation usually occurs in game playing.
• An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the other agents and plays against them. Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
• So, searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as games.

69
Types of AI Games
• Games are modeled as a search problem together with a heuristic evaluation function; these are the two main factors which help to model and solve games in AI.

70
• Perfect information: A game with perfect information is one in which agents can look at the complete board. Agents have all the information about the game, and they can also see each other's moves. Examples are Chess, Checkers, Go, etc.
• Imperfect information: If in a game agents do not have all the information about the game and are not aware of what's going on, such games are called games with imperfect information, for example Battleship, blind tic-tac-toe, Bridge, etc.
71
• Deterministic games: Deterministic games are those games which follow a strict pattern and set of rules, and there is no randomness associated with them. Examples are Chess, Checkers, Go, tic-tac-toe, etc.
• Non-deterministic games: Non-deterministic games are those games which have various unpredictable events and a factor of chance or luck. This factor of chance or luck is introduced by dice or cards. These are random, and each action's response is not fixed. Such games are also called stochastic games.

72
Mini-Max Algorithm in Artificial Intelligence

• The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
• The mini-max algorithm uses recursion to search through the game tree.
• The min-max algorithm is mostly used for game playing in AI, such as Chess, Checkers, tic-tac-toe, Go, and various other two-player games. This algorithm computes the minimax decision for the current state.
73
• In this algorithm two players play the game; one is called MAX and the other is called MIN.
• Both players fight it out, as the opponent player gets the minimum benefit while they get the maximum benefit.
• Both players of the game are opponents of each other, where MAX will select the maximized value and MIN will select the minimized value.
• The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds (see the sketch below).
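A minimal Python sketch of minimax over a generic game tree; `children` and `utility` are assumed game-specific functions supplied by the caller:

```python
def minimax(node, depth, maximizing, children, utility):
    """Recursive minimax: MAX picks the largest child value, MIN the smallest."""
    kids = children(node)
    if depth == 0 or not kids:                    # terminal node (or depth cut-off)
        return utility(node)
    if maximizing:
        return max(minimax(c, depth - 1, False, children, utility) for c in kids)
    return min(minimax(c, depth - 1, True, children, utility) for c in kids)
```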
74
Working of Min-Max Algorithm:

• The working of the minimax algorithm can be easily described using an example. Below we take an example of a game tree which represents a two-player game.
• In this example, there are two players; one is called Maximizer and the other is called Minimizer.
• Maximizer will try to get the maximum possible score, and Minimizer will try to get the minimum possible score.
• This algorithm applies DFS, so in this game tree we have to go all the way down to the leaves to reach the terminal nodes.
• At the terminal nodes, the terminal values are given, so we compare those values and backtrack up the tree until the initial state occurs. Following are the main steps involved in solving the two-player game tree:

75
Step-1:
• In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the below tree diagram, let's take A as the initial state of the tree.
• Suppose the maximizer takes the first turn, which has a worst-case initial value of -∞, and the minimizer will take the next turn, which has a worst-case initial value of +∞.
76
77
Step 2: 
• Now, first we find the utility value for the Maximizer; its initial value is -∞, so we compare each value in the terminal state with the initial value of the Maximizer and determine the higher node values. It finds the maximum among them all.
78
79
Step 3: 
• In the next step, it's the turn of the minimizer, so it will compare all node values with +∞ and will find the 3rd-layer node values.

80
81
Step 4: 
• Now it's the turn of the Maximizer, and it will again choose the maximum of all node values and find the maximum value for the root node. In this game tree, there are only 4 layers, hence we reach the root node immediately, but in real games, there will be more than 4 layers.

82
83
Disadvantages
• The main drawback of the minimax algorithm is that it gets really slow for complex games such as Chess, Go, etc.
• These types of games have a huge branching factor, and the player has lots of choices to decide between.
• This limitation of the minimax algorithm can be improved by alpha-beta pruning, which we discuss in the next topic.

84
Alpha-Beta Pruning

• Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
• As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it roughly in half.
• Hence there is a technique by which we can compute the correct minimax decision without checking each node of the game tree, and this technique is called pruning.
• It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.

85
• Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.
• The two parameters can be defined as:
• Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
• Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
• Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm does, but it removes all the nodes which do not really affect the final decision but make the algorithm slow. Hence by pruning these nodes, it makes the algorithm fast.
• Condition for alpha-beta pruning:
• The main condition required for alpha-beta pruning is:
α >= β
86
Key points about alpha-beta pruning:

• The Max player will only update the value of alpha.
• The Min player will only update the value of beta.
• While backtracking the tree, the node values will be passed to upper nodes instead of the values of alpha and beta.
• We will only pass the alpha and beta values down to the child nodes (see the sketch below).
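A minimal Python sketch of minimax with alpha-beta pruning; as in the earlier minimax sketch, `children` and `utility` are assumed game-specific functions:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, utility):
    """Minimax with alpha-beta pruning: stop exploring a branch once alpha >= beta."""
    kids = children(node)
    if depth == 0 or not kids:
        return utility(node)
    if maximizing:
        value = float('-inf')
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, utility))
            alpha = max(alpha, value)             # only MAX updates alpha
            if alpha >= beta:
                break                             # prune the remaining children
        return value
    value = float('inf')
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, utility))
        beta = min(beta, value)                   # only MIN updates beta
        if alpha >= beta:
            break
    return value
```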
87
Working of Alpha-Beta Pruning:

• Step 1: In the first step, the Max player will start the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

88
89
• Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D; the node value will also be 3.
• Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn. Now β = +∞ is compared with the available subsequent node values, i.e. min(∞, 3) = 3; hence at node B now α = -∞ and β = 3.

90
91
• In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are also passed down.
• Step 4: At node E, Max will take its turn, and the value of alpha will change. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3, where α >= β, so the right successor of E will be pruned, and the algorithm will not traverse it. The value at node E will be 5.

92
93
• Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha will be changed; the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.
• At node C, α = 3 and β = +∞, and the same values are passed on to node F.
• Step 6: At node F, again the value of α is compared with the left child, which is 0, and max(3, 0) = 3, and then compared with the right child, which is 1, and max(3, 1) = 3, so α remains 3, but the node value of F becomes 1.

94
95
• Step 7: Node F returns the node value 1 to node C; at C, α = 3 and β = +∞. Here the value of beta will be changed; it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again it satisfies the condition α >= β, so the next child of C, which is G, will be pruned, and the algorithm will not compute the entire sub-tree G.

96
97
• Step 8: C now returns the value 1 to A; here the best value for A is max(3, 1) = 3. The final game tree shows which nodes were computed and which nodes were never computed. Hence the optimal value for the maximizer is 3 for this example.

98
99
Constraint Satisfaction Problems
• We have seen techniques like local search and adversarial search for solving different problems. The objective of every problem-solving technique is the same, i.e., to find a solution that reaches the goal.
• However, in adversarial search and local search, there were no constraints on the agents while solving the problems and reaching their solutions.
100
• Constraint satisfaction is a technique where a problem is solved when its values satisfy certain constraints or rules of the problem. This type of technique leads to a deeper understanding of the problem structure as well as its complexity.
• Constraint satisfaction depends on three components, namely:
• X: a set of variables.
• D: a set of domains in which the variables reside. There is a specific domain for each variable.
• C: a set of constraints which must be satisfied by the set of variables.
101
Solving Constraint Satisfaction Problems

• The requirements to solve a constraint satisfaction problem (CSP) are:
• A state space
• The notion of a solution
• A state in the state space is defined by assigning values to some or all variables, such as
• {X1 = v1, X2 = v2, and so on…}.
102
• An assignment of values to variables can be characterized in three ways:
• Consistent or Legal Assignment: An assignment which does not violate any constraint or rule is called a consistent or legal assignment.
• Complete Assignment: An assignment where every variable is assigned a value, and the solution to the CSP remains consistent. Such an assignment is known as a complete assignment.
• Partial Assignment: An assignment which assigns values to only some of the variables. Such assignments are called partial assignments.

103
Types of Domains in CSP

• The following two types of domains are used by the variables:
• Discrete Domain: A domain with a countable (and possibly infinite) set of values that a variable can take, for example the set of all integers.
• Finite Domain: A domain with a finite number of possible values for a variable; most CSPs studied in AI use finite domains.
104
Constraint Types in CSP

• With respect to the variables, there are basically the following types of constraints:
• Unary Constraints: The simplest type of constraint; it restricts the value of a single variable.
• Binary Constraints: A constraint which relates two variables, for example a constraint that x1 must be less than x2.
• Global Constraints: A constraint which involves an arbitrary number of variables. (A sketch of backtracking search for a CSP follows.)
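A minimal Python sketch of backtracking search for a CSP with variables X, domains D, and constraints C; the three-variable example at the end is illustrative only:

```python
def backtracking_search(variables, domains, constraints, assignment=None):
    """Assign variables one by one, keeping the partial assignment consistent.
    Each constraint is a function that returns False when it is violated."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):              # complete, consistent assignment
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):    # consistent (legal) so far
            result = backtracking_search(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                            # undo and try the next value
    return None

# Illustrative CSP: X1, X2, X3 in {1, 2, 3} with X1 < X2 and X2 < X3
variables = ['X1', 'X2', 'X3']
domains = {v: [1, 2, 3] for v in variables}
constraints = [
    lambda a: a.get('X1', 0) < a.get('X2', 4),         # binary constraint on X1, X2
    lambda a: a.get('X2', 0) < a.get('X3', 4),         # binary constraint on X2, X3
]
print(backtracking_search(variables, domains, constraints))   # {'X1': 1, 'X2': 2, 'X3': 3}
```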

105
106
knowledge representation
• Humans are best at understanding, reasoning, and interpreting knowledge. Humans know things, which is knowledge, and as per their knowledge they perform various actions in the real world. How machines do all these things comes under knowledge representation and reasoning.

107
• Knowledge representation and reasoning (KR, KRR) is the part of artificial intelligence which is concerned with how AI agents think and how thinking contributes to the intelligent behavior of agents.
• It is responsible for representing information about the real world so that a computer can understand it and can utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.
• It is also a way of describing how we can represent knowledge in artificial intelligence. Knowledge representation is not just storing data in some database; it also enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently like a human.

108
What to Represent:

• Following are the kinds of knowledge which need to be represented in AI systems:
• Objects: All the facts about objects in our world domain. E.g., guitars have strings, trumpets are brass instruments.
• Events: Events are the actions which occur in our world.
• Performance: It describes behavior which involves knowledge about how to do things.
• Meta-knowledge: It is knowledge about what we know.
• Facts: Facts are the truths about the real world and what we represent.
• Knowledge base: The central component of knowledge-based agents is the knowledge base, represented as KB. The knowledge base is a group of sentences (here, "sentence" is used as a technical term and is not identical to a sentence in the English language).

109
110
Declarative Knowledge:
• Declarative knowledge is knowing about something.
• It includes concepts, facts, and objects.
• It is also called descriptive knowledge and is expressed in declarative sentences.
• It is simpler than procedural knowledge.

111
Procedural Knowledge

• It is also known as imperative knowledge.
• Procedural knowledge is a type of knowledge which is responsible for knowing how to do something.
• It can be directly applied to any task.
• It includes rules, strategies, procedures, agendas, etc.
• Procedural knowledge depends on the task to which it can be applied.
112
• 3. Meta-knowledge:
• Knowledge about the other types of knowledge is called meta-knowledge.
• 4. Heuristic knowledge:
• Heuristic knowledge represents the knowledge of some experts in a field or subject.
• Heuristic knowledge consists of rules of thumb based on previous experience and awareness of approaches, which tend to work well but are not guaranteed.
113
Structural knowledge:

• Structural knowledge is basic knowledge for problem-solving.
• It describes relationships between various concepts, such as kind-of, part-of, and grouping of something.
• It describes the relationships that exist between concepts or objects.

114
The relation between knowledge and intelligence:

• Knowledge of the real world plays a vital role in intelligence, and the same holds for creating artificial intelligence. Knowledge plays an important role in demonstrating intelligent behavior in AI agents. An agent is only able to act accurately on some input when it has some knowledge or experience about that input.

115
• Let's suppose you meet some person who is speaking in a language which you don't know; how will you be able to act on that? The same thing applies to the intelligent behavior of agents.
• As we can see in the diagram below, there is one decision maker which acts by sensing the environment and using knowledge. But if the knowledge part is not present, then it cannot display intelligent behavior.

116
117
AI knowledge cycle:

• An artificial intelligence system has the following components for displaying intelligent behavior:
• Perception
• Learning
• Knowledge Representation and Reasoning
• Planning
• Execution

118
AI knowledge cycle:

119
• The above diagram shows how an AI system can interact with the real world and what components help it to show intelligence.
• An AI system has a Perception component by which it retrieves information from its environment.
• This can be visual, audio, or another form of sensory input. The Learning component is responsible for learning from the data captured by the Perception component.
• In the complete cycle, the main components are Knowledge Representation and Reasoning.
• These two components are involved in showing intelligence in machines, like in humans. These two components are independent of each other but also coupled together. Planning and Execution depend on the analysis of Knowledge Representation and Reasoning.
120
Approaches to knowledge representation:

• There are mainly four approaches to knowledge representation.
• 1. Simple relational knowledge:
• It is the simplest way of storing facts, using the relational method; each fact about a set of objects is set out systematically in columns.
• This approach to knowledge representation is famous in database systems, where the relationship between different entities is represented.
• This approach has little opportunity for inference.
121
Example

122
2. Inheritable knowledge:

• In the inheritable knowledge approach, all data must be stored in a hierarchy of classes.
• All classes should be arranged in a generalized form or a hierarchical manner.
• In this approach, we apply the inheritance property.
• Elements inherit values from other members of a class.
• This approach contains inheritable knowledge which shows a relation between instance and class, called the instance relation.
• Every individual frame can represent a collection of attributes and their values.
• In this approach, objects and values are represented in boxed nodes.
• We use arrows which point from objects to their values.
123
124
3. Inferential knowledge:

• The inferential knowledge approach represents knowledge in the form of formal logic.
• This approach can be used to derive more facts.
• It guarantees correctness.
• Example: Let's suppose there are two statements:
• Marcus is a man
• All men are mortal
Then it can be represented as:

man(Marcus)
∀x man(x) → mortal(x)

125
4. Procedural knowledge:

• The procedural knowledge approach uses small programs and code which describe how to do specific things and how to proceed.
• In this approach, one important rule is used, which is the If-Then rule.
• In this knowledge, we can use various coding languages such as the LISP language and the Prolog language.
• We can easily represent heuristic or domain-specific knowledge using this approach.
• But it is not necessarily possible to represent all cases in this approach.

126
Requirements for knowledge Representation system:

• A good knowledge representation system must possess the following properties.
1. Representational Accuracy:
The KR system should have the ability to represent all kinds of required knowledge.
2. Inferential Adequacy:
The KR system should have the ability to manipulate the representational structures to produce new knowledge corresponding to the existing structures.
3. Inferential Efficiency:
The ability to direct the inferential knowledge mechanism in the most productive directions by storing appropriate guides.
4. Acquisitional Efficiency:
The ability to acquire new knowledge easily using automatic methods.

127
Techniques of knowledge representation

• There are mainly four ways of knowledge representation, which are given as follows:
1. Logical Representation
2. Semantic Network Representation
3. Frame Representation
4. Production Rules

128
129
1. Logical Representation

• Logical representation is a language with some concrete rules which deals with propositions and has no ambiguity in representation. Logical representation means drawing conclusions based on various conditions. This representation lays down some important communication rules. It consists of precisely defined syntax and semantics which support sound inference. Each sentence can be translated into logic using syntax and semantics.

130
• Syntax:
• Syntax consists of the rules which decide how we can construct legal sentences in the logic.
• It determines which symbols we can use in knowledge representation.
• It also determines how to write those symbols.
• Semantics:
• Semantics consists of the rules by which we can interpret a sentence in the logic.
• Semantics also involves assigning a meaning to each sentence.
• Logical representation can be categorised into mainly two logics:
1. Propositional logic
2. Predicate logic

131
• Advantages of logical representation:
1. Logical representation enables us to do logical reasoning.
2. Logical representation is the basis for programming languages.
• Disadvantages of logical representation:
1. Logical representations have some restrictions and are challenging to work with.
2. The logical representation technique may not be very natural, and inference may not be very efficient.

132
2. Semantic Network Representation

• Semantic networks are an alternative to predicate logic for knowledge representation. In semantic networks, we can represent our knowledge in the form of graphical networks. This network consists of nodes representing objects and arcs which describe the relationships between those objects. Semantic networks can categorize objects in different forms and can also link those objects. Semantic networks are easy to understand and can be easily extended.
133
• This representation consists of mainly two types of relations:
1. IS-A relation (Inheritance)
2. Kind-of relation
• Example: Following are some statements which we need to represent in the form of nodes and arcs.
• Statements:
1. Jerry is a cat.
2. Jerry is a mammal.
3. Jerry is owned by Priya.
4. Jerry is brown colored.
5. All mammals are animals.

134
In the above diagram, we have represented the different types of knowledge in the form of nodes and arcs. Each object is connected with another object by some relation.
135
Drawbacks in Semantic representation:

1. Semantic networks take more computational time at runtime, as we need to traverse the complete network tree to answer some questions. It might be possible, in the worst-case scenario, that after traversing the entire tree we find that the solution does not exist in this network.
2. Semantic networks try to model human-like memory (which has about 10^15 neurons and links) to store the information, but in practice it is not possible to build such a vast semantic network.
3. These types of representations are inadequate as they do not have any equivalent quantifiers, e.g., for all, for some, none, etc.
4. Semantic networks do not have any standard definition for the link names.
5. These networks are not intelligent and depend on the creator of the system.

136
Advantages of Semantic network:

1. Semantic networks are a natural representation of knowledge.
2. Semantic networks convey meaning in a transparent manner.
3. These networks are simple and easily understandable.

137
3. Frame Representation

• A frame is a record-like structure which consists of a collection of attributes and their values to describe an entity in the world. Frames are the AI data structure which divides knowledge into substructures by representing stereotyped situations. A frame consists of a collection of slots and slot values. These slots may be of any type and size. Slots have names and values which are called facets.
138
Propositional logic in Artificial intelligence

• Propositional logic (PL) is the simplest form of logic, where all the statements are made by propositions.
• A proposition is a declarative statement which is either true or false. It is a technique of knowledge representation in logical and mathematical form.

139
Examples:

a) It is Sunday.
b) The Sun rises in the West. (False proposition)
c) 3 + 3 = 7. (False proposition)
d) 5 is a prime number.

140
Following are some basic facts about
propositional logic:
• Propositional logic is also called Boolean logic, as it works on 0 and 1.
• In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
• A proposition can be either true or false, but it cannot be both.
• Propositional logic consists of objects, relations or functions, and logical connectives.
• These connectives are also called logical operators.
• Propositions and connectives are the basic elements of propositional logic.

141
Cont…..
• A connective can be said to be a logical operator which connects two sentences.
• A proposition formula which is always true is called a tautology; it is also called a valid sentence.
• A proposition formula which is always false is called a contradiction.
• A proposition formula which can take both true and false values is called a contingency.
• Statements which are questions, commands, or opinions are not propositions; for example, "Where is Rohini", "How are you", and "What is your name" are not propositions.

142
• Syntax of propositional logic:
• The syntax of propositional logic defines the
allowable sentences for the knowledge
representation. There are two types of
Propositions:
1.Atomic Propositions
2.Compound propositions

143
Atomic Proposition: 
• Atomic propositions are simple propositions. An atomic proposition consists of a single proposition symbol. These are sentences which must be either true or false.
a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.
b) "The Sun is cold" is also a proposition, as it is a false fact.

144
Compound proposition: 
• Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.
• Example:
a) "It is raining today, and the street is wet."
b) "Ali is a doctor, and his clinic is in Mansehra."

145
Logical Connectives:

• Logical connectives are used to connect two simpler propositions or to represent a sentence logically.
• We can create compound propositions with the help of logical connectives. There are mainly five connectives, which are given as follows:

146
Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal.
Conjunction: A sentence which has the ∧ connective, such as P ∧ Q, is called a conjunction.
Example: Rohan is intelligent and hardworking. It can be written as:
P = Rohan is intelligent,
Q = Rohan is hardworking. → P ∧ Q.

147
Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
Implication: A sentence such as P → Q is called an implication. Implications are also known as if-then rules. It can be represented as:
            If it is raining, then the street is wet.
        Let P = It is raining, and Q = The street is wet, so it is represented as P → Q.

148
• Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. Example: "I am breathing if and only if I am alive."
            P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q. (A small Python sketch of these connectives follows.)
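A small Python sketch of the five connectives as truth functions (illustrative only, not from the slides):

```python
import itertools

NOT     = lambda p: not p               # ¬P
AND     = lambda p, q: p and q          # P ∧ Q
OR      = lambda p, q: p or q           # P ∨ Q
IMPLIES = lambda p, q: (not p) or q     # P → Q
IFF     = lambda p, q: p == q           # P ⇔ Q

# Truth table for the implication P → Q
for p, q in itertools.product([True, False], repeat=2):
    print(p, q, IMPLIES(p, q))
```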

149
Propositional Logic Connectives:

150
Cont….

151
152
153
Rules of Inference in Artificial intelligence

• In artificial intelligence, we need intelligent computers which can create new logic from old logic or from evidence, so generating conclusions from evidence and facts is termed inference.

154
Inference rules:

• Inference rules are the templates for generating valid arguments. Inference rules are applied to derive proofs in artificial intelligence, and a proof is a sequence of conclusions that leads to the desired goal.
• In inference rules, the implication among all the connectives plays an important role. Following are some terminologies related to inference rules:
• Implication: It is one of the logical connectives and can be represented as P → Q. It is a Boolean expression.
• Converse: The converse of an implication means the right-hand side proposition goes to the left-hand side and vice versa. It can be written as Q → P.
• Contrapositive: The negation of the converse is termed the contrapositive, and it can be represented as ¬Q → ¬P.
• Inverse: The negation of the implication is called the inverse. It can be represented as ¬P → ¬Q.

155
Types of Inference rules:

• 1. Modus Ponens:
• The Modus Ponens rule is one of the most important rules of inference. It states that if P and P → Q are true, then we can infer that Q will be true. It can be represented as:
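In standard premise/conclusion notation:
P → Q
P
∴ Q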

156
Example:
• Statement-1: "If I am sleepy then I go to bed"
==> P→ Q
Statement-2: "I am sleepy" ==> P
Conclusion: "I go to bed." ==> Q.
Hence, we can say that, if P→ Q is true and P
is true then Q will be true.

157
2. Modus Tollens:

• The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P will also be true. It can be represented as:
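In standard premise/conclusion notation:
P → Q
¬Q
∴ ¬P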

158
Example
• Statement-1: "If I am sleepy then I go to bed"
==> P→ Q
Statement-2: "I do not go to the bed."==> ~Q
Statement-3: Which infers that "I am not
sleepy" => ~P

159
3. Hypothetical Syllogism:

• The Hypothetical Syllogism rule states that P → R is true whenever P → Q is true and Q → R is true. It can be represented as the following notation:
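In standard premise/conclusion notation:
P → Q
Q → R
∴ P → R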
• Example:
• Statement-1: If you have my home key then you can
unlock my home. P→Q
Statement-2: If you can unlock my home then you
can take my money. Q→R
Conclusion: If you have my home key then you can
take my money. P→R
160
4. Disjunctive Syllogism:

• The Disjunctive Syllogism rule states that if P ∨ Q is true and ¬P is true, then Q will be true. It can be represented as:
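In standard premise/conclusion notation:
P ∨ Q
¬P
∴ Q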

161
Example:

• Statement-1: I have a vanilla or a chocolate ice-cream. ==> P ∨ Q
Statement-2: I do not have a vanilla ice-cream. ==> ¬P
Conclusion: I have a chocolate ice-cream. ==> Q

162
First-Order Logic in Artificial intelligence

• In the topic of propositional logic, we have seen how to represent statements using propositional logic. But unfortunately, in propositional logic we can only represent facts, which are either true or false. PL is not sufficient to represent complex sentences or natural language statements. Propositional logic has very limited expressive power. Consider the following sentences, which we cannot represent using PL:
• "Some humans are intelligent", or
• "Sachin likes cricket."

163
• To represent the above statements, PL is not sufficient, so we require a more powerful logic, such as first-order logic.
• First-Order logic:
• First-order logic is another way of knowledge representation in artificial intelligence. It is an extension of propositional logic.
• FOL is sufficiently expressive to represent natural language statements in a concise way.
• First-order logic is also known as predicate logic or first-order predicate logic. First-order logic is a powerful language that describes information about objects in an easier way and can also express the relationships between those objects.
164
• First-order logic (like natural language) does not only assume that the world contains facts, as propositional logic does, but also assumes the following things in the world:
• Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ......
• Relations: These can be unary relations, such as: red, round, is adjacent; or n-ary relations, such as: the sister of, brother of, has color, comes between.
• Functions: father of, best friend, third inning of, end of, ......
• Like a natural language, first-order logic also has two main parts:
• Syntax
• Semantics

165
Syntax of First-Order logic:

• The syntax of FOL determines which collections of symbols are logical expressions in first-order logic. The basic syntactic elements of first-order logic are symbols. We write statements in short-hand notation in FOL.

166
Basic Elements of First-order logic:

• Following are the basic elements of FOL syntax:

167
Constants: 1, 2, A, John, Mumbai, cat, ....
Variables: x, y, z, a, b, ....
Predicates: Brother, Father, >, ....
Functions: sqrt, LeftLegOf, ....
Connectives: ∧, ∨, ¬, ⇒, ⇔
Equality: ==
Quantifiers: ∀, ∃

168
Atomic sentences:

• Atomic sentences are the most basic sentences of first-order logic. These sentences are formed from a predicate symbol followed by a parenthesis with a sequence of terms.
• We can represent atomic sentences as Predicate(term1, term2, ......, term n).
• Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
                Chinky is a cat: => cat(Chinky).
169
Complex Sentences:

• Complex sentences are made by combining atomic sentences using connectives.
• First-order logic statements can be divided into two parts:
• Subject: The subject is the main part of the statement.
• Predicate: A predicate can be defined as a relation which binds two atoms together in a statement.
• Consider the statement "x is an integer." It consists of two parts: the first part, x, is the subject of the statement, and the second part, "is an integer," is known as the predicate.

170
Quantifiers in First-order logic:

• A quantifier is a language element which generates quantification, and quantification specifies the quantity of specimens in the universe of discourse.
• These are the symbols that permit us to determine or identify the range and scope of a variable in a logical expression. There are two types of quantifier:
• Universal quantifier (for all, everyone, everything)
• Existential quantifier (for some, at least one)

171
Universal Quantifier:

• The universal quantifier is a symbol of logical representation which specifies that the statement within its range is true for everything or every instance of a particular thing.
• The universal quantifier is represented by the symbol ∀, which resembles an inverted A.

172
• If x is a variable, then ∀x is read as:
• For all x
• For each x
• For every x.
• Example:
• All men drink coffee.
• Let the variable x refer to a man, so all x can be represented in the UOD as below:
173
∀x man(x) → drink(x, coffee).
It will be read as: For all x, if x is a man, then x drinks coffee.

174
Existential Quantifier:

• Existential quantifiers are the type of quantifiers which express that the statement within their scope is true for at least one instance of something.
• It is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.

175
• If x is a variable, then the existential quantifier will be ∃x or ∃(x), and it will be read as:
• There exists an 'x.'
• For some 'x.'
• For at least one 'x.'

176
∃x: boys(x) ∧ intelligent(x)
It will be read as: There is some x such that x is a boy and x is intelligent.
177
• Points to remember:
• The main connective for universal quantifier ∀ is
implication →.
• The main connective for existential quantifier ∃ is
and ∧.
• Properties of Quantifiers:
• In universal quantifier, ∀x∀y is similar to ∀y∀x.
• In Existential quantifier, ∃x∃y is similar to ∃y∃x.
• ∃x∀y is not similar to ∀y∃x.
178
Some Examples of FOL using quantifier:

• 1. All birds fly.
In this question the predicate is "fly(bird)."
Since all birds fly, it will be represented as follows:
              ∀x bird(x) → fly(x).
• 2. Every man respects his parent.
In this question, the predicate is "respect(x, y)," where x = man and y = parent.
Since it applies to every man, we will use ∀, and it will be represented as follows:
              ∀x man(x) → respects(x, parent).
179
• 3. Some boys play cricket.
In this question, the predicate is "play(x, y)," where x = boys and y = game. Since there are some boys, we will use ∃, and (because the main connective for ∃ is ∧) it will be represented as:
              ∃x boys(x) ∧ play(x, cricket).

180
• 4. Not all students like both Mathematics and Science.
In this question, the predicate is "like(x, y)," where x = student and y = subject.
Since not all students do, we will use ∀ with negation, giving the following representation:
              ¬∀(x) [ student(x) → like(x, Mathematics) ∧ like(x, Science) ].
181
• 5. Only one student failed in Mathematics.
In this question, the predicate is "failed(x, y)," where x = student and y = subject.
Since there is only one student who failed in Mathematics, we will use the following representation:
              ∃(x) [ student(x) ∧ failed(x, Mathematics) ∧ ∀(y) [ ¬(x==y) ∧ student(y) → ¬failed(y, Mathematics) ] ].
182
Free and Bound Variables:

• The quantifiers interact with the variables which appear in them in a suitable way. There are two types of variables in first-order logic, which are given below:
• Free Variable: A variable is said to be a free variable in a formula if it occurs outside the scope of the quantifier.
          Example: ∀x ∃(y) [P(x, y, z)], where z is a free variable.
• Bound Variable: A variable is said to be a bound variable in a formula if it occurs within the scope of the quantifier.
          Example: ∀x ∀y [A(x) ∧ B(y)], here x and y are bound variables.

183
Knowledge Engineering in First-order logic

• The process of constructing a knowledge base in first-order logic is called knowledge engineering. In knowledge engineering, someone who investigates a particular domain, learns the important concepts of that domain, and generates a formal representation of the objects is known as a knowledge engineer.

184
• In this topic, we will understand the knowledge engineering process in an electronic circuit domain, which is already familiar. This approach is mainly suitable for creating a special-purpose knowledge base.
• The knowledge-engineering process:
• Following are the main steps of the knowledge-engineering process. Using these steps, we will develop a knowledge base which will allow us to reason about a digital circuit (a one-bit full adder), which is given below.

185
186
Steps of Knowledge Engineering
• 1. Identify the task:
• The first step of the process is to identify the task, and for the
digital circuit, there are various reasoning tasks.
• At the first level or highest level, we will examine the
functionality of the circuit:
• Does the circuit add properly?
• What will be the output of gate A2, if all the inputs are high?
• At the second level, we will examine the circuit structure details
such as:
• Which gate is connected to the first input terminal?
• Does the circuit have feedback loops?

187
• 2. Assemble the relevant knowledge:
• In the second step, we will assemble the relevant
knowledge which is required for digital circuits. So for
digital circuits, we have the following required
knowledge:
• Logic circuits are made up of wires and gates.
• Signal flows through wires to the input terminal of the
gate, and each gate produces the corresponding output
which flows further.
• In this logic circuit, there are four types of gates
used: AND, OR, XOR, and NOT.
• All these gates have one output terminal and two input
terminals (except NOT gate, it has one input terminal).
188
• 3. Decide on vocabulary:
• The next step of the process is to select functions, predicates, and constants to represent the circuits, terminals, signals, and gates. Firstly we distinguish the gates from each other and from other objects. Each gate is represented as an object which is named by a constant, such as Gate(X1). The functionality of each gate is determined by its type, which is taken as a constant such as AND, OR, XOR, or NOT. Circuits will be identified by a predicate: Circuit(C1).
• For a terminal, we will use the predicate: Terminal(x).
• For gate input, we will use the function In(1, X1) for denoting the first input terminal of the gate, and for the output terminal we will use Out(1, X1).
• The function Arity(c, i, j) is used to denote that circuit c has i inputs and j outputs.
• The connectivity between gates can be represented by the predicate Connect(Out(1, X1), In(1, X1)).
• We use a unary predicate On(t), which is true if the signal at a terminal is on.

189
• 4. Encode general knowledge about the domain:
• To encode the general knowledge about the logic circuit, we need the following rules:
• If two terminals are connected, then they have the same signal; this can be represented as:
1. ∀ t1, t2  Terminal(t1) ∧ Terminal(t2) ∧ Connect(t1, t2) → Signal(t1) = Signal(t2).
• The signal at every terminal will have either value 0 or 1; this is represented as:
2. ∀ t  Terminal(t) → Signal(t) = 1 ∨ Signal(t) = 0.
• The Connect predicate is commutative:
3. ∀ t1, t2  Connect(t1, t2) → Connect(t2, t1).
• Representation of types of gates:
4. ∀ g  Gate(g) ∧ r = Type(g) → r = OR ∨ r = AND ∨ r = XOR ∨ r = NOT.

190
• 5. Encode a description of the problem instance:
• Now we encode the problem of circuit C1; firstly we categorize the circuit and its gate components. This step is easy if an ontology of the problem has already been thought out. This step involves writing simple atomic sentences about instances of concepts from the ontology.
• For the given circuit C1, we can encode the problem instance in atomic sentences as below:
• Since in the circuit there are two XOR gates, two AND gates, and one OR gate, the atomic sentences for these gates will be:
191
1. For the XOR gates: Type(X1) = XOR, Type(X2) = XOR
2. For the AND gates: Type(A1) = AND, Type(A2) = AND
3. For the OR gate: Type(O1) = OR.

192
• 6. Pose queries to the inference procedure and get answers:
• In this step, we find all the possible sets of values of all the terminals for the adder circuit. The first query will be:
• What combination of inputs would make the first output of circuit C1 be 0 and the second output be 1?
1. ∃ i1, i2, i3  Signal(In(1, C1)) = i1 ∧ Signal(In(2, C1)) = i2 ∧ Signal(In(3, C1)) = i3
2. ∧ Signal(Out(1, C1)) = 0 ∧ Signal(Out(2, C1)) = 1

193
• 7. Debug the knowledge base:
• Now we will debug the knowledge base; this is the last step of the complete process. In this step, we try to debug the issues in the knowledge base.
• In the knowledge base, we may have omitted assertions like 1 ≠ 0.

194
