Agent, Search & Game Playing

COMPILED BY: ER. SANTOSH PANDEYA 1


Agent

 An agent is anything that perceives its environment through sensors and acts upon that environment through
actuators. An agent runs in a cycle of perceiving, thinking, and acting.
An agent can be:

Human-Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and a vocal tract which
work as actuators.
Robotic Agent: A robotic agent can have cameras and infrared range finders for sensors and various motors for actuators.
Software Agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display
output on the screen.
Hence the world around us is full of agents such as thermostats, cellphones, and cameras, and even we ourselves are agents.
Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic
devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of a machine that convert energy into motion. The actuators are responsible
for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins,
and display screens.



Interaction of Agents with the Environment



Structure of an AI Agent
To understand the structure of Intelligent Agents, we should be familiar
with Architecture and Agent programs. Architecture is the machinery that the agent executes on. It is a device with sensors
and actuators, for example, a robotic car, a camera, and a PC. An agent program is an implementation of an agent function.
An agent function is a map from the percept sequence(history of all that an agent has perceived to date) to an action.
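The agent-function/agent-program distinction can be sketched as a table-driven agent, where the agent function is literally a table from percept sequences to actions. The percepts, table entries, and action names below are illustrative assumptions, not the document's own example:

```python
# A sketch of a table-driven agent: the agent FUNCTION is the lookup table
# (percept sequence -> action); the agent PROGRAM is the code that applies it.
def table_driven_agent_program():
    percepts = []                        # the percept sequence (history so far)
    table = {                            # hypothetical agent function as a table
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }
    def program(percept):
        percepts.append(percept)         # extend the percept history
        return table.get(tuple(percepts), "NoOp")
    return program

agent = table_driven_agent_program()
print(agent(("A", "Clean")))   # Right
print(agent(("B", "Dirty")))   # Suck
```

Table-driven agents are impractical for real problems (the table grows exponentially in the percept history), which motivates the more compact agent types below.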

There are many examples of agents in artificial intelligence. Here are a few:
Intelligent personal assistants: These are agents that are designed to help users with various tasks, such as scheduling
appointments, sending messages, and setting reminders. Examples of intelligent personal assistants include Siri, Alexa, and
Google Assistant.
Autonomous robots: These are agents that are designed to operate autonomously in the physical world. They can perform
tasks such as cleaning, sorting, and delivering goods. Examples of autonomous robots include the Roomba vacuum cleaner
and the Amazon delivery robot.
Gaming agents: These are agents that are designed to play games, either against human opponents or other agents.
Examples of gaming agents include chess-playing agents and poker-playing agents.
Fraud detection agents: These are agents that are designed to detect fraudulent behavior in financial transactions. They can
analyze patterns of behavior to identify suspicious activity and alert authorities. Examples of fraud detection agents include
those used by banks and credit card companies.



Traffic management agents: These are agents that are designed to manage traffic flow in cities. They can monitor traffic
patterns, adjust traffic lights, and reroute vehicles to minimize congestion. Examples of traffic management agents include
those used in smart cities around the world.
A software agent has keystrokes, file contents, and received network packets acting as sensors, and displays on the screen,
files, and sent network packets acting as actuators.
A human agent has eyes, ears, and other organs which act as sensors, and hands, legs, mouth, and other body parts which act as
actuators.
A robotic agent has cameras and infrared range finders which act as sensors, and various motors which act as actuators.

Characteristics of an Agent



Types of Agents
Agents can be grouped into the following classes based on their degree of perceived intelligence and capability:
 Simple Reflex Agents
 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agent
 Multi-agent systems
 Hierarchical agents



Simple Reflex Agents
 Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept.
 Percept history is the history of all that an agent has perceived to date.
 The agent function is based on the condition-action rule.
 A condition-action rule is a rule that maps a state i.e., a condition to an action. If the condition is true, then the action is
taken, else not. This agent function only succeeds when the environment is fully observable.
 For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable.
 It may be possible to escape from infinite loops if the agent can randomize its actions.
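A condition-action rule agent can be sketched for the classic two-square vacuum world; the rule set and percept format below are assumptions for illustration:

```python
# A minimal sketch of a simple reflex agent: it looks ONLY at the current
# percept (location, status) and ignores all percept history.
def simple_reflex_vacuum_agent(percept):
    location, status = percept     # e.g. ("A", "Dirty")
    if status == "Dirty":          # condition: square is dirty -> act
        return "Suck"
    if location == "A":            # condition: clean and at A -> move right
        return "Right"
    return "Left"                  # condition: clean and at B -> move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))  # Right
```

Note that with no memory of past percepts, two clean squares make the agent shuttle Right/Left forever, which is exactly the infinite-loop weakness described above.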

Fig: Simple Reflex Agents



Problems with simple reflex agents:
 Very limited intelligence.
 No knowledge of non-perceptual parts of the state.
 The condition-action rule table is usually too big to generate and store.
 If any change occurs in the environment, the collection of rules needs to be updated.



Model-Based Reflex Agents
• It works by finding a rule whose condition matches the current situation.
• A model-based agent can handle partially observable environments through the use of a model of the world.
• The agent has to keep track of an internal state which is adjusted by each percept and depends on the percept
history.
• The current state is stored inside the agent, which maintains some kind of structure describing the part of the world which
cannot be seen.
Updating the state requires information about:
• How the world evolves independently of the agent.
• How the agent's actions affect the world.
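The internal-state bookkeeping above can be sketched for the same two-square vacuum world; the world model, location names, and actions are illustrative assumptions:

```python
# A minimal sketch of a model-based reflex agent: it keeps an internal state
# (a model of squares it has seen) so it can act in a partially observable world.
class ModelBasedAgent:
    def __init__(self):
        self.state = {}                  # internal model of the unseen world

    def update_state(self, percept, last_action):
        """Adjust the internal state using the new percept and last action."""
        location, status = percept
        self.state[location] = status    # remember what was observed where
        self.state["last_action"] = last_action

    def agent_program(self, percept, last_action=None):
        self.update_state(percept, last_action)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        # consult the model: visit any square not yet known to be clean
        for loc in ("A", "B"):
            if self.state.get(loc) != "Clean":
                return f"GoTo:{loc}"
        return "NoOp"                    # model says the whole world is clean

agent = ModelBasedAgent()
print(agent.agent_program(("A", "Dirty")))          # Suck
print(agent.agent_program(("A", "Clean"), "Suck"))  # GoTo:B
```

Unlike the simple reflex agent, this one can stop (NoOp) once its model says every square is clean, even though it never perceives both squares at once.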

Model-Based Reflex Agents



Goal-Based Agents
 These kinds of agents take decisions based on how far they are currently from their goal(description of desirable
situations).
 Their every action is intended to reduce their distance from the goal.
 This allows the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state. The
knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more
flexible.
 They usually require search and planning. The goal-based agent’s behavior can easily be changed.



Utility-Based Agents
• Agents which are developed with their end uses (utilities) as building blocks are called utility-based agents.
• When there are multiple possible alternatives, utility-based agents are used to decide which one is best.
• They choose actions based on a preference (utility) for each state.
• Sometimes achieving the desired goal is not enough.
• We may look for a quicker, safer, cheaper trip to reach a destination.
• Agent happiness should be taken into consideration. Utility describes how “happy” the agent is.
• Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility.
• A utility function maps a state onto a real number which describes the associated degree of happiness.
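Choosing the action that maximizes expected utility can be sketched directly; the trip options, outcome probabilities, and utility values below are illustrative assumptions:

```python
# A sketch of utility-based action selection under uncertainty:
# pick the action whose EXPECTED utility (sum of probability * utility) is highest.
def expected_utility(outcomes_for_action):
    """outcomes_for_action: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes_for_action)

def choose_action(actions):
    # maximize expected utility over all available actions
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical trip choices: quicker-but-risky vs. slower-but-certain
actions = {
    "fast_road": [(0.7, 10), (0.3, -5)],  # EU = 0.7*10 + 0.3*(-5) = 5.5
    "safe_road": [(1.0, 6)],              # EU = 6.0
}
print(choose_action(actions))  # safe_road
```

Here the certain option wins even though its best-case payoff is lower, illustrating why "achieving the goal is not enough": the agent weighs how happy each outcome makes it.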



Learning Agent
• A learning agent in AI is a type of agent that can learn from its past experiences; it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has mainly four conceptual components, which are:
Learning element: It is responsible for making improvements by learning from the environment.
Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a
fixed performance standard.
Performance element: It is responsible for selecting external action.
Problem Generator: This component is responsible for suggesting actions that will lead to new and informative
experiences.



Multi-Agent Systems
• These agents interact with other agents to achieve a common goal.
• They may have to coordinate their actions and communicate with each other to achieve their objective.
• A multi-agent system (MAS) is a system composed of multiple interacting agents that are designed to work together to
achieve a common goal.
• These agents may be autonomous or semi-autonomous and are capable of perceiving their environment, making
decisions, and taking action to achieve the common objective.
• MAS can be used in a variety of applications, including transportation systems, robotics, and social networks. They can
help improve efficiency, reduce costs, and increase flexibility in complex systems.

Hierarchical Agents
• These agents are organized into a hierarchy, with high-level agents overseeing the behavior of lower-level agents.
• The high-level agents provide goals and constraints, while the low-level agents carry out specific tasks.
• Hierarchical agents are useful in complex environments with many tasks and sub-tasks.



Uses of Agents
Agents are used in a wide range of applications in artificial intelligence, including:
Robotics: Agents can be used to control robots and automate tasks in manufacturing, transportation, and other industries.
Smart homes and buildings: Agents can be used to control heating, lighting, and other systems in smart homes and buildings,
optimizing energy use and improving comfort.
Transportation systems: Agents can be used to manage traffic flow, optimize routes for autonomous vehicles, and improve
logistics and supply chain management.
Healthcare: Agents can be used to monitor patients, provide personalized treatment plans, and optimize healthcare resource
allocation.
Finance: Agents can be used for automated trading, fraud detection, and risk management in the financial industry.
Games: Agents can be used to create intelligent opponents in games and simulations, providing a more challenging and
realistic experience for players.
Natural language processing: Agents can be used for language translation, question answering, and chatbots that can
communicate with users in natural language.
Cybersecurity: Agents can be used for intrusion detection, malware analysis, and network security.
Environmental monitoring: Agents can be used to monitor and manage natural resources, track climate change, and improve
environmental sustainability.
Social media: Agents can be used to analyze social media data, identify trends and patterns, and provide personalized
recommendations to users.



Black Box model of Agents

 The black box model of an artificial agent refers to a type of model or system whose internal workings or processes are not
visible or understandable to the user.
 Instead, only the inputs and the corresponding outputs are known and observed.
 This concept is widely used in artificial intelligence (AI) and machine learning (ML), where complex algorithms, such as
deep learning networks, often function as black boxes due to their intricate and opaque internal mechanisms.
 It's like a black box: you feed data in one end (the input), and you get an output on the other, but the inner workings are a
mystery.
Here's a breakdown of how it works:
Input: The AI system receives data, which could be text, images, numbers, or a combination.
Internal Workings: This is the black box part. Complex algorithms, often based on machine learning or deep learning,
analyze the data and identify patterns. These patterns are what allow the model to make predictions or decisions.
Output: Based on the analysis, the AI system produces an output, such as a classification (is this a cat or a dog in the
image?), a prediction (what will the weather be like tomorrow?), or a recommendation (what product would this
customer be interested in?).
Why are black box models a thing?
• Black box models often achieve very good results, especially in complex tasks like image recognition or spam
filtering.
• They can learn from massive amounts of data and identify patterns that humans might miss.
• This makes them very useful in a variety of applications.



Key Characteristics of Black Box Models
Opacity:
The internal structure and decision-making process are not transparent. Users and even developers might not fully
understand how the model arrives at a particular output from a given input.
Complexity:
Black box models, such as deep neural networks, involve numerous parameters and layers of computation that are too
complex to manually interpret or deconstruct.
Predictive Power:
Despite their opacity, black box models can achieve high levels of accuracy and predictive power, making them useful
for a wide range of applications.
Lack of Interpretability:
One of the main criticisms of black box models is their lack of interpretability. Users cannot easily trace the reasoning
behind a specific decision or prediction, which can be problematic in critical applications like healthcare or finance.



Examples of Black Box Models
Deep Learning Networks:
These include Convolutional Neural Networks (CNNs) for image processing and Recurrent Neural Networks (RNNs) for
sequence data. Their high dimensionality and layered structure make them quintessential black boxes.
Ensemble Methods:
Techniques like Random Forests or Gradient Boosting Machines aggregate the results of multiple simpler models,
resulting in a combined model that is often difficult to interpret.

Efforts to Address Black Box Nature


Explainable AI (XAI):
This is a field of research focused on developing methods to make black box models more interpretable. Techniques
such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) provide
insights into how models make decisions.
Model Transparency Techniques:
Some approaches involve simplifying models post hoc or using surrogate models that approximate the behavior of
black box models in an interpretable manner.
Visualization Tools:
Tools and frameworks that allow users to visualize and explore the decision-making process of black box models are
being developed to aid understanding and trust.



Advantages and Disadvantages
Advantages:
High Performance: Black box models often outperform more interpretable models in terms of accuracy and efficiency.
Flexibility: They can handle a wide variety of tasks, from image and speech recognition to natural language processing and
autonomous driving.

Disadvantages:
Lack of Transparency: This can lead to issues in trust, accountability, and understanding, especially in regulated industries.
Difficulty in Debugging: When these models fail or produce unexpected results, it is challenging to pinpoint the cause of the
problem.
Ethical and Legal Concerns: The opacity of black box models can complicate ethical considerations and legal compliance,
particularly regarding bias and fairness.



Searching
Related question:
1. How is searching important in artificial intelligence? Do you believe that a proper searching algorithm improves the
performance of a machine? Explain.
2. Differentiate between informed and uninformed search. Explain A* search with an example.
3. Explain searching methods in AI. Among the different search algorithms, which type is most
efficient in AI? Explain with an example.
4. Compare DFS and BFS for searching a graph, with their properties.



Search in AI is the process of navigating from a starting state to a goal state by transitioning through
intermediate states, which is carried out by constructing a search tree for the problem.
•It is a universal problem-solving technique in AI that explores next states which are close enough to the
goal state.
•Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a
specific problem and provide the best result.
•Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can
have three main factors:
•Search space: Search space represents the set of possible solutions which a system may have.
•Start state: It is the state from which the agent begins the search.
•Goal test: It is a function which observes the current state and returns whether the goal state has been achieved or not.

In the context of artificial intelligence (AI), searching involves techniques and algorithms that allow an AI system
to navigate through a space of possible solutions to find a goal or optimal answer.
This is a fundamental concept in AI, particularly in problem-solving, decision-making, and planning tasks.
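The three main factors of a search problem can be sketched as a small data structure; the example graph and state names are assumptions for illustration:

```python
# A minimal sketch of a search problem's three factors:
# search space, start state, and goal test.
from dataclasses import dataclass

@dataclass
class SearchProblem:
    search_space: dict   # state -> list of successor states
    start_state: str     # the state from which the agent begins
    goal_state: str      # used by the goal test below

    def goal_test(self, state):
        """Observe the current state; return whether the goal is achieved."""
        return state == self.goal_state

problem = SearchProblem({"S": ["A", "B"], "A": ["G"], "B": []}, "S", "G")
print(problem.goal_test("S"))  # False
print(problem.goal_test("G"))  # True
```

All the search algorithms that follow (BFS, UCS, DFS, A*, etc.) differ only in the order in which they explore this search space.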



Properties of Search Algorithms:

•Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one
solution exists for any random input.
•Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among
all solutions, then it is said to be an optimal solution.
•Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.
•Space Complexity: It is the maximum storage space required at any point during the search, as a function of the complexity
of the problem.



Types of search algorithms



Uninformed search:
•Uninformed search does not use any domain knowledge, such as closeness or the
location of the goal.
•It operates in a brute-force way, as it only includes information about how to traverse the
tree and how to identify leaf and goal nodes.
•Uninformed search searches the tree without any information about the search space,
such as the initial state, operators, and goal test, so it is also called
blind search.
•It examines each node of the tree until it achieves the goal node.
•It can be divided into following main types:
•Breadth-first search
•Uniform cost search
•Depth-first search
•Depth limited search.
•Iterative deepening depth-first search
•Bidirectional Search



Breadth-first search:
•Breadth-first search is the most common search strategy for traversing a tree or graph.
•This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
•The BFS algorithm starts searching from the root node of the tree and expands all successor
nodes at the current level before moving to nodes of the next level.
•Breadth-first search is implemented using a FIFO queue data structure.

Algorithm:
•Step 1: SET STATUS = 1 (ready state) for each node in G
•Step 2: Enqueue the starting node in a queue and set its STATUS = 2 (waiting state)
•Step 3: Repeat Steps 4 and 5 until the QUEUE is empty
•Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state)
•Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set
their STATUS = 2 (waiting state) [END OF LOOP]
•Step 6: EXIT
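The steps above can be sketched with a FIFO queue; the example graph is an assumption for illustration:

```python
# A sketch of BFS following the STATUS-based steps above, using a FIFO queue.
from collections import deque

def bfs(graph, start, goal):
    status = {node: 1 for node in graph}   # Step 1: every node is ready
    queue = deque([start])                 # Step 2: enqueue the starting node
    status[start] = 2                      # waiting state
    order = []                             # expansion order, for illustration
    while queue:                           # Step 3: loop until queue is empty
        n = queue.popleft()                # Step 4: dequeue and process N
        status[n] = 3                      # processed state
        order.append(n)
        if n == goal:
            return order
        for nbr in graph[n]:               # Step 5: enqueue ready neighbours
            if status[nbr] == 1:
                status[nbr] = 2
                queue.append(nbr)
    return None                            # Step 6: exit (goal not reachable)

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))  # ['A', 'B', 'C', 'D']
```

The FIFO queue is what forces level-by-level expansion: all of A's successors (B, C) are processed before any of their successors (D).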

Properties:

•Completeness:
Complete if the goal node is at a finite depth.
•Optimality:
It is guaranteed to find the shortest path (fewest edges).
•Time complexity:
O(b^(d+1)), where d is the depth of the shallowest solution and b is the branching factor
(number of children of each node).
•Space complexity:
O(b^(d+1))
•Weakness:
Time and space complexity are high.



Uniform cost search:
•Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
•The primary goal of the uniform-cost search is to find a path to the goal node which has the lowest cumulative cost.
•Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree
where the optimal cost is in demand. A uniform-cost search algorithm is implemented with a priority queue. It gives
maximum priority to the lowest cumulative cost.
•Uniform cost search is equivalent to BFS algorithm if the path cost of all edges is the same.

Algorithm (for a weighted graph):

•Insert the root into the queue
•While the queue is not empty:
•Dequeue the maximum-priority element from the queue (if priorities are equal, the alphabetically smaller path is chosen)
•If the path ends in the goal state, print the path and exit
•Else insert all the children of the dequeued element, with their cumulative costs as priority
Advantages:
•Uniform-cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages:
•It does not care about the number of steps involved in searching and is only concerned with path cost, due to which this
algorithm may get stuck in an infinite loop.
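The priority-queue procedure above can be sketched with Python's `heapq`; the example weighted graph is an assumption for illustration:

```python
# A sketch of uniform-cost search: always expand the path with the lowest
# cumulative cost (ties break alphabetically via tuple comparison).
import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, [start])]                 # priority queue of (cost, path)
    best = {}                                 # cheapest known cost per node
    while frontier:
        cost, path = heapq.heappop(frontier)  # lowest cost = maximum priority
        node = path[-1]
        if node == goal:
            return cost, path                 # first goal pop is the cheapest
        if node in best and best[node] <= cost:
            continue                          # already reached more cheaply
        best[node] = cost
        for child, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, path + [child]))
    return None

graph = {"S": [("A", 1), ("G", 12)], "A": [("B", 3), ("C", 1)],
         "C": [("G", 2)], "B": [("G", 3)]}
print(uniform_cost_search(graph, "S", "G"))  # (4, ['S', 'A', 'C', 'G'])
```

Note that the direct edge S→G (cost 12) is pushed early but never wins: the queue keeps deferring it in favour of cheaper cumulative paths, which is exactly what makes UCS optimal.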



Completeness:
Complete if every step cost is ≥ ε (otherwise it can get stuck in infinite
loops).
Optimality:
Yes, always optimal for any step cost ≥ ε, as it only selects a path with the lowest path
cost.
Time complexity:
Let C* be the cost of the optimal solution, and ε the cost of each step toward
the goal node. Then the number of steps is C*/ε + 1.
Time complexity is O(b^(1 + ⌊C*/ε⌋))
Space complexity:
Worst-case space complexity is O(b^(1 + ⌊C*/ε⌋)).
Weakness:
Time and space complexity are high.



Depth-first search (DFS)
•Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
•It is called the depth-first search because it starts from the root node and follows each path to its greatest depth node
before moving to the next path.
•DFS uses a stack data structure for its implementation.

Algorithm
•Step 1: SET STATUS = 1 (ready state) for each node in G
•Step 2: Push the starting node A onto the stack and set its STATUS = 2 (waiting state)
•Step 3: Repeat Steps 4 and 5 until the STACK is empty
•Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
•Step 5: Push onto the stack all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2
(waiting state) [END OF LOOP]
•Step 6: EXIT

Advantage:
•DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current
node.
•It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantage:
•There is the possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
•The DFS algorithm goes deep down in its search and sometimes may go into an infinite loop.
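The stack-based steps above can be sketched as follows; the example graph is an assumption for illustration:

```python
# A sketch of DFS following the STATUS-based steps above, using an explicit stack.
def dfs(graph, start, goal):
    status = {node: 1 for node in graph}   # Step 1: every node is ready
    stack = [start]                        # Step 2: push the starting node
    status[start] = 2                      # waiting state
    order = []                             # expansion order, for illustration
    while stack:                           # Step 3: loop until stack is empty
        n = stack.pop()                    # Step 4: pop and process N
        status[n] = 3                      # processed state
        order.append(n)
        if n == goal:
            return order
        for nbr in reversed(graph[n]):     # Step 5: push ready neighbours
            if status[nbr] == 1:           # (reversed, so leftmost is popped first)
                status[nbr] = 2
                stack.append(nbr)
    return None                            # Step 6: exit (goal not reachable)

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dfs(graph, "A", "D"))  # ['A', 'B', 'D']
```

Compare with BFS on a similar graph: the LIFO stack makes the search commit to one branch (A→B→D) to its greatest depth before ever touching C.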

Depth-Limited Search
•To avoid the infinite-depth problem of DFS, we can decide to search only
up to depth ℓ, i.e. we don't expand beyond depth ℓ.
•A depth-limited search algorithm is similar to depth-first search with a
predetermined limit.
•In this algorithm, the node at the depth limit is treated as if it has no
further successor nodes.

Conditions of failure:
•Standard failure value: It indicates that the problem does not have any
solution.
•Cutoff failure value: It indicates that there is no solution for the problem within the given
depth limit.
Properties:
•Completeness: The DLS algorithm is complete if the solution is
within the depth limit.
•Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
•Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
•Optimal: Depth-limited search can be viewed as a special case of
DFS, and it is also not optimal, even if ℓ > d.



Iterative Deepening Search
•What if the solution is deeper than ℓ? --> Increase ℓ iteratively.
•This algorithm performs depth-first search up to a certain "depth limit", and keeps increasing the depth limit after each
iteration until the goal node is found.
•This search algorithm combines the benefits of breadth-first search's fast search and depth-first search's memory
efficiency.
•The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Advantages:
•It combines the benefits of the BFS and DFS algorithms in terms of fast search and memory efficiency.
Disadvantages:
•The main drawback of IDDFS is that it repeats all the work of the previous phase.
Properties:
•Completeness:
•This algorithm is complete if the branching factor is finite.
•Time Complexity:
•If b is the branching factor and d the depth of the shallowest goal, the worst-case time complexity is O(b^d).
•Space Complexity:
•The space complexity of IDDFS is O(bd).
•Optimal:
•The IDDFS algorithm is optimal if path cost is a non-decreasing function of the depth of the node.



Iterations:
1st --> A
2nd --> A, B, C
3rd --> A, B, D, E, C, F, G
4th --> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm finds the goal node.
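Iterative deepening can be sketched as repeated depth-limited DFS with a growing limit; the example tree below is an assumption that reproduces the first three iteration orders above, with the goal G found at depth limit 2:

```python
# A sketch of iterative deepening DFS: depth-limited search, restarted with an
# ever-larger limit until the goal is found.
def depth_limited(graph, node, goal, limit, order):
    order.append(node)                # record the expansion order
    if node == goal:
        return True
    if limit == 0:                    # node at the depth limit: no successors
        return False
    return any(depth_limited(graph, child, goal, limit - 1, order)
               for child in graph.get(node, []))

def iddfs(graph, start, goal, max_depth=10):
    for limit in range(max_depth + 1):   # increase the limit each iteration
        order = []
        if depth_limited(graph, start, goal, limit, order):
            return limit, order          # goal found within this limit
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iddfs(tree, "A", "G"))  # (2, ['A', 'B', 'D', 'E', 'C', 'F', 'G'])
```

Each restart repeats the shallower levels (the drawback noted above), but since most nodes of a tree live at the deepest level, the repeated work is a small constant factor.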



Informed Search
•Informed search algorithms use domain knowledge.
•In an informed search, problem information is available which can guide the search.
•Informed search strategies can find a solution more efficiently than an uninformed search
strategy. Informed search is also called heuristic search.
•A heuristic is a technique which is not always guaranteed to find the best solution, but is guaranteed
to find a good solution in reasonable time.
•Informed search can solve much more complex problems which could not be solved another way.
•The most commonly used informed search algorithms are:
•Greedy best-first search
•A* search



Greedy best-first search
•The greedy best-first search algorithm always selects the path which appears best at that moment.
•It is a combination of depth-first search and breadth-first search algorithms. It uses the heuristic function to guide the search.
•In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the
heuristic function, i.e.

f(n) = h(n)
•where h(n) = the estimated cost from node n to the goal.

Advantages:
•Best first search can switch between BFS and DFS by gaining the advantages of both the algorithms.
•This algorithm is more efficient than BFS and DFS algorithms.

Disadvantages:
•It can behave as an unguided depth-first search in the worst case scenario.
•It can get stuck in a loop as DFS.
•This algorithm is not optimal.



Best-First Search

Algorithm:
•Step 1: Place the starting node into the OPEN list.
•Step 2: If the OPEN list is empty, stop and return failure.
•Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED
list.
•Step 4: Expand the node n and generate the successors of node n.
•Step 5: Check each successor of node n and find whether any is a goal node. If any successor node is a
goal node, return success and terminate the search; else proceed to Step 6.
•Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the node is
already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
•Step 7: Return to Step 2.



At each iteration, each node is expanded using the evaluation
function f(n) = h(n), which is given in the table below.
•We are using two lists: the OPEN and CLOSED lists.
•Expand the successors of S and put S in the CLOSED list.
•Initialization: Open [A, B], Closed [S]
•Iteration 1: Open [A], Closed [S, B]
•Iteration 2: Open [E, F, A], Closed [S, B] --> Open [E, A], Closed [S, B, F]
•Iteration 3: Open [I, G, E, A], Closed [S, B, F] --> Open [I, E, A], Closed [S, B, F, G]
•Hence the final solution path will be:
•S----> B----->F----> G
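The OPEN/CLOSED trace above can be reproduced with a short sketch; the graph edges and heuristic values below are assumptions chosen to match the expansions shown:

```python
# A sketch of greedy best-first search using OPEN/CLOSED lists and f(n) = h(n).
import heapq

def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]   # Step 1: start node in OPEN
    closed = set()
    while open_list:                           # Step 2: fail when OPEN is empty
        _, n, path = heapq.heappop(open_list)  # Step 3: lowest h(n) from OPEN
        closed.add(n)
        if n == goal:                          # Step 5: goal test
            return path
        for succ in graph.get(n, []):          # Step 4: expand n
            # Step 6: add only if not already in OPEN or CLOSED
            if succ not in closed and all(s != succ for _, s, _ in open_list):
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None                                # failure

graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"]}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'B', 'F', 'G']
```

Because only h(n) is consulted, the search dives straight toward low-heuristic nodes (B, then F, then G), matching the iterations above; with a misleading heuristic it would behave like an unguided DFS.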



A* search:
•A* search is the most commonly known form of best-first search.
•It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n).
•It combines the features of UCS and greedy best-first search, by which it solves problems
efficiently.
•The A* search algorithm finds the shortest path through the search space using the heuristic
function.
•This search algorithm expands a smaller search tree and provides an optimal result faster.
•The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
•In the A* search algorithm, we use the search heuristic as well as the cost to reach the node, so
we combine both costs as follows; this sum is called the fitness number:

f(n) = g(n) + h(n)



Algorithm of A* search:
•Step 1: Place the starting node in the OPEN list.
•Step 2: Check whether the OPEN list is empty; if the list is empty, return failure and stop.
•Step 3: Select the node from the OPEN list which has the smallest value of the evaluation
function (g + h). If node n is the goal node, return success and stop; otherwise:
•Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For
each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the
evaluation function for n' and place it into the OPEN list.
•Step 5: Else if node n' is already in OPEN or CLOSED, it should be attached to the back
pointer which reflects the lowest g(n') value.
•Step 6: Return to Step 2.
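These steps can be sketched in Python; the example graph, step costs, and heuristic values are assumptions chosen to be consistent with the worked trace that follows:

```python
# A sketch of A* search: expand the node with the smallest f(n) = g(n) + h(n).
import heapq

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = {}                                   # node -> best g seen so far
    while open_list:
        f, g, n, path = heapq.heappop(open_list)  # smallest f = g + h
        if n == goal:
            return g, path                        # first goal pop is optimal
        if n in closed and closed[n] <= g:
            continue                              # a cheaper path already found
        closed[n] = g                             # back-pointer via cheapest g
        for succ, step in graph.get(n, []):
            g2 = g + step
            heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None

graph = {"S": [("A", 1), ("G", 10)], "A": [("B", 2), ("C", 1)],
         "C": [("D", 3), ("G", 4)]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
print(a_star(graph, h, "S", "G"))  # (6, ['S', 'A', 'C', 'G'])
```

Running this reproduces the iteration-by-iteration f values traced below: S→A (f=4) beats S→G (f=10), then S→A→C (f=4), and finally S→A→C→G with total cost 6.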



Initialization: {(S, 5)}
  f(S) = g(S) + h(S) = 0 + 5 = 5
Iteration 1: {(S-->A, 4), (S-->G, 10)}
  f(A) = g + h = 1 + 3 = 4; f(G) = 10 + 0 = 10
Iteration 2 (current node is A): {(S-->A-->C, 4), (S-->A-->B, 7), (S-->G, 10)}
  f(C) = 2 + 2 = 4; f(B) = 3 + 4 = 7; f(G) = 10 + 0 = 10
Iteration 3: {(S-->A-->C-->G, 6), (S-->A-->C-->D, 11), (S-->A-->B, 7), (S-->G, 10)}
  f(G) = 6 + 0 = 6; f(D) = 5 + 6 = 11; f(B) = 3 + 4 = 7
Iteration 4 gives the final result: S-->A-->C-->G; it provides the optimal path with cost 6.



Iterative Improvement Algorithm: Types
 Hill Climbing Search
 Simulated Annealing Search
 Local Beam Search
 Genetic Algorithm



Hill Climbing Search
 Hill climbing is an optimization technique for solving computationally hard problems.
 Used in problems with “the property that the state description itself contains all the information”
 The algorithm is memory efficient since it does not maintain a search tree
 Hill climbing attempts to iteratively improve the current state by means of an evaluation function
 Searching for a goal state = Climbing to the top of a hill
 Moves continuously in the direction of increasing value (uphill)

Advantages of Hill Climbing algorithm:


 Hill Climbing is a simple and intuitive algorithm that is easy to understand and implement.
 It can be used in a wide variety of optimization problems, including those with a large search space and complex
constraints.
 Hill Climbing is often very efficient in finding local optima, making it a good choice for problems where a good solution is
needed quickly.
 The algorithm can be easily modified and extended to include additional heuristics or constraints.
Disadvantages of Hill Climbing algorithm:
 Hill Climbing can get stuck in local optima, meaning that it may not find the global optimum of the problem.
 The algorithm is sensitive to the choice of initial solution, and a poor initial solution may result in a poor final solution.
 Hill Climbing does not explore the search space very thoroughly, which can limit its ability to find better solutions.
 It may be less effective than other optimization algorithms, such as genetic algorithms or simulated annealing, for
certain types of problems.
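The loop described above, repeatedly moving to the best neighbor until no neighbor improves on the current state, can be sketched as follows. The one-dimensional objective function is an illustrative assumption, not from the slides.

```python
def hill_climbing(start, objective, neighbors):
    """Move to the best neighbor while it improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current  # local (here also global) maximum: no neighbor is better
        current = best

# Toy 1-D landscape: f has a single peak at x = 3.
f = lambda x: -(x - 3) ** 2 + 9
step = lambda x: [x - 1, x + 1]  # integer neighbors
print(hill_climbing(0, f, step))  # 3
```

On this single-peaked landscape the climb always reaches x = 3; on a landscape with several peaks the same loop would stop at whichever local maximum is uphill from the start, which is exactly the weakness listed above.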
Different regions in the State Space Diagram:
Local maximum: It is a state that is better than its neighboring states (the value of the objective function here is
higher than at its neighbors); however, there exists a state that is better still (the global maximum).

Global maximum: It is the best possible state in the state space diagram. This is because, at this stage, the
objective function has the highest value.
Plateau/flat local maximum: It is a flat region of state space where neighboring states have the same value.
Ridge: It is a region that is higher than its neighbors but itself has a slope. It is a special kind of local maximum.
Current state: The region of the state space diagram where we are currently present during the search.
Shoulder: It is a plateau that has an uphill edge.



Intentionality and Goals

• Intentionality is the ability to think, feel and act in a deliberate way towards a purpose.
• It is widely believed that intentionality is a common human trait and ability.
• It is less clear whether machines could ever possess such a dimension.
• Most AI is focused on practical problems such as recognizing an image, driving a car or
searching the internet.
• In other words, AI is currently mostly about learning how to answer a question, make a
decision or predict something.
• It isn't about forming high-level goals and acting with a purpose.



Gaming

A game can be formally defined as a kind of search problem with the following
components:
The initial state, which includes the board position and an indication of whose
move it is.
A set of operators, which define the legal moves that a player can make.
A terminal test, which determines when the game is over. States where the
game has ended are called terminal states.
A utility function (also called a payoff function), which gives a numeric value
for the outcome of a game. In chess, the outcome is a win, loss or draw, which
we can represent by the values +1, -1 or 0. Some games have a wider variety
of possible outcomes; for example, the payoffs in backgammon range from
+192 to -192.
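These four components can be sketched as a small Python class. The miniature Nim-style game below is a hypothetical example used only to make the components concrete; it is not from the slides.

```python
class NimGame:
    """A tiny two-player game cast in the four components above:
    a state is (sticks_left, player_to_move); a move removes 1 or 2 sticks;
    the player who must move with 0 sticks left loses."""

    def initial_state(self):
        return (5, 'MAX')                      # board position + whose move it is

    def operators(self, state):                # legal moves from `state`
        sticks, player = state
        other = 'MIN' if player == 'MAX' else 'MAX'
        return [(sticks - take, other) for take in (1, 2) if take <= sticks]

    def terminal_test(self, state):            # game over when no sticks remain
        return state[0] == 0

    def payoff(self, state):
        # The player to move has no sticks to take and loses: +1/-1 from MAX's view.
        return -1 if state[1] == 'MAX' else +1

game = NimGame()
print(game.operators(game.initial_state()))  # [(4, 'MIN'), (3, 'MIN')]
```

Any game fitting this interface (chess, tic-tac-toe, backgammon) can then be handed to a generic search procedure such as minimax.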



Game Playing
• Game playing involves an abstract and pure form of computation that seems to require intelligence.
• Game playing has a close relationship to intelligence, with its well-defined states and rules.
• Game playing research has contributed ideas on how to make the best use of time to reach good decisions.
• Games don’t require much knowledge; the only knowledge we need to provide is the rules, legal moves and
the conditions of winning or losing the game.
• Both players try to win the game. So, both of them try to make the best move possible at each turn.
• Searching techniques like BFS (Breadth-First Search) are not practical for this, as the branching factor is very
high and searching would take a lot of time. So, we need other search procedures that improve the:
• Generate procedure, so that only good moves are generated.
• Test procedure, so that the best move can be explored first.
• The goal of game playing in artificial intelligence is to develop algorithms that can learn how to play games and
make decisions that will lead to winning outcomes.
 Game playing is important in AI for the following reasons:
• The state of a game is easy to represent.
• The rules of a game are limited and precise; further, the rules are static and known to both players.
• Games provide a structured task, so success or failure can be easily measured.
• Games are examples from a class of problems concerned with actions; they simulate real-life situations.



There are two main approaches to game playing in AI, rule-based systems and machine learning-based systems.
Rule-based systems use a set of fixed rules to play the game.
Machine learning-based systems use algorithms to learn from experience and make decisions based on that
experience.

The most common search technique in game playing is the Minimax search procedure. It is a depth-first,
depth-limited search procedure, used for games like chess and tic-tac-toe.
The Minimax algorithm uses two functions:
MOVEGEN: generates all the possible moves from the current position.
STATICEVALUATION: returns a value estimating the goodness of a position from the viewpoint of the two players.
Strategies, Rules, Plies, Moves and Turns
 A strategy is a list of the optimal choices for each player at each stage of a given game.
 It is common in game theory to refer to one player's turn as a "ply" of the game.
 One round of all the players' turns is called a "move"; this usage originates in chess, where one move consists of
each player taking one turn.
 There are many more games, however, that treat each player's turn as a separate move, and this is the
terminology normally used in turn-based strategy games.



Minimax Algorithm Tic-Tac-Toe

• Assume that two players named X (MAX) and O (MIN) are playing the game.
• MAX is playing first.
• Initially MAX has 9 possible moves.
• Play alternates between MAX and MIN until we reach leaf nodes corresponding to
terminal states such that one player has 3 in a row or all squares are filled.



Evaluation Function & Utilitarian
An evaluation function, also known as a heuristic evaluation function or static
evaluation function, is a function used by game-playing programs to estimate the
value or goodness of a position in the minimax and related algorithms.
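As a concrete illustration, one classic static evaluation for tic-tac-toe scores a position by the number of winning lines still open to each player. This is a common textbook heuristic, sketched here as an assumed example rather than taken from the slides.

```python
# Board: 9 cells indexed 0..8, each 'X' (MAX), 'O' (MIN) or ' ' (empty).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def static_evaluation(board):
    """open_lines(X) - open_lines(O): a line is open to a player
    if the opponent has no piece on it."""
    def open_lines(opponent):
        return sum(all(board[i] != opponent for i in line) for line in LINES)
    return open_lines('O') - open_lines('X')

print(static_evaluation([' '] * 9))        # 0: all 8 lines open to both players
print(static_evaluation(list('X        ')))  # 3: X keeps 8 lines open, O only 5
```

A positive score means the position looks better for X (MAX), a negative one better for O (MIN); minimax backs these leaf estimates up the game tree.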
Utilitarianism is one particular form of consequentialism, in which the "good
consequence" is considered to be the one that maximizes happiness (or welfare,
benefit) for all people concerned.
Consequentialism looks only at the consequences of an action to judge
whether the action is right or wrong.
A consequentialist would say that an action is morally right if its consequences
lead to a situation that is clearly better than things were before the action.



Decision Making, Planning & Internal Representation

Decision making is closely related to reasoning.

One possible definition of artificial intelligence (AI) refers to cognitive processes,
especially reasoning: before making any decision, people reason.
Artificial Intelligence is widely used in decision support.
Planning is a key ability for intelligent systems, increasing their autonomy and flexibility
through the construction of sequences of actions to achieve their goals.
Internal representation: in designing a program to solve problems, we
must define how the knowledge will be represented.
Knowledge is the information about a domain that can be used to solve problems in that
domain. Solving many problems requires much knowledge, and this knowledge must be
represented in the computer.



Assignment-02
1. How is searching important in Artificial Intelligence? Do you believe that a proper searching algorithm improves the
performance of a machine? Explain.
2. Differentiate between informed and uninformed search. Explain A* search with an example.
3. Explain searching methods in AI. Among the different search algorithms, which type is more
efficient in AI? Explain with an example.
4. Compare DFS and BFS for searching a graph, with their properties.
5. Describe the Minimax algorithm and Alpha-Beta pruning.
6. Explain the role of heuristics in informed search algorithms. How can heuristic functions be designed to ensure
admissibility and consistency?
7. How does the incorporation of alpha-beta pruning improve the efficiency of the Minimax algorithm? Illustrate with an
example how alpha-beta pruning can reduce the number of nodes evaluated in a game tree.
8. Discuss the impact of heuristic quality on the performance of A* search, and provide examples of designing heuristic
functions for specific problems like the 8-puzzle or pathfinding in a maze.

- must be submitted within 7 days



Thank You

