AI Lec2-1


Lecture 2

AI Agents
• Artificial intelligence is defined as the study of rational agents. A rational agent can be anything that makes decisions, such as a person, firm, machine, or piece of software. It carries out the action with the best outcome after considering past and current percepts (the agent’s perceptual inputs at a given instant).
AI Agents
• An AI system is composed of an agent and its
environment. The agents act in their
environment. The environment may contain
other agents.
• An agent is anything that can be viewed as:
• perceiving its environment through sensors, and
• acting upon that environment through actuators
AI Agents
• To understand the structure of intelligent agents, we should be familiar with architecture and agent programs. The architecture is the machinery that the agent program executes on: a device with sensors and actuators, for example a robotic car, a camera, or a PC. The agent program is an implementation of the agent function, which is a map from the percept sequence (the history of everything the agent has perceived to date) to an action.

Agent = Architecture + Agent Program
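As a minimal sketch of this decomposition (assuming a Python-style interface; the class and method names below are illustrative, not from the lecture), the agent program can be written as a method that the architecture calls with each new percept:

from abc import ABC, abstractmethod

class Agent(ABC):
    """Agent = Architecture + Agent Program.

    The agent program implements the agent function: a mapping from the
    percept sequence (everything perceived so far) to an action.
    """

    def __init__(self):
        self.percept_sequence = []      # history of all percepts to date

    def __call__(self, percept):
        # Record the new percept, then let the agent program choose an action.
        self.percept_sequence.append(percept)
        return self.program(percept)

    @abstractmethod
    def program(self, percept):
        """The agent program: maps percepts to an action."""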


AI Agents
Examples of agents:
• A software agent has keystrokes, file contents, and received network packets as sensory input, and acts by displaying on the screen, writing files, and sending network packets.
• A human agent has eyes, ears, and other organs acting as sensors, and hands, legs, mouth, and other body parts acting as actuators.
• A robotic agent has cameras and infrared range finders acting as sensors, and various motors acting as actuators.
Simple reflex agents
• Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. (The percept history is the history of everything the agent has perceived to date.) The agent function is based on condition-action rules: a condition-action rule maps a state, i.e. a condition, to an action. If the condition is true, the action is taken; otherwise it is not. This agent function only succeeds when the environment is fully observable. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable, although it may be possible to escape them if the agent can randomize its actions.
• Problems with simple reflex agents:
▪ Very limited intelligence.
▪ No knowledge of the non-perceptual parts of the state.
▪ The rule table is usually too big to generate and store.
▪ If the environment changes, the collection of rules needs to be updated.
Simple Reflex Agent
• Uses simple “if-then” rules
• Can be short-sighted

SimpleReflexAgent(percept)
  state = InterpretInput(percept)
  rule = RuleMatch(state, rules)
  action = RuleAction(rule)
  return action
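A runnable version of this pseudocode, assuming the rules are stored as a Python dictionary mapping interpreted states to actions (the helper names and the rule table below are illustrative):

def simple_reflex_agent(percept, rules):
    """Select an action using only the current percept, ignoring history."""
    state = interpret_input(percept)      # abstract the raw percept into a state
    return rules.get(state, "NoOp")       # condition-action rule lookup

def interpret_input(percept):
    # In the vacuum world below, the percept is already a (location, status) pair.
    return percept

rules = {("A", "Dirty"): "Suck", ("B", "Dirty"): "Suck",
         ("A", "Clean"): "Right", ("B", "Clean"): "Left"}
print(simple_reflex_agent(("A", "Dirty"), rules))   # -> Suck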
Example: Vacuum Agent

• Performance?
– 1 point for each square cleaned in time T?
– #clean squares per time step - #moves per time step?
• Environment: vacuum, dirt, multiple areas defined by square regions
• Actions: left, right, suck, idle
• Sensors: location and contents, e.g. [A, Dirty]
– The environment may be partially observable
– The environment may be stochastic
• Thus a rational agent is not always successful
Simple Reflex Vacuum Agent
• If status=Dirty then return Suck
else if location=A then return Right
else if location=B then return Left
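Encoding these rules directly in Python (the small simulation loop is an illustrative assumption, not part of the slide):

def reflex_vacuum_agent(location, status):
    # Condition-action rules for the two-square vacuum world.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                                  # location == "B"
        return "Left"

# Tiny hand-rolled environment to exercise the agent.
world = {"A": "Dirty", "B": "Dirty"}
location = "A"
for _ in range(4):
    action = reflex_vacuum_agent(location, world[location])
    print(location, world[location], "->", action)
    if action == "Suck":
        world[location] = "Clean"
    else:
        location = "B" if action == "Right" else "A"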
Reflex agents with State

• A reflex agent with state works by finding a rule whose condition matches the current situation. It can handle partially observable environments by using a model of the world. The agent keeps track of an internal state that is adjusted by each percept and therefore depends on the percept history. The current state is stored inside the agent, which maintains some kind of structure describing the parts of the world that cannot currently be seen.
• Updating the state requires information about:
• how the world evolves independently of the agent, and
• how the agent’s actions affect the world.
Reflex Agent With State
• Stores previously observed information
• Can reason about unobserved aspects of the current state

ReflexAgentWithState(percept)
  state = UpdateState(state, action, percept)
  rule = RuleMatch(state, rules)
  action = RuleAction(rule)
  return action
Reflex Vacuum Agent
• If status=Dirty then Suck,
else if the other square has not been visited in more than 3 time units, go there
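A sketch of this state-keeping vacuum agent, assuming the internal state is simply the time of the last visit to each square (the class structure and action names are illustrative):

class ReflexVacuumAgentWithState:
    def __init__(self):
        self.time = 0
        self.last_visit = {"A": 0, "B": 0}   # internal model: when each square was last seen

    def __call__(self, location, status):
        self.time += 1
        self.last_visit[location] = self.time
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        # Revisit the other square if it has not been seen for more than 3 time units.
        if self.time - self.last_visit[other] > 3:
            return "Right" if other == "B" else "Left"
        return "Idle"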
Goal-based agents

• These agents take decisions based on how far they currently are from their goal (a description of desirable situations). Every action they take is intended to reduce the distance to the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning, and the goal-based agent’s behavior can easily be changed.
Goal-Based Agents
• Goals reflect the desires of the agent
• The agent may project actions forward to see whether they are consistent with its goals
• This takes time, and the world may change during reasoning (see the search sketch below)
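One way to project actions toward a goal is to search over a model of how actions change the state. A minimal breadth-first sketch (the search method, helper names, and toy example are assumptions for illustration, not the only choice):

from collections import deque

def plan_to_goal(start, goal_test, successors):
    # Breadth-first search: returns a list of actions reaching a goal state, or None.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

# Example: reach the goal "both squares clean" in the two-square vacuum world.
def successors(state):
    loc, a, b = state
    cleaned = (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
    yield "Suck", cleaned
    yield "Right", ("B", a, b)
    yield "Left", ("A", a, b)

print(plan_to_goal(("A", "Dirty", "Dirty"),
                   lambda s: s[1] == "Clean" and s[2] == "Clean",
                   successors))   # -> ['Suck', 'Right', 'Suck']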
Utility-based agents

• Agents that are built with their end uses (utilities) as building blocks are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best: they choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough; we may look for a quicker, safer, or cheaper trip to a destination. Agent “happiness” should be taken into consideration, and utility describes how “happy” the agent is. Because of the uncertainty in the world, a utility-based agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number that describes the associated degree of happiness.
Utility-Based Agents
• Evaluation function to measure utility: f(state) -> value
• Useful for evaluating competing goals
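A minimal sketch of expected-utility action selection, assuming a transition model that returns (probability, next state) pairs and a utility function over states (all names and numbers below are illustrative):

def utility_based_agent(state, actions, transition_model, utility):
    # Choose the action with the highest expected utility.
    def expected_utility(action):
        return sum(p * utility(s2) for p, s2 in transition_model(state, action))
    return max(actions, key=expected_utility)

# Toy example: choosing a route to a destination under uncertainty.
def transition_model(state, action):
    if action == "highway":
        return [(0.9, "arrived fast"), (0.1, "stuck in traffic")]
    return [(1.0, "arrived slow")]           # back roads are slower but certain

utility = {"arrived fast": 10, "arrived slow": 6, "stuck in traffic": 0}.get
print(utility_based_agent("home", ["highway", "back roads"],
                          transition_model, utility))   # -> highway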
Learning Agent

• A learning agent in AI is a type of agent that can learn from its past experiences; that is, it has learning capabilities. It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has four main conceptual components, sketched in code after this list:
1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
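A skeleton showing how the four components could be wired together (the method names and call order are assumptions, not a fixed API):

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # selects external actions
        self.learning_element = learning_element         # improves the performance element
        self.critic = critic                             # feedback vs. a fixed performance standard
        self.problem_generator = problem_generator       # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                            # how well are we doing?
        self.learning_element(self.performance_element, feedback)  # learn from the feedback
        exploratory = self.problem_generator(percept)              # maybe try something new
        return exploratory or self.performance_element(percept)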
Learning Agents
Xavier mail delivery robot
• Performance: Completed tasks
• Environment: See for yourself
• Actuators: Wheeled robot actuation
• Sensors: Vision, sonar
• Reasoning: Bayes classification
Pathfinder Medical Diagnosis System

• Performance: Correct hematopathology diagnosis
• Environment: Automate human diagnosis,
partially observable, single agent
• Actuators: Output diagnoses and further test
suggestions
• Sensors: Input symptoms and test results
• Reasoning: Bayesian networks
TDGammon
• Performance: Ratio of wins to losses
• Environment: Graphical output showing dice
roll and piece movement, fully observable,
multiagent
• Sensors: Keyboard input
• Actuator: Numbers representing moves of
pieces
• Reasoning: Reinforcement learning, neural
networks
Alvinn
• Performance: Stay in lane, on road, maintain
speed
• Environment: Driving Hummer on and off road
without manual control (Partially observable,
single agent), Autonomous Automobile
• Actuators: Speed, Steer
• Sensors: Stereo camera input
• Reasoning: Neural networks
Talespin
• Performance: Entertainment value of generated
story
• Environment: Generate text-based stories that
are creative and understandable
• Actuators: Add word/phrase, order parts of story
• Sensors: Dictionary, Facts and relationships
stored in database
• Reasoning: Planning
Webcrawler Softbot
• Search web for items of interest
• Perception: Web pages
• Reasoning: Pattern matching
• Action: Select and traverse hyperlinks
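A toy version of such a softbot, using only the Python standard library (the crawl strategy, pattern matching, and page limit are illustrative assumptions; a real crawler also needs politeness rules and robots.txt handling):

import re
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    # Perception step: collect the hyperlinks available on a page.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, pattern, max_pages=10):
    # Reasoning: simple pattern matching; action: select and traverse hyperlinks.
    frontier, seen, hits = [start_url], {start_url}, []
    while frontier and len(seen) <= max_pages:
        url = frontier.pop(0)
        try:
            page = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue
        if re.search(pattern, page, re.IGNORECASE):
            hits.append(url)                 # page of interest found
        parser = LinkExtractor()
        parser.feed(page)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return hits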
