AI Lecture 3
Intelligent Agents & AI-Related Disciplines
▪ Environment Types
Acting Rationally:
Intelligent Agents
Main Resources for this Section
Recap ..
What is Artificial Intelligence?
Four main approaches have been followed, each by different
people with different methods:
• Thinking Rationally: the Laws-of-Thought approach
• Acting Rationally: the Rational Agent approach
AI as the Study & Design of Intelligent Agents
(Poole and Mackworth, 1999)
• An intelligent agent is such that:
• Its actions are appropriate for its goals and circumstances.
• It is flexible to changing environments and goals.
• It learns from experience.
• It makes appropriate choices given perceptual limitations and
limited resources (bounded rationality or bounded optimality).
Thus, a rational agent acts to optimally achieve its goals (does the
right thing). The right thing: that which is expected to maximize goal
achievement, given the available information.
Intelligent Agents
• In AI, artificial agents that have a physical presence in the world are
usually known as Robots.
• Robotics is the field primarily concerned with the implementation of the
physical aspects of a robot (i.e. perception of the physical environment,
actions on the environment).
• Another class of artificial agents includes interface agents, for either
stand-alone or Web-based applications (e.g. intelligent desktop
assistants, recommender systems, intelligent tutoring systems).
• Interface agents don’t have to worry about interaction with the
physical environment, but share all other fundamental components of
intelligent behavior with robots.
• We will focus on these agents in this course (i.e. software agents, not
hardware agents such as autonomous robots).
Pac-Man .. as an .. Intelligent Agent
[Figure: the agent-environment loop. The agent receives percepts from the
environment through sensors and acts on it through actuators; the "?" marks
the agent program that maps percepts to actions.]
Intelligent "Rational" Agents?
[Images: BB-8, from the Star Wars franchise; Ava, in Ex Machina]
Intelligent Agents
An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through actuators.
Intelligent Agents in the World ..
.. Learning? .. Feedback?
The AI "agent" interacts with its environment in a loop:
1: Sense
2: Reason
3: Act
4: Get Feedback
5: Learn
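To make the loop concrete, here is a minimal Python sketch (an illustration,
not from the slides; an environment object with sense/act/feedback methods is
an assumed placeholder interface):

class Agent:
    def reason(self, percept):
        """Map the current percept (plus any stored knowledge) to an action."""
        raise NotImplementedError

    def learn(self, percept, action, feedback):
        """Update internal knowledge using the environment's feedback."""
        pass

def run(agent, env, steps=100):
    # One pass through the 5-step loop per iteration.
    for _ in range(steps):
        percept = env.sense()                    # 1: Sense
        action = agent.reason(percept)           # 2: Reason
        env.act(action)                          # 3: Act
        feedback = env.feedback()                # 4: Get Feedback
        agent.learn(percept, action, feedback)   # 5: Learn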
Intelligent Agents in the World ..
.. Capacities / Fields?
[Figure: the capacities an intelligent agent needs and the AI fields that
study them:]
• Knowledge Representation
• Machine Learning
• Reasoning + Decision Theory
• Natural Language Understanding
• Natural Language Generation
• Robotics
• Computer Vision
• Speech Recognition + Physiological Sensing
• Human-Computer / Human-Robot Interaction
• Mining of Interaction Logs
Example: Vacuum-Cleaner Agent
Percepts: location and status, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
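As an illustration (not part of the slides), a simple reflex agent for this
two-location vacuum world could be sketched in Python as:

def reflex_vacuum_agent(percept):
    # percept is a (location, status) pair, e.g. ('A', 'Dirty')
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    if location == 'A':
        return 'Right'
    if location == 'B':
        return 'Left'
    return 'NoOp'

print(reflex_vacuum_agent(('A', 'Dirty')))   # -> Suck
print(reflex_vacuum_agent(('A', 'Clean')))   # -> Right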
Rational Agents
• For each possible percept sequence, a rational agent should
select an action that is expected to maximize its
performance measure, given the evidence provided by the
percept sequence and the agent’s built-in knowledge.
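As a sketch of this definition (illustrative only; expected_performance is a
hypothetical function standing in for the agent's model, the evidence from
the percept sequence, and its built-in knowledge):

def rational_action(actions, percept_sequence, expected_performance):
    # Choose the action expected to maximize the performance measure,
    # given the evidence provided by the percept sequence.
    return max(actions, key=lambda a: expected_performance(a, percept_sequence))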
Specifying the Task Environment [ PEAS ]
PEAS: Performance measure, Environment, Actuators, Sensors
P: a function the agent is maximizing (or minimizing);
Assumed given ..
In practice, needs to be computed somewhere.
E: a formal representation for world states;
For concreteness, a tuple (var1 = val1, var2 = val2, … , varn = valn).
A: actions that change the state according to a transition model;
Given a state and action, what is the successor state (or
distribution over successor states)?
S: sensor observations that allow the agent to infer the world
state; these often come in a very different form than the state itself.
E.g., in tracking, observations may be pixels while the state variables
are 3D coordinates.
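A minimal sketch (not from the lecture) of a PEAS description as a Python
data structure, filled in with the taxi example below:

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # what the agent maximizes (or minimizes)
    environment: list   # the world it operates in
    actuators: list     # actions that change the state
    sensors: list       # observations for inferring the state

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "profit"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer", "engine sensors"],
)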
PEAS Example 1: Autonomous Taxi
o Performance measure
Safe, fast, legal, comfortable trip, maximize profits
o Environment
Roads, other traffic, pedestrians, customers
o Actuators
Steering wheel, accelerator, brake, signal, horn
o Sensors
Cameras, speedometer, GPS, odometer, engine
sensors, keyboard
PEAS Example 2: Spam Filter
o Performance measure
Minimizing false positives, false negatives
o Environment
A user’s email account, email server
o Actuators
Mark as spam, delete, etc.
o Sensors
Incoming messages, other information about user’s account
Goal-based Agents versus Cost-based Agents
Goal-based agents: the actions depend on the goal; E.g., a mobile robot
which should move from room 112 to room 179 in a building takes actions
different from those of a robot that should move to room 105.
Goal-based Agents versus Cost-based Agents
Cost-based agents: the goal is to minimize the cost of erroneous decisions in the
long term; E.g., a spam filter is an agent that puts incoming emails into wanted or
unwanted (spam) categories & deletes any unwanted emails. Its goal as a goal-
based agent is to put all emails in the right category. In the course of this not-so-
simple task, the agent can occasionally make mistakes. Because its goal is to
classify all emails correctly, it will attempt to make as few errors as possible.
However, that is not always what the user has in mind. Let us compare the
following two agents: out of 1,000 emails, Agent 1 makes only 12 errors;
Agent 2, on the other hand, makes 38 errors on the same 1,000 emails. Is
Agent 2 therefore worse than Agent 1? The errors of both agents are shown
in more detail in the following confusion matrices (of the 1,000 emails,
800 are spam and 200 are wanted):

                        Agent 1             Agent 2
                     spam   wanted       spam   wanted
classified spam       799       11        762        0
classified wanted       1      189         38      200

Agent 1 in fact makes fewer errors than Agent 2, but those few errors are
severe because the user loses 11 potentially important emails. Because
there are in this case two types of errors of differing severity, each
error should be weighted with the appropriate cost factor.
Goal-based Agents versus Cost-based Agents
For example: out of 1,000 emails, 800 are spam and 200 are wanted, and we
compare two agents (treating spam as the positive class).

Agent 1:
• classifies 799 spam emails as spam → true positives
• classifies 189 wanted emails as wanted → true negatives
• classifies 1 spam email as wanted → false negative
• classifies 11 wanted emails as spam → false positives
So Agent 1 makes 12 errors out of 1,000 emails.

Agent 2:
• classifies 762 spam emails as spam → true positives
• classifies 200 wanted emails as wanted → true negatives
• classifies 38 spam emails as wanted → false negatives
• classifies 0 wanted emails as spam → false positives
So Agent 2 makes 38 errors out of 1,000 emails.
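To see why fewer errors is not automatically better, here is a hedged Python
sketch of cost-weighted evaluation; the cost factors are illustrative
assumptions, taking a lost wanted email to be 50 times worse than a spam
email that slips through:

COST_FALSE_POSITIVE = 50   # wanted email classified as spam (lost mail)
COST_FALSE_NEGATIVE = 1    # spam email classified as wanted (annoyance)

def weighted_cost(false_positives, false_negatives):
    return (false_positives * COST_FALSE_POSITIVE
            + false_negatives * COST_FALSE_NEGATIVE)

print(weighted_cost(11, 1))    # Agent 1: 551
print(weighted_cost(0, 38))    # Agent 2: 38  (lower cost despite more errors)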
Goal-based Agents versus Cost-based Agents
As a goal-based agent, Agent 1 is better than Agent 2, because it comes
closer to the goal of classifying every email correctly.
But more important is that the errors Agent 1 makes are more critical than
those made by Agent 2. Why?
Exercises
What have we learned?
▪ Differentiate between Goal-based Agents &
Cost-based Agents.