AI Lecture 3


Artificial Intelligence

Lecture 3
Intelligent Agents, &
AI Related Disciplines

Dr. Mahmoud Bassiouni


mbassiouni@eelu.edu.eg
Lecture 3: Intelligent Agents, & AI Related Disciplines
2.1 Intelligent “Rational” Agents?
▪ Recap: AI as the Study & Design of Intelligent Agents
▪ Intelligent Agents in the World
▪ An Example: Vacuum_Agent
▪ Specifying the Task Environment [ PEAS ]
▪ Goal-based vs. Cost-based Agents

2.2 Specifying the Task Environment
▪ Environment Types

2.3 AI: Related Disciplines
▪ Learning Agents
▪ AI vs. Machine Learning vs. Deep Learning
▪ Machine Learning?
▪ Data Mining?
▪ AI vs. Data Science
▪ Why Data Science?
▪ Why Big Data?

Acting Rationally: Intelligent Agents

Main Resources for this Section

• This lecture covers mainly the following chapter(s):
• Chapter 2 (Intelligent Agents) from Stuart J. Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach," Third Edition (2010), Pearson Education Inc.

Recap .. What is Artificial Intelligence?
Four main approaches have been followed, each by different people with different methods:

▪ Thinking Humanly: the Cognitive Modelling approach
▪ Acting Humanly: the Turing Test approach
▪ Thinking Rationally: the Laws-of-Thought approach
▪ Acting Rationally: the Rational Agent approach

AI as the Study & Design of Intelligent Agents (Poole and Mackworth, 1999)

• An intelligent agent is such that:
  • Its actions are appropriate for its goals and circumstances.
  • It is flexible to changing environments and goals.
  • It learns from experience.
  • It makes appropriate choices given perceptual limitations and limited resources (bounded rationality or bounded optimality).

Thus, a rational agent acts to optimally achieve its goals (does the right thing). The right thing: that which is expected to maximize goal achievement, given the available information.

Intelligent Agents
• In AI, artificial agents that have a physical presence in the world are
usually known as Robots.
• Robotics is the field primarily concerned with the implementation of the
physical aspects of a robot (i.e. perception of the physical environment,
actions on the environment).
• Another class of artificial agents includes interface agents, for either stand-alone or Web-based applications (e.g. intelligent desktop assistants, recommender systems, intelligent tutoring systems).
• Interface agents don’t have to worry about interaction with the
physical environment, but share all other fundamental components of
intelligent behavior with robots.
• We will focus on these agents in this course (i.e. software agents not
hardware agents “autonomous robots”).

Pac-Man .. as an .. Intelligent Agent

[Diagram: the Agent ("?") perceives the Environment through Sensors (Percepts) and acts upon it through Actuators (Actions).]

Intelligent “Rational” Agents?
• BB-8 (the Star Wars franchise)
• Agent Smith in The Matrix (1999)
• Dr. Will Caster
• Arthur, an android bartender on the Avalon (Passengers)
• David in Prometheus (2012)
• Sonny in I, Robot (2004)
• Automata
• TARS and CASE (Interstellar)
• L3 (Star Wars – Han Solo)
• Vision (Marvel Comics)
• Ava in Ex Machina
Intelligent Agents
An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through actuators.

Intelligent Agents in the World .. Learning? .. Feedback?

[Diagram: the AI “Agent” interacts with the Environment in a cycle: 1: Sense → 2: Reason → 3: Act → 4: Get Feedback → 5: Learn.]
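
The cycle in this diagram can be sketched directly in code. Below is a minimal, hypothetical Python skeleton of the sense-reason-act-learn loop; the environment object and its method names (sense, act, feedback) are illustrative assumptions, not part of the lecture:

class SimpleAgent:
    def __init__(self):
        # Internal knowledge the agent accumulates over time.
        self.knowledge = {}

    def reason(self, percept):
        # 2: Reason -- choose an action from the percept and knowledge.
        return "NoOp"

    def learn(self, percept, action, feedback):
        # 5: Learn -- update internal knowledge from the feedback.
        self.knowledge[percept] = (action, feedback)

def run(agent, env, steps=10):
    for _ in range(steps):
        percept = env.sense()                    # 1: Sense
        action = agent.reason(percept)           # 2: Reason
        env.act(action)                          # 3: Act
        feedback = env.feedback()                # 4: Get Feedback
        agent.learn(percept, action, feedback)   # 5: Learn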
Intelligent Agents in the World .. Capacities / Fields?

[Diagram: the capacities of an intelligent agent, and the AI fields that provide them:]
• Knowledge Representation
• Machine Learning
• Reasoning + Decision Theory
• Natural Language Understanding
• Natural Language Generation
• Robotics
• Computer Vision
• Speech Recognition
• Physiological Sensing
• Human Computer / Robot Interaction
• Mining of Interaction Logs
Example: Vacuum_Agent
Percepts:
Location and status,
e.g., [A, Dirty]

Actions:
Left, Right, Suck, NoOp

function Vacuum_Agent( [location, status] ) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
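
The same reflex agent, as a minimal runnable Python sketch (the example percepts at the bottom are assumptions for illustration):

def vacuum_agent(location, status):
    # Reflex rules from the pseudocode above: clean the current
    # square first; otherwise move to the other square.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(vacuum_agent("A", "Dirty"))   # Suck
print(vacuum_agent("A", "Clean"))   # Right
print(vacuum_agent("B", "Clean"))   # Left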

Rational Agents
• For each possible percept sequence, a rational agent should
select an action that is expected to maximize its
performance measure, given the evidence provided by the
percept sequence and the agent’s built-in knowledge.

• Performance measure (utility function):


An objective criterion for success of an agent's behaviour.

Specifying the Task Environment [ PEAS ]
PEAS: Performance measure, Environment, Actuators, Sensors
P: a function the agent is maximizing (or minimizing);
Assumed given ..
In practice, needs to be computed somewhere.
E: a formal representation for world states;
For concreteness, a tuple (var1 = val1, var2 = val2, … , varn = valn).
A: actions that change the state according to a transition model;
Given a state and action, what is the successor state (or
distribution over successor states)?
S: sensor observations that allow the agent to infer the world state;
these often come in a very different form than the state itself.
E.g., in tracking, the observations may be pixels while the state
variables are 3D coordinates.
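
As a hypothetical illustration of these four pieces, the PEAS specification of the two-square vacuum world from the earlier slide can be written down as plain Python; all names here are assumptions for illustration, not from the lecture:

# E: a world state as a tuple of variables (var = val).
state = {"location": "A", "A": "Dirty", "B": "Clean"}

def performance(state):
    # P: the function the agent maximizes, e.g. +1 per clean square.
    return sum(1 for sq in ("A", "B") if state[sq] == "Clean")

def transition(state, action):
    # A: actions change the state according to a transition model.
    s = dict(state)
    if action == "Suck":
        s[s["location"]] = "Clean"
    elif action == "Right":
        s["location"] = "B"
    elif action == "Left":
        s["location"] = "A"
    return s

def sense(state):
    # S: observations from which the agent infers the world state;
    # here the agent sees only its location and that square's status.
    return (state["location"], state[state["location"]])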
PEAS Example 1: Autonomous Taxi

o Performance measure
Safe, fast, legal, comfortable trip, maximize profits
o Environment
Roads, other traffic, pedestrians, customers
o Actuators
Steering wheel, accelerator, brake, signal, horn
o Sensors
Cameras, speedometer, GPS, odometer, engine
sensors, keyboard
PEAS Example 2: Spam Filter

o Performance measure
Minimizing false positives, false negatives
o Environment
A user’s email account, email server
o Actuators
Mark as spam, delete, etc.
o Sensors
Incoming messages, other information about user’s account

Goal-based Agents versus Cost-based Agents
Goal-based agents: the actions depend on the goal; E.g., a mobile robot
which should move from room 112 to room 179 in a building takes actions
different from those of a robot that should move to room 105.

Goal-based Agents versus Cost-based Agents
Cost-based agents: the goal is to minimize the cost of erroneous decisions in the
long term; E.g., a spam filter is an agent that puts incoming emails into wanted or
unwanted (spam) categories & deletes any unwanted emails. Its goal as a goal-
based agent is to put all emails in the right category. In the course of this not-so-
simple task, the agent can occasionally make mistakes. Because its goal is to
classify all emails correctly, it will attempt to make as few errors as possible.
However, that is not always what the user has in mind. Let us compare the
following two agents. Out of 1,000 emails, Agent 1 makes only 12 errors. Agent 2
on the other hand makes 38 errors with the same 1,000 emails. Is it therefore worse
than Agent 1? The errors of both agents are shown in more detail in the
following confusion matrix (spam is the positive class):

                                    Agent 1   Agent 2
Spam classified as spam (TP):         799       762
Spam classified as wanted (FN):         1        38
Wanted classified as wanted (TN):     189       200
Wanted classified as spam (FP):        11         0

Agent 1 in fact makes fewer errors than Agent 2, but those few errors are
severe because the user loses 11 potentially important emails. Because
there are in this case two types of errors of differing severity, each
error should be weighted with the appropriate cost factor.
Goal-based Agents versus Cost-based Agents
For example, we have 1,000 emails: 800 spam and 200 wanted, and two agents
(spam is the positive class):

Agent 1:
• said 799 emails are spam, and they are spam → True Positives
• said 189 emails are wanted, and they are wanted → True Negatives
• 1 spam email was classified as wanted → False Negative
• 11 wanted emails were classified as spam → False Positives
So Agent 1 made 12 errors out of 1,000 emails.

Agent 2:
• said 762 emails are spam, and they are spam → True Positives
• said 200 emails are wanted, and they are wanted → True Negatives
• 38 spam emails were classified as wanted → False Negatives
• 0 wanted emails were classified as spam → False Positives
So Agent 2 made 38 errors out of 1,000 emails.
Goal-based Agents versus Cost-based Agents
Agent 1 is better than Agent 2 as a goal-based agent, because Agent 1 comes
closer to the goal of classifying every email correctly.
But more important is that the errors of Agent 1 are more critical than
those of Agent 2. Why?

▪ Agent 1 misclassified 11 important emails that may contain critical
information the user must read.
▪ Agent 2 made no mistakes on the wanted emails; it merely failed to
recognize some spam emails, but if the user opens them, he can mark them
as spam or delete them.
▪ Agent 2 delivered all the required (wanted) mail correctly.
▪ The weight of an error made by Agent 1 is therefore higher than the
weight of an error made by Agent 2.

So Agent 2 is better than Agent 1 in terms of a cost-based agent.
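
A short sketch makes the cost-based comparison concrete. The cost factors below (losing a wanted email taken as ten times worse than missing a spam email) are illustrative assumptions, not values from the lecture:

# Error counts from the confusion matrix above.
errors = {
    "Agent 1": {"wanted_lost": 11, "spam_missed": 1},
    "Agent 2": {"wanted_lost": 0,  "spam_missed": 38},
}

# Hypothetical cost factors for the two error types.
COST_WANTED_LOST = 10.0   # a lost important email is severe
COST_SPAM_MISSED = 1.0    # a spam email in the inbox is a nuisance

for name, e in errors.items():
    cost = (e["wanted_lost"] * COST_WANTED_LOST
            + e["spam_missed"] * COST_SPAM_MISSED)
    print(name, "weighted cost:", cost)

# Output: Agent 1 weighted cost: 111.0
#         Agent 2 weighted cost: 38.0
# Agent 1 makes fewer raw errors (12 vs. 38), yet Agent 2 incurs the
# lower total cost, so Agent 2 is the better cost-based agent.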

Exercises
What have we learned?
▪ Differentiate between Goal-based Agents & Cost-based Agents.

▪ Discuss PEAS, and how it is employed to specify a task environment.

▪ What are the basic capacities of an Intelligent Agent? Illustrate using a diagram.
THANK YOU
