Lecture 03 - Agent Types PDF
Artificial Intelligence
Lecture # 03
28 January 2020
Spring 2020
FAST – NUCES, Faisalabad Campus
Zain Iqbal
Zain.iqbal@nu.edu.pk
CS401-Spring-20 1
Today’s Topics
• Environment types
• Agent Types
◦ Table-lookup agents
◦ Simple reflex agents
◦ Model-based reflex agents
◦ Goal-based agents
◦ Utility-based agents
◦ Learning agents
Properties of task environment
• Fully observable vs. partially observable
• Single-agent vs. multi-agent
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Known vs. unknown
Environment types
Fully observable vs. partially observable: An environment is fully observable if the agent’s sensors give it access to the complete state of the environment at each point in time; otherwise it is partially observable.
Environment types
Deterministic vs. stochastic: An environment is deterministic if its next state is completely determined by the current state and the agent’s action; otherwise it is stochastic.
Environment types
Episodic vs. sequential: In an episodic environment the agent’s experience divides into independent episodes, and the action chosen in one episode does not affect later ones; in a sequential environment the current decision can affect all future decisions.
Environment types
Static vs. dynamic: A static environment does not change while the agent is deliberating; a dynamic one can. (Semi-dynamic: the environment itself does not change with time, but the agent’s performance score does, e.g. chess with a clock.)
Environment types
Discrete vs. continuous: A discrete environment has a limited number of distinct, clearly defined states, percepts and actions.
• Examples: Chess has a finite number of discrete states, and a discrete set of percepts and actions. Taxi driving has continuous states and actions.
Environment types
Single agent vs. multiagent: An agent solving a crossword puzzle by itself is in a single-agent environment; chess is a (competitive) two-agent environment, and taxi driving is a (partly cooperative) multiagent one.
Environment types
Known vs. unknown: This refers to the agent’s (or designer’s) knowledge of the “laws” of the environment, i.e. the outcomes (or outcome probabilities) of all actions. A known environment can still be partially observable, and an unknown one fully observable.
Properties of task environment

                   Chess with    Chess without   Taxi
                   a clock       a clock         driving
Fully observable   Yes           Yes             No
Deterministic      Strategic     Strategic       No
Episodic           No            No              No
Dynamic            Semi          No              Yes
Single agent       No            No              No
Agent functions and programs
• An agent is completely specified by the agent function, which maps percept sequences to actions
• One agent function (or a small equivalence class) is rational
• Aim: find a way to implement the rational agent function concisely -> design an agent program
Agent functions and programs
• Agent program:
– Takes the current percept as input from the sensors
– Returns an action to the actuators
– While the agent function takes the whole percept history, the agent program takes just the current percept as input, since that is the only input available from the environment
– The agent program must remember the whole percept sequence itself, if it needs it
Agent Types
Table-lookup Agent
Simplest possible agent function:
◦ All possible states and their optimal actions are specified by the designers in advance
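The idea can be sketched in a few lines of Python. The two-square vacuum world used here is an illustrative assumption; note also that, in the general scheme, the table would be keyed on the entire percept sequence, not just the current percept.

```python
# Minimal table-lookup agent sketch for a hypothetical two-square
# vacuum world. The table is keyed on single percepts for brevity.
def make_table_agent(table):
    """Return an agent function that simply looks each percept up."""
    def agent(percept):
        return table.get(percept)  # None if the designers missed a state
    return agent

# The designers enumerate every percept and its optimal action in advance.
table = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}
agent = make_table_agent(table)
```

Even in this tiny world the table must cover every state; with continuous states the table would be infinite, which is the first drawback listed below.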
Table-lookup Agent
Drawbacks:
◦ Huge table (consider continuous states)
◦ Could take a long time to build the table
◦ No autonomy!
◦ Even with learning, the agent could need a long time to learn the table entries
Agent types
Four basic kinds of agent programs will be discussed:
◦ Simple reflex agents
◦ Model-based reflex agents
◦ Goal-based agents
◦ Utility-based agents
All of these can be turned into learning agents.
Simple reflex agent
• Selects an action on the basis of only the current percept
• Large reduction in possible percept/action situations
• Implemented through condition-action rules, e.g. if dirty then suck
Condition-action rule:
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
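The pseudocode above translates directly into runnable Python:

```python
# Direct Python translation of REFLEX-VACUUM-AGENT: the action depends
# only on the current percept (location, status), with no internal state.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
```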
Model-based reflex agent
• Reflex + state
• To tackle partially observable environments (by maintaining internal state)
• Over time, updates the state using world knowledge:
◦ How does the world evolve?
◦ How do actions affect the world?
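A minimal sketch of a model-based variant of the vacuum agent, assuming the same two-square world; the internal model here simply remembers the last known status of each square (the `NoOp` action for a known-clean world is an illustrative assumption):

```python
# Model-based reflex vacuum agent sketch: keeps an internal model of the
# world (last known status of each square) so it can act sensibly even
# though each percept reveals only the current square.
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: last known status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status  # update state from the percept
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"  # the whole world is known to be clean
        return "Right" if location == "A" else "Left"
```

Unlike the simple reflex agent, this one can stop moving once its model says both squares are clean.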
Goal-based agent
• The agent needs a goal to know which states are desirable
• Typically investigated in search and planning research
• More flexible, since knowledge is represented explicitly and can be manipulated
• Can easily be changed for new goals, unlike a simple reflex agent (which requires new rules to be written)
Goal-based agents vs reflex-based agents
•
Although goal-based agents appears less efficient, it
is more flexible because the knowledge that supports
its decision is represented explicitly and can be
modified
• On the other hand, for the reflex-agent, we would
• have to rewrite many condition-action rules
The goal based agent's behavior can easily be
changed
• The reflex agent's rules must be changed for a new
situation
CS401-Spring-20
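A toy sketch of this flexibility: the goal is an explicit piece of data handed to a search routine (breadth-first search here), so changing the goal means passing a different argument rather than rewriting rules. The graph is invented for illustration:

```python
from collections import deque

# Goal-based agent sketch: plan a path to an explicitly represented goal
# with breadth-first search over a toy state graph.
def plan(start, goal, neighbors):
    """Return the sequence of states leading from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in neighbors.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # goal unreachable

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": []}
```

Calling `plan("A", "D", graph)` and `plan("A", "C", graph)` reuses the same knowledge for different goals; a reflex agent would need a new rule set for each.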
Utility-based agent
• Certain goals can be achieved in different ways
• A utility function maps a (sequence of) state(s) onto a real number
• Improves on goals by:
◦ Selecting between conflicting goals
◦ Selecting appropriately between several goals based on likelihood of success
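A minimal sketch of trading off several goals by expected utility (utility weighted by likelihood of success); the candidate goals and their numbers are illustrative assumptions:

```python
# Utility-based choice sketch: among several achievable goals, pick the
# one that maximizes expected utility = utility * probability of success.
def choose_goal(candidates):
    """candidates: list of (goal, utility, success_probability) tuples."""
    return max(candidates, key=lambda c: c[1] * c[2])[0]

options = [
    ("fast_route", 10.0, 0.60),  # high payoff, risky:        EU = 6.00
    ("safe_route", 7.0, 0.95),   # lower payoff, near-certain: EU = 6.65
]
```

A pure goal-based agent could only say both routes reach the destination; the real-valued utility lets the agent prefer the safer one here.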
Learning agent
• Learning element: responsible for making improvements
• Performance element: selects external actions based on percepts
• Critic: determines how the agent is doing, and how the performance element should be modified to do better in the future
• Problem generator: suggests actions that will lead to new and informative experiences
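The four components can be wired together in a schematic sketch; the reward-averaging update and random exploration below are illustrative stand-ins, not a real learning algorithm:

```python
import random

# Schematic learning agent: the performance element picks the action
# currently believed best, the problem generator occasionally explores,
# and the learning element updates estimates from the critic's reward.
class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.value = {a: 0.0 for a in actions}  # performance element's estimates
        self.counts = {a: 0 for a in actions}

    def act(self, explore_prob=0.1):
        # Problem generator: occasionally try something new and informative.
        if random.random() < explore_prob:
            return random.choice(self.actions)
        # Performance element: pick the action currently believed best.
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward):
        # The critic judged the outcome as `reward`; the learning element
        # updates the estimate with an incremental average.
        self.counts[action] += 1
        self.value[action] += (reward - self.value[action]) / self.counts[action]
```

After a few `learn` calls, `act` starts preferring the actions the critic rewarded, which is the feedback loop the slide describes.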
Examples
• Simple reflex agent:
◦ If temperature > 25, turn on the air conditioner
• Model-based reflex agent:
◦ Turn on the air conditioner at 9 a.m.
• Goal-based agent:
◦ Keep the people in the room comfortable
• Utility-based agent:
◦ Measure comfort with a utility function (a binary comfortable/uncomfortable judgment is not enough)
• Learning agent:
◦ Learn the definition of “comfortable” from feedback
Reading Material
• Russell & Norvig: Chapter # 2
• David Poole: Chapter # 1