Lecture 5-6


Intelligent agents

An intelligent agent (IA, or simply agent) can be considered as an autonomous decision-making system situated within some environment and capable of sensing and acting within that environment.
The structure of intelligent agents (agent programs)
(1) Table-driven agents
– use a percept-sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.
(2) Simple reflex agents
– are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which have no memory of past world states.
(3) Model-based reflex agents (agents with memory)
– have internal state, which is used to keep track of past states of the world.
(4) Goal-based agents (agents with goals)
– have, in addition to state information, goal information that describes desirable situations. Agents of this kind take future events into consideration.
(5) Utility-based agents
– base their decisions on classical axiomatic utility theory in order to act rationally.
(6) Learning agents
– have the ability to improve performance through learning.
I) --- Table-driven agents
A table-driven agent uses a percept-sequence/action table in memory to find the next action; it is implemented as a (large) lookup table.
Drawbacks:
– The table is huge (often simply too large to store)
– It takes a long time to build/learn the table
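The table lookup described above can be sketched in a few lines; the vacuum-world percepts and actions below are illustrative, and the code shows why the table must cover every possible percept sequence, not just every percept.

```python
# Sketch of a table-driven agent. The table maps an ENTIRE percept
# sequence to an action, which is why it grows exponentially with
# the length of the sequence. All names/values here are illustrative.

def make_table_driven_agent(table):
    percepts = []  # the full percept history kept in memory

    def agent(percept):
        percepts.append(percept)
        # Look up the action for the whole sequence seen so far.
        return table.get(tuple(percepts), "NoOp")

    return agent

# Tiny vacuum-world table: (location, dirt status) sequences -> action.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

Even for this two-square world, covering all percept sequences of length n needs on the order of 4^n entries, which is the "huge table" drawback in practice.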
II) --- Simple reflex agents
Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept alone and ignore the rest of the percept history (past states).
These agents succeed only in a fully observable environment.
A simple reflex agent works on condition-action rules: it maps the current state directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room.
Problems with the simple reflex agent design approach:
– They have very limited intelligence.
– They have no knowledge of non-perceptual parts of the current state.
– The rule set is often too big to generate and to store.
– They are not adaptive to changes in the environment.
The agent selects actions on the basis of the current percept only, e.g.:
If the tail-light of the car in front is red, then brake.
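A minimal sketch of the condition-action rule idea, using the braking example from the slide; the percept fields and rule set are made up for illustration.

```python
# Simple reflex agent: condition-action rules applied to the CURRENT
# percept only — no memory, no model. Rules and percepts are illustrative.

RULES = [
    (lambda p: p.get("tail_light_in_front") == "red", "brake"),
    (lambda p: p.get("dirt") is True, "suck"),
]

def simple_reflex_agent(percept):
    # Fire the first rule whose condition matches the current percept.
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no-op"

print(simple_reflex_agent({"tail_light_in_front": "red"}))  # brake
```

Because the decision depends only on `percept`, two identical percepts always produce the same action — exactly the statelessness described above.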
III) --- Model-based reflex agents
A model-based agent can work in a partially observable environment and track the situation.
A model-based agent has two important components:
• Model: knowledge about "how things happen in the world" — this is what makes it a model-based agent.
• Internal state: a representation of the current state, based on the percept history.
These agents hold a model, "which is knowledge of the world," and perform actions based on that model.
Updating the agent's state requires information about:
• how the world evolves, and
• how the agent's actions affect the world.
How detailed should the model be? Detailed enough to support useful inferences, e.g.: the agent infers a potentially dangerous driver in front; if there is a "dangerous driver in front," then keep distance.
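The state update described above (internal state + current percept → new state → action) can be sketched as follows; the driving scenario and all field names are hypothetical.

```python
# Sketch of a model-based reflex agent. The internal state persists
# across percepts, so the agent can react to things it no longer sees.
# Scenario and attribute names are illustrative.

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {"dangerous_driver": False}  # internal state
        self.last_action = None

    def update_state(self, percept):
        # The "model" part: infer a potentially dangerous driver in front
        # from observed behavior ("how the world evolves").
        if percept.get("front_car_swerving"):
            self.state["dangerous_driver"] = True

    def act(self, percept):
        self.update_state(percept)
        if self.state["dangerous_driver"]:
            self.last_action = "keep_distance"
        else:
            self.last_action = "maintain_speed"
        return self.last_action
```

Note that once `dangerous_driver` has been inferred, the agent keeps its distance even on later percepts where the swerving is not currently observed — something a simple reflex agent cannot do.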
IV) --- Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by adding "goal" information.
• They choose actions so as to achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal can be achieved. Such consideration of different scenarios is called searching and planning, and it makes the agent proactive.
The agent keeps track of the world state as well as the set of goals it is trying to achieve (e.g. "clean kitchen"), and chooses actions that will (eventually) lead to the goal(s). Because it considers the "future," it is more flexible than reflex agents; this may involve search and planning.
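The point about considering sequences of actions can be illustrated with a tiny breadth-first search; the room graph and the goal are made up for the sketch.

```python
# A goal-based agent plans a SEQUENCE of actions toward a goal state.
# Here: breadth-first search over a hypothetical map of rooms.

from collections import deque

ROOMS = {
    "hall": ["lounge", "kitchen_door"],
    "lounge": ["hall"],
    "kitchen_door": ["hall", "kitchen"],
    "kitchen": ["kitchen_door"],
}

def plan(start, goal):
    frontier = deque([[start]])  # each entry is a path (action sequence)
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # sequence of states leading to the goal
        for nxt in ROOMS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(plan("hall", "kitchen"))  # ['hall', 'kitchen_door', 'kitchen']
```

The agent commits to an action only after the search confirms that some sequence reaches the goal — this is the "considers the future" behavior the slide describes.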
V) --- Utility-based agents
These agents are similar to goal-based agents but add an extra component: a utility measure ("level of happiness"), which provides a measure of success in a given state.
A utility-based agent acts based not only on its goals but also on the best way to achieve them.
Utility-based agents are useful when there are multiple possible alternatives and the agent has to choose the one that performs the best action.
The utility function maps each state to a real number, which measures how efficiently each action achieves the goals.
Actions are chosen decision-theoretically, e.g. trading off faster vs. safer.
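A minimal sketch of utility-based choice, assuming a hypothetical utility function that trades off "faster vs. safer"; the outcomes and weights are invented for illustration.

```python
# Utility-based choice: score each alternative with a utility function
# (a real number; higher is better) and pick the maximum.
# Weights and outcome scores are illustrative, not calibrated.

def utility(outcome, w_speed=0.4, w_safety=0.6):
    # Map a state/outcome to a real number.
    return w_speed * outcome["speed"] + w_safety * outcome["safety"]

OUTCOMES = {
    "highway":   {"speed": 0.9, "safety": 0.5},   # faster
    "back_road": {"speed": 0.5, "safety": 0.9},   # safer
}

best = max(OUTCOMES, key=lambda a: utility(OUTCOMES[a]))
print(best)  # back_road (safety is weighted more heavily here)
```

With these weights the safer route wins (0.74 vs. 0.66); changing the weights changes the choice, which is exactly what distinguishes a utility-based agent from a pure goal-based one — both routes reach the goal, but one is preferred.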
VI) --- Learning agents
• A learning agent in AI is an agent that can learn from its past experiences; it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has four main conceptual components:
– Learning element: responsible for making improvements by learning from the environment.
– Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
– Performance element: responsible for selecting external actions.
– Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
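The four components above can be wired together in a toy sketch; all names, the critic's fixed standard, and the exploration rate are invented for illustration, not a standard implementation.

```python
# Toy learning agent built from the four conceptual components:
# performance element, learning element, critic, problem generator.
# Actions and the critic's scores are illustrative.

import random

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # learning element's estimates
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        # Select the external action with the best current estimate.
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        # Suggest an action for a new, informative experience.
        return random.choice(list(self.values))

    def learning_element(self, action, feedback):
        # Improve: incremental average of the critic's feedback per action.
        self.counts[action] += 1
        self.values[action] += (feedback - self.values[action]) / self.counts[action]

def critic(action):
    # Fixed performance standard (hypothetical): a quick turn is not safe.
    return {"brake_gently": 1.0, "quick_turn": -1.0}[action]

agent = LearningAgent(["brake_gently", "quick_turn"])
for _ in range(10):
    # Mostly exploit the performance element; sometimes explore.
    a = agent.problem_generator() if random.random() < 0.3 else agent.performance_element()
    agent.learning_element(a, critic(a))

print(agent.performance_element())  # brake_gently
```

After a few steps the agent's estimates reflect the critic's feedback, so the performance element settles on the safe action — the "adapt automatically through learning" behavior described above.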
Learning agents adapt and improve over time. Learning utility information (based on action payoffs) is more complicated: this is reinforcement learning.
[Figure: learning-agent architecture. The learning module critiques the performance element ("a quick turn is not safe" → no quick turn); the performance element takes percepts (road conditions, etc.) and selects actions; the problem generator suggests experiments such as "try out the brakes on different road surfaces."]
Summary
An agent perceives and acts in an environment, has an architecture, and is implemented
by an agent program.
A rational agent always chooses the action which maximizes its expected performance,
given its percept sequence so far.
An autonomous agent uses its own experience rather than built-in knowledge of the
environment by the designer.
An agent program maps from percept to action and updates its internal state.
– Reflex agents (simple / model-based) respond immediately to percepts.
– Goal-based agents act in order to achieve their goal(s), possibly via a sequence of steps.
– Utility-based agents maximize their own utility function.
– Learning agents improve their performance through learning.
Representing knowledge is important for successful agent design.

The most challenging environments are partially observable, stochastic, sequential, dynamic, and continuous, and contain multiple intelligent agents.
Typical Intelligent Agents

• An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:

• Human agent: a human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.
• Robotic agent: a robotic agent can have cameras and infrared range finders for sensors, and various motors for actuators.
• Software agent: a software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
Hence the world around us is full of agents — thermostats, cellphones, cameras — and we ourselves are agents too. Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: a sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
Actuators: actuators are the components of machines that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: effectors are the devices which actually affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.
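The perceive-think-act cycle described above can be sketched as a simple loop; the thermostat agent and its numbers are illustrative.

```python
# The cycle of perceiving, thinking, and acting, with a thermostat-style
# agent. The room model and temperatures are made up for illustration.

class Room:
    def __init__(self, temp):
        self.temp = temp

    def sense(self):
        # Sensor: detect the current temperature.
        return self.temp

    def apply(self, action):
        # Actuator: heating warms the room; with the heater off it cools.
        self.temp += 1 if action == "heat" else -0.5

def thermostat(temp):
    # "Thinking": a single condition-action rule.
    return "heat" if temp < 20 else "off"

room = Room(temp=18)
for _ in range(5):  # perceive -> think -> act, repeatedly
    room.apply(thermostat(room.sense()))
print(room.temp)  # 20.0
```

Each pass through the loop is one full agent cycle: the sensor reading feeds the decision, and the chosen action changes the environment that will be sensed next.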
