Lecture 5-6
Drawbacks:
– Huge table (often simply too large)
– Takes a long time to build/learn the table
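To make the drawback concrete, here is a minimal table-driven agent sketch in Python (the table entries and the names TABLE and table_driven_agent are illustrative, not from the lecture): because the table must map every possible percept sequence to an action, it explodes in size and is impractical to build by hand.

# Table-driven agent sketch (illustrative names only).
# The table must map every possible percept *sequence* to an action,
# which is why it quickly becomes too large to build or store.

TABLE = {
    (("A", "dirty"),): "clean",
    (("A", "clean"),): "move_right",
    (("A", "clean"), ("B", "dirty")): "clean",
}

percept_history = []

def table_driven_agent(percept):
    """Append the percept and look the whole history up in the table."""
    percept_history.append(percept)
    return TABLE.get(tuple(percept_history), "do_nothing")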
II) --- Simple reflex agents
Simple reflex agents are the simplest agents. They select actions on the basis of the current percept and ignore the rest of the percept history (past states).
These agents succeed only in a fully observable environment.
A simple reflex agent does not consider any part of the percept history during its decision and action process.
A simple reflex agent works on condition-action rules, which map the current state directly to an action. For example, a room-cleaner agent cleans only if there is dirt in the room.
Problems with the simple reflex agent design approach:
– They have very limited intelligence.
– They have no knowledge of non-perceptual parts of the current state.
– The rule table is mostly too big to generate and to store.
– They are not adaptive to changes in the environment.
The agent selects actions on the basis of the current percept only.
Example condition-action rule: if a potentially “dangerous driver in front” is perceived, then “keep distance.” (How detailed such a state description should be depends on the task.)
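A minimal sketch of a simple reflex agent in Python, using the room-cleaner and keep-distance rules above (the rule table and function names are illustrative assumptions): the action depends only on the current percept, looked up in a set of condition-action rules.

# Simple reflex agent sketch: condition-action rules applied to the
# current percept only; no percept history, no internal model.

RULES = {
    "dirty": "clean",                        # if there is dirt, clean it
    "dangerous_driver_ahead": "keep_distance",
    "clean": "move_on",
}

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via the rule table."""
    return RULES.get(percept, "do_nothing")

print(simple_reflex_agent("dirty"))          # -> clean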
IV) --- Goal-based agents
These agents consider the “future,” e.g., the goal “clean kitchen.”
The agent keeps track of the world state as well as a set of goals it is trying to achieve, and chooses actions that will (eventually) lead to the goal(s).
Goal-based agents are more flexible than reflex agents and may involve search and planning.
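A minimal goal-based agent sketch (the tiny 1-D world, the goal location, and the helper names are assumptions for illustration): the agent keeps track of the state and searches, here with breadth-first search, for an action sequence that eventually reaches the goal.

from collections import deque

# Goal-based agent sketch: search for a sequence of actions that
# leads from the current state to a goal state (tiny 1-D world).

ACTIONS = {"left": -1, "right": +1}
GOAL = 3                                     # illustrative goal location

def plan_to_goal(state, goal):
    """Breadth-first search over states; returns a list of actions."""
    frontier = deque([(state, [])])
    visited = {state}
    while frontier:
        current, path = frontier.popleft()
        if current == goal:
            return path
        for action, delta in ACTIONS.items():
            nxt = current + delta
            if 0 <= nxt <= 5 and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return []

print(plan_to_goal(0, GOAL))                 # -> ['right', 'right', 'right']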
V) --- Utility-based agents
These agents are similar to goal-based agents but add an extra component of utility measurement (a “level of happiness”), which sets them apart by providing a measure of how desirable a given state is.
Utility-based agents act based not only on goals but also on the best way to achieve those goals.
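A minimal utility-based sketch (the outcome probabilities and utility values are invented for illustration): rather than asking only whether a goal is reached, the agent scores outcomes with a utility function and chooses the action with the highest expected utility.

# Utility-based agent sketch: choose the action whose (expected)
# outcome has the highest utility, not just any goal-reaching action.

OUTCOMES = {        # action -> list of (probability, resulting_state)
    "fast_route": [(0.7, "arrived_early"), (0.3, "accident")],
    "safe_route": [(1.0, "arrived_on_time")],
}

UTILITY = {"arrived_early": 10, "arrived_on_time": 8, "accident": -100}

def expected_utility(action):
    """Probability-weighted utility of an action's possible outcomes."""
    return sum(p * UTILITY[state] for p, state in OUTCOMES[action])

def utility_based_agent():
    """Pick the action that maximises expected utility."""
    return max(OUTCOMES, key=expected_utility)

print(utility_based_agent())                 # -> safe_route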
• A learning agent in AI is a type of agent that can learn from its past experiences; it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has mainly four conceptual components:
– Learning element: responsible for making improvements by learning from the environment.
– Critic: gives feedback on how well the agent is doing with respect to a fixed performance standard.
– Performance element: responsible for selecting external actions.
– Problem generator: suggests actions that will lead to new and informative experiences.
• Example: feedback such as “A quick turn is not safe” leads the agent to learn the rule “no quick turn.”
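A minimal sketch of how the four components could fit together (the rule updates and feedback strings are illustrative, built around the quick-turn example above): the critic judges the outcome, the learning element revises the rules used by the performance element, and the problem generator proposes exploratory actions.

import random

# Learning agent sketch: critic -> learning element -> performance
# element, plus a problem generator that proposes exploratory actions.

rules = {"sharp_bend": "quick_turn"}          # performance element's rules

def critic(action, outcome):
    """Judge the outcome against a performance standard."""
    return "quick turn is not safe" if outcome == "skidded" else "ok"

def learning_element(percept, feedback):
    """Revise the rules based on the critic's feedback."""
    if feedback == "quick turn is not safe":
        rules[percept] = "slow_turn"          # i.e. 'no quick turn'

def performance_element(percept):
    """Select an action using the current rules."""
    return rules.get(percept, "do_nothing")

def problem_generator():
    """Suggest an exploratory action to gain new experience."""
    return random.choice(["try_new_route", "vary_speed"])

action = performance_element("sharp_bend")    # -> quick_turn
learning_element("sharp_bend", critic(action, "skidded"))
print(performance_element("sharp_bend"))      # -> slow_turn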
• An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting (see the sketch after this list). An agent can be:
• Human agent: a human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and the vocal tract that work as actuators.
• Robotic agent: a robotic agent can have cameras and infrared range finders as sensors and various motors as actuators.
• Software agent: a software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
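A minimal sketch of the perceive-think-act cycle mentioned above (the toy vacuum-style environment and all names are placeholders): sensors read the environment into a percept, the agent program decides on an action, and actuators apply it back to the environment.

# Perceive-think-act cycle sketch: sensors -> agent program -> actuators.

environment = {"location": "A", "status": "dirty"}

def sense(env):                       # "sensor": read the environment
    return (env["location"], env["status"])

def agent_program(percept):           # "think": decide on an action
    location, status = percept
    return "clean" if status == "dirty" else "move"

def act(env, action):                 # "actuator": change the environment
    if action == "clean":
        env["status"] = "clean"
    elif action == "move":
        env["location"] = "B" if env["location"] == "A" else "A"

for _ in range(3):                    # run the agent for a few cycles
    percept = sense(environment)
    action = agent_program(percept)
    act(environment, action)
    print(percept, "->", action)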
Hence the world around us is full of agents, such as thermostats, cellphones, and cameras, and even we ourselves are agents. Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: A sensor is a device that detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through its sensors.
Actuators: Actuators are the components of a machine that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices that actually affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.