2 Intelligent Agents: Environment Types
Email: keskinoz@itu.edu.tr
ODS 2001
1. Logic
– Studied intensively within mathematics
– Gives a handle on how to reason intelligently
• Example: automated reasoning
– Proving theorems using deduction
– http://www.youtube.com/watch?v=3NOS63-4hTQ
• Advantage of logic:
– We can be very precise (formal) about our programs
• Disadvantage of logic:
– Not designed for uncertainty.

2. Introspection
– Humans are intelligent, aren't they?
• Expert systems
– Implement the ways (rules) of the experts (a toy deduction sketch follows below)
• Example: MYCIN (blood disease diagnosis)
– Performed better than junior doctors
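As a toy illustration of the rule-based deduction that expert systems perform, here is a minimal forward-chaining sketch in Python; the rules and facts are invented for the example and are not taken from MYCIN.

    # Toy forward chaining: derive new conclusions from if-then rules.
    rules = [
        ({"fever", "infection"}, "prescribe_antibiotic"),  # invented rule
        ({"positive_culture"}, "infection"),               # invented rule
    ]

    def forward_chain(facts, rules):
        """Repeatedly fire any rule whose premises are all known facts."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "positive_culture"}, rules))
    # -> includes 'infection' and then 'prescribe_antibiotic'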
3. Brains
– Our brains and senses are what give us intelligence
• Neurologists tell us about:
– Networks of billions of neurons
• Build artificial neural networks
– In hardware and software (mostly software now)
• Build neural structures
– Interactions of layers of neural networks
• http://www.youtube.com/watch?v=r7180npAU9Y&NR=1

4. Evolution
– Our brains evolved through natural selection
• So, simulate the evolutionary process
– Simulate genes, mutation, inheritance, fitness, etc. (see the sketch below)
• Genetic algorithms and genetic programming
– Used in machine learning (induction)
– Used in Artificial Life simulation
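To make "simulate genes, mutation, inheritance, fitness" concrete, here is a minimal genetic-algorithm sketch in Python; the bit-counting fitness function, population size, and mutation rate are arbitrary choices for illustration, not part of the lecture.

    import random

    # Toy fitness: count of 1-bits in the genome (a hypothetical objective).
    def fitness(genome):
        return sum(genome)

    def mutate(genome, rate=0.05):
        # Mutation: flip each gene with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):
        # Inheritance: single-point crossover between two parents.
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def evolve(pop_size=30, genome_len=20, generations=50):
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half of the population.
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]
            # Reproduction: offspring via crossover plus mutation.
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    print(fitness(evolve()))  # approaches genome_len over the generations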
AI prehistory
• Philosophy
– Can formal rules be used to draw valid conclusions?
– Where does knowledge come from?
– How does knowledge lead to action?
• Mathematics/Statistics
– What are the formal rules to draw valid conclusions?
– How do we reason with uncertain information?
– How do intelligent agents learn?
• Economics
– How should we make decisions to maximize payoff?
– How should we do this when others are making decisions too?
• Psychology
– How do humans and animals think?
• Computer engineering
– How can we build efficient computers?
• Linguistics
– How does language relate to thought?
– Knowledge representation, grammar
Agents
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
• Human agent:
– eyes, ears, and other organs for sensors;
– hands, legs, mouth, and other body parts for actuators
• Robotic agent:
– cameras and infrared range finders for sensors
– various motors for actuators

Sensors and Actuators
• Sensors and actuators are both key components in control systems and automation, but they serve opposite functions.
• Sensors
– Function: Sensors detect and measure physical properties or changes in the environment and convert them into signals that can be read by a system or a device.
– Purpose: They gather information from the environment, acting as the "input" side of the system.
• Examples:
– Temperature sensors (thermometers)
– Light sensors (photocells)
– Motion detectors
– Pressure sensors
– Cameras
• Use Case: In a smart home, a temperature sensor might detect the room temperature and send that data to a thermostat (see the agent sketch below).
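A minimal sketch of the percept-in, action-out agent loop in Python, using the smart-home thermostat as the example; the class name, temperature threshold, and percept format are hypothetical, chosen only to illustrate the sensor → agent → actuator flow.

    class ThermostatAgent:
        """Illustrative agent: percepts in, actions out (names are hypothetical)."""

        def __init__(self, target_temp=21.0):
            self.target_temp = target_temp  # desired room temperature in Celsius

        def program(self, percept):
            # The percept comes from a sensor (here, a temperature reading).
            room_temp = percept["temperature"]
            # The action goes to an actuator (here, the heater switch).
            if room_temp < self.target_temp:
                return "HEATER_ON"
            return "HEATER_OFF"

    agent = ThermostatAgent()
    print(agent.program({"temperature": 18.5}))  # -> HEATER_ON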
Rationality
Demo: http://www.ai.sri.com/~oreilly/aima3ejava/aima3ejavademos.html
• Rationality depends on:
– The performance measure defining success
– The agent's prior knowledge of the environment
– The actions that the agent can perform
– The agent's percept sequence to date
• Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has (see the sketch below).

Omniscient Agents
• An omniscient agent in artificial intelligence (AI) refers to a hypothetical or theoretical agent that has complete knowledge of the environment and all possible outcomes of its actions. It knows the actual state of the world and how its actions will affect the future with absolute certainty.
• Key Features of an Omniscient Agent:
– Complete Knowledge: It knows everything about the environment, including hidden or uncertain aspects.
– Perfect Prediction: It can predict the exact consequences of all possible actions, knowing their outcomes ahead of time.
– Optimal Decision-Making: Since it knows the best action to take in every possible situation, it can make optimal decisions to achieve its goal.
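A sketch of "select the action expected to maximize the performance measure" in Python; the outcome model, scores, and the umbrella example below are invented stand-ins for the agent's built-in knowledge, not part of the lecture.

    def expected_performance(action, outcome_model, performance):
        # outcome_model(action) yields (probability, outcome) pairs.
        return sum(p * performance(outcome)
                   for p, outcome in outcome_model(action))

    def rational_action(actions, outcome_model, performance):
        # Pick the action with the highest *expected* performance,
        # not the one that happens to turn out best in hindsight.
        return max(actions,
                   key=lambda a: expected_performance(a, outcome_model, performance))

    # Toy example: carry an umbrella given a 30% chance of rain (made-up numbers).
    outcomes = {"umbrella": [(1.0, "dry")],
                "no_umbrella": [(0.3, "wet"), (0.7, "dry and unburdened")]}
    score = {"dry": 8, "wet": 0, "dry and unburdened": 10}
    print(rational_action(outcomes.keys(),
                          lambda a: outcomes[a],
                          lambda o: score[o]))  # -> umbrella (8 > 0.3*0 + 0.7*10)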
Rationality
• Rational is different from omniscient
– Percepts may not supply all relevant information
– E.g., in a card game, you don't know the cards of the others.
• Rational is different from being perfect
– Rationality maximizes expected outcome while perfection maximizes actual outcome.
• Important Distinction:
– Omniscience vs. Rationality: Real-world AI agents are not omniscient because they do not have perfect knowledge of the environment or outcomes. However, they can be rational, meaning they make the best possible decisions based on the information they have. A rational agent may not always make the best decision but works to maximize its performance given its limited knowledge and computational resources.
• Why Omniscience is Impractical:
– In real-world applications, omniscience is not achievable due to:
– Uncertainty in environments: The agent cannot know all factors influencing the environment.
– Incomplete data: Sensors and inputs can be noisy or incomplete.
– Computation limits: It is impossible to compute all possible actions and their consequences due to time and resource constraints.
• Thus, most AI systems aim for rationality rather than omniscience.
• Ideal: design agents to have some autonomy
– Possibly become more autonomous with experience

PEAS
• PEAS: Performance measure, Environment, Actuators, Sensors
• Example: automated taxi driver (a PEAS record sketch follows below)
– Actuators: Steering wheel, accelerator, brake, signal, horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
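A small data-structure sketch of a PEAS description in Python. The slide above lists only the taxi's actuators and sensors; the performance-measure and environment entries below are the standard textbook ones for this example, filled in as an assumption rather than taken from the captured slides.

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        # Performance measure, Environment, Actuators, Sensors
        performance: list
        environment: list
        actuators: list
        sensors: list

    taxi = PEAS(
        # Assumed standard textbook entries (not on the captured slide):
        performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        # From the slide:
        actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
        sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
                 "engine sensors", "keyboard"],
    )
    print(taxi.actuators)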
Environment types
• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multiagent)

Fully observable (vs. partially observable)
• Is everything the agent requires to choose its actions available to it via its sensors? Perfect or full information.
• If so, the environment is fully accessible.
• If not, parts of the environment are inaccessible and the agent must make informed guesses about the world.
Static (vs. dynamic)
• Static environments don't change
– While the agent is deliberating over what to do
• Semidynamic: the environment itself does not change with the passage of time, but the agent's performance score does.

Discrete (vs. continuous)
• A limited number of distinct, clearly defined percepts and actions vs. a range of values (continuous)
• (A table classifying example environments as discrete or continuous appeared here.)
• Recall the agent function that maps from percept histories to actions:
f : P* → A
• An agent program can implement an agent function by maintaining an internal state (a state-maintaining sketch follows below).
• The internal state can contain information about the state of the external environment.
• The state depends on the history of percepts and on the history of actions taken:
f : P* × A* → S, where S is the set of states.
• If each internal state includes all information relevant to decision making, the state space is Markovian.
• If each state includes the information about the percepts and actions that led to it, the state space has perfect recall.
• Perfect Information = Perfect Recall + Full Observability + Deterministic Actions.
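A minimal state-maintaining agent program in Python; the update_state and choose_action bodies are placeholder stubs, since the lecture specifies only the signatures f : P* → A and f : P* × A* → S.

    class StatefulAgent:
        """Implements the agent function via an internal state instead of
        storing the full percept history (a sketch; details are assumed)."""

        def __init__(self, initial_state):
            self.state = initial_state
            self.last_action = None

        def update_state(self, state, last_action, percept):
            # Fold the new percept (and last action) into the internal state.
            # If the resulting state summarizes everything relevant to
            # decision making, the state space is Markovian.
            return {**state, "last_percept": percept, "last_action": last_action}

        def choose_action(self, state):
            # Placeholder policy: act on the most recent percept only.
            return "act_on:" + str(state.get("last_percept"))

        def program(self, percept):
            self.state = self.update_state(self.state, self.last_action, percept)
            self.last_action = self.choose_action(self.state)
            return self.last_action

    agent = StatefulAgent({})
    print(agent.program("obstacle_ahead"))  # -> act_on:obstacle_ahead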
Learning agents
• Performance element is what was previously the whole agent
– Input: sensor
– Output: action
• Learning element
– Modifies the performance element.
• Performance element
• Critic: how the agent is doing
– Input: checkmate? (Fixed.)
• Problem generator
– Tries to solve the problem differently instead of optimizing.
– Suggests exploring new actions -> new problems.

Example: taxi driver (see the learning-loop sketch below)
• Performance element: how it currently drives
– The taxi driver makes a quick left turn across 3 lanes
• Critic observes the shocking language used by the passenger and other drivers and reports the bad action
• Learning element tries to modify the performance element for the future
• Problem generator suggests experimenting with something, e.g., braking on different road conditions
– Exploration vs. Exploitation
• Learning experience can be costly in the short run
– Shocking language from other drivers
– Less tip
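A schematic of the learning-agent loop in Python; the four components match the slide, but every rule, number, and method body here is a made-up stub for illustration.

    import random

    class TaxiLearningAgent:
        """Schematic learning agent; all values and rules are invented."""

        def __init__(self):
            self.policy = {"left_turn": "fast"}  # performance element's current rule
            self.explore_rate = 0.1              # exploration vs. exploitation

        def performance_element(self, percept):
            # What was previously the whole agent: percept in, action out.
            return self.policy["left_turn"]

        def critic(self, action):
            # Fixed performance standard: shocking language / less tip = bad.
            return -1.0 if action == "fast" else +1.0

        def learning_element(self, feedback):
            # Modify the performance element for the future.
            if feedback < 0:
                self.policy["left_turn"] = "slow"

        def problem_generator(self):
            # Occasionally suggest an experiment (e.g., brakes on wet roads).
            return "try_brakes" if random.random() < self.explore_rate else None

        def step(self, percept):
            action = self.problem_generator() or self.performance_element(percept)
            self.learning_element(self.critic(action))
            return action

    agent = TaxiLearningAgent()
    print([agent.step("intersection") for _ in range(3)])  # learns to turn slowly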
The Big Picture: AI for Model-Based Agents
(Diagram showing components: Planning, Decision Theory, Game Theory, Knowledge, Logic, Inference, Heuristics, Machine Learning, Statistics, Probability, Reinforcement Learning, Action.)

The Picture for Reflex-Based Agents
(Diagram showing components: Learning, Reinforcement Learning, Action.)
• Studied in AI, Cybernetics, Control Theory, Biology, Psychology.
Summary