AI Chapter 1,2,3
Structure of Intelligent Agents
• Agent = architecture + program
• Agent program: the implementation of f : P* → A, the agent’s perception-action mapping
• Architecture: a device that can execute the agent program (e.g., general-purpose computer, specialized device, beobot, etc.)

function Skeleton-Agent(Percept) returns Action
  memory ← UpdateMemory(memory, Percept)
  Action ← ChooseBestAction(memory)
  memory ← UpdateMemory(memory, Action)
  return Action

Using a look-up table to encode f : P* → A

Agent types
• Reflex agents
• Reflex agents w/ state
• Goal-based agents
• Utility-based agents

Reactive agents
• Reactive agents do not have internal symbolic models.
• Act by stimulus-response to the current state of the environment.
• Each reactive agent is simple and interacts with others in a basic way.
• Complex patterns of behavior emerge from their interaction.
• Benefits: robustness, fast response time
• Challenges: scalability, how intelligent are they, and how do you debug them?
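The Skeleton-Agent program from the Structure of Intelligent Agents slide can be sketched in Python. The slides leave UpdateMemory and ChooseBestAction abstract, so `update_memory` and `choose_best_action` below are hypothetical stand-ins:

```python
# Minimal sketch of the Skeleton-Agent loop.
# update_memory and choose_best_action are hypothetical stand-ins:
# a real agent would replace them with domain-specific code.

def update_memory(memory, event):
    """Record a percept or action in the agent's memory."""
    return memory + [event]

def choose_best_action(memory):
    """Pick an action based on the most recent percept."""
    latest_percept = memory[-1]
    return ("act-on", latest_percept)

def skeleton_agent(memory, percept):
    """One step of the agent's perception-action mapping f : P* -> A."""
    memory = update_memory(memory, percept)
    action = choose_best_action(memory)
    memory = update_memory(memory, action)
    return memory, action

memory = []
memory, action = skeleton_agent(memory, "dirt-detected")
# action is ("act-on", "dirt-detected"); memory records percept then action
```

Keeping `memory` explicit (passed in and returned) mirrors the slide’s pseudocode, where memory is updated both after perceiving and after acting.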
Information agents
• Manage the explosive growth of information.
• Manipulate or collate information from many distributed sources.
• Information agents can be mobile or static.
• Examples:
  • BargainFinder comparison shops among Internet stores for CDs
  • FIDO the Shopping Doggie (out of service)
  • Internet Softbot infers which internet facilities (finger, ftp, gopher) to use and when from high-level search requests.
• Challenge: ontologies for annotating Web pages (e.g., SHOE).
Mobile agents
• Programs that can migrate from one machine to another.
• Execute in a platform-independent execution environment.
• Require an agent execution environment (places).
• Mobility is neither a necessary nor a sufficient condition for agenthood.
• Practical but non-functional advantages:
  • Reduced communication cost (e.g., from a PDA)
  • Asynchronous computing (when you are not connected)
• Two types:
  • One-hop mobile agents (migrate to one other place)
  • Multi-hop mobile agents (roam the network from place to place)
• Applications:
  • Distributed information retrieval.
  • Telecommunication network routing.

Summary
• Intelligent Agents:
  • Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors to maximize progress towards its goals.
  • PAGE (Percepts, Actions, Goals, Environment)
  • Described as a perception (sequence) to action mapping: f : P* → A
  • Implemented using a look-up table, closed form, etc.
• Agent Types: Reflex, state-based, goal-based, utility-based
• Rational Action: the action that maximizes the expected value of the performance measure given the percept sequence to date
Search strategies
A search strategy is defined by picking the order of node expansion.
Strategies are evaluated along the following dimensions:
  completeness: does it always find a solution if one exists?
  time complexity: number of nodes generated
  space complexity: maximum number of nodes in memory
  optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
  b: maximum branching factor of the search tree
  d: depth of the least-cost solution
  m: maximum depth of the state space (may be ∞)

Uninformed search strategies
Uninformed search strategies use only the information available in the problem definition:
  Breadth-first search
  Uniform-cost search
  Depth-first search
  Depth-limited search
  Iterative deepening search

Time complexity of breadth-first search
• If a goal node is found at depth d of the tree, all nodes up to that depth are created.
• Thus: O(b^d)

Uniform-cost search
Expand the least-cost unexpanded node.
Implementation: fringe = queue ordered by path cost
Equivalent to breadth-first search if all step costs are equal.
Complete? Yes, if step cost ≥ ε
Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
Optimal? Yes – nodes are expanded in increasing order of g(n)

Depth-first search
Expand the deepest unexpanded node.
Implementation: fringe = LIFO queue, i.e., put successors at the front

Depth-limited search
Depth-first search with depth limit l, i.e., nodes at depth l have no successors.
Recursive implementation:
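The recursive implementation of depth-limited search named in the slides might look like the following sketch; the example graph and path-returning convention are made up for illustration:

```python
def depth_limited_search(graph, node, goal, limit):
    """Recursive depth-limited search: nodes at depth `limit` are
    treated as having no successors. Returns a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                      # cutoff: no successors at this depth
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

# Made-up example graph: with limit 1 the goal G (at depth 2) is cut off,
# while limit 2 finds the path S -> A -> G.
graph = {"S": ["A", "B"], "A": ["G"], "B": []}
```

Because only the current recursion path is held on the stack, space usage is linear in the depth limit, which is what makes this a useful building block for iterative deepening.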
Repeated states
Failure to detect repeated states can turn a linear problem
into an exponential one!
Graph search
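One way to sketch graph search is uniform-cost search with an explored set: the fringe is a priority queue ordered by path cost g(n), and the explored set guarantees each state is expanded at most once, avoiding the repeated-state blowup. The example graph is made up:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Graph-search version of uniform-cost search.
    graph: dict mapping node -> list of (neighbor, step_cost) pairs."""
    fringe = [(0, start, [start])]          # priority queue ordered by g(n)
    explored = set()
    while fringe:
        cost, node, path = heapq.heappop(fringe)
        if node == goal:
            return cost, path               # expanded in increasing order of g
        if node in explored:
            continue                        # repeated state: skip, don't re-expand
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(fringe, (cost + step, neighbor, path + [neighbor]))
    return None

# Made-up example: the cheapest A -> D route is A -> B -> C -> D (cost 4),
# even though a direct A -> D edge of cost 10 exists.
graph = {
    "A": [("B", 1), ("D", 10)],
    "B": [("C", 2)],
    "C": [("D", 1)],
}
```

Without the explored set this would be tree search, and on graphs with many paths to the same state the number of generated nodes can grow exponentially.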
Summary
Problem formulation usually requires abstracting away real-
world details to define a state space that can feasibly be
explored
Variety of uninformed search strategies
Iterative deepening search uses only linear space and not much
more time than other uninformed algorithms
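The iterative-deepening claim above can be sketched as repeated depth-limited search with growing limits: only one depth-first path is in memory at a time, so space is linear in depth, yet the shallowest solution is still found. The helper name `dls` and the example graph are made up:

```python
def dls(graph, node, goal, limit):
    """Depth-limited DFS used as the inner loop of iterative deepening."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for successor in graph.get(node, []):
        result = dls(graph, successor, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Try limits 0, 1, 2, ...: re-generates shallow nodes each round,
    but that costs little extra time since most nodes of a b-ary tree
    sit at the deepest level."""
    for limit in range(max_depth + 1):
        result = dls(graph, start, goal, limit)
        if result is not None:
            return result
    return None

# Made-up example: S -> B -> G (depth 2) is found before the deeper
# S -> A -> C -> G (depth 3), i.e., the shallowest solution wins.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"]}
```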