AI Suiss 03 PracticalReasoningAndSearches

Academic year 2024 – 2025

Introduction to artificial intelligence and machine learning
-
Lecture 3
Logic agents and practical agents, and solving search problems
Logic agents
Logic agents: Knowledge base and representation

KB (knowledge base) = set of sentences/expressions which represent all the knowledge about the environment

New formulas can be ADDed/TOLD to the KB; answers can be obtained by QUERYing/ASKing the KB

Logic agent: the KB is defined with a formal language (logic)

The KB just provides answers to queries; it does not tell the agent which action to perform

In logic agents, logical inference (symbolic reasoning) is used to choose the best action given the KB (the action is a logical consequence of the KB)

A formal language is used to avoid the ambiguities of natural languages

3
Two issues for logic-based agents

There is a general feeling that these problems are nowhere near solved

Problems with logic agents led to the introduction of other agent types
4
Logic

Logic provides instruments for:

– logical inference: automatic operations through inference rules on logic symbols, to find the truth of a proposition starting from other propositions/sentences
– understanding the validity and consistency of a given theory

Two principal types of logic:
– propositional logic: sentences state the truth of facts
– predicate logic: sentences state the truth of facts, of objects' attributes, and of relations among them

Predicate logic is more expressive. It is possible to use variables and quantification, which is not possible in propositional logic (which states the truth only of individual propositions/facts, e.g. "The sun rises in the east")

5
Formal language

Formal language:
Set of sentences (aka words or formulas) built from a fixed set of letters or
symbols. Sentences are also called well formed formulas (wff)
the inventory from which the latter are taken is the alphabet on which
the language is defined.
language is defined without reference to any meanings of its
expressions, i.e. it exists before any interpretation is assigned

Example:
A formal language ℒ can be defined with the alphabet a={▲,▼}, and with
a word being in ℒ if it begins with ▲and is composed solely of the symbols
▲ and ▼.
A possible interpretation of ℒ could assign the decimal digit '1' to ▲ and '0'
to ▼. Then ▲▼▲ would mean 101 under this interpretation of ℒ.

6
Symbols for predicate logic

Predicate logic alphabet

• the set of all constants C;


• the set of all symbols for functions F;
• the set of all symbols for predicates P;
• the set of all variables V;
• logical connectives:
~ (negation),
∧ (AND, conjunction),
∨ (OR, disjunction),
← (implication),
↔ (equivalence),
parentheses "(" ")"
Quantifiers: existential (∃), universal (∀)

7
Symbols for predicate logic

Constants: single entities defined in the logic domain


– e.g. “maria”, “giovanna”, “3”

Variables: identify an unknown entity in the logic domain


– e.g. X, Y

Functions: map n objects in the domain to another object


– e.g. madre(maria), madre(X)
– In logic, functions are not evaluated; they just state a relation between objects

Predicates: a generic relation or attribute on the objects of the domain


– e.g. felice(maria)

8
Some additional nomenclature

Term:
– a variable, a constant, or, if f is a function symbol and X,Y,Z,… are terms, f(X,Y,Z,…)
– e.g. maria, f(X)

Atom or atomic formula:

– a predicate symbol applied to terms: p(X,Y,Z,…)
– e.g. felice(maria)

9
Logic grammar

Formal grammar:
a precise description of the well-formed formulas (wff) of a formal language, i.e. anything that describes the set of strings over the alphabet of the formal language which constitute wffs
it does not describe the semantics (i.e. the meaning) of the wffs

(E4) and (E5) are not wff


10
Logic calculus

A deductive apparatus (logic calculus) is composed of transformation rules (inference rules) and axioms. It allows an expression to be automatically derived from one or more other expressions
Some inference rules:
– Modus ponens: from α and α → β, infer β
– AND elimination: from α ∧ β, infer α
– AND introduction: from α and β, infer α ∧ β
– OR introduction: from α, infer α ∨ β
– Double negation: from ¬¬α, infer α
– Unitary resolution: from α ∨ β and ¬β, infer α

11
Formal proof

Formal proof or derivation:

a finite sequence of wffs, each of which is either an axiom or follows from the preceding ones by a rule of inference

E.g.
Theory T: axioms (can be interpreted as a theory of ≤)
(A1) p(0,0)
(A2) ∀X ∀Y (p(X,Y) ⇒ p(X,s(Y)))
(A3) ∀X p(X,X)

Proof of p(0,s(0)) (T ⊢ p(0,s(0))):

From A2, using specialization: p(0,0) ⇒ p(0,s(0))
Using Modus Ponens (MP): p(0,s(0))
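As a toy illustration of mechanising such derivations (not part of the lecture, and using plain strings in place of real terms), ground instances of modus ponens can be chained like this:

```python
def forward_chain(facts, rules):
    """Repeatedly apply ground modus ponens.
    rules is a list of (premise, conclusion) pairs; returns all derivable facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Mirrors the proof above: the specialised instance of A2 plus A1 yields p(0,s(0)).
facts = {"p(0,0)"}
rules = [("p(0,0)", "p(0,s(0))")]
assert "p(0,s(0))" in forward_chain(facts, rules)
```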

12
Interpretation, semantics, model

Interpretation
• assignment of meanings to symbols and truth values to predicates

Semantics
• the investigation of possible interpretations

Model
• for a wff s, I is a model for s, iff s is true in I
• A sentence true in all interpretations is valid

13
Interpretation examples

E.g.: two interpretations of a language L with a constant "0", a function "s()" and a predicate "p()".

Interpretation I1, domain: ℕ
– "0" represents 0
– "s()" represents the successor of a natural number
– "p()" represents the relation "≤"
Interpretation I2, domain: the negative integers
– "0" represents 0
– "s()" represents the predecessor of a number
– "p()" represents the relation "≤"

14
Formal derivation and Logic entailment

Syntactic consequence
• a formula ℬ is a syntactic consequence within the formal system ℱ of a
set of formulas Γ if there is a formal proof in ℱ of ℬ from the set Γ: Γ├ℱ ℬ
• syntactic consequence does not depend on any interpretation of the
formal system,
• i.e. ℬ can be formally derived from Γ
Semantic consequence
• a formula ℬ is a semantic consequence within the formal system ℱ of a
set of statements Γ : Γ╞ℱ ℬ ; if and only if there is no model in which all
members of Γ are true while ℬ is false. i.e. the set of interpretations that
make all members of Γ true is a subset of the set of interpretations that
make ℬ true
• i.e. ℬ is logically entailed by Γ

15
Soundness and completeness

It should always be: Γ ⊢ A ⟺ Γ ⊨ A

soundness: Γ ⊢ A ⟹ Γ ⊨ A
prevents us from proving sentences that aren't true when we interpret them
completeness: Γ ⊨ A ⟹ Γ ⊢ A
means that everything we know to be true under an interpretation, we must be able to prove.
[Diagram: semantics side (true propositions, models) vs. syntax side (axioms, inference rules, theorems), linked by soundness and completeness]

16
Deductive Reasoning agents

How can an agent decide what to do using theorem proving?

The idea is to use logic to encode a theory stating the best action to perform

Let:
ρ be this formal theory (typically a set of rules);
∆ be a logical database that describes the current state of the world;
Ac be the set of actions the agent can perform;
∆ ⊢ρ φ means that φ can be proved from ∆ using ρ.

17
Deductive Reasoning agents

How does this fit into the abstract description?

The perception function is as before:

The next state function updates the database ∆

What about the action function?

18
Action function
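The slide's pseudocode is not reproduced here; a minimal sketch of the standard action-selection loop for a deductive reasoning agent, assuming a hypothetical helper prove(delta, rho, phi) that returns True iff φ can be derived from ∆ using ρ, could look like this:

```python
def action(delta, rho, Ac, prove):
    """Deductive agent action selection (sketch, following the definitions above).

    delta: logical database describing the current world state
    rho:   the agent's deduction rules
    Ac:    list of available actions
    prove: hypothetical helper, prove(delta, rho, phi) -> bool
    """
    # First look for an action that is explicitly prescribed by the theory...
    for a in Ac:
        if prove(delta, rho, f"Do({a})"):
            return a
    # ...otherwise fall back to any action that is at least not forbidden.
    for a in Ac:
        if not prove(delta, rho, f"~Do({a})"):
            return a
    return None  # no acceptable action found
```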

19
An example: The Vacuum World

NB: with predicate logic it is much more convenient to represent the environment. In propositional logic we would have to add several propositions In0,0, In0,1, …

Still, in most real cases it may not be obvious how to achieve a full logical representation of the environment
20
The Vacuum World

21
An example: The Vacuum World
Rules ρ for determining what to do. We can define a predefined traversal scheme for moving around the Vacuum World

….. And so on (left as an exercise).

We also need a rule to suck up the dirt when it is present:

In(𝑥, 𝑦) ∧ Dirt(𝑥, 𝑦) → Do(suck)

We did not use any utility function to specify the agent's goal, but defined it in terms of logic sentences

Using these rules and starting at (0, 0), the robot will clean up the dirt
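As a toy illustration (not the lecture's code), the suck rule and a couple of hypothetical movement rules could be encoded over a set of ground facts like this:

```python
def choose_action(beliefs):
    """Pick an action for the vacuum agent from its current beliefs.

    beliefs: set of ground facts, e.g. {("In", 0, 0), ("Dirt", 0, 0)}.
    The rule order mirrors the slide: sucking dirt has priority over moving.
    """
    x, y = next((f[1], f[2]) for f in beliefs if f[0] == "In")
    if ("Dirt", x, y) in beliefs:
        return "suck"            # In(x,y) ∧ Dirt(x,y) → Do(suck)
    # Hypothetical traversal schema for a 3x3 grid: sweep up each column, then step right.
    if y < 2:
        return "forward"
    return "turn_right" if x < 2 else "noop"
```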

22
Agents that plan ahead
«Goal based agents»
Agent pro-active behaviour

An intelligent agent is a computer system capable of flexible autonomous action in some environment, where by flexible we mean:
§ Reactive
§ Pro-active
§ Social

Now we deal with the “pro-active” part, showing how we can design agents to
have goal-directed behaviour and plan ahead

Paolo Meridiani 24
Practical reasoning

Human practical reasoning consists mainly of two activities:

Deliberation:
§ deciding what states of affairs we want to achieve
§ the outputs of deliberation are intentions

Means-ends reasoning:
§ deciding how to achieve these states of affairs
§ the outputs of means-ends reasoning are plans

Intentions are a key part of this


§ The interplay between beliefs, desires and intentions defines how the model
works

Considering an agent as motivated by beliefs, desires and intentions is a useful abstraction for designing agents effectively, similar to the role of objects in object-oriented programming

Paolo Meridiani 25
Intentions are stronger than desires

Paolo Meridiani 26
Planning
Planning is the design of a course of action
that will achieve some desired goal

Basic idea is to give a planning system:


• (representation of) goal state/intention to
achieve;
• (representation of) actions it can perform;
• (representation of) the environment;
and have it autonomously generate a plan to achieve the goal.

This is automatic programming

A plan is then the sequence of actions

that brings the environment from the initial


state e0 to the goal state eg
e0 →(α0) e1 →(α1) e2 →(α2) e3 → ⋯ →(αn) eg
Paolo Meridiani 27
A simple planning agent

The same see function we have introduced before, producing a new percept

The next function is updating the world model = agent's beliefs

Deliberate: define an intention/goal based on the agent's beliefs

Planning: construct a plan of actions based on the current world state and the intention

Execute the actions contained in the plan

This simplest implementation can be optimal in a deterministic static environment (i.e. no replanning needed)
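A sketch of this control loop in Python (see, next_state, deliberate and plan are assumed placeholders matching the boxes on the slide, and env is an assumed environment object with an execute method; this is not the lecture's code):

```python
def planning_agent_loop(env, see, next_state, deliberate, plan):
    """Sense, update beliefs, deliberate, plan, then execute (sketch)."""
    beliefs = {}
    while True:
        percept = see(env)                      # observe the environment
        beliefs = next_state(beliefs, percept)  # update the world model / beliefs
        intention = deliberate(beliefs)         # choose a goal / intention
        for action in plan(beliefs, intention): # means-ends reasoning, then execution
            env.execute(action)
```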

Paolo Meridiani 28
Means-ends reasoning:
planning and search
problems

Paolo Meridiani
Search problems

Remember:
A search problem consists of:
§ A state space
§ A successor function (with
actions and potential costs)
§ A start and a goal state

Solution: a sequence of
actions (a plan) which
transforms the start state to a
goal state
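A minimal sketch of this interface in Python (the class and method names are illustrative, not from the lecture):

```python
from abc import ABC, abstractmethod

class SearchProblem(ABC):
    """A search problem: state space, successor function, start state, goal test."""

    @abstractmethod
    def start_state(self):
        """Return the start state."""

    @abstractmethod
    def is_goal(self, state) -> bool:
        """Return True if state satisfies the goal test."""

    @abstractmethod
    def successors(self, state):
        """Yield (action, next_state, step_cost) triples."""
```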

Paolo Meridiani 30
Example: travelling in Romania

On holiday in Romania; currently in Arad. The flight leaves tomorrow from


Bucharest

State space:
Cities in Romania

Successor function:
Roads: Go to adjacent city
with cost = distance

Start state:
Arad

Goal test:
Is state == Bucharest?

Paolo Meridiani 31
Example: vacuum world as a search problem
goal state: ∀x ∀y ¬Dirt(x, y)
start state

Paolo Meridiani 32
Vacuum world: how many world states?

Vacuum world state space


dimension:
§ Agent locations: 3 × 3 = 9
§ Agent facing: N, S, E, W = 4
§ Dirt configurations: 2 possibilities at each location = 2^9

How many states in total?

§ 9 × 4 × 2^9 = 18432

Search problems can quickly become very big

Paolo Meridiani 33
How to represent search problems: state space graph

State space graph: a mathematical


representation of a search problem

§ Nodes are representations of environment states
§ Arcs represent successors (action
results moving between states)
§ The goal test is a set of goal
nodes (maybe only one)
§ In a state space graph, each
state occurs only once!

We can rarely build this full graph in


memory (it’s too big), but it’s a useful
idea

Paolo Meridiani 34
Search tree

A search tree:
§ A “what if” tree of plans and their
outcomes
§ The start state is the root node
§ Children nodes correspond to
successors
§ Nodes show states, but correspond
to PLANS that achieve those states

For most problems, we can never


actually build the whole tree

Paolo Meridiani 35
Search problem general algorithm for solution

General search problem algorithm

§ Expand out potential plans (tree


nodes)

§ Maintain a fringe of partial plans


under consideration

§ Try to expand as few tree nodes


as possible. Search problems are
exponentially complex in search
depth

Main question: which fringe nodes to explore?
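A generic version of this loop, with the fringe data structure left as a parameter (an illustrative sketch, not the lecture's code; items on the fringe are partial plans):

```python
def generic_search(problem, make_fringe):
    """Generic tree search: the fringe policy determines the search strategy.

    make_fringe() must return an object with push(item), pop() and empty();
    a stack gives DFS, a FIFO queue gives BFS, a priority queue gives UCS/A*.
    """
    fringe = make_fringe()
    fringe.push((problem.start_state(), [], 0))   # (state, plan so far, path cost)
    while not fringe.empty():
        state, plan, cost = fringe.pop()
        if problem.is_goal(state):
            return plan
        for action, nxt, step in problem.successors(state):
            fringe.push((nxt, plan + [action], cost + step))
    return None  # no solution found
```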

Paolo Meridiani 36
A search example

Paolo Meridiani 37
Depth-first search (DFS)
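With the generic loop sketched earlier, DFS corresponds to a LIFO fringe (a stack); a minimal illustrative implementation:

```python
class StackFringe:
    """LIFO fringe: always expand the deepest node first (depth-first search)."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()   # last in, first out
    def empty(self):
        return not self._items

# plan = generic_search(problem, StackFringe)
```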

Paolo Meridiani 38
Search algorithms properties

Complete: Guaranteed to find a


solution if one exists?
Optimal: Guaranteed to find the
least cost path?
Time complexity?
Space (memory) complexity?

Simple cartoon of a search tree:


§ b is the branching factor
§ m is the maximum depth
§ solutions at various depths
§ Number of nodes in entire tree? 1 + b + b^2 + .... + b^m = O(b^m)

NB: increase of complexity is exponential in the depth, but is represented as linear in the cartoon

Paolo Meridiani 39
DFS properties

What nodes does DFS expand?


§ To find a solution it may need to process
the whole tree!
§ If m is finite (loops to be prevented), takes time O(b^m)

How much space does the fringe take?


§ Only siblings on path to root, so O(b × m)

Is it complete?
§ If m is finite yes

Is it optimal?
§ No, it finds the “leftmost” solution,
regardless of depth or cost

Paolo Meridiani 40
Breadth-First Search (BFS)
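Analogously to DFS, BFS corresponds to a FIFO fringe (again an illustrative sketch):

```python
from collections import deque

class QueueFringe:
    """FIFO fringe: always expand the shallowest node first (breadth-first search)."""
    def __init__(self):
        self._items = deque()
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.popleft()   # first in, first out
    def empty(self):
        return not self._items
```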

Paolo Meridiani 41
BFS properties

What nodes does BFS expand?


§ Processes all nodes above
shallowest solution.
§ Let depth of shallowest solution be s
§ Search takes time O(b^s)

How much space does the fringe


take?
§ Has roughly the last tier, so O(b^s)

Is it complete?
§ s must be finite if a solution exists, so
yes

Is it optimal?
§ Only if costs are all 1 (more on costs
later)

Paolo Meridiani 42
DFS vs BFS

Remember: the main difference between search strategies is which nodes on the fringe you decide to expand first

For some applications DFS is preferred, for others BFS

Paolo Meridiani 43
Cost-sensitive searches

BFS finds the shortest path in terms of number of actions. It does not find
the least-cost path.

We will now discuss a similar algorithm which does find the least-cost path

Paolo Meridiani 44
Uniform Cost Search (UCS)
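UCS corresponds to a fringe ordered by the cumulative path cost g(n); a minimal sketch using Python's heapq (illustrative, not the lecture's code):

```python
import heapq
import itertools

class PriorityFringe:
    """Fringe ordered by a key; with path cost as the key it yields uniform cost search."""
    def __init__(self, key=lambda item: item[2]):   # item = (state, plan, cost)
        self._heap = []
        self._counter = itertools.count()           # tie-breaker for equal keys
        self._key = key
    def push(self, item):
        heapq.heappush(self._heap, (self._key(item), next(self._counter), item))
    def pop(self):
        return heapq.heappop(self._heap)[-1]
    def empty(self):
        return not self._heap

# plan = generic_search(problem, PriorityFringe)
```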

Paolo Meridiani 45
UCS properties

What nodes does UCS expand?


§ Processes all nodes with cost less than cheapest
solution!
§ If that solution costs C* and arcs cost at least ε,
then the “effective depth” is roughly C*/ ε
§ Takes time O(b^(C*/ε)) (exponential in effective depth)

How much space does the fringe take?


§ Roughly as the last tier, so O(b^(C*/ε))

Is it complete?
§ Assuming best solution has a finite cost yes!

Is it optimal?
§ Yes
The bad:
Explores options in every "direction".
We can see the UCS as a slow/diligent turtle that will always bring you to the optimal solution

Paolo Meridiani 46
Can we make the search faster? Informed searches

To pick which nodes to expand first, we would need to know how close we are to the goal

A heuristic function h is:


§ A function that estimates how close a state is to a goal

A good heuristic function needs to be:


§ A reasonable estimate of the expected cost to reach the goal
§ Fast to compute; otherwise it may just be faster to expand more nodes in the search tree
§ Typically, it has to be designed for each particular search problem

It is not always easy to find a good heuristic. A good heuristic can drastically change the number of nodes on the fringe which need to be expanded

Paolo Meridiani 47
Heuristics in path problems

Euclidean distance Manhattan distance

Paolo Meridiani 48
Another example of heuristics: the 8-puzzle

Possible heuristics:
§ total number of
misplaced tiles
§ better: total
Manhattan distance
(minimum number of
moves to the goal)

We can view a good heuristic as the cost needed to reach the goal in a "relaxed problem", i.e. in this case as if we could move the tiles independently and place them wherever we want
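For the 8-puzzle, the Manhattan-distance heuristic takes only a few lines (an illustrative sketch; states are assumed to be dicts mapping each tile to its (row, col) position):

```python
def manhattan_h(state, goal):
    """Sum over tiles of |row difference| + |column difference| to the goal position."""
    total = 0
    for tile, (r, c) in state.items():
        if tile == 0:          # skip the blank
            continue
        gr, gc = goal[tile]
        total += abs(r - gr) + abs(c - gc)
    return total
```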
Paolo Meridiani 49
Greedy Search

Expand the node that seems closest...

What can go wrong?

Paolo Meridiani 50
Greedy search

Strategy: expand the node that the heuristic thinks is closest to a goal state

Normally
§ Greedy takes you quickly to (some) goal
§ However greedy search is not guaranteed
to be optimal (unless you have a perfect
heuristic)

Worst-case
§ like a badly-guided DFS for poor heuristics

Paolo Meridiani 51
A* search: combining UCS and greedy

UCS orders by path cost, or backward cost g(n)
Greedy orders by goal proximity, or forward cost h(n)

A* Search orders by the sum: f(n) = g(n) + h(n)
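Re-using the priority fringe sketched for UCS, A* only changes the ordering key to f(n) = g(n) + h(n) (illustrative; heuristic is an assumed function of the state):

```python
def astar_fringe(heuristic):
    """Priority fringe ordered by f(n) = g(n) + h(n)."""
    return PriorityFringe(key=lambda item: item[2] + heuristic(item[0]))

# plan = generic_search(problem, lambda: astar_fringe(h))
```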

Paolo Meridiani 52
A* optimal?

Expand the node that seems closest...

What went wrong?


We need heuristic estimates to be ≤ the actual costs: admissible heuristics

Paolo Meridiani 53
Admissible heuristics

A heuristic h is admissible (optimistic) if for every node n: h(n) ≤ h*(n)

where h* is the true cost to a nearest goal

Coming up with an admissible heuristics is most of what’s involved in using A*

It can be demonstrated that A* produces an optimal solution with an admissible heuristic for a finite search tree

Paolo Meridiani 54
UCS vs A*

Paolo Meridiani 55
More complex planning
We have only looked at search problems in deterministic and accessible
environments. An optimal search algorithm in this context produces an optimal
plan (assuming infinite deliberation/planning time)

However, planning in complex problems may take a long time:

§ E.g. if deliberation/planning time is not unlimited ➝ partial plans

More complex environments:


§ Multiple-state problems (deterministic and inaccessible env) and contingency problems (non-deterministic and inaccessible env)
§ The agent needs to deal with multiple search trees and take probabilities into account to develop a good plan; there is no guarantee of reaching the goal state
§ Exploration problems (unknown env)
§ The agent needs to experiment and take chances (reinforcement learning: reward when getting "close" to the goal); these can also be handled by reactive or hybrid agents

Standard search algorithms in the state space cannot cope with these conditions. More complex planners are needed, e.g. ones that create sub-plans/intermediate goals

Paolo Meridiani 56
Planning by looking at the plan space

Search in the plan space rather than in the state space:
STRIPS planner
The Stanford Research Institute Problem Solver (1971). Used by Shakey

Goal: [RightShoeOn, LeftShoeOn]

STRIPS: each action in the world is represented by a precondition and an effect
Op(ACTION: RightShoe, PRECOND: RightSockOn, EFFECT:
RightShoeOn)
Op(ACTION: RightSock, EFFECT: RightSockOn)
Op(ACTION: LeftShoe, PRECOND: LeftSockOn, EFFECT:
LeftShoeOn)
Op(ACTION: LeftSock, EFFECT: LeftSockOn)

Goals can be split into sub-goals, and plans can be created by looking at the preconditions of actions which contain the goals among their effects (a sketch of this idea follows below)
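A light-weight encoding of these operators, together with a naive goal-regression step, might look like this (purely an illustrative sketch under simplifying assumptions, not the actual STRIPS implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    action: str
    precond: frozenset = frozenset()
    effect: frozenset = frozenset()

OPS = [
    Op("RightShoe", frozenset({"RightSockOn"}), frozenset({"RightShoeOn"})),
    Op("RightSock", effect=frozenset({"RightSockOn"})),
    Op("LeftShoe",  frozenset({"LeftSockOn"}),  frozenset({"LeftShoeOn"})),
    Op("LeftSock",  effect=frozenset({"LeftSockOn"})),
]

def plan_backwards(goals, ops, plan=()):
    """Naive regression planner: pick an operator whose effect achieves a goal,
    replace that goal with the operator's preconditions, repeat until none remain."""
    goals = set(goals)
    if not goals:
        return list(plan)
    for op in ops:
        achieved = goals & op.effect
        if achieved:
            remaining = (goals - achieved) | set(op.precond)
            result = plan_backwards(remaining, ops, (op.action,) + plan)
            if result is not None:
                return result
    return None

# plan_backwards({"RightShoeOn", "LeftShoeOn"}, OPS)
#   -> ['LeftSock', 'LeftShoe', 'RightSock', 'RightShoe']  (one valid ordering)
```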

Paolo Meridiani 57
Agent commitment

An agent is committed to
§ ends
the state of affairs it wishes to bring about
§ means
the mechanism via which the agent wishes to achieve the state of affairs

Degrees of commitment:
§ Blind commitment
Agent that will continue to maintain an intention no matter what. Blind
commitment is also referred to as fanatical commitment
§ Single-minded commitment
Agent that will continue to maintain an intention until it believes it is achieved or
it is believed to be impossible
§ Open-minded commitment
An agent that will reconsider its intentions after each action

Reconsidering intentions and replanning accordingly may be needed in complex environments, but it can be costly, so a proper balance needs to be found

Paolo Meridiani 58
Agent’s commitment: an experiment

Kinny and Georgeff [https://www.ijcai.org/Proceedings/91-1/Papers/014.pdf] experimentally investigated the effectiveness of intention reconsideration strategies in the TileWorld

Two different types of reconsideration strategy were used:


• bold agents: never pause to reconsider intentions
• cautious agents: stop to reconsider/replan after every
action

Dynamism in the environment is represented by the rate of


world change, γ
§ Higher γ implies a higher frequency of holes appearing/disappearing

Paolo Meridiani 59
Agent’s commitment: experiment results

If γ is low (i.e., the environment does not change


quickly), then bold agents do well compared to
cautious ones
§ This is because cautious agents are wasting
time reconsidering their commitments while
bold agents are busy executing an optimal
plan

If γ is high (i.e., the environment changes


frequently), then cautious agents can outperform
bold agents
§ This is because they can recognise when intentions are doomed, and take advantage of new opportunities
of new opportunities
§ However, if planning costs (time needed to get
a new plan) are high the advantage can be
eroded

Paolo Meridiani 60
