Objective Degrees of Dependence in
Social Dependence Relations
Antônio Carlos da Rocha Costa¹,² and Graçaliz Pereira Dimuro¹
¹ Escola de Informática – Universidade Católica de Pelotas, Pelotas, RS, Brazil
² PPGC – Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
{rocha,liz}@atlas.ucpel.tche.br
Abstract. This paper presents a way to quantify dependence relations between
agents of a multiagent system in order to introduce a measure for the degree
of dependence established between them. The quantification of the dependence
relations is performed on a specially defined form of reduced dependence graphs,
called dependence situation graphs. The paper shows that the resulting notion of
objective degree of dependence is intuitively acceptable. Given that such degrees
of dependence have an objective nature, a way is presented to allow for their
refinement into subjective degrees of dependence, through the consideration of
subjective aspects of the dependence relationships. The paper also shows how
degrees of dependence allow for a measure of the dependence that a society as a
whole has on each agent that participates in it and, correlatively, a measure of the
statuses and negotiation powers of the agents of such society.
1 Introduction
The problem of measuring the dependence relations that arise between agents when
they operate in a social context has been put forward as an important problem since at
least [1], where a quantitative notion of strength of a dependence relation is proposed.
The Conclusion of [7], for instance, indicated several features on which the quantification of the dependence relations could be based, such as the importance of a goal
to an agent, the number of actions/resources needed to execute a plan, or the number of
agents which are able to perform a needed action or to control a needed resource. In [4],
dependence relations were given a quantitative evaluation on the basis of subjective notions, namely, the relative importance of goals to the agents and the cost of performing
the necessary actions.
We show here that the problem can be solved by appropriately quantifying the dependence situations that arise from those relations. The paper introduces a procedure for
an objective quantification of dependence situations. The procedure computes degrees
of dependence between agents on the basis of a specially derived form of dependence
graphs – the DS-graphs (dependence situation graphs) – so that the degree of dependence of each agent on the agents that can help it achieve its goals can be determined in a straightforward way. The paper presents the procedure and examines some of its
features.
Following one of the suggestions in [7], the procedure takes into account essentially
the number of agents that are able to perform each needed action, but it also takes into
account the kind of dependence (AND-dependence, OR-dependence) that the structure
of the dependence situation establishes between the involved agents. Thus the need for
the DS-graphs, where those kinds of dependences are explicitly indicated.
The resulting degrees of dependence are said to be objective because they take into
account only information about the structure of the dependence situation, through the
DS-graph, and do not involve subjective notions (e.g., the importance of goals).
Objective degrees of dependence may be refined in many ways, according to the
needs of the application where they are to be used, by weighting them with features that
are relevant for the application. For instance, objective degrees of dependence may be
refined by the features suggested in [7], such as the importance of a goal to an agent or
the cost of the necessary resources, or by the number of resources needed to achieve the
goal, or else by the probability that each agent has of actually performing an action when the action is necessary.
Also, by summing up the objective degrees of dependence that the agents of a society have on each other, it is possible to define a measure of the dependence of the
society, as a whole, on each of its agents. Correlatively, it is possible to define a measure of an agent’s status and negotiation power [2] within the society.
Furthermore, objective degrees of dependence may be used to refine the social
reasoning mechanisms that solve the problem of choosing partners for the formation of
coalitions, such as the one introduced in [7, 8].
The paper is structured as follows. Section 2 summarizes the relevant ideas concerning social dependence relations and dependence situations. Section 3 reviews
dependence-graphs and introduces the DS-graphs. Section 4 introduces a formal notation for DS-graphs. Section 5 defines the notion of objective degree of dependence
and shows how they can be calculated on simple DS-graphs. Section 6 introduces additional concepts: objective degrees of dependence for DS-graphs containing transitive
dependences and bilateral dependences; objective degrees of dependence of a society
on each of its agents; a measure of an agent’s negotiation power within a society; and
a way to refine objective degrees of dependence with subjective estimates. Section 7
brings the Conclusion and future work.
2 Dependence relations and dependence situations
Social dependence relations are pointed out in [1] as one of the main objective reasons
for the establishment of interactions between agents. Social dependence relations can
be defined by:
Definition 1. An agent α is said to socially depend on an agent β, with respect to an
action a, for the purpose of achieving a goal g, denoted (DEP α β a g), if and only if:
1. g is a goal of α;
2. α cannot do a by itself;
3. β can do a by itself;
4. a being done by β implies g being (eventually) achieved.
The definition characterizes social dependence relations as an objective feature of an
agent’s behavior, in the sense that it does not depend on the agent having it represented
in his mental states (beliefs, plans, etc.).
Regarding the direction of the dependence, dependence relations between two
agents can be classified either as unilateral or as bilateral:
unilateral: ∃a, g. (DEP α β a g) ∧ ∀a′, g′. ¬(DEP β α a′ g′)
α depends on β with respect to some action a and some goal g, but there is no action and no goal with respect to which β depends on α.
bilateral: ∃a, g. (DEP α β a g) ∧ ∃a′, g′. (DEP β α a′ g′)
α depends on β with respect to some action a and some goal g, and β depends on α with respect to some action a′ and some goal g′.
Regarding the goals that set the stage for the dependence, bilateral dependence relations can be classified either as mutual or as reciprocal³:
mutual: ∃a, a′, g. (DEP α β a g) ∧ (DEP β α a′ g) ∧ a ≠ a′
α depends on β, and β depends on α, with respect to the same common goal g.
reciprocal: ∃a, a′, g, g′. (DEP α β a g) ∧ (DEP β α a′ g′) ∧ a ≠ a′ ∧ g ≠ g′
α depends on β, and β depends on α, with respect to different private goals.
Regarding the number of agents involved in a unilateral dependence, and the way
their actions are combined to help achieve an agent’s goal, social dependence relations
can be classified either as OR-dependence or as AND-dependence, in many ways [8].
For instance:
OR-dependence, multiple partners, single goal, single action needed:
(DEP α β1 a1 g) ∨ (DEP α β2 a2 g) ∨ . . . ∨ (DEP α βn an g)
there are several alternative agents βi , each being able to perform an action ai that
may lead an agent α to achieve the goal g
AND-dependence, multiple partners, single goal, multiple actions needed:
(DEP α β1 a1 g) ∧ (DEP α β2 a2 g) ∧ . . . ∧ (DEP α βn an g)
there are multiple partners βi , each having to perform a different action ai to jointly
lead agent α to achieve the goal g
As shown in the work on the DEPNET simulator [8], however, for the purpose
of quantifying dependence relations it is not necessary to take actions and plans into
account: it is enough to know that agent α is dependent on agent β to achieve goal g.
In case there are two or more agents that are able to help α to achieve g, it is further
necessary to know just the general kind of dependence (either an AND-dependence or
an OR-dependence) that arises between them and α.
Such a simplified picture of a dependence relation, where only agents and goals are
considered, along with the types of relations connecting them, is called a dependence
situation [8].
³ In [1], a distinction is made between cooperation (social behavior induced by a relation of mutual dependence) and social exchange (social behavior induced by a relation of reciprocal dependence). We do not make such a distinction, and preferably use the term social exchange to denote both kinds of social behaviors.
Thus, the quantification procedure of dependence relations introduced below operates only on the information contained in such dependence situations, which motivates
the definition of the DS-graphs, in the next section.
3 DS-graphs
Dependence graphs were introduced in [9] as a generalization of dependence networks [8], for picturing the various dependence relations that may exist within a multiagent system.
They are structures of the form DG = (Ag, Gl, Pl, Ac, Ar, Ψ), where agents Ag, goals Gl, plans Pl and actions Ac are taken as nodes and are linked with each other by the arcs Ar, as specified by the function Ψ. The resulting structure of dependence relations shows how agents depend on other agents to achieve goals through plans involving actions performed by those other agents.
Since dependence graphs usually have quite complex structures, [9] also introduced the so-called reduced dependence graphs, where nodes representing plans are abstracted away and goals are used not as nodes, but as labels of arcs.
The procedure for the quantification of dependence relations that we introduce below requires only the information contained in the so-called dependence situations, which amounts to the immediate information content of the dependence relations expressed by the elements of the dependence graph, together with the types of dependences intervening between the agents (AND-dependences, OR-dependences).
This information about type is only indirectly represented in dependence graphs,
through the way actions and goals are related to plans.
On the other hand, as mentioned before, the procedure abstracts away information
about which plans (and actions) are involved, thus calculating degrees of dependence
that are relative to an implicitly understood (e.g., currently used) set of plans.
To structure such minimal information contained in dependence situations, we define the notion of a DS-graph (dependence situation graph):
Definition 2. Let Ag be a set of agents and Gl be the set of goals that those agents may have. A DS-graph over Ag and Gl is a structure DS = (Ag, Gl, Ar, Lk, Ψ, ∆) such that:
1. Ar is a set of arcs, connecting either an agent to a goal or a goal to an agent;
2. Lk is a set of links, connecting subsets of arcs;
3. Ψ : Ar → (Ag × Gl) ∪ (Gl × Ag) is a function assigning either an agent and a goal or a goal and an agent to each arc, so that if Ψ(ar) = (ag, g) then arc ar indicates that agent ag has the goal g, and if Ψ(ar) = (g, ag) then arc ar indicates that goal g requires some action by agent ag in order to be achieved;
4. ∆ : Lk → ℘(Ar) is a function assigning to each link a set of arcs, representing an AND-dependence between such arcs, so that ∆(l) = {ar1, ..., arn} iff either:
(a) there are an agent ag and n goals g1, ..., gn such that Ψ(ar1) = (ag, g1), ..., Ψ(arn) = (ag, gn), indicating that ag aims at the achievement of all the goals g1, ..., gn; or,
(b) there are a goal g and n agents ag1, ..., agn such that Ψ(ar1) = (g, ag1), ..., Ψ(arn) = (g, agn), indicating that g requires the involvement of all the agents in the set {ag1, ..., agn} in order to be achieved.
Given a DS-graph:
1. if there are a set of agents {ag0, ag1, ..., agn}, a set of arcs {ar0, ar1, ..., arn}, a goal g, and a link l, and if it happens that Ψ(ar0) = (ag0, g), Ψ(ari) = (g, agi) (for 1 ≤ i ≤ n), and ∆(l) = {ar1, ..., arn}, then we say that agent ag0 is AND-dependent on agents ag1, ..., agn with respect to goal g;
2. if there are a set of agents {ag0, ag1, ..., agn}, a set of arcs {ar1, ..., arn, ar′1, ..., ar′n}, a set of goals g1, ..., gn, and a link l, and if it happens that Ψ(ar1) = (ag0, g1), ..., Ψ(arn) = (ag0, gn), Ψ(ar′i) = (gi, agi) (for 1 ≤ i ≤ n), and ∆(l) = {ar1, ..., arn}, then we say that agent ag0 is AND-dependent on agents ag1, ..., agn with respect to the goals g1, ..., gn;
3. if there are a set of agents {ag0, ag1, ..., agn}, a set of arcs {ar0, ar1, ..., arn}, and a goal g, and if it happens that Ψ(ar0) = (ag0, g) and Ψ(ari) = (g, agi) (for 1 ≤ i ≤ n), but there is no link l such that {ar1, ..., arn} ⊆ ∆(l), then we say that agent ag0 is OR-dependent on agents ag1, ..., agn with respect to goal g;
4. if there are a set of agents {ag0, ag1, ..., agn}, a set of arcs {ar1, ..., arn, ar′1, ..., ar′n}, and a set of goals g1, ..., gn, and if it happens that Ψ(ar1) = (ag0, g1), ..., Ψ(arn) = (ag0, gn) and Ψ(ar′i) = (gi, agi) (for 1 ≤ i ≤ n), but there is no link l such that {ar1, ..., arn} ⊆ ∆(l), then we say that agent ag0 is OR-dependent on agents ag1, ..., agn with respect to the goals g1, ..., gn.
Fig. 1. Sample AND-dependence (1, 2) and OR-dependence (3, 4) relations for DS-graphs. [Figure omitted.]
Graphically, we use the convention that AND-dependence is represented by a curved link tying together the arcs involved in such a dependence, while OR-dependence is represented by the absence of any such link. Figure 1 illustrates both AND-dependence (of agent A1 on agents B1, B2, B3 with respect to goal g1, and of agent A2 on agents B4, B5, B6 with respect to goals g2, g3, g4) and OR-dependence (of agent A3 on agents B7, B8, B9 with respect to goal g5, and of agent A4 on agents B10, B11, B12 with respect to goals g6, g7, g8).
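To make Definition 2 concrete, the following is a minimal Python sketch of a DS-graph and of the classification of dependences illustrated in Figure 1 (cases 1 and 3 above). The dictionary-based representation and the names DSGraph and kind_of_dependence are illustrative assumptions, not part of the paper's formal notation.

from dataclasses import dataclass

@dataclass
class DSGraph:
    agents: set   # Ag
    goals: set    # Gl
    psi: dict     # Ψ: arc id -> (agent, goal) or (goal, agent)
    delta: dict   # ∆: link id -> set of arc ids tied by an AND-link

    def kind_of_dependence(self, ag0, g):
        """'AND' if some link ties together all the arcs from goal g to its
        helper agents (case 1); 'OR' if no such link exists (case 3)."""
        helper_arcs = {ar for ar, (src, dst) in self.psi.items() if src == g}
        if helper_arcs and any(helper_arcs <= arcs for arcs in self.delta.values()):
            return 'AND'
        return 'OR'

# Agent A1 AND-depends on B1, B2, B3 with respect to goal g (cf. Figure 1):
ds = DSGraph(agents={'A1', 'B1', 'B2', 'B3'}, goals={'g'},
             psi={0: ('A1', 'g'), 1: ('g', 'B1'), 2: ('g', 'B2'), 3: ('g', 'B3')},
             delta={'l': {1, 2, 3}})
print(ds.kind_of_dependence('A1', 'g'))   # -> AND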
4 A notation for DS-graphs
In this section we present formal definitions that support the calculation of objective
degrees of dependence in DS-graphs. We develop a notation that allows for a succinct
representation of the structure of dependence situations, and that is used as the basis for
the definition of the calculation procedure.
4.1 Simple dependence situations.
A simple AND-dependence situation occurs in a DS-graph either when an agent is jointly dependent on two or more agents for the realization of a single goal, or when an agent is dependent on a single agent for the realization of two or more goals.
If agent α is dependent on all of the agents β1, β2, ..., βn with respect to goal g, this is denoted as (α ≺ β1 ∧ β2 ∧ ... ∧ βn | g). If agent α is dependent on agent β with respect to all of the goals g1, g2, ..., gm, this is denoted as (α ≺ β | g1 ∧ g2 ∧ ... ∧ gm).
A simple OR-dependence situation occurs either when an agent is dependent on any one of two or more alternative agents for the realization of a single goal, or when an agent is dependent on a single agent for the realization of two or more alternative goals.
If agent α is dependent on any one of the agents β1, β2, ..., βn with respect to goal g, this is denoted as (α ≺ β1 ∨ β2 ∨ ... ∨ βn | g). If agent α is dependent on agent β with respect to any one of the goals g1, g2, ..., gm, this is denoted as (α ≺ β | g1 ∨ g2 ∨ ... ∨ gm).
4.2 Composed dependence situations.
A composed dependence situation occurs either when an agent is dependent on alternative sets of agents for the realization of a single goal, each set of agents being jointly capable of tackling the goal (so that the agent is OR-dependent on the various sets of agents, but AND-dependent on the agents within each set), or when a given agent is dependent on a conjunction of sets of agents, any agent in a set being able to act together with an agent from each of the other sets in order to achieve the goal aimed at by the given agent (so that the agent is AND-dependent on the various sets of agents, but OR-dependent on the agents within each set).
A composed dependence situation is thus written either using a disjunctive dependence form:
(α ≺ ∧i1(βi1) ∨ ∧i2(γi2) ∨ ... ∨ ∧ik(δik) | g)
or using a conjunctive dependence form:
(α ≺ ∨i1(βi1) ∧ ∨i2(γi2) ∧ ... ∧ ∨ik(δik) | g)
4.3 Generalized dependence situations.
It may be interesting to generalize the notation for DS-graphs introduced above by
extending the number of occurrences of operations ∧ and ∨, both at the agents part and
at the goals part of the expression, thus including composed dependence situations as
special cases.
Let ⊙, ⊡ be, respectively, either the operators ∧, ∨ or the operators ∨, ∧. A generalized dependence situation is written using either an expression with a generalized conjunctive dependence form:
(α ≺ ⊙i1(⊡j1 βj1) ∧ ⊙i2(⊡j2 γj2) ∧ ... ∧ ⊙ik(⊡jk δjk) | g1 ∧ g2 ∧ ... ∧ gk)
or an expression with a generalized disjunctive dependence form:
(α ≺ ⊙i1(⊡j1 βj1) ∨ ⊙i2(⊡j2 γj2) ∨ ... ∨ ⊙ik(⊡jk δjk) | g1 ∨ g2 ∨ ... ∨ gk)
We call structured goals the goals that appear in the goals part of generalized dependence situations.
Note that in the generalized dependence situations, the higher-level operators ∧ and
∨ are assumed to be non-commutative, so that a correspondence can be kept between
(sets of) agents and goals. This is also the reason why the number of sets of agents that
are listed and the number of goals listed should be the same.
The set of generalized dependence situations expressions is denoted by GDS.
4.4 Graphical representation of generalized dependence situations.
The mapping between the expressions defined above and the corresponding generalized
DS-graphs is immediate. Figure 2 illustrates the generalized DS-graph corresponding
to the generalized dependence situation denoted by:
(A ≺ ((B1 ∧ B2) ∧ (B3 ∨ B4)) ∨ (B5 ∧ B6) | (g1 ∧ g2) ∨ g3)
Fig. 2. Sample generalized DS-graph. [Figure omitted.]
For the sake of space, we omit the formal definition of generalized DS-graphs.
5 Calculating objective degrees of dependence in generalized
DS-graphs
To calculate objective degrees of dependence, a function dgr is defined, from the set of expressions of generalized dependence situations to the real interval [0 ; 1].
The calculation of the degree of dependence of an agent on other agents, with respect to a given goal, is informally defined as follows:
– if an agent is autonomous with respect to another agent, concerning the given goal, its degree of dependence on that agent is assigned the value 0;
– the total degree of dependence of an agent on all agents on which it is dependent,
with respect to the given goal, is assigned the value 1;
– if the dependence expression that characterizes the dependence situation of an agent
is of a conjunctive form with k terms, and its associated degree of dependence is d,
then the degree of dependence of the agent with respect to each of the terms of the
dependence expression is assigned the value d;
– if the dependence expression that characterizes the dependence situation of an agent
is of a disjunctive form with k terms, and its associated degree of dependence is d,
then the degree of dependence of the agent with respect to each of the terms of the
dependence expression is assigned the value d/k.
The rationale behind this informal procedure extends the one in [2]:
– a conjunctive form indicates that each of its components is essential to the achievement of the involved goals, thus all such components should be valued at the same level as the involved goals;
– a disjunctive form indicates that its components are alternatives that are equally able to achieve the involved goals, thus they devalue each other and should be uniformly valued by a fraction of the value of the involved goals.
This rationale gives rise to the formal definition of the function dgr:
Definition 3. Let G be the structured goal of an agent α and let α be dependent on a set of other agents for the achievement of G. Then, the objective degree of dependence of α on each such agent is given by the function dgr : GDS → [0 ; 1], defined by cases as follows:
1. If G = ∧k(gk) then dgr[(α ≺ ∧k(⊙ik(⊡jk βjk)) | G)] = 1;
2. If G = ∨k(gk) then dgr[(α ≺ ∨k(⊙ik(⊡jk βjk)) | G)] = 1;
3. If dgr[(α ≺ ∧k(⊙ik(⊡jk βjk)) | ∧k(gk))] = d then dgr[(α ≺ ⊙ik(⊡jk βjk) | gk)] = d;
4. If dgr[(α ≺ ∨k(⊙ik(⊡jk βjk)) | ∨k(gk))] = d then dgr[(α ≺ ⊙ik(⊡jk βjk) | gk)] = d/k;
5. If dgr[(α ≺ ∧k(⊡jk βjk) | gk)] = d then dgr[(α ≺ ⊡jk βjk | gk)] = d;
6. If dgr[(α ≺ ∨k(⊡jk βjk) | gk)] = d then dgr[(α ≺ ⊡jk βjk | gk)] = d/k;
7. If dgr[(α ≺ ∧k βjk | gk)] = d then dgr[(α ≺ βjk | gk)] = d;
8. If dgr[(α ≺ ∨k βjk | gk)] = d then dgr[(α ≺ βjk | gk)] = d/k.
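Read operationally, Definition 3 starts at the root of the dependence expression with degree 1, passes the degree unchanged through each ∧, splits it evenly through each ∨, and accumulates the resulting values at the agent leaves. The following Python sketch implements that reading; the tuple-tree encoding, and the summing of degrees when an agent occurs in several leaves, are our own illustrative assumptions.

from collections import defaultdict

def dgr(expr, d=1.0, acc=None):
    """Propagate the root degree d down a dependence expression: each child
    of an AND-node inherits d (cases 3, 5, 7); each of the k children of an
    OR-node receives d/k (cases 4, 6, 8)."""
    if acc is None:
        acc = defaultdict(float)
    if isinstance(expr, str):        # a leaf: an agent name
        acc[expr] += d
        return acc
    op, children = expr
    share = d if op == 'and' else d / len(children)
    for child in children:
        dgr(child, share, acc)
    return acc

# The dependence situation of Figure 2:
# (A ≺ ((B1 ∧ B2) ∧ (B3 ∨ B4)) ∨ (B5 ∧ B6) | (g1 ∧ g2) ∨ g3)
expr = ('or', [('and', [('and', ['B1', 'B2']), ('or', ['B3', 'B4'])]),
               ('and', ['B5', 'B6'])])
print(dict(dgr(expr)))
# -> {'B1': 0.5, 'B2': 0.5, 'B3': 0.25, 'B4': 0.25, 'B5': 0.5, 'B6': 0.5}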
The following is true about Definition 3:
a) the definition provides a computable notion of degree of dependence that corresponds to the two basic kinds of social dependence relations (OR-dependence, AND-dependence);
b) like the notion of social dependence relation that supports it, the definition states an objective notion of degree of dependence, which is a function of no subjective evaluation by the agents.
6 Additional concepts
6.1 Degrees of transitive dependences
When analyzing the dependence situations between agents, it is often necessary to take
into account dependence relations that go beyond the direct dependence between the
agents. One form of such indirect dependence is the transitive social dependence.
Transitive social dependence arises because social dependence may happen in a
transitive mode:
– if α depends on β with respect to some goal g, and β depends on γ w.r.t. some goal g′, and g′ is instrumental to g, then α depends on γ with respect to the combined goal g • g′, which is implicitly adopted by α.
To define degrees of dependence for transitive dependence relations, a choice has
to be made regarding the operation on degrees of dependence that is induced by the
transitivity of the relations of social dependence. The natural choice is multiplication:
Definition 4. Let α be dependent on β with respect to goal g, and β be dependent on γ with respect to g′, with g′ instrumental to g. Then α is said to transitively depend on γ with respect to the combined goal g • g′, denoted (α ≺ γ | g • g′). The transitive degree of dependence is calculated by
dgr[(α ≺ γ | g • g′)] = dgr[(α ≺ β | g)] · dgr[(β ≺ γ | g′)]
Definition 4 enables the calculation of degrees of dependence that take into account
dependences on agents that are far away in the overall network of social relations, and
not only degrees of dependence for direct dependence relations.
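For instance (with illustrative values), if dgr[(α ≺ β | g)] = 1/2 and dgr[(β ≺ γ | g′)] = 1/4, then dgr[(α ≺ γ | g • g′)] = 1/8. Since all degrees lie in [0 ; 1], a transitive degree of dependence never exceeds either of the direct degrees from which it is composed.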
6.2 Degrees of bilateral dependence
The social dependence relations examined so far are said to be unilateral. When considering bilateral social dependence, a notion of degree of bilateral dependence has to
be defined. The natural choice for the operation on the degrees of dependence that arise
from bilateral dependences is addition:
Definition 5. Let α and β be two agents such that α is dependent on β with respect to a goal g1, and β is dependent on α with respect to a goal g2. Then α and β are said to be bilaterally dependent on the combined goal g1 ⊗ g2, denoted (α ≺≻ β | g1 ⊗ g2). The degree of bilateral dependence is calculated by
dgr[(α ≺≻ β | g1 ⊗ g2)] = dgr[(α ≺ β | g1)] + dgr[(β ≺ α | g2)]
The following is true about Definition 5:
1. dgr[(α ≺≻ β | g1 ⊗ g2)] = dgr[(β ≺≻ α | g1 ⊗ g2)] = dgr[(α ≺≻ β | g2 ⊗ g1)];
2. the definition applies both to the cases of reciprocal dependence (g1 ≠ g2) and to the cases of mutual dependence (g1 = g2).
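As a worked instance of Definition 5 (with illustrative values): if dgr[(α ≺ β | g1)] = 1/2 and dgr[(β ≺ α | g2)] = 1/4, then dgr[(α ≺≻ β | g1 ⊗ g2)] = 1/2 + 1/4 = 3/4, and, by property 1, the same value is obtained from β's side.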
6.3 Negotiation power of agents in societies
Let M be a set of agents, and α a member of M. Let the subset of agents of M on which α depends be given by dep(α, M) = {β | (α ≺ β | g) for some g ∈ Goals(α)}. Let codep(M, α) = {β ∈ dep(α, M) | (β ≺ α | g) for some g ∈ Goals(β)} be the subset of agents of M that co-depend on α, that is, the subset of agents of dep(α, M) which are themselves dependent on α.
We let (α ≺ M) denote the fact that α belongs to M and depends on some subset of agents of M. We let (M ≺ α) denote the fact that some subset of agents of M is co-dependent on α. The degree with which α depends on M, and the degree with which M co-depends on α, can both be calculated.
We define the degree of dependence of α on M as:
dgr[(α ≺ M)] = Σ_{β ∈ dep(α,M), g ∈ Goals(α)} dgr[(α ≺ β | g)]
We define the degree of co-dependence of M on α as:
dgr[(M ≺ α)] = Σ_{β ∈ codep(M,α), g ∈ Goals(β)} dgr[(β ≺ α | g)]
In [2], the degree of co-dependence of M on α is called α's social value to M. The relation between α's social value to M and the degree of dependence that α has on M determines α's capacity for establishing exchanges, cooperation, coalitions, etc., in M. In [2] this relation is called α's negotiation power in M.
Formally, we may establish that the negotiation power of an agent α in a set of agents M is given by:
NgtPow(α, M) = dgr[(M ≺ α)] / dgr[(α ≺ M)]
A society is a set of agents that interact in order to overcome their social dependences. As such, a society becomes itself dependent on its own agents for its normal functioning. If S is a society and α an agent of S, then dgr[(S ≺ α)] is the social value of α in S and, correlatively, NgtPow(α, S) is the negotiation power of α in S.
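These society-level measures reduce to sums and a ratio over the table of direct degrees. The Python sketch below assumes the direct degrees dgr[(α ≺ β | g)] are stored in a dict keyed by (depender, helper, goal); this encoding and the function names are illustrative assumptions.

def dgr_on_society(alpha, degrees):
    # dgr[(α ≺ M)]: sum of α's degrees of dependence on the members of M
    return sum(d for (a, b, g), d in degrees.items() if a == alpha)

def dgr_of_society(alpha, degrees):
    # dgr[(M ≺ α)]: sum over codep(M, α), i.e. over the agents that α
    # depends on and that are themselves dependent on α
    co = {b for (a, b, g) in degrees if a == alpha}   # dep(α, M)
    return sum(d for (a, b, g), d in degrees.items() if b == alpha and a in co)

def ngt_pow(alpha, degrees):
    # NgtPow(α, M) = dgr[(M ≺ α)] / dgr[(α ≺ M)]
    return dgr_of_society(alpha, degrees) / dgr_on_society(alpha, degrees)

# 'a' depends on 'b'; both 'b' and 'c' depend on 'a', but only 'b' co-depends:
degrees = {('a', 'b', 'g1'): 0.5, ('b', 'a', 'g2'): 1.0, ('c', 'a', 'g3'): 0.5}
print(ngt_pow('a', degrees))   # -> 2.0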
6.4 Refining objective degrees of dependence with subjective estimates
Many subjective estimates of goals, actions, resources and plans can influence the way
agents perceive their dependences on other agents: importance, cost, preferences, emotional reactions, cultural biases, etc., all make the degrees of dependence depart in many
ways from the values that can be objectively calculated by the procedure defined above.
Thus, we must define a means to allow the objective degrees of dependence to be
refined by the subjective estimates of those various aspects of a dependence situation.
In a dependence situation, the object agent is the agent whose dependence is being analyzed, while a third-party agent is an agent on which the object agent depends [8]. The subjective factors that may influence the determination of a degree of dependence are due either to the object agent (importance of goals, preferences among goals, etc.) or to the third-party agents (costs of actions, probability of action execution, etc.).
In a DS-graph, the subjective factors due to the object agents should label the arcs connecting the object agents to the goals concerned, while the third-party agent factors should label the arcs connecting the goals to the third-party agents.
We thus extend Definition 3:
Definition 6. Let wi ∈ [0 ; 1]. Then the weighted objective degree of dependence wdgr : GDS → [0 ; 1] is defined by cases as follows:
1. If G = ∧k(wk · gk) then wdgr[(α ≺ ∧k(⊙ik(⊡jk(wjk · βjk))) | G)] = 1;
2. If G = ∨k(wk · gk) then wdgr[(α ≺ ∨k(⊙ik(⊡jk(wjk · βjk))) | G)] = 1;
3. If wdgr[(α ≺ ∧k(⊙ik(⊡jk(wjk · βjk))) | ∧k(wk · gk))] = d then wdgr[(α ≺ ⊙ik(⊡jk(wjk · βjk)) | wk · gk)] = d;
4. If wdgr[(α ≺ ∨k(⊙ik(⊡jk(wjk · βjk))) | ∨k(wk · gk))] = d then wdgr[(α ≺ ⊙ik(⊡jk(wjk · βjk)) | wk · gk)] = d/k;
5. If wdgr[(α ≺ ∧k(⊡jk(wjk · βjk)) | wk · gk)] = d then wdgr[(α ≺ ⊡jk(wjk · βjk) | gk)] = wk · d;
6. If wdgr[(α ≺ ∨k(⊡jk(wjk · βjk)) | wk · gk)] = d then wdgr[(α ≺ ⊡jk(wjk · βjk) | gk)] = (wk · d)/k;
7. If wdgr[(α ≺ ∧k(wjk · βjk) | gk)] = d then wdgr[(α ≺ βjk | gk)] = wjk · d;
8. If wdgr[(α ≺ ∨k(wjk · βjk) | gk)] = d then wdgr[(α ≺ βjk | gk)] = (wjk · d)/k.
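One minimal way to mirror Definition 6 in the earlier dgr sketch is to attach a weight to every node of the expression tree and let it scale the degree flowing through that node. Note that this conflates the goal weights wk and the agent weights wjk into a single per-node weight, a simplifying assumption of ours.

def wdgr(expr, d=1.0, acc=None):
    """Weighted variant of dgr: leaves are (weight, agent) pairs, internal
    nodes are (op, weight, children) triples; each weight in [0 ; 1] scales
    the degree passed on, in the spirit of cases 5-8 of Definition 6."""
    if acc is None:
        acc = {}
    if len(expr) == 2 and isinstance(expr[1], str):   # a weighted leaf
        w, agent = expr
        acc[agent] = acc.get(agent, 0.0) + w * d
        return acc
    op, w, children = expr
    share = w * d if op == 'and' else (w * d) / len(children)
    for child in children:
        wdgr(child, share, acc)
    return acc

# An OR between two equally able helpers, the first subjectively discounted:
print(wdgr(('or', 1.0, [(0.8, 'B1'), (1.0, 'B2')])))   # -> {'B1': 0.4, 'B2': 0.5}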
In this way, the objective degrees of dependence that we defined above clearly show
their roles as reference values, upon which subjective factors may operate to modulate
the objective evaluations with subjective ones.
7 Conclusion
This paper introduced degrees of dependence for dependence relations, whose calculation involves only objective notions, that is, notions that do not depend on the beliefs and preferences of the agents involved in those relations. It stated the basic properties of objective degrees of dependence.
Many lines of work may derive from the results presented here. It is necessary to
better explore the possible ways objective degrees of dependence may be combined
with subjective estimates, so that the dynamic evolution of the exchanges during the functioning of the organization, and the effective behaviors of the agents, can be considered at the moment of calculating the degrees of dependence.
It is necessary to consider in which ways degrees of dependence can be used as
criteria in social reasoning mechanisms concerned with the formation of coalitions ([4] proposed one such way, for utility-based subjective degrees of dependence).
For this to be profitable, however, it is also necessary to develop a theoretical account of
the deep relations that seem to exist between the theory of dependence relations [1] and
the theory of social exchange values [5, 6, 3], showing how degrees of dependence and
exchange values may jointly enrich the explanations of the higher level social notions
that can be derived from social dependence, like influence, power, trust, etc.
Acknowledgements: The authors thank Cristiano Castelfranchi for his remarks on a previously flawed presentation of the transitivity of dependence relations, and for calling our attention to the importance of connecting this work with the notion of negotiation power. We also thank an anonymous referee for the suggestion that degrees of dependence could be refined by the probability of the partner agents actually performing the needed actions.
References
1. C. Castelfranchi, M. Miceli and A. Cesta. Dependence Relations among Autonomous Agents. In: E. Werner and Y. Demazeau (eds.), Decentralized A.I.-3. Elsevier, Amsterdam, 1992, p. 215–227.
2. C. Castelfranchi and R. Conte. The Dynamics of Dependence Networks and Power Relations in Open Multiagent Systems. In: Proc. COOP'96 – Second International Conference on the Design of Cooperative Systems, Juan-les-Pins, France, June 12–14. INRIA Sophia-Antipolis, 1996, p. 125–137.
3. A. C. R. Costa and G. P. Dimuro. Systems of Exchange Values as Tools for Multiagent Organizations. Journal of the Brazilian Computer Society, Special Edition on Multiagent Organizations (J. Sichman, O. Boissier, C. Castelfranchi, V. Dignum, eds.), 2005.
4. N. David, J. S. Sichman and H. Coelho. Agent-Based Social Simulation with Coalitions in Social Reasoning. In: P. Davidsson and S. Moss (eds.), Multi-Agent Based Simulation: Proc. 2nd International Workshop on Multi-Agent Based Simulation (MABS'00), Boston, USA. Lecture Notes in Artificial Intelligence, vol. 1979. Springer-Verlag, Berlin, 2000.
5. J. Piaget. Sociological Studies. Routledge, London, 1995.
6. M. R. Rodrigues, A. C. R. Costa and R. Bordini. A System of Exchange Values to Support Social Interactions in Artificial Societies. In: Proceedings of the Second International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2003, Melbourne, pages 81–88, 2003.
7. J. S. Sichman and Y. Demazeau. On Social Reasoning in Multi-Agent Systems. Revista Iberoamericana de Inteligencia Artificial, vol. 13, Verano, pages 68–84, 2001. Available online at http://tornado.dia.fi.upm.es/caepia/numeros/13/sichman.pdf
8. J. S. Sichman, R. Conte, C. Castelfranchi and Y. Demazeau. A Social Reasoning Mechanism Based on Dependence Networks. In: A. G. Cohn (ed.), Proceedings of the 11th European Conference on Artificial Intelligence. John Wiley & Sons, Baffins Lane, England, 1994.
9. J. S. Sichman and R. Conte. Multi-agent Dependence by Dependence Graphs. In: Proc. 1st International Joint Conference on Autonomous Agents and Multi-Agent Systems – AAMAS'02, pages 483–492, Bologna, Italy, July 2002.
Practical "Permission":
Dependence, Power, and Social Commitment
Cristiano Castelfranchi
Istituto di Psicologia del CNR*
Unit of AI & Cognitive Modelling
Roma, v. Marx 15 - 00137 Roma - ITALY
cris@pscs2.irmkant.rm.cnr.it
Extended Abstract - Preliminary version
Premise
The analytic enquiry into deontic modalities (obligatory, permitted, etc.) was developed before, and independently of, the logical modelling of mental attitudes, of cognitive agent architecture, and of social interaction; it took place following the blueprint of non-deontic modalities (necessary; possible). We might consider this kind of approach, and this use of logic, as basically anti-mentalistic: one tries to define, formalise, and reason about obligation and permission while substantially ignoring the minds of the involved agents. Thus, the traditional treatment of deontic modalities is very problematic for a cognitive scientist. Let me explain why.
As a cognitive scientist aiming at providing cognitive models of social relations and interactions (Conte e Castelfr), I will claim that:
• There cannot be any obligation or permission that is not relative to, and impinging on, some Agent – more precisely, some Cognitive Agent.
Why could there not be obligations/permissions for non-cognitive Agents? What is the special, intrinsic relation between deontic modality and cognition?
• There cannot be any obligation or permission for a non-social, stand-alone Agent.
Why is this? What is the special, intrinsic relation between deontic modality and sociality?
In my view, the current treatment by deontic logic does not help us to answer these crucial questions.
• Obligations and permissions are relative only to actions, i.e. to the behaviour of a cognitive agent (a behaviour based on beliefs and directed by goals).
* This research has been supported by the ModelAge EEC Project. I would like to thank Rosaria Conte, with whom I developed over the years many reflections about norms and cognition.
There cannot be obligations/permissions on mere world states and events, unless as results of an action of some agent.
Obligation and permission are addressed to a mind, and although pointing to a behaviour they implicitly refer to mental attitudes. More than this: obligations and permissions are relations between minds, and can be fully understood only from this perspective¹.
In this paper I will attempt to analyse permission in terms of a cognitive-social relation between two agents:
• If something is "permitted", it is permitted to somebody (y), by somebody (x).
I will analyse some basic aspects of the social relation between x and y and of their minds. In particular, I will search for elements of Dependence relations, power, goal adoption, and Social Commitment, as ingredients of the notion of "permitted" and of the Permission relation between x and y.
I think that current developments in AI, philosophy, and logic relative to mental attitudes, rational action, agent architecture (especially BDI models: for ex. Ingrand & Pollack; Rao & Georgeff, 91; Bell, 95), and Multi-Agent Systems will allow the expression of the mental and relational core of this notion. In this preliminary exploration I will adopt a naive attitude, substantially ignoring the rich and subtle philosophical literature on the topic (mainly on the legal, institutional form of permission), just reacting to some basic and consolidated notions, and trying to build up this ontology on a socio-cognitive ground. I will not propose any formalisation, but just point out some aspects that should be formalised.
1. Permission is not the absence of prohibition
It should be clear, on the basis of the previous claims, that (and why) I cannot accept the well-established analysis that reduces the Permission to do a to the negation of the Obligation to omit a (if it is not prohibited, it is permitted) (for ex. Ross).
This is absurd. Permission is something that is "given" to somebody, and that he "has". It is a social action and relation. It cannot just consist of the absence of a prohibition, of an obligation to abstain from a. It is not a lack of some constraint, or of some restrictive authority: it is the presence of a positive act and relation of an agent (x) towards another agent (y).
If Robinson is living on a desert rock in the ocean, and nobody prohibits or prevents him from using any part of the island as he likes, he is not "permitted" to do so. Only if there are other agents, with a specific attitude and relation, might Robinson be "permitted" to do something. If I am walking around and breathing, I have not got the "permission" to breathe just because there is nobody (and no law) ordering me not to breathe.
Before starting this analysis of what is involved when "x permits y to do a" (PERMIT x y a), it is important to stress the fact that I am not analysing the normative, or even the legal or institutional, permission (Jones, ). I am analysing "permission" in face-to-face, everyday interactions among agents not endowed with special roles. I think that interpersonal or practical permission is both the conceptual and the practical forerunner of the normative and institutional permission. I claim that understanding the former is a necessary, though not sufficient, condition for understanding the latter. At the end of the paper I will say something about the Interpersonal Normative Permission,
¹ In several languages the meaning of the verb "to permit" is broader than that of the noun "permission" and of the locution "to give the permission". For example, it is possible to say that a physical agent "permitted" a behavioural agent to do something ("rain did not permit John..."). In this use "to permit" is related to "prevent", not to "prohibit". I will consider only the meaning of "to permit" that is in some sense opposed to "to prohibit" and is close to "to give the permission".
contrasting it with the Interpersonal Practical Permission. I will say nothing about the Institutional Permission (a complex form of the Normative one): I basically agree with Jones and Sergot's analysis, although I think that their formal apparatus (deontic logic) is not able to express the underlying cognitive and social relations that I describe for the Practical Permission, and that I claim to hold also in the other forms of Permission.
My working examples of interpersonal practical permission are the following ones:
(1) y intends to enter a room; in the middle of the door there is x; y asks x "could you let me pass, please" and x answers "please", moving away.
(2) two children on a beach are writing on the sand with some pipes used as pens. x's pipe is good, y's pipe is not good at all. x puts aside his pipe and y asks him: "may I use that?" "yes, but give it back to me later".
2. Permission presupposes Dependence-Power relations
Let us now start to analyse the basic aspects of this social relation between x and y, and of their minds. In particular, I will search for elements of Dependence relations, power, goal adoption, and Social Commitment, as ingredients of the notion of "permitted" and of the Permission relation between x and y.
(PERMIT x y a) implies – or better, presupposes – that y is dependent on x as for her possible goal G of executing a. x cannot permit y something that he cannot prevent y from doing².
2.1 The dependence theory
Below, I will describe the theory of dependence as presented in [Sichman et al., 1994], built on the basis of a pre-existing model developed by Castelfranchi et al. [Castelfranchi et al., 1992; Conte & Castelfranchi, 1995].
Our model
We claim that social agents are plunged into a network of social relationships. The focus
is on the agents' mental states, namely their goals. Social networks are here seen as
patterns of relationships holding among the goals and actions of a given set of agents.
The most fundamental relationship among agents' goals and actions is social
dependence [(Castelfranchi et al., 1992)], where one agent needs the action of another to
achieve one of her goals.
The three basic notions of the social dependence theory are the social market, the dependence relation, and the dependence situation. We will present here only the first two; the last one is available in [Sichman et al., 1994].
The social market
We will call a social market, or a market for short, any aggregate of agents where the value of a single agent's resources depends on the wants and needs of the others. In other words, in a social market agents reach their goals thanks to what they have to "sell". The general principle for achieving one's goals is that you-have-what-I-need-and-I-have-what-you-need.
The social market consists of a data structure composed of:
(a) the set of goals each agent wants to achieve,
(b) the set of actions she is able to perform,
(c) the set of resources she controls, and
(d) the set of plans she has.
² At the institutional/legal level, of course, practical impossibility is not enough. The action could be practically executable by y, but not morally or legally executable without x's consensus. So, y continues to be dependent on x, not for the execution of the practical action a, but for the execution of the institutional/regular action a′ that requires as a condition the permission of x. y's dependence on x is institutionally, normatively created (see ch. 5).
A plan consists of a sequence of actions together with the resources needed to accomplish them.
However, an agent may have a plan whose actions or resources do not necessarily
belong to her own set of actions or resources, and therefore she may depend on others in
order to carry on a certain plan, and achieve a certain goal.
An entry corresponding to an agent agj contains, respectively, the set of goals, the set of actions, the set of resources, and the set of plans that the external observer believes agj to have³.
By resources we mean concrete objects that may be required for performing actions. For the time being, we will conceive of resources as both non-consumable and re-usable (for example, a pair of scissors is a resource for cutting a piece of cloth). In future developments of the model, both constraints will actually be dropped.
Dependence relations
Using the external description defined above, we define the notions of autonomy and dependence as follows.
An agent agi is a-autonomous (action-autonomous) for a given goal gk, according to a set of plans Pqk, if there is a plan in this set that achieves this goal and every action appearing in this plan belongs to her own set of actions A(agi). In other terms, an agent is a-autonomous if she is endowed with all the actions involved in at least one of the plans that achieve her goal: if her set of plans is non-empty, but none of those plans is exhausted by her actions, the agent is not a-autonomous.
Analogously, we define the notion of r-autonomy (resource autonomy). Finally, an agent agi is s-autonomous (socially autonomous) if she is both a-autonomous and r-autonomous for this goal.
On the other hand, if an agent does not have all the actions (or resources) needed to achieve a given goal, according to a set of plans, she may depend on the others for this goal.
An agent agi a-depends (action-depends) on another agent agj for a given goal gk, according to a set of plans Pqk, if agi has gk in her set of goals, she is not a-autonomous for gk, and there is a plan in Pqk that achieves gk where at least one action used in this plan is in agj's set of actions A(agj).
In a similar way, we have defined the notion of r-dependence (resource-dependence). Finally, an agent agi s-depends (socially depends) on another agent agj if she either a-depends or r-depends on this latter.
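The a-autonomy and a-dependence tests above are directly algorithmic. The following Python sketch encodes them under the simplifying assumption that a plan is just the list of actions it uses; the function names and data layout are illustrative, not taken from [Sichman et al., 1994].

def a_autonomous(own_actions, goal, plans):
    # agi is a-autonomous for gk if some plan achieving gk uses only
    # actions in her own set A(agi)
    return any(set(plan) <= set(own_actions) for plan in plans.get(goal, []))

def a_depends(own_actions, other_actions, goal, own_goals, plans):
    # agi a-depends on agj for gk if gk is a goal of agi, agi is not
    # a-autonomous for gk, and some plan for gk uses an action of agj
    return (goal in own_goals
            and not a_autonomous(own_actions, goal, plans)
            and any(set(plan) & set(other_actions)
                    for plan in plans.get(goal, [])))

# agi can 'cut', but the only plan for g also needs 'sew', which agj can do:
plans = {'g': [['cut', 'sew']]}
print(a_depends({'cut'}, {'sew'}, 'g', {'g'}, plans))   # -> True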
Social Dependence, Power and Permission
When y asks x for a permission (for example, to pass or to use something), she believes that x is able, and in a position, to prevent her from doing what she needs. So, y is asking x to "let her do".
In fact, if y depends on x, x has some social power over y [Castelfranchi, 1990]:
³ For the formal expression of our model, see Sichman et al. (1994).
(S-DEP y x a g)
(POWER-over x y g)
x has the power of (CAN) allowing and favouring y in achieving g, and the power of preventing her from this. We call this form of social power "power over" the other (more precisely: over the goal of the other), and also "rewarding power", since x has the power of giving y positive (goal achievement) or negative (goal frustration) rewards.
In the permission relation (asking/receiving-giving permission) there is a mutual belief of x and y about y's dependence on x and x's power over y, as for a given goal of y.
A good formalization of this power relation, and then of PERMIT, would require a formal definition of PREVENT (ex. [Ortiz]) and of LET as a form of doing [(Porn)]. Notice that there is no true "letting something happen" if there is not (a belief about) the power of preventing it, or at least of attempting to prevent it.
2.1. Permission and Practical Possibility (why "weak" Dependence is weak)
One might object that in many cases y might have the practical possibility of doing what she wants, of obtaining what she needs from x, without asking for anything. Thus she is not really dependent on x. For example, in (1) or in (2) y might be much stronger than x and could just push x aside or take away x's pipe.
In these cases, which are normally conceptualised as "weak dependence" [(Jennings, )], the problem is the correct identification of the goal y is depending on x for. In the "weak dependence" notion, y could do a, is able to do a, but "prefers" to rely on x, to exploit x's help/action. In my view this notion is quite superficial. In a deeper analysis one should express the fact that if y "prefers" x's help, this means that there is more utility in this choice: in other words, y will achieve more goals (for example, the same result of doing it by himself plus saving time and effort). Now, as for the achievement of this more global, compound goal, he is strictly depending on x.
In other terms, when y is said to be "weakly" dependent on x for goal g, this means that in fact he is depending on x for a compound goal G of which g is just a part, while he is not depending on x as for g alone; therefore, he (of course) prefers to achieve the entire G rather than just g (if the cost of using x does not exceed the utility of G − g).
The same holds for permission: when y is asking/waiting for a permission for a given action a when apparently she has the practical possibility (CAN) of doing a, this means that the real goal of y, which she is depending on x for, is not simply the successful execution of a, but this plus other results that need x's (passive) help. For example, she has the goal of doing a without being impolite or aggressive, or of doing a without fighting or arguing with x. In order to achieve this global goal, y is dependent on x and needs x's permission. Also, the action she will execute is not trivially "the same action a", since this action will produce different results in different conditions.
2.2. Physical Obstacles, Conflict and Prohibition
Since in many cases x is not materially creating obstacles to y's action, but just could do so, and since normally x has just to let y do a, why should y need x's permission? It is necessary that y believes that there is a possible intention, motive, or reason in x for opposing her action. So, the fact that x CAN obstruct y, his power, is a necessary but insufficient condition for a permission relation. Also x's "willingness" [(Miceli, Cesta, )] is important. Precisely, x is supposed to have the possible goal that y does not do a.
y is searching for x's agreement, consensus. Apparently, x's disagreement, his conflictual attitude, is considered by y an obstacle to her activity. Either x has the power of materially, physically preventing y from doing a, and (although at the moment there are no physical obstacles) y worries about x's putting up such obstacles; or x's mental attitude is per se important for y and creates an obstacle.
In both cases the intention of x, his willingness not to create obstacles, is the real matter. Of course, y will have the goal that something will NOT happen only if there is some reason to suspect that it might be so: there is some reason why x might have the goal of contrasting y (in Normative Permission, for example, x's rights on a – see §5).
Cognitive agents can be prevented from doing something just by influencing them via communication. Prohibition is in fact a way of preventing, of blocking, based just on influence. More precisely, x makes y aware of x's conflicting goal: "I don't want you to do a" ("my goal is opposite to your goal; I have the goal that you don't have/pursue your goal") [(Castelfranchi, 1996)]. And he communicates this in order to change y's mind, so that y does not do a. This is an Interpersonal Prohibition, an Imperative not to do something (based just on personal social power). So to prohibit is aimed at preventing, and is a form and a way of preventing. How is the awareness of x's opposite goal an obstacle to y's action? As we said, either it is just the announcement/prediction of future physical obstacles, or it is an impediment per se. In this case, clearly enough, the real goal of y is not only that of doing a, but that of doing a with the agreement of x, without disappointing x (this can derive from several reasons: affect, respect, politeness, norms, etc.).
In conclusion, when y asks for/needs x's permission, she is trying to avoid x's opposition, either material, practical opposition or merely hostile attitudes (goals): in both cases she is in fact interested in x's mental attitudes; and there is some reason to expect possible opposition by x.
2.3. Permission empowers
The identification of the dependence-power ground of permission explains why permission is power for y: it gives power to y. In fact, y's possibilities are augmented. Before and without x's permission (a form of passive help) y does not have the power of doing a (or of achieving G), she CANNOT do a; after and thanks to x's permission, she CAN.
In the traditional treatment of permission this effect was described, but it is not explained. It is just postulated and seems quite inexplicable, as if by magic. Consider for example Lewis's semantics for command and permission in his Master/Slave game, and the opposite effects of command and permission on the "sphere of permissibility" [(Lewis, 1979)].
The problem, in my view, is why a command is a contraction, a restriction, of the set of y's possible behaviours, while on the contrary a permission is an expansion of the pre-existing set of possible behaviours. My trivial explanation is that permission expands y's powers while prescription (and prohibition) restricts them, and that this is due to y's dependence on x, and to x's power over y.
If x prescribes something to y and has the power of influencing y [(Castelfranchi, 1990)], which is based (especially for prohibition) on his power over y, y's behavioural alternatives are reduced to one, and in any case y has no power of doing something else without violating x's prescription.
If x permits something to y (y was depending on x), the sphere of y's powers is larger: now she can achieve what was impossible before⁴.
3. Permission as a form of Social Goal-Adoption (passive help)
When (PERMIT x y a), doing a should be a possible goal of y: either an active goal y is considering (desire) or pursuing by a plan (intention), or a goal that x believes y might/will activate and pursue.
If (x believes that) y does not want a, he cannot permit y a.
⁴ To be more clear, I think that to Prescribe/Command and to Permit are not symmetric. They are quite different: a Command reduces y's possible behaviours only if y accepts it (goal-adoption), although it automatically contracts the permitted behaviours (it is true that if something is prohibited it is not permitted). Permission automatically expands both permitted actions and possible actions. This is due to the fact that Permission is just based on the power-over (dependence), while Prohibition is based on the power-of-influencing, which passes through some decision of y.
For this reason, for example, the following dialogue is pragmatically and logically inconsistent:
Daughter: "I don't want to marry Dr. Smith!!!"
Father: "Well, I give you my permission (to marry him)".⁵
More precisely, if (PERMIT x y a), necessarily x does not believe that y will never have such a goal (for the time the permission refers to):
Not (BEL x (Not Eventually (GOAL y a)))
In fact, the father could perfectly well answer:
Father: "Anyway, in case you change your mind, I give you my permission".
a is not necessarily a current, active and pursued goal of y. When y is "permitted" to do a, it is up to her, and x leaves to her, the decision about doing a or not, and this decision is autonomous and free: no prescription of x is involved in this decision: if y likes to do a, she can; or better, as far as x is concerned, she can: x will not attempt to contrast or prohibit her doing so.
Notice that (PERMIT x y a) constrains the class of the agent y: such an agent should be autonomous, able to decide about and pursue its own goals, and to base this pursuit on its beliefs⁶.
Since in permitting, a has to be a goal of y, in permitting a x is adopting a goal of y: he is helping y to achieve her goal. But consider that this is a special form of goal adoption and help, a quite passive form: to abstain from opposing.
Social Goal-Adoption (Castelfranchi 90 e 91) is when an agent adopts a goal because, and as long as, (he believes that) it is a goal of another agent. Or better (since this definition could also cover some forms of imitation), the agent has the goal that the other agent achieves⁷/satisfies her goal [(Conte e Castelfr; Mic Cesta; Haddadi, 1996)]:
(GOAL x (OBTAIN y g))
where (OBTAIN y g) =def (GOAL y g) ∧ (KNOW y g)⁸
⁵ Notice that, on the contrary, the father could perfectly well say:
Father: "Well, I order you to marry him!"
This shows, in my view, that it is false that giving a command (prescription, obligation) implies giving the permission. Command might presuppose that y does not want to, whereas permission presupposes that a is a (potential, possible) goal of y. Only a subpart of the ingredients of giving a permission is implied by giving an order. In particular, in commands x, having accepted (required) y's action (an S-Commitment to x to do a), is conversely Socially-Committed to y to want y to do a and not to oppose this.
I also have other problems with the deontic logic assumptions. For example, from the cognitive point of view it is possible to permit impossible things (things that x believes impossible): this is y's problem. Of course this makes x's help very limited and literal. By contrast, it is irrational to prescribe impossible things, because it is x's matter to satisfy the goal he is prescribing. Either the real intended effect (goal) is not what he prescribes (but some side effect), or the prescription is irrational.
⁶ It seems quite contradictory to me to define a notion of "permission" relative to a slave agent that is so enslaved that commands are automatically accepted and necessarily true. This kind of agent is not autonomous, has no personal will, does not decide whether or not to obey his master; thus it is meaningless to give him "permissions", which presuppose some autonomous desires and goals in the agent! If an agent can be permitted to do something, and thus has his own goals, he cannot automatically execute commands: he will make some decision about obeying or refusing them.
⁷ In helping and goal adoption the awareness of y is not necessary, and y's pursuit of her goal is not necessary either. x's help might be spontaneous and total (doing everything necessary to realize g for y). This is why the predicate ACHIEVE is perhaps too strong (Haddadi, 1996).
⁸ This definition too is not completely satisfactory. In fact, in the definition of OBTAIN, (GOAL y g) should be just presupposed: (GOAL x (OBTAIN y g)) should not imply that (GOAL x (GOAL y g)).
There is active help when, in order to make the other achieve/satisfy her goal, x has to plan and execute some action; there is passive help when, to allow y to achieve her goal, x has just to abstain from doing something: he has just to let something happen.
In case x is in fact already creating obstacles to y's action, in giving the permission he is also committing himself to actively remove such obstacles (as in example 1).
Passive goal-adoption is implied by permission, but is broader than permission. Not all cases of passive goal-adoption are permission. Consider, in example (2), that y, ignoring that the pipe on the sand is related to x (close to x, discovered and used by x), just takes and uses it, and suppose that x notices this and decides to let y do so. Is x's behaviour a permission? Not at all. This is just passive help: x could obstruct y, but decides to consent to, let, permit, allow y's action. x's action is also a social action (Conte e Castelfranchi, 1995), but it is not sufficient to characterise a Permission relation, even a merely practical and interpersonal one (non-normative and non-institutional).
In this example y is not aware of x's decision and "help", or even of her dependence on x; x does not have the goal that y know about his decision; y does not have the goal that x decide not to oppose her and let her know about his decision; x is not adopting both y's goal of doing a and y's goal that x not oppose her and let her know about his decision.
On the contrary, all this is necessary in Permission: mutual beliefs, communication, and x letting y know about his adoptive intention. In short, Permission is a form of promise.
4. Permission as Social-Commitment
As said above, in giving his permission x is not only adopting y's possible goal of doing a, but is also adopting y's goal of this adoption (permission) and of its communication (letting her know about this adoption). y's goal of having the permission is normally explicitly communicated (a request for permission), but it could of course also be an implicit expectation. What exactly y expects from x is a "promise" of not opposing or contrasting her. The promised action (active or passive) might either be immediately executed or delayed: it depends on if/when y will pursue her goal (in examples 1 and 2, for instance, it is immediate). More generally, what y expects is a Social-Commitment by x: (S-COMMIT x y g).
4.1. What is an S-Commitment
A Social Commitment is not just a personal commitment to a given intention: it is a social relation. More precisely (Castelfranchi, 1995):
(a) a Social Commitment is a form of "Goal Adoption". In other terms: x is committed to y to do a if y is interested in a. The result of a is a goal of y; for this reason, y has the goal that x does a. Thus we should include in the formal definition of S-Commitment the fact that (S-COMM x y a z) implies (GOAL y (DOES x a)).
(b) If x is S-Committed to y, then y can (is entitled to):
- control whether x does what he "promised";
- exact/require that he does it;
- complain/protest with x if he does not do a;
- (in some cases) make good his losses (pledges, compensations, retaliations, ...).
x and y mutually know that x intends to do a, that this is y's goal, and that as for a y has specific rights over x (y is entitled by x to a).
One should introduce a relation of "entitlement" between x and y, meaning that y has the rights of controlling a, of exacting a, of protesting (and punishing); in other words, x is S-Committed to y not to oppose these rights of y (in such a way, x "acknowledges" these rights of y).
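Clauses (a) and (b) can be collected in a small data structure. This is a hedged sketch in Python; the class SCommitment, its fields, and rights_of_y are names of my own choosing, not part of the original notation.

```python
from dataclasses import dataclass

@dataclass
class SCommitment:
    x: str  # the committed agent
    y: str  # the agent to whom x is committed
    a: str  # the promised (active or passive) action
    mutual_knowledge: bool = False  # x and y mutually know intention and goal
    accepted_by_y: bool = False     # y's agreement (see below)

    def rights_of_y(self) -> list:
        # Clause (b): the rights y is entitled to by x's commitment.
        return [
            f"control whether {self.x} does {self.a}",
            f"exact/require that {self.x} does {self.a}",
            f"complain/protest with {self.x} if he does not do {self.a}",
            "in some cases, have losses made good (compensations, ...)",
        ]
```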
Not all adoptions of a goal of y by x imply an S-Commitment of x to y. What else is required? First, the Mutual Knowledge I already mentioned. Second, y's agreement!
In fact, if x just has the I-Commitment to favour one of y's goals, this is not sufficient (even if there is common awareness): y should "accept" this. In other words, she has decided, she is I-Committed, to achieve her goal by means of x's action. This acceptance is known by x; there is an agreement. Then the S-Commitment of x implies an S-Commitment of y to x to accept x's action (y doesn't refuse, doesn't protest, doesn't say "who told you!", ...). Without such an (often implicit) agreement (which is a reciprocal S-Commitment) no true S-Commitment of x to y has been established.
As I said, the very act of committing oneself to someone else is a "rights-producing" act: before the S-Commitment, before the "promise", y has no rights over x; y is not entitled (by x) to exact this action. After the S-Commitment there exists such a new and crucial social relation: y has some rights over x; she is entitled by the very act of Commitment on x's part.
What I just said implies also that if x is S-Committed to y, he has a duty, an obligation: he ought to do what he is Committed to 9.
So, when x is committed, a is more than an Intention of x; it is a special kind of goal, more cogent. The more cogent and normative nature of S-Commitment explains why abandoning a Joint Intention or plan, a coalition or a team, is not as simple as dropping a private Intention. In fact, one cannot exit an S-Commitment in the same way one can exit a private Commitment.
4.2. S-Commitment in Permission
In giving his permission, x is S-Committing himself to y not to oppose y doing a. If, for example, x gave the permission but later - when y is doing a - he makes opposition, complains, or argues, y can with reason protest against x's attitude, saying: "But you gave me the permission!!".
In fact, through the permission y acquired some rights to do a without x's opposition, and x, because of his assent, acquired some duties not to create obstacles.
The same is true for removing personal obstacles. In example (1), x cannot answer "yes" (permission) while remaining in place to block the way out. This behaviour is not coherent: in saying "yes", x promised to let y pass, and implicitly to remove his obstacle.
As in any S-Commitment relation, x is adopting some goals of y, and y in her turn is acknowledging her dependence on x and delegating to x a helping action (either passive or active).
It seems to me that all the basic ingredients of S-Commitment are there in a Permission relation.
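Under the same assumptions, giving a permission can be sketched as the creation of a passive S-Commitment; give_permission and the string encoding of the promised abstention are, again, illustrative choices of this sketch.

```python
def give_permission(x: str, y: str, action: str) -> SCommitment:
    # Permission: x S-Commits to y not to oppose y doing the action
    # (and, where the obstacle depends on him, to remove it).
    return SCommitment(x=x, y=y,
                       a=f"not opposing {y} doing {action}",
                       mutual_knowledge=True,
                       accepted_by_y=True)

p = give_permission("x", "y", "a")
# y can now appeal to her acquired rights:
# "But you gave me the permission!!"
print(p.rights_of_y())
```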
Of course, not every S-Commitment is a Practical Permission; something more is needed. In a simple S-Commitment y is asking/delegating an action/goal to x, and x is doing something for y; in Practical Permission x lets y do an action, and x is just requested to consent, and is committed not to prohibit or contrast it (and, in case, to eliminate obstacles depending on him) 10.
9 Such creation of interpersonal obligations and rights through S-Commitments (‘micro-deontics’) will require a general approach to deontics that allows contradictions among deontic contexts and hierarchical levels (in this direction, see e.g. [Jones & Porn 1985]). For example, a killer gets an obligation to his instigator to murder somebody, but, from the point of view of society, such an obligation is in contrast with a prohibition (law) and with a much stronger obligation.
Introducing S-Commitment, we introduce some Normative stuff also into merely Practical Permission. In fact, we know that the act of Social Commitment is an act that creates some rights (and complementary duties). It is an open problem whether this creation of rights is merely interpersonal and "natural", or whether it presupposes some social system and social norms (for example, about promise keeping). I am trying to explore the first alternative as long as possible, trying to let norms and laws emerge from interactions (and minds) - bottom up - rather than only imposing them from the top (society) onto the agents.
5. Towards Normative Permission: x's rights, entitlement and
authority
Many readers might consider my notion of Practical Permission, based just on dependence and practical power, too poor, lacking some more "normative" stuff. They might consider the Social-Commitment relation (promise) between x and y insufficiently deontic. I acknowledge that what I described is the basic and weakest form of interpersonal permission. Even in the very trivial examples I used, some other important ingredients emerge, and I have, somewhat improperly, bypassed them. In particular, the father-daughter example is, to be honest, a clear example of more "institutional" permission, based on some form of "authority"; and in the children example too there is something more. Precisely what I put aside in this example was the fact that in some sense x "owns" his pipe (by some sort of natural right), he has some "title" to it, and that y acknowledges these rights and titles, and is asking for the permission because she acknowledges this and does not want to violate x's rights.
The reasons why I ignored these features are, first, methodological: I claim that there is a basic nucleus in the notion of permission which is only enriched, not eliminated, by the notion of x's rights and y's recognition of them. Second, there are practical reasons: studying permission based on rights and authority is more complex than studying this poorer form of practical permission. But of course this is just one step.
So, in the great majority of permission episodes (and in a more specific notion of permission) there is mutual belief between x and y about the fact that x is entitled to prohibit y from using a resource he "owns" or from doing something. Thus the obstacles that x could oppose to y's action are not only practical or dispositional but also normative obstacles. We know that x could block (or disturb) y's goal not only with some practical action (fighting, concealing or destroying a resource, etc.) but simply by prohibiting y's action. And x could be in a position to prohibit it not just in a weak sense (just expressing his opposite goal), but in a stronger normative sense. There is a normative prohibition when the expressed prescription is not just the individual personal will of x, but is a norm (Conte and Castelfranchi). In this case y's action would import the violation of some normative prescription related to x's will. x's will in some way creates a norm ("authority" is nothing but this capability).
As I said, there is a poor form of practical Prescription/Prohibition, as there is a poor form of permission. If x is able to block y, to prevent y from entering a room or a way, and he declares to y his goal that y not pass through, his intention to block her, and declares this in order to induce y not to enter, x is practically "prohibiting" y from entering (independently of any rights). There is an imperative not to do something. Of course this prescription is not a Norm. Under which conditions does x's will create an instantiated norm for y? This is the problem.
In conclusion, one should distinguish between Normative Permission/Prohibition/Prescription and non-normative, merely personal and practical Permission/Prohibition/Prescription 10. I tried to analyse the latter, claiming that there is a common core, and with the purpose of describing the social-interactive basis of the emergence of normative notions and relations.
10 This is different from legal permission (law) by authority, which is a more complex, three-term relationship among three agents: x (the authority) gives the permission to y and gives a complementary prohibition to z against contrasting y's permitted behaviour; x prescribes to z to acknowledge y's rights.
In order to fully understand the notion of permission also at the normative and
institutional level, the analysis of normative prescription and adoption (Conte, ; Conte)
and of rights [(xxx)] is needed.
To sum up
In sum, face-to-face permission is a social relation between two agents x and y relative to a possible intentional action a of y. It implies:
• that y depends on x as for a (and that x has power over y as for a);
• that x adopts y's goal (although in a passive form: not preventing it);
• that there is a social commitment of x to y not to contrast y.
It creates rights for y and corresponding obligations for x. It empowers y. It also requires some either explicit or implicit communication (to ask for / to give permission), since it is based on mutual beliefs between x and y about the previous conditions. A compact sketch of these conditions follows.
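Purely as an illustrative restatement, the conditions above can be conjoined in a single predicate; every parameter name is an assumption of this sketch.

```python
def face_to_face_permission(y_depends_on_x: bool,
                            x_adopts_y_goal: bool,
                            s_committed_not_to_contrast: bool,
                            mutual_belief_and_communication: bool) -> bool:
    # Face-to-face permission requires all the listed conditions jointly.
    return (y_depends_on_x
            and x_adopts_y_goal
            and s_committed_not_to_contrast
            and mutual_belief_and_communication)
```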
References
[1] J. Bell. Changing Attitudes. In [x], pp. 40-55.
[2] M.E. Bratman, D.J. Israel, and M.E. Pollack. Plans and resource-bounded practical reasoning. Computational Intelligence 4: 349-355, 1988.
[3] C. Castelfranchi. Social power: A missed point in DAI, MA and HCI. In Decentralized AI, Y. Demazeau and J.P. Mueller (eds.), 49-62. North-Holland: Elsevier, 1990.
[4] C. Castelfranchi. Commitments: from individual intentions to groups and organizations. In Proceedings of ICMAS'95, San Francisco, June 1995, AAAI/MIT Press.
[5] C. Castelfranchi. Prescribed Mental Attitudes in Goal-Adoption and Norm-Adoption. In Pre-proceedings of the ICMAS'96 Workshop on "Norms in MAS", Kyoto, 1996.
[6] C. Castelfranchi, M. Miceli, and A. Cesta. Dependence relations among autonomous agents. In E. Werner and Y. Demazeau (eds.), Decentralized A.I. 3, Amsterdam: Elsevier Science Publishers B.V., pp. 215-231, 1992.
[7] Ph. Cohen and H. Levesque. Intention is Choice with Commitment. Artificial Intelligence 42 (1990), pp. 213-261.
[8] R. Conte and C. Castelfranchi. Norms as mental objects. From normative beliefs to normative goals. In Pre-proceedings of the AAAI Spring Symposium on "Reasoning about Mental States: Formal Theories & Applications", AAAI, Stanford, CA, March 23-25, 1993.
[9] R. Conte and C. Castelfranchi. Cognitive and Social Action. London: UCL Press, 1995.
[10] R. Falcone and C. Castelfranchi. "On behalf of ...": levels of help, levels of delegation and their conflicts. 4th ModelAge Workshop: "Formal Models of Agents", Certosa di Pontignano (Siena), 15-17 January 1997.
[11] A. Haddadi. Communication and Cooperation in Agent Systems. A Pragmatic Theory. Springer LNAI 1056, Berlin, 1996.
[12] N.R. Jennings. Commitments and conventions: The foundation of coordination in multi-agent systems. The Knowledge Engineering Review, 8(3), pp. 223-250, 1993.
[13] B. van Linder, W. van der Hoek, and J.J.Ch. Meyer. Formalising Motivational Attitudes: On Wishes, Goals and Commitments. In J. Bell and Z. Huang (eds.), Practical Reasoning and Rationality. Proceedings of the DRUMS II Workshop, Windsor, UK, 1996, pp. 79-94.
[14] M. Miceli and A. Cesta. Strategic social planning: Looking for willingness in multi-agent domains. In Proceedings of the 15th Annual Conference of the Cognitive Science Society. Hillsdale: Erlbaum, 1993.
[15] C.L. Ortiz. The Semantics of Event Prevention. In AAAI'93, Washington, DC, AAAI/MIT Press, pp. 683-688, 1993.
[16] I. Pörn. On the Nature of a Social Order. In J.E. Fenstad et al. (eds.), Logic, Methodology and Philosophy of Science, North-Holland: Elsevier, 553-567, 1989.
[17] A.S. Rao and M.P. Georgeff. Modelling rational agents within a BDI-architecture. In Proceedings of KR'91, pp. 473-484, 1991.
[18] A. Ross. Directives and Norms. London: Stevens & Sons, 1968.
[19] A. Jones and M. Sergot. Institutionalized power: a formal characterization. In MEDLAR II, special issue of the Journal of the IGPL, 1995.
[20] J.S. Sichman, R. Conte, C. Castelfranchi, and Y. Demazeau. A social reasoning mechanism based on dependence networks. In A.G. Cohn (ed.), Proceedings of the 11th ECAI, John Wiley & Sons, pp. 188-192, 1994.
[21] M.J. Wooldridge and N.R. Jennings (eds.). Intelligent Agents. Proceedings of ATAL'94. Springer LNAI 890, Springer-Verlag, Berlin, 1995.