The Strategy of Conflict and Cooperation: Mehmet S. Ismail Revised: 24 September 2023 First Version: 19 August 2018
Mehmet S. Ismail†
arXiv:1808.06750v7 [econ.TH] 25 Sep 2023
Abstract
This paper introduces a unified framework called cooperative extensive form
games, which (i) generalizes standard non-cooperative games, and (ii) allows for
more complex coalition formation dynamics than previous concepts like coalition-
proof Nash equilibrium. Central to this framework is a novel solution concept
called cooperative equilibrium system (CES). CES differs from Nash equilibrium in
two important respects. First, a CES is immune to both unilateral and multilateral
‘credible’ deviations. Second, unlike Nash equilibrium, whose stability relies on the
assumption that the strategies of non-deviating players are held fixed, CES allows
for the possibility that players may regroup and adjust their strategies in response
to a deviation. The main result establishes that every cooperative extensive form
game, possibly with imperfect information, possesses a CES. For games with perfect
information, the proof is constructive. This framework is broadly applicable in
contexts such as oligopolistic markets and dynamic political bargaining.
∗ I thank Martin Antonov for an online implementation of the algorithm I introduce in this paper via
GTE for three-player games, which is available at app.test.logos.bg under ‘BFI’ and at git.io/JfQTX. I
gratefully acknowledge the financial support of the DPE. I am especially grateful to Steven Brams, who
has helped shape my views about cooperation. I also thank Lorenzo Bastianello, Roland Bénabou,
Philippe Bich, Steven Brams, Eric van Damme, Dominik Karos, Andrew Mackenzie, Ronald Peeters,
Andrés Perea, Arkadi Predtetchinski, Martin Strobel, Peter Wakker, Xavier Venel, and participants at
Maastricht University (2019), the University of Paris 1 (2019), conference in honour of Hans Peters
(2020), 2020 ASSET Virtual Meeting, University of Paris 2 Panthéon-Assas (2021), Games, Agents, and
Incentives Workshop (AAMAS) 2021, GAMES 2020 Budapest, King’s College London (Department of
Informatics), London School of Economics and Political Science (Department of Mathematics), DARK
Seminar at the University College London Centre for Artificial Intelligence, and Imperial College London
(Department of Mathematics) for their valuable comments.
† Department of Political Economy, King’s College London, London, UK. e-mail:
mehmet.s.ismail@gmail.com
1 Introduction
I hope we can go back to von Neumann and Morgenstern’s vision of cooperative
games and resurrect it. What is missing from their cooperative game theory
was that it ignored externalities created by coalition forming which affect
other players in strategic situations, e.g., competition among big firms. (Eric
Maskin’s paraphrased response to a question on the open problems in game
theory in Nobel Symposium “One Hundred Years of Game Theory: Future
Applications and Challenges.”)
Eric Maskin’s observation highlights a significant gap in economic theory: the absence
of a comprehensive framework to accommodate the complex interplay of coalition formation, externalities, and strategic behavior. This is not a recent concern: as early as his seminal work, von Neumann (1928) acknowledged this problem. He specifically
pointed out that the strategic cooperation between any two players in a three-person
zero-sum game could destabilize the maximin solution, summarizing this issue by stating,
“a satisfactory general theory is as yet lacking” (see section 3).
This long-standing open problem serves as the motivation for the current paper. The
aim here is twofold: (i) to present a unified framework, called cooperative extensive form
games, which accommodates both externalities and synergies arising from strategic coali-
tion formation, and (ii) to introduce a novel solution concept called the cooperative equi-
librium system (CES). I show that non-cooperative extensive form games are a special
case of cooperative extensive form games, wherein players may strategically cooperate—
e.g., by writing a possibly costly contract—or act non-cooperatively. The main result of
this paper establishes that every cooperative extensive form game with possibly imperfect
information possesses a CES (Theorem 2). In particular, in perfect information games,
a CES consisting solely of pure strategies exists (Theorem 1), in which case the proof is
constructive. To the best of my knowledge, this paper is the first to offer a comprehensive
solution to this problem.
I situate these contributions within an extensive literature on both cooperative and
non-cooperative games dating back to the ground-breaking book by von Neumann and
Morgenstern (1944). Prior literature includes, but is not limited to, Aumann (1959), Harsanyi (1974), Rubinstein (1980), Bernheim, Peleg, and Whinston (1987), Aumann and Myerson (1988), Bloch (1996), Ray and Vohra (1997), Brams, Jones, and Kilgour (2005), and Chander and Wooders (2020), in which key concepts were defined and the main issues discussed. This list is by no means comprehensive; for further discussion, see section 3.
It is well known that the Nash equilibrium is susceptible to ‘multilateral’ deviations by
coalitions comprising two or more players. Key solution concepts, such as the coalition-
proof Nash equilibrium (Bernheim et al., 1987) and equilibrium binding agreements (Ray
and Vohra, 1997), partially address this issue. However, these concepts encounter limita-
tions when faced with complex coalition formation dynamics. Ray and Vohra (1997, p.
33) articulate this challenge clearly:
“We must state at the outset that our treatment is limited by the assumption
that agreements can be written only between members of an existing coali-
tion... This is also the assumption in the definition of a coalition proof Nash
equilibrium. It must be emphasized that an extension of these notions to the
case of arbitrary blocking is far from trivial.”
Cooperative extensive form games address this limitation by permitting deviating players
to form coalitions not only within their subcoalitions but also with other players. Prior
to making a decision, each player rationally evaluates this dynamic to determine both the
coalition to join and the strategy to choose.
It is helpful to distinguish between a Nash equilibrium point and a CES. Recall that a
Nash equilibrium point is characterized by the absence of unilateral profitable deviations,
given that the strategies of other players are held fixed. In contrast, a CES is a solution
where neither unilateral nor multilateral ‘credible’ deviations exist. This leads to two
key distinctions. First, unlike a CES, an equilibrium point is immune only to unilateral
deviations. Second, the stability of an equilibrium point relies on the assumption that
the strategies of non-deviating players remain fixed—an assumption that is not generally
rational in cooperative extensive form games. Players might, for instance, adapt by
forming new coalitions (i.e., regrouping) and choosing new strategies in response to a
deviation. Consequently, a CES aims for robustness against deviations that are not only
profitable but also credible. Credibility, a recursive notion formally defined in section 5,
ensures that a deviating player is better off when compared to a recursively defined
‘counterfactual.’ This counterfactual considers the potential chain reactions triggered
by the initial deviation.
Finally, in section 7, I apply the CES to various theoretical domains. These include
oligopolistic competition with mergers, dynamic logrolling as conceptualized by Casella
and Palfrey (2019), and ‘corruption,’ where one player buys another’s cooperation to
influence a particular action. Specifically, I introduce a general logrolling model and show
that it always possesses a CES. This existence result extends Casella and Palfrey’s (2019)
result to logrolling games with (i) ‘dynamically rational’ voters, (ii) any type of voting
rule, (iii) any type of preferences, and (iv) imperfect information.
i and j are no longer independent players.3 The expectation is that, once a coalition C is
formed, its agents will select strategies based on player C’s utility function, uC . At each
information set, any coalition may form, and any available action may be chosen.
Third, the formation of coalitions may entail monetary and/or psychological costs.
Once a coalition is formed, agents retain the freedom to exit the coalition, regardless of
whether they have entered into a potentially costly and binding contract. However, exiting
a coalition may also incur monetary and/or psychological costs. The idea that a contract
is ‘binding’ merely implies that the cost of breaching it exceeds the cost of compliance.
Nevertheless, players retain full agency and may make any choices, even those considered
‘irrational.’4 Neither the formation of coalitions nor the strategies selected by players are
necessarily disclosed at each information set. While rational players may infer potential
coalition formations and strategy selections, these considerations pertain to the solution
concept rather than the framework itself.
Fourth, in cooperative strategic games, players possess the freedom to either act inde-
pendently or form coalitions, which could be via formal or informal mechanisms/institutions.
However, the ability to form a coalition may be subject to restrictions, and coordinating actions may be infeasible under certain plausible circumstances. If, for whatever reason, some players are unable to form a coalition, this constraint is incorporated into the
cooperative strategic game framework, allowing all players to rationally account for this
constraint.5
Informal definition of the solution concept: In cooperative extensive form games,
the term system refers to a solution concept that comprises a family of collections of
strategy profiles and a family of collections of coalitions. In contrast, in non-cooperative
games, a solution concept comprises a single strategy profile. The term parallel game is
used to describe a clone of an original game, altered such that the player acting at the root
joins a coalition with one or more players. (These terms are formally defined in section 4.)
A parallel game is intended to model counterfactual scenarios of coalition formation.
A cooperative equilibrium system (CES) is a system from which no unilateral or mul-
tilateral credible deviation exists, compared to the corresponding parallel game. To elab-
orate, the following conditions must hold true in every subgame and every parallel game:
Given an appropriate parallel game that is determined recursively, singleton players have
3. A related but somewhat opposite approach, known as ‘player splitting,’ is studied by Perea y Monsuwé et al. (2000) and Mertens (1989), and was used in the refinement of Nash equilibrium.
4. To clarify, players are not physically constrained to adhere to the terms of a contract.
5. For a comprehensive discussion on enforcement issues in both cooperative and non-cooperative games, see Serrano (2004).
[Figure 1: The market entry game in extensive form. Firm 1 moves at the root; Firm 2 (followed by Firm 3) and Firm 4 move next, each choosing between F and A, with payoff vectors for Firms 1–4 at the terminal nodes.]
[Figure 2: The three-step illustration of the CES. Panel A: all four players act independently; Panel B: Firms 2 and 3 form the coalition ‘2,3’; Panel C: Firms 1 and 2 form the coalition ‘1,2’. In each panel, the root player chooses between S and L, and the subsequent players choose between F and A.]
firms will still engage in such alliances as long as the mutual benefits outweigh these costs.
Secondly, for each non-empty coalition C ⊆ {1, 2, 3, 4}, the coalitional utility function is
defined as uC ( · ) := Σi∈C ui ( · ).6
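This additive definition can be sketched in a couple of lines of Python (a minimal illustration; the function name and the dictionary layout are my own, not part of the paper):

```python
def coalition_utility(outcome, coalition):
    """Additive coalitional utility u_C: the sum of the members' utilities u_i."""
    return sum(outcome[i] for i in coalition)

# The outcome (30, 40, 30, 0) of the market entry game, as a firm -> utility map:
outcome = {1: 30, 2: 40, 3: 30, 4: 0}
print(coalition_utility(outcome, {2, 3}))  # 70, i.e., u_{2,3}(30, 40, 30, 0)
```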
Figure 2 illustrates the CES for this market entry game in a three-step process. Step
A starts with the standard subgame perfect Nash equilibrium (SPNE), in which each
player acts independently and non-cooperatively. The outcome of this solution is (30, 40,
30, 0). Step B shows that Firm 2 and Firm 3 would both benefit from a collusion in
which they both opt for action F, yielding the outcome (0, 60, 40, 0). In this case, the
coalitional utility u2,3 (0, 60, 40, 0) = 100 is maximized compared to u2,3(30, 40, 30, 0) = 70
and u2,3 (10, 30, 60, 0) = 90.7 Anticipating this collusion, Firm 1 opts to enter market L,
leading to the outcome (20, 0, 0, 100).
Step C shows that both Firm 1 and Firm 2 collude by best responding to the outcome
in Step B. Specifically, Firm 1 chooses to enter market S and Firm 2 accommodates,
leading to the outcome (30, 40, 30, 0), which is strictly better for both compared to
the outcome in Step B. In response to the actions of Firm 1 and Firm 2, Firm 3 and
Firm 4 choose the best-response actions A and F, respectively. Importantly, there are no
additional opportunities for beneficial cooperation between any players.
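The reasoning in Steps A–C can be replayed numerically. In the sketch below, only the payoff vectors come from the text; the dictionaries and the helper function are my own:

```python
# Outcomes quoted in Steps A-C, as firm -> utility maps.
spne      = {1: 30, 2: 40, 3: 30, 4: 0}    # Step A: non-cooperative SPNE
collusion = {1: 0,  2: 60, 3: 40, 4: 0}    # Step B: Firms 2 and 3 both choose F
firm1_L   = {1: 20, 2: 0,  3: 0,  4: 100}  # Step B: Firm 1 enters market L instead
third     = {1: 10, 2: 30, 3: 60, 4: 0}    # the third outcome compared in Step B

def u(outcome, coalition):
    # Additive coalitional utility, as defined above.
    return sum(outcome[i] for i in coalition)

# Step B: coalition {2, 3} strictly prefers collusion (100 > 70 and 100 > 90) ...
assert u(collusion, {2, 3}) > max(u(spne, {2, 3}), u(third, {2, 3}))
# ... which makes Firm 1's threat of entering market L credible (20 > 0).
assert firm1_L[1] > collusion[1]
# Step C: Firms 1 and 2 each strictly prefer the SPNE outcome to Step B's outcome.
assert spne[1] > firm1_L[1] and spne[2] > firm1_L[2]
print(tuple(spne[i] for i in (1, 2, 3, 4)))  # (30, 40, 30, 0), the CES outcome
```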
As a result, the ‘on-path’ CES of this game can be summarized as follows: Firms 1 and 2 form a coalition, agent 1 chooses S, agent 2 chooses A, player 3 chooses A, and player 4 chooses F. The outcome of this CES is (30, 40, 30, 0), as depicted in Figure 2 (C).
At the outset, the collusion between Firm 1 and Firm 2 to obtain (30, 40, 30, 0) may
appear self-evident, particularly because it coincides with the SPNE outcome. However,
the following two modifications of the game illustrate that the collusion heavily relies on
‘off-path’ credible threats.
First, suppose that, all else being equal, the outcome (20, 0, 0, 100) is changed to
(20, 50, 0, 100). Recall that the reason Firm 2 colluded with Firm 1 in Step C was the
credible threat of Firm 1 opting for market L. In this modified game, however, such a
6. It is worth noting that the framework of cooperative strategic games, as introduced in section 4, accommodates any form of von Neumann-Morgenstern coalitional preferences.
7. The notation “2, 3” signifies a coalition between Firm 2 and Firm 3.
threat would no longer be credible. Firm 2 would have no incentive to collude with Firm
1 in Step C, given the new outcome specified in Step B, because 50 > 40. Consequently,
the CES in the modified game yields the outcome (20, 50, 0, 100), even though the SPNE remains unchanged.
Second, suppose that the outcome (25, 0, 0, 95) is changed to (60, 0, 0, 70), holding
everything else constant in the original game (Figure 1). In this modified game, the CES
outcome would be (60, 0, 0, 70). This is because Firm 4, foreseeing the potential collusion
between Firm 1 and Firm 2 in Step C, would extend an offer of collusion to Firm 1 in
the new ‘Step D.’ Firm 1 and Firm 4 would form a coalition, denoted {1, 4}, because (i)
both Firm 1 and Firm 4 are strictly better off compared to the outcome in Step C, and
(ii) choosing L and A maximizes player {1, 4}’s utility in Step D.8 Therefore, the CES in this modified game yields the outcome (60, 0, 0, 70).
3 Related literature
The study of connections between non-cooperative games and cooperative games, which
abstract away from strategic interaction, has its origins in the seminal work of von Neu-
mann (1928). In a lesser-known portion of this influential paper, von Neumann introduced
the maximin solution in a three-person zero-sum game (see p. 311). Notably, von Neumann observed that any two players within this three-person game could form a strategic
coalition for mutual benefit, thereby destabilizing the maximin solution. Furthermore,
he pointed out that the opportunities for strategic cooperation grow as the number of
players increases. He concludes by stating, “aber eine befriedigende allgemeine Theorie
8. Importantly, the incentive for Firm 4 to collude with Firm 1 in Step D arises from the credible threat of collusion between Firm 1 and Firm 2 in Step C, which would result in an unfavorable outcome for Firm 4.
fehlt zur Zeit noch,” which has been translated by Sonya Bargmann as “a satisfactory general theory is as yet lacking.”
Von Neumann (1928) appears to be the first to articulate the issue of strategic cooperation
as an open problem. His sustained interest in the subject came to light through his
subsequent discussions with mathematician Kurt Gödel. Evidence of these interactions
can be found in Gödel’s personal notes, which have recently been published by von Plato
(2021). In these notes, Gödel observes:
“with games of more than two players, there does not always exist any reason-
able statistical system of play.”
3.2 Cooperative approaches in non-cooperative games
A closely related strand of literature is the study of coalition-proofness in non-cooperative
games, exemplified by concepts such as strong Nash equilibrium by Aumann (1959),
strong perfect equilibrium by Rubinstein (1980), and coalition-proof Nash equilibrium by
Bernheim et al. (1987). Roughly speaking, a strong Nash equilibrium is a Nash equilibrium
in which no coalition can deviate in a manner that would benefit all of its members.
Coalition-proof Nash equilibrium is a weaker notion than strong Nash equilibrium; it
requires that coalitional deviations—while holding the strategies of other players fixed—
must be internally consistent, meaning that subcoalitions should not have an incentive to
further deviate.
The most salient difference between these solutions and the CES presented in this
paper lies in their relationship to Nash equilibrium. Specifically, these established equi-
librium concepts refine the set of Nash equilibria to the extent that they may not always
exist. In contrast, the CES is neither a refinement nor a coarsening of Nash equilibrium,
but its existence is guaranteed, as shown in Theorem 2.
Of note, as Bernheim et al. (1987, p. 7) themselves point out, coalitional deviations in
coalition-proof Nash equilibrium are restrictive in the sense that deviating subcoalitions
do not consider forming a coalition with non-deviating players:
“This rules out the possibility that some member of the deviating coalition
might form a pact to deviate further with someone not included in this coali-
tion. Such arrangements are clearly much more complex than those made
entirely by members of the coalition itself... Clearly, further [work] is required to
resolve these issues in a fully satisfactory way.”
These complexities regarding the strategic formation of coalitions are addressed in co-
operative strategic games: every coalition structure is considered and may potentially
emerge as part of the CES solution in such games.
Finally, another major difference lies in the nature of the predictions made by each
solution concept. The concepts in the coalition-proofness literature predict a certain
strategy profile that is coalition-proof according to a specific notion. In contrast, the CES
is formally a family of collections of strategy profiles and coalitions, where the prediction
includes not only strategy profiles but also a set of ‘stable’ coalitions.
3.3 Coalition formation and farsighted (non-cooperative) ap-
proaches to cooperative games
Harsanyi’s (1974) seminal work in cooperative games has led to a recently burgeoning body
of literature incorporating elements of non-cooperative games into cooperative games, such
as farsightedness and backward induction. This integration has significantly enhanced our
understanding of both frameworks and their interrelations.
A vast literature exists which examines coalition formation, networks, team reason-
ing, and farsighted cooperative approaches in various contexts; see, for example, Ichiishi
(1981), Moulin and Peleg (1982), Greenberg (1990), Zhao (1992), Sugden (1993, 2003),
Chwe (1994), Perry and Reny (1994), Moldovanu and Winter (1995), Bloch (1996), Ray
and Vohra (1999), Bacharach (1999), Maskin (2003), Brams, Jones, and Kilgour (2005),
Bachrach et al. (2009), Jackson and Dutta (2001), Dutta, Ghosal, and Ray (2005), Her-
ings, Predtetchinski, and Perea (2006), Brandenburger and Stuart (2007), Acemoglu,
Egorov, and Sonin (2008), Herings, Mauleon, and Vannetelbosch (2009), Grabisch and
Rusinowska (2010), Jackson (2008), Jackson and Wolinsky (1996), Karos and Robles
(2021), and Kimya (2020). For an informative and extensive review of the literature in
economics and computer science, see, e.g., Ray (2007), Ray and Vohra (2015), Chalki-
adakis, Elkind, and Wooldridge (2011), Dafoe et al. (2020), Elkind, Rahwan, and Jennings
(2013) and Aziz and Savani (2016), along with the references therein.
As mentioned in the introduction, similar to Bernheim et al. (1987), Ray and Vohra
(1997, p. 33) note that their study is limited to ‘internally consistent’ deviations:
“We must state at the outset that our treatment is limited by the assumption
that agreements can be written only between members of an existing coali-
tion; once a coalition breaks away from a larger coalition it cannot forge an
agreement with any member of its complement. Thus, deviations can only
serve to make an existing coalition structure finer—never coarser.”
Diamantoudi and Xue (2007) extend Ray and Vohra’s (1997) equilibrium binding agree-
ment notion by characterizing it with von Neumann-Morgenstern (vNM) stable sets in an
appropriate cooperative game, and then allowing for any internal or external deviations.
As is well-known, vNM stable sets may not exist in general, and this also applies to the
extended equilibrium binding agreement notion of Diamantoudi and Xue (2007). In ad-
dition, both Ray and Vohra (1997) and Diamantoudi and Xue (2007) study a consistency
notion that is not based on extensive form representation, and they assume that contracts
are costless in their models. They also show that inefficiency issues are not resolved by
coalition formation, nor are they resolved in cooperative strategic games. For further
details and an extension of these models, I refer the interested reader to ‘Equilibrium
Process of Coalition Formation’ in Ray and Vohra (2015), which provides an extensive
survey of the recent literature.
Starting from a standard extensive form game with perfect information, Chander and
Wooders (2020) construct a cooperative game and introduce the notion of the subgame
perfect core; like the standard core, it may be empty. Their main finding contributes to the
Nash program: If the subgame perfect core of a perfect information game is non-empty,
then it can be implemented as an SPNE payoff of a relevant non-cooperative game. As
discussed with respect to Bernheim et al.’s (1987) and Ray and Vohra’s (1997) concepts,
Chander and Wooders’s (2020) concept assumes that once a coalition forms, the remaining
players remain independent and do not ‘regroup,’ though Chander and Wooders
do briefly discuss a special case in which the remaining players form a single coalition, as
discussed in Maskin (2003). In addition, Chander and Wooders discuss extending their
analysis to more general extensive form games. They define the notion of a subgame
perfect strong Nash equilibrium in general extensive form games, which is a strong Nash
equilibrium in every subgame of the game and is akin to the strong perfect equilibrium
of Rubinstein (1980).
Aumann and Myerson (1988) were the first to introduce endogenous network forma-
tion games in a strategic setting. Given a cooperative game, they construct an auxiliary
link formation game, which is a non-cooperative extensive form game with perfect infor-
mation, in which pairs of players are offered the opportunity to form links in a sequential
order.9 Subgame perfect equilibrium links in this non-cooperative extensive form game
with perfect information give rise to a ‘cooperation structure’ à la Myerson (1977), and
players receive their Myerson value in the cooperative game restricted to this cooperation
structure. Because such link formation games with perfect information are a subset of
non-cooperative extensive form games, they are also a subset of cooperative extensive
form games. To see this, note that any link formation game is non-cooperative in nature—i.e., players choose their actions independently.
actions—whether they are called ‘links’ between two players or named alphabetically—do
not matter in terms of the predictions. An interesting direction for future research would
be to extend non-cooperative link formation games to more general cooperative strategic
games and explore whether the CES predictions coincide with cooperative solutions such
as the Shapley and Myerson values. The network formation literature is not restricted
9. For more details on this model, see, e.g., van den Nouweland (2005).
to non-cooperative link-formation games; there is a vast literature on general network
formation games, which is distinct from but closely related to the coalition formation
literature.10
The current paper mainly differs from the above literature in two respects: the frame-
work and the solution concept. First, various setups used in this literature are not directly
comparable to either standard extensive form games or cooperative strategic games. This
is in part due to the ‘cyclic’ behavior in coalition formation frameworks and the fact that
more than one player or coalition can choose an ‘action’ at a given decision node, which
cannot occur in extensive form games.11 Second, in cases where a coalition formation setup
such as Kimya’s (2020) is comparable to an extensive form game with perfect information, the solution concept in question generally coincides with standard non-cooperative concepts such as backward induction. This is not surprising because the main idea
is to incorporate non-cooperative notions into cooperative games, as first proposed by
Harsanyi (1974).
non-cooperative extensive form games as a special case (see Remark 11).
Name Notation Element
Agents N = {1, 2, . . . , n} i
Players or coalitions P ⊆ 2N i or C
Players, including a non-singleton coalition C PC
Information sets H h
Non-terminal nodes X x
Terminal nodes Z z
Game tree T =X ∪Z
Player function I :X→N
Player i’s actions at h Ai (h) ai (h)
Mixed strategy profiles S s
Behavior strategies ×hi ∈Hi ∆(A(hi )) bi
Utility of player C given P uC ( · | P )
Cooperative extensive form game Γ = (P, T, I, u, S, H)
Subgame at x Γ(x)
Subtree at h T (h)
Root of subgame containing h root(h)
Set of all subgames at root(h) G(h) gh
Immediate successors of x f (x)
Information set following h f ′ (h)
All successors of x F (x)
Parallel game of Γ(x) ΓP C
Parallel tree of T (h) TPC
Strategy system σ = (sh,g )h∈H,g∈G(h)
Strategy profile in subgame g ∈ G(h) σ(h, g)
Partition system π = (πh,g )h∈H,g∈G(h)
Reference system at h indexed by j rj (h)
System (σ, π)
With a slight abuse of notation, the singleton coalition {i} is represented as i. Each i ∈ N
has a finite set of pure actions Ai .
Remark 1. The terms, player and coalition, are used interchangeably throughout the
paper.
During the game, two or more players in P may cooperate to create a new player, as
follows.
Forming a coalition: If, for some k, players i1 , i2 , ..., ik form a coalition C = i1 ∪ i2 ∪ · · · ∪ ik ,
then each singleton in C becomes an agent of coalition C, which is also called player C.14
If each ij is a singleton, then C = {i1 , i2 , ..., ik }.
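A minimal way to encode this merge operation in Python (a sketch; the class and function names are my own, and the paper's formalism prescribes no particular data structure):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Player:
    """A player is identified with its set of agents; a singleton is an agent."""
    members: frozenset

def form_coalition(*players):
    # C is the union of the merging players' agents; every former
    # singleton in C becomes an agent of the new player C.
    return Player(frozenset().union(*(p.members for p in players)))

p1, p2, p3 = (Player(frozenset({i})) for i in (1, 2, 3))
c = form_coalition(p1, p2, p3)
print(sorted(c.members))  # [1, 2, 3]
```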
Remark 2. As previously emphasized in section 2, the formation of coalitions may en-
tail monetary and/or psychological costs. Agents maintain the option to dissolve their
coalitions even in the presence of a binding contract. However, the termination of such
contracts could also entail both monetary and psychological costs. Importantly, players
have the freedom to opt for actions that may be deemed ‘irrational.’ While the framework
itself imposes no explicit constraints on player behavior, the solution concept—set to be
defined in section 5—does introduce such limitations.
Remark 3 (Incomplete information games). To circumvent additional complications in
notation, I do not separately introduce the notation for (Bayesian) cooperative strategic
games with incomplete information. These can be considered as cooperative strategic
games with complete but imperfect information à la Harsanyi (1967), where nature chooses
the types at the beginning of the game.
Utility functions: Let T = X ∪Z denote a game tree, x0 the root of the game tree, z ∈ Z
a terminal node, defined as a node that is not a predecessor of any other node, x ∈ X a
non-terminal node in the tree, and x̄ = |X| the cardinality of X (for an illustration, see
Figure 10 in the Appendix). Let C ⊆ 2N be the set of feasible coalitions, which includes
every C ∈ P , and let P be the set of feasible partitions where each (player) partition
P ′ ∈ P contains only feasible coalitions. The interpretation of a feasible coalition C ∈ C
is that its members have the potential to form C, but this does not necessarily imply that
its members have a preference for forming C.15
14. If a player does not want to join a coalition, then the coalition cannot form—i.e., any member can veto the formation of the coalition to which they belong. For different assumptions on coalition formation, see section 5.3.
15. It is possible that a feasible partition P ′ ∈ P is equal to any partition of, e.g., {1, {1, 2}, 3} in a subgame of Γ where {1, 2} is treated as a distinct player. The underlying intuition is that a player (e.g., player 1) may prefer to stay independent at some information sets, while preferring to form a coalition at other information sets within a given subgame.
Let u denote a profile of utility functions for each feasible coalition and partition.
Let uC ( · |P ′) : Z → R denote the payoff function of feasible coalition C ∈ C given
feasible partition P ′ ∈ P . For each terminal node z, uC (z|P ′ ) denotes the von Neumann-
Morgenstern utility of player C if z is reached, given the partition of players P ′ . With a
slight abuse of notation, uC ( · |P ′) is abbreviated as uC|P ′ ( · ).
Remark 4. As previously noted, if a coalition is not feasible, then there is no utility
function for that coalition.16
Remark 5. Although a coalitional utility function may generally be any vNM utility
function, in some situations one could imagine that the coalitional utility may originate
from a cooperative game.
For a player i ∈ P , the utility given a player partition P ′ is denoted as ui ( · |P ′). Note
that ui ( · |P ) is not necessarily equivalent to ui ( · |P ′), where P ′ ≠ P , due to the potential
for externalities—either positive or negative—and synergies that may arise when a player
joins a coalition. For example, a player i may derive different utility from an outcome
depending on whether i forms a coalition with player j as opposed to joining a coalition
with j ′ 6= j. (More generally, the formation of a coalition could also impact the utility
of players external to the coalition.) This distinction will be useful in determining which
coalitions are ‘individually rational.’ Upon joining a coalition C, the strategic decisions
within that coalition are guided by the coalition’s utility function uC , whereas player i
makes the decision to join based on the individual utility function ui .
Remark 6 (The interpretation of forming a coalition). When forming a coalition C, the
agents within player C select their strategy based on the utility function of player C. For
an illustration of the formation of a coalition, refer to Figure 11 in the Appendix.
Remark 7. Any psychological or monetary costs incurred due to the formation of a coali-
tion are incorporated into the utility functions, given the player partition. More specifi-
cally, there could be a fixed cost of forming a coalition.
then A(x) = A(x′ ). For each player i, Hi specifies player i’s set of all information sets.
Let A(h) denote the set of pure actions at h and Ai = ∪hi ∈Hi A(hi ) denote player i’s set
of all pure actions.
A pure strategy, s′i ∈ Si′ , of player i is a function s′i : Hi → Ai satisfying s′i (hi ) ∈ A(hi )
for all hi ∈ Hi . The set of all pure strategies of i is denoted by Si′ = ×hi ∈Hi A(hi ). A pure
strategy profile is represented by s′ ∈ S ′ .
Let ∆(A(hi )) denote the set of probability distributions over A(hi ), bi ∈ ×hi ∈Hi ∆(A(hi ))
a behavior strategy of i, and b a profile of behavior strategies. The profile s ∈ S denotes
the mixed strategy profile that is equivalent to behavior strategy profile b in the sense of
Kuhn (1953). I assume that Γ is common knowledge.
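The product structure of the pure strategy set can be made concrete with a short sketch; the example below enumerates S′i = ×hi∈Hi A(hi) for a hypothetical player with two information sets (all labels are illustrative assumptions).

```python
from itertools import product

# Player i's information sets and the actions available at each
# (a hypothetical two-information-set example).
H_i = ["h1", "h2"]
A = {"h1": ["L", "R"], "h2": ["a", "b", "c"]}

# A pure strategy is a function s_i: H_i -> actions with s_i(h) in A(h);
# enumerating the product of the action sets yields all pure strategies.
pure_strategies = [dict(zip(H_i, choice))
                   for choice in product(*(A[h] for h in H_i))]

print(len(pure_strategies))  # 6 = |A(h1)| * |A(h2)|
print(pure_strategies[0])    # {'h1': 'L', 'h2': 'a'}
```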
Let uC|P ′ (s) denote the von Neumann-Morgenstern expected utility (or payoff) for
player C given P ′. The outcome of a mixed strategy profile s given some partition P ′ is
a profile of expected utilities, (uC (s|P ′))C∈P ′ , one for each player in P ′ .
As standard in the literature, a deviation from a given strategy profile by an individual
player is called a unilateral deviation. A deviation by a non-singleton coalition is called
a multilateral deviation.
Remark 8. When two or more agents form a coalition, C, their information sets merge.
In incomplete information games, their types are also revealed to each other.
there is no other subgame starting at an information set between root(h) and h—i.e.,
root(h) is the closest singleton ‘ancestor’ of h. Note that in perfect information games,
x = root(x) for all nodes x ∈ X. F (h) and f (h) denote the set of all successor information
sets and the set of immediate successor information sets of information set h (excluding
h in both definitions), respectively.
In certain games with imperfect information, the notion of a ‘preceding’ set of in-
formation sets for a given h within a single subtree can be ambiguous. In imperfect
information games, define the correspondences F ′ and f ′ as extensions of F and f , re-
spectively. These extensions satisfy the following conditions. If h = x, i.e., h is a singleton,
then F ′ (h) = F (h). This condition ensures that the extended correspondence F ′ coincides
with the successor correspondence F in singleton information sets. For any h′ ∈ F(h),
h ∉ F′(h′). In words, if h′ is a successor of h, then h cannot be a successor of h′ according
to F′. Similarly, if h = x, then f′(h) = f(h). And, if h is not a singleton, then
f′(h) = F′(h) \ F′(ĥ), where ĥ ∈ f′(h). In words, in non-singleton information sets, an
immediate successor of h according to f′ cannot be in the set of successors of the immediate
successor of h. These conditions are natural extensions of the original correspondences.
It is worth noting that in games with perfect information, F ′ and f ′ reduce to F and f ,
respectively.
Parallel games and parallel trees: Let Γ be a game, x ∈ X a non-terminal node,
i = I(x) the active player at x, Γ(x) the subgame starting at x, and P ′ the set of players
in Γ(x), which is a subset of 2N , where N = {1, 2, ..., n}. Let h ∈ H be a non-terminal
history, which is equal to x when Γ(x) is a game of perfect information.
For a non-singleton coalition C of players in P′, define PC = {C′ ∈ P′ | C′ ∉ C} ∪ {C},
i.e., the player partition in which only the agents in C form a coalition. For example, if
P′ = {1, 2, 3, 4, 5, 6} and C = {1, 2, 4}, then PC = {{1, 2, 4}, 3, 5, 6}. For a player i, define
F Ci(h) = {C′ ∈ C | i ∈ C′, C′ ⊆ {i} ∪ ⋃h′∈F(h) I(h′)}, i.e., the set of all feasible coalitions i can
form with players who act after i.18 As an example, suppose that P′ = {1, 2, 3}, h = x0,
I(x0) = i, and i = 1. Then, F C1(x0) = {{1, 2}, {1, 3}, {1, 2, 3}}.
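The two constructions above can be sketched in a few lines; representing partitions and coalitions as sets of frozensets is an illustrative assumption, not the paper's notation.

```python
# Sketch of the partition P_C and the feasible-coalition set FC_i
# (coalitions are frozensets of player labels).

def P_C(P_prime, C):
    """Partition in which only the agents of C form a coalition."""
    C = frozenset(C)
    return {Cp for Cp in P_prime if not Cp <= C} | {C}

def FC_i(i, later_players, feasible):
    """Feasible coalitions containing i, drawn from i and players acting after i."""
    allowed = frozenset({i}) | frozenset(later_players)
    return {C for C in feasible if i in C and C <= allowed}

# Example from the text: P' = {1,...,6} and C = {1,2,4} give {{1,2,4},3,5,6}.
P_prime = {frozenset({j}) for j in range(1, 7)}
print(sorted(map(sorted, P_C(P_prime, {1, 2, 4}))))
# [[1, 2, 4], [3], [5], [6]]

# Example from the text: P' = {1,2,3}, player 1 at the root, 2 and 3 act later;
# assuming every coalition containing 1 is feasible:
feasible = {frozenset(s) for s in ({1, 2}, {1, 3}, {1, 2, 3})}
print(sorted(map(sorted, FC_i(1, {2, 3}, feasible))))
# [[1, 2], [1, 2, 3], [1, 3]]
```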
A parallel game of Γ(x), denoted by ΓPC , is the game in which the set of players P ′ is
replaced with the set of players given by the partition PC where C ∈ F Ci (x). The set of
18 In section 5.3, I discuss the extension of this model to the case in which coalitions can form more generally. For example, it is possible that an ‘outside’ player who is not part of the game, i.e., who has no action to choose from, may form a coalition with some of the players in the game. While this may be a reasonable assumption in some strategic situations, I do not include it in the baseline model, for the sake of simplicity.
all parallel games is defined as follows: par(Γ(x)) := {ΓPC | C ∈ F Ci(x)}.
For an illustration of parallel games of a game, see Figure 12 in the Appendix. Note that if
the original game is an n-player game, then its parallel game ΓPC is an (n−|C| + 1)-player
game. Note also that each j ∈ C acts as an agent of a single player C in ΓPC . For example,
if Γ is a six-player game, 1 = I(x0 ) (player 1 acts at the root), and P ′ = {1, 2, 3, 4, 5, 6},
then ΓPC, where C = {1, 2, 4}, is the game in which the players are {1, 2, 4}, 3, 5, and 6;
players 1, 2, and 4 are the agents of the coalitional player {1, 2, 4}.
Remark 9. A subgame or parallel game of a cooperative extensive form game is a coop-
erative extensive form game (not a non-cooperative game).
Let Γ(h) be the subgame starting at root(h), i ∈ I(h), and T (h) be a subtree of game
Γ(h). A parallel tree of T (h), denoted by TPC , is a subtree of Γ(h) in which the set of
players P ′ is replaced with the set of players given by the partition PC where C ∈ F Ci (h).
The set of all parallel trees is defined as follows: par(T(h)) := {TPC | C ∈ F Ci(h)}.
Systems: A system or a ‘solution profile’ is a pair (σ, π) which is defined as follows. Let
G(h) be the set of all subgames of the subgame starting at root(h) for some information
set h. For example, if h is the root of game Γ, then G(h) is the set of all subgames of Γ,
including itself. For a given h, let S|gh denote the set of all strategy profiles in subgame
gh ∈ G(h).
Let σ be a strategy system defined as a mapping that maps the pair (h, gh), where
h ∈ H and gh ∈ G(h), to a strategy profile sh,gh ∈ S|gh:
(h, gh) ↦ sh,gh.
Note that g depends on h. For simplicity, the subscripted notation gh will not be used from
now on.
For a given h, σ(h, g) denotes a strategy profile in some subgame g ∈ G(h). Note
that function σ(h, · ) gives for each information set a collection of (possibly different)
strategy profiles, one for each successor subgame: σ(h, · ) = (sg )g∈G(h) . More compactly,
σ := (sh,g )h∈H,g∈G(h) . In summary, σ is a family of collections of strategy profiles.
Remark 10. Let h ∈ H be an information set and g ∈ G(h). Suppose that g ′ is a subgame
of g. In general, σ(h, g)|g′ ≠ σ(h, g′). In other words, σ(h, g′) is not necessarily equal to
the restriction of the strategy profile σ(h, g) to the subgame starting at the root of g′.
This is because a set of agents may prefer not to form a coalition in g′ but prefer to form
one in g, which would affect the choices in its subgame g′. Thus, the strategy profile
chosen in g′ can differ from the strategy profile chosen in g restricted to subgame g′.
Similarly, let π be a partition system defined as a mapping that maps the pair (h, g),
where h ∈ H and g ∈ G(h), to a feasible partition πh,g in P restricted to subgame g:
(h, g) ↦ πh,g.
In other words, given some h, π(h, g) specifies a feasible partition of the players at each
information set in subgame g ∈ G(h). Let π(h, · ) := (πg)g∈G(h) and π := (πh,g)h∈H,g∈G(h).
Function π(h, · ) gives for each information set a collection of (possibly different) partitions
of players, one for each successor subgame. In summary, π is a family of collections of player partitions.
The interpretation of a pair (σ(h, g), π(h, g)) is as follows. Let C be some coalition in
π(h, g). Each agent i of player C chooses their actions in g based on the strategy profile
σ(h, g). Their actions are guided by the coalitional utility function uC ( · |π(h, g)) in the
subgame g. In summary, a system is a complete description of what actions will be played
by which coalitions at every subtree and every parallel tree in the original game, every
subtree of every parallel tree, and every parallel tree of every subtree, and so on. For an
illustration of a system in a three-player game, refer to Figure 13 in the Appendix.
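A system can be represented programmatically as a pair of mappings keyed by (information set, subgame). The labels below are illustrative assumptions; the example mirrors Remark 10's point that the profile assigned to a subgame need not be the restriction of the profile assigned to the larger game.

```python
# A system (sigma, pi) as two mappings keyed by a pair (h, g): an
# information set h and a subgame g in G(h). All keys/values are
# illustrative labels, not the paper's notation.

sigma = {
    ("h0", "G"):  {"h0": "A", "h1": "C"},   # profile chosen for the whole game
    ("h0", "G1"): {"h1": "D"},              # profile chosen for subgame G1
}
pi = {
    ("h0", "G"):  [frozenset({1, 2})],               # 1 and 2 cooperate in G
    ("h0", "G1"): [frozenset({1}), frozenset({2})],  # but not in subgame G1
}

# As in Remark 10: the action at h1 under the whole-game profile differs
# from the action at h1 under the subgame's own profile.
print(sigma[("h0", "G")]["h1"])   # 'C'
print(sigma[("h0", "G1")]["h1"])  # 'D'
```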
For a given h, let [σ(h), π(h)] denote the restriction of (σ, π) to the subgame starting
at h, Γ(h). Let [σ, π] be a system defined in Γ(x) for some x ∈ X. For each x′ ∈ f (x), let
[σ ′ (x′ ), π ′ (x′ )] be a system. Then, [σ, π] is called an extension of [σ ′ , π ′ ] to x if for every
x′ ∈ f (x), [σ(x′ ), π(x′ )] = [σ ′ (x′ ), π ′(x′ )]. Note that this implies that [σ, π] and [σ ′ , π ′]
coincide for every x′′ ∈ F (x).
The outcome of a system (σ, π) in a game Γ is defined as the outcome of the (possibly
mixed) strategy profile σ(x0 , gx0 ) with respect to the players in the partition π(x0 , gx0 )
where gx0 = Γ.
Non-cooperative solution concepts: Let Γ = (P, T, I, u, S, H) be a cooperative strate-
gic game. A mixed strategy profile s is called a Nash equilibrium if for every player j ∈ P ,
sj ∈ arg maxs′j∈S′j uj(s′j, s−j). A mixed strategy profile s is called a subgame perfect Nash
equilibrium (SPNE) if for every subgame G of Γ the restriction of s to subgame G is a
Nash equilibrium in G.
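The unilateral-deviation condition can be checked mechanically; the bimatrix payoffs below are an illustrative example, not from the paper.

```python
# Checking the Nash equilibrium condition for pure strategies in a
# hypothetical two-player bimatrix game.

u1 = {("T", "L"): 3, ("T", "R"): 0, ("B", "L"): 0, ("B", "R"): 1}
u2 = {("T", "L"): 2, ("T", "R"): 0, ("B", "L"): 0, ("B", "R"): 1}

def is_nash(s1, s2):
    """s is a Nash equilibrium iff no player gains by a unilateral deviation."""
    best1 = all(u1[(s1, s2)] >= u1[(d, s2)] for d in ("T", "B"))
    best2 = all(u2[(s1, s2)] >= u2[(s1, d)] for d in ("L", "R"))
    return best1 and best2

print(is_nash("T", "L"))  # True
print(is_nash("T", "R"))  # False
```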
Remark 11 (Generalization of non-cooperative games). Cooperative extensive form games
generalize non-cooperative extensive form games. Specifically, every non-cooperative ex-
tensive form game can be considered a special case of a cooperative extensive form game.
If no coalition involving more than one player is feasible, then a cooperative extensive
form game reduces to a non-cooperative extensive form game.
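In that degenerate case on a perfect-information tree, solving the game amounts to ordinary backward induction, since only the non-cooperative reference system is ever available. A minimal sketch on a hypothetical two-player tree (the tree encoding and payoffs are assumptions):

```python
# Backward induction on a toy perfect-information tree. Each decision node
# is (active_player, {action: child}); each leaf is a payoff dict.
tree = ("1", {
    "L": ("2", {"a": {"1": 2, "2": 1}, "b": {"1": 0, "2": 0}}),
    "R": ("2", {"c": {"1": 3, "2": 0}, "d": {"1": 1, "2": 2}}),
})

def backward_induction(node):
    """Return the payoff profile reached when every player maximizes alone."""
    if not isinstance(node, tuple):      # leaf: a payoff dict
        return node
    player, actions = node
    results = {a: backward_induction(child) for a, child in actions.items()}
    best = max(results, key=lambda a: results[a][player])
    return results[best]

print(backward_induction(tree))  # {'1': 2, '2': 1}
```

Here player 2 would answer R with d, so player 1 prefers L, anticipating a.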
Let Γ be a cooperative strategic game with perfect information and without chance moves.
Let (σ, π) be a system of Γ, x ∈ X be a non-terminal node, and i = I(x). At x, consider
parallel game ΓPC ∈ par(Γ(x)) where C ∋ i is a feasible non-singleton coalition. For the
rest of this subsection, associate the system [τ C (x), π C (x)] with each parallel game ΓPC .
Define r0 (x) := [τ i (x), π i (x)], which is the extension of [σ(x′ ), π(x′ )] to Γ(x), the subgame
starting at node x where i chooses the behavior strategy b∗i (x), and π i (x, Γ(x)) is defined
as the player partition in which i acts non-cooperatively at x, and the other players in
the partition are given by π(x′ , Γ(x′ )) for every x′ ∈ f (x).
In simple terms, τ i (x, Γ(x)) is defined as the strategy profile in which player i non-
cooperatively chooses behavior action b∗i (x). Furthermore, the non-cooperative reference
system at node x extends a given system to the node x, incorporating τ i (x, Γ(x)) and
player i. The intuition behind the definition of the non-cooperative reference system is
that if no player forms a coalition with the active player i at x, then player i would choose
a utility-maximizing strategy non-cooperatively.
Next, I define ‘reference systems’ to facilitate a comparison between the systems of
parallel games at x and the non-cooperative choice made by the active player at x.
Definition 2 (Reference systems). For each parallel game ΓPC at x, fix a system [τ C(x), π C(x)].
Let (rj(x))_{j=1}^{j̄} be a sequence for player i where rj(x) := [τ Cj(x), π Cj(x)], satisfying the
following two conditions.
1. For every two indices j and j′ with j < j′, ui(τ Cj(x)|π Cj(x)) ≤ ui(τ Cj′(x)|π Cj′(x));
i.e., the sequence of payoffs is nondecreasing for i.
2. If ui(τ Cj(x)|π Cj(x)) = ui(τ Cj′(x)|π Cj′(x)) and Cj ⊊ Cj′, then rj precedes rj′ for
parallel game coalitions Cj and Cj′ containing i.
Each element of the sequence (rj(x))_{j=0}^{j̄}, which includes r0(x), is called a reference system.
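The ordering in Definition 2 can be realised by sorting on the pair (payoff to i, coalition size), since Cj ⊊ Cj′ implies |Cj| < |Cj′|; any remaining ties are broken arbitrarily, as in the text. The data below are illustrative assumptions.

```python
# Each reference system is summarized here as (coalition, payoff to i in
# the corresponding parallel game's solution) for i = 1.
systems = [
    (frozenset({1, 2, 3}), 4.0),
    (frozenset({1, 2}), 4.0),     # same payoff, smaller nested coalition
    (frozenset({1, 3}), 2.0),
    (frozenset({1}), 3.0),        # the non-cooperative reference system
]

# Condition 1: nondecreasing payoff for i; condition 2: on a payoff tie,
# the smaller (nested) coalition comes first.
ordered = sorted(systems, key=lambda rc: (rc[1], len(rc[0])))
print([sorted(c) for c, _ in ordered])
# [[1, 3], [1], [1, 2], [1, 2, 3]]
```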
Definition 3 (Individually rational reference systems). Fix the sequence (rj(x))_{j=0}^{j̄} of
reference systems, including the non-cooperative reference system, for player i.
1. Base case: The non-cooperative reference system r0(x) is individually rational (IR), denoted r0∗(x).
2. Induction step: Assume that rj(x) is IR, denoted rk∗(x), for some k ≥ 0. Next,
rk+1∗(x) is defined as follows. Let j′ > j be the smallest number such that rj′(x) is
IR with respect to rk∗(x); that is, for every agent i′ ∈ C̄j′ where Cj′ ⊆ C̄j′ and C̄j′ ∈
π Cj′(x), ui′(τ Cj′(x)|π Cj′(x)) > ui′(τ Cj(x)|π Cj(x)).19 Then, define rk+1∗(x) := rj′(x).
In simple words, an IR reference system is a system of a parallel game ΓPC for some
coalition C including agent i, in which every member of C agrees to form C; i.e., every
member strictly prefers forming C to the appropriate counterfactual, the next
best IR reference system. Note that the sequence (rk∗(x))_{k=0}^{l̄} of IR reference systems is a
subsequence of the sequence (rj(x))_{j=0}^{j̄} of reference systems.
19 In other words, C̄j′ contains coalition Cj′ such that C̄j′ is part of the solution of the parallel game ΓPCj′.
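The IR filtering of Definition 3 and the selection in Definition 4 can be sketched as follows. The utility profiles are illustrative, and for simplicity the strict-preference test is applied to the members of Cj′ itself rather than the enclosing coalition C̄j′ of footnote 19.

```python
# Each reference system: (coalition, utility profile of that system),
# already ordered per Definition 2 (illustrative data).
sequence = [
    (frozenset({1}), {1: 3.0, 2: 5.0, 3: 1.0}),        # r0: non-cooperative
    (frozenset({1, 3}), {1: 4.0, 2: 2.0, 3: 0.5}),     # 3 is worse off: not IR
    (frozenset({1, 2}), {1: 5.0, 2: 6.0, 3: 1.0}),     # 1 and 2 both gain: IR
    (frozenset({1, 2, 3}), {1: 6.0, 2: 6.0, 3: 2.0}),  # 2 does not strictly gain
]

# Keep a system only if every coalition member strictly prefers it to the
# last accepted IR system (r0 is IR by definition).
ir = [sequence[0]]
for coalition, utils in sequence[1:]:
    prev = ir[-1][1]
    if all(utils[j] > prev[j] for j in coalition):
        ir.append((coalition, utils))

# The credible reference system maximizes player 1's utility; since the
# sequence is nondecreasing for 1, it is the last IR element.
credible = ir[-1]
print(sorted(credible[0]), credible[1][1])  # [1, 2] 5.0
```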
Definition 4 (Credibility). Let x ∈ X be a non-terminal node and (rk∗(x))_{k=0}^{l̄} be a
sequence of IR reference systems at x. The IR reference system, rl̄∗(x), that maximizes i’s
utility is defined as the credible reference system at x, with respect to the systems (rk∗(x))_{k=0}^{l̄}.
In simple words, the credible reference system is the IR reference system that max-
imizes player i’s utility at x with respect to IR reference systems. For this reason, if a
reference system is not credible, then it is said to possess either a unilateral or a multi-
lateral credible deviation. There are a few further points to note regarding this definition.
First, given the sequence of IR reference systems, the credible reference system is unique.
However, at node x, there may be two different sequences of IR reference systems, in each
of which the credible reference systems may differ yet give the same utility to player i.
This is not surprising as there can be games in which all payoffs are the same. Second,
note that rl̄∗(x) = [τ Cl̄(x), π Cl̄(x)], i.e., the credible reference system is a system of the
parallel game with coalition Cl̄. It may be that the non-cooperative reference system
r0∗(x), where l̄ = 0, is the only IR reference system (i.e., Cl̄ is a singleton) and hence the
credible reference system. Third, note also that IR reference systems, and by extension
credibility, are defined with respect to some arbitrary systems of parallel games, which
are themselves reference systems. There is no requirement that these systems themselves
satisfy the credibility requirement. CES, as defined next, incorporates a recursive
notion of credibility.
The recursive structure of the CES warrants special attention. While credibility is
defined with respect to arbitrary systems, a CES prescribes a credible reference system
to every non-terminal node x with respect to cooperative equilibrium systems. In the
context of one-person games, the notion of credibility simplifies to straightforward utility
maximization. In n-person games, the CES is based on a notion of credibility with
respect to systems of parallel games that are themselves credible. In turn, these systems
are credible with respect to other systems that are also credible, and so on. While it is
not immediately evident whether a CES always exists in n-person games, a constructive
algorithm for its computation is provided in the subsequent subsection.
I now turn to a simple observation about the relationship between CES and SPNE in
non-cooperative extensive form games.
Let Γ be a cooperative strategic game with perfect information and no chance moves. I
use strong mathematical induction to define a constructive algorithm for finding a CES
in Γ, which I will refer to as the CES algorithm. This algorithm outputs a system on
each subgame Γ(x), starting at some node x, by inducting on the number of players (n)
and the number of non-terminal nodes among the successors of x, including x itself and
excluding the terminal nodes (denoted x̄).20 For instance, if x is a penultimate node, then (n, x̄) =
(1, 1). Let i = I(x) be the active player at x.
1. Base case: Let (n, x̄) = (1, 1). A CES at Γ(x) is defined by the pair (σ ∗ , π ∗ ), where
σ ∗ (x) ∈ arg maxs∈S ui (s) and π ∗ = i.
2. Induction step: Assume that a CES is defined for all subgames with parameters
(m, ȳ) where 1 ≤ m ≤ n, 1 ≤ ȳ ≤ x̄, such that (n, x̄) ≠ (1, 1) and (m, ȳ) ≠
(n, x̄).
Given this assumption, [σ ∗ (x′ ), π ∗ (x′ )] is defined for all x′ ∈ F (x), where x is the root
of subgame Γ(x). The solution can be extended to subgame Γ(x) with parameters
(n, x̄) as follows.21 Subsequently, the solutions of all parallel games are compared
with the non-cooperative choice of i at x.
20 The termination of the algorithm is indicated by the symbol .
21 I present steps (2a), (2b), and (2c) below to make the algorithm self-contained. For more details, see the definitions in section 5.1.1.
(a) The non-cooperative reference system at x, denoted r0 (x): Let b∗i (x) be a
utility-maximizing behavior strategy of i at x given [σ ∗ (x′ ), π ∗ (x′ )] for x′ ∈ f (x).
Define r0 (x) := [τ i (x), π i (x)], which is the extension of [σ ∗ (x′ ), π ∗ (x′ )] where
x′ ∈ f (x) to the subgame starting at node x. Here, i chooses the behavior
strategy b∗i (x), and π i (x) is the partition in which i and other players in the
subgame at x act non-cooperatively.22
(b) Reference systems: For any (n, x̄), consider parallel game ΓPC where C ∋ i is a
feasible non-singleton coalition. Since ΓPC is an (n − |C| + 1)-person subgame
with x̄ nodes, its CES, denoted by [τ C (x), π C (x)], is defined by the induction
hypothesis.
Let (rj(x))_{j=0}^{j̄} be a sequence for i where rj(x) := [τ Cj(x), π Cj(x)]. The sequence
satisfies:
(i) for every two indices j and j′ with j < j′, we have ui(τ Cj(x)|π Cj(x)) ≤
ui(τ Cj′(x)|π Cj′(x)) (i.e., it is a nondecreasing sequence for i), and
(ii) if ui(τ Cj(x)|π Cj(x)) = ui(τ Cj′(x)|π Cj′(x)) and Cj ⊊ Cj′, then rj precedes
rj′, where Cj and Cj′ are parallel game coalitions containing i.23
(c) Individually rational (IR) reference systems: Given the non-cooperative refer-
ence system r0∗ (x), the following IR reference systems are defined inductively.
Assume that rj (x) is IR, denoted as rj∗ (x), for some j ≥ 0. Let j ′ > j
be the smallest index such that rj ′ (x) is IR with respect to rj∗ (x), that is,
ui′ (τ Cj′ (x)|π Cj′ (x)) > ui′ (τ Cj (x)|π Cj (x)) for every agent i′ ∈ C̄j ′ where Cj ′ ⊆
C̄j ′ and C̄j ′ ∈ π Cj′ (x).24
Let rl̄∗(x) be the credible reference system that maximizes i’s utility at x. Note
that rl̄∗(x) = [τ Cl̄(x), π Cl̄(x)]. It is possible that r0∗(x), where l̄ = 0, is the only
IR reference system.
22 Put differently, τ i(x, x) is defined as the strategy profile in which i chooses behavior action b∗i(x), and π i(x) is defined as the partition in which i and the other players in the subgame at x act non-cooperatively. The rest follows the strategy profile and partitions given in the system [σ∗(x′), π∗(x′)] for all successors x′ ∈ F(x). Note that τ i(x) = τ i(x, x′) for all x′ ∈ F(x), meaning it provides a strategy profile for every subgame.
23 Ties are broken arbitrarily.
24 More explicitly, C̄j′ contains coalition Cj′ such that C̄j′ is part of the solution of the parallel game ΓPCj′.
A CES of Γ(x) is then given by system [σ ∗ , π ∗ ] where [σ ∗ (x), π ∗ (x)] = rl̄∗ (x), which
extends [σ ∗ (x′ ), π ∗ (x′ )] where x′ ∈ f (x). When x is the root of the game tree of Γ,
[σ ∗ , π ∗ ] is a CES of Γ.
I call coalitions stable if they survive the CES algorithm and players and agents dy-
namically rational if they utilize the CES algorithm in cooperative strategic games. For
a fully worked-out example, see section 6. The following theorem asserts the existence of
a CES.
Theorem 1 (Existence: perfect information games). There exists a CES in every finite n-person cooperative extensive form game with perfect information and without chance moves.
The proof of this theorem can be found in the Appendix, section 9.2.
I proceed to explain each step in the CES algorithm in as informal a manner as possible.
As previously defined in section 5.1.2, let Γ be a cooperative strategic game with perfect
information, Γ(x) be a subgame of Γ with root x, n the number of players, and x̄ the
number of successor nodes of x, including x itself and excluding the terminal nodes. Recall that
Γ(x) is the subgame starting at x. For example, if x is the root of game Γ, then Γ(x) is
simply the game Γ. The CES algorithm is defined via (strong) induction on n and x̄.
1. Base case: Let (n, x̄) = (1, 1). CES at Γ(x) is simply given by the pair (σ ∗ , π ∗ )
where player i, who is active at x, maximizes their utility at x, and the only viable
coalition is the singleton coalition containing i.
2. Induction step: The induction hypothesis assumes that CES is defined for all
‘smaller’ subgames (and parallel games) as given in section 5.1.2. Based on this
assumption, a CES is defined for all Γ(x′ ) where x′ is an immediate successor of x.
The next step is then to extend CES to Γ(x). This is achieved by comparing the
cooperative equilibrium systems of all parallel games at x with the non-cooperative
choice made by i at x.
This is called the non-cooperative reference system because if at node x player
i does not form a coalition, then i will have to make a non-cooperative choice
at x, given the CES at the immediate successor nodes of x.
(b) Reference systems: Consider a parallel game ΓPC where some coalition C,
including agent i, forms. Note that the parallel game ΓPC includes strictly
fewer players than Γ(x), hence its CES is defined by the induction hypothesis.
A CES of each such parallel game is called a reference system at x.
A nondecreasing sequence is constructed based on player i’s utility in the co-
operative equilibrium systems of each parallel game at x. In other words, we
have a nondecreasing sequence of reference systems.
(c) Individually rational reference systems: I first define the non-cooperative ref-
erence system to be individually rational (IR). Other IR reference systems are
then defined inductively as follows.
Assume that for some j ≥ 0 the reference system rj(x) is IR, and denote it
as rj∗(x). Find the next reference system, rj′(x), in the sequence
defined in Step (b) such that (i) j′ is the smallest index satisfying j′ > j,
and (ii) rj′(x) is IR with respect to rj∗(x). In other words, every agent of the
coalition that includes player i in the new reference system strictly prefers their
CES to the CES of the previous reference system.
The credible reference system is defined as the IR reference system that maxi-
mizes player i’s utility among all IR reference systems. Note that the credible
reference system might be the non-cooperative reference system at x.
A CES of Γ(x) is then given by the system that extends—according to the above
steps—cooperative equilibrium systems at every node x′ , where x′ is an immediate
successor of x. Accordingly, a CES of game Γ is a CES of Γ(x) where x is the root
of the game tree of Γ.
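The steps above can be sketched end to end for a single induction step at a hypothetical one-shot decision node. The payoffs, and the assumption that the coalition {1,2} maximizes the sum of its members' utilities, are illustrative, not the paper's specification.

```python
# One induction step of the CES algorithm at a root node x where player 1
# moves and may form the coalition {1,2} (all numbers are assumptions).

# Payoff profiles reachable from x, one per available action.
outcomes = {
    "L": {1: 2.0, 2: 1.0},
    "M": {1: 3.0, 2: 0.0},
    "R": {1: 2.5, 2: 4.0},
}

# (a) Non-cooperative reference system: player 1 maximizes u_1 alone.
noncoop_action = max(outcomes, key=lambda a: outcomes[a][1])
r0 = ("noncoop", outcomes[noncoop_action])           # 'M': payoffs (3.0, 0.0)

# (b) Reference system of the parallel game where C = {1,2} forms: the
# coalitional player maximizes u_C (here the sum, an assumed choice).
coal_action = max(outcomes, key=lambda a: outcomes[a][1] + outcomes[a][2])
r1 = ("coalition {1,2}", outcomes[coal_action])      # 'R': payoffs (2.5, 4.0)

# (c) r1 is IR only if *every* member strictly prefers it to r0.
ir = [r0]
if all(r1[1][j] > r0[1][j] for j in (1, 2)):
    ir.append(r1)

# Credible reference system: the IR system maximizing player 1's utility.
credible = max(ir, key=lambda r: r[1][1])
print(credible[0])  # 'noncoop'
```

Here the coalition {1,2} would play R, but player 1 gets 2.5 < 3.0 there, so the coalition fails the IR test and the non-cooperative reference system is credible.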
Definition 6 (Non-cooperative reference system). Let [σ(h′), π(h′)] be a system where
h′ ∈ f′(h). The non-cooperative reference system at some h ∈ H, denoted r0(h), with
respect to [σ(h′), π(h′)] is defined as follows. Let r0(h) := [τ i(h), π i(h)], defined as
the extension of [σ(h′), π(h′)] with h′ ∈ f′(h) to T(h), the parallel tree starting at h, such
that [τ i(h, Γ(h)), π i(h, Γ(h))] is an SPNE in subgame Γ(h) in which each player in subgame
Γ(h), excluding those who act at an information set h′ ∈ F′(h), chooses their strategy
non-cooperatively.25
Put differently, [τ i (h, Γ(h)), π i (h, Γ(h))] is an SPNE of Γ(h) in which all players in
subgame Γ(h) before player I(h) choose their strategies non-cooperatively, and each player
who acts after player I(h) is given by player partition π(h′ ) where h′ ∈ f ′ (h).
Definition 7 (Reference systems). For each parallel tree TPC, fix the system [τ C(h), π C(h)].
Let (rj(h))_{j=0}^{j̄} be a sequence for player i where rj(h) := [τ Cj(h), π Cj(h)], satisfying the
following two conditions.
1. For every two indices j and j′ with j < j′, ui(τ Cj(h)|π Cj(h)) ≤ ui(τ Cj′(h)|π Cj′(h));
i.e., the sequence is nondecreasing for i.
2. If ui(τ Cj(h)|π Cj(h)) = ui(τ Cj′(h)|π Cj′(h)) and Cj ⊊ Cj′, then rj precedes rj′ for
parallel tree coalitions Cj and Cj′ containing i.
Definition 8 (Individually rational reference systems). Fix the sequence (rj(h))_{j=0}^{j̄} of
reference systems for player i.
1. Base case: The non-cooperative reference system r0(h) is IR, denoted r0∗(h).
2. Induction step: Assume that rj(h) is IR for some j ≥ 0. Denote rj(h) as rk∗(h) for
some k ≥ 0 such that j = 0 if and only if k = 0. Next, rk+1∗(h) is defined as follows.
Let j′ > j be the smallest number such that rj′(h) is IR with respect to rk∗(h); in
other words, ui′(τ Cj′(h)|π Cj′(h)) > ui′(τ Cj(h)|π Cj(h)) for every agent i′ ∈ C̄j′ where
Cj′ ⊆ C̄j′ and C̄j′ ∈ π Cj′(h).26 Then, define rk+1∗(h) := rj′(h).
25 Note that SPNE is defined with respect to players, which may be coalitions, given by player partition π i(h, Γ(h)).
26 In other words, C̄j′ contains coalition Cj′ such that C̄j′ is part of the solution of the parallel tree TPCj′.
Note that the sequence (rk∗(h))_{k=0}^{l̄} of IR reference systems is a subsequence of the
sequence (rj(h))_{j=0}^{j̄} of reference systems.
Note that rl̄∗(h) = [τ Cl̄(h), π Cl̄(h)]. As in the perfect information case, it may be
that r0∗(h), where l̄ = 0, is the only IR reference system and hence the credible one. Additionally, it is
worth emphasizing that credibility is defined with respect to given systems; there is no
requirement that these systems themselves be credible. This observation leads to the
following definition.
Theorem 2 (Existence: imperfect information games). There exists a CES in possibly
mixed strategies in every finite n-person cooperative extensive form game.
The proof is given in the Appendix (see section 9.4). I next give a definition of a CES
in a game with chance moves.
In simple words, a CES of the game in which Nature moves at the root of the game
is simply the profile of the CESs of the subgames starting at each immediate successor of
the root.
If the cooperative extensive form structure of a normal form game is not provided,
then a CES of the game is defined as a CES of a cooperative extensive form game whose
reduced normal form coincides with the given normal form game.27
Next, I discuss some of the possibilities of modifying the framework of cooperative
strategic games, and how this might affect the definition and the existence of a CES.
Cooperative strategic games without coalitional utility: One may construct co-
operative strategic games without relying on a coalitional utility function for the players.
Specifically, these games can be formulated using a set of preference relations, denoted
≿C|P, rather than utility functions uC|P. These preference relations do not have to
conform to (expected) utility assumptions. The only requirement is the existence of max-
imal elements for coalitional players in certain parallel games. When a coalition does not
have a maximal element, the coalition would not be feasible in the relevant parallel game.
Importantly, the definition of CES, as well as the existence proofs found in Theorem 1
and Theorem 2, remain valid. This is because these constructs rely on the presence of
maximal elements, and not on a full preference ordering that comes with a utility function.
27 In the context of incomplete information games, the notion of subgame perfection used within the CES algorithm can be changed to perfect Bayesian equilibrium. Additionally, the concept of CES can be extended from finite to infinite horizon games in a manner analogous to the extension of the backward induction concept to infinite horizon games via subgame perfect equilibrium.
Partial, fixed, and mixed cooperation: Consider a strategic situation in which coali-
tions are ‘subgame-dependent’ in parallel games associated to a given game. In this
extended framework, the parallel game may consist of various subgames wherein coalition
compositions differ based on specific information sets. For example, at the root x of the
parallel game, player i may form a coalition with player j, and at another information
set h′ , player i may cooperate with player j ′ 6= j. Therefore, the active players at x and
h′ would be {i, j} and {i, j ′ }, respectively. Given that utilities for both {i, j} and {i, j ′ }
are well-defined, the CES algorithm would also be well-defined. Specifically, a CES in
this context would assign a credible reference system to each information set, enabling
the same player to form different coalitions even along the path of play of a CES.
In the ‘fixed cooperation’ variant of cooperative strategic games, one can model
situations where two or more players cooperate at specific information sets while acting
independently at others. To formalize this, let P be the finite set of players, where each
C ∈ P is a non-empty subset of N = {1, 2, ..., n}. Importantly, P is not constrained to be
a partition of N; for instance, it is plausible that P = {1, 2, 3, {1, 2}, {1, 3}}. Consider the
case where two players, i and j, cooperate at an information set h but act independently at
other information sets. In this situation, any parallel tree coalitions at h must necessarily
include both i and j. Furthermore, if a set of possibly non-singleton players, i1 , i2 , ..., ik ,
forms a coalition, then C = {i1 , i2 , ..., ik }. For instance, if {1, 2} and {1, 3} form a
coalition C, then it may be that C = {{1, 2}, {1, 3}} rather than {1, 2, 3}. This method
of modeling may be particularly useful for studying specific scenarios, such as the merger
of two independent decision-making organizations, where the decision to merge is made
at the organizational level.
In an alternative variant, one can model a situation where player i at an information set
h ‘mixes’ between two or more parallel trees, for example, between coalitions {i, j} and
{i, j ′ }, with the objective of attaining a more favorable individually rational reference
system at h. Any of these variants can be incorporated into the cooperative strategic
games. Mutatis mutandis, the CES would remain well-defined, and its existence would
be guaranteed, in line with Theorem 2.
Cooperative strategic games with more complex contracts: Players can write
contracts to form a coalition in cooperative extensive form games. In some sense, such
contracts are ‘simple,’ but one could consider situations where the agents can write more
complex contracts. For example, consider a parallel game where a coalition does not
have a maximal element and the preferences of its members cycle among three outcomes,
s ≻ s′ ≻ s′′ ≻ s. Further, suppose that they prefer these three outcomes to all others.28
In such a scenario, the coalition could write a contract that prescribes a specific strategy,
such as uniformly mixing over these three outcomes. This could be individually rational
for the members when compared to the corresponding counterfactual. Another possibility
is to define parallel games in terms of contracts rather than coalitions, allowing a player
the flexibility to write multiple contracts with the same set of players.
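The cycling-preferences contract can be checked numerically; the payoffs and the counterfactual value below are illustrative assumptions.

```python
# Members' preferences cycle over three outcomes (s, s', s''), so the
# coalition contracts to mix uniformly over them; the contract is
# individually rational if each member's expected utility strictly beats
# the counterfactual (the next best IR reference system).

u = {
    1: {"s": 6.0, "s1": 3.0, "s2": 0.0},  # s1, s2 stand in for s', s''
    2: {"s": 0.0, "s1": 6.0, "s2": 3.0},
}
counterfactual = {1: 2.0, 2: 2.0}  # assumed counterfactual utilities

expected = {i: sum(u[i].values()) / 3 for i in u}  # uniform mix
print(expected)                                         # {1: 3.0, 2: 3.0}
print(all(expected[i] > counterfactual[i] for i in u))  # True
```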
To formalize this, let Γ = (P, T, I, u, C, Σ, H) be a cooperative extensive form game
with complex contracts, in which C denotes a correspondence such that for every infor-
mation set h, C(h) is a set of feasible (abstract) contracts. For example, at each h where
i = I(h), C(h) could be a subset of 2N that includes i. This extended framework ac-
commodates a broader range of coalition forming possibilities than those introduced in
section 4. To give an example, there might exist players in Γ who do not act at any
information set but are nonetheless permitted to write contracts with those who do.
Provided that the contracts, regardless of their complexity, yield a well-defined out-
come and that credible individually rational reference systems exist, the CES would re-
main well-defined, and its existence would be guaranteed.
General non-cooperative/disagreement points: The starting point for the CES
algorithm is the non-cooperative reference system, wherein the active player acts independently.
This serves as a de facto disagreement point in the case where no one forms a
coalition. However, the concept of a disagreement point can be defined more broadly.
Players could have the flexibility to either exogenously establish the non-cooperative ref-
erence system for all histories, or to implement a rule-based mechanism for determining
the disagreement point.
To formalize this extended framework, let Γ = (P, T, I, u, C, D, Σ, H) be a cooperative
extensive form game with complex contracts and with general disagreement points. Here,
D is a correspondence defined as follows: for every non-terminal x ∈ X, D(x) represents
an abstract set that prescribes a method for choosing the disagreement point in subgame
Γ(x) and all of its subtrees.
As an example, D(x) could specify an explicit sequence of players in subgame Γ(x),
wherein player i = I(x) offers to write contracts (or form coalitions) with other players.
Provided that this ordering is consistent with the implied order of play according to the
structure of the game tree, the CES would remain well-defined, and its existence would
be guaranteed.29

²⁸ In the context of social choice theory, this immediately brings to mind the Condorcet
Paradox and Arrow's impossibility theorem.
Cost of cooperation: As elaborated in Remark 7, both psychological and monetary
costs, if any, associated with forming coalitions are incorporated into the utility functions,
which are dependent on player partitions. One could extend this to accommodate more
general cost structures. For instance, the cost of forming a coalition could be contingent
on the specific outcomes that the coalition aims to achieve. Provided that the utility
functions are well-defined and accurately account for these costs, the CES will remain
well-defined.
Refinements, coarsenings, and other solutions: The CES, in its current formula-
tion, precludes the use of non-credible threats. However, this solution concept could be
extended to accommodate such threats, akin to the discourse contrasting Nash equilib-
rium with SPNE in non-cooperative games. Furthermore, a more generalized version of
the CES could be defined via the CES algorithm. This extended definition may incor-
porate either refinements or coarsenings of SPNE, such as the sequential equilibrium as
proposed by Kreps and Wilson (1982).
Let cooperative system denote a system wherein each individual player, each coalitional
player, and each agent within a coalition seeks to maximize their utility relative to specific
counterfactuals. Importantly, these counterfactuals are not required to be credible—i.e.,
they do not necessarily satisfy the steps described in the CES algorithm. Given this,
it follows that while every CES qualifies as a cooperative system, the converse is not
necessarily true.
An alternative modification to the current framework could involve relaxing the strict
incentive condition for joining a coalition. This adjustment would likely lead to an ex-
panded set of solutions. Further extensions may incorporate notions like maximin strat-
egy and maximin equilibrium (optimin criterion) (Ismail, 2014). In ‘optimin cooperative
system,’ players would aim to simultaneously maximize their minimum utility under prof-
itable deviations by other coalitions.
²⁹ It should be noted that, unlike in games with perfect information, the order of play in games with
imperfect information may be ambiguous within specific subgames. This occurs because, at a
non-singleton information set h with player i = I(h), choosing an action ai does not unambiguously
dictate the subsequent player. Rather, the subsequent player might depend on the particular nodes
following action ai at information set h. In such scenarios, players can write more specific contracts.
6 A fully worked-out example
Figure 3 (A) illustrates a three-player cooperative strategic game in which player 1 (P1)
starts by choosing either L or R. To streamline the analysis, I impose two simplifying
assumptions. First, the cost of coalition formation is considered negligible. It is impor-
tant to note, however, that the analysis remains valid even if forming coalitions incurs
some costs, as long as these costs are not ‘too high.’ Second, for each non-empty coalition
C ⊆ {1, 2, 3}, I define the coalition's utility function as uC(·) := min_{i∈C} ui(·).
In other words, one outcome is preferred to another if the minimum utility a member
of the coalition receives from the former is greater than the minimum utility derived
from the latter. To give an example, consider the coalition C = {1, 3}. In this case,
uC (R, d, l) > uC (R, d, k) because uC (R, d, l) = 5 while uC (R, d, k) = 1. (Note that in
general a coalitional utility can be any von Neumann-Morgenstern utility.)
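The min-based coalitional utility can be checked mechanically. The following sketch (a hypothetical Python encoding of the Figure 3 payoffs; the names are not the paper's notation) reproduces the comparison for C = {1, 3} from the text:

```python
# A minimal sketch of the min-based coalitional utility
# u_C(.) := min_{i in C} u_i(.) used in this example.

# Terminal payoff vectors (P1, P2, P3) of the Figure 3 game,
# keyed by the action profile that reaches them.
PAYOFFS = {
    ("L", "a", "e"): (5, 5, 3), ("L", "a", "f"): (2, 2, 1),
    ("L", "b", "g"): (4, 4, 5), ("L", "b", "h"): (1, 6, 4),
    ("R", "c", "i"): (3, 1, 2), ("R", "c", "j"): (2, 2, 6),
    ("R", "d", "k"): (1, 1, 6), ("R", "d", "l"): (6, 3, 5),
}

def coalition_utility(coalition, outcome):
    """u_C(outcome) = min over members' payoffs (players are 1-indexed)."""
    payoffs = PAYOFFS[outcome]
    return min(payoffs[i - 1] for i in coalition)

# The comparison from the text, for C = {1, 3}:
assert coalition_utility({1, 3}, ("R", "d", "l")) == 5
assert coalition_utility({1, 3}, ("R", "d", "k")) == 1
```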
[Figure 3, game trees for panels A and B: (A) P1 chooses L or R at the root x0; P2 then
chooses a or b at x1 (after L) or c or d at x2 (after R); P3 then chooses e or f at x3, g or h
at x4, i or j at x5, and k or l at x6. The payoff vectors (to P1, P2, P3) at terminal nodes e
through l are (5, 5, 3), (2, 2, 1), (4, 4, 5), (1, 6, 4), (3, 1, 2), (2, 2, 6), (1, 1, 6), and (6, 3, 5).
(B) The same tree without node labels.]
[Figure 4, game trees for panels C and D: (C) the subgame at x1, labelled r1∗(x1), in which
the coalition {2, 3} acts at x1 and at x3–x4 (actions a, b and e–h); (D) the subgame at x2,
labelled r0∗(x2), in which P2 acts at x2 and P3 acts at x5–x6 (actions c, d and i–l). Payoffs
as in Figure 3.]
Figure 4: Solution steps C and D. The lines with arrows represent best-responses, which
could be non-cooperative or coalitional. (C) The coalition between P2 and P3 forms to
play b and h. The outcome of this reference system r1(x1) is (1, 6, 4), which is IR with
respect to the non-cooperative reference system r0∗(x1), whose outcome is (5, 5, 3). The
credible reference system at x1 is r1∗(x1). (D) No coalition forms because P3 receives their
highest payoff when there is no coalition. Thus, the non-cooperative reference system,
r0∗(x2), is the credible reference system at x2.
[Figure 5, game trees for panels E and F: (E) the full tree labelled r0∗(x0), with P1 at the
root; after L, the coalition {2, 3} acts, while after R, P2 and P3 act separately. (F) the full
tree labelled r1(x0), in which the grand coalition {1, 2, 3} acts at every decision node.
Payoffs as in Figure 3.]
Figure 5: (E) The non-cooperative reference system at x0 whose outcome is (2, 2, 6).
Given the cooperative equilibrium systems in steps C and D, P1 chooses their non-
cooperative best-response. (F) The reference system associated with the parallel game of
the grand coalition {1, 2, 3}. The outcome of this reference system is (4, 4, 5), which is
not IR with respect to (2, 2, 6).
Consider Figure 4 (D). In the parallel game where the coalition {2, 3} forms, the utility
u2,3 is maximized at the outcome (6, 3, 5). However, this outcome is clearly not IR for P3
with respect to the outcome of the non-cooperative reference system, (2, 2, 6). As a result,
the credible reference system at this subgame is the non-cooperative reference system, r0∗(x2).
Finally, consider the root of the game at x0 , where P1 is active. Figure 5 (E) illustrates
P1’s non-cooperative best-response, which is to choose R, taking into account the solutions
determined in the preceding steps of the CES algorithm. Thus, the outcome of the non-
cooperative reference system, r0∗ (x0 ), at the root is (2, 2, 6). To find the other reference
systems at the root, I analyze the parallel games at node x0 , generated by the coalitions
{1, 2}, {1, 3}, and {1, 2, 3}.
[Figure 6, game trees for panels G and H: (G) the tree labelled r2∗(x0), in which the
coalition {1, 2} acts at the root and at x1–x2, and P3 acts at x3–x6. (H) the tree labelled
r3∗(x0), in which the coalition {1, 3} acts at the root and at x3–x6, and P2 acts at x1–x2.
Payoffs as in Figure 3.]
Figure 6: G: The parallel game of coalition {1, 2}: P1 and P2 cooperate to play L and
a to receive 5 each. The outcome of this reference system is (5, 5, 3), which is IR with
respect to the previous IR reference system, r0∗ (x0 ). H: P1’s forming a coalition with P2
is a credible threat to P3. Thus, P3 forms a coalition with P1: Agent P1 plays R and
agent P3 plays l in response to P2’s d. The outcome of this reference system is (6, 3, 5),
which is IR with respect to r2∗ (x0 ). As a result, (6, 3, 5) is the CES outcome of this game
because this is the outcome of the credible reference system at the root of the game.
The ‘on-path’ description of this CES can be summarized by the following list of players,
stable coalitions, and their corresponding strategies:

1. Node x0: The solution at the root of the game:
   (a) Node x0: [{R}, {a, d}, {e, g, j, l}; {1, 3}, 2].
   (b) Node x2 (the subgame after action R):
       i. Node x2: [{d}, {i, l}; 2, {1, 3}].
       ii. Nodes x5 to x6: [{i, l}; {1, 3}].
   (c) Node x1 (the subgame after action L):
       i. Node x1: [{a}, {e, g}; 2, {1, 3}].
       ii. Nodes x3 to x4: [{e, g}; {1, 3}].
   (d) Nodes x3 to x6 (the penultimate nodes):
       (a) Node x2: [{c}, {j, k}; 2, 3].
       (b) Nodes x5 to x6: [{j, k}; 3].
       (c) Nodes x3 to x4: [{e, g}; {3}].
       (d) Nodes x3 to x6: [{e, g, j, k}; 3].

In this CES, P1 and P3 form a coalition and hence act as a single player, denoted by
{1, 3}. Agent P1 chooses action R, P2 chooses a and d, and agent P3 chooses e, g, j, and
l at their respective decision nodes. The outcome of this CES is (6, 3, 5).

It is important to highlight that each of the potential coalitions—{1, 2}, {2, 3}, and
{1, 3}—except for the grand coalition, emerges as IR. Nevertheless, the coalition between
P1 and P3 is the only stable coalition.31

wherein P1 and P2 form a coalition, agent P1 chooses L, agent P2 chooses a and c, and
P3 chooses e, g, j, and k at the respective decision nodes.32 The outcome of this CES is
(5, 5, 3).

³¹ One interpretation of the CES is that the calculations underpinning the CES algorithm are initially
conducted mentally by the players, who subsequently form coalitions based on these calculations.
³² For the sake of simplicity, this example presupposes that payoff transfers are not feasible. However,
a similar game could be constructed even if transfers were permitted.
This example clearly demonstrates how a credible threat by P2 effectively mitigates
any potential for P3 to destabilize the {1, 2} coalition; that is, the parallel game coalition
{1, 3} is not IR with respect to the parallel game coalition {1, 2}.
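The root-level computation in solution step E can be replayed in a few lines. The sketch below (a Python illustration under the Figure 3 payoffs; the subgame values are the credible reference systems from steps C and D in the text) confirms that P1's non-cooperative best response at the root is R:

```python
# A sketch of step E: P1 best-responds non-cooperatively, given the credible
# reference systems already computed for the subgames at x1 and x2.

# Credible subgame outcomes from the text: at x1 the {2,3} coalition yields
# (1, 6, 4) (step C); at x2 no coalition forms and the outcome is (2, 2, 6)
# (step D).
subgame_outcome = {"L": (1, 6, 4), "R": (2, 2, 6)}

# P1 (index 0) picks the action maximizing P1's own payoff: 2 > 1, so R.
best_action = max(subgame_outcome, key=lambda act: subgame_outcome[act][0])
assert best_action == "R"
assert subgame_outcome[best_action] == (2, 2, 6)
```

This matches the outcome (2, 2, 6) of the non-cooperative reference system r0∗(x0) reported in Figure 5 (E).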
7 Applications
7.1 Oligopolistic competition with mergers
Let Θ(P, c, r) denote a coalitional oligopolistic competition with the option for firms to
form coalitions such as mergers and cartels. Formally, Θ(P, c, r) is defined as a cooperative
strategic game Γ = (P, T, I, u, Σ, H) with the following parameters and their correspond-
ing interpretations. Each player i ∈ P is called a firm. P denotes the inverse demand
function, c the cost function, and r the sharing rule in case of a coalition among two or
more firms. Note that Θ(P, c, r) is a general model of (dynamic) oligopolistic competition
that includes the standard n-player Cournot, Bertrand, and Stackelberg competitions.
Under ‘standard’ assumptions that are commonly invoked in non-cooperative models
of oligopolistic competition, Theorem 2 implies that a CES exists in coalitional oligopolis-
tic games.33
Note that there are many ways of modelling n-firm strategic interaction via cooperative
strategic games based on how payoffs are defined. Next, I provide a simple example
of Stackelberg competition in which there are a leader and two followers who observe the
quantity choice of the leader and then choose their quantities simultaneously. The purpose
of the example is to illustrate a CES in a simple but non-trivial coalitional oligopolistic
competition.
For ease of comparison with the Cournot competition and the standard Stackelberg
competition, assume that the cost functions are symmetric. In this Stackelberg model, it
is clear that the grand coalition (i.e., monopoly) payoff is greater than the total payoff of
the firms when they act independently. Thus, there are incentives for the firms to form
the grand coalition.
Formally, consider a setting in which player 1 (or firm 1) is the leader, and each player
j ∈ {2, 3} is a follower. Player 1 chooses a quantity q1 ≥ 0, and upon observing q1, each
follower j simultaneously chooses a quantity qj ≥ 0. Let Q = q1 + q2 + q3 denote the total
market quantity. The inverse demand function is given by P(Q) = a − bQ, where a > 0
and b > 0. The cost function for each player i is c(qi) = cqi, where c > 0.

³³ The standard assumptions are the ones that guarantee the existence of Nash equilibrium in
non-cooperative models of quantity or price competition; see, e.g., Roberts and Sonnenschein (1976),
Novshek (1985), and Shapiro (1989).
The profit function for firm i is given by Πi(Q) = P(Q)qi − c(qi). If a coalition C
forms through a merger or collusion among two or more firms, the coalitional profit is
ΠC(Q) = P(Q)qC − c(qC), where qC = Σ_{i∈C} qi. The individual profit for each ‘agent’
firm i in coalition C is defined as

Π_{i|C}(Q) = Π_i^S(Q) + (ΠC(Q) − Σ_{j∈C} Π_j^S(Q)) / |C|,

where Π_i^S(Q) refers to the standard Stackelberg payoff for firm i when all firms operate
independently and non-cooperatively. In simpler terms, the firms within a merger
distribute the net gains or losses from the coalition equally among themselves.34
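To see the sharing rule in action, the sketch below evaluates it under hypothetical parameter values (a = 120, b = 1, c = 0, chosen purely for illustration; the closed-form one-leader/two-follower Stackelberg quantities q1 = (a − c)/(2b) and qj = (a − c)/(6b) are the standard ones, not taken from the paper):

```python
# A numerical sketch of the sharing rule, under hypothetical parameters.
a, b, c = 120.0, 1.0, 0.0

# Standard 1-leader/2-follower Stackelberg quantities.
q = {1: (a - c) / (2 * b), 2: (a - c) / (6 * b), 3: (a - c) / (6 * b)}
Q = sum(q.values())
price = a - b * Q

def profit(qi):
    return (price - c) * qi

standalone = {i: profit(q[i]) for i in q}  # Pi_i^S under no cooperation

def shared_profit(i, coalition, coalition_profit):
    """Pi_{i|C} = Pi_i^S + (Pi_C - sum_{j in C} Pi_j^S) / |C|."""
    surplus = coalition_profit - sum(standalone[j] for j in coalition)
    return standalone[i] + surplus / len(coalition)

# If the grand coalition produces the monopoly quantity (a-c)/(2b), its profit
# (a-c)^2/(4b) = 3600 exceeds the total standalone profit of 2000.
monopoly_profit = (a - c) ** 2 / (4 * b)
assert monopoly_profit > sum(standalone.values())

# The leader keeps its higher standalone payoff plus an equal surplus share.
leader_share = shared_profit(1, {1, 2, 3}, monopoly_profit)
assert leader_share > shared_profit(2, {1, 2, 3}, monopoly_profit)
```

With these parameters the leader's standalone profit is 1200 against 400 per follower, so the rule indeed grants the leader the larger share of any coalitional surplus.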
For the proof, refer to section 9.5 in the Appendix. To offer an intuitive perspective on
this result, Figure 8 illustrates the payoff distributions for the three firms under both the
CES and the non-cooperative Stackelberg equilibrium, given a specific set of parameters
in this three-player game.

Surprisingly, Proposition 2 shows that the unique CES outcome in the three-firm
cooperative Stackelberg competition coincides with the outcome of the standard two-firm
non-cooperative Stackelberg competition, even though the total profits are maximized
when all firms collude. The leader, in strategically anticipating the potential collusion
among the followers, derives greater benefits from this ensuing positive externality than
it would by joining the followers' coalition to establish a monopoly. This apparent puzzle
is not unique to the game described above; it has been studied for a long time (see, e.g.,
Salant et al. 1983).

³⁴ Although one could consider various vNM utility functions for the firms, the above-defined sharing
rule is not implausible. Given that the firms are symmetric and that the leader enjoys higher profits than
the followers in a non-cooperative Stackelberg setting, this rule ensures that the leader accrues a greater
share of the profits if it engages in a coalition with followers.

[Figure 8: the payoffs of the leader and the two followers under the CES and under the
non-cooperative Stackelberg equilibrium.]
7.2 Logrolling
The literature on logrolling has its roots in the influential works of Buchanan and Tullock
(1962) and Riker and Brams (1973). The latter’s vote trading paradox demonstrates
that mutually beneficial vote trades can result in Pareto inferior outcomes for all parties
involved, including the traders themselves.35 More recently, Casella and Palfrey (2019)
propose a vote trading model in which voters have separable preferences over proposals,
and each proposal is decided by majority voting. Their finding is both general and striking.
Specifically, they show that given any finite set of voters (assumed to be myopic) and
proposals, and given any separable preferences and an initial vote profile, there exists a
sequence of strictly payoff-improving trades that leads to a stable vote profile, meaning
no further strictly payoff-improving trades can be made.
Let L(K, v, r) denote a logrolling game, formally defined as a cooperative extensive
form game with possibly imperfect information, Γ = (P, T, I, u, Σ, H), supplemented by
the following parameters and their respective interpretations. Each player i ∈ P is called
a voter. The set K = {1, 2, ..., k̄} specifies the range of proposals available for voting,
and r denotes the voting rule employed to accept or reject a proposal, such as majority
rule. Each voter i is allocated vi(k) votes to cast either in favor of or against proposal
k. Let v = (v1, ..., vn) be the initial vote profile. Given the voting rule, a pure strategy
profile results in an outcome o = (o1, o2, ..., ok̄), where each proposal k is either accepted
(ok = 1) or rejected (ok = 0). With a slight abuse of notation, ui(o) denotes ui(s′),
where s′ is the pure strategy profile that leads to outcome o. The concept of coalition
formation in this context can be understood as an exchange of votes among the members
of the coalition, either through a physical transfer as in Casella and Palfrey (2019) or via
alternative agreements.

³⁵ For an in-depth discussion on this subject, the reader is referred to Brams (2003).
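How a vote profile induces the outcome o can be sketched in a few lines. The illustration below (a minimal Python sketch with hypothetical votes, assuming simple majority rule decided proposal by proposal) is not the paper's formal model, only a concrete instance of it:

```python
# A minimal sketch of how cast votes induce an outcome o = (o_1, ..., o_kbar)
# under majority rule.

def outcome(cast_votes, num_proposals):
    """cast_votes[i][k] is +v (in favor) or -v (against) proposal k; a
    proposal passes (o_k = 1) iff votes in favor strictly exceed those
    against."""
    o = []
    for k in range(num_proposals):
        tally = sum(votes[k] for votes in cast_votes.values())
        o.append(1 if tally > 0 else 0)
    return tuple(o)

# Three voters, two proposals, one vote each per proposal.
cast = {1: [+1, -1], 2: [+1, +1], 3: [-1, -1]}
assert outcome(cast, 2) == (1, 0)
```

A vote trade between voters 1 and 3 would amount to swapping entries of `cast`, possibly flipping some ok.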
Next, I turn to the question of the existence of cooperative equilibrium systems in
logrolling games. The following corollary follows directly from Theorem 1 (perfect infor-
mation games) and Theorem 2 (imperfect information games).
Corollary 1. Every logrolling game with possibly imperfect information possesses a CES.
If the game is of perfect information, then there is a CES that includes only pure strategies.
In essence, Corollary 1 states that for any logrolling game—with any finite set of vot-
ers, any number of proposals, any initial vote profile, any type of voting rule, and any
nature of preferences (either separable or not)—a CES is guaranteed to exist. This re-
sult substantially extends the existence result of Casella and Palfrey (2019) in four main
dimensions: (i) the introduction of dynamically rational voters, (ii) the generality in the
type of voting rule, (iii) the flexibility in the nature of voter preferences, and (iv) the
inclusion of imperfect information games. Particularly noteworthy is the model’s incor-
poration of dynamically rational voters. This extension is significant primarily because
it precludes the possibility of voters engaging in myopically payoff-improving trades that
could paradoxically reduce their overall utility.
In their working paper, Casella and Palfrey (2018) extend their analysis to include
farsighted vote trading. They find that such trading may not necessarily lead to a stable
outcome. The divergence between their results and those outlined in Corollary 1 arises
from differences in both the frameworks and the underlying concepts. The notion of
farsightedness they employ is consistent with the discussions found in the literature review
in section 3.3. This differs from the application of dynamic rationality in the context of
cooperative extensive form games.
[Figure: the game tree of the four-firm example with wealth transfers, in which the firms
choose between actions F and A.]
willing to transfer 31 units of its wealth to Firm 1 to incentivize cooperation.36 This
transfer clearly leaves Firm 1 better off, as it gains wealth, and it also benefits Firm 2,
which would otherwise achieve a utility of zero in the original CES. Under these new
conditions, the coalition {1, 2} forms according to the CES algorithm, given the initial
wealth structure. In this coalition, Firm 1 chooses action S, and Firm 2 chooses action
A. The resulting CES outcome is now (30, 40, 30, 0), and the final wealth profile adjusts
to W = (31, 69, 0, 0).
This example serves to highlight how the inclusion of wealth transfers can substan-
tially alter strategic interactions and their resulting outcomes. The ability to incentivize
cooperation through wealth transfers introduces a complex dynamic to the process of
strategic coalition formation, one that can be effectively captured within this extended
framework.
8 Conclusions
The present paper contributes to the existing literature by introducing a unified frame-
work, called cooperative extensive form games. This framework accommodates not only
the externalities and synergies arising from coalition formation but also addresses the
dynamics of coalition formation and strategic behavior in a manner more comprehensive
than prior models. Specifically, the framework allows for the formation of coalitions both
among members of a sub-coalition and with external players. Within this framework, I
introduce a novel solution concept called cooperative equilibrium system (CES). Unlike
the Nash equilibrium, which is susceptible to multilateral profitable deviations, the CES
is robust to both unilateral and multilateral credible deviations. This solution concept
takes into account not only the profitability of deviations but also their credibility, which
is defined with respect to an appropriate counterfactual that accounts for chain reactions
triggered by the initial deviation.
The outcome of a CES may be Pareto inefficient. This is evident since any
non-cooperative extensive form game can be reformulated as a cooperative strategic game.
An open question then arises: Is there a general class of coalitional utility functions,
possibly derived from a well-defined cooperative game (e.g., in characteristic function
form), that ensures Pareto efficiency of cooperative equilibrium systems?
³⁶ Here, I impose the simplifying constraint that only integer unit transfers are permitted.

I have identified multiple avenues for future research that can extend or modify the
existing framework of cooperative strategic games. First, more nuanced coalition dynamics
involving varying levels of cooperation—such as partial or mixed cooperation—warrant
thorough analysis. Second, the integration of complex contracts into cooperative exten-
sive form games adds additional layers of complexity that merit investigation, particularly
when preferences exhibit cycling. Third, the role of disagreement points, whether they
are exogenously defined or determined by specific rules, offers a broader context for un-
derstanding the dynamics of strategic coalition formation. Fourth, an examination of
generalized cost structures, potentially including both monetary and psychological costs
associated with coalition formation, provides avenues for expanding the current under-
standing of cooperative strategic games. Lastly, the refinement and generalization of
cooperative equilibrium systems, coupled with the incorporation of wealth transfers, in-
troduce complexities to the existing framework that necessitate further examination.
The framework and solution concept proposed in this paper have broad applicability
across multiple disciplines. Political scientists could employ it to explore the dynamics
of conflict and cooperation among nations. Likewise, the model is relevant for computer
scientists working on cooperative behaviors in multi-agent systems. Biologists may find
the framework useful for studying evolutionary patterns in species and genes. As previ-
ously elaborated in section 7.1, the framework is also particularly relevant for economists
investigating cooperation in strategic contexts, including airline alliances and oligopolistic
cartels.37
References
Acemoglu, D., G. Egorov, and K. Sonin (2008). Coalition formation in non-democracies.
The Review of Economic Studies 75 (4), 987–1009. [Cited on p. 12]
Shapley Value: Essays in Honor of Lloyd Shapley, pp. 175–191. Springer. [Cited on
p. 2 and 13]
Brams, S. (2003). Negotiation Games: Applying Game Theory to Bargaining and Arbi-
tration (Revised Edition). Routledge. [Cited on p. 46]
Brams, S. J., M. A. Jones, and D. M. Kilgour (2005, July). Forming stable coalitions:
The process matters. Public Choice 125 (1-2), 67–94. [Cited on p. 2 and 12]
Casella, A. and T. Palfrey (2018). Trading Votes for Votes. A Dynamic Theory. Social
Science Working Paper, 1444. California Institute of Technology , Pasadena, CA. [Cited
on p. 47]
Casella, A. and T. Palfrey (2019). Trading Votes for Votes. A Dynamic Theory. Econo-
metrica 87 (2), 631–652. [Cited on p. 3, 46, and 47]
Chwe, M. S.-Y. (1994). Farsighted coalitional stability. Journal of Economic theory 63 (2),
299–325. [Cited on p. 12]
Dutta, B., S. Ghosal, and D. Ray (2005). Farsighted network formation. Journal of
Economic Theory 122 (2), 143–164. [Cited on p. 12]
Etessami, K. and M. Yannakakis (2010). On the complexity of Nash equilibria and other
fixed points. SIAM Journal on Computing 39 (6), 2531–2597. [Cited on p. 15]
Fudenberg, D. and J. Tirole (1991). Game Theory. Cambridge, Mass: MIT Press. [Cited
on p. 15]
Gilboa, I. and E. Zemel (1989). Nash and correlated equilibria: Some complexity consid-
erations. Games and Economic Behavior 1 (1), 80–93. [Cited on p. 15]
Grabisch, M. and A. Rusinowska (2010). A model of influence in a social network. Theory
and Decision 69 (1), 69–96. [Cited on p. 12]
Herings, P. J.-J., A. Predtetchinski, and A. Perea (2006). The weak sequential core for
two-period economies. International Journal of Game Theory 34 (1), 55–65. [Cited on
p. 12]
Jackson, M. O. (2008). Social and Economic Networks. Princeton University Press. [Cited
on p. 12 and 14]
Jackson, M. O. and A. Wolinsky (1996). A strategic model of social and economic net-
works. Journal of Economic Theory 71 (1), 44–74. [Cited on p. 12]
Karos, D. and L. Robles (2021). Full farsighted rationality. Games and Economic Behav-
ior 130, 409–424. [Cited on p. 12]
Kimya, M. (2020). Equilibrium coalitional behavior. Theoretical Economics 15 (2), 669–
714. [Cited on p. 12 and 14]
Kreps, D. M. and R. Wilson (1982). Sequential equilibria. Econometrica 50 (4), 863–894.
[Cited on p. 34]
Kuhn, H. W. (1953). Extensive games and the problem of information. In H. W. Kuhn and
A. W. Tucker (Eds.), Contributions to the Theory of Games II, Annals of Mathematics
Studies 28, pp. 193–216. Princeton: Princeton University Press. [Cited on p. 19]
Moldovanu, B. and E. Winter (1995). Order independent equilibria. Games and Economic
Behavior 9 (1), 21–34. [Cited on p. 12]
Moulin, H. and B. Peleg (1982). Cores of effectivity functions and implementation theory.
Journal of Mathematical Economics 10 (1), 115–145. [Cited on p. 12]
Papadimitriou, C. H. (1994). On the complexity of the parity argument and other inef-
ficient proofs of existence. Journal of Computer and System Sciences 48 (3), 498–532.
[Cited on p. 15]
Perea y Monsuwé, A., M. Jansen, and D. Vermeulen (2000, November). Player splitting
in extensive form games. International Journal of Game Theory 29 (3), 433–450. [Cited
on p. 5]
Perry, M. and P. J. Reny (1994). A noncooperative view of coalition formation and the
core. Econometrica: Journal of the Econometric Society, 795–817. [Cited on p. 12]
Ray, D. and R. Vohra (1999). A theory of endogenous coalition structures. Games and
economic behavior 26 (2), 286–336. [Cited on p. 12]
Ray, D. and R. Vohra (2015). Coalition formation. Handbook of Game Theory with
Economic Applications 4, 239–326. [Cited on p. 12 and 13]
Riker, W. H. and S. J. Brams (1973). The Paradox of Vote Trading. American Political
Science Review 67 (4), 1235–1247. [Cited on p. 46]
Salant, S. W., S. Switzer, and R. J. Reynolds (1983). Losses from horizontal merger: The
effects of an exogenous change in industry structure on Cournot-Nash equilibrium. The
Quarterly Journal of Economics 98 (2), 185–199. [Cited on p. 46]
Savani, R. and B. von Stengel (2015). Game Theory Explorer: software for the applied
game theorist. Computational Management Science 12 (1), 5–33. [Cited on p. 50]
Serrano, R. (2004). Fifty Years of the Nash Program, 1953-2003. SSRN Electronic
Journal. [Cited on p. 5]
Shapiro, C. (1989). Theories of oligopoly behavior. In R. Schmalensee and R. Willig
(Eds.), Handbook of Industrial Organization, Volume 1, pp. 329–414. Elsevier. [Cited
on p. 44]
Sugden, R. (2003). The Logic of Team Reasoning. Philosophical Explorations 6 (3), 165–
181. [Cited on p. 12]
von Neumann, J. and O. Morgenstern (1944). Theory of Games and Economic Behavior
(1953, 3rd ed.). Princeton: Princeton University Press. [Cited on p. 2 and 14]
von Plato, J. (2021). Von Neumann explains his game theory to Gödel, September 1940.
The Mathematical Intelligencer 43 (1), 42–44. [Cited on p. 10]
Zhao, J. (1992). The hybrid solutions of an n-person game. Games and Economic Be-
havior 4 (1), 145–160. [Cited on p. 12]
Appendix
9 Proofs
9.1 Proof of Proposition 1
Proof. Let Γ = (P, T, I, u, S, H) be a cooperative extensive form game with perfect in-
formation in which the only feasible coalitions are singleton coalitions. This implies that
Γ is a non-cooperative extensive form game with perfect information. Notably, there
does not exist any parallel game in Γ. Hence, the non-cooperative reference system at a
non-terminal node x is always the credible reference system at x.
Under these conditions, Definition 5 simplifies as follows: in a CES, a player i = I(x),
at each non-terminal node x ∈ X, chooses the action that maximizes i’s utility, given the
CES of subgames starting at x′ ∈ f (x). Part (1) of Definition 5 remains the same: a
CES in a subgame with only one player prescribes playing the strategy that maximizes
the player’s utility in this subgame. These observations imply that CES strategies can be
found via backward induction. Therefore, those strategies must coincide with subgame
perfect equilibrium strategies in perfect information games.
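The reduction to backward induction can be sketched directly. The toy implementation below (with hypothetical payoffs, not from the paper) computes the backward-induction path of a small perfect-information tree; under singleton coalitions, this is exactly the CES computation described above:

```python
# A sketch of backward induction on a perfect-information tree. With only
# singleton coalitions, the CES algorithm reduces to this procedure.

def backward_induction(node):
    """node is ('leaf', payoffs) or ('move', player_index, {action: child}).
    Returns (action path, payoff vector) of a backward-induction solution."""
    if node[0] == "leaf":
        return [], node[1]
    _, player, children = node
    best = None
    for action, child in children.items():
        path, payoffs = backward_induction(child)
        # The active player keeps the action maximizing their own payoff.
        if best is None or payoffs[player] > best[1][player]:
            best = ([action] + path, payoffs)
    return best

# Toy two-player, two-stage tree (hypothetical payoffs).
tree = ("move", 0, {
    "L": ("move", 1, {"a": ("leaf", (3, 1)), "b": ("leaf", (0, 2))}),
    "R": ("move", 1, {"c": ("leaf", (2, 3)), "d": ("leaf", (1, 0))}),
})

path, payoffs = backward_induction(tree)
assert path == ["R", "c"] and payoffs == (2, 3)
```

Player 1 would choose b after L and c after R, so player 0 prefers R; this is the subgame perfect equilibrium path.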
1. Base case: Consider (n, x̄) = (1, 1), and let (σ∗, π∗) be a CES at Γ(x) such that π∗ = i
and σ∗(x) ∈ arg max_{s∈S} ui(s). Note that the set arg max_{s∈S} ui(s) is non-empty
because there exists at least one pure action (out of finitely many pure actions) of
player i that maximizes player i's utility at x.
2. Induction step: Assume that a CES exists for all subgames with parameters (m, ȳ)
such that 1 ≤ m ≤ n, 1 ≤ ȳ ≤ x̄, (n, x̄) ≠ (1, 1), and (m, ȳ) ≠ (n, x̄). This is a
sound assumption because all these parameters are finite.
The subsequent steps demonstrate the existence of a CES for the subgame Γ(x)
with parameters (n, x̄).
(a) Observe that given [σ∗(x′), π∗(x′)], the best response b∗i(x) exists because it
maximizes i's utility over the action set Ai(x), where Ai(x) is a finite set; though,
this does not preclude the existence of a (completely) mixed behavior strategy
maximizer. Thus, the non-cooperative reference system r0(x) := [τi(x), πi(x)]
is well-defined.
(b) The reference systems, with the exception of non-cooperative reference systems,
are cooperative equilibrium systems of relevant parallel games at x. Observe
that each parallel game ΓPC at x has strictly fewer players than Γ(x). Moreover,
each subgame of the parallel game ΓPC (excluding ΓPC itself) possesses strictly
fewer nodes than ΓPC . Additionally, each parallel game of a subgame of ΓPC
has strictly fewer players than the original subgame. In simpler terms, for
any given game, every parallel game possesses strictly fewer players, and each
subgame possesses strictly fewer nodes than the original game. This iterative
reduction in the magnitude of the parameters continues until the game under
consideration converges to the one specified in the base case. Given the finite
number of parallel games and subgames of a game, the CES for all such games
including the subgame Γ(x) is well-defined. It is noteworthy that the induction
hypothesis assumes the existence of a CES for each ΓPC ; the coherence of this
assumption has now been substantiated.
As a result, (rj(x)), j = 1, ..., j̄, where rj(x) := [τCj(x), πCj(x)], is a finite,
well-defined, and nondecreasing sequence from the perspective of i's payoff.
(c) Assume the non-cooperative reference system r0∗ (x) to be individually rational
(IR). Given this assumption, inductively define a sequence of IR reference
systems starting from r0∗ (x).
Assume that for some j ≥ 0, rj(x) is IR and denoted as rj∗(x). Let j′ > j be
the smallest integer such that rj′(x) is IR with respect to rj∗(x); that is,
ui′(τCj′(x) | πCj′(x)) > ui′(τCj(x) | πCj(x)) for every agent i′ ∈ C̄j′, where Cj′ ⊆
C̄j′ and C̄j′ ∈ πCj′(x). In essence, each agent in the coalition Cj′ strictly prefers
the CES of the parallel game ΓPCj′ over that of ΓPCj. In addition, j′ is chosen
such that there is no IR reference system between rj′∗(x) and rj∗(x).
Because (rj(x)), j = 1, ..., j̄, is a finite and nondecreasing sequence for i and the
sequence of IR reference systems is a strictly increasing subsequence of (rj(x)),
j = 0, ..., j̄, there exists the credible reference system, denoted by rl̄∗(x) =
[τCl̄(x), πCl̄(x)], that maximizes i's utility.
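Step 2.c can be sketched as a simple scan over the reference systems. The Python sketch below abstracts each rj(x) to a payoff vector together with the deviating coalition (players 0-indexed), which suppresses the strategy and partition components of the formal definition; the toy data mirror the root-level comparisons of the worked example in section 6:

```python
# A sketch of Step 2.c: scan the reference systems in order, adopting a new
# system only if every member of its deviating coalition strictly improves
# on the current IR benchmark. systems[0] is the non-cooperative r_0(x),
# which is taken to be IR by convention.

def credible_reference(systems):
    current_payoffs, current_coalition = systems[0]
    for payoffs, coalition in systems[1:]:
        if all(payoffs[m] > current_payoffs[m] for m in coalition):
            current_payoffs, current_coalition = payoffs, coalition
    return current_payoffs, current_coalition

# Toy data mirroring the root of the section 6 example (players 0-indexed).
systems = [
    ((2, 2, 6), ()),
    ((4, 4, 5), (0, 1, 2)),  # grand coalition: rejected, since 5 < 6 for P3
    ((5, 5, 3), (0, 1)),     # {1,2}: accepted, 5 > 2 for both members
    ((6, 3, 5), (0, 2)),     # {1,3}: accepted, 6 > 5 and 5 > 3
]
payoffs, coalition = credible_reference(systems)
assert payoffs == (6, 3, 5) and coalition == (0, 2)
```

The scan reproduces the CES outcome (6, 3, 5) with the stable coalition {1, 3}, as in Figure 6.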
Next, it is left to show that the CES algorithm does not enter any indefinite cycles,
thereby ensuring its termination after a finite number of steps. In the base case, the
algorithm performs a standard utility maximization over a finite set of actions, precluding
the existence of cycles. The induction step requires further elaboration concerning the
potential for cycles. Importantly, this step operates recursively, assuming the existence
of a CES for ‘smaller’ games (i.e., games with fewer players or nodes). At each node, the
algorithm compares the CES for each parallel game, beginning with the non-cooperative
reference system (as outlined in Step 2.a). It then computes the credible IR reference
system at the node through an inductive process (Steps 2.b and 2.c). This procedure
does not include any cycles and will terminate after a finite number of iterations for
two reasons: (i) each parallel game possesses finitely many players and actions, and any
parallel game derived from a subgame possesses strictly fewer players due to coalition
formation; and (ii) the credible IR reference system for each active player exists because
there are finitely many such reference systems and the sequence of IR reference systems
is a finite strictly increasing sequence as shown above. Thus, the induction step is also
free from cycles.
Finally, I establish that the CES algorithm outputs a CES after executing a finite
number of steps. Note that a CES of $\Gamma(x)$ is defined as the system $[\sigma^*, \pi^*]$ where
$[\sigma^*(x), \pi^*(x)] = r_{\bar{l}}^*(x)$, which extends $[\sigma^*(x'), \pi^*(x')]$ for $x' \in F(x)$. The existence
of such a CES, $[\sigma^*, \pi^*]$, is guaranteed for two reasons: (i) at each non-terminal node
$x \in X$, including the root of $\Gamma$, the credible reference system $r_{\bar{l}}^*(x)$ at $x$ exists, as
previously shown, and (ii) the game $\Gamma$ consists of a finite set of nodes.
1. Base case: Let $(n, \bar{h}) = (1, 1)$ and let $g_{P'}$ be the subgame at $\mathrm{root}(h)$, where $P'$ is the set
of players in the subgame.³⁸ A CES at $T(h)$ is defined by the pair $(\sigma^*(h), \pi^*(h)) :=
(\sigma^*, \pi^*)$ such that $\sigma^*(h)$ is a subgame perfect equilibrium in $g_{P'}$ and $\pi^* = P'$, in
which no player forms a coalition.³⁹
2. Induction step:
Assume that a CES is defined for all subtrees with parameters $(m, \bar{y})$ satisfying $1 \leq
m \leq n$, $1 \leq \bar{y} \leq \bar{h}$ such that $(n, \bar{h}) \neq (1, 1)$ and $(m, \bar{y}) \neq (n, \bar{h})$. By assumption,
$[\sigma^*(h'), \pi^*(h')]$ is defined for all $h' \in F(h)$, where $h$ is the root of subtree $T(h)$. The
³⁸ Note that $g_{P'}$ is the only element in $G(h)$.
³⁹ Observe that $\sigma^*(h)$ is a strategy profile in $g_{P'}$.
solution is extended to subtree $T(h)$ with parameters $(n, \bar{h})$ as follows. I first define
reference systems to iteratively compare the solutions of all parallel trees with the
non-cooperative choice of $i$ at $h$.
(a) The non-cooperative reference system at $h$, $r_0(h)$: Let $r_0(h) := [\tau^i(h), \pi^i(h)]$,
which is defined as the extension of $[\sigma^*(h'), \pi^*(h')]$ with $h' \in F'(h)$ to the
subgame starting at node $\mathrm{root}(h)$ such that $[\tau^i(h), \pi^i(h)]$ is an SPNE in this
subgame, where each player in subgame $\Gamma(h)$, excluding those who act at an
information set $h' \in F'(h)$, chooses their strategies non-cooperatively.⁴⁰
(b) Reference systems: For any $(n, \bar{h})$, consider the parallel tree $T_P^C(h)$ where $C \ni i$
is a feasible non-singleton coalition. Because $T_P^C(h)$ has strictly fewer players
than $T(h)$ and has $\bar{h}$ information sets, its CES, $[\tau^C(h), \pi^C(h)]$, is defined by
assumption.
Let $(r_j(h))_{j=1}^{\bar{j}}$ be a sequence for $i$ where $r_j(h) := [\tau^{C_j}(h), \pi^{C_j}(h)]$ satisfying
(i) for every two indices $j$ and $j'$ with $j < j'$, we have $u_i(\tau^{C_j}(h) \mid \pi^{C_j}(h)) \leq
u_i(\tau^{C_{j'}}(h) \mid \pi^{C_{j'}}(h))$ (i.e., the sequence is nondecreasing for $i$), and
(ii) if $u_i(\tau^{C_j}(h) \mid \pi^{C_j}(h)) = u_i(\tau^{C_{j'}}(h) \mid \pi^{C_{j'}}(h))$ and $C_j \subsetneq C_{j'}$, then $r_j$ precedes
$r_{j'}$ for parallel tree coalitions $C_j$ and $C_{j'}$ containing $i$.⁴¹
(c) IR reference systems: The non-cooperative reference system $r_0(h)$ is IR by
definition. The subsequent IR reference systems are defined inductively as follows.
Assume that $r_j(h)$ is IR, denoted $r_j^*(h)$, for some $j \geq 0$. Let $j' > j$
be the smallest number such that $r_{j'}(h)$ is IR with respect to $r_j^*(h)$; i.e.,
$u_{i'}(\tau^{C_{j'}}(h) \mid \pi^{C_{j'}}(h)) > u_{i'}(\tau^{C_j}(h) \mid \pi^{C_j}(h))$ for every $i' \in \bar{C}_{j'}$, where $C_{j'} \subseteq \bar{C}_{j'}$
and $\bar{C}_{j'} \in \pi^{C_{j'}}(h)$.
Let $r_{\bar{l}}^*(h)$ be the credible reference system, which maximizes $i$'s utility. Note
that $r_{\bar{l}}^*(h) = [\tau^{C_{\bar{l}}}(h), \pi^{C_{\bar{l}}}(h)]$. It may be that $r_0^*(h)$, where $\bar{l} = 0$, is the only IR
reference system.
The CES of $T(h)$ is defined as $[\sigma^*, \pi^*]$ where $[\sigma^*(h), \pi^*(h)] = r_{\bar{l}}^*(h)$, which extends
$[\sigma^*(h'), \pi^*(h')]$ for $h' \in F(h)$. When $h$ is the root of the game tree of $\Gamma$, $[\sigma^*, \pi^*]$
is said to be a CES of $\Gamma$.
⁴⁰ It is worth noting that an SPNE is defined with respect to players, which may be coalitions, as
given by the partition $\pi^*(h')$. In addition, the players/agents who act at an information set $h' \in F(h)$
choose their strategies according to the CES assumed by the induction hypothesis.
⁴¹ Ties are broken arbitrarily.
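Taken together, the base case and the induction step give the CES algorithm the shape of a backward-inducting recursion. The following Python-style pseudocode is a schematic sketch only; `solve_spne`, `parallel_trees`, and the other helper names are hypothetical placeholders, not the paper's implementation.

```python
# Schematic pseudocode of the CES algorithm for a subtree T(h).
# All helper names are hypothetical placeholders.

def ces(tree, i):
    # Base case (n, h_bar) = (1, 1): a single subgame g_{P'} with no
    # coalition formation; any SPNE together with the all-singletons
    # partition is a CES.
    if tree.is_base_case():
        return solve_spne(tree), tree.singleton_partition()

    # Step (a): the non-cooperative reference system r_0(h), extending the
    # recursively computed CES of every successor subtree h' in F(h).
    refs = [noncooperative_reference(tree, i)]

    # Step (b): the CES of each parallel tree T_P^C(h); each such tree has
    # strictly fewer players, so the recursion is on a smaller game.
    for parallel in parallel_trees(tree, i):
        refs.append(ces(parallel, i))
    refs.sort(key=lambda r: payoff(r, i))  # nondecreasing for player i

    # Step (c): chain the IR reference systems starting from r_0(h) and
    # return the last one, the credible reference system r_{l_bar}*(h).
    return credible_ir_reference(refs)
```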
9.4 Proof of Theorem 2
The proof strategy for the existence of a CES in imperfect information games largely
mirrors that employed for perfect information games, as outlined in the proof of Theorem 1.
Thus, this subsection focuses exclusively on aspects that differentiate the two proofs. To
be precise, the essence of this proof is built upon three main observations: (i) each max-
imization problem in the CES algorithm is well-defined, (ii) the algorithm precludes the
presence of indefinite cycles, and (iii) the algorithm yields a CES within a finite number
of iterations.
1. Base case: Observe that in this case $g_{P'}$ denotes the subgame at $\mathrm{root}(h)$, where $P'$
represents the set of players in the subgame. A CES $(\sigma^*, \pi^*)$ at subtree $T(h)$ exists
for two reasons: first, no player in $P'$ joins a coalition; second, the number of
players in $g_{P'}$ is finite. Thus, the game $g_{P'}$ possesses an SPNE, $\sigma^*(h)$.
The reason for the absence of indefinite cycles in the CES algorithm is analogous to
that in the perfect information setting. In summary, no cycles arise in either the base
case or any inductive step for two primary reasons: (i) each parallel tree in the algorithm
contains a finite number of players and actions, and a parallel tree derived from a subtree
possesses strictly fewer players than the original subtree, due to the formation of some
coalitions in the parallel tree; (ii) the sequence of IR reference systems for player i is a
strictly increasing and finite sequence, thereby ensuring the existence of the credible IR
reference system for player i.
Finally, it is left to show that the CES algorithm outputs a CES within a finite number
of iterations. Analogous to the argument presented in Section 9.2, the existence of a CES
$[\sigma^*, \pi^*]$ is guaranteed for two reasons: (i) at every information set $h$, including the root
of $\Gamma$, the credible reference system $r_{\bar{l}}^*(h)$ exists, and (ii) the total number of information
sets in $\Gamma$ is finite.
the follower receives $\frac{(a-c)^2}{16b}$. Thus, the non-cooperative reference system of the leader,
in which only the followers collude, is the credible reference system. As a result, in the
three-player Stackelberg model there is a unique CES outcome in which the leader receives
$\frac{13(a-c)^2}{108b}$ and the followers each receive $\frac{(a-c)^2}{32b}$. Note that when the followers collude there
are many ways to produce $q_2 + q_3$, so there are multiple cooperative equilibrium systems;
in all of them, however, the followers collude against the leader, and they all receive the
same payoffs.
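The colluding followers' joint best response can be checked with a short symbolic computation. This is a sketch using sympy under standard Stackelberg assumptions that are not spelled out in this excerpt: linear inverse demand $p = a - b(q_1 + Q)$, constant marginal cost $c$, an equal split of the coalition's profit, and a leader quantity $q_1 = (a-c)/(2b)$, the value consistent with the per-follower payoff $\frac{(a-c)^2}{32b}$ reported above.

```python
import sympy as sp

a, b, c, q1, Q = sp.symbols('a b c q1 Q', positive=True)

# Colluding followers choose their joint quantity Q against the leader's
# q1, facing inverse demand p = a - b*(q1 + Q) and marginal cost c.
profit = Q * (a - b * (q1 + Q) - c)
Q_star = sp.solve(sp.diff(profit, Q), Q)[0]  # (a - c - b*q1) / (2b)

# At the assumed leader quantity q1 = (a - c)/(2b), the coalition earns
# (a - c)^2/(16b), i.e. (a - c)^2/(32b) per follower under an equal split.
each_follower = sp.simplify(
    profit.subs(Q, Q_star).subs(q1, (a - c) / (2 * b)) / 2
)
print(each_follower)  # simplifies to (a - c)**2/(32*b)
```

This reproduces the followers' CES payoff stated above; the leader's payoff depends on further details of the model not contained in this excerpt.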
10 Illustration of the notation
[Figure 10: A three-player cooperative strategic game, where $x_0$ is the root of the game
tree, $x_0$–$x_6$ are non-terminal nodes, and $z_1$–$z_8$ are terminal nodes. Player 1 moves at
$x_0$ (L or R), player 2 at $x_1$ and $x_2$ (a, b and c, d), and player 3 at $x_3$–$x_6$ (e–l), leading
to the terminal nodes $z_1$–$z_8$.]
[Figure 11: a game tree with the same structure in which players 1 and 2 act as a
coalition at every node preceding player 3's moves.]
[Figure 12: An illustration of an original game $\Gamma$ and its three parallel games at the root
$x_0$: $\Gamma_P^{\{1,2\}}$, in which players 1 and 2 act as a coalition; $\Gamma_P^{\{1,3\}}$, in which players 1 and 3
act as a coalition; and $\Gamma_P^{\{1,2,3\}}$, in which all three players act as the grand coalition.]
[Figure 13: An illustration of the systems $[\sigma(x_0, g_{x_0}), \pi(x_0, g_{x_0})]$ and
$[\sigma(x_0, g_{x_3}), \pi(x_0, g_{x_3})]$ on the game tree.]
[Figure 14: A sequence of reference systems $r_0(x_0)$, $r_1(x_0)$, $r_2(x_0)$, $r_3(x_0)$ at $x_0$ with
respect to a given system, $(\sigma, \pi)$, where player 1's preferences are ordered as follows:
$u_1(z_6) \leq u_1(z_1) \leq u_1(z_3) \leq u_1(z_8)$.]