
Trends in Logic 42

Janusz Czelakowski

Freedom and
Enforcement
in Action
A Study in Formal Action Theory
TRENDS IN LOGIC
Studia Logica Library

VOLUME 42

Editor-in-Chief
Heinrich Wansing, Ruhr-University Bochum, Bochum, Germany

Editorial Assistant
Andrea Kruse, Ruhr-University Bochum, Bochum, Germany

Editorial Board
Aldo Antonelli, University of California, Davis, USA
Arnon Avron, University of Tel Aviv, Tel Aviv, Israel
Katalin Bimbó, University of Alberta, Edmonton, Canada
Giovanna Corsi, University of Bologna, Bologna, Italy
Janusz Czelakowski, University of Opole, Opole, Poland
Roberto Giuntini, University of Cagliari, Cagliari, Italy
Rajeev Goré, Australian National University, Canberra, Australia
Andreas Herzig, University of Toulouse, Toulouse, France
Andrzej Indrzejczak, University of Łódź, Łódź, Poland
Daniele Mundici, University of Florence, Florence, Italy
Sergei Odintsov, Sobolev Institute of Mathematics, Novosibirsk, Russia
Ewa Orłowska, Institute of Telecommunications, Warsaw, Poland
Peter Schroeder-Heister, University of Tübingen, Tübingen, Germany
Yde Venema, University of Amsterdam, Amsterdam, The Netherlands
Andreas Weiermann, University of Ghent, Ghent, Belgium
Frank Wolter, University of Liverpool, Liverpool, UK
Ming Xu, Wuhan University, Wuhan, People’s Republic of China

Founding Editor
Ryszard Wójcicki, Polish Academy of Sciences, Warsaw, Poland

SCOPE OF THE SERIES

The book series Trends in Logic covers essentially the same areas as the journal Studia
Logica, that is, contemporary formal logic and its applications and relations to other disci-
plines. The series aims at publishing monographs and thematically coherent volumes dealing
with important developments in logic and presenting significant contributions to logical
research.
The series is open to contributions devoted to topics ranging from algebraic logic, model
theory, proof theory, philosophical logic, non-classical logic, and logic in computer science
to mathematical linguistics and formal epistemology. However, this list is not exhaustive;
moreover, the range of applications, comparisons and sources of inspiration is open and
evolves over time.

More information about this series at http://www.springer.com/series/6645


Janusz Czelakowski

Freedom and Enforcement in Action
A Study in Formal Action Theory
Janusz Czelakowski
Department of Mathematics
and Informatics
University of Opole
Opole
Poland

Trends in Logic
ISSN 1572-6126    ISSN 2212-7313 (electronic)
ISBN 978-94-017-9854-9    ISBN 978-94-017-9855-6 (eBook)
DOI 10.1007/978-94-017-9855-6

Library of Congress Control Number: 2015938093

Springer Dordrecht Heidelberg New York London


© Springer Science+Business Media Dordrecht 2015
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made.

Printed on acid-free paper

Springer Science+Business Media B.V. Dordrecht is part of Springer Science+Business Media


(www.springer.com)
Preface

The aim of this book is to present a formal theory of action and to show the
relations of this theory with logic and other disciplines. The book concerns the
semantic, mathematical, and logical aspects of action.
In contemporary logic, reflections on actions and their performers (agents) have
assumed a growing and expanding importance. The theme of action, particularly
that of effective and rational action, is heavily rooted in the praxeological tradition.
From the viewpoint of logic, the problem of action goes beyond traditional bran-
ches of logic such as syntactics and semantics. The center of gravity of the issues
the problem of action raises is situated on the borderline between logical pragmatics
and praxeology.
The book focuses on the following tasks:

A. Description and Formalization of the Language of Action

In the contemporary literature, one can distinguish at least seven approaches to
action. Each of them outlines a certain perspective of action theory by bringing out
some specific aspects of human actions.
1. The linguistic framing, initiated by Maria Nowakowska. Atomic (in other
words: elementary) actions (procedures), as well as compound actions are dis-
tinguished. Atomic actions are primitive and non-reducible to others.
Compound actions are sets of finite sequences of atomic actions. If one identifies
the set of atomic actions with an alphabet, in the sense of formal linguistics, each
compound action becomes a language over this alphabet, that is, it becomes a set
of words. For a description of compound actions one can then apply the methods
of mathematical linguistics. Compound actions can be—in particular cases—
regular languages, context-free languages, etc. This formulation goes deeply
into the theory of algorithms and is appropriate for describing routine, algo-
rithmizable actions, such as the manufacture of cars or the baking of bread.


2. The dynamic logic approach. The view that human actions can be modeled on
their resemblance to computer programs can be found in the works of many
researchers (Boden, Segerberg, and Suppes, to mention a few). In this formulation,
an action is identified with a binary relation defined on a set of states. This
relation, called the resultant relation, assigns to each state a set of possible
outcomes of the action, when the action is being performed in this state. Each
pair of states belonging to this relation is called a possible performance of the
action. The formulation, in a natural way, links action theory with (fragments of)
set theory, whose main components are graph theory and the theory of relations.
3. Stit semantics gives an account of action from a perspective in which actions are
seen not as operations performed in an action system and which yield new states
of affairs, but rather as selections of pre-existent histories (or trajectories) of the
system in time. Stit semantics is therefore time oriented, and time, being a
situational component of action, plays a special role in it.
4. A special framing of the subject area of action is offered by deontology and
deontic logic. It is from the deontological perspective that a typology of actions
is determined; here, forbidden, permitted, and obligatory actions are distin-
guished. This formulation binds action theory with jurisprudence and the theory
of norms.
5. The fifth perspective of action originates from Dynamic Epistemic Logic (DEL),
the logic of knowledge change. DEL is concerned with actions which change
states of agents’ knowledge and beliefs. DEL builds models of the dynamics of
inquiry and accompanying flows of information. It provides insight into the
properties of individual or group agents (knowers) and analyzes consequences
of epistemic or verbal actions. Public announcements may serve as an example
[see van Benthem (2011); van Benthem et al. (2013)].
6. A pragmatic approach to action is developed by decision theory. From the
perspective of this theory, ‘decision making is a cognitive process resulting in
the selection of a course of actions usually among several alternative scenar-
ios’—see Wikipedia. Decision theory therefore differentiates between problem
analysis, which is a part of the cognitive process, and the selection of an
appropriate course of actions by the agent(s). The information gathered in
problem analysis at each stage of decision making is then used toward making
further steps. Decision theory is not concerned with the performability of actions
but rather with their costs—some actions turn out to be less or more profitable
than others. In other words, decision theory views actions as decisions and
assesses the latter in terms of costs or losses.
7. Game theory is a study of strategies and decision making. There is no strict
division line between game theory and decision theory. It is said that game
theory may be viewed as interactive decision theory because it builds mathe-
matical models of conflict and cooperation between rational decision-makers.

B. Models of Action Theory

In view of the difficulty in determining the adequate language of actions, one should
not expect a theory to be defined in an axiomatic way. The natural compromise
consists in defining some intended models of action. In this book, two classes of
models are discussed:
• the class of elementary action systems,
• the class of situational action systems.
The second of the classes includes the first one as a limit case. The models allow
for unification of most of the formulations of action theory mentioned in (A). On
the basis of the above-mentioned models, one can define compound actions (as is
done in the models considered by Nowakowska); likewise, one can reconstruct
models for dynamic logic. In terms of action systems it is possible to determine
notions of permitted, obligatory, and forbidden actions that are fundamental to
deontology.
The book also outlines the relations obtaining between situational action systems
and situational semantics.

C. Performability of Actions

The central problem that action theory poses for itself to solve is to provide an
adequate concept of the performability of action. The performability of actions
depends on the parameter which is the state of an action system (see point B). What
is more, the very notion of performability itself is not of an absolute character but is
relativized to a possible manner (aspect) of performing an action: for example, an
action can be physically performable (e.g., driving a car along a one-way road in the
opposite direction), when one takes into account technical limitations while being
legally non-performable—if one takes into account (as in the example given) the
limitations arising from the regulations contained in the highway code (actions in
fraudem legis). (In this example, the non-performability of action in the legal sense
means that the action is forbidden.) One of the aims of the book is to present
definitions of the performability of atomic and compound actions, formulated in terms
of elementary and situational action systems.

D. Actions and Their Agents

Actions are performed by single persons (individuals), teams of people (collective
groups), robots, and mixed groups combining collective bodies, robots, and
machines. Performers of actions are referred to with the collective term of agent
of action. The literature on the subject is quite extensive and is focused on pro-
viding truth-conditions for sentences of the form: a is the agent of the action A.
The above problem has been analyzed by Brown, Chellas, Horty, Kanger,
Pacuit, Segerberg, van Benthem, von Wright, and many others. An initial discus-
sion of the issue requires accepting certain ontological assumptions first. Actions
(and acts) are correlated with changes in states of affairs. Besides the states, cate-
gories of actions and their agents are distinguished. Moreover, in this book, the notion
of the situational envelope of an action is introduced; it takes into account such
parameters accompanying an action as time, location, the order of actions, etc.
A difficult part of the theory is the question of the intentionality of an action, e.g.,
when the intention of the agent is to perform an action, yet—for a variety of reasons—
the agent desists from performing it. The notional apparatus permits the introduction
of clear-cut criteria for differentiating between single actions (when the agent is an
individual) and collective actions (when the agent is a collective body), as well as
between one-time actions (such as the stabbing of Julius Caesar) and actions under-
stood as a type (e.g., stabbing as a type of criminal action).
The problem area of verbal actions and models of information flow which
accompanies actions requires special treatment. This question is not studied in the
book; nevertheless, the models of action to be developed allow for their extension
over verbal actions. In this context, one can modify the existing models of belief
systems deriving from Alchourrón, Gärdenfors, and Makinson.

E. Probabilistic Models of Action

The notion of performability mentioned in C is not probabilistic but binary: a given
action A is either performable or not in a definite state u. This notion does not
encompass some aspects of the performability of actions as, e.g., quality grading
(poor, medium, good performance, etc.). One framework that brings theories of
action closer to probability calculus and decision theory introduces a quantitative
measure of the degree of performability of an action. It is the probability of per-
formability of an action in a given state u. Also introduced are other measures of
performability such as the probability of the transition of the system from one state
to another on the condition that an action is performed. The measure is, to use the
simplest example, the probability of hitting—in the determined initial conditions
u—the ‘bull’s-eye’ with an arrow shot from a bow (the intended state v). The
performed action is here shooting an arrow at a target.
Two types of probabilistic models of action are distinguished. The emphasis is
put on their practical applications. The relevant models are constructed from ele-
mentary systems (point B) by introducing (conditional) probabilities of transition
from one state to another, under the assumption that the given action is performed.
(The notion of the performability of an action is distinguished from a possible
performance as well as from a performance of an action—the latter being a one-
time act, belonging to the situational surrounding of the system of action.)

F. Relationship with Deontic Logic

A significant feature of action theory is its firm rooting in the theory of law and
theory of legal and moral norms. This part of action theory is called deontology.
The central place in it is occupied by deontic logic. This is still an area which is
characterized by the existence of disparities concerning fundamental matters, the
proliferation of formal logical systems, as well as a lack of mathematical and logical
results of generally recognized significance and depth. In the formulations of
deontic logic known from the literature, deontic operators are considered as unary
sentence-generating functors defined on sentences. In semantic stylizations, these
formulations distinguish permitted, forbidden, and obligatory states of affairs.
In this book, a formulation of deontic logic is presented, according to which the
deontic operators belong to quite a different category: they are defined on actions,
and not on states of affairs. Thus, in this formulation, it is actions that are the
permitted, forbidden, or obligatory entities, and not sets of states. This leads to two
simple formalized systems of deontic logic, whose semantics is founded on ele-
mentary action systems. The difference between the systems consists in the fact that
the first one validates the so-called closure principle, while the other rejects it. The
closure principle says that every action which is not forbidden is permitted. These
systems are free from deontic paradoxes. Completeness theorems for these systems
are proved. Deontic models of compound actions are also considered.
The book presents a new approach to norms. Norms in the broad sense are
viewed as certain rules of action. In the simplest case they are instructions which,
under given circumstances, permit, forbid, or order the performance of an action.

G. Relations with the Theory of Algorithms and Programming

The theory of algorithmizable actions is a vital part of action theory. Here, algo-
rithmizable actions are set against actions that are creative, single, and unique in
their nature.
There is no satisfactory definition of algorithmizable actions. According to an
informal definition, an algorithm is a set of rules that precisely defines a sequence of
actions. Instances of algorithmizable actions are regular or context-free compound
actions. According to the above linguistic approach, regular (respectively, context-
free) compound actions are defined as regular (context-free) languages over the
alphabet consisting of atomic actions.
A part of the book establishes certain results on algorithmizable actions referring
to the notion of an action program. The prototype here is the meaning of the term
“program”, with which it is invested by computer science. The above-mentioned
problem area displays relations with algorithmic logic in the sense of Salwicki and
the theory of algorithms; yet, it is not identical with them.

H. Non-monotonic Reasonings and Action

An approach to non-monotonic reasonings which links them with the theory of
action is outlined. A general semantic scheme of defining non-monotonic reason-
ings in terms of frames is presented. Each frame F is a set of states W endowed with
a family R of relations of a certain kind. If S is a propositional language, then each
frame determines in a natural way a consequence-like operation on S. The latter
does not generally exhibit all properties of consequence operations as, e.g.,
monotonicity. Such operations exemplify non-monotonic patterns of reasoning. The
class of resulting structures encompasses preferential model structures and supra-
classical reasonings.

I. The Existing State of Knowledge in the Scope of the Study Area

Reflection on the rational and irrational actions of human beings is not alien to the
Polish logical tradition. Books by Tadeusz Kotarbiński took the lead. The works of
Kazimierz Ajdukiewicz, directed toward practical application of logic, and the
activity of Kotarbiński’s disciples (Nowakowska, Stonert, Konieczny, Gasparski)
testify to this only too well. It seems research in this area needs new impulses so as
to permit it ultimately to penetrate to a broader extent and more profoundly into the
scientific and technical achievements of recent decades, especially as regards
dynamic logic, theory of automata, and programming.
The present book puts the emphasis on the mathematical and formal-logical
aspects of an action. Despite being firmly grounded in the tradition of the Lvov-
Warsaw School, to a broad extent it takes into account the output of many schools,
including the Scandinavian, Dutch, or Pittsburgh ones, to mention a few. It also
takes into consideration the author’s own modest contributions. In the years 2001–
2003, the author was in charge of the research project “Logika i działanie (Logic
and Action)”, signature 1 H01A 011 18, financed within a generous grant obtained
from the then Komitet Badań Naukowych (State Committee for Scientific Research).
The support from the Committee resulted in the first drafts of this book.
Fragments of the book have been presented by the author over many years at
different conferences, both at home and abroad. The first presentation took place at
the University of Konstanz in October 1992, and the next during a meeting at Umeå
University (Sweden) in September 1993. Among other conferences the following
must be noted: Logic and its Applications in Philosophy and the Foundations of
Mathematics, organized annually since 1996 in Karpacz or Szklarska Poręba (The
Giant Mountains, Poland), where the author presented his papers relating to various
aspects of action a number of times. Throughout all this time the author has
received valuable advice and criticism from many people whose individual names
shall not be mentioned, but who are all cordially thanked.

It remains to hope that the book will (modestly) contribute to the formulation of
a generally accepted paradigm of action theory.

Acknowledgments

As a non-native speaker of English, the author is indebted to Dr. Matthew Carmody
for proofreading the whole text of the book and for making appropriate lin-
guistic recommendations and amendments.

Opole, Poland Janusz Czelakowski


September 2014
Contents

Part I Elements of Formal Action Theory

1 Elementary Action Systems 3
   1.1 Introductory Remarks 3
   1.2 Elementary Action Systems 6
   1.3 Examples 13
   1.4 Performability of Atomic Actions in Elementary Action Systems 21
   1.5 Performability and Probability 26
   1.6 Languages, Automata, and Elementary Action Systems 41
   1.7 Compound Actions 50
   1.8 Programs and Actions 60

2 Situational Action Systems 63
   2.1 Examples—Two Games 63
       2.1.1 Noughts and Crosses 63
       2.1.2 Chess Playing 65
   2.2 Actions and Situations 69
       2.2.1 An Example. Thomson Lamp 80
   2.3 Iterative Algorithms 81
   2.4 Pushdown Automata and Pushdown Algorithms 90
   2.5 The Ideal Agent 98
   2.6 Games as Action Systems 101
   2.7 Cellular Automata 108
   2.8 A Bit of Physics 112

3 Ordered Action Systems 115
   3.1 Partially Ordered Sets 116
   3.2 Fixed-Point Theorems for Relations 118
   3.3 Ordered Action Systems: Applications of Fixed-Point Theorems 126
   3.4 Further Fixed-Point Theorems: Ordered Situational Action Systems 131

Part II Freedom and Enforcement in Action

4 Action and Deontology 143
   4.1 Atomic Norms 147
   4.2 Norms and Their Semantics 152
   4.3 The Logic of Atomic Norms 162
   4.4 The Closure Principle 171
   4.5 Compound Actions and Deontology 176
   4.6 Righteous Systems of Atomic Norms 181
   4.7 More About Norms 189

5 Stit Frames as Action Systems 195
   5.1 Stit Semantics 195
   5.2 Stit Frames as Action Systems 201
   5.3 Ought Operators 203

6 Epistemic Aspects of Action Systems 209
   6.1 Knowledge Models 209
       6.1.1 Introduction 209
       6.1.2 Propositional Epistemic Logic 211
       6.1.3 The Idea of Knowledge 211
       6.1.4 Gettier 211
       6.1.5 Exploring (K) 212
       6.1.6 Extensionality 213
       6.1.7 What Is Involved in Knowing? 213
       6.1.8 Situational Semantics 214
       6.1.9 The Logical Framework 215
       6.1.10 Other Renditions 216
       6.1.11 Knowledge Models as Action Systems 216
   6.2 The Frame Problem 218
   6.3 Action and Models for Non-monotonic Reasonings 225
       6.3.1 Reasoning in Epistemic Contexts 225
       6.3.2 Frames for Non-monotonic Reasonings 232
       6.3.3 Tree-Like Rules of Conduct 238

Bibliography 247

Symbol Index 253

Index of Definitions and Terms 255

Index of Names 259


Part I
Elements of Formal Action Theory
Chapter 1
Elementary Action Systems

Abstract This chapter expounds basic notions. An elementary action system is a
triple consisting of the set of states, the transition relation between states, and a
family of binary relations defined on the set of states. The elements of this family are
called atomic actions. Each pair of states belonging to an atomic action is a possible
performance of this action. This purely extensional understanding of atomic actions
is close to dynamic logic. Compound actions are defined as sets of finite sequences of
atomic actions. Thus compound actions are regarded as languages over the alphabet
whose elements are atomic actions. This chapter is concerned with the problem of
performability of actions and the algebraic structure of the set of compound actions
of the system. The theory of probabilistic action systems is also outlined.

1.1 Introductory Remarks

Any practical activity requires a strict distinction between (material or mental)
objects and the states ascribed to them. A purposeful activity is always undertaken by
an agent who has some capacity to predict possible interactions between objects—
particularly himself and material objects—and is an activity aimed at bringing about
certain desirable states of those objects.
A man facing a problem situation first tries to produce a clear image of all the
factors defining the situation. In other words, he wants to realize in what state of
things he finds himself at the moment. This subjective world image then undergoes
a certain intellectual process—utilizing the available tools, he determines possible
variants of a further course of events. This is a preliminary phase of solving the
problem, consisting in finding all the factors determining the current state of affairs,
and then defining a set of actions to be performed to reach the intended state of things.
The second, practical phase aims to find the techniques to affect the object and bring
about a desirable change in it. This is achieved by working out an action plan which
is to lead to the goal. And, finally, the last stage—verification of whether the actual
effects of the performed actions correspond to the intended ones. We may thus speak
about the three-phase structure of an action: a clear recognition of the situation, the

defining of an action plan, i.e., a set of actions which are to be performed in a definite
order, and lastly the checking of the effects of the action.
In the contemporary literature one may distinguish at least seven approaches to
action. They are briefly discussed in the preface. Each of them is centered around
a specific aspect of action. We shall present here in greater detail the three that
are pertinent to the content of this book. One, having its roots in dynamic logic,
treats human action as a computer program. The conviction that the notion of a
computer program may be a useful analogy for human action is developed in the
papers by Boden (1977), Suppes (1984), Segerberg (1989), and others. As Segerberg
writes (1989, p. 327): “(…) in understanding what happens when, for example John
performs the action of eating an apple it might be useful to view John as running a
certain ‘routine’ (…).” (‘Routine’ is the term suggested by Segerberg for the informal
counterpart of the notion of a program when speaking of human action.)
In dynamic logic, a universe of possible states is considered. Corresponding to
each program α, there is a set |α| of paths (i.e., sequences of states), each path
representing a possible computation according to that program. The set |α| is called
the intension of the program α. In propositional dynamic logic (PDL), the intension
of a program is usually thought to be a set of pairs of states rather than a set of paths—in the
perspective of PDL it is only the starting point and the endpoint of a path that matter.
Thus to each program α a certain binary relation on the set of states is assigned—
one may call it the resultant relation of a program. The operation that identifies a
program with its resultant relation leads, in consequence, to distinguishing a family
of binary relations on the set of states. In the context of human action theory modeled
on computer science, the relations are just called actions (see Segerberg 1989).
The approach presented in Nowakowska’s book (1979) describes human action
as a complex system of irreducible simple actions (‘elementary’ or atomic actions).
Compound actions are viewed as sets of sequences of simple actions but the latter
generally cannot be performed in random order. Simple action is here a primitive
notion. It is an approach with formal-linguistic origin: simple actions can be per-
formed only in a certain order just as letters are used only in certain orders to make
up words. This idea allows us to use the concepts of formal linguistics for the analysis
of the ‘grammar’ of action. The starting point in Nowakowska’s action theory is an
ordered pair
(D, L)

where D is a finite set and L is a subset of D ∗ , the set of all finite sequences of
elements of D. The members of D are interpreted as simple actions; L in turn is the
set of physically-feasible finite strings of simple actions. The set L, in analogy with
linguistics, is called the language of actions.
Many of the notions presented in Nowakowska’s book such as distributive classes
and parasite sequences, are drawn from the arsenal of formal linguistics. They are
a helpful tool in the analysis of actions. Her theory also contains specific notions
which have no linguistic counterparts, e.g., the complete ability, key actions, etc.
The third approach to action originates from deontic logic. It dates back to the
seminal work of von Wright (1963). This approach deals with an analysis of the
notion of a norm and the typology of norms and actions. In the literature devoted to
deontic logic one distinguishes the following three categories of actions:
1. obligatory actions
2. permissive actions
3. prohibitive actions.
Norms define the ways and circumstances in which actions are performed, e.g., by
specifying the place, time, and order in which they should be executed. Some norms
may oblige the agent to perform an action while others may prohibit performing it
in the given situation.
This book comes in between the above-mentioned approaches to action. As in
Nowakowska’s book, the notion of an atomic action is the starting point in our
investigation. Compound actions are defined as sets of finite strings of atomic actions.
The latter are binary relations on so-called discrete systems, i.e., every atomic action
A is identified with a set of ordered pairs (u, w), where u and w are possible states of
the system. Every pair (u, w) belonging to A is called a possible performance of the
atomic action A; w is a possible effect of the action A in the state u. The above purely
extensional understanding of atomic actions is close to that of dynamic logic. In turn,
compound actions are regarded as languages over the alphabet whose elements are
atomic actions.
Throughout the book we use the standard notation adopted in Zermelo-Fraenkel
set theory (ZF). N is the set of natural numbers with zero. It is often marked as ω.
A binary relation is any set of ordered pairs. A binary relation on a set W is any
subset of the Cartesian product W × W . ℘ (W ) is the power set of W (= the set of
all subsets of W .)
Let P and Q be binary relations on a set W . We define:

Set-theoretic join: P ∪ Q := {(u, w) : P(u, w) or Q(u, w)}
Set-theoretic intersection: P ∩ Q := {(u, w) : P(u, w) and Q(u, w)}
Composition: P ◦ Q := {(u, w) : for some v ∈ W, P(u, v) and Q(v, w)}
Iteration: P⁰ := E_W, Pⁿ⁺¹ := Pⁿ ◦ P for n ≥ 0, where E_W := {(u, u) : u ∈ W} is the diagonal relation on W
Iterative closure: P* := ⋃{Pⁿ : n ≥ 0}
Transitive closure: P⁺ := ⋃{Pⁿ : n ≥ 1}
Inversion: P⁻¹ := {(u, w) : (w, u) ∈ P}.

The set
Dom(P) := {x : (∃y)(x, y) ∈ P}
is the domain of P while
C Dom(P) := {y : (∃x)(x, y) ∈ P}
is the co-domain of P.
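
Since relations on W are just sets of ordered pairs, the operations above can be computed directly when W is finite. The following sketch is an illustrative aside in Python (the function names are ad hoc, not taken from the book); join and intersection are the ordinary set operations P | Q and P & Q.

def compose(P, Q):
    """P ∘ Q = {(u, w) : for some v, (u, v) in P and (v, w) in Q}."""
    return {(u, w) for (u, v) in P for (v2, w) in Q if v == v2}

def diagonal(W):
    """E_W = {(u, u) : u in W}."""
    return {(u, u) for u in W}

def power(P, n, W):
    """P^n, with P^0 = E_W and P^(n+1) = P^n ∘ P."""
    R = diagonal(W)
    for _ in range(n):
        R = compose(R, P)
    return R

def iterative_closure(P, W):
    """P* = union of P^n for n >= 0; computed by saturation (W assumed finite)."""
    R = diagonal(W)
    while True:
        bigger = R | compose(R, P)
        if bigger == R:
            return R
        R = bigger

def transitive_closure(P, W):
    """P+ = union of P^n for n >= 1 = P ∘ P*."""
    return compose(P, iterative_closure(P, W))

def inverse(P):
    return {(w, u) for (u, w) in P}

def dom(P):
    return {u for (u, _) in P}

def codom(P):
    return {w for (_, w) in P}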
1.2 Elementary Action Systems

Before introducing the formal apparatus of action theory, let us consider the following
simple (hypothetical) example. It is midnight. Mrs. Smith is sitting in the living room
and listening to the radio. She wants to go to bed. But first she has to perform two
actions:

A Turning off the radio


B Switching off the light

The current state u 0 is described by the sentence α ∧ β, where

α: The radio is turned on


β: The light in the living room is switched on

She wants to achieve the state u f in which the radio and the light are turned off.
It is assumed that the sentence “The radio is turned off” is equivalent to “The radio
is not turned on.” The same applies to the second sentence concerning the light switch.
The final state u f is therefore described by the sentence ¬α ∧ ¬β.
u f may be attained from u 0 in two ways—by turning off the radio, and then
switching off the light, or vice versa. There are two intermediate states between u 0
and u f :

u1 : α ∧ ¬β

in which the radio is on and the light is off, and

u2 : ¬α ∧ β

in which the radio is off but the light is still on. We therefore distinguish the following
four-element set W of states:

W := {u 0 , u 1 , u 2 , u f }.

According to the formalism adopted in this book, each atomic action is represented
by a set of ordered pairs of states—the first element of the pair representing the state
of the system just before the given action is being executed, and the second element
representing the state of the system immediately after performing the action. In this
example the actions A and B are identified with the following sets of ordered pairs
of states:

A = {(u 0 , u 2 ), (u 1 , u f )},
B = {(u 0 , u 1 ), (u 2 , u f )},

respectively.
When it is completely dark in the room, Mrs. Smith is so scared that she cannot
locate the radio set in the room. It is not possible for her to reach the state u f from
the state u 1 (she cannot turn off the radio after switching the light off). To capture
such a situation, there is another key ingredient introduced in the action formalism—
the relation R of direct transition from one state to another. The relation R points
out which direct transitions between states are physically possible. Here “physical
possibility” refers to any transition which is effected by performing one of the above
actions and taking into account Mrs. Smith’s abilities to do them. Accordingly:

R := {(u 0 , u 2 ), (u 2 , u f ), (u 0 , u 1 )}.

(There is no direct transition from u 1 to u f .)

[Fig. 1.1 Diagram of the states u 0 , u 1 , u 2 , u f with arrows for the direct transitions in R]
Collecting together the above ingredients one arrives at the system

M = (W, R, {A, B}), (1.2.1)

which encodes Mrs. Smith’s current states and the actions she has to perform so that
she can go to the bedroom.
A second hypothetical model is one in which Mrs. Smith sees perfectly in darkness.
In this model there is a direct transition from u 1 to u f —the state u f is achieved by
turning off the radio in the darkness of the living room. Here

R = {(u 0 , u 2 ), (u 1 , u f ), (u 2 , u f ), (u 0 , u 1 )}.

(1.2.1) is an instantiation of the general concept of an elementary action system—
the notion which we shall discuss in this chapter.
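
As a concrete illustration (this snippet is mine, not the book's; the state names mirror those above), the first Mrs. Smith model can be written down and queried for which possible performances are realized, i.e., also belong to R:

# States: u0 (radio on, light on), u1 (radio on, light off),
#         u2 (radio off, light on), uf (radio off, light off).
W = {"u0", "u1", "u2", "uf"}

A = {("u0", "u2"), ("u1", "uf")}   # A: turning off the radio
B = {("u0", "u1"), ("u2", "uf")}   # B: switching off the light

# Direct transition relation of the first model: Mrs. Smith cannot find
# the radio in the dark, so the transition from u1 to uf is missing.
R = {("u0", "u2"), ("u2", "uf"), ("u0", "u1")}

M = (W, R, {"A": A, "B": B})       # an elementary action system, cf. (1.2.1)

for name, action in M[2].items():
    for u, w in sorted(action):
        status = "realized" if (u, w) in R else "possible but not realized"
        print(f"{u} {name} {w}: {status}")

In the second model (Mrs. Smith sees perfectly in darkness) one would simply add ("u1", "uf") to R, after which every possible performance is realized.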
Yet another model is taken from everyday life. You want to get dressed in the
morning and then go to the office. You are to perform two actions: A—putting on the
trousers, and B—putting on the shoes. It is clear that you first put on the trousers and
then the shoes. (It is rather unusual to perform these actions in the reverse order, especially
when the trousers are skintight.) The corresponding action system is similar to the
above one. There are four states distinguished: u 0 —you are undressed, u 1 —the
trousers are on, u 2 —the shoes are on, u f —the trousers and the shoes are on. We
may assume that

[Fig. 1.2 Diagram of the states u 0 , u 1 , u 2 , u f with arrows for the direct transitions in R]

A = {(u 0 , u 1 ), (u 2 , u f )} and B = {(u 0 , u 2 ), (u 1 , u f )}.

The relation R reflects the above limitations. Thus

R = {(u 0 , u 1 ), (u 1 , u f ), (u 0 , u 2 )}.

Intuitively, A is performable in u 0 but unperformable in u 2 . B is performable in u 0
and in u 1 . Thus, if you first perform B in u 0 , thereby arriving at u 2 , there is no possibility
of performing A in u 2 , because R excludes it (there is no R-transition from u 2 to
the final state u f ).
A discrete system is a pair
U = (W, R) (1.2.2)

where W is a nonempty set called the set of states, and R is a binary nonempty
relation on W , R ⊆ W × W , called the direct transition relation between states. If
u, w ∈ W then the fact that (u, w) belongs to R is also marked by ‘u → R w’ or
‘R(u, w)’, or, more succinctly, by ‘u R w’, and read: ‘The system U directly passes
from the state u to the state w’.
An atomic action (on the system U) is an arbitrary nonempty subset A ⊆ W × W ,
i.e., it is a nonempty binary relation on W .
Definition 1.2.1 An elementary discrete action system is a triple

M = (W, R, A) (1.2.3)

where U = (W, R) is a discrete system and A is a nonempty family of atomic
actions on U. The members of the family A are called atomic actions of the action
system (1.2.3). □
Instead of ‘ A(u, w)’ we often write ‘u → A w’ or simply ‘u A w’ and read: ‘The
action A carries the system U from the state u to the state w.’ Any pair (u, w) such
that A(u, w) holds, is called a possible performance of the action A (in the state u).
Thus, any atomic action is identified with the set of its possible performances.1
Definition 1.2.1 does not assume that the atomic actions in A are mutually disjoint
relations. Hence, if (u, w) ∈ A ∩ B for some A, B ∈ A, the pair (u, w) is a possible
performance of A and a possible performance of B as well. (We shall return to this
issue later.)
Every elementary action system is a Kripke frame U = (W, R) enriched with a
bunch A of binary relations on it. Thus action systems are multi-relational Kripke
frames in which one relation is distinguished, viz., R. The relation R encodes a
possible evolution of the frame, i.e., it imposes limitations on the possibility of
direct transitions of the system U from some states to others. A variety of possible
interpretations of R are obtained by choosing (in a proper way) interesting classes
of discrete systems. To mention only the most important of these interpretations:
the relation R can be interpreted as the physical possibility of a transition from
one state to another, or as compatibility with a social role, or as compatibility with
labor regulations of a given institution. Apart from physical limitations, it is often
necessary in some action systems to take into account restrictions that are imposed
by law. These systems may be called deontic action systems—some actions in such
a system are legally forbidden, e.g., on the strength of traffic regulations, though
they are physically feasible (actions in fraudem legis). Finally, R can be viewed as
compatibility with the principles or empirical procedures of one or any other theory.
Each of the mentioned interpretations is bound with a certain set of discrete systems.
The relation R also determines when certain transitions u R w are irreversible.
Hence, R, in general, need not be symmetric. The relation R ∗ , the transitive and
reflexive closure of R, is called the relation of indirect transitions between states of
the system (W, R), or in short, the transition relation.
A direct transition u R w is said to be accomplished by an atomic action A ∈ A
if (u, w) is a possible performance of A, i.e., u A w holds.
A possible performance u A w of A is realized in the system M = (W, R, A) if
(u, w) is also an element of R, i.e., u R w holds. The fact that a performance u A w
is realized thus depends on the selection of the direct transition relation R.
Action theory presupposes a definite ontology of the world. We accept that all
individuals are divided into actual, potential, and virtual ones (see Scott 1970). The
above division may vary within a flow of time—a given individual, treated initially
as a virtual or potential entity, may become an actual individual, i.e., an element of
physical reality. In the context of this work, where the notion of a system and its
states are understood as widely as possible, we assume that the system can exist in
actual, potential, or virtual states. The distinction between particular parts of this
division will not be clear-cut but it can be useful. When, for example, one intends to
build a house, all actions (whether they are atomic or compound) such as preparing
a project, finding a site to build on, the purchase of materials, can be regarded as

1 Strictly speaking, the adopted set-theoretic formalism represents everyday situations in which

actions and states of affairs are involved in much the same way as probabilistic spaces model
random events.
transformations between virtual or potential states of the system, which is the house
to be built; in these states the house is not a physical reality. One can assume that the
actual states of the house are designated by a number of actions starting with laying
the foundations and closing with the completion of the house. A similar division of
states appears while designing and sewing a dress, making a new model of a car, and
so on.
However, when dealing with elementary action systems, we do not distinguish
explicitly a separate category of actual states of the system as a subset of W . We
shall uniformly speak of states of the action system.
Ota Weinberger (1985, p. 314) writes: “An action is a transformation of states
within the flow of time (including the possibility of an identity transformation—a
standstill) which fulfills the following conditions:
(i) the transformation of states (or behaviour) is accounted to a subject (to an agent)
(ii) the subject has at his disposal a range for action, i.e., a class of at least two states
of affairs which are possible continuations of a given trajectory in the system
of states
(iii) an appropriate information process underlying the subjects’ decision-making
causes the future development of the system to fall into one of the alternative
possibilities within the range of action.”
The concept of action presented here is convergent with that of Weinberger; it is
however more comprehensive—the notions of a situation and the situational envelope
of action are extensively highlighted in it. The issue is discussed at greater length in
Chap. 2.
Atomic actions need not be partial functions from the set W of states into W .
But each binary relation A on W defines a total function from W into the power set
℘ (W ), namely to A the mapping δ A : W → ℘ (W ) is assigned, where

δ A (u) := {w ∈ W : A(u, w)}.

The set δ A (u) may be empty. δ A (u) is called the set of possible effects (on the system)
of the action A in the state u.
The correspondence A → δ A is bijective: every function δ : W → ℘ (W ) is
of the form δ A for some unique binary relation A on W . (A is defined as follows:
A(u, w) if and only if w ∈ δ(u), for all u, w ∈ W .)
Since every binary relation on W can be uniformly replaced by a function δ from
W to ℘ (W ), one can move from the ‘relational’ image to the ‘functional’ image of
action theory, i.e., instead of considering elementary action systems M = (W, R, A)
given as in Definition 1.2.1, one may examine systems of the form

(W, δ R , {δ A : A ∈ A})

as well with δ R and δ A defined as above.
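
The passage between the ‘relational’ and the ‘functional’ image is a purely mechanical conversion; a minimal sketch of it (in Python, assuming W is finite; the helper names are hypothetical) might look as follows.

def to_delta(A, W):
    """The map delta_A : W -> powerset(W), delta_A(u) = {w : A(u, w)}; may be empty for some u."""
    return {u: frozenset(w for (v, w) in A if v == u) for u in W}

def to_relation(delta):
    """Recover the relation A from delta: A(u, w) if and only if w is in delta(u)."""
    return {(u, w) for u, effects in delta.items() for w in effects}

# The correspondence is bijective: to_relation(to_delta(A, W)) == A for every relation A on W.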


The pair (W, δ A ) is customarily called the directed graph of the relation A (the
digraph of A, for short). If W is finite, the graph (W, δ A ) is often represented by
means of its diagram. The elements of W are depicted by means of points on the
plane called vertices of the graph. If w ∈ W and δ A (w) is nonempty, the vertex
w is linked with each element u ∈ δ A (w) by means of an arrow. Each such arrow
is called an oriented edge; w is its beginning. For example, if W = {w1 , w2 , w3 }
and A = {(w1 , w1 ), (w1 , w2 ), (w2 , w3 )}, the graph (W, δ A ) is represented by the
following diagram.

[Fig. 1.3 Diagram of the digraph (W, δ A ) with vertices w 1 , w 2 , w 3 : a loop at w 1 , an arrow from w 1 to w 2 , and an arrow from w 2 to w 3 ]
The directed graph (W, δ A ) has no multiple edges. This means that it does not
contain—as a subgraph—a graph of the form.

[Fig. 1.4 Two vertices w 1 , w 2 joined by multiple parallel edges]
More formally, for any two distinct vertices w1 , w2 there exists at most one arrow
from w1 to w2 and at most one arrow leading from w2 to w1 .
The directed graph (W, δ A ) may contain loops, i.e., it may contain subgraphs of
the form.

[Fig. 1.5 A single vertex w 1 with a loop]
This may happen if A contains pairs (w, w), for some w ∈ W .
The fact that atomic actions and the relation R of direct transitions between
states are represented in the form of digraphs (on the set W of vertices) has two
consequences. On the one hand, the possibility of graphic representation of the action
system as a set of points and arrows on the plane makes the structure of the system
more lucid and simpler to grasp intuitively. (This remark concerns only the action
systems which can be represented by means of families of planar graphs.) On the
other hand, defining an action system as a set of digraphs (without multiple edges)
enables us to apply methods and results that belong to graph theory.
We shall now give a few key definitions in the theory of elementary action systems.

Definition 1.2.2 The (finite) reach of an elementary action system M = (W, R, A)
is the binary relation ReM on W which is defined as follows: ReM (u, w) if and only
if either u = w and R(u, u) or there exists a finite nonempty sequence of states
u 0 , . . . , u n of W satisfying the following conditions:
(i) u 0 = u, u n = w;
(ii) for each i (0 ≤ i ≤ n−1) there exists an atomic action Ai such that Ai (u i , u i+1 )
and R(u i , u i+1 ). □

Condition (ii) is more succinctly written as

(iii) u 0 A0 , R u 1 A1 , R u 2 . . . An−2 , R u n−1 An−1 , R u n .

The reach ReM of the system M = (W, R, A) is always a subrelation of R ∗ .
(In fact, ReM ⊆ R + ∪ (E W ∩ R).) ReM is reflexive if and only if R is reflexive. The
reach of M is the set of indirect transitions between states which are accomplished
by means of finite strings of atomic actions of A (including the empty string).
The fact that A0 , . . . , An−1 is a finite sequence of atomic actions of A and u 0 , . . . , u n
is a nonempty sequence of states of W such that Ai (u i , u i+1 ) for all i
(0 ≤ i ≤ n − 1) is depicted as a figure

u 0 A0 u 1 A1 u 2 . . . u n−2 An−2 u n−1 An−1 u n . (1.2.4)

Any such figure (1.2.4) is called a (possible) operation of the action system
M = (W, R, A). If the sequence A0 , . . . , An−1 is empty, then (1.2.4) reduces to
the sequence u 0 with n = 0.
The operation (1.2.4) is realizable in the elementary action system M if and only
if u i R u i+1 holds for i = 0, . . . , n − 1. If A0 , . . . , An−1 is empty, then (1.2.4) is
realizable if and only if u 0 R u 0 . The realizable operation (1.2.4) is written as in (iii),
that is, as

u 0 A0 , R u 1 A1 , R u 2 . . . u n−2 An−2 , R u n−1 An−1 , R u n . (1.2.5)

Thus, the reach of M is the resultant relation of the set of all (finite) realizable
operations of M: u ReM w holds if and only if there exists a realizable operation of
M, which leads from u to w.
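
For a finite system, the reach can be computed by saturating the set of realized performances. The sketch below (Python; the helper names are of my choosing, not the book's) follows Definition 1.2.2 directly.

def reach(W, R, actions):
    """Re_M: the resultant relation of all realizable operations of M = (W, R, actions)."""
    # Realized performances: possible performances that are also R-transitions.
    realized = {(u, w) for A in actions for (u, w) in A if (u, w) in R}
    # The empty string of actions contributes (u, u) exactly when R(u, u) holds.
    re = {(u, u) for u in W if (u, u) in R} | realized
    # Close under appending one more realized performance (W finite, so this stops).
    changed = True
    while changed:
        changed = False
        for (u, v) in list(re):
            for (v2, w) in realized:
                if v == v2 and (u, w) not in re:
                    re.add((u, w))
                    changed = True
    return re

# For the first Mrs. Smith model sketched earlier, ("u0", "uf") is in the reach,
# while ("u1", "uf") is not.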
The following definition isolates some classes of elementary action systems.
Definition 1.2.3 Let M = (W, R, A) be an elementary action system. The system M is
(a) separative if A ∩ B = ∅ for any two distinct actions A, B ∈ A;
(b) normal if A ⊆ R, for all A ∈ A;
(c) strictly normal if ⋃A = R;
(d) equivalential if it is separative and strictly normal; that is, the actions of A form
a partition of R;
(e) strictly equivalential if the function assigning Dom(A) to each A ∈ A is
one-to-one and the family {Dom(A) : A ∈ A} forms a partition of Dom(R);
(f) complete if the reach ReM of the system is equal to R + ∪ (E W ∩ R);
(g) reversible if for any atomic action A ∈ A and every pair (u, w) such that u A, R w,
the pair (w, u) belongs to the reach of the system;
(h) deterministic if the actions of the system are functional, that is, for each A ∈ A,
if (u, v) ∈ A and (u, w) ∈ A, then v = w. □
The following observations are immediate:
Proposition 1.2.4 Let M = (W, R, A) be an elementary action system.
(1) If M is strictly normal, then it is normal.
(2) If M is strictly equivalential, then it is equivalential.
(3) M is reversible if and only if the reach of M is a symmetric relation.
(4) If M is complete and the relation R of direct transition is symmetric, the system
M is reversible. □
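
For finite systems, several of the classes in Definition 1.2.3 can be tested mechanically. The sketch below (Python; it reuses the reach function sketched above, and the function names are mine) checks a few of them, with reversibility tested as in clause (g).

def is_separative(actions):
    acts = list(actions)
    return all(not (acts[i] & acts[j])
               for i in range(len(acts)) for j in range(i + 1, len(acts)))

def is_normal(R, actions):
    return all(A <= R for A in actions)           # every action is a subrelation of R

def is_strictly_normal(R, actions):
    return set().union(*actions) == R             # the actions jointly cover R

def is_deterministic(actions):
    return all(len({w for (v, w) in A if v == u}) <= 1
               for A in actions for (u, _) in A)  # each action is a partial function

def is_reversible(W, R, actions):
    re = reach(W, R, actions)
    realized = {(u, w) for A in actions for (u, w) in A if (u, w) in R}
    return all((w, u) in re for (u, w) in realized)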
Definition 1.2.5 Let M = (W, R, A) be an elementary action system. A task for
M is any pair (Φ, Ψ ) such that Φ ⊆ W and Ψ ⊆ W . The set Φ is called the initial
condition of the task while Ψ is the terminal condition of the task. □
Elementary action systems are designed as set-theoretic or diagrammatic con-
structs, whose role is to assure the implementation of a given task, that is, to provide
a bunch of strings of atomic actions which would lead from Φ-states to Ψ -states. This
aspect makes the theory of atomic action systems similar to automata theory. But the
focus of action theory is on the performability of actions rather than on the collecting
together of accepted strings of atomic actions leading from initial to terminal states.

1.3 Examples

To warm up we give an analysis of the following well-known logical puzzle.


Example 1.3.1 (Crossing a river) Three missionaries and three cannibals are gath-
ered together on the left bank of a river. There is a boat large enough to carry one or
two of them. They all wish to cross to the right bank. However, if cannibals outnum-
ber missionaries on either bank, the cannibals indulge in their natural propensities
and eat the missionaries. Is it possible to cross the river without a missionary being
eaten?
The problem is modeled as a simple action system with a task. Possible states are
labeled by figures of the form

(+)CCMM | CM,

where the symbol (+) indicates that the boat is on the left bank, two cannibals and
two missionaries are on the left bank, and one cannibal and one missionary are on
the other bank.
The symbol (+)CCCMMM | ε marks the initial state, and ε | (+)CCCMMM—
the required final state.
There are altogether 18 states that meet the above requirements. Apart from the
initial state

(+)CCCMMM | ε,

there are 8 states more in which the boat is on the left bank:

(+)CCMMM | C, (+)C | CCMMM,


(+)CCMM | CM, (+)CM | CCMM,
(+)CCC | MMM, (+)MMM | CCC,
(+)CC | CMMM, (+)CMMM | CC

and 8 dual states, where the boat is on the right bank:

CCMMM | (+)C, C | (+)CCMMM,


CCMM | (+)CM, CM | (+)CCMM,
CCC | (+)MMM, MMM | (+)CCC,
CC | (+)CMMM, CMMM | (+)CC

together with the final state

ε | (+)CCCMMM,

which is the dual of the initial state.


Some other states as, e.g., (+)CCM | CMM, are fatal and therefore excluded. In
this state two cannibals consume the missionary who has remained on the left bank.
After this action the state (+)CCM | CMM would turn into (+)CC | CMM. The
states (+)ε | CCCMMM and CCCMMM | (+)ε are also excluded.
In the above description of states cannibals and missionaries are not individuated,
that is, cannibals and missionaries are treated as two three-element collectives, whose
members are not discriminated.
There are 5 atomic actions labeled, respectively, as M, C, CM, M M, CC. These
labels inform who is sailing in the boat from one bank to the other. For, e.g., CM
marks the action according to which one cannibal and one missionary travel across
the river.
According to the arrow depiction of atomic actions, M consists of the following
pairs of states:

(+)CCMMM | C →M CCMM | (+)CM,


(+)CM | CCMM →M C | (+)CCMMM

as well as their inverses

CCMM | (+)CM →M (+)CCMMM | C,


C | (+)CCMMM →M (+)CM | CCMM.

The action C consists of the pairs

(+)CCCMMM | ε →C CCMMM | (+)C,


(+)CCMMM | C →C CMMM | (+)CC,
(+)C | CCMMM →C ε | (+)CCCMMM
(+)CCC | MMM →C CC | (+)CMMM,
(+)CC | CMMM →C C | (+)CCMMM,

and their inverses

CCMMM | (+)C →C (+)CCCMMM | ε,


CMMM | (+)CC →C (+)CCMMM | C.
ε | (+)CCCMMM →C (+)C | CCMMM.
CC | (+)CMMM →C (+)CCC | MMM
C | (+)CCMMM →C (+)CC | CMMM.

The action CM consists of the following pairs:

(+)CCCMMM | ε →CM CCMM | (+)CM,


(+)CCMM | CM →CM CM | (+)CCMM,
(+)CM | CCMM →CM ε | (+)CCCMMM

and their inverses

CCMM | (+)CM →CM (+)CCCMMM | ε,


CM | (+)CCMM →CM (+)CCMM | CM,
ε | (+)CCCMMM →CM (+)CM | CCMM
In turn, the action M M consists of the pairs

(+)CCMM | CM →MM CC | (+)CMMM,


(+)CMMM | CC →MM CM | (+)CCMM,

and their inverses

CC | (+)CMMM →MM (+)CCMM | CM,


CM | (+)CCMM →MM (+)CMMM | CC

The action CC consists of the pairs

(+)CCCMMM | ε →CC CMMM | (+)CC,


(+)CCC | MMM →CC C | (+)CCMMM,
(+)CC | CMMM →CC ε | (+)CCCMMM
(+)CCMMM | C →CC MMM | (+)CCC

and their inverses

CMMM | (+)CC →CC (+)CCCMMM | ε


C | (+)CCMMM →CC (+)CCC | MMM,
ε | (+)CCCMMM →CC (+)CC | CMMM,
MMM | (+)CCC →CC (+)CCMMM | C.

The relation R of direct transition between states respects the above rules for
safely crossing the river. When two states u and w are connected by R, i.e., u → R w
holds, then w is achieved from u by performing one of the above actions. There are
no direct transitions from the above states to fatal states. (Since fatal states have been
deleted, each of the above actions A obeys the principle that missionaries are not
outnumbered by cannibals in the states entering possible performances of A. If we
decided to adjoin fatal states to the above list of possible states, we would also be
forced to add a new atomic action, that of eating missionaries by cannibals.) This
fact guarantees that the above system is normal. We therefore have that R is simply
the set-theoretic union of the above actions,

R := C ∪ M ∪ CC ∪ C M ∪ M M.

R thus contains 32 pairs. R is a symmetric relation because the relations C, M, CC,


C M, M M are all symmetric.
It immediately follows from the above definitions that the resulting action system

M = (W, R, A)
with A := {C, M, CC, CM, MM} is separative, normal, complete, and reversible.
In fact, for every pair (u, w) such that u A, R w, it is the case that w A, R u, for any
action A ∈ A.
There are dead states: (+)MMM | CCC and CCC | (+)MMM are examples.
If u 0 represents any of these two states, there is no state w such that u 0 → R w.
But these two conceivable dead states have another characteristic feature: they are
isolated. This means that there is no state v by means of which one may enter u 0 ,
that is, there in no state v such that v → R u 0 . 

Example 1.3.2 This is a simplified description of washing in an automatic washing
machine.
Let us consider three sentences:

α: The bed linen is not washed
β: The linen is in the machine
γ: The machine is turned on.

In the (unavoidably incomplete) automatic linen washing scheme we wish to
present, one can assume that each state is described by one sentence of the form
c1 α & c2 β & c3 γ, where cδ = δ if c = 1 and cδ = ¬δ if c = 0. (¬δ is the negation of
the sentence δ.) Thus, the set W of states consists of eight elements. They are:
w1 : The linen is not washed & the linen is not in the machine &
the machine is not turned on (i.e., it is turned off)
w2 : The linen is not washed & the linen is in the machine &
the machine is turned on
w3 : The linen is not washed & the linen is in the machine &
the machine is turned off
w4 : The linen is not washed & the linen is not in the machine &
the machine is turned on
w5 : The linen is washed & the linen is not in the machine &
the machine is turned off
w6 : The linen is washed & the linen is not in the machine &
the machine is turned on
w7 : The linen is washed & the linen is in the machine &
the machine is turned off
w8 : The linen is washed & the linen is in the machine &
the machine is turned on.

The set A of atomic actions consists of five elements:
A: Putting the linen into the machine
B: Taking the linen out of the machine
C: Turning on the machine
D: Turning off the machine
E: Washing the linen in the machine

The action E is exclusively performed by the machine, i.e., the linen is washed
by having put it first dry into the machine and then turning on the machine. The
remaining actions are performed by the person operating the machine. Such actions
as programming adjustment, providing a quantity of detergent, starching, spinning,
etc., are omitted.
Thus the process of washing can be defined as the following operation:

w1 A w3 C w2 E w8 D w7 B w5 .

The aim of washing is the implementation of the task ({w1 }, {w5 }), i.e., moving
the system from the w1 -state to the w5 -state.
The states w4 and w6 , though attainable by means of the above actions, are irrele-
vant to the process of washing. The w6 -state is attained from the w8 -state by turning
off the machine, taking out the linen, and turning it on again, which can be depicted
as follows:
w8 D w7 B w5 C w6 .

Similarly, the state w4 is attained from w1 by performing the action C : w1 C w4 .


All the actions A, B, C, D, and E are binary relations on W . It can be assumed
that the relation E consists only of one pair:

E := {(w2 , w8 )}.

(The action E is treated here as atomic; in a more accurate description of washing,
E is a complex system of actions.) The action A is reduced to putting the dry linen
into the turned off machine:
A := {(w1 , w3 )}.

The machine can be turned on or turned off when the linen is in the machine or
outside as well; at the same time the linen can be washed or dry. Thus

C := {(w1 , w4 ), (w3 , w2 ), (w5 , w6 ), (w7 , w8 )},


D := {(w2 , w3 ), (w4 , w1 ), (w6 , w5 ), (w8 , w7 )}.

The washed linen (if the machine has been on already) or the dry linen can be
taken out from the machine (if the agent operating the machine decides to take it out
again as soon as it had been put in, giving up washing). Thus

B := {(w7 , w5 ), (w3 , w1 )}.



All the actions A, B, C, D, and E are partial functions. Also, C = D −1 .


What is the relation R of direct transition on the set W ? It cuts out from the set
W ×W those pairs (u, w) not physically realized. This means that the direct transition
from u into the state w is impossible because of technical or physical limitations.
First of all, the relation R excludes in the above model any transitions from “wet” to
“dry” states—we assume that the dry linen can be washed out but its drying in the
machine after washing is impossible. (In the above example the washing machine
is not equipped with a dryer.) Secondly, the relation R reflects the fact that it is
not possible (or at least undesirable) to put in or take out directly the linen from the
working, turned on machine. Consequently, taking into account the above limitations
we have that

R := {(w1 , w3 ), (w1 , w4 ), (w2 , w3 ), (w3 , w1 ), (w3 , w2 ), (w4 , w1 ), (w5 , w6 ),
(w5 , w7 ), (w6 , w5 ), (w7 , w5 ), (w7 , w6 ), (w7 , w8 ), (w8 , w7 ), (w2 , w8 )}.

This relation is graphically depicted below:


[Fig. 1.6: graph of the direct transition relation R on the states w1 , . . . , w8 .]

The action system thus defined

(W, R, {A, B, C, D, E}) (1.3.2)

is deterministic and separative. The system is also normal. However, the system is
not complete—the transition w5 R w7 does not belong to the reach of the system.
This is due to the fact that only unwashed linen can be put into the machine. (If we
extend action A with the pair (w5 , w7 ), the system would become complete.) 
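The bookkeeping of this example can be spelled out in a few lines. The following sketch is an added illustration (Python; the state and action names follow the example, while the helper deterministic and the particular tests are ours). It encodes the actions A–E and the relation R and checks the properties just claimed; the completeness test used here is only a rough surrogate, asking whether every R-transition is covered by some atomic action, which is where the pair (w5, w7) fails.

A = {("w1", "w3")}                                             # put the dry linen into the machine
B = {("w7", "w5"), ("w3", "w1")}                               # take the linen out
C = {("w1", "w4"), ("w3", "w2"), ("w5", "w6"), ("w7", "w8")}   # turn the machine on
D = {("w2", "w3"), ("w4", "w1"), ("w6", "w5"), ("w8", "w7")}   # turn the machine off
E = {("w2", "w8")}                                             # the machine washes the linen
actions = [A, B, C, D, E]

R = {("w1", "w3"), ("w1", "w4"), ("w2", "w3"), ("w3", "w1"), ("w3", "w2"),
     ("w4", "w1"), ("w5", "w6"), ("w5", "w7"), ("w6", "w5"), ("w7", "w5"),
     ("w7", "w6"), ("w7", "w8"), ("w8", "w7"), ("w2", "w8")}

def deterministic(rel):
    # a partial function: at most one successor per state
    return all(len({w for (x, w) in rel if x == u}) <= 1 for (u, _) in rel)

print("deterministic:", all(deterministic(rel) for rel in actions))
print("separative:", all(X.isdisjoint(Y) for X in actions for Y in actions if X is not Y))
print("normal:", all(rel <= R for rel in actions))                # every atomic action lies inside R
print("R covered by the actions:", R <= set().union(*actions))    # False: (w5, w7) and (w7, w6) are covered by no action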

Example 1.3.3 The third example we present is more sophisticated and it refers
directly to proof theory.
Let S be a nonempty set, whose elements are called sentential formulas. The
nature and the internal structure of the propositions of S are inessential here. The set
S itself will be called a sentential language.
A finitary Hilbert-type rule of inference in S is any set of pairs of the form (X, φ),
where X is a finite set of formulas of S and φ is an individual formula of S. (X, φ)
is read as “Infer φ from X .”
Now let T be a fixed set of sentential formulas of S which will be called a theory
in S, and let Θ be a fixed set of inference rules. A proof from T carried out by means
of the rules of Θ is any nonempty finite sequence of formulas (φ0 , φ1 , . . . , φn ) such
that for each i (0 ≤ i ≤ n) either φi ∈ T or there exists a subset J of {0, . . . , i − 1}
and a rule r ∈ Θ such that ({φ j : j ∈ J }, φi ) ∈ r . The notion of a proof is strictly
relativized to the set of rules Θ.
Let WT be the set of all proofs from T . We define the relation R of direct transition
on WT as follows:
R(u, w) if and only if the proof w is obtained from the proof u by adjoining a
single formula at the end of u, i.e., w = u ∧ (φ) for some φ.
(u 1 ∧ u 2 is the concatenation of the sequences u 1 and u 2 .) R is therefore irreflexive.
The following atomic action Ar is assigned to each rule r in Θ:
Ar (u, w) if and only if there exists a sentential formula φ and a subsequence
(φ1 , . . . , φk ) of u such that w = u ∧ (φ) and ({φ1 , . . . , φk }, φ) ∈ r .
In other words, the ‘action’ of Ar on a proof u consists in adjoining a new formula
at the end of u according to the rule r (provided that such a formula can be found).
Ar thus suffixes proofs by adding one new element. Each action Ar is therefore an
irreflexive relation.
Let A := {Ar : r ∈ Θ}. The triple M := (WT , R, A) is an elementary action
system. The system is normal but it need not be deterministic. The reach of the
system is equal to R + ; this follows from the fact that ∪{A : A ∈ A} = R and R is
irreflexive. Therefore the system M is complete.
The system need not be separative—it may happen that a pair (u, w) belongs to
Ar ∩ As for some different rules r and s.
For every formula φ belonging to T , the sequence (φ) of length 1 is a proof.
The proofs of length 1 are called initial. Let ΦT be the totality of all initial proofs.
(Without loss of generality the members of ΦT may be identified with the formulas
of T .)
For an arbitrary formula φ of T , we define Ψφ to be the set of all proofs w such
that φ occurs as the last element of w. The pair (ΦT , Ψφ ) is a task for the system M.
In order to achieve a state of the system which is in Ψφ one has to find a proof of φ
from T . Thus, the system M can be moved from a state of ΦT to a state of Ψφ if and
only if Ψφ is nonempty. 
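The actions Ar are easy to visualize on a toy instance. The sketch below is an added illustration only; the language (strings), the theory T, and the single rule, a modus ponens table restricted to three variables, are chosen here just for demonstration. It computes the set δAr (u) of proofs obtainable from a given proof u by one application of the rule.

# Formulas are strings; "p->q" stands for an implication that modus ponens can detach.
T = {"p", "p->q", "q->r"}                      # a small theory: its members are the initial proofs
modus_ponens = {(frozenset({a, a + "->" + b}), b)
                for a in ("p", "q", "r") for b in ("p", "q", "r")}

def successors(proof, rule):
    # delta_{A_r}(u): all proofs obtained from `proof` by appending one formula licensed by `rule`.
    results = []
    for premises, conclusion in rule:
        if premises <= set(proof):             # every premise already occurs in the proof
            results.append(proof + (conclusion,))
    return results

u = ("p", "p->q")                              # a proof from T of length 2
print(successors(u, modus_ponens))             # [('p', 'p->q', 'q')]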
The Paradox of Two Agents This paradox concerns rather the notion of agency
than performability of actions. Let M := (W, R, A) be an elementary action system

operated by two different agents a and b. Suppose A, B ∈ A are two atomic actions
which are not disjoint, i.e., A ∩ B = ∅ (vide Example 1.3.3). Let us assume that a is
the agent of the action A and b is the agent of the action B. Suppose there are states
u, w ∈ W such that u A, R w and u B, R w, i.e., (u, w) is a realizable performance
of both A and B. This fact may seem to be paradoxical. The paradox is in the fact
that performing the action A in the state u by the agent a, resulting in w, is at the
same time performing the action B (by the agent b) although the agents a and b are
entirely different and while a is busy with A, b is at rest.
One may argue that it resembles the situation in which competences criss-cross,
i.e., the situation where the obligations (and actions) of people working in an office
overlap—in such a situation one person does what is some other person’s duty and
vice versa.
The paradox is merely apparent. In the framework of elementary action systems,
discussions concerning agency are premature. The notion of an agent belongs to a
richer theory, where situational aspects of performability of actions, in particular
agency, are taken into account. The situational notion of the performability of an
action (which will be discussed later) takes into account the situational environment
in which the system is—in particular it introduces the notion of an agent and describes
the ways the work of the agents is organized and how they operate the system. The
notion of an agent is thus incorporated into the situational envelope of an elementary
action system. Within the framework of elementary systems, talk of agents operating
the system is devoid of meaning. On the other hand, it may happen that one agent
may be assigned to many atomic actions in a given action system and he may perform
them simultaneously.
Actions are not the same objects as acts of performing actions. The phrase Driving
a car, being a name of a class of compound actions, bears a meaning that is compre-
hensible by any car driver. Driving a car by Adam is a name of a subclass of actions
belonging to the former class. In turn, the utterance Adam is driving a car refers to
an act of performing an action from this subclass.

1.4 Performability of Atomic Actions in Elementary Action


Systems

Let M := (W, R, A) be an elementary action system and let A be a fixed action of A.


We wish to define a precise mathematical notion of performability of the action A.
What does it mean to say that A is performable? Suppose we are given states u, w such
that A(u, w) holds. A quick answer to the question is this: the agent of A is able to
move the system from the u-state to the w-state. But this answer does not fully satisfy
us. First, the meaning of the phrase ‘is able’ is unclear. We want the meaning of the
phrase to be related directly to the intrinsic properties of the system. Second, the term
‘agent’ is not incorporated into the language of elementary action systems. Third,
the above explication does not take into account the fact that performability depends

on the type of limitations pertaining to the system. For example, the action may be
performable if only physical limitations are taken into account and at the same time it
may be unperformable with respect to legal regulations. According to the discussion
of definition of an action system (Definition 1.2.1), the relation R specifies a definite
type of limitation imposed on the functioning of the system. Therefore, a reliable
definition of performability should take on a form relativized to the fixed relation R
of direct transition. Fourth, we want performability to be the property of an atomic
action dependent on states of the system. This means that performability is treated
here as a ‘parameterized’ property defined with respect to a given state of the system.
It is clear that the states the system operates may vary with the passage of time. In
fact we distinguish two forms of performability—the strong and the weak ones. We
call them ∀-performability and ∃-performability, respectively. Here are the formal
definitions.

Definition 1.4.1 Let M := (W, R, A) be an elementary action system. Let A be an
atomic action in A, and u ∈ W .
(i) The action A is performable in the state u if and only if there exists a state
w ∈ W such that u A, R w (i.e., the transition u R w is accomplished by A);
(ii) A is totally performable in the state u if A is performable in u and for every
state w ∈ W , if u A w, then u R w. 

The performability of A is thus tantamount to the existence of a direct R-transition


from u to a state w such that (u, w) is a possible performance of A. We shall use the
terms “A is performable in u” and “A is ∃-performable in u” interchangeably. The
second conjunct of the definition of total performability of A in u states that every
possible performance of A in u is admissible by R. We shall henceforth also use the
term “A is ∀-performable in u” when we speak of the total performability of A in u.
The notion of the performability of an action (in a given state) is relativized to the
relation R of direct transition and so one should rather speak of R-performability.
Thus, different interpretations of R determine different meanings of the performa-
bility of an action.
The above definition offers a concept of zero-one performability. This definition
does not differentiate, say, poor performance from good performance or medium
performance of an action. A probabilistic conception of performability is outlined in
Sect. 1.5.
One may argue that Definition 1.4.1 does not take into account the epistemic
abilities of the agents operating the system. The agent of an action A may know how
A is defined, i.e., it is conceivable for him which pairs (u, w) belong to A and which
do not. But at the same time he may not know the limitations imposed by R, i.e., he
may not know if the transition from a state u to another state w is admissible in the
system by R or not. (We shall return to this issue in the last chapter devoted to the
epistemic status of the agent of an action.) In situational action systems agents may
be ‘components’ of the situational envelope. In these systems the relation of direct
transition may take into account the agents’ praxeological and epistemic abilities.

Observation 1.4.2
(1) An atomic action A is performable in u if and only if δ A (u) ∩ δ R (u) ≠ ∅.
(2) A is totally performable in u if and only if ∅ ≠ δ A (u) ⊆ δ R (u). 
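Observation 1.4.2 turns directly into a test that can be run on finite systems. The sketch below is an added illustration (Python; the helper names delta, performable and totally_performable are ours); it is instantiated on the archer's system discussed in Sect. 1.5, with A as in (1.5.1) and R as in (1.5.4).

def delta(rel, u):
    # the set of direct successors of u under a binary relation
    return {w for (x, w) in rel if x == u}

def performable(A, R, u):            # Observation 1.4.2(1): delta_A(u) and delta_R(u) intersect
    return bool(delta(A, u) & delta(R, u))

def totally_performable(A, R, u):    # Observation 1.4.2(2): delta_A(u) is nonempty and lies inside delta_R(u)
    return bool(delta(A, u)) and delta(A, u) <= delta(R, u)

A = {("u0", "w0"), ("u0", "w1")}     # the archer's action (1.5.1)
R = {("u0", "w1")}                   # the 'rules of the game' relation (1.5.4)
print(performable(A, R, "u0"), totally_performable(A, R, "u0"))   # True False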

Definition 1.4.1 is extended by defining the performability of an atomic action in


the system.

Definition 1.4.3 Let A be an atomic action of the elementary action system M :=
(W, R, A).
(i) A is performable in the system M if and only if A is performable in every state
u ∈ W such that δ A (u) ≠ ∅.
(ii) A is totally performable in M if and only if A is totally performable in every
state u such that δ A (u) ≠ ∅. 

Just as in Definition 1.4.1 instead of the phrase “A is performable in M” we shall


sometimes use the term “A is ∃-performable in M”, and analogously, we shall say
that A is ∀-performable in M whenever A is totally performable in M.

Observation 1.4.4 The atomic action A is totally performable in M if and only if
A ⊆ R. 

The dual property to performability is that of the unperformability of an action.


The atomic action A is said to be unperformable in a state u ∈ W if and only if
δ R (u) ∩ δ A (u) = ∅. In particular, if the set δ A (u) is empty, A is unperformable in u.
A is said to be totally unperformable in the system M if and only if A is unper-
formable in every state of W .

Observation 1.4.5 The atomic action A is totally unperformable in M if and only
if A ∩ R = ∅. 

A weaker variant of the notion of unperformability is the negation of total
performability. It does not seem, however, that the latter notion has any notewor-
thy properties or applications.
Total performability always implies performability. The converse holds for deter-
ministic actions, i.e., actions that are partial functions on W . The distinction between
performability and total performability is essential in the case of indeterministic
atomic actions.
Definition 1.4.1.(i) of performability of an atomic action A in a state u does
not specify a state w for which u A, R w holds; w may be an arbitrary element of
δ A (u)∩δ R (u). We define another property that can provide more detailed information
about a given action.
Let (u, w) be a pair of states such that R(u, w) holds. We recall that the transition
u R w is accomplished by an action A if and only if A(u, w) holds, i.e., the transition
u R w is a possible performance of A. If u R w is accomplished by A, then A is
∃-performable in u. This property of transitions (let us call it accomplishment) has
the following feature.

Coherence Principle If a transition u R w is accomplished by A and (u, w) is a
possible performance of another action B, then u R w is accomplished by B.
The Coherence Principle yields the following conclusion which may be regarded
as paradoxical:
If u R w is accomplished by A (and hence A is performable in u), then every
atomic action B ∈ A for which (u, w) is a possible performance is also
performable in u.
The ‘paradox’ is caused by the fact that if a transition u R w is accomplished
by A then at the same time u R w is accomplished by any action B for which
(u, w) is a possible performance. This conclusion seems bizarre for one would expect
that the accomplishment of the transition u R w by one action A should not affect
the accomplishment of this transition by any other action of the system. (This is
essentially the same problem which we discussed at the end of Sect. 1.3.)
The ‘paradox’ arises again from confusing the notion of performability with the
act of performing an action. In the language of elementary action systems a discourse
on actions involving time parameters is excluded. In particular, the notion of the
simultaneity of accomplishment of performability of actions is beyond this language.
We shall look at the problem in the next chapter, where issues centering around the
situational envelope of an elementary action system are discussed. If the system is
separative, i.e., A ∩ B = ∅ for any two distinct atomic actions A, B ∈ A, then the
above ‘problem’ disappears. On the other hand, from the viewpoint of elementary
action theory, it makes sense to speak of a succession of elementary actions in the
context of compound actions—this notion will be presented in Sect. 1.7.
We close this section with some observations on equivalent elementary action
systems.

Definition 1.4.6 Let (W, R) be a discrete system. Two elementary action systems
M := (W, R, A) and N := (W, R, B) (on (W, R)) are equivalent if and only if they
have the same reach, i.e., ReM = ReN . 

Examples 1.3.1–1.3.3 provide action systems, in which all atomic actions are
subsets of the relation R of direct transition, i.e., the atomic actions are totally per-
formable. This fact is not unexpected—every action system is equivalent to a system
in which the atomic actions are totally performable.

Proposition 1.4.7 Every elementary action system M := (W, R, A) is equivalent
to a normal action system N := (W, R, B).

Proof Given an action system M, define B := {A ∩ R : A ∈ A} \ {∅}. Then M and
(W, R, B) are equivalent. 
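The construction used in the proof is a one-line operation once atomic actions are stored as sets of pairs; a minimal sketch (added for illustration, reusing the archer's data from Sect. 1.5):

def normalize(actions, R):
    # B := {A ∩ R : A ∈ A} \ {∅}: keep only the realizable performances of each action
    return [A & R for A in actions if A & R]

A = {("u0", "w0"), ("u0", "w1")}
R = {("u0", "w1")}
print(normalize([A], R))             # [{('u0', 'w1')}]: every action of the new system lies inside R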

The above proposition gives rise to the question of whether every atomic action can
be identified with the set of its realizable performances in the system. (By analogy, in
logic any proposition is identified with the set of all possible states of affairs (‘worlds’)
in which the proposition is true.) Every action would be thus totally performable in

the system and we would not bother with the situations in which an action cannot be
performed. It seems that such a solution would lead us too far. In the case of action
systems the problem is more involved—the set W of states may admit various possible
relations R of direct transition defining different aspects (e.g., legal, physical, etc.)
of the performability of the same action. The identification of an action A with its
realizable performances (relative to a fixed relation R) will make it impossible to
analyze the properties of the action with respect to other admissible interpretations
of the relation of direct transition.
As each atomic action A is a binary relation on the set of states W , the domain
Dom(A) = {u ∈ W : (∃w ∈ W ) u A w} may be thought of as the set of states in
which the agent(s) intend to perform A. Some states u in Dom(A) may be impossible
from the perspective of performability of A—the agent may declare the intention of
performing A in u but his abilities are insufficient; the action may simply be unper-
formable. In turn, the set C Dom(A) = {w ∈ W : (∃u ∈ W ) u A w} encapsulates
the states which are the possible results of performing A. A state w in C Dom(A) is
therefore reachable by means of the action A if it is the case that u A w and u R w hold
for some u ∈ Dom(A). From the praxeological point of view one may safely assume
in Definition 1.4.1 that all atomic actions are total relations, i.e., Dom(A) = W for
all A. This assumption may be interpreted to mean that the agents may attempt to
perform A in whichever state of the system. But the actual performability of A takes
place in only few states u, viz., those that are linked with some state in C Dom(A)
by the relation R.

Proposition 1.4.8 For every elementary action system M := (W, R, A) there exists
an equivalent deterministic action system N := (W, R, B).

Proof Assuming the Axiom of Choice, every binary relation is the set-theoretic
union of partial unary functions. Hence, for every action A ∈ A, there exists a
family B(A) := {Bi : i ∈ I (A)} of partial functions on W such that A = ∪{Bi :
i ∈ I (A)} (= ∪B(A)). Let B := ∪{B(A) : A ∈ A}. The action system N :=
(W, R, B) is deterministic and equivalent to M. 

Note The Axiom of Choice, abbreviated as AC, states that:


For every nonempty family X of nonempty sets there exists a function f defined
on X such that f (A) ∈ A for all A ∈ X.
In set theory functions are uniformly defined as certain sets of ordered pairs. 
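For finite (or effectively listed countable) relations the decomposition invoked in the proof of Proposition 1.4.8 can be produced explicitly and needs no appeal to the Axiom of Choice; the sketch below (an added illustration) peels off one partial function at a time.

def split_into_partial_functions(A):
    # Write a binary relation A as a union of partial functions (finite case).
    layers, remaining = [], set(A)
    while remaining:
        layer, seen = set(), set()
        for (u, w) in sorted(remaining):   # take at most one pair per first coordinate
            if u not in seen:
                layer.add((u, w))
                seen.add(u)
        layers.append(layer)
        remaining -= layer
    return layers

A = {(1, "a"), (1, "b"), (2, "a"), (3, "c")}
for B in split_into_partial_functions(A):
    print(B)   # two partial functions whose union is A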

Yet another procedure, contrasting with the above ‘normalizing’ of an action
system M = (W, R, A), consists in extending the atomic actions of A to total
relations on W . (A relation B ⊆ W ×W is total (or serial) on W if Dom(B) = W .) To
present it, it is assumed that the relation R of direct transition satisfies the condition:
for every state u ∈ W , δ R (u) is a proper subset of W . (This condition is not, however,
excessively restrictive.)
Let A be a fixed relation in A and let u ∈ W . We define the relation Au on W as
follows. If δ A (u) ≠ ∅, we put: Au := A. If δ A (u) = ∅ then we select an arbitrary

element w ∈ W such that w ∉ δ R (u) and put: Au := A ∪ {(u, w)}. Assuming the
Axiom of Choice, we see that the iteration of this procedure enables us to extend A
to a total relation Ă on W . Moreover, if A is a partial function, then the extension
Ă is a total function on W . The resulting action system N := (W, R, { Ă : A ∈ A})
is not normal. It has, however, another property: for every A ∈ A, the action Ă
is performable (totally performable) in the sense of N in the same states in which
A is performable (totally performable) in the sense of M. This is an immediate
consequence of the fact that A and Ă have the same realizable performances. Thus,
from the viewpoint of action theory, the abilities of the agents of A and Ă are the
same.
Without loss of generality, the scope of the action theory presented so far can be
restricted to elementary action systems in which all atomic actions are total relations
on the sets of states. In particular, in the case of deterministic action systems, it can
be assumed that all members of A are unary total functions on W . (Deterministic
action systems would be then represented by monadic algebras (W, R, f 1 , f 2 , . . .)
together with a distinguished binary relation R.) This assumption however is not
explicitly adopted in this book.
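The extension of an atomic action to a total relation described above can likewise be carried out mechanically for a finite W; the pair added for a 'silent' state u is deliberately chosen outside δ R (u), so the realizable performances, and hence performability, are untouched. A sketch (added illustration; the choice of the new endpoint is arbitrary and is made here by taking the first available state):

def totalize(A, R, W):
    # Extend A to a total relation: every state with empty delta_A(u) gets one successor outside delta_R(u).
    A_breve = set(A)
    for u in sorted(W):
        if not any(x == u for (x, _) in A):
            blocked = {w for (x, w) in R if x == u}          # delta_R(u), assumed a proper subset of W
            A_breve.add((u, next(v for v in sorted(W) if v not in blocked)))
    return A_breve

W = {"u0", "w0", "w1"}
A = {("u0", "w1")}
R = {("u0", "w1")}
print(totalize(A, R, W))   # A together with harmless pairs such as ('w0', 'u0') and ('w1', 'u0')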

1.5 Performability and Probability

We remarked in Sect. 1.4 that the distinction between performability and total per-
formability is relevant in the case of atomic actions which are not partial functions.
A simple illustration of such a situation is the shooting of an arrow at a board. We
present here a simplified model of shooting which omits a number of aspects of this
action. It is assumed that the shooting takes place in one fixed state u 0 , which is
determined by the following factors: the distance between the archer and the board
is 70 m, there is no wind and the equipment of the archer is in good working order.
The space W of states consists of the state u 0 and the states which represent the pos-
sible effects of shooting. It is also assumed that there are only two possible effects
(outcomes) of the shooting—missing the board (state w0 ) and hitting the board (state
w1 ). Thus, W = {u 0 , w0 , w1 }. (We neglect the eventuality of the breaking of the
bowstring, etc. One may also consider wider and more natural spaces of states with
a richer gradation of hits. The hit value is measured by the number of points scored
in a hit. Assuming that for a hit on the board one may score up to 10 points, the set
of possible effects would contain 10 elements, say w1 , . . . , w10 , where wk denotes
the hit with k points.) Shooting an arrow from the bow is treated here as an atomic
action A with two possible performances (u 0 , w0 ) and (u 0 , w1 ) only, i.e.,

A := {(u 0 , w0 ), (u 0 , w1 )}. (1.5.1)

In the first case the shot is a miss, in the second a hit. In other words, a possible
performance of A is simply the shooting of an arrow, regardless of whether the arrow
hits the target or not.

Disregarding the regulations of the game and taking into account only the physical
limitations determined by the environment of the archer, the relation R will take the
following form, within the same space W of states:

R := {(u 0 , w0 ), (u 0 , w1 )}. (1.5.2)

Then A = R and so A would be an atomic action totally performable in the action
system
(W, R, {A}). (1.5.3)

Under the interpretation of R provided by (1.5.2), the shot from the bow is performed
irrespective of hitting or missing the board.
Generally, the fact that an arbitrary action A in a system M is stochastic manifests
itself in two ways: either as the tendency to a course other than the intended course of
performing A in u or as the tendency to perform, by the agent in a state u, an action
other than A. These propensities of the system are beyond the agent’s control and
independent of his will. In the former case the planned action A is actually performed
but the course of action is different from the intended one (the arrow may miss the
target). In the latter case the agent actually performs an action B which is different
from the intended action A. In model (1.5.3), with the relation R defined by means of
formula (1.5.2), the first tendency is dominant—the archer is always able to perform
the action A, but does not always hit the target. In other words, in model (1.5.3), the
action A is fully performable at u 0 , that is, irrespective of the result of shooting an
arrow, the agent indeed performs A.
System (1.5.3) is an exemplification of the first option in the probabilistic action
theory: the agent knows what action he is carrying out, but the results (outcomes) of
this particular action are stochastic and remain out of his control.
But the rules of the game recognize only those shots that hit the board. If the
relation R represents the obedience to the rules of the game, then R admits only one
transition—from u 0 to w1 , i.e.,

R := {(u 0 , w1 )}. (1.5.4)

According to Definition 1.4.1.(i) and the formulas (1.5.1) and (1.5.4), the shot from
the bow is performed if and only if the arrow actually hits the board. Therefore, shoot-
ing an arrow is a performable action but not totally performable in the elementary
action system
(W, R, {A}), (1.5.5)

with R defined as in (1.5.4).


While the definition of the relation R given in formula (1.5.2) and the definition
of performability based on it do not seem to cause interpretative difficulties, formula
(1.5.4) and the understanding of performability of A as hitting the target give rise
to questions and doubts. According to formulas (1.5.1) and (1.5.4), the action A is
performable in u 0 but it is not totally performable in this state. How is it, however,

guaranteed that the shot by the archer will hit the board? In other words, is the fact
of the performability of A vacuously satisfied, i.e., the state of things consisting in
hitting the board is conceivable but, in fact, the state is achieved after no actual shot?
Similar questions and doubts also arise in the case of such familiar actions as tossing
a coin or throwing a dice.
What does action performability really mean? What is the process that brings the
system to a state that is the effect of a given action? Two options arise here. One looks
for the possibility of control over the course of events brought about by the agent, no
matter who he is. Such a notion of performability is close, though not tantamount,
to the notion of controlling the system. The conception of performability, provided
by Definition 1.4.1 is devoid of probabilistic or stochastic connotations. Performing
an action thus resembles a chess player’s deliberate move across the chessboard—it
is a conscious choice of one of many possibilities of the direct continuation of the
game. If such possibilities do not exist, e.g., when the nearest squares where the
white knight could move are taken by other pieces, performing the action—white
chessman’s move—is impossible in this case. The eventuality of unintended and
accidental disturbances of the undertaken actions can be entirely neglected. Nature
is not involved in these actions so long as it does not exert any uncontrolled influence on the
course of planned actions.
The second option—let us call it probabilistic—assesses the performability of an
action in terms of the chances of bringing about the intended effect of the action in
the given state or situation. The agent has no possibility of having full control over
his actions; the performability of an action is the resultant of the agent’s abilities
and the influence of external forces on the course of events. It is a kind of game
between the agent and nature. One may say that the agent always shares, though
unintentionally, his power with nature. In fact, it is a kind of diarchy—the agent has
no absolute control over the situation. For example, when shooting an arrow, the
archer takes into account gravitational forces, air resistance, and other factors that
guide the arrow’s flight; however, the role of the archer after the moment the arrow
is shot is of no significance.
The probabilistic character of the action A is revealed in action system (1.5.5);
this situation is well expressed by the proverb: “Man proposes, God disposes.” The
performability of action A, i.e., hitting the board with an arrow, should be described
not only in terms of the one-zero relation R but in terms of probability that assesses the
chances of hitting the target in the state u 0 . The performability of the action A should
just be identified with this probability, i.e., a number from the closed interval [0, 1].
At the same time one should not speak of the performability or unperformability
of the action A in the state u 0 , but rather of the probability of performability (or
unperformability) of this action in this state.
Instead of model (1.5.5), let us consider the following model

(W, R, {B, C}), (1.5.6)

where B := {(u 0 , w0 )}, C := {(u 0 , w1 )} and R := {(u 0 , w0 ), (u 0 , w1 )}, i.e., R is
defined as in (1.5.2). The actions B and C are totally performable in u 0 in the sense

of Definition 1.4.1. B is the action of shooting an arrow and missing the target; C is
shooting an arrow and hitting the board. The problem is, however, that the archer is
unable to foresee which action is actually being performed at u 0 : is it B or C? The
archer committing himself to perform, for example, the action C may not be able
to do it; instead, a performance of the action B may take place. Model (1.5.6) then
illustrates the second of the mentioned tendencies, i.e., the tendency to perform, at a
given state, an action other than the intended one.
If the relation R is equal to {(u 0 , w1 )}, then, in model (1.5.6), the action B is
unperformable in the state u 0 ; this means that missing the target after each shot is
excluded. This may seem paradoxical as we are convinced that even an experienced
archer can miss the target, i.e., B is sometimes performable in this state. The solution
is simple—the relation R = {(u 0 , w1 )} does not mirror the physical aspects of
shooting an arrow but only defines a ‘legal’ qualification of the action B—the shot
is successful (i.e., it is qualified as a realizable performance of B) only if the arrow
hits the target. This condition is thus vacuously satisfied.
The division line between the probabilistic and the ‘non-probabilistic’ options in
the action theory is not sharp. Speaking of the probabilistic and non-probabilistic
concepts of action performability we mean extreme, ‘pure’ situations which—
in practice—are not fully realized. Natural phenomena such as gravitational or elec-
tromagnetic forces cannot be completely eliminated as factors affecting the perfor-
mance of a large number of actions. Nature then is omnipresent. The assumption that
the tool the agent uses (be it a bow or an engine) is completely subordinated to his
will is not strictly speaking even true. However, the assumption can be safely made
when the agent is experienced in the use of a tool that works perfectly.
The deterministic description of actions and their performability is an ideal one
and as such particularly desirable. It is visible in situations associated with a mass
production, for example, where the complete repeatability of the manufacture cycle
and the elimination of disturbances whatsoever are factors treated with the greatest
consideration by the producer. In these cases the relation between the states in which
the actions involved in certain production phases are performed and the results of
these actions is treated as a one-to-one map. Nevertheless, none of the action theories
should in advance exclude the concept of randomness from its domain. This would
lead to conceptual systems which would differ from the ones we have discussed. We
mean here probabilistic action systems.
From the mathematical point of view probabilistic action systems are built on
‘ordinary’ elementary action systems M = (W, R, A) by enriching them with fami-
lies of probabilities providing a quantitative measure of the performability of atomic
actions. It is assumed that the system M is nontrivial, which means that R is non-
empty, and, for each A ∈ A, the relation A ∩ R is nonempty as well. To facilitate
the discussion we assume that R is the union of the set of atomic actions A, i.e.,
R = ∪{A : A ∈ A}. (This assumption can be dropped but it simplifies the discussion.) It
follows that the system M is normal. We additionally assume that the set W of states
is countable.

To each atomic action A ∈ A and each state u ∈ W a measure puA with the
supporting set δ A (u) is assigned, i.e., puA is a function from the power set ℘ (W ) to
the closed interval [0, 1] which satisfies the following conditions:

(1.5.7) puA is σ-additive, i.e., puA (∪{X n : n ∈ ω}) = Σ{ puA (X n ) : n ∈ ω}, for every denumerable
family {X n : n ∈ ω} of disjoint subsets of W ;
(1.5.8) puA ({w}) ≠ 0 for all w ∈ δ A (u) and puA ({w}) = 0 whenever w ∉ δ A (u);
(1.5.9) If δ A (u) ≠ ∅, then puA (δ A (u)) ≤ 1.

If the set δ A (u) is empty, we assume that puA is identically equal to 0. It follows
from the above clauses that if δ A (u) ≠ ∅, then puA (W ) = puA (δ A (u)) and puA (W ) is a
positive number in the interval [0, 1]. Thus, for each u for which δ A (u) is nonempty,
the function puA is a nontrivial σ -measure in the sense of measure theory.
For X ⊆ W , the number puA (X ) is interpreted as the probability that performing
the action A in u will lead the system to a state belonging to X . The positive number
puA (W ) is therefore called the probability of the performability of the action A at u.
This number may be less than 1.
In a given state u various actions of A are available and each of them is usually
performed with different probabilities.
The above formulation of the probability of the performability of an atomic action
A results in distinguishing a family of measures { puA : u ∈ W }, defined on the σ -
algebra ℘ (W ) of subsets of W . It is difficult to detect any regularities between the
above measures.
Supplementing (1.5.7)–(1.5.9) with the following, rather technical condition:

(1.5.10) For every atomic action A the series Σ{ puA (W ) : u ∈ W } is convergent to a positive
number

enables, however, a neat formalization of our probabilistic action theory. Condition
(1.5.10) is trivially satisfied if the space W is finite. The sum Σ{ puA (W ) : u ∈ W }, which is
equal to Σ{ puA (δ A (u)) : u ∈ W }, does not bear probabilistic connotations—it is simply the
sum of all probabilities of performing A in particular states of W .
Conditions (1.5.7)–(1.5.10) are jointly equivalent to the notion of a bi-distribution.
The latter is understood to be any function m A (·|·) from ℘ (W )×℘ (W ) into a bounded
interval [0, a] of nonnegative real numbers satisfying the following conditions:

(1.5.7)∗ 0 ≤ m A ({u}|W ) ≤ 1 for all u ∈ W and the series Σ{m A ({u}|W ) : u ∈ W } is
convergent,
(1.5.8)∗ m A is σ-additive in both arguments, i.e., for any two families
{X n : n ∈ ω}, {Yn : n ∈ ω} of disjoint subsets of W , and for all X, Y ⊆ W ,

m A (∪{X n : n ∈ ω}|Y ) = Σ{m A (X n |Y ) : n ∈ ω}

and

m A (X |∪{Yn : n ∈ ω}) = Σ{m A (X |Yn ) : n ∈ ω};

(1.5.9)∗ for any u, w ∈ W ,

m A ({u}|{w}) ≠ 0 if w ∈ δ A (u)

and
m A ({u}|{w}) = 0 otherwise.

If m A ( · | · ) is a bi-distribution on ℘ (W ) × ℘ (W ), then the function

puA ( · ) := m A ({u}| ·),

for any fixed state u ∈ W , is a measure on ℘ (W ) satisfying conditions (1.5.7)–


(1.5.9) above. Moreover, the family { puA : u ∈ W } satisfies condition (1.5.10),
because Σ{ puA (W ) : u ∈ W } = Σ{m A ({u}|W ) : u ∈ W } and the last series is convergent by (1.5.7)∗ .
Conversely, if a family { puA : u ∈ W } of σ -measures on ℘ (W ) is singled out
so that (1.5.7)–(1.5.10) hold, a bi-distribution m A on ℘ (W ) × ℘ (W ) is defined by
putting:

m A (X |Y ) := Σ{ puA (Y ) : u ∈ X }. (1.5.11)

(If X is empty, it is assumed that m A (X |Y ) := 0.)


The easy proof that m A is well-defined and that it indeed satisfies conditions
(1.5.7)∗ –(1.5.9)∗ is omitted. (The positive-term series in the formula (1.5.11) is
convergent since it is majorized by the convergent series Σ{ puA (W ) : u ∈ X }. The
latter series is convergent by (1.5.10).) We thus see that the notion of a bi-distribution
is co-extensive with the notion of a family of measures { puA : u ∈ W } satisfying
postulates (1.5.7)–(1.5.10).
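For a finite set of states the passage from a family of measures to the bi-distribution of (1.5.11) is just a summation. The sketch below is an added illustration; the point masses in the dictionary p are invented numbers, not taken from the text.

p = {"u1": {"w1": 0.5, "w2": 0.25}, "u2": {"w1": 0.25}}   # p[u][w] is the point mass p_u^A({w})

def p_measure(u, Y):
    return sum(p.get(u, {}).get(w, 0.0) for w in Y)

def m(X, Y):
    # the bi-distribution of (1.5.11): m^A(X|Y) is the sum over u in X of p_u^A(Y)
    return sum(p_measure(u, Y) for u in X)

W = {"u1", "u2", "w1", "w2"}
print(m({"u1"}, W))              # 0.75 = p_{u1}^A(W), the probability of the performability of A at u1
print(m({"u1", "u2"}, {"w1"}))   # 0.75 = 0.5 + 0.25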
Every bi-distribution m( · | · ) on ℘ (W ) × ℘ (W ) with W countable is uniquely
determined by its values on the pairs {u} × {w}, or, which amounts to the same—on
the ordered pairs (u, w), where u and w range over W .
If m A ( · | · ) is a bi-distribution and u ∈ W , Y ⊆ W , then the number m A ({u}|Y )
has the following interpretation—it is the probability that performing the action A
will move the system from u to an Y -state. m A ({u}|Y ) should not be confused with a
conditional probability. It seems that there does not exist a strict relationship between
these two notions.
The notion of a bi-distribution enables us to generalize the above theory to the
case of uncountable sets of states W . It requires, however, prior modification of the
condition (1.5.7)∗ and also the dropping of the assumption that the function m A is
defined on the entire product ℘ (W ) × ℘ (W ). It should be assumed that the domain

of the function m A is the Cartesian product F0 × F0 , where F0 is a suitably selected
σ-field of subsets of W . We shall not, however, discuss this issue here.
Let F denote the σ -field of subsets of W × W generated by the sets of the form
X × Y , where X ⊆ W , Y ⊆ W . (W is assumed to be a countable set of states.) It
follows from a well-known result of measure theory (see Halmos 1950) that if m 1
and m 2 are two (σ -additive) finite measures on ℘ (W ), then there exists a unique
finite measure m defined on the σ -field F such that m(X × Y ) = m 1 (X ) · m 2 (Y ),
for all X, Y ⊆ W .
If m( · | · ) is a bi-distribution on ℘ (W ) × ℘ (W ), then the maps μ1 ( · ) and μ2 ( · )
defined by the formulas

μ1 ( · ) := m( · |W ) and μ2 ( · ) := m(W | · ),

are finite measures on the σ -field ℘ (W ). According to the above result, there exists
a σ -additive measure μ on the σ -field F such that

μ(X × Y ) = μ1 (X ) · μ2 (Y ), (1.5.12)

for all X, Y ⊆ W .
If the bi-distribution m( · | · ) satisfies the equation

m(X |Y ) = m(X |W ) · m(W |Y ), (1.5.13)

for all X, Y ⊆ W , then trivially (1.5.12) and (1.5.13) imply that

μ(X × Y ) = m(X |Y ), (1.5.14)

for all X, Y ⊆ W . In other words, if m( · | · ) satisfies (1.5.13), then it uniquely
extends to an ‘ordinary’ measure on the σ -field F. This observation would reduce
the theory of bi-distributions to classical measure theory.
It is an open question if, given a bi-distribution m there always exists a measure μ
on F so that (1.5.14) holds, for all X, Y ⊆ W . We also do not know whether (1.5.13)
is well substantiated. In case W is countable, (1.5.13) is equivalent to another identity.
Writing m(u|w) instead of m({u}|{w}), it is not difficult to check that (1.5.13) holds
if and only if

m(u|w) = Σ{m(u|v ′ ) · m(v|w) : v, v ′ ∈ W },

for all u, w ∈ W .
We thus arrive at the following definition:
Definition 1.5.1 Let M = (W, R, A) be a nontrivial action system in which R is
the union of the actions of A. A probabilistic elementary action system over M is
a pair

(W, {m A ( · | · ) : A ∈ A}) (1.5.15)

where {m A ( · | · ) : A ∈ A} is a family of bi-distributions on ℘ (W ) × ℘ (W ) assigned


to the atomic actions of A. 

Note In the above definition the “action” component A is in fact redundant. The
symbol “A” plays merely the role of an index set. This is due to the fact that, given
the family {m A ( · | · ) : A ∈ A}, one can reconstruct each action A of A by means
of the formula: u A w if and only if m A (u|w) > 0—see (1.5.9)∗ . (We write here
m A (u|w) instead of m A ({u}|{w}).) Consequently, u R w if and only if m A (u|w) > 0
for some A ∈ A.

The following definition is formulated entirely in probabilistic terms (it is formu-
lated in terms of probabilistic transitions puA , cf. conditions (1.5.7)–(1.5.9)) and it
abstracts from the notion of an atomic action viewed as a binary relation on a set of
states.

Definition 1.5.2 A countable probabilistic atomic action system is a triple

M pr = (W, A, { puA : u ∈ W, A ∈ A}),

where W is a countable set (the set of states of the system), A is a set (the set of
atomic actions of the system), and for any u ∈ W , A ∈ A, puA is a finite σ-measure
on the power set ℘ (W ) such that puA (W ) ≤ 1. 

The situation where puA is the zero function for some u ∈ W , A ∈ A is allowed.
It may also happen that puA is a normalized measure, i.e., puA (W ) = 1. For any
X ⊆ W , the number puA (X ) is the probability of the performability of the action A
in the state u resulting in an X -state. puA (W ) is interpreted as the probability of the
performability of A in u.
If one additionally assumes that for any A ∈ A, the series Σ{ puA (W ) : u ∈ W } is
convergent to a positive number, then the above definition can be also expressed in terms
of bi-distributions m A ( · | · ).
Once a probabilistic system M pr is defined, the crucial problem probabilistic
action theory faces is that of assigning probabilities to sequences A0 , A1 , . . . , An of
atomic actions of the system, and more generally to compound actions, relative to a
given state u. (Compound actions are investigated in Sect. 1.7.) 
As mentioned earlier, the bi-distribution m A ( · | · ) need not be a normalized mea-
sure, i.e., the number m A (u|W ) which measures the probability of the performability
of the action A in the system at u need not be equal to 1. This is caused by the fact
that the agent, in a given state u, usually has a few stochastic actions to perform at
choice, say A1 , . . . , An . The actions are not under his full control. Assuming that
with probability 1 the agent always performs one of the actions A1 , . . . , An in the
state u, it is not possible to guarantee that the agent will perform with certainty in
u the intended and initiated action, say A1 . (The archer with probability 1 hits or
misses the target in u 0 ; at the moment of shooting it is not, however, possible to

decide which of the possible two actions is actually performed.) We assume that the
bi-distribution m A ( · | · ), assigned to the action A, reflects the above phenomenon.
The number m A (u|Y ) expresses the probability that performing A in u leads the
system to a Y -state. The probability takes into account two facts: first, that in u the
agent of A may actually perform, irrespective of his will and his declared intentions,
an action other than A, and, second, the fact that though the system is in u and the
action A is actually being performed, the system will not always be transferred to
a Y -state. The latter fact is also independent of his will. (We have to assume here
that Y is a proper subset of W .) The bi-distribution m A ( · | · ) is then the measure
of the resultant of these two factors. In the formalism accepted here the above two
factors are not separated, i.e., we do not single out two distinct probabilistic distrib-
utions which separately measure the probability of the performability of A and the
probability that performing A leads from u to Y -states (provided that A is actually
performed).
The following simple illustration sheds some light on that issue. An agent has an
unloaded dice at his disposal. Moreover, there are two urns containing balls. Urn I
contains five balls (3 white and 2 red), and urn II also contains five balls (1 white
and 4 red). If the outcome of throwing the dice is 1, 2, 3 or 4, the agent selects a ball
from Urn I; if the outcome is 5 or 6, the agent picks out a ball from Urn II. The above
situation can be expressed in terms of elementary action systems. The space of states
W contains four elements. u 0 is the initial state, before the dice is tossed. u 1 and u 2
are two possible states just after rolling the dice; u 1 represents the state in which a
1, 2, 3 or 4 is rolled, and u 2 marks the state showing 5 or 6. In turn, w and r mark
two possible states after picking out a ball from one of the urns, that is, picking out
a white ball or the red ball, respectively.
There are two actions being performed; A—tossing a dice, and B—selecting a ball
from one of the two urns. A contains two pairs, A = {(u 0 , u 1 ), (u 0 , u 2 )} and B =
{(u 1 , w), (u 2 , w), (u 1 , r ), (u 2 , r )}. Intuitively, A is performable with probability 1
at u 0 because tossing the dice is performable and no other option is available at
u 0 . Analogously, B is performable with probability 1 at each of the states u 1 , u 2 ,
because drawing a ball from whichever urn is performable and no other possibility
occurs. The direct transition relation merely ensures that the actions A and B are to
be successively performed in the right order. Accordingly, R consists of pairs

(u 0 , u 1 ), (u 0 , u 2 ), (u 1 , w), (u 2 , w), (u 1 , r ), (u 2 , r ).

The functioning of this normal action system (W, R, {A, B}) is represented by
the following picture.
The system satisfies the condition that R = A ∪ B. Moreover, the actions A
and B are disjoint. The probability 4/6 is assigned to the transition (u 0 , u 1 ) and the
weight of the transition (u 0 , u 2 ) is 2/6. In other words, the probability m A (u 0 |u 1 )
that performing the action A in u 0 will lead the system to the state u 1 is 4/6 and the
probability m A (u 0 |u 2 ) that performing the action A in u 0 will lead the system to the
state u 2 is 2/6. (We write here m A (u|w) instead of m A ({u}|{w}).)

One assigns the probabilities 3/5, 1/5, 2/5, and 4/5 to the transitions (u 1 , w),
(u 2 , w), (u 1 , r ), and (u 2 , r ), respectively. This means that the probability m B (u 1 |w)
that performing the action B in u 1 will lead the system to the state w is 3/5, the
probability m B (u 2 |w) that performing the action B in u 2 will lead the system to the
state w is 1/5, and so on.

[Fig. 1.7: the transition graph of the system; from the start state u 0 the action A leads to u 1 or u 2 , and from these the action B leads to w or r .]
As the above action system should meet conditions (1.5.7)∗ , (1.5.8)∗ , and (1.5.9)∗ ,
we assume that m A (u|w) = 0 whenever w ∉ δ A (u), and m B (u|w) = 0 whenever
w ∉ δ B (u), for any states u, w.
The probability of performability of the action A in u 0 equals 1. This is due to the
fact that the sum of the probabilities assigned to the transitions (u 0 , u 1 ) and (u 0 , u 2 )
gives 1, m A (u 0 |u 1 ) + m A (u 0 |u 2 ) = 4/6 + 2/6 = 1. Similarly, the probability of
performability of B in each of the states u 1 and u 2 equals 1. Indeed, m B (u 1 |w) +
m B (u 1 |r ) = 3/5 + 2/5 = 1 and m B (u 2 |w) + m B (u 2 |r ) = 1/5 + 4/5 = 1. We
may therefore say that the agent has the power to perform A in u 0 (but the results of
performing A are random and out of his/her control, i.e., he/she does not know at u 0
the urn that will be drawn). Likewise, the agent is able with certainty to perform B in
each of the states u 1 and u 2 . (But again the possible outcomes of B are random in the
sense that the agent does not know whether the ball he/she will draw is red or white.
But the color of the drawn balls is irrelevant from the perspective of performability
of B!) The above system satisfies the condition that for any state u, m A (u|W ) equals
0 or 1 and similarly, m B (u|W ) equals 0 or 1.
Despite the probabilistic wording, the above action system (W, R, {A, B}) has
the property that the probability of the performability of A in u 0 is 1 and the per-
formability of B in either state u 1 , u 2 is also 1.
In a more accurate model, instead of one action B there are two actions Br and Bw
of picking out a red ball and a white ball, respectively. Thus Br = {(u 1 , r ), (u 2 , r )}
and Bw = {(u 1 , w), (u 2 , w)}. In the resulting normal model (W, R, {A, Br , Bw }) the
probabilities assigned to performability of Br in the states u 1 and u 2 are equal to 2/5
and 4/5, respectively, and the probabilities assigned to performability of Bw in the
states u 1 and u 2 are equal to 3/5 and 1/5, respectively. Consequently, the agent is no
longer able to perform with certainty the action Br in each of the states u 1 and u 2 (i.e.,
in each state he/she is unable to foresee the color of the ball that will be drawn). A
similar remark applies to the action Bw . In this model the actions Br , Bw are genuinely

probabilistic. We may also split the action A into two probabilistic actions: AI of
rolling a 1, 2, 3 or 4 and the action AII of rolling a 5 or 6. Thus AI = {(u 0 , u 1 )} and
AII = {(u 0 , u 2 )}.
As mentioned earlier, from the perspective of probabilistic action theory the cru-
cial problem concerns the assignment of probabilities to compound actions; in par-
ticular to strings of atomic actions once all probabilities pertinent to atomic actions
are known. Let
(W, R, {AI , AII , Br , Bw })

be the system defined as above. What is the probability of the performability of the
string of actions AI , Br in u 0 ? (The string AI , Br is identified with the compound
action {AI Br } containing only one word of length 2—see Sect. 1.7.) There is only
one string of performances of {AI Br }, viz. u 0 AI u 1 Br r . We may treat the actions AI
and Br as stochastically independent, which means that the probability of performing
the string AI Br in u 0 is equal to

m AI (u 0 |u 1 ) · m Br (u 1 |r ) = 4/6 · 2/5 = 8/30.

Following this pattern, we determine the probabilities of the performability of the
other strings AII Br , AI Bw , and AII Bw in u 0 :

m AII (u 0 |u 2 ) · m Br (u 2 |r ) = 2/6 · 4/5 = 8/30,
m AI (u 0 |u 1 ) · m Bw (u 1 |w) = 4/6 · 3/5 = 12/30,
m AII (u 0 |u 2 ) · m Bw (u 2 |w) = 2/6 · 1/5 = 2/30.

They all add up to 1.
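The four products above are what one obtains mechanically by multiplying the bi-distribution entries along every admissible string of performances, treating the consecutive actions as stochastically independent. A sketch (added illustration; the table m and the helper string_probability are ours):

from fractions import Fraction as F

m = {  # the nonzero bi-distribution entries m^X(u|w) of the four atomic actions
    "AI":  {("u0", "u1"): F(4, 6)},
    "AII": {("u0", "u2"): F(2, 6)},
    "Br":  {("u1", "r"): F(2, 5), ("u2", "r"): F(4, 5)},
    "Bw":  {("u1", "w"): F(3, 5), ("u2", "w"): F(1, 5)},
}

def string_probability(labels, start):
    # probability of performing the string of atomic actions `labels` from `start`
    total = F(0)
    def walk(i, state, acc):
        nonlocal total
        if i == len(labels):
            total += acc
            return
        for (u, w), prob in m[labels[i]].items():
            if u == state:
                walk(i + 1, w, acc * prob)
    walk(0, start, F(1))
    return total

for s in (["AI", "Br"], ["AII", "Br"], ["AI", "Bw"], ["AII", "Bw"]):
    print(s, string_probability(s, "u0"))
# prints 4/15, 4/15, 2/5 and 1/15, i.e. 8/30, 8/30, 12/30 and 2/30; the four values add up to 1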


One may of course define more complicated models built on directed graphs with
urns being edges of a digraph. The urns may contain balls of different sorts.
The above simple examples provide mere samples of the problems probabilistic
action theory poses. They are not tackled here from the mathematical perspective.
The above definition of a probabilistic system does not fully resolve the issue of
the relationship holding between the bi-distributions m A and m B for different actions
A, B ∈ A. In some cases they may be treated as stochastically independent actions.
Another option that comes to light is to assume that for all u, w ∈ W ,

m A (u|w) = m B (u|w) whenever w ∈ δ A (u) ∩ δ B (u). (1.5.16)

It follows from (1.5.16) that for all u ∈ W and Y ⊆ W , m A (u|Y ) ≤ m B (u|Y ) when-
ever the action A is a subset of B. In particular, m A (u|W ) ≤ m B (u|W ) whenever
A ⊆ B. This agrees with the intuition that the larger an atomic action, the higher the
probability of its performability.
Let (W, R) be a countable discrete system. Yet another probabilistic approach to
action assigns probabilities to direct transitions between states of (W, R). Accord-
ingly, to each ordered pair (u, w) ∈ W × W a number p R (u, w) ∈ [0, 1] is assigned

so that the following conditions are met:

p R (u, w) ≠ 0 if (u, w) ∈ R (1.5.17)
p R (u, w) = 0 otherwise,

and, for every u ∈ W such that δ R (u) ≠ ∅,

Σ{ p R (u, w) : w ∈ δ R (u)} = 1. (1.5.18)

The number p R (u, w) is interpreted as the probability of the direct transition of


the discrete system (W, R) from the state u to the state w.
Condition (1.5.17) says that the probability of the direct transition from u to w
equals 0 if (u, w) is not in R. In the remaining cases the probability is nonzero and
the sum of such probabilities gives 1.
We recall that states u of W such that δ R (u) = ∅ are called dead (or terminal)
states; the remaining states are called nonterminal. The direct transition from a
terminal state to any other state is excluded. If u is terminal, then p R (u, w) = 0, for
all states w. Condition (1.5.18) says that the sum of probabilities of direct transitions
from any nonterminal state is 1. In particular, if u is a reflexive state, i.e., δ R (u) = {u},
then p R (u, u) = 1, which means that the system stays in u with probability 1 (and
with zero probability of leaving it).
In turn, to each atomic action A ∈ A on (W, R) and to each pair (u, w) ∈ W × W
a number c A (u, w) ∈ [0, 1] is assigned so that c A (u, w) ≠ 0 if and only if w ∈ δ A (u).
The number c A (u, w) · p R (u, w) is interpreted as the probability that performing
action A in the state u will move the system to the state w.
If Y ⊆ W , then the number

q A (u, Y ) := Σ{c A (u, w) · p R (u, w) : w ∈ Y }, (1.5.19)

which is assumed to exist, is the probability that performing the action A in the
state u will lead the system to a state which belongs to Y . The atomic action A is
thus treated here as a perturbation of the discrete system (W, R). The coefficients
c A (u, w) in the formula (1.5.19) provide the numerical measure of this perturbation
in any state u. The number

p A (u) := q A (u, δ A (u)) = q A (u, W )

is the probability of the performability of the action A in the state u.


An interesting class of probabilistic systems is formed by normal systems satisfy-
ing the condition that for every state u the family {δ A (u) : A ∈ A} forms a partition
of δ R (u) and Σ{ p A (u) : A ∈ A} = 1. (Some sets in the family {δ A (u) : A ∈ A} may be
empty.) Thus the sum of all probabilities of the performability of the actions of A in
each nonterminal state u is 1. The above urn example satisfies this condition.
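To make this bookkeeping concrete, the following small Python sketch (purely illustrative; the particular states, probabilities and coefficients are invented for the example and are not part of the formal development) computes the numbers q A (u, Y ) of (1.5.19) and the performability probability p A (u) for a three-state discrete system perturbed by one atomic action A.

# Transition probabilities p_R of a small discrete system and the perturbation
# coefficients c_A of an atomic action A; all values are illustrative assumptions.
p_R = {("u0", "u1"): 0.7, ("u0", "u2"): 0.3, ("u1", "u1"): 1.0}
c_A = {("u0", "u1"): 1.0}      # A can only accomplish the direct transition u0 -> u1
W = {"u0", "u1", "u2"}

def q(c, u, Y):
    # q_A(u, Y): probability that performing A in u leads to a state in Y, cf. (1.5.19)
    return sum(c.get((u, w), 0.0) * p_R.get((u, w), 0.0) for w in Y)

def p_perform(c, u):
    # p_A(u): probability of the performability of A in u, i.e. q_A(u, W)
    return q(c, u, W)

print(q(c_A, "u0", {"u1"}))    # 0.7
print(p_perform(c_A, "u0"))    # 0.7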

It may be argued that randomness, inherent in many action systems, results from
the negligence of factors that collectively form the situational envelope in which
actions are immersed. To better understand the above issue, consider the well-known
example of tossing a coin. (The issues concerning the role the notion of a situation
plays in the theory will be elucidated in Chap. 2; here we will restrict ourselves to
introductory remarks.)
The coin is fixed. We distinguish two actions A and B; both actions are treated as
atomic:

A: tossing the coin onto the table


B: picking the coin up from the table

To facilitate the discussion we assume (for simplicity) that the space of states W
consists of three elements, W := {u 0 , T ails, H eads}. u 0 is the state in which the
coin is tossed onto the table. The state u 0 is well-defined—the coin is tossed from a
fixed point over the table in a definite direction with a constant force and at the same
angle. (One may think in terms of a sort of cannon attached to the table shooting
the coin with a constant force. The assumption that u 0 is well-defined is, of course,
an idealization—there will be small fluctuations in tossing which will influence the
score. This fact is crucial for the ‘stochasticity’ of the system.) The state T ails is
entirely determined by the position of the coin on the table, with the tails on the top.
Similarly, the state H eads is defined. The action A consists of two pairs:

A := {(u 0 , H eads), (u 0 , T ails)}

while
B := {(H eads, u 0 ), (T ails, u 0 )}.

The relation R is the join of the two actions, R := A ∪ B; thus the actions A and B
are totally performable in the system M := (W, R, {A, B}). Undoubtedly, the action
B is fully controlled by the agent—the person tossing the coin. The performability
of the action B does not depend on ‘hidden’ factors guiding the motion of the coin.
(Distinguishing the action B in the model may seem irrelevant and a bit too purist.
It is being done to clarify the reasoning and keep it in the framework of elementary
action systems.)
The actions A and B can be performed many times. To represent formally this
fact we introduce a new category of entities—the category of possible situations. A
possible situation is any pair
s = (w, n), (1.5.20)

where w ∈ W and n is a natural number, n ≥ 0. The pair (1.5.20) is interpreted


as follows: “After tossing the coin n times, the state of the system M is equal to
w.” The number n is called the label of the situation s, while w is called the state
corresponding to the situation s. Both actions A and B change not only the state of

the system but the situation. To express the latter fact we shall consider the following
triples:

(n, A, n + 1) (1.5.21)
(n, B, n), (1.5.22)

where n ∈ ω. The triples (1.5.21) and (1.5.22) are called the labeled actions A and
B, respectively. The triple (1.5.21) is read: ‘The action A performed in situation s
with the label n moves the system so that the situation just after performing A has
the label n + 1.’ To put it briefly, one can say that if the coin has been tossed n times,
then the following toss increases the number of tosses to n + 1. The labeled action
(1.5.22) does not change the label (i.e., the number of times the coin has been tossed
is not increased), but it changes the state of the system. More precisely, the labeled
action (1.5.21) transforms a situation s1 into a situation s2 if and only if s1 = (u, n), s2 = (w, n + 1) and u A, R w (i.e., u = u 0 and w ∈ {T ails, H eads}). Analogously,
the labeled action (1.5.22) transforms a situation s1 into s2 if and only if s1 = (u, n),
s2 = (w, n) and u B, R w (i.e., u ∈ {T ails, H eads} and w = u 0 ).
We write s1 T r s2 to denote that one of the actions (1.5.21) or (1.5.22) transforms
the situation s1 into s2 . (u 0 , 0) is called the initial situation. A finite run of situations
is any finite sequence
(s0 , s1 , . . . , sn ) (1.5.23)

of possible situations such that s0 is the initial situation and si T r si+1 for all i,
0 ≤ i ≤ n − 1. (Thus, if i is even, then si is transformed into si+1 by means of the
action (i, A, i + 1), i.e., by tossing the coin. If i is odd, then si is transformed into
si+1 by means of (1.5.22), i.e., by picking up the coin from the table.) The number
n is called the length of the run (1.5.23).
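The runs of situations just described are easily simulated; the sketch below (illustrative Python, with the fairness of the coin an added assumption) produces a finite run (1.5.23) by alternating the labeled actions (1.5.21) and (1.5.22).

import random

def run_of_situations(tosses):
    # Generate the run (s0, ..., sn): a situation is a pair (state, label).
    run = [("u0", 0)]                                  # the initial situation
    for _ in range(tosses):
        _, n = run[-1]
        outcome = random.choice(["Heads", "Tails"])    # labeled action (n, A, n+1): tossing
        run.append((outcome, n + 1))
        run.append(("u0", n + 1))                      # labeled action (n+1, B, n+1): picking up
    return run

print(run_of_situations(3))
# e.g. [('u0', 0), ('Heads', 1), ('u0', 1), ('Tails', 2), ('u0', 2), ('Heads', 3), ('u0', 3)]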
There are no limitations on the length of runs of situations (1.5.23). But, of course,
the agent of the above actions can stop the run (1.5.23) in any situation sn with odd
n. In other words, in each of the states T ails, H eads, the agent may switch to
another action system, with compound actions built in as, say, ‘having lunch’ or
‘going home’. In other words, he/she is able to perform an additional action, viz.,
Abort, in some situations.
Can the notion of a finite run of situations serve to define the probabilities
puA (T ails) and puA (H eads) for u := u 0 , i.e., the probabilities that performing
the action A in the state u 0 will lead the system to the state T ails (H eads, respec-
tively)? An answer to this question would require adopting the additional assumption
that all the runs (1.5.23) of length n are ‘equally probable’ provided that n is suffi-
ciently large. Any attempts to explicate this assumption would result in circulus in
definiendo.
The action A—tossing the coin—is almost tautologically performable in the state
u 0 . One may claim that the above notion of a possible situation (in a much abridged
form) and of a finite run enable us to better understand what the ‘stochasticity’ of the
action A means. From the purely intuitive point of view the agent of a ‘stochastic’

action has no influence on the run of situations, i.e., for no run (s0 , . . . , sn ) does the agent have the power and means of choosing a situation sn+1 that follows the actual situation
sn . But even in this case any attempt to give the above intuition a clear mathematical
shape may lead us astray.
A digression. Ringing round from your mobile phone is another simple example
of a situational action system with uncertain scores of actions. Suppose you want
to call your friend. You know their number. You and your phone together form an
action system. Let u be the initial state in which your phone is not activated. Let A be
the action of dialing the number of your friend. Two more states are distinguished:
w1 —after sending the signal you get an answer (your friend picks up the phone),
w2 —after sending the signal you get no answer (the friend does not pick up the
phone). w2 takes place when your friend is far from their phone or the phone is
deactivated or your friend does not want to talk to you. We assume that your friend’s
telephone works. The action A is totally performable in u (your phone is in good
shape), but the result of performing A is uncertain. Only after a couple of seconds or more do you find out what is going on—whether you are in the state w1 or in w2 .
(Usually a longer period elapses before finding that you are in the state w2 rather
than in w1 .) In the state w1 the verbal (and compound) action of talking with your
friend is undertaken, possibly followed by other actions. But in w2 the action of
call cancellation is performed, which results in the state u. After some time you
may retry calling your friend, iterating the action A (maybe repeatedly and maybe
successfully). The adequate description of this action system requires introducing
the time parameter as a situational component.
The conclusion we want to draw is the following: the formalism of the action the-
ory adopted in this book, even supplemented with the notion of a situational envelope
of an action system, does not allow a clear-cut distinction between probabilistic and
non-probabilistic systems. Introducing probabilities of transitions between states to
the formalism in the way presented in this chapter does not change the matter—it
is not self-evident how to attach, in definite cases, probabilities puA (Y ) to particular
actions A and pairs (u, Y ).
There is no doubt that speaking of the probability puA (Y ) needs satisfying the
postulate of the repeatability of the action A. It means that the situational envelope
of the action system has to guarantee, for every pair (u, w) such that u A, R w holds,
the possibility of the reverse transition of the system—from the state w to the state
u. This latter transition should be accomplished by means of a non-stochastic action
(like the action B in the example of tossing a coin). The opposition ‘stochastic—
non-stochastic’ is understood here intuitively, otherwise we shall end up in a vicious
circle.
We believe that for practical reasons, for any action A and any state u, the probabil-
ity puA (·) has to satisfy a condition which is difficult to formulate precisely and which
is associated with the following intuition that concerns the number puA (Y ). To put it
simply, let us assume that the action A is performable in the state u with probability
1, i.e., the agent of A declaring and being about to perform the action A in a state u is
always able to do it. Tossing a coin, shooting an arrow from a bow, casting a dice, etc.,
are examples of such actions. Suppose a long, ‘random’ series of n repetitions of A

in the state u has been performed (the series can simply be identified with a suitable
run of situations defined as in the case of tossing the coin). If, as a result, performing
the action A has led the system m times to a Y -state and n − m times to a state lying
outside Y , we can expect that the fraction m/n will be close to puA (Y ) as n grows large.
In other words, with an unlimited number of repetitions of the action A in the state
u, the frequency of transitions of the system from the state u to a Y -state should ap-
proach the probability puA (Y ) that performing the action A in u will move the system
to a state belonging to Y . ( puA (Y ) is viewed here as in the first formal description of
probability, i.e., the function puA (·) satisfies the formulas (1.5.7)–(1.5.10).) Attempts
at explicating the above intuition encounter, however, the well-known logical difficul-
ties concerning randomness. Axiomatic probability theory avoids random sequences,
although various approaches to defining random sequences have been proposed (the
frequency/measure approach by von Mises–Church, Kolmogorov complexity or the
predictability approach). Probabilistic automata, invented by Rabin, employ coin
tosses to decide which state transition to take. For example, the command ‘Per-
form at random one of the actions A or B’, known from dynamic logic, can be
reconstructed within the above framework as performing the compound operation
consisting of two sequences of actions T, A and T, B where T is the action of flip-
ping a coin. If after tossing, the state H eads is obtained, the action A is performed;
otherwise, with T ails as a result, the action B is performed. (To simplify matters, we
assume that A and B are atomic actions.) Formally, the construction of the relevant
action system requires enriching the initial system M = (W, R, A) with three new
states u 0 , H eads, T ails, and then extending the relation R of M to a relation R ′ on
W ′ := W ∪ {u 0 , H eads, T ails}, where R ′ := R ∪ {(u 0 , H eads), (u 0 , T ails)} ∪
{(H eads, w) : w ∈ Dom(A)} ∪ {(T ails, w) : w ∈ Dom(B)}. As before T is
identified with the set {(u 0 , H eads), (u 0 , T ails)}. T is thus totally performable in
the extended system but the result of its performing is uncertain. (Strictly speaking,
one should add two more atomic actions: C := {(H eads, w) : w ∈ Dom(A)} and
D := {(T ails, w) : w ∈ Dom(B)}. If H eads is tossed, the agent initiates the action A in the state H eads by selecting a state w ∈ Dom(A) (but not at random,
because A is not stochastic). Similarly, if T ails is tossed, the agent initiates the
action B in the state T ails by selecting a state w ∈ Dom(B).)

1.6 Languages, Automata, and Elementary Action Systems

A ‘symbol’ is treated here as an abstract entity which is not defined formally. This
notion has a similar status as ‘point’ or ‘line’ in geometry. Letters and digits are
examples of frequently used symbols.
A string (or word) is a finite sequence (with repetitions) of juxtaposed symbols.
Words are written without parentheses and commas between particular symbols, e.g.,
abca instead of (a, b, c, a). The length of a string x, denoted by |x|, is the number
of all occurrences of symbols in the word. For example, aab has length 3 and abcb

has length 4. The empty word, denoted by ε, is the string consisting of zero symbols.
Its length |ε| is equal to 0.
An alphabet is a nonempty finite set Σ of symbols. Σ ∗ denotes the set of all
words (with the empty word included) over the alphabet Σ. Thus, a word x is in Σ ∗
if and only if all the symbols occurring in x belong to Σ. Σ + denotes the set of all
nonempty words over Σ; thus, Σ + = Σ ∗ \ {ε}.
Let x = a1 a2 . . . am and y = b1 b2 . . . bn be two words over Σ. The composition
(or concatenation) of the words x and y is the word, denoted by x y, which is obtained
by writing, first, the successive symbols of the word x, and then the successive
symbols of the word y:
x y := a1 . . . am b1 . . . bn .

For example, if x := abca and y := acb then x y = abcaacb.


The composition of any word with the empty word always yields the same word,
i.e., for any word x, xε = εx = x.
(The set Σ ∗ equipped with the binary operation of concatenation and the empty
word as an algebraic constant has thus the structure of a semigroup; ε is the unit of the
semigroup. In fact Σ ∗ is a free semigroup; the symbols of Σ are its free generators.)
For any word x the nth power of x for n = 0, 1, 2, . . . is defined inductively as
follows:

x 0 := ε
x n+1 := x x n for n = 0, 1, 2, . . .

Thus, x n = x . . . x with x occurring n times.


Induction on the length of a word enables us to define the one-argument operation
of the mirror reflection (or the inverse) of a word:

ε−1 = ε
(ax)−1 := x −1 a for a ∈ Σ and x ∈ Σ ∗ .

It follows from the definition that if x = a1 . . . am , then x −1 = am . . . a1 .


Let x, y ∈ Σ ∗ . We say that x is a subword of the word y, symbolically: x  y,
if there exist words z, w ∈ Σ ∗ such that y = zxw. If zw ≠ ε, then x is a proper
subword of y; if z = ε, then x is an initial subword of y; and if w = ε, then we say
that x is a terminal subword of y.
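These string operations have an immediate operational reading; the following Python sketch (illustrative only, with ordinary character strings standing for words over Σ) implements the nth power, the mirror reflection and the subword test.

def power(x, n):
    # The nth power x^n of a word x; x^0 is the empty word.
    return x * n

def mirror(x):
    # The mirror reflection (inverse) x^{-1} of a word x.
    return x[::-1]

def is_subword(x, y):
    # x is a subword of y iff y = zxw for some (possibly empty) words z, w.
    return x in y

print(power("ab", 3))             # 'ababab'
print(mirror("abca"))             # 'acba'
print(is_subword("bc", "abca"))   # True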
A formal language L (over an alphabet Σ) is a set of strings of symbols from the
alphabet.
Since languages are sets, the well-known set-theoretic operations such as union,
intersection, difference, etc. can be applied to them. But also some special operations,
derived from the operations on words, are performable on languages. For example,
the composition of two languages L 1 and L 2 is defined to be the language

L 1 L 2 := {x y : x ∈ L 1 and y ∈ L 2 }.

The power of a language L is defined inductively, in a similar manner as for words:

L 0 := {ε}
L n+1 := L L n for n = 0, 1, 2, . . .

The closure of a language L, denoted by L ∗ , is defined as the union of the consecutive


powers of L:

L ∗ := ⋃{L n : n ∈ N}.

The positive closure of L, denoted by L + , is the union of all positive powers of L:



L + := ⋃{L n : n = 1, 2, . . .}.

The only difference between L ∗ and L + consists in that L ∗ always contains an empty
word while L + contains ε if and only if L does.
The notations L ∗ and L + agree with the definitions of the sets Σ ∗ and Σ +
introduced earlier as the totalities of all strings (of all nonempty strings, respectively)
over Σ. As a matter of fact, the set of all words over Σ is equal to the one-element
set {ε} plus the set of all words of length 1 (i.e., the set Σ) plus the set of all words
of length 2 (the set Σ 2 ), etc.
The inverse of a language is the language

L −1 := {x −1 : x ∈ L}.
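For finite languages the operations just listed can be computed directly; the sketch below (illustrative Python; the closure is truncated at a finite power, since L ∗ itself is infinite whenever L contains a nonempty word) mirrors the definitions of composition, closure and inverse.

def compose(L1, L2):
    # The composition L1 L2 of two languages.
    return {x + y for x in L1 for y in L2}

def closure_up_to(L, k):
    # The union L^0 ∪ L^1 ∪ ... ∪ L^k, a finite approximation of the closure L*.
    result, power = {""}, {""}            # L^0 = {ε}
    for _ in range(k):
        power = compose(power, L)         # next power of L
        result |= power
    return result

def inverse(L):
    # The inverse L^{-1} of a language.
    return {x[::-1] for x in L}

L = {"a", "bc"}
print(compose(L, L))          # {'aa', 'abc', 'bca', 'bcbc'}
print(closure_up_to(L, 2))    # {'', 'a', 'bc', 'aa', 'abc', 'bca', 'bcbc'}
print(inverse(L))             # {'a', 'cb'}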

Regular languages are the best-examined class of formal languages. The regularity
of this class of languages manifests itself in the fact that the class is recursively defined
by means of simple set-theoretic and algebraic conditions: the family REG(Σ) of
regular languages over Σ is defined as the least class R of Σ-languages which
satisfies the following properties:
(i) ∅, the empty language, belongs to R;
(ii) {ε}, the language consisting of the empty word, belongs to R;
(iii) for every letter a ∈ Σ, {a} belongs to R;
(iv) if L 1 ∈ R and L 2 ∈ R then L 1 ∪ L 2 and L 1 L 2 belong to R;
(v) if L ∈ R then L ∗ ∈ R.
It follows from the above definition that every finite set L ⊂ Σ ∗ is a regular
language.
REG+ (Σ) denotes the class of all regular languages over Σ which do not contain
the empty word ε. Thus, REG+ (Σ) := {L ∈ REG(Σ) : ε ∉ L}.
The class REG+ (Σ) can be inductively defined as the least family R+ of Σ-
languages which satisfies the following closure properties:
(i)+ the empty language ∅ belongs to R+
(ii)+ for every a ∈ Σ, {a} belongs to R+

(iii)+ if L 1 ∈ R+ and L 2 ∈ R+ , then L 1 ∪ L 2 and L 1 L 2 belong to R+


(iv)+ if L ∈ R+ then L + ∈ R+ .
The definition of a regular language also makes sense for an infinite set of symbols
Σ (though the latter set does not qualify as an alphabet in the strict sense): assuming
that Σ is infinite we define the families REG(Σ) and REG+ (Σ) in the same way
as above, i.e., by means of clauses (i)–(v) and (i)+ –(iv)+ , respectively. Denoting by
Σ(L) the set of all symbols of Σ that occur in the words of L, one easily shows that
for every L ∈ REG(Σ), the set Σ(L) is finite. Thus, each member L of REG(Σ) is a regular language over the finite alphabet Σ(L).
In the paragraphs below we shall devote much space to compound (or composite)
actions on elementary action systems M = (W, R, A). A compound action (on
M) is defined as a set of finite strings of atomic actions of the system M. Treating
the elements of A (i.e., the atomic actions) as primitive symbols, we see that each
compound action can be viewed as a ‘language’ over the ‘alphabet’ A. (Such a
formulation of compound actions is thus convergent with the ideas of Nowakowska.)
In particular, we will be interested in examining the properties of the class REG+ (A)
of regular compound actions over A that do not contain the empty string of actions.
We must remember, however, that the set A need not be finite.
The main task of the theory of formal languages is to provide methods for the
description of infinite languages. It is required at the same time that the methods
should satisfy the following two conditions (Blikle 1973):
1. The condition of analytical effectiveness: the method by means of which a language L over an alphabet Σ is described determines a universal algorithm for deciding in a finite number of steps, for each word x over Σ, whether x belongs to L or not.
2. The condition of synthetic effectiveness: the method by means of which a language
L over Σ is described provides an algorithm for producing all and only the words
of L; at the same time for each word x of L it is possible to estimate the number
of steps of the algorithm necessary to produce the word x.
The above postulates result from pragmatic aspects of the processes of commu-
nication among the language users. The analytical effectiveness of the method is
necessary for the language user to receive information encoded in the language. We
mean here, inter alia, ensuring the possibility of the grammatical parsing of expres-
sions (sentences) of a given language, of its translation into other languages, and
most importantly of all, of understanding expressions (sentences) of the language.
The condition of synthetic effectiveness, in turn, is indispensable in the process of
sending information encoded in the language, e.g., in speaking, writing, etc.
Besides abstract automata, which will be discussed later, combinatorial grammars
are a fundamental tool for the investigation of formal languages which satisfies,
under certain assumptions, the postulate of synthetic effectiveness: a combinatorial
grammar describes a language providing at the same time a method of producing all
and only the words of the language.
Every combinatorial grammar (over Σ) contains apart from the alphabet Σ a finite
set V of auxiliary symbols, a finite set of productions and the start symbol, which is

always an element of V . More formally, a combinatorial grammar is a quadruple

G = (Σ, V, P, α),

where Σ is the given alphabet, also called the terminal alphabet, V is an auxiliary
alphabet (the members of V are called nonterminals or variables or syntactic cate-
gories), P is a finite subset of (Σ ∪ V )∗ × (Σ ∪ V )∗ called the list of productions
of the grammar; the fact that (x, y) ∈ P is written as x → y. x is the predecessor
and y is the successor of the production x → y. α is a distinguished element of V
called the start symbol.
We now formally define the language generated by a grammar G = (Σ, V, P, α).
To do so, we first define two relations ⇒G and ⇒∗G between strings in (Σ ∪ V )∗ .
Let x and y be arbitrary words over the alphabet Σ ∪ V . We say that the word x
directly derives y in the grammar G, and write x ⇒G y, if and only if there are
words z 1 , z 2 ∈ (Σ ∪ V )∗ and a production w1 → w2 in P such that x = z 1 w1 z 2 and
y = z 1 w2 z 2 . Thus, two strings are related by ⇒G just when the second is obtained
from the first by one application of some production.
The relation ⇒∗G is the reflexive and transitive closure of ⇒G . Thus, x ⇒∗G y if
and only if there is a (possibly empty) sequence of strings z 1 , . . . , z n in (Σ ∪ V )∗
such that z 1 = x, z n = y and z i ⇒G z i+1 for i = 1, 2, . . . , n − 1. Then we say that
x derives y in the grammar G. The sequence z 1 , . . . , z n is called a derivation of y
from x. Alternatively, x ⇒∗G y if y follows from x by application of zero or more
productions of P. Note that x ⇒∗G x, for any word x. (We shall omit the subscript
G in ⇒∗G when G is clear from context.)
The language generated by G, denoted by L(G), is the set {x : x ∈ Σ ∗ and α ⇒∗G
x}. Thus, a string x is in L(G) if x consists solely of terminals and the string can
be derived from α. The symbols of the alphabet V appear only while deriving the
words of L(G); they do not occur in the words of L(G). Thus, V plays an auxiliary
role in the process of defining the language L(G).

Example 1.6.1 Let Σ := {A, B}, V := {α, a, b}, and let P, the list of productions, be:
α → ab, a → Aa, a → A, b → Bb, b → B. The following are derivations in the
grammar G := (Σ, V, P, α):

α ⇒ ab ⇒ Ab ⇒ AB
α ⇒ ab ⇒ Aab ⇒ A Ab.

The word AB belongs to the language L(G). The word A Ab is not in L(G) because
it contains the nonterminal b.
The language L(G) has a very simple description: L(G) = {Am B n : m, n ≥ 1}. 
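The description of L(G) can be checked mechanically; the sketch below (illustrative Python) applies the productions of G exhaustively for a bounded number of steps and collects the terminal words obtained. Uppercase letters stand for the terminals A, B and lowercase letters for the nonterminals.

# Productions of the grammar G from Example 1.6.1.
P = [("α", "ab"), ("a", "Aa"), ("a", "A"), ("b", "Bb"), ("b", "B")]

def one_step(word):
    # All words obtainable from `word` by one application of some production.
    out = set()
    for lhs, rhs in P:
        i = word.find(lhs)
        while i != -1:
            out.add(word[:i] + rhs + word[i + len(lhs):])
            i = word.find(lhs, i + 1)
    return out

def generated(max_steps):
    # Terminal words derivable from the start symbol α in at most `max_steps` steps.
    current, terminal = {"α"}, set()
    for _ in range(max_steps):
        current = set().union(*(one_step(w) for w in current))
        terminal |= {w for w in current if w.isupper()}   # no nonterminals left
    return terminal

print(sorted(generated(5)))   # includes 'AB', 'AAB', 'ABB', 'AABB', 'AAAB', ...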

Right-linear grammars are the grammars in which all productions have the form
a → xb or a → x, where a and b are variables and x is a (possibly empty) string of
terminals.

Example 1.6.2 The grammar G from Example 1.6.1 is not right-linear because it
contains the production α → ab.
Let Σ and V be as above and let P ′ be the following list of productions: α → AB, α → Aa, a → Ab, b → Bb, a → AB, a → Aa, b → B. G ′ := (Σ, V, P ′ , α) is thus a right-linear grammar. The language generated by G ′ is the same as that in Example 1.6.1, i.e., L(G ′ ) = {Am B n : m, n ≥ 1}. 

Theorem 1.6.3 (Chomsky and Miller 1958) Right-linear grammars characterize


regular languages in the sense that a language is regular if and only if it is generated
by a right-linear grammar.

The proof of this theorem can be found, e.g., in Hopcroft and Ullman (1979). 
Abstract automata are a fundamental tool for the investigation of formal languages
which satisfies the postulate of analytical effectiveness. We shall shortly present the
most important facts concerning finite automata. These are the simplest automata.
Their theory is presented, from a much broader perspective, in specialized textbooks
on formal linguistics, see Hopcroft and Ullman (1979) or The Handbook of The-
oretical Computer Science (see van Leeuwen 1990). Here we only record the basic
facts that will be employed in our action theory.
A finite automaton consists of a finite set of states and a set of transitions from
a state to a state that occur on the input of symbols chosen from an alphabet Σ.
One state, denoted by w0 , is singled out and is called the initial state, in which the
automaton starts. Some states are designated as final or accepting states.
Formally, a (nondeterministic) finite automaton is a quintuple

A := (W, Σ, δ, w0 , F), (1.6.1)

where
(i) W is a finite set of states of the automaton
(ii) Σ is an alphabet, whose elements are called input symbols
(iii) w0 ∈ W is the initial state
(iv) F ⊆ W is the set of final states,
(v) δ is the transition function mapping the Cartesian product W ×Σ into the power
set ℘ (W ).
(The set δ(u, a) is interpreted as the set of all states w such that there is a transition
labeled by a from u to w.)
Since the alphabet Σ is a fixed component of the automaton (1.6.1), we shall
often say that A is an automaton over the alphabet Σ.
The function δ can be extended to a function δ ∗ mapping W × Σ ∗ into ℘ (W ) and
reflecting sequences of inputs as follows:
(vi) δ ∗ (u, ε) := {u}.
(vii) δ ∗ (u, xa) = {w : for some state v in δ ∗ (u, x), w is in δ(v, a)}.

Briefly,
δ ∗ (u, xa) = ⋃{δ(v, a) : v in δ ∗ (u, x)}.

δ ∗ is indeed an extension of δ since δ ∗ (u, a) = δ(u, a) for any input symbol a. Thus,
we may again use δ in place of δ ∗ .
An automaton A is deterministic if the transition function δ has the property that
for all u ∈ W and a ∈ Σ, δ(u, a) is a one-element set.
A word x of Σ ∗ is said to be accepted by a finite automaton A = (W, Σ, δ, w0 , F)
if δ(w0 , x) ∩ F ≠ ∅.
L(A) denotes the set of all words of Σ ∗ accepted by A. The set L(A) is called the
language accepted by A.
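The definitions of δ ∗ , acceptance and L(A) translate directly into code; the fragment below is an illustrative Python sketch, and the particular two-state automaton is a hypothetical example (it accepts the words over {a, b} ending in b).

class NFA:
    def __init__(self, states, alphabet, delta, start, finals):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.finals = delta, start, finals

    def delta_star(self, u, word):
        # The extended transition function δ*(u, x).
        current = {u}                                  # δ*(u, ε) = {u}
        for a in word:
            current = set().union(*(self.delta.get((v, a), set()) for v in current))
        return current

    def accepts(self, word):
        # x is accepted iff δ(w0, x) ∩ F ≠ ∅.
        return bool(self.delta_star(self.start, word) & self.finals)

A = NFA({"w0", "w1"}, {"a", "b"},
        {("w0", "a"): {"w0"}, ("w0", "b"): {"w0", "w1"}},
        "w0", {"w1"})
print(A.accepts("aab"), A.accepts("aba"))   # True False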
Two finite automata A and B (over the same alphabet Σ) are equivalent if they
accept the same languages, i.e., L(A) = L(B). It is a well-known fact (see Hopcroft
and Ullman 1979, p. 22) that deterministic and nondeterministic finite automata
accept the same sets of words; more specifically
Proposition 1.6.4 Let L ⊆ Σ ∗ be the language accepted by a nondeterministic
finite automaton. Then there exists a deterministic finite automaton that
accepts L. 
The crucial result for the theory of regular languages is that the latter are precisely
the languages accepted by finite automata.
Theorem 1.6.5 Let Σ be a finite set. For any set L ⊆ Σ ∗ the following conditions
are equivalent:
(i) L ∈ REG(Σ)
(ii) L = L(A) for some finite automaton A over Σ.
A proof of this theorem can be found in Hopcroft and Ullman (1979). 
Theorem 1.6.5 has interesting applications. We shall mention one.

Theorem 1.6.6 The class REG(Σ) is closed under complementation. That is, if
L ⊆ Σ ∗ is regular, then Σ ∗ \ L is regular as well.

Proof Let L = L(A) for some finite automaton A = (W, Σ1 , δ, w0 , F). In view of
Proposition 1.6.4 we may assume without loss of generality that A is deterministic.
Furthermore, we may assume that Σ1 = Σ, for if Σ1 \Σ is nonempty, one can delete
all transitions δ(u, a) of A on symbols a not in Σ. The fact that L ⊆ Σ ∗ assures that
the language accepted by A is not thereby changed. In turn, if Σ \ Σ1 is nonempty,
then none of the symbols of Σ \ Σ1 appears in the words of L. We may therefore
introduce a ‘dead’ state d into A with

δ(d, a) = d for all a ∈ Σ, and


δ(w, a) = d for all w ∈ W and a ∈ Σ \ Σ1 .

In order to construct an automaton over Σ which accepts the set Σ ∗ \ L, we complement the set of final states of A = (W, Σ, δ, w0 , F); we define

A′ := (W, Σ, δ, w0 , W \ F).

Then A′ accepts a word x if and only if δ(w0 , x) is in W \ F, that is, x is in Σ ∗ \ L. 

Corollary 1.6.7 The regular sets are closed under intersection.


 
Proof Let L 1 , L 2 belong to REG(Σ). Then L 1 ∩ L 2 = Σ ∗ \ ((Σ ∗ \ L 1 ) ∪ (Σ ∗ \ L 2 )).
Closure under intersection then follows from closure under union and complemen-
tation. 

Let
A = (W, Σ, δ, w0 , F) (1.6.2)

be a (non-deterministic) finite automaton. The following elementary action system

M(A) := (W, RA , AΣ,δ ), (1.6.3)

is assigned to A, where
(i) W coincides with the set of states of A
(ii) the transition relation RA is defined as follows:
(u, w) ∈ RA if and only if w ∈ δ(u, a) for some a ∈ Σ
(iii) AΣ,δ := {Aa : a ∈ Σ}, where each Aa is the binary relation on W defined by
means of the equivalence: Aa (u, w) if and only if w ∈ δ(u, a).

(1.6.3) is a normal action system because RA = ⋃{Aa : a ∈ Σ}. Moreover,
if A is deterministic, Aa is a function whose domain is equal to W , i.e., for any
u ∈ W there exists a unique state w ∈ W such that Aa (u, w) holds. This is a direct
consequence of the fact that δ(u, a) is a singleton, for all a ∈ Σ, u ∈ W .
The correspondence a → Aa , a ∈ Σ, need not be injective. The above mapping
is one-to-one if and only if for every pair a, b ∈ Σ it is the case that

a = b if and only if (∀u ∈ W ) δ(u, a) = δ(u, b). (1.6.4)

The pair ({w0 }, F) is a task for the action system (1.6.3), where w0 is the initial state
of A and F is the set of final states.
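The passage from an automaton to the action system (1.6.3) is purely mechanical; the sketch below (illustrative Python, with a hypothetical transition function given as a dictionary) extracts the atomic actions Aa and the transition relation RA as sets of pairs.

def action_system_of(delta):
    # Build the ingredients of M(A) = (W, R_A, A_{Σ,δ}) from a transition function
    # mapping pairs (state, symbol) to sets of states, cf. (1.6.3).
    actions = {}
    for (u, a), successors in delta.items():
        actions.setdefault(a, set()).update((u, w) for w in successors)
    R = set().union(*actions.values())        # R_A is the union of the relations A_a
    return R, actions

delta = {("w0", "a"): {"w0"}, ("w0", "b"): {"w0", "w1"}}   # a hypothetical automaton
R, actions = action_system_of(delta)
print(actions["b"] == {("w0", "w0"), ("w0", "w1")})   # True
print(R == actions["a"] | actions["b"])               # True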
Conversely, let
M = (W, R, A) (1.6.5)

be a finite elementary action system, and suppose a task (Φ, Ψ ) for M has been
singled out so that Φ is a one-element subset of W , Φ = {w0 }. We wish to define an
automaton assigned to M, which will be called the automaton corresponding to the
system M and the task (Φ, Ψ ).

There are several ways of defining the transition function in the finite automaton
corresponding to the action system M. We present two options here. The first option
identifies the transition function in the automaton with realizable performances of
actions. Accordingly:

δM (u, A) := {w ∈ W : u A, R w}, (1.6.6)

for all u ∈ W , A ∈ A.
The second possible definition of the transition function abstracts from the relation
R. In this case we put:

δM (u, A) := {w ∈ W : A(u, w)}, (1.6.7)

for all u ∈ W , A ∈ A.
In both cases the quintuple

A(M) := (W, A, δM , w0 , Ψ ) (1.6.8)

is a finite (non-deterministic) automaton in which A, the set of atomic actions of M,


is the input alphabet. For any A, B ∈ A, the right-hand side of formula (1.6.4), i.e.,
the condition  
(∀u, w ∈ W ) w ∈ δM (u, A) ⇔ w ∈ δM (u, B)

is equivalent to A ∩ R = B ∩ R if δM is defined by formula (1.6.6), or it is equivalent


to A = B if δM is given by (1.6.7). The equation A ∩ R = B ∩ R need not imply the
identity A = B. However, when M is normal, then the definitions (1.6.6) and (1.6.7)
are equivalent, and consequently, the equivalence

A=B if and only if (∀u ∈ W ) δM (u, A) = δM (u, B)

holds for all A, B ∈ A in both the meanings of δM .


Having given a finite automaton (1.6.2), we define the action system M(A). Pro-
ceeding a step further, we can then define the automaton A(M(A)) in accordance with
the formulas (1.6.6) and (1.6.8). The two automata A and A(M(A)) are not identical
as they operate on different input alphabets—A is defined over the alphabet Σ while
A(M(A)) has the set {Aa : a ∈ Σ} as the input alphabet. The transition functions δ
and δM(A) in the respective automata satisfy the identity: δM(A) (u, Aa ) = δ(u, a),
for all a ∈ Σ, u ∈ W . If the automaton A satisfies condition (1.6.4), the automata
A and A(M(A)) can be identified because the map a → Aa establishes a one-to-one
correspondence between the sets Σ and {Aa : a ∈ Σ}.
Similarly, having given an action system (1.6.5) with the task ({w0 }, Ψ ) distin-
guished, we define the transition function δM and the automaton A(M), according to
the formulas (1.6.6) and (1.6.8); then we pass to the action system M(A(M)) which is
defined by means of the formula (1.6.3). If M is normal, the relationship between the
systems M and M(A(M)) is straightforward—the two systems are identical. Since M

is normal, the direct transition relation R (in M) coincides with the direct transition
relation RA(M) in the action system M(A(M)). It is also easy to see that the sets of
atomic actions of M and M(A(M)) are the same.

1.7 Compound Actions

Let us return for a while to the example with the washing machine. In order to wash
linen a sequence of atomic actions such as putting the linen into the machine, setting
the wash program, turning on the machine, etc., has to be performed. In a more
sophisticated model, the sequence of actions could also include spinning, starching,
and so on. Thus, washing is a set of sequences of atomic actions.
Everyday life provides an abundance of similar examples of compound actions
such as the baking of bread or the manufacture of cars. A compound action is then
a certain set of finite sequences of atomic actions. (We could also allow for infinite
strings of atomic actions as well; we shall not discuss this issue thoroughly in this
book.) The above remarks lead to the following definition:

Definition 1.7.1 Let M = (W, R, A) be an elementary action system. A compound


action of M is any set of finite sequences of atomic actions of A. 

The term “composite action” will also be occasionally used hereafter and treated
as being synonymous to “compound action.” We will often identify any atomic
action A ∈ A with the composite action {(A)}, so each atomic action qualifies as a
compound one. In order to simplify the notation we adopt here the convention that
any finite sequence (A1 , . . . , An ) of atomic actions is written by juxtaposing them
as A1 . . . An . In particular, instead of (A) we simply write A. This is in accordance
with the notation for formal languages adopted in Sect. 1.6.
Infinite compound actions are allowed. The approach to action presented here
is finitary in the sense that sets consisting of infinite strings of atomic actions are
not investigated. However, in Sect. 2.6 infinite sequences of actions are occasionally
mentioned in the context of infinite games.

Example There are 4 recycling bins for waste materials near your house: for paper,
plastic, white glass, and colored glass. Almost every day you perform the compound
action of emptying your wastebin of plastic, paper, and glass. Taking out the trash
from the wastebin is a set of finite sequences of four atomic actions: A—putting the
paper from your wastebin into the paper bin, B—putting the plastic materials into
the plastic bin, C—putting the white glass into the white glass bin, and D—putting
the colored glass into the colored glass bin. The strings of these four actions are not
uniquely determined, because you may perform them in various orders. The initial
states are those in which your wastebin is full and the final one is when it is empty.
More accurately, assuming that we live in the mathematized world, your bin contains
today n pieces of waste and they are distributed among the four categories that add
up to n. Therefore your string of actions has length n, because you put each piece

separately into the appropriate recycling bin (you are very concerned about the planet
and so you take special care). While performing this string of actions, i.e., taking
out the trash, your action system (the cleared wastebin) passes successively through
n + 1 states u 0 , . . . , u n , where u 0 is the initial state and u n —the final one. 

A∗ is the set of all finite sequences of atomic actions of A. In particular the empty
string ε is a member of A∗ . ε is identified with the empty set, ε = ∅. A∗ is the largest
compound action of M.
CA denotes the family of all compound actions of the system M = (W, R, A).
(The members of CA will be often referred to as compound actions on M.)
We shall first distinguish two special members of CA. ε is the compound action,
whose only element is the empty sequence of atomic actions, ε = {ε} = {∅}. In turn,
∅ denotes the empty compound action.
C + A is the family of all compound actions which do not involve the empty string
of atomic actions. Thus,

C + A = {A ∈ CA : ε ∉ A}.

Note that the empty action ∅ belongs to C + A. Since the members of CA can
be treated as languages over the (possibly infinite) alphabet A, all the definitions
adopted for formal languages can be applied to compound actions. In particular for
B, C ∈ CA we define:

B ∪ C := {A1 . . . An : A1 . . . An ∈ B or A1 . . . An ∈ C},
B ∪ C is the union of B and C;
B ∩ C := {A1 . . . An : A1 . . . An ∈ B and A1 . . . An ∈ C},
B ∩ C is the meet of B and C;
∼ B := {A1 . . . An ∈ A∗ : A1 . . . An ∉ B},
∼ B is the complement of B;
B ◦ C := {A1 . . . Am B1 . . . Bn : A1 . . . Am ∈ B and B1 . . . Bn ∈ C},
B ◦ C is called the composition of B and C;

B∗ := ⋃{Bn : n ≥ 0}, where B0 := ε and Bn+1 := Bn ◦ B,
B∗ is the iterative closure of B;

B+ := ⋃{Bn : n ≥ 1} is called the positive iterative closure of B;
B−1 := {A1 . . . An : An . . . A1 ∈ B},
B−1 is called the inverse of B.

Note that A∗ = A+ ∪ ε, for any A. In particular, ε ∗ = ε + = ε, ∅+ = ∅, and ∅∗ = ε. The family of compound actions on M exhibits a definite algebraic structure
represented by the algebra

(CA; ∪, ∩, ∼, ◦, ∗ , −1 ). (1.7.1)

As C + A = CA \ {ε}, the system

(C + A; ∪, ∩, ∼, ◦, + , −1 ). (1.7.2)

is an algebra as well. (But in (1.7.2) the complement ∼ is taken with respect to A+ ,


i.e., ∼ B = A+ \ B, for all B ∈ C + A.) Both algebras (1.7.1) and (1.7.2) are called
the algebras of compound actions of the system M.
The algebras (1.7.1) and (1.7.2) are examples of one-sorted algebras—each op-
eration on the algebra assigns a compound action to a single compound action or
to a pair of actions. We shall further enrich the structure of (1.7.1) and (1.7.2) by
adjoining to them operations which assign compound actions to single propositions.
(Any proposition is identified with a subset of W .) We shall also consider operations
which assign actions or propositions to pairs of type (action, proposition).
To each compound action A ∈ CA a binary relation Res A on W is assigned.
Res A is defined as follows:

Res A := ⋃{(A1 ∩ R) ◦ . . . ◦ (An ∩ R) : n ≥ 0, A1 . . . An ∈ A}, (1.7.3)

where P ◦ Q is the ordinary composition of the relations P and Q. Res A is called


the resultant relation of the compound action A. (If A1 . . . An is the empty string ε,
then (A1 ∩ R) ◦ . . . ◦ (An ∩ R) is equal to E W ∩ R, where E W is the diagonal of W .)
If A is an atomic action, i.e., A = {A} for some A ∈ A, then the resultant relation
of A is equal to A ∩ R. It follows from the definition of the reach ReM of the system
M (Definition 1.2.2) that for any compound action A ∈ CA, Res A is a subrelation
of ReM . (We assume that Res ε = E W ∩ R. Consequently, if the empty sequence ε
belongs to A, then E W ∩ R ⊆ Res A.)
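For a finite system and a finite compound action the resultant relation (1.7.3) can be computed by brute force; the sketch below (illustrative Python) represents atomic actions and R as sets of pairs and a compound action as a set of tuples of atomic-action names.

def compose(P, Q):
    # Ordinary composition P ∘ Q of two binary relations.
    return {(u, w) for (u, v1) in P for (v2, w) in Q if v1 == v2}

def resultant(compound, atomic, R, W):
    # Res A for a finite compound action A, cf. (1.7.3).
    E_W = {(u, u) for u in W}
    result = set()
    for string in compound:
        if not string:                        # the empty string contributes E_W ∩ R
            result |= E_W & R
            continue
        rel = atomic[string[0]] & R           # A_1 ∩ R
        for name in string[1:]:
            rel = compose(rel, atomic[name] & R)
        result |= rel
    return result

W = {"u", "v", "w"}
R = {("u", "v"), ("v", "w")}
atomic = {"A": {("u", "v")}, "B": {("v", "w"), ("v", "u")}}
print(resultant({("A",), ("A", "B")}, atomic, R, W))   # {('u', 'v'), ('u', 'w')}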

Lemma 1.7.2 For any compound actions B, C ∈ CA:


(i) Res (B ∪ C) = Res B ∪ Res C,
(ii) Res (B ◦ C) = Res B ◦ Res C,
(iii) Res (B+ ) = (Res B)+ ,
(iv) Res (B ∩ C) ⊆ Res B ∩ Res C,
(v) Res (ε) = E W ∩ R and Res (∅) = ∅.

The proof is easy and is omitted. 


Note that Res (A∗ ) ≠ (Res A)∗ because Res (A∗ ) = Res (A+ ) ∪ Res(ε) =
Res (A+ )∪(E W ∩ R) while (Res A)∗ , the transitive and reflexive closure of Res A, is
equal to Res (A+ )∪ E W . But if the relation R is reflexive, then Res (A∗ ) = (Res A)∗ .
It follows from Lemma 1.7.2 that the mapping A → Res A, A ∈ C + A, is a
homomorphism of the reduct (C + A; ∪, ◦, + ) of the algebra (1.7.2) into the relational
algebra (℘ (W × W ), ∪, ◦, + ) of binary relations on W . Moreover, if R is reflexive,
then A → Res A, A ∈ CA, is a homomorphism of the reduct (CA; ∪, ◦, * ) of the
algebra (1.7.1) into the relational algebra (℘ (W × W ), ∪, ◦, ∗ ).

A possible performance of a compound action A ∈ CA is any nonzero finite string


(u 0 , . . . , u n ) of states of W such that u 0 A1 u 1 . . . u n−1 An u n for some sequence
A1 . . . An ∈ A.
|A| is the set of all possible performances of A. The set |A| is called the
intension of A.
As each atomic action A ∈ A is identified with {A}, we see that |A|, the intension
of A, is equal to A itself.
It is assumed that the diagonal E W is the set of all possible performances of
the composite action ε = {ε}. Therefore |ε| = E W . Thus, if a compound action
A contains the empty string of atomic actions, |A| is reflexive. The set of possible
performances of ∅ is empty, |∅| = ∅.
A possible performance
(u 0 , . . . , u n ) (1.7.4)

of a compound action A ∈ CA is said to be realizable if and only if u 0 R u 1 . . .


u n−1 R u n . In this case we also say that the indirect transition u 0 R u 1 . . . u n−1 R u n
is accomplished by the action A.
|A| R is the set of all realizable performances of A. In particular, |ε| R = E W ∩ R,
|∅| R = ∅, and |A| R = A ∩ R for any atomic action A.
If the possible performance (1.7.4) of A is realizable then (u 0 , u n ) ∈ ResA; and
conversely, if (u, w) ∈ ResA, then there exists a realizable performance (1.7.4) of
A such that u 0 = u and u n = w.
We now define the notion of the performability of a composite action.

Definition 1.7.3 Let M = (W, R, A) be an elementary action system, let A ∈ CA


be a compound action, and let u ∈ W .
(i) A is performable in u if and only if there exists a realizable performance
(u 0 , . . . , u n ) of A such that u 0 = u,
(ii) A is totally performable in u if and only if A is performable in u and every possible
performance (u 0 , . . . , u n ) of A such that u 0 = u is realizable,
(iii) A is performable in the system M if and only if A is performable in every state
u ∈ W such that there exists a possible performance (u 0 , . . . , u n ) of A with
u 0 = u,
(iv) A is totally performable in M if and only if every possible performance of A is
realizable. 

Thus, according to (i), A is performable in u if and only if there exists a sequence


A1 . . . An ∈ A and a (nonzero) sequence of states (u 0 , . . . , u n ) such that u 0 = u
and u 0 A1 R u 1 . . . u n−1 An R u n . The intuitive sense of the above definitions is
clear. Definition 1.7.3 of the performability of a compound action is an extension of
Definitions 1.4.1 and 1.4.3. (The latter definitions are valid only for atomic actions.)
If an action A ∈ CA is atomic, i.e., A = {A} for some A ∈ A, and u ∈ W , then
A is performable (totally performable) in u if and only if A is performable (totally
performable) in u in the sense of Definition 1.4.1.

The empty compound action ∅ is performable in no state of M. Note however that,


paradoxically, ∅ is totally performable in M because (iv) is vacuously satisfied for ∅.
The action ε = {ε} is performable in u if and only if u R u. (It follows that then ε is
also totally performable in every reflexive state u.) Consequently, ε is performable
in the system M if and only if R is reflexive (if and only if ε is totally performable
in the system M).
The above definitions state that a compound action A is performable (at a given state) if some string of atomic actions belonging to A can be performed step by step along consecutive states (u 0 , . . . , u n ) starting at that state. We may therefore say that the performability of a compound action A is a function of the performability of the constituents of A. One may thus argue that Definition 1.7.3 is a form of the compositionality principle for the performability of actions.
Let us also note that A ∈ CA is totally performable in M if and only if it is totally
performable in every state u ∈ W such that there exists a (possible) performance
(u 0 , . . . , u n ) of A with u 0 = u.
Lemma 1.7.4 Let M = (W, R, A) be an action system and A ∈ CA a compound
action. If A is totally performable in M, then for all A1 . . . An ∈ A, it is the case that
A1 ◦ . . . ◦ An ⊆ R n .
The proof is easy and is omitted. 
The family of all totally performable compound actions has an interesting alge-
braic structure.
Theorem 1.7.5 Let M = (W, R, A) be an elementary action system and let TP be
the family of all compound actions totally performable in the system M. Then:
(i) ∅ ∈ TP,
(ii) if A ∈ TP and B ∈ TP, then A ∪ B ∈ TP and A ◦ B ∈ TP,
(iii) if A ∈ TP, then A+ ∈ TP,
(iv) if A ∈ TP and B ⊆ A then B ∈ TP.
(v) If R is reflexive, then ε ∈ TP and hence A∗ ∈ TP, for any compound A.
Proof (ii). Let A, B ∈ TP and let (u 0 , . . . , u n ) be a possible performance of A ∪ B.
Hence, there exists a sequence A1 . . . An ∈ A ∪ B such that u 0 A1 u 1 . . . u n−1 An u n .
Since both A and B are totally performable, we have that u 0 R u 1 . . . u n−1 R u n .
So A ∪ B is totally performable.
Now let (u 0 , . . . , u n ) be a possible performance of A ◦ B. Hence, there ex-
ist sequences A1 . . . Ak ∈ A, B1 . . . Bl ∈ B, where l = n − k, such that
u 0 A1 u 1 . . . . . . u k−1 Ak u k B1 u k+1 . . . u n−1 Bl u n . Since A and B are totally per-
formable, both the sequences (u 0 , . . . , u k ), (u k+1 , . . . , u n ) are realizable. Hence
u 0 R u 1 . . . u k R u k+1 . . . . . . u n−1 R u n and so A ◦ B is totally performable.
(iii). Let A ∈ TP. It follows from (ii) that An ∈ TP, for all n ≥ 1. To prove that A+ is totally performable, assume that (u 0 , . . . , u m ) is a possible performance of ⋃{An : n ≥ 1}. Hence, for some n ≥ 1, there exists a sequence A1 . . . Am ∈ An such that u 0 A1 u 1 . . . u m−1 Am u m . Since An is totally performable, we have that u 0 R u 1 . . . u m−1 R u m .
(iv) is immediate. As A∗ = A+ ∪ ε, (v) follows. 

If the system M is normal, the family TP of all totally performable compound


actions has a simple characterization:
Proposition 1.7.6 Let M = (W, R, A) be a normal elementary action system, i.e.,
A ⊆ R, for all A ∈ A. Then every compound action A ∈ C + A is totally performable
in M. Moreover, if R is reflexive, then every action A ∈ CA is totally performable
in M.

Proof Let (u 0 , . . . , u n ) be a possible performance of a compound action A ∈ C + A.


Then u 0 A1 u 1 . . . u n−1 An u n for some nonempty string A1 . . . An ∈ A. As M is
normal, we have that u 0 R u 1 . . . u n−1 R u n . So (u 0 , . . . , u n ) is realizable.
If R is reflexive, then ε is totally performable and hence every action in CA is
totally performable in M. 

For any compound action A ∈ CA we define two subsets of W —the domain,


DomA, and the codomain (alias range), C DomA, of the action A:

DomA := {u ∈ W : (∃u 0 , . . . , u n ∈ W ) u = u 0 and (u 0 , . . . , u n )


is a possible performance of A},
C DomA := {w ∈ W : (∃u 0 , . . . , u n ∈ W ) w = u n and (u 0 , . . . , u n )
is a possible performance of A}

We also define:

Dom R A := {u ∈ W : (∃u 0 , . . . , u n ∈ W ) u = u 0 and (u 0 , . . . , u n )


is a realizable performance of A},
C Dom R A := {w ∈ W : (∃u 0 , . . . , u n ∈ W ) w = u n and (u 0 , . . . , u n )
is a realizable performance of A}.

It follows that

Dom R A = {u ∈ W : (∃w ∈ W )(u, w) ∈ Res A}

and

C Dom R A = {w ∈ W : (∃u ∈ W )(u, w) ∈ Res A}.

In particular, the above definitions make sense for any atomic action A. The
intended readings of the proposition DomA are: ‘A possibly starts’ or ‘A possibly
begins’, while for C DomA we have ‘A possibly terminates’ or ‘A possibly ends’.
Analogously, Dom R A says ‘A realizably starts’ and C Dom R A says ‘A realizably
terminates’.

For any two sets Φ, Ψ ⊆ W we also define the following four compound actions:

Φ ☞1 Ψ := {A1 . . . An ∈ A∗ : (∀u ∈ Φ)(∃w ∈ Ψ ) (u, w) ∈ A1 ◦ . . . ◦ An },


Φ ☞2 Ψ := {A1 . . . An ∈ A∗ : (∀u ∈ Φ)(∀w ∈ Ψ ) (u, w) ∈ A1 ◦ . . . ◦ An },
Φ ☞3 Ψ := {A1 . . . An ∈ A∗ : (∃u ∈ Φ)(∀w ∈ Ψ ) (u, w) ∈ A1 ◦ . . . ◦ An },
Φ ☞4 Ψ := {A1 . . . An ∈ A∗ : (∃u ∈ Φ)(∃w ∈ Ψ ) (u, w) ∈ A1 ◦ . . . ◦ An }.

The compound actions Φ ☞i Ψ , i = 1, . . . , 4, are collectively called “From Φ to Ψ ,”


with the subscript i if necessary. It is not excluded that Φ ☞i Ψ may be the empty
action, for some i ∈ {1, . . . , 4}.
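For a finite system one can enumerate the short members of these actions directly; the sketch below (illustrative Python, nonempty strings only) lists the strings of Φ ☞1 Ψ up to a prescribed length by composing the atomic relations.

from itertools import product

def compose(P, Q):
    return {(u, w) for (u, v1) in P for (v2, w) in Q if v1 == v2}

def from_to_1(Phi, Psi, atomic, max_len):
    # The strings of Φ ☞1 Ψ of length at most `max_len` (the full action may be infinite).
    found = set()
    for n in range(1, max_len + 1):
        for string in product(atomic, repeat=n):
            rel = atomic[string[0]]
            for name in string[1:]:
                rel = compose(rel, atomic[name])
            if all(any((u, w) in rel for w in Psi) for u in Phi):
                found.add(string)
    return found

atomic = {"A": {("u", "v")}, "B": {("v", "w")}}
print(from_to_1({"u"}, {"w"}, atomic, 2))   # {('A', 'B')}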
The relations between the actions Φ ☞i Ψ , i = 1, . . . , 4, are established by the
following simple lemma.

Lemma 1.7.7 Let M = (W, R, A) be an action system and Φ, Ψ ⊆ W . Then:

(i) Φ ☞1 Ψ = ⋂{ {u} ☞1 Ψ : u ∈ Φ }
(ii) Φ ☞2 Ψ = ⋂{ {u} ☞2 Ψ : u ∈ Φ } = ⋂{ Φ ☞2 {w} : w ∈ Ψ }
(iii) Φ ☞3 Ψ = ⋃{ {u} ☞3 Ψ : u ∈ Φ }
(iv) Φ ☞4 Ψ = ⋃{ Φ ☞4 {w} : w ∈ Ψ } = ⋃{ {u} ☞4 Ψ : u ∈ Φ }
(v) Φ ☞2 Ψ = ⋂{ Φ ☞1 {w} : w ∈ Ψ }
(vi) Φ ☞3 Ψ = ⋃{ {u} ☞2 Ψ : u ∈ Φ }
(vii) Φ ☞4 Ψ = ⋃{ {u} ☞1 Ψ : u ∈ Φ }. 

Theorem 1.7.8 Let M = (W, R, A) be a finite elementary action system. Let


Φ, Ψ ⊆ W. Then for i = 1, . . . , 4, the compound action Φ ☞i Ψ is regular (over
A).

Proof We shall first prove the above theorem for i = 1.


If Φ is empty, then Φ ☞1 Ψ is equal to A∗ . So Φ ☞1 Ψ is regular.
If Φ is a singleton, Φ = {w0 }, we proceed as follows. We assign to the system
M the finite automaton A(M) := (W, A, δ, w0 , Ψ ) over the alphabet A, where the
transition function δ is defined as follows: δ(u, A) := {w ∈ W : u A w}, for all
A ∈ A and u ∈ W .
The proof of Theorem 1.7.8 is based on the following claims.

Claim 1 For all u, w ∈ W and for any string A1 . . . An of atomic actions,

A1 . . . An ∈ {u} ☞1 {w} if and only if w ∈ δ(u, A1 . . . An ).



Proof of Claim 1 It immediately follows from the definition of {u} ☞1 {w} that
A1 . . . An ∈ {u} ☞1 {w} if and only if (u, w) ∈ A1 ◦ . . . ◦ An .
We prove the claim by induction on the length of the sequence A1 . . . An . The
claim holds for the empty string. If A ∈ A, then A ∈ {u} ☞1 {w} if and only
if (u, w) ∈ A if and only if w ∈ δ(u, A). Suppose the claim holds for all words
of length ≤ n, and consider a word A1 . . . An An+1 . The following conditions are
equivalent:

A1 . . . An An+1 ∈ {u} ☞1 {w},


(u, w) ∈ A1 ◦ . . . ◦ An ◦ An+1 ,
 
(∃v ∈ W ) (u, v) ∈ A1 ◦ . . . ◦ An and (v, w) ∈ An+1 , (by the induction
hypothesis)
(∃v ∈ W )(v ∈ δ(u, A1 . . . An ) and w ∈ δ(v, An+1 ),
w ∈ δ(u, A1 . . . An An+1 ). 

Claim 2 For every sequence A1 . . . An of atomic actions of A, the word A1 . . . An


is accepted by A(M) if and only if A1 . . . An ∈ {w0 } ☞1 Ψ .

Proof of Claim 2 A1 . . . An is accepted by A(M) if and only if

(∃w)(w ∈ Ψ and w ∈ δ(w0 , A1 . . . An )) if and only if (by Claim 1)

(∃w)(w ∈ Ψ and A1 . . . An ∈ {w0 } ☞1 {w}) if and only if
A1 . . . An ∈ {w0 } ☞1 Ψ. 
It follows from Claim 2 and Theorem 1.6.5 that for all w0 ∈ W and all Ψ ⊆ W , the
compound action {w0 } ☞1 Ψ is regular.
If Φ is a quite arbitrary nonempty subset of W , then by Lemma 1.7.7.(i), Φ ☞1 Ψ = ⋂{ {u} ☞1 Ψ : u ∈ Φ }. Since Φ is finite (for M is assumed to be finite), Φ ☞1 Ψ is thus an intersection of finitely many regular compound actions. This, in view of Corollary 1.6.7, implies that Φ ☞1 Ψ is regular as well.
Recalling again Corollary 1.6.7 and Lemma 1.7.7.(v)–(vi), we easily get that Φ ☞2 Ψ and Φ ☞3 Ψ are regular.
To prove that Φ ☞4 Ψ is regular, we notice that {u} ☞4 Ψ = {u} ☞1 Ψ , for all u ∈ W , Ψ ⊆ W . By Lemma 1.7.7.(iv), Φ ☞4 Ψ = ⋃{ {u} ☞4 Ψ : u ∈ Φ } = ⋃{ {u} ☞1 Ψ : u ∈ Φ }. Since ⋃{ {u} ☞1 Ψ : u ∈ Φ } is the union of finitely many regular actions, we infer that Φ ☞4 Ψ is also regular.
This completes the proof of the theorem. 
If the system M is normal, Proposition 1.7.6 implies that the actions Φ ☞i Ψ ,
i = 1, . . . , 4, are totally performable in M.

Apart from the compound actions Φ ☞i Ψ , i = 1, . . . , 4, we can also define


actions of the type ‘From Φ to Ψ ’ in the stronger version, in which the relation R of
direct transition is necessarily involved. Their definitions are strictly linked with the
issue of reaching goals in elementary action systems.
A task for an action system M is any pair (Φ, Ψ ) with Φ, Ψ ⊆ W . Φ is the initial
condition of the task while Ψ is the terminal condition or the goal of the task.
We shall now discuss three concepts of the attainability of the goal Ψ from the
set Φ that differ in strength.

Definition 1.7.9 Let (Φ,Ψ ) be a task for an elementary action system M =


(W, R, A).
(i) The goal Ψ is weakly attainable from Φ in M if and only if (Φ × Ψ ) ∩ ReM is
non-empty, where ReM is the reach of M, i.e., there exist states u ∈ Φ, w ∈ Ψ
and an operation of M with the initial state u and the terminal state w;
(ii) The goal Ψ is attainable from Φ in M if and only if for every u ∈ Φ, ({u} ×
Ψ ) ∩ ReM is nonempty, i.e., from every state u ∈ Φ, the system turns to a
certain state w ∈ Ψ by means of an operation of M, whose initial state is u and
the terminal state is w;
(iii) Ψ is strongly attainable from Φ in M if and only if Φ × Ψ ⊆ ReM , i.e., it is
possible to move the system from any state u ∈ Φ to any state w ∈ Ψ through
an operation of M with the initial state u and the terminal state w. 

For any task (Φ, Ψ ) we define the compound action Φ ☞iR Ψ , for i = 1, . . . , 4,
in an analogous way as the actions Φ ☞i Ψ :

Φ ☞1R Ψ := {A1 . . . An ∈ A∗ : (∀u ∈ Φ)(∃w ∈ Ψ ) (u, w) ∈ (A1 ∩ R) ◦ · · · ◦ (An ∩ R)},

Φ ☞2R Ψ := {A1 . . . An ∈ A∗ : (∀u ∈ Φ)(∀w ∈ Ψ ) (u, w) ∈ (A1 ∩ R) ◦ · · · ◦ (An ∩ R)},

Φ ☞3R Ψ := {A1 . . . An ∈ A∗ : (∃u ∈ Φ)(∀w ∈ Ψ ) (u, w) ∈ (A1 ∩ R) ◦ · · · ◦ (An ∩ R)},

Φ ☞4R Ψ := {A1 . . . An ∈ A∗ : (∃u ∈ Φ)(∃w ∈ Ψ ) (u, w) ∈ (A1 ∩ R) ◦ · · · ◦ (An ∩ R)}.

We recall that if the sequence A1 . . . An ∈ A∗ is empty, then (A1 ∩R)◦· · ·◦(An ∩R) =
E W ∩ R. If Φ and Ψ are singletons, |Φ| = |Ψ | = 1, then the actions Φ ☞iR Ψ ,
i = 1, . . . , 4, coincide, i.e., Φ ☞1R Ψ = · · · = Φ ☞4R Ψ.

Proposition 1.7.10 Let (Φ, Ψ ) be a task for M. Then


(i) Ψ is weakly attainable from Φ if and only if Φ ☞3R Ψ is nonempty;
(ii) Ψ is attainable from Φ if and only if (∀u ∈ Φ) {u} ☞3R Ψ ≠ ∅;
(iii) Ψ is strongly attainable from Φ if and only if (∀u ∈ Φ)(∀w ∈ Ψ )
{u} ☞3R {w} ≠ ∅. 
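Once the reach ReM is available as a set of pairs, the three notions of attainability reduce to elementary set checks, as in the following illustrative Python sketch (how ReM itself is obtained from the operations of M is governed by Definition 1.2.2 and is not repeated here).

def weakly_attainable(Phi, Psi, reach):
    # (Φ × Ψ) ∩ Re_M is nonempty.
    return any((u, w) in reach for u in Phi for w in Psi)

def attainable(Phi, Psi, reach):
    # For every u ∈ Φ, ({u} × Ψ) ∩ Re_M is nonempty.
    return all(any((u, w) in reach for w in Psi) for u in Phi)

def strongly_attainable(Phi, Psi, reach):
    # Φ × Ψ ⊆ Re_M.
    return all((u, w) in reach for u in Phi for w in Psi)

reach = {("u1", "w1"), ("u2", "w1")}                            # a hypothetical reach
print(weakly_attainable({"u1", "u2"}, {"w1", "w2"}, reach))     # True
print(attainable({"u1", "u2"}, {"w1", "w2"}, reach))            # True
print(strongly_attainable({"u1", "u2"}, {"w1", "w2"}, reach))   # False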

It is not difficult to show that the actions Φ ☞iR Ψ , i = 1, . . . , 4, are regular in


every finite elementary action system M. (To prove this, one defines the automaton
A(M) in which the transition function δ is given by the formula: δ(u, A) := {w ∈ W :
u A, R w}; the proof then goes as in Theorem 1.7.8.)
If Φ ☞iR Ψ is non-empty, where i = 1, 2, then it is performable in every state
u ∈ Φ. (If Φ ☞2R Ψ ≠ ∅, we can say more: for every pair u ∈ Φ, w ∈ Ψ there exists
a realizable performance (u 0 , . . . , u n ) of Φ ☞2R Ψ such that u 0 = u and u n = w.)
For any Φ ⊆ W and B ∈ CA we also define the following propositions.

[B]Φ := {u ∈ W : (∀w ∈ W )(Res B(u, w) implies w ∈ Φ)},

⟨B⟩Φ := {u ∈ W : (∃w ∈ W )(Res B(u, w) and w ∈ Φ)}.

[B]Φ is the set of all states u such that every realizable performance of B in u leads
the system to a state belonging to Φ. Analogously, ⟨B⟩Φ is the set of all states u such
that some realizable performance of B in the state u carries the system to a Φ-state.
To each pair B, C of compound actions in CA the proposition B ⇒ R C is
assigned, where
   
B ⇒ R C := {u ∈ W : (∀w ∈ W )(Res B(u, w) implies Res C(u, w))}.

The proposition B ⇒ R C expresses the fact that the resultant relation of B is a


subrelation of the resultant relation of C. Indeed, B ⇒ R C = W if and only if
Res B ⊆ Res C.
We also define the proposition B ⇔ R C, where

B ⇔ R C := (B ⇒ R C) ∩ (C ⇒ R B).

The proposition B ⇔ R C expresses the fact that the compound actions B and C are
equivalent in the sense that their resultant relations are equal. Indeed, B ⇔ R C = W
if and only if Res B = Res C.
We close this section with the definition of the δ-operator. This operator, examined
by Segerberg (1989) in a somewhat different context, assigns to each proposition
Φ ⊆ W a certain (compound) action δΦ for which he suggested the reading “the
bringing about of Φ” or, more briefly, “doing Φ.” This action should be thought of
as the maximally reliable way of bringing about the state-of-affairs represented by
Φ. The δ-operator is a primitive notion in Segerberg's approach. Here it can be
defined as follows:

δΦ := {A1 . . . An ∈ A∗ : C Dom(A1 ◦ · · · ◦ An ) ⊆ Φ}.

(C Dom(P) is the co-domain (the range) of the relation P, C Dom(P) := {w ∈ W :


(∃u ∈ W )(u, w) ∈ P}.) Note that the action δΦ may be empty.

Denoting by [δΦ]Ψ the proposition


   
{u ∈ W : (∀w ∈ W ) ((u, w) ∈ Res δΦ ⇒ w ∈ Ψ )}

(i.e., [δΦ]Ψ = [Res δΦ]Ψ ), we see that δ satisfies Segerberg’s conditions:

[δΦ]Φ = W and [δΦ]Ψ ∩ [δΨ ]Λ ⊆ [δΦ]Λ.

1.8 Programs and Actions

Definition 1.8.1 Let M = (W, R, A) be an elementary action system. A (finitistic)


program P over M is any set of possible operations of M; that is, P is a set of finite
sequences
u 1 , A1 , u 2 , A2 , u 3 , . . . , u m , Am , u m+1 , (1.8.1)

where u 1 , u 2 , . . . , u m , u m+1 is a nonempty sequence of states of W and A1 , A2 , . . . ,


Am is a sequence of atomic actions of A such that (u i , u i+1 ) ∈ Ai for i =
1, 2, . . . , m.
As is customary, the sequence (1.8.1) is written down in a more compact form as

u 1 A1 u 2 A2 u 3 . . . u m Am u m+1 . (1.8.2)

This notation indicates that (u i , u i+1 ) ∈ Ai for i = 1, 2, . . . , m.


Any sequence (1.8.1) belonging to P is interchangeably called a run, an execution
or an operation of the program P. 

Definition 1.8.1 provides a semantic concept of a program. In logic, there is a clear


dividing line between the syntactic notion of a propositional formula and the semantic
notion of a proposition. In computer science, programs are viewed as syntactic enti-
ties. Here programs are treated semantically; the semantic counterpart of a program
is constituted by the set of all possible runs (executions) of a syntactically-defined
program in a definite space of states. We shall use the word program in both
meanings, leaving the disambiguation to context.

Definition 1.8.2 Let M = (W, R, A) be an elementary action system.


If A ∈ A is an atomic action, then by the program corresponding to A we shall
understand the set Pr (A) of all triples u 1 , A, u 2 with (u 1 , u 2 ) ∈ A.
In many contexts the program Pr (A) will be simply identified with the action A
itself.
If A is a compound action over M, then the program Pr (A) corresponding to A
is the set of all sequences (1.8.1) such that A1 , A2 , . . . , Am ∈ A.
Each sequence (1.8.1) belonging to Pr (A) is called a possible run or an execution
of the action A. 

From the set-theoretic viewpoint, possible executions of a compound action A are


not the same objects as possible performances of A, because the latter are sequences
of states u 1 , u 2 , u 3 , . . . , u m , u m+1 such that u 1 A1 u 2 A2 u 3 . . . u m Am u m+1 for some
sequence A1 , A2 , A3 , . . . , Am ∈ A.
According to the above convention, Pr (A) is identified with the set of all
sequences (1.8.2) for which A1 , A2 , . . . , Am ∈ A.

Two Examples
(A). Let A be an atomic action of M = (W, R, A) and Φ a subset of W . The
program “while Φ do A” defines the way the action A is to be performed: whenever
the system remains in a state belonging to Φ, iterate performing the action A until
the system moves outside Φ. (It is assumed here for simplicity that A is atomic; the
definition of “while – do” programs also makes sense for composite actions.)
The program “while Φ do A” is therefore identified with the set of all sequences of
the form
u 0 A u 1 A u 2 . . . u m A u m+1 (1.8.3)

such that the states u 0 , . . . , u m belong to Φ and u m+1 ∉ Φ.


The above program also comprises infinite runs

u 0 A u 1 A u 2 . . . u m A u m+1 . . . (1.8.4)

such that u i ∈ Φ for all i ≥ 0. Each sequence (1.8.4) is called a divergent run of the
program “while Φ do A” in the space W .
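The finite runs of such a program can be generated mechanically when A is given as a finite set of pairs; the sketch below (illustrative names, runs recorded as state sequences only, divergent runs simply cut off at a bound) is one way to read the definition.

```python
# Sketch (names are mine): runs of "while Phi do A" starting at a given state,
# recorded as sequences of states, up to a bound on the number of A-steps.

def while_runs(Phi, A, start, max_steps):
    complete, frontier = [], [[start]]
    for _ in range(max_steps):
        next_frontier = []
        for run in frontier:
            u = run[-1]
            if u not in Phi:                 # the guard fails: the run is finished
                complete.append(run)
                continue
            for (v, w) in A:                 # iterate A once more
                if v == u:
                    next_frontier.append(run + [w])
        frontier = next_frontier
    return complete

Phi = {0, 1, 2}
A = {(0, 1), (1, 2), (2, 3)}
print(while_runs(Phi, A, start=0, max_steps=10))   # [[0, 1, 2, 3]]
```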
(B). Let Φ and Ψ be subsets of W and A a compound action over M = (W, R, A).
A Hoare program {Φ}A{Ψ } for A is the set of all runs u 1 A1 u 2 A2 u 3 . . . u m Am u m+1
of A such that u 1 ∈ Φ and u m+1 ∈ Ψ .
Φ is named the precondition and Ψ the postcondition: when the precondition is
met, the action A establishes the postcondition.
If A is an atomic action, then {Φ}A{Ψ } is defined as the set of all runs u 1 A u 2 of
A with u 1 ∈ Φ and u 2 ∈ Ψ . 
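Read extensionally, a Hoare program is just a filter on runs; a short sketch (illustrative names, runs as state sequences) makes this explicit.

```python
# Sketch (names are mine): keep exactly those runs whose first state satisfies
# the precondition Phi and whose last state satisfies the postcondition Psi.

def hoare_program(runs, Phi, Psi):
    return [run for run in runs if run[0] in Phi and run[-1] in Psi]

runs = [[0, 1, 2], [1, 2, 3], [0, 2]]
print(hoare_program(runs, Phi={0}, Psi={2}))   # [[0, 1, 2], [0, 2]]
```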
Example 1.8.3 Collatz’s Action System.
This example stems from number theory. We start with an arbitrary but fixed
natural number n. If n is even, we divide it by 2 and get n/2. If n is odd, we multiply
n by 3 and add 1 to obtain the even number 3n + 1. Then repeat the procedure
indefinitely. The Collatz conjecture says that no matter what number n we start with,
we will always eventually reach 1.
The Collatz conjecture is an unsolved problem in number theory, named after
Lothar Collatz, who proposed it in 1937.
In the framework of our theory, we may reformulate the Collatz conjecture in an
equivalent way as follows.
If n ≥ 1 is a natural number, then, by the Fundamental Theorem of Arithmetic,
it has a unique representation as a product of powers of consecutive primes, n =
2^α 3^β 5^γ 7^δ · · · , where all but finitely many exponents α, β, γ , δ, . . . are equal to 0.

Let N+ be the set of positive integers {1, 2, 3, . . .}. We define two atomic actions
A and B on N+ . For m, n ∈ N+ we put:

A(m, n) ⇔df m = 2^α 3^β 5^γ 7^δ · · · , with α ≥ 1, and n = 3^β 5^γ 7^δ · · · (= 2^0 3^β 5^γ 7^δ · · ·).

Thus, A(m, n) if and only if m is even and n is the (odd) number being the result of
dividing m by the largest possible power of 2.

B(m, n) ⇔df m = 3^β 5^γ 7^δ · · · and n = 3m + 1.

In other words, B(m, n) if and only if m is odd and n is the (even) number obtained
by multiplying m by 3 and then adding 1. The prime number representation of n may
differ much from that of m but it always contains 2 with a positive exponent.
We put R := A ∪ B. Then M := (N+ , R, {A, B}) is a normal and deterministic
elementary action system.
The compound action A := {AB}+ ∪ {B A}+ is called here the Collatz action
and the system M itself is referred to as the Collatz system. A is performable in
every state u of the system. In fact, if u is even, then every finite string of the form
AB AB AB . . . AB is performable in u. If u is odd, then every finite string of the
form B AB AB A . . . B A is performable in u. Thus, every state u gives rise to only
one path of states beginning with u and being a possible performance of A.
The program Pr (A) corresponding to A is the union of two sets of runs:

u 1 A u 2 B u 3 A u 4 B . . . u 2m−1 A u 2m B u 2m+1 , (1.8.5)

where u 1 is an even number, and

u 1 B u 2 A u 3 B u 4 A . . . u 2m−1 B u 2m A u 2m+1 , (1.8.6)

where u 1 is odd, and m ≥ 1.


Since A and B are deterministic actions, for each even number u and each m there
exists exactly one run (1.8.5) such that u = u 1 . Similarly, for each odd u and each
m there exists exactly one run (1.8.6) such that u = u 1 .
The Collatz Conjecture formulated in the framework of the action system M states
that no matter what number u 1 is taken as the beginning of run (1.8.5) or (1.8.6) of
Pr (A), the run will always eventually reach 1 for sufficiently large m. In other words,
for every u there exists a (sufficiently big) natural number m such that the (unique)
run of Pr (A) with u 1 = u terminates with 1, i.e., u 2m+1 = 1. 
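The two atomic actions of the Collatz system are deterministic, so the unique run of Pr (A) beginning with a given number is easy to compute; the sketch below (illustrative function names, not from the text) generates it, cutting the run off once 1 is reached or a step bound is exceeded.

```python
# Sketch (names are mine) of the two atomic Collatz actions as partial functions
# on the positive integers, and of the unique run starting from a given number.

def action_A(m):
    """Defined for even m only: divide out the largest possible power of 2."""
    assert m % 2 == 0
    while m % 2 == 0:
        m //= 2
    return m

def action_B(m):
    """Defined for odd m only: multiply by 3 and add 1."""
    assert m % 2 == 1
    return 3 * m + 1

def collatz_run(u, max_steps):
    """The unique alternating run beginning with u, cut off at 1 or at the bound."""
    run = [u]
    for _ in range(max_steps):
        u = action_A(u) if u % 2 == 0 else action_B(u)
        run.append(u)
        if u == 1:
            break
    return run

print(collatz_run(7, 30))   # [7, 22, 11, 34, 17, 52, 13, 40, 5, 16, 1]
```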
Chapter 2
Situational Action Systems

Abstract Situational aspects of action are discussed. The presented approach


emphasizes the role of situational contexts in which actions are performed. These
contexts influence the course of an action; they are determined not only by the current
state of the system but also shaped by other factors such as time, the previously undertaken
actions and their succession, the agents of actions and so on. The distinction between
states and situations is explored from the perspective of action systems. The notion
of a situational action system is introduced and its theory is expounded. Numerous
examples illustrate the reach of the theory.

In this chapter, structures more complex than elementary action systems are investi-
gated. These are situational action systems. Before presenting, in a systematic way,
the theory of these structures, we shall illustrate the basic ideas by means of two
examples.

2.1 Examples—Two Games

2.1.1 Noughts and Crosses

Noughts and crosses, also called tic-tac-toe, is a game for two players, X and O,
who take turns marking the spaces in a 3 × 3 grid. Abstracting from the purely
combinatorial aspects of the game, we may identify the set of possible states of
the game with the set of 3 × 3 matrices, in which each of the entries is a number
from the set {0, 1, 2}. 0 marks blank, 1 and 2 mark placing Xs and Os on the board,
respectively. (It should be noted that many of the 39 positions are unreachable in the
game.)1 The rules are well known and so there is no need here to repeat them in
detail. The initial state is formed by the matrix with noughts only. The final states
are formed by matrices, in which all the squares are filled with 1s or 2s; yet there
is neither a column nor a row nor a diagonal filled solely with 1s or solely with 2s
(the state of a draw). Neither are there matrices in which the state of ‘three-in-a-row’

1 One may also encode each position by a sequence of length 9 of the digits 0, 1, 2.

is obtained; i.e., exactly one column or one row, or one diagonal filled with 1s or
2s (the victory of either of the players). The direct transition R from one state u to
w is determined by writing either a 1 or a 2 in the place of any one appearance of
a nought in matrix u. There is no transition from the final states u into any state.
(Relation R is therefore not total.) There are only two elementary actions: AX and
AO . The action AX consists in writing a 1 in a free square, whereas the action AO
consists in writing a 2 (as long as there are such possibilities). AX and AO are then
binary relations on the set of states. The above remarks define the elementary action
system M = (W, R, {AX , AO }) associated with noughts and crosses.
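One possible concrete representation of this system (illustrative only, not taken from the text) encodes a state as a 3 × 3 tuple of tuples over {0, 1, 2} and enumerates the possible performances of AX or AO in a given state.

```python
# Sketch (names are mine): 0 = blank, 1 = X, 2 = O; a performance of the action
# A_mark in a state is obtained by writing `mark` in some blank square.

def performances(state, mark):
    """All pairs (state, next_state) that are possible performances of A_mark."""
    result = []
    for i in range(3):
        for j in range(3):
            if state[i][j] == 0:
                nxt = tuple(tuple(mark if (r, c) == (i, j) else state[r][c]
                                  for c in range(3))
                            for r in range(3))
                result.append((state, nxt))
    return result

empty_board = ((0, 0, 0), (0, 0, 0), (0, 0, 0))
print(len(performances(empty_board, 1)))   # 9 possible performances of AX initially
```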
Not all the principles of the game are encoded in the elementary system M. The
fact that the actions are performed alternately is of key importance, with the
provision that the action AX is performed by the player (agent) X and AO by O. We
assume that the first move is made by X. We shall return to discuss this question after
the next example is presented.
The game is operated by two agents X and O. The statement that somebody or
something is an agent, i.e., the doer/performer of a given action, says something
which is difficult (generally) to explicate. What matters here is that we show the
intimate relations occurring between the agent and the action. If we know that the
computer is the agent of, for example, the action AX (or AO ), then this sentence
entails something more: in the wake of it there follows a structure of orders, i.e., a
certain program that the computer carries out. It is not until such a program is
established that we can speak, in a legitimate way, of the agency as the special bond
between the computer as an agent and the action being performed. When, on the other
hand, it is a person that is a player (i.e., he is the agent of one of the above-mentioned
actions), then in order to establish this bond it is sufficient, obviously, to know that
this person understands what he is doing, knows the rules of the game (which he can
communicate to us himself), and is actually taking part in the game.
The game of noughts and crosses can also be viewed from the perspective of
the theory of automata because the game has well-determined components such as:
a finite set of states, a two-element alphabet composed of atomic actions AX , AO ,
the initial state, the set of terminal states, and the indeterministic transition function
between states. In consequence, we obtain a finite automaton accepting (certain)
words of the form (AX AO )^n or (AX AO )^n AX , where n ≥ 1. However, as is easily
noticed, the above perspective is useless in view of the goals of the game. We are not
interested here in what words are accepted by the above-described automaton (since
it is well-known that these are actions AX , AO which are performed alternately),
but how the successive actions should be carried out so that the victory (or at least a
draw) is secured.
The details above concerning the game noughts and crosses are fully understood by
human beings. One can say even more: such detailed knowledge of the mathematical
representation of the game is not necessary. It suffices just to have a pencil and a sheet
of paper to start the game. Still, if one of the players is to be a computer (a defective
creature) seeing that it is not equipped with senses, the above rules are not sufficient

and must be further expanded on in the programming language of the computer. It is


not hard, though, as a competent programmer can easily write a suitable program on
his own.

2.1.2 Chess Playing

We shall use the standard algebraic notation (AN) (but in a rather rudimentary
way) for recording and describing the moves in the game of chess. Each square of
the chessboard is identified by a unique coordinate pair consisting of a letter and a
number. The vertical rows of squares (called files) from White’s left (the queenside)
to his right (the kingside) are labeled a through h. The horizontal rows of squares
(called ranks) are numbered 1 to 8 starting from White’s side of the board. Thus,
each square has a unique identification of the file letter followed by the rank number.
The Cartesian product of the two sets X := {a, b, c, d, e, f , g, h} and Y :=
{1, 2, 3, 4, 5, 6, 7, 8}, i.e., the set X × Y , is called the chessboard. The elements
of X × Y are called squares. The black squares are the elements of the set

{a, c, e, g} × {1, 3, 5, 7} ∪ {b, d, f , h} × {2, 4, 6, 8},

and the white squares are the elements of the set

{a, c, e, g} × {2, 4, 6, 8} ∪ {b, d, f , h} × {1, 3, 5, 7}.

The game is played by two players (agents): White and Black. Each of them has
16 pieces (chessmen) at his disposal. White plays with the white pieces while Black
with the black ones. Any arrangement of pieces (not necessarily all of them) on the
chessboard is a possible position. We consider only the positions where both kings
(the white and the black one) are on the chessboard. Thus, a position on the chessboard
is any non-empty injective partial function from the set White pieces ∪ Black pieces
into X × Y such that the two kings belong to its domain. (Not all possible positions
appear during the game; some never occur in any game.)
Let W be the set of all possible positions. Instead of the word ‘position’ we will
also be using the term ‘chessboard state’ or ‘configuration’. (The drawings showing
the chessboard states are customarily called diagrams.)
The relation R of the direct transition between the chessboard states is determined
by the rules of the chessmen’s movements. Thus, the two states u and w are in the
relation R, i.e., u R w, if and only if in the position u one of the players moves a
chessman of a proper colour, and w is the position just after the move. The move is
made in accordance with the rules. The position w may have as many chessmen as u,
or fewer in the case when a chessman has been taken. So, a chessman’s move
that is accompanied by the taking of an opposite-colour chessman is recognized as one
move.

The first simplification we will make is to reject the possibility of both big and small
castling. Accommodating castling is hampered by certain conditions
which will not be analyzed here. The second simplification is to omit the replace-
ment of pawns with chessmen on the change line. (A pawn, having moved across the
whole board to the change line, must be exchanged for one of the following chessmen: a
queen, a rook, a bishop or a knight from the set of spare chessmen.)
The transition u R w can be made by the White or Black player. In a fixed position
u, {w ∈ W : u R w} is then the set of all possible configurations on the chessboard
which one arrives at by any side’s move undertaken in the state u. The relation R
is the join of two relations: R Black and R White , where R White (u, w) takes place if
and only if in the position u the White player moves, according to the regulations, a
white piece which is in the configuration u, and w is the position just following the
move. The relation R Black is defined analogously. Thus, R = R White ∪ R Black . For a
given configuration u, the set of all pairs (u, w) such that R White (u, w) takes place
(R Black (u, w) takes place, respectively) can be called the set of all moves in the state
u admissible for the White player (admissible for the Black player, respectively).
We adopt here a restrictive definition of the direct transition relation between
configurations: in checking situations only the transitions resulting in releasing the
opponent’s king (if such exist) are admissible. Checking puts the king under the threat
of death. When the king is threatened, the danger must be averted by the king’s being
moved to a square not in any line of attack of any of the opponent’s pieces. In other
words, in any configuration u in which the king of a given colour is checked, only
the transitions from u to states w in which the king is released from check are
admissible. So, if u is a configuration where, for example, the white king is checked,
then R White (u, w) takes place if and only if there exists a white piece’s move from
u to the position w and the white king is not checked in the position w, i.e., it is not
in a position where it could be taken by a black piece. The transition R Black (u, w) is
similarly defined in the position u, where the black king is checked. Thus, if one of
the kings is checked in the state u, then u R w must be a move that releases the king
from check.
There are various ways of selecting atomic actions in the game of chess. One
option is to assign to each piece a certain atomic action. Thus, there are as many
atomic actions as there are pieces. Each player can thus perform 16 actions with
particular pieces. The above division of atomic actions is based on the tacit assump-
tion that pieces retain their individuality in the course of the game. We will be more
parsimonious and distinguish only six atomic actions to be performed by each player.
Consequently, the atomic actions we shall distinguish refer to the types of pieces,
and not to individual pieces. According to AN, each type of piece (other than pawns)
is identified by an uppercase letter. English-speaking players use K for king, Q for
queen, R for rook, B for bishop, and N for knight (since K is already used). Pawns
are not indicated by a letter, but rather by the absence of any letter. This is due to
the fact that it is not necessary to distinguish between pawns for moves, since only
one pawn can move to a given square. (Pawn captures are an exception and indicated
differently.)

Here is the list of atomic actions performed by the White player:

K White , QWhite , RWhite , BWhite , N White , P White .

K White is the action any performance of which is a single move of the white king,
QWhite are the white queen’s moves, RWhite are the white rooks’ moves, BWhite the
bishops’ moves, N White the knights’ moves, and P White the action any performance
of which is a move of an arbitrary white pawn on the chessboard.
A similar division of pieces and atomic actions is also adopted for the Black
player:
K Black , QBlack , RBlack , BBlack , N Black , P Black .

A move with a piece includes taking the opponent’s piece when performing this
move.
Actions are conceived of extensionally. A given atomic action is identified with
the set of its possible performances. Thus, if A ∈ {K White , QWhite , RWhite , BWhite ,
N White , P White }, e.g. A = K White , and u, w ∈ W , then A(u, w) takes place if and only
if in the position u the White player moves the king (according to the movement
regulations for the king), and w is the chessboard position just after the move.
(According to the algebraic notation, each move of a piece is indicated by the piece’s
uppercase letter, plus the coordinate of the destination square. For example, Be5
means ‘move a bishop to e5’, and c5 means ‘move a pawn to c5’—no letter in the
case of pawn moves, remember.)
The system
  
(W, R, {K White , QWhite , RWhite , BWhite , N White , P White } ∪          (2.1.1)
{K Black , QBlack , RBlack , BBlack , N Black , P Black })

where R = R White ∪ R Black is an elementary action system operated by two agents


White and Black. The system is not normal. The reason lies in the fact that in the
positions u in which the king of a given colour is checked, e.g. the white one, the set of
all possible performances of the actions K White , QWhite , RWhite , BWhite , N White , P White
in the state u is, in general, larger than the set of admissible transitions from the state u
to others, in which the king is no longer under threat. In other words, R White is a proper
subset of the union of the relations K White ∪ QWhite ∪ RWhite ∪ BWhite ∪ N White ∪ P White .
Analogously, R Black is a proper subset of K Black ∪ QBlack ∪ RBlack ∪ BBlack ∪ N Black ∪
P Black . Thus, the actions from the above list are not totally performable in some
states.
The system (2.1.1) is complete. Although certain configurations u occur in none
of the chess games, each direct transition u R w is made by means of a certain atomic
action from the above list, i.e., there exists an A such that u A, Rw. From this remark
the completeness of the system follows.

u 0 is the initial configuration accepted in any chess game, i.e., the arrangement
of pieces on the chessboard at the start of each game. The proposition Φ0 := {u 0 } is
thus the initial condition of each game.
In the game of chess, a few types of final conditions are distinguished. The two
most important are those in which the black king is checkmated and those in which
the white king is checkmated.
Let Ψ0Black be the set of all possible positions u on the chessboard in which the
black king is checked by some white piece. The proposition Ψ0Black expresses the fact
that the black king is checked (but not the fact that the next move must be performed
by the Black player). Analogously, the set Ψ0White is defined as the set of positions u
in which the white king is checked.
The king is checkmated when he is in a configuration in which he is checked and
when there is no way of escape nor can he in any other way be protected against
the threat of being taken. Checkmate results in the end of the game.
So, the checkmate of the black king is the set Ψ Black of all positions u in which
the black king is checked and none of the actions K Black , QBlack , RBlack , BBlack ,
N Black , P Black can be undertaken in order to protect the black king. (As noted above,
we are omitting here the possibility of castling as a way to escape checkmate.) This
fact can be simply expressed in terms of the relation R Black as follows:

Ψ Black := {u ∈ Ψ0Black : δ_R^Black (u) = ∅}.

(δ_R^Black is the graph of the relation R Black .)
In the analogous way we define the proposition Ψ White which expresses the check-
mate of the white king:

Ψ White := {u ∈ Ψ0White : δ_R^White (u) = ∅}.

Thus, in the game of chess, we distinguish two tasks: (Φ0 , Ψ Black ) and (Φ0 , Ψ White ).
The task (Φ0 , Ψ Black ) is taken by the White player. He aims to checkmate the black
king. The states belonging to the proposition Ψ Black define then the goal the White
player wants to achieve. Analogously, (Φ0 , Ψ White ) is the task for the Black player.
His goal is to checkmate the white king.
A stalemate is a position in which one of the players, whose turn comes, cannot
move either the king or any other chessman; at the same time the king is not checked.
When there is a stalemate on the chessboard, the game ends in a draw.
Let ΛBlack denote the set of stalemate positions of the black king, i.e.,

ΛBlack := {u ∈ W : u ∉ Ψ0Black and δ_R^Black (u) = ∅}.

Similarly, ΛWhite denotes the set of stalemate positions of the white king:

ΛWhite := {u ∈ W : u ∉ Ψ0White and δ_R^White (u) = ∅}.

If during the game the position u ∈ ΛBlack is reached and the Black player is to
make a move in this position, the game ends in a draw; similarly for when u ∈ ΛWhite
and the White player is to move.
The stalemate positions do not exhaust the set of configurations which give rise
to the game ending. A game ends in a draw when both players know that there exist
possibilities for continuing the game indefinitely which do not lead to positions of
Ψ Black ∪ Ψ White . In practice, a purely pragmatic criterion is applied which says that
the game is drawn after the same moves have been repeated in sequence three times;
the game is also limited by the rule that in 50 moves a pawn must have been moved.
In spite of the simplifications we have made, we have not explored
all the rules of the game. We know what actions the players perform but we have
not added that they can only do them alternately. The same player is not allowed
to make two or more moves in succession (except in the case of castling). Moreover,
the game is started only by the White player in the strictly defined initial position.
These facts indicate that the description of the game of chess, expressed by means
of formula (2.1.1), is oversimplified and it does not adequately reflect the real course
of the game. We shall return to this issue in the next paragraph.
The above situational description of the game of chess does not have, obviously,
any great value as regards its usability. For obvious reasons neither the best of
players nor any computer is able to adopt a winning strategy by searching all
possible configurations on the chessboard that are continuations of the given situation.
The phenomenon of combinatorial explosion blocks further calculations (we get very
large numbers, even for a small number of steps). At the time of writing, the best
computer can search at the most three moves ahead in each successive configuration
(a move is one made by White and the successive move by Black). It is in this fact,
as well as the rate at which computers operate, that the advantage the computer has
over the human manifests itself. One obvious piece of advice and the most reasonable
suggestion offered to the human player is to select a strategy based on his
experience as a chess-player and the knowledge gathered over the centuries, where
this includes a description of the finite set (larger but not too large—within the limits
of what a human brain can remember) of particularly significant games.

2.2 Actions and Situations

The above examples reveal the significant role of situational contexts in which action
systems function. These contexts that influence the course of an action are not deter-
mined just by the current state of the action system but also by other factors. It is
often difficult or even not feasible to specify them all. Moving a black piece to an
empty square may be allowed by the relation R Black if only the current arrangement
of chessmen is taken into account; the move however will not be permitted when
the previous move was also made by the Black player, for two successive moves by
the same player do not comply with the rules of chess. A similar remark applies
to tic-tac-toe.

The above examples also show that the performability of an action in a particular
state u of the system may depend on the previously undertaken actions, their suc-
cession, and so on. In the light of the above remarks a distinction should be made
between the notion of a state of the system and the situation of the system. In this
chapter this distinction is explored from the perspective of action systems.
The set W of possible states, the relation R of a direct transition between states
and the family A of atomic actions, fully determine the possible transformations of
the atomic system, i.e., the transformations (carried out by the agents) that move the
system from one state to others. The current situation of the system is in general
shaped by a greater number of factors. A given situation is determined by the data
from the surroundings of the system. The current state of the system is one of the
elements constituting the situation but, of course, not the most important. The agents
are involved in a network of mutual relations. The situation in which the agents act
may depend on certain principles of cooperation (or hostility) as well. Acceptance of
certain norms of action is an example of such interdependencies. Thus, each state of
the system is immersed in certain wider situational contexts. The context is specified,
among other things, by factors not directly bound up with the state of the system, such as: the
moment when a particular action is undertaken, the place where the system or its part
is located at the moment the action is started or completed, the previously performed
action (or actions) and their agents, the strategies available to the agents, etc. Not all
of the mentioned factors are needed to make up a given situation—it depends on the
depth of the system description and the principles of its functioning. These elements
constitute the situational setting of a given state.
In a game of chess the relation R of direct transition between configurations on
the chessboard defined in Sect. 2.1 as the union R = R White ∪ R Black is insen-
sitive to the situational context the players are involved in. The atomic actions
K White , QWhite , RWhite , BWhite , N White , P White or K Black , QBlack , RBlack , BBlack ,
N Black , P Black transform some configurations into others. The above purely exten-
sional, ‘input—output’ description of atomic actions abstracts from some situational
factors which influence the course of the game. The performability of an atomic
action depends here exclusively on the state of the system in which the action is
undertaken (Definition 1.4.1). Thus performability is viewed here as a context-free
property, devoid of many situational aspects which may be relevant in the description
of an action.
We shall outline here a situational concept of action performability, according to
which the fact that an action is performable in a given state of the system depends
not only on the state and the relation R of direct transition between states but also
on certain external factors—the situational context the system is set in.
Performing an action changes the state of the system and at the same time it
creates a new situation. A move made by a player in a game of chess changes the
arrangement of chessmen on the board. It also changes the player’s situation: the
next move will be made by his opponent unless the game is finished. It does not
mean that the notion of an action should be revised—as before we identify (atomic)
actions with binary relations on the set W of possible states of the system. We shall,
however, extend the notion of an elementary action system by enlarging it with new

components: the set S of possible situations, the relation Tr of a direct transition


between possible situations, and a map f which to every possible situation s assigns
a state f (s) ∈ W . The state f (s) is a part of the situation s—if s occurs then f (s) is
the state of the system corresponding to s. These components enable one to articulate
a new, ‘situational’ definition of the performability of an action. Thus, we will speak
of action performability in a given state of the system with respect to a definite
situational context, or, in short, the performability of an action in a given situation.
Situations are not investigated here in depth as a separate category. That is, we
shall not investigate and develop an ontology of situations but rather limit ourselves
to some general comments. The focus is on illustrating the role of situations
in action theory in various contexts rather than on outlining a general account of structured
situations. The notion of a situation we shall use here is built out of elements taken
from automata theory, theory of algorithms, games and even physics and does not
fully agree with the notions that occur in the literature.
We shall present here a simplified, ‘labeled’ theory of situations. This theory is
convergent with the early formal approaches to logical pragmatics (see e.g. Montague
1970; Scott 1970). On the other hand, this concept is based on some ideas which
directly stem from the theory of algorithms.
The ontology of actions adopted in this book shows similarities with the situation
calculus applied in logic programming. The situation calculus is a formalism designed
for representing and reasoning about dynamic entities (McCarthy 1963; Reiter 1991).
It is based on three key ingredients: the actions that can be performed in the world, the
situations, and the fluents that describe the state of the world. In Reiter’s approach
situations are finite sequences of actions. Here, situations are isolated and form a
separate ontological category.
Let S be a set, whose elements will be called possible (or conceivable) situa-
tions.2 Each situation s ∈ S is determined by a system of factors. Their specification
depends on the ways of organization and functioning of the action system. A possible
situation can include the following components: the state of the system, the location
of the system (or its distinguished parts), the agent of each atomic action, the action
currently performed on the system, the previously performed action and its agent,
and so on. We can roughly characterize any situation s as a sequence of entities

s = (w, t, x, . . .), (2.2.1)

where w ∈ W is a possible state of the system, t—time, x—location of the system, etc.
To each possible situation (2.2.1) a unique state w is assigned—the first component
of the above sequence—which is called the state of the system in the situation s. The
values of all the other parameters (and strictly speaking the names of the values of

2 This sentence is a convenient conceptual metaphor. Situations are constituents of the real world.
In this chapter situations are viewed as mathematical (or rather set-theoretical) representations of
these constituents. We therefore speak of sets of conceivable situations.
Of course, any representation of the world of situations and actions does not involve mentioning
all of them individually but it rather classifies them as uniform sorts or types. But token situations
or actions can be individuated, such as the stabbing of Julius Caesar.

these parameters) defining a given situation form a subsequence of s which is called


the label of the situation s. In the case of situation (2.2.1), its label is equal to the
sequence (t, x, . . .). Thus, each possible situation s can be represented by an ordered
pair
s = (w, a),

where w ∈ W is a state of the system and a is a label.


The labels are assumed to form a set which is denoted by V . (It is not assumed
that the set V is finite.) Thus, the set S of possible situations is equal to the Cartesian
product W × V , i.e.,
S := W × V.

Apart from the set S there is a transition relation Tr between situations. If


Tr(s1 , s2 ) holds, then s2 is the situation immediately occurring after the situation
s1 ; we also say that the situation s1 directly turns into the situation s2 . The fact that
Tr(s1 , s2 ) holds does not prejudge the occurrence of the situation s1 .
The above remarks enable us to articulate the definition of a situational action
system.
Definition 2.2.1 A situational action system is a six-tuple

M s := (W, R, A, S, Tr, f ),

where
(1) the reduct M := (W, R, A) is an elementary action system in the sense of
Definition 1.2.1;
(2) S is a non-empty set called the set of possible situations the action system M is
set in. The set S is also called the situational envelope of the action system M;
(3) Tr is a binary relation on S, called the direct transition relation between possible
situations;
(4) f : S → W is a mapping which to each situation s ∈ S assigns a state f (s) ∈ W
of the action system M. f (s) is called the state of the action system M corre-
sponding to the situation s, or simply, the state of the system in the situation s.
It is therefore unique, for each situation s;
(5) the relation R of direct transition between states of the action system M =
(W, R, A) is compatible with Tr, i.e., for every pair s1 , s2 ∈ S of situations, if
s1 Tr s2 then f (s1 ) R f (s2 ). 
The changes in the situational envelope take place according to the relation Tr.
Condition (5) says that the evolution of situations in the envelope is compatible with
transformations between the states of the system M, defined by the relation R. (In the
‘labeled’ setting of situations, that is, when S = W × V , the function f is defined as
the projection of W × V onto W , i.e., f (s) = w, for any situation s = (w, a) ∈ S.)
It follows from (5) that the set
   
{(u, w) ∈ W × W : (∃s, t ∈ S)(s Tr t & f (s) = u & f (t) = w)}

is contained in the relation R. The two sets need not be equal. In other words, it is not
postulated that for any s1 ∈ S, w ∈ W the condition f (s1 ) R w implies that s1 Tr s2
for some s2 ∈ S such that w = f (s2 ). (For example, this implication fails to hold in
the situational model of chess playing defined below.) This shows that, in general,
the relation R ⊆ W × W cannot be eliminated from the description of situational
action systems and replaced by the relation Tr.
In the simplest case, possible situations of S are identified with states of the action
system M = (W, R, A), that is, S = W (the label of each situation is the empty
sequence), f is the identity map, and Tr = R, and the situational action system M s
reduces to the elementary system M.
Definition 2.2.1 is illustrated by means of so-called iterative action systems; the
latter form a subclass of situational systems. Iterative action systems function accord-
ing to simple algorithms that define the order in which particular atomic actions are
performed. A general scheme which defines these systems can be briefly described
in the following way. Let M = (W, R, A) be an elementary action system. It is
assumed that S, the totality of possible situations, is equal to the Cartesian product
W × V , where V is a fixed set of labels. The function f that assigns to each situation
s the corresponding state of the system M is the projection from W × V onto W . In
order to define the relation Tr of a direct transition between situations we proceed as
follows. An action performed in a definite situation changes the state of the system
M and also results in a change of the situational context. The triples

(a, A, b), (2.2.2)

where a and b are labels in V and A ∈ A is an atomic action, are called labeled
actions. The action (2.2.2) has the following interpretation: the atomic action A
performed in a situation with label a changes the state of the system M in such a
way that the situation just after performing this action is labeled by b. Thus, (a, A, b)
leads from situations labeled by a to ones with label b while the states of M transform
themselves according to the action A and the relation R.
A non-empty set LA of labeled actions is singled out. LA need not contain all the
triples (2.2.2) nor need it be a finite set. We say that a labeled action (a, A, b) ∈ LA
transforms a situation s1 into s2 if and only if s1 = (u, a), s2 = (w, b) and, moreover,
it is the case that u A, R w.
The set LA makes it possible to define the transition relation Tr(LA) between
situations. Let s1 = (u, a), s2 = (w, b) ∈ S. Then we put:
 
(s1 , s2 ) ∈ Tr(LA) if and only if (∃A ∈ A)((a, A, b) ∈ LA & u A, R w).        (2.2.3)

It follows from (2.2.3) that (s1 , s2 ) ∈ Tr(LA) implies that f (s1 ) R f (s2 ), i.e., R is
compatible with Tr and that the conditions (1)–(5) of Definition 2.2.1 are met. The
six-tuple
(W, R, A, S, Tr(LA), f )

is a situational action system.
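Formula (2.2.3) lends itself to a direct computation when everything is finite; the sketch below (illustrative names, not from the text) builds Tr(LA) from a list of labeled actions, each atomic action being a set of pairs of states.

```python
# Sketch (names are mine): LA is a list of labeled actions (a, A, b), where A is
# a finite set of pairs of states; situations are pairs (state, label).

def tr_from_la(LA, R):
    """Formula (2.2.3): ((u, a), (w, b)) is a transition iff some (a, A, b) in LA
    satisfies (u, w) in A and (u, w) in R."""
    Tr = set()
    for (a, A, b) in LA:
        for (u, w) in A & R:
            Tr.add(((u, a), (w, b)))
    return Tr

# A tiny alternation scheme in the spirit of tic-tac-toe labelling: after an
# 'X'-labelled situation comes an 'O'-labelled one, and vice versa.
A = {(0, 1), (1, 0)}
R = {(0, 1), (1, 0)}
LA = [('X', A, 'O'), ('O', A, 'X')]
for pair in sorted(tr_from_la(LA, R)):
    print(pair)
```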



A finite run of situations is any finite sequence

(s0 , s1 , . . . , sn )

of situations such that (si , si+1 ) ∈ Tr(LA) for every i, 0 ≤ i ≤ n − 1. The situation
s0 is called the beginning of the run.
We may also consider infinite (or divergent) runs of situations. These are
infinite sequences of situations

(s0 , s1 , . . . , sn , . . .)

such that (si , si+1 ) ∈ Tr(LA), for all i. The problem of divergent runs, interesting
from the viewpoint of computability theory, is not discussed in this book.
In tic-tac-toe two labels are distinguished: X and O. Thus, V := {X, O}. A possible
situation is a pair s = (w, a), where a ∈ V and w ∈ W is a configuration on the
board (each configuration being identified with an appropriate 3 × 3 matrix). The
pair s = (w, X) is read: w is the current configuration on the board and X is to make a
move. One interprets the pair (w, O) in a similar way.
The set of labeled actions has two elements: (X, AX , O) and (O, AO , X).3 The
action (X, AX , O) transforms a situation of the form s1 = (u, X) into a situation
s2 = (w, O). The transformation from s1 to s2 is accomplished by performing the
action AX by X. This action moves the system from the state u to w. The labeled
action (O, AO , X) is read in a similar way.
The second example is similarly reconstructed as a situational action system.
In a game of chess we distinguish two labels, BLACK and WHITE, that is V :=
{BLACK, WHITE}. A possible chess situation is thus any pair of the form s =
(w, a), where a ∈ V and w ∈ W is a configuration on the chessboard. The pair
s = (w, WHITE) is interpreted as follows: w is the current configuration on the
chessboard and White is to make a move. The pair (w, BLACK) is read analogously.
The set LA of labeled atomic actions consists of the following triples:

(WHITE, AWhite , BLACK) (2.2.4)

where AWhite ∈ {KWhite , QWhite , RWhite , BWhite , NWhite , PWhite } and

(BLACK, ABlack , WHITE) (2.2.5)

where ABlack ∈ {KBlack , QBlack , RBlack , BBlack , NBlack , PBlack }.


The labeled action (2.2.4) thus transforms situations of the type s1 = (u, WHITE)
into situations s2 = (w, BLACK). The transformation is accomplished by

3 Indeed, instead of the triples (X, AX , O) and (O, AO , X) it suffices to take the pairs (AX , O) and
(AO , X), since the actions AX and AO are already labeled by X and O, respectively (that is, they
possess their own names). Defining labeled actions as triples is justified by the fact that elements of
the set of atomic actions in elementary action systems are not basically linguistically distinguished
through investing them with special names.

performing in the state u an atomic action by White so that w is the configura-


tion of the chessboard just after the move. The labeled action (2.2.5) is interpreted
in a similar way.
Having defined the set LA, we then define the relation Tr(LA) according to formula
(2.2.3). The function
 f : S → W is defined, as expected, as the projection of S onto
W : f (w, a) := w, for all (w, a) ∈ S.
There is only one initial situation s0 := (u 0 , WHITE), where u 0 is the configura-
tion at the outset of the game. Thus, a game of chess is a finite sequence of situations
(s0 , s1 , . . . , sn ), n ≥ 0, such that s0 = (u 0 , WHITE) is the initial situation and
(si , si+1 ) ∈ Tr(LA), for all i, 0 ≤ i ≤ n − 1. It follows from the definition of Tr(LA)
that successive moves are alternately performed by the players Black and White. The
transition from s0 to s1 is accomplished by White who starts the game. In turn, the
transition from s1 to s2 is made by Black, and so on. Thus, a game of chess can be
represented as an iterative action system, taking into account the simplifications we
have made.
A few words about terminal situations. In the light of the above definitions, a
game of chess may be an arbitrarily long sequence (s0 , . . . , sn ) of situations starting with
the initial situation. A game thus defined need not respect the rules of chess. First,
there exist time limitations that do not permit a game to go on for too long. (They can
be inserted into the scheme of situation presented here by adjoining an additional
time parameter.) However, there is a more important reason. A game of chess has to
finish in the case of checkmating the adversary. The above definition of a game does
not take this factor into consideration. To resolve the matter we will define a certain
subclass of the class of possible situations. First, we will extend the set of labels with
a new label ω, which will be called the terminal label. Thus

V := {BLACK, WHITE, ω}.

We also say that


S := W × V .

A possible terminal situation is any situation of the form (w, ω), where w is
a configuration belonging to the set Ψ Black ∪ Ψ White , i.e., w is a configuration in
which either the black or the white king is checkmated.
The set LA of labeled atomic actions defined by means of the formulas (2.2.4) and
(2.2.5) is also extended to the set LA′ by augmenting it with the following actions:

(WHITE, AWhite , ω) (2.2.6)

where AWhite is an atomic action assigned to White, and

(BLACK, ABlack , ω) (2.2.7)

where ABlack is an atomic action of Black.



Note One should carefully distinguish between linguistic interpretations of a game


and situational runs of a game. Here ω, BLACK, WHITE are linguistic entities while
AWhite and ABlack are binary relations on the set of states. 

Let Tr(LA′) be the transition relation between situations of S determined by


the labeled actions of LA′. As is easy to check, the terminal situations defined as
above are indeed terminal—if s is terminal then there exists no situation s′ such that
Tr(LA′)(s, s′). The notion of a game of chess is defined as above; that is, as
a finite sequence of situations (s0 , s1 , . . . , sn ), n ≥ 0, such that s0 = (u 0 , WHITE)
is the initial situation and (si , si+1 ) ∈ Tr(LA′), for all i. A game is concluded if the
situation sn is terminal. (The definition of a concluded game does not exhaust the set
of all situations in which the game is actually concluded. A game can be concluded
through being a draw. To take these situations into account, we would have to further
extend the notion of a situation and the transition relation between situations.)
The distinction between the meanings of ‘state’ and ‘situation’ is not absolute.
When speaking about elementary action systems, we do not always have in mind
sharply-distinguished material objects subject to the forces exerted by the agents.
The definition of an elementary system distinguishes only certain states of affairs
and relations between them; in particular the relation of a direct transition. The selec-
tion of one or other set of states and binary relations representing atomic actions
greatly depends on the ‘world perspective’ and on a definite perception of the ana-
lyzed actions in particular. We may figuratively say that elementary action systems
define what actions are performed while situational action systems also take into
account how they are performed. The borderline between the two concepts is fluid.
For example, one may modify the scheme of chess presented above so as to include
the players in the notion of a state (i.e. a configuration of pieces). Thus, e.g., the fact
that Black is to make a move would be a component of the current state of the game.
Such a step is, of course, possible; it would however complicate the description of
the game.
Suppose M = (W, R, A) is an arbitrary elementary action system. If one
wants to restrict uniformly all actions in the system to sequences of states of length
at most n, it suffices to introduce n + 1 labels, e.g. the consecutive natural numbers
(or numerals) 0, 1, . . . , n, and to define the set of possible situations as the product
S := W × {0, 1, . . . , n}. The function f assigning to each situation s its unique state
is the projection onto the first axis, i.e., f (s) := w for any situation s = (w, k).
A given state may therefore receive n + 1 different labels. The (direct) transition
relation Tr between situations is defined as follows: for s = (w, k) and s′ = (w′ , k′ ),

s Tr s′ ⇔df w R w′ ∧ w ≠ w′ ∧ k′ = k + 1.

Each situational transition changes states and increases the label from k to k + 1.
(The second conjunct excludes reflexive points of R but does not exclude loops
of states of longer length from situational transitions. If we want to exclude loops
u R u 1 R . . . u l R u altogether as ‘futile’ computations, the problem is much more
complicated and not analysed here.)
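A sketch of this counter construction (illustrative names, not from the text) shows how the label bounds the length of every run of situations.

```python
# Sketch (names are mine): situations are pairs (state, k) with k in {0, ..., n};
# every transition increases k by 1 and the second conjunct bars reflexive steps.

def tr_counter(R, n):
    return {((u, k), (w, k + 1))
            for (u, w) in R if u != w
            for k in range(n)}           # k + 1 never exceeds the terminal label n

R = {(0, 1), (1, 2), (2, 0), (3, 3)}
Tr = tr_counter(R, n=2)
print(((0, 0), (1, 1)) in Tr)   # True
print(((0, 2), (1, 3)) in Tr)   # False: label 2 is terminal when n = 2
print(((3, 0), (3, 1)) in Tr)   # False: the reflexive step (3, 3) is excluded
```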

M s = (W, R, A, S, Tr, f ) is a well-defined situational action system in which


M = (W, R, A) is contained.
The situations of the form s = (w, n) are terminal, where w is an arbitrary state.
This means that for s = (w, n) there is no situation s′ such that s Tr s′. Therefore
the system halts at s = (w, n) and the work of the action system M, as subject to
the organization of M s , ceases. In turn, situations of the form (w, 0) may be treated
as initial ones.
Since loops in runs of situations are not excluded, it may happen that there is
a sequence of transitions between situations of length ≥ 3, say s Tr s1 Tr . . . Tr sl Tr s′, where f (s′) = f (s).
In this case the system starts at s and finishes at s′ (in the same state). But of course
we may declare at the outset that R does not admit loops.
The width of the situational envelope of an elementary action system depends
on how the system functions. Let us take a look at Example 1.3.3. WT is here the
set of all proofs carried out from T with the help of the rules from a set Θ. Let
w = (φ0 , . . . , φn−1 , φn ) be a fixed proof in WT . In the simplest case, the situational
context of the proof w is constituted by the way the formula φn , i.e., the last element
of the proof w, is adjoined to the shorter proof u = (φ0 , φ1 , . . . , φn−1 ). The proof w
is a result of applying a definite rule r ∈ Θ to some ‘prior’ formulas {φ j : j ∈ J },
where J is a subset of {0, . . . , n − 1}; that is, to some formulas occurring in u. The
situational context of w is therefore represented by means of the triple (u, r, J ),
which encodes the above fact. Denoting the pair (r, J ) by a, we see that the above
situation can be identified with the pair s = (u, a). A deeper description of possible
situations would take into account not only the way the proposition φn was affixed to
u forming the proof w, but also, for example, the ways all or some of the sub-proofs
of u were formed. Along with changing the width and the depth of the description
of possible situations, the description of the relation Tr of a direct transition between
possible situations would be modified too.
The second remark concerns current situations of action systems. An action sys-
tem always finds itself in a definite situation—the current situation of the system.
The state corresponding to this situation (the state in which the system is) is called
the current state of the system. It is not possible to single out the separate cate-
gory of current situations by means of linguistic procedures though it is possible
to provide an exhaustive list of attributes determining these situations. Hybrid logic
introduces nominals which are objects that signify only one situation, namely the
one which actually happens. For example, one can always describe the configuration
of pieces which are now placed on the chessboard but no linguistic operation is able
to fully render the meaning of the word ‘now’. The terms ‘the current situation’ and
‘the current state’ are demonstratives—their proper understanding always requires
knowledge of extralinguistic factors.
The third remark concerns the relationship between situations and (possibly infi-
nite) runs of situations. Suppose

M s = (W, R, A, S, Tr, f )

is a situational system. A possible (or hypothetical) run of situations (in M s ) is any


function mapping an interval of integers into S. There are thus a priori four possible
types of runs. A run is of type (−∞, +∞) if it is indexed by the set of all integers,
i.e., it is represented as an infinite sequence without end points

(. . . , s−m , s−m+1 , . . . , s0 , . . . , sn , sn+1 , . . .), (2.2.8)

where sk Tr sk+1 , for every integer k.


A run is of type (−∞, 0) if it is indexed by the non-positive integers, i.e., it is of
the form

(. . . , s−m, s−m+1 , . . . , s0 ), (2.2.9)

where sk Tr sk+1 for every negative integer k and there does not exist a situation s ∈ S
such that s0 Tr s.
A run is of type (0, +∞) if it is of the form

(s0 , . . . , sn , sn+1 , . . .), (2.2.10)

where sk Tr sk+1 for every natural number k ∈ ω and there does not exist a situation
s ∈ S such that s Tr s0 .
Finally, a run is finite (and terminated in both directions) if it is of the form
(s0 , . . . , sn ) for some natural number n and there do not exist situations s, s′ ∈ S
such that s Tr s0 or sn Tr s′.
Whenever we speak of a run of situations, we mean any run falling into one of
the above four categories. If a situation s occurs in a run, then the situation s′ in the
run that directly follows s is called the successor of s; the situation s is then called
the predecessor of s′.
Proposition 2.2.2 Let Ms = (W, R, A, S, Tr, f ) be a situational action system.
Let s, s′ be two possible situations in S. Then s Tr s′ holds if and only if there exists
a run (of one of the above types) such that s and s′ occur in the run and s′ is the
successor of s.
The proof is simple and is omitted. 
To each situational system M s the class R of all possible runs of situations in the
system is assigned. It may happen that R includes runs of all 4 types. This property
gives rise to a certain classification of situational systems. For example, a system
M s would be of category 1 if R contained only finite runs. There are 2^4 − 1 (= 15)
possible categories of situational action systems.
The relation Tr of direct transitions between situations can be unambiguously
described in terms of runs of situations. More precisely, any situational action system
can be equivalently characterized as a quintuple

(W, R, A, S, f ) (2.2.11)

satisfying the earlier conditions imposed on (W, R, A) and S, f , for which addi-
tionally the class R of runs of situations from S is singled out. R is postulated to be
the class of all runs of situations admissible for the system (2.2.11). The relation Tr
of direct transitions between situations can be then defined by the right-hand side of
the statement of Proposition 2.2.2.
We conclude this section with remarks on the performability of actions in situ-
ational systems. Suppose we are given a situational action system M s in the sense
of Definition 2.2.1. The fact that A is performable in a definite situation s is fully
determined by the relations Tr and R. Accordingly:
If s is a situation and A ∈ A is an atomic action, then the act of performing
the action A in this situation turns s into a situation s′ such that s Tr s′ and
f (s) A f (s′).
Immediately after performing the action A the system M = (W, R, A) is in the state
f (s′). This is due to the fact that R is compatible with Tr, i.e., s Tr s′ implies that
f (s) R f (s′).
The above remarks give rise to the following definition:
Definition 2.2.3 Let M s = (W, R, A, S, Tr, f ) be a situational action system and
let s ∈ S and A ∈ A.
(i) The atomic action A is performable in the situation s if and only if there exists
a situation s′ ∈ S such that s Tr s′ and f (s) A f (s′); otherwise A is unper-
formable in s.
(ii) The action A is totally performable in the situation s if and only if it is per-
formable in s and for every state w ∈ W , if f (s) A w, then s Tr s′ for some
situation s′ such that w = f (s′). 
The performability of an action A in a situation s is thus tantamount to the existence
of a direct Tr-transition from s to another situation s′ such that the pair (f (s), f (s′))
is a possible performance of A (i.e., A accounts for the transition s Tr s′). The
second conjunct of the definition of total performability of A in s states that every
possible performance of A in the state f (s) results in a new situation s′ such that
s Tr s′ (i.e., the relation Tr imposes no limitations on the possible performances of A
in f (s)).
Similarly to the case of elementary action systems, total performability always
implies performability. The converse holds for deterministic actions, i.e. actions A
which are partial functions on W .
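Definition 2.2.3 can be read off directly when Tr, f and A are finite objects; the sketch below (illustrative names, not from the text) checks performability and total performability of an atomic action in a situation.

```python
# Sketch (names are mine): Tr is a set of pairs of situations, f maps a situation
# to its state, A is an atomic action given as a set of pairs of states.

def performable(A, Tr, f, s):
    """Clause (i): some Tr-successor of s realizes a performance of A from f(s)."""
    return any(s1 == s and (f(s1), f(s2)) in A for (s1, s2) in Tr)

def totally_performable(A, Tr, f, s):
    """Clause (ii): performable, and every A-performance from f(s) is matched by
    a Tr-successor of s carrying the corresponding state."""
    if not performable(A, Tr, f, s):
        return False
    successor_states = {f(s2) for (s1, s2) in Tr if s1 == s}
    return all(w in successor_states for (u, w) in A if u == f(s))

f = lambda situation: situation[0]          # situations are (state, label) pairs
A = {(0, 1), (0, 2)}
Tr = {((0, 'X'), (1, 'O'))}                 # only the performance ending in 1 is allowed
s = (0, 'X')
print(performable(A, Tr, f, s))             # True
print(totally_performable(A, Tr, f, s))     # False: (0, 2) has no Tr-counterpart
```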
The concept of the situational performability of an atomic action is extended
to non-empty compound actions. Let M s be a situational system and A ∈ CA
a compound action. A is performable in a situation s if and only if there exists a non-
empty string (s0 , . . . , sn ) of situations and a string of atomic actions A1 . . . An ∈ A
such that s = s0 Tr s1 . . . sn−1 Tr sn and f (s0 ) A1 f (s1 ) . . . f (sn−1 ) An f (sn ) (i.e.,
the transition from s to sn is effected by means of consecutive performances of the
actions A1 , . . . , An ).
The action A is totally performable in s if and only if A is performable in s
and for every possible performance (u 0 , . . . , u n ) of A such that u 0 = f (s) there

exist situations s0, . . . , sn with the following properties: s0 = s, ui = f(si) for
i = 0, . . . , n, and si Tr si+1 for all i ≤ n − 1.
The second conjunct of the above definition states that for every performance
(u 0 , . . . , u n ) of A starting with u 0 = f (s) there exists a run of situations (s0 , . . . , sn )
with s0 = s such that u 0 , . . . , u n are the states corresponding to s0 , . . . , sn .
The existence of a situational envelope of an elementary action system radically
restricts the possibilities of performing compound actions. In any normal elemen-
tary action system M, every compound action not containing the empty string ε is
totally performable (Proposition 1.7.6). However, if the elementary system M is a
part of some situational action system M s = (W, R, A, S, Tr, f ), that is, the reduct
(W, R, A) of M s coincides with M, then the above result is no longer true (if per-
formability is taken in the sense of M s ).

2.2.1 An Example. Thomson Lamp

Intuitively, a supertask is an activity consisting of an infinite number of steps but


taken, as a whole, during a finite time. Examples of modern supertasks resemble
ancient paradoxes posed by Zeno of Elea (e.g. Achilles and the tortoise). Supertasks
may be identified with infinite strings of atomic actions performed in a finite time
period.
A Thomson lamp is a device consisting of a lamp and a switch set on an electrical
circuit. If the switch is on, then the lamp is lit, and if the switch is off, then the lamp is
off. A Thomson lamp is therefore modelled as a simple elementary action system M
with two states and two atomic actions, A: ‘Switching on the lamp’ and B: ‘Switching
off the lamp’. The picture gets more complicated if M is contained in a situational
envelope involving only one additional situational parameter—time. Suppose that:
1. At time t = 0 the switch is on.
2. At time t = 1/2 the switch is off.
3. At time t = 3/4 the switch is on.
4. At time t = 7/8 the switch is off.
5. At time t = 15/16 the switch is on, etc.
What is the state of the lamp at time t = 1? Is it lit or not? We see that the above
infinite sequence of alternate atomic actions gives rise to a formulation of non-
trivial questions, depending on the selection of the situational envelope.4 Solving
them requires introducing genuinely infinitistic components into the picture of action
theory presented here. These components are introduced in different ways. One
option is to endow the set of states with a topology or with some order-continuous
properties. This option requires infinite, ordered sets of states exhibiting various
forms of order-completeness. The other option is to embed the atomic system in a

4 The above example is taken from Jerzy Pogonowski’s essay Entertaining Math Puzzles, which can be found at www.glli.uni.opole.pl.
situational envelope endowed with various continuity properties. (This is the case
with a Thomson lamp.) A mixture of these two approaches is also conceivable. In
Chap. 3 the first option is elaborated in the context of an ordered action system.

2.3 Iterative Algorithms

Let M = (W, R, A) be an elementary action system. An iterative algorithm for M


is a quintuple
D := (W, V, α, ω, LA), (2.3.1)

where W is the set of states of M, V is a finite set called the set of labels of the
algorithm, α and ω are designated elements of V , called the initial and the terminal
(or end) label of the algorithm, respectively, and LA is a finite subset of the Cartesian
product
(V \ {ω}) × A × (V \ {α}).

The members of LA are called labeled atomic actions of the algorithm D. Thus, the
labeled actions are triples (a, A, b), where a and b are labels, called respectively the
input and the output label, and A is an atomic action. Since LA involves only finitely
many atomic actions of A, the members of the set

A D := {A ∈ A : (∃a, b ∈ V ) (a, A, b) ∈ LA}

are called the atomic actions of the algorithm D. LA may be empty.


The elements of the set S D := W × V are called the possible situations of the
algorithm. If s = (w, a) ∈ S D , then w is the state corresponding to s, and a is the
label of s. The situations with label α, i.e., the members of W × {α} are (possible)
initial situations of the algorithm while the members of W × {ω}, i.e., the situations
labeled by ω are (possible) terminal situations of the algorithm D.
The relation Tr D of direct transition in the algorithm is defined as follows:
if s = (u, a), t = (w, b) are possible situations of D, then

Tr D (s, t) if and only if (∃A) (a, A, b) ∈ LA & u A, Rw.

Thus, if s is an initial situation, then Tr(s′, s) for no situation s′ of D. Similarly, if s
is terminal, then there does not exist a situation s′ ∈ S such that Tr(s, s′).
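For illustration, an iterative algorithm and its direct-transition relation Tr D can be encoded as below. This is only a sketch with hypothetical names; finite data are assumed, and the clause relating u and w is read as (u, w) ∈ A ∩ R, in line with the resultant relations used elsewhere in this chapter.

```python
from dataclasses import dataclass

@dataclass
class IterativeAlgorithm:
    W: set          # states of the underlying system M
    V: set          # labels
    alpha: str      # initial label
    omega: str      # terminal label
    LA: set         # labeled atomic actions: triples (a, action_name, b)
    actions: dict   # action_name -> set of (state, state) pairs (the relation A)
    R: set          # direct-transition relation on W, as a set of (state, state) pairs

def direct_transition(D, s, t):
    # s Tr_D t: some labeled action (a, A, b) with the right labels moves the
    # corresponding states u to w, where (u, w) lies both in the action A and in R.
    (u, a), (w, b) = s, t
    return any(la == a and lb == b and (u, w) in D.actions[A] and (u, w) in D.R
               for (la, A, lb) in D.LA)
```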
The relation Tr∗D , the transitive and reflexive closure of Tr D , is called the transition
relation in D. Thus,

(s, t) ∈ Tr∗D if and only if (∃n ≥ 0) (s, t) ∈ (Tr D )n .

(Thus, in particular, (Tr D )0 = {(s, s) : s ∈ S}.) Finite runs of the algorithm D are
sequences of situations
(s0 , s1 , . . . , sn ) (2.3.2)

such that s0 is an initial situation of D and (si , si+1 ) ∈ Tr D for all i, 0 ≤ i ≤ n − 1.


The situations s0 and sn are called the beginning and the end of the run (2.3.2). The
run (2.3.2) is terminated in D if and only if sn is a terminal situation of D.
It may happen that for some run (2.3.2) there does not exist a situation s with the
property that Tr D (sn , s); i.e., there is no possibility of continuation of the run (2.3.2),
even if the situation sn is not terminal in D. It is said then that the algorithm D is
stuck in the situation sn . A run with this property is not qualified as terminated in D.
The fact that an algorithm may get jammed in some situations leads us to isolate
the so-called integral iterative algorithms. A situation s = (w, a) is non-terminal
if a ≠ ω. An iterative algorithm D is integral if the domain of the relation Tr D
coincides with the set of all non-terminal situations; that is, if for every non-terminal
situation s there exists a situation s′ such that Tr D (s, s′).
If the system M = (W, R, A) is normal and every atomic action A ∈ A is a total
function (with domain W ) and the iterative algorithm D for M has the property that
for every triple (a, A, b) ∈ LA with b ≠ ω there exists a triple (c, B, d) ∈ LA such
that b = c, then D is integral.
Infinite (or divergent) runs of D are defined as infinite sequences of situations

(s0 , s1 , . . . , sn , . . .)

such that s0 is an initial situation of D and (si , si+1 ) ∈ Tr D for all i. Thus, no terminal
situation occurs in a divergent run.
Every elementary action system M = (W, R, A) with a distinguished iterative
algorithm D for M forms a situational action system. For let

M s := (W, R, A, S D , Tr D , f ),

where S D (= W × V ) is the set of all possible situations of D, Tr D is the transition


relation in D, and f : W × V → W is the projection onto W . M s is a situational
action system in the sense of Definition 2.2.1.
Each state of W corresponds to a certain initial situation of M s . This is not the
case in the iterative system associated with chess playing (Sect. 2.2). In a game of
chess there is only one initial situation.
To each iterative algorithm (2.3.1) a binary relation Res D on the set W of states
is assigned and called the resultant relation of the algorithm D:
 
(u, w) ∈ Res D if and only if ((u, α), (w, ω)) ∈ Tr∗D .

Equivalently, (u, w) ∈ Res D if and only if there exists a terminated run (s0 , . . . , sn )
such that s0 = (u, α) and sn = (w, ω).
The resultant relation Res D is a sub-relation of the reach of the elementary action
system M.
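For finite W and V the resultant relation Res D can be computed by brute force: build Tr D, close it reflexively and transitively, and keep the pairs that connect an initial situation (u, α) with a terminal one (w, ω). The sketch below is hypothetical and accepts any direct-transition test tr(D, s, t), such as the one sketched earlier.

```python
def resultant_relation(D, tr):
    # All possible situations of D.
    S = {(w, a) for w in D.W for a in D.V}
    Tr = {(s, t) for s in S for t in S if tr(D, s, t)}
    # Reflexive-transitive closure of Tr, computed as a fixpoint.
    reach = {(s, s) for s in S} | Tr
    changed = True
    while changed:
        new = {(s, v) for (s, t) in reach for (t2, v) in Tr if t == t2}
        changed = not new <= reach
        reach |= new
    # Res_D = {(u, w) : ((u, alpha), (w, omega)) is in the closure}.
    return {(u, w) for ((u, a), (w, b)) in reach if a == D.alpha and b == D.omega}
```

Applied to the chain algorithm of the Example below, this returns (A1 ∩ R) ◦ . . . ◦ (An ∩ R).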
Example Let M be an arbitrary elementary action system, and let (A1 , . . . , An ) be


a fixed non-empty sequence of atomic actions of A. Let V := {0, 1, . . . , n}, α := 0,
ω := n, and LA := {(0, A1 , 1), . . . , (n − 1, An , n)}. Then D := (W, V, α, ω, LA)
is an algorithm for M. The resultant relation of D is equal to (A1 ∩ R) ◦ . . . ◦
(An ∩ R). 
An iterative algorithm (2.3.1) is well-designed if and only if for every labeled
action (a, A, b) ∈ LA there exists a finite string A1 , . . . , An of atomic actions
of D and a finite sequence a0 , a1 , . . . , an of labels of V such that a0 = α,
an = ω, (ai , Ai+1 , ai+1 ) ∈ LA for all i ≤ n − 1, and (a, A, b) is equal to a
triple (ai , Ai+1 , ai+1 ) for some i ≤ n − 1. Intuitively, D is well-designed if every
labeled action of LA is employed in some terminated run of D.
Well-designed algorithms are singled out for technical reasons. Each algorithm D
can be transformed into a well-designed algorithm by deleting from LA those labeled
actions (a, A, b) which do not satisfy the above condition. This operation does not
change the resultant relation of the algorithm.
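The pruning operation just described works entirely on the label graph of LA: a labeled action (a, A, b) lies on some terminated run of labels exactly when a is reachable from α and ω is reachable from b. A minimal sketch with hypothetical names:

```python
def prune_to_well_designed(D):
    edges = {(a, b) for (a, _, b) in D.LA}

    def reachable(start, graph):
        seen, todo = {start}, [start]
        while todo:
            x = todo.pop()
            for (p, q) in graph:
                if p == x and q not in seen:
                    seen.add(q)
                    todo.append(q)
        return seen

    from_alpha = reachable(D.alpha, edges)                       # labels reachable from alpha
    to_omega = reachable(D.omega, {(q, p) for (p, q) in edges})  # labels from which omega is reachable
    D.LA = {(a, A, b) for (a, A, b) in D.LA if a in from_alpha and b in to_omega}
    return D
```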
We recall that REG(A) denotes the family of all regular compound actions
(over A), and REG+ (A) := {B ∈ REG(A) : ε ∉ B}. The properties of
the family REG(A) are independent of the internal structure of an action system
M = (W, R, A); the cardinality of A is the only factor that matters. This fact leads
us to distinguish, for each B ∈ CA, the set AB of atomic actions of A occurring in
the strings of B.
Lemma 2.3.1 If B ∈ REG(A), then AB is finite.
The lemma follows from the definition of a regular language over a (possibly infinite)
alphabet (see Sect. 1.6). 
Let A be a family of binary relations on a set W . The positive Kleene closure of
A, denoted by Cl + (A), is the least family B of binary relations on W which includes
A as a subfamily and satisfies the following conditions:
(i) ∅ ∈ B
(ii) if {P, Q} ⊆ B, then {P ◦ Q, P ∪ Q, P + } ⊆ B.
The Kleene closure of A, denoted by Cl ∗ (A), is the least family B of binary relations
on W which includes A as a subfamily and satisfies
(i) ∅, E W ∈ B
(ii) if {P, Q} ⊆ B, then {P ◦ Q, P ∪ Q, P ∗ } ⊆ B.
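The operations entering the Kleene closure are ordinary operations on binary relations; for finite carriers they can be computed directly, as in this sketch (relations represented as sets of pairs; names hypothetical).

```python
def compose(P, Q):
    # Relational composition P ∘ Q: first P, then Q.
    return {(u, w) for (u, v) in P for (v2, w) in Q if v == v2}

def rt_closure(P, W):
    # P* : the reflexive-transitive closure of P over the finite carrier W.
    closure = {(w, w) for w in W} | set(P)
    changed = True
    while changed:
        new = compose(closure, closure)
        changed = not new <= closure
        closure |= new
    return closure
```

The transitive closure P+ needed for the positive Kleene closure is obtained in the same way, starting from P alone instead of P together with the identity.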
The purpose of this section is to determine the principles of the functioning of
elementary action systems M which are regulated by appropriate automata. This will
give rise to a class of situational action systems erected on atomic systems.
We henceforth will work with action systems M = (W, R, A) which are normal
and in which the relation R is reflexive. These assumptions imply the total per-
formability of all non-empty compound actions of CA in M, and of the action ε
in particular. (If the reflexivity of R is dropped, one should rather work with the
compound actions of C + A rather than those of CA and with the positive Kleene
closure Cl + .)
Proposition 2.3.2 Let M = (W, R, A) be a normal elementary action system with


reflexive R. Then the following conditions hold:
(i) For every regular action B ∈ REG(A), Res B, the resultant relation of B defined
as in the formula (1.7.3) of Sect. 1.7, belongs to Cl(AB );
(ii) Conversely, for every binary relation Q ∈ Cl(A) there exists a regular action
B ∈ REG(A) such that Res B = Q and Q ∈ Cl(AB ).

Proof (i) We define:

P := {B ∈ CA : Res B ∈ Cl(AB )}.

(AB is a set of binary relations on W , viz, the set of atomic actions occurring in the
compound action B. As R is reflexive, Res B is reflexive, for all non-empty B.)
P has the following properties:
(a) If B ∈ CA is finite then B ∈ P; in particular ∅ ∈ P;
(b) If B, C ∈ P, then B ∪ C, B ◦ C and B∗ belong to P as well.
((b) directly follows from Lemma 1.7.2 and the remarks following it.) Thus, every
regular action of REG(A) belongs to P.
(ii) We define:
 
L := Q ⊆ W × W : (∃B) B ∈ REG(A) & Q = Res B .

We claim that Cl(A) ⊆ L.


∅ ∈ L and E W ∈ L, because ∅ = Res ∅ and E W = Res ε. For every atomic action
Q ∈ A, B := {Q} is a regular action on M and, since M is normal, Res B = Q.
Hence, A ⊆ L. Now assume that P, Q ∈ L. We shall show that {P ◦ Q, P ∪ Q, P ∗ }
⊆ L. We have that P = Res B and Q = Res C for some regular compound actions
B and C on M. Then, by Lemma 1.7.2, P ◦ Q = Res B ◦ Res C = Res (B ◦ C),
P ∪ Q = Res B∪ Res C = Res (B∪C), and, as R is reflexive, (Res B)∗ = Res (B∗ ).
As B ◦ C, B ∪ C, and B∗ are regular, this proves that Cl(A) ⊆ L.
If Q = Res B for some regular B, then by (i), Res B ∈ Cl(AB ), and hence
Q ∈ Cl(AB ). So (ii) holds. 

An iterative algorithm D = (W, V, α, ω, LA) for M is said to accept a string


A1 . . . An of atomic actions of M if there exists a sequence a0 , . . . , an of labels of
V such that a0 = α, an = ω and (ai , Ai+1 , ai+1 ) ∈ LA, for all i ≤ n − 1. (Thus D
accepts the empty string ε if and only if α = ω. D accepts no strings if and only if
LA is empty and α ≠ ω.)
The total (compound) action of the algorithm D, denoted by A(D), is defined as
the set of all strings of atomic actions accepted by D.
In particular, for a given algorithm D = (W, V, α, ω, LA), if LA is empty and
α ≠ ω, then the total action of D is the empty compound action ∅. If LA is empty
and α = ω, then the total action of D equals ε.
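Acceptance of a string of atomic actions by D amounts to the existence of a label path from α to ω through LA; the following sketch (hypothetical names) tracks the set of labels reachable after each symbol.

```python
def accepts(D, word):
    # `word` is a finite sequence of atomic-action names.
    labels = {D.alpha}
    for A in word:
        labels = {b for (a, A2, b) in D.LA for l in labels if a == l and A2 == A}
        if not labels:
            return False
    return D.omega in labels
```

In particular, for the empty sequence the loop is skipped and the test reduces to α = ω, in agreement with the remarks above.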
Thus, D is well-designed if and only if A D = AA(D) , i.e., every atomic action of


D occurs in some sequence belonging to A(D).
Proposition 2.3.3 Let D be an iterative algorithm for an elementary action system
M = (W, R, A). Then:
(i) the total action A(D) of D is regular, and
(ii) the resultant relation of the total action A(D) (in the sense of formula (1.7.3) of
Sect. 1.7) coincides with the resultant relation Res D of D.
Proof To prove (i) we shall employ some facts from the theory of formal grammars.
Let D = (W, V, α, ω, LA). We shall treat the sets V and A D (the set of atomic
actions of D) as disjoint alphabets. V will be the auxiliary alphabet (variables) while
A D will be the terminal alphabet (terminals). The initial label will be regarded as
the start symbol. The set LA of labeled actions of D specifies, in turn, a certain finite
set P of productions:

P := {a → Ab : (a, A, b) ∈ LA} ∪ {a → A : (a, A, ω) ∈ LA},

where ω is the terminal label. The system G D := (A D , V, P, α) is, thus, a combi-


natorial grammar.
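Reading off the productions of G D from LA is mechanical; the sketch below (hypothetical names) writes them out as strings, following the two clauses in the definition of P.

```python
def productions_of(D):
    prods = {f"{a} -> {A} {b}" for (a, A, b) in D.LA}               # a -> A b
    prods |= {f"{a} -> {A}" for (a, A, b) in D.LA if b == D.omega}  # a -> A   when b = omega
    return prods
```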

Lemma 2.3.4 The language generated by the grammar G D coincides with the total
action A(D) of the algorithm D.
Proof of the lemma. We first show that A(D) ⊆ L(G D ). Let A1 . . . An ∈ A(D). Then,
for some a0 , . . . , an ∈ V with a0 = α and an = ω, we have that (ai , Ai+1 , ai+1 ) ∈
LA for all i ≤ n − 1. This implies that

α ⇒ A1 a1 ⇒ A1 A2 a2 ⇒ . . . ⇒ A1 . . . An−1 an−1 ⇒ A1 . . . An−1 An

is a terminated derivation in G D . Hence A1 . . . An ∈ L(G D ).


To prove the reverse inclusion L(G D ) ⊆ A(D), suppose that A1 . . . An ∈ L(G D )
and let z 0 ⇒ z 1 ⇒ . . . ⇒ z m−1 ⇒ z m be a derivation of A1 . . . An in G D ,
where z0 = α and zm = A1 . . . An . We show by induction for i = 1, . . . , n − 1 that
zi = A1 . . . Ai ai for some ai ∈ V . This holds for i = 1. Let i ≥ 2 and assume that
zj = A1 . . . Aj aj for all j ≤ i − 1. In particular, zi−1 = A1 . . . Ai−1 ai−1 . Since
i ≤ n − 1, no production of the form a → A can be applied to zi−1 (for otherwise
we would get a word of the form A1 . . . Ai−1 A, which is different from A1 . . . An ; and
the string A1 . . . Ai−1 A blocks further derivations). So the word zi−1 derives zi by
means of a production ai−1 → Ab in which A = Ai .
Since z n−1 = A1 . . . An−1 an−1 , we see that z n must be equal to A1 . . . An . So m =
n, z m = z n , and the word z n−1 derives z n by means of the production an−1 → An .
It follows from the definition of G D that the sequence a0 := α, a1 , . . . , an−1 ,
an := ω has the property that (ai , Ai+1 , ai+1 ) ∈ LA for all i ≤ n − 1. So A1 . . . An ∈
A(D). This concludes the proof of the lemma. 
G D is a right-linear grammar. This fact, in view of Theorem 1.6.3, implies that


the language L(G D ) (= A(D)) is regular. So (i) holds.

(ii) We have to show that Res D = Res A(D). Let (u, w) ∈ Res D , i.e., ((u, α),
(w, ω)) ∈ (Tr D )+ . Hence there exists a finite sequence of situations s0 , . . ., sn , where
si = (ui , ai ), for all i ≤ n, such that s0 = (u, α), sn = (w, ω) and, for every i ≤ n − 1,
there exists an atomic action Ai+1 ∈ A D with the property that (ai , Ai+1 , ai+1 ) ∈ LA
and ui Ai+1 , R ui+1 . It follows that the sequence A1 . . . An belongs to A(D) and
(u, w) ∈ (A1 ∩ R) ◦ . . . ◦ (An ∩ R), i.e., (u, w) ∈ Res A(D).
Conversely, let (u, w) ∈ Res A(D). Hence there exists a sequence A1 . . . An ∈
A(D) and a string of states u0 , . . . , un such that u0 = u, un = w, and ui Ai+1 , R ui+1
for all i ≤ n − 1. Since A1 . . . An ∈ A(D), there exists also a sequence of labels
a0 , . . . , an such that a0 = α, an = ω, and (ai , Ai+1 , ai+1 ) ∈ LA, for all i ≤ n − 1,
which means that ((ui , ai ), (ui+1 , ai+1 )) ∈ Tr D for every i ≤ n − 1. Hence (u, w) ∈
Res D . This proves that Res D = Res A(D).

Corollary 2.3.5 Let D = (W, V, α, ω, LA) be an iterative algorithm for a normal


system M = (W, R, A) with reflexive R. Then Res D belongs to the Kleene closure
of A D .

Proof Use Propositions 2.3.2 and 2.3.3. 

Regular languages are those accepted by finite automata. Regular compound


actions, however, can be also conveniently characterized in terms of total actions
of iterative algorithms.

Theorem 2.3.6 Let M = (W, R, A) be an elementary action system with reflexive


R. For every compound action B ∈ CA the following conditions are equivalent:
(i) B is regular.
(ii) There exists a well-designed iterative algorithm D for M such that the total
action of D is equal to B and A D = AB .

Proof (ii) ⇒ (i) This is an immediate consequence of Proposition 2.3.3.


(i) ⇒ (ii) The proof of this part of the theorem consists in, for every regular
action B ∈ CA, the effective construction of an iterative algorithm D satisfying the
conditions mentioned in clause (ii). The proof is by induction on complexity of the
regular action B.
Suppose first that the action B is finite and non-empty. Let the initial and the
terminal labels α and ω be fixed. For each sequence A = A1 . . . An ∈ B we select a
string of distinct labels a1 , . . . , an−1 , and define:

P A := {(α, A1 , a1 ), . . . , (ai−1 , Ai , ai ), . . . , (an−1 , An , ω)},
LA := ⋃ {P A : A ∈ B}.

Since B is assumed to be finite, LA is finite as well.


Let V be the collection of all the labels occurring in the labeled actions of LA.
Then D = (W, V, α, ω, LA) is a well-designed iterative algorithm for M. Moreover,
it follows from the construction of D that the compound action of D is equal to B
and that A D = AB .
If B is empty, it is assumed that α ≠ ω and LA is empty. It follows that the
transition relation Tr D is empty. Consequently, the action of D is equal to ∅.
For the compound action ε, it is assumed that α = ω and LA is empty. This
implies that the transition relation Tr D is equal to E W . Consequently, the action of
D is equal to ε.
Now suppose B and C are compound actions on M such that well-designed algo-
rithms D1 and D2 for M have been constructed so that the total action of D1 is equal
to B and the total action of D2 is equal to C. We shall build an algorithm D for M
such that the total action of D is equal to B ∪ C. Let Di = (Wi , Vi , α i , ωi , LAi )
for i = 1, 2. Without loss of generality we can assume that α 1 = α 2 = α and
ω1 = ω2 = ω, and that the sets V1 \ {α, ω} and V2 \ {α, ω} are disjoint. (Otherwise
one can simply rename the labels of V1 and V2 .) We then put:

D := (W, V1 ∪ V2 , α, ω, LA1 ∪ LA2 ).

It is easy to see that the total action of D is equal to B ∪ C and A D = AB∪C .


Moreover D is well-designed.
We shall now construct an algorithm D ◦ for M such that the total action of D ◦ is
equal to B ◦ C. We put:

D ◦ := (W, V1 ∪ V2 ∪ {c}, α, ω, LA◦ ),

where c is a new label adjoined to V1 ∪ V2 , and LA◦ is the ‘concatenation’ of LA1


and LA2 . To define LA◦ formally, we need an auxiliary notion.
A terminated sequence of labeled atomic actions of an algorithm D =
(W, V, α, ω, LA) is any finite sequence of triples

(a0 , A1 , a1 ), (a1 , A2 , a2 ), . . . , (an−2 , An−1 , an−1 ), (an−1 , An , an ) (2.3.3)

of LA such that a0 = α and an = ω.


For any terminated sequence (2.3.3) of elements of LA1 and any terminated
sequence

(b0 , B1 , b1 ), (b1 , B2 , b2 ), . . . , (bm−2 , Bm−1 , bm−1 ), (bm−1 , Bm , bm )

of elements of LA2 , where b0 = α and bm = ω, we define a new sequence

(α, A1 , a1 ), (a1 , A2 , a2 ), . . . , (an−2 , An−1 , an−1 ), (an−1 , An , c),     (2.3.4)
(c, B1 , b1 ), (b1 , B2 , b2 ), . . . , (bm−2 , Bm−1 , bm−1 ), (bm−1 , Bm , ω),

where c is the new label adjoined to V1 ∪ V2 .
LA◦ is, by definition, the set of all labeled atomic actions occurring in all the
sequences of the form (2.3.4).
It is easy to show that the total action of D ◦ is equal to B ◦ C, the set of atomic
actions of D ◦ is equal to AB◦C , and D ◦ is well-designed.
Having given a well-designed algorithm D = (W, V, α, ω, LA) for M such that
the total action of D is equal to B, we define a new well-designed algorithm

D # = (W, V # , α # , ω# , LA# ).

Here α # = ω# , V # = (V \ {α, ω}) ∪ {α # , c}, where α # and c are new labels not
occurring in V . Thus, the labels α and ω are removed from V and new labels α # ,
c are adjoined. LA# is defined as follows: for every finite number, say k, of terminated
non-empty sequences of labeled atomic actions of D:
(a^j_0, A^j_1, a^j_1), (a^j_1, A^j_2, a^j_2), . . . , (a^j_{n_j−1}, A^j_{n_j}, a^j_{n_j})     (2.3.5)

where j = 1, . . . , k, and a^j_0 = α, a^j_{n_j} = ω for all j, a new sequence of labeled
actions is formed:

(α # , A11 , a11 ), (a11 , A12 , a21 ), . . . , (an11 −1 , A1n 1 , c),


(c, A21 , a12 ), (a12 , A22 , a22 ), . . . , (an22 −1 , A2n 2 , c),
..
.
j j j j j j j (2.3.6)
(c, A1 , a1 ), (a1 , A2 , a2 ), . . . , (an j −1 , An j , c),
..
.
(c, Ak1 , a1k ), (a1k , Ak2 , a2k ), . . . , (ank k −1 , Akn k , ω# ).

If k = 0, (2.3.6) reduces to the sequence (α # , ω# ), which is equal to (α # , α # ).


LA# is the set of all labeled atomic actions occurring in the sequences (2.3.6).
The total compound action of the algorithm D # is equal to B∗ , the iterative closure
of B. Moreover the set of atomic actions of D # is equal to the set of all atomic actions
occurring in B.
It follows from the above constructions that for every action B ∈ REG(A) there
exists a well-designed iterative algorithm D such that the total action of D is equal
to B and A D = AB . This completes the proof of Theorem 2.3.6.

The following corollary (in a slightly different but equivalent form) is due to
Mazurkiewicz (1972a):

Corollary 2.3.7 Let M = (W, R, A) be a normal action system with reflexive R,


and let Q be a relation that belongs to the Kleene closure of A. Then there exists an
iterative algorithm D for M such that Res D = Q and Q ∈ Cl(A D ).
Proof According to Proposition 2.3.2.(ii), there exists a regular compound action B


on M such that Q = Res B and Q ∈ Cl(AB ). In turn, in view of Theorem 2.3.6,
there exists an iterative algorithm D for M, whose total action A(D) is equal to B
and A D = AB . Hence Q ∈ Cl(A D ). Since A(D) = B, Proposition 2.3.3 yields
Res D = Res B = Q. 

Let M = (W, R, A) be a finite elementary action system. In Sect. 1.7, the com-
pound actions Φ☞i Ψ have been defined for all Φ, Ψ ⊆ W and i = 1, . . . , 4. Since
these actions are regular, Theorem 2.3.6 implies that Φ☞i Ψ is ‘algorithmizable’ for
all Φ, Ψ and i = 1, . . . , 4; that is, there exists a well-designed iterative algorithm D
for M such that the total action A(D) of D is equal to Φ☞i Ψ . In particular this implies
that the resultant relation of Φ☞i Ψ coincides with Res D . Speaking figuratively, it
means that the ways in which the task (Φ, Ψ ) is implemented and the goal Ψ is reached in the
finite system M are subordinated to a well-designed iterative algorithm. Thus, there
exists, at least a theoretical possibility of ‘automatizing’ the tasks (Φ, Ψ ) in the sys-
tem M. The practical realization of this idea requires adopting realistic assumptions
as regards the cardinalities of the sets A, W and V (the set of labels of the algorithm).
The reach ReM of an elementary action system M is equal to the resultant relation
of the action A∗ , the set of all non-empty strings of atomic actions. Since A∗ is
regular if A is finite, Theorem 2.3.6 and Corollary 2.3.7 imply that ReM belongs
to the Kleene closure of A whenever M is a finite normal elementary system with
reflexive R. But this fact can be established directly by appealing to the definition
of ReM .
Let M = (W, R, A) be an elementary action system. Two compound actions
B, C ∈ CA are equivalent over M if and only if AB = AC and Res B = Res C,
i.e., B and C are ‘built up’ with the same atomic actions and the resultant relations
of B and C are identical.
The facts we have established thus far enable us to draw the following simple
corollary:

Corollary 2.3.8 Let M be an elementary normal action system with reflexive R.


Then for every action B ∈ CA the following conditions are equivalent:
(i) B is equivalent to a regular action;
(ii) There exists a well-designed algorithm D for M such that A D = AB and the
resultant relation of B is equal to Res D .

Proof (i) ⇒ (ii) Suppose B is equivalent to a regular action C. By Theorem 2.3.6 there
exists an iterative algorithm D for M such that A D = AC and the total action A(D)
of D is equal to C. Hence A D = AB and Res B = Res C = Res A(D) = Res D .
So (ii) holds.
(ii) ⇒ (i) Assume (ii) Since D is well-designed, AB = A D = AA(D) . By
Proposition 2.3.3, the action A(D) of D is regular. So B is equivalent to A(D) over
M. 
The above corollary gives rise to the problem of the preservation of regular actions
by the relation of equivalence. More specifically, we ask if the following is true in
normal action systems M:
(∗) for any two compound actions B, C ∈ CA, if B is regular and equivalent to C
over M, then C is regular as well.
To answer the above question, we shall make use of the Myhill-Nerode Theorem.
Let Σ be an alphabet. For an arbitrary language L over Σ the equivalence relation
R L on the set Σ ∗ of all words is defined as follows:
x R L y if and only if for each word z, either both x z and yz are in L or neither is, i.e.,
for all z, x z ∈ L if and only if yz ∈ L.
The Myhill-Nerode Theorem states that for an arbitrary language L ⊆ Σ ∗ , the
relation R L is of finite index (i.e., the number of equivalence classes of R L is finite)
if and only if L is accepted by some finite automaton (if and only if, by Theorem 1.6.5,
L is regular).
Let Σ := {A, B} and let L := {A^n B^n : n = 1, 2, . . .}. The language L is not
regular. To show this we define xn := A^n , yn := B^n for all n ≥ 1. If m ≠ n then
xm yn ∉ L and xn yn ∈ L. Hence m ≠ n implies that [xm ] ≠ [xn ], where [x] denotes
the equivalence class of x with respect to R L . This means that the index of R L is
infinite. So L cannot be regular.
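The separating suffixes used in this argument can be exhibited directly. A small sketch (hypothetical names) checks membership in L and confirms that the suffix B^n distinguishes A^m from A^n whenever m ≠ n:

```python
def in_L(word):
    # Membership in L = {A^n B^n : n >= 1} over the alphabet {A, B}.
    n = len(word) // 2
    return n >= 1 and word == "A" * n + "B" * n

def separated(m, n):
    # x_m = A^m and x_n = A^n are R_L-inequivalent: the word z = B^n separates them.
    z = "B" * n
    return in_L("A" * m + z) != in_L("A" * n + z)

# separated(2, 3) and separated(5, 7) are both True, so R_L has infinitely many classes.
```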
Let M = (W, R, A) be an action system, where A = {A, B}, the relations A and
B are transitive, B ⊆ A ⊆ R and R is reflexive. The compound action B is defined
in the same manner as the language L as above, that is, B := {A^n B^n : n ≥ 1}.
The above argument shows that B is not regular. It is also easy to prove that
Res B = A◦ B. Hence trivially Res B belongs to the positive Kleene closure of AB =
{A, B} which implies, by Corollary 2.3.7, that there exists an iterative algorithm D
for M such that A D = A and the resultant relation of D is equal to Res B. Thus, B
is equivalent over M to the regular action A(D) of the algorithm. As B is not regular,
we see that (∗) does not hold (take C := A(D)).
The above result is not surprising because the definition of a regular compound
action over an action system M does not take into account the internal set-theoretic
structure of the family of atomic actions of M.

2.4 Pushdown Automata and Pushdown Algorithms

A grammar G = (Σ, V, P, α) is context-free if each production of P is of the


form a → x, where a is a variable and x is a string of symbols from (Σ ∪ V )∗ .
The adjective ‘context-free’ comes from the fact that every production of the
above sort can be treated as a substitution rule enabling one to replace the symbol a by
the word x irrespective of the context in which the symbol occurs.
A language L is context-free if it is generated by some context-free grammar.
Example 2.4.1 Let L := {An B n : n = 1, 2, . . .}. We showed in Sect. 2.3 that L is


not regular. Let us consider the grammar G = (Σ, V, P, α), where Σ = {A, B},
V = {α} and P = {α → Aα B, α → AB}. G is context-free. By applying the
first production n − 1 times, followed by an application of the second production,
we have

α ⇒ AαB ⇒ AAαBB ⇒ A^3 αB^3 ⇒ . . . ⇒ A^{n−1} αB^{n−1} ⇒ A^n B^n .

Furthermore, induction on the length of a derivation shows that if α ⇒G z1 ⇒G
z2 ⇒G . . . ⇒G zn−1 ⇒G zn , then zn = A^n B^n if zn is a terminal word and zn = A^n αB^n
otherwise. Thus L = L(G). 
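The leftmost derivations of Example 2.4.1 can be generated mechanically, as in this sketch (hypothetical names; words are written with spaces between symbols).

```python
def derive(n):
    # Leftmost derivation of A^n B^n (n >= 1): apply alpha -> A alpha B  (n - 1) times,
    # then alpha -> A B once.
    steps, w = ["alpha"], "alpha"
    for _ in range(n - 1):
        w = w.replace("alpha", "A alpha B", 1)
        steps.append(w)
    steps.append(w.replace("alpha", "A B", 1))
    return steps

# derive(3) yields: alpha, A alpha B, A A alpha B B, A A A B B B
```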

CFL(Σ) denotes the family of all context-free languages over Σ. CFL+ (Σ) is the
class of all context-free languages over Σ which do not involve the empty word ε,

CFL+ (Σ) := {L : L ∈ CFL(Σ) and ε ∉ L}.

There are several ways by means of which one can restrict the format of produc-
tions without reducing the generative power of context-free grammars. If L is a non-
empty context-free language and ε is not in L, then L can be generated by a context-
free grammar G which uses productions whose right-hand sides each start with a
terminal symbol followed by some (possibly empty) string of variables. This special
form is called Greibach normal form. We shall formulate this result in a slightly
sharper form.
Let G be context-free. If at each step in a G-derivation a production is applied to
the leftmost variable, then the derivation is said to be leftmost. Similarly a derivation
in which the rightmost variable is replaced at each step is said to be rightmost.

Theorem 2.4.2 (Greibach 1965) Every context-free language L without ε is gener-


ated by a grammar G for which every production is of the form ν → Ax, where ν is a
variable, A is a terminal and x is a (possibly empty) string of variables. Furthermore,
every word of L can be derived by means of a leftmost derivation in G.

The proof of the above theorem is constructive—an algorithm is provided which, for
every context free-grammar G in which no production of the form ν → ε occurs,
converts it to Greibach normal form. We omit the details. 
We shall define a broad class of algorithms which comprises iterative algorithms
as a special, limit case.
Let M = (W, R, A) be an elementary action system. A pushdown algorithm for
M is a quadruple
D := (W, V, α, LA),

where W is, as above, the set of states of M, V is a finite set called the stack alphabet,
α is a particular stack symbol called the start symbol, and LA is a finite set of labeled
atomic actions of M, i.e., LA is a finite set of triples of the form

(a, A, β), (2.4.1)

where A ∈ A, a ∈ V , and β ∈ V ∗ is a possibly empty string of symbols of V . (The


empty string ε ∈ V ∗ is called the end symbol.) The labels a and β of (2.4.1) are
called the input and the output label of (2.4.1), respectively.
The set
A D := {A ∈ A : (∃a, β) (a, A, β) ∈ LA}

is called the set of atomic actions of the algorithm D. The set is always finite.
Iterative algorithms may be regarded as a limit case of pushdown algorithms. The
former are pushdown algorithms in which the terminal labels β of labeled atomic
actions (a, A, β) ∈ LA have length ≤ 1, i.e., |β| ≤ 1.
The elements of the set S D := W × V ∗ are called the possible situations of the
algorithm. If s = (w, β) ∈ S D , w is the state corresponding to s and β is the label of
the situation s. β is also called the stack corresponding to the situation s. A situation
s is initial if it is of the form (w, α) for some w ∈ W . A situation s is terminal if it
is labeled by the empty string ε of stack symbols; i.e., the stack in this situation is
empty; so terminal situations are of the form (w, ε), w ∈ W .
A labeled action (a, A, β) transforms a situation s1 = (w1 , γ1 ) into a situation
s2 = (w2 , γ2 ) if and only if γ1 = aσ , γ2 = βσ for some σ ∈ V ∗ , and w1 A, R w2 .
Thus, s1 is a situation in which a is the top symbol on the stack and s2 is the situation
which results from s1 by replacing the top symbol a by the string β and performing
the atomic action A so that the system moves from w1 to the state w2 .
The transition relation Tr D in a pushdown algorithm D is defined similarly as in
the case of iterative algorithms:
(s1 , s2 ) ∈ Tr D if and only if there exists a labeled action (a, A, β) ∈ LA which
transforms s1 into s2 .
The notions of finite, infinite or terminated runs of situations in D are defined in
the well-known way. The latter are finite sequences (s0 , . . . , sn ) such that s0 is an
initial situation, Tr(si , si+1 ) for all i ≤ n − 1, and sn is a terminal situation.
Every elementary normal action system M = (W, R, A) together with a distin-
guished pushdown algorithm D for M form, in a natural way, a situational action
system. (To simplify matters, it is also assumed that R is reflexive.) We put:

M s := (W, R, A, S D , Tr D , f )

where S D and Tr D are defined as above, and f is the projection from S D (= W × V ∗ )


onto W . M s is easily seen to satisfy the conditions imposed on situational action
systems.
The resultant relation Res D of a pushdown algorithm D is defined as follows:


 
(u, w) ∈ Res D if and only if ((u, α), (w, ε)) ∈ (Tr D )∗ .

Res D is a subrelation of the reach of M.


The members of V ∗ represent possible contents of the stack. A string a1 . . .an ∈ V ∗
is a possible state of the stack with a1 the top symbol of the stack. The strings of V ∗
are called control labels of the algorithm. Every pair (a, β) with a ∈ V and β ∈ V ∗
defines the control relation, denoted by a → β, on the set V ∗ :

γ1 (a → β)γ2 if and only if (∃σ ∈ V ∗ ) (γ1 = aσ & γ2 = βσ ). (2.4.2)

a → β is in fact a partial function, whose intended meaning is to modify control


labels of situations during the work of the algorithm. The functions a → β enable
us to express neatly possible transformations of the situations of the algorithm: a
labeled action (a, A, β) transforms a situation s1 = (w1 , γ1 ) into s2 = (w2 , γ2 ) if
and only if γ1 (a → β)γ2 and w1 A, R w2 . The functions a → β are also used in
defining a certain compound action on the system M called the total action of the
pushdown algorithm D. More specifically, a finite string A1 . . . An of atomic actions
of M is said to be accepted by the algorithm D if and only if there exists a sequence

(a1 , A1 , β1 ), . . . , (an , An , βn )

of labeled actions of LA and a sequence of control labels γ1 , . . . , γn+1 ∈ V ∗ with


the following properties:
(i) γ1 = a1 = α (= the start symbol), γn+1 = ε (= the end symbol)
(ii) γi (ai → βi )γi+1 for all i ≤ n.
The elements of the string A1 . . . An are therefore consecutive actions of the algorithm
leading from initial to terminal situations.
The total action of a pushdown algorithm D is defined as the set A(D) of all
non-empty strings of atomic actions accepted by D.
Since A D , the set of all atomic actions the algorithm D involves, is always finite,
the composite action A(D) is a language over the alphabet A D .
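Acceptance by a pushdown algorithm can be simulated directly on stacks, since a labeled action (a, A, β) replaces the top symbol a by the string β. The following sketch is hypothetical; the stack depth is capped only to keep the search finite.

```python
def pd_accepts(D, word, max_stack=50):
    # D has fields alpha (start symbol) and LA (triples (a, A, beta), beta a tuple of stack symbols).
    stacks = {(D.alpha,)}              # control labels reachable so far; start from gamma_1 = alpha
    for A in word:
        nxt = set()
        for st in stacks:
            if not st:
                continue               # an empty stack admits no further moves
            top, rest = st[0], st[1:]
            for (a, A2, beta) in D.LA:
                if a == top and A2 == A and len(beta) + len(rest) <= max_stack:
                    nxt.add(tuple(beta) + rest)
        stacks = nxt
    return () in stacks                # accepted iff the empty stack (the end symbol) is reached
```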
The resultant relation Res A(D) of the action A(D) coincides with the resultant
relation Res D of the algorithm D. This is an immediate consequence of the following
simple observations:
(i) If (s0 , . . . , sn ) is a terminated run of situations, where si = (wi , γi ),
i = 0, 1,. . ., n, then (w0 , w1 ,. . ., wn ) is a realizable performance of A(D).
(ii) If (w0 , w1 , . . . , wn ) is a realizable performance of A(D), then there exist labels
α = γ0 , . . . , γn = ε such that (w0 , γ0 ), . . . , (wn , γn ) is a terminated run of
situations.
The following theorem characterizes the compound actions of pushdown algo-
rithms.
Theorem 2.4.3 Let M = (W, R, A) be an elementary action system. Then for every
non-empty compound action B ∈ CA which does not contain the empty word ε the
following conditions are equivalent:
(i) B is context-free,
(ii) There exists a pushdown algorithm D for M such that B is the total action of D.
Proof (ii) ⇒ (i) Assume B = A(D) for some pushdown automaton D for M. The
idea of the proof is similar to the one applied in the proof of Proposition 2.3.3. The
stack alphabet V is treated here as an auxiliary alphabet, while the members of A D
are terminals. The label α is the start symbol. To the set LA of labeled actions of D
the following finite list P of productions is assigned:

P := {a → Aβ : (a, A, β) ∈ LA}.

The system
G D := (A D , V, P, α)

is then a grammar in Greibach normal form.


It is easy to see that the total action A(D) of D coincides with the set L of words
over A D that can be derived from α by means of leftmost derivations in G D . Theorem
2.4.2 implies that L = L(G D ). Hence B = A(D) = L(G D ). Since every language
generated by a Greibach grammar is context-free, (i) thus follows.
(i) ⇒ (ii) The proof of this part is also easy. Let B be a context-free compound
action (over the finite alphabet AB ). Since B contains only non-empty finite strings
of atomic actions, the language B is generated by a grammar

G = (AB , V, P, α) (2.4.3)

in which every production of P is of the form a → Aβ, where a ∈ V , A ∈ AB , and β


is a possibly empty string of elements of V . (This immediately follows from Greibach
Normal Form Theorem 2.4.2; the auxiliary alphabet V is obviously individually
suited to the language B.)
Let D := (W, V, α, LA) be the pushdown algorithm for M in which the set LA
of labeled actions is defined as follows:

LA := {(a, A, β) : a → Aβ is in P}.

The total action of the algorithm D is easily seen to be equal to the set of words
over AB obtained by means of all leftmost derivations in the grammar (2.4.3). But
in view of Theorem 2.4.2, the latter set of words is equal to the language (over AB )
generated by the grammar (2.4.3). Thus, A(D) = L(G) = B. 
Finite automata over some finite alphabet function according to a simple rule.
They read consecutive symbols in a word while assuming different states. Formally,
this requires assuming a specification of a finite set of states, a next-move function
δ from states and symbols to finite subsets of states, and a distinction between an
initial state and the set of final states of the automaton.
Pushdown automata are much more powerful tools equipped with a restricted
memory. A pushdown automaton can keep a finite stack during its reading a word. The
automaton may use a separate stack alphabet. The ‘next-move’ function indicates,
for each state, the symbol read and the top symbol of the stack, the next state of the
automaton, as well as the string of stack symbols which will be entered or removed
at the top of the stack.
Formally, a pushdown automaton (PDA) is a system

A = (W, Σ, Γ, δ, w0 , α, F) (2.4.4)

where
(i) W is a finite set of states of the automaton
(ii) Σ is an alphabet called the input alphabet
(iii) Γ is an alphabet called the stack alphabet
(iv) w0 is the initial state of the automaton
(v) α ∈ Γ is a distinguished stack symbol called the start symbol
(vi) F ⊆ W is the set of final states
(vii) δ is a mapping from W × (Σ ∪ {ε}) × Γ to finite non-empty subsets
of W × Γ ∗ .
The interpretation of

δ(u, a, Z ) = {(w1 , γ1 ), (w2 , γ2 ), . . . , (wm , γm )},

where w1 , . . . , wm ∈ W and γ1 , . . . , γm ∈ Γ ∗ , is that the PDA in state u, with input


symbol a and Z the top symbol on the stack, can, for any i (i = 1, . . . , m), enter state
wi , replace the symbol Z by the string γi , and advance the input head one symbol.
The convention is adopted that the leftmost symbol of γi will be placed highest on
the stack and the rightmost symbol lowest on the stack. It is not permitted to choose
the state wi and the string γ j for some j ≠ i in one move. In turn,

δ(u, ε, Z ) = {(w1 , γ1 ), (w2 , γ2 ), . . . , (wm , γm )}

says that M in the state u, independent of the input symbol being scanned with Z
the top symbol on the stack, enters one of the states wi and replaces Z by γi for
i = 1, . . . , m. The input head is not advanced.
To formally describe the configuration of a PDA at a given instant one intro-
duces the notion of an instantaneous description (ID). Each ID is defined as a triple
(w, x, γ ), where w is a state, x is a string of input symbols, and γ is a string of stack
symbols. According to the terminology adopted in this chapter, each instantaneous
description s = (w, x, γ ) can be regarded as a possible situation of the automaton—
w is the state of the automaton in the situation s while the label (x, γ ) of the situation
s defines the string x of input symbols read by the automaton and the string γ of
symbols on the stack.
If M = (W, Σ, Γ, δ, w0 , α, F) is a PDA, then we write (u, ax, Zγ ) ⊢M (w, x, βγ )
if δ(u, a, Z ) contains (w, β); here a is an input symbol or ε. ⊢∗M is the
reflexive and transitive closure of ⊢M . L(M) is the language defined by final state
of M. Thus,

L(M) := {x ∈ Σ ∗ : (w0 , x, α) ⊢∗M (w, ε, γ ) for some w ∈ F and γ ∈ Γ ∗ }.
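A single move of a PDA in the sense of (2.4.4) can be sketched on instantaneous descriptions as follows. The encoding is hypothetical: δ is a dictionary from triples (state, input symbol or the empty string for ε, stack symbol) to sets of pairs (state, string of stack symbols).

```python
def pda_step(delta, id_):
    # One-move relation: from the ID (w, x, gamma) produce all successor IDs,
    # trying an epsilon move and, if input remains, a reading move on x[0].
    w, x, gamma = id_
    succ = set()
    if not gamma:
        return succ                       # no move is possible on an empty stack
    Z, rest = gamma[0], gamma[1:]
    moves = [("", x)]                     # epsilon move: the input head is not advanced
    if x:
        moves.append((x[0], x[1:]))       # reading move: consume one input symbol
    for a, x_rest in moves:
        for (w2, beta) in delta.get((w, a, Z), set()):
            succ.add((w2, x_rest, tuple(beta) + rest))
    return succ
```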

Theorem 2.4.4 The class of languages accepted by PDAs coincides with the
class of context-free languages.
The book by Hopcroft and Ullman (1979) contains a proof of the above theorem and
other relevant facts concerning various aspects of PDA’s. 
Turing machines can be represented as situational action systems too. The tran-
sition relations between possible situations in such systems are defined in a more
involved way however.
The characteristic feature of iterative and pushdown algorithms is that they pro-
vide global transformation rules of possible situations. In the simple case of iter-
ative algorithms, transformation rules are defined by means of a finite set LA of
labeled actions and then by distinguishing the set of all possible pairs of sequences
of labels (a0 , . . . , an ) and atomic actions A1 . . . An such that a0 = α, an = ω, and
(ai , Ai+1 , ai+1 ) ∈ LA.
The conjugate sequences (a0 , . . . , an ) and (A1 , . . . , An ) define ways of trans-
forming some situations into others. The transition from a situation labeled by ai
to a situation labeled by ai+1 is done through performing the action Ai+1 (for
i = 0, . . . , n − 1). The rules neither distinguish special states of the system nor
do they impose any restrictions on transitions between the states corresponding to
situations other than what relation R dictates. To put it briefly, they abstract from the
local properties of the action system.
Schemes of action which are not captured by the above classes of algorithms can
be easily provided. Let us fix an atomic action A ∈ A. Let Φ ⊂ W be a non-empty
set of states. The program ‘while Φ do A’ defines the way the action A should be
performed: whenever the system is in a state belonging to Φ, iterate performing the
action A until the system moves out of Φ. (We assume here for simplicity that A
is atomic; the definition of ‘while – do’ programs also makes sense for compound
actions.) The program ‘while Φ do A’ is identified with the set of all operations of
the form
u 0 A u 1 A u 2 . . . u n−1 A u n (2.4.5)

such that u0 , . . . , un−1 ∈ Φ and un ∉ Φ. (The above program also comprises infinite
operations
u 0 A u 1 A u 2 . . . u n−1 A u n . . .

such that ui ∈ Φ for every i ≥ 0.)
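In the deterministic case, where the atomic action A is a (partial) function on states, the operations of ‘while Φ do A’ can be generated as in the sketch below; the names are hypothetical, and the step bound is only a guard against divergent runs.

```python
def run_while(Phi, A_fun, u, max_steps=1000):
    # Returns the finite operation u0 A u1 ... A un with u0, ..., u(n-1) in Phi and un outside Phi,
    # or None if the program gets stuck (A undefined) or the bound is exceeded (a divergent run).
    trace = [u]
    for _ in range(max_steps):
        if trace[-1] not in Phi:
            return trace
        nxt = A_fun(trace[-1])
        if nxt is None:
            return None
        trace.append(nxt)
    return None
```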
We claim that the above program is not determined by a pushdown algorithm. This
statement requires precise formulation. To this end we define, for every pushdown
algorithm D = (W, V, α, LA), the notion of the program Pr (D) corresponding to
the algorithm D (see Sect. 1.8).
Pr (D) is the set of all finite operations u 0 A1 u 1 . . . u n−1 An u n with A1 . . . An
ranging over the total action A(D) of D. (As mentioned earlier, the notion of a
program is usually conceived of as a certain syntactic entity. Here it is identified with
its meaning; i.e., as a set of computations.)
Let M = (W, R, {A}) be a normal elementary action system in which A is a total
unary function on W . Let Φ be a proper non-empty subset of W closed with respect
to A. (This means that for every pair of states u, w ∈ W , u ∈ Φ and u A w imply
that w ∈ Φ.)

Proposition 2.4.5 There does not exist a pushdown algorithm D for M such that

Pr (D) = while Φ do A. (2.4.6)

Proof It follows from the definitions of A and Φ that:

For any n ≥ 1, there exist states u0 , . . . , un such that u0 A u1 . . . un−1 A un
and u0 , . . . , un ∈ Φ.     (2.4.7)

We apply a reductio ad absurdum argument. Suppose that (2.4.6) holds for some
algorithm D. It is clear that A(D) is a compound action over the one-element alphabet
{A}. Let A . . . A be a word of A(D) of length n. As (2.4.6) holds, we have that for any
string (u 0 , . . . , u n ) of states, u 0 A u 1 . . . u n−1 A u n implies that u 0 , . . . , u n−1 ∈ Φ
and un ∉ Φ. But this contradicts (2.4.7). 

The program ‘while Φ do A’ defines a situational system M s over M. (The system


M need not be normal.) The set V of labels consists of two elements, V := {a, ω},
where ω is the terminal label and the label a is identified with the name of the action
A.
S := W × V is the set of possible situations. A pair (u, a) ∈ S has the following
interpretation: u is the current state of the system and the action A is performed.
Analogously, (u, ω) is read: u is the current state of the system and no action is
performed in u. S0 := Φ × {a} is the set of initial situations, while the members of
Sω := (W \ Φ) × {ω} are terminal situations. The relation Tr of the transition between
situations is defined as follows:
Tr(s, t) if and only if either s = (u, a), t = (w, a), u ∈ Φ, w ∈ Φ and u A, R w,
or s = (u, a), t = (w, ω), u ∈ Φ, w ∉ Φ and u A, R w.
Let f : W × V → W be the projection from S onto W . The six-tuple

M s := (W, R, {A}, S, Tr, f )


is called the situational action system associated with the program ‘while Φ do A’
on M.
Finite runs of situations are defined in the usual way; they are sequences of situ-
ations
(s0 , . . . , sn ) (2.4.8)

such that s0 ∈ S0 and Tr(si , si+1 ) for all i ≤ n − 1. The run (2.4.8) is terminated if
sn ∈ Sω .
Not every run (2.4.8) can be prolonged to a terminated run. It may happen that
for some sn = (u n , a) there are no states w such that u n A, R w. The program gets
jammed in the state u n .

2.5 The Ideal Agent

A theory of action should distinguish between praxeological and epistemic aspects


of action. (This issue is also discussed in the last chapter of this book.) In this work,
we will not discuss at length such praxeological elements of action as the cost of
action, available means, money, etc. These ‘hard’ praxeological factors are en bloc
represented by the relation R of direct transition between states. These problems are
closely connected with practical reasoning and with the studies on deliberation in
particular. As Segerberg (1985, p. 195) points out: “It is well-known how crucially
deliberation depends on the faculties and outlook of the agent, but it depends not
only on his knowledge, beliefs and values, but also how he perceives the situation,
what ways he has of acquiring new situation, his imagination, what action plans he
has available and what decision he employs.”
The very nature of agency and the problem of the epistemic status of agents in
particular are the most difficult issues that a theory of action has to resolve. In its
lexical meaning, the theory typically describes an action as behaviour caused by an
agent in a particular situation. The agent’s desires and beliefs (e.g. my wanting a
glass of water and believing the clear liquid in the cup in front of me is water) lead to
bodily behavior (e.g. reaching over for the glass). According to Davidson (1963), the
desire and belief jointly cause the action. But Bratman (1999) has raised problems
for such a view and has argued that one should take the concept of intention as basic
and as not analyzable into beliefs and desires (the Belief-Desire-Intention model of
action).5

5 An interesting example which might be discussed with the problem of agency is that of micro-
management where pathological relations hold between individual and collective agents. Roughly,
micromanagement is defined as “attention to small details in management: control of a person or
a situation by paying extreme attention to small details”. The notion of micromanagement in the
wider sense describes social situations where one person (the micromanager) has an intense degree
of control and influence over the members of a group. Often, this excessive obsession with the most
minute of details causes a direct management failure in the ability to focus on the major details.
We make in this section the assumption that the agents operating a situational
action system
M s = (W, R, A, S, Tr, f )

are omniscient in the generous sense. To be more specific, suppose a1 , . . . , an is a list


of agents operating the system M s . It is assumed that :
(1) each agent ai (i = 1, . . . , n) knows all possible states of the system, i.e.,
the elements of W , and all possible direct transitions between states, i.e., the
elements of R;
(2) each agent ai (i = 1, . . . , n) knows the elements of S, Tr, and f ;
(3) each agent ai (i = 1, . . . , n) knows, for every A ∈ A, not what will in fact
happen, but what can happen if the action A is performed, i.e., for every u ∈ W
and every A ∈ A, the agent ai knows the elements of the set {w ∈ W : u A w};
(Postulates (1)–(3) thus ascertain that the internal structure of the system (W, R, A),
as well as the situational envelope are completely transparent to the agents a1 ,. . ., an .)
(4) each agent ai always knows the current situation of the system; in particular the
agent knows the current state of the system;
(The postulates (1)–(4), thus, enable each agent ai to state if a quite arbitrary action
A ∈ A is performable (or totally performable) in the current situation.)
(5) if ai is the agent of an action A ∈ A in a situation s and A is performable in s,
then ai knows what will happen, if he performs A in s.
The principle (5) thus says that the actions performed by the agent are completely
subject to his free will. It is not assumed that to each atomic action A only one agent
is assigned, where he is the same in all possible situations. The agents of A may vary
from one situation to another. We state however that every action is carried out by
at most one agent in a given situation (the agent who actually performs the
action). Even if the agents are ideal, if the system is operated by more than one agent,
they may not foresee the evolution of the system. The agent may not know who is
going to perform an action in the next move, what action the other agent is going to
perform and how the action will be performed. (The first case is obviously excluded
in a game of chess but the other two are not.) It is conceivable that in some situation s
there may be many agents who are allowed to perform certain actions. For example,
agent a1 could perform action A1 or A2 in s and agent a2 could perform A3 , A4 or
A5 in the same situation s. This may happen in some situational action systems. The
relation Tr of direct transition between situations allows for many possible courses
of events commencing with s. Whether agent a1 or agent a2 is going to make a move
in s may be a random matter; the situation s and the relation Tr need not determine
this accurately.
Free will, according to its lexical meaning, is the agent’s power of choosing and
guiding his actions (subject to limitation of the physical world, social environment
and inherited characteristics). Free will enables the agent, in a particular situation in
which he can act, both to choose an action he would like to perform (if there is more
than one possible action he can perform in this situation) and to be completely free
in his way of accomplishing it. In the framework of the formalism accepted here,
the fact that an action A is performed according to the agent’s free will means only
that the moment the action is undertaken (a situation s), the system may pass to any
of the states from the set { f (s ) : s Tr s and f (s) A f (s )}. The free agent is not
hindered from transferring the system to any of the states belonging to the above set
(provided that the set is non-empty); his practical decision in this area is the result
of a particular value, and so something that in the given situation the agent considers
proper.
The notion of an agent should be widely understood; he can be a real man or
a group of people (a collective agent); or a man cooperating with a machine or a
robot. A detailed analysis of the concept of agent is not necessary for the present
investigation. We want, however, this notion to be devoid of any anthropocentric
connotation. (This view is not inconsistent with the above postulates (1)–(5) if the
meaning of the phrase ‘the agent knows’ is properly understood.)
Realistic theories of action should be based on more restrictive postulates than
those defining the epistemic and praxeological status of ideal agents. For instance,
in the game of bridge the players never know the current distribution of cards—the
postulate (4) is not true in this case. The agents may not know the members of W ,
as well as the pairs of states which are in the relation R. The agent of an action may
not know all the possible performances of this action. The knowledge of S and Tr
may also be fragmentary.
In Part II an approach to action theory is outlined which takes into account the
above limitations and assumes that agents’ actions are determined and guided by
systems of norms. Norms determine what actions in given conditions are permitted
and which are not. Agents’ actions are then controlled by the norms. The relationship
between this concept of action and deontic logic is also discussed there.
A theory of action is the meeting ground of many interests, and therefore many
approaches. The formal approach attempts to answer the following question: how
should actions be described in terms of set-theoretic entities? Any answer to this
question requires, of course, some ontological presuppositions. In this book we pro-
pose, in a tentative spirit, a relatively shallow analysis of this subject. First, we
assume that there exist states of affairs and processes. Furthermore, we claim that
these simple ontological assumptions can be rendered adequately into the language
of set theory. This leads to the definition of a discrete system. The set W of states is
the mathematical counterpart of the totality of states of affairs, and R, the relation of
direct transitions, represents possible processes (i.e., transitions) between states of
affairs. Apart from these concepts the category of situations is singled out. It is an
open problem whether situations can be faithfully represented by set theory. We do
not discuss this issue here because we would have to begin with a general account of
what a situation is. Instead, we aim at showing that in some simple cases the theory
of sets provides a good framework for the concepts of action and situation.
2.6 Games as Action Systems

In this section, as an illustration of the theory expounded above, a class of idealized,


infinite mathematical games will be presented. These games will be modeled as
simple, situational action systems.
We recall that the symbols N and ω interchangeably denote the set of natural
numbers with zero. In set theory the set of natural numbers is defined as the least
non-empty limit ordinal. The elements of ω are also called finite ordinals. Thus 0 = ∅
is the least natural number and n + 1 = {0, 1, . . . , n}, for all n ∈ ω, according to the
standard set-theoretic definition of natural numbers.
If n ∈ ω, then n A is the set of functions from n to A. Note that 0 A = {∅}.
The empty function ∅ is also denoted by 0. The length of the sequence 0 is 0.
We then define

<ω A := ⋃ { n A : n ∈ ω }.

If p ∈ n A, we also write p = (a0 , a1 , . . . , an−1 ), or simply p = a0 a1 . . . an−1 ,


where ak := p(k) for k = 0, 1, . . . , n − 1. | p| is the length of p. (Formally, | p|
coincides with the domain of p.)
We consider the scheme of an infinite game between two players, denoted respec-
tively by I (the first player) and II (the second player).
Let A be a fixed non-empty set, e.g., the set of natural numbers.
(i) The players alternately choose elements of A. Each choice is a move of the
game; and each player before making each of his moves is privy to all the
previous moves.
(ii) Infinitely many moves are to be performed;
(iii) Player I starts the game.
The resulting infinite sequence of elements of A

a = a0 , a1 , . . . , an , an+1 , . . .

is called a play of the game. The even numbered elements of a, viz., a2n , n = 0, 1, . . .
are established by player I while the odd-numbered elements of a, a2n+1 , n = 0, 1, . . .
by player II.
Each set Z of infinite sequences of elements of A, Z ⊆ ω A determines a game
G(Z ). Z is called the payoff of the game G(Z ).
A play a of the game G(Z ) is won by player I if it is the case that a ∈ Z . If a ∉ Z ,
the play a is won by II. Thus the game G(Z ) is decidable in the sense that for each
play a either I or II wins a (there are no draws).
Every initial segment a↾n = a0 , a1 , . . . , an−1 of a possible play a is called a
partial play. <ω A is the set of all partial plays.
Let πn be the projection of ω A onto the nth axis, that is, πn (a) is the nth term of
the sequence a for all a ∈ ω A, n ∈ ω.

Suppose that πn (Z ) is a proper subset of A for some odd n. Then II wins the
game irrespective of the first n moves a0 , a1 , . . . , an−1 made by the players. In the
nth step player II selects an arbitrary element an ∈ A \ πn (Z ). Then for any play a
being a continuation of the initial segment a0 , a1 , . . . , an it is the case that a ∉ Z .
(For if a ∈ Z , then πm (a) ∈ πm (Z ), for all m ∈ ω, which is excluded for m = n.) Thus the
play a is won by II. Intuitively, a winning strategy for II is any function S defined
on all possible positions of odd length such that S( p) ∈ A \ πn (Z ) for any position
p = a0 , a1 , . . . , an−1 of length n with n defined as above.
We shall express the above remarks in a more formal setting as follows. We shall
limit ourselves to deterministic games, i.e., those in which, for each player and each
position, there is only one option open to him for continuing the game; this means
that the strategy available to him is a function.

Definition 2.6.1 Let A be a non-empty set.


1. The set W := <ω A is called the set of possible positions or, in the terminology
of action theory, it is called the set of possible states.
2. Any function S : W → A is called a deterministic strategy in A.
3. A play in A is any mapping P : ω → A. 

Lemma 2.6.2 For any deterministic strategy S in A there exists a unique play P in
A such that P(n) = S(P↾n) for all n ∈ ω.

(Note that P(0) = S(P↾0) = S(∅) = S(0).)


Proof This is an application of the known Principle of Definability by Arithmetic
Recursion. 
Putting P(n) = an for all n ∈ ω, the above lemma says that a0 = S(0) and
an = S(a0 , a1 , . . . , an−1 ) for all n. P is called the play played according to the
strategy S.
From the perspective of action theory, each strategy S defines a binary relation
R S on the set W = <ω A of states—possible positions. R S is the relation of direct
transition between possible positions. Intuitively, for u, w ∈ W , the fact u R S w
means that w results from u by an application of the strategy S to u: w is the
position obtained from the sequence u by adjoining the element S(u) at the end of u.
Formally, u R S w ⇔df w = u⌢⟨S(u)⟩.
Since the strategy S is deterministic, R S is a function from W to W . It follows
that (W, R S ) is a deterministic discrete system.
We are interested in a mathematical analysis of two-person games. To represent
such games in terms of action systems, we define the operation of the merger of any
two strategies.
The definition of the merger of two strategies takes into account the context-
sensitive aspect of each play, represented by the situation whose component is not
only the finite string of moves made by one player up to some phase of the play, but
also the respective moves of his opponent. Formally, this situation is represented by
the following mathematical construction. The players I and II have certain strategies

S1 and S2 at their disposal. The strategy S1 defines the elementary action AI of the
player I on the set of states W . AI is defined, as above, as the transition relation
between positions imposed by the strategy S1 on the set of positions. Thus, for any
positions p, q ∈ W ,
p AI q ⇔df q = p⌢⟨S1 ( p)⟩.

Analogously one defines the action of the other player: for p, q ∈ W ,

p AII q ⇔df q = p⌢⟨S2 ( p)⟩.

It is clear that the relations AI and AII are both unary functions from W to W .
However, in two-person games the relation of direct transition between states is
defined in a more involved way. According to the rules, each play is represented by
a sequence
a0 , b0 , a1 , b1 , . . . , an , bn , an+1 , . . .

The moves made by I are influenced not only by his earlier choices a0 , a1 , . . . , an
but also by the moves made by the second player, represented by the sequence
b0 , b1 ,. . ., bn . The choice of an+1 is motivated not only by the moves a0 , a1 ,. . ., an
but also by b0 , b1 , . . . , bn . In other words, in the context-sensitive strategy avail-
able for I the consecutive moves performed by him are determined not only by
a0 , a1 , . . . , an but by the sequence a0 , b0 , a1 , b1 , . . . , an , bn . Analogously, in the
context-sensitive strategy of II the (n + 1)-th move of II is determined not by
b0 , b1 , . . . , bn , but by the sequence a0 , b0 , a1 , b1 , . . . , an , bn , an+1 . This sequence
determines the next move bn+1 .
To define properly the relation R of direct transition between states of W we first
define the notion of the merger of two strategies.

Definition 2.6.3 Let S1 and S2 be two strategies in A. The merger of S1 and S2 is


the unique strategy S1 ⊕ S2 in A such that for every p ∈ <ω A,

(S1 ⊕ S2 )( p) := S1 ( p) if | p| is even,
(S1 ⊕ S2 )( p) := S2 ( p) if | p| is odd.

If | p| = 0, i.e., p is empty, p = 0, then

(S1 ⊕ S2 )( p) = S1 (0),

and if | p| = 1, then
(S1 ⊕ S2 )( p) = S2 ( p).

The above definition says that if p = (a0 , b0 , a1 , b1 , . . . , an , bn ), then


 
(S1 ⊕ S2 )( p) = S1 (a0 , b0 , a1 , b1 , . . . , an , bn )

and if p = (a0 , b0 , a1 , b1 , . . . , an , bn , an+1 ), then


 
(S1 ⊕ S2 )( p) = S2 (a0 , b0 , a1 , b1 , . . . , an , bn , an+1 ) .
 
In particular, (S1 ⊕ S2 )(0) = S1 (0) = a0 and (S1 ⊕ S2 )(a0 ) = S2 (a0 ) = b0 .
Applying Lemma 2.6.2 to the strategy S1 ⊕ S2 we get
Corollary 2.6.4 For any strategies S1 , S2 there exists a unique play P in A such
that P(n) = (S1 ⊕ S2 )(P↾n) for all n ∈ ω.
It follows from the above corollary and the definition of S1 ⊕ S2 that

P(2n) = S1 (P↾2n) and P(2n + 1) = S2 (P↾(2n + 1))

for all n ∈ ω.
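As an illustration only, the following Python sketch computes a finite prefix of the play determined by the merger of two strategies. Positions are represented as tuples over A, so len(p) plays the role of | p|; the strategy functions s1, s2 and the bound n_moves are assumptions introduced for this example, not part of the text.

```python
# A minimal sketch (assumed names): positions are tuples over A,
# strategies are functions from positions to elements of A.

def merger(s1, s2):
    """The merged strategy: s1 on even-length positions, s2 on odd-length positions."""
    return lambda p: s1(p) if len(p) % 2 == 0 else s2(p)

def play_prefix(strategy, n_moves):
    """The first n_moves terms of the unique play P with P(n) = S(P restricted to n)."""
    p = ()
    for _ in range(n_moves):
        p = p + (strategy(p),)
    return p

# Illustration over A = natural numbers:
s1 = lambda p: len(p)        # player I plays the length of the current position
s2 = lambda p: p[-1] + 1     # player II adds 1 to the preceding move
print(play_prefix(merger(s1, s2), 8))   # (0, 1, 2, 3, 4, 5, 6, 7)
```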
The relation R of direct transition on W is defined as the transition relation deter-
mined by the merger S1 ⊕ S2 , that is, R = R S1 ⊕S2 . Thus, for any positions p, q ∈ W ,

p R q ⇔df q = p⌢⟨(S1 ⊕ S2 )( p)⟩.

We thus arrive at the definition of an elementary action system M = (W, R,


{AI , AII }) endowed with two atomic actions. M is called the action system corre-
sponding to the two-person game with strategies S1 and S2 .
Taking into account the typology of action systems adopted in Chap. 1, Defini-
tion 1.2.3, we have:

Proposition 2.6.5 The system M = (W, R, {AI , AII }) is deterministic and complete.

Proof Immediate. 

The system M is not normal, because neither AI ⊆ R nor AII ⊆ R. It may also
happen that the intersection AI ∩ AII is non-empty. Therefore the system need not
be separative. One may also easily check that the system is irreversible. This follows
from the fact that p R q implies that | p| < |q|, for any positions p and q.
The system M has another property: the set of states W is ordered by the relation ≤,
where for p, q ∈ W , p ≤ q means that p = q or p is a proper prefix of q. M is
therefore an ordered action system. (Ordered systems are defined in Chap. 3.)
The system M is operated by two agents: I and II. Though each of them is able
to foresee the course of states according to his own strategy, the situation changes
when they play together. As a rule each player does not know his opponent’s strategy
and therefore is unable to predict consecutive positions that will be taken during
the game. In other words, each of them knows the action he will perform, because
each knows his own strategy. However, they individually may not know the relation
of direct transition R because they do not necessarily know the merger S1 ⊕ S2 .
Consequently, the players need not know the unique play P determined by S1 ⊕ S2 .

But during the course of the game, the consecutive positions P(n) of the game are
revealed to both of them. Summing up, according to the remarks from the preceding
section, each of the agents I and II is an ideal one. However, the rules of the game
exclude the possibility that either of them knows the strategy available to his opponent.
In this sense they are not omniscient.
(The problem of agency in games is investigated by many thinkers—see e.g.
van Benthem and Liu (2004). Various “playing with information” games, public
announcements and, more generally, verbal actions are not analysed here.)
The above definition of a deterministic strategy is strong from the epistemic view-
point. So as to continue the game, the strategy S refers to all earlier positions occurring
in the course of the play determined by S. In other words, to make a successive move
in a given position p, the agent knows the sequence p; in particular he knows all the
prefixes of p, and therefore he knows all earlier positions in the play determined by
S. Mathematically, this means that the strategy of an agent is a function defined on
the set of all possible positions W = <ω A. In practice, the agent is not omniscient
and he is able to remember at most two or three earlier moves. It is therefore tempting
to broaden the definition of a deterministic strategy by allowing functions S which
are defined not on <ω A but merely on a subset of the form n A for some (not very
large) n. The organization of the game is then different. For example, for n = 2,
the play determined by such a new strategy S : 2 A → A is defined by declaring
in advance the first two moves a0 and a1 . Then successive moves are determined
by S; the function S takes into account merely the last two moves and determines
the subsequent move. We may call such functions S : n A → A Fibonacci-style
strategies, by analogy with Fibonacci sequences.
Mathematically, such a broadening of the definition of a strategy does not change
the scope of game theory because, as one can easily check, for every Fibonacci-
style strategy S there is a “standard” strategy S′ such that both S and S′ determine
the same play. However, this distinction between the standard and Fibonacci-style
strategies may turn out to be relevant in various epistemic contexts in which agents
with restricted memory resources compete.
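For illustration, here is a minimal Python sketch of such a memory-bounded strategy for n = 2, together with a “standard” strategy determining the same play; the helper names (limited_memory_play, fib, std) are assumptions introduced only for this example.

```python
# A sketch of a memory-bounded ("Fibonacci-style") strategy for n = 2: the
# first two moves are declared in advance, afterwards each move depends
# only on the last two moves.

def limited_memory_play(first_moves, s, n_moves):
    """Generate a play prefix from the declared first moves and a strategy s
    that sees only the last len(first_moves) moves."""
    k = len(first_moves)
    p = tuple(first_moves)
    while len(p) < n_moves:
        p = p + (s(p[-k:]),)
    return p

fib = lambda last_two: last_two[0] + last_two[1]      # the Fibonacci recurrence
print(limited_memory_play((0, 1), fib, 10))
# (0, 1, 1, 2, 3, 5, 8, 13, 21, 34)

# An equivalent "standard" strategy defined on all finite positions:
std = lambda p: (0, 1)[len(p)] if len(p) < 2 else p[-2] + p[-1]
```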
The game is not fully encoded in the elementary action system M. We must
also take into account rules (i)–(iii) above that organize the game. They constitute
the situational envelope of M. More specifically, by a possible situation we shall
understand any element of the set

S := W × {I, II}.

Thus, S is a set of labeled situations with labels being the elements of {I, II}. The
pair (0, I) is called the initial situation. (The symbol “S” also ranges over strategies,
but such a double use of this letter should not lead to confusion.)
The relation of transition Tr between situation is defined in accordance with the
above requirements (i)–(iii). Thus, if s1 , s2 ∈ S and s1 = ( p, a), s2 = (q, b), then

s1 Tr s2 ⇔d f either p is a position of even length, a = I, b = II, and AI ( p, q)


or p is a position of odd length, a = II, b = I, and AII ( p, q).

Intuitively, if p is a position whose length is an even number, then in the situation


( p, I) player I performs the action AI and the situation ( p, I) turns into (q, II), where
q = p⌢⟨(S1 ⊕ S2 )( p)⟩ (= p⌢⟨S1 ( p)⟩). Analogously, if p is a position whose length is
an odd number, then in the situation ( p, II) player II performs the action AII and the
situation ( p, II) turns into (q, I), where q = p⌢⟨(S1 ⊕ S2 )( p)⟩ (= p⌢⟨S2 ( p)⟩).
f : S → W is defined as the projection onto the first axis: if s = ( p, a), then
f (s) := p, for all s ∈ S.
The relation R of direct transition between states of the above action system M
is compatible with Tr; that is, for every pair s1 , s2 ∈ S of situations, if s1 Tr s2 then
f (s1 ) R f (s2 ). Indeed, assume s1 Tr s2 . If s1 = ( p, a), and the length of p is even,
then a = I, s2 = (q, II), where q = p⌢⟨(S1 ⊕ S2 )( p)⟩. Hence p R q, which means that
f (s1 ) R f (s2 ). The case when s1 = ( p, a), and the length of p is odd, is analogously
checked.
It follows from the above remarks that

M s = (W, R, {AI , AII }, S, Tr, f )

is a situational action system. M s is the system associated with the above game.
M s , together with the initial situation (0, I) being distinguished, faithfully encodes
all aspects of the game with one exception, however: the target of the game has not
been revealed thus far. In other words, to have the game fully determined, the payoff
set must be defined. Below we present some remarks on winning strategies.
Let Z be a subset of ω A (i.e., Z is a set of infinite functions from ω to A). Let, as
above, S1 ⊕ S2 be the merger of strategies S1 and S2 and let P = a0 , b0 , a1 , b1 , . . . ,
an , bn , an+1 , . . . be the play determined by S1 ⊕ S2 .
Player I wins the play P for Z if the infinite sequence P belongs to Z . Player II
wins the play P for Z if P does not belong to Z .
S1 is a winning strategy in Z (for player I) if for every strategy S2 chosen by
player II it is the case that the resulting play P = a0 , b0 , a1 , b1 , . . . , an , bn , an+1 , . . .
determined by S1 ⊕ S2 belongs to Z .
S2 is a winning strategy in Z (for the second player) if for every strategy S1 chosen
by player I it is the case that the play P determined by S1 ⊕ S2 is not in Z .
The examples below, illustrating the above notions, are taken from Błaszczyk and
Turek (2007), Sect. 17.3.

Example We assume A = ω. Any two-person play consists in the selection of a sequence


a0 , b0 , a1 , b1 , . . . , an , bn , . . . of natural numbers.
Let Z = ω ω. Then every strategy of I is a winning strategy for I, because for all
n, each selection of numbers a0 , a1 , . . . , an by I guarantees that I wins every play
(irrespective of the corresponding moves b0 , b1 , . . . , bn made by II).
If Z = ∅, then every strategy of II is a winning strategy for II.

Let us consider a less trivial case: Z is a countable subset of ω ω, i.e., Z is a


countable set of infinite sequences zn , n ∈ ω. We define the following strategy for
the second player II:

S2 (c0 , c1 , . . . , c2n ) := zn (2n + 1) + 1,

for all n ∈ ω and all (c0 , c1 , . . . , c2n ) ∈ 2n+1 ω; on positions of even length the value
of S2 may be fixed arbitrarily (say as z0 (0) + 1), since in the game II moves only at
positions of odd length.

S2 is called the diagonal strategy. (Here zn = (zn (0), zn (1), . . .) and zn (2n + 1)
is the value of zn at 2n + 1.)

Proposition 2.6.6 The diagonal strategy is a winning strategy for the second player.

Proof Let S1 be an arbitrary strategy of player I. The play P defined by S1 ⊕ S2 is


represented by an infinite sequence

a0 , b0 , a1 , b1 , . . . , an , bn , an+1 , bn+1 , . . .

where bn = zn (2n + 1) + 1 for n = 0, 1, 2, . . ., because bn is the move made by II at
a position of odd length 2n + 1.

It suffices to notice that a0 , b0 , a1 , b1 , . . . , an , bn , an+1 , . . . is not in Z . Indeed,
the above sequence can be written as

a0 , z0 (1) + 1, a1 , z1 (3) + 1, a2 , z2 (5) + 1, . . . , an , zn (2n + 1) + 1, . . . (∗)

The above sequence differs from each of the sequences zn , n ∈ ω, because if (∗) were
equal to some zn , the element zn (2n + 1) of zn would be equal to the element of (∗)
with index 2n + 1; i.e., it would be equal to zn (2n + 1) + 1, which is excluded.
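The diagonal idea can be illustrated by the following Python sketch; the family z (here truncated to ten sequences zn(i) = n + i), the strategy for player I and all helper names are assumptions, and only a finite prefix of the play is examined.

```python
# A sketch of the diagonal strategy: player II's n-th move sits at index 2n+1
# of the play and is chosen to differ there from z_n.

def diagonal_s2(z):
    """Strategy for II: at a position of odd length 2n+1 answer z_n(2n+1) + 1."""
    def s2(position):
        n = (len(position) - 1) // 2      # II is making its n-th move
        return z[n](2 * n + 1) + 1
    return s2

z = [lambda i, n=n: n + i for n in range(10)]   # an assumed countable family z_n
s1 = lambda p: 0                                # an arbitrary strategy for player I
s2 = diagonal_s2(z)

play = ()
for k in range(20):                             # a prefix a0, b0, a1, b1, ... of the play
    play = play + ((s1 if k % 2 == 0 else s2)(play),)

# The play disagrees with each listed z_n at index 2n+1:
assert all(play[2 * n + 1] != z[n](2 * n + 1) for n in range(10))
```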

A set Z ⊆ ω ω is said to be determined if in the two-player game G(Z ) one of the


players, I or II, has a winning strategy. According to the above proposition, every
countable set Z ⊆ ω ω is determined. Determined sets, when taken collectively, do
not show regular algebraic properties. For example, one cannot prove that they form
a Boolean algebra. In 1975 Martin proved the following theorem:

Theorem Assume the set theory ZF of Zermelo–Fraenkel. Every Borel subset of ω ω
is determined.

The situation changes if one introduces the following set-theoretic axiom:


Axiom of Determinacy (AD) Every subset of ωω is determined. 
It is easy to see that AD is equivalent to the following statement:

For every countably infinite set A, every subset of ωA is determined. 



Other games, such as Banach–Mazur games or parity games, can also be repre-
sented in a similar manner as situational action systems. (Parity games are played
on coloured directed graphs. These games are history-free determined. This means
that if a player has a winning strategy then he has a winning strategy that depends
only on the current position in the graph, and not on earlier positions.)
The games presented in this chapter are all qualified as games with perfect infor-
mation, which means that they are played by ideal agents in the sense expounded in
Sect. 2.5. Card games, where each player’s cards are hidden from other players, are
examples of games of imperfect information.

2.7 Cellular Automata

Cellular automata form a class of situational action systems. Here, we shall only
outline the functioning of one-dimensional cellular automata, the latter being a par-
ticular case of the general notion. Suppose we are given an infinitely long tape (in both
directions) divided into cells. From the mathematical viewpoint, the tape is identified
with the set of integers Z . Cells are, therefore, identified with integers. Each cell has
two immediate neighbors, viz. the left and the right adjacent cells. There are two
possible states for each cell, labeled 0 and 1. The state of a given cell c, together with
the states of its immediate neighbors, fully determines a configuration or possible
situation of the cell c. There are 8 = 2³ possible situations of each cell. Possible
configurations are written in the following order

111, 110, 101, 100, 011, 010, 001, 000,

where the middle digit indicates the current state of a particular cell c. The rules
defining a cellular one-dimensional automaton specify possible transitions for each
such situation. These rules are identified with functions assigning to each situation
a state from {0, 1}. There are, therefore, 256 = 2⁸ such rules. Each rule individually
defines a separate cellular automaton and its action. For instance,

possible situations 111, 110, 101, 100, 011, 010, 001, 000,
new state for center cell 0 1 1 0 1 1 1 0

is a rule which assigns to each possible configuration of the center cell a new state
of this cell.
Stephen Wolfram proposed a scheme, known as the Wolfram scheme, to assign
to each rule a number from 0 to 255. This scheme has become standard (see e.g.
Wolfram 2002).
As each rule is fully determined by the sequence forming new states for the center
cell, this sequence is encoded first as the binary representation of an integer and then

by its decimal representation. This number is taken to be the rule number of the
automaton. For example, the above rule is encoded by 01101110 in binary, which is 110 in
base 10. This is, therefore, Wolfram’s rule 110.
In the above infinite model, each cell c has two immediate neighbors: c−1 (the left
one) and c + 1 (the right one). Each mapping u : Z → {0, 1} defines the global
state of an automaton. (One may think of an automaton as a system of conjugate
cells subject to a given transformation rule.) Each rule r determines transformations
of global states of the automaton. Suppose that the following sequence of states (the
second row) has been assigned to the displayed finite segment of the sequence of
integers (cells):

. . . , −3, −2, −1, 0, 1, 2, 3, 4, 5, 6, 7, . . .


1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, . . .

The rule 110 turns the above states assigned to −2, −1, 0, 1, 2, 3, 4, 5, 6 to the fol-
lowing sequence

. . . , −3, −2, −1, 0, 1, 2, 3, 4, 5, 6, 7, . . .


... 1, 1, 1, 1, 1, 1, 1, 0, 0, ...

(The new states of the cells −3 and 7 are not displayed because the initial states
assigned to the left neighbour of −3 and the right neighbour of 7 have not been
disclosed.) Further iterations of the rule produce new global states.
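For illustration, a minimal Python sketch of the Wolfram encoding and of one synchronous update step follows. It uses the finite, circular variant described in the next paragraph, and the helper names and the initial global state are assumptions introduced for this example.

```python
# A minimal sketch of a Wolfram rule table and one update step on a
# circular tape of 0/1 cells.

def make_rule(number):
    """Rule table {configuration triple -> new centre state} for a Wolfram number."""
    bits = format(number, "08b")                        # 110 -> "01101110"
    configs = [(a, b, c) for a in (1, 0) for b in (1, 0) for c in (1, 0)]
    return dict(zip(configs, (int(d) for d in bits)))   # order 111, 110, ..., 000

def step(cells, rule):
    """One synchronous update of all cells, with wrap-around neighbours."""
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

rule110 = make_rule(110)
tape = [0, 0, 0, 0, 1, 0, 0, 0]      # an assumed initial global state
for _ in range(5):
    print(tape)
    tape = step(tape, rule110)
```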
One may also define an elementary cellular automaton with a finite number of
cells, say 1, 2, . . . , n by declaring that 1 is the right neighbour of n and n is the left
neighbour of 1. (Of course, n −1 is the left neighbour of n and 2 is the right neighbour
of 1.) (It is also natural to take as cells the n complex nth roots of unity.)
Each Wolfram rule r may be regarded as a certain context-dependent substitu-
tion. In each sequence of zeros and ones (of the type of integers) it replaces each
consecutive element of the sequence by a new one by looking at the left and right
immediate neighbours of this element in the sequence and then applying the adopted
rule. Wolfram rules act as rules with limited memory. In order to make successive
substitutions in a 0-1 sequence, the rule merely assumes the knowledge of three con-
secutive elements of the sequence. One may complicate Wolfram rules in various
ways such as by taking into account more distant neighbours.
From the algebraic viewpoint, each Wolfram rule may be treated as a ternary oper-
ation of the two-element (Boolean) algebra {0, 1}. In turn, each (infinite) elementary
cellular automaton is regarded as an action of this algebra on the uncountable set
{0, 1}^Z (the set of all functions from Z to {0, 1}) in the way which was explained above.
It is worth noticing that the automaton determined by the rule 110 is capable of
universal computation (Cook 2004) and is therefore equivalent to a universal Turing
machine.
Suppose a Wolfram rule r has been selected. Each individual cell c may be then
regarded as a simple two-state situational action system M c = (W, R, A, S, Tr, f )
which we shall describe. Here W = {0, 1} is the set of states. Possible situations s

are labeled states, s = (u; a), where u is the state of s and a the label of s. Labels
are ordered pairs of states, a = (u L , u R ), where u L is a state of the left neighbour
of c and u R is a state of the right neighbour of c. Possible situations may therefore
be identified with configurations of c. Thus, if e.g., 011 is a configuration of c, then
s = (1; 01) is the possible (labeled) situation corresponding to 011: 1 is the current
state of c and 0 and 1 are the current states of the left and the right neighbour of c
respectively. There are, of course, 4 = 2² labels and 8 possible situations of M c .
Label = {00, 01, 10, 11} is the set of labels and S is the set of possible situations.
The work of M c is not, however, organized in the same way as in the case of the
situational action systems corresponding to iterative or pushdown algorithms.
Each label a ∈ Label gives rise to an atomic deterministic action Aa on W .
For suppose u ∈ W and a = (u L , u R ). Then, (u L , u, u R ) is a configuration. The
rule r assigns to this configuration a unique state w. We then put: Aa (u) := w. (The
correspondence a → Aa , a ∈ Label, need not be one-to-one. It is dependent on the
choice of a Wolfram rule.) We, therefore, have at most 4 atomic actions, all being
unary functions. We put:

A := {Aa : a ∈ Label}.

Rule r determines transitions between states. As each state u is nested in some


configuration (there are four such configurations for u), the rule r defines at most
four possible immediate successors of u. Thus,
u R w if and only if for some configuration (u L , u, u R ) it is the case
that r assigns the state w to (u L , u, u R ).
Equivalently,
u R w if and only if w = Aa (u) for some a ∈ Label.
It follows that (W, R, A) is a deterministic and normal action system. (But R need
not be a function.)
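A small Python sketch of the atomic actions Aa of a single cell under rule 110 follows; the rule table and the remaining names are illustrative assumptions, not part of the text.

```python
# A sketch of the atomic actions A_a determined by a Wolfram rule on a cell.

# Rule 110, written as {(left, centre, right): new centre state}:
rule110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

labels = [(uL, uR) for uL in (0, 1) for uR in (0, 1)]      # Label = {00, 01, 10, 11}
atomic_actions = {a: (lambda u, a=a: rule110[(a[0], u, a[1])]) for a in labels}

# The induced transition relation R on W = {0, 1}:
R = {(u, A(u)) for u in (0, 1) for A in atomic_actions.values()}
print(R)   # for rule 110 both (0, 0) and (0, 1) occur, so R is not a function here
```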
The function f is a projection assigning to each possible situation s = (u; a) the
state u.
It remains to define the transition relation Tr between possible situations. How do
situations evolve? Suppose (u L , u, u R ) is a current configuration of the cell c, where
u is the current state of c. Rule r merely unambiguously determines the next state
of c. The clue is that the configuration being the successor of (u L , u, u R ) depends
not only on the current states u L and u R of the left and right neighbours of c, i.e. the
current states of the cells c − 1 and c + 1, but it is also determined by the current
states of farther neighbours, these being the cells c − 2 and c + 2. Thus, in order
to predict at least one move in a trajectory of situations, one has to know a quintuple
of states (u L L , u L , u, u R , u R R ) with u L L being the current state of c − 2, the left
neighbour of c − 1, and u R R being the current state of c + 2, the right neighbour of
c +1. Rule r then assigns to the configurations (u L L , u L , u) and (u, u R , u R R ) certain
states, say w1 , w2 , respectively, and it assigns to the configuration (u L , u, u R ) a state

w. Taking this into account, we see that the configuration (w1 , w, w2 ) is a successor
of (u L , u, u R ).
The above remarks enable one to define the relation Tr of direct transitions between
possible situations. Note that generally Tr need not be a function because the con-
figuration (w1 , w, w2 ) depends not only on (u, u R , u R R ) but also on states u L L and
u R R . On the other hand, any situation s has at the most four intermediate successors,
according to Tr. Each such successor is determined by states of the cells c − 2 and
c + 2.
The relation R of direct transition between states of the above action system
(W, R, A) is compatible with Tr, i.e., for every pair s1 , s2 ∈ S of situations, it is the
case that if s1 Tr s2 then f (s1 ) R f (s2 ). It follows that M c = (W, R, A, S, Tr, f ) is
a well-defined situational action system.
The action systems M c are identical, for all cells c. The functioning of each system
M c is a kind of a local feedback among M c and the surrounding systems M c−2 , M c−1 ,
M c+1 and M c+2 , for all cells c. The structure of this feedback is determined by a
definite Wolfram rule r .
The evolution of the system of automata M c , c ∈ Z , (Z —the set of integers) is
initialized by declaring a global initial state (a global configuration) of the system.
This is done by selecting a mapping u : Z → {0, 1}. (For example, one may assume
that u assigns the zero state to each cell.) The system is set in motion by the action
of the Wolfram rule r for the cell u(0) and its neighbours. Then the evolution of the
system proceeds spontaneously.
The theory of one-dimensional cellular automata is a particular case of the more
general theory of finitely many dimensional automata. We shall restrict our atten-
tion to the two-dimensional case because it gives a sufficient insight into the basic
intuitions underlying the general definition. A standard way of depicting a two-
dimensional cellular automaton is with an infinite sheet of graph paper and a set
of rules for the cells to follow. Each square is called a ‘cell’ and each cell has two
possible states, black and white. The ‘neighbours’ of a cell are the 8 squares touching
it. Mathematically, each cell (or square) is identified with a pair of integers (a, b).
Any pair (c, d) ≠ (a, b) with |c − a| ≤ 1 and |d − b| ≤ 1 is therefore a neighbour
of (a, b). (All these pairs form the Moore neighbourhood of (a, b). One may also
take a non-empty subset of the defined set of pairs (c, d) to form a narrower neigh-
bourhood of (a, b); in a particular case one gets the von Neumann neighbourhood of
(a, b) by taking the pairs of the form (a ± 1, b) and (a, b ± 1).) For such a cell and
its neighbours, there are 512 (= 2⁹ ) possible patterns. For each of the 512 possible
patterns, the rule table states whether the center cell will be black or white on the
next time interval.
It is usually assumed that every cell in the universe starts in the same state, except
for a finite number of cells in other states, often called a (global) configuration. More
generally, it is sometimes assumed that the universe starts out covered with a periodic
pattern and only a finite number of cells violate that pattern. The latter assumption
is common in one-dimensional cellular automata.

2.8 A Bit of Physics

We have been concerned thus far with discrete situational action systems. A quite
natural question which can be posed in this context is: what about continuous situa-
tional action systems? How should we understand them? We shall not dwell on this
topic and discuss it thoroughly here. The paradigmatic examples of one-agent contin-
uous situational action systems occupy a central place in classical mechanics and the
behaviour of these systems is well described by appropriate differential equations.
An analysis of the mathematical apparatus pertinent to classical mechanics might
provide some clues which would enable one to build a theory of continuous action
systems in a much wider conceptual setting than that provided by physics. We shall
now briefly recall some basic facts, which will be very familiar to physicists. Suppose
we are given a physical body consisting of finitely many particles whose individual
sizes may be disregarded. We may therefore identify the body with a finite set of
material points. Hamiltonian mechanics, being a reformulation of classical principles
of Newtonian physics, was introduced in the 19th century by the Irish mathematician
William Rowan Hamilton. The key role is played by the Hamiltonian function H.
The value of the Hamiltonian is the total energy of the system being described. For a
closed system, it is the sum of the kinetic and potential energy of the system. The time
evolution of the system is expressed by a system of differential equations known as
the Hamiltonian equations. These equations are used to describe such systems as a
pendulum or an oscillating spring in which energy changes from kinetic to potential
and back again over time. Hamiltonians are also applied to model the energy of other
more complex dynamic systems such as planetary orbits in celestial mechanics.
Each state w of the system is represented by a pair of vectors w = (p, q) from a
vector space V. To simplify matters, we shall assume that V is a finite-dimensional
real space, say V = Rn for some positive n. (The number n is called the number of
degrees of freedom of the system.) V has therefore well-established topological properties.
(Topology is introduced by any of the equivalent norms on V.) The vector q represents
the (generalized) coordinates of the particles forming the system, and p represents the
(generalized) momenta of these particles (conjugate to the generalized coordinates).
If V is endowed with a Cartesian coordinate system, then the vectors p are “ordinary”
momenta. In the simplest case, when the system consists of one particle only, not
subject to external forces, one may assume that q = (x, y, z) is a vector in the three-
dimensional Euclidean space R3 , giving the coordinates of the particle, and that the
vector p = ( px , p y , pz ) represents the momentum of the particle.
W := V × V is the set of possible states of the system. W is called a phase
space. We define an action system which is operated by one agent. We call him
Hamilton. He performs one continuous action only. His action is identified with
the Hamiltonian function H. The Hamiltonian H = H(p, q, t) is a (scalar valued)
function and it specifies the domain T of values in which the time parameter t varies.
The domain of H is a subset of V × V × T but we assume that the Hamiltonian H
makes sense for all t ∈ T . The set T is usually an open interval on the straight line
R (or R itself).

By a possible situation we shall mean any pair s = (w, t), where w is a state and
t is a real number representing a time moment. In other words, possible situations are
just members of the set S := V × V × T (= W × T ). We therefore identify possible
situations with triples s = (p, q, t), with p and q described as above. The members
of T play the role of labels of possible situations. We define f to be the projection
assigning to each possible situation s = (w, t) the state w.
How does Hamilton act? Suppose we are given a quite arbitrary situation s0 =
(p0 , q0 , t0 ) ∈ S from the domain of H. s0 is called an initial situation. The action
of Hamilton undertaken in the state (p0 , q0 ) labeled by t0 carries over the system,
for every moment t, from the state (p0 , q0 ) to a uniquely defined state (p, q) with
label t. In other words, for every t ∈ T , Hamilton turns the initial situation s0 into
a uniquely defined situation s = (p, q, t). We may therefore say that Hamilton
assigns to each situation s0 from its domain a phase path in the phase space, i.e.,
a continuous mapping from T to W . Moreover, this path passes through the state
(p0 , q0 ) at t0 . It is now quite obvious how to define the transition relation Tr between
possible situations. For any pair of situations s0 = (p0 , q0 , t0 ), s = (p, q, t) ∈ S, it
is the case that s0 Tr s if and only if both situations s0 and s belong to the domain of
H and the phase path assigned to s0 passes through the state (p, q) at t. In light of
the above remarks, the relation Tr is reflexive on the domain of H. How to compute
this path? We shall discuss this issue below.
The action of Hamilton is a binary relation AH defined on the set W . It is
assumed that the transition relation R between states is identical with the relation AH .
(Hamilton is omnipotent because he establishes the rules that govern mechanics.)
Therefore the system we define is normal. In light of the above remarks, given a
state (p0 , q0 ) and t0 ∈ T , the set f R (p0 , q0 ) contains in particular the range of the
unique phase path assigned to the situation s0 = (p0 , q0 , t0 ), provided that s0 is in
the domain of H. We assume that for every state (p0 , q0 ), the set f R (p0 , q0 ) is the
union of co-domains of all possible phase paths assigned to s0 = (p0 , q0 , t0 ), for all
possible choices of t0 ∈ T .
It follows from these definitions that the relation R is compatible with Tr; that is,
for any situations s0 = (p0 , q0 , t0 ), s = (p, q, t), s0 Tr s implies that f (s0 ) R f (s).
Indeed, s0 Tr s means that the phase path assigned to s0 passes through the point (p, q)
of the phase space at t. Hence (p, q) belongs to the range of this path. Consequently,
(p, q) ∈ f R (p0 , q0 ), i.e., f (s0 ) R f (s).
We have defined a one-agent situational action system

M s := (W, R, {AH }, S, Tr, f ).

The class of all such systems M s is called the Hamiltonian class.


How do we compute the phase paths involved in the above definitions? One
assumes that:
(1) H possesses continuous partial derivatives with respect to p, q (in the sense
of the vector space V) and t.

(2) For every initial condition s0 = (p0 , q0 , t0 ) belonging to the domain of H, the
phase path assigned to s0 is a pair of continuous functions (p(t), q(t)), each
function being a map from T to V, such that p(t0 ) = p0 and q(t0 ) = q0 . (In
the Cartesian coordinate system, the vector function q(t) describes the trajecto-
ries of the particles in space while p(t) defines their momenta, and hence their
velocities.) Moreover, by solving the Hamiltonian equations:

d ∂  
p(t) = − H q(t), p(t), t
dt ∂q
d ∂  
q(t) = + H q(t), p(t), t
dt ∂p

with the initial condition p(t0 ) = p0 and q(t0 ) = q0 , one computes the values
of p(t) and q(t) at any moment t belonging to T .
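As a purely numerical illustration (not part of the text), the following Python sketch integrates the Hamiltonian equations for the one-dimensional harmonic oscillator H(p, q) = p²/2 + q²/2 by semi-implicit (symplectic) Euler steps; the step size, the number of steps and the initial situation are assumptions.

```python
# A numerical sketch of Hamilton's equations dp/dt = -dH/dq, dq/dt = +dH/dp
# for the harmonic oscillator H(p, q) = p**2/2 + q**2/2.

def hamiltonian_path(p0, q0, t0, dt=0.01, steps=1000):
    """Approximate the phase path passing through the initial situation (p0, q0, t0)."""
    dH_dq = lambda p, q: q       # dH/dq for this Hamiltonian
    dH_dp = lambda p, q: p       # dH/dp for this Hamiltonian
    p, q, t = p0, q0, t0
    path = [(p, q, t)]
    for _ in range(steps):
        p = p - dt * dH_dq(p, q)         # dp/dt = -dH/dq
        q = q + dt * dH_dp(p, q)         # dq/dt = +dH/dp, using the updated p
        t = t + dt
        path.append((p, q, t))
    return path

path = hamiltonian_path(p0=0.0, q0=1.0, t0=0.0)
print(path[0], path[-1])   # states through which the phase path passes at t0 and later
```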
Chapter 3
Ordered Action Systems

Abstract Ordered action systems are systems whose sets of states are partially
ordered. A typology of ordered systems depending on the properties of the component
order relation is presented. A significant role of fixed points of the transition relation
in ordered systems is emphasized. Numerous examples illustrate the theory.

For many years, methods based on the so-called semantics of fixed points have been
extensively developed. This sort of semantics applies a range of results, convention-
ally called fixed-point theorems, to the definition or construction of abstract objects
with given prescribed properties. The heart of the matter is the definition of abstract
objects as fixed points, often as least fixed points, of certain mappings of a par-
tially ordered set into itself. In the simplest case, the defined object is approximated
by certain constructions performed in a finite number of steps. By passing to the
limit, one obtains the object with the required properties. This process takes place in
ordered structures, in which the passage to the infinite is allowed and the existence
of the limit is secured. The above fixed-point approach to semantics is, therefore,
meaningful only for ordered structures which exhibit some completeness properties
such as the existence of suprema for directed subsets or chains. The completeness
of a poset guarantees that among its elements there always exists at least one for
which the above infinite process terminates. The process of definability by means
of fixed-points methods thus consists in picking out of the set of preexisted objects
(which form a complete poset), the one with the desired properties.
In this chapter, we present some results about fixed points and show applications
of these results in the theory of action. We are mainly concerned with fixed-point
theorems for relations defined in posets and not only for mappings. (We note that
traditionally the focus of mathematical literature is on fixed-point theorems for func-
tions rather than relations.) We present here a bunch of such theorems and also show
their applicability in the theory of action.


3.1 Partially Ordered Sets

A rudimentary knowledge of set theory and the theory of partially ordered sets is
required in this section.
Let P be a set. A binary relation ≤ on P is an order (or partial order) on P if and
only if ≤ satisfies the following conditions:
(i) ≤ is reflexive, i.e., a ≤ a, for all a ∈ P;
(ii) ≤ is transitive, i.e., a ≤ b and b ≤ c implies a ≤ c, for all a, b, c ∈ P;
(iii) ≤ is antisymmetric, i.e., a ≤ b and b ≤ a implies a = b, for all a, b ∈ P.
A partially ordered set, a poset, for short, is a set with an order defined on it.
Each order relation ≤ on P gives rise to a relation < of strict order: a < b in P if
and only if a ≤ b and a ≠ b.
Let (P, ≤) be a poset and let X be a subset of P. Then, X inherits an order relation
from P: given x, y ∈ X, x ≤ y in X if and only if x ≤ y in P. We then also say that
the order on X is induced by the order from P.
In what follows we shall consider only nonempty posets (P, ≤) (unless it is explic-
itly stated otherwise).

(1) An element M ∈ X is called maximal in X whenever M ≤ x implies M = x, for
every x ∈ X.
(2) An element m ∈ X is called minimal in X whenever x ≤ m implies m = x, for
every x ∈ X.
(3) An element u ∈ P is called an upper bound of the set X if x ≤ u, for every
x ∈ X.
(4) An element l ∈ P is called a lower bound of the set X if l ≤ x, for every x ∈ X.
(5) An element a ∈ P is called the least upper bound of the set X if a is an upper
bound of X and a ≤ u for every upper bound u of X. If X has a least upper
bound, this is called the supremum of X and is written as “sup(X)”.
(6) An element b ∈ P is called the greatest lower bound of the set X if b is a lower
bound of X and l ≤ b for every lower bound l of X. If X has a greatest lower
bound, this is called the infimum of X and is written as “inf(X)”.

X may have more than one maximal element, or none at all. A similar situation
holds for minimal elements.
Instead of ‘sup(X)’ and ‘inf(X)’, we shall often write ‘∨X’ and ‘∧X’; in particular,
we write ‘a ∨ b’ and ‘a ∧ b’ instead of ‘sup({a, b})’ and ‘inf({a, b})’.
If the poset P itself has an upper bound u, then it is the only upper bound. u is
then called the greatest element of P. The notion of the least element of P is defined
in an analogous way.
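On a finite poset these notions can be computed directly. The following Python sketch (with assumed names maximal_elements and supremum, and the divisibility order on a small set as an example) is only an illustration of the definitions above.

```python
# An illustrative sketch of maximal elements and suprema in a finite poset.

def maximal_elements(X, leq):
    """Elements M of X with no strictly greater element in X."""
    return {m for m in X if all(not (leq(m, x) and m != x) for x in X)}

def supremum(X, P, leq):
    """The least upper bound of X in P, or None if it does not exist."""
    uppers = [u for u in P if all(leq(x, u) for x in X)]
    least = [a for a in uppers if all(leq(a, u) for u in uppers)]
    return least[0] if least else None

P = {1, 2, 3, 4, 6, 12}
leq = lambda a, b: b % a == 0               # a <= b iff a divides b
print(maximal_elements({2, 3, 4}, leq))     # {3, 4}: two maximal elements
print(supremum({2, 3}, P, leq))             # 6 = sup({2, 3}) in this poset
```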
A set X ⊆ P is:
(a) an upper directed subset of P if for every pair a, b ∈ X there exists an element
c ∈ X such that a ≤ c and b ≤ c (or, equivalently, if every finite nonempty subset
of X has an upper bound which is an element of X);

(b) a chain in P if, for every pair a, b ∈ X, either a ≤ b or b ≤ a (that is, if any two
elements of X are comparable);
(c) a well-ordered subset of P (or: a well-ordered chain in P) if X is a chain, in which
every nonempty subset Y ⊆ X has a minimal element (in Y ). Equivalently, using
a weak form of the Axiom of Choice, X is well ordered if and only if it is a
chain and there is no strictly decreasing sequence c0 > c1 > · · · > cn > · · · of
elements of X. Every well-ordered chain X is isomorphic with a unique ordinal,
called the type of X.
A subset directed downwards is defined similarly; when nothing to the contrary
is said, ‘directed’ will always mean ‘directed upwards’. If the poset (P, ) itself is
a chain or directed, then it is simply called a chain or a directed poset.
Let (P, ≤) be a poset and let X be a subset of P. The set X is cofinal in (P, ≤) if
for every a ∈ P there exists b ∈ X such that a ≤ b. If X is cofinal in (P, ≤), then
sup(P) exists if and only if sup(X) exists. Furthermore, sup(P) = sup(X).

Theorem 3.1.1 Let (P, ≤) be a poset. Every countable directed subset D of (P, ≤)
contains a cofinal well-ordered subset of type ≤ ω. In particular, every countably
infinite chain contains a cofinal well-ordered subchain of type ≤ ω.

The above theorem is not true for uncountable directed subsets.


Theorem 3.1.2 (Zorn’s Lemma) If every nonempty chain in a poset P has an upper
bound, then the set P contains a maximal element. 
Zorn’s Lemma (quantified over all posets) is an equivalent form of the Axiom of
Choice (on the basis of Zermelo–Fraenkel set theory ZF).

Definition 3.1.3 Let (P, ≤) be a poset.

(1) The poset (P, ≤) is directed-complete if for every nonempty directed subset
D ⊆ P, the supremum sup(D) exists in (P, ≤).
(2) The poset (P, ≤) is chain complete (or inductive) if for every nonempty chain
C ⊆ P, the supremum sup(C) exists in (P, ≤).
(3) The poset (P, ≤) is well-orderably complete if for every nonempty well-ordered
chain C ⊆ P, the supremum sup(C) exists in (P, ≤).

Note In the literature one also finds stronger versions of the above definitions in
which one also assumes the existence of the supremum for the empty directed subset
(the empty chain, respectively.) The supremum of the empty directed set is, of course,
the least element in the given poset, denoted by 0. Definition 3.1.3 does not entail that
the respective posets possess the zero element. However, when necessary, we shall
separately assume the existence of the zero element in directed-complete or inductive
posets. 

It is clear that every directed-complete poset is chain complete and every inductive
poset is well-orderably complete. In the presence of the Axiom of Choice these three
properties are known to be mutually equivalent.

The empty subset of a poset is well ordered. Hence, if (P, ≤) is well-orderably
complete and contains the least element 0, then the supremum of the empty subset
exists and it is the least element in (P, ≤).
Let (P, ≤) be a poset. A function π : P → P is monotone1 if a ≤ b implies
π(a) ≤ π(b) for every pair a, b ∈ P.
If π is monotone and C is a nonempty chain (a nonempty well-ordered chain) in
(P, ≤), then the image π[C] := {π(a) : a ∈ C} is also a nonempty chain (a nonempty
well-ordered chain). A similar implication holds for every nonempty directed set
D ⊆ P.

3.2 Fixed-Point Theorems for Relations

Let R ⊆ P × P be a binary relation defined on a nonempty set P. An element a∗ ∈ P


is called a fixed point of R if it is the case that a∗ R a∗ . (Fixed points of relations are
also called reflexive points.) a∗ is called a strong fixed point of R if it is a fixed point
and, moreover, for every b ∈ P, if a∗ R b, then b = a∗ . (This means that a∗ is the
unique element b in P such that a∗ R b, i.e., {b ∈ P : a∗ R b} = {a∗ }.)
Suppose that R is a function on P, that is, R satisfies the condition: for every a ∈ P
there exists a unique element b ∈ P such that a R b holds. Symbolically:

(∀a ∈ P ∃!b ∈ P) a R b.

Then the notions of a fixed point and of a strong fixed point for R are equivalent.
We recall that a relation R ⊆ P × P is total (on P) if Dom(R) = P. Symbolically,

(∀a ∈ P)(∃b ∈ P) a R b. (3.2.1)

Let (P, ≤) be a poset. A relation R ⊆ P × P is called inflationary if it is total and
included in ≤, i.e.,

(∀a, b ∈ P) a R b implies a ≤ b. (3.2.2)

(The term ‘inflationary relation’ is borrowed from computer science—see Cai and
Paige (1992), Desharnais and Möller (2005). In the literature the terms ‘∀-expansive
relation’ and ‘∃-expansive relation’ are also used as names for inflationary and expan-
sive relations, respectively.)
We note two simple but useful observations:

1 The term ‘isotone function’ is also used.



Theorem 3.2.1 (The Fixed-Point Theorem for Inflationary Relations) Let (P, ≤)
be a poset in which every nonempty chain has an upper bound. Let R ⊆ P × P be
an inflationary relation. Then R has a fixed point a∗ which additionally satisfies the
following condition:

for every b ∈ P, if a∗ R b, then b = a∗ . (3.2.3)

(This means that a∗ is the unique element b in P such that a∗ R b, i.e., {b ∈ P :


a∗ R b} = {a∗ }.)

Proof By Zorn’s Lemma, applied to (P, ≤), there exists at least one maximal element
a∗ (in the sense of ≤). We shall check that a∗ is a fixed point for R. By totality, there
exists b ∈ P such that a∗ R b. (3.2.2) implies that a∗ ≤ b. Since a∗ is maximal,
a∗ = b. Hence, a∗ R a∗ . So a∗ is a fixed point of R. Evidently, by maximality and
(3.2.2), a∗ also satisfies (3.2.3).

Let (P, ≤) be a poset. A relation R ⊆ P × P is called expansive if for every a ∈ P
there exists b ∈ P such that a R b and a ≤ b; symbolically,

(∀a ∈ P)(∃b ∈ P) a R b and a ≤ b. (3.2.4)

Every expansive relation is total and every inflationary relation is expansive. If R


is a function from P to P, then the properties of being inflationary and expansive are
equivalent for R.

Theorem 3.2.2 (The Fixed-Point Theorem for Expansive Relations) Let (P, ≤) be
a poset in which every nonempty chain has an upper bound. Let R ⊆ P × P be
an expansive relation. Then R has a fixed point a∗ which additionally satisfies the
condition:
(∀b ∈ P) a∗ R b and a∗ ≤ b implies b = a∗ . (3.2.5)

Proof Define: R0 := R ∩ ≤. The relation R0 is total and inflationary. By Theorem


3.2.1, R0 has a fixed point a∗ for which (3.2.3) holds. Consequently, a∗ R a∗ and
(3.2.5) readily follows. 

The proof of Theorem 3.2.1 employs the Axiom of Choice (in the form of Zorn’s
Lemma). But in fact, the set-theoretic status of Theorems 3.2.1 and 3.2.2 is the
same—each of the above fixed-point theorems is equivalent to the Axiom of Choice.

Theorem 3.2.3 On the basis of Zermelo–Fraenkel set theory ZF, the following con-
ditions are equivalent:
(a) The Axiom of Choice (AC).
(b) Theorem 3.2.1.
(c) Theorem 3.2.2.

Proof The implication (a) ⇒ (b) directly follows from the proof of Theorem 3.2.1
because Zorn’s Lemma is used here. The implication (b) ⇒ (c) is present in the proof
of Theorem 3.2.2. To prove (c) ⇒ (a), we assume Theorem 3.2.2. We show that then
Zorn’s Lemma holds. Let (P, ≤) be an arbitrary poset in which every nonempty chain
has an upper bound. Let R := ≤. The relation R is expansive. By Theorem 3.2.2, R
has a fixed point a∗ which satisfies (3.2.5). We show that a∗ is a maximal element in
(P, ≤). Suppose b ∈ P and a∗ ≤ b. Hence, a∗ R b and a∗ ≤ b, by the definition of R.
Condition (3.2.5) then gives that b = a∗ . This means that a∗ is a maximal element
in (P, ≤).

Example Let P = [0, 1] be the closed unit interval of real numbers. The sys-
tem (P, ≤) with the usual ordering ≤ of real numbers satisfies the hypothesis of
Theorems 3.2.1 and 3.2.2. If the relation R is taken to be equal to ≤ on P, then R is
inflationary. Hence, it has a fixed point in (P, ≤) which additionally satisfies (3.2.3).
It is clear that 1 is the only such fixed point of R. On the other hand, every element
of P is a fixed point of R.

Let (P, ≤) be a poset. A function π : P → P is expansive if a ≤ π(a) for every
a ∈ P.

Corollary 3.2.4 (Zermelo) Let (P, ≤) be a poset in which every nonempty chain
has an upper bound. Let π : P → P be an expansive function. Then π has a fixed
point, i.e., there exists a∗ ∈ P such that π(a∗ ) = a∗ .

Proof Following the usage of set theory, each function is identified with its graph. Accord-
ingly, let R be the graph of π. Thus, a R b holds if and only if b = π(a), for any
a, b ∈ P. The relation R satisfies the assumptions of both Theorems 3.2.1 and 3.2.2. R
is total because π is a function. R is inflationary because it is total and π is expansive.
In virtue of Theorem 3.2.1, R has a fixed point a∗ . In particular, a∗ R a∗ holds, which
means that a∗ = π(a∗ ). 

It is interesting to note that on the basis of Zermelo–Fraenkel set theory ZF,


Corollary 3.2.4 is equivalent to the Axiom of Choice, as shown by Abian. However,
if one strengthens the hypothesis of Corollary 3.2.4 and assumes that (P, ≤) is an
inductive poset with zero, the corollary modified in this way is provable in ZF and
the Axiom of Choice is not needed here—see Moschovakis (1994), Chap. 7.
The above theorems prove the existence of rather big reflexive points—they are
maximal elements in a given poset. The theorems we present further are concerned
with the problem of finding smaller fixed points. Before passing to them we recall
an axiom of set theory which is weaker than AC.
The Principle of Dependent Choices (DC) This principle asserts the following:
Let R be a total relation defined on a set A. Let a0 ∈ A. Then there exists a function
f : ω → A such that f (0) = a0 and f (n) R f (n + 1) for all n ∈ ω. 
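For a total relation on a small finite set, the sequence whose existence DC asserts can simply be generated step by step, as in the following illustrative Python sketch; the set A, the relation R and the length bound are assumptions introduced for this example.

```python
# A sketch of the sequence asserted by DC: repeatedly pick some R-successor
# of the current element of a total relation.

A = {0, 1, 2}
R = [(0, 1), (1, 2), (2, 0), (2, 1)]        # a total relation on A

def dependent_choices(a0, R, length=10):
    """A function f with f(0) = a0 and f(n) R f(n+1), truncated to `length` values."""
    f = [a0]
    for _ in range(length - 1):
        f.append(next(b for (a, b) in R if a == f[-1]))   # choose some successor
    return f

print(dependent_choices(0, R))   # [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
```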

Definition 3.2.5 Let (P, ≤) be a poset. A binary relation R ⊆ P × P is called
monotone2 if it satisfies:

(∀a, b, c ∈ P)(a ≤ b and a R c implies (∃d ∈ P) b R d and c ≤ d)

(see the figure below).


If π : P → P is a function, then π is monotone if and only if the graph R of π is
a monotone relation in the above sense.

Fig. 3.1 The monotonicity condition: from a ≤ b and a R c one obtains d such that b R d and c ≤ d

A poset (P, ≤) is called σ-complete if every chain in (P, ≤) of type ω has a
supremum.
Definition 3.2.6 Let (P, ≤) be a σ-complete poset and let R ⊆ P × P be a binary
relation.
1. R is called σ-continuous if R is monotone and it additionally satisfies the fol-
lowing condition:
(cont)σ For every chain C in (P, ≤) of type ω and every monotone function f :
C → P, if a R f (a) for all a ∈ C, then sup(C) R sup(f [C]).
2. R is σ-continuous in the stronger sense if it is σ-continuous and satisfies:
(∗)σ For every chain D in (P, ≤) of type ω and for every element a ∈ P, if
a R d for all d ∈ D, then a R sup(D).
(cont)σ can be formulated equivalently as follows:
For every chain of elements of P of type ω, a0 < a1 < · · · < an < an+1 < · · · ,
and every increasing sequence b0 ≤ b1 ≤ · · · ≤ bn ≤ bn+1 ≤ · · · of elements
of P, if an R bn for all n, then sup{an : n ∈ N} R sup{bn : n ∈ N}.
It is not difficult to prove that R is σ-continuous in the stronger sense if and only if
it is monotone and satisfies:
(cont)∗σ For any two monotone functions f : N → P and g : N → P, if
f (n) R g(n) for all n ∈ N, then sup(f [N]) R sup(g[N]).

2 The term ‘isotone relation’ is also used.



The antecedent of (cont)σ is not vacuously satisfied for certain relations:

Observation Let (P, ≤) be a poset. If a relation R ⊆ P × P is monotone and total,
then for every chain a0 < a1 < · · · < an < an+1 < · · · in (P, ≤) of type ω, there
exists a chain b0 ≤ b1 ≤ · · · ≤ bn ≤ bn+1 ≤ · · · in (P, ≤) such that an R bn for
all n.
Indeed, let a0 < a1 < · · · < an < an+1 < · · · be a countable chain of type ω. As R is
total, there exists an element b0 ∈ P such that a0 R b0 . As a0 ≤ a1 and a0 R b0 , the
monotonicity of R implies the existence of an element b1 ∈ P such that a1 R b1 and
b0 ≤ b1 . As a1 ≤ a2 and a1 R b1 , there exists an element b2 ∈ P such that a2 R b2
and b1 ≤ b2 , again by monotonicity. Going further, as a2 ≤ a3 and a2 R b2 , there
exists an element b3 ∈ P such that a3 R b3 and b2 ≤ b3 . Continuing this procedure,
we define the countable chain b0 ≤ b1 ≤ · · · ≤ bn ≤ bn+1 ≤ · · · in (P, ≤) such
that an R bn for all n. (The Principle of Dependent Choices is used here.)
We recall that if R is a binary relation on a set P and a ∈ P, then δR (a) :=
{b ∈ P : a R b}. The set δR (a) is called the R-image of the element a.
Given a poset (P, ≤), a binary relation R on P and a ∈ P, we define:

↑ a := {b ∈ P : a ≤ b},
↓ a := {b ∈ P : b ≤ a}.
The basic fact concerning fixed points of σ-continuous relations is provided by


the following theorem:

Theorem 3.2.7 Let (P, ≤) be a σ-complete poset with zero 0. Every σ-continuous
relation R ⊆ P × P such that the set δR (0) is nonempty has a fixed point a∗ . Fur-
thermore, a∗ can be assumed to have the following property: for every y ∈ P,
if δR (y) ⊆ ↓ y, then a∗ ≤ y.

Proof Since R is σ-continuous, R is a monotone relation in (P, ≤). We define:

Q := {x ∈ P : δR (x) ∩ ↑ x ≠ ∅ and (∀y ∈ P)(δR (y) ⊆ ↓ y implies x ≤ y)}.

Evidently, 0 ∈ Q. Hence, the set Q is nonempty.

Lemma 1 R↾Q, the restriction of R to Q, is expansive in the poset (Q, ≤).

Proof of the lemma. Let a ∈ Q. As δR (a) ∩ ↑ a ≠ ∅, there exists b ∈ P such that

a R b and a ≤ b. (3.2.6)

We prove that b ∈ Q. This will show that R↾Q is expansive. By (3.2.6) and the
monotonicity of R in (P, ≤) there exists c ∈ P such that b R c and b ≤ c (see the
figure below).

Fig. 3.2 From a ≤ b and a R b, monotonicity yields c with b R c and b ≤ c

This means that c ∈ δR (b) ∩ ↑ b. Hence,

δR (b) ∩ ↑ b ≠ ∅. (3.2.7)

Now let y be an arbitrary element of P such that δR (y) ⊆ ↓ y. We show b ≤ y.
As a ∈ Q and δR (y) ⊆ ↓ y, we have that a ≤ y, by the definition of Q. As a ≤ y and a R b,
the monotonicity of R implies the existence of an element z ∈ P such that y R z and
b ≤ z (see the diagram below). As z ∈ δR (y) and δR (y) ⊆ ↓ y, we get that z ≤ y.
Consequently, b ≤ z ≤ y, which gives that b ≤ y. This proves that b ∈ Q.

Fig. 3.3 From a ≤ y and a R b, monotonicity yields z with y R z and b ≤ z

If C is a chain in (P, ≤) of type ω and f : C → P is a monotone function, then
f [C] is a chain of type ≤ ω. Hence, sup(f [C]) exists in (P, ≤). Evidently, every
chain in (Q, ≤) is also a chain in (P, ≤). Consequently, by the σ-continuity of R:
(a) For every chain C in (Q, ≤) of type ω and for every monotone function f : C →
P such that x R f (x) for all x ∈ C, it is the case that sup(C) R sup(f [C]).

Lemma 2 Let C be a chain in (Q, ≤) of type ω. Suppose that there exists a monotone
function f : C → Q such that x R f (x) and x ≤ f (x) for all x ∈ C. Then
sup(C) belongs to Q.

Proof of the lemma. Let M := sup(C). We show M ∈ Q. By (a) we have that
M R sup(f [C]). Furthermore, as x ≤ f (x) for all x ∈ C, it follows that M =
sup(C) ≤ sup(f [C]). This shows that sup(f [C]) ∈ δR (M) ∩ ↑ M. Hence,

δR (M) ∩ ↑ M ≠ ∅. (3.2.8)

Now let y ∈ P be an element such that δR (y) ⊆ ↓ y. We claim that M ≤ y.
As C ⊆ Q, we have that x ≤ y, for all x ∈ C, by the second conjunct of the definition
of Q. It follows that M = sup(C) ≤ y. This and (3.2.8) prove that M ∈ Q.
We pass to the proof of the theorem. We inductively define a strictly increasing
sequence a0 < a1 < . . . < an < an+1 < . . . of elements of Q. (The Principle of
Dependent Choices is applied here.) The type of the sequence is ≤ ω.
We define: a0 := 0. Let us suppose the elements a0 < a1 < . . . < an have been
defined. As an ∈ Q and, by Lemma 1, the relation R↾Q is expansive, there exists
an element b ∈ Q such that an ≤ b and an R b. If b = an , the defining procedure
terminates. In this case a∗ := an is already a fixed point of R. As a∗ belongs to Q,
the second statement of the thesis of the theorem evidently holds for a∗ . If b ≠ an ,
we put: an+1 := b. Clearly, an < an+1 .
It remains to consider the case when the sequence a0 < a1 < . . . < an <
an+1 < . . . has type ω. In this case, we put C := {an : n ∈ N} and define f :
C → P by f (an ) := an+1 for all n ∈ N. f is well-defined and monotone. As C ⊆
Q, a R f (a) and a ≤ f (a) for all a ∈ C, the supremum sup(C) belongs to Q,
by Lemma 2. Furthermore, sup(C) R sup(f [C]), by the σ-continuity of R. But,
evidently, sup(C) = sup(f [C]) because a0 = 0 (see the figure below). Putting
a∗ := sup(C), we thus see that a∗ R a∗ . So a∗ is a fixed point of R. Since a∗ ∈ Q, it
follows that for every y ∈ P, if δR (y) ⊆ ↓ y, then a∗ ≤ y.

Fig. 3.4 (the chain 0 R a0, a0 R a1, a1 R a2, a2 R a3, . . ., increasing with respect to ≤)

Notes (1). The hypothesis of monotonicity of R in Theorem 3.2.7 is essential and cannot be dropped. Let P = [0, 1] be the closed unit interval of real numbers. The system (P, ≤) with the usual ordering ≤ of real numbers is a σ-complete poset with

zero. The relation R ⊆ P × P is defined as follows: a R b if and only if (∃n ∈ N, n ≥ 1) |a − b| = (1/2)n. It is easy to see that:
(a) R is symmetric and total,
(b) R is not monotone,
(c) R does not possess a fixed point.
As to (b), for the numbers 1/2 and 1, we have 1/2 R 1 and 1/2 < 1. But there does not exist a number d in [0, 1] such that 1 R d and 1 ≤ d.
(2). The assumption that (P, ≤) possesses zero is essential in Theorem 3.2.7. For let P = {1, 2, a, b}, where 1 < 2 and a < b. Let R := {(1, a), (a, 1), (2, b), (b, 2)}. The relation R is symmetric and monotone on P. R is also σ-continuous. But R does not possess a fixed point. The poset (P, ≤) is σ-complete but it is lacking the zero element.
(3). On the basis of Zermelo–Fraenkel set theory ZF, Theorem 3.2.7 is equivalent
to the Principle of Dependent Choices (DC). 
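The finite counterexample in Note (2) can be checked mechanically. The following sketch (Python, with the poset and the relation hard-coded; it merely verifies the claims of the note) confirms that R is symmetric, that it has no fixed point, and that the poset lacks a zero element.

```python
# A mechanical check of the counterexample from Note (2): P = {1, 2, a, b} with
# 1 < 2 and a < b, and R = {(1, a), (a, 1), (2, b), (b, 2)}.  R is symmetric,
# has no fixed point, and the poset has no zero (least element).

P = {"1", "2", "a", "b"}
leq = {(x, x) for x in P} | {("1", "2"), ("a", "b")}       # the partial order
R = {("1", "a"), ("a", "1"), ("2", "b"), ("b", "2")}

print(all((y, x) in R for (x, y) in R))                    # True: R is symmetric
print(any((x, x) in R for x in P))                         # False: no fixed point
print(any(all((z, x) in leq for x in P) for z in P))       # False: no zero element
```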

Let (P, ≤) be a σ-complete poset. A function π : P → P is called σ-continuous if it is monotone and π(sup(C)) = sup(π[C]) for every chain C in (P, ≤) of type ω.
Since for any monotone function π : P → P, the image π[C] of any chain C in (P, ≤) is a chain as well, we see that, in view of the σ-completeness of (P, ≤), the above definition thus postulates the equality of the two suprema and not their existence.
A function π : P → P is σ-continuous in the above sense if and only if the graph R of π is σ-continuous as a binary relation.

Corollary 3.2.8 Let (P, ≤) be a σ-complete poset with zero 0. Every σ-continuous function π : P → P has a least fixed point a∗, i.e., π(a∗) = a∗ and (∀b ∈ P)(π(b) ≤ b implies a∗ ≤ b).

Proof We work with the graph R of π and proceed as in the proof of Theorem 3.2.7. Since π is σ-continuous, the graph R is a σ-continuous relation and δR(0) = {π(0)}. By Theorem 3.2.7 there is a fixed point a∗ of R such that for every b ∈ P, δR(b) ⊆ ↓b implies that a∗ ≤ b. But the last condition simply says that for every b ∈ P, π(b) ≤ b implies a∗ ≤ b. The chain a0 ≤ a1 ≤ · · · ≤ an ≤ an+1 ≤ · · ·, defined as in the proof of Theorem 3.2.7, has the following properties: a0 = 0, an+1 = π(an), for all n. Furthermore, a∗ = sup({an : n ∈ N}). □

The above chain a0 ≤ a1 ≤ · · · ≤ an ≤ an+1 ≤ · · · is known as Kleene's approximation sequence (Kleene 1952). It can also be found in Tarski's (1955) classic paper. Many properties of such objects are compiled in Chap. 4 of Davey and Priestley (2002).
Corollary 3.2.8 is provable in ZF—see Moschovakis (1994, pp. 108–109).
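Kleene's approximation sequence can also be illustrated computationally. The sketch below is a minimal Python illustration: the operator pi is a hypothetical monotone map on a finite powerset lattice (where σ-continuity holds automatically), and the loop iterates a0 = 0, a_{n+1} = π(a_n) until the chain stabilizes at the least fixed point.

```python
# A minimal sketch of Kleene's approximation sequence a0 = 0, a_{n+1} = pi(a_n).
# The map pi is a hypothetical monotone operator on the powerset of {0,...,4}
# (a finite lattice, so the chain stabilizes after finitely many steps): it adds
# 0 and closes a set of numbers under the successor x -> x + 1 below the bound.

def pi(X, bound=5):
    return frozenset(X) | {0} | {x + 1 for x in X if x + 1 < bound}

def least_fixed_point(f, zero=frozenset()):
    a = zero                  # a0 := 0, the least element of the poset
    while True:
        b = f(a)              # a_{n+1} := pi(a_n)
        if b == a:            # the chain has stabilized: a* = sup{a_n : n in N}
            return a
        a = b

print(sorted(least_fixed_point(pi)))    # [0, 1, 2, 3, 4]
```

On an infinite poset the chain need not stabilize after finitely many steps; the fixed point is then the supremum of the chain, exactly as in the proof of Corollary 3.2.8.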

3.3 Ordered Action Systems: Applications of Fixed-Point Theorems

We now show how to apply one of the above fixed-point theorems for relations—
Theorem 3.2.7—to the proof of the downward Löwenheim–Skolem–Tarski Theorem
for countable languages. This theorem belongs to model theory. It states, roughly,
that every infinite model A has an elementary submodel of any intermediate power
between the cardinality of the language and the cardinality of A. It is illuminating to
see the proof of this theorem in the context of elementary action systems. We apply
the standard model-theoretic notation. The notions we use, such as a submodel, an
elementary submodel, etc. are not defined here. Readers unfamiliar with them are
advised to consult any of the standard textbooks on the theory of models. A language
is a set that is the union of three sets: a set of relational symbols (predicates), a set
of function symbols, and a set of constant symbols. (Constant symbols are often
viewed as nullary function symbols.) If L is a language, then For(L) denotes the set
of first-order formulas of L.

Theorem 3.3.1 (Downward Löwenheim–Skolem–Tarski Theorem) Let L be a countable language and let α and β be cardinal numbers such that |For(L)| ≤ β ≤ α. Let A be a model for L of cardinality α. Then A has an elementary submodel of cardinality β. In fact, for any subset X0 ⊆ A of power ≤ β, the model A has an elementary submodel of power β which contains X0 as a subset of its universe.

Proof Let P be the family consisting of all subsets X ⊆ A such that |X| = β and X0 ⊆ X. Evidently, P is nonempty because |A| ≥ β. We have:
(A) The family P, ordered by inclusion, is σ-complete.
Indeed, for any chain C of subsets of A (of type ω) such that |X| = β for all X ∈ C, the union ⋃C has cardinality β. Moreover, the set 0 := X0 is the least element of P.
We define the following binary relation on the poset (P, ⊆): for X, Y ∈ P,
X R Y if and only if X ⊆ Y and for every formula Φ(x, x1 , . . . , xn ) ∈
For(L) and any sequence a1 , . . . , an ∈ X (of length n) such that
A |= (∃x) Φ[a1 , . . . , an ] there exists b ∈ Y such that A |= Φ[b, a1 , . . . , an ].
X R Y thus says that the set Y includes X and, furthermore, for each formula
Φ(x, x1 , . . . , xn ) and any n-tuple a1 , . . . , an ∈ X satisfying (∃x)Φ in A, the set Y
contains at least one b ∈ A such that the (n + 1)-tuple b, a1, . . . , an satisfies
Φ(x, x1 , . . . , xn ) in A. The set Y may also contain some other elements of A, but
the cardinality of Y does not exceed β.
As |For(L)| ≤ β, the crucial observation is that
(B) The relation R is total, i.e., for any X ∈ P there exists Y ∈ P such that X R Y .
Furthermore, the definition of R gives that for any X, Y, X1, X2 ∈ P:
(C) If X R Y then X ⊆ Y .

(D) X1 ⊆ X2 and X2 R Y implies X1 R Y.


It follows from (B) and (C) that R is inflationary. But, more interestingly,
(E) The relation R is σ-continuous (in the stronger sense).
We first check that R is monotone in (P, ⊆). We assume X, Y, Z ∈ P so that X ⊆ Y and X R Z. Evidently, Y ∪ Z belongs to P. By (B) and (C) there exists a set W such that (Y ∪ Z) R W and Y ∪ Z ⊆ W. As (Y ∪ Z) R W, (D) gives that Y R W. As Z ⊆ W, monotonicity follows.
Suppose we are given two nonempty chains of elements of P,

Y0 ⊆ Y1 ⊆ . . . ⊆ Yn ⊆ Yn+1 ⊆ . . . and Z0 ⊆ Z1 ⊆ . . . ⊆ Zn ⊆ Zn+1 ⊆ . . .


 
of types ≤ ω such that Yn R Zn for all n. The sets ⋃n∈N Yn and ⋃n∈N Zn belong to P. We claim that ⋃n∈N Yn R ⋃n∈N Zn.
The fact that Yn R Zn for all n implies that Yn ⊆ Zn for all n. Hence ⋃n∈N Yn ⊆ ⋃n∈N Zn. Let Y := ⋃n∈N Yn and Z := ⋃n∈N Zn. Let Φ(x, x1, . . . , xk) be a formula in For(L) and let a1, . . . , ak be a sequence of elements of Y (of length k) such that A |= (∃x) Φ[a1, . . . , ak].
There exists n ∈ N such that a1, . . . , ak ∈ Yn. Since Yn R Zn, there exists b ∈ Zn such that A |= Φ[b, a1, . . . , ak]. Consequently, there exists b ∈ Z such that A |= Φ[b, a1, . . . , ak]. As Φ(x, x1, . . . , xk) is an arbitrary formula in For(L) and a1, . . . , ak are arbitrary elements of Y, this proves that Y R Z. So R is σ-continuous.
As R is total, δR(0) ≠ ∅. The system (P, ⊆, R) thus satisfies the assumptions of Theorem 3.2.7. It follows that the relation R has a fixed point in (P, ⊆), say B. It is then easy to verify that B is the universe of an elementary submodel B of A. As B belongs to P, the cardinality of B is equal to β. □
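The computational core of the above proof is the construction of the chain X0 ⊆ X1 ⊆ . . . by repeatedly adding witnesses. The following Python sketch illustrates this pattern under a simplifying assumption: instead of checking first-order satisfaction, a fixed family of hypothetical witness ("Skolem-like") functions is given, and a starting set is closed under them on a small finite carrier (the integers modulo 12); the fixed point reached plays the role of the closed set B.

```python
# A hedged illustration of the chain X0 ⊆ X1 ⊆ ... from the proof of Theorem
# 3.3.1.  Instead of checking first-order satisfaction, we assume a family of
# hypothetical witness ("Skolem-like") functions on the toy carrier Z/12 and
# close a starting set under them; the fixed point reached is the analogue of
# the set B closed under all required witnesses.

from itertools import product

witnesses = [
    (2, lambda a, b: (a + b) % 12),    # witness for "x = a + b"
    (1, lambda a: (-a) % 12),          # witness for "x = -a"
]

def step(X):
    """One application of the relation R: add every required witness to X."""
    new = set(X)
    for arity, f in witnesses:
        for args in product(X, repeat=arity):
            new.add(f(*args))
    return frozenset(new)

X = frozenset({8})                     # the starting set X0
while step(X) != X:                    # the chain X0 ⊆ X1 ⊆ ...; stop at a fixed point
    X = step(X)
print(sorted(X))                       # [0, 4, 8]: closed under both witness functions
```

In the model-theoretic setting the chain is in general genuinely infinite, and the fixed point is obtained as the union of the chain, as in the proof above.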

The poset (P, ⊆), defined as in the proof of Theorem 3.3.1, is furnished with an
infinite family of atomic actions, each action being associated with a formula of L.
Indeed, for each formula Φ(x, x1 , . . . , xk ) with at least one free variable x, we define
the binary relation AΦ ⊆ P × P as follows: for X, Y ∈ P,
X AΦ Y iff X ⊆ Y and for every sequence a1, . . . , ak ∈ X (of length k) such that A |= (∃x) Φ[a1, . . . , ak] there exists b ∈ Y such that A |= Φ[b, a1, . . . , ak].
AΦ is called the action of the formula Φ.
It follows from the definition of the relation R that

(F) R = ⋂{AΦ : Φ ∈ For(L)}.

(B) and (F) imply that for any X ∈ P, each action AΦ is ∃-performable at X but
it need not be ∀-performable. But (F) also implies that each realizable performance
of whichever action AΦ is also a realizable performance of each of the remaining

actions. Thus any agent that successfully performs one action also performs the
other actions. Each fixed point of R is uniformly a fixed point of all the actions AΦ ,
Φ ∈ For(L), i.e., X ∗ is a fixed point of R if and only if it is a fixed point of each
relation AΦ , Φ ∈ For(L).
As the ordered action system (P, ⊆, R, {AΦ : Φ ∈ For(L)}) is not normal, we
may define another transition relation between states such that the resulting action
system is normal. The fact that the system is normal ensures that every possible
performance of an arbitrary action is realizable. We put:

R′ := ⋃{AΦ : Φ ∈ For(L)}.

The system

(P, ⊆, R′, {AΦ : Φ ∈ For(L)})

is normal, i.e., every atomic action AΦ is ∀-performable in every state X ∈ P.


R′ has many fixed points. In fact, the set of fixed points of R′ is the union of the sets of fixed points of the actions AΦ, Φ ∈ For(L). But not all fixed points of R′ qualify as solutions of the problem posed in Theorem 3.3.1, i.e., they need not even be submodels of A. The right solutions are provided by states X ∈ P which are uniformly fixed points of all the actions AΦ, Φ ∈ For(L).
The above remarks on ordered action systems are encapsulated by the following
general definitions.

Definition 3.3.2 An ordered elementary action system is a quadruple

M = (W, ≤, R, A), (3.3.1)

where the reduct (W, ≤) is a poset and the reduct (W, R, A) is an elementary action system. □

Thus, in every ordered action system, the set W of states is assumed to be ordered. In
practice, the system (3.3.1) is subject to further constraints. For example, it is often
assumed that
(W, ≤) is a tree-like structure or it is σ-complete and R satisfies some continuity or expansivity conditions with respect to ≤.
In some action systems, e.g., in stit semantics (see Chap. 5), the relation R itself is assumed to be an order and is identified with ≤. Everybody knows the frustrating
experience of performing a string of actions commencing with some state u, only to
end up back in state u. In this case a loop of positive length k of transitions between
states may occur: u R w1 R . . . R wk R u. Any effective action system excludes loops,
because actions involving them are futile and systems containing them are poorly
organized.

A binary relation R on a nonempty set W is loop-free if for any positive integer


k ≥ 1 there does not exist a sequence of states u, w1, . . . , wk such that u R w1 R . . .
R wk R u. The fact that R is loop-free does not preclude the existence of reflexive
points of R, i.e., elements u such that u R u.
We recall that R∗ is the Kleene closure of R, that is, R∗ is the least reflexive and
transitive relation including R. The following observation is immediate:
Proposition 3.3.3 R is loop-free if and only if R∗ is a partial order.

Proof For any u, w ∈ W, u R∗ w if and only if u = w or for some k ≥ 1 there exists


a sequence of states w1 , . . . , wk such that u R w1 R . . . R wk R w.
(⇒). If R is loopless, then one directly verifies that R∗ is antisymmetric. As R∗ is
reflexive and transitive, it follows that R∗ is a partial order.
(⇐). As any partial order is loop-free, any subrelation of a partial order is loop-free
as well. Hence R, being a subrelation of the order R∗ , is loop-free. 
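Proposition 3.3.3 can be tested directly on finite relations. The sketch below (Python; the example relations are hypothetical) computes the Kleene closure R∗ via Warshall's algorithm and checks its antisymmetry, which by the proposition is exactly loop-freeness of R.

```python
# A direct test of Proposition 3.3.3 on finite relations: compute the Kleene
# closure R* (the reflexive-transitive closure, here via Warshall's algorithm)
# and check that it is antisymmetric, i.e., that R is loop-free.  The example
# relations are hypothetical.

def kleene_closure(states, R):
    closure = {(u, u) for u in states} | set(R)
    for k in states:                               # Warshall's algorithm
        for u in states:
            for w in states:
                if (u, k) in closure and (k, w) in closure:
                    closure.add((u, w))
    return closure

def is_loop_free(states, R):
    star = kleene_closure(states, R)
    # R* is reflexive and transitive by construction; it is a partial order
    # exactly when it is antisymmetric.
    return all(not ((u, w) in star and (w, u) in star and u != w)
               for u in states for w in states)

S = {"u", "w1", "w2", "v"}
print(is_loop_free(S, {("u", "w1"), ("w1", "w2"), ("w2", "v")}))   # True
print(is_loop_free(S, {("u", "w1"), ("w1", "w2"), ("w2", "u")}))   # False: a loop
```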

The main goal that underlies the introduction of ordered action systems is the need to make room for certain infinite procedures in action. We call these procedures approximations. Mathematics admits such procedures. The process of computing consecutive decimal approximations of an irrational number is an example of such a (potentially) infinite procedure. We write ‘potentially’ here because in practice this process halts at some stage. This is caused both by the limitations of computer capacity and by finite time resources. But, theoretically, this procedure can be made as
long as one wishes. In this case, the members of W represent different approximations
of a given irrational number. We may therefore identify each element of W with a
pair consisting of a rational number r together with a measure of accuracy of the
approximation provided by r. (It is assumed that such a measure is available.)
The states of (3.3.1), that is, the members of W , represent various possible phases
of the performance of a certain task by the agents. It may be the process of building
a house, sewing a dress, and so on. Intuitively, the order relation in (3.3.1) makes it
possible to compare degrees of completion. The fact that u ≤ v means that at the state v the process is more advanced than at u. For example, it may mean that the state of the house being built, as represented by v, is more complete than that represented by u.
Obviously, various courses of action are possible. We may have the situation that u ≤ v1 and u ≤ v2, where the states v1 and v2 are incomparable. This means that
if the system is at the state u (i.e., u is a phase of realizing the task), several further
courses of action, leading to more advanced stages of the performance represented
either by v1 or by v2 , are conceivable. The fact that v1 and v2 are incomparable means
that further strings of actions undertaken at u may ramify. It may happen that there
exists a state w such that v1 ≤ w and v2 ≤ w, which means that the ramified strings
of actions converge. But, generally, the task may not be unambiguously determined
and several options for completing the action may be admissible. (For example, it
may happen that the architect has designed various alternative versions of the house.)
In this situation such a state w may not exist.

Definition 3.3.4 Let M = (W, ≤, R, A) be an ordered action system.

(1) M is said to be σ-complete (inductive) if the poset (W, ≤) is σ-complete (inductive, respectively).
(2) An element u∗ ∈ W is called a fixed point of M if it is a fixed point of R, i.e., u∗ R u∗.
(3) An element u∗ ∈ W is called a maximal fixed point of M if it is a fixed point of M and, furthermore, for every w ∈ W, if u∗ R w, then u∗ ≤ w. □

The justification of (2) of the above definition is as follows. The action system M
is ‘activated’ in the zero state 0. The task is to lead the system to a state u∗, this
being a fixed point of the relation R and the final stage of the action of M. The state
u∗ is reached by making a possibly transfinite sequence of atomic actions from A,
which results in a transfinite monotone sequence of states of the space W ; the state
u∗ is a final element of this sequence or its supremum.
The above definition can be strengthened in various directions by, e.g., accepting
that each fixed point of M is at the same time a fixed point of some relation (action)
A ∈ A or that it is a fixed point of all atomic actions of A.
The action system defined as in the proof of Theorem 3.3.1 is σ-complete with zero, while the fixed points investigated there are actually fixed points of this system. Since R = ⋂{AΦ : Φ ∈ For(L)}, any fixed point u∗ of M is a fixed point of all actions AΦ, i.e., u∗ AΦ u∗ for all Φ. M has yet another property: for every pair u, v of states, if u R v, then u ⊆ v. The same implication holds for every action A of this system: if u A v, then u ⊆ v, for all states u, v. This observation gives rise to the
following definition:

Definition 3.3.5 An ordered action system M = (W, ≤, R, A) is called constructive if the relation R and the union ⋃A are both subsets of ≤. □
A system M = (W, ≤, R, A) is destructive if R and the union ⋃A are both subsets of the dual order ≥. □
Destructive systems are thus constructive à rebours—they are constructive in the sense of the dual order ≥.
The system (P, ⊆, R, {AΦ : Φ ∈ For(L)}) is obviously constructive.
(Constructive action systems should not be confused with the concept of con-
structive mathematics. The term ‘constructive mathematics’ has a well-established
meaning which is not captured by the above definition.)
If an ordered elementary action system M = (W, ≤, R, A) is constructive, then every fixed point of M (if there is any) is a strong fixed point. Indeed, assume that a∗ is a fixed point of M. Hence, a∗ R a∗. We assume furthermore that a∗ R w for some state w. As R is a subset of ≤, we have that a∗ ≤ w.

Theorem 3.3.6 Let M = (W, ≤, R, A) be a σ-complete action system with zero 0 such that R ⊆ ⋃A. If R is σ-continuous and δR(0) ≠ ∅, then M has a fixed point.

Proof By Theorem 3.2.7, R has a fixed point. □

In ordered action systems M = (W, ≤, R, A), tasks for M may take the form ({0}, Ψ), where 0 is the least element of (W, ≤) and Ψ is a set of fixed points
of M. The process of carrying the system M from the state 0 to a Ψ -state may not
be finitary since it usually involves various infinite limit passages. These passages
are strictly linked with the process of approximating fixed points of R by performing
possibly infinite strings of consecutive actions of the system. In the simplest case,
a fixed point can be reached in ω steps.
The following notion is convenient in the context of ordered action systems.

Definition 3.3.7 Let M = (W, ≤, R, A) be a σ-complete elementary action system. The ω-reach of M is the binary relation ReωM on W defined as follows. For u, v ∈ W,
ReωM(u, v) if and only if either u = v and u R v, or there exists a chain of states u0 ≤ u1 ≤ · · · ≤ un ≤ un+1 ≤ . . . of type ≤ ω and a sequence of atomic actions A0, A1, . . . , An, An+1, . . . such that u = u0, v = sup({un : n = 0, 1, . . .}) and un An un+1 and un R un+1, for all n. □

ReωM (u, v) thus states that v is achieved from the state u by means of performing a
possibly countably infinite string of actions An , n = 0, 1, . . . so that the outcomes
of the actions form an increasing chain of states. The state v is the supremum of the
resulting sequence of states. (It is not assumed here that the state v is a fixed point
of R and hence of M.)
The above chain u0 ≤ u1 ≤ · · · ≤ un ≤ un+1 ≤ · · · has the property that un R un+1, for all n. If the chain is infinite, it need not hold that um R sup({un : n ∈ N})
for all (and even for some) m. This means that, generally, it is not possible to reach
the state v = sup({un : n ∈ N}) from the states um by means of finitary procedures,
i.e., by accomplishing a finite string of actions from A. This situation occurs in the
system (P, ⊆, R, {AΦ : Φ ∈ For(L)}), where, generally, it is not possible to produce
an elementary submodel from the set 0 in a finite number of steps by applying the
actions from {AΦ : Φ ∈ For(L)}.
In the system (P, ⊆, R, {AΦ : Φ ∈ For(L)}), every pair (0, X) such that X is the
universe of an elementary submodel of power β of the given model A, belongs to the
reach ReωM .

3.4 Further Fixed-Point Theorems: Ordered Situational Action Systems

The back and forth method is particularly useful in many branches of algebra and
model theory. It dates back to the proof of Cantor’s famous theorem stating that any
two countable linear dense orders without endpoints are isomorphic. The back and

forth argument has proved convenient in many branches of model theory, especially
in the theory of saturated models. In order to find a plausible abstract formulation of
the above construction, we shall discuss some refinements of the fixed-point theorems
presented in Sect. 3.2.
The notions we present in this section are modifications of the concepts we have
discussed in previous sections. These modifications are twofold in character. First,
some of the concepts defined in Sect. 3.2 are restricted to certain subsets of posets.
Second, a certain weaker counterpart of the notion of expansivity is formulated here:
conditional expansivity. We also define a certain restricted version of the σ-continuity
of a relation.
Let (P, ≤) be a poset. A function π : P → P is called quasi-expansive (or conditionally expansive) if it satisfies the following quasi-identity:

(∀x)(x ≤ π(x) → π(x) ≤ π(π(x))).

Clearly, every expansive function is quasi-expansive. But also every monotone func-
tion is quasi-expansive. Thus, the notion of a conditionally expansive function is a
common generalization of the above two types of functions associated with posets.3
Given an element a ∈ P, we define: π0(a) := a and πn+1(a) := π(πn(a)), for all n ≥ 0.
If (P, ≤) has zero 0 and π : P → P is conditionally expansive, then the set {πn(0) : n ∈ N} forms an increasing chain of type ≤ ω.
We are concerned with the following counterpart of the notion of quasi-expansivity for relations. A binary relation R on a poset (P, ≤) is called quasi-expansive (or conditionally expansive) whenever:

(∀a, b ∈ P)(a ≤ b ∧ a R b → (∃c ∈ P)(b R c ∧ b ≤ c))

(see Fig. 3.5 below).

Fig. 3.5 (diagram: a R b, b R c, with a ≤ b and b ≤ c)
Every monotone relation, defined as in Sect. 3.2, is trivially quasi-expansive. Fur-
thermore, every expansive relation is quasi-expansive. It is also evident that if R is

3 David Makinson considered functions with this property in the context of his research on modal
logics.

the graph of a function π : P → P, then the relation R is quasi-expansive in the


above sense if and only if the function π is quasi-expansive.
Let (P, ≤) be a σ-complete poset. A relation R ⊆ P × P is said to be conditionally
σ-continuous (or quasi-σ-continuous) if the following conditions are met:

(1) R is quasi-expansive,
(2) For every chain C in (P, ≤) of type ≤ ω and for every monotone and expansive function f : C → P, if a R f(a) for all a ∈ C, then sup(C) R sup(f[C]).

Due to the monotonicity of f : C → P, (2) implies that the set f[C] is a countable chain of type ≤ ω and therefore sup(f[C]) exists. It is also clear that every σ-
continuous relation is conditionally σ-continuous.
A function π : P → P is conditionally σ-continuous (or quasi-σ-continuous) if
its graph has this property.

Theorem 3.4.1 Let (P, ≤) be a σ-complete poset. A function π : P → P is conditionally σ-continuous if and only if it is quasi-expansive and for every chain a0 ≤ a1 ≤ · · · ≤ an ≤ an+1 ≤ · · · of elements of P of type ω, if an ≤ π(an) and π(an) ≤ π(an+1) for all n ∈ N, then π(sup({an : n ∈ N})) = sup({π(an) : n ∈ N}).

The proof is easy and is omitted. □


It follows from these remarks that if π : P → P is conditionally σ-continuous, where (P, ≤) is a σ-complete poset with zero 0, then {πn(0) : n ∈ N} is an increasing chain and, moreover,

π(sup({πn(0) : n ∈ N})) = sup({π(πn(0)) : n ∈ N}).

Consequently, the element a∗ := sup({πn(0) : n ∈ N}) is a fixed point of π.


We shall now modify the above definitions a little by relating the above properties to certain selected subsets of posets.
Let (P, ≤) be a σ-complete poset and P0 a subset of P. Furthermore, let R be a binary relation on P. We say that R is conditionally σ-continuous relative to P0 whenever:
(1) R is conditionally expansive on P0, i.e., for every pair a, b ∈ P0 such that a ≤ b and a R b there exists c ∈ P0 such that b R c and b ≤ c.
(2) For every countable chain C ⊆ P0 of type ω and every monotone and expansive function f : C → P0, if a R f(a) for all a ∈ C, then sup(C) R sup(f[C]).
(Note: sup(C) and sup(f [C]) may not belong to P0 .)
A closer look at the proof of Theorem 3.2.7 reveals that its conclusion can be
reached under weaker assumptions:

Theorem 3.4.2 Let (P, ≤) be a σ-complete poset with zero 0 and let P0 be a subset
of P. Suppose that a relation R ⊆ P × P is conditionally σ-continuous relative to
P0 . If 0 ∈ P0 and the set P0 ∩ δR (0) is nonempty, then R has a fixed point in P.

Proof We argue as in the proof of Theorem 3.2.7. Let a0 be an arbitrary element of P0 ∩ δR(0). As the elements 0 and a0 belong to P0 and 0 R a0 and 0 ≤ a0, there exists, by the conditional expansivity of R on P0, an element a1 ∈ P0 such that a0 R a1 and a0 ≤ a1. Taking the pair (a0, a1) and applying the conditional expansivity of R on P0, we see that there exists an element a2 ∈ P0 such that a1 R a2 and a1 ≤ a2. Taking in turn the pair (a1, a2), we find an element a3 ∈ P0 such that a2 R a3 and a2 ≤ a3, again by conditional expansivity. Continuing this pattern of argument, we define a countable chain 0 ≤ a0 ≤ a1 ≤ · · · ≤ an ≤ an+1 ≤ · · · of elements of P0 such that 0 R a0 R a1 R . . . R an R an+1 R . . . (see the figure following the proof of Theorem 3.2.7). It follows from the conditional σ-continuity of R relative to P0 that

sup{0, a0, a1, . . . , an, . . .} R sup{a0, a1, . . . , an, . . .}.

Putting a∗ := sup{a0, a1, . . . , an, . . .}, we see that a∗ R a∗. So a∗ is a fixed point of R. □

We discuss further modifications of the above definitions.


Let (P, ≤) be a poset and let P0 be a subset of P. Furthermore, let R1 and R2 be two binary relations on P. We say that R1 and R2 are adjoint on P0 if the following two conditions hold:

(∀a1, b1 ∈ P0)(a1 ≤ b1 ∧ a1 R1 b1 → (∃c1 ∈ P0)(b1 R2 c1 ∧ b1 ≤ c1)) (3.4.1)

(∀a2, b2 ∈ P0)(a2 ≤ b2 ∧ a2 R2 b2 → (∃c2 ∈ P0)(b2 R1 c2 ∧ b2 ≤ c2)) (3.4.2)

(see Fig. 3.6 below).

Fig. 3.6 (two diagrams: a1 R1 b1, b1 R2 c1 with a1 ≤ b1 ≤ c1; and a2 R2 b2, b2 R1 c2 with a2 ≤ b2 ≤ c2)

The above two conditions can be neatly combined into one property of the direct product P × P. Define the direct power (P × P, ≤) of the poset (P, ≤) with the usual coordinate-wise definition of the order on P × P. Thus,

(a1, a2) ≤ (b1, b2) if and only if a1 ≤ b1 and a2 ≤ b2 in (P, ≤),

for all pairs (a1, a2), (b1, b2) ∈ P × P. Furthermore, if R1 and R2 are binary relations on P, then R1 × R2 is the direct product of R1 and R2. Thus, for all (a1, a2), (b1, b2) ∈ P × P,

(a1, a2) R1 × R2 (b1, b2) if and only if a1 R1 b1 and a2 R2 b2.

Properties (3.4.1) and (3.4.2) are expressed as a single property of the poset (P × P, ≤):
(A) For every two pairs (a1, a2), (b1, b2) ∈ P0 × P0, if (a1, a2) ≤ (b1, b2) and (a1, a2) R1 × R2 (b1, b2), then there exists a pair (c1, c2) ∈ P0 × P0 such that (b1, b2) ≤ (c1, c2) and (b1, b2) R2 × R1 (c1, c2).
Notice that in the antecedent of (A) the relation R1 × R2 occurs. This relation is
replaced in the consequent by R2 × R1 . Thus, (A) does not express the property of
quasi-expansivity of the relation R1 × R2 on P0 × P0 .
The relations R1 and R2 are also called the forth and the back relations on P0 ,
respectively. The pair (R1 , R2 ) is also called the back and forth pair (of relations)
relative to P0 .
Let (P, ≤) be a σ-complete poset and let P0 be a subset of P. A pair (R1, R2) of binary relations on P is said to be σ-continuously adjoint relative to P0 if R1 and R2 are adjoint on P0 and, furthermore, for every chain C = {an : n ∈ N} in P0 of type ≤ ω and for every monotone and expansive function f : C → P0 such that

a2n R1 f(a2n) and a2n+1 R2 f(a2n+1), for all n ∈ N,

it is the case that

sup(C) R1 sup(f[C]) and sup(C) R2 sup(f[C]).

(We note that sup(C) and sup(f [C]) need not belong to P0 .)
An element a∗ ∈ P is called a fixed point of the pair (R1 , R2 ) if a∗ is a fixed point
of both relations R1 and R2 , i.e., a∗ R1 a∗ and a∗ R2 a∗ hold.

Theorem 3.4.3 Let (P, ≤) be a σ-complete poset with zero 0 and let P0 be a subset
of P. Suppose that a pair (R1 , R2 ) of binary relations on P is σ-continuously adjoint
relative to P0 . If 0 ∈ P0 and the set P0 ∩ δR1 (0) is nonempty, then the pair (R1 , R2 )
has a fixed point in P.

Proof We suitably modify the proof of Theorem 3.4.2. We define a countable chain C (of type ≤ ω) a0 ≤ a1 ≤ · · · ≤ an ≤ an+1 ≤ · · · of elements of P0. We put a0 := 0. Let a1 be an arbitrary element of P0 ∩ δR1(0). As a0, a1 ∈ P0, a0 ≤ a1 and a0 R1 a1, there exists, by (3.4.1), an element a2 ∈ P0 such that a1 ≤ a2 and a1 R2 a2. Taking then the pair a1, a2 and applying (3.4.2), we see that there exists an element a3 ∈ P0 such that a2 ≤ a3 and a2 R1 a3. Then, applying (3.4.1) to the pair a2, a3, we find an element a4 ∈ P0 such that a3 ≤ a4 and a3 R2 a4 (see Fig. 3.7 below). Continuing, we define an increasing chain C = {an : n ∈ N} in P0 such that a0 R1 a1 R2 a2 R1 a3 R2 a4 . . . a2n R1 a2n+1 R2 a2n+2 . . . The function f : C → C defined by f(an) := an+1, for all n ∈ N, is expansive and monotone. Furthermore,

a2n R1 f(a2n) and a2n+1 R2 f(a2n+1), for all n ∈ N.

As the pair (R1, R2) is σ-continuously adjoint relative to P0, we have that sup(C) R1 sup(f[C]) and sup(C) R2 sup(f[C]). Let a∗ := sup(C). Then a∗ = sup(f[C]). It follows that a∗ R1 a∗ and a∗ R2 a∗. This concludes the proof of the theorem. □

As a somewhat trivial application of Theorem 3.4.3 we give a simple proof of the aforementioned theorem of Cantor:

Theorem 3.4.4 Any two countable dense linear orders without endpoints are isomorphic.

Fig. 3.7 (the alternating chain a0 R1 a1, a1 R2 a2, a2 R1 a3, a3 R2 a4, . . . with a0 ≤ a1 ≤ a2 ≤ a3 ≤ a4 ≤ . . .)

Proof Let (X1, ≤1) and (X2, ≤2) be two such orders. A partial isomorphism (from (X1, ≤1) to (X2, ≤2)) is any partial function f : X1 → X2 such that f is injective on its domain Dom(f) and, furthermore, for any elements x, y ∈ Dom(f), x ≤1 y if and only if f(x) ≤2 f(y). A partial isomorphism f : X1 → X2 is finite if its domain Dom(f) is a finite set. 0 denotes the empty partial isomorphism. A (partial) isomorphism f is total if Dom(f) = X1 and the co-domain CDom(f) is equal to X2.
Let P be the set of all partial isomorphisms from (X1, ≤1) to (X2, ≤2). P is partially ordered by the inclusion relation ⊆ between partial isomorphisms. (Each partial isomorphism is a subset of the product X1 × X2.) The poset (P, ⊆) is σ-complete because the union of any ω-chain of partial isomorphisms is a partial isomorphism. Moreover, the empty isomorphism 0 is the least element in (P, ⊆).
We define two relations R1 and R2 on P. As X1 and X2 are countably infinite, we
can write X1 = {an : n ∈ N} and X2 = {bn : n ∈ N}. Given partial isomorphisms f
and g, we put:
f R1 g if and only if either f is a total isomorphism and g = f, or f is a finite isomorphism and g = f ∪ {(am, bn)}, where
(1) m is the smallest i such that ai ∉ Dom(f),
(2) n is the smallest j such that bj ∉ CDom(f) and f ∪ {(am, bj)} is a partial isomorphism.
(Note that the choice of n depends on the definition of m.)
f R2 g if and only if either f is a total isomorphism and g = f, or f is a finite isomorphism and g = f ∪ {(am, bn)}, where
(3) n is the smallest j such that bj ∉ CDom(f),
(4) m is the smallest i such that ai ∉ Dom(f) and f ∪ {(ai, bn)} is a partial isomorphism.
(The choice of m depends on the definition of n.)
(The choice of m depends on the definition of n.)
Let P0 ⊆ P be the set of all finite isomorphisms. Using the fact that the orders (X1, ≤1) and (X2, ≤2) are linear, dense, and without endpoints, it is easy to verify that (R1, R2) is a back and forth pair of relations relative to P0. But, more interestingly, the pair (R1, R2) is also σ-continuously adjoint relative to P0. The set P0 ∩ R1[0] is nonempty. Applying Theorem 3.4.3, we obtain that the pair (R1, R2) has a fixed point in (P, ⊆), say f∗. It follows from the definition of R1 and R2 that f∗ is a total isomorphism between (X1, ≤1) and (X2, ≤2). □
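The back and forth construction can also be carried out mechanically for finitely many steps. The Python sketch below is a hedged illustration, not the proof itself: X1 is taken to be the set of rationals in (0, 1) and X2 the set of dyadic rationals in (0, 1) (both choices are merely illustrative), each with an explicit enumeration, and the agents FORTH (R1) and BACK (R2) alternately extend a finite partial isomorphism following clauses (1)–(4); only the first few extensions are computed, yielding a finite approximation of the total isomorphism.

```python
# A hedged, finitely truncated run of the back and forth construction.  X1 is
# the set of rationals in (0, 1), X2 the set of dyadic rationals in (0, 1); the
# two enumerations below are illustrative choices.  FORTH (R1) and BACK (R2)
# alternately extend the finite partial isomorphism f.

from fractions import Fraction
from itertools import count
from math import gcd

def rationals():                       # enumeration a0, a1, ... of X1
    for q in count(2):
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield Fraction(p, q)

def dyadics():                         # enumeration b0, b1, ... of X2
    for k in count(1):
        for num in range(1, 2 ** k, 2):
            yield Fraction(num, 2 ** k)

def extends_ok(f, x, y):
    """f ∪ {(x, y)} is still order-preserving, hence a partial isomorphism."""
    return all((x < u) == (y < v) for u, v in f)

def back_and_forth(steps):
    f = []                             # the empty partial isomorphism 0
    for n in range(steps):
        dom = {u for u, _ in f}
        cod = {v for _, v in f}
        if n % 2 == 0:                 # FORTH performs R1: clauses (1) and (2)
            x = next(a for a in rationals() if a not in dom)
            y = next(b for b in dyadics() if b not in cod and extends_ok(f, x, b))
        else:                          # BACK performs R2: clauses (3) and (4)
            y = next(b for b in dyadics() if b not in cod)
            x = next(a for a in rationals() if a not in dom and extends_ok(f, a, y))
        f.append((x, y))
    return sorted(f)

print(back_and_forth(6))               # the first six extensions of the isomorphism
```

The witness searches terminate because both orders are dense and without endpoints, so a suitable element always occurs somewhere in the enumeration; this is precisely the fact used to verify (3.4.1) and (3.4.2) in the proof.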

The above relations R1 and R2 are partial functions with the same domain, viz. the set of finite partial isomorphisms plus the set of total isomorphisms. E.g., partial isomorphisms f such that either Dom(f) = X1 and CDom(f) ≠ X2 or Dom(f) ≠ X1 and CDom(f) = X2 do not belong to the common domain of R1 and R2. Furthermore, the existence of a total isomorphism between (X1, ≤1) and (X2, ≤2) is already implicit in the fact that the pair (R1, R2) is σ-continuously adjoint relative to P0, as can be checked. Thus, the above proof does not give a new insight into the original

proof by Cantor. The significance of the above proof is in the fact that it is uniformly
formulated in the general and abstract framework provided by Theorem 3.4.3.
The above example gives rise to the construction of a certain simple situational
action system. The set of states of the system coincides with the set P of partial
isomorphisms between (X1, ≤1) and (X2, ≤2). The system is equipped with two elementary actions: R1 and R2, the forth and the back action, respectively. By way of analogy with the game of chess, it is assumed that the system has two agents: BACK and FORTH. The agent FORTH performs the action R1 while BACK performs R2. The agents are cooperative—they obey the rule that their actions are performed alternately. Furthermore, they are successful in action—in the limit they are able to define the required total isomorphism between the posets (X1, ≤1) and (X2, ≤2).
A possible situation is a pair s = (u, α), where u is a partial isomorphism and
α is an agent, i.e., α ∈ {BACK, FORTH}. u is the (unique) state of the system
corresponding to the situation s. This state is marked as f (s). The situation s is read:
‘The agent α performs his action in the state u’. The situation (0, FORTH) is called
initial. (We recall that 0 is the empty partial isomorphism.)
Let S be the set of possible situations. The transition relation Tr between situations
is defined as follows. For any situations s and t,
s Tr t if and only if either s = (u, FORTH), t = (v, BACK) and u R1 v
or s = (u, BACK), t = (v, FORTH) and u R2 v.
Thus, if s Tr t, then either f(s) R1 f(t) or f(s) R2 f(t). The union R1 ∪ R2 is called the transition relation between states. Note that Tr is a (partial) function.
M s := (P, R1 ∪ R2 , {R1 , R2 }, S, Tr, f ) is a situational action system. M s is called
the system of the back and forth actions. M s is ordered in the sense that its set of
states P is ordered by ⊆.
The extended structure (P, ⊆, R1 ∪ R2, {R1, R2}, S, Tr, f) is an example of an
ordered situational action system. (See the remarks placed at the end of this section.)
The back and forth method has a wider range of applications than the proof of the
above theorem of Cantor. It was used by Fraïssé in his characterization of elementary
equivalence of models. The method was formulated as a game by Ehrenfeucht. The
players BACK and FORTH often go by different names: Spoiler and Duplicator,
(H)eloise and Abelard, ∀dam and ∃va; and are often denoted by ∃ and ∀.
We recall that an elementary action system M = (W, R, A) is constructive if the relation R and each of the actions A ∈ A are subsets of the order ≤. If M is constructive, the relation R and the actions A ∈ A need not be expansive and nondecreasing, because they are not assumed to be total. For the same reason, the above relations need not be expansive. In the above example, the reduct M := (P, R1 ∪ R2, {R1, R2}) of M s is a constructive, σ-complete elementary action system.
A relation ≤, defined on a set S, is called a quasi-order if ≤ is reflexive and transitive. If ≤ is a quasi-order, then the binary relation ρ defined on S as follows:

a ρ b if and only if a ≤ b and b ≤ a, (3.4.3)



(a, b ∈ S) is an equivalence relation on S. Let [a]ρ stand for the equivalence class of the element a relative to ρ: [a]ρ := {x ∈ S : a ρ x}. The quotient set

S/ρ := {[a]ρ : a ∈ S}

is partially ordered by the relation ≤, where

[a]ρ ≤ [b]ρ if and only if a ≤ b

(a, b ∈ S).
Definition 3.4.5 Let M s = (W, R, A, S, Tr, f) be a situational action system. The system M s is ordered if its reduct M := (W, R, A) is an ordered action system, i.e., the set W is endowed with a partial order ≤. □
In what follows ordered situational action systems are marked as

M s = (W, ≤, R, A, S, Tr, f), (3.4.4)

thus explicitly indicating the order relation ≤ on the set of states W.


Given an ordered situational action system (3.4.4), we see that the partial order ≤ can be lifted to a quasi-order on the set S of possible situations. For any situations a, b ∈ S, we define:

a ≤ b if and only if f(a) ≤ f(b), (3.4.5)

i.e., the state f(a) corresponding to a is less than or equal to the state f(b) corresponding to b. ≤ is a quasi-order on S.
The following fact is immediate:
Proposition 3.4.6 Let M s = (W, ≤, R, A, S, Tr, f) be an ordered situational system. Let the quasi-order ≤ and the equivalence relation ρ on the set S of situations be defined as in (3.4.5) and (3.4.3). Then, for any a, b ∈ S:
(8) [a]ρ = [b]ρ if and only if f(a) = f(b).
(9) [a]ρ ≤ [b]ρ if and only if f(a) ≤ f(b).
(10) The function f∗, which to each equivalence class [a]ρ assigns the state f(a), is well-defined. Furthermore, f∗ is an isomorphism between the posets (S/ρ, ≤) and (W, ≤). □
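The quotient construction (3.4.3) underlying Proposition 3.4.6 is easy to carry out for finite quasi-orders. The sketch below (Python; the quasi-order is a hypothetical toy example in which two situations share the same state) forms the ρ-classes and the induced partial order on them.

```python
# A toy instance of the quotient construction (3.4.3) and Proposition 3.4.6:
# from a finite quasi-order on a set S of situations, form the rho-classes and
# the induced partial order on S/rho.  The quasi-order below is hypothetical:
# situations "a" and "b" share the same state, "c" lies strictly above them.

def quotient(S, leq):
    classes = []
    for s in S:
        cls = frozenset(x for x in S if (s, x) in leq and (x, s) in leq)
        if cls not in classes:
            classes.append(cls)
    # The induced order is well-defined on classes: compare any representatives.
    order = {(c, d) for c in classes for d in classes
             if (next(iter(c)), next(iter(d))) in leq}
    return classes, order

S = ["a", "b", "c"]
leq = {(x, x) for x in S} | {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}
classes, order = quotient(S, leq)
print([sorted(c) for c in classes])    # [['a', 'b'], ['c']]
print(len(order))                      # 3: two reflexive pairs and [a] <= [c]
```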
An ordered situational action system M s = (W, ≤, R, A, S, Tr, f) is σ-complete (inductive) if its reduct (W, ≤) is a σ-complete poset (inductive poset, respectively).
In the context of ordered situational systems, it is useful to have another, infinitary notion of reach, more closely related to the properties of the set S of situations of the system (cf. Definition 3.3.7).
Let M s = (W, ≤, R, A, S, Tr, f) be a σ-complete situational system. By the situational ω-reach of the system M s we mean a binary relation SReω on the set S defined as follows. For any situations a, b ∈ S,

SReω(a, b) if and only if either a = b and a Tr b, or there exists a possibly infinite sequence a0, a1, . . . , an, an+1, . . . of situations of S and a sequence A0, A1, . . . , An, An+1, . . . of actions of A such that
(i) an Tr an+1 and f(an) ≤ f(an+1), for all n,
(ii) an An an+1, for all n,
(iii) a0 = a and f(b) = sup({f(an) : n = 0, 1, . . .}).
Note that (i) can be formulated as:
(i)∗ an ≤ an+1, for all n.
Since an Tr an+1 implies that f(an) R f(an+1), for all n, it is easy to see that if SReω(a, b), then ReωM(f(a), f(b)) in the sense of Definition 3.3.7, but not conversely. We recall that ReωM is the ω-reach of the elementary system M = (W, ≤, R, A).
For example, in the situational system M s = (P, R1 ∪ R2, {R1, R2}, S, Tr, f) of the back and forth actions, defined as above, there exists a unique total isomorphism g between the posets (X1, ≤1) and (X2, ≤2) such that SReω((0, FORTH), (g, FORTH)) and SReω((0, FORTH), (g, BACK)).
Part II
Freedom and Enforcement in Action
Chapter 4
Action and Deontology

Abstract This chapter is concerned with the deontology of actions. According to the
presented approach, actions and not propositions are deontologically loaded. Norms
direct actions and define the circumstances in which actions are permitted, prohibited,
or mandated. Norms are therefore viewed as deontological rules of conduct. The
definitions of permission, prohibition, and obligatoriness of an action are formulated
in terms of the relation of transition of an action system. A typology of atomic norms
is presented. To each atomic norm a proposition is associated and called the normative
proposition corresponding to this norm. A logical system, the basic deontic logic, is
defined and an adequate semantics based on action systems is supplied. The basic
deontic logic validates the closure principle. (The closure principle states that if an action is not forbidden, it is permitted.) A system which annuls this principle is also presented. In this context the problem of consistency of norms is examined. The problem of justice is discussed. The key role here is played by the notion of a righteous family of norms.

On Norms

The whole life of man progresses among norms; they are components of our everyday
existence, which we are hardly conscious of. Norms govern our actions and behaviors.
Every man is biologically conditioned and socially involved. We belong to different
communities and groups, be they social, professional, denominational, family, or
political, which results in the imposition of various systems of norms on our lives and
activities from the ones that determine what is and what is not allowed to the ones that
specify what is demanded and what is prohibited. Norms are of different strength and
rank. They can be as simple as God’s commandments and as complex as regulations
of the Civil Law. They relate to a variety of spheres and aspects of life, including
the biological dimension. Norms are hierarchized. Norms can be general or detailed,
concrete or abstract. We recognize norms and meta-norms: norms concerning norms.
Norms shape a person’s life from the cradle to the grave. The addressees of particular
norms may be either all people or some members of a given community. The fact
of being subject to norms does not necessarily mean that we are under the yoke.
Although we have examples of communities kept in fetters by systems of totalitarian


norms (or, more frequently, communities in which those in power violate the letter
of law in a brutal way), we also have examples of communities in which lives can be
happily led. What is more, man can enjoy the benefits which flow from his free will.
Paradoxically, a suitable system of norms can basically broaden the sphere of man’s
freedom in society. ‘Good’ and respected systems of norms protect a community
from chaos and anarchy, and in consequence multiply the effectiveness of actions
undertaken both by individual members of a community and smaller subgroups
(business people, office workers, medical doctors, politicians, etc.). Looking at the
question from the contemporary language-related perspective, ‘good’ systems of
norms (and not just legal ones) are the initial conditions which serve to secure stability
of social life of both individuals and groups in a society. Such systems can create
the space for actions that lead to the positive development and prosperity of the
community as a whole.
The above general (and rather banal) statements introduce the proper subject of
this chapter: what are ‘good’ sets of norms? We shall try to answer the question from
the precise perspective of formal logic. We shall have to pay a price of our choice,
though: the simplification of a complex matter and the exclusion of some of the most
vital aspects of the problem. Still, a simplified perspective will allow us, I believe,
to get to the essential heart of the matter.
A person’s behavior and actions have roots and motivations. For example, the
feeling of hunger or thirst forces man to take actions intended to satisfy these basic
needs. His actions are determined by biological constraints. Further actions aimed
at meeting biological needs are channeled through norms which determine what can
be done. If he lives in a developed society, he will go to a restaurant to eat or to a
supermarket to buy food to bring home to prepare a meal with. Ordinarily, a person
will not kill another human being in order to satisfy their hunger, be they a member
of the same or another community. (A community of cannibals may seem to provide
an obvious exception but in many recorded examples of such groups, human flesh
is not eaten to satisfy hunger but for other reasons, such as to absorb the spirit of
the deceased.) Societies also permit people to kill animals for food but here too both
what they can kill and how they can kill it are regulated by secular and religious laws.
If one lives in a developed country and falls ill, it is norms that determine what
one needs to do. One calls on the doctor or, if it is more serious, one calls for an
ambulance (though whether one does either may also depend on whether one has
health insurance in a society that requires it). Those who are well-off can bypass
bureaucracy and head off to the private clinic of their choice. At the extreme, one
can take the ultimate measure and commit suicide.
We treat behaviors as actions. If someone’s conduct was morally good, then they
did something that was in compliance with the moral code accepted by them, although
it may perhaps have violated other norms such as legal ones. Certain conducts are
biologically or physiologically conditioned, e.g., through pain, the feeling of hunger,
or lack of sleep. Nevertheless, behavior always consists in carrying out a certain
activity. When we are in pain, we typically behave in ways to ease or eliminate
it, e.g., through taking a prescribed drug or a palliative. When we feel sleepy, we
normally go to bed.

The term norm itself acquires, in this text, a broader meaning than in jurispru-
dence. We invest it here with a different sense. It covers moral norms, legal norms,
norms of language, norms of social coexistence which define models of behavior in
various communities, norms of professions, norms of production which are binding
in business, norms of conventional conducts or etiquette, and principles that regulate
the use of all kinds of appliances, from cars to coffeemakers. Grice’s Maxims may
be regarded as norms of effective communication as a form of purposeful activity. In
the framework presented herewith, norms play a role similar to the inference rules
in logic, yet this analogy is rather loose. To put it in a nutshell, norms are rules
of actions including conduct. Norms direct actions and define the circumstances in
which actions are permitted, prohibited, or mandated. To each norm a certain propo-
sition is associated; it is called the normative proposition corresponding to the norm.
In the simplest case of atomic norms, it takes the form of an implication of two
propositions. Norms, however, are not reducible to propositions; they are objects of
another order.
One may separate a special category of norms; we call them praxeological norms.
(The criteria of separation are not clear-cut however.) Praxeological norms are exem-
plified by the principles that serve producing a car or a TV set. They concern techno-
logical and organizational aspects of producing a commodity. Praxeological norms
are concerned with efficient ways and means of attaining the planned goals. Are such
norms deontologically loaded? The norms that govern manufacturing should not vio-
late legal regulations, e.g., the regulations concerning air or water pollution or child
labor. That is obvious. But manufacturing is more concerned with keeping up techno-
logical regimentation and lowering production costs than with the principles of Civil
Code. Praxeological norms are in this sense deontologically loaded—the manufac-
turer performs actions that are in accordance with the adopted technology regime.
Norms delineate the direction of scientific research, not to say they condition the
scientific method. Proving mathematical theorems and defining notions take place
according to principles of logic. Rules of reasoning can be perceived as norms deter-
mining the features of a well-conducted proof. Only results obtained in the way of
deduction, thus in accordance with principles of logic, attain the status of a mathe-
matical result, inasmuch as they are original. All other methods, such as illumination
or persuasion, are not accepted. Here, the famous dictum of Jan Łukasiewicz comes
to mind: logic is the morality of thought and language.
There are strict principles which are binding in the natural sciences; that is, norms
relating to acknowledgment of scientific discoveries of the empirical nature, as well as
to the acknowledgment of scientific theories of a varying range. Empirical facts con-
firmed repeatedly by independent research teams are considered irrefutable. There
exist accepted procedures for acknowledging or rejecting hypotheses and empirical
theories. There exist also criteria by which to determine the reliability of research
apparatus and technical devices built on the basis of an accepted theory. A good
example here is the laser range finder—a precise and tested tool popularly applied
in the construction industry and in land surveying. This tool would not have been
possible if it were not for quantum mechanics.

On Deontic Logic

Two fundamentally different approaches to deontological problems should be


mentioned. (In order to facilitate the analysis we shall confine ourselves to proposi-
tional deontic systems.)
The dominant type of deontic logic singles out a list of deontic operators such as:
O ‘It is obligatory’
P ‘It is permitted’
F ‘It is forbidden’
and treats them as monadic logical connectives that assign propositions to proposi-
tions. Thus, the deontic operators are viewed as proposition-forming functors defined
on propositions, i.e., sets of states of affairs. This approach, initiated in the 1950s,
following the pioneering work of von Wright, was subsequently enriched and devel-
oped in various directions, e.g., by incorporating temporal or quantificational index-
ing explicitly into formalism or by introducing dyadic deontic operators in order
to do justice to ordinary normative discourse. Gabbay et al. (2014/2015) provide a
comprehensive account of deontic logic.
Here, we discuss a somewhat different approach to the above issues. It conceives
of the deontic operators as proposition-forming functors defined on actions. This
approach also originates from the papers of G.H. von Wright (and M. Fisher). Actions
(or deeds) bear deontic values (being obligatory, forbidden, permitted). We also
want to delineate a certain research program and to indicate how it could possibly
be implemented. The crucial point consists, of course, in a clear explanation of
how actions are related to deontic operators. We also discuss the notion of the so-
called atomic norm and its relationship to the above interpretation of the deontic
operators. (The term basic norm, in German: Grundnorm, can be also used. But here
the term norm receives a wider meaning than in jurisprudence, referring not only to
legal norms only but also moral norms, linguistic norms (in German: Sprachliche
Normen), social norms that provide patterns of behavior in social communities, etc.)
Trypuz and Kulicki (2014) write: “Deontic considerations are usually conducted
in one of the following contexts: (i) general norms expressed in: legal documents,
regulations, or implicitly present in the society in the form of moral or social rules;
(ii) specific norms, i.e., duties of particular agents in particular situations. Norms
of the two kinds refer to obligatory, permitted, and forbidden actions or states. (…)
In deontic logic the two types of norms are not usually present together in formal
systems. They are often regarded in deontic literature as linguistic variants of the
same normative reality.” Trypuz and Kulicki call the norms of the two types a-norms
(for action norms) and s-norms (for state norms), respectively. Both kinds can be
present in the same normative systems. For example, they remark after Atienza and
Manero (1998) that both kinds of norms are present in the Spanish constitution.
In this book the crucial definition of obligatoriness of an action is entirely formu-
lated in terms of this action and of the relation of direct transition in a given elementary
action system. A similar remark applies to the remaining deontic operators. These
definitions give rise to a deontic logical system, denoted by DL, which is adequately

characterized by natural and intuitive axioms. DL is decidable. The logical validity


of deontic formulas can be determined by an algorithm which is a straightforward
extension of the familiar zero-one method of classical propositional logic.
However, there are certain philosophical problems that should be discussed. One
of these is the question whether norms can be said to bear a truth value. According
to our standpoint, to each elementary norm there corresponds a unique proposition
which is called the normative proposition corresponding to the norm. The key issue
is to provide criteria for the truth and falsehood of these propositions. We believe
the solution we propose sheds some light on the problem how one can meaningfully
apply truth functional connectives to norms, as is (tacitly) assumed in deontic logic.
The sentences which are built up from action variables by applying deontic operators
comply with the principle of bivalence: they are either true or false. The conception
presented here is thus based on the following three assumptions:
1. Actions (and not states of affairs) are permitted, forbidden, or obligatory.
2. Norms are certain rules for actions.
3. To each atomic norm an elementary proposition is assigned which is called a ‘nor-
mative proposition’ in the terminology of von Wright.
Norms, however, are not reducible to propositions. In our approach norms play
a role somewhat similar to that of rules of inference in logic—norms guide actions
and determine circumstances under which some actions are permitted, forbidden, or
obligatory. Action theory investigates, describes, and classifies all forms of human
activity from a universal and unifying deontic perspective. This perspective is con-
stituted by the collections of norms. Whether it be proving a mathematical theorem,
building a house, making a movie or writing a novel, each of these highly diverse
actions is conjoined with a specific set of norms (rules) that one should obey so as to
achieve the desired goal. In other words, these sets of norms, being a specific envelope
of action, form the basis for action. Every movie director, writer, or architect knows
the rules of his/her profession, that is, he/she knows certain well-established collec-
tions of norms, specific to this profession, that he/she should obey while producing
a movie, writing a book with an intriguing plot, or designing a new house.
The notion of an elementary action system plays a crucial role in the approach
presented below.

4.1 Atomic Norms

In his Norm and Action, Georg Henrik von Wright defines norms as entities that
govern actions. This viewpoint is adopted also in this monograph. He divides norms
into rules, prescriptions, customs, directives, moral norms and ideals, and presents a
deeper analysis of these categories of norms. For example, prescriptions are subse-
quently divided into commands, permissions, and prohibitions. In his book he also
investigates the problem of the truth value of norms. He argues that rules and pre-
scriptions do not carry truth values and are therefore outside the category of truth.
In this book we follow this path. There is, however, a difference in the apprehension

of norms; in this book the central role is played by elementary action systems as
primary units. Norms in the semantic stylization are strictly conjoined with action
systems but the category of atomic norms is secondary. Action systems go first and
norms are associated with action systems. The issue whether a norm is a rule, a
prescription, etc., depends on the category (or rather a class) of distinguished action
systems—the character of a given class of action systems determines the respective
category of associated norms. It should be added that within the class of elemen-
tary action systems the discussion on categories of norms in the von Wright style is
very limited because elementary action systems are devoid of many aspects which
are necessary for a deeper discussion of norms. For instance, the category of an
agent of actions lies outside the vocabulary of elementary systems. The much wider
notion of a situational action system provides the right framework for a discussion of
norms because this notion incorporates necessary ingredients thereby making such
a discussion possible.
Von Wright discerns two groups of norms, viz. norms concerning action (acts
and forbearances) and norms concerning activity. (Forbearances are not discussed
in this book—they are linked rather with situational action systems, not elementary
ones.) Both types of norms are common and important. ‘Close the door’ orders an
act to be done. ‘Smoking allowed’ permits an activity. ‘If the dog barks, don’t run’
prohibits an activity. The theory presented here does not differentiate actions from
activities. We claim that within the adopted formalism the latter are reducible to the
former. We accept the standpoint that from the formal viewpoint each activity is a
class of actions, obtained by distinguishing a single action in each action system
from a class of such systems. On the other hand, an action as a one-time act is a
concrete action belonging to a set of actions (though not necessarily atomic) of an
individually defined and clearly named system of action. An instance of such an
action is the stabbing to death of Julius Caesar.
In this section we confine ourselves only to discussing a particular case of norms—
the notion of an atomic norm. The notion is relativised to a definite elementary action
system M = (W, R, A). From the semantic viewpoint, these simplest norms will be
identified with figures of the form (Φ, A, +), (Φ, A, −) or (Φ, A, !), where A is an
atomic action of A and Φ is an elementary proposition, i.e., Φ ⊆ W .
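Before the general discussion, it may help to fix the picture with a small computational sketch. The fragment below (written in Python; the state names, the action name and the proposition are illustrative assumptions, not the author's notation) represents an elementary action system with relations coded as sets of pairs of states, and atomic norms coded as the triples just described.

# A hypothetical elementary action system M = (W, R, A) with one atomic action,
# together with two atomic norms written as triples (Phi, action, tag).
W = {"u0", "u1", "u2"}                       # the set of states
R = {("u0", "u1"), ("u1", "u2")}             # the relation of direct transition
open_door = {("u0", "u1")}                   # an atomic action: a set of possible
                                             # performances, i.e. pairs of states
A = {"open_door": open_door}
Phi = {"u0"}                                 # an elementary proposition: Phi is a subset of W
norms = [
    (Phi, "open_door", "+"),                 # 'Phi, therefore open_door is permitted'
    (W - Phi, "open_door", "-"),             # 'not Phi, therefore open_door is forbidden'
]

The same coding of relations and propositions is reused in the computational sketches that follow.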
The commonly understood norms do not form a homogeneous set. The family of
all norms has a rich hierarchical structure—there can be less or more general norms,
as well as less or more important norms. Norms are usually accompanied by a certain
preferential structure. This structure determines the order and principles according
to which the norm is applied.
A reasonable theory of norms presupposes a definite ontology of the world. (In
Chapter II the role of situations in the theory of action is underlined.) On the ground
level the category of individuals is distinguished. Each individual is equipped with a
certain repertoire of atomic actions he is able to perform. We may thus speak about the
norms guiding the actions of individuals. The next level is occupied by collectives
(or groups) of individuals. There are also specific norms which the members of
collectives are expected to obey. Any religious community is an example of such
a collective.

In philosophy one also distinguishes the category of individual concepts such as
the president of the United States or the king of France. The words ‘notion’ and
‘concept’ are treated here as synonyms. Without entering into details, we assume
that a notion is an abstract or general idea inferred or derived from specific instances.
For example, the concept of a horse is not identical with the animals in the world
grouped under this concept, that is, with its reference class or extension.
One may also distinguish collective notions (or universal concepts). The latter
category mainly comprises such objects as the notion of a horse, bachelor, worker,
etc., i.e., the notions labeled by common nouns. (In a simplified picture, by a name we
shall understand any proper or common noun in the sense of traditional grammar.)
We put an emphasis on the fact that according to our standpoint every concept is
viewed as an abstract entity reducible neither to the name of this notion nor to the
extension of the concept. (By analogy, when colored bricks lie on the floor in the
kindergarten, they form a set, which is an abstract entity, although the elements of this
set are material objects.) To each collective notion (and, hence, to the name associated
with such a notion) is thus assigned the totality (collective) of all individuals being
denotations of this name and called the extension of this notion. In fact, it is precisely
collective notions that serve to distinguish definite collectives, e.g., the collective
of teachers.
There are many theories of concept (the classical theory, prototype theory, etc.).
We shall not tackle here the general issues pertaining to the concept of concept, because
they are outside the scope of our theory.
An individual may belong to many collectives. This is often a source of the
conflict of norms, especially when the relations between the groups to which an
individual belongs are characterized by hostility. For example, a particular individual
may belong to a definite religious community and at the same time be a member of
another, say professional, group which is hostile or indifferent to any religion. There
can also be conflicts between norms guiding the actions of an individual and the
norms regulating the actions of a collective to which the individual belongs.
Norms define the ways and circumstances in which actions are performed, e.g.,
by specifying the place, time, and order of their performance, or by forbidding their
use for the second time. For example, in a chess game the instruction that allows
one to perform short castling is an example of a norm—it permits that a certain
action be performed, i.e., moving the king and a rook in a particular situation on
the chessboard. But there is also another norm that allows one to castle only once.
It forbids one to perform a castling for a second time. The terms ‘use of a norm’ or
‘application of a norm’ are given in the intuitive sense. In view of their role in an
explication of the notion of a norm, it seems that a theory of action should attach
definite meanings to these terms.
The circumstances in which an atomic norm is used are entirely determined by
states of the elementary action system under consideration. The atomic norms are
neutral with respect to the factors constituting the situational envelope of the system;
these factors (and the agents of actions in particular) are simply not taken into account
by any atomic norm.

The assumption that actions (and not states of affairs) are permitted, forbidden
or obligatory, departs, to a significant extent, from the traditional format of deontic
logic according to which deontic operators act as unary logical connectives assigning
propositions to propositions. In our approach the deontic operators
O ‘It is obligatory’
P ‘It is permitted’
F ‘It is forbidden’
are proposition-forming functors defined on actions. Each of the above operators
applied to an action A defines a certain proposition. Thus, OA, PA, and FA are
subsets of W , for any atomic action A.
At first, we shall present a few remarks concerning the semantic aspects of norms.
(Questions pertaining to their syntax will be discussed later.) The starting point here
is the notion of an elementary action system as an appropriate semantic unit. Here
are formal definitions.
Let M = (W, R, A) be an elementary action system. A positive (or a permissive)
atomic norm for the system M is any triple

(Φ, A, +), (4.1.1)

where A ∈ A and Φ is an elementary proposition about M, i.e., Φ ⊆ W .


(4.1.1) is read:
(1)∗+ ‘It is the case that Φ, therefore A is permitted to perform,’
or in short: ‘Φ, therefore A is permitted.’ Φ is called the hypothesis of (4.1.1).
To the norm (4.1.1) an elementary proposition is assigned,

Φ → PA, (4.1.2)

where → represents the implication operation on the power set ℘ (W ), i.e.,
Φ → Ψ := ¬Φ ∪ Ψ for all Φ, Ψ ⊆ W . (4.1.2) is called the normative proposition corresponding
to the norm (4.1.1). Φ → PA is a subset of W . It will be defined in the next section.
A negative (or a prohibitive) atomic norm for M is any triple

(Φ, A, −), (4.1.3)

where Φ and A are described as above. (4.1.3) is read:


(1)∗− ‘It is the case that Φ, therefore A is forbidden to perform,’
or, in short: ‘Φ, therefore A is forbidden.’
To the norm (4.1.3) an elementary proposition is assigned, viz.

Φ → FA, (4.1.4)
defined in Sect. 4.2 below, and called the normative proposition corresponding to the
norm (4.1.3).
The circumstances in which a positive or negative atomic norm can be used depend
only on one factor: the set of states of the system encapsulated by the proposition Φ.
The moment the proposition Φ is ascertained to be true, a positive norm allows the
performance of a certain action from A, while a negative norm forbids it.
The norm (4.1.1) does not decide the issues concerning the performability of the
action A in circumstances in which Φ, the antecedent of the proposition (4.1.2), is
true. It may happen that in each state u ∈ Φ the action A is not performable. In this
case the executive powers of the norm (4.1.1) are nil.
We shall now discuss some limit cases of atomic norms. They are singled out on
the following short list:

(W, A, +) (4.1.5)
(W, A, −) (4.1.6)
(∅, A, +) (4.1.7)
(∅, A, −). (4.1.8)

Proposition W (the full proposition) is satisfied in every state of the system; in
particular W is satisfied before any performing of an arbitrary atomic action A. Thus,
the norm
(W, A, +), (4.1.5)

can be read as follows:


(2)∗+ ‘A is permitted to perform’, shortly ‘A is permitted.’
The norms of the form
(W, A, −), (4.1.6)

are dual to (4.1.5). The norm (4.1.6) has the following obvious meaning:
(2)∗− ‘A is forbidden to perform’, shortly ‘A is forbidden.’
The empty proposition ∅ is never satisfied; in particular it is not satisfied before any
performance of an arbitrary action A. The triple

(∅, A, +), (4.1.7)

then can be read as follows:


(3)∗+ ‘A is not permitted to perform’, shortly ‘A is not permitted.’
Obviously, the prohibition (3)∗+ refers to any state of the system. If one accepts the
meta-norm saying that what is not permitted is forbidden (see the discussion of the
Closure Principle below), we see that the positive norm (4.1.7) has the same meaning
as the negative norm (4.1.6). We may write down this equivalence as follows: (4.1.7)
≡ (4.1.6).
The norms of the form (4.1.5) with the full condition W are called categorical
positive norms. In turn, the negative norms (4.1.6) with the full condition W are
called categorical negative norms. They forbid the performance of some actions
irrespective of the state of the system. Some of the ten commandments, as, e.g.,
‘Thou shalt not kill,’ ‘Thou shalt not steal’ assume the form of categorical negative
norms.
The prohibitive norm
(∅, A, −), (4.1.8)

says that the action A is (never) forbidden. If one accepts the Closure Principle, (4.1.8)
is equivalent to the permissive norm (4.1.5), i.e., (4.1.8) ≡ (4.1.5), as the set of states
of the system in which performing the action A is prohibited is empty. We, thus, see
that, in view of the above equivalences, the categorical negative atomic norms could
be equally well defined as permissive norms of the form (4.1.7).
The general idea behind the above semantic definitions of norms is this: norms
are viewed as deontically loaded rules of conduct. Norms are guides, enabling one to
perform (or to refrain from performing) actions and to build action plans. Elementary
norms do not specify the order in which actions are to be performed (these factors
belong to the situational envelope of a given action system). Once the norm (Φ, A, +)
is accepted, performing the action A is permitted whenever the action system is in a
Φ-state. Analogously, once the norm (Φ, A, −) is accepted, performing the action A
is canceled whenever the action system is in a Φ-state. In the next section, obligatory
norms are discussed. Their role in action is highlighted.

4.2 Norms and Their Semantics

Obligatory norms need a discussion of their own. They have greater executive power
than permissive norms. From the formal point of view, the simplest obligatory norms,
atomic obligatory norms, are identified with the triples

(Φ, A, !), (4.2.1)

where Φ is an elementary proposition and A is an action belonging to A. (Here,
M = (W, R, A) is assumed to be a fixed elementary action system.) The norm
(4.2.1) is read:
(1)! ’It is the case that Φ, therefore A is obligatory,’
or in short: ‘Φ, therefore A is obligatory.’
To the norm (4.2.1) an elementary proposition is assigned, viz.

Φ → OA, (4.2.2)
called the normative proposition corresponding to the norm (4.2.1). It will be
defined later.
While driving a car, we reach a crossroads where there is a road sign bearing a
white rightward arrow against the blue background. In this situation, we have to turn
right and drive on, following the direction indicated by the arrow in the road sign.
Obligatory norms can also have the form of a branched norm. For example, the
road sign bearing a two-cleft arrow is associated with the norm ordering the driver to
turn at the nearest crossroads behind the road sign to one of the streets that follows
the direction of the arrow, e.g., to the left or to the right but not straight ahead.
The action ordered by an obligatory norm is naturally also permitted. What makes
obligatory norms differ from permissive ones? Undoubtedly, it is a certain categorical
character that forces the agent to perform in a given state (or situation) the action
indicated by an obligatory norm, i.e., the action A in (4.2.1); other norms, even those
involving the proposition Φ, are insignificant in Φ. Obviously, one can imagine a
situation where the agent would feel the pressure of many obligatory norms at the
same time. In this situation, say s, he/she would have to perform a number of actions
simultaneously. It is usually impossible. It may well be that there is a contradiction
in a system of obligatory norms. Suppose we are given two obligatory norms

(Φ, A, !), (Φ, B, !) (4.2.3)

with the same antecedent Φ. Let us assume, moreover, that the sets of possible effects
of A and B in the states of Φ are disjoint, i.e., δA (u) ∩ δB (w) = ∅ for all u, w ∈ Φ.
The system of norms (4.2.3) is obviously contradictory in the intuitive sense—the
two actions A and B cannot be performed in any state u ∈ Φ. (We abstract here from
the situational envelope of the action system; thus, we cannot say that one action can
be performed before another, or that they are performed by different agents. These
components are not included into the notion of a state and consequently cannot be
articulated by norms of the form (4.2.1).)
In what way, then, do obligatory norms come before permissive ones? In trying
to answer this question, we will comment on two approaches to the problem of
obligatory norms.
The fact that a given action is ordered can be seen in a wider context. Agents
are usually involved in various action plans. In the more formal language we have
been using, one can say that an action plan is, in the simplest case, a certain finite
sequence of actions the agents intend to perform so that starting from a certain state
at the outset, they can attain a certain intended state of the system. Without a loss
of generality we may identify action plans with algorithms. The adoption by the
agents of a certain action plan gives obligatory strength to the actions included in it.
In implementing a given action plan, the agents are obliged to perform the actions
included in the plan in a definite order. Acceptance of the plan sanctions the priority
of actions that make the plan. Other actions, though permitted, are regarded then
as irrelevant or contingent. Therefore, one can say that an action is obligatory with
respect to the given action plan. However, it is not obligation in itself—the actions
included in the plan are to be performed in a definite order. So, if an action A is to
be performed as the third or the fifth, its obligation is just associated with this fact;
the action A is obligatory on account of the fact that it is to be performed as the third
or fifth and not, for example, as the seventh. The obligatory norms under which the
above situation falls will not be, however, atomic norms, i.e., they will not be of the
form (Φ, A, !), where Φ is an elementary proposition. The fact is that the order in
which the actions involved in an action plan are to be performed is indescribable by
sentences, whose meanings are elementary propositions. The above interpretation
of the obligatoriness of an action, i.e., an occurrence of the action in the accepted
action plan, leads to norms of a richer structure than the norms of the form (Φ, A, !).
Nevertheless, the Yiddish proverb ‘Man plans and God laughs’ adequately reflects
the significance of the plans we make and the actions that they render obligatory.
Yet another interpretation now appears here; let us call it the inner-system inter-
pretation, referring to the relation R of direct transition in the system (W, R, A). Our
point of view can be described as follows. To each atomic action A ∈ A an agent
is assigned, who is the performer of this action. The same agent may also perform
other atomic actions. The correspondence between actions and their agents need not
be functional. It is conceivable that the same action A may be performed, in different
states, by different agents. However, for the sake of simplicity, we assume that there
is only one agent of all the actions of A. In other words, the system is operated by
only one agent. This assumption enables us to restrict considerations to the formal-
ism of elementary action systems. Let u ∈ W and suppose B ⊆ A is a (nonempty)
family of atomic actions that are performable in u in the sense of Definition 1.4.1.
Thus, for every A ∈ B, there exists a state w such that u R w and u A w. What does it mean
that the agent has the freedom in the state u to perform any action from the family
B? One should distinguish here two situations which we shall briefly describe.
CASE 1. The agent has to perform one of the actions from B in the state u, but it
is only up to him/her which action from B to choose.
CASE 2. The agent does not have to undertake in the state u any actions from B.
If Case 1 holds, we say that the agent is in the situation of enforcement to perform a
certain action from B and—at the same time—has a free choice of each action from
the family B which he/she will perform. (It is obvious that if B consists only of one
action, free choice is illusory as the agent is forced to perform only one action.)
The relation R of direct transition, and only this relation, imposes a form of
enforcement upon the agent irrespective of who he/she is. Hence, speaking about
enforcement in the context of a given elementary action system (W, R, A), we should
prefix the word with the letter R and talk about R-enforcement, as it is
entirely determined by the relation R.
If several relations of transition are defined on W , different forms of enforcement
and obligations are imposed on the agent. For example, we can distinguish economic,
administrative, or legislative enforcement. This would give rise to a hierarchy of the
agent’s obligations being in compliance with the force that different relations R would
create. The relations expressing physical necessity (e.g., laws of gravity) would have
the strongest obligatory power, whereas obligations imposed by certain conventions
the weakest. Such an approach makes the mathematical formalism of the action
theory more involved for it leads to action systems of the form

(W, R1 , . . . , Rn , A) (4.2.4)

with a complicated network of different relations R1 , . . . , Rn of direct transitions.


The simplifications we have made enable us to consider, instead of the system (4.2.4),
the family of systems (W, Ri , A), i = 1, . . . , n, and so the systems that fall under
Definition 1.2.1.
For a given state u, Case 1 is mathematically expressed as follows:

(∀A ∈ B)(δA (u) ∩ δR (u) ≠ ∅) &
(∀w ∈ W )(w ∈ δR (u) ⇒ (∃A ∈ B)(w ∈ δA (u))). (4.2.5)

The first conjunct of (4.2.5) ascertains that each of the actions A ∈ B is performable
in u, whereas the second conjunct, which can be briefly written as follows

δR (u) ⊆ ⋃{δA (u) : A ∈ B}, (4.2.6)

says that every direct transition uRw from u to an arbitrary state w is accomplished
by some action of B.
Suppose now that A is a single atomic action. When does the agent find himself in
the situation of being forced to perform the action A? Such a situation is a particular
instance of Case 1, i.e., when B = {A}. Condition (4.2.5) then takes the form δA (u) ∩
δR (u) ≠ ∅ & δR (u) ⊆ δA (u) or, equivalently,

∅ ≠ δR (u) ⊆ δA (u). (4.2.7)

(4.2.7) thus states that u is not terminal (which means that ∅ ≠ δR (u)) and every
transition u R w originated in u is accomplished by A. In other words, every transition
from the state u into any state admissible by R can be executed solely through
performing the enforced action, i.e., the action A; hence, there is no possibility of
‘bypassing’ this action. The assumption of nonterminality of the state u is a necessary
condition ensuring the nontriviality of all possible enforcements pertinent to this state.
The agent is not R-obliged to perform the action A in u if and only if (4.2.7) does
not hold. The negation of (4.2.7) is equivalent to the disjunction:

δR (u) = ∅ or δR (u) \ δA (u) ≠ ∅. (4.2.8)

The first disjunct says that u is a terminal state, while the second states that there is
a transition from u which is not accomplished by A.
Case 2 is equivalent to the fact that the agent is not enforced to perform any action
that belongs to B in the state u. Thus, Case 2 is equivalent to

(∀A ∈ B)(δR (u) = ∅ or δR (u) \ δA (u) ≠ ∅). (4.2.9)



In particular, if u ∈ δR (u) and, for every A ∈ B, u ∉ δA (u), the agent does not have
to perform any action of B in the state u.
The above remarks give rise to a typology of atomic actions in the system
(W, R, A).

Definition 4.2.1 A is called an R-obligatory atomic action in the state u if and only if
∅ ≠ δR (u) ⊆ δA (u) (i.e., (4.2.7) holds). Equivalently, we shall say: ‘It is obligatory
to perform A in u,’ when R is clear from context. 

Definition 4.2.2
(i) A is an R-forbidden atomic action in u if and only if δR (u) ∩ δA (u) = ∅ (i.e., A is
not R-performable in u). Equivalently, we shall say: ‘It is forbidden to perform
A in u.’
(ii) A is an R-permitted action in u if and only if A is not R-forbidden in u, i.e.,
when δR (u) ∩ δA (u) ≠ ∅. We then say: ‘It is permitted to perform A in u.’ 

According to the above conception, if an action A is obligatory (in a state u), then
it is also permitted in u, but the converse need not hold.
For reasons of economy, when the main concern is the effectiveness of the system
(especially when the system itself is to be a well-planned and working production
system), the ‘futile’ states of the system are rejected. These are states in which some
actions A are permitted but are not obligatory. In such a system, in each state u which
belongs to the domain of A, the action will be obligatory.
The fact that the action A is obligatory is represented by the elementary proposition
OA. Thus, we assume:
(4.2.10) The proposition OA is true in u (i.e., u ∈ OA) if and only if ∅ ≠ δR (u)
⊆ δA (u).
The proposition Φ → OA ascertains that it is obligatory to perform the action A
whenever the system is in any state belonging to Φ. Hence,
(4.2.10)Φ Φ → OA is true in u if and only if u ∈ Φ implies that u ∈ OA (i.e.,
u ∉ Φ or ∅ ≠ δR (u) ⊆ δA (u)).
The arrow → in the above proposition has thus the same meaning as material
implication.
We can also establish truth conditions for normative propositions corresponding
to atomic permissive or prohibitive norms, according to Definition 4.2.2.
The fact that the action A is permitted is represented by the proposition PA. Thus:
(4.2.11) The proposition PA is true in u (i.e., u ∈ PA) if and only if δR (u) ∩ δA (u) ≠ ∅
(i.e., A is performable in u).
This is the weak interpretation of the permissibility of an action—A is permitted in u
if at least one performance of A in u is admissible by the relation R. Then, we have:
(4.2.11)Φ Φ → PA is true in u if and only if u ∈ Φ implies that u ∈ PA.

Analogously,
(4.2.12) The proposition FA is true in u (i.e., u ∈ FA) if and only if δR (u)∩δA (u) = ∅
(i.e., A is unperformable in u).
It follows from the above definitions that both OA and PA are false in any terminal
state u (that is, when δR (u) = ∅). Hence, in view of (4.2.12), FA is true in any terminal
state, as expected. We also have:
(4.2.12)Φ Φ → FA is true in u if and only if u ∈ Φ implies u ∈ FA.
If A ⊆ R, conditions (4.2.10), (4.2.11) and (4.2.12) reduce, respectively, to the
following ones:
(4.2.10)∗ u ∈ OA if and only if ∅ ≠ δA (u) = δR (u)
(4.2.11)∗ u ∈ PA if and only if δA (u) ≠ ∅
(4.2.12)∗ u ∈ FA if and only if δA (u) = ∅.
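For finite systems the conditions (4.2.10)–(4.2.12) can be evaluated mechanically. The following sketch (in Python; the helper names are ad hoc assumptions, one possible rendering of the definitions) computes the propositions OA, PA and FA as subsets of W .

# Deontic propositions of an atomic action A in a finite system (W, R, A),
# following (4.2.10)-(4.2.12); relations are sets of pairs of states.
def delta(rel, u):
    # the set of possible effects of a relation in the state u
    return {w for (v, w) in rel if v == u}

def O(A, R, W):   # u in OA  iff  delta_R(u) is nonempty and contained in delta_A(u)
    return {u for u in W if delta(R, u) and delta(R, u) <= delta(A, u)}

def P(A, R, W):   # u in PA  iff  delta_R(u) and delta_A(u) intersect
    return {u for u in W if delta(R, u) & delta(A, u)}

def F(A, R, W):   # u in FA  iff  delta_R(u) and delta_A(u) are disjoint
    return {u for u in W if not (delta(R, u) & delta(A, u))}

W = {"u0", "u1", "u2"}
R = {("u0", "u1"), ("u1", "u2")}
A = {("u0", "u1")}
print(O(A, R, W))   # {'u0'}: there the action is the only admissible transition
print(F(A, R, W))   # FA = {u1, u2}: u2 is terminal, and A is unperformable in u1

The normative proposition Φ → OA of (4.2.10)Φ is then simply the set (W − Φ) ∪ OA, and similarly for PA and FA.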

Note One may also distinguish a strong form of permission of a given action A,
which we write as P S A and read: ‘The action A is strongly permitted.’ P S A is the
elementary proposition defined as follows:

P S A is true in u (i.e., u ∈ P S A) if and only if ∅ ≠ δA (u) ⊆ δR (u).

The proposition P S A is equivalent to the total performability of A: u ∈ P S A if and
only if A is totally performable in u. P S A implies PA, i.e., P S A ⊆ PA for every
action A.
The stipulation that δA (u) be nonempty in the definition of P S A is made so as to
exclude the states u in which A is vacuously permitted, i.e., the states u in which
fA (u) = ∅.
The distinction between the above two forms of permission, that is, between
the propositions PA and P S A, is relevant only in the case when the action A is not
deterministic.
In a similar way one may define strong forms of the prohibition and obligatoriness
of a given action A. We shall say that A is strongly forbidden in u, in symbols: u ∈ FS A,
if and only if either δA (u) ∩ δR (u) is empty or at least one possible performance of
A in u is not in R. Thus,

u ∈ FS A if and only if u ∈ FA or (∃w ∈ W )(A(u, w) & ¬R(u, w)).

A is strongly obligatory in u, symbolically: u ∈ OS A, if and only if ∅ ≠ δA (u) =
δR (u). 
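The strong operators admit the same computational reading. The sketch below (again with ad hoc helper names, not the author's notation) checks the displayed conditions for single states and exhibits an action that is weakly but not strongly permitted.

# Strong permission, prohibition and obligation of an action A in a state u.
def delta(rel, u):
    return {w for (v, w) in rel if v == u}

def PS(A, R, u):   # nonempty delta_A(u) contained in delta_R(u)
    return bool(delta(A, u)) and delta(A, u) <= delta(R, u)

def FS(A, R, u):   # forbidden, or some possible performance of A in u is not in R
    return not (delta(A, u) & delta(R, u)) or bool(delta(A, u) - delta(R, u))

def OS(A, R, u):   # nonempty delta_A(u) equal to delta_R(u)
    return bool(delta(A, u)) and delta(A, u) == delta(R, u)

R = {(0, 1), (0, 2)}
A = {(0, 1), (0, 2)}        # every performance of A in 0 is admissible
B = {(0, 1), (0, 0)}        # one performance of B in 0 falls outside R
print(PS(A, R, 0), OS(A, R, 0))    # True True
print(PS(B, R, 0), FS(B, R, 0))    # False True; yet B is weakly permitted in 0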

Permissive legal regulations as a rule take the form of a weak permission PA,
where A is a certain action considered by the law. For instance, the norms which
regulate the showing of pornographic movies have just such a form. Films of this
type can only be screened in certain places: in special cinemas but not in churches
or schools. (We obviously disregard here the situational context or the fact that
this action is not atomic.) The legal norm which allows capital punishment can be
seen in a similar light. Inasmuch as the legislator generally approves of this form of
dealing out justice, the specific regulations clearly define and regulate the ways the
punishment is executed. Not all forms of legal execution (in our jargon—not all
possible performances of this action) can be approved by the law. For example, it
may be possible to apply the electric chair, but the use of the guillotine may be
forbidden.
Remark 4.2.3 In situational contexts, a definite action is linked with its agents.
The latter may vary depending on the situation. (All nontrivial forms of agency
are expressed in situational action systems.) Thus, instead of saying that the given
action A is obligatory in u, one says ‘It is obligatory for the given agent to perform
A in u.’ This leads to the concept of deontic functors defining agents’ duties and
freedoms. This concept differs from the one presented in this section, where we
only speak about the obligatoriness of actions as such. In consequence, one may
distinguish four interesting situations depending on the state u of the system:
O+a A The agent a is obliged to perform the action A;
O−a A The agent a is obliged not to perform the action A;
¬O+a A The agent a is not obliged to perform the action A;
¬O−a A The agent a is not obliged not to perform A.
If the system is operated by only one agent a, the first and the third utterances
O+a A and ¬O+a A are reducible to elementary norms. The crucial point is thus to find
a plausible semantics for the second of the above utterances. One of the possible
solutions is to assume that the agent a must refrain from doing A (i.e., he/she is
obliged not to perform the action A) in u if and only if A is totally unperformable in
u. Such a solution reduces O−a A to an elementary proposition:

u ∈ O−a A if and only if δR (u) ∩ δA (u) = ∅.

In the light of the definition (4.2.12), the proposition O−a A would be equivalent to
FA. 
Linking the deontic functors P, O, and F with the relation R of direct transition
may seem to be a controversial assumption. The identification of permission with the
performability of an action (more strictly: with R-performability) particularly raises
doubts, because the notion of the performability of an action has clear connotations
related rather to the technical side of an action: an action is performable if there
exists a way, device, or machine that ensures achievement of the goal. (The goal is
conceived as one of many possible effects of the action.) Since the relation of direct
transition admits a large number of interpretations, each of them implies one specific
understanding of the propositions PA, OA or FA, for any action A. If R is understood
to be a familiar collection of action regulations (e.g., traffic regulations or the rules
of a one person game), the above conception of deontic functors agrees with our
intuitive understanding of them. On the other hand, when the relation R expresses
physical permission, and so specifies the range of admissible transformations of the
system whose behavior complies with definite physical laws, the meaning of the
functor P is closer to the meaning of the word ‘possible’ than ‘permitted,’ and O
becomes almost synonymous with ‘necessary’.
Note One may consider more complex semantic norms. For example, they may be
tree-like structures. Here is a tree with eleven vertices.
Fig. 4.1 A tree with eleven vertices: the root 0, its children a1 and a2 , and further vertices b1 , . . . , b4 and c1 , . . . , c4

A poset (W, ≤, 0) with zero 0 is a tree if for any w ∈ W , the set ↓ w =
{u ∈ W : u ≤ w} is well ordered. Pragmatic norms take the form of labeled trees.
Formally, each tree-like positive norm is a pair (T, +), where T is a finite labeled
tree with root 0 in which to each edge ab, where b is a child of a, an atomic action
Aba (from child b to its parent a) is assigned and to each vertex a a certain subset Φa
of W is assigned (Φa is thus the label of a). These norms are discussed in Sect. 6.3 in
the context of non-monotonic reasonings (but the discussion is limited there only to
certain (totally) permitted norms). Tree-like norms are well suited for action systems
operated by many agents.
For example, if the tree has three vertices and in the labeled form T it is depicted as

Fig. 4.2 A labeled tree with three vertices: the root labeled Φ and two children labeled Φ1 and Φ2 , with the edges labeled A1 and A2
then the normative proposition assigned to the norm (T, +) consists of pairs of
states (u1 , u2 ) such that whenever u1 ∈ Φ1 and u2 ∈ Φ2 then there are states
w1 , w2 ∈ Φ such that (u1 , w1 ) is a realizable performance of A1 and (u2 , w2 ) is
a realizable performance of A2 . The above proposition is therefore a subset of the
product W × W and not a subset of W . If Φ = W , the above condition implies
that the product of the normative propositions corresponding to the atomic norms
(Φ1 , A1 , +) and (Φ2 , A2 , +) is contained in the normative proposition corresponding
to the norm (T, +). (The arrows reverse the natural order of the tree.)
If the tree has two vertices and in the labeled form T it is depicted as
Fig. 4.3 A labeled tree with two vertices: the root labeled Φ and a single child labeled Φ1 , with the edge labeled A1
then the normative proposition assigned to the norm (T, +) consists of states u1
such that whenever u1 ∈ Φ1 then there is a state w1 ∈ Φ such that (u1 , w1 ) is a
realizable performance of A1 . If Φ = W , the norm (T, +) is identified with the
atomic norm (Φ1 , A1 , +). Thus positive atomic norms are limit cases of tree-like
positive norms. 
The above semantics for atomic norms makes it possible to construct a simple
logical deontic system.
Example 4.2.4 The model we outline below represents (in a simplistic way) the road
traffic in the suburbs of a big American city. The street topography is represented by
a planar graph with a regular tartan pattern. The nodes represent crossroads and the
arrows along the edges represent the direction of the traffic between the crossroads. A
single arrow a → b indicates that there is only one-way traffic from a to b, while the
double arrow a ↔ b indicates that in the street segment between a and b the traffic
runs in both directions. (Thus, a ↔ b is an abbreviation for a → b and b → a.)
In the graph under consideration the above-mentioned arrows are either horizontal
or vertical.
The options for the driver to maneuver at the crossroads depend, obviously, on
which side he/she approaches the crossroads and on the road sign set up at the entrance
to the crossroads. Each driver has only three actions in his/her repertoire. At each
crossroads he/she may turn left (← D), turn right (→ D), or drive straight on (↑ D).
There are no U-turns. Note, however, a simple but relevant fact: the qualification of
the directions in the graph above (straight, left, right) depends, of course, not only
on the crossroads at which the car finds itself, but also on the direction from which the car
has entered the crossroads. Thus, let us assume that the state of the car (and the state
of its driver) at crossroads b is determined not only by the very fact itself that his/her
car is at b, but also by the direction which the car has reached b from.
The graph models the system of crossroads in the city from the viewpoint of an
external observer (e.g., in a plane). Still, the directions (straight on, left, right) for the
car are defined locally with reference to the concrete position of the car approaching
the crossroads.

A state is therefore a pair (a, b) indicating that the driver finds himself/herself
at crossroads b which he/she reached from the nearest crossroads a, in compliance
with the traffic regulations. The crossroads a and b are therefore adjacent. We shall
then identify possible states with pairs (a, b) of adjacent crossroads joined with an
arrow (horizontal or vertical) commencing at a and passing toward b in compliance with
the graph. Let W be the set of states.
Each of the actions ← D, → D, ↑ D is therefore identified with the set of pairs of
states (and hence with a set of pairs of pairs of adjacent crossroads). More specifically,

← D := {((a, b), (c, d)) : b = c and the arrow leading from c to d points leftwards
for the driver moving toward b = c from the direction a}.
E.g., the following configurations provide instances of ← D: the car arrives at b (= c)
moving rightwards from a and turns to a crossroads d lying above b; or the car arrives at
b (= c) moving downwards from a and turns to a crossroads d lying to the right of b, etc.
The other actions are defined similarly.
The transition relation R between the states is defined as follows. Suppose u1 =
(a1 , b1 ), and u2 = (a2 , b2 ) are states. Then, u1 R u2 if and only if a2 = b1 . In
other words, u1 R u2 if and only if the car approaching the crossroads b1 directly
from that of a1 (in compliance with the traffic regulations), can then drive directly
from the crossroads b1 (=a2 ) to that of b2 . The resulting triple (W, R, A) with
A = {← D,→ D,↑ D} is therefore an elementary action system.
In this system, the actions of A are deontologically evaluated in particular states.
E.g., for the action ↑ D and a state u1 = (a1 , b1 ) we have:
↑ D is obligatory in u1 = (a1 , b1 ) if and only if for the car approaching the
crossroads b1 from a1 there is only one arrow in the graph commencing at b1
pointing out the direction that enables continuation of the ride: straight on.
(Diagram: crossroads a1 → b1 → c1 lie on a horizontal line, with a2 above and a3 below b1 ; the only arrow commencing at b1 leads straight on to c1 .)
In the above figure, the car running from a1 to b1 has only one path of continuation
of its ride: straight on from b1 to c1 .
In turn, in the following part of the graph
(Diagram: the same configuration, except that there is also an arrow commencing at b1 and leading down to a3 .)
the action ↑ D is no longer obligatory in the state u1 = (a1 , b1 ) because the car may
also turn right from b1 to a3 . Similar stories hold for the remaining actions. 
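The example can also be put to work mechanically. The sketch below builds a small instance of the model (the street plan is a hypothetical one chosen for illustration: all horizontal segments are one-way eastwards and all vertical segments one-way northwards, so it is not the plan of the figures above), constructs the states, the three actions and the relation R, and tests R-obligatoriness according to (4.2.7).

# A toy instance of the traffic model of Example 4.2.4.
from itertools import product

nodes = {(x, y) for x, y in product(range(3), repeat=2)}
arrows = {((x, y), (x + 1, y)) for (x, y) in nodes if (x + 1, y) in nodes} | \
         {((x, y), (x, y + 1)) for (x, y) in nodes if (x, y + 1) in nodes}
W = set(arrows)                      # states: (a, b) = 'reached b coming from a'

def direction(a, b):
    return (b[0] - a[0], b[1] - a[1])

def turn(d, how):                    # rotate the direction of travel
    dx, dy = d
    return {"straight": (dx, dy), "left": (-dy, dx), "right": (dy, -dx)}[how]

def action(how):
    # pairs of states ((a,b),(b,d)) in which the move from b to d points in the
    # indicated direction for a driver who reached b from a
    return {((a, b), (c, d)) for (a, b) in W for (c, d) in W
            if b == c and direction(c, d) == turn(direction(a, b), how)}

R = {(u, v) for u in W for v in W if v[0] == u[1]}   # direct transition

def delta(rel, u):
    return {w for (x, w) in rel if x == u}

def obligatory(A, u):                # (4.2.7): delta_R(u) nonempty and inside delta_A(u)
    return bool(delta(R, u)) and delta(R, u) <= delta(A, u)

u = ((0, 2), (1, 2))                 # on the top row the only way out is straight on
print(obligatory(action("straight"), u))   # True
print(obligatory(action("left"), u))       # False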

4.3 The Logic of Atomic Norms

The formal language we wish to define contains the following syntactic categories:

Sentential variables: p0 , p1 , . . .
Action variables: α0 , α1 , . . .
Boolean connectives: ∧, ∨, ¬
Deontic symbols: P, F, O
Auxiliary symbols: (, ).

From these disjoint sets the set Sent of sentences is generated in the following way:
(i) every sentential variable p is a sentence
(ii) for every action variable α, Fα, Pα and Oα are sentences
(iii) if φ and ψ are sentences then so are φ ∧ ψ, φ ∨ ψ and ¬φ
(iv) nothing else is a sentence.
(In clauses (ii) and (iii) we adopt the usual convention of suppressing the outer-
most parentheses in sentences.) There are no special action-forming functors in the
vocabulary of Sent.
Formulas of the shape φ → ψ and φ ↔ ψ are defined in the standard way as
abbreviations for ¬φ ∨ ψ and (φ → ψ) ∧ (ψ → φ), respectively.
The triples of the form

(φ, α, +), (φ, α, −), (φ, α, !) (4.3.1)

are regarded as permissive, prohibitive, and obligatory norms in the syntactic form,
respectively, where φ ∈ Sent and α is an action variable. They are respectively read:
‘It is the case that φ, therefore α is permitted,’ ‘It is the case that φ, therefore α is
forbidden,’ ‘It is the case that φ, therefore α is obligatory.’ In turn, the sentences of
the form
φ → Pα, φ → Fα, φ → Oα, (4.3.2)

are called the sentences corresponding to the norms (φ, α, +), (φ, α, −), (φ, α, !),
respectively. While norms (4.3.1) themselves do not carry a truth value, the sentences
(4.3.2) are logically meaningful in the sense that logical values, viz. truth or falsity,
are assigned to them. (4.3.2) are called normative sentences or norm sentences.
A model for Sent is a structure

M = (W, R, V0 , V1 ), (4.3.3)
where
(v) (W, R) is a discrete system,
(vi) V0 is a valuation that assigns to each sentential variable p a subset V0 (p) of W ,
(vii) V1 is an operator which assigns to each action variable α a binary relation V1 (α)
on W .
If (4.3.3) is a model, the triple (W, R, {V1 (αn ) : n ∈ N}) is an elementary action
system.
The property ‘φ is true (holds) at u in M’, symbolized M |=u φ, is defined by
induction on the number of symbols of the formula φ as follows (the prefix M may
be dropped if it is clear which model is intended):

M |=u p if and only if u ∈ V0 (p)


M |=u Pα if and only if (∃w ∈ W )(uRw & u V1 (α) w)
M |=u Fα if and only if ¬(∃w ∈ W )(uRw & u V1 (α) w)
M |=u Oα if and only if (∃w ∈ W )(uRw) & (∀w ∈ W )(uRw ⇒ u V1 (α) w)
M |=u φ ∧ ψ if and only if M |=u φ and M |=u ψ
M |=u φ ∨ ψ if and only if M |=u φ or M |=u ψ
M |=u φ → ψ if and only if not M |=u φ or M |=u ψ
M |=u ¬φ if and only if not M |=u φ.

A set Γ ⊆ Sent is true at u in M, symbolically: M |=u Γ , if and only if every
sentence φ of Γ is true at u in M.
The above clauses enable us to extend the valuation V0 onto the set Sent by putting:

u ∈ V0 (φ) if and only if M |=u φ,

for all u and φ.


It follows that V0 (Oα) = O(V1 (α)), V0 (Pα) = P(V1 (α)) and V0 (Fα) =
F(V1 (α)), where the propositions on the right-hand sides are defined in accordance
with formulas (4.2.10)–(4.2.12) of Sect. 4.2.
We say that φ is true in M, denoted M |= φ, if φ is true at every u ∈ W . φ is
tautological if and only if it is true in all models M. 
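For finite models the above clauses amount to a direct recursive evaluation procedure. Here is a sketch (the tuple encoding of sentences is an assumption made only for this illustration): sentential variables are written ("p", i), deontic atoms ("P", i), ("F", i), ("O", i) over the action variable αi , and the connectives ("not", s), ("and", s, t), ("or", s, t).

def holds(M, u, s):
    # M = (W, R, V0, V1) with V0, V1 given as dictionaries indexed by variables
    W, R, V0, V1 = M
    tag = s[0]
    if tag == "p":
        return u in V0[s[1]]
    if tag in ("P", "F", "O"):
        succ = {w for w in W if (u, w) in R}
        perf = {w for w in succ if (u, w) in V1[s[1]]}
        if tag == "P":
            return bool(perf)                          # some admissible performance
        if tag == "F":
            return not perf                            # no admissible performance
        return bool(succ) and succ <= {w for w in W if (u, w) in V1[s[1]]}
    if tag == "and":
        return holds(M, u, s[1]) and holds(M, u, s[2])
    if tag == "or":
        return holds(M, u, s[1]) or holds(M, u, s[2])
    if tag == "not":
        return not holds(M, u, s[1])

W = {0, 1}
R = {(0, 1)}
V0 = {0: {1}}                                          # p0 holds exactly in state 1
V1 = {0: {(0, 1)}}                                     # the action assigned to α0
M = (W, R, V0, V1)
print(holds(M, 0, ("O", 0)))                           # True: α0 is obligatory in 0
print(holds(M, 1, ("F", 0)))                           # True: state 1 is terminal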
A model M = (W, R, V0 , V1 ) is normalized if and only if R = ⋃{V1 (αn ) : n ∈ N}.

Proposition 4.3.1 For every model M = (W, R, V0 , V1 ) there exists a normalized
model M′ = (W, R′ , V0′ , V1′ ) with the same set of states such that for every sentence
φ ∈ Sent and any u ∈ W ,

M |=u φ if and only if M′ |=u φ.

Proof Define R′ := ⋃{R ∩ V1 (αn ) : n ∈ N}, V0′ := V0 and V1′ (α) := R ∩ V1 (α) for
every α. Then, for any sentential variable p and any action variable α,
M |=u p if and only if M′ |=u p,
M |=u Pα if and only if M′ |=u Pα,
M |=u Fα if and only if M′ |=u Fα,
M |=u Oα if and only if M′ |=u Oα.

From the above facts the thesis follows. 

We thus see that a sentence φ is tautological if and only if it is true in all normalized
models.
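Computationally, the normalization used in Proposition 4.3.1 is a single step: every action is intersected with R and the new transition relation is the union of the trimmed actions. A sketch, in the finite model format of the previous example:

def normalize(M):
    # Return the normalized model of Proposition 4.3.1 for a finite model
    # M = (W, R, V0, V1), with actions coded as sets of pairs of states.
    W, R, V0, V1 = M
    V1_new = {a: rel & R for a, rel in V1.items()}               # R intersected with V1(α)
    R_new = set().union(*V1_new.values()) if V1_new else set()   # union over all α
    return (W, R_new, V0, V1_new)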
So far we have introduced semantic concepts of validity. Now we shall introduce
syntactic counterparts by presenting an axiomatic system. Modus Ponens is the only
primitive rule of inference. We shall need a number of axioms. First, we take a set of
axioms adequate for classical sentential logic. Second, we want special axioms for
the deontic operators.
The basic deontic logic is the smallest set DL of sentences of Sent that satisfies:
(i) DL contains all instances of the schemata

φ → (ψ → φ) (4.3.4)
(φ → (ψ → ξ )) → ((φ → ψ) → (φ → ξ )) (4.3.5)
¬¬φ → φ; (4.3.6)

(ii) DL contains all instances of

Oα → Pα (4.3.7)
Fα ↔ ¬Pα; (4.3.8)

(iii) DL is closed under Detachment, i.e.,

φ, φ → ψ ∈ DL only if ψ ∈ DL.

As it is well known, (i) and (iii) provide an adequate basis for classical propo-
sitional logic (CL), so that every instance in Sent of a classical tautology belongs
to DL. (4.3.7) represents the old Roman maxim Lex neminem cogit ad impossibilia.
(4.3.7) excludes the states in which the obligation of an action cannot be fulfilled; that is,
the action is not realizable (Obligatio impossibilium). (4.3.8) (in fact, in its logically
equivalent form Pα ↔ ¬Fα) is often called the Closure Principle (see Sect. 4.4).
If Γ is a subset of Sent, we say that φ is deducible from Γ in DL, in symbols
Γ ⊢ φ, if there exists a finite sequence φ1 , . . . , φn of sentences such that φn = φ,
and for all i ≤ n, either φi ∈ Γ or φi is one of the axioms (4.3.4)–(4.3.8), or there
are j, k < i such that φk is φj → φi (so that φi is deducible from φj and φk by
Detachment). The string φ1 , . . . , φn is called a Γ -proof of φ. As a special case of this
relation, we put ⊢ φ if and only if ∅ ⊢ φ. Thus, ⊢ φ if and only if φ ∈ DL.

Every sentence of DL is tautological. This follows from the fact that in any model
M: (i) all instances of the axioms (4.3.4)–(4.3.8) are true in M, and (ii) if φ and
φ → ψ are true in M, then ψ is true as well. Thus, the logic DL is sound with
respect to the above relational semantics.

Theorem 4.3.2 (The Completeness Theorem) For every sentence φ, φ ∈ DL if and


only if φ is tautological.

The proof of the above theorem employs the well-known Lindenbaum techniques.
In order to apply them to DL, we shall make a list of the properties of DL which will
intervene in the proof of the Completeness Theorem.
The presence of classical logic in DL suffices to establish the Deduction Theorem
for DL:
Γ ∪ {φ} ⊢ ψ if and only if Γ ⊢ φ → ψ. (4.3.9)

(4.3.9) is used to prove, still only using CL, the following equivalence:

Γ ⊢ ψ if and only if ⊢ φ1 ∧ · · · ∧ φn → ψ for some φ1 , . . . , φn ∈ Γ. (4.3.10)
A set Γ ⊆ Sent is consistent if Γ ⊬ φ ∧ ¬φ.

Lemma 4.3.3 Suppose M |=u Γ for some model M and a state u. Then the set Γ is
consistent.
Proof Suppose by way of contradiction that Γ ⊢ φ ∧ ¬φ. Then, by (4.3.10), φ1 ∧ · · ·
∧φn → φ ∧ ¬φ is tautological for some φ1 , . . . , φn ∈ Γ . As M |=u Γ , we get that
M |=u φ ∧ ¬φ, which is impossible. 

A set Γ ⊆ Sent is defined to be maximal if Γ is consistent, and for any φ ∈ Sent,
either φ ∈ Γ or ¬φ ∈ Γ . (This is equivalent to requiring that Γ is not a proper subset
of any other consistent set.)

Lemma 4.3.4 Suppose ∇ is maximal.

∇ ⊢ φ implies φ ∈ ∇. (4.3.11)
If φ ∉ ∇, then ∇ ∪ {φ} is not consistent. (4.3.12)
DL ⊆ ∇. (4.3.13)
For any sentence ψ, exactly one of ψ and ¬ψ belongs to ∇,
i.e., ¬ψ ∈ ∇ if and only if ψ ∉ ∇. (4.3.14)
φ→ψ ∈∇ if and only if (φ ∈ ∇ implies ψ ∈ ∇). (4.3.15)
φ∧ψ ∈∇ if and only if φ ∈ ∇ and ψ ∈ ∇. (4.3.16)
φ∨ψ ∈∇ if and only if φ ∈ ∇ or ψ ∈ ∇. (4.3.17)
φ↔ψ ∈∇ if and only if (φ ∈ ∇ if and only if ψ ∈ ∇). (4.3.18)

The proof is easy and is omitted. 



The next result is a variant of the well-known Lindenbaum’s Lemma.

Lemma 4.3.5
(i) Every consistent set of sentences is contained in a maximal set.
(ii) Γ ⊢ φ if and only if φ belongs to every maximal extension of Γ .
(iii) ⊢ φ if and only if φ belongs to every maximal set.

We omit the proof. 

Lemma 4.3.6 The set {Oαn : n ∈ ω} is consistent.

Proof Construct a model M = (W, R, V0 , V1 ) for Sent such that M |=u {Oαn : n ∈
N} for some state u ∈ W . 

Corollary 4.3.7 For every maximal set ∇ and every action variable α there exists
a maximal set ∇′ such that Oα ∈ ∇′ and {Oβ : Oβ ∈ ∇} ⊆ ∇′ .

To the logic DL a certain elementary action system

M(DL) = (W (DL), R, A)

is assigned and called the canonical action system (for DL). W (DL) is the set of all
maximal sets in Sent. The relation R of direct transition is defined by the condition:

∇ R ∇′ if and only if {Oα : Oα ∈ ∇} ⊆ ∇′ . (4.3.19)

For each action variable α the atomic action A(α) on W (DL) is defined as follows:

∇ A(α) ∇′ if and only if ∇ R ∇′ & Pα ∈ ∇ & Oα ∈ ∇′ . (4.3.20)

We put A := {A(αn ) : n ∈ N}.


Definitions (4.3.19) and (4.3.20) immediately imply that A(α) ⊆ R for every
action variable α. The system M(DL) is thus normal.
The canonical model of the logic DL is the structure

Mo(DL) := (W (DL), R, V0 , V1 ), (4.3.21)

where W (DL) and R are defined as above, V0 (p) := {∇ ∈ W (DL) : p ∈ ∇} for


every sentential variable p, and V1 (α) := A(α) for every action variable α.
The following lemmas display the list of the more important properties of the
canonical model.

Lemma 4.3.8
(1) R is reflexive.
Let α be an action variable.
(2) ∇ R ∇′ and Oα ∈ ∇ imply Oα ∈ ∇′ for all ∇, ∇′ .
(3) ∇ R ∇′ and Oα ∈ ∇ imply ∇ A(α) ∇′ for all ∇, ∇′ .
(4) If Pα ∈ ∇ then there exists a maximal set ∇′ such that ∇ R ∇′ and ∇ A(α) ∇′ .

Proof (3). Suppose ∇ R ∇′ and Oα ∈ ∇. By (2), Oα ∈ ∇′ . In turn, in view of the
axiom (4.3.7), Oα ∈ ∇ implies Pα ∈ ∇. Therefore ∇ R ∇′ & Pα ∈ ∇ & Oα ∈ ∇′ .
So ∇ A(α) ∇′ .
(4). Assume Pα ∈ ∇. By Corollary 4.3.7 there exists a maximal set ∇′ such that
Oα ∈ ∇′ and {Oβ : Oβ ∈ ∇} ⊆ ∇′ . Thus ∇ R ∇′ & Pα ∈ ∇ & Oα ∈ ∇′ which
proves that ∇ A(α) ∇′ . 

Lemma 4.3.9 Let ∇ be a maximal set. For any variable α,

Oα ∈ ∇ if and only if (∀∇′ )(∇ R ∇′ ⇒ ∇ A(α) ∇′ ). (4.3.22)

Proof (⇒). Suppose Oα ∈ ∇. Let ∇ R ∇′ . Then ∇ A(α) ∇′ , by Lemma 4.3.8.(3).
(⇐). Assume the right-hand side of (4.3.22). The definition of A(α) yields (∀∇′ )
(∇ A(α) ∇′ ⇒ Oα ∈ ∇′ ). So (∀∇′ )(∇ R ∇′ ⇒ Oα ∈ ∇′ ) by the transitivity of ⇒.
Thus, {Oβ : Oβ ∈ ∇} ⊆ ∇′ implies Oα ∈ ∇′ , for all ∇′ , by the definition of R.
Since trivially {Oβ : Oβ ∈ ∇} ⊆ ∇, we obtain that Oα ∈ ∇. 

Lemma 4.3.10 Let ∇ be a maximal set. For any variable α,

Pα ∈ ∇ if and only if (∃∇′ )(∇ R ∇′ & ∇ A(α) ∇′ ). (4.3.23)

Proof The implication (⇒) is a paraphrase of Lemma 4.3.8.(4). To prove the reverse
implication, suppose the right-hand side of (4.3.23) is true for some ∇′ . Since
∇ A(α) ∇′ , (4.3.20) yields Pα ∈ ∇. 

Corollary 4.3.11 Let ∇ be a maximal set. Then, for any α,

Fα ∈ ∇ if and only if ¬(∃∇′ )(∇ R ∇′ & ∇ A(α) ∇′ ). (4.3.24)



The fundamental property of the canonical model Mo(DL) is that for any φ ∈ Sent,
and any maximal set ∇,

Mo(DL) |=∇ φ if and only if φ ∈ ∇. (4.3.25)

This is proved by induction on the length of φ, with (4.3.22)–(4.3.24) being involved


to show that

Mo(DL) |=∇ Oα if and only if Oα ∈ ∇,


Mo(DL) |=∇ Pα if and only if Pα ∈ ∇,
Mo(DL) |=∇ Fα if and only if Fα ∈ ∇.

From (4.3.25) and Lemma 4.3.5 we obtain that in general

⊢ φ if and only if Mo(DL) |= φ,

i.e., the sentences true in Mo(DL) are precisely DL theorems. From this observation
Theorem 4.3.2 follows. 
The above Completeness Theorem can be more succinctly expressed in terms of
admissible valuations.
An admissible valuation for DL is any mapping h : Sent → {0, 1} such that:
(1) h is a Boolean homomorphism for the classical connectives of Sent, i.e., for any
φ, ψ ∈ Sent
 
h(φ ∧ ψ) = min(h(φ), h(ψ))
h(φ ∨ ψ) = max(h(φ), h(ψ))
h(¬φ) = 1 if h(φ) = 0, and h(¬φ) = 0 otherwise,

(2) for any action variable α

h(Oα) ≤ h(Pα)

and

h(Fα) = 1 if h(Pα) = 0, and h(Fα) = 0 otherwise.

Each mapping h0 defined on the union of the set of sentential variables and the set
of formulas of the form Pα, where α ranges over action variables, and with values in
{0, 1}, extends to an admissible valuation h. This extension is not unique because in
case of h0 (Pα) = 1, one may assign any of the values 0, 1 to the formula Oα. The
mapping then extends unambiguously to the remaining formulae.
Let H be the set of all admissible valuations.

Theorem 4.3.12 For all φ, φ ∈ DL if and only if φ is validated by all h ∈ H.

Proof It is easy to see that every admissible valuation validates the axioms of DL
and the Rule of Detachment. Consequently, h(φ) = 1 for all φ ∈ DL and all h ∈ H.
This gives the “⇒”—part of the theorem.
To prove the reverse implication, we take the canonical model Mo(DL). Let, for
each maximal set ∇ ∈ W (DL), h∇ be the characteristic function of ∇, i.e., for every
φ ∈ Sent, h∇ (φ) = 1 if φ ∈ ∇, and h∇ (φ) = 0 otherwise.
In view of Lemmas 4.3.4 and 4.3.9–4.3.11, h∇ is an admissible valuation. Con-
versely, for every admissible valuation h there exists a unique maximal set ∇ such
that h = h∇ , viz. the set ∇ := {φ ∈ Sent : h(φ) = 1} is maximal. (The proof of this
fact is left as an exercise.)
It follows from the above remarks and Theorem 4.3.2 that if φ is validated by all
h ∈ H, then φ ∈ DL. So the “⇐”—part of the theorem also holds. 
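Theorem 4.3.12 makes the decidability of DL transparent: the value of a sentence under an admissible valuation depends only on the finitely many sentential variables and deontic atoms occurring in it, and the admissible patterns of values on these atoms can be enumerated. The sketch below (the same tuple encoding of sentences as in the earlier evaluation sketch; the helper names are ad hoc assumptions) decides membership in DL in this way.

from itertools import product

def atoms(s):
    if s[0] == "p":
        return {("p", s[1])}
    if s[0] in ("P", "F", "O"):
        return {("a", s[1])}
    return set().union(*(atoms(t) for t in s[1:]))

def value(s, hp, hP, hO):
    tag = s[0]
    if tag == "p":   return hp[s[1]]
    if tag == "P":   return hP[s[1]]
    if tag == "F":   return 1 - hP[s[1]]             # h(Fα) = 1 iff h(Pα) = 0
    if tag == "O":   return hO[s[1]]
    if tag == "not": return 1 - value(s[1], hp, hP, hO)
    if tag == "and": return min(value(s[1], hp, hP, hO), value(s[2], hp, hP, hO))
    if tag == "or":  return max(value(s[1], hp, hP, hO), value(s[2], hp, hP, hO))

def in_DL(s):
    at = atoms(s)
    ps = sorted(i for (k, i) in at if k == "p")
    acts = sorted(i for (k, i) in at if k == "a")
    for bits in product((0, 1), repeat=len(ps) + 2 * len(acts)):
        hp = dict(zip(ps, bits[:len(ps)]))
        hP = dict(zip(acts, bits[len(ps):len(ps) + len(acts)]))
        hO = dict(zip(acts, bits[len(ps) + len(acts):]))
        if any(hO[a] > hP[a] for a in acts):          # respect h(Oα) ≤ h(Pα)
            continue
        if value(s, hp, hP, hO) == 0:
            return False
    return True

print(in_DL(("or", ("not", ("O", 0)), ("P", 0))))     # True: axiom (4.3.7)
print(in_DL(("P", 0)))                                # False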

It follows from the above theorem that the system DL is decidable. Moreover,
Theorem 4.3.2 can be strengthened to a strong completeness theorem. We say that a
set Γ semantically entails φ if, for every model M, if every sentence of Γ is true in
this model then so is φ.

Theorem 4.3.13 (The Strong Completeness Theorem) Let Γ be any set of sentences
and φ any sentence. φ is deducible from Γ if and only if Γ semantically entails φ.

We shall sketch the proof. The implication (⇒) can easily be established by induction
on the length of the Γ -proof of φ. To prove the reverse implication, suppose the
sentence φ is not deducible from Γ . We shall construct a model for Sent, in which
the sentences of Γ are true and φ is not.
Let W (Γ ) be the family of all maximal sets that include Γ . Since Γ is consistent,
W (Γ ) is nonempty by Lemma 4.3.5.(i). We then define the relation R, the actions
A(α) on W (Γ ), and the assignments V0 , V1 in the same way as in the proof of
Theorem 4.3.2.
The quadruple Mo(Γ ) := (W (Γ ), R, V0 , V1 ) is thus a model for Sent which
obeys the following condition. Let ψ ∈ Sent and ∇ ∈ W (Γ ). Then:

Mo(Γ ) |=∇ ψ if and only if ψ ∈ ∇. (4.3.26)

(4.3.26) and Lemma 4.3.5.(ii) imply that

Mo(Γ ) |= ψ if and only if Γ  ψ. (4.3.27)

As Mo(Γ ) |=∇ Γ for all ∇ ∈ W (Γ ), we also have that

Mo(Γ ) |= Γ. (4.3.28)

Since the sentence φ is not deducible from Γ , (4.3.27) yields

φ is not true in Mo(Γ ). (4.3.29)

(4.3.28) and (4.3.29) prove that Γ does not semantically entail φ. 


The expressive power of the above deontic language is obviously very limited.
One cannot express in it various interdependencies holding between different actions.
For example, one semantically infers the sentence “I must not eat ice cream” from
the sentence “I must not eat sweets.” The sentence “If one eats ice-cream, then one
eats sweets” underlines the subordinate nature of the action of eating ice cream with
respect to the action of eating sweets. Similarly, if one fixes the oil pump in a car,
one fixes this car. The action of fixing a car is superior to the action of fixing the
oil pump in this car. In order to make comparisons of this sort, it suffices to extend
the above deontic language by adjoining a binary functor ∠, forming a formula from
action variables
α∠β (4.3.30)

with the intended reading ‘α is subordinated to β’ or ‘β is overriding the action α.’


Semantically, for any frame (W, R) and any binary relations A, B on W :

A ∠ B := {u ∈ W : fA (u) ⊆ fB (u)}. (4.3.31)

The logical value of ‘β is overriding α’ is therefore state dependent.1 (4.3.31)
defines the meaning of a phrase of form (4.3.30) provided that the meaning of the
action variables α and β has already been established. Note that A ⊆ B if and only
if A∠B = W .
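In the same computational reading, (4.3.31) is a one-line comprehension (a sketch; fA (u) is taken here to coincide with the set of possible effects of A in u, written δA (u) elsewhere in this chapter).

def subordinated(A, B, W):
    # A ∠ B per (4.3.31): the states in which every possible effect of A is
    # also a possible effect of B.
    def delta(rel, u):
        return {w for (v, w) in rel if v == u}
    return {u for u in W if delta(A, u) <= delta(B, u)}

# In particular, A ⊆ B holds if and only if subordinated(A, B, W) == W.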
By changing the grammar of the deontic language by incorporating into its body
a new formation rule allowing for expressions (4.3.30) for any action variables α and
β, we see that every model M = (W, R, V0 , V1 ) validates sentences of the form

Pα ∧ α∠β → Pβ (4.3.32)
Fβ ∧ α∠β → Fα. (4.3.33)

(4.3.32) and (4.3.33)2 immediately follow from the above semantic conditions
that characterize P and F in arbitrary models M and the fact that

M |=u α∠β if and only if (∀w ∈ W )(u V1 (α) w ⇒ u V1 (β) w). (4.3.34)

It is an open problem whether the above relational semantics adequately characterizes


the logical system that arises from DL by adjoining the axioms (4.3.32) and (4.3.33).3

1 The discussion of the meaning of the connective ∠ for compound actions is more intricate. This
problem will not be tackled here, though.
2 If β represents the act of ‘killing,’ and α represents ‘killing in self-defense,’ then according to

(4.3.33), in each case if killing is forbidden, then it is also forbidden to kill even if in self-defense.
We cannot see any deontic paradox here. This conclusion is justified when α and β are treated as
atomic actions (or the resultant relations of composite actions). But if the definition of ∠ is extended
onto compound actions, (4.3.32) and (4.3.33) are no longer true; see Sect. 4.5.
3 One can undermine the justifiability of the axioms (4.3.32) and (4.3.33) for compound actions. If

a compound action is forbidden, we can see no reason why each of its ‘parts’ should be forbidden;
see Sect. 4.5.

4.4 The Closure Principle

Every instance of the following principle

Pα → ¬Fα, (4.4.1)

which is a ‘half’ of the axiom (4.3.8), is clearly a tautological sentence. The principle
states that every permitted action is not forbidden. Its converse

¬Fα → Pα, (4.4.2)

which is called in jurisprudence the closure principle, is also a schema of tautological
sentences with respect to the above relational semantics. The closure principle states
that if an action is not forbidden, it is permitted.
(The closure principle also holds for the strong deontic operators P S and FS : if
A is an atomic action of a system (W, R, A), then for any u ∈ W , u ∈ P S A if and
only if u ∉ FS A.)
Principle (4.4.1) is commonly accepted in the known approaches to deontic logic.
It is logically equivalent to the tautology ¬(Fα ∧ Pα) that says that no action is
both prohibited and permitted. The closure principle, however, is sometimes questioned
as it is argued that systems of norms, including legal ones, are never complete in
the sense that they do not codify all conceivable and potentially legally significant
actions. For example, outside the criminal code there may remain groups of actions
that norm-fixers cannot in advance decide whether they should be prohibited or not.
Such a code is called open.
Tautologies (4.4.1) and (4.4.2) together make it possible to define the operator F in
terms of P since the conjunction of (4.4.1) and (4.4.2) is equivalent to Fα ↔ ¬Pα.
One may attempt to modify the above semantics in order to exclude the closure
principle from the set of tautologies. One of the possible approaches changes the
definition of an elementary action system so as to take into account, on a larger scale,
knowledge limitations of the agents operating the system. This means accepting a
more realistic definition of an action system.
The open action systems are identified with quadruples

(W, R+ , R− , A), (4.4.3)

where W is the nonempty set of states of the system, R+ and R− are binary, disjoint
relations on W , and A is a family of binary relations on W . The members of A are
called atomic actions. The triple

(W, R+ , R− ), (4.4.4)
which is a reduct of (4.4.3), is called a discrete system or space. Both the relations
R+ and R− impose some limitations on the possibilities of the transition from some
states to others.
R+ (u, w) says that the direct transition from u to w is certain while R− (u, w)
says that the direct transition from u to w is excluded with certainty. The limitations
expressed by R+ and R− arise both from the intrinsic order the system is based on
and from the epistemic status of the agent which can be far from omniscience. (We
assume, for the sake of simplicity, that the system is operated by only one agent.)
These factors are more explicitly articulated by means of the following assumptions:
(a) The agent knows which transitions are excluded (these are the elements of R− );
(b) The agent knows with certainty which transitions take place (these are the ele-
ments of R+ );
(c) The agent is unable to decide for some pairs of states (u, w) whether the system
admits the direct transition from u to w or not.
Therefore R+ ∪ R− need not be the full relation W × W .
We assume that the agent knows the elements of the family A, i.e., he/she can
make the list of all his/her (atomic) actions and, for any atomic action A ∈ A, the
agent knows the elements of A and the elements of R+ and R− as well.
Let (u, w) ∈ A ∈ A. If (u, w) ∈ R+ , the agent knows with certainty that the pos-
sible performance (u, w) is realizable. Consequently, the agent knows with certainty
that A is performable in u. (But he may not know that u is the current state of the
system; this fact may be hidden from him.) If (u, w) ∈ R− , the agent knows with
certainty that the possible performance (u, w) is not realizable. Since R+ ∪ R− need
not be equal to W × W , it may also happen that (u, w) ∉ R+ ∪ R− . In this case
the agent does not know if the possible performance (u, w) is realizable or not—
the knowledge available to him does not allow him to decide this issue.
We shall present a certain formal semantics for norms, which does not yield the
closure principle. Before entering into the discussion of this topic, suppose that for
some action A, the set δA (u) of possible effects of this action in the state u is empty.
Then, in view of the condition (4.2.11) of Sect. 4.2, the action A is not permitted in u.
The acceptance of the closure principle, then, immediately implies that A is forbidden
in this state. However, if the closure principle is rejected, one may argue that A need
not be regarded as forbidden in u. For instance, let A be the set of all possible moves
of a particular type of piece in a game of chess (so A is one of the actions as described
in Sect. 2.1). Suppose u is a possible configuration on the chessboard in which the
pieces of a given type do not take part, i.e., they have been eliminated from the game
and remain outside the board. Then, δA (u) is empty. In such a situation, we cannot
say, obviously, that any move of any piece of the given type is admitted, since—
physically—this piece is not on the chessboard. For the same reason we cannot say,
however, that any movement of any of the pieces is forbidden in the position u.
These comments point to the fact that rejecting the closure principle requires in
consequence an examination of the case when the set δA (u) is empty. We assume
that in such states, action A is neither permitted nor forbidden.

Definition 4.4.1 Let the system (4.4.3) be fixed and let A ∈ A.

(i) The action A is permitted in a state u ∈ W if and only if
(∃w ∈ W )(R+(u, w) & A(u, w)).
(ii) A is forbidden in u if and only if (∃w ∈ W )(A(u, w)) &
(∀w ∈ W )(A(u, w) ⇒ R−(u, w)).
(iii) A is obligatory in u if and only if (∃w ∈ W )(R+(u, w)) &
(∀w ∈ W )(R+(u, w) ⇒ A(u, w)) & ¬(∃w ∈ W )(R−(u, w) & A(u, w)).
Thus, A is obligatory in u if and only if every R+ -transition from u to a state
w, i.e., such that R+ (u, w), is accomplished by A (it is assumed that at least one
such a transition exists) and there does not exist a possible performance (u, w) of A
which—with certainty—is not realizable. If A is obligatory in u, then A is permitted
in u and A is not forbidden in u. If the set δA (u) is empty, the action A is neither
permitted nor forbidden in u.
As R+ ∩ R− = ∅, every permitted action in u is not forbidden in this state.
However, if A is not forbidden in u, then A need not be permitted in u. This is
certainly the case if the set δA(u) of possible effects of A in u is empty. The second
reason is that the condition (∃w ∈ W )(A(u, w) & ¬R−(u, w)) need not imply
(∃w ∈ W )(A(u, w) & R+(u, w)). Thus, it may happen that the proposition ¬PA ∩ ¬FA is
nonempty.
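To fix intuitions, here is a minimal computational sketch of Definition 4.4.1 for finite open action systems, with relations and atomic actions represented as Python sets of state pairs. The function names and the sample data are illustrative only and are not part of the formal development.

def permitted(A, R_plus, u):
    # A is permitted in u iff some possible performance (u, w) of A lies in R+.
    return any((u, w) in R_plus for (v, w) in A if v == u)

def forbidden(A, R_minus, u):
    # A is forbidden in u iff delta_A(u) is nonempty and every possible
    # performance (u, w) of A lies in R-.
    effects = [w for (v, w) in A if v == u]
    return bool(effects) and all((u, w) in R_minus for w in effects)

def obligatory(A, R_plus, R_minus, u):
    # A is obligatory in u iff some R+-transition leaves u, every R+-transition
    # from u is a performance of A, and no performance of A from u lies in R-.
    succ_plus = [w for (v, w) in R_plus if v == u]
    return (bool(succ_plus)
            and all((u, w) in A for w in succ_plus)
            and not any((u, w) in R_minus for (v, w) in A if v == u))

# Example: A is permitted but neither forbidden nor obligatory in state "u".
R_plus, R_minus = {("u", "v")}, {("u", "w")}
A = {("u", "v"), ("u", "w")}
print(permitted(A, R_plus, "u"), forbidden(A, R_minus, "u"),
      obligatory(A, R_plus, R_minus, "u"))    # True False False

In particular, if the set δA(u) is empty, the sketch classifies A as neither permitted nor forbidden in u, in agreement with the discussion above.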
The above remarks give rise to a formal semantics for the language Sent. This
semantics, whose construction is analogous to that presented in the previous section,
determines a logical system in which the closure principle is annulled. More specif-
ically, a model for Sent is now defined to be a quintuple

Mo = (W, R+ , R− , V0 , V1 ), (4.4.5)

where (W, R+ , R− ) is a space, V0 is a mapping which to each sentential variable


assigns a subset of W , and V1 is a mapping which to each action variable assigns a
binary relation on W .
Let DL+ be the logic which is defined by Detachment, the axioms (4.3.4)–(4.3.7)
of DL, and the axiom
Pα → ¬Fα. (4.4.6)

The above semantics adequately characterizes the logic DL+ .


Theorem 4.4.2 (The Completeness Theorem) Let α be a sentence of Sent. α is
deducible in DL+ if and only if it is true in every model.
Proof We shall omit the simple check that the logic DL+ is sound in every model
(4.4.5). The crucial part of the proof is the construction of the canonical model for
the logic DL+ .
The canonical model of the logic DL+ is the structure

Mo(DL+ ) := (W (DL+ ), R+ , R− , V0 , V1 ), (4.4.7)



where W (DL+ ) is the set of all maximal consistent sets of DL+ , R+ and R− are the
relations on W (DL+ ) defined by the conditions:

R+(∇, ∇′) if and only if {Oα : Oα ∈ ∇} ⊆ ∇′, (4.4.8)


R−(∇, ∇′) if and only if {Fα : Oα ∈ ∇} ∩ ∇′ = ∅, (4.4.9)

respectively.
V0 is the mapping which to each sentential variable p assigns the set {∇ ∈
W (DL+ ) : p ∈ ∇}, and V1 is the mapping assigning to each action variable α
the following atomic action A(α) on W (DL+ ):

A(α)(∇, ∇′) if and only if either (i) R+(∇, ∇′) & Oα ∈ ∇′ & Pα ∈ ∇   (4.4.10)
or (ii) R−(∇, ∇′) & Oα ∈ ∇′ & Fα ∈ ∇.

The proof of the following lemma can easily be established:

Lemma 4.4.3
(i) R+ is reflexive.
(ii) R+ ∩ R− = ∅.
Let α be an action variable and ∇ a maximal set. Then:
(iii) There exists a maximal set ∇′ such that R+(∇, ∇′) and Oα ∈ ∇′.
(iv) If Fα ∈ ∇ then there exists a maximal set ∇′ such that R−(∇, ∇′) and Oα ∈ ∇′.

The key property of the canonical model is that for any sentence φ, and any
maximal set ∇,
Mo(DL+ ) |=∇ φ if and only if φ ∈ ∇. (4.4.11)

This is proved by induction on the length of φ. The following lemma makes it possible
to show that

Mo(DL+ ) |=∇ Oα if and only if Oα ∈ ∇,


+
Mo(DL ) |=∇ Pα if and only if Pα ∈ ∇,
+
Mo(DL ) |=∇ Fα if and only if Fα ∈ ∇.

Lemma 4.4.4 Let ∇ be a maximal set and α an action variable. Then:



Oα ∈ ∇ if and only if
(∀∇′)(R+(∇, ∇′) ⇒ A(α)(∇, ∇′)) & ¬(∃∇′)(R−(∇, ∇′) & A(α)(∇, ∇′));   (4.4.12)

Pα ∈ ∇ if and only if (∃∇′)(R+(∇, ∇′) & A(α)(∇, ∇′));   (4.4.13)

Fα ∈ ∇ if and only if
(∃∇′)(A(α)(∇, ∇′)) & (∀∇′)(A(α)(∇, ∇′) ⇒ R−(∇, ∇′)).   (4.4.14)

Proof The right-hand sides (RHS, for short) of (4.4.12)–(4.4.14) are paraphrases of
conditions (i)–(iii) of Definition 4.4.1. (In (4.4.12) the condition (∃∇′)(R+(∇, ∇′))
is dropped since it follows from the reflexivity of R+.)
(4.4.12). (⇒). Assume Oα ∈ ∇. Then, clearly Pα ∈ ∇. To prove the first conjunct
of RHS of (4.4.12), assume R+(∇, ∇′). Then, Oα ∈ ∇′. So R+(∇, ∇′) & Oα ∈
∇′ & Pα ∈ ∇. Thus, (4.4.10)(i) holds, proving A(α)(∇, ∇′).
To prove the second conjunct of RHS of (4.4.12), suppose R−(∇, ∇′) & A(α)(∇, ∇′)
holds for some ∇′. This implies that of the disjuncts (4.4.10)(i)–(ii) characterizing
A(α), only (4.4.10)(ii) is true. So Fα ∈ ∇, which implies ¬Oα ∈ ∇. A
contradiction.
(⇐). Assume RHS of (4.4.12). Then, (∀∇′)(R+(∇, ∇′) ⇒ A(α)(∇, ∇′)) is true.
But it follows from the definition of A(α) that the sentence (∀∇′)(A(α)(∇, ∇′) ⇒
Oα ∈ ∇′) is true. So (∀∇′)(R+(∇, ∇′) ⇒ Oα ∈ ∇′) is true. Since R+(∇, ∇), we
thus have that Oα ∈ ∇.
(4.4.13). (⇒). Assume Pα ∈ ∇. By Lemma 4.4.3.(iii), there exists a maximal
set ∇′ such that R+(∇, ∇′) and Oα ∈ ∇′. So (4.4.10)(i) is true, which gives that
R+(∇, ∇′) & A(α)(∇, ∇′).
(⇐). Suppose R+(∇, ∇′) & A(α)(∇, ∇′) holds for some ∇′. This implies that of
the clauses (4.4.10)(i)–(ii) characterizing A(α) only (4.4.10)(i) is true. So Pα ∈ ∇.
(4.4.14). (⇒). Assume Fα ∈ ∇, and suppose A(α)(∇, ∇′) holds. Clause (4.4.10)(i)
cannot be true, for it would imply that Pα ∈ ∇, which, in turn, would contradict the
fact that Fα ∈ ∇. So (4.4.10)(ii) is true. Thus, R−(∇, ∇′) holds. It remains to show
that (∃∇′)(A(α)(∇, ∇′)). But this is a direct consequence of Lemma 4.4.3.(iv).
(⇐). Assume RHS.

Claim. (∀∇′)(A(α)(∇, ∇′) ⇒ (4.4.10)(ii)).
Proof of the claim. Suppose that A(α)(∇, ∇′) & (4.4.10)(i) holds for some ∇′. Since
(4.4.10)(i) is true, we have R+(∇, ∇′) and so it is not the case that R−(∇, ∇′). This
contradicts RHS, proving the claim.
Now, by RHS, we have that A(α)(∇, ∇′) holds for some ∇′. Hence, by the Claim,
we obtain that (4.4.10)(ii) is true, i.e., R−(∇, ∇′) & Oα ∈ ∇′ & Fα ∈ ∇ for some
∇′. So Fα ∈ ∇.
This completes the proof of Lemma 4.4.4, proving at the same time formula
(4.4.11). 
The Completeness Theorem, thus, follows from (4.4.11) and the fact that the
intersection of all maximal sets of DL+ gives exactly the set of DL+ theorems. 

Remarks 4.4.5
(1). The atomic actions A(α) in the canonical model (4.4.7) are very modestly tailored—
the intersection A(α) ∩ (R+ )c ∩ (R− )c is the empty relation. In consequence, the
conjunction of the negation of the permission of the action α and the negation of its
prohibition reduces to the following equivalence:
 
Mo(DL+) |=∇ ¬Pα ∧ ¬Fα if and only if ¬(∃∇′)(A(α)(∇, ∇′)),

which may seem to be a somewhat trivial condition. We can prevent this from happen-
ing, though, by changing the definition of A(α), in the following way: the right-hand
side of (4.4.10) is supplemented with the third disjunct, viz.,

not R+(∇, ∇′) & not R−(∇, ∇′) & Oα ∈ ∇′ & ¬Pα ∈ ∇ & ¬Fα ∈ ∇. (4.4.15)

The thus defined action A(α) also satisfies the equivalences (4.4.12)–(4.4.14). The
condition Mo(DL+ ) |=∇ ¬Pα ∧ ¬Fα is then fairly nontrivial.
(2). The strong completeness theorem, formulated below, can also be proved by
means of a slight modification of the above proof. (The details are left to the reader.)
Theorem 4.4.6 (Strong Completeness Theorem) Let Γ be a set of sentences and φ
any sentence. Then, φ is deducible from Γ on the ground of DL+ if and only if Γ
semantically entails φ (i.e., for any model (4.4.5), if every sentence of Γ is true in
this model, then so is φ.) 

4.5 Compound Actions and Deontology

The analytical theory of obligation presented thus far relates to atomic actions. Now
let us expand the theory to encompass compound actions. This will allow us to
include structural dependences occurring between the algebraic properties of com-
pound actions and metalogical properties of deontic operators.
This section is concerned with the issue of defining for a given compound action
A (over an elementary action system M = (W, R, A)), the values of the deontic
operators for A. We shall begin with the operators P and F.
Suppose that a compound action A is a singleton, which means that it consists
of only one finite sequence of atomic actions, A = {A1 A2 . . . An }. Let u be a state.
Intuitively, A is permitted in u if the relation R enables the agent(s) to perform the
string of consecutive actions A1 , A2 , . . . , An commencing with the state u. Formally,
A is permitted in u if there exists a nonempty string u1 , . . . , un of states such that
u R A1 u1 . . . un−1 R An un . We extend this definition to arbitrary compound actions
as follows.
Definition 4.5.1 Let M = (W, R, A) be an elementary action system. Let A be a
nonempty compound action for M and u a state of W . The action A is permitted in
the state u, symbolically:

(a) u ∈ PA, if and only if there exist a nonempty string u1 , . . . , un of states and a
sequence A1 A2 . . . An ∈ A such that u R A1 u1 . . . un−1 R An un .
(In other words, u ∈ PA ⇔df (∃w ∈ W ) u ResA w.)
For the empty compound action ∅ it is assumed that P∅ = ∅. 

Thus, A is permitted in u if and only if for at least one sequence A1 A2 . . . An ∈ A,


the action {A1 A2 . . . An } is permitted in u; equivalently, A is performable in u (see
Definition 1.7.3.(i)). This is a weak form of permission. For example, drug use is
permitted in this sense as some drugs, such as alcohol or cannabis, are permitted.
(We abstract here from the precise definition of a narcotic.) A stronger definition
would require that for all sequences A1 A2 . . . An ∈ A, the action {A1 A2 . . . An } is
permitted in u. This option is not investigated here.
We want the closure principle to be rendered in the object language. The following
scheme seems to be the best candidate: FA = ¬PA for every compound action A.
Accordingly, we arrive at the following definition:
Definition 4.5.2 Let M = (W, R, A) be an elementary action system. Let A be a
nonempty compound action for M and u a state of W . The action A is forbidden in
the state u, symbolically:
(b) u ∈ FA, if and only if there does not exist a nonempty string u1 , . . . , un of states
and a sequence A1 A2 . . . An ∈ A such that u R A1 u1 . . . un−1 R An un .
(Equivalently, u ∈ FA ⇔df ¬(∃w ∈ W ) u ResA w.) 
In other words, A is forbidden in u if and only if, for every string A1 A2 . . . An ∈ A,
the action {A1 A2 . . . An } is forbidden in u. So, homicide is forbidden if and only if
all its forms are forbidden.
It follows from Definition 4.5.2 that for the empty action ∅, we have that F∅ = W ,
i.e., ∅ is forbidden everywhere. It also follows from the above two definitions that any
compound action A is not permitted in every terminal state and hence it is forbidden
in all terminal states.
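The following sketch computes the propositions PA and FA of Definitions 4.5.1 and 4.5.2 for a finite system, under the assumption that a step u R Ai w requires the pair (u, w) to lie both in R and in Ai. Compound actions are represented as sets of tuples of atomic-action names; this encoding, and all names in the example, are illustrative only.

def step_targets(R, A, u):
    # States reachable from u by one performance of A that R admits.
    return {w for (v, w) in A if v == u and (u, w) in R}

def permitted_compound(CA, R, actions, u):
    # u is in P(CA): some nonempty string A1...An of CA is performable from u.
    def performable(string, state):
        if not string:
            return True
        head, *tail = string
        return any(performable(tail, w) for w in step_targets(R, actions[head], state))
    return any(string and performable(list(string), u) for string in CA)

def forbidden_compound(CA, R, actions, u):
    # Closure principle for compound actions: F(CA) is the complement of P(CA).
    return not permitted_compound(CA, R, actions, u)

# Two atomic actions and the compound action CA = {A1A2, A2}.
R = {("s", "t"), ("t", "r")}
actions = {"A1": {("s", "t")}, "A2": {("t", "r")}}
CA = {("A1", "A2"), ("A2",)}
print(permitted_compound(CA, R, actions, "s"))   # True, via the string A1A2
print(forbidden_compound(CA, R, actions, "r"))   # True, since "r" is terminal

The second call illustrates the remark above: in a terminal state no compound action is permitted, and hence every compound action is forbidden there.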
We now turn to the issue of defining, for a given compound action A (over an ele-
mentary action system M = (W, R, A)), the propositions OA, ‘A is obligatory.’ We
first assume that A consists of a single sequence of atomic actions, A = {A1 A2 . . . An }.
Let u be a state. Intuitively, A is obligatory in u if and only if at least one atomic action
occurring in the sequence of consecutive actions A1 , A2 , . . . , An becomes obligatory
whenever one starts performing this string in the state u. Formally, there exists at least
one sequence of states u1 , . . . , un such that u R A1 u1 . . . un−1 R An un and for every
sequence u1 , . . . , un of states such that u R A1 u1 . . . un−1 R An un it is the case that for
at least one i (1 ≤ i < n), the atomic action Ai is obligatory in ui−1 . (We adopt that
u0 = u.) The first conjunct of the above definition guarantees that the obligatoriness
of A does not trivialize, i.e., the action {A1 A2 . . . An } is actually permitted in u. This
is a weak form of obligatoriness. It is not assumed that all actions A1 , A2 , . . . , An are
obligatory when one performs them starting at u.

The above remarks are encapsulated in the following definition:


Definition 4.5.3 Let M = (W, R, A) be an elementary action system. Let A be a
nonempty compound action for M and u a state of W .
(c) If A is a singleton, A = {A1 A2 . . . An }, then
u ∈ OA if and only if u ∈ PA and for every sequence u1 , . . . , un of
states such that u R A1 u1 . . . un−1 R An un it is the case that for at least one i
(1 ≤ i < n), the atomic action Ai is obligatory in ui−1 . (Here u0 = u.)
(d) Generally, for an arbitrary A,
u ∈ OA if and only if u ∈ PA and for every sequence A1 . . . An ∈ A, if the
action {A1 A2 . . . An } is permitted in u then it is obligatory in u in the sense of
(c).
For the empty compound action ∅ it is assumed that O∅ := ∅. 
If A is an atomic action—more precisely, A = {A} for some atomic action
A—then the above definition gets reduced to the definition of the proposition OA, as
given in the formula (4.2.10) of Sect. 4.2. As u ∈ OA requires u ∈ PA, and no compound
action is permitted in a terminal state, A is not obligatory in terminal states, which seems to be natural.
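A sketch of Definition 4.5.3 can be built on top of the previous one. Since formula (4.2.10) of Sect. 4.2 is not repeated here, the atomic-obligation test is passed in as a parameter (atomic_obligatory below is a hypothetical helper supplied by the user). The index i is allowed to run up to n, a reading chosen so that the singleton case reduces to the atomic one, as stated above.

def runs(string, R, actions, u):
    # All state sequences u1, ..., un realizing the string A1...An from u,
    # each step being admitted by R.
    if not string:
        return [[]]
    head, *tail = string
    result = []
    for (v, w) in actions[head]:
        if v == u and (u, w) in R:
            result.extend([w] + rest for rest in runs(tail, R, actions, w))
    return result

def obligatory_compound(CA, R, actions, atomic_obligatory, u):
    # u is in O(CA) in the sense of Definition 4.5.3(c)-(d).
    def obligatory_string(string):
        all_runs = runs(list(string), R, actions, u)
        if not all_runs:
            return None               # the string is not permitted in u
        return all(                   # every run passes through an obligatory step
            any(atomic_obligatory(actions[string[i]], R, ([u] + run)[i])
                for i in range(len(string)))
            for run in all_runs)
    verdicts = [obligatory_string(s) for s in CA if s]
    permitted = [v for v in verdicts if v is not None]
    return bool(permitted) and all(permitted)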
Here is a list of simple observations about the propositions OA and PA.
Proposition 4.5.4 Let A and B be nonempty compound actions.
(1) P(A ∪ B) = PA ∪ PB and P(A ∩ B) ⊆ PA ∩ PB.
(2) OA ⊆ PA.
(3) O(A ∪ B) = OA ∪ OB and O(A ∩ B) ⊆ OA ∩ OB.
(4) If A ⊆ B, then PA ∩ OB = OA.
(5) P(A ◦ B) ⊆ PA and OA ∩ P(A ◦ B) ⊆ O(A ◦ B).
The proofs are simple and are omitted. 

Note One may also consider other meanings of the operator O than that provided
by Definition 4.5.3. For instance, one may adopt the following definition. Let A be
a fixed compound action and u an arbitrary state of W . Suppose that there exists a
path
u R u1 R u2 R . . . R un , (4.5.1)

for some states u1 , . . . , un (n ≥ 1), starting with the state u. There are several ways (if
any) of accomplishing the direct transition (4.5.1)—there may exist atomic actions
A1 , . . . , An such that uA1 u1 A2 u2 . . . An un . The string A1 A2 . . . An need not belong to
A. The fact that A is obligatory in u means that for every transition (4.5.1) there exist
a number m, 1 ≤ m ≤ n, and atomic actions A1 , . . . , Am such that (i) A1 . . . Am ∈ A
and (ii) u A1 u1 A2 . . . Am um . In other words, each possible exit from the state u must
be initiated by a certain performance of the compound action A, which will lead the
system to a state um for some m ≤ n. After reaching the state um , it is possible, most
probably, to perform actions, in this state, which differ from A, and which will, in
turn, lead to the state un . It is vital to note here that a certain nonempty initial segment
of (4.5.1) must be accomplished by means of the action A.
This meaning of O is not discussed in this book. 

We shall define in this section the semantics of the basic deontic logic of compound
regular actions. The logic is denoted by DLREG. Its syntax is divided into the
following four categories.
The category SV of sentential variables: p0 , p1 , . . .
The category AAV of atomic action variables: α0 , α1 , . . .
The category RAT of regular action terms
The category SENT of sentences.
SV and AAV are assumed to be primitive, disjoint categories. The category RAT is
defined as follows:

(ri) ∅ and ε are regular compound action terms (∅ and ε represent the empty
compound action and the action ε, respectively);
(rii) For every n ≥ 1 and α1 , . . . , αn ∈ AAV , the expression {α1 . . . αn } is a regular
compound action term;
(riii) If α and β belong to RAT , then so do α ∪ β and α ◦ β;
(riv) If α belongs to RAT , then so does α∗;
(rv) Nothing else is a regular compound action term.

The category SENT of sentences is defined as follows:


(si) SV ⊆ SENT , i.e., every sentential variable is a sentence;
(sii) If α is a regular compound action term, then Oα, Pα and Fα are sentences;
(siii) If φ and ψ are sentences, then φ ∧ ψ, φ ∨ ψ, and ¬φ are sentences.
A model for SENT is a quintuple

Mo = (W, R, V0 , V1 , V 2 ),

where
(i) (W, R) is a discrete system
(ii) V1 is a mapping assigning to each atomic action variable α an atomic action
V1 (α) ⊆ W × W
(iii) V 2 is a mapping which assigns to every compound action term a compound
action over {V1 (αn ) : n ∈ N} in the following way:

V 2 (∅) = ∅ and V 2 (ε) = ε,


V 2 ({α1 . . . αn }) = {V1 (α1 ) . . . V1 (αn )}

(i.e., V 2 ({α1 . . . αn }) is a singleton whose only element is the n-tuple of relations


V1 (α1 ), . . . , V1 (αn ). We recall that ∅ is the empty set and ε = {ε}.)

V 2 (α ∪ β) = V 2 (α) ∪ V 2 (β)

(= the union of the compound actions V 2 (α) and V 2 (β).)

V 2 (α ◦ β) = V 2 (α) ◦ V 2 (β)

(= the composition of the compound actions V 2 (α) and V 2 (β).)

V 2 (α ∗ ) = (V 2 (α))∗

(= the iterative closure of the compound action V 2 (α).)


(The above clauses imply that V 2 ({α1 . . . αn }) = V 2 ({α1 }) ◦ . . . ◦ V 2 ({αn }).)
V0 is a mapping assigning to each sentential variable p ∈ SV a subset V0 (p) ⊆ W .
V0 is then extended onto the entire set SENT by assuming that:

V0 (Oα) := O(V 2 (α)) for any compound action term α (4.5.2)

(The right-hand side of 4.5.2 is defined as in Definition 4.5.3.)

V0 (Pα) := P(V 2 (α)) (4.5.3)

V0 (Fα) := F(V 2 (α)) (4.5.4)


V0 (φ ∧ ψ) := V0 (φ) ∩ V0 (ψ)
V0 (φ ∨ ψ) := V0 (φ) ∪ V0 (ψ)
V0 (¬φ) := W \ V0 (φ)

(Note that

V0 (φ → ψ) = (W \ V0 (φ)) ∪ V0 (ψ)
V0 (φ ↔ ψ) = V0 (φ → ψ) ∩ V0 (ψ → φ).)

A sentence φ is true at u in the model Mo, in symbols:

Mo |=u φ,

if u ∈ V0 (φ).
φ is true in Mo if and only if φ is true at every u ∈ W .
φ is tautological if it is true in every model Mo.
Problem Axiomatize the set of all tautological sentences of DLREG.
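The valuation V 2 of regular action terms can be sketched as follows, with compound actions encoded as finite sets of tuples of atomic-action names. Since the iterative closure α∗ is in general infinite, the sketch truncates it at a small bound; this truncation, like the term encoding itself, is an illustrative assumption rather than part of the semantics.

EPSILON = frozenset({()})     # the compound action containing only the empty string
EMPTY = frozenset()           # the empty compound action

def union(ca1, ca2):
    return frozenset(ca1 | ca2)

def compose(ca1, ca2):
    return frozenset(s + t for s in ca1 for t in ca2)

def star(ca, bound=3):
    # Truncated iterative closure: strings of at most `bound` repetitions.
    result, layer = EPSILON, EPSILON
    for _ in range(bound):
        layer = compose(layer, ca)
        result = union(result, layer)
    return result

def v2(term):
    # Evaluate a regular action term given as a nested tuple, e.g.
    # ('comp', ('atom', 'a1'), ('star', ('atom', 'a2'))).
    kind = term[0]
    if kind == 'empty': return EMPTY
    if kind == 'eps':   return EPSILON
    if kind == 'atom':  return frozenset({(term[1],)})
    if kind == 'union': return union(v2(term[1]), v2(term[2]))
    if kind == 'comp':  return compose(v2(term[1]), v2(term[2]))
    if kind == 'star':  return star(v2(term[1]))
    raise ValueError(kind)

print(sorted(v2(('comp', ('atom', 'a1'), ('star', ('atom', 'a2'))))))
# [('a1',), ('a1', 'a2'), ('a1', 'a2', 'a2'), ('a1', 'a2', 'a2', 'a2')]

The deontic values of such terms can then be obtained by feeding the resulting compound actions into the functions for PA, FA and OA sketched earlier, in accordance with clauses (4.5.2)–(4.5.4).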

4.6 Righteous Systems of Atomic Norms

For a family N of atomic norms for an action system M = (W, R, A), we let NP
(NF , NO , respectively) denote the subfamily of N consisting of all permissive norms
(prohibitive norms, obligatory norms, respectively).
A set N of atomic norms for M is said to be righteous for the system M if the
following three conditions are satisfied:
(i)P for every permissive norm (Φ, A, +) ∈ NP and every u ∈ Φ there exists a
state w ∈ δA (u) such that uRw.
(i)F for every prohibitive norm (Φ, A, −) ∈ NF and every state u ∈ Φ there does
not exist a state w such that u A w and u R w.
(i)O for every obligatory norm (Φ, A, !) ∈ NO and for every state u ∈ Φ, ∅ ≠ δR (u) ⊆ δA (u).
(i)P says that for any permissive norm (Φ, A, +) ∈ NP the action A is performable
in every state u ∈ Φ, and (i)F says that for every prohibitive norm (Φ, A, −) ∈ NF
the action A is totally unperformable in any state u ∈ Φ. In turn, (i)O says that A is
obligatory in any state u ∈ Φ.
The relation R determines the ‘freedom sphere’ of the system, for R determines
which actions are permitted and which are not. Righteous norms imposed on the
action system M should not violate the limits of this freedom sphere. What permissive
norms allow should not go beyond what the relation R admits, and, analogously, what
prohibitive norms forbid should not go beyond what R excludes. In short, permissive
norms cannot be more tolerant or liberal than R, and prohibitive norms cannot be more
restrictive than R. On the other hand, the ‘obligatoriness’
of a norm arises from the compulsion imposed by R.
The following proposition directly follows from the above remarks.

Proposition 4.6.1 Let N be a family of atomic norms for M = (W, R, A). N is


righteous for M if and only if
(ii)P for every (Φ, A, +) ∈ NP , the proposition Φ → PA is equal to W ;
(ii)F for every (Φ, A, −) ∈ NF , the proposition Φ → FA is equal to W ;
(ii)O for every (Φ, A, !) ∈ NO , the proposition Φ → OA is equal to W . 

For every action system M = (W, R, A) there exist sets NP , NF and NO of


permissive, prohibitive and obligatory atomic norms such that the family N := NP ∪
NF ∪ NO is righteous for M. For example, as NP , NF and NO one can take the
following sets of norms:
 
NP := {(Φ, A, +) : (∀u ∈ Φ)(∃w ∈ W )(A(u, w) & R(u, w))},

NF := {(Φ, A, −) : (∀u ∈ Φ) ¬(∃w ∈ W )(A(u, w) & R(u, w))},

NO := {(Φ, A, !) : (∀u ∈ Φ)(∃w ∈ W ) R(u, w) &
(∀u ∈ Φ)(∀w ∈ W )(R(u, w) ⇒ A(u, w))}.
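For finite systems, the righteousness conditions (i)P, (i)F and (i)O can be checked directly. In the sketch below norms are encoded as triples (Phi, A, sign), with Phi a set of states, A a set of state pairs and sign one of '+', '-', '!'; the encoding is illustrative and not part of the formal apparatus.

def delta(rel, u):
    return {w for (v, w) in rel if v == u}

def righteous(norms, R):
    # norms: triples (Phi, A, sign); Phi a set of states, A a set of state pairs.
    for (Phi, A, sign) in norms:
        for u in Phi:
            if sign == '+' and not (delta(A, u) & delta(R, u)):
                return False          # (i)P fails: A is not performable in u
            if sign == '-' and (delta(A, u) & delta(R, u)):
                return False          # (i)F fails: A is performable in u
            if sign == '!':
                dR = delta(R, u)
                if not dR or not dR <= delta(A, u):
                    return False      # (i)O fails: A is not obligatory in u
    return True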

Remark If the figure (Φ, A, +) is regarded as a strongly permissive norm, read


as “Φ, therefore A is strongly permitted,” then the proposition corresponding to it
is equivalent to Φ → P S A. In this case we can also speak of systems of norms
righteous in the strong sense. This means, in particular, that for every positive norm
(Φ, A, +) ∈ N the action A is totally performable in every state u ∈ Φ. Equivalently,
N is righteous in the strong sense if it is righteous and Φ ⊆ P S A for every positive
norm (Φ, A, +) ∈ N, Φ ⊆ FS A for every negative norm (Φ, A, −) ∈ N, and
Φ ⊆ OS A for every obligatory norm (Φ, A, !) ∈ N. 

Norms righteous in one model need not be righteous in another. In order to illus-
trate this, suppose that M = (W, R, A) and M′ = (W, R′, A) are action systems
with the same sets of states and atomic actions. The models differ merely in their
relations of direct transition. Each set of interpreted (i.e., semantic) norms N for M is
also a set of norms for M′ and vice versa. The set N may be righteous for M, but not
necessarily for M′. Conversely, N may be righteous for M′, but this may not be the
case for the system M.
In this context the question arises whether a given set N of norms for an action
system M can determine, in a conceivable way, the relation R of direct transition in
the system. In order to facilitate the discussion we shall confine ourselves to families
of permissive norms only.
A set NP of permissive atomic norms for M = (W, R, A) is said to be adequate
(for M) if and only if it is righteous for M and for every pair (u, w) ∈ R there exists
a norm (Φ, A, +) ∈ NP such that u ∈ Φ and u A w.
The adequacy of a family NP of atomic norms for M thus ascertains that every
direct transition u R w is in the range of a suitable norm of NP , i.e., the transition
u R w is accomplished by an action that is permitted by a norm of NP . It immediately
follows from the definition of adequacy that the inclusion R ⊆ A is a necessary
condition for a set of norms to be adequate for M.

Proposition 4.6.2 For every action system M = (W, R, A) such that R ⊆ A
there exists a family NP of permissive atomic norms which is adequate for M.

Proof For each A ∈ A define

ΦA := {u ∈ W : (∃w ∈ W )(R(u, w) & A(u, w))}.

Let
NP := {(ΦA , A, +) : A ∈ A}.

NP is righteous for M. Let (u, w) ∈ R. As R ⊆ A, there exists an action A ∈ A
such that (u, w) ∈ A. Thus, u ∈ ΦA and so NP is adequate for M. 
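The construction used in the proof of Proposition 4.6.2 is easy to mechanize: for each atomic action A one collects the states in which A is performable and attaches a permissive norm to that proposition. A minimal sketch, assuming atomic actions are given as a collection of relations (as in the proposition, adequacy additionally presupposes that every R-transition is covered by some atomic action):

def adequate_permissive_norms(W, R, atomic_actions):
    # atomic_actions: an iterable of atomic actions, each a set of state pairs.
    norms = []
    for A in atomic_actions:
        Phi_A = {u for u in W if any((u, w) in A and (u, w) in R for w in W)}
        norms.append((Phi_A, A, '+'))
    return norms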

Proposition 4.6.3 Let Mi = (W, Ri , A), i = 1, 2, be two elementary action systems


with the same sets W of states and A of atomic actions. Let NP be a family of
permissive atomic norms (for both systems). If NP is adequate for M1 and M2 , then
the relations R1 and R2 satisfy the following condition:

(∀u ∈ W )[(∃w ∈ W ) R1 (u, w) ⇔ (∃w ∈ W ) R2 (u, w)].

Proof Let u ∈ W and suppose R1 (u, w) for some w. By the adequacy of NP for M1
there exists a norm (Φ, A, +) ∈ NP such that u ∈ Φ and u A w. As NP is righteous
for M2 , there exists a state w′ ∈ fA (u) such that R2 (u, w′) holds. By a symmetric
argument one shows that (∃w ∈ W ) R2 (u, w) implies (∃w ∈ W ) R1 (u, w), for all
u ∈ W.

The conclusion which can be drawn from the above proposition is that a set NP of
norms, adequate for M, need not unambiguously determine the relation R of direct
transition in M. A given permissive norm (Φ, A, +) allows, in an arbitrary state
u ∈ Φ, for the performance of the action A; the norm, however, does not precisely
define the effect of this action. The norm only requires that it has to be an element
of fA (u).
We recall the formal language defined in Sect. 4.3. Any triple of the form

(φ, α, +), (φ, α, −), (φ, α, !) (4.6.1)

where⁴ φ ∈ Sent and α is an action variable, is a permissive, a prohibitive, and an
obligatory atomic norm in syntactic form, respectively. They are interpreted in ele-
mentary action systems M = (W, R, A) as triples

(Φ, A, +), (Φ, A, −) or (Φ, A, !) (4.6.2)

respectively, where A is an atomic action of A and Φ is an elementary proposition,


i.e., Φ ⊆ W . (Strictly speaking, formal norms (4.6.1) are interpreted in M by means
of a pair of mappings (V0 , V1 ), where V0 assigns to each sentential variable a subset
of W and V1 assigns to each action variable an atomic action in A—see Sect. 4.3.)
We may consider sets N of atomic norms (4.6.1) and ask about righteousness
on the syntactic level; that is, whether the various interpretations (4.6.2) of the
norms of N lead to righteous sets in the semantic sense, as expounded above. Another
obvious question is about consistency of a set of formal norms. How are we to
understand consistency in this context? Intuitively, two formal norms (φ, α, +) and
(φ, α, −), (where the same sentence φ and the same action variable α occur in both
norms), are inconsistent—the same action in identical circumstances is permitted
and forbidden. The solution we propose is to define consistency of norms in terms
of normative sentences associated with norms. Accordingly, the following definition
of consistency of norms is adopted. Given a set N of formal norms, we define:

Ns :={φ → Pα : (φ, α, +) ∈ N} ∪
{φ → Fα : (φ, α, −) ∈ N} ∪ {φ → Oα : (φ, α, !) ∈ N}.

4 The obligatory norm (φ, α, !) may be also interpreted as the command “φ, therefore perform α!”.

Postulate of Consistency of Elementary Norms:


A set of formal norms N is consistent if and only if the corresponding set Ns of
norm sentences is consistent in the logical system DL defined in Sect. 4.3. 
If an action system M = (W, R, A) provides a righteous interpretation for a set
N of formal norms (by means of a pair of mappings (V0 , V1 )), then N is consistent
(because this ‘righteous’ interpretation validates the sentences of Ns at every state
u ∈ W ).
Consistency is an aspect which characterizes ‘good’ sets of formal norms. There
are other aspects that come to light:
(1) the incompatibility of two obligatory norms with the same hypothesis Φ occurs
when the actions required by them cannot be jointly performed in Φ.
(2) praxiological incompatibility occurs when the implementation of the action of
one norm annihilates the results of the action of the other norm.
The notion of a righteous interpretation of a system of formal norms excludes
such pathologies from sets of atomic norms.
Every righteous set of formal norms is consistent but not vice versa—consistency
is a weaker notion. The consistency of a set N of formal norms is equivalent to the
fact that there is an interpretation (V0 , V1 ) of Sent in an action system M validating
the set Ns of normative sentences corresponding to N in some state of the model. In
turn, Proposition 4.6.1 implies that a set N of formal norms is righteous if and only if
there is an interpretation of Sent in an action system M validating the set Ns in every
state of the model (because V0 (ψ) = W for every normative sentence ψ of Ns .)
The above remarks may give rise to a theory of justice based on the semantic
notion of a righteous set of norms. But the meaning of the term ‘justice,’ inherent in
this approach, relativizes it to a model of action: a set of formal norms is righteous
with respect to a model M.

Definition 4.6.4 Let N be a righteous set of (semantic, i.e., interpreted) atomic norms
for an action system M = (W, R, A) and let

u0 , u1 , u2 , . . . , un−2 , un−1 , un (4.6.3)

be a finite⁵ nonempty sequence of states of W . We say that (4.6.3) complies with the
rules of N if for every i (0 ≤ i ≤ n − 1):
(!)i for every obligatory norm (Φ, A, !) ∈ N, if ui ∈ Φ, then ui A ui+1 ,
(+)i there is a permissive norm (Φ, A, +) ∈ N such that ui ∈ Φ and ui A ui+1 ,
(−)i there is no prohibitive norm (Φ, A, −) ∈ N such that ui ∈ Φ and ui A ui+1 .


5 Infinite runs of states (in both directions) also make sense. Some remarks on infinite runs in
situational action systems are presented in Sect. 2.2.

The sequence (4.6.3) is therefore a result of performing a string of atomic actions

A1 . . . An−1 An ,

each action being a component of some norm of N, so that u0 A1 u1 A2 u2 . . .


un−2 An−1 un−1 An un is an operation of M and the obligatory norms in N have priority.
This string of actions is not unambiguously defined but its elements occur in the norms
of N. Since N is righteous, the operation u0 A1 u1 A2 u2 . . . un−2 An−1 un−1 An un is
realizable.
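Conditions (!)i, (+)i and (−)i of Definition 4.6.4 can be checked mechanically for a finite set of interpreted norms; the sketch below uses the same illustrative triple encoding of norms as before.

def complies(states, norms):
    # states: a nonempty sequence u0, u1, ..., un; norms: triples (Phi, A, sign).
    for i in range(len(states) - 1):
        u, w = states[i], states[i + 1]
        # (!)_i: every obligatory norm whose hypothesis holds at u covers the step
        if any(sign == '!' and u in Phi and (u, w) not in A
               for (Phi, A, sign) in norms):
            return False
        # (+)_i: some permissive norm licenses the step (u, w)
        if not any(sign == '+' and u in Phi and (u, w) in A
                   for (Phi, A, sign) in norms):
            return False
        # (-)_i: no prohibitive norm forbids the step (u, w)
        if any(sign == '-' and u in Phi and (u, w) in A
               for (Phi, A, sign) in norms):
            return False
    return True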
Conditions (!)i and (+)i are not contradictory. For instance, let u0 , u1 , u2 , . . . ,
un−2 , un−1 , un represent ‘states’ of a car while it is being driven (cf. Example 4.2.4), where
each state is determined by the street location of the car and its velocity; the
norms enforcing two actions—keeping to the speed limit (A) and keeping to right-hand
traffic (B)—are obviously obligatory. (This system is situational—time matters here, but we shall disregard the
situational envelope.) Therefore (!)i holds for these two obligatory norms for every
state ui (0 ≤ i ≤ n − 1). In other words, the driver performs these two actions
(jointly!) at every state ui . (There are other obligatory actions; e.g., the car must
stop at red traffic lights. But this action is obligatory only at some states ui .) These
two norms do not exclude the possibility of performing other permitted actions at some
ui , such as turning right. The norms of N may also forbid turning left at some ui .
Definition 4.6.4 formulates concisely the factual principles of conduct which
complies with norms. The above comments allow for building a theory of justice
and liberty based on the notion of a righteous system of norms (it does not matter
here whether the norms are specified formally or informally). The meanings of the
terms ‘justice’ and ‘freedom’ are not of an absolute character but are relativized to a
definite model. One can speak about a righteous (or just) system of norms inside the
given model or from the point of view of the model. Different models allow different
interpretations of the same set of formal norms. As mentioned above, norms that are
righteous in one system need not be righteous in another. By analogy, we can say that in
the case of norms of law, the term ‘righteous’ bears a different meaning to a follower
of Mohammad, who is subject to the norms of sharia, than to a citizen
of Europe, where laws stemming from ancient Rome are in force. Islam does
not separate the secular life from the religious one and regulates religious customs
and the organization of religious authority, as well as the everyday life of a Moslem.
Islam operates by means of a different deontology which derives from a blending
of religious law and morality. Sharia is based on the assumption that law is meant
to provide all norms necessary for the spiritual and physical development of the
individual. A Moslem’s actions are divided into five categories—necessary, glorious,
permitted, reprehensible, and prohibited. (It should be noted that there are various
schools of law whose interpretations of sharia differ: the Hanafi, the Maliki, the
Shafi’i, or the highly conservative Hanbali school.)
A penal law includes a set of norms of a deontological character—orders and
prohibitions, the violation of which is punishable with a sanction. It makes use
of basically two types of norms: sanctioned and sanctioning. A sanctioned norm
forbids, for example, committing a theft. A sanctioning norm imposes

on the court the duty of inflicting a suitable penalty in compliance with the code.
Therefore, sanctioning norms define various sanctions as a consequence of violating
a sanctioned norm.
Inasmuch as in different places different people behave in a similar way and
commit similar crimes (e.g., they steal), it is differences between individual penal
codes that are clear-cut if one takes into account the following two aspects: the
range of sanctioned norms determined through the enumeration of punishable acts;
and the size of sanctions. The latter determines differences in sanctioning norms.
Certain norms of sharia, which contain such sanctions as corporal punishment, the
amputations of hands, and stoning, are regarded as alien to contemporary western
legal systems. On the other hand, European law has not worked out uniform norms
relating, for example, to the ritual killing of animals. The following picture, taken
from the Internet, is very instructive, because it ideographically depicts sanctioned
and sanctioning norms in a nutshell:

Fig. 4.4 Picture on the internet: podroze.onet.pl


Systems of norms that regulate various aspects of collective and individual life are
dynamic quantities changing over time. The direction and the character of the changes
are determined by different factors and their analysis depends more on sociology
and social psychology rather than on logic. Various groups, such as lobbies, opinion-
forming environments, newspapers, the television, the Internet, trade unions, business
circles, and the authorities of different levels and power ranges exert a primary
influence on the evolution of systems of norms, especially legal ones. The order of
things is generally such that groups bring about a change of the legal status quo. An

example here is the ultimately successful pressure to abolish the capital punishment in
Poland at the end of the 1980s. A second example is provided by current discussions
in the press concerning the legal recognition of homosexual relationships. As regards
the latter, there are a number of arguments for and against being brought forward. It
is claimed that a change in the law will make it more just overall, as it will not ignore
issues relating to the life and well-being of minority groups. Some liberal lawyers
and economists (Friedrich Hayek, amongst others) reject the notion of social justice
as impossible to define and argue that attempts to define it lead to overinterpretation, these
attempts being driven by different pressure groups whose goal is to realize their particular interests.
The evolution of a system of norms leads to the abandonment of certain old forms
and to the working out and implementation of a new legal framework. In consequence,
systems of social and individual action become altered (obviously they are not atomic
systems of action in view of their degree of complexity). In such systems, new norms
will be righteous, inasmuch as they are actually abided by. (Abiding by norms is, of
course, something regulated by norms of a higher order, which are related to, e.g.,
the functioning of the judicature, the police, or the army.)
In this discussion of righteous systems of norms, one aspect should be underlined.
The accepted definition relativizes justice to a model of action: a righteous system of
norms is one that is compliant with the model of action. The norms which are binding
in the model precisely reflect the range of liberty resulting from the construction
(structure) of the model; in particular the norms do not violate the limitations imposed
by the transition relation between states. Actual systems of action, which people
of flesh and blood are entangled in, are multi-rung, often hierarchized, structures,
with a complicated knot of mutual links between parts of the structures. They are
clearly not atomic systems. One may advance the thesis, however, that the basic
components of the actual system which has been formally reconstructed can be
identified with complexes of atomic systems in which, additionally, the situational
envelope has been taken into account. In general, in real systems of action, there is no
concordance between the factual functioning of the system and the accepted norms
as regulators of activity: norms are simply violated. The theory above relates to an
ideal situation which is characterized by a full harmonizing of norms and actions.
The second question deals with the sense of justice. Norms—at least the funda-
mental ones included in constitutions and legal codes of states—are derived from
supreme values adhered to by given communities. According to the Judeo-Christian
tradition, these basic norms should not violate the commandments of the Decalogue.
The Age of Enlightenment brought new suggestions for supreme principles; for
example, Kant’s categorical imperative, the conception of the social contract, and the
ideas of freedom and egalitarianism. Every conscious human being has some norms
of conduct instilled in childhood, ones that shape their sense of justice and their
sensitivity to right and wrong. These principles can change over time as the forms
of social life in which individuals find themselves change.
In 2013, the press reported on the drastic form which the evictions of tenants in
Spain took. These evictions were the consequence of the strict mortgage law which
had been in force since 1909. In recent years half a million people have been turned out onto
the street, often because of just one or two overdue monthly mortgage repayments.

The inability to pay typically resulted from a loss of work, whereupon the immediate
repayment of the whole debt was ordered or usurious interest rates were introduced.
People who had for years been repaying their mortgage were suddenly deprived of
the right to remain where they lived. According to the press, this was an instance of
the execution of anachronistic legal norms that departed from a contemporary sense
of justice. Following the decision of the European Court of Justice concerning one
case of this type, the Spanish government decided to change the law. The action is
pending.
Looking at the issue more deeply, one can say that from the legal perspective
of the beginning of the 20th century, the system of mortgage law which has been
in force until today, is just: the law was introduced in compliance with civilized
procedures; the law is respected; the activities of banks and tenants are in agreement
with the letter of law; and there are no violations of it. Everyone who applies for a
credit is familiar with the regulations of the law and knows what documents they are
signing. The essence of the matter lies in the fact that these norms do not take into
account the change in the dominant form of residential ownership which occurred
during the century. Properties used to be largely rented, with the option of buying
them reserved for the well-off. Nowadays properties are more readily purchased in the
real estate market. Owning one’s own property is common in Spain and in many
other countries.
The above remarks situate the problem of justice on two levels; the first level is
constituted by the action system a given set of sanctioned norms refers to, the other
level is occupied by the close-coupled action system whose functioning is determined
by the totality of sanctioning norms (more on sanctioned and sanctioning norms in
the next paragraph). Roughly, the first one concerns various legal aspects of the social
life; the other regulates the work of justice. Both the systems may be unjust. In the
discussed example, the two systems regulate various mortgage issues. Mortgage law,
together with complementary regulations of various banks, is a peculiar melange of
principles (or norms) regulating the mode of granting loans and of sanctioning norms,
i.e., norms that impose various forms of control (and punishments) for defiance of
mortgage repayments. The heart of the matter is the lack of equilibrium between the
two groups of norms, viz. the sanctioned and sanctioning norms. The regulations
which settle the principles of granting loans were, with time, stretched in a natural
manner to cover a broad spectrum of potential (and often actual) clients who earn
their living by doing contract work. On the other hand, the sanctioning regulations
have not been altered; they are anachronistic and do not take into account the specific
character of the economic situation of many contemporary clients of banks. It can
be said that we have come to deal here with a situation where certain norms which
are adjusted to the system of procedures, i.e., the obligatory norm ordering the
immediate payment of an installment of the granted credit, together with the coupled
norm which imposes defined sanctions, are too restrictive—the loan-taker is not able
to perform the ordered action, having lost his or her job. (The system of actions is
created here by all of the potential clients who submit loan applications.)
The inadequacy of the binding legal system in relation to actions (the model) actu-
ally taken by citizens consists in not making provision, by the relevant regulations,

for states where purchasers of flats lose the so-called credit repayment capacity (are
unable to repay the loans), e.g., in consequence of their losing work and means
of obtaining income. (It is indeed hardly possible to imagine that legal norms in
the capitalist reality should guarantee full and stable employment that is unlimited
over time). Yet, from the perspective of the theory of action systems it is vital that
mortgage law should take into account cases of bitter random instances affecting
loan-takers, and, consequently, the obligatory norms of this law should admit more
flexible forms of mortgage repayment schedules. (Because otherwise, the obligatory
action of repaying installments may be unperformable in the aforementioned
states.) The essence of the matter lies, on the one hand, in the transition to a new
model of action (one with a more complex set of states and more sophisticated legal
actions) and, on the other, in the acceptance of more subtle procedures of granting
and repaying credits, formulated within a new and fair system of norms for this
particular system.

4.7 More About Norms

The conditions in which a given atomic norm, i.e., any triple of one of the following
forms
(Φ, A, +), (Φ, A, −), (Φ, A, !), (4.7.1)

is applicable—in other words the range of a norm—are fully determined by the


elementary proposition Φ. If the system is in a state u belonging to Φ, the norm
(Φ, A, +) allows the agent of A to perform this action in the state u. Analogously,
the norm (Φ, A, −) forbids him to perform A in u, and (Φ, A, !) commands him to
perform A in u.
In modern civilized societies, it is a principle that the validity of a norm is estab-
lished by a sovereign decision taken by the legislature. Democratic legislation deter-
mines the conditions and the detailed procedures of inclusion and exclusion of norms
into and from the system of law in force. There is the commonly accepted principle
that a norm does not hold forwards or backwards in force. Not holding forwards
means that the norm is not applied either before the date of its taking effect or—in
the case of an annulment of the norm—after the date of its exclusion from the legal
system. Not holding backwards means that the legalized norm does not cover actions
taken before the date of its legalization. (A norm may however be introduced at a
time but be decreed to apply from a time earlier than the time of introduction. In this
case, though, a civilized legislature requires that the intention to introduce a norm with
an earlier date of effect should be made known to those to whom it will apply prior to
this date.) The principles above, like the Closure Principle, ought to be treated
here as meta-norms that regulate the functioning of norms within the system of law.
The presence of these principles causes atomic norms to be integrated into a deter-
mined situational envelope each time we follow the effectiveness of a norm. One of
the elements of the situational envelope of an elementary norm is the time parameter

which determines whether the norm is valid or has been canceled by the legislator’s
decision. The second element is the presence of the legislator, the author and the
guardian of the norm, equipped with sanctions enforcing respect of the norm. The
third and last element is the presence of the defined addressee of the norm. Certain
behaviors are expected from the addressee of the norm in the conditions defined in
the assumptions of the norm. We assume that the addressee of an elementary norm
is identical with the agent of the action specified in the consequent of the norm.
The addressee can be defined, for instance, through reference to a certain general
or common name. A good example here is the norm which determines the principles
of the use of arms by police officers. The word ‘police officer’ is, obviously, a general
name, yet the addressee of the norm is not a ‘police officer’ in general, i.e., a certain
notion, but a designatum of this name, that is, a functionary of the police forces,
each time mentioned by their name and surname, say, Kowalski or Smith, who find
themselves in the situation (state) that is defined in the antecedent of the norm, thus,
e.g., in a given place at a given time, chasing a criminal. In the case of obligatory
norms (Φ, A, !) the addressee is to perform action A which is mentioned in the
consequent as long as the antecedent is satisfied. On the other hand, when the norm
has the form (Φ, A, −), it is expected that the addressee will not undertake action A
in the conditions determined by Φ. (The fact that the agent of action A may never
do this action in the circumstances described by Φ is not paradoxical. In a game of
chess, White may never move a given figure, e.g. the king. This does not change the
fact he is still the agent of this action. This remark signals the need for an in-depth
analysis of the essence of the relation ‘agent—action.’
The notion of an atomic norm distinguishes—in the elementary formula of the
norm—the antecedent or assumption (also referred to as the hypothesis of the norm),
which defines the conditions in which the norm applies, and also the consequent,
also referred to as the disposition of the norm, specifying the determined actions
related to this norm, as well as determining the status of this action: obligatory or
forbidden, or permissive. The disposition of the norm also marks out each time the
addressee of this norm through a situational framing. The disposition determines the
form of the addressee’s behavior in the conditions assumed by the norm; if the norm
is obligatory, the addressee has to carry out the action contained in the disposition
in the circumstances in which the assumption of the norm is satisfied. We have
not introduced here an additional division into general and individual norms according to
whether the addressee is defined as a class of people or as an individual.
These questions are settled by the form of the elementary system of acting, to which
the given norm is fitted, and the situational surrounding of the system. The latter
determines the agents of the actions, and consequently the addressees of the norm.
We have also omitted here the problem of classifying norms by the rank the norm has
in hierarchical structure of laws and the branch of law to which the norm belongs.
As mentioned, jurisprudence introduces conjugate norms, viz. sanctioned and
sanctioning norms. The former determines the manner in which the addressee should
behave in the conditions set out by the hypothesis (assumption) of the norm. The
sanctioning norm, conjugate with the sanctioned one, defines the type of behavior on
the part of the legislator in the case when the addressee of the sanctioned norm does

not abide by it. The principle ‘nullum crimen sine lege’ is in force here. The area of
our interest here solely covers norms occupying the lowest level of the hierarchy of
the law. The assumption can be made that each obligatory or forbidden atomic norm
is a sanctioned one. The question arises whether and when the sanctioning norm
interrelated with the latter is atomic as well. Let us assume that N := (Φ, A, !) is
an obligatory norm for an elementary action system M = (W, R, A). The norm N
regulates the behavior of the agent (agents) of the action A in the system M, in the
conditions determined by the proposition Φ. The norm N ∗ , interrelated with N, reg-
ulates the behavior of the legislator (or an organ empowered by the legislator) toward
the addressee of the norm N. The specific character of the interrelated norm mani-
fests itself in the fact that such a norm commands the punishment of the addressee
of the sanctioning norm N, when, in the conditions determined by the hypothesis of
the norm N, the addressee of the norm has or has not performed an improper action
specified in the disposition of the norm. For instance, N ∗ can command to that a
driver be imprisoned if the latter has not given first aid to an injured passenger. (We
assume, of course, that the driver himself/herself has not suffered in the accident.)
The norm itself (N) commands that the aid be given to the passenger. The assump-
tion of the norm N can be fully specified. Let us agree to accept, then, the cautious
thesis that sanctioning norms may be—in many cases—treated as atomic norms. The
sanctioning norm N ∗ is not, however, the atomic norm for the system M. The system
of action to which the norm N ∗ is adjusted, differs fundamentally from the system
M. The space of states is here defined by determining the scale of violations of the
norm N and the gradation of penalties associated with this scale. Let us suppose a
simple framework in which penalties have the following format:
(n) “The addressee of the norm N is punished with n years’ imprisonment.”,
where n is a small positive natural number. The legislator performs one action:
inflicting the punishment. This action can be identified in the above model with the set
consisting of pairs of states (∗N , nN ), where ∗N is the state described by the sentence
stating that the addressee of the norm N has violated this norm and nN is the state in
which the addressee of the norm N is punished with n years’ imprisonment in accordance
with (n). (In civilized criminal codes there is the principle that the punishment be
commensurable with the crime, with the duration of the sentence reflecting the type
and magnitude of the crime.) If we accept that in the output elementary system M,
the family A of atomic actions includes only one action, the action A, specified in
the norm N, then the above-defined action system, on the set of states of the form ∗N
and nN , with n ranging over a finite set of consecutive positive natural numbers, can
be called a system interconnected with the system M. The interconnected norm N ∗
is thus adjusted to the interconnected system; this causes the norm N ∗ to be reduced
to an atomic one in many cases.
We shall now consider norms with a more complex structure. These norms, apart
from the notion of a state of a system, take into account (to a very modest extent),
certain elements of the situational envelope of the system. More precisely, only one
purely situational aspect of action is considered here. This is the atomic action directly
preceding the performance of the current action. The notion of a possible situation,

therefore, is reduced to the pair


(u, A), (4.7.2)

where u is a state of the system and A is an arbitrary atomic action. (4.7.2) is read:
(1)∗ ‘u is a state of the system and this state is the result of performing the action
A.’
The norms which will be discussed here in brief will also be adjusted to the type
of situation above. These norms will comply with the rules of transformation of
situations of type (4.7.2). The norms being considered here are quadruples of the
form (A, Φ, B, +), or (A, Φ, B, −), or (A, Φ, B, !), where A and B are atomic actions
of some fixed action system M = (W, R, A) and Φ is an elementary proposition,
i.e., Φ is a subset of W .
The norm
(A, Φ, B, +) (4.7.3)

is read:
(2)+∗ ‘It is the case that Φ after performing the action A, therefore the action B is permitted’,
or in short: ‘Φ after performing the action A, therefore the action B is permitted.’
The norms of the above shape will be called positive (alias permissive).
Quadruples of the form
(A, Φ, B, −) (4.7.4)

are called negative (or prohibitive) norms. (4.7.4) reads:


(2)−∗ ‘Φ after performing the action A, therefore the action B is prohibited.’
In turn, the quadruple
(A, Φ, B, !) (4.7.5)

which is called an obligatory (or imperative) norm, reads:


(2)!∗ ‘Φ after performing the action A, therefore the action B is obligatory.’
One may extend the above definition of a norm by allowing additional situational
parameters in it; in particular one may take into account a proposition Ψ referring
not to the present state of the system itself but to a course of previous actions on
M. This proposition, which would enter into the norm, may represent, for example,
what sequences of actions have already been undertaken by the agents or how many
times particular actions were performed. This additional factor has to be taken into
consideration, for example, in the obligatory norms that govern castling in a game
of chess.
The range of the norms (4.7.3)–(4.7.5) depends only on two factors:
(i) what action was performed last;

(ii) the extent of knowledge about the actual state of the system represented by the
proposition Φ.
The moment the factors (i) and (ii) are established, a positive norm allows the
performance of an action from A, while a negative norm forbids the performance of
a certain action from A. Because of factor (i), quadruples (4.7.3)–(4.7.5) may be regarded
as norms with a limited ‘memory.’ The memory is not associated with a history of
the system itself (and so the sequence of states through which the system passes) but
requires only recognizing the recently undertaken action.
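A possible encoding of the quadruple norms (4.7.3)–(4.7.5) records the last performed action together with the current state: a norm applies in the situation (u, A) when its first component matches the last action and u satisfies its hypothesis. The sketch below, including all its names, is illustrative only.

def applicable(norm, last_action, u):
    # norm = (A, Phi, B, sign): applies when the last performed action is A
    # and the current state u satisfies the hypothesis Phi.
    prev, Phi, B, sign = norm
    return prev == last_action and u in Phi

def verdicts(norms, last_action, u):
    # Collect what the applicable norms say about each action B in the
    # situation (u, last_action), e.g. {'B': {'+'}, 'C': {'-'}}.
    out = {}
    for norm in norms:
        if applicable(norm, last_action, u):
            out.setdefault(norm[2], set()).add(norm[3])
    return out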
As with elementary norms, (4.7.3) does not prejudge the possibility of performing
action B when the antecedent of the norm is satisfied. If action A has been performed
and, just after performing A, it is true that Φ, it may still happen that in each state
u ∈ Φ the action B is unperformable.
It is possible to refine the shape of norms by taking other, more complex factors
constituting the history of the system such as those referring to particular sequences
of previously performed actions and their agents or taking into account the order of
some or all the actions performed before. A game of chess proceeds according to
norms with memory—White’s move has to be followed by Black’s.
As is well known, an important problem of deontic logic is that of how to properly
represent conditional obligations such as If you smoke, then you ought to use an
ashtray. We do not tackle this problem here. But dyadic norms of the form: B is
obligatory, given A or: B is forbidden, given A, where A and B are actions, are
obviously well defined. In deontic logic they are marked by O(B|A) and F(B|A),
respectively. Assuming that A and B are atomic actions in some action system M =
(W, R, A), we may identify O(B|A) and F(B|A) with (A, W, B, !) and (A, W, B, −),
respectively. (Note, however, that smoking a cigar is not an atomic action!) O(B|A) is
paraphrased as “After performing the action A, the action B is obligatory.” Similarly,
F(B|A) is paraphrased as “After performing the action A, the action B is forbidden.”
Of course, a more adequate rendering would be: “While performing the action A
and just after that, the action B is obligatory” or “While performing the action A,
the action B is forbidden,” but representing such norms within the formalism
outlined here would require passing to situational action systems endowed with tense
components.
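The identification just described admits a direct computational reading. The following Python fragment is only an illustrative sketch: the toy state space, the relations A and B, and helpers such as in_force and deontic_status are ad hoc assumptions, not the book's notation. It shows how quadruple norms are checked against the last performed action and the current state, and how O(B|A) and F(B|A) are rendered as quadruples.

```python
# Toy state space and two atomic actions given as transition relations.
W = {0, 1, 2, 3}
A = {(0, 1), (0, 2)}          # action A leads from state 0 to state 1 or 2
B = {(1, 3), (2, 3)}          # action B

# A norm is a quadruple (last_action, Phi, action, sign), sign in {'+', '-', '!'}.
norms = [
    (A, {1, 2}, B, '+'),      # 'Phi after performing A, therefore B is permitted'
    (A, {1},    B, '-'),      # a prohibitive norm with a narrower antecedent
]

def in_force(norm, last_action, state):
    """A norm applies when its action was performed last and the current
    state belongs to its proposition Phi."""
    act, phi, _, _ = norm
    return act is last_action and state in phi

def deontic_status(action, last_action, state):
    """Signs of all norms in force that concern the given action."""
    return {sign for (act, phi, b, sign) in norms
            if b is action and in_force((act, phi, b, sign), last_action, state)}

# Dyadic norms O(B|A) and F(B|A) identified with (A, W, B, '!') and (A, W, B, '-'):
O_B_given_A = (A, W, B, '!')
F_B_given_A = (A, W, B, '-')

print(deontic_status(B, A, 1))   # {'+', '-'}: both norms are in force at state 1
print(deontic_status(B, A, 2))   # {'+'}: only the permissive norm applies
```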
Chapter 5
Stit Frames as Action Systems

Abstract Stit semantics gives an account of action from a certain perspective:


actions are seen not as operations performed in action systems and yielding new
states of affairs, but rather as selections of preexistent trajectories of the system in
time. Main problems of stit semantics are recapitulated. The interrelations between
stit semantics and the approach based on ordered action systems are discussed more
fully.

5.1 Stit Semantics

Action theory has considerably broadened the scope of traditional semantics by way
of introducing agents on the stage. Agents design various action plans, perform
actions, carry out reasoning about the effects of actions and, last but not least, they
bring about the desirable states of the world within which they act.
One of the crucial problems which a theory of action faces is that of the formal
representation of actions. It is still an area where there is a good deal of disagreement
about fundamental issues and a lack of generally acknowledged mathematical results
in central areas of concern. Traditional formal semantics is based on (fragments of) set
theory as an organizational method: the majority of semantic concepts (frames, pos-
sible worlds, truth-valuations, satisfaction, meaning, etc.) are reconstructible within
some parts of contemporary set theory. How then to describe actions? How to model
them? The theory of action is still lacking a well-entrenched paradigm. As a result of
many attempts undertaken to base the theory on a few plausible principles, many com-
peting formal systems have been developed, each of which is imperfectly applicable
to real-life situations in various, say, moral, legal, or praxeological contexts.
It was the work on deontic logic that initiated a deeper reflection on action and
provided various formal resources for expressing the idea that certain actions are
obligatory, permitted, or forbidden (von Wright)—“that closing the window, for
example, is obligatory.” However, the dominant line of research allows deontic oper-
ators (functors) to apply not to actions directly, but to arbitrary sentences. This means
that certain states of affairs are qualified as obligatory, forbidden, or that they ought
or ought not to be. It has been argued that the second perspective is more general.


Thus, that an agent ought to close the window is interpreted as (in a presumably
logically equivalent way) that it ought to be that the agent closes the window. In
other words, “the study of what agents ought to do is subsumed under the broader
study of what ought to be, and among the things that ought or ought not to be are
the actions agents perform or refrain from” (Horthy 2001). The emphasis in deontic
logic on the notion of what ought to be has been criticized, e.g., by Geach and von
Wright, since it leads to distortions when the existent formalisms are applied to the
task of analyzing an agent’s “oughts.”
In this chapter, we shall outline an approach to action developed mainly by Ameri-
can researchers. This approach is conventionally referred to as the Pittsburgh School.
(The name derives from the fact that the leading researchers, who made important
contributions to action theory, are professionally affiliated to Pittsburgh University.
But, of course, this group of researchers is much broader and includes logicians and
philosophers from other universities.) In fact, our goal is twofold. First, we shall
present the main ideas developed by logicians, computer scientists, and philoso-
phers belonging to the above school in a concise, but undistorted way, thus giving
the reader the opportunity of becoming acquainted with their approach to action.
Second, we wish to reconstruct the main concepts of this approach within
the framework presented in previous chapters. This requires establishing the exact
relationship between the two approaches. The main semantic tool in the approach we
wish to present is that of a stit frame. We advocate here the thesis that stit models are
reconstructible as a special class of situational action systems, as defined in Chap. 2.
Generally, stit semantics, worked out by Chellas, gives an account of action from
a certain perspective: actions are seen here not simply as operations performed in
action systems and yielding new states of affairs, but rather as selections of preex-
istent histories or trajectories of the system in time. Actions available to an agent
in a given state (moment) u are simply the cells of a partition of the set of possible
histories passing through u. Stit semantics is therefore time oriented and time, being
a situational component of action, plays a privileged role in this approach. We shall
recapitulate the main principles of stit semantics after Horthy (2001) and relate them
to the concepts of action presented here.
One of the crucial problems the theory of action faces is the issue of representing
and reasoning about what agents ought to do, a notion that is distinguished from that
of what ought to be the case. The analysis of this notion is based on the treatment
of action in stit semantics. Stit semantics is the best known account of agency in
the theory of action. This semantic approach concentrates on the meaning of formal
constructions of the form

“a (an agent) sees to it that φ”,

usually abbreviated simply as


[a stit : φ].

These constructions provide a semantic account of various stit operators within


a temporal framework in which the future is open or indeterminate. From the

set-theoretic perspective, the above constructions are developed within the frame-
work of the theory of trees. As is well-known, this framework permits the introduction
of the ‘standard’ deontic operation ◯ with the intended meaning “It ought to be that”.
It is then reasonable to propose syntactic complexes of the form ◯[a stit : φ]—with
the intended meaning “It ought to be that a sees to it that φ”—as an analysis of the
idea that seeing to it that φ is something a ought to do. Such an attitude, explicitly
expounded, e.g., in Horthy’s book (2001), agrees with the thesis which identifies the
notion of what an agent ought to do with the notion of what it ought to be that the
agent does. The analysis of the notion of what an agent ought to do, proposed by Hor-
thy, is based on “a loose parallel between action in indeterministic time and choice
under uncertainty, as it is studied in decision theory.” This requires introducing a cer-
tain (quasi) ordering—a kind of dominance ordering—in the study of choice under
uncertainty. This preference ordering then defines both the optimal actions which an
agent performs and the proposition whose truth the agent should guarantee.

Definition 5.1.1 A branching system is a pair of the form W = (W, ≤), where W
is a nonempty set and ≤ is a tree-like order on W. Thus, the binary relation ≤ is an
order (i.e., ≤ is reflexive, transitive, and antisymmetric1 ) with the following tree-like
property:

for any u, v, w ∈ W, if u ≤ w and v ≤ w, then either u ≤ v or v ≤ u.

The above condition states that for any w ∈ W, the set ↓w = {u ∈ W : u ≤ w}


is linearly ordered. The tree property is stronger. We recall that a poset (W, ≤) with
zero 0 is a tree if for any w ∈ W, the set ↓w is well ordered. As is customary,
the elements of W are called states. We shall often interchangeably use the terms:
events, moments, etc. Following the terminology adopted in the theory of trees, the
elements of W are also called nodes. Intuitively, the fact that u ≤ v means here that
the state u is equal to or earlier than v. The notation u < v means that u ≤ v and u ≠ v.
In branching systems, forward branching represents the openness or indeterminacy
of the future, and the absence of backward branching represents the determinacy
of the past. Branching systems form the indeterministic framework of the theory of
branching time, originally presented in Prior’s book (1967), and then developed by
Thomason (1970, 1984).
We recall that a set of states h ⊆ W is a chain (or: h is linearly ordered) if it
satisfies the trichotomy postulate, that is, for any states u and v belonging to h, either
u < v or v < u or u = v. The set h is a maximal chain of states whenever it is
linearly ordered and there is no chain g that properly includes h.
The term “tree-like property” may seem to be a bit puzzling. Some branching
structures need not look like trees. For example, two linearly-ordered sets put verti-
cally in parallel form a branching system. But, of course, we will be mainly concerned
with structures, which are depicted as trees.

1 Stit semantics assumes an even weaker condition, that ≤ is a quasi-order. Here we shall confine
the discussion to order relations.

Any maximal chain in (W, ≤) is called a (possible) history. (The existence of


histories follows from the Axiom of Choice (AC). The fact that in every poset, each
chain can be extended to a maximal chain is equivalent to AC and is referred to as
Kuratowski’s Lemma.) The tree-like property postulates that for every w ∈ W, the set
↓w := {u ∈ W : u ≤ w} is a chain in (W, ≤). It is not assumed here that ↓w is
well-ordered by ≤. This last, stronger property is assumed in the theory of trees.
Intuitively, each history h represents a complete evolution of the system in time,
“one possible way in which things might work out” (Horthy 2001, p. 6). If u is a
state and h is a history, the fact that u ∈ h intuitively means that u occurs at some
place of the trajectory h, or that h passes through u. For each u ∈ W , we define

Hu := {h : h is a history and u ∈ h}.

Hu is the set of histories passing through u.


The set of possible worlds accessible at a state u ∈ W is identified with the set
Hu of histories passing through u. The propositions at u are identified with sets of
accessible histories, i.e., subsets of Hu .
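To make the preceding definitions concrete, here is a minimal Python sketch (all names and the particular six-state system are ad hoc assumptions, not part of the text): it verifies the tree-like condition of Definition 5.1.1 for a small branching system and computes its histories and the sets Hu.

```python
# A finite branching system (W, <=) given by its immediate-successor relation.
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}
W = set(children)

def leq(u, v):
    """u <= v : v is reachable from u along the successor relation."""
    return u == v or any(leq(w, v) for w in children[u])

def tree_like():
    """Definition 5.1.1: u <= w and v <= w imply that u and v are comparable."""
    return all(leq(u, v) or leq(v, u)
               for w in W for u in W for v in W
               if leq(u, w) and leq(v, w))

def histories():
    """Maximal chains; in this finite tree they are the root-to-leaf branches."""
    def branches(u):
        if not children[u]:
            return [[u]]
        return [[u] + b for v in children[u] for b in branches(v)]
    return [frozenset(b) for b in branches(0)]

def H(u):
    """H_u: the histories passing through u."""
    return [h for h in histories() if u in h]

print(tree_like())        # True
print(histories())        # three histories: {0,1,3}, {0,1,4}, {0,2,5}
print(H(1))               # the two histories passing through 1
```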
The crucial role in the stit approach is played by the notion of an index. Each
index is a pair (u, h) consisting of a state u, together with a history h throughout that
state, i.e., h ∈ Hu . Following the notation adopted, e.g., in Horthy’s book (2001),
each such pair is written as u/h.
Branching systems are a semantic tool that enables one to develop a tense logic
of branching time. We shall sketch the main ideas.
Sentences are evaluated with respect to indices. Let W = (W, ≤) be a branching
system. A valuation (or an assignment) is a function V assigning a set of indices
V (x) to each sentential variable x. The mapping V is then inductively extended onto
the set of all sentences in the following way:

V (φ ∧ ψ) := V (φ) ∩ V (ψ),
V (¬φ) := the complement of V (φ).

The language is usually furnished with some other connectives. It is usually


assumed that the language contains temporal connectives P and F, as well as the
modal connectives ♦ and □.
The connective P represents simple past tense and is read “It was the case that.”
The connective F represents simple future tense and is read “It will be the case
that.” The intended meanings of the above connectives are provided by the following
conditions:

V(Pφ) := {u/h : (∃u′ ∈ h) u′ < u & u′/h ∈ V(φ)},

V(Fφ) := {u/h : (∃u′ ∈ h) u < u′ & u′/h ∈ V(φ)}.

The above framework makes it possible to fix the meanings of temporal connec-
tives as well as the meanings of the above modal connectives. In this context, the
formula □φ means that φ is settled, or historically necessary. Intuitively, □φ is true
at some state (moment) if φ is true at that state no matter how the future turns out.
Analogously, ♦φ means that φ is still open as a possibility. ♦φ is true at some state
(moment) if “there is still some way in which the future might evolve that would lead
to the truth of φ” (Horthy 2001, p. 9). Formally,

V(□φ) := {u/h : (∀h′ ∈ Hu) u/h′ ∈ V(φ)},

V(♦φ) := {u/h : (∃h′ ∈ Hu) u/h′ ∈ V(φ)}.

The fact that u/ h ∈ V (φ) is read ‘φ is satisfied at an index u/ h in the branching


system W’ and denoted by W, u/ h |= φ, i.e.,

W, u/ h |= φ if and only if u/ h ∈ V (φ).

The satisfaction relation |= is thus a binary relation between the set of indices
and the set of formulas. It is easy to see that the satisfaction relation preserves the
validity of classical logic. The pair (W, |=) is then called a model.
|φ|u is the proposition expressed by the sentence φ at the state u in the model
(W, |=). Thus,
|φ|u = {h ∈ Hu : W, u/ h |= φ}.

Equivalently, |φ|u = {h ∈ Hu : u/ h ∈ V (φ)}.
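The clauses for the settledness and possibility connectives, and the proposition |φ|u, can be illustrated on the same kind of finite example. The sketch below is again ad hoc (the valuations V_p, V_q and the function names are assumptions made only for illustration); only the modal clauses are modelled, the temporal ones being analogous.

```python
# Three histories of a small branching system; a sentential variable is
# interpreted by a set of indices (u, h).
h1, h2, h3 = frozenset({0, 1, 3}), frozenset({0, 1, 4}), frozenset({0, 2, 5})
HIST = [h1, h2, h3]

def H(u):
    return [h for h in HIST if u in h]

V_p = {(3, h1), (4, h2), (5, h3)}      # p holds at the last state of every history
V_q = {(0, h1)}                        # q holds at the root only on history h1

def sat(index, phi):
    u, h = index
    if phi[0] == 'var':                # ('var', V) with V a set of indices
        return index in phi[1]
    if phi[0] == 'box':                # settled: true on every history through u
        return all(sat((u, g), phi[1]) for g in H(u))
    if phi[0] == 'diamond':            # still open as a possibility
        return any(sat((u, g), phi[1]) for g in H(u))

def prop(phi, u):                      # |phi|_u: histories through u satisfying phi
    return [h for h in H(u) if sat((u, h), phi)]

p, q = ('var', V_p), ('var', V_q)
print(sat((0, h1), ('diamond', q)), sat((0, h1), ('box', q)))   # True False
print(sat((3, h1), ('box', p)))        # True: only h1 passes through 3
print(prop(q, 0))                      # [frozenset({0, 1, 3})], i.e. {h1}
```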


In what follows, however, we will not be much concerned with purely logical
aspects of these semantic constructions. Instead, we will present further constructions
strictly linked with the theory of action.
Given a branching system W = (W, ≤) and u ∈ W, we say that two histories h1
and h2 are undivided at u if there is a state v such that u < v (i.e., v is properly later
than u) and v ∈ h1 ∩ h2. (Due to the maximality of histories as chains in (W, ≤), we
also have that u ∈ h1 ∩ h2.) The fact that histories h1 and h2 are undivided at u is
pictorially represented by the following diagram:

Fig. 5.1 (diagram: histories h1 and h2 passing through a common later state v)

We arrive at the crucial definition for this section:

Definition 5.1.2 A stit frame is a structure of the form

M = (W, ≤, Agent, Choice),

where (W, ≤) is a branching system, Agent is a nonempty set of agents, and Choice
is a function defined on the Cartesian product W × Agent. The function Choice
assigns to each agent a and a state u a partition, denoted by Choiceau , of the set Hu .
(We may equivalently say that Choiceau is an equivalence relation on the set Hu , for
all u ∈ W and a ∈ Agent.)
The elements of the partition Choiceau are called the actions available to the agent
a at u.
Furthermore, Choice is subject to two requirements of
(1) no choice between undivided histories, and
(2) the independence of agents.
If V is a valuation in the branching system (W, ≤), then the extended system

M = (W, ≤, Agent, Choice, V)

is called a stit model. 

We shall now define the meaning of the above two requirements.


The requirement of no choice between undivided histories says that for each
agent a and each state u, any two histories, which are undivided at u, belong to the
same choice cell of the partition Choiceau (equivalently, they belong to the same
equivalence class of the equivalence relation corresponding to Choiceau).
The requirement of the independence of agents says, intuitively, that at any given
state, the particular equivalence class that is selected by one agent cannot affect the
choices available to another. The formal definition of this condition is more involved.
Let u be a state. An action selection function at u is a mapping s assigning to each
agent a an action available to a at u, i.e., s(a) ∈ Choiceau for every a ∈ Agent.
Each such selection function “represents a possible pattern of action, a selection for
each agent of some action available to that agent at u” (Horthy 2001, p. 31). We let
Selectu denote the set of all selection functions at u. The condition of independence
of agents says that, for any state u ∈ W and for each s ∈ Selectu, the intersection
⋂{s(a) : a ∈ Agent} is nonempty.
The agent a is said to perform the action K ∈ Choiceau at the index u/ h just in
the case h is a history belonging to K . It makes no sense to say that an agent performs
an action at a state u, but only at a state/history pair u/h. The histories belonging to
K are called possible outcomes of K .
Let h ∈ Hu . There is then a unique equivalence class in the range of Choiceau
which contains h. This class is denoted by Choiceau (h). It represents the particular
action performed by the agent at the index u/h.

The stit operator (connective), playing the central role, is referred to as the Chellas
stit and marked by cstit, because it is an analogue of the operator introduced in the
sixties by Chellas.
Valuations are extended onto stit sentences in the following way. A statement of
the form [a cstit : φ], which expresses the fact that the agent a sees to it that φ,
is defined as true at an index u/h just in the case the action performed by a at u/h
guarantees the truth of φ, which means that Choiceau (h) ⊆ |φ|u . Thus,

V ([a cstit : φ]) = {u/ h : Choiceau (h) ⊆ |φ|u }.
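A minimal sketch of this evaluation clause on a one-moment frame with two agents follows (all names are ad hoc assumptions; with a single moment, the constraint of no choice between undivided histories is vacuous and is therefore omitted).

```python
# One-moment stit frame: four histories through u0, each agent's choice
# partitions them into two cells.
from itertools import product

h1, h2, h3, h4 = 'h1', 'h2', 'h3', 'h4'
Choice = {
    ('a', 'u0'): [{h1, h2}, {h3, h4}],
    ('b', 'u0'): [{h1, h3}, {h2, h4}],
}

def independent(u, agents):
    """Independence of agents: every joint selection of one cell per agent
    has a common history."""
    cells = [Choice[(a, u)] for a in agents]
    return all(set.intersection(*combo) for combo in product(*cells))

def cell_of(a, u, h):
    return next(K for K in Choice[(a, u)] if h in K)

def cstit(a, u, h, prop_u):
    """[a cstit: phi] holds at u/h iff Choice_u^a(h) is included in |phi|_u."""
    return cell_of(a, u, h) <= prop_u

phi_u0 = {h1, h2, h3}                           # |phi|_u0: phi fails only on h4
print(independent('u0', ['a', 'b']))            # True
print(cstit('a', 'u0', h1, phi_u0))             # True : {h1, h2} is a subset of |phi|_u0
print(cstit('a', 'u0', h3, phi_u0))             # False: {h3, h4} is not
```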

(There is also another operator closely related to the Chellas stit, marked by dstit,
and called the deliberative stit. The cstit and dstit operators are interdefinable in the
presence of the necessity functor—see Horthy (2001, p. 16).)
The semantic constructions above offer a number of logical schemes involving the
necessity operator, the temporal, and stit operators. They are not discussed here. We
remark only that various proof systems for logical calculi based on stit semantics or its
fragments have been proposed. We mention in this context the names of Xu, Wölfl,
Wansing, and Herzig. Balbiani et al. (2007) provides a more extensive bibliography of
the work done in this area. The above framework for studying agency allows for a
natural treatment of the related concepts of individual ability, as distinguished from
that of impersonal possibility. Even though it is possible for it to rain tomorrow, no
agent has the ability to see to it that it will rain tomorrow. The notion of what an agent
is able to do is identified with the notion of what it is possible that the agent does,
expressed by the modal formula

♦[a cstit : φ].

Other topics discussed within the framework of the above formalism are those
of refraining, group agency and group ability. Kenny’s objections to the thesis that
the notion of ability cannot be formalized by means of modal concepts and Brown’s
analysis of agency are also discussed.

5.2 Stit Frames as Action Systems

Stit frames can be viewed as action systems as defined in the previous chapters. Our
goal is to reconstruct stit frames within the setting of ordered action systems.
Let
(1) M = (W, ≤, Agent, Choice)

be a stit frame. We shall assign to (1) a certain ordered (and constructive) action
system

(2) M ∗ = (W, ≤, R, A).


(This assignment will not be one-to-one.) M and M ∗ have the same set of states
and the same order relation ≤. Consequently, (W, ≤) is a reduct of both the systems
M and M ∗ . The transition relation R between states in the system M ∗ coincides
with the order relation ≤, i.e., R = ≤. The definition of elementary actions is more
involved.
We recall that for any u ∈ W and any a ∈ Agent, Choiceau is a partition of the
set Hu , where Hu is the set of histories passing through u.
Let Sel be a set of functions f defined on the Cartesian product W × Agent such
that, for all u, v ∈ W and all a ∈ Agent:
(3) f (u, a) ∈ Choiceau , i.e., f (u, a) is a partition cell in Choiceau ;
(4) if u ≤ v then f (v, a) ⊆ f (u, a);
Furthermore, it is assumed that
(5) for every pair (u, a) ∈ W × Agent and for every choice cell K ∈ Choiceau ,
there exists a function f ∈ Sel such that f (u, a) = K .
Thus, (5) says that the values of the functions of Sel exhaust the class of all partition
cells.
Condition (4) is motivated by the following observation. Suppose u, v ∈ W are
states such that u ≤ v. Then, the tree-like property for (W, ≤) implies that Hv ⊆ Hu.
Furthermore, the requirement of no choice between undivided histories means that for
any a ∈ Agent and for any choice cells K ∈ Choiceau, L ∈ Choiceav, K ∩ L ≠ ∅
implies that L ⊆ K. Consequently, every choice cell K ∈ Choiceau includes the
union ⋃{L ∈ Choiceav : K ∩ L ≠ ∅} as a subset. Thus, (4) postulates that if
f (u, a) = K, then the value f (v, a) belongs to the family {L ∈ Choiceav : K ∩ L ≠
∅}. This is a kind of consistency between the actions f (u, a) available for a at u, and
the action f (v, a), which is available for a at v.
Given a set Sel of functions, defined as above, and a function f ∈ Sel, we then
proceed to define, for each agent a ∈ Agent, a binary relation A_a^f ⊆ W × W. For
u, v ∈ W, we put:
(6) u A_a^f v if and only if u ≤ v and there exists a history h ∈ f (u, a) such that
v ∈ h.
The relations from the family

A_a := {A_a^f : f ∈ Sel}

are called actions available to the agent a.


Thus, u A_a^f v means that by performing the action A_a^f at u, the agent a carries
the system to a later state v, belonging to some history h ∈ f (u, a). In other
words, by performing the action A_a^f at u, the agent a can guarantee that the history
to be realized will lie among those belonging to f (u, a). This is in accordance with
the “philosophy” of action that underlies stit semantics: an action performed by an
agent at u results in choosing a history passing through u.

Let

A := ⋃{A_a : a ∈ Agent}.

A is the totality of all actions available to the agents a ∈ Agent. A is the set of
atomic actions of the system (2) we wished to define. This makes the definition of
the system (2) complete.
It immediately follows from (6) that every action A_a^f is included in the order ≤.
But since ≤ is also the transition relation R, it follows that the resulting action system
is constructive and normal.
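Clause (6) can be illustrated as follows. The sketch below is ad hoc and deliberately partial: it fixes, by hand, the choice cell selected at a state and computes the pairs that the corresponding relation contains; the global consistency conditions (3)–(5) on Sel are not modelled here.

```python
# Three histories of a small branching system; the order is read off the histories.
h1, h2, h3 = frozenset({0, 1, 3}), frozenset({0, 1, 4}), frozenset({0, 2, 5})
leq = {(u, v) for h in (h1, h2, h3) for u in h for v in h if u <= v}

def action_pairs(u, cell):
    """{(u, v) : u <= v and v lies on some history of the selected cell}."""
    later = {v for h in cell for v in h}
    return {(u, v) for v in later if (u, v) in leq}

# Selecting {h1, h2} at the root keeps every continuation except the 2-5 branch:
print(sorted(action_pairs(0, {h1, h2})))   # [(0, 0), (0, 1), (0, 3), (0, 4)]
# A more refined selection at state 1 excludes state 4 as well:
print(sorted(action_pairs(1, {h1})))       # [(1, 1), (1, 3)]
# Every such pair satisfies u <= v, so the induced action system is normal:
print(all(p in leq for p in action_pairs(0, {h3})))   # True
```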

5.3 Ought Operators

The crucial task is to incorporate the deontic operator ◯, read as

“It ought to be that”,

into the above framework based on branching time. Technically, Ought is defined
as a function assigning a nonempty subset of Hu to each state (moment) u. The set
Oughtu is intuitively thought of as containing the ideal histories through u, those in
which things turn out as they ought to. A sentence of the form ◯φ is then defined
as true at u/h just in case φ is true in each of these ideal possibilities, i.e., φ is true
at u/h′ for each history h′ ∈ Oughtu. The resulting structure

⟨Tree, Agent, Choice, Ought⟩

is called a standard deontic stit frame. Deontic systems based on standard stit frames
model normative theories that can do no more than classify situations as either ideal
or nonideal. The above formalism is then generalized to accommodate a broader
range of normative theories, allowing for more than two values. Technically, this is
achieved by replacing the primitive Ought in the standard deontic stit frames by a
function Value assigning to each state u a mapping Valueu of the set of histories
from Hu into some fixed set of values. Thus, each history h through a state, rather
than being classified as ideal or nonideal, is instead assigned a certain value at that
moment. These values represent the worth or desirability of histories. The set of
values is assumed to be partially ordered. Valueu(h) ≥ Valueu(h′) means that the
value assigned to the history h at u is at least as great as that assigned to h′ at u. The
structure
⟨Tree, Agent, Choice, Value⟩

is called a general deontic stit frame. This class of frames subsumes standard frames
as a special case and enables one to represent more involved normative theories. Of

particular concern are utilitarian theories in which the value functions are subject
to two additional constraints. The first condition says that the range of the function
Valueu is a set of real numbers, for all u. The second constraint—uniformity along
histories—requires that the numbers assigned to a single history do not vary from
state to state; formally, Valueu(h) = Valueu′(h) for any states u, u′ belonging to h.
Utilitarian stit frames form, therefore, a subclass of the class of general deontic stit
frames. Horthy’s book (2001) is concerned with an exposition of the basic concep-
tual issues related to deontic stit frames, and the focus is on the utilitarian deontic
framework.
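The following small sketch (the names, the toy histories and the numeric values are ad hoc assumptions made only for illustration) contrasts the two kinds of frames: the impersonal ought of a standard deontic stit frame, where Oughtu selects the ideal histories through u, and its value-based generalisation.

```python
# Histories through a single moment u0, and the ideal ones among them.
h1, h2, h3 = 'h1', 'h2', 'h3'
Ought = {'u0': {h1, h2}}                 # the ideal histories through u0

def ought_true(u, prop_u):
    """A sentence (ought)phi is true at u/h iff phi holds on every ideal
    history through u, i.e. Ought_u is included in |phi|_u."""
    return Ought[u] <= prop_u

print(ought_true('u0', {h1, h2, h3}))    # True
print(ought_true('u0', {h1, h3}))        # False: phi fails on the ideal history h2

# The utilitarian generalisation replaces Ought by a real-valued Value map;
# a two-valued Value recovers the standard frame (ideal = maximal value).
Value = {'u0': {h1: 10.0, h2: 10.0, h3: 3.0}}
best = max(Value['u0'].values())
print({h for h, v in Value['u0'].items() if v == best})    # {'h1', 'h2'}
```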
As expected, general deontic models and utilitarian deontic models determine two
deontic logical systems. These logics are both impersonal, offering accounts only of
what ought to be, abstracting from any issues of agency. A certain explication of the
notion of agency can be obtained by combining the agency operator with the above
impersonal account of what ought to be. Technically, this results in a proposal accord-
ing to which the notion that an agent a ought to see to it that φ is analyzed through a for-
mula of the form ◯[a cstit : φ]. This proposal is referred to as the Meinong/Chisholm
analysis. Thus, the general idea underlying the Meinong/Chisholm analysis is that
what an agent ought to do is identified with what it ought to be that the agent does.
A number of objections against the Meinong/Chisholm analysis have been raised,
e.g., by Geach, Harman and Humberstone. Their analysis of the problem shows that
the above proposal is vulnerable to a simple objection called the gambling problem
in the literature. In conclusion, as Horthy points out, although the above utilitarian
framework carries some plausibility, the Meinong/Chisholm analysis yields incor-
rect results and therefore must be rejected as a representation of the purely utilitarian
notion of what an agent ought to do.
Horthy (2001) writes: “…the general goal of any utilitarian theory is to specify
standards for classifying actions as right or wrong; and in its usual formulation, act
utilitarianism defines an agent’s action in some situation as right just in case the con-
sequences of that action are at least as great in value as those of any of the alternatives
open to the agent, and wrong otherwise” (p. 70). In the above analysis of agency,
the alternative actions open to an agent a in a given situation are identified with the
actions belonging to Choiceau for an appropriate u. However, in the framework of
indeterminate time, the matter of specifying the consequences of an action presents
some difficulties. In decision theory, a situation in which the actions available to an
agent lead to various possible outcomes with known probability is described as a case
of risk, while a situation in which the probability with which the available actions
might lead to various outcomes is either unknown or meaningless, is described as a
case of uncertainty. Horthy is mainly concerned with the case of uncertainty. Conse-
quently, the above framework excludes expected value act utilitarianism because the
probabilistic information necessary to define expected values of actions is, generally,
unavailable. Instead, a theory called dominance act utilitarianism is formulated. It
is a form of act utilitarianism applicable in the presence of both indeterminism and
uncertainty and based on the dominance quasi-order among actions.

Let K and K′ be members of Choiceau. Then K weakly dominates K′ (in symbols:
K′ ≤ K) whenever, roughly, the results of performing K are at least as good as those
of performing K′ in every state, and K strongly dominates K′ (in symbols: K′ < K)
whenever K weakly dominates K′ but K is not weakly dominated by K′. According
to Horthy, an action available to an agent a at a state u is classified as right at an index
u/h exactly when this action belongs to a certain subset Optimalau ⊆ Choiceau—
the set of the optimal actions available to that agent; and the action is classified as
wrong at u/h otherwise.
Although the classification of actions as right or wrong offered by the dominance
theory is relativized to full indices—moment/history pairs—the classification does
not depend on histories: any action that is classified as right at u/h is also right at
u/h′ for each history h′ ∈ Hu. The classification of actions offered by the dominance
theory is also referred to as state (or moment) determinate.
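A deliberately simplified sketch of the dominance comparison follows (the explicit background conditions, the numeric values and all names are assumptions introduced only for illustration; they stand in, very roughly, for the states against which the results of an action are compared).

```python
# values[action][condition]: the worth of the outcome of an action under each
# background condition (e.g. the choices the other agents might make).
values = {
    'K1': {'s1': 5, 's2': 5},
    'K2': {'s1': 5, 's2': 3},
    'K3': {'s1': 2, 's2': 3},
}
conditions = ['s1', 's2']

def weakly_dominates(k, kp):
    """k does at least as well as kp under every background condition."""
    return all(values[k][s] >= values[kp][s] for s in conditions)

def strongly_dominates(k, kp):
    """k weakly dominates kp and is not weakly dominated by kp in return."""
    return weakly_dominates(k, kp) and not weakly_dominates(kp, k)

def optimal():
    """Actions not strongly dominated by any alternative."""
    return {k for k in values
            if not any(strongly_dominates(kp, k) for kp in values if kp != k)}

print(weakly_dominates('K1', 'K2'), strongly_dominates('K1', 'K2'))   # True True
print(strongly_dominates('K2', 'K3'))                                 # True
print(optimal())                                                       # {'K1'}
```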
In order to avoid the difficulties inherent in the Meinong/Chisholm analysis, a new
two-argument functor ⊙[. . . cstit : . . .] is introduced, allowing for the construction
of statements of the form ⊙[a cstit : φ] with the intended meaning: ‘a ought to see to it
that φ’. The evaluation rule for this functor is rather intricate but the underlying idea
is rather straightforward. A sentence φ is safely guaranteed by an action K available
to an agent a whenever the truth of φ is guaranteed by K and also by any other action
available to a that weakly dominates K. Thus, ⊙[a cstit : φ] is true at u/h if and
only if, for every action K available to a at u that does not guarantee the truth of
φ, there is another action K′ available to a at u that both strongly dominates K and
safely guarantees φ.
The semantics of the deontic operator ⊙ simplifies in the case of finite choice
situations, where the set of optimal actions available to the agent is finite (and non-
empty). The above semantics, based upon a dominance ordering among actions,
is strictly linked with the notion of causal independence. A proposition is causally
independent of the actions available to a particular agent whenever the truth or falsity
of this proposition is guaranteed by a source of causality other than the actions of
that agent. The notion of independence is correlated with counterfactuals. We shall
not discuss here the formal representation of these correlations.
It is worth noticing that the notion of what an agent ought to do is extended to yield
an account of the agent’s conditional oughts as well. This requires introducing sets
Choiceau / X as containing those actions available to a at u that are consistent with
X ⊆ Hu . Conditional analogues of earlier concepts of weak and strong dominance
are also defined. Combining these concepts, one arrives at the notion of conditional
optimality. For example, Horthy’s book (2001) turns to the task of analyzing state-
ments of the form ⊙[a cstit : φ/ψ] expressing what an agent a ought to do under
specified background conditions ψ. The notion of conditional optimality is useful in
the explication of a version of act utilitarianism different from the theory of domi-
nance act utilitarianism sketched above. This version is referred to as orthodox act
utilitarianism. Roughly, while the dominance theory associates with any state a sin-
gle classification of the actions available to an agent as right or wrong, the orthodox
theory allows an action to be classified as right at some histories throughout a state

but wrong at others. In order to represent the claim that a ought to see to it that φ in
the orthodox sense, a new operator ⊕ representing the orthodox perspective of an
agent’s oughts is defined. Informally, the statement ⊕[a cstit : φ] is true at u/ h if
and only if the truth of φ is guaranteed by each of the actions available to a that are
optimal given the circumstances; that is, the particular state in which a finds himself
at this index.
An account governing groups of agents, as well as individuals, and an account
governing individual agents viewed as acting in co-operation with the members of
a group are also analyzed. The formal machinery enabling one to systematically
articulate various groups of normative concepts is developed in Horthy’s book. This
formal apparatus largely parallels the individual case. The key role is played here by
the partition ChoiceΓu representing the actions available to the group Γ at the state
u. The statement [Γ cstit : φ] means that, through the choice of one or another of its
available actions, the group Γ guarantees the truth of φ.
Having defined dominance act utilitarianism for both individuals and groups,
the relations holding between these two versions are explored. The crucial issues are
whether the satisfaction of dominance act utilitarianism by each member of the group
entails that the group itself satisfies this theory, and conversely, whether satisfaction
by a group entails satisfaction by its individual members. The answer to these two
questions in the case of dominance theory is: no—individual and group satisfaction
are independent notions. But the orthodox theory can be extended to cover groups as
well. In the case of the orthodox version of act utilitarianism, while satisfaction by
each individual in a group is again possible even when the group itself violates the
theory, satisfaction of the orthodox theory by a group does, in fact, entail satisfaction
by the individuals the group contains.
The theory of the dominance and orthodox ought operators for individual agents
is extended to group agents, so as to allow for sentences of the forms ⊙[Γ cstit : φ]
and ⊕[Γ cstit : φ] expressing, respectively, the dominance and orthodox meanings
of the statement: “The group Γ ought to see to it that φ”.
Group deontic operators enable one to express, in a formal way, a number of issues
concerning the relations among the oughts of different groups of agents, and also the
oughts governing collectives of agents and their individual members. The matter of
inheritance (both downward and upward) of whether individuals inherit oughts from
the groups to which they belong, and conversely, of whether groups inherit oughts
from their individual members is an example of such an issue.
The utility of the above framework is illustrated in the context of the discussion
on rule utilitarianism. According to rule utilitarianism an action available for an
individual is classified as right whenever that action is one that the individual would
perform in the best overall pattern of action.
The above framework, by placing the discussion in a context that distinguishes
explicitly between the orthodox and dominance forms of act utilitarianism, allows us
to relate rule utilitarianism to the two forms of act utilitarianism defined above. The
tenor of the discussion of these issues is that the conflict between rule utilitarianism
and the dominance theory of act utilitarianism is not severe. In situations allowing for
multiple optimal patterns of action, these two forms of utilitarianism are consistent.

The above theories, only outlined here, specify the normative concepts governing
an agent on the basis of the actions available at a definite state, ignoring actions that
might be available later on, being (e.g.) a result of certain implemented action plans.
To avoid such a distorted picture of the agent’s real normative situation, the above
theory is also focussed on elaborating a general framework that allows an agent’s
choices to be “evaluated against the background of later possibilities.” This analysis is
based on the key technical concept of a strategy. This is a partial function σ assigning
to each state u in the domain of the strategy a certain action available to the agent at
u; formally, σ is a partial function on W such that σ (u) ∈ Choiceau for each u in the
domain of σ . Considerations are limited to certain classes StrategyaM of structurally
ideal strategies. StrategyaM is the set of strategies for a in a field M that are defined
at u, complete in M, and nonredundant. StrategyaM is a kind of generalization
of Choiceau . Just as Choiceau represents the options available to the agent a at u,
StrategyaM represents the options available to a throughout the entire field M ⊆
{u′ : u ≤ u′}. The problem of defining a strategic analogue of the standard cstit
operator, say scstit, for “strategic Chellas stit,” leads, however, to serious semantic
difficulties. The reason is that although an index u/ h provides enough information
to identify the action performed by a, it does not provide enough information to
identify the strategy being executed by a, and thus does not allow us to determine whether or
not that strategy guarantees the truth of φ.
In order to resolve these difficulties, some ways are mentioned in which these
issues might be approached. One of them is similar to the (Ockhamist) approach to
the future tense statements adopted in tense logic—since a state (moment) u alone
does not provide sufficient information for the evaluation of statements involving
scstit, this approach recommends that these statements be evaluated rather at triples
of the form u/h/J, where h ∈ J and J ⊆ Hu, than at indices consisting of pairs
u/h only. Leaving aside the issue of finding solutions to the strategic cstit problem,
the above difficulties can be bypassed in the treatment of the strategic ability by
way of introducing a fused operator ♦[. . . scstit : . . .]. The intuitive meaning of
♦[a scstit : φ] is that a has the ability to guarantee the truth of φ by carrying out
an available strategy. Such statements are evaluated at indices being triples u/ h/M,
where u/ h is an ordinary index and M is a field at u. (In fact, the truth or falsity of
a strategic ability statement is independent of the h-component of such an extended
index.)
The strategic ought operator allows for the construction of statements of the form
⊙[a scstit : φ]. Such a statement, when evaluated at an extended index u/h/M,
conveys the meaning that the agent a ought to guarantee the truth of φ as a result
of his/her actions throughout the field M. The semantics of the strategic ought oper-
ator validates many logical principles, analogues of theses for the momentary cstit
operators. This issue is not pursued here.
We have presented several models of the deontology of the ought operators. These
‘oughtological’ models are based on tree-like structures, in which a distinguished
situational parameter—the time factor—plays a special role. In fact, they are situa-
tional models. Each index s = u/h can be viewed as a situation and u is the state of

this situation. A similar remark applies to ‘strategic’ triples u/ h/M, where u/ h is


an ordinary index and M is a field at u.
It seems that there are at least two ways in which possible modifications of the above
theory might be approached. First, the range of the theory might be broadened by
considering models founded on directed graphs rather than on trees only; that is, the
models presented in Parts I and II of this book. These graph-like models are based
on two constituents, separating actions from their situational components: the notion
of an action system and the notion of a situational envelope of an action system. By
appealing to such graph-like models rather than to tree-like structures, it is possible
to represent some simple action systems, like a game of chess or a production line in a
factory, in an undistorted way. (In the case of a game of chess, instead of moments of
time, a more significant role is played by the distributions of pieces on the chessboard
and the order in which the two players, White and Black, perform their moves.) This,
of course, requires incorporating the concept of a situation as a certain set-theoretic
entity into the general framework of the theory of action.
Chapter 6
Epistemic Aspects of Action Systems

Abstract The theory of action conventionally distinguishes real actions and


doxastic (or epistemic) actions. Real actions (or as we put it—praxeological actions)
bring about changes in material objects of the environment external to the agent. Epis-
temic actions concern mental states of agents—they bring about changes of agents’
knowledge or beliefs about the environment as well as about other beliefs. Some log-
ical issues concerning knowledge, action, truth, and the epistemic status of agents are
discussed. In this context the frame and ramification problems are also analyzed. The
key issue raised in this chapter is that of non-monotonicity of reasoning. A reasoning
is non-monotonic if some conclusions are invalidated by adding more knowledge. In
this chapter a semantic approach to non-monotonic forms of reasoning, combining
them with action theory, is presented. It is based on the notions of a frame and of a
(tree-like) rule of conduct for an action system.

6.1 Knowledge Models

6.1.1 Introduction

This book1 does not cover some in-depth aspects of reasoning about action and
change such as the ramification problem and the frame problem. A significant amount
of work has been done by many researchers to shed light on these issues from a more
formal perspective. The problem of linking non-monotonic reasoning with action
deserves special attention. All these problems are inherent to any formalization of
dynamic systems. Yet another issue is defeasibility in normative systems and the
problem of regulation consistency. (This is different from the simple consistency
of the set of norms.) As to the defeasibility problem, we refer here to the work by

1 The author is deeply indebted to Dr. Matthew Carmody for many insightful comments to this
paragraph. Many of them have been taken into account.

Leon van der Torre (1997, 1999) and his colleagues from the 1990s.2 The problem of
regulation consistency is discussed by Cholvy (1999). Other papers that are relevant
are: Castilho et al. (2002), Eiter et al. (2005), Thielscher (1997, 2011), Varzinczak
(2006, 2010), Zhang and Foo (2001, 2002), Zhang et al. (2002).
We shall present some preliminary remarks with the intention of better under-
standing some of these difficult problems from the point of view of the formalism
developed in this book. This especially concerns the problem of non-monotonic rea-
soning, which is examined here in more detail from the semantic and pragmatic
perspectives.
The theory of action conventionally distinguishes real actions and doxastic (or
epistemic) actions. Real actions (or as we put it—praxeological actions) bring about
changes in material objects; these changes manifest themselves in changes of “the
environment external to the agent.” Epistemic actions concern mental states of
agents—they bring about changes of agents’ knowledge or beliefs “about the envi-
ronment as well as about other beliefs.”3 As for real actions, there are two main
approaches. According to the first approach, actions are viewed as events of a certain kind.
Actions are done with an intention or for some reason. As Segerberg (1985) writes:
“To understand action both are needed, agent and world, intention and change.” The
other approach views material actions as changes of states of affairs brought about
by agents. Actions are described here by their results/outcomes. This view on action
and agency is close to modal and dynamic logic. Doxastic actions are often divided
into two classes: belief revision and belief update. It is interesting to note that in
the study of doxastic actions, the tools borrowed from (multi)modal logic and com-
puter science are extensively used to express epistemic operators. The introduction
to Trypuz (2014) gives a brief account of the main philosophical ideas that underlie
the theory of action. This book is intended to unify various views on praxeological
and epistemic actions. ‘Bare’ actions are modeled by elementary action systems.
By introducing a situational envelope of an elementary system (and hence passing
to situational action systems), it is possible to model various issues pertaining to
agency, intentionality, belief update, and so on.

2 Van der Torre (1997) writes: “There does not seem to be an agreement in deontic logic literature
on the definition of ‘defeasible deontic logic”’. It is generally accepted that a defeasible deontic
logic has to formalize reasoning about the following two issues.
1. Resolving conflicts. Defeasibility becomes relevant when there is a (potential) conflict between
two obligations. In a defeasible deontic logic a conflict can be resolved, because one of the obliga-
tions overrides, in some sense, the other one.
2. Diagnosing violations. Consider the obligation “normally, you should do p.” Now the problem
is what to conclude about somebody who does not do p. Is this an exception to the normality claim,
or is it a violation of the obligation to do p?”
3 See Introduction in Trypuz (2014).

6.1.2 Propositional Epistemic Logic

The language of propositional epistemic logic results from augmenting the language
of classical propositional logic (CPC) with a countable set of unary epistemic con-
nectives K a (a ∈ A). The set A represents agents. K a φ states ‘Agent a knows that
φ’, for arbitrary sentence φ. One may also add the doxastic connectives Ba (a ∈ A).
Ba φ then states ‘Agent a believes that φ’. A standard semantic interpretation of epis-
temic and doxastic operators will shortly be provided in terms of multimodal Kripke
frames called knowledge models. Before presenting them, we briefly discuss some
issues concerning knowledge and truth.

6.1.3 The Idea of Knowledge

Wikipedia provides a brief account of the term ‘knowledge’: “Knowledge is a famil-


iarity, awareness or understanding of someone or something, such as facts, informa-
tion, description, or skills, which are acquired through experience or education by
perceiving, discovering, or learning. Knowledge can refer to a theoretical or practical
understanding of a subject.” Philosophers have tended to focus on ‘knowledge-that’
or propositional knowledge. A traditional concern of the epistemologist has been to
provide an analysis of the concept of knowledge. Attempts to do this stretch back to
Plato (the Meno, the Theaetetus) but have enjoyed a renaissance in the last 50 years because
of ‘Gettier cases’.

6.1.4 Gettier

Gettier presented the traditional analysis of knowledge consisting of three necessary


and sufficient conditions. Let a be an agent and φ a sentence. The tripartite analysis
says that a knows that φ if and only if
(a) φ is true
(b) a believes that φ
(c) a is justified in believing φ.
Let us admit and ignore that the conditions (especially (b) and (c)) are potentially to
some extent vague. Gettier argued that there were cases in which a has a justified
true belief but—it strikes us—not knowledge. For example, Piotr has a clock on his
office wall which has worked perfectly for the past 10 years. One Monday morning,
he looks up at his clock and forms the true belief that it is 10.15 a.m. His belief is
justified in view of the past history of the clock. However, the clock happened to stop
working on Sunday at 10.15 a.m. Had he looked up at any other time he would have
formed a false belief. He does not know it is 10.15 a.m. (The example is a variation
of one due to Bertrand Russell.)

Many philosophers have attempted to explain the strong intuition that he does
not know by inserting clauses to ensure (for example) that an agent’s beliefs are not
defeasible or have not been arrived at through reasoning via a false lemma. But the
prevailing opinion is that good solutions are not available—the definition of knowledge is a
matter of ongoing debate among philosophers.
Concerning the knowledge operator in epistemic logic, one may pose two ques-
tions here:
(M) What is the meaning of the formula K a φ? How should it be explicated in simpler
terms?
and
(K) What is the logic and semantics of this operator? How are truth-values to be
assigned to K a φ?
These questions are clearly related. Nevertheless, one need not complete the tra-
ditional epistemic project of answering (M) before addressing (K). For we have a
sufficiently clear understanding of the concept of knowledge to undertake answering
(K). (Had we not, we could not, of course, attempt (M); for a reasonable grasp of the
concept must be presumed in order to engage in its analysis.) For our purposes, it is
(K) that matters. Of course, in exploring the logic of knowledge, we will be exploring
the concept of knowledge but, as it were, indirectly.

6.1.5 Exploring (K)

Let us therefore explore the concept of knowledge a little more in order to address
(K). First of all, knowledge is factive and logically grounded—a knows only true
facts. Epistemic logic rejects various forms of false knowledge which are not grasped
by models. So, epistemic logic assumes the scheme:
T: K a φ → φ,
because the agent knows only true facts (but of course not all of them); hence in
particular, if agent a knows that φ is true,4 then φ is true. T is the axiom of truth—
the requirement that whatever is known is true.5

4 Formally, in the developed form one would write: “a knows that the sentence ‘φ’ is true.” Here
‘φ’ is the name of φ in the metalanguage and “a knows that ‘φ’ is true” is shorthand for “a knows
that the sentence bearing the name ‘φ’ is true.” To simplify the notation, we shall assume that the
particle ‘that’ plays the role of the quotation marks ‘ and ’. We therefore write “a knows that φ is
true”.
5 One distinguishes between a sentence φ and the metalanguage parlance about φ using appropriate

quotation marks. Thus the expressions “The sentence ‘a knows φ’ is true” and “a knows that φ is
true” bear different meanings.

The schemes

K a φ → K a (φ ∨ ψ) and K a ψ → K a (φ ∨ ψ)

are also valid but their converses K a (φ ∨ ψ) → K a φ and K a (φ ∨ ψ) → K a ψ


are not.
The axiom T and CPC imply that

(1) K a (¬φ) → ¬K a (φ)

is logically valid for all φ. Indeed, K a (¬φ) → ¬φ and ¬φ → ¬K a (φ) are valid by
T and the contraposition of T , respectively. Hence, K a (¬φ) → ¬K a (φ) holds, by
the transitivity of →.

6.1.6 Extensionality

Epistemic operators are not extensional. The following example is instructive. When
the unary epistemic operator ‘Newton knew that…’ is applied to the true sentence
‘8 = 5 + 3’ we obtain the true sentence ‘Newton knew that 8 = 5 + 3’. The truth-
value of this sentence is not necessarily preserved if we swap the embedded sentence
with one that has the same extension nor if we swap any term in it for a coextensive
one. For example, neither ‘Newton knew that Lincoln was president of the USA’ nor
‘Newton knew that the atomic number of oxygen = 8’ is true. To capture this and
other phenomena, Chapter 6 in Czelakowski (2001) provides a semantic approach
to propositional attitudes based on referential frames.

6.1.7 What Is Involved in Knowing?

It is evident that knowledge of a proposition is not equivalent to knowledge of the


truth of a sentence that expresses that proposition, as represented by (2):
(2) “Agent a knows that φ” is translated as the metasentence “a knows that φ is
true.”
The sentence on the right-hand side of (2) does not belong to the object language; it
belongs to the metalanguage. If one accepts (2), then the sentences on both sides of
the equivalence (2) carry the same logical value.
According to (2), the epistemic sentence “Adam knows that Walter Scott is the
author of Rob Roy” is paraphrased as “Adam knows that the sentence ‘Walter Scott
is the author of Rob Roy’ is true.” It is also assumed that classical logic is valid in
metalanguage and the usual conventions for truth as, e.g., “The sentence ‘¬φ’ is true”
is equivalent to “The sentence ‘φ’ is false” are adopted. (That is, we suppose Adam has

also a certain basic understanding of truth, falsity, and negation.) If one additionally
assumes that the connective ‘a knows that…’ is extensional for truth conventions,
one gets from (2) that “a knows that ¬φ is true” is equivalent to “a knows that φ is
false.” Accordingly, each negative fact of the kind “Agent a knows that ¬φ,” that is,
the formula K a (¬φ), is then under (2) equivalent in the metalanguage to “Agent a
knows that φ is false.” For example, the sentence “Adam knows that Goethe is not
the author of Rob Roy” receives, under paraphrase (2), the equivalent form “Adam
knows that the sentence ‘Goethe is the author of Rob Roy’ is false.”
Note, however, that (1) becomes paradoxical under paraphrase (2), because (1)
then equivalently states that if a knows that ¬φ is true (and hence a fortiori, a knows
that φ is false), then a does not know that φ is true, which is sheer nonsense (because
if a knows that φ is false, then a knows everything about the truth and falsity of φ,
by bivalence). Thus the paraphrase (2) is untenable if a basic knowledge of classical
logic is available to a.
But one may retain partial paraphrases:
(a) If a knows φ, then a knows that φ is true,
and
(b) If a knows ¬φ, then a knows that φ is false,
without arriving at a contradiction.

6.1.8 Situational Semantics

On grounds of situational semantics, one may try to paraphrase the formula K a φ in


the metalanguage of epistemic discourse as follows:
(3) “Agent a knows that φ” is translated as “a knows the situation described by φ.”
One may also say that a knows the fact or the event represented by φ. As in the
case of paraphrase (2), the sentence on the right-hand side of (3) does not belong to
the object language. Another question that is left open is the meaning of the term
‘situation’. We may, of course, refer here to various theories of situations which
are available in the literature. In some approaches, the meaning of a sentence is
identified with the situation described by this sentence. Without dwelling deeply on
situational semantics and its nuances, one may then argue that (3), the situational
rendering of K a φ yields that a knows the situation described by φ if and only if a knows
the situation described by the negation ¬φ. Thus, from the viewpoint of Suszko’s
situational semantics (see Wójcicki 1984), if one knows the (positive) situation that
Paris is the capital of France, one also knows the (negative) situation that Paris is not
the capital of France and vice versa.
According to the Fregean standpoint, there are only two possible situations being
the referent of each well-defined sentence—truth or falsity. This is the content of
the so-called Fregean Axiom—see Suszko (1975). As Suszko writes: “The semantical

assumption that all true (and similarly all false) sentences describe the same, i.e., have
a common referent (Bedeutung) is called the Fregean axiom.” (In the older literature,
the word ‘Bedeutung’ was translated as ‘meaning’ but nowadays it is translated as
‘referent’ while the word ‘Sinn’ is translated as ‘meaning’.) Thus, from the Fregean
perspective, if one accepts (3), yet another metalinguistic semantic translation of the
epistemic connective K a is conceivable:
“Agent a knows that φ” is translated as “a knows the referent of φ”
(that is, “a knows the truth-value of φ”).
But this interpretation also yields nonsense. We may therefore conclude that if
one assumes (3), then epistemic logic rejects the Fregean Axiom. (In fact, epistemic
logic abolishes the Fregean Axiom even without assuming (3).) As Suszko (1975)
writes: “It is obvious today that the abyss of thinking in a natural language does not
fit into the Fregean scheme.”
Similar remarks apply to other propositional attitudes as “a believes that φ”,
“a accepts φ”, “a maintains that φ,” etc. (Beliefs are probably true, possibly true, or
likely to be true.)

6.1.9 The Logical Framework

It is true that K a φ → φ but not the converse, as there are many truths we do not know.
There is no simple link between the truth of φ and the truth of K a φ. The traditional
epistemic project—the answer to (M)—seeks to specify the link. The logical project
can to some extent sidestep it by helping itself to the catch-all epistemic means agents
have to put themselves in different epistemic states. Contemporary epistemic logic
offers a sophisticated semantic apparatus by means of which it answers (K), avoiding
simplifications and paradoxes. The semantics is provided by multirelational Kripke
frames called knowledge models. Thus epistemic logic is viewed as multimodal logic
applied for knowledge representation.
In the case of one-agent systems, a model is a triple M = (W, R, V ), where R is a
binary relation on a set W (the set of ‘possible worlds’) and V is a mapping assigning
a subset of W to each propositional variable. M is called a Kripke-model and the
resulting semantics Kripke-semantics. A propositional variable p is true in a world
w in M (written M, w |= p) if and only if w belongs to the set of possible worlds
assigned to p, i.e., M, w |= p if and only if w ∈ V ( p), for all p. The formula K φ is
true in a world w (i.e., M, w |= K φ) if and only if for all w′ ∈ W, if R(w, w′), then
M, w′ |= φ. Thus, if K φ is true at the agent’s state of knowledge represented by w,
then R(w, w′) encodes all epistemic means available to the agent enabling him/her
to establish the truth of φ at w′. The semantics for the Boolean connectives follow
the usual recursive recipe. A modal formula is said to be valid in the model M if and
only if this formula is true in all possible worlds of W .

6.1.10 Other Renditions

As for other syntactic renditions of K a φ, computer scientists have proposed that the
formula K a φ should be read as “Agent a knows implicitly φ,” “φ follows
from a’s knowledge,” “φ is agent a’s possible knowledge,” etc. Propositional attitudes
like these should replace the usual “agent a knows φ.”
Dynamic epistemic logic (DEL) represents changes in agents’ epistemic status by
transforming models. DEL takes into account interactions between agents or groups
that basically consist in sending information (as announcements) and analyzes the
consequences of the flow of information on agents’ epistemic states. As Hendricks
(2003) points out: “Inquiring agents are agents who read data, change their minds,
interact or have common knowledge, act according to strategies and play games,
have memory and act upon it, follow various methodological rules, expand, contract
or revise their knowledge bases, etc. all in the pursuit of knowledge. Inquiring agents
are active agents.”
The theory of action raises interesting praxeological problems strictly linked with
agents’ abilities to perform actions. Knowledge of actions is genuine procedural
knowledge and not always reducible to verbalisable propositional contexts. Consider
for example, the action of driving a car. Although the agent must know certain facts—
that one must put petrol in a car to make it work, that the brake pedal is to the left
of the accelerator pedal—there are skills involved that cannot be conveyed in words,
such as when to change gears or how to reverse park. So, the theory must discriminate
between theoretical knowledge (episteme) and the skills that should come into use
(techne).
We may extend the language of CPC by introducing action variables α, β in a
similar way as in Chap. 4. One may then consider the praxeological operator Perfa (α),
which reads: "Agent a has the ability to perform the action α." It is an open problem
how to define the semantics of this operator.

6.1.11 Knowledge Models as Action Systems


 
A knowledge model (A, W, (Aa )a∈A ) consists of a set of agents A, a set of states W ,
and, for each agent a ∈ A, an accessibility relation Aa ⊆ W × W . Subsets of W are
called propositions, events etc. depending on context. For a given u ∈ W , the event
δa (u) := {w ∈ W : Aa (u, w)} is the set of states that are considered possible for a at
u, while all other states are excluded by a at u. We say that a knows a proposition Φ
at u if δa (u) ⊆ Φ. The event that a knows Φ, denoted K a (Φ), is the set of all states
in which a knows Φ:
K a (Φ) = {u ∈ W : δa (u) ⊆ Φ}.

In other words, a knows that Φ at u if and only if, for every w ∈ W , Aa (u, w) implies
w ∈ Φ. The function K a : ℘ (W ) → ℘ (W ) thus defined is called agent a's knowledge
operator.
The following observation is immediate:
Lemma 6.1.1 For any sets Φ, Ψ ⊆ W , Φ ⊆ K a (Ψ ) if and only if Aa [Φ] ⊆ Ψ .
(Aa [Φ] is the Aa -image of Φ, Aa [Φ] := {w ∈ W : (∃u ∈ Φ) Aa (u, w)}.) Thus a
knows Ψ at all states u ∈ Φ if and only if the epistemic ‘action’ Aa carries every
state of Φ to a state in Ψ .
Proof (⇒). Assume Φ ⊆ K a (Ψ ) and w ∈ Aa [Φ]. Hence Aa (u, w) for some u ∈ Φ.
But u ∈ Φ and Φ ⊆ K a (Ψ ) give that u ∈ K a (Ψ ). This immediately implies that
w ∈ Ψ.
(⇐). Assume Aa [Φ] ⊆ Ψ and u ∈ Φ. We claim u ∈ K a (Ψ ), i.e., δa (u) ⊆ Ψ .
Let then w be an arbitrary element of δa (u). Hence Aa (u, w) holds. But u ∈ Φ,
Aa (u, w) and Aa [Φ] ⊆ Ψ imply that w ∈ Ψ . This shows that δa (u) ⊆ Ψ , proving
that u ∈ K a (Ψ ). 
For each two agents a and b we put K ab (Φ) := K a (Φ) ∩ K b (Φ). The operator
Cab defined by

Cab (Φ) := ⋂ { K ab^n (Φ) : n ≥ 1 }

is called the common knowledge (between a and b) operator. (Here K ab^1 (Φ) :=
K ab (Φ) and K ab^{n+1} (Φ) := K ab ( K ab^n (Φ) ) for all n ≥ 1.)
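
Since W is a plain set and each Aa is a relation on W, the operators δa, K a and Cab can be computed by direct set manipulation. The following sketch (illustrative Python with invented data, not part of the original text) computes K a(Φ) and obtains Cab(Φ) as the intersection of the iterates K ab^n(Φ); on a finite W the sequence of iterates eventually repeats, so the loop terminates:

# A two-agent knowledge model: states W and accessibility relations A_a, A_b.
W = {1, 2, 3}
A = {"a": {(1, 1), (2, 2), (2, 3), (3, 3)},
     "b": {(1, 1), (1, 2), (2, 2), (3, 3)}}

def delta(agent, u):
    """delta_a(u): the states the agent considers possible at u."""
    return {w for (x, w) in A[agent] if x == u}

def K(agent, Phi):
    """K_a(Phi): the set of states at which the agent knows Phi."""
    return {u for u in W if delta(agent, u) <= Phi}

def common_knowledge(a, b, Phi):
    """C_ab(Phi): the intersection of the iterates K_ab^n(Phi), n >= 1."""
    def K_ab(X):
        return K(a, X) & K(b, X)
    seen, result = [], set(W)
    current = K_ab(Phi)                     # K_ab^1(Phi)
    while current not in seen:              # on a finite W the iterates repeat
        seen.append(current)
        result &= current
        current = K_ab(current)             # K_ab^(n+1)(Phi) = K_ab(K_ab^n(Phi))
    return result

Phi = {1, 2}
print(K("a", Phi))                          # {1}: only at state 1 does a know Phi
print(K("b", Phi))                          # {1, 2}
print(common_knowledge("a", "b", Phi))      # set(): Phi is nowhere common knowledge
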
Formally, every epistemic model (A, W, (Aa )a∈A ) falls under the definition of
a normal action system in which R, the relation of direct transition between states,
can be taken to be the union of the set of relations {Aa : a ∈ A}. Therefore, the
epistemic model (A, W, (Aa )a∈A ) may be identified with the normal action system
(W, R, {Aa : a ∈ A}). Note, however, that in epistemic models, the accessibility
relations Aa should not be treated as atomic actions. Each Aa is rather the resultant
relation (in the formal sense of this notion defined in Sect. 1.7) of a bunch of various
simpler infallible actions, epistemic and non-epistemic in character, undertaken by
the agent a at state u leading from his state of knowledge u to a state w (in our
terminology: Aa is the resultant relation Res Aa of a certain compound ‘epistemic’
action Aa available to a). In short, the compound action Aa encompasses various ways
of acquiring information by the agent a as learning, exchanging information, proving
theorems, conducting experiments, etc., and Aa is the resultant relation of the action.
In epistemic models one usually abstracts from the structure of ways of acquiring
knowledge and takes into account the initial and final states only because only these
two states matter. Dynamic epistemic logic (DEL) is the logic of changing knowledge. The
situation in dynamic epistemic logic is more involved, because the models defined
there also take into consideration various forms of communication between the agents
in time-dependent contexts.
Although the whole apparatus of epistemic logic is not introduced here, knowledge
models serve for the interpretation of sentences in an epistemic language, where
each sentence corresponds to an event, the propositional connectives correspond
to set theoretic operations, and the language connective ‘a knows’ corresponds to
the (semantically defined) operator K a . The operator K a is monotonic and each
knowledge model validates the axiom of truth T . (For more information see e.g., van
Benthem 2013a; van Ditmarsch et al. 2013; Kooi 2011; Dov Samet 2008.)

6.2 The Frame Problem

The term ‘frame problem’ derives from a technique used by cartoon makers. This
technique, called ‘framing’, consists in superimposing a sequence of different
images to produce a foreground in motion on the “frame,” which does not change,
and depicts the background of the scene. In reference to action theory the frame
problem can be articulated as the problem of identification and description of the
conditions (states of affairs, etc.) which are irrelevant or not affected by the actually
performed action. Every action changes some states of affairs. However, not all
factors that make up a definite state are changed during the course of an action.
The factors that constitute situational aspects of an action, such as time or space
localization, are undoubtedly fluid.
The ramification problem and the frame problem are two sides to the same coin.
The ramification problem is concerned with the indirect consequences of an action.
It addresses the questions: How do we represent what happens implicitly due to an
action? How do we control the secondary and tertiary effects of an action?
The learned skills and extraordinary deftness in using these skills enable people
to attain vitally important goals at minimal cost and power. The ability of humans,
presumably refined over the millennia of our cognitive development, makes it possible
to decide immediately which factors are relevant and which are not while undertaking
an action. It is assumed that the common sense law of inertia “according to which
properties of a situation are assumed by default not to change as the result of an
action” underlies the efficient action (after Stanford Encyclopedia of Philosophy
(SEP), the entry “Frame Problem”).
The frame problem was initially formulated in the context of artificial intelligence
in 1969 by John McCarthy and Patrick J. Hayes in their article Some Philosophical
Problems from the Standpoint of Artificial Intelligence. The solutions that have been
put forward by computer scientists normally refer to the common sense law of inertia.
This yields the logical setting of the frame problem, where the heart of the matter is in
defining a set of logical formulas describing the results of actions without the need for
taking into account factors being “non-effects of those actions.” The solution consists
in adopting certain axioms called frame axioms that explicitly define the non-effects
of actions. (For example, “painting an object will not affect its position, and moving
an object will not affect its color”—see SEP). A meta-assumption is adopted (“the
general rule-of thumb”) that “an action can be assumed not to change a given property
of a situation unless there is evidence to the contrary.” This default assumption is
therefore identified with the above law of inertia (in the common sense). However,
there appear difficulties of a logical character—the consequence operation of the
standard logical systems is monotone, i.e., the set of conclusions increases with the
addition of further formulae as premises. This is not the case when one takes into
account various results of actions. For example, the assumption "that moving an object
will not affect its color" is invalidated when one moves the object into a vat of dye,
or presses it against a newly painted wall, or puts it into a flame, and so on. The solution of
the frame problem in its logical formulation results in adopting various non-monotonic
reasoning formalisms such as circumscription (McCarthy 1986). We shall return to this
issue.
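
To give the flavour of such frame axioms, they can be written schematically in a situation-calculus style, with do(A, s) standing for the state resulting from performing the action A in the state s. The predicates and action names below are purely illustrative and are not drawn from the cited sources:

Position(x, p, s) → Position(x, p, do(paint(x, c), s)),
Colour(x, c, s) → Colour(x, c, do(move(x, p), s)).

The first axiom says that painting an object does not change its position, the second that moving it does not change its colour; as the example of the vat of dye shows, such axioms hold only by default and may be defeated by additional information.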
Later the term ‘frame problem’ received a much broader meaning in philosophy,
where, according to some approaches, it touches the essence of the problem of ratio-
nality. The contemporary literature differentiates the epistemological frame problem
from its computational counterpart. After the Stanford Encyclopedia of Philosophy
(the entry "Frame Problem") the first problem is formulated as follows: "How is
it possible for holistic, open-ended, context-sensitive relevance to be captured by a
set of propositional (…) representations used in classical AI?" Whereas its com-
putational counterpart is encapsulated in the form of the question: "How could an
inference process tractably be confined to just what is relevant, given that rele-
vance is holistic, open-ended, and context-sensitive?” In this short introduction to
the frame problem it is not even possible to telegraph many of the questions raised
by the researchers, such as the relevance problem and the threat of infinite regress.
One may maintain that the frame problem may be situated not only at the epistemic
level but also on the praxeological level. There, it is defined as the problem of isolating
the factors that guarantee efficient action, as well as one of describing those factors
that are irrelevant in the given situation (or state) when the action is taken, that is, those
that do not affect the process of performability of the action. Praxeology describes
humans’ engagement in purposeful actions, as opposed to reflexive behavior caused
by external factors, like sneezing or coughing. These external factors may be treated
as actions (or rather disturbances) of certain metasystems led by the forces of nature,
social groups, states, armies, etc.
In the formalism adopted in this book, where the basic semantic unit is that of an
elementary action system, the frame problem can also be articulated in at least two
ways that are parallel to the threads raised in the above discussion. In the epistemolog-
ical setting, the frame problem concerns the limitations of the epistemic abilities of
the agents of actions. Whereas each elementary or situational action system possesses
explicitly defined components as extensional, set-theoretic entities such as states or
atomic actions, only limited knowledge in this respect may be available to the agents
assigned to the system. This topic is more thoroughly discussed in the section on
ideal agents. In the praxeological setting the frame concerns executive rather than
epistemic limitations of the agents: they may have a full knowledge of the system
(therefore all conceivable scenarios of future events are available to them); but their
executive abilities are limited so that they may be unable to run the courses of action
in accordance with the strategies adopted by them. Then one may discern mixed
situations, being the resultant of epistemic and praxeological components. We know
from the preceding paragraph that knowledge is not reducible to the knowledge of
truth. Episteme (theoretical knowledge) is a more involved concept. One may argue,
however, that the epistemic setting of the frame and ramification problems is more
fundamental than the praxeological one. The reason is that the praxeological limita-
tions of the agents involved in various actions often stem from ignorance. The fact
that someone is not able to drive a car is conditioned by the potential driver’s lack of
knowledge—he simply did not learn the art of steering a car, and thus does not know
how to do it (despite the fact that he may possess a purely theoretical knowledge of
the construction of vehicles). One may maintain that all sorts of praxeological limita-
tions are secondary in relation to epistemic limitations. It seems, however, that there
have been merged two types of knowledge here: theoretical (episteme) with practical
(techne). Knowledge of efficient action is organized on two levels—a linguistic level
(a theoretical one), and a practical level, where knowledge is, among others, the skill
of using a tool, or a machine. Mutual dependences between the levels and types of
knowledge are complex and we are not going to analyze them in-depth here.
As an interlude, it is worth quoting here a short passage relating to the production
of Damascus steel. The fact that nowadays we produce Damascus steel is a derivative
of our knowledge concerning its production; after centuries this knowledge has suc-
cessfully been recreated (with approximation), both on the theoretical and practical
levels. The very production process itself has ceased to pose difficulty. Thus, in this
case, episteme, as solely theoretical knowledge harmonizes with techne, practical
knowledge. The production of Damascus steel is an interesting example of a com-
pound action system. Damascus steel was used in South Asian and Middle Eastern
swordmaking.6 European knights became acquainted with the power of Damascus-
steel swords during the Crusades. The swords made of this steel surpassed the white
steel manufactured at that time in Europe: the blades were reputed to be tough,
resistant to shattering, and capable of being honed to a sharp, resilient edge. (One
should add, however, that modern steel outperforms these swords.) The blades are
characterized by distinctive patterns of banding and mottling reminiscent of flowing
water. The production methods were meticulously concealed and never revealed. For
unknown reasons the production of the patterned swords gradually declined, ceasing
by around 1750, and the process was lost to metalsmiths. Several hypotheses have
ventured to explain this decline as, e.g., “the breakdown of trade routes to supply
the needed metals, the lack of trace impurities in the metals, the possible loss of
knowledge on the crafting techniques through secrecy and lack of transmission, or
a combination of all the above.” Undoubtedly a vital factor could also be the fact
that firearms became more and more popular, which resulted in a dramatic drop in
orders for cold steel. The original method of producing Damascus steel is not known.
For many years attempts were made to rediscover the methods in which Damascus
steel was produced and, in consequence, to start its production on a larger scale.
(Recreating Damascus steel is therefore a subfield of experimental archeology.)
Thus, the problem was, how—having necessary raw materials at one’s disposal,
such as ores and additions (e.g., carbon, glass)—to obtain a product identical in

6 These remarks are based on the entry 'Damascus steel' in Wikipedia. The citations are taken from this entry.
all structural and functional respects to that of ancient Damascus steel. Early efforts
proved futile. Because of differences in raw materials and manufacturing techniques,
modern attempts to duplicate the metal have not been entirely successful. The empiri-
cal data collected for years and historical records allowed, to some extent, researchers
to establish the origin of the raw materials used in the production of the steel. They
were special ores found in India, which contained key trace elements, like tungsten
or vanadium. The crucial problem, which required solving, was to determine the
structure of Damascus steel on a microscopic scale. The breakthrough came with
the discovery of the presence of cementite nanowires and carbon nanotubes in the
composition of Damascus steel by a team of researchers from Germany. It is now
thought that “the precipitation of carbon nanotubes most likely resulted from a spe-
cific process that may be difficult to replicate should the production technique or raw
materials used be significantly altered.” It is claimed that these nanostructures are a
result of the forging process.
From the perspective of the theory of action, the difficulty in reproducing Dam-
ascus steel consists in identification of the action system(s) pertinent to the original
technology. The earlier attempts erroneously identified various components of this
system. The actions undertaken resulted in steel which displayed different parame-
ters from those of the original product. From the historical perspective, the output
systems of actions, which were to recreate production of Damascus steel, were sub-
jected to a series of modifications through a change in the sets of states and atomic
actions, so that an action system could be worked out, in which steel with properties
that are approximate or identical with those of Damascus steel could be obtained as
a result.
Currently, Damascus steel with properties very close to those of the classical Damascus
steel is manufactured in a process which most likely resembles the original one fairly
closely. This is indicated by its simplicity and similar properties of products used.
Formal action systems M = (W, R, A) may be viewed as approximations of
the real state of affairs. The relation R mirrors the reality—the relation determines
(at least when one takes into account only physical limitations) what is possible
or what is not. The agent usually has a vague, approximate idea of the relation. A
more competent agent has a better image of R. A much better approximation is
achieved by top experts. For example, during a game of solitaire, it is an objective
fact that at some step the game cannot be continued further. But when a game can
indeed be prolonged, the poor player may stop it, maintaining that further steps are
unperformable. His conviction should not be confused with the objective characteristics of this
particular game. But we are reluctant to adopt a 'God's eye point of view' as an
epistemic or pragmatic criterion of performability of actions. We are inclined rather
to adopt the top experts’ perspective on this matter.
The choice of the set W and the relation R, and also the shape of the actions of A
depends much on the ‘depth’ of the description of the real situation. A transition from
one system M = (W, R, A) to a more adequate model M′ = (W′, R′, A′) results in
changing the set of states, the relation of direct transition, and the atomic actions. It
seems that in the description of this transition one can distinguish two components.
First, the states of the initial system M, i.e., the elements of W , may be treated as
sets of finite sequences of states of the system M′. This means that a one-to-one
function k from W to the power set of the set of finite sequences of elements of W′,
k : W → ℘((W′)*), is singled out. The set k(u) is called the reconstruction of
the state u ∈ W within the system M′. Thus, the whole set W is reconstructed in the
power set of the set (W′)* as the family of sets of finite sequences {k(u) : u ∈ W }.
It is natural to require that if u and w are different states of W , then the sequences
belonging to k(u) and k(w) do not contain a common element. Second, the atomic
actions of A and the transition relation R of M may also be reconstructed within
the system M′ by means of another map l. The transition from M to M′ replaces each
atomic action of A by a compound action, i.e., by a set of finite strings of
atomic actions of A′. Thus, each atomic action of M is replaced by a set of sequences
of atomic actions on the deeper level provided by M′. The simplified description of
actions the system M offers, formulated in the 'input–output' terms of its atomic
actions, gives way to a more refined description on the basis of the system M′. The
difficult problem of how to formally define such an interpretation of M in M′ should
come under further scrutiny; this is not tackled in this book.
The frame problem has already been signaled in this book by discriminating
elementary action systems from situational action systems. The former represent
‘hard’ praxeological aspects of action originating, e.g., from the laws of mechanics
or the engineering principles, while the latter represent ‘soft’ and ‘fluent’ ones. These
volatile aspects of action are encapsulated in the concept of the situational envelope
of an elementary action system. The situational envelope of an action system, though
relevant to workflow or game rules, does not set ‘hard’ action components aside.
What factors/entities constitute the situational envelope of an elementary action
system is often hard to foresee. In the case of the game of chess we include in them,
apart from a distribution of pieces on the chessboard, also the order of moves; that
is, whether the successive move is to be performed by the white or the black player.
One can say that we have come here to deal with a certain type of situational action
system determined by the general rules of the game of chess.
Let us imagine now that a concrete game of chess is being played within the
tournament, where two opponents are competing for the title of world champion,
held in a beautifully illuminated sports arena. The game is progressing smoothly in
compliance with the rules and White is to make his move. Let us mark this situation
with s. Suddenly, all the lights in the hall go out and it gets completely dark. A failure
has occurred in the nearest power station (perhaps an accidental failure, or maybe a
terrorist attack…). Despite the fact that the state on the chessboard and the rules of
the game make it officially possible to continue the game, it is automatically broken
off and White cannot make the move. Both the participants of the game and the
audience have found themselves in a new, different, situation which will be marked
with sω . Of course, sω is not included in the situational envelope of the game of chess
presented in Sect. 2.1.
The above (fictitious) case suggests including chance situations (a sudden black-
out, an earthquake, a terrorist attack, etc.) in the notion of a surrounding. They are
extreme situations. In consequence, knowledge relating to the source of the situation
and the relation Tr is here ex post knowledge; it is not fully articulated before the
setting of the system in motion. Within the compass of situational action theory
there are found—as we mentioned earlier—systems that are a priori 'foreseeable', in
which a transition from one situation to another falls completely under the control of
the agents included in the system. It is clear that if somebody plans a sports tournament
in Japan, he should make provision for the possibility of occurrence of an earthquake.
So as to minimize its influence on the course of the tournament, a variety of additional
protective measures are used such as the reinforcement of the structure of the stadium
and the number of medical services on duty during the event. The very tournament
itself, being a situational system, is described in terms of the categories presented
above, though without the need to assume that an unexpected chance incident will
‘upset’ it. No person with a sound sense of reality plans to conduct their life activities
under the assumption that life on Earth will be destroyed by a meteorite. Various
external sudden random incidents, as unpredictable or of low predictability, are not
composed into the notional apparatus presented here. Probabilistic systems sensu
stricto are an obvious exception. They are discussed in Chap. 1.
The above example illustrates yet another aspect of an action that may be pertinent
to the ramification problem, viz., the distinction between ignorance and uncertainty.
The ramification problem has therefore two sides. The first side concerns the igno-
rance and inability of agents to conduct purposeful actions in the intended way within
a definite action system operated by them. The second side concerns uncertainty and
it is linked to the presence of external forces influencing the action system. These
external forces are framed in the form of meta-action systems (nature, political or
military forces, big business, etc.). These metasystems manifest their action through
earthquakes, floods, cosmic disasters, wars, long or short term economic or demo-
graphic processes, and so on. The given action system is subordinated to various
metasystems; the first (along with its agents) is merely a pawn in the game run by
the other. One can try to learn the rules of the game (modern natural science has
achieved spectacular success in this domain); the area of knowledge of metasystems is
impressive. Metasystems are, however, often unpredictable and uncertainty remains.
It is from here that the significance of probabilistic action systems stems. Obviously,
we will not settle here the metaphysical question: What is the order of all things? Is
it a Stoic order (God’s reason and universal order), or an Epicurean one (chaos of
atoms, fortuity)?
Let M = (W, R, A) be an elementary action system. A map which is called
the deliberation operator and denoted by DM is associated with the system M. The
deliberation operator is a function from ℘ (W ) × A to ℘ (W ). Thus, to each subset
Φ ⊆ W and each atomic operation A ∈ A the operator DM assigns a subset of W .
To simplify matters, let us assume that the system is normal, that is, A ⊆ R for all
A ∈ A. Then we define:

DM (Φ, A) := {w ∈ W : (∃u ∈ Φ) A(u, w)},

for all Φ ⊆ W , A ∈ A. DM (Φ, A) is the set of all possible results of the action A
when performed in states belonging to Φ. DM describes the dynamics of M—the
consecutively performed string of atomic actions A0 , A1 , . . . , An which has been
initialized in a Φ-state u 0 , determines a path

u 0 , u 1 , . . . , u n , u n+1 , (6.2.1)

in the space of states (we shall restrict considerations to finite paths), where
u 0 A0 u 1 A1 . . . u n An u n+1 . (It may happen that there is no possibility of the prolon-
gation of (6.2.1), when for no atomic action A there is a state u such that u n+1 A u.)
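
For a finite normal system, DM (Φ, A) is simply the A-image of Φ and can be computed directly, as can the paths (6.2.1) determined by a string of atomic actions. The following sketch (Python, with an invented toy system; it is not taken from the text) also checks whether a subsequent action B is performable in the states reached, a question taken up in the next paragraph:

# A toy normal elementary action system M = (W, R, A); the data are illustrative.
W = {0, 1, 2, 3}
A_act = {"A": {(0, 1), (1, 2)},           # atomic action A
         "B": {(2, 3)}}                    # atomic action B
R = set().union(*A_act.values())           # in a normal system every A is included in R

def D(Phi, name):
    """Deliberation operator: all possible results of the action in Phi-states."""
    return {w for (u, w) in A_act[name] if u in Phi}

def dom(name):
    """Domain of an atomic action."""
    return {u for (u, w) in A_act[name]}

Phi = {0, 1}
after_A = D(Phi, "A")                      # {1, 2}
print(after_A)
# Is B performable after A?  Compare D(Phi, A) with Dom(B).
print(after_A & dom("B"))                  # nonempty: B is performable in some results
print(after_A <= dom("B"))                 # False: but not guaranteed in all of them

def paths(u, names):
    """All paths u0 A0 u1 A1 ... determined by the string of actions `names`."""
    if not names:
        return [[u]]
    first, rest = names[0], names[1:]
    result = []
    for (x, w) in A_act[first]:
        if x == u:
            result += [[u] + p for p in paths(w, rest)]
    return result

print(paths(0, ["A", "A", "B"]))           # [[0, 1, 2, 3]]
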
Looking at the first argument Φ of DM , one may say that DM assigns to the set
Φ a family of subsets of W , viz., {DM (Φ, A) : A ∈ A}, representing the possible
outcomes of all atomic actions when performed in Φ-states. The set Φ represents
epistemic limitations of the agents ‘operating’ the system M: they may not know
exactly the state of the system, but they know that this state is localized in Φ. It is
assumed that the agents know the description of atomic actions, that is, they know
which pairs (u, w) belong to a given atomic action A and which do not. Hence they
know that after performing an action A the system will pass to (an unspecified) state
belonging to DM (Φ, A). The problem is that the set DM (Φ, A) may be empty and
possibilities of performing A in Φ-states are then illusory. Yet another situation is
such that after performing the action A, the agents intend to perform another action,
say B. If the set DM (Φ, A) is disjoint with Dom(B), the domain of B, then there is no
possibility of such conduct—the action B is totally unperformable in any state
of DM (Φ, A). Agents know about this! Epistemically, a more interesting situation
is when Dom(B) has a nonempty intersection with DM (Φ, A) and Dom(B) is not
a superset of DM (Φ, A). After performing A, the system M passes to some state
in DM (Φ, A). But the knowledge as to whether the achieved state belongs to the
domain of B or not may be unavailable to the agents—they are at a loss. What can
they do?
In light of the above remarks, the essence of the above problem is contained in the
epistemic limitations of the agents operating a given action system. Even assuming
that the action system is managed by ideal agents, profoundly knowledgeable of all
the components of the system such as states, the transition relation, precise description
of atomic actions, and so on, certain situational aspects (including the epistemic ones)
can remain beyond their reach. The example of a game of chess proves instructive
here again. A novice can know the rules of the game perfectly; nonetheless, many
factors that influence winning the game can remain beyond his epistemic powers.
Although he knows the rules of the game of chess, his knowledge of strategies may be
limited. He may not be able to foresee, or simply remember, some moves, or the like.
There occurs here the phenomenon of informational explosion, when
the amount of information necessary to take the proper decision which guarantees
success increases exponentially. On the other hand, it is impossible to exclude actions,
often performed by other agents external to the given system, which disturb the
functioning of the system, such as forces of nature.
The deliberation operator restricts reasonings about possible events and scenarios
to the world encapsulated by the action system M. This operator does not take into
account various external action systems and forces that may influence the dynamics of
M—only the states and actions undertaken within the framework of M matter. In other
words, the deliberation operator represents a form of the closed-world assumption
in action theory.
On the side of mathematics, the definition of the elementary system M =
(W, R, A) can be enriched with so-called hidden parameters, which can be
made into components of the state of things. In other words, instead of the set W
we can consider the set of pairs {(u, •u ) : u ∈ W }, where •u marks a block of
primitively undefinable or unknown parameters which, in the course of action, may
turn out to be activated in the state u. Let M = (W, R, A) represent the system
of actions that describe the building of a skyscraper in Tokyo. (M is basically an
ordered situational action system, but we disregard situational components and the
order altogether.) Suddenly, an earthquake occurs yet the building being raised does
not break up; there is, however, slight damage done to it, which can be repaired. As
a result of the earthquake, that is, nature's acting on the state of the system, there
follows a transition from M to another system. What system? All the states that are
vital to the further construction of the building, including the one in which the earthquake
occurred, ought to be provided with an additional label in the place •u notifying that
there was an earthquake and that further construction will proceed in post-earthquake
conditions. (In the case of earlier states, being insignificant to the further construction,
the marker •u remains unfilled.)
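
A minimal way of modelling such hidden parameters, assuming nothing beyond the informal description above, is to pair every state with an initially empty record that an external metasystem (here, nature) may fill in; the state names and the label are invented for illustration:

# States enriched with a slot for hidden parameters (illustrative sketch only).
def enrich(states):
    return {u: frozenset() for u in states}        # the slot corresponding to •_u starts unfilled

def external_event(enriched, affected, label):
    """A metasystem (e.g. nature) acts: the label is attached to the affected states."""
    return {u: (labels | {label} if u in affected else labels)
            for u, labels in enriched.items()}

W = {"foundations", "frame_done", "roof_done"}
states = enrich(W)
# An earthquake occurs; only the states still relevant to further construction
# receive the label, earlier states keep an unfilled marker.
states = external_event(states, {"frame_done", "roof_done"}, "after_earthquake")
print(states["frame_done"])    # frozenset({'after_earthquake'})
print(states["foundations"])   # frozenset()
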
In light of the above comments, it appears that there is not one recipe for solving
the frame problem in the context of elementary or situational action systems. The
above-presented comments point merely to a few alternative formal frameworks of
the problem.

6.3 Action and Models for Non-monotonic Reasonings

6.3.1 Reasoning in Epistemic Contexts

We begin by recalling the well-known logical puzzle called three cards. The three
cards are red, blue, and green, respectively. Darek, Jarek, and Marek are fair players
(they do not lie or cheat), who have one card each. They only see their own card,
and so they are ignorant about the other agents' cards. (But they know that the cards
are red, blue, and green.) They want to know the colors of the cards of their partners.
(a) Darek asks Jarek the question: Do you have the blue card?
Who amongst them, right after Darek's question and before Jarek's answer to it,
already knows the distribution of the cards? The answer is: the person (either Jarek
or Marek) who does not have the blue card. Why? Darek, while asking the question,
announces at the same time that he does not have the blue card (this is the presupposi-
tion of his question). Therefore, the blue card is in the hands of Jarek or Marek. Have
a look then at Jarek and Marek. One of them (he who does not have the blue card)
rightly infers that the blue card is in the hands of his colleague. Since he knows the
color of his own (non-blue) card, and knowing that the card of his colleague is blue,
he at the same time knows the color of Darek’s card. Ergo: he knows the distribution
of the cards. Yet neither of the remaining two players knows the distribution.
(b) Jarek sincerely answers: I do not have the blue card.
Who amongst them, after hearing Jarek's answer, knows the distribution
of cards? The answer is: Darek and Jarek. Why? After hearing the answer, Darek
rightly concludes that the blue card is in Marek’s hands. Therefore, as Darek knows
both his own card and Marek’s card, Darek knows also the card of Jarek. Hence
Darek knows the distribution of cards.
Right after Darek’s question, Jarek knows that Darek does not have the blue card.
Of course, Jarek knows his card (his card is not blue). Hence, Jarek deduces that the
blue card is in Marek’ hands. Therefore, Jarek also knows the color of Darek’s card.
Hence Jarek also knows the distribution of cards.
Marek has the blue card. (After public announcements (a) and (b), the remaining
players also know that fact.) But Marek is unable to determine the particular color of
the cards of Darek and Jarek. Therefore, Marek still does not know the distribution
of cards.
One may generate alternate scenarios with different questions and answers
addressed to other players. But here we have a specific action system operated by three
agents: Darek, Jarek, and Marek. Each of the agents asks questions to other agents.
The addressee of the question performs another action, namely that of answering the
question.
The above logical exercise in public announcements can be seen from a wider
perspective. The task consists in recognition of distribution of three cards in the
hands of three players. The cards are marked by the letters b, g, r , respectively. Each
possible distribution is therefore represented by a sequence of the above letters of
length 3 (without repetitions). Let us agree that the first element of the sequence is
the card belonging to Darek, the second card—to Jarek, and the third one—to Marek.
There are altogether 3! = 6 distributions.
The cards have been dealt out. The players want to identify this distribution. It
is assumed that the situational envelope of the game contains three agents and an
external, neutral observer. Each agent asks another player a question. The addressee
responds giving an answer of type YES/NO only. More exactly, each question is
treated as a figure Φ?, where Φ is a subset of {b, g, r }. (Usually Φ is a one-element
set.) Φ? is read: Is your card of a color belonging to Φ? For example, {b, g}? is a
query: Is your card blue or green? Possible answers are identified with figures of the
form Φ!, where Φ is a subset of {b, g, r }. For instance, {g, r }! is the answer: I have
the green or the red card. (As classical logic is assumed, in this model the above
answer is equivalent to the sentence: I do not have the blue card.)
Both verbal actions, i.e., asking a question and answering it, result in a change
of epistemic states of the players (that is, tightening suitable disjunctions from state
descriptions by eliminating certain disjuncts). This follows from the fact that apart
from the above labeled epistemic actions (asking questions and answering them), the
players also perform other actions—they carry out reasoning according to the rules
of classical logic on Boolean combinations of the sentences Darek(c), Jarek(d),
Marek(e), for all permutations (c, d, e) of the set {b, g, r }. (E.g., the rule modus
tollendo ponens is widely applied.) Thus a detailed description of the dynamics of
the game would also require taking into account a succession of the rules of inference
applied by each of the players because these rules enable each of them to acquire better
knowledge of the distribution of the cards after performing the mentioned epistemic
actions (asking and/or answering). The above description of the game leads to a rather
complex situational action system. The process of gaining knowledge about cards
distribution is of cumulative character, that is, by performing the above epistemic
actions and drawing logical conclusions, the players get more detailed data about the
localization of the actual distribution in the space of possible states. The paradigm
of monotonic knowledge is not overruled here because the undertaken verbal and
logical actions do not change the actual distribution of the cards.
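
The epistemic effect of the two announcements can also be checked mechanically. In the sketch below (illustrative Python, not part of the original analysis) the possible worlds are the six permutations of b, g, r; each announcement eliminates worlds, and a player knows the distribution as soon as the remaining worlds compatible with his own card reduce to the actual one:

from itertools import permutations

PLAYERS = ("Darek", "Jarek", "Marek")
WORLDS = set(permutations("bgr"))            # (Darek, Jarek, Marek) card assignments

def knows_distribution(player, actual, worlds):
    """A player knows the distribution if every remaining world that agrees with
    his own card is the actual one."""
    i = PLAYERS.index(player)
    candidates = {w for w in worlds if w[i] == actual[i]}
    return candidates == {actual}

actual = ("g", "r", "b")                     # the first of the two cases discussed above

# (a) Darek's question presupposes that Darek does not hold the blue card.
worlds = {w for w in WORLDS if w[0] != "b"}
print([p for p in PLAYERS if knows_distribution(p, actual, worlds)])   # ['Jarek']

# (b) Jarek truthfully announces that he does not hold the blue card.
worlds = {w for w in worlds if w[1] != "b"}
print([p for p in PLAYERS if knows_distribution(p, actual, worlds)])   # ['Darek', 'Jarek']
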
Each state of the above action system is represented by three components: the
individual knowledge of possible distributions of cards by Darek, Jarek, and Marek,
respectively. We shall assume a God's eye perspective in describing the situations that
arise. God knows the actual distribution of cards; he/she also knows the (fluent)
knowledge of the players concerning the distribution of the cards. From the above
analysis it follows that the actual distribution is described either by the sentence
Darek(g) ∧ Jarek(r ) ∧ Marek(b) or by Darek(r ) ∧ Jarek(g) ∧ Marek(b). Let us
consider the first case:

Darek(g) ∧ Jarek(r ) ∧ Marek(b) (6.3.1)

The initial knowledge available to each of the agents is described by the sentences:

Darek(A1) : Darek(g) ∧ [(Jarek(r) ∧ Marek(b)) ∨ (Jarek(b) ∧ Marek(r))]
Jarek(A1) : Jarek(r) ∧ [(Darek(g) ∧ Marek(b)) ∨ (Darek(b) ∧ Marek(g))]
Marek(A1) : Marek(b) ∧ [(Darek(g) ∧ Jarek(r)) ∨ (Darek(r) ∧ Jarek(g))].

Immediately after Darek asks the above question and after the agents perform some
logical derivations, the epistemic status of the agents turns into:

Darek(A2) : Darek(g) ∧ [(Jarek(r) ∧ Marek(b)) ∨ (Jarek(b) ∧ Marek(r))]
Jarek(A2) : Jarek(r) ∧ Darek(g) ∧ Marek(b)
Marek(A2) : Marek(b) ∧ [(Darek(g) ∧ Jarek(r)) ∨ (Darek(r) ∧ Jarek(g))].

(Note that under option (6.3.1), Jarek already knows the distribution of cards.) After
Jarek answers the question, the above state turns into:

Darek(A3) : Darek(g) ∧ Jarek(r) ∧ Marek(b)
Jarek(A3) : Jarek(r) ∧ Darek(g) ∧ Marek(b)
Marek(A3) : Marek(b) ∧ [(Darek(g) ∧ Jarek(r)) ∨ (Darek(r) ∧ Jarek(g))].

(Here Darek also knows the distribution of cards.)


Abstracting from the logical actions performed by the agents, we see that the
above actions give rise to the transitions between epistemic states:

Darek(A1), Jarek(A1), Marek(A1) →R Darek(A2), Jarek(A2), Marek(A2)
→R Darek(A3), Jarek(A3), Marek(A3).

Analogously, if the actual distribution of cards is

Darek(r ) ∧ Jarek(g) ∧ Marek(b), (6.3.2)

the following epistemic state is initial:

Darek(B1) : Darek(r) ∧ [(Jarek(g) ∧ Marek(b)) ∨ (Jarek(b) ∧ Marek(g))]
Jarek(B1) : Jarek(g) ∧ [(Darek(r) ∧ Marek(b)) ∨ (Darek(b) ∧ Marek(r))]
Marek(B1) : Marek(b) ∧ [(Darek(g) ∧ Jarek(r)) ∨ (Darek(r) ∧ Jarek(g))].

Immediately after Darek asks the question, the above state turns into:

Darek(B2) : Darek(r) ∧ [(Jarek(g) ∧ Marek(b)) ∨ (Jarek(b) ∧ Marek(g))]
Jarek(B2) : Jarek(g) ∧ Darek(r) ∧ Marek(b)
Marek(B2) : Marek(b) ∧ [(Darek(g) ∧ Jarek(r)) ∨ (Darek(r) ∧ Jarek(g))].

After Jarek’s answer, the above state changes to:

Darek(B3) : Darek(r) ∧ Jarek(g) ∧ Marek(b)
Jarek(B3) : Jarek(g) ∧ Darek(r) ∧ Marek(b)
Marek(B3) : Marek(b) ∧ [(Darek(g) ∧ Jarek(r)) ∨ (Darek(r) ∧ Jarek(g))].

The above transitions between states are depicted as:

Darek(B1), Jarek(B1), Marek(B1) →R Darek(B2), Jarek(B2), Marek(B2)
→R Darek(B3), Jarek(B3), Marek(B3).

The above example merely plays an illustrative role. It shows that in action systems
operated by many agents one may speak of global states of the system treated as
sequences of the “private” states of particular agents. (In the above case these are
sequences of length 3, because there are only three agents: Darek, Jarek, and Marek.)
Each such string of private epistemic states makes up a global state. Knowledge of the
global system is restricted for each agent. Thus, operating on global states requires
adopting here an external perspective on the system by assuming the existence of
an omniscient superagent who always knows the epistemic states of the particular
agents operating the system (God’s eye perspective). According to this standpoint, an
epistemic model with n interacting individuals (agents) a1 , . . . , an is set-theoretically
represented as an action system whose global states are sequences w = (w1 , . . . , wn ),
where the ith component wi encodes the ‘private’ knowledge of the agent ai about
the world for i = 1, . . . , n. (The world here may be a distribution of cards.) The
sequence w defines the knowledge of the superagent about the knowledge of the
agents about the world. (This omniscient agent knows the epistemic states of each
agent ai ). Let W be the set of global states; W is thus a set of sequences (w1 , . . . , wn )
of length n.
A simple model can be described as follows. Let A be a finite Boolean algebra.
Each filter of A is principal and every element x of A is a join of atoms
of A. W is the set of all n-tuples (w1 , . . . , wn ) of proper filters of A. It is assumed
that the filter generated by the union w1 ∪ · · · ∪ wn is proper; this corresponds to
the fact that the union of the partial knowledge available to all agents a1 , . . . , an
is consistent. The filter generated by the union w1 ∪ · · · ∪ wn is the knowledge of
the superagent in (w1 , . . . , wn ). If it is an ultrafilter, we call it a world. If the filter
generated by w1 ∪ · · · ∪ wn is an ultrafilter, the knowledge available to the superagent
is complete. In other words, though the private knowledge wi of each agent ai need
not be complete, the joint knowledge of the agents is complete (and the superagent
knows that). Verbal actions performed by particular agents (queries and responses)
are treated as atomic actions on the set of states. The agents want to describe the
filter w generated by w1 ∪ · · · ∪ wn . In other words, each of them wants to achieve
the private state w. (Therefore the n-tuple (w, . . . , w) is the final global state.) Each
agent ai may publicly ask the agent a j a question of the form Φ?, where Φ is an
element of the algebra. Φ? reads as: Does Φ belong to your knowledge? In other
words, in the state wi the agent ai asks a j whether Φ is an element of the filter w j .
(As A is finite, each Φ is the join of a finite number of atoms of A.) We shall mark this
question as a quadruple (ai , Φ, a j , ?), where i ≠ j. The agent a j responds to this
question giving one of two answers: Yes, No. We mark these actions as (a j , Yes),
(a j , No). Suppose w j is a private state of a j . The agent a j answers Yes in w j if Φ
belongs to the filter w j , i.e., the set of atoms that make up Φ contains the atoms whose
join is the generator of the principal filter w j . a j answers No if Φ is not in w j , i.e.,
there is an atom in the generating element of w j which is disjoint with Φ.
In the model we consider there is no need to revise the earlier knowledge gained
by the agents—they simply enlarge their filters whenever possible to reach eventually
the filter generated by the union w1 ∪· · ·∪wn . (The possibility of knowledge contrac-
tion is excluded. This enables us to avoid well-known difficulties in the mathematical
modeling of the dynamics of knowledge.) There is an order concerning asking ques-
tions. We may assume for simplicity that the last asked agent, after giving an answer,
has the right to ask the next question to an agent selected according to a certain meta-
rule adopted by all players. (The model we describe is therefore a simple situational
action system.)
We make the following assumptions, which are known to the agents:
(a) The union w1 ∪ · · · ∪ wn generates an ultrafilter w of A.
(b) Each question (ai , Φ, a j , ?), when publicly asked by ai , assumes that Φ is not
an element of the filter wi .
The goal of the game:
(c) Each of the players wants to achieve w as his final private state.
(c) is adopted by the agents.
If ai asks (ai , Φ, a j , ?), then according to (b), an atom of A that is included in the
generator of the filter wi is not included in Φ and all agents know that after the question
has been asked. The action (ai , Φ, a j , ?) transforms the state (w1 , . . . , wn ) into a
new global state (v1 , . . . , vn ). (The picture is more complicated here, because after
hearing (ai , Φ, a j , ?) each agent puts the rules of logic into action. It is these rules that enable
them to determine particular components of (v1 , . . . , vn ).) Here vi coincides with wi
(the agent ai does not update his private knowledge wi when asking (ai , Φ, a j , ?).)
But other agents ak may do that by enlarging the filter wk . As mentioned, hearing
(ai , Φ, a j , ?), they rightly deduce that some of the atoms that make up Φ are not
included in the generator of wi . But it is often impossible for each of them to infer
which particular atoms are relevant here. But if ak can do that and he can identify the
atoms of Φ which are not in the generator of wi , he enlarges his filter wk by deleting
those atoms in the generator of wk he can identify as not belonging to the set of
atoms of Φ. If in the state w j the agent a j answers Yes to (ai , Φ, a j , ?), i.e., Φ is an
element of the filter w j , each of the agents deduces that all atoms in the generator of
w j are atoms of Φ. Then the agent ak (k ≠ j) updates his knowledge by enlarging
the filter wk as follows: ak deletes from the generator of wk the atoms which are
in Φ and not in the generator of w j . If in the state w j the agent a j answers No to
(ai , Φ, a j , ?), i.e., Φ is not an element of the filter w j , each of the agents deduces that
there are atoms in Φ that are not in the generator of w j . Each agent ak updates his
knowledge by deleting from the generator of wk these atoms (provided he can identify
them).
How to achieve goal (c)? The agents may apply various strategies. For instance
they may at the outset ask questions of the form (ai , Φ, a j , ?), where Φ is a coatom of
A. Since w1 ∪ · · · ∪ wn generates an ultrafilter w of A, there exists exactly one coatom
Φ0 which does not belong to w and hence belongs to none of the filters w1 , . . . , wn . (w is generated by the
atom being the complement of Φ0 .) They may attempt to identify Φ0 . Here much
depends on the organization of the work. (This belongs to the situational envelope of
the system.) For example, if for every coatom Φ and any i, j (i ≠ j), the question
(ai , Φ, a j , ?) can be eventually answered, the identification problem is solved.
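
Because every filter of the finite algebra A is principal, a private state wi can be represented by the set of atoms lying below its generator, and every element Φ by the set of atoms below it; enlarging a filter then amounts to deleting atoms from its generator. The sketch below (illustrative Python; it implements only the simplest sound fragment of the rules above, namely the deletions licensed by Yes answers to coatom questions, and all names and data are invented) shows how the coatom strategy drives every private state towards the ultrafilter w:

from itertools import product

# Atoms of a finite Boolean algebra; the true world is the atom generating
# the ultrafilter obtained from the union of the private filters.
ATOMS = {"a1", "a2", "a3", "a4"}
TRUE_ATOM = "a1"

# A principal filter is represented by the set of atoms below its generator.
# Every generator contains the true atom (the private filters lie inside w).
gen = {"Agent1": {"a1", "a2"},
       "Agent2": {"a1", "a3"},
       "Agent3": {"a1", "a4"}}      # intersection of the generators = {"a1"}

def in_filter(generator, phi_atoms):
    """Phi belongs to the principal filter iff every atom of the generator lies below Phi."""
    return generator <= phi_atoms

# Coatom strategy: agent i asks agent j about the coatom ATOMS - {alpha},
# but only if alpha is still possible for i (assumption (b)).
changed = True
while changed:
    changed = False
    for alpha, (i, j) in product(sorted(ATOMS), product(gen, gen)):
        if i == j or alpha not in gen[i]:
            continue
        coatom = ATOMS - {alpha}
        if in_filter(gen[j], coatom):          # j answers Yes: alpha is excluded by j
            for k in gen:                      # so every agent may safely delete alpha
                if alpha in gen[k]:
                    gen[k] = gen[k] - {alpha}
                    changed = True

print(gen)                                         # each generator shrinks to {'a1'}
print(all(g == {TRUE_ATOM} for g in gen.values()))  # True: goal (c) is reached
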
The above remarks merely provide a sample of the problems epistemic logic raises.
But, more importantly, these remarks point out that epistemic logic is tractable by
the formal apparatus presented in this book.
The problem of building a formal apparatus to represent knowledge undergo-
ing changes in time is discussed in the literature. The idea is to merge together
epistemic logical systems applied in the description of knowledge and temporal
logical systems. Basic systems of epistemic logic are split into two groups: the
first group is concerned with a singular cognitive subject (an agent) and the second
group takes into account the knowledge of many cooperating agents. But the most
challenging problem concerns the description of the dynamics of knowledge when
it is necessary to 'temporalize' the systems in question. In this context some
systems of temporal-epistemic logic created by the use of the fusion method or the
Finger-Gabbay method (1992) of temporalization of logical systems as well as alter-
nating time temporal epistemic logic ATEL (van der Hoek and Wooldridge 2003;
Herzig and Troquard 2006) should be mentioned.
The classical approach to common knowledge, originating from epistemology and
epistemic logic, underlines the significant role of consensus among the members of
a group of agents (the latter forming together a collective agent). Reaching common
knowledge facilitates the process of reasoning in the group—the members of the
group draw common conclusions from the universally known and accepted premises.
Although this approach is useful in modeling various aspects of group conduct as
in modeling the process of forming norms and conventions or building models of
reasoning of other agents, the cost of reaching a consensus may turn out to be too high.
In recent advancements concerning group knowledge, the role of group consensus is
not emphasized and emphasis is placed on the following epistemic aspects of group
cooperation:
1. Instead of the principle ‘what everybody knows’, collective knowledge is aimed
at providing a synthetic body of information abstracted from the information
provided by particular group members. This synthetic information may concern
selected aspects (or fragments) of knowledge and is utilized in various forms of
collective action.
2. The group conclusions are not treated as private conclusions accepted by partic-
ular members of the group. The members of the group comply with the group
conclusions in their actions, accepting at the same time the (meta) principle of
precedence of the group knowledge over individual knowledge.
3. Pointing out non-dogmatic procedures of extracting collective knowledge from
the individual knowledge.
4. Tolerance toward inconsistencies appearing on various levels in the knowledge of
individual agents forming the group. This results in acceptance of paraconsistent
knowledge bases.
5. The existence of information gaps, where some aspects of knowledge are not
fully available in various applications. This results in acceptance of various forms
of non-monotonic reasoning.
A switch from the perspective based on reasonings carried out in multi-modal
systems of high complexity to reasonings based on queries directed to paraconsis-
tent knowledge bases has been proposed by Dunin-Kęplicz and Szałas (2012). Their
approach has led to a new formalization of complex belief structures. They introduced
the notion of an epistemic profile which is thought of as a tool that fills a gap between
the epistemic logic approach and various practical implementations aiming at trans-
forming the initial data into the final ones. 4QL, a four-valued language of queries,
built by Małuszyński and Szałas (2011a, b) lays foundations for creating workable
paraconsistent knowledge bases. 4QL assures practical computability, being thus an
underpinning enabling one to implement belief structures and epistemic profiles.

6.3.2 Frames for Non-monotonic Reasonings

When do non-monotonic reasonings come into sight then? The term ‘non-monotonic
logic’ seems to be misleading. The term ‘logic’ is reserved for a consequence opera-
tion which is structural (that is, closed under endomorphisms of the propositional lan-
guage). A better term is ‘non-monotonic reasoning’. A reasoning is non-monotonic
if some conclusions can be invalidated by adding more knowledge. Non-monotonic
reasonings need not be structural in comparison with inference rules of logic. Non-
monotonic forms of reasoning should be treated as pragmatic rules of acceptance of
sentences (or propositions) in which the rules of inference of classical logic are fully
utilized. Therefore, the formal constructs which are defined in the theory of non-
monotonic reasonings may admit supraclassical modes of reasoning. (As is well
known, due to the maximality of classical logic, these non-monotonic formal non-
structural constructs extend the consequence operation of classical logic.) The clue
is that acquired knowledge (in sharp contrast to mathematical theories) may (in part)
be suspended in light of new evidence; the latter may cancel the validity of some
premises previously employed in the reasoning carried out by the agents. There-
fore, reasoning based on the gathered and updated knowledge may be subject to the
process of revision, which in turn may invalidate earlier conclusions. This process
is not captured by the (monotone) consequence operation of classical logic (or any
other logic). Reasoning based on definite clauses with negation as failure is non-
monotonic. Non-monotonic reasoning is useful for representing defaults. A default
is a rule that can be used unless it is overridden by an exception. The phenomenon
of non-monotonicity explicitly appears especially in these situational contexts when
one is faced with fast information flow and information processing. The problem of
description of the dynamics of the logical structure of knowledge about future events is
one of the most difficult and challenging problems that philosophical logic faces. As
is known, sentences and utterances about the future are often logically indeterminate, that
is, there is no way of assigning a logical value of truth or falsity to them at the moment
they are uttered. Certain knowledge is the knowledge ex post. The knowledge about
the future is entrenched in various modalities: it is possible or it is probable that
an event (a situation) will occur. The grammars of many natural languages have a
complex system of tenses, and of future tenses in particular. Usually, sentences about
the future express an intention or ability of the agent of doing (or refraining from
doing) an action (e.g., I will not go to school tomorrow) or they express routine or
repeatable events as e.g., A new issue of the New York Times will appear tomorrow. In
the first case the situation is more complex because the future is not under the control
of agents—just after declaring that I will go to the cinema tomorrow it may happen
that I will be involved in a car accident, the event that will annihilate my plans for
tomorrow. The description of the future evolution of knowledge may contain, apart
from the time parameter, epistemic operators, as in the utterance Next month I will know
more about relativity theory. The situation is more involved in the case of a dialogue
about the future conducted by many persons.
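
Returning to the notion of a default mentioned above, its defeasible character can be illustrated by a minimal sketch (Python; the rules and the constant tweety are invented for illustration and are not tied to any particular non-monotonic formalism): a conclusion drawn by default disappears when the set of premises grows, which is exactly the failure of monotonicity.

def consequences(facts):
    """Closure of the facts under one strict rule and one default rule."""
    concl = set(facts)
    # Strict rule: penguins are birds.
    if "penguin(tweety)" in concl:
        concl.add("bird(tweety)")
    # Default rule: birds fly, unless this is overridden by an exception.
    if "bird(tweety)" in concl and "penguin(tweety)" not in concl:
        concl.add("flies(tweety)")
    return concl

print(consequences({"bird(tweety)"}))
# {'bird(tweety)', 'flies(tweety)'}
print(consequences({"bird(tweety)", "penguin(tweety)"}))
# adding knowledge removes a conclusion: 'flies(tweety)' is no longer derived
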
The theory of non-monotonic reasoning has its roots in a critical appraisal of
the adequacy of ‘traditional’ logic in practical life and in AI. The systems of tradi-
tional logic are founded (explicitly or implicitly) on the notion of Tarskian conse-
quence relation or on the formalisms of natural deduction. Both these formalisms
are monotone in the well-known sense. Minsky (1974) maintains that monotonicity
is a source of problems. He writes:
In any logistic system, all the axioms are necessarily ‘permissive’—they all help to permit
new inferences to be drawn. Each added axiom means more theorems, none can disappear.
There simply is no direct way to add information to tell such a system about kinds of
conclusions that should not be drawn! To put it simply: if we adopt enough axioms to deduce
what we need, we deduce far too many things.

Minsky is also critical of the requirement of consistency, the property being at the
core of mathematical logic.
As Alexander Bochmann writes in Logic in Nonmonotonic Reasoning (available
on the Internet):
On the most general level, nonmonotonic reasoning is ultimately related to the traditional
philosophical problems of natural laws and ceteris paribus laws. A more salient feature
of these notions is that they resist precise logical definitions, but involve a description of
‘normal’ or ‘typical’ cases. As a consequence, reasoning with such concepts is inevitably
defeasible, that is they may fail to ‘preserve truth’ under all circumstances, which has always
been considered a standard of logical reasoning.

On a more concrete level, the original motivation that resulted in the rise of
the theory of non-monotonic reasoning was the inadequacy of first-order logic in
solving the problems besetting Artificial Intelligence, especially those concerning
the representation of knowledge. We will mention three such problems (they are not
strictly separated): (1) the frame problem—a fluent (time-dependent fact) does not
change unless some action (it may be the action of nature) modifies it; (2) the decision-
making problem—some actions are planned in the absence of complete information
(some unforeseen obstacles may emerge while performing a string of actions); and (3) the
usability problem and usability engineering, the latter being a collection of techniques
to support effective management of available resources, especially of databases, while
performing actions.
The theory of non-monotonic reasoning is intended to be close to commonsense,
practical reasoning, the latter being based on what typically or normally holds.
According to this theory, agents make plans and act without a complete picture of the world in which they live; they do not know the details of states of affairs, and they make assumptions and decisions relying on their own experience and
habits. There are many general situations when one applies non-monotonic reason-
ing: it may be a communication convention in which one makes default assumptions
in the absence of explicit information or a representation of action plans when, e.g.,
the teacher declares that the math exam will be given on Tuesday unless another
decision is explicitly made, etc. The resulting formal structures conventionally
called non-monotonic logics encompass circumscriptions (McCarthy 1980), default
logic (Reiter 1980), autoepistemic logic (Moore 1985), cumulative consequences
(Makinson 1989) and many others. (It should be underlined that the theory of
non-monotonic reasoning itself is an ‘ordinary’ multifaceted mathematical theory,
developed by means of the tools of classical, but not necessarily first-order logic. For
instance, there are strong links between circumscriptions and second-order logic.)
Dov Gabbay (1985) was the first to put forward investigations into non-monotonic reasoning from a general inferential perspective. The point of departure is
the classical notion of a consequence operation (relation) satisfying the well-known
conditions of reflexivity, cut, and cumulation and operating in an arbitrary sentential
language. (Such a language need not contain only classical connectives.) By way
of relaxation or modification of the above conditions one arrives at an appropriate
inferential unit that may constitute a sound basis of non-monotonic reasonings.
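
For orientation, these conditions can be spelled out in one standard formulation (added here only as a reminder; cf. Makinson 2005 and Kraus et al. 1990): for an inference operation C from ℘(S) to ℘(S),

reflexivity (inclusion): X ⊆ C(X);
cut (cumulative transitivity): if X ⊆ Y ⊆ C(X), then C(Y) ⊆ C(X);
cumulation (cautious monotony): if X ⊆ Y ⊆ C(X), then C(X) ⊆ C(Y).

Full monotonicity, X ⊆ Y implies C(X) ⊆ C(Y), is precisely the condition that non-monotonic inference operations are allowed to violate.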
Some researchers point out that intuitions associated with consequence relations
may fail when one tries to transfer them onto non-monotonic inferences. This espe-
cially concerns the cut rule which is validated by every consequence relation. In
some cases involving non-monotonic arguments this rule does not work. Consider
the sentences:
P: It is raining,
Q: I take an umbrella,
R: I will not be wet.
I have the habit that I take an umbrella whenever it rains. Thus I accept P |∼ Q. If it is raining and I take an umbrella, then I will not be wet. Consequently P, Q |∼ R. But the cut rule then yields the paradoxical conclusion that P |∼ R, that is, if it is raining then I will not be wet. To explain this ‘paradox’, it suffices to observe that in the inference P |∼ Q the premise P is certain—if it is raining, it is difficult to deny this fact. But the conclusion Q is not so certain—it is a consequence of my habit. It may happen that I was absent-minded and forgot to take an umbrella. In this example the paradox results from the fact that the conclusion is less certain than the premises.
The above example points out some difficulties arising when one attempts to
uniformly model non-monotonic reasoning, especially in an axiomatic framework,
by way of weakening the well-known conditions imposed on the Tarskian notion of a
consequence relation. An axiomatic theory of non-monotonic consequence relations
patterned upon some finitistic ideas going back to Gentzen was suggested by Gabbay
(1985). An approach patterned upon Tarski’s theory of consequence operation was
examined by Makinson (1989).
There is a vast literature concerning non-monotonic consequence relations, their
inferential character and semantics. We mention here the classical papers by Gabbay
(1985), Kraus et al. (1990), Makinson (1989, 2005) and Shoham (1987). Before
passing to more formal aspects of non-monotonic reasoning, let us recall here some
locutions which will lead us to the proper topic:
(a) It is the case that P, therefore one usually accepts Q.
(b) It is the case that P, therefore Q in normal circumstances.
(c) We know that P, therefore we should accept Q.
Omitting probabilistic reasonings (making judgements probable), non-monotonic
forms of reasoning will be treated as those that are based on certain collections of
formal rules of conduct. These rules may be of epistemic character (they are then
pragmatic rules of acceptance of sentences or utterances in the light of available
knowledge) or praxeological (as rules of acceptance on the basis of mastered skills).
Praxeological rules are not purely verbal and they may refer to some nonverbal actions
performed by agents. The rules of conduct may tolerate the principles of logic, the
latter being identified with an appropriate finitary and structural consequence relation;
they are not, however, identical with rules of inference of, say, classical logic. In
other words, the bond holding between the premises and the conclusion of a rule of
conduct does not yield that the conclusion logically follows from the premises. But
it may happen that this bond is stronger than logical consequence. In this case one
obtains supraclassical non-monotonic inferences. Moreover, rules of conduct are not
deontologically loaded—it is safely assumed that all the actions the rule involves are
permitted in the circumstances indicated by the rule.
The main aim of this paragraph is to present a new approach to non-monotonic
forms of reasoning which links them with the theory of action. We shall first outline
a semantic scheme of defining non-monotonic reasonings in terms of frames. Each
frame F is a set of states W endowed with a family R of relations of a certain kind.
If S is a propositional language, say, the language of classical propositional logic
equipped with some additional connectives, then one may define valuations of S in
a frame. In turn, frames and valuations determine in a natural way consequence-like
operations on S. The latter do not generally exhibit all properties of consequence
operations such as monotonicity. These operations thus exemplify non-monotonic
patterns of reasoning. The definition we give is general and the class of resulting
structures encompasses preferential model structures investigated by Makinson. It
also encompasses supraclassical models of reasoning. Then, in the second step, we
pass to elementary action systems. The crucial notion we investigate is that of a
rule of conduct associated with a given action system M. Each rule of conduct (also called a pragmatic rule) is a rendition of certain infallible procedures inherently linked with M. It may be, for example, a complex system of foolproof action plans that guarantees the manufacturing of cars in a network of cooperating plants. In a simplified version, pragmatic rules are labeled trees defined in a certain way. In other words, pragmatic rules are conceived of as systems of actions or conduct that the agents are obliged to follow in the circumstances pertaining to the rule. These circumstances are determined by
the propositions being the labels attached to vertices and the actions assigned to the
edges of the tree.
Definition 6.3.1 A frame is any pair

F = (W, R),

where
(1) W is a nonempty set,
(2) R is a binary relation between families of subsets of W and subsets of W; that is,

R ⊆ ℘(℘(W)) × ℘(W).

Thus, if (X, Φ) ∈ R, then X is a family of subsets of W and Φ is a subset of W. We write R(X, Φ) whenever (X, Φ) ∈ R. As is customary, the elements of W are called states, for which we will use the letters u, w, u′, w′, etc. Sets of states are called propositions.
A frame F is finitary if R obeys the condition: for any X ⊆ ℘(W) and any Φ ⊆ W, if R(X, Φ), then there is a finite subfamily X_f ⊆ X such that R(X_f, Φ). A frame F is reflexive if for any X ⊆ ℘(W), R(X, Φ) holds for all Φ ∈ X. □

Example Given a nonempty set W, let R be defined as follows: for X ⊆ ℘(W) and Φ ⊆ W,

R(X, Φ) ⇔df ⋂X ⊆ Φ.

Then (W, R) is a frame. □

Let S be a sentential language and (W, R) a frame. Any map V : S → ℘ (W ) is
called a valuation. If V is a valuation, then the triple (W, R, V ) is called a model (for
S). If S is the language of classical propositional logic, each valuation is extended in
a one-to-one manner to a homomorphism of S into the Boolean algebra of subsets
of W . If S contains some other connectives as epistemic or dynamic ones, each
valuation is extended to a homomorphism of S using some additional resources of F
as e.g., various accessibility relations on W . But these relations do not belong to R.

Definition 6.3.2 Let F = (W, R) be a frame. F|= is the operation from ℘(S) to ℘(S) defined as follows:

α ∈ F|=(X) ⇔df for every valuation V of S in F, it is the case that R({V(β) : β ∈ X}, V(α)).    (6.3.3) □

We shall first give a bunch of simple observations concerning F|=. The proofs are easy and are omitted.

Lemma 6.3.3 If F = (W, R) is reflexive, then the operation F|= satisfies inclusion; that is, X ⊆ F|=(X) for all X ⊆ S. □

Lemma 6.3.4 Suppose F = (W, R) satisfies the condition: for all X, Y ⊆ ℘(W),

if X ⊆ Y and R(X, Ψ) for all Ψ ∈ Y, then for all Φ ⊆ W, R(Y, Φ) implies R(X, Φ).

Then F|= is cumulative transitive, i.e., X ⊆ Y ⊆ F|=(X) implies F|=(Y) ⊆ F|=(X), for all X, Y ⊆ S. □
Corollary 6.3.5 If F satisfies the assumptions of Lemmas 6.3.3 and 6.3.4, then F|= is idempotent, i.e., F|=(F|=(X)) = F|=(X). □

Corollary 6.3.6 If F satisfies the condition: for all X, Y ⊆ ℘(W) and any Φ ⊆ W,

X ⊆ Y and R(X, Φ) imply R(Y, Φ),

then F|= is monotone, i.e., F|=(X) ⊆ F|=(Y) whenever X ⊆ Y. □

The frame F from the above example satisfies the hypothesis of Corollary 6.3.6.
F thus yields a monotone consequence operation.
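
To see these conditions at work on a concrete, finite structure, the following Python sketch may be helpful (it is an illustration only, not part of the formal development; the three-element set of states and all function names are chosen arbitrarily). It encodes the relation of the above example and verifies, by brute force, the reflexivity hypothesis of Lemma 6.3.3 and the monotonicity hypothesis of Corollary 6.3.6.

from itertools import combinations

W = frozenset({0, 1, 2})                 # a small set of states

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

PROPS = subsets(W)                       # the propositions: all subsets of W
FAMILIES = subsets(PROPS)                # all families of propositions

def big_cap(X):
    """The intersection of a family X of propositions (= W when X is empty)."""
    result = set(W)
    for phi in X:
        result &= phi
    return frozenset(result)

def R(X, Phi):
    """The relation of the example: R(X, Phi) iff the intersection of X is included in Phi."""
    return big_cap(X) <= Phi

# Hypothesis of Lemma 6.3.3 (reflexivity): R(X, Phi) holds for every Phi in X.
assert all(R(X, Phi) for X in FAMILIES for Phi in X)

# Hypothesis of Corollary 6.3.6 (monotonicity): X included in Y and R(X, Phi) imply R(Y, Phi).
assert all(R(Y, Phi)
           for X in FAMILIES for Y in FAMILIES for Phi in PROPS
           if X <= Y and R(X, Phi))

print("both hypotheses verified on this finite frame")

Both checks succeed, which matches the remark above that this frame yields a monotone consequence operation.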

Note The hypothesis of Lemma 6.3.4 can be simplified as follows:

Lemma 6.3.7 Suppose F = (W, R) satisfies the condition: for all X, Y ⊆ ℘(W) and Ψ ⊆ W,

if R(Y, Φ) for all Φ ∈ X, and R(X, Ψ), then R(Y, Ψ).

Then F|= satisfies: X ⊆ F|=(Y) implies F|=(X) ⊆ F|=(Y), for all X, Y ⊆ S. □

Let W be a nonempty set. Let < be a binary relation on W which is called a preference relation. We define the relation R< ⊆ ℘(℘(W)) × ℘(W) as follows: for X ⊆ ℘(W) and Φ ⊆ W,

R<(X, Φ) ⇔df (∀w ∈ W)[(w ∈ ⋂X & ¬(∃w′)(w′ < w & w′ ∈ ⋂X)) ⇒ w ∈ Φ].

F< := (W, R<) is called a preference frame.


We then define the consequence operation F<|= according to formula (6.3.3); that is, α ∈ F<|=(X) if and only if, for every valuation V of S in F<, it is the case that R<({V(β) : β ∈ X}, V(α)).
The frame F< is linked with preferential model structures in the sense of Makinson
(2005, Chap. 3) in the following way. Let V : S → ℘ (W ) be a valuation in F< =
(W, R< ). We define the satisfaction relation |=V ⊆ W × S (depending on V ):

w |=V α if and only if w ∈ V (α).

Following Makinson, the triple (W, |=V , <) is called a preferential model struc-
ture. For any X ⊆ S we put:

w |=V X if and only if (∀α ∈ X)(w |=V α)
(equivalently, if and only if w ∈ ⋂{V(α) : α ∈ X}).
We also define:

w |=<V X if and only if w |=V X & ¬(∃w′)(w′ < w & w′ |=V X)

and say that w preferentially satisfies X with respect to V.
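
Before turning to Proposition 6.3.8, the following small Python sketch may help fix intuitions (it is our illustration only; the states, the preference relation and the valuation encode the familiar ‘birds normally fly, penguins are exceptional birds’ scenario and are not taken from the text). It computes preferential satisfaction for one fixed valuation and checks the entailment condition that appears as clause (ii) of the proposition, restricted to that single valuation.

W = {0, 1}                                   # 0: a normal bird, 1: a penguin
less = {(0, 1)}                              # the preference relation <: 0 is more normal than 1

V = {"bird": {0, 1}, "penguin": {1}, "flies": {0}}   # a fixed valuation of three variables

def sat(w, alpha):
    """w |=_V alpha, for a sentential variable alpha."""
    return w in V[alpha]

def sat_all(w, X):
    """w |=_V X: w satisfies every formula in X."""
    return all(sat(w, alpha) for alpha in X)

def pref_sat(w, X):
    """w |=<_V X: w satisfies X and no <-smaller state satisfies X."""
    return sat_all(w, X) and not any((u, w) in less and sat_all(u, X) for u in W)

def entails_for_V(X, alpha):
    """Clause (ii) of Proposition 6.3.8, restricted to this single valuation V."""
    return all(sat(w, alpha) for w in W if pref_sat(w, X))

print(entails_for_V({"bird"}, "flies"))             # True: the minimal bird-state is 0, which flies
print(entails_for_V({"bird", "penguin"}, "flies"))  # False: the minimal {bird, penguin}-state is 1

The two calls illustrate the intended non-monotonic behaviour: enlarging the set of premises from {bird} to {bird, penguin} withdraws the conclusion flies.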

Proposition 6.3.8 Let X be a set of formulas and α a formula of S. For a given preference frame F< = (W, R<) the following conditions are equivalent:
(i) α ∈ F<|=(X);
(ii) For any valuation V : S → ℘(W) in F<, it is the case that (∀w ∈ W)(w |=<V X ⇒ w |=V α).

Proof (i) ⇒ (ii). Assume α ∈ F<|=(X). This means that R<({V(β) : β ∈ X}, V(α)) for any valuation V : S → ℘(W) in F<. Accordingly, for any valuation V in F<, it is the case that

(∀w ∈ W)[(w ∈ ⋂{V(β) : β ∈ X} & ¬(∃w′)(w′ < w & w′ ∈ ⋂{V(β) : β ∈ X})) ⇒ w ∈ V(α)].

But w ∈ ⋂{V(β) : β ∈ X} & ¬(∃w′)(w′ < w & w′ ∈ ⋂{V(β) : β ∈ X}) states that w |=<V X. Thus for any valuation V in F<, (∀w ∈ W)(w |=<V X ⇒ w |=V α). So (ii) holds.
The reverse implication (ii) ⇒ (i) is also immediate. □

How should the above semantics be combined with action theory? A realistic
theory of action should not assume the omnipotence and omniscience of agents.
Some natural limitations should be assumed, which take into account the life and professional experience of agents as well as the accompanying principles of conduct, also in difficult situations. We do not mean merely algorithmizable procedures but also those actions that require deliberation and the best fit to a new situation. Agents may trip up, and some tasks may exceed their abilities to act and their intellectual capabilities.
But, on the other hand, the agents have the ability to overcome or get around problems
and obstacles. Any reasonable theory should not ignore these aspects of action; it
should however point to sources of the problems and to repair work.

6.3.3 Tree-Like Rules of Conduct

In this section the notion of a rule of conduct is introduced. The terms rule of conduct,
procedure, and pragmatic rule are treated interchangeably. Rules of conduct are
usually historically and socially changeable; they are not set for ever. They constitute,
however, the basis of action. When speaking of rules of conduct we shall abstract from
deontological aspects of action. Each rule of conduct in the semantic formulation is
strictly conjoined with an elementary action system and it involves some actions of
the system that are organized in a certain way. It is assumed that these actions are
totally performable and hence permitted in the sense of the system. (Rules of conduct
may be treated as norms when one is interested in examining various situations in
which the actions the rule involves are obligatory; but this (difficult) problem is
omitted here.) We shall consider simple rules founded on finite trees. (Rules based
on more elaborate set-theoretic constructs like finite (or infinite) directed graphs are
also conceivable; such rules are not discussed in this book.)
The notion of a rule of conduct is the basic semantic tool that underlies a conception of non-monotonic reasonings, one which strictly links the theory of actions with the semantics of non-monotonic reasonings. This conception is expounded in this paragraph.
Let us consider the following sentences taken from everyday English:
If it is raining, I usually take an umbrella;
If we are sleepy, we go to bed;
If you have a headache, you take a pill;
He usually drinks tea for breakfast;
If you are a patient with end-stage heart failure or severe coronary artery disease,
usually a heart transplant is performed;
If one is in cardiac arrest, cardiopulmonary resuscitation (CPR) is performed (in
an effort to manually preserve intact brain function until further measures are
taken.)
The above examples show that under the given circumstances one performs an
appropriate action so that after performing it the required states of affairs are usually
reached (I will not be wet; we will be well-rested; you will lose your headache, etc.)
The proviso usually is relevant: I may forget to take the umbrella (hence I will be
wet); the patient in cardiac arrest may die directly after the event despite the fact that
CPR has been immediately performed, etc. We want to model the above situations
on the grounds of our theory of actions, that is, we want to find set-theoretic entities
that will represent the above procedures. The problem is that set theory does not deal
with formulas of the type ‘an element a usually belongs to a set A’. (This is not the case for fuzzy set theory or a probabilistically grounded set theory; building a theory of sets which would capture the meanings of such formulas is a program for the future.) We are therefore faced with a phenomenon that is an instance of the ramification problem. The solution we propose is simple: the above procedures are treated as infallible. Their possible failures are neglected; exceptions to the rule (cases in which the rule possibly fails) do not disqualify it as long as the rule is still in use.
A remark is appropriate here. Pseudo-Geber was a famous medieval alchemist
(but also a chemist!). He assumed that all metals are composed of unified sulfur and
mercury. He gave detailed descriptions of metallic properties in those terms. Today
we know that the procedures (i.e., the rules of transmuting sulfur and mercury into
various metals) based on his descriptions are utterly wrong. However, it should be
added that “his practical directions for laboratory procedures were so clear that it is
obvious he was familiar with many chemical operations” (from Wikipedia).
Let M = (W, R, A) be an atomic action system. Since deontological issues are
not discussed here, it is assumed for simplicity that M is normal. Consequently,
each atomic action of A is totally performable and the relation R may be discarded.
Therefore, the definitions that follow do not involve the relation of direct transition.
In a more formal framework, one may then argue that the above examples are
instantiations of the following scheme of action. Let Φ and Ψ be propositions, i.e.,
subsets of W , and let A be an atomic action of M. We consider the rule:
‘It is the case that Φ; therefore one usually performs an action A in order to achieve
a set of states Ψ ’
(Since the notion of an agent is not incorporated into elementary action systems, we
shall henceforth use the impersonal form ‘one’.) The above rule of conduct should
be infallible in the following sense: if the action A is performed in whatever state
of Φ then, after performing A, a state in Ψ is reached. (As we assume that M is
normal, A is included in the transition relation of the system M.) This postulate is
encapsulated by the inclusion
A[Φ] ⊆ Ψ (6.3.4)

(A[Φ] is the A-image of Φ, A[Φ] := {w ∈ W : (∃u ∈ Φ) (u, w) ∈ A}.) This
inclusion expresses the fact that for every (realizable) performance (u, w) of A, if
u ∈ Φ, then w ∈ Ψ . In other words, by performing A in any state of Φ the agent
always reaches a state belonging to Ψ , as the rule says. From the epistemic perspective
(see Lemma 6.1.1), if A = Aa is the accessibility relation of an agent a then (6.3.4)
says that the agent a knows Ψ at every state u ∈ Φ. But we shall use rules that are
infallible in a possibly strongest way—these are rules in which (6.3.4) is lifted to the
equality
A[Φ] = Ψ. (6.3.5)

Thus, if the action A is performed in whatever state of Φ then, after performing A, a state in Ψ is reached and every state of Ψ is obtained from Φ by performing the
action A.
In fact, we shall consider more complex rules of conduct. These are tree-like
structures. More precisely, the rules we shall define are certain labeled finite trees.
(We may also define rules built on finite directed graphs but we shall confine ourselves
to finite trees here.) The rules in which only one action is involved and which satisfy
(6.3.5) are certain limit cases of more general tree-like rules of conduct which we
shall define below. In fact, the former are rules defined on two-element, linear trees.
Let us quote the following passage from Wikipedia:
An intermediate product is a product that might require further processing before it is saleable
to the ultimate consumer. This further processing might be done by the producer or by another
processor. Thus, an intermediate product might be a final product for one company and an
input for another company that will process it further.

Suppose c1 , . . . , ck are agents (producers or suppliers). Each of them manufac-
tures an intermediate product of a certain kind which is supplied to another pro-
ducer b. The agent b processes the products provided by c1 , . . . , ck and produces a
(more advanced) product which in turn is an input for another company operated by
the agent a. We therefore have a simple tree showing this situation:
[Fig. 6.1: a tree with leaves c1, c2, . . . , ck and root a]
The above tree may be a subtree of a larger tree representing the whole production
process. The suppliers of raw materials are marked at the leaves of this larger tree.
The final product (commodity) is manufactured by the agent at the root of the tree.
The manufacturer ci performs an action Aci b in a definite set of states Φci , required by
the production regime and supplies his intermediate product, being the result of Aci b ,
to b. In turn, the manufacturer b starts producing in a set of definite states Φb in which
the intermediate products provided by the suppliers c1 , . . . , ck are indispensable. As the output (result) of the action Aba that the agent b performs, yet another product is manufactured, which b supplies to a (a may also have other suppliers). The above fragment
of the production scheme may be treated as a special infallible rule of action. (From
the mathematical viewpoint, the above tree-like scheme is a simplification of the real
manufacturing process because the intermediate product manufactured by a producer
is often an input for more than one company that processes it further.)
From the more abstract perspective the above scheme is encapsulated by the
following definitions.
We recall that a finite partially ordered set (T, ≤, 0) is a tree (with zero 0 as a designated constant) if one vertex has been designated (the root), in which case the edges have a natural orientation: upwards from the root. The root 0 is the least element of the tree. The tree-order is thus the partial ordering ≤ on the vertices of a tree with x ≤ y if and only if the unique path from the root to y passes through x.
In each finite tree, the parent of a vertex is the vertex connected to it on the path
to the root; every vertex except the root has a unique parent. A child of a vertex x is
a vertex of which x is the parent. Leaves are the vertices that do not have children.
A labeled tree is a tree (T, , 0) in which each vertex is given a unique label. In
Chap. 5 some preliminary remarks on tree-like positive norms were presented. We
shall return to these norms. We shall define here the notion of a rule of conduct. A
rule of conduct is a tree-like positive norm which is labeled in a certain canonical
way.
Definition 6.3.9 Let M = (W, R, A) be a normal atomic action system. A rule of
conduct for M is a finite labeled tree with root 0 in which to each vertex a a subset
Φa of W is assigned (Φa is thus the label of a) and to each edge ab in which b is a
child of a (and hence a < b) an atomic action Aba of M (from child b to its parent
a) is assigned so that the following condition is met:
(6.3.6) if b1 , . . . , bk are the children of a, then Ab1 a [Φb1 ] ∩ · · · ∩ Abk a [Φbk ] = Φa .

[Fig. 6.2: a tree with leaves labeled Φb1, Φb2, . . . , Φbk, the edges to the root labeled Ab1a, Ab2a, . . . , Abka, and the root labeled Φa]
As declared earlier, we shall use the terms ‘rule of conduct’ and ‘pragmatic rule’ interchangeably.
Φ0 is the proposition assigned to the root 0. Intuitively, each rule assures that if
the agents start working in states belonging to the propositions attached to the leaves
of the tree and consecutively perform (nonlinearly) the actions Aba for all a, b with
b being a child of a, the system will eventually reach a state belonging to Φ0 . The
proposition Φ0 is thus regarded as the set of final states (or the conclusion) of the rule
while the propositions being the labels of leaves are called initial propositions (or the
premises) of the rule. The actions the rule involves thus guarantee a success; that is,
reaching a final state if the agents start working in any states belonging to the initial
propositions. Each rule is therefore stable in the sense that the actions performed by the children of a in states belonging to their respective labels lead to states belonging to the label of the parent a, and each state belonging to the label of a is obtained by performing the actions assigned to the children of a in some states belonging to their respective labels.
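
Condition (6.3.6) can be checked mechanically on finite systems. The sketch below is again an illustration of ours (the encoding of a rule by a parent map, a label map and an action map, as well as all concrete data, are invented for the example); it builds a rule with two leaves, as in Fig. 6.2 with k = 2, and verifies (6.3.6) at every vertex that has children.

W = {0, 1, 2, 3, 4, 5}

def image(A, Phi):
    """A[Phi] := {w : there is u in Phi with (u, w) in A}."""
    return {w for (u, w) in A if u in Phi}

parent = {"b1": "r", "b2": "r"}                      # child -> parent; "r" is the root
label = {"b1": {0, 1}, "b2": {1, 2}, "r": {4}}       # vertex -> its label, a subset of W
action = {"b1": {(0, 4), (1, 4)},                    # child -> the atomic action from child to parent
          "b2": {(1, 4), (2, 4), (3, 5)}}

def children(a):
    return [b for b, p in parent.items() if p == a]

def satisfies_6_3_6():
    """Check (6.3.6): at each vertex a with children b1, ..., bk,
    the intersection of the images A_{bi a}[label(bi)] equals label(a)."""
    for a in set(parent) | set(parent.values()):
        kids = children(a)
        if not kids:
            continue
        meet = set(W)
        for b in kids:
            meet &= image(action[b], label[b])
        if meet != label[a]:
            return False
    return True

print(satisfies_6_3_6())    # True for the data above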
If a rule r is built on a two-element tree, and therefore r is visualized as

[Fig. 6.3: a two-element tree with the leaf labeled Φb, the root labeled Φ0, and the edge labeled Ab0]
we have that Φ0 = Ab0 [Φb ].
If r is built on a one-element tree (consisting of the vertex 0 only), the set of
children of 0 is empty and no atomic action is attached to the tree. Therefore the set
of initial propositions (premises) of the rule is empty.
Let P be a collection of pragmatic rules associated with the system M. We define
a relation RP between finite sets of propositions and individual propositions.

Definition 6.3.10 Let X be a finite set of propositions and Ψ a proposition. Then

RP (X, Ψ )

means that there is a rule of conduct r in P such that Ψ = Φ0 is the final proposition
of r (= the label of the root) and X is the set of initial propositions of the rule (= the
set of labels of the leaves of the tree).
The above definition is extended onto infinite sets X of propositions, using the
well-known finitarization procedure, that is, RP (X, Ψ ) means that RP (X f , Ψ ) holds
(in the above sense) for some finite subset X f ⊆ X. 

If P contains a rule built on a tree in which the label of each leaf is the empty set,
then RP (∅, Ψ ).
Finite, linearly ordered sets are trees. If P consists of labeled linear orders, RP is a one-premise relation, that is, RP(X, Ψ) implies that the family X consists of at most one element.
The above definition is correct and it implies that

FP := (W, RP )

is a finitary frame assigned to the action system M.
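
Reading Definition 6.3.10 off such a representation is equally direct. The helper below (our own sketch, reusing the (parent, label, action) encoding of the previous illustration) decides RP(X, Ψ) for a finite collection P of rules by comparing root and leaf labels; as the two sample calls indicate, enlarging the family X immediately destroys the relation when no rule of P has the corresponding set of leaves, which is the simplest manifestation of the failure of monotonicity discussed below.

def vertices(parent):
    return set(parent) | set(parent.values())

def leaves(parent):
    """Vertices that are not the parent of any vertex."""
    return {v for v in vertices(parent) if v not in set(parent.values())}

def root(parent):
    """The unique vertex that is not a child of any vertex."""
    (r,) = vertices(parent) - set(parent)
    return r

def R_P(rules, X, Psi):
    """R_P(X, Psi): some rule (parent, label, action) in P has Psi as the label of its root
    and X as the set of labels of its leaves."""
    X = {frozenset(phi) for phi in X}
    return any(set(lab[root(par)]) == set(Psi) and
               {frozenset(lab[v]) for v in leaves(par)} == X
               for (par, lab, act) in rules)

# The single rule from the previous sketch, packaged as a (parent, label, action) triple.
rule1 = ({"b1": "r", "b2": "r"},
         {"b1": {0, 1}, "b2": {1, 2}, "r": {4}},
         {"b1": {(0, 4), (1, 4)}, "b2": {(1, 4), (2, 4), (3, 5)}})

print(R_P([rule1], [{0, 1}, {1, 2}], {4}))          # True
print(R_P([rule1], [{0, 1}, {1, 2}, {5}], {4}))     # False: no rule in P has these three leaves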


One of the operations performed on a tree is taking a subtree of the tree. In the
context of pragmatic rules special subtrees are of some importance. (We confine
ourselves to finite trees.) A special subtree of a finite tree (P, ≤, 0) with zero is a tree (with the same bottom element 0) obtained by consecutively applying the operation of cutting off the upper part of the tree (P, ≤, 0) above a fixed vertex (see the figure below) and leaving the rest of the tree.
[Fig. 6.4: a tree with root 0 and children a1 and a2, where a1 has the children b1, b2, b3 and a2 has the children b4, b5]
By cutting off the three leaves over a1 we get the tree


[Fig. 6.5: the resulting tree with root 0 and children a1 and a2, where a1 is now a leaf and a2 has the children b4, b5]
It is easy to see that if r is a pragmatic rule, then any special subtree of the labeled
tree of r is a labeled tree as well and it is also a rule, i.e., condition (6.3.6) is also met
for the labels of the subtree. If the set P has the property that for every pragmatic
rule r in P, the rules based on the special subtrees of the labeled tree of r also belong
to P, we say that P is closed with respect to special subtrees. The closedness with
respect to special subtrees is a quite natural condition that may be imposed on any
set P of applicable pragmatic rules.
RP is monotone if for any finite families X, Y and any set Ψ ⊆ W, X ⊆ Y and RP(X, Ψ) imply RP(Y, Ψ).
It is not difficult to produce action systems M and sets P of pragmatic rules such
that RP is not monotone. For example, if P is nonempty and consists of rules based
on linearly ordered finite sets, then there are one-element families X and sets Ψ ⊆ W
such that RP (X, Ψ ) holds. But RP (Y, Ψ ) fails to hold if Y has at least two elements.
Hence RP is not monotone.
Suppose now that the atomic actions of the system M are reflexive. This implies
that Ψ ⊆ A[Ψ ] for any set Ψ ⊆ W and any atomic action A. Let P be any set of
pragmatic rules associated with M and closed under special subtrees. Then

(6.3.7) RP(X, Ψ) implies ⋂X ⊆ Ψ for any finite family X and any Ψ.
Indeed, let us assume that RP (X, Ψ ), where X = {Ψ1 , . . . , Ψk }. There is a rule r in
P such that X is the set of labels assigned to the leaves of the tree of r and Ψ is the
label of the root of the tree. Induction on the height of the subtrees of the tree of r
then shows that Ψ1 ∩ · · · ∩ Ψk ⊆ Ψ .
We are interested in reversing the implication (6.3.7) for an action system and a
set P of rules, that is, we ask when

(6.3.8) ⋂X ⊆ Ψ implies RP(X, Ψ) for any finite family X and any Ψ.
(6.3.8) is strictly linked with supraclassical non-monotonic forms of reasoning, the
ones that accept the rules and axioms of classical logic.
Let S be the formula-algebra of classical propositional language. A valuation V
is a mapping from the set of propositional variables of S into the power set ℘ (W ).
V is then extended in the standard (Boolean) way onto the entire set S.
Let P be a set of pragmatic rules and FP = (W, RP ) the frame corresponding
to P. If V is a valuation, then the triple (W, RP , V ) is called a model (for S).
FP|= denotes the operation from ℘(S) to ℘(S) defined according to formula (6.3.3) above; that is, for a finite set of formulas X = {β1, . . . , βm} and a formula α,

α ∈ FP|=(β1, . . . , βm) ⇔df for every valuation V of S in FP, it is the case that RP({V(β1), . . . , V(βm)}, V(α)).

FP|= is subsequently extended onto infinite subsets of S by means of the finitarization procedure.
In the light of the above remarks, FP|= need not be a monotone operation, that is, α ∈ FP|=(X) and X ⊆ Y do not yield α ∈ FP|=(Y).
The above constructions provide a bunch of systems of non-monotonic reasoning
from the semantic perspective. This perspective is constituted by sets P of tree-like
rules of conduct and the associated relation RP .
The rules of inference of classical logic can also be represented as tree-like rules
of conduct on spaces of states. Let W be a nonempty set. States are proof trees
whose edges are labeled by subsets of W . Each rule of inference of CPC is treated
as an action on a proof tree which modifies this tree in an appropriate way. Rules
of inference can be then represented as tree-like labeled rules of conduct acting on
proof trees. Yet another option is to consider finite sequences of subsets of W which
are (Hilbert-style) proofs. The details are omitted.
A thorough examination of the resulting systems of non-monotonic reasonings
is beyond the reach of this book and requires a separate, detailed scrutiny. This is a
project for a future investigation.
Bibliography

Ajdukiewicz, K. (1974). Pragmatic logic. Dordrecht: Reidel.


Apostel, L. (1978). The elementary theory of collective action. Philosophica, 21, 129–157.
Atienza, M., & Manero, J. R. (1998). A theory of legal sentences. Dordrecht: Kluwer.
Åqvist, L. (1974). A new approach to the logical theory of action and causality. Logical theory and
semantic analysis (pp. 73–91). Dordrecht: D. Reidel.
Banachowski, L., Kreczmar, A., Mirkowska, G., Rasiowa, H., & Salwicki, A. (1977). An intro-
duction to algorithmic logic. Metamathematical investigations in the theory of programs. In
A. Mazurkiewicz & Z. Pawlak (Eds.), Mathematical foundations of computer science (Vol. 2,
pp. 7–99). Warsaw: Banach Center Publications, PWN.
Balbiani, P., Herzig, A., & Troquard, N. (2007). Alternative axiomatics and complexity of deliber-
ative STIT theories. Journal of Philosophical Logic, 37, 387–406.
Bartoszyński, R., Pawlak, Z., & Szaniawski, K. (1963). Matematyczne przyczynki do rozwoju
prakseologii (Mathematical contributions to the development of praxiology) (in Polish). Warsaw.
Barwise, J., & Perry, J. (1983). Situations and attitudes. Cambridge: MIT.
Bergström, L. (1966). Alternatives and consequences of actions (dissertation). University of Stock-
holm. Stockholm Studies in Philosophy (Vol. 4). Stockholm: Almqvist and Wiksell.
Blikle, A. (1973). Wybrane zagadnienia lingwistyki matematycznej (Selected problems of math-
ematical linguistics) (in Polish). In A. Mazurkiewicz (Ed.), Problemy przetwarzania informacji
(Problems in information processing) (Vol. 2, pp. 94–152). Warsaw: WNT.
Blikle, A. (1977). An analysis of programs by algebraic means. In A. Mazurkiewicz & Z. Pawlak
(Eds.), Mathematical foundations of computer science (Vol. 2, pp. 167–213). Warsaw: Banach
Center Publications, PWN.
Błaszczyk, A., & Turek, S. (2007). Teoria mnogości (Set theory) (in Polish). Warsaw: PWN.
Boden, M. (1977). Artificial intelligence and natural man. Brighton: Harvester Press.
Bratman, M. E. (1999). Intention, plans, and practical reason. Stanford: CSLI Publications.
Brown, F. M. (Ed.) (1987). The frame problem in artificial intelligence. In Proceedings of the 1987
Workshop Held in Lawrence, Kan. Palo Alto: Morgan Kaufmann.
Cai, J., & Paige, R. (1992). Languages polynomial in the input plus output. In Proceedings of the Second International Conference on Algebraic Methodology and Software Technology (AMAST ’91) (pp. 287–
300). Berlin: Springer.
Castilho, M., Herzig, A., & Varzinczak, I. (2002). It depends on the context! A decidable logic of
action and plans based on a ternary dependence relation. Workshop on nonmonotonic reasoning
(NMR).


Chang, C. C., & Keisler, H. J. (1973). Model theory. Amsterdam: North Holland and American
Elsevier.
Cholvy, L. (1999). Checking regulations consistency by using SOL-resolution. In Proceedings of
the 7th International Conference on AI and Law (pp. 73–79). Oslo.
Chomsky, N., & Miller, G. A. (1958). Finite state languages. Information and Control, 1(2), 91–112.
Cook, M. (2004). Universality in elementary cellular automata. Complex Systems, 15(1), 1–40.
Czelakowski, J. (1991). Prakseologia formalna i formalna teoria działania (Formal praxiology and
formal action theory) (in Polish). Prakseologia, Nr, 1–2(110–111), 221–255.
Czelakowski, J. (1993). Probabilistyczne teorie działania (Probabilistic theories of action) (in Pol-
ish). Prakseologia, Nr, 1–1(118–119), 25–54.
Czelakowski, J. (1996). Elements of formal action theory. In A. Fuhrmann & H. Rott (Eds.), Logic,
action and information. Essays on logic and artificial intelligence (pp. 3–62). Berlin: Walter de
Gruyter.
Czelakowski, J. (1997). Action and deontology. In E. Ejerhed & S. Lindström (Eds.), Logic, action
and cognition (pp. 47–88). Dordrecht: Kluwer.
Czelakowski, J. (2001). Protoalgebraic logics. Dordrecht: Kluwer.
Davey, B. A., & Priestley, H. (2002). Introduction to lattices and order (2nd ed.). Cambridge:
Cambridge University Press.
Davidson, D. (1963). Actions, reasons, and causes. Journal of Philosophy, 60, 685–700.
Davidson, D. (1967). The Logical form of action sentences. In N. Rescher (Ed.), The logic of
decision and action (pp. 81–95). Pittsburgh: University of Pittsburgh Press.
Davidson, D. (2001). Essays on actions and events (2nd ed.). Oxford: Oxford University Press.
Desharnais, J., & Möller, B. (2005). Least reflexive points of relations. Higher-Order and Symbolic
Computation, 18, 51–77.
Dunin-Kęplicz, B. (2012). Epistemic profiles and belief structures. In G. Jezic et al. (Eds.), Agent and multi-agent systems. Technologies and applications. Proceedings of the 6th KES International Conference, KES-AMSTA 2012, Dubrovnik, Croatia, 25–27 June 2012. Lecture Notes in
Computer Science (Vol. 7327, pp. 360–369). Springer.
Eiter, T, Erdem, E., Fink, M, & Senko, J. (2005). Updating action domains descriptions. In L.
Kaelbling & A. Saffioti (Eds.), Proceedings of the 19th International Joint Conference on Artificial
Intelligence (IJCAI) (pp. 418–423). Morgan Kaufmann Publishers.
Finger, M., & Gabbay, D. (1992). Adding a temporal dimension to a logic system. Journal of Logic,
Language and Information, 1(3), 203–233.
Fischer, M. J., & Ladner, R. E. (1979). Propositional dynamic logic of regular programs. Journal
of Computer and System Sciences, 18, 194–211.
Gabbay, D. M. (1985). Theoretical foundations for non-monotonic reasoning in expert systems. In
K. R. Apt (Ed.), Proceedings NATO Advanced Institute on Logics and Models of Concurrent
Systems. (La Colle-sur-Loup, France) (pp. 439–457). Berlin: Springer.
Gabbay, D., Horty, J., Parent, X., van der Meyden, R., van der Torre, L. (Eds.), (2014–2015).
Handbook of deontic logic and normative systems (Vol. 1). Available on Amazon UK/US (Vol.
2). Planned for publication 2014/2015. London: College Publications.
Gabbay, D., Pnueli, A., Shelah, S., & Stavi, J. (1980). The temporal analysis of fairness. In Seventh
ACM Symposium on Principles of Programming Languages (pp. 163–173). Las Vegas, Nevada.
Genesereth, M. R., & Nilsson, N. J. (Eds.). (1987). Logical foundations of artificial intelligence.
Palo Alto: Morgan Kaufmann.
Gettier, E. (1963). Is justified true belief knowledge? Analysis, 23, 121–123.
Goldblatt, R. I. (1982a). Axiomatizing the logic of computer programming. Lecture Notes in Com-
puter Science (Vol. 130). Berlin: Springer.
Goldblatt, R. I. (1982b). The semantics of Hoare’s iteration rule. Studia Logica, 41, 141–158.
Goldblatt, R. I. (1986). Review article. The Journal of Symbolic Logic, 51, 225–227.
Goldblatt, R. I. (1987). Logics of time and computation. CSLI Lecture Notes (Vol. 7). Stanford.
Goldman, A. I. (1970). A theory of human action. Englewood Cliffs: Prentice-Hall.
Goldman, E. (1971). Theory of actions. Ann Arbor: Michigan University Press.
Gonzalez, L. D. (2003). The paradoxes of action: Human action, law and philosophy. Dordrecht:
Kluwer.
Greibach, S. A. (1965). A new normal form theorem for context-free phrase structure grammars.
Journal of the ACM, 12, 42–52.
Grigoris, A. (1997). Nonmonotonic reasoning. Cambridge: The MIT Press.
Halmos, P. R. (1950). Measure theory. New York: Van Nostrand.
Halpern, J. Y. (Ed.) (1986). Theoretical aspects of reasoning about knowledge. In Proceedings of the
1986 Conference Held in Monterey, California 19–22 March 1986. Palo Alto: Morgan Kaufmann.
Hansson, S. O. (1986). Individuals and collective actions. Theoria, 52, 87–97.
Harel, D. (1979). First-order dynamic logic., Lecture Notes in Computer Science Berlin: Springer.
Harel, D. (1984). Dynamic logic. In D. Gabbay & F. Guenthner (Eds.), Handbook of philosophical
logic. Extensions of classical logic (Vol. II, pp. 497–604). Dordrecht: D. Reidel.
Harel, D., Kozen, D., & Parikh, R. (1980). Process logic: Expressiveness, decidability, completeness.
In Proceedings of the 21st Symposium on Foundations of Computer Science (pp. 129–142).
Syracuse.
Harel, D., Pnueli, A., & Stavi, J. (1981). Propositional dynamic logic of context-free programs. Pro-
ceedings of the 22th Symposium on Foundations of Computer Science (pp. 310–321). Nashville:
Tennessee.
Herzig, A., & Troquard, N. (2006). Knowing how to play: Uniform choices in logics of agency. In
Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent
Systems (pp. 209–216). ACM.
Hopcroft, J. E., & Ullman, J. D. (1979). Introduction to automata theory, languages, and computa-
tion. Reading: Addison-Wesley.
Hendricks, V. F. (2003). Active agents. In J. Van Benthem, & R. Van Rooy (Eds.), Journal of Logic,
Language and Information, 12(4), 469–495.
Hintikka, J. (1962). Knowledge and belief. Ithaca: Cornell University Press.
Hornsby, J. (1980). Actions. London: Routledge.
Horty, J. F. (1993). Deontic logic as founded on nonmonotonic logic. Annals of Mathematics and
Artificial Intelligence, 9(1–2), 69–91.
Horty, J. F. (2001). Agency and deontic logic. Oxford: Oxford University Press.
Hyman, J., & Steward, H. (Eds.). (2004). Agency and action. Cambridge: Cambridge University
Press.
Kanger, S. (1957). New foundations for ethical theory, reprinted in R. Hilpinen (Ed.), Deontic logic:
Introductory and systematic readings, 1970 (pp. 36–58).
Kleene, S. C. (1952). Introduction to metamathematics. New York: Van Nostrand.
Konieczny, J. (1970). Elementy prakseologii formalnej (Elements of formal praxiology) (in Polish).
Prakseologia, 35, 65–131.
Konieczny, J., & Stonert, H. (1971). Prakseologia formalna (Formal praxiology) (in Polish). Prak-
seologia, 37, 39–52.
Kooi, B. P. (2011). Dynamic epistemic logic. In J. van Benthem & A. ter Meulen (Eds.), Handbook
of logic and language (2nd ed.). Amsterdam: Elsevier.
Kotarbiński, T. (1965). Praxeology. An introduction to the science of efficient action. New York:
Pergamon Press.
Kotarbiński, T. (1969). Traktat o dobrej robocie (A treatise on good work) (in Polish). Wrocław:
Ossolineum.
Kozen, D., & Parikh, R. (1981). An elementary proof of the completeness of PDL. Theoretical
Computer Science, 14, 113–118.
Kraus, S., Lehmann, D., & Magidor, M. (1990). Nonmonotonic reasoning, preferential models and
cumulative logics. Artificial Intelligence 44, 167–207. (The full bibliography of the subject till
1990 is available there.).
Lorenzen, P., & Lorenz, K. (1978). Dialogische Logik. Darmstadt: Wissenschaftliche Buchge-
sellschaft.
Makinson, D. (1989). General theory of cumulative inferences. In M. Reinfrank (Ed.), Non-
monotonic reasoning. Lecture Notes on Artificial Intelligence (Vol. 346). Proceedings of the
Second International Workshop on Non-Monotonic Reasoning. Berlin: Springer.
Makinson, D. (2005). Bridges from classical to nonmonotonic logic. Text in Computing (Vol. 5).
London: King’s College.
Małuszyński, J., & Szałas, A. (2011a). Living with inconsistency and taming nonmonotonicity. In
O. de Moor, G. Gottlob, T. Furche, & A. Sellers (Eds.), Datalog reloaded. LNCS (Vol. 6702,
pp. 384–398). Berlin: Springer.
Małuszyński, J., & Szałas, A. (2011b). Logical foundations and complexity of 4QL, a query language
with unrestricted negation. Journal of Applied Non-Classical Logics, 21(2), 211–232.
Mazurkiewicz, A. (1972a). Iteratively computable relations. Bulletin de l’Académie Polonaise des
Sciences, Series, Sciences, Mathematics, Astronomy, Physics, 20, 793–798.
Mazurkiewicz, A. (1972b). Recursive algorithms and formal languages. Bulletin de l’Académie
Polonaise des Sciences. Series, Sciences, Mathematics, Astronomy, Physics, 20, 799–803.
Mazurkiewicz, A. (1974). Podstawy teorii programowania (Foundations of the theory of program-
ming) (in Polish). In A. Mazurkiewicz (Ed.), Problemy przetwarzania informacji (Problems in
information processing) (Vol. 2, pp. 39–93). Warsaw: WNT.
McCarthy, J. (1963). Situations, actions and causal laws. Technical report. Stanford University.
McCarthy, J. (1980). Circumscription: A form of non-monotonic reasoning. Artificial Intelligence,
13(1–2), 23–79.
McCarthy, J. (1986). Applications of circumscription to common sense reasoning. Artificial Intel-
ligence, 28(1), 89–116.
Mele, A. R. (1997). The philosophy of action. Oxford: Oxford University Press.
Minsky, M. (1974). A framework for representing knowledge. Technical Report 306, Artificial Intelligence Laboratory, MIT. (Shorter versions in: P. Winston (Ed.), The psychology of computer vision, McGraw-Hill, 1975; J. Haugeland (Ed.), Mind design, MIT Press, 1981; and A. Collins & E. E. Smith (Eds.), Cognitive science, Morgan-Kaufmann, 1992.)
Montague, R. (1970). Pragmatics and intensional logic. Synthese, 22, 6–34.
Montague, R. (1974). In R. H. Thomason (Ed.), Formal philosophy. New Haven: Yale University
Press.
Moore, R. C. (1985). Semantic considerations on nonmonotonic logic. Artificial Intelligence, 25,
75–94.
Moschovakis, Y. N. (1994). Notes on set theory. New York: Springer.
Nowakowska, M. (1973a). Language of motivation and language of actions. The Hague: Mouton.
Nowakowska, M. (1973b). Formal theory of actions. Behavioral Science, 18, 393–413.
Nowakowska, M. (1979). Teoria działania (Theory of action) (in Polish). Warsaw: PWN.
O’Connor, T., & Sandis, C. (Eds.). (2010). A companion to the philosophy of action. Oxford: Wiley.
Parikh, R. (1978). The completeness of propositional dynamic logic. Mathematical foundations
of computer science 1978. Lecture Notes in Computer Science (Vol. 64, pp. 403–415). Berlin:
Springer.
Parikh, R. (1984). Logics of knowledge, games, and dynamic logic. In M. Joseph & R. Shyamasundar
(Eds.), Foundations of software technology and theoretical computer science. Lecture Notes in
Computer Science (Vol. 181, pp. 202–222). Berlin: Springer.
Pawlak, Z. (1969). Matematyczne aspekty procesu produkcyjnego (Mathematical aspects of manu-
facturing processes) (in Polish). Warsaw: PWE.
Pnueli, A. (1977). The temporal logic of programs. In Proceedings of the 8th Symposium on
Foundations of Computer Science (pp. 46–57). Providence.
Pratt, V. R. (1976). Semantical considerations on Floyd-Hoare logic. In Proceedings of the 17th
Annual IEEE Symposium on Foundations of Computer Science (pp. 109–121).
Pratt, V. R. (1980). Applications of modal logics to programming. Studia Logica, 39, 257–274.
Prior, A. N. (1967). Past, present and future. Oxford: Oxford University Press.
Reiter, R. (1980). A Logic for default reasoning. Artificial Intelligence, 13, 81–132.
Reiter, R. (1991). The frame problem in the situation calculus: A simple solution (sometimes) and
a completeness result for goal regression. In V. Lifschitz (Ed.), Artificial intelligence and the
mathematical theory of computation. Essays in honor of John McCarthy (pp. 359–380). New
York: Academic Press.
Salomaa, A. (1969). Theory of automata. London: Pergamon Press.
Salwicki, A. (1970). Formalized algorithmic languages. Bulletin de l’Académie Polonaise des
Sciences. Series, Sciences, Mathematics, Astronomy, Physics, 18, 227–232.
Samet, D. (2008). The Sure-thing principle and independence of irrelevant knowledge. The faculty
of management: Tel Aviv University, available via the Internet.
Sandis, C. (2009). New essays on the explanation of action. Basingstoke: Palgrave Macmillan.
Scott, D. (1970). Advice on modal logic. In K. Lambert (Ed.), Philosophical problems in logic
(pp. 143–173). Dordrecht: D. Reidel.
Segerberg, K. (1980). Applying modal logic. Studia Logica, 39, 275–295.
Segerberg, K. (1982a). The logic of deliberate action. Journal of Philosophical Logic, 11, 233–254.
Segerberg, K. (1982b). A completeness theorem in the modal logic of programs. In T. Traczyk (Ed.),
Universal algebra and applications (Vol. 9, pp. 31–46). Warsaw: Banach Centre Publications,
PWN.
Segerberg, K. (1982c). A deontic logic of action. Studia Logica, 41, 269–282.
Segerberg, K. (1985a). Models for action. In B. K. Matilal & J. L. Shaw (Eds.), Analytical philosophy
in comparative perspective (pp. 161–171). Dordrecht: D. Reidel.
Segerberg, K. (1985b). Routines. Synthese, 65, 185–210.
Segerberg, K. (1988a). Actions in dynamic logic (abstract). The Journal of Symbolic Logic, 53,
1285–1286.
Segerberg, K. (1988b). Talking about actions. Studia Logica, 47, 347–352.
Segerberg, K. (1989). Bringing It about. Journal of Philosophical Logic, 18, 327–347.
Shoham, Y. (1987). A semantical approach to non-monotonic logics. In Proceedings of the Symposium on Logic in Computer Science (pp. 275–279). Ithaca.
Suppes, P. (1984). Probabilistic metaphysics. Oxford: Blackwell.
Suszko, R. (1975). Abolition of the Fregean axiom. In P. Rohit (Ed.), Logic colloquium. Symposium
on logic held at Boston, 1972–1973. Lecture Notes in Mathematics (Vol. 453, pp. 169–236).
Berlin: Springer.
Talja, J. (1985). On the logic of omissions. Synthese, 65, 235–248.
Tarski, A. (1955). A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of
Mathematics, 5, 285–309.
Thielscher, M. (1997). Ramification and causality. Artificial Intelligence, 89(1–2), 317–364.
Thielscher, M. (2010). A unifying action calculus. Artificial Intelligence, 175(1), 120–141.
Thomason, R. H. (1970). Indeterminist time and truth-value gaps. Theoria, 36, 265–281.
Thomason, R. H. (1984). Combinations of tense and modality. In D. Gabbay & F. Guenther (Eds.),
The handbook of philosophical logic (Vol. 2, pp. 135–165). Dordrecht: D. Reidel.
Trypuz, R. (Ed.). (2013). Krister Segerberg on logic of actions. Outstanding Contributions to Logic
(Vol. 1). Berlin: Springer.
Trypuz, R., & Kulicki, P. (2014). A deontic logic of actions and states, available on the Internet.
van Benthem, J., & Liu, F. (2004). Diversity of agents in games. Philosophia Scientiae, 8(2), 163–
178.
van Benthem, J., & Pacuit, E. (2006). The tree of knowledge in action: Towards a common per-
spective. In G. Governatori, I. Hodkinson, & Y. Venema (Eds.), Advances in modal logic (Vol. 6,
pp. 87–106). London: College Publications.
van Benthem, J., Gerbrandy, J., Hoshi, T., & Pacuit, E. (2009). Merging framework for interaction.
Journal of Philosophical Logic, 38, 491–526.
van Benthem, J. (2011). Logic of dynamics of information and interaction. Cambridge: Cambridge
University Press.
van Benthem, J. (2011a). Logic in games. Texts in logic and games. FoLLI Series. Amsterdam:
Springer.
van Benthem, J. (2013). Logic in games. MIT Press (to appear).
van Benthem, J., & Pacuit, E. (2013). Connecting logics of choice and change (to appear).
van Benthem, J., van Ditmarsch, H., van Eijck. J., & Jaspars, J. (2013). Logic in action. New edition.
www.logicinaction.org/docs/lia.pdf.
Van der Hoek, W., & Wooldridge, M. (2003). Cooperation, knowledge and time: Alternating-time
temporal epistemic logic and its applications. Studia Logica, 75(1), 125–157.
van der Torre, L. (1997). Reasoning about obligations: Defeasibility in preference-based deontic
logic. Thesis, available on the Internet.
van der Torre, L. (1999). Defeasible goals. In A. Hunter & S. Parsons (Eds.), ESCQARU (Vol.
1638). Berlin: Springer.
van der Torre, L., & Tan, Y. (1999). An update semantics for deontic reasonings. In P. McNamara
& H. Prakken (Eds.), Norms, logics and information systems. New studies on deontic logic and
computer science. Frontiers in Artificial Intelligence and Applications (Vol. 49). Amsterdam:
IOS Press.
van Ditmarsch, H., van der Hoek, W., & Kooi, B. (2013). Dynamic epistemic logic, available on the Internet.
van Leeuwen, J. (Ed.). (1990). Handbook of theoretical computer science (Vol. A-B). Amster-
dam/Cambridge: The MIT Press/Elsevier.
Varzinczak, L. (2006). What is a good domain description? Evaluating and revising theories in dynamic logic. Ph.D. thesis. Université Paul Sabatier, Toulouse.
Varzinczak, L. (2010). On action theory change. Journal of Artificial Intelligence Research, 37,
189–246.
von Kutschera, F. (1980). Grundbegriffe der Handlungslogik. Handlungstheorien - interdisziplinär
(Vol. 1, pp. 67–106). München, Band.
von Wright, G. H. (1963). Norm and action. London: Routledge and Kegan Paul.
von Wright, G. H. (1968). An essay in deontic logic and the general theory of action. Amsterdam:
North Holland.
von Wright, G. H. (1971). Explanation and understanding. Ithaca: Cornell University Press.
von Wright, G. H. (1981). On the logic of norms and actions. In R. Hilpinen (Ed.), New studies in
deontic logic (pp. 3–25). Dordrecht: D. Reidel.
Weinberger, O. (1985). Freedom, range for action and the ontology of norms. Synthese, 65, 307–324.
Wolfram, S. (2002). A new kind of science. Champaign: Wolfram Media.
Wolper, P. (1983). Temporal logic can be more expressive. Information and Control, 56, 72–99.
Wójcicki, R. (1984). Roman Suszko’s situational semantics. Studia Logica, 43, 326–327.
Wójcicki, R. (1984). Heuristic rules of inference in non-monotonic arguments. Bulletin of the
Section of Logic, 17(2), 56–61.
Wansing, H. (2006). Tableaux for multi-agent deliberative stit logic. In G. Governatori, I. Hod-
kinson, & Y. Venema (Eds.), Advances in modal logic (Vol. 6, pp. 503–520). London: College
Publications.
Xu, M. (2012). Actions as events. Journal of Philosophical Logic, 41, 765–809.
Zhang, D., Chopra, S., & Foo, N. (2002). Consistency of action descriptions. In M. Ishizuka,
& A. Sattar (Eds.), Proceedings of the 7th Pacific Rim International Conference on Artificial
Intelligence (PRICAI): Trends in Artificial Intelligence. LNCS (Vol. 2417, pp. 70–79). Berlin:
Springer.
Zhang, D., & Foo, N. (2001). EDPL: A logic for causal reasoning. In B. Nebel (Ed.), Proceedings
of the 17th International Joint Conference on Artificial Intelligence (IJCAI) (pp. 131–138). San Francisco:
Morgan Kaufmann Publishers.
Zhang, D., & Foo, N. (2002). Interpolation properties of action logic: Lazy-formalization to
the frame problem. In S. Flesca, S. Greco, N. Leone, & G. G. Ianni (Eds.), Proceedings of
the 8th European Conference on Logics in Artificial Intelligence (JELIA). LNCS (Vol. 2424,
pp. 357–368). Berlin: Springer.
Symbol Index

Symbols C Dom( f ), 137


↑ a, 122 C Dom(P), 5
↓ a, 122 C DomA, 55
α ∠β, 170 C Dom R A, 55
A∗ , 51 CFL(), 91
|A|, 53 CFL+ (), 91
A∠B, 170 Choiceau , 200
A(D), 84 (C A; ∪, ∩, ∼, ◦, ∗ , −1 ), 51
(A, , B, +), 192 (C + A; ∪, ∩, ∼, ◦, + , −1 ), 52
(A, , B, −), 192 δ, 59
(A, , B, !), 192 δ A (u), 10
(ai , , a j , ?), 229 δ R (u), 10
(a j , Y es), (a j , N o), 229 ♦[a scstit : φ], 207
[a cstit : φ], 201 ♦[a cstit : φ], 201
[a stit : φ], 196 ♦φ, 199
A := (W, , δ, w0 , F), 46 D := (W, V, α, ω, LA), 81
A = (W, , , δ, w0 , α, F), 95 D := (W, V, α, LA), 91
n A, 101 DLREG, 179
B, 59 DL, 164
B+ , 51 DL+ , 173
B−1 , 51 DM , 223
B∗ , 51 Dom( f ), 137
Ba φ, 211 Dom(P), 5
[B], 59 DomA, 55
B ⇔ R C, 59 Dom R A, 55
B ⇒ R C, 59 EW , 5
B ∪ C, 51 F(B | A), 193
B ∩ C, 51 F A, 150
B ◦ C, 51 FA, 177
∼ B, 51 F|= , 236
|=
φ, 203 F< , 237
[a cstit : φ], 204 F = (W, R), 235
[a stit : φ], 197 FP := (W, RP ), 243
φ, 199 ε, 51
•u , 225 ε, 42
C A, 51 [ cstit : φ], 206
C + A, 51  φ, 164

G = (, V, P , α), 45 P ∗, 5
G(Z ), 101 P +, 5
K a φ, 211 P −1 , 5
L ∗ , 43 (P, ), 116
L + , 43 PA, 177
L n , 43 P A, 150
L −1 , 43 Pr (A), 60
L 1 L 2 , 42 Pr (A), 60
L(A), 47 puA , 30
L(G), 45 ReM , 12
M s := (W, R, A, S, Tr, f ), 72 ReωM , 131
M(DL) = (W (DL), R, A), 166 RP (X, ), 243
m A ( · | · ), 30 R(u, w), 8
M = (W, R, V0 , V1 ), 162 REG(), 43
M |=u , 163 REG+ (), 43
M |=u φ, 163 Res A, 52
M = (W, R, A), 8 Res D , 82
M = (W, ≤, Agent, Choice), 200 ⇒G , 45
Mo() |= ψ, 169 ⇒∗G , 45
Mo(DL), 166 S D := W × V , 81
Mo(DL+ ), 173 S D := W × V ∗ , 92
N, 5 SReω , 139
[ cstit : φ], 206 (S1 ⊕ S2 ), 103
[a cstit : φ], 205 , 42
[a scstit : φ], 207  ∗ , 42
ω, 5  + , 42
<ω A, 101 (T, , 0), 241
⊕[ cstit : φ], 206 T r , 72
⊕[a cstit : φ], 206 T r D , 81
O(B | A), 193 T r ee, Agent, Choice, V alue, 203
OA, 150 T r ee, Agent, Choice, Ought, 203
OA, 177 u → A w, 8
!, 226 u → R w, 8
?, 226 u/ h, 198
 i ☞ , i = 1, . . . , 4, 56 u A w, 8
(, A, +), 150 u R w, 8
(, A, −), 150 U = (W, R), 8
(, A, !), 152 (W, R + , R − , A), 171
(φ, α, +), 162 (W, ≤, 0), 159 
(φ, α, −), 162 W, {m A ( · | · ) : A ∈ A} , 32
(φ, α, !), 162 while  do A, 61
P ∪ Q, 5 ℘ (W ), 5
P ∩ Q, 5 | x |, 41
P ◦ Q, 5 ZF, 5
Index of Definitions and Terms

A Axiom of Choice, 25, 117


Accomplished transition, 9 Axiom of Determinacy, 107
Action system, 8
complete, 13
constructive, 130 B
deterministic, 13 Back and forth actions, 138
elementary, 8 Basic deontic logic DL, 164
equivalential, 13 Bi-distribution, 30
normal, 13 Branching system, 197
ordered, 128
reversible, 13
separative, 13 C
situational, 72 Canonical model of DL, 166
strictly equivalential, 13 Cellular automaton, 108
strictly normal, 13 Chain, 117
Action variable, 162 Chain complete (=inductive)
Adjoint relations, 134 poset, 117
Admissible valuation for DL, 168 Child (in a tree), 241
Algebras of compound actions, 52 Circumscription, 233
Algorithm Closure Principle, 171
iterative, 81 Coherence Principle, 24
pushdown, 91 Collatz’s Action System, 61
Alphabet, 42 Combinatorial grammar, 45
Alternating-time epistemic logic (ATEL), 231 Common knowledge operator, 217
Atomic action, 8 σ -complete poset, 121
performable, 22 Completeness Theorem, 165
totally performable, 22 Compound action, 44, 50
Atomic norm, 146 performable, 53
obligatory, 152 totally performable, 53
permissive (= positive), 150 Concatenation, 42
prohibitive (= negative), 150 Conditionally σ -continuous
Attainability, 58 function, 133
Autoepistemic logic, 233 Conditionally σ -continuous
Automaton relation, 133
cellular, 108 Consistent set of norms, 184
finite, 46 Context-free grammar, 90
pushdown, 95 Context-free language, 90

σ-continuous function, 125
σ-continuous relation, 121
σ-continuous relation in the strong sense, 121
Cumulative consequence, 233

D
Dead state (= terminal state), 37
Default logic, 233
Deliberation operator, 223
δ-operator, 59
Derivation, 45
Deterministic strategy, 102
Direct transition relation, 8
Directed graph, 10
Directed set, 116
Directed-complete poset, 117
Discrete system, 8
Downward Löwenheim–Skolem–Tarski Theorem, 126
Doxastic action (= epistemic action), 210
Dyadic norm, 193
Dynamic epistemic logic (DEL), vi, 216

E
Empty word, 42
Epistemic action (= doxastic action), 210
Epistemic profile, 231
Execution, 60
Expansive function, 120
Expansive relation, 119

F
Finitary frame, 236
Finite automaton, 46
Fixed point (= reflexive point), 118
Formal language, 42
Frame, 235
Frame Problem, the, 218
Function
  σ-continuous, 125
  conditionally σ-continuous, 133
  expansive, 120
  monotone, 121
  quasi-expansive, 132

G
Game, 101
Goal, 58
Grammar
  combinatorial, 45
  context-free, 90
  right linear, 45

H
Hamiltonian, 112
History, 198

I
Ideal agent, 98
Index, 198
Inductive poset, 117
Inflationary relation, 118
Initial subword, 42
Intension of an action, 53
Iterative algorithm, 81

K
Knowledge model, 216
Kuratowski’s Lemma, 198

L
Labeled action, 73
Labeled tree, 241
Law of inertia, 218
Leftmost derivation, 91
Loop, 11

M
Measure, 30
Merge of strategies, 103
Model, 199
Model for Sent, 162
Monotone function, 118
Monotone relation, 121
Myhill-Nerode Theorem, 90

N
Non-Monotonic Reasoning, 232
Norm sentence (= normative sentence), 162
Normative proposition, 145

O
Obligatory norm, 152
Operation, 12
Ordered action system, 128
  σ-complete, 130
  constructive, 130
  inductive, 130
Oriented edge, 11

P
Parent (in a tree), 241
Partial isomorphism, 137
Partial order, 116
Payoff, 101
Performability of actions
  in situations, 79
  in states, 22, 53
Permissive (= positive) norm, 150
Phase space, 112
Play, 101
Poset, 116
  σ-complete, 121
  directed-complete, 117
  inductive, 117
Positive Kleene closure, 83
Possible performance
  of a compound action, 53
  of an atomic action, 5
Possible position, 102
Pragmatic rule (= rule of conduct), 242
Preference frame, 237
Principle of Dependent Choices, 120
Probabilistic action system, 29
Program, 60
Prohibitive (= negative) norm, 150
Proof, 20, 164
Pushdown algorithm, 91
Pushdown automaton, 95

Q
Quasi-expansive function, 132
Quasi-expansive relation, 132
Quasi-order, 138

R
Reach, 12
ω-reach, 131
Real action, 210
Realizable operation, 12
Realizable performance
  of a compound action, 53
  of an atomic action, 9
Reflexive point (= fixed point), 118
Regular language, 43
Relation, 5
  binary, 5
  conditionally σ-continuous, 133
  expansive, 119
  inflationary, 118
  quasi-expansive, 132
  total (= serial), 25, 118
Resultant relation
  of an action, 52
  of an algorithm, 82
Right linear grammar, 45
Righteous set of atomic norms, 181
Righteous set of norms, 181
Root (of a tree), 241
Rule of conduct (= pragmatic rule), 242
Rule of inference, 20
Run of situations, 74

S
Sentential formula, 20
Sentential language, 20
Sentential variable, 162
Set of possible effects, 10
Situation, 71
Situational action system, 72
Special subtree, 243
Stack, 91
State, 8
Stit frame, 200
Stit model, 200
Stit semantics, 196
Strategy, 102
Strong Completeness Theorem, 169
Subword, 42

T
Task, 13
Tautological sentence, 163
Terminal state (= dead state), 37
Terminal subword, 42
Total compound action of an algorithm, 84
Total relation, 118
Tree, 159, 241
Tree property, 197
Truth at u in M, 163
Truth axiom T, 212
Truth in M, 163
V
Valuation, 163
Vertex, 11, 241

W
Well-designed algorithm, 83
Well-ordered set, 117
Winning strategy, 106
Word, 41

Z
Zermelo-Fraenkel set theory, 5
Zorn’s Lemma, 117
Index of Names

A
Abian, A., 120
Ajdukiewicz, K., x
Alchourrón, C.E., viii
Atienza, M., 146

B
Balbiani, P., 201
Banach, S., 108
Benthem, J.K. van, vi, viii, 105, 218
Błaszczyk, A., 106
Blikle, A., 44
Bochmann, A., 233
Boden, M., vi, 4
Borel, E., 107
Bratman, M.E., 98
Brown, F.M., viii, 201

C
Cai, J., 118
Cantor, G., 131, 138
Carmody, M., xi, 209
Castilho, M., 210
Chellas, B.F., viii, 196, 201
Chisholm, R., 204, 205
Cholvy, L., 210
Chomsky, N., 46
Chopra, S., 210
Church, A., 41
Collatz, L., 61
Cook, M., 109
Czelakowski, J., 213

D
Davey, B.A., 125
Davidson, D., 98
Desharnais, J., 118
Ditmarsch, H. van, vi, 218
Dunin-Kęplicz, B., 231

E
Ehrenfeucht, A., 138
Eijck, J. van, vi
Eiter, T., 210
Erdem, E., 210

F
Finger, M., 231
Fink, M., 210
Fisher, M., 146
Foo, N., 210
Fraenkel, A., 5, 107
Fraïssé, R., 138

G
Gabbay, D., 146, 231, 234
Gärdenfors, P., viii
Gasparski, W., x
Geach, P.T., 204
Gentzen, G., 234
Gettier, E., 211
Greibach, S.A., 91, 94
Grice, P., 145
H
Halmos, P.R., 32
Hamilton, W.R., 112
Harman, G., 204
Hayek, F., 187
Hayes, P.J., 218
Hendricks, V.F., 216
Herzig, A., 201, 210, 231
Hoek, W. van der, 218, 231
Hopcroft, J.E., 46, 47, 96
Horty, J.F., viii, 146, 196–201, 204–206
Humberstone, L., 204

J
Jaspars, J., vi

K
Kanger, S., viii
Kant, I., 187
Kenny, A., 201
Kleene, S.C., 83, 86, 88, 125, 129
Kolmogorov, A., 41
Konieczny, J., x
Kooi, B., 218
Kotarbiński, T., x
Kraus, S., 234
Kripke, S., 9
Kulicki, P., 146
Kuratowski, K., 198

L
Leeuwen, J. van, 46
Lehmann, D., 234
Lindenbaum, A., 165, 166
Liu, F., 105
Löwenheim, L., 126
Łukasiewicz, J., 145

M
Magidor, M., 234
Makinson, D., viii, 132, 234, 235, 237
Małuszyński, J., 231
Manero, J.R., 146
Martin, D.A., 107
Mazur, S., 108
Mazurkiewicz, A., 88
McCarthy, J., 71, 218, 233
Meinong, A., 204, 205
Meyden, R. van der, 146
Miller, G.A., 46
Minsky, M., 233
Mises, R. von, 41
Möller, B., 118
Montague, R., 71
Moore, E.F., 111, 233
Moschovakis, Y.N., 120, 125
Myhill, J., 90

N
Nerode, A., 90
Neumann, J. von, 111
Nowakowska, M., v, vii, x, 4, 44

P
Pacuit, E., viii
Paige, R., 118
Parent, X., 146
Pogonowski, J., 80
Priestley, H., 125
Prior, A.N., 197

R
Rabin, M.O., 41
Reiter, R., 71, 233
Russell, B., 211

S
Salwicki, A., ix
Samet, D., 218
Scott, D., 9, 71
Segerberg, K., vi, viii, 4, 59, 98, 210
Senko, J., 210
Shoham, Y., 234
Skolem, T., 126
Stonert, H., x
Suppes, P., vi, 4
Suszko, R., 214, 215
Szałas, A., 231

T
Tarski, A., 125, 126, 234
Thielscher, M., 210
Thomason, R.H., 197
Torre, L. van der, 146, 210
Troquard, N., 201, 231
Trypuz, R., 146, 210
Turek, S., 106
Turing, A., 109
U
Ullman, J.D., 46, 47, 96

V
Varzinczak, I., 210

W
Wansing, H., 201
Weinberger, O., 10
Wójcicki, R., 214
Wölfl, S., 201
Wolfram, S., 108, 109, 111
Wooldridge, M., 231
Wright, G.H. von, viii, 4, 146–148, 195

X
Xu, M., 201

Z
Zermelo, E., 5, 107, 120
Zhang, D., 210
Zorn, M.A., 117, 119, 120