4 Back and Forth Between Logic and Games


Erich Grädel
RWTH Aachen University

Abstract
In this chapter we discuss relationships between logic and games, focusing on
first-order logic and fixed-point logics, and on reachability and parity games.
We discuss the general notion of model-checking games. While it is easily
seen that the semantics of first-order logic can be captured by reachability
games, more effort is required to see that parity games are the appropriate
games for evaluating formulae from least fixed-point logic and the modal
µ-calculus. The algorithmic consequences of this result are discussed. We
also explore the reverse relationship between games and logic, namely the
question of how winning regions in games are definable in logic. Finally the
connections between logic and games are discussed for more complicated
scenarios provided by inflationary fixed-point logic and the quantitative
µ-calculus.

4.1 Introduction
The idea that logical reasoning can be seen as a dialectic game, where a
proponent attempts to convince an opponent of the truth of a proposition,
is very old. Indeed, it can be traced back to the studies of Zeno, Socrates,
and Aristotle on logic and rhetoric. Modern manifestations of this idea are
the presentation of the semantics of logical formulae by means of model-
checking games and the algorithmic evaluation of logical statements via
the synthesis of winning strategies in such games.
Model-checking games are two-player games played on an arena which is
formed as the product of a structure A and a formula ψ, where one player,
called the Verifier, attempts to prove that ψ is true in A while the other
player, the Falsifier, attempts to refute this. In contrast to common definitions
of the meaning of logical formulae which proceed bottom-up, from atomic
formulae through the application of logical operators (such as connectives
and quantifiers) to more complicated ones, game-theoretic semantics proceed
top-down. Starting with a complicated sentence, Verifier and Falsifier try to
justify their claims by moves to supposedly simpler assertions. By playing
such an evaluation game the two players thus produce a sequence of formulae
that ends when they reach an atomic statement, which cannot be simplified
further. The Verifier has succeeded in justifying her original claim if the atomic
formula reached at the end of the sequence is true, and Falsifier has won
if it is false. We thus assume that the truth of atomic statements can be
readily determined, for instance by a look-up in a table.
Model-checking games permit us to evaluate logical formulae by solving
algorithmic problems on games such as the computation of winning
regions and the construction of winning strategies. For the most common
logical systems, such as first-order logic (FO) or propositional modal
logic (ML), the construction of the associated model-checking games is
straightforward, and the games are simple in several senses. First of all, the
goals of the players are the simplest conceivable objectives in such games,
namely reachability objectives: each player tries to force the play to a
terminal position where she has won (like checkmate). Secondly, it is the
case that in each move, the formula is strictly simplified, so that every play
terminates after a number of moves that is bounded by the nesting depth of
logical operators in the formula. In particular there are no infinite plays, and
this holds no matter whether the structure on which the formula is evaluated
is finite or infinite. Finally, in the case of finite game graphs, the winner
of such a reachability game can be determined in linear time (with respect
to the size of the game graph). Thus, algorithms for solving reachability
games can be applied to evaluate first-order formulae, and give us a detailed
complexity analysis for the model-checking problem of first-order logic on
finite structures.
But life is not always that simple. For expressing properties of finite
structures, and for defining combinatorial problems on classes of structures
(such as graphs), first-order logic is rather limited. For many tasks arising
in computer science there is thus a need for other kinds of logical systems,
such as temporal logics, dynamic logics, game logics, transitive closure logics,
fixed-point logics and so on, which extend a basic formalism like FO and ML
by more powerful operators.
The natural model-checking games for such logics are more complicated
than reachability games. In particular, they admit infinite plays. Essential
ingredients in the description of such games are the winning conditions
for infinite plays. Among the simplest of these are recurrence (or Büchi)
conditions, which require that certain good states must occur infinitely often
in a play, or eventual safety conditions, which impose that from some point
onwards the play must stay outside a bad region. Of special importance for
us are parity games. These are games of possibly infinite duration where
we assign to each position a natural number, and the winner of an infinite
play is determined according to whether the least number seen infinitely
often in the play is even or odd. The importance of parity games is due to
several reasons.

(1) Many classes of games arising in practical applications admit reductions
to parity games (over larger game graphs). This is the case for games
modelling reactive systems, with winning conditions specified in some
temporal logic or in monadic second-order logic over infinite paths (S1S),
for Muller games, but also for games with partial information appearing
in the synthesis of distributed controllers.
(2) Parity games are positionally determined. This means that from
every position, one of the two players has a winning strategy whose
moves depend only on the current position, not on the history of the play.
This property is fundamental for the algorithmic synthesis of winning
strategies.
(3) Parity games arise as the model-checking games for fixed-point logics
such as the modal µ-calculus or LFP, the extension of first-order logic
by least and greatest fixed-points. Conversely, winning regions of parity
games (with a bounded number of priorities) are definable in both LFP
and the µ-calculus. Parity games are also of crucial importance in the
analysis of structural properties of fixed-point logics.

The last point, the intimate relationship between parity games and fixed-
point logic, is a central theme of this chapter.
We shall start with an introduction to basic notions on reachability and
parity games, explain the notions of winning strategies and winning regions
and discuss algorithmic questions related to games. We study connections
between logic and games for the special case of reachability games. In
particular, we shall present in detail the model-checking games for first-
order logic. After that, we introduce logics with least and greatest fixed
points, such as LFP and the modal µ-calculus, and explain why parity games
are appropriate evaluation games for these logics. We shall also discuss
the algorithmic consequences of this result. Then, the reverse relationship
between games and logic is explored, namely the question of how winning
regions in games are definable in logic. We shall see that for parity games with
a bounded number of priorities, winning regions are definable in both LFP
and the modal µ-calculus. For parity games with an unbounded number of
priorities it is not known whether the winning regions are LFP-definable. We
show that this problem is intimately related to the open question of whether
parity games are solvable in polynomial time. In the last two sections we
shall discuss the relationship between logic and games for more complicated
scenarios provided by inflationary fixed-point logic and the quantitative
µ-calculus. In both cases, we can indeed find generalisations of parity games
with a balanced two-way relationship to the associated fixed-point logic. On
the one hand, we obtain appropriate evaluation games for all formulae in the
logic, and on the other hand, the winning regions in the games are definable
in the logic.

4.2 Reachability games and parity games


We consider turn-based games where two players move a token through a
directed graph, tracing out a finite or infinite path. Such a graph game is
specified by a directed graph G = (V, E), with a partition V = V0 ∪ V1 of the
nodes into positions of Player 0 and positions of Player 1. In case (v, w) ∈ E
we call w a successor of v and we denote the set of all successors of v by vE.
A play in G is a finite or infinite path v0 v1 . . . formed by the two players
starting from a given initial position v0 . Whenever the current position vi
belongs to V0 , then Player 0 chooses a successor vi+1 ∈ vi E, if vi ∈ V1 , then
vi+1 ∈ vi E is selected by Player 1.
For reachability games we define the winning condition either by saying
that Player σ loses at positions v ∈ Vσ where no moves are possible, or by
explicitly including the sets T0 , T1 of winning terminal positions for each
player into the description of the game. A play that is not won by either of the
two players is called a draw. In reachability games, infinite plays are draws.
It is often convenient to have games without draws, so that Player 1 wins
every play that is not won by Player 0, and vice versa. As the complement of
a reachability condition is a safety condition this leads to a reachability-
safety game: the winning condition is given by a set T ⊆ V ; Player 0 wins
a play if it reaches T , and Player 1 wins if it remains inside V \ T .
There is an extensive theory of games with more general winning conditions
for infinite plays that are specified either by logical formulae from some logic
on infinite sequences such as temporal logic (LTL), first-order logic (FO), or
monadic second-order logic (S1S), or by automata-theoretic conditions such
as Muller conditions, Streett–Rabin conditions, or parity conditions (see the
contributions by Christof Löding and Marcin Jurdziński to this book). In
this chapter, only parity conditions will be used.
A parity game is given by a game graph G = (V, V0 , V1 , E) together
with a priority function Ω : V → ω assigning to each position a natural
number. An infinite play π = v0 v1 . . . is won by Player 0 if the least priority
appearing infinitely often in π is even, or no priority appears infinitely often
(which may only happen if the range of Ω is infinite).
Winning strategies, winning regions, and determinacy. A (deter-
ministic) strategy for Player σ is a partial function f : V ∗ Vσ → V that
assigns to finite paths through G ending in a position v ∈ Vσ a successor
w ∈ vE. A play v0 v1 · · · ∈ V ω is consistent with f if, for each initial seg-
ment v0 . . . vi with vi ∈ Vσ , we have that vi+1 = f (v0 . . . vi ). We say that
such a strategy f is winning from position v0 if every play that starts at
v0 and that is consistent with f is won by Player σ. The winning region
of Player σ, denoted Wσ , is the set of positions from which Player σ has a
winning strategy.
A game G, without draws, is called determined if W0 ∪ W1 = V , i.e., if
from each position one of the two players has a winning strategy. For games
with draws, it is appropriate to define determinacy in a slightly different way:
we call a game with draws determined if from each position, either one of the
two players has a winning strategy, or both players have a strategy to achieve
at least a draw. To put it differently, this means that from every position
v ∈ V \ Wσ , Player 1 − σ has a strategy to guarantee that Player σ does
not win. It has been known for almost 100 years that chess is determined
in this sense, see Zermelo [1913]. However, we still do not know which of
the three possibilities holds for the initial position of chess: whether White
has a winning strategy, whether Black has one, or whether both players can
guarantee a draw.
There is a large class of games that are known to be determined, including
all games for which the winning condition is a Borel set (Martin [1975]). One
can show (based on the Boolean Prime Ideal Theorem, which is a weak form
of the Axiom of Choice) that non-determined games exist. However, all
games considered in this chapter are determined in a strong sense.
Computing winning regions of reachability games. To solve a game
algorithmically means to compute the winning regions for the two players.
When considering algorithmic problems of this kind, we always assume that
game graphs are finite. For reachability games, the winning regions can easily
be computed in polynomial time. Denote by Wσn the set of positions from
which Player σ has a strategy to win the game in at most n moves. Then
Wσ0 = {v ∈ V1−σ : vE = ∅} is the set of winning terminal positions for
Player σ, and we can compute the sets Wσn inductively by using
Wσn+1 := Wσn ∪ {v ∈ Vσ : vE ∩ Wσn ≠ ∅} ∪ {v ∈ V1−σ : vE ⊆ Wσn }
until Wσn+1 = Wσn .
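To make the iteration concrete, here is a small Python sketch; the encoding of the game by explicit sets of positions and a successor map E is an illustrative convention, not fixed by the text.

def reachability_winning_region(V, V0, V1, E, sigma):
    """Winning region of Player sigma, computed by the stages W^0, W^1, ...
    described above.  E[v] is the list of successors of position v."""
    mine = V0 if sigma == 0 else V1          # positions where Player sigma moves
    theirs = V1 if sigma == 0 else V0        # positions of the opponent
    win = {v for v in theirs if not E[v]}    # W^0: the opponent is stuck and loses
    while True:
        new = set(win)
        for v in V:
            if v in mine and any(w in win for w in E[v]):
                new.add(v)                   # Player sigma can move into the winning set
            elif v in theirs and all(w in win for w in E[v]):
                new.add(v)                   # every move of the opponent stays in it
        if new == win:
            return win
        win = new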
With a more sophisticated algorithm, which is a clever variant of depth-
first search, one can actually compute the winning regions of both players in
linear time O(|V | + |E|) (see e.g., Grädel [2007]).
Theorem 4.1 Winning regions of finite reachability games, and hence also
reachability-safety games, can be computed in linear time.
Further, the problem of computing winning regions of reachability games
is complete for Ptime (see Greenlaw et al. [1995]).
Positional determinacy and complexity of parity games. Winning
strategies can be very complicated objects since they may depend on the
entire history of a play. However, for many important games, including
reachability, safety, and parity games, it suffices to consider positional
strategies, which are strategies that depend only on the current position,
not on the history of the play. A game is positionally determined, if it is
determined, and each player has a positional winning strategy on her winning
region.
The positional determinacy of reachability games – and reachability-safety
games – is obvious since the winning condition itself is purely positional. For
parity games the positional determinacy is a non-trivial and fundamental
result. It has been established independently by Emerson and Jutla [1991]
and Mostowski [1991] for parity games with a finite game graph. This is
generalised by Zielonka [1998] to infinite game graphs with a finite number
of priorities. Finally positional determinacy has been extended by Grädel
and Walukiewicz [2006] to parity games with rng(Ω) = ω.
Theorem 4.2 Every parity game is positionally determined.
In a parity game G = (V, V0 , V1 , E, Ω), a positional strategy for Player σ,
defined on W ⊆ V , can be represented by a subgraph H = (W, S) ⊆ (V, E)
such that there is precisely one outgoing S-edge from each node v ∈ Vσ ∩ W
and vS = vE for each node v ∈ V1−σ ∩ W . On a finite game graph, such a
strategy is winning on W if, and only if, the least priority on every cycle in
(W, S) has the same parity as σ.
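The following sketch checks this criterion naively (in Python; the encoding of the strategy subgraph by a successor map S and of Ω by a dictionary is an illustrative assumption): for every priority p of the wrong parity it looks for a cycle whose least priority is exactly p.

def positional_strategy_is_winning(W, S, priority, sigma):
    """Check that no cycle of the strategy subgraph (W, S) has a least
    priority of the wrong parity; by the criterion above this is equivalent
    to the strategy being winning for Player sigma on W."""
    for p in {priority[v] for v in W if priority[v] % 2 != sigma}:
        sub = {v for v in W if priority[v] >= p}   # a cycle with least priority p lives here
        for v in (u for u in sub if priority[u] == p):
            # Is v on a cycle inside sub?  Search back to v from its successors.
            stack, seen = [w for w in S[v] if w in sub], set()
            while stack:
                u = stack.pop()
                if u == v:
                    return False                   # a bad cycle has been found
                if u not in seen:
                    seen.add(u)
                    stack.extend(w for w in S[u] if w in sub)
    return True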
Hence, given a finite parity game G and a positional strategy (W, S) it can
be decided in polynomial time, whether the strategy is winning on W . To
decide winning regions we can therefore just guess winning strategies, and
verify them in polynomial time.
Corollary 4.3 Winning regions of parity games (on finite game graphs)
can be decided in NP ∩ Co-NP.
In fact, Jurdziński [1998] proved that the problem is in UP ∩ Co-UP,
where UP denotes the class of NP-problems with unique witnesses. The
best known deterministic algorithm has complexity n^O(√n) (Jurdziński et al.
[2006]). For parity games with a number d of priorities the progress measure
lifting algorithm by Jurdziński [2000] computes winning regions in time
O(dm · (2n/(d/2))^(d/2)) = O(n^(d/2+O(1))), where m is the number of edges,
giving a polynomial-time algorithm when d is bounded. The two approaches
can be combined to achieve a worst-case running time of O(n^(d/3+O(1))) for
solving parity games with d priorities. These, and other, algorithms are
explained in detail in Jurdziński’s contribution to this book.

4.3 Reachability games and logic


We now discuss connections between logic and games for the special case of
reachability games. We assume that the reader is familiar with first-order
logic.
(1) Computing winning regions of reachability games is equivalent, under
very simple reductions, to computing minimal models for propositional
Horn formulae.
(2) The model-checking games for first-order logic are reachability games.
We will then discuss the definability problem for winning regions of reach-
ability games and see that more powerful formalisms than first-order logic
are needed.

4.3.1 Games and Horn formulae


Recall that a propositional Horn formula is a conjunction of implication
clauses of the form Z ← X1 ∧ · · · ∧ Xk where X1 , . . . Xk are propositional
variables, forming the body of the clause, and Z, the head of the clause, is
either also a propositional variable, or the constant 0. Notice that the body
of the clause can also be empty, in which case the clause takes the form
Z ← 1. (Indeed if a Horn formula contains no clause of this form, then it is
trivially satisfiable by setting all variables to false.)
It is well known that Sat-Horn, the satisfiability problem for propositional
Horn formulae, is Ptime-complete (see Greenlaw et al. [1995]) and solvable
in linear time (Dowling and Gallier [1984], Itai and Makowsky [1987]). Hence
its computational properties are very similar to those of reachability games.
Actually there is a simple way of going back and forth between solving
reachability games and finding satisfying assignments for Horn formulae, so
that the two problems are solved by essentially the same algorithms.
From reachability games to Horn formulae: Given a finite game graph G =
(V, V0 , V1 , E), we can construct in linear time a propositional Horn formula
ψG consisting of the clauses u ← v for all edges (u, v) ∈ E with u ∈ V0 , and
the clauses u ← v1 ∧ · · · ∧ vm for all nodes u ∈ V1 , where uE = {v1 , . . . , vm }.
It is easy to see that the winning region W0 for Player 0 in G coincides with
the minimal model for ψG . Hence v ∈ W0 if, and only if, the Horn formula
ψG ∧ (0 ← v) is unsatisfiable.
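Spelled out as code, the translation and the computation of the minimal model might look as follows; this is only a sketch, and the representation of clauses as (head, body) pairs is an assumption made for the example.

def game_to_horn(V0, V1, E):
    """The translation described above: E[u] lists the successors of u,
    and a clause is a pair (head, body)."""
    clauses = [(u, [v]) for u in V0 for v in E[u]]       # u <- v for u in V0
    clauses += [(u, list(E[u])) for u in V1]             # u <- v1 & ... & vm for u in V1
    return clauses

def minimal_model(clauses):
    """Minimal model of a set of Horn clauses by forward chaining;
    for the clauses of psi_G it coincides with the winning region W0."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in model and all(x in model for x in body):
                model.add(head)
                changed = True
    return model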
From Horn formulae to reachability games: With a Horn formula ψ = ⋀i∈I Ci
with propositional variables X1 , . . . , Xn and Horn clauses Ci of the form
Zi ← Xi1 ∧ · · · ∧ Xim we associate a game Gψ as follows. The positions of
Player 0 are the initial position 0 and the propositional variables X1 , . . . , Xn ,
and the positions of Player 1 are the clauses Ci of ψ. Player 0 can move from
a position X to any clause Ci with head X, and Player 1 can move from a
clause Ci to any variable occurring in the body of Ci . Formally, Gψ = (V, E),
V = V0 ∪ V1 with V0 = {0} ∪ {X1 , . . . , Xn }, V1 = {Ci : i ∈ I}, and
E ={(X, C) ∈ V0 × V1 : X = head(C)}∪
{(C, X) ∈ V1 × V0 : X ∈ body(C)}.
Player 0 has a winning strategy for Gψ from position X if, and only if, ψ |= X.
In particular, ψ is unsatisfiable if, and only if, Player 0 wins from position 0.
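The converse construction can be sketched in the same style (again, the clause representation and the tagging of clause positions by their index are illustrative choices).

def horn_to_game(variables, clauses):
    """Build the game G_psi: Player 0 owns the constant 0 and the variables,
    Player 1 owns the clauses; moves follow the description above."""
    V0 = {0} | set(variables)
    V1 = {("clause", i) for i in range(len(clauses))}
    E = {x: [("clause", i) for i, (head, _) in enumerate(clauses) if head == x]
         for x in V0}                                 # from a head to its clauses
    for i, (_, body) in enumerate(clauses):
        E[("clause", i)] = list(body)                 # from a clause to the variables in its body
    return V0, V1, E

Together with the winning-region computation sketched in Section 4.2, this gives a solver for Sat-Horn along the lines described above.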

4.3.2 Model-checking games for first-order logic


For a logic L and a domain D of structures, the model-checking problem
asks, given a structure A ∈ D and a formula ψ ∈ L, whether it is the
case that A |= ψ. Model-checking problems can be reformulated in game-
theoretic terms using appropriate model-checking games. With a sentence ψ
and a structure A (of the same vocabulary as ψ), we associate a model-checking
game G(A, ψ). It is played by two players, Verifier and Falsifier. Verifier
(also called Player 0) tries to prove that A |= ψ, whereas Falsifier (also called
Player 1) tries to establish that the sentence is false. For first-order logic,
the evaluation games are simple, in the sense that (1) all plays are finite
(regardless of whether the input structure is finite or infinite) and (2) winning
conditions are defined in terms of reachability.
Let us assume that A = (A, R1 , . . . , Rm ) is a relational structure and ψ
is a first-order sentence in negation normal form, i.e., built up from atoms
and negated atoms by means of the propositional connectives ∧, ∨ and the
quantifiers ∃, ∀. Obviously, any first-order formula can be converted in linear
time into an equivalent one in negation normal form. The model-checking
game G(A, ψ) has positions ϕ(a) where ϕ(x) is a subformula of ψ which is
instantiated by a tuple a of elements of A. The initial position of the game
is the formula ψ.
Verifier (Player 0) moves from positions associated with disjunctions and
with formulae starting with an existential quantifier. From a position ϕ ∨ ϑ,
she moves to either ϕ or ϑ. From a position ∃yϕ(a, y), Verifier can move to any
position ϕ(a, b), where b ∈ A. Dually, Falsifier (Player 1) makes corresponding
moves for conjunctions and universal quantifications. At atoms or negated
atoms, i.e., positions ϕ(a) of the form a = a′ , a ≠ a′ , Ra, or ¬Ra, the game
is over. Verifier has won the play if A |= ϕ(a); otherwise, Falsifier has won.
Model-checking games are a way of defining the semantics of a logic. The
equivalence to the standard definition can be proved by a simple induction.
Theorem 4.4 Verifier has a winning strategy from position ϕ(a) in the
game G(A, ψ) if, and only if, A |= ϕ(a).
This suggests a game-based approach to model-checking: given A and
ψ, construct the game G(A, ψ) and decide whether Verifier has a winning
strategy from the initial position.
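The recursion below is one way to sketch this procedure in Python: it decides directly whether Verifier has a winning strategy, letting Verifier choose at disjunctions and existential quantifiers and Falsifier at conjunctions and universal quantifiers. The encoding of formulae as nested tuples and of A as a pair of a universe and a relation dictionary is an assumption made for the example.

def verifier_wins(A, phi, assignment):
    """Does Verifier win the game G(A, psi) from position phi under the given
    variable assignment?  Formulae are in negation normal form, e.g.
    ('exists', 'y', ('atom', 'E', ('x', 'y')))."""
    universe, relations = A
    op = phi[0]
    if op == 'atom':                      # the play is over: look up the atom
        return tuple(assignment[t] for t in phi[2]) in relations[phi[1]]
    if op == 'negatom':
        return tuple(assignment[t] for t in phi[2]) not in relations[phi[1]]
    if op == 'or':                        # Verifier chooses a disjunct
        return any(verifier_wins(A, sub, assignment) for sub in phi[1:])
    if op == 'and':                       # Falsifier chooses a conjunct
        return all(verifier_wins(A, sub, assignment) for sub in phi[1:])
    if op == 'exists':                    # Verifier picks a witness b in A
        return any(verifier_wins(A, phi[2], {**assignment, phi[1]: b}) for b in universe)
    if op == 'forall':                    # Falsifier picks a witness b in A
        return all(verifier_wins(A, phi[2], {**assignment, phi[1]: b}) for b in universe)
    raise ValueError(op)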

4.3.3 Complexity of first-order model-checking


A model-checking problem has two inputs: a structure and a formula. We
can measure the complexity in terms of both inputs, and this is what is
commonly referred to as the combined complexity of the model-checking
problem (for L and D). However, in many cases, one of the two inputs is
fixed, and we measure the complexity only in terms of the other. If we fix
the structure A, then the model-checking problem for L on this structure
amounts to deciding ThL (A) := {ψ ∈ L : A |= ψ}, the L-theory of A. The
complexity of this problem is called the expression complexity of the
model-checking problem (for L on A). Especially in finite model theory, one
often considers model-checking problems for a fixed formula ψ, which amounts
to deciding the model class of ψ inside D, ModD (ψ) := {A ∈ D : A |= ψ}.
Its complexity is the structure complexity of the model-checking problem
(for ψ on D).
Since reachability games can be solved in linear time, the size of the game
graph directly gives us an upper bound for the time complexity for first-order
model-checking. The size of the model-checking game G(A, ψ) is the number
of different instantiations of the subformulae of ψ with elements from A. It
depends on several parameters, including the cardinality of the structure
A, the number of subformulae of ψ (which is of course bounded by the
length of ψ) and the width of ψ, which is defined as the maximal number of
free variables in subformulae of ψ. Clearly, |G(A, ψ)| ≤ |ψ| · |A|width(ψ) , so
the crucial parameter is the width of the formula: if we have subformulae
with many free variables, then the number of instantiations, and thus the
size of the game, becomes very large. In general the combined complexity
and the expression complexity of the first-order model-checking problem are
Pspace-complete. However, the game graphs have polynomial size for any
class of first-order formulae with bounded width.
Theorem 4.5 The model-checking problem for first-order logic is Pspace-
complete. For any fixed k ≥ 2, the model-checking problem for first-order
formulae of width at most k is Ptime-complete.
Exercise 4.1 Prove the hardness results. Reduce QBF, the problem of
evaluating quantified Boolean formulae, to the model-checking problem for
first-order logic on a fixed structure with two elements. Reduce the problem
of solving reachability games to the model-checking problem for formulae of
width 2.
By applying the game-based analysis of model-checking to the case of a
fixed sentence ψ, we see that the structure complexity of first-order logic
is much lower than the expression or combined complexity. In particular,
the evaluation problem for any fixed first-order sentence can be computed
deterministically in logarithmic space.
For a detailed study of the complexity of first-order model-checking, giv-
ing precise complexity bounds in terms of deterministic and alternating
complexity classes, the reader may consult Grädel [2007].

4.3.4 Definability of winning regions


Let S be a class of games, represented as structures of some fixed vocabulary.
We say that winning regions on S are definable in a logic L if there
exist formulae ψ0 (x) and ψ1 (x) of L that define, on each game G ∈ S, the
winning regions W0 and W1 for the two players. This means that, for each
game G ∈ S, and σ = 0, 1
Wσ = {v ∈ G : G |= ψσ (v)}.
We can view a logic L and class S of games as balanced, if on the one
hand, S provides model-checking games for L, and on the other hand, the
winning regions for games in S are definable in L.
While reachability games are appropriate model-checking games for first-
order logic, the reverse relationship does not hold. Indeed it is well-known
that the expressive power of first-order logic, for defining properties of finite
or infinite structures, is rather limited. A general result making this precise
is Gaifman’s Theorem, saying that first-order logic can express only local
properties. For an exact statement and proof of this fundamental result,
we refer to Ebbinghaus and Flum [1999]. Perhaps the simplest query that
is not local, and hence not first-order definable, is reachability: Given a
directed graph G = (V, E) and a starting node v, determine the set of all
nodes that are reachable from v. This also implies that first-order logic is too
weak for reachability games; indeed the reachability problem can be viewed
as the problem of computing winning regions in the special case of one-player
reachability games.
Theorem 4.6 Winning regions of reachability games are not first-order
definable.
Thus, already for reachability games, and even more so for parity games,
more powerful logics are required to define the winning regions. Appropriate
logics for this are fixed-point logics that we are going to study in the next
section. In particular, we shall see that LFP and parity games (with a
bounded number of priorities) are balanced.

4.4 Logics with least and greatest fixed-points


Consider a formula ψ(R, x) of vocabulary τ ∪ {R} where x is a tuple of
variables whose length matches the arity of R. Such a formula defines, for
every τ -structure A, an update operator Fψ : P(Ak ) → P(Ak ) on the class
of k-ary relations on A, by
Fψ : R ↦ {a : (A, R) |= ψ(R, a)}.
A fixed-point of Fψ is a relation R for which Fψ (R) = R. Fixed-point
logics extend a basic logical formalism (such as first-order logic, conjunctive
relational operators. Notice that, in general, fixed-points of Fψ need not exist,
relational operators. Notice that, in general, fixed-points of Fψ need not exist,
or there may exist many of them. We therefore consider special kinds of
fixed-points, such as least and greatest, and later inflationary and deflationary
fixed-points, and we impose additional conditions on the relational operators
to guarantee that these fixed-points exist.
We shall now describe some basic facts of fixed-point theory for powerset
lattices (P(B), ⊆), where B is an arbitrary (finite or infinite) set. An operator
F : P(B) → P(B) is monotone, if it preserves inclusion, i.e., F (X) ⊆ F (Y )
whenever X ⊆ Y . A fixed-point X of F is called the least fixed-point of F
if X ⊆ Y for all fixed-points Y of F . Similarly, if all fixed-points of F are
subsets of a fixed-point X, then X is the greatest fixed-point of F .
Theorem 4.7 (Knaster–Tarski) Every monotone operator F : P(B) →
P(B) has a least fixed-point lfp(F ) and a greatest fixed-point gfp(F ). Further,
these fixed-points may be written in the form
lfp(F ) = ⋂{X : F (X) = X} = ⋂{X : F (X) ⊆ X}
gfp(F ) = ⋃{X : F (X) = X} = ⋃{X : F (X) ⊇ X}.
A proof can be found in any standard exposition on fixed-point theory
or fixed-point logics (see e.g., Grädel [2007]). Least fixed-points can also be
constructed inductively. We call an operator F : P(B) → P(B) inductive if
the sequence of its stages X α (where α ranges over the ordinals), defined by
X 0 := ∅,
X α+1 := F (X α ), and
X λ := ⋃α<λ X α for limit ordinals λ,

is increasing, i.e., if X β ⊆ X α for all β < α. Obviously, monotone operators
are inductive. The sequence of stages of an inductive operator eventually
reaches a fixed-point, which we denote by X ∞ . The least ordinal β for which
X β = X β+1 = X ∞ is called the closure ordinal of F .
Exercise 4.2 Prove that the cardinality of the closure ordinal of every
inductive operator F : P(B) → P(B) is bounded by the cardinality of B.
However, the closure ordinal itself can be larger than |B|. Prove this by an
example.
Theorem 4.8 For monotone operators, the inductively constructed fixed-
point coincides with the least fixed-point: X ∞ = lfp(F ).
Proof As X ∞ is a fixed-point, lfp(F ) ⊆ X ∞ . For the converse, we show
by induction that X α ⊆ lfp(F ) for all α. As lfp(F ) = ⋂{Z : F (Z) ⊆ Z}, it
suffices to show that X α is contained in all Z for which F (Z) ⊆ Z.
For α = 0, this is trivial. By monotonicity and the induction hypothesis,
we have X α+1 = F (X α ) ⊆ F (Z) ⊆ Z. For limit ordinals λ with X α ⊆ Z for
all α < λ we also have X λ = ⋃α<λ X α ⊆ Z.
The greatest fixed-point can be constructed by a dual induction, starting
with Y 0 = B, by setting Y α+1 := F (Y α ) and Y λ = ⋂α<λ Y α for limit
ordinals. The decreasing sequence of these stages then eventually converges
to the greatest fixed-point Y ∞ = gfp(F ).
The least and greatest fixed-points are dual to each other. For every
monotone operator F , the dual operator F d : X ↦ B \ F (B \ X) is also
monotone, and we have that
lfp(F ) = B \ gfp(F d ) and gfp(F ) = B \ lfp(F d ).
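On a finite set B both inductions terminate after at most |B| steps and can therefore be run directly; a minimal sketch in Python (F may be any monotone operation on subsets of B):

def lfp(F):
    """Least fixed-point via the stages X^0 = {}, X^(n+1) = F(X^n)."""
    X = set()
    while True:
        Y = set(F(X))
        if Y == X:
            return X
        X = Y

def gfp(F, B):
    """Greatest fixed-point via the dual induction starting from Y^0 = B."""
    X = set(B)
    while True:
        Y = set(F(X))
        if Y == X:
            return X
        X = Y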

4.4.1 Least fixed-point logic and reachability games


Least fixed-point logic (LFP) is defined by adding to the syntax of first-
order logic the following least fixed-point formation rule: If ψ(R, x) is a
formula of vocabulary τ ∪ {R} with only positive occurrences of R, if x is a
tuple of variables, and if t is a tuple of terms (such that the lengths of x and
t match the arity of R), then also
[lfpRx . ψ](t) and [gfpRx . ψ](t)
are formulae of vocabulary τ . The free first-order variables of these formulae
are those in (free(ψ) − {x : x in x}) ∪ free(t).
Semantics. Since R occurs only positively in ψ, the update operator Fψ , defined
by ψ on any τ -structure A (providing interpretations for all free variables in
the formula) is monotone. We define that A |= [lfpRx . ψ](t) if, and only if,
t^A (the tuple of elements of A interpreting t) is contained in lfp(Fψ ). The
definition for greatest fixed-points is analogous.
Obviously, LFP is a fragment of second-order logic. Indeed, by the Tarski–
Knaster Theorem,
[lfpRx . ψ(R, x)](y) ≡ ∀R((∀x(ψ(R, x) → Rx)) → Ry)
[gfpRx . ψ(R, x)](y) ≡ ∃R((∀x(Rx → ψ(R, x))) ∧ Ry).
Perhaps the simplest example of a problem that is expressible in LFP, but
not in first-order logic, is reachability: Given a graph G = (V, E) and a
starting point v, find the set of nodes that are reachable by a path from v.
It is definable in LFP, by the formula

ψ(x) := [lfpRx . x = v ∨ ∃z(Rz ∧ Ezx)](x).

Indeed, in any graph (G, v), the set ψ G,v := {w : G, v |= ψ(w)} is precisely
the set of nodes reachable from v.
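Unwinding the fixed-point induction for this formula gives the familiar reachability computation; a small sketch (the encoding of the graph as a successor dictionary is an illustrative assumption):

def reachable(E, v):
    """Stages of the least fixed-point defined by psi above:
    R^0 = {}, R^(n+1) = {x : x = v, or some z in R^n has an edge to x}."""
    R = set()
    while True:
        R_next = {v} | {x for z in R for x in E.get(z, [])}
        if R_next == R:
            return R
        R = R_next

# e.g. reachable({1: [2], 2: [3], 4: [1]}, 1) returns {1, 2, 3}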

Exercise 4.3 Prove that the LFP-sentence

ψ := ∀y∃zF yz ∧ ∀y[lfpRy . ∀x(F xy → Rx)](y)

is an infinity axiom, i.e., it is satisfiable but does not have a finite model.

We have noticed above that winning regions of reachability and safety
games are not first-order definable. However, it is not difficult to generalise
the LFP-definition of reachability to LFP-definitions for the winning
regions of reachability (and safety) games. Consider reachability-safety
games G = (V, V0 , V1 , E, T ) where Player 0 wants to reach T and Player 1
tries to stay outside of T . On such games, the winning region W0 of Player 0
is uniformly definable by the LFP-formula ψ0 (x) := [lfp W x . ϕ](x) with

ϕ(W, x) := T x ∨ (V0 x ∧ ∃y(Exy ∧ W y)) ∨ (V1 x ∧ ∀y(Exy → W y)).

The complement of W0 , which is the winning region for Player 1 for her
associated safety condition, is defined by a greatest fixed-point formula
ψ1 (x) := [gfp W x . η(W, x)](x) with

η(W, x) := ¬T x ∧ (V0 x → ∀y(Exy → W y)) ∧ (V1 x → ∃y(Exy ∧ W y)).

This is just a special case of the duality between least and greatest
fixed-points which implies that for any formula ϕ,

[gfpRx . ϕ](t) ≡ ¬[lfpRx . ¬ϕ[R/¬R]](t),

where ϕ[R/¬R] is the formula obtained from ϕ by replacing all occurrences of
R-atoms by their negations. (As R occurs only positively in ϕ, the same is true
for ¬ϕ[R/¬R].) Because of this duality, greatest fixed-points are sometimes
omitted in the definition of LFP. However, for studying the relationship
between LFP and games it is much more convenient to keep the greatest
fixed-points, and to use the duality (and De Morgan’s laws) to translate
LFP-formulae to negation normal form, i.e., to push negations all the way
to the atoms.
4.4.2 Capturing polynomial time


Let ϕ be a formula such that, for any given structure A, the update operator
Fϕ : P(Ak ) → P(Ak ) is monotone and computable in polynomial time
(with respect to |A|). Then also the fixed-points lfp(Fϕ ) and gfp(Fϕ ) are
polynomial-time computable since the inductive constructions of least and
greatest fixed points terminate after at most |A|k iterations of Fϕ . Together
with the fact that first-order operations are polynomial-time computable
we can conclude, by induction, that every LFP-definable property of finite
structures is computable in polynomial time.

Theorem 4.9 Let ψ be a sentence in LFP. It is decidable in polynomial time
whether a given finite structure A is a model of ψ. In short, LFP ⊆ Ptime.

Further, we have already seen that LFP can define properties that are
actually Ptime-complete, such as winning regions in reachability games.
This leads to the question of whether LFP can express all properties of finite
structures that are computable in polynomial time.
This is indeed the case when we consider ordered finite structures. We say
that a logic L captures a complexity class C on a domain D of finite
structures, if (1) for every fixed sentence ψ ∈ L, the complexity of evaluating
ψ on structures from D is a problem in the complexity class C, and (2)
every property of structures in D that can be decided with complexity C
is definable in the logic L. For any finite vocabulary τ , we write Ord(τ )
for the class of all structures (A, <), where A is a finite τ -structure and <
is a linear order on (the universe of) A. It is one of the most influential
results of finite model theory that for every model class K ⊆ Ord(τ ) that
is decidable in polynomial time, there exists a sentence ψ ∈ LFP such that
K = {A ∈ Ord(τ ) : A |= ψ}.

Theorem 4.10 (Immerman and Vardi) On ordered finite structures, least
fixed-point logic captures polynomial time.

However, in the absence of a linear ordering, LFP fails to express all
Ptime-properties. Indeed, there are quite trivial queries on unordered finite
structures that are not LFP-definable. A simple example is the question of
whether a given finite structure has an even number of elements.
The question of whether there exists a logic that captures Ptime on arbi-
trary finite structures, originally posed by Chandra and Harel [1982], is the
most important open problem of finite model theory. The most promising
candidates are suitable extensions of LFP (or other fixed-point logics). How-
ever, many people conjecture that no logic whatsoever can capture Ptime
on the domain of arbitrary finite structures. Since there exist logics for NP
this would imply that P 6= NP.

4.4.3 Model-checking games for least fixed-point logic


We now construct evaluation games for LFP-formulae. We make the following
assumptions:

(1) Fixed-point formulae do not contain parameters. This means that in a
subformula [fp Rx . ϕ] (where fp means either lfp or gfp), the formula
ϕ(R, x) contains no free first-order variables besides those in x. This is
no loss of generality since one can always eliminate parameters, but it
may affect the complexity of model-checking algorithms.
(2) Formulae are in negation normal form, i.e., negations apply to atoms only.
Due to the standard dualities of first-order operators and the duality of
least and greatest fixed points, this is no loss of generality.
(3) Every fixed-point variable is bound only once and the free relation
variables are distinct from the fixed-point variables. For every fixed-point
variable T occurring in ψ, we write ϕT for the unique subformula in ψ
of the form [fp T x . η(T, x)].
(4) Finally, we require that each occurrence of a fixed-point variable T in
ϕT is inside the scope of a quantifier. Again, this is no loss of generality.

For two fixed-point variables S, T , we say that S depends on T if T
occurs free in ϕS . The transitive closure of this dependency relation is called
the dependency order, denoted by @ψ . The alternation level alψ (T ) of
T in ψ is the maximal number of alternations between least and greatest
fixed-point variables on the @ψ -paths from T . The alternation depth ad(ψ)
of a fixed-point formula ψ is the maximal alternation level of its fixed-point
variables.
For a structure A and an LFP-sentence ψ, the arena of the model-checking
game G(A, ψ) is defined as for first-order model-checking games, with ad-
ditional moves for fixed-point formulae. The positions are subformulae of
ψ instantiated by elements of A. The moves are as in the first-order game,
except for the positions associated with fixed-point formulae and with fixed-
point atoms. At such positions there is a unique move (by Falsifier, say)
to the formula defining the fixed-point. For each fixed-point variable T in
ψ, there is a unique subformula [fp T x . ϕ(T, x)](y) of ψ. From position
[fp T x . ϕ(T, x)](b), Falsifier moves to ϕ(T, b), and from any fixed-point atom
T c, she moves to the position ϕ(T, c).
Notice that if ψ does not contain fixed-points, this game is the model-
checking game for first-order logic. However, if we have fixed-points the
games may now admit infinite plays. The winning condition for infinite plays
will be a parity condition. To motivate the priority assignment let us discuss
some special cases:
Consider a formula with just one lfp-operator, applied to a first-order
formula. The intuition is that from position [lfp T x . ϕ(T, x)](b), Verifier
tries to establish that b enters T at some stage α of the fixed-point induction
defined by ϕ on A. The game goes to ϕ(T, b) and from there, as ϕ is a
first-order formula, Verifier can either win the ϕ-game in a finite number of
steps, or force it to a position T c, where c enters the fixed-point at some
stage β < α. The game then resumes at position ϕ(c). As any descending
sequence of ordinals is finite, Verifier will win the game in a finite number
of steps. If the formula is not true, then Falsifier can either win in a finite
number of steps or force the play to go through infinitely many positions of
the form T c. Hence, these positions should be assigned priority 1 (and all
other positions higher priorities) so that such a play will be won by Falsifier.
For gfp-formulae, the situation is reversed. Verifier wants to force an infinite
play, going infinitely often through positions T c, so gfp-atoms are assigned
priority 0.
In the general case, we have a formula ψ with nested least and greatest
fixed-points, and in an infinite play of G(A, ψ) one may see different fixed-
point variables infinitely often. But one of these variables is then the smallest
with respect to the dependency order @ψ . It can be shown that A |= ψ if,
and only if, this smallest variable is a gfp-variable (provided the players play
optimally).
Hence, the priority labelling is defined as follows.
(1) Even priorities are assigned to gfp-atoms and odd priorities to lfp-atoms.
(2) If S @ψ T and S, T are fixed-point variables of different kinds, then
S-atoms should get a lower priority than T -atoms.
(3) All positions that are not fixed-point atoms, get a maximal (i.e., most
irrelevant) priority.
This completes the definition of the game G(A, ψ). Note that the number
of priorities in G(A, ψ) is essentially the alternation depth of ψ.
We want to prove that G(A, ψ) is indeed a correct model-checking game for
ψ in A. The proof proceeds by induction on A. The interesting case concerns
fixed-point formulae ψ(a) := [gfpT x . ϕ(x)](a). By the inductive construction
of greatest fixed-points, A |= [gfpT x . ϕ(x)](a) if, and only if, (A, T α ) |= ϕ(a)
for all stages T α of the gfp-induction of ϕ on A. Further, by the induction
hypothesis, we know that, for every interpretation T0 of T , (A, T0 ) |= ϕ(a) if,
and only if, Player 0 has a winning strategy for the game G((A, T0 ), ϕ(a)).
It suffices therefore to show that Player 0 wins the game G := G(A, ψ(a))
if, and only if, she wins all games G((A, T α ), ϕ(a)). But this follows from a
general fact on parity games, the so-called Unfolding Lemma.
The unfolding of a parity game. Let G = (V, V0 , V1 , E, Ω) be a parity
game that has at least one node with priority 0 and in which every node v
with priority 0 has a unique successor s(v) (i.e., vE = {s(v)}). This condition
holds for the game G(A, ψ(a)), since the positions of minimal priority are
the fixed-point atoms T b which have unique successors ϕ(b).
Let Z be the set of nodes with priority 0 and let G − be the game obtained
by deleting from G all edges (v, s(v)) ∈ E ∩ (Z × V ) so that the nodes
in Z become terminal positions. The unfolding of G is a sequence G α
(where α ranges over the ordinals) which all coincide with G − up to the
winning conditions for the terminal positions v ∈ Z. For every α, we define
a decomposition Z = Z0α ∪ Z1α , where Zσα is the set of terminal positions
v ∈ Z at which we declare, for the game G α , that Player σ has won. Further,
for every α, we define Wσα to be the winning region of Player σ in the game G α .
Note that Wσα depends of course on the decomposition Z = Z0α ∪ Z1α (also
for positions outside Z). In turn, the decomposition of Z for α + 1 depends
on the winning sets Wσα in G α . We set
Z00 := Z
Z0α+1 := {v ∈ Z : s(v) ∈ W0α }
Z0λ := ⋂α<λ Z0α for limit ordinals λ.

By determinacy, V = W0α ∪ W1α for all α, and with increasing α, the
winning sets of Player 0 are decreasing and the winning sets of Player 1 are
increasing:
W00 ⊇ W01 ⊇ · · · ⊇ W0α ⊇ W0α+1 ⊇ · · ·
W10 ⊆ W11 ⊆ · · · ⊆ W1α ⊆ W1α+1 ⊆ · · · .
Hence there exists an ordinal α (whose cardinality is bounded by the
cardinality of V ) for which W0α = W0α+1 =: W0∞ and W1α = W1α+1 =: W1∞ .
The crucial result on unfoldings of parity games states that these fixed-points
coincide with the winning regions W0 and W1 of the original game G.
Lemma 4.11 (Unfolding Lemma) W0 = W0∞ and W1 = W1∞ .
For a proof, see Grädel [2007].
By ordinal induction, one can easily see that the games G((A, T α ), ϕ(a))
associated with the gfp-induction of ϕ in A coincide with the unfolding of the
game G = G(A, ψ(a)). By the Unfolding Lemma, we conclude that Player 0
wins the game G(A, ψ(a)) if, and only if, she wins all games G((A, T α ), ϕ(a)).
By the induction hypothesis this holds if, and only if, (A, T α ) |= ϕ(a) for all
α, which is equivalent to A |= ψ(a).
For least fixed-point formulae we can dualize the arguments.

Theorem 4.12 Let ψ be a well-named and parameter-free LFP-formula in
negation normal form, and let A be a relational structure. Then Player 0 has
a winning strategy from position ψ(a) in the game G(A, ψ(a)) if, and only if,
A |= ψ(a).

For future reference we note that the model-checking games G(A, ψ) can not
only be easily constructed from A and ψ, but are also easily (i.e., first-order)
definable inside A.

Theorem 4.13 For every structure A with at least two elements, and
every formula ϕ(x̄) ∈ LFP the model-checking game G(A, ϕ) is first-order
interpretable in A.

For finite structures, the size of the game G(A, ψ(a)) (and the time complex-
ity of its construction) is bounded by |ψ|·|A|width(ψ) . Hence, for LFP-formulae
of bounded width, the size of the game is polynomially bounded.

Corollary 4.14 The model-checking problem for LFP-formulae of bounded
width (and without parameters) is in NP ∩ Co-NP, in fact in UP ∩ Co-UP.

By the complexity results for parity games mentioned at the end of
Section 4.2, we obtain complexity bounds for LFP model-checking which are
polynomial with respect to the size of the structure, but exponential in the
width and the alternation depth of the formula.

Corollary 4.15 The model-checking problem for LFP-formulae of bounded
width and bounded alternation depth is solvable in polynomial time.

We have imposed the condition that the fixed-point formulae do not contain
parameters. If parameters are allowed, then, at least with a naive definition
of width, Corollary 4.14 is no longer true (unless UP = Pspace). The
intuitive reason is that with parameters one can ‘hide’ first-order variables in
fixed-point variables. Indeed, by Dziembowski [1996] the evaluation problem
for quantified Boolean formulae can be reduced to the evaluation of LFP-
formulae with two first-order variables (but an unbounded number of monadic
fixed-point variables) on a fixed structure with three elements. Hence the
expression complexity of evaluating such formulae is Pspace-complete.
For LFP-formulae of unbounded width, our analysis in terms of model-
checking games gives only an exponential time bound. This cannot be
improved, even for very simple LFP-formulae (Vardi [1982]).

Theorem 4.16 The model-checking problem for LFP-formulae (of un-
bounded width) is Exptime-complete, even for formulae with only one fixed-
point operator, and on a fixed structure with only two elements.

4.4.4 The modal µ-calculus


A fragment of LFP that is of fundamental importance in many areas of com-
puter science (e.g., controller synthesis, hardware verification, and knowledge
representation) is the modal µ-calculus Lµ . It is obtained by adding least
and greatest fixed-points to propositional modal logic (ML) rather than to
FO. In other words Lµ relates to ML in the same way as LFP relates to FO.
Modal logics such as ML and the µ-calculus are evaluated on transition
systems (alias Kripke structures, alias coloured graphs) at a particular
node. Given a formula ψ and a transition system G, we write G, v |= ψ
to denote that ψ holds at node v of G. Recall that formulae of ML, for
reasoning about transition systems G = (V, (Ea )a∈A , (Pb )b∈B ), are built
from atomic propositions Pb by means of the usual propositional connectives
and the modal operators ⟨a⟩ and [a]. That is, if ψ is a formula and a ∈ A is
an action, then we can build the formulae ⟨a⟩ψ and [a]ψ, with the following
semantics:

G, v |= ⟨a⟩ψ iff G, w |= ψ for some w such that (v, w) ∈ Ea ,
G, v |= [a]ψ iff G, w |= ψ for all w such that (v, w) ∈ Ea .

If there is only one transition relation, i.e., A = {a}, then we simply write □
and ♦ for [a] and ⟨a⟩, respectively.
ML can be viewed as an extension of propositional logic. However, in our
context it is more convenient to view it as a simple fragment of first-order
logic. A modal formula ψ defines a query on transition systems, associating
with G a set of nodes ψ G := {v : G, v |= ψ}, and this set can be defined
equivalently by a first-order formula ψ ∗ (x). This translation maps atomic
propositions Pb to atoms Pb x, it commutes with the Boolean connectives,
and it translates the modal operators by use of quantifiers as follows:


(⟨a⟩ψ)∗ (x) := ∃y(Ea xy ∧ ψ ∗ (y))
([a]ψ)∗ (x) := ∀y(Ea xy → ψ ∗ (y)).
Note that the resulting formula has width 2 and can thus be written with
only two variables.
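The translation is easy to write down recursively. The sketch below produces ψ*(x) as a plain string and alternates between the two variables x and y, which is what keeps the width at 2; the tuple encoding of ML formulae is an assumption made for the example.

def star(phi, x='x'):
    """Translate an ML formula into the first-order formula phi*(x) described
    above.  ML formulae are tuples such as ('dia', 'a', ('prop', 'b')) for <a>P_b."""
    y = 'y' if x == 'x' else 'x'
    kind = phi[0]
    if kind == 'prop':
        return f"P{phi[1]}({x})"
    if kind == 'not':
        return f"~{star(phi[1], x)}"
    if kind == 'and':
        return f"({star(phi[1], x)} & {star(phi[2], x)})"
    if kind == 'or':
        return f"({star(phi[1], x)} | {star(phi[2], x)})"
    if kind == 'dia':                                 # <a> psi
        return f"exists {y} (E{phi[1]}({x},{y}) & {star(phi[2], y)})"
    if kind == 'box':                                 # [a] psi
        return f"forall {y} (E{phi[1]}({x},{y}) -> {star(phi[2], y)})"
    raise ValueError(kind)

# star(('dia', 'a', ('box', 'a', ('prop', 'b')))) gives
# 'exists y (Ea(x,y) & forall x (Ea(y,x) -> Pb(x)))'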
Theorem 4.17 For every formula ψ ∈ ML, there exists a first-order
formula ψ ∗ (x) of width 2, which is equivalent to ψ in the sense that G, v |= ψ
iff G |= ψ ∗ (v).
The modal µ-calculus Lµ extends ML by the following rule for building
fixed-point formulae: If ψ is a formula in Lµ and X is a propositional variable
that only occurs positively in ψ, then µX.ψ and νX.ψ are also Lµ -formulae.
The semantics of these fixed-point formulae is completely analogous to
that for LFP. The formula ψ defines on G (with universe V , and with
interpretations for other free second-order variables that ψ may have besides
X) the monotone operator Fψ : P(V ) → P(V ) assigning to every set X ⊆ V
the set ψ G (X) := {v ∈ V : (G, X), v |= ψ}. Now,
G, v |= µX.ψ iff v ∈ lfp(Fψ )
G, v |= νX.ψ iff v ∈ gfp(Fψ ).
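On a finite transition system these semantics can be executed directly by iterating the fixed-points, exactly as in Section 4.4. The following sketch assumes G is given as a triple (V, E, P), with E mapping each action to a successor map and P mapping atomic propositions to sets of nodes; this encoding is purely illustrative.

def eval_mu(G, phi, env=None):
    """Set of nodes of the finite transition system G at which the L_mu
    formula phi holds; env interprets free fixed-point variables."""
    V, E, P = G
    env = env or {}
    kind = phi[0]
    if kind == 'prop':
        return set(P[phi[1]])
    if kind == 'var':
        return set(env[phi[1]])
    if kind == 'and':
        return eval_mu(G, phi[1], env) & eval_mu(G, phi[2], env)
    if kind == 'or':
        return eval_mu(G, phi[1], env) | eval_mu(G, phi[2], env)
    if kind == 'dia':                          # <a> psi
        S = eval_mu(G, phi[2], env)
        return {v for v in V if any(w in S for w in E[phi[1]].get(v, []))}
    if kind == 'box':                          # [a] psi
        S = eval_mu(G, phi[2], env)
        return {v for v in V if all(w in S for w in E[phi[1]].get(v, []))}
    if kind in ('mu', 'nu'):                   # least/greatest fixed-point by iteration
        X = set() if kind == 'mu' else set(V)
        while True:
            Y = eval_mu(G, phi[2], {**env, phi[1]: X})
            if Y == X:
                return X
            X = Y
    raise ValueError(kind)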
Example 4.18 The formula µX.ϕ ∨ ⟨a⟩X asserts that there exists a path
along a-transitions to a node where ϕ holds. The formula νX.µY.⟨a⟩((ϕ ∧
X) ∨ Y ) says that there exists a path from the current node on which ϕ
holds infinitely often.
Exercise 4.4 Prove that the formulae in Example 4.18 do indeed express
the stated properties.
The translation from ML into FO extends to a translation from Lµ into
LFP.
Theorem 4.19 Every formula ψ ∈ Lµ is equivalent to a formula ψ ∗ (x) ∈
LFP of width two.
Further the argument proving that LFP can be embedded into second-
order logic also shows that Lµ is a fragment of monadic second-order
logic (MSO).
The model-checking games for LFP easily translate into games for the
µ-calculus. Given a formula ψ ∈ Lµ and a transition system K, we obtain a
parity game G(K, ψ), with positions (ϕ, v) where ϕ is a subformula of ψ and
v is a node of K, such that K, v |= ϕ if, and only if, Player 0 has a winning
strategy in G(K, ψ) from position (ϕ, v). As a consequence, an efficient
algorithm for solving parity games would also solve the model-checking
problem for Lµ .
Since Lµ -formulae can be seen as LFP-formulae of width two, the bounds
established in the previous section apply: The model-checking problem for
Lµ is in UP ∩ Co-UP, and it is a major open problem whether it can be
solved in polynomial time. For Lµ -formulae of bounded alternation depths,
the associated parity games have a bounded number of priorities and can
therefore be solved in polynomial time.
Also the structure complexity can be settled easily. Since Lµ is a fragment
of LFP, all properties expressible in Lµ are decidable in polynomial time.
Further, there exist ψ ∈ Lµ for which the model-checking problem is Ptime-
complete. Indeed, winning regions of reachability games are definable not only
in LFP, but also in the µ-calculus. In a game G = (V, V0 , V1 , E), Player 0 has
a winning strategy from v if, and only if, G, v |= µX.((V0 ∧ ♦X) ∨ (V1 ∧ □X)).
Despite this result, the µ-calculus is far away from a logic that would
capture Ptime. Since Lµ is a fragment of MSO, all word languages definable
in Lµ are regular languages, and of course, not all Ptime-languages are
regular.

4.5 Definability of winning regions in parity games


We have seen that the model-checking problem for the LFP and the modal
µ-calculus can be reduced to the problem of computing winning regions
in parity games. We now discuss the question of whether, and under what
conditions, winning regions of parity games are definable in LFP and the
µ-calculus.
To study questions of logical definability for parity games (V, V0 , V1 , E, Ω)
we need to represent the games as relational structures. We distinguish
between two cases.
For fixed d, we consider parity games where the range of the priority
function Ω is in {0, . . . , d − 1} as structures G = (V, V0 , V1 , E, P0 , . . . , Pd−1 )
where P0 , . . . , Pd−1 are pairwise disjoint unary relations such that Pi is the
set of positions v with Ω(v) = i. We denote this class of structures by PG d .
On the other hand, to consider classes of parity games with an unbounded
number of priorities, we consider them as structures

G = (V, V0 , V1 , E, ≺, Odd)
where u ≺ v means that u has a smaller priority than v, and Odd is the set
of nodes with an odd priority. We denote this class of structures by PG.
In each case, when we say that winning regions of parity games are definable
in a logic L, we mean that there are formulae ψ0 and ψ1 of L such that
for any structure G ∈ PG (resp. PG d ), ψσ is true in exactly those nodes in G
from which Player σ has a winning strategy.

4.5.1 Parity games with a bounded number of priorities


For any fixed d, the winning regions of parity games in PG d are definable by
LFP-formulae with d nested fixed-point operators. For Player 0, the formula
is
ψ0d (x) := [gfp R0 x . [lfp R1 x . . . . [fp Rd−1 x . ϕ(x, R0 , . . . , Rd−1 )](x) . . . ](x),
where
ϕ(x, R0 , . . . , Rd−1 ) := ⋁i<d ((V0 x ∧ Pi x ∧ ∃y (Exy ∧ Ri y)) ∨
(V1 x ∧ Pi x ∧ ∀y (Exy → Ri y))).
The fixed-point operators alternate between gfp and lfp, and hence fp =
gfp if d is odd, and fp = lfp if d is even.
Theorem 4.20 For every d ∈ N, the formula ψ0d defines the winning region
of Player 0 in parity games with priorities 0, . . . , d − 1.
Proof In general, LFP-formulae are hard to understand, especially if they
have many alternations between least and greatest fixed-points. However, in
this case we have an elegant argument based on model-checking games to prove
that, for every parity game G = (V, V0 , V1 , E, P0 , . . . , Pd−1 ) and every position
v ∈ V,
G |= ψ0d (v) ⇐⇒ Player 0 has a winning strategy for G from v.
Let G ∗ be the model-checking game for the formula ψ0d (v) on G and identify
Verifier with Player 0 and Falsifier with Player 1. Hence, Player 0 has a
winning strategy for G ∗ if, and only if, G |= ψ0d (v).
By the construction of model-checking games, G ∗ has positions of the form
η(u), where u ∈ V and η is a subformula of ψ0d . The priority of a position
Ri u is i, and when η(u) is not of this form, then its priority is d.
We claim that the game G ∗ is essentially, i.e., up to elimination of stupid
moves (which would lead to a loss within one or two moves) and up to
contraction of several consecutive moves into one, the same as the original
game G. To see this, we compare playing G from a current position u ∈ V0 ∩ Pi
with playing G ∗ from any position ϑk (u), where ϑk (x) is the subformula
[gfp Rk . . . ] or [lfp Rk . . . ] of ψ0d (x). In G, Player 0 selects at position u
a successor w ∈ uE, and the play proceeds from w. In G ∗ , the play goes
from ϑk (u) through the positions ϑk+1 (u), . . . , ϑd−1 (u) to the inner formula
ϕ(u, R0 , . . . , Rd−1 ).
This formula is a disjunction, so Verifier (Player 0) decides how to proceed.
But her only reasonable choice at this point is to move to the position
(V0 u ∧ Pi u ∧ ∃y(Euy ∧ Ri y)), since with any other choice she would lose one
move later. But from there, the only reasonable move of Falsifier (Player 1)
is to go to position ∃y(Euy ∧ Ri y), and it is now the turn of Player 0 to
select a successor w ∈ uE and move to (Euw ∧ Ri w). This forces Player 1 to
move to Ri w from which the play proceeds to ϑi (w).
Thus one move from u to w in G corresponds to a sequence of moves
in G ∗ from ϑk (u) to ϑi (w), but the only genuine choice is the move from
∃y(Euy ∧ Ri y) to (Euw ∧ Ri w), i.e., the choice of a successor w ∈ uE. In
G, the position u has priority i, and in G ∗ the minimal, and hence relevant,
priority that is seen in the sequence of moves from ϑk (u) to ϑi (w) is that of
Ri w which is also i. The situation for positions u ∈ V1 ∩Pi is the same, except
that the play in G ∗ now goes through ∀y(Exy → Ri y) and it is Player 1 who
selects a successor w ∈ uE and forces the play to Ri w.
Hence the (reasonable) choices that have to be made by the players in G ∗
and the relevant priorities that are seen are the same as in a corresponding
play of G. Thus, Player 0 has a winning strategy for G from v if, and only
if, Player 0 has a winning strategy for G ∗ from position ψ0d (v). But since G ∗
is the model-checking game for ψ0d (v) on G this is the case if, and only if,
G |= ψ0d (v).

The formula ψ1d defining the winning region for Player 1 is defined similarly.
Notice that the formula ψσd has width two. An analogous construction can
be carried out in the µ-calculus. The corresponding formulae are
Wind = νX0 µX1 νX2 . . . λXd−1 ⋁j<d ((V0 ∧ Pj ∧ ♦Xj ) ∨ (V1 ∧ Pj ∧ □Xj )).
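To make the alternation of greatest and least fixed-points in Wind concrete, here is a small sketch (not from the original text) of the naive bottom-up evaluation it suggests on a finite parity game; the function name and the input encoding are assumptions of this illustration.

def win0(positions, succ, V0, priority, d):
    """Naive evaluation of the nested fixed points of Win_d: returns the set of
    positions of a finite parity game from which Player 0 wins.

    positions -- iterable of positions
    succ      -- dict: position -> list of successors
    V0        -- set of positions of Player 0 (all others belong to Player 1)
    priority  -- dict: position -> priority in {0, ..., d-1}
    """
    positions = set(positions)

    def body(X):
        # One application of the matrix of Win_d; X is the tuple (X_0, ..., X_{d-1}).
        new = set()
        for v in positions:
            j = priority[v]
            if v in V0:
                ok = any(w in X[j] for w in succ[v])   # V_0 ∧ P_j ∧ ♦X_j
            else:
                ok = all(w in X[j] for w in succ[v])   # V_1 ∧ P_j ∧ □X_j
            if ok:
                new.add(v)
        return new

    def fixpoint(k, outer):
        # Solve for X_k, given the values `outer` of the outer variables X_0, ..., X_{k-1}.
        if k == d:
            return body(outer)
        X = positions if k % 2 == 0 else set()         # ν starts at V, µ starts at ∅
        while True:
            nxt = fixpoint(k + 1, outer + [X])
            if nxt == X:
                return X
            X = nxt

    return fixpoint(0, [])

Dead ends are handled as in the formula: a position of Player 0 with no successors is lost for her, and a position of Player 1 with no successors is won by Player 0, matching the usual convention that a player who cannot move loses. The evaluation recomputes inner fixed points from scratch at every outer step, so it is exponential in d; it is meant only to make the formula tangible, not to compete with dedicated parity game algorithms.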

Corollary 4.21 The following three problems are algorithmically equivalent,
in the sense that if one of them admits a polynomial-time algorithm, then all
of them do.

(1) Computing winning regions in parity games.
(2) The model-checking problem for LFP-formulae of width at most k, for
any k ≥ 2.
(3) The model-checking problem for the modal µ-calculus.

4.5.2 Alternation hierarchies
The formulae Wind also play an important role in the study of the alter-
nation hierarchy of the modal µ-calculus. Clearly, Wind has alternation
depth d and it has been shown that this cannot be avoided. As a consequence
the alternation hierarchy of the µ-calculus is strict, a result due to Bradfield
[1998] and Arnold [1999].
Sometimes, a slightly stronger formulation of this result is needed, for
parity games on finite and strongly connected graphs. This easily follows
from the general result by the finite model property of the µ-calculus and by
a straightforward reduction to strongly connected games.

Theorem 4.22 Winning regions in parity games in PG d are not definable
by formulae in the µ-calculus with alternation depth < d, even under the
assumption that the game graphs are finite and strongly connected.
For LFP the strictness of the alternation hierarchy also applies, even on
certain fixed infinite structures, such as arithmetic N = (N, +, · ).
However, on finite structures, the interleaving of least and greatest fixed
points (or of lfp-operators and negation) can be completely avoided, at the
expense of increasing the arity of fixed-point operators. Indeed, a single
application of an lfp-operator to a first-order formula suffices to express
any LFP-definable property (see Immerman [1986] or Ebbinghaus and Flum
[1999]).

Theorem 4.23 On finite structures, every LFP-formula is equivalent to a
formula of the form ∃y[lfpRx . ϕ(R, x)](y, . . . , y).

This result can be strengthened further. Notice that the model-checking
game of a formula ∃y[lfpRx . ϕ(R, x)](y, . . . , y) is actually a reachability-
safety game. The winning region for Player 0 is thus definable by an LFP-
formula of a particularly simple form, where the lfp-operator is applied to a
∆2 -formula.

Theorem 4.24 (Dahlhaus [1987]) Every LFP-definable property of finite
structures can be reduced, by a quantifier-free translation, to the problem of
computing winning regions in reachability games.
Hence even the problem of computing winning regions in reachability
games is complete for LFP via this logical notion of reduction.
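By contrast, winning regions of reachability games can be computed in linear time by the standard attractor construction. The following sketch (with assumed input conventions, and assuming that terminal positions where Player 1 is stuck have already been added to the target set) is one way to phrase it.

from collections import deque

def attractor(positions, succ, V0, targets):
    """Positions from which Player 0 can force the play into `targets`
    (backward induction for reachability games; input encoding assumed)."""
    pred = {v: [] for v in positions}
    for v in positions:
        for w in succ[v]:
            pred[w].append(v)
    # For Player 1 positions, count how many successors are not yet known to be winning.
    missing = {v: len(succ[v]) for v in positions}
    win = set(targets)
    queue = deque(win)
    while queue:
        w = queue.popleft()
        for v in pred[w]:
            if v in win:
                continue
            missing[v] -= 1
            # Player 0 needs one successor in the winning set, Player 1 needs all of them.
            if v in V0 or missing[v] == 0:
                win.add(v)
                queue.append(v)
    return win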

4.5.3 Parity games with an unbounded number of priorities
We now show that winning regions of parity games are not definable in LFP
when the game graph may be infinite.
Theorem 4.25 Winning regions in PG are not definable in LFP, even
under the assumptions that the game graph is countable and the number of
priorities is finite.
Proof Suppose that Win(x) ∈ LFP defines the winning region of Player 0
on PG. We use this formula to solve the model-checking problem for LFP on
N = (ω, +, · ).
Recall that, for any ϕ(x) ∈ LFP, we have a parity game G(N, ϕ) such that,
for all n
N |= ϕ(n) ⇐⇒ G(N, ϕ) |= Win(vn )
(where vn is the initial position associated with ϕ(n)).
Further, the model-checking game G(N, ϕ) is first-order interpretable in N.
Hence the formula Win(x) is mapped, via a first-order translation Iϕ , into
another LFP-formula Winϕ (x) such that
G(N, ϕ) |= Win(vn ) ⇐⇒ N |= Winϕ (n).
The first-order translation Win(x) 7→ Winϕ (x) depends on ϕ, but does
not increase the alternation depth. Hence, on arithmetic, every formula ϕ(x)
would be equivalent to one of fixed alternation depth:

N |= ϕ(n) ⇐⇒ N |= Winϕ (n).

However, it is known that the alternation hierarchy of LFP on arithmetic
is strict.

Definability on finite graphs. On finite game graphs, the definability
issues are different and closely related to complexity. One of the most inter-
esting questions is whether the winning regions are definable in fixed-point
logics such as LFP or the µ-calculus.
It is not difficult to see that the µ-calculus is not sufficient (no matter how
one would precisely define a µ-calculus on PG). Again, this is a consequence
of the strictness of the alternation hierarchy. A µ-calculus formula defining
the winning region of Player 0 on PG could be translated to formulae of the
usual µ-calculus on structures PG d (for any fixed d) with just a bounded
increase of the alternation depth. But this would mean that, for any d, the
winning regions of parity games with d priorities can be expressed by a
µ-calculus formula with a fixed alternation level, which would contradict the
strictness of the alternation hierarchy of Lµ . For details, we refer to Dawar
and Grädel [2008].
We now turn to the least fixed-point logic LFP. Clearly, a proof that
winning regions of parity games in PG are LFP-definable would imply that
parity games are solvable in polynomial time. Surprisingly, it turns out that
also the converse direction holds, despite the fact that LFP is weaker than
Ptime.
To prove this, we use a result by Otto [1999] saying that the multi-
dimensional µ-calculus, which is a fragment of LFP, captures precisely the
bisimulation-invariant part of Ptime. See also [Grädel et al., 2007, Section
3.5.3] for an exposition of this result.
Winning positions in parity games are of course invariant under the usual
notion of bisimulation (e.g., as structures in PG d ). However, to apply Otto’s
Theorem for parity games with an unbounded number of priorities, we have to
consider bisimulation on structures of the form G = (V, V0 , V1 , E, ≺, Odd). Let
τ = {V0 , V1 , E, ≺, Odd, v} be the vocabulary of parity games with a starting
node, and let Str(τ ) denote the class of all structures of this vocabulary. If
we have two such structures that are indeed parity games, then bisimilarity
as τ -structures coincides with the usual notion of bisimilarity in PG d , for
appropriate d. However, not all structures in Str(τ ) are parity games, and the
class of parity games is not closed under bisimulation. An efficient procedure
for deciding whether a structure is bisimilar to a parity game is to compute
its quotient under bisimulation and to check whether it is a parity game.
For a structure (G, v) ∈ Str(τ ) consider the bisimulation relation a ∼ b on
elements of G defined with respect to the binary relations E, ≺ and ≺−1 .
That is to say ∼ is the largest relation satisfying:

• if a ∼ b then a and b agree on the unary relations V0 , V1 and Odd;
• for every x ∈ aE there is a y ∈ bE such that x ∼ y, and conversely;
• for every x with a ≺ x there is a y with b ≺ y and x ∼ y and conversely;
and finally
• for every x ≺ a there is a y ≺ b such that x ∼ y, and conversely.

We write (G, v)∼ for the bisimulation quotient of (G, v), i.e., the structure
whose elements are the equivalence classes in G with respect to ∼ with the
relations V0 , V1 , E, ≺, Odd defined in the natural way and [v] as the starting
vertex.
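As an illustration of the quotient computation, the following sketch performs naive partition refinement on a finite τ -structure. The input encoding (dictionaries for the unary predicates and for E, ≺ and the inverse of ≺) and the function name are assumptions of this example.

def bisimulation_classes(nodes, unary, relations):
    """Coarsest bisimulation on a finite structure by naive partition refinement.

    nodes     -- list of elements
    unary     -- dict: node -> tuple of truth values of V0, V1, Odd
    relations -- list of binary relations, each a dict node -> set of successors;
                 here: E, the relation ≺, and the inverse of ≺
    Returns a dict mapping each node to an integer naming its ∼-class.
    """
    # Initial partition: nodes agreeing on all unary predicates.
    ids = {}
    block = {v: ids.setdefault(unary[v], len(ids)) for v in nodes}
    while True:
        def signature(v):
            # Own block plus, for every relation, the set of blocks reachable in
            # one step; equal signatures encode the back-and-forth conditions.
            return (block[v],
                    tuple(frozenset(block[w] for w in rel.get(v, ()))
                          for rel in relations))
        ids = {}
        refined = {v: ids.setdefault(signature(v), len(ids)) for v in nodes}
        if refined == block:       # no block was split: the partition is stable
            return block
        block = refined

The quotient (G, v)∼ is then obtained by taking one node per class and inducing V0 , V1 , E, ≺, Odd on the classes; efficient implementations would use Paige–Tarjan style refinement rather than this quadratic loop.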

Exercise 4.5 Prove that a structure (G, v) ∈ Str(τ ) is bisimilar to a
parity game if, and only if, its bisimulation quotient is a parity game, i.e.,
(G, v)∼ ∈ PG.
Theorem 4.26 Let C be any class of parity games on finite game graphs,
such that winning positions on its bisimulation quotients are decidable in
polynomial time. Then, on C, winning positions are LFP-definable.

Proof Let WinC be the class of parity games (G, v), such that (G, v) ∈ C, and
Player 0 wins from initial position v. It suffices to construct a bisimulation-
invariant class X of structures (H, u) such that

(1) X is decidable in polynomial time,
(2) X ∩ C = WinC.

Indeed, by Otto’s Theorem X is then definable by an LFP-formula ψ(x),
such that, given any parity game (G, v) ∈ C we have

G, v ∈ WinC ⇐⇒ G, v ∈ X ⇐⇒ G |= ψ(v).

By assumption, there exists a polynomial time algorithm A which, given a
parity game (G, v) ∈ C ∼ , decides whether Player 0 wins G from v. It is not
important what the algorithm returns for quotients outside C ∼ , as long as
it is isomorphism-invariant and halts in polynomial time. Finally, let B be
the algorithm which, given any finite structure in Str(τ ), first computes its
bisimulation quotient, and then applies algorithm A.
Clearly B is a polynomial time algorithm, since bisimulation quotients
are efficiently computable. Further the class X of structures accepted by
B is invariant under bisimulation. Indeed, let H and H0 be two bisimilar
structures. Then their bisimulation quotients are isomorphic and are therefore
either both accepted or both rejected by A. Finally, X ∩ C = WinC. Indeed,
a parity game (G, v) ∈ C has the same winner as its bisimulation quotient,
and is therefore correctly decided by the algorithm B.
Corollary 4.27 On the class PG of all finite parity games, winning regions
are LFP-definable if, and only if, they are computable in polynomial time.

For further results on the definability of winning regions in parity games,
we refer to Dawar and Grädel [2008].
4.6 Inflationary fixed-point logic and backtracking games
LFP and the modal µ-calculus are not the only logics based on fixed-point
operators. In the context of finite model theory, a rich variety of fixed-point
operators has been studied due to the close connection that the resulting
logics have with complexity classes. One of the most prominent fixed-point
logics is IFP, the logic of inflationary fixed points. In finite model theory
the logics IFP and LFP have often been used interchangeably as it has long
been known that they have equivalent expressive power on finite structures.
More recently, it has been shown by Kreutzer [2004] that the two logics are
equally expressive even without the restriction to finite structures. However,
it has also been proved by Dawar et al. [2004] that MIC, the extension
of propositional modal logic by inflationary fixed-points, is vastly more
expressive than the modal µ-calculus Lµ and that LFP and IFP have very
different structural properties even when they have the same expressive
power. This exploration of the different nature of the fixed-point operators
leads naturally to the question of what an appropriate model-checking game
for IFP might look like.
Our analysis of why parity games are the appropriate model-checking games
for LFP logics relied on the well-foundedness of the inductive definition of
a least fixed-point. The Verifier who is trying to prove that a certain tuple
a belongs to a least fixed-point relation R, needs to present a well-founded
justification for its inclusion. That is, the inclusion of a in R may be based
on the inclusion of other elements in R whose inclusion in turn needs to be
justified but the entire process must be well-founded. On the other hand, the
justification for including an element in a greatest fixed-point may well be
circular. This interaction between sequences that are required to be finite and
those that are required to be infinite provides the structural correspondence
with parity games.
A key difference that arises when we consider inflationary fixed points
(and, dually, deflationary fixed-points) is that the stage at which a tuple a
enters the construction of the fixed-point R may be an important part of the
justification for its inclusion. In the case of least and greatest fixed-points,
the operators involved are monotone. Thus, if the inclusion of a can be
justified at some stage, it can be justified at all later stages. In contrast, in
constructing an inflationary fixed-point, if a is included in the set, it is on the
basis of the immediately preceding stage of the iteration. It may be possible
to reflect this fact in the game setting by including the iteration stage as an
explicit component of the game position. However, this would blow up the
game enormously, since we would have to take a separate copy of the arena
for each ordinal. Our aim is to leave the notion of the game arena unchanged
as the product of the structure and the formula. We wish only to change
the rules of the game to capture the nature of the inflationary fixed-point
operator.
The change we introduce to parity games is that either player is allowed to
backtrack to an earlier position in the game, effectively to force a countback
of the number of stages. That is, when a backtracking move is played, the
number of positions of a given priority that are backtracked are counted
and this count plays an important role in the succeeding play. The precise
definition is given in Section 4.6.2 below. The backtracking games we define
are more complex than parity games. Winning strategies are necessarily more
complicated, requiring unbounded memory, in contrast to the positional
strategies that work for parity games. Furthermore, deciding the winner is
Pspace-hard and remains hard for both NP and Co-NP even when games
have only two priorities. In contrast, parity games are known to be decidable
in NP ∩ Co-NP and in Ptime when the number of priorities is fixed. We will
explain how the model-checking problem for IFP can be represented in the
form of backtracking games. The construction allows us to observe that a
simpler form of backtracking game suffices which we call simple backtracking
games, and we will see that the winning regions of simple backtracking games
are definable in IFP. Thus, we obtain a tight correspondence between the
game and the logic, as exists between LFP and parity games.

4.6.1 Inflationary fixed-point logic
The inflationary fixed-point of any operator F : P(Ak ) → P(Ak ) is
defined as the limit of the increasing sequence of sets (Rα )α∈Ord defined as
R0 := ∅, Rα+1 := Rα ∪ F (Rα ), and Rλ := ⋃α<λ Rα for limit ordinals λ.
The deflationary fixed-point of F is constructed in the dual way starting
with Ak as the initial stage and taking intersections at successor and limit
ordinals.
Inflationary fixed-point logic (IFP) is obtained from FO by allowing
formulae of the form [ifpRx . ϕ(R, x)](x) and [dfpRx . ϕ(R, x)](x), for arbi-
trary ϕ, defining the inflationary and deflationary fixed-point of the operator
induced by ϕ.
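On a finite structure the inflationary (and, dually, the deflationary) fixed-point can be computed by iterating the stages directly; the following minimal sketch, with an assumed callback F for the operator, makes the definition above explicit.

def inflationary_fixed_point(F):
    """Stages R0 = ∅ and R(α+1) = Rα ∪ F(Rα), iterated until nothing new is added.

    F may be any operator on finite sets of tuples; it need not be monotone.
    Since the stages only grow, the loop stops after at most |A|^k rounds.
    The deflationary fixed point is computed dually, starting from all of A^k
    and intersecting with F(Rα) at every step.
    """
    R = frozenset()
    while True:
        nxt = R | frozenset(F(R))
        if nxt == R:
            return R
        R = nxt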
To illustrate the power of IFP, we present here a few examples of situations
where inflationary and deflationary fixed-points arise.
Bisimulation. Let K = (V, E, P1 , . . . , Pm ) be a transition system with a
binary transition relation E and unary predicates Pi . Bisimilarity on K is
the maximal equivalence relation ∼ on V such that any two equivalent nodes
satisfy the same unary predicates Pi and have edges into the same equivalence
classes. To put it differently, ∼ is the greatest fixed-point of the refinement
operator F : P(V × V ) → P(V × V ) with
F : Z 7→ {(u, v) ∈ V × V : ⋀i≤m (Pi u ↔ Pi v)
∧ ∀u′ (Euu′ → ∃v ′ (Evv ′ ∧ Zu′ v ′ ))
∧ ∀v ′ (Evv ′ → ∃u′ (Euu′ ∧ Zu′ v ′ ))}.
For some applications (one of which will appear in Section 4.6.4) one is
interested in having not only the bisimulation relation ∼ but also a linear
order on the bisimulation quotient K/∼ . That is, we want to define a pre-
order ⊑ on K such that u ∼ v iff u ⊑ v and v ⊑ u. We can again do this via
a fixed-point construction, by defining a sequence ⊑α of pre-orders (where α
ranges over ordinals) such that ⊑α+1 refines ⊑α and ⊑λ , for limit ordinals λ,
is the intersection of the pre-orders ⊑α with α < λ. Let

u ⊑1 v :⇐⇒ ⋀i≤m (Pi u → Pi v ∨ ⋁j<i (¬Pj u ∧ Pj v))

(i.e., if the truth values of the Pi at u are lexicographically smaller than or
equal to those at v), and for any α, let
u ∼α v :⇐⇒ u ⊑α v ∧ v ⊑α u.

To define the refinement, we say that the ∼α -class C separates two nodes u
and v, if precisely one of the two nodes has an edge into C. Now, let u ⊑α+1 v
if, and only if, u ⊑α v and there is an edge from v (and hence none from u)
into the smallest ∼α -class (w.r.t. ⊑α ) that separates u from v (if it exists).
Since the sequence of the pre-orders ⊑α is decreasing, it must indeed reach a
fixed-point ⊑, and it is not hard to show that the corresponding equivalence
relation is precisely the bisimilarity relation ∼.
The point that we want to stress here is that ⊑ is a deflationary fixed-point
of a non-monotone induction. Indeed, the refinement operator on pre-orders
is not monotone and does not, in general, have a greatest fixed-point. We
remark that it is not difficult to give an analogous definition of this order by
an inflationary, rather than deflationary, induction.
The lazy engineer: iterated relativisation. Let ϕ(x) be a specification that
should be satisfied by all states a of a system, which we assume to be
described as a relational structure A. Now, suppose that the engineer notices
that the system he designed is faulty, i.e., that there exist elements a ∈ A
where ϕ does not hold. Rather than redesigning the system, he tries to just
throw away all bad elements of A, i.e., he relativises A to the substructure
A|ϕ induced by {a : A |= ϕ(a)}. Unfortunately, it need not be the case that
A|ϕ |= ∀xϕ(x). Indeed, the removal of some elements may have the effect
that others no longer satisfy ϕ. But the lazy engineer can of course iterate
this relativisation procedure and define a (possibly transfinite) sequence of
substructures Aβ , with A0 = A, Aβ+1 = Aβ |ϕ and Aλ = ⋂β<λ Aβ for limit
ordinals λ. This sequence reaches a fixed-point A∞ which satisfies ∀xϕ(x) –
but it may be empty.
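The lazy engineer's procedure is straightforward to run on a finite structure. In the sketch below, holds(a, Z) is a hypothetical callback checking whether a satisfies ϕ in the substructure induced by Z; the toy example also shows how removing one element can make another one bad in the next round.

def iterated_relativisation(universe, holds):
    """Compute A∞ by repeatedly throwing away the elements where ϕ fails."""
    Z = set(universe)
    while True:
        Z_next = {a for a in Z if holds(a, Z)}    # pass from A^β to A^{β+1} = A^β|ϕ
        if Z_next == Z:
            return Z
        Z = Z_next

# Hypothetical example with ϕ(x) = ∃y Exy: keep the nodes that still have a successor.
E = {1: [4], 2: [3], 3: [2], 4: []}
print(iterated_relativisation(E, lambda a, Z: any(b in Z for b in E[a])))
# prints {2, 3}: once 4 (which has no successor) is removed, 1 becomes bad as well.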
This process of iterated relativisation is definable by a fixed-point induction
in A. Let ϕ|Z be the syntactic relativisation of ϕ to a new set variable Z,
obtained by replacing inductively all subformulae ∃yα by ∃y(Zy ∧ α) and
∀yα by ∀y(Zy → α). Iterated relativisation means repeated application of
the operator
F : Z 7→ {a : A|Z |= ϕ(a)} = {a : A |= Za ∧ ϕ|Z (a)}
starting with Z = A (the universe of A). Note that F is deflationary but not
necessarily monotone.
In logics with inflationary and deflationary fixed points (the universe of)
A∞ is uniformly definable in A by a formula of the form [dfpZx . ϕ|Z ](x).
Since IFP and LFP have the same expressive power, A∞ is also LFP-definable.
However, the only known way to provide such a definition is by going through
the proof of Kreutzer’s Theorem (see Kreutzer [2004]). There seems to be no
simple direct definition based on least and greatest fixed-points only.

4.6.2 Parity games with backtracking
Backtracking games are essentially parity games with the addition that,
under certain conditions, players can jump back to an earlier position in the
play. This kind of move is called backtracking. A backtracking move from
position v to an earlier position u is only possible if v belongs to a given set
B of backtrack positions, if u and v have the same priority and if no position
of smaller priority has occurred between u and v. With such a move, the
player who backtracks not only resets the play back to u, she also commits
herself to a backtracking distance d, which is the number of positions of
priority Ω(v) that have been seen between u and v. After this move, the play
ends when d further positions of priority Ω(v) have been seen, unless this
priority is “released” by a lower priority.
For finite plays we have the winning condition that a player wins if her
opponent cannot move. For infinite plays, the winner is determined according
to the parity condition, i.e., Player 0 wins a play π if the least priority seen
infinitely often in π is even, otherwise Player 1 wins.
Thus the arena G := (V, E, V0 , V1 , B, Ω) of a backtracking game is defined
as for parity games, extended by a subset B ⊆ V of backtrack positions. A
play of G from initial position v0 is formed as follows. If, after n steps the
play has gone through positions v0 v1 . . . vn and reached a position vn ∈ Vσ ,
then Player σ can select a successor vn+1 ∈ vn E; this is called an ordinary
move. But if vn ∈ B is a backtrack position, of priority Ω(vn ) = q, say,
then Player σ may also choose to backtrack; in that case she selects a
number i < n subject to the conditions that Ω(vi ) = q and Ω(vj ) ≥ q
for all j with i < j < n. The play then proceeds to position vn+1 = vi
and we set d(q) = |{k : i ≤ k < n ∧ Ω(vk ) = q}|. This number d(q) is
relevant for the rest of the game, because the play ends when d(q) further
positions of priority q have been seen without any occurrence of a priority
< q. Therefore, a play is not completely described by the sequence v0 v1 . . .
of the positions that have been visited. For instance, if a player backtracks
from vn in v0 . . . vi . . . vj . . . vn , it matters whether she backtracks to i or j,
even if vi = vj , because the associated numbers d(q) are different. For a more
formal description of how backtracking games are played we refer to Dawar
et al. [2006].
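The bookkeeping behind a single backtracking move can be spelled out in a few lines; in the following sketch (whose function and argument names are ours, not from the text) the side conditions on the play v0 . . . vn are checked and the distance d(q) is computed exactly as in the definition above.

def backtrack(history, priority, B, i):
    """Backtrack from the last position of `history` to the earlier index i.

    history  -- list of the positions v_0, ..., v_n visited so far
    priority -- dict: position -> priority
    B        -- set of backtrack positions
    Returns the position the play is reset to and the distance d(q).
    """
    n = len(history) - 1
    q = priority[history[n]]
    # Only allowed from a backtrack position, to an earlier position of the same
    # priority, with no smaller priority in between.
    assert history[n] in B and priority[history[i]] == q
    assert all(priority[history[j]] >= q for j in range(i + 1, n))
    d_q = sum(1 for k in range(i, n) if priority[history[k]] == q)
    # The play continues from v_{n+1} = v_i and ends once d_q further positions of
    # priority q occur without any occurrence of a priority smaller than q.
    return history[i], d_q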
It is easy to see that backtracking games are Borel games, so by Martin’s
Theorem, they are determined. Since parity games are positionally determined,
the question arises whether this also holds for backtracking games. However,
simple examples show that this is not the case and, indeed, no fixed amount
of finite memory suffices for winning strategies.

Theorem 4.28 Backtracking games in general do not admit finite-memory
winning strategies.

Exercise 4.6 Find an example proving this.
Thus, winning strategies for backtracking games are more complex than
the strategies needed for parity games. Also the computational complexity of
computing winning regions is higher for backtracking games than for parity
games. While it is known that winning regions of parity games can be decided
in NP ∩ Co-NP (and it is conjectured by many that this problem is actually
solvable in polynomial time), the corresponding problem for backtracking
games is Pspace-hard. Further, for any fixed number of priorities, parity
games can be decided in Ptime, but backtracking games with just two
priorities are already NP-hard (see Dawar et al. [2006]).
4.6.3 Games for IFP
We restrict attention to finite structures. The model-checking game for an
IFP-formula ψ on a finite structure A is a backtracking game G(A, ψ) =
(V, E, V0 , V1 , B, Ω). As in the games for LFP, the positions are subformulae
of ψ, instantiated by elements of A. We only describe the modifications.
We always assume that formulae are in negation normal form, and write
ϑ̄ for the negation normal form of ¬ϑ. Consider any ifp-formula ϕ∗ (x) :=
[ifpT x . ϕ(T, x)](x) in ψ. In general, ϕ can have positive or negative occur-
rences of the fixed-point variable T . We use the notation ϕ(T, T̄ ) to separate
positive and negative occurrences of T . To define the set of positions we in-
clude also all subformulae of T x ∨ ϕ and T̄ x ∧ ϕ̄. Note that an ifp-subformula
in ϕ is translated into a dfp-subformula in ϕ̄, and vice versa. To avoid
conflicts we have to change the names of the fixed-point variables when
doing this, i.e., a subformula [ifpRy . ϑ(R, R̄, y)](y) in ϕ will correspond to a
subformula [dfpR0 y . ϑ̄(R0 , R̄0 , y)](y) of ϕ̄ where R0 is a new relation variable,
distinct from R.
From a position ϕ∗ (a) the play proceeds to T a ∨ ϕ(T, a). When a play
reaches a position T c or T̄ c the play proceeds back to the formula defining
the fixed-point by a regeneration move. More precisely, the regeneration of
an ifp-atom T c is T c ∨ ϕ(T, c), the regeneration of T̄ c is T̄ c ∧ ϕ̄(T, c). Verifier
can move from T c to its regeneration, Falsifier from T̄ c. For dfp-subformulae
ϑ∗ (x) := [dfpRx . ϑ(R, x)](x), dual definitions apply. Verifier moves from
R̄c to its regeneration R̄c ∨ ϑ̄(R, c), and Falsifier can make regeneration
moves from Rc to Rc ∧ ϑ(R, c). The priority assignment associates with each
ifp-variable T an odd priority Ω(T ) and with each dfp-variable R an even
priority Ω(R), such that for any two distinct fixed-point variables S, S 0 , we
have Ω(S) 6= Ω(S 0 ), and whenever S 0 depends on S, then Ω(S) < Ω(S 0 ).
Positions of the form Sc and S̄c are called S-positions. All S-positions get
priority Ω(S), all other formulae get a higher priority. The set B of backtrack
positions is the set of S-positions, where S is any fixed-point variable.
Let us focus on IFP-formulae with a single fixed-point, ψ := [ifpT x . ϕ](a)
where ϕ(T, x) is a first-order formula. When the play reaches a position T c
Verifier can make a regeneration move to T c ∨ ϕ(T, c) or backtrack. Dually,
Falsifier can regenerate from positions T̄ c or backtrack. However, since we
have only one fixed-point, all backtrack positions have the same priority and
only one backtrack move can occur in a play.
In this simple case, the rules of the backtracking game ensure that infinite
plays (which are plays without backtracking moves) are won by Falsifier,
since ifp-atoms have odd priority. However, if one of the players backtracks
after the play has gone through α T -positions, then the play ends when α
further T -positions have been visited. Falsifier has won if the last of these is
of the form T c, and Verifier has won if it is of the form T̄ c.
The differences between IFP model-checking and LFP model-checking are
in fact best illustrated with this simple case. We claim that Verifier has a
winning strategy for the game G(A, ψ) if A |= ψ and Falsifier has a winning
strategy if A 6|= ψ.
We look at the first-order formulae ϕα defining the stages of the induction.
Let ϕ0 (x) = false and ϕα+1 (x) = ϕα (x) ∨ ϕ[T /ϕα , T̄ /ϕ̄α ](x). On finite
structures, ψ(a) ≡ ⋁α<ω ϕα (a).
The first-order game G(A, ϕα (a)) can be seen as an unfolding of the game
G(A, ψ(a)). Every position in G(A, ϕα (a)) corresponds to a unique position
in G(A, ψ(a)), and conversely, for a pair (p, β) where p is a position of
G(A, ψ(a)) and β ≤ α is an ordinal, there is a unique associated position
pβ of the unfolded game G(A, ϕα (a)). When a play in G(A, ψ(a)) reaches
a position T c, it is regenerated to either T c or ϕ(T, c) and such a regener-
ation move decrements the associated ordinal. The corresponding play in
G(A, ϕα (a)) proceeds to position ϕβ (c) or ϕ[T /ϕβ , T̄ /ϕ̄β ](c). We can use this
correspondence to translate strategies between the two games. Notice that
the lifting of a positional strategy f in the unfolded game G(A, ϕα (a)) will
produce a non-positional strategy f ∗ in the original game G(A, ψ): start with
β = α and, for any position p, let f ∗ (p) := f (pβ ); at regeneration moves, the
ordinal β is decremented.
Consider now a play in G(A, ψ) after a backtracking move prior to which
β T -positions have been visited, and suppose that A |= ϕβ (a). Then Verifier
has a winning strategy in the first-order game G(A, ϕβ (a)) (from position
ϕβ (a)) which translates into a (non-positional) strategy for the game G(A, ψ)
with the following properties: Any play that is consistent with this strategy
will either be winning for Verifier before β T -positions have been seen, or
the β-th T -position will be negative.
Similarly, if A 6|= ϕβ (a) then Falsifier has a winning strategy for G(A, ϕβ (a)),
and this strategy translates into a strategy for the game G(A, ψ) by which
Falsifier forces the play (after backtracking) from position ψ(a) to a positive
β-th T -position, unless she wins before β T -positions have been seen. We
hence have established the following fact.
Lemma 4.29 Suppose that a play on G(A, ψ) has been backtracked to the
initial position ψ(a) after β T -positions have been visited. Verifier has a
winning strategy for the remaining game if, and only if, A |= ϕβ (a).
From this we obtain the desired result.
Theorem 4.30 If A |= ψ(a), then Verifier wins the game G(A, ψ(a)) from
position ψ(a). If A 6|= ψ(a), then Falsifier wins the game G(A, ψ(a)) from
position ψ(a).
Proof Suppose first that A |= ψ(a). Then there is some ordinal α < ω such
that A |= ϕα (a). We construct a winning strategy for Verifier in the game
G(A, ψ(a)) starting at position ψ(a).
From ψ(a) the game proceeds to (T a ∨ ϕ(a)). At this position, Verifier
repeatedly chooses the node T a until this node has been visited α-times.
After that, she backtracks and moves to ϕ(a). By Lemma 4.29 and since
A |= ϕα (a), Verifier has a strategy to win the remaining play.
Now suppose that A 6|= ψ(a). If, after α T -positions, one of the players
backtracks, then Falsifier has a winning strategy for the remaining game, since
A 6|= ϕα (a). Hence, the only possibility for Verifier to win the game in a finite
number of moves is to avoid positions T̄ b where Falsifier can backtrack.
Consider the formulae ϕαf , with ϕ0f = false and ϕα+1f (x) = ϕ[T /ϕαf , T̄ /false](x).
They define the stages of [ifpT x . ϕ(T, false)](x), obtained from ψ by replac-
ing negative occurrences of T by false. If Verifier could force a finite winning
play, with α − 1 positions of the form T c and without positions T̄ c, then
she would in fact have a winning strategy for the model-checking game
G(A, ϕαf (a)). Since ϕαf implies ϕα , it would follow that A |= ϕα (a). But this
is impossible.

The extension of the proof of Theorem 4.30 to arbitrary IFP-formulae
poses no major difficulties. Proceeding by induction on the number of nested
fixed-point formulae, one has to combine the argument just given (applied
to the outermost fixed-point) with the correctness proof for the LFP-model-
checking games. Notice that the essential differences between backtracking
games and parity games are in the effects of backtracking moves. Backtracking
moves impose a finiteness condition on one priority (unless it is later released
by a smaller priority) and the effect of such a move remains essentially the
same in the general case as in the case of formulae with a single fixed-point.
On the other hand, an infinite play in an IFP-model-checking game is a play
in which the backtracking moves do not play a decisive role. The winner of
such a play is determined by the parity condition and the analysis of such
plays closely follows the proof that parity games are the model-checking
games for LFP-formulae.
4.6.4 Definability of winning regions in backtracking games
We have seen that backtracking games can be used as model-checking games
for IFP. We will now identify a natural subclass of backtracking games,
which we call simple, which is balanced with IFP. This means that for every
formula ϕ ∈ IFP and finite structure A, the game G(A, ϕ) can trivially be
modified to fall within this class and, on the other hand, for every d ∈ N
there is a formula ϕ ∈ IFP defining the winning region for Player 0 in any
simple backtracking game with at most d priorities. In this sense, simple
backtracking games precisely capture IFP model-checking.
Consider the model-checking game G(A, ϕ) and the way backtracking was
used there: if Player 0 wanted to backtrack it was always after opening a
fixed-point, say [ifpRx . Rx ∨ ϕ]. She then looped α times through the Rx
sub-formula and backtracked. By choosing the α she essentially picked a
stage of the fixed-point induction on ϕ and claimed that x ∈ ϕα . From this
observation we can derive two important consequences. As every inflationary
fixed-point induction must close after polynomially many steps in the size of
the structure A and therefore in linearly many steps in terms of the game
graph, there is no need for Player 0 to backtrack more than n steps, where
n is the size of the game graph. Further, the game can easily be modified
such that instead of having the nodes for the disjunction Rx ∨ ϕ and the
sub-formula Rx, we simply have a node for ϕ with a self-loop. In this modified
game graph, not only is it sufficient for Player 0 to backtrack no more than
n steps, we can, in addition, require that whenever she backtracks from
a node v, it must be to v again, i.e., when she decides to backtrack from
a node corresponding to the formula ϕ, she loops α times through ϕ and
then backtracks α steps to ϕ again. The same is true for Player 1 and her
backtracking.
We call a strategy in a backtracking game G local if all backtracking
moves from any node v are to a previous occurrence of v. Given a function
f : N → N, we call a strategy f -backtracking if all backtracking moves made
by the strategy have distance at most f (|G|). The strategy is called linear
in case f (n) = n and polynomial if f is a polynomial in n.
A backtracking game G := (V, E, V0 , V1 , B, Ω) is simple, if every node in
B has a self-loop and both players have local linear winning strategies on
their winning regions.

Theorem 4.31 For any IFP-formula ψ and every finite structure A, the
model-checking game G(A, ψ), as defined in Section 4.6.3, is simple.
We now want to show that the logic IFP is balanced with the class of
simple backtracking games, i.e., the winning regions of simple backtracking
games are IFP-definable.
Since backtracking games are extensions of parity games we start with
the formula defining winning regions in parity games. We take this formula
as a starting point for defining an IFP-formula deciding the winner of
backtracking games. To define strategies involving backtracking, we first need
some preparation. In particular, in order to measure distances we need an
ordering on the arenas.
It is easily seen that backtracking games are invariant under bisimulation.
Thus, it suffices to consider arenas where no two distinct nodes are bisimilar
(we refer to such arenas as bisimulation minimal ). The next step is to
define an ordering on the nodes in an arena. This is done by ordering the
bisimulation types realised in it.
Indeed, we have seen above that there is a formula ϕord (x, y) ∈ IFP
defining on every bisimulation minimal arena a linear order. As a result,
we can assume that the backtracking games are ordered and that we are
given an arithmetical predicate for addition with respect to the order defined
above.
Theorem 4.28, saying that there exist backtracking games whose winning
strategies require infinite memory, also applies to games with local strategies.
In general, the reason for the increased memory consumption is that when
the decision to backtrack is made, it is necessary to know which nodes
have been seen in the past, i.e., to which node a backtracking move is
possible. Furthermore, after a backtracking move occurred, both players
have to remember the backtracking distance, as this determines their further
moves. However, since here we consider strategies with local backtracking
only, it suffices to know the distance of the backtracking moves that are still
active, i.e., have not yet been released, whereas the history of the play in
terms of nodes visited may safely be forgotten. Thus we can capture all the
relevant information about a partial play π ending in position v by the tuple
(v, dπ (0), . . . , dπ (k − 1)), where dπ denotes the distance function.
In a backtracking game with priorities 0, . . . , k − 1, a configuration is
a pair (v, d) consisting of a node v and a tuple d ∈ (N ∪ {∞})k . Let π be
a (partial) play ending in node v. The configuration of π is defined as the
tuple (v, dπ (0), . . . , dπ (k − 1)).
Recall that in a simple backtracking game the distance of all backtracking
moves is at most n := |G|. Furthermore we can assume that we are given a
linear order on the nodes of the game graph. Thus the configuration of any
(partial) play π in a simple game can be represented by a pair (v, d) where
d ∈ {0, . . . , n, ∞}k and we can use nodes in the game graph to represent the
values of the di .
The structure of the formulae ψ0k defining the winning region for Player 0
in backtracking games with priorities < k is similar to the structure of the
corresponding LFP-formula for parity games. It has the form

ψ0k (x) := [gfp R0 xd . [lfp R1 xd . . . . [fp Rk−1 xd . ϕ(R, x, d)] . . . ](x, ∞, . . . , ∞)
with k nested fixed-points, applied to a formula ϕ which is first-order, up
to the IFP-subformula defining the order of the bisimulation types. In its
various nested fixed-points the formula builds up sets of configurations
(x, d0 , . . . , dk−1 ) such that if (x, d0 , . . . , dk−1 ) ∈ RΩ(x) , then Player 0 can extend
any partial play π, ending in node x with dπ (j) = dj for all j < k, to a
winning play.
We do not give an explicit construction here, but explain the idea. For
details, we refer to Dawar et al. [2006].
First of all the formula ϕ(R, x, d) states that for some i, the priority of x
is i and the tuple (d0 , . . . , dk−1 ) has ∞ at all positions greater than i (which
corresponds to the fact that a node of priority i releases all backtracking
moves on higher priorities). Further, if x is a position of Player 0, then she
can win from configuration (x, d) if she can move to a successor y of x from
which she wins the play. Winning from y means that the configuration (y, d′ )
reached from (x, d) by moving to y is in RΩ(y) . Thus the formula must define
what it means for (y, d′ ) to be the configuration reached from x when moving
to y.
This involves a case distinction.
If di = ∞, Player 0 can either do an ordinary move or, in case x ∈ B,
a backtracking move. After an ordinary move to a successor node y of
priority j the play will have the configuration (y, d′ ) which satisfies d′ =
(d0 , . . . , dj , ∞, . . . , ∞) and which must be in Rj . After a backtracking move,
we will have, for some m 6= ∞, a configuration (x, d0 , . . . , di−1 , m, ∞, . . . , ∞)
which must be in Ri .
In the case that di = m ≤ |G|, the formulae must express that Player 0
wins the m-step game on priority i from x. This game is won by Player 0 if
there is a successor y of x from which she wins and either the priority j of
y is less than i, i.e., all backtracking moves on priorities greater than j are
released (dl = ∞ for all l > j), or the priority j of y equals i and Player 0
wins the m − 1 step game from y (and all dl with l < i are left unchanged),
or the priority j of y is greater than i, in which case the play continues with
the configuration (y, d0 , . . . , di , ∞, . . . , ∞), i.e., all active backtracking moves
(whose distances are stored in d0 , . . . , di ) remain unchanged and the play
continues on priority j without any active backtracking moves on priorities
greater than i.
It is not difficult to express all this in first-order logic, provided an ordering
on priorities is available. For nodes where Player 1 moves the construction is
very similar.
Exercise 4.7 Make the construction of the formulae ψσk explicit, and prove
that they indeed define the winning region for Player σ.
Theorem 4.32 Winning regions of simple backtracking games are definable
in IFP.

4.7 Logic and games in a quantitative setting
Common logical formalisms are two-valued and express qualitative properties.
There have been a number of proposals to extend logics such as propositional
modal logic ML, the temporal logics LTL and CTL, and the modal µ-calculus
Lµ to quantitative formalisms where formulae can take, at a given state
of a system, not just the values true and false, but quantitative values, for
instance real numbers. There are several scenarios and applications where
it is desirable to replace purely qualitative statements by quantitative ones
which can be of very different nature: we may be interested in the probability
of an event, the value that we assign to an event may depend on how
late it occurs, we can ask for the number of occurrences of an event in a
play, and so on. We can consider transition structures, where already the
atomic propositions take numeric values, or we can ask about the ‘degree of
satisfaction’ of a property.
While there certainly is ample motivation to extend qualitative specification
formalisms to quantitative ones, there are also problems. It has been observed
in many areas of mathematics, engineering and computer science where logical
formalisms are applied, that quantitative formalisms in general lack the clean
and clear mathematical theory of their qualitative counterparts, and that
many of the desirable mathematical and algorithmic properties tend to get
lost. Also, the definitions of quantitative formalisms are often ad hoc and do
not always respect the properties that are relevant for logical methodologies.
Here we discuss to what extent the relationship between the µ-calculus and
parity games can be extended to a quantitative µ-calculus and appropriate
quantitative model-checking games. The extension is not straightforward
and requires that one defines the quantitative µ-calculus in the ‘right’ way,
so as to ensure that it has appropriate closure and duality properties (such
as closure under negation, De Morgan equalities, quantifier and fixed-point
dualities) to make it amenable to a game-based approach. But once this is
done, one can indeed construct a quantitative variant of parity games, and
prove that they are the appropriate model-checking games for the quantitative
µ-calculus. As in the classical setting the correspondence goes both ways: The
value of a formula in a structure coincides with the value of the associated
model-checking game, and conversely, the values of quantitative parity games
(with a bounded number of priorities) are definable in the quantitative µ-
calculus. However, the mathematical properties of quantitative parity games
are different from their qualitative counterparts. In particular, they are, in
general, not positionally determined, not even up to approximation, and
the proof that the quantitative model-checking games correctly describe the
value of the formulae is considerably more difficult than for the classical case.
As in the classical case, model-checking games lead to a better understand-
ing of the semantics and expressive power of the quantitative µ-calculus.
Further, the game-based approach also sheds light on the consequences of
different choices in the design of the quantitative formalism, which are far
less obvious than for classical logics.

4.7.1 Quantitative transition systems and quantitative µ-calculus
We write R+ for the set of non-negative real numbers, and R+∞ := R+ ∪ {∞}.
Quantitative transition systems (QTS) are directed graphs equipped
with quantities at states and with discounts of edges. They have the form
K = (V, E, δ, {Pi }i∈I ) where (V, E) is a directed graph, with a discount
function δ : E → R+ \ {0} and functions Pi : V → R+ ∞ , that assign to
each state the values of the predicates at that state. A transition system is
qualitative if all functions Pi assign only the values 0 or ∞, where 0 stands
for false and ∞ for true, and it is non-discounted if δ(e) = 1 for all e ∈ E.
Given predicate functions {Pi }i∈I , discount factors d ∈ R+ and constants
c ∈ R+ , the quantitative µ-calculus Qµ is built in a similar way to the
modal µ-calculus, with the following two modifications.
(1) Atomic formulae have the form |Pi − c|.
(2) If ϕ is a Qµ-formula, then so is d · ϕ.

Boolean connectives ∧, ∨, modal operators ♦ and □, and fixed-point
operators µ, ν are used as in the syntax of Lµ . The semantics, however, is
quite different.
Formulae of Qµ are interpreted over quantitative transition systems K =
(V, E, δ, (Pi )i∈I ). The meaning of a formula ϕ in K is a function [[ϕ]]K :
V → R+∞ . We write F for the set of functions f : V → R+∞ , with f1 ≤ f2
if f1 (v) ≤ f2 (v) for all v. Then (F, ≤) forms a complete lattice with the
constant functions f = ∞ and f = 0 as top and bottom elements.
The interpretation [[ϕ]]K : V → R+∞ is defined as follows:

(1) [[|Pi − c|]]K (v) := |Pi (v) − c|,
(2) [[ϕ1 ∧ ϕ2 ]]K := min{[[ϕ1 ]]K , [[ϕ2 ]]K } and
[[ϕ1 ∨ ϕ2 ]]K := max{[[ϕ1 ]]K , [[ϕ2 ]]K },
(3) [[♦ϕ]]K (v) := supv′ ∈vE δ(v, v ′ ) · [[ϕ]]K (v ′ ) and
[[□ϕ]]K (v) := inf v′ ∈vE (1/δ(v, v ′ )) · [[ϕ]]K (v ′ ),
(4) [[d · ϕ]]K (v) := d · [[ϕ]]K (v),
(5) [[µX.ϕ]]K := inf{f ∈ F : f = [[ϕ]]K[X←f ] } and
[[νX.ϕ]]K := sup{f ∈ F : f = [[ϕ]]K[X←f ] }.
When interpreted over qualitative transition systems, Qµ coincides with
the classical µ-calculus, and we say that K, v is a model of ϕ, K, v |= ϕ, if
[[ϕ]]K (v) = ∞. For discounted systems we take the natural definition for ♦
and use the dual one for □, which motivates the 1/δ factor. It has been proved
by Fischer et al. [2009] that this is the only definition for which there is a
well-behaved negation operator (with [[¬ϕ]]K = 1/[[ϕ]]K ) and which gives us
the dualities that are needed for natural model-checking games.
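As an illustration of these clauses, here is a small evaluator for Qµ over a finite quantitative transition system. The tuple-based formula encoding and the function name are assumptions of this sketch; moreover, since over the reals the fixed-point stages need in general only converge in the limit (see the inductive definition in the next paragraph), the iteration below is cut off after a fixed number of rounds and therefore only approximates the least or greatest fixed point.

import math

def evaluate(phi, states, succ, delta, pred, env=None):
    """Evaluate a Qµ formula, given as nested tuples, on a finite QTS.

    succ  -- dict: state -> list of successors
    delta -- dict: (state, state) -> discount of that edge
    pred  -- dict: predicate name -> (dict: state -> value in [0, ∞])
    """
    env = dict(env or {})
    ev = lambda psi, e=None: evaluate(psi, states, succ, delta, pred,
                                      e if e is not None else env)
    op = phi[0]
    if op == 'atom':                                   # |P_i − c|
        _, i, c = phi
        return {v: abs(pred[i][v] - c) for v in states}
    if op in ('and', 'or'):
        f, g = ev(phi[1]), ev(phi[2])
        pick = min if op == 'and' else max
        return {v: pick(f[v], g[v]) for v in states}
    if op == 'scale':                                  # d · ϕ
        f = ev(phi[2])
        return {v: phi[1] * f[v] for v in states}
    if op == 'diamond':                                # sup of discounted successor values
        f = ev(phi[1])
        return {v: max((delta[v, w] * f[w] for w in succ[v]), default=0.0)
                for v in states}
    if op == 'box':                                    # inf of inversely discounted values
        f = ev(phi[1])
        return {v: min((f[w] / delta[v, w] for w in succ[v]), default=math.inf)
                for v in states}
    if op == 'var':
        return env[phi[1]]
    if op in ('mu', 'nu'):                             # truncated Knaster–Tarski iteration
        current = {v: (0.0 if op == 'mu' else math.inf) for v in states}
        for _ in range(100):
            nxt = ev(phi[2], {**env, phi[1]: current})
            if nxt == current:
                break
            current = nxt
        return current
    raise ValueError(op)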
Note that all operators in Qµ are monotone. This guarantees the existence
of the least and greatest fixed-points, and their inductive definition according
to the Knaster–Tarski Theorem: Given a formula µX.ϕ and a quantitative
transition system K, we obtain the inductive sequence of functions gα (for
ordinals α) where g0 := 0, gα+1 := [[ϕ]]K[X←gα ] , and gλ := limα<λ [[ϕ]]K[X←gα ]
for limit ordinals λ. Then [[µX.ϕ]]K = gγ for the minimal ordinal γ with
gγ = gγ+1 . For νX.ϕ the dual induction applies, starting with g0 := ∞.

4.7.2 Quantitative parity games
Quantitative parity games are modest modifications of classical parity games.
Quantitative values are assigned to final positions, where they are interpreted
as the payoff for Player 0 at that position, and to moves, where they are
interpreted as discounts to the payoff when the play goes through that move.
A quantitative parity game is a tuple G = (V, V0 , V1 , E, δ, λ, Ω) extend-
ing a classical parity game by two functions δ : E → R+ , assigning to every
move a discount factor, and λ : {v ∈ V : vE = ∅} → R+∞ assigning to
every terminal position a payoff value. The outcome p(π) of a finite play
π = v0 . . . vk , ending at a terminal position vk , is computed by multiplying
all discount factors seen in the play with the payoff value at the final node,

p(v0 v1 . . . vk ) = δ(v0 , v1 ) · δ(v1 , v2 ) · . . . · δ(vk−1 , vk ) · λ(vk ).
The outcome of an infinite play depends only on the lowest priority seen
infinitely often. We assign the value 0 to every infinite play in which the
lowest priority seen infinitely often is odd, and ∞ to those in which it is
even. Player 0 wants to maximise the outcome whereas Player 1 wants to
minimise it.
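The outcome of a finite play is just the product of the discounts along it with the final payoff; a short helper (with assumed input conventions) makes this precise.

from functools import reduce

def outcome(play, delta, payoff):
    """p(v_0 ... v_k): multiply the edge discounts along the play with the payoff at its end."""
    discounts = (delta[v, w] for v, w in zip(play, play[1:]))
    return reduce(lambda x, y: x * y, discounts, 1.0) * payoff[play[-1]]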
Determinacy. A quantitative game is determined if, for each position v,
the highest outcome that Player 0 can enforce from v and the lowest outcome
Player 1 can assure converge,

supσ∈Γ0 inf ρ∈Γ1 p(πσ,ρ (v)) = inf ρ∈Γ1 supσ∈Γ0 p(πσ,ρ (v)) =: valG(v),
where Γ0 , Γ1 are the sets of all possible strategies for Player 0, Player 1. The
outcome defined in this way is the value of G at v.
One of the fundamental properties of qualitative parity games is the posi-
tional determinacy . Unfortunately, this does not generalise to quantitative
parity games. Example 4.33 shows that there are simple quantitative games
where no player has a positional winning strategy. In the depicted game there
is no optimal strategy for Player 0, and even if one fixes an approximation of
the game value, Player 0 needs infinite memory to reach this approximation,
because she needs to loop in the second position as long as Player 1 looped
in the first one to make up for the discounts. (By convention, we depict
positions of Player 0 with a circle and of Player 1 with a square and the
number inside is the priority for non-terminal positions and the payoff in
terminal ones.)
Example 4.33 (The figure is omitted here: Player 1 can loop at the first position
with discount 1/2, Player 0 can loop at the second position with discount 2, and
the play can end in a terminal position with payoff 1.)
4.7.3 Model-checking games for Qµ
Given a quantitative transition system K = (S, T, δS , Pi ) and a Qµ-formula
ψ in negation normal form, we define the model-checking game MC[K, ψ] =
(V, V0 , V1 , E, δ, λ, Ω), as a quantitative parity game.
Positions and moves. As in games for Lµ , positions are the pairs (ϕ, s),
where ϕ is a subformula of ψ, and s ∈ S is a state of K; in addition we
have two special positions (0) and (∞). Positions (|Pi − c|, s), (0), and (∞)
are terminal positions. Moves are defined as in the games for Lµ , with the
following modifications: positions of the form (♦ψ, s) have either a single
successor (0), in case s is a terminal state in K, or one successor (ψ, s0 )
for every s0 ∈ sT . Analogously, positions of the form (□ψ, s) have a single
successor (∞), if sT = ∅, or one successor (ψ, s0 ) for every s0 ∈ sT otherwise.
Positions of the form (d · ψ, s) have a unique successor (ψ, s). Priorities are
assigned in the same way as in the model-checking games for Lµ .
Discounts and payoffs. The discount of an edge is d for transitions from
positions (d · ψ, s), it is δS (s, s0 ) for transitions from (♦ψ, s) to (ψ, s0 ), it
is 1/δS (s, s0 ) for transitions from (□ψ, s) to (ψ, s0 ), and 1 for all outgoing
transitions from other positions. The payoff function λ assigns |[[Pi ]](s) − c|
to all positions (|Pi − c|, s), ∞ to position (∞), and 0 to position (0).
To prove that MC[K, ψ] is indeed an appropriate model-checking game
it must be shown that the value of the game starting from (ψ, v) coincides with
the value of the formula evaluated on K at v. In the qualitative case, that
means that ψ holds in K, v if Player 0 wins in G from (ψ, v).
Theorem 4.34 For every formula ψ in Qµ, every quantitative transition
system K, and v ∈ K, the game MC[K, ψ] is determined and

valMC[K, ψ](ψ, v) = [[ψ]]K (v).

This is shown by Fischer et al. [2009] using a generalisation of the unfolding
method for parity games.
Example 4.35 A model-checking game for ϕ = µX.(P ∨ 2 · ♦X) on the
QTS Q shown in Figure 4.1(a), with P (1) = 0, P (2) = 1, is depicted in
Figure 4.1(b). The nodes are labelled with the corresponding subformulae of
ϕ, and the state of Q. Only the edges with discount factor different from 1
are labelled.
Note that in this game only Player 0 is allowed to make any choices. When
we start at the top node, corresponding to an evaluation of ϕ at 1 in Q, the
only choice she has to make is either to keep playing (by looping), or to end
the game by moving to a terminal position.

4.7.4 Defining game values in Qµ
As in the case of parity games and LFP (and Lµ), also the connection
between quantitative parity games and quantitative µ-calculus goes back
Figure 4.1 Example (a) QTS and (b) model-checking game for µX.(P ∨ 2 · ♦X)
(diagrams omitted)
and forth. We have seen that quantitative parity games are appropriate
model-checking games for the evaluation of Qµ-formulae on quantitative
transition systems. For the other direction, we now show that values of
positions in quantitative parity games (with a bounded number of priorities)
are definable in Qµ. It is convenient to describe a quantitative parity game
G = (V, V0 , V1 , E, δG , λ, ΩG ) with priorities in {0, . . . d − 1} by a quantitative
transition system QG = (V, E, δ, V0 , V1 , Λ, Ω), where Vi (v) = ∞ for positions
of Player i, and Vi (v) = 0 otherwise, where Ω(v) = ΩG (v) when vE 6= ∅ and
Ω(v) = d otherwise, with discount function

δ(v, w) = δG (v, w) for v ∈ V0 , and δ(v, w) = (δG (v, w))−1 for v ∈ V1 ,

and with payoff predicate Λ(v) := λ(v) in case vE = ∅ and Λ(v) = 0
otherwise.
We then modify the Lµ -formulae Wind constructed in Section 4.5.1 to
quite similar Qµ-formulae
QWind = νX0 µX1 νX2 . . . λXd−1 ⋁j<d ((V0 ∧ Pj ∧ ♦Xj ) ∨ (V1 ∧ Pj ∧ □Xj )) ∨ Λ,

where Pi := ¬(µX.(2 · X ∨ |Ω − i|)). Note that Pi (v) = ∞ when Ω(v) = i
and Pi (v) = 0 otherwise. The formula QWind is therefore analogous to the
formula Wind in the qualitative case.
Theorem 4.36 For every d ∈ N, the value of any quantitative parity game
with priorities in {0, . . . d − 1} coincides with the value of QWind on the
associated transition system.

Exercise 4.8 Adapt the proof of Theorem 4.20 to get a proof of Theo-
rem 4.36.
References
A. Arnold. The mu-calculus alternation-depth is strict on binary trees. RAIRO
Informatique Théorique et Applications, 33:329–339, 1999.
J. Bradfield. The modal µ-calculus alternation hierarchy is strict. Theoretical
Computer Science, 195:133–153, 1998.
A. Chandra and D. Harel. Structure and complexity for relational queries. Journal
of Computer and System Sciences, 25:99–128, 1982.
E. Dahlhaus. Skolem normal forms concerning the least fixed point. In E. Börger, ed-
itor, Computation Theory and Logic, number 270 in Lecture Notes in Computer
Science, pages 101–106. Springer Verlag, 1987.
A. Dawar and E. Grädel. The descriptive complexity of parity games. In Proceedings
of the 22nd Annual Conference of the European Association for Computer Science
Logic CSL 2008, pages 354–368, 2008.
A. Dawar, E. Grädel, and S. Kreutzer. Inflationary fixed points in modal logic.
ACM Transactions on Computational Logic, 5:282 – 315, 2004.
A. Dawar, E. Grädel, and S. Kreutzer. Backtracking games and inflationary fixed
points. Theoretical Computer Science, 350:174–187, 2006.
W. F. Dowling and J. H. Gallier. Linear-time algorithms for testing the satisfiability
of propositional Horn formulae. Journal of Logic Programming, 1(3):267–284,
1984.
S. Dziembowski. Bounded-variable fixpoint queries are PSPACE-complete. In
10th Annual Conference on Computer Science Logic CSL 96. Selected papers,
Lecture Notes in Computer Science Nr. 1258, pages 89–105. Springer, 1996.
H.-D. Ebbinghaus and J. Flum. Finite Model Theory. Springer, 2nd edition, 1999.
A. Emerson and C. Jutla. Tree automata, mu-calculus and determinacy. In Proc.
32nd IEEE Symp. on Foundations of Computer Science, pages 368–377, 1991.
D. Fischer, E. Grädel, and L. Kaiser. Model checking games for the quantitative
µ-calculus. Theory of Computing Systems, 2009.
E. Grädel. Finite Model Theory and Descriptive Complexity. In Finite Model
Theory and Its Applications. Springer-Verlag, 2007.
E. Grädel and I. Walukiewicz. Positional determinacy of games with infinitely many
priorities. Logical Methods in Computer Science, 2006.
E. Grädel, P. G. Kolaitis, L. Libkin, M. Marx, J. Spencer, M. Y. Vardi, Y. Venema,
and S. Weinstein. Finite Model Theory and Its Applications. Springer, Berlin,
2007.
R. Greenlaw, J. Hoover, and W. Ruzzo. Limits to Parallel Computation. P-
Completeness Theory. Oxford University Press, Oxford, 1995.
N. Immerman. Relational queries computable in polynomial time. Information and
Control, 68:86–104, 1986.
A. Itai and J. Makowsky. Unification as a complexity measure for logic programming.
Journal of Logic Programming, 4:105–117, 1987.
M. Jurdziński. Small progress measures for solving parity games. In Proceedings of
17th Annual Symposium on Theoretical Aspects of Computer Science, STACS
2000, Lecture Notes in Computer Science Nr. 1770, pages 290–301. Springer,
2000.
M. Jurdziński. Deciding the winner in parity games is in UP ∩ Co-UP. Information
Processing Letters, 68:119–124, 1998.
M. Jurdziński, M. Paterson, and U. Zwick. A deterministic subexponential algorithm
for solving parity games. In Proceedings of the ACM-SIAM Symposium on Discrete
Algorithms, SODA 2006, pages 117–123, 2006.
S. Kreutzer. Expressive equivalence of least and inflationary fixed point logic. Annals
of Pure and Applied Logic, 130:61–78, 2004.
D. Martin. Borel determinacy. Annals of Mathematics, 102:336–371, 1975.
A. Mostowski. Games with forbidden positions. Technical Report 78, University of
Gdańsk, 1991.
M. Otto. Bisimulation-invariant Ptime and higher-dimensional mu-calculus. Theo-
retical Computer Science, 224:237–265, 1999.
M. Vardi. The complexity of relational query languages. In Proceedings of the 14th
ACM Symposium on the Theory of Computing, pages 137–146, 1982.
E. Zermelo. Über eine Anwendung der Mengenlehre auf die Theorie des Schach-
spiels. In Proc. 5th Internat. Congr. Mathematicians, volume 2, pages 501–504,
Cambridge, 1913.
W. Zielonka. Infinite games on finitely coloured graphs with applications to automata
on infinite trees. Theoretical Computer Science, 200:135–183, 1998.