
Algebraic Separation Logic

H.-H. Dang (a), P. Höfner (a,b), B. Möller (a)


(a) Institut für Informatik, Universität Augsburg, D-86159 Augsburg, Germany
(b) National ICT Australia Ltd. (NICTA), Sydney, Australia

Abstract
We present an algebraic approach to separation logic. In particular, we give an algebraic characterisation for assertions
of separation logic, discuss different classes of assertions and prove abstract laws fully algebraically. After that, we use
our algebraic framework to give a relational semantics of the commands of a simple programming language associated
with separation logic. On this basis we prove the frame rule in an abstract and concise way, parametric in the operator
of separating conjunction, of which two particular variants are discussed. In doing so we also show how to algebraically
formulate the requirement that a command preserves certain variables. The algebraic view not only yields new
insights on separation logic but also shortens proofs due to a pointfree representation. It is largely first-order and
hence enables the use of off-the-shelf automated theorem provers for verifying properties at an abstract level.
Keywords: formal semantics, separation logic, frame rule, algebra, semirings, quantales

1. Introduction

Two prominent formal methods for reasoning about the correctness of programs are Hoare logic [20] and Dijkstra's
wp-calculus [18]. These approaches, although foundational, lack expressiveness for shared mutable data structures,
i.e., structures where updatable fields can be referenced from more than one point. To overcome this deficiency,
Reynolds, O'Hearn and others have developed separation logic for reasoning about complex and shared data
structures [42, 46]. Their approach extends Hoare logic by a “spatial conjunction” and adds assertions to express separation
between memory regions. In particular, for arbitrary assertions p and q the conjunction p ∗ q asserts that p and q both
hold, but each for a separate part of the storage. This allows expressing aspects of locality, e.g., that mutation of a
single cell in the part that satisfies p will not affect the part that satisfies q. Hence, when reasoning about a program,
one may concentrate on the memory locations that are actually touched by its execution and then embed them into a
larger memory context.
The basic idea of the spatial conjunction is related to early work of Burstall [7], which was then explicitly described
in an intuitionistic logic by Reynolds and others. This classical version of separation logic was extended by Reynolds
with a command language that allows altering separate ranges and includes pointer arithmetic. O’Hearn extended this
language to concurrent programs that work on shared mutable data structures [41].
In this paper we present an abstract algebraic approach to separation logic. Our approach is based on quantales [39].
These structures, also called standard Kleene algebras [10], are a special case of the fundamental algebraic structure
of idempotent semirings, which have been used in various applications ranging from concurrency control [9, 21, 22]
to program analysis [30] and semantics [36]. In particular, there are already algebraic characterisations for Hoare
logic [28, 37] and the wp-calculus of Dijkstra [38], which serve as the basis of our current approach.
The algebraic approach achieves several goals. First, the view becomes more abstract, which leads to a considerable
reduction of detail and hence allows simpler and more concise proofs. On some occasions also additional
precision is gained. Second, the algebraic abstraction places the topic into a more general context and therefore allows
re-use of a large body of existing theory. Last but not least, since it is largely formulated in pure first-order logic, the

Email addresses: h.dang@informatik.uni-augsburg.de (H.-H. Dang), peter.hoefner@nicta.com.au (P. Höfner),
moeller@informatik.uni-augsburg.de (B. Möller)

Preprint submitted to Elsevier March 31, 2011


algebraic view enables the use of off-the-shelf automated theorem provers for verifying properties at the more abstract
level.
The paper is organised as follows. In Section 2 we recapitulate syntax and semantics of the expressions and
formulas of separation logic. Section 3 gives the semantics of assertions. After providing the algebraic background
in Section 4, we shift from the validity semantics of separation logic to one based on the set of states that satisfy
an assertion. Abstracting from that set-theoretic view yields an algebraic interpretation of assertions in the setting
of idempotent semirings and quantales. In Section 6 we discuss special classes of assertions: intuitionistic assertions
that do not specify the heap exactly, pure assertions which do not depend on the heap at all, precise assertions to
characterise an unambiguous heap portion, and supported assertions that guarantee a subheap for all heaps that satisfy
the assertion. After that, we extend our algebra to cover the command part of separation logic in Section 7 and finally
in Section 8 we give an algebraic proof for the frame rule including a simple treatment of its side conditions. We
conclude with a short outlook.

2. Basic Definitions
Separation logic, as an extension of Hoare logic, allows reasoning not only about explicitly named program
variables, but also about anonymous variables in dynamically allocated storage. Therefore a program state in separation
logic consists of a store and a heap. In the remainder we consistently write s for stores and h for heaps.
To simplify the formal treatment, one defines values and addresses as integers, stores and heaps as partial functions
from variables or addresses to values and states as pairs of stores and heaps:
Values = Z,
{nil} ∪· Addresses ⊆ Values ,
Stores = V ; Values ,
Heaps = Addresses ; Values ,
States = Stores × Heaps ,
where V is the set of program variables, ∪· denotes the disjoint union on sets and M ; N denotes the set of partial
functions between M and N. The constant nil is a value for pointers that denotes an improper reference like null in
programming languages like Java or C; by the above definitions, nil is not an address and hence heaps do not assign
values to nil.
As usual we denote the domain of a relation (or partial function) R by dom(R):
dom(R) =df {x : ∃ y : (x, y) ∈ R} .
In particular, the domain of a store dom(s) denotes all currently used program variables and dom(h) is the set of all
currently allocated addresses on a heap h.
As in [33] and for later definitions we also need an update operator. It is used to model changes in stores and
heaps. Let f1 and f2 be partial functions. Then we define
f1 | f2 =df f1 ∪ {(x, y) : (x, y) ∈ f2 ∧ x ∉ dom(f1)} . (1)
The function f1 updates the function f2 with all possible pairs of f1 in such a way that f1 | f2 is again a partial function.
The domain of the right argument of ∪ is disjoint from that of f1 . In particular, f1 | f2 can be seen as an extension
of f1 to dom( f1 ) ∪ dom( f2 ). In later definitions we abbreviate an update {(x, y)} | f on a single variable or address by
omitting the set-braces and simply writing (x, y) | f instead.
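The update operator has a direct counterpart on Python dictionaries. The following sketch (our own illustration, not part of the paper; the variable names are ad hoc) models stores and heaps as dicts:

```python
def update(f1: dict, f2: dict) -> dict:
    """The operator f1 | f2 of Equation (1): all pairs of f1, plus those
    pairs of f2 whose key lies outside dom(f1)."""
    return {**f2, **f1}  # entries of f1 override those of f2 on common keys

s = {"x": 1, "y": 2}          # a small store
print(update({"x": 7}, s))    # {'x': 7, 'y': 2} -- the single-variable update (x, 7) | s
```

Note that `f1 | f2` is again a partial function: on a common key the pair from f1 wins, exactly as in Equation (1).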
Expressions are used to denote values or Boolean conditions on stores and are independent of the heap, i.e., they
only need the store component of a given state for their evaluation. Informally, exp-expressions are simple arithmetical
expressions over variables and values, while bexp-expressions are Boolean expressions over simple comparisons and
true, false. Their syntax is given by

var ::= x | y | z | ...


exp ::= 0 | 1 | 2 | ... | var | exp ± exp | ...
bexp ::= true | false | exp = exp | exp < exp | ...

The semantics e^s of an expression e w.r.t. a store s is straightforward (assuming that all variables occurring in e
are contained in dom(s)). For example,

z^s = z for all z ∈ Values = Z ,    true^s = true    and    false^s = false .

3. Assertions
Assertions play an important rôle in separation logic. They are used as predicates to describe properties of heaps
and stores and as pre- or postconditions in programs, like in Hoare logic:
assert ::= bexp | ¬ assert | assert ∨ assert | ∀ var. assert |
           emp | exp ↦ exp | assert ∗ assert | assert −∗ assert .

In the remainder we consistently write p, q and r for assertions of separation logic. Assertions are split into two
parts: the “classical” ones from predicate logic and four new ones that express properties of the heap. The former
are supplemented by the logical connectives ∧ , → and ∃ that are defined, as usual, by p ∧ q =df ¬ (¬ p ∨ ¬ q),
p → q =df ¬ p ∨ q and ∃ v : p =df ¬∀ v : ¬ p .
The semantics of assertions is given by the relation s, h |= p of satisfaction. Informally, s, h |= p holds iff the
state (s, h) satisfies the assertion p ; an assertion p is called valid iff p holds in every state and, finally, p is satisfiable
iff there exists a state (s, h) which satisfies p. The semantics is defined inductively as follows (e.g. [46]).
s, h |= b          ⇔df  b^s = true
s, h |= ¬p         ⇔df  s, h ⊭ p
s, h |= p ∨ q      ⇔df  s, h |= p or s, h |= q
s, h |= ∀ v : p    ⇔df  ∀ x ∈ Z : ((v, x) | s), h |= p
s, h |= emp        ⇔df  h = ∅
s, h |= e1 ↦ e2    ⇔df  h = {(e1^s , e2^s)}
s, h |= p ∗ q      ⇔df  ∃ h1 , h2 ∈ Heaps : dom(h1) ∩ dom(h2) = ∅ and
                         h = h1 ∪ h2 and s, h1 |= p and s, h2 |= q
s, h |= p −∗ q     ⇔df  ∀ h′ ∈ Heaps : (dom(h′) ∩ dom(h) = ∅ and s, h′ |= p)
                         implies s, h′ ∪ h |= q .
Here, b is a bexp-expression, p, q are assertions and e1 , e2 are exp-expressions. The first four clauses do not consider
the heap; they are well known from predicate logic or Hoare logic [20]. The remaining lines describe the new parts
of separation logic: For an arbitrary state (s, h), emp ensures that the heap h is empty and contains no addressable
cells. An assertion e1 ↦ e2 characterises states with the singleton heap that has exactly one cell at the address e1^s
with the value e2^s . To reason about more complex heaps, the separating conjunction ∗ is used. It allows expressing
properties of heaps that result from merging smaller disjoint heaps, i.e., heaps with disjoint domains. Note that there
is no separating operator for stores. Later, in Section 8, we will introduce an operator ∗s for splitting stores and heaps
simultaneously.
A state (s, h) satisfies the separating implication p −∗ q iff, whenever its heap h is extended with a disjoint heap h′
satisfying p , the state with identical store s and the combined heap h ∪ h′ is guaranteed to satisfy q (cf. Figure 1). The
diagram for h ∪ h′ might suggest that the heaps are adjacent after the join; the intention is only to illustrate that
the united heap satisfies q.

[Figure 1 depicts a heap h satisfying p −∗ q, a disjoint heap h′ satisfying p, and their union h ∪ h′ satisfying q.]

Figure 1: Separating implication
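The clauses above can be made executable. The following sketch is our own illustration, not from the paper: assertions are modelled as Boolean-valued functions on states, and by restricting addresses and values to a tiny finite range the quantifier in the clause for −∗ becomes a finite search.

```python
from itertools import combinations

ADDRS = [1, 2, 3]              # a tiny finite address range (real heaps range over Z)
VALS = [0, 1]

def all_heaps():
    """All partial functions from ADDRS to VALS, as dicts."""
    hs = [{}]
    for a in ADDRS:
        hs = hs + [{**h, a: v} for h in hs for v in VALS]
    return hs

HEAPS = all_heaps()

def disjoint(h1, h2):
    return not (h1.keys() & h2.keys())

# assertions are modelled as functions from (store, heap) to bool
def emp(s, h):
    return h == {}

def points_to(x, v):           # the assertion  x ↦ v
    return lambda s, h: h == {s[x]: v}

def sep(p, q):                 # separating conjunction p ∗ q: split h into disjoint parts
    def holds(s, h):
        items = list(h.items())
        return any(p(s, dict(part)) and q(s, dict(set(items) - set(part)))
                   for n in range(len(items) + 1) for part in combinations(items, n))
    return holds

def wand(p, q):                # separating implication p −∗ q: check all disjoint extensions
    return lambda s, h: all(q(s, {**h, **hp})
                            for hp in HEAPS if disjoint(h, hp) and p(s, hp))

s = {"x": 1, "y": 2}
print(sep(points_to("x", 0), points_to("y", 0))(s, {1: 0, 2: 0}))                     # True
print(wand(points_to("y", 1), sep(points_to("x", 0), points_to("y", 1)))(s, {1: 0}))  # True
```

The second print illustrates Figure 1: the heap {1: 0} satisfies (y ↦ 1) −∗ ((x ↦ 0) ∗ (y ↦ 1)), since any disjoint extension satisfying y ↦ 1 yields a combined heap satisfying the separating conjunction.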

4. Quantales

To present our algebraic semantics of separation logic in the next section, we now prepare the algebraic background
by defining the main algebraic structure for assertions.

Definition 4.1.
(a) A quantale [39, 47] is a structure (S , ≤, ·, 1) where (S , ≤) is a complete lattice, (S , ·, 1) is a monoid and
multiplication · distributes over arbitrary suprema: for a ∈ S and T ⊆ S ,

a · (⊔T) = ⊔{a · b : b ∈ T}   and   (⊔T) · a = ⊔{b · a : b ∈ T} ,    (2)

where, for U ⊆ S , the element ⊔U is the supremum of U. The least and greatest elements of S are denoted by
0 and ⊤, resp. The infimum and supremum of two elements a, b ∈ S are denoted by a ⊓ b and a + b, resp. We
assume for the rest of this paper that · binds tighter than ⊓ and +. The definition implies that · is strict, i.e., we
have 0 · a = 0 = a · 0 for all a ∈ S . The notion of a quantale is equivalent to that of a standard Kleene algebra [10]
and a special case of the notion of an idempotent semiring.
(b) A quantale is called commutative iff a · b = b · a for all a, b ∈ S .
(c) A quantale is called Boolean iff its underlying lattice is distributive and complemented, whence a Boolean algebra.
In such a structure, the complement of a is denoted by ¬a; in particular, ⊤ = ¬0.

An important Boolean quantale is REL, the algebra of binary relations over a set under set inclusion ⊆ and relation
composition ; .
A useful property in a Boolean quantale is shunting:

a ⊓ b ≤ c ⇔ b ≤ c + ¬a .    (shu)

In particular, a ⊓ b ≤ 0 ⇔ b ≤ ¬a.

Definition 4.2.
(a) In any quantale, the right residual a\b [4] exists and is characterised by the Galois connection

x ≤ a\b ⇔df a · x ≤ b . (3)

This specifies a\b as the greatest solution of the inequation a · x ≤ b. Therefore, \ is a pseudo-inverse to
composition, which is useful in many circumstances.
(b) Symmetrically, the left residual b/a can be defined. If the underlying quantale is commutative then both residuals
coincide, i.e., a\b = b/a. In REL, one has R1\R2 = ¬(R1˘ ; ¬R2) and R1/R2 = ¬(¬R1 ; R2˘), where ˘ denotes
relational converse.
(c) In a Boolean quantale, the right detachment a▷b can be defined based on the left residual as

a▷b =df ¬(¬a/b) .

In REL, R1▷R2 = R1 ; R2˘.
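These closed REL forms can be checked by brute force on a small base set. The following self-contained script is our own sanity check, not part of the paper: residuals are computed as greatest solutions of their defining inequations and compared with the relational formulas above.

```python
from itertools import combinations, product

U = [0, 1]                                     # a two-element base set
PAIRS = [(x, y) for x in U for y in U]
RELS = [frozenset(c) for n in range(len(PAIRS) + 1) for c in combinations(PAIRS, n)]
FULL = frozenset(PAIRS)

def comp(r, s):                                # relational composition r ; s
    return frozenset((x, z) for (x, y) in r for (y2, z) in s if y == y2)

def conv(r):                                   # converse r˘
    return frozenset((y, x) for (x, y) in r)

def neg(r):                                    # complement w.r.t. the full relation
    return FULL - r

def rres(a, b):                                # right residual a\b: greatest x with a ; x <= b
    return frozenset().union(*[x for x in RELS if comp(a, x) <= b])

def lres(a, b):                                # left residual a/b: greatest x with x ; b <= a
    return frozenset().union(*[x for x in RELS if comp(x, b) <= a])

for a, b in product(RELS, repeat=2):
    assert rres(a, b) == neg(comp(conv(a), neg(b)))    # a\b = ¬(a˘ ; ¬b)
    assert lres(a, b) == neg(comp(neg(a), conv(b)))    # a/b = ¬(¬a ; b˘)
    assert comp(a, conv(b)) == neg(lres(neg(a), b))    # detachment: a▷b = ¬(¬a/b) = a ; b˘

print("residual and detachment laws hold in REL over", U)
```

Since composition distributes over arbitrary unions, the union of all solutions of a ; x ≤ b is itself a solution, which justifies computing the residual as that union.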

By de Morgan's laws, the Galois connection for / transforms into the exchange law

a▷b ≤ x ⇔ ¬x · b ≤ ¬a    (exc)

for ▷ that generalises the Schröder rule of relational calculus. An important consequence is the Dedekind rule [27]

a ⊓ (b · c) ≤ (a▷c ⊓ b) · c . (Ded)

The operator ▷ is isotone in both arguments.


4
Definition 4.3. In every quantale we define a test as an element t ≤ 1 that has a complement ¬t relative to 1, i.e.,
t + ¬t = 1 and t · ¬t = 0 = ¬t · t. The set of all tests of S is denoted by test(S ). It is closed under + and ·, which
on tests coincide with join and meet, resp., and forms a Boolean algebra with 0 and 1 as its least and greatest elements.

In a Boolean quantale, for each a the element a ⊓ 1 is a test with complement ¬(a ⊓ 1) = ¬a ⊓ 1; in particular, every
element below 1 is a test. Moreover, according to [35], for a test t and arbitrary elements a, b ∈ S

t · (a ⊓ b) = t · a ⊓ b = t · a ⊓ t · b . (testdist)

As a direct consequence, the equation t1 · a ⊓ t2 · a = t1 · t2 · a holds for tests t1 , t2 and an arbitrary element a.
Algebraic structures in general are suitable for automated theorem proving. However, quantales are not easy to
encode and perform rather badly [12]. This is mainly due to the two distributivity laws (2). If one only uses the
distributivity laws of the form a · (b + c) = a · b + a · c and (b + c) · a = b · a + c · a one can use any first-order automated
theorem proving system (ATP system). Examples are Prover9 [32] and Waldmeister [6]. This simpler structure—
an idempotent semiring—is particularly suitable for ATP systems [24, 25]. For the purpose of this paper, we have
proven most of the theorems using ATP systems. Most of the input files can be found at a web page [23]. However, to
demonstrate the simplicity of our algebra, we will also present most of the proofs within the paper.

5. An Algebraic Model of Assertions

In this section we give an algebraic interpretation for the semantics of separation logic. The main idea is to switch
from the satisfaction-based semantics for single states to an equivalent set-based pointfree one where every assertion
is associated with the set of all states satisfying it. This simplifies proofs considerably. For an arbitrary assertion p we
therefore define its set-based semantics as

[[ p ]] =df {(s, h) : s, h |= p} .

Sets of states will be the elements of our algebra, which later will be abstracted to an arbitrary Boolean quantale. For
the standard Boolean connectives we obtain

[[ ¬p ]] = {(s, h) : s, h ⊭ p} = ¬[[ p ]] ,
[[ p ∨ q ]] = [[ p ]] ∪ [[ q ]] ,
[[ p ∧ q ]] = [[ p ]] ∩ [[ q ]] ,    [[ p → q ]] = ¬[[ p ]] ∪ [[ q ]] ,
[[ ∀ v : p ]] = {(s, h) : ∀ x ∈ Z : ((v, x) | s), h |= p} = ⋂_{x ∈ Z} {(s, h) : (((v, x) | s), h) ∈ [[ p ]]} ,
[[ ∃ v : p ]] = ¬[[ ∀ v : ¬p ]] = {(s, h) : ∃ x ∈ Z : ((v, x) | s), h |= p} = ⋃_{x ∈ Z} {(s, h) : (((v, x) | s), h) ∈ [[ p ]]} ,

where | is the update operation defined in Equation (1).


The emptiness assertion emp and the assertion operator ↦ are given by

[[ emp ]] = {(s, h) : h = ∅} ,
[[ e1 ↦ e2 ]] = {(s, h) : h = {(e1^s , e2^s)}} .

Next, we reformulate the separating conjunction ∗ algebraically as

[[ p ∗ q ]] = [[ p ]] ∪· [[ q ]] ,  where
P ∪· Q =df {(s, h ∪ h′) : (s, h) ∈ P ∧ (s, h′) ∈ Q ∧ dom(h) ∩ dom(h′) = ∅} .

This yields an algebraic embedding of separation logic assertions.

Theorem 5.1. The structure AS =df (P(States), ⊆ , ∪· , [[ emp ]]) is a commutative and Boolean quantale with P+Q =
P ∪ Q.
The proof is by straightforward calculations; it can be found in [11]. It is easy to show that [[ true ]] is the greatest element
in the above quantale, i.e., [[ true ]] = ⊤, since every state satisfies the assertion true. This implies immediately that
[[ true ]] is the neutral element for ⊓. However, in contrast to addition ∪, multiplication ∪· is in general not idempotent.

Example 5.2. In AS, the set [[ x ↦ 1 ]] consists of all states (s, h) that have the single-cell heap {(s(x), 1)}. We
calculate

[[ (x ↦ 1) ∗ (x ↦ 1) ]]
= [[ x ↦ 1 ]] ∪· [[ x ↦ 1 ]]
= {(s, h ∪ h′) : (s, h), (s, h′) ∈ [[ x ↦ 1 ]] ∧ dom(h) ∩ dom(h′) = ∅}
= ∅ .

In the last step, both h and h′ would have to be the same single-cell heap {(s(x), 1)}, so their domains could never
be disjoint. Therefore the result is empty. □
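Example 5.2 can be replayed mechanically. In the following sketch (our own encoding, not from the paper) stores and heaps are frozensets of pairs, so that states are hashable and can be collected in Python sets:

```python
def sep_mult(P, Q):
    """The operation ∪· on sets of states; a state is a (store, heap) pair of frozensets."""
    return {(s, h1 | h2)
            for (s, h1) in P for (t, h2) in Q
            if s == t and not {a for a, _ in h1} & {a for a, _ in h2}}

store = frozenset({("x", 5)})              # a store with s(x) = 5
P = {(store, frozenset({(5, 1)}))}         # [[ x ↦ 1 ]] restricted to this store
print(sep_mult(P, P))                      # set(): the two heaps can never be disjoint
```

Multiplying P with the unit {(store, ∅)} instead returns P itself, illustrating that [[ emp ]] is the neutral element of ∪·.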

As a check of the adequacy of our definitions we list a couple of properties.

Lemma 5.3. In AS, for assertions p, q, r, s, we have the inference rules

(p ∧ q) ∗ r ⇒ (p ∗ r) ∧ (q ∗ r)   and   from p ⇒ r and q ⇒ s infer p ∗ q ⇒ r ∗ s ,

where an inference rule reads as implication between the premises and the conclusion and p ⇒ q stands for
[[ p ]] ⊆ [[ q ]].

The second property denotes isotony of separating conjunction. More laws and examples can be found in [11].
For the separating implication the set-based semantics extracted from the definition in Section 3 is

[[ p −∗ q ]] = {(s, h) : ∀ h′ ∈ Heaps : (dom(h) ∩ dom(h′) = ∅ ∧ (s, h′) ∈ [[ p ]])
                                        ⇒ (s, h ∪ h′) ∈ [[ q ]]} .

This implies that separating implication corresponds to a residual.

Lemma 5.4. In AS, [[ p −∗ q ]] = [[ p ]]\[[ q ]] = [[ q ]]/[[ p ]].

Proof. By set theory and the definition of ∪· , we have

(s, h) ∈ [[ p −∗ q ]]
⇔ ∀ h′ : ((s, h′) ∈ [[ p ]] ∧ dom(h) ∩ dom(h′) = ∅ ⇒ (s, h ∪ h′) ∈ [[ q ]])
⇔ {(s, h ∪ h′) : (s, h′) ∈ [[ p ]] ∧ dom(h) ∩ dom(h′) = ∅} ⊆ [[ q ]]
⇔ {(s, h)} ∪· [[ p ]] ⊆ [[ q ]]

and therefore, for an arbitrary set R of states,

R ⊆ [[ p −∗ q ]]
⇔ ∀ (s, h) ∈ R : (s, h) ∈ [[ p −∗ q ]]
⇔ ∀ (s, h) ∈ R : {(s, h)} ∪· [[ p ]] ⊆ [[ q ]]
⇔ R ∪· [[ p ]] ⊆ [[ q ]] .

Hence, by definition of the residual, [[ p −∗ q ]] = [[ p ]]\[[ q ]]. The second equation follows immediately since
multiplication ∪· in AS commutes (cf. Theorem 5.1). □
Now all laws about −∗ given by Reynolds in [46] follow from the standard theory of residuals (e.g. [5]). Many
of these laws are proved algebraically in [11]. For example, the defining Galois connection in Equation (3) for the
residual specialises to

p ∗ q ≤ r ⇔ p ≤ (q −∗ r) .

Since for our assertions the order ≤ coincides with implication ⇒ , splitting this equivalence into two implications
yields the currying and decurrying rules.
Corollary 5.5. In separation logic the following inference rules hold: from p ∗ q ⇒ r infer p ⇒ (q −∗ r)
(currying), and from p ⇒ (q −∗ r) infer p ∗ q ⇒ r (decurrying).

In Reynolds's works it is often merely stated that these laws follow directly from the definition, although many authors
use these laws as an equivalent definition and refer to Reynolds. In fact, the separating implication was first defined
in [26] and used there as a particular instantiation of a Boolean BI algebra [45], i.e., a Boolean algebra with an
additional residuated commutative monoid structure. Hence Lemma 5.4 expresses exactly the expected equalities.
As a further example we prove the algebraic counterpart of the inference rule

q ∗ (q −∗ p) ⇒ p .

Lemma 5.6. Let S be a quantale. For a, b ∈ S the inequality b · (b\a) ≤ a holds. And therefore also (a/b) · b ≤ a.

Proof. By the definition of residuals we immediately get

b · (b\a) ≤ a ⇔ b\a ≤ b\a ⇔ true . □
Calculating at the abstract level of quantales often shortens the proofs. Moreover, the abstraction paves the way for
using first-order off-the-shelf theorem provers to verify properties, whereas a first-order theorem prover for separation
logic has yet to be developed and implemented (cf. Section 9).
Using the definitions of Section 4 we now give a concrete definition of the right detachment operator in the logic
itself. This operation is called septraction in the literature (see [49]).

Definition 5.7. p −⊛ q =df ¬(q −∗ ¬p). Abstractly, p −⊛ q = p▷q.

Lemma 5.8. s, h |= p −⊛ q ⇔ ∃ ĥ ∈ Heaps : h ⊆ ĥ ∧ s, ĥ |= p ∧ s, ĥ − h |= q.

The proof of Lemma 5.8 as well as other proofs are deferred to Appendix A.
In the appendix of [13], a couple of properties for septraction are listed. They can all easily be verified (for example
using off-the-shelf ATP systems).
So far, we have derived an algebraic structure for assertions and have presented the correspondence between the
operations of separation logic and the algebra. We sum up this correspondence in Table 1.

Name in SL                Symbol in SL    Name in quantales    Symbol in AS    Symbol in quantales

disjunction               ∨               addition/join        ∪               +
conjunction               ∧               meet                 ∩               ⊓
negation                  ¬               complement           ¬               ¬
implication               ⇒               natural order        ⊆               ≤
separating conjunction    ∗               multiplication       ∪·              ·
separating implication    −∗              residual             /               /
septraction               −⊛              detachment           ▷               ▷

Table 1: Correspondence between operators of separation logic and algebra

6. Special Classes of Assertions

In separation logic one distinguishes different classes of assertions [46]. We now give algebraic characterisations
for the most important classes of assertions, namely intuitionistic, pure, precise and supported assertions. Intuitionistic
assertions do not describe the domain of a heap exactly. Hence, when using these assertions one does not know whether
the heap contains additional anonymous cells. This is often the case when pointer references to some portions of a
heap are lost. Pure assertions are independent of the heap and therefore only express conditions on store variables.
In contrast to intuitionistic assertions, the precise ones point out a unique subheap which is relevant to their predicate.
Finally, supported assertions ensure for a given set of heaps that there exists a subheap for which the predicate already
holds. This class is used e.g. in [46] when reasoning about directed acyclic graphs.

6.1. Intuitionistic Assertions


The first assertion class for which we present a simple algebraic characterisation is the class of intuitionistic
assertions. Following [46], an assertion p is intuitionistic iff

∀ s ∈ Stores, ∀ h, h′ ∈ Heaps : (h ⊆ h′ ∧ s, h |= p) ⇒ s, h′ |= p . (4)

Intuitively, if a heap h satisfies an intuitionistic assertion p then every larger heap, i.e., h extended by arbitrary
cells, still satisfies p.

Theorem 6.1. In AS an element [[ p ]] is intuitionistic iff it satisfies

[[ p ]] ∪· [[ true ]] ⊆ [[ p ]] .

Proof.
    ∀ s, h, h′ : (h ⊆ h′ ∧ s, h |= p) ⇒ s, h′ |= p
⇔ {[ definition of true ]}
    ∀ s, h, h′ : (h ⊆ h′ ∧ s, h |= p ∧ s, (h′ − h) |= true) ⇒ s, h′ |= p
⇔ {[ set theory ]}
    ∀ s, h, h′ : (s, h |= p ∧ s, (h′ − h) |= true ∧ dom(h) ∩ dom(h′ − h) = ∅ ∧ h′ = h ∪ (h′ − h))
                 ⇒ s, h′ |= p
⇔ {[ ⇒ : set h′′ = h′ − h ; ⇐ : dom(h) ∩ dom(h′′) = ∅ ∧ h′ = h ∪ h′′ ⇒ h′′ = h′ − h ]}
    ∀ s, h, h′ : (∃ h′′ : s, h |= p ∧ s, h′′ |= true ∧ dom(h) ∩ dom(h′′) = ∅ ∧ h′ = h ∪ h′′) ⇒ s, h′ |= p
⇔ {[ logic ]}
    ∀ s, h′ : (∃ h, h′′ : s, h |= p ∧ s, h′′ |= true ∧ dom(h) ∩ dom(h′′) = ∅ ∧ h ∪ h′′ = h′) ⇒ s, h′ |= p
⇔ {[ definition of ∗ ]}
    ∀ s, h′ : s, h′ |= p ∗ true ⇒ s, h′ |= p □
Lifting this to an abstract level motivates the following definition.

Definition 6.2. In an arbitrary Boolean quantale S an element a is called intuitionistic iff it satisfies

a · ⊤ ≤ a . (5)

This inequation can be strengthened to an equation since its converse holds in arbitrary Boolean quantales. Elements
of the form a · ⊤ are also called vectors or ideals. These elements are well known, and therefore we obtain many
properties for free (e.g. [48, 31]). We only list some of them to show again the advantages of the algebra.
In particular, we focus on laws that describe the interaction of · and ⊓ for intuitionistic assertions.

Lemma 6.3. Consider a commutative Boolean quantale S , intuitionistic elements a, a′ ∈ S and arbitrary elements
b, c ∈ S . Then

(a) (a ⊓ b) · ⊤ ≤ a;
(b) a · b ≤ a ⊓ (b · ⊤);
(c) (a ⊓ b) · c ≤ a ⊓ (b · c);
(d) a · a′ ≤ a ⊓ a′.

Proof. To show Part (a) we calculate (a ⊓ b) · ⊤ ≤ a · ⊤ ≤ a. For a proof of Part (b) we have a · b ≤ a · ⊤ ≤ a and
a · b ≤ b · ⊤ by isotony of · and the assumption. Laws (c) and (d) can be proved analogously. □
Using the quantale AS, it is easy to see that none of these inequations can be strengthened to an equation. In
particular, multiplication (separating conjunction) and meet need not coincide.

Example 6.4. Consider a =df a′ =df [[ (x ↦ 1) ∗ true ]] = [[ x ↦ 1 ]] ∪· [[ true ]]. By this definition it is obvious that
a and a′ are intuitionistic. The definitions of Section 3 then immediately imply

a ∩ a′ = [[ x ↦ 1 ]] ∪· [[ true ]] ,
a ∪· a′ = [[ x ↦ 1 ]] ∪· [[ true ]] ∪· [[ x ↦ 1 ]] ∪· [[ true ]] = ∅ .

The last step follows from Example 5.2. □
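The contrast between meet and ∪· for intuitionistic elements can be observed in a finite model. The following is our own illustration (ad-hoc encoding, two addresses only): a is the set of states whose heap contains the cell (s(x), 1), which is exactly the finite-model version of [[ (x ↦ 1) ∗ true ]].

```python
def heaps(addrs, vals):
    """All partial functions from addrs to vals, as frozensets of pairs."""
    hs = [frozenset()]
    for ad in addrs:
        hs = hs + [h | {(ad, v)} for h in hs for v in vals]
    return hs

def sep_mult(P, Q):                 # the operation ∪· of Section 5
    return {(s, h1 | h2) for (s, h1) in P for (t, h2) in Q
            if s == t and not {x for x, _ in h1} & {x for x, _ in h2}}

store = frozenset({("x", 5)})
# a = [[ (x ↦ 1) ∗ true ]]: all states whose heap contains the cell (s(x), 1)
a = {(store, h) for h in heaps([5, 6], [0, 1]) if (5, 1) in h}

print(len(a & a) > 0)               # True: the meet a ∩ a = a is non-empty
print(sep_mult(a, a) == set())      # True: a ∪· a = ∅, the cell (5, 1) cannot be split off twice
```

This reproduces Example 6.4 in miniature: the meet of the two intuitionistic elements is non-empty while their separating conjunction is empty.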

The next class will be a proper subset of intuitionistic assertions where these operations indeed coincide.

6.2. Pure Assertions


An assertion p is called pure iff it is independent of the heaps of the states involved, i.e.,

p is pure ⇔df (∀ s ∈ Stores : ∀ h, h′ ∈ Heaps : s, h |= p ⇔ s, h′ |= p) . (6)

Examples of such assertions are x = 2, x = x + 2, false and true .

Theorem 6.5. In AS an element [[ p ]] is pure iff it satisfies, for all [[ q ]] and [[ r ]],

[[ p ]] ∪· [[ true ]] ⊆ [[ p ]]   and   [[ p ]] ∩ ([[ q ]] ∪· [[ r ]]) ⊆ ([[ p ]] ∩ [[ q ]]) ∪· ([[ p ]] ∩ [[ r ]]) .

For the proof—which is presented later—we need a number of auxiliary laws. The above theorem motivates the
following definition.

Definition 6.6. In an arbitrary Boolean quantale S an element a is called pure iff it satisfies, for all b, c ∈ S ,

a · ⊤ ≤ a , (7)
a ⊓ (b · c) ≤ (a ⊓ b) · (a ⊓ c) . (8)

Corollary 6.7. ⊤ is pure.


Property (8) states that pure elements distribute over meet. By this characterisation we can immediately
conclude

Corollary 6.8. Every pure element of a Boolean quantale is also intuitionistic.

Lemma 6.9. In a commutative Boolean quantale, Property (8) is equivalent to a▷⊤ ≤ a, where a▷⊤ forms the
downward closure of a.

Lemma 6.10. In a commutative Boolean quantale, an element a is pure iff one of the following equivalent properties
is satisfied.

(a) a · ⊤ ≤ a and ¬a · ⊤ ≤ ¬a.
(b) a = (a ⊓ 1) · ⊤.
(c) (a ⊓ b) · c = a ⊓ b · c.

A proof can be found in Appendix A. Part (a) also holds for non-commutative quantales; Part (b) characterises
pure elements as fixed points. Part (c) is according to [3]. Since the underlying quantale is commutative, there is also
the dual of Part (c), namely b · (a ⊓ c) = a ⊓ b · c.
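Characterisation (b) can be tested on the finite state model. The following is an illustration of ours, not from the paper (stores and heaps again encoded as frozensets): the pure assertion x = 2 equals (a ⊓ 1) · ⊤, where 1 = [[ emp ]] and ⊤ is the set of all states.

```python
def heaps(addrs, vals):
    """All partial functions from addrs to vals, as frozensets of pairs."""
    hs = [frozenset()]
    for ad in addrs:
        hs = hs + [h | {(ad, v)} for h in hs for v in vals]
    return hs

def sep_mult(P, Q):                 # the operation ∪· of Section 5
    return {(s, h1 | h2) for (s, h1) in P for (t, h2) in Q
            if s == t and not {x for x, _ in h1} & {x for x, _ in h2}}

HS = heaps([5, 6], [0, 1])
STORES = [frozenset({("x", v)}) for v in (2, 3)]
TOP = {(s, h) for s in STORES for h in HS}            # [[ true ]] = ⊤
ONE = {(s, h) for (s, h) in TOP if h == frozenset()}  # [[ emp ]], the unit of ∪·

# the pure assertion x = 2: independent of the heap
pure = {(s, h) for (s, h) in TOP if ("x", 2) in s}

print(sep_mult(pure & ONE, TOP) == pure)   # True: Lemma 6.10(b), a = (a ⊓ 1) · ⊤
```

Intersecting with [[ emp ]] keeps only the empty-heap representatives of the store condition, and multiplying with ⊤ restores all heaps, which is exactly why pure elements are fixed points of this operation.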
With these characterisations we can now prove the equivalence between the formulation in separation logic and
the algebraic one.

Proof of Theorem 6.5. By Lemma 6.10(b) and the definition of the elements of AS it is sufficient to show that the
following formulas are equivalent in separation logic:

∀ s ∈ Stores, ∀ h, h′ ∈ Heaps : (s, h |= p ⇔ s, h′ |= p) , (9)
∀ s ∈ Stores, ∀ h ∈ Heaps : (s, h |= p ⇔ s, h |= (p ∧ emp) ∗ true) . (10)

Since both assertions are universally quantified over stores we omit that quantification in the remainder and only keep
the quantifiers on heaps. Before proving this equivalence we simplify s, h |= (p ∧ emp) ∗ true. Using the definitions
of Section 3, we get for all h ∈ Heaps

s, h |= (p ∧ emp) ∗ true
⇔ ∃ h1 , h2 ∈ Heaps : dom(h1) ∩ dom(h2) = ∅ and h = h1 ∪ h2
  and s, h1 |= p and s, h1 |= emp and s, h2 |= true
⇔ ∃ h1 , h2 ∈ Heaps : dom(h1) ∩ dom(h2) = ∅ and h = h1 ∪ h2
  and s, h1 |= p and h1 = ∅
⇔ ∃ h2 ∈ Heaps : h = h2 and s, ∅ |= p
⇔ s, ∅ |= p .
Instantiating Equation (9) and using this result we obtain

∀ h, h′ ∈ Heaps : (s, h |= p ⇔ s, h′ |= p)
⇒ ∀ h ∈ Heaps : (s, h |= p ⇔ s, ∅ |= p)
⇔ ∀ h ∈ Heaps : (s, h |= p ⇔ s, h |= (p ∧ emp) ∗ true) .

For the converse direction, we have, for arbitrary s, h, h′, that s, h |= p ⇔ s, ∅ |= p ⇔ s, h′ |= p. □


To conclude this subsection on pure elements we list a few properties which can be proved very
easily with our algebraic approach.

Corollary 6.11. Pure elements form a Boolean lattice, i.e., they are closed under +, ⊓ and ¬. Moreover the lattice is
complete.

The following lemma shows that in the complete lattice of pure elements meet and join coincide with composition
and sum, respectively.

Lemma 6.12. Consider a commutative Boolean quantale S , pure elements a, a′ ∈ S and arbitrary elements b, c ∈ S .
Then

(a) a · b = a ⊓ b · ⊤;
(b) a · a′ = a ⊓ a′; in particular a · a = a and a · ¬a = 0.

Proof. For a proof of Part (a) we calculate, using Lemma 6.10(c), a ⊓ b · ⊤ = a ⊓ ⊤ · b = (a ⊓ ⊤) · b = a · b. To show
Part (b), we again use Lemma 6.10(c) and neutrality of 1 w.r.t. · to obtain a · a′ = a ⊓ a′ · 1 = a ⊓ a′. □
Many further properties, in particular for the interaction of pure assertions with residuals and detachments, can be
found in the appendix of [13]. In some cases we have situations analogous to the ∗ operator in Lemma 6.10(c),
where pure assertions can be pulled out of each argument of residuals and detachments.

6.3. Precise Assertions
Next we focus on the class of precise assertions. They play a major rôle in characterising best local actions in [8].
Intuitively they point out precise portions of heaps that satisfy the predicate. An assertion p is called precise iff for all
states (s, h), there is at most one subheap h′ of h for which s, h′ |= p, i.e.,

∀ s, h, h1 , h2 : (s, h1 |= p ∧ s, h2 |= p ∧ h1 ⊆ h ∧ h2 ⊆ h) ⇒ h1 = h2 .

According to [43], this definition is equivalent to distributivity of ∗ over ∧ .

Theorem 6.13. In AS an element [[ p ]] is precise iff it satisfies, for all [[ q ]] and [[ r ]],

([[ p ]] ∪· [[ q ]]) ∩ ([[ p ]] ∪· [[ r ]]) ⊆ [[ p ]] ∪· ([[ q ]] ∩ [[ r ]]) .

Hence in our setting, we can algebraically characterise precise assertions as follows.


Definition 6.14. In an arbitrary Boolean quantale S an element a is called precise iff for all b, c ∈ S

(a · b) u (a · c) ≤ a · (b u c) . (11)

Equation (11) can be strengthened to an equation since a · (b u c) ≤ (a · b) u (a · c) holds by isotony. Next we give
some closure properties for this assertion class which again can be proved fully algebraically.
Lemma 6.15. If a and a′ are precise then so is a · a′, i.e., precise assertions are closed under multiplication.

Proof. The proof is by straightforward calculation. For arbitrary elements b, c and precise elements a, a′, we have

(a · a′) · b u (a · a′) · c = a · (a′ · b) u a · (a′ · c) ≤ a · (a′ · b u a′ · c) ≤ a · a′ · (b u c) . □

Lemma 6.16. If a is precise and a′ ≤ a then a′ is precise, i.e., precise assertions are downward closed.

A proof can be found in [16].

Corollary 6.17. For an arbitrary assertion b and precise a, also a u b is precise.

Further useful properties are again listed in the appendix of [13].

6.4. Fully-allocated assertions


After giving algebraic characterisations for precise and intuitionistic elements we turn to the question whether
there exists a class of assertions that fulfils both properties. At first sight, searching for such assertions does not
seem sensible, since an assertion p that holds for every larger heap cannot unambiguously point out an exact
heap portion; this is stated in [46]. But assertions that hold only for completely allocated heaps are both precise
and intuitionistic. As a consequence such assertions may disable any allocation of further heap cells in an execution
of a program. The assertions in this class are called fully allocated and are characterised by

p is fully allocated ⇔df (∀ s, h : s, h |= p ⇒ dom(h) = Addresses) . (12)

Theorem 6.18. In AS an element [[p]] is fully allocated iff it satisfies

[[p]] ∪· [[¬emp]] ⊆ ∅ .

Proof.

∀ s, h : s, h |= p ⇒ dom(h) = Addresses
⇔ ∀ s, h : (s, h |= p ⇒ (∀ h′ : h ⊆ h′ ⇒ h′ ⊆ h))
⇔ ∀ s, h, h′ : (s, h |= p ⇒ (h ⊆ h′ ⇒ h′ ⊆ h))
⇔ ∀ s, h, h′ : (s, h |= p ⇒ ¬(h ⊆ h′ ∧ h′ − h ≠ ∅))
⇔ ∀ s, h′ : ¬(∃ h : s, h |= p ∧ h ⊆ h′ ∧ h′ − h ≠ ∅)
⇔ ∀ s, h′ : ¬(∃ h, h′′ : s, h |= p ∧ h′′ ≠ ∅ ∧ dom(h) ∩ dom(h′′) = ∅ ∧ h′ = h ∪ h′′)
⇔ ∀ s, h′ : s, h′ |= p ∗ ¬emp ⇒ false . □
Consequently in our algebraic setting we characterise this class as follows.

Definition 6.19. In an arbitrary Boolean quantale S an element a is called fully allocated iff

a · ¬1 ≤ 0 . (13)

Lemma 6.20. Every fully allocated element is also intuitionistic.

Proof. Let a be fully allocated; then we have a · > = a · (1 + ¬1) = a · 1 + a · ¬1 ≤ a · 1 = a. These (in)equations hold by
Boolean algebra, distributivity, a being fully allocated and neutrality of 1 w.r.t. · . □

Lemma 6.21. If a is fully allocated then a is also precise.

For the proof we need an auxiliary property.

Theorem 6.22. If a is fully allocated then a · b = a · (b u 1) holds.

Proof. We calculate

a · b = a · ((b u 1) + (b u ¬1)) = a · (b u 1) + a · (b u ¬1) = a · (b u 1) ,

since a · (b u ¬1) ≤ a · ¬1 ≤ 0 by isotony of · and the assumption. □
Intuitively, this theorem says that adding storage is only possible if the extra portion is empty. In particular, only
the store component can then be changed.
Now we are able to show Lemma 6.21.

Proof of 6.21. For a fully allocated element a and arbitrary b and c, we get

(a · b) u (a · c) = (a · (b u 1)) u (a · (c u 1)) = a · ((b u 1) u (c u 1)) = a · ((b u 1) · (c u 1)) ≤ a · (b u c) .

This holds by Theorem 6.22 (twice), (testdist), the fact that · coincides with u on test elements, and isotony of · and u. □

6.5. Supported Assertions


The last assertion class we consider in this section is that of supported assertions. These assertions characterise
pairs of heaps for which joint satisfaction of the assertion can be traced back to a common subheap. Such assertions
are used, for example, in [46] for reasoning about directed acyclic graphs.
An assertion p is called supported iff

∀ s, h1 , h2 : h1 , h2 are compatible ∧ s, h1 |= p ∧ s, h2 |= p
⇒ ∃ h0 : h0 ⊆ h1 ∧ h0 ⊆ h2 ∧ s, h0 |= p

By the assumption that h1 and h2 are compatible it is meant that they agree on their intersection, i.e., that h1 ∪ h2 is
again a function. Note that the store s is fixed in the definition. Later on, we will introduce an operator that also allows
decomposition of the store.
The following characterisation of supported elements is novel.
Theorem 6.23. In AS an element [[ p ]] is supported iff it satisfies, for all [[ q ]] and [[ r ]],

([[ p ]] ∪· [[ q ]]) ∩ ([[ p ]] ∪· [[ r ]]) ⊆ [[ p ]] ∪· (([[ q ]] ∪· [[ true ]]) ∩ ([[ r ]] ∪· [[ true ]])) .

The key idea in the proof of Theorem 6.23 is to use predicates p that precisely describe one heap and one store, i.e.,
for which the set [[p]] is a singleton containing exactly one state; we therefore define [[s, h]] =df {(s, h)}. Before
giving the proof we need two auxiliary lemmas. The first is simple set theory; therefore we skip its proof.
Lemma 6.24. We assume three arbitrary sets A, B and C. If B ⊆ C then we have C − A ⊆ C − B ⇔ B ⊆ A.
Hence, if A ⊆ C and B ⊆ C then C − A = C − B ⇔ A = B.
Next we give some simple properties of [[s, h]].

Lemma 6.25. For arbitrary heaps h, h′, store s and assertion p we have

(a) s, h′ |= [[s, h]] ⇔ h = h′. In particular, s, h |= [[s, h]].
(b) If h ⊆ h′ then s, h |= p ⇔ s, h′ |= p ∗ [[s, h′ − h]].
(c) s, h′ |= [[s, h]] ∗ true ⇔ h ⊆ h′.

The lengthy, but straightforward proof can be found in Appendix A. Now we give a proof of the above theorem.

Proof of Theorem 6.23. For the ⇒ -direction we assume that p is supported.
Let s, h |= p ∗ q ∧ p ∗ r. Then
s, h |= p ∗ q ∧ p ∗ r
⇔ {[ definition of ∗ ]}
∃ h1 , h2 : h1 ⊆ h ∧ h2 ⊆ h ∧ s, h1 |= p ∧ s, h − h1 |= q ∧ s, h2 |= p ∧ s, h − h2 |= r
⇒ {[ p supported, h1 ∪ h2 ⊆ h is a function ]}
∃ h′ : s, h′ |= p ∧ h′ ⊆ h1 ∧ h′ ⊆ h2 ∧ s, h − h1 |= q ∧ s, h − h2 |= r
⇒ {[ h − h1 ⊆ h − h′ , s, h1 − h′ |= true, analogously for h2 ]}
∃ h′ : s, h′ |= p ∧ s, h − h′ |= q ∗ true ∧ s, h − h′ |= r ∗ true
⇔ {[ definition of ∗ ]}
s, h |= p ∗ (q ∗ true ∧ r ∗ true)
Next we show the other direction. For that we assume, for all q, r,

s, h |= p ∗ q ∧ p ∗ r ⇒ s, h |= p ∗ (q ∗ true ∧ r ∗ true) , (14)

as well as s, h1 |= p and s, h2 |= p and that h1 ∪ h2 is a function.


From this we calculate
s, h1 |= p ∧ s, h2 |= p
⇔ {[ h1 ⊆ h1 ∪ h2 and h2 ⊆ h1 ∪ h2 and Lemma 6.25(b) ]}
s, h1 ∪ h2 |= p ∗ [[s, (h1 ∪ h2 ) − h1 ]] ∧ s, h1 ∪ h2 |= p ∗ [[s, (h1 ∪ h2 ) − h2 ]]
⇒ {[ Assumption (14) ]}
s, h1 ∪ h2 |= p ∗ (([[s, (h1 ∪ h2 ) − h1 ]] ∗ true) ∧ ([[s, (h1 ∪ h2 ) − h2 ]] ∗ true))
⇔ {[ definition of ∗ ]}
∃ h′ : h′ ⊆ h1 ∪ h2 ∧ s, h′ |= p ∧
s, (h1 ∪ h2 ) − h′ |= ([[s, (h1 ∪ h2 ) − h1 ]] ∗ true) ∧ ([[s, (h1 ∪ h2 ) − h2 ]] ∗ true)


⇔ {[ definition of ∧ ]}
∃ h′ : h′ ⊆ h1 ∪ h2 ∧ s, h′ |= p ∧
s, (h1 ∪ h2 ) − h′ |= [[s, (h1 ∪ h2 ) − h1 ]] ∗ true ∧
s, (h1 ∪ h2 ) − h′ |= [[s, (h1 ∪ h2 ) − h2 ]] ∗ true
⇔ {[ Lemma 6.25(c) (twice) ]}
∃ h′ : h′ ⊆ h1 ∪ h2 ∧ s, h′ |= p ∧
(h1 ∪ h2 ) − h1 ⊆ (h1 ∪ h2 ) − h′ ∧
(h1 ∪ h2 ) − h2 ⊆ (h1 ∪ h2 ) − h′
⇔ {[ Lemma 6.24 ]}
∃ h′ : h′ ⊆ h1 ∪ h2 ∧ s, h′ |= p ∧ h′ ⊆ h1 ∧ h′ ⊆ h2
⇔ {[ set theory ]}
∃ h′ : s, h′ |= p ∧ h′ ⊆ h1 ∧ h′ ⊆ h2 . □
As before, this can be lifted to the abstract level of quantales.

Definition 6.26. In an arbitrary Boolean quantale S an element a is supported iff it satisfies for arbitrary b,c

a · b u a · c ≤ a · (b · > u c · >) .

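Definition 6.26 can also be explored by enumeration. The sketch below is our own finite model (not from the paper): it confirms that a precise one-cell assertion satisfies the supportedness inequation for all b, c, while the two-cell assertion from the preciseness discussion, whose two satisfying heaps share no common subheap in the assertion, violates it.

```python
from itertools import chain, combinations

ADDRS = (1, 2)

def all_heaps():
    """All heaps as frozensets of (address, value) cells."""
    cells = [(a, 0) for a in ADDRS]
    return [frozenset(c) for c in
            chain.from_iterable(combinations(cells, k)
                                for k in range(len(cells) + 1))]

STATES = [(0, h) for h in all_heaps()]
TOP = set(STATES)

def disjoint(h1, h2):
    return not ({a for a, _ in h1} & {a for a, _ in h2})

def sep(p, q):
    return {(s, h1 | h2) for (s, h1) in p for (t, h2) in q
            if s == t and disjoint(h1, h2)}

def subsets(xs):
    return [set(c) for k in range(len(xs) + 1)
            for c in combinations(xs, k)]

def supported(p):
    """a*b meet a*c <= a*(b*true meet c*true), checked for all b, c."""
    return all(sep(p, b) & sep(p, c) <= sep(p, sep(b, TOP) & sep(c, TOP))
               for b in subsets(STATES) for c in subsets(STATES))

one_cell = {(0, frozenset({(1, 0)}))}                            # precise, hence supported
two_alt  = {(0, frozenset({(1, 0)})), (0, frozenset({(2, 0)}))}  # no common subheap
```

The positive outcome for `one_cell` is an instance of Lemma 6.28 below; the negative one shows that supportedness is a genuine restriction.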
Following this characterisation of supported assertions we now give properties for this class which can be derived
completely algebraically. As before, we refer to the appendix of [13] for further properties.

Lemma 6.27. If a is pure then it is also supported.

Proof. By Lemma 6.12(a), associativity, commutativity and idempotence of u, isotony, and Lemma 6.12(a) again:

a · b u a · c = (a u b · >) u (a u c · >) = a u (b · > u c · >) ≤ a u (b · > u c · >) · > = a · (b · > u c · >) . □

Lemma 6.28. a is precise implies a is supported.

Proof. a · b u a · c ≤ a · (b u c) ≤ a · (b · > u c · >). This holds by the definition of precise elements and isotony. □

Lemma 6.29. Supported elements are closed under ·.

Proof. For supported elements a and a′ we calculate, using the definition of supported elements (twice) and isotony,

a · a′ · b u a · a′ · c ≤ a · (a′ · b · > u a′ · c · >) ≤ a · a′ · (b · > · > u c · > · >) = a · a′ · (b · > u c · >) . □

Corollary 6.30. If a is supported and b is precise or pure then a · b is supported.

Using this and Corollary 6.7, we obtain

Corollary 6.31. a is precise implies a · > is supported.

7. Commands
After dealing with the assertions of separation logic, we now turn to the commands in the simple imperative
language associated with it.

7.1. Commands as Relations


Semantically, we use the common technique of modelling commands as relations between states.
Definition 7.1. A command is a relation C ∈ Cmds =df P(States × States).
Therefore, all relational operations, including sequential composition ; and reflexive-transitive closure ∗ , are avail-
able for commands. Moreover, the structure (Cmds, ⊆ , ;, I), where I is the identity relation, forms a Boolean quantale.
The greatest element > is the universal relation.
In using relations for program semantics, some care has to be taken to correctly treat the possibility of errors in
program execution. In the standard literature on separation logic this is done by introducing a special state fault which
is the “result” of such an error, e.g., access to an unallocated or uninitialised memory cell or to a variable without
value.
The precise treatment of the fault state leads, however, to a lot of detail that is not really essential to the semantics
proper. Therefore, for the purposes of this paper, we ignore these phenomena and deal only with the partial correctness
semantics of program constructs.
A detailed treatment dealing with the possibility of program faults is provided by the well known approach of
demonic relational semantics (see e.g. [2, 14, 15, 40] and [17] for a more abstract algebraic treatment in terms of
idempotent semirings and Kleene algebras). There a total correctness view is taken: a state belongs to the domain of a
command relation iff no execution starting from it may lead to an error. Hence there is no need to include fault into the
set of states. As a price one has to pay, the relation composition operators become more complex: instead of the usual
(angelic) operators of union and sequential composition one has to use their demonic variants. As we will see, our
techniques for the partial correctness semantics can be adapted to the demonic setting; spelling out the details would
fill a paper of its own, though. An alternative would be to use monotonic predicate transformers [44]; however, this
would step outside the current relational framework.

7.2. Tests and Hoare Triples


In partial correctness semantics one uses Hoare triples {p} C {q} where p and q are predicates about states and C
is a command. In the semiring and quantale setting the predicates for pre- and postconditions of such triples can be
modelled by tests (cf. Definition 4.3).
In the command quantale tests are given by the partial identity relations of the form P̂ =df {(σ, σ) | σ ∈ P} for some
set P of states. It is clear that these subidentities, sets of states and predicates characterising states are in one-to-one
correspondence. The set P can be retrieved from P̂ as P = dom(P̂). Because of this isomorphism, for a command C
we simply write dom(C) also for the corresponding test.
Definition 7.2. Employing tests, we can give the following equivalent definitions of the meaning of Hoare triples
(e.g. [29]):
{p} C {q} ⇔ p ; C ; ¬q ⊆ ∅ ⇔ p ; C ⊆ C ; q ⇔ p ; C = p ; C ; q .
To use this general approach to Hoare logic we simply need to embed the quantale of assertions from the previous
sections into the set of tests of the relational quantale by the above correspondence between sets of states and partial
identity relations. The operation ∪· on assertions is lifted to tests by
P̂ ∪· Q̂ =df R̂ , where R = P ∪· Q .
Since we are working in a semiring with greatest element > we can give the following further useful equivalent
formulations of Hoare triples.
Lemma 7.3. {p} C {q} ⇔ > ; p ; C ⊆ > ; q ⇔ p ; C ⊆ > ; q.
The proof can be found in the Appendix.
7.3. Syntax and Semantics of a Simple Programming Language
We now introduce the program constructs associated with separation logic [46]. Syntactically, they are given by

comm ::= var := exp | skip | comm ; comm


| if bexp then comm else comm | while bexp do comm
| newvar var in comm | newvar var := exp in comm
| var := cons (exp, . . . , exp)
| var := [exp] | [exp] := exp
| dispose exp

Let us explain these constructs informally, where we skip the well-known first three lines. For a (boolean) expression e we denote its value w.r.t. a store s by e^s.
In a given state s the command v := cons (e1 , ..., en ) allocates n cells with ei^s as the contents of the i-th cell.
The cells have to form an unused contiguous region somewhere on the heap; the concrete allocation is chosen non-deterministically [50].
The address of the first cell is stored in v while the rest of the cells can be addressed indirectly via the start address.
A dereferencing assignment v := [e] assumes that e is an exp-expression and that the value e^s (corresponding to *e in
C) is an allocated address on the heap, i.e., e^s ∈ dom(h) for the current heap h. In particular, after its execution, the
value of v is the contents of the dereferenced heap cell.
Conversely, an execution of [e1 ] := e2 assigns the value of the expression on the right hand side to the cell whose
address is the value of the left hand side.
Finally, the command dispose e is used for deallocating the heap cell with address e^s. After its execution the
disposed cell is no longer valid, i.e., dereferencing that cell would cause a fault in the program execution.
We now define a formal semantics for this language by inductively assigning to every program P formed according
to the above grammar a command [[P]]c ∈ Cmds in the sense of the previous section. In particular, [[skip]]c =df I. For
all language constructs we also specify the correctness conditions, like that the free variables of all program parts are
within the domains of the stores involved and that only allocated heap cells may be accessed.
As auxiliaries we define the functions FV and MV that assign to every program P the sets of its free and modified
variables, resp.; in the case of FV we assume that a corresponding function with the same name is predefined for
(boolean) expressions.

FV(v := e) =df {v} ∪ FV(e) MV(v := e) =df {v}


FV(skip) =df ∅ MV(skip) =df ∅
FV(P ; Q) =df FV(P) ∪ FV(Q) MV(P ; Q) =df MV(P) ∪ MV(Q)
FV( if b then P else Q) =df FV(b) ∪ FV(P) ∪ FV(Q) MV( if b then P else Q) =df MV(P) ∪ MV(Q)
FV( while b do P) =df FV(b) ∪ FV(P) MV( while b do P) =df MV(P)
FV( newvar v in P) =df FV(P) − {v} MV( newvar v in P) =df MV(P) − {v}
FV( newvar v := e in P) =df (FV(e) ∪ FV(P)) − {v} MV( newvar v := e in P) =df MV(P) − {v}
FV(v := cons (e1 , ..., en )) =df {v} ∪ FV(e1 ) ∪ ... ∪ FV(en )     MV(v := cons (e1 , ..., en )) =df {v}
FV(v := [e]) =df {v} ∪ FV(e) MV(v := [e]) =df {v}
FV([e1 ] := e2 ) =df FV(e1 ) ∪ FV(e2 ) MV([e1 ] := e2 ) =df ∅
FV( dispose e) =df FV(e) MV( dispose e) =df ∅

To abbreviate the semantic definition, we use a convention similar to that of the refinement calculus (e.g. [1]). We
characterise relations by formulas linking the input states (s, h) and output states (s′, h′). If F is such a formula then
R =̂ F abbreviates the clause (s, h) R (s′, h′) ⇔df F. For use in the definition of if and while we also assign a semantics
to boolean expressions b viewed as assertion commands.

[[v := e]]c =̂ {v} ∪ FV(e) ⊆ dom(s) ∧ s′ = (v, e^s) | s ∧ h′ = h ,
[[b]]c =̂ FV(b) ⊆ dom(s) ∧ b^s = true ∧ s′ = s ,
[[P ; Q]]c =̂ FV(P) ∪ FV(Q) ⊆ dom(s) ∧ (s, h) S (s′, h′)
    where S = [[P]]c ; [[Q]]c ,
[[ if b then P else Q]]c =̂ FV(b) ∪ FV(P) ∪ FV(Q) ⊆ dom(s) ∧ (s, h) S (s′, h′)
    where S = [[b]]c ; [[P]]c ∪ ¬[[b]]c ; [[Q]]c ,
[[ while b do P]]c =̂ FV(b) ∪ FV(P) ⊆ dom(s) ∧ (s, h) S (s′, h′)
    where S = ([[b]]c ; [[P]]c )∗ ; ¬[[b]]c ,
[[ newvar v in P]]c =̂ v ∉ dom(s) ∧ ∃ i ∈ Values : ((v, i) | s, h) [[P]]c (s′, h′) ,
[[ newvar v := e in P]]c =̂ v ∉ (FV(e) ∪ dom(s)) ∧ FV(e) ⊆ dom(s) ∧ ((v, e^s) | s, h) [[P]]c (s′, h′) ,
[[v := cons (e1 , ..., en )]]c =̂ ∃ a ∈ Addresses : s′ = (v, a) | s ∧
    a, . . . , a + n − 1 ∉ dom(h) ∧
    h′ = {(a, e1^s), . . . , (a + n − 1, en^s)} | h ,
[[v := [e]]]c =̂ {v} ∪ FV(e) ⊆ dom(s) ∧ s′ = (v, h(e^s)) | s ∧ h′ = h ,
[[[e1 ] := e2 ]]c =̂ FV(e1 ) ∪ FV(e2 ) ⊆ dom(s) ∧ s′ = s ∧ h′ = (e1^s, e2^s) | h ∧ e1^s ∈ dom(h) ,
[[ dispose e]]c =̂ FV(e) ⊆ dom(s) ∧ s′ = s ∧ e^s ∈ dom(h) ∧ h′ = h − {(e^s, h(e^s))} .
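As a sanity check, the heap-manipulating clauses for mutation and disposal can be transcribed directly into relations over a finite state space. The following Python sketch is our own illustration (fixed store, two addresses, two values), not the paper's formalisation:

```python
ADDRS = (1, 2)
VALUES = (0, 1)

def all_heaps():
    """All heaps over ADDRS/VALUES, as frozensets of (address, value) cells."""
    hs = [frozenset()]
    for a in ADDRS:
        hs = [h | extra for h in hs
              for extra in (frozenset(),) + tuple(frozenset({(a, v)}) for v in VALUES)]
    return hs

STATES = [(0, h) for h in all_heaps()]   # a single fixed store suffices here

def heap_dom(h):
    return {a for a, _ in h}

def mutation(a, v):
    """Relation for [a] := v: overwrite cell a; defined only if a is allocated."""
    return {((s, h), (s, frozenset(c for c in h if c[0] != a) | {(a, v)}))
            for (s, h) in STATES if a in heap_dom(h)}

def dispose(a):
    """Relation for dispose a: remove cell a; defined only if a is allocated."""
    return {((s, h), (s, frozenset(c for c in h if c[0] != a)))
            for (s, h) in STATES if a in heap_dom(h)}

M = mutation(1, 1)
D = dispose(1)
```

Both relations have exactly the states with address 1 allocated as their domain, mirroring the side conditions e1^s ∈ dom(h) and e^s ∈ dom(h) above.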

We now discuss inference rules for commands. By the above abstraction these rules become simple consequences
of the well-established general relational view of commands. We only sketch the rules for mutation.
Theorem 7.4. Let p be an assertion and e1 , e2 be exp-expressions. Then valid triples for mutation commands are

{(e1 ↦ −)} [e1 ] := e2 {(e1 ↦ e2 )} , (local)
{(e1 ↦ −) ∗ p} [e1 ] := e2 {(e1 ↦ e2 ) ∗ p} , (global)
{(e1 ↦ −) ∗ ((e1 ↦ e2 ) −∗ p)} [e1 ] := e2 {p} , (backward reasoning)

where (e1 ↦ −) abbreviates ∃ v : e1 ↦ v.


The local mutation law ensures that the cell at address e1^s has the value e2^s after [e1 ] := e2 has been executed,
provided there is an allocated cell at e1^s ; in the relational semantics this precondition is captured by the conjunct
e1^s ∈ dom(h). The global mutation law follows from the local one as an instantiation of the frame rule; it implies
that reasoning can be modularised. Conversely, the local rule can be derived from the global one by setting p = emp.
The backward reasoning law is used to determine a precondition that ensures the postcondition p. The global and the
backward reasoning rules are again interderivable (see [46, 11]). Further inference rules, e.g. for disposal, overwriting
and non-overwriting allocation as well as lookup, are presented and explained in [46]; their algebraic treatment is
similar to that of mutation and straightforward.
The most central rule of separation logic, however, will be treated in detail in the following section.

8. The Frame Rule

An important ingredient of the separation calculus is the frame rule [42]. It describes the use of the separating con-
junction in pre- and postconditions for commands and allows local reasoning. For assertions p, q and r and command
C it reads
{p} C {q}
.
{p ∗ r} C {q ∗ r}
The premise ensures that starting the execution of C in a state satisfying p ends in a state satisfying q. Furthermore the
conclusion says that extending the initial and final heaps consistently with disjoint heaps will not invalidate the triple
in the premise. Hence a “local” proof of {p} C {q} will extend to a “more global” one in the additional heap context
r. A standard proof [50] uses three assumptions: a side condition stating that no variable modified by C occurs free
in r, safety monotonicity and the frame property. Safety monotonicity guarantees that if C is executable from a state, it can
also run on a state with a larger heap; the frame property states that every execution of C can be tracked back to an
execution of C running on states with a possibly smaller heap.
We will present a modified, more generally applicable rule using algebraic counterparts for the conditions involved.

8.1. Separation Algebras


Most of the calculations in the soundness proof of the frame rule can be done at an abstract level. Therefore in
this section we abstract from the concrete separation logic instance of States and the separating conjunction. We will
also model the required conditions for commands that satisfy the frame rule at an abstract level. This captures a wider
range of operations with behaviour similar to the separating conjunction. The following definition and its notation are
inspired by [8].

Definition 8.1. A separation algebra is a partial commutative monoid (Σ, •, u). A partial commutative monoid is
given by a partial binary operation whose commutativity, associativity and unit laws hold w.r.t. the equality that
means that either both sides are defined and equal, or both are undefined. The induced combinability relation # is given by

σ0 # σ1 ⇔df σ0 • σ1 is defined .

As before, a command is a relation C ⊆ Σ × Σ.

An example for a separation algebra is (States, ∗, [[emp]]c ). Later we will use this particular algebra to illustrate the
frame rule.
Over every separation algebra we can define the split and join relations:

Definition 8.2.
(a) The split relation split• ⊆ Σ × (Σ × Σ) w.r.t. • is given by

σ split• (σ1 , σ2 ) ⇔df σ1 # σ2 ∧ σ = σ1 • σ2 .

(b) The join relation join• is the converse of split, i.e.,

(σ1 , σ2 ) join• σ ⇔df σ1 # σ2 ∧ σ = σ1 • σ2 .

We introduce a special name for it, rather than writing split• ˘, to ease reading.
(c) The Cartesian product R1 × R2 ⊆ (Σ × Σ) × (Σ × Σ) of two commands R1 , R2 is defined by

(σ1 , σ2 ) (R1 × R2 ) (τ1 , τ2 ) ⇔df σ1 R1 τ1 ∧ σ2 R2 τ2 .

Sequential composition on Cartesian products is defined componentwise.
Next we lift the operations • and # to commands.
(d) The • composition R1 • R2 of commands R1 , R2 ⊆ Σ × Σ is again a command, defined by

R1 • R2 =df split• ; (R1 × R2 ) ; join• .

By associativity and commutativity of • on Σ, the lifted operation is associative and commutative, too. For tests
p, q the • composition p • q is a test again, irrespective of the underlying separation algebra.
(e) The partial identity # ⊆ (Σ × Σ) × (Σ × Σ) characterises those pairs of states which are combinable:

(σ1 , σ2 ) # (τ1 , τ2 ) ⇔df σ1 # σ2 ∧ σ1 = τ1 ∧ σ2 = τ2 .

It is well known that × and ; satisfy an exchange law:

(R1 × R2 ) ; (S1 × S2 ) = (R1 ; S1 ) × (R2 ; S2 ) . (15)

We have the following properties of split, join and compatibility:

split• = split• ; # , join• = # ; join• . (16)

Moreover, from the definition and the unit property of u it is clear that split• ; join• = I.
In the concrete separation algebra (States, ∗, [[emp]]c ) the split relation split∗ means that a state (s, h) is split into
two states (s, h1 ) and (s, h2 ) with h = h1 ∪ h2 and dom(h1 ) ∩ dom(h2 ) = ∅; more precisely, only the heap is split into
disjoint parts. By this, the separating conjunction p ∪· q of tests p, q in the command quantale over States coincides
with p ∗ q = split∗ ; (p × q) ; join∗ .

The following properties will be used later on.

Lemma 8.3.
(a) > ; split• = split• ; (> × >) ; # .
(b) For tests p, q we have > ; split• ; (p × q) ; # = split• ; ((> ; p) × (> ; q)) ; # .
Therefore, > ; (p • q) = (> ; p) • (> ; q).

Proof.
(a) First, σ (> ; split• ) (ρ1 , ρ2 ) ⇔ ∃ ρ : σ > ρ ∧ ρ1 # ρ2 ∧ ρ = ρ1 • ρ2 ⇔ ρ1 # ρ2 ∧ ∃ ρ : ρ = ρ1 • ρ2 ⇔ ρ1 # ρ2 .
Second,
σ (split• ; (> × >) ; #) (ρ1 , ρ2 )
⇔ ∃ σ1 , σ2 : σ1 # σ2 ∧ σ = σ1 • σ2 ∧ σ1 > ρ1 ∧ σ2 > ρ2 ∧ ρ1 # ρ2
⇔ ∃ σ1 , σ2 : σ1 # σ2 ∧ σ = σ1 • σ2 ∧ ρ1 # ρ2
⇔ ρ1 # ρ2 ,
since we can choose σ1 = σ and σ2 = u.
(b) Straightforward from Part (a), exchange (15) and the definition of • on relations. □

8.2. The Frame Property


We now turn to an algebraic characterisation of the above-mentioned frame property. In this and the following
subsection we work within the concrete separation algebra (States, ∗, [[emp]]c ) for better motivation of the
development; afterwards we will generalise. Intuitively, the frame property expresses that an execution of a command
C can be tracked back to a possibly smaller heap portion that is sufficient for its execution. Since certain
variables may change while certain parts of the heap are preserved, care has to be taken when
reassembling the overall result state from the one resulting from the computation on the smaller portion.
As an example, take the command C = (x := 1) and a starting state σ = (s, h) with s(x) = 2. Suppose that σ splits
into states σ1 = (s1 , h1 ), σ2 = (s2 , h2 ) which then satisfy s1 (x) = s2 (x) = 2. Clearly, C can act on the smaller state σ1 ,
delivering some result state τ1 = (t1 , k1 ) with t1 (x) = 1. However, this cannot be combined with the “remainder” σ2
of the original state σ.
To compensate for this aspect we introduce a special command.

Definition 8.4. Let H be the command that preserves all heaps while being liberal about the stores:

(s, h) H (s0 , h0 ) ⇔df h = h0 .

It is clear that H is an equivalence relation.


Using H, we can formulate the frame property for the ∗ case algebraically.

19
Definition 8.5. Command C has the ∗-frame-property iff

(dom(C) × I) ; join∗ ; C ⊆ (C × H) ; join∗ .

Prefixing the join operator with dom(C) × I on the left-hand side ensures that the property need only hold in those
cases where C can indeed act on a smaller state. Pointwise, for arbitrary s, s′ ∈ Stores and h0 , h1 , h′ ∈ Heaps, this
definition spells out as

(s, h0 ) ∈ dom(C) ∧ dom(h0 ) ∩ dom(h1 ) = ∅ ∧ (s, h0 ∪ h1 ) C (s′, h′)

⇒ ∃ h′0 , h′1 : dom(h′0 ) ∩ dom(h′1 ) = ∅ ∧ h′ = h′0 ∪ h′1 ∧ (s, h1 ) H (s′, h′1 ) ∧ (s, h0 ) C (s′, h′0 ) .

By the definition of H the conclusion simplifies to

∃ h′0 : dom(h′0 ) ∩ dom(h1 ) = ∅ ∧ h′ = h′0 ∪ h1 ∧ (s, h0 ) C (s′, h′0 ) .

Informally, this describes that the heap storage not being used by C will remain unchanged while C may change
certain variables of the starting state (s, h1 ).
As a check for adequacy we show exemplarily that the mutation command [e1 ] := e2 satisfies the frame property.
To ease reading, we denote the heap of a state σ by hσ . Moreover, none of the mutation commands changes the stores
and join∗ assumes the stores of the considered states to be equal; hence all stores of all states that occur in the proof are
the same, denoted by s. Since I ⊆ H, it suffices to show the frame property with ([[[e1 ] := e2 ]]c × I) ; join∗ instead of
([[[e1 ] := e2 ]]c × H) ; join∗ on the right-hand side.

Moreover, by the definition of [[[e1 ] := e2 ]]c we immediately have

σ = (s, h) ∈ dom([[[e1 ] := e2 ]]c ) ⇔ e1^s ∈ dom(h) . (17)

From this and setting s = s(σ1 ∗ σ2 ) = sσ1 , we get


(σ1 , σ2 ) ((dom([[[e1 ] := e2 ]]c ) × I) ; join∗ ; [[[e1 ] := e2 ]]c ) τ
⇔ {[ definition of × and I ]}
σ1 ∈ dom([[[e1 ] := e2 ]]c ) ∧ (σ1 , σ2 ) (join∗ ; [[[e1 ] := e2 ]]c ) τ
⇔ {[ definition of join∗ and (17) ]}
(σ1 ∗ σ2 ) [[[e1 ] := e2 ]]c τ ∧ σ1 # σ2 ∧ e1^s ∈ dom(hσ1 )
⇔ {[ definition of [[[e1 ] := e2 ]]c ]}
hτ = (e1^s , e2^s ) | h(σ1 ∗ σ2 ) ∧ e1^s ∈ dom(h(σ1 ∗ σ2 )) ∧ σ1 # σ2 ∧ e1^s ∈ dom(hσ1 )
⇔ {[ definition of ∗ ]}
hτ = (e1^s , e2^s ) | (hσ1 ∪ hσ2 ) ∧ dom(hσ1 ) ∩ dom(hσ2 ) = ∅ ∧
e1^s ∈ dom(hσ1 ) ∪ dom(hσ2 ) ∧ σ1 # σ2 ∧ e1^s ∈ dom(hσ1 )
⇔ {[ localisation property for | (see below), since e1^s ∉ dom(hσ2 ), isotony of dom ]}
hτ = ((e1^s , e2^s ) | hσ1 ) ∪ hσ2 ∧ dom(hσ1 ) ∩ dom(hσ2 ) = ∅ ∧ σ1 # σ2 ∧ e1^s ∈ dom(hσ1 )
⇒ {[ since e1^s ∉ dom(hσ2 ) ]}
hτ = ((e1^s , e2^s ) | hσ1 ) ∪ hσ2 ∧ dom((e1^s , e2^s ) | hσ1 ) ∩ dom(hσ2 ) = ∅ ∧ σ1 # σ2 ∧ e1^s ∈ dom(hσ1 )
⇒ {[ logic, definition of [[[e1 ] := e2 ]]c and ∗ ]}
σ1 [[[e1 ] := e2 ]]c (s, (e1^s , e2^s ) | hσ1 ) ∧ τ = (s, (e1^s , e2^s ) | hσ1 ) ∗ σ2 ∧ (s, (e1^s , e2^s ) | hσ1 ) # σ2
⇔ {[ definitions ]}
(σ1 , σ2 ) (([[[e1 ] := e2 ]]c × I) ; join∗ ) τ .

The localisation property states that for partial functions f1 | ( f2 ∪ f3 ) = ( f1 | f2 ) ∪ f3 provided dom( f1 ) ∩ dom( f3 ) = ∅.
A proof can be found in [33].
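The pointwise reading of the ∗-frame-property can also be checked exhaustively for the mutation command in a finite model, mirroring the derivation above. The following sketch is our own encoding; since mutation does not change stores, the check uses I in place of H, as the text allows.

```python
ADDRS = (1, 2)
VALUES = (0, 1)

def all_heaps():
    hs = [frozenset()]
    for a in ADDRS:
        hs = [h | extra for h in hs
              for extra in (frozenset(),) + tuple(frozenset({(a, v)}) for v in VALUES)]
    return hs

STATES = [(0, h) for h in all_heaps()]

def heap_dom(h):
    return {a for a, _ in h}

# the mutation command [1] := 1 as a relation on states
C = {((s, h), (s, frozenset(c for c in h if c[0] != 1) | {(1, 1)}))
     for (s, h) in STATES if 1 in heap_dom(h)}
DOM_C = {src for (src, tgt) in C}

def frame_property_holds():
    """Pointwise check of (dom(C) x I) ; join ; C <= (C x I) ; join."""
    for (s, h1) in STATES:
        for (t, h2) in STATES:
            if s != t or (heap_dom(h1) & heap_dom(h2)):
                continue                      # the two states must be combinable
            if (s, h1) not in DOM_C:
                continue                      # C must act on the smaller state
            joined = (s, h1 | h2)
            for (src, tau) in C:
                if src != joined:
                    continue
                # seek tau1 with (s, h1) C tau1 and tau recombined from tau1 and (s, h2)
                if not any(a == (s, h1)
                           and not (heap_dom(hh) & heap_dom(h2))
                           and (ss, hh | h2) == tau
                           for (a, (ss, hh)) in C):
                    return False
    return True
```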

8.3. Preservation of Variables
It remains to express algebraically the requirement that a command preserves certain variables. Here we would
like to avoid an explicit mention of syntax and free variables and find a suitable, purely algebraic condition instead.
In the proof of the frame rule this condition will be applied after the frame property has placed the command under
consideration in parallel composition with H. This “context” is taken care of by the following definition,
where > is again the universal relation.

Definition 8.6. Command C ∗-preserves test r iff

split∗ ; (C × (r ; H)) ; # ⊆ > ; split∗ ; (I × r) .

The informal explanation is straightforward: when C is run in parallel with something starting in r, every reassembled
result state must contain an r-part, too. Equivalently, we can replace the right-hand side by > ; split∗ ; (I × r) ; #.
Moreover, the definition of preservation is also equivalent to

# ; (C × (r ; H)) ; # ⊆ # ; (> × (> ; r)) ; # . (18)

This allows simplifying proofs of preservation for concrete commands.


We now show that our notion of preservation fits well with the original side condition of the frame rule, namely
MV(C) ∩ FV(r) = ∅. It is trivially satisfied for all r if MV(C) = ∅; commands C with this property are, e.g., the
mutation command [e1 ] := e2 and dispose. We show that this condition implies our notion of preservation, so that
the latter is adequate, but also more liberal than the original one.

Lemma 8.7. If MV(C) = ∅ then C ∗-preserves all tests r.

Proof. The assumption implies that C cannot change the store part of any state, but only heap parts. Formally,

∀ σ, τ : σ C τ ⇒ sσ = sτ . (19)

Now we calculate
(σ1 , σ2 ) # ; (C × (r ; H)) ; # (τ1 , τ2 )
⇔ {[ definitions ]}
σ1 # σ2 ∧ σ1 C τ1 ∧ σ2 (r ; H) τ2 ∧ τ1 # τ2
⇔ {[ definition of ; and r is subidentity ]}
σ1 # σ2 ∧ σ1 C τ1 ∧ σ2 ∈ r ∧ σ2 H τ2 ∧ τ1 # τ2
⇔ {[ by τ1 # τ2 ∧ σ1 # σ2 , assumption (19) implies sσ2 = sτ2 ]}
σ1 # σ2 ∧ σ1 C τ1 ∧ σ2 ∈ r ∧ σ2 H τ2 ∧ sσ2 = sτ2 ∧ τ1 # τ2
⇒ {[ the definition of H and sσ2 = sτ2 imply σ2 = τ2 ]}
σ1 # σ2 ∧ σ1 C τ1 ∧ σ2 ∈ r ∧ σ2 = τ2 ∧ τ1 # τ2
⇒ {[ logic and omitting conjuncts ]}
σ1 # σ2 ∧ τ2 ∈ r ∧ τ1 # τ2
⇔ {[ definition of > ]}
σ1 # σ2 ∧ σ1 > τ1 ∧ σ2 > τ2 ∧ τ2 ∈ r ∧ τ1 # τ2
⇔ {[ definitions ]}
(σ1 , σ2 ) # ; (> × (> ; r)) ; # (τ1 , τ2 ) .
□
A similar treatment of the general condition MV(C) ∩ FV(r) = ∅ is possible; we omit the technicalities.

8.4. A General Frame Rule
In this section we give a completely algebraic soundness proof of the standard frame rule, generalised to separation
algebras. In particular, the relation H will be replaced by a generic one that again compensates for incongruities,
possibly both on store and heap.

Definition 8.8. Given a separation algebra (Σ, •, u), a relation K ⊆ Σ × Σ is called a compensator iff it satisfies the
following properties:
(a) cod(K) ⊆ dom(K) ⊆ K,
(b) # ; (I × K) ; # = #,
(c) K ; K = K.

Requirement (a) ensures that arbitrarily long sequences of commands can be “accompanied” by equally long
compensator sequences. Requirement (b) enforces that K does not deviate too far from simple equality. This and
Requirement (c) will be used below and in Section 8.5 for establishing the frame property and preservation for
some concrete (classes of) commands.
The command H is a compensator for the separation algebra (States, ∗, [[emp]]c ).
We now can give generalised definitions of the frame and preservation properties.

Definition 8.9. Assume a compensator K.

(a) Command C has the K-frame-property iff

(dom(C) × dom(K)) ; join• ; C ⊆ (C × K) ; join• .

(b) Command C K-preserves test r iff

split• ; (C × (r ; K)) ; # ⊆ > ; split• ; (I × r) .

Again, preservation is equivalent to

# ; (C × (r ; K)) ; # ⊆ # ; (> × (> ; r)) ; # . (20)

As an indication that this definition is reasonable, we show some properties about I, both as a command and a test.

Corollary 8.10. I has the frame property and preserves every test. Moreover, every command preserves I.

Proof. First, by the definitions, (dom(I) × dom(K)) ; join• ; I = (I × dom(K)) ; join• ⊆ (I × K) ; join• , so that I satisfies the
K-frame-property.
Second,
# ; (I × (r ; K)) ; #
= {[ neutrality of I and exchange (15) ]}
# ; (I × r) ; (I × K) ; #
= {[ # and (I × r) are partial identities and hence idempotent and commute ]}
# ; (I × r) ; # ; (I × K) ; #
= {[ definition of compensators (Def. 8.8(b)) ]}
# ; (I × r) ; #
⊆ {[ isotony ]}
# ; (> × (> ; r)) ; #
Third, that every command preserves I is immediate from (20) and neutrality of I. □
We derive some further consequences from the definitions.
Lemma 8.11.
(a) Suppose command C has the K-frame-property. Then for all tests p, r we have

p • r ⊆ (p ; dom(C)) • (r ; dom(K)) ⇒ (p • r) ; C ⊆ (p ; C) • (r ; K).

(b) Suppose command C K-preserves test r. Then for all tests q we have

(C ; q) • (r ; K) ⊆ > ; (q • r) .
The proof can be found in the Appendix.
The generalised frame rule now reads as follows.

Theorem 8.12. Assume a compensator K, a command C and a test r such that C has the K-frame-property and
K-preserves r. Then the following inference rule is valid:
p • r ⊆ (p ; dom(C)) • (r ; dom(K)) {p} C {q}
{p • r} C {q • r}
Proof. We calculate, assuming p • r ⊆ (p ; dom(C)) • (r ; dom(K)),
(p • r) ; C
⊆ {[ by Lemma 8.11(a) ]}
(p ; C) • (r ; K)
⊆ {[ by {p} C {q}, i.e., p ; C ⊆ C ; q, and isotony ]}
(C ; q) • (r ; K)
⊆ {[ by Lemma 8.11(b) ]}
> ; (q • r)
By the second characterisation of the Hoare triple presented in Lemma 7.3 we have therefore established {p • r} C {q • r}. □
For the special separation algebra (States, ∗, [[emp]]c ) we obtain the ∗ frame rule. Notice that dom(H) = I.

Corollary 8.13. Consider a command C and a test r such that C has the ∗-frame-property and ∗-preserves r. Then
the following inference rule is valid:
p ∗ r ⊆ (p ; dom(C)) ∗ r {p} C {q}
{p ∗ r} C {q ∗ r}
The condition p ∗ r ⊆ (p ; dom(C)) ∗ r is a relaxation of the implicit condition p ⊆ dom(C) that is part of Reynolds’s
definition of validity of a Hoare triple (which differs from the standard one). For that reason, no notion of safety
or safety-monotonicity of a command enters our treatment either.
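The ∗ frame rule can be illustrated concretely in the heap model. The following Python sketch is our own illustration, not the paper's relational encoding: heaps are finite maps (sets of address/value cells), the command is the heap mutation [1] := 42, and validity of a triple follows Reynolds's definition, i.e., the command must be defined on every precondition heap. Enumeration over all small heaps confirms a valid triple and its framed version.

```python
from itertools import combinations

ADDRS, VALS = (1, 2), (0, 42)

def heaps():
    # all finite heaps over ADDRS x VALS, as frozensets of (addr, val) cells
    cells = [(a, v) for a in ADDRS for v in VALS]
    for n in range(len(cells) + 1):
        for c in combinations(cells, n):
            if len({a for a, _ in c}) == len(c):  # at most one value per address
                yield frozenset(c)

HEAPS = list(heaps())

def dom(h):
    return {a for a, _ in h}

# Command: mutation [1] := 42, defined only on heaps where address 1 is allocated.
C = {(h, frozenset({(a, 42 if a == 1 else v) for a, v in h}))
     for h in HEAPS if 1 in dom(h)}

def triple(p, C, q):
    # {p} C {q}, Reynolds-style: C is defined on every p-heap,
    # and every run of C from a p-heap ends in a q-heap
    return all(any(h1 == h for h1, _ in C) and
               all(h2 in q for h1, h2 in C if h1 == h)
               for h in p)

def star(p, r):
    # separating conjunction: split the heap into disjoint p- and r-parts
    return {h for h in HEAPS
            if any(h1 | h2 == h and not dom(h1) & dom(h2)
                   for h1 in p for h2 in r)}

p = {h for h in HEAPS if 1 in dom(h)}      # address 1 allocated
q = {h for h in HEAPS if (1, 42) in h}     # address 1 holds 42
r = {h for h in HEAPS if dom(h) == {2}}    # exactly address 2 allocated

assert triple(p, C, q)
assert triple(star(p, r), C, star(q, r))   # the framed triple also holds
```

Here r describes a heap portion untouched by C, so the framed triple goes through; choosing r with 1 in its domain would make p ∗ r empty and the framed triple trivially valid.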

8.5. Closure Properties


Since more complex commands are built up from simpler ones using the ∪ and ; operators, we show that, subject
to suitable conditions, our frame and preservation properties are closed under them.

Lemma 8.14. Assume a compensator K.


(a) The K-frame property is closed under union and pre-composition with a test. If C and D satisfy cod(C) ⊆ dom(D)
then the K-frame-property propagates from C and D to C ; D.
(b) K-preservation of a test r is closed under union and pre-composition with a test. If

# ; ((C ; D) × K) ; # = # ; (C × K) ; # ; (D × K) ; # (21)

then K-preservation of r propagates from C and D to C ; D.

The proof can be found in the Appendix. The condition # ; ((C ; D) × K) ; # = # ; (C × K) ; # ; (D × K) ; # means that
the “local” intermediate state of the composition C ; D induces a “global” intermediate state; it expresses a “modular”
way of composition.
Let us briefly comment on the assumption cod(C) ⊆ dom(D) in this lemma. It is needed since we want to
argue about compositions in isolation. In contrast, the related paper [50] talks about complete, non-blocking runs of
programs; therefore, for all compositions that make up such a run, the above assumption about the codomain and domain
of successive commands is automatically satisfied.
The second condition that enters closure of preservation under composition reflects the following observations in
algebraic terms. If one wants to prove a frame law {p • r} C ; D {s • r} for a composition C ; D one may proceed in the
Hoare logic style as follows. First one shows, with a suitable intermediate assertion q, that {p} C {q} and {q} D {s} hold.
Then, assuming the frame property and r-preservation for C and D, one infers {p • r} C {q • r} and {q • r} D {s • r}, from
which the claim follows by the sequencing rule. The existence of q corresponds to the insertion of an intermediate #
relation in (21).

Corollary 8.15. The K-frame property and K-preservation propagate from commands C and D to if p then C else D
for arbitrary test p.

8.6. Another Separation Algebra


To demonstrate the advantage of abstracting from the concrete structure (States, ∗, [[emp]]c ) in Section 8.4, we now give
a second separation algebra with an adequate compensator for which the frame rule also holds. Contrary to the structure
(States, ∗, [[emp]]c ), it splits both store and heap into disjoint portions.
More precisely, we define a relation #S on states by (s1 , h1 ) #S (s2 , h2 ) ⇔df s1 ∩ s2 = ∅ ∧ h1 ∩h2 = ∅. Now assume
for states (s1 , h1 ) and (s2 , h2 ) that (s1 , h1 ) #S (s2 , h2 ) holds. Then the operation ∗S is defined by (s1 , h1 ) ∗S (s2 , h2 ) =df
(s1 ∪ s2 , h1 ∪ h2 ).
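The separation-algebra properties of ∗S (commutativity, associativity where defined, and neutrality of (∅, ∅)) can be spot-checked on small states. The following Python sketch is our own illustration; stores and heaps are both rendered as finite maps, encoded as frozensets of pairs.

```python
# States are pairs (store, heap); both components are finite maps,
# modelled as frozensets of (name, value) resp. (addr, value) pairs.

def disjoint(s1, s2):
    # #S: the two stores and the two heaps must have disjoint domains
    return not ({k for k, _ in s1[0]} & {k for k, _ in s2[0]}) and \
           not ({k for k, _ in s1[1]} & {k for k, _ in s2[1]})

def star_S(s1, s2):
    # *S: componentwise disjoint union, only defined when s1 #S s2
    assert disjoint(s1, s2)
    return (s1[0] | s2[0], s1[1] | s2[1])

EMPTY = (frozenset(), frozenset())

# a few sample states: stores map variable names, heaps map addresses
STATES = [
    EMPTY,
    (frozenset({("x", 1)}), frozenset()),
    (frozenset(), frozenset({(7, 0)})),
    (frozenset({("y", 2)}), frozenset({(8, 3)})),
]

for a in STATES:
    assert star_S(a, EMPTY) == a                      # (∅, ∅) is the unit
    for b in STATES:
        if disjoint(a, b):
            assert star_S(a, b) == star_S(b, a)       # commutativity
            for c in STATES:
                if disjoint(b, c) and disjoint(a, c):
                    # associativity on pairwise disjoint states
                    assert star_S(star_S(a, b), c) == star_S(a, star_S(b, c))
```

Of course this enumeration is no substitute for the (easy) general proof; it merely makes the disjoint splitting of both components concrete.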
The following result is easily verified.

Corollary 8.16. The structure (States, ∗S , {(∅, ∅)}) is a separation algebra and I is a compensator. Therefore the gen-
eralised frame rule is valid for it.

9. Conclusion and Outlook

We have presented an algebraic treatment of separation logic. For assertions we have introduced a model based
on sets of states. By this, separating implication coincides with a residual and most of the inference rules of [46]
are simple consequences of standard residual laws. For intuitionistic, pure, precise and supported assertions we have
given algebraic characterisations. Furthermore we have defined a class of assertions which are both intuitionistic and
precise.
As a next step we embedded the command part of separation logic into a relational algebraic structure. There, we
have derived algebraic characterisations of properties on which the frame rule relies. In particular, we have expressed
the side condition that certain variables must not be changed by a command in a purely semantic way without appeal-
ing to the syntax. Moreover, we have formulated a more liberal version of the frame rule that rests on simpler side
conditions and have proved it purely algebraically. Finally we have shown that, under mild conditions, the commands
for which the frame rule is sound are closed under union and composition.
To underpin our approach we have algebraically verified one of the standard examples — an in-place list reversal
algorithm. The details can be found in [11]. The term in-place means that there is no copying of whole structures, i.e.,
the reversal is done by simple pointer modifications.
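For readers unfamiliar with the verified algorithm, in-place reversal can be sketched as follows. This is the standard textbook formulation, not the relational encoding of [11]; each loop iteration redirects exactly one next-pointer and no node is copied.

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def reverse(head):
    # In-place reversal: only next-pointers are redirected.
    prev = None
    while head is not None:
        head.next, prev, head = prev, head, head.next
    return prev

# build 1 -> 2 -> 3, reverse it, and read the values back
xs = Node(1, Node(2, Node(3)))
ys = reverse(xs)
out = []
while ys is not None:
    out.append(ys.value)
    ys = ys.next
assert out == [3, 2, 1]
```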
Due to our relational embedding we can, as a next step, derive inference rules for if-statements and for the while-
loop. This has been done for classical Hoare logic (see [37]); hence it should be straightforward to extend it to the
setting of separation logic.
So far we have not analysed situations where data structures share parts of their cells (cf. Figure 2). First steps
towards an algebraic handling of such situations are given in [34, 19]. In future work, we will adapt these approaches
to algebraic separation logic.
Figure 2: Two lists with shared cells.

Our algebraic approach to separation logic also paves the way for verifying properties with off-the-shelf theorem
provers. Boolean semirings have proved to be reasonably well suited for automated theorem provers [24]. Therefore
the first-order part of our approach can easily be handled by automated theorem provers. If one needs the full power
of quantales, the situation is a bit different: there are encodings of quantales for higher-order theorem provers [12].
However, at the moment higher-order systems can only verify simple theorems fully automatically. Looking at the
development of first-order provers in the past, we expect a rapid development in automated higher-order provers.
Hence one of our next plans for future work is to analyse the power of such systems for reasoning with separation logic.
A long-term perspective is to incorporate reasoning about concurrent programs with shared linked data structures
along the lines of [41]. One central property mentioned in that paper is the rule of disjoint concurrency which reads

{p1} C1 {q1}     {p2} C2 {q2}
{p1 ∗ p2} C1 || C2 {q1 ∗ q2}

assuming C1 does not modify any free variables in p2 and q2 and, conversely, C2 does not modify free variables of p1
and q1. Using the approach for the frame rule in this paper it should be possible to get an algebraic proof for this rule,
too.

Acknowledgements: We are grateful to the anonymous referees for their careful scrutiny of the manuscript and their
many helpful remarks. The material in Section 8 benefited greatly from discussions with C.A.R. Hoare and P. O’Hearn.

References
[1] R.-J. Back and J. von Wright. Refinement Calculus: A Systematic Introduction. Graduate Texts in Computer Science. Springer, 1998.
[2] R. C. Backhouse and J. van der Woude. Demonic operators and monotype factors. Mathematical Structures in Computer Science, 3(4):417–
433, 1993.
[3] B. Biering, L. Birkedal, and N. Torp-Smith. BI-hyperdoctrines, Higher-order Separation Logic, and Abstraction. ACM Transactions on
Programming Languages and Systems, 29(5):24, 2007.
[4] G. Birkhoff. Lattice Theory, volume XXV of Colloquium Publications. Annals of Mathematics Studies, 3rd edition, 1967.
[5] T. Blyth and M. Janowitz. Residuation theory. Pergamon Press, 1972.
[6] A. Buch, T. Hillenbrand, and R. Fettig. Waldmeister: High Performance Equational Theorem Proving. In J. Calmet and C. Limongelli,
editors, Proceedings of the International Symposium on Design and Implementation of Symbolic Computation Systems, number 1128 in
Lecture Notes in Computer Science, pages 63–64. Springer, 1996.
[7] R. Burstall. Some techniques for proving correctness of programs which alter data structures. Machine Intelligence, 7:23–50, 1972.
[8] C. Calcagno, P. O’Hearn, and H. Yang. Local action and abstract separation logic. In LICS ’07: Proceedings of the 22nd Annual IEEE
Symposium on Logic in Computer Science, pages 366–378. IEEE Press, 2007.
[9] E. Cohen. Using Kleene algebra to reason about concurrency control. Technical report, Telcordia, 1994.
[10] J. H. Conway. Regular Algebra and Finite Machines. Chapman & Hall, 1971.
[11] H.-H. Dang. Algebraic aspects of separation logic. Technical Report 2009-01, Institut für Informatik, 2009.
[12] H.-H. Dang and P. Höfner. Automated Higher-Order Reasoning about Quantales. In B. Konev, R. Schmidt, and S. Schulz, editors, Workshop
on Practical Aspects of Automated Reasoning (PAAR-2010), pages 40–49, 2010.
[13] H.-H. Dang, P. Höfner, and B. Möller. Algebraic Separation Logic. Technical Report 2010-06, Institute of Computer Science, University of
Augsburg, 2010.
[14] J. Desharnais, N. Belkhiter, S. Sghaier, F. Tchier, A. Jaoua, A. Mili, and N. Zaguia. Embedding a demonic semilattice in a relational algebra.
Theoretical Computer Science, 149(2):333–360, 1995.
[15] J. Desharnais, A. Mili, and T. Nguyen. Refinement and demonic semantics. In C. Brink, W. Kahl, and G. Schmidt, editors, Relational
Methods in Computer Science, pages 166–183. Springer, 1997.
[16] J. Desharnais and B. Möller. Characterizing Determinacy in Kleene Algebras. Information Sciences, 139:253–273, 2001.

[17] J. Desharnais, B. Möller, and F. Tchier. Kleene under a modal demonic star. Journal of Logic and Algebraic Programming, 66(2):127–160,
2006.
[18] E. Dijkstra. A discipline of programming. Prentice Hall, 1976.
[19] T. Ehm. Pointer Kleene algebra. In R. Berghammer, B. Möller, and G. Struth, editors, Relational and Kleene-Algebraic Methods in Computer
Science, volume 3051 of Lecture Notes in Computer Science, pages 99–111. Springer, 2004.
[20] C. A. R. Hoare. An axiomatic basis for computer programming. Communications of the ACM, 12(10):576–580, 1969.
[21] C. A. R. Hoare, B. Möller, G. Struth, and I. Wehrman. Concurrent Kleene algebra. In M. Bravetti and G. Zavattaro, editors, CONCUR 09 —
Concurrency Theory, volume 5710 of Lecture Notes in Computer Science, pages 399–414. Springer, 2009.
[22] C. A. R. Hoare, B. Möller, G. Struth, and I. Wehrman. Foundations of concurrent Kleene algebra. In R. Berghammer, A. Jaoua, and B. Möller,
editors, Relations and Kleene Algebra in Computer Science, volume 5827 of Lecture Notes in Computer Science. Springer, 2009.
[23] P. Höfner. Database for automated proofs of Kleene algebra. http://www.kleenealgebra.de (accessed March 31, 2011).
[24] P. Höfner and G. Struth. Automated reasoning in Kleene algebra. In F. Pfennig, editor, Automated Deduction, volume 4603 of Lecture Notes
in Artificial Intelligence, pages 279–294. Springer, 2007.
[25] P. Höfner, G. Struth, and G. Sutcliffe. Automated verification of refinement laws. Annals of Mathematics and Artificial Intelligence, Special
Issue on First-order Theorem Proving, pages 35–62, 2008.
[26] S. Ishtiaq and P. O’Hearn. BI as an assertion language for mutable data structures. ACM SIGPLAN Notices, 36:14–26, 2001.
[27] B. Jónsson and A. Tarski. Boolean algebras with operators, Part I. American Journal of Mathematics, 73, 1951.
[28] D. Kozen. On Hoare logic and Kleene algebra with tests. ACM Transactions on Computational Logic, 1(1):60–76, 2000.
[29] D. Kozen. On Hoare logic, Kleene algebra, and types. In P. Gärdenfors, J. Woleński, and K. Kijania-Placek, editors, In the Scope of Logic,
Methodology, and Philosophy of Science: Volume One of the 11th Int. Congress Logic, Methodology and Philosophy of Science, Cracow,
August 1999, volume 315 of Studies in Epistemology, Logic, Methodology, and Philosophy of Science, pages 119–133. Kluwer, 2002.
[30] D. Kozen and M.-C. Patron. Certification of compiler optimizations using Kleene algebra with tests. In J. Lloyd, V. Dahl, U. Furbach,
M. Kerber, K.-K. Lau, C. Palamidessi, L. M. Pereira, Y. Sagiv, and P. J. Stuckey, editors, Proc. 1st Int. Conf. Computational Logic (CL2000),
volume 1861 of Lecture Notes in Computer Science, pages 568–582. Springer, 2000.
[31] R. Maddux. Relation Algebras, volume 150 of Studies in Logic and the Foundations of Mathematics. Elsevier, 2006.
[32] W. W. McCune. Prover9 and Mace4. <http://www.cs.unm.edu/∼mccune/prover9>. (accessed March 31, 2011).
[33] B. Möller. Towards pointer algebra. Science of Computer Programming, 21(1):57–90, 1993.
[34] B. Möller. Calculating with acyclic and cyclic lists. Information Sciences, 119(3-4):135–154, 1999.
[35] B. Möller. Kleene getting lazy. Science of Computer Programming, 65:195–214, 2007.
[36] B. Möller, P. Höfner, and G. Struth. Quantales and temporal logics. In M. Johnson and V. Vene, editors, Algebraic Methodology and Software
Technology, Proc. AMAST 2006, volume 4019 of Lecture Notes in Computer Science, pages 263–277. Springer, 2006.
[37] B. Möller and G. Struth. Algebras of modal operators and partial correctness. Theoretical Computer Science, 351(2):221–239, 2006.
[38] B. Möller and G. Struth. WP is WLP. In W. MacCaull, M. Winter, and I. Düntsch, editors, Relational Methods in Computer Science, volume
3929 of Lecture Notes in Computer Science, pages 200–211. Springer, 2006.
[39] C. Mulvey. &. Rendiconti del Circolo Matematico di Palermo, 12(2):99–104, 1986.
[40] T. T. Nguyen. A relational model of nondeterministic programs. International J. Foundations Comp. Sci., 2:101–131, 1991.
[41] P. O’Hearn. Resources, concurrency, and local reasoning. Theoretical Computer Science, 375:271–307, 2007.
[42] P. O’Hearn, J. C. Reynolds, and H. Yang. Local reasoning about programs that alter data structures. In L. Fribourg, editor, CSL ’01: 15th
International Workshop on Computer Science Logic, volume 2142 of Lecture Notes in Computer Science, pages 1–19. Springer, 2001.
[43] P. O’Hearn, J. C. Reynolds, and H. Yang. Separation and information hiding. ACM Trans. Program. Lang. Syst., 31(3):1–50, 2009.
[44] V. Preoteasa. Frame rule for mutually recursive procedures manipulating pointers. Theoretical Computer Science, 410(42):4216–4233, 2009.
[45] D. J. Pym, P. O’Hearn, and H. Yang. Possible worlds and resources: the semantics of BI. Theoretical Computer Science, 315(1):257–305,
2004. Mathematical Foundations of Programming Semantics.
[46] J. C. Reynolds. An introduction to separation logic. In M. Broy et al., editors, Engineering Methods and Tools for Software Safety and
Security, pages 285–310. IOS Press, 2009.
[47] K. Rosenthal. Quantales and their applications. Pitman Research Notes in Mathematics Series, 234, 1990.
[48] G. Schmidt and T. Ströhlein. Relations and Graphs: Discrete Mathematics for Computer Scientists. Springer, 1993.
[49] V. Vafeiadis and M. Parkinson. A marriage of rely/guarantee and separation logic. In 18th International Conference on Concurrency Theory
(CONCUR’07), volume 4703 of Lecture Notes in Computer Science, pages 256–271. Springer, 2007.
[50] H. Yang and P. O’Hearn. A semantic basis for local reasoning. In M. Nielsen and U. Engberg, editors, Foundations of Software Science and
Computation Structures, Proc. FOSSACS 2002, volume 2303 of Lecture Notes in Computer Science, pages 402–416. Springer, 2002.

A. Deferred Proofs

Proof of Lemma 5.8. By Lemma 5.4 we have [[p −∗ q]] = [[q]]/[[p]]. Now it is easy to see that
s, h |= p −⊛ q
⇔ {[ definition of −⊛ ]}
s, h |= ¬(q −∗ (¬p))
⇔ {[ definition of −∗ ]}
¬(∀ h′ : (dom(h′) ∩ dom(h) = ∅ ∧ s, h′ |= q) ⇒ s, h′ ∪ h |= ¬p)
⇔ {[ logic: ¬ over ∀ ]}
∃ h′ : ¬((dom(h′) ∩ dom(h) = ∅ ∧ s, h′ |= q) ⇒ s, h′ ∪ h |= ¬p)
⇔ {[ logic: ¬(A ⇒ B) ⇔ (A ∧ ¬B) ]}
∃ h′ : dom(h′) ∩ dom(h) = ∅ ∧ s, h′ |= q ∧ s, h′ ∪ h ⊭ ¬p
⇔ {[ logic ]}
∃ h′ : dom(h′) ∩ dom(h) = ∅ ∧ s, h′ |= q ∧ s, h′ ∪ h |= p
⇔ {[ setting for (⇒) ĥ =df h′ ∪ h and for (⇐) h′ =df ĥ − h ]}
∃ ĥ : h ⊆ ĥ ∧ s, ĥ − h |= q ∧ s, ĥ |= p □
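The final equivalence can be machine-checked on a tiny heap model. The sketch below is our own illustration: it ignores the store component, writes heaps as sets of (address, value) cells, and compares the direct reading of septraction p −⊛ q with the characterisation just derived, exhaustively over all assertions on a two-address heap.

```python
from itertools import chain, combinations

CELLS = [(1, 0), (2, 0)]            # two addresses, a single storable value
HEAPS = [frozenset(c) for n in range(len(CELLS) + 1)
         for c in combinations(CELLS, n)]

def dom(h):
    return {a for a, _ in h}

def septraction(p, q, h):
    # direct reading: some q-heap h2, disjoint from h, extends h into p
    return any(h2 in q and not (dom(h2) & dom(h)) and (h2 | h) in p
               for h2 in HEAPS)

def characterisation(p, q, h):
    # the form derived in Lemma 5.8: some superheap hh of h
    # such that hh - h satisfies q and hh satisfies p
    return any(h <= hh and (hh - h) in q and hh in p for hh in HEAPS)

def powerset(xs):
    return chain.from_iterable(combinations(xs, n) for n in range(len(xs) + 1))

# exhaustive check over all assertions p, q (sets of heaps) and all heaps h
for ps in powerset(HEAPS):
    for qs in powerset(HEAPS):
        p, q = set(ps), set(qs)
        assert all(septraction(p, q, h) == characterisation(p, q, h)
                   for h in HEAPS)
```

The witnesses used in the check mirror the last proof step: ĥ = h′ ∪ h in one direction and h′ = ĥ − h in the other.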

Proof of Lemma 6.9. (⇐): Using Equation (Ded), isotony and the assumption, we get

a ⊓ b · c ≤ ((a −⊛ c) ⊓ b) · c ≤ ((a −⊛ >) ⊓ b) · c ≤ (a ⊓ b) · c

and the symmetric formula a ⊓ b · c ≤ b · (a ⊓ c). From this the claim follows by
a ⊓ (b · c) = a ⊓ a ⊓ (b · c) ≤ a ⊓ ((a ⊓ b) · c) ≤ (a ⊓ b) · (a ⊓ c) .
(⇒): From (8) we obtain a ⊓ (ā · >) ≤ (a ⊓ ā) · (a ⊓ >) = 0 · a = 0 and hence, by shunting (shu) and the exchange
law (exc), a −⊛ > ≤ a. □

Proof of Lemma 6.10.

(a) The claim follows immediately from Lemma 6.9.
(b) We first show that a = (a ⊓ 1) · > follows from Inequations (7) and (8). By neutrality of > for ⊓, neutrality of 1
for ·, meet-distributivity (8) and isotony, we get
a = a ⊓ > = a ⊓ (1 · >) ≤ (a ⊓ 1) · (a ⊓ >) ≤ (a ⊓ 1) · > .
The converse inequation follows by isotony and Inequation (7):
(a ⊓ 1) · > ≤ a · > ≤ a .
Next we show that a = (a ⊓ 1) · > implies the two inequations a · > ≤ a and ā · > ≤ ā which, by Part (a), implies
the claim. The first inequation is shown by the assumption, the general law > · > = > and the assumption again:
a · > = (a ⊓ 1) · > · > = (a ⊓ 1) · > = a .
For the second inequation, we note that in a Boolean quantale the law (t · >)‾ = (t̄ ⊓ 1) · > holds for all subidentities
t (t ≤ 1) (e.g. [16]). From this we get
ā · > = ((a ⊓ 1) · >)‾ · > = ((ā ⊓ 1) · >) · > = (ā ⊓ 1) · > = ((a ⊓ 1) · >)‾ = ā .

(c) We show the equivalence of the equation (a ⊓ b) · c = a ⊓ b · c with Definition 6.6. First we prove the (⇒)-direction
and split the proof, showing each inequation separately, starting with ≥:
a ⊓ b · c ≤ (a ⊓ b) · (a ⊓ c) ≤ (a ⊓ b) · c .
This holds by Equation (8) and isotony. To prove the ≤-direction, we know (a ⊓ b) · c ≤ a · c ≤ a · > ≤ a, which
follows from isotony and Equation (7). Now, using (a ⊓ b) · c ≤ b · c, we can immediately conclude (a ⊓ b) · c ≤ a ⊓ b · c.
Next we give a proof for the (⇐)-direction and assume that (a ⊓ b) · c = a ⊓ b · c holds. Equation (7) follows by
a · > = (a ⊓ a) · > = a ⊓ a · > ≤ a. Furthermore we calculate
a ⊓ (b · c) = a ⊓ a ⊓ (b · c) = a ⊓ ((a ⊓ b) · c) = a ⊓ (c · (a ⊓ b)) = (a ⊓ c) · (a ⊓ b) = (a ⊓ b) · (a ⊓ c) ,
which holds by idempotence of ⊓, the assumption, commutativity of ·, again the assumption, and commutativity once more. □

Proof of Lemma 6.25.
(a) Since [[s, h]] is a singleton set, this is obvious.
(b) By definition of ∗, Part (a), the assumption h ⊆ h′ together with Lemma 6.24, and set theory:

s, h′ |= p ∗ [[s, h′ − h]]
⇔ ∃ ĥ : ĥ ⊆ h′ ∧ s, ĥ |= p ∧ s, h′ − ĥ |= [[s, h′ − h]]
⇔ ∃ ĥ : ĥ ⊆ h′ ∧ s, ĥ |= p ∧ h′ − ĥ = h′ − h
⇔ ∃ ĥ : ĥ ⊆ h′ ∧ s, ĥ |= p ∧ ĥ = h
⇔ s, h |= p

(c) By definition of ∗, the fact that s, h′ − ĥ |= true always holds, Part (a), and set theory:

s, h′ |= [[s, h]] ∗ true
⇔ ∃ ĥ : ĥ ⊆ h′ ∧ s, ĥ |= [[s, h]] ∧ s, h′ − ĥ |= true
⇔ ∃ ĥ : ĥ ⊆ h′ ∧ s, ĥ |= [[s, h]]
⇔ ∃ ĥ : ĥ ⊆ h′ ∧ ĥ = h
⇔ h ⊆ h′ □

Proof of Lemma 7.3. We first show the second equivalence.

> ; p ; C ⊆ > ; q
⇒ {[ by p ⊆ > ; p and isotony ]}
p ; C ⊆ > ; q
⇒ {[ pre-composing both sides with > ]}
> ; p ; C ⊆ > ; > ; q
⇒ {[ by > ; > ⊆ > ]}
> ; p ; C ⊆ > ; q

Now we tackle the first equivalence.
(⇒) By isotony,
p ; C ⊆ C ; q ⇒ > ; p ; C ⊆ > ; C ; q ⊆ > ; > ; q ⊆ > ; q .
(⇐) First, since p ⊆ I, we have p ; C ⊆ C. Second, by neutrality, isotony and the assumption,
p ; C = I ; p ; C ⊆ > ; p ; C ⊆ > ; q .
Therefore p ; C ⊆ C ∩ > ; q. By a standard result on test semirings (e.g. [35]) the right-hand side is equal to C ; q. □
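The equivalent characterisations of the Hoare triple are easy to test over concrete finite relations. In this Python sketch (our own encoding: relations as sets of pairs over a three-element state space, tests as partial identities, > as the universal relation) randomly generated relations confirm the equivalences of Lemma 7.3.

```python
import random

S = (0, 1, 2)
PAIRS = [(x, y) for x in S for y in S]
TOP = frozenset(PAIRS)                      # the greatest relation >
I = frozenset((x, x) for x in S)            # the identity relation

def comp(R, T):
    # relational composition R ; T
    return frozenset((x, z) for (x, y) in R for (y2, z) in T if y == y2)

random.seed(0)

def rand_rel():
    return frozenset(p for p in PAIRS if random.random() < 0.4)

def rand_test():
    # a test is a partial identity, i.e., a subset of I
    return frozenset(p for p in I if random.random() < 0.5)

for _ in range(300):
    C, p, q = rand_rel(), rand_test(), rand_test()
    triple = comp(p, C) <= comp(C, q)                  # p ; C ⊆ C ; q
    char1 = comp(p, C) <= comp(TOP, q)                 # p ; C ⊆ > ; q
    char2 = comp(TOP, comp(p, C)) <= comp(TOP, q)      # > ; p ; C ⊆ > ; q
    assert triple == char1 == char2
```

The key step of the proof is visible in the model: C ∩ > ; q keeps exactly those pairs of C whose target lies in q, which is C ; q.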

Proof of Lemma 8.11.

(a) We calculate, assuming p • r ⊆ (p ; dom(C)) • (r ; dom(K)),

(p • r) ; C
⊆ {[ assumption ]}
((p ; dom(C)) • (r ; dom(K))) ; C
= {[ definition of • on relations (Def. 8.2(d)) ]}
⋈˘ ; ((p ; dom(C)) × (r ; dom(K))) ; ⋈ ; C
= {[ exchange (15) ]}
⋈˘ ; (p × r) ; (dom(C) × dom(K)) ; ⋈ ; C
⊆ {[ since C has the K-frame-property ]}
⋈˘ ; (p × r) ; (C × K) ; ⋈
⊆ {[ exchange (15) ]}
⋈˘ ; ((p ; C) × (r ; K)) ; ⋈
= {[ definition of • on relations ]}
(p ; C) • (r ; K)

(b) (C ; q) • (r ; K)
= {[ definition of • on relations ]}
⋈˘ ; ((C ; q) × (r ; K)) ; ⋈
= {[ neutrality of I w.r.t. composition and exchange (15) ]}
⋈˘ ; (C × (r ; K)) ; (q × I) ; ⋈
= {[ Equation (16) ]}
⋈˘ ; (C × (r ; K)) ; (q × I) ; # ; ⋈
= {[ (q × I) and # are partial identities and hence commute ]}
⋈˘ ; (C × (r ; K)) ; # ; (q × I) ; ⋈
⊆ {[ C K-preserves r ]}
> ; ⋈˘ ; (I × r) ; (q × I) ; ⋈
= {[ exchange (15) and neutrality of I ]}
> ; ⋈˘ ; (q × r) ; ⋈
= {[ definition of • on relations ]}
> ; (q • r) .

Proof of Lemma 8.14. Closure under union is straightforward from distributivity of ; and × over ∪. We now show
closure under (pre-)composition.
(a) Assume that C has the K-frame property and that p is a test.

(dom(p ; C) × dom(K)) ; ⋈ ; p ; C
= {[ neutrality of I, dom(p ; C) = p ; dom(C) and exchange (15) ]}
(p × I) ; (dom(C) × dom(K)) ; ⋈ ; p ; C
⊆ {[ isotony ]}
(p × I) ; (dom(C) × dom(K)) ; ⋈ ; C
⊆ {[ C has the frame property ]}
(p × I) ; (C × K) ; ⋈
= {[ exchange (15) ]}
((p ; C) × K) ; ⋈

Assume now that C and D have the K-frame property and satisfy cod(C) ⊆ dom(D). Then

(dom(C ; D) × dom(K)) ; ⋈ ; C ; D
⊆ {[ definition of domain ]}
(dom(C) × dom(K)) ; ⋈ ; C ; D
⊆ {[ C has the K-frame property ]}
(C × K) ; ⋈ ; D
= {[ R = R ; cod(R) for arbitrary R and exchange (15) ]}
(C × K) ; (cod(C) × cod(K)) ; ⋈ ; D
⊆ {[ assumption and definition of compensator ]}
(C × K) ; (dom(D) × dom(K)) ; ⋈ ; D
⊆ {[ D has the K-frame property ]}
(C × K) ; (D × K) ; ⋈
= {[ exchange (15) ]}
((C ; D) × (K ; K)) ; ⋈
= {[ definition of compensators ]}
((C ; D) × K) ; ⋈ .

(b) Assume that C K-preserves r and that p is a test. Then

⋈˘ ; ((p ; C) × (r ; K)) ; # ⊆ ⋈˘ ; (C × (r ; K)) ; # ⊆ > ; ⋈˘ ; (I × r) .

Assume now that C and D K-preserve r and # ; ((C ; D) × K) ; # = # ; (C × K) ; # ; (D × K) ; # . Then

⋈˘ ; ((C ; D) × (r ; K)) ; #
= {[ neutrality of I and exchange (15) ]}
⋈˘ ; (I × r) ; ((C ; D) × K) ; #
= {[ by Equation (16) ]}
⋈˘ ; # ; (I × r) ; ((C ; D) × K) ; #
= {[ # and I × r are partial identities and hence are idempotent and commute ]}
⋈˘ ; # ; (I × r) ; # ; ((C ; D) × K) ; #
= {[ by Equation (16) ]}
⋈˘ ; (I × r) ; # ; ((C ; D) × K) ; #
= {[ by the assumption ]}
⋈˘ ; (I × r) ; # ; (C × K) ; # ; (D × K) ; #
⊆ {[ # is a subidentity ]}
⋈˘ ; (I × r) ; (C × K) ; # ; (D × K) ; #
= {[ exchange (15) and neutrality of I ]}
⋈˘ ; (C × (r ; K)) ; # ; (D × K) ; #
⊆ {[ C K-preserves r ]}
> ; ⋈˘ ; (I × r) ; (D × K) ; #
= {[ exchange (15) and neutrality of I ]}
> ; ⋈˘ ; (D × (r ; K)) ; #
⊆ {[ D K-preserves r ]}
> ; > ; ⋈˘ ; (I × r)
⊆ {[ > is the largest element ]}
> ; ⋈˘ ; (I × r) □
