This thesis has been submitted in fulfilment of the requirements for a postgraduate degree

(e.g. PhD, MPhil, DClinPsychol) at the University of Edinburgh. Please note the following
terms and conditions of use:

• This work is protected by copyright and other intellectual property rights, which are
retained by the thesis author, unless otherwise stated.
• A copy can be downloaded for personal non-commercial research or study, without
prior permission or charge.
• This thesis cannot be reproduced or quoted extensively from without first obtaining
permission in writing from the author.
• The content must not be changed in any way or sold commercially in any format or
medium without the formal permission of the author.
• When referring to this work, full bibliographic details including the author, title,
awarding institution and date of the thesis must be given.
Operational Semantics and
Polymorphic Type Inference

Mads Tofte

Ph. D.
University of Edinburgh
1987
Abstract

Three languages with polymorphic type disciplines are discussed, namely the
λ-calculus with Milner's polymorphic type discipline; a language with imperative
features (polymorphic references); and a skeletal module language with structures,
signatures and functors. In the first two cases we show that the
type inference system is consistent with an operational dynamic semantics.
On the module level, polymorphic types correspond to signatures. There is
a notion of principal signature. So-called signature checking is the module level
equivalent of type checking. In particular, there exists an algorithm which either
fails or produces a principal signature.
Contents

1 Introduction

I An Applicative Language

2 Milner's Polymorphic Type Discipline
  2.1 Notation
  2.2 Dynamic Semantics
  2.3 Static Semantics
  2.4 Principal Types
  2.5 The Consistency Result

II An Imperative Language

3 Formulation of the Problem
  3.1 A simple language
  3.2 The Problem is Generalization

4 The Type Discipline
  4.1 The Inference system
  4.2 Examples of Type Inference
  4.3 A Type Checker

5 Proof of Soundness
  5.1 Lemmas about Substitutions
  5.2 Typing of Values using Maximal Fixpoints
  5.3 The Consistency Theorem

6 Comparison with Damas' Inference System
  6.1 The Inference System
  6.2 Comparison
  6.3 Pragmatics

III A Module Language

7 Typed Modules

8 The Language ModL
  8.1 Syntax
  8.2 Semantic Objects
  8.3 Notation
  8.4 Inference Rules
    8.4.1 Declarations and Structure Expressions
    8.4.2 Specifications and Signature Expressions
    8.4.3 Programs

9 Foundations of the Semantics
  9.1 Coherence and Consistency
    9.1.1 Coherence
    9.1.2 Consistency
    9.1.3 Coherence Only
  9.2 Well-formedness and Admissibility
  9.3 Two Lemmas about Instantiation

10 Robustness Results
  10.1 Realisation and Instantiation
  10.2 The Strict Rules Preserve Admissibility
  10.3 Realisation and Structure Expressions

11 Unification of Structures
  11.1 Soundness of Unify
  11.2 Completeness of Unify
  11.3 Comparison With Term Unification
  11.4 Comparison with Aït-Kaci's Type Discipline

12 Principal Signatures
  12.1 The Signature Checker, SC
  12.2 Soundness of SC
  12.3 Completeness of SC

13 Conclusion
  13.1 Summary
  13.2 How the ML Modules Evolved
  13.3 The Experience of Using Operational Semantics
  13.4 Future Work

Appendix A: Robustness Proofs

Bibliography
Chapter 1

Introduction
Since early days in programming the concept of type has been important. The
basic idea is simple; the values on which a program operates have types and
whenever the program performs an operation on a value the operation must be
consistent with the type of the value. For example, the operation "multiply x by
7" makes sense if x is a value of type integer or real, but not if x is a day in the
week, say.
Any programming language comes with some typing rules, or a type discipline,
if you like, that the programmer keeps in mind when using the language. This is
true even of so-called "untyped" languages.¹
There is an overwhelming variety of programming languages with different
notions of values and types and consequently also a great variety of type disci-
plines.
Some languages have deliberately been designed so that a machine by just
examining the program text can determine whether the program complies with
the typing rules of the language. In our example, "multiply x by 7", it is a
simple matter, even for a computer, to check the well-typedness of the expression
assuming that x has type int, say, without ever doing multiplications by 7. This
kind of textual analysis, static type checking, has two advantages: firstly, one can
discover programming mistakes before the program is executed; and secondly, it
can help a compiler generate code.
For these languages one can factor out the static semantics from the dynamic
semantics. The former deals with type checking and possibly other things a com-
piler can handle, while the latter deals with the evaluation of programs provided

they are legal according to the static semantics. Let us refer to this class of
languages as the statically typed languages. It includes ALGOL [4], PASCAL [44],
and Standard ML [18, 20].

¹Every LISP programmer knows that one should not attempt to add an integer and a list,
although this is not always conceived as a typing rule.
But there are also languages where such a separation is impossible. One
example is LISP [26] which has basically one data type. The "type errors" one
can commit are of such a kind that they cannot in general be discovered statically
by a mere textual analysis.² Similarly, there can be no separation in languages
where types can be manipulated as freely as values. Let us call such languages
dynamically typed.
At first sight it seems a wonderful idea that a type checker can find program-
ming mistakes even before the program is executed. The catch is, of course, that
the typing rules have to be simple enough that we humans can understand them
and make the computers enforce them. Hence we will always be able to come up
with examples of programs that are perfectly sensible and yet illegal according
to the typing rules. Some will be quick to say that far from having been offered
a type discipline they have been lumbered with a type bureaucracy.
The introduction of polymorphism [27] in functional programming must have
been extremely welcome news to people who believed in statically typed lan-
guages because the polymorphic type checker in some ways was a lot more tol-
erant than what had previously been seen. A monomorphic type system is one
where every expression has at most one type. By contrast, in Milner's poly-
morphic type discipline there is the distinction between a generic type, or type
scheme, and all the instances of the generic type. One of the typing rules is that
if an expression has a generic type, then it also has all instances of the generic
type. So the empty list, for example, has the generic type ∀α.α list and all its
instances int list, (bool list) list, (int → bool) list, and so on. In fact, ∀α.α list
is said to be the principal type of the empty list, because all other types of the
empty list can be obtained from it by instantiation.
Polymorphism occurs naturally in programming in many situations. For ex-
ample the function that reverses lists is logically the same regardless of the type of
list it reverses. Indeed, if all lists are represented in a uniform way, one compiled
version of the reverse function will suffice.
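As a concrete illustration (my sketch in Standard ML, not an excerpt from the thesis), a single definition of reverse serves lists of every element type; its principal type scheme corresponds to ∀t.t list → t list:

    (* One compiled reverse works for all element types. *)
    fun reverse xs =
      let
        fun loop ([], acc) = acc
          | loop (y :: ys, acc) = loop (ys, y :: acc)
      in
        loop (xs, [])
      end

    val ints  = reverse [1, 2, 3]         (* int list  *)
    val bools = reverse [true, false]     (* bool list *)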
Milner's polymorphic type discipline has been used in designing the functional
language ML, first in "Old ML", which was a "meta language" for the proof
system LCF [15], and later in Standard ML [18].

²Even in statically typed languages there will normally be kinds of "type errors" that cannot
be discovered. Taking the head of the empty list is one example; index errors in arrays are
another.
The main purpose of this thesis is to demonstrate that the basic ideas in
Milner's polymorphism carry over from the purely applicative setting to two quite
different languages. Because these languages have different notions of values and
types, the objects we reason about are different. But there is still the idea of
types that are instances of type schemes, there are still notions of principal types,
and using different kinds of unification algorithm one can get new type checkers
that greatly resemble the one for the applicative case. Perhaps some category
theorist will tell us that these different type systems are all the same, but I find
it interesting that this kind of polymorphism "works" in languages that are not
the same at all.
Unfortunately, polymorphism does not come for free. The price we have to
pay is that we have to think pretty hard when we lay down the typing rules.
First and foremost, typing rules must be sound in the sense that if they admit
a program then (in some sense) that program must not be bad. Whereas in
monomorphic type disciplines one would not feel compelled to invest lots of en-
ergy in investigating soundness, one has to be extremely careful when considering
polymorphism. Indeed Part II of this thesis has its root in the historical fact that
people have worked very hard to extend the purely applicative type discipline to
one that handles references (pointers) as values. Polymorphic exceptions were in
ML for years before it quite recently was discovered that the "obvious" typing
rules are unsound.
Clearly we do not want to launch unsound type inference systems. Perhaps
we do not want to undertake the task of proving the soundness of typing rules
for a big programming language, such as ML, but the least we can do is to make
sure that the big language rests on a sound base. Then we want to formulate the
typing rules for the small language in a way that is convenient for stating and
proving theorems.
It so happens that there is a notation that is extremely convenient for defin-
ing these polymorphic type disciplines (and others as well, I trust). It started
as "Structural Operational Semantics" [34], the French call it "Natural Seman-
tics" [9]- I shall use the term "operational semantics". The idea is to borrow
the concept of inference rule and proof from formal logic. The typing rules are
CHAPTER 1. INTRODUCTION 4

then expressed as type inference rules. For example the rule


ei:T --+ T e2:T
el e2:T
can be read: " if you can prove that the expression el has type r-' --+ T and that e2

has type r-' then you may infer that the application of el to e2 has type r." Such
rules need not be deterministic (r is not a function of e, not even when e contains
no free variables) and that makes them very suitable for defining polymorphism.
Given a type inference system in the form of a collection of type inference
rules, how do we investigate soundness?
The first soundness proofs used denotational semantics [14, 13]. Types are
seen as ideals of values in a model of the lambda calculus and it is proved that
if by the type inference system an expression has a type then the value denoted
by the expression is a member of the ideal that models the type.
It seems a bit unfortunate that we should have to understand domain theory
to be able to investigate whether a type inference system admits faulty programs.
The approach to soundness I take is different in that I also use operational seman-
tics to define the dynamic semantics and then prove that, in a sense that is made
precise later on, the two inference systems are consistent. This only requires
elementary set theory plus a simple powerful technique for proving properties of
maximal fixpoints of monotonic operators.

1.1 Related Work


Let us briefly review some of the contributions related to typing of programs or,
more generally, typing of formal expressions.
Hindley's seminal paper [22] concerns the typing of objects in combinatory
logic. The expressions studied by Hindley can be thought of as the lambda
calculus [5] with constants. He considers type expressions built from base types
using the type constructor for function space. In Hindley's terminology, a type
scheme is a type in which type variables can occur. Hindley gives type inference
rules allowing inferences of the form A ⊢ σX, where X is an object (expression),
σ is a type scheme and A is a set of statements assigning one (and only one)
type scheme to every variable that occurs free in X. He proves the existence of
principal type schemes in the following strong sense: for all X, if for some σ and
A one has A ⊢ σX then there exists a type scheme, σ₀, with the property that
for all A′, σ′, if A′ ⊢ σ′X then σ′ is a substitution instance of σ₀.

Milner [27] independently discovered essentially the same result but he was
able to extend the type inference rules so that a function, once declared, can be
applied to objects of different types. For instance, the expression

    let I = λx.x in (I 3, I true)    (1.1)

is typable in Milner's system. By contrast, the semantically equivalent

    (λI.(I 3, I true))(λx.x)    (1.2)

is typable neither in Hindley's system nor in Milner's system. The essential
innovation in Milner's system is the introduction of bound type variables. More
precisely, trying to maintain the above notation, Milner's notion of a type scheme
is an expression, σ, of the form ∀α₁…αₙ.β, where β is what Hindley called a type
scheme, and α₁, …, αₙ are type variables bound in σ. In (1.1), when I is declared
it can be ascribed the type scheme ∀α.α → α which in the two applications can
be instantiated to int → int and bool → bool, respectively. This extension can be
done without the loss of principal typing, although the notion of principal typing
is slightly different from the one stated above.
Milner's extension gives the ability to type a very large class of programs.
Type checking can be done effectively by a type checker [27, 14]. Milner's type
discipline is used in the language Standard ML.
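In Standard ML syntax (my transcription, not the thesis's) the contrast between (1.1) and (1.2) can be observed directly:

    (* (1.1): the let-bound I receives a type scheme and may be
       instantiated at int and at bool; this typechecks. *)
    val ok = let val I = fn x => x in (I 3, I true) end

    (* (1.2): a lambda-bound I is monomorphic, so the following
       is rejected by the type checker:
    val bad = (fn I => (I 3, I true)) (fn x => x)
    *)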
Another extension of Hindley's work is the introduction of intersection types
by Coppo et al. [10, 11, 12, 38]. For instance, at the binding of I in (1.2), I can
be ascribed the type (int → int) ∩ (bool → bool), making the two applications of
I typable. The use of intersection types replaces the binding of type variables in
Milner's scheme; in fact every expression that is well-typed in Milner's system is
also well-typed using intersection types. The price for the greater power is that
the well-typedness of expressions is only semi-decidable. There is an algorithm
which produces principal types for well-typed programs, but it is not always able
to detect that an ill-typed program is ill-typed.
Common to the above three approaches is that type expressions are inferred
from expressions that do not contain type expressions. Thus, in Milner's sys-
tem, the type checker infers the types of formal parameters of functions and of
functions that are declared in the program.
A different approach to polymorphism is Reynolds' second order typed lambda
calculus [35, 36]. Here one can form functional abstractions, the formal parameter
being a type variable. Such an abstraction can be applied to a type, giving an
object the type of which depends on the actual type argument.
Finally there is work on type inheritance or subtyping. This is a form of
polymorphism which is natural in connection with typing of labelled records.
If, for example, function f selects the component labelled L from its argument,
then it should be possible to apply f to all arguments that have an L com-
ponent. Cardelli [8] introduced one system for subtyping and, more recently,
Wand [42] has given a similar type inference system. Principal types do not exist
for Cardelli's system. Due to a mistake principal types do not exist in Wand's
system either, but I understand that Wand will publish a corrected version. Al-
though developed independently, there are strong similarities between Wand's
unification algorithm and the one presented in Part III of this thesis.
A more detailed overview of the above approaches to typing of expressions
can be found in [36]. See also Chapter 6 for a comparison of our imperative type
discipline with that of Luis Damas, and Section 11.4 for a comparison between
structure unification in ML and the similar unification algorithm due to H. Aït-Kaci.
The three type disciplines we study are all related to the programming language
ML, although they are not applicable to ML only. Wikström's textbook
on ML [43] describes the core language at length. Harper's shorter report [16]
explains the full language, including modules.

1.2 Outline
There are three parts. In Part I we review Milner's polymorphic type discipline
for a purely applicative language and we formulate and prove a consistency result
using operational semantics.
In Part II we extend the language to have references (pointers) as values. We
present a new polymorphic type discipline for this language and compare it with
one due to Luis Damas [13]. The soundness of the new type inference system is
proved using operational semantics.
Part III is concerned with a polymorphic type discipline for modules. The lan-
guage has structures, signatures and functors as in ML. Signatures will be seen to
correspond to structures as type schemes correspond to types in the applicative
setting. We present a unification algorithm for structures that generalizes the
ordinary first order unification algorithm used for types, and we present a "signature
checker" (the analogue of a type checker) and prove that it finds principal
signatures of all well-formed signature expressions.
signatures of all well-formed signature expressions.
In the conclusion I shall comment on the role of operational semantics partly
based on the proofs I have done, and partly based on the experience we as a
group had of using operational semantics to write the full ML semantics [20].
Part I
An Applicative Language
Chapter 2

Milner's Polymorphic Type Discipline

We define a dynamic and a static semantics for a little functional language and
show that, in a sense to be made precise below, they are consistent.
The type discipline is essentially Milner's discipline for polymorphism in func-
tional languages [27, 14]. However, we formulate and prove soundness of the type
inference system with respect to an operational dynamic semantics instead of a
denotational semantics.
I apologize to readers who already know this type discipline for bothering
them with the differences between types and type schemes, substitution and in-
stantiation, the role of free type variables in the type environment, and so on.
However, I cannot bring myself to copy out the technical definitions without some
examples and comments, mostly because I have a feeling that many have some
experience of polymorphism through the use of ML without being used to think-
ing of the semantics in terms of inference rules. Moreover, a good understanding
of the inference rules is essential for the understanding of the type disciplines in
Part II and III.
The soundness of the type inference system is not hard to establish using
operational semantics. The proof therefore serves as a simple illustration of a
proof method that will be put to more heavy use in the later parts.

We are considering the following little language, Exp. We assume a set, Var,
of program variables

    x ∈ Var = {a, b, ..., x, y, ...}.

The language Exp of expressions, ranged over by e, is defined by the syntax


    e ::= x                      variable
       |  λx.e₁                  lambda abstraction
       |  e₁ e₂                  application
       |  let x = e₁ in e₂       let expression

2.1 Notation
Throughout this thesis we shall give inference rules that allow us to infer sequents
of the form

    A ⊢ phrase → B

where phrase is a syntactic object and A and B are so-called semantic objects.
Semantic objects are always defined as the least solution to a collection of set
equations involving Cartesian product (×), disjoint union (+), and finite subsets
and maps. When A is a set then Fin(A) denotes the set of finite subsets of A. A
finite map from a set A to a set B is a partial map with finite domain. The set
of finite maps from A to B is denoted

    A →fin B.

The domain and range of any function, f, are denoted Dom(f) and Rng(f), and
f ↓ A means the restriction of f to A. When f and g are (perhaps finite) maps
then f ± g, called f modified by g, is the map with domain Dom(f) ∪ Dom(g)
and values

    (f ± g)(a) = if a ∈ Dom(g) then g(a) else f(a).

The symbol ± is a reminder that f ± g may have a larger domain than f (hence
the +) and also that some of the values of f may "disappear" because they are
superseded by the values of g (hence the −).

When Dom(f) ∩ Dom(g) = ∅ we write f | g for f ± g. We say that f | g is
the simultaneous composition of f and g. Note that for every a ∈ Dom(f | g) we
have that either (f | g)a = f(a) or (f | g)a = g(a).

We say that g extends f, written f ⊑ g, if Dom(f) ⊆ Dom(g) and for all x in
the domain of f we have f(x) = g(x).

Any f ∈ A →fin B can be written in the form

    {a₁ ↦ b₁, ..., aₙ ↦ bₙ}.

In particular, the empty map is written {}.
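As a concrete reading of these operations, here is a minimal sketch in Standard ML (my own illustration; the representation of finite maps as association lists is an assumption). Here modify plays the role of ±, with the bindings of g superseding those of f:

    (* Finite maps as association lists. *)
    type (''a, 'b) fmap = (''a * 'b) list

    val empty = [] : (''a, 'b) fmap

    fun lookup ([], a) = NONE
      | lookup ((a', b) :: rest, a) =
          if a = a' then SOME b else lookup (rest, a)

    (* f +- g: domain Dom(f) U Dom(g); g's values win on the overlap. *)
    fun modify (f, g) =
      g @ List.filter (fn (a, _) => not (isSome (lookup (g, a)))) f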



    b ∈ BasVal = {true, false, 1, 2, ...}
    v ∈ Val = BasVal + Clos
    [x, e, E] ∈ Clos = Var × Exp × Env
    E ∈ Env = Var →fin Val
    r ∈ Results = Val + {wrong}

Figure 2.1: Semantic Objects (Dynamic Semantics)

2.2 Dynamic Semantics


The semantic objects for the dynamic semantics of Exp are defined in Figure 2.1.
We assume a set, BasVal, of basic values. The basic values can be thought of
either as syntactic objects (numerals, constants, constructors) or as the mathematical
objects they denote (the integers, the booleans, and so on) but the other
semantic objects should be seen as mathematical objects so that we have the
usual operations (e.g., application) at our disposal. The object wrong is not a
value; wrong is the result of a nonsense evaluation such as an attempt to apply
a non-function to an argument.
The inference rules appear in Figure 2.2. They define a ternary relation
between members of Env, Exp, and Results; the sequent E ⊢ e → r is read:
e evaluates to r in E. Here, and everywhere else, the relation
defined by the inference rules is the smallest relation closed under the rules.
Variable names are used to avoid explicit injections and projections so that for
instance rule 2.7 and rule 2.8 are mutually exclusive (since wrong is kept disjoint
from the values). The or in rule 2.6 is used to collapse what is really two rules
into one.
In general, every inference rule has the form

    P₁ ⋯ Pₙ
    --------  (n ≥ 0)
       C

and allows us from the premises P₁, …, Pₙ of the rule to infer the conclusion,
C. Each premise is either a sequent, A ⊢ phrase → B, or some side condition
expressed using standard mathematical concepts. The conclusion is always a
sequent.
For example, rule 2.6 can be read: "if e₁ evaluates to a basic value or to wrong
in E then the application e₁ e₂ evaluates to wrong in E".
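For readers who prefer programs to inference rules, the rules of Figure 2.2 (shown below) can also be read as a recursive interpreter. The following Standard ML sketch is mine, not the thesis's; it folds wrong into the result type rather than keeping results and values as disjoint sets:

    datatype exp = Var of string | Lam of string * exp
                 | App of exp * exp | Let of string * exp * exp

    datatype result = Wrong
                    | Basic of int
                    | Clos of string * exp * env
    withtype env = (string * result) list

    fun eval (E, Var x) =
          (case List.find (fn (y, _) => y = x) E of
             SOME (_, v) => v
           | NONE => Wrong)                      (* x not in Dom E          *)
      | eval (E, Lam (x, e1)) = Clos (x, e1, E)  (* build a closure         *)
      | eval (E, App (e1, e2)) =
          (case eval (E, e1) of
             Clos (x0, e0, E0) =>
               (case eval (E, e2) of
                  Wrong => Wrong
                | v0 => eval ((x0, v0) :: E0, e0))
           | _ => Wrong)                         (* applying a non-function *)
      | eval (E, Let (x, e1, e2)) =
          (case eval (E, e1) of
             Wrong => Wrong
           | v1 => eval ((x, v1) :: E, e2))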
    x ∈ Dom E
    --------------  (2.1)
    E ⊢ x → E(x)

    x ∉ Dom E
    --------------  (2.2)
    E ⊢ x → wrong

    E ⊢ λx.e₁ → [x, e₁, E]    (2.3)

    E ⊢ e₁ → [x₀, e₀, E₀]    E ⊢ e₂ → v₀    E₀ ± {x₀ ↦ v₀} ⊢ e₀ → r
    ----------------------------------------------------------------  (2.4)
    E ⊢ e₁ e₂ → r

    E ⊢ e₁ → [x₀, e₀, E₀]    E ⊢ e₂ → wrong
    ----------------------------------------  (2.5)
    E ⊢ e₁ e₂ → wrong

    E ⊢ e₁ → b or wrong
    --------------------  (2.6)
    E ⊢ e₁ e₂ → wrong

    E ⊢ e₁ → v₁    E ± {x ↦ v₁} ⊢ e₂ → r
    --------------------------------------  (2.7)
    E ⊢ let x = e₁ in e₂ → r

    E ⊢ e₁ → wrong
    ------------------------------  (2.8)
    E ⊢ let x = e₁ in e₂ → wrong

Figure 2.2: Dynamic Semantics
CHAPTER 2. MILNER'S POLYMORPHIC TYPE DISCIPLINE 13

By using the conclusion of one rule as the premise of another one can build
complex evaluations out of simpler ones. More precisely, an evaluation is regarded
as a special case of a proof tree in formal logic.

2.3 Static Semantics


We start with an infinite set, TyVar, of type variables and a set, TyCon, of nullary
type constructors.

    π ∈ TyCon = {int, bool, ...}
    α ∈ TyVar = {t, t′, t₁, t₂, ...}

Then the set of types, Type, ranged over by τ, and the set of type schemes,
TypeScheme, ranged over by σ, are defined by

    τ ::= π | α | τ₁ → τ₂
    σ ::= τ | ∀α.σ₁

The → is right associative. Note that types contain no quantifiers and that type
schemes contain outermost quantification only. This is necessary to get a type
checking algorithm based on first order term unification. A type environment is
a finite map from program variables to type schemes:

    TE ∈ TyEnv = Var →fin TypeScheme

A type scheme σ = ∀α₁.…∀αₙ.τ is written ∀α₁…αₙ.τ. We say that
α₁, …, αₙ are bound in σ and that a type variable is free in σ if it occurs in τ
and is not bound. Moreover, we say that a type variable is free in TE if it is free
in a type scheme in the range of TE.

The map tyvars : Type → Fin(TyVar) maps every type τ to the set of type
variables that occur in τ. More generally, tyvars(σ) and tyvars(TE) mean the
set of type variables that occur free in σ and TE, respectively. Also, σ and TE
are said to be closed if tyvars σ = ∅ and tyvars TE = ∅, and τ is said to be a
monotype if tyvars(τ) = ∅.
A total substitution is a total map S : TyVar → Type. A finite substitution
is a finite map from type variables to types. Every finite substitution can be
extended to a total substitution by letting it be the identity outside its domain,
and we shall often not bother to distinguish between the two. However, to deal
with renaming of bound variables, I like to think of the region of a substitution,

    Reg(S) = ⋃_{α ∈ Dom(S)} tyvars(S(α))

and this is only relevant when S is a finite substitution.


By natural extension, substitutions can be applied to types. This gives composition
of substitutions with identity ID. As usual, (S₂ ∘ S₁)τ means S₂(S₁ τ),
which we often shall write simply S₂ S₁ τ.
A substitution is ground if every type in its range is a monotype.
Substitution on type schemes and type environments is defined as follows.

Definition 2.1 Let σ₁ = ∀α₁…αₙ.τ₁ and σ₂ = ∀β₁…βₘ.τ₂ be type schemes
and S be a substitution. We write σ₁ −S→ σ₂ if

1. m = n, and {αᵢ ↦ βᵢ | 1 ≤ i ≤ n} is a bijection, no βᵢ is in Reg(S₀), and

2. (S₀ | {αᵢ ↦ βᵢ})τ₁ = τ₂

where S₀ = S ↓ tyvars σ₁. Moreover, we write TE −S→ TE′ if Dom TE =
Dom TE′ and for all x ∈ Dom TE, TE(x) −S→ TE′(x).

We write σ₁ = σ₂ as a shorthand for σ₁ −ID→ σ₂. Note that this is the familiar
notion of α-conversion.

The operation of putting ∀α. in front of a type or a type scheme is called generalization
(on α), or quantification (of α), or simply binding (of α). Conversely,
τ′ is an instance of σ = ∀α₁…αₙ.τ, written

    σ ≥ τ′,

if there exists a finite substitution, S, with domain {α₁, …, αₙ} and S(τ) = τ′.
The operation of substituting types for bound variables is called instantiation.
Instantiation is extended to type schemes as follows: σ₂ is an instance of σ₁,
written σ₁ ≥ σ₂, if for all types τ, if σ₂ ≥ τ then σ₁ ≥ τ. Write σ₂ = ∀β₁…βₘ.τ₂.
One can prove that σ₁ ≥ σ₂ if and only if σ₁ ≥ τ₂ and no βᵢ is free in σ₁. (This,
in turn, is equivalent to demanding that σ₁ ≥ τ₂ and tyvars(σ₁) ⊆ tyvars(σ₂).)
Finally,

    Clos_TE τ

means ∀α₁…αₙ.τ, where {α₁, …, αₙ} = tyvars τ \ tyvars TE.¹
    TE ⊢ e : τ

    x ∈ Dom TE    TE(x) ≥ τ
    ------------------------  (2.9)
    TE ⊢ x : τ

    TE ± {x ↦ τ′} ⊢ e₁ : τ
    ------------------------  (2.10)
    TE ⊢ λx.e₁ : τ′ → τ

    TE ⊢ e₁ : τ′ → τ    TE ⊢ e₂ : τ′
    ---------------------------------  (2.11)
    TE ⊢ e₁ e₂ : τ

    TE ⊢ e₁ : τ₁    TE ± {x ↦ Clos_TE τ₁} ⊢ e₂ : τ
    ------------------------------------------------  (2.12)
    TE ⊢ let x = e₁ in e₂ : τ

Figure 2.3: Static Semantics

This is all we need to give the static semantics, see Figure 2.3.²

¹Throughout, the symbol \ is used for set difference, i.e., A \ B means {x ∈ A | x ∉ B}.

²The reader familiar with the type inference system of [14] will note that our version does
not have an instantiation and a generalization rule. Instead, instantiation is done precisely when
variables are typed and generalization is done explicitly by the closure operation in the let rule.
Also note that the result of a typing is a type rather than a general type scheme. Although
it is not trivial to prove it, the two systems admit exactly the same expressions. Our system
has the advantage that whenever TE ⊢ e : τ, the form of e uniquely determines what rule was
applied.
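To make the closure operation concrete before the worked examples, here is a minimal sketch in Standard ML (my illustration, not the thesis's; the datatypes are assumed representations). close quantifies exactly the type variables of τ that are not free in TE:

    datatype ty = TyCon of string | TyVar of string | Arrow of ty * ty
    datatype scheme = Forall of string list * ty  (* outermost quantifiers *)

    fun tyvars (TyCon _) = []
      | tyvars (TyVar a) = [a]
      | tyvars (Arrow (t1, t2)) = tyvars t1 @ tyvars t2

    (* Clos_TE(tau): bind the type variables of tau not free in TE. *)
    fun close (tyvarsTE, tau) =
      Forall (List.filter (fn a => not (List.exists (fn b => a = b) tyvarsTE))
                          (tyvars tau),
              tau)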
The following examples illustrate the use of the system. Skip them if you
know the type inference system.

Example 2.2 (Shows a simple monomorphic type inference). Consider the expression

    ((λx.λy.y x)z)(λx.x).

Assume TE₀(z) = int, let TE₁ = TE₀ ± {x ↦ int} and TE₂ = TE₁ ± {y ↦ int →
int}. We then have the inference

    TE₂ ⊢ y : int → int    TE₂ ⊢ x : int
    TE₂ ⊢ y x : int
    TE₁ ⊢ λy.y x : (int → int) → int
    TE₀ ⊢ λx.λy.y x : int → ((int → int) → int)

where the form of the expressions always tells us which rule is used. Thus

    TE₀ ⊢ λx.λy.y x : int → (int → int) → int    TE₀ ⊢ z : int
    -----------------------------------------------------------  (2.13)
    TE₀ ⊢ (λx.λy.y x)z : (int → int) → int

We have

    TE₁ ⊢ x : int
    TE₀ ⊢ λx.x : int → int

which with (2.13) gives

    TE₀ ⊢ (λx.λy.y x)z : (int → int) → int    TE₀ ⊢ λx.x : int → int
    ------------------------------------------------------------------
    TE₀ ⊢ ((λx.λy.y x)z)(λx.x) : int

as we suspected. Notice that we had to "guess" the right types for the lambda
bound variables x and y, but there exists an algorithm that can do this job (see
Section 2.4).

Example 2.3 (Illustrates instantiation of type schemes). For the sake of this
and following examples let us extend Type with lists:

    τ ::= ... | τ₁ list

and let us assume that

    TE₀(nil) = ∀t.t list
    TE₀(hd) = ∀t.t list → t
    TE₀(tl) = ∀t.t list → t list
    TE₀(cons) = ∀t.t → t list → t list
    TE₀(rev) = ∀t.t list → t list.

Let TE₁ = TE₀ ± {x ↦ (int list) list} and TE₂ = TE₁ ± {g ↦ int → int → int}
and consider

    e = g(hd(rev(hd x))).

Here hd is a polymorphic function used once to take the head of a list of integer
lists and then once to get the head of an integer list. Thus in the following type
inference, two different instances of the type scheme of hd are used:

    TE₂ ⊢ hd : (int list) list → int list    TE₂ ⊢ x : (int list) list
    -------------------------------------------------------------------  (2.14)
    TE₂ ⊢ hd x : int list

    TE₂ ⊢ rev : int list → int list    TE₂ ⊢ hd x : int list    by (2.14)
    ----------------------------------------------------------------------  (2.15)
    TE₂ ⊢ rev(hd x) : int list

    TE₂ ⊢ hd : int list → int    TE₂ ⊢ rev(hd x) : int list    by (2.15)
    ---------------------------------------------------------------------  (2.16)
    TE₂ ⊢ hd(rev(hd x)) : int

    TE₂ ⊢ g : int → int → int    TE₂ ⊢ hd(rev(hd x)) : int    by (2.16)
    --------------------------------------------------------------------  (2.17)
    TE₂ ⊢ g(hd(rev(hd x))) : int → int

Here it was the instantiations we had to guess; because of the type of g, the
instantiations we chose are the only ones that make the term well-typed.

Example 2.4 (Illustrates free type variables in the type environment). Take
TE₀ as in the previous example, but let TE₁ = TE₀ ± {x ↦ (t′ list) list} and
TE₂ = TE₁ ± {g ↦ t′ → t}. Thus t and t′ are free in TE₂. Now the inference for
the same expression becomes

    TE₂ ⊢ hd : (t′ list) list → t′ list    TE₂ ⊢ x : (t′ list) list
    ----------------------------------------------------------------  (2.18)
    TE₂ ⊢ hd x : t′ list

    TE₂ ⊢ rev : t′ list → t′ list    TE₂ ⊢ hd x : t′ list    by (2.18)
    -------------------------------------------------------------------  (2.19)
    TE₂ ⊢ rev(hd x) : t′ list

    TE₂ ⊢ hd : t′ list → t′    TE₂ ⊢ rev(hd x) : t′ list    by (2.19)
    ------------------------------------------------------------------  (2.20)
    TE₂ ⊢ hd(rev(hd x)) : t′

    TE₂ ⊢ g : t′ → t    TE₂ ⊢ hd(rev(hd x)) : t′    by (2.20)
    -----------------------------------------------------------  (2.21)
    TE₂ ⊢ g(hd(rev(hd x))) : t

Note that if we substitute int for t′ and int → int for t throughout, the proof we
get is precisely the proof from Example 2.3.
So we notice that a type variable free in the type environment behaves like a
type constant different from all other type constants (int, bool, etc).
Note that it is the rule for lambda abstraction, and only this rule, by which
new type variables can become free in the type environment.

Example 2.5 (Illustrates the let rule, in particular the role of type variables free
in the type environment when types are generalized). We continue the previous
example. From (2.21) we get

    TE₁ ⊢ λg.g(hd(rev(hd x))) : (t′ → t) → t.

Note that t′ is free in TE₁ and in the resulting type, whereas t is not free in TE₁.
Now consider the expression (where first has type ∀t₁t₂.t₁ → t₂ → t₁)

    let f = λg.g(hd(rev(hd x)))
    in
      (f first) 7 ...
      (f first) true ...
      (f cons) nil ...

in TE₁. The let rule (rule 2.12) requires that the body be checked in the type
environment

    TE₁′ = TE₁ ± {f ↦ ∀t.(t′ → t) → t}

since Clos_TE₁((t′ → t) → t) = ∀t.(t′ → t) → t. That we must not generalize on
t′ is not too surprising, if we think of t′ as a type constant (c.f. Example 2.4).
In the body of the above expression we use the following instantiations for f:

    ∀t.(t′ → t) → t ≥ (t′ → (int → t′)) → (int → t′)
    ∀t.(t′ → t) → t ≥ (t′ → (bool → t′)) → (bool → t′)
    ∀t.(t′ → t) → t ≥ (t′ → (t′ list → t′ list)) → (t′ list → t′ list)

The following two lemmas are essential for the later proofs:

Lemma 2.6 For any S, if σ₁ ≥ τ′ and σ₁ −S→ σ₂ then σ₂ ≥ S τ′.

Proof. Let σ₁ = ∀α₁…αₙ.τ and let I be the instantiation substitution,

    I = {αᵢ ↦ τᵢ | 1 ≤ i ≤ n}

with I τ = τ′. Now σ₂ is of the form ∀β₁…βₙ.(S₀ | {αᵢ ↦ βᵢ})τ, where {αᵢ ↦ βᵢ}
is a bijection and no βᵢ is in Reg(S₀), where S₀ = S ↓ tyvars σ₁. Let J = {βᵢ ↦
S τᵢ} and let us first show that

    J((S₀ | {αᵢ ↦ βᵢ})τ) = S(I τ).    (2.22)

This we show by proving that for each α occurring in τ,

    J((S₀ | {αᵢ ↦ βᵢ})α) = S(I α).    (2.23)

If α is bound in σ₁, i.e. α = αⱼ say, then

    J((S₀ | {αᵢ ↦ βᵢ})αⱼ) = J βⱼ = S τⱼ = S(I αⱼ).

Otherwise α is free in σ₁, so

    J((S₀ | {αᵢ ↦ βᵢ})α) = J(S₀ α) = J(S α) = S α,

since Reg S₀ ∩ {β₁, …, βₙ} = ∅. But S α = S(I α), since α is not in {α₁, …, αₙ},
showing that (2.23) holds in this case as well. Thus we have (2.22) and since
I τ = τ′, this gives the desired σ₂ ≥ S τ′. ∎

Lemma 2.7 If TE ⊢ e : τ and TE −S→ TE′ then TE′ ⊢ e : S τ.

We have already seen one application of this lemma, namely that the proof
in Example 2.3 could be obtained from the proof in Example 2.4.

Proof. By structural induction on e. The case where e is a variable follows
from Lemma 2.6. Of the remaining cases, only the case for let expressions is
interesting.

e = let x = e₁ in e₂: Here TE ⊢ e : τ must have been inferred by application of

    TE ⊢ e₁ : τ₁    TE ± {x ↦ Clos_TE τ₁} ⊢ e₂ : τ
    ------------------------------------------------  (2.24)
    TE ⊢ let x = e₁ in e₂ : τ

Let {α₁, …, αₙ} be tyvars τ₁ \ tyvars TE. Let {β₁, …, βₙ} be such that {αᵢ ↦
βᵢ} is a bijection and no βᵢ is in Reg(S₀), where S₀ = S ↓ tyvars TE. Let
S′ = S₀ | {αᵢ ↦ βᵢ}. Then

    Clos_TE τ₁ = ∀α₁…αₙ.τ₁ −S→ ∀β₁…βₙ. S′ τ₁    (2.25)

No βᵢ is free in TE′ (since TE −S→ TE′ and no βᵢ is in Reg(S₀)). Moreover, any type
variable free in ∀α₁…αₙ.τ₁ is free in TE, and every type variable that occurs in
S′ τ₁ and is not a βᵢ must be free in TE′ (since TE −S→ TE′). Therefore, by (2.25),

    Clos_TE τ₁ −S→ Clos_TE′ S′ τ₁.    (2.26)

Since TE −S→ TE′ and no αᵢ is free in TE we have TE −S′→ TE′. Thus by induction
on e₁ and the first premise of (2.24) we have

    TE′ ⊢ e₁ : S′ τ₁.    (2.27)

Moreover, we have

    TE ± {x ↦ Clos_TE τ₁} −S→ TE′ ± {x ↦ Clos_TE′ S′ τ₁}

by (2.26). Thus by induction on e₂ and the second premise of (2.24) we get

    TE′ ± {x ↦ Clos_TE′ S′ τ₁} ⊢ e₂ : S τ

which with (2.27) gives the desired TE′ ⊢ e : S τ by the let rule. ∎

2.4 Principal Types


The type inference rules are non-deterministic; for instance the expression λx.x
can get type t → t, int → int, and bool → bool. However, among the types that
can be inferred some are principal in that all the others can be obtained from
them:

Definition 2.8 A type τ is principal for e in TE if TE ⊢ e : τ and moreover,
whenever TE ⊢ e : τ′ then Clos_TE τ ≥ τ′.

It can be proved that if an expression has a type in a type environment then
it has a principal type in that environment. More precisely, Milner devised a type
checker, i.e., an algorithm which given TE and e determines whether there exists
a τ such that TE ⊢ e : τ. We repeat it in Figure 2.4. The algorithm, W, uses
Robinson's unification algorithm [37] to unify types. As in [13] it can be proved
that W is "sound" in the following sense:

Theorem 2.9 (Soundness of W) If (S, τ) = W(TE, e) succeeds and
TE −S→ TE′ then TE′ ⊢ e : τ.

Moreover, W is "complete" in the following sense:

Theorem 2.10 (Completeness of W) If TE −S₁→ TE₁ and TE₁ ⊢ e : τ₁ then

    (S₀, τ₀) = W(TE, e)

succeeds and there exist a substitution S′ and a type environment TE₀ such that
TE −S₀→ TE₀, TE₀ −S′→ TE₁, and S′ Clos_TE₀ τ₀ ≥ τ₁.

Moreover, if W(TE, e) does not succeed then it stops with fail.

The proofs of Theorems 2.9 and 2.10 are by structural induction on e and use
Lemma 2.7.
    W(TE, e) = case e of

      x ⇒ if x ∉ Dom TE then fail
          else let ∀α₁…αₙ.τ = TE(x)
                   β₁, …, βₙ be new
               in (ID, {αᵢ ↦ βᵢ}τ)

      λx.e₁ ⇒ let α be a new type variable
                  (S₁, τ₁) = W(TE ± {x ↦ α}, e₁)
              in (S₁, S₁(α) → τ₁)

      e₁ e₂ ⇒ let (S₁, τ₁) = W(TE, e₁);
                  (S₂, τ₂) = W(S₁(TE), e₂)
                  α be a new type variable
                  S₃ = Unify(S₂(τ₁), τ₂ → α)
              in (S₃ S₂ S₁, S₃(α))

      let x = e₁ in e₂ ⇒
              let (S₁, τ₁) = W(TE, e₁)
                  (S₂, τ₂) = W(S₁ TE ± {x ↦ Clos_{S₁TE} τ₁}, e₂)
              in (S₂ S₁, τ₂)

Figure 2.4: The Type Checker W.
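The Unify used by W is ordinary first order term unification on types. A minimal Standard ML sketch may make Figure 2.4 concrete (my illustration, reusing the ty datatype from the earlier sketch; substitutions as association lists and the function names are assumptions):

    type subst = (string * ty) list        (* finite substitution *)

    fun apply S (t as TyVar a) =
          (case List.find (fn (b, _) => a = b) S of
             SOME (_, t') => t'
           | NONE => t)
      | apply S (Arrow (t1, t2)) = Arrow (apply S t1, apply S t2)
      | apply S t = t

    fun occurs a (TyVar b) = a = b
      | occurs a (Arrow (t1, t2)) = occurs a t1 orelse occurs a t2
      | occurs a _ = false

    exception UnifyFail

    (* unify (t1, t2) yields S with S t1 = S t2, or raises UnifyFail. *)
    fun unify (TyVar a, t) = bind (a, t)
      | unify (t, TyVar a) = bind (a, t)
      | unify (TyCon c1, TyCon c2) =
          if c1 = c2 then [] else raise UnifyFail
      | unify (Arrow (t1, t2), Arrow (t1', t2')) =
          let val S1 = unify (t1, t1')
              val S2 = unify (apply S1 t2, apply S1 t2')
          in map (fn (a, t) => (a, apply S2 t)) S1 @ S2 end
      | unify _ = raise UnifyFail
    and bind (a, t) =
          if t = TyVar a then []
          else if occurs a t then raise UnifyFail  (* occurs check *)
          else [(a, t)]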

2.5 The Consistency Result


We shall now show the consistency of the dynamic and the static semantics. The
result will imply that a well-typed program cannot evaluate to wrong.
To this end we define a relation between the objects of the dynamic and the
static semantics.
Assume a basic relation between basic values and type constants

    RBas ⊆ BasVal × TyCon

relating 5 to int, true to bool, and so on. In order to ensure that the definitions
really define something, we define the relation between values, v, and types, τ, in
stages. We start with monotypes, μ:

Definition 2.11 We say that v has monotype μ, written ⊨ v : μ, if

    either v = b and (b, μ) ∈ RBas,
    or v = [x, e, E] and μ = μ₁ → μ₂, for some μ₁, μ₂,
    and for all v₁,
    if ⊨ v₁ : μ₁ and E ± {x ↦ v₁} ⊢ e → r then r ≠ wrong and ⊨ r : μ₂.
This is well-defined on the structure of types. The definition is extended to
types with free type variables, type schemes, and type environments as follows
(where TE° ranges over closed type environments):

Definition 2.12 We define:

    ⊨ v : τ if for all total, ground S we have ⊨ v : S τ;
    ⊨ v : ∀α₁…αₙ.τ if ⊨ v : τ;
    ⊨ E : TE° if Dom E = Dom TE° and ⊨ E(x) : TE°(x) for all x ∈ Dom E;
    ⊨ E : TE if ⊨ E : TE° whenever S is total and ground and TE −S→ TE°.

We expect the static semantics to be consistent with the dynamic semantics
in the following sense: no matter which type τ we can infer for e using the static
semantics, and no matter which result, r, we get by dynamic evaluation of e, r is
not wrong; in fact r is a value of type τ. If we can prove this then, in particular,
if e can be typed then the dynamic evaluation of e will never lead to an attempt
to apply a non-function to an argument or to looking in vain for a variable in the
environment.
Assuming for the moment that e contains no free variables this can be formulated
very easily as follows using the relation ⊨ defined above:

    if {} ⊢ e : τ and {} ⊢ e → r then r ≠ wrong and ⊨ r : τ.

To prove this, we need to consider open expressions as well. When considering
the more general TE ⊢ e : τ we are only interested in dynamic environments
whose values have the types assumed in TE. Thus we arrive at the main theorem
which states the "soundness" of the polymorphic type discipline in just one line:

Theorem 2.13 (Consistency of Static and Dynamic Semantics)
If ⊨ E : TE and TE ⊢ e : τ and E ⊢ e → r then r ≠ wrong and ⊨ r : τ.

Proof. By structural induction on e.

e = x: Here x ∈ Dom TE and TE(x) ≥ τ. Since ⊨ E : TE we have x ∈ Dom E.
Thus r ≠ wrong; in fact r = E(x). As ⊨ E(x) : TE(x) and TE(x) ≥ τ we have
⊨ E(x) : τ using Definition 2.12.
e = λx.e₁: Here the type inference must have been of the form

    TE ± {x ↦ τ′} ⊢ e₁ : τ
    ------------------------  (2.28)
    TE ⊢ λx.e₁ : τ′ → τ

and the evaluation must have been

    E ⊢ λx.e₁ → [x, e₁, E]

Thus r = [x, e₁, E], which clearly is different from wrong. To prove ⊨ r : τ′ → τ,
let S be any total, ground substitution. There is a TE° such that TE −S→ TE°.
Thus TE ± {x ↦ τ′} −S→ TE° ± {x ↦ S τ′}. This, with the premise of (2.28), gives

    TE° ± {x ↦ S τ′} ⊢ e₁ : S τ    (2.29)

by Lemma 2.7. Now let v′ be such that ⊨ v′ : S τ′ and assume

    E ± {x ↦ v′} ⊢ e₁ → r₁.    (2.30)

Since ⊨ E : TE we have ⊨ E : TE° and therefore

    ⊨ E ± {x ↦ v′} : TE° ± {x ↦ S τ′}    (2.31)

By induction on e₁, using (2.29), (2.30), and (2.31), we have r₁ ≠ wrong and
⊨ r₁ : S τ.
This proves ⊨ [x, e₁, E] : S τ′ → S τ, i.e., ⊨ r : S τ′ → S τ.
Since this holds for any S, we have proved ⊨ r : τ′ → τ as desired.

e = e₁ e₂: Here the type inference must have been of the form

    TE ⊢ e₁ : τ′ → τ    TE ⊢ e₂ : τ′
    ---------------------------------  (2.32)
    TE ⊢ e₁ e₂ : τ

Let S be any total, ground substitution. There exists a TE° such that
TE −S→ TE°. By Lemma 2.7 on the premises of (2.32) we have

    TE° ⊢ e₁ : S τ′ → S τ    (2.33)

    TE° ⊢ e₂ : S τ′.    (2.34)

Since ⊨ E : TE we have ⊨ E : TE°. By assumption, E ⊢ e₁ e₂ → r. Looking at
the evaluation rules we see that there must exist an r₁ such that E ⊢ e₁ → r₁.
Thus, by induction on e₁, using (2.33), we get

    r₁ ≠ wrong and ⊨ r₁ : S τ′ → S τ.

By Definition 2.11, r₁ must be a closure, [x₀, e₀, E₀], say. Thus E ⊢ e₁ e₂ → r
was not by rule (2.6), i.e., it must have been by rule (2.4) or (2.5). Therefore,
there exists an r₂ so that E ⊢ e₂ → r₂.
Using induction on e₂ together with (2.34) we get that

    r₂ ≠ wrong and ⊨ r₂ : S τ′.

Thus r₂ is a value, v₀, say. In particular, it must have been rule (2.4) that
was used. Hence E₀ ± {x₀ ↦ v₀} ⊢ e₀ → r. Since ⊨ [x₀, e₀, E₀] : S τ′ → S τ and
⊨ v₀ : S τ′, we must therefore have that r ≠ wrong and ⊨ r : S τ.
Since this holds for any S, we have proved ⊨ r : τ.

e = let x = e₁ in e₂: The type inference must have been of the form

    TE ⊢ e₁ : τ₁    TE ± {x ↦ Clos_TE τ₁} ⊢ e₂ : τ
    ------------------------------------------------  (2.35)
    TE ⊢ let x = e₁ in e₂ : τ

Let S be any total, ground substitution. Let {α₁, …, αₙ} = tyvars τ₁ \
tyvars TE, let S₁ = S ↓ tyvars TE and let S₀ = S ↓ tyvars Clos_TE τ₁. Let
{β₁, …, βₙ} be such that {αᵢ ↦ βᵢ} is a bijection and no βᵢ is in Reg S₁. Then
no βᵢ is in Reg S₀, so

    Clos_TE τ₁ = ∀α₁…αₙ.τ₁
               −S→ ∀β₁…βₙ.(S₀ | {αᵢ ↦ βᵢ})τ₁
                 = ∀β₁…βₙ.(S₁ | {αᵢ ↦ βᵢ})τ₁
                 = ∀β₁…βₙ. S′ τ₁    (2.36)

where S′ = S₁ | {αᵢ ↦ βᵢ}.

There exists a TE° such that TE −S→ TE°. Surely ∀β₁…βₙ. S′ τ₁ =
Clos_TE° S′ τ₁, so by (2.36) we have

    Clos_TE τ₁ −S→ Clos_TE° S′ τ₁.    (2.37)

Since TE −S→ TE° and no αᵢ is free in TE we have TE −S′→ TE°. This with
Lemma 2.7 on the first premise of (2.35) gives

    TE° ⊢ e₁ : S′ τ₁.    (2.38)

Since TE −S→ TE° and (2.37) we have

    TE ± {x ↦ Clos_TE τ₁} −S→ TE° ± {x ↦ Clos_TE° S′ τ₁}.

This with Lemma 2.7 on the second premise of (2.35) gives

    TE° ± {x ↦ Clos_TE° S′ τ₁} ⊢ e₂ : S τ.    (2.39)

Inspecting the evaluation rules we see that there must exist an r₁ such that
E ⊢ e₁ → r₁. Since ⊨ E : TE we have ⊨ E : TE° so by induction on e₁,
using (2.38), we get

    r₁ ≠ wrong and ⊨ r₁ : S′ τ₁.

Thus r₁ is a value, v₁, say. Then it must have been rule 2.7 that was applied,
so E ± {x ↦ v₁} ⊢ e₂ → r.
Since ⊨ v₁ : S′ τ₁ we have ⊨ v₁ : Clos_TE° S′ τ₁ by Definition 2.12. Thus

    ⊨ E ± {x ↦ v₁} : TE° ± {x ↦ Clos_TE° S′ τ₁}.

Hence by induction on e₂, using (2.39), we get r ≠ wrong and ⊨ r : S τ.
Since we can do this for any S, we have proved r ≠ wrong and ⊨ r : τ. ∎
Part II
An Imperative Language
Chapter 3

Formulation of the Problem


The problem that is the topic of this part is: how can we extend the polymorphic
type discipline to a language with imperative features?
The reason for studying this problem is that it has proved itself to be an
obstacle (if not the obstacle) to the harmonic integration of imperative language
features (e.g., assignment, pointers and arrays) as we know them from languages
like ALGOL and PASCAL and functional language features (e.g., functions as
first class objects, polymorphism) as we know them from ML. Some people take
such obstacles as evidence that imperative and functional languages should be
kept apart.¹ I shall first argue that we should try to take the best of both worlds
and then give a concrete piece of evidence to the feasibility of the task, namely
a new type discipline that integrates polymorphism with imperative features.
The discipline seems to be sufficiently simple that programmers can use it with
confidence - the point being that a type inference system should never be so
advanced that programmers only have a vague feeling for which programs are
well-typed. Moreover, the discipline seems to admit enough programs to be of
practical use.
What is to gain for functional languages? Firstly, there are good algorithms
and techniques that are based on imperative features (sorting, graph algorithms,
dynamic programming, table driven algorithms, etc.) and it is desirable that
these can be used essentially as they are. Secondly, it is sometimes possible to
obtain greater execution speed with the use of imperative features without sacri-
ficing the clarity of the algorithm. One example is the type checker discussed in
the last chapter; there it was presented as a functional program computing sub-
stitutions, but a much more efficient and equally natural type checker is obtained
¹Let alone those who believe that one kind of language is superior to the other in all respects.

by actually doing the substitutions by assignment statements.
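To make the remark about assignment concrete: if unification variables are represented as ML references, binding a variable is an in-place update and no substitutions need to be composed. A minimal Standard ML sketch (my own, not the thesis's; the occurs check is omitted for brevity):

    datatype ty = TyCon of string
                | Arrow of ty * ty
                | TyVar of tv ref
    and tv = Unbound of int            (* stamp giving the variable identity *)
           | Bound of ty

    (* Chase Bound links to reach the representative of a type. *)
    fun shorten (TyVar (ref (Bound t))) = shorten t
      | shorten t = t

    exception Unify

    (* Binding a variable is an assignment; no substitution is built. *)
    fun unify (t1, t2) =
      case (shorten t1, shorten t2) of
        (TyVar r1, TyVar r2) =>
          if r1 = r2 then () else r1 := Bound (TyVar r2)
      | (TyVar r, t) => r := Bound t
      | (t, TyVar r) => r := Bound t
      | (TyCon c1, TyCon c2) => if c1 = c2 then () else raise Unify
      | (Arrow (a1, b1), Arrow (a2, b2)) => (unify (a1, a2); unify (b1, b2))
      | _ => raise Unify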


What is to gain for imperative languages? Primarily a type system with the
advantages of polymorphism. For example, we shall admit

    fun fast_reverse(l) =
        let left = ref l;
            right = ref nil
        in  while !left <> nil do
              begin
                right := hd(!left) :: (!right);
                left := tl(!left)
              end;
            !right
        end

where the evaluation of ref e dynamically creates a new reference to the value
of e and ! stands for dereferencing. Notice that none of the variables have had
to be given explicit types. More importantly, this is an example of a polymorphic
function that uses imperative features; intuitively, the most general type of
fast_reverse is ∀t.t list → t list.
The first ML [15] had typing rules for so-called letref bound variables. (Like
a PASCAL variable, a letref bound variable can be updated with an assignment
operation but, unlike a PASCAL variable, a letref bound variable is bound to a
permanent address in the store). The rules admitted some polymorphic functions
that used local letref bound variables.
Damas [13] went further in allowing references (or pointers, as other peo-
ple call them) as first order values and he gave an impressive extension of the
polymorphic type discipline to cope with this situation.
Yet, many more have thought about references and polymorphism without
publishing anything. Many, including the author, know all too well how easy it
is to guess some plausible typing rules that later turn out to be plain wrong.
Guessing and verifying are inseparable parts of developing a new theory. None
is prior to the other, neither in time nor in importance. I believe the reason why
the guessing has been so hard is precisely that the verifying has been hard. The
soundness of the LCF rules was stated informally, but no proof was published.'
In his thesis Damas did give a soundness proof for his system; it was based on
²There is some uncertainty as to whether a proof was carried out.
denotational semantics and involved a very difficult domain construction. More
seriously, although his soundness theorem is not known to be false, there appears
to be a fatal mistake in the soundness proof.³
to be a fatal mistake in the soundness proof.'
The good news is that we now do have a tractable way of proving soundness
theorems. The basic idea is to prove consistency theorems similar to that of
Chapter 2, using a simple and very general proof technique concerning maximal
fixpoints of monotonic operators. Credit for this should go to Robin Milner, who
suggested this technique at a point in time where I was stuck because I worked
with minimal fixpoints.
Thanks to this technique we can present two results. Firstly, we can actually
pinpoint the problem in so far as we can explain precisely why the naive extension
of the polymorphic type discipline is unsound. Secondly, we can present a new
solution to the problem and prove it correct. The remainder of the present
chapter is devoted to presenting the first result; the type discipline is presented
in Chapter 4 and its soundness proved in Chapter 5. Finally Chapter 6 contains
a comparison with Damas' work.

3.1 A simple language


Let us use exactly the same syntax as before, that is we assume a set Var of
variables, ranged over by x, and form the set Exp of expressions, ranged over by
e, by

    e ::= x                      variable
       |  λx.e₁                  lambda abstraction
       |  e₁ e₂                  application
       |  let x = e₁ in e₂       let expression

We assume values ref, asg, and deref bound to the variables ref, :=, and !,
respectively. We use the infix form e₁ := e₂ to mean (:= e₁) e₂. We introduce a
basic value done which is the value of expressions that are not meant to produce
an ordinary value (an example is e₁ := e₂). The objects in the dynamic semantics
are defined in Figure 3.1 and the rules appear in Figure 3.2. To reduce the
number of rules, and hence the length of our inductive proofs, we have changed
the dynamic semantics from Chapter 2 by removing wrong from the semantics.
³In the proof of Proposition 4, case INST, page 111, the requirements for using the induction
hypothesis are not met; I do not see how to get around this problem.
    b ∈ BasVal = {done, true, false, 1, 2, ...}
    v ∈ Val = BasVal + Clos + {asg, ref, deref} + Addr
    [x, e, E] ∈ Clos = Var × Exp × Env
    s ∈ Store = Addr →fin Val
    E ∈ Env = Var →fin Val
    a ∈ Addr

Figure 3.1: Objects in the Dynamic Semantics

    x ∈ Dom E
    -------------------  (3.1)
    s, E ⊢ x → E(x), s

    s, E ⊢ λx.e₁ → [x, e₁, E], s    (3.2)

    s, E ⊢ e₁ → [x₀, e₀, E₀], s₁   s₁, E ⊢ e₂ → v₂, s₂   s₂, E₀ ± {x₀ ↦ v₂} ⊢ e₀ → v, s′
    -------------------------------------------------------------------------------------  (3.3)
    s, E ⊢ e₁ e₂ → v, s′

    s, E ⊢ e₁ → asg, s₁   s₁, E ⊢ e₂ → a, s₂   s₂, E ⊢ e₃ → v₃, s₃
    ----------------------------------------------------------------  (3.4)
    s, E ⊢ (e₁ e₂) e₃ → done, s₃ ± {a ↦ v₃}

    s, E ⊢ e₁ → ref, s₁   s₁, E ⊢ e₂ → v₂, s₂   a ∉ Dom s₂
    --------------------------------------------------------  (3.5)
    s, E ⊢ e₁ e₂ → a, s₂ ± {a ↦ v₂}

    s, E ⊢ e₁ → deref, s₁   s₁, E ⊢ e₂ → a, s′   s′(a) = v
    --------------------------------------------------------  (3.6)
    s, E ⊢ e₁ e₂ → v, s′

    s, E ⊢ e₁ → v₁, s₁   s₁, E ± {x ↦ v₁} ⊢ e₂ → v, s′
    ----------------------------------------------------  (3.7)
    s, E ⊢ let x = e₁ in e₂ → v, s′

Figure 3.2: Dynamic Semantics


    x ∈ Dom TE    TE(x) ≥ τ
    ------------------------
    TE ⊢ x : τ

    TE ± {x ↦ τ′} ⊢ e₁ : τ
    ------------------------
    TE ⊢ λx.e₁ : τ′ → τ

    TE ⊢ e₁ : τ′ → τ    TE ⊢ e₂ : τ′
    ---------------------------------
    TE ⊢ e₁ e₂ : τ

    TE ⊢ e₁ : τ₁    TE ± {x ↦ Clos_TE τ₁} ⊢ e₂ : τ
    ------------------------------------------------
    TE ⊢ let x = e₁ in e₂ : τ

Figure 3.3: The Applicative Type Inference System (Repeated)

3.2 The Problem is Generalization


The naive generalization of the type discipline from Chapter 2 is to include types
of the form

    τ ::= ... | stm | τ ref

among types and keep the inference system as it is (see Figure 3.3), assuming
that the initial environments are

    TE₀(ref) = ∀t.t → t ref          E₀(ref) = ref
    TE₀(:=) = ∀t.t ref → t → stm     E₀(:=) = asg
    TE₀(!) = ∀t.t ref → t            E₀(!) = deref.

However, the following example shows that with this system one can type
nonsensical programs.

Example 3.1 Consider the following nonsensical program

    let r = ref(λx.x)
    in (r := λx.x+1; (!r)true)

where ; stands for sequential evaluation (the dynamic and static inference rules
for ; will be given later). Also, x+1 is infix notation for (+ x)1 and the type of
+ is int → int → int. This program can be typed; the expression ref(λx.x)
can get type (t → t) ref and the body of the let expression is typable under the
assumption

    {r ↦ ∀t.((t → t) ref)}    (3.8)

using the instantiations

    ∀t.((t → t) ref) ≥ (int → int) ref

and

    ∀t.((t → t) ref) ≥ (bool → bool) ref

for the two occurrences of r.
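For reference, here is the program of Example 3.1 transcribed into present-day Standard ML (my transcription, not the thesis's); a sound type discipline must reject it, and Standard ML indeed does:

    (* r would otherwise be used at both (int -> int) ref and
       (bool -> bool) ref. *)
    let
      val r = ref (fn x => x)
    in
      r := (fn x => x + 1);   (* store a function of type int -> int *)
      (!r) true               (* ... and then apply it to a bool      *)
    end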

Now let us formulate a consistency result and see the point at which the
proof breaks down. Our starting point is the consistency result of Section 2.5,
Theorem 2.13, which read⁴

Theorem 3.2 If ⊨ E : TE and TE ⊢ e : τ and E ⊢ e → v then ⊨ v : τ.

In the presence of a store the addresses of which are values, the typing of
values becomes dependent on the store. (Obviously, the type of address a must
depend on what the store contains at address a). Thus the first step will be to
replace the relations ⊨ v : τ and ⊨ E : TE by s ⊨ v : τ and s ⊨ E : TE. Thus
we arrive at the following conjecture

Conjecture 3.3 If s ⊨ E : TE and TE ⊢ e : τ and s, E ⊢ e → v, s′ then
s′ ⊨ v : τ.
However, it is not the case that the typing of values depends on just the
dynamic store, it also depends on a particular typing of the store. To see this,
consider the following example. Let

    s = {a ↦ nil}
    E = {x ↦ a, y ↦ a}
    TE = {x ↦ (int list) ref, y ↦ (bool list) ref}
    e = (λz.!y)(x := [7])

Notice that x and y are bound to the same address. At first it might look like
we have s ⊨ E : TE; after all, x is bound to a and s(a) has type int list
⁴As we are not concerned with wrong, this is a slight simplification of the original theorem.

and, similarly, y is bound to a and s(a) has type bool list. But if we admitted
s E : TE, not even the above very fundamental conjecture would hold: we have
TE I- e : bool list and s, E I- e -f
[7], s', but certainly not s' [7] : bool list.
The solution to inconsistent assumptions about the types of stored objects is
to introduce a store typing, ST, to be a map from addresses to types, and replace
therelations s= v:7- ands= E:TEbys:ST= v:-rands:ST= E:TE,
respectively. Hence we arrive at the second conjecture.

Conjecture 3.4 Ifs: ST= E:TEandTEHe:-rands,EHe-->v,s'then


there exists a store typing ST' such that s' : ST' V : T.

The idea is that a stored object can have at most one type, namely the one given
by the store typing; formally,

    if s : ST ⊨ a : τ then τ = (ST(a)) ref and s : ST ⊨ s(a) : ST(a)    (3.9)

With the current type inference system, Conjecture 3.4 is in fact false. However,
one can "almost" prove it and the one point where the proof breaks down gives
a hint how to improve the type inference system.
If we accept (3.9) and attempt a proof of Conjecture 3.4 then we see that store
typings must be able to contain free type variables. For instance, with s = ST = {}
and e = ref(λx.x) we have TE₀ ⊢ e : (t → t) ref and {}, E₀ ⊢ e → a, {a ↦
[x, x, {}]}, for some a, so if we are to obtain the conclusion of the conjecture

    {a ↦ [x, x, {}]} : ST′ ⊨ a : (t → t) ref

then ST′ must be {a ↦ (t → t)}, cf. (3.9).
These free type variables in the store typings are extremely important. In
fact, they reveal what goes wrong in unsound inferences, as should soon become
clear. Let us attempt a proof of Conjecture 3.4 by induction on the depth of
inference of s, E ⊢ e → v, s′. (It requires a definition of the relation ⊨, of
course, but we shall soon see how that can be done). It turns out that all the
cases go through, except one, namely the case concerning let expressions, where
the proof breaks down in the most illuminating way. So let us assume that we
have defined ⊨ and dealt successfully with all other cases; we then come to the
dynamic inference rule for let expressions:

    s, E ⊢ e₁ → v₁, s₁    s₁, E ± {x ↦ v₁} ⊢ e₂ → v, s′
    -----------------------------------------------------  (3.10)
    s, E ⊢ let x = e₁ in e₂ → v, s′
The conclusion TE ⊢ e : τ must have been by the rule

    TE ⊢ e₁ : τ₁    TE ± {x ↦ Clos_TE τ₁} ⊢ e₂ : τ
    ------------------------------------------------  (3.11)
    TE ⊢ let x = e₁ in e₂ : τ

(Recall that Clos_TE τ₁ means ∀α₁…αₙ.τ₁, where {α₁, …, αₙ} are the type variables
in τ₁ that are not free in TE.)
We now apply the induction hypothesis to the first premise of (3.10) together
with the first premise of (3.11) and the given s : ST ⊨ E : TE. Thus there exists
an ST₁ such that

    s₁ : ST₁ ⊨ v₁ : τ₁    (3.12)

Before we can apply the induction hypothesis to the second premise of (3.10), we
must establish

    s₁ : ST₁ ⊨ E ± {x ↦ v₁} : TE ± {x ↦ Clos_TE τ₁}

and to get this, we must strengthen (3.12) to

    s₁ : ST₁ ⊨ v₁ : Clos_TE τ₁.    (3.13)

It is precisely this step that goes wrong, if we by taking the closure generalize on
type variables that occur free in ST₁. The snag is that when we have imperative
features, there are really two places a type variable can occur free, namely (1) the
type environment and (2) the store typing. In both cases, generalization on such
a type variable is wrong. The naive extension of the polymorphic type discipline
fails because it ignores the free type variables in the store typing.
As a counter-example to Conjecture 3.4, we can revisit Example 3.1.⁵ Assuming
s = ST = {}, the dynamic evaluation was

    {}, E0 ⊢ ref(λx.x) → a, {a ↦ [x, x, {}]}        (3.14)

and the type inference TE0 ⊢ ref(λx.x) : (t → t) ref. Thus, since
{} : {} ⊨ E0 : TE0, the induction hypothesis yields an ST1 such that

    {a ↦ [x, x, {}]} : ST1 ⊨ a : (t → t) ref        (3.15)

from which it follows that ST1 must be {a ↦ t → t}. The free occurrence of
t in ST1 expresses a dependence of the type of a on the store typing. Therefore,
we cannot strengthen (3.15) to

    {a ↦ [x, x, {}]} : {a ↦ t → t} ⊨ a : ∀t.(t → t) ref.


⁵It is easy to extend the dynamic semantics with wrong and with basic functions to imple-
ment the arithmetic and other basic operations. Applying addition to true results in wrong.

There are various ways in which one can try to strengthen the theorem so
that the induction goes through. One approach is to try to formulate a side
condition on the let rule expressing on which type variables generalization is
admissible. But we should not be surprised that a natural condition is hard to
find - essentially it ought to involve the evolution of the store typings, but store
typings do not occur in the static semantics at all. One way out of this is to give
up having references as values and instead have updatable variables, because the
store typing then essentially becomes a part of the type environment. This was
what was done in the early version of ML used in Edinburgh LCF. Even though
generalization of the types of updatable variables is prevented, one still has to
impose extra restrictions; see [15], page 49, rule (2)(i)(b) for details.
Another approach is to enrich the type inference system with information
about the store typing. To include the store typing itself is pointless since we
are not interested in the domain of the store. All that is of interest is the set
of type variables that occur free in the store typing. One way of enriching the
type inference system with information about this set is to partition the type
variables into two syntactic classes, those that are assumed to occur in the store
typing and those that are guaranteed not to occur in the store typing. Because
type checking is purely structural, the set of type variables that are assumed to
be in the store typing is in general a superset of the set of type variables that will
actually have to be in the store typing (it is undecidable to determine given an
arbitrary expression whether it generates a reference). This idea was first used
by Damas in his thesis, and I also use it in the system I shall now present.
Chapter 4
The Type Discipline
We first present the type inference system. Then we give examples of its use and
present a type checker.

4.1 The Inference system

The basic idea is to modify the language of types so that there is a visible dif-
ference between those types that occur in the implicit store typing and those
that do not. This can be achieved by having two disjoint sets of type variables;
ImpTyVar is the set of imperative type variables and AppTyVar is the set of
applicative type variables:

    t ∈ AppTyVar = {t, t1, …}          applicative type variables
    u ∈ ImpTyVar = {u, u1, …}          imperative type variables
    α ∈ TyVar = AppTyVar ∪ ImpTyVar    type variables

The set Type of types, ranged over by τ, and the set TypeScheme of type schemes,
ranged over by σ, are defined by

    π ∈ TyCon = {stm, int, bool, …}    type constructors
    τ ::= π | α | τ1 → τ2 | τ1 ref     types
    σ ::= τ | ∀α.σ1                    type schemes

and type environments are defined by

    TE ∈ TyEnv = Var →fin TypeScheme.

When T is a type, a type scheme, or a type environment, then tyvars(T) means
all the type variables that occur free in T. A type is imperative if it contains no
applicative type variables:

    θ ∈ ImpType = {τ ∈ Type | apptyvars τ = ∅}.



A substitution is now a map S : TyVar → Type that maps imperative type
variables to imperative types. Hence the image of an imperative type variable
cannot contain applicative type variables, but the image of an applicative type
variable can contain imperative type variables.
The definition of instantiation, σ > τ, is as before but now with the new
meaning of substitution.
An expression is said to be non-expansive if it is a variable or a lambda
abstraction. All other expressions, i.e., applications and let expressions, are said
to be expansive. Although this distinction is purely syntactic, it is supposed
to suggest the dynamic behaviour: the dynamic evaluation of a non-expansive
expression cannot expand the domain of the store, while the evaluation of an
expansive expression might. Our syntactic classification is very crude, as there
are many expansive expressions that in fact will not expand the domain of the
store. The classification is chosen so as to be very easy to remember; the proofs
that follow do not rely heavily on this very crude classification.
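
To make the classification concrete, here is a small OCaml sketch (an illustration
only; the abstract syntax below is invented for the occasion, not taken from the
thesis):

    (* The thesis's syntactic classification: variables and lambda
       abstractions are non-expansive; applications and let expressions
       are expansive. *)
    type exp =
      | Var of string
      | Lam of string * exp          (* λx.e *)
      | App of exp * exp
      | Let of string * exp * exp    (* let x = e1 in e2 *)

    let non_expansive = function
      | Var _ | Lam _ -> true
      | App _ | Let _ -> false

    let expansive e = not (non_expansive e)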
As before, Clos_TE τ means ∀α1…αn.τ where

    {α1, …, αn} = tyvars τ \ tyvars TE.

In addition we now define

    AppClos_TE τ

to mean ∀α1…αn.τ where {α1, …, αn} = apptyvars τ \ apptyvars TE is the set
of all applicative type variables in τ not free in TE.
The type inference rules appear in Figure 4.1 and they allow us to infer
sequents of the form TE ⊢ e : τ. We see that the first three rules are as before
but that the let rule has been split into two rules.
Notice that if TE contains no imperative type variables (free or bound) then
every type inference that could be done in the original system can also be done
in the new system. (Note that in rule 4.5, when τ1 contains no imperative type
variables, taking the applicative closure is the same as taking the ordinary
closure.) But in general TE will contain imperative type variables.

4.2 Examples of Type Inference


Let us try to type a couple of example programs to get a feel for the role of
imperative type variables.

TE ⊢ e : τ

    x ∈ Dom TE        TE(x) > τ
    ───────────────────────────
    TE ⊢ x : τ

    TE ± {x ↦ τ'} ⊢ e1 : τ
    ───────────────────────
    TE ⊢ λx.e1 : τ' → τ

    TE ⊢ e1 : τ' → τ        TE ⊢ e2 : τ'
    ─────────────────────────────────────
    TE ⊢ e1 e2 : τ

    e1 is non-expansive    TE ⊢ e1 : τ1    TE ± {x ↦ Clos_TE τ1} ⊢ e2 : τ
    ────────────────────────────────────────────────────────────────── (4.4)
    TE ⊢ let x = e1 in e2 : τ

    e1 is expansive    TE ⊢ e1 : τ1    TE ± {x ↦ AppClos_TE τ1} ⊢ e2 : τ
    ────────────────────────────────────────────────────────────────── (4.5)
    TE ⊢ let x = e1 in e2 : τ

    Figure 4.1: The Type Inference Rules for Imperative Types



Throughout, we shall require

    TE(ref) = ∀u.u → u ref
    TE(:=)  = ∀t.t ref → t → stm
    TE(!)   = ∀t.t ref → t.

(Recall that u is imperative, while t is applicative; the reason that only the type
of ref contains an imperative type variable should gradually become clear.)
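
As an aside (not part of the thesis), readers who want to experiment can use
the corresponding primitives of a modern ML such as OCaml, where the
imperative/applicative split has since been replaced by the value restriction,
and stm corresponds to unit:

    (* OCaml analogues of the three primitives:
         ref    : 'a -> 'a ref
         ( := ) : 'a ref -> 'a -> unit
         ( ! )  : 'a ref -> 'a                *)
    let _ = let r = ref 0 in r := 1; !r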
To give examples that have some bearing on real programming, let us extend
the types with lists

    τ ::= … | τ1 list

and with constructors

    nil : ∀t.t list
    cons : ∀t.t → t list → t list

and functions

    hd : ∀t.t list → t
    tl : ∀t.t list → t list

We shall write e1 :: e2 for (cons e1) e2.
Moreover, we add sequential evaluation and while loops to the language; see
Figures 4.2 and 4.3.

Example 4.1 As long as an expression does not have free variables whose type
contains imperative type variables, the type of the expression need not involve
imperative type variables. As an example we have

    TE ⊢ λx.!(!x) : t1 ref ref → t1.

Notice that t1 is applicative and that the typing of !(!x) involves two different
instantiations of the type scheme for !. Thus, by use of the let rule for non-
expansive expressions, rule 4.4, we get TE ⊢ e2 : stm, where

    e2 = let double_deref = λx.!(!x)
         in while double_deref(ref(ref false)) do
              double_deref(ref(ref 5))
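
The same program can be run in OCaml (an aside; OCaml's value restriction
plays the role of the expansive/non-expansive distinction): double_deref is a
lambda abstraction, so it generalizes to 'a ref ref -> 'a and is used below at
both bool and int.

    let double_deref = fun x -> !(!x)

    let () =
      while double_deref (ref (ref false)) do
        ignore (double_deref (ref (ref 5)))
      done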

Syntax

    e ::= e1 ; e2             sequential evaluation
        | while e1 do e2      while loop
        | begin e1 end        parentheses
        | ( e1 )              parentheses

Dynamic Semantics

    s, E ⊢ e1 → done, s1        s1, E ⊢ e2 → v, s'
    ─────────────────────────────────────────────
    s, E ⊢ e1 ; e2 → v, s'

    s, E ⊢ e1 → false, s'
    ─────────────────────────────────
    s, E ⊢ while e1 do e2 → done, s'

    s, E ⊢ e1 → true, s1
    s1, E ⊢ e2 → v2, s2
    s2, E ⊢ while e1 do e2 → done, s'
    ─────────────────────────────────
    s, E ⊢ while e1 do e2 → done, s'

    s, E ⊢ e1 → v, s'
    ───────────────────────────
    s, E ⊢ begin e1 end → v, s'

    s, E ⊢ e1 → v, s'
    ─────────────────────
    s, E ⊢ (e1) → v, s'

    Figure 4.2: Extension of the Language (Dynamic Semantics)



Static Semantics

    TE ⊢ e1 : stm        TE ⊢ e2 : τ
    ────────────────────────────────
    TE ⊢ e1 ; e2 : τ

    TE ⊢ e1 : bool        TE ⊢ e2 : τ
    ─────────────────────────────────
    TE ⊢ while e1 do e2 : stm

    TE ⊢ e1 : τ
    ─────────────────────────
    TE ⊢ begin e1 end : τ

    TE ⊢ e1 : τ
    ──────────────────
    TE ⊢ (e1) : τ

    Figure 4.3: Extension of the Language (Static Semantics)

Example 4.2 On the other hand, if the expression has a free variable whose type
(scheme) contains an imperative type variable, in particular if ref occurs free,
then the type of the expression may have to involve imperative type variables.
Hence TE ⊢ e1 : u → u where

    e1 = λx.!(ref x)

but not TE ⊢ e1 : t → t. Still, we can type

    let f = e1 in (f(7); f(true))

using the let rule for non-expansive expressions, which allows a generalization
from u → u to ∀u.u → u for the type of f.
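
In OCaml terms (an aside): e1 is a syntactic value, so f generalizes even though
each call of f allocates a reference.

    let f = fun x -> !(ref x)      (* f : 'a -> 'a *)
    let _ = (f 7, f true)          (* both uses type-check *)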

Example 4.3 We have TE ⊢ e1 : (u → u) ref, where

    e1 = ref(λx.x)

but not TE ⊢ e1 : (t → t) ref. Consequently, in an expression of the form

    let r = ref(λx.x) in e2

the let rule for expansive expressions, rule 4.5, will prohibit generalization from
(u --> u) ref to Vu.((u --+ u)) ref). Thus the faulty expression

let r= ref(Ax.x) in (r:= Ax.x+1; (!r)true)


cannot be typed. Note that

let r= ref (Ax.x) in (r: = Ax.x+1; (I r) 1)


will be typable using TE H ref (Ax.x) (int
: ---> int) ref and rule 4.5.
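
Again as an aside, OCaml rejects the faulty expression for the analogous reason:
ref (fun x -> x) is an application, so r gets a weak (non-generalized) type that
the first assignment fixes to int.

    let () =
      let r = ref (fun x -> x) in   (* ('_weak -> '_weak) ref *)
      r := (fun x -> x + 1);        (* fixes the type to int -> int *)
      ignore ((!r) 1)               (* fine; (!r) true would be rejected *)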

Example 4.4 Here is a function that reverses lists in one single scan using iter-
ation and a minimum number of applications of the :: constructor:

    e1 = λl. let data = ref l in
             let result = ref nil in
             begin
               while !data <> nil do
                 begin result := hd(!data) :: !result;
                       data := tl(!data)
                 end;
               !result
             end

We have TE ⊢ e1 : u list → u list, where the body of the second let expression will
be typed under the assumptions

    TE(data) = u list ref
    TE(result) = u list ref

Note that u remains free, as u occurs free in the type environment after "λl.",
namely in the type of l. Now

    let fast_reverse = e1
    in begin fast_reverse [1, 9, 7, 5];
             fast_reverse [true, false, false]
       end

is typable using rule 4.4, which allows the generalization from u list → u list to
∀u.u list → u list.
As one would expect, since fast_reverse has type ∀u.u list → u list while
the applicative reverse function has type ∀t.t list → t list, there are programs
that are typable with the applicative version only. One example is

    let reverse = …
    in let f = hd(reverse [λx.x])
       in begin f(7); f(true) end

which can be typed under the assumption that reverse has type ∀t.t list → t list
but not under the assumption that reverse has the type ∀u.u list → u list.
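
An OCaml rendering of fast_reverse (an aside): the whole definition is a
lambda abstraction, so it generalizes and can be used at several element types.

    let fast_reverse l =
      let data = ref l in
      let result = ref [] in
      while !data <> [] do
        result := List.hd !data :: !result;
        data := List.tl !data
      done;
      !result

    let _ = (fast_reverse [1; 9; 7; 5], fast_reverse [true; false; false])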

Example 4.5 This example illustrates what I believe to be the only interesting
limitation of the inference system. The fast_reverse function is a special case
of folding a function f (e.g. cons) over a list l, starting with initial result i (e.g.
nil).

    e1 = λf.λi.λl.
         let data = ref l in
         let result = ref i in
         begin
           while !data <> nil do
             begin result := f(hd(!data))(!result);
                   data := tl(!data)
             end;
           !result
         end

We have TE ⊢ e1 : (u1 → u2 → u2) → u2 → u1 list → u2 and we can type

    let fold = e1 in
    begin fold cons nil [5,7,9];
          fold cons nil [true, true, false]
    end

because the let rule for non-expansive let expressions allows us to generalize on
u1 and u2 in the type of fold.
However, we will not be able to type the very similar

    let fold = e1 in
    let fast_reverse = fold cons nil in
    begin
      fast_reverse [3,5,7];
      fast_reverse [true, true, false]
    end

because fold cons nil will be deemed expansive so that fast_reverse cannot
get the polymorphic type Vu.u last -f u last .

This illustrates that the syntactic classification into expansive and non-expan-
sive expressions is quite crude. Fortunately, as we shall see when we study the
soundness proof, soundness is not destroyed by taking a more sophisticated clas-
sification. Moreover, even with the simple classification we get a typable program
by changing the definition of fast_reverse to

let fapt_reverse= .l.fold cons nil 1

so the limitation isn't really too bad.
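
OCaml exhibits exactly the same limitation and the same fix (an aside; the
helper below is invented):

    let fold f i l =
      let data = ref l in
      let result = ref i in
      while !data <> [] do
        result := f (List.hd !data) !result;
        data := List.tl !data
      done;
      !result

    (* let bad = fold (fun x r -> x :: r) []
       would get a weak, monomorphic type: it is a partial application. *)
    let fast_reverse l = fold (fun x r -> x :: r) [] l   (* eta-expanded *)
    let _ = (fast_reverse [3; 5; 7], fast_reverse [true; true; false])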

Example 4.6 The final example is an expression which, if evaluated, would give
a run-time error, and which therefore must be rejected by the typing rules. If
you have some candidate for a clever type inference system, it might be worth
making sure that the expression e below really is untypable in your system. The
example illustrates the use of "own" variables, which form a dangerous threat to
soundness!
When applied to an argument x, the function mk_sham_id produces a function,
sham_id, which has an own variable with initial contents x. Each time sham_id
is applied to an argument it returns the argument with which it was last called
and stores the new argument.

    e = let mk_sham_id = λx.
              let own = ref x
              in λy.(let temp = !own in (own := y; temp))
        in let sham_id = mk_sham_id nil in
           begin sham_id [true];
                 hd(sham_id [1]) + 1
           end

If we take the naive extension of Milner's polymorphic type discipline, see
Chapter 3, then e becomes typable; first mk_sham_id gets type ∀t.t → t → t;
then sham_id gets the type ∀t.t list → t list (hence its name) and that does
it. The LCF rules were amended with a special syntactic constraint to exclude
generalization of the type of expressions that use own variables.
With the imperative type variables, mk_sham_id gets type ∀u.u → u → u and
sham_id gets type u list → u list, but then we are barred from generalizing on u

since mk_sham_id nil quite rightly is considered expansive. Therefore, at most


one of the applications of sham-id can be typed. 0
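
The OCaml transcription (an aside) is rejected the same way: mk_sham_id nil
is an application, so sham_id gets a weak type and its two uses cannot both be
typed.

    let mk_sham_id x =
      let own = ref x in
      fun y -> let temp = !own in own := y; temp

    let sham_id = mk_sham_id []     (* weak: '_weak list -> '_weak list *)
    let _ = sham_id [1]             (* fixes the type to int list *)
    (* let _ = sham_id [true]          now a type error, as required *)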

Let us sum up the intuitions about inferences of the form

    TE ⊢ e : τ.

Any imperative type variable free in TE stands for some fixed, but unknown,
type that occurs in the typing of the store prior to the dynamic evaluation of e.
Any imperative type variable bound in TE, in the type scheme ascribed to x,
say, stands for a type that might have to be added to the present store typing to
type new references that are created as a result of applying x.
Moreover, an imperative type variable that is free in τ but not free in TE
ranges over types that may have to be added to the initial store typing to obtain
the resulting store typing. The "may have to be" is because, in general, more
type variables than strictly necessary may be imperative. For example we have
⊢ λx.x : t → t and also ⊢ λx.x : u → u.
Finally, let us discuss the let rule. The idea in the first of the two rules for
let x = e1 in e2 is that if TE ⊢ e1 : τ1 and e1 is non-expansive then the initial
and the resulting store typings are identical, and so none of the imperative type
variables free in τ1 but not free in TE will actually be added to the initial store
typing. Therefore, these imperative type variables may be quantified.
On the other hand, if e1 is expansive then we have to assume that the new
imperative type variables in τ1 will have to be added to the initial store typing,
and so the second let rule does not allow generalization on these.
In both cases, if τ1 contains an applicative type variable t that is not free
in TE then t will not be added to the initial store typing, simply because store
typings by definition contain no applicative type variables, and so generalization
on t is safe.

4.3 A Type Checker

Figure 4.4 contains a type checker for the new type discipline. I call it W1
because it is similar to Milner's type checker W (Section 2.4). W1 uses a modified

W1(TE, e) = case e of
  x ⇒ if x ∉ Dom TE then fail
      else let ∀α1…αn.τ = TE(x);
               β1, …, βn be new such that βi is applicative iff αi is applicative
           in (ID, {αi ↦ βi}τ)
  λx.e1 ⇒ let t be a new applicative type variable;
              (S1, τ1) = W1(TE ± {x ↦ t}, e1)
          in (S1, S1(t) → τ1)
  e1 e2 ⇒ let (S1, τ1) = W1(TE, e1);
              (S2, τ2) = W1(S1(TE), e2);
              t be a new applicative type variable;
              S3 = Unify1(S2(τ1), τ2 → t)
          in (S3 S2 S1, S3(t))
  let x = e1 in e2 ⇒
          let (S1, τ1) = W1(TE, e1);
              σ = if e1 is non-expansive then Clos_{S1 TE} τ1
                  else AppClos_{S1 TE} τ1;
              (S2, τ2) = W1(S1 TE ± {x ↦ σ}, e2)
          in (S2 S1, τ2)

    Figure 4.4: A Type Checker for the New Type Discipline

unification algorithm, Unify1, which is like ordinary unification, except that

    Unify1(α, τ) = {α ↦ S(τ)} ∪ S,    if α is imperative,

provided α does not occur in τ, where {α1, …, αn} is apptyvars τ,
{u1, …, un} are new imperative type variables, and S is {α1 ↦ u1, …, αn ↦ un}.
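
The following is a rough OCaml sketch of such a unifier (my reconstruction for
illustration; the representation and all helper names are invented, not the
thesis's). Binding an imperative variable to a type first renames the applicative
variables of that type to fresh imperative ones:

    exception Unify_fail

    type tyvar = { name : string; imperative : bool }

    type ty =
      | TVar of tyvar
      | TCon of string                 (* int, bool, stm *)
      | Arrow of ty * ty
      | Ref of ty

    module M = Map.Make (String)
    type subst = ty M.t                (* variable name -> type *)

    let rec apply (s : subst) (t : ty) : ty =
      match t with
      | TVar v ->
          (match M.find_opt v.name s with
           | Some t' -> apply s t'     (* chase chains of bindings *)
           | None -> t)
      | TCon _ -> t
      | Arrow (a, b) -> Arrow (apply s a, apply s b)
      | Ref a -> Ref (apply s a)

    let rec occurs v = function
      | TVar w -> w.name = v.name
      | TCon _ -> false
      | Arrow (a, b) -> occurs v a || occurs v b
      | Ref a -> occurs v a

    let rec app_vars = function        (* applicative variables of a type *)
      | TVar v -> if v.imperative then [] else [v.name]
      | TCon _ -> []
      | Arrow (a, b) -> app_vars a @ app_vars b
      | Ref a -> app_vars a

    let counter = ref 0
    let fresh_imp () =
      incr counter;
      { name = "_u" ^ string_of_int !counter; imperative = true }

    let bind (s : subst) (v : tyvar) (t : ty) : subst =
      if occurs v t then raise Unify_fail
      else if v.imperative then
        (* the Unify1 twist: promote the applicative variables of t *)
        let prom =
          List.fold_left (fun m a -> M.add a (TVar (fresh_imp ())) m)
            M.empty (app_vars t)
        in
        M.add v.name (apply prom t) (M.union (fun _ x _ -> Some x) prom s)
      else M.add v.name t s

    let rec unify (s : subst) (t1 : ty) (t2 : ty) : subst =
      match apply s t1, apply s t2 with
      | TVar v, TVar w when v.name = w.name -> s
      | TVar v, t | t, TVar v -> bind s v t
      | TCon c1, TCon c2 when c1 = c2 -> s
      | Arrow (a1, b1), Arrow (a2, b2) -> unify (unify s a1 a2) b1 b2
      | Ref a, Ref b -> unify s a b
      | _ -> raise Unify_fail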

That W1 is sound - in the sense that (S, τ) = W1(TE, e) implies that for all TE'
with TE --S→ TE' we have TE' ⊢ e : τ - is easy to prove by structural induction on
e, given a lemma that type inference is preserved under substitutions (this lemma
will be proved later; see Lemma 5.2).
To prove that W1 is complete in the sense that it finds principal types for all
typable expressions requires more work. I have not done it, simply because this
is where I had to stop, but I feel confident of the completeness of W1 because,
intuitively, W1 never has to make arbitrary choices. I believe that the proof will
be similar in spirit and simpler in detail than the proof of the completeness of
the signature checker in Part III.
Chapter 5

Proof of Soundness
We shall now prove the soundness of the type inference system. Substitutions
are at the core of all we do, so we start by proving lemmas about substitutions
and type inference. Then we shall define the quaternary relation s : ST ⊨ v : τ
(discussed in Section 3.2) as the maximal fixpoint of a monotonic operator.
Finally we give the inductive consistency proof itself.

5.1 Lemmas about Substitutions

As defined earlier, a substitution is a map S : TyVar → Type such that S(u) ∈
ImpType for all u ∈ ImpTyVar. Substitutions are extended to types and they can
be composed. The restriction and region of a substitution are defined as before;
see Section 2.3. Similarly, we can define substitutions on type schemes and type
environments by ternary relations σ1 --S→ σ2 and TE --S→ TE':

Definition 5.1 Let σ1 = ∀α1…αn.τ1 and σ2 = ∀β1…βm.τ2 be type schemes
and S be a substitution. We write σ1 --S→ σ2 if

1. m = n, and {αi ↦ βi | 1 ≤ i ≤ n} is a bijection, and αi is imperative iff βi
   is imperative, and no βi is in Reg(S0), and

2. (S0 ± {αi ↦ βi})τ1 = τ2

where S0 = S ↓ tyvars σ1. Moreover, we write TE --S→ TE' if Dom TE =
Dom TE' and for all x ∈ Dom TE, TE(x) --S→ TE'(x).

The definition of instantiation (σ > τ) is as before but now with the new
meaning of type variables and substitutions. One can prove that if σ > τ and
σ --S→ σ' then σ' > S τ. The proof is almost identical to that of Lemma 2.6.

The following lemma will be used again and again in what follows.

Lemma 5.2 If TE ⊢ e : τ and TE --S→ TE' then TE' ⊢ e : S τ.

Proof. By structural induction on e. The only interesting case is the one
for e = let x = e1 in e2, which in turn is proved by case analysis. (The two
cases are similar, but there are subtle differences, and since this lemma is terribly
important, we had better be careful here.)

e1 is non-expansive  Here TE ⊢ e : τ was inferred by

    e1 is non-expansive    TE ⊢ e1 : τ1    TE ± {x ↦ Clos_TE τ1} ⊢ e2 : τ
    ────────────────────────────────────────────────────────────────── (5.1)
    TE ⊢ let x = e1 in e2 : τ

Let σ1 = Clos_TE τ1 = ∀α1…αn.τ1, let S1 = S ↓ tyvars TE and let S0 = S ↓
tyvars σ1. Let β1, …, βn be distinct type variables where βi is imperative iff αi is
imperative and no βi is in Reg S1. Then no βi is in Reg S0, so by Definition 5.1
we have

    ∀α1…αn.τ1 --S→ ∀β1…βn.(S0 ± {αi ↦ βi})τ1
                 = ∀β1…βn.(S1 ± {αi ↦ βi})τ1.

Since TE --S→ TE', no βi is free in TE'. Moreover, any type variable that is not
a βi but occurs in (S1 ± {αi ↦ βi})τ1 must be free in TE'. (The reason for this is
that every type variable free in σ1 is in TE and TE --S→ TE'.) Therefore,

    ∀β1…βn.(S1 ± {αi ↦ βi})τ1 = Clos_TE'((S1 ± {αi ↦ βi})τ1).

Let S' = S1 ± {αi ↦ βi}. Then we have

    Clos_TE τ1 --S→ Clos_TE'(S'τ1).        (5.2)

Since TE --S→ TE' and no αi is free in TE we have TE --S'→ TE'. Thus by induction
on e1, using the second premise of (5.1),

    TE' ⊢ e1 : S'τ1.        (5.3)

By (5.2) we have

    TE ± {x ↦ Clos_TE τ1} --S→ TE' ± {x ↦ Clos_TE'(S'τ1)}.

Therefore, by induction on e2, using the third premise of (5.1),

    TE' ± {x ↦ Clos_TE'(S'τ1)} ⊢ e2 : S τ.        (5.4)

Thus by rule (4.4) on (5.3) and (5.4) we have TE' ⊢ e : S τ, as desired.


e1 is expansive  Here TE ⊢ e : τ was inferred by

    e1 is expansive    TE ⊢ e1 : τ1    TE ± {x ↦ AppClos_TE τ1} ⊢ e2 : τ
    ────────────────────────────────────────────────────────────────── (5.5)
    TE ⊢ let x = e1 in e2 : τ

Let σ1 = AppClos_TE τ1 = ∀α1…αn.τ1, let S1 = S ↓ (tyvars TE ∪ imptyvars τ1)
and let S0 = S ↓ tyvars σ1. Every αi is applicative. Let β1, …, βn be distinct
applicative type variables none of which is in Reg S1. Then no βi is in Reg S0,
so by Definition 5.1 we have

    ∀α1…αn.τ1 --S→ ∀β1…βn.(S0 ± {αi ↦ βi})τ1
                 = ∀β1…βn.(S1 ± {αi ↦ βi})τ1.

Since TE --S→ TE', no βi is free in TE'. Also, any applicative type variable that
is not a βi but occurs in (S1 ± {αi ↦ βi})τ1 must be free in TE'. The reasons for
this are

1. Any applicative α in τ1 which is not an αi is free in TE, and TE --S→ TE'.

2. Any imperative α in τ1 is mapped by S to an imperative type, i.e., a type
   with no applicative type variables.

Thus

    ∀β1…βn.(S1 ± {αi ↦ βi})τ1 = AppClos_TE'((S1 ± {αi ↦ βi})τ1).

Let S' = S1 ± {αi ↦ βi}. Then we have

    AppClos_TE τ1 --S→ AppClos_TE'(S'τ1)        (5.6)

Since TE --S→ TE' and no αi is free in TE we have TE --S'→ TE'.
Thus by induction on e1, using the second premise of (5.5),

    TE' ⊢ e1 : S'τ1.        (5.7)

We have

    TE ± {x ↦ AppClos_TE τ1} --S→ TE' ± {x ↦ AppClos_TE'(S'τ1)}

by (5.6), so by induction on e2, using the third premise of (5.5),

    TE' ± {x ↦ AppClos_TE'(S'τ1)} ⊢ e2 : S τ

which with (5.7) gives the desired TE' ⊢ e : S τ by rule 4.5. □


The following lemma will be used in Section 5.2 to prove Lemma 5.11.

Lemma 5.3 Assume σ --S→ σ' and σ' > τ1' and A is a set of type variables with
tyvars σ ⊆ A. Then there exists a type τ1 and a substitution S1 such that σ > τ1,
S1 τ1 = τ1', and S ↓ A = S1 ↓ A.

Proof. The following diagram may be helpful (the vertical arrows denote
instantiation):

    σ  ──S──→  σ'
    ↓           ↓
    τ1 ──S1──→ τ1'

Write σ as ∀α1…αn.τ and σ' as ∀β1…βm.τ'. Since σ --S→ σ' we have m = n,
αi is imperative if and only if βi is imperative, {αi ↦ βi} is a bijection, no βi is
in Reg S0 and (S0 ± {αi ↦ βi})τ = τ', where S0 = S ↓ tyvars σ.
Since σ' > τ1' there exists an

    I' = {βi ↦ τ(i) | 1 ≤ i ≤ n}

such that I'τ' = τ1'. Let {γ1, …, γk} be the set of type variables occurring in the
range of I' and let {δ1, …, δk} be a set of type variables such that the renaming
substitution R = {γj ↦ δj} is a bijection, γj is imperative iff δj is imperative,
and Rng R ∩ (A ∪ Reg(S ↓ A)) = ∅.
Let I be {αi ↦ R τ(i) | 1 ≤ i ≤ n}. Then I maps imperative type variables
to imperative types. Let τ1 = I τ. Then σ > τ1, as desired.
Let R⁻¹ = {δj ↦ γj | 1 ≤ j ≤ k} and let S1 = R⁻¹ ∘ (S ↓ A). Then S1 is a
substitution and, as required, we have S1 ↓ A = S ↓ A, since {δ1, …, δk} ∩
Reg(S ↓ A) = ∅.
Now

    S1 τ1 = S1 I τ                   by the definition of τ1
          = R⁻¹(S ↓ A) I τ           by the definition of S1
          = I'(S0 ± {αi ↦ βi}) τ     see below
          = I' τ'                    as σ --S→ σ'
          = τ1'                      by the definition of I'

as required. To see the above equality

    R⁻¹(S ↓ A) I τ = I'(S0 ± {αi ↦ βi})τ        (5.8)

take any α occurring in τ. There are two cases:

α is free in σ  Here

    R⁻¹(S ↓ A) I α = R⁻¹(S ↓ A)α            as α is free
                   = R⁻¹ S0 α               as tyvars σ ⊆ A
                   = S0 α                   (*)
                   = I' S0 α                as no βi is in Reg S0
                   = I'(S0 ± {αi ↦ βi})α    as α is free

where (*) is because {δ1, …, δk} ∩ Reg(S ↓ A) = ∅. This shows (5.8) when α is
free in σ.

α is bound in σ  Here α = αi, say, and

    R⁻¹(S ↓ A) I αi = R⁻¹(S ↓ A) R τ(i) = τ(i)

as tyvars τ(i) ⊆ Dom R and Rng R ∩ A = ∅. But

    τ(i) = I' βi                by the definition of I'
         = I'(S0 ± {αi ↦ βi})αi

showing (5.8) when α is bound in σ. □

5.2 Typing of Values using Maximal Fixpoints

Recall from the discussion in Section 3.2 that the relation between dynamic
values v and types τ depends not just on the store, but also on a store typing.
We introduced the notion of imperative type to be able to recognize types that
are types of objects in the store. Hence a store typing is a map from addresses to
imperative types; a typed store is a store and a store typing with equal domains:

    ST ∈ StoreTyping = Addr →fin ImpType

    s : ST ∈ TypedStore = {(s, ST) ∈ Store × StoreTyping | Dom s = Dom ST}

We shall now define three relations s : ST ⊨ v : τ, s : ST ⊨ v : σ, and
s : ST ⊨ E : TE that provide the link we need between the static and the
dynamic semantics.
We start out from a relation RBas ⊆ BasVal × TyCon giving the typing of the
basic values. Thus (3, int) ∈ RBas, (true, bool) ∈ RBas, etc. We shall assume that
(done, stm) ∈ RBas and that done is the only value of type stm. We are then
looking for relations satisfying the following:

Property 5.4 (of ⊨) We have

s : ST ⊨ v : τ ⟺
    if v = b then (v, τ) ∈ RBas;
    if v = [x, e1, E] then there exists a TE such that
        s : ST ⊨ E : TE and TE ⊢ λx.e1 : τ;
    if v = asg then τ = τ1 ref → τ1 → stm for some τ1;
    if v = ref then τ = θ → θ ref for some imperative θ;
    if v = deref then τ = τ1 ref → τ1 for some τ1;
    if v = a then τ = (ST(a)) ref and s : ST ⊨ s(a) : ST(a)

s : ST ⊨ v : σ ⟺ ∀τ < σ, s : ST ⊨ v : τ

s : ST ⊨ E : TE ⟺
    Dom E = Dom TE and ∀x ∈ Dom E, s : ST ⊨ E(x) : TE(x).

The above property does not define a unique relation ⊨. However, it can be
regarded as a fixpoint equation

    ⊨ = F(⊨)

where F is a certain operator. More precisely, let U = U1 × U2 × U3 where

    U1 = TypedStore × Val × Type
    U2 = TypedStore × Val × TypeScheme
    U3 = TypedStore × Env × TyEnv.

Whenever A is a set, we write P(A) for the set of subsets of A. Then F is a map
F : P(U) → P(U). For every Q ⊆ U, let Qi = πi(Q) be the ith projection of Q,
i = 1, 2, 3, and let F(Q) = F1(Q) × F2(Q) × F3(Q), where

    F1(Q) = {(s : ST, v, τ) |
        if v = b then (v, τ) ∈ RBas;
        if v = [x, e1, E] then there exists a TE such that
            (s : ST, E, TE) ∈ Q3 and TE ⊢ λx.e1 : τ;
        if v = asg then τ = τ1 ref → τ1 → stm for some τ1;
        if v = ref then τ = θ → θ ref for some imperative θ;
        if v = deref then τ = τ1 ref → τ1 for some τ1;
        if v = a then τ = (ST(a)) ref and (s : ST, s(a), ST(a)) ∈ Q1}

    F2(Q) = {(s : ST, v, σ) | ∀τ < σ, (s : ST, v, τ) ∈ Q1}

    F3(Q) = {(s : ST, E, TE) |
        Dom E = Dom TE and ∀x ∈ Dom E, (s : ST, E(x), TE(x)) ∈ Q2}.

It is crucial that F is monotonic (i.e., that Q ⊆ Q' implies F(Q) ⊆ F(Q')).
This would not have been the case had we taken the following, perhaps more
natural, definition of F:

    F1(Q) = {(s : ST, v, τ) |
        if v = [x, e1, E] then τ = τ1 → τ2 and
        for all v1, v2, s',
        if (s : ST, v1, τ1) ∈ Q1 and s, E ± {x ↦ v1} ⊢ e1 → v2, s'
        then ∃ ST' ⊇ ST such that (s' : ST', v2, τ2) ∈ Q1}

However, the chosen F is monotonic, so it has a smallest and a greatest
fixpoint in the complete lattice (P(U), ⊆), namely

    Rmin = ∩{Q ⊆ U | F(Q) ⊆ Q}

and

    Rmax = ∪{Q ⊆ U | Q ⊆ F(Q)}.        (5.10)

For our particular F, the minimal fixpoint Rmin is strictly contained in the
maximal fixpoint Rmax, and it turns out that it is the latter we want. This is due
to the possibility of cycles in the store, as illustrated by the following example.

Example 5.5 Consider the evaluation of

    let r = ref(λx.x+1)
    in let s = ref(λy.(!r)y+2)
       in r := !s

in the empty store. At the point just before "r := !s" is evaluated, the store
looks as follows:

    { a1 ↦ [x, x+1, E0],
      a2 ↦ [y, (!r)y+2, E0 ± {r ↦ a1}] }

where E0 is the initial environment; after the assignment the store becomes cyclic:

    s' = { a1 ↦ [y, (!r)y+2, E0 ± {r ↦ a1}],
           a2 ↦ [y, (!r)y+2, E0 ± {r ↦ a1}] }

Now we would expect to have s' : ST' ⊨ a1 : (int → int) ref, where

    ST' = {a1 ↦ int → int, a2 ↦ int → int}.

Indeed, if we let q = (s' : ST', a1, (int → int) ref) then we do have q ∈ Rmax.
To prove this it will suffice to find a Q with q ∈ Q and Q ⊆ F(Q), since we
have (5.10). One such Q = Q1 × Q2 × Q3 is

    Q1 = {(s' : ST', a1, (int → int) ref),
          (s' : ST', [y, (!r)y+2, {r ↦ a1}], int → int)}
    Q2 = {(s' : ST', a1, (int → int) ref)}
    Q3 = {(s' : ST', {r ↦ a1}, {r ↦ (int → int) ref})}

where for simplicity I have ignored E0 and the initial type environment.
As we shall see below, one can think of this Q as the smallest consistent set
of typings containing q.
On the other hand, q is not in Rmin. This can be seen as follows. There is an
alternative characterization of Rmin, namely

    Rmin = ∪λ F^λ        (5.11)

where

    F^λ = F(∪_{λ'<λ} F^λ')        (5.12)

where λ ranges over all ordinals (see [1] for an introduction to inductive defini-
tions). In other words, one obtains Rmin by starting from the empty set and then
applying F iteratively. It is easy to show that because q is cyclic there is no least
ordinal λ such that q ∈ F^λ. Therefore q ∉ Rmin. □
Let us try to explain informally why maximal fixpoints are often useful in op-
erational semantics. Let us first review the essential differences between minimal
and maximal fixpoints. In what follows, let U be a set and let F be a function
F : P(U) → P(U) which is monotonic with respect to set inclusion.

Think of the elements of U as witnesses in a trial the judge of which is F.
Of course witnesses may refer to each other, so the question of the acceptance of
a witness, q, may depend on the acceptance of other witnesses. For instance, in
the case of the particular F we are considering, if q = (s : ST, v, σ) then q is
acceptable to F given Q, where Q = Q1 × Q2 × Q3, if and only if

    {(s' : ST', v', τ) ∈ U1 | s' : ST' = s : ST ∧ v' = v ∧ τ < σ} ⊆ Q1.
There are two completely different ways of interpreting the set Q.
The first is to consider Q to be a set of witnesses that have been positively
proved to speak the truth. Then F(Q) are the witnesses that are proved to speak
the truth as a logical consequence of the fact that the witnesses in Q speak the
truth. The monotonicity of F implies that the more witnesses that are known to
speak the truth, the more witnesses F can clear. Starting from Q = ∅, the set
F(∅) consists of those witnesses that can be proved to speak the truth without
reference to any other witnesses. For example we have (s : ST, 3, int) ∈ F(∅) for
the F we consider. Next, F(F(∅)) are the witnesses that can be proved to speak
the truth because F(∅) speak the truth, and so on. Using that F is monotonic,
it can be shown that the (perhaps transfinite) limit of the chain

    ∅ ⊆ F(∅) ⊆ F(F(∅)) ⊆ …

is a fixpoint of F, in fact the least fixpoint of F. In other words, q is in the
minimal fixpoint of F if and only if it can eventually be proved that q speaks the
truth.
As an example, if U =

    {A: "On the day of the crime I was in Paris with B",
     B: "On the day of the crime I was in Paris with A"}

and if, in order to prove a claim of the form "X: On the day of the crime I was in
Paris with Y", F proceeds by investigating all claims of the form "Y: …", then
F is never going to be able to prove either of the two claims.
The other way to interpret Q is to think of it as being the set of witnesses that
F has not proved wrong. Then F(Q) are the witnesses that cannot be proved
wrong, given that the witnesses in Q cannot be proved wrong. The monotonicity
of F now implies that the more witnesses that have been weeded out, the more
witnesses F can reject. Starting from Q = U, meaning that at the outset F
rejects nothing, F(U) is the set of witnesses that cannot be rejected because
they refer to witnesses in U. For example, we have (s : ST, 3, int) ∈ F(U) but
(s : ST, 3, bool) ∉ F(U).
Using that F is monotonic we have

    U ⊇ F(U) ⊇ F(F(U)) ⊇ …

and it can be proved that the (perhaps transfinite) limit of this chain is a fixpoint
of F, in fact the maximal fixpoint of F. Thus q is in the maximal fixpoint of F
if and only if F can never prove that q is wrong. In the witness example
above, both claims will be in the maximal fixpoint because A and B consistently
claim that they were in Paris.
To see why the use of the word "truth" in the interpretation of minimal
fixpoints is replaced by the word "consistency" in the interpretation of maximal
fixpoints, let us consider the alternative characterization of the maximal fixpoint:

    Rmax = ∪{Q ⊆ U | Q ⊆ F(Q)}.

For a set Q to have the property Q ⊆ F(Q) means that for each q ∈ Q, q is
acceptable to F, perhaps by virtue of other witnesses, but only witnesses within
Q. Each of these witnesses, in turn, is acceptable to F in the same sense. Thus,
no matter how many iterations F embarks on, no new witnesses need be called.
This motivates the following definition: Q is (F-)consistent if Q ⊆ F(Q).
To return to Example 5.5, the Q we defined was the smallest F-consistent
set containing q and, as such, is contained in the largest F-consistent set there is,
namely Rmax.
To sum up,

    the maximal fixpoint of F : P(U) → P(U)
        = the largest F-consistent subset of U
        = {q ∈ U | F can never reject q}.
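
For a finite universe, both fixpoints can be computed by exactly the iterations
described above. A small OCaml illustration (invented here, not from the
thesis), encoding the two-witness example with F accepting a witness iff the
witness it cites is currently accepted:

    module S = Set.Make (Int)

    let rec iterate f q =
      let q' = f q in
      if S.equal q' q then q else iterate f q'

    let lfp f = iterate f S.empty        (* start by trusting nobody  *)
    let gfp f univ = iterate f univ      (* start by rejecting nobody *)

    (* Witness 0 cites witness 1 and vice versa. *)
    let univ = S.of_list [0; 1]
    let f q = S.filter (fun w -> S.mem (1 - w) q) univ

    let () =
      assert (S.is_empty (lfp f));         (* neither can ever be proved  *)
      assert (S.equal (gfp f univ) univ)   (* neither can ever be refuted *)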

It should now be obvious that maximal fixpoints are of interest whenever one
considers consistency properties, in particular when one wants to define relations
between objects that are inherently infinite, although they may have a finite
representation. In CCS [28], for example, observation equivalence of processes is
defined as the maximal fixpoint of a monotonic operator. As yet another example,
let us mention that the technique can be used to extend the soundness result of
Part I to a language that admits recursive functions, represented in the dynamic
semantics by finite representations of infinite closures. We shall now proceed
to see how the technique is used to prove the soundness of the imperative type
discipline.
Take ⊨ to be Rmax:

Definition 5.6 We write

    s : ST ⊨ v : τ    for (s : ST, v, τ) ∈ Rmax,
    s : ST ⊨ v : σ    for (s : ST, v, σ) ∈ Rmax, and
    s : ST ⊨ E : TE   for (s : ST, E, TE) ∈ Rmax.

With this definition the consistency result we conjectured earlier (Conjec-
ture 3.4) is true. However, a proof by induction on the depth of inference of
s, E ⊢ e → v, s' will not work, as we can see by considering any inference rule
with more than one premise, for example

    s, E ⊢ e1 → ref, s1    s1, E ⊢ e2 → v2, s2    a ∉ Dom s2
    ──────────────────────────────────────────────────────── (5.13)
    s, E ⊢ e1 e2 → a, s2 ± {a ↦ v2}

with the corresponding typing rule

    TE ⊢ e1 : τ' → τ        TE ⊢ e2 : τ'
    ───────────────────────────────────── (5.14)
    TE ⊢ e1 e2 : τ

By assumption s : ST ⊨ E : TE, and by (5.13) and (5.14) we have s, E ⊢ e1 →
ref, s1 and TE ⊢ e1 : τ' → τ. Hence by induction there exists an ST1 such that

    s1 : ST1 ⊨ ref : τ' → τ.

To apply induction a second time, this time on the second premises of (5.13)
and (5.14), we need to know that s1 : ST1 ⊨ E : TE; however, we just know
s : ST ⊨ E : TE.
What we need is that every typing that holds in the initial typed store, s : ST,
still holds in the resulting typed store, s1 : ST1.
This motivates the following definition.

Definition 5.7 A typed store s' : ST' succeeds a typed store s : ST, written

    s : ST ⊑ s' : ST',

if

1. ST ⊆ ST'

2. ∀v, τ: s : ST ⊨ v : τ ⟹ s' : ST' ⊨ v : τ.

As usual, ST ⊆ ST' is short for

    Dom ST ⊆ Dom ST' and for all x ∈ Dom ST, ST(x) = ST'(x).

Notice that if s : ST ⊑ s' : ST' and Dom s = Dom s' then ST = ST', since
Dom ST = Dom s = Dom s' = Dom ST'.
The relation ⊑ is obviously reflexive and transitive. It is not antisymmetric.
From Property 5.4 we immediately get

Lemma 5.8 Assume s : ST ⊑ s' : ST'. Then for all v, σ, E, and TE

    s : ST ⊨ v : σ ⟹ s' : ST' ⊨ v : σ

and

    s : ST ⊨ E : TE ⟹ s' : ST' ⊨ E : TE.
The following lemmas are all used in the proof of the consistency result.
Moreover, they are proved using an important proof technique regarding maximal
fixpoints.
The first lemma will be needed for the case concerning creation of a reference.
For brevity, let us say that s' : ST' expands s : ST, written s : ST ⊆ s' : ST', if
s ⊆ s' and ST ⊆ ST'.

Lemma 5.9 (Store Expansion) If s : ST ⊆ s' : ST' then s : ST ⊑ s' : ST'.



Proof. It will suffice to prove

    ∀v, τ: s : ST ⊨ v : τ ⟹ s' : ST' ⊨ v : τ.

To this end we define Q = Q1 × Q2 × Q3, where

    Q1 = {(s' : ST', v, τ) | s : ST ⊨ v : τ}
    Q2 = {(s' : ST', v, σ) | s : ST ⊨ v : σ}
    Q3 = {(s' : ST', E, TE) | s : ST ⊨ E : TE}

where s : ST and s' : ST' are both given.
The point is that it will suffice to prove Q ⊆ F(Q), as

    Rmax = ∪{Q ⊆ U | Q ⊆ F(Q)}

is the largest set contained in its image under F.
First, take q1 = (s' : ST', v, τ) ∈ Q1; then

    s : ST ⊨ v : τ.        (5.15)

If v = b then (v, τ) ∈ RBas by Property 5.4 on (5.15). Thus q1 ∈ F1(Q), as
desired. Similarly for v = asg, ref, and deref.
If v = [x, e1, E] then by Property 5.4 on (5.15) there exists a TE such
that s : ST ⊨ E : TE and TE ⊢ λx.e1 : τ. Thus (s' : ST', E, TE) ∈ Q3. Thus
q1 ∈ F1(Q).
If v = a then τ = (ST(a)) ref and s : ST ⊨ s(a) : ST(a) by Property 5.4
on (5.15). Since s : ST ⊆ s' : ST' this means τ = (ST'(a)) ref and s : ST ⊨ s'(a) :
ST'(a), i.e., (s' : ST', s'(a), ST'(a)) ∈ Q1. Hence q1 ∈ F1(Q).
Next, take q2 = (s' : ST', v, σ) ∈ Q2. Thus

    s : ST ⊨ v : σ.        (5.16)

Assume τ < σ. Then s : ST ⊨ v : τ by Property 5.4 on (5.16). Thus (s' :
ST', v, τ) ∈ Q1. Thus q2 ∈ F2(Q).
Finally, take q3 = (s' : ST', E, TE) ∈ Q3. Then s : ST ⊨ E : TE; thus
Dom E = Dom TE. Moreover, for all x ∈ Dom E, we have s : ST ⊨ E(x) : TE(x),
i.e., (s' : ST', E(x), TE(x)) ∈ Q2. Thus q3 ∈ F3(Q). □

The following lemma is used in the case concerning assignment (of the value
v0 to the address a0).

Lemma 5.10 (Assignment) Assume s : ST ⊨ v0 : ST(a0); let s' = s ± {a0 ↦ v0}.
Then s : ST ⊑ s' : ST.

Note that if s : ST ⊨ v0 : ST(a0) then a0 ∈ Dom ST = Dom s, so s' =
s ± {a0 ↦ v0} is obtained by overwriting s on an already existing address.

Proof. Define

    Q1 = {(s' : ST, v, τ) | s : ST ⊨ v : τ}
    Q2 = {(s' : ST, v, σ) | s : ST ⊨ v : σ}
    Q3 = {(s' : ST, E, TE) | s : ST ⊨ E : TE}

where a0, v0, s, s', and ST are all given and satisfy s : ST ⊨ v0 : ST(a0) and
s' = s ± {a0 ↦ v0}.
It will suffice to prove Q ⊆ F(Q).
First, take q1 = (s' : ST, v, τ) ∈ Q1. Then

    s : ST ⊨ v : τ.        (5.17)

If v = b, asg, ref, or deref we immediately get q1 ∈ F1(Q) from Property 5.4
on (5.17).
If v = [x, e1, E] then by (5.17) there exists a TE such that s : ST ⊨ E : TE
and TE ⊢ λx.e1 : τ. Thus (s' : ST, E, TE) ∈ Q3. Thus q1 ∈ F1(Q).
If v = a then by (5.17) we have τ = (ST(a)) ref and s : ST ⊨ s(a) : ST(a).
Since s : ST ⊨ v0 : ST(a0) this gives s : ST ⊨ s'(a) : ST(a). Thus (s' :
ST, s'(a), ST(a)) ∈ Q1. Thus (s' : ST, a, (ST(a)) ref) ∈ F1(Q), i.e., q1 ∈ F1(Q).
Next, take q2 = (s' : ST, v, σ) ∈ Q2. Then s : ST ⊨ v : σ. Assume τ < σ.
Then s : ST ⊨ v : τ by Property 5.4, i.e., (s' : ST, v, τ) ∈ Q1. Thus q2 ∈ F2(Q).
Finally, take q3 = (s' : ST, E, TE) ∈ Q3. Then s : ST ⊨ E : TE. Thus
Dom E = Dom TE and for all x ∈ Dom E, s : ST ⊨ E(x) : TE(x), i.e., (s' :
ST, E(x), TE(x)) ∈ Q2. Thus q3 ∈ F3(Q). □

Finally, this lemma is crucial in the case regarding the let rule.

Lemma 5.11 (Semantic Substitution)
If s : ST ⊨ v : τ then s : S(ST) ⊨ v : S(τ) for all substitutions S.

Proof. Define

    Q1 = {(s : ST', v, τ') | ∃ S, τ s.t.
            S(ST) = ST' ∧ S(τ) = τ' ∧ s : ST ⊨ v : τ}
    Q2 = {(s : ST', v, σ') | ∃ S, σ s.t.
            S(ST) = ST' ∧ σ --S→ σ' ∧ s : ST ⊨ v : σ}
    Q3 = {(s : ST', E, TE') | ∃ S, TE s.t.
            S(ST) = ST' ∧ TE --S→ TE' ∧ s : ST ⊨ E : TE}

where s, ST, and ST' are given.
It will suffice to prove Q ⊆ F(Q).
First, take q1 = (s : ST', v, τ') ∈ Q1. Let S and τ be such that

    S(ST) = ST' and S(τ) = τ' and s : ST ⊨ v : τ.        (5.18)

If v = b then (v, τ) ∈ RBas by Property 5.4 on (5.18). Thus τ ∈ TyCon, so τ' = τ.
Thus q1 ∈ F1(Q).
If v = asg, then by (5.18) we have τ = τ1 ref → (τ1 → stm) for some τ1. Thus
τ' = (S τ1) ref → (S τ1 → stm), showing q1 ∈ F1(Q). Similarly for v = deref.
If v = ref then τ = θ → θ ref for some imperative type θ. Since substitutions
are required to map imperative type variables to imperative types, we have that
S(θ) is an imperative type. Thus τ' = S θ → (S θ) ref, showing q1 ∈ F1(Q).
If v = [x, e1, E] then by (5.18) there exists a TE such that s : ST ⊨ E : TE
and TE ⊢ λx.e1 : τ. There exists a TE' such that TE --S→ TE'. Thus (s :
ST', E, TE') ∈ Q3. Moreover, by Lemma 5.2 we have TE' ⊢ λx.e1 : S(τ), i.e.,
TE' ⊢ λx.e1 : τ'. Thus q1 ∈ F1(Q).
If v = a then by (5.18) we have τ = (ST(a)) ref and s : ST ⊨ s(a) : ST(a).
Thus τ' = (ST'(a)) ref. Also, S(ST(a)) = ST'(a), so (s : ST', s(a), ST'(a)) ∈ Q1.
Thus (s : ST', a, (ST'(a)) ref) ∈ F1(Q), i.e., q1 ∈ F1(Q).
Next, take q2 = (s : ST', v, σ') ∈ Q2. Let S and σ be such that S(ST) = ST'
and σ --S→ σ' and s : ST ⊨ v : σ. Assume τ1' < σ'.
Let A = tyvars(ST) ∪ tyvars(σ). Then by Lemma 5.3 there exists a τ1 and an
S1 such that σ > τ1, S1 τ1 = τ1' and S1 ↓ A = S ↓ A. In particular, S1(ST) = ST'.
Since σ > τ1 we have s : ST ⊨ v : τ1.
This, together with S1(ST) = ST' and S1 τ1 = τ1', gives (s : ST', v, τ1') ∈ Q1. Since
we can do this for every τ1' < σ', we have q2 = (s : ST', v, σ') ∈ F2(Q) as desired.
Finally, take q3 = (s : ST', E, TE') ∈ Q3. Let S and TE be such that
TE --S→ TE' and S(ST) = ST' and s : ST ⊨ E : TE. Then Dom E = Dom TE and
∀x ∈ Dom E, s : ST ⊨ E(x) : TE(x) and TE(x) --S→ TE'(x). Thus ∀x ∈ Dom E,
(s : ST', E(x), TE'(x)) ∈ Q2, showing that q3 = (s : ST', E, TE') ∈ F3(Q). □

5.3 The Consistency Theorem

We can now state the main theorem.

Theorem 5.12 (Consistency of Static and Dynamic Semantics)
If s : ST ⊨ E : TE and TE ⊢ e : τ and s, E ⊢ e → v, s' then there exists an ST'
with s : ST ⊑ s' : ST' and s' : ST' ⊨ v : τ.

The special case that is of basic concern is when the initial store is empty
(hence s = ST = {}), the only free variables in e are ref, !, and :=, and E
and TE are the initial environments

    E0(ref) = ref        TE0(ref) = ∀u.u → u ref
    E0(!)   = deref      TE0(!)   = ∀t.t ref → t
    E0(:=)  = asg        TE0(:=)  = ∀t.t ref → t → stm

Since we have {} : {} ⊨ E0 : TE0 we have

Corollary 5.13 (Basic Soundness) If TE0 ⊢ e : τ and {}, E0 ⊢ e → b, s'
then (b, τ) ∈ RBas,

where RBas still is the relation between basic values and types (relating 3 to int,
false to bool etc.).

Proof. The proof of 5.12 is by induction on the depth of the dynamic evaluation.
There is one case for each rule. The first case where one really sees the induction
hypothesis at work is the one for application of a closure. The crucial case is
of course the one for let expressions. Given the lemmas in the previous section,
neither assignment nor creation of a new reference offers great resistance.

Variable, rule 3.1  Here e = x ∈ Dom E = Dom TE and v = E(x), τ < TE(x),
and s = s'. Let ST' = ST. Then s : ST ⊑ s' : ST'. Since s' : ST' ⊨ E : TE
we have s' : ST' ⊨ E(x) : TE(x), and since τ < TE(x), Property 5.4 gives
s' : ST' ⊨ v : τ, as desired.

Lambda Abstraction, rule 3.2  Here e = λx.e1 and TE ⊢ λx.e1 : τ and v =
[x, e1, E] and s' = s. Let ST' = ST. Then s : ST ⊑ s' : ST'. Moreover,
s' : ST' ⊨ [x, e1, E] : τ by Property 5.4.
Application of a Closure, rule 3.3  Here the situation is

    TE ⊢ e1 : τ' → τ        TE ⊢ e2 : τ'
    ───────────────────────────────────── (5.19)
    TE ⊢ e1 e2 : τ

and

    s, E ⊢ e1 → [x0, e0, E0], s1
    s1, E ⊢ e2 → v2, s2
    s2, E0 ± {x0 ↦ v2} ⊢ e0 → v, s'
    ──────────────────────────────── (5.20)
    s, E ⊢ e1 e2 → v, s'

By induction on the first premises of (5.19) and (5.20) there exists an ST1
such that s : ST ⊑ s1 : ST1 and

    s1 : ST1 ⊨ [x0, e0, E0] : τ' → τ.        (5.21)

Thus s1 : ST1 ⊨ E : TE by Lemma 5.8. Using this with the second premises
of (5.19) and (5.20) we get (by induction) an ST2 such that s1 : ST1 ⊑ s2 : ST2
and

    s2 : ST2 ⊨ v2 : τ'.        (5.22)

Now (5.21) together with s1 : ST1 ⊑ s2 : ST2 gives

    s2 : ST2 ⊨ [x0, e0, E0] : τ' → τ.

Thus by Property 5.4 there exists a TE0 such that

    s2 : ST2 ⊨ E0 : TE0        (5.23)

and

    TE0 ⊢ λx0.e0 : τ' → τ.        (5.24)

But (5.24) must be due to

    TE0 ± {x0 ↦ τ'} ⊢ e0 : τ.        (5.25)

From (5.23) and (5.22) we get

    s2 : ST2 ⊨ E0 ± {x0 ↦ v2} : TE0 ± {x0 ↦ τ'}        (5.26)

We now use induction on (5.26), (5.25), and the third premise of (5.20) to get
the desired ST'.
Notice that we could not have done induction on the depth of the type infer-
ence, as we do not know anything about the depth of (5.24). Also note that the
present definition of what it is for a closure to have a type (which almost was
forced upon us because we needed F to be monotonic) now most conveniently
provides the TE0 for (5.23).

Assignment, rule 3.4  Here e = (e1 e2) e3, so the inferences must have been

    TE ⊢ e1 : τ'' → (τ' → τ)        TE ⊢ e2 : τ''
    ────────────────────────────────────────────── (5.27)
    TE ⊢ e1 e2 : τ' → τ

    TE ⊢ e1 e2 : τ' → τ        TE ⊢ e3 : τ'
    ───────────────────────────────────────── (5.28)
    TE ⊢ (e1 e2) e3 : τ

    s, E ⊢ e1 → asg, s1
    s1, E ⊢ e2 → a, s2
    s2, E ⊢ e3 → v3, s3
    ───────────────────────────────────────── (5.29)
    s, E ⊢ (e1 e2) e3 → done, s3 ± {a ↦ v3}

where s' = s3 ± {a ↦ v3}.
By induction on the first premises of (5.27) and (5.29) there exists an ST1 such
that s : ST ⊑ s1 : ST1 and

    s1 : ST1 ⊨ asg : τ'' → (τ' → τ).

By Property 5.4 we must have τ'' = τ' ref and τ = stm.
Now s1 : ST1 ⊨ E : TE. By induction on the second premises of (5.27)
and (5.29) we therefore get an ST2 such that s1 : ST1 ⊑ s2 : ST2 and s2 : ST2 ⊨
a : τ' ref.
Thus s2 : ST2 ⊨ E : TE. By induction on the second premise of (5.28) and
the third premise of (5.29) there exists an ST' such that s2 : ST2 ⊑ s3 : ST' and
s3 : ST' ⊨ v3 : τ'. Thus we have s3 : ST' ⊨ a : τ' ref.
Thus we must have τ' = ST'(a), so s3 : ST' ⊨ v3 : ST'(a). Thus by the
assignment lemma, Lemma 5.10, we have s3 : ST' ⊑ s' : ST'. Since (done, stm) ∈
RBas we have s' : ST' ⊨ done : stm, i.e., the desired s' : ST' ⊨ v : τ.

Creation of a Reference, rule 3.5  Here

    TE ⊢ e1 : τ' → τ        TE ⊢ e2 : τ'
    ─────────────────────────────────────
    TE ⊢ e1 e2 : τ

    s, E ⊢ e1 → ref, s1        s1, E ⊢ e2 → v2, s2        a ∉ Dom s2
    ─────────────────────────────────────────────────────────────────
    s, E ⊢ e1 e2 → a, s2 ± {a ↦ v2}

where s' = s2 ± {a ↦ v2}. By induction on the first premises there exists an ST1
such that s : ST ⊑ s1 : ST1 and s1 : ST1 ⊨ ref : τ' → τ. Thus by Property 5.4
we have τ = τ' ref, and τ and τ' are imperative types.
Now s1 : ST1 ⊨ E : TE. Thus induction on the second premises gives an ST2
such that s1 : ST1 ⊑ s2 : ST2 and s2 : ST2 ⊨ v2 : τ'.
Let ST' = ST2 ± {a ↦ τ'}. This makes sense since τ' is an imperative
type! Since a ∉ Dom s2 and Dom s2 = Dom ST2, we have a ∉ Dom ST2. Thus
Dom ST' = Dom s', so s' : ST' is a typed store and it clearly expands s2 : ST2.
Thus by the expansion lemma, Lemma 5.9, we get s2 : ST2 ⊑ s' : ST'. Hence
s' : ST' ⊨ v2 : τ', i.e., s' : ST' ⊨ s'(a) : ST'(a), so s' : ST' ⊨ a : τ' ref, i.e.,
s' : ST' ⊨ v : τ.

Dereferencing, rule 3.6  Here

    TE ⊢ e1 : τ' → τ        TE ⊢ e2 : τ'
    ─────────────────────────────────────
    TE ⊢ e1 e2 : τ

    s, E ⊢ e1 → deref, s1        s1, E ⊢ e2 → a, s'        s'(a) = v
    ─────────────────────────────────────────────────────────────────
    s, E ⊢ e1 e2 → v, s'

By induction on the first premises there exists an ST1 such that s : ST ⊑ s1 : ST1
and s1 : ST1 ⊨ deref : τ' → τ. Thus τ' = τ ref.
Now s1 : ST1 ⊨ E : TE. Thus by induction on the second premises there is an
ST' such that s1 : ST1 ⊑ s' : ST' and s' : ST' ⊨ a : τ ref. Thus s : ST ⊑ s' : ST'
and s' : ST' ⊨ s'(a) : τ, i.e., s' : ST' ⊨ v : τ.

Let Expressions, rule 3.7  The dynamic evaluation is

    s, E ⊢ e1 → v1, s1        s1, E ± {x ↦ v1} ⊢ e2 → v, s'
    ───────────────────────────────────────────────────────── (5.30)
    s, E ⊢ let x = e1 in e2 → v, s'

Now there are two subcases:



e1 is expansive  Then TE ⊢ e : τ must have been inferred by

    TE ⊢ e1 : τ1        (5.31)

and

    TE ± {x ↦ AppClos_TE τ1} ⊢ e2 : τ        (5.32)

for some τ1, by rule 4.5. By induction on the first premise of (5.30) and (5.31)
there exists an ST1 such that s : ST ⊑ s1 : ST1 and

    s1 : ST1 ⊨ v1 : τ1        (5.33)

Thus

    s1 : ST1 ⊨ E : TE        (5.34)

Bearing in mind that we have (5.32), we now want to strengthen (5.33) to

    s1 : ST1 ⊨ v1 : AppClos_TE τ1        (5.35)

So take any τ < AppClos_TE τ1. Any bound variable in AppClos_TE τ1 is applicative,
so it does not occur in ST1, simply because store typings by definition cannot
contain applicative type variables. Thus τ < AppClos_TE τ1 ensures the existence
of a substitution S such that S(ST1) = ST1 and S(τ1) = τ. Thus, when we apply
the semantic substitution lemma, Lemma 5.11, to (5.33) we get

    s1 : ST1 ⊨ v1 : τ        (5.36)

Since (5.36) holds for arbitrary τ < AppClos_TE τ1, we have proved (5.35), cf.
Property 5.4.
Then (5.34) and (5.35) give

    s1 : ST1 ⊨ E ± {x ↦ v1} : TE ± {x ↦ AppClos_TE τ1}        (5.37)

Applying induction on the second premise of (5.30) and (5.37) and (5.32), we
get an ST' such that (s : ST ⊑) s1 : ST1 ⊑ s' : ST' and s' : ST' ⊨ v : τ, as desired.

e1 is non-expansive  Then TE ⊢ e : τ must have been inferred from

    TE ⊢ e1 : τ1        (5.38)

    TE ± {x ↦ Clos_TE τ1} ⊢ e2 : τ        (5.39)

for some τ1, by application of rule 4.4.
Let {α1, …, αn} = tyvars τ1 \ tyvars TE. Then Clos_TE τ1 = ∀α1…αn.τ1.
Let {u1, …, um} be the imperative type variables among {α1, …, αn}. Let
{u1', …, um'} be imperative type variables such that R = {ui ↦ ui' | 1 ≤ i ≤ m}
is a bijection and

    Rng R ∩ imptyvars ST = ∅        (5.40)

    Rng R ∩ tyvars TE = ∅        (5.41)

Now TE --R→ TE, as no ui is free in TE, so the substitution lemma, Lemma 5.2,
applied to (5.38) gives

    TE ⊢ e1 : R τ1        (5.42)

Moreover, Clos_TE τ1 = Clos_TE(R τ1) by (5.41), so from (5.39) we get

    TE ± {x ↦ Clos_TE(R τ1)} ⊢ e2 : τ        (5.43)

by using Lemma 5.2 on the identity substitution. Applying induction to the first
premises of (5.30) and (5.42) we get an ST1 such that s : ST ⊑ s1 : ST1 and

    s1 : ST1 ⊨ v1 : R τ1        (5.44)

Since e1 is non-expansive, we have Dom s = Dom s1 - and this is the crucial
property of non-expansive expressions. Since s : ST ⊑ s1 : ST1 we have ST1 = ST
(recall the definition of ⊑ and note that Dom ST = Dom s = Dom s1 = Dom ST1).
Thus

    s1 : ST ⊨ E : TE        (5.45)

and, by (5.44),

    s1 : ST ⊨ v1 : R τ1.        (5.46)

Bearing (5.43) in mind we want to strengthen (5.46) to

    s1 : ST ⊨ v1 : Clos_TE(R τ1).        (5.47)

So take any τ < Clos_TE(R τ1). No variable α bound in Clos_TE(R τ1) can occur in
ST, either because α is applicative or because of (5.40) - this is precisely why
we do the renaming.
Hence τ < Clos_TE(R τ1) implies the existence of a substitution S with S(ST) =
ST and S(R τ1) = τ. We now apply the semantic substitution lemma, Lemma 5.11,
to (5.46) to obtain

    s1 : ST ⊨ v1 : τ        (5.48)

Since (5.48) holds for every τ < Clos_TE(R τ1), we have proved (5.47), cf.
Property 5.4.
From (5.45) and (5.47) we then get

    s1 : ST ⊨ E ± {x ↦ v1} : TE ± {x ↦ Clos_TE(R τ1)}        (5.49)

Finally we apply induction to (5.49), the second premise of (5.30), and to (5.43)
to get the desired ST'. □

From the proof case concerned with non-expansive expressions in let expres-
sions we learn that the important property of a non-expansive expression is that
it does not expand the domain of the store. Because of the very simple way we
have defined what it is for an expression to be non-expansive, non-expansive ex-
pressions will in fact leave the store unchanged. The proof shows that this is not
necessary; assignments are harmless, only creation of new references is critical.¹
In larger languages - ML, say - the class of syntactically non-expansive
expressions is quite large. The class is closed under the application of many
primitive functions, and under many constructions. For instance, if e1 and e2 are
non-expansive, so is let x = e1 in e2.

As a corollary of the Consistency Theorem we have that monomorphic ref-
erences in the original purely applicative type inference system (Section 3.2) are
sound. To be more precise, let TE0' be as TE0 except that TE0'(ref) = ∀α.α →
α ref, where α is applicative. (The initial environments TE0 and E0 were defined
in connection with Corollary 5.13.) Moreover, let P' be a proof of TE0' ⊢ e : τ in
the applicative system, where P' is special in the sense that the type scheme for
ref is always instantiated to a monotype (i.e., a type without any type variables
at all). Then there is a proof P of TE0 ⊢ e : τ in the modified inference system.
Hence Corollary 5.13 gives the basic soundness of the special applicative type
inference.

¹In retrospect, this explains why the type scheme for ref has a bound imperative type
variable, while the type schemes for := and ! are purely applicative.
Chapter 6

Comparison with Damas' Inference System

The purpose of this chapter is to compare our system with Damas' system, pre-
sented in Chapter III of his Ph.D. thesis [13]. I shall first try to explain his
inference system. Sadly, the inventor himself economized hard on motivations
and explanations, so let the reader be warned that my interpretations might not
be consistent with what he had in mind.
Having described his system, as I understand it, I shall give examples of its
use. I informally conjecture that every expression that I can type, he can type;
the converse is not true. Finally, I shall discuss the pragmatics of the two systems.

6.1 The Inference System

First, types are defined by

    τ ::= ι | α | τ ref | τ1 → τ2

where ι ranges over primitive types and α ranges over one universal set of type
variables.
Next, type schemes, η, are defined by

    η ::= τ | τ1 → τ2 * Δ | ∀α.η

where Δ ranges over finite sets of types. Thus

    η = ∀α1…αn.τ1 → τ2 * Δ        (6.1)

is a type scheme. The quantification binds over both τ1 → τ2 and Δ. The set
Δ should be thought of as an addendum to the whole of τ1 → τ2, not just to τ2.

Roughly speaking, a function has the type (6.1) if it has the polymorphic type
Val ... a,.(T1 -r T2) and in addition 0 is an upper bound for the types of objects
to which new references will be created as a side-effect of applying the function.
For example,
TEo(ref) = Vt.t -3 t ref *{t} (6.2)
where TEo as usual means the initial type environment.
Type environments map variables to type schemes of the above kind. The
type inference rules allow us to infer sequents of the form

TEI-e:ri*0.
Since 71 can be of the form Val ... a,.Tl -3 T2 * 0' one can infer sequents of the
form
TE I- e : (VaI ... a,. T, -3 T2 * 0') * A. (6.4)

This device of nested sets of types is extremely important and provides a gen-
eralization of my rough classification of "expansive" versus "non-expansive" ex-
pressions. The rough idea in (6.3) is that 0
contains the types of all objects to
which new references may be created as a side-effect of evaluating e. In addition,
in (6.4), the set 0' contains the types of all objects to which new references will
be created when e is applied to an argument.
As an example, we will have

    TE0 ⊢ ref : (∀t.t → t ref * {t}) * ∅

Notice that Δ = ∅, since simply mentioning ref does not create any references;
the application of ref, however, will create a reference, so Δ' is non-empty.
This motivates the inference rule for variables:

    TE(x) = η
    ────────────── (6.5)
    TE ⊢ x : η * ∅
The rule for lambda abstraction is

    TE ± {x ↦ τ'} ⊢ e1 : τ * Δ
    ────────────────────────────── (6.6)
    TE ⊢ λx.e1 : (τ' → τ * Δ) * ∅

Notice the outer ∅ in the conclusion; the evaluation of a lambda abstraction
does not create any references. Also notice that in the premise the inferred type
scheme must take the special form τ, i.e., it cannot have a set of types associated
with it. One could imagine a rule

    TE ± {x ↦ τ'} ⊢ e1 : (τ * Δ') * Δ
    ─────────────────────────────────────────
    TE ⊢ λx.e1 : (((τ' → τ) * Δ') * Δ) * ∅

and so on for arbitrary levels of nesting. The present rule, however, does not
distinguish between whether it is the application of the function itself, or, in
the case where the result of the application is a function, the application of the
resulting function, that creates the reference. For example, with the present rule
we will have

    TE ⊢ λx.let y = ref x in λz.z : η * ∅

and also

    TE ⊢ λx.λz.let y = ref x in z : η * ∅

where η = ∀t1,t2. t1 → t2 → t2 * {t1}.
To suggest some terminology, we might call the types in the outer Δ-set the
immediate store types and call the types in the inner Δ-set the future store
types.
The generalization rule is

    TE ⊢ e : η * Δ        α ∉ tyvars TE ∪ tyvars Δ
    ─────────────────────────────────────────────── (6.7)
    TE ⊢ e : (∀α.η) * Δ

Hence, if η has the form τ1 → τ2 * Δ' then α may be free in Δ', the future store
types, but it must not be free in Δ, the immediate store types. This makes sense
following the discussion in Section 3.2, where it was demonstrated that it is the
immediate extension of the present store typing that can make generalization
unsound.
Next, the rule for application is

    TE ⊢ e1 : (τ' → τ) * Δ        TE ⊢ e2 : τ' * Δ
    ──────────────────────────────────────────────── (6.8)
    TE ⊢ e1 e2 : τ * Δ

Note that the type inferred for e1 has no future store types, only the immediate
store types. The point is that store types associated with e1 that have hitherto
been considered as belonging to the future are forced to be regarded as immediate
now that we apply e1.
The rule for let expressions is

    TE ⊢ e1 : η1 * Δ        TE ± {x ↦ η1} ⊢ e2 : η2 * Δ
    ───────────────────────────────────────────────────── (6.9)
    TE ⊢ let x = e1 in e2 : η2 * Δ

Since η1 can have quantified type variables, the assignment of η1 to x allows
polymorphic use of x within e2. In contrast to our system, no explicit closure
operation is applied in the second premise. Instead, the generalization rule (6.7)
may be applied repeatedly to quantify all variables that are not in Δ and not
free in TE.
Also notice that η2 is a type scheme, whereas in the purely applicative system
the result was a type. This makes a difference in Damas' system because only
type schemes can contain store types. Thus we can infer, for instance,

    TE0 ⊢ 7 : int * ∅
    TE0 ± {x ↦ int} ⊢ λy.!(ref y) : (t → t * {t}) * ∅
    ─────────────────────────────────────────────────── (6.10)
    TE0 ⊢ let x = 7 in λy.!(ref y) : (t → t * {t}) * ∅

where t subsequently can be bound because it is a future store type only.
The distinction between immediate and future store types is finer than the
distinction between expansive and non-expansive expressions (see Section 4.1).
That variables and lambda abstractions are non-expansive corresponds to the set
of immediate store types being empty in rules (6.5) and (6.6). However, as we
saw in (6.10), the property of having an empty set of immediate store types
can be synthesized from subexpressions. This is true even when applications are
involved; for example we have TE0 ⊢ e : (t → t * {t}) * ∅, where

    e = let x = (λx.x)1 in λy.!(ref y).        (6.11)

When doing proofs in Damas' system, one will seek to partition the store
types in such a way that as many of them as possible belong to the future, so
that one can get maximal use of the generalization rule. However, in typing an
application, future store types are forced to be considered immediate. The
inference rule that allows this transition is the instantiation rule,

    TE ⊢ e : η * Λ    η * Λ ≥ η' * Λ'
    ------------------------------------ (6.12)
    TE ⊢ e : η' * Λ'

because η and η' may take the forms

    η  = ∀α1…αn. τ1 → τ2 * Λ0
    η' = ∀β1…βm. τ1' → τ2'.

The relation η * Λ ≥ η' * Λ' is defined in terms of type scheme instantiation,
η ≥ η', which in turn is defined as follows¹ (here θ ranges over non-quantified
type schemes, i.e., terms of the form τ or τ1 → τ2 * Λ):

¹Taken from [13], page 95.
Definition. We will say that a type scheme η' = ∀β1…βm.θ' is a
generic instance of another type scheme η = ∀α1…αn.θ, and write
η ≥ η', iff the βj do not occur free in ∀α1…αn.θ and there are types
τ1, …, τn such that either

1. both θ and θ' are types and θ' = [τi/αi]θ, or

2. θ is τ' → τ * Λ, θ' is υ' → υ * Λ', υ' → υ = [τi/αi](τ' → τ) and
   [τi/αi]Λ is a subset of Λ'.

Here (1) corresponds to type scheme instantiation in the purely applicative
inference system. In (2), the condition [τi/αi]Λ ⊆ Λ' corresponds to my require-
ment that substitutions map imperative type variables to imperative types.
Then the relation η * Λ ≥ η' * Λ' is defined as follows:²

Definition. Given two terms η * Λ and η' * Λ' we will write η * Λ ≥
η' * Λ' iff Λ is a subset of Λ' and either

1. η ≥ η', or

2. η is ∀α1…αn. τ' → τ * Λ'', η' is ∀β1…βm.υ and, assuming the
   βj do not occur free in η nor in Λ', there are types τ1, …, τn
   such that υ = [τi/αi](τ' → τ) and [τi/αi]Λ'' is a subset of Λ'.

²From [13], page 96.

Note that the requirement Λ ⊆ Λ' allows us to increase the set of immediate
store types at any point in the proof. This will make generalization harder, of
course, but it may be needed to get identical sets of immediate store types in the
premises of rules (6.8) and (6.9).
It is condition (2) in the above definition that allows the transition from future
store types to immediate store types. For example, we have

    (∀t. t → t ref * {t}) * ∅  ≥  (∀t. t → t ref) * {t}
                               ≥  (t list → t list ref) * {t list}

and

    (∀t. t list) * ∅  ≥  t list * {t list}.

Hence, having used the instantiation rule twice we can conclude

    TE0 ⊢ ref : (t list → t list ref) * {t list}    TE0 ⊢ nil : t list * {t list}
    -------------------------------------------------------------------------------
    TE0 ⊢ ref nil : t list ref * {t list}

in which the unsound generalization on t is prevented.
Damas' inference rules translated into our notation are repeated in Figure 6.1.
At this point, I hope that the reader agrees that Damas' system looks plausible.
At the same time it is subtle enough that the question of soundness is difficult.
    TE(x) = η
    ------------------
    TE ⊢ x : η * ∅

    TE ⊢ e : η * Λ    η * Λ ≥ η' * Λ'
    ------------------------------------
    TE ⊢ e : η' * Λ'

    TE ⊢ e : η * Λ    α ∉ tyvars TE ∪ tyvars Λ
    ---------------------------------------------
    TE ⊢ e : (∀α.η) * Λ

    TE ⊢ e1 : (τ' → τ) * Λ    TE ⊢ e2 : τ' * Λ
    ---------------------------------------------
    TE ⊢ e1 e2 : τ * Λ

    TE ± {x ↦ τ'} ⊢ e1 : τ * Λ
    -------------------------------
    TE ⊢ λx.e1 : (τ' → τ * Λ) * ∅

    TE ± {f ↦ τ' → τ, x ↦ τ'} ⊢ e1 : τ * Λ
    -----------------------------------------
    TE ⊢ rec f x.e1 : (τ' → τ * Λ) * ∅

    TE ⊢ e1 : η1 * Λ    TE ± {x ↦ η1} ⊢ e2 : η * Λ
    --------------------------------------------------
    TE ⊢ let x = e1 in e2 : η * Λ

Figure 6.1: Damas' Inference Rules


6.2 Comparison
There are expressions that Damas can type, but I cannot. An example is

    let f = e in (f 1; f true)
where e is the expression from (6.11). In Damas' system we get

    TE0 ⊢ e : (∀t. t → t * {t}) * ∅

so that f can be used polymorphically, while in my system

    TE0 ⊢ e : u → u
where u cannot be bound since e is considered to be expansive.
Moreover, I suggest that every expression typable in my system is typable in
Damas' system. I have not made a serious attempt to construct the embedding.
The naive idea would be that a type τ in my system corresponds to a term τ * Λ,
where tyvars Λ ∩ tyvars τ corresponds to imptyvars τ. Unfortunately, this does
not work because terms of the form τ * Λ are not types according to Damas'
definition. Only type schemes, indeed only function type schemes, can contain
store types. In fact, Damas claims that one cannot include the store types in the
types:³

"The inclusion of terms of the form


T1 -4T * A
among types, which would be the natural thing to do according to
the discussion above, would preclude the extension of the type as-
signment algorithm of the previous chapter to this extended type
inference system. This is so because the algorithm relies heavily on
the properties of the unification algorithm and the latter cannot be
extended to unify terms involving sets of types while preserving those
properties."

However, the current ML implementation of Damas' ideas, due, I believe, to
Luca Cardelli, in effect includes store types among types. Every type variable is
either weak or strong. A weak type variable corresponds to a variable that occurs
free in a store type; if it is free in a future store type only, then it is a generic
weak type variable; otherwise it is a non-generic weak type variable. Since types
can contain a mixture of strong and weak type variables, store types have in
effect been included among types.

³See [13], pages 90-91.
As for unification, if one unifies a weak type variable α against a type τ in which
α does not occur, then the unifying substitution is {α ↦ τ} ∪ R, where R is a
substitution mapping the strong type variables occurring in τ to new weak type
variables. (In the imperative version of the type checker, each variable has a bit
for weak/strong, and the effect of applying R is achieved by overwriting this bit.)
It is true that when the unification algorithm has to choose new type variables,
the unifier S0 it produces cannot be mediating for every unifier S, because S may
have a different idea of what should happen to the new type variables. Perhaps
this is the difficulty Damas refers to. But if one wants to be precise, the proof
of the completeness of the type checker must account for what it is for a type
variable to be new, and I claim that all that is needed of S0 is that it is mediating
for all other unifiers on the domain of used variables. The proof of the completeness
of the signature checker in Part III should give an idea of how the proof would go.
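To make the bit-overwriting concrete, here is a minimal sketch in ML of the
technique just described; the representation (a mutable weak/strong bit per
variable) is my guess at the implementation idea, not Cardelli's actual code.

    (* Sketch, not the actual implementation: each type variable carries a
       mutable weak/strong bit; binding a weak variable to a type t makes
       every variable occurring in t weak, i.e., it applies R in place. *)
    datatype ty = INT
                | ARROW of ty * ty
                | VAR of tyvar
    withtype tyvar = {weak : bool ref, inst : ty option ref}

    fun makeWeak INT = ()
      | makeWeak (ARROW (t1, t2)) = (makeWeak t1; makeWeak t2)
      | makeWeak (VAR {weak, inst}) =
          (weak := true;
           case !inst of SOME t => makeWeak t | NONE => ())

    (* Bind a weak, unbound variable v to a type t in which v does not
       occur: the strong variables of t are overwritten to weak. *)
    fun bindWeakVar (v : tyvar) (t : ty) =
          (makeWeak t; #inst v := SOME t)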

Even though Damas' system admits programs that mine rejects, there are
still sensible programs that cannot be typed by Damas. A good example is the
fold function from Example 4.5. The problem is that Damas' system too is
vulnerable to curried functions or, more generally, lambda abstractions within
lambda abstractions.
One idea to overcome this problem, due to David MacQueen, is to associate a
natural number, the rank, say, with each type variable. Moreover, type inference
is relative to the current level of lambda nesting, n, say. Intuitively, immediate
store type variables correspond to type variables of rank n; the type variables with
rank n + 1 range over types that will enter the store typing after one application
of the current expression, and so on. An applicative type variable can then be
represented as a variable with rank ∞. It would be interesting to see the rules
written down and their soundness investigated using the method of the previous
chapter.

6.3 Pragmatics
Obviously, there is a tension between the desire for very simple rules that we can
all understand and very advanced rules that admit as many sensible programs as
possible. At one end of this spectrum is the system I propose: it is very easy
to explain what it is for an expression to be non-expansive and, judging from the
examples given, the system allows a fairly wide range of programs to be typed.
Then comes Damas' system. I am not convinced that the extra complexity
it offers is justified by the larger class of programs it admits. If one wants to
be clever about lambda nesting, then one might as well take the full step and
investigate MacQueen's idea further.
Part III
A Module Language

Chapter 7
Typed Modules
The type disciplines in the previous parts are suitable when programs are com-
piled in their entirety. A small program can simply be recompiled every time it is
changed, but this becomes impractical for larger programs. Many languages pro-
vide facilities for breaking large programs into relatively independent modules. A
module can then be compiled separately provided its interface to its surroundings
is known.
Sometimes modules are defined not as the units of separate compilation but
rather as a means of delimiting scope and protecting "private" data from the
surroundings. In any case, a module is typically a collection of components, where
a component can be a value, a variable, a function, a procedure or a data type.
Other terms for "module" are "package" [41], "object" [2] and "structure" [25].
The notion of interface varies from language to language, but typically an
interface lists names together with some specification involving these names. A
module, M, matches an interface, I, if M has the components specified in I and
if these components satisfy the specifications in I. Other terms for "interface"
are "package specification" [41], "class" [2] and "signature"[25].
One often wants to write and compile a module, M2, which depends on an-
other module, M1, before M1 is written. In that case the meaning of M2 is
relative to some assumptions about the properties of M1. Let us assume that
M2 can access M1 as specified by the interface I1. Then we want to be able to
write an abstraction of the form λM1 : I1. M2. Later, we can define an actual
argument module, M, and perform the application

    (λM1 : I1. M2)(M)                                  (7.1)

to create a module in which those components of M2 that stem from the formal
parameter M1 are replaced by the corresponding components of M. The notion
of parameterized module is found in Simula [2], Ada (generic packages) [41],
CLU [23], and ML (functors) [25], among others.
Seeing things this way, it becomes obvious that interfaces play the role of
types, that relative independence of modules can be achieved by abstraction on
a typed formal parameter, and that "linking" corresponds to substitution.
An interface has now become a syntactic object. It can be used as a filter to
hide implementation details of an existing module. Alternatively, one can write
all the interfaces of a system before one writes the first real module.¹ When the
"type" of a module is easier to grasp than the module itself (and it had better be!)
working with interfaces can ease design and communication between people.

¹A textbook on top-down design would no doubt advocate the latter, but when we consider
the maintenance of a large system it seems quite reasonable that existing modules influence
subsequent interfaces.
Corresponding to the notion of type checking we get the notion of "interface
checking". For instance, in (7.1) it is highly desirable that the machine can check
that M really does match I.
As a bonus, if M matches I,, then we are sure that
the linking from M2 to M is meaningful.
There is an unpleasant ring to the distinction between "small" and "big"
programs. "What's next?", one feels like asking. As I have tried to indicate
above, many of the terms that are used at the module level are just other names
for things we are familiar with at the level of individual modules. The work by
Burstall and Lampson on the language Pebble [6] shows that one can collapse
the two layers into one general language. In Pebble one can have records that
can contain types as well as more traditional values. Such a record can be viewed
as a module. The type of such a record is a dependent type. (A type component

of a record can be significant for the type of other components). More complex
"modules" can be built using abstraction and application. Dependent sum and
dependent product suffice to give strong expressive power.
It is desirable that interface checking can be done at compile time rather than
as a part of the execution so that one can make sure that modules at least fit

together in a meaningful way before they are executed.


Large programs contain many modules and each module may be known under
different names to other modules. Therefore it is necessary to be able to specify
sharing. In fact, when we are concerned with statically typed languages, sharing
constraints may be necessary to assist the type checker. For example, in

    λM1 : I1. M2

if the interface I1 postulates two modules both containing a type called stack, it
might well be that the body of M2 makes sense only if the two stack types are
assumed to be identical. Thus one would find a sharing specification, perhaps
something like

    sharing M.stack = M'.stack

in I1.
If, by removing the distinction between small and big programs, we obtain that
modules can be handled like other values, then it becomes impossible to tell
statically whether two modules (or types) are identical.
This leads to the idea of a stratified language: at the bottom the core lan-
guage in which we write small programs; on top a module language. The module
language may allow abstraction and application, but the operations on modules
are kept simple enough that module equality and other properties we want to
check statically remain decidable.
So the main motivation for having a stratified language is that we want more
information about modules than ordinary type checking can give us.
The languages in Part I and II are typical core languages. We shall now
present a module language which has a type discipline that handles sharing and
is very similar to the previous polymorphic type disciplines.
Chapter 8

The Language ModL


Our module language is called ModL. It is a skeletal language originally studied
as part of the work on giving an operational semantics of David MacQueen's
modules proposal for ML [25], to single out the difficult semantic issues of the
full language. Yet ModL in no way depends on the ML core language; in fact the
type discipline we shall present is completely independent of which core language
is chosen.
ModL has structures, signatures, and functors. A structure corresponds to
what we in the previous chapter called a module. In ML, a structure is a col-
lection of components, where a component can be a datatype, a type, a value,
a function, an exception, or even another structure. In ModL, however, we re-
move all components except the structures, so that we can study the
structure-substructure relationship.
A signature corresponds to what we in the previous chapter called an interface.
In ML, a signature can specify a datatype with its value constructors, a type
(without saying what the type is), the types of values, functions and exceptions
and the signatures of substructures. In addition one can specify sharing between
two structures and between two type constructors. In ModL, however, signatures
just give signatures of substructures and allow sharing specifications.
A functor is a map from structures to structures. (There is no significant dif-
ference between ML and ModL here). Functors do not take functors as arguments
nor can they produce functors as results.
ModL is almost identical to the language we studied in [19], and we shall
prove the theorems conjectured there. This is not the place for a theoretical
investigation of the full ML semantics [20]. However, I do hope that what I have
to say about ModL can explain some of the decisions we made when writing the
ML semantics.
We shall first define the syntax of ModL and then give a static semantics
which enables one to infer what structures are created as a result of evaluating
a structure expression so that sharing specifications and functor applications
can be checked statically. We shall then show theorems about the semantics,
in particular that there exists an algorithm, the signature checker, that finds
so-called principal signatures of all legitimate signature expressions.

8.1 Syntax
Assume the following sets of identifiers:

    strid ∈ StrId     structure identifiers
    sigid ∈ SigId     signature identifiers
    funid ∈ FunId     functor identifiers

Substructures are accessed via long structure identifiers:

    longstrid or strid1. … .stridk (k ≥ 1) ∈ LongStrId = StrId⁺.

The syntax of ModL appears in Figure 8.1.


For the sake of the following examples imagine that we can declare and specify
types, values and functions as well as structures.
    dec    ∈ Dec      declarations
    strexp ∈ StrExp   structure expressions
    spec   ∈ Spec     specifications
    sigexp ∈ SigExp   signature expressions
    prog   ∈ Prog     programs

    dec    ::=                                      empty
               structure strid = strexp             structure
               dec1 dec2                            sequential

    strexp ::= struct dec end                       generative
               longstrid
               funid(strexp)                        functor application

    spec   ::=                                      empty
               structure strid : sigexp             structure
               spec1 spec2                          sequential
               sharing longstrid1 = longstrid2      sharing

    sigexp ::= sig spec end                         basic
               sigid

    prog   ::= dec                                  top level declaration
               signature sigid = sigexp             signature declaration
               functor funid(strid : sigexp) = strexp
                                                    functor declaration
               prog1 prog2                          sequential program

Figure 8.1: The Syntax of ModL.


Example 8.1 Here is a structure, a signature, and a functor:

    structure IntStack=
      struct
        type elt= int
        val stack= ref [ ]
        fun push(x: elt)= ...
        fun pop( ... )= ...
      end

    signature StackSig=
      sig
        type elt
        val stack: elt list ref
        fun push: elt -> ...
        fun pop: ...
      end

    functor MkStack(A: sig type elt end)=
      struct
        val stack= ref [ ]
        fun push(x: A.elt)= ...
        fun pop( ... )= ...
      end

    structure IntStack= MkStack(struct type elt= int end)

The first IntStack is an example of a basic structure expression which is just a


named collection of components. It matches the signature StackSig, because it
has at least the components required by StackSig and the components match
their specifications.
The functor MkStack allows the creation of a stack given a structure that
contains the type of stack elements. Thus we could also have obtained an integer
stack by the above functor application.
Example 8.2 Here is an example of how sharing can come about by construc-
tion.

    struct
      structure SharedStack=
        struct
          val stack= ref [ ]
          fun push(x)= ...
          fun pop( ... )= ...
        end

      structure StackUser1=
        struct
          structure A1= SharedStack
          fun f(x)= ... A1.push( ... ) ...
        end

      structure StackUser2=
        struct
          structure A2= SharedStack
          fun g( ... )= ... A2.pop ...
        end
    end

Here StackUser1 pushes SharedStack while StackUser2 pops it. The ex-
plicit dependency of both users upon the stack has been acknowledged by in-
corporating the shared stack as a substructure in each of the users, under the
identifiers A1 and A2, respectively. The above structure with its three substruc-
tures matches the following signature:
    signature Users=
      sig
        structure StackUser1:
          sig
            structure A1: sig end
          end
        structure StackUser2:
          sig
            structure A2: sig end
          end
        sharing StackUser1.A1 = StackUser2.A2
      end
Sharing is particularly important in functor declarations since the body of
the functor may only make sense provided certain sharing constraints are met.
If, for example, two stack users are mentioned in a functor parameter, it might
be important that they use the same stack. Therefore one needs to be able to
declare a functor like

functor F(A: Users)= ...


where the signature Users defined above specifies the required sharing. If we
later apply F to a structure then it must be checked statically that the actual
argument really has the required sharing.

8.2 Semantic Objects


The semantic objects are defined in Figure 8.2.
From now on we shall maintain the terminological distinction between phrase
classes and semantic objects. For instance a structure expression is a phrase (a
member of StrExp) while a structure is a member of Str. Note that structures
contain so-called structure names, whereas structure expressions do not. Struc-
ture names are used for representing sharing information; we use a, b, c, and
other small letters (in roman font) for the individual names. Structures can be
depicted as finitely branching trees of finite depth. For instance the structure
expression from Example 8.2 elaborates to the structure
    n ∈ StrName                      structure names
    N ∈ NameSet                      (structure) name sets
    E ∈ Env                          environments
    S or (n, E) ∈ Str                structures
    Σ or (N)S ∈ Sig                  signatures
    I or (S, (N')S') ∈ FunInst       functor instances
    Φ or (N)(S, (N')S') ∈ FunSig     functor signatures
    F ∈ FunEnv                       functor environments
    G ∈ SigEnv                       signature environments
    B or M, F, G, E ∈ Basis          bases

    NameSet = Fin(StrName)
    Env     = StrId →fin Str
    Str     = StrName × Env
    Sig     = NameSet × Str
    FunInst = Str × NameSet × Str
    FunSig  = NameSet × Str × NameSet × Str
    FunEnv  = FunId →fin FunSig
    SigEnv  = SigId →fin Sig
    Basis   = NameSet × FunEnv × SigEnv × Env

Figure 8.2: Semantic Objects.
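For readers who prefer to see the semantic objects as data, the following is one
possible rendering in ML; the use of int for structure names and association lists
for finite maps is an assumption of this sketch, not part of the definition.

    (* A sketch of the semantic objects. *)
    type strname = int                                   (* n *)
    type strid   = string
    datatype str = STR of strname * (strid * str) list   (* S = (n, E) *)
    type env     = (strid * str) list                    (* E *)
    type sgn     = strname list * str                    (* (N)S *)
    type funsig  = strname list * (str * sgn)            (* (N)(S,(N')S') *)
    type basis   = strname list                          (* M *)
                 * (string * funsig) list                (* F *)
                 * (string * sgn) list                   (* G *)
                 * env                                   (* E *)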


    [diagram: the structure drawn as a tree whose nodes are labelled with structure names]
Here the difference between structure names and structure identifiers is obvi-
ous.
For any structure S = (n, E) we call n the structure name of S; also, the proper
substructures of S are the members of Rng E and their proper substructures. The
substructures of S are S and its proper substructures.
A signature Σ takes the form (N)S where the prefix (N) binds names in S.
A structure is bound in (N)S if it occurs in S and its name is in N. The set of
bound structures of Σ is denoted boundstrs(Σ).
The idea is that bound names can be instantiated to other names during
signature matching, whereas free names in the signature must be left unchanged.
In the signature from Example 8.2 all names are bound since no sharing with
structures outside the signature was specified. In diagrams, nodes with bound
names are open circles, whereas free nodes are filled out. The signature from
Example 8.2 can be drawn as follows:
    [diagram: the signature from Example 8.2 drawn as a tree; all nodes are open circles since all names are bound]
We now see the first sign of a correspondence with the polymorphic type
disciplines of Parts I and II; a structure S corresponds to a type τ, while a signature
(N)S corresponds to a type scheme ∀α1…αn.τ.

There are two important aspects of signature matching. Given a signature
(N')S' and a structure S to match it, S is allowed to have more components
than S', i.e., it may be necessary to add more components to S' in order to get
S. Also, the bound names in the signature must be brought to agree with the
names in S. Actually, it is convenient formally to factor matching into these two
aspects, called enrichment and instantiation.

Definition 8.3 (Enrichment) A structure S2 = (n2, E2) enriches
S1 = (n1, E1), written S1 ≺ S2, if n1 = n2 and Dom E1 ⊆ Dom E2 and for all
strid ∈ Dom E1, E2(strid) enriches E1(strid).
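In the list representation sketched after Figure 8.2 (an assumption of these
notes, not the thesis's formalism), enrichment is directly computable:

    (* Sketch: S1 ≺ S2 — equal names, Dom E1 ⊆ Dom E2, and each
       component of S1 is enriched by the corresponding one of S2. *)
    fun enriches (STR (n1, e1), STR (n2, e2)) =
          n1 = n2 andalso
          List.all (fn (id, s1) =>
                       case List.find (fn (id', _) => id' = id) e2 of
                           SOME (_, s2) => enriches (s1, s2)
                         | NONE => false) e1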

Then matching is the combination of getting agreement on names and adding
components:

Definition 8.4 (Signature Matching) A structure S matches a signature
(N')S' if there exists a structure S0 such that (N')S' ≥ S0 ≺ S.

This leaves open the definition of signature instantiation ≥. One possibility is
to define it so that it only changes names but preserves the number of components,
and then leave all the widening to ≺. The other extreme is to allow instantiation
to do widening, so that we strengthen the requirement S0 ≺ S to S0 = S. The
two definitions give different semantics, but curious as it may seem, the same
inference rules can be used in both cases. We shall therefore defer the detailed
discussion of the two alternatives till we have presented the rules. In both cases,
the instantiation Σ ≥ S to structures extends to instantiation to signatures:

Definition 8.5 (Signature Instantiation) Σ' is an instance of Σ, written
Σ ≥ Σ', if for all S ∈ Str, Σ' ≥ S implies Σ ≥ S.

A functor signature takes the form (N)(S, (N')S'). Here the prefix (N')
binds names over S' and the prefix (N) binds over S and also over free names
in (N')S'. The part (N)S stems from the formal parameter signature of the functor
declaration. Moreover, S' is the formal result structure of the functor; the prefix
(N') binds those names of S' that must be generated afresh upon each application
of the functor, as will be explained in more detail below. Names that are free in
(N')S' and also occur in S signify sharing between the formal argument and the
formal result. An example where this occurs is

    functor F(A: sig structure A1: sig end
                     structure A2: sig end
                     sharing A1 = A2
                 end)= A.A1

This functor has the functor signature

    ({a, b})((a, {A1 ↦ (b, {}), A2 ↦ (b, {})}), (b, {}))          (8.1)

which can be drawn as follows:

    [diagram: the formal argument with its two substructures sharing the bound node b, and the formal result b]

A functor with signature (N1)(S1, (N1')S1') can be applied to an actual ar-
gument, S2, that has more components than S1. The result of the application
will be an actual result (N2)S2'. Pairs of the form (S2, (N2)S2') are called functor
instances. (The meaning of the prefix (N2) will be explained later.) Again we
split matching into instantiation and enrichment:
Definition 8.6 A functor instance (S2, (N2)S2') matches a functor signature
(N1)(S1, (N1')S1') if there exists an S0 such that (N1)(S1, (N1')S1') ≥ (S0, (N2)S2')
and S0 ≺ S2.

This leaves open the definition of functor instantiation

    (N1)(S1, (N1')S1') ≥ (S2, (N2)S2')

but again there are two natural definitions for the same inference rules, so we defer
the detailed discussion of the alternatives till later. No matter which definition
is chosen, the following functor instance matches the functor signature we drew
above.

    [diagram: a functor instance in which sharing between the formal argument and result has become sharing between the actual argument and result]
Note how sharing between the formal argument and result has been converted
into sharing between the actual argument and result.
Finally, a basis B = M, F, G, E is the semantic object in which all phrases are
elaborated. The environments F, G and E bind semantic objects to identifiers
that occur free in the phrase. Here F is a functor environment mapping functor
identifiers to functor signatures, G is a signature environment mapping signature
identifiers to signatures and E is a (structure) environment mapping structure
identifiers to structures. Finally, M is a set of structure names. It serves two
purposes.
Firstly, M records the names of all structures that have been generated so far.
This use is relevant to the elaboration of declarations, structure expressions and
programs since these are the phrase classes that can generate "new" structures.
Whenever we need a "fresh" structure name, we simply choose it disjoint from all
names in M. In general, a name can have been used although it is not currently
accessible via the F, G, or E components of the basis. Therefore M is not just
redundant information about the other components.
Secondly, M contains all names that are to be considered as "constants" in
the current basis. This becomes important when we study the signature checker
but it is worth having in mind already when reading the rules. The type checker
W discussed in Section 2.4 may perform substitutions on type variables that
are free in the type environment, but it will never replace one type constant
(int, say) with another (bool, say). Similarly, the signature checker may identify
structure names that occur in E of B but are not in M, but it will not change
structure names that occur in E of B but are not in M, but it will not change
a name which is in M. If two structures are specified to be shared and at least
one of the structures has a name which is not in M then the signature checker
will identify the two names when processing the sharing specification. However,
if the names of the two structures that are specified to share are two different
constants, then the signature checker has discovered that the sharing specification
is not met. In the applicative discipline we simply had a syntactic distinction
between type constants and type variables. That is not elegant in the modules
semantics, because the set of "constants" is not constant; in a functor declaration,
the components that have been specified in the formal parameter are to be seen
as constants local to the functor body. This use of M pertains to the elaboration
of signature expressions and specifications since it is only during the checking of
these phrases that the signature checker will try to unify structures.

8.3 Notation
Free structures and names: It is sometimes convenient to work with an arbi-
trary semantic object A, or assembly A of such objects. In general, strnames(A)
denotes the set of structure names occurring free in A. A structure is said to be
free if its name is free and strs(A) means the set of structures that occur free in
A.
We often need to change bound names in semantic objects. For arbitrary A
it is sometimes convenient to assume that all nameset prefixes N occurring in A
are disjoint. In that case we say that we are disjoining bound names in A.
Projection: We often need to select components of tuples - for example
the environment of a basis. In such cases we rely on variable names to indicate
which component is selected. For instance "E of B" means "the environment
component of B" and "n of S" means "the name of S."
Moreover, when a tuple contains a finite map we shall "apply" the tuple to
an argument, relying on the syntactic class of the argument to determine the
relevant function. For instance B(funid) means (F ofB)funid.
Finally, environments may be applied to long identifiers. For instance if
longstrid = strid1. … .stridk then E0(longstrid) means

    (E of (··· (E of (E0 strid1)) strid2 ···)) stridk.
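Under the representation sketched after Figure 8.2 (again my assumption, for
illustration only), application of an environment to a long structure identifier is
the evident iterated lookup:

    (* Sketch: E0(longstrid) for longstrid = strid1. ... .stridk,
       as iterated lookup through substructure environments. *)
    fun lookupLong (e : env, [id]) =
          Option.map #2 (List.find (fn (id', _) => id' = id) e)
      | lookupLong (e, id :: ids) =
          (case List.find (fn (id', _) => id' = id) e of
               SOME (_, STR (_, e')) => lookupLong (e', ids)
             | NONE => NONE)
      | lookupLong (_, []) = NONE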

Modification: The modification of one map f by another map g, written
f ± g, has already been mentioned. As before, a common use is environment
modification E1 ± E2. Often empty components will be left implicit in a mod-
ification. For set components, modification means union, so that for instance
B ± N means

    M of B ∪ N, F of B, G of B, E of B.

Finally, we frequently need to modify a basis by an environment E, at the same
time extending M of B to include the structure names of E. We therefore define
B ⊕ E to mean B ± (strnames E, E).
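As a concrete illustration in the assumed list representation, modification and
B ⊕ E come out as follows (name sets are rendered as lists, so duplicates are
harmless in this sketch):

    (* Sketch: e1 ± e2 — e2 shadows e1 on common identifiers. *)
    fun modify (e1 : env, e2 : env) : env =
          e2 @ List.filter (fn (id, _) =>
                   not (List.exists (fn (id', _) => id' = id) e2)) e1

    fun strnamesStr (STR (n, e)) =
          n :: List.concat (List.map (fn (_, s) => strnamesStr s) e)
    fun strnamesEnv (e : env) =
          List.concat (List.map (fn (_, s) => strnamesStr s) e)

    (* Sketch: B ⊕ E' = B ± (strnames E', E'). *)
    fun oplus ((M, F, G, E) : basis, E' : env) : basis =
          (M @ strnamesEnv E', F, G, modify (E, E'))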

8.4 Inference Rules


The rules allow us to infer sequents of the form

    B ⊢ phrase ⇒ A

where A is some semantic object. The relation is read "phrase elaborates to A
in the basis B."

8.4.1 Declarations and Structure Expressions


These rules are "monomorphic" in the sense that given B and phrase they leave
very little freedom as to what the result A can be. We shall later show that the
only freedom is in the choice of new structure names.
It might be helpful to read the rules with the understanding that in every
basis, the M component contains all the names that are free in the F, G, and E
components. (We shall later show that the rules preserve that property).

Declarations        B ⊢ dec ⇒ E

    ---------------- (8.2)
    B ⊢  ⇒ {}
    B ⊢ strexp ⇒ S
    --------------------------------------------- (8.3)
    B ⊢ structure strid = strexp ⇒ {strid ↦ S}

    B ⊢ dec1 ⇒ E1    B ⊕ E1 ⊢ dec2 ⇒ E2
    ---------------------------------------- (8.4)
    B ⊢ dec1 dec2 ⇒ E1 ± E2
Comment:

(8.4) The use of ⊕, here and elsewhere, ensures that the names generated during
the first sub-phrase are considered used during the elaboration of the second
sub-phrase.

Structure Expressions        B ⊢ strexp ⇒ S

    B ⊢ dec ⇒ E    n ∉ strnames(E) ∪ (M of B)
    --------------------------------------------- (8.5)
    B ⊢ struct dec end ⇒ (n, E)

    B(longstrid) = S
    -------------------- (8.6)
    B ⊢ longstrid ⇒ S

    B(funid) ≥ (S, (N')S')    B ⊢ strexp ⇒ S1    S ≺ S1
    N' ∩ (M of B) = ∅
    ------------------------------------------------------- (8.7)
    B ⊢ funid(strexp) ⇒ S'
Comments:

(8.5) The side condition ensures that the resulting structure receives a new
name. If the expression occurs in a functor body, the structure name will
be bound by (N') in rule 8.18. This will ensure that for each application of
the functor, by rule 8.7 (last premise), a new distinct name will be chosen
for the structure generated.

(8.7) The interpretation of this rule depends on the definition of the functor
instantiation relation, as will be discussed in Section 9.1. The purpose of
the last premise was explained above. It can always be satisfied by choosing
the bound names in the functor instance (S, (N')S') appropriately.
8.4.2 Specifications and Signature Expressions


These rules are polymorphic in the sense that given the basis and the phrase
there may be many different results of the elaboration. This is essential for
getting a simple rule for sharing. However, as with the applicative polymorphic
type discipline, the richness of results that can be inferred is not bigger than can
be captured by a notion of principality. In the modules semantics the notion is
as follows:

Definition 8.7 (Principal Signatures) A signature Σ is principal (for sigexp
in B) if B ⊢ sigexp ⇒ Σ and, for all Σ' satisfying B ⊢ sigexp ⇒ Σ', we have
Σ ≥ Σ'.

Intuitively, Σ is principal if it has precisely the components and the sharing
implied by sigexp.¹ The definition of principality is used in the inference rules
concerning programs.

¹For each of the two definitions of signature instantiation, "implied" means a different thing.

Specifications        B ⊢ spec ⇒ E

    ---------------- (8.8)
    B ⊢  ⇒ {}

    B ⊢ sigexp ⇒ (∅)S
    ----------------------------------------------- (8.9)
    B ⊢ structure strid : sigexp ⇒ {strid ↦ S}

    B ⊢ spec1 ⇒ E1    B ± E1 ⊢ spec2 ⇒ E2
    ------------------------------------------ (8.10)
    B ⊢ spec1 spec2 ⇒ E1 ± E2

    n of B(longstrid1) = n of B(longstrid2)
    --------------------------------------------- (8.11)
    B ⊢ sharing longstrid1 = longstrid2 ⇒ {}
Comments:

(8.10) Here ± is used instead of ⊕ because the elaboration of spec1 cannot
generate any new constant structures.
(8.11) The premise is a simple test of identity of names. The liberty that
rules 8.12 and 8.13 give to choose names can be used to achieve the sharing
before it is tested.²

²While humans can make such predictive choices when doing proofs in this formal sys-
tem, the signature checker will try to unify the two structures upon encountering the sharing
specification.

Signature Expressions        B ⊢ sigexp ⇒ Σ

    B ⊢ spec ⇒ E
    -------------------------------- (8.12)
    B ⊢ sig spec end ⇒ (∅)(n, E)

    B(sigid) = Σ
    ---------------- (8.13)
    B ⊢ sigid ⇒ Σ

    B ⊢ sigexp ⇒ (N)S    n ∉ strnames B
    ---------------------------------------- (8.14)
    B ⊢ sigexp ⇒ (N ∪ {n})S

    B ⊢ sigexp ⇒ Σ'    Σ' ≥ Σ
    ------------------------------ (8.15)
    B ⊢ sigexp ⇒ Σ
Comments:

(8.12) In contrast to rule 8.5, n is not here required to be new. The name n
may be chosen to achieve the sharing required in rule 8.11.

(8.14) This rule is called the generalization rule.

(8.15) This rule is called the instantiation rule. Its interpretation obviously
depends on the definition of signature instantiation.³ Regardless of which
of the two definitions we take, the instance is not determined by the rule;
the generalization and instantiation rules are often used to achieve sharing
properties.

³Exactly how will be discussed in Section 9.1.

8.4.3 Programs

These rules are monomorphic in the same sense as the rules for declarations and
structure expressions.
Programs        B ⊢ prog ⇒ B'

    B ⊢ dec ⇒ E
    ------------------------------------------ (8.16)
    B ⊢ dec ⇒ (strnames E, {}, {}, E)

    B ⊢ sigexp ⇒ Σ    Σ principal for sigexp in B
    ------------------------------------------------------------------ (8.17)
    B ⊢ signature sigid = sigexp ⇒ (strnames Σ, {}, {sigid ↦ Σ}, {})

    B ⊢ sigexp ⇒ (N)S    (N)S principal for sigexp in B
    N ∩ M of B = ∅
    B ⊕ {strid ↦ S} ⊢ strexp ⇒ S'
    N' = strnames(S') \ (N ∪ (M of B))    Φ = (N)(S, (N')S')
    --------------------------------------------------------------------------- (8.18)
    B ⊢ functor funid(strid : sigexp)=strexp ⇒ (strnames Φ, {funid ↦ Φ}, {}, {})

    B ⊢ prog1 ⇒ B1    B + B1 ⊢ prog2 ⇒ B2
    ------------------------------------------ (8.19)
    B ⊢ prog1 prog2 ⇒ B1 + B2
Comments:

(8.17) The principality requirement ensures that the signature bound to sigid
has exactly the components and sharing implied by sigexp.

(8.18) Here (N)S is required to be principal so as to have exactly the components
and the sharing implied by sigexp. The requirement N ∩ M of B = ∅ ensures
that no accidental sharing is assumed between S and B. Since ⊕ is used,
any structure name n in S acts like a constant in the functor body; in
particular, it ensures that further names generated during elaboration of
the body are distinct from n. The set N' is the set of names that, as an
addition to the names in the original basis and the names in S, have been
generated by the elaboration of the functor body. Since it is the application
of the functor that creates new names, not the declaration of it, these names
are all bound in Φ.
Chapter 9

Foundations of the Semantics


The inference rules allow us, given some assembly of semantic objects, to elaborate
programs, hence producing a modified assembly of semantic objects. The semantic
objects that make up an assembly must be consistent with each other. For
example, sharing must be hereditary, so both of the following structures cannot
be in the same assembly:

    [diagram: two structures, both named a, whose common component has name b in one and name c in the other]

The static elaboration may change the underlying assembly of semantic ob-
jects but only as long as the consistency conditions are maintained. Therefore
the rules are only part of the semantics. The foundation on which they are built
is the definition of what an admissible assembly of semantic objects is.
In this chapter we shall describe two different definitions of admissibility.
They give rise to different notions of signature instantiation and functor instan-
tiation and two quite different interpretations of the inference rules.

9.1 Coherence and Consistency


At least we must require that sharing is hereditary in the following sense:

Definition 9.1 (Consistency) A semantic object A or assembly A of objects is
said to be consistent if, after disjoining bound names, for all S1 and S2 in A and
for every longstrid, if n of S1 = n of S2 and both S1(longstrid) and S2(longstrid)
exist, then n of S1(longstrid) = n of S2(longstrid).
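In the list representation sketched in Chapter 8 (an assumption of these notes),
the hereditary agreement of a pair of structures can be checked recursively; an
assembly is consistent when every pair of structures occurring in it passes this
test:

    fun nameOf (STR (n, _)) = n

    (* Sketch: if the two names agree, every common component must again
       agree on names, hereditarily; paths present in only one of the two
       structures are unconstrained. *)
    fun agree (STR (n1, e1), STR (n2, e2)) =
          n1 <> n2 orelse
          List.all (fn (id, s1) =>
                       case List.find (fn (id', _) => id' = id) e2 of
                           SOME (_, s2) => nameOf s1 = nameOf s2
                                           andalso agree (s1, s2)
                         | NONE => true) e1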
Given the definition of Str, the natural definition of structure equality is:

Definition 9.2 (Structure Equality) Two structures (n, E) and (n', E') are
equal, written (n, E) = (n', E'), if n = n' and E = E'. Two environments E and
E' are equal, written E = E', if Dom(E) = Dom(E') and for all strid ∈ Dom(E)
one has E(strid) = E'(strid).

The definition of consistency should be compared carefully with the following
definition.

Definition 9.3 (Coherence) A semantic object A or assembly A of objects is
said to be coherent if, after disjoining bound names, for all S1 and S2 in A, if
n of S1 = n of S2 then S1 = S2.

Then it will be obvious that coherence implies consistency, but not the other
way around. For instance the assembly consisting of the three structures

    [diagram: two structures share the name a, but one has an A component named b where the other has a B component named c]

is consistent, but not coherent.


Coherence is natural: to talk about a substructure being shared among sev-
eral structures almost grammatically implies that a shared structure is one real
"thing" with a certain number of components regardless of the structures of which
it is a substructure.
This is certainly so for the semantic objects phrases evaluate to in the dy-
namic semantics. But in the static semantics we can think of two different, but
consistent, structures as being two different, but consistent, views (or approxima-
tions) of an object in the dynamic semantics. Thus the three structures drawn
above "hide" different information.
We shall now demonstrate that if we choose appropriate definitions of sig-
nature and functor instantiations based on coherence then the static semantics
will predict the creation of structures without missing out any of the components
that will be there at run-time. Later we shall see in more detail how consistency
leads to a semantics with information hiding.
9.1.1 Coherence
First let us consider signature matching. Recalling Definition 8.4, if S0 and S
are coherent and S0 ≺ S, then S0 = S. Therefore S matches (N')S' precisely if
(N')S' ≥ S. Thus the instantiation will have to do both the change of bound
names and the widening.

If two substructures S1' and S2' of S' have the same name then they must
be matched by structures S1 and S2 with n of S1 = n of S2. By coherence,
n of S1 = n of S2 implies S1 = S2. Thus matching could be described by a map
from names in S' to substructures of S. By coherence of S', this is equivalent
to a map from substructures of S' to substructures of S. The change of names
together with the widening is captured by the following definition:

Definition 9.4 (Realisation) A map φ : Str → Str is called a realisation
if, for all S ∈ Str and all strid, if S(strid) exists then (φ S)(strid) exists and
φ(S(strid)) = (φ S)(strid).

Notice that realisations preserve existing paths: if S(longstrid) exists then
(φ S)(longstrid) exists. They also preserve sharing:
if S(longstrid1) = S(longstrid2) then (φ S)(longstrid1) = (φ S)(longstrid2).

Realisations in the modules semantics correspond to substitutions in the other
polymorphic type disciplines. The technology of substitutions carries over to real-
isations. A finite realisation is a realisation restricted to a finite set of structures.
The domain and range of a finite realisation are denoted Dom φ and Rng φ.
Moreover, when φ is finite, the region of φ, written Reg φ, is the set of names
that occur in the range of φ. Thus Dom φ and Rng φ are sets of structures while
Reg φ is a set of structure names.

Realisations can be applied to environments by pointwise application, i.e., we
define φ E = φ ∘ E. In particular, we have that φ{} = {}.
Any map α : StrName → StrName naturally extends to a realisation map,
written α♮, that changes names without adding components:

    α♮(n, E) = (α(n), α♮(E)).

Any finite renaming ρ of the form

    {ni ↦ ni' | 1 ≤ i ≤ k}

can first be extended to a total map on StrName by letting it be the identity
outside {n1, …, nk}, and this map then induces a realisation map, denoted ρ♮.
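In the assumed list representation, the realisation induced by a (total) renaming
of names is simply a recursive relabelling; it changes names but never adds
components:

    (* Sketch: the realisation map induced by a renaming of structure names. *)
    fun rename (rho : strname -> strname) (STR (n, e)) =
          STR (rho n, List.map (fn (id, s) => (id, rename rho s)) e)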
A set A of structures is said to be substructure closed if, whenever it contains
a structure S, it also contains all the proper substructures of S. Whenever
φ is a realisation map and A is substructure closed, we can extend the finite
realisation φ0 = φ ↾ A to a total realisation, written φ̄0, defined by

    φ̄0(n, E) = (n, φ̄0 E)     if (n, E) ∉ A,
    φ̄0(n, E) = φ0(n, E)      if (n, E) ∈ A.

We shall often omit the ♮ from α♮ and ρ♮, and the bar from φ̄0.

In general, it is not true that any finite map from structures to structures,

    {Si ↦ Si' | 1 ≤ i ≤ k},

determines a realisation map.¹

¹However, as we shall see in Chapter 11, there is a natural sufficient condition.
The ordinary function composition of two realisation maps is a realisation
map. The identity map, ID, from structures to structures is a realisation map,
and it is the identity for the composition of realisations. We shall often write
φ2 φ1 S to mean (φ2 ∘ φ1)S, i.e., φ2(φ1(S)).

A realisation φ is said to glide on a structure S = (n, E) if φ S = (n, φ E).
The support of φ, written Supp φ, is the set of structures on which φ does not
glide, i.e., where φ changes the name or adds more components.
We can now define signature instantiation as follows:

Definition 9.5 (Signature Instantiation) A structure S is an instance of a
signature (N')S' if there exists a realisation φ such that φ(S') = S and Supp(φ) ⊆
boundstrs((N')S').

Again we notice a strong resemblance with the earlier polymorphic disciplines.
Just as instantiation was earlier defined by substitution on bound variables,
instantiation is now defined as realisation on bound structures.

A signature Σ is well-formed if strs Σ is substructure closed. This is the same
as demanding that, writing Σ in the form (N)S, whenever (n, E) is a substructure
of S and n ∉ N, then N ∩ strnames E = ∅.


We shall now define a relation Σ1 —φ→ Σ2 which defines what it is to apply a re-
alisation φ to a signature Σ1. The idea is that φ is applied to the free structures of
Σ1, if necessary doing a simultaneous renaming of the bound names of Σ1 to avoid
capture of names. This only makes sense provided Σ1 is well-formed (recall that


this means that no free structure contains a bound structure). More generally, if
ρ (for renaming) is a finite bijection from structure names to structure names, A
is a substructure closed set of structures such that strnames(A) ∩ Dom(ρ) = ∅,
and φ0 is a finite realisation whose domain is A, then the simultaneous compo-
sition of φ0 and ρ, written φ0|Aρ, is the realisation defined as follows. For all
S = (n, E) ∈ Str,

    (φ0|Aρ)S = φ0(S)               if S ∈ A;
    (φ0|Aρ)S = (ρ(n), (φ0|Aρ)E)    if n ∈ Dom ρ;
    (φ0|Aρ)S = (n, (φ0|Aρ)E)       otherwise.

It is easy to check that φ0|Aρ really is a realisation. We shall often omit the
subscript A in φ0|Aρ since A has to be the domain of φ0.

Definition 9.6 Let Σ1 = (N1)S1 and Σ2 = (N2)S2 be signatures and φ be a
realisation. We write Σ1 —φ→ Σ2 if Σ1 is well-formed and

1. N2 ∩ Reg(φ0) = ∅, and

2. there is a bijection ρ : N1 → N2 such that (φ0|Aρ)S1 = S2,

where A = strs Σ1 and φ0 = φ ↾ A. Likewise, one can define the relation
A1 —φ→ A2 for any semantic objects A1, A2 or assemblies A1, A2 of semantic
objects.

We write A1 ≡ A2 as a shorthand for A1 —ID→ A2, where ID is the identity
realisation. Note that this is the usual notion of α-conversion.

Now let us consider functor signature instantiation. As with signature instan-
tiation, the definition of functor matching (Definition 8.6) becomes simpler be-
cause of coherence: a pair (S2, (N2)S2') matches a functor signature
(N1)(S1, (N1')S1') if (N1)(S1, (N1')S1') ≥ (S2, (N2)S2'). The idea is that the match-
ing of the actual argument against the formal parameter induces a realisation
which is applied to the formal result to get the actual result:

    formal argument          formal result
          | φ                      | φ
          v                        v
    actual argument          actual result

Formally:

Definition 9.7 (Functor Instantiation) Given Φ = (N1)(S1, (N1')S1'),
a functor instance (S2, (N2)S2') is an instance of Φ, written Φ ≥ (S2, (N2)S2'), if
there exists a realisation φ such that (S1, (N1')S1') —φ→ (S2, (N2)S2') and Supp φ ⊆
boundstrs((N1)S1).

Since φ performs widening as well as change of names, the actual result may
have more components than the formal result. For example the following functor
instance

    [diagram omitted]                                  (9.1)

is an instance of the functor signature (8.1). Another example is the functor

    functor F(A: sig end)= A

which has the functor signature ({n})((n, {}), (∅)(n, {})). In any instance of this
functor signature the actual argument and result will be identical, so F acts like
the identity functor.

Now let us review the inference rules that in interesting ways depend on these
definitions. Rule 8.7 simplifies to

    B(funid) ≥ (S, (N')S')    B ⊢ strexp ⇒ S
    N' ∩ (M of B) = ∅
    --------------------------------------------- (9.2)
    B ⊢ funid(strexp) ⇒ S'

and the thing to note is that the actual result, S', may have more components
than the formal result; in fact no components of the actual result are lost as a
result of the functor application.
Rule 8.11 concerning sharing is now effectively a test for structure identity,
as by coherence the premise implies B(longstrid1) = B(longstrid2). Structure
identity can be achieved by the generalization and instantiation rules, which now
allow widening as well as name instantiation. An example of this is the signature
expression

    sig
      structure A:
        sig structure A1: sig end
        end
      structure B:
        sig structure A2: sig end
        end
      sharing A = B
    end

which has principal signature

    [diagram: A and B denote one and the same bound structure, which has both an A1 and an A2 component]

where A has an A2 component and B an A1 component.

9.1.2 Consistency
First, let us consider signature matching. We can now allow S to match Σ, where
S is

    [diagram omitted]
and Σ is

    [diagram omitted]

but since here we have an example of one substructure of the signature being
matched by two different substructures, we can no longer describe the matching
by a realisation map from structures to structures. Instead we let a realisation
simply be a map from names to names and leave the widening to the enrichment
relation ≺.
Definition 9.8 (Realisation) A realisation is a map φ : StrName → StrName.

Now Supp φ means the set of n for which φ n ≠ n, and the definition of
signature instantiation becomes:

Definition 9.9 (Signature Instantiation) A structure S is an instance of a
signature (N')S' if there exists a realisation φ such that φ(S') = S
and Supp(φ) ⊆ N'.

The definition of simultaneous composition φ0|Aρ is revised in the obvious
way, and Definition 9.6 changes slightly to:

Definition 9.10 Let Σ1 = (N1)S1 and Σ2 = (N2)S2 be signatures and φ be a
realisation. We write Σ1 —φ→ Σ2 if Σ1 is well-formed and

1. N2 ∩ Reg(φ0) = ∅, and

2. there is a bijection ρ : N1 → N2 such that (φ0|Aρ)S1 = S2,

where A = strnames Σ1 and φ0 = φ ↾ A. Likewise, one can define the relation
A1 —φ→ A2 for any semantic objects A1, A2 or assemblies A1, A2 of semantic
objects.

Now let us consider functor signature instantiation. Definition 9.7 is modified
slightly to:

Definition 9.11 (Functor Instantiation) Given Φ = (N1)(S1, (N1')S1'), a
functor instance (S2, (N2)S2') is an instance of Φ, written Φ ≥ (S2, (N2)S2'),
if there exists a realisation φ such that (S1, (N1')S1') —φ→ (S2, (N2)S2')
and Supp φ ⊆ N1.
However, since realisation no longer performs widening, the actual result will
always have exactly the same number of components as the formal result. So
now the functor instance

    [diagram omitted]

is an instance of (8.1), which is now matched by

    [diagram omitted]

Notice that (9.1) no longer matches (8.1). More extremely,

    functor F(A: sig end)= A

now is the functor that cuts out all the components of its argument.

As to the inference rules, we have already noted that in the rule for functor
application (8.7) the formal and actual results have the same number of compo-
nents. The instantiation rule no longer admits widening, but that is no longer
needed since the sharing rule only tests for name equality. Hence the signature
expression
    sig
      structure A:
        sig structure A1: sig end
        end
      structure B:
        sig structure A2: sig end
        end
      sharing A = B
    end

now has the principal signature

    [diagram: A and B have the same name, but A retains only its A1 component and B only its A2 component]

Since fewer paths are present in principal signatures, fewer functor declara-
tions (rule 8.18) will be admitted.

9.1.3 Coherence Only


For the rest of this thesis we shall examine the coherent semantics in detail. It
would have been nice to investigate the consistent semantics in a similar way,
but I am afraid that this thesis is already getting a bit long. Historically, all the
following work was done assuming coherence, at a point in time when it was
not clear that the essential choice one has to make is between coherence and
consistency, because the rest of the definitions depend on this choice. In the ML
semantics [20] we actually ended up deciding on the consistent semantics because
it was felt that explicit signature constraints

    structure strid : sigexp = strexp

had to coerce the number of components of the structure to which strexp elab-
orates down to the number of components mentioned in sigexp without ruining
sharing information between the full and the cut down version of the structure.
This cannot be done in the coherent semantics, but it can be done in the consis-
tent semantics.
It is dangerous to claim that the results I prove in the coherent semantics
carry over to the consistent semantics. However, I would be very surprised if
it should turn out that the main results (the existence of principal unifiers and
signatures) did not carry over.
I shall now turn to other restrictions that semantic objects must satisfy in
order to be admissible.

9.2 Well-formedness and Admissibility


We have already defined what it is for a signature to be well-formed (Sec-
tion 9.1.1). A functor signature (N)(S, (N')S') is well-formed if (N)S, (N')S'
and (N ∪ N')S' are well-formed and N ∩ strnames((N')S') ⊆ strnames S.

An object or assembly A is well-formed if every signature and functor signa-
ture occurring in A is well-formed.
An object or assembly A is admissible if it is coherent and well-formed. Beware
the difference between, say, the statement "the structure S is admissible and the
structure φ(S) is admissible" and "the assembly {S, φ(S)} is admissible." The
latter is stronger than the former, as it requires that the set of all structures that
occur in S or in φ(S) be coherent.
For any M ∈ NameSet, the set of M-structures, written M-Str, is

    M-Str = {S ∈ Str | n of S ∈ M}.

Moreover, for any object or assembly A, we define the (free) M-structures of A,
written M-strs A, to mean M-Str ∩ strs A, i.e.,

    M-strs A = {S ∈ Str | n of S ∈ M and S occurs free in A}.

We do not impose admissibility constraints on the inference rules for dec-


larations, structure expressions and programs, as these rules will be shown to
preserve admissibility automatically. This is not true for the rules concerning
specifications and signature expressions, so we impose the following constraints:
Definition 9.12 (Global Constraints) Let B = M, F, G, E. A conclusion
B ⊢ spec ⇒ E1 is admissible if

    {B, E1} is admissible                                      (9.3)
    strnames F ∪ strnames G ⊆ M                                (9.4)
    M-strs E and M-strs E1 are substructure closed.            (9.5)

Similarly, a conclusion B ⊢ sigexp ⇒ Σ is admissible if

    {B, Σ} is admissible                                       (9.6)
    strnames F ∪ strnames G ⊆ M                                (9.7)
    M-strs E and M-strs Σ are substructure closed.             (9.8)

A proof P of B ⊢ spec ⇒ E1 or B ⊢ sigexp ⇒ Σ is admissible if every conclusion
in P is admissible.

In what follows we require that all proofs be admissible, so if we write for
example "if B ⊢ sigexp ⇒ Σ then …", it is always to be read "if there exists an
admissible proof of B ⊢ sigexp ⇒ Σ then …".

A basis B = M, F, G, E is said to be robust if it is admissible, strnames F ⊆
M, strnames G ⊆ M, and M-strs E is substructure closed. B is said to be strictly
robust if it is robust and strnames E ⊆ M.

For bases in which signature expressions and specifications are elaborated,
robustness is tacitly assumed; see Definition 9.12. In a strictly robust basis, all
free structures are considered as "constants".² In the theorems concerning the
elaboration of structure expressions, declarations and programs, bases will often
be assumed strictly robust.

²See the discussion in Section 8.4.
Realisation of bases is defined as follows. Assume B = M, F, G, E and B' =
M', F', G', E'. We define

    B —φ→ B'

to mean F —φ→ F', G —φ→ G', and φ(E) = E'. We write B —φ→ᵣ B' if B and B' are
robust and B —φ→ B'. Similarly, we write B —φ→ₛ B' if B and B' are strictly robust
and B —φ→ B'.

If B is a robust basis, some of the structures in E of B, namely those whose
name is not in M of B, may be affected by realisation. Therefore, among all
realisations, the following are of particular use:
Definition 9.13 (M-realisation) An M-realisation is a realisation φ with
Supp φ ∩ M-Str = ∅.

If B = M, F, G, E is a robust basis and φ is an M-realisation, then φ is the
identity on all structures free in F and G, so F —φ→ F and G —φ→ G. Thus it makes
sense to define the application of φ to B, written φ B, to be (M, F, G, φ E).

As in the other type disciplines, the notion of closure is important.

Definition 9.14 For any two objects or assemblies A, B, the B-closure of A,
written ClosB A, is (N)A, where N = strnames A \ strnames B. We abbreviate
Clos∅ A to Clos A.
9.3 Two Lemmas about Instantiation


The following lemma gives a very useful characterization of signature instantia-
tion.

Lemma 9.15 (Characterization of ≥) We have

    (N)S ≥ (N')S'  ⟺  (N)S ≥ S' and N' ∩ strnames((N)S) = ∅.
Proof. We first prove the implication from left to right. Assume (N)S ≥
(N')S'. Clearly (N')S' ≥ S'. Thus by Definition 8.5 we have (N)S ≥ S'.

We prove N' ∩ strnames((N)S) = ∅ indirectly. Assume
n' ∈ N' ∩ strnames((N)S). Let n be a name different from n'. Then {n' ↦ n} is
a realisation and Supp({n' ↦ n}) ∩ strs S' ⊆ boundstrs((N')S'). Thus (N')S' ≥
{n' ↦ n}S'. By Definition 8.5 this implies (N)S ≥ {n' ↦ n}S'. But that is
impossible; n' occurs free in (N)S and hence in any instance of it, but it obviously
does not occur in {n' ↦ n}S'. Thus N' ∩ strnames((N)S) = ∅.

To prove the implication from right to left, assume (N)S ≥ S' and N' ∩
strnames((N)S) = ∅. Assume (N')S' ≥ S0 and prove (N)S ≥ S0. By as-
sumption there exists a realisation ψ such that ψ S' = S0 and Supp ψ ∩ strs S' ⊆
boundstrs((N')S'), and there exists a realisation φ with φ S = S' and
Supp φ ∩ strs S ⊆ boundstrs((N)S). Thus ψ ∘ φ is a realisation with ψ ∘ φ(S) =
S0. Moreover, if S1 ∈ strs((N)S) then φ glides on S1, so n of φ S1 = n of S1,
which is not bound in (N')S' since by assumption N' ∩ strnames((N)S) = ∅.
Therefore ψ glides on φ S1. Hence ψ ∘ φ glides on all free structures of (N)S, i.e.,
Supp(ψ ∘ φ) ∩ strs S ⊆ boundstrs((N)S). This shows (N)S ≥ S0 as required.


Nothing in the proof required admissibility. However, as a corollary we im-
mediately get

Corollary 9.16 If Σ ≥ Σ' and Σ is well-formed then strs Σ ⊆ strs Σ'.

The relation Σ ≥ Σ' is obviously reflexive and transitive. Thus we get an
equivalence relation ≅ defined by

    Σ ≅ Σ'  ⟺  Σ ≥ Σ' and Σ' ≥ Σ.

The following lemma gives a characterization of this equivalence. Basically,
two signatures are equivalent if and only if one can be obtained from the other
by renaming of bound names together with deletion or insertion of redundant
bound variables. More precisely, defining boundnames((N)S) to mean the set
N ∩ strnames S, we have:

Lemma 9.17 (Characterization of ≅) Assume that {S, S'} is coherent. Then the following two statements are equivalent:

1. (N)S ≅ (N')S'

2. (a) strnames((N)S) = strnames((N')S'), and
   (b) there exists a bijection α : boundnames((N)S) → boundnames((N')S'), extended to a realisation in the obvious way, with α(S) = S'.

Proof. First assume (2). We have α(S) = S', Supp(α) ∩ strs S ⊆ boundstrs((N)S) and N' ∩ strnames((N)S) = N' ∩ strnames((N')S') = ∅, so (N)S ≥ (N')S' by lemma 9.15.
To prove (N')S' ≥ (N)S, we first prove that α⁻¹ ∘ α is the identity on all substructures of S. (In general, it is not the identity everywhere.) Take (n, E) ∈ strs S. If n ∈ N then α(n, E) = (α n, α E), so (α⁻¹ ∘ α)(n, E) = ((α⁻¹ ∘ α)n, (α⁻¹ ∘ α)E) = (n, (α⁻¹ ∘ α)E), i.e., α⁻¹ ∘ α glides on (n, E). Otherwise n ∉ N, so α glides on (n, E). Now n is free in (N')S' by (a), so α⁻¹ glides on α(n, E). Thus α⁻¹ ∘ α glides on (n, E) in this case as well. Hence α⁻¹ ∘ α is the identity on strs S.
Now α⁻¹(S') = (α⁻¹ ∘ α)(S) = S; Supp(α⁻¹) ∩ strs S' ⊆ boundstrs((N')S'); and N ∩ strnames((N')S') = N ∩ strnames((N)S) = ∅, so (N')S' ≥ (N)S by lemma 9.15. Hence (N)S ≅ (N')S'.
Conversely, assume

(N)S ≥ (N')S' (9.9)

(N')S' ≥ (N)S. (9.10)

Then by lemma 9.15 there exist realisations φ and ψ such that

Supp(φ) ⊆ boundstrs((N)S) ∧ φ(S) = S' ∧ N' ∩ strnames((N)S) = ∅ (9.11)

Supp(ψ) ⊆ boundstrs((N')S') ∧ ψ(S') = S ∧ N ∩ strnames((N')S') = ∅ (9.12)

Thus ψ ∘ φ and φ ∘ ψ are realisations with ψ ∘ φ(S) = S and φ ∘ ψ(S') = S'. The identity is such a realisation, so ψ ∘ φ must coincide with the identity on strs S and φ ∘ ψ must coincide with the identity on strs S':

∀S₁ ∈ strs S. ψ ∘ φ(S₁) = S₁ (9.13)

∀S₁ ∈ strs S'. φ ∘ ψ(S₁) = S₁. (9.14)

Since φ maps substructures of S to substructures of S' and ψ maps substructures of S' to substructures of S, these equations express that φ is a bijection from strs S to strs S' with inverse ψ. Moreover, φ and ψ may change structure names, but they cannot add components:

∀(n, E) ∈ strs S. ∃n'. φ(n, E) = (n', φ E) (9.15)

∀(n', E') ∈ strs S'. ∃n. ψ(n', E') = (n, ψ E'). (9.16)

(This is easy to prove using (9.13) and (9.14).)
Now φ maps free structures of (N)S to free structures of (N')S': if (n, E) ∈ strs((N)S) then φ glides on (n, E) by (9.11), so n of φ(n, E) = n. But N' ∩ strnames((N)S) = ∅ by (9.11), so n ∉ N'. Thus φ(n, E) is free in (N')S'. Similarly ψ maps free structures of (N')S' to free structures of (N)S. But that means that φ : strs((N)S) → strs((N')S') is a bijection with inverse ψ : strs((N')S') → strs((N)S). This in turn gives that φ : boundstrs((N)S) → boundstrs((N')S') makes sense and is a bijection with inverse ψ : boundstrs((N')S') → boundstrs((N)S).
Since φ is a bijection on the free structures and since φ glides on the free structures, we immediately have the desired (a).
To define the desired α, note that for each n ∈ boundnames((N)S) there is exactly one E such that (n, E) ∈ strs S. This is where the coherence of S is used. Thus we can define

α(n) = n of φ(n, E).

Similarly, by the coherence of S' we define α' : boundnames((N')S') → boundnames((N)S) by α'(n') = n of ψ(n', E'), where (n', E') is the unique structure in strs S' whose name is n'.
Since φ is a bijection on the bound structures with inverse ψ, α is a bijection from boundnames((N)S) onto boundnames((N')S') with inverse α'.
By (9.15), α coincides with φ on strs S, so by (9.11) we get α(S) = S', showing (b). □
Finally, here is a little lemma that says that functor instantiation is deterministic up to the choice of new bound names. The lemma follows easily from definition 9.7.

Lemma 9.18 Let Φ = (N₁)(S₁, (N₁')S₁') and assume (N₁)S₁ ≥ S₂. Then there exists an (N₂)S₂' such that Φ ≥ (S₂, (N₂)S₂'). Moreover, if Φ ≥ (S₂, (N₂)S₂') and Φ ≥ (S₂, (N₂'')S₂'') then (N₂)S₂' ≅ (N₂'')S₂''.
Chapter 10
Robustness Results
In this chapter we shall state a number of results relating realisation and elaboration. Just as one can get a better understanding of a program by stating and proving properties of it, one can get a better understanding of an inference system by stating and proving properties of it. So fundamental are the results we shall now present that if they did not hold, we would feel that the semantics would have to be changed. Moreover, we shall use most of the results in subsequent proofs.
To avoid unreasonable punishment of readers who do not want to study the details, most of the proofs are deferred to Appendix A.

10.1 Realisation and Instantiation


We first prove that realisation respects signature instantiation. Then we prove
that signature inference is preserved under realisation.

Lemma 10.1 If Σ is admissible and Σ ≥ S' and Σ –φ→ Σ₁ then Σ₁ ≥ φ S'.

The result extends to signatures:

Lemma 10.2 If Σ –φ→ Σ₁ and Σ' –φ→ Σ₁' and Σ ≥ Σ' and Σ is admissible then Σ₁ ≥ Σ₁'.

In definition 9.6 we boldly claimed that we could define the relation A₁ –φ→ A₂ for any semantic objects. Functor signatures require a bit of care, though:

Definition 10.3 Let Φ₁ = (N₁)(S₁, (N₁')S₁') and Φ₂ = (N₂)(S₂, (N₂')S₂') be well-formed functor signatures and φ be a realisation. We write Φ₁ –φ→ Φ₂ if there exists a bijection ρ₁ : N₁ → N₂ and a bijection ρ : N₁ ∪ N₁' → N₂ ∪ N₂' extending ρ₁ such that

(a) Rng φ₁ ∩ N₂ = ∅ and (φ₁ + ρ₁) S₁ = S₂, where φ₁ = φ ↓ strs((N₁)S₁);

(b) Rng φ₂ ∩ (N₂ ∪ N₂') = ∅ and (φ₂ + ρ) S₁' = S₂', where φ₂ = φ ↓ strs((N₁ ∪ N₁')S₁').

This is stronger than requiring (N₁)S₁ –φ→ (N₂)S₂ and (N₁ ∪ N₁')S₁' –φ→ (N₂ ∪ N₂')S₂' because having one common renaming ensures that sharing between bound structures in (N₁)S₁ and free structures in (N₁')S₁' is preserved. When N₁ = ∅, definition 10.3 simplifies to the obvious definition of

(S₁, (N₁')S₁') –φ→ (S₂, (N₂')S₂').

We can now prove that functor signature instantiation is preserved under realisation:

Lemma 10.4 For all functor signatures Φ₁, Φ₂ and functor instances I₃, I₄: if Φ₁ is admissible and Φ₁ ≥ I₃ and Φ₁ –φ→ Φ₂ and I₃ –φ→ I₄, then Φ₂ ≥ I₄.

We can now describe under which conditions admissible signature inferences are transformed into admissible signature inferences by realisation.

Theorem 10.5 Let B = M, F, G, E and B' = M', F', G', E' be bases with B –φ→rb B'. If

B ⊢ spec ⇒ E₁ (10.1)

M'-strs(φ E₁) is substructure closed (10.2)

{B', φ E₁} is coherent (10.3)

then

B' ⊢ spec ⇒ φ E₁.

Similarly, if

B ⊢ sigexp ⇒ Σ and Σ –φ→ Σ' (10.4)

M'-strs Σ' is substructure closed (10.5)

{B', Σ'} is coherent (10.6)

then

B' ⊢ sigexp ⇒ Σ'.

Recall that when we write B' ⊢ sigexp ⇒ Σ' we implicitly refer to admissible proofs only (cf. definition 9.12). Some of the admissibility constraints listed in that definition follow from the assumptions (10.1) and (10.4), but others have to be ensured by (10.2), (10.3) and (10.5), (10.6).
We shall mostly use the above theorem in the situation where φ is an M-realisation:

Corollary 10.6 Let B = M, F, G, E be a robust basis and φ be an M-realisation such that φ B is robust. If

B ⊢ spec ⇒ E₁

M-strs(φ E₁) is substructure closed

{φ B, φ E₁} is coherent

then

φ B ⊢ spec ⇒ φ E₁.

Similarly, if

B ⊢ sigexp ⇒ Σ and Σ –φ→ Σ'

M-strs(Σ') is substructure closed

{φ B, Σ'} is coherent

then

φ B ⊢ sigexp ⇒ Σ'.

10.2 The Strict Rules Preserve Admissibility


The rules for signature expressions and specifications were forced to respect ad-
missibility (c.f. definition 9.12). No such constraints are needed for the remaining
rules because, as we shall now show, they "automatically" preserve admissibility.

Theorem 10.7 Assume B = M, F, G, E is strictly robust.

If B ⊢ dec ⇒ E then {B, E} is coherent,
if B ⊢ strexp ⇒ S then {B, S} is coherent, and
if B ⊢ prog ⇒ B' then {B, B'} is coherent.

This theorem is a consequence of a stronger theorem which will be used again and again:

Theorem 10.8 Let B = M, F, G, E be strictly robust.

If B ⊢ dec ⇒ E then E is coherent and M-strs E ⊆ strs B,
if B ⊢ strexp ⇒ S then S is coherent and M-strs S ⊆ strs B, and
if B ⊢ prog ⇒ B' then B' is strictly robust and M-strs B' ⊆ strs B.

So if one of these phrases elaborates to a result, every structure in the result is either brand new, i.e., its name is not in M, or simply inherited from the basis.
It is not hard to see that theorem 10.8 really implies theorem 10.7. For declarations, for instance: since B is coherent and strs B ⊆ M-Str and E is coherent, and since every structure in E whose name is in M is free in B, we have that {B, E} is coherent.
In the proof of theorem 10.8 we shall use the following lemma.

Lemma 10.9 If Σ is principal for sigexp in B then strs Σ ⊆ strs B.

Proof. By contradiction. We assume that Σ is principal for sigexp in B and that strs Σ \ strs B ≠ ∅, and construct a Σ' with B ⊢ sigexp ⇒ Σ' but Σ ≱ Σ'.
Since we have assumed that strs Σ \ strs B ≠ ∅ there exists a structure S₀ ∈ strs Σ \ strs B, and we can choose S₀ so that it is not a proper substructure of any other structure free in Σ.
Write S₀ in the form (n₀, E₀), let n be a name not in strnames{B, Σ}, and let φ be the realisation {n₀ ↦ n}. There exists a Σ' such that Σ –φ→ Σ'. Note that n₀ is not free in Σ', so we cannot have Σ ≥ Σ'. Thus it will suffice to prove that B ⊢ sigexp ⇒ Σ'. This in turn follows from theorem 10.5, if we can show

B –φ→rb B (10.7)

M-strs Σ' is substructure closed (10.8)

{B, Σ'} is coherent. (10.9)

But (10.7) follows from n₀ ∉ strnames(B) and the fact that any strictly robust basis is also robust. As to (10.8),

M-strs Σ' = M-strs Σ,          if n₀ ∉ M
M-strs Σ' = M-strs Σ \ {S₀},   if n₀ ∈ M,

in either case a substructure closed set (in the latter case because we chose S₀ so that it is not a proper substructure of any other structure free in Σ).
Finally, (10.9) follows from the coherence of {B, Σ} and the fact that n ∉ strnames{B, Σ}. □

In the proof of theorem 10.8 (see Appendix A) one checks for each rule that the bases occurring on the left hand side of the ⊢ in the premises are strictly robust, assuming that the basis on the left hand side of the ⊢ in the conclusion was strictly robust. Therefore we incidentally prove:

Corollary 10.10 If B is strictly robust and P is an elaboration of a phrase in B containing an intermediate conclusion of the form B₁ ⊢ dec ⇒ E, B₁ ⊢ strexp ⇒ S, or B₁ ⊢ prog ⇒ B', then B₁ is strictly robust.

10.3 Realisation and Structure Expressions


We shall now prove that the static elaboration of structure expressions and dec-
larations is "essentially" deterministic, namely deterministic up to the choice
of new names. This is a consequence of a theorem about how realisation af-
fects structure expression elaboration: realisation can only affect the elaboration
of structure expressions by acting on structures that stem from the basis; new
structures may have to be renamed to avoid name capture, but the number of
new structures will not be changed by realisation, not even when the structure
expression contains functor applications.
We prove

Theorem 10.11 Assume B is a strictly robust basis and that B H strexp = S.


Then for all S' E Str,

B H strexp = S' if ClosB S = ClosB S1.

by proving the stronger

Theorem 10.12 If B ⊢ strexp ⇒ S and B –φ→sb B', then for all S' ∈ Str,

B' ⊢ strexp ⇒ S'  iff  Clos_B S –φ→ Clos_B' S'.

Similarly for declarations.

Note that this realisation is the dual of the realisation on signature inferences. In the latter case we are mostly interested in M-realisations (Corollary 10.6) that can be used to change structures that do not stem from the basis, whereas in the above theorem the realisation affects structures stemming from the basis but glides on new structures (up to the renaming of new names).
Most importantly, note that theorem 10.12 justifies the definition of functor signatures and functor signature instantiation. Suppose for the moment that we extend ModL programs to include

prog ::= let prog₁ in prog₂

with the obvious inference rule. Moreover, assume that we allow structure declarations to be decorated with signature expressions as follows:¹

    B ⊢ strexp ⇒ S        B ⊢ sigexp ⇒ (∅)S
    ------------------------------------------------------
    B ⊢ structure strid : sigexp = strexp ⇒ {strid ↦ S}
Then using theorem 10.12 it is easy to prove that if

B ⊢ let functor funid(strid : sigexp) = strexp₁ in funid(strexp₂) ⇒ S

then also

B ⊢ let structure strid : sigexp = strexp₂ in strexp₁ ⇒ S

provided funid does not occur free in strexp₂. The converse, that one can always make an abstraction, does not in general hold here since strexp₁ may refer to paths that are present in strexp₂ but not in sigexp.
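
For instance, writing structure expressions in an SML-like concrete syntax (the identifiers F, X, Y, SIG and A are illustrative only, not fixed by ModL), theorem 10.12 gives that

    let functor F(X : SIG) = struct structure Y = X end
    in F(A)
    end

elaborates to the same structure as

    let structure X : SIG = A
    in struct structure Y = X end
    end

provided F does not occur free in A.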

¹Note that the explicit signature serves simply as a control that S matches the signature; the decoration in no way constrains S. This choice is forced upon us in the coherent semantics. In the consistent semantics the signature could constrain S.
Chapter 11
Unification of Structures
A unifier for two structures S, S' is a realisation φ such that φ S = φ S'. Moreover, φ is said to be a principal unifier for S and S' if whenever ψ is a unifier for S and S' there exists a realisation ψ' such that ψ = ψ' ∘ φ.
In this chapter we shall prove that if S and S' are coherent and have a unifier then they have a principal unifier. We prove it by giving an algorithm, Unify, which when given S and S' as parameters either fails or succeeds, and succeeds with a principal unifier precisely if S and S' have a unifier.
The unification algorithm will be vital when we in the next chapter give an algorithm for checking signature expressions with sharing constraints, but the unification we consider can also be seen as a generalisation of unification from terms to certain directed acyclic graphs, regardless of its use in the modules semantics. The graphs we consider are those that correspond to so-called simple subsets of Str:

Definition 11.1 (Simple "worlds") A set W ⊆ Str is simple (with respect to M) if it is finite and coherent and if also W and M-strs W are substructure closed.

Hence a simple W (W for "world") can be drawn as a directed acyclic graph in which structure names uniquely label nodes and in which all successors of M-nodes are M-nodes.
An M-unifier for S and S' is an M-realisation φ with φ S = φ S'. Moreover, φ is a principal M-unifier for S and S' if whenever ψ is an M-unifier for S and S' there exists an M-realisation ψ' such that ψ = ψ' ∘ φ. For the sake of the semantics we are interested in M-unifiers. We shall solve the general problem of determining when M-unifiers exist. The problem of deciding when unifiers exist is a special case, namely M = ∅.
Example 11.2 In the following pictures, M-nodes are filled out while the others are little circles. [The pictures accompanying this example, showing the structures S, S', S'', S''' and the results of unification, are lost in this copy.] There exists an M-unifier φ for S and S', and there exists a principal M-unifier ψ for S and S'. However, there is no M-unifier for S' and S'', nor for S' and S''' (the last example because S''' is a proper substructure of S').
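
Since the pictures are lost, here is a small hypothetical instance of the same phenomena in Standard ML, using a concrete representation of structures as a name paired with an environment. The representation and the particular names are my own choices, not the thesis's; the same datatype is reused in the sketches later in this chapter.

    datatype str = Str of string * (string * str) list    (* a structure (n, E) *)

    (* With M = ["a", "b"]: the structures x{A = a{}} and y{B = b{}} have an
       M-unifier, mapping both to a common widened structure z{A = a{}, B = b{}},
       because x and y are not in M and so may be renamed and widened.  By
       contrast, a{} and b{} have no M-unifier: names in M are constants. *)
    val s  = Str ("x", [("A", Str ("a", []))])
    val s' = Str ("y", [("B", Str ("b", []))])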

The plain theorem, then, is

Theorem 11.3 (Principal Unifiers) Assume that {S, S'} is coherent and that M-strs{S, S'} is substructure closed. If S and S' have an M-unifier then they have a principal M-unifier.

This is nice to know from a theoretical point of view. But from a practical point of view it is also necessary to know how one finds out whether two structures are unifiable. Therefore we prove theorem 11.3 constructively by giving an algorithm, Unify, which finds unifiers when possible. More precisely, let us say that an M-realisation φ is a locally principal M-unifier for S and S' if φ S = φ S' and moreover, whenever ψ is an M-unifier for S and S' then there exists an M-realisation ψ' such that

ψ ↓ strs{S, S'} = (ψ' ∘ φ) ↓ strs{S, S'}.

Unify then has the following properties:

Theorem 11.4 (Unification) Assume that {S, S'} is coherent and that M-strs{S, S'} is substructure closed. Either Unify(M, S, S') fails or it succeeds with a locally principal M-unifier for S and S'. Moreover, Unify succeeds if and only if there exists an M-unifier for S and S'.

This theorem really does imply theorem 11.3: if φ is a locally principal M-unifier for S and S', one obtains a principal M-unifier φ₁ for S and S' as follows:

    φ₁(n, E) = φ(n, E),    if (n, E) occurs in S or in S'
    φ₁(n, E) = (n, φ₁ E),  otherwise.

Here is Unify:

Unify(M, S, S') =
  let (n, E) = S; (n', E') = S' in
  if n ∈ M and n' ∈ M then (if n = n' then ID else fail)
  if n ∈ M and n' ∉ M then
    if Dom E' ⊆ Dom E then
      let φ₁ = Unify(M, E, E'); ε = {(n', φ₁ E') ↦ (n, E)}
      in ε ∘ φ₁
    else fail
  if n ∉ M and n' ∈ M then ... (symmetric) ...
  if n ∉ M and n' ∉ M then
    if S ∉ strs E' and S' ∉ strs E then
      let φ₁ = Unify(M, E, E');
          ε = {(n, φ₁ E) ↦ (n, φ₁ E ∪ φ₁ E'), (n', φ₁ E') ↦ (n, φ₁ E ∪ φ₁ E')}
      in ε ∘ φ₁
    else fail

Unify(M, E, E') =
  if Dom E ∩ Dom E' = ∅ then ID
  else let strid ∈ Dom E ∩ Dom E'
           {strid ↦ S₁} ∪ E₁ = E (disjoint union)
           {strid ↦ S₁'} ∪ E₁' = E' (disjoint union)
           φ₁ = Unify(M, S₁, S₁')
           φ₂ = Unify(M, φ₁ E₁, φ₁ E₁')
       in φ₂ ∘ φ₁

Comments on Unify. The algorithm seeks to build a realisation by composition of realisations, starting from the identity realisation, ID. When at least one of n and n' is not in M, a recursive call is performed and, provided it succeeds, its result is composed with an elementary realisation, ε.
The notation {S₁ ↦ S₁', ..., Sₖ ↦ Sₖ'}, (k ≥ 1), means the unique realisation that maps each Sᵢ to Sᵢ' and glides outside ⋃ᵢ strs(Sᵢ), when that realisation exists! In general, the realisation exists if and only if for all i, j in {1, ..., k} and for all longstrid₁, longstrid₂,

if Sᵢ(longstrid₁) is equal to Sⱼ(longstrid₂) then Sᵢ'(longstrid₁) and Sⱼ'(longstrid₂) exist and are equal.

For k = 1 and k = 2 there are particularly simple conditions that ensure that {S₁ ↦ S₁', ..., Sₖ ↦ Sₖ'} denotes a realisation.
For k = 1 a sufficient condition is E of S₁ ⊆ E of S₁'.¹ In other words, the operation of changing a given structure by perhaps changing its name and perhaps widening it at the top is a realisation. The first ε in Unify is of this kind.
For k = 2 a sufficient condition is

S₁' = S₂', E of S₁ ⊆ E of S₁', E of S₂ ⊆ E of S₂', S₁ ∉ strs(E of S₂), and S₂ ∉ strs(E of S₁).

The latter ε in Unify is of this kind.
The conditions S ∉ strs E' and S' ∉ strs E are what in ordinary term unification is known as the "occur check". From the definition of realisation (definition 9.4) it is easy to see that if S and S' have a unifier, then S ∉ strs(E of S') and S' ∉ strs(E of S).
In the case n ∈ M and n' ∉ M we do not need an occur check "if Dom E' ⊆ Dom E and S' ∉ strs E" because M-strs S is assumed substructure closed.

In general, the algorithm is expressed so as to make a proof of the previous theorems fairly easy. In an implementation, where one is more concerned with efficiency, one can represent coherent structures by directed acyclic graphs. A structure name is then a pointer, and unification works by "swinging pointers" in the store instead of producing realisation maps.
With these explanations, I hope that it is at least plausible that Unify has the properties stated in theorem 11.4. Proving this is the aim of the next two sections. Having done that we shall compare our unification with ordinary first order unification and with a similar theory developed by Hassan Aït-Kaci.
¹For arbitrary finite maps f and g, f ⊆ g means Dom f ⊆ Dom g and f(x) = g(x) for all x ∈ Dom f.
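
To make the algorithm concrete, here is a minimal executable rendering in Standard ML, reusing the datatype str introduced after Example 11.2. It is a sketch under simplifying assumptions: names are strings, M is a list of names, realisations are represented directly as functions on structures, and none of the bookkeeping needed for the soundness proof (for example the world W) is tracked. The helper names (idReal, elem, applyEnv, occurs, union) are my own, not the thesis's.

    val idReal : str -> str = fn s => s      (* the identity realisation ID *)

    fun member x xs = List.exists (fn y => y = x) xs

    fun lookup id e =
      case List.find (fn (id', _) => id' = id) e of
          SOME (_, s) => SOME s
        | NONE => NONE

    (* apply a realisation to every component of an environment *)
    fun applyEnv phi e = map (fn (id, s) => (id, phi s)) e

    (* the elementary realisation {S1 |-> S1', ..., Sk |-> Sk'}: it maps
       each Si to Si' and glides on every other structure *)
    fun elem pairs =
      let fun go (s as Str (n, e)) =
            case List.find (fn (dom, _) => dom = s) pairs of
                SOME (_, rng) => rng
              | NONE => Str (n, applyEnv go e)
      in go end

    (* occur check: does s occur among the structures in e? *)
    fun occurs s e =
      List.exists (fn (_, s' as Str (_, e')) => s' = s orelse occurs s e') e

    (* union of two environments that agree on their common strids *)
    fun union e1 e2 =
      e1 @ List.filter (fn (id, _) => not (isSome (lookup id e1))) e2

    fun unify M (s as Str (n, e)) (s' as Str (n', e')) =
          if member n M andalso member n' M then
            if n = n' then SOME idReal else NONE
          else if member n M then       (* n constant, n' may be widened *)
            if List.all (fn (id, _) => isSome (lookup id e)) e' then
              (case unifyEnv M e e' of
                   NONE => NONE
                 | SOME phi1 =>
                     (* phi1 glides on (n, e) since n is in M, so the target
                        of the elementary realisation is s itself *)
                     SOME (elem [(Str (n', applyEnv phi1 e'), s)] o phi1))
            else NONE
          else if member n' M then unify M s' s          (* symmetric case *)
          else if occurs s e' orelse occurs s' e then NONE   (* occur check *)
          else
            (case unifyEnv M e e' of
                 NONE => NONE
               | SOME phi1 =>
                   let val e1 = applyEnv phi1 e
                       val e1' = applyEnv phi1 e'
                       val joined = Str (n, union e1 e1')
                   in SOME (elem [(Str (n, e1), joined),
                                  (Str (n', e1'), joined)] o phi1)
                   end)

    and unifyEnv M e e' =
          case List.find (fn (id, _) => isSome (lookup id e')) e of
              NONE => SOME idReal                        (* disjoint domains *)
            | SOME (id, s1) =>
                let val s1' = valOf (lookup id e')
                    val rest  = List.filter (fn (i, _) => i <> id) e
                    val rest' = List.filter (fn (i, _) => i <> id) e'
                in case unify M s1 s1' of
                       NONE => NONE
                     | SOME phi1 =>
                         (case unifyEnv M (applyEnv phi1 rest)
                                          (applyEnv phi1 rest') of
                              NONE => NONE
                            | SOME phi2 => SOME (phi2 o phi1))
                end

    (* e.g. with M = ["a"]: x{A = a{}} and y{} unify, since y may be widened;
       phi maps both to x{A = a{}} *)
    val phi = valOf (unify ["a"] s (Str ("y", [])))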

11.1 Soundness of Unify


We first prove that Unify is sound, i.e., that when it succeeds then it succeeds with a unifier. In the next section we show "completeness", i.e., that Unify always terminates and that it succeeds with a locally principal unifier when there is any unifier.
Below, the set of names involved in φ, written Inv φ, means

⋃ { strnames S ∪ strnames(φ S) | S ∈ Supp φ }.

Moreover, when W is a set of structures and φ a realisation, φ W means {φ w | w ∈ W}.
Theorem 11.5 (Soundness of Unify) Assume W is simple with respect to M. If S, S' ∈ W and φ = Unify(M, S, S') succeeds then

φ S = φ S' (11.1)

Supp φ ∩ W ⊆ strs{S, S'} (11.2)

Inv φ ⊆ strnames S ∪ strnames S' (11.3)

φ is an M-realisation (11.4)

φ W is simple with respect to M (11.5)

If φ ≠ ID then |φ W| < |W| (11.6)

φ does not change the name of any structure whose name
is in strnames(φ S). (11.7)

Similarly, if strs E ∪ strs E' ⊆ W and φ = Unify(M, E, E') succeeds then

∀strid ∈ Dom E ∩ Dom E', φ E strid = φ E' strid (11.8)

Supp φ ∩ W ⊆ strs{E, E'} (11.9)

Inv φ ⊆ strnames E ∪ strnames E' (11.10)

φ is an M-realisation (11.11)

φ W is simple with respect to M (11.12)

If φ ≠ ID then |φ W| < |W| (11.13)

φ does not change the name of any structure whose name
is in strnames{φ E, φ E'}. (11.14)

Despair not! Among these properties, (11.1) and (11.4) are the ones we primarily want, whereas (11.2), (11.3), (11.5) and (11.7) are used partly to get the induction to work and partly in the proof of the soundness of the signature checker. Property (11.6) will give a simple termination argument in the next section.
Note that (11.2) does not say "Supp φ ⊆ strs{S, S'}", for the very good reason that this inclusion is not true: φ accumulates all the operations that have been performed during the unification. This is why φ is a locally principal M-unifier, not in general a principal unifier.

Proof. [of theorem 11.5] The proof is by induction on the length of the computation, counted as the number, r, of recursive calls.

Base Case, r = 0. By the definition of Unify there are two cases, one for structures and one for environments. The latter trivially gives (11.8)-(11.14) and the former gives (11.1)-(11.7) using the coherence of W.

Inductive Step, r > 0. There are a number of cases corresponding to the definition of the algorithm. We start by considering structures S = (n, E) and S' = (n', E').

n ∈ M and n' ∉ M, where also Dom E' ⊆ Dom E, φ₁ = Unify(M, E, E'), ε = {(n', φ₁ E') ↦ (n, E)} and φ = ε ∘ φ₁.
By induction,

∀strid ∈ Dom E', φ₁ E strid = φ₁ E' strid (11.15)

Supp φ₁ ∩ W ⊆ strs{E, E'} (11.16)

Inv φ₁ ⊆ strnames E ∪ strnames E' (11.17)

φ₁ is an M-realisation (11.18)

φ₁ W is simple with respect to M (11.19)

If φ₁ ≠ ID then |φ₁ W| < |W| (11.20)

φ₁ does not change the name of any structure whose
name is in strnames{φ₁ E, φ₁ E'}. (11.21)

To see that ε is a realisation map, note that φ₁ E = E by (11.18) and the substructure closedness of M-strs W. Therefore φ₁ E' ⊆ E by (11.15). It follows from the remarks in the previous section that ε is a realisation. It is clearly an M-realisation. Thus φ = ε ∘ φ₁ is an M-realisation, showing (11.4).
Proof of (11.1): φ(n', E') = ε ∘ φ₁(n', E') = ε(n', φ₁ E'), since (11.17) and n' ∉ strnames{E, E'}. And ε(n', φ₁ E') = (n, E) = φ(n, E), since φ is an M-realisation.
Proof of (11.2): Supp φ ∩ W ⊆ (Supp φ₁ ∩ W) ∪ (Supp ε ∩ W) ⊆ strs E ∪ strs E' ∪ ({(n', φ₁ E')} ∩ W). By the coherence of W, {(n', φ₁ E')} ∩ W ⊆ {(n', E')}. Thus Supp φ ∩ W ⊆ strs{S, S'}.
Proof of (11.3): Inv φ ⊆ Inv φ₁ ∪ Inv ε ⊆ strnames{E, E'} ∪ strnames(n', φ₁ E') ∪ strnames(n, E) ⊆ strnames{S, S'} by (11.17).
Proof of (11.5): φ W is obtained from φ₁ W by replacing all occurrences of (n', φ₁ E') by (n, E). But (n, E) is already in φ₁ W. It follows that φ W is finite and coherent, and that φ W and M-strs(φ W) are substructure closed. Hence φ W is simple.
Proof of (11.6): φ is not ID. If φ₁ = ID then |φ W| = |ε W| = |W| − 1, as we saw in the proof of (11.5). Otherwise, |φ W| = |ε φ₁ W| ≤ |φ₁ W| ≤ |W| − 1 < |W| by (11.20).
Proof of (11.7): Take a (n₀, E₀) with n₀ ∈ strnames(φ S). Since φ S = S ∈ M-Str we have n₀ ∈ M. Since φ is an M-realisation, φ glides on (n₀, E₀).
n ∉ M and n' ∈ M. Symmetric.

n ∉ M and n' ∉ M, where also S ∉ strs E' and S' ∉ strs E and φ₁ = Unify(M, E, E'),

ε = {(n, φ₁ E) ↦ (n, φ₁ E ∪ φ₁ E'), (n', φ₁ E') ↦ (n, φ₁ E ∪ φ₁ E')}

and φ = ε ∘ φ₁.
By induction,

∀strid ∈ Dom E ∩ Dom E', φ₁ E strid = φ₁ E' strid (11.22)

Supp φ₁ ∩ W ⊆ strs{E, E'} (11.23)

Inv φ₁ ⊆ strnames E ∪ strnames E' (11.24)

φ₁ is an M-realisation (11.25)

φ₁ W is simple with respect to M (11.26)

If φ₁ ≠ ID then |φ₁ W| < |W| (11.27)

φ₁ does not change the name of any structure whose
name is in strnames{φ₁ E, φ₁ E'}. (11.28)

Here (11.22) ensures that E₁ = φ₁ E ∪ φ₁ E' really makes sense. Now n does not occur in φ₁ E', since S ∉ strs E' and (11.24). Therefore S ∉ strs(φ₁ E') and similarly, S' ∉ strs(φ₁ E). It follows from the considerations in the previous section that ε is a realisation. It is clearly an M-realisation, so φ is an M-realisation, showing (11.4).
Proof of (11.1):

    φ S = ε φ₁(n, E)
        = ε(n, φ₁ E)      as (11.23) and S ∉ strs{E, E'}
        = ε(n', φ₁ E')
        = ε φ₁(n', E')    as (11.23) and S' ∉ strs{E, E'}
        = φ S'.

Proof of (11.2): Supp φ ∩ W ⊆ (Supp φ₁ ∩ W) ∪ (Supp ε ∩ W) ⊆ strs{E, E'} ∪ ({(n, φ₁ E), (n', φ₁ E')} ∩ W) ⊆ strs{E, E'} ∪ {S, S'} by the coherence of W, so Supp φ ∩ W ⊆ strs{S, S'}.
Proof of (11.3): Inv φ ⊆ Inv φ₁ ∪ Inv ε ⊆ Inv φ₁ ∪ strnames S ∪ strnames S' ⊆ strnames{S, S'}.
Proof of (11.5): φ W is obtained from φ₁ W by replacing all occurrences of (n, φ₁ E) and (n', φ₁ E') by (n, φ₁ E ∪ φ₁ E'). φ₁ W is simple by induction and every structure in strs{φ₁ E, φ₁ E'} is already in φ₁ W. Therefore φ W is simple.
Proof of (11.6): if φ ≠ ID then either φ₁ = ID and ε ≠ ID, or φ₁ ≠ ID. In the former case n ≠ n', so |φ W| = |ε W| = |W| − 1, as we saw in the proof of (11.5). In the latter case,

    |φ W| = |ε φ₁ W| ≤ |φ₁ W|   (see the proof of (11.5))
          < |W|                  by (11.27)

as required.
Proof of (11.7): n' does not occur in φ₁ E or in φ₁ E', by (11.24). Let S₀ be a structure whose name, n₀, is in φ S, i.e., in (n, φ₁ E ∪ φ₁ E'). If n₀ = n then φ₁ does not change the name of S₀ (by (11.24)), and since ε does not change the name of any structure with name n₀, we have that φ does not change the name of S₀.
If n₀ ≠ n then n₀ occurs in φ₁ E or in φ₁ E'. Then by (11.28) φ₁ does not change the name of S₀. Since n' does not occur in φ₁ E or in φ₁ E' we have n₀ ≠ n'. Therefore ε does not change the name of φ₁ S₀. Thus φ does not change the name of S₀.
Now to environments.
Now to environments.

Dom E ∩ Dom E' ≠ ∅. Here {strid₀ ↦ S₁} ∪ E₁ = E, {strid₀ ↦ S₁'} ∪ E₁' = E', φ₁ = Unify(M, S₁, S₁'), φ₂ = Unify(M, φ₁ E₁, φ₁ E₁'), and φ = φ₂ ∘ φ₁.
By induction on the first call of Unify,

φ₁ S₁ = φ₁ S₁' (11.29)

Supp φ₁ ∩ W ⊆ strs{S₁, S₁'} (11.30)

Inv φ₁ ⊆ strnames S₁ ∪ strnames S₁' (11.31)

φ₁ is an M-realisation (11.32)

φ₁ W is simple with respect to M (11.33)

If φ₁ ≠ ID then |φ₁ W| < |W| (11.34)

φ₁ does not change the name of any structure whose
name is in strnames(φ₁ S₁). (11.35)

Now strs{E₁, E₁'} ⊆ W so strs{φ₁ E₁, φ₁ E₁'} ⊆ φ₁ W, which is simple with respect to M by (11.33).
Thus by induction on the second call,

∀strid ∈ Dom(φ₁ E₁) ∩ Dom(φ₁ E₁'), φ₂ φ₁ E₁ strid = φ₂ φ₁ E₁' strid (11.36)

Supp φ₂ ∩ φ₁ W ⊆ strs{φ₁ E₁, φ₁ E₁'} (11.37)

Inv φ₂ ⊆ strnames(φ₁ E₁) ∪ strnames(φ₁ E₁') (11.38)

φ₂ is an M-realisation (11.39)

φ₂(φ₁ W) is simple with respect to M (11.40)

If φ₂ ≠ ID then |φ₂ φ₁ W| < |φ₁ W| (11.41)

φ₂ does not change the name of any structure whose
name is in strnames{φ₂ φ₁ E₁, φ₂ φ₁ E₁'}. (11.42)

Proof of (11.8): If strid = strid₀ then use (11.29), else use (11.36).
Proof of (11.9): It suffices to show that φ glides on every S'' = (n'', E'') ∈ W \ strs{E, E'}. But φ₁ glides on S'' by (11.31), i.e., φ₁ S'' = (n'', φ₁ E''). Now n'' does not occur in φ₁ E₁ or in φ₁ E₁' (use (11.31)). Therefore, by (11.38), φ₂ glides on (n'', φ₁ E''). Thus φ₂ ∘ φ₁ glides on (n'', E'').
Proof of (11.10): Inv φ ⊆ Inv φ₂ ∪ Inv φ₁ ⊆ Inv φ₁ ∪ strnames E₁ ∪ strnames E₁' (by (11.38)) ⊆ strnames E ∪ strnames E' (by (11.31)).
Next, φ is an M-realisation by (11.32) and (11.39), and φ W is simple with respect to M by (11.40). Moreover, (11.13) follows from (11.34) and (11.41).
Proof of (11.14): Take an S₀ = (n₀, E₀) with n₀ ∈ strnames{φ E, φ E'}. By (11.31) used on (11.35) we get

φ₁ does not change the name of any structure whose
name is in strnames{φ₁ E, φ₁ E'}. (11.43)

By (11.38) this gives

φ₁ does not change the name of any structure whose
name is in strnames{φ₂ φ₁ E, φ₂ φ₁ E'}. (11.44)

Using (11.38) on (11.42) we get

φ₂ does not change the name of any structure whose
name is in strnames{φ₂ φ₁ E, φ₂ φ₁ E'}. (11.45)

This together with (11.44) gives the desired result. □
As a corollary we have that unification of S and S' does not affect the M-structures of W.

Corollary 11.6 Assume S, S' ∈ W, W is simple with respect to M, and φ = Unify(M, S, S') succeeds. Then M-strs W = M-strs(φ W).

Proof. On the one hand M-strs W ⊆ M-strs(φ W), since φ is an M-realisation and M-strs W is substructure closed. Moreover, φ W is coherent and we have just shown that it contains M-strs W. Therefore, any (m, E) ∈ M-strs(φ W) \ M-strs W would have to satisfy m ∉ strnames W. But that is impossible since

strnames(φ W) ⊆ Inv φ ∪ strnames W
             ⊆ strnames{S, S'} ∪ strnames W
             = strnames W.  □

11.2 Completeness of Unify


We shall now show that Unify is complete, i.e., that it succeeds with a locally principal unifier if a unifier exists and that it fails otherwise.
First, let us say that an M-realisation φ is an M-unifier for E and E' if for all strid ∈ Dom E ∩ Dom E' we have φ E strid = φ E' strid. We then have
Theorem 11.7 (Completeness of Unify) Assume W is simple with respect to M and that S, S' ∈ W. For every M-unifier, ψ, for S and S', the call φ = Unify(M, S, S') succeeds and there exists an M-realisation, ψ', such that

ψ ↓ W = (ψ' ∘ φ) ↓ W. (11.46)

If S and S' do not have an M-unifier, then Unify(M, S, S') fails.

Similarly, assume strs{E, E'} ⊆ W. For every M-unifier, ψ, for E and E', the call φ = Unify(M, E, E') succeeds and there exists an M-realisation such that (11.46). If E and E' do not have an M-unifier, then Unify(M, E, E') fails.

Proof. The proof relies on theorem 11.5. In particular, (11.6) and (11.13) give a very painless proof of termination. The trick is to have an outer induction on |W| with an inner structural induction on the structures in W.

Outer Base Case, |W| = 0. Thus W = ∅. The part of the statement regarding structures is vacuously true. For the second part, strs{E, E'} ⊆ W implies E = E' = {}. For every M-unifier, ψ, for {} and {} (i.e., for every M-realisation), φ = Unify(M, {}, {}) succeeds and

ψ ↓ W = (ψ ∘ ID) ↓ W = (ψ ∘ φ) ↓ W

as desired.

Outer Inductive Step, |W| ≥ 1. We assume that the theorem holds for all simple W' with |W'| < |W| and prove that it holds for W by structural induction on the structures in W.

Inner Base Case, E = E' = {}. Handled as above.

Inner Inductive Step. Let us first consider structures S = (n, E) and S' = (n', E'). There are four, and only four, possibilities.

n ∈ M and n' ∈ M. Let ψ be an M-unifier for S and S'. Then S = ψ S = ψ S' = S', so φ = Unify(M, S, S') returns φ = ID and ψ ↓ W = (ψ ∘ φ) ↓ W. If S and S' do not have an M-unifier we must have S ≠ S' (as otherwise ID would be an M-unifier), so n ≠ n' by the coherence of W, and Unify fails.
n ∈ M and n' ∉ M. Let ψ be an M-unifier for S and S'. Since (n, E) = ψ(n, E) = ψ(n', E') we must have Dom E' ⊆ Dom E and ψ E' strid = E strid = ψ E strid for all strid ∈ Dom E'. Thus by the inner induction, φ₁ = Unify(M, E, E') succeeds and there exists an M-realisation, ψ₁, with

ψ ↓ W = (ψ₁ ∘ φ₁) ↓ W. (11.47)

As we saw in the soundness proof, ε = {(n', φ₁ E') ↦ (n, E)} really does denote a realisation map, so it makes sense to say that Unify succeeds with the composition φ = ε ∘ φ₁.
Moreover, φ₁ W is coherent and φ W is coherent and the latter is obtained from the former by replacing all occurrences of (n', φ₁ E') by (n, E). Consider an S₁ = (n₁, E₁) ∈ φ W. If n₁ ≠ n there exists exactly one structure, call it ε⁻¹(S₁), in φ₁ W such that

ε(ε⁻¹ S₁) = S₁.

Thus we can define for each S₁ ∈ φ W,

    ψ'(S₁) = ψ₁(ε⁻¹ S₁),  if n of S₁ ≠ n
    ψ'(S₁) = ψ S,         if n of S₁ = n.

Extend ψ' to a total map on Str by letting it glide outside φ W. Now let us prove that ψ' is an M-realisation with the desired properties.
First, to prove that ψ' is a realisation we shall prove that for all S₁ ∈ φ W and for all strid, if S₁ strid exists then (ψ' S₁) strid exists and (ψ' S₁) strid = ψ'(S₁ strid).
When n of S₁ ≠ n and n of (S₁ strid) ≠ n this follows from the definition of ψ' and the fact that ψ₁ is a realisation.
The two remaining cases are
Case 1: n of S₁ = n and n of (S₁ strid) ≠ n, where

    ψ'(S₁ strid) = ψ'(S strid)
                 = ψ'(E strid)
                 = ψ₁ ε⁻¹(E strid)   as n of (E strid) ≠ n
                 = ψ₁(E strid)       as strnames E ⊆ M
                 = ψ(E strid)        by (11.47)
                 = (ψ S) strid       as ψ is a realisation
                 = (ψ' S₁) strid     by the definition of ψ'.

Case 2: n of S₁ ≠ n and n of (S₁ strid) = n. Here ε glides on ε⁻¹(S₁), so (ε⁻¹ S₁) strid must exist and

    ε((ε⁻¹ S₁) strid) = (ε(ε⁻¹ S₁)) strid   as ε is a realisation
                      = S₁ strid
                      = (n, E),

so (ε⁻¹ S₁) strid = (n, E) or (ε⁻¹ S₁) strid = (n', φ₁ E'). In either case, by (11.47), we get ψ₁((ε⁻¹ S₁) strid) = ψ S. Thus, since ψ₁ is a realisation,

    (ψ₁(ε⁻¹ S₁)) strid = ψ S
    (ψ' S₁) strid = ψ'(S₁ strid)

as desired.
This proves that ψ' is a realisation. That it is an M-realisation follows from the definition of ε⁻¹ and the fact that ψ₁ is an M-realisation.
To prove (11.46) take a w ∈ W. If w = S then ψ w = S = (ψ' ∘ φ) S, as ψ and ψ' ∘ φ are M-realisations. Otherwise, n of w ≠ n. Since Inv φ₁ ⊆ strnames E ∪ strnames E' and n does not occur in E or in E', we have n of (φ₁ w) ≠ n. If n of (φ₁ w) ≠ n' then n of (ε(φ₁ w)) ≠ n, so ψ'(φ w) = ψ'(ε(φ₁ w)) = ψ₁(φ₁ w) = ψ w by (11.47). Otherwise n of (φ₁ w) = n', so

    ψ'(φ w) = ψ'(ε(φ₁ w))
            = ψ'(ε(n', φ₁ E'))   by the coherence of φ₁ W
            = ψ'(n, E)           by the definition of ε
            = ψ S                by the definition of ψ'
            = ψ S'               as ψ is a unifier
            = ψ₁(φ₁ S')          by (11.47)
            = ψ₁(n', φ₁ E')      as n' ∉ Inv φ₁
            = ψ₁(φ₁ w)           by the coherence of φ₁ W
            = ψ w                by (11.47).

This proves (11.46).
Conversely, assume that S and S' have no M-unifier. If Dom E' ⊄ Dom E then Unify fails, but if Dom E' ⊆ Dom E, there cannot be an M-unifier for E and E' (since if φ₁ were one, ε ∘ φ₁ would unify S and S', with ε as in the program). Therefore, by the inner induction, Unify(M, E, E') will fail, so that also Unify(M, S, S') fails.

n ∉ M and n' ∈ M. Symmetric.
n ∉ M and n' ∉ M. Similar to the two previous cases. Here are the details.
Assume ψ is an M-unifier for S and S'. Then S ∉ strs E' and S' ∉ strs E, so Unify passes the test and calls φ₁ = Unify(M, E, E'). Since ψ is also an M-unifier for E and E' we get by the inner induction that φ₁ = Unify(M, E, E') succeeds and that there exists an M-realisation ψ₁ with

ψ ↓ W = (ψ₁ ∘ φ₁) ↓ W. (11.48)

As we saw in the proof of theorem 11.5,

ε = {(n, φ₁ E) ↦ (n, φ₁ E ∪ φ₁ E'), (n', φ₁ E') ↦ (n, φ₁ E ∪ φ₁ E')}

really is a realisation map, so it makes sense to say that Unify succeeds with result φ = ε ∘ φ₁.
We also saw that φ₁ W is coherent, that φ W is coherent, and that the latter is obtained from the former by replacing all occurrences of (n, φ₁ E) and (n', φ₁ E') by (n, φ₁ E ∪ φ₁ E').
Consider an S₁ = (n₁, E₁) ∈ φ W. If n₁ ≠ n then there exists exactly one structure in φ₁ W, call it ε⁻¹(S₁), that satisfies

ε(ε⁻¹(S₁)) = S₁.

(If n₁ = n, S₁ is the value of ε on two structures if n ≠ n', and on one structure if n = n'.) Thus it makes sense to define for all S₁ ∈ φ W,

    ψ'(S₁) = ψ₁(ε⁻¹(S₁)),  if n of S₁ ≠ n
    ψ'(S₁) = ψ S,          if n of S₁ = n.

Extend ψ' to a total map by letting it glide outside φ W. To prove that ψ' is a realisation we prove that for all S₁ ∈ φ W and for all strid, if S₁ strid exists then (ψ' S₁) strid exists and (ψ' S₁) strid = ψ'(S₁ strid).
Again this is clear when n of S₁ ≠ n and n of (S₁ strid) ≠ n. The remaining cases are:
Case 1: n of S₁ = n and n of (S₁ strid) ≠ n.
Here S₁ = (n, φ₁ E ∪ φ₁ E') by the coherence of φ W.
If strid ∈ Dom E we have

    ψ'(S₁ strid) = ψ'(φ₁ E strid)
                 = ψ₁ ε⁻¹(φ₁ E strid)   as n of (φ₁ E strid) ≠ n
                 = ψ₁ φ₁ E strid        as n ∉ strnames(φ₁ E strid)
                 = ψ E strid            by (11.48)
                 = (ψ S) strid          as ψ is a realisation
                 = (ψ' S₁) strid        by the definition of ψ'.

Otherwise strid ∈ Dom E' and we get

    ψ'(S₁ strid) = (ψ S') strid   as above
                 = (ψ S) strid    since ψ unifies S and S'
                 = (ψ' S₁) strid

as desired.
Case 2: n of S₁ ≠ n and n of (S₁ strid) = n. Since S₁ strid exists and n of S₁ ≠ n, (ε⁻¹ S₁) strid must exist. Now ε is a realisation map (see the proof of theorem 11.5) so (ε(ε⁻¹ S₁)) strid exists and

    (ε(ε⁻¹ S₁)) strid = ε((ε⁻¹ S₁) strid) = S₁ strid.

Since S₁ strid = (n, φ₁ E ∪ φ₁ E') we must have

    (ε⁻¹ S₁) strid = (n, φ₁ E)   or   (ε⁻¹ S₁) strid = (n', φ₁ E').

In either case, (11.48) gives ψ₁((ε⁻¹ S₁) strid) = ψ S. Since ψ₁ is a realisation, (ψ₁(ε⁻¹ S₁)) strid exists and

    (ψ₁(ε⁻¹ S₁)) strid = ψ S
    (ψ' S₁) strid = ψ'(S₁ strid)

as desired.
This proves that ψ' is a realisation. That it is an M-realisation is obvious.
To show (11.46) take w ∈ W. If w = S or w = S' then (ψ' ∘ φ) w = (ψ' ∘ ε ∘ φ₁) w = ψ S = ψ w.
Otherwise n of w ≠ n and n of w ≠ n' by the coherence of W. By theorem 11.5, Inv φ₁ ⊆ strnames E ∪ strnames E' and since

{n, n'} ∩ (strnames E ∪ strnames E') = ∅

we have {n, n'} ∩ Inv φ₁ = ∅. Therefore n of (φ₁ w) ∉ {n, n'}. Thus by the definition of ψ', ψ'(φ w) = ψ'(ε(φ₁ w)) = ψ₁(φ₁ w) = ψ w by (11.48). This shows (11.46).
Conversely assume that S and S' have no M-unifier. If S ∈ strs E' or S' ∈ strs E then Unify fails. But if S ∉ strs E' and S' ∉ strs E there cannot be any M-unifier for E and E' (if φ₁ were one, ε ∘ φ₁ would be an M-unifier for S and S', where ε is as in the algorithm). Therefore by the inner induction, Unify(M, E, E') fails, so that also Unify(M, S, S') fails.
Continuing the case analysis in the inner inductive step, we now turn to environments. So assume that not both of E and E' are the empty map. There are two possibilities:

Dom E ∩ Dom E' = ∅. Trivial.

Dom E ∩ Dom E' ≠ ∅. It is possible to split E and E' into disjoint unions E = {strid ↦ S₁} ∪ E₁ and E' = {strid ↦ S₁'} ∪ E₁', where strid ∈ Dom E ∩ Dom E'. Assume ψ is an M-unifier for E and E'. Then ψ is also an M-unifier for S₁ and S₁', so by the inner induction, φ₁ = Unify(M, S₁, S₁') succeeds and there exists an M-realisation ψ₁ such that

ψ ↓ W = (ψ₁ ∘ φ₁) ↓ W. (11.49)

Since strs{E₁, E₁'} ⊆ W we have strs{φ₁ E₁, φ₁ E₁'} ⊆ φ₁ W. Moreover, ψ is an M-unifier for E₁ and E₁', so by (11.49) ψ₁ is an M-unifier for φ₁ E₁ and φ₁ E₁'.
Now if φ₁ = ID we have φ₁ E₁ = E₁ and φ₁ E₁' = E₁' and φ₁ W = W, and we simply apply the inner induction to conclude that φ₂ = Unify(M, φ₁ E₁, φ₁ E₁') succeeds and that there exists an M-realisation ψ' such that

ψ₁ ↓ (φ₁ W) = (ψ' ∘ φ₂) ↓ (φ₁ W),

observing that Unify succeeds with φ = φ₂ ∘ φ₁ and thus ψ ↓ W = (ψ' ∘ φ) ↓ W.
On the other hand, if φ₁ is not ID, we use (11.6) of theorem 11.5 to infer |φ₁ W| < |W|. Then we use the outer induction to infer the same conclusions.
Conversely, assume that E and E' have no M-unifier. Then

Dom E ∩ Dom E' ≠ ∅.

If S₁ and S₁' have no M-unifier then φ₁ = Unify(M, S₁, S₁') fails by the inner induction, so Unify(M, E, E') fails. If S₁ and S₁' have a unifier, then by the inner induction, φ₁ = Unify(M, S₁, S₁') succeeds. But then φ₁ E₁ and φ₁ E₁' have no unifier (if ψ were one, then ψ ∘ φ₁ would be a unifier for E and E'). Therefore the second call fails, by the inner induction if φ₁ = ID, and by the outer induction otherwise.

End of inner inductive step.

End of outer inductive step. □

We now see that our initial theorem, theorem 11.4, follows from the soundness and completeness theorems (theorem 11.5 and theorem 11.7). Assume that {S, S'} is coherent and that M-strs{S, S'} is substructure closed. Then W = strs{S, S'} is simple with respect to M. Either there exists an M-unifier for S and S' or there does not. In the former case, φ = Unify(M, S, S') succeeds (by the completeness theorem); φ is an M-unifier (by the soundness theorem) which in fact is locally principal (by the completeness theorem). In the latter case, Unify(M, S, S') fails by the completeness theorem.

11.3 Comparison With Term Unification


First order term unification [37] can be seen as a special case of structure unification. It is easy to encode terms as coherent structures. For instance, the term t = int → (α → bool) can be represented by the structure rep(t). [The picture of rep(t) is lost in this copy.]
To be a bit more precise, take M to be the function symbols and constants (since unification must not change these). Take StrName to be the disjoint union of M, TyVar, and the natural numbers. Take StrId to be the (infinite) set {OP, ARG1, ARG2, ...}. All internal nodes in rep(t) are named with different natural numbers to obtain coherence. Clearly the representations have substructure closed intersections with M-Str. Thus we can apply our algorithm and our results.
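
As a concrete illustration (my own choice of encoding details, since the thesis's picture is lost in this copy), such a representation can be written down with the datatype str used earlier in this chapter. An internal node is named by a fresh number and carries its symbol under OP as a leaf named by the symbol itself; a type variable is a node named by that variable, with no components, so that unification may widen it:

    (* one possible rep(int -> bool); here M contains "->", "int" and "bool" *)
    val repArrow =
      Str ("1", [("OP",   Str ("->", [])),
                 ("ARG1", Str ("2", [("OP", Str ("int",  []))])),
                 ("ARG2", Str ("3", [("OP", Str ("bool", []))]))])

    (* the type variable 'a: a non-M-named node with no components *)
    val repVar = Str ("'a", [])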
For every two terms t₁ and t₂ and every unifier for t₁ and t₂ there exists an M-unifier for rep(t₁) and rep(t₂). Conversely, if each function symbol has a fixed arity, every M-unifier for rep(t₁) and rep(t₂) gives a unifier for t₁ and t₂.

Still under the assumption that each function symbol has a fixed arity, a principal M-unifier for rep(t₁) and rep(t₂) gives a maximal unifier for t₁ and t₂. Therefore, the structure unification algorithm is also a decision procedure for the existence of term unifiers, and when it succeeds, it produces an M-unifier from which a maximal term unifier can be derived.²

11.4 Comparison with Aït-Kaci's Type Discipline
Another theory that generalizes first order unification has been developed by Hassan Aït-Kaci [3]. The purpose of this section is to point out the similarities and differences between his work and ours. The two theories were developed independently of each other.
I shall start with a review of some of the definitions and results from [3]. I take the liberty to reformulate some definitions slightly to ease the comparison.
There is given a set, L, of labels. A label corresponds to a structure identifier in our terminology. A term domain Δ on L is a subset of L* which is prefix closed and finitely branching, i.e.,

∀u, v ∈ L*. if u.v ∈ Δ then u ∈ Δ (11.50)

if u ∈ Δ then {u.a ∈ Δ | a ∈ L} is finite. (11.51)

Next, we have a set, Σ, of symbols. A symbol corresponds to a structure name. Aït-Kaci considers the situation where Σ is any partially ordered set, and in particular he studies the case where Σ is a lattice. This special case is general enough for our comparison, so from now on let us assume that Σ is a lattice. A map ψ maps paths in Δ to symbols in Σ, hence decorating the nodes of the tree. Finally, to represent sharing, a coreference relation κ is an equivalence relation on Δ. A term is a triple t = (Δ, ψ, κ). The subterm of t at address w is written t \ w. Then a well formed term is a term t = (Δ, ψ, κ) satisfying

∀u, v ∈ Δ. if u κ v then t \ u = t \ v. (11.52)

²The method by which we proved termination of the unification algorithm can also be used to give a simple termination argument for ordinary term unification. Use an outer induction on the size of the set W of terms and their subterms, with an inner structural induction on the terms of W, the point being that if a substitution "expands" a term it decreases the total number of terms and subterms under consideration.
Apart from the generality that lies in having Σ be a lattice, these definitions give a richer variety of objects than our structures. Firstly, Δ need not be finite, so in particular terms can be cyclic. Secondly, one has the ability to distinguish between, for example, two terms of the same shape with different coreference relations. [The two pictures illustrating this are lost in this copy; the first has three coreference classes, the second only two.]
In the modules semantics based on coherence, sharing is the same as structure identity, so it is the latter interpretation we want. Any structure S = (n, E) can be drawn as a tree or, equivalently, it can be regarded as a pair S = (Δ, ψ), where Δ is a finite term domain and ψ a map from addresses in Δ (i.e., nodes) to symbols (i.e., structure names). We then turn a "structure" S = (Δ, ψ) into a well-formed term rep(S) = (Δ, ψ, κ) by taking κ = κ^max, where

∀u, v ∈ Δ. u κ^max v iff S(u) = S(v). (11.53)

In other words, we take the largest possible coreference relation; compare with (11.52).
Then Aït-Kaci defines a relation ≼ on terms as follows. Let tᵢ = (Δᵢ, ψᵢ, κᵢ) for i = 1, 2. Then t₁ ≼ t₂ if

Δ₁ ⊆ Δ₂ (11.54)

κ₁ ⊆ κ₂ (11.55)

∀w ∈ Δ₁. ψ₁(w) ≥ ψ₂(w). (11.56)

This looks a bit like realisation maps (since realisations preserve paths (11.54) and sharing (11.55)), but the symbol ordering poses a problem. The most natural thing would be to take Σ to be the set of structure names with every name related to any other name, but this fails because the total relation is not an ordering, let
alone a lattice. Instead, we shall take the flat lattice obtained by adding a greatest element, ⊤, and a smallest element, ⊥, to the unordered set of structure names. Then we can think of a structure as a pair S = (Δ, ψ), where Rng ψ ∩ {⊤, ⊥} = ∅.
Let us say that a realisation map φ is name preserving if n of S = n of (φ S) for all S. Then

Observation 11.8 There exists a name preserving realisation map, φ, with φ S₁ = S₂ if and only if rep(S₁) ≼ rep(S₂).

Proof. Assume φ S₁ = S₂, where φ is a name preserving realisation. Clearly Δ₁ ⊆ Δ₂. Since φ is a map from structures to structures, we have κ₁^max ⊆ κ₂^max, and since φ is name preserving we have ψ₁(w) = ψ₂(w), showing (11.56).
Conversely, if rep(S₁) ≼ rep(S₂) then Δ₁ ⊆ Δ₂, so for all w ∈ Δ₁ we can define

φ(S₁(w)) = S₂(w).

Since κ₁^max ⊆ κ₂^max and these are maximal, we have that this equation actually defines a map on S₁ and its substructures. Extend this to a total map, also called φ, by letting φ glide outside S₁. By definition φ is a realisation. Moreover, rep(S₁) does not contain ⊤ and rep(S₂) does not contain ⊥, so by (11.56), φ preserves names. □
Unfortunately, name preserving realisations are not terribly interesting in the modules semantics. Indeed, if we are not allowed to change the name of a given structure it is because the structure is "constant" (Section 8.4), and so we should not be allowed to add more components either.
This is the crucial difference between the two approaches. In Aït-Kaci's approach, the adding of components is independent of the symbol ordering. In our approach, the two are linked in that widening is allowed for structures with bound names only. (Names not in M can be represented by ⊤ so that they can be instantiated to either ⊤ or an ordinary structure name.) For instance, in Aït-Kaci's approach one can unify two constant-labelled terms with different components by widening each with the other's components, whereas for us the corresponding unification of two M-named structures must fail. [The pictures illustrating this are lost in this copy.]
Aït-Kaci has an elegant proof that if Σ is a lattice then the well-formed terms over Σ form a lattice with the ordering ≼. The construction of the greatest lower bound corresponds to unification and is construed as a closure operation on the coreference relations. This operation is very similar to the algorithm Unify, up to the difference described above.
Chapter 12
Principal Signatures
In this chapter we shall prove that if a signature expression has a signature then it has a principal signature, as defined in definition 8.7. We do so by giving an algorithm, a so-called signature checker, which either fails or succeeds, and succeeds with a principal signature when any legal signature exists.
The algorithm is called SC, which can be read "signature checker" or "specification checker" depending on the phrase class, and uses the unification algorithm defined in Chapter 11. Before disclosing the fascinating inner workings of SC, let us summarize its properties.
In what follows, U ranges over finite subsets of StrName. Intuitively, U is the set of "used" names.

Theorem 12.1 (Simple Soundness of SC) Assume B = M, F, G, E is robust and that strnames B ⊆ U. If

(φ, S, U') = SC(B, sigexp, U)

succeeds then φ is an M-realisation and

φ B ⊢ sigexp ⇒ (∅)S.¹

Similarly, if

(φ, E', U') = SC(B, spec, U)

succeeds then φ is an M-realisation and

φ B ⊢ spec ⇒ E'.

¹Because of the global constraints (definition 9.12), this says rather a lot about φ(B) and S.

Intuitively, U contains all names "used" in B and U' is the set of new names used in the call, i.e., U ∩ U' = ∅.
Recall from Section 9.2 that φ B, the application of φ to B, is defined by

φ B = φ(M, F, G, E) = (M, F, G, φ E).

Here E may contain variable structures that are affected by unification that takes place during the call of SC. At the level of signature declarations, B will be strictly robust, so that φ E = E, but if we are inside a specification or signature expression then E may contain variable structures.

Theorem 12.2 Assume B is robust and that strnames B C U. If


(gyp, S, U') = SC(B, sigexp, U)

succeeds then for every substructure So = (no, Eo) of S,


either no E strnames(<p B) and So E strs(<p B),
or no E U' \ strnames('p B).

The principal signature is obtained as the closure

E = Clos , B S.
(Recall from definition 9.14 that Clos ,B S means (N)S, where
N = strnames S \ strnames(M, F, G, W E).) Note that theorem 12.2 implies that
E is well-formed. (That E is coherent, in fact that {B, E} is coherent, follows
from theorem 12.1).
As for completeness, still assuming B = M, F, G, E:

Theorem 12.3 (Simple Completeness of SC) Assume that B is robust and strnames B ⊆ U, and that ψ is an M-realisation and Σ a signature such that

ψ B ⊢ sigexp ⇒ Σ.

Then (φ, S, U') = SC(B, sigexp, U) succeeds and there exists an M-realisation, ψ', such that whenever Clos_{φ B} S –ψ'→ Σ' then

(ψ' ∘ φ) E = ψ E and Σ' ≥ Σ.

Moreover, if no such ψ, Σ exist then SC fails.
Similarly for specifications.

In other words, if sigexp becomes legal by adding components and sharing to the variable structures in B, then SC succeeds, and the sharing and extra components it introduces are as little as necessary.

Theorem 12.4 (Principal Signatures) If sigexp has a signature in B then it has a principal signature in B.

Proof. The proof uses theorem 12.1, theorem 12.2 and theorem 12.3, all of which we shall soon have the opportunity to prove. Moreover, we use corollary 10.6.
Assume B ⊢ sigexp ⇒ Σ. Then by the global constraints (definition 9.12), B is robust. Let U = strnames B and let ψ = ID. Thus ψ B ⊢ sigexp ⇒ Σ, so by theorem 12.3,

(φ, S, U') = SC(B, sigexp, U)

succeeds and there exists an M-realisation, ψ', such that whenever Clos_{φ B} S –ψ'→ Σ' then

(ψ' ∘ φ) E = E and Σ' ≥ Σ.

By theorem 12.1,

φ B ⊢ sigexp ⇒ (∅)S. (12.1)

Moreover, by theorem 12.2, we can strengthen (12.1) to

φ B ⊢ sigexp ⇒ Clos_{φ B} S (12.2)

by use of the generalization rule without violating the global constraints. Let Σ' be such that Clos_{φ B} S –ψ'→ Σ'. Then ψ'(φ B) = B and Σ' ≥ Σ. Thus it will suffice to prove that

B ⊢ sigexp ⇒ Σ'. (12.3)

This we wish to prove by applying corollary 10.6 to (12.2). Thus we must prove that M-strs Σ' is substructure closed and that {B, Σ'} is coherent. But since every structure free in Clos_{φ B} S occurs free in φ B, every structure free in Σ' occurs free in ψ'(φ B) = B. Therefore M-strs Σ' is substructure closed and {B, Σ'} is coherent. Thus the desired (12.3) follows from (12.2). □

The reader will probably already have noticed the similarity with the soundness and completeness results that hold for the purely applicative type discipline (see Part I, Section 2.4).

12.1 The Signature Checker, SC


The algorithm SC appears below.

SC(B, spec, U) =
  let (M, F, G, E) = B in
  case spec of

  ⟨ ⟩ : (ID, {}, ∅)

  ⟨structure strid : sigexp⟩ :
    let (φ₁, S₁, U₁) = SC(B, sigexp, U)
    in (φ₁, {strid ↦ S₁}, U₁)

  ⟨sharing longstrid₁ = longstrid₂⟩ :
    let strid₁.longstrid₁' = longstrid₁
        fail if strid₁ ∉ Dom E
        (R₁, U₁) = Fresh(longstrid₁', U)
        φ₁ = Unify(M, R₁, E(strid₁))
        strid₂.longstrid₂' = longstrid₂
        fail if strid₂ ∉ Dom E
        (R₂, U₂) = Fresh(longstrid₂', U ∪ U₁)
        φ₂ = Unify(M, R₂, (φ₁ E) strid₂)
        φ₃ = Unify(M, (φ₂ φ₁ E) longstrid₁, (φ₂ φ₁ E) longstrid₂)
    in (φ₃ ∘ φ₂ ∘ φ₁, {}, U₁ ∪ U₂)

  ⟨spec₁ spec₂⟩ :
    let (φ₁, E₁, U₁) = SC(B, spec₁, U)
        (φ₂, E₂, U₂) = SC(φ₁ B ± E₁, spec₂, U ∪ U₁)
    in (φ₂ ∘ φ₁, φ₂ E₁ ± E₂, U₁ ∪ U₂)
  end

SC(B, sigexp, U) =
  let (M, F, G, E) = B in
  case sigexp of

  ⟨sig spec end⟩ :
    let (φ₁, E₁, U₁) = SC(B, spec, U)
        n₁ ∈ StrName \ (U ∪ U₁)
    in (φ₁, (n₁, E₁), U₁ ∪ {n₁})

  ⟨sigid⟩ :
    let (N)S = G(sigid)
        {n₁, ..., nₖ} = N
        {n₁', ..., nₖ'} be disjoint with U
    in (ID, {n₁ ↦ n₁', ..., nₖ ↦ nₖ'} S, {n₁', ..., nₖ'})
  end

In the algorithm, ModL phrases are enclosed in angle brackets to avoid confusion of reserved words. The case

spec = ⟨sharing longstrid₁ = longstrid₂⟩

deserves some comments. The inference rules admit specifications like

structure A : sig end
structure B : sig end
sharing A.C = B.C

because the generalisation and instantiation rules can be used to widen structures that are not constants. To capture this, SC calls a function Fresh, which has the following property:

(S, U') = Fresh(strid₁. ... .stridₖ, U), (k ≥ 0)

always succeeds, where S is a chain of fresh variable structures

S = (n₀, {strid₁ ↦ (n₁, {strid₂ ↦ ... (nₖ₋₁, {stridₖ ↦ (nₖ, {})}) ... })})

and U' = {n₀, ..., nₖ}, U ∩ U' = ∅ and all the nᵢ are distinct.
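
A possible rendering of Fresh in the style of the earlier sketches follows; the name generator newName is hypothetical, not from the thesis, and simply picks a numbered name outside the used set:

    (* generate a name not in `used` *)
    fun newName used =
      let fun try i =
            let val n = "n" ^ Int.toString i
            in if List.exists (fn m => m = n) used then try (i + 1) else n end
      in try 0 end

    (* Fresh(strid1. ... .stridk, U): a chain of k+1 fresh variable structures *)
    fun fresh [] used =
          let val n = newName used in (Str (n, []), [n]) end
      | fresh (strid :: rest) used =
          let val n = newName used
              val (sub, ns) = fresh rest (n :: used)
          in (Str (n, [(strid, sub)]), n :: ns) end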
In the case sigexp = ⟨sigid⟩ the bound names are replaced by fresh names. This is similar to taking a fresh generic instance in the type checker for the applicative type discipline (Part I).

12.2 Soundness of SC
For brevity we shall use the following

Definition 12.5 (Cover) Let B be a robust basis. A set W ⊆ Str covers B if W is simple with respect to M of B and strs(B) ⊆ W.

Theorem 12.6 (Soundness of SC) Assume W covers B = M, F, G, E and M ∪ strnames W ⊆ U. If

(φ, S, U') = SC(B, sigexp, U)

succeeds, then

φ is an M-realisation and (12.4)

φ B ⊢ sigexp ⇒ (∅)S. (12.5)

Moreover, letting W' = strs{φ(W), S},

W' covers φ B, (12.6)

M-strs W' = M-strs W (12.7)

Inv φ ∪ strnames S ⊆ strnames{G, E} ∪ U' (12.8)

φ does not change the name of any structure whose name
occurs in φ E or in S (12.9)

U ∩ U' = ∅. (12.10)

Similarly for specifications.

Before proving this theorem, let us see some of its implications.



Observation 12.7 Although in general strs B is a proper subset of W, we always have

Supp φ ∩ W ⊆ strs E. (12.11)

Proof. We have Inv φ ⊆ strnames G ∪ strnames E ∪ U'. Since strnames W ⊆ U and U ∩ U' = ∅, we have strnames(Supp φ ∩ W) ⊆ strnames G ∪ strnames E, and since φ is an M-realisation this can be strengthened to strnames(Supp φ ∩ W) ⊆ strnames E. Then the coherence of W ensures (12.11).
In general, it will not be the case that Supp φ ⊆ strs E, since SC is such that all the changes it performs to intermediate structures are accumulated in φ.

Observation 12.8 For every substructure S₀ = (n₀, E₀) of S, either n₀ ∈ strnames(φ B) and S₀ ∈ strs{G, φ E}, or n₀ ∈ U' \ strnames(φ W).

Proof. Take S₀ = (n₀, E₀) ∈ strs S.
We first assume that n₀ ∈ strnames(φ W) and prove that n₀ ∈ strnames(φ B) and S₀ ∈ strs{G, φ E}. By (12.8) we have n₀ ∈ strnames G, n₀ ∈ strnames E, or n₀ ∈ U'. We treat each of these three cases in turn.
Assume n₀ ∈ strnames G. Now strs G ⊆ strs(φ W) ⊆ W' and W' is coherent, so we must have S₀ ∈ strs G.
Assume n₀ ∈ strnames E. Since W is coherent, no structure in W \ strs E has name n₀. By observation 12.7, φ glides on all structures in W \ strs E. Since n₀ ∈ strnames(φ W), we must therefore have S₀ ∈ strs(φ E).
Assume n₀ ∈ U'. Then n₀ ∉ U. Hence there must exist a structure in Supp φ ∩ W the image of which contains n₀. Then, by observation 12.7 and the coherence of strs(φ W), we have S₀ ∈ strs(φ E).
Secondly, we assume that n₀ ∉ strnames(φ W) and prove that n₀ ∈ U'. (This is where we shall need property (12.9).) By (12.8) we have n₀ ∈ strnames G, n₀ ∈ strnames E, or n₀ ∈ U'. It will suffice to prove that n₀ ∉ strnames{G, E}. Since n₀ ∉ strnames(φ W) and φ is an M-realisation we have n₀ ∉ strnames G. Moreover, if n₀ did occur in E then, because n₀ occurs in S and by (12.9), n₀ would occur in φ E; but that is absurd since n₀ ∉ strnames(φ W). Therefore n₀ does not occur in E, so it must occur in U'.

Observation 12.9 Theorem 12.6 implies theorem 12.1 and theorem 12.2 as fol-
lows. Assume that B is robust and that strnames B ⊆ U. Let W = strs B. Then
W covers B and M ∪ strnames W ⊆ U. Assume that (φ, S, U′) = SC(B, sigexp, U)
succeeds. Then, by theorem 12.6, φ is an M-realisation and φ B ⊢ sigexp ⇒ (∅) S,
as required in theorem 12.1. Moreover, the conclusion of theorem 12.2 follows
from observation 12.8 and the fact that M ∩ U′ = ∅.

Observation 12.10 To ease the proof of (12.6), we shall now show that if we
can prove (12.4), (12.7), (12.8), (12.10) and that W′ is coherent, then it follows
that W′ covers φ B.

Firstly, that φ B is robust is seen as follows. We have

φ B = φ(M, F, G, E) = (M, F, G, φ E).

Since B is well-formed, φ B is well-formed. Moreover, strs(φ B) is a subset of
W′, which is assumed coherent, so strs(φ B) is coherent. Thus φ B is admissible.
Moreover, strnames F ⊆ M and strnames G ⊆ M since B is robust. We must
also prove that M-strs(φ E) is substructure closed. For this it will suffice to prove
that M-strs(φ B) = M-strs B. But M-strs B ⊆ M-strs(φ B) since φ is an M-
realisation. Conversely, if S = (m, E) ∈ M-strs(φ B) we have S ∈ strs{F, G, E}
or m ∈ Inv φ. In the latter case we have m ∈ strnames{G, E}, by (12.8), and
since S ∈ M-strs W′ = M-strs W and W is coherent we have S ∈ strs{G, E}.
So in either case S ∈ M-strs{F, G, E}, showing M-strs(φ B) ⊆ M-strs B.

Secondly, that W′ is simple is seen as follows. It is finite since W is finite; it is
coherent by assumption; it is substructure closed by definition; and M-strs W′ is
substructure closed since M-strs W′ = M-strs W and M-strs W is substructure
closed.

Thirdly, strs(φ B) ⊆ W′ by the definition of W′. Thus W′ covers φ B.

Observation 12.11 Under the same assumptions as in observation 12.10 we do
not have to worry about the global constraints on (12.5). As we saw there, φ B
is robust, (∅) S is well-formed, strs{φ B, (∅) S} is coherent, since W′ is assumed
coherent, and M-strs S is substructure closed by (12.7).

Proof. [of theorem 12.6] By induction on the number, r, of recursive calls of SC.
The proof uses the soundness theorem for Unify (theorem 11.5), its corollary
(corollary 11.6), and the realisation corollary, corollary 10.6.

Base Case, r = 0   Either spec is the empty specification, or sigexp = sigid, or

spec = sharing longstrid₁ = longstrid₂.

The first of these is trivial, the second easy (use the instantiation rule once), so
let us go straight to the third one, sharing.

By the definition of SC we must have longstrid₁ = strid₁.longstrid′₁ and also
strid₁ ∈ Dom E. Also, (R₁, U₁) = Fresh(longstrid′₁, U). Now let

W₁ = W ∪ strs R₁.

It is easy to check that W₁ is simple with respect to M and that R₁ ∈ W₁ and
E(strid₁) ∈ W₁.
Thus by the soundness theorem for Unify (theorem 11.5) and its corollary
(corollary 11.6) we have

φ₁ R₁ = φ₁(E strid₁)
Inv φ₁ ⊆ strnames{R₁, E strid₁}
φ₁ does not change the name of any structure whose name
  is in strnames(φ₁ R₁)
φ₁ is an M-realisation
φ₁ W₁ is simple with respect to M
M-strs(φ₁ W₁) = M-strs W₁.

Next, let (R₂, U₂) = Fresh(longstrid′₂, U ∪ U₁) and

W₂ = φ₁ W₁ ∪ strs R₂.

Given that φ₁ W₁ is simple with respect to M it is easy to check that W₂ is
simple with respect to M and that R₂ and φ₁(E strid₂) are members of W₂. Thus
using theorem 11.5 and its corollary again, we get

φ₂ R₂ = φ₂(φ₁(E strid₂))
Inv φ₂ ⊆ strnames{R₂, φ₁(E strid₂)}
φ₂ does not change the name of any structure whose name
  is in strnames(φ₂ R₂)
φ₂ is an M-realisation
φ₂ W₂ is simple with respect to M
M-strs(φ₂ W₂) = M-strs W₂.

Hence, letting

W₃ = φ₂ W₂,

W₃ is simple with respect to M, and given the fact that φ₁ and φ₂ are unifiers it is
easy to see that (φ₂ φ₁ E)longstrid₁ and (φ₂ φ₁ E)longstrid₂ are both members
of W₃. Therefore, applying the theorem and corollary a final time,

φ₃((φ₂ φ₁ E)longstrid₁) = φ₃((φ₂ φ₁ E)longstrid₂)
Inv φ₃ ⊆ strnames{(φ₂ φ₁ E)longstrid₁, (φ₂ φ₁ E)longstrid₂}
φ₃ does not change the name of any structure whose name
  is in strnames(φ₃((φ₂ φ₁ E)longstrid₁))
φ₃ is an M-realisation
φ₃ W₃ is simple with respect to M
M-strs(φ₃ W₃) = M-strs W₃.

Now the result of SC is

(φ, E′, U′) = (φ₃ ∘ φ₂ ∘ φ₁, {}, U₁ ∪ U₂).

Clearly φ is an M-realisation and we saw that (φ E)longstrid₁ =
(φ E)longstrid₂, so we have (12.4) and (12.5).
Proof of (12.6). Let W′ = strs{φ W, E′} = strs(φ W). Then

W′ = strs(φ₃ φ₂ φ₁ W)
   ⊆ strs(φ₃ φ₂ φ₁ W₁)
   ⊆ strs(φ₃ φ₂ W₂)
   = φ₃ W₃

and φ₃ W₃ is simple with respect to M, so W′ is coherent. Thus by observa-
tion 12.10 we have (12.6).
Proof of (12.7). We have

M-strs W′ ⊆ M-strs(φ₃ W₃) = M-strs W₃ = M-strs(φ₂ W₂)
          = M-strs W₂ = M-strs{φ₁ W₁, R₂} = M-strs(φ₁ W₁)
          = M-strs W₁ = M-strs{W, R₁} = M-strs W.

The converse is obvious.
As to (12.8) we have

Inv φ ∪ strnames E′ = Inv φ
  ⊆ Inv φ₃ ∪ Inv φ₂ ∪ Inv φ₁
  ⊆ Inv φ₂ ∪ Inv φ₁ ∪ strnames E
  ⊆ U₂ ∪ Inv φ₁ ∪ strnames E
  ⊆ U₂ ∪ U₁ ∪ strnames E
  ⊆ U′ ∪ strnames{E, G}.
Proof of (12.9). Since φ₁ does not change the name of any structure whose
name is in strnames(φ₁ R₁) and since Inv φ₁ ⊆ strnames{R₁, E strid₁}, we have
that φ₁ does not change the name of any structure whose name is in φ₁ E.
φ₁ does not change the name of any structure whose name is in R₂, and since
Inv φ₂ ⊆ strnames{R₂, φ₁(E strid₂)}, we have that φ₁ does not change the name
of any structure whose name occurs in φ₂ φ₁ E. Since Inv φ₃ ⊆ strnames(φ₂ φ₁ E),
we have that φ₁ does not change the name of any structure whose name is in
φ₃ φ₂ φ₁ E, i.e., in φ E.

Next, since φ₂ does not change the name of any structure whose name is
in strnames(φ₂ R₂) and Inv φ₂ ⊆ strnames{R₂, φ₁(E strid₂)}, we have that φ₂
does not change the name of any structure whose name is in φ₂ φ₁ E. Since
Inv φ₃ ⊆ strnames(φ₂ φ₁ E) we have that φ₂ does not change the name of any
structure whose name occurs in φ₃ φ₂ φ₁ E, i.e., in φ E.

Finally, since φ₃ does not change the name of any structure whose name
occurs in φ₃((φ₂ φ₁ E)longstrid₁) and

Inv φ₃ ⊆ strnames{(φ₂ φ₁ E)longstrid₁, (φ₂ φ₁ E)longstrid₂},

we have that φ₃ does not change the name of any structure the name of which
is in φ E.

Since neither φ₁ nor φ₂ nor φ₃ changes the name of any structure whose name
is in φ E, we have that φ has that property too. This proves (12.9).

Finally, U ∩ U′ = U ∩ (U₁ ∪ U₂) = ∅ by the properties of Fresh, show-
ing (12.10).

Inductive Step, r > 0   There are the following cases.

spec = structure strid : sigexp   Use rule 8.9 once.

spec = spec₁ spec₂   The call

(φ₁, E₁, U₁) = SC(B, spec₁, U)

succeeded, so by induction, letting

W₁ = strs{φ₁(W), E₁}    (12.12)

we have

φ₁ is an M-realisation    (12.13)

φ₁ B ⊢ spec₁ ⇒ E₁    (12.14)

W₁ covers φ₁ B    (12.15)

M-strs W₁ = M-strs W    (12.16)

Inv φ₁ ∪ strnames E₁ ⊆ strnames{G, E} ∪ U₁    (12.17)

φ₁ does not change the name of any structure whose
name is in φ₁ E or in E₁    (12.18)

U ∩ U₁ = ∅.    (12.19)

We now wish to use the induction hypothesis once again, with W₁ for W,
φ₁ B ± E₁ for B and U ∪ U₁ for U. Given that W₁ covers φ₁ B and contains all
structures in E₁, we have that W₁ covers φ₁ B ± E₁, if just φ₁ B ± E₁ is robust.
But φ₁ B ± E₁ is admissible by (12.14), and strnames F ⊆ M and strnames G ⊆
M since φ₁ B is robust, and M-strs(φ₁ E ± E₁) is substructure closed by (12.14).
Thus φ₁ B ± E₁ is robust, so W₁ covers φ₁ B ± E₁.

Before we can apply the induction we must also check that

M of (φ₁ B ± E₁) ∪ strnames W₁ ⊆ U ∪ U₁,

which is seen as follows: M of (φ₁ B ± E₁) ∪ strnames W₁ = M ∪ strnames W₁ ⊆
M ∪ strnames W ∪ U₁ ⊆ U ∪ U₁, using (12.17).


Thus we can apply the induction hypothesis to get

φ₂ is an M-realisation    (12.20)

φ₂(φ₁ B ± E₁) ⊢ spec₂ ⇒ E₂    (12.21)

W₂ covers φ₂(φ₁ B ± E₁)    (12.22)

M-strs W₂ = M-strs W₁    (12.23)

Inv φ₂ ∪ strnames E₂ ⊆ strnames{G, φ₁ E ± E₁} ∪ U₂    (12.24)

φ₂ does not change the name of any structure whose
name is in φ₂(φ₁ E ± E₁) or in E₂    (12.25)

U₂ ∩ (U ∪ U₁) = ∅,    (12.26)

where

W₂ = strs{φ₂(W₁), E₂}.
Now SC succeeds with

(φ, E′, U′) = (φ₂ ∘ φ₁, φ₂ E₁ ± E₂, U₁ ∪ U₂).

Letting W′ = strs{φ(W), E′} we have

W′ = strs{φ₂ φ₁ W, φ₂ E₁ ± E₂}
   = strs{φ₂ φ₁ W, φ₂ E₁, E₂}
   ⊆ strs{φ₂(strs{φ₁ W, E₁}), E₂}
   = strs{φ₂(W₁), E₂}
   = W₂

i.e.,

W′ ⊆ W₂.    (12.27)

Since W₂ is coherent, W′ is also coherent, and that will do for (12.6), cf.
observation 12.10. Moreover, φ is clearly an M-realisation, i.e., we have (12.4).

To see (12.7), note that M-strs W′ ⊆ M-strs W by (12.27), (12.23) and (12.16).
Conversely,

M-strs W ⊆ M-strs{φ W, E′} = M-strs W′

since φ is an M-realisation and M-strs W is substructure closed.
From (12.14) we may deduce

φ₂(φ₁ B) ⊢ spec₁ ⇒ φ₂ E₁    (12.28)

if (and now I refer to corollary 10.6)

φ₂(φ₁ B) is robust    (12.29)

M-strs(φ₂ E₁) is substructure closed    (12.30)

{φ₂(φ₁ B), φ₂ E₁} is coherent.    (12.31)

As to (12.31), strs{φ₂(φ₁ B), φ₂ E₁} = strs{F, G, φ₂ φ₁ E, φ₂ E₁} ⊆ strs(φ₂ W₁)
⊆ W₂, which is coherent. Hence (12.31). As for (12.30) we have M-strs(φ₂ E₁) ⊆
M-strs(φ₂ W₁) ⊆ M-strs{φ₂ W₁, E₂} = M-strs W₂ = M-strs W, which is sub-
structure closed. To show (12.29), given that

φ₂ φ₁ B = (M, F, G, φ₂ φ₁ E)

and the coherence of φ₂ W₁, it will suffice to prove that M-strs(φ₂ φ₁ E) is
substructure closed. But M-strs(φ₂ φ₁ E) ⊆ M-strs(φ₂ W₁) ⊆ M-strs W as in
the proof of (12.30).
Thus we have (12.28). But this, combined with (12.21), gives the desired

φ B ⊢ spec₁ spec₂ ⇒ φ₂ E₁ ± E₂.

As for (12.8) we have

Inv φ ∪ strnames E′ ⊆ Inv φ₁ ∪ Inv φ₂ ∪ strnames{E₁, E₂}
  ⊆ strnames{G, E, E₁} ∪ U₂ ∪ Inv φ₁    by (12.24)
  ⊆ strnames{G, E} ∪ U₁ ∪ U₂    by (12.17)
  = strnames{G, E} ∪ U′.

Proof of (12.9). By (12.24) and the fact that φ₁ glides on every structure
whose name is in U₂, we get from (12.18) that φ₁ does not change the name of
any structure whose name is in φ₂ φ₁ E or in φ₂ E₁ or in E₂.

Thus φ₁ does not change the name of any structure whose name is in φ E or
in E′ = φ₂ E₁ ± E₂. From (12.25) and (12.24) we get that φ₂ does not change the
name of any structure whose name is in φ₂ φ₁ E. Thus by (12.25), φ₂ does not
change the name of any structure whose name is in φ E or in E′ = φ₂ E₁ ± E₂.
Thus φ = φ₂ ∘ φ₁ does not change the name of any structure whose name occurs
in φ E or in E′.
This leaves the trivial (12.10).

sigexp = sig spec end Easy inductive step using one application of rule 8.12.

12.3 Completeness of SC
We shall prove the following strengthened version of the completeness result.

Theorem 12.12 (Completeness of SC) Assume W covers B = M, F, G, E
and M ∪ strnames W ⊆ U. For all M-realisations ψ₀ and signatures Σ₀ =
(N₀)S₀, if

ψ₀ B ⊢ sigexp ⇒ Σ₀    (12.32)

then (φ, S, U′) = SC(B, sigexp, U) succeeds and there exist an M-realisation ψ₁
and a signature Σ₁ such that

ψ₀ ↓ W = (ψ₁ ∘ φ) ↓ W,    (12.33)

Σ --ψ₁--> Σ₁ and Σ₁ ≥ Σ₀,    (12.34)

where Σ = Clos_{φW} S. Moreover, if SC does not succeed, it stops with fail.

Similarly for specifications.

From observation 12.8 we know that strs Σ ⊆ strs(φ W). Therefore, if there
exists a ψ₁ as described above then there exists one that in addition satisfies
Supp ψ₁ ⊆ strs(φ W).

Moreover, to prove (12.34) without having to mention renamings explicitly,
notice the following. Let Σ = (N)S and Σ₀ = (N₀)S₀ be well-formed signatures,
and let φ be a realisation such that N₀ ∩ Reg(φ ↓ strs Σ) = ∅ and φ(S) = S₀. Then
there exists a signature Σ₁ such that

Σ --φ--> Σ₁ and Σ₁ ≥ Σ₀.    (12.35)

More generally, let Σ = (N)S and Σ₀ = (N₀)S₀ be well-formed signatures and
φ be a realisation. Then there exists a set W ⊆ Str with strs Σ ⊆ W and
boundstrs(Σ) ∩ W = ∅ and N₀ ∩ Reg(φ ↓ strs Σ) = ∅ and φ S = S₀ if and only if
there exists a Σ₁ such that (12.35).

Thus the conclusion of theorem 12.12 is equivalent to the existence of an
M-realisation ψ₁ with

ψ₀ ↓ W = (ψ₁ ∘ φ) ↓ W,

N₀ ∩ Reg(ψ₁ ↓ strs Σ) = ∅ and ψ₁ S = S₀.

Proof. [of theorem 12.12] The proof uses the soundness and completeness results
about Unify (theorem 11.5 and theorem 11.7), the soundness of SC (theorem 12.6)
and observation 12.8.

The two parts of the statement (regarding success and failure, respectively)
are proved separately. The first part is proved by induction on the depth of
inference of ψ₀ B ⊢ sigexp ⇒ Σ₀ and ψ₀ B ⊢ spec ⇒ E₀. There is one case for
each inference rule.

The case for empty specifications is trivial: take ψ₁ = ψ₀. In the case for
structure specifications one takes the ψ₁ that is supplied by induction. Let us
therefore go straight to the first non-trivial case.
Sequential Specification, rule 8.10   Consider a proof ending

ψ₀ B ⊢ spec₁ ⇒ E₁′    ψ₀ B ± E₁′ ⊢ spec₂ ⇒ E₂′
------------------------------------------------    (12.36)
ψ₀ B ⊢ spec₁ spec₂ ⇒ E₁′ ± E₂′

By induction (φ₁, E₁, U₁) = SC(B, spec₁, U) succeeds and there exists an M-
realisation ψ₁ with

ψ₀ ↓ W = (ψ₁ ∘ φ₁) ↓ W    (12.37)

and

ψ₁ E₁ = E₁′.    (12.38)
Now we wish to use induction again, this time on the second premise of (12.36).
To see that this basis is of the required kind, we compute

ψ₀ B ± E₁′ = ψ₁(φ₁ B) ± ψ₁ E₁    by (12.37) and (12.38)
           = ψ₁(φ₁ B ± E₁)
           = ψ₁ B₁,

where

B₁ = φ₁ B ± E₁.

Thus, by (12.36), we have ψ₁ B₁ ⊢ spec₂ ⇒ E₂′ in strictly fewer steps.

Let W₁ = strs{φ₁(W), E₁}. As we saw in the proof of theorem 12.6, W₁
covers B₁ and M ∪ strnames W₁ ⊆ U ∪ U₁. Thus by induction (φ₂, E₂, U₂) =
SC(B₁, spec₂, U ∪ U₁) succeeds and there exists an M-realisation ψ₂ with

ψ₁ ↓ W₁ = (ψ₂ ∘ φ₂) ↓ W₁    (12.39)

and

ψ₂ E₂ = E₂′.    (12.40)

Thus SC succeeds with (φ, E′, U′) = (φ₂ ∘ φ₁, φ₂ E₁ ± E₂, U₁ ∪ U₂). Now
ψ₀ ↓ W = (ψ₂ ∘ φ) ↓ W since for each w ∈ W,

ψ₀ w = ψ₁(φ₁ w)    by (12.37)
     = ψ₂(φ₂(φ₁ w))    by (12.39)
     = ψ₂(φ w),

as desired. Moreover,

ψ₂ E′ = ψ₂(φ₂ E₁ ± E₂)
      = ψ₂ φ₂ E₁ ± E₂′    by (12.40)
      = ψ₁ E₁ ± E₂′    by (12.39)
      = E₁′ ± E₂′,

as desired.

Sharing, rule 8.11   Consider a proof of the form

n of ((ψ₀ B)longstrid₁) = n of ((ψ₀ B)longstrid₂)
---------------------------------------------------    (12.41)
ψ₀ B ⊢ sharing longstrid₁ = longstrid₂ ⇒ {}

Write longstridᵢ as stridᵢ.longstrid′ᵢ, for i = 1, 2. By (12.41) we have strid₁ ∈
Dom E, so SC does not fail at this point. As in SC, let

(R₁, U₁) = Fresh(longstrid′₁, U).

We now wish to use the completeness result about Unify (theorem 11.7) a first
time. To this end let

W₁ = W ∪ strs R₁.

W₁ is simple with respect to M and it contains R₁ and E(strid₁). Moreover,
(ψ₀(E strid₁))longstrid′₁ exists by (12.41). Since strs R₁ ∩ W = ∅ there exists
an M-realisation ψ₁ such that ψ₁ ↓ W = ψ₀ ↓ W and ψ₁(R₁) = ψ₁(E strid₁).
Thus by theorem 11.7, φ₁ = Unify(M, R₁, E strid₁) succeeds and there exists an
M-realisation ψ₂ such that

ψ₁ ↓ W₁ = (ψ₂ ∘ φ₁) ↓ W₁.    (12.42)

Next, strid₂ ∈ Dom E by (12.41), so SC does not fail that test. Let (R₂, U₂) =
Fresh(longstrid′₂, U ∪ U₁). We now wish to apply the completeness result for Unify
a second time.

To this end, let W₂ = φ₁ W₁ ∪ strs R₂. It follows from theorem 11.5 that W₂
is simple with respect to M. Now R₂ and φ₁(E strid₂) are in W₂ and they have
an M-unifier. More precisely,

(ψ₀ E)longstrid₂ = ((ψ₀ E)strid₂)longstrid′₂
  = (ψ₀(E strid₂))longstrid′₂
  = (ψ₁(E strid₂))longstrid′₂    as E strid₂ ∈ W
  = (ψ₂(φ₁(E strid₂)))longstrid′₂    by (12.42).

Thus (ψ₂(φ₁(E strid₂)))longstrid′₂ exists, and since strs(R₂) ∩ strs(φ₁ W₁) = ∅
there exists an M-realisation ψ₃ with ψ₃ ↓ strs(φ₁ W₁) = ψ₂ ↓ strs(φ₁ W₁) which
unifies φ₁(E strid₂) and R₂. Thus by theorem 11.7,

φ₂ = Unify(M, R₂, φ₁(E strid₂))

succeeds and there exists an M-realisation ψ₄ such that

ψ₃ ↓ W₂ = (ψ₄ ∘ φ₂) ↓ W₂.    (12.43)

We now want to apply the completeness result for Unify a third time. To this
end, let

W₃ = φ₂ W₂.

Theorem 11.5 ensures that W₃ is simple with respect to M. Both
(φ₂ φ₁ E)longstrid₁ and (φ₂ φ₁ E)longstrid₂ exist and are in W₃. Moreover, ψ₄
unifies them since

ψ₄((φ₂ φ₁ E)longstridᵢ) = (ψ₄ φ₂ φ₁ E)longstridᵢ
  = (ψ₃ φ₁ E)longstridᵢ    by (12.43)
  = (ψ₂ φ₁ E)longstridᵢ
  = (ψ₁ E)longstridᵢ    by (12.42)
  = (ψ₀ E)longstridᵢ

so by the premise of (12.41) and the coherence of ψ₀ B, ψ₄ is an M-unifier for
(φ₂ φ₁ E)longstrid₁ and (φ₂ φ₁ E)longstrid₂. Thus by theorem 11.7 the last call
succeeds and there exists an M-realisation ψ₅ such that

ψ₄ ↓ W₃ = (ψ₅ ∘ φ₃) ↓ W₃.    (12.44)

Thus SC returns (φ, E′, U′) = (φ₃ ∘ φ₂ ∘ φ₁, {}, U₁ ∪ U₂). We have ψ₀ ↓ W =
(ψ₅ ∘ φ) ↓ W since for all w ∈ W,

ψ₀(w) = ψ₁(w)
  = ψ₂(φ₁ w)    by (12.42)
  = ψ₃(φ₁ w)
  = ψ₄(φ₂ φ₁ w)    by (12.43)
  = ψ₅(φ₃ φ₂ φ₁ w)    by (12.44)
  = ψ₅(φ w).

Finally, ψ₅ E′ = ψ₅{} = {} = E₀ as desired.
Basic Signature Expression, rule 8.12   Consider a proof ending

ψ₀ B ⊢ spec ⇒ E₀
----------------------------------    (12.45)
ψ₀ B ⊢ sig spec end ⇒ (∅)(n, E₀)

By induction (φ₁, E₁, U₁) = SC(B, spec, U) succeeds and there exists an M-
realisation ψ₁ such that

ψ₀ ↓ W = (ψ₁ ∘ φ₁) ↓ W    (12.46)

and

ψ₁ E₁ = E₀.    (12.47)

Thus SC succeeds with (φ, S, U′) = (φ₁, (n₁, E₁), U₁ ∪ {n₁}), where n₁ ∉
U ∪ U₁. The desired conclusions follow from (12.46) and (12.47), provided n₁
really is bound in Clos_{φ₁W}(n₁, E₁). But that follows from theorem 12.6, which
ensures that strnames(φ₁ W) ⊆ U ∪ U₁.
Signature Identifier, rule 8.13   Consider a proof of the form

(ψ₀ B)(sigid) = Σ₀
-------------------
ψ₀ B ⊢ sigid ⇒ Σ₀.

Since ψ₀ is an M-realisation and B is robust we have G(sigid) = Σ₀. Thus SC
returns

(φ, S, U′) = (ID, {n₁ ↦ n′₁, …, n_k ↦ n′_k} S₀, {n′₁, …, n′_k}),

where Σ₀ = (N₀)S₀, N₀ = {n₁, …, n_k} and {n′₁, …, n′_k} ∩ U = ∅. Hence

Σ = Clos_{φW} S = ({n′₁, …, n′_k})({n₁ ↦ n′₁, …, n_k ↦ n′_k} S₀).

Let ψ₁ = ψ₀. Then (12.33) is obvious. As for (12.34) we have Σ --ψ₁--> Σ since ψ₁
is an M-realisation and strnames Σ ⊆ M. Moreover, Σ ≥ Σ₀, in fact Σ = Σ₀ up
to renaming of bound names, since {n′₁, …, n′_k} ∩ U = ∅ ensures that no n′ᵢ is
free in Σ.

The Generalization Rule, rule 8.14   Consider a proof ending

ψ₀ B ⊢ sigexp ⇒ (N₀)S₀    n₀ ∉ strnames(ψ₀ B)
-----------------------------------------------
ψ₀ B ⊢ sigexp ⇒ (N₀ ∪ {n₀})S₀

By induction (φ, S, U′) = SC(B, sigexp, U) succeeds and there exists an M-
realisation ψ₁ such that

ψ₀ ↓ W = (ψ₁ ∘ φ) ↓ W    (12.48)

as desired, and also, letting Σ = Clos_{φW} S,

N₀ ∩ Reg(ψ₁ ↓ strs Σ) = ∅ and ψ₁ S = S₀.    (12.49)

To strengthen (12.49) to the desired

(N₀ ∪ {n₀}) ∩ Reg(ψ₁ ↓ strs Σ) = ∅ and ψ₁ S = S₀,

we must show that n₀ ∉ Reg(ψ₁ ↓ strs Σ). This is seen as follows. By observa-
tion 12.8 we have strs Σ ⊆ strs(φ B). Thus

Rng(ψ₁ ↓ strs Σ) ⊆ strs(ψ₁(φ B))
                = strs(ψ₀ B)    by (12.48)

so Reg(ψ₁ ↓ strs Σ) ⊆ strnames(ψ₀ B). Since n₀ ∉ strnames(ψ₀ B) we have
n₀ ∉ Reg(ψ₁ ↓ strs Σ).

The Instantiation Rule, rule 8.15   Use induction once, take the ψ₁ provided by
induction and use the transitivity of ≥.

This concludes the first half of the proof (regarding the success of SC). The
second part is shown by structural induction on sigexp and spec. We show the
two non-trivial cases:

Sharing, spec = sharing longstrid₁ = longstrid₂   So we have assumed that W
covers B and M ∪ strnames W ⊆ U and SC(B, spec, U) does not succeed, and we
want to show that it terminates with fail. Looking at SC, this is clear if strid₁ ∉
Dom E, so assume strid₁ ∈ Dom E. Next, (R₁, U₁) = Fresh(longstrid′₁, U) suc-
ceeds. Let W₁ = W ∪ strs(R₁). As was seen above, W₁ is simple with respect to M
and contains R₁ and E strid₁. Then by theorem 11.7, the first call of Unify either
succeeds or stops with fail. In the latter case we are done, so assume the former.
The argument is repeated for the second call using W₂ = φ₁(W₁) ∪ strs(R₂), and
finally, if the second call succeeded, we have that W₃ = φ₂ W₂ is simple with
respect to M and contains the arguments of the final unification which, since it
cannot succeed, must stop with fail by theorem 11.7.

Sequential Specification, spec = spec₁ spec₂   Assume SC(B, spec, U) does not
succeed. Then, by induction, if the first call of SC does not succeed it stops
with fail (and we are done); so assume it succeeds. Then, as we saw above,
W₁ = strs(φ₁ W) ∪ strs(E₁) covers B₁ = φ₁ B ± E₁ and M ∪ strnames W₁ ⊆ U ∪ U₁,
and the second call of SC cannot have succeeded, so by induction it stopped with
fail.

Having proved theorem 12.12, we see that it really does imply theorem 12.3:
Assume B robust, strnames B ⊆ U, ψ an M-realisation and Σ a signature
such that ψ B ⊢ sigexp ⇒ Σ. Let W = strs B. Then W covers B and
M ∪ strnames W ⊆ U, so by theorem 12.12 (φ, S, U′) = SC(B, sigexp, U) succeeds
and there exist an M-realisation ψ′ and a signature Σ₁ such that ψ ↓ W =
(ψ′ ∘ φ) ↓ W, Clos_{φW} S --ψ′--> Σ₁, and Σ₁ ≥ Σ. Thus (ψ′ ∘ φ) E = ψ E. Moreover
(and this is the only interesting point in this argument), Clos_{φB} S = Clos_{φW} S
by observation 12.8. Thus, if Clos_{φB} S --ψ′--> Σ′, then Clos_{φW} S --ψ′--> Σ′, and since
also Clos_{φW} S --ψ′--> Σ₁ we must have Σ′ = Σ₁ by lemma 9.17, so
Σ′ ≥ Σ₁ ≥ Σ as desired.
Chapter 13
Conclusion
We first summarize what has been done and then comment on the role of oper-
ational semantics in language design.

13.1 Summary
We have investigated three type disciplines that accommodate different, and yet
similar, kinds of polymorphism.
The first was Milner's polymorphic type discipline. We stated its consistency
with a dynamic operational semantics by defining a relation between values and
types. In particular, the definition of what it is for a closure to have a type is
similar to the definition of what it is for a function in a domain to have that type.
We then proved the soundness by structural induction.
The second is a new type discipline for polymorphic references. It is less
powerful but simpler than Damas' system and I believe the new system is pow-
erful enough to be of practical use. It is easy to extend the distinction between
expansive and non-expansive expressions to a larger language. Besides the type
inference system itself, the technical novelty is the definition of the relation be-
tween values and types as the maximal fixpoint of a monotonic operator.
The third is a type discipline for program modules. We saw how one set of
inference rules gives rise to two different semantics, one based on coherence and
one based on consistency. The former was investigated in detail. A signature
checker was presented and proved sound and complete with respect to the se-
mantics. The signature checker uses a generalization of the ordinary first order
term unification algorithm.
Milner's polymorphic type discipline strongly influenced the two others. The

connections are summarized in this table.

The applicative language    The imperative language    The module language
--------------------------------------------------------------------------
type variable, t            type variable, t or u      structure name, n
type, τ                     type, τ                    structure, S
type scheme, σ              type scheme, σ             signature, Σ
substitution, S             substitution, S            realisation, φ
instantiation, σ ≥ τ        instantiation, σ ≥ τ       instantiation, Σ ≥ S
As a consequence of the similarity, the three type checkers are all very similar.

13.2 How the ML Modules Evolved


The work I have done on the modules semantics has been just a small part of the
efforts to define a modules concept for ML. This section gives a brief summary
of how the ML modules evolved.
It is not possible to decide the origin of every bit of progress, since the design
of ML was a collective effort involving a large number of people, but the following
should give a rough idea about who did what.
The development of the core language and the development of the modules
part up till May 85 is reported in the introduction of [18]. Since then, the main
language design challenge has been to gain complete control over the semantics
of modules.
Luca Cardelli's ML implementation [7] had a notion of separate compilation,
but the basic scheme with structures, signatures and functors was proposed by
David MacQueen in 1983 [24]. His proposal was discussed at a design meeting
held in Edinburgh in June 84 and finally approved at a second design meeting
held in Edinburgh in May 85. Thus, the proposal gradually changed
status and became a definition [18].
Don Sannella was the first to write a modules semantics. He produced several
versions of a denotational semantics for modules [39] early in 85. From this effort
it became clear that significant semantic issues had not been clarified by the
informal definition. In projects where several people try to work together on
arriving at a standard, a very delicate problem is who has the right to decide
what the language is, and a major reason that the denotational semantics was
never finished was that a consensus was hard to achieve. However, this first
semantics contained the notion of names and unification quite explicitly.

The work on an operational semantics of ML dates back at least to 1984,


when Milner wrote a dynamic semantics for the core language. In summer 85,
Harper and I wrote a static operational semantics. I was responsible for the
core language. Robert Harper did the modules language semantics concurrently
with doing the first modules implementation. The notion of structure names is
present both in the semantics and the implementation. The implementation did
unification of structures. In the semantics, however, sharing equations did not
lead to identification of names in signatures. Instead, sharing equations were
held as an attribute of signatures that was checked as a part of matching.
Milner's "webs" [29] represented sharing using congruences rather than names.
Eventually, the scheme with names was preferred. Milner presented the ideas of
names, realisation and unification and gave inference rules in several notes [30,
31, 32, 33]. He invented the semantic objects that with very minor changes have
been used in this thesis and in the full ML semantics [20]. He also gave a sig-
nature checker that is similar to Harper's implementation and which served as a
model for the ones I present herein.
In June 86, Harper wrote yet another modules semantics with a function
GENSTR producing what was subsequently called principal signatures.
I joined the work on the static semantics of modules in April 86. In June 86,
I defined a slightly different notion of realisation, argued that realisations corre-
spond to substitutions and generic instance corresponds to signature matching,
removed unification from the rules by allowing instantiation and generalisation
rules and started talking about principal signatures [40]. It took me from sum-
mer 86 to March 87 to prove the existence of principal unifiers and principal
signatures based on a coherent semantics.
In March 87 David MacQueen visited Edinburgh and advocated that the
semantics should be based on consistency rather than coherence. His ML compiler
is based on consistency. (The informal language definition did not clarify whether
coherence or consistency should underlie the semantics, so when people wrote a
semantics or a compiler they made different choices). Milner realised that the
inference rules we had arrived at by then with minor changes could express the
consistent semantics. Thus, in March 87, Harper, Milner, MacQueen and I settled
for the consistent semantics, because we felt that explicit signature constraints
on structure declarations had to be coercive.
Finally, in August 87, we were able to print an operational semantics of

ML [20].

13.3 The Experience of Using Operational Semantics
The ease with which we could state the consistency results in Part I and II
highlights the naturalness of operational semantics and basic set theory. We
did not have to use domain theory, in fact the symbolic nature of operational
semantics is essential for making the typing relation in Part II monotonic, hence
avoiding the difficult domain construction to which Damas had to resort.
From the outset it was known that the applicative type discipline was un-
sound in the imperative setting. However, even given a counter-example it is not
obvious just why the system is unsound.
Knowing that the system was unsound, I pretended that I wanted to formulate
and prove its soundness using operational semantics, and when I found that the
proof broke down at one, and only one, point I felt satisfied that the problem
is the quantification of type variables that occur free in the store typing. Store
typings take part neither in the dynamic nor in the static semantics but without
them the soundness theorem was obviously false (see Section 3.2). Concentrating
on the proof method led to the discovery of what the problem was.
The reader may argue that this is not too surprising now that we know that
the problem is a very specific technicality. Indeed, the applicability of operational
semantics to a problem of this character does not say much about operational
semantics used on a large scale. So let us now turn to the experience gained from
the work on one big project, the writing of the operational semantics of ML.
As described in the previous section, there were several "competing" semantic
approaches. So in case anybody was in doubt, operational semantics does not
automatically lead to a straight-line development of programming language definitions.
(The same would be true of any semantic school, I'm sure). However, and this
is essential, it did provide an excellent focus for technical discourse. Different
semantic alternatives could often be localized to a few changes in the rules and
this made the alternatives easier to understand. Moreover, and this must be
a criterion for any theoretical framework, most of the problems one is faced
with when writing the semantics are genuine semantical problems rather than
problems that arise because one has chosen this particular framework.

But how were the semantic issues isolated? Some came as a result of just
trying to write down the rules. But there is a long way from writing down
a set of rules to understanding how they interact and how they depend upon
assumptions about well-formedness constraints on the semantic objects. These
things were often understood only as a result of trying to prove some theorems
about the skeletal language.
We did not first define a semantics and then prove theorems about it. The
definition of the semantics and the properties of it developed side by side influ-
encing each other. The process was akin to that of developing a program and
proving it correct at the same time. Thus the proof activity is at the very heart
of operational semantics and deserves special attention, not least because finding
and proving theorems is a complex intellectual activity, costly in time and en-
ergy. What are the complications that arise if we try to scale up the work from a
skeletal language to a realistic programming language? To what extent can we
envisage the use of computers to assist the proof finding?
The proofs I have done regarding ModL are fairly long, I think, and I want to
base my answer to the above two questions on the experience of doing these proofs
by hand. In retrospect, it seems that there were two factors that complicated
the work.
The first factor is that although we use relatively standard mathematical
reasoning, we often find that we are not really interested in reasoning about
natural numbers, polynomials, or other objects that we are already familiar with
from mathematics. On the contrary, we have to define our own semantic objects,
such as structures, signatures, realisations. As these objects are new in our
experience we have to do a lot of basic reasoning. The technology of substitutions
is a refined one that has taken time to mature. When we generalize to realisations,
we have once again to think about renaming, free and bound names and so on,
even though the definition of realisation maps itself is simple.
The second factor is that the constructions we want to prove theorems about
are bigger than normal in mathematics. A semantics with 15 inference rules is
a very small semantics compared with a real language semantics but it feels big
once you start a case analysis with 15 cases. I have not experienced that most of
the cases are trivial, indeed the norm has been that many cases require good ideas
and that it takes many failed attempts before one finds an induction hypothesis
that survives all 15 cases.

The two factors are active at the same time so that the proof activity involves
high-level considerations (will this induction hypothesis work?) and, at the same
time, low-level considerations (what do I need to know about realisations to
rewrite φ{n ↦ n′} S to {n′ ↦ n} φ{n ↦ n′} S?). The basic concepts are not
given from the outset but emerge as a result of looking for theorems. When one
is stuck in a proof, at least while the theory is still young, it is sometimes hard to
decide whether to try to change the theorem or the basic definitions that underlie
it.

Both factors contribute to giving long proofs, indeed the proofs I have done
are long more because of these two factors than because I'm trying to prove
something very surprising.
To return to the original questions, the larger the language, the more new
semantics objects we have to define and get used to. There will be more con-
nections to understand at the basic theoretical level. The larger the language,
the more cases to consider and the more times we have to start all over again.
Different individuals can cope with different levels of complexity, of course, but
will not everybody feel that at some point they will start forgetting dependencies,
getting the same good ideas many times, changing indecisively from alternative
to alternative?
That machines should be able to take over the proof finding at the point
where we have to give up seems absurd considering the things we can successfully
program machines to do. A much more sensible goal is to use the machine to
help us organize our proof finding.
A handwritten proof is a sequence of sentences. Some of these serve to explain
the main ideas of the proof; some are used to introduce new notation and to bind
variables. Others are judgements. A sequence of judgements will normally have
to speak for itself. We do not consistently refer to a set of inference rules and
lemmas to chain together the individual judgements. After all, we do not want
to be more pedantic than we have to in order to feel confident that what we write
is correct. For each judgement there is an implicit unproved lemma, namely that
this judgement follows from the previous judgements. Whenever we choose to
be explicit about how one judgement is obtained from its predecessors, there is
the possibility of letting the machine "perform" the inference as a computation
so that we don't have to worry about this proof step, but only the rule applied,
if the proof is to be done again later on.

Writing a proof using just a screen editor or pen and paper is like pro-
gramming in an untyped language. One has complete freedom but little control
of the inferences one makes. The more explicit one makes the structure of a
proof, the more work one has to set up the proof, but the pay-off is that one (or
a machine) can perform a kind of type checking of the proof. If, for instance, you
have to check 5 things before you apply an induction hypothesis, you should not
get away with forgetting to check one of them. Indeed, the paradigm in much
current work on formalizing logics is that proof checking is type checking (see for
instance the Edinburgh Logical Framework, [17]). A tool that allows you to be
explicit when you want to and does not force you to be explicit when you don't
want to could be of tremendous help in the proof finding process.
Incidentally, the problem of polymorphic references seems very suitable for
machine-assisted proof. It has deceived people again and again. Everybody I
know who has tried to solve the problem shares the experience that, firstly, it is
hard to say what the problem really is and, secondly, that when something seems
to work, it is hard to give a good explanation of why it works. With the proof
method I have presented I think there has been sufficient progress that it would
be a realistic task to try to prove (or disprove) the consistency theorem with the
assistance of a machine. Firstly, it would be wonderful to have a strict control of
proofs, and secondly, it would be interesting to find out whether the work would
result in new knowledge about the problem itself.

13.4 Future Work


Regarding the polymorphic references, the conjecture that principal types exist
needs proving. Moreover, we need practical experience to decide whether
the inference system is good to use in practice. When "sensible" programs are
rejected by the type checker, will the simple device of wrapping up expressions in
closures suffice? Also, we need to see how the definitions could be incorporated
in the present ML semantics (it should not be very hard).
Regarding the modules semantics an obvious task is to try to establish results
for the consistent semantics that resemble those for the coherent semantics. At
first sight one might think that the theory for the consistent semantics is simpler,
because the notion of realisation is simpler (every realisation in the consistent
semantics induces a realisation in the coherent semantics). However, coherence
of a set of structures is a stronger condition than consistency, and not every-

thing that works for coherence works for consistency without modification. For
instance, coherence is important for the unification algorithm I have presented.
When relaxing coherence to consistency, special care has to be taken in the uni-
fication algorithm to avoid the creation of cyclic structures. One cannot unify
the (sub)structures (n, {}) and (m, {}), say, if somewhere else there is a structure
(n, {A ↦ (m, {})}).
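To illustrate, here is a small sketch of the extra check such a unifier would need, using the same kind of illustrative representation of structures as before (unsafeToMerge is a hypothetical name): identifying two names is refused whenever one of them already occurs strictly inside a structure carrying the other, since merging them would create a cycle.

    datatype str = STR of int * (string * str) list

    (* All names occurring strictly below the root of a structure. *)
    fun namesBelow (STR (_, env)) =
          List.concat
            (map (fn (_, s as STR (n, _)) => n :: namesBelow s) env)

    (* Identifying names n and m is unsafe, relative to the known
       structures, if some structure named n has a substructure
       named m, or vice versa. *)
    fun unsafeToMerge (n, m) structs =
          List.exists
            (fn (s as STR (k, _)) =>
                (k = n andalso List.exists (fn x => x = m) (namesBelow s))
                orelse
                (k = m andalso List.exists (fn x => x = n) (namesBelow s)))
            structs

For instance, representing the ambient structure (n, {A ↦ (m, {})}) as STR (1, [("A", STR (2, []))]), the call unsafeToMerge (1, 2) [STR (1, [("A", STR (2, []))])] returns true, mirroring the example above.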
The initial constraint that functors cannot take functors as arguments nor
produce functors as result was introduced simply because it was felt that one
should not try to do too many difficult things at the same time. Now that we
understand the semantics of the first order functors, it does not seem frightening
at all to start considering higher order functors. We have already had to introduce
functor signatures and functor instances as semantic objects even before there is
any program notation for writing down and binding functor signatures.
Appendix A
Robustness Proofs
This appendix contains the proofs of the robustness results stated in Chapter 10.

Lemma 10.1 If Σ is admissible and Σ ≥ S′ and Σ --ψ--> Σ₁ then Σ₁ ≥ ψ S′.

Proof. Write Σ as (N)S and Σ₁ as (N₁)S₁. Let ψ₀ = ψ ↓ strs Σ. Since
Σ --ψ--> Σ₁ there exists a bijection p : N → N₁ such that N₁ ∩ Reg ψ₀ = ∅ and

(ψ₀ | p) S = S₁.    (A.1)

By the coherence of S, for every structure T that occurs bound in Σ₁, there
is a unique structure, let us call it ψ⁻¹ T, bound in Σ, such that

(ψ₀ | p)(ψ⁻¹ T) = T.

Since Σ ≥ S′ there exists a realisation φ with Supp φ ⊆ boundstrs Σ and
φ S = S′.

For any substructure T of S₁ let us define

φ′(T) = ψ(φ(ψ⁻¹ T)), if n of T ∈ N₁
φ′(T) = T, otherwise    (A.2)

and extend it to a total map by letting it glide on all structures that do not occur
in S₁. To prove Σ₁ ≥ ψ S′ it will suffice to prove

φ′ is a realisation    (A.3)

φ′ S₁ = ψ S′    (A.4)

Supp φ′ ⊆ boundstrs Σ₁.    (A.5)


As for (A.3), take any T occurring in S₁ and any strid ∈ Dom T. We have
to prove that strid ∈ Dom(φ′ T) and (φ′ T) strid = φ′(T strid), cf. definition 9.4.
Looking at (A.2), this is obvious when T is free in Σ₁.

So assume T is bound in Σ₁. Recalling (A.1), we see that
Dom T = Dom(ψ⁻¹ T). Thus strid ∈ Dom(ψ⁻¹ T). We proceed by case analysis.
(ψ⁻¹ T)strid is free in Σ   Then

φ′(T strid)
  = φ′(((ψ₀ | p)(ψ⁻¹ T))strid)    as T is bound
  = φ′((ψ₀ | p)((ψ⁻¹ T)strid))    as strid ∈ Dom(ψ⁻¹ T) and ψ₀ | p is a
                                  realisation
  = φ′(ψ((ψ⁻¹ T)strid))    as Σ is well-formed and (ψ⁻¹ T)strid is
                           free in Σ
  = ψ((ψ⁻¹ T)strid)    as this is free in Σ₁
  = ψ(φ((ψ⁻¹ T)strid))    as φ is the identity on free structures of
                          Σ, since Σ is well-formed
  = (ψ(φ(ψ⁻¹ T)))strid    as ψ and φ are realisations
  = (φ′(T))strid    by (A.2)

as desired.

(ψ⁻¹ T)strid is bound in Σ   Then

T(strid) = ((ψ₀ | p)(ψ⁻¹ T))strid    as T is bound
         = (ψ₀ | p)((ψ⁻¹ T)strid)    (A.6)

as strid ∈ Dom(ψ⁻¹ T). Thus, since (ψ⁻¹ T)strid is bound in Σ, we have that
T(strid) is bound in Σ₁. Thus by the definition of φ′,

φ′(T(strid)) = ψ(φ(ψ⁻¹(T(strid)))).    (A.7)

By (A.6) we have

ψ⁻¹(T(strid)) = (ψ⁻¹ T)strid

so continuing (A.7) we have

φ′(T(strid)) = ψ(φ((ψ⁻¹ T)strid))
             = (ψ(φ(ψ⁻¹ T)))strid    as φ and ψ are realisations
             = (φ′ T)strid

as desired.

This proves (A.3). Next, we prove (A.4) by a (much simpler) case analysis:

S is bound in Σ   Then (ψ₀ | p)(S) is bound in Σ₁. Then

φ′(S₁) = φ′((ψ₀ | p) S)    by (A.1)
       = ψ(φ(S))    by (A.2)
       = ψ S′.

S is free in Σ   Then boundstrs Σ = ∅, so S = φ S = S′. Also, (ψ₀ | p) S = ψ S
is free in Σ₁, so φ′(S₁) = φ′(ψ S) = ψ S = ψ S′ as desired.

This proves (A.4). Finally, (A.5) follows immediately from (A.2). □

Lemma 10.2 If Σ --ψ--> Σ₁ and Σ′ --ψ--> Σ₁′ and Σ ≥ Σ′ and Σ is admissible then
Σ₁ ≥ Σ₁′.

Proof. From Σ ≥ Σ′ and Σ admissible we get

strs Σ ⊆ strs Σ′    (A.8)

by corollary 9.16. Write Σ′ on the form (N′)S′. Then since Σ′ --ψ--> Σ₁′ we have

Σ₁′ = (N₁′)S₁′ = (N₁′)((ψ₀′ | p′) S′),

where p′ : N′ → N₁′ is a bijection and N₁′ ∩ Reg ψ₀′ = ∅, where ψ₀′ = ψ ↓ strs Σ′.

Since (A.8) and Σ --ψ--> Σ₁, we have

Σ --(ψ₀′ | p′)--> Σ₁.    (A.9)

Surely Σ ≥ S′, so from (A.9) and lemma 10.1 we get

Σ₁ ≥ (ψ₀′ | p′) S′.    (A.10)

Now since Σ --ψ--> Σ₁ and Σ is well-formed, we have

strnames Σ₁ = Reg(ψ ↓ strs Σ).    (A.11)

Since Reg ψ₀′ ∩ N₁′ = ∅ and (A.8), we have Reg(ψ ↓ strs Σ) ∩ N₁′ = ∅. Thus
by (A.11) we have strnames Σ₁ ∩ N₁′ = ∅. Therefore we can use lemma 9.15 to
strengthen (A.10) to the desired

Σ₁ ≥ (N₁′)((ψ₀′ | p′) S′) = Σ₁′. □

Lemma 10.4 For all functor signatures Φ₁, Φ₂ and functor instances I₃, I₄: if
Φ₁ is admissible and Φ₁ ≥ I₃ and Φ₁ --φ--> Φ₂ and I₃ --φ--> I₄, then Φ₂ ≥ I₄.

Proof. The following diagram may be used for reference:

          φ
  Φ₁ ───────→ Φ₂
  │            │
  ψ₁           ψ₂
  ↓            ↓
  I₃ ───────→ I₄
          φ

Write

Φ₁ = (N₁)(S₁, (N₁′)S₁′)    Φ₂ = (N₂)(S₂, (N₂′)S₂′)
I₃ = (S₃, (N₃)S₃′)         I₄ = (S₄, (N₄)S₄′).
By the assumptions all of these are well-formed.

Let φ₁ = φ ↓ strs(N₁)S₁ and let φ₁′ = φ ↓ strs(N₁ ∪ N₁′)S₁′. Since Φ₁ --φ--> Φ₂
there exists a bijection

p : N₁ ∪ N₁′ → N₂ ∪ N₂′

such that

p₁ : N₁ → N₂ = p ↓ N₁

and

p₁′ : N₁′ → N₂′ = p ↓ N₁′

are bijections and

N₂ ∩ Reg φ₁ = ∅ and (φ₁ | p₁) S₁ = S₂    (A.12)

and

(N₂ ∪ N₂′) ∩ Reg φ₁′ = ∅ and (φ₁′ | p) S₁′ = S₂′.    (A.13)

As in the proof of lemma 10.1, for any structure T bound in (N₂)S₂ there is a
unique structure, call it φ⁻¹ T, bound in (N₁)S₁, such that

(φ₁ | p₁)(φ⁻¹ T) = T    (A.14)

(this follows from (A.12) and the coherence of S₁).

Moreover, any structure T free in (N₂′)S₂′ with n of T ∈ N₂ must occur bound
in (N₂)S₂ (since Φ₂ is well-formed); thus φ⁻¹(T) exists and occurs free in (N₁′)S₁′,
and n of (φ⁻¹ T) ∈ N₁.

Since Φ₁ ≥ I₃ there exists a realisation ψ₁ with

(S₁, (N₁′)S₁′) --ψ₁--> (S₃, (N₃)S₃′)    (A.15)

and

Supp ψ₁ ⊆ boundstrs(N₁)S₁.    (A.16)

Let ψ₁′ be ψ₁ ↓ strs(N₁′)S₁′. By (A.15) there exists a bijection p₁₃ : N₁′ → N₃
such that

N₃ ∩ Reg ψ₁′ = ∅ and (ψ₁′ | p₁₃) S₁′ = S₃′.    (A.17)

By the final assumption, I₃ --φ--> I₄, we have

φ S₃ = S₄ and (N₃)S₃′ --φ--> (N₄)S₄′.    (A.18)

Let φ₃′ = φ ↓ strs(N₃)S₃′. Then there exists a bijection p₃ : N₃ → N₄ such
that

N₄ ∩ Reg φ₃′ = ∅ and (φ₃′ | p₃) S₃′ = S₄′.    (A.19)

We can now define the realisation that instantiates Φ₂ to I₄. For every
S = (n, E) ∈ Str, we define

ψ₂(S) = φ(ψ₁(φ⁻¹ S)), if S ∈ boundstrs(N₂)S₂
ψ₂(S) = S, if S ∈ strs(N₂)S₂    (A.20)
ψ₂(S) = (n, ψ₂ E), otherwise.

As in the proof of lemma 10.1 it is proved that ψ₂ is a realisation with
Supp ψ₂ ⊆ boundstrs(N₂)S₂ and ψ₂ S₂ = S₄. To prove Φ₂ ≥ I₄ we are therefore
left with proving

(N₂′)S₂′ --ψ₂--> (N₄)S₄′.

Let ψ₂′ = ψ₂ ↓ strs(N₂′)S₂′ and let

p₂′ = p₃ ∘ p₁₃ ∘ (p₁′)⁻¹.

Notice that p₂′ is a bijection from N₂′ to N₄. It suffices to prove

N₄ ∩ Reg ψ₂′ = ∅    (A.21)

and

(ψ₂′ | p₂′) S₂′ = S₄′.    (A.22)

To prove (A.21), observe that

strs(N₂′)S₂′ = strs{(φ₁′ | p) S | S ∈ strs(N₁′)S₁′}.    (A.23)

Take any S ∈ strs(N₁′)S₁′. Two cases:

S is free in (N₁ ∪ N₁′)S₁′   Since (N₁ ∪ N₁′)S₁′ is well-formed we have that
(φ₁′ | p) S = φ S is free in (N₂ ∪ N₂′)S₂′. Thus ψ₂((φ₁′ | p) S) = φ S. Since
Supp ψ₁ ⊆ N₁-Str, S is free in (N₃)S₃′. Thus φ S is free in (N₄)S₄′, i.e.,
ψ₂((φ₁′ | p) S) ∉ N₄-Str, as desired.

S ∈ strs(N₁′)S₁′ and n of S ∈ N₁   Then (φ₁′ | p) S is free in (N₂′)S₂′ and n of
(φ₁′ | p) S ∈ N₂. Thus

ψ₂((φ₁′ | p) S) = ψ₂((φ₁ | p₁) S)    as Φ₁ is well-formed
               = φ(ψ₁ S)    by (A.20).

Now ψ₁ S is free in (N₃)S₃′, so φ(ψ₁ S) is free in (N₄)S₄′ as desired.
This proved (A.21). To prove (A.22) we calculate that

(ψ₂′ | p₂′) S₂′ = (ψ₂′ | p₂′)(φ₁′ | p) S₁′

using (A.13), and

S₄′ = (φ₃′ | p₃) S₃′ = (φ₃′ | p₃)(ψ₁′ | p₁₃) S₁′

by (A.19) and (A.17), respectively. Thus it will suffice to prove

∀ S ∈ strs S₁′, (ψ₂′ | p₂′)(φ₁′ | p) S = (φ₃′ | p₃)(ψ₁′ | p₁₃) S.    (A.24)

This we prove by structural induction on S. Let S = (n, E) and assume
that (A.24) holds for all structures in E. We prove that it holds for S by case
analysis:

S is free in (N₁ ∪ N₁′)S₁′   Here

(ψ₂′ | p₂′)(φ₁′ | p) S = (ψ₂′ | p₂′)(φ S)    as (N₁ ∪ N₁′)S₁′ is well-formed
  = φ S    as φ S is free in (N₂ ∪ N₂′)S₂′
  = (φ₃′ | p₃) S    as S is free in (N₃)S₃′
  = (φ₃′ | p₃)(ψ₁ S)    as Supp ψ₁ ⊆ N₁-Str
  = (φ₃′ | p₃)(ψ₁′ | p₁₃) S    as (N₁ ∪ N₁′)S₁′ is well-formed.

S is free in (N₁′)S₁′ and n of S ∈ N₁   Thus

(ψ₂′ | p₂′)(φ₁′ | p) S = (ψ₂′ | p₂′)(φ₁ | p₁) S    as S ∈ boundstrs(N₁)S₁ and Φ₁
                                                   is well-formed
  = ψ₂((φ₁ | p₁) S)    as (φ₁ | p₁) S is free in (N₂′)S₂′
  = φ(ψ₁ S)    by the definition of ψ₂
  = (φ₃′ | p₃)(ψ₁ S)    as ψ₁ S is free in (N₃)S₃′
  = (φ₃′ | p₃)(ψ₁′ | p₁₃) S    as S is free in (N₁′)S₁′.

S is bound in (N₁′)S₁′   Thus

(ψ₂′ | p₂′)(φ₁′ | p) S = (ψ₂′ | p₂′)(φ₁′ | p)(n, E)
  = (ψ₂′ | p₂′)(p(n), (φ₁′ | p) E)
  = ((p₂′ ∘ p)(n), (ψ₂′ | p₂′)(φ₁′ | p) E)
  = ((p₃ ∘ p₁₃)(n), (ψ₂′ | p₂′)(φ₁′ | p) E)
  = ((p₃ ∘ p₁₃)(n), (φ₃′ | p₃)(ψ₁′ | p₁₃) E)    by induction
  = (φ₃′ | p₃)(p₁₃(n), (ψ₁′ | p₁₃) E)
  = (φ₃′ | p₃)((ψ₁′ | p₁₃)(n, E))
  = (φ₃′ | p₃)(ψ₁′ | p₁₃) S

as required. □

Theorem 10.5 Let B = M, F, G, E and B′ = M′, F′, G′, E′ be bases with
B --φ--> B′. If

B ⊢ spec ⇒ E₁    (10.1)

M′-strs(φ E₁) is substructure closed    (10.2)

{B′, φ E₁} is coherent    (10.3)

then

B′ ⊢ spec ⇒ φ E₁.

Similarly, if

B ⊢ sigexp ⇒ Σ and Σ --φ--> Σ′    (10.4)

M′-strs Σ′ is substructure closed    (10.5)

{B′, Σ′} is coherent    (10.6)

then

B′ ⊢ sigexp ⇒ Σ′.

Proof. By induction on the depth of inference of (10.1) and (10.4). One case
for each inference rule.
Empty specification, rule 8.8 Trivial.

Structure specification, rule 8.9

B ⊢ sigexp ⇒ (∅)S
--------------------------------------------
B ⊢ structure strid : sigexp ⇒ {strid ↦ S}

So E₁ = {strid ↦ S}. Assume (10.2) and (10.3). Now M′-strs(φ E₁) =
M′-strs(φ S) = M′-strs Σ′, for Σ′ = (∅)(φ S). Therefore we have (10.4)-(10.6),
so by induction we have B′ ⊢ sigexp ⇒ (∅)(φ S). Thus by rule 8.9 we have
B′ ⊢ structure strid : sigexp ⇒ φ E₁, as desired.

Sequential specification, rule 8.10

B ⊢ spec₁ ⇒ E₁    B ± E₁ ⊢ spec₂ ⇒ E₂
---------------------------------------    (A.25)
B ⊢ spec₁ spec₂ ⇒ E₁ ± E₂

Assume

M′-strs(φ(E₁ ± E₂)) is substructure closed    (A.26)

{B′, φ(E₁ ± E₂)} is coherent.    (A.27)

Unfortunately, this is not enough to allow us to use induction directly on the
first premise (we need not have that M′-strs(φ E₁) is substructure closed and
that {B′, φ E₁} is coherent). Therefore we shall apply induction using a new
realisation φ′ derived from φ as follows:

φ′(n₀, E₀) = φ(n₀, E₀), if (n₀, E₀) ∈ strs{B, E₁ ± E₂}
φ′(n₀, E₀) = (i(n₀, E₀), φ′ E₀), if (n₀, E₀) ∈ strs E₁ \ strs{B, E₁ ± E₂}    (A.28)
φ′(n₀, E₀) = (n₀, φ′ E₀), otherwise,

where i is any injective map from strs E₁ \ strs{B, E₁ ± E₂} to StrName \ (M′ ∪
N′), where N′ = strnames{E′, φ(E₁ ± E₂)}. Clearly, φ′ is a realisation.

In order to use induction we wish to establish that

B --φ′--> B′    (A.29)

M′-strs(φ′ E₁) is substructure closed    (A.30)

{B′, φ′ E₁} is coherent.    (A.31)

But (A.29) is obvious; (A.30) follows from (A.26) and the definition of φ′;
and (A.31) follows from (A.27) and the properties of i.

Thus by induction

B′ ⊢ spec₁ ⇒ φ′ E₁.    (A.32)

To use induction again, we check that

B ± E₁ --φ′--> B′ ± φ′ E₁    (A.33)

M′-strs(φ′ E₂) is substructure closed    (A.34)

{B′ ± φ′ E₁, φ′ E₂} is coherent.    (A.35)

As to (A.33), B ± E₁ is robust by the first premise of (A.25), while B′ ± φ′ E₁
is robust by (A.32), and

φ′(E of (B ± E₁)) = φ′(E of B) ± φ′ E₁ = E of (B′ ± φ′ E₁).

Next, (A.34) follows from (A.26) and (A.28). Also, (A.35) follows from (A.27)
and the definition of φ′. Hence by induction on the second premise of (A.25),

B′ ± φ′ E₁ ⊢ spec₂ ⇒ φ′ E₂,

which with (A.32) gives

B′ ⊢ spec₁ spec₂ ⇒ φ′ E₁ ± φ′ E₂    (A.36)

by rule (8.10). To check that this conclusion is admissible (cf. definition 9.12)
we must check that

{B′, φ′(E₁ ± E₂)} is admissible    (A.37)

strnames F′ ∪ strnames G′ ⊆ M′    (A.38)

M′-strs E′ and M′-strs(φ′(E₁ ± E₂)) are substructure closed.    (A.39)

The well-formedness of {B′, φ′(E₁ ± E₂)} comes from the robustness of B′;
the coherence from (A.27) and the definition of φ′. Thus (A.37) holds.

Next, (A.38) and the first half of (A.39) follow from the robustness of B′, and
the second half of (A.39) follows from (A.26) and (A.28). From (A.36) and (A.28)
we get

B′ ⊢ spec₁ spec₂ ⇒ φ(E₁ ± E₂)

as required.

Sharing specification, rule 8.11

n of B(longstrid₁) = n of B(longstrid₂)
-----------------------------------------    (A.40)
B ⊢ sharing longstrid₁ = longstrid₂ ⇒ {}

Since B is coherent the premise implies that B(longstrid₁) equals B(longstrid₂).
Thus we have

B′(longstrid₁) = E′ longstrid₁
  = (φ E)longstrid₁
  = φ(E longstrid₁)    as φ is a realisation
  = φ(E longstrid₂)    by (A.40)
  = (φ E)longstrid₂
  = B′(longstrid₂).

In particular, n of B′(longstrid₁) = n of B′(longstrid₂), so by rule (8.11) we
have

B′ ⊢ sharing longstrid₁ = longstrid₂ ⇒ {}

which is the desired conclusion. The conclusion is admissible.

Basic signature expression, rule 8.12

B ⊢ spec ⇒ E₁
--------------------------------    (A.41)
B ⊢ sig spec end ⇒ (∅)(n, E₁).

Thus Σ = (∅)(n, E₁). Assume Σ --φ--> Σ′. Then Σ′ must be (∅)(φ(n, E₁)). Thus
the assumptions (10.5) and (10.6) read

M′-strs(φ(n, E₁)) is substructure closed    (A.42)

{B′, φ(n, E₁)} is coherent.    (A.43)

Since φ is a realisation this implies

M′-strs(φ E₁) is substructure closed    (A.44)

{B′, φ E₁} is coherent.    (A.45)

Thus by induction on the premise of (A.41) we have

B′ ⊢ spec ⇒ φ E₁.    (A.46)

Using rule 8.12 on (A.46) we get

B′ ⊢ sig spec end ⇒ (∅)(n′, φ E₁)    (A.47)

where we choose n′ disjoint from strnames{B′, φ E₁} in order to make sure that
(A.47) is an admissible conclusion.

Then by the generalization rule, rule 8.14, we have

B′ ⊢ sig spec end ⇒ ({n′})(n′, φ E₁),    (A.48)

and this is an admissible conclusion.

Now ({n′})(n′, φ E₁) ≥ φ(n, E₁) even though φ may "widen" (n, E₁). Thus by
lemma 9.15 we have ({n′})(n′, φ E₁) ≥ (∅)(φ(n, E₁)). Thus by the instantiation
rule, rule 8.15, on (A.48) we get

B′ ⊢ sig spec end ⇒ (∅)(φ(n, E₁)),

i.e.,

B′ ⊢ sig spec end ⇒ Σ′

as desired. (It is easy to check that this conclusion is admissible by using (A.42)
and (A.43).)

The generalization rule, rule 8.14

B ⊢ sigexp ⇒ (N₁)S₁    n₁ ∉ strnames B
----------------------------------------
B ⊢ sigexp ⇒ (N₁ ∪ {n₁})S₁

Thus Σ = (N₁ ∪ {n₁})S₁. Assume Σ --φ--> Σ′ and

M′-strs Σ′ is substructure closed    (A.49)

{B′, Σ′} is coherent.    (A.50)

Since Σ --φ--> Σ′, Σ′ has the form (N′)S′ where there is a bijection

p : N₁ ∪ {n₁} → N′

such that Reg φ₀ ∩ N′ = ∅ and (φ₀ | p) S₁ = S′, where φ₀ = φ ↓ strs Σ. The
bijection p can be partitioned into two bijections p₁ : N₁ → N₁′ and p₁′ : {n₁} →
{n₁′}, where N₁′ ∪ {n₁′} = N′.

First, let us consider the case where n₁ is redundant, i.e., n₁ ∉ strnames(N₁)S₁.
We then have

(N₁)S₁ --φ--> (N₁′)S′

since strs(N₁)S₁ = strs(N₁ ∪ {n₁})S₁. Since also strs(N₁′)S′ ⊆ strs Σ′ we have
that M′-strs(N₁′)S′ is substructure closed and that {B′, (N₁′)S′} is coherent,
using (A.49) and (A.50). Thus by induction we have

B′ ⊢ sigexp ⇒ (N₁′)S′

which is an admissible conclusion. Take any structure name n which is neither
in N₁′ nor in strnames{B′, S′}. Then by the generalization rule

B′ ⊢ sigexp ⇒ (N₁′ ∪ {n})S′.    (A.51)

It follows from lemma 9.17 that

(N₁′ ∪ {n₁′})S′ = (N₁′ ∪ {n})S′

so in particular

(N₁′ ∪ {n})S′ ≥ (N₁′ ∪ {n₁′})S′ = Σ′.

Thus, using the instantiation rule on (A.51), we get the desired

B′ ⊢ sigexp ⇒ Σ′

which is an admissible conclusion.
Secondly, let us consider the case that n₁ is free in Σ₁ = (N₁)S₁. Since (N₁)S₁
and (N₁ ∪ {n₁})S₁ are both well-formed, there exists precisely one structure, S₀,
free in (N₁)S₁ such that n of S₀ = n₁. Every proper substructure of S₀ is free in
(N₁ ∪ {n₁})S₁.

Choose an nᵥ not in strnames{B′, Σ′} ∪ N₁′. Since n₁ ∉ N₁ we can define

pᵥ = p₁ ∪ {n₁ ↦ nᵥ}

which is a bijection

pᵥ : N₁ ∪ {n₁} → N₁′ ∪ {nᵥ}.

Define a map ψ : Str → Str by

ψ(n, E) = (nᵥ, ψ E), if n = n₁
ψ(n, E) = φ(n, E), if (n, E) ∈ strs{B, (N₁ ∪ {n₁})S₁}
ψ(n, E) = (n, ψ E), otherwise.

Because φ is a realisation and (N₁)S₁ and (N₁ ∪ {n₁})S₁ are well-formed and
n₁ ∉ strnames B, ψ is a realisation. We intend to use induction on ψ. Clearly we
have B --ψ--> B′, and we also have that Σ₁ --ψ--> Σ₁′, where Σ₁′ = (N₁′){n₁′ ↦ nᵥ} S′.
Because of the way we chose nᵥ, we get that M′-strs Σ₁′ is substructure closed
and that {B′, Σ₁′} is coherent, using (A.49) and (A.50).

Thus by induction we have

B′ ⊢ sigexp ⇒ (N₁′){n₁′ ↦ nᵥ} S′    (A.52)

which is an admissible conclusion. Since nᵥ ∉ strnames B′, and this is why we did
the renaming, we can use the generalization rule to conclude

B′ ⊢ sigexp ⇒ (N₁′ ∪ {nᵥ}){n₁′ ↦ nᵥ} S′.    (A.53)

It follows from lemma 9.17 that

(N₁′ ∪ {n₁′})S′ = (N₁′ ∪ {nᵥ}){n₁′ ↦ nᵥ} S′

so in particular

(N₁′ ∪ {nᵥ}){n₁′ ↦ nᵥ} S′ ≥ (N₁′ ∪ {n₁′})S′ = Σ′.

Thus, using the instantiation rule on (A.53), we get the desired

B′ ⊢ sigexp ⇒ Σ′

which is an admissible conclusion.

The instantiation rule, rule 8.15

B ⊢ sigexp ⇒ Σ₁    Σ₁ ≥ Σ
---------------------------    (A.54)
B ⊢ sigexp ⇒ Σ

Assume Σ --φ--> Σ′ and

M′-strs Σ′ is substructure closed    (A.55)

{B′, Σ′} is coherent.    (A.56)

Σ₁ is admissible, since the conclusion B ⊢ sigexp ⇒ Σ₁ must be admissible. There
exists a Σ₁′ such that Σ₁ --φ--> Σ₁′. By lemma 10.2 we have Σ₁′ ≥ Σ′. Since Σ₁′ is
well-formed we then have that strs Σ₁′ ⊆ strs Σ′ by corollary 9.16. Thus (A.55)
and (A.56) imply

M′-strs Σ₁′ is substructure closed

{B′, Σ₁′} is coherent.

Using induction on these and the first premise of (A.54) we get B′ ⊢ sigexp ⇒
Σ₁′, and since Σ₁′ ≥ Σ′ we use the instantiation rule to get the desired conclusion
B′ ⊢ sigexp ⇒ Σ′. The conclusion is admissible. □

Theorem 10.8 Let B = M, F, G, E be strictly robust.

If B ⊢ dec ⇒ E then E is coherent and M-strs E ⊆ strs B;
if B ⊢ strexp ⇒ S then S is coherent and M-strs S ⊆ strs B; and
if B ⊢ prog ⇒ B′ then B′ is strictly robust and M-strs B′ ⊆ strs B.

Proof. By induction on the depth of inference. We start with declarations,
leaving out the trivial cases where the declaration is an empty declaration or a
structure declaration. The next case is
Sequential declaration, rule 8.4

B ⊢ dec₁ ⇒ E₁    B ⊕ E₁ ⊢ dec₂ ⇒ E₂
--------------------------------------
B ⊢ dec₁ dec₂ ⇒ E₁ ± E₂

By induction

E₁ is coherent    (A.57)

M-strs E₁ ⊆ strs B.    (A.58)

Thus {B, E₁} is coherent. Therefore, letting B₁ = B ⊕ E₁, B₁ is coherent. B₁
is strictly robust since B is strictly robust and because we use ⊕.

Thus by induction on the second premise,

E₂ is coherent    (A.59)

M₁-strs E₂ ⊆ strs B₁,    (A.60)

where M₁ = M of B₁ = M ∪ strnames E₁.

To show that E₁ ± E₂ is coherent, let us consider any S₁ ∈ strs E₁ and any
S₂ ∈ strs E₂ with n of S₁ = n of S₂ = n, say, and let us show S₁ = S₂;
this is the only case of interest due to (A.57) and (A.59). By (A.60) we have
S₂ ∈ strs B₁ = strs(B ⊕ E₁), so S₁ = S₂ by the coherence of B₁.

To show that M-strs(E₁ ± E₂) ⊆ strs B it suffices to consider an S₂ ∈
(strs E₂ \ strs E₁) ∩ M-Str; the other structures in E₁ are accounted for by
(A.58). Now by (A.60) we have S₂ ∈ strs(B ⊕ E₁) and since S₂ ∉ strs E₁ we
must have S₂ ∈ strs B as desired.

Generative structure expression, rule 8.5

B ⊢ dec ⇒ E    n ∉ strnames(E) ∪ (M of B)
-------------------------------------------
B ⊢ struct dec end ⇒ (n, E)

By induction E is coherent and M-strs E ⊆ strs B. Since n ∉ strnames E we
have that S = (n, E) is coherent. Since n ∉ M we have M-strs S = M-strs E ⊆ strs B as
desired.

Long structure identifier, rule 8.6

B(longstrid) = S
-----------------
B ⊢ longstrid ⇒ S

Trivial.
Functor application, rule 9.2

    B(funid) ≥ (S, (N')S')    B ⊢ strexp ⇒ S    N' ∩ (M of B) = ∅
    ---------------------------------------------------------------
    B ⊢ funid(strexp) ⇒ S'

By induction S is coherent and M-strs S ⊆ strs B. It follows that

    {B, S} is coherent.

Let Φ = (N₀)(S₀, (N₀')S₀') be B(funid). Since Φ ≥ (S, (N')S') there exists a realisation φ with Supp φ ⊆ boundstrs (N₀)S₀, φ S₀ = S and (N₀')S₀' ⟶^φ (N')S'. Now φ is the identity on all structures free in (N₀ ∪ N₀')S₀'. All structures in N₀-strs (N₀')S₀' occur in S₀ since Φ is admissible. It then follows from (N₀')S₀' ⟶^φ (N')S' that

    strs (N')S' ⊆ strs S ∪ strs (N₀ ∪ N₀')S₀' ⊆ strs{B, S}.   (A.61)

As we saw above, {B, S} is coherent, so strs (N')S' must be coherent. Moreover, (N₀')S₀' is admissible and (N₀')S₀' ⟶^φ (N')S', so (A.61) can be strengthened to the desired coherence of S'.

Next,

    M-strs S' = M-strs (N')S'   as M ∩ N' = ∅
              ⊆ M-strs{B, S}    by (A.61)
              ⊆ strs B          since M-strs S ⊆ strs B.
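Concretely (invented identifiers, and assuming a functor F has already been declared whose body contains a generative substructure), an application looks as follows; the instantiation B(F) ≥ (S, (N')S') matches the argument against the parameter part, and the bound names N' of the result are then chosen disjoint from M of B:

    structure Arg = struct  structure A = struct end  end
    structure R = F (Arg)
    (* Any generative substructure of F's body is stamped with a
       fresh name here, so a second application F (Arg) would yield
       a result that does not share with R. *)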

Top level declaration, rule 8.16

    B ⊢ dec ⇒ E
    ------------------------------------
    B ⊢ dec ⇒ (strnames E, {}, {}, E)

Trivial inductive case.
Signature declaration, rule 8.17

    B ⊢ sigexp ⇒ Σ    Σ principal for sigexp in B
    ---------------------------------------------------------------
    B ⊢ signature sigid = sigexp ⇒ (strnames Σ, {}, {sigid ↦ Σ}, {})

Since Σ is admissible, B' = (strnames Σ, {}, {sigid ↦ Σ}, {}) is strictly robust. By lemma 10.9 we have strs Σ ⊆ strs B. Thus M-strs B' ⊆ strs B.
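For instance (invented identifiers), the following declaration binds POINT to the principal signature of its right-hand side; only the signature environment of the resulting basis is non-trivial:

    signature POINT =
      sig
        structure Coord : sig end
      end
    (* Elaborates to (strnames Σ, {}, {POINT ↦ Σ}, {}),
       where Σ = (N)S is principal for the sigexp in B. *)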
Functor declaration, rule 8.18

    B ⊢ sigexp ⇒ (N)S    (N)S principal for sigexp in B    N ∩ (M of B) = ∅
    B ⊕ {strid ↦ S} ⊢ strexp ⇒ S'
    N' = strnames(S') \ (N ∪ (M of B))    Φ = (N)(S, (N')S')
    --------------------------------------------------------------------------
    B ⊢ functor funid(strid : sigexp) = strexp ⇒ (strnames Φ, {funid ↦ Φ}, {}, {})

By lemma 10.9 we have

    strs (N)S ⊆ strs B.   (A.62)

Since (N)S is admissible, S is coherent. Let

    B₁ = B ⊕ {strid ↦ S}.

Then B₁ is strictly robust; the coherence holds because S is coherent and N ∩ M = ∅. Thus by induction, S' is coherent and

    M₁-strs S' ⊆ strs B₁,   (A.63)

where now M₁ = M of B₁ = (M of B) ∪ N.

Now let us show that Φ = (N)(S, (N')S') is admissible. We already know that (N)S is admissible and that S' is coherent. It remains to be shown that

    (N')S' is well-formed   (A.64)

    (N ∪ N')S' is well-formed   (A.65)

    N-strs (N')S' ⊆ strs S.   (A.66)

Here (A.64) follows from (A.63). As to (A.65), assume S₀ ∈ strs (N ∪ N')S'. Since n of S₀ ∉ N' we must have n of S₀ ∈ M ∪ N (by the definition of N'), and since n of S₀ ∉ N we must have n of S₀ ∈ M. Thus by (A.63), S₀ ∈ strs B₁, and since n of S₀ ∉ N and (A.62) we must have S₀ ∈ strs B. Since B is strictly robust, strs S₀ ⊆ strs B ⊆ M-Str. But M-Str ∩ (N ∪ N')-Str = ∅, so strs S₀ ⊆ strs (N ∪ N')S' as desired.

As to (A.66), we have (A.63), which implies strs (N')S' ⊆ strs B₁. Thus N-strs (N')S' ⊆ N-strs B₁ ⊆ strs S, since N ∩ (M of B) = ∅.

Thus Φ is admissible. We have strs Φ ⊆ strs B by (A.62) and the above argument that strs (N ∪ N')S' ⊆ strs B. Thus B' is strictly robust, and M-strs B' = strs Φ ⊆ strs B as required.
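A sketch of the shapes involved (invented identifiers): the parameter signature contributes the bound names N, while the generative structures of the body make up N':

    functor F (X : sig structure A : sig end end) =
      struct
        structure Old = X.A          (* its name lies in N: fixed by the argument *)
        structure New = struct end   (* its name lies in N': generated by the body *)
      end
    (* F is bound to Φ = (N)(S, (N')S'), where (N)S is the principal
       signature of the parameter and N' = strnames(S') \ (N ∪ (M of B)). *)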

Sequential program, rule 8.19

    B ⊢ prog₁ ⇒ B₁    B + B₁ ⊢ prog₂ ⇒ B₂
    ---------------------------------------
    B ⊢ prog₁ prog₂ ⇒ B₁ + B₂

By induction

    B₁ is strictly robust   (A.67)

    M-strs B₁ ⊆ strs B.   (A.68)

Thus B + B₁ is strictly robust; the coherence holds because B and B₁ are coherent, strnames B ⊆ M of B, and (A.68). Thus by induction,

    B₂ is strictly robust   (A.69)

    M₁-strs B₂ ⊆ strs(B + B₁),   (A.70)

where M₁ = M of (B + B₁). Now by (A.67) and (A.69), B₁ + B₂ is well-formed and the M component of B₁ + B₂ includes all names free in the other components. Moreover, B₁ + B₂ is coherent by (A.67), (A.69), and (A.70). Thus B₁ + B₂ is strictly robust. Finally,

    M-strs(B₁ + B₂) ⊆ M-strs B₁ ∪ M-strs B₂
                    ⊆ strs B ∪ M-strs(B + B₁)   by (A.68) and (A.70)
                    ⊆ strs B                    by (A.68)

as required. □
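At the program level the same accumulation happens to entire bases: the second top-level phrase is elaborated in B + B₁ and may use anything the first declared. A sketch (invented identifiers):

    (* prog1: elaborates to a basis B1 binding the signature S *)
    signature S = sig  structure A : sig end  end

    (* prog2: elaborated in B + B1, so it may refer to S *)
    functor H (X : S) = struct  structure Res = X.A  end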

Theorem 10.12 If B ⊢ strexp ⇒ S and B ⟶^φ B', then for all S' ∈ Str,

    B' ⊢ strexp ⇒ S'  iff  ClosB S ⟶^φ ClosB' S'.

Similarly for declarations.
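Before the proof, the content of the theorem can be illustrated with a toy model in Standard ML (the representations below are invented for illustration; they are not the thesis's definitions): when two bases are related by a realisation φ, elaborating the same phrase in each yields results that agree up to φ together with a bijective renaming of the freshly generated names.

    (* Structures as name-stamped environments; a renaming is a
       function on names.  Sketch only. *)
    datatype str = Str of int * (string * str) list    (* models (n, E) *)

    (* Apply a renaming phi throughout a structure. *)
    fun rename phi (Str (n, env)) =
          Str (phi n, map (fn (id, s) => (id, rename phi s)) env)

    (* Two elaborations of one struct ... end, differing only in
       the generated stamps: *)
    val s  = Str (1, [("A", Str (2, []))])
    val s' = Str (7, [("A", Str (9, []))])
    val phi = fn 1 => 7 | 2 => 9 | n => n
    val agree = (rename phi s = s')    (* evaluates to true *)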

Proof [of 10.12]. By induction on the depth of proof of B ⊢ strexp ⇒ S and B ⊢ dec ⇒ E. One case for each rule.
Empty declaration, rule 8.2

    B ⊢  ⇒ {}

Here B' ⊢  ⇒ E' iff E' = {} iff ClosB {} ⟶^φ ClosB' E'.

Structure declaration, rule 8.3

    B ⊢ strexp ⇒ S
    --------------------------------------------
    B ⊢ structure strid = strexp ⇒ {strid ↦ S}

Here

    B' ⊢ structure strid = strexp ⇒ E'

iff  there exists S' such that E' = {strid ↦ S'} and B' ⊢ strexp ⇒ S'

iff  there exists S' such that E' = {strid ↦ S'} and ClosB S ⟶^φ ClosB' S'   (by induction)

iff  ClosB{strid ↦ S} ⟶^φ ClosB' E'.

Sequential declaration, rule 8.4

    B ⊢ dec₁ ⇒ E₁    B ⊕ E₁ ⊢ dec₂ ⇒ E₂
    -------------------------------------
    B ⊢ dec₁ dec₂ ⇒ E₁ + E₂

First assume that B' ⊢ dec₁ dec₂ ⇒ E₁₂'. Then E₁₂' = E₁' + E₂' for some E₁', E₂' satisfying

    B' ⊢ dec₁ ⇒ E₁'    B' ⊕ E₁' ⊢ dec₂ ⇒ E₂'
    ------------------------------------------
    B' ⊢ dec₁ dec₂ ⇒ E₁' + E₂'

Write B = M, F, G, E and B' = M', F', G', E'. Note that for any semantic object A we have ClosB A = ClosM A and ClosB' A = ClosM' A, since B and B' are strictly robust.

By induction

    ClosB E₁ ⟶^φ ClosB' E₁'.   (A.71)

Let N₁ = strnames E₁ \ M and N₁' = strnames E₁' \ M'. Then by (A.71) there exists a bijection α₁ : N₁ → N₁' such that

    Rng φ₁ ∩ N₁' = ∅ and (φ₁ + α₁) E₁ = E₁',   (A.72)

where φ₁ = φ ↾ M-strs E₁.

Let B₁ = B ⊕ E₁ and B₁' = B' ⊕ E₁' and let M₁ = M of B₁ = M ∪ strnames E₁ and M₁' = M of B₁' = M' ∪ strnames E₁'.

Let φ⁰ = φ ↾ strs B and let φ¹ = φ⁰ + α₁. We have

    φ¹(E + E₁) = φ¹ E + φ¹ E₁
               = φ E + φ¹ E₁         as N₁ ∩ strnames E = ∅
               = E' + (φ⁰ + α₁) E₁
               = E' + (φ₁ + α₁) E₁   by theorem 10.8
               = E' + E₁'            by (A.72).

B₁ and B₁' are strictly robust by corollary 10.10. Thus we have B₁ ⟶^φ¹ B₁', so by induction,

    ClosB₁ E₂ ⟶^φ¹ ClosB₁' E₂'.   (A.73)

Let N₂ = strnames E₂ \ M₁ and N₂' = strnames E₂' \ M₁'. Then by (A.73) there exists a bijection α₂ : N₂ → N₂' such that

    Rng φ₂ ∩ N₂' = ∅ and (φ₂ + α₂) E₂ = E₂',   (A.74)

where φ₂ = φ¹ ↾ M₁-strs E₂.



Now N₁ ∩ N₂ = N₁' ∩ N₂' = ∅, so

    α₁ ∪ α₂ : N₁ ∪ N₂ → N₁' ∪ N₂'

is a bijection.

Let N = strnames(E₁ + E₂) \ M and N' = strnames(E₁' + E₂') \ M'. Clearly, N ⊆ N₁ ∪ N₂ and N' ⊆ N₁' ∪ N₂', but in general N = N₁ ∪ N₂ and N' = N₁' ∪ N₂' need not hold.

We wish to prove

    (N)(E₁ + E₂) ⟶^φ (N')(E₁' + E₂').   (A.75)

To this end we shall first prove that

    (φ⁰ + (α₁ ∪ α₂))(E₁ + E₂) = E₁' + E₂'.   (A.76)

We have

    (φ⁰ + α₁ ∪ α₂) E₁ = (φ⁰ + α₁) E₁   as N₂-strs E₁ = ∅
                      = (φ₁ + α₁) E₁   by theorem 10.8
                      = E₁'            by (A.72).

Moreover, by theorem 10.8, for every structure S₂ in E₂ either S₂ is free in B₁ or n of S₂ ∈ N₂. In the former case,

    (φ⁰ + α₁ ∪ α₂) S₂ = φ¹ S₂ = φ₂ S₂ = (φ₂ + α₂) S₂.

In the latter case, writing S₂ = (n₂, E₂),

    (φ⁰ + α₁ ∪ α₂) S₂ = (α₂ n₂, (φ⁰ + α₁ ∪ α₂) E₂)
                      = (α₂ n₂, (φ₂ + α₂) E₂)
                      = (φ₂ + α₂)(n₂, E₂)
                      = (φ₂ + α₂) S₂

by an easy structural induction. Thus

    (φ⁰ + α₁ ∪ α₂) E₂ = (φ₂ + α₂) E₂ = E₂'   by (A.74)

which with the above equation, (φ⁰ + α₁ ∪ α₂) E₁ = E₁', gives (A.76).

Now let α = (α₁ ∪ α₂) ↾ N. Let us show that the co-domain of α is N'.
Now let a = (a1 U a2) . N. Let us show that the co-domain of a is N'.

Assume n ∈ N = strnames(E₁ + E₂) \ M. Then by (A.76), α(n) occurs in E₁' + E₂', and since M' ∩ (N₁' ∪ N₂') = ∅ we have α(n) ∈ N'.

Conversely, if n' ∈ strnames(E₁' + E₂') \ M' then there must be an n ∈ strnames(E₁ + E₂) \ M with α(n) = n', the reason being that we have (A.76) and that if a structure S is free in (N)(E₁ + E₂) then it is free in B (by theorem 10.8) and thus φ S is free in (N')(E₁' + E₂'), as B ⟶^φ B'. Thus the co-domain of α is N'.

Let φ₁₂ = φ ↾ strs (N)(E₁ + E₂). Since by theorem 10.8 every structure in E₁ + E₂ is either free in B or is an N structure, we have

    (φ₁₂ + α)(E₁ + E₂) = (φ⁰ + α₁ ∪ α₂)(E₁ + E₂)
                       = E₁' + E₂'   by (A.76)

showing (A.75).

Conversely, assume

    ClosB(E₁ + E₂) ⟶^φ ClosB' E₁₂'.

Again, write B = M, F, G, E and B' = M', F', G', E'.

Let N₁₂ = strnames(E₁ + E₂) \ M and N₁₂' = strnames E₁₂' \ M'. Thus there exists a bijection α₁₂ : N₁₂ → N₁₂' such that

    Rng φ₁₂ ∩ N₁₂' = ∅ and (φ₁₂ + α₁₂)(E₁ + E₂) = E₁₂',   (A.77)

where φ₁₂ = φ ↾ M-strs(E₁ + E₂).

Now extend α₁₂ to a bijection α₁∪₂ whose domain is

    Dom α₁₂ ∪ (strnames E₁ \ (M ∪ strnames(E₁ + E₂))),

i.e., all names that are new in E₁ or in E₂, and whose co-domain does not intersect M' ∪ N₁₂'.

Let φ⁰ = φ ↾ strs B and let us define

    E₁' = (φ⁰ + α₁∪₂) E₁   (A.78)

    E₂' = (φ⁰ + α₁∪₂) E₂   (A.79)

Now

    E₁₂' = (φ₁₂ + α₁₂)(E₁ + E₂)     by (A.77)
         = (φ⁰ + α₁∪₂)(E₁ + E₂)    by theorem 10.8
         = (φ⁰ + α₁∪₂) E₁ + (φ⁰ + α₁∪₂) E₂
         = E₁' + E₂'.
To use induction a first time we wish to prove

    (N₁) E₁ ⟶^φ (N₁') E₁',   (A.80)

where N₁ = strnames E₁ \ M and N₁' = strnames E₁' \ M'.

Let α₁ = α₁∪₂ ↾ strnames E₁. Then α₁ is a bijection of N₁ onto N₁' by (A.78). Moreover, letting φ₁ = φ ↾ strs (N₁)E₁ we have that φ₁ is a restriction of φ⁰ by theorem 10.8 and thus Rng φ₁ ∩ N₁' = ∅. Surely, (φ₁ + α₁) E₁ = (φ⁰ + α₁∪₂) E₁ = E₁' by (A.78), showing (A.80).

Thus by induction we have

    B' ⊢ dec₁ ⇒ E₁'.   (A.81)

Let B₁ = B ⊕ E₁ and B₁' = B' ⊕ E₁' and M₁ = M of B₁ and M₁' = M of B₁'. By theorem 10.8, B₁ and B₁' are strictly robust. Let φ¹ = φ⁰ + α₁. Then we have B₁ ⟶^φ¹ B₁' by (A.78).

To use induction a second time we wish to prove

    (N₂) E₂ ⟶^φ¹ (N₂') E₂',   (A.82)

where N₂ = strnames E₂ \ M₁ and N₂' = strnames E₂' \ M₁'.

Let α₂ = α₁∪₂ ↾ strnames E₂. Then α₂ is a bijection of N₂ onto N₂' by (A.79). Moreover, letting φ₂ = φ¹ ↾ strs (N₂)E₂ we have that φ₂ is a restriction of φ¹ ↾ strs B₁ and thus, since B₁ ⟶^φ¹ B₁', Rng φ₂ ∩ N₂' = ∅. Also,

    (φ₂ + α₂) E₂ = ((φ¹ ↾ strs (N₂)E₂) + α₂) E₂
                 = (φ⁰ + α₁ ∪ α₂) E₂   since every structure free in (N₂)E₂ is free in B₁
                 = (φ⁰ + α₁∪₂) E₂
                 = E₂'                 by (A.79)

showing (A.82).

Thus by induction we have B₁' ⊢ dec₂ ⇒ E₂', which with (A.81) and the fact that E₁₂' = E₁' + E₂' gives the desired B' ⊢ dec₁ dec₂ ⇒ E₁₂'.

Generative structure expression, rule 8.5

    B ⊢ dec ⇒ E    n ∉ strnames(E) ∪ (M of B)
    --------------------------------------------
    B ⊢ struct dec end ⇒ (n, E)

Assume B ⊢ dec ⇒ E and n ∉ strnames E ∪ M. Then

    B' ⊢ struct dec end ⇒ S'

iff  there exist n', E' such that S' = (n', E') and B' ⊢ dec ⇒ E' and n' ∉ strnames E' ∪ M'

iff  (by induction) there exist n', E' such that S' = (n', E') and ClosB E ⟶^φ ClosB' E' and n' ∉ strnames E' ∪ M'

iff  ClosB(n, E) ⟶^φ ClosB' S'.

Long structure identifier, rule 8.6

    B(longstrid) = S
    ------------------
    B ⊢ longstrid ⇒ S

Let E = E of B. Then

    B' ⊢ longstrid ⇒ S'

iff  (E of B')(longstrid) = S'

iff  (φ E)(longstrid) = S'

iff  φ(E(longstrid)) = S'   (given that E(longstrid) exists)

iff  φ S = S'

iff  ClosB S ⟶^φ ClosB' S'

as B and B' strictly robust implies ClosB S = (∅)S and ClosB' S' = (∅)S'.

Functor application, rule 9.2

    B(funid) ≥ (S₀, (N)S)    B ⊢ strexp ⇒ S₀    N ∩ (M of B) = ∅
    --------------------------------------------------------------
    B ⊢ funid(strexp) ⇒ S

As usual, write B = M, F, G, E and B' = M', F', G', E'. Assume B' ⊢ funid(strexp) ⇒ S'. Then for some N', S₀',

    B'(funid) ≥ (S₀', (N')S')    B' ⊢ strexp ⇒ S₀'    N' ∩ M' = ∅
    ---------------------------------------------------------------
    B' ⊢ funid(strexp) ⇒ S'

By induction ClosB S₀ ⟶^φ ClosB' S₀', so letting N₀ = strnames S₀ \ M and N₀' = strnames S₀' \ M' there exists a bijection α : N₀ → N₀' such that

    Rng ψ₀ ∩ N₀' = ∅ and (ψ₀ + α) S₀ = S₀',   (A.83)

where ψ₀ = φ ↾ M-strs S₀.

Let φ⁰ = φ ↾ strs B and let φ¹ = φ⁰ + α. We have B ⟶^φ¹ B'. There exists an (N'')S'' such that

    (S₀, (N)S) ⟶^φ¹ (S₀', (N'')S'')   (A.84)

since φ¹ S₀ = S₀'. Since B ⟶^φ¹ B' we have

    B(funid) ⟶^φ¹ B'(funid).   (A.85)

Using lemma 10.4 on (A.84), (A.85) and B(funid) ≥ (S₀, (N)S) gives

    B'(funid) ≥ (S₀', (N'')S'').

Since we also have B'(funid) ≥ (S₀', (N')S') we must have (N'')S'' = (N')S' by lemma 9.18. Thus by (A.84),

    (N)S ⟶^φ¹ (N')S'.   (A.86)

We wish to prove

    (N₁)S ⟶^φ (N₁')S',   (A.87)

where N₁ = strnames S \ M and N₁' = strnames S' \ M'. Since N ∩ M = N' ∩ M' = ∅, we have N ⊆ N₁ and N' ⊆ N₁', but the equalities need not hold as (N)S may contain free structures that are in S₀ but not free in B. More precisely, since F(funid) is well-formed and coherent and F(funid) ≥ (S₀, (N)S),
every structure free in (N)S either is free in B or occurs free in S₀ and is not an M structure. By (A.86) and (A.83), structures of the latter kind are renamed by α, conflicting neither with N' nor (by the coherence of (N')S') with names in Rng(φ ↾ strs (N₁)S). Hence from (A.86) we get (A.87) as required.

Conversely, assume ClosB S ⟶^φ ClosB' S'.

Let N₁ = strnames S \ M, N₁' = strnames S' \ M', and ψ₁ = φ ↾ M-strs S. Then there exists a bijection α₁ : N₁ → N₁' such that

    Rng ψ₁ ∩ N₁' = ∅ and (ψ₁ + α₁) S = S'.   (A.88)

Let N₀ = strnames S₀ \ M and let us refer to the N₀ structures of S₀ as the new structures in S₀. Not all new structures in S₀ need be free in (N)S, but those which are free are bound in (N₁)S and thus susceptible to renaming by α₁. Extend the bijection α₁ ↾ (strnames (N)S ∩ N₀) to a bijection

    α₀ : N₀ → N₀',

making sure that α₀(n₀) ∉ N₁' ∪ M' for all n₀ ∈ N₀ \ strnames (N)S. Let ψ₀ = φ ↾ M-strs S₀. Then Rng ψ₀ ∩ N₀' = ∅ (because of the way we chose α₀). Let S₀' = (ψ₀ + α₀) S₀. Then (N₀)S₀ ⟶^φ (N₀')S₀'. Thus by induction B' ⊢ strexp ⇒ S₀'.

Let φ⁰ = φ ↾ strs B and let φ¹ = φ⁰ + α₀. Then φ¹ S₀ = S₀' and (N)S ⟶^φ¹ (N')S' for N' = α₁(N). Thus (S₀, (N)S) ⟶^φ¹ (S₀', (N')S').

Since also B(funid) ⟶^φ¹ B'(funid) and B(funid) ≥ (S₀, (N)S), we have B'(funid) ≥ (S₀', (N')S') by lemma 10.4. Also N' ∩ M' ⊆ N₁' ∩ M' = ∅. Therefore, by the functor application rule we have the desired B' ⊢ funid(strexp) ⇒ S'. □