A Gentle Introduction to Category Theory
— the calculational approach —
Maarten M. Fokkinga
University of Twente, dept. INF
PO Box 217
NL 7500 AE ENSCHEDE
The Netherlands
e-mail: fokkinga@cs.utwente.nl
Contents
0 Introduction
1 The main concepts
   1a Categories
   1b Functors
   1c Naturality
   1d Adjunctions
   1e Duality
2 Constructions in categories
   2a Iso, epic, and monic
   2b Initiality and finality
   2c Products and Sums
   2d Coequalisers
   2e Colimits
A More on adjointness
Chapter 0
Introduction
0.1 Aim. In these notes we present the important notions from category theory. The
intention is to provide a fairly good skill in manipulating those concepts formally.
What you probably will not acquire from these notes is the ability to recognise the concepts
in your daily work when that differs from algorithmics, since we give only a few examples
and those are taken from algorithmics. For such an ability you need to work through many,
very many examples, in diverse fields of applications.
This text differs from most other introductions to category theory in the calculational
style of the proofs (especially in Chapter 2 and Appendix A), the restriction to applications
within algorithmics, and the omission of many additional concepts and facts that I consider
not helpful in a first introduction to category theory.
0.2 Acknowledgements. This text is a compilation and extension of work that I’ve
done for my thesis. That project would have been a failure without the help or stimulation
by many people. Regarding the technical contents, Roland Backhouse, Grant Malcolm,
Lambert Meertens and Jaap van der Woude may recognise their ideas and methodological
and notational suggestions.
0.3 Why category theory? There are various views on what category theory is about,
and what it is good for. Here are some.
• Quoting Hoare [6]: “Category theory is quite the most general and abstract branch of
pure mathematics. [ . . . ] The corollary of a high degree of generality and abstraction
is that the theory gives almost no assistance in solving the more specific problems
within any of the subdisciplines to which it applies. It is a tool for the generalist, of
little benefit to the practitioner [ . . . ].”
Hence it will come as no surprise that, for algorithmics too, category theory is mainly useful
for theory development; hardly for individual program derivation.
• Quoting Scott [7]: “[Category theory offers] a pure theory of functions, not a theory
of functions derived from sets.”
To this I want to add that the language of category theory facilitates an elegant
style of expression and proof (equational reasoning); for the use in algorithmics this
happens to be reasoning at the function level, without the need (and the possibility)
to introduce arguments explicitly. Also, the formulas often suggest and ease a far-
reaching generalisation, much more so than the usual set-theoretic formulations.
Category theory has itself grown into a branch of mathematics, like algebra and analysis,
that is studied like any other one. One should not confuse the potential benefits that
category theory may have (for the theory underlying algorithmics, say) with the difficulty
and complexity, and fun, of doing category theory as a specialisation in itself.
0.4 Preliminaries: sequences. Our examples frequently involve finite lists, or se-
quences as we like to call them. Here is our notation.
A sequence is a finite list of elements of a certain type, denoted [a0 , . . . , an−1 ] . The
set of sequences over A is denoted Seq A . Further operations are:
tip    =  a ↦ [a]
       :  A → Seq A
:      =  (a, [a0 , . . . , an−1 ]) ↦ [a, a0 , . . . , an−1 ]
       :  A × Seq A → Seq A
cons   =  the prefix written version of the operation :
++     =  ([a0 , . . . , am−1 ], [am , . . . , an−1 ]) ↦ [a0 , . . . , an−1 ]
       :  Seq A × Seq A → Seq A
join   =  the prefix written version of the operation ++
Seq f  =  [a0 , . . . , an−1 ] ↦ [f a0 , . . . , f an−1 ]
       :  Seq A → Seq B        whenever f : A → B
⊕/     =  [a0 , . . . , an−1 ] ↦ a0 ⊕ . . . ⊕ an−1
       :  Seq A → A            whenever ⊕ : A × A → A is associative and has a neutral element .
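For concreteness, here is how these operations look in a functional language; this is only a sketch in Haskell, with lists playing the rôle of Seq A and with identifiers of our own choosing.

    -- A sketch of the sequence operations of paragraph 0.4, using lists for Seq A.
    tip :: a -> [a]
    tip a = [a]                         -- tip : A -> Seq A

    cons :: a -> [a] -> [a]
    cons a as = a : as                  -- the prefix written version of :

    join :: [a] -> [a] -> [a]
    join xs ys = xs ++ ys               -- the prefix written version of ++

    seqmap :: (a -> b) -> [a] -> [b]
    seqmap = map                        -- Seq f, applying f to every constituent

    reduce :: (a -> a -> a) -> a -> [a] -> a
    reduce op e = foldr op e            -- the reduce with operation op and neutral element e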
Chapter 1
The main concepts
This introductory chapter gives a brief overview of the important categorical concepts, namely category, functor, naturality, adjunction, duality. In the next chapter we will show how to express familiar set-theoretic notions in category theoretic terms.
1a Categories
A category is a collection of data that satisfy some particular properties. So, saying that
such-and-so forms a category is merely short for asserting that such-and-so satisfy all the
axioms of a category. Since a large body of concepts and theorems have been developed
that are based on the categorical axioms only, those concepts and theorems are immediately
available for such-and-so if that forms a category.
For an intuitive understanding of the following definition, one may interpret objects as
sets, and morphisms as typed total functions. We shall later provide some more and quite
different examples of a category, in which the objects aren’t sets and the morphisms aren’t
functions.
1.1 Definition. A category is: the following data, subject to the axioms listed in
paragraph 1.2.
By default, A, B, C, . . . vary over categories, and particular categories are named after
their objects (rather than their morphisms). Actually, these data define the basic terms
of the categorical language in which properties of the category can be stated. A cate-
gorical statement is an expression built from (notations for) objects, typing, morphisms,
composition and identities by means of the usual logical connectives and quantifications
and equality. If you happen to know what the objects really are, you may use those aspects
in your statements, but then you are not expressing yourself categorically.
Sometimes there are several categories under discussion. Then the name of the category
may and must be added to the above notations, as a subscript or otherwise, in order to
avoid ambiguity. So, let A be a category. Then we may write specifically f : A →A B ,
srcA , tgtA , f ;A g , and id A,A . There is no requirement in the definition of a category
stating that the morphisms of one should be different from those of another; a morphism
of A may also be a morphism of B . In such a case the indication of A in f : A →A B
and srcA f is quite important.
1.2 Axioms. There are three ‘typing’ axioms, and two axioms for equality. The typing
axioms are these:
A morphism term f is well-typed if: a typing f : A → B can be derived for some objects
A, B according to these axioms (and the Type properties of defined notions that will be
given in the sequel).
Convention. Whenever we write a term, we assume that the variables are typed (at
their introduction — mostly an implicit universal quantification in front of the formula)
in such a way that the term is well-typed. This convention allows us to simplify the
formulations considerably, as illustrated in the following axioms.
1.6 (f ; g) ; h = f ; (g ; h) composition-Assoc
1.7 id ; f = f = f ; id Identity
In accordance with the convention explained a few lines up, axiom composition-Assoc
is universally quantified with “for all objects A, B, C, D and all morphisms f, g, h with
f : A → B , g: B → C , and h: C → D ”, or slightly simpler, “for all f, g, h with
tgt f = src g and tgt g = src h ”. In accordance with that same convention, axiom Identity
actually reads id srcf ; f = f = f ; id tgtf , or even “for all objects A, B and all morphisms
f with f : A → B , id A ; f = f = f ; id B ”.
Convention. The category axioms are so basic that we shall mostly use them tacitly.
In particular, we shall use composition-Assoc implicitly by omitting the parentheses in a
composition, thus writing f ; g ; h instead of either (f ; g) ; h or f ; (g ; h) .
1.9 Example: Set . Set 0 is: the pre-category whose objects are sets, whose morphisms
are total functions, and whose composition and identities are function composition and
identity functions respectively. Further, define f : A → B to mean that, for each a ∈
A , f a is well-defined and f a ∈ B . Thus, for the squaring function square we have
square: nat → nat as well as square: real → real , and so on. With this definition the
axioms listed in paragraph 1.2, except for unique-Type, are fulfilled. (Exercise: verify the
axioms.)
Now define category Set out of pre-category Set 0 by the construction given in para-
graph 1.8. We keep saying that the morphisms in Set are total functions; it may be more
accurate to say that they are ‘typed’ total functions, since they carry their type (source
and target) with them. We also keep the notation f a for the application of f on a ,
whenever f : A →Set B and a ∈ A .
Doing set theory in the categorical language enforces the straitjacket of expressing everything with function composition only, without using explicit arguments, membership and function application. Once mastered it is often, though not always, an elegant way of expressing things.
We shall mostly illustrate the notions of category theory in terms of categories where
the morphisms are functions and composition is function composition, like in Set . But
beware, even if the morphisms are functions, the objects need not at all be sets: sometimes
the objects are operations (with an appropriate definition for the typing of the functions).
The categories of F -algebras are an example of this; a special case is Alg(II) discussed
in paragraph 1.22, and Mon in paragraph 1.23. Other times we’ll take “structures” (of
structured data) as objects, again with an appropriate interpretation for the typing of the
morphisms; this occurs in category Ftr (A, B) defined in paragraph 1.37.
1.10 Example: graphs and pre-orders. One should not be misled by our illus-
trations, where morphisms are functions. There are many more mathematical data that
can be viewed as a category. To mention just one generic example, each directed graph
determines a category as follows. Take the nodes as objects, and all paths as morphisms
typed with their start and end nodes. Composition is concatenation of paths, and the
identities are the empty paths. Thus defined, these data do satisfy the axioms listed in
paragraph 1.2, hence form a category. (Exercise: verify this.)
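As a concrete (and entirely optional) illustration, the path category of a graph can be programmed; the following Haskell sketch uses a representation of our own choosing, with composition defined only when the endpoints match.

    -- One possible representation of paths in a directed graph.
    type Node = Int
    type Edge = (Node, Node)                 -- an edge from its first to its second node

    data Path = Path { source :: Node, edges :: [Edge] }

    target :: Path -> Node
    target (Path a es) = foldl (\_ (_, b) -> b) a es   -- end node of the last edge, or a itself

    identity :: Node -> Path                 -- the empty path at a node
    identity a = Path a []

    compose :: Path -> Path -> Maybe Path    -- p ; q, defined only when the typing matches
    compose p q
      | target p == source q = Just (Path (source p) (edges p ++ edges q))
      | otherwise            = Nothing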
Here is yet another important example of a class of categories. We don’t need it
in our discussion of algorithmics, but it provides sometimes instructive examples. Each
pre-ordered set (A, ≤) can be considered a category, in the following way. The elements
a, b, . . . of A are the objects of the category and there is a morphism from a to b precisely
when a ≤ b . Formally, the category is defined as follows.
an object is: an element in A
a morphism is: a pair (a, b) with a ≤ b in A
(a, b): c → d ≡ a=c ∧ b=d
(a, b) ; (b, c) = (a, c)
id a = (a, a) .
1.11 Cartesian closed categories, and Topoi. The axioms on the morphisms and
composition are very weak, so that many mathematical structures can be rendered as a
category. By imposing extra axioms, still in the categorical language, the categories may
have more of the properties you are actually interested in.
For example, a cartesian closed category is a category in which the extra properties
make the morphisms behave more like real functions: in particular, there is a notion of
currying and of applying a curried morphism. There is a close relationship between this
type of categories and typed λ -calculi.
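For intuition only: in Set (and in Haskell) currying and the application of a curried morphism look as follows; the names below are ours.

    curry' :: ((a, b) -> c) -> (a -> (b -> c))   -- currying a binary function
    curry' f a b = f (a, b)

    apply :: (b -> c, b) -> c                    -- applying a curried morphism
    apply (g, b) = g b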
As another example, a topos (plural: topoi or toposes) is a cartesian closed category
in which the extra properties make the objects have more of the properties of real sets: in
particular, for each object there exists an object of its ‘subobjects’.
In these notes, we shall nowhere need the extra properties. As a result, the notions
defined here, and the theorems proved, are very general and very weak at the same time.
1.12 Discussion. Quoting Asperti and Longo [1]: “The basic premise of category theory
is that every kind of mathematically structured object comes equipped with a notion of
[ . . . ] transformation, called ‘morphism’, that preserves the structure of the object.” By
means of the categorical language one cannot express properties of the internal structure
of objects. The internal structure of objects is accessible only externally through the
morphisms between the objects. The morphisms may seem (and sometimes are) functions,
but the categorical language doesn’t express that; it only provides a way to express the
composition of morphisms. The discipline of expressing internal structure only externally is
the key to the uniformity of describing structural concepts in various different application
fields.
Since each graph is a category, the above interpretation of “internal structure” of objects
doesn’t always make sense.
Exercise. What functions are precisely the functions that preserve a partial order?
(Define a category in which these functions are the morphisms.) What functions are
precisely the functions that preserve a partial order and the limits? (Define a category in
which these functions are the morphisms.) What structure of sets is preserved by precisely
all total functions? (What category has these functions as morphisms?)
notions of object, morphism, typing, composition, identity, and notions that can be defined
in terms of these, and further the usual logical connectives and quantifications.
To appreciate the problems and delight involved, the reader may spend some (not
too much) time in trying to find a categorically expressed property P such that in Set
property P holds precisely for the empty set, that is, P (A) ≡ A=∅ . Similarly, you may
think about categorically expressed properties P such that in Set the following holds:
P (A) ≡ A = {17}
P (A) ≡ A is a singleton set
P (A, B) ≡ A⊆B
P (f ) ≡ f is surjective
P (f ) ≡ f is injective
P (A, B, C) ≡ C =A∪B
P (A, B, C, f, g) ≡ C is the disjoint union of A and B
with injections f : A → C and g: B → C .
Also, think about a way to represent a binary relation R on A categorically; what collec-
tion of sets and functions may carry the same information as R ?
Just for fun you may also think about categorically expressed properties P such that
in a pre-order considered as a category (see paragraph 1.10) the following holds:
1.14 Constructing new categories. There are several ways in which new categories
can be constructed out of given ones. Here, we give just two ways, and in paragraph 1.24
we’ll sketch some other ways.
A subcategory of A is completely determined by its objects and morphisms, and A .
Formally, a subcategory of a category A is: a category in which each object, morphism,
and identity is an object, morphism, and identity in A , and in which the typing and
composition is the typing and composition of A restricted to the objects and morphisms
of the subcategory.
A full subcategory of A is completely determined by its objects, and A . Formally, a
subcategory of a category A is a full subcategory of A if: for each A, B in the subcategory,
all the morphisms with type A → B in A are morphisms in the subcategory.
A category is built upon a category A if: its morphisms are morphisms in A , and the
composition and identities are inherited from A , and further, its objects are collections of
morphisms of A , and its typing f : A → B is defined as a collection of equations between
1b Functors
A functor is a mapping from one category to another that preserves the categorical struc-
ture, that is, it preserves the property of being an object, the property of being a morphism,
the typing, the composition, and the identities. The significance of functors is manifold:
they map one mathematical structure (category, piece of mathematics) to another, they
turn up as objects of interesting categories, they are the mathematically obvious type of
transformation between categories, and, last but not least, they form a categorical tool to
deal with “structured” objects (as we shall explain in paragraph 1.21).
For example, II nat is the set of pairs of natural numbers, and II succ maps (19, 48) onto (20, 49) . Mapping II satisfies the functor properties; it is a functor II: Set → Set . (Exercise: verify the functor axioms.) Functor II can be used to characterise binary operations in a neat way. For example,

+  :  II nat → nat
n ↦ (n div 10, n mod 10)  :  nat → II nat .
1.20 Example: functor Seq . Mapping Seq discussed in paragraph 0.4 is a functor with type Set → Set . To see this, recall that

Seq f  =  [a0 , . . . , an−1 ] ↦ [f a0 , . . . , f an−1 ]
       :  Seq A → Seq B        whenever f : A → B .

Property ftr-Type is the second line of the above equation for Seq f , and the equations Seq id A = id Seq A and Seq(f ; g) = Seq f ; Seq g are easily verified. (Exercise: do this.)
Functions on or to sequences have Seq in their source or target type, respectively. For
example, function rev A that reverses sequences over A , has type Seq A → Seq A .
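Read with lists for Seq and with ; as left-to-right composition, the two functor equations can be stated (and tested on examples) in Haskell as follows; the property names are ours, and >>> is left-to-right composition from Control.Arrow.

    import Control.Arrow ((>>>))      -- f >>> g is the composition f ; g

    -- Seq id = id  and  Seq (f ; g) = Seq f ; Seq g, as testable properties:
    prop_seqId :: Eq a => [a] -> Bool
    prop_seqId xs = map id xs == id xs

    prop_seqCompose :: Eq c => (a -> b) -> (b -> c) -> [a] -> Bool
    prop_seqCompose f g xs = map (f >>> g) xs == (map f >>> map g) xs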
1.21 A use of functors. In the definition of a category, objects are “just things”
for which no internal structure is observable by categorical means (composition, identities,
morphisms, and typing). Functors form the tool to deal with “structured” objects. Indeed,
in Set an (the?) aspect of a structure is that it has “constituents”, and that it is possible
to apply a function to all the individual constituents; this is done by F f : F A → F B .
So II is or represents the structure of pairs; IIA is the set of pairs of A , and IIf is
the function that applies f to each constituent of a pair. Also, Seq is or represents
the structure of sequences; Seq A is the structure of sequences over A , and Seq f is the
function that applies f to each constituent of a sequence.
Even though F A is still just an object, a thing with no observable internal structure,
the functor properties enable us to exploit the “structure” of F A . The following example
may make this clear; it illustrates how functor II is or represents the structure of pairs. It
illustrates at the same time where and how the functor properties play a rôle.
For this, let ⊕: IIA → A and ⊗: IIB → B be binary operations on sets A and B
respectively, and let f : A → B be a function. We define the notation
f : ⊕ →II ⊗
to mean
⊕;f = IIf ; ⊗ .
axioms (all of them) have been used. So the above definition, theorem, and proof are
valid for any functor and any category, not just for functor II and category Set . Here
we see how a categorical formulation suggests or eases a far-reaching generalisation: re-
place II by an arbitrary functor F , and you have a definition of ‘ F -ary operation’ and
‘ F -homomorphism’, and a theorem together with its proof about that notion, and these
are valid for an arbitrary category.
Exercise: generalise the above theorem and proof by replacing II everywhere by an arbi-
trary functor F ; check each step. Also, generalise from Set to an arbitrary category.
This concludes an illustration of the use of the functor axioms, and of using functors to
deal with “structured objects”.
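Spelled out in Haskell for the pair functor II (so II A is the type (A, A) ), the condition ⊕ ; f = II f ; ⊗ becomes the familiar pointwise statement below; the function name is ours.

    -- f : (+) ->II (*)  means  f (x (+) y)  =  f x (*) f y   for all x, y.
    isHomFor :: Eq b => ((a, a) -> a) -> ((b, b) -> b) -> (a -> b) -> (a, a) -> Bool
    isHomFor oplus otimes f (x, y) = f (oplus (x, y)) == otimes (f x, f y)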
1.22 Category Alg(II) . The theorem in paragraph 1.21 gives rise to another category,
to be called Alg(II) ; an important one for algorithmics, as will become clear in the sequel.
In words, Alg(II) has the II -ary operations in Set as objects, the homomorphisms for
these operations as morphisms, and it inherits the composition and identities from Set .
(This fixes everything except the typing.) Formally, Alg(II) is defined thus:
an Alg(II) -object is: a II -ary operation in Set
an Alg(II) -morphism is: a homomorphism for II -ary operations, in Set
f : ⊕ →Alg(II) ⊗ ≡ f : ⊕ →II ⊗
≡ ⊕ ; f = II f ; ⊗
f ;Alg(II) g = f ;Set g
idAlg(II),⊕ = id Set,A where A = tgtSet (⊕) .
Thus, Alg(II) is built upon Set (see paragraph 1.14 for ‘built upon’). Let us see whether
the category axioms are fulfilled for Alg(II) . The theorem in paragraph 1.21 asserts
that axioms composition-Type and identity-Type are fulfilled. The axioms composition-
Assoc and Identity are fulfilled since the composition and the identities are inherited from
category Set . So, Alg(II) is at least a pre-category. With the definition above axiom
unique-Type is not fulfilled so that Alg(II) is not a category. The reason is that a function
can be a homomorphism for several distinct operations, that is, f : ⊕ →II ⊗ and f : ⊕′ →II ⊗′ can both be true while the pair ⊕, ⊗ differs from the pair ⊕′ , ⊗′ . (Exercise: find such
a function and operations.)
In the sequel we shall pretend that Alg(II) is made into a category (re-defining it) by
the technique of constructing a category out of a pre-category, see paragraph 1.8. Thus,
f : ⊕ →Alg(II) ⊗ denotes the typing in Alg(II) , and implies that tgtAlg(II) f = ⊕ , whereas
formula f : ⊕ →II ⊗ keeps to mean only that ⊕ ; f = II f ; ⊗ .
Exercise: generalise the construction above to an arbitrary category A instead of Set .
That is, given an arbitrary category A , define the (pre)category Alg A (II) analogous to
Alg(II) above. Also, generalise II to an arbitrary functor F .
The name ‘Alg’ is mnemonic for ‘algebra’ and derives from the following observation. The
II -ary operations are, in fact, very simple algebras. Conventionally, an algebra with a
single operation ⊕: II A → A is the pair (A; ⊕) , and A is called the carrier. Thanks
to axiom unique-Type the carrier is fully determined by the operation itself, so that the
operation itself can be considered the algebra.
1.23 Category Mon . Now that we have defined category Alg(II) , we take the oppor-
tunity to present another category, to be called Mon (mnemonic for ‘monoid’). It will be
used in Section 1d below.
First recall the notion of monoid. A monoid operation is: a binary operation that is
associative and has a neutral element (sometimes called unit, or even identity). Conven-
tionally, a monoid is: a triple (A; ⊕, e) , where ⊕: II A → A is a monoid operation and
e is its neutral element. The carrier A is uniquely determined by ⊕ (thanks to axiom
unique-Type, A = tgt(⊕) ). Also, the neutral element for ⊕ is unique, since e = e ⊕ e′ = e′ for any two neutral elements e and e′ . So, we might say that ⊕ alone is, or represents,
the monoid. Anyway, we shall talk of monoid operations, rather than of monoids.
The significance of monoid operations for algorithmics is that the reduce-with-⊕ is a
well-defined function of type Seq A → A when ⊕ is a monoid operation; see paragraph 0.4.
Category Mon is: the subcategory of Alg(II) whose objects are the monoid operations,
and whose morphisms are those f for which f : ⊕ →II ⊗ and f (e) = e′ where e, e′ are
the neutral elements of ⊕, ⊗ .
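In Haskell terms (a sketch using the standard Monoid class rather than the explicit pair operations above), a morphism in Mon preserves both the operation and the neutral element, and the reduce-with-⊕ is mconcat.

    -- A monoid homomorphism preserves <> and mempty:
    isMonoidHom :: (Monoid m, Monoid n, Eq n) => (m -> n) -> m -> m -> Bool
    isMonoidHom h x y = h (x <> y) == h x <> h y && h mempty == mempty

    -- The reduce of paragraph 0.4, for a monoid operation:
    reduce :: Monoid m => [m] -> m
    reduce = mconcat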
Exercise: give an explicit definition of the objects, morphisms, typing, composition, and
identities in Mon , and prove that the category axioms are fulfilled.
Exercise: prove that, in Set , Mon is not a full subcategory of Alg(II) .
1.24 More functors, new categories. Up to now we have seen only endofunctors,
namely II and Seq ; that is, functors whose source and target are equal. There are several
reasons why it is useful to allow the source and target category of a functor to differ from
each other. We briefly mention three of such reasons here. At the same time, these reasons
demonstrate the need for building new categories out of given ones.
First, there is no problem in defining a notion of a 2-place functor, also called a bifunc-
tor. (Exercise: try to define the notion of bifunctor formally; how would the bifunctor
axioms read?) However, by a suitable definition of the product of two categories (like
the cartesian product of sets), a 2-place functor on category A is just a normal functor
F : A × A → A . (Exercise: try to define the notion of the product category A × B of
two categories A and B . What are the objects, morphisms, typing, composition, and
identities? Prove that these satisfy the category axioms.)
Second, let A be an arbitrary category, and consider the following mapping F from
A to Set .
In view of the equation for F f we might call F f : ‘precede with f ’, and we might write
F f alternatively as (f ;) or (◦f ) . Mapping F is like a functor; it has the properties that
f : B →A A ⇒ F f : F A →Set F B
F id = id
F (g ; f ) = F f ;Set F g .
Notice that in the left hand side A and B , and also f and g , are at the wrong place for
F to be a functor. There is no problem in defining a notion of a contravariant functor,
so that F is a contravariant functor. (Exercise: try to define the notion of a contravariant
functor; how would the functor axioms read?) However, by a suitable definition of the
opposite Aop of a category A , mapping F is just a normal functor F : Aop → Set .
Category Aop is obtained from A by taking the objects, morphisms and identities from
A , and defining the typing and composition as follows:
f : A →Aop B ≡ f : B →A A
f ;Aop g = g ;A f .
One says that Aop is obtained from A by “reversing all arrows”. (Exercise: verify that
Aop is a category indeed. Verify also that mapping F above is a functor F : Aop → Set .)
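In Haskell the situation can be mimicked as follows (a sketch; the fixed type r is our own device, standing for the common target of the morphisms collected in F A ).

    -- F a = the functions from a to a fixed r;  F f is 'precede with f'.
    newtype F r a = F (a -> r)

    cmap :: (b -> a) -> F r a -> F r b     -- note the reversed arrow: contravariance
    cmap f (F g) = F (g . f)               -- in the text's notation: f ; g
    -- cmap id = id  and  cmap (g . f) = cmap f . cmap g  hold by unfolding the definitions.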
Third, sometimes we need to speak about, say, pairs of morphisms in B with a common
source. (For example, the two extraction functions from a cartesian product form such a
pair.) We can do this categorically as follows. Let A be the category determined by the
following graph:
• ←−− f −− • −− g −−→ •
IA x = x
(GF )x = G(F x)
for all objects and morphisms x in A . In view of the defining equation we can write GF x
without semantic ambiguity. We also write just I instead of IA if A is irrelevant or clear
from the context. Thus defined, I and GF are functors:
IA : A → A
F : A → B and G: B → C ⇒ GF : A → C .
The properties ftr-Type and Functor are easily verified. (Exercise: do this.) Other impor-
tant properties of these functors are: associativity of functor composition and neutrality
of I with respect to functor composition:
H (G F ) = (H G) F
F IA = F = IB F for F : A → B .
1.26 Category Cat . The above properties of functors suggest a (pre)category, called
Cat . Take as objects all categories, as morphisms all functors, as typing the functor typing,
as identity on A the identity functor IA , and as composition the functor composition.
As usual, we can make a category of Cat , see paragraph 1.8. Thus our talking of ‘type’ of
functors is justified.
However, there is a foundational problem lurking here. Is this new category Cat an
object in itself? An answer, be it yes or no, would give similar problems as the supposition
that the set of all sets exists. We will neither use the new category in a formal reasoning,
nor discuss ways out of this paradox.
1c Naturality
A natural transformation is nothing but a structure preserving map between functors.
‘Structure preservation’ makes sense, here, since we’ve seen already that a functor is, or
represents, a structure that objects might have. We shall first give an example, and then
present the formal definition.
if: each tA doesn’t affect the constituents of the structured elements in F A but only
reshapes the structure of the elements, from an F -structure into a G -structure; in other
words,
as you can easily verify. Transformation join reshapes each II Seq -structure into a Seq -
structure, and doesn’t affect the constituents of the elements in the structure.
In the next paragraph, naturality in general is defined like naturality in Set ; we abstract
from Set and replace it by an arbitrary category B . The formulas remain the same, but
the interpretation above (in terms of functions, sets, and elements) may change.
This formula is (so natural that it is) easy to remember: morphism εtarget f has type
F (target f ) → G(target f ) and therefore occurs at the target side of an occurrence of f ;
similarly εsource f occurs at the source side of an f . Moreover, since ε is a transformation
from F to G , functor F occurs at the source side of an ε and functor G at the target
side.
The notation ε A , using ε as a function, is an alternative for the subscripted εA . We also say that ε is natural in its parameter. By default, γ, δ, ε, η, κ range over natural transformations.
Exercise: prove that 1.29 follows from the assumption that 1.30 is well-typed. (So you
need only remember 1.30.)
1.31 Example. Natural transformations are all over the place; we give here just two
simple examples, and in paragraph 1.38 one application. The category under discussion is
Set .
First, consider the transformation rev that yields the reversal of its argument:
rev A  =  [a0 , . . . , an−1 ] ↦ [an−1 , . . . , a0 ]
       :  Seq A → Seq A .

Thus, rev reshapes a Seq -structure into a Seq -structure, not affecting the constituents
of its arguments. Family rev is a natural transformation typed

rev : Seq →. Seq ,
as is easily verified.
Second, consider the transformation inits that yields all initial parts of its argument:
Thus, inits reshapes a Seq -structure into a Seq Seq -structure, not affecting the con-
stituents of its arguments. Family inits is a natural transformation typed
inits : Seq →. Seq Seq ,
as is easily verified.
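With lists for Seq , the two naturality squares amount to the following testable equalities in Haskell (a sketch, not a proof; the property names are ours):

    import Data.List (inits)

    -- Seq f ; rev = rev ; Seq f :
    prop_revNatural :: Eq b => (a -> b) -> [a] -> Bool
    prop_revNatural f xs = reverse (map f xs) == map f (reverse xs)

    -- Seq f ; inits = inits ; Seq Seq f :
    prop_initsNatural :: Eq b => (a -> b) -> [a] -> Bool
    prop_initsNatural f xs = inits (map f xs) == map (map f) (inits xs)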
Exercise: verify that each of the following well-known operations is a natural transformation of the given type:

tip     :  I →. Seq
concat  :  Seq Seq →. Seq            equals ++/ , also called flatten
segs    :  Seq →. Seq Seq
parts   :  Seq →. Seq Seq Seq        yields all partitions of its argument
take n  :  Seq →. Seq
zip     :  II Seq →. Seq II
rotate  :  Seq →. Seq
swap    :  II →. II                  swaps the components of its argument
exl     :  II →. I                   extracts the left component of a pair .
We shall later see how to formulate the naturality of : and nil , and of take (not fixing
one of its arguments), and how to formulate a more general naturality of swap and exl
(not restricting their arguments to the same type), and that reduce itself, operation / , is
a natural transformation in category Mon .
We shall write id F A and JεA and εK A without parentheses; in view of the equations this causes no semantic ambiguity. An alternative notation for εK is to write K as a subscript to ε ; so (εK )A = ε(KA) and we then write εKA without parentheses too. Similarly, (Jε)K = J(εK) and we write simply JεK . These transformations are natural:
1.33   id F : F →. F                                               ntrf-Id
1.34   ε: F →. G  and  η: G →. H   ⇒   ε ; η : F →. H              ntrf-Compose
1.35   ε: F →. G   ⇒   Jε : JF →. JG                               ntrf-Ftr
1.36   ε: F →. G   ⇒   εK : F K →. GK                              ntrf-Poly
Notice that for laws 1.35 and 1.36 to make sense, F and G have a common source and
a common target, the source of J is the target of F and G , and the target of K is the
source of F and G . The proofs are quite simple; we prove only law ntrf-Compose. As
regards property ntrf-Type for ε ; η we argue
(ε ; η)A : F A → HA
≡        definition of ε ; η
εA ; ηA : F A → HA
⇐        composition-Type
εA : F A → GA  and  ηA : GA → HA
⇐        definition →.
ε: F →. G  and  η: G →. H .
And to show the naturality, property Ntrf, for ε ; η , we argue, for arbitrary f : A → B ,
F f ; (ε ; η)B = (ε ; η)A ; Hf
≡ definition (ε ; η)
F f ; εB ; ηB = εA ; ηA ; Hf
≡ premise: naturality ε and η
εA ; Gf ; ηB = εA ; Gf ; ηB
≡ equality
true.
ε ; (η ; θ) = (ε ; η) ; θ
id F ; ε = ε = ε ; id G

for ε: F →. G . The proof of these properties is simple; the properties are inherited from composition and identities of the category. (Exercise: prove these claims.)
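The composite transformations of this paragraph can be written down directly in Haskell (a sketch; Nat is our own abbreviation, and Compose is the composition of Haskell functors):

    {-# LANGUAGE RankNTypes #-}
    import Data.Functor.Compose (Compose (..))

    type Nat f g = forall a. f a -> g a        -- a natural transformation f ->. g

    vcomp :: Nat f g -> Nat g h -> Nat f h     -- eps ; eta
    vcomp eps eta = eta . eps

    lwhisker :: Functor j => Nat f g -> Nat (Compose j f) (Compose j g)   -- J eps
    lwhisker eps (Compose jf) = Compose (fmap eps jf)

    rwhisker :: Nat f g -> Nat (Compose f k) (Compose g k)                -- eps K
    rwhisker eps (Compose fk) = Compose (eps fk)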
1.37 Category Ftr (A, B) . The properties of composite natural transformations sug-
gest a category. Let A and B be categories. Form a new category, commonly denoted
Ftr (A, B) , as follows. Take as objects all functors from A to B , as morphisms all natural
transformations (from functors with type A → B to functors with type A → B ), as
typing the typing of naturality (above denoted → . ), as identities all identity natural trans-
formations id F , and as composition the composition of natural transformations defined
above. Thus defined, Ftr (A, B) is a pre-category and even a category. (Exercise: verify
this.)
1.38 Application. Continuing the example of paragraph 1.31, we define a family tails
as follows. Function tails A yields all tail parts of its argument sequence as its result:
In effect, the proof of this semantic property is nothing but type checking (viewing “ : →. ” as a typing, and nowhere using the actual meaning of inits , rev , and tails ). Hadn't we
had available the concept and properties of naturality, the proof would have been much
longer. Indeed, explicitly using the equalities
Seq f ; rev B = rev A ; Seq f
Seq f ; inits B = inits A ; Seq Seq f
for all f : A → B , the proof of Seq f ; tails B = tails A ; Seq Seq f would run as follows.
Seq f ; tails B
= definition tails
Seq f ; rev B ; inits B ; Seq rev B
= equation for rev
rev A ; Seq f ; inits B ; Seq rev B
= equation for inits
rev A ; inits A ; Seq Seq f ; Seq rev B
= Functor for Seq
rev A ; inits A ; Seq(Seq f ; rev B )
= equation for rev
rev A ; inits A ; Seq(rev A ; Seq f )
= Functor for Seq
rev A ; inits A ; Seq rev A ; Seq Seq f
= definition tails
tails A ; Seq Seq f .
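In Haskell the definition of tails used above can be transcribed literally (a sketch; note that the standard Data.List.tails lists the tail parts in the opposite order, so we define our own version here):

    import Data.List (inits)

    -- tails = rev ; inits ; Seq rev   (Haskell's (.) composes right-to-left):
    tails' :: [a] -> [[a]]
    tails' = map reverse . inits . reverse

    -- The naturality just proved:  Seq f ; tails = tails ; Seq Seq f :
    prop_tailsNatural :: Eq b => (a -> b) -> [a] -> Bool
    prop_tailsNatural f xs = tails' (map f xs) == map (map f) (tails' xs)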
1.39 Omitting subscripts. For readability we shall often omit the subscripts or argu-
ments to natural transformations when they can be retrieved from contextual information.
Here is an example; you are not supposed to understand the ‘meaning’ of the formulas.
Let the following be given:
F : A → B
G : B → A
η : IA →. GF
ε : F G →. IB ,
The following procedure gives the most general subscripts that make the formula well typed.
Use a, b, c, . . . as type variables (the “unknowns”), use these as the subscripts, and write
the source and target type within braces at the source and target side of the morphisms,
thus:
{a} ηb {c} ; {d} G( {e} εf {g} ) {h} = {j} id k {l} .
The typing axioms generate a collection of equations for the type variables:
typing η: a, c = b, GF b on account of ntrf-Type
typing ;: c=d on account of composition-Type
typing Gε : d, h = Ge, Gg on account of ftr-Type
typing ε: e, g = F Gf, f on account of ntrf-Type
typing id : j=k=l on account of identity-Type
typing =: a, h = j, l .
A most general (least constraining) solution for this collection of equations can be found
by the unification algorithm, and yields
a = b = h = j = l = k = Gf
c = d = GF Gf
e = F Gf
g=f.
Hence, writing B for type variable f , and filling in the subscripts, the formula reads: for
arbitrary object B in B ,
ηGB ; GεB = id GB : GB →A GB ,
Exercise: infer in a similar way the categories, and the typing of the functors in:
η : I →. GF
ε : F G →. I .
Exercise: assuming
ε, η : F →. F F
κ : F F →. F ,
find the most general subscripts that make ε ; F η ; κ a well-typed term denoting a
morphism. (What function does the term denote if the category is Set , F = Seq , and
ε, η, κ = inits, tails, join/ ?)
1d Adjunctions
An adjunction is a particular one-one correspondence between, on the one hand, the mor-
phisms of a certain type in one category, and, on the other hand, the morphisms of a certain
type in another category. The correspondence can be formulated as an equivalence between
two equations (in the two categories, respectively). An adjunction has many properties,
and many different but equivalent definitions.
1.40 Example. Here is a law for sequences; it has a lot of well-known consequences, as
we shall show in a while.
“Each homomorphism on sequences is uniquely determined (as a ‘map’ followed by
a reduce) by its restriction to the singleton sequences.”
To be precise, the law reads as follows.
Let A be an arbitrary set, and ⊗ be an arbitrary monoid operation, say with target set B . Then, for all f : A → B and all g: (++A ) →Mon ⊗ ,

f = tip A ; g   ≡   Seq f ; ⊗/ = g .                                   SeqAdj

Thus we may call f the ‘restriction of g to the tip elements’ and write f = ⌊⌊g⌋⌋A,⊗ = tip A ; g . Also, we may call g the ‘extension of f to a homomorphism from (++A ) to ⊗ ’ and write g = ⌈⌈f⌉⌉A,⊗ = Seq f ; ⊗/ . With these definitions, and omitting the subscripts, the equivalence reads:

f = ⌊⌊g⌋⌋   ≡   ⌈⌈f⌉⌉ = g .

This equivalence expresses that ⌊⌊ ⌋⌋ and ⌈⌈ ⌉⌉ are each other’s inverse, and constitute a one-one correspondence between functions (of a certain type) and homomorphisms (of a certain type). Mappings ⌊⌊ ⌋⌋ and ⌈⌈ ⌉⌉ are called lad and rad, respectively, from left adjungate and right adjungate; these names and notations are not standard in category theory.
The above law is an (almost full-fledged) instance of an adjunction. The significance for
algorithmics may be evident from the consequences of SeqAdj listed in paragraph 1.49.
If, given A, B, F, G , there exist η, ε such that the sextuple forms an adjunction, then
F is called left adjoint to G , and G right adjoint to F .
Exercise: verify that the law for sequences in paragraph 1.40 is an adjunction indeed, with
A, B, F, G, η, ε defined as suggested a few lines up.
1.49 Example continued. Here are some consequences of the adjunction mentioned
in paragraph 1.40. Actually, all these properties are instantiations of the corollaries men-
tioned in paragraph 1.43. So, these properties can be proved from the adjunction property
alone, without referring to the actual meaning of tip, /, Seq, and the very notion of
‘sequences’.
Exercise: derive these properties from the adjunction property. Take care not to use the
actual meaning of tip, /, and Seq .
Exercise: try to give some subcollections of this list of properties that are equivalent to
the adjunction property.
Exercise: try to formulate these properties in terms of A, B, F, G, η, ε, ⌊⌊ ⌋⌋, ⌈⌈ ⌉⌉ , and try to
derive them from the adjunction property. (This will be done for you in a later section.)
1.50 More corollaries. Here are some more corollaries. Again we list them here only
to show the importance of the concept. These corollaries may be harder to understand
than those in paragraph 1.43, due to the higher level of abstraction.
1. Adjoint functors determine each other “up to isomorphism”. We shall explain the
concept of isomorphism later. More precisely, if A, B, G can be completed to an
adjunction A, B, F, G, η, ε , then F is unique up to isomorphism.
As a consequence, the existence of some F ′ , η ′ , ε′ for which the sextuple Set , Mon, F ′ , G, η ′ , ε′ (with G as in the above example) forms an adjunction, is equivalent to the existence of a monoid operation ++′A (= F ′ A) that has the categorical properties of ‘the monoid operation of sequences’. Thus, a datatype like that of sequences can be defined by a certain adjunction.
Exercise: suppose there exist Seq ′ , tip ′ , /′ , and ++′A that, substituted for Seq , tip, /, and ++A , make the adjunction property in paragraph 1.40 well-typed and true. Convince yourself (informally) that (tgt(++′A ); ++′A , tip ′A , the neutral element of ++′A ) might be called ‘the datatype of sequences’.
2. (Surely, it’ll take some time and exercising before you can easily grasp the following highly abstract statement.) The fusion properties of ⌊⌊ ⌋⌋ and ⌈⌈ ⌉⌉ are equivalent to the statement that ⌊⌊ ⌋⌋ and ⌈⌈ ⌉⌉ are morphisms of a certain type in category Ftr (Aop × B, Set ) (where the objects are functors and the morphisms are natural transformations, see paragraph 1.37). So, ⌊⌊ ⌋⌋ and ⌈⌈ ⌉⌉ are natural transformations “of a higher type”, and the omission of the subscripts to ⌊⌊ ⌋⌋ and ⌈⌈ ⌉⌉ thus falls under our convention for natural transformations.
1e Duality
Dualisation is a formal manipulation with practical significance. For example, the set-
theoretic notions of cartesian product and disjoint union are characterised categorically by
notions that are each other’s dual. As another example, the categorical characterisation
of a ‘datatype for which functions can be defined by induction on the structure of the
argument’ (like the datatype of sequences) is dual to the categorical characterisation of a
‘datatype for which functions can be defined by induction on the structure of their result’
(like the datatype of infinite lists, or streams). Dualisation also applies to theorems and
proofs, thus cutting work in half.
1.52 Definition. The dual of a term in the categorical language is defined by:
Clearly, dualising is its own inverse, that is, dual(dual t) = t for each term t . Another
easy way of dualising a morphism term is simply replacing each ; by ◦ . However, the
presence of both compositions for the same morphisms is not practical. As an example,
the following two statements are each other’s dual.
1.53 ∀B ∃!f :: f: A → B
1.54 ∀B ∃!f :: f: B → A .
Dualising a less trivial statement may be more instructive. Here is one; don’t try to
understand what it means, we’ll meet it in the sequel. Apart from dualising the statement,
we also rename some bound variables and interchange the sides of the left-hand equation
(which doesn’t affect the meaning).
Exercise: infer the typing of F, α, ([ ]), and bd( )ce in these formulas. Notice that the type
of the free variable α changes due to the dualisation.
1.55 Corollary. For each definition expressed in the categorical language, of a concept
or construction xxx, you obtain another concept, often called co-xxx if no better name
suggests itself, by dualising each term in the definition. For example, an object A is initial
in a category if: formula 1.53 holds for A . (In Set the only initial object is the empty set.)
Dually, an object A is co-initial, conventionally called final or terminal, if: formula 1.54
holds for A . (In Set the final objects are precisely the singleton sets.) Similarly, the other
two formulas above define dual notions of α .
Also, for each equation f = g provable from the axioms of category theory (hence valid
for all categories), the equation dual f = dual g is provable too. (Exercise: check this for
the axioms of a category.) Thus dualisation cuts work in half, and gives each time two
concepts or theorems for the price of one.
1.56 Examples. We shall meet many examples in the sequel, notably the examples
mentioned in the introduction to this section.
Let it suffice here to say that the opposite category Aop (defined in paragraph 1.24)
is obtained by dualising each notion of A , that is,
It follows that (Aop )op = A , and that the dual of a statement holds for A if and only if
the statement itself holds for Aop . (So again, if a statement is true for all categories, then
its dual is true for all categories too.)
Exercise: prove that F : A → B equivales F : Aop → B op .
Exercise: dualise the notion of natural transformation.
Exercise: dualise the notion of adjunction, and of ‘being a left adjoint’.
Chapter 2
Constructions in categories
In this chapter we discuss some categorical concepts by which some familiar (set-theoretic
or other) concepts can be expressed in categorical terms. It turns out that most charac-
terisations do not fix the objects and morphisms exactly, but only ‘up to isomorphism’.
Isomorphic objects are essentially the same, as regards the “observations” by the mor-
phisms of the category.
There is a general pattern in several definitions; they turn out to define an initial or
final object in a category built upon the category of interest (“the universe of discourse”).
Therefore we shall discuss initiality and finality extensively before we turn to the other
concepts.
2.1 Default category. The declaration that a category is the default category means
that it is this category, rather than another one, that should be mentioned whenever there
is an ambiguity. For example, when A is declared the default category, and several other
(auxiliary) categories are discussed in the same context (in particular categories built upon
A ), then a formula like f : A → B really means f : A →A B , and ‘an object’ really means
‘an object in A ’.
f ; g = id .
2.5 Discussion. Isomorphic objects are often called ‘abstractly the same’ since for most
categorical purposes one is as good as the other: each morphism to/from the one can be
extended to a morphism to/from the other using the morphisms that establish the isomor-
phism. (The preceding sentence is informal intuition; I do not know of a formalisation of
the idea as a theorem.) This holds, of course, even more so if the isomorphism is unique.
For example, in Set all sets of the same cardinality are isomorphic, hence ‘abstractly the
same’. If you want to distinguish sets of the same cardinality on account of structural
properties, a partial order say, you should not take Set as the category but another one
in which the morphisms better reflect your intention. (In the case of partial orders, you
could take the order-preserving functions as the morphisms, rather than all functions.)
2.6 Conventional definition. An object A is initial if: for each object B there is
precisely one morphism from A to B , called the mediating morphism:
∀B :: ∃!f :: f: A → B .
Equivalently, an object A is initial if for each object B there is precisely one (at least one
and at most one) solution for f in the statement
f: A → B .
2.7 A trick. Although the formulations of the conventional definition are quite clear,
they are not very well suited for algebraic manipulation. The formulation in paragraph 2.8
hasn’t this drawback, as will appear from the calculations in the chapters to come. (Exer-
cise: prove that initial objects are unique up to a unique isomorphism, and compare your
proof with the one given below in paragraph 2.22.) The trick to arrive at the convenient
formulation is skolemisation, named after the logician Skolem, which we’ll now explain.
An assertion of the form

∀ x :: ∃! y :: . . . y . . .

can be replaced by: there exists a mapping F such that

(∗)      ∀ x, y ::   . . . y . . .   ≡   y = F x .

Applied to initiality this yields the formulation of paragraph 2.8: an object A is initial if there exists a mapping ([ ]) such that, for each object B and each morphism f ,

f : A → B   ≡   f = ([B]) .                                            init-Charn
Mapping ([ ]) is called the mediator, and to make clear the dependency on A it is some-
times written ([A → ]) . In typewriter font I would write med( ) for ([ ]) .
The initial object, if it exists, is unique up to a unique isomorphism (see paragraph 2.22
below); the default notation for it is 0 . An alternative notation for ([0 → B]) is ¡B .
Dually, an object A is final if, for each object B , there is precisely one morphism from
B to A . In other words, an object A is final if: there exists a mapping bd( )ce such that
Again, mapping bd( )ce is called the mediator, and it is sometimes written bd( → A)ce to make
clear the dependency on A . In typewriter font I would write dem( ), the ‘dual’ of med.
By duality, the final object, if it exists, is unique up to a unique isomorphism; the
default notation for it is 1 . An alternative notation for bd(B → 1)ce is !B .
2.11 Examples. In Set there is just one initial object, namely the empty set. Function
([B]) is the ‘empty function’, that is, the empty set of (argument, result)-pairs. In Set
each singleton set is a final object. Function bd(B)ce maps each b ∈ B to the sole member
of the arbitrary but fixed singleton set 1 .
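In Haskell the same two mediators exist (a sketch): the empty type Void plays the rôle of the initial object 0 and the one-element type () that of a final object 1 .

    import Data.Void (Void, absurd)

    medFromEmpty :: Void -> b      -- ([B]) : the unique function out of 0
    medFromEmpty = absurd

    medToUnit :: b -> ()           -- bd(B)ce : the unique function into 1
    medToUnit _ = ()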
We shall see later that the datatype of sequences is ‘the’ initial object in a suitably
defined category built upon Set , and that the datatype of streams (infinite lists) is ‘the’
final object in another suitably defined category built upon Set . The morphisms in these
categories are homomorphisms, and the mediators ([ ]) and bd( )ce capture “definitions by
induction on the structure” (structure of the argument and of the result, respectively).
We shall also see that the disjoint union and the cartesian product can be characterised
by initiality and finality, respectively, in a suitably defined category built upon Set .
2.12 Corollaries. Let A be an initial object in the category, with mediator ([ ]) . Here
are some consequences of init-Charn.
Law init-Self is an instantiation of init-Charn in such a way that the right-hand side of
init-Charn becomes true: take f := ([A → B]) . The name Self derives from the fact that
([B]) itSelf is a solution for x in x: A → B .
Law init-Id is an instantiation of init-Charn in such a way that the left-hand side of init-
Charn becomes true: take B, f := A, id A .
The ‘proof’ of init-Uniq is left to the reader. The name Uniq derives from the fact that a
solution for x in x: A → B is unique.
For init-Fusion we argue (suppressing A ):
([B]) ; f = ([C])
≡ init-Charn[ B, f := C, ([B]) ; f ]
([B]) ; f : A → C
⇐ composition-Type
([B]): A → B ∧ f : B → C
≡ init-Self, and premise
true.
These five laws become much more interesting in case the category is built upon another
one, Set for example, and the typing is expressed as one or more equations in the underly-
ing category Set . In particular the importance of law Fusion cannot be over-emphasised;
we shall use it quite often.
Exercise: give a fully calculational proof of init-Uniq, starting with the obligation ‘ f = g ’
at the top line of your calculation.
Exercise: give a calculational proof of the equality ([1]) = bd(0)ce .
Exercise: dualise the init-laws to final-laws; prove final-Fusion yourself, and see whether
your proof is the dual of the one given above for init-Fusion.
2.17 Proving initiality. One may prove that an object A is initial in the category, by
providing a definition for ([ ]) and establishing init-Charn. Almost every such proof in
the sequel has the following format. For arbitrary f and B ,
f: A → B
≡
     . . .
≡
f = an expression not involving f
≡ define ([B]) = the right-hand side of the previous equation
f = ([B]).
Actually, the last two lines in the calculation are superfluous: the remaining lines clearly
show that the statement f : A → B has precisely one solution for f . Nevertheless, we shall
not omit the last two lines for the sake of clarity. Sometimes the proof has the following
format:
f: A → B
⇒
     . . .
⇒
f = expression not involving f = ([B]) , by suitably defining ([ ])
⇒
     . . .
⇒
f : A → B.
In this case we say that we establish the equivalence init-Charn by circular implication.
In general the formulas are not as simple as suggested in the above calculations, since
mostly we will be dealing with initiality in categories built upon another one, so that the
typing f : A → B is a collection of equations in the underlying category.
2.18 Fact. Law init-Self says that there exists at least one morphism from A to B .
Law init-Uniq says that there exists at most one morphism from A to B . Together they
are equivalent to init-Charn:
2.19 [Self] and [Uniq] ≡ [Charn]
where the square brackets denote the universal quantification that was implicit in the
formulations above. The ⇐ -part has been argued in paragraph 2.12; for the ⇒ -part we
show equivalence init-Charn by circular implication:
f: A → B (left-hand side of init-Charn)
≡ init-Self
f : A → B and ([B]): A → B
⇒ init-Uniq
f = ([B]) (right-hand side of init-Charn)
≡ init-Self
f = ([B]) and ([B]): A → B
⇒ proposition logic, equality
f: A → B (left-hand side of init-Charn)
In our experience, proving initiality by establishing init-Self (for some morphism denoted
([B]) ) and init-Uniq is by no means simpler or more elegant than establishing init-Charn
directly, in the way explained in paragraph 2.17.
that is, both compositions of ([A → B]) and ([B → A]) are the identity, and conversely the
identities can be factored (as in the right-hand side) only in this way. We prove both
implications of the equivalence at once.
The equality ([A → B]) ; ([B → A]) = id A can be proved alternatively using init-Id, init-
Fusion, and init-Self in that order. (This gives a nice proof of the weaker claim that initial
objects are isomorphic.)
inl : A → A + B
inr : B → A + B ,
and there may be a predicate that tests whether an element in A + B is inl (x) or inr (y)
for some x ∈ A or some y ∈ B . Using the predicate one can define an operation that in
programming languages is known as a case construct, and vice versa. The case construct
of f and g is denoted f ∇ g and has the following typing and semantics.
f ∇ g: A + B → C for f : A → C and g: B → C
and
inl ; f ∇ g  =  x ↦ f x      for each x ∈ A
inr ; f ∇ g  =  y ↦ g y      for each y ∈ B .
inl ; f ∇ g = f
inr ; f ∇ g = g.
inl ; h = f
inr ; h = g.
This is an important observation; it holds for each representation of disjoint unions! Indeed,
a ‘disjoint union’-like concept for which the claim does not hold, is normally not considered
to be a proper ‘disjoint union’ of A and B .
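In Haskell, Either is one such representation of the disjoint union, with Left and Right for inl and inr and the standard function either for the case construct f ∇ g (a sketch; caseOf is our own name):

    inl :: a -> Either a b
    inl = Left

    inr :: b -> Either a b
    inr = Right

    caseOf :: (a -> c) -> (b -> c) -> Either a b -> c      -- f ∇ g
    caseOf = either

    -- The characterising equations  inl ; (f ∇ g) = f  and  inr ; (f ∇ g) = g :
    --   caseOf f g (inl x) = f x      and      caseOf f g (inr y) = g y .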
Exercise: consider the representation A + B = {0}×A ∪ {1}×B , and work out operations
inl , inr , and f ∇ g . Also, think of another representation for A + B , and work out the
operations again. Prove in each case the above claims. Would you call A ∪ B a disjoint
union of A and B ? Why, or why not?
In summary, we call functions inl : A → D and inr : B → D together with their target D
a disjoint union of A and B if, and only if, for each f : A → C and g: B → C there is
precisely one function h , henceforth denoted f ∇ g , such that
2.27 Sum. Let A be an arbitrary category, the default category, and let A, B be objects. A sum of A and B is: an initial object in ⋁(A, B) ; it may or may not exist. Let (inl , inr ) be a sum of A and B ; their common target is denoted A + B . We abbreviate ([(inl , inr ) → (f, g)])⋁(A,B) to just f ∇ g , not mentioning the dependency on A, B and inl , inr .
Working out ‘being an object in ⋁(A, B) ’ in terms of A yields the following instantiation of init-Type:

f : A → C ∧ g: B → C   ⇒   f ∇ g: A + B → C                            ∇-Type

Working out the typing in ⋁(A, B) in terms of equations in A yields the following instantiations of the laws for initiality:
Law ∇ -Uniq says that the pair inl , inr is jointly epic. Law ∇ -Fusion simplifies to an
unconditional law by substituting h, j := f ; h, g ; h :
f ∇ g ; h = (f ; h) ∇ (g ; h) ∇ -Fusion
2.28 Product. Products are, by definition, dual to sums. Let exl , exr be a product of A and B , supposing one exists; their common source is denoted A × B . We abbreviate bd(f, g → exl , exr )ce⋀(A,B) to just f ∆ g . The typing law works out as follows:

f : C → A ∧ g: C → B   ⇒   f ∆ g: C → A × B                            ∆-Type
Law ∆ -Uniq says that the pair (exl , exr ) is jointly monic. Law ∆ -Fusion has been simplified
to an unconditional form.
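Dually, in Haskell (a sketch): fst and snd play the rôle of exl and exr , and the ‘fork’ below is f ∆ g .

    exl :: (a, b) -> a
    exl = fst

    exr :: (a, b) -> b
    exr = snd

    fork :: (c -> a) -> (c -> b) -> (c -> (a, b))           -- f ∆ g
    fork f g c = (f c, g c)

    -- ∆-Fusion in its unconditional form,  h ; (f ∆ g) = (h ; f) ∆ (h ; g) :
    --   fork f g (h c) = fork (f . h) (g . h) c .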
2.29 Application. As an application of the laws for sum and product we show that ∇ and ∆ abide. Two binary operations | and ⊖ abide with each other if: for all values a, b, c, d

(a | b) ⊖ (c | d)  =  (a ⊖ c) | (b ⊖ d) .

Writing a | b beside each other, as a | b , and a ⊖ b above each other, as a over b , the equation reads

    a | b         a       b
    -----   =     -   |   -   .
    c | d         c       d
The term abide has been coined by Bird [3] and comes from “above-beside.” In category
theory this property is called a ‘middle exchange rule’ or ‘interchange rule’.
Here is a proof that ∇ and ∆ abide:
(f ∇g) ∆ (h ∇ j) = (f ∆ h) ∇ (g ∆ j)
≡ ∇ -Charn [f, g, h := lhs, f ∆ h, g ∆ j]
inl ; (f ∇ g) ∆ (h ∇ j) = f ∆ h ∧ inr ; (f ∇ g) ∆ (h ∇ j) = g ∆ j
≡ ∆ -Fusion (at two places)
Exercise: give another proof in which you start with ∆ -Charn rather than ∇ -Charn.
Exercise: give another proof in which you start as above and then apply ∆ -Charn at the
second step (at two places).
Exercise: choose an explicit representation for the disjoint union (and cartesian product),
and prove the abides law in set-theoretic terms, using the chosen representation.
2.30 More laws. For arbitrary categories in which sums and products, respectively,
exist, we define, for f : A → B and g: C → D ,
f × g  =  (exl ; f ) ∆ (exr ; g)
f + g  =  (f ; inl ) ∇ (g ; inr ) .

In case the category is Set , function f × g acts componentwise: (a, b) ↦ (f a, g b) ; similarly, f + g acts componentwise on a disjoint union: inl (a) ↦ inl (f a) and inr (b) ↦ inr (g b) . The mappings + and × are bifunctors:
id + id = id and f + g ; h + j = (f ; h) + (g ; j) , and similarly for × . Throughout the
text we shall use several properties of product and sum. These are referred to by the hint
‘product’ or ‘sum’. Here is a list; some of these are just the laws presented before.
Exercise: identify the laws that we’ve seen already, and prove the others.
Exercise: above we’ve explained f ×g and f +g in set-theoretic terms in case the category
is Set ; which of the equations comes closest to those specifications “at the point-level”?
Exercise: what about the following equivalences:
f × g = h × j  ≡?  f = h ∧ g = j          f + g = h + j  ≡?  f = h ∧ g = j
Are these true in each category? (Answer: no. Hint: in Set we have A × ∅ = ∅ for each
A . Now take f, h: A → A arbitrary, and g = j = id ∅ .)
Exercise: prove that in each category each exl A,A is epic, whereas exl A,B is not necessarily
epic. (Hint: take B = ∅ in Set .)
Exercise: prove that in each category that has products and a final object, 1 × A ≅ A and A × (B × C) ≅ (A × B) × C .
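For the two bifunctors a Haskell transcription reads as follows (a sketch; in the standard libraries the same mappings appear as bimap for pairs and for Either ):

    cross :: (a -> b) -> (c -> d) -> ((a, c) -> (b, d))         -- f × g
    cross f g (a, c) = (f a, g c)

    plus :: (a -> b) -> (c -> d) -> (Either a c -> Either b d)  -- f + g
    plus f g = either (Left . f) (Right . g)

    -- Bifunctor laws:  id × id = id  and  (f ; h) × (g ; j) = (f × g) ; (h × j) ,
    -- and similarly for + .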
2d Coequalisers
As another example of a categorical characterisation by initiality, we present here the
notion of coequaliser. A coequaliser is a categorical notion that, specialised to category
Set , yields the well-known and important notion of induced equivalence relation.
2.31 Category ⋁(f ∥ g) . In order to characterise coequalisers by initiality, we need the auxiliary notion of ⋁(f ∥ g) .
Let A be a category, the default one (think for example of Set ). Let (f, g) be a parallel pair, that is, f and g have a common source and a common target. Then ⋁(f ∥ g) is the category built upon A as follows. An object in ⋁(f ∥ g) is: a morphism p in A satisfying f ; p = g ; p . Let p and q be objects in ⋁(f ∥ g) ; then a morphism in ⋁(f ∥ g) from p to q is: a morphism x in A such that p ; x = q .

(Diagram: the parallel pair f, g : • → • followed by p ; the morphism x satisfies p ; x = q .)

The phrase ‘ p is an object in ⋁(f ∥ g) ’ is just a concise way of saying ‘ p is a morphism satisfying f ; p = g ; p ’. Unfortunately there is no simple noun or verb for this property.
2.32 Definition. Let A be a category, the default one. Let (f, g) be a parallel pair.
Then, a coequaliser of f, g is: an initial object in ⋁(f ∥ g) .
Let p be a coequaliser of (f, g) , supposing one exists. We write p\f,g q or simply p\q
instead of ([p → q])⋁(f ∥ g) since, as we shall explain, the fraction notation better suggests
the calculational properties. Working out the definition of being an object in ⋁(f ∥ g) in
terms of equations in A , we obtain the following instantiation of the laws for initiality.
and further:
In accordance with the convention explained in paragraph 2.20 we have omitted in laws
\-Charn, \-Self and \-Fusion the well-formedness condition that q is an object in ⋁(f ∥ g) ;
the notation ...\q is only meaningful if f ; q = g ; q , like in arithmetic where the notation
m/n is only meaningful if n differs from 0 . Notice that \-Uniq and \-Fusion simplify to:
2.33 Additional laws. The following law, \-Compose, confirms the choice of notation once more:
p\q ; q\r = p\r .    \-Compose
Its proof:
p\q ; q\r
=        \-Fusion
p\(q ; q\r)
=        \-Self
p\r.
An interesting aspect is that the omitted subscripts to \ may differ: e.g., p\f,g q and
q\h,j r , and q is not necessarily a coequaliser of f, g . Rephrased in the notation for
initiality in general, law \-Compose reads
([B])A ; ([C])B = ([C])A ,
where A and B are full subcategories of some category C and objects B, C are in both
A and B ; in our case A = ⋁(f ∥ g) , B = ⋁(h ∥ j) , and C = ⋁(D) where D is the common
target of f, g, h, j . Then the proof runs as follows.
Here is another law, \-Ftr; its proof shows two of the above laws in action. As before, let p be
a coequaliser, and suppose that F p is a coequaliser as well, so that the notation F p\F q makes sense. Then
F (p\q) = F p\F q
≡        \-Charn
F p ; F (p\q) = F q
≡        functor
F (p ; p\q) = F q
≡        \-Self
true.
Exercise: let p be a coequaliser of a parallel pair (f, g) , and let h be an epimorphism
with tgt h = src f = src g . Prove that p is a coequaliser of (h ; f, h ; g) .
Exercise: let pi be a coequaliser of a pair (fi , gi ) , for i = 0, 1 . Prove that p0 + p1 is a
coequaliser of (f0 + f1 , g0 + g1 ) , assuming that sums exist.
2.35 Interpretation in Set . Take A = Set , the default category, and let A be a
set. Each parallel pair (f, g) with target A determines a binary relation Rf,g on A ,
and conversely, each binary relation R on A determines a parallel pair (fR , gR ) in the
following way:
Rf,g = {(f x, g x) | x ∈ src f }    ⊆ A × A
fR = exl : {(x, y) | x R y} → A
gR = exr : {(x, y) | x R y} → A .
A coequaliser of (f, g) is then, up to isomorphism, the function that sends each a ∈ A to its
equivalence class under the least equivalence relation on A containing Rf,g .
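To make this interpretation concrete, the following Haskell sketch (not part of the original text; finite sets are represented naively as lists) computes a coequaliser of two functions between finite sets: the canonical map onto the blocks of the least equivalence relation containing Rf,g .

    import Data.List (nub)

    -- coeq f g xs ys: xs and ys list the source and target sets; the result is the pair of
    -- the canonical map p (sending y to its equivalence class) and the list of classes.
    coeq :: Eq b => (a -> b) -> (a -> b) -> [a] -> [b] -> (b -> [b], [[b]])
    coeq f g xs ys = (\y -> head [c | c <- classes, y `elem` c], classes)
      where
        pairs   = [(f x, g x) | x <- xs]          -- the relation R(f,g), assumed to land in ys
        classes = go (map (: []) (nub ys))        -- start from singleton classes
        go cs   | cs' == cs = cs                  -- stop when no pair forces a further merge
                | otherwise = go cs'
          where cs' = foldl merge cs pairs
        merge cs (u, v)                           -- merge the classes of u and v, if distinct
          | cu == cv  = cs
          | otherwise = (cu ++ cv) : [c | c <- cs, c /= cu, c /= cv]
          where cu = head [c | c <- cs, u `elem` c]
                cv = head [c | c <- cs, v `elem` c]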
2e Colimits
The notion of colimit is a far-reaching generalisation of the notions of sum, coequaliser,
and several others. Each colimit is a certain initial object, and each initial object is a
certain colimit. We shall briefly define the notion of colimit, and present its calculational
properties derived from the characterisation by initiality. We shall also give a nontrivial
application involving colimits.
By definition, limits are dual to colimits. So by duality limits generalise such notions
as product, equaliser, and several others. Each limit is a certain final object, and each final
object is a certain limit.
The formal definition uses the notions of a diagram D and of the cocone category ⋁D ,
which we now present.
2.36 Diagram. Let A be a category, the default one. A diagram in A is: a graph
whose nodes are labelled with objects and whose edges are labelled with morphisms, in such
a way that the labelling is “consistent” with the typing of the category, that is, for a labelling
•A −f→ •B in the diagram, it is required that f : A → B in the category. As a consequence,
if −f→ • −g→ is in the diagram, then tgt f = src g in the category.
Although a diagram in A is (or: determines) a category, that category is not necessarily
a subcategory of A ; distinct edges may have the same label. Here is a counterexample.
Let A be the category determined by
•A −f→ •B −g→ •C    with h = f ; g ,
and consider two diagrams in A : the left diagram has two distinct edges from A to B ,
both labelled f ; the right diagram has the edges f : A → B and g: B → C together with
a third edge from A to C labelled h .
In the left diagram there are two edges (morphisms) from A to B , whereas in A there
is only one. In the right diagram there are two edges (morphisms) from A to C , labelled
f ; g and h respectively, whereas in A there is only one; by definition h = f ; g .
Extreme cases of diagrams are diagrams with zero, one, or more nodes, and no edges at all.
For simplicity in the formulations to come, we consider a diagram in A to be a functor
D: D → A , where D is a graph (hence category) giving the shape of the diagram in A ,
and D gives the labeling: a node A in D is labeled DA (an object in A ), and an edge
f in D is labeled Df (a morphism in A ).
2.37 Category ⋁D . Let A be a category, the default one, and let D: D → A be a
functor, hence a diagram in A . Then category ⋁D , built upon A , is defined as follows; its
objects are called cocones for D .
(Diagram: the objects DA, DB, . . . of the diagram with the morphisms Df between them;
a cocone γ consists of morphisms γA , γB , . . . from each of them into a common target C ;
a second cocone δ , into a target C′ say, is compared with γ by a morphism x: C → C′ .)
A cocone for D is: a family γA : DA → C of morphisms (for some C ), one for each A
in D , satisfying:
Df ; γB = γA for each f : A →D B .
This condition is called ‘commutativity of the triangles’. Using naturality and constant
functors there is a technically simpler definition of a cocone. Define C to be the constant
functor, C x = C for each object x , and C f = id C for each morphism f . Now, each
cocone for D is a natural transformation γ: D → . C in A (for some C ), and vice versa.
(Exercise: check this.) We define, for the γ above, tgt γ = C . (Notice that γ is a
morphism in the functor category F = Ftr (D, A) , so that tgtF γ = C . The object C
really forms part of the cocone, even if D is empty and, hence, γ is an empty family. To
stress this fact, one might prefer to define a cocone as a pair (C, γ) .)
Continuing the definition of category ⋁D , let γ and δ be cocones for D ; then, a
morphism from γ to δ in ⋁D is: a morphism x satisfying γA ; x = δA for each object
A in D . It follows that x: tgt γ → tgt δ . The condition on x can be written simply
γ ; x = δ , when we define the composition of a cocone with a morphism by:
(γ ; x)A = γA ; x .    (composition of a cocone with a morphism)
Then
γ: D →. C ∧ x: C → C′   ⇒   γ ; x: D →. C′ .
2.38 Definition. Let A be a category, the default one, and let D be a diagram in A . A
colimit for D is: an initial object in ⋁D ; it may or may not exist.
Let γ be a colimit for D . We write ([γ → δ])⋁D as γ\δ . Working out the definition of cocone
in terms of equations in A , we obtain the following characterisation of a colimit. There
exists a mapping γ\ such that
δ cocone for D ⇒ γ\δ: tgt γ → tgt δ    \ -Type
for D -cocones δ and ε ( δ and F γ each being a colimit when occurring as the left argument
of \ ). Law \ -Uniq asserts that each colimit is ‘jointly epic’. For the proofs of \ -Compose
and \ -Ftr see law init-Compose and law \ -Ftr in paragraph 2.33.
2.39 Another law. Here is another law. Write the subscripts to natural transformations
as proper arguments, writing γA for the component of γ at A , and recall the definition (γF )A = γ(F A) , for a functor
F . Let both γ and γF be colimits (for the same diagram). Then, for each cocone δ for
that same diagram:
γF \δF = γ\δ
γF \δF = γ\δ
≡ \ -Charn [γ, δ, x := γF, δF, γ\δ]
γF ; γ\δ = δF
≡ \ -Self applied within the right-hand side: δ = γ ; γ\δ
γF ; γ\δ = (γ ; γ\δ)F
≡ extensionality
(γF ; γ\δ)A = (γ ; γ\δ)F A for each A
≡ composition of cocone with a morphism
true.
We shall now show in paragraphs 2.40, 2.41, and 2.42 that initial objects, sums, and
coequalisers are colimits. Then we give in paragraph 2.43 an example of a colimit that
explains the term ‘limit’. Finally, in paragraph 2.44 we give a nontrivial application of
the laws for colimits. In paragraph A.55 it is shown that left adjoints preserve colimits.
2.40 Initiality as colimit. Let A be a category, the default one. Take D empty, so
that D: D → A is the empty functor. Then a cocone δ for D is the empty family ( )B
of morphisms, where B = tgt δ .
Suppose γ = ( )A is a colimit for D ; it may or may not exist. Then A is initial in
A . To show this, we establish init-Charn, constructing ([ ]) along the way. For arbitrary
object B and morphism x ,
x: A → B
≡ property of the empty natural transformations ( )A and ( )B
( )A ; x = ( ) B
≡ γ = ( )A is colimit for D , and ( )B is cocone for D ; \ -Charn
x = γ\( )B
≡ define ([B]) = γ\( )B
x = ([B]).
Exercise: show that, if there exists an initial object in A , then there exists a colimit for
the diagram D above.
2.41 Sum as colimit. Let A be the default category, and let A and B be objects.
Take D and D as suggested by DD = ( •A  •B ) , the diagram with just the two nodes A
and B and no edges. Then a cocone δ for D is a two-member
family δ = (f, g) with f : A → C and g: B → C , where C = tgt δ .
Let γ = (inl′ , inr′ ) be a colimit for D . Then γ is a sum of A and B . To show this,
we establish the existence of ∇′ for which ∇′ -Charn holds, constructing ∇′ along the
way. For arbitrary f : A → C , g: B → C , and morphism x ,
Exercise: show that, if a sum of A and B exists, then there exists a colimit for the
diagram D above.
2.42 Coequaliser as colimit. Let A be the default category, and let (f, g) be a
parallel pair, with source A′ and target A say. Take D and D as suggested in the top
lines of the following pictures.
(Pictures: the diagram has the two nodes A′ and A and the two edges f, g: A′ → A .
A cocone for it is a pair (q′, q) with f ; q = q′ = g ; q ; a colimit is such a pair (p′, p) ,
and a morphism x compares (p′, p) with (q′, q) .)
2.43 “Limit point” as colimit. This example explains the name ‘limit’: (co)limits
may define real limiting points. Let Set be the default category, and consider an infinite
sequence of sets, each including the previous one: A0 ⊆ A1 ⊆ A2 . . . . The subsets are
partly identical (they have some elements in common), but categorically they are different
objects. An inclusion A ⊆ A′ is expressed categorically by the existence of an injective
function f : A → A′ that embeds each element from A into A′ . (In this way, a set may
be a subset of another one in several distinct ways.) So the sequence of embeddings is
expressed by the diagram:
A0 −f0→ A1 −f1→ A2 −f2→ · · ·
Let A be the least set that includes all the Ai (in Set : their union), and let γi : Ai → A
be the embeddings; the claim is that γ = (γ0 , γ1 , γ2 , . . .) is a colimit for this diagram.
A cocone δ for the diagram, with target B say, is pictured together with the chain as follows.
(Picture: the chain A0 → A1 → A2 → · · · with, from each Ai , the morphism δi into B .)
To prove the claim, we must show that, for arbitrary cocone δ for that diagram, there is
precisely one solution x for the equation γ ; x = δ . Consider the function x′ that maps
an element a ∈ A onto δi (a′) ∈ B , where i, a′ are such that γi (a′) = a . There exists for
every a ∈ A a pair i, a′ with γi (a′) = a , since by definition A is the least set including all
Ai . The ‘commutativity of the triangles’ (of both γ and δ ) implies that the specification
of x′ is unambiguous: δi (a′) = δj (a″) if γi (a′) = γj (a″) . Clearly, this x′ is a solution for
x in γ ; x = δ ; we leave it as an exercise to show that it is the only solution.
In effect, γ represents the infinite composition (embedding) f0 ; f1 ; f2 ; · · · ; more
precisely, γ is the limit of f0 ; f1 ; f2 ; · · · .
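A Haskell rendering of the mediating morphism just constructed (an illustration, not part of the original text; the chain is represented naively as a function from stage numbers to finite lists of Ints, and the embeddings are taken to be actual inclusions):

    import Data.List (find)

    type Chain = Int -> [Int]      -- stage i yields the elements of Ai; we assume Ai ⊆ Ai+1

    -- delta is a cocone into b (one function per stage, assumed to commute with the inclusions);
    -- mediate returns the unique x with gamma ; x = delta: send a to delta i a for the first
    -- stage i whose set contains a.  Commutativity of the triangles makes the choice irrelevant.
    mediate :: Chain -> (Int -> Int -> b) -> Int -> b
    mediate as delta a =
      case find (\i -> a `elem` as i) [0 ..] of
        Just i  -> delta i a
        Nothing -> error "a is not in the union of the chain"   -- unreachable: find keeps searching

    -- Example: with  as i = [0 .. i]  and  delta i n = show n ,  mediate as delta 5 = "5".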
α: F A → A
(a) ⇐        definition isomorphism
α: F A ≅ A
(b) ⇐        definition cocone morphism (taking A = tgt γ = tgt γS )
α: F γ ≅ γS in ⋁(F D) ∧ F D = DS
(c) ≡        F γ is colimit for F D (taking α = F γ\γS )
γS is colimit for DS ∧ F D = DS.
Step (a): this is motivated by the wish that α be initial in Alg(F ) , and so α will be an
isomorphism; in other words, in view of the required initiality the step is no strengthening.
Step (b): here we merely decide that α, A come from a (co)limit construction; this is
true for many categorical constructions. So we aim at α: F γ ≅ ... , where γ is a colimit
(which we assume to exist) for a diagram D yet to be defined. Since F γ is an F D -cocone,
there has to be another F D -cocone on the dots. To keep things simple, we aim at an
F D -cocone constructed from γ , say γS , where S is an endofunctor on src D . Since
γS is evidently a DS -cocone, and must be an F D -cocone, it follows that F D = DS is
another requirement.
Step (c): the hint ‘ F γ is colimit for F D ’ follows from the assumption that F preserves
colimits, and the definition α = F γ\γS is forced by (the proof of) the uniqueness of
initial objects. (It is indeed very easy to verify that F γ\γS and γS\F γ are each other’s
inverse.)
We shall now complete the construction in the following three parts.
Our guess is that γ\ε may be chosen for ([γS → δ])⋁(DS) for some suitably chosen ε:
D →. tgt δ that depends on δ . This guess is sufficient to start the proof of (♠) ; we shall
derive a definition of ε (more specifically, for ε0 and εS ) along the way.
x = γ\ε
≡        \-Charn
γ ; x = ε
≡        observation at the end of Part 1
(γ ; x)0 = ε0 ∧ (γ ; x)S = εS
≡        composition of a cocone with a morphism, extensionality
γ0 ; x = ε0 ∧ γS ; x = εS
≡        { aiming at the left hand side of (♠) }
         define εS = δ (noting that δ: DS →. tgt δ , which is the type required of εS )
γ0 ; x = ε0 ∧ γS ; x = δ
(∗) ≡    define ε0 below such that γS ; x = δ ⇒ γ0 ; x = ε0 for all x
γS ; x = δ.
γ0 ; x
=        { anticipating next steps, introduce an identity }
         (recall that tgt γ has been called A , so that γ: D →. A )
γ0 ; A(0≤1) ; x
=        naturality of γ (‘commutativity of the triangle’)
D(0≤1) ; γ1 ; x
=        assumption γS ; x = δ
D(0≤1) ; δ0 .
So we define ε0 = D(0≤1) ; δ0 .
Our guess is that the required morphism ([α → ϕ])F can be written as γ\δ for some suitably
chosen D -cocone δ . This guess is sufficient to start the proof of (♣) , deriving a definition
for δ (more specifically, for δ0 and δS ) along the way:
F γ\γS ; x = F x ; ϕ
≡ \ -Fusion
F γ\(γS ; x) = F x ; ϕ
≡ \ -Charn[ γ, δ, x := F γ, γS ; x, F x ; ϕ ]
F γ ; F x ; ϕ = γS ; x
≡ lhs: functor, rhs: composition of cocone with a morphism
F (γ ; x) ; ϕ = (γ ; x)S
(∗) ≡ explained and proved below (defining δ )
γ;x=δ
≡ \ -Charn
x = γ\δ.
Arriving at the line above (∗) I see no way to make progress except to work bottom-up
from the last line. Having the lines above and below (∗) available, we define δSn in terms
of δn by
δS = Fδ ; ϕ,
a definition that is also suggested by type considerations alone. Now part ⇐ of equivalence
(∗) is immediate:
F (γ ; x) ; ϕ = (γ ; x)S
⇐ definition δS : F δ ; ϕ = δS
γ ; x = δ.
For part ⇒ of equivalence (∗) we argue as follows, assuming the line above (∗) as a
premise, and defining δ0 along the way.
γ;x=δ
≡ induction principle
(γ ; x)0 = δ0 ∧ ∀(n :: (γ ; x)n = δn ⇒ (γ ; x)Sn = δSn)
≡ proved below: the ‘base’ in (i), and the ‘induction step’ in (ii)
true.
For (i), the induction base, we calculate:
γ0 ; x
= init-Charn, using γ0: 0 → A
([A])C ; x
= init-Fusion, using x: A → B
([B])C
= define δ0 = ([B])C
true.
And for (ii), the induction step, we calculate for arbitrary n , using the induction hypothesis
(γ ; x)n = δn ,
(γ ; x)Sn
= line above (∗)
(F (γ ; x) ; ϕ)n
= hypothesis (γ ; x)n = δn
(F δ ; ϕ)n
= definition δS
(δS)n
Appendix A
More on adjointness
We give several equivalent definitions of adjointness, and some corollaries and theorems.
[Note — added in proof: you’d better read the paper “Adjunctions” written by Fokkinga
and Meertens, Draft version printed in December 1992.]
For ψ: F A →B B we have bbψccA,B : A →A GB , and for ϕ: A →A GB we have ddϕeeA,B : F A →B B .
Mappings bb cc and dd ee are called lad and rad, respectively, from left adjungate and right
adjungate. As a memory aid: the first symbol of bb cc has the shape of an ‘L’ and therefore
denotes lad. In typewriter font I would write lad( ) and rad( ). For readability I will
omit the typing information whenever appropriate, as well as most subscripts. Omitting
the subscripts is dangerous (and even erroneous) if categories A and B are built upon
another one. For example, when B itself is a morphism in an underlying category, then
εB might be a morphism that depends on (and is expressed in) B . Nevertheless, in
the following calculations the subscripts are derivable from the context (in a mechanical
way, like type inference in modern functional languages), thus justifying the omission; see
paragraph 1.39.
A.3 Remark. The following theorem asserts the equivalence of several statements. Each
of them defines “ F is left adjoint to G ”.
So, in order to prove that F is left adjoint to G it suffices to establish just one of
the statements, and when you know that F is left adjoint to G you may use all of the
statements. Before we present the proof of the theorem, we also give some corollaries:
additional properties of an adjunction.
A.4 Theorem. Statements Adjunction, Units, LadAdj, RadAdj, Fusions, and Charns
are equivalent. Moreover, the various bb cc that are asserted to exist can all be chosen
equal; the same holds for dd ee, η, and ε .
Adjunction. There exist η and ε typed as in paragraph A.2 and satisfying
A.5 ϕ = η ; Gψ ≡ Fϕ ; ε = ψ Adjunction
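For concreteness, here is a small Haskell sketch (an illustration, not part of the original text) of the most familiar instance of law Adjunction in Set : F X = X × A is left adjoint to G Y = (A → Y) , and lad and rad become curry and uncurry.

    -- lad (bb cc), read off from Adjunction as  eta ; G psi :
    lad :: ((x, a) -> y) -> (x -> (a -> y))
    lad psi = \x a -> psi (x, a)                 -- this is curry

    -- rad (dd ee), read off as  F phi ; eps :
    rad :: (x -> (a -> y)) -> ((x, a) -> y)
    rad phi = \(x, a) -> phi x a                 -- this is uncurry

    -- unit  eta : X -> G (F X)  and co-unit  eps : F (G Y) -> Y :
    eta :: x -> (a -> (x, a))
    eta x = \a -> (x, a)

    eps :: (a -> y, a) -> y
    eps (f, a) = f a
    -- Law Adjunction says: phi = lad psi exactly when rad phi = psi; that is, curry and
    -- uncurry are each other's inverse (law Inverse in the sequel).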
A.29 Discussion. A quick glance at the formulas of the Theorem and the Corollary
reveals that the same subexpressions turn up over and over again. In particular, a definition
for bb cc and dd ee (see lad- and rad-Def) can be read off directly from Adjunction; it is then
also immediate that bb cc and dd ee are each other’s inverse, as expressed by law Inverse.
Also, the pattern of the left-hand side of unit-Inv is clearly recognizable in Adjunction.
The left-hand sides of laws lad- and rad-Charn are the same as the two sides of Adjunction.
(It seems to me that law Adjunction is in general the easiest to work with when deriving
consequences of an adjunction.)
Another reading of Adjunction is this: there is precisely one solution for ψ in the
left-hand side equation, namely the ψ given by the right-hand side equation; and, also,
there is precisely one solution for ϕ in the right-hand side equation, namely the one given
by the left-hand side equation. The uniqueness of the solutions is also expressed by laws
lad- and rad-Charn separately, and the solutions themselves are given by bb cc and dd ee .
(See also paragraph 2.7 that explains the trick of expressing uniqueness of solutions in a
way that is suitable for calculation.)
Law unit-Inv asserts that η has a post-inverse, law Inv-co-unit asserts that ε has a
pre-inverse, lad-Uniq asserts a kind of monic-ness for ε , and rad-Uniq asserts a kind
of epic-ness for η . Law lad-Self shows that the effect of bb cc can be undone; indeed, the
definition of dd ee follows the pattern of the left-hand side of lad-Self. The name ‘Self’
derives from the observation that bbψcc itself is a solution for ϕ in the ever recurring
equation F ϕ ; ε = ψ . That nomenclature is consistent with the nomenclature that we’ve
proposed for the laws of initiality.
The names of the laws and the symbols bb cc and dd ee are not standard in category
theory.
A.31 Lemma.
So, to prove Fusions it suffices to establish Inverse and either lad-Fusion or rad-Fusion.
The proof of the lemma is simple:
ε: F G →. I
≡        definition →. :
         for all g: B →B B′ :  F Gg ; εB′ = εB ; g
≡        Adjunction [ ϕ, ψ := Gg, (ε ; g) ] (from right to left)
Gg = η ; G(ε ; g)
≡        functor
Gg = η ; Gε ; Gg
⇐        Leibniz
(∗)      id = η ; Gε    (unit-Inv)
≡        Adjunction [ ϕ, ψ := id , ε ] (from left to right)
F id ; ε = ε
≡        functor, identity
true.
Fϕ ; ε = ψ
≡ Inv-co-unit
Fϕ ; ε = Fη ; ε ; ψ
≡ co-unit-Ntrf
F ϕ ; ε = F η ; F Gψ ; ε
⇐ functor, Leibniz
ϕ = η ; Gψ = bbψcc by defining bbψcc = η ; Gψ (right hand side)
≡ unit-Inv
ϕ ; η ; Gε = η ; Gψ
≡ unit-Ntrf
η ; GF ϕ ; Gε = η ; Gψ
⇐ functor, Leibniz
F ϕ ; ε = ψ.
ϕ = bbψcc
≡        lad-Charn
F ϕ ; ε = ψ
≡        define ddϕee = F ϕ ; ε
ddϕee = ψ.
Now rad-Fusion follows by Lemma A.30.
A.35 Proof of Fusions ⇒ RadAdj. We establish rad-Charn, starting with the right-
hand side, since that doesn’t contain the unknown η , and defining η along the way:
ψ = ddϕee
≡ Inverse
bbψcc = ϕ
≡ lad-Fusion
bbid cc ; Gψ = ϕ
≡ define η = bbid cc
η ; Gψ = ϕ.
Now we establish unit-Ntrf:
η: I → . GF
≡ definition naturality
For all f :
f ; η = η ; GF f
≡ definition η (derived above)
f ; bbid cc = bbid cc ; GF f
≡ lad-Fusion at both sides
bbF f ; id cc = bbid ; F f cc
≡ identity, equality
true.
A.36 Proof of RadAdj ⇒ Charns. First we establish Inverse, defining bb cc along the
way:
ddϕee = ψ
≡ rad-Charn
ϕ = η ; Gψ
≡ define bbψcc = η ; Gψ
ϕ = bbψcc.
Next we establish lad-Charn, defining ε along the way:
ϕ = bbψcc
≡ Inverse (just derived)
ddϕee = ψ
(∗) ≡ rad-Fusion (see below)
F ϕ ; ddid ee = ψ
≡ define ε = ddid ee
F ϕ ; ε = ψ.
In step (∗) we have used rad-Fusion. This law follows from RadAdj in the same way as
lad-Fusion follows from LadAdj, see paragraph A.34.
A.40 Exercise. Give alternative proofs for each of the corollaries. For example, law unit-
Def may also be proved directly from Charn by reducing the obligation η = bbidcc to true
by applying ‘functor and identity’ (introducing Gid after η ), rad-Charn[ ϕ, ψ := bbid cc, id ],
and Inverse, in that order. Another possibility is to apply lad-Charn, Adjunction, ‘functor
and identity’. Yet another possibility is to reduce the obligation η = bbidcc to true by
applying Inverse, Charn, ‘functor and identity’.
A.41 Exercise. Barr and Wells [2] present RadAdj as a definition of “ F is adjoint to
G ”, and they prove LadAdj as a proposition. Compare our calculational proof of LadAdj
⇒ RadAdj with the two-and-a-half page proof of Barr and Wells (Proposition 12.2.2,
containing eight diagrams).
A.42 Exercise. Derive the typing (and the subscripts to bb cc, dd ee, η, and ε ) for each
of the laws, following the procedure of paragraph 1.39.
A.43 Exercise. Let F be left-adjoint to G via η, ε and also via η, ε′ . Prove that
ε = ε′ .
A.44 Exercise. Find F and G such that F is left-adjoint to G via η, ε as well as via
η′, ε′ with (η, ε) ≠ (η′, ε′) . (Hint: take F = G = I , and A = B = a category with one
object and two morphisms.) So an adjointness does not determine the unit and co-unit
uniquely.
A.45 Exercise. Suppose that F and F′ are both left-adjoint to G . Prove that F ≅ F′
(in category Ftr (A, B) ). (Hint: first establish the existence of natural transformations
κ: F →. F′ and, by symmetry, κ′: F′ →. F ; then show that κ ; κ′ = id and, by symmetry,
κ′ ; κ = id .) Conclude that κ and κ′ are, in general, not uniquely determined by F, F′, G .
(Hint: see Exercise A.44.)
A.46 Hom-functor, notation. For arbitrary category C we define the two-place map-
ping ( → ) by:
(C → C′) = {h in C | h: C →C C′}    an object in Set
(h → h′) = λχ. h ; χ ; h′    a morphism in Set , typed (tgt h → src h′) →Set (src h → tgt h′)
It follows that ( → ) is a functor (contravariant in its first parameter, since src h and
tgt h change place in the source and target type of (h → h′) ):
( → ) : C op × C → Set .
This functor is called the hom-functor, and is usually written Hom( , ) . Our notation is
motivated by, amongst others, the observation that h: C → C′ equivales h ∈ (C → C′) .
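In Set the action of the hom-functor on morphisms is just pre- and post-composition; a one-line Haskell sketch (not part of the original text; homMap is an ad-hoc name):

    -- (h → h') = λχ. h ; χ ; h' : note that h : c0 -> c acts on the source side of χ,
    -- which is the contravariance of ( → ) in its first parameter.
    homMap :: (c0 -> c) -> (c' -> c1) -> (c -> c') -> (c0 -> c1)
    homMap h h' chi = h' . chi . h               -- diagrammatically: h ; chi ; h'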
For bifunctor ⊕ and functors F, G , we write F ⊕ G for the functor x ↦ F x ⊕ Gx .
Further, let
(F X → Y )(A, B) = (F A → B) = {g in B| g: F A →B B}
(F X → Y )(f, g) = (F f → g) = λψ. F f ; ψ ; g
and
(X → G Y )(A, B) = (A → G B) = {f in A| f : A →A G B}
(X → G Y )(f, g) = (f → G g) = λϕ. f ; ϕ ; Gg .
Exercise: check the claims of the last sentence (‘are functors’, ‘of type’, ‘satisfy the equa-
tions’).
IsoAdj.
A.48    (F X → Y ) ≅ (X → G Y )    Iso
A.49 Proof of IsoAdj ≡ Fusions. The isomorphism in Iso is apparently in the category
where functors are the objects and natural transformations are the morphisms. So Iso
abbreviates the following:
A.50    bb cc : (F X → Y ) →. (X → G Y )    lad-Ntrf
A.51    dd ee : (X → G Y ) →. (F X → Y )    rad-Ntrf
A.52    that are each other's inverse.    Inverse
bb cc: (F X → Y ) →. (X → G Y )
≡        definition naturality
         For all (f, g): (A, B) → (A′, B′) in A op × B :
         (F X → Y )(f, g) ; bb ccA′,B′ = bb ccA,B ; (X → G Y )(f, g)
≡        property (F X → Y )(f, g) = (F f → g) and similarly for G
(F f → g) ; bb ccA′,B′ = bb ccA,B ; (f → Gg)
≡        extensionality (in Set )
         For all ψ ∈ (F A → B) :
         ((F f → g) ; bb ccA′,B′ )ψ = (bb ccA,B ; (f → Gg))ψ
≡        composition applied: (F ; G)x = G(F x)
bb ccA′,B′ ((F f → g)ψ) = (f → Gg) (bb ccA,B ψ)
≡        definition hom-functor ( → ) , writing bb cc applied to xyz as bbxyzcc
bbF f ; ψ ; gccA′,B′ = f ; bbψccA,B ; Gg.
g: F 0 →B B
⇒        typing rules (composition, functor), η: I →. GF
η0 ; Gg: 0 →A GB
≡        init-Charn [f, A, B := (η0 ; Gg), 0, GB] in A
([GB])A = η0 ; Gg
≡        Adjunction [ϕ, ψ := ([GB])A , g]
F ([GB])A ; εB = g = ([B])B    by defining ([B])B = F ([GB])A ; εB
⇒        typing rules, ε: F G →. I , and ([GB])A : 0 →A GB
g: F 0 → B.
Exercise: is the first step also valid with ≡ instead of ⇒ , thus shortening the proof?
Exercise: give an alternative proof, using bb cc and dd ee and Inverse. Is there an essential
difference between your proof and the one above?
Exercise: instantiate this proof to the case where G = 0 , the constant functor mapping
each B to 0 and each g to id 0 . What is, in this case, ([ ])B ?
Exercise: formulate the theorem as concisely as possible for the special case that A is taken
to be 1 , the category with one object and one morphism.
Fγ ; x = δ
≡        composition of cocone with a morphism, extensionality
F γA ; x = δA    for each A in D
≡        Inverse, noting that both sides have type F DA →B tgt δ
bbF γA ; xcc = bbδA cc    for each A in D
≡        lad-Fusion
γA ; bbxcc = bbδA cc    for each A in D
(∗) ≡    for ⇒ : define δ′ by δ′A = bbδA cc for each A in D
         for ⇐ : note that by (?) we have δ′A = bbδA cc for each A in D
γ ; bbxcc = δ′
≡        γ is colimit for D , colimit-Charn
bbxcc = γ\δ′
≡        Inverse
x = ddγ\δ′ee
(?) ≡    define F γ\δ = ddγ\δ′ee where δ′A = bbδA cc ; observation below
x = F γ\δ.
The definition of F γ\ in step (?) requires some care. First, even though in general γ
is not recoverable from F γ , here γ is known from the data of the theorem. Second, the
notation ...γ\δ′... requires that δ′ is a cocone for D , that is, δ′: D →. X for some object
X in A . It is almost trivial that δ′ is a transformation from D to some X ; indeed, for
arbitrary A in D :
δ′A : DA →A X
⇐        definition δ′A = bbδA cc , typing bb cc
δA : F DA →B tgt δ and X = G tgt δ
⇐        assumption δ: F D →. tgt δ , define X = G tgt δ
true.
δ′: D →. G tgt δ
≡        definition →. :
         For arbitrary f : A →D A′ :
         Df ; δ′A′ = δ′A ; id
≡        definition δ′ , identity
Df ; bbδA′ cc = bbδA cc
≡        lad-Fusion
bbF Df ; δA′ cc = bbδA cc
⇐        Leibniz, δ: F D →. tgt δ
true.
[1] A. Asperti and G. Longo. Categories, Types, and Structures. Foundations of Computing
Series. The MIT Press, Cambridge, MA, 1991.
[2] M. Barr and C. Wells. Category Theory for Computing Science. Prentice Hall, 1990.
[3] R.S. Bird. Lecture notes on constructive functional programming. In M. Broy, editor,
Constructive Methods in Computing Science. International Summer School directed by
F.L. Bauer [et al.], Springer Verlag, 1989. NATO Advanced Science Institute Series
(Series F: Computer and System Sciences Vol. 55).
[4] M.M. Fokkinga. Law and Order in Algorithmics. PhD thesis, University of Twente,
dept Comp Sc, Enschede, The Netherlands, 1992.
[5] M.M. Fokkinga and E. Meijer. Program calculation properties of continuous algebras.
Technical Report CS-R9104, CWI, Amsterdam, January 1991.
[6] C.A.R. Hoare. Notes on an Approach to Category Theory for Computer Scientists. In
M. Broy, editor, Constructive Methods in Computing Science, pages 245–305. Interna-
tional Summer School directed by F.L. Bauer [et al.], Springer Verlag, 1989. NATO
Advanced Science Institute Series (Series F: Computer and System Sciences Vol. 55).
[7] D.S. Scott. Relating theories of the lambda calculus. In J.P. Seldin and J.R. Hindley,
editors, To H.B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formal-
ism, page 406. Academic Press, 1980.
Introduction to
“Law and Order in Algorithmics”, Ph.D. Thesis by M. Fokkinga,
Chapter 3: Algebras categorically, and
Chapter 5: Datatypes without Signatures
There is a slight discrepancy between the notational conventions in the preceding chapters
and my thesis [4]. In the thesis the following conventions prevail.
Moreover:
• For product categories the extraction functors are denoted Exl , Exr while the symbol
∆ also denotes the tupling (pairing) of functors.
• Juxtaposition associates to the right, so that U µF a = U (µ(F a)) , and binds stronger
than any binary operation symbol, so that F a † = (F a)† . Binary operation sym-
bol ; binds the weakest of all operation symbols in a term denoting a morphism.
As usual, × has priority over + .
In the examples of Chapter 3 there occur references to paragraph 1.12, which introduces
several datatypes informally. Here is a copy of that paragraph:
“Paragraph 1.12: Naturals, lists, streams”. We shall frequently use naturals, cons
lists, cons0 lists, and streams in examples, assuming that you know these concepts. Here
is some informal explanation; the default category is Set .
A distinguished one-element set is denoted 1 . Function !a : a → 1 is the unique
function from a to 1 . Constants, like the number zero, will be modeled by functions with
1 as source, thus zero: 1 → nat . The sole member of 1 is sometimes written ( ) , so that
zero( ) ∈ nat and zero is called a nullary function.
For the naturals we use several known operations, in particular zero: 1 → nat and succ: nat → nat .
The set nat consists of all natural numbers. Functions on nat may be defined by induction
on the zero, succ -structure of their argument.
For lists we distinguish between several variants.
The datatype of cons lists over a has as carrier the set La that consists of finite lists
only. There are two functions nil and cons .
nil : 1 → La
cons : a × La → La .
Depending on the context, nil and cons are fixed for one specific set a , or they are
considered to be polymorphic, that is, having the indicated type for each set a . In a very
few cases a subscript will make this explicit. Each element from La can be written as a
finite expression built from nil and cons .
So, functions over La can be defined by induction on the nil , cons structure of their
argument. For example, definitions of size: La → nat and isempty: La → La + La read
Function isempty sends its argument unaffected to the left/right component of its result
type according to whether it is/isn’t the empty list. A boolean result may be obtained by
post-composing isempty with true ∇ false , see Section 2c for the case construct ∇ . For
each function f : a → b the so-called map f for cons lists, denoted Lf , is defined by
nil a ; Lf = nil b
cons a ; Lf = f × Lf ; cons b .
If L were a functor, these equations assert that nil and cons are natural transformations:
nil : 1 →. L
cons : I × L →. L .
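A Haskell sketch of cons lists (an illustration, not part of the original text; L, mapL and the other names are ad hoc), with size, isempty and the map written pointwise:

    data L a = Nil | Cons a (L a)       -- the text's La; in Haskell the type also admits
                                        -- infinite and partial values, which we ignore here
    size :: L a -> Int
    size Nil         = 0
    size (Cons _ xs) = 1 + size xs

    isempty :: L a -> Either (L a) (L a)     -- the empty list goes left, all others go right
    isempty Nil = Left Nil
    isempty xs  = Right xs

    mapL :: (a -> b) -> L a -> L b           -- the map Lf
    mapL f Nil         = Nil                 --   nil ; Lf  = nil
    mapL f (Cons a xs) = Cons (f a) (mapL f xs)   --   cons ; Lf = f × Lf ; cons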
The datatype of streams over a has as carrier the set Sa of infinite sequences of elements
of a ; there are functions hd : Sa → a and tl : Sa → Sa . A function yielding a stream can
be defined by inductively describing what its result is, in terms of applications of hd and
tl . For example, the stream nats of all naturals is defined as follows.
from : nat → S nat
from ; hd = id
from ; tl = succ ; from
nats : 1 → S nat
nats = zero ; from
These functions act on infinite datastructures and the evaluation of nats on a computing
engine requires an infinite amount of time. Yet these functions are total; for each argument
the result is well-defined. For each function f : a → b the so-called map f for streams,
denoted Sf , is defined by
Sf ; hd b = hd a ; f
Sf ; tl b = tl a ; Sf .
If S were a functor, these equations assert that hd and tl are natural transformations:
hd : S →. I
tl : S →. S .
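A Haskell sketch of streams (an illustration, not part of the original text; S, mapS and the other names are ad hoc, and nat is replaced by Int):

    data S a = SCons a (S a)                 -- streams: head and tail are always defined

    hd :: S a -> a
    hd (SCons a _) = a

    tl :: S a -> S a
    tl (SCons _ s) = s

    from :: Int -> S Int                     -- from ; hd = id ,   from ; tl = succ ; from
    from n = SCons n (from (n + 1))

    nats :: S Int                            -- nats = zero ; from
    nats = from 0

    mapS :: (a -> b) -> S a -> S b           -- Sf ; hd = hd ; f ,   Sf ; tl = tl ; Sf
    mapS f (SCons a s) = SCons (f a) (mapS f s)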
The datatype of cons′ lists over a has as carrier the set L′a , which contains finite as well as
infinite lists. There are functions
nil′ : 1 → L′a
cons′ : a × L′a → L′a
destruct′ : L′a → 1 + a × L′a
isempty′ : L′a → L′a + L′a
with
nil′ ; destruct′ = inl
cons′ ; destruct′ = inr
nil′ ; isempty′ = nil′ ; inl
cons′ ; isempty′ = cons′ ; inr .
Since cons′ lists are possibly infinite, a ‘definition’ by induction on the nil′ , cons′ -structure
of cons′ lists is in general not possible; that would give partially defined functions, and
these do not exist in our intended universe of discourse Set . For example, consider the
following equations with “unknown size′ ”.
These do not define a total function size′ : L′a → nat , in contrast to the situation for
cons lists. (Notice also the difference with the usual datatype of lists of nonstrict functional
programming languages: next to finite and infinite lists, it comprises also partially defined
lists.)
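In Haskell the built-in list type already behaves like the cons′ lists (and, as the last remark notes, it additionally contains partially defined lists). A small sketch (not part of the original text) of destruct′ , isempty′ and the problematic size′ :

    destruct' :: [a] -> Either () (a, [a])
    destruct' []       = Left ()             -- nil'  ; destruct' = inl
    destruct' (a : xs) = Right (a, xs)       -- cons' ; destruct' = inr

    isempty' :: [a] -> Either [a] [a]
    isempty' [] = Left []
    isempty' xs = Right xs

    size' :: [a] -> Int                      -- defined by induction on the nil'/cons' structure,
    size' []       = 0                       -- hence not total here: size' [0 ..] does not
    size' (_ : xs) = 1 + size' xs            -- terminate, matching the observation above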