Session Types in Abelian Logic
Yoichi Hirai
National Institute of Advanced Industrial Science and Technology
y-hirai@aist.go.jp
There was a PhD student who said: "I found a pair of wooden shoes. I put a coin in the left and a key
in the right. Next morning, I found those objects in the opposite shoes." We do not claim the existence
of such shoes, but propose a similar programming abstraction in the context of typed lambda calculi.
The result, which we call the Amida calculus, extends Abramsky's linear lambda calculus LF and
characterizes Abelian logic.
Introduction
We propose a way to unify ML-style programming languages [29, 23] and the π-calculus [28]. "Well-typed
expressions do not go wrong," said Milner [27]. However, when communication is involved, it is not yet settled how to
maintain such a typing principle. For example, Haskell, whose types are similar to
ML-style types, allows different threads to communicate using a kind of shared data store called an
MVar: for mv of type MVar a, the command putMVar mv has type a -> IO () and takeMVar mv has type
IO a. The former command consumes an argument of type a and the consumed argument reappears from
the latter command. However, if programmers make mistakes, these commands can cause a deadlock
during execution even after the program passes type checking.
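As a minimal sketch of the problem (our own illustration, not taken from the paper), the following Haskell program type-checks yet deadlocks, because each takeMVar waits for a putMVar that is never reached:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar

    main :: IO ()
    main = do
      a <- newEmptyMVar :: IO (MVar Int)
      b <- newEmptyMVar :: IO (MVar Int)
      -- The forked thread waits on 'a', which 'main' would only fill
      -- after it has received something on 'b': a circular wait.
      _ <- forkIO $ do { x <- takeMVar a; putMVar b (x + 1) }
      y <- takeMVar b          -- blocks forever: nothing is ever put into 'b'
      putMVar a y
      print y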
In order to prevent this kind of mistake, a type system can force the programmer to use the
sender and the receiver exactly once each. For this, we use the technique of linear types. Linear types are
refinements of the ML-style intuitionistic types. Unlike intuitionistic types, linear types can
specify that a portion of a program is to be used exactly once. Linear types are used by Wadler [35] and by Caires and
Pfenning [9] to encode session types, but our type system can type processes that Wadler's and Caires and Pfenning's
systems cannot.
As intuitionistic types are based on intuitionistic logic, linear types are based on linear logic. There
are classical and intuitionistic variants of linear logic. Our only addition to intuitionistic linear logic
is the Amida axiom (φ ⊸ ψ) ⊗ (ψ ⊸ φ). We will see that the resulting logic is identical to
Abelian logic [10] up to provability of formulae. In the Amida calculus, we can express π-calculus-like
processes as macros. From the viewpoint of typed lambda calculi, a natural way to add the axiom
(φ ⊸ ψ) ⊗ (ψ ⊸ φ) is to add a pair of primitives c and c̄ so that ct ⊗ c̄u
reduces to u ⊗ t: in words,
c returns c̄'s argument and vice versa. In the axiom, we can substitute the singleton type 1 for ψ
to obtain an axiom standing for the send-receive pair of communication primitives (φ ⊸ 1) ⊗ (1 ⊸ φ);
the left-hand side c of type φ ⊸ 1 is the sending primitive and the right-hand side c̄ of type 1 ⊸ φ is
the receiving primitive. The sending primitive consumes a datum of type φ and produces a meaningless
datum of unit type 1. The receiving primitive takes the meaningless datum of type 1 and produces a datum of
type φ.
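To convey the intended operational reading, here is a rough Haskell analogue (ours; the names newChannelPair, send and receive are hypothetical and not part of the calculus): the two primitives behave like the two ends of a one-shot channel, the sending end of type φ ⊸ 1 and the receiving end of type 1 ⊸ φ.

    import Control.Concurrent.MVar

    -- 'send'    plays the role of c     : phi -o 1
    -- 'receive' plays the role of c-bar : 1   -o phi
    newChannelPair :: IO (a -> IO (), () -> IO a)
    newChannelPair = do
      cell <- newEmptyMVar
      let send x     = putMVar cell x   -- consume a datum, return unit
          receive () = takeMVar cell    -- consume unit, produce the datum
      return (send, receive)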
When this work was presented at the workshop, the author was a student at the University of Tokyo and a JSPS fellow
supported by Grant-in-Aid for JSPS Fellows 23-6978.
When we want to use these primitives in lambda terms, there is one problem: what happens to c̄(ct)?
In this case, we do not know the output of c, because the output of c comes from c̄'s input, which is
the output of c. Fortunately, we just want to know the output of c̄, which is the input of c, that is, t.
In a word, c̄(ct) should behave like t.
Definitions
Types  We assume a countably infinite set of propositional variables, for which we use letters X, Y
and so on. We define a type (φ, ψ, . . .) by the BNF: φ ::= 1 | X | φ ⊗ φ | φ ⊸ φ | φ ⊕ φ | φ & φ. A formula is a
type. As the typing rules (Figure 1) reveal, ⊗ is the multiplicative conjunction, ⊸ is the multiplicative
implication, ⊕ is the additive disjunction and & is the additive conjunction.
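For readers who prefer code, the type grammar transcribes directly into an algebraic data type; this is our own sketch and the constructor names are ours, not the paper's:

    -- phi ::= 1 | X | phi (x) phi | phi -o phi | phi (+) phi | phi & phi
    data Ty
      = One               -- the unit 1
      | TVar String       -- a propositional variable X
      | Tensor Ty Ty      -- multiplicative conjunction
      | Lolli  Ty Ty      -- multiplicative implication
      | Plus   Ty Ty      -- additive disjunction
      | With   Ty Ty      -- additive conjunction
      deriving (Eq, Show)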
Terms and Free Variables  We assume countably infinitely many variables x, y, z, . . .. Before defining
terms, following Abramsky's linear lambda calculus LF [1], we define patterns binding sets of variables:
- ⋆ is a pattern binding ∅,
- ⟨x, −⟩ and ⟨−, x⟩ are patterns binding {x},
- x ⊗ y is a pattern binding {x, y}.
All patterns are from Abramsky's LF [1]. Using patterns, we inductively define a term t with free
variables S. We assume countably infinitely many channels c, equipped with an involution satisfying c̄ ≠ c and c̄̄ = c.
- ⋆ is a term with free variables ∅,
- a variable x is a term with free variables {x},
- if t is a term with free variables S, u is a term with free variables S′, and S and S′ are
  disjoint, then t ⊗ u and tu are terms with free variables S ∪ S′,
- if t and u are terms with free variables S, then ⟨t, u⟩ is a term with free variables S,
- if t is a term with free variables S, then inl(t) and inr(t) are terms with free variables S,
- if t is a term with free variables S ∪ {x} and x is not in S, then λx.t is a term with free variables S,
- if t is a term with free variables S, p is a pattern binding S′, u is a term with free variables S′ ∪ S″,
  and S ∩ S″ = S′ ∩ S″ = ∅ hold, then let t be p in u is a term with free variables S ∪ S″,
- if t is a term with free variables S, u is a term with free variables S′ ∪ {x}, v is a term with free
  variables S′ ∪ {y}, x, y ∉ S′ and S ∩ S′ = ∅ hold, then
  match t of inl(x).u / inr(y).v
  is a term with free variables S ∪ S′, and
- if t is a term with free variables S, then ct is also a term with free variables S, for any channel c.
Only the last clause is original; it introduces channels, which are our communication primitives. Note
that a term with free variables S is not a term with free variables S′ when S and S′ are different (even if
S is a subset of S′). In other words, the set of free variables FV(t) is uniquely determined for a term t. We
introduce an abbreviation:
    ign in t ≡ t   (for the empty sequence of arguments),
    ign s0, s⃗ in t ≡ let s0 be ⋆ in (ign s⃗ in t),
which discards a sequence s⃗ of terms of unit type 1.
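As another sketch of ours (constructor names hypothetical), the term grammar and the free-variable function read as follows; the linearity side conditions on disjointness of variable sets are not enforced by this code:

    import Data.Set (Set)
    import qualified Data.Set as Set

    type Var  = String
    type Chan = String

    -- Hypothetical transcription of the term grammar of the Amida calculus.
    data Term
      = Star                          -- the canonical inhabitant of 1
      | V Var
      | Tens Term Term                -- t (x) u
      | App Term Term                 -- t u
      | Pair Term Term                -- <t, u>
      | Inl Term | Inr Term
      | Lam Var Term                  -- \x. t
      | LetPat Pattern Term Term      -- let t be p in u
      | Match Term Var Term Var Term  -- match t of inl(x).u / inr(y).v
      | Ch Chan Term                  -- c t

    data Pattern = PStar | PFst Var | PSnd Var | PTens Var Var

    -- The set of free variables FV(t), uniquely determined for each term.
    fv :: Term -> Set Var
    fv Star              = Set.empty
    fv (V x)             = Set.singleton x
    fv (Tens t u)        = fv t `Set.union` fv u
    fv (App t u)         = fv t `Set.union` fv u
    fv (Pair t u)        = fv t `Set.union` fv u
    fv (Inl t)           = fv t
    fv (Inr t)           = fv t
    fv (Lam x t)         = Set.delete x (fv t)
    fv (LetPat p t u)    = fv t `Set.union` (fv u `Set.difference` bound p)
    fv (Match t x u y v) =
      fv t `Set.union` Set.delete x (fv u) `Set.union` Set.delete y (fv v)
    fv (Ch _ t)          = fv t

    bound :: Pattern -> Set Var
    bound PStar       = Set.empty
    bound (PFst x)    = Set.singleton x
    bound (PSnd x)    = Set.singleton x
    bound (PTens x y) = Set.fromList [x, y]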
[Figure 1 appears here. It lists the typing rules of the Amida calculus: Ax, Cut, Merge, Sync, and the left and right rules for 1, ⊗, ⊸, ⊕ and &. In the Sync rule, the channels c and c̄ are uniquely introduced into two different components of a hypersequent, turning Γ ⊢ t : φ | Δ ⊢ u : ψ into Γ ⊢ ct : ψ | Δ ⊢ c̄u : φ.]
Figure 1: The typing rules of the Amida calculus. O and O′ stand for hypersequents.
For example, the term c̄(cx) discussed in the Introduction can be typed as follows:
    Ax:    x : φ ⊢ x : φ        Ax:    y : ψ ⊢ y : ψ
    Merge: x : φ ⊢ x : φ | y : ψ ⊢ y : ψ
    Sync:  x : φ ⊢ cx : ψ | y : ψ ⊢ c̄y : φ
    Cut:   x : φ ⊢ c̄(cx) : φ
Evaluation  As a programming language, the Amida calculus is equipped with an operational semantics
that evaluates some closed hyper-terms into a sequence of canonical forms. The canonical forms are the
same as those of Abramsky's LF [1]:
    ⋆,   ⟨t, u⟩,   v ⊗ w,   λx.t,   inl(v),   inr(w),
where t and u are terms and v and w are canonical forms. Now we define evaluation as a set of evaluation sequences
(Figure 2). Though most rules are similar to those of Abramsky's LF [1], we add the semantics for
channels. It is noteworthy that the results of evaluation are always canonical forms.
Type Safety
When we can evaluate a derivable hypersequent, the result is also derivable. In particular, this shows that
whenever a communicating term is used, it is used according to the types shown
in the occurrence of the Sync rule that introduces it.
[Figure 2 appears here. It defines the evaluation relation ⇓ by rules similar to those of Abramsky's LF, plus a rule for channels: from E | t ⇓ v | u ⇓ w one derives E | ct ⇓ w | c̄u ⇓ v, provided E, t and u contain neither c nor c̄. The rule merging two evaluation sequences carries the side condition that, for any channel c, it is not the case that one of them contains c and the other contains c̄.]
Figure 2: The definition of the evaluation relation of the Amida calculus. E is a possibly empty evaluation
sequence.
Theorem 3.1 (Type Preservation of the Amida calculus). If terms t0, . . . , tn have a derivable hypersequent
⊢ t0 : φ0 | · · · | ⊢ tn : φn and a derivable evaluation sequence t0 ⇓ v0 | · · · | tn ⇓ vn, then
⊢ v0 : φ0 | · · · | ⊢ vn : φn is also derivable.
Proof. By induction on evaluation, using the propositions below. We analyze cases according to the last rule.
(Merge) By Proposition 3.2, we can use the induction hypothesis.
(let t be ⟨x, −⟩ in u) By Proposition 3.3, we can use the induction hypothesis.
(Other cases) Similar to the above.
Two hypersequents O and O′ are channel-disjoint if and only if it is not the case that O contains c
and O′ contains c̄, for any channel c.
Proposition 3.2 (Split). If a type derivation leading to O | O′ exists for channel-disjoint hypersequents
O and O′, then both O and O′ are derivable separately.
Theorem 3.4 (General Determinacy of the Amida calculus). If t0 ⇓ v0 | t1 ⇓ v1 | · · · | tn ⇓ vn and
t0 ⇓ w0 | t1 ⇓ w1 | · · · | tn ⇓ wn hold, then each vi is identical to wi.
Proof. By induction on the height of the evaluation derivation. Each component in the conclusion has only
one applicable rule. Also, the order of decomposing different components is irrelevant (the crucial
condition is the freshness of c and c̄ in Figure 2).
Convergence would state that whenever a closed term t is typed ⊢ t : φ, an evaluation t ⇓ v is
also derivable for some canonical form v. It is a desirable property, and Abramsky [1] proves it for
LF, but there are counterexamples against convergence of the Amida calculus. Consider the typed term
⊢ c̄(c(inl(⋆))) : 1 ⊕ 1, which has no evaluation. One explanation for the lack of evaluation is deadlock. This
illustrates that the current form of the Amida calculus lacks deadlock-freedom. In order to avoid the deadlock
and to evaluate this closed term, we can add the following eval-subst rule:

    E | t ⇓ v | u[v/x] ⇓ w
    ------------------------  eval-subst
    E | u[t/x] ⇓ w

which reminds us of the call-with-current-continuation primitive [31] and the shift/reset primitives [12, 2]. However, adding the eval-subst rule breaks the current proof of Theorem 3.1 (safety), but
with some modifications, the safety property can possibly be proved. The main difficulty in proving
the safety property can be seen in the form of the eval-subst rule. When we only know the conclusion of
an eval-subst occurrence, there are many possible assumptions involving free variables, all of which we
must consider if we are to prove type safety.
In order to see the usefulness of the communication primitives, we try implementing a process calculus
and a session type system using the Amida calculus.
Session Types as Abbreviations  As an abbreviation, we introduce session types. Session types [33,
17] can specify a communication protocol over a channel. The following definitions and descriptions
are modifications of Wadler's translations and descriptions of session types [35]. The notation here is
different from the original notation by Takeuchi, Honda and Kubo [33].

    !φ.σ ≡ φ ⊸ σ
    ?φ.σ ≡ φ ⊗ σ
    ⊕{li : σi}i∈I ≡ σ0 ⊕ · · · ⊕ σn    (I = {0, . . . , n})
    &{li : σi}i∈I ≡ σ0 & · · · & σn    (I = {0, . . . , n})
    end ≡ 1    (terminator)

where I is a finite downward-closed set of natural numbers, such as {0, 1, 2, 3}. As Wadler [35] notes, the
encoding looks opposite to what some would expect, but, as Wadler [35] explains, we are typing channels
instead of processes.
The grammar σ, σ′ ::= end | X | !φ.σ | ?φ.σ | ⊕{li : σi}i∈I | &{li : σi}i∈I covers all types. A linear
type σ (possibly with a subscript) is generated by this grammar. The dual σ̄ of a session type is defined
so that the dual of ?φ.σ is !φ.σ̄ and the dual of end is end.
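The following sketch (ours, reusing the Ty data type from the earlier sketch) transcribes the session-type abbreviations and a dual operation; only the ? and end clauses of the dual are visible in the text above, so the remaining clauses are our natural guess:

    -- Session types as macros over linear types.
    data Session
      = End
      | SVar String
      | Out Ty Session        -- !phi.sigma
      | In  Ty Session        -- ?phi.sigma
      | Select [Session]      -- (+){ l_i : sigma_i }
      | Offer  [Session]      -- &  { l_i : sigma_i }

    -- The encoding: output becomes -o, input becomes (x).
    encode :: Session -> Ty
    encode End         = One
    encode (SVar x)    = TVar x
    encode (Out phi s) = Lolli  phi (encode s)
    encode (In  phi s) = Tensor phi (encode s)
    encode (Select ss) = foldr1 Plus (map encode ss)   -- assumes a nonempty branch list
    encode (Offer ss)  = foldr1 With (map encode ss)

    -- Duality: flips ! and ?; end and variables are treated as self-dual here.
    dual :: Session -> Session
    dual End          = End
    dual (SVar x)     = SVar x
    dual (Out phi s)  = In  phi (dual s)
    dual (In  phi s)  = Out phi (dual s)
    dual (Select ss)  = Offer  (map dual ss)
    dual (Offer ss)   = Select (map dual ss)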
Processes as Abbreviations  We introduce the following abbreviations:
    x⟨u⟩.t ≡ t[(xu)/x]                (output),
    x(y).t ≡ let x be y ⊗ x in t      (input),
    0 ≡ ⋆                             (do nothing).
We have to be careful about substitution combined with process abbreviations. For example, (x⟨u⟩.t)[s/x]
is not s⟨u⟩.t because the latter is not defined. Following the definition, (x⟨u⟩.t)[s/x] is actually (t[xu/x])[s/x] =
t[su/x]. We are going to introduce the name restriction νx.t after implementing channels.
Below, we are going to justify these abbreviations statically and dynamically.
Process Typing Rules as Abbreviations  The session type abbreviations and the process abbreviations
allow us to use the typing rules in the next proposition.
Proposition 4.1 (Process typing rules: senders and receivers). These rules are admissible:
    recv:  from  O | Γ, y : φ, x : σ ⊢ t : ψ  derive  O | Γ, x : ?φ.σ ⊢ x(y).t : ψ;
    send:  from  O | Γ, x : σ ⊢ t : ψ  and  Δ ⊢ u : φ  derive  O | Γ, Δ, x : !φ.σ ⊢ x⟨u⟩.t : ψ;
    end:   from  O | Γ ⊢ t : ψ  derive  O | Γ, x : end ⊢ ign x in t : ψ;
    and ⊢ 0 : 1 holds with no premises.
Proof. Immediate.
We note that the types of the variable x change in the rules. This reflects the intuition of session types:
the session type of a channel changes after some communication occurs through the channel.
Example 4.2 (Typed communicating terms). Using Proposition 4.1, we can type processes. Figure 3 contains one process, which sends a channel y through x and then waits for input on a channel y′. Here
is another process, which takes an input w′ from a channel x′, where the input w′ is itself expected to be a
channel. After receiving w′, the process puts inl(⋆) in w′.
[A derivation appears here. Built from 1R, ⊕R, end, send and recv steps, it concludes
    x′ : ?(!(1 ⊕ 1).end).end ⊢ x′(w′). ign x′ in w′⟨inl(⋆)⟩. ign w′ in ⋆ : 1.]
[A derivation appears here. Built from Ax, end, recv and send steps, it concludes
    y′ : ?(1 ⊕ 1).end, x : !(!(1 ⊕ 1).end).end, y : !(1 ⊕ 1).end ⊢ x⟨y⟩. y′(z). ign x, y′ in z : 1 ⊕ 1.]
Lemma (Session realizers). For every session type σ, a hypersequent ⊢ t : σ | ⊢ u : σ̄ is derivable for
some terms t and u; combining the two components with ⊗R yields a single term of type σ ⊗ σ̄, which we
call a session realizer of σ.
Proof. Induction on σ.
(end) Merging two axioms ⊢ ⋆ : 1 gives ⊢ ⋆ : 1 | ⊢ ⋆ : 1, which is what we seek.
(!φ.σ) By the induction hypothesis, ⊢ t : σ | ⊢ u : σ̄ is derivable. Merging this with an axiom
x : φ ⊢ x : φ and applying Sync gives x : φ ⊢ cx : σ | ⊢ c̄t : φ | ⊢ u : σ̄; then ⊗R and ⊸R yield
⊢ λx.cx : φ ⊸ σ | ⊢ (c̄t) ⊗ u : φ ⊗ σ̄, that is, realizer components for !φ.σ and ?φ.σ̄.
(The remaining cases proceed similarly.)
[Figure 4, the evaluation derivation proving Lemma 4.7, appears here.]
Figure 4: Proof of Lemma 4.7. The conclusion is identical to our goal up to abbreviations.
Now we can define the name restriction operator as an abbreviation:
    νx : σ.t ≡ let (the session realizer of σ) be xL ⊗ xR in t,
where we assume injections x ↦ xL and x ↦ xR whose images are disjoint.
Then, in addition to Proposition 4.1, more typing rules become available.
Proposition 4.5 (Process typing rule: name restriction). The following typing rule is admissible:
    from  O | Γ, x : σ, y : σ̄ ⊢ t : φ  derive  O | Γ ⊢ νx : σ. t[xL/x][xR/y] : φ.
Example 4.6 (Connecting processes using session realizers). Using the session realizers, we can connect
the processes typed in Example 4.2. Indeed,
    ⊢ ν(x : ?(!(1 ⊕ 1).end).end). ν(y : !(1 ⊕ 1).end).
        (xR⟨yL⟩. yR(z). ign xR, yR in z) ⊗ (xL(w′). ign xL in w′⟨inl(⋆)⟩. ign w′ in ⋆) : (1 ⊕ 1) ⊗ 1
is derivable.
Now we have to check the evaluation of the term in this example. For that we prepare a lemma.
Process Evaluation as Abbreviation.  The intention of defining x⟨u⟩.t0 and y(z).t1 is to mimic communication in process calculi. When we substitute session realizers for x and y, these terms can
actually communicate.
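As a very small dynamic analogue (ours, in Haskell with MVars, not the calculus itself): once both ends of a one-shot channel are plugged into a sender and a receiver, the value actually flows across.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar

    main :: IO ()
    main = do
      cell <- newEmptyMVar                       -- stands in for the session realizer
      _ <- forkIO (putMVar cell (42 :: Int))     -- the sender,   like x<42>.0
      z <- takeMVar cell                         -- the receiver, like y(z). ... z ...
      print z                                    -- prints 42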
The next lemma can help us evaluate session realizers.
Lemma 4.7. Let t0 be a term containing a free variable x and t1 be a term containing free variables y
and z. The rule
[The displayed rule appears here: its premises evaluate the session realizers of σ and σ̄ to v′ and w′ and require t0[v′/x] ⇓ v, u ⇓ u′ and t1[u′/z][w′/y] ⇓ w; its conclusion evaluates (x⟨u⟩.t0) and (y(z).t1), with the two halves of the realizer of !φ.σ substituted for x and y, to v and w respectively.]
is admissible in the presence of the eval-subst rule.
Proof. By the derivation in Figure 4.
Example 4.8 (Evaluation of communicating processes). Here is an example of evaluation using the
eval-subst rule.
[The displayed evaluation derivation appears here; the visible steps evaluate the realizers of end and the value inl(⋆).]
Related Work
Metcalfe, Olivetti and Gabbay [24] gave a hypersequent calculus for Abelian logic and proved the cut-elimination theorem for that hypersequent calculus. Their formulation is different from ours because
their system does not use conjunctive hypersequents. Shirahata [32] studied the multiplicative
fragment of Abelian logic, which he called CMLL (compact multiplicative linear logic). He gave a categorical semantics for the proofs of a sequent calculus presentation of CMLL and then proved that the
cut-elimination procedure of the sequent calculus preserves the semantics.²
Kobayashi, Pierce and Turner [20] developed a type system for π-calculus processes. Similarly to
the type system presented here, their type system can specify the types of communication contents through
a name and how many times a name can be used. In some sense, that type system is more flexible than
the one shown in this paper: their type system allows multiple uses of a channel, replicated processes
and weakening [20, Lemma 3.2]. In other respects, the type system in [20] is less expressive. That
type system does not have lambda abstractions. Also, in contrast to our type system, it is impossible to
substitute a free variable with a process in that type system.

² Ciabattoni, Straßburger and Terui [11] already pointed out the fact that Shirahata [32] and Metcalfe, Olivetti and Gabbay [25] studied the same logic.
Caires and Pfenning [9] provide a type system for a fragment of the π-calculus. Their type system is, on
some processes, more restrictive than the Amida calculus. For example, the following escrowing process P
is not typable in their type system: P = x⟨y⟩. x(a). y⟨a⟩. 0. The process first emits a channel y through
channel x and then takes an input from x and outputs it to y. Following the informal description of types
by Caires and Pfenning [9], the process P should be typable as P :: x : (A ⊸ 1) ⊗ A. However, such a
typing is not possible because (A ⊸ 1) ⊗ A is not a theorem of dual intuitionistic linear logic (DILL),
on which their type system is based. In our type system, the following sequent is derivable:
    x : !(?A.end).!A.end ⊢ ν(y : ?A.end). x⟨yL⟩. x(a). yR⟨a⟩. ign x, yL, yR in 0 : 1.
The resulting sequent indicates that the process is typable with one open channel x that first emits a
channel that one can receive A from, and second sends a value of A. This concludes an example of a
term which our type system can type but the type system of Caires and Pfenning [9] cannot. However,
we cannot judge their type system to be too restrictive, because we have not yet obtained both type safety
and deadlock-freedom of the Amida calculus at the same time.
On the other hand, the most complicated example in Caires and Pfenning [9], which involves a drink
server, directs us towards a useful extension of the Amida calculus.
Example 5.1 (Drink server from Caires and Pfenning [9] in the Amida calculus).
    ServerProto = (N ⊸ I ⊸ (N ⊗ 1)) & (N ⊸ (I ⊗ 1))
                = (!N.!I.?N.end) & (!N.?I.1)
N stands for the type of strings and I stands for the type of integers, but, following Caires and Pfenning [9], we identify both N and I with 1. Below, SP abbreviates ServerProto. Here is the process of the
server, which serves one client and terminates:
    Serv = ⟨s(pn). s(cn). s⟨rc⟩. ign pn, cn, s in 0,
Wadler [35] gave a type system for a process calculus based on classical linear logic. Although the
setting is classical, the idea is more or less the same as that of Caires and Pfenning [9]. Wadler's type system
cannot type the escrowing process above.
Giunti and Vasconcelos [14] give a type system for the π-calculus with a type preservation theorem.
Their type system is extremely similar to ours, although the motivations are different: their
motivation is process calculi, while ours is the computational interpretation of a logic. It will be
worthwhile to compare their system with our type system closely.
The eval-subst rule reminds us of the call-with-current-continuation primitive [31] and the shift/reset primitives [12, 2]. The appearance of these classical type-system primitives is not surprising, because Abelian logic validates ((p ⊸ q) ⊸ q) ⊸ p, which
is a stronger form of double negation elimination. Possibly we could use the technique of Asai and
Kameyama [2] to analyze the Amida calculus with the eval-subst rule.
Logic Programming. There are at least two ways to interpret logics computationally. One is proof reduction, which is represented by λ-calculi. The other is proof searching. We have investigated the Amida
calculus, which embodies the proof-reduction approach to the Amida axiom. Then what implication does
the Amida axiom have in the proof-searching approach? Let us cite an example from Kobayashi and
Yonezawa [21, A.2]:
    Consumption of a message m by a process m ⊸ B is represented by the following deduction:
        (m ⊗ (m ⊸ B) ⊗ C) ⊸ (B ⊗ C)
    where C can be considered as other processes and messages, or an environment.
References
[1] S. Abramsky (1993): Computational Interpretations of Linear Logic. Theo. Comp. Sci. 111(1-2), pp. 3–57, doi:10.1016/0304-3975(93)90181-R.
[2] K. Asai & Y. Kameyama (2007): Polymorphic Delimited Continuations. In: APLAS '07, LNCS 4807, Springer, pp. 239–254, doi:10.1007/978-3-540-76637-7_16.
[3] A. Avron (1991): Hypersequents, logical consequence and intermediate logics for concurrency. Ann. Math. Artif. Intell. 4, pp. 225–248, doi:10.1007/BF01531058.
[4] A. Avron (2000): A Tableau System for Gödel–Dummett Logic Based on a Hypersequent Calculus. In: TABLEAUX 2000, LNCS 1847, Springer, pp. 98–111, doi:10.1007/10722086_11.
[5] M. Baaz, A. Ciabattoni & C. G. Fermüller (2003): Hypersequent Calculi for Gödel Logics – a Survey. Journal of Logic and Computation 13(6), pp. 835–861, doi:10.1093/logcom/13.6.835.
[6] D. Baelde (2012): Least and Greatest Fixed Points in Linear Logic. ACM Trans. Comput. Logic 13(1), pp. 2:1–2:44, doi:10.1145/2071368.2071370.
[7] A. Bejleri & N. Yoshida (2009): Synchronous Multiparty Session Types. Electronic Notes in Theoretical Computer Science 241, pp. 3–33, doi:10.1016/j.entcs.2009.06.002.
[8] R. Blute & P. Scott (2004): Category Theory for Linear Logicians. In: Linear Logic in Computer Science 316, pp. 3–65, doi:10.1017/CBO9780511550850.002.
[9] L. Caires & F. Pfenning (2010): Session Types as Intuitionistic Linear Propositions. In P. Gastin & F. Laroussinie, editors: CONCUR 2010, LNCS 6269, Springer, pp. 222–236, doi:10.1007/978-3-642-15375-4_16.
[10] E. Casari (1989): Comparative Logics and Abelian l-groups. In R. Ferro, C. Bonotto, S. Valentini & A. Zanardo, editors: Logic Colloquium '88, Studies in Logic and the Foundations of Mathematics 127, North-Holland, pp. 161–190, doi:10.1016/S0049-237X(08)70269-6.
[11] A. Ciabattoni, L. Straßburger & K. Terui (2009): Expanding the Realm of Systematic Proof Theory. In: CSL, pp. 163–178, doi:10.1007/978-3-642-04027-6_14.
[12] O. Danvy & A. Filinski (1990): Abstracting control. In: Proceedings of the 1990 ACM conference on LISP and functional programming, LFP '90, ACM, pp. 151–160, doi:10.1145/91556.91622.
[13] N. Galatos, P. Jipsen, T. Kowalski & H. Ono (2007): Residuated Lattices: An Algebraic Glimpse at Substructural Logics, 1st edition. Studies in Logic and the Foundations of Mathematics 151, Elsevier.
[14] M. Giunti & V. T. Vasconcelos (2010): A Linear Account of Session Types in the Pi Calculus. In: CONCUR 2010, LNCS, Springer, pp. 432–446, doi:10.1007/978-3-642-15375-4_30.
[15] Y. Hirai (2012): A Lambda Calculus for Gödel–Dummett Logic Capturing Waitfreedom. In T. Schrijvers & P. Thiemann, editors: Functional and Logic Programming, LNCS 7294, Springer, pp. 151–165, doi:10.1007/978-3-642-29822-6_14.
[16] Y. Hirai (2013): Hyper-Lambda Calculi. Ph.D. thesis, University of Tokyo.
[17] K. Honda (1993): Types for Dyadic Interaction. In: CONCUR '93, LNCS 715, Springer, pp. 509–523, doi:10.1007/3-540-57208-2_35.
[18] K. Honda, N. Yoshida & M. Carbone (2008): Multiparty Asynchronous Session Types. In: POPL '08, ACM, pp. 273–284, doi:10.1145/1328438.1328472.
[19] K. Imai, S. Yuen & K. Agusa (2010): Session Type Inference in Haskell. In: PLACES, pp. 74–91, doi:10.4204/EPTCS.69.6.
[20] N. Kobayashi, B. C. Pierce & D. N. Turner (1999): Linearity and the pi-calculus. ACM Trans. Program. Lang. Syst. 21(5), pp. 914–947, doi:10.1145/237721.237804.
[21] N. Kobayashi & A. Yonezawa (1995): Higher-Order Concurrent Linear Logic Programming. In: Theory and Practice of Parallel Programming, LNCS 907, Springer, pp. 137–166, doi:10.1007/BFb0026568.
[22] F. Lamarche (2008): Proof Nets for Intuitionistic Linear Logic: essential nets. Technical Report, LORIA & INRIA-Lorraine.
[23] S. Marlow et al. (2010): Haskell 2010 Language Report. Available online at http://www.haskell.org/onlinereport/haskell2010.
[24] G. Metcalfe (2006): Proof Theory for Casari's Comparative Logics. Journal of Logic and Computation 16(4), pp. 405–422, doi:10.1093/logcom/exl001.
[25] G. Metcalfe, N. Olivetti & D. Gabbay (2002): Analytic Sequent Calculi for Abelian and Łukasiewicz Logics. In U. Egly & C. Fermüller, editors: TABLEAUX 2002, LNCS 2381, Springer, pp. 191–205, doi:10.1007/3-540-45616-3_14.
[26] R. K. Meyer & J. K. Slaney (1989): Abelian Logic (from A to Z). In G. Priest, R. Routley & J. Norman, editors: Paraconsistent Logic: Essays on the Inconsistent, chapter IX, Philosophia Verlag, pp. 245–288.
[27] R. Milner (1978): A Theory of Type Polymorphism in Programming. Journal of Computer and System Sciences 17(3), pp. 348–375, doi:10.1016/0022-0000(78)90014-4.
[28] R. Milner (1999): Communicating and Mobile Systems: the pi-calculus. Cambridge University Press.
[29] R. Milner, M. Tofte, R. Harper & D. MacQueen (1997): The Definition of Standard ML. MIT Press.
[30] A. S. Murawski & C.-H. L. Ong (2003): Exhausting Strategies, Joker Games and Full Completeness for IMLL with Unit. Theoretical Computer Science 294(1-2), pp. 269–305, doi:10.1016/S0304-3975(01)00244-4.
[31] J. Rees & W. Clinger (1986): Revised³ Report on the Algorithmic Language Scheme. SIGPLAN Not. 21(12), pp. 37–79, doi:10.1145/15042.15043.
[32] M. Shirahata: A Sequent Calculus for Compact Closed Categories.
[33] K. Takeuchi, K. Honda & M. Kubo (1994): An Interaction-Based Language and its Typing System. In: PARLE '94 Parallel Architectures and Languages Europe, pp. 398–413, doi:10.1007/3-540-58184-7_118.
[34] T. Toffoli (1980): Reversible Computing. In J. de Bakker & J. van Leeuwen, editors: Automata, Languages and Programming, LNCS 85, Springer, pp. 632–644, doi:10.1007/3-540-10003-2_104.
[35] P. Wadler (2012): Propositions as Sessions. In: Proceedings of the 17th ACM SIGPLAN International Conference on Functional Programming, ICFP '12, ACM, pp. 273–286, doi:10.1145/2398856.2364568.
Categorical Considerations
One might want to ask whether we can model the logic with a symmetric monoidal closed category [8]
equipped with identified isomorphisms (A ⊸ B) ⊗ (C ⊸ D) ≅ (A ⊸ D) ⊗ (C ⊸ B), subject to naturality conditions. Before considering equality among morphisms, we know there is a non-trivial example.
Example A.1 (Integer model of [13, p. 107]). The preorder with integers as objects and the usual order
among integers as morphisms forms a symmetric monoidal closed category with such swaps when we
interpret ⊗ as addition and m ⊸ n as n − m.
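As a quick sanity check of ours on this integer model: interpreting ⊗ as addition and m ⊸ n as n − m, the Amida axiom is always interpreted by 0, the unit object.

    -- In the integer model, (phi -o psi) (x) (psi -o phi) denotes
    -- (psi - phi) + (phi - psi), which is 0 for every phi and psi.
    amida :: Integer -> Integer -> Integer
    amida phi psi = (psi - phi) + (phi - psi)   -- always 0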
On the other hand, if we take another formulation, requiring natural isomorphisms C(C, A) × C(D, B)
≅ C(D, A) × C(C, B), only singletons can be preorder models, because ⟨id_A, id_B⟩ is mapped to some ⟨f, g⟩ with
f : A → B and g : B → A for any two objects A and B.
A straightforward reading of the evaluation rules gives somewhat complicated equality conditions for
morphisms. The condition says that the following diagram commutes:

    (A ⊸ B) ⊗ (C ⊸ D)  --- d_ABCD --->  (A ⊗ C) ⊸ (B ⊗ D)
           |                                    |
      swap isomorphism                   (A ⊗ C) ⊸ s_{B,D}
           v                                    v
    (A ⊸ D) ⊗ (C ⊸ B)  --- d_ADCB --->  (A ⊗ C) ⊸ (D ⊗ B)

where d_ABCD is induced, by the adjunction between ⊗ and ⊸, from a morphism ((A ⊸ B) ⊗ (C ⊸ D)) ⊗
(A ⊗ C) → B ⊗ D provided by the symmetric monoidal closed structure, and s_{B,D} is the symmetry.
Moreover, since the sequents φ ⊗ (φ ⊸ 1) ⊢ 1 and 1 ⊢ φ ⊗ (φ ⊸ 1) are derivable, we can expect the
semantics of φ ⊸ 1 to be the dual object of that of φ. Indeed, checking one of the coherence conditions of
compact closedness amounts to evaluating the typed term ign ct in ign ⋆ in c̄⋆ : φ below.
[The typing derivation and the evaluation derivation appear here.]
At least, if t ⇓ v is derivable, then ign ct in ign ⋆ in c̄⋆ ⇓ v is also derivable.
Showing the other direction is more involved because of the eval-subst rule, but the author expects that the case
analysis on possible substitutions will succeed.
Proof Nets
Toward a better understanding of the Amida calculus, the technique of proof nets seems promising. Generally, proof nets are straightforward for multiplicative fragments but complicated when additive and
exponential connectives are involved. Since the Amida axiom (φ ⊸ ψ) ⊗ (ψ ⊸ φ) does not contain
additives (&, ⊕) or exponentials (!, ?), we can focus on the multiplicative connectives (⊗ and ⊸). This
fragment is called IMLL (the intuitionistic multiplicative fragment of linear logic). We also use the unit 1 for
technical reasons. We first describe IMLL proof nets and their properties. Then we add a new kind
of edges called Amida edges, which characterize Abelian logic. The Amida edges are named after
the Amida lottery (also known as the Ghost Leg) for the syntactic similarity.
B.1 Essential Nets
The proof nets for intuitionistic linear logics are called essential nets. This subsection reviews some
known results about the essential nets for intuitionistic multiplicative linear logic (IMLL). The exposition
here is strongly influenced by Murawski and Ong [30].
We can translate a polarity p ∈ {+, −} and an IMLL formula φ into a polarized MLL formula ⌊φ⌋^p,
following Lamarche [22] and Murawski and Ong [30]. We omit the definition of polarized MLL formulae
because the whole grammar is exposed in the translation below:

    ⌊1⌋+ = 1+                      ⌊1⌋− = ⊥−
    ⌊X⌋+ = X+                      ⌊X⌋− = X−
    ⌊φ ⊸ ψ⌋+ = ⌊φ⌋− ⅋+ ⌊ψ⌋+        ⌊φ ⊸ ψ⌋− = ⌊φ⌋+ ⊗− ⌊ψ⌋−
    ⌊φ ⊗ ψ⌋+ = ⌊φ⌋+ ⊗+ ⌊ψ⌋+        ⌊φ ⊗ ψ⌋− = ⌊φ⌋− ⅋− ⌊ψ⌋−

For example, the Amida axiom can be translated into a polarized MLL formula:

    ⌊(X ⊸ Y) ⊗ (Y ⊸ X)⌋+ = ⌊X ⊸ Y⌋+ ⊗+ ⌊Y ⊸ X⌋+
                           = (⌊X⌋− ⅋+ ⌊Y⌋+) ⊗+ (⌊Y⌋− ⅋+ ⌊X⌋+)
                           = (X− ⅋+ Y+) ⊗+ (Y− ⅋+ X+).

The symbol ⅋ is pronounced parr.
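A compact transcription of this translation in Haskell (our sketch; constructor names are ours):

    -- IMLL formulas: 1, atoms, tensor and linear implication.
    data IMLL = Unit | Atom String | IMLL :*: IMLL | IMLL :-@: IMLL

    data Pol = Pos | Neg deriving (Eq, Show)

    -- Polarized MLL formulas; every connective and constant carries a polarity.
    data PMLL = PUnit Pol | PAtom String Pol
              | PTensor Pol PMLL PMLL | PParr Pol PMLL PMLL

    -- The translation |phi|^p following Lamarche and Murawski-Ong.
    trans :: Pol -> IMLL -> PMLL
    trans p   Unit        = PUnit p                                -- 1+ and bottom-
    trans p   (Atom x)    = PAtom x p
    trans Pos (a :*:  b)  = PTensor Pos (trans Pos a) (trans Pos b)
    trans Neg (a :*:  b)  = PParr   Neg (trans Neg a) (trans Neg b)
    trans Pos (a :-@: b)  = PParr   Pos (trans Neg a) (trans Pos b)
    trans Neg (a :-@: b)  = PTensor Neg (trans Pos a) (trans Neg b)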
Any polarized MLL formula can be translated further into a finite rooted tree containing these
branches and polarized atomic formulae (X−, X+, 1+, ⊥−) at the leaves.
[Pictures of the branching nodes for ⊗+, ⊗−, ⅋+ and ⅋−, each labeled by its formula, appear here.]
For brevity, we sometimes write only the top connectives of the labeling formulae. In that case, the branching nodes above are denoted like this.
[The same branching nodes, labeled only by ⊗+, ⊗−, ⅋+ and ⅋−, appear here.]
We call arrows with upward (resp. downward) signs up-edges (resp. down-edges). The dashed child
of a ⅋+ node p is the node which the dashed line from p reaches. The branching nodes labeled by
⅋+, ⅋−, ⊗+ and ⊗− are called operator nodes. A path follows solid edges according to the direction of
the edges. Dashed edges are not directed and they are not contained in paths.
When we add axiom edges and ⊥-branches (shown below) to the other operator nodes (shown above),
we obtain an essential net of φ. Due to the arbitrariness of choosing axiom edges and ⊥-branches, there
are possibly multiple essential nets for a formula. Murawski and Ong [30] restrict the class of formulae
to linearly balanced formulae so that the essential net is uniquely determined.
[Pictures of an axiom edge, connecting an X+ leaf with an X− leaf, and of a ⊥-branch appear here.]
Example B.1 (An essential net of the Amida axiom). Here is one of the essential nets of the Amida
axiom (X− ⅋+ Y+) ⊗+ (Y− ⅋+ X+).
[The essential net appears here.]
However, the essential net in Example B.1 is rejected by the following correctness criterion.
Definition B.2 (Correct essential nets). A correct essential net is an essential net satisfying all of these
conditions:
1. any node labeled with X+ (resp. Y−) is connected to a unique node labeled with X− (resp. Y+);
   any leaf labeled with ⊥− is connected to a ⊥-branch; 1+ is not connected to anything above itself;
2. the directed graph formed by up-edges, down-edges, axiom edges and ⊥-branches is acyclic;
3. for every ⅋+-node p, every path from the root that reaches p's dashed child also passes through p.
The essential net in Example B.1 is not correct: it violates condition 3. Actually, the Amida axiom does
not have any correct essential net. The IMLL sequent calculus has the subformula property, so we can
confirm that the Amida axiom is not provable in IMLL.
Theorem B.3 (Essential nets [22, 30]). An IMLL formula φ is provable in IMLL if and only if there
exists a correct essential net of φ.
Proof. The left-to-right direction is relatively easy. For the other direction, Lamarche [22] uses a common
technique of decomposing an essential net from the bottom, while Murawski and Ong [30] chose to reduce the
problem to sequents of special forms called regular.
Actually, Lamarche [22] also considers the cut rule (as well as additive operators and exponentials)
in essential nets, so we can include the following general axioms (as macros) and cuts (as primitives)
and still use Theorem B.3:
[Pictures of a general axiom and of a cut appear here.]
B.2 Amida Nets
Definition B.4 (The Amida nets). For a hypersequent H, the Amida nets of H are inductively defined by
the following three clauses:
- an essential net of H is an Amida net of H;
- for an Amida net of H with two different⁴ up-edges e0 and e1, connecting them by a new edge ea yields
  an Amida net of H; writing e0d, e0u (resp. e1d, e1u) for the parts of e0 (resp. e1) below and above ea,
  the resulting component has the two paths e0d ea e1u and e1d ea e0u;
- for an Amida net of H with an up-edge, connecting two points of that same edge by a new edge ea yields
  an Amida net of H; writing ed, em and eu for the parts of the up-edge below, between and above the two
  attachment points, the resulting component has one finite path ed ea eu and one infinite path em ea em ea · · ·.
In these clauses, we call the edges labeled ea the Amida edges.
Definition B.5 (Correct Amida nets). A correct Amida net is an Amida net satisfying the three conditions
in Definition B.2.
The Amida edge is not merely a crossing of up-edges. See the difference between an Amida edge connecting two up-edges and a mere crossing of two up-edges [both pictured here]: the labels at the bottom differ.
Although Amida edges cross the paths, they do not transfer labels. This difference of labels makes Amida
nets validate the Amida axiom.
⁴ The two edges can be connected by a new edge as long as they are different; their relative positions do not matter.
Example B.6 (A correct Amida net for the Amida axiom). Here is a correct Amida net for the Amida
axiom (X ⊸ Y) ⊗ (Y ⊸ X).
[The Amida net appears here.]
In terms of its set of paths, the above Amida net is equivalent to the following correct essential net for
(X ⊸ X) ⊗ (Y ⊸ Y).
[The essential net appears here.]
B.3 Soundness and Completeness
Theorem B.7 (Completeness of Amida nets). If a hypersequent H is derivable, there is a correct Amida
net for H .
Proof. By induction on hypersequent derivations. The Sync rule is translated into a crossing with an
Amida edge, where the crossing exchanges the formulae and the Amida edge keeps the path connections vertically
straight.
Theorem B.8 (Soundness of Amida nets). If there is a correct Amida net for a hypersequent H , then
H is derivable.
Proof. From a correct Amida net, we first move the Amida edges upwards until they are just below the
axiom edges. The moves are as follows.
[Pictures of the moves appear here.]