NSM Math Preprint 0831

BOOLEAN VECTOR SPACES

Stan Gudder
Department of Mathematics
University of Denver
Denver, Colorado 80208
sgudder@math.du.edu

Abstract
This article discusses the basic properties of finite-dimensional Boolean vector spaces and their linear transformations. We first introduce a natural Boolean-valued inner product and discuss orthonormal bases. We show that all bases are orthonormal and have the same cardinality. It is shown that the set of subspaces forms an atomistic orthomodular poset. We then demonstrate that an operator that is diagonal relative to one basis is diagonal relative to all bases and that all projections are diagonal. It is proved that an operator is diagonal if and only if every basis consists of eigenvectors of the operator. We characterize extremal states and show that a state is extremal if and only if it is pure. Finally, we introduce tensor products and direct sums of Boolean vector spaces.

1 Introduction
Roughly speaking, a Boolean vector space is a vector space in which the
scalars are elements of a Boolean algebra. Although some of our results
can be generalized to the infinite-dimensional case, in this work we only
consider finite-dimensional spaces. There is already a considerable literature
on Boolean vector spaces [6, 11, 14, 15, 16], but as far as we know the
results presented here are new. There have also been investigations into
the related fields of Boolean matrices and matrices over distributive lattices [1, 2, 3, 7, 8, 9, 10, 12]. These works have applications in graph theory, computer science and fuzzy systems.
Following this introduction, Section 2 presents some preliminary results.
We first introduce a natural Boolean-valued inner product and discuss or-
thonormal bases. One of our surprising results is that all bases are orthonor-
mal. Not so surprising is that all bases have the same cardinality. We then
consider linear transformations and operators.
Section 3 discusses subspaces and projections in Boolean vector spaces.
It is shown that the set of subspaces forms an atomistic orthomodular poset.
Since there is a bijection between subspaces and projections, projections
inherit this same structure. Another surprising result is that an operator
that is diagonal relative to one basis is diagonal relative to all bases. Also
surprising is that all projections are diagonal.
In Section 4 we consider states and diagonality. We first discuss the
concepts of eigenvectors and eigenvalues. We then show that an operator is
diagonal if and only if every basis consists of eigenvectors of the operator.
After defining the concept of a state, we show that a state is extremal if and
only if it is pure. We then characterize extremal states.
Finally, Section 5 presents an introduction to tensor products and direct
sums of Boolean vector spaces. We leave a deeper study of these concepts to
a later investigation.
In this paper, B will denote an arbitrary but fixed Boolean algebra with
least and greatest elements 0 and 1, respectively. The order on B is denoted
by ≤ and the meet and join of a, b ∈ B are denoted by ab and a ∨ b, respectively. We write the complement of a ∈ B as a′ and say that a, b ∈ B are disjoint if ab = 0 or, equivalently, a ≤ b′.
We close this section with a motivating example for our work in this field
[4, 5]. A Boolean matrix is an n × m matrix with entries in B. We then
write A = [aij ] with aij ∈ B, i = 1, . . . , n and j = 1, . . . , m. If A is an n × m
Boolean matrix and B is an m×k Boolean matrix, we define the product AB
to be the n × k matrix whose (i, j) entry is given by ⋁_{r=1}^m air brj. In particular, we consider elements of Bᵐ to be m × 1 matrices (column vectors) and for b = (b1, ..., bm) ∈ Bᵐ we have

(Ab)i = ⋁_{r=1}^m air br,    i = 1, ..., n

so that Ab ∈ Bⁿ. We can thus consider A as a transformation A : Bᵐ → Bⁿ. As mentioned earlier, Boolean matrices and their generalization to distributive lattices provide useful tools in various fields such as switching nets, automata theory and finite graph theory.
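The matrix product above is directly computable. The sketch below is our own illustration, not part of the paper: it models B concretely as the Boolean algebra of subsets of a finite universe (meet = intersection, join = union), and the names `U`, `join` and `bool_matmul` are hypothetical.

```python
from functools import reduce

# Concrete model of B: all subsets of a finite universe U,
# with meet = intersection (&), join = union (|), 0 = empty set, 1 = U.
U = frozenset({1, 2, 3})
ZERO, ONE = frozenset(), U

def join(xs):
    """Join (union) of an iterable of elements of B; the empty join is 0."""
    return reduce(lambda a, b: a | b, xs, ZERO)

def bool_matmul(A, B):
    """Boolean matrix product: (AB)_ij = join over r of (A_ir meet B_rj)."""
    n, m, k = len(A), len(B), len(B[0])
    return [[join(A[i][r] & B[r][j] for r in range(m)) for j in range(k)]
            for i in range(n)]

# Multiplying by the identity-like matrix [[1, 0], [0, 1]] leaves A unchanged.
I2 = [[ONE, ZERO], [ZERO, ONE]]
A = [[frozenset({1}), frozenset({2, 3})],
     [ZERO, frozenset({1, 3})]]
print(bool_matmul(A, I2) == A)  # True
```

Any finite Boolean algebra arises this way, so the subset model loses no generality for small experiments.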
Our main motivation for studying Boolean vector spaces and matrices comes from an analogy to Markov chains [4, 13]. Let G be a finite directed graph whose vertices are labelled 1, 2, ..., n. We think of the vertices of G as sites that a physical system can occupy or possible configurations for a computer. The edges of G designate the allowable transitions between sites or configurations. If there is an edge from vertex i to vertex j, we label it by an element aji ∈ B. We think of aji as the event or proposition that the system (computer) evolves from site (configuration) i to site (configuration) j in one time-step. If there is no edge from i to j, then we set aji = 0. The n × n Boolean matrix A = [aij] is the transition matrix in one time-step for the physical system and is determined by the dynamical law for the system. Alternatively, for a computer, A is determined by a program or algorithm and the internal states of the computer. The transition matrix for m time-steps is then naturally given by the matrix product Aᵐ.
Assuming that the system evolves from site i to some specific site j in one time-step, we postulate that aji aki = 0 for j ≠ k and ⋁_{j=1}^n aji = 1 for i = 1, 2, ..., n. Thus, each column of A is what we shall later call a consistent unit vector. Suppose that bi is the event or proposition that the system is at site i initially. We would then have the consistent unit vector b = (b1, ..., bn) ∈ Bⁿ, and Ab ∈ Bⁿ describes the system location at one time-step. It is easy to check that Ab is again a consistent unit vector and we interpret (Ab)i to be the event that the system is at site i at one time-step. In this way, Aᵐ, m = 1, 2, ..., describes the dynamics of the system and this gives an analog of a traditional Markov chain [13].
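The closure claim above (that Ab is again a consistent unit vector) can be spot-checked in the subset model of B. This is an illustration of ours; `is_consistent_unit` and the example data are hypothetical, not taken from the paper.

```python
# B = subsets of U; meet = &, join = |, complement of x = U - x.
U = frozenset({1, 2})
a = frozenset({1})
ac = U - a  # the complement a'

def is_consistent_unit(v):
    """Pairwise-disjoint entries whose join is 1, as postulated above."""
    disjoint = all(v[i] & v[j] == frozenset()
                   for i in range(len(v)) for j in range(len(v)) if i != j)
    return disjoint and frozenset().union(*v) == U

# Each column of this 2x2 transition matrix is a consistent unit vector.
A = [[a, ac],
     [ac, a]]
b = [a, ac]  # initial consistent unit vector

# (Ab)_i = join over r of (A_ir meet b_r)
Ab = [frozenset().union(*(A[i][r] & b[r] for r in range(2))) for i in range(2)]
print(is_consistent_unit(b), is_consistent_unit(Ab))  # True True
```

Here Ab works out to (1, 0): starting in the superposition b, the system is certainly at site 1 after one step.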

2 Preliminary Results
We begin with the definition of a Boolean vector space. Note that our definition is different from that given in [14, 15, 16].
A Boolean vector space is a system (V, B, +, · ) where V is a nonempty
set, B is a Boolean algebra, + is a binary operation on V and · is a map from
B × V to V such that

(1) u + v = v + u for all u, v ∈ V

(2) u + (v + w) = (u + v) + w for all u, v, w ∈ V

(3) a · (b · v) = (ab) · v for all a, b ∈ B and v ∈ V

(4) a · (u + v) = a · u + a · v for all a ∈ B and u, v ∈ V

(5) (a ∨ b) · v = a · v + b · v for all a, b ∈ B and v ∈ V

(6) there exists {v1, ..., vn} ⊆ V such that every v ∈ V has a unique representation v = Σ_{i=1}^n ai · vi

We usually denote a Boolean vector space simply by V and we denote the scalar product a·v by av. We call {v1, ..., vn} in (6) a basis for V. In general, V has many bases. An example of a Boolean vector space is Ln(B) = Bⁿ where

Bⁿ = B × B × · · · × B (n factors)

In this case we define

(a1, ..., an) + (b1, ..., bn) = (a1 ∨ b1, ..., an ∨ bn)

and

a(b1, ..., bn) = (ab1, ..., abn)

The standard basis for Ln(B) is δ1 = (1, 0, ..., 0), δ2 = (0, 1, 0, ..., 0), ..., δn = (0, ..., 0, 1). We shall show that any Boolean vector space is isomorphic to Ln(B) for some n ∈ N.
For the Boolean vector space V we define u ≤ v if there is a w ∈ V such that u + w = v. We define 0, 1 ∈ V by 0 = Σ 0vi and 1 = Σ 1vi. Moreover, if v = Σ ai vi we define v′ = Σ ai′ vi. If an entity associated with V is independent of the basis of V, we say that the entity is an invariant. The next result shows that 0, 1 and v′ are invariants.

Theorem 2.1. Let V be a Boolean vector space with basis {v1, ..., vn}. (i) If u = Σ ai vi and v = Σ bi vi, then u ≤ v if and only if ai ≤ bi, i = 1, ..., n. (ii) The relation ≤ is a partial order relation. (iii) 0 ≤ v ≤ 1 for all v ∈ V. (iv) For v ∈ V, v′ is the smallest element of V satisfying v + v′ = 1.
Proof. (i) Let u ≤ v with u + w = v and let w = Σ ci vi. Then

Σ bi vi = Σ ai vi + Σ ci vi = Σ (ai ∨ ci)vi

Hence, bi = ai ∨ ci ≥ ai, i = 1, ..., n. Conversely, if ai ≤ bi then bi = ai ∨ ci where ci = bi ∧ ai′. Letting w = Σ ci vi we have that u + w = v so u ≤ v. (ii) It is clear that ≤ is reflexive and transitive. To prove antisymmetry, suppose that u ≤ v and v ≤ u. By (i) we have that ai = bi, i = 1, ..., n, so u = v. (iii) Since 0 ≤ bi ≤ 1, by (i) we have that 0 ≤ v ≤ 1. (iv) Since

Σ bi vi + Σ bi′ vi = Σ (bi ∨ bi′)vi = Σ 1vi = 1

we have that v + v′ = 1. If v + u = 1 we have bi ∨ ai = 1, i = 1, ..., n. Hence, ai ≥ bi′, i = 1, ..., n, so by (i) v′ ≤ u.
Let V be a Boolean vector space with basis {v1, ..., vn}. We define the inner product ⟨u, v⟩ = ⋁ ai bi where u = Σ ai vi, v = Σ bi vi. For example, in Ln(B) we always use the inner product

⟨(a1, ..., an), (b1, ..., bn)⟩ = ⋁ ai bi

The norm of v ∈ V is ‖v‖ = ⟨v, v⟩ = ⋁ bi. Notice that ‖av‖ = a‖v‖ and

‖u + v‖ = ‖u‖ ∨ ‖v‖
For Boolean vector spaces V and W a map T : V → W is linear if it satisfies T(av) = aTv and T(u + v) = Tu + Tv for all u, v ∈ V and a ∈ B. A linear map T is isometric if ⟨Tu, Tv⟩ = ⟨u, v⟩ for all u, v ∈ V. If T : V → W is isometric, then clearly ‖Tv‖ = ‖v‖ so T is norm preserving. However, the converse does not hold. For a counterexample, let T : V → V be defined by Tv = ‖v‖1. Then

T(av) = ‖av‖1 = a‖v‖1 = aTv

and

T(u + v) = ‖u + v‖1 = (‖u‖ ∨ ‖v‖)1 = ‖u‖1 + ‖v‖1 = Tu + Tv

Hence, T is linear. Moreover,

‖Tv‖ = ‖ ‖v‖1 ‖ = ‖v‖

so T is norm preserving. However,

⟨Tu, Tv⟩ = ⟨‖u‖1, ‖v‖1⟩ = ‖u‖‖v‖ ≠ ⟨u, v⟩

in general. For example, we may have u, v ≠ 0 with ⟨u, v⟩ = 0. Thus, T is not isometric.
Notice that for the basis {v1, ..., vn} we have that ⟨vi, vj⟩ = δij where δij is the Kronecker delta. If a set {w1, ..., wm} ⊆ V satisfies ⟨wi, wj⟩ = δij we call {w1, ..., wm} an orthonormal set. In this way, {v1, ..., vn} is an orthonormal basis. Moreover,

v = Σ_{i=1}^n ⟨v, vi⟩vi

for all v ∈ V. The proof of the following lemma is straightforward.

Lemma 2.2. The inner product satisfies the following conditions. (i) ⟨u, v⟩ = ⟨v, u⟩. (ii) ⟨u + v, w⟩ = ⟨u, w⟩ ∨ ⟨v, w⟩. (iii) ⟨au, v⟩ = a⟨u, v⟩. (iv) ‖v‖ = ⟨v, v⟩ = 0 if and only if v = 0. (v) ⟨u, v⟩ = ⟨w, v⟩ for all v ∈ V implies that u = w. (vi) ⟨v, v′⟩ = 0. (vii) ⟨u, v⟩ ≤ ‖u‖‖v‖.

Thus, the inner product is symmetric (i), linear ((ii) and (iii)), definite (iv), nondegenerate (v) and complementary (vi). Condition (vii) is called Schwarz's inequality.

Lemma 2.3. Let V be a Boolean vector space with basis {v1, ..., vn}. There exists an isometric linear bijection φ : V → Ln(B). Moreover, (V, ≤, ′) is a Boolean algebra and φ : V → Bⁿ is a Boolean algebra isomorphism.

Proof. Define φ : V → Ln(B) by φ(v) = (a1, ..., an) where v = Σ ai vi. It is clear that φ is an isometric, linear bijection. It follows from Theorem 2.1 (i) that φ and φ⁻¹ preserve order. We conclude that (V, ≤, ′) is a Boolean algebra and that φ : V → Bⁿ is a Boolean isomorphism.

Theorem 2.4. If V is a Boolean vector space, then all bases for V are orthonormal and have the same cardinality. Moreover, the inner product is an invariant.

Proof. Let {u1, ..., um} and {v1, ..., vn} be bases for V and let φ1 : V → Lm(B) and φ2 : V → Ln(B) be the corresponding isometric, linear bijections given by Lemma 2.3. Then φ2 ∘ φ1⁻¹ : Lm(B) → Ln(B) is a linear bijection. It follows from [5] that m = n and that φ2 ∘ φ1⁻¹ is isometric. Let ⟨u, v⟩i, i = 1, 2, be the inner products relative to {u1, ..., un} and {v1, ..., vn}, respectively. We then have

⟨u, v⟩1 = ⟨φ1(u), φ1(v)⟩ = ⟨φ2 ∘ φ1⁻¹[φ1(u)], φ2 ∘ φ1⁻¹[φ1(v)]⟩ = ⟨φ2(u), φ2(v)⟩ = ⟨u, v⟩2

Hence the inner product is an invariant. Denoting this invariant inner product by ⟨u, v⟩, again we have

⟨ui, uj⟩ = ⟨ui, uj⟩1 = δij

Therefore, all bases are orthonormal with respect to this invariant inner product.
Lemma 2.5. Let V and W be Boolean vector spaces over the same Boolean algebra B. (i) If f : V → B is a linear functional, then there exists a unique v ∈ V such that f(u) = ⟨v, u⟩ for all u ∈ V. (ii) If T : V → W is a linear map, there exists a unique linear map T* : W → V such that

⟨Tv, w⟩ = ⟨v, T*w⟩

for all v ∈ V, w ∈ W.

Proof. (i) Uniqueness follows from the nondegeneracy of the inner product. Let {v1, ..., vn} be a basis for V. Since u = Σ ⟨vi, u⟩vi we have

f(u) = f(Σ ⟨vi, u⟩vi) = ⋁ ⟨vi, u⟩f(vi) = ⟨Σ f(vi)vi, u⟩

Letting v = Σ f(vi)vi completes the proof. (ii) Again, uniqueness follows from the nondegeneracy of the inner product. For a fixed w ∈ W, the map v ↦ ⟨Tv, w⟩ is a linear functional on V. By (i) there exists a unique w* ∈ V such that ⟨Tv, w⟩ = ⟨v, w*⟩ for all v ∈ V. Define T* : W → V by T*w = w*. It is easy to check that T* is linear.
We call T* in Lemma 2.5 (ii) the adjoint of T. A linear map T : V → V is called an operator. It is easy to check that an operator T is isometric if and only if T*T = I, the identity map. A map F : V × V → B is bilinear if it is linear in both arguments.
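For a matrix operator on Ln(B), the adjoint is computed by the transpose: ⟨Av, w⟩ = ⋁_{i,j} aij vj wi = ⟨v, Aᵀw⟩. The sketch below (our own illustration in the subset model of B; `apply_op` and the sample data are hypothetical) checks this identity.

```python
from functools import reduce

U = frozenset({1, 2, 3})  # B = subsets of U

def join(xs):
    return reduce(lambda a, b: a | b, xs, frozenset())

def inner(u, v):
    return join(ui & vi for ui, vi in zip(u, v))

def apply_op(A, v):
    """(Av)_i = join over j of A_ij ∧ v_j."""
    return [join(A[i][j] & v[j] for j in range(len(v))) for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[frozenset({1}), frozenset({2, 3})],
     [frozenset({3}), frozenset({1, 2})]]
v = [frozenset({1, 2}), frozenset({3})]
w = [frozenset({2}), frozenset({1, 3})]

# ⟨Av, w⟩ = ⟨v, Aᵀw⟩: the adjoint of A is its transpose.
print(inner(apply_op(A, v), w) == inner(v, apply_op(transpose(A), w)))  # True
```

Both sides expand to the same double join ⋁_{i,j} aij vj wi, which is why the check succeeds for any choice of A, v, w.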
Lemma 2.6. If F : V × V → B is bilinear, there exists a unique operator T on V such that F(u, v) = ⟨Tu, v⟩ for all u, v ∈ V. Moreover, F is symmetric if and only if T = T*.

Proof. As before, uniqueness follows from the nondegeneracy of the inner product. Since u ↦ F(v, u) is linear, by Lemma 2.5 (i) there exists a unique w ∈ V such that F(v, u) = ⟨w, u⟩ for all u ∈ V. Define T : V → V by Tv = w. Now T is linear because

⟨T(av), u⟩ = F(av, u) = aF(v, u) = a⟨Tv, u⟩ = ⟨aTv, u⟩

Hence, T(av) = aTv for every a ∈ B, v ∈ V. Also,

⟨T(v1 + v2), u⟩ = F(v1 + v2, u) = F(v1, u) ∨ F(v2, u) = ⟨Tv1, u⟩ ∨ ⟨Tv2, u⟩ = ⟨Tv1 + Tv2, u⟩

Hence, T(v1 + v2) = Tv1 + Tv2 for all v1, v2 ∈ V. Finally, F is symmetric if and only if

⟨Tu, v⟩ = ⟨Tv, u⟩ = ⟨v, T*u⟩ = ⟨T*u, v⟩

for all u, v ∈ V. This is equivalent to T = T*.
An operator T is definite if ⟨Tv, v⟩ = 0 implies that v = 0. An operator T is complementary if ⟨Tv, v′⟩ = 0 for all v ∈ V.

Lemma 2.7. Let T : V → V be an operator. (i) T is definite if and only if ⟨Tv, v⟩ = ‖v‖ for every v ∈ V. (ii) T is complementary if and only if ⟨Tu, v⟩ = 0 whenever ⟨u, v⟩ = 0.

Proof. (i) It is clear that ⟨Tv, v⟩ = ‖v‖ implies T is definite. Conversely, suppose T is definite and ⟨Tv, v⟩ ≠ ‖v‖ for some v ∈ V. By Schwarz's inequality we have that

⟨Tv, v⟩ ≤ ‖Tv‖‖v‖ ≤ ‖v‖

Since ⟨Tv, v⟩ < ‖v‖, there exists an a ∈ B such that a ≠ 0, ⟨Tv, v⟩ ∨ a = ‖v‖ and a⟨Tv, v⟩ = 0. Since T is definite and ⟨T(av), av⟩ = 0, we conclude that av = 0. But this contradicts the fact that ‖av‖ = a‖v‖ = a ≠ 0. (ii) If ⟨Tu, v⟩ = 0 whenever ⟨u, v⟩ = 0 then T is complementary because ⟨v, v′⟩ = 0. Conversely, suppose T is complementary and ⟨u, v⟩ = 0. Let {v1, ..., vn} be a basis for V with u = Σ ai vi and v = Σ bi vi. Then ⋁ ai bi = ⟨u, v⟩ = 0 so bi ≤ ai′, i = 1, ..., n. Hence, v ≤ u′ and there exists a w ∈ V such that v + w = u′. We then have

⟨Tu, v⟩ ≤ ⟨Tu, v⟩ ∨ ⟨Tu, w⟩ = ⟨Tu, u′⟩ = 0

Thus, ⟨Tu, v⟩ = 0.

Corollary 2.8. An operator T : V → V is definite and complementary if and only if T = I, the identity operator.

Proof. It is clear that I is definite and complementary. Conversely, suppose that T is definite and complementary. Since V is a Boolean algebra, for u, v ∈ V there exist mutually disjoint and hence mutually orthogonal elements u1, v1, w ∈ V such that u = u1 + w, v = v1 + w. Thus

⟨Tu, v⟩ = ⟨Tu1, v1⟩ ∨ ⟨Tw, v1⟩ ∨ ⟨Tu1, w⟩ ∨ ⟨Tw, w⟩ = ⟨Tw, w⟩ = ‖w‖ = ⟨w, w⟩ = ⟨u, v⟩

We conclude that Tu = u for all u ∈ V so that T = I.


The next result shows that a subset of the conditions listed in Lemma 2.2 characterizes the inner product.

Theorem 2.9. A map F : V × V → B coincides with the inner product if and only if F is bilinear, definite and complementary.

Proof. Applying Lemma 2.2, the inner product is bilinear, definite and complementary. Conversely, suppose that F : V × V → B is a definite, complementary bilinear form. By Lemma 2.6, there exists an operator T : V → V such that F(u, v) = ⟨Tu, v⟩ for all u, v ∈ V. It follows that T is definite and complementary. Applying Corollary 2.8, we have T = I. Hence, F(u, v) = ⟨u, v⟩.
We can also characterize the norm on V.

Theorem 2.10. A map f : V → B coincides with the norm if and only if f is linear and satisfies f(vi) = 1, i = 1, ..., n, for some basis {v1, ..., vn}.

Proof. Clearly, the norm is linear and satisfies the given condition. Conversely, suppose f : V → B is linear and satisfies the condition. By Lemma 2.5 (i) there exists a v ∈ V such that f(u) = ⟨v, u⟩ for all u ∈ V. We then have

⟨v, vi⟩ = f(vi) = 1,   i = 1, ..., n

Hence, v = 1 so that f(u) = ⟨1, u⟩ = ‖u‖ for all u ∈ V.
We now give another simple characterization of the norm.

Lemma 2.11. The norm ‖v‖ is the unique a ∈ B such that v = au for some u with ‖u‖ = 1.

Proof. To show that a is unique, we have

a = a‖u‖ = ‖au‖ = ‖v‖

To show that u exists, let {v1, ..., vn} be a basis with v = Σ ai vi. Define u by

u = a1 v1 + · · · + an−1 vn−1 + (an ∨ ‖v‖′)vn

We then have that ‖u‖ = 1 and v = ‖v‖u.

Although the vector u in Lemma 2.11 is not unique, it does satisfy v ≤ u ≤ v + ‖v‖′1, and any u satisfying these inequalities will suffice.
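The construction of u in the proof of Lemma 2.11 can be carried out concretely. The sketch below is our own illustration in the subset model of B (`unit_factor` is a hypothetical name): it builds u from v and checks ‖u‖ = 1 and v = ‖v‖u.

```python
from functools import reduce

U = frozenset({1, 2, 3})  # B = subsets of U; complement of x is U - x

def join(xs):
    return reduce(lambda a, b: a | b, xs, frozenset())

def norm(v):
    return join(v)  # in L_n(B), ‖v‖ is the join of the coordinates

def unit_factor(v):
    """Return u with ‖u‖ = 1 and v = ‖v‖u, per the proof of Lemma 2.11:
    u = a1 v1 + ... + a(n-1) v(n-1) + (an ∨ ‖v‖') vn."""
    nv_comp = U - norm(v)          # the complement ‖v‖'
    return v[:-1] + [v[-1] | nv_comp]

v = [frozenset({1}), frozenset({2})]    # ‖v‖ = {1, 2}
u = unit_factor(v)
print(norm(u) == U)                     # True: u is a unit vector
print([norm(v) & ui for ui in u] == v)  # True: v = ‖v‖u
```

The last coordinate of u absorbs ‖v‖′, which is exactly what makes the joined norm equal to 1 while meeting with ‖v‖ recovers v.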

3 Subspaces and Projections


In the previous section we saw that all bases of a Boolean vector space have the same cardinality. We call this cardinality the dimension of the space. If a Boolean vector space V has dimension n, we can construct Boolean vector spaces of any lower dimension inside V. If {v1, ..., vn} is a basis for V, let m ≤ n and define

W = span {v1, ..., vm} = {Σ_{i=1}^m ai vi : ai ∈ B}

Then W is a Boolean vector space of dimension m with basis {v1, ..., vm}. This is an example of a subspace of V. We shall later give a general definition of a subspace of a Boolean vector space and show that subspaces are essentially of this form.

Theorem 3.1. Let V be a Boolean vector space with dim V = n. An orthonormal set {v1, ..., vm} ⊆ V is a basis for V if and only if m = n.

Proof. We have already shown that a basis has n elements. Conversely, let {v1, ..., vn} be an orthonormal set in V. By Lemma 2.3, there exists an isometric, linear bijection φ : V → Ln(B). It follows that {φ(v1), ..., φ(vn)} is an orthonormal set in Ln(B) and it is shown in [5] that this set must be a basis for Ln(B). We conclude that {v1, ..., vn} is a basis for V.

A vector v ∈ V is consistent if there exists a basis {v1, ..., vn} for V such that ⟨v, vi⟩⟨v, vj⟩ = 0, i ≠ j. A subset of V is consistent if all of its elements are consistent. It is clear that a basis for V is consistent.

Lemma 3.2. Consistency is an invariant.

Proof. Suppose v is consistent and ⟨v, vi⟩⟨v, vj⟩ = 0, i ≠ j, for a basis {v1, ..., vn}. If {w1, ..., wn} is another basis we have for i ≠ j that

⟨v, wi⟩⟨v, wj⟩ = ⟨v, Σ_k ⟨wi, vk⟩vk⟩⟨v, Σ_r ⟨wj, vr⟩vr⟩
  = [⋁_k ⟨wi, vk⟩⟨v, vk⟩][⋁_r ⟨wj, vr⟩⟨v, vr⟩]
  = ⋁_{k,r} ⟨wi, vk⟩⟨wj, vr⟩⟨v, vk⟩⟨v, vr⟩
  = ⋁_k ⟨wi, vk⟩⟨wj, vk⟩⟨v, vk⟩
  ≤ ⋁_k ⟨wi, vk⟩⟨vk, wj⟩
  = ⟨Σ_k ⟨wi, vk⟩vk, wj⟩ = ⟨wi, wj⟩ = 0

Notice that in Lemma 3.2 we have derived the useful Parseval's identity. This states that if {v1, ..., vn} is a basis for V then for u, v ∈ V we have ⋁_i ⟨u, vi⟩⟨vi, v⟩ = ⟨u, v⟩.
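Parseval's identity can be checked numerically in L2(B) with the non-standard orthonormal basis {(a, a′), (a′, a)}. This is an illustrative computation of ours in the subset model of B, not an example from the paper.

```python
from functools import reduce

U = frozenset({1, 2})
a = frozenset({1})

def join(xs):
    return reduce(lambda x, y: x | y, xs, frozenset())

def inner(u, v):
    return join(ui & vi for ui, vi in zip(u, v))

# An orthonormal basis of L_2(B) other than the standard one:
basis = [[a, U - a], [U - a, a]]   # ⟨v1, v2⟩ = aa' ∨ a'a = 0, ⟨vi, vi⟩ = 1

u = [U, frozenset({2})]
v = [frozenset({2}), frozenset({1})]

# Parseval: the join of ⟨u, vi⟩⟨vi, v⟩ over the basis equals ⟨u, v⟩.
lhs = join(inner(u, b) & inner(b, v) for b in basis)
print(lhs == inner(u, v))  # True
```

By Theorem 3.1 any orthonormal set of size dim V is a basis, so this two-element set qualifies.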

Theorem 3.3. If {u1, ..., um} is a consistent orthonormal set in a Boolean vector space V with dim V = n, then m ≤ n and {u1, ..., um} can be extended to a basis for V.

Proof. By Lemma 3.2, each uk is consistent relative to any basis {v1, ..., vn} for V. Let φ : V → Ln(B) be the isometric, linear bijection given by Lemma 2.3. Since φ(vi) = δi, we have for i ≠ j that

⟨φ(uk), δi⟩⟨φ(uk), δj⟩ = ⟨φ(uk), φ(vi)⟩⟨φ(uk), φ(vj)⟩ = ⟨uk, vi⟩⟨uk, vj⟩ = 0

for k = 1, ..., m. It follows that {φ(u1), ..., φ(um)} is a consistent orthonormal set in Ln(B). It follows from [5] that m ≤ n and that {φ(u1), ..., φ(um)} can be extended to a basis for Ln(B). We conclude that {u1, ..., um} can be extended to a basis for V.
A subspace of a Boolean vector space V is a subset of V of the form
M = span {v1 , . . . , vm } where {v1 , . . . , vm } is a consistent orthonormal set in
V . Then M is a Boolean vector space with the same operations as in V and
dim M = m. Moreover, {v1 , . . . , vm } is a basis for M that can be extended
to a basis for V . By convention, we call {0} a subspace of V with basis ∅.

Example 1. This example shows that the intersection of two subspaces need not be a subspace. Let M and N be the following subspaces of L2(B):

M = {b(1, 0) : b ∈ B},   N = {b(a, a′) : b ∈ B}

where a ≠ 0, 1. Now (a, 0) = a(a, a′) so (a, 0) ∈ M ∩ N and hence M ∩ N ≠ {0}. The elements of M ∩ N have the form b(a, a′) where b ≤ a, so

M ∩ N = {(b, 0) : b ≤ a}

Hence, M ∩ N contains no unit vectors so M ∩ N is not a subspace.

For a subset M ⊆ V we define

M⊥ = {v ∈ V : ⟨v, u⟩ = 0 for all u ∈ M}

Example 2. This example shows that M⊥ need not be a subspace. In L2(B) let M = {(a, a)} where a ≠ 0, 1. Then (c, d) ∈ M⊥ if and only if c, d ≤ a′. But then c ∨ d ≤ a′ < 1 so M⊥ contains no unit vectors. Hence, M⊥ is not a subspace.

If M, N are subspaces of V we write M ⊥ N if ⟨u, v⟩ = 0 for every u ∈ M, v ∈ N. Of course, M ⊥ N implies M ∩ N = {0}. It is not known whether the converse holds.
We denote the set of subspaces of V by S(V ) and endow S(V ) with the
set-inclusion partial order ⊆. We denote the greatest lower bound and least
upper bound in (S(V ), ⊆) by M ∧ N and M ∨ N , respectively, when they
exist.

Example 3. This example shows that M ∧ N need not exist. Let M, N be the following subspaces of L3(B):

M = {c(1, 0, 0) + d(0, 1, 0) = (c, d, 0) : c, d ∈ B}
N = {c(a, a′, 0) + d(a′, 0, a) : c, d ∈ B}

where a ≠ 0, 1. Define the subspaces

M1 = {c(1, 0, 0) = (c, 0, 0) : c ∈ B}
N1 = {c(a, a′, 0) : c ∈ B}

Now it is clear that N1 ⊆ M, N and M1 ⊆ M. Moreover, since

a(a, a′, 0) + a′(a′, 0, a) = (1, 0, 0)

we see that M1 ⊆ N. Since dim(M1) = dim(N1) = 1 and dim(M) = dim(N) = 2, there are no elements of S(V) strictly between them. Since M1 and N1 are incomparable, M ∧ N does not exist.
Let M be a subspace of V with basis {v1, ..., vm}. Extend this basis of M to a basis {v1, ..., vn} of V. It is then clear that M⊥ ∈ S(V) and M⊥ = span {vm+1, ..., vn}. Let (S, ≤, 0, ⊥) be a partially ordered set where 0 ≤ a for all a ∈ S and ⊥ : S → S. We say that ⊥ is an orthocomplementation on S if a⊥⊥ = a for all a ∈ S, a ≤ b implies b⊥ ≤ a⊥, and a ∧ a⊥ = 0 for all a ∈ S. We call (S, ≤, 0, ⊥) an orthomodular poset if ⊥ is an orthocomplementation, a ∨ b exists whenever a ≤ b⊥, and a ≤ b implies b = a ∨ (b ∧ a⊥). An element b ∈ S is an atom if b ≠ 0 and a ≤ b implies a = 0 or a = b. We say that (S, ≤, 0, ⊥) is atomistic if for every a ∈ S we have

a = ⋁ {b : b ≤ a, b an atom}
Notice that the atoms in S(V ) are the one-dimensional subspaces in S(V ).
Theorem 3.4. The system (S(V), ⊆, {0}, ⊥) forms an atomistic orthomodular poset.
Proof. It is clear that M ⊆ N implies N⊥ ⊆ M⊥, M = M⊥⊥ and M ∧ M⊥ = {0}. Now suppose M ⊆ N⊥. Let {u1, ..., um} be a basis for M and {v1, ..., vn} be a basis for N. Then

{u1, ..., um, v1, ..., vn}

is a consistent orthonormal set and it follows that

R = span {u1, ..., um, v1, ..., vn} ∈ S(V)

Clearly, M, N ⊆ R. Suppose P ∈ S(V) with M, N ⊆ P. Then {u1, ..., um} ⊆ P and {v1, ..., vn} ⊆ P. Hence, R ⊆ P so R = M ∨ N. Next, suppose that M ⊆ N. Then M ⊆ N⊥⊥ = (N⊥)⊥ so M ∨ N⊥ exists. It follows that N ∧ M⊥ = (N⊥ ∨ M)⊥ exists. Since M ⊆ M ∨ N⊥ = (N ∧ M⊥)⊥, we have that M ∨ (N ∧ M⊥) exists and M ∨ (N ∧ M⊥) ⊆ N. To prove the reverse inclusion, let {u1, ..., um} be a basis for M. Since M ⊆ N and N is a Boolean vector space we can extend this basis to a basis {u1, ..., um, um+1, ..., un} for N. We now show that {um+1, ..., un} is a basis for N ∧ M⊥. Since {um+1, ..., un} ⊆ N ∧ M⊥, span {um+1, ..., un} ⊆ N ∧ M⊥. Conversely, if v ∈ N ∧ M⊥ then

v = Σ_{i=1}^n ⟨v, ui⟩ui = Σ_{i=m+1}^n ⟨v, ui⟩ui ∈ span {um+1, ..., un}

Hence, {um+1, ..., un} is a basis for N ∧ M⊥. Applying our previous work in this proof we conclude that

M ∨ (N ∧ M⊥) = span {u1, ..., un} = N

To show that S(V) is atomistic, let M ∈ S(V). Let {v1, ..., vm} be a basis for M and let v̂1, ..., v̂m be the one-dimensional subspaces generated by v1, ..., vm, respectively. Then v̂1, ..., v̂m are atoms with v̂i ⊆ M, i = 1, ..., m. If N ∈ S(V) with v̂i ⊆ N, i = 1, ..., m, then M ⊆ N. Hence, M = ⋁_{i=1}^m v̂i. Now suppose that R ∈ S(V) and P ⊆ R for every atom P ∈ S(V) with P ⊆ M. Then v̂i ⊆ R, i = 1, ..., m, so that M ⊆ R. Hence,

M = ⋁ {P ∈ S(V) : P ⊆ M, P an atom}
Example 4. If M, N ∈ S(V) and M ∩ N ∈ S(V), then clearly M ∩ N = M ∧ N. The converse does not hold. That is, if M ∧ N exists, then M ∩ N need not be a subspace. In Example 1, M ∩ N is not a subspace but M ∧ N = {0}.
Example 5. This example shows that the distributive law does not hold in S(V) even when ∨ and ∧ exist. Let M, N, P be the following subspaces in L2(B):

M = {b(1, 0) : b ∈ B},   N = {b(0, 1) : b ∈ B},   P = {b(a, a′) : b ∈ B}

where a ≠ 0, 1. Then M ∧ P = N ∧ P = {0} and M ∨ N = L2(B). Hence,

P ∧ (M ∨ N) = P ≠ {0} = (P ∧ M) ∨ (P ∧ N)

For v ∈ V we define the dual vector v* to be the linear functional v* : V → B given by v*(u) = ⟨v, u⟩. For u, v ∈ V we define the outer product uv* to be the operator uv* : V → V given by uv*(w) = ⟨v, w⟩u. Let M be a subspace of V with basis {v1, ..., vm}. We define the projection onto M to be the operator PM : V → V given by PM = Σ_{i=1}^m vi vi*. Thus PM(u) = Σ ⟨vi, u⟩vi for all u ∈ V. Of course, the range of PM is M and it is easy to show that PM* = PM = PM². The next lemma shows that PM is an invariant.
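A projection in this sense is easy to compute in the subset model of B. The sketch below is our own illustration (`project` is a hypothetical name): it projects onto the one-dimensional subspace of L2(B) spanned by the consistent unit vector (a, a′) and checks idempotence.

```python
from functools import reduce

U = frozenset({1, 2})
a = frozenset({1})

def join(xs):
    return reduce(lambda x, y: x | y, xs, frozenset())

def inner(u, v):
    return join(ui & vi for ui, vi in zip(u, v))

def project(basis, u):
    """P_M(u) = sum over i of ⟨vi, u⟩vi, where + in V is the entrywise join."""
    out = [frozenset()] * len(u)
    for b in basis:
        c = inner(b, u)                      # the scalar ⟨vi, u⟩
        out = [oi | (c & bi) for oi, bi in zip(out, b)]
    return out

M = [[a, U - a]]                 # basis of a one-dimensional subspace
u = [U, frozenset({2})]
Pu = project(M, u)
print(project(M, Pu) == Pu)      # True: P_M is idempotent
```

Checking P_M² = P_M on sample vectors like this is a quick sanity test of the outer-product formula.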

Lemma 3.5. The projection PM is independent of the basis of M.

Proof. Let {v1, ..., vm} and {w1, ..., wm} be bases for M. Then by Parseval's identity we have

PM(u) = Σ_i ⟨vi, u⟩vi = Σ_i ⟨Σ_j ⟨vi, wj⟩wj, u⟩ Σ_k ⟨vi, wk⟩wk
  = Σ_i [⋁_j ⟨vi, wj⟩⟨wj, u⟩] Σ_k ⟨vi, wk⟩wk
  = Σ_k [⋁_j ⋁_i ⟨wj, vi⟩⟨vi, wk⟩⟨wj, u⟩] wk
  = Σ_k [⋁_j ⟨wj, wk⟩⟨wj, u⟩] wk = Σ_k ⟨wk, u⟩wk

An operator T : V → V is diagonal if ⟨Tvi, vj⟩ = 0, i ≠ j, i, j = 1, ..., n, for some basis {v1, ..., vn} for V. We now have the following surprising result.

Lemma 3.6. Diagonality is an invariant.

Proof. Let {v1, ..., vn} and {w1, ..., wn} be bases for V and suppose that ⟨Tvi, vj⟩ = 0 for i ≠ j. By Parseval's identity we have for i ≠ j that

⟨Twi, wj⟩ = ⟨T Σ_k ⟨wi, vk⟩vk, Σ_r ⟨wj, vr⟩vr⟩
  = ⋁_{k,r} ⟨wi, vk⟩⟨wj, vr⟩⟨Tvk, vr⟩
  = ⋁_k ⟨wi, vk⟩⟨vk, wj⟩⟨Tvk, vk⟩
  ≤ ⋁_k ⟨wi, vk⟩⟨vk, wj⟩ = ⟨wi, wj⟩ = 0

Corollary 3.7. Every projection is diagonal.

Proof. Let {v1, ..., vm} be a basis for the subspace M and extend it to a basis {v1, ..., vn} for V. Then for i ≠ j we have

⟨PMvi, vj⟩ = ⟨Σ_k ⟨vk, vi⟩vk, vj⟩ = ⋁_k ⟨vi, vk⟩⟨vk, vj⟩ = ⋁_k δik δkj = 0

Example 6. We have seen that a projection P satisfies P = P* = P². However, these conditions are not sufficient for P to be a projection. For instance, on L2(B) let P be the matrix

P = [ 1  a ]
    [ a  1 ]

where a ≠ 0. Then P = P* = P² but P is not a projection because P is not diagonal.

Lemma 3.8. (i) If T is diagonal, then T = T*. (ii) If S and T are diagonal operators on V then ST = TS.

Proof. (i) Suppose T : V → V is diagonal and {v1, ..., vn} is a basis for V. Then for i ≠ j we have

⟨T*vi, vj⟩ = ⟨vi, Tvj⟩ = 0

Hence, for i ≠ j we have ⟨T*vi, vj⟩ = ⟨Tvi, vj⟩. Moreover,

⟨T*vi, vi⟩ = ⟨vi, Tvi⟩ = ⟨Tvi, vi⟩

We conclude that T*vi = Tvi for i = 1, ..., n, so that T* = T. (ii) For a basis {v1, ..., vn} of V we have, using S = S* from (i) and Parseval's identity,

⟨STvi, vj⟩ = ⟨Tvi, Svj⟩ = ⋁_k ⟨Tvi, vk⟩⟨vk, Svj⟩ = ⟨Svj, vj⟩⟨Tvi, vi⟩δij

By symmetry we have

⟨TSvi, vj⟩ = ⟨Tvj, vj⟩⟨Svi, vi⟩δij

Hence, ⟨STvi, vj⟩ = ⟨TSvi, vj⟩ for all i, j, so that ST = TS.


We denote the set of projections on V by P(V). Since there is a one-to-one correspondence between subspaces and projections, we can transfer the order and ⊥ on S(V) to P(V). We thus define PM ≤ PN if M ⊆ N and define (PM)⊥ = PM⊥. In this way P(V) becomes an atomistic orthomodular poset.
Lemma 3.9. On P(V) we have that P ≤ Q if and only if PQ = P. Moreover, P⊥ is the unique projection satisfying PP⊥ = 0 and P + P⊥ = I.

Proof. Let P = PM and Q = PN for M, N ∈ S(V). If PM ≤ PN then M ⊆ N. If v ∈ M then PMPNv = PMv. If v ∈ M⊥ then, since projections are diagonal and hence commute (Corollary 3.7 and Lemma 3.8 (ii)),

PMPNv = PNPMv = 0 = PMv

Hence, PMPN = PM. Conversely, suppose that PMPN = PM. If {v1, ..., vm} is a basis for M then

PNvi = PNPMvi = PMvi = vi

Hence, vi ∈ N, i = 1, ..., m, and we conclude that M ⊆ N. Thus, PM ≤ PN.
For the second statement, again let P = PM. It is clear that PM PM⊥ = 0 and PM + PM⊥ = I. For uniqueness, suppose PN satisfies PMPN = 0 and PM + PN = I. If v ∈ N then

PMv = PMPNv = 0

which implies that v ∈ M⊥. Hence, N ⊆ M⊥. If v ∈ M⊥ then

v = PMv + PNv = PNv ∈ N

Hence, M⊥ ⊆ N so that N = M⊥. Therefore, PN = PM⊥ = (PM)⊥.
Corollary 3.10. For P, Q ∈ P(V), if PQ ∈ P(V) then P ∧ Q exists and PQ = P ∧ Q.

Proof. Since P(PQ) = PQ and Q(PQ) = Q(QP) = PQ we have PQ ≤ P, Q. If R ∈ P(V) and R ≤ P, Q then R(PQ) = RQ = R. Hence, R ≤ PQ so that PQ = P ∧ Q.

It is not known whether the converse holds. That is, if P ∧ Q exists, it is unknown whether PQ ∈ P(V).

4 States and Diagonality


An eigenvector for an operator T is a unit vector v such that Tv = av for some a ∈ B. We then call a an eigenvalue corresponding to v and we call (a, v) an eigenpair for T. In general, av = bv for v ≠ 0 does not imply a = b. However, if ‖v‖ = 1, then av = bv implies

a = a‖v‖ = ‖av‖ = ‖bv‖ = b‖v‖ = b

Hence, if (a, v) and (b, v) are eigenpairs then a = b. Thus, the eigenvalue corresponding to an eigenvector is unique.
Theorem 4.1. The following statements are equivalent. (i) T is diagonal
in V . (ii) Any basis for V consists of eigenvectors of T . (iii) Any consistent
unit vector is an eigenvector of T . (iv) There is a basis for V consisting of
eigenvectors of T .
Proof. (i)⇒(ii) Let T be diagonal and suppose {v1 , . . . , vn } is a basis for V .
Letting ai = hT vi , vi i we have for i 6= j that

hT vi , vj i = 0 = hai vi , vj i

Moreover, hT vi , vi i = hai vi , vi i. Hence, T vi = ai vi so {v1 , . . . , vn } consists of eigenvectors of T . (ii)⇒(iii) Since any consistent unit vector can be extended to a basis for V , (iii) follows from Statement (ii). (iii)⇒(iv) Since the elements of any basis are consistent, (iv) follows from Statement (iii).
(iv)⇒(i) Let {v1 , . . . , vn } be a basis of eigenvectors of T and suppose T vi =
ai vi , i = 1, . . . , n. For i ≠ j, we have

hT vi , vj i = hai vi , vj i = ai hvi , vj i = 0

Hence, T is diagonal.

Example 7. Eigenvectors corresponding to distinct eigenvalues of a diagonal operator need not be orthogonal. In L2 (B), let v1 = (a, a′ ), v2 = (a′ , a) where a ≠ 0, 1. Let b1 ≠ b2 ∈ B and let c1 = b1 a + b2 a′ , c2 = b1 a′ + b2 a. The
operator T given by the matrix
T =
[ c1  0  ]
[ 0   c2 ]

has many eigenpairs including (c1 , δ1 ), (c2 , δ2 ), (b1 , v1 ), (b2 , v2 ). In general b1 ≠ c1 but hv1 , δ1 i = a ≠ 0.
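Example 7 can be exercised concretely. The sketch below (an illustration, not part of the paper) models B as the Boolean algebra of 2-bit masks, with join as OR, meet as AND and complement relative to FULL; the helper names comp, inner, mat_vec and scal are ours.

```python
# Concrete check of Example 7 over the Boolean algebra B of 2-bit masks.
FULL = 0b11

def comp(x):
    """Boolean complement x' in B."""
    return FULL & ~x

def inner(u, v):
    """Boolean inner product <u, v>: join of the meets u_i v_i."""
    r = 0
    for ui, vi in zip(u, v):
        r |= ui & vi
    return r

def mat_vec(T, v):
    """Apply a Boolean matrix: (Tv)_i = join over j of T_ij v_j."""
    return tuple(inner(row, v) for row in T)

def scal(a, v):
    """Scalar action av = (a v_1, ..., a v_n)."""
    return tuple(a & vi for vi in v)

a = 0b01                                  # a != 0, 1
v1, v2 = (a, comp(a)), (comp(a), a)
b1, b2 = 0b11, 0b10                       # b1 != b2
c1 = (b1 & a) | (b2 & comp(a))
c2 = (b1 & comp(a)) | (b2 & a)
T = ((c1, 0), (0, c2))                    # the diagonal operator of Example 7

assert mat_vec(T, v1) == scal(b1, v1)     # (b1, v1) is an eigenpair
assert mat_vec(T, v2) == scal(b2, v2)     # (b2, v2) is an eigenpair
delta1 = (FULL, 0)
print(inner(v1, delta1) != 0)             # v1 not orthogonal to delta1 → True
```

Distinct eigenvalues b1, b2 thus coexist with non-orthogonal eigenvectors, exactly as the example claims.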

Lemma 4.2. If T is a diagonal operator on V with eigenpair (a, v), then there exists a consistent unit vector u such that u ≤ v and (a, u) is an eigenpair.
Proof. Let {v1 , . . . , vn } be a basis for V and suppose v = Σ bi vi . Define ci ∈ B, i = 1, . . . , n, by c1 = b1 , c2 = b2 b1′ , . . . , cn = bn b1′ · · · bn−1′ . It is easy to check that u = Σ ci vi is a consistent unit vector and clearly u ≤ v. Since T v = av we have that

Σ abi vi = av = Σ bi T vi = Σ_i Σ_j bi hT vi , vj ivj = Σ_i bi hT vi , vi ivi

Hence, abi = hT vi , vi ibi , i = 1, . . . , n. Therefore,

hT vi , vi ici = hT vi , vi ibi b1′ · · · bi−1′ = abi b1′ · · · bi−1′ = aci

We conclude that

T u = Σ ci T vi = Σ ci hT vi , vi ivi = Σ aci vi = au

Hence, (a, u) is an eigenpair.
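The coefficient construction in this proof is a standard disjointization. A small sketch, assuming B is the Boolean algebra of 4-bit masks (the helper name disjointize is ours):

```python
# Disjointize coefficients as in Lemma 4.2: c1 = b1, c2 = b2 b1', ...,
# cn = bn b1' ... b'_{n-1}.  The ci are pairwise disjoint with the same join.
FULL = 0b1111

def disjointize(bs):
    cs, seen = [], 0
    for b in bs:
        cs.append(b & FULL & ~seen)   # b_i with all earlier b's removed
        seen |= b
    return cs

bs = [0b0110, 0b1100, 0b1011]
cs = disjointize(bs)
for i in range(len(cs)):
    for j in range(i + 1, len(cs)):
        assert cs[i] & cs[j] == 0     # pairwise disjoint: ci cj = 0
join_b = join_c = 0
for b, c in zip(bs, cs):
    join_b |= b
    join_c |= c
assert join_b == join_c               # same join, so the norm is unchanged
print([bin(c) for c in cs])
```

Since each ci ≤ bi and the joins agree, the resulting u = Σ ci vi is still a unit vector and satisfies u ≤ v.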

19
Theorem 4.3. If T is a diagonal operator on V , then a is an eigenvalue of
T if and only if a = hT v, vi for some consistent unit vector v.

Proof. If a is an eigenvalue for T then T u = au for some unit vector u. By


Lemma 4.2, there is a consistent unit vector v such that T v = av. Hence,

hT v, vi = hav, vi = ahv, vi = a

Conversely, suppose a = hT v1 , v1 i for some consistent unit vector v1 . We can


extend v1 to a basis {v1 , v2 , . . . , vn } for V . Then
T v1 = Σ hT v1 , vi ivi = hT v1 , v1 iv1 = av1

Hence, a is an eigenvalue of T .

Example 8. Let T be the operator on L2 (B) given by the matrix


T =
[ 1  0 ]
[ 0  0 ]

Then for any a ∈ B we have T (a, a′ ) = (a, 0) = a(a, a′ ). Hence, every a ∈ B is an eigenvalue of T .
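A quick check of Example 8, again modeling B as the 2-bit-mask algebra (an illustrative assumption):

```python
# Example 8: T = diag(1, 0) on L2(B) satisfies T(a, a') = (a, 0) = a(a, a'),
# so every element a of B is an eigenvalue of T.
FULL = 0b11

def inner(u, v):
    r = 0
    for ui, vi in zip(u, v):
        r |= ui & vi
    return r

def mat_vec(T, v):
    return tuple(inner(row, v) for row in T)

T = ((FULL, 0), (0, 0))                  # the matrix [1 0; 0 0]
for a in range(FULL + 1):                # every element a of B
    v = (a, FULL & ~a)                   # the unit vector (a, a')
    assert mat_vec(T, v) == (a, 0)       # T(a, a') = (a, 0)
    assert tuple(a & vi for vi in v) == (a, 0)   # a(a, a') = (a, 0)
print("every a in B is an eigenvalue of T")
```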

We denote the set of operators on the Boolean vector space V by O(V ). A state on O(V ) is a map s : O(V ) → [0, 1] ⊆ R that satisfies

(1) s(I) = 1 (unital)

(2) if ST ∗ = 0 or S ∗ T = 0, then s(S + T ) = s(S) + s(T ) (additive)

(3) if u and v are orthogonal, consistent unit vectors, then s(uv ∗ ) = 0 (diagonal)

(4) s [T (uv ∗ )] ≤ s(uv ∗ ) for all T ∈ O(V ), u, v ∈ V . (outer bounded)

We denote the set of states on O(V ) by Ô(V ). Notice that Ô(V ) is convex. That is, if λi ∈ R with λi ≥ 0, Σ λi = 1 and si ∈ Ô(V ), i = 1, . . . , n, then Σ λi si ∈ Ô(V ).

Example 9. Let µ be a finitely additive probability measure on B and let
w1 be a consistent unit vector in V . Then s(T ) = µ (hT w1 , w1 i) is a state
on O(V ). Indeed, s clearly satisfies Conditions (1) and (2). To verify (3),
let u and v be orthogonal, consistent unit vectors. Extending w1 to a basis
{w1 , . . . , wn } for V we have
huv ∗ (w1 ), w1 i = hhv, w1 iu, w1 i = hv, w1 ihw1 , ui ≤ ⋁_i hv, wi ihwi , ui = hv, ui = 0

Hence, s(uv ∗ ) = 0. To verify (4), the Schwarz inequality gives

hT (uv ∗ )w1 , w1 i = huv ∗ (w1 ), T ∗ w1 i ≤ ‖uv ∗ (w1 )‖ = huv ∗ (w1 ), vu∗ (w1 )i
= hhv, w1 iu, hu, w1 ivi = hv, w1 ihu, w1 ihu, vi
≤ hv, w1 ihu, w1 i = hhv, w1 iu, w1 i = huv ∗ (w1 ), w1 i

and applying the monotone measure µ yields s [T (uv ∗ )] ≤ s(uv ∗ ).
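The state of Example 9 can be exercised numerically. Below, B is the 2-bit-mask algebra carrying a measure µ with atom weights 0.25 and 0.75 (our choice of dyadic weights, so the arithmetic is exact) and w1 = (a, a′) is a consistent unit vector; the resulting s(T) = µ(hT w1, w1i) is unital and diagonal on this sample.

```python
# A state s(T) = mu(<T w1, w1>) built from a finitely additive probability
# measure mu on B (2-bit masks) and a consistent unit vector w1.
FULL = 0b11
WEIGHTS = (0.25, 0.75)                   # mu of the two atoms (dyadic, exact)

def mu(mask):
    return sum(w for k, w in enumerate(WEIGHTS) if (mask >> k) & 1)

def inner(u, v):
    r = 0
    for ui, vi in zip(u, v):
        r |= ui & vi
    return r

def mat_vec(T, v):
    return tuple(inner(row, v) for row in T)

W1 = (0b01, 0b10)                        # the unit vector (a, a')

def s(T):
    return mu(inner(mat_vec(T, W1), W1))

I = ((FULL, 0), (0, FULL))
assert s(I) == 1.0                       # Condition (1): unital

u, v = (0b01, 0b10), (0b10, 0b01)        # orthogonal consistent unit vectors
uv = tuple(tuple(ui & vj for vj in v) for ui in u)   # the operator u v*
assert s(uv) == 0.0                      # Condition (3): diagonal
print("s is unital and diagonal on this example")
```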

A state s is extremal if s = λs1 + (1 − λ)s2 with λ ∈ (0, 1) and s1 , s2 ∈ Ô(V ) implies s1 = s2 . A state is pure if there is a one-dimensional projection vv ∗ such that s(vv ∗ ) = 1. Notice by Condition (2) that s(T ) = s(0 + T ) = s(0) + s(T ) so s(0) = 0 for any state s.
Lemma 4.4. Extremal states are pure.
Proof. Suppose s : O(V ) → [0, 1] is a state that is not pure. Since I is a projection with s(I) = 1, there exists P ∈ P(V ) such that dim(P ) > 1, s(P ) = 1 and s(Q) ≠ 1 for any Q ∈ P(V ) with Q < P . Let {v1 , . . . , vn } be a basis for V where {v1 , . . . , vm } is a basis for P and let Pi = vi vi∗ . We then have that P = Σ_{i=1}^m Pi . Since Pi Pj = 0 for i ≠ j, we have that 1 = s(P ) = Σ_{i=1}^m s(Pi ) and s(Pi ), s(Pj ) ≠ 0 for at least two indices i, j. We can assume without loss of generality that s(Pi ) ≠ 0 for i = 1, . . . , r where r ≥ 2.
Now si : O(V ) → [0, 1] defined by si (T ) = s(T Pi )/s(Pi ), i = 1, . . . , r, is a state. Indeed, Condition (1) clearly holds. To verify Condition (2), suppose ST ∗ = 0. Then (SPi )(T Pi )∗ = SPi T ∗ and for any u, v ∈ V we have

hSPi T ∗ u, vi = hhT ∗ u, vi iSvi , vi = hT ∗ u, vi ihSvi , vi = hT ∗ u, vi ihvi , S ∗ vi
≤ ⋁_i hT ∗ u, vi ihvi , S ∗ vi = hT ∗ u, S ∗ vi = hST ∗ u, vi = 0

Hence,

si (S + T ) = s(SPi + T Pi )/s(Pi ) = s(SPi )/s(Pi ) + s(T Pi )/s(Pi ) = si (S) + si (T )

To verify Condition (3), let u and v be orthogonal, consistent unit vectors. Since

(uv ∗ Pi )(uv ∗ Pj )∗ = uv ∗ Pi Pj vu∗ = 0

for i ≠ j we have

Σ_{i=1}^n s(uv ∗ Pi ) = s (Σ_{i=1}^n uv ∗ Pi ) = s(uv ∗ ) = 0

Hence,

si (uv ∗ ) = s(uv ∗ Pi )/s(Pi ) = 0
To verify Condition (4), we have

uv ∗ Pi = (uv ∗ )(vi vi∗ ) = hv, vi iuvi∗

Letting u1 = hv, vi iu we have that uv ∗ Pi = u1 vi∗ . Hence,

si [T (uv ∗ )] = s [T (uv ∗ )Pi ] /s(Pi ) = s [T (u1 vi∗ )] /s(Pi )
≤ s(u1 vi∗ )/s(Pi ) = s(uv ∗ Pi )/s(Pi ) = si (uv ∗ )

Finally, Condition (4) gives s(T Pi ) ≤ s(Pi ) so si (T ) ≤ 1. We conclude that si is a state, i = 1, . . . , r. Since

(T Pi )(T Pj )∗ = T Pi Pj T ∗ = 0

for i ≠ j, we have

s(T ) = s (Σ_{i=1}^n T Pi ) = Σ_{i=1}^n s(T Pi )

By Condition (4) we have s(T Pi ) ≤ s(Pi ) = 0, i = r + 1, . . . , n. Hence,

s(T ) = Σ_{i=1}^r s(T Pi ) = Σ_{i=1}^r s(Pi )si (T )

We conclude that

s = Σ_{i=1}^r s(Pi )si

where Σ_{i=1}^r s(Pi ) = 1. Since si (Pi ) = 1 and si (Pj ) = 0 for i ≠ j, we have that the si are different, i = 1, . . . , r. Hence, s is not extremal.
Theorem 4.5. If s : O(V ) → [0, 1] is an extremal state, there exists a unique
finitely additive probability measure µ on B and a consistent unit vector v1 ∈
V such that s(T ) = µ (hT v1 , v1 i).
Proof. Define µ : B → [0, 1] by µ(a) = s(aI). Then µ(1) = s(I) = 1 and
ab = 0 implies
(aI)(bI)∗ = abI = 0
so that

µ(a ∨ b) = s ((a ∨ b)I) = s(aI + bI) = s(aI) + s(bI) = µ(a) + µ(b)

Hence, µ is a finitely additive probability measure on B. By Lemma 4.4, s is a pure state so s(v1 v1∗ ) = 1 for some consistent unit vector v1 . Extend v1 to a basis {v1 , . . . , vn } for V . Since (vi vi∗ )(v1 v1∗ ) = 0 for i ≠ 1 we have

1 = s(vi vi∗ + v1 v1∗ ) = s(vi vi∗ ) + s(v1 v1∗ ) = s(vi vi∗ ) + 1

Hence, s(vi vi∗ ) = 0 for i ≠ 1. Moreover, since s is diagonal we have that s(vi vj∗ ) = 0 for all i ≠ j. By Condition (4) we have that s(avi vi∗ ) = s(aIvi vi∗ ) ≤ s(vi vi∗ ) = 0 for i ≠ 1 and similarly s(avi vj∗ ) = 0 for i ≠ j and all a ∈ B. We conclude that for any a ∈ B we have

s(avi vj∗ ) = µ (havi vj∗ v1 , v1 i)

whenever i and j are not both 1. Moreover, for any a ∈ B we have
s(av1 v1∗ ) = s (a Σ_{i=1}^n vi vi∗ ) = s(aI) = µ(a) = µ (hav1 v1∗ v1 , v1 i)

For T ∈ O(V ) it is well known that we can write T = Σ tij vi vj∗ , tij ∈ B. By additivity we have

s(T ) = s (Σ tij vi vj∗ ) = s(t11 v1 v1∗ ) = µ (ht11 v1 v1∗ v1 , v1 i) = µ (hT v1 , v1 i)

For uniqueness we have for all a ∈ B that

µ(a) = µ (haIv1 , v1 i) = s(aI)
Corollary 4.6. A state is pure if and only if it is extremal.
Proof. By Lemma 4.4 extremal states are pure. Conversely, suppose
s : O(V ) → [0, 1] is pure. Then there exists a consistent unit vector v ∈ V
such that s(vv ∗ ) = 1. To show that s is extremal, assume that s = λs1 +
(1 − λ)s2 where 0 < λ < 1 and s1 , s2 are states. Then

λs1 (vv ∗ ) + (1 − λ)s2 (vv ∗ ) = s(vv ∗ ) = 1

Hence, s1 (vv ∗ ) = s2 (vv ∗ ) = 1. By Theorem 4.5, there exists a probability measure µ on B such that

s1 (T ) = s2 (T ) = µ (hT v, vi) = s(T )

for every T ∈ O(V ). Hence, s1 = s2 = s so s is extremal.


The next result shows that every state is a finite convex combination of
extremal (pure) states.
Corollary 4.7. If s is a state on O(V ) with dim(V ) = n, then there exist a consistent orthonormal set {v1 , . . . , vm }, m ≤ n, in V , λi ∈ R with λi > 0, Σ_{i=1}^m λi = 1, and finitely additive probability measures µi on B, i = 1, . . . , m, such that

s(T ) = Σ_{i=1}^m λi µi (hT vi , vi i)

for all T ∈ O(V ).


Proof. Let {v1 , . . . , vn } be a basis for V . Without loss of generality, we can
assume that s(vi vi∗ ) ≠ 0 for i = 1, . . . , m and s(vj vj∗ ) = 0 for j = m + 1, . . . , n.
As in the proof of Lemma 4.4, the maps si : O(V ) → [0, 1] given by si (T ) =

s [T (vi vi∗ )] /s(vi vi∗ ), i = 1, . . . , m, are pure states on O(V ). Moreover, as in the proof of Lemma 4.4, we have

s(T ) = Σ_{i=1}^m s [T (vi vi∗ )] = Σ_{i=1}^m s(vi vi∗ )si (T )

where s(vi vi∗ ) > 0 and Σ s(vi vi∗ ) = 1. By Theorem 4.5, there exist finitely additive probability measures µi on B such that si (T ) = µi (hT vi , vi i), i = 1, . . . , m. Letting λi = s(vi vi∗ ), i = 1, . . . , m, completes the proof.
Denoting the set of diagonal operators on V by D(V ), the theory of states on D(V ) is much simpler. We define a state on D(V ) to be a functional s : D(V ) → [0, ∞) such that s(I) = 1 and s(S + T ) = s(S) + s(T ) whenever ST = 0. For any D ∈ D(V ) there exists a unique D′ ∈ D(V ) such that DD′ = 0 and D + D′ = I. Hence,

1 = s(I) = s(D + D′ ) = s(D) + s(D′ )

so that s(D) ≤ 1. We conclude that s : D(V ) → [0, 1] for any state on D(V ).
We define pure and extremal states on D(V ) as before. A simplified version
of the proof of Lemma 4.4 shows that extremal states on D(V ) are pure.
Moreover, the proof of Theorem 4.5 carries over to show that an extremal
state s on D(V ) has the form

s(D) = µ (hDv, vi)

as before. Also, Corollaries 4.6 and 4.7 hold for states on D(V ). Finally, any
state on D(V ) has a unique extension to a state on O(V ).
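The operator D′ used above is just the entrywise complement of the diagonal. A sketch over the 2-bit-mask algebra (the helper names diag, mat_mul, mat_add are ours):

```python
# For a diagonal Boolean matrix D = diag(d1, ..., dn), the complement
# D' = diag(d1', ..., dn') satisfies DD' = 0 and D + D' = I.
FULL = 0b11

def diag(ds):
    n = len(ds)
    return tuple(tuple(ds[i] if i == j else 0 for j in range(n))
                 for i in range(n))

def mat_mul(A, B):
    """(AB)_ik = join over j of A_ij B_jk."""
    n = len(A)
    out = []
    for i in range(n):
        row = []
        for k in range(n):
            acc = 0
            for j in range(n):
                acc |= A[i][j] & B[j][k]
            row.append(acc)
        out.append(tuple(row))
    return tuple(out)

def mat_add(A, B):
    """Entrywise join A + B."""
    return tuple(tuple(x | y for x, y in zip(ra, rb)) for ra, rb in zip(A, B))

ds = (0b01, 0b11)
D = diag(ds)
Dp = diag(tuple(FULL & ~d for d in ds))  # entrywise complements d_i'
I = diag((FULL, FULL))
ZERO = diag((0, 0))
assert mat_mul(D, Dp) == ZERO            # DD' = 0
assert mat_add(D, Dp) == I               # D + D' = I
print("D' is the diagonal complement of D")
```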

5 Tensor Products and Direct Sums


Let V1 , V2 be Boolean vector spaces over B. For v1 ∈ V1 , v2 ∈ V2 define
v1 ⊗ v2 : V1 × V2 → B by

(v1 ⊗ v2 )(u1 , u2 ) = hv1 , u1 ihv2 , u2 i

Then v1 ⊗ v2 is a bilinear form. If F, G : V1 × V2 → B are bilinear forms, we define the bilinear form F + G : V1 × V2 → B by

(F + G)(u1 , u2 ) = F (u1 , u2 ) ∨ G(u1 , u2 )

and the bilinear form aF : V1 × V2 → B, a ∈ B, by

(aF )(u1 , u2 ) = aF (u1 , u2 )

We now define the tensor product V1 ⊗ V2 by

V1 ⊗ V2 = { Σ_{i=1}^r Σ_{j=1}^s aij vi ⊗ uj : aij ∈ B, vi ∈ V1 , uj ∈ V2 , i = 1, . . . , r, j = 1, . . . , s }

It is clear that

a(v1 ⊗ v2 ) = (av1 ) ⊗ v2 = v1 ⊗ (av2 )
(v1 + w1 ) ⊗ v2 = v1 ⊗ v2 + w1 ⊗ v2
v1 ⊗ (v2 + w2 ) = v1 ⊗ v2 + v1 ⊗ w2
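These identities can be checked pointwise in a small model, representing v1 ⊗ v2 as the bilinear form (u1, u2) ↦ hv1, u1ihv2, u2i over the 2-bit-mask algebra (an illustration; all helper names are ours).

```python
# Pointwise check of a(v1 (x) v2) = (av1) (x) v2 = v1 (x) (av2) and
# (v1 + w1) (x) v2 = v1 (x) v2 + w1 (x) v2, with tensors as bilinear forms.
import itertools

FULL = 0b11

def inner(u, v):
    r = 0
    for x, y in zip(u, v):
        r |= x & y
    return r

def form(v1, v2):                # the bilinear form v1 (x) v2
    return lambda u1, u2: inner(v1, u1) & inner(v2, u2)

def fadd(F, G):                  # (F + G)(u1, u2) = F(u1, u2) join G(u1, u2)
    return lambda u1, u2: F(u1, u2) | G(u1, u2)

def fscal(a, F):                 # (aF)(u1, u2) = a F(u1, u2)
    return lambda u1, u2: a & F(u1, u2)

def feq(F, G):                   # equality of forms on all of L2(B) x L2(B)
    vecs = list(itertools.product(range(FULL + 1), repeat=2))
    return all(F(u1, u2) == G(u1, u2) for u1 in vecs for u2 in vecs)

def vadd(v, w):
    return tuple(x | y for x, y in zip(v, w))

def vscal(a, v):
    return tuple(a & x for x in v)

sample = [(0b01, 0b10), (0b11, 0b01), (0b10, 0b10)]
a = 0b01
for v1, v2 in itertools.product(sample, repeat=2):
    assert feq(fscal(a, form(v1, v2)), form(vscal(a, v1), v2))
    assert feq(fscal(a, form(v1, v2)), form(v1, vscal(a, v2)))
for v1, w1, v2 in itertools.product(sample, repeat=3):
    assert feq(form(vadd(v1, w1), v2), fadd(form(v1, v2), form(w1, v2)))
print("tensor bilinearity identities hold on the sample")
```

The identities reduce to distributivity of meet over join in B, which is why every assertion passes.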

Theorem 5.1. V1 ⊗ V2 is a Boolean vector space.


Proof. The first five axioms for a Boolean vector space are clear. To show that V1 ⊗ V2 has a basis, let {x1 , . . . , xm } be a basis for V1 and {y1 , . . . , yn } a basis for V2 . It is clear that xi ⊗ yj , i = 1, . . . , m, j = 1, . . . , n, generates
V1 ⊗ V2 . To show uniqueness, suppose
Σ_{i,j} aij xi ⊗ yj = Σ_{i,j} bij xi ⊗ yj

We then have

ars = ⋁_{i,j} aij hxi , xr ihyj , ys i = (Σ_{i,j} aij xi ⊗ yj )(xr , ys )
= (Σ_{i,j} bij xi ⊗ yj )(xr , ys ) = ⋁_{i,j} bij hxi , xr ihyj , ys i = brs

The theory of the tensor product of a finite number of Boolean vector


spaces carries over in a straightforward way and we shall mainly concentrate
on two Boolean vector spaces. Let U, V, W be Boolean vector spaces. A
bimorphism φ : U × V → W satisfies φ(u1 + u2 , v) = φ(u1 , v) + φ(u2 , v),
φ(u, v1 + v2 ) = φ(u, v1 ) + φ(u, v2 ) and φ(au, v) = φ(u, av) = aφ(u, v) for all
a ∈ B.

Theorem 5.2. (Universality) There exists a bimorphism τ : V1 × V2 → V1 ⊗ V2 such that any element F ∈ V1 ⊗ V2 has the form F = Σ aij τ (vi , uj ) and if φ : V1 × V2 → W is a bimorphism there exists a unique linear map ψ : V1 ⊗ V2 → W such that φ = ψ ◦ τ .

Proof. Define τ (v1 , v2 ) = v1 ⊗ v2 . Then τ is clearly a bimorphism and any F ∈ V1 ⊗ V2 has the form

F = Σ aij vi ⊗ uj = Σ aij τ (vi , uj )

Let φ : V1 × V2 → W be a bimorphism. Let {x1 , . . . , xm } be a basis for V1 and {y1 , . . . , yn } be a basis for V2 . Define ψ : V1 ⊗ V2 → W by ψ(xi ⊗ yj ) = φ(xi , yj ) and extend by linearity. Then ψ is linear and, writing v1 = Σ ai xi , v2 = Σ bj yj ,

ψ ◦ τ (v1 , v2 ) = ψ (Σ ai xi ⊗ Σ bj yj ) = ψ (Σ_{i,j} ai bj xi ⊗ yj )
= Σ_{i,j} ai bj ψ(xi ⊗ yj ) = Σ_{i,j} ai bj φ(xi , yj )
= φ (Σ ai xi , Σ bj yj ) = φ(v1 , v2 )

Hence, φ = ψ ◦ τ . To show that ψ is unique, suppose ψ1 : V1 ⊗ V2 → W is linear and φ = ψ1 ◦ τ . Then

ψ1 (xi ⊗ yj ) = ψ1 ◦ τ (xi , yj ) = φ(xi , yj ) = ψ(xi ⊗ yj )

Since xi ⊗ yj , i = 1, . . . , m, j = 1, . . . , n, is a basis for V1 ⊗ V2 , ψ1 = ψ.

Example 10. We show that Lm (B) ⊗ Ln (B) ≈ Lmn (B). Let {u1 , . . . , um }
be a basis for Lm (B) and {v1 , . . . vn } be a basis for Ln (B). We write ui =
(a1i , . . . , ami ), i = 1, . . . , m, and vj = (b1j , . . . , bnj ), j = 1, . . . , n. Define
φ : Lm (B) ⊗ Ln (B) → Lmn (B) by

φ(ui ⊗ vj ) = (a1i b1j , . . . , a1i bnj , a2i b1j , . . . , a2i bnj , . . . , ami b1j , . . . , ami bnj )

for i = 1, . . . , m, j = 1, . . . , n, and extend by linearity. Since {u1 , . . . , um } and {v1 , . . . , vn } are consistent orthonormal sets, it is straightforward to show that A = {φ(ui ⊗ vj ) : i = 1, . . . , m, j = 1, . . . , n} is a consistent orthonormal set in Lmn (B). Since A has cardinality mn, it follows that A is a basis for Lmn (B). Hence, φ is an isomorphism.
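The isomorphism of Example 10 can be checked for m = n = 2 over the 2-bit-mask algebra: φ is the Kronecker-style flattening of coordinate meets, and it carries a pair of orthonormal bases to an orthonormal set of cardinality mn (an illustrative sketch; helper names are ours).

```python
# Example 10 with m = n = 2: the flattened products of two orthonormal
# bases of L2(B) form an orthonormal set of cardinality mn = 4 in L4(B).
FULL = 0b11

def inner(u, v):
    r = 0
    for x, y in zip(u, v):
        r |= x & y
    return r

def phi(u, v):
    """phi(u (x) v): all coordinate meets u_i v_j, flattened row-major."""
    return tuple(ui & vj for ui in u for vj in v)

a = 0b01
basis = [(a, FULL & ~a), (FULL & ~a, a)]      # an orthonormal basis of L2(B)
images = [phi(u, v) for u in basis for v in basis]
for i, x in enumerate(images):
    for j, y in enumerate(images):
        assert inner(x, y) == (FULL if i == j else 0)   # orthonormality
print("phi sends a basis of L2(B) (x) L2(B) to an orthonormal set in L4(B)")
```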

Example 11. If u1 , u2 ∈ U , v1 , v2 ∈ V we show that

hu1 ⊗ v1 , u2 ⊗ v2 i = hu1 , u2 ihv1 , v2 i

Let {x1 , . . . , xm } be a basis for U and {y1 , . . . , yn } be a basis for V . Letting u1 = Σ ai xi , v1 = Σ bj yj , u2 = Σ cr xr , v2 = Σ ds ys we have

hu1 ⊗ v1 , u2 ⊗ v2 i = hΣ ai xi ⊗ Σ bj yj , Σ cr xr ⊗ Σ ds ys i
= ⋁_{i,j,r,s} ai bj cr ds hxi ⊗ yj , xr ⊗ ys i
= ⋁_{i,j} ai bj ci dj = (⋁_i ai ci )(⋁_j bj dj )
= hu1 , u2 ihv1 , v2 i
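In the flattened coordinate model of Example 10, the identity of Example 11 can be verified exhaustively for all vectors of L2(B) over the two-atom algebra (a brute-force sketch; it reduces to distributivity in B):

```python
# Exhaustive check of <u1 (x) w1, u2 (x) w2> = <u1, u2><w1, w2> in L2(B),
# with B the 2-bit-mask algebra and tensors flattened as in Example 10.
import itertools

FULL = 0b11

def inner(u, v):
    r = 0
    for x, y in zip(u, v):
        r |= x & y
    return r

def tensor(u, w):
    return tuple(x & y for x in u for y in w)

vecs = list(itertools.product(range(FULL + 1), repeat=2))   # all of L2(B)
for u1, w1, u2, w2 in itertools.product(vecs, repeat=4):    # 16^4 quadruples
    assert inner(tensor(u1, w1), tensor(u2, w2)) == inner(u1, u2) & inner(w1, w2)
print("tensor inner-product identity verified on all of L2(B)")
```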
Let V and W be Boolean vector spaces over B. Define V ⊕ W = (V ×
W, +, ·) where
(v1 , w1 ) + (v2 , w2 ) = (v1 + v2 , w1 + w2 )
a · (v, w) = (av, aw), a ∈ B
We call V ⊕ W the direct sum of V and W .
Theorem 5.3. V ⊕ W is a Boolean vector space.
Proof. It is clear that V ⊕W satisfies the first five axioms for a Boolean vector
space. To show that V ⊕ W has a basis, let {x1 , . . . , xm } and {y1 , . . . , yn } be
bases for V and W , respectively. Then
{(xi , 0), (0, yj ) : i = 1, . . . , m, j = 1, . . . , n}
is a basis for V ⊕ W . Indeed, if v = Σ ai xi and w = Σ bj yj , then

(v, w) = (Σ ai xi , Σ bj yj ) = (Σ ai xi , 0) + (0, Σ bj yj ) = Σ ai (xi , 0) + Σ bj (0, yj )

To show uniqueness, suppose (v, w) = Σ ci (xi , 0) + Σ dj (0, yj ). Then

(Σ ai xi , Σ bj yj ) = (Σ ci xi , Σ dj yj )

so that Σ ai xi = Σ ci xi and Σ bj yj = Σ dj yj . It follows that ai = ci and bj = dj for i = 1, . . . , m, j = 1, . . . , n.

Notice that h(v1 , w1 ), (v2 , w2 )i = hv1 , v2 i + hw1 , w2 i. The maps φV : V → V ⊕ W and φW : W → V ⊕ W given by φV (v) = (v, 0) and φW (w) = (0, w) are isometries and φV (V ), φW (W ) are orthogonal subspaces of V ⊕ W .
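The inner-product formula for V ⊕ W and the orthogonality of the embedded copies can be checked in coordinates by concatenating tuples (a sketch over the 2-bit-mask algebra; helper names are ours):

```python
# V (+) W realized by concatenation: <(v1,w1),(v2,w2)> = <v1,v2> + <w1,w2>,
# and the images of phi_V(v) = (v, 0), phi_W(w) = (0, w) are orthogonal.
import itertools

FULL = 0b11
ZERO2 = (0, 0)

def inner(u, v):
    r = 0
    for x, y in zip(u, v):
        r |= x & y
    return r

def dsum(v, w):                      # the pair (v, w) as a concatenated tuple
    return v + w

vecs = list(itertools.product(range(FULL + 1), repeat=2))
for v1, w1, v2, w2 in itertools.product(vecs, repeat=4):
    # the direct-sum inner product is the join of the componentwise ones
    assert inner(dsum(v1, w1), dsum(v2, w2)) == inner(v1, v2) | inner(w1, w2)
for v in vecs:
    for w in vecs:
        assert inner(dsum(v, ZERO2), dsum(ZERO2, w)) == 0   # phi_V ⊥ phi_W
print("direct-sum inner product verified in coordinates")
```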
Let Vi , i = 1, 2, . . ., be Boolean vector spaces. We define

V1 ⊕ V2 ⊕ · · · = (V1 × V2 × · · · , +, ·)

where (v1 , v2 , . . .) + (w1 , w2 , . . .) = (v1 + w1 , v2 + w2 , . . .) and c · (v1 , v2 , . . .) = (cv1 , cv2 , . . .). Now V1 ⊕ V2 ⊕ · · · satisfies the first five axioms for a Boolean vector space but we do not have a finite basis. However, we have a countable basis in the following sense. Let {vij } be a basis for Vj . Then

{(vi1 , 0, . . .), (0, vi2 , 0, . . .), . . .}

forms a basis for V1 ⊕ V2 ⊕ · · · in the sense that

(w1 , w2 , . . .) = (w1 , 0, . . .) + (0, w2 , 0, . . .) + · · ·
= Σ c1i (vi1 , 0, . . .) + Σ c2i (0, vi2 , 0, . . .) + · · ·

where the coefficients are unique.


If V is a Boolean vector space, we define the Fock space

F(V ) = B ⊕ V ⊕ (V ⊗ V ) ⊕ (V ⊗ V ⊗ V ) ⊕ · · ·

Let {v1 , . . . , vn } be a basis for V . The subspace of V ⊗ V generated by the


consistent orthonormal set

{vi ⊗ vi , vi ⊗ vj + vj ⊗ vi : i, j = 1, . . . , n}

is called the symmetric subspace of V ⊗ V and is denoted by V s V . In a similar way, we have symmetric subspaces of V ⊗ V ⊗ V , V ⊗ V ⊗ V ⊗ V , . . . . The symmetric Fock space is

FS (V ) = B ⊕ V ⊕ (V sV ) ⊕ (V sV sV ) ⊕ · · ·

We leave the study of these Fock spaces to a later paper.

References
[1] T. S. Blyth, On eigenvectors of Boolean matrices, Proc. Roy. Soc. Ed-
inburgh 67 (1967), 196–204.
[2] K. Cechlárová, Powers of matrices over distributive lattices – a review,
Fuzzy Sets Sys. 138 (2003), 627–641.
[3] Y. Givéon, Lattice matrices, Inf. Contr. 7 (1964), 477–484.
[4] S. Gudder, Quantum Markov chains, J. Math. Phys. 49 (2008),
072105-1–072105-14.
[5] S. Gudder and F. Latrémolière, Boolean inner-product spaces and
Boolean matrices (to appear).
[6] P. V. Jagannadham, Linear transformations on Boolean vector spaces,
Math. Ann. 16 (1966), 240–247.
[7] R. D. Luce, A note on Boolean matrix theory, Proc. Amer. Math. Soc.
3 (1952), 382–388.
[8] D. E. Rutherford, Inverses of Boolean matrices, Proc. Glasgow Math. Soc.
6 (1963), 49–53.
[9] D. E. Rutherford, The eigenvalue problem for Boolean matrices, Proc.
Roy. Soc. Edinburgh 67 (1963/1985), 25–38.
[10] D. E. Rutherford, Orthogonal Boolean matrices, Proc. Roy. Soc. Edinburgh 67 (1964/1965), 126–135.
[11] R. L. Sindak, Eigenvectors and maximal vectors in Boolean vector
spaces, Proc. Amer. Math. Soc. 47 (1975), 323–328.
[12] L. A. Skornyakov, Invertible matrices over distributive structures,
(Russian) Sibirsk. Mat. Zh. 27 (1986), 182–185, (English translation)
Siberian Math. J. 27 (1986), 289–292.
[13] D. Stirzaker, Stochastic Processes and Models, Oxford University Press,
Oxford, 2005.
[14] N. V. Subrahmanyam, Boolean vector spaces I, Math. Z., 83 (1964),
422–433.

[15] N. V. Subrahmanyam, Boolean vector spaces II, Math. Z., 87 (1965),
401–419.

[16] N. V. Subrahmanyam, Boolean vector spaces III, Math. Z., 100 (1967),
295–313.
