NSM Math Preprint 0831
Stan Gudder
Department of Mathematics
University of Denver
Denver, Colorado 80208
sgudder@math.du.edu
Abstract
This article discusses the basic properties of finite-dimensional
Boolean vector spaces and their linear transformations. We first intro-
duce a natural Boolean-valued inner product and discuss orthonormal
bases. We show that all bases are orthonormal and have the same
cardinality. It is shown that the set of subspaces forms an atomistic
orthomodular poset. We then demonstrate that an operator that is
diagonal relative to one basis is diagonal relative to all bases and that
all projections are diagonal. It is proved that an operator is diagonal
if and only if any basis consists of eigenvectors of the operator. We
characterize extremal states and show that a state is extremal if and
only if it is pure. Finally, we introduce tensor products and direct
sums of Boolean vector spaces.
1 Introduction
Roughly speaking, a Boolean vector space is a vector space in which the
scalars are elements of a Boolean algebra. Although some of our results
can be generalized to the infinite dimensional case, in this work we only
consider finite-dimensional spaces. There is already a considerable literature
on Boolean vector spaces [6, 11, 14, 15, 16], but as far as we know the
results presented here are new. There have also been investigations into
the related fields of Boolean matrices and matrices over distributive lattices
[1, 2, 3, 7, 8, 9, 10, 12]. These works have applications in graph theory,
computer science and fuzzy systems.
Following this introduction, Section 2 presents some preliminary results.
We first introduce a natural Boolean-valued inner product and discuss or-
thonormal bases. One of our surprising results is that all bases are orthonor-
mal. Not so surprising is that all bases have the same cardinality. We then
consider linear transformations and operators.
Section 3 discusses subspaces and projections in Boolean vector spaces.
It is shown that the set of subspaces forms an atomistic orthomodular poset.
Since there is a bijection between subspaces and projections, projections
inherit this same structure. Another surprising result is that an operator
that is diagonal relative to one basis is diagonal relative to all bases. Also
surprising is that all projections are diagonal.
In Section 4 we consider states and diagonality. We first discuss the
concepts of eigenvectors and eigenvalues. We then show that an operator is
diagonal if and only if every basis consists of eigenvectors of the operator.
After defining the concept of a state, we show that a state is extremal if and
only if it is pure. We then characterize extremal states.
Finally, Section 5 presents an introduction to tensor products and direct
sums of Boolean vector spaces. We leave a deeper study of these concepts to
a later investigation.
In this paper, B will denote an arbitrary but fixed Boolean algebra with
least and greatest elements 0 and 1, respectively. The order on B is denoted
by ≤ and the meet and join of a, b ∈ B are denoted by ab and a ∨ b, respec-
tively. We write the complement of a ∈ B as a′ and say that a, b ∈ B are disjoint if ab = 0 or, equivalently, a ≤ b′.
We close this section with a motivating example for our work in this field
[4, 5]. A Boolean matrix is an n × m matrix with entries in B. We then
write A = [aij ] with aij ∈ B, i = 1, . . . , n and j = 1, . . . , m. If A is an n × m
Boolean matrix and B is an m×k Boolean matrix, we define the product AB
to be the n × k matrix whose (i, j) entry is given by ∨_{r=1}^{m} air brj. In particular, we consider elements of B^m to be m × 1 matrices (column vectors) and for b = (b1, . . . , bm) ∈ B^m we have

(Ab)i = ∨_{r=1}^{m} air br

for i = 1, . . . , n, so that A gives a map A : B^m → B^n. As mentioned earlier, Boolean matrices and their generalization to distributive lattices provide useful tools in various fields such as
switching nets, automata theory and finite graph theory.
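The matrix product above can be made concrete with a small sketch (hypothetical, not from the paper): take B to be the power set of a three-element set encoded as 3-bit masks, so that meet is bitwise AND, join is bitwise OR, 0 = 0b000 and 1 = 0b111.

```python
# Hypothetical model: B = power set of a 3-element set, encoded as 3-bit masks.

def boolean_matmul(A, B):
    """(AB)_ij = join over r of a_ir meet b_rj, for matrices over B."""
    n, m, k = len(A), len(B), len(B[0])
    out = [[0] * k for _ in range(n)]
    for i in range(n):
        for j in range(k):
            acc = 0                        # least element of B
            for r in range(m):
                acc |= A[i][r] & B[r][j]   # join of meets
            out[i][j] = acc
    return out

A = [[0b001, 0b110],
     [0b110, 0b001]]
b = [[0b011],          # a column vector in B^2
     [0b100]]
print(boolean_matmul(A, b))  # → [[5], [2]], i.e. the vector (0b101, 0b010)
```

The diagonal matrix with 1 = 0b111 on the diagonal acts as an identity for this product, mirroring the identity operator used later in the paper.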
Our main motivation for studying Boolean vector spaces and matrices
comes from an analogy to Markov chains [4, 13]. Let G be a finite directed graph whose vertices are labelled 1, 2, . . . , n. We think of the vertices of G as sites that a physical system can occupy or possible configurations for a computer.
The edges of G designate the allowable transitions between sites or config-
urations. If there is an edge from vertex i to vertex j, we label it by an
element aji ∈ B. We think of aji as the event or proposition that the system
(computer) evolves from site (configuration) i to site (configuration) j in
one time-step. If there is no edge from i to j, then we set aji = 0. The n × n
Boolean matrix A = [aij ] is the transition matrix in one time-step for the
physical system and is determined by the dynamical law for the system. Al-
ternatively, for a computer, A is determined by a program or algorithm and
the internal states of the computer. The transition matrix for m time-steps
is then naturally given by the matrix product Am .
Assuming that the system evolves from site i to some specific site j in one time-step, we postulate that aji aki = 0 for j ≠ k and ∨_{j=1}^{n} aji = 1 for i = 1, 2, . . . , n. Thus, each column of A is what we shall later call a
consistent unit vector. Suppose that bi is the event or proposition that the
system is at the site i initially. We would then have the consistent unit vector
b = (b1 , . . . , bn ) ∈ B n and Ab ∈ B n describes the system location at one time-
step. It is easy to check that Ab is again a consistent unit vector and we
interpret (Ab)i to be the event that the system is at site i at one time-step.
In this way, Am , m = 1, 2, . . . , describes the dynamics of the system and this
gives an analog to a traditional Markov chain [13].
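The one-step dynamics just described can be sketched on a toy model (hypothetical, not from the paper): B is the power set of a two-element sample space encoded as 2-bit masks, the columns of the transition matrix are consistent unit vectors, and one step maps a consistent unit vector to another.

```python
# Hypothetical model: B = power set of a 2-element sample space (2-bit masks).
ONE = 0b11

def is_consistent_unit(v):
    """Entries pairwise disjoint with join equal to 1."""
    join = 0
    for i, a in enumerate(v):
        join |= a
        if any(a & b for b in v[i + 1:]):
            return False
    return join == ONE

def row_join(row, v):
    acc = 0
    for a, b in zip(row, v):
        acc |= a & b          # join of meets
    return acc

def step(A, v):
    """(Av)_i = join over r of a_ir meet v_r: the location after one step."""
    return [row_join(row, v) for row in A]

# Columns of A are consistent unit vectors: each column partitions 1.
A = [[0b01, 0b10],
     [0b10, 0b01]]
v = [0b01, 0b10]              # initial location, a consistent unit vector
w = step(A, v)
assert is_consistent_unit(v) and is_consistent_unit(w)
print(w)  # → [3, 0]: after one step the system is certainly at the first site
```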
2 Preliminary Results
We begin with the definition of a Boolean vector space. Note that our defi-
nition is different than that given in [14, 15, 16].
A Boolean vector space is a system (V, B, +, · ) where V is a nonempty
set, B is a Boolean algebra, + is a binary operation on V and · is a map from
B × V to V such that
(2) u + (v + w) = (u + v) + w for all u, v, w ∈ V
and
a(b1 , . . . , bn ) = (ab1 , . . . , abn )
The standard basis for Ln (B) is δ1 = (1, 0, . . . , 0), δ2 = (0, 1, 0, . . . , 0),
. . . , δn = (0, . . . , 0, 1). We shall show that any Boolean vector space is iso-
morphic to Ln (B) for some n ∈ N.
For the Boolean vector space V we define u ≤ v if there is a w ∈ V such that u + w = v. We define 0, 1 ∈ V by 0 = Σ 0vi and 1 = Σ 1vi. Moreover, if v = Σ ai vi we define v′ = Σ ai′ vi. If an entity associated with V is independent of the basis of V, we say that the entity is an invariant. The next result shows that 0, 1 and v′ are invariants.
Theorem 2.1. Let V be a Boolean vector space with basis {v1, . . . , vn}. (i) If u = Σ ai vi and v = Σ bi vi, then u ≤ v if and only if ai ≤ bi, i = 1, . . . , n. (ii) The relation ≤ is a partial order relation. (iii) 0 ≤ v ≤ 1 for all v ∈ V. (iv) For v ∈ V, v′ is the smallest element of V satisfying v + v′ = 1.
Proof. (i) Let u ≤ v with u + w = v and let w = Σ ci vi. Then

Σ bi vi = Σ ai vi + Σ ci vi = Σ (ai ∨ ci)vi

Hence, bi = ai ∨ ci ≥ ai, i = 1, . . . , n. Conversely, if ai ≤ bi then bi = ai ∨ ci where ci = bi ∧ ai′. Letting w = Σ ci vi we have that u + w = v so u ≤ v. (ii) It is clear that ≤ is reflexive and transitive. To prove antisymmetry, suppose that u ≤ v and v ≤ u. By (i) we have that ai = bi, i = 1, . . . , n, so u = v. (iii) Since 0 ≤ bi ≤ 1, by (i) we have that 0 ≤ v ≤ 1. (iv) Since

Σ bi vi + Σ bi′ vi = Σ (bi ∨ bi′)vi = Σ 1vi = 1
The norm of v ∈ V is ‖v‖ = ⟨v, v⟩ = ∨bi. Notice that ‖av‖ = a‖v‖ and

‖u + v‖ = ‖u‖ ∨ ‖v‖

and

T(u + v) = ‖u + v‖1 = (‖u‖ ∨ ‖v‖)1 = ‖u‖1 + ‖v‖1 = Tu + Tv

‖Tv‖ = ‖ ‖v‖1 ‖ = ‖v‖

in general. For example, we may have u, v ≠ 0 with ⟨u, v⟩ = 0. Thus, T is not isometric.
Notice that for the basis {v1, . . . , vn} we have that ⟨vi, vj⟩ = δij where δij is the Kronecker delta. If a set {w1, . . . , wm} ⊆ V satisfies ⟨wi, wj⟩ = δij we call {w1, . . . , wm} an orthonormal set. In this way, {v1, . . . , vn} is an orthonormal basis. Moreover,

v = Σ_{i=1}^{n} ⟨v, vi⟩vi
Lemma 2.2. The inner product satisfies the following conditions. (i) ⟨u, v⟩ = ⟨v, u⟩. (ii) ⟨u + v, w⟩ = ⟨u, w⟩ ∨ ⟨v, w⟩. (iii) ⟨au, v⟩ = a⟨u, v⟩. (iv) ‖v‖ = ⟨v, v⟩ = 0 if and only if v = 0. (v) ⟨u, v⟩ = ⟨w, v⟩ for all v ∈ V implies that u = w. (vi) ⟨v, v′⟩ = 0. (vii) ⟨u, v⟩ ≤ ‖u‖‖v‖.

Thus, the inner product is symmetric (i), linear ((ii) and (iii)), definite (iv), nondegenerate (v) and complementary (vi). Condition (vii) is called Schwarz's inequality.
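These conditions can be checked exhaustively on a small model. The sketch below is hypothetical and not from the paper; it assumes the coordinate form ⟨u, v⟩ = ∨ ui vi on L2(B), which is consistent with the Parseval identity quoted later, takes B with four elements (2-bit masks), and verifies symmetry and Schwarz's inequality over all pairs of vectors.

```python
import itertools

# Hypothetical model: B has four elements 0b00, 0b01, 0b10, 0b11.

def inner(u, v):
    """Assumed coordinate form of the inner product: join of ui meet vi."""
    acc = 0
    for a, b in zip(u, v):
        acc |= a & b
    return acc

def norm(v):
    return inner(v, v)   # equals the join of the components of v

def leq(a, b):
    return a | b == b    # the Boolean order a ≤ b

vectors = list(itertools.product(range(4), repeat=2))  # all of L_2(B)
for u in vectors:
    for v in vectors:
        assert inner(u, v) == inner(v, u)              # (i) symmetry
        assert leq(inner(u, v), norm(u) & norm(v))     # (vii) Schwarz
print("checked", len(vectors) ** 2, "pairs")
```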
Lemma 2.3. Let V be a Boolean vector space with basis {v1, . . . , vn}. There exists an isometric linear bijection φ : V → Ln(B). Moreover, (V, ≤, ′) is a Boolean algebra and φ : V → B^n is a Boolean algebra isomorphism.

Proof. Define φ : V → Ln(B) by φ(v) = (a1, . . . , an) where v = Σ ai vi. It is clear that φ is an isometric, linear bijection. It follows from Theorem 2.1 (i) that φ and φ⁻¹ preserve order. We conclude that (V, ≤, ′) is a Boolean algebra and that φ : V → B^n is a Boolean isomorphism.
Theorem 2.4. If V is a Boolean vector space, then all bases for V are
orthonormal and have the same cardinality. Moreover, the inner product is
an invariant.
be the inner products relative to {u1, . . . , un} and {v1, . . . , vn}, respectively. We then have

⟨u, v⟩1 = ⟨φ1(u), φ1(v)⟩ = ⟨φ2 ◦ φ1⁻¹[φ1(u)], φ2 ◦ φ1⁻¹[φ1(v)]⟩ = ⟨φ2(u), φ2(v)⟩ = ⟨u, v⟩2

Hence the inner product is an invariant. Denoting this invariant inner product by ⟨u, v⟩, again we have

⟨ui, uj⟩ = ⟨ui, uj⟩1 = δij
Therefore, all bases are orthonormal with respect to this invariant inner
product.
Lemma 2.5. Let V and W be Boolean vector spaces over the same Boolean algebra B. (i) If f : V → B is a linear functional, then there exists a unique v ∈ V such that f(u) = ⟨v, u⟩ for all u ∈ V. (ii) If T : V → W is a linear map, there exists a unique linear map T∗ : W → V such that

⟨Tv, w⟩ = ⟨v, T∗w⟩

for all v ∈ V, w ∈ W.

Proof. (i) Uniqueness follows from the nondegeneracy of the inner product. Let {v1, . . . , vn} be a basis for V. Since u = Σ ⟨vi, u⟩vi we have

f(u) = f(Σ ⟨vi, u⟩vi) = ∨ ⟨vi, u⟩f(vi) = ⟨Σ f(vi)vi, u⟩

Letting v = Σ f(vi)vi completes the proof. (ii) Again, uniqueness follows from the nondegeneracy of the inner product. For a fixed w ∈ W, the map v ↦ ⟨Tv, w⟩ is a linear functional on V. By (i) there exists a unique w∗ ∈ V such that ⟨Tv, w⟩ = ⟨v, w∗⟩ for all v ∈ V. Define T∗ : W → V by T∗w = w∗. It is easy to check that T∗ is linear.
We call T∗ in Lemma 2.5 (ii) the adjoint of T. A linear map T : V → V is called an operator. It is easy to check that an operator is isometric if and only if T∗T = I, the identity map. A map F : V × V → B is bilinear if it is linear in both arguments.
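In the matrix model Ln(B) the adjoint works out to be the transpose, since ⟨Tv, w⟩ = ∨_{i,j} tij vj wi = ⟨v, Tᵗw⟩. The brute-force check below is a hypothetical sketch, not from the paper, over the two-element Boolean algebra.

```python
import itertools

def inner(u, v):
    acc = 0
    for a, b in zip(u, v):
        acc |= a & b
    return acc

def apply(T, v):
    """(Tv)_i = join over j of t_ij meet v_j."""
    return [inner(row, v) for row in T]

def transpose(T):
    return [list(col) for col in zip(*T)]

# Exhaustive check over L_2(B) with B = {0, 1}, the two-element algebra.
vectors = list(itertools.product(range(2), repeat=2))
matrices = [[[a, b], [c, d]]
            for a, b, c, d in itertools.product(range(2), repeat=4)]
for T in matrices:
    for v in vectors:
        for w in vectors:
            # the defining adjoint identity <Tv, w> = <v, T* w>
            assert inner(apply(T, v), w) == inner(v, apply(transpose(T), w))
```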
Lemma 2.6. If F : V × V → B is bilinear, there exists a unique operator T on V such that F(u, v) = ⟨Tu, v⟩ for all u, v ∈ V. Moreover, F is symmetric if and only if T = T∗.

Proof. As before, uniqueness follows from the nondegeneracy of the inner product. Since u ↦ F(v, u) is linear, by Lemma 2.5 (i) there exists a unique w ∈ V such that F(v, u) = ⟨w, u⟩ for all u ∈ V. Define T : V → V by Tv = w. Now T is linear because

⟨T(av), u⟩ = F(av, u) = aF(v, u) = a⟨Tv, u⟩ = ⟨aTv, u⟩

Hence, T(av) = aTv for every a ∈ B, v ∈ V. Also,

⟨T(v1 + v2), u⟩ = F(v1 + v2, u) = F(v1, u) ∨ F(v2, u) = ⟨Tv1, u⟩ ∨ ⟨Tv2, u⟩ = ⟨Tv1 + Tv2, u⟩
Corollary 2.8. An operator T : V → V is definite and complementary if
and only if T = I the identity operator.
Proof. Applying Lemma 2.2, the inner product is bilinear, definite and complementary. Conversely, suppose that F : V × V → B is a definite, complementary bilinear form. By Lemma 2.6, there exists an operator T : V → V such that F(u, v) = ⟨Tu, v⟩ for all u, v ∈ V. It follows that T is definite and complementary. Applying Corollary 2.8, we have T = I. Hence, F(u, v) = ⟨u, v⟩.
We can also characterize the norm on V .
Proof. Clearly, the norm is linear and satisfies the given condition. Conversely, suppose f : V → B is linear and satisfies the condition. By Lemma 2.5 (i) there exists a v ∈ V such that f(u) = ⟨v, u⟩ for all u ∈ V. We then have

⟨v, vi⟩ = f(vi) = 1, i = 1, . . . , m

Hence, v = 1 so that f(u) = ⟨1, u⟩ = ‖u‖ for all u ∈ V.
We now give another simple characterization of the norm.
Lemma 2.11. The norm ‖v‖ is the unique a ∈ B such that v = au for some u with ‖u‖ = 1.

Proof. To show that a is unique, we have
A vector v ∈ V is consistent if there exists a basis {v1, . . . , vn} for V such that ⟨v, vi⟩⟨v, vj⟩ = 0, i ≠ j. A subset of V is consistent if all of its elements are consistent. It is clear that a basis for V is consistent.

Notice that in Lemma 3.2 we have derived the useful Parseval's identity. This states that if {v1, . . . , vn} is a basis for V then for u, v ∈ V we have ∨_i ⟨u, vi⟩⟨vi, v⟩ = ⟨u, v⟩.
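Parseval's identity can be verified directly in Ln(B) with the standard basis δi, for which ⟨u, δi⟩ = ui. The sketch below is hypothetical, not from the paper, and uses a four-element Boolean algebra encoded as 2-bit masks.

```python
import itertools

ONE = 0b11  # greatest element of the 4-element Boolean algebra

def inner(u, v):
    acc = 0
    for a, b in zip(u, v):
        acc |= a & b
    return acc

def delta(i, n):
    """Standard basis vector: 1 in slot i, 0 elsewhere."""
    return tuple(ONE if j == i else 0 for j in range(n))

n = 2
vectors = list(itertools.product(range(4), repeat=n))  # all of L_2(B)
for u in vectors:
    for v in vectors:
        lhs = 0
        for i in range(n):
            # join over i of <u, delta_i><delta_i, v>
            lhs |= inner(u, delta(i, n)) & inner(delta(i, n), v)
        assert lhs == inner(u, v)   # Parseval's identity
```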
for k = 1, . . . , m. It follows that {φ(u1 ), . . . , φ(um )} is a consistent orthonor-
mal set in Ln (B). It follows from [5] that m ≤ n and that {φ(u1 ), . . . , φ(um )}
can be extended to a basis for Ln (B). We conclude that {u1 , . . . , um } can be
extended to a basis for V .
A subspace of a Boolean vector space V is a subset of V of the form
M = span {v1 , . . . , vm } where {v1 , . . . , vm } is a consistent orthonormal set in
V . Then M is a Boolean vector space with the same operations as in V and
dim M = m. Moreover, {v1 , . . . , vm } is a basis for M that can be extended
to a basis for V . By convention, we call {0} a subspace of V with basis ∅.
Example 1. This example shows that the intersection of two subspaces need
not be a subspace. Let M and N be the following subspaces of L2 (B).
M = {b(1, 0) : b ∈ B} ,  N = {b(a, a′) : b ∈ B}

M ∩ N = {(b, 0) : b ≤ a}
Example 3. This example shows that M1 ∧ N1 need not exist. Let M1, N1 be the following subspaces of L3(B):

M1 = {c(1, 0, 0) = (c, 0, 0) : c ∈ B}
N1 = {c(a, a′, 0) : c ∈ B}
ular poset.
Proof. It is clear that M ⊆ N implies N ⊥ ⊆ M⊥ , M = M⊥⊥ and M ∧
M⊥ = {0}. Now suppose M ⊆ N ⊥ . Let {u1 , . . . , um } be a basis for M and
{v1, . . . , vn} be a basis for N. Then
{u1 , . . . , um , v1 , . . . , vn }
is a consistent orthonormal set and it follows that
R = span {u1 , . . . , um , v1 , . . . , vn } ∈ S(V )
Clearly, M, N ⊆ R. Suppose P ∈ S(V) with M, N ⊆ P. Then {u1, . . . , um} ⊆ P and {v1, . . . , vn} ⊆ P. Hence, R ⊆ P so R = M ∨ N. Next, suppose
that M ⊆ N . Then M ⊆ N ⊥⊥ = (N ⊥ )⊥ so M ∨ N ⊥ exists. It follows that
N ∧ M⊥ = (N ⊥ ∨ M)⊥ exists. Since M ⊆ M ∨ N ⊥ = (N ∧ M⊥ )⊥ , we have
that M∨(N ∧M⊥ ) exists and M∨(N ∧M⊥ ) ⊆ N . To prove the reverse in-
clusion, let {u1 , . . . , um } be a basis for M. Since M ⊆ N and N is a Boolean
vector space we can extend this basis to a basis {u1 , . . . , um , um+1 , . . . , un }
for N . We now show that {um+1 . . . , un } is a basis for N ∧ M⊥ . Since
{um+1 , . . . , un } ⊆ N ∧ M⊥ , span {um+1 , . . . , un } ⊆ N ∧ M⊥ . Conversely, if
v ∈ N ∧ M⊥ then
v = Σ_{i=1}^{n} ⟨v, ui⟩ui = Σ_{i=m+1}^{n} ⟨v, ui⟩ui ∈ span {um+1, . . . , un}
where a ≠ 0, 1. Then M ∧ P = N ∧ P = {0} and M ∨ N = S(V). Hence,

P ∧ (M ∨ N) = P ≠ {0} = (P ∧ M) ∨ (P ∧ N)

= Σ_k ∨_j ⟨wj, wk⟩⟨wj, u⟩wk = Σ_k ⟨wk, u⟩wk
An operator T : V → V is diagonal if ⟨T vi, vj⟩ = 0, i ≠ j, i, j = 1, . . . , n, for some basis {v1, . . . , vn} for V. We now have the following surprising result.
Proof. Let {v1, . . . , vn} and {w1, . . . , wn} be bases for V and suppose that
⟨T vi, vj⟩ = 0 for i ≠ j. By Parseval's identity we have for i ≠ j that

⟨T wi, wj⟩ = ⟨T Σ_k ⟨wi, vk⟩vk, Σ_r ⟨wj, vr⟩vr⟩
           = ∨_{k,r} ⟨wi, vk⟩⟨wj, vr⟩⟨T vk, vr⟩
           = ∨_k ⟨wi, vk⟩⟨vk, wj⟩⟨T vk, vk⟩
           ≤ ∨_k ⟨wi, vk⟩⟨vk, wj⟩ = ⟨wi, wj⟩ = 0

= ⟨vi, vj⟩ = 0

⟨T∗ vi, vj⟩ = ⟨vi, T vj⟩ = 0
Hence, for i ≠ j we have ⟨T∗ vi, vj⟩ = ⟨T vi, vj⟩. Moreover,

⟨T∗ vi, vi⟩ = ⟨vi, T vi⟩ = ⟨T vi, vi⟩

By symmetry we have

PM PN v = PN PM v = 0 = PM v

PN vi = PN PM vi = PM vi = vi

PM v = PM PN v = 0
which implies that v ∈ M⊥ . Hence, N ⊆ M⊥ . If v ∈ M⊥ then
v = PM v + PN v = PN v ∈ N
Hence, if (a, v) and (b, v) are eigenpairs then a = b. Thus, the eigenvalue
corresponding to an eigenvector is unique.
Theorem 4.1. The following statements are equivalent. (i) T is diagonal
in V . (ii) Any basis for V consists of eigenvectors of T . (iii) Any consistent
unit vector is an eigenvector of T . (iv) There is a basis for V consisting of
eigenvectors of T .
Proof. (i)⇒(ii) Let T be diagonal and suppose {v1 , . . . , vn } is a basis for V .
Letting ai = ⟨T vi, vi⟩ we have for i ≠ j that

⟨T vi, vj⟩ = 0 = ⟨ai vi, vj⟩
Since the elements of any basis are consistent, (iv) follows from Statement (iii).
(iv)⇒(i) Let {v1 , . . . , vn } be a basis of eigenvectors of T and suppose T vi =
ai vi, i = 1, . . . , n. For i ≠ j, we have

⟨T vi, vj⟩ = ⟨ai vi, vj⟩ = ai⟨vi, vj⟩ = 0
Hence, T is diagonal.
We conclude that

T u = Σ ci T vi = Σ ci ⟨T vi, vi⟩vi = a Σ ci vi = av
Theorem 4.3. If T is a diagonal operator on V , then a is an eigenvalue of
T if and only if a = hT v, vi for some consistent unit vector v.
⟨T v, v⟩ = ⟨av, v⟩ = a⟨v, v⟩ = a
Hence, a is an eigenvalue of T .
Example 9. Let µ be a finitely additive probability measure on B and let w1 be a consistent unit vector in V. Then s(T) = µ(⟨T w1, w1⟩) is a state on O(V). Indeed, s clearly satisfies Conditions (1) and (2). To verify (3), let u and v be orthogonal, consistent unit vectors. Extending w1 to a basis {w1, . . . , wn} for V we have

⟨uv∗(w1), w1⟩ = ⟨⟨v, w1⟩u, w1⟩ = ⟨v, w1⟩⟨w1, u⟩ ≤ ∨_i ⟨v, wi⟩⟨wi, u⟩ = ⟨v, u⟩ = 0
Hence,

si(S + T) = (1/s(Pi))s(SPi + T Pi) = (1/s(Pi))s(SPi) + (1/s(Pi))s(T Pi) = si(S) + si(T)

Hence,

si(uv∗) = (1/s(Pi))s(uv∗ Pi) = 0

To verify Condition (4), we have

(T Pi)(T Pj)∗ = T Pi Pj T∗ = 0

for i ≠ j, we have

s(T) = s(Σ_{i=1}^{n} T Pi) = Σ_{i=1}^{n} s(T Pi)
By Condition (4) we have s(T Pi) ≤ s(Pi) = 0, i = r + 1, . . . , n. Hence,

s(T) = Σ_{i=1}^{r} s(T Pi) = Σ_{i=1}^{r} s(Pi)si(T)

We conclude that

s = Σ_{i=1}^{r} s(Pi)si
For T ∈ O(V) it is well-known that we can write T = Σ tij vi vj∗, tij ∈ B. By additivity we have

s(T) = s(Σ tij vi vj∗) = s(t11 v1 v1∗) = µ(⟨t11 v1 v1∗ v1, v1⟩) = µ(⟨T v1, v1⟩)
s [T (vi vi∗ )] /s(vi vi∗ ), i = 1, . . . , m are pure states on O(V ). Moreover, as in
the proof of Lemma 4.4, we have

s(T) = Σ_{i=1}^{m} s[T(vi vi∗)] = Σ_{i=1}^{m} s(vi vi∗)si(T)

where s(vi vi∗) > 0 and Σ s(vi vi∗) = 1. By Theorem 4.5, there exist finitely additive probability measures µi on B such that si(T) = µi(⟨T vi, vi⟩), i = 1, . . . , m. Letting λi = s(vi vi∗), i = 1, . . . , m, completes the proof.
Denoting the set of diagonal operators on V by D(V ) the theory of states
on D(V ) is much simpler. We define a state on D(V ) to be a functional
s : D(V ) → [0, ∞) such that s(I) = 1 and s(S + T ) = s(S) + s(T ) whenever
ST = 0. For any D ∈ D(V) there exists a unique D′ ∈ D(V) such that DD′ = 0 and D + D′ = I. Hence,
so that s(D) ≤ 1. We conclude that s : D(V ) → [0, 1] for any state on D(V ).
We define pure and extremal states on D(V ) as before. A simplified version
of the proof of Lemma 4.4 shows that extremal states on D(V ) are pure.
Moreover, the proof of Theorem 4.5 carries over to show that an extremal
state s on D(V ) has the form
as before. Also, Corollaries 4.6 and 4.7 hold for states on D(V ). Finally, any
state on D(V ) has a unique extension to a state on O(V ).
and the bilinear form aF : V1 × V2 → B, a ∈ B, by

V1 ⊗ V2 = {Σ_{i=1}^{r} Σ_{j=1}^{s} aij vi ⊗ uj : aij ∈ B, vi ∈ V1, uj ∈ V2, i = 1, . . . , r, j = 1, . . . , s}
It is clear that
We then have

ars = Σ aij ⟨xi, xr⟩⟨uj, ys⟩ = (Σ aij xi ⊗ yj)(xr, ys)
    = (Σ bij xi ⊗ yj)(xr, ys) = Σ bij ⟨xi, xr⟩⟨yj, ys⟩ = brs
Theorem 5.2. (Universality) There exists a bimorphism τ : V1 × V2 → V1 ⊗ V2 such that any element F ∈ V1 ⊗ V2 has the form F = Σ aij τ(vi, uj) and if φ : V1 × V2 → W is a bimorphism there exists a unique linear map ψ : V1 ⊗ V2 → W such that φ = ψ ◦ τ.
Example 10. We show that Lm(B) ⊗ Ln(B) ≈ Lmn(B). Let {u1, . . . , um} be a basis for Lm(B) and {v1, . . . , vn} be a basis for Ln(B). We write ui = (a1i, . . . , ami), i = 1, . . . , m, and vj = (b1j, . . . , bnj), j = 1, . . . , n. Define
φ : Lm (B) ⊗ Ln (B) → Lmn (B) by
φ(ui ⊗ vj ) = (a1i b1j , . . . , a1i bnj , a2i b1j , . . . , a2i bnj , . . . , ami b1j , . . . , ami bnj )
Example 11. If u1, u2 ∈ U, v1, v2 ∈ V we show that

⟨u1 ⊗ v1, u2 ⊗ v2⟩ = ⟨u1, u2⟩⟨v1, v2⟩

Let {x1, . . . , xm} be a basis for U and {y1, . . . , yn} be a basis for V. Letting u1 = Σ ai xi, v1 = Σ bj yj, u2 = Σ cr xr, v2 = Σ ds ys we have

⟨u1 ⊗ v1, u2 ⊗ v2⟩ = ⟨Σ ai xi ⊗ Σ bj yj, Σ cr xr ⊗ Σ ds ys⟩
                  = ∨_{i,j,r,s} ai bj cr ds ⟨xi ⊗ yj, xr ⊗ ys⟩
                  = ∨_{i,j} ai bj ci dj = (∨_i ai ci)(∨_j bj dj)
                  = ⟨u1, u2⟩⟨v1, v2⟩
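Examples 10 and 11 can be combined into one computational sketch (hypothetical, not from the paper): realize ui ⊗ vj in Lmn(B) as a Boolean Kronecker product with meet in place of multiplication, and check exhaustively that the inner product multiplies.

```python
import itertools

def inner(u, v):
    acc = 0
    for a, b in zip(u, v):
        acc |= a & b
    return acc

def kron(u, v):
    """Boolean Kronecker product: entries ai & bj in the order of Example 10."""
    return [a & b for a in u for b in v]

# Check <u1 ⊗ v1, u2 ⊗ v2> = <u1, u2> & <v1, v2> over L_2(B), |B| = 4.
vectors = list(itertools.product(range(4), repeat=2))
for u1, v1, u2, v2 in itertools.product(vectors, repeat=4):
    assert inner(kron(u1, v1), kron(u2, v2)) == inner(u1, u2) & inner(v1, v2)
```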
Let V and W be Boolean vector spaces over B. Define V ⊕ W = (V ×
W, +, ·) where
(v1 , w1 ) + (v2 , w2 ) = (v1 + v2 , w1 + w2 )
a · (v, w) = (av, aw), a ∈ B
We call V ⊕ W the direct sum of V and W .
Theorem 5.3. V ⊕ W is a Boolean vector space.
Proof. It is clear that V ⊕W satisfies the first five axioms for a Boolean vector
space. To show that V ⊕ W has a basis, let {x1 , . . . , xm } and {y1 , . . . , yn } be
bases for V and W , respectively. Then
{(xi , 0), (0, yj ) : i = 1, . . . , m, j = 1, . . . , n}
is a basis for V ⊕ W. Indeed, if v = Σ ai xi and w = Σ bj yj, then

(v, w) = (Σ ai xi, Σ bj yj) = (Σ ai xi, 0) + (0, Σ bj yj) = Σ ai (xi, 0) + Σ bj (0, yj)

To show uniqueness, suppose (v, w) = Σ ci (xi, 0) + Σ dj (0, yj). Then

(Σ ai xi, Σ bj yj) = (Σ ci xi, Σ dj yj)

so that Σ ai xi = Σ ci xi and Σ bj yj = Σ dj yj. It follows that ai = ci and bj = dj for i = 1, . . . , m, j = 1, . . . , n.
Notice that ⟨(v1, w1), (v2, w2)⟩ = ⟨v1, v2⟩ ∨ ⟨w1, w2⟩. The maps φV : V → V ⊕ W and φW : W → V ⊕ W given by φV(v) = (v, 0) and φW(w) = (0, w) are isometries and φV(V), φW(W) are orthogonal subspaces of V ⊕ W.
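In coordinates the direct sum is just concatenation, and the inner product of pairs is the join of the componentwise inner products. The sketch below is hypothetical, not from the paper, and checks this exhaustively on a small model.

```python
import itertools

def inner(u, v):
    acc = 0
    for a, b in zip(u, v):
        acc |= a & b
    return acc

def direct_sum(v, w):
    """Represent (v, w) in V ⊕ W as the concatenated coordinate vector."""
    return tuple(v) + tuple(w)

vectors = list(itertools.product(range(4), repeat=2))  # L_2(B), |B| = 4
for v1, w1, v2, w2 in itertools.product(vectors, repeat=4):
    lhs = inner(direct_sum(v1, w1), direct_sum(v2, w2))
    assert lhs == inner(v1, v2) | inner(w1, w2)  # join of the two inner products
```

The embeddings v ↦ (v, 0) and w ↦ (0, w) then clearly preserve this inner product, matching the isometry claim above.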
Let Vi , i = 1, 2, . . ., be Boolean vector spaces. We define
V1 ⊕ V2 ⊕ · · · = (V1 × V2 × · · · , +, ·)
F(V ) = B ⊕ V ⊕ (V ⊗ V ) ⊕ (V ⊗ V ⊗ V ) ⊕ · · ·
{vi ⊗ vi , vi ⊗ vj + vj ⊗ vi : i, j = 1, . . . , n}
FS(V) = B ⊕ V ⊕ (V ⊗s V) ⊕ (V ⊗s V ⊗s V) ⊕ · · ·
References
[1] T. S. Blyth, On eigenvectors of Boolean matrices, Proc. Roy. Soc. Ed-
inburgh 67 (1967), 196–204.
[2] K. Cechlárová, Powers of matrices over distributive lattices – a review,
Fuzzy Sets Sys. 138 (2003), 627–641.
[3] Y. Givéon, Lattice matrices, Inf. Contr. 7 (1964), 477–484.
[4] S. Gudder, Quantum Markov chains, J. Math. Phys. 49 (2008),
072105-1–072105-14.
[5] S. Gudder and F. Latrémolère, Boolean inner-product spaces and
Boolean matrices (to appear).
[6] P. V. Jagannadham, Linear transformations on Boolean vector spaces,
Math. Ann. 16 (1966), 240–247.
[7] R. D. Luce, A note on Boolean matrix theory, Proc. Amer. Math. Soc.
3 (1952), 382–388.
[8] D. E. Rutherford, Inverses of Boolean matrices, Proc. Glasgow Math. Soc.
6 (1963), 49–53.
[9] D. E. Rutherford, The eigenvalue problem for Boolean matrices, Proc.
Roy. Soc. Edinburgh 67 (1963/1965), 25–38.
[10] D. E. Rutherford, Orthogonal Boolean matrices, Proc. Roy. Soc. Edin-
burgh 67 (1964/1965), 126–135.
[11] R. L. Sindak, Eigenvectors and maximal vectors in Boolean vector
spaces, Proc. Amer. Math. Soc. 47 (1975), 323–328.
[12] L. A. Skornyakov, Invertible matrices over distributive structures,
(Russian) Sibirsk. Mat. Zh. 27 (1986), 182–185, (English translation)
Siberian Math. J. 27 (1986), 289–292.
[13] D. Stirzaker, Stochastic Processes and Models, Oxford University Press,
Oxford, 2005.
[14] N. V. Subrahmanyam, Boolean vector spaces I, Math. Z., 83 (1964),
422–433.
[15] N. V. Subrahmanyam, Boolean vector spaces II, Math. Z., 87 (1965),
401–419.
[16] N. V. Subrahmanyam, Boolean vector spaces III, Math. Z., 100 (1967),
295–313.