Boolean Derivatives and Computation of Cellular Automata
1 Introduction
machines. In the first case, a technique that allows an efficient use of the hardware resources (memory and CPU) is the Multi Site Coding (MSC) technique [2, 3, 4].
This technique requires that the rule be coded using only bitwise operations. Although standard bitwise expressions (canonical forms) are easy to generate for a given Boolean function, the minimization of the number of required operations is believed to be an NP-complete problem [5]. In Section 2 we recall the basic definitions, and in the following section we introduce the Boolean derivatives that will
lead to the Taylor and MacLaurin series for a Boolean function. They are more
compact expressions than the standard canonical conjunctive and disjunctive
forms. In Section 4 this technique is further developed for the particular case
of totalistic cellular automata, for which the future state of a cell depends only
on the total number of neighbors that are in a certain state, regardless of their
position. The symmetries of the problem allow the saving of a large number of
Boolean operations. In Section 5 the results are applied to some models that have appeared in the literature. Finally, conclusions are drawn in the last section.
2 General definitions
Our fundamental set is $B_1 = \{0, 1\}$. This is called the Boolean set. Higher dimensional Boolean sets are indicated as $B_n = \{0, 1\}^n$. By $F_{n,m}$ we denote the set of functions $f : \{0,1\}^n \to \{0,1\}^m$; $F_n$ stands for $F_{n,1}$. To an element $a = (a_1, \dots, a_n) \in B_n$ corresponds a number $a \in [0, 2^n)$:
\[
a = \sum_{i=1}^{n} a_i\, 2^{\,i-1}; \qquad a_i = \left\lfloor a / 2^{\,i-1} \right\rfloor \bmod 2,
\]
where $\lfloor a \rfloor$ stands for the integer part of $a$. An integer number is thus an ordered set of Boolean numbers (bits). In order to emphasize its Boolean structure we shall refer to these sets with the name of bitarray, whose dimension is that of the space in which it is defined. We introduce a partial ordering between bitarrays saying that $a \le b$ if, for all $i$, $a_i \le b_i$, where $0 < 1$. We can thus substitute the expression $a \in [0, 2^n)$ by $a \le 2^n - 1$ or simply $a \in B_n$. A Boolean function $f$ is called monotone if $a \le b$ implies $f(a) \le f(b)$.
This mapping between numbers and Boolean $n$-tuples corresponds to the representation of integer numbers in computers, the integer division by a power of two being equivalent to a right shift (in FORTRAN, $a/2^{\,i-1} \equiv \mathrm{SHIFTR}(a, i-1)$), and the modulo-two operation to taking the leftmost (least significant) bit.
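As an illustration (a minimal C sketch; the function and variable names are ours, not from the original), the $i$-th bit of an integer is obtained with a right shift followed by a bitwise AND with one:

```c
#include <stdio.h>

/* Extract bit i (i = 1, ..., n, as in the text) of the integer a:
 * a_i = (a / 2^(i-1)) mod 2, i.e. a right shift by i-1 followed by AND 1. */
unsigned bit(unsigned a, unsigned i)
{
    return (a >> (i - 1)) & 1u;
}

int main(void)
{
    unsigned a = 13;     /* 13 = 1101 in binary: a_1=1, a_2=0, a_3=1, a_4=1 */
    for (unsigned i = 1; i <= 4; i++)
        printf("a_%u = %u\n", i, bit(a, i));
    return 0;
}
```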
Let us introduce the most common Boolean operations. If applied to a bitarray, they will act bit by bit. There are $2^{2^n}$ Boolean functions in $F_n$. With $n = 1$ the most important function is the negation (NOT), indicated by the symbol $\neg$, or by a bar over the argument. With $n = 2$ there are 16 functions. The ones that exist on (almost) all computers are the AND, OR and XOR
operations. The AND (symbol $\wedge$) gives one only if both the arguments are one (it is equivalent to a multiplication of the bits considered as integer numbers), the OR (symbol $\vee$) gives one if either one or the other argument is one, while the XOR (symbol $\oplus$) corresponds to the sum modulo two. Notice that the XOR operation accounts both for the sum and for the subtraction or negation ($a \oplus a = 0$ and $a \oplus 1 = \bar a$). The AND has higher priority than the OR and XOR operations; the negation has the highest priority. The AND is distributive with respect to the OR and XOR operations. The XOR operation can be expressed by the NOT, AND and OR: $a \oplus b = (a \wedge \bar b) \vee (\bar a \wedge b)$. Equivalently, the OR can be expressed by the AND and XOR operations: $a \vee b = a \oplus b \oplus (a \wedge b)$. If two Boolean quantities $a$ and $b$ cannot be one at once, both the expressions $a \vee b$ and $a \oplus b$ give the same result. In the following we shall emphasize this condition by writing $a + b$; indeed, in this case the usual sum operation can be used.
On certain computers (namely on the CRAY), the logical and numerical units can work in parallel. By mixing Boolean and arithmetic operations it is possible to speed up the actual calculations [6].
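The following C sketch illustrates the multi-site coding idea (the variable names and the 64-bit word size are our assumptions): a single bitwise instruction operates on one bit of each of 64 lattice sites packed in a machine word, and the identity $a \vee b = a \oplus b \oplus (a \wedge b)$ can be checked on all of them at once.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* one bit per lattice site: 64 sites are processed by each instruction */
    uint64_t a = 0xF0F0F0F0F0F0F0F0u;
    uint64_t b = 0xAAAAAAAAAAAAAAAAu;

    uint64_t x = a ^ b;   /* XOR: site-by-site sum modulo two  */
    uint64_t y = a & b;   /* AND: site-by-site product         */
    uint64_t z = a | b;   /* OR                                */

    /* a OR b = a XOR b XOR (a AND b), verified on all 64 sites at once */
    printf("%d\n", z == (x ^ y));   /* prints 1 */
    return 0;
}
```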
Often in the literature [5] the conditional negation is indicated by $a^b$, with the meaning that $a^0 = \bar a$ and $a^1 = a$. This is equivalent to $\overline{a \oplus b}$ or to $a \oplus b \oplus 1$. In this work we prefer to assign a different meaning to the exponentiation, more similar to the usual one. We define $a^0 = 1$ and $a^1 = a$. With this definition the expression $a^b$ is equivalent to $\left( (a \oplus 1) \wedge b \right) \oplus 1$ or to $a \vee \bar b$. When applied to bitarrays, the exponentiation is performed bit by bit and the results are afterwards ANDed together,
\[
a^b = \bigwedge_{i=1}^{n} a_i^{\,b_i} = \bigwedge_{i=1}^{n} \left( a_i \vee \bar b_i \right). \qquad (1)
\]
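A word-wise sketch of this exponentiation in C (our own illustration; the name `boolean_power` is not from the paper): since $a_i^{b_i} = a_i \vee \bar b_i$, the result is one exactly when every bit set in $b$ is also set in $a$.

```c
#include <stdint.h>

/* a^b for two n-bit words with the convention a^0 = 1, a^1 = a:
 * bit by bit a_i^{b_i} = a_i OR (NOT b_i), then all bits are ANDed together. */
int boolean_power(uint64_t a, uint64_t b, int n)
{
    uint64_t mask = (n == 64) ? ~(uint64_t)0 : (((uint64_t)1 << n) - 1);
    return ((a | ~b) & mask) == mask;    /* 1 iff b is contained in a */
}
```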
Given its truth table, any Boolean function $f \in F_n$ can be written in the canonical disjunctive or conjunctive form,
\[
f(x) = \bigvee_{a \in f^{-1}(1)} \bigwedge_{i=1}^{n} \overline{\left( x_i \oplus a_i \right)} = \bigwedge_{a \in f^{-1}(0)} \bigvee_{i=1}^{n} \left( x_i \oplus a_i \right).
\]
These canonical forms are the standard starting points for the problem of
minimizing the number of required operations given a function. It is possible
to demonstrate [5] that the minimal expression for a monotone function only
contains the AND and OR operations; the expression is unique and easy to
compute.
As the NOT and the OR can be expressed by the AND and XOR operations, any function $f$ can be given in terms of the latter two operations (Ring Sum Expansion, RSE),
\[
f(x) = \bigoplus_{a \in B_n} g_a\, x^a, \qquad (2)
\]
with coefficients $g_a \in B_1$.
3 Boolean derivatives
We recall from Ref. [1] that the Boolean derivative of $f$ with respect to its $i$-th argument is
\[
\frac{\partial f}{\partial x_i} = f(x_1, \dots, x_i \oplus 1, \dots, x_n) \oplus f(x_1, \dots, x_i, \dots, x_n).
\]
Note that the definition is consistent with the usual chain rule for derivatives, i.e.,
\[
\frac{\partial^2 f}{\partial x\, \partial y} = \frac{\partial}{\partial y}\, \frac{\partial f}{\partial x}. \qquad (3)
\]
A second order derivative with respect to the same argument is identically
zero.
We introduce a more compact definition for the Boolean derivatives. Indicating with $x$ the bitarray of the arguments $(x_1, \dots, x_n)$ and with $a$ a (constant) bitarray of the same dimension, we define
\[
\frac{\partial f}{\partial x^{(a)}}\,(x) = \bigoplus_{b \le a} f(x \oplus b).
\]
The MacLaurin series of $f$ then reads
\[
f(x) = \bigoplus_{a \in B_n} f_a\, x^a, \qquad (4)
\]
where
\[
f_a = \frac{\partial f}{\partial x^{(a)}}\,(0). \qquad (5)
\]
For instance, for an elementary CA, whose rule $f(x, y, z)$ gives the future value $y'$ of a cell in terms of the cell itself and of its two nearest neighbors, the MacLaurin expansion of $f$ is given by
\[
\begin{aligned}
y' = f(0, 0, 0)
&\oplus \left. \frac{\partial f}{\partial x} \right|_{0,0,0} x
 \oplus \left. \frac{\partial f}{\partial y} \right|_{0,0,0} y
 \oplus \left. \frac{\partial f}{\partial z} \right|_{0,0,0} z \\
&\oplus \left. \frac{\partial^2 f}{\partial x\, \partial y} \right|_{0,0,0} xy
 \oplus \left. \frac{\partial^2 f}{\partial x\, \partial z} \right|_{0,0,0} xz
 \oplus \left. \frac{\partial^2 f}{\partial y\, \partial z} \right|_{0,0,0} yz \\
&\oplus \left. \frac{\partial^3 f}{\partial x\, \partial y\, \partial z} \right|_{0,0,0} xyz.
\end{aligned}
\]
The first order derivatives of all the elementary CA can be found in Ref. [1].
Higher order derivatives can be obtained by using the chain rule (3). Otherwise,
the array of derivatives $f_i$ in zero can be obtained from the truth table $f(j)$ via the matrix $M_{i,j}$,
\[
f_i = \bigoplus_{j \in B_n} M_{i,j}\, f(j); \qquad M_{i,j} = \binom{i}{j} \bmod 2, \qquad (6)
\]
i.e., $M_{i,j} = 1$ if and only if $j \le i$ bit by bit.
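The transform of Eq. (6) is easy to program. The following C sketch (function and variable names are ours) computes the array of derivatives in zero from the truth table of a rule of $n$ variables; as a check, it is applied to the elementary CA rule 150, $f(x, y, z) = x \oplus y \oplus z$, whose only nonvanishing coefficients should be the three first-order ones.

```c
#include <stdio.h>

/* f_i = XOR of f(j) over all j "contained" in i (Eq. (6)):
 * M_{i,j} = binomial(i,j) mod 2 is one exactly when every bit of j is set in i. */
void maclaurin_coefficients(int n, const unsigned char *truth, unsigned char *coef)
{
    unsigned size = 1u << n;
    for (unsigned i = 0; i < size; i++) {
        unsigned char fi = 0;
        for (unsigned j = 0; j <= i; j++)
            if ((j & i) == j)
                fi ^= truth[j];
        coef[i] = fi;
    }
}

int main(void)
{
    unsigned char truth[8], coef[8];
    /* truth table of rule 150: parity of the three arguments,
       index j = x1 + 2*x2 + 4*x3 as in the text */
    for (unsigned j = 0; j < 8; j++)
        truth[j] = (j ^ (j >> 1) ^ (j >> 2)) & 1u;
    maclaurin_coefficients(3, truth, coef);
    for (unsigned i = 0; i < 8; i++)
        printf("f_%u = %u\n", i, coef[i]);   /* expect 1 for i = 1, 2, 4 */
    return 0;
}
```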
(4 operations).
(3 operations).
We consider it a good result to save one operation out of four in such a widely
used and (apparently) simple expression. Other examples can be found in Section 5.
4 Totalistic rules
A totalistic rule depends on its arguments only through their sum
\[
T^{(n)} = \sum_{i=1}^{n} x_i,
\]
and can be written as
\[
f(x_1, \dots, x_n) = \sum_{k=0}^{n} r_k\, \chi_k^{(n)}, \qquad (7)
\]
where $\chi_k^{(n)}$ is one if $T^{(n)} = k$ and zero otherwise (totalistic characteristic functions). Only one term contributes in the sums of equation (7), so that we can use the arithmetic summation. The quantities $r_k$ take the value zero or one and define the automaton rule. Probabilistic CAs may be implemented by allowing the coefficients $r_k$ of equation (7) to assume the values zero and one with probabilities $p_k$ (see also the last example of Section 5).
A totalistic function $f$ is completely symmetrical with respect to its arguments [8]. This implies that the derivatives of $f$ of the same order are all functionally equal. In particular, as the derivatives of the MacLaurin expansion (4) are calculated in zero, they are actually equal, and thus can be factorized. This leads to
\[
f(x_1, \dots, x_n) = f\!\left(T^{(n)}\right) = f_0 \oplus f_1\, \xi_1^{(n)} \oplus f_2\, \xi_2^{(n)} \oplus \dots \oplus f_n\, \xi_n^{(n)}; \qquad (8)
\]
where the $f_i$ represent the derivative of order $i$ of $f$ in $0$ (Eq. (5)), and the $\xi_i^{(n)}$ are the homogeneous polynomials of degree $i$ in the variables $x_1, \dots, x_n$ (using the AND for the multiplication and the XOR for the sum):
\[
\begin{aligned}
\xi_1^{(n)} &= x_1 \oplus x_2 \oplus \dots \oplus x_n,\\
\xi_2^{(n)} &= x_1 x_2 \oplus x_1 x_3 \oplus \dots \oplus x_{n-1} x_n,\\
&\dots\\
\xi_n^{(n)} &= x_1 x_2 \cdots x_n.
\end{aligned}
\]
The functions $\xi_i$ satisfy some recurrence relations. The first one is based on the idempotent property of the AND operation ($a \wedge a = a$) and on the nullpotent property of the XOR operation ($a \oplus a = 0$): the $\xi_i$ whose index is a power of two are irreducible, while the others factorize according to the binary decomposition of the index,
\[
\begin{aligned}
\xi_1 &: \text{irreducible};\\
\xi_2 &: \text{irreducible};\\
\xi_3 &= \xi_2 \wedge \xi_1;\\
\xi_4 &: \text{irreducible};\\
\xi_5 &= \xi_4 \wedge \xi_1;\\
\xi_6 &= \xi_4 \wedge \xi_2;\\
\xi_7 &= \xi_4 \wedge \xi_3 = \xi_4 \wedge \xi_2 \wedge \xi_1;\\
\xi_8 &: \text{irreducible};\\
&\dots
\end{aligned}
\]
The second property is based on the separation of the variables into two groups (bisection). Let us call $X$ the group of the variables $(x_1, \dots, x_n)$; with $L$ we indicate the left part of $X$ up to some index $j$, and with $R$ the right part of $X$:
\[
L = (x_1, \dots, x_j), \qquad R = (x_{j+1}, \dots, x_n).
\]
We have
\[
\xi_i(X) = \xi_i(L) \oplus \xi_{i-1}(L)\,\xi_1(R) \oplus \xi_{i-2}(L)\,\xi_2(R) \oplus \dots \oplus \xi_1(L)\,\xi_{i-1}(R) \oplus \xi_i(R).
\]
As an example, let us explicitly calculate the $\xi_i$ for eight variables. We bisect homogeneously the set $X = (x_1, \dots, x_8)$, first into $L$, $R$, and then into $LL$, $LR$, $RL$, $RR$. We have
\[
\begin{aligned}
\xi_1(LL) &= x_1 \oplus x_2, &\qquad \xi_2(LL) &= x_1 \wedge x_2,\\
\xi_1(LR) &= x_3 \oplus x_4, & \xi_2(LR) &= x_3 \wedge x_4,\\
\xi_1(RL) &= x_5 \oplus x_6, & \xi_2(RL) &= x_5 \wedge x_6,\\
\xi_1(RR) &= x_7 \oplus x_8; & \xi_2(RR) &= x_7 \wedge x_8;
\end{aligned}
\]
\[
\begin{aligned}
\xi_1(L) &= \xi_1(LL) \oplus \xi_1(LR), & \xi_1(R) &= \xi_1(RL) \oplus \xi_1(RR);\\
\xi_2(L) &= \xi_2(LL) \oplus \xi_1(LL)\,\xi_1(LR) \oplus \xi_2(LR), & \xi_2(R) &= \xi_2(RL) \oplus \xi_1(RL)\,\xi_1(RR) \oplus \xi_2(RR);\\
\xi_3(L) &= \xi_2(L) \wedge \xi_1(L), & \xi_3(R) &= \xi_2(R) \wedge \xi_1(R);\\
\xi_4(L) &= \xi_2(LL) \wedge \xi_2(LR), & \xi_4(R) &= \xi_2(RL) \wedge \xi_2(RR);
\end{aligned}
\]
so that, finally,
\[
\begin{aligned}
\xi_1^{(8)} &= \xi_1(L) \oplus \xi_1(R);\\
\xi_2^{(8)} &= \xi_2(L) \oplus \xi_1(L)\,\xi_1(R) \oplus \xi_2(R);\\
\xi_3^{(8)} &= \xi_2^{(8)} \wedge \xi_1^{(8)};\\
\xi_4^{(8)} &= \xi_4(L) \oplus \xi_3(L)\,\xi_1(R) \oplus \xi_2(L)\,\xi_2(R) \oplus \xi_1(L)\,\xi_3(R) \oplus \xi_4(R);\\
\xi_5^{(8)} &= \xi_4^{(8)} \wedge \xi_1^{(8)};\\
\xi_6^{(8)} &= \xi_4^{(8)} \wedge \xi_2^{(8)};\\
\xi_7^{(8)} &= \xi_4^{(8)} \wedge \xi_3^{(8)};\\
\xi_8^{(8)} &= \xi_4(L) \wedge \xi_4(R);
\end{aligned}
\]
where $\xi_k^{(8)} = \xi_k(X)$. Taking into account the common patterns that appear in the expressions of $\xi_2^{(8)}$ and $\xi_4^{(8)}$, we only need 34 operations to build up all the $\xi_i^{(8)}$.
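In C, the bisection scheme above translates almost literally into bitwise operations on machine words (multi-site coding). The sketch below is our own illustration: the type `word` and the name `xi8` are assumptions, and `x[0..7]` hold the bit-planes of the eight variables for 64 sites at once.

```c
#include <stdint.h>

typedef uint64_t word;    /* one bit per lattice site */

/* Homogeneous polynomials xi_1 ... xi_8 of eight Boolean variables, built by
 * bisection as in the text.  xi[0] is unused; sharing common sub-expressions
 * in xi[2] and xi[4] brings the count down to the 34 operations quoted above. */
void xi8(const word x[8], word xi[9])
{
    /* first level: pairs (x1,x2), (x3,x4), (x5,x6), (x7,x8) */
    word s1LL = x[0] ^ x[1], p2LL = x[0] & x[1];
    word s1LR = x[2] ^ x[3], p2LR = x[2] & x[3];
    word s1RL = x[4] ^ x[5], p2RL = x[4] & x[5];
    word s1RR = x[6] ^ x[7], p2RR = x[6] & x[7];

    /* second level: L = (x1..x4), R = (x5..x8) */
    word xi1L = s1LL ^ s1LR;
    word xi1R = s1RL ^ s1RR;
    word xi2L = p2LL ^ (s1LL & s1LR) ^ p2LR;
    word xi2R = p2RL ^ (s1RL & s1RR) ^ p2RR;
    word xi3L = xi2L & xi1L;
    word xi3R = xi2R & xi1R;
    word xi4L = p2LL & p2LR;
    word xi4R = p2RL & p2RR;

    /* full set X = (x1..x8) */
    xi[1] = xi1L ^ xi1R;
    xi[2] = xi2L ^ (xi1L & xi1R) ^ xi2R;
    xi[3] = xi[2] & xi[1];
    xi[4] = xi4L ^ (xi3L & xi1R) ^ (xi2L & xi2R) ^ (xi1L & xi3R) ^ xi4R;
    xi[5] = xi[4] & xi[1];
    xi[6] = xi[4] & xi[2];
    xi[7] = xi[4] & xi[3];
    xi[8] = xi4L & xi4R;
}
```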
The extension of the calculations to 9 variables only adds a small number (16) of operations:
\[
\begin{aligned}
\xi_1^{(9)} &= \xi_1^{(8)} \oplus x_9,\\
\xi_i^{(9)} &= \xi_i^{(8)} \oplus \xi_{i-1}^{(8)} \wedge x_9 \qquad (2 \le i \le 8),\\
\xi_9^{(9)} &= \xi_8^{(8)} \wedge x_9;
\end{aligned}
\]
even though a careful bisection of the set of variables implies fewer (39) operations.
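The nine-variable extension can be sketched in the same spirit (again, names are ours); it uses exactly the 16 bitwise operations counted above.

```c
#include <stdint.h>
typedef uint64_t word;

/* xi^{(9)} from xi^{(8)} and the ninth variable x9:
 * 1 XOR, then 7 (AND + XOR) pairs, then 1 AND = 16 operations in total. */
void xi9_from_xi8(const word xi8v[9], word x9, word xi9v[10])
{
    xi9v[1] = xi8v[1] ^ x9;
    for (int i = 2; i <= 8; i++)
        xi9v[i] = xi8v[i] ^ (xi8v[i - 1] & x9);
    xi9v[9] = xi8v[8] & x9;
}
```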
A kind of normalization condition on the $\xi_i$ is given by
\[
\bigvee_{i=1}^{n} x_i = \bigoplus_{i=1}^{n} \xi_i^{(n)}, \qquad (9)
\]
and
\[
\bigvee_{i=1}^{n-1} \left( x_i \oplus x_{i+1} \right) = \bigoplus_{i=1}^{n-1} \xi_i^{(n)}. \qquad (10)
\]
For a totalistic function, the array of derivatives in zero can be obtained directly from the totalistic table $f(T)$,
\[
f_i = \bigoplus_{T=0}^{i} M_{i,T}\, f(T),
\]
where $M_{i,T} = \binom{i}{T} \bmod 2$ is the matrix of Eq. (6) applied to the integers $i$ and $T$.
The characteristic functions can in turn be expressed in terms of the $\xi_i$. For eight variables we have
\[
\begin{aligned}
\chi_1^{(8)} &= \xi_1 \oplus \xi_3 \oplus \xi_5 \oplus \xi_7,\\
\chi_2^{(8)} &= \xi_2 \oplus \xi_3 \oplus \xi_6 \oplus \xi_7,\\
\chi_3^{(8)} &= \xi_3 \oplus \xi_7,\\
\chi_4^{(8)} &= \xi_4 \oplus \xi_5 \oplus \xi_6 \oplus \xi_7,\\
\chi_5^{(8)} &= \xi_5 \oplus \xi_7,\\
\chi_6^{(8)} &= \xi_6 \oplus \xi_7,\\
\chi_7^{(8)} &= \xi_7,\\
\chi_8^{(8)} &= \xi_8;
\end{aligned} \qquad (11)
\]
and for nine variables
\[
\begin{aligned}
\chi_1^{(9)} &= \chi_1^{(8)} \oplus \xi_9^{(9)},\\
\chi_2^{(9)} &= \chi_2^{(8)},\\
&\dots\\
\chi_8^{(9)} &= \chi_8^{(8)} \oplus \xi_9^{(9)},\\
\chi_9^{(9)} &= \xi_9^{(9)}.
\end{aligned}
\]
Obviously, the $\chi_k^{(8)}$ appearing on the right hand side are only formally similar to those of Eq. (11), as they are calculated with nine variables.
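A direct C transcription of Eq. (11) (our sketch; names are illustrative) builds the characteristic functions of eight variables from the corresponding $\xi_i^{(8)}$ bit-planes.

```c
#include <stdint.h>
typedef uint64_t word;

/* Characteristic functions chi_k^{(8)} of the sum of eight variables,
 * from the homogeneous polynomials xi_i^{(8)}, as in Eq. (11).
 * chi[0], if needed, is the negation of xi[1]^...^xi[8] (cf. Eq. (9)). */
void chi8(const word xi[9], word chi[9])
{
    chi[7] = xi[7];
    chi[8] = xi[8];
    chi[3] = xi[3] ^ xi[7];
    chi[5] = xi[5] ^ xi[7];
    chi[6] = xi[6] ^ xi[7];
    chi[1] = xi[1] ^ xi[3] ^ xi[5] ^ xi[7];
    chi[2] = xi[2] ^ xi[3] ^ xi[6] ^ xi[7];
    chi[4] = xi[4] ^ xi[5] ^ xi[6] ^ xi[7];
}
```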
We note that the normalization condition on the $\chi_k^{(n)}$ is
\[
\sum_{k=0}^{n} \chi_k^{(n)} = 1. \qquad (12)
\]
Finally, the derivatives reveal the symmetries of a function: $f$ is symmetric with respect to the exchange of the arguments $x_j$ and $x_k$ if and only if
\[
\bigoplus_{i \in B_n} \left( f_i \oplus f_{\tilde\imath} \right) x^i = 0, \qquad \forall x \in B_n, \qquad (13)
\]
where $\tilde\imath$ is obtained from $i$ by exchanging the bits $i^{(j)}$ and $i^{(k)}$. Symmetries among more than two variables can be obtained via the transitive property.
5 Some applications
In this section we shall apply the algorithm to some problems chosen among those that have appeared in the literature. Some of them were investigated with efficiency in mind, so they can be supposed to have been carefully studied with the aim of reducing the number of required operations.
The first example is the totalistic two-dimensional CA M46789 [9]. The future value $c'$ of a cell $c$ is determined by the value most prevalent in its Moore neighborhood (nearest and next-to-nearest neighbors, nine variables), with a twist in case of a marginal majority or minority. In terms of Eq. (7) the rule is defined as
\[
r_k = \begin{cases} 1 & \text{if } k = 4, 6, 7, 8, 9;\\ 0 & \text{otherwise.} \end{cases}
\]
The twist in the majority provides a kind of frustration that simulates a mobile
interface according to the Allen-Cahn equation [10].
From the general expression (8), we get the simplified expression
\[
c' = \xi_4 \wedge \left[ 1 \oplus \xi_1 \left( 1 \oplus \xi_2 \right) \right] \oplus \xi_8,
\]
for a total of 39 operations.
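With the $\xi_i^{(9)}$ of the Moore neighborhood available (for instance from the `xi8`/`xi9_from_xi8` sketches above), the whole M46789 update reduces to a handful of word-wide operations; the following fragment is our illustration, not the authors' code.

```c
#include <stdint.h>
typedef uint64_t word;

/* c' = xi4 AND [1 XOR xi1 (1 XOR xi2)] XOR xi8, on the nine-variable xi's;
 * note that 1 XOR x is simply the bitwise complement ~x on a word. */
word m46789(const word xi[10])
{
    return (xi[4] & ~(xi[1] & ~xi[2])) ^ xi[8];
}
```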
The second model is the game of Life [11]. This well known game has recently been shown to be a good toy model for the problem of self-organized criticality [12, 13]. The evolution rule for Life depends separately on the sum of the eight nearest and next-to-nearest neighbors (outer Moore neighborhood), and on the central cell itself. The evolution rule can be expressed by saying that a dead (zero) cell can become alive (one) only if it is surrounded by three alive cells, while survival only occurs if the alive cell is surrounded by two or three alive cells. Developing first the rule for the central cell $c$, we get
\[
c' = \chi_3^{(8)} \oplus \chi_2^{(8)} \wedge c,
\]
where $c'$ represents the updated value of the central cell, and the $\chi_k^{(8)}$ are calculated on the outer Moore neighborhood.
The substitution of the expressions for the $\chi_k^{(8)}$ (11) and simplification gives
\[
c' = \xi_2 \wedge \left[ c \oplus (1 \oplus c)\, \xi_1 \right] \wedge (1 \oplus \xi_4).
\]
As $a \oplus \bar a \wedge b = a \vee b$, we have
\[
c' = \xi_2 \wedge (1 \oplus \xi_4) \wedge (c \vee \xi_1),
\]
which implies only 33 operations, to be compared with the 170 reported in Ref. [3].
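For completeness, here is a hedged C sketch of one word-parallel Life step built on the `xi8` routine sketched in Section 4 (the function names, the 64-bit word and the array layout are our assumptions; boundary handling and the gathering of the eight neighbor bit-planes are omitted).

```c
#include <stdint.h>
typedef uint64_t word;

void xi8(const word x[8], word xi[9]);   /* from the sketch in Section 4 */

/* One Life update for 64 cells at once: neigh[0..7] are the bit-planes of the
 * eight outer neighbors, c the bit-plane of the central cells.
 * c' = xi2 AND (1 XOR xi4) AND (c OR xi1). */
word life_update(const word neigh[8], word c)
{
    word xi[9];
    xi8(neigh, xi);
    return xi[2] & ~xi[4] & (c | xi[1]);
}
```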
We can also apply the method to non-totalistic rules. Let us examine the Kohring rule [14] for an FHP [15] gas with obstacles. The collision rules are the same as in the original FHP model, together with a set of four-body collisions. Let us label the six directions counterclockwise with indices ranging from one to six; the operations on the indices are to be taken modulo six. All the Boolean quantities are actually elements of some array of integer words: we do not consider here the spatial indices. If the Boolean variable $x_i$ is one, there is an incoming particle on the site from direction $i$. Collisions occur only for particular configurations of the incoming particles.
Three-particle collisions occur when $(x_1, x_3, x_5)$ or $(x_2, x_4, x_6)$ are all zero or all one, that is
\[
c_{(0,3,6)} = \left[\, \overline{\xi_1^{(3)}(x_1,x_3,x_5) \oplus \xi_2^{(3)}(x_1,x_3,x_5)}\, \right] \wedge \left[\, \overline{\xi_1^{(3)}(x_2,x_4,x_6) \oplus \xi_2^{(3)}(x_2,x_4,x_6)}\, \right],
\]
where $c_{(0,3,6)}$ is one for the configurations with zero, three (alternating) or six incoming particles.
The last example is a probabilistic CA: its totalistic rule of four variables, in the form of Eq. (7), reads
\[
\sum_{k=0}^{4} r_k\, \chi_k^{(4)},
\]
where the $r_k$ are random bits equal to one with predefined probabilities $p_k = p(r_k)$. Building the $\chi_k^{(4)}$ from the $\xi_i$ as in Eq. (11), we get the quoted result of 22 Boolean operations and four arithmetic summations. Writing down the RSE (2), we have
\[
c' = r_0 \oplus \bigoplus_{i=1}^{4} s_i\, \xi_i,
\]
where
\[
s_1 = r_0 \oplus r_1, \qquad
s_2 = r_0 \oplus r_2, \qquad
s_3 = r_0 \oplus r_1 \oplus r_2 \oplus r_3, \qquad
s_4 = r_0 \oplus r_4.
\]
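The probabilistic rule can be sketched in C as follows (our illustration; `random_word` and the probabilities array are assumptions, and a per-bit pseudo-random draw is used only for clarity, not for speed).

```c
#include <stdint.h>
#include <stdlib.h>
typedef uint64_t word;

/* A word whose bits are independently one with probability p. */
word random_word(double p)
{
    word r = 0;
    for (int b = 0; b < 64; b++)
        if ((double)rand() / ((double)RAND_MAX + 1.0) < p)
            r |= (word)1 << b;
    return r;
}

/* One update of the probabilistic totalistic rule in RSE form:
 * c' = r0 XOR s1 xi1 XOR s2 xi2 XOR s3 xi3 XOR s4 xi4, with the s_i as above. */
word probabilistic_rule(const word xi[5], const double p[5])
{
    word r[5], s[5];
    for (int k = 0; k <= 4; k++)
        r[k] = random_word(p[k]);
    s[1] = r[0] ^ r[1];
    s[2] = r[0] ^ r[2];
    s[3] = r[0] ^ r[1] ^ r[2] ^ r[3];
    s[4] = r[0] ^ r[4];
    return r[0] ^ (s[1] & xi[1]) ^ (s[2] & xi[2]) ^ (s[3] & xi[3]) ^ (s[4] & xi[4]);
}
```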
6 Conclusions
In this work we extend and complete the notion of the derivatives of a Boolean
function already introduced in Ref. [1]. We are thus able to derive the Taylor
and MacLaurin series of a Boolean function. The latter represents the ring sum
expansion for a Boolean function, which is more compact than the canonical
conjunctive and disjunctive forms. Moreover, for totalistic functions (i.e., for
functions completely symmetric in their arguments) very compact expressions
are found. These ideas have wide applications in the development of faster
algorithms, in particular for cellular automata simulations, and in the design
of digital circuitry. As examples of physical applications, we analyze already
published optimized algorithms, involving both deterministic and stochastic automata. We found that our procedure generally leads to more compact expressions. We think that the usefulness of the Boolean derivative is not limited to the minimization of Boolean functions. Work is in progress on the connection between Boolean
derivatives and the chaotic properties of cellular automata, possibly leading to
the definition of Lyapunov exponents for discrete systems.
Acknowledgements
We are grateful to G. Vichniac, R. Livi and S. Ruffo for fruitful discussions. We
also acknowledge the Institute for Scientific Interchange Foundation (Torino,
Italy), where this work was started in the framework of the workshop Complexity and Evolution.
References
[1] G. Vichniac, Physica D 45 (1990) 63.
[2] H. Herrmann, J. Stat. Phys. 52 (1986) 145.
[3] F. Bagnoli, R. Rechtman and S. Ruffo, J. Comp. Phys. 101 (1992) 176.
[4] D. Stauffer, J. Phys. A: Math. Gen. 24 (1991) 1.
[5] I. Wegener, The Complexity of Boolean Functions, (Wiley, New York,
1987).
[6] H. Herrmann, J. Phys. A: Math. Gen. 24 (1991) L691.
[7] E. Jen, Physica D 45 (1990) 3.
[8] This theorem is due to Shannon, see e.g. R.E. Miller, Switching Theory,
(Wiley, New York, 1966), Vol. I, p. 103.
[9] G. Vichniac, Physica D 10 (1984) 96.
[10] G. Vichniac, in Chaos and Complexity, eds. R. Livi, S. Ruffo, S. Ciliberto,
M. Buiatti (World Scientific, Singapore 1988).
[11] M. Gardner, Sci. Am. 223 (1970) 120; Sci. Am. 223, (1970) 116; Sci. Am.
224 (1971) 104; Sci. Am. 224 (1971) 112; Sci. Am. 224 (1971) 114; Sci.
Am. 226 (1972) 104; and M. Gardner, Wheels, Life and Other Mathematical Amusements (W. H. Freeman and Company, New York, 1983).
[12] F. Bagnoli, R. Rechtman and S. Ruffo, Physica A 171 (1991) 249.
[13] P. Bak, K. Chen and M. Creutz, Nature 342 (1989) 780.
[14] G.A. Kohring, J. Stat. Phys. 63 (1991) 411.
[15] U. Frisch, B. Hasslacher and Y. Pomeau, Phys. Rev. Lett. 56 (1986) 1505.
[16] A. Clouqueur and D. d'Humières, Complex Systems 1 (1987) 585.