arXiv:1210.3917v1 [math.PR] 15 Oct 2012
STIT Tessellations have trivial tail σ-algebra
S. Martínez
Departamento Ingeniería Matemática and Centro Modelamiento Matemático,
Universidad de Chile,
UMI 2807 CNRS, Casilla 170-3, Correo 3, Santiago, Chile.
Email: smartine@dim.uchile.cl
Werner Nagel
Friedrich-Schiller-Universität Jena,
Institut für Stochastik,
Ernst-Abbe-Platz 2, D-07743 Jena, Germany.
Email: werner.nagel@uni-jena.de
Abstract
We consider homogeneous STIT tessellations Y in the ℓ-dimensional
Euclidean space Rℓ and show the triviality of the tail σ-algebra. This is
a sharpening of the mixing result by Lachièze-Rey [8].
Keywords: Stochastic geometry; Random process of tessellations; Ergodic theory; tail σ-algebra
AMS subject classification: 60D05, 60J25, 60J75, 37A25
1 Introduction
Let Y = (Yt : t > 0) be the STIT tessellation process, which is a Markov process
taking values in the space of tessellations of the ℓ-dimensional Euclidean space
Rℓ . The process Y is spatially stationary (that is its law is invariant under
translations of the space) and on every polytope with nonempty interior W
(called a window) the induced tessellation process, denoted by Y ∧W = (Yt ∧W :
t > 0), is a pure jump process. The process Y was first constructed in [12]
and in Section 2 we give a brief description of it and recall some of its main
properties.
In stochastic geometry, ergodic and mixing properties as well as weak dependencies in space are studied. For example, Heinrich et al. considered mixing properties for Voronoi and some other tessellations, Poisson cluster processes and germ-grain models, and derived Laws of Large Numbers and Central Limit Theorems, see [3, 6, 5].
For STIT tessellations, Lachièze-Rey [8] showed that they are mixing. Schreiber and Thäle [15, 16, 17] proved some limit theorems. They provided Central Limit Theorems for the number of vertices and the total edge length in the two-dimensional case (ℓ = 2). Furthermore, they proved that in dimensions ℓ ≥ 3 non-normal limits appear, e.g. for the total surface area of the cells of STIT tessellations.
One issue is the problem of triviality of the tail σ-algebras for a certain class of distributions. For random measures and point processes a key reference is the book by Daley and Vere-Jones ([2], pp. 207-209).
In Section 3 of the present paper we introduce a definition of the tail σ-algebra B−∞(T) on the space T of tessellations in the ℓ-dimensional Euclidean space Rℓ. This definition relies essentially on the definition of the Borel σ-algebra with respect to the Fell topology on the set of closed subsets of Rℓ (cf. [14]), as well as on the mentioned definition of the tail σ-algebra for random measures and point processes, as given in [2].
Our main result is formulated in Section 4, Theorem 2: For the distribution of a STIT tessellation, the tail σ-algebra B−∞(T) is trivial, i.e. all its elements (the terminal events) have either probability 1 or 0. A detailed proof is given in Section 5.
Finally, we compare STIT tessellations with Poisson hyperplane tessellations (PHT), and we show that the tail σ-algebra B−∞(T) is not trivial with respect to the distribution of a PHT.
2 The STIT model

2.1 A construction of STIT tessellations

Let Rℓ be the ℓ-dimensional Euclidean space, and denote by T the space of tessellations of this space as defined in [14] (Ch. 10, Random Mosaics). A tessellation
can be considered as a set T of polytopes (the cells) with disjoint interiors and
covering the Euclidean space, as well as a closed subset ∂T which is the union of
the cell boundaries. There is an obvious one-to-one relation between both ways
of description of a tessellation, and their measurable structures can be related
appropriately, see [14, 9].
A compact convex polytope W with non-empty interior in Rℓ is called a
window. We can consider tessellations of W and denote the set of all those
tessellations by T ∧ W . If T ∈ T we denote by T ∧ W the induced tessellation
on W . Its boundary is defined by ∂(T ∧ W ) = (∂T ∩ W ) ∪ ∂W .
In the present paper we will refer to the construction given in [12] in all detail (for an alternative but equivalent construction see [9]). On every window W there exists a STIT tessellation process Y ∧ W = (Yt ∧ W : t > 0); it turns out to be a pure jump Markov process and hence has the strong Markov property (see [1], Proposition 15.25). Each marginal Yt ∧ W takes values in
T ∧ W . Furthermore, for any t > 0 the law of Yt is consistent with respect
to the windows, that is if W ′ and W are windows such that W ′ ⊆ W , then
(Yt ∧ W ) ∧ W ′ ∼ Yt ∧ W ′ , where ∼ denotes the identity of distributions (for a
proof see [12]). This yields the existence of a STIT tessellation Yt of Rℓ such that
for all windows W the law of Yt ∧ W coincides with the law of the construction
in the window. A global construction for a STIT process was provided in [11].
A STIT tessellation process Y = (Yt : t > 0) is a Markov process and each
marginal Yt takes values in T.
Here, let us recall roughly the construction of Y ∧ W done in [12].
Let Λ be a (non-zero) measure on the space of hyperplanes H in Rℓ. It is assumed that Λ is translation invariant and possesses the following local finiteness property:

Λ([B]) < ∞ for all bounded sets B ⊂ Rℓ, where [B] = {H ∈ H : H ∩ B ≠ ∅}.   (1)

(The notation ⊂ means strict inclusion and ⊆ inclusion or equality.) It is further
assumed that the support set of Λ is such that there is no line in Rℓ with the
property that all hyperplanes of the support are parallel to it (in order to obtain
a.s. bounded cells in the constructed tessellation, cf. [14], Theorem 10.3.2, which
can also be applied to STIT tessellations).
The assumptions made on Λ imply 0 < Λ([W]) < ∞ for every window W. Denote by Λ_{[W]} the restriction of Λ to [W] and by Λ_W = Λ([W])^{−1} Λ_{[W]} the normalized probability measure. Let us take two independent families of independent random variables D = (d_{n,m} : n, m ∈ N) and τ = (τ_{n,m} : n, m ∈ N), where each d_{n,m} has distribution Λ_W and each τ_{n,m} is exponentially distributed with parameter Λ([W]).
• Even if for t = 0 the STIT tessellation Y0 is not defined in Rℓ, we define Y0 ∧ W = {W}, the trivial tessellation of the window W. Its unique cell is denoted by C^1 = W.
• Any extant cell has a random lifetime, and at the end of its lifetime it is divided by a random hyperplane. The lifetime of W = C^1 is τ_{1,1}, and at that time it is divided by d_{1,1} into two cells denoted by C^2 and C^3.
• Now, for any cell C^i which is generated in the course of the construction, the random sequences (d_{i,m} : m ∈ N) and (τ_{i,m} : m ∈ N) are used, and the following rejection method is applied:
• When the time τ_{i,1} has elapsed, the random hyperplane d_{i,1} is thrown onto the window W. If it does not intersect C^i then this hyperplane is rejected, and we continue until a number z_i, which is the first index j for which a hyperplane d_{i,j} intersects and thus divides C^i into two cells C^{l_1(i)}, C^{l_2(i)}, which are called the successors of C^i. Note that this random number z_i is finite a.s. Hence the lifetime of C^i is the sum τ∗(C^i) := ∑_{m=1}^{z_i} τ_{i,m}. It is easy to check, see [12], that τ∗(C^i) is exponentially distributed with parameter Λ([C^i]). Note that z_1 = 1 and τ∗(C^1) = τ_{1,1}.
• This procedure is performed for any extant cell independently. It starts at the moment when the cell is born by division of a larger cell (the predecessor).
In order to guarantee independence of the division processes for the individual
cells, the successors of C i get indexes l1 (i), l2 (i) in N that are different, and
that can be chosen as the smallest numbers which were not yet used before for
other cells.
• For each cell C^i we denote by η = η(i) the number of its ancestors and by (k_1(i), k_2(i), . . . , k_η(i)) the sequence of indexes of the ancestors of C^i. So W = C^{k_1(i)} ⊃ C^{k_2(i)} ⊃ . . . ⊃ C^{k_η(i)} ⊃ C^i; hence k_1(i) = 1. The cell C^i is born at time κ(C^i) = ∑_{l=1}^{η} τ∗(C^{k_l(i)}) and it dies at time κ̄(C^i) = κ(C^i) + τ∗(C^i); for C^1 this is κ(C^1) = 0 and κ̄(C^1) = τ∗(C^1). It is useful to put k_{η+1}(i) = i.
With this notation, at each time t > 0 the tessellation Yt ∧ W is constituted by the cells C^i for which κ(C^i) ≤ t and κ̄(C^i) > t. It is easy to see that at any time a.s. at most one cell dies, and so a.s. at most two cells are born.
Now we describe the generated cells as intersections of half-spaces. First note that translation invariance implies Λ([{0}]) = 0. Hence, a.s. none of the random hyperplanes (d_{n,m} : n, m ∈ N) contains the point 0. Now, for a hyperplane H ∈ H such that 0 ∉ H we denote by H^+ and H^− the closed half-spaces generated by H, with the convention 0 ∈ int(H^+). Hence, C^{k_{l+1}(i)} = C^{k_l(i)} ∩ d^{±}_{k_l(i), z_{k_l(i)}}, where the sign in the upper index determines on which side of the dividing hyperplane the cell C^{k_{l+1}(i)} is located. Then any cell can be represented as an intersection of W with half-spaces,

C^i = W ∩ ⋂_{l=1}^{η(i)} ⋂_{m=1}^{z_{k_l(i)}} d^{s(k_l(i),m)}_{k_l(i),m} ∩ ⋂_{m=1}^{z_i − 1} d^{s(i,m)}_{i,m}.   (2)

In the above relation we define the sign s(j, m) ∈ {+, −} by the relation C^j ⊂ d^{s(j,m)}_{j,m}, for j, m ∈ N. Notice that the origin 0 ∈ C^i if and only if all signs in formula (2) satisfy s(k_l(i), m) = + and s(i, m) = +.
Obviously the set of cells can be organized as a dyadic tree by the relation "C′ is a successor of C". This method of construction is detailed in [12]. For the following it is important to observe that all the rejected hyperplanes d_{k_l(i),m} are also included in this intersection, because the intersection with the appropriate half-spaces does not alter the cell. Although the third intersection ⋂_{m=1}^{z_i − 1} d^{s(i,m)}_{i,m} in (2) does not modify the resulting set, we also include it because we will use this representation later.
In [12] it was shown that there is no explosion, so at each time t > 0 the
number of cells of Yt ∧ W , denoted by ξt , is finite a.s. Renumbering the cells,
we write {Cti : i = 1, ..., ξt } for the set of cells of Yt ∧ W .
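The recursive cell-division dynamic above can be sketched in code. The following is a minimal illustration for the one-dimensional special case, where Λ is taken to be the Lebesgue measure, a "hyperplane" is a point, and Λ([C]) is the length of the cell C; instead of implementing the rejection step we sample directly from the equivalent description given above (lifetime Exp(Λ([C])), dividing hyperplane with law Λ restricted to [C]).

```python
import random

def stit_1d(a, b, t, rng):
    """Simulate the STIT cell-division construction on the window [a, b] up to
    time t, in the 1D case with Lambda = Lebesgue measure (Lambda([C]) = |C|).
    Each cell lives an Exp(|C|)-distributed time and is then split at a
    uniform point of the cell; returns the sorted division points at time t."""
    cells = [(a, b, 0.0)]          # (left, right, birth time) of extant cells
    points = []
    while cells:
        left, right, birth = cells.pop()
        death = birth + rng.expovariate(right - left)
        if death > t:
            continue               # this cell is still alive at time t
        x = rng.uniform(left, right)
        points.append(x)
        cells.append((left, x, death))
        cells.append((x, right, death))
    return sorted(points)

# Since the sum of the cell lengths is constant, divisions occur at a constant
# total rate (b - a), so the number of division points at time t has mean t(b - a).
rng = random.Random(0)
mean = sum(len(stit_1d(0.0, 1.0, 3.0, rng)) for _ in range(2000)) / 2000
print(mean)  # close to 3.0
```

The absence of explosion shows up here as the a.s. termination of the loop: only finitely many cells die before any fixed time t.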
2.2 Independent increments relation

The name STIT is an abbreviation for "stochastic stability under the operation of iteration of tessellations". Closely related to that stability is a certain independence of increments of the STIT process in time.
In order to explain the operation of iteration, for T ∈ T we number its cells
in the following way. Assign to each cell a reference point in its interior (e.g. the
Steiner point, see [14], p. 613, or another point that is a.s. uniquely defined).
Order the set of the reference points of all cells of T by their distance from the origin. For random homogeneous tessellations this order is a.s. unique. Then
number the cells of T according to this order, starting with number 1 for the
cell which contains the origin. Thus we write C(T )1 , C(T )2 , . . . for the cells of
T.
For T ∈ T and R⃗ = (R_m : m ∈ N) ∈ T^N, we define the tessellation T ⊞ R⃗, referred to as the iteration of T and R⃗, by its set of cells

T ⊞ R⃗ = {C(T)_k ∩ C(R_k)_l : k = 1, . . . ; l = 1, . . . ; int(C(T)_k ∩ C(R_k)_l) ≠ ∅}.
So, we restrict Rk to the cell C(T )k , and this is done for all k = 1, . . .. The
same definition holds when the tessellation and the sequence of tessellations are
restricted to some window.
To state the independence relation of the increments of the Markov process Y of STIT tessellations, we fix a copy of the random process Y and let Y⃗′ = (Y′^m : m ∈ N) be a sequence of independent copies of Y, all of them being also independent of Y. In particular Y′^m ∼ Y. For a fixed time s > 0, we set Y⃗′_s = (Y′^m_s : m ∈ N). Then, from the construction and from the consistency property of Y, it is straightforward to see that the following property holds:

Y_{t+s} ∼ Y_t ⊞ Y⃗′_s for all t, s > 0.   (3)

This relation was first stated in Lemma 2 in [12]. It implies Y_{2t} ∼ Y_t ⊞ Y⃗′_t.
The STIT property means that

Y_t ∼ 2(Y_t ⊞ Y⃗′_t) for all t > 0,   (4)

so Y_t ∼ 2Y_{2t}. Here the multiplication by 2 stands for the transformation x ↦ 2x, x ∈ Rℓ.
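In the same one-dimensional toy setting (Λ = Lebesgue, a hyperplane is a point), relation (3) can be checked numerically: running the construction for time t and then rerunning an independent construction of duration s inside every cell gives the same count statistics as running the construction directly for time t + s. The simulator below is a sketch under these simplifying assumptions, not the general construction of [12].

```python
import random

def stit_1d(a, b, t, rng):
    # 1D STIT construction on [a, b] up to time t (Lambda = Lebesgue):
    # a cell (l, r) lives an Exp(r - l) time, then splits at a uniform point.
    cells, points = [(a, b, 0.0)], []
    while cells:
        left, right, birth = cells.pop()
        death = birth + rng.expovariate(right - left)
        if death <= t:
            x = rng.uniform(left, right)
            points.append(x)
            cells += [(left, x, death), (x, right, death)]
    return sorted(points)

def iterate_once(points, a, b, s, rng):
    # Y_t ⊞ Y'_s restricted to [a, b]: an independent construction of
    # duration s is run inside every cell of the first tessellation.
    edges = [a] + points + [b]
    out = list(points)
    for left, right in zip(edges, edges[1:]):
        out += stit_1d(left, right, s, rng)
    return sorted(out)

rng = random.Random(42)
t, s, n = 2.0, 1.5, 3000
direct = sum(len(stit_1d(0.0, 1.0, t + s, rng)) for _ in range(n)) / n
two_step = sum(
    len(iterate_once(stit_1d(0.0, 1.0, t, rng), 0.0, 1.0, s, rng))
    for _ in range(n)) / n
print(direct, two_step)   # both means are close to t + s = 3.5
```

The Markov property of the construction makes the two-step procedure a faithful realization of Y_t ⊞ Y⃗′_s in this toy case, by the consistency of the windowed constructions.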
3 The space of tessellations and the tail σ-algebra
Let C be the set of all compact subsets of Rℓ . We endow T with the Borel
σ-algebra B(T) of the Fell topology (also known as the topology of closed convergence), namely
B(T) = σ ({{T ∈ T : ∂T ∩ C = ∅} : C ∈ C}) .
(As usual, for a class of sets I we denote by σ(I) the smallest σ-algebra containing I.) Let us fix a probability measure P on (T, B(T)). All sets are determined mod P, that is, up to a P-negligible set. So, for E, D ∈ B(T) we write E = D mod P if P(E∆D) = 0. Also, for a pair B′, B′′ of sub-σ-algebras of B(T), we write B′ ⊆ B′′ mod P if for all E ∈ B′ there exists D ∈ B′′ such that E = D mod P.
For a window W we introduce
B(TW ) = σ ({{T ∈ T : ∂T ∩ C = ∅} : C ⊆ W, C ∈ C}) .
By definition B(TW ) ⊂ B(T) is a sub-σ-algebra. We notice that if W ′ ⊆ W
then B(TW ′ ) ⊆ B(TW ).
Note that the set T ∧ W can be endowed with the σ−field
B(T ∧ W ) = σ ({{T ∈ T ∧ W : ∂T ∩ C = ∅} : C ⊆ W \ ∂W, C ∈ C}). For all
E ∈ B(TW ) we put E ∧ W = {T ∧ W : T ∈ E}, which belongs to B(T ∧ W ).
Denoting the law of the STIT process Y by P, it holds that
∀t > 0, ∀E ∈ B(TW) : P(Yt ∈ E) = P(Yt ∧ W ∈ E ∧ W).
To avoid overburdened notation, and since there will be no confusion, instead of E ∧ W in the last formula we will simply write E, so it reads

∀t > 0, ∀E ∈ B(TW) : P(Yt ∈ E) = P(Yt ∧ W ∈ E).   (5)

Let (Wn : n ∈ N) be an increasing sequence of windows such that

Rℓ = ⋃_{n∈N} Wn and ∀n ∈ N, Wn ⊂ int Wn+1.   (6)
We have
B(TWn) ր B(T) as n ր ∞,
which means σ(⋃_{n∈N} B(TWn)) = B(T). On the other hand, it is easy to check that B(T) = B(T)a, where

B(T)a = {E ∈ B(T) : ∀ε > 0, ∃n ∈ N, ∃En ∈ B(TWn) such that P(E∆En) < ε}.   (7)

In fact, by definition we have B(T)a ⊆ B(T) and ⋃_{n∈N} B(TWn) ⊆ B(T)a. Because B(T)a is a σ-algebra, necessarily B(T)a = B(T).
In order to study the tail σ-algebra, we will also consider sets of tessellations which are determined by their behavior outside a window W , i.e. in its
complement W c . We define the σ-algebra
B(TW c ) = σ ({{T ∈ T : ∂T ∩ C = ∅} : C ⊂ W c , C ∈ C}) .
We have B(TW c ) ⊂ B(T). On the other hand, if W ′ ⊆ W then B(TW c ) ⊆
B(TW ′c ).
Let (Wn : n ∈ N) and (W′n : n ∈ N) be a pair of increasing sequences of windows satisfying the conditions in (6). Then ∀n ∃m such that Wn ⊆ W′m, and ∀m ∃q such that W′m ⊆ Wq. This gives
B(T_{W_n^c}) ⊆ B(T_{W′_m^c}) ⊆ B(T_{W_q^c}).
Hence ⋂_{n=1}^∞ B(T_{W_n^c}) = ⋂_{n=1}^∞ B(T_{W′_n^c}). This equality allows us to define, in analogy with the definition done for point processes (see [2], Definition 12.3.IV), the tail σ-algebra on the space of tessellations.
Definition 1 The tail σ-algebra is defined as B−∞(T) = ⋂_{n=1}^∞ B(T_{W_n^c}), where (Wn : n ∈ N) is an increasing sequence of windows such that for all n ∈ N, Wn ⊂ int Wn+1, and Rℓ = ⋃_{n∈N} Wn.
Note that (Wn = [−n, n]ℓ : n ∈ N) satisfies (6), so it can be used in the
above definition and also in the rest of the paper.
Lemma 1 Assume that for every window W′, all D ∈ B(TW′) and all ε > 0, there exists a window Ŵ, depending on (W′, D, ε), such that

W′ ⊂ int Ŵ and ∀E ∈ B(T_{Ŵ^c}) : |P(D ∩ E) − P(D)P(E)| < ε.   (8)

Then the tail σ-algebra B−∞(T) is P-trivial, that is,

∀E ∈ B−∞(T) : P(E) = 0 or P(E) = 1.   (9)
Proof 1 Let D ∈ B−∞(T). Let (Wn : n ∈ N) be an increasing sequence of windows satisfying (6). Let ε > 0 be fixed. Since D ∈ B(T), from (7) we get

∃k, ∃Dk ∈ B(TWk) such that P(D∆Dk) < ε.   (10)

From hypothesis (8) there exists a window Ŵ, depending on (Wk, Dk, ε), such that Wk ⊂ int Ŵ and ∀E ∈ B(T_{Ŵ^c}) it holds |P(Dk ∩ E) − P(Dk)P(E)| < ε. We know that ∃n ≥ k such that Ŵ ⊆ Wn. So B(T_{W_n^c}) ⊆ B(T_{Ŵ^c}). We then have
∀E ∈ B(T_{W_n^c}) : |P(Dk ∩ E) − P(Dk)P(E)| < ε.
Since D ∈ B−∞(T) ⊂ B(T_{W_n^c}) we get |P(Dk ∩ D) − P(Dk)P(D)| < ε. From (10) we deduce |P(D ∩ D) − P(D)P(D)| < 3ε. Since this holds for all ε > 0, we conclude P(D) = P(D)P(D), so P(D) = 1 or 0.
Let us discuss what happens when P is translation invariant. Let h ∈ Rℓ .
For any set D ⊆ Rℓ put D + h = {x + h : x ∈ D} and for T ∈ T denote by
T + h the tessellation whose boundary is ∂(T + h) = ∂(T ) + h. For E ⊆ T put
E + h = {T + h : T ∈ E}. For all E ∈ B(T) we have E + h ∈ B(T) because
{C ∈ C} = {C + h : C ∈ C}. The probability measure P is translation invariant
if it satisfies P(E) = P(E + h) for all E ∈ B(T) and h ∈ Rℓ .
A set E ∈ B(T) is said to be (P-)invariant, and we write E ∈ I(T), if P(E∆(E + h)) = 0 for all h ∈ Rℓ. Note that if E ∈ I(T), D ∈ B(T) and E = D mod P, then D ∈ I(T). It is easily checked that I(T) ⊆ B(T) is a sub-σ-algebra, the
invariant σ-algebra (see e.g. [2]). We have the inclusion relation

I(T) ⊆ B−∞(T) mod P.   (11)
We note that (11) corresponds to propositions in [2] (pp. 206–210) for random
measures. For completeness we will prove (11). We denote 1 = (1, ..., 1) ∈ Rℓ ,
so a1 = (a, ..., a) for a ∈ R. Fix the sequence (Wn = [−n, n]ℓ : n ∈ N). Note
that if m > 2n then Wn − m1 ⊂ Wnc .
Let E ∈ I(T). For all n ∈ N, there exist k = k(n) > n and Ek ∈ B(TWk) such that P(E∆Ek) < 2^{−n}. Since Ek ∈ σ({{T ∈ T : ∂T ∩ C = ∅} : C ⊆ Wk, C ∈ C}), we have that for N > 2k,
Ek − N1 ∈ σ({{T ∈ T : ∂T ∩ C = ∅} : C ⊆ Wk − N1, C ∈ C})
⊆ σ({{T ∈ T : ∂T ∩ C = ∅} : C ⊂ W_k^c, C ∈ C}) = B(T_{W_k^c}).
Since P(E∆(E − N1)) = 0 and P((E − N1)∆(Ek − N1)) = P(E∆Ek) < 2^{−n}, we get
∀N > 2k : P(E∆(Ek − N1)) ≤ P(E∆(E − N1)) + P((E − N1)∆(Ek − N1)) < 2^{−n}.
We have k > 1. Take N = k² and use k² > 2k to get
P(E∆(Ek − k²1)) ≤ 2^{−n} and Ek − k²1 ∈ B(T_{W_k^c}).
Define Dm = ⋃_{n>m} (E_{k(n)} − k(n)²1) for m ≥ 1. This sequence of sets satisfies P(E∆Dm) ≤ 2^{−m}, Dm ∈ B(T_{W_{k(m)}^c}), and Dm decreases with m. We conclude that D = ⋂_{m≥1} Dm satisfies P(E∆D) = 0 and D ∈ B−∞(T). Hence, relation (11) holds.
So, when the tail σ-algebra B−∞(T) is P-trivial, then P is ergodic with respect to translations, because every invariant set E ∈ I(T) also belongs to B−∞(T) and so P(E) = 0 or P(E) = 1. We also have that if P is translation invariant and B−∞(T) is P-trivial, then the action of translations is mixing. That is,

B−∞(T) is P-trivial ⇒ ∀D, E ∈ B(T) : lim_{|h|→∞} P(D ∩ (E + h)) = P(D)P(E).   (12)
This result is shown for random measures in Proposition 12.3.V. in [2], but for completeness let us prove it. Let D, E ∈ B(T) and fix ε > 0. From (7) follows the existence of k and Ek ∈ B(TWk) such that P(E∆Ek) < ε. For every h ∈ Rℓ we have
|P(D ∩ (E + h)) − P(D ∩ (Ek + h))| < ε.
Our choice Wn = [−n, n]ℓ implies that for all h ∈ Rℓ and N ∈ N with N > k and |h| > (N + k)√ℓ we have Ek + h ∈ B(T_{W_N^c}) and Wk ∩ W_N^c = ∅, and so

P(D ∩ (Ek + h)) = E(E(1_{Ek+h} 1_D | B(T_{W_N^c}))) = E(1_{Ek+h} E(1_D | B(T_{W_N^c}))).   (13)

The Decreasing Martingale Theorem (see e.g. [13]) yields
lim_{N→∞} E(1_D | B(T_{W_N^c})) = E(1_D | B−∞(T)) in L¹(P).
Since B−∞(T) is assumed to be P-trivial, we have E(1_D | B−∞(T)) = P(D). So lim_{N→∞} E(1_D | B(T_{W_N^c})) = P(D) in L¹(P). Let N be sufficiently large so that ‖E(1_D | B(T_{W_N^c})) − P(D)‖₁ < ε. Then for |h| > (N + k)√ℓ we can use (13) to obtain
|P(D ∩ (Ek + h)) − P(D)P(Ek + h)| ≤ ‖E(1_D | B(T_{W_N^c})) − P(D)‖₁ < ε.
Since P(Ek + h) = P(Ek), we deduce P(D ∩ (Ek + h)) → P(D)P(Ek) as |h| → ∞. We conclude P(D ∩ (E + h)) → P(D)P(E) as |h| → ∞, so mixing is shown.
4 Main results
As already defined, Y = (Yt : t > 0) denotes a STIT tessellation process, defined
by the measure Λ on H in Rℓ which satisfies the properties described in Section
2.1.
Theorem 1 Let W′ be a window. Then ∀t > 0, ∀ε > 0, there exists a window Ŵ such that W′ ⊂ int Ŵ, and

∀D ∈ B(TW′), ∀E ∈ B(T_{Ŵ^c}) : |P(Yt ∈ D ∩ E) − P(Yt ∈ D)P(Yt ∈ E)| < ε.
Denote by Pt the marginal law of Yt in T, that is,
∀E ∈ B(T) : Pt(E) = P(Yt ∈ E).
We say that at time t > 0 the tail σ-algebra B−∞ (T) is trivial for the STIT
process Y if Pt is trivial.
Theorem 1 implies that Pt satisfies the sufficient condition (8), which by
Lemma 1 implies the triviality of the tail σ-algebra B−∞ (T). Hence, the following result holds.
Theorem 2 For all t > 0 the tail σ-algebra is trivial for the STIT process Y ,
that is ∀ E ∈ B−∞ (T) we have P(Yt ∈ E) = 0 or P(Yt ∈ E) = 1.
Relation (12) ensures that Theorem 2 is stronger than the mixing property
shown by Lachièze-Rey [8].
5 Proof of Theorem 1
Let W′ be a window. Since the measure Λ is assumed to be translation invariant, without loss of generality we can assume that the origin 0 ∈ int(W′). Let W be a window such that W′ ⊂ int(W). A key idea is the investigation of the probability that the window W′ and the complement W^c are separated by the STIT process Y ∧ W (we say W′ is encapsulated within W), so that the constructions inside W′ and outside W, respectively, are approximately independent.
For simplicity, denote by Ct = C^1_t the (a.s. uniquely determined) cell of the tessellation Yt ∧ W that contains the origin in its interior. Obviously, C0 = W. Note that Ct decreases as time t increases. On the other hand, since W′ ⊂ W, when we consider the STIT process on W′ we can take (Yt ∧ W) ∧ W′.
Definition 2 Let W′, W be two windows with 0 ∈ W′ ⊂ int(W) and let t > 0. We say that W′ is encapsulated inside W at time t if the cell Ct that contains 0 in Yt ∧ W is such that

W′ ⊆ Ct ⊂ int(W).   (14)

We write W′ |t W if W′ is encapsulated inside W at time t.
We denote the encapsulation time by
S(W′, W) = inf{t > 0 : W′ |t W},
where as usual we put S(W′, W) = ∞ when {t > 0 : W′ |t W} = ∅. Encapsulation of W′ inside W means that S(W′, W) < ∞, or equivalently W′ |t W for some t > 0. That is, the boundaries ∂W′ and ∂W are completely separated by facets of the 0-cell before the smaller window W′ is hit for the first time by a facet of the STIT tessellation.
We have {S(W′, W) ≤ t} ∈ σ(Ys : s ≤ t), so S(W′, W) is a stopping time. Hence it is also a stopping time for the process Y ∧ W. On the other hand, notice that the distribution of S(W′, W) depends neither on the method of construction of the STIT process Y, nor on the window W̃ in which the construction is performed, as long as W ⊆ W̃. In several proofs of the results we will assume that the starting process is Y, but in some others we will start from the STIT process Y ∧ W, as in Lemma 2.
Note that even if in the STIT construction we have ”independence after
separation”, it has to be considered that the tessellation outside W also depends
on the process until the separation time.
For two Borel sets A, B ⊂ Rℓ we denote by
[A|B] = {H ∈ H : (A ⊂ int(H^+) ∧ B ⊂ int(H^−)) ∨ (A ⊂ int(H^−) ∧ B ⊂ int(H^+))}
the set of all hyperplanes that separate A and B. This set is a Borel set in H.
For a window W (which is defined to be a convex polytope) we denote by {f_a^W : a = 1, . . . , q} the (ℓ − 1)-dimensional facets of W. Let W′ be another window such that W′ ⊂ int(W). We denote by
G_a = [W′ | f_a^W], a = 1, . . . , q,
the set of hyperplanes that separate W′ from the facet f_a^W of W. Note that all these sets are non-empty, and they are not necessarily pairwise disjoint.
There exists a finite family {G′_a : a = 1, . . . , q} of pairwise disjoint nonempty measurable sets that satisfy

∀a ∈ {1, . . . , q}, G′_a ⊆ G_a.   (15)

(E.g. we can choose G′_a = G_a \ ⋃_{b<a} G_b, or we can partition ⋃_{a=1}^{q} G_a in another way.)
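The disjointification G′_a = G_a \ ⋃_{b<a} G_b can be illustrated with finite toy sets standing in for the hyperplane sets G_a (for the lemma below one additionally needs Λ(G′_a) > 0, which the naive set difference does not guarantee by itself):

```python
def disjointify(sets):
    """Return pairwise disjoint subsets, keeping each result inside its input:
    the a-th output is G_a minus everything already used by earlier sets."""
    used, result = set(), []
    for g in sets:
        g_prime = set(g) - used
        result.append(g_prime)
        used |= g_prime
    return result

print(disjointify([{1, 2, 3}, {3, 4}, {2, 4, 5}]))  # [{1, 2, 3}, {4}, {5}]
```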
Lemma 2 Let W′, W be two compact convex polytopes with W′ ⊂ int(W). Let {G′_a : a = 1, . . . , q} be a finite class of nonempty disjoint measurable sets satisfying (15) and such that Λ(G′_a) > 0 for all a = 1, . . . , q. Then,

P(S(W′, W) ≤ t) ≥ e^{−tΛ([W′])} ∏_{a=1}^{q} (1 − e^{−tΛ(G′_a)}) + ∫_0^t Λ([W′]) e^{−xΛ([W′])} ∏_{a=1}^{q} (1 − e^{−xΛ(G′_a)}) dx.   (16)
Proof 2 Our starting point in the proof is the construction of the process Y ∧ W. Because we assume 0 ∈ W′, we focus only on the genesis of the 0-cell Ct = C^1_t, t ≥ 0. We use the representation (2), and we emphasize that the positive half-spaces of those hyperplanes d_{i,m} which are rejected in the construction also participate in this intersection.
Now, we consider a Poisson point process
Φ = {(dm , Sm ) : m ∈ N}
on [W ] × [0, ∞) with the intensity measure ΛW ⊗ Λ([W ]) λ+ , where λ+ denotes
the Lebesgue measure on R+ = [0, ∞). This point process can be considered as a
marked hyperplane process where the marks are birth-times (or as a ”rain of hyperplanes”). This choice of the intensity measure corresponds to the families D
and τ of random variables in Subsection 2.1: the interval between two sequential
births of hyperplanes is exponentially distributed with parameter Λ([W ]), and the
law of the born hyperplanes is ΛW . Thus the Sm are sums of i.i.d. exponentially
distributed random variables which are independent of the dm . This corresponds
to one of the standard methods to construct (marked) Poisson point processes
(cf. e.g. [7]). Note that this Poisson process is used for the construction of the
0-cell exclusively.
Let η denote the number of ancestors of Ct, with indexes k_1, . . . , k_η, and set Z_{k_l} = ∑_{m=1}^{l} z_{k_m}, l = 1, . . . , η + 1. Thus we can write formula (2) for the 0-cell as

Ct = W ∩ ⋂_{l=1}^{η} ⋂_{m=Z_{k_{l−1}}+1}^{Z_{k_l}} d^+_m ∩ ⋂_{m=Z_{k_η}+1}^{Z_{k_{η+1}}−1} d^+_m = W ∩ ⋂_{m : S_m < t} d^+_m.   (17)
Define the random times
σ′ = min{S : ∃(d, S) ∈ Φ, d ∈ [W′]} and σ_a = min{S : ∃(d, S) ∈ Φ, d ∈ G′_a}, a = 1, . . . , q.
These are the first times that a hyperplane of Φ falls into the respective sets. Note that these minima exist and are greater than 0 because all Λ_{[W]}(G′_a) > 0, we are working on a bounded window W, and Λ is assumed to be locally finite, so Λ_W is a probability measure. Let
M = max{σ_a : a = 1, . . . , q}.
By definition, for all a = 1, . . . , q there exists a (d_{(a)}, S_{(a)}) ∈ Φ with d_{(a)} ∈ G′_a and S_{(a)} ≤ M. Then f_a^W ⊂ d^−_{(a)} and C_{S_{(a)}} ⊆ d^+_{(a)}. Since C_M ⊆ C_{S_{(a)}} we deduce

C_M ⊆ ⋂_{a=1}^{q} d^+_{(a)} ⊂ int(W).   (18)

On the other hand, if σ′ ≥ M then W′ is not intersected until the time M = max{σ_a : a = 1, . . . , q}, so we have W′ ⊆ C_M. Then
W′ ⊆ C_M ⊂ int(W).
We have shown
{M ≤ σ′} ⊆ {S(W′, W) ≤ M}.
This relation straightforwardly implies the inclusion

{M ≤ min{σ′, t}} ⊆ {S(W′, W) ≤ t}.   (19)

Indeed, from M ≤ σ′ we get S(W′, W) ≤ M, and we use M ≤ t to get relation (19). We deduce

P(S(W′, W) ≤ t) ≥ P(M ≤ min{σ′, t}).   (20)
Now, the sets [W′], G′_a, a = 1, . . . , q, are pairwise disjoint and therefore the restricted Poisson point processes Φ ∩ ([W′] × [0, ∞)), Φ ∩ (G′_a × [0, ∞)), a = 1, . . . , q, are independent. Hence σ′, σ_a, a = 1, . . . , q, are independent random variables. Then σ′ and M are independent and we get
P(M ≤ min{σ′, t}) = P(M ≤ σ′ ≤ t) + P(M ≤ t ≤ σ′) = P(M ≤ σ′ ≤ t) + P(M ≤ t)P(t ≤ σ′).
Since σ′, σ_a, a = 1, . . . , q, are exponentially distributed with the respective parameters Λ([W′]), Λ(G′_a) > 0, a = 1, . . . , q, we find
P(t ≤ σ′) = e^{−tΛ([W′])} and P(M ≤ t) = ∏_{a=1}^{q} (1 − e^{−tΛ(G′_a)}).
Now, denoting the density function of σ′ by p′ and that of σ_a by p_a, we get

P(M ≤ σ′ ≤ t) = ∫_0^t p′(x) (∫_0^x p_1(x_1) dx_1) · · · (∫_0^x p_q(x_q) dx_q) dx = ∫_0^t Λ([W′]) e^{−xΛ([W′])} ∏_{a=1}^{q} (1 − e^{−xΛ(G′_a)}) dx.

Therefore, formula (16) follows.
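As a sanity check: by the proof above, the right-hand side of (16) equals P(M ≤ min{σ′, t}) for the independent exponential times σ′, σ_a, so it can be compared against a direct Monte Carlo simulation. The rates standing in for Λ([W′]) and Λ(G′_a) below are hypothetical placeholders.

```python
import math
import random

def rhs_16(t, lam_w, lam_g, steps=4000):
    """Right-hand side of (16): the product term plus the integral term,
    the integral being evaluated with a simple midpoint rule."""
    prod = lambda x: math.prod(1.0 - math.exp(-x * g) for g in lam_g)
    h = t / steps
    integral = sum(lam_w * math.exp(-x * lam_w) * prod(x)
                   for x in ((i + 0.5) * h for i in range(steps))) * h
    return math.exp(-t * lam_w) * prod(t) + integral

def mc_bound_event(t, lam_w, lam_g, n=200_000, seed=7):
    """Monte Carlo estimate of P(M <= min(sigma', t)) with independent
    sigma' ~ Exp(lam_w) and sigma_a ~ Exp(lam_g[a]), M = max_a sigma_a."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        sigma_p = rng.expovariate(lam_w)
        M = max(rng.expovariate(g) for g in lam_g)
        hits += M <= min(sigma_p, t)
    return hits / n

lam_w, lam_g, t = 0.8, [1.0, 1.5, 1.0, 2.0], 2.0   # hypothetical rates
print(rhs_16(t, lam_w, lam_g), mc_bound_event(t, lam_w, lam_g))  # nearly equal
```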
Remark 1 (i) We emphasize that M ≤ min{σ′, t} is sufficient but not necessary for S(W′, W) ≤ t. There can also be other ways to encapsulate W′ within W than separating the complete facets of W by single hyperplanes. Alternative geometric constructions are possible.
(ii) It is well known in convex geometry that [W′ | f_a^W] ≠ ∅ for all a = 1, . . . , q. But depending on the support of the measure Λ (in particular, if Λ is concentrated on a set of hyperplanes with only finitely many directions) there can be windows W such that Λ([W′ | f_a^W]) = 0 for some a. In such cases the bound given in Lemma 2 is useless. Therefore, in the following, W will be adapted to Λ in order to have all Λ(G′_a) > 0. But here we will not try to find an optimal W in the sense that the quantities Λ(G′_a) could be somehow maximized.
(iii) As an example consider the particular measure Λ⊥ = ∑_{c=1}^{ℓ} g_c δ_c, with g_c > 0 and δ_c the translation invariant measure on the set of all hyperplanes that are orthogonal to the c-th coordinate axis in Rℓ, with the normalization δ_c([s_c]) = 1, where s_c is a linear segment of length 1 parallel to the c-th coordinate axis. Let W′ = [−α, α]ℓ, W = [−β, β]ℓ be two windows with 0 < α < β. Then we can choose the sets G′_a = G_a if the facet f_a^W of W is orthogonal to the c-th coordinate axis, a = 1, . . . , 2ℓ. We have Λ⊥(G′_a) = g_c(β − α). Simple geometric considerations yield

{M ≤ min{σ′, t}} = {S(W′, W) ≤ t} a.s.   (21)

and hence for Λ⊥ we have equality in (16).
We will use the following parameterization of hyperplanes. Let S^{ℓ−1} be the unit hypersphere in Rℓ. For H ∈ H, d(H) ∈ R denotes its signed distance from the origin and u(H) ∈ S^{ℓ−1} its normal direction. We denote by H(u, d) the hyperplane with the respective parameters (u, d) ∈ S^{ℓ−1} × R. The image of the non-zero, locally finite and translation invariant measure Λ under this parameterization can be written as the product measure

γ · λ ⊗ θ,   (22)

where γ > 0 is a constant, λ is the Lebesgue measure on R and θ is an even probability measure on S^{ℓ−1} (cf. e.g. [14], Theorem 4.4.1 and Theorem 13.2.12). Here, that θ is even means θ(A) = θ(−A) for all Borel sets A ⊆ S^{ℓ−1}. The property that there is no line in Rℓ such that all hyperplanes of the support of Λ are parallel to it is equivalent to the property that θ is not concentrated on a great subsphere of S^{ℓ−1}, i.e. there is no one-dimensional subspace L_1 of Rℓ (with orthogonal complement L_1^⊥) such that the support of θ is contained in G = L_1^⊥ ∩ S^{ℓ−1}.
Recall that W ′ is a window with 0 ∈ int(W ′ ).
Lemma 3 There exists a compact convex polytope W with facets f_1^W, . . . , f_{2ℓ}^W and W′ ⊂ int(W), and pairwise disjoint sets G′_a ⊆ [W′ | f_a^W] such that Λ(G′_a) > 0 for all a = 1, . . . , 2ℓ.
Proof 3 For u ∈ S^{ℓ−1} we denote by H_{W′}(u) the supporting (i.e. tangential) hyperplane to W′ with normal direction u. By h_{W′}(u) we denote the distance from the origin to H_{W′}(u). This is the support function of W′, h_{W′}(u) = max{⟨x, u⟩ : x ∈ W′} (see [14], p. 600). With this notation we have H_{W′}(u) = H(u, h_{W′}(u)). Note that for d ∈ R the hyperplane H(u, d) is parallel to H_{W′}(u) at signed distance d from the origin.
The shape of the window W will depend on Λ. We use some ideas of the
proof of Theorem 10.3.2 in [14]. Under the given assumptions on the support of
Λ there exist points u1 , . . . , u2ℓ ∈ Sℓ−1 which all belong to the support of θ and
0 ∈ int(conv{u1 , . . . , u2ℓ }), i.e. the origin is in the interior of the convex hull.
Now, the facets f_a^W of W are chosen to have normals u_a, and their distance from the origin is h_{W′}(u_a) + 3, a = 1, . . . , 2ℓ. Formally,

W = ⋂_{a=1}^{2ℓ} H(u_a, h_{W′}(u_a) + 3)^+.

Notice that the described condition on the choice of the directions u_1, . . . , u_{2ℓ} guarantees that W is bounded (see the proof of Theorem 10.3.2 in [14]).
The definition of the support of a measure and some continuity arguments
(applied to sets of hyperplanes) yield that for all ua there are pairwise disjoint
neighborhoods Ua ⊂ Sℓ−1 such that θ(Ua ) > 0, and for the sets of hyperplanes
G′a = {H ∈ H : u(H) ∈ Ua , hW ′ (ua ) + 1 < d(H) < hW ′ (ua ) + 2} ,
it holds G′a ⊂ [W ′ |faW ]. Hence Λ([W ′ |faW ]) ≥ Λ(G′a ) = γ θ(Ua ) > 0 for all
a = 1, . . . , 2ℓ. Since the Ua are pairwise disjoint, also the sets G′a , a = 1, . . . , 2ℓ,
have this property.
Remark 2 Because the directional distribution θ is assumed to be an even measure, in the construction above one can choose u_{ℓ+a} = −u_a, a = 1, . . . , ℓ. Then the facets f_a^W and f_{ℓ+a}^W are parallel.
Lemma 4 The compact convex polytope W constructed in Lemma 3 also satisfies the following property: ∀ε > 0, ∃t∗(ε) > 0 such that the following encapsulation time relation holds:

∀s ∈ (0, t∗(ε)], ∃r(s) ≥ 1 such that ∀r ≥ r(s) : P(S(W′, rW) ≤ s) > 1 − ε.   (23)
Proof 4 Let us use the notation introduced in Lemma 3 and in its proof. For r > 0 we set rG′_a = {rh : h ∈ G′_a}. Then elementary linear algebra yields that G′_a ⊂ [W′ | f_a^W] implies rG′_a ⊂ [W′ | r f_a^W] for all r > 1. Furthermore, from (22) we find

∀ a = 1, …, 2ℓ : Λ([W′ | r f_a^W]) ≥ Λ(rG′_a) = γ r θ(U_a).
Now denote

L = min{Λ(G′_a) = γθ(U_a) : a = 1, …, 2ℓ}.

We have L > 0, and (16) yields

P(S(W′, rW) ≤ s) ≥ e^{−sΛ([W′])} (1 − e^{−srL})^{2ℓ} + ∫_0^s Λ([W′]) e^{−xΛ([W′])} (1 − e^{−xrL})^{2ℓ} dx
> e^{−sΛ([W′])} (1 − e^{−srL})^{2ℓ}.

Note that

∀ ε ∈ (0, 1), ∃ t*(ε) > 0 such that e^{−t*(ε)Λ([W′])} > √(1 − ε).   (24)

Then for all s ∈ (0, t*(ε)] we have e^{−sΛ([W′])} > √(1 − ε). Furthermore, for any such s ∈ (0, t*(ε)] there is an r(s) ≥ 1 with (1 − e^{−sr(s)L})^{2ℓ} > √(1 − ε). This finishes the proof.
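Numerically, the lower bound behaves exactly as the proof requires: for fixed small s, letting r grow drives (1 − e^{−srL})^{2ℓ} as close to 1 as desired. A sketch with illustrative parameter values (Λ([W′]), L and ℓ are assumptions of ours, not taken from the paper):

```python
import math

Lam_Wp = 2.0   # Lambda([W']), illustrative
L = 0.5        # min_a Lambda(G'_a), illustrative
two_ell = 4    # 2*l for l = 2
eps = 0.1

# choose t*(eps) with e^{-t*(eps) Lambda([W'])} = sqrt(1 - eps), as in (24)
t_star = -math.log(math.sqrt(1 - eps)) / Lam_Wp
s = t_star

# find r(s) >= 1 with (1 - e^{-s r L})^{2l} > sqrt(1 - eps)
r = 1.0
while (1 - math.exp(-s * r * L)) ** two_ell <= math.sqrt(1 - eps):
    r *= 2

# the product of the two factors then exceeds 1 - eps, as in (23)
lower_bound = math.exp(-s * Lam_Wp) * (1 - math.exp(-s * r * L)) ** two_ell
assert lower_bound > 1 - eps
print(r, lower_bound)
```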
Lemma 5 For all W′, all t > 0, all ε > 0 and t*(ε) such that (23) holds, there exists t1 = t1(ε, t) ∈ (0, min{t, t*(ε)}) such that for all t2 ∈ (0, t1],

P(Y ∧ W′ has no jump in [t − t2, t)) > 1 − ε.
Proof 5 Let {C′^i_t : i = 1, …, ξ′_t} be the family of cells of the pure jump Markov process Yt ∧ W′. The lifetimes of the cells C′^i_t are exponentially distributed with parameters Λ([C′^i_t]), and they are conditionally independent given the set of cells at time t. Then, given the set of cells at time t, the minimum of the lifetimes is exponentially distributed with parameter ζt = Σ_{i=1}^{ξ′_t} Λ([C′^i_t]).
Notice that ζt is monotonically increasing in t, because if at some time a cell
C ′ is divided into the cells C ′′ , C ′′′ we have C ′ = C ′′ ∪C ′′′ and [C ′ ] = [C ′′ ]∪[C ′′′ ].
Then, by subadditivity of Λ,
Λ([C ′ ]) = Λ([C ′′ ] ∪ [C ′′′ ]) ≤ Λ([C ′′ ]) + Λ([C ′′′ ]) .
Since the process Y ∧ W′ has no explosion, for any fixed t > 0 there exists x0 > 0 such that for all s ∈ [0, t] we have P(ζs ≤ x0) > √(1 − ε). We fix t1 = t1(ε, t) ∈ (0, min{t, t*(ε)}) as a value which also satisfies e^{−t1 x0} > √(1 − ε). This yields for all t2 ∈ (0, t1] that

P(Y ∧ W′ has no jump in [t − t2, t))
≥ P(Y ∧ W′ has no jump in [t − t1, t))
= ∫_0^∞ P(Y ∧ W′ has no jump in [t − t1, t) | ζ_{t−t1} = x) P(ζ_{t−t1} ∈ dx)
≥ ∫_0^{x0} P(Y ∧ W′ has no jump in [t − t1, t) | ζ_{t−t1} = x) P(ζ_{t−t1} ∈ dx)
= ∫_0^{x0} e^{−t1 x} P(ζ_{t−t1} ∈ dx) ≥ e^{−t1 x0} P(ζ_{t−t1} ≤ x0) > 1 − ε.
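The fact used at the start of the proof, that the minimum of independent exponential lifetimes is again exponential with the summed rate ζt, can be checked by simulation (the rates below are illustrative values of Λ([C′^i_t]), not from the paper):

```python
import math
import random

random.seed(1)
rates = [0.7, 1.3, 2.0]    # illustrative cell rates Lambda([C_t^i])
zeta = sum(rates)          # rate of the minimum of the lifetimes
n = 100_000
t1 = 0.2

# empirical P(min of the lifetimes > t1), to compare with e^{-t1 * zeta}
survivals = sum(
    1 for _ in range(n)
    if min(random.expovariate(r) for r in rates) > t1
)
print(abs(survivals / n - math.exp(-t1 * zeta)))  # small
```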
In the sequel for t > 0 and ε > 0 the quantity t1 = t1 (ε, t) is the one given
by Lemma 5.
Recall that under the identification of E with E ∧ W we can write P(Y ∈ E) = P(Y ∧ W′ ∈ E) for all E ∈ B(T_{W′}), see (5). This identification also allows us to write, for all E ∈ B(T_{W′}), D ∈ B(T_{W^c}), s > 0:

P(Yt ∈ E ∩ D, S(W′, W) < s) = P(Yt ∧ W′ ∈ E, Yt ∈ D, S(W′, W) < s).
Lemma 6 For all t > 0, ε > 0 and t2 ∈ (0, t1 ], we have
∀ s ∈ (0, t2 ], ∀ E ∈ B(TW ′ ) : |P(Yt ∈ E) − P(Yt−s ∈ E)| < ε.
Proof 6 We have
∀s ∈ (0, t2 ] : {Y ∧W ′ has no jump in [t−t2 , t)} ⊆ {Y ∧W ′ has no jump in [t−s, t)} .
Then,
{Yt ∧ W ′ ∈ E, Y ∧ W ′ has no jump in [t − t2 , t)}
= {Yt−s ∧ W ′ ∈ E, Y ∧ W ′ has no jump in [t − t2 , t)}.
Therefore {Yt ∧ W′ ∈ E} ∆ {Yt−s ∧ W′ ∈ E} ⊆ {Y ∧ W′ has some jump in [t − t2, t)}. By using the relation |P(Γ) − P(Θ)| ≤ P(Γ∆Θ), the result follows.
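The elementary bound |P(Γ) − P(Θ)| ≤ P(Γ∆Θ) used here can be verified on an arbitrary finite probability space; the random atom weights below are purely illustrative:

```python
import random

random.seed(0)
for _ in range(1000):
    # random pmf on the four atoms: (Gamma and Theta, Gamma only, Theta only, neither)
    w = [random.random() for _ in range(4)]
    total = sum(w)
    p = [x / total for x in w]
    p_gamma = p[0] + p[1]
    p_theta = p[0] + p[2]
    p_symdiff = p[1] + p[2]          # P(Gamma symmetric-difference Theta)
    assert abs(p_gamma - p_theta) <= p_symdiff + 1e-12
```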
In the following results W is a window such that W ′ ⊂ int(W ). We use the
notation S = S(W ′ , W ) for the encapsulation time.
Proposition 1 For all t > 0, ε > 0 and t2 ∈ (0, t1 ], we have
∀E ∈ B(TW ′ ) : |P(Yt ∈ E) − P(Yt ∈ E | S < t2 )| < ε .
Proof 7 Let us first show that

∀ s ∈ (0, t), ∀ E ∈ B(T_{W′}) : P(Yt ∈ E | S = s) = P(Yt−s ∈ E).   (25)
Let Y⃗′ = (Y′^m : m ∈ N) be a sequence of independent copies of Y, also independent of Y. Relation (3) yields

Yt ∧ W′ ∼ (Ys ⊞ Y⃗′_{t−s}) ∧ W′ ∼ (Ys ∧ W′) ⊞ Y⃗′_{t−s}.

On the event S = s we have W′ ⊆ C^1_s, the cell containing the origin at time s, and thus Ys ∧ W′ = {W′}. Hence, on S = s we have (Ys ∧ W′) ⊞ Y⃗′_{t−s} ∼ Y′^1_{t−s} ∧ W′. Therefore,

P(Yt ∧ W′ ∈ E | S = s) = P(Y′^1_{t−s} ∧ W′ ∈ E | S = s) = P(Y′^1_{t−s} ∧ W′ ∈ E).   (26)

Since Y′^1_{t−s} ∼ Yt−s, relation (25) is satisfied. Hence,
|P(Yt ∈ E) − P(Yt ∈ E | S < t2)|
= |P(Yt ∈ E) − (1/P(S < t2)) ∫_0^{t2} P(Yt ∈ E | S = s) P(S ∈ ds)|
= |P(Yt ∈ E) − (1/P(S < t2)) ∫_0^{t2} P(Yt−s ∈ E) P(S ∈ ds)|
= |(1/P(S < t2)) ∫_0^{t2} (P(Yt ∈ E) − P(Yt−s ∈ E)) P(S ∈ ds)|
≤ (1/P(S < t2)) ∫_0^{t2} |P(Yt ∈ E) − P(Yt−s ∈ E)| P(S ∈ ds) < ε,
where in the last inequality we use Lemma 6.
Lemma 6, Proposition 1 and relation (25) obviously imply the following
result.
Corollary 1 For all t > 0, ε > 0, t2 ∈ (0, t1] and all s ∈ (0, t2) it holds for all E ∈ B(T_{W′}):

|P(Yt−s ∈ E) − P(Yt ∈ E | S < t2)| = |P(Yt ∈ E | S = s) − P(Yt ∈ E | S < t2)| < 2ε.
Lemma 7 For all t > 0, ε > 0 and t2 ∈ (0, t1] we have, for all D ∈ B(T_{W′}) and E ∈ B(T_{W^c}),

|P(Yt ∈ D ∩ E | S < t2) − P(Yt ∈ D | S < t2) P(Yt ∈ E | S < t2)| < 2ε.
Proof 8 Firstly, we show the following conditional independence property:

∀ D ∈ B(T_{W′}), E ∈ B(T_{W^c}) : P(Yt ∈ D ∩ E | S = s) = P(Yt ∈ D | S = s) P(Yt ∈ E | S = s).   (27)

We use the notation introduced in the proof of Proposition 1, and we write E shortly instead of E ∧ W^c. The arguments are close to those in the proof of relation (25).
Recall that on the event S = s we have W′ ⊆ C^1_s. By Ys ⊞ (Y′^m_{t−s} : m ≥ 2) we mean that the tessellations Y′^m_{t−s} are nested only into the cells C^m_s of Ys with m ≥ 2, and not into the cell C^1_s. From (25), (26) and the independence of the random variables Ys, Y′^m_{t−s}, m ≥ 1, we obtain

P(Yt ∧ W′ ∈ D, Yt ∈ E | S = s)
= P((Ys ⊞ Y⃗′_{t−s}) ∧ W′ ∈ D, (Ys ⊞ Y⃗′_{t−s}) ∈ E | S = s)
= P(Y′^1_{t−s} ∧ W′ ∈ D, (Ys ⊞ (Y′^m_{t−s} : m ≥ 2)) ∈ E | S = s)
= P(Y′^1_{t−s} ∧ W′ ∈ D) P((Ys ⊞ (Y′^m_{t−s} : m ≥ 2)) ∈ E | S = s)
= P(Yt ∧ W′ ∈ D | S = s) P(Yt ∈ E | S = s).
Then (27) is verified. By using this equality and Corollary 1 we find

P(Yt ∈ D | S < t2) P(Yt ∈ E | S < t2) − 2ε
≤ (P(Yt ∈ D | S < t2) − 2ε) P(Yt ∈ E | S < t2)
= (1/P(S < t2)) ∫_0^{t2} (P(Yt ∈ D | S < t2) − 2ε) P(Yt ∈ E | S = s) P(S ∈ ds)
< (1/P(S < t2)) ∫_0^{t2} P(Yt ∈ D | S = s) P(Yt ∈ E | S = s) P(S ∈ ds)
= (1/P(S < t2)) ∫_0^{t2} P(Yt ∈ D ∩ E | S = s) P(S ∈ ds)
= P(Yt ∈ D ∩ E | S < t2).
In an analogous way one proves

P(Yt ∈ D ∩ E | S < t2) < P(Yt ∈ D | S < t2) P(Yt ∈ E | S < t2) + 2ε,

which finishes the proof.
We summarize. The window W′ was fixed, and there was no loss of generality in assuming 0 ∈ int(W′). Let t > 0 and ε > 0 be fixed. We construct t1 = t1(ε, t) ∈ (0, min{t, t*(ε)}). Now let W be as in Lemma 3. Note that for all t2 ∈ (0, t1] we have e^{−t2 Λ([W′])} > √(1 − ε). Then, from Lemma 4 we get that for all t2 ∈ (0, t1] there exists r(t2) ≥ 1 such that P(S(W′, rW) < t2) > 1 − ε for all r ≥ r(t2).
Now we fix t2 ∈ (0, t1] and define Ŵ = r(t2)W. We have W′ ⊂ int(Ŵ). From Lemma 7 we deduce the following result.
Corollary 2 For all D ∈ B(T_{W′}), E ∈ B(T_{Ŵ^c}) it is satisfied that

|P(Yt ∈ D ∩ E | S < t2) − P(Yt ∈ D | S < t2) P(Yt ∈ E | S < t2)| < 2ε.
Proposition 2 For the window W′, for all t > 0 and ε > 0, the window Ŵ satisfies

∀ D ∈ B(T_{W′}), E ∈ B(T_{Ŵ^c}) : |P(Yt ∈ D ∩ E) − P(Yt ∈ D) P(Yt ∈ E)| < 4ε.
Proof 9 Denote

Γ = {Yt ∈ D}, Θ = {Yt ∈ E}, Υ = {S(W′, Ŵ) < t2}.

We have

|P(Γ ∩ Θ | Υ) − P(Γ | Υ) P(Θ | Υ)| < 2ε and P(Υ) > 1 − ε.   (28)

Observe that P(Υ) > 1 − ε implies P(Ξ) − P(Ξ ∩ Υ) < ε and P(Ξ) − P(Ξ ∩ Υ)P(Υ) < 2ε for all events Ξ, in particular when Ξ is Γ, Θ or Γ ∩ Θ.
The first relation in (28) obviously implies

|P(Γ ∩ Θ | Υ) − P(Γ | Υ) P(Θ | Υ)| P(Υ)² < 2ε.
Then,

P(Γ ∩ Θ) − 4ε
< P(Γ ∩ Θ) − P(Γ ∩ Θ | Υ)P(Υ)² + P(Γ | Υ)P(Θ | Υ)P(Υ)² − 2ε
= P(Γ ∩ Θ) − P(Γ ∩ Θ ∩ Υ)P(Υ) + P(Γ ∩ Υ)P(Θ ∩ Υ) − P(Γ)P(Θ) + P(Γ)P(Θ) − 2ε
< P(Γ)P(Θ).
In an analogous way it is shown that P(Γ)P(Θ) < P(Γ ∩ Θ) + 4ε. Hence, the
result is proven.
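The ε-bookkeeping in this proof can be sanity-checked numerically: if under Υ the two events are (here even exactly) independent and P(Υ^c) < ε, then the unconditional correlation stays below 4ε. The joint law constructed below is our own illustration, not part of the paper:

```python
import random

random.seed(2)
eps = 0.05
for _ in range(1000):
    delta = random.uniform(0.0, eps)           # P(Upsilon^c) < eps
    p, q = random.random(), random.random()    # P(Gamma|Upsilon), P(Theta|Upsilon), independent given Upsilon
    a, b = random.random(), random.random()    # P(Gamma|Upsilon^c), P(Theta|Upsilon^c)
    ab = random.uniform(0.0, min(a, b))        # P(Gamma and Theta | Upsilon^c)
    P_G = p * (1 - delta) + a * delta
    P_T = q * (1 - delta) + b * delta
    P_GT = p * q * (1 - delta) + ab * delta
    # conclusion of Proposition 2 (the conditional 2-eps term is 0 here)
    assert abs(P_GT - P_G * P_T) < 4 * eps
```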
Proposition 2 yields the proof of Theorem 1 after substituting 4ε by ε.
6 Comparison of STIT and Poisson hyperplane tessellations (PHT)
Intuitively, we expect a gradual difference in the mixing properties of STIT and PHT. Hyperplanes are unbounded, while the maximal faces (also referred to as I-faces) in a STIT tessellation are always bounded. These I-faces, however, tend to be very large (it was already shown in [10] for the planar case that the length of the typical I-segment has a finite expectation but an infinite second moment). Nevertheless, since we have shown that the tail σ-algebra of STIT is trivial, STIT has a short range dependence in the sense of [2].
One aspect is the following. For PHT, Schneider and Weil [14] (Section 10.5) showed that it is mixing if the directional distribution θ (see (22)) has zero mass on all great subspheres of S_+^{ℓ−1}. They also provide an example of a tessellation where this last condition is not fulfilled and which is not mixing. In contrast to this, Lachièze-Rey proved in [8] that STIT is mixing for all θ which are not concentrated on a great subsphere.
Concerning the tail σ-algebra, we have Theorem 2 for the STIT tessellations.
In contrast, for the PHT the tail σ-algebra is not trivial.
Lemma 8 Let Y^PHT denote a Poisson hyperplane tessellation with an intensity measure Λ which is non-zero, locally finite and translation invariant on H, and assume that the support of Λ is such that there is no line in R^ℓ to which all hyperplanes of the support are parallel. Then the tail σ-algebra is not trivial with respect to the distribution of Y^PHT.
Proof 10 Let (Wn : n ∈ N) be an increasing sequence of windows such that for all n ∈ N, Wn ⊂ int Wn+1, and R^ℓ = ⋃_{n∈N} Wn. Let B1 be the unit ball in R^ℓ centered at 0. Consider the following event in B(T):

D := {there is a hyperplane in Y^PHT which intersects B1}.

Because outside of any bounded window Wn all the hyperplanes belonging to Y^PHT can be identified, and thus it can be decided (in a measurable way) whether there is a hyperplane which intersects B1, we have D ∈ B(T_{Wn^c}) for all n ∈ N, and hence D ∈ B_{−∞}(T). The number of hyperplanes of Y^PHT hitting B1 is Poisson distributed with parameter Λ([B1]), so P(D) = 1 − e^{−Λ([B1])}, and this probability is neither 0 nor 1.
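The value of P(D) comes from the Poisson distribution of the number of hyperplanes hitting B1: it lies strictly between 0 and 1 whenever 0 < Λ([B1]) < ∞. A quick simulation sketch (the value chosen for Λ([B1]) is illustrative):

```python
import math
import random

random.seed(3)
lam = 0.7          # Lambda([B1]), illustrative

def poisson(mean):
    """Sample a Poisson random variable (Knuth's multiplication method)."""
    limit = math.exp(-mean)
    k, prod = 0, 1.0
    while prod > limit:
        k += 1
        prod *= random.random()
    return k - 1

n = 50_000
hit = sum(1 for _ in range(n) if poisson(lam) >= 1)   # at least one hyperplane meets B1
theory = 1 - math.exp(-lam)
assert 0 < theory < 1      # the tail event D is non-trivial
print(hit / n, theory)
```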
Acknowledgments. The authors thank Lothar Heinrich for helpful hints
and discussion. The authors are indebted for the support of Program Basal
CMM from CONICYT (Chile) and by DAAD (Germany).
References
[1] Breiman, L. (1993). Probability. 2nd ed., SIAM, Philadelphia.
[2] Daley, D.J. and Vere-Jones, D. (2008). An Introduction to the Theory of Point Processes. Vol. II: General Theory and Structure. 2nd ed.,
Springer.
[3] Heinrich, L. (1994). Normal approximation for some mean-value estimates of absolutely regular tessellations. Math. Methods Statist. 3, 1–24.
[4] Heinrich, L. (2012). Asymptotic methods in statistics of random point processes. In: E. Spodarev (ed.), Lectures on Stochastic Geometry, Spatial Statistics and Random Fields. Asymptotic Methods. Lecture Notes in Mathematics, Springer, ???–???.
[5] Heinrich, L. and Molchanov, I.S. (1999). Central limit theorem for a
class of random measures associated with germ-grain models. Adv. Appl.
Probab. 31, 283–314.
[6] Heinrich, L., Schmidt, H. and Schmidt, V. (2007). Limit theorems
for functionals on the facets of stationary random tessellations. Bernoulli
13, 868–891.
[7] Kingman, J.F.C. (1992). Poisson Processes. Oxford University Press, Oxford.
[8] Lachièze-Rey, R. (2011). Mixing properties for STIT tessellations. Adv.
Appl. Probab. 43, 40–48.
[9] Martínez, S. and Nagel, W. (2012). Ergodic description of STIT tessellations. Stochastics: An Int. Journ. of Prob. and Stoch. Proc. 84, 113–134.
[10] Mecke, J., Nagel, W. and Weiss, V. (2007). Length distributions of
edges of planar stationary and isotropic STIT tessellations. J. Contemp.
Math. Anal. 42, 28–43.
[11] Mecke, J., Nagel, W. and Weiss, V. (2008). A global construction of
homogeneous random planar tessellations that are stable under iteration.
Stochastics 80, 51–67.
[12] Nagel, W. and Weiss, V. (2005). Crack STIT tessellations: Characterization of stationary random tessellations stable with respect to iteration.
Adv. Appl. Probab. 37, 859–883.
[13] Parry, W. (2004). Topics in Ergodic Theory. Cambridge University Press.
[14] Schneider, R. and Weil, W. (2008). Stochastic and Integral Geometry.
Springer.
[15] Schreiber, T. and Thäle, C. (2010). Second-order properties and central limit theory for the vertex process of iteration infinitely divisible and
iteration stable random tessellations in the plane. Adv. Appl. Probab. 42,
913–935.
[16] Schreiber, T. and Thäle, C. (2011). Intrinsic volumes of the maximal
polytope process in higher dimensional STIT tessellations. Stoch. Proc.
Appl. 121, 989–1012.
[17] Schreiber, T. and Thäle, C. (2012). Limit theorems for iteration stable tessellations. Ann. Probab., to appear. arXiv:1103.3960.