Algorithmic Geometry
Jean-Daniel Boissonnat
Mariette Yvinec
INRIA Sophia-Antipolis, France
CAMBRIDGE UNIVERSITY PRESS
PUBLISHED BY THE PRESS SYNDICATE OF THE UNIVERSITY OF CAMBRIDGE
The Pitt Building, Trumpington Street, Cambridge CB2 1RP, United Kingdom
CAMBRIDGE UNIVERSITY PRESS
The Edinburgh Building, Cambridge CB2 2RU, United Kingdom
40 West 20th Street, New York, NY 10011-4211, USA
10 Stamford Road, Oakleigh, Melbourne 3166, Australia
A catalogue record for this book is available from the British Library
To Bertrand,
Martine,
Cécile,
Clément,
Alexis,
Marion,
Quentin,
Romain,
Eve,
and the others...
Table of contents
Preface xv
Acknowledgments xxi
References 492
Notation 508
Index 513
Preface
A new field
Many disciplines require a knowledge of how to efficiently deal with and build
geometric objects. Among many examples, one could quote robotics, computer
vision, computer graphics, medical imaging, virtual reality, or computer aided
design. The first geometric results with a constructive flavor date back to Euclid
and remarkable developments occurred during the nineteenth century. However,
only very recently did the design and analysis of geometric algorithms find a
systematic treatment: this is the topic of computational geometry which as a field
truly emerged in the mid 1970s. Since then, the field has undergone considerable
growth, and is now a full-fledged scientific discipline, of which this text presents
the foundations.
The design of efficient geometric algorithms and their analysis are largely based on
geometric structures, algorithmic data structuring techniques, and combinatorial
results.
A major contribution of computational geometry is to exemplify the central
role played by a small number of fundamental geometric structures and their
relation to many geometric problems.
Geometric data structures and their systematic analysis guided the layout of
this text. We have dedicated a part to each of the fundamental geometric struc-
tures: convex hulls, triangulations, arrangements, and Voronoi diagrams.
In order to control the complexity of an algorithm, one must know the com-
plexity of the objects that it generates. For example, it is essential to have a
sharp bound on the number of facets of a polytope as a function of the number
of its vertices: this is the celebrated upper-bound theorem proved by McMullen
in 1970. Combinatorial geometry plays an essential role in this book and the
first chapters of each part lay the mathematical grounds and prove the basic
combinatorial properties satisfied by the corresponding geometric structures.
At the same time as geometric data structures of general interest were be-
ing studied, new algorithmic techniques were devised. To general algorithmic
paradigms, computational geometry added its own geometric techniques. The
first purely geometric paradigm in the history of the field, the sweep method,
was originally used by Bentley and Ottmann in an algorithm that computes the
intersection of a set of line segments in the plane. Subsequent developments of
general techniques soon encountered important theoretical difficulties which led
to quite sophisticated variants and theoretical constructions without truly affect-
ing the practice of the field. As a reaction against this tendency, a few authors
decided it was more desirable to look for simple algorithms which were efficient
on the average, rather than algorithms whose good behavior in the worst case
did not guarantee good behavior in practical instances of the problem.
The recent body of work on randomization gave the most significant answer
in this direction. An algorithm is said to be randomized if, after making ran-
dom choices during its execution, it gives the solution to a purely deterministic
problem. No probabilistic assumptions are made about the input objects, and
randomness is used only to choose the path that the algorithm will follow to the
solution. Randomized algorithms are often simple to conceive and to program,
and their average complexity (over all the random choices made during the exe-
cution) is usually very good, often even optimal. Randomization leads to general
methods for the design and analysis of algorithms, and allows efficient compu-
tation of geometric structures, both in theory and practice. For these reasons,
randomization holds a central position in this book. The first three chapters in the
first part contain all the generic material related to randomization, and instances
of randomized algorithms are presented throughout the subsequent chapters.
The goal of this book is twofold. In the first place, it aims at giving a coherent
exposition of the field rather than a collection of results, and at presenting only
methods that possess a certain degree of generality. The algorithms presented in
this book have been selected to work in all dimensions whenever this was possible:
the case of dimension 2 only receives special treatment when particular methods
lead to significant improvement, which happens surprisingly seldom.
In the second place, this book aims at presenting solutions which, while theoret-
ically efficient and relevant, remain relatively simple and applicable in practical
situations. Most of these algorithms have been implemented by their authors and
their practical behavior has turned out to agree with the analyses developed in
this book.
Nevertheless, this book does not claim to be a comprehensive treatment of the
whole field of computational geometry. In particular, the reader will find mention
This book assumes no particular knowledge from the reader and should be ac-
cessible to any enthusiastic geometer. Its contents have been taught in several
graduate courses both in mathematics and in computer science. It is aimed both
at mathematicians interested in a constructive approach to geometry, and at
computer scientists in need of an accurate treatment of computational geometry.
Students, researchers, and engineers in more practical fields will find here a useful
methodology and practical algorithms.
There is more than one way to read this book. The authors have tried to respect
¹In his essay Comme un Roman, the French writer Daniel Pennac describes the unalienable
rights of the reader as:
1. The right not to read.
2. The right to jump ahead.
3. The right not to finish a book.
4. The right to read again.
5. The right to read anything.
6. The right to Bovarysm (textually transmissible disease).
7. The right to read here and there.
8. The right to thesaurize.
9. The right to read aloud.
10. The right not to say anything.
Translator's Preface
The original text was written in French. The translator's task was constrained by
the fact that most of the French words used in the original text were originally
coined by their authors in English publications, or have a commonly accepted
translation into English. The problem was thus one of reverse engineering! For-
tunately, there are now many textbooks in computational geometry which helped
to resolve conflicts in terminology. Whenever possible, the translation conformed
to the standard terminology or, for the more specialized vocabulary, to the ter-
minology set up in the original papers.
For graphs, however, the use of the word edge overlapped with that of 1-faces
for common geometric structures. Similarly, the word vertex is also used for
polytopes in a different meaning than for graphs. The situation is somewhat
complicated by the fact that sometimes graphs are introduced whose nodes are
edges of a polygon. We have followed the French text in systematically using
the words node and arc for the set underlying a graph and the symbolic links
between the elements of this set. The terminology related to graphs is recalled
in subsection 2.2.1.
We have departed from the French text for the word saillant (meaning salient)
to follow the usage with convex vertices/edges, as opposed to reflex. Although a
vertex or an edge is always convex in the original meaning of convexity, here it
means (as most people would understand it) that the internal angle around the
vertex or around the edge is smaller than π. Luckily, this definition is never used
for higher-dimensional faces, and therefore should not create confusion.
Vertical decompositions as they are introduced in this book have also been
called by various names, such as trapezoidal maps, vertical partitions, and verti-
cal visibility maps. As with other authors, we have preferred the phrase vertical
decomposition or even decomposition for short, in order to emphasize the rela-
tion with other geometric decomposition schemes, for example decompositions of
arrangements, polygons, or polyhedra into simplices (also called triangulations).
We should properly speak of the decomposition of the plane induced by a set of
segments. The reader will forgive us for using the phrase decomposition of (a set
of) line segments.
Acknowledgments
This book benefited from the joint work of researchers of the PRISME project at
INRIA and is inspired by much common work with Panagiotis Alevizos, André
Cérézo, Olivier Devillers, Katrin Dobrindt, Franco Preparata, Micha Sharir, Boaz
Tagansky, and Monique Teillaud. To proofread the manuscript, Jean Berstel and
Franck Nielsen provided their help with an unfailing friendship. The translation
has been carried out by Hervé Brönnimann, who not only translated but also
corrected the original manuscript in many places. Many thanks to all of them! A
book about geometry could not exist without drawings, and a book about com-
putational geometry could not exist without computer generated drawings. The
JPdraw software provided the ruler and compass, and together with its designer
Jean-Pierre Merlet it played an essential role in the conception of this book.
Part I
Algorithmic tools
The first part of this book introduces the most popular tools in computational
geometry. These tools will be put to use throughout the rest of the book.
The first chapter gives a framework for the analysis of algorithms. The concept
of complexity of an algorithm is reviewed. The underlying model of computation
is made clear and unambiguous.
The second chapter reviews the fundamentals of data structures: lists, heaps,
queues, dictionaries, and priority queues. These structures are mostly imple-
mented as balanced trees. To serve as an example, red-black trees are fully
described and their performances are evaluated.
The third chapter illustrates the main algorithmic techniques used to solve
geometric problems: the incremental method, the divide-and-conquer method,
the sweep method, and the decomposition method which subdivides a complex
object into elementary geometric objects.
Finally, chapters 4, 5, and 6 introduce the randomization methods which have
recently made a distinguished appearance on the stage of computational geom-
etry. Only the incremental randomized method is introduced and used in this
book, as opposed to the randomized divide-and-conquer method.
Chapter 1
Notions of complexity
scribed in the algorithm. Likewise, the spatial complexity describes how many
memory units are needed in order to store all the data required for the execution
of the corresponding program.
The model of computation underlying all the algorithms given in this book is
the so-called real RAM model. In this model, each memory unit can hold the
representation of a real number, and accessing a memory location takes constant
time, that is, time independent of the particular location to be accessed. The
machine can work on real numbers of arbitrary precision for the same cost. The
elementary operations are:
1. the comparison of two real numbers,
2. the four arithmetic operations,
3. all the usual mathematical functions, such as logarithm, exponential, trigonometric functions, etc.,
4. the integer part computation.
The assumption that all numbers can be represented exactly allows us to ignore
all the problems related to numerical accuracy, as they occur in the real world. In
particular, the otherwise very relevant problems of robustness of these algorithms
in relation to rounding and numerical inaccuracies are not mentioned in this book.
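As a small illustration of the kind of inaccuracy that the real RAM model abstracts away, consider the following minimal Python sketch: hardware floating-point numbers cannot represent most reals exactly, whereas exact rational arithmetic can, at the price of operations that are no longer constant-time.

    from fractions import Fraction

    # With IEEE 754 doubles, identities that hold for real numbers may fail
    # because of rounding.
    print(0.1 + 0.2 == 0.3)              # False

    # Exact rational arithmetic behaves like the real RAM, but each operation
    # is no longer a single constant-time machine instruction.
    a = Fraction(1, 10) + Fraction(2, 10)
    print(a == Fraction(3, 10))          # True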
Output-sensitive complexity
An algorithm that solves a given problem usually builds, for a given input, a
result called the output, which embodies the solution to the problem. The size
of the output equals the number of memory units needed to store this result.
Obviously, the size of the output depends on the size of the input, but also on
the input itself.
For a given problem, the worst-case output size, or output size in the worst
case, is the function s(n) that upper bounds the output size for all inputs of
size n. The algorithm under consideration needs to at least write the output,
therefore the size of the output in the worst case is an elementary lower bound
on the running time complexity in the worst case.
For a given problem and a given input size, however, the output size can some-
times change a lot depending on the actual input given to the algorithm. For
instance, consider the problem of computing all the intersecting pairs of a set
of line segments in the plane. For a set of n segments, the input consists of 4n
real numbers, two for each endpoint. There might be as few as no intersections,
and as many as n(n - 1)/2. In this case it is interesting to have at hand adaptive
algorithms whose time complexity is a function of the output size. The number
of elementary operations executed by such an algorithm depends on the size of
the output for the instance of the problem, and not on the size of the output in
the worst case. For instance, in the problem of reporting all pairs of intersecting
line segments, the number of elementary operations carried out by the algorithm
should be a function of the number of intersecting pairs, which is not true of the
naive algorithm that tests all the pairs for intersection.
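For illustration, here is a minimal Python sketch of this naive algorithm. It assumes segments in general position (no shared endpoints, no three collinear endpoints), so that two segments cross exactly when the endpoints of each lie strictly on opposite sides of the line supporting the other; whatever the number a of intersecting pairs, the algorithm always performs n(n - 1)/2 tests.

    from itertools import combinations

    def orient(p, q, r):
        """Sign of the cross product (q - p) x (r - p)."""
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def segments_cross(s, t):
        """Proper crossing test, valid for segments in general position."""
        a, b = s
        c, d = t
        return (orient(a, b, c) * orient(a, b, d) < 0 and
                orient(c, d, a) * orient(c, d, b) < 0)

    def naive_intersecting_pairs(segments):
        # Tests all n(n-1)/2 pairs: the work done does not depend on the
        # number of pairs actually reported.
        return [(s, t) for s, t in combinations(segments, 2)
                if segments_cross(s, t)]

    segs = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((5, 0), (6, 1))]
    print(naive_intersecting_pairs(segs))    # exactly one crossing pair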
An adaptive algorithm can be analyzed in terms of both variables n and s,
the respective sizes of the input and the output. The worst-case complexity of
an adaptive algorithm is the function f (n, s) that upper bounds the number of
elementary operations needed for solving all the instances of the problem with
input size n and output size s. Likewise, the average-case complexity of such
an algorithm is the function g(n, s) that upper bounds the number of operations
carried out by the algorithm, averaged over all the instances of the problem with
input size n and output size s.
In this book, the reader will find many randomized algorithms, that is, algorithms
whose execution is to some extent random. Such an algorithm will make ran-
dom choices during its execution, and these choices will influence its subsequent
behavior. In all cases, the algorithm will output the correct answer to the given
problem, but the number of elementary operations needed for this will greatly
depend on the random choices. The efficiency of a randomized algorithm is then
evaluated as an average over all possible random choices. The analysis is then
called a randomized analysis. However, such an analysis by no means involves
any statistical hypothesis on the data itself. Rather, the complexity is averaged
over all possible executions of the algorithm in the worst case for the input.
It happens frequently that we have to answer many different questions of the same
kind about a given set of data. For example, given a set of lines in the plane, the
questions might ask for some kind of localization. Each query consists of a point
in the plane, and the question asks for the enclosing cell in the subdivision of the
plane induced by the lines. In cases such as this, it often pays off to compute
a data structure during a preprocessing phase, which in turn will be queried
repeatedly for all the different requests. The analysis therefore concerns both
the complexity of the preprocessing phase and that of answering the requests. In
some cases, the data structure is semi-dynamic, which means that it is possible
to add more data on-line; it may also be fully dynamic, meaning that deletions
as well as insertions are allowed. Each type of operation (insertion, deletion,
query) has its own associated cost. Sometimes, the cost of a single operation
is hard to evaluate, but one may estimate the compounded cost of a number of
these operations. The complexity of such a sequence divided by the number of
operations gives the amortized complexity of one operation. Such an analysis is
then called an amortized analysis.
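A classical illustration of amortized analysis, sketched below in Python, is the array that doubles its capacity whenever it is full: a single insertion may trigger a linear number of copies, but a sequence of n insertions never performs more than 2n copies in total, so the amortized cost of one insertion is O(1).

    class DynamicArray:
        """Array that doubles its capacity when full."""

        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.slots = [None]
            self.copies = 0              # total number of element copies so far

        def append(self, x):
            if self.size == self.capacity:
                # Expensive step: copy everything into a twice-larger array.
                self.capacity *= 2
                new_slots = [None] * self.capacity
                new_slots[:self.size] = self.slots
                self.slots = new_slots
                self.copies += self.size
            self.slots[self.size] = x
            self.size += 1

    a = DynamicArray()
    n = 1000
    for i in range(n):
        a.append(i)
    print(a.copies < 2 * n)              # True: O(1) amortized per insertion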
    A_1(n) = 2n,
    A_k(n) = A_{k-1}^{(n)}(1)    for k ≥ 2,

where A_{k-1}^{(n)} is the function obtained by composing the function A_{k-1} with itself n times. Henceforth, we will write A(n) for A_n(n). The Ackermann function is
increasing, and its rate of growth is very fast. Here are the first values of this func-
tion: A(1) = 2, A(2) = 4, A(3) = 16, A(4) is a tower of 65,536 powers of 2. The
functional inverse of this function, defined by α(n) = min{p ≥ 1 : A(p) ≥ n},
¹The notation log stands for the logarithm in base 2, which in this book will be assumed as
the base for all logarithm functions unless otherwise stated.
    lim_{n→∞} f(n) / g(n) = 0.
the complexity g(n) of algorithm B, that is f(n) = O(g(n)). The latter asymp-
totic statement implies that, from a certain input size on, algorithm A will beat
its competitor B in terms of running time. Nothing is said, however, about the
threshold beyond which this is the case (the value of this threshold depends on
the constants no and c concealed by the big-oh notation). One must therefore
refrain from choosing, for a particular practical situation, the algorithm whose
asymptotic analysis yields a complexity with the smallest order of magnitude.
The elegance and simplicity of an algorithm are both likely to lower the order of
magnitude of the concealed constants, and should be taken into consideration if
appropriate. For these reasons, this book usually presents several algorithms for
solving the same problem.
Sorting n numbers according to the natural (increasing) order is one of the rare
problems whose complexity can be found by direct reasoning. Given a finite
sequence of n numbers, X = (x_1, . . . , x_n), all in some totally ordered set (for instance, N or R), to sort them is to determine a permutation σ of {1, . . . , n} such that the sequence (x_{σ(1)}, x_{σ(2)}, . . . , x_{σ(n)}) is increasing.
Proof. The proof of this theorem is based on the idea of a decision tree. One
can always assume that the sequence under consideration does not contain the
same element twice; thus all the numbers in the sequence are distinct. For lack of
other information on the input data, the algorithm can only perform comparisons,
then branch accordingly, depending on the result of this comparison. Branching
is a binary process since there can be only two results to the comparison. The
execution of such an algorithm can be represented by a binary tree, the decision
tree. Each leaf represents a possible output from the algorithm; in our case,
an output is one of the n! possible permutations of the set {1,...,n}. Each
internal node represents some state of the algorithm, at which the algorithm will
perform a comparison. Depending on the result of this comparison and on its
current state, the algorithm will then branch to its right or left descendant in the
tree, and subsequently perform the comparison stored at that node, or output
the corresponding permutation if it reaches a leaf. All computations begin at the
root of the tree, and each execution therefore corresponds to a path from the root
to a leaf of the tree. The number of comparisons performed by the algorithm in
the worst case is thus the height of the decision tree. A possible decision tree
sorting three elements a, b, c is shown in figure 1.1. For our sorting problem, the
(Figure: transforming problem A into problem B, with transformation costs τ1(n) and τ3(n).)
decision tree has at least n! leaves, and its height h is thus at least log(n!) which,
according to Stirling's approximation formula,² is Ω(n log n). □
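Indeed, even without the full strength of Stirling's formula, a direct estimate already gives the lower bound:

    log(n!) = Σ_{i=1}^{n} log i  ≥  Σ_{i=⌈n/2⌉}^{n} log i  ≥  (n/2) log(n/2)  =  Ω(n log n).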
1. the input to problem A can be converted into an input suitable for problem B, using τ1(n) elementary operations,
2. it is possible to convert the solution to problem B on the latter input into a solution to problem A on the former input, using τ3(n) elementary operations, and
3. τ1(n) + τ3(n) = τ(n).
²Stirling's approximation formula states that n! = √(2πn) (n/e)^n (1 + 1/(12n) + o(1/n)), where e stands for the base of natural logarithms.
Data structures are the keystone on which all algorithmic techniques rely. The
definition of basic yet high-level data structures, with precise features and a well-
studied implementation, allows the designer of an algorithm to concentrate on
the core issues of the problem. For the programmer, it saves the tedious task of
creating and administrating each pointer.
Throughout this book, we describe data structures especially designed for rep-
resenting geometric objects and dealing with them. But computational geometers
also make extensive use of data structures that represent subsets or sequences of
objects. These structures can be used directly by the algorithms, or modified and
augmented for geometric use. The first part of this chapter recalls the terminol-
ogy and features of each basic data structure used in this book. It is useful to
know how these structures can be implemented and what their performances are.
The most delicate problem is undoubtedly the one addressed by dictionaries and
priority queues, which treat finite subsets of a totally ordered set (the universe).
To achieve better efficiency, these structures are usually encoded as balanced bi-
nary trees. For instance, the second part of this chapter describes red-black trees,
a class of balanced trees that can be used to implement dictionaries and priority
queues. Finally, when the universe is finite, dictionaries and priority queues can
be even more efficiently implemented by other more sophisticated techniques, the
characteristics of which are given without proof in the third part of this chapter.
The sole purpose of this chapter is to present, as far as data structures are
concerned, the information necessary for a thorough understanding of the forth-
coming algorithms. In particular, the authors by no means claim to present a
comprehensive account of this topic, and the interested reader is urged to refer
to the references given in the bibliographical notes.
in the list, which is called the top of the stack: to stack an element means to
insert it as the last element of the list, and to pop consists of deleting the element
that was stacked the most recently. Stacks are therefore particularly suited to
process the elements of a set in the LIFO order, which stands for "last in, first
out."
In the case of a queue, all insertions occur at the end of the list, whereas all
deletions take place at the beginning of the list. Queues are therefore well suited
to process elements in the FIFO order, which stands for "first in, first out." This
is the normal order for a waiting line, or queue, hence the name given to this
data structure.
Stacks or queues can always be implemented as general lists. There are more
specific methods to implement these data structures but we will not expand on
them in this book.
We call a dictionary any data structure that can perform queries, insertions, and
deletions. If it supports searching for the minimum as well, we call it a priority
queue. If it supports all the operations detailed above, we call it an augmented
dictionary.
Priority queues and dictionaries can be implemented using lists or arrays. When
the universe is totally ordered, it is often more efficient to use balanced data
structures such as red-black trees, described below.
A branch of the tree is a path that stretches from the root to a leaf of the tree.
A tree is considered to be balanced if all its branches have approximately the
same length. This property, to be made precise below, ensures the efficiency of
the data structure but complicates the insertion and deletion operations. Indeed,
after each such operation, the structure must be rebalanced. There are several
kinds of balanced trees, such as AVL trees, 2-3 or 2-3-4 trees, or even red-
black trees. There are also many ways in which these variants can be used
to implement dictionaries and priority queues. All the performances of these
solutions are equivalent and optimal: if the set S stored in the data structure has
n elements, the data structure occupies O(n) space and any insertion, deletion, or
query takes O(log n) time. For instance, the next section describes how to achieve
these performances using red-black trees, and analyzes the corresponding cost of
these operations.
1. The paths from the root to all the leaves have the same number of black
arcs.
3. There cannot be two consecutive red arcs along a path from the root to a
leaf.
All the nodes have a level, which is the number of arcs on the path from the root
to that node, and a black level, which is the number of black arcs on that path.
The number of black arcs on a path from the root to a leaf is called the black
height of the tree, since it does not depend on the particular leaf.
It is easy to see that a red-black tree is approximately balanced: the longest
branch cannot have more than twice as many arcs as the shortest.
We propose to show how such a data structure can be used to implement a
dictionary on a finite set S drawn from a totally ordered universe U. The red-
black tree is used as a searching data structure: to each node corresponds a key
and two pointers towards its children. The keys attached to the leaves serve to
represent the elements of S. The keys attached to the internal nodes serve as a
guide for the searching operations. The key attached to an internal node must
be greater than or equal to all the keys stored in its left subtree (the subtree rooted at its left child), and smaller than all the keys stored in its right subtree.
For instance, the key attached to an internal node can be systematically set to
the greatest of the keys stored in its left subtree. A left-hand depth-first traversal
of the tree visits all the nodes of the tree in the following order: the root first,
then recursively the nodes in the left subtree, and finally the nodes in the right
subtree. Such a traversal visits the leaves of the tree in the order of the elements
of S.
Along with the key and the pointers to its children, the information stored at
a node contains a special field to mark the color, either red or black, of the arc
linking this node to its parent. To simplify the exposition, the color of an arc is
often transferred to the node as well, and so we call a node black if it is linked
to its parent by a black arc, and red if it is linked to its parent by a red arc. By
convention, the root of the tree is always colored black. From now on, we denote
by the same letter N, O, P, Q, R, S, . . . both the node and the key stored at that
node.
Storage
Queries
To find out whether or not an element S of the universe U belongs to the set
S, we need only follow a branch of the tree. At each internal node N, the next
node in the branch is identified using a comparison between N and the key S
that we are searching for. If S < N, the search goes through the left child of
N; if S > N the search goes instead through the right child of N. The search
always ends up at a leaf S' of the tree: the answer is that S is present if S' = S,
and that S is missing from S if S' ≠ S. In the latter case, S' is the element
of S that immediately precedes or follows S according to the order on U. The
following lemma shows that, if there are n elements in S, such a search visits only O(log n) nodes of the tree, and therefore runs in O(log n) time.
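For illustration, the following minimal Python sketch (the class and function names are chosen here only for concreteness) implements this descent on a leaf-oriented search tree: internal nodes carry routing keys, leaves carry the elements of S, and the search for a key ends at the leaf holding either that key or one of its neighbours in the order on U.

    class Node:
        """Internal node: routing key and two children. Leaf: key only."""
        def __init__(self, key, left=None, right=None):
            self.key = key
            self.left = left
            self.right = right

        def is_leaf(self):
            return self.left is None and self.right is None

    def search(root, key):
        """Return the leaf reached when looking for `key`."""
        node = root
        while not node.is_leaf():
            # The key of an internal node is >= every key stored in its left
            # subtree, so equal keys are found by branching to the left.
            node = node.left if key <= node.key else node.right
        return node

    # A small hand-built tree storing S = {2, 5, 8, 11}; each internal node
    # carries the largest key of its left subtree.
    leaves = {k: Node(k) for k in (2, 5, 8, 11)}
    root = Node(5,
                Node(2, leaves[2], leaves[5]),
                Node(8, leaves[8], leaves[11]))

    print(search(root, 8).key)    # 8: present in S
    print(search(root, 7).key)    # 8: 7 is absent, the leaf holds a neighbour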
Lemma 2.2.1 If a red-black tree has n leaves, any path from the root to a leaf has at least (log n)/2 and at most 2 log n arcs.
Proof. The easiest proof of this result is to refer to a different kind of tree, the
2-3-4 tree. A 2-3-4 tree is a tree whose nodes have either 2, 3, or 4 descendants,
and all the paths from the root to the leaves have the same length, which is the
height of the 2-3-4 tree. From a red-black tree, it is easy to make a 2-3-4 tree by
merging all the nodes that are linked through red arcs (see figure 2.2). The height
Figure 2.2. The correspondence between red-black trees and 2-3-4 trees.
In this and the subsequent pictures, the black arcs of red-black trees are
represented in bold, the circles stand for internal nodes, rectangles for the
leaves, and triangles stand for arbitrary subtrees.
h of this 2-3-4 tree is exactly the same as the black height of the corresponding
red-black tree.
The red-black tree and its associated 2-3-4 tree have the same number of
leaves, n, and the height h of the 2-3-4 tree satisfies
    2^h ≤ n ≤ 4^h.
From this, it follows that the number h of black arcs on any branch is at least (log n)/2 and at most log n. The total number of arcs on such a branch cannot be less than h, nor can it be more than 2h. □
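Spelling the two inequalities out:

    n ≤ 4^h  implies  h ≥ (log n)/2,        2^h ≤ n  implies  h ≤ log n,

and since a branch of the red-black tree has between h and 2h arcs, its length lies between (log n)/2 and 2 log n.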
Insertions
left (resp. right) rotation if Q and R are both right (resp. left) children;
double right-left (resp. left-right) rotation if Q is a right child and R a left
child (resp. Q is a left child and R is a right child). Figure 2.4 shows only
a simple left rotation and a double right-left rotation. We leave it to the
reader to represent the symmetric rotations.
2. Should P have two red children, then the algorithm colors both children
black and colors P red instead (see figure 2.5), unless P is the root of the
tree in which case it is left black and nothing else is done. If the parent
of P is black or at the root of the tree, then the third constraint has been
restored, and the whole rebalancing task is over. If the parent of P is red,
then the default in the third rule has been carried up two levels towards
the root of the tree. Nodes R and Q are popped from the stack and the
next step takes over with node P, its parent, and its grandparent.
Figure 2.6. Red-black trees: deletions.
the third stage, only O(log n) nodes may need to be recolored, and only as many
(simple or double) rotations may need to be performed. Red-black trees therefore
allow a new element to be inserted in time O(log n).
Deletions
Third stage. We rebalance the tree obtained by the removal in the second
stage. This operation is carried out in steps. At the current step, the tree contains
one and only one short node: this is a node X such that the black height of the
subtree rooted at X is one arc smaller than that of other subtrees rooted at the
same black level in the tree. In the first step, the only short node is S'. Let X be
the current short node, Q its parent, and R the other child of Q. Node X being
the only short node, R cannot be a leaf of the tree.
1. Should R be black with two red children, then rebalancing can be obtained
by performing the rotation depicted in figure 2.7, case 1.
2. Should R be black with both a black and a red child, rebalancing can be
obtained by the double rotation depicted in figure 2.7, case 2.
3. Should R be black with two black children, two cases may arise. If node Q,
the parent of X and R, is red, then the tree can be rebalanced by changing
the colors as shown in figure 2.7, case 3a: Q is recolored in black and R in
red. If Q is black, the tree cannot be rebalanced in a single step. Changing
the colors as shown in figure 2.7, case 3b, makes the parent of Q become
the short node, and the next step takes over with this node as the short
node.
(Figure 2.7: rebalancing a red-black tree after a deletion; cases 1, 2, 3a, 3b, and 4.)
set of these n numbers. The tree can be built in O(n log n) time, uses O(n)
space, and the elements can be enumerated in order by performing a left-hand
depth-first traversal of the tree. The only operation on the numbers used in this
algorithm is comparison. Taking into account theorem 1.2.1, we have proved the
following:
Sometimes, the size of the underlying universe is just too big for this method
to be practical. Hashing methods can then be used as a replacement.
Perfect dynamic hashing is a method that stores the dictionary over a finite,
albeit huge, universe. In this method, random choices are made by the algorithm
during the execution of the insertion and deletion operations. Such algorithms
are called randomized below. The cost of these operations (insertions, deletions)
depends on the random choices made by the algorithm and can only be evaluated
on the average over all possible choices. Such an analysis is also said to be
randomized. Moreover, it is impossible to bound the cost of a single operation.
Finally, by combining both stratified trees and perfect dynamic hashing, one
may build a data structure that performs well on all the operations of an aug-
mented dictionary. Henceforth, this combination of data structures, a data struc-
ture in its own right, will be referred to as an augmented dictionary on a finite
universe. The theorem below summarizes its characteristics.
Table 2.1 summarizes further the performances of the different data structures
discussed here that may be used to implement a dictionary or a priority queue.
2.4 Exercises
Exercise 2.1 (Segment trees) Segment trees were created to deal with a collection of
intervals on the one-dimensional real line. Intervals may be created or deleted, provided
that the endpoints belong to a set known in advance. The endpoints are sorted, and
thought of as the integers {1, . . . , n} via a one-to-one correspondence that preserves the
order. The associated segment tree is a balanced binary tree, each leaf of which represents
an elementary interval of the form [i, i + 1]. Each node of the tree therefore corresponds
to an interval which is the union of all the elementary intervals associated with the leaves
of the subtree rooted at that node. Intervals of this kind will be called standard intervals,
and we will speak of a node instead of its associated standard interval.
The intervals of the collection are stored at the nodes of the tree. An interval I is
stored in the structure at a few nodes of the tree: a node V stores I only if its associated
standard interval is contained in I, but the standard interval of the parent of V is not.
1. Let l, resp. r, be the left, resp. right, endpoint of I. Let V_l be the standard elementary interval whose left endpoint is l, and let V_r be the standard elementary interval whose right endpoint is r. Let V_f be the smallest standard interval containing both V_l and V_r. The node V_f is the nearest common ancestor of both V_l and V_r, and it is called the fork of I. Show that the nodes which are marked as storing I are precisely the right children of the nodes on the path joining V_f to V_l in the tree, together with the left children of the nodes on the path joining V_f to V_r in the tree. Deduce from this that the nodes that store I correspond to a partition of I into O(log n) disjoint standard intervals, with at most two intervals at each level of the tree.
2. At each node, a secondary data structure accounts for the set of intervals stored
by that node. According to the application, the data structure may list the intervals or
simply maintain in a counter the number of these intervals. To add an interval to the
segment tree simply consists of adding it to each of the secondary data structures of the
nodes storing this interval, or incrementing the counter at these nodes. Deletions are
handled similarly. Assume that only a counter is maintained. Show that an insertion or
deletion can be performed in time O(log n). Show that the segment tree can be used to
count the number of intervals containing a given real number x, in time O(log n).
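For illustration, here is one possible Python sketch of such a segment tree with counters (the conventions below, with endpoints identified with the integers 1, . . . , n, are only one reasonable choice): an insertion adds 1 to the counters of the O(log n) canonical nodes of the inserted interval, and a stabbing query adds up the counters on the path from the root to the elementary interval containing the query point.

    class SegmentTreeNode:
        """Node covering the elementary intervals [lo, lo+1], ..., [hi, hi+1]."""

        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
            self.count = 0                      # intervals stored at this node
            if lo < hi:
                mid = (lo + hi) // 2
                self.left = SegmentTreeNode(lo, mid)
                self.right = SegmentTreeNode(mid + 1, hi)
            else:
                self.left = self.right = None   # leaf: one elementary interval

        def update(self, l, r, delta):
            """Add delta at the canonical nodes of [l, l+1], ..., [r, r+1]."""
            if r < self.lo or self.hi < l:      # disjoint standard interval
                return
            if l <= self.lo and self.hi <= r:   # standard interval contained
                self.count += delta
                return
            self.left.update(l, r, delta)
            self.right.update(l, r, delta)

        def stab(self, i):
            """Sum of counters on the path to the leaf of [i, i+1]."""
            total = self.count
            if self.left is not None:
                child = self.left if i <= self.left.hi else self.right
                total += child.stab(i)
            return total

    def insert_interval(tree, l, r):
        """Insert the interval [l, r], with integer endpoints 1 <= l < r <= n."""
        tree.update(l, r - 1, +1)

    def delete_interval(tree, l, r):
        tree.update(l, r - 1, -1)

    def stabbing_count(tree, x):
        """Number of stored intervals containing the real number x (1 <= x < n)."""
        return tree.stab(int(x))

    n = 8
    tree = SegmentTreeNode(1, n - 1)            # elementary intervals [1,2], ..., [7,8]
    insert_interval(tree, 2, 5)
    insert_interval(tree, 4, 7)
    print(stabbing_count(tree, 4.5))            # 2: both intervals contain 4.5
    delete_interval(tree, 2, 5)
    print(stabbing_count(tree, 4.5))            # 1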
Exercise 2.2 (Range trees) Given a set of n points S in Ed, we wish to build a data
structure to efficiently answer queries of the following kind: count the number of points
inside an axis-oriented hyper-rectangle, or report them. One solution consists of building
a range tree, a data structure particularly suited to this kind of query, which we describe
now.
* The first level of the structure is a segment tree T_1 (see exercise 2.1) built on the first coordinates of the points in S, that is, on the set {x_1(P) : P ∈ S}. For each node V of T_1, we denote by S_d(V) the set of those points P of S whose first coordinate x_1(P) belongs to the standard interval of V. The set S_{d-1}(V) is the projection of S_d(V) onto E^{d-1} parallel to the x_1-axis.
* If d ≥ 2, every node V of T_1 has a pointer towards a range tree for the set of points S_{d-1}(V) in E^{d-1}.
1. We first assume that the queries ask for the number of points in S inside a given hyper-rectangle R_d (the counting problem). Let q(S, R_d) be the time it takes to answer a query on the hyper-rectangle R_d. Let V_1 stand for the collection of all the nodes storing the projection of R_d onto the x_1-axis, and R_{d-1} be the projection of R_d parallel to the
From this, show that the maximum amount of time that a query can take on a set of n points in d dimensions is O((log n)^d).
Show that a query in the reporting case can be answered in O((log n)^d + k) time if k is the number of points to be reported.
2. Show that the preprocessing space requirement and time are both O(n(log n)^d).
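For illustration, the following Python sketch implements a simplified two-dimensional variant: the first level is a balanced tree over the points sorted by x-coordinate, and each node stores the sorted list of the y-coordinates of the points in its subtree instead of a full recursive range tree. A counting query visits O(log n) canonical nodes and performs one binary search in each, hence runs in O((log n)^2) time, in accordance with the bound above for d = 2.

    from bisect import bisect_left, bisect_right

    class RangeTree2D:
        """Node over a contiguous block of points sorted by x-coordinate."""

        def __init__(self, points):
            points = sorted(points)                   # sort by x, then y
            self.xmin, self.xmax = points[0][0], points[-1][0]
            self.ys = sorted(p[1] for p in points)    # y-coordinates of the subtree
            if len(points) > 1:
                mid = len(points) // 2
                self.left = RangeTree2D(points[:mid])
                self.right = RangeTree2D(points[mid:])
            else:
                self.left = self.right = None

        def count(self, x1, x2, y1, y2):
            """Number of stored points with x1 <= x <= x2 and y1 <= y <= y2."""
            if x2 < self.xmin or self.xmax < x1:      # disjoint in x
                return 0
            if x1 <= self.xmin and self.xmax <= x2:   # canonical node
                return bisect_right(self.ys, y2) - bisect_left(self.ys, y1)
            return (self.left.count(x1, x2, y1, y2) +
                    self.right.count(x1, x2, y1, y2))

    pts = [(1, 5), (2, 1), (3, 8), (5, 4), (6, 9), (8, 2), (9, 7)]
    tree = RangeTree2D(pts)
    print(tree.count(2, 8, 2, 8))     # points (3,8), (5,4), (8,2): prints 3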
Exercise 2.3 (Stratified tree) Let S be a subset of a finite, totally ordered universe
U. Let u be the number of elements of U, and without loss of generality assume that
u = 2^k. For convenience, we identify the set of possible keys with {0, 1, . . . , 2^k − 1}. A
stratified tree ST(U, S) that implements an augmented dictionary on S is made up of:
* a doubly linked list which contains the elements of U. Each record in this list
has three pointers sub, super, and rep, and a boolean flag marker to identify the
elements of S.
* a representative R with two pointers to the maximal and minimal elements in S,
and a boolean flag to detect whether S is empty or not.
* stratified trees ST(U_i, S_i) for the sets S_i = S ∩ U_i and the universes U_i = i 2^⌊k/2⌋ + {0, 1, . . . , 2^⌊k/2⌋ − 1}, with i ranging from 0 to 2^⌈k/2⌉ − 1. Depending on the parity of k, each sub-universe U_i contains √u or √(u/2) elements of U.
* A stratified tree ST(U', R') for the set R' of representatives of the ST(U_i, S_i). The representative R_i of ST(U_i, S_i) is the element whose key equals i in the set U' = {0, 1, . . . , 2^⌈k/2⌉ − 1}. Depending on the parity of k, the size of U' is √u or √(2u).
The trees ST(U_i, S_i) and the tree ST(U', R') are called the sub-structures of ST(U, S). In turn, ST(U, S) is called a super-structure of those trees. Pointers sub and super keep
a link between a record in the list and the corresponding record in the list of the sub-
structure (resp. super-structure). The pointer rep points toward the representative of
the structure.
1. Show that the stratified tree ST(U, S) can be stored in space O(u log log u), and can be built for an empty set S in time O(u log log u).
2. Show that each operation: insertion, deletion, location, minimum, predecessor,
successor, can be performed in time O(log log u).
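For illustration, the short Python sketch below performs the key arithmetic underlying this decomposition: a key x of U = {0, . . . , 2^k − 1} is split into the index i of the sub-universe U_i containing it and its offset inside U_i, and conversely.

    def split(x, k):
        """Return (i, offset) such that x = i * 2**(k // 2) + offset."""
        half = k // 2                      # the floor(k/2) low-order bits
        return x >> half, x & ((1 << half) - 1)

    def unsplit(i, offset, k):
        return (i << (k // 2)) | offset

    k = 5                                  # universe of size u = 2**5 = 32
    x = 27
    i, off = split(x, k)
    print(i, off)                          # 6 3: x lies in U_6 = 24 + {0, ..., 3}
    assert unsplit(i, off, k) == x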
Exercise 2.4 (Stratified trees and segment trees) Let U be a totally ordered, fi-
nite universe with u = 2^k elements. Consider a complete and balanced binary tree whose leaves are associated with the elements of U. Consider further the set {0, 1, . . . , k}
of levels of that tree, and build a segment tree T on this set. Show that you can do it
in such a way so that each sub-structure of the stratified tree built on the universe U
corresponds to a standard interval on T.
Shamos [192]. Persistent data structures as described in exercise 2.7 are due to Sarnak
and Tarjan [196]. Several geometric applications of persistent trees will be given in the
exercises of chapter 3.
The perfect dynamic hashing method (see exercise 2.6) was developed by Dietzfel-
binger, Karlin, Mehlhorn, auf der Heide, Rohnert, and Tarjan [84] and the augmented
dictionary on a finite universe is due to Mehlhorn and Näher [165]. See also the book by
Mehlhorn [163] for an extended discussion on hashing.
Chapter 3
Deterministic methods
used in geometry
The goal of this and subsequent chapters is to introduce the algorithmic methods
that are used most frequently to solve geometric problems. Generally speaking,
computational geometry has recourse to all of the classical algorithmic techniques.
Readers examining all the algorithms described in this book from a methodolog-
ical point of view will distinguish essentially three methods: the incremental
method, the divide-and-conquer method, and the sweep method.
The incremental method is perhaps the method that is emphasized most in this book. It is also the most natural method, since it consists of
processing the input to the problem one item at a time. The algorithm initiates
the process by solving the problem for a small subset of the input, then maintains
the solution to the problem as the remaining data are inserted one by one. In
some cases, the algorithm may initially sort the input, in order to take advantage
of the fact that the data are sorted. In other cases, the order in which the data are
processed is indifferent, sometimes even deliberately random. In the latter case,
we are dealing with the randomized incremental method, which will be stated
and analyzed at length in chapter 5. We therefore will not expand further on the
incremental method in this chapter.
The divide-and-conquer method is one of the oldest methods for the design of
algorithms, and its use goes well beyond geometry. In computational geometry,
this method leads to very efficient algorithms for certain problems. In this book
for instance, such algorithms are developed to compute the convex hull of a set of
n points in 2 or 3 dimensions (chapter 8), the lower envelope of a set of functions
(chapter 16), a cell in an arrangement of segments in the plane (exercise 15.9), or
even the Voronoi diagram of n points in the plane (exercise 19.1). In this chapter,
the principles underlying the method are outlined in section 3.1, and the method
is illustrated by an algorithm that has nothing to do with geometry: sorting a
sequence of real numbers using merging (the so-called merge-sort algorithm).
Dividing. Divide the problem into simpler subproblems. Such problems have
a smaller input size, that is, if the input data are elementary, the input to
these problems is made up of some but not all of the input data.
Solving. Separately solve all the subproblems. Usually, the subproblems are
solved by applying the same algorithm recursively.
Merging. Merge the subproblem solutions to form the solution to the original
problem.
The performance of the method depends on the complexities of the divide and
merge steps, as well as on the size and number of the subproblems. Assume that
each problem of size n is divided into p subproblems of size n/q, where p and q
are some integer constants and n is a power of q. If the divide and merge steps
perform O(f(n)) elementary operations altogether in the worst case, then the
time complexity t(n) of the whole algorithm satisfies the recurrence

    t(n) = p t(n/q) + O(f(n)).

Usually, the recursion stops when the problem size is small enough, for instance smaller than some constant n_0. Then k = ⌈log_q(n/n_0)⌉ is the depth of the recursive calls (log_q stands for the logarithm in base q), and the recurrence solves to

    t(n) = O( p^k + Σ_{j=0}^{k-1} p^j f(n/q^j) ).

In this expression, the first term corresponds to the time needed to solve all the elementary problems generated by the algorithm. The second term reflects the time complexity of all the merge and divide steps taken together. If f is a multiplicative function, i.e. such that f(xy) = f(x)f(y) (which in particular is true when f(n) = n^a for some constant a), then t(n) satisfies

    t(n) = O( p^k + f(n) Σ_{j=0}^{k-1} (p/f(q))^j ).

* If p > f(q), then t(n) = O(n^{log p / log q}).
* If p = f(q), then t(n) = O(n^{log p / log q} log n), and further if f(n) = n^a, then t(n) = O(n^a log n).
* If p < f(q), then t(n) = O(n^{log f(q) / log q}), and further if f(n) = n^a, then t(n) = O(n^a).
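As a concrete instance, here is a minimal Python sketch of the merge-sort algorithm mentioned above: the sequence is divided into p = 2 subsequences of size n/q = n/2, each is sorted recursively, and the two sorted halves are merged in f(n) = O(n) time; since p = f(q) = 2, the analysis above yields t(n) = O(n log n).

    def merge_sort(seq):
        """Sort a sequence of numbers by divide-and-conquer."""
        if len(seq) <= 1:                       # elementary problem
            return list(seq)
        mid = len(seq) // 2
        left = merge_sort(seq[:mid])            # solve the two subproblems
        right = merge_sort(seq[mid:])
        return merge(left, right)               # merge step, linear time

    def merge(left, right):
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out

    print(merge_sort([3.5, -1.0, 2.0, 7.25, 0.0]))   # [-1.0, 0.0, 2.0, 3.5, 7.25]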
1. the information stored in Y is related to the position of the sweep line, and
changes when this line moves,
The event queue X stores the sequence of events yet to be processed. This
sequence can be entirely known at the beginning of the algorithm, or discovered
on-line, i.e. as the algorithm processes the events. The sweep algorithm initializes the structure Y for the leftmost position x = −∞ of the sweep line, and the
sequence X with whatever events are known from the start (in increasing order of
their abscissae). Each event is processed in turn, and Y is updated. Occasionally,
new events will be detected and inserted in the queue X, or, on the contrary, some
events present in the queue X will no longer have to be processed and will be
removed. When the event is processed, the queue X gives access to the next
event to be processed.
When all the events are known at the start of the algorithm, the queue X may be implemented with a mere singly linked list. However, when some events are
to be known only on line, the event queue must handle not only the minimum
operation, but also queries, insertions, and sometimes even deletions: it is a
priority queue (see chapter 2).
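For illustration, here is one simple way to realize such a priority queue in Python, using a binary heap together with the classical lazy-deletion trick: events that become useless are merely marked as cancelled and are skipped when they reach the top of the heap.

    import heapq

    class EventQueue:
        """Priority queue of events keyed by abscissa, with lazy deletion."""

        def __init__(self):
            self.heap = []
            self.cancelled = set()

        def insert(self, x, event):
            heapq.heappush(self.heap, (x, event))

        def remove(self, x, event):
            # The entry stays in the heap; it is discarded when popped.
            self.cancelled.add((x, event))

        def pop_min(self):
            while self.heap:
                item = heapq.heappop(self.heap)
                if item in self.cancelled:
                    self.cancelled.discard(item)
                    continue
                return item
            return None

    q = EventQueue()
    q.insert(2.0, "left endpoint of s1")
    q.insert(3.5, "intersection s1/s2")
    q.insert(5.0, "right endpoint of s2")
    q.remove(3.5, "intersection s1/s2")      # this event is no longer needed
    print(q.pop_min())                       # (2.0, 'left endpoint of s1')
    print(q.pop_min())                       # (5.0, 'right endpoint of s2')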
The choice of the data structure Y depends on the nature of the problem and
may be handled through multiple components. More often than not, each of these
components must handle a totally ordered set of objects, and the corresponding
operations: query, insertion, deletion, sometimes even predecessor or successor.
The appropriate choice is that of a dictionary, or an augmented dictionary (see
chapter 2).
The sweep method can sometimes be useful in three or more dimensions. The
generalization consists of sweeping the space Ed by a hyperplane perpendicular
to the xd-axis. The state of the sweep is stored in a data structure Y associated
with the sweep hyperplane, and the set of events is the set of positions of the
sweep hyperplane at which the state of the sweep Y changes. The data structure
Y often maintains a representation of a (d - 1)-dimensional object contained in
the sweep hyperplane. The sweep method in higher dimensions, therefore, often
consists of replacing a d-dimensional problem by a sequence of (d- 1)-dimensional
problems.
Figure 3.1. Computing the intersections of a set of line segments using the sweep method.
which intersect the vertical sweep line A. Such segments are said to be active at
the current position of the sweep line. The structure Y stores the active segments
in the order of the ordinates of their intersection point with the line A. The order
of the sequence, or the sequence itself, is modified only when the line sweeps over
the endpoint of a segment or over an intersection point.
1. If A sweeps over the left endpoint of a line segment S (that is to say, the
endpoint with the smaller abscissa), this segment S is added to the structure
Y.
2. If A sweeps over the right endpoint of a line segment S (that is to say, the
endpoint with the greater abscissa), this segment S is removed from the
structure Y.
3. If A sweeps over the intersection of two segments S and S', these segments
S and S' switch their order in the sequence stored in Y.
The set of events therefore includes the sweep line passing over the endpoints
of the segments of S, and over the intersections. The abscissae of the endpoints
are known as part of the input, and we wish to compute the abscissae of the
intersection points. A prospective intersection point I is known when two active
segments become consecutive in the sequence stored in Y. The corresponding
event is then stored in the event queue X. The state of the event queue is shown
for a particular position of A on figure 3.1: each event is marked by a point on
the x-axis.
At the beginning of the algorithm, the queue X stores the sequence of endpoints
of the segments in S ordered by their abscissae. The data structure Y is empty.
Case 1. the event is associated with the left endpoint of a segment S. This
segment is then inserted into Y. Let pred(S) and succ(S) be the active
segments which respectively precede and follow S in Y. If pred(S) and S
(resp. S and succ(S)) intersect, their intersection point is inserted into X.
Case 2. the event is associated with the right endpoint of a segment S. This seg-
ment is therefore queried and removed from the structure Y. Let pred(S) and
succ(S) be the active segments which respectively preceded and followed
S in Y. If pred(S) and succ(S) intersect in a point beyond the current
position of the sweep line, this intersection point is queried in the structure
X and the corresponding event is inserted there if it was not found.
Case 3. the event is associated with an intersection point of two segments S
and S'. This intersection point is reported, and the segments S and S' are
exchanged in Y. Assuming S is the predecessor of S' after the exchange,
S and its predecessor pred(S) are tested for intersection. In the case of
a positive answer, if the abscissa of their intersection is greater than the
current position of the sweep line, this point is queried in the structure X
and the corresponding event is inserted there if it was not found. The same
operation is performed for S' and its successor succ(S').
To prove the correctness of this algorithm, it suffices to notice that every in-
tersecting pair becomes a pair of active consecutive segments in Y, when the
abscissa of the sweep line immediately precedes that of their intersection point.
This pair is always tested for intersection at this point, if not before, therefore
the corresponding intersection point is always detected and inserted into X, to
be reported later.
It remains to see how to implement the structures Y and X. The structure Y
contains at most n segments at any time, and must handle queries, insertions,
deletions, and predecessor and successor queries: it is an augmented dictionary
(see section 2.1). If this dictionary is implemented by a balanced tree, each
query, insertion, and deletion can be performed in time O(log n), and finding
predecessors and successors takes constant time.
The event queue X will contain at most O(n + a) events, if a stands for the
number of intersecting pairs among the segments in S. This structure must
handle queries, insertions, deletions, and finding the minimum: it is a priority
queue (see section 2.1). Again, a balanced binary tree will perform each of these
operations in O(log(n + a)) = O(log n) time.
The global analysis of the algorithm is now immediate. The initial step that
sorts all the 2n endpoints according to their abscissae takes time O(n log n). The
structure X is initialized and built within the same time bound. Next, each of the
2n + a events is processed in turn. Each event requires only a constant number
of operations to be performed on the data structures X and Y and is therefore
handled in time O(log n). Overall, the algorithm has a running time complexity of O((n + a) log n) and requires storage O(n + a).
The algorithm can be slightly modified to avoid using more than O(n) storage.
It suffices, while processing any of cases 1 to 3, to remove from the event queue
any event associated with two active but non-consecutive segments. In this way,
the queue X contains only O(n) events at any time, and yet the event immediately
following the current position of the sweep line is always present in X. Indeed,
this event is associated either with an endpoint of a segment in S, or with two
intersecting segments which therefore must be consecutive in X. Some events
can be inserted into and deleted from X several times before they are processed,
but this does not change the running time complexity of the algorithm, as the
above scheme can be carried out using only a constant number of operations in
the data structures X and Y at each step.
Theorem 3.2.1 The intersection points of a set of segments in the plane can be
computed using the sweep method. If the set of n segments in general position
has a intersecting pairs, the resulting algorithm runs in O((n + a) log n) time and O(n) space.
Figure 3.2. (a) The vertical decomposition Dec(S) of a set of line segments S in the plane.
(b) Its simplified decomposition Dec, (S).
are vertical. Some degenerate ones are triangular (with only one vertical side),
or semi-infinite (bounded at top or bottom by a segment portion with two semi-
infinite walls on both sides), or doubly infinite (a slab bounded by two vertical
lines on either side), or even a half-plane (bounded by only one vertical line).
It is easy to modify the above algorithm to compute not only the intersection
points, but also the vertical decomposition of the given set of line segments.
3.4 Exercises
Exercise 3.1 (Union, intersection of polygonal regions) By a polygonal region,
we mean a connected area of the plane bounded by one or more disjoint polygons (a
polygonal region may not always be simply connected, and may have holes). Show how
to build the union or intersection of k polygonal regions using a sweep algorithm. Show
that if the total complexity of the regions (the number of sides of all the polygons that
bound it) is n, and the number of intersecting pairs between all the sides of all the
polygonal regions is a, the algorithm will run in O((n + a) log n) time.
Exercise 3.2 (Detecting intersection) Show that to test whether any two segments
in a set S intersect requires at least time Ω(n log n). Show that the sweep algorithm can be modified to perform this test in time O(n log n).
Exercise 3.3 (Computing the intersection of curved arcs) Modify the sweep al-
gorithm described in subsection 3.2.2 so as to report all the intersection points in a family
of curved arcs. The arcs may or may not be finite. We further assume that any two arcs
have only a bounded number of intersection points, which may be computed in constant
time.
Hint: Do not forget to handle the events where the arcs have a vertical tangent.
Exercise 3.4 (Arbitrary sets of segments) Sketch the changes to be made to the
sweep algorithm so that it still works on arbitrary sets of segments, getting rid of the
assumptions about general position. The algorithm should run in time O((n + a) log n)
where a is the number of intersecting pairs.
Exercise 3.5 (Location in a planar map) A planar map of size n is a planar subdi-
vision of the plane E2 induced by a set of n segments which may intersect only at their
endpoints. To locate a point in the planar map is to report the region of the subdivision
that this point lies in. Show that a data structure may be built in time O(n log n) and space O(n) to support location queries in time O(log n).
Hint: The vertical lines passing through the endpoints of the segment divide the plane
into vertical strips ordered by increasing abscissae. The segments that intersect a strip
form a totally ordered sequence inside this strip, and two sequences corresponding to two
consecutive strips differ only in a constant number of positions. A sweep algorithm may
use persistent structures (see exercise 2.7) to build the sequence of such lists.
Hint: The algorithm proceeds by using the divide-and-conquer method. Each merge
step computes the union of two polygonal regions and can be performed using the sweep
method. Each intersection between the edges of these regions is a vertex of their union,
therefore there can be at most a linear number of such intersections.
Exercise 3.7 (Selecting the k-th element) Let S be a set of n elements, all belong-
ing to a totally ordered universe. A k-th element of S is any element S of S such that
there are at most k - 1 elements in S strictly smaller than S and at least k elements
smaller than or equal to S. Show that it is possible to avoid sorting S yet still compute
a k-th element in time O(n).
To analyze the complexity of such an algorithm, observe that |S_1| ≤ 3n/4 and that |S_2| ≤ 3n/4. Then, if n > 50, show that the time complexity of the algorithm satisfies the recurrence

    t(n) ≤ t(n/5) + t(3n/4) + cn,

where c is a constant. Show that this recurrence solves to t(n) = O(n).
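For illustration, here is a minimal Python sketch of such a linear-time selection, following the classical scheme based on the median of the medians of groups of five elements (the threshold 50 for switching to brute force mirrors the statement of the exercise):

    def select(S, k):
        """Return a k-th element of S (1 <= k <= len(S)) without sorting S."""
        if len(S) <= 50:                       # small instances: brute force
            return sorted(S)[k - 1]
        # Median of the medians of groups of five elements, used as a pivot.
        groups = [S[i:i + 5] for i in range(0, len(S), 5)]
        medians = [sorted(g)[len(g) // 2] for g in groups]
        pivot = select(medians, (len(medians) + 1) // 2)
        smaller = [x for x in S if x < pivot]
        equal = [x for x in S if x == pivot]
        larger = [x for x in S if x > pivot]
        if k <= len(smaller):
            return select(smaller, k)
        if k <= len(smaller) + len(equal):
            return pivot
        return select(larger, k - len(smaller) - len(equal))

    data = [17, 3, 44, 8, 29, 1, 56, 23, 12, 90, 5, 61] * 20
    k = 100
    assert select(data, k) == sorted(data)[k - 1]
    print(select(data, k))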
Hint: One may use a sweep algorithm that maintains the intersection of the union of the
rectangles with the sweep line, using a segment tree (see exercise 2.1). The perimeter or
the area can be obtained in time O(n log n), and the complete description of the boundary
in time O((n + k) log n) if this boundary has k edges.
pairs, the induced vertical decomposition in optimal O(n log n + a) time. In degenerate
cases, the number b of intersection points can be much lower than a, and Burnikel,
Mehlhorn and Schirra [40] have shown that it still is possible to compute the vertical
decomposition in O(n log n + b) time. In chapter 5, we describe a randomized algorithm
(that is, an algorithm which makes random choices during its execution) which runs in
time O(n log n + a) on the average over all possible random choices it can make.
Persistent data structures and the idea of using them for locating a point in a planar
map (as in exercise 3.5) are due to Sarnak and Tarjan [196]. Segment trees (see exer-
cise 2.1) are especially suitable for solving many problems on rectangles. The solution
to exercise 3.8 can be found in the book by Preparata and Shamos [192].
Chapter 4
Random sampling
4.1 Definitions
4.1.1 Objects, regions, and conflicts
In the framework presented here, any geometric problem can be formulated in
terms of objects, regions, and conflicts between these objects and regions.
Objects are elements of a universe O, usually infinite. The input to some
problem will be a set S of objects of O. The objects under consideration are
typically subsets of the Euclidean space E^d such as points, line segments, lines,
half-planes, hyperplanes, half-spaces, etc.
A region is a member of a set F of regions. Each region is associated with two
sets of objects: those that determine it, and those that conflict with it.
The set of objects that determine a region is a finite subset of O, of cardinality
bounded by some constant b. The constant b depends on the nature of the
problem, but not on the actual instance nor on its size. This restriction is required
for all the probabilistic theorems to be expressed within the framework.
The set of objects that conflict with a given region is usually infinite and is
called the domain of influence of the region.
Let S be a set of objects. A region F of F is defined over S if the set of objects
that determines it is contained in S. A region F is said to be without conflict
over S if its domain of influence contains no member of S, and otherwise is said
to have j conflicts over S if its domain of influence contains j objects of S.
For each geometric application, the notions of objects, regions, and conflicts
are defined in such a way that the problem is equivalent to finding all the regions
defined and without conflict over S.
Let us immediately discuss a concrete example. Let S be a set of n points
in the d-dimensional Euclidean space Ed. The convex hull of S is the smallest
convex set containing S; suppose we wish to compute it. Assume the points are in
general position.¹ The convex hull conv(S) is a polytope whose special properties
will be studied further in chapter 7. For now, it suffices to notice that, in order
to compute the convex hull, we have to find all the subsets of d points in S such
that one of the half-spaces bounded by the hyperplane passing through these d
points contains no other point that belong to S (see figure 4.1). In this example,
the objects are points, and the regions are open half-spaces in Ed. Every set of
d points determines two regions: the open half-spaces whose boundaries are the
hyperplane passing through these points. A point is in conflict with a half-space
if it lies inside it. To find the convex hull, one must find all the regions determined
by points of S and without conflict over S.
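In the plane (d = 2), these definitions can be made concrete with a few lines of code. The brute-force sketch below enumerates the regions determined by pairs of points and keeps those without conflict, which are exactly the edges of the convex hull. It only illustrates the definitions (it runs in cubic time) and is not one of the book's algorithms; the names are ours.

def hull_edges(points):
    # points: list of (x, y) tuples in general position (no three collinear).
    # A pair (a, b) determines two regions: the two open half-planes bounded by
    # the line ab.  A point conflicts with a half-plane if it lies strictly inside it.
    def side(a, b, p):          # sign of the cross product (b - a) x (p - a)
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    edges = []
    for i, a in enumerate(points):
        for b in points[i + 1:]:
            signs = [side(a, b, p) for p in points if p is not a and p is not b]
            if all(s > 0 for s in signs) or all(s < 0 for s in signs):
                # All remaining points lie strictly on one side, so the opposite
                # open half-plane is a region defined over S and without conflict:
                # the segment ab is an edge of the convex hull.
                edges.append((a, b))
    return edges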
The preceding definitions call for a few comments.
Remark 1. A region is determined by a finite and bounded number of objects
and this restriction is the only fundamental condition that objects, regions, and
conflicts must satisfy. Nevertheless, we do not demand that all the regions be
determined by exactly the same number of objects. In the case of the convex hull
of n points in E^d, all the regions are determined by exactly d points. One may
envision other settings (as in the case of the vertical decomposition of a set of line
segments in the plane, discussed in subsection 5.2.2), where the regions can be
determined by a variable number i of objects, provided that 1 ≤ i ≤ b for some
constant b.
Remark 2. A region does not conflict with the objects that determine it. This
¹A set of points is in general position if every subset of k + 1 ≤ d + 1 points is affinely
independent, or in other words if it generates an affine subspace of dimension k.
simple convention greatly simplifies the statements and proofs of the theorems
below, and does not modify their meaning. In the case of the convex hull, this
can be easily achieved by defining the domain of influence of a region as an open
half-space.
Remark 3. A region is characterized by two sets of objects: the set of objects
that determine it, and the set of objects that conflict with it. Regions determined
by different objects will be considered as different, even if they share the same
domain of influence. In this context, a set S of objects is in general position
precisely if any two regions determined by different subsets of S have distinct
domains of influence.
Remark 4. A set of b or fewer objects may determine one, or more, or zero
regions. Usually, the number of regions determined by a given set of (less than b)
objects is bounded by a constant. For instance, in the case of convex hulls, every
subset of d points determines exactly two regions. In this case, the total number
of regions defined over a set of cardinality n is O(n^b).
If S is a finite set of objects, say with n elements, we denote by F(S) the set
of regions defined over S and, for each integer j in [0, n], we denote by F_j(S)
the set of all regions defined over S that have j conflicts over S. In particular,
F_0(S) is the set of those regions that are defined over S and without conflict over
S. Furthermore, we denote by F_{≤k}(S) the subset of regions defined over S that
have at most k conflicts over S.
When the regions are determined by a variable number i of objects (1 ≤ i ≤ b),
the preceding notation may be refined: we denote by F^i_j(S), F^i_{≤k}(S), F^i_{≥k}(S) the
subsets of those regions defined by exactly i objects of S, with (respectively)
exactly, at most, at least k conflicts with the objects of S.
From now on, we are primarily interested in the regions defined over a random
sample R of S. Generally speaking, if g(R) is a function of the sample R,
we denote by g(r, S) the expected value of g(R) for a random r-sample of S. In
particular, the following functions are defined. We denote by f_j(R) the number
of regions defined and with j conflicts over a subset R of S (in mathematical
notation, f_j(R) = |F_j(R)|). Following our convention, f_j(r, S) denotes the ex-
pected number of regions defined and with j conflicts over a random r-sample of
S. Likewise, f^i_j(R) stands for the number of regions defined by i objects of R
and with j conflicts over R (in mathematical notation, f^i_j(R) = |F^i_j(R)|). Then
f^i_j(r, S) is the expected number of such regions for a random r-sample of S.
4.2 Probabilistic theorems
In this section, we prove two probabilistic theorems, the sampling theorem and
the moment theorem. These two theorems lay the foundations for our analysis
of randomized algorithms as described in chapters 5 and 6. The reader mostly
interested in the algorithmic applications of these theorems may skip this section
in a first reading. In order to understand the results, it would be enough to
memorize the definition of a moment, to look up lemma 4.2.5, and to admit
corollary 4.2.7.
The probabilistic theorems below are based on certain combinatorial properties
of the geometric objects. The probabilities involved concern mainly random
samples from the input data. In particular, these theorems do not make any
assumptions on the statistical distribution of the input data. The theorems are
stated in the formal framework introduced in the preceding section. Nevertheless,
to shape the intuition of the reader, we start by stating them explicitly for the
specific problem of computing the convex hull of a set of points in the plane.
Let S be a set of n points in the plane, assumed to be in general position,
let k be an integer smaller than n and let R be a random sample of S of size
r = ⌊n/k⌋. The sampling theorem links the number of half-planes defined over
S and containing at most k points of S, with the expected number of half-planes
defined and without conflict over R, which is precisely the number of edges of
the convex hull conv(R). Let A and B be points of S. Segment AB is an edge of
the convex hull conv(R) if and only if A and B are points of R and also one of the
half-planes H_{AB}^+ and H_{AB}^- bounded by the line AB does not contain any point
of R. The sampling theorem relies on the fact that the segment AB joining two
points of S is an edge of the convex hull conv(R) with a probability that increases
as the smaller of the numbers of points in H_{AB}^+ and H_{AB}^- decreases.
The moment theorem concerns the number of points in S and in its sample R
that belong to some half-plane. If the size of R is large enough, the sample is
representative of the whole set, and the number of points of R in a half-plane is
roughly the number of points of S in this half-plane scaled by the appropriate
factor r/n.
In fact, the moment theorem is a little more restrictive and concerns only
those half-planes defined and without conflict over the sample. Any edge E of
conv(R) corresponds to a region defined and without conflict over R: the half-
plane H^-(E) bounded by the line supporting E that contains no point of R.
The first moment of R relative to S, or moment of order 1, is defined to be the
sum, over all edges E of the convex hull conv(R), of the number of points of S
lying inside H^-(E). In other words, the moment of order 1 of R with respect to
S counts each point of S \ R with a multiplicity equal to the number of edges
of conv(R) whose supporting lines separate it from conv(R) itself. Figure 4.3
indicates the multiplicity of each point, and the first-order moment of the sample
is 16.
The moment theorem shows that, if the size of the sample is big enough, the
expected moment of order 1 is at most n - r.
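For the planar convex hull, the first-order moment is easy to compute explicitly. The sketch below (names are ours; it reuses the brute-force hull_edges sketch from section 4.1 and assumes |R| ≥ 3 points in general position) counts, for each hull edge of the sample R, the points of S lying in the open half-plane on the outer side of that edge, which is exactly the multiplicity counting of figure 4.3.

def first_moment(R, S):
    # Moment of order 1 of the sample R with respect to S: for each edge E of
    # conv(R), count the points of S lying inside the conflict half-plane H^-(E).
    def side(a, b, p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    total = 0
    for a, b in hull_edges(R):
        # The sign of the inner side is given by any third point of the sample.
        inner = next(side(a, b, q) for q in R if q is not a and q is not b)
        # Count the points of S \ R strictly on the outer side of the supporting line.
        total += sum(1 for p in S if p not in R and side(a, b, p) * inner < 0)
    return total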
The sampling theorem yields an upper bound on the number of regions defined
and with at most k conflicts over a set S of n elements. This bound depends
on the expected number of regions defined and without conflict over a random
⌊n/k⌋-sample of S. The proof of this theorem relies on the simple idea that,
the fewer objects in conflict with a region, the more likely this region is to have
no conflict with a random sample R of S. The proof uses the two fundamental
lemmas below.
Lemma 4.2.1 Let S be a set of n objects and F a region in conflict with j objects
of S and determined by i objects of S. If R is a random r-sample of S, the probability
p^i_{j,k}(r) that F be a region defined and with k conflicts over R is
    p^i_{j,k}(r) = \binom{j}{k} \binom{n-i-j}{r-i-k} \Big/ \binom{n}{r}.
Consequently, the expected number f^i_k(r, S) of regions defined by i objects of S and
with k conflicts over a random r-sample of S is
    f^i_k(r, S) = \sum_{j=0}^{n-i} |F^i_j(S)| \, p^i_{j,k}(r)
               = \sum_{j=0}^{n-i} |F^i_j(S)| \binom{j}{k} \binom{n-i-j}{r-i-k} \Big/ \binom{n}{r}.
Proof. The expected number of regions in the set F^i_k(R) is the sum, over all the
regions determined by i objects of S, of the probability that this region belongs
to the set F^i_k(R). This probability is given by lemma 4.2.1 above. □
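The closed form of lemma 4.2.1, as written above, is easy to check numerically. The small Monte Carlo sketch below compares it with an empirical frequency for one region; the parameter values and the function names are arbitrary choices of ours (they must satisfy 0 ≤ k ≤ j and i + k ≤ r).

import random
from math import comb

def p_exact(n, r, i, j, k):
    # Probability that a region determined by i objects of S and in conflict with
    # j objects of S is defined and has exactly k conflicts over a random r-sample.
    return comb(j, k) * comb(n - i - j, r - i - k) / comb(n, r)

def p_empirical(n, r, i, j, k, trials=200_000):
    determiners = set(range(i))             # the objects that determine the region
    conflictors = set(range(i, i + j))      # the objects that conflict with it
    hits = 0
    for _ in range(trials):
        sample = set(random.sample(range(n), r))
        if determiners <= sample and len(sample & conflictors) == k:
            hits += 1
    return hits / trials

# e.g. p_exact(30, 10, 3, 5, 1) and p_empirical(30, 10, 3, 5, 1) agree to about
# two decimal places.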
Proof. For each i, 1 ≤ i ≤ b, we shall prove the following inequality bounding
the number of regions determined by i objects:
    |F^i_{≤k}(S)| ≤ 4(b+1)^i k^i \, f^i_0(⌊n/k⌋, S).
Then the theorem can be easily proved by summing over all the values of i between
1 and b.
For this, it suffices to show that, for every j ≤ k and for r = ⌊n/k⌋,
    \frac{\binom{n-i-j}{r-i}}{\binom{n}{r}} \;\geq\; \frac{\binom{n-i-k}{r-i}}{\binom{n}{r}} \;\geq\; \frac{1}{4(b+1)^i k^i}.
Indeed,
    \frac{\binom{n-i-k}{r-i}}{\binom{n}{r}} = \frac{r!\,(n-i)!}{(r-i)!\,n!} \cdot \frac{(n-r)!\,(n-i-k)!}{(n-r-k)!\,(n-i)!}.
We compute
    \frac{(n-r)!\,(n-i-k)!}{(n-r-k)!\,(n-i)!} \;\geq\; \left(\frac{n-r-k+1}{n-i-k+1}\right)^k \;\geq\; \left(\frac{n-n/k-k+1}{n-k}\right)^k \;\geq\; \left(1-\frac{1}{k}\right)^k \;\geq\; \frac{1}{4} \quad (\text{if } k \geq 2),
and
    \frac{r!\,(n-i)!}{(r-i)!\,n!} = \prod_{l=0}^{i-1} \frac{r-l}{n-l} \;\geq\; \left(\frac{r-i+1}{n}\right)^i \;\geq\; \left(\frac{1}{(b+1)k}\right)^i,
the last inequality holding as soon as ⌊n/k⌋ ≥ b + 1. Every region of F^i_{≤k}(S) is thus
defined and without conflict over a random ⌊n/k⌋-sample of S with probability at least
1/(4(b+1)^i k^i), which proves the announced inequality. □
Remark 1. The sampling theorem deals with the numbers |F_{≤k}(S)| of regions
with at most k conflicts, for values of k between 2 and n/(b+1).
For the case of regions without or with at most one conflict, however, it is
possible to prove the following bound
    |F_0(S)| \leq |F_{≤1}(S)| \leq |F_{≤2}(S)| \leq 4(b+1)^b 2^b f_0(⌊n/2⌋, S),
valid whenever n ≥ 2(b + 1).
Moreover, for values of k close to n, there is always the trivial bound
    |F_{≤k}(S)| \leq |F(S)| = O(n^b)
if, as in remark 4 of subsection 4.1.1, we suppose that each subset of size at most
b determines at most q regions, for a constant number q that depends on the
interpretation of objects and regions.
Remark 2. The sampling theorem yields a deterministic combinatorial result
when an upper bound on f_0(⌊n/k⌋, S) can be derived. For instance, in chapter 14,
we will use an upper bound on the number of faces of a d-dimensional polytope
to yield, via the sampling theorem, an upper bound on the number of faces at
level at most k in an arrangement of hyperplanes.
The following corollary is very useful for analyzing the average performance of
randomized algorithms. It shows that the expected number of regions defined and
with one or two conflicts over a random r-sample of a set S is of the same order
of magnitude as the expected number of regions defined and without conflict over
such a sample.
Corollary 4.2.4 Let S be a set of n objects, with n ≥ 2(b + 1). For each integer
r such that n ≥ r ≥ 2(b + 1), we have
    f_1(r, S) \leq \beta f_0(⌊r/2⌋, S),
    f_2(r, S) \leq \beta f_0(⌊r/2⌋, S),
where f_j(r, S) is the expected number of regions defined and with j conflicts over
a random r-sample of S, and β is the real constant
    \beta = 4(b + 1)^b 2^b.
Proof. Let R be a subset of S of size r, with r ≥ 2(b + 1). Applied to R,
remark 1 following theorem 4.2.3 yields
    |F_1(R)| \leq 4(b+1)^b 2^b f_0(⌊r/2⌋, R).
The first inequality is obtained by taking expectations on the two sides of
this inequality. Indeed, f_0(⌊r/2⌋, R) is the expected number of regions defined and
without conflict over a random ⌊r/2⌋-sample of R, and the expectation of this
expected number when R itself is a random r-sample of S is simply f_0(⌊r/2⌋, S).
The second inequality can be proved in much the same way. □
Proof. Recall that p^i_{j,0}(r) stands for the probability that a given region F of
F^i_j(S) be defined and without conflict over a random r-sample of S, whence
    m_k(r, S) = \sum_{i=1}^{b} \sum_{j=0}^{n-i} \sum_{F \in F^i_j(S)} \binom{j}{k}\, p^i_{j,0}(r). □
Proof. According to the previous lemma 4.2.5, and to lemma 4.2.1 which gives
the expression for the probability p^i_{j,0}(r), we have
    m_k(r, S) = \sum_{i=1}^{b} \sum_{j=0}^{n-i} |F^i_j(S)| \binom{j}{k} \frac{\binom{n-i-j}{r-i}}{\binom{n}{r}}
             = \frac{\binom{n}{r+k}}{\binom{n}{r}} \sum_{i=1}^{b} \sum_{j=0}^{n-i} |F^i_j(S)| \binom{j}{k} \frac{\binom{n-i-j}{(r+k)-i-k}}{\binom{n}{r+k}}.
In the last sum, the term
    \binom{j}{k} \frac{\binom{n-i-j}{(r+k)-i-k}}{\binom{n}{r+k}} = p^i_{j,k}(r+k)
is nothing else but the probability that a region F of F^i_j(S) belong to
F^i_k(R) for a random (r+k)-sample R of S, whence
    m_k(r, S) = \frac{\binom{n}{r+k}}{\binom{n}{r}}\, f_k(r+k, S) = \frac{(n-r)!\; r!}{(n-r-k)!\; (r+k)!}\, f_k(r+k, S). □
Corollary 4.2.7 Let S be a set of n objects. There exists a real constant γ and
an integer r_0, both independent of n, such that for each n ≥ r ≥ r_0,
    m_1(r, S) \leq \gamma\, \frac{n}{r}\, f_0(r, S)  and  m_2(r, S) \leq \gamma \left(\frac{n}{r}\right)^2 f_0(r, S),
where m_k(r, S) is the expected value of the k-th moment of a random r-sample
of S, and f_0(r, S) is the expected number of regions defined and without conflict
over a random r-sample of S.
and the upper bound is a consequence of corollary 4.2.4. The second inequality
can be proved in much the same way. □
4.3 Exercises
Exercise 4.1 (Backward analysis) In this exercise, regions are determined by at most
b objects of a set S. Let f_1(r, S) be the expected number of regions defined and with one
conflict over a random r-sample of S. Corollary 4.2.4 to the sampling theorem proves that
f_1(r, S) = O(f_0(r, S)). Backward analysis can be used to prove this without invoking the
sampling theorem.
Let R be a subset of S of cardinality r, and f_0(r-1, R) the expected number of regions
defined and without conflict over a random sample of R of size r - 1. Show that
    f_0(r-1, R) \geq \frac{r-b}{r}\, f_0(R) + \frac{1}{r}\, f_1(R).   (4.2)
Hint: Backward analysis consists in observing that a random (r-1)-sample R' of R can
be obtained by removing one random object from R. Any region in F_0(R') is defined
over R and belongs either to F_0(R) or to F_1(R). A region F that belongs to F_0(R)
determined by i objects is a region of F_0(R') if the removed object is not one of the i
objects that determine F; this happens with probability (r-i)/r. A region F that belongs to
F_1(R) is a region of F_0(R') if the removed object is precisely the one that conflicts
with F, which happens with probability 1/r. To show that f_1(r, S) = O(f_0(r, S)), it
suffices to take expectations in equation 4.2 over all r-samples of S and to assume that
f_0(r, S) is a non-decreasing function of r.
Exercise 4.2 (The moment theorem, using backward analysis) Let R be a ran-
dom r-sample of a set S of n objects, and O a random object of S \ R. Show that the
expected number of regions defined and without conflict over R but conflicting with O
is O(\frac{1}{r+1} f_1(r+1, S)). From this, show that the expected value m_1(r, S) of the moment
of order 1 with respect to S of a random r-sample is O(\frac{n-r}{r+1} f_1(r+1, S)). From this,
deduce an alternative proof of the moment theorem by using the result of the previous
exercise or corollary 4.2.4 to the sampling theorem.
Hint: Note that R ∪ {O} is a random (r+1)-sample of S and that a region of F_0(R)
that conflicts with O is a region of F_1(R ∪ {O}) that conflicts with O.
Exercise 4.3 (An extension of the moment theorem) A function w is called con-
vex if it satisfies, for all x, y in ℝ and all a in [0, 1],
    w(ax + (1-a)y) \leq a\,w(x) + (1-a)\,w(y).
Let w be a convex function and let
    W(R) = \sum_{F \in F_0(R)} w(|S(F)|),
where F_0(R) is the set of regions defined and without conflict over R and |S(F)| is the
number of objects in S that conflict with F. Let W(r, S) stand for the expected value
of W(R) for a random r-sample of S. Show that, for every integer k,
    W(r, S) \leq f_0(r, S)\; w\!\left(\frac{(n-r)!\,(r-b-k)!}{(n-r-k)!\,(r-b)!}\cdot\frac{f_k(r, S)}{f_0(r, S)}\right).
Exercise 4.4 (Non-local subset of regions) We still work with the framework of ob-
jects, regions, and conflicts, each region being determined by at most b objects. In this
exercise, we are mostly interested, for a subset R of objects in S, in a subset G_0(R) of the
regions defined and without conflict over R. The definition of G_0(R) is not necessarily
local, however: whether a region F of F_0(R) belongs to G_0(R) may depend on all the elements of
R, not only on those in conflict with F or that determine F. Nevertheless, suppose that the
subsets of the form G_0(R) satisfy the following property: if F is a region of G_0(R), R'
a subset of R, and if R' contains the elements that determine F, then F is a region of
G_0(R'). Let
    w_k(R) = \sum_{F \in G_0(R)} |S(F)|^k.
Show that the expected value w_k(r, S) of w_k(R) for a random r-sample of S satisfies
    w_k(r, S) = O\!\left(\frac{n^k}{r^k}\, g_0(r, S)\right),
where g_0(r, S) is the expected number of regions in G_0(R) for a random r-sample of S.
Hint: Let p(r, F) be the probability that F be a region of G_0(R) for a random r-sample
R of S. Show that, for all r_1 ≤ r ≤ n, p(r_1, F) can be bounded from below in terms of
p(r, F), of n and r_1, and of the number |S(F)| of objects of S in conflict with F,
and that
    w_k(r, S) \leq \gamma_k\, \frac{n^k}{r^k}\, g_0(r, S),
where γ and γ_k are constants depending only on k.
Exercise 4.5 (Tail estimates) Let b be the maximum number of objects that deter-
mine a single region. Suppose again that a set of at most b objects determines at most
q regions, q being a constant, or equivalently that the number of regions determined by a set S of n
objects is O(n^b).
1. Let S be a set of n objects and R a random r-sample of S. Let a be a real constant
in ]0, 1[. Let π_0(a, r) denote the probability over all samples R that some region defined
and without conflict over R have at least ⌈an⌉ conflicts with S. Show that, for r big
enough,
    π_0(a, r) = O\big(r^b (1 - a)^r\big).
2. Show that for any constant A > b, the probability π_0(A log r/r, r) that some region
F, defined and without conflict over R, have at least An log r/r conflicts with S decreases
to 0 as r increases.
Then show that, if a(r) = A log r/r and m(r) = log r/ log log r,
    \lim_{r\to\infty} m(r)\, \pi_0(a(r), r) = 0.
Exercise 4.7 (An upper bound on f_0(S)) Consider the set F(S) of regions defined
over a set S, each region being determined by at most b objects. Let f_j(S) be the number
of regions defined and having j conflicts with S, and f_0(n) be the maximum of f_0(S) over
all sets S of n objects. Suppose that there is a relation between the number of regions
defined and without conflict over S on one hand, and the number of regions defined over
S and conflicting with one element of S on the other. Suppose further that this relation
is of the type
    c\,f_0(S) \leq f_1(S) + d(n),   (4.3)
where c is an integer constant and d(n) a known function of n. Let t = b - c. Show then
that
    f_0(n) = O\!\left(n^t \left(1 + \sum_{j=1}^{n} \frac{d(j)}{j^{t+1}}\right)\right).
In particular, f_0(n) = O(n^t) when d(n) = O(n^{t-ε}) for some ε > 0, and f_0(n) = O(n^t log n)
when d(n) = O(n^t).
Hint: Combining equation 4.2, written for a random (n-1)-sample of S, and equation 4.3
yields
    \frac{n-b+c}{n}\, f_0(S) = \frac{n-b}{n}\, f_0(S) + \frac{c}{n}\, f_0(S)
                         \leq \frac{n-b}{n}\, f_0(S) + \frac{1}{n}\,\big(f_1(S) + d(n)\big)
                         \leq f_0(n-1, S) + \frac{1}{n}\, d(n).
Hint: Each vertex of the union belongs to a bounded number of faces of the union. Hence
it suffices to bound the number of vertices of the union to bound the total complexity.
The proof works by induction on d. The proof is trivial in dimension 1, and easy in
dimension 2.
In dimension d, each cube has 2d pairwise parallel facets. Let us denote by F_j^+(C)
the facet of the cube C that is perpendicular to the x_j-axis with maximal x_j-coordinate,
and by F_j^-(C) the facet of the cube C that is perpendicular to the x_j-axis with minimal
x_j-coordinate. Let C be a set of axis-parallel cubes in E^d, and denote by U(C) the union of
these cubes and A(C) their arrangement, that is, the decomposition of Ed induced by the
cubes (see part IV for an introduction to arrangements). Each vertex of U(C) or of A(C)
is at the intersection of d facets of cubes, one perpendicular to each axis direction. Such
a vertex P is denoted by (C_1^{ε_1}, C_2^{ε_2}, ..., C_d^{ε_d}) if it is at the intersection of the facets F_j^{ε_j}(C_j), for
j = 1, ..., d and ε_j = + or -. The vertex P is called outer if it belongs to a (d-2)-face
of one of the cubes (then not all the cubes C_j are distinct). It is called an inner vertex if
it is at the intersection of d facets of pairwise distinct cubes. A vertex of A(C) is at level
k if it belongs to the interior of k cubes of C. The vertices of the union are precisely the
vertices at level 0 in the arrangement A(C). Let w_k(C) be the number of inner vertices of
A(C) at level k, and v_k(C) be the number of outer vertices at level k, and v_k(n, d) (resp.
w_k(n, d)) the maximum of v_k(C) (resp. of w_k(C)) over all possible sets C of n axis-parallel
hypercubes in E^d.
1. The maximum number v_0(n, d) of outer vertices of the union is O(n^{⌈d/2⌉}) (and
O(n^{⌊d/2⌋}) when the cubes all have the same size). Indeed, any outer vertex of U(C) belongs to
a (d-2)-face H of one of the cubes in C and is a vertex (either outer or inner) of the
union of all (d-2)-cubes C ∩ aff(H), where aff(H) is the affine hull of H. Consequently,
    v_0(n, d) \leq 2nd(d-1)\big(\bar v_0(n-1, d-2) + \bar w_0(n-1, d-2)\big),
where \bar v_0(n-1, d-2) and \bar w_0(n-1, d-2) respectively stand for the maximum numbers
of outer or inner vertices in the union of n-1 cubes in a (d-2)-dimensional space lying
inside a given (d-2)-cube.
2. Applying the sampling theorem (theorem 4.2.3) and its corollary 4.2.4, we derive a
similar bound on the maximum number v_1(n, d) of outer vertices at level 1.
3. To count the number of inner vertices, we use the following charging scheme. For
each vertex P = (C_1^{ε_1}, C_2^{ε_2}, ..., C_d^{ε_d}) of U(C), and each direction j = 1, ..., d, slide along
the edge of A(C) that lies inside the cube C_j (this edge is contained in ∩_{i≠j} F_i^{ε_i}(C_i)) until the other
vertex P' of this edge is reached.
If P' belongs to the opposite facet F_j^{-ε_j}(C_j) of the cube C_j, we do not charge anything. This case
cannot happen unless the cubes have different side lengths and C_j is the smallest of the
cubes intersecting at P.
If P' belongs to a (d-2)-face of one of the cubes C_i (i ≠ j) intersecting at P, P' is an
outer vertex at level 1, and is charged one unit for P. Note that P' cannot be charged
more than twice for this situation.
If P' belongs to another cube C' distinct from all the cubes C_i intersecting at P, then P' is
an inner vertex at level 1, and is charged one unit for P. Any inner vertex P' of this type
may be charged up to d times for this situation. However, when it is charged more than
once, say m times, we may redistribute the extra m - 1 charges on the outer vertices at
level 0 or 1, and these vertices will only be charged once in this fashion.
In the case of cubes with different sizes, the induction is
    (d-1)\, w_0(C) \leq w_1(C) + 3 v_1(C) + v_0(C).
In the case of cubes with identical sizes, we obtain
    d\, w_0(C) \leq w_1(C) + 3 v_1(C) + v_0(C).
It suffices to apply exercise 4.7 to conclude.
Chapter 5
Randomized algorithms
processed objects and let us call step r the step during which we process the r-th
object.
Let O be the object processed in step r. From the already computed set of
regions defined and without conflict over R, we compute in step r the set of
regions defined and without conflict over R ∪ {O}.
* The regions of F_0(R) that do not belong to F_0(R ∪ {O}) are exactly those
regions of F_0(R) that conflict with O. These regions are said to be killed
by O, and O is their killer.
* The regions of F_0(R ∪ {O}) that do not belong to F_0(R) are exactly those
regions of F_0(R ∪ {O}) that are determined by a subset of R ∪ {O} that
contains O. These regions are said to be created by O.
created by O are found. The complexity of each incremental step is thus bounded
from below by the number of regions that are killed or created during
this step, and by the number of conflict arcs that are removed or added during
this step.
Update condition 5.2.1
1. updating the set of regions defined and without conflict over the current
subset can be carried out in time proportional to the number of regions
killed or created during this step, and
2. updating the conflict graph can be carried out in time proportional to the
number of conflict arcs added or removed during this step.
2. The probability p'^i_j(r) that F be one of the regions created by the algorithm
during step r is
    p'^i_j(r) = \frac{i}{r}\, p^i_j(r).
In these expressions, p^i_j(r) stands for the probability that a region F of F^i_j(S) be
defined and without conflict over a random r-sample of S, as given in subsec-
tion 4.2.1.
If we replace p^i_j(r) by its expression in lemma 4.2.1, we obtain (see also exer-
cise 5.1) that the probabilities p''^i_j and p'^i_j(r) satisfy the relation
    p''^i_j = \sum_{r=1}^{n} p'^i_j(r).
Proof. A region F of F^i_j(S) is created at some point by the algorithm if and only
if the i objects of S that determine F are processed before any of the j objects of S that conflict with F.
Since all permutations of these objects are equally likely, this case happens with
probability
    \frac{i!\,j!}{(i+j)!},
proving the first part of the lemma. Let R be the set of objects processed in the
steps preceding and including step r. For a region F to be created during step
r, we first require that F be defined and without conflict over R, which happens
precisely with probability p^i_j(r). If so, F is created at step r precisely if the object
O processed during step r is one of the i objects of R that determine F. This
happens with conditional probability i/r. □
1. The expected total number of regions created by the algorithm is
    O\!\left(\sum_{r=1}^{n} \frac{f_0(r, S)}{r}\right).
2. The expected total number of conflict arcs added to the conflict graph by the
algorithm is
    O\!\left(\sum_{r=1}^{n} \frac{n}{r^2}\, f_0(r, S)\right).
3. If the algorithm satisfies the update condition, then its complexity (both in
time and in space) is, on the average,
    O\!\left(\sum_{r=1}^{n} \frac{n}{r^2}\, f_0(r, S)\right).
In these expressions, f_0(r, S) denotes the expected number of regions defined and
without conflict over a random r-sample of S.
Thus, if f_0(r, S) behaves linearly with respect to r (f_0(r, S) = O(r)), the total
number of created regions is O(n) on the average, the total number of conflict arcs
is O(n log n) on the average, and the complexity of the algorithm is O(n log n) on
the average. If the growth of f_0(r, S) is super-linear with respect to r (f_0(r, S) =
O(r^α) for some α > 1), then the total number of created regions is O(n^α) on
the average, the total number of conflict arcs is O(n^α) on the average, and the
complexity of the algorithm is O(n^α) on the average.
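The overall organization of such an incremental algorithm can be summarized by a short generic sketch. The sketch below is ours: regions are abstract objects exposing a constant-time conflict test, and the problem-specific work is delegated to two functions, initial_regions and new_regions, which are assumptions of the sketch rather than names from the book. Objects and regions are assumed hashable.

import random

def randomized_incremental(objects, initial_regions, new_regions):
    # initial_regions(objs)          -> regions defined and without conflict over objs
    # new_regions(killed, obj, objs) -> regions created by obj (objs: inserted so far)
    # every region F must provide F.conflicts(obj) -> bool (constant-time test)
    objects = list(objects)
    random.shuffle(objects)                    # random insertion order
    inserted = objects[:1]
    regions = set(initial_regions(inserted))
    # Conflict graph: for each current region, the not-yet-inserted objects in conflict.
    conflicts = {F: {o for o in objects[1:] if F.conflicts(o)} for F in regions}
    for t in range(1, len(objects)):
        obj, not_yet = objects[t], objects[t + 1:]
        killed = {F for F in regions if obj in conflicts[F]}   # regions killed by obj
        inserted.append(obj)
        created = set(new_regions(killed, obj, inserted))      # regions created by obj
        regions = (regions - killed) | created
        for F in killed:
            del conflicts[F]
        for F in created:
            # An implementation meeting the update condition derives these lists
            # from the conflict lists of the killed regions; this sketch merely
            # recomputes them, which is simpler but slower.
            conflicts[F] = {o for o in not_yet if F.conflicts(o)}
    return regions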
Proof.
1. We obtain the expectation v(S) of the total number of regions created by the
algorithm by summing, over all regions F defined over S, the probability that
this region F be created by the algorithm:
    v(S) = \sum_{i=1}^{b} \sum_{j=0}^{n-i} |F^i_j(S)|\, p''^i_j = \sum_{i=1}^{b} \sum_{j=0}^{n-i} |F^i_j(S)| \sum_{r=1}^{n} p'^i_j(r).
Since p'^i_j(r) = \frac{i}{r}\, p^i_j(r) \leq \frac{b}{r}\, p^i_j(r), this yields v(S) \leq b \sum_{r=1}^{n} \frac{f_0(r, S)}{r}.
2. Let e(S) be the expected total number of arcs added to the conflict graph.
To estimate e(S), we note that if a region F in conflict with j objects of S is a
region created by the algorithm, then it is adjacent to j conflict arcs in the graph.
Therefore,
    e(S) = \sum_{i=1}^{b} \sum_{j=0}^{n-i} |F^i_j(S)|\, j\, p''^i_j = \sum_{i=1}^{b} \sum_{j=0}^{n-i} \sum_{r=1}^{n} |F^i_j(S)|\, j\, \frac{i}{r}\, p^i_j(r).
Apart from the factor i/r, we recognize in this expression the moment of order 1
of a random r-sample (lemma 4.2.5). Applying corollary 4.2.7 to the moment
theorem, we get
    e(S) \leq b \sum_{r=1}^{n} \frac{m_1(r, S)}{r} = O\!\left(\sum_{r=1}^{n} \frac{n}{r^2}\, f_0(r, S)\right).
3. A given region is killed or created at most once during the course of the
algorithm and, likewise, a given conflict arc is added or removed at most once. A
randomized incremental algorithm which satisfies the update condition thus has
an average complexity of at most
    v(S) + e(S) = O\!\left(\sum_{r=1}^{n} \frac{n}{r^2}\, f_0(r, S)\right). □
The algorithm
* For each segment S of S \ R, the data structure stores a list L(S) repre-
senting the set of trapezoids of Dec(R) intersected by S. The list L(S) is
ordered according to which trapezoids are encountered as we slide along S
from left to right.
* For each trapezoid F of Dec(R), the algorithm maintains the list L'(F) of
the segments in S \ R that conflict with F.
In the initial step, the algorithm builds the decomposition Dec(R) for a subset
R of S that contains only a single segment. This decomposition consists of four
trapezoids. It also initializes the lists that represent the conflict graph. The
initial decomposition is built in constant time, and the initial conflict graph in
linear time.
The current step processes a new segment S of S \ R: it updates the decom-
position and the conflict graph accordingly.
Updating the decomposition. The conflict graph gives the list L(S) of all
the trapezoids of Dec(R) that are intersected by S. Each trapezoid is split into
at most four subregions by the segment S, the walls stemming from the endpoints
of S, and the walls stemming from the intersection points between S and the other
segments in R (see figure 5.3).
These subregions are not necessarily trapezoids of Dec(R ∪ {S}). Indeed, S
intersects some vertical walls of Dec(R), and any such wall must be shortened:
the portion of this wall that contains no endpoint or intersection point must be
removed from the decomposition, and the two subregions that share this portion
of the wall must be joined into a new trapezoid of Dec(R ∪ {S}) (see figure 5.4).
Thus, any trapezoid of Dec(R ∪ {S}) created by S is either a subregion, or
the union of a maximal subset of subregions that can be ordered so that two
consecutive subregions share a portion of a wall to be removed. The vertical
adjacency relationships in the decomposition that concern trapezoids created by
S can be inferred from the vertical adjacency relationships between the subregions
and from those between the trapezoids of Dec(R) that conflict with S.
Updating the data structure that represents the decomposition Dec(R) can
therefore be carried out in time linear in the number of trapezoids conflicting
with S.
Updating the conflict graph. When a trapezoid F is split into subregions
F_i (i ≤ 4), the list L'(F) of segments that conflict with F is traversed linearly,
and a conflict list L'(F_i) is set up for each of the subregions F_i. During this
Figure 5.3. Decomposing a set of segments: splitting a trapezoid into at most four new
trapezoids.
traversal, the list L(S') of each segment S' in L'(F) is updated as follows: each
node pointing to F in such a list is replaced by the sequence of those subregions
F_i that intersect S', in the left-to-right order along S'.
Consider now a sequence F_1, F_2, ..., F_k of subregions that have to be joined
to yield a trapezoid F' of Dec(R ∪ {S}) created by S. We assume that the
subregions are encountered in this order along S. To build the list L'(F'), we
must merge the lists L'(F_i) while at the same time removing redundant elements.
To do this, we traverse successively each of the lists L'(F_i). For each segment
S' that we encounter in L'(F_i), we obtain the entry corresponding to F_i in the
list L(S') by following the bidirectional pointer in the entry corresponding to
S' in the list L'(F_i). The subregions that conflict with S' and that have to be
joined are consecutive in the list L(S'). The nodes that correspond to these
regions are removed from the list L(S'), and for each entry F_j removed from
the list L(S'), the corresponding entry for S' in L'(F_j) is also removed. (This
process is illustrated in figure 5.5.) In this fashion, we merge the conflict lists of
a set of adjacent subregions while visiting each node of the conflict lists of these
subregions once and only once. Similarly, the corresponding nodes in the conflict
lists of the segments are visited once and only once. This ensures that the time
taken to update the conflict graph is linear in the number of arcs of the graph
that have to be removed.
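A simplified sketch of this bookkeeping is given below, with Python dictionaries of sets standing in for the doubly linked lists with cross-pointers described above; it therefore performs the same splitting and merging of conflict lists but does not achieve the linear time bound. The function and parameter names, and the two helpers passed as arguments, are ours.

def update_conflict_lists(L_trap, L_seg, S, splits, merges, conflicts, join):
    # L_trap: dict trapezoid -> set of segments in conflict with it
    # L_seg:  dict segment   -> set of trapezoids in conflict with it
    # S: the segment being inserted; splits: dict killed trapezoid -> its subregions
    # merges: lists of subregions to be joined (they share walls to be removed)
    # conflicts(subregion, segment) -> bool ; join(subregions) -> the union trapezoid
    for F, subs in splits.items():                 # 1. split the old conflict lists
        for seg in L_trap.pop(F):
            L_seg[seg].discard(F)
            if seg == S:
                continue                           # S itself is no longer a conflict
            for Fi in subs:
                if conflicts(Fi, seg):
                    L_trap.setdefault(Fi, set()).add(seg)
                    L_seg[seg].add(Fi)
    for group in merges:                           # 2. merge along the removed walls
        new_trap, merged = join(group), set()
        for Fi in group:
            for seg in L_trap.pop(Fi, set()):
                L_seg[seg].discard(Fi)
                merged.add(seg)                    # duplicates disappear in the set
        for seg in merged:
            L_seg[seg].add(new_trap)
        L_trap[new_trap] = merged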
condition 5.2.1. We may therefore quote theorem 5.2.3 to show that the average
running time of the algorithm, given a set of n segments with a intersection
points, is
    O\!\left(\sum_{1\leq r\leq n} \frac{n}{r^2}\, f_0(r, S)\right) = O(n \log n + a).
Figure 5.5. Decomposing a set of line segments: merging the conflict lists.
Remark 2. The expected storage of the algorithm is O(n log n + a). In the
variant mentioned in the above remark, it is possible to simplify the conflict
graph: for each segment, we retain only a single conflict arc, for instance the
conflict with the trapezoid which contains the left endpoint of the segment. We
can still update the conflict graph in linear time, therefore the average running
time is unchanged and still O(n log n +a), but the expected storage is lowered to
O(n + a) (see exercise 5.4).
Algorithms that use a conflict graph are incremental but static, that is, they
require initial knowledge of all the segments to be inserted. In contrast, on-line
(or semi-dynamic) algorithms maintain the solution to the problem as the input
objects are inserted, with no preliminary knowledge of the input data. A possible
way to transform an algorithm that uses a conflict graph into an on-line algorithm
is to replace the conflict graph by a different kind of structure that can detect
conflicts between any object and the regions defined and without conflict over
the current set of objects. The influence graph is such a structure.
The influence graph is a structure that stores the history of the incremental con-
struction and depends on the order in which the objects have been processed
by the algorithm. This graph represents the regions created by the algorithm
during the incremental construction, and can be used to detect the conflicts
between these regions and a new object. When the algorithm uses a conflict
graph, the set of data is known in advance, and the algorithm may then com-
pute the objects in S that conflict with a given region. However, an on-line
algorithm does not assume any knowledge of the objects to be processed. Thus
it must be able to describe the entire domain of influence of a region which, as
we recall, is the subset of all the objects in the universe that conflict with this
region.
The influence graph is a directed, acyclic, and connected graph. It possesses
a single root, and its nodes correspond to the regions created by the algorithm
during its entire history. Therefore, a node corresponds to a region that was
defined and without conflict over the current set of objects at some point during
the execution of the algorithm. Properly speaking, this graph is not a tree: a
node might have several parents. Nevertheless, the terminology of trees will be
quite useful for describing it. In particular, a leaf is a node that has no children.
The influence graph must possess two essential properties.
Property 5.3.1 1. At each step of the algorithm, a region defined and without
conflict over the current subset is associated with a leaf of the influence
graph.
2. The domain of influence of a region associated with a node of the influence
graph is contained in the union of the domains of influence of the regions
associated with the parents of that node.
The algorithm
The algorithm is incremental and maintains the set F_0(R) of regions defined
and without conflict over the current set R, together with the influence graph
corresponding to the chronological sequence of objects in R.
The initial step processes a small set of r_0 objects. For instance, r_0 can be
the minimal number of objects needed to determine a region. The algorithm
computes the regions defined and without conflict over the set R_0 of these r_0
objects. The influence graph is initialized by creating a root node, corresponding
to a fictitious region whose influence domain is the universe O of objects in its
entirety. A node whose parent is the root is created for each of the regions of
F_0(R_0).
In the current step, the object O is added to R. The work can be divided into
two phases: we first locate O and then update the data structures.
Locating. In this phase, we must find all the regions in F_0(R) that conflict
with the new object O. Starting from the root of the influence graph, we recur-
sively visit all the nodes that conflict with O, and their children. The regions of
F_0(R) that conflict with O are said to be killed by O.
Updating. We now have to update the data structure that represents the set
of those regions defined and without conflict over the current subset of objects
(F_0(R) becomes F_0(R ∪ {O})). We also have to update the influence graph. A leaf
of the influence graph is created for each of the regions in F_0(R ∪ {O}) \ F_0(R).
These are the regions created by O. Each of these leaves is linked to enough
parents to satisfy property 2 of the influence graph. We never remove a node
from the graph.
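A minimal sketch of the structure and of the two phases is given below. The node fields and function names are ours; the conflict test is the problem-specific constant-time predicate of the update condition, passed here as a parameter.

class Node:
    def __init__(self, region):
        self.region = region      # None for the root (its influence domain is everything)
        self.children = []

def locate(root, obj, conflicts):
    # Returns the leaves whose regions conflict with obj, i.e. the regions of
    # F_0(R) killed by obj.  conflicts(region, obj) -> bool.
    killed, visited, stack = [], set(), [root]
    while stack:
        node = stack.pop()
        if id(node) in visited:
            continue              # a node may have several parents: visit it only once
        visited.add(id(node))
        if node.region is not None and not conflicts(node.region, obj):
            continue              # prune: obj does not conflict with this node
        if node.children:
            stack.extend(node.children)
        elif node.region is not None:
            killed.append(node)   # a conflicting leaf: a region killed by obj
    return killed

def attach(created):
    # Updating phase: hang one new leaf per created region below its parents
    # (typically nodes killed by the new object), as required by property 5.3.1.
    for region, parents in created:
        leaf = Node(region)
        for p in parents:
            p.children.append(leaf)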
The details of the implementation of these steps naturally depend on the prob-
lem. Typically, the set of regions created by 0 can be derived from the set of
regions killed by 0, and the parents of the leaves corresponding to created regions
Theorem 5.3.2 Let an on-line algorithm use an influence graph to process a set
S of n objects. The expected number v(S) of nodes in this influence graph is
    O\!\left(\sum_{r=1}^{n} \frac{f_0(r, S)}{r}\right).
In this expression, f_0(r, S) denotes the expected number of regions defined and
without conflict over a random r-sample of S.
To carry the analysis further, we must also be able to bound the number of
arcs in the influence graph, since this number gives the time and storage taken
to update the set of regions without conflict and the influence graph itself, as
is done in the second phase of each insertion step of the algorithm. We also
need a special assumption to control the complexity of testing whether there is
a conflict between an object and a region.¹ The update condition stated
below is actually satisfied by a large class of practical problems.
Update condition 5.3.3
1. the existence of a conflict between a given region and a given object can be
tested in constant time,
2. the number of children of each node of the influence graph is bounded by a
constant, and
3. the parents of a node created by an object 0 are nodes that are killed by 0,
and updating the influence graph takes time linear in the number of nodes
killed or created at each step.
Theorem 5.3.4 Let an on-line algorithm that uses an influence graph and satisfies
the update condition process a set S of n objects.
1. The expected storage needed by the algorithm is
    O\!\left(\sum_{r=1}^{n} \frac{f_0(r, S)}{r}\right).
2. The expected time complexity of the whole algorithm is
    O\!\left(\sum_{r=1}^{n} \frac{n}{r^2}\, f_0(r, S)\right).
3. The expected time complexity of the locating phase at step k is
    O\!\left(\sum_{r=1}^{k-1} \frac{n}{r^2 (n-r)}\, f_0(r, S)\right).
4. The expected time complexity of the updating phase at step k is
    O\!\left(\frac{f_0(k, S)}{k}\right).
¹Note that such an assumption is implicitly contained in the update condition 5.2.1 when
the algorithm uses a conflict graph.
As always, f_0(r, S) denotes the expected number of regions defined and without
conflict over a random r-sample of S.
Thus, the expected time complexity of an on-line algorithm that uses an influence
graph is identical to that of a similar incremental algorithm that uses a conflict
graph, as long as the respective update conditions are satisfied.
If f_0(r, S) behaves linearly with respect to r (f_0(r, S) = O(r)), the complex-
ity of the algorithm is O(n log n) on the average, and the expected storage is
O(n). Introducing the n-th object takes time O(log n) for the locating phase,
and constant time for updating the data structure and the influence graph.
If the growth of f_0(r, S) is super-linear with respect to r (f_0(r, S) = O(r^α) for
some α > 1), the expected storage is O(n^α). Introducing the n-th object takes
time O(n^{α-1}) for the locating and updating phases.
Proof.
1. The upper bound on the expected storage is a direct consequence of theo-
rem 5.3.2, which bounds the number of nodes in the influence graph, and of the
second clause in the update condition, which bounds the number of children of
each node.
2. The contribution to the running time complexity of the updating phases is
proportional to the number of regions created, because of the third clause of the
update condition. From theorem 5.2.3, we know that this number is
    O\!\left(\sum_{r=1}^{n} \frac{f_0(r, S)}{r}\right).
We still must evaluate the cost of the locating phases. From the first clause
of the update condition, we derive that the complexity of locating an object 0
is proportional to the number of nodes visited to locate O. If every node has at most a
constant number of children (second clause in the update condition), however,
the number of nodes visited during the locating phase of O is at most proportional
to the number of nodes of the influence graph that conflict with O. The overall cost of the
locating phases is therefore proportional to the total number of conflicts detected
during the algorithm.
Let F be a region of F^i_j(S). If this region is created at some step of the al-
gorithm, the corresponding node in the influence graph will be visited j times
in the subsequent steps, and this happens each time we insert one of the j ob-
jects that conflict with F. For a given permutation of the input, an algorithm
that uses an influence graph will not only create the same regions as another
that uses a conflict graph, but will also detect a conflict with a given region
as many times as there are conflict arcs incident to this region in the conflict
graph.
    w(k, S) = \sum_{i=1}^{b} \sum_{j=0}^{n-i} |F^i_j(S)| \left(\sum_{r=1}^{k-1} \frac{i}{r}\, p^i_j(r)\, \frac{j}{n-r}\right).
If we recognize the expression for the first order moment of a random r-sample
of S given in lemma 4.2.5, and bound the sum above by using corollary 4.2.7 to
the moment theorem, we obtain
    w(k, S) \leq b \sum_{r=1}^{k-1} \frac{m_1(r, S)}{r\,(n-r)} = O\!\left(\sum_{r=1}^{k-1} \frac{n}{r^2 (n-r)}\, f_0(r, S)\right).
4. Now, updating the data structure and the influence graph at step k takes time
proportional to the number of nodes created or killed by O_k. Let v(k, S) be the
expected number of regions created at step k. From lemma 5.2.2, we derive
    v(k, S) = \sum_{i=1}^{b} \sum_{j=0}^{n-i} |F^i_j(S)|\, \frac{i}{k}\, p^i_j(k) \leq b\, \frac{f_0(k, S)}{k}.
Let now v'(k, S) be the expected number of regions killed at step k. We denote
by S_{k-1} the current subset immediately prior to step k. A region F in F^i_j(S) is a
region killed at step k if it is a region of F_0(S_{k-1}) that conflicts with O_k, which
happens with probability
    p^i_j(k-1)\, \frac{j}{n-k+1}.
The update condition 5.3.3 is not mandatory and it is often possible to analyze
an on-line algorithm that does not satisfy all of its clauses.
1. For instance, if the first clause is not satisfied, the cost of testing the conflicts
may be added to the analysis. If this cost can be bounded, this bound appears
as a multiplicative factor in the cost of the locating phases.
2. The analyses of on-line algorithms developed above and in the remainder
of this section are still valid for less restrictive statements of the third clause.
We may assume only that the cost of the update phase is proportional to the
number of regions created or killed. We have preferred, however, to assume that
the parents of nodes created by some object are killed by the same object. This
assumption is satisfied by most of the algorithms given in this book, and it greatly
simplifies the analysis of dynamic on-line algorithms given in the next chapter.
3. Lastly, the second clause can also be relaxed. Indeed, in order to bound the
space needed to store the influence graph, it suffices to bound the total number of
arcs in the entire graph and not necessarily the out-degree of each node. We may
then generalize the analysis of the locating phase by using the notion of a biregion
(see exercise 5.7). In particular, such an analysis applies to the case when the
number of parents of a node is bounded, but not the number of children. We
illustrate this situation in the case of the on-line computation of convex hulls (see
exercise 8.5).
decomposition for short. The notions of objects, regions, and conflicts are defined
as in subsection 5.2.2.
The trapezoids in the current decomposition are the regions defined and with-
out conflict over the current set of segments and are linked to the corresponding
nodes in the influence graph. An internal node of this graph is associated with
a trapezoid which was in the current decomposition at some previous step of the
algorithm. In addition to the set of pointers that take care of the parent-child
relationships between the nodes, each node contains the following information:
* A description of the corresponding trapezoid and a list of the (at most four)
segments that determine it.
* At most four pointers for the adjacency relationships through the vertical
walls. As long as the node is a leaf of the influence graph, the corresponding
trapezoid F belongs to the current decomposition and is adjacent to at most
four leaves in the graph, each of which shares a vertical wall with F. When
the node corresponding to F becomes an internal node, these pointers are
not modified any more.
Each internal node of the influence graph has at most four children, and the
running time needed to carry out all the operations described in the previous
paragraph is clearly proportional to the number of trapezoids in L(S). The
update condition is therefore satisfied.
From lemma 5.2.4, we know that the expected number of trapezoids in the
vertical decomposition of a random r-sample is O(r + ar²/n²), if n is the num-
ber of segments in S and a is the number of intersecting pairs of segments.
Theorem 5.3.4 therefore shows that the on-line algorithm just described has an
expected complexity of O(n log n + a) and uses expected storage O(n + a). The
average complexity of the n-th insertion is O(log n + a/n).
Proof. The conflict graph at step k can be augmented, for each object O in
S \ S_k, by a list of pointers to the nodes of the influence graph which correspond
to a region of F_0(S_k) that conflicts with O. In order to locate the object O_l
at step l, the algorithm may start to traverse the influence graph not from the
root, but from the nodes of the influence graph which correspond to a region of
F_0(S_k) that conflicts with O_l. If the update condition is satisfied, the number
of children of each node is bounded by a constant, and the number of nodes
visited is proportional to the number of regions F created between steps k + 1
and l - 1 that conflict with O_l. A region F in F^i_j(S) is created at step r with
probability \frac{i}{r} p^i_j(r). Given this, the conditional probability that F conflicts with
a given object O_l is \frac{j}{n-r}. The expected number w(l, S) of nodes visited while
locating O_l is thus
    w(l, S) = \sum_{r=k+1}^{l-1} \sum_{i=1}^{b} \sum_{j=0}^{n-i} |F^i_j(S)|\, \frac{i}{r}\, p^i_j(r)\, \frac{j}{n-r}.
In this expression we recognize the first order moment (lemma 4.2.5). Using
corollary 4.2.7 to the moment theorem, we obtain
    w(l, S) = O\!\left(\sum_{r=k+1}^{l-1} \frac{n}{r^2 (n-r)}\, f_0(r, S)\right). □
Proof. The conflict graph is computed at steps n_k = ⌊n/ log^{(k)} n⌋, for k = 1, ...,
log* n. The conflict graph is therefore computed log* n times overall, accounting
for an expected complexity of O(n log* n). The locating phases, between step n_k
and step n_{k+1}, have a total average complexity of
    O\!\left(\sum_{l=n_k+1}^{n_{k+1}} w(l, S)\right) = O(n).
The total contribution of the locating phases to the running time is therefore, on
the average, O(n log* n). This fact combined with theorem 5.3.4 proves that the
average complexity of the accelerated algorithm is O(n log* n). □
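The schedule of steps at which the conflict graph is recomputed is easy to write down explicitly. The small sketch below is ours (it uses base-2 logarithms and assumes n ≥ 4; any fixed base would do up to constants).

from math import log2, floor

def iterated_log_schedule(n):
    # Steps n_k = floor(n / log^(k) n), for k = 1, ..., log* n, at which the
    # accelerated algorithm recomputes the conflict graph.
    def log_iter(x, k):              # log^(k) x: the logarithm iterated k times
        for _ in range(k):
            x = log2(x)
        return x
    steps, k = [], 1
    while log_iter(n, k) > 1:
        steps.append(floor(n / log_iter(n, k)))
        k += 1
    return steps                     # len(steps) is about log* n

# e.g. iterated_log_schedule(10**6) returns four steps, roughly
# [5.0e4, 2.3e5, 4.7e5, 9.3e5].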
this decomposition, and the total complexity of following these edges is domi-
nated by the size O(n_k) = O(n) of this decomposition. In the latter case, the
cost of following these edges is proportional to the number of conflicts between
these edges and the trapezoids of the decomposition Dec(S_{n_k}). From the-
orem 4.2.6 and its corollary 4.2.7, the expected number of conflicts reported at
step n_k is exactly the first order moment of the current subset of edges at step
n_k. From corollary 4.2.7, this number is O\!\left(\frac{n}{n_k}\, f_0(⌊n_k/2⌋, S)\right), which is O(n) for
non-intersecting segments, as is the case for the edges of a polygon.
The hypotheses of theorem 5.4.2 are thus satisfied, which yields:
Theorem 5.4.3 A randomized incremental algorithm can build the vertical de-
composition of a simple polygon with n edges in expected time O(n log* n).
Remark. The algorithm relies on two facts: the edges are connected, and do not
intersect except possibly at common endpoints. The same algorithm therefore
works as well in the more general case of a polygonal line, or of a connected set of
segments whose interiors are pairwise disjoint.
5.5 Exercises
Exercise 5.1 (Probabilities) Prove that
    \sum_{r=1}^{n} \frac{i}{r}\, \frac{\binom{n-i-j}{r-i}}{\binom{n}{r}} = \frac{i!\,j!}{(i+j)!}.
Then show that the probabilities p''^i_j and p'^i_j(r) defined in section 5.2 satisfy the following
relation:
    p''^i_j = \sum_{r=1}^{n} p'^i_j(r).
Hint: The expected number of regions killed or created during step k of a randomized
incremental algorithm is estimated in the proof of theorem 5.3.4. It remains to estimate
the expected number of conflict arcs added or removed during step k.
Hint: We may redefine the notions of regions and conflicts as follows. A region defined
over R is a paddle with two components: a trapezoid F in the decomposition Dec(R), and a
wall abutting the floor or ceiling of F. A paddle is determined by at most six segments.
It conflicts with a segment if the interior of the trapezoid intersects the segment. The
problem is now to find an upper bound on the number of paddles defined and without
conflict with a segment of S \ R.
Exercise 5.4 (Storage) Consider the incremental algorithm that uses a conflict graph
as in subsection 5.2.2 in order to compute the decomposition of a set S of n segments.
Show that if a is the number of intersecting pairs, the storage needed by the algorithm
at step k is, on the average,
    m_1(k, S) = O\!\left(n + a\,\frac{k}{n}\right).
Using the result of the previous exercise, show that we may reduce the storage to O(n)
by storing only one conflict for each non-inserted segment, say with the trapezoid that
contains its left endpoint, without affecting the running time of the algorithm.
Exercise 5.5 (Decomposing a set of curves) Show how to generalize the notion of
a decomposition for a set of curves supported by algebraic curves of bounded degree.
Two such curves intersect at only a constant number of points, which we assume may be
computed in constant time. Show that both algorithms given in subsections 5.2.2 and
5.3.2 may be extended to build the decomposition of a set of such curves.
Hint: Do not forget to trace walls from each point where the curves have a vertical
tangent.
Exercise 5.6 (Backward analysis) Backward analysis (see also exercises 4.1 and 4.2)
gives an alternative proof of the results of this chapter without using the explicit expres-
sions for p'^i_j(r) and p''^i_j.
For instance, we show how backward analysis can be used to estimate the number
v(k, S) of regions created at step k by an incremental algorithm. Note that if S_k is the
current subset immediately after inserting object O_k at step k, the regions created by O_k
during this step are the regions of F_0(S_k) determined by a subset of S_k that contains
O_k. Since O_k, which has chronological rank k, may be any of the objects in S_k with
uniform probability 1/k, a region of F_0(S_k) is created at step k with probability at most
b/k. Therefore, v(k, S) is at most the expectation of \frac{b}{k} |F_0(S_k)| over all possible S_k.
Similarly, a region that is killed during step k is a region of F_1(S_k) that conflicts with
O_k. Any region of F_1(S_k) conflicts with O_k with probability 1/k. The expected number
v'(k, S) of regions killed during step k is therefore at most the expectation of \frac{1}{k} |F_1(S_k)|
over all possible S_k.
It is possible to compute in this fashion the expected numbers v(k,S) and v'(k,S)
of regions created or killed during step k. Show how to use backward analysis to
prove the other results in this chapter, for instance, to bound the number of conflict
arcs that are added to or removed from the conflict graph, or to bound the number
of conflicts detected during a locating phase by an algorithm that uses an influence
graph.
Exercise 5.7 (Biregions) The notion of biregion introduced in this exercise can be
used to analyze the average complexity of some algorithms that use an influence graph,
but do not satisfy the update condition 5.3.3. A biregion is a pair of regions which can
have a parent-child relationship in the influence graph for at least one permutation of
the data. A biregion is determined by a set of at most 2b objects, those that determine
the parent region and those that determine the child region. Exactly one of the objects
that determine the child region conflicts with the parent region. We can extend the
notion of conflict to biregions in the following way: an object conflicts with a biregion
if it conflicts with at least one of its two regions and does not belong to the set of
objects that determine the biregion. A biregion can then be considered as a region in
the framework described in chapter 4.
1. Let S be a set of n objects. Show that a biregion, determined by i objects of S and
in conflict with j objects of S, is defined and with k conflicts over a random r-sample of
S with the probability p^i_{j,k}(r) given by lemma 4.2.1.
2. From this, extend both the sampling theorem and the moment theorem to the case
of biregions.
3. In essence, the difference between biregions and regions resides in the following fact.
Let FF be a biregion determined by i objects and conflicting with j objects of S. For
FF to correspond to an arc in the influence graph built for S, it is not enough that the
i objects that determine FF be inserted before any of the j objects that conflict with
FF; it must also be the case that the i objects that determine FF be processed in a certain
order. This order has to meet several criteria. These criteria depend on the algorithm.
At the very least, one of the objects that determine the child region, more precisely the
one that conflicts with the parent region, must be inserted after all the objects that
determine the parent region.
Show that the probability that FF correspond to an arc of the influence graph is α p''^i_j,
where p''^i_j is given in lemma 5.2.2, and α is a constant that satisfies 0 < α ≤ 1 and
that depends only on the particular criteria that the insertion order has to meet. Then
show that the probability that the biregion FF correspond to an arc of the influence
graph that is created at step r is α \frac{i}{r} p^i_j(r), where p^i_j(r) is defined in subsection 4.2.1.
4. Our goal is now to give a randomized analysis of an on-line algorithm that uses an
influence graph in which a node can have arbitrarily many children. We thus forget about
the second clause in the update condition 5.3.3, and relax the third one by assuming that
the parents of a region created by O are either killed by O, or still have no conflict after
the insertion of O. In this way, regions defined and without conflict with the current
subset may not be leaves of the influence graph, but could have many children before they
are killed. The complexity of the update phase is still assumed to take time proportional
to the number of arcs added to or removed from the influence graph. For instance, the
algorithm that computes convex hulls described in exercise 8.5 meets these conditions.
Let ff_0(r, S) stand for the expected number of biregions defined and without conflict
over a random r-sample of S. Show that the number of arcs in the influence graph built
for S is, on the average,
    O\!\left(\sum_{r=1}^{n} \frac{ff_0(r, S)}{r}\right),
and that the total expected cost of the locating phases is
    O\!\left(\sum_{r=1}^{n} \frac{n}{r^2}\, ff_0(r, S)\right).
5. Assume now that the influence graph built for a random r-sample of S has an
expected number of arcs at most g(r,S), where g is a known function. For instance,
when each node of the influence graph has at most a bounded number of children, we
may choose
g(r, S) = O( ∑_{j=1}^{r} f_0(j, S)/j ).
Show that the n-th incremental step of the on-line algorithm has an average complexity
of
O( g(n, S)/n + ∑_{r=1}^{n} ff_0(r, S)/r² ).
S_1 ⊂ S_2 ⊂ ... ⊂ S_{log* n} = S.
The algorithm computes a simplified description of the decomposition Dec(P), using
log* n steps. Step i computes the decomposition Dec(S_i) from Dec(S_{i-1}).
In the initial step, we build Dec(S_1) using the plane sweep algorithm of subsection 3.2.2, in time O(r_1 log r_1). (Any algorithm that runs in time O(r_1 log r_1) would do.)
In a subsequent step i, i > 1:
1. We locate the segments of S in Dec(S_{i-1}). In other words, for each region F in Dec(S_{i-1}), we compute the set S(F) of segments in S which intersect F.
2. For each region F of Dec(S_{i-1}), we compute the decomposition of S(F) ∪ S_i, and the portion of it that lies inside F. To do this, we simply compute the total decomposition Dec(S(F) ∪ S_i), using the plane sweep algorithm of subsection 3.2.2. (Again, any algorithm that runs in time O(m log m) for m segments would do.)
3. We obtain Dec(S_i) by putting together all the portions Dec(S(F) ∪ S_i) ∩ F inside the trapezoids F of Dec(S_{i-1}), and merging the regions that share a wall of Dec(S_{i-1}) which disappears in Dec(S_i).
Show that all three phases 1, 2, and 3 can be performed using O(n) operations. To analyze phase 2, note that S_{i-1} is a random r_{i-1}-sample of S_i, then use the extension of the moment theorem given in exercise 4.3 for the function g(x) = x log x.
Exercise 5.9 (Querying the influence graph) The influence graph built by an on-
line algorithm can be used to answer conflict queries on a set of objects. For instance, the
influence graph built for a vertical decomposition can answer location queries for a point
inside this decomposition. Show that, if n segments are stored in the influence graph,
answering a given location query takes time O(log n), on the average over all possible
insertion orders of the n segments into the influence graph. More generally, show that
the same time bound holds for any conflict query which, on any subset R of objects,
answers with a single region of F_0(R).
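To make such conflict queries concrete, here is a small self-contained illustration (ours, not part of the exercise) on a toy problem: objects are numbers, regions are the open intervals between already inserted numbers, and a number conflicts with an interval when it lies strictly inside it. Locating a query value follows, at each node, the child in conflict; the expected path length is O(log n) for a random insertion order.

    import random

    class Node:
        def __init__(self, lo, hi, creator=None):
            self.lo, self.hi = lo, hi      # the region is the open interval (lo, hi)
            self.creator = creator         # object whose insertion created the region
            self.children = []             # regions hooked below this one

        def conflicts(self, x):            # conflict = x lies strictly inside the interval
            return self.lo < x < self.hi

    def build_influence_graph(points):
        """Insert the points one by one; the killed interval gets the two new
        subintervals as children (a one-dimensional influence graph)."""
        root = Node(float("-inf"), float("inf"))
        for x in points:
            node = root
            while node.children:           # walk down, following the child in conflict
                node = next(c for c in node.children if c.conflicts(x))
            node.children = [Node(node.lo, x, x), Node(x, node.hi, x)]  # node is killed by x
        return root

    def locate(root, q):
        """Conflict (location) query: follow, at each level, the child in conflict."""
        node, depth = root, 0
        while node.children:
            node = next(c for c in node.children if c.conflicts(q))
            depth += 1
        return node, depth                 # leaf containing q, and the path length

    points = random.sample(range(1000), 200)
    root = build_influence_graph(points)
    leaf, depth = locate(root, 123.5)
    print(leaf.lo, leaf.hi, depth)         # depth is O(log n) on the average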
their algorithm (see exercises 5.3 and 5.4). Mulmuley's algorithm is very similar to that
of Clarkson and Shor, yet its analysis is based on probabilistic games and combinatorial
series, and is much less immediate.
The influence graph was first introduced in a paper by Boissonnat and Teillaud [31, 32]
where it was called the Delaunay tree, and was used there to compute on-line the
Delaunay triangulation of a set of points. Guibas, Knuth, and Sharir [117] proposed
a similar algorithm to solve the same problem. How to use the influence graph in an
abstract setting is described by Boissonnat, Devillers, Schott, Teillaud, and Yvinec in
[28] and applied to other problems, especially to compute convex hulls or to decompose
a set of segments in the plane. The method was later used to solve numerous other
problems. The influence graph is sometimes called the history of the incremental con-
struction.
The accelerated algorithm that builds the vertical decomposition of a simple polygon
is due to Seidel [204]. This method was subsequently extended to solve other problems
by Devillers [80], for instance to compute the Voronoi diagram of a polygonal line or of
a closed simple polygon (see section 19.2). The algorithm described in exercise 5.8 that
computes the decomposition of a polygon in time O(n log* n) is due to Clarkson, Cole
and Tarjan [69].
The method called backward analysis used in exercise 5.6 was first used by Chew in
[59] to analyze an algorithm that computes the Voronoi diagram of a convex polygon
(see exercise 19.4). It was subsequently used in a systematic fashion by Seidel in [203]
and Devillers in [80].
Mehlhorn, Sharir, and Welzl [167, 168] gave a finer analysis of randomized incremental algorithms by bounding the probability that the algorithm exceeds its expected performances by more than a constant multiplicative factor.
Randomized incremental algorithms proved very efficient in solving many geometric
problems. The basic methods (using the influence or the conflict graphs) or one of
their many variants inspired much work by several researchers such as Mulmuley [172,
174], Mehlhorn, Meiser and Ó'Dúnlaing [164], Seidel [205], Clarkson and Shor [71], and
Aurenhammer and Schwarzkopf [18].
There is a class of randomized algorithms which work not by the incremental method,
but rather by the divide-and-conquer paradigm. The subdividing step is achieved using
a sample of the objects to process. Randomization is used for choosing the sample, and
the method can be proved efficient using the probabilistic theorems given in exercises 4.5
and 4.6. Randomized divide-and-conquer is mainly used for building hierarchical data
structures that support repeated range queries. Typically, these queries can be expressed
in terms of locating a point in the arrangement of a collection of hyperplanes, simplices,
or other geometric objects. In a dual situation, the data set is a set of points and
the queries ask for those points which lie in a given region (half-space, simplex, ... ).
Haussler and Welzl [123] spurred new interest in the field with their notion of an ε-net.
Later, Matousek introduced the related notion of ε-approximations [150]. Chazelle and
Friedman [53] showed how to compute these objects in a deterministic fashion using
the method of conditional probabilities. The resulting deterministic method is called
a derandomization of the randomized divide-and-conquer method. This method was
then widely used, for instance by Matousek [150, 151, 152, 153, 154, 155], Matousek
and Schwarzkopf [156], or Agarwal and Matousek [4]. In his thesis [35], Brönnimann
studies the derandomization of geometric algorithms and the related concept of the
Vapnik-Chervonenkis dimension. Randomized divide-and-conquer is also used by Clark-
son, Tarjan, and Van Wyk in [65] to build the vertical decomposition of a simple
polygon.
Last but not least, the book by Mulmuley [177] is entirely devoted to randomized
geometric algorithms, and serves as a very comprehensive reference on the topic.
Chapter 6
Dynamic randomized
algorithms
The geometric problems encountered in this chapter are again stated in the ab-
stract framework of objects, regions, and conflicts introduced in chapter 4. A
dynamic algorithm maintains the set of regions defined and without conflict over
the current set of objects, when the objects can be removed from the current set
as well as added. In contrast, on-line algorithms that support insertions but not
deletions are sometimes called semi-dynamic.
Throughout this chapter, we denote by S the current set of objects and use the
notation introduced in the previous two chapters to denote the different subsets
of regions defined over S. In particular, Fo (S) stands for the set of regions defined
and without conflict over S. To design a dynamic algorithm that maintains the
set Fo(S) is a much more delicate problem than its static counterpart. In the
previous chapter, we have shown how randomized incremental methods provide
simple solutions to static problems. In addition, the influence graph techniques
naturally lead to the design of semi-dynamic algorithms. In this chapter, we
propose to show how the combined use of both conflict and influence graphs can
yield fully dynamic algorithms.
The general idea behind our approach is to maintain a data structure that
meets the following two requirements:
* It allows conflicts to be detected between any object and the regions defined
and without conflict over the current subset.
* After deleting an object, the structure is identical to what it would have
been, had the deleted object never been inserted.
Such a structure is called an augmented influence graph, and can be imple-
mented using an influence graph together with a conflict graph between the re-
gions stored in the influence graph and the current set of objects. In some cases,
we might be able to do without the conflict graph.
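Purely as an illustration of this idea, a node of such an augmented influence graph could be represented by a record of the following kind; the field names are ours, and the conflict test is left abstract. The conflict graph is simply the pair of cross-referencing lists node.conflicts and conflicts_of[object].

    from dataclasses import dataclass, field

    @dataclass
    class AugmentedNode:
        determinants: frozenset            # objects that determine the region
        creator: object                    # determinant of highest chronological rank
        parents: list = field(default_factory=list)
        children: list = field(default_factory=list)
        conflicts: list = field(default_factory=list)  # current objects in conflict
        killer: object = None              # object whose insertion killed the region, if any

        def is_leaf(self):
            # regions defined and without conflict over the current subset are the
            # nodes that currently have no killer
            return self.killer is None

    # The conflict graph is kept symmetrically: for every current object O we also
    # store the list of nodes whose region conflicts with O.
    conflicts_of = {}   # object -> list of AugmentedNode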
In section 6.2, we describe the augmented influence graph and how to perform
insertions and deletions. The randomized analysis of these operations is given in
section 6.3. This analysis assumes a probabilistic model which is made precise
and unambiguous in section 6.1. The general method is used in section 6.4 to
design a dynamic algorithm that builds the vertical decomposition of a set of
segments in the plane.
This chapter also uses the terminology and notation introduced in the previous
two chapters. To ease the reading process, some definitions are recalled in the
text or in the footnotes.
Inserting an object
Inserting an object O_n into a structure built for a set S_{n-1} is very similar to the
operation of inserting an object in an on-line algorithm that uses an influence
graph. The only difference is that, in addition to the insertion into the influence
graph, we must also take care of updating the conflict lists. This can be done in
two phases: a locating phase, and an updating phase.
Locating. The algorithm searches for all the nodes in the influence graph Ia(Σ) that conflict with O_n. Each time a conflict is detected, we add a conflict arc to the conflict graph, add O_n to the conflict list of the region that conflicts with it, and add this region to the list L(O_n).
Updating. A node of the influence graph is created for each region in F_0(S_n) determined by a set of objects that contains O_n. This node is also linked to parent nodes so that the two inclusion properties hold.
We may recall that a region in F_0(S_n) is said to be created by O_n if it is determined by a set of objects that contains O_n. Similarly, a region of F_0(S_{n-1}) is said to be killed by O_n if it conflicts with O_n. More generally, a region stored in a node of the influence graph Ia(Σ) has a creator in Σ, and a killer if it is not a leaf. The creator of F is, among all the objects that determine F, the one that has the highest rank in Σ. The killer of F is, among all the objects in Σ that conflict with F, the one with the lowest chronological rank.
For the rest of this chapter, we assume that the augmented influence graph satisfies the update condition 5.3.3. In particular, a node of the graph that stores a region created by O_n is linked only to nodes storing regions killed by O_n.
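A hedged sketch of such an insertion is given below; conflict and regions_created_by stand for the problem-specific primitives (they are assumptions of this sketch, not part of the general method), nodes are plain dictionaries with the fields described above, and the root is assumed to conflict with every object.

    def insert(On, root, conflict, regions_created_by, conflicts_of):
        """Sketch of an insertion into an augmented influence graph.
        conflict(node, o) tests a conflict between a node's region and an object;
        regions_created_by(On, killed) yields, for each region of F_0(S_n) created
        by On, its set of determinants and the killed parents it must be hooked to."""
        # Locating: traverse the graph, descending only into nodes in conflict with On.
        killed, seen, stack = [], set(), [root]
        while stack:
            node = stack.pop()
            if id(node) in seen or not conflict(node, On):
                continue
            seen.add(id(node))
            node["conflicts"].append(On)                  # conflict graph: node -> object
            conflicts_of.setdefault(On, []).append(node)  # conflict graph: object -> node
            if node["killer"] is None:                    # region was defined and without conflict
                node["killer"] = On                       # ... and is now killed by On
                killed.append(node)
            stack.extend(node["children"])
        # Updating: create a node for each region created by On and hook it to its
        # killed parents, so that both inclusion properties hold.
        for determinants, parents in regions_created_by(On, killed):
            new = {"determinants": determinants, "creator": On, "killer": None,
                   "parents": list(parents), "children": [], "conflicts": []}
            for p in parents:
                p["children"].append(new)
        return killed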
Deleting an object
To simplify the discussion, assume that the current set S has n objects, and that the current data structure is the augmented influence graph Ia(Σ) corresponding to the chronological sequence Σ = {O_1, ..., O_n}. The object to be deleted is O_k, the object that has chronological rank k. The algorithm must modify the augmented influence graph to look as if O_k had never been inserted into Σ. The augmented graph must therefore correspond to the chronological sequence Σ' = {O_1, ..., O_{k-1}, O_{k+1}, ..., O_n}.
For any integer l, k ≤ l ≤ n, let us denote by S'_l the subset S_l \ {O_k} of S. In particular, observe that S'_k = S_{k-1}.
In what follows, an object is called a determinant of a region if it belongs to
the set of objects that determine that region. The symmetric difference between
the nodes of Ia(Σ) and those of Ia(Σ') can be described as follows.
1. The nodes of Ia(Σ) that do not belong to Ia(Σ') are determined by a set of objects that contains O_k. Therefore O_k is a determinant of those regions, and we say that such nodes (and the corresponding regions) are destroyed when O_k is deleted.
2. The influence graph Ia(Σ') has a node that does not belong to Ia(Σ) for each region in ∪_{l=k+1}^{n} F_0(S'_l) that conflicts with O_k. Let us say that such a node is new when O_k is deleted, and so is its corresponding region. A new region has a creator and, occasionally, a killer in the sequence Σ'. If the region belongs to F_0(S'_l), conflicts with O_k, and is determined by a set of objects that contains O_l, then it is a new region after O_k is deleted, and its creator is O_l.
Nodes that play a particular role when Ok is deleted include of course the
new nodes as well as the destroyed ones, but the nodes killed by Ok also have
a special part to play. The nodes killed by Ok should not be mistaken for the
nodes destroyed when Ok is deleted. Nodes killed by Ok correspond to regions
of F_0(S_{k-1}) that conflict with O_k, whereas nodes destroyed when O_k is deleted
correspond to regions that admit Ok as a determinant. The latter nodes disappear
from the whole data structure when Ok is deleted. The former nodes are killed
when Ok is inserted but remain in the data structure (occasionally becoming
internal nodes), and they still remain after Ok is deleted.
Upon a deletion, the arcs in the influence graph Ia(Σ) that are incident to
the nodes destroyed when O_k is deleted disappear, and the graph Ia(Σ') has arcs incident to
the new nodes. In particular, new nodes must be linked to some parents (which
are not necessarily new nodes). Moreover, a few nodes of Ia(s) that are not
destroyed witness the destruction of some of their parents. Let us call these
nodes unhooked. They must be rehooked to other parents.
Again, deletions can be carried out in two phases: a locating phase, and a
rebuilding phase.
Locating. The algorithm must identify which nodes of the influence graph
Ia(Σ) are in conflict with O_k, which nodes have to be destroyed, and which are
unhooked. Owing to both inclusion properties, this can be done by a traversal of
the influence graph. This time, however, we not only visit the nodes that conflict
with Ok, but also those which admit Ok as a determinant. The destroyed or
unhooked nodes are inserted into a dictionary which will be looked up during the
rebuilding phase.
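The traversal can be sketched as follows (again with assumed, problem-specific primitives; dictionaries keyed by node identity play the role of the dictionary of the text, and the root is assumed to conflict with every object):

    def locate_for_deletion(Ok, root, conflict):
        """Visit the nodes that conflict with Ok or admit Ok as a determinant, and
        collect the destroyed and unhooked nodes (hedged sketch)."""
        destroyed, unhooked, conflicting = {}, {}, []
        seen, stack = set(), [root]
        while stack:
            node = stack.pop()
            if id(node) in seen:
                continue
            seen.add(id(node))
            determined_by_Ok = Ok in node["determinants"]
            in_conflict = conflict(node, Ok)
            if not (determined_by_Ok or in_conflict):
                continue                           # the traversal does not descend further
            if determined_by_Ok:
                destroyed[id(node)] = node         # disappears from the structure
            else:
                conflicting.append(node)           # node in conflict with Ok
            for child in node["children"]:
                stack.append(child)
                # a surviving child of a destroyed node loses a parent: it is unhooked
                if determined_by_Ok and Ok not in child["determinants"]:
                    unhooked[id(child)] = child
        return destroyed, unhooked, conflicting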
Rebuilding. The first thing to do is to effectively remove all the destroyed
nodes. Those nodes can be retrieved from the dictionary, and all the incident
arcs in the graph are also removed from the graph. The conflict lists of the nodes
which conflict with O_k are also updated accordingly. We shall not detail these
low-level operations any further, as they should not raise any problems. Next,
we must create the new nodes, as well as their conflict lists; we must also hook
these new nodes and rehook the nodes that were previously unhooked. The detail
of these operations depends on the nature of the specific problem in hand. The
general design is always the same, however: the algorithm reinserts one by one,
and in chronological order, all the objects O_l whose rank l is higher than k and
that are creators of at least one new or unhooked region. To reinsert an object
involves creating a node for each new region created by O_l, hooking this node into the influence graph, setting up its conflict list, and finally rehooking all the unhooked nodes created by O_l.
To characterize the objects O_l that must be reinserted during the deletion of O_k, we must explain what critical regions and the critical zone are. For each l > k, we call critical those regions in F_0(S'_{l-1}) that conflict with O_k. We call critical zone, and denote by Z_{l-1}, the set of those regions.
Lemma 6.2.1 Any object O_l of chronological rank l > k that is the creator of a new or unhooked node when O_k is deleted conflicts with at least one critical region in Z_{l-1}.
with O_l. The object O_l is then reinserted, and the details of this operation depend of course on the problem in hand. The main obstacle is that we might have to change more than the critical zone of the influence graph. Indeed, the new regions created by O_l always have some critical parents, even though they may also have non-critical parents. Moreover, parents of an unhooked region are new, but the unhooked region itself is not. To correctly set up the arcs in the influence graph that are incident to new nodes, the algorithm must find in Ia(Σ) the unhooked nodes and the non-critical parents of the new nodes. At this phase, the dictionary set up in the locating phase is used. After reinserting O_l, the priority queue Q is updated as follows: the regions in Z_{l-1} that conflict with O_l are not critical any more; however, any new region created by O_l belongs to Z_l. Then for each of these regions F, the killer of F in Σ' is identified as the object in L'(F) with the smallest chronological rank. This object is then searched for in Q and inserted there if it is not found. Then F is added to the list of regions killed by O_l.
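The outer loop of the rebuilding phase can then be organized as in the following sketch, in which rank, killer_in_new_sequence, conflict, and reinsert are all assumed, problem-specific helpers; reinsert(O_l, regions) creates the new nodes of O_l, rehooks the unhooked nodes it created, and returns the new critical regions.

    import heapq

    def rebuild(killed_by_Ok, rank, killer_in_new_sequence, conflict, reinsert):
        """Hedged sketch of the rebuilding phase of a deletion."""
        critical = list(killed_by_Ok)     # initial critical zone: regions killed by Ok
        queue, queued = [], set()         # priority queue Q of killers of critical regions

        def enqueue(F):
            o = killer_in_new_sequence(F) # first object of the new sequence conflicting with F
            if o is not None and o not in queued:
                heapq.heappush(queue, (rank[o], o))
                queued.add(o)

        for F in critical:
            enqueue(F)
        while queue:
            _, Ol = heapq.heappop(queue)
            # critical regions in conflict with Ol (a linear scan, for clarity only)
            hit = [F for F in critical if conflict(F, Ol)]
            critical = [F for F in critical if F not in hit]
            for F in reinsert(Ol, hit):   # new regions created by Ol stay critical
                critical.append(F)
                enqueue(F)
        # regions left without a killer belong to the final decomposition
        return critical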
Lemma 6.3.1 Upon deleting an object, the number of nodes that are destroyed, new, or unhooked is, on the average,
O( (1/n) ∑_{l=1}^{n} f_0(l, S)/l ),
where, as usual, f_0(l, S) stands for the number of regions defined and without conflict over a random sample of size l from S.
Proof. We bound the number of destroyed, new, and unhooked nodes separately.
1. The number of destroyed nodes. A node in Ia(Σ) corresponding to a region F in F_j^i(S) is destroyed during a deletion if the object deleted is one of the i objects that determine the region F. Let F be a region in F_j^i(S). Given that F corresponds to a node in the influence graph built for S, this node is destroyed during a deletion with a conditional probability i/n ≤ b/n. From theorem 5.3.2, we know that the expected number of nodes in the influence graph is
O( ∑_{l=1}^{n} f_0(l, S)/l ),
so the number of nodes destroyed when deleting an object is, on the average,
O( (1/n) ∑_{l=1}^{n} f_0(l, S)/l ).
2. The number of new nodes. The regions that correspond to the new nodes
in the influence graph when O_k is deleted are exactly the regions created by O_l, for some l such that k < l ≤ n, that belong to F_0(S'_l) and conflict with O_k. Let F be a region of F_j^i(S). This region F belongs to F_0(S'_l) with the probability p_j^i(l - 1) that was given in subsection 5.2.2. Assuming this, F is created by O_l with conditional probability i/(l - 1), and F conflicts with O_k with conditional probability j/(n - l + 1). Therefore, for a given k, the number of new nodes in the influence graph upon the deletion of O_k is, on the average (using corollary 4.2.7 to the moment theorem),
∑_{l=k+1}^{n} ∑_{i,j} |F_j^i(S)| p_j^i(l-1) (i/(l-1)) (j/(n-l+1)) = O( ∑_{l=k+1}^{n} f_0(⌊l/2⌋, S)/l² ).
Averaging over all ranks k, the number of new nodes in the influence graph after a deletion is
O( (1/n) ∑_{k=1}^{n} ∑_{l=k+1}^{n} f_0(⌊l/2⌋, S)/l² ) = O( (1/n) ∑_{l=1}^{n-1} f_0(l, S)/l ).
3. The number of unhooked nodes. Unhooked nodes are the non-destroyed
children of destroyed nodes. If condition 5.3.3 is satisfied, the number of children
of each node in the augmented influence graph is bounded by a constant. It
follows that the number of unhooked nodes is at most proportional to the number
of destroyed nodes. □
The update condition 5.3.3 assumes that the number of children of a node
is bounded by a constant. However, the number of parents of a node is not
necessarily bounded by a constant and the following lemma is useful to bound
the number of arcs in the influence graph that are removed or added during a
deletion.
Lemma 6.3.2 The number of arcs in the influence graph that are removed or
added during a deletion is, on the average, O( (1/n) ∑_{l=1}^{n} f_0(l, S)/l ).
Proof. The simplest proof of this lemma involves the notion of biregion encoun-
tered in exercise 5.7. A biregion defined over a set of objects S is a pair of regions
defined over S which can possibly be related as parent and child in the influence
graph, for an appropriate permutation of S. A biregion is determined by at most
2b objects, and the notion of conflict between objects and regions can be extended
to biregions: an object conflicts with a biregion if it is not a determinant of any
of the two regions but conflicts with at least one of the two regions. Biregions
obey statistical laws similar to those obeyed by regions. In particular, a biregion
determined by i objects of S which conflicts with j objects of S is a biregion
defined and without conflict over a random l-sample of S, with the probability
p_j^i(l) given by lemma 4.2.1. A biregion defined and without conflict over a subset
of S corresponds to an arc in the influence graph whenever the objects that
determine the parent region are inserted before those that determine the child
region and at the same time conflict with the parent region. This only happens
with a probability a E [0,1] (which depends on the number of objects determin-
ing the parent and the child, and the number of objects that at the same time
determine the child and conflict with the parent).
A biregion determined by i objects in S and conflicting with j objects in S corresponds to an arc in the influence graph Ia(Σ) that was created by O_l with a probability at most proportional to p_j^i(l)/l (see also exercise 5.7); this arc, created by O_l, conflicts with O_k with a probability at most proportional to
(1/l) (j/(n-l)) p_j^i(l).
A computation similar to that in the proof of lemma 6.3.1 shows that the
expected number of arcs in the influence graph that are created or removed
during a deletion (which are those adjacent in the influence graph to new nodes
or to destroyed nodes) is
O( (1/n) ∑_{l=1}^{n} ff_0(l, S)/l ),
where ff_0(l, S) is the expected number of biregions defined and without conflict over a random l-sample of S. It remains to show that ff_0(l, S) is proportional to f_0(l, S). Let S_l be a subset of size l of S. The parent region in a biregion that is defined and without conflict over S_l is a region defined over S_l that conflicts with exactly one object in S_l, and is therefore a region in F_1(S_l). Conversely, if the update condition 5.3.3 is true, every region in F_1(S_l) is the parent in a bounded number of biregions defined and without conflict over S_l. It follows that ff_0(l, S) is within a constant factor of the expectation f_1(l, S) of the number of regions defined and conflicting with one element over a random l-sample. From corollary 4.2.4 to the sampling theorem, this expected number is O(f_0(l, S)). □
Lemma 6.3.3 The total size of all the conflict lists attached to the nodes that
are new or destroyed when an object is deleted is, on the average,
O( ∑_{l=1}^{n} f_0(l, S)/l² ).
Proof.
1. Conflict lists of destroyed nodes. A region F of F_j^i(S) corresponds to a node of the influence graph Ia(Σ) with probability
∑_{l=1}^{n} (i/l) p_j^i(l),
as implied by lemma 5.2.2. The conflict list attached to this node has length j and this node is destroyed during the deletion of an object with probability i/n. The total size of the conflict lists attached to destroyed nodes is thus, on the average,
∑_{i=1}^{b} ∑_{j=0}^{n} |F_j^i(S)| (ij/n) ∑_{l=1}^{n} (i/l) p_j^i(l) = O( (1/n) ∑_{l=1}^{n} ((n-l)/l²) f_0(⌊l/2⌋, S) )
= O( ∑_{l=1}^{n} f_0(l, S)/l² ).
2. Conflict lists of new nodes. For a given rank k of the deleted object, the total size of the conflict lists of the new nodes is, on the average,
∑_{i=1}^{b} ∑_{j=1}^{n} |F_j^i(S)| ∑_{l=k+1}^{n} p_j^i(l-1) (i/(l-1)) (j/(n-l+1)) (j-1).
Applying corollary 4.2.7 and averaging over the rank k of the deleted object, this size is
O( ∑_{l=1}^{n} f_0(l, S)/l² ).
Lastly, setting up the priority queue Q of killers of critical regions involves the
regions of the influence graph Ia(Σ) that are killed by O_k. The conflict lists of
these regions are traversed in order to set up the conflict lists of the new children
of these nodes. The following lemma is therefore needed in order to fully analyze
dynamic algorithms.
Lemma 6.3.4 The number of nodes in the influence graph Ia(Σ) that are killed by a random object in S is, on the average,
O( (1/n) ∑_{l=1}^{n} f_0(l, S)/l ).
The total size of the conflict lists attached to the nodes killed by a random object is, on the average,
O( ∑_{l=1}^{n} f_0(l, S)/l² ).
Proof. The nodes killed by the object O_k of rank k correspond to the regions of F_0(S_{k-1}) that conflict with O_k. Averaging over the rank k, their expected number is
(1/n) ∑_{k=2}^{n} ∑_{i=1}^{b} ∑_{j=1}^{n} |F_j^i(S)| p_j^i(k-1) (j/(n-k+1)) = (1/n) ∑_{k=2}^{n} O( f_0(⌊k/2⌋, S)/k ) = O( (1/n) ∑_{l=1}^{n} f_0(l, S)/l ),
as can be deduced from corollary 4.2.7 to the moment theorem.
The total size of the conflict lists attached to nodes killed by a random object is, on the average,
O( ∑_{k=1}^{n} f_0(⌊k/2⌋, S)/k² ) = O( ∑_{l=1}^{n} f_0(l, S)/l² ). □
Parameter t' is O(log n) if we use a balanced binary tree, but it is O(log log n)
if we use a stratified tree along with perfect dynamic hashing (see section 2.3).
Moreover, as we will see further on, if fo(l, S) grows at least quadratically, then
implementing Q with a simple array of size n will suffice, and t' can be ignored
in the analysis.
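For illustration (this is our own sketch, not the book's data structure), such an array-based queue can be written as a monotone bucket queue over the chronological ranks 1, ..., n: insertions always concern ranks no smaller than the last rank extracted, so a scan pointer that only moves forward gives O(1) amortized time per operation after an O(n) initialization.

    class RankQueue:
        """Monotone priority queue over chronological ranks 1..n, as an array of
        buckets; valid as long as no inserted rank is below the scan pointer."""
        def __init__(self, n):
            self.present = [False] * (n + 1)
            self.cursor = 1

        def insert(self, rank):
            self.present[rank] = True

        def contains(self, rank):
            return self.present[rank]

        def extract_min(self):
            while self.cursor < len(self.present) and not self.present[self.cursor]:
                self.cursor += 1
            if self.cursor >= len(self.present):
                return None
            self.present[self.cursor] = False
            return self.cursor

    q = RankQueue(10)
    for r in (7, 3, 9):
        q.insert(r)
    print(q.extract_min(), q.extract_min(), q.extract_min())   # 3 7 9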
O( (t + t') (1/n) ∑_{l=1}^{n} f_0(l, S)/l + ∑_{l=1}^{n} f_0(l, S)/l² )
As always, fo(l,S) is the number of regions defined and without conflict over a
random l-sample of S, t is the complexity of any operation on a dictionary, and
t' is the complexity of an operation on the priority queue Q.
Proof.
1. The storage needed by the augmented influence graph Ia(Σ) is proportional to the total size of the conflict lists attached to the nodes of Ia(Σ). Each element in one of these conflict lists corresponds to a conflict detected by an on-line algorithm processing the objects in S in the chronological order of the sequence Σ. The expected number of conflicts, for a random permutation of Σ, is thus
given by theorem 5.2.3 which analyzes the complexity of incremental algorithms
that use a conflict graph.
2. The randomized analysis of an insertion into the augmented influence graph
is identical to that of the incremental step in an on-line algorithm that uses
an influence graph. Indeed, the two algorithms only differ in that one updates
conflict lists. Each conflict between the inserted object and a node in the current
O( ∑_{l=1}^{k} f_0(⌊l/2⌋, S)/l² ),
which we know from the proof of theorem 5.3.4. Averaging over the rank of the deleted object, we get
O( ∑_{l=1}^{n} f_0(l, S)/l² ).
From lemma 6.3.1, the latter expression is also a bound on the expected number
of nodes destroyed and thus on the global cost of traversing the influence graph.
If the update condition 6.3.5 is realized, lemmas 6.3.3 and 6.3.4 show that the
conflict lists of the new regions can be set up in time
O( ∑_{l=1}^{n} f_0(l, S)/l² ).
Lemma 6.3.1 and the update condition 6.3.5 (2a) assert that the term
O( t (1/n) ∑_{l=1}^{n} f_0(l, S)/l )
accounts for the average complexity of all the operations performed on the dic-
tionaries of nodes.
Since t is necessarily Ω(1), lemmas 6.3.1 and 6.3.2, together with condition 6.3.5
(2c), assert that the former term also accounts for all the operations that update
the augmented influence graph, not counting those on the conflict lists or the
priority queue.
It remains to analyze the management of the priority queue Q of critical region
killers. The number of insertions and queries in the priority queue is proportional
to the total number of critical regions encountered during the rebuilding phase.
These regions are either killed by the deleted object, or they are new regions.
Their average number is thus
O( (1/n) ∑_{l=1}^{n} f_0(l, S)/l ),
as asserted by lemmas 6.3.1 and 6.3.4. The average number of minimum queries to be performed on the queue Q is also
O( (1/n) ∑_{l=1}^{n} f_0(l, S)/l ),
since the number of objects to be reinserted is bounded from above by n on the one hand, and by the number of unhooked or new nodes (estimated by lemma 6.3.1) on the other hand. Consequently, the operations on the priority queue Q cost, on the average,
O( t' (1/n) ∑_{l=1}^{n} f_0(l, S)/l ). □
each such object O_l, a killer of a critical region, the algorithm builds the new nodes created by O_l and rehooks the unhooked nodes created by O_l. Figure 6.1 shows how the influence graph built for the four segments {O_1, O_2, O_3, O_4} is modified when deleting O_3. The reader may observe again how the graph was created incrementally, in figures 5.6, 5.7 and 5.8. In this example, nodes B and H are killed by O_3, nodes J, K, L, M, N, O, P, Q, S, U, V are destroyed, nodes R, T, W are unhooked (they are created by O_4), and B' is a new node (its creator is O_4).
The subsequent paragraphs describe in great detail the specific operations
needed.
Locating. This phase is trivial: all the nodes that conflict with the object
Ok to be deleted, or that are determined by a subset containing Ok, are visited
together with their children. The algorithm builds a dictionary D of unhooked
or destroyed nodes, which will be used during the rebuilding phase.
Rebuilding. The priority queue Q, which contains the killers of critical re-
gions, is initialized with the nodes in Ia(Σ) that are killed by O_k.
At each step in the rebuilding process, the algorithm extracts from the priority queue Q the object O_l of smallest chronological rank. It also retrieves the list of the critical regions that conflict with O_l.
Each of these regions is split into at most four subregions by O_l and the walls stemming from its endpoints and its intersection points. These subregions are not necessarily trapezoids in the decomposition Dec(S'_l). Indeed, the walls cut by O_l have to be shortened, keeping only the part that is still connected to the endpoint or intersection point from which it stems. The other part of the wall
must be removed and the adjacent subregions separated by this part must be
joined. The join can be one of two kinds: internal when the portion of wall to be
removed separates two critical regions, and external when it separates a critical
region from a non-critical region (see figure 6.2).
To detect which regions to join,² the algorithm visits all the critical regions that conflict with O_l, and stores in a secondary dictionary D_l the walls incident to these regions that are intersected by O_l. Any wall in this dictionary that separates two critical regions gives rise to an internal join, and any wall incident to only one critical region gives rise to an external join.
In a first phase, the algorithm creates a temporary node for each subregion resulting from the splitting of a critical region by O_l or the walls stemming from O_l. The node that corresponds to a subregion F_i of the region F is hooked in the graph as a child of F. Its conflict list is obtained by selecting, from the conflict
2
The algorithm cannot traverse the sequence, ordered along O_l, of critical regions for two reasons: (1) it does not maintain the vertical adjacencies between the internal nodes of the influence graph, and the adjacencies between either the trapezoids of the decomposition Dec(S'_{l-1}) or the critical regions of Z_{l-1} are not available, and (2) the intersection of O_l with the union of the regions in Z_{l-1} may not be connected (see for instance figure 6.4).
[Figure: panels (a)-(d); figure content not recovered.]
list of F, the segments intersecting F_i. Then the algorithm processes the internal and the external joins, as explained below.
1. Internal joins. Every maximal set {G_1, ..., G_h} of subregions, pairwise adjacent and separated by walls to be removed, must be joined together into a single region G. The algorithm creates a temporary node for G. The nodes corresponding to G_1, G_2, ..., G_h are removed from the graph and the node corresponding to G inherits all the parents of these nodes. The conflict list of G is obtained by merging the conflict lists of G_1, G_2, ..., G_h, removing redundancies. For this, we use a procedure similar to that of subsection 5.2.2, but which need not know the order along O_l of the subregions to be joined. By scanning the conflict lists of these subregions successively, the algorithm can build for each segment O in S a list L_G(O) of the subregions that conflict with O. A bidirectional pointer interconnects the entry in the list L'(G_i) that corresponds to an object O with the entry in L_G(O) corresponding to the subregion G_i. The conflict list of G can be retrieved by scanning again all the conflict lists L'(G_i) of the subregions G_1, ..., G_h. This time, each segment O encountered in one of these lists is added to the conflict list of G and removed from the other conflict lists, using the information stored in L_G(O).
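The duplicate-free merge can be illustrated with ordinary lists and a per-segment index playing the role of the lists L_G(O); here a dictionary stands in for the bidirectional pointers of the text (the names are ours):

    def merge_conflict_lists(conflict_lists):
        """Merge the conflict lists of subregions G_1, ..., G_h into the conflict list
        of their union G, removing redundancies, in time proportional to the total
        size of the input lists."""
        # First pass: for every segment O, record the subregions it conflicts with
        # (this plays the role of the lists L_G(O) of the text).
        occurrences = {}
        for i, lst in enumerate(conflict_lists):
            for o in lst:
                occurrences.setdefault(o, []).append(i)
        # Second pass: each segment is kept exactly once in the merged list.
        merged = list(occurrences.keys())
        return merged, occurrences

    lists = [["s1", "s3"], ["s3", "s4"], ["s1", "s5"]]
    merged, occ = merge_conflict_lists(lists)
    print(sorted(merged))    # ['s1', 's3', 's4', 's5']
    print(occ["s3"])         # [0, 1]: s3 conflicted with the first two subregions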
Let us call auxiliary regions the regions obtained after all the internal joins.
These regions are either subregions that needed no internal join, or regions ob-
tained from an internal join of the subregions. An auxiliary region that does not
need to undergo any external join is a region of the decomposition Dec(S'). Let
H be such a region. This region is new if it conflicts with Ok, unhooked other-
wise. In the former case, the temporary node for H becomes permanent and the
killer of H is inserted into the priority queue Q. In the latter case, a node for H
already exists in the influence graph la(S). A simple query in the dictionary of
unhooked nodes retrieves this node, which can then be rehooked to the parents
of the auxiliary node created for H.
2. External joins. In a second phase, the algorithm performs the external joins. An auxiliary region undergoes a left join if its left wall must be removed, a right join if its right wall must be removed, and a double left-right join if both its vertical walls must be removed. Let G be an auxiliary region undergoing a right join. For instance, this is the case for region G = G_1 ∪ G_2 in figure 6.2. The right wall of G is on the boundary of the critical zone, since this is an external join. This wall is therefore not cut by the deleted segment O_k. When the decomposition of S is built incrementally according to the order in the sequence Σ, this wall appears at a certain step and is removed when O_l is inserted. Thus, among all the regions in Ia(Σ), there is one region F_d created by O_l that contains the right wall of G.³ The region F_d is necessarily destroyed or unhooked: indeed, F_d is a trapezoid in the decomposition Dec(S_l), and has a non-empty intersection with one or more critical regions in Z_{l-1}. As every critical region in Z_{l-1} is contained in the union of the trapezoids of Dec(S_{l-1}) of which O_k is a determinant, the region F_d must intersect those trapezoids. Thus at least one of the parents of F_d in the graph Ia(Σ) is a destroyed node. Similarly, if the left wall of G must be removed, there is in Ia(Σ) one destroyed or unhooked region F_g created by O_l that contains the left wall of G. If the join is double left-right, F_d and F_g may be distinct or identical (see figure 6.3).
Several auxiliary regions may be joined into the same permanent region (see
figure 6.4). Let {G_1, G_2, ..., G_j} be the sequence ordered along O_l of the auxiliary
3
It would have been more desirable to subscript F by l and r for left and right, but this would have conflicted with the index l for O_l and created confusion. We have kept a French touch with the indices g and d for the French gauche and droit, meaning respectively left and right. (Translator's note)
[Figure: regions F_g and F_d and the reinserted segment O_l; figure content not recovered.]
regions⁴ whose left wall is contained in the same region F_g of Ia(Σ) created by O_l. If j > 1, then the right walls of the auxiliary regions G_1, G_2, ..., G_{j-1} are also contained in F_g and must be removed as well. If the right wall of G_j is a permanent wall (that does not have to be removed), the join results in a single trapezoid of the decomposition Dec(S'_l) that is the same as F_g ∪ G_1 ∪ ... ∪ G_j = F_g ∪ G_j (see figure 6.4). If the right wall of G_j also has to be removed, then we introduce the ordered sequence of auxiliary regions {G_j, G_{j+1}, ..., G_h}: this sequence consists of regions whose right wall must be removed and which lie in the same region F_d of Ia(Σ) created by O_l. The left walls of the regions in {G_j, G_{j+1}, ..., G_h} then also belong to F_d and have to be removed. The join operates on the auxiliary regions {G_1, ..., G_j, ..., G_h} and results in a unique trapezoid in Dec(S'_l) that is the same as F_g ∪ G_1 ∪ ... ∪ G_h ∪ F_d = F_g ∪ G_j ∪ F_d.
We present below the operations to be performed in the latter case of a double
left-right join. The former cases can be handled in a similar manner. Suppose
for now that the auxiliary regions {G_1, ..., G_j, ..., G_h}, as well as the regions F_g and F_d of Ia(Σ) that participate in the join, are known to the algorithm.
If the trapezoid resulting from the join, F = F_g ∪ G_j ∪ F_d, does not conflict with O_k (see figure 6.3, right), it is a trapezoid in the decomposition Dec(S_l). Necessarily, the regions F_g, F_d, and F are the same, and the corresponding node in Ia(Σ) is unhooked. It then suffices to search for this node in the dictionary of unhooked nodes, to remove the auxiliary nodes created for G_1, G_2, ..., G_h, and to rehook the node corresponding to F, with the critical nodes among the parents of G_1, G_2, ..., G_h as the parents of F.
If the resulting trapezoid F = F_g ∪ G_j ∪ F_d conflicts with O_k (see figure 6.3, left), then it is a new region of Ia(Σ'), and the regions F_g and F_d in Ia(Σ)
4
We must emphasize that even though the given description of the region resulting from an
external join refers to the order of the joined auxiliary regions along O_l, the algorithm does not
know this order, nor does it need it.
[Figure: panels (a)-(c); figure content not recovered.]
are destroyed. The auxiliary nodes created for G_1, G_2, ..., G_h are removed, and replaced by a single node corresponding to F. This node is then rehooked to the parents of F_g and F_d that are not destroyed, and to all the critical parents of G_1, G_2, ..., G_h. The conflict list of F is derived from those of F_g, G_1, G_2, ..., G_h and F_d, as is the case for internal joins. Lastly, the killer of F is inserted into the priority queue Q.
We now have to explain how to retrieve the unhooked or destroyed nodes
corresponding to the regions Fg and Fd involved in the join. Let G be an auxiliary
region whose left wall must be removed. The corresponding region Fg is either
destroyed or unhooked, created by O_l, and the segments that support its floor and ceiling⁵ respectively support the floor and ceiling of G. Any region in the decomposition of a given set of segments is identified uniquely by its floor, its ceiling, and one of its walls. Below, we show that either we can find one of the walls of F_g, or we can identify a destroyed region F' which is the unique sibling of F_g in Ia(Σ).
* If G does not conflict with O_k, but its right wall is permanent (see figure 6.5b), then this right wall is also that of F_g.
* Lastly, if G does not conflict with O_k, and if both its walls must be removed (see figure 6.5c), then segment O_l intersects both walls of a critical region that was subsequently split into G and G'. The other subregion G' also conflicts with O_k but does not undergo any join. In Ia(Σ), exactly one node F' has O_l for creator, is destroyed, and shares the same floor, same ceiling, and same left wall as G'. This node F' has only one parent, and this parent has two children, one of which is F' and the other the node F_g that we are looking for: indeed, the parent of F' corresponds to a trapezoid in the decomposition Dec(S_{l-1}) whose two walls are intersected by O_l.
In either case, the region F_g, or its sibling F', is known through its creator, its floor, its ceiling, and one of its left or right walls. This information is enough to characterize it. Naturally, the same observation goes for F_d or its sibling F'_d. We can then use the dictionary D storing all the destroyed or unhooked nodes. This dictionary comes in two parts, D_g and D_d. In the dictionary D_g, the nodes are labeled with:
* the creator of the region,
* the two segments that support the floor and the ceiling of the trapezoid,
* the pair of segments whose intersection determines the right wall of the trapezoid, or the same segment repeated twice if the wall stems from the segment's endpoint.
Similarly, in its counterpart D_d, nodes are labeled the same way, except that in the last component the right wall is replaced by the left wall. Any destroyed or unhooked node is inserted into both dictionaries D_g and D_d.
5
We recall that the floor and ceiling of a trapezoid are its two non-vertical sides.
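Purely as an illustration of the labeling just described, and with hypothetical field names, the two keys of a destroyed or unhooked node could be built as follows:

    def right_key(node):
        """Key used in D_g: creator, floor, ceiling, and the right wall (the wall is
        encoded by the pair of segments it stems from, or one segment repeated twice
        for a wall stemming from an endpoint).  Field names are hypothetical."""
        return (node["creator"], node["floor"], node["ceiling"], node["right_wall"])

    def left_key(node):
        """Key used in D_d: same labels, with the left wall as last component."""
        return (node["creator"], node["floor"], node["ceiling"], node["left_wall"])

    def index_destroyed_or_unhooked(nodes):
        Dg = {right_key(nd): nd for nd in nodes}
        Dd = {left_key(nd): nd for nd in nodes}
        return Dg, Dd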
[Figure: panels (a)-(c); figure content not recovered.]
To analyze this algorithm, we first check that it does satisfy the update condi-
tions 6.3.5. The first condition is satisfied, since the augmented influence graph
has the same nodes and arcs as the influence graph built by the on-line algorithm
of subsection 5.3.2, which itself satisfies the update condition 5.3.3. Therefore,
we need only look at deletions.
1. Number of operations on the dictionaries. Each deletion involves a
two-sided dictionary D of destroyed or unhooked nodes, as well as a dictionary D_l, for each reinserted segment O_l, of walls in the critical zone intersected by O_l. A destroyed or unhooked node is inserted and queried at most once in D. A critical region in Z_{l-1} has at most two walls which must be inserted into D_l, and this region will not be a critical region any more after the reinsertion of O_l. The number of operations on all dictionaries D_l is thus at most proportional to
the total number of critical regions encountered in the rebuilding phase. Any
critical region is either killed or new. The total number of operations is thus at
most proportional to the number of nodes that are killed, destroyed, unhooked,
or new.
2. Conflict lists of new nodes. The conflict list of a new node is obtained by
scanning the conflict lists of the auxiliary or destroyed regions of which it is the
(1/n) ∑_{l=1}^{n} f_0(l, S)/l = O(1 + a/n),
∑_{l=1}^{n} f_0(l, S)/l² = O(log n + a/n).
We can now use theorem 6.3.6 to state the following theorem, which summarizes
the results so far:
O(log n + a/n).
where the parameters t and t' stand respectively for the complexities of the
operations on dictionaries and priority queues.
Therefore, if we use perfect dynamic hashing together with a stratified tree, the expected cost of a deletion is
O( log n + (1 + a/n) log log n ).
If we use balanced binary trees, it remains O( (1 + a/n) log n ).
For the preceding algorithm, we have merely applied the general principles of
the augmented influence graph to the case of computing the vertical decomposi-
tion of a set of segments. In fact, in this specific case, we may derive a simpler
algorithm, yet one that uses less storage. This algorithm does not need to keep
the conflict lists and maintains a non-augmented influence graph. It is outlined in
exercises 6.1, 6.2 and 6.3, and its performances are summarized in the following
theorem:
Therefore, the expected cost of a deletion is O( (1 + a/n) log log n ) if we use perfect dynamic hashing coupled with stratified trees. It remains O( (1 + a/n) log n ) if we use balanced binary trees.
6.5 Exercises
Exercise 6.1 (Dynamic decomposition) Let us maintain dynamically the decom-
position of a set of segments using an influence graph. Show that the creator of a new
trapezoid, or a trapezoid unhooked during a deletion, is also the creator of at least one
destroyed trapezoid.
Hint: The proof of this fact relies on the two additional properties possessed by the
influence graph of a decomposition:
1. The influence domain of an internal node is contained in the union of the influence
domains of its children.
2. If an object Ok is the determinant of an internal node, it necessarily is a determi-
nant of at least one child of this node.
Exercise 6.3 (Dynamic decomposition) The aim of this exercise is to show how we
may dynamically maintain the decomposition of a set S of segments using a simple
influence graph, without the conflict lists.
Figure 6.6. Detecting conflicts in the critical zone. Region F is shaded, and region H within is emphasized.
The segments that must be reinserted during a deletion are the creators of destroyed
regions (see exercise 6.1) and can be detected during the locating phase.
Let O_l be one of the segments to be reinserted during the deletion of O_k. To retrieve all the critical regions that conflict with O_l, the algorithm considers in turn the destroyed regions H with creator O_l, and selects the critical regions F related to H by one of the five cases described in exercise 6.2.
For this, the deletion algorithm maintains an augmented dictionary A, storing the sequence ordered along O_k of critical regions intersected by O_k. Let H be one of the destroyed regions, created by O_l. If H has a wall that stems from a point on O_l and butts against O_k, or a wall stemming from O_l ∩ O_k, this wall is located in the structure A, and the critical region containing this wall is retrieved. If H has two walls stemming from a point on O_k and from a point on O_l, the region containing the wall stemming from the point on O_k is searched for in A, and it is selected if it also contains the wall of H stemming from the point on O_l. Lastly, if O_l and O_k support the floor and ceiling of H, the right wall of H is searched for in A, and any critical region that intersects the floor and the ceiling of H is selected.
1. The selected region obviously conflicts with O_l. As shown in exercise 6.2, any critical region that conflicts with H is selected. Show that such a region can be selected at most 16 times.
To speed up the locating phase, the algorithm maintains the lists of nodes killed by
each object stored in the structure. To perform the deletion, the algorithm proceeds
along the following lines.
Locating. The algorithm traverses the influence graph starting on the nodes killed
by Ok, and visits the destroyed or unhooked nodes. During this traversal, the algorithm
sets up a dictionary D that stores the destroyed and unhooked nodes, and a list C of the
creators of the destroyed nodes.
Rebuilding. The list C is sorted by chronological order, for instance by storing the elements in a priority queue, and extracting them in order. The redundant elements are extracted only once. The dictionary A initially stores the regions killed by O_k.
The objects of C are processed in chronological order. For each object O_l, the critical regions that conflict with O_l are selected as explained above. The remaining operations are identical to those in the algorithm of section 6.4. The conflict lists of the new regions
do not have to be computed. On the other hand, the dictionary A must be updated.
2. Show that the performances of this algorithm are those given by theorem 6.4.2.
Exercise 6.4 (Lazy dynamic algorithms) In this exercise, we propose a lazy method
to dynamically maintain the decomposition of a set of segments. For simplicity, let us
assume that the segments do not intersect. The algorithm maintains an influence graph
in the following lazy fashion:
1. The graph is a mere influence graph, no conflict lists are needed.
2. During an insertion, the nodes corresponding to the new trapezoids are hooked to
the nodes corresponding to the killed trapezoids as in the algorithms described in
subsection 5.3.2 and section 6.4.
3. During a deletion, the nodes corresponding to the new trapezoids are hooked to
leaves of the graph that correspond to destroyed trapezoids. More precisely, a node
corresponding to a new trapezoid is hooked to leaves of the graph that correspond
to the destroyed nodes that have a non-empty intersection with the new trapezoid.
No node is removed from the graph.
4. The algorithm keeps the age of the current graph in a counter, meaning the total
number of operations (insertions and deletions) performed on this graph. Each
time the number of segments effectively present falls below half the number stored
in this counter, the algorithm builds the influence graph anew by inserting the
segments effectively present into a brand new structure.
1. Show that when O(n) segments are stored in the structure, the expected cost of an insertion or a location query is still O(log n).
2. The cost of the periodic recasting of the graph is shared among all the deletions. Show that the amortized complexity of a deletion is still O(log n) on the average. (Recall
that the segments do not intersect, by assumption.)
Hint: It will be noted that the number of children of a node in the influence graph is not
bounded any more. The analysis must then have recourse to biregions (see exercise 5.7)
to estimate the expected complexity of the locating phases.
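The bookkeeping behind this lazy strategy fits in a few lines; below is a self-contained skeleton (ours) in which the construction and the insertion and deletion routines of the underlying influence graph are passed in as abstract parameters.

    class LazyStructure:
        """Skeleton of the lazy dynamization of exercise 6.4: destroyed nodes are kept
        in the graph, and the whole structure is rebuilt when fewer than half of the
        objects counted by the age counter are still present."""
        def __init__(self, build_from_scratch, insert_one, delete_one):
            self.build = build_from_scratch
            self.insert_one = insert_one
            self.delete_one = delete_one
            self.graph = self.build([])
            self.live = set()      # objects effectively present
            self.age = 0           # operations performed on the current graph

        def insert(self, o):
            self.insert_one(self.graph, o)
            self.live.add(o)
            self.age += 1

        def delete(self, o):
            self.delete_one(self.graph, o)    # marks nodes, hooks new nodes to leaves
            self.live.discard(o)
            self.age += 1
            if 2 * len(self.live) < self.age: # fewer live objects than half the counter
                self.graph = self.build(list(self.live))
                self.age = len(self.live)     # count the rebuilding insertions (assumption)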
in the plane. The algorithm by Clarkson, Mehlhorn, and Seidel [70] uses the same
approach to maintain the convex hull of a set of points in any dimension. The method
was then abstracted by Dobrindt and Yvinec [86]. A similar approach is also discussed
by Mulmuley [176], whose book is the most comprehensive reference on this topic.
There is another way to dynamize randomized incremental algorithms. This approach,
developed by Schwarzkopf [198, 199], can be labeled as lazy. As outlined in exercise 6.4,
it consists in not removing from the structure the elements that should disappear upon
deletions. These elements are marked as destroyed, but remain physically present, and
still serve for all subsequent locating phases. Naturally, the structure may only grow.
When deletions outnumber insertions, the number of objects still present in the structure
is less than half the number of objects still stored, and the algorithm completely rebuilds
the structure from scratch, by inserting one by one the objects that were not previously
removed.
Finally, we shall only touch the topic of randomized or derandomized dynamic struc-
tures which efficiently handle repetitive queries on a given set of objects, while allowing
objects to be inserted into or deleted from this set. These structures embody the dy-
namic version of randomized divide-and-conquer structures, discussed in the notes of the
previous chapter. These dynamic versions can be found in the works by Mulmuley [175],
Mulmuley and Sen [178], Matousek and Schwarzkopf [153, 156], Agarwal, Eppstein, and
Matousek [3] and Agarwal and Matousek [4].
Part II
Convex hulls
Convexity is one of the oldest concepts in mathematics. It already appears
in the works of Archimedes, in the third century B.C. It was not until the
1950s, however, that this theme developed widely in the works of modern math-
ematicians. Convexity is a fundamental notion for computational geometry, at
the core of many computer engineering applications, for instance in robotics,
computer graphics, or optimization.
A convex set has the basic property that it contains the segment joining any two
of its points. This property guarantees that a convex object has no hole or bump,
is not hollow, and always contains its center of gravity. Convexity is a purely affine
notion: no norm or distance is needed to express the property of being convex.
Any convex set can be expressed as the convex hull of a certain point set, that
is, the smallest convex set that contains those points. It can also be expressed
as the intersection of a set of half-spaces. In the following chapters, we will be
interested in linear convex sets. These can be defined as convex hulls of a finite
number of points, or intersections of a finite number of half-spaces. Traditionally,
a bounded linear convex set is called a polytope. We follow the tradition here, but
we understand the word polytope as a shorthand for bounded polytope. This lets
us speak of an unbounded polytope for the non-bounded intersection of a finite set
of half-spaces.
In chapter 7, we recall the definitions relevant to polytopes, their facial struc-
ture, and their combinatorial properties. We introduce the notion of polarity as
a dual transform on polytopes, and the notions of projective spaces and oriented
projective spaces to extend the above definitions and results to unbounded poly-
topes. In chapter 8, we present solutions to one of the most fundamental problems
of computational geometry, namely that of computing the convex hull of a finite
number of points. Chapter 9 contains algorithms which work only in dimension
2 or 3. Lastly, chapter 10 tackles the related linear programming problem, where
polytopes are given as intersections of a finite number of half-spaces.
Chapter 7
Polytopes
7.1 Definitions
7.1.1 Convex hulls, polytopes
H̄⁺ = H⁺ ∪ H,   H̄⁻ = H⁻ ∪ H.
Consider a d-polytope P. A hyperplane H supports P, and is called a supporting
hyperplane of P, if H n P is not empty and P is entirely contained in one of the
closed half-spaces H+ or H-. The intersection H n P of the polytope P with
a supporting hyperplane H is called a face of the polytope P. Faces are convex
subsets of Ed, with a dimension ranging from 0 to d - 1. To these faces, called
the proper faces of P, we add two faces called improper: the empty face whose
dimension is set to -1 by convention, and the polytope P itself, of dimension d.
A face of dimension j is also called a j-face. A 0-face is called a vertex, a 1-face
is called an edge, and a (d - 1)-face is called a facet of the polytope.
If F is a face of P and H a supporting hyperplane of P such that F = H n P,
H is said to support P along F.
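For a polytope given as the convex hull of a finite point set X, these definitions are easy to test numerically: a hyperplane H supports P exactly when the affine function defining H has constant sign on X and vanishes at some point of X, and the face H ∩ P is then the convex hull of the points of X lying on H (compare theorem 7.1.2 below). A small self-contained check, ours only:

    def supports(h, X, eps=1e-9):
        """h = (a, b) encodes the hyperplane {x : a.x + b = 0}.  Returns the points of
        X lying on H when H supports conv(X), and None otherwise."""
        def value(p):
            return sum(ai * pi for ai, pi in zip(h[0], p)) + h[1]
        values = [value(p) for p in X]
        if all(v >= -eps for v in values) or all(v <= eps for v in values):
            face = [p for p, v in zip(X, values) if abs(v) <= eps]
            return face if face else None   # empty intersection: H misses the polytope
        return None                          # X lies on both sides: H is not supporting

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(supports(((1, 0), -1), square))    # x = 1 supports along the edge (1,0)-(1,1)
    print(supports(((1, 1), -2), square))    # x + y = 2 supports along the vertex (1,1)
    print(supports(((1, 0), -0.5), square))  # x = 0.5 cuts the square: not supporting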
Theorem 7.1.1 The boundary of a polytope is the union of its proper faces.
Proof. Consider a polytope P. It is easy to show that the union of the faces
of P is included in the boundary of P. Indeed, let F be a face of P, H a hyperplane supporting P along F, and H⁺ the half-space bounded by H that contains P. Any point X in F belongs to P and to H, and any neighborhood of this point contains points that do not belong to P. The converse inclusion (of
the boundary in the union of the proper faces) results from a general theorem
on bounded closed convex sets of Ed, stated in exercise 7.5. It is a consequence
of this theorem that there is a supporting hyperplane passing through any point
of the boundary of a polytope P; thus every point of the boundary belongs to a supporting hyperplane and hence to a proper face of P. □
Theorem 7.1.2 A polytope has a finite number of faces. Faces of a polytope are
also polytopes.
Proof. Consider a polytope P, the convex hull conv(X) of a finite set of points
X. The theorem can be proved by showing that every proper face of P is the
convex hull of a subset of X. Indeed, let H be a supporting hyperplane of P and
let X' be the subset of the points of X that belong to H. We first show that
H ∩ P = conv(X'). That conv(X') ⊂ H ∩ P is immediate. To prove the converse, we show that any point of P that does not belong to conv(X') does not belong to H. Let H(Y) = 0 be an equation of H and assume that P is contained in the half-space H⁺ = {Y ∈ E^d : H(Y) ≥ 0}. For any point X' in X' or in conv(X'),
we have H(X') = 0, and for any point X in X \ X' or in conv(X \ X'), we have
H(X) > 0. Any point Y in P is a linear convex combination of points in X. If
Y does not belong to conv(X'), at least one of the coefficients of the points in
X \ X' in this combination is strictly positive, and thus H(Y) > 0. □
Figure 7.1. For the proof of theorem 7.1.3.
Proof. Let P be a polytope defined as the convex hull of a finite point set X.
By successively removing from X any point Xi that can be expressed as a linear
convex combination of the remaining points, we are left with a minimal subset X'
of X such that P = conv(X'). Let us now prove that any point of X' is a vertex of
P. Let Xi be a point of X'. Since X' is minimal, Xi does not belong to the convex
hull conv(X' \ {Xi}) of the other points, and the theorem stated in exercise 7.4
shows that there is a hyperplane Hi' that separates Xi from conv(X' \ {Xi}) (see
figure 7.1). The hyperplane Hi parallel to Hi' passing through Xi supports P and
contains only Xi among all points of X'. Now theorem 7.1.2 above shows that
H_i ∩ P = conv({X_i}) = {X_i}. □
The following two theorems are of central importance. They show that a poly-
tope might equivalently be defined as the bounded intersection of a set of closed
half-spaces.
Theorem 7.1.4 Any polytope is the intersection of a finite set of closed half-
spaces. More precisely, let P be a polytope, and {F_i : 1 ≤ i ≤ m} be the set of
its (d - 1)-faces, Hi the hyperplane that supports P along Fi, and Hi+ the closed
half-space bounded by Hi that contains P. Then:
P = ∩_{i=1}^{m} H_i⁺.
belong to P does not belong to the intersection ∩_{i=1}^{m} H_i⁺. Let X be a point
not in P, and Y a point in the interior of P but not in the hyperplane passing
through X and some d - 1 vertices of P. Such a point exists, since the interior
of P is of dimension d and cannot be contained in the union of a finite number
of hyperplanes of dimension d - 1. Segment XY intersects the boundary of P in
a point Z (see figure 7.2). This point necessarily belongs to a proper face of P
and, from the choice of Y, cannot belong to a face of P of dimension j < d - 1.
Thus Z belongs to one of the facets Fi of P. Then Z belongs to the hyperplane
H_i, Y to the half-space H_i⁺, and X to the opposite half-space H_i⁻. □
Proof. The proof goes by induction on the dimension d of the space. In dimen-
sion 1, the theorem is trivial. Let
Q = ∩_{i=1}^{m} H_i⁺.     (7.1)
From this we may conclude that Q ⊂ conv(V). And since Q is convex and contains V, the opposite inclusion is trivial. □
Remark. If the intersection Q is of dimension d and if its expression 7.1 is
minimal, that is, for any j = 1,.. ., m,
Q ≠ ∩_{i≠j} H_i⁺,
then each intersection H_j ∩ Q is a facet of Q and is therefore not empty. Indeed, the intersection ∩_{i≠j} H_i⁺ is neither empty, nor entirely contained in H_j⁻, because Q is not empty. But this intersection is not contained in H_j⁺ either, otherwise H_j⁺ could be removed from expression 7.1 without changing the intersection Q.
Theorem 7.1.1 shows that the boundary of a polytope is the union of its proper
faces, and the preceding remark shows that the union of the (d - 1)-faces gives
the boundary of a polytope. The following theorem shows more precisely that
any proper face of a polytope is entirely contained within a facet of the polytope.
contains all the vertices in V(P) \ V(F_j) and all the vertices in V(F_i) \ V(F_2). □
Proof. 1. Let {FI, F2 ,..., Fr} be a family of faces of the polytope P. Let F
be the intersection ni= Fi. If F is empty, F is trivially a face of P. Otherwise
we choose for the origin of Ed a point 0 in F. For i = 1, . .. , r, we let Hi be a
hyperplane that supports P along Fi, and Ni be the vector of Ed such that
H_i = {X ∈ E^d : X · N_i = 0},
and
P ⊆ H_i^+ = {X ∈ E^d : X · N_i ≥ 0}.
If N = Σ_{i=1}^{r} N_i, the hyperplane H defined by
H = {X ∈ E^d : X · N = 0}
supports P along F.
2. Let {FI, F2 , .. ., Fm} be the facets of polytope P. Let {H1 , H2 , . . ., Hm} be
the hyperplanes that support P along these facets. Let F be a (d - 2)-face of P.
From theorem 7.1.6, F is a (d - 2)-face of a facet Fj of P. From theorem 7.1.5,
facet Fj can be expressed as
F_j = H_j ∩ P = H_j ∩ (⋂_{k≠j} H_k^+).
The (d − 2)-face F is a facet of the (d − 1)-polytope F_j, so there is an index i ≠ j such that
F = (H_j ∩ H_i) ∩ (⋂_{k∉{i,j}} (H_j ∩ H_k^+)) = H_i ∩ H_j ∩ (⋂_{k∉{i,j}} H_k^+),
or equivalently
F = H_i ∩ H_j ∩ P = F_i ∩ F_j.
Using the second assertion in theorem 7.1.7, it is also easy to prove by induction
on j (from j = d − 1 down to j = 0) that any j-face (0 ≤ j ≤ d − 1) of a polytope
P is the intersection of all the (d - 1)-faces of P that contain it.
Let then j and k satisfy 0 ≤ j < k ≤ d − 2. Consider a j-face F of polytope
P. The intersection of all the k-faces of P that contain F is also a face of P that
contains F. To show that this face is precisely F, it suffices to show that F is
the intersection of some k-faces of P. From what was said above, F is a face of
a (k + 1)-face G of P, and thus F is the intersection of all the k-faces of G that
contain it. But k-faces of G are also k-faces of P and therefore F is indeed the
intersection of some k-faces of P. □
Two faces F and G of a polytope P are called incident if one is included in the
other, and if their respective dimensions differ by one. Two vertices of a polytope
are said to be adjacent if they are incident to some common edge. Two facets of a
polytope are said to be adjacent if they are incident to some common (d − 2)-face.
Let A be a point of E^d distinct from the origin O. The polar hyperplane of A is the hyperplane
A* = {X ∈ E^d : A · X = 1}.
Let H be a hyperplane that does not contain the origin O. The pole of H is the
point H* that satisfies
H* · X = 1,  ∀X ∈ H.
Lemma 7.1.8 The polarity of center 0 reverses the inclusion relationships be-
tween points and hyperplanes: a point A belongs to a hyperplane H if and only if
the pole H* of H belongs to the polar hyperplane A* of A.
Proof.
A ∈ H ⟺ A · H* = 1 ⟺ H* · A = 1 ⟺ H* ∈ A*. □
The hyperplane H bounds the two half-spaces
H^+ = {X ∈ E^d : X · H* ≤ 1}, which contains the origin, and
H^- = {X ∈ E^d : X · H* ≥ 1}.
Generally speaking, we call a duality any bijection that reverses inclusion rela-
tionships. The preceding relation shows that polarity centered at 0 is a duality,
and the polar hyperplane A* is often called the dual of A. Similarly, the pole H*
is often called the dual of hyperplane H.
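As a quick numerical illustration of lemma 7.1.8, the following sketch (using numpy; the explicit construction of a point A on H is ours, not the book's) checks that a point lies on a hyperplane exactly when the pole of that hyperplane lies on the point's polar hyperplane.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=3)                         # pole H* of the hyperplane H = {X : X . H* = 1}
t = rng.normal(size=3)
A = h / (h @ h) + t - (t @ h) / (h @ h) * h    # a point of H: A . H* = 1 by construction

assert np.isclose(A @ h, 1.0)                  # A belongs to H ...
assert np.isclose(h @ A, 1.0)                  # ... and H* belongs to A* = {X : A . X = 1}
```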
The notion of duality extends naturally to polytopes: a polytope Q is dual to
a polytope P if there is a bijection between the faces of P and the faces of Q that
reverses inclusion relationships.
The following theorems show that it is possible to define a polar image P# for
any polytope P whose interior contains the origin 0.
The polar transformation centered at O is closely linked to the polarity defined
above, but it associates points with half-spaces and not with hyperplanes. Let A
be a point of E^d. The polar image A# of A is the half-space A*^+ bounded by A*
that contains the origin:
A# = A*^+ = {X ∈ E^d : A · X ≤ 1}.
The polar image of a subset A of E^d is the intersection of the polar images of its points,
A# = {X ∈ E^d : A · X ≤ 1 for all A ∈ A}.
Note that this formula allows the definition to be extended to the case where A
contains the origin O.
The two following facts are immediate consequences of the above definition:
1. The polar image A# of a set A is convex.
2. If A and B are two sets such that A ⊆ B, then B# ⊆ A#.
In the rest of this subsection, P stands for a polytope of Ed whose interior
contains the origin 0, and P# denotes the polar image of P.
If the polytope P is the convex hull of n points {X_1, . . . , X_n}, then P# is the intersection of the n half-spaces X_i^{*+} bounded by the polar hyperplanes X_i^* of the points X_i:
P# = ⋂_{i=1}^{n} X_i^{*+}.
Indeed, the inclusion P# ⊆ ⋂_{i=1}^{n} X_i^{*+} is trivial. To prove the converse, we show
that every point that does not belong to P# does not belong to ⋂_{i=1}^{n} X_i^{*+}. Let
Y be a point that does not belong to P#. There is a point X that belongs to P
such that Y · X > 1. Since X is a convex linear combination of {X_1, . . . , X_n}, its
existence implies that Y · X_i > 1 for at least one of the points X_i, and thus Y
does not belong to X_i^{*+}.
The polar set P# of polytope P is the bounded intersection of a finite number
of half-spaces. It is thus a polytope, by theorem 7.1.4. □
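The formula P# = ⋂_{i=1}^{n} X_i^{*+} translates directly into a small computation. The sketch below (assuming scipy is available; the example polytope, a square, is ours) feeds the half-spaces X_i · x ≤ 1 to scipy.spatial.HalfspaceIntersection and recovers the vertices of the polar polytope, here the diamond with vertices (±1, 0) and (0, ±1).

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

# Vertices X_i of the square P = conv(X_1, ..., X_4), whose interior contains O.
X = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])

# Each vertex contributes the half-space X_i . x <= 1, written as A x + b <= 0
# in Qhull's convention, i.e. with A = X_i and b = -1.
halfspaces = np.hstack([X, -np.ones((len(X), 1))])
P_sharp = HalfspaceIntersection(halfspaces, interior_point=np.zeros(2))

print(np.round(P_sharp.intersections, 6))      # vertices of P#: (+-1, 0) and (0, +-1)
```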
Proof. For any point A of P, the closed half-space A*^+ contains the polytope
P#. Moreover, if A belongs to the boundary of P, it belongs to a proper face
F of P and there is a supporting hyperplane H of P that passes through A. The
pole H* of H is a point that belongs to both A* and P#. Thus A* ∩ P# is not
empty and A* is indeed a supporting hyperplane of P#. □
Theorem 7.1.13 There exists a bijection between the faces of P and those of
P# which reverses inclusion relationships. This bijection maps the k-faces
of P to the (d − 1 − k)-faces of P#, for all k = 0, . . . , d − 1.
Proof. With each face F of P, we associate the set
F* = ⋂_{X∈F} (P# ∩ X*),
and this correspondence satisfies
F** = F.
This property is proved below for the proper faces of P and P#. In order to
extend it to improper faces, we note that the images of P and P# are empty sets
because both P and P# have non-empty interiors. Therefore we can make the
convention that the d-dimensional face of P (resp. P#) corresponds to the empty
face of P# (resp. P).
Let now F be a proper face of P. Then
Finally, the following properties can be easily proved from the preceding ones.
4. If the origin 0 lies in the interior of two polytopes P1 and P2, then
By using the bijection between the faces of polytope P and its dual, we can
easily prove the following lemma which will be useful in establishing the Dehn-
Sommerville relations (theorem 7.2.2) satisfied by any simple polytope:
Lemma 7.1.14 For any 0 ≤ j ≤ k ≤ d − 1, any j-face of a simple polytope P is
a face of exactly \binom{d−j}{d−k} k-faces of P.
Theorem 7.2.1 (Euler's relation) The numbers of faces of a d-polytope P satisfy
Σ_{k=−1}^{d} (−1)^k n_k(P) = 0.
Proof. The proof we present here goes by induction on the dimension d of the
polytope. The base case is proved easily since, in one dimension, a polytope has
only two proper faces, namely its vertices, and thus satisfies Euler's relation.
We may assume that the vertices of P have pairwise distinct x_d-coordinates; consider
them sorted by increasing x_d-coordinate, and consider a family of 2n − 1 horizontal
hyperplanes H_1, . . . , H_{2n−1}, where the hyperplanes with odd indices pass through the
vertices of P and the hyperplanes with even indices separate two consecutive vertices.
For each face F of P and each hyperplane H_j in this family, we define a
signature: χ_j(F) = 1 if H_j intersects the relative interior of F, and χ_j(F) = 0
otherwise.
Consider a face F, and call P_l (resp. P_m) its vertex with minimal (resp. maximal)
x_d-coordinate. The horizontal hyperplanes intersecting the relative interior
of F lie strictly between the horizontal hyperplanes H_{2l−1} and H_{2m−1} that pass
through P_l and P_m respectively. If face F is of dimension k ≥ 1, then l and m are
distinct integers, and the number of hyperplanes with even indices that intersect
the relative interior of F is one more than the number of hyperplanes with odd
indices that intersect the relative interior of F, whence
1 = Σ_{j=2}^{2n−2} (−1)^j χ_j(F).
Σ_{k=0}^{d−2} (−1)^k n_k(P_j) = Σ_{k=1}^{d−1} (−1)^{k−1} n_{k−1}(P_j) = 1 − (−1)^{d−1}.    (7.3)
It now suffices to multiply relations 7.7 and 7.8 by (−1)^j and to sum over
j = 1, . . . , 2n − 1, by use of equation 7.2 and noticing that there are n − 1
even and n − 2 odd relations. Recalling that n is the number n_0(P) of vertices of P,
we may recognize in the resulting equation Euler's relation for polytope P. □
In the case of a 2-polytope, Euler's relation, written as
n_0(P) − n_1(P) = 0,
expresses the fact that a polygon has as many vertices as edges. In the case of a
3-polytope, the relation is a bit more interesting and can be written as
n_0(P) − n_1(P) + n_2(P) = 2.

Theorem 7.2.2 (Dehn–Sommerville relations) The numbers of faces of a simple d-polytope P satisfy
Σ_{j=0}^{k} (−1)^j \binom{d−j}{d−k} n_j(P) = n_k(P),     k = 0, . . . , d.

Proof. Euler's relation, applied to a k-face F of P, can be written as
Σ_{j=−1}^{k} (−1)^j n_j(F) = 0.
We sum these relations over the set F_k(P) of all k-faces of P. The sum
Σ_{F∈F_k(P)} n_j(F)
counts each j-face of P once for each k-face of P that contains it, so lemma 7.1.14 shows that
Σ_{F∈F_k(P)} n_j(F) = \binom{d−j}{d−k} n_j(P),     (0 ≤ j ≤ k ≤ d − 1).
Finally, we get the relation stated in the theorem. □
In the case of a simple 3-polytope, these are four relations: the first relation is trivial, the following two are equivalent, and the last is precisely Euler's relation. They can be compacted into two linearly independent
equations binding the numbers n_0, n_1, and n_2 of proper faces of a simple 3-polytope.
Fixing the number n = n_2 of facets, these relations may be expressed as
n_0 = 2n − 4,   n_1 = 3n − 6.
This proves that a simple 3-polytope with n facets has exactly 2n − 4 vertices and 3n − 6 edges.
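These relations are easy to check on concrete simple 3-polytopes; the sketch below verifies them for a cube and a dodecahedron (the face vectors are hard-coded).

```python
# Face vectors (n0, n1, n2) of two simple 3-polytopes.
face_vectors = {"cube": (8, 12, 6), "dodecahedron": (20, 30, 12)}

for name, (n0, n1, n2) in face_vectors.items():
    n = n2                                    # number of facets
    assert n0 - n1 + n2 == 2                  # Euler's relation in dimension 3
    assert n0 == 2 * n - 4 and n1 == 3 * n - 6
    print(name, ": Euler and the simple 3-polytope relations hold")
```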
The following subsection shows that the Dehn-Sommerville relations alone can
be used to derive an upper bound on the number of faces of any polytope as a
function of its number of vertices or facets.
Theorem 7.2.5 (Upper bound theorem) Any d-polytope with n facets (or n
vertices) has at most O(n^{⌊d/2⌋}) faces of all dimensions and O(n^{⌊d/2⌋}) pairs of
incident faces of all dimensions.
Proof. Consider first a simple d-polytope P. The Dehn–Sommerville relations (theorem 7.2.2)
yield d + 1 linear relations between the d numbers n_0, n_1, . . . , n_{d−1} of proper faces
of the polytope P. The first relation (obtained for k = 0) is trivial, and the others
are not all linearly independent. But one may prove easily that the odd relations
(those that correspond to odd values of k) are linearly independent. Indeed, the
coefficients of n_{2p+1} in the equations obtained for k = 2q + 1, with p and q ranging
from 0 to ⌊(d−1)/2⌋, form a triangular matrix. Thus the Dehn–Sommerville relations
form a system of rank at least
r = ⌊(d + 1)/2⌋.
In fact, it can be shown that there are exactly r linearly independent relations
among the Dehn-Sommerville relations (see exercise 7.7). Moreover, it can be
shown that the Dehn–Sommerville system can be solved for the variables n_j, j =
0, . . . , r − 1, yielding an expression for these variables as a linear combination of
the n_j's, j = r, . . . , d (see exercise 7.8). If the simple polytope has n facets, there
is, for j ≥ r, a trivial bound
n_j ≤ \binom{n}{d−j} = O(n^{d−j})
on its number of j-faces. Indeed, lemma 7.1.14 shows that a j-face of a simple d-polytope
is the intersection of d − j facets. We conclude that a simple d-polytope
with n facets has O(n^{⌊d/2⌋}) faces. In a simple polytope, each k-face (k < d) is
incident to d − k (k + 1)-faces; thus the number of pairs of incident faces is also
O(n^{⌊d/2⌋}). We therefore have proved the theorem for a simple polytope with n
facets.
A dual statement of the theorem also shows that the theorem is true for sim-
plicial d-polytopes with n vertices. To extend this result to arbitrary polytopes,
it suffices to show that simple and simplicial polytopes maximize the number of
faces and incidences between faces. The following perturbation argument shows
that the numbers of faces and incidences of faces of a non-simplicial d-polytope
are less than those of some simplicial d-polytope obtained by slightly perturb-
ing the vertices. Let P be a non-simplicial d-polytope, and n be the number of
its vertices. Each face of P is the convex hull of its vertices and may therefore
be triangulated, or in other words decomposed into a union of simplices whose
vertices are the vertices of that face. In a triangulation,1 each face F of P is ex-
pressed as the union of simplices whose relative interiors form a partition of F. A
simple scheme to triangulate a d-polytope and its faces is to proceed recursively,
or equivalently in a bottom-up fashion, as follows. Let F be a (k + 1)-face of P.
To triangulate F, we choose a vertex A of F, and consider the (k + 1)-simplices
conv(A, T), where T ranges over the k-simplices in the recursively obtained trian-
gulation of the k-faces of F which do not contain A. The number of faces of the
triangulation is at least the number of faces of P. Slightly perturbing the vertices
of P while keeping the union of the simplices in the triangulation convex (and
this can always be done, see exercise 7.10) yields a simplicial polytope P' whose
faces are in one-to-one correspondence with the simplices in the triangulation of
P. The numbers of faces and incidences of P' are thus strictly greater than their
counterparts for P.
In this subsection, we prove that the bound given in the upper bound theorem
(theorem 7.2.5) is optimal. For this, we introduce a particular class of polytopes,
and show that their numbers of faces and incidences achieve the bound given in
the upper bound theorem.
The moment curve is the curve M_d of E^d followed by a point M(τ) parameterized
by a real number τ:
M_d = {M(τ) = (τ, τ^2, . . . , τ^d) : τ ∈ ℝ}.
Lemma 7.2.6 Any subset of k ≤ d + 1 points on the moment curve is affinely
independent.
Proof. Consider d + 1 points {M_0, M_1, . . . , M_d} on the moment curve, corresponding
to the values {τ_0, τ_1, . . . , τ_d} of the parameter. The determinant whose rows are
formed by 1 followed by the coordinates of these points,
| 1  τ_0  τ_0^2  · · ·  τ_0^d |
| 1  τ_1  τ_1^2  · · ·  τ_1^d |
| ⋮                           |
| 1  τ_d  τ_d^2  · · ·  τ_d^d |  = ∏_{0≤i<j≤d} (τ_j − τ_i),
is the so-called Vandermonde determinant, and does not vanish when the τ_i's are
pairwise distinct. □
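The Vandermonde identity used in the proof can be checked numerically; the sketch below (numpy, with randomly chosen parameters) compares the determinant with the product of differences.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
tau = np.sort(rng.uniform(-1.0, 1.0, d + 1))          # d + 1 pairwise distinct parameters

# Rows (1, tau_i, tau_i^2, ..., tau_i^d) of the matrix appearing in the proof.
V = np.vander(tau, N=d + 1, increasing=True)

det = np.linalg.det(V)
prod = np.prod([tau[j] - tau[i] for i in range(d + 1) for j in range(i + 1, d + 1)])
assert np.isclose(det, prod)                           # Vandermonde determinant formula
```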
A consequence of this lemma is that any hyperplane in Ed intersects the moment
curve in at most d points.
A cyclic polytope in Ed is the convex hull of n > d + 1 points on the moment
curve. By the above lemma, a cyclic polytope is simplicial.
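As an illustration of the combinatorial richness of cyclic polytopes, the following sketch (scipy; the parameters τ = 1, …, 8 are an arbitrary choice of ours) builds a cyclic polytope in dimension 4 and checks that every pair of its vertices spans an edge, the 2-neighborliness behind the face counts discussed below.

```python
import numpy as np
from itertools import combinations
from scipy.spatial import ConvexHull

d, n = 4, 8
tau = np.arange(1.0, n + 1.0)
points = np.column_stack([tau ** k for k in range(1, d + 1)])   # points on the moment curve

hull = ConvexHull(points)                    # facets of the (simplicial) cyclic polytope
edges = {frozenset(pair) for facet in hull.simplices
         for pair in combinations(facet, 2)}

# In dimension 4 the cyclic polytope is 2-neighborly: all C(n, 2) pairs are edges.
assert len(edges) == n * (n - 1) // 2
print(len(hull.simplices), "facets and", len(edges), "edges")
```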
Let now P be a cyclic polytope in E^d, the convex hull of n points {M_1, M_2,
. . . , M_n} of M_d with respective parameters {τ_1, τ_2, . . . , τ_n}. Let I be a subset of the
set of indices {1, 2, . . . , n} of cardinality k ≤ d/2, and consider the polynomial
q(τ) = ∏_{i∈I} (τ − τ_i)^2.
Theorem 7.2.8 For any integer n > d + 1, there is a polytope in E^d that has n
facets and exactly \binom{n}{k} (d − k)-faces, for all k, 1 ≤ k ≤ d/2.
Sometimes it can help to represent the projective space P^d as the set of antipodal
(i.e. diametrically opposite) pairs of points on a sphere S_d in E^{d+1} centered
at the origin Q. The point X of P^d corresponding to the affine line L(X) in E^{d+1}
can be represented as the pair of points at the intersection of L(X) and S_d. In
this representation, k-subspaces of P^d are represented by great k-spheres of S_d,
which are intersections of S_d with affine (k + 1)-subspaces of E^{d+1} that contain
Q. The hyperplane at infinity H_∞ corresponds to the great (d − 1)-sphere of S_d
in a hyperplane parallel to E^d. The function induced by this representation maps
a point X in P^d not in H_∞ to the point X = L(X) ∩ E^d of E^d, and is commonly
referred to as the central projection (see figure 7.5).
Homogeneous coordinates
We must note at this point that the equivalence relation R used in the definition
of P^d is compatible neither with the affine structure nor with the vector-space
structure of E^{d+1}. Indeed, if X_1, X_2, Y_1, Y_2 are points in E^{d+1}, it may happen
that X_1 R X_2 and Y_1 R Y_2, yet (X_1 + Y_1) R (X_2 + Y_2) does not hold. As
a consequence, the projective space pd is neither an affine space, nor a vector
space.
Nevertheless, any basis of E^{d+1} can be used as a coordinate system for P^d:
we represent a point X as a (d + 1)-tuple (x_1, . . . , x_{d+1}) of reals, the coordinates
of some point of E^{d+1} on the line L(X). This (d + 1)-tuple (x_1, . . . , x_{d+1}) is
not uniquely defined, yet it is unique up to a non-null multiplicative factor, and
constitutes the homogeneous coordinates of X. Any projective hyperplane H
can be described as the set of projective points whose homogeneous coordinates
satisfy a linear equation
Σ_{i=1}^{d+1} h_i x_i = 0,
whose coefficients are unique up to a multiplicative factor.
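The invariance of homogeneous coordinates and of hyperplane equations under scaling is easy to see in code; in the sketch below (numpy; the specific point and hyperplane are ours) a point of E^3 is lifted to homogeneous coordinates and the hyperplane equation is evaluated on several representatives of the same projective point.

```python
import numpy as np

def to_homogeneous(x):
    """Affine point x of E^d  ->  homogeneous coordinates (x, 1)."""
    return np.append(x, 1.0)

def to_affine(X):
    """Homogeneous coordinates X (with X[-1] != 0)  ->  affine point of E^d."""
    return X[:-1] / X[-1]

h = np.array([1.0, 2.0, 0.0, -0.5])        # hyperplane  sum_i h_i x_i = 0  in P^3
x = np.array([0.5, 0.0, 7.0])              # affine point lying on that hyperplane
X = to_homogeneous(x)

for lam in (1.0, -3.0, 0.25):              # all representatives lam * X of the same
    assert np.isclose(h @ (lam * X), 0.0)  # projective point satisfy the equation
assert np.allclose(to_affine(-3.0 * X), x) # and project back to the same affine point
```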
Projective mappings
In a projective space, the hyperplane at infinity is like any other hyperplane and
plays no particular role. In general, the properties of a projective space pd are
invariant under any linear map X ↦ X T whose matrix T is non-singular.
Such a mapping is called a projective mapping. It transforms a k-dimensional
projective subspace into another projective subspace of the same dimension. The
hyperplane at infinity may be mapped onto any hyperplane of pd by a suitable
projective mapping.
Polarity, duality
Any hyperplane H in a projective space has a homogeneous equation of the kind
H = {X : Σ_{i=1}^{d+1} h_i x_i = 0}.
Consider the symmetric (d + 1) × (d + 1) matrix
S = | E_d   0 |
    |  0   −1 |,
where E_d stands for the d × d identity matrix, and let H* be the projective point
(h_1, . . . , h_d, −h_{d+1}). The homogeneous equation of H can be rewritten in matrix
form
H = {X : H* S X^t = 0}.
Point H* is the pole of hyperplane H. Conversely, to any projective point P with
homogeneous coordinates (p_1, . . . , p_{d+1}) there corresponds a polar hyperplane P*
with homogeneous equation
P* = {X : P S X^t = 0}.
This double correspondence between points and hyperplanes is called the polar-
ity centered at 0. It is exactly the extension to projective spaces of the polarity
centered at 0 described for Euclidean spaces in subsection 7.1.3. In a Euclidean
space, the polarity centered at 0 maps points other than 0 to hyperplanes that
do not pass through 0. In a projective space, the polarity centered at 0 maps
points to hyperplanes, in a one-to-one fashion without restrictions: the projective
point 0 (corresponding to the center 0) is mapped to the polar hyperplane at
infinity H_∞, and the pole of a hyperplane H that passes through O is the point
at infinity in the direction normal to the hyperplane H. In the projective space,
as in its Euclidean counterpart, the polarity centered at 0 is an involution, that
is,
P** = P and H** = H,
and reverses inclusion relationships, that is,
P ∈ H ⟺ H* ∈ P*.
Polarity is therefore a duality.
More generally, for any symmetric non-singular (d + 1) × (d + 1) matrix A_B,
we consider the mapping that maps a point P to the hyperplane P* satisfying
P* = {X : P A_B X^t = 0}, and a hyperplane H to the point H* satisfying
H = {X : H* A_B X^t = 0}. This mapping is an involution between points and
hyperplanes, therefore it is one-to-one, and it reverses inclusion relationships. The set
B of those projective points X that satisfy
X A_B X^t = 0
corresponds to a quadric in E^d, and the duality just defined is called the polarity
with respect to B. Using this terminology, the polarity centered at 0 is the polarity
with respect to the unit sphere S_{d−1} centered at O. The signature of the quadric
B is the set of signs of its eigenvalues. In fact, it can be shown that in a projective
space, two quadrics with the same signature or with opposite signatures can be
derived from one another by a projective mapping. The corresponding polarities
are called equivalent.
Besides the polarity centered at 0, one of the polarities most widely used in
computational geometry is that with respect to the unit paraboloid P_{d−1}, with
Cartesian equation in E^d
x_d = Σ_{i=1}^{d−1} x_i^2,
and homogeneous equation in P^d
X A_P X^t = 0,   with   A_P = | E_{d−1}    0      0   |
                              |    0       0    −1/2  |
                              |    0     −1/2     0   |.
The paraboloid P_{d−1} can therefore be derived from the unit sphere S_{d−1} by a
projective mapping sending the center O of S_{d−1} to infinity along the x_d-axis. The
polarity with respect to P_{d−1} is therefore projectively equivalent to the polarity
centered at O. For more details on this polarity, see exercises 7.13 and 7.14.
Motivation
About segments. Two distinct points P and Q of the projective space lie on a unique projective line, which they divide into two arcs. There is no way to identify one of these two projective arcs as being the
segment that joins P and Q.
Without segments, we certainly cannot define what it means for a set to be
convex, nor what the convex hull of a set of points is.
About half-spaces. Let us consider a hyperplane H in the projective
space P^d, with homogeneous equation H* S X^t = 0. If X does not belong to
H, the sign of the bilinear homogeneous form H* S X^t is arbitrary and without
significance, since the homogeneous coordinates are defined up to a multiplicative
factor (of either sign). It is therefore impossible to locate the point X on either
side of H. In fact, a projective hyperplane does not separate the space into two
disconnected half-spaces. In the spherical model, a hyperplane is represented by
a great (d − 1)-sphere of S_d. Each projective point P is represented as a pair
(P, −P) of two antipodal points on S_d, and each of these points belongs to a
different hemisphere determined by H.
Oriented projective geometry remedies this situation while keeping the advan-
tages of projective geometry.
Definition
For each vector V of a vector space, the set {λV : λ ∈ ℝ, λ > 0} is an oriented
vector line. An oriented projective space of dimension d consists of the oriented lines
of a vector space Vd+1 of dimension d + 1. A subspace of this space consists of
the oriented lines lying in a subspace of Vd+l.
More concretely, the oriented projective space pd that extends the affine space
Ed can be described in terms of the embedding of Ed in the space Ed+1. As before,
we let the origin Q be a point of Ed+1 not in the hyperplane that we consider as
Ed. The oriented projective space pd is the set of all rays cast from Q in Ed+l,
or equivalently the set of equivalence classes of the points in E^{d+1} \ {Q} for the
relation R_O defined by: X R_O X' if there exists λ > 0 such that X = λX'. Thus, a
point in the projective space corresponds to two points in the oriented projective
space, which are then called opposite points. In the spherical representation, the
oriented projective space amounts to distinguishing the two points in an antipodal
pair of Sd.
When a basis of Ed+1 is understood, a point in the oriented projective space
has a vector of homogeneous coordinates which is defined up to a positive mul-
tiplicative factor. In the rest of this chapter, we denote by P either a point in
the oriented projective space or its vector (p_1, p_2, . . . , p_{d+1}) of homogeneous
coordinates. An oriented projective hyperplane H is determined by d independent
points {A_0, A_1, . . . , A_{d−1}}:
H = {X ∈ P^d : [A_0, A_1, . . . , A_{d−1}, X] = 0},
where [A_0, A_1, . . . , A_{d−1}, X] is the determinant of the matrix whose coefficients
are the homogeneous coordinates of {A_0, A_1, . . . , A_{d−1}, X}. The coefficients
(h_1, . . . , h_{d+1}) in the homogeneous equation Σ_{i=1}^{d+1} h_i x_i = 0 defining H are thus
defined up to a positive multiplicative factor. It is possible to distinguish two
classes among the points in P^d \ H, and an oriented projective hyperplane
separates the space into two half-spaces
H^+ = {X ∈ P^d : [A_0, A_1, . . . , A_{d−1}, X] > 0},
H^- = {X ∈ P^d : [A_0, A_1, . . . , A_{d−1}, X] < 0}.
2
For instance, for any (k + 1)-tuple {Ao, A 1 ,..., Ak} of independent points in F, the vec-
tors generating the oriented vector lines of Ed+1 corresponding to {Ao,A 1 ,...,Ak} form a
coordinate system for F.
Duality
The notion of duality can be extended without problems to the oriented projective
space. The oriented projective point H* defined by the relation H = {X ∈ P^d : H* S X^t = 0}
is the pole of the oriented projective hyperplane H for the polarity centered at
O. Likewise, H is the polar hyperplane of H*. The pole of the hyperplane −H
with the opposite orientation is the point opposite to the pole of H. Polarity re-
verses the inclusion relationships between points and hyperplanes and, moreover,
reverses the relative positions of a point and a hyperplane, that is
P ∈ H   ⟺  H* S P^t = 0  ⟺  P S H*^t = 0  ⟺  H* ∈ P*,
P ∈ H^+ ⟺  H* S P^t < 0  ⟺  P S H*^t < 0  ⟺  H* ∈ P*^+,
P ∈ H^- ⟺  H* S P^t > 0  ⟺  P S H*^t > 0  ⟺  H* ∈ P*^-.
H_∞^+ = {X ∈ P^d : x_{d+1} > 0}
We may also define the notion of simplex in an oriented projective space. Let
{P_0, P_1, . . . , P_k} be a set of k + 1 independent points of the oriented projective space. This set of points
Proof. Let P and Q be two points of the oriented projective space, not opposite to one another. If P and
Q both belong to one of the half-spaces H_∞^+ or H_∞^-, then the segment PQ of P^d
projects onto the segment PQ in E^d. Otherwise, since P is not opposite to Q, there
is a hyperplane H such that P and Q lie on the same side of H. The projective
mapping sending H to H_∞ transforms the segment PQ into a segment P'Q' which
projects onto the segment P'Q' in E^d. □
Defined this way, convex sets include segments, simplices, and open half-spaces;
antipodal pairs of points, closed half-spaces, projective subspaces, and the entire
oriented projective space are quasi-convex.
The notions of quasi-convexity and convexity are invariant under projective
mapping. The intersection of quasi-convex sets is quasi-convex, and the intersec-
tion of convex sets is convex.
Projective polytopes
The quasi-convex hull of a set of points in Pd is the smallest quasi-convex set
that contains that set. The quasi-convex hull is not always convex, but it is
convex when the set of points is contained within an open half-space of Pd. In
this case, we may speak of the convex hull of the set of points. Quasi-convex and
convex hulls consist of all linear combinations of the points with non-negative
coefficients. A projective polytope is the convex hull of a finite set of points which
is entirely contained in an open half-space.
The notions of supporting hyperplanes and faces can be carried over to the pro-
jective setting without problems. The above correspondence therefore establishes
a one-to-one correspondence between the faces of the polytopes P and P", which
also allows us to transfer to projective polytopes the combinatorial properties of
Euclidean polytopes. All the theorems in sections 7.1 and 7.2 can therefore be
stated for projective polytopes. Here, we only give the projective statements of
theorems 7.1.4 and 7.1.5, which concern the polar transformations.
A set H of projective hyperplanes in P^d is in general position if, for any j ≤ d,
the intersection of any j of them is a projective subspace of dimension d − j,
and if moreover the intersection of any d + 1 of them is empty. The intersection
of m closed projective half-spaces ⋂_{j=1}^{m} H_j^+ is contained in an open projective
half-space if and only if there is a subset of d + 1 hyperplanes in general position
among the hyperplanes H_j bounding all the half-spaces: such an intersection is
called non-trivial. Theorems 7.1.4 and 7.1.5 can now be restated in a projective
setting:
In the oriented projective space, the polarity centered at O (or any other polarity,
for that matter) can be used to define an involutive one-to-one mapping
on the set of all projective polytopes, without any restrictions. The polar image
A# of a point A is the closed half-space A*^+ bounded by the polar hyperplane A* of A.
Let now
P = conv(P_1, . . . , P_n)
be a projective polytope. For each i = 1, . . . , n, we denote by P_i^* the polar
hyperplane of P_i and by P_i^{*+} = P_i^# the polar half-space of P_i. The polar
image of the polytope P is the intersection
P# = ⋂_{i=1}^{n} P_i^{*+}.
Unbounded polytopes
P = ⋂_{j=1}^{m} H_j^+
in E^d. On the other hand, if P intersects H_∞, it projects onto the union of two
convex unbounded subsets
(⋂_{j=1}^{m} H_j^+) ∪ (⋂_{j=1}^{m} H_j^-)
of E^d. Conversely, let
Q = ⋂_{j=1}^{m} H_j^+
7.4 Exercises
Exercise 7.1 (Radon's theorem) Show that any set X of at least d + 2 points of E^d
can be split into two subsets X_1 and X_2 such that conv(X_1) ∩ conv(X_2) ≠ ∅.
Hint: The points of X being affinely dependent, there are real coefficients λ_i, not all zero,
summing to zero, such that
Σ_i λ_i X_i = 0.
For X_1 (resp. X_2), choose the points X_i whose coefficients λ_i in the above relation are
positive (resp. negative or zero).
Exercise 7.2 (Helly's theorem) Let {K_1, . . . , K_r} be a family of r convex sets of E^d.
Show that if any d + 1 of these convex sets have a non-empty intersection, then so does the whole
family.
Hint: One possible proof goes by induction on r. By induction, we know that there is a
point X_i in the intersection ⋂_{j≠i} K_j, for all i = 1, . . . , r. Then use Radon's theorem on
the set {X_i : i = 1, . . . , r} to construct a point X that belongs to all the sets K_i.
Exercise 7.3 (Carathéodory's theorem) Show that the convex hull of a subset X of
E^d can be described as the set of all possible convex linear combinations of d + 1 points
of X. Use this to show that every polytope is a finite union of simplices.
Hint: Let conv(X) be the convex hull of a subset X of E^d and X a point in conv(X)
given by a minimal convex linear combination X = Σ_{i=1}^{r} λ_i X_i. If r > d + 1, the points
X_i are not affinely independent. Use this to show that we may remove one of the points from
the combination, and that it is therefore not minimal.
Hint: First show that for any point X not in K, there is a unique point D(X) of K nearest to X,
that is, a unique point D(X) of K such that, for every point X' of K,
d(X, D(X)) ≤ d(X, X').
Exercise 7.6 (Simplices) Show that simplices are the only polytopes which are both
simple and simplicial.
Exercise 7.8 (The upper bound theorem) Let P be a simplicial d-polytope and let
n_j = n_j(P) denote the number of j-faces of P. Show that the Dehn–Sommerville relations
on the numbers n_k can be solved for the numbers n_j, j = 0, . . . , ⌊(d−1)/2⌋, yielding those
numbers as linear combinations of the numbers n_j with j = ⌊(d+1)/2⌋, . . . , d.
Hint: Given integers r ≥ 1 and d ≥ 2r − 2, consider the r × r determinant D(r, d) formed
by the binomial coefficients that appear in the Dehn–Sommerville relations.
Exercise 7.9 (Euler's relation) Show that Euler's relation is the only non-trivial linear
relation satisfied by the numbers n_k(P) (0 ≤ k ≤ d − 1) of faces of any d-polytope, that is,
any relation of the form
Σ_{j=0}^{d−1} λ_j n_j(P) = λ_d
satisfied by all d-polytopes P is a multiple of Euler's relation.
Exercise 7.11 (Maximal polytope) Show that there exists a polytope with n vertices
on a sphere, or on a paraboloid, with maximal complexity Ω(n^{⌊d/2⌋}).
Hint: In the Euclidean space E^d, when d is even (d = 2p), we consider the curve M'_d on
the unit sphere, parameterized by
M'_d = {M(τ) = (1/√p) (sin τ, cos τ, sin 2τ, cos 2τ, . . . , sin pτ, cos pτ) : τ ∈ [0, π/2]}.
Show that the convex hull of n points on this curve, parameterized respectively by
{τ_1, τ_2, . . . , τ_n}, is a polytope whose faces are in one-to-one correspondence with those of
a cyclic polytope (introduced in subsection 7.2.4). Conclude that it is possible to build
a maximal polytope whose vertices lie on the unit sphere of E^d, with exactly \binom{n}{k}
(k − 1)-faces for any k, 1 ≤ k ≤ d/2.
By considering the projective mapping that sends the unit sphere of E^d onto the unit
paraboloid of E^d, with equation
x_d = Σ_{i=1}^{d−1} x_i^2,
show that it is possible to build a maximal polytope whose vertices are on the unit
paraboloid of Ed.
Exercise 7.12 (Upper bound theorem) This exercise presents a very simple proof
of the upper bound theorem. This proof considers a polytope, given as the intersection
of n half-spaces in E^d bounded by hyperplanes in general position, and shows that the
number of vertices of this polytope is O(n^{⌊d/2⌋}).
1. Show that any vertex of the polytope is the vertex that has the minimal or maximal
x_d-coordinate in a k-face, for some k ≥ ⌈d/2⌉. For this, we consider a vertex P of the
polytope. This vertex is incident to d edges, at least ⌈d/2⌉ of which are contained in the
half-space x_d ≥ x_d(P) or in the half-space x_d ≤ x_d(P).
2. Note that a face has a unique vertex with maximal x_d-coordinate, and a unique
vertex with minimal x_d-coordinate. Recall the bound of \binom{n}{d−k} on the number of
faces of dimension k of a polytope given by the intersection of n half-spaces in E^d and
conclude.
Exercise 7.13 (Polarity with respect to a paraboloid) Consider the polarity with
respect to the unit paraboloid P with homogeneous equation
X A_P X^t = 0,   with   A_P = | E_{d−1}    0      0   |
                              |    0       0    −1/2  |
                              |    0     −1/2     0   |,
where E_{d−1} is the (d − 1) × (d − 1) identity matrix. Show that the restriction of this
polarity to the Euclidean space maps a point P of E^d with coordinates (p_1, p_2, . . . , p_d)
to the hyperplane P* of E^d with equation
x_d = 2 Σ_{i=1}^{d−1} p_i x_i − p_d,
and satisfies:
P ∈ H   ⟺  H* ∈ P*,
P ∈ H^+ ⟺  H* ∈ P*^+,
P ∈ H^- ⟺  H* ∈ P*^-.
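For d = 2 the formula above maps a point (p_1, p_2) to the line x_2 = 2p_1x_1 − p_2, and the side-exchange property can be tested at random; the sketch below (the helper names are ours) does exactly that.

```python
import numpy as np

def polar_line(p):
    """Polarity with respect to the parabola x2 = x1**2 (case d = 2):
    the point (p1, p2) is mapped to the line x2 = 2*p1*x1 - p2,
    returned as the pair (a, b) of  x2 = a*x1 + b."""
    return 2.0 * p[0], -p[1]

def above(q, line):
    a, b = line
    return q[1] > a * q[0] + b               # q lies strictly above the line

rng = np.random.default_rng(2)
for _ in range(1000):
    p, q = rng.uniform(-5.0, 5.0, (2, 2))
    # Q lies above P*  if and only if  P lies above Q*.
    assert above(q, polar_line(p)) == above(p, polar_line(q))
```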
Exercise 7.14 (Lower convex hull) Let {P_1, P_2, . . . , P_n} be a set of points in E^d and
O' be a point on the x_d-axis, with x_d > 0 large enough that the facial structure
of conv(O', P_1, P_2, . . . , P_n) is stable as O' goes to infinity along the x_d-axis. We call
lower convex hull of {P_1, P_2, . . . , P_n}, and we denote by conv⁻(P_1, P_2, . . . , P_n), the set of
faces of conv(O', P_1, P_2, . . . , P_n) which do not contain O'. Using the oriented projective
space and the polarity with respect to the unit paraboloid P studied in exercise 7.13, show
that there is a one-to-one correspondence between the faces of conv⁻(P_1, P_2, . . . , P_n) and
those of the unbounded intersection ⋂_{i=1}^{n} P_i^{*+}, where the half-spaces P_i^{*+} are defined as
in exercise 7.13.
Exercise 7.15 (Euler's relation) Show that Euler's relation for an unbounded polytope
P of E^d can be expressed as
Σ_{k=0}^{d} (−1)^k n_k(P) = 0.
The Minkowski sum of two subsets A and B of E^d is the set A ⊕ B = {A + B : A ∈ A, B ∈ B}.
Incremental convex hulls

To compute the convex hull of a finite set of points is a classical problem in com-
putational geometry. In two dimensions, there are several algorithms that solve
this problem in an optimal way. In three dimensions, the problem is consider-
ably more difficult. As for the general case of any dimension, it was not until
1991 that a deterministic optimal algorithm was designed. In dimensions higher
than 3, the method most commonly used is the incremental method. The algo-
rithms described in this chapter are also incremental and work in any dimension.
Methods specific to two or three dimensions will be given in the next chapter.
Before presenting the algorithms, section 8.1 details the representation of poly-
topes as data structures. Section 8.2 shows a lower bound of Ω(n log n + n^{⌊d/2⌋})
for computing the convex hull of n points in d dimensions. The basic operation
used by an incremental algorithm is: given a polytope C and a point P, derive the
representation of the polytope conv(C ∪ {P}), assuming the representation of C
has already been computed. Section 8.3 studies the geometric part of this prob-
lem. Section 8.4 shows a deterministic algorithm to compute the convex hull of n
points in d dimensions. This algorithm requires preliminary knowledge of all the
points: it is an off-line algorithm. Its complexity is O(n log n + n^{⌊(d+1)/2⌋}), which
is optimal only in even dimensions. In section 8.5, the influence graph method
explained in section 5.3 is used to obtain a semi-dynamic algorithm which al-
lows the points to be inserted on-line. The randomized analysis of this algorithm
shows that its average complexity is optimal in all dimensions. Finally, section 8.6
shows how to adapt the augmented influence graph method of chapter 6 to yield
a fully dynamic algorithm for the convex hull problem, allowing points to be in-
serted or deleted on-line. The expected complexity of an insertion or deletion is
O(log n + n^{⌊d/2⌋−1}), which is optimal.
Throughout this chapter, we assume that the set of points whose convex hull is
to be computed is in general position. This means that any subset of k + 1 ≤ d + 1
points generates an affine subspace of dimension k. This hypothesis is not crucial
for the deterministic algorithm (see exercise 8.4), but it allows us to simplify the
description of the algorithm and to focus on the central ideas. It becomes an
essential assumption, however, for the randomized analyses of the on-line and
dynamic algorithms.
Proof. Subsection 7.2.4 shows that the convex hull of n points in the Euclidean
space E^d may have Ω(n^{⌊d/2⌋}) faces. In any dimension, Ω(n^{⌊d/2⌋}) is thus a trivial
lower bound for the complexity of computing convex hulls. In two dimensions, the
lower bound Ω(n log n) is a consequence of theorem 8.2.2 proved below. Finally,
any set of points in E^2 can be embedded into E^3, so the complexity of computing
convex hulls in E^3 cannot be smaller than in E^2. □
Proof. Consider n real numbers x_1, x_2, . . . , x_n which we want to sort. One way
to do this is to map the number x_i to the point A_i with coordinates (x_i, x_i^2) on
the parabola with equation y = x^2 (see figure 8.2). The convex hull of the set
of points {A_i : i = 1, . . . , n} is a cyclic 2-polytope, and the list of its vertices
is exactly the list of the vertices {A_i : i = 1, . . . , n} ordered according to their
increasing abscissae. □
Figure 8.2. Transforming a sorting problem into a convex hull problem in two dimensions.
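The reduction in the proof is easy to run; the sketch below (scipy; the input numbers are arbitrary) lifts the numbers onto the parabola, computes their convex hull, and reads the sorted sequence off the counterclockwise vertex order.

```python
import numpy as np
from scipy.spatial import ConvexHull

x = np.array([3.1, -2.0, 7.5, 0.4, 5.9, -4.2])     # numbers to sort
pts = np.column_stack([x, x ** 2])                  # lift onto the parabola y = x^2

hull = ConvexHull(pts)                              # every lifted point is a vertex
order = hull.vertices                               # counterclockwise for 2-D hulls
order = np.roll(order, -int(np.argmin(pts[order, 0])))  # start at the leftmost vertex

assert np.all(np.diff(x[order]) > 0)                # abscissae now increase
print(x[order])
```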
Suppose that point P and polytope C are in general position, meaning that P
and the vertices of C form a set of points in general position. The facets of C
can then be separated into two classes with respect to P. Let F be a facet of C,
H_F the hyperplane that supports C along F, and H_F^+ (resp. H_F^-) the half-space
bounded by H_F that contains (resp. does not contain) C. The facet F is red with
respect to P if it is visible from point P, that is, if P belongs to the half-space
H_F^-. It is colored blue if P belongs to H_F^+. From the general position assumption,
it follows that P never belongs to the supporting hyperplane H_F, and therefore
every facet of C is either red or blue with respect to P.
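The red/blue classification is a one-line test once the supporting hyperplanes are known; the sketch below (scipy; the function name and the cube example are ours, and Qhull triangulates the cube's facets, so several triangles share a color) colors each facet according to the side of its supporting hyperplane that contains P.

```python
import numpy as np
from scipy.spatial import ConvexHull

def facet_colors(points, P, eps=1e-9):
    """Color every Qhull facet of C = conv(points) red (visible from P) or blue.

    hull.equations stores each supporting hyperplane as [normal | offset] with
    normal . x + offset <= 0 inside C, so a facet is red exactly when
    normal . P + offset > 0, i.e. when P lies in the open half-space H_F^-."""
    hull = ConvexHull(points)
    values = hull.equations[:, :-1] @ P + hull.equations[:, -1]
    return ["red" if v > eps else "blue" for v in values]

cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
print(facet_colors(cube, np.array([2.0, 2.0, 0.5])))
```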
Using theorem 7.1.7, any face of C is the intersection of the facets of C which
contain it. The faces of C of dimension strictly smaller than d- 1 can be separated
into three categories with respect to P: a face of C is red if it is the intersection
of red facets only, blue if it is the intersection of blue facets only, or purple if it is
the intersection of red and blue facets.
Intuitively, the red faces are those that would be lit if a point source of light
was shining from P, the blue faces are those that would remain in the shadow,
and the purple faces would be lit by rays tangent to C. In figure 8.3, the blue
faces of C are shaded, the red edges are outlined in dashed lines, and the purple
edges are shown in bold.
Lemma 8.3.1 Let C be a polytope and P a point in general position with respect
to C. Every face of conv(C U {P}) is either a blue or purple face of C, or the
convex hull conv(G U {P}) of P and a purple face G of C.
Proof. Note that if P belongs to C, all the facets of C are blue with respect to
P (theorem 7.1.4) and the content of the lemma is trivial.
In the other case, we first show that a blue face of C is a face of
conv(C ∪ {P}). Let F be a facet of C that is blue with respect to P. Since
P belongs to the half-space H_F^+, the hyperplane H_F which supports C along F
also supports conv(C ∪ {P}) and conv(C ∪ {P}) ∩ H_F = F, which proves that
F is indeed a facet of conv(C ∪ {P}). Any blue facet of C is thus a facet of
conv(C ∪ {P}). Any blue face of C, being the intersection of blue facets of C, is
also the intersection of facets of conv(C ∪ {P}): therefore a blue face of C is also
a face of conv(C ∪ {P}) (theorem 7.1.7).
Next we show that, for any purple face G of C, G and conv(G ∪ {P}) are
faces of conv(C ∪ {P}). If G is a purple face of C, then there is at least one
red facet of C, say F_1, and one blue facet of C, say F_2, that both contain G
(see figure 8.4). Let H_1 (resp. H_2) be the hyperplane supporting C along F_1
(resp. F_2). Point P belongs to the half-space H_2^+ which contains C, and since
H_2 ∩ conv(C ∪ {P}) = F_2, the face G of F_2 is also a face of conv(C ∪ {P}). Point P
also belongs to the half-space H_1^- that does not contain C. Imagine a hyperplane
that rotates around H_1 ∩ H_2 while supporting C along G. There is a position
H for which this hyperplane passes through point P. Hyperplane H supports
conv(C ∪ {P}), and since conv(C ∪ {P}) ∩ H = conv(G ∪ {P}), we have proved
that conv(G ∪ {P}) is a face of conv(C ∪ {P}).
Finally, let us show that every face of conv(C ∪ {P}) is either a blue or a purple
face of C, or the convex hull conv(G ∪ {P}) of P and of a purple face G of C.
Indeed, a hyperplane that supports conv(C ∪ {P}) is also a supporting hyperplane
of C, unless it intersects conv(C ∪ {P}) only at point P. As a consequence, any
face of conv(C ∪ {P}) that does not contain P is a (blue or purple) face of C, and
any face of conv(C ∪ {P}) that contains P is of the form conv(G ∪ {P}) where G
is a purple face of C. Note that the vertex P of conv(C ∪ {P}) is also a face of
the form conv(G ∪ {P}), obtained when G is the empty face of C. Indeed, when
P does not belong to C, C necessarily has some facets that are blue and some
facets that are red with respect to P. The empty face, being the intersection of
all faces of C, is therefore purple. □
The following lemma, whose proof is straightforward, investigates the incidence
relationships between the faces of C and those of conv(C U {P}).
Lemma 8.3.2 Let C be a polytope and P a point in general position with respect
to C.
* If F and G are two incident faces of polytope C, either blue or purple with
respect to P, then F and G are incident faces of conv(C ∪ {P}).
* If G is a purple face of C, then G and conv(G ∪ {P}) are incident faces of
conv(C ∪ {P}).
* Finally, if F and G are incident purple faces of C, then conv(F ∪ {P}) and
conv(G ∪ {P}) are incident faces of conv(C ∪ {P}).
Recall that two facets of a polytope C are adjacent if they are incident to the
same (d - 2)-face and that the adjacency graph of a polytope stores a node for
each facet and an arc for each pair of adjacent facets.1 We say that a subset
of facets of a polytope C is connected if it induces a connected subgraph of the
adjacency graph of C.
Lemma 8.3.3 Consider a polytope C and a point P in general position. The set
of facets of C that are red with respect to P is connected, and the set of facets of
C that are blue with respect to P is also connected.
1 Two facets sharing a common k-face, k < d − 2, may not be adjacent, even though they
are connected as a topological subset of the boundary of the polytope. Such a situation is only
possible in dimension d > 3.
Figure 8.5. Isomorphism between the purple faces and the faces of a (d − 1)-polytope.
Proof. If P belongs to C, the set of the red facets is empty, any facet is blue,
and the lemma is trivial. We will therefore assume that P does not belong to C.
The connectedness of the set of red facets can be proved easily in two dimen-
sions. Indeed, the polytope conv(C U {P}) has two edges incident to P. By
lemma 8.3.1, there are exactly two purple vertices of C with respect to P. Hence,
the adjacency graph of the 2-polytope C is a cycle that has exactly two arcs
connecting a blue and a red facet.
Let us now discuss the case of dimension d, and suppose for a contradiction that
the set of facets of C that are red with respect to P is not connected. Therefore,
we may choose two points Q and R on two facets of C that belong to two distinct
connected components of the set of red facets of C. Let H be the affine 2-space
passing through points P, Q, and R. This plane intersects the polytope C along a
2-polytope C ∩ H. The edges of C ∩ H that are red with respect to P are exactly
the intersections of the red facets of C with H. The points Q and R belong to two
separate connected components of the set of red edges of C ∩ H. Connectedness
of the set of red faces of a 2-polytope would then not hold, a contradiction.
Analogous arguments prove the connectedness of the set of facets of
C that are blue with respect to P. □
Finally, the lemma below completely characterizes the subgraph of the inci-
dence graph induced on the faces of C that are purple with respect to P.
Lemma 8.3.4 Let C be a polytope and P a point in general position with respect
to C. If C has n vertices and does not contain P, then the set of the properfaces of
C that are purple with respect to P is isomorphic, for the incidence relationship,
to the set of faces of a (d - 1)-polytope whose number of vertices is at most n.
Proof. From lemma 8.3.1, we know that the faces of polytope C that are purple
with respect to P are in one-to-one correspondence with the faces of conv(C ∪ {P})
that contain P. Since point P does not belong to C, there must be
a hyperplane H which separates P from C (see exercise 7.4). Hyperplane H
intersects all the faces of conv(C ∪ {P}) that contain P except for the vertex P,
and those faces only. Moreover, the traces in H of the faces of conv(C ∪ {P}) are
the proper faces of the (d − 1)-polytope conv(C ∪ {P}) ∩ H, and the traces in H
of incident faces of conv(C ∪ {P}) are incident faces of conv(C ∪ {P}) ∩ H. Thus,
the incidence graph of the (d − 1)-polytope conv(C ∪ {P}) ∩ H is isomorphic to
the subgraph of the incidence graph of conv(C ∪ {P}) induced by the faces that
contain vertex P. Lemmas 8.3.1 and 8.3.2 show that this subgraph is isomorphic
to the subgraph of the incidence graph of C induced by the faces of C that are
purple with respect to P. Lastly, the vertices of the polytope conv(C ∪ {P}) ∩ H are
the traces in H of the edges of conv(C ∪ {P}) incident to vertex P, and their
number is at most n. □
1. Sort the points of A in lexicographic order; in what follows, A_i denotes the
i-th point in this order and conv(A_i) the convex hull of the first i points.
2. Initialize the convex hull to the simplex conv(A_{d+1}), the convex hull of the
first d + 1 points of A.
3. In the incremental step, the convex hull conv(A_i) is built knowing the
convex hull conv(A_{i−1}) and the point A_i to be inserted.
Phase 1. We first identify a facet of conv(Ai-1) that is red with respect to Ai.
Phase 2. The red facets and the red or purple (d - 2)-faces of conv(Ai-1) are
traversed. A separate list is set up for the red facets, the red (d - 2)-faces,
and the purple (d - 2)-faces.
Phase 3. Using the information gathered in phase 2, we identify all the other
red or purple faces of conv(A_{i−1}). For each dimension k, d − 3 ≥ k ≥ 0,
a list R_k of the red k-faces is computed, as well as a list P_k of the purple
k-faces.
Phase 4. The incidence graph is updated.
Before giving all the details for each phase, let us first describe precisely the
data structure that stores the incidence graph (a schematic sketch in code is given
after the list below). For each face F of dimension k
(0 ≤ k ≤ d − 1) of the convex hull, this data structure stores:
* the list of the sub-faces of F, which are the faces of dimension k - 1 incident
to F,
* the list of the super-faces of F, which are the faces of dimension k + 1
incident to F,
* the color of the face (red, blue, purple) in the current step, and
* a pointer p(F) whose use will very soon be clarified.
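A minimal sketch of such a face record, in Python (the field names are illustrative, not the book's), could be:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Face:
    vertices: tuple                      # indices of the vertices spanning the face
    dimension: int                       # k
    sub_faces: List["Face"] = field(default_factory=list)    # incident (k-1)-faces
    super_faces: List["Face"] = field(default_factory=list)  # incident (k+1)-faces
    color: Optional[str] = None          # "red", "blue" or "purple" in the current step
    p: Optional["Face"] = None           # the pointer p(F), set during phase 4

def link(sub: Face, sup: Face) -> None:
    """Record one incidence arc between a k-face and a (k+1)-face."""
    sub.super_faces.append(sup)
    sup.sub_faces.append(sub)
```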
Figure 8.6. One of the facets of conv(A_{i−1}) containing A_{i−1} must be red with respect to A_i.
Phase 2. A traversal of the adjacency graph2 of conv(A_{i−1}), starting with the initial red facet that was
found in phase 1, visits all the facets visible from A_i, which we color red, and
their (d − 2)-faces, which we color red if they are incident to two red facets, or
purple if they are incident to a blue facet. The traversal backtracks whenever the
facet encountered was already colored red, or if it is a blue facet.
Phase 3. We now know all the red and purple (d - 2)-faces, and the red facets.
In this phase, all the remaining red and purple faces are colored, and their lists
are set up in order of decreasing dimensions. Assume inductively that all the red
and purple faces of dimension k' ≥ k + 1 have already been identified and colored,
and that the lists R_{k'} and P_{k'} have already been set up. We process the k-faces in
the following way. Each sub-face of a face of P_{k+1} that has not yet been colored
is colored purple and added to the list P_k. Afterwards, each sub-face of a face of R_{k+1}
that has not yet been colored is added to the list R_k.
Phase 4. To update the incidence graph, we proceed as follows. All the red
faces are removed from the incidence graph, and so are all the arcs adjacent to
these faces in the graph. The purple faces are processed in order of increasing
dimension k. If F is a k-face purple with respect to P, a new node is created for
the (k + 1)-face conv(F U {Ai}) and linked by an arc to the node for F in the
incidence graph. Also the pointer p(F) is set to point to the new node created
for conv(F U {Ai}). It remains to link this node to all the incident k-faces of
the form conv(G U {Ai}), where G is a (k - 1)-face incident to F. For each sub-
face G of F, its pointer p(G) gives a direct access to the node corresponding to
conv(G U {Ai}), and the incidence arc can be created.
2
The adjacency graph is already stored in the incidence graph, and need not be stored
separately (see subsection 8.1).
Phase 1 of each incremental step can be carried out in time proportional to the
number of facets created at the previous step. The total cost of phase 1 over all
the incremental steps is thus dominated by the total number of facets created.
At step i that sees the insertion of Ai, the cost of phase 2 is proportional to the
number of nodes visited during the traversal of the adjacency graph. The nodes
visited correspond to red facets of conv(Ai-1), and to the blue facets adjacent to
these red facets. The total cost of this phase is thus at most proportional to the
number of red facets of conv(A_{i−1}) and of their incidences.
The cost of phase 3 is bounded by (a constant factor times) the number of
arcs in the incidence graph that are visited, and this number is the same as the
number of incidences between red or purple faces of conv(Ai-1).
Lastly, the cost of phase 4 is proportional to the total number of red faces and
of their incidences, plus the number of purple faces and of their incidences to
purple faces.
In short, when incrementally adding a point to the convex hull, the cost of
phases 2, 3, and 4 is proportional to the number of red or purple faces, plus the
number of faces incident to a red face, plus the number of incident purple faces.
Red faces and their incidences correspond to the nodes and arcs of the incidence
graph that are removed from the graph. The purple faces and the incidences
between two purple faces correspond to nodes and arcs of the incidence graph that
are added to the graph. The total cost of phases 2, 3, and 4 is thus proportional
to the number of changes undergone by the incidence graph. Since a node or arc
that is removed will not be inserted again (red faces will remain inside the convex
hull for the rest of the algorithm), this total number of changes is proportional to
the number of arcs and nodes of the incidence graph that are created throughout
the execution of the algorithm, which also takes care of the cost of phase 1. The
following lemma bounds this number.
Lemma 8.4.1 The number of faces and incidences created during the execution
of an incremental algorithm building the convex hull of n points in d dimensions
is O(n^{⌊(d+1)/2⌋}).
Proof. Lemma 8.3.1 shows that the subgraph of the incidence graph of conv(A_i)
induced by the faces created upon the insertion of A_i is isomorphic to the set of
faces of conv(A_{i−1}) that are purple with respect to A_i. The number of incidences
between a new face and a purple face of conv(A_{i−1}) is also proportional to the
number of purple faces of conv(A_{i−1}). Finally, lemma 8.3.4 shows that the set of
purple faces of conv(A_{i−1}) is isomorphic to a (d − 1)-polytope that has at most
i − 1 vertices. The upper bound theorem 7.2.5 shows that the number of these
faces, and of incidences between these faces, is O(i^{⌊(d−1)/2⌋}). This is thus a bound on
the number of faces and incidences created upon inserting A_i. Summing over all
i, i = 1, . . . , n, the total number of faces and incidences created by the algorithm
is:
Σ_{i=1}^{n} O(i^{⌊(d−1)/2⌋}) = O(n^{⌊(d+1)/2⌋}). □
Theorem 8.4.2 The incremental algorithm builds the convex hull of n points in
d dimensions in time O(n log n + n^{⌊(d+1)/2⌋}) and storage O(n^{⌊d/2⌋}).
This algorithm is optimal in the worst case when the dimension of the space is
even.
A region is determined by a (d + 1)-tuple {P_0, P_1, . . . , P_d} of points: let H_d be the
hyperplane containing {P_0, . . . , P_{d−1}} and let H_d^- be the half-space
bounded by H_d that does not contain P_d. Similarly, let H_0 be the hyperplane
containing {P_1, . . . , P_{d−1}, P_d} and let H_0^- be the half-space bounded by H_0 that
does not contain P_0. The region determined by the (d + 1)-tuple is the union
of the two open half-spaces H_d^- and H_0^-. A point conflicts with a region if it
belongs to at least one of the two open half-spaces that make up the region. In
this case, the influence domain of a region is simply the region itself.
With this definition of regions and conflicts, the convex hull of a set S of
n points in general position can be described as the set of regions defined and
without conflict over S. In fact, the regions defined and without conflict over S are
in bijection with the (d − 2)-faces of conv(S). Indeed, let a region be determined by
the (d + 1)-tuple {P_0, P_1, . . . , P_{d−1}, P_d} of points in S. Because the points in S are
assumed to be in general position, if this region is without conflict over S, the two
(d − 1)-simplices F_d = conv({P_0, P_1, . . . , P_{d−1}}) and F_0 = conv({P_1, . . . , P_{d−1}, P_d})
are facets of conv(S), and the (d − 2)-simplex G = F_0 ∩ F_d = conv({P_1, . . . , P_{d−1}})
is the (d − 2)-face of conv(S) that is incident to both these facets. This region
will be denoted below by (F_0, F_d) or sometimes by (F_d, F_0). The set of regions
defined and without conflict over a set S therefore not only gives the facets of
conv(S), but also their adjacency graph. Using this information, it is an easy
exercise to build the complete incidence graph of conv(S) in time proportional
to the number of faces of all dimensions of conv(S) (see exercise 8.2).3
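A conflict test for such a region reduces to two orientation tests; a minimal sketch (numpy; the helper names are ours, and no degeneracy handling is attempted) is given below.

```python
import numpy as np

def side(hyperplane_points, x):
    """Sign of the orientation determinant of d points spanning a hyperplane and x:
    points strictly on the same side of the hyperplane get the same non-zero sign."""
    rows = np.vstack([hyperplane_points, x])
    return np.sign(np.linalg.det(np.hstack([rows, np.ones((len(rows), 1))])))

def conflicts(region, x):
    """Does x conflict with the region determined by the (d+1)-tuple `region`?
    x conflicts if it lies strictly on the side of aff(P_0 .. P_{d-1}) opposite
    to P_d, or strictly on the side of aff(P_1 .. P_d) opposite to P_0."""
    pts = np.asarray(region, dtype=float)
    P0, Pd = pts[0], pts[-1]
    Hd, H0 = pts[:-1], pts[1:]
    s1, r1 = side(Hd, x), side(Hd, Pd)
    s2, r2 = side(H0, x), side(H0, P0)
    return (r1 != 0 and s1 == -r1) or (r2 != 0 and s2 == -r2)

# d = 2: the region of the tuple (P0, P1, P2) is conflicted by points beyond either edge.
triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(conflicts(triangle, np.array([2.0, 2.0])), conflicts(triangle, np.array([0.2, 0.2])))
```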
The algorithm
The algorithm is incremental, and in fact closely resembles that which is described
in section 8.4. The convex hull conv(S) of the current set S is represented by its
incidence graph. At each step, a new point P is inserted. The faces of conv(S)
can be sorted into three categories according to their color with respect to P,
as explained in section 8.3: red faces, blue faces, and purple faces. The on-line
algorithm, like the incremental algorithm, identifies the faces that are red and
purple with respect to P, then updates the incidence graph. The main difference
resides in the order with which the points are inserted. The on-line algorithm
processes the points in the order given by the input, and therefore cannot take
advantage of the lexicographic order to detect the red facets. For this reason, the
algorithm maintains an influence graph. As we may recall, the influence graph
3
It would certainly be more natural to define a region as an open half-space determined by
d affinely independent points. In this case the region is one of the half-spaces bounded by the
hyperplane generated by these d affinely independent points, and a point conflicts with such a
region if it lies in this half-space. With these definitions, the facets of the convex hull conv(S)
of a set S of n points in Ed are in bijection with the regions defined and without conflict over S.
In fact, such a definition of regions is perfectly acceptable and so is an incremental algorithm
based on these definitions (see exercise 8.5). Such an algorithm, however, does not satisfy the
update conditions 5.2.1 and 5.3.3, and its analysis calls for the notion of biregion introduced in
exercise 5.7.
is used mainly to detect the conflicts between the point to be inserted and the
regions defined and without conflict over the points inserted so far. The influence
graph is an oriented acyclic graph that has a node for each region that, at some
previous step in the algorithm, appeared as a region defined and without conflict
over the current subset of points. At each step of the algorithm, the regions
defined and without conflict over the current subset correspond to the leaves
of the influence graph. The arcs in this graph link these nodes such that the
following inclusion property is always satisfied: the influence domain of a node is
always contained in the union of the influence domains of its parents. 4 A depth-
first traversal of the influence graph can detect all the conflicts between the new
point P and the nodes in the graph. With a knowledge of the conflicts between
points P and the regions defined and without conflict over S, it is easy to find
the facets of conv(S) that are red with respect to P. Indeed:
* A region defined and without conflict over S that conflicts with P corre-
sponds to a red or purple (d - 2)-face of conv(S), since it is incident to two
(d - 1)-faces of conv(S), at least one of which is red (see figure 8.7).
* A region defined and without conflict over S that does not conflict with P
corresponds to a (d - 2)-face of conv(S) that is blue with respect to P.
In an initial step, the algorithm processes the first d+ 1 points that are inserted
into the convex hull. The incidence graph is set to that of the d-simplex formed
4Recall also that we frequently identify a node in the influence graph with the region that it
corresponds to, which for instance lets us speak of conflicts with a node, of the influence domain
of a node, or of the children of a region.
by these points, and the influence graph is initialized by creating a node for each
of the regions that correspond to the (d - 2)-faces of this simplex.
To describe the current step, we denote by S the current set of points already
inserted, and by P the new point that is being inserted. The current step consists
of a location phase and an update phase.
Locating. The location phase aims at detecting the regions killed by the new
point P. These are the regions defined and without conflict over S that conflict
with P. For this, the algorithm recursively visits all the nodes that conflict with
P, starting from the root.
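A minimal sketch of this location step (Python; the node layout and the conflicts predicate are assumptions of ours, not the book's structures) could be:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    region: object                        # the region (F, F') attached to the node
    children: List["Node"] = field(default_factory=list)
    is_leaf: bool = True                  # leaves are the regions currently without conflict

def locate(roots: List[Node], P, conflicts: Callable) -> List[Node]:
    """Visit every node whose region conflicts with P (the inclusion property lets us
    prune at non-conflicting nodes) and return the conflicting leaves, i.e. the
    regions defined and without conflict over S that P kills."""
    killed, seen, stack = [], set(), list(roots)
    while stack:
        node = stack.pop()
        if id(node) in seen or not conflicts(node.region, P):
            continue
        seen.add(id(node))
        if node.is_leaf:
            killed.append(node)
        stack.extend(node.children)
    return killed
```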
Updating. If none of the regions defined and without conflict over S is found
to conflict with P, then P must lie inside the convex hull conv(S), and there
is nothing to update: the algorithm may proceed to the next insertion. If a
region corresponding to a (d - 2)-face of conv(S) is found to conflict with P,
however, then at least one of the two incident (d - 1)-faces is red with respect to
P. Starting from this red face, the incidence graph of conv(S) can be updated
into that of conv(S U {P}) by executing phases 2, 3, and 4 of the incremental
algorithm described above in section 8.4.
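The current step can be condensed into the following sketch (ours, not the book's pseudocode); the attributes region and children and the routine update_hull_and_influence_graph are hypothetical names.

    def locate_conflicts(root, P):
        """Depth-first search restricted to the nodes in conflict with P;
        by the inclusion property, every node in conflict is reachable
        from the root through conflicting ancestors."""
        killed, visited, stack = [], set(), [root]
        while stack:
            node = stack.pop()
            if id(node) in visited or not node.region.conflicts(P):
                continue
            visited.add(id(node))
            if not node.children:      # a region defined and without
                killed.append(node)    # conflict over S, killed by P
            stack.extend(node.children)
        return killed

    def insert_point(root, hull, P):
        killed = locate_conflicts(root, P)
        if not killed:                 # P lies inside conv(S): nothing to do
            return
        # phases 2, 3 and 4 of the incremental algorithm of section 8.4,
        # followed by the hooking of the new influence-graph nodes
        update_hull_and_influence_graph(hull, killed, P)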
It remains to show how to update the influence graph. Let us recall that
the nodes of the influence graph are in bijection with the (d - 2)-faces of the
successive convex hulls, and that the corresponding regions are determined by a
pair of adjacent facets, or also by the d + 1 vertices that belong to these facets.
To update the influence graph, the algorithm considers in turn each of the purple
(d - 2)-faces of conv(S), and each of the (d - 3)-faces incident to these faces.
1. Consider a (d-2)-face G1 of conv(S) that is purple with respect to P, and
let (F1, F1') be the corresponding region; F1 and F1' are two (d-1)-faces of conv(S)
that are incident to G1. We may assume that F1 is blue with respect to P and F1' is
red (see figure 8.8). The face G1 is a (d-2)-face of conv(S U {P}) that corresponds
to the new region (F1, F1''), where F1'' is the convex hull conv(G1 U {P}). A new
node of the influence graph is created for region (F1, F1''), and this node is hooked
into the influence graph as the child of (F1, F1'). In this way, the inclusion property
is satisfied. Indeed, let H1 and H1' be the hyperplanes supporting conv(S) along
F1 and F1', respectively. The hyperplane H1'' supporting conv(S U {P}) along
F1'' is also a hyperplane supporting conv(S) along G1. As a consequence, the
half-space bounded by H1'' that does not contain conv(S U {P}) is contained in the union
of the half-spaces bounded by H1 and H1' that do not contain conv(S). The influence
domain of region (F1, F1'') is therefore contained within that of (F1, F1').
2. Let K be a (d-3)-face of conv(S), purple with respect to P, and let G1 and
G2 be the purple (d-2)-faces of conv(S) that are incident to K.5 Let (F1, F1')
and (F2, F2') be the two regions corresponding to G1 and G2, the faces F1 and F2
5 The set of purple faces of conv(S) being isomorphic to a (d-1)-polytope (lemma 8.3.4),
any purple (d-3)-face of conv(S) is incident to exactly two purple (d-2)-faces (theorem 7.1.7).
Figure 8.8. On-line convex hull: new regions when inserting a point P.
being blue with respect to P while the faces F1' and F2' are red (see figure 8.8). The
convex hull conv(K U {P}) is a (d-2)-face of conv(S U {P}), and is incident
to the (d-1)-faces F1'' = conv(G1 U {P}) and F2'' = conv(G2 U {P}). In the
influence graph, a new node is created for the region (F1'', F2''), and hooked into
the graph to two parents, which are the nodes corresponding to regions (F1, F1')
and (F2, F2'). Let us verify that the inclusion property is satisfied. Indeed, the
influence domain of (F1'', F2'') is the union H1''- U H2''-, where H1''- (resp. H2''-) is
the half-space bounded by the hyperplane H1'' (resp. H2'') that supports conv(S U {P})
along F1'' (resp. F2'') and does not contain conv(S U {P}). The half-space H1''-
is contained in the influence domain of region (F1, F1'), and similarly H2''-
is contained in the influence domain of (F2, F2'). Consequently, the influence
domain of (F1'', F2'') is contained in the union of the influence domains of (F1, F1')
and (F2, F2').
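The two hooking rules can be summarized by the following sketch (ours); Node, make_region and cone, which stands for conv(G U {P}), are hypothetical helpers.

    class Node:
        """Influence-graph node for a region (hypothetical layout)."""
        def __init__(self, region, parents):
            self.region, self.children = region, []
            for parent in parents:
                parent.children.append(self)   # preserves the inclusion property

    def hook_new_nodes(purple_d2_faces, purple_d3_faces, P, make_region, cone):
        # one new node per purple (d-2)-face G1, hooked below the node of G1
        for G1 in purple_d2_faces:
            F1, F1_red = G1.facets             # F1 blue, F1_red red w.r.t. P
            G1.new_node = Node(make_region(F1, cone(G1, P)), [G1.node])
        # one new node per purple (d-3)-face K, hooked below the nodes of the
        # two purple (d-2)-faces G1 and G2 incident to K
        for K in purple_d3_faces:
            G1, G2 = K.purple_faces
            Node(make_region(cone(G1, P), cone(G2, P)), [G1.node, G2.node])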
This description can be carried over almost verbatim to the case of dimension 2.
We need only remember that the polytope conv(S) has an empty face of dimen-
sion -1, incident to all of its vertices. If P is not contained within conv(S), the
empty face is purple and incident to the two purple vertices of conv(S) (see also
figure 8.9).
In this randomized analysis, we assume that the points are inserted in a random
order. The performance of the algorithm is then estimated on average,
assuming that all n! permutations are equally likely.
To apply the results in chapter 5, we must verify that the algorithm satisfies
the update condition 5.3.3 for algorithms that use an influence graph.
1. Testing conflict between a point and a region boils down to testing whether
the point belongs to either of two half-spaces, and can be performed in constant time.
3. The parents of a region created by a point P are recruited among the regions
killed by P. From the analysis of phases 2, 3, and 4 of the incremental step
in section 8.4, we can deduce that updating the incidence graph takes time
proportional to the total number of red and purple faces of conv(S) and
of their incidences. If every (d - 2)-face of the convex hull is linked by a
bidirectional pointer with the corresponding node in the influence graph,
it is easy to see that updating the influence graph takes about the same
time as updating the incidence graph. The set of points being in general
position, the facets of conv(S) are simplices; thus the number of red or
purple faces and of their incidences is proportional to the number of red
facets of conv(S). Each of these red facets is incident to d - 1 red or purple
Since the update conditions are satisfied, the randomized analysis of the on-line
convex hull computation can now be established readily by theorem 5.3.4, which
analyzes algorithms that use an influence graph. The number of regions defined
and without conflict over a set S of n points in a d-dimensional space is exactly the
number of (d-2)-faces of the convex hull conv(S), which is O(n^⌊d/2⌋) according
to the upper bound theorem 7.2.5.
Theorem 8.5.1 An on-line algorithm that uses the influence graph method
to build the convex hull of n points in d dimensions requires expected time
O(n log n + n^⌊d/2⌋), and storage O(n^⌊d/2⌋). The expected time required to per-
form the n-th insertion is O(log n + n^(⌊d/2⌋-1)).
8.6 Dynamic convex hulls

After each deletion, the structure is rebuilt into the exact state it would have been in,
had the deleted point never been inserted. Consequently, the augmented influence
graph only depends on the sequence E = {P1, P2, ..., Pn} of points in the current
set, sorted in chronological order: Pi occurs before Pj if the last insertion of Pi
occurred before the last insertion of Pj.
Let us denote by Ia(E) the augmented influence graph obtained for the chrono-
logical sequence E. The nodes and arcs of Ia(E) are exactly the same as those of
the influence graph built by the incremental algorithm of the preceding section,
when the objects are inserted in the order given by E. We denote by S_l the
subset of S formed by the first l objects in E. The nodes of Ia(E) correspond
to the regions defined and without conflict over the subsets S_l, for l = 1, ..., n.
The arcs of Ia(E) ensure both inclusion properties: that the domain of influence
of a node is contained in the union of the domains of influence of its parents, and
that a determinant of this node is either the creator of this node or is contained
in the union of the sets of determinants of its parents. Moreover, the augmented
influence graph contains a conflict graph between the regions that correspond
to nodes in the influence graph, and the objects in S. This conflict graph is
implemented by a system of interconnected lists such as that described in sec-
tion 6.2: each node of the conflict graph has a list (sorted in chronological order)
of the objects that conflict with the corresponding region; also, for each object
we maintain a list of pointers to the nodes in the influence graph that conflict
with that object. The record corresponding to an object in the conflict list of a
node is interconnected with the record corresponding to that node in the conflict
list of the object.
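The interconnection can be sketched as follows (our sketch; in an actual implementation the two lists would be doubly linked so that a record can be removed from both in constant time).

    class ConflictRecord:
        """One conflict between an object and a region: the same record is
        appended to the conflict list of the node (kept in chronological
        order) and to the list of the object."""
        def __init__(self, node, obj):
            self.node, self.obj = node, obj
            node.conflicts.append(self)
            obj.conflicts.append(self)

        def remove(self):
            # with doubly linked lists both removals would take constant time
            self.node.conflicts.remove(self)
            self.obj.conflicts.remove(self)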
Insertion
Inserting the n-th point into the convex hull is carried out exactly as in the on-line
algorithm described in section 8.5, except that while we are locating the object in
the influence graph, each detected conflict is added to the interconnected conflict
lists.
Deletion
Let us now consider the deletion of a point Pk. For l = k, ..., n, we denote by S'_l
the subset S_l \ {Pk} of S, and by E' the chronological sequence {P1, ..., Pk-1,
Pk+1, ..., Pn}. When deleting Pk, the algorithm rebuilds the augmented influence
graph, resulting in Ia(E'). For this, we must:
1. remove from the graph Ia(E) the destroyed nodes, which correspond to
regions having Pk as a determinant,6
6 Recall that an object is a determinant of a region if it belongs to the set of objects that
determine this region.
2. create a new node for each region defined and without conflict over one of
the subsets S'_l, l = k + 1, ..., n, that conflicts with Pk,
3. set up the new arcs that are incident to the new nodes. The new nodes must
be hooked to their parents, which may or may not be new. The unhooked
nodes, which are nodes of Ia(E) that are not destroyed but have destroyed
parents, must be rehooked.
A point P_l is the creator of some new or unhooked node if and only if there exists a region defined
and without conflict over S'_{l-1} which conflicts with both P_l and Pk (lemma 6.2.1).
When processing P_l, we call a region critical if it is defined and without conflict
over S'_{l-1} but conflicts with Pk. The critical zone is the set of all critical regions.
The critical zone evolves as we consider the objects P_l in turn. At the beginning
of the rebuilding phase, the critical regions are the regions of Ia(E) that are
killed by Pk. Subsequently, the critical regions are either regions of Ia(E) that
are killed by Pk, or new regions of Ia(E'). At each substep in the rebuilding
phase, the next point to be processed is the point of smallest rank among all the
points that conflict with one or more of the currently critical regions. To find
this point, the algorithm maintains a priority queue Q of the points in E' that
are the killers of critical regions. Each point P_l in Q also stores the list of the
current critical regions that it kills. The priority queue Q is initialized with the
killers in E' of the regions in Ia(E) that were killed by Pk.
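The management of the priority queue Q can be sketched as follows (ours); rank, killer and reinsert are assumptions standing for the chronological rank in E', the killer of a region, and a routine that reinserts a point and returns the new critical regions it creates.

    import heapq

    def rebuild(initial_critical, rank, reinsert):
        queue, queued, kills = [], set(), {}

        def enqueue(region):
            p = region.killer                  # killer of the region in E'
            if p is None:
                return                         # the region is never killed
            kills.setdefault(p, []).append(region)
            if p not in queued:
                heapq.heappush(queue, (rank[p], id(p), p))
                queued.add(p)

        for region in initial_critical:        # regions of Ia(E) killed by Pk
            enqueue(region)
        while queue:
            _, _, p = heapq.heappop(queue)
            queued.discard(p)
            for region in reinsert(p, kills.pop(p, [])):
                enqueue(region)                # newly created critical regions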
At each substep in the rebuilding phase, the algorithm extracts the point P_l of
smallest rank in Q, and this point is then reinserted into the data structure. To
reinsert a point means to create new nodes for the new regions created by P_l, to
hook them to the influence graph, and to rehook the unhooked nodes created by
P_l. The (d-2)-faces of conv(S'_{l-1}) that are red or purple with respect to the point
Pk that is removed correspond to critical regions and are, below, called critical
faces. Unless explicitly stated, the colors blue, red, and purple are now given with
respect to the point P_l that is being reinserted. The regions that are unhooked
or new and created by P_l can be derived from the critical purple (d-2)-faces
and their (d-3)-subfaces, which will be considered in turn by the algorithm.
Along with the point P_l, we know the list of critical regions with which it conflicts.
These regions correspond to the critical red or purple (d-2)-faces, and a linear
traversal of this list allows the sublist of critical purple (d-2)-faces to be
extracted.
Let G be a critical purple (d-2)-face, and (F, F') be the corresponding region;
F and F' are (d-1)-faces of conv(S'_{l-1}), both incident to G, and we may assume
that F is blue with respect to P_l while F' is red (see figure 8.11 in dimension 3
and figure 8.12 in dimension 2).
In the convex hull conv(S'_l), G is a (d-2)-face that corresponds to (F, F''), a
region defined and without conflict over S'_l, where F'' is the convex hull
conv(G U {P_l}) (see figure 8.11 in dimension 3 and figure 8.12 in dimension 2).
If region (F, F'') conflicts with Pk (see figures 8.11a and 8.12a), then it is a new
region created by P_l. In the augmented influence graph, a new node is created
for this region, with node (F, F') as parent. The conflict list of (F, F'') can be
Figure 8.11. Deleting from a 3-dimensional convex hull: handling critical purple (d-2)-faces.
(a) (F, F'') is a new region.
(b) (F, F'') is an unhooked region.
set up by selecting the objects in conflict with (F, F'') from the conflict list of
(F, F'). The killer of (F, F'') in E' is inserted into the priority queue Q if it was not
already there. Finally, region (F, F'') is added to the list of critical regions killed
by this point.
If region (F, F'') does not conflict with Pk (see figures 8.11b and 8.12b), then it
corresponds to an unhooked node created by P_l. This node is found by using the
dictionary D of destroyed and unhooked nodes, and hooked as a child of (F, F').
Figure 8.12. Deleting from a 2-dimensional convex hull: handling critical purple (d-2)-faces.
(a) (F, F'') is a new region.
(b) (F, F'') is an unhooked region.
In the dictionary D' of the (d-3)-faces incident to the critical purple (d-2)-faces, each
(d-3)-face K has two pointers for keeping track of the critical purple (d-2)-faces
incident to K.
Let K be such a (d-3)-face (see figure 8.13 in dimension 3 and figure 8.14 in
dimension 2). We denote by G1 and G2 the two purple (d-2)-faces incident to
K. At least one of them is a critical face, but not always both. We denote by
(F1, F1') and (F2, F2') the regions corresponding to the faces G1 and G2 of the convex
hull conv(S'_{l-1}). We may assume that the facets F1 and F2 are blue, while F1' and
F2' are red.
The (d-2)-face conv(K U {P_l}) of conv(S'_l) corresponds to some region (F1'', F2''),
where F1'' = conv(G1 U {P_l}) and F2'' = conv(G2 U {P_l}) (see figure 8.13; see also
figure 8.14, in dimension 2, in which K is the empty face of dimension -1, and
G1 and G2 are the two vertices of conv(S'_{l-1}), both purple with respect to P_l).
2.a If both G1 and G2 are critical faces, the corresponding nodes in Ia(E')
may be retrieved through dictionary D'.
2.a.1 If region (F1'', F2'') conflicts with Pk (see figure 8.14a), it is a new region
created by P_l; a node is created for this region, and inserted into the influence
graph with both (F1, F1') and (F2, F2') as parents. The conflict list of (F1'', F2'')
may be obtained by merging the conflict lists of (F1, F1') and (F2, F2'), and then
selecting from the resulting list the objects that conflict with (F1'', F2''). Merging
the conflict lists can be carried out in time proportional to their total length,
because these lists are ordered chronologically.8 The killer of (F1'', F2'') in the
sequence E' is inserted into the priority queue Q if not already there, and region
(F1'', F2'') is added to the list of critical regions killed by this point.
8 An alternative to this solution is to forget about ordering the conflict lists and to resort to
Figure 8.13. Deleting from a 3-dimensional convex hull: handling critical purple (d -3)-
faces.
2.a.2 If region (F1'', F2'') does not conflict with Pk (see figure 8.14b), then this
region is an unhooked region created by P_l. It suffices to find the corresponding
node using dictionary D and to hook it back to the nodes corresponding to (F1, F1')
and (F2, F2').
2.b When only one of the purple (d-2)-faces G1 and G2 incident to K is
critical, say G1, the algorithm must find in the influence graph the node cor-
responding to G2, the other purple (d-2)-face incident to K. Lemma 8.6.1
below proves that, in this case, conv(K, P_l) is a (d-2)-face of conv(S_l) which
corresponds to a destroyed or unhooked node of Ia(E), whose parents include
precisely the node corresponding to region (F2, F2'). To find (F2, F2'), we may
therefore search the dictionary D of destroyed or unhooked nodes for the node,
created by P_l, that corresponds to the (d-2)-face conv(K, P_l) of conv(S_l). This node is
uniquely determined by this criterion, because we know not only the (d-2)-face
conv(K, P_l) of its corresponding region, but also its creator P_l.
the method used in section 6.4 for merging the conflict lists of trapezoids.
Figure 8.14. Deleting from a 2-dimensional convex hull: handling the critical purple (d-3)-faces.
The critical purple (d-3)-face K here is the empty face of dimension -1.
G1 and G2 are its two purple vertices.
(a) G1 and G2 are critical, (F1'', F2'') is new.
(b) G1 and G2 are critical, (F1'', F2'') is unhooked.
(c) G1 is critical, G2 is not, and G1 is not a face of conv(S_{l-1}).
(d) G1 is critical, G2 is not, and G1 is a face of conv(S_{l-1}), but not purple
with respect to P_l.
(e) G1 is critical, G2 is not, and G1 is a face of conv(S_{l-1}), this time purple
with respect to P_l.
Lemma 8.6.1 Assume that only one of the two purple (d-2)-faces incident to K is critical, say G1. Then conv(K, P_l) is a (d-2)-face of conv(S_l); the corresponding node of Ia(E), created by P_l, is destroyed or unhooked, and its parents include the node corresponding to region (F2, F2').
Proof. For the proof, imagine that Pk and then P_l are inserted into S'_{l-1}: we
obtain successively S_{l-1} and S_l.
The (d-3)-face K of conv(S'_{l-1}) is purple with respect to Pk, since it belongs
to a critical (d-2)-face as well as to a non-critical (d-2)-face. As a result, both
K and conv(K, Pk) are faces of conv(S_{l-1}).
Since it is not critical, the (d-2)-face G2 is also a (d-2)-face of conv(S_{l-1}),
and its corresponding region is still (F2, F2'); hence the face G2 of conv(S_{l-1}) is purple
with respect to P_l.
Once the node corresponding to the (d-2)-face G2 has been found,
operations can resume as before, apart from a simple detail. If the region (F1'', F2'')
that corresponds to the (d-2)-face conv(K, P_l) of conv(S'_l) is new, then its conflict
list may be obtained by merging that of the critical region (F1, F1') and that of the
destroyed region (conv(K, P_l, Pk), F2''). (We do this in order to avoid traversing
the conflict list of region (F2, F2') corresponding to face G2, which is neither new
nor destroyed.)
The algorithm is deterministic. Yet the analysis given here is randomized and
assumes the following probabilistic model:
* each insertion concerns, with equal probability, any of the objects present
in the current set immediately after the insertion;
* each deletion concerns, with equal probability, any of the objects present
in the current set immediately before the deletion.
Theorem 8.6.2 Using an augmented influence graph allows the fully dynamic
maintenance of the convex hull of points in Ed, under insertion or deletion of
points. If the current set has n points:
* the structure requires expected storage O(n log n + n^⌊d/2⌋),
* inserting a point takes expected time O(log n + n^(⌊d/2⌋-1)),
* deleting a point takes expected time O(log n) in dimension 2 or 3 and time
O(t n^(⌊d/2⌋-1)) in dimension d > 3. The parameter t represents the complexity
of an operation on the dictionaries used by the algorithm (t = O(log n) if
balanced binary trees are used, t = O(1) if perfect dynamic hashing is used).
Proof. During the rebuilding phase in a deletion, the number of queries into the
dictionary of destroyed or unhooked nodes is at most proportional to the number
of destroyed or unhooked nodes. For each point P_l that is reinserted, the number
of updates or queries on the dictionary of (d - 3)-faces incident to critical purple
(d - 2)-faces is proportional to the number of these critical purple (d - 3)-faces.
Thus, the total number of accesses to the dictionaries is proportional to the total
number of critical faces encountered that correspond to new or killed nodes. The
conflict lists of new nodes can be set up in time at most proportional to the total
sizes of the conflict lists of new or killed nodes. All the other operations performed
during a deletion, except handling the priority queue, take constant time, and
their number is proportional to the number of destroyed, new, or unhooked nodes.
As a result, the algorithm indeed satisfies the update condition 6.3.5 for algo-
rithms that use an augmented conflict graph. Its randomized analysis is therefore
the same as in section 6.3, and is given in theorem 6.3.6 in terms of f0(l, S), the
expected number of regions defined and without conflict over a random l-sample
of S. For the case of convex hulls, since the number of such regions for any sample
is bounded in the worst case by O(l^⌊d/2⌋) (upper bound theorem 7.2.5), so is their
expectation f0(l, S). Plugging this bound into theorem 6.3.6 yields the performances
given in the statement of the theorem. In dimension 2 or 3, the number of operations to be performed on
the dictionaries and on the priority queue is O(1) whereas handling the conflict
lists always takes O(log n) time. Therefore, it suffices to implement dictionar-
ies and priority queues with balanced binary trees. In dimensions higher than
3, deletions have supra-linear complexity, and the priority queue may be imple-
mented using a simple array. □
8.7 Exercises
Exercise 8.1 (Extreme points) The extreme points of a set of points are those which are
vertices of the convex hull. Show that determining the extreme points of n points in E2
is a problem of complexity Θ(n log n).
Hint: You may use the notion of an algebraic decision tree: an algebraic tree of degree a
is a decision tree where the test at any node evaluates the sign of some algebraic function
of degree a for the inputs. Loosely stated, a result by Ben-Or (see also subsection 1.2.2)
says that any algebraic decision tree that decides whether a point of Ek belongs to some
subset W of Ek must have a height h = Ω(log c(W) - k), where c(W) is
the number of connected components of W.
Exercise 8.2 (Adjacency graph) Let a simplicial d-polytope be defined as the convex
hull of n points. Show that knowledge of the facets of the polytope (given by their vertices),
along with their adjacencies, suffices to reconstruct the whole incidence graph of the
polytope in time linear in the size of the adjacency graph, which is O(n^⌊d/2⌋).
Exercise 8.3 (1-skeleton) This problem is the dual version of its predecessor. Let a
simple d-polytope be defined as the intersection of n half-spaces. Suppose that the 1-
skeleton is known, that is the set of its vertices and the arcs joining them. Each vertex
is given as the intersection of d bounding hyperplanes. Show that the whole incidence
graph of the polytope may be reconstructed in time O(n^⌊d/2⌋).
Exercise 8.5 (On-line convex hulls) Give an algorithm to compute on-line the con-
vex hull of a set of points in Ed, by using an influence graph whose nodes correspond to
regions which are half-spaces. Give the randomized analysis of this algorithm.
O(n log n + n^⌊d/2⌋). Give an on-line version of the preceding algorithm that uses an influ-
ence graph.
Show that in the version of the algorithm that uses a conflict graph, the storage
requirements may be lowered if only one conflict is stored for each half-space.
Chapter 9
Convex hulls in two and three dimensions
There are many algorithms that compute the convex hull of a set of points in
two and three dimensions, and the present chapter does not claim to give a
comprehensive survey. In fact, our goal is mainly to explore the possibilities
offered by the divide-and-conquer method in two and three dimensions, and to
expand on the incremental method in the case of a planar polygonal line.
In dimension 2, the divide-and-conquer method leads, like many other methods,
to a convex hull algorithm that is optimal in the worst case. The main advantage
of this method is that it also generalizes to three dimensions while still leading
to an algorithm that is optimal in the worst case, which is not the case for the
incremental method described in chapter 8. The performances of this divide-and-
conquer algorithm rely on the existence of a circular order on the edges incident to
a given vertex. In dimensions higher than three, such an order does not exist, and
the divide-and-conquer method is no longer efficient for computing convex hulls.
The 2-dimensional divide-and-conquer algorithm is described in section 9.2, and
generalized to dimension 3 in section 9.3. But before these descriptions, we must
comment on the representation of polytopes in dimensions 2 and 3, and describe
a data structure that explicitly provides the circular order of the edges or facets
around a vertex of a 3-dimensional polytope.
The problem of computing the convex hull of a polygonal line is interesting
from the point of view of its complexity. Indeed, the lower bound of Ω(n log n)
on the complexity of computing the convex hull of n points does not hold if the
points are assumed to be the vertices of a simple polygonal line. In fact, any
simple polygonal line that links the points in a given set determines an order on
those points which is not completely unrelated to the order of the vertices on
the boundary of the convex hull. In section 9.4, we show how it is possible to
compute in time O(n) the convex hull of a set of n points given as the vertices
of a simple polygonal line.

9.1 Representation of 2- and 3-polytopes
Figure 9.1. Representation of a 2-polytope: (a) the incidence graph, (b) the circular list
of its vertices.
The proper faces of a 2-polytope consist of its vertices and edges. Each edge
is incident to two vertices and each vertex to two edges. In fact, the incidence
graph of a 2-polytope is a cyclic graph that alternates vertices and edges (see
figure 9.1a). Without losing information, a 2-polytope may be represented by the
doubly-linked circular list of its vertices. Either direction in this list corresponds
to an order on the boundary of the polytope. If the plane that contains the 2-
polytope has an orientation, it induces an order on this boundary that is called
the direct (or counter-clockwise) order of the vertices, and the reverse order is
called the indirect (or clockwise) order of the vertices.
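A doubly linked circular list of vertices can be sketched in a few lines (a sketch of ours); prev and next follow the clockwise and counter-clockwise orders respectively.

    class Vertex:
        def __init__(self, point):
            self.point = point
            self.prev = self.next = self       # a single vertex is its own neighbour

    def insert_after(u, point):
        # insert a new vertex immediately after u in counter-clockwise order
        v = Vertex(point)
        v.prev, v.next = u, u.next
        u.next.prev = v
        u.next = v
        return v

    def boundary_ccw(start):
        # enumerate the vertices in direct (counter-clockwise) order
        v = start
        while True:
            yield v
            v = v.next
            if v is start:
                break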
Representation of 3-polytopes
vertices contained in any given facet, and also on the set of edges and facets
containing any given vertex. Let us agree that supporting hyperplanes are ori-
ented by the outward normal, pointing into the half-space that does not contain
the polytope. This orientation induces a circular order on the edges and vertices
contained in a facet, which we again call the direct (or counter-clockwise) order;
the other orientation induces the indirect (or clockwise) order.
Cycles of edges of a 3-polytope, around a vertex or a facet, are not stored in the
incidence graph of the polytope. These cycles are commonly used by algorithms
that deal with 3-polytopes, however, and for this reason an alternative data
structure is often preferred: the edge-list representation stores the order of the
edges incident to a given vertex or to a given facet of the 3-polytope.
In this structure, vertices and facets are represented by a single node, whereas
an edge is stored in a double node, one for each possible orientation of the edge.
To orient an edge is to choose an order on its two vertices: the origin is the first
vertex of the edge while the end is the last one. We can now make a distinction
between the two facets incident to an oriented edge: the facet incident on the left,
or left incident facet, is the one whose direct orientation traverses the edge from
origin to end, and the right incident facet is the one whose indirect orientation
traverses the edge from origin to end. In this data structure, each edge node
stores five pointers, displayed in figure 9.2:
org(E) points towards the node for the origin of E, and left(E) towards the node
for the facet incident to E on its left,
sym(E) points towards the node for the reverse edge. In this way, org(sym(E))
points towards the end of E and left(sym(E)) towards the facet incident
to E on the right,
onext(E) points towards the edge E' that shares the same origin as E, and whose
facet incident on the right is the same as the facet incident to E on the left:
org(E') = org(E)
left(sym(E')) = left(E),
lnext(E) points towards the edge E'' that follows E in the circular order of edges
on the boundary of left(E):
left(E'') = left(E)
org(E'') = org(sym(E)).
Conversely, each facet keeps a pointer to one oriented edge that has the facet
as its left incident facet. The entire edge cycle on the boundary of a facet may be
obtained in direct (resp. indirect) order by repeated applications of the operator
lnext() (resp. sym(onext())). The time taken for this operation is constant per
edge on the boundary.
Each vertex node also keeps a pointer to one of the edges originating at that
vertex. This defines the order of the edges around a vertex: all the edges
originating at that vertex may be obtained in direct (resp. indirect) order by
repeated applications of the operator onext() (resp. lnext(sym())). Again, the
time needed to enumerate these edges is constant per edge.
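The edge-list representation just described can be sketched as follows (a sketch of ours, with the five fields named as in the text); the two traversals run in time proportional to the number of enumerated edges.

    class EdgeNode:
        """One of the two oriented copies of an edge."""
        __slots__ = ("org", "left", "sym", "onext", "lnext")

    def facet_edges(e):
        # edges on the boundary of left(e), in direct (counter-clockwise) order
        f = e
        while True:
            yield f
            f = f.lnext
            if f is e:
                break

    def vertex_edges(e):
        # edges originating at org(e), in direct order
        f = e
        while True:
            yield f
            f = f.onext
            if f is e:
                break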
9.2 Divide-and-conquer convex hulls in dimension 2

Moreover, the total number of purple vertices on both convex hulls must be at
least three.
The edges of conv(A) that are neither edges of conv(A1) nor of conv(A2) must
intersect the separating vertical line Ho, and there are exactly two such edges.
They must connect a purple vertex of conv(A1) to a purple vertex of conv(A2):
they are the exterior bitangents to the polytopes conv(A1) and conv(A2). We call
the upper bitangent the one that intersects the separating line Ho above the other,
which is called the lower bitangent. Much of the work in the merging process is
to identify these two bitangents.
Let Ak be the vertex of conv(A1) with the greatest abscissa, and Ak+1 be
the vertex of conv(A2) with the smallest abscissa. The segment AkAk+1 lies
outside both conv(A1) and conv(A2). Both vertices Ak and Ak+1 are incident
to a red edge. To find the upper bitangent to conv(A1) and conv(A2), the
merging step moves a segment U1U2 upwards from position AkAk+1, while staying
outside conv(A1) and conv(A2). The left endpoint U1 moves counter-clockwise
on the boundary of conv(A1), taking position at vertices of conv(A1) that are
incident to a red edge. Likewise, the right endpoint U2 moves clockwise on the
boundary of conv(A2), taking position at vertices of conv(A2) that are incident
to a red edge. More precisely, let succ(U1) denote the successor of U1 along
the oriented boundary of conv(A1), and pred(U2) the predecessor of U2 along the
oriented boundary of conv(A2). Starting from U1 = Ak and U2 = Ak+1, the search
for the upper bitangent can be written as:

    while the edge pred(U2)U2 of conv(A2) is red with respect to U1,
          or the edge U1succ(U1) of conv(A1) is red with respect to U2
        if the edge pred(U2)U2 is red with respect to U1
            then U2 <- pred(U2)
            else U1 <- succ(U1)
In this manner, the endpoints U1 and U2 of the segment U1U2 only traverse red edges
of conv(A1) and conv(A2), and both U1 and U2 keep in contact with a red edge.
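As an illustration, the search for the upper bitangent can also be written as the following Python sketch (ours, assuming general position). The hulls are given as lists of vertices in counter-clockwise order, i is the index of the rightmost vertex of conv(A1) and j the index of the leftmost vertex of conv(A2).

    def is_red(u, v, w):
        # the edge u -> v of a hull stored counter-clockwise is red with
        # respect to w when w lies strictly to its right
        return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0]) < 0

    def upper_bitangent(hull1, hull2, i, j):
        n1, n2 = len(hull1), len(hull2)
        succ1 = lambda k: (k + 1) % n1         # counter-clockwise on conv(A1)
        pred2 = lambda k: (k - 1) % n2         # clockwise on conv(A2)
        while (is_red(hull2[pred2(j)], hull2[j], hull1[i])
               or is_red(hull1[i], hull1[succ1(i)], hull2[j])):
            if is_red(hull2[pred2(j)], hull2[j], hull1[i]):
                j = pred2(j)                   # U2 <- pred(U2)
            else:
                i = succ1(i)                   # U1 <- succ(U1)
        return i, j                            # U1U2 is the upper bitangent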
It remains to show that, when the loop is exited, the line joining U1 and U2
is a line supporting both conv(A1) and conv(A2), and therefore is the desired
upper bitangent. When U1U2 has reached its final position, the edge U2pred(U2)
of conv(A2) is blue with respect to U1 and the edge U1succ(U1) of conv(A1) is
blue with respect to U2. Without loss of generality, we may assume that the
last move was that of U1, on the boundary of conv(A1) (the proof is entirely
symmetrical in the converse situation). Then the edge pred(U1)U1 of conv(A1) is
red with respect to U2, and as a result vertex U1 is purple with respect to U2, so
that the line U1U2 supports conv(A1) (lemma 8.3.1). It remains to show that U1U2
also supports conv(A2), or in other words that vertex U2 of conv(A2) is purple
with respect to U1. The edge U2pred(U2) of conv(A2) is blue with respect to U1,
however, so we only have to show that the edge succ(U2)U2 is red with respect to
U1. Let U1' be the position of U1 on the boundary of conv(A1) during the last
move of the other endpoint on the boundary of conv(A2). The edge succ(U2)U2
of conv(A2) is red with respect to U1'. All the vertices between U1' and U1 on
conv(A1) lie on the same side of succ(U2)U2. As a result, the edge succ(U2)U2 is
also red with respect to U1.
We obtain the lower bitangent in the same fashion, only U1 moves on the
boundary of conv(A1), starting at position Ak and passing clockwise over the
vertices of conv(A1). Likewise, U2 moves on the boundary of conv(A2), starting
at position Ak+1 and passing counter-clockwise over the vertices of conv(A2).
To analyze this algorithm, it suffices to notice that each test between a vertex
and an edge simply consists in evaluating the sign of a 3 x 3 determinant, and can
therefore be performed in constant time. Moreover, only two tests are performed
at each step, to follow a red edge of either conv(A1) or conv(A2), or to discover
that a bitangent has been found. A red edge of conv(A1) or conv(A2) is not
part of the convex hull conv(A), and therefore will never be tested again in the
entire algorithm. The total time needed in the recursion for these operations is
therefore at most proportional to the number of edges of convex hulls created
by the algorithm. At each merging step, two new edges are created, and the
total number of these steps is O(n) if the size of the original set A is n. The
total number of edges created is thus linear, and the complexity of the operations
in the divide-and-conquer recursive calls is O(n). In two dimensions, the total
complexity of the algorithm is therefore dominated by the cost of the initial
sorting. Notice that if the points are sorted along one axis, the algorithm runs
in time O(n).
which solves to t(n) = O(n log n). This proves the theorem below. Notice that
in the three-dimensional case, the divide-and-conquer algorithm has complexity
Θ(n log n) even if the points are sorted along one axis.
more subtle than in the incremental algorithm. The colors can be attributed as
follows:
It is very tempting to believe that the purple vertices and edges of C1 (resp.
C2) form a cycle in the incidence graph of C1 (resp. C2 ) as is the case for the set
of edges and vertices that are purple with respect to a point. This is not true,
however. Indeed, a purple vertex can be incident to an arbitrarily high number
of purple edges (for instance, vertex A in figure 9.6a is incident to three purple
edges). This number may even be zero when the purple vertex is the only non-red
face of polytope C1 (consider for instance vertex A in figure 9.6b). A purple edge
may be incident to two red facets: this happens for instance to an edge of C2
whose affine hull is a line that does not intersect C1, but whose incident facets
have their supporting planes intersecting C1 (for instance, edge AB in figure 9.6a).
The faces of C that are neither faces of C1 nor of C2 are the new faces, and they
necessarily intersect the separating plane Ho. A new edge is the convex hull of
a purple vertex of C1 and a purple vertex of C2. A new facet is a triangle, the
convex hull of a purple edge of C1 and of a purple vertex of C2, or conversely the
Figure 9.6. The polytopes C1 and C2, the vertex A, and the convex hull conv(C1 U C2): configurations (a) and (b).
convex hull of a purple edge of C2 and of a purple vertex of C1. Let Co be the
2-polytope formed by the intersection of C with the plane Ho (see figure 9.5).
The new edges of C intersect Ho at the vertices of Co, and the circular order on
the faces of the 2-polytope Co induces a circular order on the new faces (edges
and facets) of C.
The main idea is then to build the new facets of C in turn, in the order given
by the edges of Co. For instance, the plane Ho may be oriented by the x-axis, and
the boundary of the 2-polytope Co will be followed in counter-clockwise order.
As we will show below, the algorithm takes advantage of the order of the edges of
a 3-polytope incident to a given vertex. So we choose to represent the polytopes by
the edge-list structure, which explicitly encodes this order (see section 9.1).
The overview of the merging algorithm is then:
1. The algorithm first finds an initial new edge U1U2 of C, which intersects the
separating plane Ho.
2. The algorithm then discovers the other new faces (facets and edges) of C
in the order induced by Co. At the same time, the purple faces (edges and
vertices) of C1 and C2 are found.
3. In a third stage, all the red faces (facets, edges, and vertices) of C1 and C2
are found and the edge-list representation of C is built from those of C1 and
of C2.
To find the other new faces of C, the algorithm uses the gift-wrapping method
which consists in pivoting a plane around the current new edge, so that it supports
both C1 and C2 , as if we were trying to wrap both C1 and C2 with a single sheet
of paper. More precisely, let A1A2, A1 ∈ C1, A2 ∈ C2, be the new edge that
was most recently discovered. The algorithm knows a plane H12 that supports
C along its edge A1A2: take for H12 the vertical plane passing through A1A2 if
this edge is the first edge found in the previous stage, or otherwise take the affine
hull of the most recently discovered new facet AA1A2, which is incident to the
oriented edge A1A2 on its left. At this point, the algorithm must discover the
new facet of C that is incident to the oriented edge A1A2 on its right. This facet is a
triangle A1A2A' where A' is either a vertex of C1 or a vertex of C2. Consider
a plane H that pivots around edge A1A2, starting at position H12 and moving
counter-clockwise, meaning that its trace in the separating plane Ho, starting at
position H12 ∩ Ho, pivots counter-clockwise around the vertex A1A2 ∩ Ho of Co. Vertex A' is the first
vertex of either C1 or C2 that is touched by H. We say that the winner of C1 for
the pivot A1A2 is the first vertex A'1 of C1 that is touched by H. Similarly, the
winner of C2 for the pivot A1A2 is the first vertex A'2 of C2 that is touched by H.
Necessarily, A' is one of A'1 or A'2, and which one can be decided in the following
way. Let N be the unit vector normal to the plane H12, directed outside C.1
Likewise, for each i = 1, 2, let H'i be the affine plane of the triangle A1A2A'i, and
N'i its unit normal vector directed outside Ci.2 Vertex A' is the A'i (i = 1, 2) for
which the dihedral angle between H12 and H'i is minimal, or equivalently the one for
which the dot product N · N'i is minimal.
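This choice between the two candidates can be sketched as follows (a sketch of ours, for a pivot that already has a previously discovered facet AA1A2); the normals are oriented according to the conventions of footnotes 1 and 2 below, and the inputs are coordinate triples.

    import numpy as np

    def pick_third_vertex(A1, A2, A, winner1, winner2):
        """Return the candidate (the winner of C1 or of C2) whose facet makes
        the smaller dihedral angle with the plane of the previous facet A A1 A2,
        i.e. the one minimising the dot product N . N'i."""
        A1, A2, A = map(np.asarray, (A1, A2, A))
        unit = lambda v: v / np.linalg.norm(v)
        N = unit(np.cross(A2 - A1, A - A1))                  # along A1A2 ^ A1A
        def n_prime(Ai):
            return unit(np.cross(-(A2 - A1), np.asarray(Ai) - A1))   # along -A1A2 ^ A1A'i
        if np.dot(N, n_prime(winner1)) <= np.dot(N, n_prime(winner2)):
            return winner1
        return winner2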
We must now explain how to find the winners A'1 and A'2. These problems
being exactly symmetrical, we will restrict our attention to the problem of finding
the winner of C1.
If A' is a vertex of C1 that is adjacent to A1, then we denote by pred(A')
and succ(A') the vertices of C1 adjacent to A1 that respectively precede and
follow A' in the counter-clockwise order around A1. Denote by H'1 the planar
affine hull of the triangle A1A'A2 and by H'1+ the half-space bounded by this plane
that is opposite to the wedge product -A1A2 ∧ A1A', which induces a clockwise
orientation of the triangle A1A2A'.
Lemma 9.3.2 The winner of C1 for the pivot A1A2 is the unique vertex A'1 of
C1 adjacent to A1 such that pred(A'1) and succ(A'1) both belong to the half-space
H'1+.
Proof. A'1 is the winner of pivot A1A2 if and only if the triangle A'1A1A2 is the facet
of the polytope conv(C1 U {A2}) incident to the oriented edge A1A2 on its right. Then
H'1 is a supporting plane of conv(C1 U {A2}) and therefore of C1, so that A1A'1 is
an edge of C1, and both pred(A'1) and succ(A'1) belong to the half-space H'1+.
1 The direction of vector N is the one that orients the triangle (A1, A2, A) counter-clockwise.
It is the direction of the wedge product A1A2 ∧ A1A.
2 On the other hand, the direction of vector N'i is the one that orients the triangle (A1, A2, A'i)
clockwise. It is the direction of the wedge product -A1A2 ∧ A1A'i.
Proof. Here we prove the assertion concerning polytope C1. A proof for polytope
C2 is entirely symmetrical. Since A1 is a vertex of the convex hull C, there is a
plane H1 that separates vertex A1 from all the other vertices of C1 and C2 (see
exercise 7.4). Such a plane intersects all the edges of C and C1 incident to A1
(see figure 9.8). Plane H1 intersects polytope C along the 2-polytope H1 ∩ C and
polytope C1 along the 2-polytope H1 ∩ C1 contained inside H1 ∩ C.
Let us orient the plane H1 by a normal unit vector N1 directed towards the
half-space that does not contain A1. The counter-clockwise order on the edges of
the 2-polytope H1 ∩ C1 corresponds to the indirect order on the facets of C1 that
contain vertex A1.
The order in which the new faces of C that contain A1 are discovered is consis-
tent with the counter-clockwise order of the faces of the 2-polytope H1 ∩ C. To see
this, it suffices to consider a plane H that pivots around Ho ∩ H1 from Ho towards
H1. As H pivots from Ho to H1, the 2-polytope H ∩ C changes. Nevertheless,
each new face of C that contains A1 always keeps a trace in H corresponding to a
face of H ∩ C, and all these traces remain in the same order along the boundary
of H ∩ C.
Any purple edge of C1 is also an edge of C, and the trace on the plane H1 of a purple
edge of C1 that is incident to A1 is a common vertex of both polytopes H1 ∩ C
and H1 ∩ C1. The trace on H1 of a pivot A1A2, however, is a vertex of H1 ∩ C
but not of H1 ∩ C1. The trace on H1 of the plane that pivots around the edge
A1A2 is a line L that pivots around the point A1A2 ∩ H1. The pivoting plane
touches C1 at the winner A'1 of C1 for the pivot A1A2 whenever L becomes
a supporting line of the polytope H1 ∩ C1. The point at which L supports
H1 ∩ C1 is the trace on H1 of the edge A1A'1. During the course of the algorithm,
the trace on H1 of the successive pivots incident to A1 moves counter-clockwise
on the boundary of H1 ∩ C; as a result, the point at which the line L touches
H1 ∩ C1 moves counter-clockwise on the boundary of H1 ∩ C1 (see figure 9.8). The
edge A1A'1 therefore traverses the list of edges of C1 incident to A1 in indirect (or
clockwise) order. □
In order to find the winner in C1 for a pivot incident to the vertex A1, the algo-
rithm need only consider the edges of C1 incident to vertex A1 in clockwise order.
When the algorithm considers the first pivot incident to A1, it starts
searching at any edge incident to A1. If the algorithm has already encountered
one or more pivots incident to A1, however, then it starts the search at the winner
of C1 for the last encountered pivot incident to A1.
When the algorithm discovers a new facet (A1A2A'1 or A1A2A'2) while pivoting
around edge A1A2, it also exhibits a new edge of C (A2A'1 or A1A'2). The pivoting
process may be started again around this new pivot. Moreover, a purple vertex
(A'1 or A'2) and a purple edge (A1A'1 or A2A'2) are also identified. The pivoting
process is terminated when the pivot is back at the initial edge U1 U2 . All the
new facets have been discovered, and the purple faces of C1 and C2 (vertices and
edges) have been sorted out. Note that a purple vertex can be enumerated several
times in the algorithm, and that some purple edges (those incident to two red
facets) may be enumerated twice.
Reconstruction of C
The third stage of the merging process must identify all the red faces of C1 and
C2 , and also build the edge-list representation of the convex hull C of C1 and
C2. For this, we may begin by traversing the list of purple edges and color the
incident facets: a facet of Ci (i = 1, 2) incident to a purple oriented edge AiA'
is red if there is a new facet of C incident to AiA' on the same side. It is blue
otherwise. The red facets are then colored by propagating the red color, as every
facet incident to a red facet and that is not colored blue must be a red facet as
well. The red edges and vertices are then easily determined: an edge incident to
a red facet must be red, unless it was colored purple in the previous stage (finding
the purple edges and vertices); likewise, a vertex incident to a red or purple edge
is red, unless it was colored purple in the previous stage.
When the new faces of C are discovered and the red and purple faces of poly-
topes C1 and C2 have been determined, it is easy to create the edge-list represen-
tation of C from those of C1 and C2.
9.4 Convex hull of a polygonal line

1. its vertices are all distinct, except perhaps the first and last which may be
identical, and
A polygonal line is closed if the first and last vertices are identical. A simple
and closed polygonal line is also called a polygon. Thus a polygon may be defined
entirely by its circular sequence of vertices. A deep theorem of Jordan (a proof of
the theorem for polygons is given in exercise 11.1) states that any simple closed
curve separates the plane E2 into two connected components, exactly one of which
is bounded. Thus a polygon P separates the points in E2 \ P into two connected
regions, only one of which is bounded. This bounded region is called the interior
of polygon P, and denoted by int(P). The other (unbounded) region is called
the exterior of the polygon and is denoted by ext(P). Regions int(P) and ext(P)
are topological open subsets3 of E2, and the topological closure of int(P)
is the union int(P) U P. In this section, the Euclidean space E2 is oriented and
a polygon will be described as a circular list of vertices in direct (or counter-
clockwise) order. This defines an orientation on the edges. By convention, we
agree that the direct orientation of a polygon is such that the interior of the
polygon is to the left of each oriented edge.
Let A be a set of n points in E2. One may wonder why knowing a simple
polygonal line L(A) joining these points would help in computing the convex
hull conv(A). The following theorem shows a deep connection between the
3 For a brief survey of the topological notions of open, closed, and connected subsets, see
chapter 11.
order of the vertices along the boundary of the convex hull conv(A) and the
order of these points along the polygonal line L(A).
Theorem 9.4.1 Consider two polygons P and Q, such that the interior of Q
is entirely contained inside the interior of P. The common vertices of P and Q
are encountered in the same order when both polygons are traversed in a counter-
clockwise order.
Let E_P and E_Q be the circular sequences of vertices of P and Q. Let E'_P
be the subsequence of E_P that corresponds to vertices common to both P and
Q, and similarly let E'_Q be the subsequence of E_Q that corresponds to vertices
common to both P and Q. The theorem states that the two sequences E'_P and
E'_Q are identical. In particular, if P is the convex hull of Q, then the theorem
states that the sequence of vertices of conv(Q) is a subsequence of the sequence
of vertices of Q.
Proof. Because the interior of Q is entirely contained inside the interior of P,
edges of P and Q cannot intersect in a point which is not a vertex of P or Q. In
the following, we first assume that the edges of P and Q intersect only in points
which are vertices of both P and Q, and then remove this assumption at the end
of the proof.
A chord of a topological 2-ball is a simple curve contained in the interior of
the 2-ball except for its endpoints which lie on the boundary. The proof of the
theorem relies on a consequence of Jordan's theorem that states that any closed
topological 2-ball is separated by a chord into two connected components. (A
proof of this consequence is given in exercise 11.2.) Both subsequences E'_P and
E'_Q have the same vertices, namely those common to both P and Q, but not
necessarily in the same order. To prove that they are in fact identical, it suffices
to show that two consecutive vertices in E'_Q are also consecutive in E'_P. Let then
A1 and A2 be two consecutive vertices of E'_Q. Let Q12 be the portion of Q that
joins A1 to A2 in counter-clockwise order, and Q21 be the portion that joins A2 to
A1 in counter-clockwise order. Likewise, let P12 be the portion of P that joins A1
to A2 in counter-clockwise order, and P21 be the portion that joins A2 to A1 in
counter-clockwise order (see figure 9.9). The vertices of P are distinct from those
of Q12 (except for the endpoints A1 and A2) because A1 and A2 are consecutive
in the subsequence E'_Q. Hence P12 and Q12 cannot share a common vertex except
for A1 and A2. Furthermore, there are no intersections between edges of P and
Q except at a common vertex. Hence Q12 is a chord of the topological 2-ball
formed by the closure of int(P). Moreover, Q21 is contained in this closed 2-ball and does not intersect the chord
Q12 except at its endpoints A1 and A2. Hence Q21 is entirely contained in one of
the two connected components into which the chord Q12 separates the closed 2-ball. Clearly, P12 is entirely contained
in the other component, except for its endpoints A1 and A2. Therefore, P12 and
Q21 cannot share a common vertex except for A1 and A2. This shows that A1
and A2 are consecutive in the subsequence E'_P, and proves the theorem when P
and Q intersect only at common vertices.
The case where some vertices of P lie on edges of Q is handled by adding
those vertices of P as vertices of Q, splitting the corresponding edges of Q.
Similarly, when some vertices of Q lie on edges of P, we add those vertices of Q
as vertices of P and split the corresponding edges of P. The same vertices are
added to the subsequences E'_P and E'_Q, producing two subsequences E''_P and E''_Q
which are identical by the proof above. Thus E'_P and E'_Q, obtained by removing
the same elements, are also identical. □
The following corollary is more specifically concerned with the convex hull of a polygonal line.
Let A be a set of n points in the plane and L(A) = (A1, A2, ..., An) be a simple
polygonal line joining the points in A. Consider the ranks in L(A) of the vertices
on the convex hull conv(A), and denote by Am and AM the vertices of conv(A)
with respectively lowest and highest rank (see figure 9.10).
Proof. We denote by C(A) the polygon that constitutes the boundary of the
convex hull conv(A), by CmM the portion of C(A) that joins Am to AM in counter-
clockwise order, and by CMm the portion of C(A) that joins AM to Am in counter-
clockwise order. To prove the first part of this corollary, it suffices to apply the
previous theorem to the polygons P and Q defined as follows. Polygon P is the
concatenation of CmM and a polygonal line PMm that joins AM to Am such that
L(A) and C(A) are both contained within the closure of int(P) (see figure 9.10). Polygon Q
is the concatenation of LmM (the portion of L(A) that joins Am to AM) and of
PMm. The second part of the corollary can be proved very similarly. □
The remainder of this section presents an algorithm that builds the convex hull
of a polygonal line in E 2 in linear time.
Let A be a set of n points in E2 and L(A) a simple polygonal line whose vertices
are the points of A. The algorithm we present here is an incremental algorithm
that processes the points of A in the order of L(A).
Let (A1, A2, ..., An) be the sequence L(A) and Ai = {A1, A2, ..., Ai} be the
set of the first i points of L(A). The convex hull conv(Ai) of Ai is maintained as
a doubly connected circular list of vertices. The algorithm maintains a pointer
to the vertex of conv(Ai) with the highest rank in L(A).
The initial step builds a circular list for the triangle A1A2A3 and the pointer
points to A3. The current step inserts point Ai into the structure, and updates
the data structure that stores conv(Ai-1) so that it represents conv(Ai). The
algorithm works in two phases.
First phase. The algorithm determines whether point Ai belongs to the inte-
rior or to the exterior of conv(Ai-1). Lemma 9.4.3 below shows that this reduces to eval-
uating the signs of the two determinants [pred(AM)AMAi] and [AM succ(AM)Ai],
where AM has the highest rank among the vertices of conv(Ai-1), and pred(AM)
and succ(AM) are respectively the predecessor and successor of AM in a counter-
clockwise enumeration of the vertices on the boundary of conv(Ai-1) (see fig-
ure 9.11).
Lemma 9.4.3 Let Hp+ (resp. Hs+) be the half-plane bounded by the line sup-
porting conv(Ai-1) along the edge pred(AM)AM (resp. AMsucc(AM)), and that
contains conv(Ai-1). Point Ai is interior to the polytope conv(Ai-1) if and only if
Ai belongs to the intersection of half-planes Hp+ ∩ Hs+.
Proof. The condition is obviously required. To show that it also suffices, we
see that the simple polygonal line L(A) connects the vertices pred(AM) and
succ(AM) of conv(Ai-1), and that the portion of L(A) that joins pred(AM) to
succ(AM), together with the edges pred(AM)AM and AMsucc(AM) of conv(Ai-1),
forms a simple closed polygonal line that bounds a region D of E2 entirely con-
tained in conv(Ai-1). This region is shaded in figure 9.11. The simple polygonal
line L(A) also connects vertex AM to Ai, and the portion of L(A) that joins AM
to Ai cannot intersect the portion of L(A) that joins pred(AM) to succ(AM).
This guarantees that if point Ai belongs to both half-planes Hp+ and Hs+, then it
also belongs to D and thus to conv(Ai-1).
Second phase. The algorithm now updates the convex hull if point Ai does
not belong to the interior of the polytope conv(Ai-1). In this case, the previous
lemma shows that at least one of the edges pred(AM)AM and AMsucc(AM) is
red with respect to Ai, if we use the terminology of the incremental algorithm of
chapter 8. To update the convex hull, the algorithm need only perform steps 2,
3, and 4 of the incremental algorithm described in section 8.3.
Theorem 9.4.4 The algorithm described previously builds the convex hull of a
simple polygonal line in E2 in linear time.
Proof. For each vertex of the polygonal line L(A), phase 1 requires constant
time, since it only involves the computation of the signs of two 2 x 2 determi-
nants. As to the second phase, if it is performed, its complexity is shown in
section 8.3 to require time that is proportional to the number of edges of the
polytope conv(Ai-1) that are red with respect to Ai. The total contribution of
this phase to the complexity of the algorithm is thus proportional to the number
of edges created by the algorithm, which is O(n). □
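To make the two phases concrete, here is a self-contained Python sketch of the whole algorithm (ours, assuming general position, at least three input points, and distinct points given as coordinate pairs); the hull is kept as a doubly linked circular list in counter-clockwise order and M always points to the hull vertex of highest rank.

    def ccw(a, b, c):
        # sign of the determinant [ab, ac]: > 0 when c is strictly to the left of a -> b
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def hull_of_polygonal_line(points):
        """Convex hull of the vertices of a simple polygonal line, processed
        in the order of the line; returns the hull in counter-clockwise order."""
        nxt, prv = {}, {}

        def link(a, b):
            nxt[a], prv[b] = b, a

        a, b, c = points[0], points[1], points[2]
        if ccw(a, b, c) < 0:
            b, c = c, b                      # orient the initial triangle ccw
        link(a, b); link(b, c); link(c, a)
        M = points[2]                        # hull vertex of highest rank

        for P in points[3:]:
            p, s = prv[M], nxt[M]
            # first phase: P is interior iff it lies to the left of both
            # oriented edges pred(M) -> M and M -> succ(M) (lemma 9.4.3)
            if ccw(p, M, P) > 0 and ccw(M, s, P) > 0:
                continue
            # second phase: remove the chain of red edges around M and
            # reconnect its two purple endpoints through P
            u = M
            while ccw(prv[u], u, P) < 0:     # edge prv[u] -> u is red w.r.t. P
                u = prv[u]
            v = M
            while ccw(v, nxt[v], P) < 0:     # edge v -> nxt[v] is red w.r.t. P
                v = nxt[v]
            link(u, P); link(P, v)
            M = P

        hull, w = [M], nxt[M]
        while w != M:
            hull.append(w); w = nxt[w]
        return hull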
9.5 Exercises
Exercise 9.1 (Common bitangents) Let C1 and C2 be two 2-polytopes, separated by
a vertical line A. Let C = conv(C1 U C2). Show that the edges of C intersecting A may
be found in time O(log n), where n is the total number of vertices of C1 and C2.
Hint: For each polytope C, let us denote by C+ the upper hull of C, which is the convex
polygonal line whose vertices are vertices of C and that joins the vertex of highest abscissa
to the vertex of lowest abscissa of C in the counter-clockwise order of the vertices along
the boundary of C (see exercise 7.14). Similarly, we define the lower hull C-. Note
that the boundary of C is a concatenation of C+ and C- and that any vertical line that
intersects C also intersects an edge of C+ and an edge of C-. If C is the convex hull
conv(Cl U C2) of two polytopes C1 and C2 separated by a vertical line A, we call bridges
the two edges of C intersected by A. We separately search for the upper bridge, which is
the edge of C+ intersected by A, and the lower bridge, which is the edge of C- intersected
by A. The upper bridge is an edge of C that joins a vertex of C1+ to a vertex of C2+. It is
possible to find it by a binary search on each of C1+ and C2+. Indeed, consider a vertex U1
of C1+ and a vertex U2 of C2+, and look at the color with respect to U1 of the edges of C2+
incident to U2, and the color with respect to U2 of the edges of C1+ incident to U1. There
are nine possible cases, and in each case, at least one of the four chains determined by
U1 and U2 on C1+ and C2+ can be discarded from further consideration.
Exercise 9.2 (Dynamic convex hulls) We present an algorithm for maintaining the
convex hull of a set of points in the plane under insertion and deletion. In fact, the
algorithm maintains the upper and lower hulls separately (see exercise 9.1).
The data structure used to represent the upper hull is a balanced binary tree whose
leaves correspond to the points in the set ordered by increasing abscissae. At each internal
node N in the tree, we store a secondary data structure which allows us to efficiently
restore the convex hull of the points stored in the subtree rooted at N. More precisely,
this structure is a catenable queue which maintains a list under the following operations:
insertion and deletion of list items at either end of the list, splitting the list at a given
item, and concatenation of two lists. Each of these operations may be performed in time
logarithmic in the size of the lists. Let N be a node of the primary tree. We denote by
conv+(N) the upper hull of the points stored in the subtree rooted at N. The catenable
queue stored at N contains the portion of conv+(N) that is not on the boundary of the
convex hull conv+(M), where M is the parent of N in the tree. Moreover the position
of the first vertex of conv+(M) that does not belong to conv+ (N) is stored at node N
in an integer j(N). The catenable queue stored at the root maintains the upper hull of
all the points stored in the tree.
1. Show that the data structure requires a storage O(n) if n is the number of points
stored in the structure.
2. Show that each insertion or deletion takes time O(log² n) when the structure stores
n points.
Hint: Let us consider, for instance, the insertion of a point P into the structure that
maintains the upper hull. We follow the path in the tree that leads, from the root, to the
leaf that is the closest to P (in the x-order). At each node N on this path, we update
the catenable queues of N and of its sibling N' so that they store the upper convex hulls
conv+ (N) and conv+(N'), which may be done in time O(log n). Indeed, it suffices to split
the catenable queue of the parent M of N and N' at position j(N), and to concatenate
the sublists with those stored at N and N'. A node is created for P. Then, while going
the other way on this path, the primary structure is re-balanced and the correct chains
can be computed for each node. Let N be a node on the reverse path, and N' be its
sibling. We can compute the upper hull conv+ (M) of their parent M because the upper
hulls conv+ (N) and conv+ (N') are known, and the bridge joining them can be computed
in time O(logn) (see exercise 9.1). The lists conv+(N) and conv+ (N') may thus be split
at the bridge and we keep only the portion on the right for the left sibling and on the left
for the right sibling. All these operations may be carried out in time O(logn) for each
node on the reverse path from the new node P to the root of the primary tree. During a
rotation (simple or double) of the tree, only a constant number of nodes switch children
and a similar operation restores the correct chains stored in these nodes, in time O(log n)
for each node.
Exercise 9.3 (Onion peeling) Given a set S of n points in the plane, we consider the
subsets
S0 = S,
Si+1 = Si \ {set of vertices of conv(Si)},
until Sk has at most three elements. Give an algorithm that computes the iterated
convex hulls conv(S0), conv(S1), ..., conv(Sk) in total time O(n log n).
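A direct way to experiment with these iterated hulls is to peel the layers naively, recomputing a convex hull at every round. The Python sketch below (our own helper names; points given as coordinate tuples) runs in O(n² log n) in the worst case and therefore does not meet the O(n log n) bound asked for, which requires Chazelle's algorithm cited in the bibliographical notes; it is only meant to make the definition of the layers concrete.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain: hull vertices in counter-clockwise order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

def onion_peeling(points):
    # Layers conv(S_0), conv(S_1), ...: each round removes the hull vertices.
    layers, remaining = [], list(points)
    while len(remaining) > 3:
        hull = convex_hull(remaining)
        layers.append(hull)
        hull_set = set(hull)
        remaining = [p for p in remaining if p not in hull_set]
    if remaining:
        layers.append(remaining)
    return layers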
Exercise 9.4 (Diameter, antipodal pairs) Let S be a set of points of Ed. The di-
ameter of S, denoted by d(S), is the maximal distance between two points of S. A pair
(Pi, Pj) of vertices of the convex hull conv(S) is said to be antipodal if conv(S) admits
two parallel supporting hyperplanes, one passing through Pi and the other through Pj.
1. Show that if the diameter d(S) is attained by Pi and Pj, that is d(Pi, Pj) = d(S), then
Pi and Pj are vertices of the convex hull conv(S), and (Pi, Pj) is an antipodal pair.
2. Derive an algorithm in E2 that enumerates all the antipodal pairs to find the
diameter in time O(n log n).
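For question 2, the classical solution is the rotating-calipers scan: once the hull is known, the vertex farthest from each hull edge moves monotonically around the hull, so the antipodal pairs are enumerated in linear time. Below is a sketch reusing the convex_hull and cross helpers from the previous exercise; the hull is assumed to be strictly convex (points in general position), and the names are ours.

from math import hypot

def diameter(points):
    hull = convex_hull(points)                       # counter-clockwise order
    m = len(hull)
    if m <= 2:
        return hypot(hull[-1][0] - hull[0][0], hull[-1][1] - hull[0][1]) if m == 2 else 0.0
    def area2(a, b, c):                              # twice the unsigned area
        return abs(cross(a, b, c))
    def dist(p, q):
        return hypot(p[0] - q[0], p[1] - q[1])
    best, j = 0.0, 1
    for i in range(m):
        a, b = hull[i], hull[(i + 1) % m]
        # Advance the caliper: the vertex farthest from the line (a, b)
        # only moves forward as the edge (a, b) advances around the hull.
        while area2(a, b, hull[(j + 1) % m]) > area2(a, b, hull[j]):
            j = (j + 1) % m
        # (a, hull[j]) and (b, hull[j]) are candidate antipodal pairs.
        best = max(best, dist(a, hull[j]), dist(b, hull[j]))
    return best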
* P0 = P,
* Pk is a simplex,
* for any 0 ≤ i < k, V(Pi+1) ⊂ V(Pi) and V(Pi) \ V(Pi+1) is a maximal subset of
non-adjacent vertices of Pi that have degree at most 8 in Pi.
3. Show that k = O(log n) and that a hierarchical representation of P may be built in
time O(n).
Hint: In the first three cases, the algorithm can take advantage of the solution for Pi+1
to compute the solution for Pi in constant time. Note that the fourth query type is dual
to the third.
Exercise 9.7 (Intersection of two convex polygons) Give an algorithm that com-
putes the intersection of two convex polygons in the Euclidean plane E2, in linear time
O(n + m), where n and m are the numbers of vertices of the two polygons.
Bass and Schubert [21], published in 1967, that proposed an optimal algorithm for the
convex hull in dimension 2, although the paper had a slight error and gave no complexity
analysis. Another well known algorithm is Jarvis' "gift-wrapping" algorithm [130] which
computes the convex hull by successively finding all supporting hyperplanes. These
algorithms and others are detailed in the book by Preparata and Shamos [192], which
also gives a solution to exercise 9.4.
Although the convex hull of n points in dimension d may have Ω(n^⌊d/2⌋) faces in the worst
case, the number of faces may be much smaller. It is thus important to have output-sensitive
algorithms, meaning that their complexities depend on the size of the output. Jarvis'
algorithm is output-sensitive, as it runs in time O(nh) for a set of n points whose convex
hull has h vertices. Kirkpatrick and Seidel [137] gave an algorithm in dimension 2 that
runs in time O(n log h). This algorithm uses a curious variant of the divide-and-conquer
algorithm, which may appropriately be called marriage-before-conquest, since the upper
and lower bridges connecting the convex hulls of two subsets are computed before the
convex hull of either subset is known. Edelsbrunner and Shi [99] generalized the idea to
yield a convex hull algorithm in E3 that runs in O(n log² h).
The on-line algorithm presented in section 9.4 is due to Melkman [169]. The algo-
rithms by Lee [146] and Graham and Yao [112] are both correct and use only one stack.
The problem of dynamically maintaining the convex hull of a planar set of points (see
exercise 9.2) was solved by Overmars and Van Leeuwen [184] in time O(log² n) for each
operation. If only insertions or only deletions are to be performed, time O(log n) for
each operation may be achieved. The case of insertions was studied by Preparata [190]
and that of deletions by Chazelle [43] and Hershberger and Suri [125]. Chazelle's al-
gorithm [43] computes the onion peeling described in exercise 9.3. Hershberger and
Suri [126] also showed that a sequence of n insertions and deletions can be performed
in amortized time O(log n) for each operation if the sequence of operations is known in
advance. Moreover, their data structure can also handle queries (tangent line passing
through a given point, intersection with a line, finding a vertex, and more generally
any query that can be handled with binary search) in time O(log n), and this after any
number of operations have been performed.
The divide-and-conquer algorithm in dimension 3 was first proposed by Preparata and
Hong in 1977 [191], but the first entirely correct description of the algorithm was given
in 1987 by Edelsbrunner [89].
The hierarchical decomposition of 3-polytopes (see exercise 9.5) was invented by Dobkin
and Kirkpatrick [85] who used it to compute the distance between two polyhedra. Edels-
brunner and Maurer [94] also used it to answer different types of queries on 3-polytopes
(see exercise 9.6). Chazelle [45] also used the same kind of decomposition to compute
the intersection of two 3-polytopes in linear time.
The algorithm in exercise 9.8 that computes the union of tricolored triangles is due to
Boissonnat, Devillers, and Preparata [27].
Chapter 10
Linear programming
10.1 Definitions
A linear programming problem consists of optimizing a linear function of d vari-
ables, where the variables must satisfy a given set of n linear constraints. The
linear function to be minimized may be written as a dot product:
f(X) = V · X,
where V is a given vector of Ed and X a variable vector in the same space. The
linear constraints that X must satisfy may be written as
Ai · X ≤ ai,    i = 1, ..., n,
where, for each i = 1, . .. , n, Ai is a vector in Ed and ai a real constant. Geo-
metrically, each constraint can be expressed by the fact that the point X lies in
a closed half-space. Denote by Hi+ the closed half-space that corresponds to the
i-th constraint:
Hi+ = {X ∈ Ed : Ai · X ≤ ai}.
The intersection H1+ ∩ H2+ ∩ ... ∩ Hn+ is called the feasible domain of the problem. If the
feasible domain is bounded and not empty, then it is a polytope P and the
solution to the linear programming problem is a vertex of P, or occasionally the
set of points on a higher-dimensional face of P. This face F of P is characterized
as follows: the set of supporting hyperplanes of P along F includes a hyperplane
normal to the direction V, and P is contained in the half-space bounded by
this supporting hyperplane that contains the vector V. If the feasible domain is
empty, the linear programming problem is termed unfeasible. If, on the other hand,
the feasible domain is infinite in the direction -V, the linear programming
problem is termed unbounded. To have a bounded problem, one may always
restrict the feasible domain to a large box by adding 2d constraints. The size of
this box can be chosen in such a way that if the problem is bounded, the solution
is guaranteed to lie in the box (see exercise 10.1).
For example, the following location problem about convex hulls is a linear pro-
gramming problem in disguise: given n points {P1, P2, ..., Pn} in Ed, and a query
point Q, the problem asks if Q belongs to the convex hull conv({P1, P2, ..., Pn}).
If an origin 0 is chosen inside this convex hull (for instance as the centroid of d+ 1
of the points), the polarity of center 0 (see section 7.1.3) lets the dual version of
the problem be expressed as follows: given n half-spaces {Pi*+ : i = 1, ..., n}
whose intersection P# is bounded, and a query hyperplane Q*, determine whether
Q* intersects the polytope P#. The half-space Pi*+ is the half-space bounded by
the hyperplane Pi* polar to Pi and that contains O, and Q* is the polar hyper-
plane of Q. Point Q lies inside P if and only if Q* avoids the dual polytope P#
(see section 7.1.3). The equation of Q* is
Q · X = 1,
and to locate point Q inside or outside P reduces to solving the following two
problems:
10.2 Randomized linear programming

The algorithm
This algorithm is an incremental algorithm that adds the constraints one by one
while maintaining the solution to the current linear programming problem. For
the randomization, we assume that the constraints are inserted in random order.
To make the description simpler, assume for now that all linear programming
problems at any incremental step are feasible, bounded, and have a unique solu-
tion. These assumptions will be relaxed afterwards.
Here is the algorithm: In an initial step, compute the optimal vertex of the
polytope Pd, given as the intersection of the half-spaces that correspond to the
first d constraints:
Pd = H1+ ∩ H2+ ∩ ... ∩ Hd+.
Subsequently, the constraints Hi+ are added one by one. The algorithm com-
putes the optimal vertex Xi of the polytope Pi = H1+ ∩ ... ∩ Hi+ knowing the optimal
vertex Xi-1 of the polytope Pi-1. It proceeds as follows. If vertex Xi-1 belongs
to the half-space Hi+, then Xi = Xi-1, and nothing else is done for this step.
Otherwise, vertex Xi necessarily belongs to hyperplane Hi, so we know one of
the hyperplanes incident to the optimal vertex. To find the other hyperplanes,
the algorithm recursively solves a (d - 1)-dimensional linear programming prob-
lem with d - 1 variables and i - 1 constraints. The optimizing function for this
The analysis of the algorithm is also particularly simple. The order in which the
constraints are added is assumed to be random, so we evaluate the performances
of the algorithm on the average over all possible permutations. We prove that the
expected running time of the algorithm is O(d⁴ d! n) when the linear programming
problem has d variables and n constraints. More precisely, let t(d, i) be the
expected time to insert the i-th constraint into a linear programming problem
with d variables and i constraints. We prove by induction on d and i that t(d, i) is
O(d⁴ d!). Indeed, if d equals 1 each insertion is processed in constant time. When
d > 1, inserting the i-th constraint reduces to testing whether Xi-1 belongs to the half-
space Hi+, except when Xi differs from Xi-1, in which case a linear programming
problem with d - 1 variables and i - 1 constraints has to be solved. The latter
case occurs only when the hyperplane bounding the i-th constraint is one of the d
constraint hyperplanes that intersect at Xi. Knowing the first i constraints, this happens
Note finally that the hyperplanes being in general position is not a require-
ment for the algorithm. This assumption is only needed in the analysis of the
algorithm. In fact, a perturbation argument clearly shows that when more than
d hyperplanes intersect in one point, the probability that the optimal vertex Xi
of sub-problem Pi is no longer the optimal vertex of sub-problem Pi+1 is only
smaller, so that the average running time only decreases.
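To illustrate the incremental scheme in the simplest non-trivial case, here is a sketch of the algorithm in dimension d = 2, written in Python (our choice of language; the names, the tolerances, and the bounding box standing in for the 2d extra constraints are ours). Each constraint is a pair (a, b) encoding a · x ≤ b; when the current optimum violates the new constraint, a one-dimensional problem is solved on the new constraint's line over the constraints already inserted, exactly as described above.

import random

def solve_lp_2d(constraints, v, big=1e9):
    # Minimize v . x subject to a . x <= b for every (a, b) in constraints.
    cons = list(constraints)
    random.shuffle(cons)                      # random insertion order
    # Optimal vertex of the bounding box [-big, big]^2 for the direction v.
    x = [-big if v[0] > 0 else big, -big if v[1] > 0 else big]
    added = []
    for a, b in cons:
        if a[0] * x[0] + a[1] * x[1] <= b + 1e-9:
            added.append((a, b))              # X_i = X_{i-1}: nothing to do
            continue
        x = _solve_on_line(added, a, b, v, big)   # the optimum lies on a . x = b
        added.append((a, b))
    return x

def _solve_on_line(constraints, a, b, v, big):
    # One-dimensional problem: minimize v . x on the line a . x = b.
    u = (-a[1], a[0])                         # direction of the line
    n2 = a[0] * a[0] + a[1] * a[1]
    p = (a[0] * b / n2, a[1] * b / n2)        # one point of the line
    lo, hi = -big, big                        # bounding-box surrogate
    for c, d in constraints:
        cu = c[0] * u[0] + c[1] * u[1]
        cp = c[0] * p[0] + c[1] * p[1]
        if abs(cu) < 1e-12:                   # constraint parallel to the line
            if cp > d + 1e-9:
                raise ValueError("unfeasible linear programming problem")
            continue
        t = (d - cp) / cu
        if cu > 0:
            hi = min(hi, t)
        else:
            lo = max(lo, t)
    if lo > hi:
        raise ValueError("unfeasible linear programming problem")
    t = lo if v[0] * u[0] + v[1] * u[1] > 0 else hi
    return [p[0] + t * u[0], p[1] + t * u[1]]

For instance, solve_lp_2d([((-1, 0), 0), ((0, -1), 0), ((1, 1), 1)], (1, 1)) returns the vertex (0, 0) of the triangle x ≥ 0, y ≥ 0, x + y ≤ 1, the unique minimizer of x + y, regardless of the random insertion order.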
The following theorem summarizes the results of this section.
of the convex hull. The algorithm needs, in a first phase, to identify the extreme
points among all the points in the set, and for each extreme point, to determine
the first facet that contains this point in the shelling order. This is where linear
programming helps.
Shelling of a polytope
Lemma 10.3.1 The sequence (F1 , F2 ,..., Fm) of facets of a polytope P ordered
by an acceptable oriented line is a shelling of the polytope P.
Proof. For each value t ∈ ]t1, tm[ of the parameter, the point V(t) on L is exterior
to the polytope P. We may color the facets of P with respect to V(t) as in sec-
tion 8.3.² If t is negative and belongs to ]ti, ti+1[, then the facets {F1, F2, ..., Fi}
¹A set of facets of a polytope P is said to be connected if it determines a connected subgraph
of the adjacency graph of P.
²It is helpful to recall the terminology of section 8.3 here. Let P be a polytope, F a facet
of this polytope, H the hyperplane that supports P along F, H+ the half-space bounded by H
that contains the interior of P, and H- the other half-space bounded by H. Facet F of P is
blue with respect to a point V if V belongs to H+, and red if V belongs to H-.
whose corresponding parameters are smaller than t are red with respect to V(t),
whereas the facets whose corresponding parameters are greater than t are blue
with respect to V(t). If t is positive and belongs to ]ti, ti+1[, then the facets
{F1, F2, ..., Fi} whose corresponding parameter is smaller than t are blue with
respect to V(t), whereas the facets whose corresponding parameter is greater
than t are red with respect to V(t). Therefore, the connectedness of the sets of
facets {F1, F2, ..., Fi} for every i = 1, ..., m is a consequence of lemma 8.3.3. □
The algorithm described here builds the convex hull conv(S) of a set S of n points
in the space Ed. The underlying principle of this algorithm is to enumerate the
facets of P in turn, in the order given by the shelling of conv(S) induced by an
acceptable oriented line.
The set S = {P1, P2, ..., Pn} of points is supposed to be in general position.
Each facet of conv(S) is thus a (d - 1)-simplex and is entirely characterized by
the set of its d vertices. The algorithm not only builds the facets of conv(S) but
also their adjacency graph. From this graph, it is easy to reconstruct the entire
incidence graph of the polytope conv(S) (see exercise 8.2).
Before explaining the details of the algorithm, we must clarify how we discover
the facets of conv(S) at all, and moreover how this can be done in the order of the
shelling induced by a line L. Suppose therefore that there is a line L acceptable
for conv(S). As before, we agree that L is given a parametric representation:
Let (F1, F2, ..., Fm) be the sequence of facets of conv(S) in the shelling of conv(S)
induced by L. We also denote by ti the parameter of the intersection point
Hi n L of L with the hyperplane Hi that contains facet Fi. At a given stage of
the algorithm, the first i facets (F 1 , F2 , .. ., Fi) have been discovered, together
with their adjacency relationships. We call i-horizon (or horizon for short when
i is clearly understood) the boundary of the union F1 ∪ F2 ∪ ... ∪ Fi of these facets. The
horizon is made of the (d - 2)-faces of conv(S) that are incident to only one of
the facets in (F1 , F 2 , .. ., Fi) together with the faces of all dimensions contained
in these (d - 2)-faces.
The algorithm uses the following lemma.
Lemma 10.3.2 Suppose that the first i facets (F1 , F2 ,..., Fi) of the shelling have
been discovered, together with their adjacency relationships.
* The horizon is made of the faces of conv(S) that are purple³ with respect
to V(t) for any t ∈ ]ti, ti+1[.
* The horizon is isomorphic to the set of faces of a (d - 1)-polytope with
respect to the incidence relationships among the faces of conv(S).
* For any (d - 2)-face G of the horizon and any point V(t) on L with param-
eter t ∈ [ti, ti+1], the affine hull of G ∪ {V(t)} is a hyperplane that supports
conv(S) along G.
First case. Facet Fi+1 is incident to several (d - 2)-faces on the horizon. Let
G1 and G2 be any two (d - 2)-faces on the horizon that are incident to
Fi+1. Since Fi+1 is a (d - 1)-simplex, G1 and G2 must be incident to a
common (d - 3)-face K = G1 ∩ G2. Both faces G1 and G2 therefore have
d - 1 vertices, of which d - 2 belong to K. Together, G1 and G2 have d + 1
distinct vertices. Facet Fi+1 is the convex hull of G1 ∪ G2.
Second case. The intersection between facet Fi+1 and the union F1 ∪ ... ∪ Fi of the
previous facets consists of a single (d - 2)-face G on the horizon. Then facet
Fi+1 is necessarily the convex hull conv(G ∪ {P}) of G and a vertex P
of conv(S) that belongs neither to G nor to any of the already discovered
facets Fj, j ≤ i. This implies that Fi+1 is the first facet in the shelling that
admits P as a vertex.
To summarize, either Fi+1 is the convex hull of two (d - 2)-faces on the horizon
that share a common (d - 3)-face, or it is the first facet in the shelling that
contains some vertex P of S.
Suppose for now that, in an initial phase, the algorithm has determined for
every point P in S whether it is a vertex of the convex hull conv(S) and, if so,
which facet Fp is the first facet in the shelling that contains P. Each vertex
of conv(S) has a pointer to a record that contains this (d - 1)-simplex Fp, the
corresponding hyperplane Hp (the affine hull of Fp) and the corresponding value
tp of the parameter of the intersection point Hp ∩ L. Let ℱ'i be the set of simplices
Fp whose parameter tp is greater than ti.
Since the horizon is isomorphic to a (d - 1)-polytope (lemma 10.3.2), each
(d - 3)-face on the horizon is incident to two (d - 2)-faces of the horizon (theo-
rem 7.1.7). For each (d - 3)-face K on the horizon we store a pointer to a record
that contains the simplex FK which is the convex hull of the (d - 2)-faces on the
horizon that are incident to K, and also store the corresponding hyperplane HK
(the affine hull of FK) and the corresponding value tK of the parameter of the
intersection point HK ∩ L. Let ℱ''i be the set of simplices FK whose parameter
tK is greater than ti.
Lemma 10.3.3 The next facet Fi+1 in the shelling is the simplex F* of ℱi =
ℱ'i ∪ ℱ''i that has the smallest parameter t*.
Proof. The previous comments show that the facet Fi+1 is in ℱi. Also, the
affine hull Hi+1 of Fi+1 is a hyperplane that supports conv(S) and, among all
the simplices of ℱi whose affine hull supports conv(S), Fi+1 is defined as the one
with the smallest parameter. Thus it suffices to show that the affine hull H* of
the simplex F* in ℱi whose parameter is minimal is a hyperplane that supports
conv(S). Now if F* is a simplex of ℱ'i stored in a record that is pointed to
by a vertex of conv(S), then H* is indeed a hyperplane that supports conv(S),
by its definition. Thus suppose that F* is a simplex of ℱ''i pointed to by some
(d - 3)-face on the horizon, and let K be such a (d - 3)-face. Let G1 and G2
be the two (d - 2)-faces of the horizon incident to K. From lemma 10.3.2, the
hyperplane aff(G1 ∪ {V(t)}) supports conv(S), for any t ∈ [ti, ti+1]. In particular,
since ti ≤ t* ≤ ti+1, H* = aff(G1 ∪ {V(t*)}) supports conv(S). □
To build the facets of the polytope conv(S) in the order of its shelling induced
by a line L, the preceding lemma suggests that the set ℱi of potential facets be
maintained and ordered according to their parameters.
But before this, we must explain how to determine, for each point P in S,
whether it is a vertex of the convex hull and, if so, how to find the first facet Fp
in the shelling that contains P. This is where linear programming comes in.
H = {X ∈ Ed : H* · X = 1}.
H* · P = 1,    (10.1)
Finding the hyperplane that supports conv(S), contains P, and minimizes the pa-
rameter t of its intersection point with L reduces to finding the vector H* that
satisfies equation 10.1 and inequality 10.2, and that also minimizes the linear
functional t(H*). By changing the coordinate system, segment OP can be made
to lie on the xd-axis. Equation 10.1 determines the d-th component of vector
H* and the system given by inequalities 10.2 appears as a set of n - 1 linear
constraints over the remaining d - 1 components of H*. This system is thus a
linear programming problem with n - 1 constraints and d - 1 variables. If the
problem is unfeasible, then there is no supporting hyperplane that passes through
P, and this means that P lies in the interior of the convex hull conv(S). If not, the general posi-
tion assumption means that the solution of the linear programming problem is
uniquely defined when line L is acceptable. This unique solution is therefore a
vertex of the feasibility domain that belongs to d - 1 hyperplanes corresponding
to d - 1 constraints in the system (10.2). Let Sp be the subset of d - 1 points in
S \ {P} that correspond to these constraints. The first facet Fp in the shelling
that contains P is then Fp = conv(Sp ∪ {P}). □
We can now explain the algorithm in its entirety.
The algorithm
First of all, we must choose an origin 0 in the interior of conv(S) and an oriented
line L that contains 0. We assume that L is acceptable for conv(S). If at any
stage of the algorithm it appears that L is not acceptable, the vector U directing
L can always be perturbed a little to make L acceptable without modifying the
result of the previous computations.
The algorithm builds the facets of the convex hull conv(S) in the order of the
shelling of the polytope conv(S) induced by L. It also builds the adjacency graph
of the facets. Each facet is described by the list of its vertices. In addition to the
adjacency graph of the current set of facets, the algorithm maintains the following
three data structures.
* A priority queue Q maintains the set ℱi of simplices that, when the first i
facets (F1, ..., Fi) of the shelling have been computed, are candidates for
the next facet Fi+1. To each simplex of ℱi corresponds a value of the
parameter that is the parameter of the intersection point of the affine hull
of this simplex with line L. The simplices in ℱi are ordered in the priority
queue by increasing values of their parameters. The priority queue stores
the d vertices of each simplex, and a pointer. If the simplex is the first
face Fp incident to a vertex P of conv(S), the pointer gives the entry in
the dictionary D corresponding to the (d - 1)-tuple Sp. If the simplex is
associated with a (d - 3)-face K on the horizon, the pointer gives the edge
of the horizon graph that corresponds to K.
The convex hull conv(S1) is the first facet F1 in the shelling. The horizon graph
initially contains the complete graph on d nodes corresponding to all the (d - 2)-
faces of the (d - 1)-simplex F1. Each subset of S1 of size d - 1 is in the dictionary
D and corresponds to a node in the horizon graph. The item in the dictionary D
that corresponds to this subset is located and its corresponding pointer updated.
All the simplices of parameter t1 are retrieved from the priority queue.
Current phase. As long as the priority queue Q is not empty, the algorithm
extracts from Q the set of candidates with the smallest parameter t*, uses it to
determine the next facet in the shelling, and updates the adjacency graph of the
facets and the data structures ℱi, D and Q.
If t* is the parameter of a simplex Fp that corresponds to a point P in S, the
subset Sp of vertices of Fp minus P itself is located in the dictionary D. The
pointer associated with this item allows the retrieval of the node in the horizon
graph that corresponds to Sp. The face Fp may then be added to the adjacency
graph. In the horizon graph, the node that corresponds to Sp is replaced by
a complete graph with d - 1 nodes, each of which corresponds to a subset of
Sp U {P} with d - 1 points other than Sp itself.
In the opposite case, t* is the parameter of one or more (d - 3)-faces on the
horizon. Let 𝒦 be the set of (d - 3)-faces K for which tK = t*, and let 𝒢 be the
set of (d - 2)-faces on the horizon that are incident to the faces of 𝒦. The facet
that must be added to the shelling is the (d - 1)-simplex FG, the convex hull of
the vertices of any two faces in 𝒢. This facet FG is adjacent to all the facets that
have already been built and that are incident to the (d - 2)-faces of 𝒢. Let g be
the number of faces in 𝒢. The pair (𝒢, 𝒦) is a complete subgraph with g nodes
in the horizon graph. This subgraph is replaced by the complete subgraph with
d - g nodes that correspond to the (d - 2)-faces of FG that do not belong to 𝒢.
In either case, all the simplices of parameter t* are extracted from the prior-
ity queue. Each new edge in the horizon graph (resp. each edge that was removed
from the horizon graph) corresponds to a (d - 3)-face K', and the convex hull FK'
of the two (d - 2)-faces on the horizon that are incident to K' is inserted into
Q (resp. removed from Q) if its parameter t' is greater than t*. Also, for each
new node in the horizon graph, the algorithm checks whether its set of vertices
is stored in the dictionary D and, if so, updates the corresponding pointer.
When the priority queue Q is empty, the algorithm has discovered all the (d- 1)-
facets of the convex hull and their adjacency graph. Using this, it may build the
entire incidence graph of the polytope conv(S) (see exercise 8.2).
10.4 Exercises
Exercise 10.1 (Unbounded linear programming problems) Consider a (possibly
unbounded) linear programming problem, with the constraints expressed as
X · Ai ≤ ai,    i = 1, ..., n,
where, for each i = 1, ..., n, Ai is a vector in Ed and ai a constant. By scaling if
necessary, one may assume that all the components of Ai, i = 1, ..., n, are integers. Let
a = max_{1≤i≤n} ai  and  A = max_{1≤i≤n, 1≤j≤d} Aij,
if Aij denotes the j-th component of the vector Ai. Prove that, if a solution X to the
linear programming problem exists, any of its components Xj, j = 1, ..., d, satisfies
Xj ≤ d^{d/2} A^{d-1} a.
Hint: The point X is the solution of a d x d system of linear equations whose coefficients
are bounded by A on one side and by a for the constant side. The determinant D of this
system is a non-zero integer, and thus IDI is at least 1. Cramer's rules imply that X,
is the quotient by D of a d x d determinant that has the coefficients of d - 1 columns
bounded by A and the coefficients of one column bounded by a.
Exercise 10.2 (Separability) Let Si and S2 be two sets of points in Ed. Show that
there is an algorithm that decides in linear time whether the sets can be separated by
a hyperplane, that is whether there exists a hyperplane H such that S1 is contained in one of the
half-spaces bounded by H and S2 in the other.
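The decision itself can also be written directly as a linear program in the d + 1 unknowns (w, c) of a hyperplane w · x = c. The sketch below (assuming numpy and scipy are available, which the book of course does not require) tests strict separation, which coincides with the question whenever the two convex hulls are disjoint; it is not the linear-time algorithm asked for, but it makes the reduction to linear programming explicit.

import numpy as np
from scipy.optimize import linprog

def separable(S1, S2):
    # Feasibility of  w . p <= c - 1 for p in S1  and  w . q >= c + 1 for q in S2.
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    d = S1.shape[1]
    A_ub = np.vstack([np.hstack([S1, -np.ones((len(S1), 1))]),    # w.p - c <= -1
                      np.hstack([-S2, np.ones((len(S2), 1))])])   # -w.q + c <= -1
    b_ub = -np.ones(len(S1) + len(S2))
    res = linprog(np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success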
Exercise 10.3 (Ray shooting) Let S be a set of points in general position in Ed.
Choose an origin O inside the convex hull conv(S), and a vector V in Ed. Show how to
compute in linear time the facet of conv(S) that is intersected by the ray originating at
O in the direction V.
Exercise 10.4 (Intersection of half-spaces) Let Q = H1+ ∩ ... ∩ Hm+ be an intersection of
m half-spaces in Ed. Determine whether Q is empty and, if not, find a point O inside
the intersection Q in linear time.
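One convenient way to produce such a point when Q is full dimensional is to compute a Chebyshev center: maximize the radius r of a ball centered at X and contained in Q, which is again a linear program in the unknowns (X, r). The sketch below uses scipy's linprog (an assumption on our part, not something the book relies on) and therefore does not achieve the linear time bound of the exercise; it only illustrates the formulation.

import numpy as np
from scipy.optimize import linprog

def point_inside(A, b):
    # Q = {x : A x <= b}.  Maximize r subject to  a_i . x + r * ||a_i|| <= b_i.
    A, b = np.asarray(A, float), np.asarray(b, float)
    d = A.shape[1]
    norms = np.linalg.norm(A, axis=1)
    res = linprog(np.r_[np.zeros(d), -1.0],                 # maximize r
                  A_ub=np.column_stack([A, norms]), b_ub=b,
                  # cap r so the program stays bounded even when Q is unbounded
                  bounds=[(None, None)] * d + [(0, 1e9)])
    return res.x[:d] if res.success else None               # None: Q is empty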
Exercise 10.5 (Minimum area annulus) An annulus is the portion of the plane con-
tained between two concentric circles. Let n points in the plane be given. Find the
annulus of minimal area that contains all these points.
Hint: This can be shown to be a linear programming problem if we use the space of
spheres that is introduced in chapter 17. More directly, let {Pi = (ui, vi) : 1 ≤ i ≤ n} be
the set of n points in the plane. The problem can be expressed as deciding whether there
is a center (x, y) and two radii r1 and r2 such that r2² - r1² is minimal subject to the 2n
constraints
r1² ≤ (x - ui)² + (y - vi)² ≤ r2².
This optimization problem can be cast into a linear programming problem if, instead of
the variables x, y, r1, and r2, we express the constraints in terms of the variables x, y,
x² + y² - r1², and x² + y² - r2².
Hint: The first problem to be solved is the restriction LP(H ∩ ℋ, V) to H of LP(ℋ, V).
If this problem is unbounded, then LP(ℋ, V) itself is either unbounded or unfeasible
(the latter happens when some constraint in ℋ is parallel to H). If there is an optimal
solution to this linear programming problem, however, consider two hyperplanes H' and
H'' parallel to H on either side of H. Solve the two lower-dimensional linear program-
ming problems LP(H' ∩ ℋopt, V) in H' and LP(H'' ∩ ℋopt, V) in H'', where ℋopt is
the minimal set of constraints that defines the solution to LP(H ∩ ℋ, V), and compare
the optimal values of these sub-problems. Finally, if LP(H ∩ ℋ, V) is unfeasible, then
consider the three following subsets of ℋ in turn: the subset ℋ0 of constraints whose
hyperplanes are parallel to V, the subset ℋ+ of constraints whose half-spaces are un-
bounded in the direction of -V, and the subset ℋ- of constraints whose half-spaces
are unbounded in the direction of V. The amount of unfeasibility of LP(H ∩ ℋ, V) can
be defined as the difference between the optimal value of LP(H ∩ ℋ+, V) and that of
LP(H ∩ ℋ-, -V). It suffices to compare the amount of unfeasibility of LP(H' ∩ ℋ, V)
with that of LP(H'' ∩ ℋ, V).
Hint: (For the second question.) Pair off the lines in ℋ+, and do the same to the lines
in ℋ-. Then project onto H0 the intersection of the two lines in each pair, as well as
the hyperplanes in ℋ0, and choose for H the line parallel to V that passes through the
median of the projections.
Exercise 10.9 (Minimum enclosing circle) Given n points in E2 , find the circle
with the smallest radius that contains all these points.
Hint: In a first step, show that the restricted problem where the center of the circle lies
on a given line can be solved in time O(n). Show also that it can be decided in time
O(n) on which side of this line the center of the (unrestricted) minimum enclosing circle
lies. Then apply to this problem the prune-and-search method described in the previous
exercise.
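The standard randomized alternative, due to Welzl, treats the problem exactly as an LP-type problem (compare exercise 10.11 below): insert the points in random order and, whenever a point falls outside the current circle, recompute the circle with that point constrained to lie on its boundary. The sketch below (iterative form, our own names, points assumed in general position so that circumcenters are defined) runs in expected linear time.

import random
from math import hypot

def min_enclosing_circle(points):
    pts = [tuple(map(float, p)) for p in points]
    random.shuffle(pts)
    c, r = None, -1.0
    for i, p in enumerate(pts):
        if c is None or hypot(p[0] - c[0], p[1] - c[1]) > r + 1e-9:
            c, r = _with_one(pts[:i], p)         # p must lie on the boundary
    return c, r

def _with_one(pts, q):
    c, r = q, 0.0
    for i, p in enumerate(pts):
        if hypot(p[0] - c[0], p[1] - c[1]) > r + 1e-9:
            c, r = _with_two(pts[:i], p, q)      # p and q on the boundary
    return c, r

def _with_two(pts, p, q):
    c = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    r = hypot(p[0] - c[0], p[1] - c[1])
    for s in pts:
        if hypot(s[0] - c[0], s[1] - c[1]) > r + 1e-9:
            c = _circumcenter(p, q, s)           # three boundary points
            r = hypot(p[0] - c[0], p[1] - c[1])
    return c, r

def _circumcenter(a, b, c):
    ax, ay, bx, by, cx, cy = *a, *b, *c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)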
Exercise 10.10 (Maximum inscribed sphere) Let Q be a polytope given as the in-
tersection of m half-spaces in Ed. Determine the sphere inscribed in Q that has the
greatest possible radius.
Exercise 10.11 (LP-type problems) Let ℋ be a finite set (whose elements are called
the constraints) and f a function on 2^ℋ that takes its values in a totally ordered set. The
optimization problem is to find a minimal subset ℬ of ℋ such that f(ℬ) = f(ℋ). Such
a problem is an LP-type problem if the following two conditions are true:
Monotonicity: For any two subsets ℱ and 𝒢 of ℋ such that ℱ ⊆ 𝒢 ⊆ ℋ, we have
f(ℱ) ≤ f(𝒢).
Locality: For any two subsets ℱ and 𝒢 of ℋ such that ℱ ⊆ 𝒢 ⊆ ℋ and f(ℱ) = f(𝒢),
and for any H ∈ ℋ, we have f(𝒢) < f(𝒢 ∪ {H}) if and only if f(ℱ) < f(ℱ ∪ {H}).
Show that any linear programming problem is an LP-type problem. Show that the
minimum enclosing circle problem (see exercise 10.9) is also an LP-type problem, and so
is its generalization to minimum enclosing spheres in any dimension.
Their approach leads to an algorithm whose complexity is O(n^{2-2/(⌊d/2⌋+1)+ε} + f log n) for n
points in dimension d if the convex hull has f facets. The notation ε stands for a constant
that can be made as small as wanted (albeit at the cost of increasing the constant in the
O() notation). In dimension 4 or 5, this complexity is O(n^{4/3+ε} + f log n).
The convex hull algorithm in dimension 2 (described in exercise 10.6), whose complex-
ity O(n log h) depends on the number h of vertices of the computed convex hull, is due
to Kirkpatrick and Seidel [137]. In [99], Edelsbrunner and Shi generalized this algorithm
to 3 dimensions, obtaining an algorithm of complexity O(n log² h).
Lastly, the LP-type problems defined in exercise 10.11 generalize the formulation in
terms of linear programming. Many geometric problems can be expressed as LP-type
problems, such as computing the smallest enclosing ellipsoid, the largest ellipsoid in-
scribed in a polytope, the smallest circle that intersects n convex objects, etc. The
randomized algorithm of Clarkson [68] or those of Sharir and Welzl [208] and Matousek,
Sharir, and Welzl [157] actually solve LP-type problems. Chazelle and Matousek [57]
even explicitly discussed under which conditions a deterministic algorithm to solve an
LP-type problem can be obtained by derandomization.
Part III
Triangulations
Chapter 11
Complexes and triangulations
11.1 Definitions
11.1.1 Simplices, complexes
In this section, we work in the d-dimensional space Ed. Recall that a simplex
of dimension k for k < d, also called a k-simplex, is a k-polytope with k + 1
vertices, or equivalently the convex hull of k + 1 affinely independent points. Let
A = {A0, ..., Ak} be a set of k + 1 affinely independent points and S be the k-
simplex defined by A. Any subset of l + 1 ≤ k + 1 points in A defines an l-simplex
which is a face of S. Simplices of dimension 0,1, 2, and 3 are respectively called
points, segments, triangles, and tetrahedra.
A complex C is a finite set of simplices that satisfy the following two properties:
For instance, the set of faces of a d-polytope (except for the empty face) is
a d-dimensional cell complex, and the set of proper faces of a d-polytope is a
(d - 1)-dimensional cell complex. In the remainder of this chapter, we are mostly
concerned with simplicial complexes, which are at the core of the concept of
triangulation. Nevertheless, the definitions and properties that are stated for
simplicial complexes generalize easily to cell complexes.
¹Two subsets of a topological space are homeomorphic if there is a continuous bijection from
one onto the other whose inverse is also continuous.
11.1.3 Triangulations
The notion of a shell also helps to prove the following three lemmas on trian-
gulations.
Lemma 11.1.3 For any pair (T, T') of d-simplices of a d-triangulation, there is
a sequence T1, T2, ..., Tn of d-simplices such that T1 = T, Tn = T', and Ti is
adjacent to Ti+1 for all i = 1, ..., n - 1.
Proof. The lemma is trivial for 1-triangulations and can be proved by induction
on the dimension of the triangulation. Let A = A1, A2, ..., An = A' be a path
in the 1-skeleton of T that joins a vertex A of T to a vertex A' of T'. Such a
path exists because the complex is connected. We show that for any vertex Ai,
i = 2, ..., n - 1, on this path, and any d-simplices Ti and Ti+1 in T that contain
respectively the edges Ai-1Ai and AiAi+1, there exists a sequence of adjacent
d-simplices in T that joins Ti to Ti+1. Let Fi and Fi+1 be the (d - 1)-faces of Ti and
Ti+1 that do not contain Ai. Then Fi and Fi+1 are (d - 1)-faces in the shell of Ai in
T, and the induction hypothesis proves that there is a sequence Fi = G1, G2, ..., Gm = Fi+1
of adjacent (d - 1)-simplices in the shell of Ai in T that joins Fi to Fi+1. Therefore,
the sequence Ti = conv(G1 ∪ Ai), conv(G2 ∪ Ai), ..., conv(Gm ∪ Ai) = Ti+1 is a
sequence of adjacent d-simplices of T that joins Ti to Ti+1. □
Proof. We first show that every (d - 2)-face of bd(T) belongs to two (d - 1)-
simplices of bd(T). Let G be a (d - 2)-face of bd(T). The shell of G in T is a
simple polygonal line whose boundary is formed by two points U and V. Since
U belongs to a single edge of the shell of G in T, the (d - 1)-simplex conv(G, U)
belongs to only one d-simplex in T and is thus a (d - 1)-simplex in bd(T). The
same argument applies to the (d - 1)-simplex conv(G, V). □
The 1-triangulations are precisely the simple polygonal lines and the polygons.
A triangulation T (of dimension 1 or 2) is said to be planar if its domain dom(T)
can be embedded in a space of dimension 2. A planar polygon P is a closed,
simple (that is, not self-intersecting), planar curve, and Jordan's theorem (see
exercise 11.1) states that this curve splits E2 \ P into two connected regions,
exactly one of them being bounded. The interior of P is the bounded region and
the exterior of P is the unbounded region.
More generally, we call a polygonal region any connected region in the plane
whose boundary is one polygon or the union of a finite number of disjoint poly-
gons. Depending on the context, we consider a polygonal region to include its
boundary or not. The edges and vertices of a polygonal region are the edges and
vertices of the polygons that bound the region.
Our study is based on the existence of a linear relation, called Euler's relation,
between the faces of different dimensions in a complex. This relation admits an
elementary proof in the case of a 2-triangulation or for a 2-complex that can be
embedded in a space of dimension 2. Indeed, the 1-skeleton of the complex is then
a planar graph (see exercise 11.4). In higher dimensions, this proof does not work.
In fact, Euler's relation is one of the most famous results of homology, a theory
whose application goes well beyond the scope of this book. We limit ourselves here
to proving Euler's relation for topological spheres and balls, basing the proof on a
single result of the homology theory. (See the proof of theorem 11.2.1 below for a
statement of this result.) In the following subsection we show how to derive from
this result all the Euler's relations for 2-complexes, polyhedra, and polyhedral
regions in E3.
Let C be a d-complex and nk(C) the number of its k-faces, for k = 0, . .. , d. The
Euler characteristic e(C) of the d-complex C is defined as the alternating sum
e(C) = Σ_{k=0}^{d} (-1)^k nk(C).
Σ_{k=0}^{d} (-1)^k nk(C) = 1 + (-1)^d.    (11.2)
Proof. The basic result from homology theory mentioned above is that two
complexes C and C' that have homeomorphic domains also have the same Euler
characteristic.
The set of faces of a d-polytope is a topological d-ball. By theorem 7.2.1, its
Euler characteristic is 1. By definition, any topological d-ball is
homeomorphic to the domain of a polytope. As a result, the Euler characteristic
of any topological d-ball is 1 and it satisfies equation 11.1.
Similarly, the set of proper faces of a d-polytope forms a topological (d - 1)-
sphere. By theorem 7.2.1, its Euler characteristic is 1 + (-1)^{d-1}. As a result,
the Euler characteristic of any topological d-sphere is 1 + (-1)^d, and it satisfies
equation 11.2. □
of facets and edges can be bounded by the number of vertices and the genus of
the polyhedron.
For any 2-complex C, we denote by n(C) the number of vertices of C, by m(C)
the number of its edges, and by f (C) the number of its 2-faces. When the context
is clear, we drop the reference to C and simply write n, m, and f for n(C), m(C),
and f (C).
Proof. Let us first note that dom(C) is a polygonal region with k holes because
C is as usual assumed to be pure, connected, and without singularities. We
first prove the theorem when the complex C is a triangulation T. If k = 0,
the triangulation T is a topological ball and the previous equation is simply
Euler's relation for topological 2-balls that was proved before. For k ≠ 0, we
invoke a result that we prove independently in the next chapter, showing that
any polygon can be triangulated. More precisely, for any polygon P there exists a
2-triangulation whose vertices are exactly the vertices of P and whose boundary
is the same as that of P. Let Pi, i = 1, ..., k, be the polygons forming the
boundaries of the holes of dom(T), and let Ti be a triangulation of Pi. Each
triangulation Ti is a topological ball, so it has an Euler characteristic of
The faces common to T and to T1 ∪ ... ∪ Tk are also faces of the Pi, so we can compute
the Euler characteristic of T' as
e(T') = e(T) + Σ_{i=1}^{k} e(Ti) - Σ_{i=1}^{k} e(Pi).    (11.6)
e(T) = 1 - k.
Consider now a cell complex C of dimension 2. The 2-faces of such a complex are
polygonal regions, and can also be triangulated, using the same result as above.
f ≤ 2(n - 1 + k) - ne,
m ≤ 3(n - 1 + k) - ne.
Proof. Note that ne is also the number of external edges of C. Each external
edge of C is incident to a unique 2-face of C, while each internal edge is shared
between exactly two 2-faces. Also, each 2-face of C is incident to at least 3 edges.
Counting the number of incidences between an edge and a 2-face, we obtain
2m - ne ≥ 3f,
and equality holds if and only if C is a triangulation. It now suffices to use Euler's
relation 11.3 to prove the corollary. □
n - m + f = 2 - 2h.    (11.7)
Ch-1 ∩ C0 = B ∪ B',
where B and B' are two disjoint topological 2-balls (which we call disks) such that
(see figure 11.4). The polyhedron Ch can be obtained by removing the internal
faces of the two topological disks B and B' from Ch-1 ∪ C0. Let P and P' be
the two polygons that form the boundaries of B and B' respectively. The Euler
characteristic e(Ch) of Ch is given by
2m ≥ 3f
Remark. From corollary 11.2.5, we may also infer that the expected number
of edges incident to a random vertex of a polyhedron is at most 6 - 12(1 - h)/n.
Therefore, any polyhedron of genus h always has a vertex of degree at most 5
when h = 0, and of degree at most 6 if h = 1.
Proof. Again, we give a proof by induction on the genus h of the polyhedron that
bounds the complex T. If h = 0, then T is a topological 3-ball and the theorem
is a consequence of theorem 11.2.1. If h ≠ 0, then the Euler characteristic of T
is the same as that of any 3-triangulation Th = Th-1 ∪ T0, obtained by merging
a 3-triangulation Th-1 whose boundary is a polyhedron of genus h - 1 and a
topological 3-ball T0 in such a way that
where B and B' are two 2-complexes of disjoint topological balls such that
Proof. Denote by ne, me, and fe the respective numbers of vertices, edges, and
triangles on the boundary of T, and by ni, mi, and fi the respective numbers
of internal vertices, edges, and triangles of T. Corollary 11.2.5 applied to the
boundary of T yields
fe = 2ne - 4 + 4h    (11.11)
me = 3ne - 6 + 6h.    (11.12)
t = m - n - ne + 3 - 3h,
which gives the number of tetrahedra in the triangulation as a function of the
number of its vertices and edges. The bounds on the number of tetrahedra
claimed by the theorem are then an immediate consequence of bounds on the
number of edges. On the one hand, the number of edges is trivially bounded
above by n(n - 1)/2. On the other hand, each internal vertex is incident to at
least four internal edges, each incident to at most two internal vertices, so the
number mi of internal edges is at least 2ni. Thus the total number m of edges
must satisfy
m ≥ 2n + ne - 6 + 6h.
For a boundary of genus h = 0, this yields
n - 3 ≤ t ≤ n²/2 - 3n/2 - ne + 3.
Both the upper and the lower bound agree when n = 4, and are thus optimal. Below, we
show that these bounds may also be matched for any n and at least some values
of ne. For this, we must exhibit a 3-triangulation whose number of tetrahedra is
quadratic (resp. linear) in the number of vertices. (See also exercise 11.6.)
T4 = {A1A2A3A4}
Tn = Tn-1 ∪ {AiAi+1An-1An : i = 1, ..., n - 3},
where each complex Tn is a pure 3-complex described by all the tetrahedra that
belong to it (adding all the 0-, 1-, and 2-faces of these tetrahedra). We now
show by induction on n that Tn is a triangulation. Indeed, for i = 1, ..., n - 3,
the tetrahedron AiAi+1An-1An has no points in common with the tetrahedra
of Tn-1 except for points in the triangle AiAi+1An-1 on the boundary of Tn-1.
The complex Tn is pure, by definition, and connected because its 1-skeleton is a
connected graph. Moreover, we can check easily that the shell in Tn of any vertex
Ai is a topological 2-sphere, that for i = 2, ..., n the shell of any edge A1Ai is
a simple polygonal line, that the same holds for edges AiAn, i = 1, ..., n - 1,
and finally that for any 1 < i < j - 1 < n - 1 the shell of the edge AiAj is a
simple polygon. This guarantees that Tn is indeed a triangulation. The domain
|T4| = 1    (11.14)
|Tn| = |Tn-1| + n - 3.    (11.15)
Solving this recurrence yields |Tn| = n²/2 - 5n/2 + 3 which, by corollary 11.2.7,
is the maximum number of tetrahedra for a 3-triangulation with n vertices, all
external, and whose boundary is a polyhedron of genus 0. □
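The construction is easy to reproduce mechanically. The short sketch below (purely combinatorial: vertices are represented by their indices along the moment curve, not by coordinates) lists the tetrahedra of Tn and checks the count against the solution of the recurrence.

def quadratic_triangulation(n):
    # Tetrahedra of T_n: T_4 = {1 2 3 4} and, for m = 5, ..., n,
    # the tetrahedra (i, i+1, m-1, m) for i = 1, ..., m-3.
    tets = [(1, 2, 3, 4)]
    for m in range(5, n + 1):
        tets += [(i, i + 1, m - 1, m) for i in range(1, m - 2)]
    return tets

# |T_n| = |T_{n-1}| + (n - 3) solves to n^2/2 - 5n/2 + 3:
for n in range(4, 20):
    assert len(quadratic_triangulation(n)) == (n * n - 5 * n) // 2 + 3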
The following lemma shows the existence of linear triangulations.
Lemma 11.2.9 For any pair of integers (n, ne) such that 4 ≤ ne ≤ n, there
exists a 3-triangulation with n vertices, ne of which are external, and with t
tetrahedra, where
t = n - 3 + 2(n - ne).
Proof. Examining the proof of corollary 11.2.7, we note that the lower bound
n - 3 + 2(n - ne) for the number of tetrahedra can only be achieved when the
number of edges itself also achieves its lower bound, 3ne - 6 + 2(n - ne), and this
implies that there can be at most 2(n - ne) internal edges.
We can realize these conditions easily when all the vertices are external, namely
when n = ne. Indeed, we may build a triangulation without internal edges
incrementally, starting with the tetrahedron defined by the first four vertices.
In an incremental step, the next triangulation can be obtained by adding a new
tetrahedron adjacent to a single tetrahedron in the previous triangulation through
a single facet. For this, we choose the new vertex of the triangulation so that only
one facet is red. This implies that all the vertices and edges lie on the convex
hull of the set of vertices of the triangulation.
When ne < n, we can build a triangulation with ne vertices, all external,
using the previous construction. We then add n - ne internal vertices
incrementally. In an incremental step, the new vertex A is added inside an
existing tetrahedron. Let Fi, i = 1, . . ., 4, be the four facets of this tetrahedron
T. In order to make a triangulation of the new set of vertices, we replace T by
four new tetrahedra Ti = conv(Fi, A), as shown in figure 11.5. Each new internal
vertex therefore adds three tetrahedra to the triangulation, so there are exactly
ne - 3 + 3(n - ne) = n - 3 + 2(n - ne) tetrahedra in the resulting triangulation.
similar data structures to describe and process these two kinds of objects.
The adjacency graph of a complex has a node for each cell and an edge for
each pair of adjacent cells (meaning that these cells are adjacent to a common
facet). Any internal facet is incident to exactly two cells in the complex, so the
adjacency graph may be built easily from the incidence graph. This definition
is also consistent with the one given for polytopes in section 8.1. Indeed, the
adjacency graph of a polytope, as defined in section 8.1, is exactly the adjacency
graph of the (d - 1)-complex formed by the proper faces of this polytope.
The incidence graph of a simplicial complex can be retrieved from its adjacency
graph in time linear in the number of faces (see exercise 11.3).
Duality
We may also generalize the concept of a duality from polytopes to complexes (see
subsection 7.1.3).
Let C be a d-complex. A d-complex C* is dual to C if there is a bijection
between the faces of C and those of C* which reverses inclusion relationships.
Such a bijection associates the k-faces of C with (d - k)-faces of C*, for any
k = 0, . .. , d (see figure 11.6 for an example).
Figure 11.6. A complex (solid edges) and its dual (dashed edges).
Note that in general the dual of a simplicial complex is not a simplicial complex.
A complex does not have a unique dual. Nevertheless, all the complexes dual
to a given complex C have isomorphic incidence graphs; we say that they are
combinatorially equivalent. Moreover, any complex (C*)* dual to the dual of C is
combinatorially equivalent to C itself.
The adjacency graph of a complex C is also the 1-skeleton of any dual of C. For
this reason, the adjacency graph is also called the dual graph of C.
11.4 Exercises
Exercise 11.1 (Jordan's theorem) A simple curve in the plane is the image in E2 of
the interval [0,1] under a continuous bijection f. The endpoints of the curve are f(0)
and f(1), and the curve is said to link its endpoints. If the mapping f is continuous
and bijective over ]0, 1[, and if f(0) = f(1), then the image f([O, 1]) is called a simple
closed curve. A region R in the plane is connected if any two of its points can be linked
by a simple curve entirely contained within R. Jordan's theorem states that if C is a
simple closed curve in E2 , then E 2 \ C has exactly two connected components whose
common boundary is C. This exercise presents a simple proof of Jordan's theorem when
the simple closed curve C is a polygon.
1. Let C be a polygon. Show that E2 \ C has at most two connected components. For
this, consider a disk D such that D n C consists only of a segment. If there are at least
three connected components in E2 \ C, then choose three points Q1, Q2, Q3 in distinct
components. Show that each of these points can be linked to a point of D by a curve
that does not intersect C. Then show that two of the points can be linked by a simple
curve entirely contained within E2 \ C.
2. Let Q be a point in E 2 \ C and L any ray extending from Q towards infinity. The
intersection L n C has connected components which are either points or segments. Each
such component S of L n C (of zero length or not) is counted twice if C remains on
the same side of L just before S and just after S (we say that L touches C along S),
otherwise it is counted only once (L goes through C at S). Show that the parity of this
weighted intersection count does not depend on the direction of L, and that it is the
same for all points in the same connected component of E2 \ C. By considering a line L
that intersects C, show that both parities are possible and thus that E2 \ C has exactly
two connected components, whose common boundary is C.
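The parity count of question 2 is the classical "ray casting" test for locating a point with respect to a polygon. A small sketch follows (our own conventions: the ray is horizontal, edges are taken half-open in y so that a ray through a vertex is counted consistently, and the query point is assumed not to lie on the polygon).

def inside_polygon(q, polygon):
    # polygon: list of vertices (x, y) in order; returns True if q lies in the
    # bounded connected component of the complement of the polygon.
    x, y = q
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # the edge crosses the ray's line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                          # crossing to the right of q
                inside = not inside
    return inside

For instance, inside_polygon((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)]) returns True, since the rightward ray from (0.5, 0.5) crosses the unit square's boundary exactly once.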
Exercise 11.2 (Jordan's theorem) The notion of a simple closed curve in the plane
is defined in the exercise above, which shows that it encloses a region that is a topological
2-ball, called a disk, and that the complement of this disk is connected. Prove that a
chord, meaning a simple curve that links two points on the boundary of the disk and
whose relative interior is entirely contained in the interior of the disk, separates the disk
into exactly two distinct connected components.
Hint: Since the complement of the disk is connected, one may join the two endpoints of
the chord by a simple curve that lies in the exterior of the disk. The concatenation of this
curve and of the chord is a simple closed curve, to which one may again apply Jordan's
theorem. One portion of the boundary of the disk lies in the interior of this curve, and the other
portion in its exterior. Concatenating the chord to these portions yields two simple
closed curves, to which we can again apply Jordan's theorem. The bounded regions
enclosed by these curves are exactly the two connected components of the disk.
Exercise 11.3 (Incidence graph) Show that the incidence graph of a simplicial d-
complex can be retrieved from its adjacency graph in time linear in the number of faces
of all dimensions.
Hint: Add all the (d -1)-simplices stored in the nodes of the adjacency graph, and for
each pair (F, G) of adjacent facets add a (d - 2)-face incident to F and G. Finally, for
k = d - 3,.. . , 0, add a node for each k-face of the already constructed (k + 1)-faces, and
merge nodes corresponding to identical k-faces, noticing that such nodes descend from a
common (k + 1)-face.
Exercise 11.4 (Planar maps) A graph G is said to be planar if it has a planar em-
bedding: the nodes correspond to points of E2 and the arcs to simple curves linking two
points corresponding to adjacent nodes, such that those curves intersect only at end-
points. The points and simple curves corresponding to the graph form a planar embedding
of the graph, and the induced subdivision of the plane is commonly called a planar map
𝒢. The points are called the vertices of the map, the curves are the edges of the map,
and the connected components of E2 \ 𝒢 are the 2-faces (sometimes called regions) of the
map.
1. Let n be the number of vertices, m the number of edges, and f the number of
2-faces of a planar map 𝒢 and let c be the number of connected components of the graph G.
Prove Euler's relation:
n - m + f = 1 + c.
2. Show that if the planar map has f' 2-faces whose boundary consists of only two
edges, then the number of edges is bounded by
m ≤ 3n - 3 - 3c + f'.
Hint: Proceed by induction, while analyzing how the sum n - m + f - c varies when a
new vertex or a new edge is inserted into the map.
Exercise 11.6 (Quadratic triangulations) Show that for any pair of integers (n, ne)
such that 4 ≤ ne ≤ n, there exists a triangulation with n vertices, only ne of which are
external, and with t tetrahedra, where
t = n²/2 - 3n/2 - ne + 3 - (n - ne)(ne - 4).
Hint: For n = ne, see the proof of lemma 11.2.8. For ne = 4, simply choose n - 2 points
A1, ..., An-2 with respective parameters τ1 < ... < τn-2 on the moment curve Γ, and
two points B0 and Bn-1 such that:
* For i = 2, ..., n - 3, B0 belongs to the half-space bounded by the affine hull of
triangle A1AiAi+1 that does not contain any of the points A1, ..., An-2.
* For i = 1, ..., n - 4, Bn-1 belongs to the half-space bounded by the affine hull of
triangle AiAi+1An-2 that does not contain any of the points A1, ..., An-2.
* The interior of the tetrahedron B0A1An-2Bn-1 contains the points Ai, i = 2, ...,
n - 3.
Build a triangulation of vertices A1, ..., An-2 as in the proof of lemma 11.2.8, then add
B0 and Bn-1. For 4 < ne < n, choose ne points on the moment curve, then triangulate
them as in the proof of lemma 11.2.8, and finally add the remaining n - ne points on a
smaller scaled moment curve inside one of the previously built tetrahedra.
Chapter 12
Triangulations in dimension 2
f = 2(n - 1) - ne
m = 3(n - 1) - ne.
(x, y), and then maintains a triangulation of the current set obtained by adding
the points one by one in that order.
Let A = {A1, ..., An} be a set of n points in the plane. To avoid lengthy
discussions, we assume as usual that the set of points is in general position.
Moreover, we assume that A has already been sorted by increasing lexicographic
order on (x, y), so that A1 < A2 < ... < An.
The algorithm not only maintains the triangulation Ti-1 built for the subset
Ai-1 = {A1, ..., Ai-1} of points already processed, but also the boundary of this
triangulation, meaning the boundary of the convex hull conv(Ai-1) of the set
Ai-1. The current triangulation is maintained as a data structure that stores the
incidence graph of the triangulation. The convex hull conv(Ai-1) is maintained
using a doubly linked circular list L of its vertices, with a pointer p to the vertex
in the list that was last inserted.
In the initial step, we build the triangle formed by the first three points A1,
A2, A3 and set the list L to {A1, A2, A3}, with p pointing to the node that stores
A3.
To describe the current incremental step when Ai is the point to be inserted
in the triangulation, we use the same terminology as that of section 8.3 for in-
cremental convex hulls. An edge F of conv(Ai-1) is red with respect to Ai if the
line which is the affine hull of F separates Ai from conv(Ai-1); otherwise the
edge F is blue with respect to Ai. A vertex of conv(Ai-1) is red with respect
to Ai if it is incident to two red edges, blue if it is incident to two blue edges,
and purple if it is incident to both a red and a blue edge. Let us recall that the
vertex Ai-1 is necessarily incident to at least one red edge (see phase 1 of the
algorithm described in section 8.3) and that the set of edges on the boundary
of conv(Ai-1) that are red with respect to Ai is also connected (lemma 8.3.3).
Starting at point Ai-1, the algorithm traverses the red edges of conv(Ai-1),
and for each such edge, adds the triangle conv(F, Ai) to the current triangula-
tion. In L, the sub-list of red edges is replaced by the two edges AiAm and
AiAl that connect Ai to the two purple vertices Am and Al in conv(Ai-1) (see
figure 12.1).
Notice that the complexity of the incremental algorithm for a set of n points
is only O(n) if the points are sorted along some known direction.
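A compact Python sketch of this incremental algorithm follows (the names and conventions are ours; points are coordinate tuples in general position, and the convex hull is kept as a plain counter-clockwise list rather than the doubly linked list of the text, which is enough to illustrate the red/blue/purple classification).

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def incremental_triangulation(points):
    pts = sorted(points)                       # lexicographic order on (x, y)
    hull = [pts[0], pts[1], pts[2]]
    if cross(*hull) < 0:                       # orient the first triangle CCW
        hull[1], hull[2] = hull[2], hull[1]
    triangles = [tuple(hull)]
    for p in pts[3:]:
        m = len(hull)
        # Edge (hull[j], hull[j+1]) is red iff its supporting line separates p
        # from the current hull, i.e. p lies strictly to its right.
        red = [cross(hull[j], hull[(j + 1) % m], p) < 0 for j in range(m)]
        for j in range(m):
            if red[j]:
                triangles.append((hull[j], hull[(j + 1) % m], p))
        # Rebuild the hull: drop the red vertices and insert p right after the
        # purple vertex that starts the chain of red edges.
        new_hull = []
        for j in range(m):
            if not (red[j] and red[(j - 1) % m]):      # keep blue and purple vertices
                new_hull.append(hull[j])
                if red[j] and not red[(j - 1) % m]:
                    new_hull.append(p)
        hull = new_hull
    return triangles

On the four points (0, 0), (1, 2), (2, 1), (3, 3) the function returns two triangles, in agreement with the relation f = 2(n - 1) - ne quoted above for n = ne = 4.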
Theorem 12.2.1 Let A be a set of points in the plane. Any maximal set of
segments that connect the points in A and have pairwise intersection only at
common endpoints is the set of edges of a triangulation of A, and the converse
is also true.
Proof. Let ℰ be a maximal set of segments that join the points in A and have
pairwise intersections only at common endpoints. The maximality of ℰ implies
that no segment may be added to ℰ while maintaining this property. The edges in
ℰ must include the edges on the boundary of the convex hull of A, since otherwise
adding any of them would contradict the maximality of ℰ. Since the segments
in ℰ are all inside the convex hull of A, they determine a decomposition of this
convex hull into polygonal regions. Let us show that any such region must be
Let P be a polygon in the plane. From the preceding discussion, we know that
it is possible to compute a triangulation of the set of vertices of P constrained
to include the edges of the polygon P. Hence the obtained triangulation has,
besides the edges and vertices of P, interior edges and interior triangles that
are contained in the interior of P, and occasionally exterior edges and exterior
triangles contained in the exterior of P. The faces of P and the interior faces of the
triangulation form a triangulation whose domain is exactly the interior of P, and
whose boundary is the polygon P. Such a triangulation is called a triangulation
of the polygon P. The following theorem is a straightforward consequence of
theorem 12.2.1.
Theorem 12.3.1 Any polygon P in the plane can be triangulated; in other words,
it can be described as the boundary of a 2-triangulation whose vertices are vertices
of P.
If P is a polygon with n vertices, then any triangulation T of P has exactly
n vertices which are external. We can use corollary 11.2.3 to show that such a
triangulation has exactly f = n - 2 triangles, m = 2n - 3 edges, and n - 3 internal
edges. Triangulating a polygon is therefore equivalent to finding the n - 3 internal
edges that decompose the polygonal region into n - 2 triangles.
Incidentally, we note the following property:
Lemma 12.3.2 The dual graph of a triangulation of a planar polygon is a tree.
Proof. This graph is obviously connected. Furthermore, it has no cycle. Indeed,
the existence of a cycle in the dual graph of a triangulation implies either the
existence of a hole in the polygonal region dom(T), or the presence of an internal
vertex in the triangulation. □
Knowing a simple polygonal line that joins the points of A enables us to com-
pute the convex hull conv(A) in linear time (theorem 9.4.4). So the argument
that proves a lower bound of Ω(n log n) on the complexity of computing a trian-
gulation for a set of n points does not apply to the set of vertices of a polygon.
We may legitimately suspect that computing a triangulation of a polygon is a
simpler problem than its counterpart for a set of points.
The complexity of computing a triangulation of a simple polygon remained
elusive for a long time. Classical algorithms only achieved time O(n log n) for the
general problem, while several algorithms were known to perform in linear time
on special kinds of polygons, such as convex, monotone, or star-shaped polygons,
or polygons visible from a single segment. In 1986, a deterministic algorithm
was proposed whose worst-case complexity is o(n log n), proving at least that
O(n log n) was not a tight bound. The problem was settled, at least theoretically,
in 1990 when a linear-time algorithm that computes the triangulation of any
simple polygon in the plane was given. This algorithm is too complex to be
presented in this book, or to be of any practical use. Its existence, however,
provides a proof of the following theorem:
or by decreasing abscissae:
Proof. This condition is necessary: indeed, a monotone polygon has only one
start vertex, which is the vertex with the minimum abscissa, and only one end
vertex, which has maximum abscissa. Both these vertices are convex.
Reciprocally, the following lemma shows that if a polygon has no start or end
reflex vertex, then it has only one start vertex and one end vertex, and it is
therefore monotone. □
cs = re + 1,
ce = rs + 1.
Let P be a monotone polygon in the direction of the x-axis. Then P is the con-
catenation of two monotone polygonal lines that connect the vertices of minimum
and maximum abscissae. The algorithm begins by sorting the vertices of P by
increasing abscissae, which can be done by merging the vertices of the upper and
lower monotone polygonal lines. Let Q0, Q1, ..., Qn-1 be the resulting ordered
sequence of vertices of P. In the course of the algorithm, the vertices of P are
visited in this order, one by one, and the algorithm adds to the edges of P the
internal edges of the triangulation. Each internal edge added by the algorithm
separates a triangle in the triangulation from the remaining polygon, whose num-
ber of vertices decreases by one at each step. Let Q0, Q1, ..., Qi-1 be the vertices
already visited by the algorithm before the current step. The algorithm maintains
the following invariants:
2. If t > 1, the vertices {V1, ..., Vt-1} are reflex vertices in the remaining
polygon.
of the remaining polygon, so no edge of this polygon can intersect the interior of
T, nor the edge QiV1. V0 is the vertex of minimum abscissa among the vertices
of the remaining polygon; it is therefore convex, and so T and QiV1 are interior
to the remaining polygon.
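Before the analysis, a compact sketch may help fix ideas. The fragment below follows the classical stack-based procedure for a polygon monotone with respect to the x-axis; it is a standard formulation under assumed conventions, not the book's own pseudo-code. The polygon is given by its upper chain, listed from left to right with both extreme vertices, and its lower chain without them, and no two vertices share an abscissa.

def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def triangulate_monotone(upper, lower):
    # Linear merge of the two chains by increasing abscissa.
    verts, i, j = [], 0, 0
    while i < len(upper) or j < len(lower):
        if j == len(lower) or (i < len(upper) and upper[i][0] < lower[j][0]):
            verts.append((upper[i], 'U')); i += 1
        else:
            verts.append((lower[j], 'L')); j += 1
    diagonals, stack = [], [verts[0], verts[1]]
    for k in range(2, len(verts) - 1):
        p, chain = verts[k]
        sign = 1 if chain == 'L' else -1
        if chain != stack[-1][1]:            # opposite chains: fan out to the whole stack
            popped = []
            while stack:
                popped.append(stack.pop())
            for q, _ in popped[:-1]:         # the last popped vertex is joined to p by an edge
                diagonals.append((p, q))
            stack = [popped[0], verts[k]]
        else:                                # same chain: cut off the visible convex corners
            last = stack.pop()
            while stack and sign * orient(stack[-1][0], last[0], p) > 0:
                diagonals.append((p, stack[-1][0]))
                last = stack.pop()
            stack.extend([last, verts[k]])
    p_last = verts[-1][0]
    for q, _ in stack[1:-1]:                 # close up with the rightmost vertex
        diagonals.append((p_last, q))
    return diagonals                         # the n - 3 internal edges

The returned diagonals are the n - 3 internal edges; each vertex is pushed and popped at most once, which is the source of the linear bound discussed next.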
The analysis of this algorithm is immediate. The initial sort can be performed
in linear time since it consists of merging two already sorted lists. Each step in
the algorithm can be carried out in time proportional to the number of vertices
added to and popped from the stack. Since each vertex is stacked and popped
only once, the algorithm has linear complexity, proving that:
Figure 12.7. The two types of trapezoid in the vertical decomposition of a polygon.
time O(n log* n). The algorithm we describe here uses the sweep method, and is
a variant of the algorithm described in subsection 3.2.2 that computes the inter-
section points of a set of segments; it runs in time O(n log n). We then describe
a lazy version of the same algorithm, whose complexity is lower for a large class
of polygons.
To decompose a polygon using the sweep method, we propose to sweep the
plane with a vertical line Δ from left to right. The state of the sweep, stored in a
structure Y, is the ordered list of active edges: these are the edges of the polygon
intersected by Δ, ordered according to their intersections along Δ. The structure
Y is implemented using a balanced binary tree, letting us insert, delete, or query
active edges in time O(log k) where k is the number of active edges. Moreover,
the nodes of the tree store two extra pointers that allow access in constant time to
the active edge immediately above or below the active edge E stored in this node.
The edge above E is denoted by above(E) and the edge below E by below(E).
The list Y of active edges changes only when Δ sweeps over a vertex of the
polygon. Thus the list of events to be processed is simply the list of vertices of P
sorted by increasing abscissae. Without loss of generality, we may assume that
no two vertices have the same abscissa.
The structure Y initially stores two fictitious edges that intersect the sweep
line at y = +∞ and y = −∞ respectively. Processing the event corresponding to
a vertex Pi consists of the following operations:
Each of these operations depends on the type of vertex Pi: it can be either a
start, an end, or a monotone vertex.
If Pi is a start vertex (see figure 12.8), locating Pi in the list Y allows us to
retrieve the active edges E and E' that lie immediately below or above Pi on Δ.
The algorithm builds two walls starting at Pi and butting on E and E'.1 The
edges incident to Pi are inserted in Y between E and E'.
If Pi is a monotone vertex (see figure 12.9), locating Pi in the structure Y allows
us to retrieve the active edge E1 that is incident to Pi. The walls stemming
from Pi and butting on the edges below(E1) and above(E1) are inserted into the
decomposition, and edge E1 is replaced in Y by the other edge E2 of P that is
incident to Pi.
'Details of these operations depend on the particular representation of the vertical decom-
position used by the algorithm: simple list of walls, simplified or complete representation of
Dec(P) as described in section 3.3.
Finally, if Pi is an end vertex (see figure 12.10), locating the vertex Pi in the
structure Y allows us to find both active edges E1 and E2 incident to Pi. Say that
E1 is above E2; then the walls stemming from Pi butt on the edges below(E2)
and above(E1). The two edges E1 and E2 are both removed from the structure
Y.
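The event processing above only needs to know the type of each vertex. A small helper sketch, under assumed conventions and not taken from the book, classifies the vertices of a simple polygon given in counterclockwise order, and also flags the reflex start and end vertices that the lazy variant below counts:

def classify_vertices(poly):
    """poly: list of (x, y) vertices in counterclockwise order, with distinct abscissae."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    n, labels = len(poly), []
    for i in range(n):
        prev, cur, nxt = poly[i - 1], poly[i], poly[(i + 1) % n]
        reflex = orient(prev, cur, nxt) < 0        # right turn on a CCW polygon
        if prev[0] > cur[0] and nxt[0] > cur[0]:   # both neighbors to the right
            labels.append('reflex start' if reflex else 'start')
        elif prev[0] < cur[0] and nxt[0] < cur[0]: # both neighbors to the left
            labels.append('reflex end' if reflex else 'end')
        else:
            labels.append('monotone')
    return labels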
Proof. Sorting the vertices of P to build the ordered list of events takes time
O(n log n). The number of active edges is always less than n. For each of the
n events, locating the current vertex in Y and updating the structure Y (which
involves at most two insertions or two deletions) require O(log n) operations, and
the remaining operations can be carried out in constant time. □
Remark. The algorithm is based on the fact that the edges of the simple polygon
do not intersect except at common vertices. Nowhere do we use explicitly the
fact that the edges are connected. The algorithm can therefore be extended
straightforwardly to compute the vertical decomposition of a collection of several
disjoint polygons, or of a polygonal region with holes.
In its basic version, an algorithm that computes a vertical decomposition using the
sweep method requires time Ω(n log n) for any kind of polygon, even if it is convex
or monotone. In those cases, the vertical decomposition can be easily obtained
by other methods in only linear time. The analysis of this algorithm reveals
that its complexity is dominated by the cost of initially sorting the vertices of P,
the cost of locating the vertices in the structure Y, and the cost of rebalancing
the tree after each insertion or deletion of active edges. These operations are not
necessary for each vertex of P, however. Indeed, any polygon P can be considered
as the concatenation of monotone polygonal lines, the monotone chains whose
endpoints are the start and end vertices. The internal vertices of such chains are
monotone vertices and are ordered by increasing or decreasing abscissae along
the chain. Also, when the algorithm processes the event at a monotone vertex,
updating the structure Y only involves replacing an edge by another one, which
requires no rebalancing of the structure. If we manage to successively process
the events corresponding to monotone vertices on a chain, we no longer need to
locate these vertices in the structure Y.
In its lazy version, the algorithm only processes the event when the sweep line
Δ sweeps over start or end vertices. So the list of events is now only the list of
start or end vertices, sorted by increasing abscissae, which can be obtained
in time O(n + s log s) where s is the number of these special vertices of P.
The structure Y is modified so that it can handle updates in a lazy fashion. This
structure is still implemented as a balanced tree, but each node now corresponds
to an active monotone chain, rather than to an active edge: a monotone chain is
called active if one of the edges on this chain is active, meaning that it intersects
the sweep line Δ.
The active edges or monotone chains subdivide the sweep line Δ into an ordered
sequence of segments which are alternately interior and exterior. To simplify the
discussion, we only describe how to build the decomposition of the interior of the
polygon P, that is, to find the interior walls and trapezoids in the decomposition
of P. It thus suffices to consider the interior segments on the sweep line Δ.
Each of these segments is bounded by a pair (Ci, Ci+1) of active chains which are
consecutive along Δ. To each pair, we dedicate a local sweep line Δi. The local
sweep line Δi always lags behind Δ: the edge E'i of Ci intersected by Δi precedes
(in increasing x-order along the chain Ci) the edge Ei of Ci intersected by Δ; similarly,
the edge E'i+1 of Ci+1 intersected by Δi precedes the edge Ei+1 of Ci+1 intersected
by Δ. The information stored at the nodes of the tree Y corresponding to the
chains Ci and Ci+1 is relevant only to the edges E'i and E'i+1 intersected by the
local sweep line Δi.
The local sweep line Δi only advances and reaches the global sweep line Δ
when a node of Y corresponding to one of the chains Ci or Ci+1 is visited in
order to locate a vertex P in the list of events. When visiting the node of Y
that corresponds to the chain Ci, the algorithm tests whether the active edge E'i
intersected by the local sweep line Δi intersects the global sweep line Δ. If not,
the local sweep line sweeps over the vertices of Ci and Ci+1 that lie between the
two lines Δi and Δ. A linear traversal of these two chains (analogous to merging
two sorted lists) then builds the wall from each vertex of Ci and Ci+1, in constant
time for each wall. When the local sweep line has caught up with Δ, the edge E'i
of Ci intersected by the local sweep line Δi coincides with the edge Ei intersected
by the global sweep line Δ. The vertex P may then be compared with Ei and
the location of P in the structure Y may continue.
The process which advances the local sweep line Δi involves merging two sorted
lists of vertices and does not need to perform location queries in the structure Y
or to rebalance this structure. Such a process takes time linear in the number
of processed monotone vertices. If the number of start or end vertices is s, the
number of chains stored in Y is O(s); thus Y requires storage O(s), and each
location or rebalancing takes time O(log s). The total complexity of the lazy
sweeping is thus O(n + s log s). As is proved in lemma 12.3.5, for any direction
of the sweep line, the number of start and end vertices is at most r + 2 if r is
the number of reflex vertices in the polygon. The complexity of the lazy sweep
algorithm is thus O(n + r log r).
It is possible to build the portion of the vertical decomposition that lies outside
the polygon P in a similar fashion. It suffices to maintain a local sweep line for
each segment of the global sweep line that lies outside the polygon. Finally, we
should also mention that the algorithm does not use explicitly the fact that the
edges are connected, so that it computes equally well the vertical decomposition
of a collection of disjoint polygons, or of a polygonal region with holes.
The following theorem summarizes the results of this paragraph:
Theorem 12.3.9 The lazy sweep algorithm computes the vertical decomposition
of a polygon (or of a collection of disjoint polygons) with a total of n vertices, of
which r are reflex vertices, in time O(n + r log r).
Note that if a polygon is monotone in some given direction, then the direction of
the sweep line may be appropriately chosen as perpendicular to this direction to
ensure that the number of start and end vertices for this direction is exactly two,
and the lazy sweep algorithm takes time O(n) in this case.
12.4 Exercises
Exercise 12.1 (Decomposition into convex parts) Consider a polygon P with n
vertices and r reflex vertices.
1. Show that any decomposition of the interior of P into convex parts has at least
⌈r/2⌉ + 1 regions.
Exercise 12.2 (Localization in a planar map) This exercise presents a proof of the
following result: given a planar map of size O(n), it is possible to build in time O(n log n)
a data structure of size O(n) that allows localization queries in the map to be performed
in time O(log n).
We first consider a planar triangulation T whose boundary consists of a single triangle.
Let n be the number of vertices of T.
1. Show that T has 3n - 6 edges and 2n - 5 triangles.
2. Consider a maximal set of internal vertices of T such that no two of these vertices
are adjacent and the degree of any of these vertices is at most d. Show that the size n' of
any such set is at least
n' ≥ (d − 5)n / (d(d + 1)).
3. Show that such a set may be found in linear time (a sketch of such a greedy selection is given after this exercise).
From the triangulation T, it is possible to build a hierarchical structure that allows
efficient localization queries in the triangulation T. This structure is analogous to that for
3-polytopes which was described in exercise 9.5 and used in exercise 9.6 to answer several
kinds of queries on this polytope. The structure represents a sequence of triangulations
T0 = T, T1, ..., Th,
such that Th has bounded complexity, and Ti+1 can be deduced from Ti by removing a
maximal set of non-adjacent internal vertices of Ti with degree at most d. More precisely,
if Si is such a subset of vertices of Ti, for each vertex P in Si, the triangles in the star
Ti(P) of P in Ti are replaced by a triangulation Ti'(P) of the boundary of this star. These
triangulations are merged with the triangles of Ti not incident to Si to obtain Ti+1.
The underlying data structure is a graph that has a node for each triangle that belongs
to one or several successive triangulations, and an edge for each pair (T', T) of triangles
such that, for some level i ∈ [0, h[, T belongs to Ti but not to Ti+1, T' belongs to Ti+1
but not to Ti, and the intersection T ∩ T' has a non-empty relative interior.
4. Show that h = O(log n), that Ti+1 can be computed from Ti in linear time, and
that the graph described above can be built in time O(n) if the triangulation T is given.
5. Show that the graph can be used to perform localization queries of a point in the
triangles of T in time O(h) = O(log n).
6. Extend the method to any planar map. If the map has no unbounded edges, it
can always be enclosed in a surrounding triangle and the resulting polygonal regions can
be triangulated. Otherwise, the regions in the map may be triangulated, by considering
points at infinity to triangulate the unbounded regions.
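For questions 2 and 3 of exercise 12.2, the selection can be done greedily in one pass over the adjacency lists. The sketch below uses an assumed adjacency-list representation, not the book's data structure: a vertex is kept when it is internal, of degree at most d, and none of its neighbors has already been kept.

def low_degree_independent_set(adj, internal, d=8):
    """adj: list of adjacency lists; internal[v] is True when v is not a boundary vertex.
    d is a constant bound on the degree (for instance 8 or 11)."""
    n = len(adj)
    blocked = [False] * n            # True once one of the vertex's neighbors was selected
    selected = []
    for v in range(n):
        if internal[v] and not blocked[v] and len(adj[v]) <= d:
            selected.append(v)
            for w in adj[v]:         # neighbors may no longer be selected
                blocked[w] = True
    return selected

The resulting set is maximal among the low-degree internal vertices, and the whole pass is linear in the size of the triangulation.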
Exercise 12.5 (Shortest paths) Consider a polygon P with n vertices, and two points
P and Q that belong to the boundary or to the interior of the polygon P. We suppose
that a triangulation T(P) of P has already been computed. Show how to compute in
linear time the shortest polygonal line π(P, Q) that links P to Q and remains in the
interior of the polygon.
Hint: Let Tp be a triangle of T(P) that contains P and TQ be a triangle of T(P) that
contains Q. The dual graph of T(P) is a tree, in which there is a unique path from Tp to
TQ. Thus, there is a sequence of adjacent triangles in T(P) that links Tp to TQ. Consider
the sequence E1, E2, ..., El of edges adjacent to two consecutive triangles on this path.
Let Ui and Vi be the two vertices of Ei. The algorithm computes the shortest paths
π(P, Ui) and π(P, Vi) for increasing i. To compute π(P, Ui+1) and π(P, Vi+1) knowing
π(P, Ui) and π(P, Vi), we use the following observations:
1. Either Ui = Ui+1, or Vi = Vi+1. It therefore suffices to compute the shortest path
π(P, X) for the vertex X = Ui+1 or X = Vi+1 that does not belong to {Ui, Vi}.
2. The shortest paths π(P, Ui) and π(P, Vi) are polygonal lines; they share an initial
polygonal line π(P, Ai), and then consist of two concave chains which are the shortest
paths π(Ai, Ui) and π(Ai, Vi) (see figure 12.12). The funnel of the edge UiVi, denoted
by Fp(UiVi), is the concatenation of the chains π(Ui, Ai) and π(Ai, Vi). The vertex Ai
is the origin of the funnel Fp(UiVi). The shortest path π(P, X) must also pass through
the origin Ai, and is the concatenation of π(P, Ai) with the shortest path π(Ai, X). This
shortest path is the segment AiX if it does not intersect the funnel Fp(UiVi). Otherwise,
there is a point Ai+1 on Fp(UiVi) such that Ai+1X is tangent to the chain that contains
Ai+1, and the shortest path π(Ai, X) is the concatenation of π(Ai, Ai+1) with Ai+1X.
To compute π(P, X) and the funnel Fp(Ui+1Vi+1), it therefore suffices to find Ai+1.
3. To find the point Ai+1, we follow Fp(UiVi) simultaneously starting at the two
endpoints Ui and Vi. The cost of the traversal is proportional to the number of vertices
of Fp(UiVi) visited. But half of the visited vertices of Fp(UiVi) do not belong to Fp(Ui+1Vi+1), so
they will never be visited again. Show then that the algorithm takes linear time.
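The first ingredient of this hint, the sequence of triangles between Tp and TQ, is easy to extract. The sketch below uses an assumed triangle-list representation, not code from the book; it builds the dual graph and returns the unique path of triangle indices, while the funnel maintenance of observations 2 and 3 is not shown.

from collections import defaultdict

def triangle_path(triangles, t_start, t_goal):
    """triangles: list of vertex-index triples of a polygon triangulation."""
    # map each (sorted) edge to the triangles containing it
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris[tuple(sorted(e))].append(t)
    # build the dual graph: one node per triangle, one arc per shared edge
    dual = defaultdict(list)
    for tris in edge_to_tris.values():
        if len(tris) == 2:
            u, v = tris
            dual[u].append(v)
            dual[v].append(u)
    # the dual graph of a polygon triangulation is a tree (lemma 12.3.2),
    # so a depth-first search yields the unique path from t_start to t_goal
    parent, stack = {t_start: None}, [t_start]
    while stack:
        u = stack.pop()
        if u == t_goal:
            break
        for v in dual[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)
    path, u = [], t_goal
    while u is not None:
        path.append(u)
        u = parent[u]
    return path[::-1]        # triangles from t_start to t_goal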
Exercise 12.6 (Shortest path tree) Consider a polygon P with n vertices and P a
point that belongs either to the boundary or to the interior of P. The shortest paths
that join P to the vertices of P do not cross, so their union forms a tree whose nodes are
vertices of P. Show that it is possible to compute the tree of shortest paths from P in
time O(n log n).
The cost for each node is proportional to the number of nodes visited in the funnel
of the incoming edge. If the node has a single child, the number of visited vertices on
the funnel of the incoming edge is proportional to the number of vertices that do not
belong to the funnel of the outgoing edge, vertices that will not be visited later. The
total cost for these nodes is thus linear. If a node has two children and the funnel of its
incoming edge is of size m, the funnels of the outgoing edges have sizes m1 + 1 and m2 + 1,
with m1 + m2 = m. The cost for such a node is proportional to min(m1, m2). For this
exercise, say that the width of a subtree of the shortest path tree is the sum of the sizes
of all the funnels of the incoming edges of the leaves of this subtree. For each node T of
this tree, denote by m(T) the size of the funnel of the incoming edge, by e(T) the number
of arcs of the subtree rooted at this node, and by m'(T) the width of this subtree. It is
easily shown that m'(T) = m(T) + e(T), so that in particular m(T) < m'(T). From this,
show that the total cost c(m') of the binary nodes of a subtree of width m' satisfies the recurrence
Exercise 12.7 (Shortest path queries) Consider a polygon P with n vertices and
P a point that belongs either to the boundary or to the interior of P. Design a data
structure that allows us to find, for any point X on the boundary or in the interior of P,
the shortest path π(P, X) that links P to X inside P. Each shortest path query must
be answered in time O(log n + k) where n is the number of vertices of P and k is the
number of edges on the shortest path.
Hint: Build the shortest path tree from P as explained in exercise 12.6. The set of
regions D(E) bounded by an edge E of P and the funnel Fp(E) of this edge forms a
decomposition of the interior of P. Each region D(E) can be subdivided further by
extending the edges of the funnel Fp(E) all the way until they meet E. Each of the
sub-regions in the decomposition induced by the tree and these extended edges is in fact
triangular, with an edge supported by E, and the opposite vertex is a vertex Q of Fp(E).
For each point X in this triangle, the shortest path π(P, X) is the concatenation of the
path in the shortest path tree that links P to Q with the segment QX. The problem is
now replaced by that of locating X in a planar map of size O(n) (see exercise 12.2).
Hint: Build the shortest path tree from P and the planar map formed by the edges of
P, the funnel Fp (E) of each edge E of P, and the edges that extend the edges of the
funnels as in the previous exercise. This induces a decomposition of P into sub-edges.
Follow the boundary of P, keeping track of whether each sub-edge is visible from P. The
visible edges linked in a proper way form the boundary of V(P, P).
Another solution is to sort the vertices of P by polar angle around P, and to build
the visibility polygon V(P, P) using a sweep ray that originates from P and sweeps the
plane by rotating with an increasing angle.
Exercise 12.9 (Art gallery) An art gallery is viewed as a polygonal region bounded
by a polygon with n vertices. The gallery must be watched over by a set of guards placed
at the vertices of the polygon. Each guard keeps an eye on the portion that is visible
from its location.
Show that it is always sufficient and sometimes necessary to place ⌊n/3⌋ guards to
completely watch over an art gallery with n vertices.
Hint: Let T be a triangulation of the polygon that encloses the art gallery. It is easy to
show that the vertices can be colored with three colors so that each edge is bichromatic,
or equivalently that each triangle has a vertex of each color. Simply take the color that
is attributed the least number of times, and place a guard at the vertices that have this
color.
To prove that ⌊n/3⌋ guards may be necessary, consider a comb-shaped polygon.
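The 3-coloring of the hint can be obtained by a single traversal of the dual tree. The sketch below, written for an assumed triangle-list input and not taken from the book, colors one triangle arbitrarily, propagates the missing color across each shared edge, and returns the smallest color class as the set of guards:

from collections import defaultdict

def guard_positions(triangles):
    """triangles: list of vertex-index triples of a triangulation of a simple polygon."""
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris[tuple(sorted(e))].append(t)
    color = {}
    a0, b0, c0 = triangles[0]
    color[a0], color[b0], color[c0] = 0, 1, 2        # color an arbitrary first triangle
    visited, stack = {0}, [0]
    while stack:
        t = stack.pop()
        a, b, c = triangles[t]
        for e in ((a, b), (b, c), (c, a)):
            for t2 in edge_to_tris[tuple(sorted(e))]:
                if t2 not in visited:
                    visited.add(t2)
                    # the vertex of t2 opposite the shared edge gets the missing color
                    apex = next(v for v in triangles[t2] if v not in e)
                    color[apex] = 3 - color[e[0]] - color[e[1]]
                    stack.append(t2)
    classes = [[v for v, c in color.items() if c == k] for k in range(3)]
    return min(classes, key=len)                     # at most floor(n/3) guards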
Hint: To build the shortest path, first compute a triangulation of P and use the algorithm
of exercise 12.5. Note that the time required to compute a shortest path is proportional
to the number of triangles in T traversed by this shortest path, and that the total number
of intersections between the computed shortest paths and the edges of T is O(n log n).
Exercise 12.12 (Ray shooting) Consider a polygon P with n vertices. A ray shooting
query inside P consists of identifying the point of the boundary of P that is hit by a
ray originating from a given point Q inside or on the boundary of P in the direction of
a given vector U. Show that a balanced geodesic decomposition can be used to answer
such a query in time O(log² n).
Hint: To answer a ray shooting query (Q, U) involves locating the origin Q in the
geodesic decomposition, and following the ray (Q, U) in the decomposition. The location
can be performed in time O(log n) if a location structure has been precomputed (see
exercise 12.2). The ray (Q, U) intersects at most O(log n) edges in the geodesic decom-
position and the boundary of each region consists of three concave chains, allowing each
intersection to be located by binary search.
into two balanced sub-polygons, and recursively triangulates them (see exercise 12.10).
We must also cite the algorithm by Hertel and Mehlhorn [128] that computes a triangu-
lation directly using the sweep method (see exercise 12.4) and also introduces the lazy
sweep method used in subsection 12.3.3. This method is also described in the book by
Mehlhorn [162].
The applications of planar triangulations are so many that it is impossible here to
give a complete account. From the standpoint of computational geometry, certainly the
most important one is to provide a preprocessing step to the localization in a triangu-
lar planar map (see exercise 12.2) that was developed by Kirkpatrick [136]. Visibility,
shortest paths, and ray shooting problems tackled in exercises 12.5 to 12.12 also provide
a fertile application domain for triangulations. The algorithm described in exercise 12.5
that computes the shortest path between two vertices of a polygon is due to Lee and
Preparata [148]. In this article, they introduce funnels which are often used by others.
For instance, Guibas et al. [116] used it to compute shortest paths in a polygon from a
vertex or the sub-polygon visible from a given point or segment. Exercises 12.8 to 12.12
are borrowed from this article, but we must point out that the use of finger trees to rep-
resent funnels allows their algorithms to compute the shortest path tree in linear time.
The idea of using the hierarchical decomposition of a polygon (exercise 12.10) to solve ray
shooting problems is exploited by Chazelle and Guibas [54] and by Guibas et al. [116].
The solution that uses a geodesic decomposition of the polygon (see exercises 12.11 and
12.12) was developed by Chazelle et al. [50]. Again, we must point out that the use of
sophisticated data structures (weight-balanced trees and fractional cascading) allows them
to answer a ray shooting query in time O(log n) for a polygon with n vertices. Finally,
the art gallery theorem (see exercise 12.9) and its numerous variants are discussed in the
book by O'Rourke [183].
Chapter 13
Triangulations in dimension 3
In dimension 3, the possible triangulations of a set of points do not all have the
same number of faces. In fact, there are some sets of points which admit trian-
gulations of both linear and quadratic sizes. Moreover, constrained triangulation
problems do not always have a solution in dimension 3. For instance, some poly-
hedra are not triangulable, meaning that the set of faces of the polyhedron cannot
be completed into a 3-triangulation so that the vertices of the triangulation are
exactly the vertices of the polyhedron. Yet several applications crucially rely on
our ability to decompose polyhedral regions into simplices. We must then design
a simplicial decomposition scheme. The simplicial decomposition of a polyhedral
region is a 3-triangulation whose domain is exactly the polyhedral region (as a
closed topological subset of E3); but this triangulation has additional vertices and
edges that are not faces of the polyhedral region, and the edges and 2-faces of the
polyhedral region may be split into several faces of the simplicial decomposition.
The size of the simplicial decomposition is crucial for subsequent operations, so
we aim at minimizing (exactly or approximately) the size of such decomposi-
tions. In this chapter, we show how to build a simplicial decomposition from
the vertical decomposition. The vertical decomposition of a polyhedral region is
the three-dimensional analog of the vertical decomposition of a polygonal region
introduced in the previous chapter.
Section 13.1 investigates triangulations of a set of points, and presents an algo-
rithm that builds a triangulation of linear size for any set of points such that no
three points are collinear. The remainder of the chapter considers constrained tri-
angulation problems. In section 13.2, we present first two unfeasible constrained
triangulation problems. Section 13.3 generalizes the notion of a vertical decompo-
sition to polyhedral regions and presents an algorithm that computes a simplicial
decomposition for polyhedral regions of genus 0. The resulting simplicial decom-
position is not minimal, but its size can be bounded by O(n + r²) for a polyhedron
with n vertices and r reflex edges.
Linear triangulations
Some sets of points admit a triangulation of linear size, which we call linear tri-
angulation for short. In particular, this is true for any set of points in general
position, meaning that no three points are collinear and no four points are copla-
nar. Indeed, let A = {A1, ..., An} be a set of n points in general position. We
may build a triangulation of A in the following way:
Triangulating the convex hull. Let us first consider the subset formed by the
ne points that are on the boundary of the convex hull conv(A). These points
are vertices of the convex hull and the 2-faces of the convex hull are triangles,
because of the general position assumption. Choose a vertex A0 of conv(A),
and compute all the tetrahedra that can be obtained as conv(A0, F) for
any 2-face F of the convex hull conv(A) that does not contain A0. These
tetrahedra form a 3-triangulation of conv(A). Theorem 7.2.4 on simplicial
polytopes says that conv(A) has 2ne − 4 facets, so if g is the number of facets
that contain vertex A0, the number of tetrahedra in our 3-triangulation of
conv(A) is 2ne − 4 − g.
Quadratic triangulations
There are also sets of points that have triangulations with a quadratic size, which
we call quadratic triangulations for short. Some sets of points even have no
triangulation with subquadratic size. Consider for instance the set A of 2n points
drawn in figure 13.2. This set of points has n points A1, A2, ..., An situated (in
this order) on some given line in E3, and another n points B1, B2, ..., Bn on
another line that does not lie in the same plane as the first line. A triangulation
may be computed as follows: the convex hull of the points is a tetrahedron
A1, An, B1, Bn. Adding the n − 2 points A2, ..., An−1 splits this tetrahedron
Figure 13.2. A set of points whose only triangulation has quadratic size.
tained by adding a point C on the line AnBn in the preceding example. One way
to triangulate it is to consider that C splits the tetrahedron A1B1AnBn into two
tetrahedra A1B1AnC and A1B1CBn. Each of these two tetrahedra is then split
into n − 1 tetrahedra, by adding the points in {Ai : i = 2, ..., n − 1} and in
{Bj : j = 2, ..., n − 1} respectively into each tetrahedron. The resulting trian-
gulation has 2n − 2 tetrahedra. Another way is to simply add the point C to the
unique triangulation of the set {A1, A2, ..., An, B1, B2, ..., Bn}. The addition of
C will merely split the tetrahedron An−1AnBn−1Bn into two tetrahedra, so that
the resulting triangulation has size (n − 1)² + 1.
Another example of a set of points that has both linear and quadratic trian-
gulations is a set of n points on the moment curve, for which we have shown the
existence of a quadratic triangulation in section 11.2. Since this point set is in
general position, the discussion above shows that it also admits a triangulation
with a linear number of tetrahedra.
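As a quick numerical illustration of this general position claim (a sanity check only, not part of the book's argument), one can sample points on the moment curve t ↦ (t, t², t³) and verify that no four of them are coplanar:

from itertools import combinations

def moment_point(t):
    return (t, t * t, t ** 3)

def coplanar(p, q, r, s, eps=1e-9):
    # signed volume of the tetrahedron pqrs, via a 3x3 determinant of edge vectors
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    w = [s[i] - p[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) < eps

pts = [moment_point(t) for t in range(1, 9)]
assert not any(coplanar(*four) for four in combinations(pts, 4))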
Let A be a set of n points, {A1, ..., An}, in general position in E3. In order to
triangulate this set of points, we may think of extending the incremental method
described in subsection 12.1.2 to E3. According to this method, the points are
sorted by lexicographic order of their coordinates x, y, z, then inserted one by one
into the triangulation. The triangulation initially consists of a single tetrahedron
formed by the first four points. At each incremental step, the algorithm main-
tains the convex hull conv({A1, A2, ..., Ai−1}) of the points already inserted and
updates the triangulation by adding the tetrahedra conv(Ai, F) for all the facets
F of conv({A1, A2, ..., Ai−1}) that are red with respect to Ai (recall that these
are the facets whose affine hull separates Ai from {A1, A2, ..., Ai−1}).
We used this method in section 11.2 in order to build a triangulation of the
set of points on the moment curve. A major drawback of this method is that, in
dimension 3, it can lead to triangulations with a quadratic number of tetrahedra,
although the set of points admits a linear triangulation. This is exactly what
happens for n points on the moment curve.
In contrast, the algorithm we present here finds a triangulation of a set A of
n points which has linear size if no three points in A lie on the same line. This
algorithm uses the divide-and-conquer method and relies on a theorem proved
in the next subsection. Loosely speaking, the theorem shows the existence of a
good splitter for the triangulation.
so that the convex hull is the simplex S = A1A2···Ad+1. Any point in A that
is not a vertex of S is an internal vertex of any triangulation of A, and is called
an internal point of A. The set A therefore has n' = n − (d + 1) internal points.
Any internal point X of A splits S = conv(A) into d + 1 simplices S1(X), ..., Sd+1(X),
where Si(X) is the convex hull of X and the facet of S that does not contain Ai.
Proof. Consider the subset A' of the n' internal points of A. The proof consists
in successively removing the points in A' that are too close to a vertex of S =
conv(A). The remaining points will be good splitters for A.
Let us put Q0 = A'. We define a sequence of subsets Q0 ⊇ Q1 ⊇ ... ⊇ Qd+1 in
the following way. For each i = 1, ..., d + 1, let Ni be the normal vector to the
facet of S that does not contain Ai. By convention, Ni points away from S (see
figure 13.3). For each X ∈ Qi−1, we define an i-ordinate si(X) = (X − Ai) · Ni.
Let Yi be the ⌈n'/(d + 1)⌉-th point of Qi−1 with respect to the increasing order of
si(X). We split Qi−1 into two subsets by setting
P'i = {X ∈ Qi−1 : si(X) < si(Yi)},
Pi = {X ∈ Qi−1 : si(X) ≤ si(Yi)},
Qi = {X ∈ Qi−1 : si(X) > si(Yi)} = Qi−1 \ Pi.
Let |P'i| denote the size of P'i and |Pi| the size of Pi. By construction of Qi, we
have
|P'i| ≥ n'/(d + 1) − 1 and |Pi| ≥ n'/(d + 1).
Using these inequalities, we can show that Qd+1 is not empty, and that any
point Z in Qd+1 is a d/(d + 1)-splitter for A. Indeed,
Qd+1 = A' \ (P1 ∪ · · · ∪ Pd+1),
int(Si(Z)) ∩ A' ⊆ A' \ Pi,
2. Compute si(X) for every point X in Qi−1 and compute the ⌈n'/(d + 1)⌉-th point
in Qi−1 with respect to the increasing order of si(X).
3. Compute Qi.
For each value of i, step 1 takes constant time, step 3 takes linear time, and
step 2 requires the computation of n' dot products, which can be carried out
in time O(n'). The only delicate point is to select the ⌈n'/(d + 1)⌉-th value of these
products. This can nevertheless be performed in linear time (see exercise 3.7),
and so step 2 takes linear time as well. The overall cost is thus linear.
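One round of this computation is easy to sketch in dimension 3. The fragment below is an illustration under assumed conventions, not the book's code: it computes the i-ordinates with respect to one facet and selects the k-th smallest value; numpy.partition plays the role of the linear-time selection of exercise 3.7.

import numpy as np

def split_round(points, a_i, normal_i, k):
    """points: (n', 3) array of internal points; a_i: a vertex of S;
    normal_i: outward normal of the facet of S that does not contain a_i."""
    s = (points - a_i) @ normal_i                # the i-ordinates s_i(X)
    threshold = np.partition(s, k - 1)[k - 1]    # value of the k-th smallest ordinate
    removed = points[s <= threshold]             # points discarded from Q_{i-1}
    kept = points[s > threshold]                 # the set Q_i
    return threshold, removed, kept

Repeating the round for the four facets, with k chosen as in the proof above, leaves the candidate splitters.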
Now let us indicate why the ratio d/(d + 1) is optimal for a split theorem. It
suffices to show that there is a set of points that does not admit a λ-splitter for
λ < d/(d + 1). Consider the set of points that is a generalization of the planar set
of points represented in figure 13.3. To build this set in Ed, we put d + 1 points
at the vertices of a regular simplex A1, A2, ..., Ad+1 of center C, and m(d + 1)
internal points given by
Clearly, the best splitters for this set are the points Bi,1 for i = 1, ..., d + 1, and
these splitters are d/(d + 1)-splitters.
2. Split the tetrahedron S = conv(A) into four tetrahedra Si(Z) (i = 1, ..., 4).
For each of these tetrahedra Si(Z), recursively compute a triangulation for
the set Ai of points in A contained in Si(Z), if there are any.
2. Triangulate the convex hull. For instance, pick the vertex A0 of maximal degree
(the degree of a vertex is the number of incident edges). The facets of the
convex hull are triangles because of the general position assumption, hence
the collection of simplices of the form conv(A0, F), where F ranges over
all the facets of conv(A) that do not contain A0, is a triangulation T of
conv(A).
Once this preprocessing is over, we are left with a collection of sets of points, each
contained in a tetrahedron of T, to which we can apply the algorithm above.
The convex hull conv(A) of a set A of n points in E3 can be computed in
time O(n log n) (see chapters 8 and 9). The triangulation in step 2 can be ob-
tained in linear time once the convex hull is known. It has exactly 2n − 4 − g0
tetrahedra if the vertex A0 is incident to g0 edges of conv(A). To process the
location queries, we use the stereographic projection of the triangulation onto
a plane Π to transform these queries into location queries in a triangular pla-
nar map. Let Π' be a plane supporting conv(A) along the vertex A0, and Π
a plane parallel to Π' that does not intersect conv(A) but such that conv(A)
is contained in the slab between Π and Π' (see figure 13.5). The stereographic
projection centered at A0 sends any point P in E3 \ {A0} onto the point of
Π that is the intersection of Π with the line passing through A0 and P. The
set of the projections of the facets F of conv(A) that do not contain A0 forms a
triangular planar map in Π.
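The projection itself is a one-line computation. In the sketch below, the plane Π is assumed to be represented by a point q0 on it and a normal vector (this representation is an assumption, not the book's notation); a point P is sent to the intersection of the line A0P with Π.

import numpy as np

def project_from(a0, p, q0, normal):
    """Intersection of the line through a0 and p with the plane (q0, normal)."""
    d = p - a0                               # direction of the line a0 p
    denom = d @ normal
    if abs(denom) < 1e-12:
        raise ValueError("line is parallel to the projection plane")
    t = ((q0 - a0) @ normal) / denom         # parameter of the intersection point
    return a0 + t * d

# example: project one point from the apex a0 onto the plane z = -1
a0 = np.array([0.0, 0.0, 2.0])
q0, normal = np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0])
print(project_from(a0, np.array([1.0, 1.0, 0.0]), q0, normal))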
It now remains to show how to triangulate a set of points which are not in general
position. The general position assumption was used in two ways in the preceding
algorithm:
If the polytope conv(A) is not simplicial, its 2-faces may always be triangulated,
and this triangulation may be used to derive a 3-triangulation of conv(A) as
before.
Then, to triangulate a point set in any position, we ignore in a first phase
the internal points that, during the location steps, fall on a facet or edge of a
tetrahedron in the current triangulation. Unlike the tetrahedra, which may be
split, the edges and triangles created by this algorithm will remain in all the
triangulations formed in the first phase. Each triangle and edge keeps a pointer
to a list that is initially empty, and whenever an internal point is located on a
triangle or edge, it is added to the corresponding list and nothing else is done
for that point until all the points have been processed. At the end of this phase,
the algorithm yields a linear-sized triangulation T' of a subset A' of A. The
tetrahedra in this triangulation do not contain any points of A in their interiors
but triangles and edges may. The points in A \ A' are stored in the lists of the
triangles and edges in whose relative interior they are contained.
In a second phase of the algorithm, the coplanar cases are taken care of. All the
triangles in T' which contain points of A in their relative interior are processed
in turn. For such a triangle F, the set of the mF points that are contained
in the relative interior of F is triangulated within F. The points which lie on
the incident edges are not taken into account yet. In this way, 2mF + 1 new
triangles are created and the (at most two) tetrahedra adjacent to F are split
into 2mF + 1 tetrahedra each, by lifting these triangles towards the opposite vertex
of the tetrahedron. For each triangle F, the triangulation may be computed in
O(mF log mF) time and the number of tetrahedra increases by at most 2(2mF +
1 − 1) = 4mF. This phase can thus be carried out in time O(Σ_F mF log mF) =
O(n log n) and yields a triangulation T'' with a linear number of tetrahedra.
If A has collinear points, the triangulation T'' may still include edges with
points in their relative interiors. In a third phase, the algorithm processes these
edges in turn. For each non-empty edge E, the pE points of A contained in
this edge are sorted along E and each tetrahedron incident to this edge is split
into pE + 1 new tetrahedra. As an edge may be incident to a high number
of tetrahedra (up to O(n)), the triangulation may become quadratic during this
phase. The algorithm takes time O(Σ_E pE log pE) = O(n log n) to sort the points
along the edges, and each tetrahedron is created in constant time, so that the
overall complexity of the algorithm is O(n log n + t) if t is the size of the final
triangulation.
The following theorem summarizes the characteristics of this algorithm.
t = m − n − ne + 3 = 6,
Figure 13.8. Vertical decompositions: (a) a cylindrical cell, (b) walls of type 2.
set of vertical segments stemming from points on the same edge and butting on
the same facet of P forms a vertical trapezoid called a 2-wall of type 1. These
2-walls of type 1 decompose the polyhedral region inside P into cylindrical cells
with vertical generator. These cells are called cylindrical cells. They each have
two non-vertical faces, a lower one called the floor and an upper one called the
ceiling. Each floor or ceiling lies in a unique facet of P (see figure 13.8a). The
floor and ceiling of a cell may have arbitrarily high complexity and may not
necessarily be convex or even simply connected. As a consequence, cylindrical
cells may be non-elementary (with an arbitrarily high number of vertical facets),
non-convex (with reflex vertical edges), or even have genus g > 0 (the horizontal
cross-sections have holes).
To obtain convex cells of bounded complexities, we decompose each cylindrical
cell C in turn. For this, we first consider the floor Fl(C) of the cell, and we
decompose it. The decomposition of this polygonal region is described in sec-
tion 12.3, if we agree on a direction such as the projection of the y-axis on the
plane that supports Fl(C). The walls are then segments parallel to this direction
stemming from the vertices of Fl(C), contained in Fl(C), and maximal in Fl(C).
Call them 1-walls. On top of these 1-walls, we draw 2-walls as we did before to
construct the cylindrical cells. More precisely, from each point on a 1-wall on the
floor of C, we draw a maximal segment inside C which extends to the ceiling of
C. The set of those maximal vertical segments stemming from a single 1-wall
forms a vertical trapezoid called a 2-wall of type 2 (see figure 13.8b). Note that
the 2-walls of type 2 of a cylindrical cell C decompose the 2-walls of type 1 of
C into 2-walls to which we give type 1'. The 2-walls of type 2 decompose the
cylindrical cell C into cylindrical cells which are both convex and elementary:
each has a floor and a ceiling which is a trapezoid, occasionally degenerated into
a triangle, and with at most four vertical walls which are also trapezoids. These
elementary cells are called prisms, and the set of all prisms forms the vertical
decomposition of the polyhedron P.
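The elementary operation behind these walls is vertical ray shooting: from a point of an edge or of a 1-wall, find the first facet of P hit by the upward vertical ray. The following is a brute-force sketch under assumed conventions (triangular facets, no search structure), not an implementation of the book's algorithm.

import numpy as np

def first_facet_above(p, facets, eps=1e-12):
    """First triangular facet hit by the upward vertical ray from p; facets are point triples."""
    def orient2d(u, v, w):
        return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])
    p = np.asarray(p, float)
    best_z, best = np.inf, None
    for tri in facets:
        a, b, c = (np.asarray(v, float) for v in tri)
        n = np.cross(b - a, c - a)                 # normal of the facet's supporting plane
        if abs(n[2]) < eps:                        # vertical facet: never a ceiling
            continue
        z = a[2] + (n[0] * (a[0] - p[0]) + n[1] * (a[1] - p[1])) / n[2]
        if z <= p[2] + eps or z >= best_z:         # at or below p, or not the lowest hit so far
            continue
        s1, s2, s3 = orient2d(a, b, p), orient2d(b, c, p), orient2d(c, a, p)
        if min(s1, s2, s3) >= -eps or max(s1, s2, s3) <= eps:
            best_z, best = z, tri                  # (p_x, p_y) falls inside the facet's projection
    return best_z, best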
2. Build the vertical decomposition of SE and keep only the 2-faces of this
decomposition that are incident to a vertical edge that butts on E. (As we
will see in chapter 15, it is also possible to directly compute the 2-faces
of the decomposition of SE incident to E without computing the entire
decomposition of SE.)
A few definitions
A few explanations and definitions are useful to describe the algorithm we present
below. Given a polyhedron P of genus 0, we seek a triangulation T whose do-
main coincides with the polyhedral region interior to P. The triangulation T
is not necessarily a triangulation of P: it may have additional vertices, called
Steiner points, which are not vertices of P. Moreover, the boundary bd(T) is a
2-triangulation whose domain coincides with that of P, but the triangulations P
and bd(T) do not necessarily contain the same triangles.
Consider an edge of P. The planes supporting the two incident facets define
a dihedral angle. If this angle is greater than π, the edge is called reflex; if it is
smaller than π, the edge is called convex; and we say it is a flat edge if this angle
equals π. Similarly, a vertex is called reflex if it is incident to at least one reflex
edge, flat if the facets that contain it are contained in at most two planes, and
convex if it is neither flat nor reflex.
Consider a polyhedron P without flat vertices and V a convex vertex of P.
Let F be a facet of P that contains V and H(F) the plane that contains this
facet, that is, its affine hull. This plane H bounds two half-spaces. The one that
locally contains the interior of P, meaning that it contains the intersection of the
interior of P with a small enough ball centered at V, is denoted by H+(F). The
intersection of the closed half-spaces H+(F) for all the facets F of P that contain
V is an unbounded polytope called the cone of V.1
We denote by K(V) the set of vertices of the polyhedron P which are inside the
cone of V or on its boundary, and by K'(V) the set K(V) \ {V} of these vertices
excluding V.
The cup of V can now be described as the difference of convex hulls
conv(K(V)) \ conv(K'(V)) (see figure 13.10). The cup of V is a polyhedral region
and its boundary is a topological 2-sphere which can be separated into two parts:
* The first part is called the dome of V and is formed by the facets of
conv(K'(V)) which are not facets of conv(K(V)). These are the facets
of conv(K'(V)) which are red2 with respect to V.
* The second part is called the lateral boundary of the cup. It is formed
by the facets of conv(K(V)) which are not faces of conv(K'(V)). All the
facets on the lateral boundary are incident to V and the lateral boundary
is contained in the boundary of the cone of V.
The common boundary between the dome and the lateral boundary is a polygon
(although it may not always be contained in a plane), which we call the crown
of V. The crown of V is formed by the edges of conv(K'(V)) that are incident
to a single facet of conv(K'(V)) that is red with respect to V.3 The vertices and
edges of the dome which are not part of the crown are called internal. All these
definitions are illustrated in figure 13.10.
The following properties are best observed now and will be used later on.
1. The cup of V is star-shaped with respect to V. Its interior is contained in
the polyhedral region interior to P, since the segment VW joining V to any point
W inside this cup cannot intersect the polyhedron P in its relative interior.
1. Since V is convex, the cone of V can also be described as the convex hull of the rays cast
from V and that contain the edges incident to V, or equivalently as the set of points given by
positive combinations V + Σi ai(Wi − V) where the Wi's are the vertices of P adjacent to V
and a1, ..., ak are non-negative reals.
2. A facet F of a polytope C is red with respect to a point A if the hyperplane H that supports
C along F separates A from C: A belongs to the open half-space H- bounded by H which
does not intersect the polytope C (see chapter 8). On the other hand, if A belongs to the open
half-space H+ that contains the interior of C, then this facet F is blue with respect to A.
3. If the set K(V) of vertices is in general position, then the facets of conv(K'(V)) are either
red or blue, and the edges and vertices of the crown are the purple faces of conv(K'(V)) with
respect to V, in other words the edges incident both to a blue and a red facet, and the vertices
incident to purple edges. If the set K(V) is not in general position, then V may belong to the
plane supporting a facet F of conv(K'(V)) which is neither red nor blue.
Figure 13.11. A facet of the cup.
The algorithm
Let P be a polyhedron with n edges of which r are reflex edges. The algorithm
first normalizes the facets of this polyhedron so as to obtain a 2-triangulation
without flat vertices that has the same domain. For this, it suffices to merge
all the connected edges contained in a common line and all the adjacent facets
contained in a common plane into a polygonal region, possibly with holes. This
region can then be triangulated. The resulting polyhedron is 2-triangulated, still
has O(n) edges and the same r reflex edges, but no longer has flat vertices.
The algorithm then proceeds in two phases. In the first phase, or pull-off
phase, we remove the cups of certain free convex vertices with bounded degree.
The cups of these vertices can be triangulated easily and these triangulations
can be added back to the triangulation of the remaining polyhedron to yield a
triangulation of the original polyhedron. The algorithm keeps on pulling off the
cups of those vertices until the size of the remaining polyhedron is O(r). At this
point, the second phase of the algorithm computes the vertical decomposition,
as explained in the previous subsection, and this decomposition is triangulated
into O(r2 ) tetrahedra. So the only missing part is the description of the pull-off
phase, which we present now.
The pull-off phase is an iterative process. Let Pc be the current polyhedron.
A set of vertices of Pc is independent if its elements are pairwise not adjacent.
Recall that the degree of a vertex is the number of incident edges. The current
step consists of the following operations.
1. Compute a maximal independent set of vertices of Pc that are convex, free,
and of degree smaller than some constant g.
2. Remove those vertices and their attached cup from the polyhedron and its
corresponding polyhedral region.
Analysis
In the following string of lemmas, we prove that if r is the number of reflex edges
of the initial polyhedron P, then after the pull-off phase the resulting polyhedron
has size O(r) with exactly r reflex edges.
Lemma 13.3.2 If V and V' are two convex non-adjacent vertices of a polyhedron
P, no vertex of P can be an internal vertex of the domes of both V and V'.
Likewise, no edge of P can be an internal edge of the domes of both V and V'.
Proof. Let us first show that, if V and V' are two non-adjacent convex vertices
of P, the intersection of their cups has an empty interior and is the intersection of
their domes. For this, we show that the polyhedral region I that is the intersection
of the cup of V with the cup of V' cannot have a vertex that does not belong to
both domes. Since V and V' are not adjacent, neither may qualify as a vertex
of I. Owing to the definition of a cup, the interior of a cup does not contain a
vertex of P, so that any vertex of I is the intersection of an edge E of the cup of
V with a facet F of the cup of V', or the converse. Since the lateral boundary of
a cone is contained in dom(P), however, the facet F and the edge E necessarily
belong to the domes of V and V', which proves our assertion.
Suppose now that a vertex P of the polyhedron P is an internal vertex for
both domes of some non-adjacent convex vertices V and V'. Then P must be a
reflex vertex for the cup of V and a reflex vertex for the cup of V', hence there
must exist a half-ball centered at P contained in the cup of V and a half-ball
centered at P contained in the cup of V' respectively. Since the two interiors of
the cups are non-intersecting, these half-balls have an empty intersection and so
are bounded by the same plane. This shows that the vertex P is a singular face
of the polyhedron P, which is not allowed by the definition of a polyhedron.
For the second part of the lemma, it suffices to place a dummy vertex on the
edge E of P that supposedly is internal to the domes of both V and V'. The
above discussion brings out the contradiction. □
then interior to one of the four tetrahedra defined by Q and three of the points
in {X, Y, Z, T}, say XYZQ. Then P cannot belong to the dome of the fourth
vertex T, a contradiction. □
Lemma 13.3.5 For any reflex edge PQ of the polyhedron P, there are at most
two convex vertices V and W of P such that P and Q belong to the crowns of V
and W respectively, and such that PQ is an internal edge of the domes of both
V and W.
or WUZ, say UVZ. But then PQ cannot be an internal edge of the dome of W
because the relative interior of the triangle UVZ lies outside the dome of W. □
Lemma 13.3.6 A reflex vertex of the polyhedron P cannot at the same time
be internal to the dome of a convex vertex and on the crown of another convex
vertex.
Lemma 13.3.7 If P has r reflex edges, then at most 2r convex vertices are not
free.
Proof. The reflex edges of P may be put into one of three categories:
2. those which are internal edges of a dome with both vertices on the crown
of that dome,
3. the others.
These three categories are disjoint by virtue of lemma 13.3.6. If r1, r2, r3 are
the respective sizes of these classes, then
r1 + r2 + r3 = r.
The vertices of the edges in the first category are internal vertices of at most
2r1 domes. Indeed, lemma 13.3.4 states that the vertices of an edge in the first
category are internal vertices of at most six domes, but lemma 13.3.3 shows that
each dome will account for at least three edges in the first category. Finally,
lemma 13.3.5 shows that the edges in the second category are internal edges of at
most 2r2 domes. The total number of convex vertices which are not free is thus
at most 2r1 + 2r2 ≤ 2r. □
In what follows, we assume as before that the polyhedron does not have flat
vertices. The following two combinatorial lemmas prove the existence of a large
enough set of convex free vertices of bounded degree.
Proof. Euler's relation 11.2.4 and its corollary 11.2.5 state that
n ≥ m/3 + 2.
If the polyhedron P has r reflex edges, then at most 2r vertices are reflex and
the number nc of its convex vertices satisfies
nc ≥ m/3 + 2 − 2r.
Taking lemma 13.3.7 into account, the number n'(g) of free convex vertices with
degree at most g satisfies
Let us now come back to the analysis of the algorithm. Lemma 13.3.9 serves
to prove that each iteration in the pull-off process removes at least a constant
fraction of the vertices of the polyhedron. Indeed, if Pc, the current polyhedron,
has n vertices, m edges, and r reflex edges, we may assume that m > (1 + t)r
(otherwise the pull-off phase is over) and that P has at least s(m - r) free convex
vertices of degree at most g. Here g, t, and s are defined as in lemma 13.3.9. The
number n" of pulled-off vertices is thus at least
n" > t m
s t 3
> g--n = cn.
The last inequality follows from the fact that 2m > 3n since each vertex is incident
to at least three edges.
Let V be a vertex of P of degree at most g. Whether it is convex, and if so
the region conv(K(V)) \ conv(K'(V)), may be computed in constant time, and
its intersection with the reflex edges of P may be tested in time O(r). The set of
free convex vertices with degree at most g can thus be computed in time O(nr),
and a maximal independent subset can be extracted in time O(n). The cup of
each pulled-off vertex has complexity O(g), and can be triangulated in constant
time into O(g) tetrahedra. The polyhedron can also be patched up in constant
time. The number r of reflex edges is constant throughout the algorithm, so
an iteration in the pull-off phase has a complexity of O(r) times the number of
vertices in the current polyhedron.
13-4. Exercises 317
Remark. The size of the decomposition into tetrahedra produced by this al-
gorithm is optimal in the worst case up to a constant factor. Indeed, there are
polyhedra of size O(r) which cannot be decomposed into fewer than r2 convex
parts (see exercise 13.4).
13.4 Exercises
Exercise 13.1 (A non-triangulable polyhedron) Consider the six points {AI, B1 ,
A2, B2 , A3 , B3 } whose coordinates are given in section 13.2. Show that the eight triangles
AjBjA2 , AjBjA3 , A2 B2A1 , A2 B2 B3 , A3B 3B,, A3 B3 B2 , A1 B2 A3, and A2 B1 B3 define
a polyhedron, and that this polyhedron is not triangulable.
Exercise 13.2 (Pull-off) Show that the pull-off phase of the algorithm described in
section 13.3 may be implemented in time O((n + r 2 ) log r).
Exercise 13.3 (Flat vertices) State explicitly the algorithm that removes the flat ver-
tices of a polyhedron in linear time.
Hint: Show that any convex body contained in P intersects the volume between the
two hyperboloids z = xy and z = xy + e in a region of volume O(E).
318 Chapter 13. Thangulations in dimension 3
31 hpe 3 ragltosi ieso
Arrangements
By the arrangementof a finite set of curves or arcs in the plane, we mean the
decomposition of the plane induced by these curves or arcs. In Ed, we call arrange-
ments the decompositions induced by a finite set of hypersurfaces or portions of
hypersurfaces.
Arrangements play a central role and occur in many different applications. This
part is divided into three chapters. In chapter 14, we are interested primarily in
arrangements of hyperplanes. This interest is spurred mainly by two facts. A set
of points transforms into a set of hyperplanes by polarity, and the arrangement
of the hyperplanes contains several useful pieces of information on the set of
points. Also, arrangements of hyperplanes are particular cases of arrangements
of simplices, so the study of the former kind of arrangements provides interesting
combinatorial bounds for the latter.
The subsequent chapters investigate a few combinatorial and algorithmic no-
tions related to arrangements of line segments in the plane (chapter 15) and
arrangement of triangles in three-dimensional space (chapter 16). The central
problem in both chapters is to bound the combinatorial complexity of several
parts of these arrangements, and in particular to show that they may be of much
smaller complexity than the whole arrangement. Efficient algorithms to com-
pute these portions of arrangements are also sought. These studies are motivated
mainly by two applications: computing views and hidden surface removal in com-
puter graphics, and motion planning in robotics.
Chapter 14
Arrangements of hyperplanes
Hyperplane arrangements are the simplest arrangements one may think of. They
appear naturally in several applications and the results given below, which are of
mostly combinatorial nature, will be useful in the subsequent chapters.
The polarity that maps a set of points to a set of hyperplanes often transforms
a problem about points into a problem about hyperplanes. Many examples are
provided in the exercises. A preprocessing step for these problems often consists
of computing the arrangement of the corresponding set of hyperplanes. This
problem is discussed in section 14.4 which takes advantage of a combinatorial
result known as the zone theorem, given in section 14.3.
An interesting correspondence between hyperplane arrangements and a cer-
tain kind of polytopes, called zonotopes, also sheds more light on problems from
crystallography, architecture, or mixture design (see exercises 14.8 and 14.9).
Section 14.5 introduces the notion of levels in hyperplane arrangements, which
is central to our analysis of higher-order Voronoi diagrams, studied in chapter 17.
14.1 Definitions
Let 7i be a set of n hyperplanes in Ed. The intersection of a finite number of half-
spaces is a bounded or unbounded polytope, and so 7H induces a decomposition
of Ed into a collection of bounded or unbounded polytopes with pairwise disjoint
interiors. These polytopes and their faces form a pure cell complex of dimension
d which we call the d-arrangement of 7-, or more simply the arrangement of 7H if
d is clearly understood. This cell complex is denoted by A(X).
From now on, we often use the notions of a set of hyperplanes in general
position, or of a simple arrangement. A set Al of n hyperplanes is said to lie
in general position if the intersection of any k < d of them is an affine space
of dimension d - k, and if moreover the intersection of any d + 1 of them is
322 Chapter 14. Arrangements of hyperplanes
Lemma 14.2.1 Any k-face F in A(Ht \ H) that intersects H gives rise in A(7l)
to a (k - 1)-face, F n H, and to two k-faces, F n H+ and F n H-, where H+
and H- are the two half-spaces bounded by H. For a given H, all the k-faces of
A(-H) that are not faces in A4(H \ H) can be obtained in this way only once.
By counting separately the faces of A(X) that do not intersect H, those that
are contained in H, and those that intersect H but are not contained in H, the
previous lemma yields
nd(7- n H) = 0 (14.2)
nk(R) = nk(Ht \ H) + nk(Hl n H) + nk-l(7H n H). (14.3)
So we are left with the slightly simpler problem of counting the k-faces in a
simple k-arrangement of k hyperplanes. Under the general position assumption,
such an arrangement has precisely one vertex, S, at the intersection of the k
hyperplanes, and every face in the arrangement contains this vertex. This can
be shown by induction on the dimension of the faces. If an edge is incident to no
vertex, then it must be a line contained in k - 1 hyperplanes that has an empty
intersection with the k-th hyperplane, and the intersection of the k hyperplanes
is empty, contradicting the general position assumption. Suppose that the j-faces
in the arrangement all have S as a vertex, for j = 1, . . . , i - 1. Then any i-face F
that does not have a vertex cannot have a subface either, since these faces have
a vertex, namely S. Then F is an affine (k - i)-space contained in i hyperplanes
which does not intersect any of the other k - i hyperplanes. This again contradicts
the general position assumption and proves our statement.
By induction on k, we can show that nk(k, k) = 2 k. This is obviously true
for k = 1 and k = 2. Assume by induction that it is true for any k' < k and
consider a hyperplane H in 'H. Using the notation as above, the (k - 1)-faces of
A(1- n H) are contained in exactly two k-faces of A(X) and each k-face contains
only one (k - 1)-face of A(1- n H). Therefore, A(7i) has exactly twice as many
k-faces as A(1H n H) has (k - l)-faces. The first part of the lemma is thus proved
by induction. The second part is immediate if one considers the d hyperplanes
containing the given vertex. 0
The lemma can be used to verify Euler's relation for d hyperplanes:
This provides the base case for the inductive proof of Euler's relation, the induc-
tion progressing with an increasing number of hyperplanes. Relations 14.1 and
14.2 finish the proof. 1
The following lemma is analogous to lemma 7.1.14 for simple polytopes. It
serves in particular to show the Dehn-Sommerville relations (see exercise 14.2),
as was the case in chapter 7.
Lemma 14.2.4 Any i-face of a simple d-arrangementA(1t) is contained in ex-
actly
(d-i
Proof. For a simple arrangement, the number of k-faces that contain a given
vertex is
n, = 2 k ( ) 0
0< k < d,
Proof. Let Z be the line that defines the zone. Without loss of generality, we
may choose a coordinate system such that the x-axis is supported by Z. We
may also assume that the lines in the arrangement and Z are in general position,
meaning that any two lines in this set intersect in exactly one point and no
three lines have a common intersection. A perturbation argument shows that
the complexity of the zone is maximized in this case (as is done in the proof of
theorem 14.2.5).
Let C be a 2-face in the zone of Z, F an edge of C, and (F, C) the corresponding
side of the zone. Let H(F, C) be the half-plane that contains C and is bounded
by the line that is the affine hull of F. The side (F, C) is called a left side if
H(F, C) contains the point (+oo, 0), a right side otherwise. Since no line in X
is parallel to the x-axis, a side may not be simultaneously left and right. We
show that the total number of left sides in a zone is at most 3n, and a symmetric
argument shows the desired result.
The proof goes by induction on the number of lines. The result is trivial for
n = 1. Let H be the line in the arrangement whose intersection with Z has the
greatest abscissa (see figure 14.2). By induction, the total number of left sides in
the 2-faces of the zone of Z in A(N \ H) does not exceed 3n - 3. Let C be the
2-face of this zone that contains the point (+ox, 0). The intersection F of C and
H is a line segment (with endpoints A and B) or a half-line (ending at A). When
adding H, (F, C) becomes a new left side and the left sides that contained A,
resp. B (if it exists), are both cut into two left sides each. Note that because H is
the line in the arrangement whose intersection with Z has the greatest abscissa,
(F, C) is the only left side supported by H in the zone of Z in A(7-). The overall
number of left sides increases by at most 3, proving the above statement and
the lemma. The bound is asymptotically tight whenever XHU {Z} is in general
position. z
14.3. The zone theorem 327
The preceding lemma serves as the basis for an inductive argument that proves
the zone theorem in the general case.
Theorem 14.3.2 (Zone theorem) The complexity of any zone in the d-arran-
gement A(N) of n hyperplanes in Ed is e(nd-l) if 'HU {Z} is in generalposition.
If not, this complexity is still O(nd-1).
Casel: HnC=@
Then (F, C) gives rise to a single k-side of Z(H), namely (F, C) itself.
C n HF. If Z intersects CF, then (F, C) gives rise to a k-side of Z(XH), namely
(F, CF). Otherwise (F, C) does not correspond to a side in Z(-).
Indeed, a k-side of Z(Ht) is counted each time H is not one of the hyperplanes
that contains its k-face, and this happens n - (d - k) times.
Let us denote by Zk(n, d) (or by Zk when n and d are clearly understood) the
maximum value of Zk(X) over all d-arrangements of n hyperplanes and all choices
of Z. Inequality 14.5 can be rewritten as
An induction on d now yields the asymptotic bound on wk(n, d) and thus for
Zk(n, d). The base case is given by lemma 14.3.1 for the case d = 2 which can be
stated as
Zk(n, 2) = 0(n).
Suppose now that d > 2 and that Zk'(n d') = O(nd'-1) for all k' < d' < d,
the constant in the big-oh notation depending on d' and k'. Then wk'(n,d') =
O(nk' -1), and we infer from 14.8 that
n-1
Wk(n,d) <Wk(d-k+1,d)+ Z Q(ik-2).
i=d-k+1
Hi
the ray supported by Hi originating at Io. The cells intersected by the opposite
ray are obtained similarly, starting with C' instead of C, and this completes the
update phase for 5. We easily verify that this update operation is carried out in
time proportional to the number of edges in the zone of Hi.
Lemma 14.3.1 therefore implies the following theorem.
Theorem 14.4.1 An arrangement of n lines in the plane may be computed in-
crementally in time O(n 2 ).
Let 52 be the 2-skeleton of A'. The active faces of g2 and the faces of 52 nH
are in one-to-one correspondence. But g2 n H is the 1-skeleton of the simple
(d - 1)-arrangement formed in H by the intersection of the hyperplanes in Nt
with H, hence 5 2 n H is connected. This implies the following lemma.
Lemma 14.4.3 The sub-graph of the incidence graph induced on the active 1-
faces and 2-faces of A' is connected.
The preceding two lemmas allow us to find all the active faces knowing only the
active edges. The algorithm can now be described. It proceeds in three phases.
We denote by H+ and H- the two half-spaces bounded by H.
Phase 1. Find an edge of A' that cuts H. For this, we start from any edge E
in A'. If E intersects H, we are done, otherwise we traverse the edges on the line
A that is the affine hull of E. Let A be the vertex of E that is the closest to H,
and E' the other edge on A that contains A as a vertex. Replacing E by E' and
iterating eventually leads to the edge on A intersected by H.
Phase 2. Mark the 1 and 2-faces of A' intersected by H as active. This can
be achieved by using a list L of active edges. The list initially contains the edge
found in phase 1. While L is not empty, extract an edge E from it. All its
incident 2-faces that have not yet been marked as active are considered in turn.
Consider such a face C. Using the incidence graph, the edges of C are traversed
until the other edge of C intersected by H is found. If no edge other than E
intersects H, then skip to another 2-face. If such an edge is found, however, then
it is inserted into C and marked as active. Phase 2 is over when the list L is
empty and no other 2-face is to be considered. Lemma 14.4.3 shows that this
traversal identifies all the active 1 and 2-faces.
Phase 3. Replace the active faces of A' and update the incidence relationships
according to lemma 14.2.1. More precisely, let F be an active 2-face of A', E and
E' the two incident active edges (the case where only one active edge is incident
is handled similarly). Denote by H+ and H- the two half-spaces bounded by
H (see figure 14.4). Create two new vertices E n H and E' n H, five new edges
Eo = FnH, E+ = EnH+, E- = EnH-, E'+ = E'nH+, E'- = E'nH-, and
update their incidence relationships. Create the 2-face F n H+ incident to the
edges of F contained in H+, to E+, E'+, and to E0 . Similarly, create the 2-face
F n H- incident to the edges of F contained in H-, to E+, E'+, and to E 0 .
It remains to create the 2-faces, which are intersections of 3-faces of A' and
H. These 3-faces are not represented explicitly in the data structure. It is
easy, however, to reconstruct the 2-faces of A' contained in H, as well as their
14.5. Levels in hyperplane arrangements 333
incidence relationships, starting with the vertices and edges just created in H:
merely observe that these vertices and edges form the 1-skeleton of a (d - 1)-
arrangement (see exercise 14.4).
The analysis of this algorithm is based on the zone theorem. Phase 1 examines
only vertices contained in A and their incident edges. There are only n - d + 1
such vertices, and any vertex is contained in only 2d edges as was proved in lemma
14.2.4. The complexity of phase 1 is thus 0(n).
Phases 2 and 3 require time proportional to the number of faces traversed:
these are the 0-, 1-, and 2-faces of the zone of a hyperplane. The zone theorem
implies that the complexity of these phases is e9(nd-l). We have thus proved the
following theorem.
3 2 2
Figure 14.5. Levels of facets in an arrangement of lines. The surface of level 1 is shown in
bold.
The framework
with other randomized algorithms given in this book, the algorithm is interested
in maintaining a subset only of the regions defined and without conflict (the edges
of level at most k).
In a first step, the first d hyperplanes are inserted, the incidence graph of the
0, 1, and 2-faces in the first k levels of their arrangement is computed, and the
corresponding influence graph is initialized accordingly.
In the current step, a new hyperplane H is inserted. The incremental step
consists of a location phase, and of an update phase which can itself be split into
three phases.
Locating. The location phase, using a simple traversal of the influence graph,
identifies all the nodes in the graph that conflict with H, which correspond to
the edges in A'<k intersected by H. If no edge in A'k is intersected by H, then
the hyperplane H does not contain a face in the current complex A<k of the first
k levels, nor in any of the subsequent complexes. So the algorithm may skip to
the next incremental step.
Updating. 1. Creating the new faces. In the location phase, all the edges
of A4 !<k intersected by H have been found. Each such edge is cut by H into two
parts and generates two new edges E n H+ and E n H-, where H+ is the open
half-space bounded by H that contains the reference point 0, and H- is the
opposite half-space (see figure 14.6). Let us say that these new edges are of type
1. Each 2-face F of A'<k intersected by H also generates a new edge, F n H.
Let us say that this new edge is of type 2. We compute such an edge in the
following fashion. Let E be an edge in A<k that is intersected by H, and A its
intersection point with H. Then all the 2-faces incident to E are intersected by
H and generate an edge of type 2. Consider such a 2-face F. Following the part
of the boundary of F that is contained in H-, we can look for the other edge E'
incident to F that is intersected by H. If E' does not exist, this means that the
edge of type 2 created by F is a semi-infinite ray originating at A. Otherwise,
we find an edge E' incident to F and intersecting H at a point B. Then the
edge of type 2 created by F is the segment AB. In either case, the incidence
relationships between these new edges and vertices are easily taken care of.
We must also update the 2-faces of A<k. We have just seen how to find the
2-faces intersected by H. Such a face F is replaced by the two new faces F n H+
and F n H-, which we may again call of type 1, and the incidence relationships
between these 2-faces and the new edges are updated correspondingly. More
tricky is the case of new 2-faces, which we may call of type 2, appearing as the
intersection of a 3-face with H. Even though the 3-faces are not stored explicitly
in the structure, it is not difficult (see exercise 14.4) to reconstruct the 2-faces of
type 2 and their incidence relationships, observing that their boundary is made
338 Chapter 14. Arrangements of hyperplanes
338 Chapter 1. Arrangements of hyperplanes
E' (type 1)
up of edges that are contained in H and thus of type 2, and these edges have all
been created and their incidence relationships have been computed (see exercise
14.4).
Finally, the levels of the faces have changed by at most 1. We update the
markers for the edges and for the 2-faces at level k, and collect the created edges
that are at level k + 1 into a list Lk+1 of edges. More precisely, an edge E at level
k in A<k that is intersected by H generates two new edges of type 1 at levels
k and k + 1 respectively. The latter is inserted into Lk+l. Similarly, a 2-face
at level k in A'<k that is intersected by H generates two new faces of type 1, at
levels k and k + 1 respectively. They share a common incident edge (of type 2)
that is at level k and so is marked. Note that the new edges of type 1 obtained
as the intersection of H- and of edges at level k - 1 in A' are also at level k in
A. So are the edges at level k - 1 in A' entirely contained in H-. Nevertheless,
none have been marked yet. These marks will be given in the next phase.
2. Peeling faces at levels greater than k. The faces created in the previous
phase do not all belong to A<k. The peeling process removes all the faces at levels
greater than k from the incidence graph. Of course, this process is not needed if
the current subset of hyperplanes has fewer than k + d - 1 members. The faces to
be removed consist of faces that have been created during this incremental step,
or of faces of A<k that are contained in H-.
As we recall, the algorithm keeps a pointer K to some face at level k in A'<k.
The peeling process traverses the incidence graph of the set Ak+l of constructed
faces at levels greater than k. These faces are necessarily contained in H-. If
Lk+1 is empty and if the face pointed to by K is contained in H-, then Ak+j does
not intersect H, and its incidence graph is connected. The pointer K may be
used as the starting point for a traversal of Ak+l. Otherwise, the incidence graph
of Ak+1 may have several connected components. Nevertheless, each connected
14.5. Levels in hyperplane arrangements 339
component contains one of the new edges of type 1 stored in the list Lk+l, which
may be used as a starting point for a traversal of this connected component.
During the traversal, we can also mark the faces at level k that have not been
marked yet, since these faces are incident to at least one face of Ak+l1
3. Updating the influence graph. We must also show how to update the
influence graph. A new node is created in the graph for each new edge that
was not removed during the peeling process. A node corresponding to a new
edge of type 1 becomes the child of the node corresponding to the edge of A'k
that contains it. A node corresponding to a new edge of type 2, obtained by
intersecting a 2-face F in A'<k with H, has for parents all the nodes corresponding
to edges of F either contained in H- or intersecting H.
Let us first estimate the number of nodes in the influence graph. We first bound
the number of edges created by the algorithm, by bounding the average number
of vertices in the whole arrangement that were created by the algorithm at some
point, then using the fact that a vertex is incident to 2d edges.
if j < k, pj 1
Proof. The proof is trivial if j < k, since the algorithm always creates all the
vertices at levels at most k. Let P be a vertex in the arrangement, let D be
the set of the d hyperplanes that intersect at P, and let C be the set of the j
hyperplanes that determine the level of P, meaning that they separate P from
the origin 0. If j > k, then P is created if and only if, among the hyperplanes
in D U C, the first k + d inserted by the algorithm all belong to D. This happens
exactly with the probability stated above. D
Let us denote by Sj, resp. S<j, the set of vertices at level j, resp. at most j, in
A. The expected number s of vertices created by the algorithm is thus
n-d
s = Elsjlpj
j=o
340 Chapter 14. Arrangements of hyperplanes
n-d
= 1SOlpO + Z(ISjI -1S<3j-)P
j=1
n-d-1
- E IS<jl(Pj-Pj+1)+IS<n-dlPn-d-
j=o
Theorem 14.5.1 and lemma 14.5.2 can then be used to bound the expected number
of vertices created by the algorithm, yielding
Each time a vertex is created, at most 2d incident edges are created. Thus, the ex-
pected number of edges created by the algorithm and hence the expected number
of nodes in the influence graph are both bounded similarly by 0(nld/21krd/2 1).
To bound the number of arcs in the influence graph, we observe that when-
ever the node of a child corresponds to an edge of type 2, the level of the edge
corresponding to the parent node increases by 1. The level of a face cannot be
greater than k + 1 otherwise it is removed from the graph, so that a node in the
graph cannot have more than k children of type 2, and so not more than k + 1
children overall. The total space required to store the influence graph is thus
0(nLd/2J kOd/ 2 1+l).
Updating the incidence graph requires as many elementary operations as there
are arcs in the created incidence graph. This number is at most k times the
number of created edges, since a created edge is traversed at most k times and
has at most k children.
The number of faces traversed during the peeling phase is proportional to the
number of faces at level k + 1, which must have been created in a previous
incremental step.
Updating the influence graph has a complexity proportional to the number of
arcs in the final graph. Hence, the total combined cost of all the update operations
is bounded by the overall number of arcs created, which we have shown to be
0(nld/2i krd/21+1).
It remains to estimate the cost incurred by the location phases. Each node in
the influence graph having at most k + 1 children, the number of nodes visited
during the location phases is bounded by k + 1 times the number of conflicts
detected during these phases. To estimate the expected number of conflicts de-
tected by the algorithm, we first compute the probability Pji that an edge with
14-5. Levels in hyperplane arrangements 341
k- jd I d+ 1
- =o (j+d
I+d 8j-1
J
Observing that
j- jai
(d + )! (i -1)! I)d+1
(d+i)! 1E0 (j+d)j-
and with the convention that (i i) vanishes for I > j -i, we can write in
either case
P'jz(k-1i+ 1 +d-ji)
Pji
1=0 ( j+ d j
+ d}
342 Chapter 14. Arrangements of hyperplanes
n-d i
c=E |Sj|I d (i -l)pji.
j=O i=1
i k-1
gm=E(i- 1)
i=1 1=0
i j-1
Z(i
ti 1
-1)
( j-i)
I J
= E(j-1-i)
i=o
j-1 m-1
(i)
= LE 1: m=O i=O
m=OY
- i( )
= 1+2J)'
so that
( j+d )
1+d J
k-1
1) l-1 j! (I + d)!
= ld +
j+d (j+d-1)!(1+2)!
1=0
Put
k-I j (1 +d)!
gj' = E(d + 1) (j + d -1)! (I + 2)!
1=0
Since gj < g,.we can restrict our attention to gj, for which we have
k-It k +d )
d+ 1 1 (- 1 +d) = d+l ( d -1J
9i d- 1 jj+d- 1 E d -2J d-l1 j+d- 1 '
( d-1 J 1=01
14. 6. Exercises 343
We can perform an Abel transform analogous to that giving the expected num-
ber of vertices created during the execution of the algorithm, to obtain
n-d
c < SIdg'
I
j=0
n-d-1
< , lS<jI
d (gj -. j+q) + d IS<n-df an-d,
j=k
from which we derive, using theorem 14.5.1 once again, that the expected number
of conflicts detected by the algorithm is O(nk log(n/k)) if d = 2, O(nk 2 log(n/k))
if d = 3, and 0(nLd/2ikrd/21) if d > 4. The overall complexity of all the lo-
cation sub-phases in all incremental steps is thus O(nk2 log(n/k)) if d = 2,
O(nk3 log(n/k)) if d = 3, and O(nLd/2Jkrd/21+1) if d > 4.
The algorithm that computes the first k levels in the d-arrangement of n hyper-
planes, as described in this section, therefore has complexity Q(n[d/2J kfd/21+1).
This result may be somewhat improved as in exercise 14.21. A reference is
given in the bibliographical notes. This proves the following theorem.
14.6 Exercises
Exercise 14.1 (Projective arrangements and polytopes) Prove all the results in
section 14.2 using the combinatorial results on polytopes derived in chapter 11.
d 1 + (_l)d
St-l)kpk(A) = 2(
k=O
Exercise 14.2 (Dehn-Sommerville relations) Show that the following relations are
satisfied for simple d-arrangements:
Z(-l)j2k i k ) ni = nk.
Hint: Use the correspondence described in exercise 14.1 between arrangements and
spherical polytopes. The exercise follows from an easy adaptation of the proof of theorem
7.2.2, using lemma 14.2.4 in place of lemma 7.1.14.
Exercise 14.3 (Number of faces) Unlike polytopes, the number of k-faces of a simple
d-arrangement of n hyperplanes depends only on n, d, and k, and not on the relative posi-
tions of the hyperplanes. Show that the number Pk (n, d) of k-faces in the d-arrangement
of n projective hyperplanes (see exercise 14.1) satisfies
Pk d( n ) k ( n-d-1+ k)
Hint: Proceed by rebuilding the faces in order of increasing dimension. Under general
position assumptions, a k-face F incident to a (k - 1)-face G is obtained by relaxing a
hyperplane. To know on which side of this hyperplane F is contained, observe that F is
contained in the intersection of d - k half-spaces bounded by all but one of the d - k + 1
hyperplanes whose intersection contains G. When creating a k-face F, we update the
incidence relations between F and its sub-faces of dimension k - 1 (already created).
The incidence relations between F and the faces of dimension k + 1 will be taken care of
when processing the (k + 1)-faces.
Hint: Let S 1 , ... , Sn be the n segments that define Z. Assume that the segments are
centrally symmetric through the origin, by translating them if necessary, so that Si has
endpoints Ai and -Ai. Denote by Z# the polytope polar to Z. From 7.17, deduce that
n
#= {X: IX Ail < 11.
i-l
Let F be the face defined by
F = Si, eD3
.. e Si, ± ±iAi + .+ + sAin (14.9)
where =i1, j r + 1, . , n. Show that the face F# is
F# = {X Z# X -XAi,
= 0 for j =1,...,r, and X (Ai,+, + + Aij =1}.
From this, it follows that the faces of Z# corresponding to faces of Z that contain a
translate of Si are themselves contained in the hyperplane Hi = {X E Ed: X *Ai = 0}.
The correspondence between d-arrangements of hyperplanes passing through the origin
and projective (d - 1)-arrangements finishes the proof.
346 Chapter 14. Arrangements of hyperplanes
Exercise 14.9 (The painter's problem) A painter has n buckets at his disposal, each
containing a mixture of some basic colors Cl,.. ., Cp. The i-th mixture is characterized
by the proportions of each color, which can be modeled as a point Si in a space EP. A
product is obtained by blending some mixtures together. Show that the set of feasible
products, meaning that they can be derived from the mixtures in the n buckets, can be
characterized by the zonotope Z (see exercise 14.8) built on the segments Si. Design an
algorithm that computes the feasibility of a product using only the mixtures in the n
buckets. Note that the solution is not unique in general. In the case of binary blends
(p = 2), there is an optimal way to compose a product M such that the residualset of
products, which are still feasible after the necessary quantities of mixtures have been used
to make M, is maximal with respect to inclusion. Devise an algorithm that computes
this optimal mixture.
Hint: After the product M is created, the residual set of feasible products is contained
in Zr= Z n (Z - M). If p = 2, then Zr is a zonogon (2-zonotope).
Exercise 14.10 (Ray shooting) Consider a set of n disjoint line segments in the plane.
A ray shooting query, given a point and a vector, asks for the first segment visible from
this point in the direction given by the vector. Equivalently, given a ray, the query asks
for the first segment that intersects this ray. Show how to preprocess a data structure in
time and space 0(n 2 ), so that ray shooting queries can be answered in time 0(logn).
Hint: Let D be the line that supports the ray. Using polarity, show that finding the
segments intersected by D is equivalent to locating the point D* dual to D in an arrange-
ment A of 2n lines. Show also that, for any point A* in a cell of A, the order in which
the segments intersect the dual line /\ is identical. Thus, to a cell A in A corresponds
an ordered list L(A) of segments. If we store this list in an auxiliary structure (such as
a dictionary or a balanced binary tree), then once the cell containing D* is found, the
ray shooting query can be answered in additional time 0(logn). For locating that cell,
we use the structure described in exercise 12.2. The size and preprocessing time of the
entire data structure is thus 0(n 3 ), for a query time of 0(log n). To save a factor n, we
must store the lists more compactly. Notice that these lists differ only by one element
for two adjacent cells. We may therefore use a persistent dictionary (see exercise 2.7).
Exercise 14.11 (Queries in the plane) Consider a set P of n points in the plane.
Show how to preprocess P into a structure using space 0(n), so as to retrieve a point
belonging to any half-plane H+, bounded by a query line H and containing the origin,
in time 0(log n) per query. Show that all these points may then be retrieved in time
proportional to their number.
Hint: Peel the consecutive layers Eo, . . £jk.of P (as is done in exercise 9.3), and build
their vertical decomposition. Polarity with respect to a point inside Sk transforms these
nested polygons into another set of nested polygons E ..... ,.£ and H into a point H*.
The point-location structure of exercise 12.2 can be used to find in logarithmic time the
greatest i such that H* does not belong to Si, which is also the smallest i such that H
intersects Si. A point on Si that belongs to H+ can be found in logarithmic time as well,
and the other points can be retrieved by following the boundary chain of a layer or by
using the vertical decomposition to go from layer to layer.
14.6. Exercises 347
Hint: Consider again the dual problem, by using the polarity with respect to the point
(0, . . . , 0, +oo). The problem is thus to find the hyperplanes dual to the points in P
lying above the pole H* of H. Build a hierarchical structure with O(logn) layers. The
topmost layer represents the canonical triangulation (see exercise 14.7) of the cell at level
0 (with respect to the center of the polarity) in the arrangement of a small sample of
the dual hyperplanes. (These cells being unbounded, the d-simplices in the triangulation
are more appropriately cylinders based on (d -1)-simplices.) Using the tail estimates of
exercises 4.5 and 4.6, show that if H is the set of hyperplanes and 1? a random sample
of constant size r drawn from N, then with high probability no cylinder in the canonical
triangulation of the cell at level 0 in the arrangement of 1Z intersects more than 0( ' log r)
hyperplanes in 'H \ 1Z. Taking r big enough, show that the recurrence on the size of the
resulting structure solves to O(nLd/2j+,) (where E depends on r), and that the structure
has 0(logn) layers.
To answer a query, we traverse the structure from the first layer and recursively deter-
mine all the hyperplanes lying above H*. Let 7? be the sample attached to the current
layer and let S be the cylinder in the triangulation of the arrangement of 1Z that is
intersected by the vertical ray originating at H*. If H* belongs to S, then we recur-
sively determine all the hyperplanes crossing S that lie above H* in S. Otherwise, we
systematically test H* against the n hyperplanes, in time 0(n). Let k be the number of
hyperplanes lying above H*. Point H* lies in no simplex of the canonical triangulation
of the cell at level 0 in the arrangement of R if and only if 7? contains at least one of
the k hyperplanes lying above H*. This happens with probability O(k/n) since 1Z is of
size r = 0(1). Since the structure has 0(logn) layers, the expected time for a query is
O(k log n) as wanted.
Exercise 14.13 (Computing a zone) Show how to compute the zone of a hyperplane
in the arrangement of n hyperplanes in Ed in time 0(nd-1 + n log n) and space (nd-1 ).
Hint: Adapt the algorithm that computes a cell in the arrangement of segments or trian-
gles as described in section 15.4 and subsection 16.4.3, and use the canonical triangulation
of the arrangement defined in exercise 14.7.
Exercise 14.14 (Convex hull of an arrangement) Given n lines, show how to com-
pute the convex hull of all the vertices of their arrangement in O(n log n) time.
Hint: Only the two extreme vertices on each line may be on the convex hull, so this
convex hull has complexity 0(n). Computing the zone of the line at infinity (see exercise
14.13) and removing the infinite edges yields a simple polygonal line with 0(n) vertices,
whose convex hull is exactly the convex hull of the arrangement.
348 Chapter 14. Arrangements of hyperplanes
Hint: We reason in the dual space where an oriented line D of equation y cos 0 - x sin 0 -
6 = 0 ( E t-7r, +7r[) is represented by the point (0,6). To each point Mi corresponds
the pencil of lines passing through this point, and the dual of that pencil is represented
by the line dual to Mi. Thus the duals of the points M 1 ,... , M,, form an arrangement
of dual lines. A breadth-first search traversal of the oriented graph of the vertices of this
arrangement whose arcs correspond to the edges of the arrangement oriented towards
increasing abscissae yields the desired total order.
Exercise 14.16 (Visibility graphs) Consider a set S of n segments in the plane with
disjoint interiors. The visibility graph of these segments is the graph whose nodes are
endpoints of segments and whose arcs join two nodes that are visible one from another,
meaning that the segment joining these two endpoints does not cross any of the other
segments. Show that the visibility graph of S can be computed in time O((n + k) log n),
where k is the size of the visibility graph. When k is high, show how to improve the
complexity to 0(n 2 ) which is optimal in the worst-case.
Hint: Compute the downward vertical decomposition of the segments by erecting walls
hanging below each endpoint. Then rotate the direction of the decomposition from -7r/2
to 37r/2 while maintaining the oriented decomposition. The visibility graph can be com-
puted in the process, since two visible endpoints will share a trapezoid for some orienta-
tion. In order to process the events during the rotation in the correct order, sort them in
a priority queue (as was done for computing segment intersections by a sweep algorithm
in subsection 3.2.2). In order to achieve time O(n 2 ), sort all the lines that connect two
endpoints by polar angle as in exercise 14.15.
Hint: We call a k-segment an oriented line segment AiAj such that the open half-plane
on the right of the line joining Ai to Aj contains exactly k points of A.
1. Let a line A sweep the plane parallel to the y-axis while maintaining the number
of k-segments intersected by A. Show that this number may increase or decrease by at
most one, each time A sweeps over a point of A.
2. Let A be an oriented line containing no points of A, and let Al and Ar be the
half-planes respectively to the left and right of A. Use the previous result to show by
induction on k that there are at most min(k + 1, L[J) k-segments crossing A that are
segments of the form AiAj, Ai E Al and Aj E Ar.
3. Draw n - 1 vertical lines D 1 ,..., Dn- 1 that divide the plane into strips each con-
taining one point of A. Among these lines, pick p - 1 and call them A 1, . .. , Ap-1, such
that each of the p vertical strips they define contains at most [n] points of A. The
number of k-segments that do not cross any of the Ai, i = 1,... ,p - 1 is at most
( Fn1 n2 + np
2 2p
The other k-segments must cut a line Ai and their number may be bounded using the
result in 2. The bound on the number of k-sets is obtained by optimizing the choice of
the parameter p.
Summing over all j < k, we obtain a bound of O(k'n) on the number of all j-sets for
0 < j < k. This bound is not tight, as shown by theorem 14.5.1.
Exercise 14.20 (A lower bound on the k-level) Show that the k-level of the d-
arrangement of n hyperplanes may have as many as Q(nLd/2 ]kfd/ 2 1-1) faces of all di-
mensions.
Hint: Consider the dual problem of bounding the number of j-sets for a set P of n
points in Ed (see exercise 14.18). To establish the lower bound, we place the points in
P on the moment curve M (see subsection 7.2.4). Any subset D of d points in M splits
M into d + 1 arcs Ml, . . . , Md+ and induces a decomposition of P into d + 1 subsets
Pi = P n Mi, i = 1, .d. , d + 1. The sets Pi are alternately on one or on the other side
of the hyperplane affine hull of D. The problem is now to count the subsets D of P such
that
Z 1P
21+11 =j or E P21 1=j.
1=1 1=1
For this, show that the number of ways to split an ordered set of s elements into r ordered
subsets is
1s+
Then deduce that the number of j-sets is (nLd/2J kfd/21 -1), and that summing over all
j < k yields a lower bound on the complexity of A~k which is identical to the upper
bound shown in theorem 14.5.1.
Exercise 14.21 (Lazy computation of the first k levels) Adapt the randomized
algorithm of subsection 15.4.2 to compute the first k levels in the arrangement of n
350 Chapter 14. Arrangements of hyperplanes
Hint: The peeling phase which is systematically processed in the algorithm of subsection
14.5.3 may be replaced by lazy clean-up operations between some well-chosen incremental
steps. Use the canonical triangulation (see exercise 14.7).
Exercise 14.22 (Easier computation of the first k levels) Show how to simplify
the algorithm that builds the first k levels in the d-arrangement of k hyperplanes when
all the hyperplanes contain a face at level 0.
Hint: It suffices to find the first conflict with the polytope at level 0 (a problem analogous
to that of finding the conflicts to build a convex hull), and then to use the adjacency
graph to detect the other conflicts.
Hint: Without loss of generality, H- may account for points of B and H+ for 7R. A
blue point B (resp. red point R) will be accounted for by the wrong set if the pole H*
of H belongs to B*+ (resp. R*-). The question is now equivalent to finding the sum of
the levels of H* in the dual arrangements of B and 7?.
and Straus 1103) and generalized to higher dimensions in the book by Edelsbrunner [89].
The result in exercise 14.19 was slightly improved by Pach, Steiger, and Szemeredi [186].
The algorithm that computes the first k levels is due to Mulmuley [174] (see also [177)).
The algorithm as outlined in exercise 14.21 is fully described (and slightly improved in
dimensions 2 and 3) by Agarwal, de Berg, Matougek, and Schwarzkopf [2]. A determin-
istic optimal algorithm is given in the planar case by Everett, Robert, and van Kreveld
[104], who also give a better solution to exercise 14.23 in the planar case.
Query problems, for a large part unexplored in this book, have spurred a lot of research.
A recent account can be found in the books by Agarwal [1] or by Mulmuley [177]. Variants
of exercises 14.10 and 14.12 are solved in these books, and the solution to exercise 14.11
is due to Chazelle, Guibas, and Lee [55]. Bronnimann [35] explains how to achieve point
location in a polytope with preprocessing time O(nlogn + nLd/2j), storage 0(nLd/2l),
and query time O(log 2 n).
Computing visibility graphs and shortest paths have also motivated a lot of research.
Recent developments and references can be found in the articles by Pocchiola and Vegter
[188] and by Hershberger and Suri [127].
Chapter 15
In an arrangement of n lines in the plane, all the cells are convex and thus have
complexity O(n). Moreover, given a point A, the cell in the arrangement that
contains A can be computed in time e(n log n): indeed, the problem reduces to
computing the intersection of n half-planes bounded by the lines and containing
A (see theorem 7.1.10).
In this chapter, we study arrangements of line segments in the plane. Consider
a set S of n line segments in the plane. The arrangement of S includes cells,
edges, and vertices of the planar subdivision of the plane induced by S, and their
incidence relationships.
Computing the arrangement of S can be achieved in time O(n log n+k) where k
is the number of intersection points (see sections 3.3 and 5.3.2, and theorem 5.2.5).
All the pairs of segments may intersect, so in the worst case we have k = 2(n2 ).
For a few applications, only a cell in this arrangement is needed. This is notably
the case in robotics, for a polygonal robot moving amidst polygonal obstacles by
translation (see exercise 15.6). The reachable positions are characterized by lying
in a single cell of the arrangement of those line segments that correspond to the
set of positions of the robot when a vertex of the robot slides along the edge of
an obstacle, or when the edge of a robot maintains contact with an obstacle at a
point. Since the robot may not cross over an obstacle, it is constrained in always
lying inside the same cell of this arrangement. It is therefore important to bound
the complexity of such a cell and to avoid computing the whole arrangement.
Among the cells of A(S), a few contain the endpoints of some segments, and the
others do not. The latter are naturally convex cells, their complexity is O(n) and
each can be computed in time 0(n log n). The complexity of the former cells,
however, is more difficult to analyze.
15.1. Faces in an arrangement 353
2. For each two symbols a, b in the alphabet, the alternating sequence of length
s + 2 is not a subsequence of this word.
'ESESE' (the other subsequences 'ENENE' and 'ECECE' are also suitable).
The sequence 'A DAVENPORT-SCHINZEL SEQUENCE' is thus a
(26, 4)-Davenport-Schinzel sequence!
Denote by A,(n) the maximal length of an (n, s)-Davenport-Schinzel sequence.
First of all, it is not even clear that AS(n) is finite. In fact, it can be deduced from
the connection with lower envelopes (see section 15.3) that A,(n) < sn(n-l1) + 1.
The following theorem gives more precise bounds on Al, A2, and A3.
AI(n) = n
A 2 (n) = 2n - 1
A3 (n) = e(na(n))
Proof. The proof for s = 1 is trivial, since each symbol may appear only once.
For s = 2, we proceed by induction on n. The result is true for n = 1, so we
consider an (n, 2)-Davenport-Schinzel sequence (n > 1). Let a be its first letter,
'The definitions and order of magnitude of the inverse Ackermann function are given in
subsection 1.1.3.
15.3. The lower envelope of a set of functions 355
and put S = aS'. If a does not occur in S', then the induction applies for S' and
so
SI = 1 + jS'I < 1 + 2(n - 1) - 1=2n - 2.
Otherwise we can write S = aS1 aS2, where a does not occur in Si and ISI > 0.
If S2 is empty the length of aSla is smaller than (2n - 2) + 1 = 2n - 1, as
we have just shown. Otherwise, let k be the number of distinct symbols in S,.
By induction, IS11 < 2k - 1. Moreover, the definition of a Davenport-Schinzel
sequence ensures that no symbol b occurs both in SI and S2, otherwise abab is
a subsequence of S. Thus aS2 may contain at most n - k symbols (note that a
may occur in S2), and by induction we have IaS2 1 < 2(n - k) - 1. Hence
18 = IS11 + IaS2I+1 2n- 1.
To finish the proof for s = 2, we must also show that this bound is exact. This can
be readily seen by considering the sequence ablab2 a... abn- 1 a of length 2n - 1.
For s = 3, the proof goes into very technical details, so we will not prove the
announced result here. We can show, however, the simpler result that A3 (n) =
O(nlogn). Let S be a (n,3)-Davenport-Schinzel sequence, and S(a) be the
subsequence obtained from S by removing all the occurrences of a symbol a. In
S(a), there cannot be a subsequence bcbcb and identical consecutive symbols can
happen at most twice when the first and the last occurrences of a are surrounded
by two b's. Let us call S'(a) the sequence obtained by replacing in S(a) two
consecutive symbols b by a single b, whenever this happens. Then S'(a) is an
(n - 1, 3)-Davenport-Schinzel sequence, and
|S| < |S'(a)| +2+na < A 3 (n-1) +2+na
where na stands for the number of occurrences of a in S. Summing over all the
symbols a appearing in S, we obtain:
nISI < nA 3 (n- 1) + 2n + ISI.
This is true for any sequence S, so that
A3 (n) A 3 (n-1) 2
~+
n n-1 n-1
whence A3 (n) = 0 (n log n). l
f (x) = min
i
fA()-
356 Chapter 15. Arrangements of line segments in the plane
fl
fi f3 fi f2 fl
Figure 15.2. The lower envelope of a set of functions, and the corresponding Davenport-
Schinzel sequence.
The lower envelope is formed by a sequence of curved edges, where each edge is
a maximal connected subset of the envelope that belongs to the graph of a single
function fi(x). The endpoints of these edges are located at the intersections of
the graphs of the functions and are called the vertices of the envelope.
15.3.1 Complexity
Labeling each edge by the index of the corresponding function, we obtain a se-
quence of indices by enumerating these labels in the order in which they appear
along the envelope (see figure 15.2). If the graphs of the functions have pairwise
at most s intersection points, then this sequence is an (n, s)-Davenport-Schinzel
sequence. Indeed, let Ai and Aj be two edges appearing in this order along the
envelope, defined over two intervals I and J. The corresponding functions fi
and fj being continuous, they must intersect in a point whose abscissa is greater
than the right endpoint of I and smaller than the left endpoint of J. Having
an alternating subsequence of length s + 2 for the two symbols i and j implies
the existence of s + 1 intersection points between the graphs of fi and fj, a
contradiction.
The number of edges on the lower envelope is thus bounded above by the
maximal length A, (n) of an (n, s)-Davenport-Schinzel sequence.
Consider the case when the functions are defined over closed intervals and not
over the whole of R. The lower envelope is not continuous and the argument used
15.3. The lower envelope of a set of functions 357
above to bound the number of its edges does not hold any more. This problem
may be overcome by extending the domain of definition of the functions fi to cover
the whole of JR. More precisely, pick a positive real number p/. If fi is defined over
[Ximin, XimajI, then we extend the graph of fi for x > Xijma by the semi-infinite ray
originating at (Xjmax, fj(xjmax)) whose slope is pi, and symmetrically for x < Ximin
by the semi-infinite ray originating at (Ximin Ifi (Ximin)) whose slope is -p (see
figure 15.3). Thus we have a set of functions gi which extend the functions fi
and are continuous. When p is large enough, the sequence of labels of the edges
on the lower envelope of the gi's is identical to that of the lower envelope of the
fj's, and this lower envelope can be easily constructed knowing that of the gi's.
It is readily verified that, for p large enough, gi and gj have at most s + 2
intersection points if the corresponding functions fi and fj intersect in at most s
points. It follows that the sequence of labels of the edges on the lower envelope
of gi, ... , g. is a (n, s + 2)-Davenport-Schinzel sequence.
The complexity of the lower envelope of the gi's is thus bounded above by
the maximal length AS+2 (n) of an (n, s + 2)-Davenport-Schinzel sequence. The
complexity of the lower envelope of the fit's is also bounded by As+2 (n).
Example. Consider the case of line segments. Two line segments intersect
in at most one point, so the sequence of labels on the lower envelope of a set
of segments is an (n, 3)-Davenport-Schinzel sequence. The complexity of this
lower envelope is thus O(na(n)). In fact, this bound is achievable and one may
actually construct line segments whose lower envelope has super-linear complexity
e(na(n)) (see the bibliographical notes at the end of this chapter).
Let us now consider the case when the functions fi are only defined over semi-
infinite intervals. We first consider the functions fi whose domains of definition
are intervals defined by x > Ximin. If we extend these functions by a half-line
starting at (XiminI fA(Ximin)) of slope -y for [z big enough, then we obtain func-
358 Chapter 15. Arrangements of line segments in the plane
tions gi, defined over lR, whose graphs have pairwise at most s + 1 intersection
points if the graphs of the fit's had pairwise at most s intersection points. The
sequence of labels on the lower envelope 4r of the gi's is an (n, s + 1)-Davenport-
Schinzel sequence. The complexity of Lr is thus A,+,(n).
A similar result obviously holds for the lower envelope El of the functions f3
whose domains of definition are defined by x < xjmax, The lower envelope of the
n functions fA is the lower envelope of the union of Lr and El. Its complexity is
O(nr + n1) = 0(A,+,(n)) since both 4r and LI are monotone chains.
Theorem 15.3.1 The lower envelope of n functions fi, i 1, ...,n, defined over
R and whose graphs have pairwise at most s intersection points, has complexity
0(A,(n)) and can be computed in time 0(A,(n) log n).
of a segment (the non-trivial cells) and the cells whose boundaries contain no
endpoints (the trivial cells). Trivial cells are convex and their complexity is
O(n). In section 15.3, we have seen that the complexity of the lower envelope
of a set of n line segments in the plane can be e(na(n)). So we can conclude
that Q(na(n)) is a lower bound on the worst-case complexity of a non-trivial
cell in the arrangement of n line segments. To show this, consider a set of n
line segments whose lower envelope has complexity O(na(n)). To S, we add 2n
segments, almost vertical, and long enough so that each of them stands above an
endpoint of a segment in S (see figure 15.3). We also add a horizontal segment
lying above all the segments in S while cutting all the almost vertical segments
that we added. The new set of segments S' has 3n + 1 segments, and the edges on
the boundary of the unbounded cell lying below all the segments are in one-to-
one correspondence with the edges of the lower envelope of S'. But the Q(na(n))
edges on the lower envelope of S also correspond to a subset of the edges on the
lower envelope of S'. It follows that the unbounded cell is at least as complex as
the lower envelope of S, so that it also has complexity Q(na(n)).
As we will see, this bound is also an upper bound, which shows that the com-
plexity of cells in the arrangement of line segments depends almost linearly on
the number of segments, while the total arrangement may have up to Q(n 2 ) edges
in the worst case. We will then explain how to efficiently compute such a cell.
15.4.1 Complexity
Consider a set S of n line segments in the plane. We will assume that these
segments are in general position, meaning that no three segments have a common
intersection and that any two segments intersect in at most one point. A standard
perturbation argument shows that the complexity of a cell is maximal in this case.
Indeed, if the segments are not in general position, one may perturb them slightly
so that they are in general position, without decreasing the number of edges or
vertices of the cell under consideration.
From now on, and as was done in section 15.1, we consider that each line
segment S is a rectangle of infinitely small width whose boundary is formed
by two copies of the segment S called the sides of S, and two infinitely short
perpendicular segments at the vertices. Under the general position assumption,
the boundary of the union of these rectangles is homeomorphic to the union of
all the segments. Henceforth, we will thus make a distinction between a segment,
considered as a infinitely thin rectangle, and a segment side. The number of sides
is 2n.
We orient the rectangles counter-clockwise, which induces a clockwise orienta-
tion for the connected components of the boundaries of each cell.
Let r be a connected component of the boundary of some cell C in the ar-
360 Chapter 15. Arrangements of line segments in the plane
Lemma 15.4.1 Consider a segment S that contains at least one edge of f. The
edges of f contained in S are traversed on the boundary of r in the same order
as they are traversed on the boundary of S.
Proof. Consider the infinitely thin rectangle S and the region R bounded by r
that does not contain C. Then S is contained in R, and the result follows from
a slight adaptation of the proof of theorem 9.4.1. [1
We label each edge of F by the index of the side of the segment of S to which it
belongs. The sequence Er of these labels forms a circular sequence which we break
into a linear sequence by choosing some origin 0 on F. The number of distinct
labels in Er is at most the number of sides, which is 2n. Two successive labels are
distinct. Since two segments have only one intersection point, it is tempting to
conjecture that the sequence Er is a (2n, 3)-Davenport-Schinzel sequence. The
choice of 0 may induce some additional repeats, however. Indeed, if ababab is not
a subsequence of the circular sequence, it may not always be possible to choose
o so that the same is true for the linear sequence. For instance, consider figure
15.4: the linear sequence
Er = al c2 cl al a2 cl bi b2 Cl C2 b2 a2 al b2 bi
E* has at most 3n distinct labels, since only one side of each segment needs to
be relabeled.
Proof. We already know that E* has at most 3n distinct labels and does not
contain two identical consecutive elements. It remains to see that ababa is not a
subsequence of E* for any two symbols a $4b.
15.4. A cell in an arrangement of line segments 361
al
C2
--1 be
We first show that, if abab is a subsequence of E*, the sides labeled a and b
must intersect. For this, let the subsequence abab correspond to the edges Ea,
Eb, Ea, Eb 1 on r. Let Sa be the side labeled a that contains Ea and Ea. Pick a
point Al in the relative interior of Ea and a point A2 in the relative interior of
Ea" (see figure 15.5). We define Sb, B1 and B2 similarly.
Let A be the union of the subchain F12 of F that joins Al to A2 and of the
simple polygonal chain contained in the interior 2 of Sa. Then A is a simple closed
polygonal chain. The bounded polygonal region A enclosed by A contains, in a
neighborhood of B1 , a portion of the segment B1 B2 . Indeed, if A is oriented by
the orientation induced by r, then in a neighborhood of Al the side Sa is on the
right of A and the cell lies to the left, and a similar statement holds for Sa in
a neighborhood of A2 and for Sb in a neighborhood of B1 . Moreover, A cannot
cross the portion of r that joins A2 to B2 , so that A cannot contain B2 . The
segment B1 B2 must therefore cross A. It cannot cross F1 2 , however, hence it
must cross A \ F12 , and therefore also A1 A 2.
Assume now for a contradiction that ababa is a subsequence of Σ*. In addition to the notation above, let us pick a point A₃ in the relative interior of E_a′ that lies after A₂ on E_a′, and another point A₄ in the relative interior of the third edge E_a″ labeled a, hence also supported by S_a. From the preceding argument, we know that A₁A₂ and B₁B₂ intersect, and similarly for B₁B₂ and A₃A₄ (simply consider the subsequence baba of Σ_Γ). The two intersection points must be distinct since, owing to the relabeling and to lemma 15.4.1, the points A_i are all distinct and necessarily appear in the order A₁, A₂, A₃, A₄ on S_a. But this latter condition implies that A₁A₂ and A₃A₄ cannot intersect, so that S_b must cut S_a twice. This is impossible as two segments may only cross once. □
² We assume that the segments are in general position, and that they are infinitely thin rectangles.
An immediate consequence of this lemma is:
Theorem 15.4.3 The complexity of a cell in the arrangement of n line segments in the plane is O(nα(n)).
As we mentioned in section 15.3, it is possible to place segments in the plane so that the cell containing, say, the origin has complexity Ω(nα(n)), so the bound in the theorem above is tight.
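The forbidden alternation that underlies the Davenport-Schinzel bound can be checked mechanically. The following sketch (not from the book; Python and the names below are purely illustrative) tests whether a sequence of labels contains an alternation a···b···a···b···a, the pattern excluded from sequences of order 3; it can be applied to the linear sequence of figure 15.4 as transcribed above.

    from itertools import combinations

    def has_ababa(seq):
        """True if seq contains an alternation a..b..a..b..a, the pattern
        forbidden in Davenport-Schinzel sequences of order 3."""
        for a, b in combinations(set(seq), 2):
            for x, y in ((a, b), (b, a)):
                pattern = [x, y, x, y, x]
                i = 0
                for s in seq:          # greedy subsequence matching
                    if s == pattern[i]:
                        i += 1
                        if i == len(pattern):
                            break
                if i == len(pattern):
                    return True
        return False

    # The linear sequence of figure 15.4, as transcribed above.
    sigma = "a1 c2 c1 a1 a2 c1 b1 b2 c1 c2 b2 a2 a1 b2 b1".split()
    print(has_ababa(sigma))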
The reader is invited to refer to subsection 5.3.2 for the details of this algorithm. We will only
recall here the main definitions. The vertical decomposition is obtained by casting
a ray upwards and downwards from any endpoint of the segments. The ray stops
as soon as it encounters a segment in S (see figure 5.4a,d). The vertical segments
(sometimes half-lines) traced by the rays are called walls, and together with
the segments in S they decompose the plane into trapezoids that may degenerate
into triangles or unbounded trapezoids. The algorithm also computes the vertical adjacencies of the trapezoids.³
To apply the formalism of chapter 4, we defined the problem in subsection 5.3.2
in terms of objects, regions, and conflicts between objects and regions. For this
problem, an object is a segment. A region is a trapezoid in the decomposition of
a subset of the segments. Each region is determined by at most four segments.
There is a conflict between an object and a region if and only if the segment
intersects the trapezoid. Computing the vertical decomposition is thus the same
as computing the set of regions that are defined and without conflict over S.
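As an illustration of this formalism (a sketch, not the book's implementation; the names and the representation of a trapezoid by its corner list are choices made here), one can record for each trapezoid the at most four segments that determine it and test conflicts by a direct segment-trapezoid intersection test.

    from dataclasses import dataclass
    from typing import Tuple

    Point = Tuple[float, float]
    Segment = Tuple[Point, Point]

    def orient(p: Point, q: Point, r: Point) -> float:
        """Twice the signed area of triangle pqr (> 0 if counter-clockwise)."""
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def on_segment(a: Point, b: Point, c: Point) -> bool:
        """True if c lies on the closed segment ab."""
        return (orient(a, b, c) == 0
                and min(a[0], b[0]) <= c[0] <= max(a[0], b[0])
                and min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

    def segments_intersect(s: Segment, t: Segment) -> bool:
        """Closed intersection test for two segments."""
        p, q = s
        r, u = t
        d1, d2 = orient(p, q, r), orient(p, q, u)
        d3, d4 = orient(r, u, p), orient(r, u, q)
        if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
            return True
        return (on_segment(p, q, r) or on_segment(p, q, u)
                or on_segment(r, u, p) or on_segment(r, u, q))

    @dataclass
    class Trapezoid:
        corners: Tuple[Point, Point, Point, Point]   # convex, counter-clockwise
        determinants: Tuple[Segment, ...]            # the at most four defining segments

        def conflicts(self, s: Segment) -> bool:
            """A segment conflicts with the trapezoid iff it meets it."""
            c = self.corners
            sides = [(c[i], c[(i + 1) % 4]) for i in range(4)]
            # an endpoint of s inside the trapezoid ...
            for pt in s:
                if all(orient(a, b, pt) >= 0 for a, b in sides):
                    return True
            # ... or s crossing one of the four sides
            return any(segments_intersect(s, side) for side in sides)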
The algorithm we present to compute a cell C(S) in fact computes a vertical
decomposition of that cell (see figure 15.6). To generalize the algorithm of sub-
section 5.3.2 to compute only a single cell is not straightforward, however: the
regions that interest us are not all the trapezoids defined and without conflict
over the set S of segments, but only those contained in the cell C(S). Unfor-
tunately, whether a trapezoid is contained in the cell C(S) cannot be decided
locally by examining only that trapezoid and the segments that define it. This
forbids verbatim use of the formalism and results of chapters 4 and 5.
To avoid this difficulty, we proceed as follows. Let R be the subset of segments already inserted into the data structure, and let C(R) be the cell in the arrangement of R that contains A. We allow the algorithm to compute, in addition to the trapezoids in the decomposition of C(R), other trapezoids in the arrangement of R that are not trapezoids of C(R). In order not to degrade the performance of the algorithm, at certain incremental steps we perform a clean-up step, during which we remove the trapezoids that do not belong to the cell C(R). Only the trapezoids that belong to C(R) will be subdivided during subsequent incremental insertions. To distinguish between these trapezoids, we traverse the connected component, in the vertical adjacency graph of the current vertical decomposition, that contains the trapezoid containing A. This latter trapezoid is maintained throughout the incremental steps. The leaves of the influence graph corresponding to trapezoids not reached by this traversal are deactivated: they correspond to trapezoids in the current decomposition that are not contained in the cell C(R). These trapezoids will not be subdivided, and the corresponding leaves in the graph will not acquire children in subsequent insertions. Figure 15.7 shows an intermediate situation in the algorithm.
³ Recall that two trapezoids are vertically adjacent if they share a common vertical wall.
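A minimal sketch of the clean-up step, assuming trapezoid objects carrying an active flag and a table of vertical adjacencies (these names are ours, not the book's): a breadth-first traversal from the trapezoid containing A keeps exactly the trapezoids of C(R) and deactivates the others.

    from collections import deque

    def cleanup(trapezoid_of_A, all_active_trapezoids, adjacency):
        """Keep only the trapezoids of the current cell C(R): those reachable
        from the trapezoid containing A in the vertical adjacency graph.
        adjacency maps a trapezoid to the active trapezoids sharing a
        vertical wall with it."""
        reached = {trapezoid_of_A}
        queue = deque([trapezoid_of_A])
        while queue:
            t = queue.popleft()
            for u in adjacency[t]:
                if u not in reached:
                    reached.add(u)
                    queue.append(u)
        for t in all_active_trapezoids:
            if t not in reached:
                t.active = False   # its influence-graph leaf gets no children
        return reached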
Between two clean-up steps, the algorithm is similar to the one described in
subsection 5.3.2, apart from a few details which will be noted below. For each
insertion of a new segment S, we locate S using the influence graph, then update
the decomposition by subdividing the active trapezoids intersected by S. In the
influence graph, this corresponds to creating new children for the active nodes
that conflict with S.
Between two clean-up steps, and inside each trapezoid which has not been de-
activated, we build the decomposition of the arrangement of the segments which
conflict with this trapezoid and are inserted between the two clean-up steps. Let
T_p be the set of nodes in the influence graph which were not deactivated during the previous clean-up step p. To each node T of T_p we assign a secondary influence graph. This secondary graph is rooted at T and its nodes are the descendants of T created between step p and the next clean-up step. The secondary graph is computed just as in subsection 5.3.2, under the incremental insertions of the segments inserted between step p and the next clean-up step. Its construction differs from
that of a usual influence graph in a minor detail: the removal of superfluous walls.
When inserting a segment S, if it intersects a wall, then only one of the two parts
of that wall intersected by S is a wall in the new arrangement, and the other
part must be removed and the two adjacent trapezoids must be merged. This
procedure is detailed in subsection 5.3.2, and we apply it here to adjacent trape-
zoids that belong to the same secondary influence graph and also to trapezoids in
different secondary influence graphs. A merge of the latter kind is called an external merge.
Figure 15.7. Intermediate situation in the computation of the cell that contains A. The shaded zone represents the final cell. The trapezoids which are neither entirely nor partially shaded are deactivated.
The vertical adjacencies are updated accordingly. External merges
therefore introduce certain links between nodes in distinct secondary influence
graphs.
The clean-up steps ensure that not too many trapezoids are created. Nevertheless, they must not be so frequent that the algorithm becomes inefficient. We perform clean-up steps after the insertion of the 2^i-th segment, for i = 1, ..., ⌊log n⌋ − 1. Note that the last clean-up step is performed at step p_f, where p_f is the greatest power of 2 such that 2p_f ≤ n.
Suppose for now that n is a power of 2. We will analyze the complexity of the algorithm between two clean-up steps p and 2p.
Denote by S_i the set of segments inserted during steps 1, ..., i. Each trapezoid T of C(S_p) is subdivided into trapezoids by the segments with which it conflicts. Let S_T^{2p} stand for the set of segments of S_{2p} that conflict with T, and let Σ_T^{2p} be the corresponding chronological sequence. The portion of the decomposition of the segments of S_T^{2p} that lies inside T has complexity O(|S_T^{2p}|²).
To make things simpler, we assume that the algorithm does not perform the
external merges. The number of nodes is only greater, so the location phase is
always more complex. The cost of the external merges is proportional to the
number of nodes killed (and hence visited) during the steps, so the external
merges are accounted for by the location phase. The bounds we obtain on the
complexity of the algorithm that does not perform the external merges will thus
still be valid for the algorithm that performs the external merges.
Subsection 5.3.2 provides us with a bound on the number of nodes and the storage needed by the secondary influence graphs. For a trapezoid T the bound is O(|S_T^{2p}|²). To bound the storage required by all the secondary influence graphs computed between steps p and 2p (and ignoring the external merges), we sum this quantity over all the trapezoids of C(S_p). Here we need a moment theorem analogous to theorem 4.2.6, but usable in a context where regions are not defined locally. Such a theorem is stated in exercise 4.4 and bounds this sum by a function of the expected complexity g_0(r, Z) of a cell in the arrangement of a random sample of r segments in a set Z. (Note that this complexity is linearly equivalent to the complexity of its vertical decomposition.) Therefore, the number of nodes in all the secondary influence graphs computed between steps p and 2p is O(p α(p)), and summing over the clean-up phases i = 1, ..., ⌊log n⌋ − 1 bounds the total storage of the algorithm by O(n α(n)).
The complexity of the algorithm can be accounted for by three terms that
correspond to the location phase, the update phase, and the clean-up steps. The
location phase is analyzed in much the same way as the storage. We first evaluate
the average number of nodes visited during step p, in the secondary influence
graph rooted at a node that corresponds to a trapezoid T of C(S_p). The node is also denoted by T for simplicity. As before, we denote by S_T^{2p} (resp. Σ_T^{2p}) the subset (resp. the chronological sequence) of the segments that conflict with T and that are inserted before step 2p. Denote by S_T (resp. Σ_T) the subset (resp. the chronological sequence) of all the segments that conflict with T. Note that S_T^{2p} is a random subset of S_T. Under the assumption above, it all happens as if we were locating the segments of the sequence Σ_T in the secondary graph rooted at T. This graph is the influence graph corresponding to the decomposition of S_T^{2p} inside the interior of T. A slight adaptation of the proof of theorem 5.3.4 yields an upper bound on the expected number of nodes visited in the secondary influence graph. Let f_0(r, Z) be the expected number of trapezoids in the decomposition of a random sample of r segments in a set Z. Then f_0(r, Z) = O(r²). If we assume
that S_T^{2p} is given, then we may use theorem 5.3.4 to bound the expected cost of inserting the last object by
O( ∑_{r=1}^{|S_T^{2p}|} (1/r²) f_0(⌊r/2⌋, S_T^{2p}) ).
This expression also bounds the cost of inserting each of the segments of S_T^{2p}, as well as the cost of locating in the secondary graph rooted at T each of the segments of S_T \ S_T^{2p} that are inserted after step 2p. The number of nodes visited in the secondary influence graph during the successive insertions, averaged over all random samples S_T^{2p} of S_T, is thus
O( |S_T| ∑_{r=1}^{|S_T^{2p}|} (1/r²) f_0(⌊r/2⌋, S_T^{2p}) ) = O( |S_T| · |S_T^{2p}| ).
Hence, the expected number of segments inserted before step 2p that conflict with T is
E(|S_T^{2p}|) = O( (p/n) |S_T| ),
since p ≤ n/2. The average number of nodes visited in the secondary influence graph rooted at T is finally O( (p/n) |S_T|² ).
Summing over all the nodes of C(S_p), we then obtain a bound on the expected number m of nodes visited in all the secondary influence graphs rooted at these nodes:
m = O( (p/n) ∑_{T ∈ C(S_p)} |S_T|² ).
Once again, we can use the adapted moment theorem of exercise 4.4 to obtain
m = O( (n/p) g_0(p, S) ) = O(n α(n)).   (15.1)
The update phases and clean-up steps are easily analyzed. Indeed, the update phases require time proportional to the number of nodes created, which is O(n α(n)). Identifying the trapezoids of C(S_p) during the clean-up step p requires time proportional to the number of trapezoids in C(S_p), which is O(p α(p)). To deactivate the trapezoids during the different clean-up steps takes time proportional to the number of created nodes, which is again O(n α(n)). The total cost of the clean-up steps is thus
∑_{i=1}^{⌊log n⌋−1} O(2^i α(2^i)) = O(n α(n)).
This finishes the proof of the theorem stated below when n is a power of 2. To analyze the general case, we must also analyze the cost of inserting the segments at steps 2p_f + 1, ..., n. But this is word for word the same as the analysis above and produces the same results (and notably equation 15.1) if we note that p_f > n/4.
15.5 Exercises
Exercise 15.1 (Optimal computation of lower envelopes) Show that the lower envelope of n line segments in the plane can be computed in optimal time O(n log n).
Hint: The lower bound Ω(n log n) is proved by reduction to sorting. As for the upper bound, first project the endpoints of the segments on the x-axis. They define 2n − 1 consecutive non-overlapping intervals. Build a balanced binary tree whose leaves are the intervals in the appropriate order. To a node corresponds an interval which is the union of the intervals at the leaves of its subtree. A segment S is assigned to the node whose interval is the smallest one that contains the projected endpoints of the segment. (This node is the first common ancestor of all the leaves covered by S.) Show that the lower envelope of the m segments assigned to a single node has complexity O(m) and not O(m α(m)), using the fact that there exists a vertical line that intersects all these segments and using also the result on half-lines mentioned in section 15.3. Observing that the projections of two segments assigned to different nodes at the same level of the tree do not overlap, show that the lower envelopes of the segments assigned to the nodes on a given level of the tree also have linear complexity, and can be computed in time O(n log n). These O(log n) lower envelopes can be merged in time O(n α(n) log log n), which is O(n log n).
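The assignment of segments to tree nodes described in the hint can be sketched as follows (an illustration under our own naming conventions, not the book's code); the tree over the elementary intervals is kept implicit with heap-style numbering, padded to a power of two, and each segment is attached to the lowest common ancestor of the leaves its projection covers.

    import bisect
    from collections import defaultdict

    def assign_to_nodes(segments):
        """Assign each segment ((x1, y1), (x2, y2)) to the implicit tree node
        that is the lowest common ancestor of the elementary x-intervals
        covered by its projection. Leaf i has index i + n_leaves and the
        parent of node k is k // 2."""
        xs = sorted({p[0] for s in segments for p in s})   # endpoint abscissae
        n_leaves = 1
        while n_leaves < max(len(xs) - 1, 1):              # pad to a power of 2
            n_leaves *= 2
        buckets = defaultdict(list)                        # node index -> segments
        for s in segments:
            x1, x2 = sorted((s[0][0], s[1][0]))
            lo = bisect.bisect_left(xs, x1)                # leftmost covered leaf
            hi = max(bisect.bisect_left(xs, x2) - 1, lo)   # rightmost covered leaf
            a, b = lo + n_leaves, hi + n_leaves
            while a != b:                                  # climb to the LCA
                a //= 2
                b //= 2
            buckets[a].append(s)
        return buckets

    segs = [((0.0, 0.0), (2.0, 1.0)), ((1.0, 0.0), (3.0, 2.0)), ((2.0, 2.0), (3.0, 0.0))]
    print(assign_to_nodes(segs))

All the segments stored at one node project onto intervals containing that node's splitting abscissa, so a single vertical line stabs them; this is what makes their common lower envelope linear, as the hint indicates.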
Exercise 15.2 (Airport scheduling) Consider a set M of n points in Ed that move
along algebraic curves of bounded degree at given constant speeds. At each moment t,
we want to know the point M(t) in M that is the closest to the origin. Show that the sequence Σ of points M(t) for t ∈ [0, +∞[ is almost linear, and that it may be computed in time O((n + |Σ|) log n).
Exercise 15.3 (Computing a view) Consider a scene formed by n line segments in the plane (not necessarily disjoint). Show that a view from a given point (defined as the portions of segments visible from that point) may be computed in optimal time O(n log n). Similar question for more general objects.
Hint: A projective transformation sends the origin to (−∞, 0), and the corresponding problem is exactly that of computing the lower envelope of n segments, see exercise 15.1.
Exercise 15.4 (Convex hull of objects) Show that computing the convex hull of n
objects in the plane reduces to computing the lower envelope of n functions. If the objects
are convex and disjoint, the graphs of these functions have at most two intersection
points. Give bounds on the combinatorial and computational complexities of convex
hulls of curved objects, in particular circles, ellipses, etc.
Hint: For each object, consider the set of its tangent lines, and use polarity to work in
the dual plane (where points correspond to lines in the original plane).
Exercise 15.5 (Stabbing lines) Given n objects in the plane, compute the set of lines
that simultaneously stab all of them.
Figure 15.8. There are Ω(m²n²) feasible positions of M that belong to the same number of distinct cells of C.
Hint: Put V = B ∩ R for the intersection of B and R. Since R and B play entirely symmetric roles, it suffices to look at the contribution of the boundary of B to the non-trivial boundary of V. For this, follow the edges on the boundary of B, and count the number of edges of V contained in each edge of B.
For each edge E on the boundary of B, count the edges of V contained in E. We distinguish the first one along E from the others. Among the others, count separately those that belong to the same connected component of V, those that do not belong to the same connected component of the boundary of V, and the remaining edges.
Exercise 15.8 (Computing the non-trivial boundary) Show that the non-trivial boundary of two polygonal regions B and R (see exercise 15.7) can be computed in time O(m log m), if m is the total number of edges of B and R.
Hint: We sweep the plane with a line going in two directions, first going from left to right and then from right to left. During the sweep, we maintain three structures which respectively represent the segments of B, of R, and of the resulting non-trivial boundary that intersect the sweep line. During the left-to-right sweep, we only create a new interval for the result when the current event is a vertex of B contained in R, or a vertex of R contained in B. We call such a vertex a remarkable vertex. Then we are assured that this interval is contained in a non-trivial cell. We do not discover the entire non-trivial cell, however; rather, we only know the portion of this cell that can be joined to a remarkable vertex by a decreasing x-monotone path. This is why we need to sweep the plane in the other direction, from right to left.
Hint: Use the divide-and-conquer method. Split the set of n segments into two subsets of roughly the same size to obtain two cells C_A¹ and C_A² in the sub-arrangements that contain A. Merging these two cells can be done using a variant of the sweep method of exercise 15.8. This variant in fact computes the non-trivial boundary of the intersection C_A¹ ∩ C_A² as well as the boundary of the cell in this intersection that contains A, even if it does not belong to the non-trivial boundary. It remains to extract the description of the cell C_A of the current divide-and-conquer step that contains A.
Exercise 15.10 (Half-lines) Show that the complexity of a cell in the arrangement of
n half-lines is O(n). Devise a deterministic algorithm that computes it in optimal time O(n log n).
Hint: Express the constraints that limit the motion of the manipulator in the configu-
ration space (which has dimension 2) and use exercise 15.11.
plane that has complexity Ω(nα(n)). The solution to exercise 15.1 is due to Hershberger
[124].
The analyses of the complexity and of the computation of the unbounded cell in the
arrangement of line segments are given by Pollack, Sharir, and Sifrony [189]. Their result
is extended by Guibas, Sharir, and Sifrony [118] to the case of a cell in the arrangement
of curved arcs. Other results on curved arcs are given in [64, 90]. The complexity
and computation of m cells is studied by Edelsbrunner, Guibas, and Sharir in [91, 93].
Solutions to exercises 15.7, 15.8, and 15.9 can be found in their papers.
Alevizos, Boissonnat, and Preparata [7] study the arrangements of half-lines and give
a solution to exercise 15.10. These arrangements find applications in pattern recognition
[8, 33].
The randomized algorithm that computes a single cell described in this chapter is
due to de Berg, Dobrindt, and Schwarzkopf [76]. This algorithm can be generalized to
dimension 3, which is not the case for a previous algorithm due to Chazelle et al. [52].
A comprehensive survey of Davenport-Schinzel sequences and their geometric appli-
cations can be found in the book by Sharir and Agarwal [207].
Chapter 16
Arrangements of triangles
a polyhedron (see section 13.3). Subsection 16.2.2 then shows how to adapt the
vertical decomposition to subdivide all the non-convex cells in the arrangement of
n triangles into O(n²) convex cells. The vertical decompositions are very useful,
as will be shown in several algorithms described in this chapter. They also allow
arrangements of triangles to be triangulated, a very easy operation when the
vertical decomposition is known.
Henceforth, T will stand for a set of n triangles in E3. We assume that no
triangle in T is parallel to the z-axis.
Figure 16.1. Decomposing an arrangement of triangles. The walls of type 1 are represented
by light lines, and the traces of walls of type 2 on the floor of the cell are
represented in dashed bold.
line segments that join a point of E and a vertex of the lower envelope of S_h or of the upper envelope of S_l. From theorem 15.3.1, we derive that the number of 2-walls of type 1 that are incident to E is O(nα(n)). If the triangles do not intersect, this number is O(n).
Summing over all the segments E that are either edges of a triangle in T or the intersection of two triangles in T, we find that the total number of walls of type 1 (and hence also the number of walls of types 2 and 1') is O((n + p)nα(n)) if the number p of intersecting pairs of triangles is not zero. If it is zero, then the number of walls of type 1 is O(n²). The theorem below summarizes this result.
In the worst case, we have p = O(n²) and the decomposition has complexity O(n³α(n)), which is very close to the optimum, since the arrangement itself has complexity Ω(n³) in the worst case.
One may also bound the complexity of the decomposition by O(n²α(n) log n + t), where t is the complexity of the arrangement (see exercise 16.4). This bound is better than that of the previous theorem when p = Ω(n log n).
Theorem 16.2.2 In the arrangement of n triangles in E3, the union of the non-convex cells can be decomposed into O(n²) convex parts. The union of all the cells can therefore be decomposed into O(n³) convex parts.
Proof. As indicated above, only the first statement needs a proof. For the
second statement, simply note that the cells that do not contain outer faces are
convex, and that they form a subset of the cells of the arrangement of the planes that support the triangles. They must therefore have overall complexity O(n³).
For the non-convex cells, we first show how to decompose the non-convex cells of the arrangement of n line segments in the plane into O(n) convex parts. This is achieved as
for the usual vertical decomposition of a set of line segments, except that we draw
only the walls from the endpoints of the segments and not from their intersection
points. The cells in the decomposition induced by the walls and the segments
are convex, since the only non-convex vertices of a cell in the arrangement of
line segments are the endpoints of the segments. There are at least one and
at most two walls incident to each non-convex vertex, so there are only twice
as many convex parts as there are endpoints, hence O(n) convex parts in this decomposition.
Consider now the case of dimension 3. We decompose the arrangement in a
way similar to the case d = 2. More specifically, we consider each edge on the
boundary of a triangle in T. Through each point P of such an edge we draw a
vertical line segment (parallel to the z-axis) upwards and downwards until it hits
a triangle in T other than that which contains P. In this way, we build 2-walls of
type 1. Note that, contrary to the decomposition described in subsection 16.2.1,
we do not build walls of type 1 on top of the edges of the arrangement that
are contained in the intersection of two triangles. Each cell in the resulting
decomposition is a vertical cylinder, bounded above and below by convex domes.
The cylinders are not necessarily convex since their horizontal projections are
generally not convex either. Yet the reflex edges must be vertical, and they
correspond to segments drawn on top of the vertex of some triangle, or on top
of the intersection of a triangle and an edge. We obtain convex parts by adding
more 2-walls of type 2. More precisely, if Π(C) is the projection onto the xy-plane of a non-convex cell C, then Π(C) may be decomposed into convex parts as we did in the two-dimensional case, by building 1-walls parallel to the y-axis for each non-convex vertex of Π(C). These vertices are vertices of the triangles, or the
intersection of a triangle and an edge. We then build vertical 2-walls of type 2
Figure 16.3. Walls in the convex decomposition induced by the triangles in a vertical plane H.
Figure 16.4. The lower envelope of five triangles, as seen from below projected onto the
xy-plane.
4. the projections of any two edges of the triangles do not overlap, and
augmenting the number of faces on the lower envelope. Therefore, the upper bounds we give below on the complexity of lower envelopes of triangles in general position apply to degenerate configurations as well.
We note that, under this general position assumption, the number of vertices (edges) on the envelope is at most twice the number of vertices (edges) of the planar map Ē obtained by projecting the faces of the lower envelope on the xy-plane. This is because a vertex (edge) of this planar map is the projection of at most two vertices (edges) of the lower envelope.
16.3.1 Complexity
This subsection is devoted to proving the following theorem, which bounds the complexity of the lower envelope of triangles.
Theorem 16.3.1 The complexity of the lower envelope of n triangles in E3 is Θ(n²α(n)). If the triangles are pairwise disjoint, it is Θ(n²).
Proof of the upper bound. We first count the number s(Ē) of vertices of the planar map Ē obtained by projecting the faces of the lower envelope ℰ on the xy-plane. For this, we consider the lower envelope ℰ′ of T \ {T}, for some T ∈ T, and we estimate the increase s(Ē) − s(Ē′) in the number of vertices of the map when T is reinserted.
The new vertices of the envelope (which are vertices of ℰ but not of ℰ′) are either on T or vertically below an edge of T. The triangles being in general position, the new vertices of the map Ē are the projections of points of the following kinds:
1. a vertex of T,
The vertices of types 1-4 do not create problems: there are at most O(n) of
them. When we insert T, each edge of ℰ′ is contained in the intersection of two triangles, and can be (a) hidden by T, (b) unchanged, (c) shortened, or (d) split into two edges of ℰ (see figure 16.5).
Figure 16.5. Inserting a triangle: an edge can be (a) hidden, (b) unchanged, (c) shortened, or (d) split into two edges.
Only in case (d) does the number of vertices
in the projected map increase. More precisely, there are two new vertices and at least one of them is of type 6. The number of vertices increases by at most twice the number of vertices of type 6. Their number is O(nα(n)). Indeed, if E is an edge of T and H is the vertical plane strip formed by the vertical rays originating from E and going downwards, the triangles in T \ {T} intersect H along segments, and the vertices of type 6 lying below E are vertices of the lower envelope of these segments in H. Theorem 15.3.1 shows that there are O(nα(n)) of them. This is true for all three edges of T, so there are at most O(nα(n)) new vertices.
Therefore
s(Ē) − s(Ē′) = O(nα(n)).
By inserting all the triangles in T successively, and denoting by s(n) the maximum number of vertices in the planar map projection of the lower envelope of n triangles, we get the recurrence
s(n) ≤ s(n − 1) + O(nα(n)),
which solves to s(n) = O(n²α(n)). Under the general position assumption, each vertex of the projected map Ē has degree 2 or 3, so that the map has O(n²α(n)) edges as well, and Euler's relation yields the same bound for the number of its cells. As we saw above, the same bounds apply to the edges and facets of the lower envelope ℰ.
If the triangles do not intersect, only cases (a) and (d) above are allowed, and they account for O(n) new vertices. Hence a similar recurrence shows that the complexity of the lower envelope is simply O(n²).
Proof of the lower bounds. To construct the example, we use the fact mentioned above that the lower envelope of n segments may have Ω(nα(n)) edges (see section 15.6). Now take n/2 segments in the half-plane y = 0, z > 0, so that their lower envelope has Θ(nα(n)) faces. Picking a point A far enough away on the y-axis, we can construct n/2 triangles almost parallel to the y-axis by taking the convex hull of each of these segments with the point A. The idea is to duplicate this complexity Θ(nα(n)) a number of times: place n/2 disjoint, thin triangles in the plane z = 0, long enough so that each intersects from side to side the vertical projections of the n/2 triangles constructed above. Then the lower envelope of these n triangles has complexity Θ(n²α(n)).
A similar construction when the triangles do not intersect leads to the Ω(n²) bound.
To each trapezoid in the planar map corresponds a vertical cylinder whose horizontal section is the trapezoid: such a cylinder is called a prism. Each prism is unbounded in the direction of the negative z-axis, and is bounded in the opposite direction by a facet contained in a single triangle of T. This facet is called the ceiling of the prism. The collection of these prisms and of their faces constitutes the vertical decomposition of the lower envelope of the triangles. If T consists of n triangles, the complexity of this decomposition is O(n²α(n)), and so is the complexity of the lower envelope. Note that the ceiling of a prism C is a trapezoid of the decomposition of a set S_C of segments contained in the triangle T_C that contains the ceiling of the prism. This set S_C is formed by
Among the 2-walls in the decomposition of the lower envelope, we make a distinction between the 2-walls of type 1 hanging below the edges of ℰ and the 2-walls of type 2 hanging below the 1-walls of the planar map Ē. By construction, the non-vertical edges of the walls of type 2 (which are edges of the ceilings of the prisms) all contain a vertex of ℰ.
We note that a prism may be adjacent to many prisms since both sides of a
wall of type 1 may be subdivided differently by the abutting walls of type 2 (see
also the discussion in subsection 16.2.1). Nevertheless, the adjacency graph is
planar and its complexity is linear with respect to the number of prisms.
T_C that determine the edges on its ceiling. Each segment being either an edge of T_C, the intersection of T_C and a triangle in T \ {T_C}, or the projection onto T_C of an edge of a triangle in T \ {T_C}, it follows that a region is determined by at most five triangles.
An object conflicts with a region whenever they have a non-empty intersection.
Observe that the regions that are defined by the triangles of T and do not conflict
with these triangles are exactly the prisms in the decomposition of the lower
envelope of T.
Let T be a new triangle inserted into T, let ℰ′ be the current lower envelope before inserting T, and let Dec(ℰ′) be the current decomposition. We first identify the prisms in Dec(ℰ′) that intersect T, then we update the lower envelope and its decomposition. Let us call ℰ the new lower envelope after updating and let Dec(ℰ) be its decomposition. To efficiently find the regions that conflict with
T, the algorithm also maintains an influence graph. We may recall that the
influence graph is a structure whose goal is to detect rapidly the conflicts between
a new object and the regions defined and without conflict over the current set
of triangles. The influence graph is an oriented acyclic graph that has a node
for each region that was a region defined and without conflict over a subset of
the set of triangles; this subset was the current set of triangles during a previous
incremental step. The arcs in the influence graph connect the nodes in such a way
that the influence domain of a node (the subset of objects that conflict with the
region that corresponds to this node) is contained in the union of the influence
domains of its parents. At each step of the algorithm, the regions defined and
without conflict over the current subset are stored in the leaves of the influence
graph.
We now describe how to perform the current incremental step by performing a
location phase and an update phase.
Locating. The location phase is used to retrieve all the leaves in the influence graph that conflict with T. These leaves correspond to prisms in Dec(ℰ′) intersected by T. A simple traversal of the influence graph, which backtracks each time it encounters a node that either was already traversed or does not conflict with T, identifies all the nodes that conflict with T. If no leaf in the graph intersects T, then T does not appear on the lower envelope ℰ or on any lower envelope that will subsequently be computed. The algorithm may skip updating the structure and directly insert the next triangle.
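The traversal just described can be sketched as follows (an illustration, not the book's code; it assumes influence-graph nodes carrying a list of children and the region they were created for, and a user-supplied conflict test).

    def locate_conflicts(roots, conflicts_with, new_object):
        """Location phase: traverse the influence graph from its roots,
        backtracking at nodes already visited or not in conflict with the
        new object, and report the conflicting leaves (here, the prisms of
        the current decomposition met by the new triangle)."""
        visited = set()
        conflicting_leaves = []

        def visit(node):
            if id(node) in visited or not conflicts_with(node.region, new_object):
                return
            visited.add(id(node))
            if not node.children:                 # leaf: a region of the
                conflicting_leaves.append(node)   # current decomposition
            for child in node.children:
                visit(child)

        for root in roots:
            visit(root)
        return conflicting_leaves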
Figure 16.7. The six sub-prisms whose ceiling is not supported by T, projected onto a
horizontal plane.
Figure 16.8. Merging the adjacent temporary prisms: the shaded triangle is inserted and a
wall is subdivided into at most four sub-walls. The projected decomposition
of the lower envelope is represented above, and the walls to be removed are
shown in dashed lines.
Figure 16.9. The union U of the prisms intersected by T, shaded and shown in projection
in the xy-plane. The vertical decomposition in the facets supported by T and
induced by the triangles is shown in dashed lines.
To maintain the property that the influence domain of a child is contained within
the union of the influence domains of its parents, each prism should be attached
to all the prisms that conflict with T and intersect F; this would result in an
unbounded number of children for a node, and the usual randomized analysis
does not apply in this case. We remedy this drawback by creating a new node
N in the influence graph that corresponds to a prism whose ceiling is F. This
node is of a special kind and, unlike the other nodes, does not correspond to a
prism in the decomposition of the lower envelope. We attach N to the nodes in
the influence graph that correspond to prisms that conflict with T (either pierced
or split) and that intersect F. Face F (and hence the prism corresponding to
N) may have a large number of edges and may not be simply connected. It can
be decomposed using the planar randomized incremental algorithm (in the affine
hull of F) and we hang 2-walls vertically below the 1-walls of F. In this way,
we obtain a secondary influence graph that represents the planar decomposition
in the facet F, and hence a decomposition of the space lying below F. This
secondary influence graph is rooted at N.
We make a distinction between the primary nodes in the influence graph of the
triangles, which we also refer to as the primary influence graph, and the secondary
nodes in the secondary influence graphs. Primary nodes correspond to a region
defined and without conflict over a subset of the current set of triangles; this
subset was the current subset during some previous incremental step.
Recall that U is the union of the sub-prisms obtained by subdividing the prisms
that conflict with T and whose ceilings are supported by T. At the i-th step, the
number u of edges on the boundary of U is bounded by the number of primary
nodes that conflict with T. Moreover, lemma 5.2.4 implies that the average
number of secondary nodes created at step i is proportional to u. The expected
number of secondary nodes is thus proportional to the number of primary nodes.
Therefore it suffices to bound the number of primary nodes. These nodes are
in one-to-one correspondence with the prisms defined and without conflict over
the current subset at some previous incremental step.
Even though the situation does not exactly fit the framework of theorem 5.3.4,
because of the existence of secondary nodes, we can still apply its proof verbatim
to count the number of primary nodes created, and the number of primary nodes
visited during a location phase. So we conclude that the number of primary nodes
in the influence graph is
O( ∑_{r=1}^{n} (1/r) f_0(r, T) ),
where f_0(r, T) is the expected number of prisms defined and without conflict over a random sample of r triangles of T. Each node in the primary influence graph has a bounded number of children, so this bound is also valid for the number of arcs in the primary influence graph. Theorem 16.3.1 states that f_0(r, T) = O(r²α(r)), and so the total number of primary nodes created during the incremental insertion of the n triangles is O(n²α(n)).
We now have to estimate the costs incurred by maintaining the secondary
influence graphs. The results of subsection 5.3.2 that refer to the construction
of the decomposition of a set of segments in the plane show that the location
in the secondary graph takes expected time O(log n). Likewise, the secondary graphs can be computed in expected time O(n_T log n_T), where n_T is the number of primary nodes that conflict with T. Therefore the costs of updating and locating in the secondary graphs are proportional, by a factor of log n, to the number of primary nodes visited during an incremental step.
The number of primary nodes visited upon inserting the i-th triangle is given by theorem 5.3.4:
O( ∑_{r=1}^{i} (1/r²) f_0(⌊r/2⌋, T) ) = O(i α(i)).
We conclude that the costs of locating and updating the structures when inserting the n-th triangle are both O(nα(n) log n).
If the triangles do not intersect, we have f_0(r, T) = O(r²) and the cost of inserting the n-th triangle becomes O(n log n).
We have not yet analyzed the cost of updating the adjacency graph. This
cost is not bounded easily since the algorithm keeps track of all the adjacencies
between prisms and yet a prism may be adjacent to several others. Nevertheless,
an argument similar to that presented in exercise 5.3 shows that the cost of these updates is also O(nα(n)).
This finishes the analysis of the algorithm. Our findings are summarized in the
following theorem.
In several cases, f_0(r, T) = O(r) holds, for instance when studying certain kinds of Voronoi diagrams (see section 18.4). Using this better bound in the above discussion implies:
Lemma 16.4.2 The number of outer vertices and edges contained in a single cell is O(n²).
Figure 16.10. A cell in the arrangement of line segments. The definitions of outer, inner, and popular are analogous to the three-dimensional case. The vertex V₁ is outer, while V₂ and V₃ are inner. V₂ is not popular while V₃ is popular. The edge E₁ is popular, while E₂ is not.
Lemma 16.4.3 There are O(n²) popular vertices and popular edges on the boundary of a cell.
Proof. We count the number of lower endpoints of popular edges. The lower endpoint A of a popular edge E is its endpoint with the smallest z-coordinate, and it is either an inner or an outer vertex. The number of outer vertices is O(n²), by lemma 16.4.2.
If A is an inner vertex, then it lies at the intersection of E with the interior of a triangle T, and so it is the intersection of three planes: the affine hull of T and the affine hulls of the two triangles that contain E. Among the regions delimited around A by these three planes, there must be one, say R, for which A is the lowest point. Let us decompose C into convex sub-cells as was done in subsection 16.2.2, and let us call C' the sub-cell which intersects R in a neighborhood of A. Since C' is convex, it must be contained in R, and so A is the lowest vertex of C'. By theorem 16.2.2, the number of convex sub-cells in this decomposition is O(n²). Since each sub-cell has a unique lowest vertex, which is incident to at most three popular edges since the triangles are in general position, the result follows for popular edges.
Lemma 16.4.4 The number of inner edges contained in popular facets that are on the boundary of a cell is O(n² log n).
Proof. Consider a triangle T in T and let C' be the cell in the arrangement of T \ {T} that contains O. When inserting T into this arrangement, we estimate the increase in the number of inner edges that are contained in popular facets and lie on the boundary of the cell containing O. We let E be an inner edge of C' that is contained in a popular facet F of C'.
Case 1: T ∩ E = ∅.
Whether E is an edge of C or not, the number of edges of C contained in E does not increase when inserting T.
Case 2: T ∩ E ≠ ∅.
Let P be the intersection point of E and T. Then E is cut into two edges E₁ and E₂. If both E₁ and E₂ are edges of C, and only in this case, the number of edges of C contained in E increases by one when inserting T. In that case, the edge E' of C that is contained in F ∩ T and incident to P must be popular. Therefore P must be the endpoint of a popular edge of C.
Denote by q and q'(T) the number of inner edges of C and C', respectively, contained in popular facets, and let q''(T) be the number of inner edges of C contained in T. Moreover, denote by r(T) the number of vertices of popular edges of C contained in T.
From what was said before, we deduce an inequality (16.2) that bounds q in terms of q'(T), q''(T), and r(T).
Finally, if we denote by q(n) the maximum value of q over all possible arrangements of n triangles and all possible choices for the origin O, inequality 16.2 becomes
q(n) ≤ (n/(n − 2)) q(n − 1) + O(n),   (16.3)
by using the fact that the number of popular edges of C is O(n²) (see lemma 16.4.3). We solve this recurrence by putting
q(n) = n(n − 1) w(n).
This implies that w(n) = O(log n), which in turn shows that q(n) = O(n² log n).
We can now finish the proof of the theorem, if we know a bound on the number
of inner edges in non-popular facets of C. This bound is provided by our last
lemma.
other pairs called outer for which at least one edge is contained in the boundary
of a triangle.
Lemma 16.4.6 The number of 0- and 1-visible outer pairs is O(n²α(n)).
We now count the number q₀(T) of inner visible pairs. Let us consider two inner edges E₁ = T₁ ∩ T₂ and E₂ = T₃ ∩ T₄ of C(T) that are mutually visible, where the T_i's are triangles of T. Assume that E₁ is above E₂ and denote by S the vertical segment that connects E₁ to E₂. By hypothesis, the interior of S does not intersect any triangle of T.
Consider an extensible vertical segment S', which initially occupies the same position as S. We slide S' successively in four different directions. For the first move, we constrain the upper endpoint of S' to belong to E₁, while the lower endpoint belongs to T₃ and S' intersects T₄. There is a single degree of freedom, and we move S' along this direction until one of the following situations occurs (see figure 16.11):
Figure 16.11. The four different kinds of events and their variants, represented in the vertical plane that contains E₁. E₂ intersects this plane in a single point, and T₃ and T₄ in two segments.
Likewise, for the second move, we slide S' while keeping its upper endpoint on E₁ and its lower endpoint on T₄, in the direction where S' intersects T₃. We stop S' in one of the analogous four cases (exchanging T₃ and T₄). Switching the roles of E₁ and E₂ gives the four different moves.
It is important to notice that each event above is encountered only once in all
the moves along a given direction. Indeed, when moving a vertical segment in one
direction, an inner visible pair is met again only after one of the above events.
We count the total number of events encountered during the moves correspond-
ing to each inner visible pair of edges of C(T).
1. A vertex of C(T) is encountered during at most six events of type 1. Indeed,
the guiding edge must be one of the at most six edges incident to this vertex.
The total number of events of type 1 is at most six times the number of vertices
of C(T), which is O(n² log n) because of theorem 16.4.1.
2. The outer pair (E₁, E) is encountered during a single event of type 2 when sliding along E₁. It follows that the total number of events of type 2 equals the number of outer pairs that are 0-visible or 1-visible. This number is O(n²α(n)) because of lemma 16.4.6.
Figure 16.12. The 1-visible pair (E_a, E_b) is reached by two motions from the 0-visible pairs (E_a, E) and (E_b, E').
3. The same reasoning holds for the events of type 3, whose number is also O(n²α(n)).
4. The events of type 4 can be encountered at most four times. Yet the analysis requires a more precise count: the following lemma bounds the number of events of type 4 that are encountered more than twice.
Lemma 16.4.7 There are at most O(n² log n) events of type 4 that are encountered more than twice.
Proof. Consider a 1-visible inner pair (E_a, E_b), and assume that E_a is contained in the intersection of the two triangles T_a and T_a′, that E_b is contained in the intersection of the two triangles T_b and T_b′, and that E_a lies above E_b (see figure 16.12). We denote by T the triangle that obscures the pair (E_a, E_b) and by F the facet of the arrangement contained in T that is stabbed by the vertical segment S_ab that connects E_a and E_b.
If the pair is encountered more than twice during events of type 4, then it must have been so after at least one motion along E_a and at least one motion along E_b. So let us consider a visible pair from which we started one of the motions, and assume it is a pair (E_a, E). (For pairs (E_b, E'), the situation is entirely symmetrical.) Let S_a be the vertical segment that connects E_a and E (see for instance figure 16.12, where we also show the vertical segment S_b that
Lemma 16.4.8 The number of internal nodes in the graph G, and hence the number of events of type 4 corresponding to F that are encountered more than twice, is O(|F|).
Proof. We first show that the graph G cannot contain cycles of length 2. Indeed, each arc of G connects an internal node to an external node. An internal node, which corresponds to a 1-visible pair (E_a, E_b), is connected to three or four external nodes that represent the four distinct edges F ∩ T_a, F ∩ T_a′, F ∩ T_b, F ∩ T_b′. As before, we denote by T_a and T_a′ the triangles whose intersection contains E_a, and by T_b and T_b′ those whose intersection contains E_b.
Therefore G is a planar graph without cycles of length 2. It has n_e = O(|F|) external nodes and internal nodes of degree 3 or 4, which are incident to external nodes only. To prove the lemma, we construct from G another graph G' that does not have internal nodes, as follows. We distinguish the two sides of an arc
of G, and the three or four sides of a node of G. We then replace two arcs that are incident to a common internal node by a single arc in G'. It is easy to see that G' is planar and does not have a cycle of length 2. It has the same number n_e of nodes, and the same number of arcs, as G. Euler's relation shows that the number of arcs of G', and hence of G, is proportional to the number of its nodes, and hence is O(|F|).
Figure 16.13. The segments in S are shown in solid lines. The shaded face corresponds to a cycle of length 2 in the graph G. Replacing these solid lines by the dashed lines and identifying the vertices on a common edge of F yields the graph G'.
The total number of internal vertices of G is thus O(|F|).
Summing over all the popular facets of C(T), we conclude that the total number of events of type 4 that are encountered more than twice is O(n² log n), because of lemma 16.4.4. □
The preceding discussion shows that the total number of events of type 4 is bounded by
4 q₀(T) ≤ 2 q₁(T) + O(n² log n),
if we denote by q_k(T) the number of k-visible inner pairs of C(T). It follows that
q₀(T) ≤ (1/2) q₁(T) + O(n² log n).
We claim that ((n − 4)/n) q₀(T) + (1/n) q₁(T) is bounded by the expected number of visible inner pairs in the arrangement of a random sample of n − 1 triangles of T. Indeed, an inner visible pair of T is an inner pair visible in the sample if and only if the removed triangle is not one of the four triangles that define the pair, which happens with probability (n − 4)/n. An inner 1-visible pair of T is an inner visible pair in the sample if and only if the removed triangle is the one crossed by the segment that connects the pair, which happens with probability 1/n. This provides the desired upper bound. Observe that it is an upper bound but not an equality: indeed, it does not account for the visible pairs in the sample that were visible in another cell of the arrangement of T adjacent to C(T) through the removed triangle.
Denoting by q_k(n) the maximum number of k-visible inner pairs in the arrangement of n triangles in E3, we have
((n − 4)/n) q₀(n) + (1/n) q₁(n) ≤ q₀(n − 1),
from which and from inequality 16.6 we derive the recurrence equation
Let T be a set of n triangles in E3, and pick a point O that does not belong to any triangle. In this section, we give an algorithm that computes the cell C(T) in the arrangement of T that contains O.
The algorithm we present here is an extension of the incremental randomized algorithm described in subsection 15.4.2 that computes a single cell in the arrangement of line segments in the plane. It consists of inserting the triangles in turn. Let R be the current subset of triangles, introduced in the previous incremental steps, and C(R) the cell in the arrangement of R that contains O. After inserting a triangle, we update the decomposition of the triangles (see subsection 16.2.1) without worrying that some prisms in this decomposition may lie entirely outside C(R). We thus allow the algorithm to compute, in addition to the prisms in the decomposition of C(R), some other prisms that are not prisms of C(R). To keep the complexity within reasonable limits, it is necessary to stop the construction of the decomposition outside C(R) at certain clean-up steps. During those clean-up steps, we deactivate the prisms that do not belong to the cell C(R). Only the active prisms are subdivided in the subsequent incremental steps.
Between two clean-up steps, the algorithm is similar to the algorithm presented
in subsection 16.3.3 to compute the lower envelope of triangles.
16.5 Exercises
Exercise 16.1 (The complexity of a lower envelope of simplices) Show that the complexity of the lower envelope of n (d − 1)-simplices in Ed is Θ(n^{d-1} α(n)).
Hint: For a set S of triangles, we denote by A(S) the arrangement, in the xy-plane, of the affine hulls of the projected edges of the triangles of S. We denote by E(S) the planar map obtained by projecting the lower envelope of S, and by E*(S) the refinement of E(S) obtained by superimposing E(S) and A(S). To use the divide-and-conquer method, consider two subsets T₁ and T₂ of T of roughly the same size. Compute E*(T₁) and E*(T₂) recursively. Superimposing E*(T₁) and A(T₂) yields a planar map E₁^#, and likewise, superimposing E*(T₂) and A(T₁) yields a planar map E₂^#. Show that |E₁^#| = O(n²α(n)) and that |E₂^#| = O(n²α(n)) (think of inserting the lines of the arrangements A(T_i) one by one). Compute A(T) and, for each cell of A(T), compute the portion of the lower envelope whose projection onto z = 0 is this cell. This portion is a convex dome, the intersection of the two convex domes corresponding to a cell of E₁^# and a cell of E₂^#. The factor log n can be removed; see the references in the bibliographical notes.
Hint: Adapt the proof of the theorem which bounds the complexity of the vertical
decomposition of a single cell.
Exercise 16.7 (Stabbing) Given n polyhedral objects in E3, compute the set of planes
that simultaneously stab them all.
Hint: Use theorem 16.3.1, exercise 16.1, and the sampling theorem 4.2.3.
Exercise 16.9 (Computing the first k levels) Devise and analyze an algorithm
that computes the first k levels in the arrangement of n triangles in E3 .
Hint: Adapt the algorithm that computes the lower envelope of a set of triangles, and
the algorithm that computes the first k levels in an arrangement of hyperplanes (see
section 14.4).
Exercise 16.10 (A cell in higher dimensions) Show that the complexity of a single cell in the arrangement of n (d − 1)-simplices in Ed is O(n^{d-1} log^{d-1} n).
1. Show that the set of translations that bring a vertex of M into contact with a facet of E, or a vertex of E into contact with a facet of M, or an edge of M into contact with an edge of E, is a set C of at most mn triangles and parallelograms (when we identify the vector OM of the translation with its endpoint M).
2. Show that the set of positions of M inside E that can be accessed from a given position I corresponds to the cell that contains I in the arrangement of the triangles and parallelograms of C. Conclude that it may be determined whether two positions I and F in E may be connected by a path along which M remains entirely inside E, and, if so, that such a path can be computed in time O(m²n² log³(mn)).
Exercise 16.12 (Flying saucers) Consider a polyhedral flying saucer that flies above
a terrain modeled by a function z(x, y) that is piecewise linear. Show that the set of
translations of E3 for which the flying saucer is strictly above the ground is characterized
as the region above the upper envelope of certain triangles in E3. Give a lower bound on
the complexity of such a set.
Voronoi diagrams
Chapter 17
Euclidean metric
This chapter is concerned with the simplest case of Voronoi diagrams, where the
objects are points and the distance is given by the usual Euclidean metric in Ed.
The cells in the Voronoi diagram of a set M of points are then the equivalence
classes of the equivalence relation "to have the same nearest neighbor in M". It
is possible to show (see section 17.2) that such cells can be obtained by projecting
the facets of a polytope in Ed+1 onto Ed, which enables us to use several results
concerning polytopes for Voronoi diagrams as well. Bounds can be obtained in
this way for the complexity of Voronoi diagrams and of their computation. In
section 17.3, we define a dual of the Voronoi diagram, the Delaunay complex,
that enjoys several properties which make it desirable in applications such as
numerical analysis in connection with finite-element methods. The last section
of this chapter introduces a first generalization of Voronoi diagrams (see section
17.4): the higher-order Voronoi diagrams. The cells in the diagram of order k are
the equivalence classes of the equivalence relation "to have the same k nearest
neighbors in M", a notion that is often very helpful in data analysis.
17.1 Definition
Let M be a set of n points in Ed, M_1, ..., M_n, which we call the sites to avoid confusion with the other points in Ed. To each site M_i we attach the region V(M_i) in Ed that contains the points of Ed closer to M_i than to any other point in M:
V(M_i) = {X ∈ Ed : δ(X, M_i) ≤ δ(X, M_j) for any j ≠ i}.
In this chapter, δ denotes the Euclidean distance in Ed. Other distances will be
considered in chapter 18.
The set of points closer to M_i than to another site M_j is the half-space that contains M_i and that is bounded by the perpendicular bisector of the segment M_iM_j. The region V(M_i) is therefore the intersection of the n − 1 half-spaces bounded by the bisectors of the segments M_iM_j, j ≠ i, and is thus a convex polyhedral region.
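For a concrete illustration (not part of the original text; NumPy and the function names below are assumptions of this sketch), the defining property of a Voronoi cell can be checked directly from the distances: a query point is classified by its nearest site, and membership in V(M_i) amounts to M_i winning every bisector comparison.

    import numpy as np

    def nearest_site(sites, x):
        """Index i of the site M_i whose cell V(M_i) contains x
        (ties broken arbitrarily). sites: (n, d) array, x: (d,) array."""
        d2 = ((sites - x) ** 2).sum(axis=1)       # squared distances to the sites
        return int(np.argmin(d2))

    def in_cell(sites, i, x):
        """Half-space characterization of V(M_i): x is no farther from M_i
        than from any other site."""
        d2 = ((sites - x) ** 2).sum(axis=1)
        return bool(np.all(d2[i] <= d2 + 1e-12))  # tolerance for ties

    sites = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
    x = np.array([1.0, 1.0])
    print(nearest_site(sites, x), in_cell(sites, nearest_site(sites, x), x))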
Consider the Euclidean space of dimension d, Ed, let O be its origin, and let Σ be a sphere of Ed centered at C with radius r. Its equation is Σ(X) = 0, where
Σ(X) = XC² − r².  (17.1)
By the interior of a sphere Σ, we mean the set of points X such that Σ(X) is negative. The exterior is the set of points X such that Σ(X) is positive. A point X is said to be on, inside, or outside a sphere if it belongs to the sphere, to its interior, or to its exterior, respectively. For any point X in Ed, Σ(X) is called the power of X with respect to Σ. The power of the origin with respect to Σ is also denoted by σ and we have
σ = Σ(O) = C² − r².  (17.2)
If D is any line that contains X, and if M and N are the intersection points of D with Σ, then
Σ(X) = XM · XN.  (17.3)
This is obvious when D is the line connecting X and C: in that case XM · XN = (XC − r)(XC + r) = XC² − r². Otherwise, let D' be the line that contains X and C, and let M' and N' be its intersection points with Σ (see figure 17.2). The triangles XMM' and XN'N are similar (the angles M'MN and M'N'N are supplementary), which proves equation 17.3. In the case where X belongs to the exterior of Σ and D is tangent to Σ at T, then M = N = T and the previous equation can be rewritten
Σ(X) = XT².  (17.4)
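A small numerical check of equation 17.3 (a sketch added here, not from the book; NumPy is assumed): the power of a point computed from the definition coincides with the product XM · XN along a chord through X.

    import numpy as np

    def power(x, c, r):
        """Power of the point x with respect to the sphere of center c, radius r."""
        return float(((x - c) ** 2).sum() - r * r)

    c, r = np.array([1.0, 2.0]), 1.5
    x = np.array([4.0, 1.0])
    p = c + r * np.array([0.0, 1.0])        # a point of the circle, so D meets it
    u = (p - x) / np.linalg.norm(p - x)     # direction of the line D through X and P
    b = np.dot(u, x - c)                    # solve |x + t u - c|^2 = r^2 for t
    disc = b * b - power(x, c, r)
    t1, t2 = -b - np.sqrt(disc), -b + np.sqrt(disc)
    m, n = x + t1 * u, x + t2 * u           # the two intersection points M, N
    print(power(x, c, r), np.dot(m - x, n - x))   # both print 7.75 (up to rounding)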
Let φ be the mapping that takes a sphere Σ in Ed, of center C and whose power with respect to O is σ, to the point φ(Σ) = (C, σ) in Ed+1. Using φ enables us to treat spheres of Ed just as points of Ed+1.
We embed Ed as the hyperplane of Ed+1 whose equation is x_{d+1} = 0. As usual, the direction of the x_{d+1}-axis is called the vertical direction and we use the words
From equation 17.2, it follows that the images under φ of the points of Ed, considered as spheres of radius 0, belong to the paraboloid of revolution P with vertical axis and equation
x_{d+1} = ∑_{i=1}^{d} x_i² = X · X,  with X = (x_1, ..., x_d).
Identifying a point X with the sphere centered at X of radius 0 shows that φ maps any point X of Ed to the point φ(X) of Ed+1 obtained by lifting X onto P.
The set of concentric spheres of Ed centered at C is mapped by φ onto the vertical line of Ed+1 that contains C (and hence φ(C)). Let Σ be such a sphere. Equation 17.2 implies that the signed vertical distance from φ(Σ) to φ(C) equals r² (see figure 17.3). Thus, the real spheres, whose squared radii are non-negative, are mapped by φ to the points lying on or below the paraboloid, while the points lying above the paraboloid are the images under φ of the imaginary spheres, whose squared radii are negative.
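The mapping φ is easy to experiment with. The sketch below (not from the book; the function names are ours) lifts points and spheres to Ed+1 and checks that a real sphere is mapped below the paraboloid, at vertical distance r² under the lift of its center.

    import numpy as np

    def phi_point(x):
        """Lift a point of E^d (a sphere of radius 0) onto the paraboloid P."""
        x = np.asarray(x, dtype=float)
        return np.append(x, np.dot(x, x))

    def phi_sphere(c, r):
        """Map the sphere of center c and radius r to (c, sigma) in E^{d+1},
        where sigma = |c|^2 - r^2 is its power with respect to the origin."""
        c = np.asarray(c, dtype=float)
        return np.append(c, np.dot(c, c) - r * r)

    c, r = np.array([2.0, -1.0]), 3.0
    img = phi_sphere(c, r)
    # The vertical gap between phi(C) and phi(Sigma) equals r^2, so a real
    # sphere is mapped below the paraboloid.
    print(phi_point(c)[-1] - img[-1])   # prints 9.0 == r**2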
Figure 17.3. The paraboloid P (the coordinate system is not normed, so as to simplify the
representation).
17.2.4 Polarity
Consider a quadric Q in Ed+1 defined, in terms of the symmetric bilinear form
Q(X, Y) = X AQ Yt, by its homogeneous equation Q(X, X) = 0.
The polarity with respect to Q described in section 7.3 is an involution between
points and hyperplanes in Ed+1 which maps any point A to its polar hyperplane
A* of equation
    A AQ Xt = 0
and maps any hyperplane H to a point H* whose polar hyperplane is H. The
point H* is called the pole of H.
Note that if Q is the paraboloid P and if we put φ(Σ) = (C, σ), the equation
of the polar hyperplane φ(Σ)* of φ(Σ) can be rewritten as
    xd+1 = 2C · X − σ.
An essential property of polarity is that it preserves incidences (see section 7.3):
a point X belongs to a hyperplane H if and only if its polar hyperplane X*
contains the pole H* of H. Moreover (see exercise 7.14), when the quadric is the
paraboloid P, we have
    X ∈ H⁺ ⟺ H* ∈ X*⁺,
    X ∈ H⁻ ⟺ H* ∈ X*⁻.
Two spheres Σ1 and Σ2 centered at C1 and C2 and with radii r1 and r2 are
orthogonal if
    Σ1(C2) = r2²,    (17.5)
or equivalently if
    Σ2(C1) = r1².
A simple verification shows that, if the spheres are real, then they are orthogonal
if and only if the angle (IC1, IC2) at any intersection point I of Σ1 ∩ Σ2 is a right
angle, or equivalently, if and only if the dihedral angle of the tangent hyperplanes
at I is a right angle.
Expression 17.5 may be rewritten as
    C1 · C2 − (σ1 + σ2)/2 = 0,    with σi = Ci² − ri² (i = 1, 2),
which shows that two spheres Σ1 and Σ2 are orthogonal if and only if the two
points φ(Σ1) and φ(Σ2) are conjugate with respect to the paraboloid P. This
implies that:
Lemma 17.2.1 The set of spheres in Ed that are orthogonal to a given sphere Σ
is mapped by φ to the polar hyperplane φ(Σ)* of φ(Σ).
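A quick numerical check of the conjugacy criterion above (a sketch, not the book's code) can be phrased as follows; the only ingredient is the identity C1 · C2 = (σ1 + σ2)/2 for orthogonal spheres.

    # Two spheres are orthogonal exactly when their lifted images are conjugate
    # with respect to P, i.e. when C1.C2 - (sigma1 + sigma2)/2 = 0.
    def power_of_origin(center, radius):
        return sum(c * c for c in center) - radius * radius

    def are_orthogonal(c1, r1, c2, r2, eps=1e-9):
        dot = sum(a * b for a, b in zip(c1, c2))
        return abs(dot - 0.5 * (power_of_origin(c1, r1) +
                                power_of_origin(c2, r2))) < eps

    # Two unit circles whose centers are sqrt(2) apart meet at right angles.
    print(are_orthogonal((0.0, 0.0), 1.0, (2.0 ** 0.5, 0.0), 1.0))   # True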
Let us now consider the points in Ed as spheres of radius 0. The set of spheres in
Ed that pass through a given point X ∈ Ed is also the set of spheres orthogonal
to the sphere centered at X with radius 0. Therefore its image under φ is the
hyperplane φ(X)* polar to φ(X) ∈ P. This hyperplane must be tangent to P
at φ(X): indeed, the only sphere of radius 0 which is orthogonal to X is X
itself, and hence φ(X)* intersects P in the single point φ(X).
Let Σ be a sphere in Ed. The intersection of φ(Σ)* with P is the image under
φ of the set of spheres with radius 0 that are orthogonal to Σ, namely Σ itself
(considered as a set of points, or equivalently as a set of spheres of radius 0).
Consequently, φ(Σ)* ∩ P in Ed+1 projects onto Σ in Ed. More generally, we have
the following result.
It follows from this lemma that the power of a point X with respect to a sphere Σ
equals the square of the radius of the sphere ΣX orthogonal to Σ and centered at
X (ΣX is imaginary if X is inside Σ). The power Σ(X) can be easily computed in
the space Ed+1 that represents the spheres of Ed. Indeed (see figure 17.4), ΣX is
mapped by φ to the point I in Ed+1 that is the intersection of the vertical line that
passes through X (which corresponds to the spheres centered at X) with the polar
hyperplane φ(Σ)* of φ(Σ) (which corresponds to the spheres orthogonal to Σ).
The xd+1-coordinates of φ(X) and I are respectively X² and ΣX(O) = X² − Σ(X),
since the square of the radius of ΣX equals the power of X with respect to Σ. The
difference of these xd+1-coordinates is called the signed vertical distance. This
proves the following lemma.
Lemma 17.2.3 The power of X with respect to a sphere Σ equals the signed
vertical distance from the point φ(X) to the hyperplane φ(Σ)*.
The equivalences on the left are consequences of the two preceding lemmas, and
the ones on the right are proved by the special properties of polarity (see subsec-
tion 17.2.4 and exercise 7.13).
Any point in the half-space that lies below φ(X)* in Ed+1 is thus the image
under φ of a sphere whose interior contains X. Likewise, any point in the half-
space that lies above φ(X)* in Ed+1 is the image under φ of a sphere whose
exterior contains X, and the points on φ(X)* are the images of the spheres
passing through X.
Remark. Lemma 17.2.3 shows that the squared distance ‖XA‖² separating
points X and A, which is also the power of X with respect to the sphere centered
at A with radius 0, equals the absolute value of the vertical distance between
φ(A)* and φ(X). Points X and A play symmetric roles, so ‖XA‖² also equals
the absolute value of the vertical distance between φ(X)* and φ(A).
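The following small computation (an illustration only, not the book's code) checks lemma 17.2.3 on a concrete instance, using the equation xd+1 = 2C · X − σ of the polar hyperplane given in subsection 17.2.4.

    # The power Sigma(X) = |XC|^2 - r^2 equals the vertical distance from the
    # polar hyperplane phi(Sigma)* up to the lifted point phi(X) = (X, |X|^2).
    def power(X, C, r):
        return sum((x - c) ** 2 for x, c in zip(X, C)) - r * r

    def vertical_distance_to_polar(X, C, r):
        sigma = sum(c * c for c in C) - r * r
        lifted = sum(x * x for x in X)                     # x_{d+1} of phi(X)
        on_polar = 2 * sum(x * c for x, c in zip(X, C)) - sigma
        return lifted - on_polar

    X, C, r = (3.0, 1.0), (1.0, 0.0), 2.0
    print(power(X, C, r), vertical_distance_to_polar(X, C, r))   # 1.0 1.0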
    H12 : Σ1(X) − Σ2(X) = 0.
For each i = 1, ..., n, we let φ(Mi)* denote the hyperplane in Ed+1 that is tangent
to the paraboloid P at the point φ(Mi) obtained by lifting Mi vertically onto the
paraboloid P. The preceding discussion shows that the set of spheres
(real or imaginary) whose interiors contain no point of M is mapped by φ to the
intersection of the n half-spaces lying above the hyperplanes φ(M1)*, ..., φ(Mn)*.
This intersection is an unbounded polytope which contains P. We call it the
Voronoi polytope and denote it by V(M) (see figure 17.5).
If the hyperplanes φ(Mi)* are in general position in Ed+1, then V(M) is a simple (d+1)-
polytope: each vertex is incident to exactly d + 1 hyperplanes. Expressed in terms
of the Mi's, this general position assumption means that no d + 2 points in M lie on
the boundary of a common sphere: this is exactly the L2-general position assumption. If
it is satisfied, Vor(M) is a complex whose vertices are all equidistant from some
d + 1 points in M and closer to these points than to any other point in M: they
are the centers of spheres circumscribed to (d + 1)-tuples of points of M whose interiors do not
contain any point of M. More generally, a k-face of Vor(M) is the projection of
a k-face of V(M). It is thus the set of points that are equidistant from d + 1 − k
points in M and closer to these points than to any other point in M.
Theorem 17.2.5 reduces the problem of computing the Voronoi diagram of n
points in Ed to the computation of the intersection of n half-spaces of Ed+1. The
algorithms described in this book that compute half-space intersections, be they
deterministic, randomized, static or dynamic, output-sensitive or not, can all be
used to compute Voronoi diagrams.
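In practice, one rarely codes the half-space intersection by hand: off-the-shelf libraries already follow the route described in this section. The short sketch below (assuming numpy and scipy are installed; it is not the book's algorithm) simply calls scipy.spatial.Voronoi, whose underlying Qhull code relies on a lifting of exactly this kind.

    import numpy as np
    from scipy.spatial import Voronoi

    sites = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0],
                      [3.0, 2.0], [1.0, 1.0]])
    vor = Voronoi(sites)
    print(vor.vertices)       # Voronoi vertices (centers of empty circumscribed spheres)
    print(vor.ridge_points)   # pairs of sites whose cells share a facet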
Corollary 17.2.6 The complexity (namely, the number of faces) of the Voronoi
diagram of n points in Ed is Θ(n^⌈d/2⌉). We may compute such a diagram in
time O(n log n + n^⌈d/2⌉), which is optimal in the worst case.
Proof. The upper bounds on the complexity and running time of the algorithm
are immediate consequences of the upper bound theorem 7.2.5 and of the results of
the previous sections.
That Ω(n^⌈d/2⌉) is a lower bound on the complexity of the Voronoi diagram of n
points in Ed is a consequence of exercise 7.11, where it is shown how to construct
a maximal polytope whose vertices lie on the paraboloid, and of theorem 17.3.1
below.
That Ω(n log n) is a lower bound on computing the Voronoi diagram in the plane
is a consequence of the fact that the unbounded edges of the Voronoi diagram of
a set M of points correspond to projections of the edges of the convex hull of M.
We also comment on this below. □
Figure 17.5. The paraboloid P and the Voronoi polytope V(M).
That convex hull, D(M) = conv(φ(M1), ..., φ(Mn), O'), is stable as O' goes to infinity (see figure 17.6). The faces of
D(M) that do not contain O' form the lower envelope of conv(φ(M1), ..., φ(Mn))
(see also exercise 7.14). Their projections onto Ed form a complex whose vertices
are exactly the Mi's. The domain of this complex is the projection of the convex
hull of the φ(Mi)'s: it is therefore the convex hull conv(M) of the Mi's. This
complex is called the Delaunay complex of M and is denoted by Del(M). For
k = 0, ..., d, the k-faces of Del(M) are thus in one-to-one correspondence with
the k-faces of D(M) that do not contain O'.
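A compact way to realize this construction on actual coordinates (a sketch under the assumption that numpy and scipy are available, not the book's code) is to lift the sites onto P, take the convex hull in Ed+1, and keep the lower facets.

    import numpy as np
    from scipy.spatial import ConvexHull

    def delaunay_simplices(points):
        pts = np.asarray(points, dtype=float)
        lifted = np.hstack([pts, (pts ** 2).sum(axis=1, keepdims=True)])
        hull = ConvexHull(lifted)
        # hull.equations stores (outward normal, offset) for each facet;
        # a facet belongs to the lower envelope when its outward normal
        # points downwards.
        lower = hull.equations[:, -2] < 0
        return hull.simplices[lower]

    sites = [(0, 0), (3, 0), (0, 3), (3, 2), (1, 1)]
    print(delaunay_simplices(sites))   # vertex indices of the Delaunay triangles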
As shown in exercise 7.14, there exists a bijection between the faces of V(M)
and the faces of D(M) that do not contain O'. This bijection maps the facet
of V(M) supported by the hyperplane φ(Mi)* to the point φ(Mi). More generally, the k-faces of
V(M) are in one-to-one correspondence with the (d − k)-faces of D(M) that do
not contain O'. Moreover, this bijection reverses inclusion relationships.
Owing to theorem 17.2.5, the k-faces of Vor(M) are also in bijection with
the k-faces of the unbounded polytope V(M). So we have a bijection between
the k-faces of Vor(M) and the (d − k)-faces of Del(M) that reverses inclusion
relationships. The Delaunay complex Del(M) is therefore dual to the Voronoi
diagram Vor(M).
Notice that the duality above maps a face of Vor(M), formed by the points
equidistant from m sites in M, to the face of Del(M) that is the convex hull of
these m sites.
The preceding discussion leads to the following theorem:
Proof. Let us pick a d-face T of the Delaunay complex. Then T is the convex hull
T = conv(Mi0, ..., Mil) of l + 1 co-spherical points Mi0, ..., Mil. (If the points are in
L2-general position, then l = d and T is a d-simplex.)
Figure 17.7. The Delaunay triangulation that corresponds to the Voronoi diagram shown
in figure 17.1.
Our next theorem extends this result into a necessary and sufficient condition
for the convex hull of some points in M to be a face of the Delaunay complex of
M.
Theorem 17.3.4 Let M be a set of points in Ed, and let Mk = {Mi0, ..., Mik}
be a subset of k + 1 points of M. The convex hull of Mk is a face of the Delaunay
complex if and only if there exists a (d − 1)-sphere passing through Mi0, ..., Mik
and such that no point in M belongs to its interior.
Proof. The necessary condition results immediately from the preceding theorem
and from the fact that a sphere circumscribed to a face is also circumscribed to
its subfaces. Conversely, assume that there exists a (d − 1)-sphere Σ that passes through
Mi0, ..., Mik and whose interior contains no point in M. Let H be the hyper-
plane φ(Σ)* in Ed+1. This hyperplane contains the points φ(Mi0), ..., φ(Mik),
and the half-space H⁻ lying below H does not contain any point of φ(M) (ac-
cording to lemma 17.2.4). Thus H is a hyperplane supporting D(M) along
conv(φ(Mi0), ..., φ(Mik)). Hence conv(φ(Mi0), ..., φ(Mik)) = H ∩ D(M) is a face
of D(M). It follows from theorem 17.3.1 that conv(Mi0, ..., Mik) is a face of the De-
launay complex of M. □
which proves that Md+2 does not belong to the interior of Σ1.
Compactness
The preceding theorem was concerned with spheres circumscribed to simplices in
the triangulation. The next theorem considers the smallest enclosing sphere of
each simplex S: this sphere is the circumscribed sphere of S if the center of the
latter belongs to S, or otherwise is a sphere centered on some k-face (k < d) of
S that passes through the k + 1 vertices of this face.
As before, we consider a set M of points in Ed and a triangulation T(M) of
M. To T(M) corresponds a function ΣT(X), defined over conv(M) as the power
of the point X with respect to the sphere circumscribing any d-simplex of T(M)
that contains X. By the results of subsection 17.2.6, ΣT(X) is well defined even when
X belongs to several cells.
of φ(T). For a given X, this signed vertical distance is maximized when φ(T) is
a face of the convex hull of φ(M) in Ed+1: in other words, when T is a simplex
of a Delaunay triangulation of M. □
Proof. Let ΣT be the sphere circumscribed to T, CT its center, and rT its radius.
Then
    ΣT(X) = XCT² − rT²
is minimized when X = CT and is therefore never smaller than −rT². If CT is contained
in T, the smallest enclosing sphere of T is ΣT, hence r'T = rT and the lemma
is trivial. Otherwise, the smallest enclosing sphere of T is centered on a k-face
(k < d), namely the face F such that the orthogonal projection of CT onto the
plane that supports F falls inside F. The radius r'T of this sphere is that of
the (k − 1)-sphere circumscribed to F. Its center C'T minimizes the value of
XCT² when X ∈ T. Pythagoras' theorem then shows that
    CTC'T² + r'T² = rT².
Let T(M) be any triangulation of a set M of points in Ed. For each simplex T
in T(M), we let r'T denote the smallest radius of a sphere that encloses T. The
maximum min-containment radius of T(M) is defined by
    C(T(M)) = max {r'T : T ∈ T(M)}.
The most compact triangulations are then defined as the triangulations that min-
imize the maximum min-containment radius.
Theorem 17.3.9 Delaunay triangulations are the most compact among all the
triangulations of M.
Note that since the maximum min-containment radius C(T(M)) is determined only
by the simplices T of T(M) such that C(T(M)) = r'T, triangulations other than
Delaunay triangulations might also be most compact among the triangulations
of M.
Proof. Let T(M) be any triangulation of M and let Del(M) be any Delaunay triangulation of M.
Equiangularity (d = 2)
Proof. We must prove that, among all the triangulations of M, the ones that
maximize the angle vector for the lexicographic order are always Delaunay tri-
angulations. Let us thus consider two triangles T1 = ABC and T2 = BCD in
some triangulation T(M), such that the union of T1 and T2 is a strictly con-
vex quadrilateral Q. (This means that A, B, C, and D are all vertices of the
convex hull conv(A, B, C, D).) In order to increase the equiangularity, we can
flip the diagonal as follows (see figure 17.8). If the triangles T1' = ABD
and T2' = ACD are such that Q(T1', T2') > Q(T1, T2), then replace T(M) by the
triangulation T'(M) which contains T1' and T2' instead of T1 and T2.
The previous rule may be dubbed a regularization rule since it transforms a
pair of adjacent triangles into a regular pair of triangles: if the two triangles do
not form a convex quadrilateral, then the pair is obviously regular, and the rule
does not apply; otherwise, T1 ∪ T2 is convex and the pair is transformed into a
regular pair. Indeed, let Σ1 and Σ2 be the circles circumscribed to T1 and T2.
We will show that the diagonal BC is flipped if and only if D is contained inside
the circle Σ1. Let α, β, γ, and δ be the angles at the vertices of the quadrilateral
ABCD, α, β1, and γ1 the angles at the vertices of T1, and β2, γ2, and δ the angles
at the vertices of T2.
Figure 17.8. The pair of triangles T1 = ABC, T2 = BCD and the pair T1' = ABD, T2' = ACD obtained by flipping the diagonal.
Moreover, we denote by α', β, and δ' the angles at the
vertices of T1', and by α'', γ, and δ'' the angles at the vertices of T2'. The situation is
depicted in figure 17.8. If α is the smallest angle in T1 and T2, then the diagonal
is not flipped. But then α ≤ β2 and, since δ = π − β2 − γ2, we get α + δ < π,
which shows that A is not contained inside Σ2, and this implies
that D is not contained inside Σ1. The situation is entirely symmetric when the
smallest angle is δ. When the smallest angle is β1, then we flip the diagonal only
if δ' is greater than β1, which happens only when D is contained inside Σ1. Of
course, the cases when the smallest angle is γ1, β2, or γ2 are entirely similar, so we
have shown that the diagonal is flipped if and only if the flip transforms the irregular
pair (T1, T2) into a regular pair (T1', T2').
Clearly, after a flip we have Q(T'(M)) > Q(T(M)). Flipping the edges when-
ever possible progressively increases the angle vector of the triangulation. Since
there are only a finite number of triangulations, this process eventually reaches a
triangulation that has only regular pairs of adjacent triangles. This triangulation
is a Delaunay triangulation, as is shown by theorem 17.3.6. □
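The in-circle test underlying this flip rule can be written as a single determinant evaluation. The following sketch (an illustration with hypothetical point names, not the book's code) assumes A, B, C are given in counter-clockwise order.

    # D lies strictly inside the circle through A, B, C (counter-clockwise)
    # if and only if the 3x3 determinant below is positive.
    def in_circle(A, B, C, D):
        def row(P):
            return [P[0] - D[0], P[1] - D[1],
                    (P[0] - D[0]) ** 2 + (P[1] - D[1]) ** 2]
        m = [row(A), row(B), row(C)]
        det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
               - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
               + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
        return det > 0

    print(in_circle((0, 0), (1, 0), (0, 1), (0.4, 0.4)))   # True: flip the diagonal
    print(in_circle((0, 0), (1, 0), (0, 1), (2.0, 2.0)))   # False: the pair is regular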
In this section, we define Voronoi diagrams of order k and show the connection
between these diagrams and the faces at level k in a hyperplane arrangement
in Ed+1. As usual, the Euclidean space Ed of dimension d is embedded in Ed+1
as the hyperplane xd+1 = 0, and φ(M)* denotes the hyperplane in Ed+1 that is
tangent to the paraboloid P at the point φ(M) obtained by lifting M vertically
onto the paraboloid.
In section 17.2, we established the connection between the Voronoi diagram
of a set M of n points M1, ..., Mn in Ed and the polytope in Ed+1 that is the
intersection of the n half-spaces φ(Mi)*⁺ that lie above the hyperplanes φ(Mi)*.
Equivalently, V(M) is the cell at level 0 in the arrangement A of the hyperplanes
φ(M1)*, ..., φ(Mn)*, if the reference point is on the xd+1-axis, sufficiently high
so that it is above all the hyperplanes. Let us recall that a point is at level k in
A if it belongs to exactly k open half-spaces φ(Mi1)*⁻, ..., φ(Mik)*⁻, where
each φ(Mij)*⁻ is bounded by φ(Mij)* and does not contain the reference point
(see section 14.5).
It is tempting to consider the cells at levels k > 0. We define below a cell
complex that spans Ed, called the Voronoi diagram of order k of M, and show in
theorem 17.4.1 that the cells of this complex are the non-overlapping projections
onto Ed of the cells at level k in the arrangement A.
Let Mk be a subset of size k of M. The Voronoi region of Mk is the polytope
Vk(Mk) formed by the points in Ed that are closer to every site in Mk than to any
site in M \ Mk. Formally,
    Vk(Mk) = {X ∈ Ed : δ(X, Mi) ≤ δ(X, Mj) for all Mi ∈ Mk and all Mj ∈ M \ Mk}.
Let us consider all the subsets of size k of M whose Voronoi regions are not
empty. As proved in the theorem below, these polytopes and their faces form a
d-complex whose domain is Ed. This complex is called the Voronoi diagram of
order k of M (see figures 17.9 and 17.10). It is denoted by Vork(M). When
k = 1, we recognize the definition of the usual Voronoi diagram.
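As a brute-force illustration (not an efficient algorithm), the cell of Vork(M) that contains a query point X is determined by the set, not the order, of its k nearest sites:

    def order_k_cell(X, sites, k):
        # Sort the site indices by squared distance to X and keep the k nearest.
        by_distance = sorted(range(len(sites)),
                             key=lambda i: sum((x - s) ** 2
                                               for x, s in zip(X, sites[i])))
        return frozenset(by_distance[:k])

    sites = [(0, 0), (2, 0), (0, 2), (2, 2)]
    print(order_k_cell((0.6, 0.5), sites, 2))   # frozenset({0, 1})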
Figure 17.9. The Voronoi diagram of order 2 of the points in figure 17.1.
Figure 17.10. The Voronoi diagram of order 3 of the points in figure 17.1.
Proof. The proof relies on lemma 17.2.4. A sphere in Ed whose interior contains
exactly k points of M is mapped by φ to a point at level k in the arrangement A of the
hyperplanes φ(M1)*, ..., φ(Mn)*.
More precisely, X belongs to the cell Vk(Mk) of the Voronoi diagram of order
k if and only if X is the center of a sphere Σ whose interior contains the points
Figure 17.11. The Voronoi diagram of order 2 is obtained by projecting the cells at level 2
of the arrangement A.
of Mk and only those. Then φ(Σ) belongs to the k closed half-spaces below the
hyperplanes φ(Mj)* for the Mj's in Mk, and only to those half-spaces. The cells
of the Voronoi diagram of order k are thus obtained by projecting vertically the cells
at level k of A (see figure 17.11).
It is easily verified that any vertical line intersects at least one cell at level k in
A and does not intersect the interior of more than one cell at level k. It follows
that the l-faces, for l < d, of the Voronoi diagram of order k are obtained by
vertically projecting the l-faces common to several cells at level k. If the Mi's are
in L2-general position, then the hyperplanes φ(M1)*, ..., φ(Mn)* are in general
position. In that case, it was shown in section 14.5 that the cells of A that contain
an l-face F at level k have levels that vary between k and k + d + 1 − l. Among
those, there is only one cell at level k and one cell at level k + d + 1 − l, and
several cells at each level k < j < k + d + 1 − l. It follows that the vertical projection
of F is an l-face of the Voronoi diagrams of orders k + 1, k + 2, ..., k + d − l. □
Having computed the Voronoi diagram of order k of the sites, looking for the
k nearest sites of any point X in Ed can be performed by finding the cell of the
diagram that contains X (see exercises 17.2 and 17.4).
It follows from the construction that the total complexity of the Voronoi di-
agrams of all orders k, 1 ≤ k ≤ n − 1, is O(n^(d+1)): indeed, it is exactly the
complexity of the arrangement of the n hyperplanes in Ed+1. Moreover, these dia-
grams can all be computed in time O(n^(d+1)) (see theorem 14.4.4). An upper bound
on the complexity of the Voronoi diagrams Vor1(M), ..., Vork(M) of orders
between 1 and k is provided by theorem 14.5.1, which bounds the complexity
Theorem 17.4.2 The overall complexity of the first k Voronoi diagrams of a set
of n points in Ed is O(n^⌊(d+1)/2⌋ k^⌈(d+1)/2⌉). These k diagrams may be computed
in time O(n^⌊(d+1)/2⌋ k^⌈(d+1)/2⌉) if d ≥ 3, or O(n k² log n) if d = 2.
Let us close this section by observing that Vorn−1(M) is the complex whose cells
consist of the points that are further from a particular site than from any other site. This
is why this diagram is sometimes called the furthest-point Voronoi diagram. The
vertices of this diagram are the centers of spheres circumscribed to d + 1 sites
and whose interiors contain all the other sites. Its cells are all unbounded. The
furthest-point diagram can be obtained by computing the intersection of the n
lower half-spaces bounded by the hyperplanes φ(Mi)*, i = 1, ..., n.
17.5 Exercises
Exercise 17.1 (Shortest edge) Denote by F and G two finite sets of points in Ed.
Show that the shortest edge that connects a point in F to a point in G is an edge of the
Delaunay triangulation of F ∪ G. From this, conclude that each point is adjacent to its
nearest neighbor in the Delaunay triangulation.
Exercise 17.2 (Nearest neighbor in the plane) Show that, given the Voronoi dia-
gram of a set M of points in the plane, it may be preprocessed in linear time to answer
nearest neighbor queries (that is, find the nearest site to a query point) in logarithmic
time. Same question for the set of k nearest neighbors (k fixed).
Exercise 17.3 (On-line nearest neighbor) We place point sites in the plane and we
want to maintain a data structure on-line so as to answer nearest neighbor queries on
the current set of sites (that is, find the nearest site to a query point). Devise a structure
that stores n sites using storage O(n), such that, under the assumption that the points
are inserted in a random order, the expected time needed to insert a new site is O(log n),
and that answers any query in expected time O(log² n).
Hint: One may build a two-level data structure in the following way. The first level
corresponds to a triangulation of the Voronoi diagram, obtained by connecting each site to
all the vertices of its Voronoi region. Build an influence graph for the regions defined as
the triangles in this triangulation (it is a variant of the influence graph used in exercise
17.10). Each triangle points to the site that kills it, and all the triangles created after the
insertion of a site are sorted in polar angle around this site and stored in an array: this
is the second level of the data structure. Show that the query point belongs to O(log n)
triangles on average, and hence that only O(log n) binary searches are performed in
the arrays of the second level.
Exercise 17.4 (Nearest neighbor) Consider a set of n point sites in Ed. Explain how
to design a data structure of size O(n^(⌈d/2⌉+ε)) that allows the nearest site to any query
point P to be found in logarithmic time.
Hint: Use the solution of exercise 14.12 in Ed+1. The exponent ε can be removed, at
the cost of increasing the query time to O(log² n) (see the bibliographical notes at the
end of chapter 14).
Exercise 17.5 (Union of balls) Use lemma 17.2.4 to reduce the problem of computing
the union of n balls in Ed to that of computing the intersection of the paraboloid P with a
polytope in Ed+1. Conclude that the complexity of the union of n balls is O(n^⌈d/2⌉). Devise
an algorithm that computes the union of n balls in expected time O(n log n + n^⌈d/2⌉).
Exercise 17.6 (Intersection of balls) The results of exercise 17.5 are also valid for
the intersection of n balls in Ed. In E3, show that if the balls have the same radius, the
complexity of the intersection is only O(n), and propose an algorithm that computes this
intersection in expected time O(n log n).
Hint: Show that each face of the intersection is "convex", meaning that given any two
points in any face, there is an arc of a great circle joining these points which is entirely
contained in that face; then use Euler's relation. For the algorithm, use a variant of the
randomized incremental algorithm of section 8.3.
Exercise 17.7 (Minimum enclosing ball) Show that the center of the smallest ball
whose interior contains a set M of points in E2 is either a vertex of the furthest-point
Voronoi diagram (of order n - 1) of M, or else the intersection of an edge of this diagram
(on the perpendicular bisector of two sites A and B) with the edge AB.
Hint: Use a randomized algorithm with an influence graph. Objects are sites, regions
are the balls circumscribed to d+ 1 sites, and an object conflicts with a region if it belongs
to that region. Show that the ball circumscribed to any new simplex S = conv(A, F) is
contained in the union of the two balls circumscribed to T and V, the two d-simplices
that share the common facet F. Build an influence graph in which each node is the
child of only two nodes, namely the node corresponding to F is the child of the nodes
corresponding to T and V. The number of children of a node is not bounded, but the
analysis can be carried out using biregions (see exercise 5.7).
Hint: As was done for planar triangulations (proof of theorem 17.3.10), local regular-
ization in E3 corresponds to replacing the upper facets of a simplex in E4 by its lower
facets. Since a simplex in E4 has five facets, local regularization in E3 leads to replacing
two adjacent tetrahedra T1 and T2 by three tetrahedra T3, T4, and T5 that are pairwise
adjacent (and have the same vertices as T1 and T2), or the converse. Show that the local
regularization rule cannot always be applied even though the triangulation is not regular
everywhere.
Exercise 17.13 (Flipping in higher dimensions) Show that if one adds a new point
P to a Delaunay triangulation Del(M) of a set M of points in Ed, the Delaunay triangu-
lation Del(M ∪ {P}) can be obtained by splitting the simplex of Del(M) that contains
P into d + 1 new simplices, and then applying the generalized local regularization of
exercise 17.12. Show that if the n points in M are inserted in a random order, this
incremental algorithm computes Del(M) in expected time O(n log n + n^⌈d/2⌉), which is
optimal.
we infer that the k nearest neighbors of X are the points in Mk if and only if Mk is the
subset for which X has the smallest power with respect to the sphere centered at G(Mk)
and whose power with respect to the origin is σ(Mk).
Exercise 17.16 (Euclidean minimum spanning tree) Consider a set M of n points
in L2-general position in Ed. A Euclidean minimum spanning tree, or EMST for short,
is a tree whose nodes are the points in M and whose total edge length is minimal. Show
that such a tree is a subgraph of the Delaunay triangulation of M. For the planar case,
show that an EMST can be computed in time O(n log n). Consider the case where the
set of points is not in L2-general position any more.
Hint: Show that the following greedy algorithm produces a minimum spanning tree.
Denote by A the set of points of M that are already connected to the current tree. The
greedy algorithm picks the shortest segment that does not induce a cycle in the current
subtree; this edge connects a point of A to a point of M \ A. The latter point is added
to A, the edge is added to the tree, and so on until the tree spans M. Show that this
yields an EMST, even if the points are not in L2-general position. Exercise 17.1 shows
that it can be completed into a Delaunay triangulation of M. Explain how to make the
algorithm run in time O(n log n).
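One possible transcription of the hint (a sketch assuming numpy and scipy are available; it is not the book's algorithm) collects the Delaunay edges and then runs Kruskal's algorithm on them.

    import numpy as np
    from scipy.spatial import Delaunay

    def emst(points):
        pts = np.asarray(points, dtype=float)
        tri = Delaunay(pts)
        # Every EMST edge is a Delaunay edge, so it suffices to consider these.
        edges = {tuple(sorted((s[i], s[j])))
                 for s in tri.simplices for i in range(3) for j in range(i + 1, 3)}
        parent = list(range(len(pts)))
        def find(x):                       # union-find with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        tree = []
        for a, b in sorted(edges, key=lambda e: np.linalg.norm(pts[e[0]] - pts[e[1]])):
            ra, rb = find(a), find(b)
            if ra != rb:                   # keep the edge if it joins two components
                parent[ra] = rb
                tree.append((a, b))
        return tree

    print(emst([(0, 0), (1, 0), (0, 1), (2, 2)]))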
Chapter 18
Non-Euclidean metrics
Note that when the hyperplanes φ(Σi)* are in general position, P(S) is a simple
polytope in Ed+1, so each vertex is incident to exactly d + 1 hyperplanes. In terms of
the spheres Σi, this general position assumption means that no subset of d + 2
spheres in S is orthogonal to a common sphere in Ed, or equivalently that no
point in Ed has the same power with respect to d + 2 spheres in S. In this case,
we say that the spheres are in general position. The power diagram Pow(S) is then a
cell complex whose vertices have the same power with respect to d + 1 spheres in
S (and are therefore the centers of spheres orthogonal to these d + 1 spheres), and
a greater power with respect to the other spheres in S. More generally, a
k-face of Pow(S) is formed by the points that have the same power with respect
to d + 1 − k given spheres in S, and a greater power with respect to the other
spheres in S.
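For illustration (this is not the book's algorithm), a brute-force point-location in a power diagram simply minimizes the power function over the spheres of S:

    # A point X lies in the power cell of the sphere Sigma_i minimizing
    # Sigma_i(X) = |X - C_i|^2 - r_i^2.
    def power_cell(X, spheres):
        def pw(s):
            C, r = s
            return sum((x - c) ** 2 for x, c in zip(X, C)) - r * r
        return min(range(len(spheres)), key=lambda i: pw(spheres[i]))

    spheres = [((0.0, 0.0), 1.0), ((3.0, 0.0), 2.0)]
    # The query point is nearer to the first center, yet falls in the power cell
    # of the larger sphere: the radical hyperplane is not the Euclidean bisector.
    print(power_cell((1.4, 0.0), spheres))   # 1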
Remark 1. In the case of Voronoi diagrams, all the hyperplanes φ(Mi)* are
tangent to the paraboloid P, and each of them contributes a facet to V(M).
For power diagrams, however, a polar hyperplane φ(Σi)* does not necessarily
contribute a face to P(S). Such a hyperplane is called redundant. In the power
diagram, it means that the cell P(Σi) is empty: Σi does not contribute a cell to
Pow(S).
Remark 2. There is no particular difficulty if some, or even all, of the spheres in S
are imaginary. This fact is used in section 18.3.2.
Remark 3. Any polytope in Ed+1 that is the intersection of upper half-spaces
corresponds to a power diagram: if H1, ..., Hn are the hyperplanes that bound
these half-spaces, then their upper envelope projects onto the power diagram of
the spheres φ⁻¹(H1*), ..., φ⁻¹(Hn*).
Consider all the subsets of size k of S whose corresponding power cells are not
empty. These regions and their faces form a cell complex that covers Ed entirely,
and that is called the power diagram of order k of S. We denote it by Powk(S).
This fact is a consequence of the theorem below, whose proof closely resembles
that of theorem 17.4.1. This theorem clarifies the links between power diagrams
of order k in Ed and faces at level k in the arrangement of n hyperplanes in Ed+1.
As usual, the Euclidean space of dimension d is identified with the hyperplane
xd+1 = 0 in the space Ed+1 of dimension d + 1, and φ(Σ)* stands for the polar
hyperplane of φ(Σ).
Theorem 18.1.3 Consider a set S = {Σ1, ..., Σn} of spheres in Ed, and let
A be the arrangement of their polar hyperplanes φ(Σ1)*, ..., φ(Σn)*. The power
diagram of order k, Powk(S), is a cell d-complex in Ed. Its cells are the vertical
projections of the cells at level k in the arrangement A, the reference point being
on the xd+1-axis above all the hyperplanes φ(Σi)*, i = 1, ..., n. The l-faces of
Powk(S), l < d, are obtained by projecting the l-faces common to at least two
cells of A at level k.
Theorem 18.2.1 Any simple affine diagram in Ed is the power diagram of a set
of spheres in Ed.
such that 1 ≤ i < j ≤ n. It follows that the affine diagram is exactly the power
diagram of the spheres Σi, i = 1, ..., n.
We now show how to build the Pi's. Denote by hij the vertical projection of
Hij onto Pi (note that i < j by the definition of Hij).
Let us take for P1 any non-vertical hyperplane, and for P2 any non-vertical
hyperplane that intersects P1 along h12. For k ≥ 3, we take for Pk the
hyperplane that intersects P1 along h1k and P2 along h2k: such a hyperplane
exists because h1k and h2k intersect along the affine subspace of dimension d − 2
that is the projection of I12k onto P1, P2, or Pk.
It remains to see that the vertical projection of Pi ∩ Pj is exactly Hij. By
construction, this is true for P1 ∩ P2, P1 ∩ Pj, and P2 ∩ Pj, j ≥ 3. For 3 ≤ i < j ≤ n,
we know that Pi ∩ Pj ∩ P1 projects onto Ed along I1ij, and that Pi ∩ Pj ∩ P2 projects
onto Ed along I2ij. The diagram being simple, I1ij and I2ij must be distinct. The
projection of Pi ∩ Pj must therefore contain I1ij and I2ij, and hence also their
affine hull, which is nothing other than Hij.
Below, we rather use the following theorem.
Theorem 18.2.2 The affine diagram whose hyperplanes Hij have equations
    −2(Ci − Cj) · X + σi − σj = 0
is the power diagram of the spheres Σi, i = 1, ..., n, centered at Ci and with
respect to which the origin has power σi.
Proof. We may simply check that the equation of Hij can be written as Σi(X) −
Σj(X) = 0, which is exactly that of the radical hyperplane of Σi and Σj (see
subsection 17.2.6). □
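A quick numerical check of this proof (illustrative only; the sphere data below are arbitrary) verifies that the left-hand side of the equation of Hij equals Σi(X) − Σj(X) for every X.

    import random

    def sphere_power(X, C, sigma):
        # Sigma(X) = |X|^2 - 2 C.X + sigma, with sigma = |C|^2 - r^2
        return sum(x * x for x in X) - 2 * sum(x * c for x, c in zip(X, C)) + sigma

    Ci, si = (1.0, 2.0), -3.0
    Cj, sj = (-1.0, 0.5), 2.0
    for _ in range(3):
        X = [random.uniform(-5, 5) for _ in range(2)]
        lhs = -2 * sum((a - b) * x for a, b, x in zip(Ci, Cj, X)) + si - sj
        print(abs(lhs - (sphere_power(X, Ci, si) - sphere_power(X, Cj, sj))) < 1e-9)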
Consider two points X and A in Ed. By the general quadratic distance from A
to X, we mean the quantity
The Voronoi diagram of a finite set of points for a general quadratic distance is
thus an affine diagram, by theorem 18.2.2.
The diagram of M with additive weights is defined like the Voronoi diagram,
except that the distance used is not the Euclidean distance but the additive
distance defined above. This diagram is denoted by Vor+(M). An instance is
shown in figure 18.2.
Figure 18.2. A diagram with additive weights. Sites are the centers and their correspond-
ing weights are the radii of the circles. In this example, the diagram of the
points with additive weights is also the Voronoi diagram of the circles for the
Euclidean metric.
which has apex (C, −r), is symmetric to ψ(Σ) with respect to the hyperplane
xd+1 = 0, and has an aperture angle of π/4. The vertical projection IX of a point
X in Ed onto the cone C(Σ) is the image under ψ of the sphere centered at X and
tangent to Σ. The signed vertical distance from X to IX equals the additive
distance from X to C weighted by r.
To each sphere Σi, i = 1, ..., n, corresponds the cone C(Σi), also denoted by
Ci. It follows from the discussion above that the projection of the lower envelope
of the cones Ci onto Ed is exactly Vor+(M).
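A brute-force illustration of the additive distance used here (assuming, as in the discussion above, that the additive distance from X to a site Mi with weight ri is ‖XMi‖ − ri):

    from math import dist   # Python 3.8+

    def additive_cell(X, sites, weights):
        # The cell of Vor+(M) containing X belongs to the site minimizing
        # |X M_i| - r_i, i.e. the lowest cone above X.
        return min(range(len(sites)), key=lambda i: dist(X, sites[i]) - weights[i])

    sites = [(0.0, 0.0), (4.0, 0.0)]
    weights = [2.0, 0.5]
    print(additive_cell((2.5, 0.0), sites, weights))   # 0: the heavier site wins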
The set of points in Ed that are equidistant (with respect to the additive dis-
tance) from two points of M is thus the projection of the intersection of two
cones. This intersection is a quadric contained in a hyperplane. Indeed, we have
    (xd+1 + r1)² = XM1²  on C1,    (xd+1 + r2)² = XM2²  on C2.
The intersection of the two cones is contained in the hyperplane H12 whose equa-
tion is obtained by subtracting the two sides of the above equations:
    2(r1 − r2) xd+1 + r1² − r2² = XM1² − XM2².
This and theorem 18.2.2 show that there exists a correspondence between the di-
agram Vor+(M) and the power diagram of the spheres Σ'i in Ed+1 (i = 1, ..., n),
where Σ'i is centered at ψ(Σi) and has radius ri√2 (see figure 18.3). More pre-
cisely, the cell of Vor+(M) that corresponds to Mi is the projection of the inter-
section of the cone Ci with the cell of the power diagram corresponding to the
sphere Σ'i. Indeed, X is in Vor+(Mi) if and only if the projection Xi of X onto
Ci has a smaller xd+1-coordinate than the projections of X onto the other cones
Cj, j ≠ i. In other words, the coordinates (X, xd+1) of Xi must obey
    (xd+1 + ri)² = XMi²,
    (xd+1 + rj)² ≤ XMj²    for any j ≠ i,
and by subtracting both sides, it follows that Σ'i(Xi) ≤ Σ'j(Xi) for all j.
The additive diagram can be computed using the following algorithm:
Figure 18.3. Any Voronoi diagram for the additive distance can be derived from a power
diagram in Ed+1.
This result is optimal in odd dimensions, since the bounds above coincide with
the corresponding bounds for the Voronoi diagram of points under the Euclidean
distance. It is not optimal in dimension 2, however, as we now show. We also
conjecture that it is not optimal in any even dimension.
In the plane, we have seen that additive diagrams can be thought of as the
projection onto E2 of the lower envelope of cones with vertical axis and aperture
angle π/4. Therefore, each cell is connected. Moreover, the vertices of the diagram
are incident to exactly three edges, under the general position assumption, and
these edges are arcs of hyperbolas, each of which is the projection of the inter-
section of two cones. Euler's relation shows that the diagram has complexity
O(n). A perturbation argument shows that the general position assumption is
not restrictive, since allowing degeneracies only merges some vertices and makes
some edges disappear. In section 19.1, it is shown that such a diagram can be
computed in optimal time O(n log n).
Figure 18.4. A diagram with multiplicative weights. Sites are represented by small disks,
and the weight of a site is inversely proportional to the diameter of the disk.
The Voronoi diagram of M for the multiplicative distance is defined like the
Voronoi diagram, except that the distance is not the Euclidean distance but
rather the multiplicative distance. We denote this diagram by Vor*(M) (see
figure 18.4). Observe that a cell of the diagram need not be connected.
The set of points at equal multiplicative distance from two sites Mi and Mj is
a sphere Σij of equation
    pi (X − Mi)² = pj (X − Mj)².
Its polar hyperplane Hij with respect to the paraboloid P has equation
    Hij(X, xd+1) = (pi − pj) xd+1 − 2piMi · X + 2pjMj · X + piMi² − pjMj² = 0.
This result is optimal in odd dimensions, since in that case these bounds match
those of the Voronoi diagram of n points in Ed for the Euclidean distance. It is
also optimal in even dimensions (see exercise 18.4). Figure 18.5 shows a quadratic
multiplicative diagram in dimension 2.
18.4 The L1 and L∞ metrics
The L1 distance of a point X = (x1, ..., xd) in Ed from a point M = (m1, ..., md)
in Ed is defined as
    δ1(X, M) = |x1 − m1| + ··· + |xd − md|.
The points at a given distance r from M are thus on a polytope whose vertices
are given by their coordinates xi = mi ± r and xj = mj for j ≠ i, i = 1, ..., d.
In dimension 2 this polytope is a tilted square, and in dimension 3 it is a regular
octahedron (see figure 18.6). This polytope is dual to the cube and we call it a
co-cube. Henceforth, a co-cube always means a polytope dual to a cube whose
edges are parallel to the coordinate axes.
Let M = {M1, ..., Mn} be a set of n point sites in Ed. The Voronoi diagram
of M for the L1 distance is defined similarly to the Voronoi diagram, except that
the distance used in the definition of the cells is not the Euclidean distance but
the L1 distance. It is denoted by VorL1(M) (see figure 18.7).
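For illustration (a brute-force query, not an efficient structure), the cell of VorL1(M) containing a point X is determined by the site minimizing the L1 distance:

    def l1_nearest(X, sites):
        # delta_1(X, M) = sum_i |x_i - m_i|
        return min(range(len(sites)),
                   key=lambda i: sum(abs(x - m) for x, m in zip(X, sites[i])))

    sites = [(0, 0), (3, 1)]
    print(l1_nearest((2, 2), sites))   # 1: L1-distance 2, versus 4 to the first site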
We can define a facial structure for this diagram by using the equivalence rela-
tion R relating the points in Ed that have the same subset of nearest neighbors.
The equivalence classes of R subdivide the space Ed into open regions whose clo-
sures are called the faces of the diagram. The faces of the diagram are piecewise
affine.
If the points in Ed are identified with the hyperplane xd+1 = 0 in Ed+1, then,
in a way similar to what was explained in subsection 18.3.1, to each point Mi
there corresponds a pyramid Pi of equation
    xd+1 = δ1(X, Mi).
Let us consider the lower envelope of the Pi's, that is, the graph of the func-
tion min over 1 ≤ i ≤ n of δ1(X, Mi). The portion of the lower envelope that belongs to any
Pi projects onto the hyperplane xd+1 = 0 as the cell of the diagram VorL1(M)
that corresponds to Mi. The facets of the Pi's form a collection of d-pyramids.
The lower envelope of these pyramids is a collection of d-faces, and their lower-
dimensional faces include all the lower-dimensional faces of the lower envelope of
the Pi's. The vertical projections onto xd+1 = 0 of the d-faces of the lower en-
velope of the pyramids form a refinement of the faces of the diagram VorL1(M).
The complexity of the diagram VorL1(M) can thus be bounded by combining the-
orem 16.3.2 and exercise 16.1, which bound the complexity of the lower envelope
of n d-simplices in Ed+1. This yields
    |VorL1(M)| = O(n^d α(n)).
This bound is almost tight for certain sets of points that are not in general
position (see exercise 18.9). We conjecture, however, that for points in general
enough position this bound is not attained, and that these diagrams have the
same complexity as their Euclidean counterparts. Later on, we show that this is
indeed the case in dimension 2, for which we give a linear bound. It is also the
case in dimension 3 (see exercise 18.10). If d = 2, the bisector for the L1 distance
of two points is, in general, a polygonal line formed by three linear pieces; if the
line connecting the two points is parallel to one of the main bisectors, however,
Figure 18.8. Bisectors for the L1 distance. If the line connecting the two points is parallel
to one of the main bisectors, the bisector is not a polygonal line.
the L1 -bisector is no longer a polygonal line and contains two faces of dimension
2 (see figure 18.8).
We say that the points are in L1-general position if no two points are connected
by a line parallel to one of the main bisectors, and if no four points belong to
a common co-cube. In this case, the bisectors are polygonal lines formed of
three line segments, and VorL1(M) contains n connected cells: indeed, for any
i ∈ {1, ..., n}, the cell V1(Mi) that corresponds to a point Mi is star-shaped with
respect to Mi (meaning that if X ∈ V1(Mi), then the segment XMi is contained
in V1(Mi)), and is therefore connected. Moreover, each vertex of the diagram
is incident to two or three edges because of the L1-general position assumption.
The diagram VorL1(M) is therefore a planar map with n cells whose vertices have
degree two or three and whose edges consist of at most three segments. Euler's
relation then shows that the complexity of the diagram is O(n).
If the points are not in L1-general position, then some regions may correspond
to pairs of points (see figure 18.8) and some vertices may have degree higher than
3. This second complication can be straightened out by simply perturbing the
diagram so as to replace each vertex of degree k > 3 by a small polygonal chain
with k − 2 vertices of degree 3 and k − 3 edges. The number of faces does not
increase in the process, and the number of vertices increases by the same amount
as the number of edges; hence Euler's relation still guarantees that the complexity
of the diagram is O(n). The first complication, however, is more serious and may
allow the size to grow up to quadratic: exercise 18.9 presents such an example
and a way to avoid this problem. The example generalizes to higher dimensions,
and the lower bound Ω(n^d) may be shown to hold for the complexity of Voronoi
diagrams of n points in Ed for the L1 distance.
If the points are in L1-general position, the complexity of the diagram is thus
O(n) in dimension 2, and the algorithm that computes the lower envelope of n
triangles in space (see subsection 16.3.3) can be used to compute this diagram
in time O(n log² n) (see corollary 16.3.3). An optimal algorithm exists that com-
putes such a diagram in time O(n log n) (see exercise 19.2).
The situation for the L∞ distance is very similar to the one just described for
the L1 distance. Its complexity in dimensions higher than 3 is easier to analyze,
however. The L∞ distance of a point X = (x1, ..., xd) in Ed from a point
M = (m1, ..., md) in Ed is given by
    δ∞(X, M) = max(|x1 − m1|, ..., |xd − md|).
The points at a distance r from M are thus on a cube centered at M whose facets
are parallel to the coordinate axes, and whose side is 2r.
The Voronoi diagram of M for the L∞ distance is denoted by VorL∞(M). An
instance is shown in figure 18.9.
The cells of this diagram can be obtained by projecting onto the hyperplane
xd+1 = 0 in Ed+1 the cells on the lower envelope of the n pyramids Qi of equation
    xd+1 = δ∞(X, Mi).
The facets of the Qi's form a collection of d-pyramids. The faces on the lower
envelope of these pyramids form a refinement of the faces on the lower envelope
of the Qi's. Hence, the vertical projections onto the hyperplane xd+1 = 0 of the
faces on the lower envelope of these pyramids form a refinement of the faces of
the diagram VorL∞(M). The complexity of the Voronoi diagram VorL∞(M) is
thus bounded by the complexity of a lower envelope of n simplices in Ed+1:
    |VorL∞(M)| = O(n^d α(n)).
This bound is almost tight for certain sets of points that are not in general
position (see exercise 18.9). If the points are in so-called L∞-general position,
then it is possible to show that the complexity of Voronoi diagrams for the L∞
metric is the same as that of Euclidean Voronoi diagrams, namely O(n^⌈d/2⌉) (see
exercise 18.10). We show this for the case d = 2. When d = 2, VorL∞(M) can
be identified with the diagram VorL1(M) studied previously by simply rotating
the coordinate system by an angle of π/4. The points are in L∞-general position
if no two points are connected by a line parallel to one of the coordinate axes, and no
four points belong to a common cube whose facets are parallel to the coordinate
axes. If so, then the complexity of VorL∞(M) is O(n) in dimension 2, and the
algorithm described in subsection 16.3.3 that computes the lower envelope of
triangles can be used to compute this diagram in time O(n log² n) (see exercise
19.2 for a better algorithm).
The distances considered here, L1 and L∞, are particular cases of polyhedral
distances, so called because their unit ball is a polytope. Voronoi diagrams for
polyhedral distances are studied in exercise 19.3.
18.5 Voronoi diagrams in hyperbolic spaces
If we apply to spheres the mapping φ introduced in section 17.2, we map the
spheres in Ed to points in Ed+1. From the results of section 17.2, it follows that
the image under φ of a pencil F is the line φ(F) in Ed+1 that connects the points
φ(Σ1) and φ(Σ2).
We may distinguish between four kinds of pencils, according to whether the
line that is the image under φ of the pencil intersects the paraboloid in one point
(transversally), in two points, is tangent to P, or does not intersect P (see figure
18.10).
* If the line φ(F) intersects P in two points, F contains two spheres of radius
zero, called the limit points of the pencil.
* If the line φ(F) does not intersect P, there exists a family of hyperplanes
tangent to P that contain φ(F). Let φ(ΣF) be the set of points of P at
which these hyperplanes are tangent to P. Then φ(ΣF) is the image under
φ of the set ΣF of points that belong to all the spheres in the pencil F.
Coming back to the definition of a pencil, we have Σλ(X) = 0 for all values of
λ, and this implies that Σ1(X) = Σ2(X) = 0 and that ΣF can be identified
with the (d − 2)-sphere Σ1 ∩ Σ2. All the spheres in the pencil F intersect
along the (d − 2)-sphere obtained as the intersection of any two spheres in
the pencil. For this reason, ΣF is called the supporting sphere of the pencil.
The very definition of a pencil of spheres implies that any point in the radical
hyperplane H12 of two spheres Σ1 and Σ2 in the pencil has the same power with
respect to any sphere in the pencil. We may therefore define the radical hyperplane
of a pencil of spheres as the radical hyperplane of any two spheres in the pencil.
The radical hyperplane of a pencil supported by a sphere is the affine hull of the
supporting sphere. A concentric pencil has no radical hyperplane. The radical
hyperplane of a pencil with limit points is the perpendicular bisector of these two
points. The radical hyperplane of a tangent pencil is the hyperplane tangent to
all the spheres in the pencil.
The interested reader will find a more precise account in the classical references
on the topic ([22] for instance). To define the hyperbolic diagram, it suffices to
decide, given three points A, B, and C in Ed, whether B or C is closer to A.
For this, we consider the pencil FA of spheres with limit points A and A', where
A' denotes the point symmetric to A with respect to the hyperplane H0 of equation
xd = 0.
The Voronoi diagram of M for the hyperbolic distance, also called the hyper-
bolic diagram of M, is the subdivision of the Poincaré half-space induced by
the equivalence relation relating the points that have the same nearest neigh-
bors for the hyperbolic distance. The faces of the diagram are the closures of
the equivalence classes. The Vh(Mi)'s form the cells of the diagram (see figure
18.12).
Vh(Mi) is the set of points X ∈ Hd that have Mi as a nearest neighbor. Since Σ
is a sphere of the pencil FA, it follows that, for any point X in Vh(Mi), the interior
of the sphere in the pencil FX that passes through Mi contains no point of M.
We can also embed Hd into Ed+1 by identifying it with the half-hyperplane
xd+1 = 0, xd ≥ 0.
and they are all the hyperplanes that contain φ(A)* ∩ φ(B)*. H is thus a hyper-
plane polar to a sphere in the pencil FAB that has the two limit points A and B.
But H is the hyperplane polar to φ(ΣAB) (see lemma 17.2.2). As a result, ΣAB
belongs to FAB. Finally, ΣAB is the unique sphere in FAB that is centered on
H0.
2. A point X is equidistant from d + 1 points A0, ..., Ad if and only if φ(X)
is the projection of φ(A0)* ∩ ··· ∩ φ(Ad)*, parallel to the xd-axis, onto the half-paraboloid.
The point at equal hyperbolic distance from d + 1 points is the limit point of
the pencil that contains the sphere circumscribed to the d + 1 points and has H0
as radical hyperplane.
3. The hyperbolic Voronoi diagram can be obtained by projecting the poly-
tope V(M) = φ(M1)*⁺ ∩ ··· ∩ φ(Mn)*⁺, parallel to the xd-axis, onto the half-paraboloid, then
projecting the result vertically onto the hyperplane xd+1 = 0. Note that the
projection parallel to the xd-axis does not map all the points of V(M) onto the
half-paraboloid. This double projection establishes an injective correspondence
between the Euclidean and the hyperbolic Voronoi diagrams of M. More di-
rectly, these two projections can be avoided by performing the following single
transformation. Replace the planar (d − 1)-faces of the Euclidean diagram that
are (at least partly) contained in the half-space xd > 0 by the corresponding
portions of spheres (hyperbolic bisectors limited to xd > 0); a k-face (k < d − 1)
of the Euclidean diagram is the intersection of d − k + 1 planar (d − 1)-faces, and
is replaced by the portion of surface that is the intersection of the d − k + 1 corre-
sponding spherical faces. From the injective correspondence between Euclidean
and hyperbolic diagrams, we deduce the following theorem.
18.6 Exercises
Exercise 18.1 (Greatest empty rectangle) Let X and A be two points in E2. The
quadratic distance δQ(X, A) is defined as
    δQ(X, A) = (X − A) A (X − A)^t,
where A is the 2 × 2 matrix whose diagonal entries are 0 and whose off-diagonal entries are 1/2.
Show that δQ(X, A) is the area of the rectangle whose sides are parallel to the coordinate
axes and of which A and X are two opposite vertices. Given a set S of points in the
plane, show that its diagram for this quadratic distance function can be used to compute
the rectangle of greatest area whose sides are parallel to the coordinate axes, whose sides
each contain at least one point of S, and whose interior does not contain any point of S.
line. The greatest empty rectangle for which three points of contact lie on one side of
the separating line (and the fourth on the other side) can be found easily. Two points of
contact on one side of the separating line define a corner of the rectangle, and the corners
of empty rectangles are the so-called maxima and can be found in O(n log n) time. The
greatest empty rectangle with two points of contact on either side of the separating line
is defined by an opposite pair (A, B) of maxima such that the segment connecting them
is an edge of the affine diagram defined for the generalized quadratic distance δQ(A, B).
The complexity of the merge step is O(n log n), hence the total algorithm runs in time
O(n log² n).
Exercise 18.2 (Lower envelope of cones) Show that the lower envelope of n vertical
cones of revolution in Ed has complexity O(n^(⌊d/2⌋+1)) and can be computed in time
O(n^(⌊d/2⌋+1)). If the vertices of the cones are all contained in a given horizontal hyperplane,
and if their angles are all identical, then the complexity of the lower envelope drops to
O(n^⌊d/2⌋) and it can be computed in time O(n log n + n^⌊d/2⌋).
Exercise 18.3 (Spheres and disks) According to the general definition of Voronoi
diagrams, we may define the Voronoi diagram of a set of disks D1, ..., Dn as usual,
where the distance of a point X from a disk Di centered at Ci and of radius ri is defined
by
    δ(X, Di) = max(0, ‖XCi‖ − ri).
Show that the Voronoi diagram of n disks in Ed, where d ≥ 3, has complexity O(n^(⌊d/2⌋+1))
and that it can be computed in time O(n^(⌊d/2⌋+1)). If d = 2, show that these bounds are
O(n) and O(n log n) respectively.
Hint: To each Ci, give a weight ri and compute the diagram of the disks knowing the
additive diagram of their centers. In the discussion of subsection 18.3.1, the cone Ci,
i = 1, ..., n, must be replaced by the same cone truncated by the half-space xd+1 ≥ 0.
Hint: Use theorem 18.2.1. For hyperplane arrangements, the connection with zonotopes
is particularly helpful (see exercise 14.8).
456 Chapter 18. Non-Euclidean metrics
Exercise 18.6 (The inverse problem) Show that it is possible to determine whether
a complex is a Voronoi diagram and, if so, to compute the corresponding sites in time
linear in the total complexity of its cells.
Exercise 18.7 (Spider webs) By a spider web, we mean the 1-skeleton of a 2-complex
that covers E2 . Show that if the spider web is the skeleton of a power diagram, then we
can assign a tension to each edge such that each vertex is in an equilibrium state.
Hint: For the tension of an edge, take the length of the dual edge. An edge and its
dual edge are perpendicular, and the dual edges of the edges incident to a vertex S form
a cycle that we orient counter-clockwise. At a vertex S, the sum of the tensions equals
the sum of the vectors of the dual edges, so that the total tension vanishes at the vertices.
Exercise 18.8 (Cubes and co-cubes) Show that in E3, several homothetic cubes or
co-cubes may pass through four points even though these points are in L∞-general po-
sition.
Exercise 18.9 (Degenerate positions for the L1 and L∞ distances) Show that in E2,
the Voronoi diagram for the L1 metric of points that lie along one of the main bisectors
is quadratic. Show that if the bisector of two points on a line parallel to one of the main
bisectors is redefined as the Euclidean perpendicular bisector, then the complexity of
the diagram becomes linear, and a cell is formed by the set of points that share exactly
one common nearest neighbor for the L1 distance (but do not necessarily have the same
subset of nearest neighbors). Generalize the example above to show that Ω(n^d) is a lower
bound on the complexity of a Voronoi diagram of n points in Ed for the L1 metric. Also
give similar results for the L∞ metric.
Exercise 18.10 (Complexity of VorL∞) Show that the complexity of a Voronoi dia-
gram for the L∞ metric of a set M of n points in Ed in L∞-general position is O(n^⌈d/2⌉).
Exercise 18.12 (Hyperbolic bisector) Show that the equation of the hyperbolic bi-
sector ΣAB of two points A = (a1, ..., ad) and B = (b1, ..., bd) is
Exercise 18.13 (The Poincaré disk) Rather than using the Poincaré half-space H2
as a model of the hyperbolic space, we introduce the Poincaré disk D, which can be derived
from H2 by a homographic transformation. More precisely, if the Poincaré half-space is
identified with the complex half-plane {z ∈ C : Im z > 0}, the homographic map defined
by
    h(z) = (z − i) / (z + i)
is a bijection from H2 onto D. Show that the edge that joins two points remains a circle
centered on the boundary of D, and also that the points at equal distance from A are
on a circle that belongs to the pencil that has A as a limit point and that contains the
boundary of D. From this, explain how to compute the Voronoi diagram of a set of points
in H2.
Exercise 18.14 (Dual of a hyperbolic diagram) Show that we may dualize the hy-
perbolic Voronoi diagram of a set of points M in Hd by projecting the convex hull of
φ(M) parallel to the xd-axis onto the half-paraboloid, and then projecting the result of
this first projection onto the hyperplane xd+1 = 0. Show that this dual is in bijection
with a sub-complex of the Delaunay complex.
by Chew and Drysdale [61]. Voronoi diagrams for the L1 and L∞ metrics in dimensions
3 and higher (see exercise 18.10) and also simplicial distances (see exercise 18.11) are
treated by Boissonnat, Sharir, Tagansky, and Yvinec [34]. In the plane, Klein proposes
a notion of abstract Voronoi diagram [139], and Klein, Mehlhorn, and Meiser describe a
randomized algorithm that computes such diagrams [141].
Diagrams for the hyperbolic distance are studied by Boissonnat, Cérezo, Devillers, and
Teillaud [26], who present an application to shape reconstruction from plane sections.
Chapter 19
In the two preceding chapters, we have shown how to compute several types
of Voronoi diagrams in Ed by computing the upper envelope of hyperplanes in
Ed+l or Ed+ 2 . This often leads to optimal algorithms: this is notably true for
diagrams of points under a general quadratic distance and for power diagrams of
spheres. In contrast, we have seen that for diagrams with additive weights, such
an approach does not lead to optimal algorithms in dimension 2.
Section 19.1 describes an algorithm that computes the Voronoi diagram of a
set of points in the plane. This algorithm is remarkable for several features: it
uses the sweep method, it is simple and optimal, and it can be generalized in a
number of ways. For instance, it can be adapted to compute the Voronoi diagram
of a set of segments or to use other metrics such as the L1 or Loo distances.
Voronoi diagrams of line segments have important applications, such as the
motion planning of a disk (see subsection 19.2.5). In dimension 2, they are
well understood and we present, in addition to the generalization of the sweep
algorithm, a randomized algorithm and its accelerated version when the set of
segments is connected, and the segments intersect only at common vertices.
Section 19.3 studies an instance of the problem when the points belong to two
planes in E^3. This is a particular instance of a three-dimensional diagram for which
an algorithm is presented that is output-sensitive and optimal.
In E^3, we may place a cone C_i on top of each M_i: C_i is the upward vertical cone of
revolution of half-angle π/4 that has apex M_i; its equation is
z = ||XM_i||.
Then the Voronoi diagram of M is the projection onto the plane z = 0 of the
lower envelope of the cones C_i, i = 1, ..., n.
The algorithm computes this lower envelope. Rather than using a vertical
projection, however, we project parallel to a line that generates the cones. More
precisely, the plane is swept by a line parallel to the x-axis that moves in the
direction of increasing y, and the direction of the projection onto this plane is given by
the vector (0, 1, -1). In this way, we map a point X = (x, y) to the point
σ(X) = (x, y + min_{1≤i≤n} ||XM_i||).
This point is obtained by first lifting X onto the lower envelope of the C_i's and
then projecting the result onto the plane z = 0 parallel to the direction (0, 1, -1).
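The following short Python sketch (ours; it assumes the explicit form of σ written above, and its names are illustrative) computes σ and checks two facts used below: every site M_i is a fixed point of σ, and along a vertical line the ordinate of σ(X) does not decrease when y increases.

# Illustrative sketch of sigma(X) = (x, y + min_i ||X M_i||).
import math

def sigma(X, sites):
    x, y = X
    return (x, y + min(math.hypot(x - a, y - b) for (a, b) in sites))

sites = [(0.0, 0.0), (3.0, 1.0), (1.0, 4.0)]
assert all(sigma(M, sites) == M for M in sites)              # the sites are invariant
ys = [0.01 * i for i in range(500)]
images = [sigma((0.7, y), sites)[1] for y in ys]             # restriction to a vertical line
assert all(images[i] <= images[i + 1] for i in range(len(images) - 1))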
The map σ is depicted in figure 19.1. As usual, V(M_i) stands for the cell that
corresponds to M_i in the Voronoi diagram. Because the lower envelope is continuous,
the map σ is also continuous. Moreover, its restriction to a line D that
is parallel to the y-axis and that does not contain a point of M is injective. The
images of the points of D are also on D, and if X_1 = (x, y_1) and X_2 = (x, y_2)
are two points of D with y_1 < y_2, the triangle inequality shows that
y_1 + ||X_1M_i|| < y_2 + ||X_2M_i||,   i = 1, ..., n.
This shows that the n maps that return the value y + ||XM_i||, given the ordinate
y of a point X on D, are continuous functions that increase with y. Therefore the
minimum of these functions is also an increasing function, and so σ is injective on
D. Now if D contains a point M_i, the inequality above still holds if the ordinate
of X_2 is greater than that of M_i, and so σ is still injective. If both X_1 and X_2
have smaller ordinates than M_i, then
As happens when computing the arrangement of a set of line segments using the
sweep algorithm described in section 3.2, each time two arcs become consecutive
in D, we test whether they intersect beyond ∆ and, if so, we insert this intersection
point into Q. When deleting an arc, we must also remove from Q the two corresponding
entries defined by this arc. In fact, for each arc, Q needs to contain only one
entry, which stores the intersection point of smallest ordinate defined by this arc
(beyond this point, the arc cannot exist), and for each arc in D we maintain a
pointer to this entry. Each time an arc is considered, its pointer is updated, and
the entry in Q is removed when the arc disappears.
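In an implementation, this bookkeeping is conveniently realized by a priority queue with "lazy" deletions: each arc keeps a handle on its single entry of Q, and a cancelled entry is simply skipped when it reaches the top of the queue. A small Python sketch of this scheme (ours, based on a binary heap; the class and names are illustrative) follows.

# Illustrative event queue with handles and lazy deletion.
import heapq

class EventQueue:
    def __init__(self):
        self._heap = []
    def push(self, ordinate, payload):
        entry = [ordinate, payload, True]        # last field: entry still valid?
        heapq.heappush(self._heap, entry)
        return entry                             # handle stored with the arc
    def cancel(self, entry):
        entry[2] = False                         # the arc has disappeared
    def pop(self):
        while self._heap:
            ordinate, payload, valid = heapq.heappop(self._heap)
            if valid:
                return ordinate, payload
        return None

Q = EventQueue()
e1 = Q.push(2.0, "intersection defined by arcs a, b")
e2 = Q.push(1.0, "intersection defined by arcs b, c")
Q.cancel(e2)                                     # arc b was removed from D
assert Q.pop() == (2.0, "intersection defined by arcs a, b")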
We must point out that computing the hyperbolas is not needed. In fact,
hyperbolas are the images under σ of the perpendicular bisectors of two of the
sites, and their intersections are the images of the intersections of the two cor-
responding perpendicular bisectors. Location in D can be performed by using
the perpendicular bisectors rather than the hyperbolas. Location in Q can be
carried out by computing the ordinates of the images of the intersection of two
perpendicular bisectors. The Voronoi diagram is computed directly during the
sweep.
The complexity of the algorithm can be estimated very simply. The sweep line
stops over sites and vertices of Vor'(M) so the number of events processed by the
sweep algorithm is no more than the size of Vor'(M), namely O(n). During each
event, only O(1) locations, insertions, or deletions are performed in D and O(1)
events are added to or removed from Q. The sizes of D and Q are thus O(n) at
any event throughout the algorithm. That the size of D is O(n) can also be seen
by noticing that ∆ intersects any edge of Vor'(M) at most twice. Each update
operation (insertion, deletion, or location) can therefore be carried out in time
O(log n), for a grand total of O(n log n) operations. This is optimal as shown by
corollary 17.3.2, and this proves the following theorem.
Theorem 19.1.1 The Voronoi diagram of n points in the plane can be computed using a sweep algorithm in time O(n log n), using storage O(n), and this is optimal.
Its complexity is O(n). The aperture angles of the cones being identical, the
sweep algorithm performs in a way that is strictly analogous to the Euclidean
case. Only a few differences deserve to be pointed out.
First of all, the transformation σ introduced above (see figure 19.1) becomes
σ(X) = (x, y + min_{1≤i≤n} (||XM_i|| - r_i)),
and the M_i's are no longer invariant. Secondly, some sites may have an empty
corresponding region. This can be detected during the sweep: when a new site
M_i is encountered, it is contained in the region of a point M_{i'} of weight r_{i'} in the
additive diagram, and its region is non-empty if and only if its weight r_i satisfies
||M_iM_{i'}|| - r_{i'} > -r_i.
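This emptiness test is immediate to implement. The small Python sketch below is ours and purely illustrative; it checks the condition against every other site, which is equivalent to checking it against the nearest weighted site M_{i'} only.

# Illustrative test: the region of (M_i, r_i) is non-empty iff
# min over j != i of (||M_i M_j|| - r_j) > -r_i.
import math

def has_nonempty_region(i, sites):
    (xi, yi), ri = sites[i]
    nearest = min(math.hypot(xi - xj, yi - yj) - rj
                  for j, ((xj, yj), rj) in enumerate(sites) if j != i)
    return nearest > -ri

sites = [((0.0, 0.0), 3.0), ((1.0, 0.0), 0.5), ((10.0, 0.0), 1.0)]
print([has_nonempty_region(i, sites) for i in range(len(sites))])
# [True, False, True]: the second site is dominated by the first one.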
The remaining details of the algorithm are strictly analogous to those in the
Euclidean case, and hence:
Theorem 19.1.2 The Voronoi diagram of n points in the plane for the additive
distance can be computed in time O(n log n) by a sweep algorithm, and this is
optimal.
Lemma 19.2.1 Let V(S_i) be the cell of S_i in the diagram, X a point of V(S_i),
and X' the point of S_i closest to X. The ray originating at X' that contains X is
either entirely contained in V(S_i), or intersects V(S_i) along a line segment whose
endpoints are X' and the unique intersection point of the ray with the boundary
of V(S_i).
Proof. With the notation of the lemma, any point Y on the segment XX' is
closer to X' than to any other point of S, so that Y belongs to V(S_i). The
segment XX' is therefore contained in V(S_i), proving that V(S_i) is star-shaped.
The ray X'X thus intersects V(S_i) along a connected subset of the ray that
contains X', which can only be the ray itself or a line segment whose endpoints
are X' and a point Z on the boundary of V(S_i). It remains to see that, in the
latter case, this point Z is uniquely defined as the intersection of the ray with
the boundary of V(S_i) (this intersection could a priori contain several points, possibly
infinitely many).
For this we show that the relative interior of X'Z does not intersect the boundary
of V(S_i). Towards a contradiction, assume that there is a point Y in the
relative interior of the segment X'Z that is on the boundary of V(S_i). This implies
that Y is equidistant from S_i and another segment S_j of S. Let us denote by
Y' the point of S_j that is closest to Y. The disk centered at Z of radius ZX'
contains the disk centered at Y of radius YX', and only X' belongs to both their
boundaries. So the first disk must contain Y' in its interior. Therefore Z is closer
to Y', and hence to S_j, than to S_i, and so Z does not belong to V(S_i), a contradiction.
It follows that if the ray X'X intersects V(S_i) along a line segment, only the
endpoint of this segment other than X' belongs to the boundary of V(S_i), whereas
if the entire ray is contained in V(S_i), then no point of this ray belongs to the
boundary of V(S_i).
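The point X' of a segment closest to a given point X, used repeatedly in this proof and the next one, is easy to compute: project X orthogonally onto the supporting line and clamp the result onto the segment. A small Python helper (ours, for reference only):

# Closest point of the segment [P, Q] to the point X.
def closest_point_on_segment(X, P, Q):
    ux, uy = Q[0] - P[0], Q[1] - P[1]
    if ux == 0 and uy == 0:
        return P                                        # degenerate segment
    t = ((X[0] - P[0]) * ux + (X[1] - P[1]) * uy) / (ux * ux + uy * uy)
    t = max(0.0, min(1.0, t))                           # clamp onto the segment
    return (P[0] + t * ux, P[1] + t * uy)

print(closest_point_on_segment((2.0, 3.0), (0.0, 0.0), (4.0, 0.0)))   # (2.0, 0.0), an interior point
print(closest_point_on_segment((-1.0, 1.0), (0.0, 0.0), (4.0, 0.0)))  # (0.0, 0.0), an endpoint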
Lemma 19.2.2 The interior of any cell of the Voronoi diagram of S is simply
connected.
Proof. Let us first see that V(S_i) is connected. Considering two points
X and Y in V(S_i), denote by X' and Y' their closest points on S_i. It follows from
the previous lemma that the segments XX' and YY' are contained in V(S_i).
Obviously, S_i is contained in V(S_i), and X'Y' is also contained in V(S_i) since it
is a subset of S_i. Thus X and Y can be connected by the polygonal line XX'Y'Y,
which is entirely contained in V(S_i). Note that the interior of V(S_i) is also connected since
the segments are disjoint, so that S_i is contained in the interior of V(S_i).
Let us now show that the interior of V(S_i) is simply connected. Assuming the
contrary, there exists a point X that does not belong to V(S_i) but is contained
in the interior of a closed Jordan curve Γ that is entirely contained in V(S_i). If
X' is the point of S_i closest to X, the ray originating at X' that contains X
intersects Γ in at least two points (see exercise 11.1), one of which is farther from
X' than X is; call this point I. The interior of the disk centered at I and of
radius ||IX'|| does not intersect any segment of S, hence neither does the interior
of the disk centered at X of radius ||XX'||. So X is closer to S_i than to any other
segment of S, hence belongs to the interior of V(S_i), a contradiction. □
The edges of the diagram are formed by the points that are equidistant from
two segments and closer to these segments than to the others. An edge is thus
contained in the bisector of two segments, which is the set of points at equal
distance from two segments. In general, such a bisector can be split into several
components: line segments, contained in the perpendicular bisector of two end-
points or in the bisector of the lines supporting the two segments, and parabolic
arcs formed by the points at equal distance from an endpoint of one segment and
the line supporting the other segment. The following lemmas explore the nature
of these bisectors more precisely.
Lemma 19.2.3 The bisector of two disjoint line segments is a simple curve that
disconnects the plane into two connected components and that can be split into at
most seven line segments and parabolic arcs.
Proof. Consider two disjoint line segments S_1 and S_2 whose endpoints are A_1,
B_1 and A_2, B_2 respectively. At least one of these segments (say S_1) is contained
in one of the two half-planes bounded by the line that supports the other (S_2).
Each segment is identified with a flat rectangle oriented counter-clockwise, and
its four elements are conceptually separated: its endpoints A_i, B_i, and its two
oriented sides E_i = (A_i, B_i) and F_i = (B_i, A_i). Each arc of the bisector D_{12} is
the locus of the points at equal distance from two elements, one of S_1 and the other
of S_2, and closer to these than to any other element. The arcs of the bisector D_{12}
are labeled by the two corresponding elements.
When following the boundary of the region V(S_i) counter-clockwise, the labels
of the arcs of D_{12} are enumerated in counter-clockwise order for those that belong
to S_i, and in clockwise order for those that belong to S_j (i, j = 1, 2 and i ≠ j).
Indeed, consider two points X and Y on D_{12} that are labeled by two distinct
elements of S_i, say L_X and L_Y. Let X' be the point of L_X closest to X and let
Y' be the point of L_Y closest to Y. It is easy to see that the relative interiors of
XX' and YY' cannot intersect. Thus the labels of S_i are seen in counter-clockwise
order along the boundary of V(S_i) oriented counter-clockwise.
Let us construct the following oriented graph G (see figures 19.4 and 19.5). A
node of the graph represents a pair formed by an element of S_1 and an element
of S_2. There is an arc in G between the nodes N = (L_1, L_2) and N' = (L'_1, L_2)
if and only if L'_1 follows L_1 in counter-clockwise order along S_1. Similarly, there
is an arc in G between the nodes N = (L_1, L_2) and N' = (L_1, L'_2) if and only
if L'_2 follows L_2 in clockwise order along S_2. The graph G can be drawn on a
torus (the topological product of the boundaries of S_1 and S_2). The bisector of
S_1 and S_2 is represented by an oriented path in this graph. We distinguish two
cases according to whether the entire segment S_1 appears on the boundary of the
convex hull of S_1 and S_2, which is the convex hull of A_1, A_2, B_1, and B_2. (Note
that S_2 is always entirely contained in this boundary.)
Case 1. The boundary of the convex hull of S_1 and S_2 is the polygonal line A_1,
B_1, B_2, A_2, in clockwise order (see figure 19.4). The bisector of S_1 and S_2 has two
infinite branches supported by the perpendicular bisectors of A_1 and A_2, and of
B_1 and B_2. From the preceding discussion, it is clear that the bisector of S_1 and
S_2 corresponds to an oriented path in the graph G that connects the two nodes
(A_1, A_2) and (B_1, B_2). The number of arcs on this bisector equals the number of
nodes of such a path, namely five.
Case 2. Only one endpoint of S_1, say A_1, appears on the boundary of the convex
hull conv(S_1, S_2), which is the polygonal line A_1, B_2, A_2 in clockwise order (see
figure 19.5). The bisector of S_1 and S_2 has two infinite branches supported by
the perpendicular bisectors of A_1 and A_2, and of A_1 and B_2. The bisector of S_1
and S_2 corresponds to an oriented path in the graph G that connects the two
nodes (A_1, A_2) and (A_1, B_2). The number of arcs on this bisector is thus seven.
□
Lemma 19.2.4 Let S_1, S_2, and S_3 be three disjoint segments. Any two of the
three bisectors of these segments intersect in at most two points.
Proof. Consider three segments S_1, S_2, S_3, and denote by V(S_1), V(S_2), and
V(S_3) the three cells in the Voronoi diagram of the three segments. Let us assume
that two of the three bisectors, say D_{12} and D_{13}, have three points in common.
Then these points also belong to D_{23}: they are at the same distance from S_1 and
S_2, and from S_1 and S_3, so they are also at the same distance from S_2 and S_3.
This shows that the three bisectors have three points in common. Then V(S_1) has
at least three vertices I_1, I_2, and I_3, say, in clockwise order along its boundary,
each belonging to the intersection of V(S_1), V(S_2), and V(S_3). Two successive
edges of V(S_1) cannot belong to the same bisector, otherwise the cells V(S_1),
V(S_2), and V(S_3) would all contain the endpoint common to both edges, and
since they cover the plane entirely, one would not be simply connected, violating
lemma 19.2.2. The edge E_1 of V(S_1) that precedes I_1 and the edge E_3 of V(S_1)
that connects I_2 to I_3 both belong to the same bisector, say D_{12}. Similarly, the
edge E_2 of V(S_1) that connects I_1 to I_2 and the edge E_4 of V(S_1) that follows
I_3 both belong to the same bisector D_{13}. The situation is shown in figure 19.6.
Remark. The bisector of two segments may contain an arbitrarily high number
of edges of Vor(S).
Proof. Let S be a set of n segments. Lemma 19.2.2 assures us that the diagram
has exactly n connected cells. Moreover, each vertex of the diagram has degree
three when the segments are in L_2-general position. The Voronoi diagram of S is
thus a planar map with n cells and vertices of degree three. Lemma 19.2.3 implies
that each edge of the diagram consists of at most seven arcs, and Euler's relation
(applied after adding a vertex at infinity incident to all unbounded edges, so that
v - e + n = 2 while 2e ≥ 3v) shows that the numbers of vertices and edges of this
map are O(n). The complexity of the diagram, defined equivalently (within constant
factors) as the number of arcs, of edges, of vertices, or of faces, is thus O(n). It is
easy to see that the L_2-general position assumption does not matter for the result.
Indeed, the segments of S may be slightly perturbed so as to achieve L_2-general
position, and then perturbed back into their original positions. During this second
perturbation, no face is created; on the contrary, some vertices are merged and the
zero-length edges that join them disappear. The complexity of the diagram can thus
only decrease from that of a diagram of segments in L_2-general position. □
kind that must be inserted into the event priority queue. An event of the second
kind is an intersection point of two arcs of the transformed diagram. It can be
handled in very much the same way as a circle event for the sweep algorithm
that computes the diagram of a set of points. Processing the events of the third
kind is straightforward: we must simply change the description of the arc when
∆ sweeps over the endpoint of greatest ordinate of that arc, and also update the
event in the priority queue that corresponds to the disappearance of the edge
that contains that arc.
Theorem 19.2.6 The sweep algorithm described above computes the Voronoi
diagram of a set of n line segments in the plane in time O(n log n), and this is
optimal.
Figure 19.8. A region E, in bold, and its domain of influence D_E, shaded.
segments if and only if E is an edge of the Voronoi diagram of S' but not of the
diagram of any proper subset of S'. A segment of S' is called a determinant of E. A
region is determined by at most four segments: the two line segments S_1 and S_2
whose bisector contains E, and occasionally one or two segments S_3 and S_4 that
determine the endpoints of E.
To a region E corresponds a domain of influence D_E, which is the union of the
open disks centered on E that do not intersect the segments defining E (see figure
19.8). An object S and a region E conflict if S and the interior of D_E have a non-empty
intersection. The edges of the Voronoi diagram are precisely the regions
defined and without conflict over S, that is, the regions determined by segments
of S that do not conflict with any segment of S. The edges determined by a
segment S that are without conflict over S are the edges of the Voronoi cell V(S)
and all the edges incident to a vertex of V(S).
The algorithm proceeds by inserting the segments one by one. At an incre-
mental step, the Voronoi diagram of the current subset Sc of already inserted
segments is stored in the influence graph. Each edge of the current diagram is
a region without conflict over the current subset of segments, and has pointers
towards the segments that define it. Updating the diagram upon inserting a
segment S involves removing all the edges or portions of edges of Vor(S_c) that
are contained in the Voronoi cell V(S) of the new diagram Vor(S_c ∪ {S}), and
adding the new edges that are on the boundary of V(S). We note that the edges
of Vor(S_c) that intersect V(S) are the regions that conflict with S. The following
lemma will be very useful later on:
Lemma 19.2.7 The set A of edges or portions of edges of Vor(S_c) that are
contained in the cell V(S) of the new diagram Vor(S_c ∪ {S}) is connected.
Proof. Assume the contrary. Then there is a path Γ contained in V(S) that
connects two points X and Y on the boundary of V(S), does not intersect the
edges of A, and subdivides V(S) into two connected components that each contain
a connected component of A. Since Γ does not intersect the edges of A, it is
entirely contained in a single cell of Vor(S_c), say the cell V_c(R) that corresponds
to a segment R ∈ S_c. But then X and Y belong to the boundary of the cell V(R)
of R in the new diagram Vor(S_c ∪ {S}), so there must exist a path Γ' contained
in V(R) with endpoints X and Y. The union of Γ and Γ' is a simple closed curve
in V_c(R) whose interior contains a connected component of A. Thus V_c(R) is not
simply connected, which contradicts lemma 19.2.2. □
To find the conflicting edges rapidly, the algorithm also maintains an influence
graph. We may recall that the influence graph is a structure used for detecting
conflicts between the new segment and the regions defined and without conflict
over the current subset of segments S_c. The influence graph is an oriented acyclic
graph that contains a node for each region that was defined and without conflict
over the current subset at some previous incremental step. At each step of the
algorithm, the regions defined and without conflict over S_c are stored in the leaves
of the influence graph. The arcs of the influence graph connect two nodes in such
a way that a segment that conflicts with the region stored at a node also conflicts
with the region stored at one of this node's parents. Thus the influence domain
D_E of a region E stored at a node is contained in the union of the influence
domains D_{E_j} of the regions E_j stored at this node's parents.
The algorithm proceeds in two phases, first locating the edges or portions of
edges to be removed, and then updating the current Voronoi diagram and the
corresponding influence graph.
Locating. The location phase aims at retrieving all the leaves in the influence
graph that conflict with the new segment S inserted during the current incremen-
tal step. This can be achieved by traversing the influence graph from the root
only through the nodes that conflict with S.
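In generic form, this location phase is a graph traversal that expands only the nodes in conflict with the new object and reports the conflicting leaves. The Python sketch below is ours; the node representation and the conflict predicate are placeholders, not the book's data structure, and the toy example only illustrates the traversal pattern.

# Illustrative location in an influence graph: visit only conflicting nodes,
# return the conflicting leaves.
def locate_conflicts(root, conflicts):
    found, stack, seen = [], [root], set()
    while stack:
        node = stack.pop()
        if id(node) in seen or not conflicts(node["region"]):
            continue
        seen.add(id(node))
        if node["children"]:
            stack.extend(node["children"])
        else:
            found.append(node)                      # a current region in conflict
    return found

# Toy example: regions are intervals, a conflict is an overlap with (3.5, 4.5).
leaf1 = {"region": (0, 2), "children": []}
leaf2 = {"region": (2, 5), "children": []}
root = {"region": (0, 5), "children": [leaf1, leaf2]}
overlaps = lambda r: r[0] < 4.5 and r[1] > 3.5
print([n["region"] for n in locate_conflicts(root, overlaps)])      # [(2, 5)]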
Updating. The reader is referred to figure 19.9. All the regions that conflict
with S, found during the location phase, correspond to edges in the Voronoi
diagram that are modified or disappear in the new diagram. Consider such an
edge E, belonging to the bisector of two segments S_1 and S_2, that intersects
the region V(S) of the Voronoi diagram of S_c ∪ {S}. If E does not intersect
the boundary of V(S), then E disappears from the new diagram. Otherwise, the
intersection points of E with the boundary of V(S) are new vertices of the
Voronoi diagram: they are the vertices that are at equal distance from S_1, S_2,
and S. Owing to lemma 19.2.4, there are at most two of them for each edge E,
and they can be computed in constant time. The at most two portions of E that
belong to V(S) disappear from the new diagram while the at most two portions
outside V(S) become new edges. Each of these portions is a new region that
becomes a child of E in the influence graph.
It remains to compute the new edges that form the boundary of V(S) in the
new diagram. These edges have their endpoints at the vertices that we have just
computed, and each is contained in the bisector of S and of some already inserted
segment S_i. These edges are computed in counter-clockwise order along
the boundary of V(S) by the following operations. Let V be a new vertex that
belongs to an edge equidistant from S and S_1. V is the endpoint of a new
edge E to be created. From V, the next vertex V' of E is found by following
the edges of V(S_1) that conflict with S in the previous diagram. (Lemma 19.2.7
proves that these edges are connected.) V' is equidistant from S, S_1, and S_3. We
then create the new edge E; the corresponding node of the influence graph has for
parents all the edges visited between V and V'. Starting from V' and following
the boundary of V(S_3), we discover a new edge, and we repeat this procedure
until V is encountered again. Then all the edges on the boundary of V(S) have
been created, and the update phase is over.
It is easy to see that the update procedure described above creates all the
edges of the Voronoi diagram. It remains to show that it correctly updates the
influence graph. For this, we show that a segment that conflicts with a node of
the influence graph conflicts with at least one of its parents. Consider for instance
a new edge E and a segment L that conflicts with E (see figure 19.10). There
exists a maximal disk D centered on E whose interior does not intersect any of
the segments inserted, including S, but that intersects L. D is tangent² to S and
to another segment S'. Let D' be the disk tangent to S' at the same point as
D, maximal among those that cut S but no other segment in the current set of
segments S_c. Then D' contains D, and its center C' belongs to an edge E' of the
cell V(S') of the Voronoi diagram of S_c that conflicts with S. Thus L conflicts
with E'. Moreover E' intersects V(S) and C' belongs to V(S). It follows that
the portion of E' ∩ V(S) that contains C' was traversed during the update phase,
and so E received E' as a parent. This finishes the proof of the correctness of
the update phase.
To use the results of the analysis of randomized incremental algorithms, we
must verify the three clauses of the update condition 5.3.3: detecting a conflict is
performed in constant time (condition 1), the number of children of a node must
be bounded by a constant (condition 2), and the update phase can be performed
in time proportional to the number of conflicts between S and the edges of the
Voronoi diagram of S_c (condition 3).
² We say that a disk is tangent to a segment if their intersection consists of a single point. It
could either be tangent to the line that supports the segment or contain only one of its endpoints.
Condition 1 is clearly satisfied.
As we have seen, an edge of the current Voronoi diagram is split into at most
three pieces by the new region V(S): two pieces outside V(S) and one piece
inside, or two pieces inside and one piece outside, or one piece inside and one
piece outside. To a portion of E outside corresponds a new node in the influence
graph that is a child of E. A portion of E inside V(S) belongs to two cells in the
previous diagram. It is therefore traversed twice during the update phase and
two children are attached to E. The maximum number of children of E is thus
five, which shows that condition 2 is satisfied.
For condition 3, we note that the first part of the update phase requires only
constant time per conflicting edge. For the second part of the update phase,
during which the new edges are created, the incidence graph of the edges in the
Voronoi diagram is traversed, and each edge is visited only four times (since each
edge has at most two portions in V(S), each visited at most twice). This shows
that condition 3 is satisfied.
Finally, the results of theorem 5.3.4 apply, and we must estimate the maximum
number f_0(n, S) of edges defined and without conflict over a set of n segments.
This number is simply the number of edges in the Voronoi diagram, which is
O(n). Theorem 5.3.4 shows that the expected cost of inserting the n-th segment
is O(log n).
Theorem 19.2.8 The Voronoi diagram of n segments in the plane can be computed by a randomized on-line algorithm in expected time O(n log n). The expected cost of inserting the n-th segment is O(log n).
Figure 19.11. The two possible ways to define the Voronoi diagram of a set of connected
segments.
previous results to the case of non-intersecting segments that may share their
endpoints. The two diagrams differ, but one may be computed from the other in
linear time.
In this section, we show that the general technique of accelerated randomized
algorithms (see section 5.4) can be used to compute the Voronoi diagram of a con-
nected set S of n segments with disjoint relative interiors in time O(n log* n). The
same algorithm can be used to compute the Voronoi diagram of a simple polygon
P, which is the portion of the Voronoi diagram of the edges of P contained in
the interior of P. Such a diagram is depicted in figure 19.12.
The algorithm is essentially the same as that presented in the previous subsec-
tion. The main difference is that, at certain incremental steps in the construction,
the algorithm computes the conflict graph between the regions, i.e. edges of the
current Voronoi diagram, and the remaining objects, i.e. segments yet to be in-
serted. (We recall that the conflict graph is a bipartite graph that stores an arc
between a region and the remaining objects that conflict with it.) This graph is
used to speed up the subsequent locations. Section 5.4 explains the operations
for such algorithms in detail.
The general analysis presented in section 5.4 applies, if we can show how to
compute the conflict graph at step r in time O(n), meaning that we must compute
all the conflicts between the regions defined and without conflict over the subset
ℛ of segments inserted before or during step r and the segments of S \ ℛ. This
can be carried out by traversing the incidence graph of the segments of S. Pick
any segment of the sample, say S_0, and find all the edges of Vor(ℛ) that are
determined by S_0, in time O(n). Then traverse the incidence graph starting at
S_0, say in a depth-first fashion, and, for each segment that has not yet been
inserted, enumerate the edges of Vor(ℛ) with which it conflicts.
Denote by S_c the current segment in this traversal of the graph, and let S_p
be a segment incident to S_c that has already been visited. Either S_p belongs to
ℛ, or we already know the edges of Vor(ℛ) that conflict with S_p. If S_c belongs
to ℛ, the traversal proceeds to the next incident segment not yet visited (case
1). Otherwise, we seek an edge of Vor(ℛ) that conflicts with S_c. If S_p belongs
to ℛ (case 2a), we identify, among the edges of Vor(ℛ) determined by S_p (the
edges on the boundary of V(S_p)), an edge R that conflicts with S_c. If S_p does
not belong to ℛ (case 2b), among all the edges of Vor(ℛ) that conflict with S_p,
we look for an edge R that conflicts with S_c. In either case, it is easily checked
that we find such an edge R. The other edges of Vor(ℛ) that conflict with S_c
are connected by lemma 19.2.7, so they can be found by traversing the incidence
graph of the edges of Vor(ℛ), starting at R. At the end of the traversal, all
the conflicts between the segments of S \ ℛ and the edges of Vor(ℛ) have been
identified.
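Once a first conflicting edge R has been found for a segment, lemma 19.2.7 guarantees that all the edges of Vor(ℛ) in conflict with that segment form a connected set, so the remaining conflicts are collected by a simple flood traversal of the adjacency graph of the edges. A generic Python sketch (ours, over an abstract adjacency structure and conflict test; the toy data only illustrates the traversal):

# Illustrative flood traversal: collect the connected set of conflicting edges
# starting from one seed conflict.
def flood_conflicts(seed_edge, neighbors, conflicts):
    in_conflict, stack = {seed_edge}, [seed_edge]
    while stack:
        e = stack.pop()
        for f in neighbors[e]:
            if f not in in_conflict and conflicts(f):
                in_conflict.add(f)
                stack.append(f)
    return in_conflict

neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(sorted(flood_conflicts("a", neighbors, lambda e: e in {"a", "b", "c"})))   # ['a', 'b', 'c']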
The cost of processing cases of type 1 is bounded by r ≤ n. Each edge of Vor(ℛ)
is determined by at most four segments of ℛ, so it is examined at most four times
during the search for a first conflict in cases 2a. The cost of processing cases 2a is
thus O(r). The cost of finding the remaining conflicts or a first conflict in cases
2b is proportional to the number of conflicts detected. Indeed, each vertex of
same as its distance to X' defined above. It also equals the radius ρ(X) of the
greatest disk centered at X that is contained in E.
We say that an arc γ that connects two points A and B is an admissible path
if D, moving along this path, remains entirely inside E.
Proof. Let γ'' be the path obtained by applying v to all the points of γ. Then
γ'' connects v(A) to v(B) and is contained in the edges of the Voronoi diagram.
Moreover, we have ρ(X) ≤ ρ(v(X)), since ρ is non-decreasing on the segment
from X to v(X). Therefore D remains inside E when moving along γ'', showing
that γ'' is admissible. If A and B are admissible positions for D, then so are the
paths Av(A) and v(B)B. □
this time used in the plane P_i, shows that F is a face of Del(M_i). Conversely, if
F is a face of Del(M_i), then F can always be circumscribed by a sphere whose
interior contains no point of M: simply place the center of this sphere far away
from the plane P_j, j ≠ i. Theorem 17.3.4 then shows that F is a face of Del(M).
□
Notice that this result does not depend on the distance between the two planes,
but it is important that they are parallel for proving the second statement in the
lemma.
We now show how to compute Del(M) knowing Del(M_1) and Del(M_2). The
previous lemma shows that we already have the faces of Del(M) contained in P_1
and P_2. The following lemma characterizes the others.
Lemma 19.3.2 The faces of Del(M) that are not faces of Del(M_1) or Del(M_2)
are in one-to-one correspondence with the faces of the 2-complex obtained by projecting the
edges of Vor(M_1) onto Vor(M_2), orthogonally to P_1 and P_2.
Figure 19.14 shows a simple example. There are three points A_1, B_1, C_1 in P_1, and
three points A_2, B_2, C_2 in P_2. Vor(M_1) and Vor(M_2) each have
one vertex and three edges. Vor(M) contains a vertex corresponding to case 5,
namely the center of the sphere circumscribed to A_1B_1C_1A_2, a vertex corresponding
to case 6, namely the center of the sphere circumscribed to A_1A_2B_2C_2, and
three vertices corresponding to case 4, the centers of the spheres circumscribed
to A_1C_1A_2C_2, A_1C_1B_2C_2, and B_1C_1B_2C_2.
Figure 19.14. Construction of Vor(M) knowing Vor(M_1) and Vor(M_2). The triangulation
Del(M) consists of five tetrahedra S_1 = A_1B_1C_1A_2, S_2 = A_1A_2B_2C_2, S_3 =
B_1C_1B_2C_2, S_4 = A_1C_1B_2C_2, and S_5 = A_1C_1A_2C_2.
Lemma 19.3.2 gives a method for computing Vor(M) knowing Vor(M_1) and
Vor(M_2). In fact, it reduces the problem to that of computing the 2-complex
C that is the overlay of the orthogonal projection of Vor(M_1) onto P_2 and of
Vor(M_2). We may compute this complex by using any of the randomized algorithms
that compute the intersection of a set of line segments presented in
subsections 5.2.2 and 5.3.2, or a more sophisticated deterministic algorithm (see
the bibliographical notes of chapter 3). The complexity of these algorithms is
O(m log m + t) if m is the number of segments and t the number of intersection
points. Here, m = O(n) is the total number of edges of the planar Voronoi diagrams
Vor(M_1) and Vor(M_2), and t is the number of vertices of Vor(M). This
finishes the proof of the following theorem.
finishes the proof of the following theorem.
Theorem 19.3.3 Given a set M of n points that belong to two parallel planes,
we can compute its Voronoi diagram Vor(M) in time O(n log n + t), where t is
the number of tetrahedra of Del(M). This is optimal as a function of the input
and output sizes.
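The correspondence above ultimately rests on theorem 17.3.4: a tetrahedron belongs to Del(M) exactly when its circumscribed sphere contains no point of M in its interior. For small examples such as the one of figure 19.14, this predicate can be checked directly; the Python sketch below is ours, purely illustrative and subject to floating-point error, and computes the circumcenter of four non-coplanar points by Cramer's rule before testing the emptiness of the circumscribed sphere.

# Illustrative Delaunay test for a tetrahedron A, B, C, D against a point set M.
def circumcenter(A, B, C, D):
    # Solve 2 (P - A) . c = |P|^2 - |A|^2 for P in {B, C, D} (points assumed non-coplanar).
    rows = [[2.0 * (P[i] - A[i]) for i in range(3)] for P in (B, C, D)]
    rhs = [sum(P[i] ** 2 - A[i] ** 2 for i in range(3)) for P in (B, C, D)]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(rows)
    c = []
    for col in range(3):
        m = [row[:] for row in rows]
        for r in range(3):
            m[r][col] = rhs[r]
        c.append(det3(m) / d)
    return tuple(c)

def is_delaunay_tetrahedron(A, B, C, D, M, eps=1e-9):
    c = circumcenter(A, B, C, D)
    r2 = sum((A[i] - c[i]) ** 2 for i in range(3))
    return all(sum((P[i] - c[i]) ** 2 for i in range(3)) >= r2 - eps for P in M)

# Five points in two parallel planes z = 0 and z = 2.
M = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 4.0, 0.0), (1.0, 1.0, 2.0), (5.0, 5.0, 2.0)]
print(is_delaunay_tetrahedron(M[0], M[1], M[2], M[3], M))    # True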
Lemma 19.3.4 The faces of Del(M) that are not faces of Del(M_1) or Del(M_2)
can be put in one-to-one correspondence with the faces of the 2-complex C obtained
by projecting onto the plane y = 0, parallel to the y-axis, the edges of the polytopes
V(M_1) and V(M_2) in E^3.
Proof. The proof is analogous to that of lemma 19.3.2, so we only mention how
to construct the bijection between the 0-faces of C and the tetrahedra of Del(M).
A vertex of C is either the projection of a vertex of V(M_1) or V(M_2), or the
intersection of the projections of two edges of V(M_1) or V(M_2).
Case 1. Consider a vertex S_1 of V(M_1). S_1 is the image under φ of a circle
Σ_1 in P_1 that passes through three points A_1, B_1, C_1 of M_1, and the interior of
Σ_1 contains no point of M_1. Let F be the pencil of spheres of E^3 that intersect P_1
along Σ_1 (see figure 19.15).
This pencil of spheres intersects P_2 along a pencil of circles F_2 in P_2 whose
image under φ is the line φ(F_2) parallel to the y-axis that contains S_1. Indeed,
the radical axis of the circles of F_2 is just L = P_1 ∩ P_2, since P_1 is the radical plane
of F. The line that supports the centers of the circles of F_2 is thus orthogonal
to L, and so φ(F_2) is contained in a plane perpendicular to the x-axis. Since
O belongs to L, it has the same power with respect to all the circles of F_2, so
φ(F_2) is also contained in a plane perpendicular to the z-axis. Moreover, φ(F_2)
contains S_1, since the center of Σ_1 has the same abscissa as the centers of the
circles of the pencil F_2, and since the power of O with respect to any circle of
F_2 equals the power of O with respect to any sphere of F, and hence with respect
to Σ_1. This shows that φ(F_2) is the line parallel to the y-axis that contains S_1.
The line φ(F_2) intersects V(M_2) in exactly one point. This point is the image
under φ of the circle Σ_2 of F_2 that contains a point of M_2, say A_2, but contains
no point of M_2 in its interior. There exists a unique sphere that intersects P_1
along Σ_1 and P_2 along Σ_2, and this sphere belongs to F. This sphere passes
through the four points A_1, B_1, C_1 of M_1 and A_2 of M_2. Its interior contains
no point of M_1 or M_2. It follows from theorem 17.3.4 that A_1B_1C_1A_2 is a
tetrahedron of the Delaunay triangulation Del(M).
plane y = 0. (See also the remark preceding lemma 19.3.4.) The previous lemma
still holds if the complex C is restricted to the region R_1^+ ∩ R_2^+. It is also worth
observing that a vertex of V(M_i)^+ or V(M_i)^- that does not project onto R_j^+ or
R_j^- (j ≠ i) corresponds to a triangle of Del(M_i) that is not a face of Del(M).
A vertex of the resulting internal subdivision is in bijection with a tetrahedron
of the corresponding dihedron, meaning that the center of its circumscribed sphere belongs
to that dihedron. Proceeding similarly for all four dihedra retrieves the entire
triangulation Del(M).
This finishes the proof of the following theorem.
Theorem 19.3.5 Given a set M of n points that belong to two planes, we can
compute their Voronoi diagram Vor(M) in time O(n log n + t), if t is the size of
this diagram. This is optimal both in the size of the input and in the size of the
output.
The previous construction is the strict analogue of the construction for points
distributed in two parallel planes, when embedded in the space that represents
circles. The reader will also notice similarities with section 18.5 devoted to hy-
perbolic diagrams. Exercise 19.11 explains the reasons for these similarities.
19.4 Exercises
Exercise 19.1 (Divide-and-conquer) Adapt the divide-and-conquer method to com-
pute the Voronoi diagram of n points in the plane in optimal time O(n log n).
Exercise 19.3 (Polygonal distance) Let P be a polygon that contains the origin O.
We denote by λP the image of this polygon under the homothety centered at O and
of ratio λ. The polygonal distance δ_P(X, A) from point X to point A is defined as the
smallest real λ ≥ 0 such that X - A belongs to λP. Show that the Voronoi diagram of n
points under this polygonal distance has complexity O(np) if p is the number of vertices
of P, and that it may be computed in time O(np log(np)).
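For a convex polygon P containing O in its interior, the polygonal distance is easy to evaluate: writing n_e for the outward normal of an edge and v_e for a vertex of that edge, X - A belongs to λP exactly when n_e · (X - A) ≤ λ (n_e · v_e) for every edge, so δ_P(X, A) is the largest of these ratios. The Python sketch below is ours and covers only this convex case, with P given by its vertices in counter-clockwise order.

# Illustrative polygonal distance for a convex polygon containing the origin.
def polygonal_distance(X, A, P):
    dx, dy = X[0] - A[0], X[1] - A[1]
    lam = 0.0
    for k in range(len(P)):
        vx, vy = P[k]
        wx, wy = P[(k + 1) % len(P)]
        nx, ny = wy - vy, vx - wx                  # outward normal of edge (v, w)
        h = nx * vx + ny * vy                      # positive since O is inside P
        lam = max(lam, (nx * dx + ny * dy) / h)
    return lam

square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]   # P = unit square
print(polygonal_distance((3.0, -2.0), (0.0, 0.0), square))       # 3.0, the L_infinity distance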
Exercise 19.4 (Convex polygon) Show how to compute the Voronoi diagram of a
convex polygon in the plane with a randomized algorithm in expected time O(n).
Hint: Maintain the convex hull of the polygon while pulling off the vertices one by one
and in a random order. Then insert the points back into the polygon in the reverse
order, while maintaining the Voronoi diagram. During each insertion, we already know
a region (half-plane) that conflicts with the vertex to be inserted. The other conflicts
can be retrieved and the structure updated, without paying the logarithmic cost of the
location phase in the usual incremental algorithm.
Exercise 19.5 (Dual of the Voronoi diagram of segments) Consider a set of segments
in the plane, assumed to be in L_2-general position. A vertex of the Voronoi
diagram is incident to two arcs if it is contained in the relative interior of an edge,
or to three arcs if it is an endpoint of an edge. For each vertex incident to two arcs,
the maximal circle centered at this vertex that does not properly intersect the segments
is tangent to two segments, and we draw the edge that joins the two points of tangency.
Show that these edges partition the convex hull of the segments into O(n) regions which
are either triangles or trapezoids.
Exercise 19.10 (Segments in two planes) Show that the complexity of the Voronoi
diagram of a set of n segments distributed in two planes is t = O(n^2). Show how to
compute such a diagram in time O(n log n + t). Extend these results to polygonal regions
distributed in two planes.
Hint: Adapt the discussion in section 19.3 (for two non-parallel planes) using argu-
ments borrowed from section 18.5.
Exercise 19.12 (The case of several planes) Show how to compute the Delaunay
triangulation of a set M of n points in E^3 distributed in k given planes in time O(kn log n).
Hint: Compute the triangulation by a greedy method, finding the tetrahedra one
by one. If F is a facet that belongs to a single computed tetrahedron T, we seek the
tetrahedron T' that shares the facet F with T. Notice that the spheres circumscribed
to T and T' are the two greatest spheres in the pencil of spheres that contain the circle
circumscribed to F and whose interiors contain no points of M. This pencil intersects
each of the k planes along a pencil of circles. In each plane P_i, we seek the greatest circle
in the corresponding pencil whose interior contains no point of M_i = M ∩ P_i. This
problem reduces to finding the intersection between a line and a polytope in E^3, which
can be solved in logarithmic time (see exercise 9.6). Among all k candidates, pick the
one that gives the fourth vertex of T'.
[10] B. Aronov and M. Sharir. Castles in the air revisited. In Proc. 8th Ann.
ACM Symp. Comp. Geom., 146-156, 1992.
[30] J.-D. Boissonnat and K. Dobrindt. On-line construction of the upper en-
velope of triangles and surface patches in three dimensions. Comp. Geom.
Theory Appl., 5:303-320, 1996.
[38] K. Q. Brown. Voronoi diagrams from convex hulls. Inf. Proc. Lett., 9:223-
228, 1979.
[43] B. Chazelle. On the convex layers of a planar set. IEEE Trans. Inform.
Theory, IT-31:509-517, 1985.
[68] K. L. Clarkson. A Las Vegas algorithm for linear programming when the
dimension is small. In Proc. 29th Ann. IEEE Symp. Found. Comp. Sci.,
452-456, 1988.
[69] K. L. Clarkson, R. Cole, and R. E. Tarjan. Randomized parallel algorithms
for trapezoidal diagrams. Internat. J. Comp. Geom. Appl., 2(2):117-133,
1992.
[70] K. L. Clarkson, K. Mehlhorn, and R. Seidel. Four results on randomized
incremental constructions. Comp. Geom. Theory Appl., 3(4):185-212, 1993.
[71] K. L. Clarkson and P. W. Shor. Applications of random sampling in com-
putational geometry, II. Discrete Comp. Geom., 4:387-421, 1989.
[72] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algo-
rithms. The MIT Press, Cambridge, Mass., 1990.
[73] H. S. M. Coxeter. A classification of zonohedra by means of projective
diagrams. J. Math. Pures Appl., 41:137-156, 1962.
[74] H. Davenport and A. Schinzel. A combinatorial problem connected with
differential equations. Amer. J. Math., 87:684-689, 1965.
[75] M. de Berg. Ray Shooting, Depth Orders and Hidden Surface Removal,
volume 703 of Lecture Notes in Computer Science. Springer-Verlag, Berlin,
Germany, 1993.
[76] M. de Berg, K. Dobrindt, and O. Schwarzkopf. On lazy randomized in-
cremental construction. In Proc. 26th Ann. ACM Symp. Theory Comp.,
105-114, 1994.
[77] M. de Berg, L. J. Guibas, and D. Halperin. Vertical decompositions for
triangles in 3-space. Discrete Comp. Geom., 15:35-61, 1996.
[78] B. Delaunay. Sur la sphère vide. À la mémoire de Georges Voronoï. Izv.
Akad. Nauk SSSR, Otdelenie Matematicheskih i Estestvennyh Nauk, 7:793-
800, 1934.
[79] P. Desnogues. Triangulations et Quadriques. Thèse de doctorat en sciences,
Université de Nice-Sophia Antipolis, France, 1996.
[80] O. Devillers. Randomization yields simple O(n log* n) algorithms for diffi-
cult Ω(n) problems. Internat. J. Comp. Geom. Appl., 2(1):97-111, 1992.
[87] M. E. Dyer. Linear time algorithms for two- and three-variable linear pro-
grams. SIAM J. Comp., 13:31-45, 1984.
[97] H. Edelsbrunner, R. Seidel, and M. Sharir. On the zone theorem for hyper-
plane arrangements. SIAM J. Comp., 22(2):418-429, 1993.
[110] P. J. Giblin. Graphs, Surfaces and Homology. Chapman and Hall, London,
1977.
[112] R. L. Graham and F. F. Yao. Finding the convex hull of a simple polygon.
J. Algorithms, 4:324-331, 1983.
[119] D. Halperin and M. Sharir. Near-quadratic bounds for the motion planning
problem for a polygon in a polygonal environment. In Proc. 34th Ann. IEEE
Symp. Found. Comp. Sci. (FOCS 93), 382-391, 1993.
[120] D. Halperin and M. Sharir. New bounds for lower envelopes in three di-
mensions, with applications to visibility in terrains. Discrete Comp. Geom.,
12:313-326, 1994.
[121] D. Halperin and M. Sharir. Almost tight upper bounds for the single cell
and zone problems in three dimensions. In Proc. 10th Ann. ACM Symp.
Comp. Geom., 11-20, 1994.
[123] D. Haussler and E. Welzl. Epsilon-nets and simplex range queries. Discrete
Comp. Geom., 2:127-151, 1987.
[124] J. Hershberger. Finding the upper envelope of n line segments in O(n log n)
time. Inf. Proc. Lett., 33:169-174, 1989.
[129] H. Imai, M. Iri, and K. Murota. Voronoi diagrams in the Laguerre geometry
and its applications. SIAM J. Comp., 14:93-105, 1985.
[130] R. A. Jarvis. On the identification of the convex hull of a finite set of points
in the plane. Inf. Proc. Lett., 2:18-21, 1973.
[132] B. Joe. Delaunay versus max-min solid angle triangulations for three-
dimensional mesh generation. Internat. J. Num. Methods Eng., 31(5):987-
997, April 1991.
[154] J. Matoušek. Randomized optimal algorithm for slope selection. Inf. Proc.
Lett., 39:183-187, 1991.
[159] P. McMullen and G. C. Shephard. Convex Polytopes and the Upper Bound
Conjecture. Cambridge University Press, Cambridge, England, 1971.
[167] K. Mehlhorn, M. Sharir, and E. Welzl. Tail estimates for the space com-
plexity of randomized incremental algorithms. In Proc. 3rd ACM-SIAM
Symp. Discrete Algorithms, 89-93, 1992.
[168] K. Mehlhorn, M. Sharir, and E. Welzl. Tail estimates for the efficiency of
randomized incremental algorithms for line segment intersection. Comp.
Geom. Theory Appl., 3:235-246, 1993.
[169] A. Melkman. On-line construction of the convex hull of a simple polyline.
Inf. Proc. Lett., 25:11-12, 1987.
[185] J. Pach and M. Sharir. The upper envelope of piecewise linear functions and
the boundary of a region enclosed by convex plates: combinatorial analysis.
Discrete Comp. Geom., 4:291-309, 1989.
[188] M. Pocchiola and G. Vegter. Computing the visibility graph via pseudo-
triangulations. In Proc. 11th Ann. ACM Symp. Comp. Geom., 248-257,
1995.
[191] F. P. Preparata and S. J. Hong. Convex hulls of finite sets of points in two
and three dimensions. Commun. ACM, 20:87-93, 1977.
[194] S. Rippa and B. Schiff. Minimum energy triangulations for elliptic prob-
lems. Computer Methods in Applied Mechanics and Engineering, 84:257-
274, 1990.
[196] N. Sarnak and R. E. Tarjan. Planar point location using persistent search
trees. Commun. ACM, 29:669-679, 1986.
[201] R. Seidel. A convex hull algorithm optimal for point sets in even dimensions.
M.Sc. thesis, Dept. Comp. Sci., Univ. British Columbia, Vancouver, BC,
1981. Report 81/14.
[204] R. Seidel. A simple and fast incremental randomized algorithm for com-
puting trapezoidal decompositions and for triangulating polygons. Comp.
Geom. Theory Appl., 1:51-64, 1991.
[208] M. Sharir and E. Welzl. A combinatorial bound for linear programming and
related problems. In Proc. 9th Symp. Theoret. Aspects Comp. Sci., volume
577 of Lecture Notes in Computer Science, 569-579. Springer-Verlag, 1992.
[210] J. Stolfi. Oriented Projective Geometry. In Proc. 3rd Ann. ACM Symp.
Comp. Geom., 76-85, 1987.
[216] P. van Emde Boas, R. Kaas, and E. Zijlstra. Design and implementation
of an efficient priority queue. Math. Syst. Theory, 10:99-127, 1977.
This appendix offers a collection of the notational conventions used in the book.
Apart from a few exceptions, we have tried to abide by the following rules: Lower
case italic letters represent integers and lower case Greek letters represent real
numbers. Upper case letters (whether italic or Greek) represent elementary ge-
ometric objects (points, lines, etc.), and upper case script letters represent sets
thereof, geometric structures, or data structures. Bold upper case letters repre-
sent objects, sets, and structures in projective spaces.
Mathematical symbols
B quadric
C polytope, complex
D dictionary
TD(M) the Delaunay polytope, dual to V(M)
Del(M) the Delaunay complex
Dec(S) vertical decomposition of S
Dec8 (S) simplified vertical decomposition
E set of edges, envelope
F pencil of spheres
F(S) set of regions defined over a set S of objects
F_0(S) set of regions defined and without conflict
over a set S of objects
F_{i,j}(S) set of regions defined by i objects and with j conflicts
over a set S of objects
F_{≤k}(S) set of regions defined and with at most k conflicts
over a set S of objects
g graph
horizon graph
set of hyperplanes
set of indices
list, lower envelope
M set of points
M_d moment curve in E^d
P polytope, polyhedron, paraboloid
P* polytope dual to a polytope P
Pow(S) power diagram
Q priority queue, quadric
R sample of size r
S set of objects, of segments, of sites, of spheres
T triangulation
T (M) triangulation of a set M of points
U universe
V(M) Voronoi polytope of a set M
Vor (M) Voronoi diagram of a set M
Vor_k(M) Voronoi diagram of order k of a set M
Vor_+(M) Voronoi diagram with additive weights
Vor_*(M) Voronoi diagram with multiplicative weights
Vor_{L1}(M) Voronoi diagram for the L_1 norm
Vor_{L∞}(M) Voronoi diagram for the L_∞ norm