Antologie - The Science of Fractal Images
Heinz-Otto Peitgen
Institut für Dynamische Systeme, Universität Bremen, D-2800 Bremen 33, Federal Republic of Germany, and
Department of Mathematics, University of California, Santa Cruz, CA 95064, USA
Dietmar Saupe
Institut für Dynamische Systeme, Universität Bremen, D-2800 Bremen 33, Federal Republic of Germany
The cover picture shows a fractal combining the two main topics of this book: deterministic and random fractals. The deterministic part is given in the form of the potential surface in a neighborhood of the Mandelbrot set. The sky is generated using random fractals. The image was produced by H. Jürgens, H.-O. Peitgen and D. Saupe.
The back cover images are: Black Forest in Winter (top left, M. Barnsley, F. Jacquin, A. Malassenet, A. Sloan, L. Reuter); Distance estimates at boundary of Mandelbrot set (top right, H. Jürgens, H.-O. Peitgen, D. Saupe); Floating island of Gulliver's Travels (center left, R. Voss); Foggy fractally cratered landscape (center right, R. Voss); Fractal landscaping (bottom, B. Mandelbrot, K. Musgrave).
© 1988 by Springer-Verlag New York Inc; the copyright of all artwork and images remains with the individual authors.
Softcover reprint of the hardcover 1st edition 1988
All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer-Verlag, 175
Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form
of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed
is forbidden.
The use of general descriptive names, trade names, trademarks, etc. in this publication, even if the former are not especially identified, is not to be taken
as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.
This book was prepared with LaTeX on Macintosh computers and reproduced by Springer-Verlag from camera-ready copy supplied by the editors.
TeX is a trademark of the American Mathematical Society. Macintosh is a trademark of Apple Computer, Inc.
Preface
This book is based on notes for the course Fractals: Introduction, Basics and Perspectives given by Michael F. Barnsley, Robert L. Devaney, Heinz-Otto Peitgen, Dietmar Saupe and Richard F. Voss. The course was chaired by Heinz-Otto Peitgen and was part of the SIGGRAPH '87 (Anaheim, California) course program. Though the five chapters of this book have emerged from those courses, we have tried to make this book a coherent and uniformly styled presentation as much as possible. It is the first book which discusses fractals solely from the point of view of computer graphics. Though fundamental concepts and algorithms are not introduced and discussed with mathematical rigor, we have made a serious attempt to justify and motivate them wherever it appeared to be desirable. Basic algorithms are typically presented in pseudo-code or a description so close to code that a reader who is familiar with elementary computer graphics should have no problem getting started.
Mandelbrot’s fractal geometry provides both a description and a mathematical model for many of the
seemingly complex forms and patterns in nature and the sciences. Fractals have blossomed enormously in the past
few years and have helped reconnect pure mathematics research with both natural sciences and computing.
Computer graphics has played an essential role both in the development of fractal geometry and in its rapidly growing popularity.
Conversely, fractal geometry now plays an important role in the rendering, modelling and animation of natural
phenomena and fantastic shapes in computer graphics.
We are proud and grateful that Benoit B. Mandelbrot agreed to write a detailed foreword for our book. In these
beautiful notes the Father of Fractals shares with us some of the computer graphical history of fractals.
The five chapters of our book cover:
• an introduction to the basic axioms of fractals and their applications in the natural sciences,
• a survey of random fractals together with many pseudo codes for selected algorithms,
• an introduction to fantastic fractals, such as the Mandelbrot set, Julia sets and various chaotic attractors, together with a detailed discussion of algorithms,
• fractal modelling of real world objects.
Chapters 1 and 2 are devoted to random fractals. While Chapter 1 also gives an introduction to the basic
concepts and the scientific potential of fractals, Chapter 2 is essentially devoted to algorithms and their
mathematical background. Chapters 3, 4 and 5 deal with deterministic fractals and develop a dynamical systems point of view. The first part of Chapter 3 serves as an introduction to Chapters 4 and 5, and also describes some links to recent chaos theory.
The Appendix of our book has four parts. In Appendix A Benoit B. Mandelbrot contributes some of his brand new ideas for creating random fractals which are directed towards the simulation of landscapes, including mountains and rivers. In Appendix B we present a collection of magnificent photographs created and introduced by Michael McGuire, who works in the tradition of Ansel Adams. The other two appendices were added at the last minute. In Appendix C Dietmar Saupe provides a short introduction to rewriting systems, which are used for the modelling of branching patterns of plants and the drawing of classic fractal curves. These are topics which are otherwise not covered in this book but certainly have their place in the computer graphics of fractals. The final Appendix D by
Yuval Fisher from Cornell University shares with us the fundamentals of a new algorithm for the Mandelbrot set which is very efficient and therefore has the potential to become popular for PC-based experiments.
Almost throughout the book we provide selected pseudo codes for the most fundamental algorithms being developed and discussed, some for beginning and others for advanced readers. These codes are intended to illustrate the methods and to help with a first implementation; therefore they are not optimized for speed.
The center of the book displays 39 color plates which exemplify the potential of the algorithms discussed in
the book. They are referred to in the text as Plate followed by a single number N. Color plate captions are found on
the pages immediately preceding and following the color work. There we also describe the front and back cover
images of the book. All black and white figures are listed as Figure N.M. Here N refers to the chapter number and
M is a running number within the chapter.
After our first publication in Scientific American, August 1985, the Mandelbrot set has become one of the
brightest stars of amateur mathematics. Since then we have received numerous mailings from enthusiasts around
the world.
We have reproduced some of the most beautiful experiments (using MSetDEM(), see Chapter 4) on pages 20 and
306. These were suggested by David Brooks and Daniel N. Kalikow, Framingham, Massachusetts.
Bremen, March 1988
Heinz-Otto Peitgen and Dietmar Saupe
Acknowledgements
Michael F. Barnsley acknowledges collaboration with Laurie Reuter and Alan D. Sloan, both from Georgia
Institute of Technology. Robert L. Devaney thanks Chris Frazier from the University of Bremen for putting his
black and white figures in final form. Chris Frazier also produced Figures 0.2, 5.1, 5.2 and 5.17. Heinz-Otto Peitgen acknowledges collaboration with Hartmut Jürgens and Dietmar Saupe, University of Bremen, and thanks Chris Frazier for some of the black and white images. Dietmar Saupe thanks Richard F. Voss for sharing his expertise on
random fractals. Besides the authors several people contributed to the color plates of this book, which is gratefully
acknowledged. A detailed list of credits is given in the Color Plate Section.
Michael F. Barnsley. *1946 in Folkestone (England). Ph. D. in Theoretical Chemistry, University of Wisconsin (Madison) 1972.
Professor of Mathematics, Georgia Institute of Technology, Atlanta, since 1983. Formerly at University of Wisconsin (Madison),
University of Bradford (England), Centre d'Études Nucléaires de Saclay (Paris). Founding officer of Iterated Systems, Inc.
Robert L. Devaney. *1948 in Lawrence, Mass. (USA). Ph. D. at the University of California, Berkeley, 1973. Professor of Mathematics, Boston University, since 1980. Formerly at Northwestern University and Tufts University. Research interests: complex dynamics, Hamiltonian systems.
Yuval Fisher. *1962 in Israel. 1984 B.S. in Mathematics and Physics, University of California, Irvine. 1986 M.S. in Computer Science,
Cornell University. 1988 Ph. D. in Mathematics (expected), Cornell University.
Benoit B. Mandelbrot. *1924 in Warsaw (Poland). Moved to Paris in 1936, to USA in 1958. Diploma 1947, École Polytechnique; D. Sc. 1952, Paris; Dr. Sc. (h. c.) Syracuse, Laurentian, Boston, SUNY. 1974 I.B.M. Fellow at Thomas J. Watson Research Center and
1987 Abraham Robinson Adjunct Professor of Mathematical Science, Yale University. Barnard Medal 1985, Franklin Medal 1986.
Member of the American Academy of Arts and Sciences and of the U.S. National Academy of Sciences.
Michael McGuire. *1945 in Ballarat, Victoria (Australia). Ph. D. in Physics, University of Washington (Seattle), 1974. He has worked
in the field of atomic frequency standards at the University of Mainz, Germany, NASA Goddard Space Flight Center, and Hewlett-Packard Laboratories.
Heinz-Otto Peitgen. *1945 in Bruch (Germany). Dr. rer. nat. 1973, Habilitation 1976,
both from the University of Bonn. Research on nonlinear analysis and dynamical
systems. 1977 Professor of Mathematics at the University of Bremen and since 1985
also Professor of Mathematics at the University of California at Santa Cruz. Visiting
Professor in Belgium, Italy, Mexico and USA.
Dietmar Saupe. *1954 in Bremen (Germany). Dr. rer. nat. 1982 at the University of
Bremen. Visiting Assistant Professor of Mathematics at the University of California,
Santa Cruz, 1985-87 and since 1987 at the University of Bremen. There he is a
researcher at the Dynamical Systems Graphics Laboratory. Research interests:
mathematical computer graphics and experimental mathematics.
Richard F. Voss. *1948 in St. Paul, Minnesota (USA). 1970 B. S. in Physics from M.I.T. 1975 Ph. D. in Physics from U. C. Berkeley. 1975-present: Research Staff Member at the I.B.M. Thomas J. Watson Research Laboratory in Yorktown Heights, NY. Research in condensed matter physics.
Contents

Color plates and captions  114

Benoit B. Mandelbrot
A.1 Non-Gaussian and non-random variants of midpoint displacement  244
A.1.1 Midpoint displacement constructions for the paraboloids  244
A.1.2 Midpoint displacement and systematic fractals: The Takagi fractal curve, its kin, and the related surfaces  246
A.1.3 Random midpoint displacements with a sharply non-Gaussian displacements' distribution  248
A.2 Random landscapes without creases
A.2.1 A classification of subdivision schemes: One may displace the midpoints of either frame wires or of tiles  250
A.2.2 Context independence and the "creased" texture  251
A.2.3 A new algorithm using triangular tile midpoint displacement
A.2.4 A new algorithm using hexagonal tile midpoint displacement
A.3 Random landscape built on prescribed river networks  255
A.3.1 Building on a non-random map made of straight rivers and watersheds, with square drainage basins
A.3.2 Building on the non-random map shown on the top of Plate 73 of "The Fractal Geometry of Nature"  258
Bibliography
Index  307
Foreword
Benoit B. Mandelbrot
It is a delight to watch Heinz-Otto Peitgen get together with several of our mutual friends, and tell the world the
secrets of drawing fractals on the computer. A pedant would of course proclaim that the very first publication in
each branch of fractals had immediately revealed every secret that matters. So let me rephrase what was said: this
book’s goal is to tell the world how to draw the basic fractals without painfully rediscovering what is already
known.
The book needs no foreword, but being asked to provide one, without limitation of space, has unleashed a
flood of recollections about some Men and some Ideas involved in the Science of Fractal Images, including both
Art for Art’s sake and Art for the sake of Science. A few of these recollections may even qualify as history, or
perhaps only as what the French call la petite histoire. As some readers may already know, for me history is forever
part of the present.
Perhaps as a reward for holding this belief, the very fact of writing down for this book my recollections concerning fractal forgery of landscapes has made me actually unhappy again about a feature of all past fractal forgeries, that they fail to combine relief with rivers. Eventually, we did something about this defect, as well as about other features of the subdivision forgeries described in the body of this book. The new directions are sketched in Appendix A and were added to this book at the last minute.

0.1 The prehistory of some fractals-to-be: Poincaré, Fricke, Klein and Escher
To begin, while fractal geometry dates from 1975, it is important in many ways to know that a number of shapes
now counted as fractals have been known for a long time. But surprisingly few had actually been drawn before the
computer era. Most were self-similar or self-affine and represent the artless work of the draftsmen on the payroll of
science publishers. Also, there are renditions of physical and simulated Brownian motion in the book by Jean
Perrin, Les Atomes, and William Feller's Introduction to Probability. These renditions have helped me dream in fruitful ways (as told in my 1982 book The Fractal Geometry of Nature [68], p. 240), but they are not beautiful. Fractals-to-be occur in the work of Fatou and Julia circa 1918, but they led to no illustration in their time. However, Poincaré's even earlier works circa 1890 do include many sketches, and two very different nice
stories are linked with illustrations that appeared shortly afterwards, in the classic book titled Vorlesungen über die Theorie der automorphen Funktionen [43], which Fricke & Klein published in 1897. This book's text and its preface are by the hand of Robert Fricke, but (see p. vi) the great Felix Klein, "a teacher and dear friend,"
seems to have graciously consented to having his name added on the title page. The illustrations became even more
famous than the text. They have been endlessly reproduced in books on mathematics, and for the better or for the
worse have affected the intuition of countless mathematicians.
A tenacious legend claims that students in industrial drawing at the Technische Hochschule in Braunschweig, where Fricke was teaching mathematics, drew these figures as an assignment, or perhaps even as an exam. Unkind words have been written about some of the results. In fact, I have done my share in detailing the defects of those which claim to represent the fractal-to-be limit sets of certain Kleinian groups (leading some to wonder which of Fricke's students should be failed posthumously). These dubious figures were drawn with the help of the original algorithm of Poincaré, which is very slow, too slow even for the computer. However, my paper [70] in The Mathematical Intelligencer in 1983 has given an explicit and quick new algorithm for constructing such limit sets, as the complements of certain "sigma-discs", and has compared Fricke's Figure 156 with the actual shape drawn by a computer program using the new algorithm. The comparison is summarized in The Fractal Geometry of Nature, page 179. As was to be expected, the actual shape is by far the more detailed and refined of the two, but
this is not all: against all expectations, it is not necessarily perceived as being more complicated. I feel it is more harmonious, and can be comprehended as a whole, therefore it is perceived as far simpler than the clumsy old pictures. However, a famous mathematician (15 years my senior) has expressed dismay at seeing the still vibrant foundation of his intuition knocked down by a mere machine.

Fig. 0.1: Circle Limits IV by M.C. Escher, © 1988 M.C. Escher c/o Cordon Art - Baarn - Holland
Of wider popular interest by far are Fricke’s drawings of “hyperbolic tessellations”, the reason being that they
have become widely popular behind diverse embellishments due to the pen of Maurits C. Escher, as seen, for
example, in the book The World of M.C. Escher [33]. Many people immediately perceive some "obvious but hard to describe" connection between Escher and fractals, and it is good to know that these tessellations are indeed closely related to fractals. In fact, they were knowingly triggered by Poincaré, as is well documented by H.S.M. Coxeter in his Leonardo [22] paper of 1979. Having seen some of Escher's early work, this well-known geometer wrote to
him and received the following answer: “Did I ever thank you... ? I was so pleased with this booklet and proud of
the two reproductions of my plane patterns!... Though the text of your article [in Trans. Royal Soc. Canada, 1957]
is much too learned for a simple, self-made plane pattern-man like me, some of the illustrations ... gave me quite a shock. ... Since a long time I am interested in patterns with "motives" getting smaller and smaller till they reach the limit of infinite smallness... but I was never able to make a pattern in which each "blot" is getting smaller gradually
from a center towards the outside circle-limit, as [you] show.... I tried to find out how this figure was geometrically
constructed, but I succeeded only in finding the centers and radii of the largest inner-circles. If you could give me a
simple explanation..., I should be immensely pleased and very thankful to you! Are there other systems besides this
one to reach a circle-limit? Nevertheless,... I used your model for a large woodcut". This was his picture 'Circle Limit I', concerning which he wrote on another occasion: "This woodcut Circle Limit I, being a first attempt,
displays all sorts of shortcomings”.
In his reply, Coxeter told Escher of the infinitely many patterns which tessellated a Euclidean or non-Euclidean
plane by black and white triangles. Escher’s sketch-books show that he diligently pursued these ideas before
completing Circle Limits II, III, IV. He wrote: “In the coloured woodcut Circle Limit III most of the defects [of
Circle Limit I] have been eliminated". In his Magic Mirror of M.C. Escher (1976), Bruno Ernst wrote: "best of the
four is Circle Limit III, dated 1959... In addition to arcs placed at right angles to the circumference (as they ought to
be), there are also some arcs that are not so placed”. [Now going back to Coxeter], “In fact all the white arcs
‘ought’ to cut the circumference at the same angle, namely 80° (which they do, with remarkable accuracy). Thus
Escher’s work, based on his intuition, without any computation, is perfect, even though his poetic description of it
was only approximate”.
The reader is encouraged to read Coxeter’s paper beyond these brief quotes, but an important lesson remains.
As already stated, the Coxeter pictures which made Escher adopt the style for which he became famous, hence
eventually affected the esthetics of many of our contemporaries, were not the pure creation of an artist’s mind.
They came straight from Fricke & Klein, they were largely inspired by Henri Poincare, and they belong to the same
geometric universe as fractals. Note also that the preceding story is one of only two in this paper to involve a
person who had been professionally trained as an artist.
The first steps of the development of a systematic fractal geometry, including its graphic aspects, were taken at the
IBM T.J. Watson Research Center, or wherever I happened to be visiting from this IBM base. The next task,
therefore, in historical sequence, is to reminisce about the IBM fractals project.
This project has always been an example of very small science, in fact it had reduced to myself for the first ten
of my thirty years at IBM. Since then, it has in principle included one full-time programmer; actually, there were
short overlaps and long periods with no programmer. The assistants of J.M. Berger (whom I had "borrowed" in 1962), as well as my project's first assistant, Hirsh Lewitan, were "career" IBM employees, but all the others were recent graduates or even students on short contract. Here is a complete chronological list of those who stayed for over a few weeks: G.B. Lichtenberger (part-time), M.S. Taqqu, J.L. Oneto, S.W. Handelman, M.R. Laff, P. Moldave (part-time), D.M. McKenna, J.A. Given, E. Hironaka, L. Seiter, F. Guder, R. Gagne and K. Musgrave. The grant of
an IBM Fellowship in 1974 also brought a half-time secretary: H.C. Dietrich, then J.T. Riznychok, and later V. Singh, and today L. Vasta is my full-time secretary.
R.F. Voss has been since 1975 an invaluable friend, and (as I shall tell momentarily) a close companion when he was not busy with his low-temperature physics. The mathematicians J. Peyrière, J. Hawkes and V.A. Norton, and the meteorologist S. Lovejoy (intermittently) have been post-doctoral visitors for a year or two each, and two "IBM'ers", the hydrologist J.R. Wallis and the linguist F.J. Damerau, have spent short periods as de facto inter-project visitors. As for equipment, beyond slide projectors, terminals and P.C.'s and (of course) a good but not lavish allotment of computer cycles, my project has owned one high-quality film recorder since 1983. Naturally, a few IBM colleagues outside of my project have also on occasion briefly worked on fractals.
These very short lists are worth detailing, because of inquiries that started coming in early in 1986, when it was
asserted in print, with no intent to praise, that “IBM has spent on fractals a perceptible proportion of its whole
research budget”. The alumni of the project are surprised, but endlessly proud, that the bizarre perception that
fractals ever became big science at IBM should be so widely accepted in good faith. But the almost threadbare truth
is even more interesting to many observers of today’s scientific scene. To accept it, and to find it deserving
gratitude, was the price paid for academic freedom from academia.
The shortness of these lists spanning twenty years of the thirty since I joined IBM also explains my boundless
gratitude for those few people.
My next and very pleasant task is to tell how I met the co-authors of this book, and some other people who matter
for the story of the Science of Fractal Images.
During the spring of 1975, Richard F. Voss was hopping across the USA in search of the right job. He was
soon to become Dr. Voss, on the basis of a Berkeley dissertation whose contents ranged from electronics to music,
without ever having to leave the study of a widespread physical phenomenon (totally baffling then, and almost
equally baffling today), called 1/f-noise. Other aspects of this noise, all involving fractals, were favorites of mine since 1963, and my book Les objets fractals, which was to be issued in June 1975, was to contain primitive fractal mountains based on a generalization of 1/f-noise from curves to surfaces. One of the more striking parts of Voss's thesis concerned (composed) music, which he discovered had many facets involving 1/f-noises. He had even based a micro-cantata on the historical record of Nile river discharges, a topic dear to my heart.
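The spectral synthesis view of such noises can be sketched briefly (an illustration only, not the authors' algorithm; the function name and parameter choices here are mine): draw white Gaussian Fourier coefficients, scale them so that power falls off like 1/f^β, and transform back to the time domain. β = 1 gives the classic 1/f case.

```python
import numpy as np

def one_over_f_noise(n, beta=1.0, seed=0):
    """Approximate a 1/f^beta noise sample of length n by spectral synthesis.

    White Gaussian Fourier coefficients are scaled so that the power
    spectrum falls off like 1/f^beta, then transformed back to the
    time domain.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)            # non-negative frequencies
    amplitude = np.zeros_like(freqs)
    amplitude[1:] = freqs[1:] ** (-beta / 2.0)   # amplitude ~ f^(-beta/2), power ~ 1/f^beta
    coeffs = amplitude * (rng.normal(size=freqs.size)
                          + 1j * rng.normal(size=freqs.size))
    return np.fft.irfft(coeffs, n=n)             # real-valued time series

noise = one_over_f_noise(1024, beta=1.0)
```

Setting β = 2 gives Brownian-motion-like traces; intermediate values interpolate between white noise and Brownian motion, the family behind the generalization from curves to surfaces mentioned above.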
Therefore, Voss and I spoke after his job-hunting talk at IBM Yorktown, and I offered a deal: come here and
let us play together; something really nice is bound to come out. He did join the Yorktown low-temperature group
and we soon became close co-workers and friends. Contrary to what is widely taken for granted, he never joined
my tiny project, and he has spent the bulk of his time on experimental physics. Nevertheless, his contribution to
fractals came at a critical juncture, and it has been absolutely essential. First, we talked about writing a book on 1/f-noise, but this project never took off (and no one else has carried it out, to my knowledge). Indeed, each time he dropped by to work together, he found me involved with something very different, namely, translating and revising Les objets fractals. The end result came out in 1977 as Fractals, and preparing it kept raising graphics problems. Voss ceaselessly inquired about what Sig Handelman and I were doing, and kept asking whether we would consider better ways, and then he found a sure way of obtaining our full attention.
He conjured a computer graphics system where none was supposed to exist, and brought along pictures of fractals
that were way above what we had been dealing with until then. They appeared in Fractals, which is why the
foreword describes him as the co-author of the pictures in that book.
Color came late at Yorktown, where it seems we fractalists continued to be the only ones to use demanding
graphics in our work. We first used color in my next book, the 1982 Fractal Geometry of Nature. In late 1981, the
text was already in the press, but the color pictures had not yet been delivered to the publishers. The film recorder
we were using was ours on a short lease, and this fact and everything else was conspiring to make us rush, but I
fought back. Since "the desire is boundless and the act a slave to limit" ([68], p. 38), I fought hardest for the sake
of the Fractal Planetrise on the book’s jacket. It was soon refined to what (by the standards of 1981) was
perfection, but this was not enough. Just another day’s work, or another week’s, I pleaded, and we shall achieve
something that would not need any further improvement, that would not have to be touched up again when
“graphic lo-fi” will go away, to be replaced by “graphic hi-fi”. To our delight, this fitted Voss just fine.
Fractal illustrations had started as wholly utilitarian, the perceived beauty of the old ones by Jean-Louis Oneto and Sig Handelman being an unexpected and unearned bonus. But by 1981 their beauty had matured and it
deserved respect, even from us hard scientists, and it deserved a gift of our time. Many people have, since those
days, showed me their fractal pictures by the hundreds, but I would have been happier in most cases with fewer
carefully worked out ones.
Everyone experiences wonder at Voss’s pictures, and “to see [them] is to believe [in fractal geometry]”.
Specialists also wonder how these pictures were done, because, without ever drawing specific attention to the fact,
Voss has repeatedly conjured technical tricks that were equivalent to computer graphics procedures that did not
officially develop until much later. This brings to mind a philosophical remark.
Watching Voss the computer artist and Voss the physicist at work for many years had kept reminding me of
the need for a fruitful stress between the social and the private aspects of being a scientist. The only civilized way
of being a scientist is to engage in the process of doing science primarily for one's private pleasure. To derive
pleasure from the public results of this process is a much more common and entirely different matter. The well-
known danger is that, while dilettare means to delight in Italian, its derivative dilettante is a term of contempt.
While not a few individually profess to be serious scientists, yet motivated primarily by personal enjoyment of their
work, very few could provide what I view as the only acceptable evidence of “serious dilettantism”. This is a
willingness and perhaps even a compulsion to leave significant portions of one’s best work unpublished or
unheralded — knowing full well that one could claim credit for them. This may be easiest for the scientific giants;
Lars Onsager was a legend on this account. On the other hand, every scientist has been the active or the passive
witness of episodes when one could not or would not work in a field without becoming thoroughly uncivilized. The
true test, therefore, arises when civilized behavior is neither easy nor impossible. On these and other stringent
grounds, I view Dick Voss (as graphics expert and as physicist) as being one of the most civilized serious scientists
in my wide acquaintance.
What about films? We were ill-equipped to produce them, having only an exhausted film recorder (a tube-based Stromberg-Carlson 4020) at our disposal. In 1972, however, with Hirsh Lewitan, we did prepare a clip on the
creation of fractal galaxy clusters, using the Seeded Universe method. Then, in 1975, with Sig Handelman, we
added a clip in which the landscape to be later used as Plate 271 of The Fractal Geometry of Nature emerged
slowly from the deep, then rotated majestically (or at least very slowly), and finally slipped back under water.
Spontaneously, everyone called this the Flood sequence. By a fluke, the highest altitude was achieved on two
distinct points, and a programming flaw stopped the Flood when these points were still visible. Delighted, I
indulged in commenting that my fractal model of relief had predicted that there were two tips to Mount Ararat, not
one,... until an auditor straight from Armenia reported very drily that this fact was well-known to everyone in his
country. The Galaxy Clustering and the Mount Ararat sequences, taken together, were nicknamed
Competing with the Good Lord on Sunday. They soon came to look out-of-date and pitifully primitive, but now
they are of historical interest,... valuable antiques.
The Flood, and a recent animation of one of Voss's data bases, done by R. Greenberg Associates, both moved
around a landscape, without zooming.
But what about the “real Hollywood” ? “It” realized immediately the potential of Voss’s landscape illustrations in
my 1977 book, and soon introduced variants of these fractals into its films. This brought a lovely reenactment of
the old and always new story of the Beauty and the Beast, since it is taken for granted that films are less about
Brains than about Beauty, and since the few brainy mathematicians who had known about individual fractals-to-be
had taken for granted (until my books) that these were but Monsters,... Beastly. The pillars of “our geographically
extended Hollywood” were Alain Fournier, Don Fussell and Loren Carpenter. Early in 1980, John W. van Ness, a
co-author of mine in 1968 who had moved to the University of Texas at Dallas, asked me to comment on the draft
of his student Fournier’s Ph.D. dissertation. Fournier and Fussell had written earlier to us asking for the IBM
programs to generate fractal mountains, but we did not want to deal with lawyers for the sake of programs that
were not documented, and were too intimately linked to one set of computers to be readily transported anywhere
else. Therefore, Fournier and Fussell went their own way, and soon hit upon an alternative method that promised
computations drastically faster than those of Voss.
Precisely the same alternative was hit upon at the same time by Loren Carpenter, then at Boeing Aircraft, soon
afterwards to move to Lucasfilm, and now at Pixar. In his own words in The College Mathematics Journal for
March 1984,
“I went out and bought The Fractal Geometry of Nature as soon as I read Martin Gardner’s original column on
the subject in Scientific American. I have gone through it with a magnifying glass two or three times. I found that it
was inspirational more than anything else. What I got out of it myself was the notion that “Hey, these things are all
over, and if I can find a reasonable mathematical model for making pictures, I can make pictures of all the things
fractals are found in. That is why I was quite excited about it....
“The method I use is recursive subdivision, and it has a lot of advantages for the applications that we are dealing
with here; that is, extreme perspective, dynamic motion, local control — if I want to put a house over here, I can do
it. The subdivision process involves a recursive breaking-up of large triangles into smaller triangles. We can adjust
the fineness of the precision that we use. For example, in “Star Trek", the images were not computed to as fine a
resolution as possible because it is an animated sequence and things are going by quickly. You can see little
triangles if you look carefully, but most people never saw them.
“Mandelbrot and others who have studied these sorts of processes mathematically have long been aware that
there are recursive approximations to them, but the idea of actually using recursive approximations to make
pictures, a computer graphics-type application, as far as we know first occurred to myself and Fournier and
Fussell, in 1979....
“One of the major problems with fractals in synthetic imagery is the control problem. They tend to get out of
hand. They will go random all over the place on you. If you want to keep a good tight fist on it and make it
look like what you want it to look like, it requires quite a bit of tinkering and experience to get it right. There
are not many people around who know how to do it."
While still at Boeing, Carpenter became famous in computer graphics circles for making a short fractal film,
Vol Libre, and he was called to Lucasfilm to take a leading role in the preparation of the full feature Star Trek II:
The Wrath of Khan. Several computer-generated sequences of this film involve fractal landscapes, and have also
become classics in the core computer graphics community. The best known is the Genesis planet transformation
sequence. A different company, Digital Productions, later included massive fractal landscapes in The Last
Starfighter, which I saw — without hearing it — in an airplane. I had seen Star Trek in a suburban movie-house
(since I had gone there on duty, my stub was reimbursed). An associate had seen it on a previous day, and had
reported that it was too bad that the fractal parts had been cut (adding as consolation that it was known that they
always cut out the best parts in the suburbs). Of course, my wife and I immediately saw where the fractal portion
started, and we marveled: If someone less durably immersed than the two of us in these matters could be fooled so
easily, what about people at large?
Later, interviewed for the summer 1985 issue of La lettre de l’image, Carpenter described the severe cost
constraints imposed by his work: “We cannot afford to spend twice as much money to improve the quality of the
pictures by 2%”. We would hate to be asked to attach a numerical % to quality improvement, but computer costs do
keep decreasing precipitously, and we hope that future feature films using fractals will be cheap while pleasing
even to the crankiest among mathematicians.
This Beauty and the Beast episode was most enjoyable, but also drew us into a few scrapes, long emptied of
bitterness, but instructive. We were disappointed that the endless credits of the films never included the word
fractal, nor our names. The excuse was that everyone who mattered knew, so there was no need to say anything.
Besides, lawyers feared that, if mentioned, we would have been put in a position to sue for a part of the cake. The
world at large does not believe that scientists are resigned to the fact that their best work — the principles of
mathematics and the laws of nature — cannot be patented, copyrighted, or otherwise protected. All that the
scientists can expect is to be paid in the coin of public — not private — praise.
Later on, we greeted with amusement Alvy Ray Smith’s term “graftal”. The differences from “fractal” were
hardly sufficient to justify this proprietary variation on my coinage.
Fournier, Fussell and Carpenter are not represented in this book. It is a pity that we did not come to know them
better. They have hardly ever written to us, even at a time when we could have helped, and would have loved to do
so, and anyhow would have liked to follow their work as it evolved.
Our scrapes with "our Hollywood" have led to a variety of mutually contradictory impressions. Some people came
to believe that the fractal landscapes of Fournier, Fussell and Carpenter are, somehow, not "true fractals". Of course
they are fractals, just as true as the Koch curve itself. Other people believe that I begrudge credit for "recursive
subdivision" in order to claim "midpoint displacement" — which is the same thing under a different term — for
myself. Actually, as the French used to be taught in high school geometry, the basic credit for the procedure itself
(but of course not for fractals) belongs to someone well beyond personal ambition, namely to Archimedes (287-212
BC). The antiquity of the reference is a source of amusement and wonder, but rest assured that his work is amply
documented. One of his great achievements was to evaluate the area between a parabola and a chord AB, and many
writers view his argument as the first documented step toward calculus. Write the parabola’s equation as y = P(x) = a - bx², with directions chosen so that b > 0. Given the chord’s endpoints { x_A, P(x_A) = a - bx_A² } and { x_B, P(x_B) = a - bx_B² }, Archimedes interpolates P(x) recursively to values of x that form an increasingly tight dyadic grid. In a first step, observe that

P((x_A + x_B)/2) - [P(x_A) + P(x_B)]/2 = b(x_B - x_A)²/4 = δ

(by definition of δ). Thus, the first stage of interpolation requires an upward displacement of δ to be applied to the midpoint of the original chord, replacing it by two shorter ones. In the second stage of interpolation, the counterpart of x_B - x_A is twice smaller, hence this stage requires an upward displacement of the midpoint of each sub-chord, by the amount equal to 4^(-1)δ. Etc.... The k-th stage requires an upward displacement of the midpoints of 2^(k-1) chords by the amount equal to 4^(1-k)δ. Of course, the idea that the parabola has an equation was not known until Descartes devised analytic geometry. However, an ingenious ad-hoc argument had allowed Archimedes to derive the above rule of upward displacements equal to 4^(1-k)δ.
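Archimedes’ rule is easy to check numerically. The sketch below is an illustrative reconstruction (the function name and parameters are ours, not from the text): it interpolates y = a - bx² by displacing chord midpoints upward by δ, δ/4, δ/16, ..., and every generated point lands exactly on the parabola. Replacing these deterministic offsets by suitably scaled random ones is precisely the midpoint displacement idea behind fractal mountains.

```python
# Archimedes' midpoint displacement for the parabola y = a - b*x^2.
# Stage k (counting from 0) displaces the midpoint of each current
# chord upward by delta / 4^k, where delta = b*((xB - xA)/2)^2.

def midpoint_displace(xA, xB, a, b, stages):
    P = lambda x: a - b * x * x
    pts = [(xA, P(xA)), (xB, P(xB))]          # endpoints of the chord
    delta = b * ((xB - xA) / 2.0) ** 2        # first-stage displacement
    for k in range(stages):
        new_pts = [pts[0]]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            xm = (x0 + x1) / 2.0
            ym = (y0 + y1) / 2.0 + delta / 4 ** k   # upward displacement
            new_pts += [(xm, ym), (x1, y1)]
        pts = new_pts
    return pts

# Five stages of displacement reconstruct the parabola exactly:
pts = midpoint_displace(-1.0, 1.0, a=1.0, b=1.0, stages=5)
assert all(abs(y - (1.0 - x * x)) < 1e-12 for x, y in pts)
```

The deterministic offsets make the construction converge to a smooth curve; the fractal variants discussed in this book keep the same recursion but draw each displacement at random, with a variance that shrinks by a fixed ratio per stage.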
The algorithm Voss used to generate fractal mountains extends to clouds, as described in his contribution to this
book. The resulting graphics are stunning, but happen not to provide an adequate fit to the real clouds in the sky.
This is the conclusion we had to draw from the work of Shaun Lovejoy.
Lovejoy, then a meteorology student in the Physics Department at McGill University in Montreal, wrote to me,
enclosing a huge draft of his thesis. The first half, concerned with radar observation, was not controversial and
sufficed to fulfill all the requirements. But the second half, devoted to the task of injecting fractals in meteorology,
was being subjected to very rough weather by some referees, and he was looking for help. My feeling was that this
work showed very great promise, but needed time to “ripen”. (I was reminded of my own Ph.D. thesis, which had
been hurried to completion in 1952; I was in a rush to take a post-doctoral position, a reason that soon ceased to
appear compelling). Hence, my recommendation to Lovejoy was that he should first obtain his sheepskin on the
basis of his non-controversial work, then work with me as a post-doctoral student. We argued that he must not
leave in his publications too many points that the unavoidable unfriendly critics could latch on to.
I liked best Shaun’s area-perimeter diagram, drawn according to fractal precepts in my 1977 book, which
suggested that the perimeters of the vertical projections of clouds (as seen from zenith, for example from a satellite)
are of fractal dimension about 4/3. A paper reduced to this diagram and a detailed caption appeared in Science in
1982, and immediately became famous. A second of many parts of Lovejoy’s thesis required far more work, and
finally I pitched in. The clouds in our joint paper, which came out (years later) in Tellus for 1985 (see [62]), do not
seem to have yet been surpassed. By then, Lovejoy had drifted away from us. He had grown impatient with my
refusal to reopen old fights that had been won to an acceptable degree, and by my deliberate preference for seeking
“soft acceptance”, with controversy only when it is unavoidable, as opposed to “hard acceptance”, with unforgiving
victims.
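Lovejoy’s area-perimeter method exploits the scaling P ∝ (√A)^D across a family of self-similar shapes: D is twice the slope of log P against log A. The sketch below is ours, not Lovejoy’s code, and the test data are hypothetical smooth shapes (circles), for which the method must return D = 1.

```python
import math

def area_perimeter_dimension(areas, perimeters):
    """Estimate D from P ~ (sqrt(A))^D: twice the least-squares
    slope of log(perimeter) against log(area)."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(p) for p in perimeters]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2 * slope

# Circles of different radii: P = 2*pi*r, A = pi*r^2, so D must be 1.
radii = [1.0, 2.0, 5.0, 10.0]
D = area_perimeter_dimension([math.pi * r * r for r in radii],
                             [2 * math.pi * r for r in radii])
# fractal cloud perimeters instead give a slope corresponding to D > 1
```

Smooth Euclidean outlines all fall on the slope-1/2 line; fractal perimeters, which lengthen as the measuring scale shrinks, lie on a steeper line, and the excess slope is the signature Lovejoy read off his diagram.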
To landscape painters, clouds seem to pose a severe challenge, but one has achieved fame for his prowess. His
name was Salomon van Ruysdael (1602-1670), and he brings to mind a question and a story. The question is
whether fractal geometry can help compare the clouds by Ruysdael and those by Mother Nature. The story does not
really belong here, because it does not involve the computer as artist’s tool, but let me go ahead. Elizabeth Carter
was an undergraduate in meteorology at the University of California at Los Angeles (UCLA), in the group of
Professor George L. Siscoe. Her hobby is photographing clouds, and she had found a nice way of getting academic
credit for it. They evaluated the fractal dimension for many varied clouds’ contours (as seen from a nearly
horizontal direction, which is not the same thing as Lovejoy’s views from the zenith). They found that Nature’s
clouds’ D varies over an unexpectedly broad range, while Ruysdael’s clouds’ D is far more tightly bunched. In
hindsight, the result was as expected: the painter chose to paint clouds that are dramatic, yet not impossible, hence
his clouds’ D’s are near Nature’s maximum.
Before moving into nonlinear fractals, it seemed logical to me, as manager of a tiny fractals group, to perform a few
preliminary tests without perturbing the ongoing programs. This is how a Princeton senior, Peter Oppenheimer,
came to work with us for a few weeks. Later he wrote his senior thesis on fractals, and eventually he moved to the
New York Institute of Technology on Long Island, and became an expert on fractal botany. Today he encounters
competition from Przemyslaw Prusinkiewicz.
Drawing non-random fractal trees is comparatively easy, and there are several in The Fractal Geometry of
Nature. But drawing random fractal trees that are not of unrealistic “sparseness” presents a major difficulty,
because branches must not overlap. Suppose that a random tree is to be constructed recursively. One cannot add a
branch, or even the tiniest twig, without considering the Euclidean neighborhood where the additions will be
attached. However, points that are close by according to Euclidean distance may be far away according to the graph
distance taken along the branches. Therefore, a random recursive construction of a tree, going from trunk to
branches and on to twigs, is by necessity a global process. One may be drawn to seeking a construction by self-contemplation, or by finding a better way of obeying the constraints imposed by one’s computer.
By contrast, space appears forgiving, more precisely offers a near-irresistible temptation to cheat. Indeed, show
a shape described as a tree’s projection on a plane, and challenge our mind to imagine a spatial tree having such a
projection. Even when the original spatial branches happen to intersect or to become entangled, our mind will
readily disentangle them, and see them as a tree.
Now back to planar trees, and to ways of drawing them without worrying about self-intersection. A completely
natural method was devised by Tom Witten and Leonard Sander. It came about in what we think is the best
possible way, not during a search for special effects, but during a search for scientific understanding of certain web
or tree-like natural fractal aggregates. The Witten-Sander method is called diffusion limited aggregation. Most
unfortunately, it fails to yield realistic botanical trees, but it gives us hope for the future.
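Witten and Sander’s process is easy to sketch, though what follows is only a toy illustration (ours, not their code, and on a far smaller lattice than any physical simulation): a seed particle sits at the center, and each new particle random-walks in from the boundary until it arrives next to the aggregate and sticks.

```python
import random

def dla(n_particles=40, size=31, seed=1):
    """Toy diffusion limited aggregation on a size x size lattice."""
    random.seed(seed)
    c = size // 2
    stuck = {(c, c)}                          # seed particle at the center
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        # launch a walker from a random cell on the lattice boundary
        x, y = random.choice([(0, random.randrange(size)),
                              (size - 1, random.randrange(size)),
                              (random.randrange(size), 0),
                              (random.randrange(size), size - 1)])
        while True:
            if any((x + dx, y + dy) in stuck for dx, dy in steps):
                stuck.add((x, y))             # touched the cluster: freeze
                break
            dx, dy = random.choice(steps)
            x = min(max(x + dx, 0), size - 1) # stay on the lattice
            y = min(max(y + dy, 0), size - 1)
    return stuck

cluster = dla()
```

The aggregate comes out tenuous and branched rather than compact, because an incoming walker is overwhelmingly likely to hit the tip of an existing branch before it can diffuse into the fjords between branches.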
0.9 Iteration, yesterday’s dry mathematics and today’s weird and wonderful new
fractal shapes, and the “Geometry Supercomputer Project”
Now, from fractals that imitate the mountains, the clouds and the trees, let us move on to fractals that do not. For
the artist and the layman, they are simply weird and wonderful new shapes. My brief involvement with Poincare
limit sets has already been touched upon. My initiation to Julia sets began at age 20, when the few who knew them
called them J-sets. This episode, and the beginning of my actual involvement with the study of the iteration of z →
z² + c, have both been described in an Invited Contribution to The Beauty of Fractals [83] which need not be
repeated here.
But I do want to mention in this Foreword a brief interaction with David Mumford, which eventually
contributed to a very interesting and broad recent development.
David’s name was known to me, as to everybody else in mathematics, because of his work in algebraic geometry.
We met when I came to Harvard in 1979, and in November 1979 he came to a seminar I gave. After the talk, which
was on iteration, he rushed towards me : - On the basis of what you have said, you should also look into Kleinian
groups; you might even find an explicit construction for their limit set. - Actually, I responded, I already have a
nice algorithm for an important special case. Please come to my office, and I shall show you. He came over and saw
the algorithm that was eventually published in The Mathematical Intelligencer in 1983, as told earlier in this
Foreword. - This is so simple, that Poincare should have seen it, or someone else since Poincare. Why did the
discovery have to wait for you? - Because no one before me had used a powerful new tool, the computer! - But one
cannot prove anything with a computer! - Sure, but playing with the computer is a source of conjectures, often most
unexpected ones. The conjecture it has suggested about Kleinian limit sets has been easy to prove; others are too
hard for me. - In that case, would you help me to learn to play with the computer? - With pleasure, but we would
have to get help from my latest IBM assistant, Mark Laff.
Soon after, it became clear that Mumford had to seek associates closer by, in Cambridge. He was tutored by
my course assistant Peter Moldave, and started working with David Wright, then a Harvard graduate student in
mathematics, who ceased at that point to hide his exceptional programming skills. Eventually, Mumford became
thoroughly immersed in computers, first as heuristic tools, then for their own sake.
He became instrumental in helping the awareness of the computer-as-tool spread among mathematicians. The
resulting needs grew so rapidly that, after a lapse of hardly eight years, the National Science Foundation has now
established a Geometry Supercomputer Project! The charter members are F. Almgren (Princeton), J. Cannon
(Brigham Young), D. Dobkin (Princeton), A. Douady (ENS, Paris), D. Epstein (Warwick), J. Hubbard (Cornell),
B.B. Mandelbrot (IBM and Yale), A. Marden (Minnesota), J. Milnor (IAS, Princeton), D. Mumford (Harvard), R.
Tarjan (Princeton and Bell Labs), and W. Thurston (Princeton). At the risk of sounding corny, let me confess that
the opening of this project was a high point in my life.
The next topic to be discussed concerning iteration is my very fruitful interaction with V. Alan Norton, a
Princeton mathematics Ph.D. in 1976, who was in my group as a post-doc in 1980-82, and stayed on the research
staff at Yorktown. He was one of the two principal “illustrators” of The Fractal Geometry of Nature, as seen in that
book’s very detailed picture credits. He has achieved great renown, starting with SIGGRAPH 1982 for his splendid
quaternionic Julia set pictures.
Norton also worked on the end-papers of The Fractal Geometry of Nature, on which hangs a tale that may be
worth recounting. These end-papers (which the book’s early printings leave without legend, an omission I now
regret) involve an important problem from the theory of iteration of analytic functions, an artifact due to inherent
limitations of the computer, and two decorative touches.
Fig. 0.2: The Julia set of Newton’s method applied to e^z = 1. The original graph was unbounded, and Norton
introduced a decorative touch: to invert with respect to a circle. I loved the result; unfortunately, while bounded, it
did not fit neatly on a double page spread. Hence I imposed a second and more arbitrary decorative touch: to stretch
the graph horizontally to fill the space available.
The serious mathematical problem that had led me to this graph was the use of Newton’s method to solve the
equation exp(z) = c. The solutions are known from calculus, but Gaston Julia showed in 1917 that Newton’s
method is a fruitful ground to study the iteration of functions of a complex variable z. Chapter 19 of The Fractal
Geometry of Nature examines the iteration of z² + c and other polynomials. This end-paper relates to the iteration
of the transcendental function z → z - 1 + c e^(-z) (see also Figure 0.2).
In Arthur Cayley’s pioneering global studies of iteration, in 1879, the interest in iteration had arisen from the
application of the Newton method. (Peitgen et al. tell the story, and illustrate it, in The Mathematical Intelligencer
in 1984 [84].) Cayley began by solving z² = c, which proved easy, then went on to try z³ = c, which stumped him by
exhibiting three “grey areas” that he found no way of resolving. Julia in 1917 had found many facts about these
areas, and John H. Hubbard had shown us his revealing earliest graph of the corresponding Julia set. It was natural
for us, in late 1980, to play with z^p = c, and then view e^z = c as a suitable limit of z^p = c for p → ∞. We made
many interesting observations on this limit case, but the study was far from completed and publishable when we
moved on to very different work.
Final and unfortunate fact, the non-fractal bold boundaries between the background and the solidly colored
areas on the end-papers of The Fractal Geometry of Nature are an artifact. The study of transcendental functions’
iterates leads very quickly to enormous integers, hence soon reaches intrinsic limits beyond which the computer
takes its own arbitrary actions.
Our perception of the iteration of transcendental functions as a difficult and very rich topic was confirmed by
several eminent mathematicians, such as Robert L. Devaney. No wonder, therefore, that one should see striking
resemblances between our end-papers and his beautiful and widely seen illustrations and films. Bob’s papers on the
iteration of transcendental functions had already brought admiring attention to him, but we did not become fast
friends until we started bumping into each other constantly on the fractals Son et Lumiere traveling shows.
The life orbit of Michael Barnsley has also crossed mine, and then stayed in the same neighborhood, because
of fractals. The amusing background, in this instance, is in the public record, and I should not repeat it. I first read
about it in James Gleick’s book, Chaos: Making a New Science. It told me how it came to be that Michael
burst into my house one day, full of enthusiasm, and of lovely tales. Later, we held a few meetings at the Atlanta
airport (of all places!), and since then it has been a pleasure to keep up with his work and that of his many
associates.
Now back to the pictures of Julia and Mandelbrot sets in The Fractal Geometry of Nature. During the summer
of 1984, we were tooling up to redo them in color, with Eriko Hironaka as programmer, when mail brought in, hot
off the press, the June issue of the German magazine GEO. We realized immediately that much of what we were
proposing had already been achieved, in fact achieved beyond our aspirations, by Heinz-Otto Peitgen, Peter H.
Richter, Dietmar Saupe, Hartmut Jurgens and their associates. This group’s earlier fractal pictures in The
Mathematical Intelligencer in 1984 had been most welcome, but those in color had not yet given reason for
enthusiasm. The color pictures in the 1984 GEO showed a skilled and artistic eye, and a sure hand, one that had
gained experience but had not become a lazy or a hasty one. They were unmistakably the outcome of the search for
perfection I had admired earlier in the work of Voss, and always attempt in my own.
I wrote to the GEO authors at the University of Bremen to congratulate them, to tell them
of the change of plans their success had provoked, and to express the hope of meeting
them soon. They told me about a fractal exhibit they were planning, and for which they
were preparing a catalogue that eventually led to their book The Beauty of Fractals, and
they asked me to write the personal contribution mentioned earlier in this Foreword.
Granted that this book was fated not to be done by us, it is a delight that it came from
them. I gained these new friends when they invited me to Bremen in May 1985, to open the
first showing of their exhibit, and I participated again in this exhibit in several other cities.
Our joint appearances since then have been too numerous to count. There are no
anecdotes to tell, only very pleasant events to remember.

Conclusion
As my friends team up for this book under Heinz-Otto’s and Dietmar’s editorship, I am reminded that only a
while ago (the hurt is thoroughly gone, but the memory remains) no one wanted to scan my early pictures for
longer than a few minutes, and this would-be leader of a new trend had not a single follower. Then (very slowly in
old memory, yet almost overnight in present perception) fractals had become so widely interesting that SIGGRAPH
started devoting lectures, then full day courses to them. The first makeshift fractals at SIGGRAPH came in 1985
under my direction, the second were in 1986 under Peter Oppenheimer, and the third in 1987 have led to the
present volume.
Is this book likely to be the last word on the subject? I
think not. Several outstanding authors were left out
because of the book’s origin. And my own experience,
that the act of writing the Foreword has sufficed to
make me aware of many new open directions to
explore, is bound to be repeated very widely. Let us
all pay to the book the high compliment of promptly
making it quite obsolete.
Chapter 1
Richard F. Voss
The essence of fractals is illustrated in Figure 1.1 with successive views of a fractal planet from an orbiting
spacecraft. In the search for a suitable landing site, a portion of the initial view as shown in the upper left of Figure
1.1 is magnified by a factor of 4 to give the middle top image. Similarly, each successive image represents a
magnification of a selected portion of the coastline of the previous image (as indicated by the white box) up to a
final magnification of more than 10^7 in the middle bottom. This high magnification can be compared with the
initial image that is repeated at the lower right. Although these two images, differing in scale by more than 10^7, are
not identical, they seem to share so many of the same characteristics that it is difficult to believe that they are not
just different sections of the same landscape at the same magnification. This property of objects whereby magnified
subsets look like (or are identical to) the whole and to each other is known as self-similarity. It is characteristic of
fractals and differentiates them from the more traditional Euclidean shapes (which, in general, become ever
smoother upon magnification).
Figure 1.2 shows another computer generated forgery. Its three elementary fractal shapes, the foreground
landscape, its distribution of craters, and the generalization of Brownian motion onto a sphere rising in the
background, all share this characteristic self-similarity. Magnified subsets of each of these shapes are again
reminiscent of the whole. The fact that these computer generated primitives of fractal geometry strongly evoke the
real world is a valuable clue to their importance in Nature.
According to Galileo (1623),
Philosophy is written in this grand book - I mean the universe - which stands continuously open to our gaze, but it
cannot be understood unless one first learns to comprehend the language in which it is written. It is written in the
language of mathematics, and its characters are triangles, circles and other geometrical figures, without which it
is humanly impossible to understand a single word of it; without these, one is wandering about in a dark labyrinth.
In this quote Galileo presents several of the basic tenets of modern western science. First, in
order to understand or simulate nature, one must be conversant in its languages. Second,
Nature’s language is mathematics, and geometry is the specific language for describing,
manipulating, and simulating shapes.
Fig. 1.1: Zoom sequence of the coastline of a statistically self-similar fractal landscape (D = 2.2). Each
succeeding picture shows a blowup of the framed portion of the previous image. As the surface is
magnified, a small portion looks similar to (but not exactly the same as) a larger portion. The total
magnification corresponds to 16 million.
Galileo was, however, wrong in terms of nature’s preferred dialect. The inability of the triangles and circles of
Euclidean geometry (the specific dialect mentioned) to concisely describe the natural world has not been widely
appreciated until recently. Yet, in retrospect it is obvious that, “clouds are not spheres, mountains are not cones,
coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line” (Mandelbrot, [68]).
Fortunately, there is now a new dialect, a new branch of mathematics, that is appropriate for the irregular shapes of
the real world, fractal geometry [68], as conceived and developed by Benoit Mandelbrot.
Fig. 1.2: Fractal Planetrise. A variation of Brownian motion on a sphere (D = 2.5) rising above a fractally cratered random Gaussian fractal surface (D = 2.2) (from Benoit Mandelbrot’s The Fractal Geometry of Nature [68]).

1.1.1 Mathematical monsters: The fractal heritage
The turn of the century coincided roughly with an upheaval in the world of mathematics. Minds conceived of
strange monsters seemingly without counterpart in nature: Cantor sets, Weierstrass functions, Peano curves, curves
without derivatives and lines that could fill space. Having once discovered these monsters (and congratulated
themselves on a creativity superior to nature), mathematicians banished the pathological beasts, mostly sight
unseen, to a mathematical zoo. They could imagine no use for, nor interest in, their creations by natural scientists.
Nature, however, was not so easily outdone. As shown by Mandelbrot, many of these “monsters” do, in fact, have
counterparts in the real world. Illuminated by computer graphics, they have finally been recognized as some of the
basic structures in the language of nature’s irregular shapes: the fractal geometry of nature.
Fractals (a word coined by Mandelbrot in 1975) have blossomed tremendously in the past few years and have
helped reconnect pure mathematics research with both the natural sciences and computing. Within the last 5-10
years fractal geometry and its concepts have become central tools in most of the natural sciences: physics,
chemistry, biology, geology, meteorology, and materials science. At the same time, fractals are of interest to
graphic designers and filmmakers for their ability to create new and exciting shapes and artificial but realistic
worlds. Fractal images appear complex, yet they arise from simple rules. Computer graphics has played an
important role in the development and rapid acceptance of fractal geometry as a valid new discipline. The computer
rendering of fractal shapes leaves no doubt of their relevance to nature. Conversely, fractal geometry now plays a
central role in the realistic rendering and modelling of natural phenomena in computer graphics.
What then are fractals? How are they different from the usual Euclidean shapes? Table 1.1 summarizes some of the
major differences between fractals and traditional Euclidean shapes. First, fractals are a decidedly modern invention.
Although possessed of turn-of-the-century “pathological” ancestors, they have been recognized as useful to natural
scientists only over the last 10 years. Second, whereas Euclidean shapes have one, or at most a few, characteristic
sizes or length scales (the radius of a sphere, the side of a cube), fractals, like the coastline of Figure 1.1, possess no
characteristic sizes. Fractal shapes are said to be self-similar and independent of scale or scaling. Third, Euclidean
geometry provides concise accurate descriptions of man-made objects but is inappropriate for natural shapes. It
yields cumbersome and inaccurate descriptions. It is likely that this limitation of our traditional language of shape
is at least partly responsible for the striking qualitative difference between mass produced objects and natural
shapes. Machine shops are essentially Euclidean factories: objects easily described are easily built. Fractals, on the
other hand, provide an excellent description of many natural shapes and have already given computer imagery a
natural flavor. Finally, whereas Euclidean shapes are usually described by a simple algebraic formula (e.g. r 2 = x2 +
y2 defines a circle of radius r), fractals, in general, are the result of a construction procedure or algorithm that is
often recursive (repeated over and over) and ideally suited to computers.
Table 1.1: GEOMETRY

EUCLIDEAN                                  FRACTAL
• traditional (> 2000 yr)                  • modern monsters (~ 10 yr)
• based on characteristic size or scale    • no specific size or scaling
• suits man-made objects                   • appropriate for natural shapes
• described by formula                     • (recursive) algorithm
These differences can be illustrated with one of the early mathematical monsters: the von Koch snowflake
curve (first proposed around 1904). Figure 1.3 illustrates an iterative or recursive procedure for constructing a
fractal curve. A simple line segment is divided into thirds and the middle segment is replaced by two equal
segments forming part of an equilateral triangle. At the next stage in the construction each of these 4 segments is
replaced by 4 new segments with length 1/3 of their parent according to the original pattern. This procedure, repeated
over and over, yields the beautiful von Koch curve shown at the top right of Figure 1.3 .
It demonstrates that the iteration of a very simple rule can produce seemingly complex shapes with some
highly unusual properties. Unlike Euclidean shapes, this curve has detail on all length scales. Indeed, the closer one
looks, the more powerful the microscope one uses, the more detail one finds.

Fig. 1.3: Recursive replacement procedure for generating the von Koch snowflake curve and variations with different fractal dimensions.

More important, the curve possesses an exact self-similarity. Each small portion, when magnified, can reproduce exactly a
larger portion. The curve is said to be invariant under changes of scale much like the coastline of Figure 1.1. At
each stage in its construction the length of the curve increases by a factor of 4/3. Thus, the limiting curve crams an
infinite length into a finite area of the plane without intersecting itself. As with the coastline, at successive
iterations corresponding to successive magnifications, one finds new detail and increasing length. Finally, although
the algorithm for generating the von Koch curve is concise, simple to describe, and easily computed, there is no
algebraic formula that specifies the points of the curve.
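The replacement rule translates directly into a short recursive program; the sketch below is ours, not from the book. Each call cuts a segment into thirds and erects the equilateral bump over the middle third, recursing on the four resulting segments.

```python
import math

def koch(p0, p1, depth):
    """Points of a von Koch curve from p0 to p1 after `depth` subdivisions."""
    if depth == 0:
        return [p0, p1]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)                        # 1/3 point
    b = (x0 + 2 * dx, y0 + 2 * dy)                # 2/3 point
    c = (a[0] + dx / 2 - dy * math.sqrt(3) / 2,   # apex of the bump:
         a[1] + dy / 2 + dx * math.sqrt(3) / 2)   # middle third rotated +60 deg
    pts = []
    for q0, q1 in ((p0, a), (a, c), (c, b), (b, p1)):
        pts += koch(q0, q1, depth - 1)[:-1]       # drop shared endpoints
    return pts + [p1]

curve = koch((0.0, 0.0), (1.0, 0.0), 4)           # 4^4 segments, 257 points
# N = 4 pieces at ratio r = 1/3 give the similarity dimension
D = math.log(4) / math.log(3)                     # about 1.26
```

Each level of recursion multiplies the number of segments by 4 while shrinking them by 3, which is exactly the N = 4, r = 1/3 scaling from which the dimension log 4 / log 3 is read off.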
The property of self-similarity or scaling, as exemplified by the coastline, the von Koch curve, and the Mandelbrot
set, is one of the central concepts of fractal geometry. It is closely connected with our intuitive notion of dimension
as illustrated in Figure 1.4 .
An object normally considered as one-dimensional, a line segment, for example, also possesses a similar
scaling property. It can be divided into N identical parts each of which is scaled down by the ratio r = 1/N from the
whole. Similarly, a two-dimensional object, such as a square area in the plane, can be divided into N self-similar
parts each of which is scaled down by a factor r = 1/N^(1/2). A
three-dimensional object like a solid cube may be divided into N little cubes each of which is scaled down by a
ratio r = 1/N^(1/3). With self-similarity the generalization to fractal dimension is straightforward. A D-dimensional
self-similar object can be divided into N smaller copies of itself each of which is scaled down by a factor r where
r = 1/N^(1/D), or

N r^D = 1. (1.1)
Conversely, given a self-similar object of N parts scaled by a ratio r from the whole, its fractal or similarity
dimension is given by
D = log(N) / log(1/r). (1.2)
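Eqn. (1.2) is simple enough to check directly. A minimal Python sketch (names are my own choosing), applied to the integer-dimension examples of Figure 1.4 and to the von Koch curve:

```python
import math

def similarity_dimension(n_parts, ratio):
    """Eqn. (1.2): D = log(N) / log(1/r) for a self-similar object of
    N parts, each scaled down from the whole by the ratio r."""
    return math.log(n_parts) / math.log(1.0 / ratio)

print(similarity_dimension(3, 1/3))   # line in 3 pieces: D = 1
print(similarity_dimension(9, 1/3))   # square in 9 pieces: D = 2
print(similarity_dimension(4, 1/3))   # von Koch curve: D = log(4)/log(3)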
The fractal dimension, unlike the more familiar notion of Euclidean dimension, need not be an integer. Any
segment of the von Koch curve is composed of 4 sub-segments each of which is scaled down by a factor of 1/3 from
its parent. Its fractal dimension is D = log(4)/log(3) or about 1.26.... This non-integer dimension, greater than one but less than
two, reflects the unusual properties of the curve. It somehow fills more of space than a simple line (D = 1), but less
than a Euclidean area of the plane (D = 2) . Mandelbrot [68] gives many variations of the von Koch construction
and two others are presented in Figure 1.3 . In the middle a segment is replaced by 8 new segments each 1/4 of the
initial one to yield D = log(8)/log(4) = 1.5.
1-D: N parts, scaled by ratio r = 1/N, so N r = 1
2-D: N r^2 = 1
Generalizing, N r^D = 1
defines the fractal (similarity) dimension
D = log N / log(1/r)
Fig. 1.4: Interpretation of standard integer dimension figures in terms of exact self-similarity and extension to non-integer dimensioned
fractals.
At the bottom each segment is replaced by 9 new segments each 1/3 of the original for D = log(9)/log(3) = 2.
As D increases from 1 toward 2 the resulting “curves” progress from being “line-like” to “filling” much of the
plane. Indeed, the limit D —> 2 gives a Peano or “space-filling” curve. The fractal dimension D, thus, provides a
quantitative measure of wiggliness of the curves. Although these von Koch curves have fractal dimensions between
1 and 2, they all remain a “curve” with a topological dimension of one. The removal of a single point cuts the curve
in two pieces.
The exactly self-similar von Koch curve may be considered a crude model for a coastline, but it differs from the
coastline in one significant aspect. Upon magnification segments of the coastline look like, but never exactly like,
segments at different scales. The concept of fractal dimension, however, can also be applied to such statistically
self-similar objects. In a measurement of the length of a coastline, the more carefully one follows the smaller
wiggles, the longer it becomes. A walk along a beach is longer than the drive along the corresponding coast
highway. Moreover, each small section of a coastline looks like (but not exactly like) a larger portion. When using
a ruler of size r to measure a coastline’s length, the total length equals the ruler size r times the number of steps of
size r, N(r), taken in tracing the coast
Length ∝ r · N(r) ∝ r · (1/r)^D = r^(1−D). (1.4)
With D > 1, as the size of the ruler used to measure a coast decreases, its length increases. The variation of
apparent coastline length with ruler size has been studied by Richardson as summarized in [68]. Real coastlines
can, in fact, be characterized by fractal dimensions D of about 1.15 to 1.25, close to the D ≈ 1.26 of the von Koch curve.
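Richardson's measurement amounts to fitting the slope of log N(r) against log(1/r). A two-scale sketch (my own helper, using the exact von Koch counts rather than empirical coastline data):

```python
import math

def dimension_from_counts(r1, n1, r2, n2):
    """Estimate D from ruler counts at two ruler sizes, using
    N(r) ~ (1/r)^D: the slope of log N against log(1/r)."""
    return math.log(n2 / n1) / math.log(r1 / r2)

# exact von Koch counts: a ruler of size (1/3)^k takes 4^k steps
d = dimension_from_counts(1/3, 4, 1/9, 16)
print(d)   # log(4)/log(3), about 1.2619
```

With real coastline data one would fit the slope over many ruler sizes rather than just two.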
The property that objects can look statistically similar while at the same time different in detail at different
length scales, is the central feature of fractals in nature. The coastline of Figure 1.1 is random in the sense that
(unlike the von Koch curve) a large scale view is insufficient to predict the exact details of a magnified view. Yet,
the way in which the detail varies as one changes length scale is once again characterized by a fractal dimension.
The irregular fractal surface is more than a simple surface (D = 2) and the actual surface shown in Figure 1.1 has D
= 2.2 . The fractal dimension of the coastline is one less than that of the surface itself. Here the coastline D = 1.2 .
For the idealized fractal landscape of Figure 1.1 the statistical self-similarity extends from arbitrarily large to
arbitrarily small scales. Actual landscapes, on the other hand, can be statistically self-similar only over a finite (but
often quite large) range of distances. The largest variations may be limited by the size of the planet or the force of
gravity (the materials may not be strong enough to support arbitrarily high mountains). The smallest scales may be
limited by the smoothing of erosion, the basic grain size of the rock and sand or, at the very least, by the atomic
nature of the particles. The mathematical ideal is an approximation to the real world (or vice versa to some
mathematicians). Mandelbrot’s fractal geometry, however, remains by far the best approximation and, to date, the
most widely used successful mathematical model.
As discussed below, almost all algorithms for generating fractal landscapes effectively add random
irregularities to the surface at smaller and smaller scales similar to the process of adding smaller and smaller line
segments to the von Koch curve. Once again, the fractal dimension determines the relative amounts of detail or
irregularities at different distance scales. Surfaces with a larger D seem rougher. This effect is illustrated in Plates
11 to 13 which show the “same” surface but with different fractal dimensions. Plate 11 has a relatively low fractal
dimension for the surface, D = 2.15, that is appropriate for much of the earth. An increase in the fractal dimension
to D = 2.5 is shown in Plate 12. A further increase to D = 2.8 in Plate 13 gives an unrealistically rough surface for
an actual landscape. Fractal geometry specifies only the relative height variations of the landscape at different
length scales. For the samples given here the color changes were based on height above water level and local slope.
The images in Plates 11 and 9 (which are the same except for the coloring) do not, however, resemble many of
the valleys on the Earth’s surface which are significantly eroded. A flat bottomed basin can, however, be
approximated mathematically by simply taking the height variations (relative to water level) and scaling them by a
power-law. This process is illustrated in Plate 10 where the original landscape of Plate 11 is “cubed.” The effect of
such a power greater than 1 is to flatten the lower elevations near the water emphasizing the peaks. Scaling with a
power less than one has the opposite effect of flattening the peaks while increasing the steepness near the water. A
cube-root processing of Plate 9 is shown in Plate 8. This forgery gives the impression of river erosion into an
otherwise fairly smooth plain. Such non-linear processing does not change the fractal dimensions of the coastlines.
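The power-law rescaling can be sketched in a few lines. The sign-preserving form below is one simple reading of "cubing" heights relative to water level, and the sample heights are made-up demo values:

```python
import math

def power_scale(height, power):
    """Sign-preserving power-law rescaling of a height measured
    relative to water level (heights below water stay below water)."""
    return math.copysign(abs(height) ** power, height)

terrain = [0.1, 0.5, 1.0, 0.3, -0.2]            # hypothetical heights
cubed = [power_scale(h, 3.0) for h in terrain]   # power > 1 flattens valleys
print(cubed)
```

Cubing maps 0.5 to 0.125 but leaves the unit peak unchanged, which is exactly the flattening of low elevations described above; a power less than one would instead compress the peaks.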
In order to give the impression of a lunar landscape, as shown in the foreground of Figure
1.2, it is necessary to add craters to the “virgin” fractal landscape of Plate 11. Each crater is
circular with a height profile similar to the effect of dropping marbles in mud. The trick in
achieving realism is to use the proper
Fig. 1.5: A fractal landscape (the same as Plate 11) with craters whose area distribution follows a fractal or power-law dependence with
many more small craters than large ones.
distribution of crater sizes. For the moon, the actual distribution is fractal or power-
law, with many more small craters than large ones. Specifically, the number of craters
having an area, a, greater than some number A, N(a > A), varies as 1/A. The computer
simulation is shown in Figure 1.5 and on the back cover of this volume. This is the
same surface that forms the foreground of Figure 1.2.

1.1.8 Fractal planet: Brownian motion on a sphere
Rather than add increasing detail at progressively smaller scales, the rising fractal planet in Figure 1.2 was
generated by a different algorithm that is closer to an actual model for the evolution of the earth’s surface. Its
surface is a generalization of Brownian motion or a random walk on a sphere. A random walk is the sum of many
independent steps, and the sphere’s surface becomes the result of many independent surface displacements or
faults. Each fault encircles the sphere in a random direction and divides it into two hemispheres which are displaced
relative to each other in height. Inside the computer, the surface of the sphere is mapped onto a rectangular array
similar to a flat projection map of the earth. Plate 1 shows the surface variations, as represented by regions of
different color, both for the full and the flattened sphere after only 10 random faults. After 20 faults in Plate 2, the
flattened sphere gives the impression of a cubist painting. After 100 faults in Plate 3 the land mass is scattered with
an outline roughly resembling a rooster. With 500 faults in Plate 4, the sphere has developed a localized land mass
in the northern hemisphere similar to the earth before continental drift. Moreover, the effect of an individual fault is
becoming lost. After more than 10,000 faults and the addition of polar caps in Plate 5, the sphere’s surface becomes
a plausible planet with D = 2.5. Mapping the data of Plate 5 back onto a sphere gives the fractal planet in Figure 1.1
or the enlarged view of the rotated sphere shown in Plate 6. Remarkably, the same sphere is also a good forgery of
a much more arid generic planet. Plate 7 shows the same data but with different coloring and without water. Any
particular fractal model is often effective at simulating many different natural shapes. Once the fractal dimension is
in an appropriate range, the mathematical models rarely fail to evoke natural images.
The previous fractal forgeries have all been surfaces. The addition of irregularities
on smaller and smaller scales raises their dimensions above a topological value of 2 to a
fractal value 2 < D < 3. It is, however, also possible to generate fractals with a topological
dimension 3 whose scaling irregularities raise the fractal dimension to 3 < D < 4. Rather
than landscapes, which consist of random heights as a function of 2-dimensional surface
position, such objects might represent a random temperature or water vapor density as a
function of 3-dimensional position. The simplest means of viewing such constructions is
with their zerosets (equivalent to the coastlines of the fractal landscapes) as shown in Figure
1.6. For a temperature distribution T(x, y, z) with fractal dimension
3 < D < 4, the zerosets are all points for which T(x, y, z) − T0 = 0 and have a fractal dimension of D − 1.
They can be displayed by treating all points for which T(x, y, z) > T0 as opaque while all others are transparent.
The resulting fractal flakes with different D are shown in Figure 1.6 . In Figure 1.6(a) D − 1 is highest at 2.7 and
the flake almost fills space. In Figure 1.6(b) D − 1 is reduced to 2.5 and the isolated shapes are beginning to
condense. In Figure 1.6(c) D − 1 is reduced further to 2.2 and many natural shapes seem to appear. The overall
outline is now reminiscent of a dog’s head while just the upper portion could be the Loch Ness monster. Shapes
with a fractal dimension D about 0.2 to 0.3 greater than the Euclidean dimension E seem particularly favored in
nature. Coastlines typically have a fractal dimension around 1.2, landscapes around 2.2, and clouds around 3.3 .
Even the large scale distribution of matter in the universe (clusters of clusters of galaxies) corresponds to D about
1.2 . Perhaps, then, it is not so surprising that our perceptual system, evolving in such a world, is particularly fond
of D − E about 0.2 .
Once again the generic nature of fractal constructions can be illustrated by allowing the local light scattering
and opacity to vary continuously as T( x, y, z). In this case, the flakes are transformed into realistic looking fractal
clouds. A combination of such a fractal cloud above a cratered fractal surface is shown on the back cover. Here
light scattered by the cloud produces shadows of varying intensity on the landscape. The correspondence to actual
clouds is not surprising. Clouds and rain areas are the natural shapes where fractal behavior has been
experimentally verified over the widest range of distance scales in [68].
Man has always been confronted with a world filled with seemingly complex irregular shapes and random
fluctuations. In the quest for understanding, natural science has progressed by concentrating primarily on the
simplest of systems. In this process, it has moved away from the direct experience of nature to the electronically
instrumented laboratory. After all, who could describe, let alone understand, the profile of a mountain or the shape
of a cloud? With fractal geometry, the quest for scientific understanding and realistic computer graphic imagery can
return to the everyday natural world. As the computer generated fractal forgeries [103] of Section 1.1 show,
nature’s complex shapes are only seemingly so.
Diverse scientific disciplines have readily adopted the language of fractal geometry [15,36,61,85,95,97,108]. The
images of Section 1.1 provide strong evidence of the applicability of fractals to nature on geological and planetary
scales. Fractal geometry, however, also provides techniques for quantifying nature’s irregular shapes, usually
through a measurement of the fractal dimension [105]. A summary of these methods is given in Section 1.6. At
large scales, natural boundaries, geological topography, acid rain, cloud, rain and ecosystem boundaries, seismic
faults, and the clustering of galaxies are all susceptible to fractal analysis.
Fractals are just as useful at smaller than human scales. The variation of surface roughness with D shown in Plates
11 to 13 suggests that D is also a good measurable quantifier for characterizing material surfaces on a microscopic
scale. Materials scientists have widely adopted a fractal vocabulary for irregular shapes [108]. Fractals provide a
general language for the taxonomy or classification of the various “animals” (previously pathological indescribable
shapes) encountered in the natural sciences. Their ubiquity suggests two important questions: where do fractal
shapes come from and how do the characteristics of fractal shapes affect the processes that occur on them?
Many of the ways in which matter condenses on the microscopic scale seem to generate fractals. Figure 1.7
shows a transmission electron micrograph of a thin gold film [104] about 50 atomic layers thick on an amorphous
substrate. Rather than a uniform coverage, the gold beads up due to surface tension (like water on a freshly waxed
car hood). A computer analysis of the connected clusters (the largest is shown highlighted) shows irregular
branching structures of finite size. As the amount of gold is increased, the clusters increase in size and eventually
connect across the entire sample. As the largest cluster reaches the sample boundaries its shape is fractal over all
length scales above the bead size. This is an example of a percolation transition, a geometric analog of a 2nd order
phase transition. Here fractal geometry provides an alternative description to analytic scaling theory [104]. Fractal
dimensions of the characteristic shapes correspond to the scaling exponents. Fractals also provide the language and
the formalism for studying physical processes (such as diffusion and vibration) on such structures [81]. Diffusion
on complex proteins with fractal structures has important biological implications.
Figure 1.8 and Plate 31 show samples of diffusion limited aggregation (see [36,95,97]). DLA is a simple
model, easily implemented on a computer, that reproduces many natural shapes found in electrostatic discharges,
electrochemical deposition, and fluid-fluid displacement. The resulting structures are well characterized as fractals
and strongly reminiscent of plant-like growths. As with the Mandelbrot set, the resulting shapes are the result of
repeated application of simple non-linear rules. The DLA samples in Plate 31 begin with a single fixed sticky
particle at the origin of a sea of randomly moving particles. Each of these mobile particles moves in its own random
path with its own simple rules. At each time step it moves one unit in a random direction unless it finds itself next
to the sticky origin. If so, it becomes immobile and serves as another sticky site itself. Thus, beginning from one
site, the cluster grows with the addition of other particles that randomly reach its boundary. The open fjords never
fill since a particle trying to reach the bottom by a random path invariably hits one of the sticky sides first. Figure
1.8 shows DLA samples with different initial conditions (sticky line at the bottom) and changes of growth
morphology with sticking probability.
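The sticking rules just described fit in a few lines of code. A minimal on-lattice sketch (the launch radius, particle count, and relaunch rule are arbitrary demo choices, not the parameters behind the figures):

```python
import math, random

def dla_cluster(n_particles=25, launch_radius=15, seed=3):
    """Minimal DLA sketch: walkers launched from a circle take unit
    random steps, stick when they land next to the cluster, and are
    relaunched if they wander too far away."""
    random.seed(seed)
    cluster = {(0, 0)}                       # the single sticky seed
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        stuck = False
        while not stuck:
            a = random.uniform(0.0, 2.0 * math.pi)
            x = round(launch_radius * math.cos(a))
            y = round(launch_radius * math.sin(a))
            while True:
                if x * x + y * y > (2 * launch_radius) ** 2:
                    break                    # wandered off: relaunch
                if any((x + dx, y + dy) in cluster for dx, dy in moves):
                    cluster.add((x, y))      # next to a sticky site: stick
                    stuck = True
                    break
                dx, dy = random.choice(moves)
                x, y = x + dx, y + dy
    return cluster

c = dla_cluster()
print(len(c))   # the seed plus one cell per stuck walker
```

Because a wandering particle almost always touches the tip of an arm before it can reach an inner fjord, the open fjords of the text never fill in, even in this toy version.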
Fig. 1.8: Samples of Diffusion Limited Aggregation (DLA) simulations from a line showing variations of growth from dendritic to
moss-like.
1.2.3 Scaling randomness in time: 1/f-noises
The preceding examples have demonstrated fractal randomness in space. Changes in time, however, have many of
the same similarities at different scales as changes in space. To the physicist, unpredictable changes of any quantity
V varying in time t are known as noise. Graphical samples of typical noises V(t) are shown in Figure 1.9 . To the
left of each sample is a representation of its spectral density. The spectral density, Sv(f), gives an estimate of the
mean square fluctuations at frequency f and, consequently, of the variations over a time scale of order 1/f. The trace
made by each of these noises is a fractal curve and there is a direct relationship between the fractal dimension and
the logarithmic slope of the spectral density which will be discussed below. Indeed, it is the understanding and
simulation of such noises that forms the basis for all of the fractal forgeries presented here.
Figure 1.9(a) shows a white noise, the most random. It could be produced by a pseudo-random number
generator and is completely uncorrelated from point to point. Its spectral density is a flat line, representing equal
amounts at all frequencies (like a white light). Figure 1.9(c) shows a Brownian motion or a random walk, the most
correlated of the three noise samples. It consists of many more slow (low frequency) than fast (high frequency)
fluctuations and its spectral density is quite steep. It varies as 1/f². Formally, the Brownian motion of Figure 1.9(c)
is the integral of the white noise of Figure 1.9(a). In the middle, in Figure 1.9(b), is an intermediate type of noise
known as 1/f-noise because of the functional form of its spectral density. In general, the term 1/f-noise is applied to
any fluctuating quantity V(t) with Sv(f) varying as 1/f^β over many decades with 0.5 < β < 1.5. Both white and 1/f²-noise
are well understood in terms of mathematics and physics. Although the origin of 1/f-noise remains a mystery after
more than 60 years of investigation, it represents the most common type of noise found in nature.
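Traces like those of Figure 1.9 can be synthesized by spectral synthesis: sum sinusoids with random phases and amplitudes proportional to f^(−β/2), so the spectral density falls off as 1/f^β. This is a sketch of the general idea, not the chapter's algorithm:

```python
import math, random

def scaling_noise(n=512, beta=1.0, seed=0):
    """Spectral-synthesis sketch of a 1/f^beta noise trace.
    beta = 0 gives white noise, beta = 1 gives 1/f-noise, and
    beta = 2 gives a Brownian-motion-like trace."""
    random.seed(seed)
    trace = [0.0] * n
    for k in range(1, n // 2):           # harmonic frequencies f = k/n
        amp = k ** (-beta / 2.0)         # amplitude^2 falls as 1/f^beta
        phase = random.uniform(0.0, 2.0 * math.pi)
        for t in range(n):
            trace[t] += amp * math.sin(2.0 * math.pi * k * t / n + phase)
    return trace

white = scaling_noise(beta=0.0)
pink = scaling_noise(beta=1.0)
brown = scaling_noise(beta=2.0)
```

Comparing the three traces point by point shows exactly the progression of Figure 1.9: the white trace is uncorrelated from sample to sample, while the brown trace wanders slowly.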
There are no simple mathematical models that produce 1/f-noise other than the tautological assumption of a
specific distribution of time constants. Little is also known about the physical origins of 1/f-noise, but it is found in many
[102] physical systems: in almost all electronic components from simple carbon resistors to vacuum tubes and all
semiconducting devices; in all time standards from the most accurate atomic clocks and quartz oscillators to the
ancient hourglass; in ocean flows and the changes in yearly flood levels of the river Nile [64] as recorded by the
ancient Egyptians; in the small voltages measurable across nerve membranes due to sodium and potassium flow;
and even in the flow of automobiles on an expressway [78]. 1/f-noise is also found in music.
Fig. 1.9: Samples of typical “noises”, V(t), the random variations of a quantity in time.
a. White noise, the most random.
b. 1/f-noise, an intermediate but very commonly found type of fluctuation in nature; its origin is, as yet,
a mystery.
c. Brownian motion or a random walk.
To the left of each sample is a graphical representation of the spectral density, Sv(f), a measurement
characterizing the time correlations in the noise.
One of the most exciting discoveries [100,101] was that almost all musical melodies mimic 1/f-noise. Music has the
same blend of randomness and predictability that is found in 1/f-noise. If one takes a music score and draws lines
between successive notes of the melody one finds a graph remarkably similar in quality to Figure 1.9(b). Some of
the actual measured melody spectral densities for different types of music are shown in Figure 1.10. There is little to
distinguish these measurements on widely different types of music from each other or from the 1/f-noise of Figure
1.9(b). Similar 1/f-spectra are also observed for the loudness fluctuations in all types.
Pitch fluctuations from different musical cultures:
(a) the Ba-Benzele Pygmies
(b) traditional music of Japan
(c) classical ragas of India
(d) folk songs of old Russia
(e) American blues
Pitch fluctuations in western music:
(a) Medieval music up to 1300
(b) Beethoven, 3rd Symphony
(c) Debussy, piano works
(d) R. Strauss, Ein Heldenleben
(e) the Beatles, Sgt. Pepper
Fig. 1.10: Spectral density measurements of the pitch variations in various types of music showing their common correlations as
1/f-noise.
Measurement of spoken languages shows a different result. Although the loudness fluctuations retain a 1/f-
spectrum, pitch fluctuations for spoken English show a Lorentzian behavior with white or independent variations
for time scales longer than a spoken phoneme. Oriental languages, on the other hand, show an increase in spectral
density of pitch fluctuations at low frequencies corresponding to increased correlation (but less than 1/f). The 1/f-
spectral density seems to occur for measured quantities that are more closely related to “meaning” (musical
melodies, spoken loudness).
This type of analysis is surprisingly insensitive to the different types of music. With the exception of very modern
composers [100] like Stockhausen, Jolet, and Carter (where the melody fluctuations approach white noise at
low frequencies), all types of music share this 1/f-noise base. Such a view of melody fluctuations emphasizes the
common element in music and suggests an answer to a question that has long troubled philosophers [45]. In the
words of Plato, “For when there are no words (accompanying music), it is very difficult to recognize the meaning
of the harmony and rhythm, or to see that any worthy object is imitated by them”. Greek philosophers generally
agreed on the imitative nature of the arts. It seemed obvious that painting, sculpture or drama imitated nature. But
what does music imitate? The measurements suggest that music is imitating the characteristic way our world
changes in time. Both music and y-noise are intermediate between randomness and predictability. Like fractal
shapes there is something interesting on all (in this case, time) scales. Even the smallest phrase reflects the whole.
It is, of course, possible to use fractals, in this case as 1/f-noises, for music as well as landscape forgery. Figure
1.11 shows samples of “music” generated from the three characteristic types of “noise” shown in Figure 1.9 .
Although none of the samples in Figure 1.11 correspond to a sophisticated composition of a specific type of music,
Figure 1.11(b) generated from 1/f-noise is the closest to real music. Such samples sound recognizably musical, but
from a foreign or unknown culture.
The above examples are only a small subset of the specialties in which fractals are finding application. Fractal
geometry, like calculus before it, is rapidly moving from a specialized discipline in its own right to a common
necessary tool and language in many fields. It remains important to emphasize the difference between the
mathematical ideal and nature. Whereas mathematical fractals may provide scaling behavior over unlimited ranges,
natural fractals always have limits. Nonetheless, fractal geometry remains the best available language for most
natural shapes. As in most disciplines, an investigation of the boundaries (which are themselves often fractal)
provides new insights.
One of the most useful mathematical models for the random fractals found in nature (such
as mountainous terrain and clouds) has been the fractional Brownian motion (fBm) of
Mandelbrot and Van Ness [68],[63]. It is an extension of the central concept of Brownian
motion that has played an important role in both physics and mathematics. Almost all
natural computer graphics fractal simulations [103] are based on an extension of fBm to
higher dimensions such as the fractional Brownian landscape of Figure 1.1. Fractional
Brownian motion
Fig. 1.11: Samples of stochastically composed fractal music based on the different types of noises
shown in Figure 1.9 . a. “white” music is too random. b. “1/f”-music is the closest to actual music (and
most pleasing). c. “Brown” or 1/f²-music is too correlated.
is also a good starting point for understanding anomalous diffusion and random walks on fractals. This section
provides a summary of the usage of fBm. Its mathematical details are left to Section 1.6 .
As can be seen from the sample traces of fBm in Figure 1.12, a fractional Brownian motion, VH(t), is a single
valued function of one variable, t (usually time). In appearance, it is reminiscent of a mountainous horizon or the
fluctuations of an economic variable. Formally, it is the increments of fBm (the differences between successive
times) that give the noises of Figure 1.9. The scaling behavior of the different traces in Figure 1.12 is characterized
by a parameter H in the range 0 < H < 1. When H is close to 0 the traces are roughest while those with H close to 1
are relatively smooth. H relates the typical change in V, ΔV = V(t2) − V(t1), to the time difference Δt = t2 − t1 by
the simple scaling law

ΔV ∝ Δt^H. (1.5)
In the usual Brownian motion or random walk, the sum of independent increments
Fig. 1.12: Samples of fractional Brownian motion traces VH(t) vs t for different values of H and D.
or steps leads to a variation that scales as the square root of the number of steps. Thus, H = 1/2 corresponds to a
trace of Brownian motion.
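The square-root scaling is easy to check numerically. This sketch (parameters are arbitrary demo values) compares the root-mean-square displacement of a ±1 random walk after 100 and after 400 steps:

```python
import math, random

def rms_displacement(n_steps, n_walks=2000, seed=5):
    """Root-mean-square displacement of a +/-1 random walk
    averaged over many independent walks."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_walks):
        s = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += s * s
    return math.sqrt(total / n_walks)

r100, r400 = rms_displacement(100), rms_displacement(400)
print(r100, r400, r400 / r100)   # ratio near 2: 4x the steps, 2x the spread
```

The measured ratio close to 2 is exactly the H = 1/2 scaling of Eqn. (1.5): quadrupling Δt doubles the typical ΔV.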
1.3.1 Self-affinity
The scaling property of fBm represented by Eqn. (1.5) is different from the statistical self-similarity of the coastline
of Figure 1.1 and the exact self-similarity of the Koch constructions of Figure 1.3. Whereas the self-similar shapes
repeat (statistically or exactly) under a magnification, the fBm traces of Figure 1.12 repeat statistically only when
the t and V directions are magnified by different amounts. If t is magnified by a factor r (t becomes rt), then V must
be magnified by a factor r^H (V becomes r^H V). For a random walk (H = 1/2) one must take four times as many steps to
go twice as far. This non-uniform scaling, where shapes are (statistically) invariant under transformations that scale
different coordinates by different amounts, is known as self-affinity. Self-affinity also plays an important role in the
properties of iterated functions as discussed in Chapter 5.
As shown in Figure 1.4, the concept of fractal dimension is strongly connected with self-similar scaling.
Although the extension to self-affine shapes can be ambiguous (see Section 1.6), a useful and consistent definition
is based on the concept of zerosets and the calculus of fractal dimensions [68],[71].
1.3.2 Zerosets
Fractals, like traditional Euclidean shapes, typically reduce their dimension by one under intersection with a plane.
Thus, the intersection of a solid 3-d sphere with a plane is a 2-d circular area. The intersection of this area with
another plane is a 1-d line segment and the intersection of this segment with yet another plane is a 0-d point.
Similarly, the intersection of a fractal curve in the plane (with fractal dimension 1 < D < 2 ) with a straight line is a
set of points of dimension D — 1. By choosing the direction of the intersecting line to eliminate one of the
coordinates, it is possible to reduce a self-affine curve to a self-similar set of points. The zeroset of fBm is the
intersection of the trace of VH(t) with the t axis: the set of all points such that VH(t) = 0. The zeroset is a
disconnected set of points with topological dimension zero and a fractal dimension D0 = 1 − H that is less than 1
but greater than 0. Although the trace of VH(t) is self-affine, its zeroset is self-similar and different estimates of D0
yield the same answer. The fractal dimension, D = D0 + 1, of a self-affine fBm is, thus, simply related to the
scaling parameter H as

D = 2 − H. (1.6)
The traces of fBm, particularly Figure 1.12(c) with H = 0.8, bear a striking resemblance to a mountainous horizon.
The modelling of the irregular Earth’s surface as a generalization of traces of fBm was first proposed by
Mandelbrot. The single variable t can be replaced by coordinates x and y in the plane to give VH(x, y) as the surface
altitude at position x,y as shown in Figure 1.1. In this case, the altitude variations of a hiker following any straight
line path at constant speed in the xy-plane is a fractional Brownian motion. If the hiker travels a distance Δr in the
xy-plane (Δr² = Δx² + Δy²) the typical altitude variation ΔV is given by

ΔV ∝ Δr^H (1.7)

in analogy with Eqn. (1.5). Once again, the fractal dimension D
must be greater than the topological dimension 2 of the surface. Here,

D = 3 − H. (1.8)
The intersection of a vertical plane with the surface VH(x, y) is a self-affine fBm trace characterized by H and has a
fractal dimension one less than the surface itself.
Similarly, the zeroset of VH(x, y), its intersection with a horizontal plane, has a fractal dimension D0 = 2 − H .
This intersection, which produces a family of (possibly disconnected) curves, forms the coastlines of the VH(x, y)
landscape. Since the two coordinates x and y are, however, equivalent, the coastlines of the self-affine VH(x, y) are
self-similar. Figure 1.1 demonstrates how the self-similar coastline remains statistically invariant under changes of
magnification. A side view of the same landscape would not, however, show the same invariance because of its
self-affine scaling. Viewed from nearby, a fractal landscape such as the earth’s surface appears relatively steep. On
the other hand, when the same section is viewed from further away (for example, a satellite view of the earth) the
surface is almost flat. This is just a consequence of the scaling property of Eqn. (1.7) . If the close-up corresponds
to A V of order Ar then increasing the viewing distance by a factor F > 1 corresponds to increasing the horizontal
scale by F and the vertical scale by only FH. Since H < 1 the vertical scale increases by less than the horizontal and
the surface appears to flatten. The relative height variations seen in a fractal landscape (which may be varied
arbitrarily in a simulation) are an important perceptual clue in estimating the height of the mountains relative to
human scale.
This generalization of fBm can continue to still higher dimensions to produce, for example, a self-affine fractal
temperature or density distribution V_H(x, y, z). Here, the variations of an observer moving at constant speed along
any straight line path in xyz-space generate a fBm with scaling given by Eqn. (1.7) where now Δr² = Δx² + Δy² +
Δz². In analogy with Eqs. (1.6) and (1.8), the fractal dimension is D = 4 − H.
The zeroset

V_H(x, y, z) = constant

now gives a self-similar fractal with D₀ = 3 − H, such as the fractal flakes of Figure 1.6.
To summarize, a statistically self-affine fractional Brownian function, V_H, provides a good model for many
natural scaling processes and shapes. As a function of one variable ( E = 1) fBm is a good mathematical model for
noises, random processes, and music. For E = 2 fBm provides fractal landscapes and random surfaces. For E = 3
fBm generates flakes and clouds. In all of these cases, the scaling property may be characterized equivalently by
either H or the fractal dimension D. Yet another characterization is provided by the spectral exponent β.
1.3.4 Spectral densities for fBm and the spectral exponent β
As discussed earlier, random functions in time V(t) are often characterized [90,91] by their spectral densities
S_V(f). If V(t) is the input to a narrow bandpass filter at frequency f and bandwidth Δf, then S_V(f) is the mean square
of the output V(f) divided by Δf, S_V(f) = ⟨|V(f)|²⟩/Δf. S_V(f) gives information about the time correlations of V(t). When S_V(f)
increases steeply at low f, V(t) varies more slowly. As shown in Figure 1.9, changing the spectral exponent β changes
the appearance of the noise trace, similar to the variation with H of the traces of fBm in Figure 1.12.
Sections 1.6.8 and 2.4 provide a useful connection between the 3 equivalent characterizations, D, H, and β, of a
fBm function of E variables:
D = E + 1 − H = E + (3 − β)/2.    (1.10)
For H in the range 0 < H < 1, D spans the range E < D < E + 1, and 1 < β < 3. The value H ≈ 0.8 is empirically a
good choice for many natural phenomena.
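The equivalences of Eqn. (1.10) are easy to encode; the following is a minimal sketch (the function names are illustrative, not from the book):

```python
def D_from_H(H, E=1):
    """Fractal dimension of a fBm graph over E variables: D = E + 1 - H (Eqn. 1.10)."""
    return E + 1 - H

def beta_from_H(H):
    """Spectral exponent of fBm: beta = 2H + 1, i.e. H = (beta - 1)/2."""
    return 2 * H + 1

def D_from_beta(beta, E=1):
    """Equivalent form of Eqn. (1.10): D = E + (3 - beta)/2."""
    return E + (3 - beta) / 2

# H = 0.8, E = 2 gives the D = 2.2 landscapes used throughout this chapter.
assert abs(D_from_H(0.8, E=2) - 2.2) < 1e-12
# The two forms of Eqn. (1.10) agree for any H in (0, 1).
for H in (0.2, 0.5, 0.8):
    assert abs(D_from_H(H, E=2) - D_from_beta(beta_from_H(H), E=2)) < 1e-12
```

For example, ordinary Brownian motion (H = ½) gives β = 2 and a trace with D = 1.5.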
Section 1.1 established the visual connection between many, seemingly complex, shapes in the natural world and
statistically self-similar and self-affine fractals. Section 1.2 indicated some of the ways in which fractals are
becoming an integral part of natural science. Section 1.3 reviewed some of the mathematical background and
established the relation between fractal dimension D, the H parameter of fBm, and the spectral density exponent /3.
This section discusses various methods for producing a finite sample of fBm as a noise (E = 1), a landscape (E =
2), or a cloud (E = 3). As with nature itself, this sample will typically be limited by both a smallest size or
resolution Lmin and a largest scale Lmax. Consideration will be given to sample statistics (mathematically, how well
does it approximate fBm?), visual features (does it look natural?) and computation characteristics (how does
computation time vary with sample size?). In general, the closer a given algorithm approximates fBm, the more
“realistic” the resulting image looks. Although no specific consideration will be given to image rendering, it is
important to note that in most cases rendering of a fractal sample requires far more computation than sample
generation. This is particularly true for rendering packages based on Euclidean shapes which have an extremely
difficult time with the “infinite” number of surface patches on a fractal surface. Chapter 2 of this volume also
provides a practical and detailed account of some of these algorithms along with pseudo-code samples.
The earliest simulations of fBm were based on the idea of independent perturbations. In the usual definition,
Brownian motion, V_B(t) = V_H(t) for H = ½, is the sum of independent increments or pulses. Formally, V_B(t) is the
integral of a Gaussian white noise W(t),

V_B(t) = ∫_{−∞}^{t} W(s) ds.    (1.11)
Since a single pulse at time tᵢ causes a step-function jump of magnitude Aᵢ (a Gaussian random variable) in V(t),
Brownian motion may equivalently be considered as the cumulative displacement of a series of independent jumps.
The response to such a pulse, AᵢP(t − tᵢ), has the form of a step function with P(t) = 1 for t > 0 and P(t) = 0 for t <
0. Thus, V_B(t) can also be written as the sum of independent cuts at random (Poisson distributed) times tᵢ,

V_B(t) = Σ_{i=−∞}^{∞} AᵢP(t − tᵢ).    (1.12)
This latter formulation of Brownian motion is useful since it can be generalized to circles and spheres to produce
the fractal planet of Plate 6. In the generalization to a sphere, V_B becomes a function of r, the position on the unit
sphere. Local variations of V_B mimic the usual Brownian motion but large scales reflect the periodicity of the
sphere. This corresponds to the addition of random hemispheres whose pole positions are uniformly distributed on
the sphere surface,
V_B(r) = Σ_{i=−∞}^{∞} AᵢP(r − rᵢ).    (1.13)

Here, P(r − rᵢ) = 1 for r · rᵢ > 0 and zero otherwise. The evolution of a Brownian sphere under this summation
process is shown in Plates 1 to 6.
A flat Brownian relief V_B(x, y) can similarly be constructed from the addition of randomly placed and
randomly oriented faults in the plane. The profile of such a fault corresponds to the step function of random
amplitude Aᵢ. Such a surface will have a fractal dimension D = 2.5.
The independent addition of such step function faults is extremely expensive computationally. Each fault
requires additions to roughly half of the surface elements. Moreover, the resulting fractal always corresponds to a
fBm with H = ½. Nevertheless, the procedure is straightforward and represents, historically, the first method used
for producing fractal landscapes [68].
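A one-dimensional sketch of Eqn. (1.12) follows (an illustration, not the historical code of [68]); note that each cut touches about half of the samples, which is exactly why the method is computationally expensive:

```python
import random

def brownian_by_cuts(n_points=512, n_cuts=2000, seed=1):
    """Approximate Brownian motion V_B(t) on [0,1) as a sum of independent
    step-function cuts (Eqn. 1.12): each cut adds a Gaussian amplitude A_i
    to every sample with t > t_i."""
    rng = random.Random(seed)
    V = [0.0] * n_points
    for _ in range(n_cuts):
        t_i = rng.random()                 # random (Poisson) cut position
        A_i = rng.gauss(0.0, 1.0)          # Gaussian jump amplitude
        for k in range(int(t_i * n_points), n_points):
            V[k] += A_i                    # P(t - t_i) = 1 for t > t_i
    return V
```

With enough cuts the trace converges toward an H = ½ fBm; the same summation over random hemispheres gives the Brownian sphere of Eqn. (1.13).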
By changing the profile of the cut from the step function it is possible to use the method of independent cuts to
generate fractal surfaces and planets with different H and D. Thus, the profile

P(t) = t^(H−1/2) for t ≥ 0 and P(t) = −|t|^(H−1/2) for t < 0

can be used to generate a fBm with H different from ½.
Another straightforward algorithm for fBm is to directly construct a random function with the desired spectral
density 1/f^β. β is related to the desired D or H by Eqn. (1.10). For all practical purposes, the output of a pseudo-
random number generator produces a “white noise” W(t) with constant spectral density S_W(f). Filtering W(t) with
a transfer function T(f) produces an output, V(t), whose spectral density

S_V(f) ∝ |T(f)|² S_W(f) ∝ |T(f)|².
For computer simulation, a continuous function of time, V(t), is approximated by a finite sequence of N values, V_n,
defined at discrete times t_n = nΔt, where n runs from 0 to N − 1 and Δt is the time between successive values.
The discrete Fourier transform (DFT) defines V_n in terms of the complex Fourier coefficients, v_m, of the series:

V_n = Σ_{m=0}^{N−1} v_m e^(2πi m n / N),    (1.14)

where the coefficient v_m corresponds to the frequency f_m = m/(NΔt) for m = 0 to N − 1.
For a fBm sequence with S_V(f) varying as 1/f^β the coefficients must satisfy

⟨|v_m|²⟩ ∝ 1/f_m^β ∝ 1/m^β.    (1.15)
The relation between β and D and H is given by Eqn. (1.10) with E = 1. The v_m may be obtained by multiplying the
Fourier coefficients of a white noise sequence by 1/f_m^(β/2) or by directly choosing complex random variables with mean
square amplitude given by Eqn. (1.15) and random phases. One possible variation sets |v_m| = 1/m^(β/2) exactly
and only randomizes the phases. With an FFT (Fast Fourier
Transform) algorithm the evaluation of Eqn. (1.14) requires of order N log N operations [17] to produce a series of
N points. This Fourier filtering extends directly to functions of 2 and 3 variables,
where the scaling of the spatial frequencies determines D, H, and β. In Section 1.6 it is shown that a fractional
Brownian surface (E = 2) corresponds
to

β = 1 + 2H = 7 − 2D

and

⟨|v_{k_x,k_y}|²⟩ ∝ 1/(k_x² + k_y²)^(H+1) = 1/(k_x² + k_y²)^(4−D)    (1.17)
with 2 < D < 3. Although neither the random cuts nor the FFT filtering represents a recursive method of generating
random fractals, the process of adding more and more frequencies is similar to the construction of the von Koch
snowflake. Figure 1.13 shows the increasing complexity in the evolution of an FFT generated fBm surface with H
= 0.8 and D = 2.2 as higher frequencies are included.
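A one-dimensional version of this spectral synthesis can be sketched as follows (a slow O(N²) inverse DFT is used for clarity; a real implementation would substitute an FFT):

```python
import cmath, math, random

def fbm_spectral(N=256, H=0.8, seed=2):
    """1-D fBm by Fourier filtering: coefficients with
    <|v_m|^2> ~ 1/m^beta, beta = 2H + 1 (Eqn. 1.15), random phases,
    and Hermitian symmetry so the inverse transform is real."""
    rng = random.Random(seed)
    beta = 2 * H + 1
    v = [0j] * N
    for m in range(1, N // 2):
        amplitude = m ** (-beta / 2)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        v[m] = amplitude * cmath.exp(1j * phase)
        v[N - m] = v[m].conjugate()        # v_{-m} = conj(v_m) -> real V_n
    # inverse DFT, Eqn. (1.14)
    return [sum(v[m] * cmath.exp(2j * math.pi * m * n / N)
                for m in range(N)).real
            for n in range(N)]
```

For E = 2 the same recipe applies with the radial scaling of Eqn. (1.17) replacing 1/m^β.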
Similarly, for a fractional Brownian volume (E = 3)

⟨|v_{k_x,k_y,k_z}|²⟩ ∝ 1/(k_x² + k_y² + k_z²)^(H+3/2) = 1/(k_x² + k_y² + k_z²)^((11−2D)/2)    (1.18)
with 3 < D < 4. This method was used to produce the fractal flakes and clouds of Figure 1.6 and on the back cover.
Random midpoint displacement is a recursive generating technique that was applied to normal Brownian motion as
early as the 1920s by N. Wiener. It is a natural extension of the von Koch construction and figures in many of the
fractal samples described by Mandelbrot [68]. Its use in computer graphics has been widely popularized by
Fournier, Fussell, and Carpenter [19,40]. For H ≠ ½ or E > 1, it sacrifices mathematical purity for execution speed in
its approximation to fBm.
Consider the approximation to a simple fBm, V_H(t), where the mean square increment for points separated by
a time Δt = 1 is σ². Then, from Eqn. (1.5), for points separated by a time t,

⟨|V_H(t) − V_H(0)|²⟩ = σ² |t|^(2H).

If, for convenience, V_H(0) = 0, then the points at t = ±1 are chosen as samples of a Gaussian random variable with
variance σ² to satisfy Eqn. (1.5). Given these initial conditions, one defines the midpoints at t = ±½ as

V_H(±½) = ½[V_H(0) + V_H(±1)] + Δ₁,
Fig. 1.13: Increasing detail in an FFT generated fBm surface for H = 0.9, β = 2.8, and D = 2.1 as higher frequency components are added.
a. 5 components, b. 25 components, c. all 1024 components.
where Δ₁ is a Gaussian random variable with zero mean and variance Δ₁² that is determined by the condition that the
increments from 0 to ±½ must satisfy Eqn. (1.5),

Δ₁² = σ²/2^(2H) [1 − 2^(2H−2)].

The first term is the desired total variance from Eqn. (1.5) while the second term represents the fluctuations already
present in the interpolated midpoint value due to the previous stage. As H → 1, Δ₁² → 0, D → 1, no new fluctuations are added at smaller stages, and
V_H(t) remains a collection of smooth line segments connecting the starting points. At stage n of the recursion,
Δ_n² = σ²/(2^n)^(2H) [1 − 2^(2H−2)].    (1.20)

As expected for a fBm, at a length scale r = 1/2^n one adds randomness with mean square variations varying as r^(2H).
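In code the recursion reads as below; a sketch for E = 1, with the per-stage variance following Eqn. (1.20):

```python
import math, random

def midpoint_displacement(max_level=8, sigma=1.0, H=0.8, seed=3):
    """Random midpoint displacement on [0,1] with 2^max_level intervals.
    Stage n displaces each new midpoint by a Gaussian of variance
    Delta_n^2 = sigma^2 / 2^(2nH) * (1 - 2^(2H-2))   (Eqn. 1.20)."""
    rng = random.Random(seed)
    N = 2 ** max_level
    V = [0.0] * (N + 1)
    V[0], V[N] = 0.0, rng.gauss(0.0, sigma)   # endpoints satisfy Eqn. (1.5)
    step = N
    for n in range(1, max_level + 1):
        delta = math.sqrt(sigma ** 2 / 2 ** (2 * n * H) * (1 - 2 ** (2 * H - 2)))
        half = step // 2
        for i in range(half, N, step):        # the new midpoints of stage n
            V[i] = 0.5 * (V[i - half] + V[i + half]) + rng.gauss(0.0, delta)
        step = half
    return V
```

Each point, once set, is never touched again, which is precisely the source of the non-stationarity discussed next.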
Although this process does produce a fractal, the result is, unfortunately, not stationary [68,69] for all H. Once
a given point at tᵢ has been determined, its value remains unchanged in all later stages. All additional stages change
t < tᵢ independently from t > tᵢ, and the correlations required of fBm with H ≠ ½ are not present. By construction the
increments across the subdivision intervals of each stage have the variance required by Eqn. (1.5).
For a stationary process, the same should be true of all increments with Δt = 1. However, increments that span
points fixed at an earlier stage lack the required correlations, and the resulting artifacts are particularly visible for
surfaces. Figure 1.14 shows a zenith view of such a midpoint
displacement surface with H = 0.8. In the generation of a midpoint displacement surface on a square grid each step
proceeds in two stages. First the midpoint of each of the squares is determined from its 4 corners and shifted by a
random element Δ. This determines a new square lattice at 45 degrees to the original with lattice size 1/√2 times the
original. In the second stage, the midpoints of the new lattice receive a random contribution smaller in mean square
by the factor (1/√2)^(2H) than the first stage. This produces a new square lattice with a scale ½ the original. The
traces of early stages are readily visible in Figure 1.14. These artifacts, which occur at all stages, cannot be eliminated by local smoothing.
In spite of its mathematical failings, the speed of midpoint displacement, and its ability to add “detail” to an
existing shape make it a useful fractal algorithm for some applications. To generate N points requires only order N
operations. To extend the sequence by just one point at the smallest scale beyond its original endpoints, however,
requires an additional log2 N operations.
In many respects the non-stationary artifacts of midpoint displacement are similar to the staircase effect of aliased
raster display lines. With midpoint displacement, once determined, the value at a point remains fixed. At each stage
only half of the points are determined more accurately. If one imagines the process of magnifying an actual object,
as the spatial resolution increases all points are determined more accurately. In terms of the Nyquist sampling
theorem, to approximate N real points requires N/2 complex frequencies or N/2 sine and cosine components. When the
resolution is doubled to 2N points, the additional high frequency components alter all of the original values.
Midpoint displacement only adds the additional sine (or cosine) components, not both. Conversely, in reducing the
resolution of an image an anti-aliasing procedure will average over neighboring pixels. Simply keeping the same
value as the center (the equivalent of midpoint displacement) produces the objectionable staircase edge.
I call the process of adding randomness to all points at each stage of a recursive subdivision process successive
random additions. This enhancement reduces many of the visible artifacts of midpoint displacement and the
generation still requires only order N operations to generate N points. The computation of the midpoints is the same
as midpoint displacement. The only difference is in the number of random elements. For a sequence of N elements,
N/2 points in the final stage had only one random addition, N/4 points in the previous stage had 2 random additions, N/8
had 3 and so on. The series converges to 2N random additions for the N elements.
The zoom sequence of Figure 1.1 was produced with successive random additions. The artifacts of the square
lattice are not as visible as with the midpoint displacement surface of Figure 1.14. For comparison, Figure 1.15 also
shows the construction of a fBm surface with H = 0.8 by successive random additions.
With successive random additions, at each stage all points are treated equivalently. This has the additional
advantage that the resolution at the next stage can change by any factor r < 1. For midpoint displacement r must be
½. Thus, given a sample of N_n points at stage n with resolution l, stage n + 1 with resolution rl is determined by first
interpolating the N_(n+1) = N_n/r new points from the old values. In practice, this can be accomplished using either linear or
spline interpolation. A random element Δ_n is then added to all of the points. At stage n with scaling ratio r < 1, the
Δ_n will have a variance
Δ_n² ∝ (r^n)^(2H).    (1.21)

When 1/r is an integer, the generation of a sequence of N points requires order C(r)·N operations. The coefficient C(r)
varies as Σ_n r^n = 1/(1 − r) over the number of stages.
The fractal dimension D of the generated objects is determined only by H; r can be varied independently.
Variations in r change the lacunarity of the fractal. Figure 1.15 shows two samples of successive random addition
surfaces (both with H = 0.8, D = 2.2) generated with differing r. The zoom sequence in Figure 1.1 and Figure
1.15(a) were generated with 1/r = 2. As 1/r increases to 8 in Figure 1.15(b) the few characteristic resolutions at which
randomness has been added become visible and the lacunarity increases. If 1/r approaches 1 the surfaces approach
the characteristics of the FFT samples. Empirically, however, a value of 1/r much smaller than 2 produces little
observable change. The free choice of a value for r is an important addition for filmmaking. The addition of
irregularities to a surface can be accomplished continuously as the resolution is slowly increased from frame to
frame.
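A one-dimensional sketch of successive random additions with r = ½ (linear interpolation for the new midpoints, then a random element added to every point, with variance following Eqn. (1.21)):

```python
import random

def successive_random_additions(levels=6, H=0.8, seed=4):
    """Successive random additions, E = 1, scaling ratio r = 1/2:
    each stage doubles the resolution by linear interpolation and then
    perturbs ALL points with variance ~ (r^n)^(2H) (Eqn. 1.21)."""
    rng = random.Random(seed)
    r = 0.5
    V = [0.0, 0.0]                       # stage 0: the two endpoints
    for n in range(1, levels + 1):
        new = []
        for a, b in zip(V, V[1:]):       # interpolate the new midpoints
            new.extend([a, 0.5 * (a + b)])
        new.append(V[-1])
        delta = r ** (n * H)             # std dev, from Delta_n^2 ~ r^(2nH)
        V = [x + rng.gauss(0.0, delta) for x in new]
    return V
```

For a general r < 1 the interpolation step would resample onto the finer grid instead of simply doubling it.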
Each stage in a midpoint displacement process increases the highest spatial frequency by a
factor 1/r = 2. Each stage of successive random additions increases this frequency by 1/r > 1.
Thus, both are related to Mandelbrot's generalization [11,68] of the Weierstrass non-
differentiable function, V_MW(t). Whereas the
Fig. 1.14: Zenith and tilted views of a midpoint displacement surface on a square lattice for H = 0.8
and D = 2.2. The illumination was along one lattice direction. The non-stationary character is visible as
the prominent shadows. This may be compared with Plates 11 to 13 and Figure 1.15 which were also
generated on a square lattice.
Fig. 1.15: Zenith and tilted views of successive random addition surfaces on a square lattice for H = 0.8 and D = 2.2.
Fourier series of Eqn. (1.14) involves a linear progression of frequencies, the Weierstrass function involves a
geometric progression.
Jonathan Swift in his 1726 classic describes how Capt. Gulliver, captured by pirates, set adrift, and marooned on a
small island, is eventually rescued by a chain lowered from the large floating island of Laputa. His sense of relief,
however, turns quickly to dismay. His rescuers, the Laputans, have but two interests: music and mathematics. So
exclusive and intense is their speculation that they have effectively withdrawn from the normal everyday world and
must employ servants, known as flappers, to strike them upon the ear or mouth (rather like a primitive form of I/O
interrupt) whenever normal social intercourse is required. Their obsession, moreover, extends to all spheres. For his
first meal Gulliver is served “a shoulder of mutton, cut into an equilateral triangle; a piece of beef into rhomboids;
and a pudding into a cycloid”. Faced with such an unhealthy diet of Euclidean shapes and inhabitants whose
mathematical obsessions did not extend to the world of nature Gulliver so loved, he was soon plotting to leave.
In this short passage, Swift was echoing the sentiments of countless math students who, like Gulliver, when
faced with a diet of Euclidean shapes failed to see their relevance and turned to thoughts of escape. With the advent
of fractal geometry, however, nutrition has greatly improved. Broccoli and other vegetables are now “acceptable”
shapes and a modern-day Gulliver might never leave the island of Laputa. As we have attempted to demonstrate in
this article, an obsession with mathematics need not turn away from nature. The Laputans are now free to
contemplate the beauty of the M-set or the geometry of their own island and its surroundings (back cover).
1.6 Mathematical details and formalism
This section provides many of the formal mathematical details that were omitted from the earlier sections.
A fractional Brownian motion, V_H(t), is a single valued function of one variable, t (usually time). Its increments
V_H(t₂) − V_H(t₁) have a Gaussian distribution with variance

⟨|V_H(t₂) − V_H(t₁)|²⟩ ∝ |t₂ − t₁|^(2H),    (1.24)

where the brackets ⟨ and ⟩ denote ensemble averages over many samples of V_H(t) and the parameter H has a value
0 < H < 1. Such a function is both stationary and isotropic. Its mean square increments depend only on the time
difference t₂ − t₁ and all t's are statistically equivalent. The special value H = ½ gives the familiar Brownian motion
with

ΔV² ∝ Δt.
As with the usual Brownian motion, although V_H(t) is continuous, it is nowhere differentiable. Nevertheless,
many constructs have been developed (and are relevant to the problem of light scattering from fractals) to give
meaning to the “derivative of fractional Brownian motion” as the fractional Gaussian noises [63,68] of Figure 1.9. Such
constructs are usually based on averages of V_H(t) over decreasing scales. The derivative of normal Brownian
motion, H = ½, corresponds to uncorrelated Gaussian white noise (Figure 1.9a) and Brownian motion is said to
have independent increments. Formally, this implies that for any three times such that t₁ < t < t₂, V_H(t) − V_H(t₁)
is statistically independent of V_H(t₂) − V_H(t) for H = ½. For H > ½ there is a positive correlation both for the
increments of V_H(t) and its derivative fractional Gaussian noise. For H < ½ the increments are negatively correlated.
Such correlations extend to arbitrarily long time scales and have a large effect on the visual appearance of the fBm
traces as shown in Figure 1.12.
V_H(t) shows a statistical scaling behavior. If the time scale t is changed by the factor r, then the increments
ΔV_H change by a factor r^H. Formally,

⟨ΔV_H²(rΔt)⟩ = r^(2H) ⟨ΔV_H²(Δt)⟩.

Unlike statistically self-similar curves (such as the coastlines in Figure 1.1), a V_H(t) trace requires different scaling
factors in the two coordinates (r for t but r^H for V_H), reflecting the special status of the t coordinate. Each t can
correspond to only one value of V_H but any specific V_H may occur at multiple t's. Such non-uniform scaling is
known as self-affinity rather than self-similarity.
The distinction between similarity and affinity is important, but has only recently received wide attention [72]. By
way of summary, a self-similar object is composed of N copies of itself (with possible translations and rotations)
each of which is scaled down by the ratio r in all E coordinates from the whole. More formally, consider a set S of
points at positions

x = (x₁, ..., x_E)

in Euclidean space of dimension E. Under a similarity transform with real scaling ratio 0 < r < 1, the set S becomes
rS with points at

rx = (rx₁, ..., rx_E).
A bounded set S is self-similar when S is the union of N distinct (non-overlapping) subsets each of which is
congruent to rS. Congruent means identical under translations and rotations. The fractal or similarity dimension of S
is then given by

1 = N r^D or D = log N / log(1/r).    (1.26)
This relation leads to several important methods of estimating D for a given set S as presented in Figure 1.16 and
described below.
Fig. 1.16: Basic techniques for estimating fractal dimension from experimental
images.
The set S is also self-similar if each of the N subsets is scaled down from the whole by a different similarity
ratio rₙ. In this case, D is given implicitly by

Σ_{n=1}^{N} rₙ^D = 1,    (1.27)

which reduces to the familiar result in Eqn. (1.26) when all of the rₙ are equal.
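Both relations are easy to evaluate numerically; a small sketch (the bisection solver for Eqn. (1.27) is an illustrative choice):

```python
import math

def similarity_dimension(N, r):
    """Equal ratios: 1 = N r^D, so D = log N / log(1/r) (Eqn. 1.26)."""
    return math.log(N) / math.log(1.0 / r)

def implicit_dimension(ratios, lo=0.0, hi=10.0):
    """Unequal ratios: solve sum(r_n^D) = 1 (Eqn. 1.27) by bisection;
    the sum decreases monotonically in D since every r_n < 1."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sum(r ** mid for r in ratios) > 1.0:
            lo = mid                 # sum too large -> D must grow
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Triadic Koch curve: N = 4 copies scaled by r = 1/3, D = log 4 / log 3.
assert abs(similarity_dimension(4, 1.0 / 3.0) - 1.26186) < 1e-4
# Eqn. (1.27) reduces to Eqn. (1.26) when all ratios are equal.
assert abs(implicit_dimension([1.0 / 3.0] * 4) - 1.26186) < 1e-4
```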
The set S is statistically self-similar if it is composed of N distinct subsets each of which is scaled down by the
ratio r from the original and is identical in all statistical respects to rS. The similarity dimension is again given by
Eqn. (1.26). In practice, it is impossible to verify that all moments of the distributions are identical, and claims of
statistical self-similarity are usually based on only a few moments. Moreover, a sample of a random set (such as a
coastline) is often statistically self-similar for all scaling ratios r. Its fractal dimension is usually estimated from the
dependence of box coverings Nbox(L) or mass M(L) on varying L as in Eqs. (1.28) and (1.29).
For topologically one-dimensional fractal “curves”, the apparent “length” varies with the measuring ruler size. If
the entire self-similar curve is of maximum size L_max, then, on a smaller scale L = rL_max with r < 1, the curve
consists of N = (L_max/L)^D segments of length L. Thus, for D > 1

Length = L · N = L · (L_max/L)^D ∝ 1/L^(D−1).    (1.28)
The fractal dimension D also characterizes the covering of the set S by E-dimensional “boxes” of linear size L. If
the entire S is contained within one box of size L_max, then each of the N = (L_max/L)^D subsets will fall within one box of size L
= rL_max. Thus, the number of boxes of size L, N_box(L), needed to cover S is given by

N_box(L) = (L_max/L)^D or N_box(L) ∝ 1/L^D.    (1.29)
This definition of box dimension is one of the most useful methods for estimating the fractal dimension of a given
set. The box dimension can be conveniently estimated by dividing the E-dimensional Euclidean space containing
the set into a grid of boxes of size L^E and counting the number of such boxes N_box(L) that are non-empty. It is useful
to examine as large a range of L as possible and to average over various origins for the boxes.
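A minimal box-counting estimator along these lines, for a set of 2-D points (the averaging over grid origins is omitted for brevity):

```python
import math

def box_dimension(points, sizes):
    """Estimate D from N_box(L) ~ 1/L^D (Eqn. 1.29): count non-empty
    grid boxes of size L, then take the log-log slope between the
    largest and smallest box sizes."""
    counts = []
    for L in sizes:
        boxes = {(int(x / L), int(y / L)) for x, y in points}
        counts.append(len(boxes))
    return ((math.log(counts[-1]) - math.log(counts[0]))
            / (math.log(1.0 / sizes[-1]) - math.log(1.0 / sizes[0])))

# sanity check: a densely sampled filled square should give D close to 2
pts = [(i / 64.0, j / 64.0) for i in range(64) for j in range(64)]
assert abs(box_dimension(pts, [0.5, 0.25, 0.125]) - 2.0) < 0.1
```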
One can also estimate the “volume” or “mass” of the set S by a covering with boxes of linear size L. If one
considers only distances of order L about a given point in S, one finds a single box of size L with E-dimensional
volume L^E. If the distance scale about the same point is increased to L_max = L/r, one now finds a total of

(L_max/L)^D

boxes of mass L^E covering the set. Thus, the mass within a distance L of some point in S,

M(L) ∝ L^D.    (1.30)
The fractal dimension D, thus, also corresponds to the commonly used mass dimension in physics. Mass dimension
also has a strong connection with intuitive notions of dimension. The amount of material within a distance L of a
point in a one-dimensional object increases as L¹. For an E-dimensional object it varies as L^E. The mass dimension
is another extremely useful method for estimating the fractal dimension of a given object.
For a distribution of objects in the plane, the fractal dimension of their coastlines can also be estimated from
the perimeter vs area scaling of the islands. Eqn. (1.28) gives the Length or perimeter of a fractal object of size
L_max when “measured” with a ruler of size L. Consider a population of such islands with the same coastline D and
all measured with the same ruler L. For a given island, perimeter P ∝ L_max^D from Eqn. (1.28). Provided the
coastline D < 2, each island will have a well-defined area A ∝ L_max² and

P ∝ A^(D/2).    (1.31)
A study of P vs A scaling is a useful method of estimating D for a population of fractal objects such as percolation
clusters [104].
1.6.4 Self-affinity

Under an affine transform, on the other hand, each of the E coordinates of x may be scaled by a
different ratio (r₁, ..., r_E). Thus, the set S is transformed to r(S) with points at r(x) =
(r₁x₁, ..., r_E x_E). A bounded set S is self-affine when S is the union of N distinct (non-
overlapping) subsets each of which is congruent to r(S). Similarly, S is statistically self-
affine when S is the union of N distinct subsets each of which is congruent in distribution
to r(S). This is the case for the statistically self-affine fBm, V_H(t), where the t- and V-
coordinates must be scaled differently. The fractal dimension D, however, is not as easily
defined as with self-similarity.

1.6.5 The relation of D to H for self-affine fractional Brownian motion
The assignment of a fractal dimension D (using the self-similar strategies presented above) to a self-affine set can
produce ambiguous results [72]. The difficulties can be illustrated with the case of fractional Brownian motion V_H(t).
Consider, for convenience, a trace of V_H(t) from Figure 1.12 covering a time span Δt = 1 and a vertical range
ΔV_H = 1. V_H(t) is statistically self-affine when t is scaled by r and V_H is scaled by r^H. Suppose the time span is
divided into N equal intervals each with Δt = 1/N. Each of these intervals will contain one portion of V_H(t) with
vertical range ΔV_H = (1/N)^H.
Since 0 < H < 1 each of these new sections will have a large vertical to horizontal size ratio and the occupied
portion of each interval will be covered by

ΔV_H/Δt = (1/N)^H / (1/N) = N^(1−H)

square boxes of linear scale L = 1/N. In terms of box dimension, as t is scaled down by a ratio r = 1/N the number of
square boxes covering the trace goes from 1 to N(L) = number of intervals times boxes per interval, or

N(L) = N · N^(1−H) = N^(2−H), giving D = 2 − H.

If, instead, the “length” of the trace is measured with rulers of size ℓ, where ℓ² = Δt² + ΔV_H² ∝ Δt² + Δt^(2H),
then N = 1/Δt rulers are needed. On small scales with Δt ≪ 1, the second term dominates, ℓ ∝ Δt^H, so N(ℓ) ∝ ℓ^(−1/H) and D = 1/H. On the other hand, on large scales with Δt ≫ 1, ℓ varies linearly with Δt and D = 1.
Thus, the same V_H(t) trace can have an apparent self-similar dimension D of either 1, 1/H, or 2 − H, depending on the
measurement technique and arbitrary choice of length scale.
Consider a particle undergoing a fractional Brownian motion or random walk in E dimensions where each of the
coordinates is tracing out an independent fBm in time. Over an interval Δt each coordinate will vary by typically
ΔL = Δt^H. If overlap can be neglected, the “mass” of the trail

M ∝ Δt ∝ ΔL^(1/H).

In comparison with Eqn. (1.30), the trail of fBm is a self-similar curve with fractal dimension

D = 1/H    (1.33)

provided 1/H < E. Normal Brownian motion with H = 0.5 has D = 2. 1/H is known as the latent fractal dimension [71,73]
of a trail of fBm. When 1/H > E overlap cannot be neglected and the actual D = E. When 1/H = E the trail is critical and
most quantities will have important logarithmic corrections to the usual fractal power laws. Note that although the
trail of a fBm in E dimensions is self-similar, each of the E coordinates has a self-affine trace vs time.
The formalism of a fBm vs time can be easily extended to provide a self-affine fractional Brownian function, V_H, of
x = (x₁, ..., x_E) in E Euclidean dimensions. V_H satisfies the general scaling relation

⟨|V_H(x₂) − V_H(x₁)|²⟩ ∝ |x₂ − x₁|^(2H).
Random functions in time V(t) are often characterized [42,90] by their spectral densities S_V(f). S_V(f) gives
information about the time correlations of V(t). When S_V(f) increases steeply at low f, V(t) varies more slowly. If
one defines V(f, T) as the Fourier transform of a specific sample of V(t) for 0 < t < T,

V(f, T) = (1/T) ∫_0^T V(t) e^(2πift) dt,

then

S_V(f) ∝ T |V(f, T)|² as T → ∞.    (1.36)
An alternate characterization of the time correlations of V(t) is given by the 2 point autocorrelation function

G_V(τ) = ⟨V(t)V(t + τ)⟩ − ⟨V(t)⟩².    (1.37)

G_V(τ) provides a measure of how the fluctuations at two times separated by τ are related. G_V(τ) and S_V(f) are not
independent. In many cases they are related by the Wiener-Khintchine relation [42,90,91]

G_V(τ) = ∫_0^∞ S_V(f) cos(2πfτ) df.    (1.38)

For a Gaussian white noise S_V(f) = constant, and G_V(τ) ∝ δ(τ) is completely uncorrelated. For certain simple
power laws for S_V(f), G_V(τ) can be calculated exactly. Roughly speaking, S_V(f) ∝ 1/f^β corresponds to G_V(τ) ∝ τ^(β−1), a fBm with 2H = β − 1 from Eqs. (1.24) and (1.40).
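The correspondence between β and the log-log slope of the spectral density can be checked numerically; the sketch below estimates β from a periodogram (a direct DFT for clarity, and the frequency bands are illustrative choices tuned to N = 256):

```python
import cmath, math, random

def spectral_exponent(V):
    """Estimate beta assuming S_V(f_m) ~ 1/f_m^beta: compute the
    periodogram |v_m|^2 with a direct DFT and take the log-log slope
    between a low-frequency and a high-frequency band."""
    N = len(V)
    def power(m):
        v = sum(V[n] * cmath.exp(-2j * math.pi * m * n / N) for n in range(N))
        return abs(v) ** 2
    lo = sum(power(m) for m in range(2, 6)) / 4.0     # band around m ~ 3.5
    hi = sum(power(m) for m in range(32, 96)) / 64.0  # band around m ~ 63.5
    return (math.log(lo) - math.log(hi)) / (math.log(63.5) - math.log(3.5))

# synthesize a signal whose amplitude spectrum is exactly |v_m| ~ 1/m,
# i.e. S_V ~ 1/f^2 (beta = 2, the Brownian case); phases are irrelevant
rng = random.Random(5)
N = 256
phase = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N // 2)]
V = [sum(math.cos(2.0 * math.pi * m * n / N + phase[m]) / m
         for m in range(1, N // 2)) for n in range(N)]
assert abs(spectral_exponent(V) - 2.0) < 0.1
```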
Thus, the statistically self-affine fractional Brownian function, V_H(x), with x in an E-dimensional Euclidean space,
has a fractal dimension D and spectral density S_V(f) ∝ 1/f^β for the fluctuations along a straight line path in any
direction in x-space, with

β = 2H + 1 = 2E + 3 − 2D.    (1.41)

This result agrees with other “extensions” of the concepts of spectral density and Wiener-Khintchine relation to
non-stationary noises where some moments may be undefined.
Although the formal definition of fBm restricts H to the range 0 < H < 1, it is often useful to consider
integration and an appropriate definition of “derivative” as extending the range of H. Thus, integration of a fBm
produces a new fBm with H increased by 1, while “differentiation” reduces H by 1. As H → 1, the derivative of
fBm looks like a fBm with H → 0. In terms of spectral density, if V(t) has S_V(f) ∝ 1/f^β then its derivative has spectral
density ∝ 1/f^(β−2). In terms of Eqn. (1.41), differentiation decreases β by 2
and decreases H by 1.
Numerical simulation or experimental image analysis often produces a geometrical object defined by a set S of
points at positions x = (x₁, ..., x_E) in an E-dimensional Euclidean space. Lacking other information, all of the points
may be assumed to be equivalent and all points are equally probable origins for analysis. The spatial arrangement
of the points determines P(m, L), the probability that there are m points within an E-cube (or sphere) of
size L centered about an arbitrary point in S. P(m, L) is normalized,

Σ_{m=1}^{N} P(m, L) = 1 for all L.    (1.42)

P(m, L) is directly related to other probability measures as used by Mandelbrot [65,68,71], Hentschel and Procaccia
[52], and others [85]. This particular definition of P(m, L) is, however, particularly efficient to implement on a
computer.
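For example, a direct O(N²) computation of P(m, L) for a set of 2-D points, using every point as an origin as described above (a sketch; names are illustrative):

```python
from collections import Counter

def P_m_L(points, L):
    """Empirical P(m, L): the fraction of points of S whose centered
    box of size L contains exactly m points of S (the center counts)."""
    counts = Counter()
    for cx, cy in points:
        m = sum(1 for x, y in points
                if abs(x - cx) <= L / 2 and abs(y - cy) <= L / 2)
        counts[m] += 1
    n = len(points)
    return {m: c / n for m, c in counts.items()}

pts = [(i / 10.0, (3 * i % 10) / 10.0) for i in range(10)]
P = P_m_L(pts, 0.3)
assert abs(sum(P.values()) - 1.0) < 1e-12   # normalization, Eqn. (1.42)
assert min(P) >= 1                          # the center is always counted
```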
The usual quantities of interest are derived from the moments of P(m, L). The mass dimension follows from the first moment,

M(L) = Σ_{m=1}^{N} m P(m, L) ∝ L^D.    (1.43)

More generally, one considers the moments

M_q(L) = Σ_{m=1}^{N} m^q P(m, L)    (1.46)

for q ≠ 0 (the q = 0 case is given by Eqn. (1.45) above) and one can then estimate the generalized dimensions D_q from the logarithmic
derivatives

∂ log M_q(L) / ∂ log L = q D_q    (1.47)

and

D_q = (1/q) ∂ log M_q(L) / ∂ log L.    (1.48)
A double logarithmic plot of M_q(L) vs L is an essential tool in verifying whether a fractal interpretation is valid
for S. One expects a fractal to have power-law behavior of M_q(L) over a wide range of L.
For a uniform fractal (fractal set), as the number of points examined N → ∞ the distribution is expected to
take the scaling form P(m, L) → f(m/L^D) and all moments give the same D. For a non-uniform fractal (a fractal
measure such as a Poincaré map) the moments may take different values. Shapes and measures requiring more than
one D are known as multi-fractals. They have received wide attention in studying the properties of dynamical
systems (Poincaré maps) and growth probability (or electric field strength) in aggregation problems. Different
methods of characterizing the distribution of D's have been used to compare theory and experiment.
Figure 1.17 demonstrates the use of Eqs. (1.46) and (1.47) in estimating D_q. Figure 1.17(a) shows the measured
dependence of ⟨M_q(L)⟩^(1/q) on L for several q for the exactly self-similar triadic Koch curve. In this case, as
expected, all of the moments show the same dependence on L and the estimates of D_q from the logarithmic slopes
agree well with the exact value of log 4 / log 3 = 1.2618.... Figure 1.17(b) shows the same estimates for the computer generated
coastline of Figure 1.1. Here slight differences between the various moments are visible with D_q increasing slightly
with q.
1.6.10 Lacunarity
It is obvious from the above discussion that the fractal dimension D characterizes only part of the information in
the distribution P(m, L). Different fractal sets may share the same D but have different appearances or textures corresponding to different P(m, L). As an initial step toward quantifying texture,
Mandelbrot ([68], Chapter 34) has introduced the parameter lacunarity, Λ (lacuna is Latin for gap). Although the qualitative visual effect of changing lacunarity at fixed D is quite striking, as shown in Figure 1.15, to date there have been no quantitative measurements of the lacunarity of various random fractals. Mandelbrot [68] offers several alternative definitions. One derives from the width of the distribution P(m, L) at fixed L. Given $M_q(L)$ as defined by Eqn. (1.46),

$$\Lambda(L) = \frac{\langle M_2(L)\rangle - \langle M(L)\rangle^{2}}{\langle M(L)\rangle^{2}}. \qquad (1.49)$$

When P(m, L) takes the scaling form $f(m/L^D)$, Λ is just the relative mean square width of the distribution f.
Figure 1.15 shows fractal surfaces of the same D with different lacunarity.
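The width-based definition (1.49) can be sketched directly in Python (the function name and the L-cube box masses are our own illustrative choices): it is the variance of the box masses m divided by their squared mean.

```python
import random

def lacunarity(points, L):
    """Lambda(L) of Eqn. (1.49): relative mean square width of the
    distribution of masses m found in L-cubes around the points of S."""
    half = L / 2.0
    masses = [sum(1 for x, y in points
                  if abs(x - cx) <= half and abs(y - cy) <= half)
              for cx, cy in points]
    n = len(masses)
    m1 = sum(masses) / n                   # <M(L)>
    m2 = sum(m * m for m in masses) / n    # <M_2(L)>
    return (m2 - m1 * m1) / (m1 * m1)

random.seed(2)
S = [(random.random(), random.random()) for _ in range(300)]
# Lambda is a variance divided by a squared mean, hence never negative
assert lacunarity(S, 0.3) >= 0.0
```

A set whose box masses are nearly constant at scale L has small Λ(L); large gaps produce a wide mass distribution and hence a large Λ(L).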
The equivalence of Brownian motion to a summation of step functions is a special case of Campbell's theorem (1909). Consider a collection of independent pulses occurring at random times $t_i$, corresponding to an average rate λ. Each pulse produces the profile $A_i P(t - t_i)$. The random function $V(t) = \sum_i A_i P(t - t_i)$ will then have a spectral density [42],

$$S_V(f) = \lambda\,\langle A^2\rangle\,|\hat{P}(f)|^{2}, \qquad (1.50)$$

where $\hat{P}(f)$ is the Fourier transform of P(t), $\hat{P}(f) = \int P(t)\,e^{2\pi i f t}\,dt$. For normal Brownian motion, each excitation by the white noise W(t) produces the step function response P(t), with $\hat{P}(f) \propto 1/f$ and $S_V(f) \propto 1/f^2$. With a suitable power-law choice of P(t), one can generate an approximation to fBm with any H. In fact, fBm can be put in a form [63,68] similar to normal Brownian motion,

$$V_H(t) = \frac{1}{\Gamma(H + \frac{1}{2})} \int_{-\infty}^{t} (t - s)^{H - \frac{1}{2}}\, W(s)\, ds. \qquad (1.51)$$

Thus, fBm can be constructed from independent pulses with response $P(t) = t^{H - \frac{1}{2}}$, corresponding to $\hat{P}(f) \propto 1/f^{H + \frac{1}{2}}$ from Eqn. (1.36) and $S_V(f) \propto 1/f^{2H+1}$, in agreement with Eqn. (1.39).
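This pulse picture can be sketched in Python (the array size, pulse count, and lack of normalization are our own illustrative choices): each independent pulse $A_i P(t - t_i)$ with the power-law response $t^{H-1/2}$ is accumulated into a sampled profile, giving an approximation to fBm.

```python
import random

def pulse_fbm(n, H, num_pulses=1000, seed=7):
    """Approximate fBm by summing independent pulses A_i * P(t - t_i)
    with the power-law response P(t) = t**(H - 0.5) for t > 0."""
    random.seed(seed)
    V = [0.0] * n
    for _ in range(num_pulses):
        A = random.gauss(0.0, 1.0)          # Gaussian amplitude
        ti = random.uniform(0.0, n)         # random arrival time
        for t in range(n):
            if t > ti:                      # pulse acts only after t_i
                V[t] += A * (t - ti) ** (H - 0.5)
    return V

V = pulse_fbm(256, 0.8)
assert len(V) == 256
```

The quality of the approximation improves with the number of pulses; a step response (exponent 0, i.e. H = 1/2) recovers ordinary Brownian motion.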
In Section 1.3 the fractional Brownian surface was defined as having the same correlations in the plane as a fractional Brownian motion of the same H has in time. Thus, $V_H(x, y)$ should have the same form for its autocorrelation function as an fBm. Eqn. (1.38) can be extended to the xy plane,

$$\langle V_H(\mathbf{r})\,V_H(\mathbf{r} + \boldsymbol{\delta})\rangle = \int_0^{\infty} S(k)\,\cos(2\pi \mathbf{k}\cdot\boldsymbol{\delta})\,2\pi k\,dk, \qquad (1.52)$$

where $\mathbf{r}$ is a point in the xy plane and $\boldsymbol{\delta}$ is a displacement in the xy plane. Since all directions in the xy plane are equivalent, S(k) depends only on $|\mathbf{k}| = k = \sqrt{k_1^2 + k_2^2}$ and $|\boldsymbol{\delta}|$. For this δ dependence to correspond to the τ dependence of $\langle V(t)V(t+\tau)\rangle$, $S(k)\cdot 2\pi k$ must vary as $1/k^{2H+1}$, and hence $S(k) \propto 1/k^{2H+2}$. The extra factor of k compensates for the 2-dimensional differential, $2\pi k\,dk$, in the integrand. For 3 coordinates, the spectral density of $V_H(x, y, z)$ satisfies $S(k) \propto 1/k^{2H+3}$, and S(k) varying in this way produces a $1/f^{2H+1}$ noise for a sample of $V_H$ along any line.
With E = 2 the corresponding fractal surface is approximated on a finite N by N grid to give $V_H(x_n, y_m)$, where $x_n = n\lambda$ and $y_m = m\lambda$. The 2-dimensional complex FFT can be used to evaluate the series

$$V_{nm} = \sum_{q,r=0}^{N-1} v_{qr}\; e^{2\pi i (k_q x_n + k_r y_m)}, \qquad (1.53)$$

where $k_q = \frac{q}{N\lambda}$ and $k_r = \frac{r}{N\lambda}$ are the spatial frequencies in the x and y directions. For an fBm surface corresponding to $\beta = 2H + 2 = 8 - 2D$, the coefficients $v_{qr}$ must satisfy

$$\langle |v_{qr}|^2 \rangle \propto \frac{1}{(q^2 + r^2)^{H+1}}. \qquad (1.54)$$

The landscapes of Plates 8 to 13 with 2 < D < 3 were generated with this process.
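A direct, unoptimized Python sketch of this recipe for small grids follows; in place of the 2D FFT it evaluates the double sum (1.53) term by term, and the grid size, phase handling, and seeding are our own illustrative choices:

```python
import cmath, math, random

def spectral_surface(N, H, seed=11):
    """N x N surface sample with Fourier coefficients drawn so that
    <|v_qr|^2> ~ 1/(q^2 + r^2)**(H + 1), as in Eqn. (1.54)."""
    random.seed(seed)
    v = {}
    for q in range(N):
        for r in range(N):
            kq = q if q <= N // 2 else q - N    # signed frequencies
            kr = r if r <= N // 2 else r - N
            k2 = kq * kq + kr * kr
            if k2 == 0:
                v[q, r] = 0.0                   # suppress the mean
                continue
            amp = random.gauss(0.0, 1.0) * k2 ** (-(H + 1) / 2.0)
            v[q, r] = amp * cmath.exp(2j * math.pi * random.random())
    # evaluate the double sum (1.53) on the grid (an FFT would do this fast)
    V = [[sum(v[q, r] * cmath.exp(2j * math.pi * (q * n + r * m) / N)
              for q in range(N) for r in range(N)).real
          for m in range(N)] for n in range(N)]
    return V

V = spectral_surface(16, 0.7)
assert len(V) == 16 and len(V[0]) == 16
```

Taking the real part discards the imaginary component instead of enforcing the conjugate symmetry $v_{qr} = \bar{v}_{N-q,N-r}$ exactly; for a production implementation one would impose that symmetry and use an $O(N^2 \log N)$ FFT.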
For a 3-dimensional fractional Brownian volume, $V_H(x, y, z)$, the differential in the autocorrelation function contains $k^2$ and the Fourier coefficients $v_{qrs}$ will satisfy

$$\langle |v_{qrs}|^2 \rangle \propto \frac{1}{(q^2 + r^2 + s^2)^{H + \frac{3}{2}}}. \qquad (1.55)$$
Chapter 2
Dietmar Saupe
2.1 Introduction
For about 200 years now mathematicians have developed the theory of smooth curves and surfaces in two,
three or higher dimensions. These are curves and surfaces that globally may have a very complicated structure but
in small neighborhoods they are just straight lines or planes. The discipline that deals with these objects is
differential geometry. It is one of the most evolved and fascinating subjects in mathematics. On the other hand
fractals feature just the opposite of smoothness. While the smooth objects do not yield any more detail on smaller
scales, a fractal possesses infinite detail at all scales, no matter how small they are. The fascination that surrounds fractals has two roots: Fractals are very well suited to simulating many natural phenomena. Stunning pictures have
already been produced, and it will not take very long until an uninitiated observer will no longer be able to tell
whether a given scene is natural or just computer simulated. The other reason is that fractals are simple to generate
on computers. In order to generate a fractal one does not have to be an expert of an involved theory such as
calculus, which is necessary for differential geometry. More importantly, the complexity of a fractal, when measured in terms of the length of the shortest computer program that can generate it, is very small.

Fractals come in two major variations. There are those that are composed of several scaled down and rotated copies of themselves,
such as the von Koch snowflake curve or the Sierpinski gaskets. Julia sets also fall into this general category, in the
sense that the whole set can be obtained by applying a nonlinear iterated map to an arbitrarily small section of it
(see Chapter 3). Thus, the structure of Julia sets is already contained in any small fraction. All these fractals can be
termed deterministic fractals. Their computer graphical generation requires use of a particular mapping or rule
which then is repeated over and over in a usually recursive scheme.
In the fractals discussed here the additional element of randomness is included, allowing simulation of natural
phenomena. Thus we call these fractals random fractals.
Given that fractals have infinite detail at all scales it follows that a complete computation of a fractal is
impossible. So approximations of fractals down to some finite precision have to suffice. The desired level of
resolution is naturally given by constraints such as the number of pixels of the available graphics display or the
amount of compute-time that one is willing to spend. The algorithms presented below fall into several categories.
(Cl) An approximation of a random fractal with some resolution is used as input and the algorithm produces an improved
approximation with resolution increased by a certain factor. This process is repeated with outputs used as new
inputs until the desired resolution is achieved. In some cases the procedure can be formulated as a recursion. The
midpoint displacement methods are examples.
(C2) Only one approximation of a random fractal is computed, namely for the final resolution. Pictures for different resolutions are possible but require that most of the computations be redone. The Fourier filtering method is an example.
(C3) In the third approach the approximation of a fractal is obtained via an iteration. After each step the approximation is
somewhat improved, but the spatial resolution does not necessarily increase by a constant factor in each iteration.
Here the allowed compute-time determines the quality of the result. The random cut method described below or the
computation of Julia sets as preimages of repellers are examples.
Almost all of the algorithms contained in this paper are based on methods and ideas discussed in Chapter 1 as
well as in [103] and ultimately in [68], where also some background information and further references can be
found. Please note also the following two disclaimers:
It is not the intent of this chapter to give a comprehensive account of the theory that leads to the algorithms.
Instead we have to direct the interested reader to Chapter 1 and the references therein. Secondly, we describe only a
few of the aspects of the rendering of fractals. It must be acknowledged that the generation of data of a fractal alone
is not sufficient to obtain a good picture. It is likely that computer programmers will spend far more time writing
software that can render e.g. a fractal landscape with some reasonable coloring and shading (if that work has not
already been done) than for the coding of fractal data generation routines. In any case the computer will surely use
much more CPU time for rendering than for data generation. This is a good reason not to be satisfied with poor
approximations of fractals due to insufficient algorithms.
In the following sections we describe a number of algorithms with increasing complexity. They are
documented in the form of pseudo code which we hope is self-explanatory. This code is included to clarify the
methods and therefore is not optimized for speed. Also, it is not complete. Some routines such as those for fast
Fourier transformation and multilinear interpolation have been omitted. If they are not available in the form of
library routines, they can easily be written using the indicated literature.
In Section 2.7 we explain two methods to graphically display fractal surfaces:
a. flat two-dimensional top view and parallel projection for three-dimensional view using colormapped elevation,
Gouraud shaded polygons, and the painter's algorithm for hidden surface elimination.
b. three-dimensional perspective representation with points as primitives, using an extended floating horizon method.
In Section 2.8 we list a number of definitions from probability theory related to random processes. Notions such as "random variables", "correlation" etc. are marked with a dagger (†) where they first appear in the main text, and they are then briefly explained in Section 2.8 for the convenience of those readers who would like to remind themselves about these terms.
2.2.1 Definitions
Brownian motion in one variable constitutes the simplest random fractal, and also it is at the heart of all the
following generalizations. Small particles of solid matter suspended in a liquid can be seen under a microscope to
move about in an irregular and erratic way. This was observed by the botanist R. Brown around 1827. The
modeling of this movement is one of the great topics of statistical mechanics (see e.g. [90]). Occasionally Brownian
motion is also referred to as "brown noise". In one dimension we obtain a random process† X(t), i.e. a function X of a real variable t (time) whose values are random variables† $X(t_1), X(t_2)$, etc.
Fig. 2.1: Properly rescaled Brownian motion. The graph in the center shows a small section of Brownian motion X(t). In the other graphs the properly rescaled random functions of the form $r^{-1/2}X(rt)$ are displayed. The scaling factor r ranges from r = 1/8 to r = 8, corresponding to expanding and contracting the original function in the time direction. Note that the visual appearance of all samples is the same.
The increments $X(t_2) - X(t_1)$ are Gaussian with mean 0 (2.1), and the mean square increments† have a variance proportional to the time differences, thus

$$E\bigl[\,|X(t_2) - X(t_1)|^2\,\bigr] \propto |t_2 - t_1|. \qquad (2.2)$$
Here E denotes the mathematical expectation† of a random variable or, in other words, the average over many samples. We say that the increments of X are statistically self-similar in the sense that

$$X(t_0 + \tau) - X(t_0) \quad\text{and}\quad \frac{1}{\sqrt{r}}\bigl(X(t_0 + r\tau) - X(t_0)\bigr)$$

have the same finite dimensional joint distribution† functions for any $t_0$ and $r > 0$. If we take for convenience $t_0 = 0$ and $X(t_0) = 0$, then this means that the two random functions

$$X(t) \quad\text{and}\quad \frac{1}{\sqrt{r}}\,X(rt)$$

are statistically indistinguishable. The second one is just a properly rescaled version of the first. Thus, if we accelerate the process X(t) by a factor of 16, for example, then we can divide X(16t) by 4 to obtain the same Brownian motion that we started with. We will return to this important characterization when we discuss fractional Brownian motion, and it will be crucial in understanding the spectral synthesis method.
The integral of uncorrelated white Gaussian noise W satisfies (2.1) and (2.2):

$$X(t) = \int_{-\infty}^{t} W(s)\, ds. \qquad (2.3)$$
The random variables W(t) are uncorrelated† and have the same normal distribution† N(0, 1). Moreover, the graph of a sample of Brownian motion X(t) has a fractal dimension of 1.5, and the intersection of the graph with a horizontal line has a dimension of 0.5. Formula (2.3) gives rise to the first little algorithm WhiteNoiseBM().
ALGORITHM WhiteNoiseBM (X, N, seed)
Title Brownian motion by integration of white Gaussian noise
BEGIN
X[0] := 0
InitGauss (seed)
FOR i := 1 TO N-1 DO
X[i] := X[i-1] + Gauss () * sqrt (1/(N-1))
END FOR
END
Fig. 2.2: Brownian motion in one dimension. The lower graph shows a sample of Gaussian white noise, its time integral yields the
brown noise above it (see algorithm WhiteNoiseBM()).
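The same integration reads as follows in Python (a sketch; the function name and the explicit √(1/(N−1)) normalization, chosen so that the variance of the increments is proportional to the time step as (2.2) requires, are our own choices):

```python
import math, random

def white_noise_bm(n, seed=0):
    """Brownian motion as the running sum of white Gaussian noise;
    each increment has variance 1/(n-1), so var(X[-1] - X[0]) = 1."""
    random.seed(seed)
    X = [0.0]
    step = math.sqrt(1.0 / (n - 1))
    for _ in range(n - 1):
        X.append(X[-1] + step * random.gauss(0.0, 1.0))
    return X

X = white_noise_bm(1025)
assert len(X) == 1025 and X[0] == 0.0
```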
In the above algorithm we perform a simple numerical integration of white Gaussian noise. It requires a routine
called Gauss(), which returns a sample
ALGORITHM InitGauss (seed)
Title Initialization of random number generators
Arguments
seed      seed value for random number generator
Globals
Arand     rand() returns values between 0 and Arand, system dependent
Nrand     number of samples of rand() to be taken in Gauss()
GaussAdd  real parameter for the linear transformation in Gauss()
GaussFac  real parameter for the linear transformation in Gauss()
Functions
srand()   initialization of system random numbers
65
BEGIN
Nrand := 4
Arand := power (2, 31) - 1
GaussAdd := sqrt (3 * Nrand)
GaussFac := 2 * GaussAdd / (Nrand * Arand)
srand (seed)
END
ALGORITHM Gauss ()
Title Function returning Gaussian random number
BEGIN
sum := 0
FOR i := 1 TO Nrand DO
sum := sum + rand ()
END FOR
RETURN (GaussFac * sum - GaussAdd)
END
of a random variable with normal distribution (mean 0 and variance 1). Let us briefly describe an elementary
method for generating such numbers. On most machines a pseudo random number generator will be available. So
let us assume that we have a routine rand(), which returns random numbers uniformly distributed† over some interval [0, A]. Typically A will be $2^{31} - 1$ or $2^{15} - 1$. Also we assume that a routine srand(seed) exists, which
introduces a seed value for rand(). Taking certain linearly scaled averages of the values returned by rand() will
approximate a Gaussian random variable as follows:
A random variable Y is standardized by subtracting its expected value and dividing by its standard deviation†:

$$Z = \frac{Y - E(Y)}{\sqrt{\operatorname{var} Y}}.$$
The Central Limit Theorem states that if $Z_n$ is the standardized sum of any n identically distributed random variables, then the probability distribution† of $Z_n$ tends to the normal distribution as $n \to \infty$. Let $Y_i$ be the i-th value returned by rand(), $i = 1, \ldots, n$. Then

$$E\Bigl(\sum_{i=1}^{n} Y_i\Bigr) = \frac{nA}{2} \quad\text{and}\quad \operatorname{var}\Bigl(\sum_{i=1}^{n} Y_i\Bigr) = \frac{nA^2}{12}.$$

With these formulas we obtain

$$Z_n = \frac{\sum_{i=1}^{n} Y_i - \frac{nA}{2}}{A\sqrt{n/12}}$$

as our approximate Gaussian random variable. In practice n = 3 or 4 already yields satisfactory results for our purposes.
Let us remark that there are other Gaussian random number generators that require only one evaluation of
rand() for each Gaussian random number returned. They use a transformation method, and the underlying
distribution is exactly the normal distribution. We refer the interested reader to [8] and [86].
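Both routes can be sketched in Python. Here gauss_clt mirrors the averaging method of Gauss() above (taking rand() to be the standard uniform generator on [0, 1), so A = 1), while gauss_box_muller shows one of the exact transformation methods alluded to; the function names are our own.

```python
import math, random

def gauss_clt(nrand=4):
    """Approximate N(0,1): standardized sum of nrand uniform variates
    (each uniform on [0,1) with mean 1/2 and variance 1/12)."""
    s = sum(random.random() for _ in range(nrand))
    return (s - nrand / 2.0) / math.sqrt(nrand / 12.0)

def gauss_box_muller():
    """Exact N(0,1) via the Box-Muller transformation."""
    u1 = 1.0 - random.random()      # in (0,1], avoids log(0)
    u2 = random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

random.seed(6)
samples = [gauss_clt() for _ in range(10000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
assert abs(mean) < 0.1 and 0.85 < var < 1.15
```

The CLT version has exactly mean 0 and variance 1 but only approximately Gaussian tails; the Box-Muller version is exact in distribution.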
Fig. 2.3: Midpoint displacement. The first two stages in the midpoint displacement technique as explained in the text.
Another straightforward method to produce Brownian motion is random midpoint displacement. If the process is to
be computed for times t between 0 and
ALGORITHM MidPointBM (X, maxlevel, sigma, seed)
Title Brownian motion via midpoint displacement
BEGIN
InitGauss (seed)
FOR i := 1 TO maxlevel DO
delta[i] := sigma * power (0.5, (i+1)/2)
END FOR
N := power (2, maxlevel)
X[0] := 0
X[N] := sigma * Gauss ()
MidPointRecursion (X, 0, N, 1, maxlevel)
END

ALGORITHM MidPointRecursion (X, index0, index2, level, maxlevel)
Title Recursive routine called by MidPointBM()
BEGIN
index1 := (index0 + index2) / 2
X[index1] := 0.5 * (X[index0] + X[index2]) + delta[level] * Gauss ()
IF (level < maxlevel) THEN
MidPointRecursion (X, index0, index1, level+1, maxlevel)
MidPointRecursion (X, index1, index2, level+1, maxlevel)
END IF
END
1, then one starts by setting X(0) = 0 and selecting X(1) as a sample of a Gaussian random variable with mean 0 and variance $\sigma^2$. Then $\operatorname{var}(X(1) - X(0)) = \sigma^2$, and we expect

$$\operatorname{var}\bigl(X(t_2) - X(t_1)\bigr) = |t_2 - t_1|\,\sigma^2 \qquad (2.4)$$

for $0 \le t_1 \le t_2 \le 1$. We set $X(\frac12)$ to be the average of X(0) and X(1) plus some Gaussian random offset $D_1$ with mean 0 and variance $\Delta_1^2$. Then

$$X(\tfrac12) - X(0) = \tfrac12\bigl(X(1) - X(0)\bigr) + D_1,$$

and observe that again the increments in X, here $X(1) - X(\frac12)$ and $X(\frac12) - X(0)$, are Gaussian and have mean 0. So we must choose the variance $\Delta_1^2$ of $D_1$ such that

$$\operatorname{var}\bigl(X(\tfrac12) - X(0)\bigr) = \tfrac14\operatorname{var}\bigl(X(1) - X(0)\bigr) + \Delta_1^2 = \tfrac12\,\sigma^2$$

holds, i.e.

$$\Delta_1^2 = \tfrac14\,\sigma^2.$$

We apply the same idea to $X(\frac14)$, obtaining $\Delta_2^2 = \frac18\,\sigma^2$, and continue to finer resolutions, yielding

$$\Delta_n^2 = \frac{1}{2^{n+1}}\,\sigma^2$$

as the variance of the displacement $D_n$. Thus, corresponding to time differences $\Delta t = 2^{-n}$, we add a random element of variance $2^{-(n+1)}\sigma^2$, which is proportional to $\Delta t$ as expected.
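The recursion above can be sketched compactly in Python (the function name and default parameters are our own; the level-n offsets use the variance $\sigma^2/2^{n+1}$ just derived):

```python
import random

def midpoint_bm(maxlevel, sigma=1.0, seed=5):
    """Brownian motion on 2**maxlevel + 1 points via recursive midpoint
    displacement; the offset at level n has variance sigma**2 / 2**(n+1)."""
    random.seed(seed)
    N = 2 ** maxlevel
    X = [0.0] * (N + 1)
    X[N] = sigma * random.gauss(0.0, 1.0)
    delta = [sigma * 0.5 ** ((n + 1) / 2.0) for n in range(maxlevel + 1)]

    def recurse(i0, i2, level):
        i1 = (i0 + i2) // 2
        X[i1] = 0.5 * (X[i0] + X[i2]) + delta[level] * random.gauss(0.0, 1.0)
        if level < maxlevel:
            recurse(i0, i1, level + 1)
            recurse(i1, i2, level + 1)

    recurse(0, N, 1)
    return X

X = midpoint_bm(8)
assert len(X) == 257 and X[0] == 0.0
```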
Our last algorithm for Brownian motion falls into the category (C3). We may interpret Brownian motion as the cumulative displacement of a series of independent jumps, i.e. an infinite sum of functions $A_i P(t - t_i)$, where

$$P(t) = \begin{cases} 0, & \text{if } t < 0 \\ 1, & \text{if } t > 0 \end{cases}$$

and the $A_i$, $t_i$ are random variables with Gaussian and Poisson distributions respectively. This approach generalizes to circles and spheres. For circles we take time as 2π-periodic and arrive at

$$X(t) = \sum_{i} A_i\, P(t - t_i), \qquad (2.5)$$
Fig. 2.4: Brownian motion via midpoint displacement method. Eight intermediate stages of the algorithm MidPointBM() are shown, depicting approximations of Brownian motion using up to 3, 5, 9, ..., 257 points, i.e. the parameter maxlevel was set to 1, 2, ..., 8.
where P now denotes the 2π-periodic function which is 1 on one half of the circle and 0 on the other. The $A_i$ are identically distributed Gaussian random variables and the $t_i$ are uniformly distributed random variables with values between 0 and 2π. Each term in the sum (2.5) adds a random displacement on half of the circle.
Of course, we can also use midpoint displacement to obtain Brownian motion on a circle. We would just have
to require that X(0) = X( 1) in order to maintain continuity. However, the midpoint method does not generalize to
spheres whereas the random cut method does: In each step a great circle of the sphere is picked at random and one
of the two half spheres is displaced by an amount determined by a Gaussian random variable. The pictures of the planets in Figure 1.2 and the Plates 6, 7 and 36 were produced in this way (see also [103,68] and Sections 0.3 and 1.1.8).
ALGORITHM RandomCutsBM (X, N, maxsteps, seed)
Title Brownian motion on a circle via random cuts
BEGIN
InitGauss (seed)
FOR step := 1 TO maxsteps DO
k0 := N * rand () / Arand
k1 := k0 + N/2 - 1
A := Gauss ()
FOR k := k0 TO k1 DO
IF (k < N) THEN
X[k] := X[k] + A
ELSE
X[k-N] := X[k-N] + A
END IF
END FOR
END FOR
END
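The random cut method can be sketched in Python as follows (the function name and parameters are our own illustrative choices); each step displaces a randomly positioned half of the circle by a Gaussian amount:

```python
import random

def random_cuts_bm(N, maxsteps, seed=3):
    """Brownian motion on a circle of N samples: each cut displaces a
    randomly positioned half of the circle by a Gaussian amount."""
    random.seed(seed)
    X = [0.0] * N
    for _ in range(maxsteps):
        k0 = random.randrange(N)        # random cut position
        A = random.gauss(0.0, 1.0)      # displacement of one half circle
        for k in range(k0, k0 + N // 2):
            X[k % N] += A               # wrap around: the data are periodic
    return X

X = random_cuts_bm(512, 1000)
assert len(X) == 512
```

After few steps the result is visibly a step function; the quality improves with the number of cuts, as in Figure 2.5.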
2.3.1 Definitions
In the last section we studied random processes X(t) with Gaussian increments and

$$\operatorname{var}\bigl(X(t_2) - X(t_1)\bigr) \propto |t_2 - t_1|^{2H}, \qquad (2.6)$$

where H = 1/2. The generalization to parameters 0 < H < 1 is called fractional Brownian motion (fBm, see [63], [68]).
As in the case of ordinary Brownian motion, we say that the increments of X are statistically self-similar with parameter H; in other words,

$$X(t_0 + \tau) - X(t_0) \quad\text{and}\quad \frac{1}{r^H}\bigl(X(t_0 + r\tau) - X(t_0)\bigr)$$

have the same finite dimensional joint distribution functions for any $t_0$ and $r > 0$. If we again use for convenience $t_0 = 0$ and $X(t_0) = 0$, the two random functions

$$X(t) \quad\text{and}\quad \frac{1}{r^H}\,X(rt) \qquad (2.7)$$
Fig. 2.5: Brownian motion via random cuts. Up to 1000 random cuts have been performed by the algorithm RandomCutsBM(). All approximations are step functions, which is clearly visible in the lower graphs.
are statistically indistinguishable. Thus "accelerated" fractional Brownian motion X(rt) is properly rescaled by dividing amplitudes by $r^H$.
Let us visualize this important fact (see Figures 2.1 and 2.6). If we set H = 1/2, we obtain the usual Brownian motion of the last section. For H = 0 we get a completely different behavior of X: we can expand or contract the graph of X in the t-direction by any factor, and the process will still "look" the same. This clearly says that the graph of a sample of X must densely fill up a region in the plane. In other words, its fractal dimension is 2. The opposite case is given by the parameter H = 1. There we must compensate for an expansion of the graph in the t-direction by also
multiplying the amplitudes by the same factor. It is easy to give an argument showing that the fractal dimension of this graph must be 2 − H = 1. In fact, graphs of samples of fBm have a fractal dimension of 2 − H for 0 < H < 1. The parameter H thus describes the "roughness" of the function at small scales.
Fractional Brownian motion can be divided into three quite distinct categories: H < 1/2, H = 1/2 and H > 1/2.

Fig. 2.6: Properly rescaled fractional Brownian motion. The two sets of curves show properly rescaled fractional Brownian motion for parameters H = 0.2 (left) and H = 0.8 (right). The graphs in the center show small sections of fractional Brownian motion X(t). In the other graphs the properly rescaled random functions $r^{-H}X(rt)$ are displayed. The scaling factor r ranges from r = 1/8 to r = 8, corresponding to expanding and contracting the original function in the time direction. Compare Figure 2.1 for H = 0.5.

The case H = 1/2 is the ordinary Brownian motion, which has independent increments, i.e. $X(t_2) - X(t_1)$ and $X(t_3) - X(t_2)$ with $t_1 < t_2 < t_3$ are independent in the sense of probability theory; their correlation† is 0. For H > 1/2 there is a positive correlation between these increments, i.e. if the graph of X is increasing for some $t_0$, then it tends to continue to increase for $t > t_0$. For H < 1/2 the opposite is true. There is a negative correlation of increments, and the curves seem to oscillate more erratically.
For the approximation of fBm the approach taken by the midpoint method can formally be extended to also suit parameters H ≠ 1/2. Here we aim for the equivalent of (2.4),

$$\operatorname{var}\bigl(X(t_2) - X(t_1)\bigr) = |t_2 - t_1|^{2H}\,\sigma^2.$$

Using the same line of thought as in the last section, we arrive at midpoint displacements $D_n$ that have variances

$$\Delta_n^2 = \frac{\sigma^2}{2^{2nH}}\,\bigl(1 - 2^{2H-2}\bigr). \qquad (2.8)$$

Therefore the only changes which are required in the algorithm MidPointBM() occur in the computation of $\Delta_n$, accommodating H ≠ 1/2.
ALGORITHM MidPointFM1D (X, maxlevel, sigma, H, seed)
Title Fractional Brownian motion via midpoint displacement
Arguments
X[]       real array of size power(2, maxlevel) + 1
maxlevel  maximal number of recursions
sigma     initial standard deviation
H         0 < H < 1 determines fractal dimension D = 2 - H
seed      seed value for random number generator
Globals
delta[]   array holding standard deviations Delta_n
Variables
i, N      integers
BEGIN
InitGauss (seed)
FOR i := 1 TO maxlevel DO
delta[i] := sigma * power (0.5, i*H) * sqrt (1 - power (2, 2*H-2))
END FOR
N := power (2, maxlevel)
X[0] := 0
X[N] := sigma * Gauss ()
MidPointRecursion (X, 0, N, 1, maxlevel)
END
It has been shown that the above midpoint displacement technique does not yield true fBm for H ≠ 1/2 [69]. In fact, although

$$\operatorname{var}\bigl(X(\tfrac12) - X(0)\bigr) = \operatorname{var}\bigl(X(1) - X(\tfrac12)\bigr) = \bigl(\tfrac12\bigr)^{2H}\sigma^2,$$

we do not have

$$\operatorname{var}\bigl(X(\tfrac34) - X(\tfrac14)\bigr) = \bigl(\tfrac12\bigr)^{2H}\sigma^2,$$

as we would like. Thus, this process does not have stationary increments; the times t are not all statistically equivalent. This defect causes the graphs of X to show some visible traces of the first few stages in the recursion. In the two-dimensional extension of this algorithm one obtains results which may exhibit some disturbing creases along straight lines, which are related to the underlying grid of points (compare Section 1.4.3 and the discussion in Appendix A). Nevertheless, this is still a useful algorithm for many purposes. It became most popular after its appearance in [40] and subsequently in some science magazines. Recently it has also been included as a part of some computer graphics textbooks [49],[50].
One approach to deal with the non-stationarity of the midpoint displacement technique is to interpolate the
midpoints in the same way, but then to add a displacement Dn of a suitable variance to all of the points and not just
the midpoints. This seems natural, as one would reiterate a measurement at all points in a graph of fBm, when a
device is used that allows measurements at a smaller sampling rate At and a smaller spatial resolution AX. This
method is called successive random addition. The extra amount of work involved as compared with the midpoint
method is tolerable; about twice as many displacements are necessary. The actual formula for the variances of the displacements as used in the various stages of this algorithm (see pseudo code) will be derived as a special case of the method discussed in the next section.
X[i] := 0.5 * (X[i-d] + X[i+d])
END FOR
FOR i := 0 TO N STEP d DO
X[i] := X[i] + delta[level] * Gauss ()
END FOR
D := D/2
d := d/2
level := level + 1
END WHILE
END
Fig. 2.7: Fractal motion generated by displacement techniques. On the left, results of the algorithm MidPointFM1D() are shown for various parameters H. On the right, the random successive additions were also included. Those curves look smoother because the amplitudes are scaled in order to fit into the graph, whereas in the time direction no scaling was done. The effect is especially drastic for H = 0.1, where we scaled amplitudes by a factor of 0.7, and thus a properly rescaled graph would have to be contracted by a factor of $0.7^{-1/0.1} \approx 35.4$.
In the midpoint displacement method and the successive random addition method the resolution Δt is improved by a factor of r = 1/2 in each stage. We can modify the second of the above methods to accommodate other factors 0 < r < 1. For this purpose one would interpolate X(t) at times $t_i = i\,r\Delta t$ from the samples one already has from the previous stage at a sampling rate of Δt. Then a random element $D_n$ would be added to all of the interpolated points. In agreement with the requirements for the mean square increments of X, we have to prescribe a variance proportional to $(r^n)^{2H}$ for the Gaussian random variable $D_n$:

$$\Delta_n^2 \propto (r^n)^{2H}. \qquad (2.9)$$

The additional parameter r will change the appearance of the fractal; the feature of the fractal controlled by r has been termed the lacunarity. In the algorithm below we use a simple linear interpolation. The reason we include yet another method here is that it generalizes to two, three or more dimensions in a very straightforward way, while the other methods are harder to convert.
ALGORITHM InterpolatedFM1D (X, N, r, sigma, H, seed)
BEGIN
/* initialize the array with 2 points */
InitGauss (seed)
X[0] := 0
X[1] := sigma * Gauss ()
mT := 2
T := 1.0
/* loop while less than N points in array */
WHILE (mT < N) DO
/* set up new resolution of mt points */
mt := mT / r
IF (mt = mT) THEN mt := mT + 1
IF (mt > N) THEN mt := N
t := 1 / (mt - 1)
/* interpolate new points from old points */
Y[0] := X[0]
Y[mt-1] := X[mT-1]
FOR i := 1 TO mt-2 DO
index := integer (i * t / T)
h := i * t / T - index
Y[i] := (1 - h) * X[index] + h * X[index+1]
END FOR
/* compute the standard deviation for offsets */
delta := sqrt (0.5) * power (t, H) * sigma * sqrt (1.0 - power (t/T, 2-2*H))
/* do displacement at all positions */
FOR i := 0 TO mt-1 DO
X[i] := Y[i] + delta * Gauss ()
END FOR
mT := mt
T := 1 / mT
END WHILE
END
Following the ideas for midpoint displacement for Brownian motion, we set X(0) = 0 and select X(1) as a sample of a Gaussian random variable with variance $\sigma^2$. Then we can deduce in the same fashion as before that

$$\Delta_n^2 = \frac{\sigma^2}{2}\bigl(1 - r^{2-2H}\bigr)(r^n)^{2H}.$$
Fig. 2.8: Fractal Brownian motion with varying lacunarity. The algorithm InterpolatedFMlD() produced these curves with a parameter
H = 0.8. The other parameter r is the scaling factor in the method, r = 0.1 means that in each stage we have 1/r = 10 times as many
points as in the previous stage. Especially for low r we see a pronounced effect of these high scaling ratios as a modulation remaining
from the first one or two stages. This is called the lacunarity.
A little difficulty arises since we cannot work with a continuous variable X(t) in a computer program (X is stored as an array). Thus, generally, one cannot maintain the constant factor of r reducing the resolution Δt in each step. Let us assume for simplicity that we have some X(0) and X(T) already computed, where $T \approx r^{n-1}$. Then X(t) with $t \approx r^n$ is first interpolated via

$$X(t) = \Bigl(1 - \frac{t}{T}\Bigr)X(0) + \frac{t}{T}\,X(T).$$

Then all points X(0), X(t), X(2t), etc. are displaced by samples of a Gaussian random variable D with variance $\Delta^2$. The old values X(0) and X(T) satisfied $\operatorname{var}(X(T) - X(0)) = T^{2H}\sigma^2$, and for the refined sequence we require $\operatorname{var}(X(t) - X(0)) = t^{2H}\sigma^2$, thus

$$t^{2H}\sigma^2 = \Bigl(\frac{t}{T}\Bigr)^{2} T^{2H}\sigma^2 + 2\Delta^2,$$

where the first term of the sum stands for the variance already contained in the interpolation, and the other represents the variance due to the perturbations in both X(0) and X(t). Thus

$$\Delta^2 = \frac{\sigma^2}{2}\Bigl(1 - \Bigl(\frac{t}{T}\Bigr)^{2-2H}\Bigr) t^{2H}.$$

To apply this formula to the case of random successive additions, we set $T = \frac{1}{2^{n-1}}$ and $t = \frac{1}{2^n}$ and we obtain

$$\Delta^2 = \frac{\sigma^2}{2}\Bigl(1 - \frac{1}{2^{2-2H}}\Bigr)\frac{1}{2^{2Hn}},$$

which is, as expected, the quantity $\Delta^2$ used in the algorithm AdditionsFM1D() where the ratio r is 1/2.
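The special case r = 1/2 can be exercised in a short Python sketch of successive random additions (array handling, function name and seeding are our own choices); at each stage the resolution is doubled by midpoint interpolation, and then all points are displaced:

```python
import math, random

def additions_fm1d(maxlevel, H, sigma=1.0, seed=9):
    """fBm by successive random additions with ratio r = 1/2: midpoint
    interpolation, then displacement of *all* points with variance
    (sigma**2 / 2) * (1 - 2**(2*H - 2)) / 2**(2*H*n) at stage n."""
    random.seed(seed)
    X = [0.0, sigma * random.gauss(0.0, 1.0)]
    for n in range(1, maxlevel + 1):
        Y = []
        for i in range(len(X) - 1):          # double the resolution
            Y.append(X[i])
            Y.append(0.5 * (X[i] + X[i + 1]))
        Y.append(X[-1])
        delta = sigma * math.sqrt(0.5 * (1.0 - 2.0 ** (2 * H - 2))) \
                * 2.0 ** (-n * H)
        X = [y + delta * random.gauss(0.0, 1.0) for y in Y]
    return X

X = additions_fm1d(8, 0.7)
assert len(X) == 257
```

Note that, unlike midpoint displacement, the endpoints are also perturbed at every stage, which is exactly what restores the stationarity of the increments.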
The spectral synthesis method (also known as the Fourier filtering method) for generating fBm is based on the spectral representation of samples of the process X(t). Since the Fourier transform of X generally is undefined, we first restrict X(t) to a finite time interval, say 0 < t < T, and denote the Fourier transform of this restriction by F(f, T). The spectral density of the sample is then

$$S(f, T) = \frac{1}{T}\,|F(f, T)|^{2}.$$
The underlying idea of spectral synthesis is that a prescription of the right kind of spectral density S(f) will give rise to fBm with an exponent 0 < H < 1.

If the random function X(t) contains equal power for all frequencies f, this process is called white noise, in analogy with the white light made up of radiations of all wave lengths. If S(f) is proportional to $1/f^2$ we obtain the usual brown noise or Brownian motion. In general, a process X(t) with a spectral density proportional to $1/f^{\beta}$ corresponds to fBm with $H = \frac{\beta - 1}{2}$:

$$S(f) \propto \frac{1}{f^{\beta}}, \qquad \beta = 2H + 1. \qquad (2.10)$$
Let us pause for a moment to explain this relationship between β and H. Mathematically it can be derived from the fact that the mean square increments (which are proportional to $\Delta t^{2H}$ for fBm with exponent H) are directly related to the autocorrelation function of X, which in turn defines the spectral density by means of a Fourier transform via the Wiener-Khintchine relation (see Section 1.6.8). In place of this "pure" approach we propose a simple and more heuristic argument for the relation β = 2H + 1. We start out by restating the fundamental property of fBm: If X(t) denotes fBm with exponent 0 < H < 1, then the properly rescaled random function

$$Y(t) = \frac{1}{r^H}\,X(rt)$$

for a given r > 0 has the same statistical properties as X. Thus it also has the same spectral density. From this basic observation we can deduce the important result (2.10) using only the above definitions and some elementary calculus as follows.
Let us fix some r > 0, set $Y(t) = r^{-H}X(rt)$, and denote by

$F_X(f, T), F_Y(f, T)$ the Fourier transforms of X(t, T), Y(t, T),
$S_X(f, T), S_Y(f, T)$ the spectral densities of X(t, T), Y(t, T),
$S_X(f), S_Y(f)$ the spectral densities of X(t), Y(t).

We compute

$$F_Y(f, T) = \frac{1}{r^H}\int_0^{T} X(rt)\,e^{-2\pi i f t}\,dt = \frac{1}{r^{H+1}}\int_0^{rT} X(s)\,e^{-2\pi i (f/r)\,s}\,ds,$$

where we have substituted s for rt and $\frac{ds}{r}$ for dt in the second integral. Thus, clearly,

$$F_Y(f, T) = \frac{1}{r^{H+1}}\,F_X\Bigl(\frac{f}{r},\,rT\Bigr)$$

and

$$S_Y(f, T) = \frac{1}{T}\,|F_Y(f, T)|^{2} = \frac{1}{r^{2H+1}}\cdot\frac{1}{rT}\,\Bigl|F_X\Bigl(\frac{f}{r},\,rT\Bigr)\Bigr|^{2} = \frac{1}{r^{2H+1}}\,S_X\Bigl(\frac{f}{r},\,rT\Bigr).$$

Since Y is just a properly rescaled version of X, their spectral densities must coincide; thus, also

$$S_X(f) = \frac{1}{r^{2H+1}}\,S_X\Bigl(\frac{f}{r}\Bigr).$$

Now we formally set f = 1 and replace 1/r again by f to finally obtain the desired result (2.10),

$$S_X(f) \propto \frac{1}{f^{2H+1}}.$$
Fig. 2.9: Fractal motion via spectral synthesis. The above curves correspond to spectral density functions of the form $1/f^{\beta}$ where β = 2H + 1. The fractal dimensions of these graphs are 2 − H.
For the practical algorithm we have to translate the above into conditions on the coefficients $a_k$ of the discrete Fourier transform

$$X(t) = \sum_{k=0}^{N-1} a_k\, e^{2\pi i k t}. \qquad (2.12)$$

The coefficients $a_k$ are in a 1:1 correspondence with the complex values $X(t_k)$, $t_k = \frac{k}{N}$, $k = 0, 1, \ldots, N-1$. The condition to be imposed on the coefficients in order to obtain $S(f) \propto \frac{1}{f^{\beta}}$ now becomes

$$E\bigl(|a_k|^2\bigr) \propto \frac{1}{k^{\beta}}, \qquad (2.13)$$

since k essentially denotes the frequency in (2.12). This relation (2.13) holds for $0 < k < \frac{N}{2}$. For $k \geq \frac{N}{2}$ we must have $a_k = \bar{a}_{N-k}$ because X is a real function. The method thus simply consists of randomly choosing coefficients subject to the expectation in (2.13) and then computing the inverse Fourier transform to obtain X in the time domain. Since the process X need only have real values, it is actually sufficient to sample real random variables $A_k$ and $B_k$ under the constraint

$$E\bigl(A_k^2 + B_k^2\bigr) \propto \frac{1}{k^{\beta}}$$

and to evaluate the real sums

$$X(t) = \sum_{k=1}^{N/2}\bigl(A_k \cos 2\pi k t + B_k \sin 2\pi k t\bigr). \qquad (2.14)$$
In contrast to some of the previous algorithms discussed this method is not recursive and also does not proceed
in stages of increasing spatial resolution. We may, however, interpret the addition of more and more random
Fourier coefficients ak satisfying (2.13) as a process of adding higher frequencies, thus, increasing the resolution in
the frequency domain.
The advantage of this straightforward method is that it is the purest interpretation of the concept of fractional
Brownian motion. Artifacts such as those occurring in the midpoint displacement methods are not apparent.
However, due to the nature of Fourier transforms, the generated samples are periodic. This is sometimes annoying,
and in this case one can compute twice or four times as many points as actually needed and then discard a part of
the sequence.
The algorithm for the inverse Fourier transformation InvFFT is not included with the pseudo code
SpectralSynthesisFMlD(). It should compute the sums (2.14) from the given coefficients in A and B. Usually fast
Fourier transforms are employed. They require O(N log N) operations per transform.
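As an illustration, the whole method fits in a few lines of NumPy. This is a sketch in the spirit of SpectralSynthesisFM1D(), not the book's pseudo code itself; the normalization and the use of irfft (which supplies the conjugate-symmetric coefficients automatically) are convenience choices:

```python
import numpy as np

def spectral_synthesis_fm_1d(n, h, seed=0):
    """Approximate fBm on n points via spectral synthesis.

    The coefficient for frequency k is drawn so that
    E(A_k^2 + B_k^2) is proportional to 1/k^(2H+1), cf. (2.13).
    np.fft.irfft enforces a_k = conj(a_{N-k}), so the result is real.
    """
    rng = np.random.default_rng(seed)
    k = np.arange(1, n // 2 + 1)            # positive frequencies
    amp = k ** (-(2 * h + 1) / 2)           # square root of 1/k^(2H+1)
    coeffs = np.zeros(n // 2 + 1, dtype=complex)
    coeffs[1:] = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * amp
    return np.fft.irfft(coeffs, n)          # real-valued sample path

x = spectral_synthesis_fm_1d(1024, h=0.7)
print(x.shape)   # (1024,)
```

Because the inverse FFT is used, the generated path is periodic, exactly as discussed above; discarding part of a longer sample removes the visible periodicity.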
2.5.1 Definitions
In this section we discuss how one can generalize the displacement methods and the spectral synthesis methods to
two and three dimensions.
The generalization of fractional Brownian motion itself is straightforward. It is a multidimensional process (a random field) X(t₁, t₂, ..., tₙ) with the properties:

(i) The increments X(t₁, t₂, ..., tₙ) − X(s₁, s₂, ..., sₙ) are Gaussian with mean 0.

(ii) The variance of the increments X(t₁, t₂, ..., tₙ) − X(s₁, s₂, ..., sₙ) depends only on the distance

    ( Σ_{i=1}^{n} (t_i − s_i)² )^(1/2)

and in fact is proportional to the 2H-th power of the distance, where the parameter H again satisfies 0 < H < 1. Thus,

    E( |X(t₁, ..., tₙ) − X(s₁, ..., sₙ)|² ) ∝ ( Σ_{i=1}^{n} (t_i − s_i)² )^H.    (2.15)

The random field X again has stationary increments and is isotropic, i.e. all points (t₁, t₂, ..., tₙ) and all directions are statistically equivalent. In the frequency domain we have for the spectral density

    S(f₁, ..., fₙ) ∝ 1 / ( f₁² + ... + fₙ² )^(H + n/2).    (2.16)

This fact can be deduced in the exact same fashion as in the last section for the law in the one-dimensional case. Therefore we skip these details here. This ensures that X restricted to any straight line will be a 1/f^β noise corresponding to 2H = β − 1 [103]. In analogy with formula (2.10) the fractal dimension of the graph of a sample of X(t₁, t₂, ..., tₙ) is

    D = n + 1 − H.
The midpoint displacement methods can work with square lattices of points. If the mesh size δ denotes the resolution of such a grid, we obtain another square grid of resolution δ/√2 by adding the midpoints of all squares. Of course, the orientation of the new square lattice is rotated by 45 degrees. Again adding the midpoints of all squares gives us the next lattice with the same orientation as the first one, and the resolution is now δ/2 (see Figure 2.10). In each stage we thus scale the resolution by a factor of r = 1/√2, and in accordance with (2.15) we add random displacements using a variance which is r^(2H) times the variance of the previous stage. If we assume that the data on the four corner points of the grid carry mean square increments of σ², then at stage n of the process we must add a Gaussian random variable of variance σ² r^(2Hn) = σ² (1/2)^(nH). For the usual midpoint method random elements are added only to the new midpoints in each stage, whereas in the random addition method we add displacements at all points. Thus we can unify both methods in just one algorithm.
In Figures 2.11 to 2.13 we show topographical maps of random fractal landscapes which were generated using
the algorithm MidPointFM2D(). Random additions and differing seed values were used, and the parameter H varies
from 0.8 to 0.2. At a resolution of 65 by 65 points (maxlevel = 6) a total of 4225 data points were generated. The
heights of the surface were scaled to the range from —10000 to +10000 and then handed to a contour line program
which produced the maps. The parts of the surfaces that have a negative height are assumed to be submerged under
water and are not displayed. The fractal dimension of these surfaces is 3 — H. A rendering of a perspective view of
these landscapes is also included.
Fig. 2.11: Topographical map and perspective view of random fractal (H = 0.8).
Fig. 2.12: Topographical map and perspective view of random fractal (H = 0.5).
Fig. 2.13: Topographical map and perspective view of random fractal (H = 0.2 ).
ALGORITHM MidPointFM2D (X, maxlevel, sigma, H, addition, seed)
Title Midpoint displacement and successive random additions in 2 dimensions
Arguments X[][] doubly indexed real array of size (N+1)²
          maxlevel maximal number of recursions, N = 2^maxlevel
          sigma initial standard deviation
          H parameter H determines fractal dimension D = 3 − H
          addition boolean parameter (turns random additions on/off)
          seed seed value for random number generator
Variables i, N, stage integers
          delta real holding standard deviation
          x, y, y0, D, d integer array indexing variables
Functions f3(delta,x0,x1,x2) = (x0+x1+x2)/3 + delta * Gauss()
          f4(delta,x0,x1,x2,x3) = (x0+x1+x2+x3)/4 + delta * Gauss()

BEGIN
InitGauss (seed)
N := power (2, maxlevel)
/* set the initial random corners */
delta := sigma
X[0][0] := delta * Gauss ()
X[0][N] := delta * Gauss ()
X[N][0] := delta * Gauss ()
X[N][N] := delta * Gauss ()
D := N
d := N/2
FOR stage := 1 TO maxlevel DO
    /* going from grid type I to type II */
    delta := delta * power (0.5, 0.5*H)
    /* interpolate and offset points */
    FOR x := d TO N-d STEP D DO
        FOR y := d TO N-d STEP D DO
            X[x][y] := f4 (delta, X[x+d][y+d], X[x+d][y-d], X[x-d][y+d], X[x-d][y-d])
        END FOR
    END FOR
    /* displace other points also if needed */
    IF (addition) THEN
        FOR x := 0 TO N STEP D DO
            FOR y := 0 TO N STEP D DO
                X[x][y] := X[x][y] + delta * Gauss ()
            END FOR
        END FOR
    END IF
    /* going from grid type II to type I */
    delta := delta * power (0.5, 0.5*H)
    /* interpolate and offset boundary grid points */
    FOR x := d TO N-d STEP D DO
        X[x][0] := f3 (delta, X[x+d][0], X[x-d][0], X[x][d])
        X[x][N] := f3 (delta, X[x+d][N], X[x-d][N], X[x][N-d])
        X[0][x] := f3 (delta, X[0][x+d], X[0][x-d], X[d][x])
        X[N][x] := f3 (delta, X[N][x+d], X[N][x-d], X[N-d][x])
    END FOR
    /* interpolate and offset interior grid points */
    FOR x := d TO N-d STEP D DO
        FOR y := D TO N-d STEP D DO
            X[x][y] := f4 (delta, X[x][y+d], X[x][y-d], X[x+d][y], X[x-d][y])
        END FOR
    END FOR
    FOR x := D TO N-d STEP D DO
        FOR y := d TO N-d STEP D DO
            X[x][y] := f4 (delta, X[x][y+d], X[x][y-d], X[x+d][y], X[x-d][y])
        END FOR
    END FOR
    /* displace other points also if needed */
    IF (addition) THEN
        FOR x := 0 TO N STEP D DO
            FOR y := 0 TO N STEP D DO
                X[x][y] := X[x][y] + delta * Gauss ()
            END FOR
        END FOR
        FOR x := d TO N-d STEP D DO
            FOR y := d TO N-d STEP D DO
                X[x][y] := X[x][y] + delta * Gauss ()
            END FOR
        END FOR
    END IF
    D := D/2
    d := d/2
END FOR
END

The displacement methods also generalize to fBm in two or three (or even higher) dimensions. If d denotes this dimension we have to fill an array of N^d elements subject to (2.15). These points are evenly spaced grid points in a unit cube, thus the final resolution (grid size) is 1/(N−1). At a particular stage of this algorithm we have an approximation at resolution 1/(M−1) given in the form of an array of M^d points with M < N. The next stage is approached in two steps: 1. If 0 < r < 1 denotes the factor by which the resolution changes, then a new approximation consisting of L^d points, where L ≈ M/r, must be computed. First the values of these points are taken as multilinear or higher order interpolation of the M^d old values. 2. Then random offsets of suitable variance are added at all points. These two steps are repeated until we have obtained N^d points.
BEGIN
InitGauss (seed)
/* initialize the array with 2^d points */
mT := 2
FOR i := 0 TO power(2,d)-1 DO
    X[i] := sigma * Gauss ()
END FOR
/* loop while less than N^d points in array */
WHILE (mT < N) DO
    /* set up new resolution of mt points */
    mt := mT / r
    IF (mt = mT) THEN mt := mT + 1
    IF (mt > N) THEN mt := N
    t := 1.0/(mt-1)
    T := 1.0/(mT-1)
    /* interpolate new points from old points */
    Interpolate (X, mT, Y, mt, d)
    /* compute the standard deviation for offsets */
    delta := sqrt (0.5) * sigma * sqrt (1.0 - power (t/T, 2-2*H)) * power (t, H)
    /* do displacement at all positions */
    FOR i := 0 TO power(mt,d)-1 DO
        X[i] := Y[i] + delta * Gauss ()
    END FOR
    mT := mt
END WHILE
END
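For readers who prefer an executable version, here is a compact NumPy sketch of the midpoint displacement scheme of MidPointFM2D(). It is an illustration rather than a verbatim translation: boundary points are handled by averaging whichever axis neighbors exist (the role of f3), and the random additions are applied once per stage:

```python
import numpy as np

def midpoint_fm_2d(maxlevel, sigma, H, addition=False, seed=0):
    """Midpoint displacement / successive random additions on a
    (2^maxlevel + 1)^2 grid; the fractal surface dimension is 3 - H."""
    rng = np.random.default_rng(seed)
    N = 2 ** maxlevel
    X = np.zeros((N + 1, N + 1))
    delta = sigma
    # random corner values, mean square increment sigma^2
    X[0, 0], X[0, N], X[N, 0], X[N, N] = rng.normal(scale=delta, size=4)
    D, d = N, N // 2
    for _ in range(maxlevel):
        # type I -> type II: centers of squares (four diagonal neighbors)
        delta *= 0.5 ** (0.5 * H)
        for x in range(d, N, D):
            for y in range(d, N, D):
                X[x, y] = (X[x+d, y+d] + X[x+d, y-d] +
                           X[x-d, y+d] + X[x-d, y-d]) / 4 + delta * rng.normal()
        # type II -> type I: edge midpoints (available axis neighbors)
        delta *= 0.5 ** (0.5 * H)
        for x in range(0, N + 1, d):
            start = d if x % D == 0 else 0
            for y in range(start, N + 1, D):
                nbrs = [X[x+dx, y+dy] for dx, dy in ((d, 0), (-d, 0), (0, d), (0, -d))
                        if 0 <= x + dx <= N and 0 <= y + dy <= N]
                X[x, y] = sum(nbrs) / len(nbrs) + delta * rng.normal()
        if addition:  # successive random additions: displace all current points
            X[::d, ::d] += rng.normal(scale=delta, size=X[::d, ::d].shape)
        D, d = D // 2, d // 2
    return X

Z = midpoint_fm_2d(5, sigma=1.0, H=0.8)
print(Z.shape)   # (33, 33)
```

With H = 0.8 the surface is smooth and mountain-like; lowering H toward 0.2 produces the rougher terrain of Figures 2.11 to 2.13.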
In two dimensions the spectral density is required to satisfy

    S(u,v) ∝ 1 / (u² + v²)^(H+1),

and the conditions for a real-valued inverse transform now read

    a_{N−i,N−j} = conj(a_{i,j}),
    a_{0,N−j} = conj(a_{0,j}),
    a_{N−i,0} = conj(a_{i,0}),
    a_{0,0} = conj(a_{0,0}).
Fig. 2.18: Spectral synthesis of a mountain (continued).

The fractal dimension D_f of the surface will be D_f = 3 − H. The algorithm then consists simply of choosing the Fourier coefficients accordingly and then performing a two-dimensional inverse transform. The code for the inverse Fourier transformation InvFFT2D is not included. It can easily be constructed from the one-dimensional version InvFFT (see [46]). As in the one-dimensional case one can reduce the number of coefficients since the random function is only real. The method also generalizes to three dimensions (see Chapter 1).

ALGORITHM SpectralSynthesisFM2D (X, N, H, seed)
Title Fractal motion in 2 dimensions
Arguments X[][] doubly indexed array of complex variables of size N²
          N size of array X along one dimension
          H 0 < H < 1 determines fractal dimension D = 3 − H
          seed seed value for random number generator
Globals   Arand rand() returns values between 0 and Arand
Variables i, j, i0, j0 integers
          rad, phase polar coordinates of Fourier coefficient
          A[][] doubly indexed array of complex variables of size N²
Subroutines InvFFT2D fast inverse Fourier transform in 2 dimensions
          rand() returns system random numbers

BEGIN
InitGauss (seed)
FOR i := 0 TO N/2 DO
    FOR j := 0 TO N/2 DO
        phase := 2 * 3.141592 * rand () / Arand
        IF (i <> 0 OR j <> 0) THEN
            rad := power (i*i+j*j, -(H+1)/2) * Gauss ()
        ELSE
            rad := 0
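A self-contained NumPy sketch of the two-dimensional method follows. It is not a line-by-line translation of SpectralSynthesisFM2D(): np.fft.irfft2 handles the conjugate-symmetry bookkeeping internally, so only the half-spectrum needs to be filled:

```python
import numpy as np

def spectral_synthesis_fm_2d(n, h, seed=0):
    """Random surface with spectral density S(u,v) ~ 1/(u^2+v^2)^(H+1);
    the fractal dimension of the resulting surface is 3 - H."""
    rng = np.random.default_rng(seed)
    # integer frequency grid for the half-spectrum expected by irfft2
    u = np.fft.fftfreq(n, d=1.0 / n)      # 0, 1, ..., n/2-1, -n/2, ..., -1
    v = np.fft.rfftfreq(n, d=1.0 / n)     # 0, 1, ..., n/2
    uu, vv = np.meshgrid(u, v, indexing="ij")
    f2 = uu ** 2 + vv ** 2
    f2[0, 0] = 1.0                        # avoid 0^x; DC term is zeroed below
    amp = f2 ** (-(h + 1) / 2)            # sqrt of 1/(u^2+v^2)^(H+1) squared law
    phase = rng.uniform(0.0, 2.0 * np.pi, size=f2.shape)
    coeffs = amp * rng.normal(size=f2.shape) * np.exp(1j * phase)
    coeffs[0, 0] = 0.0                    # zero-mean surface
    return np.fft.irfft2(coeffs, s=(n, n))

Z = spectral_synthesis_fm_2d(64, h=0.8)
print(Z.shape)   # (64, 64)
```

As in one dimension the surface is periodic in both directions; computing a larger array and keeping a sub-window removes the periodicity.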
the random fractal, i.e. by allowing more and more Fourier coefficients to be employed. The resolution in the algorithm SpectralSynthesisFM2D() was N = 64, but in the top image of Figure 2.17 only the first 2² = 4 coefficients were used. In the other pictures we allowed 4² = 16 (middle) and 8² = 64 (bottom) non-zero coefficients. In Figure 2.18 we increase the number of Fourier coefficients to 16² (top), 32² (middle) and 64² (bottom).

2.6 Generalized stochastic subdivision and spectral synthesis of ocean waves

A glance at the SIGGRAPH proceedings of the last few years will reveal that the computer graphics community is very interested in the rendering aspect of fractals and other procedurally defined objects. In particular ray tracing methods have been adapted to handle the infinite detail at small scales that is characteristic for fractals. The images of random fractals presented by Voss demonstrate the perfection with which it is now possible to generate mountain scenes, craters on a moon etc. The question remains in what way the methods used for the production of random fractals can be modified in order to model even more natural phenomena. In this last section we will briefly review two such approaches published in the recent papers [60] and [74]. Both papers pursue the idea of spectral synthesis.

J.P. Lewis generalizes the displacement methods in his paper [60] to provide a better control of the synthesized random functions. He calls it generalized stochastic subdivision. His generated noises are stationary with a prescribed autocorrelation function

    R(s) = E( X(t) X(t+s) ).

Sums of the form

    X̂(t) = Σ_k a_k X(t + t_k)

yield a linear estimator for the random function X at time t. In the usual midpoint technique, valid for spectral density functions S(f) ∝ 1/f², this sum just ranges over two points, and the weights are both equal to 1/2. In the generalized stochastic subdivision the weights a_k have to be chosen in harmony with the prescribed autocorrelation function R(s), or equivalently with the spectral density function S(f), because in the general case the expected value of X at a midpoint no longer is just the average of its two neighbors. The values of the coefficients are deduced using an orthogonality principle known in estimation theory (see the references in [60]). Eventually this amounts to solving a linear system of equations at each stage of the subdivision method. Of course, the estimated value X̂(t) is finally displaced by a random element with a properly chosen variance. Following Lewis, some of the expected advantages of this method over the direct synthesis via the spectral density function and inverse Fourier transformation are (1) that the resolution of synthesized functions may be adjusted to the resolution required by its local perspective projection in the rendering process, (2) that portions of the noise may be reproduced at different locations and at different resolutions, and (3) that the random function can be required to pass through certain points, e.g. to ensure boundary continuity with adjacent objects. Some of the considered spectra are the Markovian-like spectrum 1/(a² + f²)^d, where the parameter d is similar to H in the random fractals (set 2d = 2H + 1), and the Lorentzian spectrum 1/(a² + b²(f − f₀)²). In two dimensions spectra with directional trends and others yielding oscillatory textures are easily accommodated in the setup of generalized stochastic subdivision.
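The orthogonality principle leads to the standard normal equations of linear estimation: the weights a_k solve Σ_k a_k R(t_k − t_j) = R(t_j) for every neighbor offset t_j. A small NumPy sketch; the exponential (Markovian-like) autocorrelation used here is an illustrative choice, not a function taken from [60]:

```python
import numpy as np

def subdivision_weights(offsets, R):
    """Solve the normal equations  sum_k a_k R(tau_k - tau_j) = R(tau_j)
    for the linear-estimator weights a_k (orthogonality principle)."""
    offsets = np.asarray(offsets, dtype=float)
    G = R(offsets[:, None] - offsets[None, :])   # Gram matrix R(tau_k - tau_j)
    r = R(offsets)                               # right-hand side R(tau_j)
    return np.linalg.solve(G, r)

# illustrative Markovian-like autocorrelation R(s) = exp(-a|s|)
R = lambda s: np.exp(-0.5 * np.abs(s))

# estimate a midpoint value from its two neighbors at offsets -1 and +1
w = subdivision_weights([-1.0, 1.0], R)
print(w)   # two equal weights, by symmetry
```

For the special case S(f) ∝ 1/f² the same system reproduces the familiar equal weights 1/2 of the plain midpoint method; for other spectra the weights, and hence the conditional mean, differ from a simple average.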
Fig. 2.19: Nonfractal waves. The spectral synthesis method was applied using a spectral density function similar to (2.18). These waves are not fractals since the spectral density falls off as 1/f⁵ for high frequencies.
The objective of the other paper by G. Mastin et al. [74] is to model a "fully developed sea in nature" using the spectral synthesis method with a spectral density function based on empirical results. Earlier researchers had recorded measurements of waves leading to a spectral model first in the downwind direction and then also in other directions. The resulting two-dimensional spectrum of these models, called the modified Pierson-Moskowitz spectrum, can be formulated as

    S(f,φ) = (a / f⁵) e^(−b (f₀/f)⁴) D(f,φ).    (2.18)

Here (f, φ) are the polar coordinates of the frequency variables; a and b are certain positive constants. In the paper this is achieved by filtering a white noise image appropriately, i.e. by multiplying the squared magnitudes of the Fourier transform of the white noise with the spectral density function. A following inverse transform then yields the sea model as a height function over the xy-plane. Plate 16 shows the result in a raytraced image with an added maelstrom.

2.7 Computer graphics for smooth and fractal surfaces

The output of the algorithms for the synthesis of fractal surfaces (MidPointFM2D(), ...) is an array of elevation
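For orientation, a spectrum of the general form (2.18) is easy to tabulate. In the sketch below the values of a, b, f₀ and the cos² directional spreading D(f,φ) are illustrative assumptions only, not the empirical parameters used in [74]:

```python
import numpy as np

def sea_spectrum(f, phi, a=1.0, b=0.74, f0=0.1):
    """Modified Pierson-Moskowitz-type spectrum, cf. (2.18):
    S(f, phi) = (a / f^5) * exp(-b * (f0/f)^4) * D(f, phi).
    The spreading function D is an assumed cos^2 model, nonzero
    only within 90 degrees of the downwind direction phi = 0."""
    radial = (a / f ** 5) * np.exp(-b * (f0 / f) ** 4)
    D = (2.0 / np.pi) * np.cos(phi) ** 2
    D = np.where(np.abs(phi) < np.pi / 2, D, 0.0)
    return radial * D

f = np.linspace(0.05, 1.0, 200)
S = sea_spectrum(f, 0.0)
print(f[S.argmax()])   # frequency of the spectral peak, near f0
```

The 1/f⁵ tail is exactly why the waves of Figure 2.19 are not fractals: the spectrum decays far too fast at high frequencies to yield a fractional exponent.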
data. Geometrically the surface is treated as flat, polygons forms a flat square or rectangle. The
and the elevation of the surface is color coded so elevation data is transformed into color. The
that e.g. the low regions correspond to various obvious extension of this is to use the elevation
shades of blue (depending on depth) and the high data as the third coordinate for the vertices of the
elevations are shown in shades of green, brown and polygons. This turns the flat surface into fractal
white. In this way one can produce the equivalent terrain embedded in three-dimensional space. One
of a geographical map of the landscape. If there are then has to define a projection of the polygons onto
enough elevation data points we draw one pixel of the screen. The painter’s algorithm sorts the
color per point. Otherwise we can use the multilinear interpolation of Section 2.5 to obtain as
many new points as needed. E.g. for a screen eliminating the hidden parts of the surface. This
resolution of 768 by 1024 an array of 192 by 256 and the application of a lighting model is a
elevations is sufficient. By multilinear interpolation standard procedure (see e.g. [39]). Figure 2.15 is a
the resolution can be quadrupled in the horizontal coarse example. In addition to the three-
and vertical direction thus matching the given dimensionality of the surface we again can use the
screen resolution. This procedure may be elevation data to define colors relative to height.
considered a special case of Gouraud shading. The lighting model then adds intensities.
Color Plate 14 has been produced this way; the data were computed with MidPointFM2D().
By changing the colors corresponding to various heights it is very easy to produce convincing clouds. Color Plate 15 shows the same data as Plate 14 but uses a color map as follows: We specify colors in RGB space (red, green, blue) with intensities varying from 0 to 255. Let hmin and hmax denote the minimum and maximum heights of the data and h̄ = (hmin + hmax)/2. Then the color corresponding to height h, with hmin ≤ h ≤ hmax, is defined as follows: If h < h̄, then the color values for red and green are 0, while the blue value is 255. If h ≥ h̄, then the value of the blue component remains at 255, while the red and green portions both ramp linearly from 0 at h̄ to 255 at hmax, i.e. they are given by

    red = green = 255 · (h − h̄) / (hmax − h̄).
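A minimal sketch of such a cloud color map, assuming a linear red/green ramp above the mean height h̄ (the exact ramp used for Plate 15 is not specified here):

```python
def cloud_color(h, hmin, hmax):
    """Map a height to an (r, g, b) triple with components in 0..255:
    pure blue below the mean height, blending linearly toward white
    (clouds) as h approaches hmax. The linear ramp is an assumption."""
    hbar = 0.5 * (hmin + hmax)
    if h < hbar:
        return (0, 0, 255)                     # sky blue below the mean
    ramp = int(255 * (h - hbar) / (hmax - hbar))
    return (ramp, ramp, 255)                   # toward white near hmax

print(cloud_color(0.0, -1.0, 1.0))   # (0, 0, 255)
print(cloud_color(1.0, -1.0, 1.0))   # (255, 255, 255)
```

Applied to the elevation data of a fractal surface viewed from above, the upper half of the height range becomes the white cloud mass against a blue sky.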
2.7.2 Extended floating horizon method

All methods for the approximation of two-dimensional fractional Brownian motion easily generate as many points as wished. Using the Fourier synthesis method, however, it may take a longer time to perform the compute intensive 2D Fourier transformations. If enough points are produced it is possible in the rendering to avoid the interpolation that is introduced by using polygons. The points themselves serve as primitives for the rendering. This approach ...

[Plate caption fragment: ... dimension D for the same landscape is shown in Plates 11 to 13. As D increases, the perceived ...]
[Color plates 11-36 appear here. Among the captions: Plate 13: D = 2.8; Plate 32: four views of the three-dimensional fern of Figure 5.14, whose IFS code consists of four ...]
lines generally can be omitted if each point is projected to one pixel on the screen and drawn. The hidden surface problem is solved most easily when the view direction is parallel to one coordinate plane. The farthest row of the data is drawn first, then the second last row, etc. Occasionally it may occur that two consecutive projected rows leave some pixels undefined in between them. For these pixels, which are very easy to detect, we must set some interpolated intermediate color and intensity. Most of the fascinating pictures of fractal terrain in Chapter 1 and of the potential surfaces for the Mandelbrot and Julia sets in Chapter 4 have been produced by such an algorithm (see the color plates). In the following we present a description of our extended version of the floating horizon method. It is sufficiently detailed as a guide for an implementation. Related algorithms are used in [38] and [41]. The method which is outlined below and refinements such as shadows and antialiasing will be the topic of our forthcoming paper [57].

2.7.3 The data and the projection

Let us assume for simplicity that the output of a fractal generator or of some other surface computation is presented as a square array of N by N elevation points. If

    h : [0,1] × [0,1] → [0,1]

denotes a corresponding height function we may think of the data as a sample of h, namely

    z_ij = h(x_i, y_j)

where

    x_i = i/(N−1),   y_j = j/(N−1).
For a local lighting model surface normals are essential. An approximate surface normal at the point

    p = (x_i, y_j, z_ij)

is given by

    n = (dx/r, dy/r, dz/r)

where

    dx = z_{i−1,j} − z_{i,j},
    dy = z_{i,j−1} − z_{i,j},
    dz = 1/(N−1)

and

    r = sqrt(dx² + dy² + dz²).

This will work very well for smooth surfaces. For fractal surfaces averages over several such approximations for nearby points are more suitable.

Let l denote the unit length vector based at p and pointing towards a light source. Let further r be the reflected light vector, i.e.

    r = 2 cos θ · n − l,

where θ is the angle enclosed between n and l. If α denotes the angle between the view vector v = (0, −cos φ, sin φ) and the reflected vector r, then a simple illumination model, which takes into account only ambient, diffuse and specularly reflected light, may be written as

    I(p) = Ia,                              if cos θ < 0,
           Ia + fd · cos θ + fs · (cos α)^b, otherwise.    (2.19)

Here Ia denotes the contribution of the ambient light, fd and fs are factors to be chosen as weights for the diffuse and specular components, and b is an exponent which controls the shininess of the surface. Satisfactory results can be obtained e.g. with

    Ia = 0.0, fd = 0.6, fs = 0.4, b = 2.0.

In the algorithm DisplayHeightField() this case may only occur for the points in the first row. This simple model is a very crude one. For example, it ignores colors of the light and the surface. Better models are described in more detail in standard textbooks such as [39] or [92].

2.7.5 The rendering

Slices of the surface, which are parallel to the yz-plane, are projected onto a line on the screen, which is parallel to the screen y-axis. Therefore we can render these lines one by one and independently of each other. Define

    P(y, z) = int[ (N−1)(y sin φ + z cos φ) ]

as the integer part of the y-value of the projection of a point (x, y, z) in pixel units. Then the algorithm for the rendering of all N columns of data looks as follows (see DisplayHeightField()). Here Intens(i,k) denotes the intensity calculated for the pixel (i,k) of the screen, while I(x,y,z) denotes the corresponding intensity for a point (x,y,z) on the surface.

In the first loop all intensities in the i-th column are initialized to 0. Then the data points in the i-th column are projected one by one, starting at the front and moving to the back. The variable horizon keeps track of the current maximum of all projected y-values, and this explains the names of both the variable and the rendering method. If a point is projected below this horizon, it is not visible, and the normal and intensity calculations can be saved. Otherwise, the correct intensity is entered at the projected screen coordinates and the value of the horizon is updated. A special case arises when there are one or more pixels left out between the old and the new horizon. At these locations an interpolated intensity has to be applied. In the above scheme linear interpolation is employed, which corresponds to Gouraud shading. Alternatively one could interpolate surface normals and recompute intensities for the intermediate points, a procedure known as Phong shading.

ALGORITHM DisplayHeightField (Z, N, phi)
Title Computer graphics for surfaces given by array of elevation data
Arguments Z[][] doubly indexed square array of heights
          N number of elements in each row/column of Z
          phi view angle
Globals   Intens[][] output array for intensities of size at least N by sqrt(2)·N
Variables i, j, k integers
          p, p0, p1 projected heights, screen y-coordinates
          x[] N real x-values
          y[] N real y-values
          horizon integer (the floating horizon)
          h real parameter for interpolation
Functions P(y,z) = int [(N-1) * (y * sin(phi) + z * cos(phi))]
          I(x,y,z) = intensity function, see Eqn. (2.19)

BEGIN
FOR i := 0 TO N-1 DO
    x[i] := i / (N-1)
    y[i] := i / (N-1)
END FOR
FOR i := 0 TO N-1 DO
    FOR k := 0, 1, 2, ... DO
        Intens[i][k] := 0
    END FOR
    p0 := P(y[0], Z[i][0])
    Intens[i][p0] := I(x[i], y[0], Z[i][0])
    horizon := p0
    FOR j := 1 TO N-1 DO
        p1 := P(y[j], Z[i][j])
        IF p1 > horizon THEN
            Intens[i][p1] := I(x[i], y[j], Z[i][j])
            p := p1 - 1
            WHILE p > horizon DO
                h := (p - p0) / (p1 - p0)
                Intens[i][p] := (1-h) * I(x[i], y[j-1], Z[i][j-1]) + h * Intens[i][p1]
                p := p - 1
            END WHILE
            horizon := p1
        END IF
        p0 := p1
    END FOR
END FOR
END

It depends very much on the data and the projection angle φ chosen, which portion of the screen coordinate system actually should be displayed. In the worst case (φ = π/4 and data occurring at (x, 0, 0) as well as at (x, 1, 1)) only a viewport containing at least N by √2·N pixels will contain the whole surface. To get aesthetically pleasing results one obviously has to experiment with the projection and introduce proper scalings in the x and y screen coordinates.

2.7.6 Data manipulation

An alternative way to match the viewport to the displayed object is to change the data instead. For example, a linear transformation

    z_ij = a · h(x_i, y_j) + b

will shift the picture up or down and may scale the heights by the factor a. This may be desirable for other reasons as well. Often the height variations in the data are too strong or not pronounced enough for a nice picture. A linear scaling may help in this case. But also nonlinear scalings are of interest, as color Plates 8 and 10 (D = 2.15, cubed height) demonstrate. In the case of the electrostatic potentials of the Mandelbrot set in Chapter 4 only a nonlinear scaling of the potential was able to change the initially rather dull looking images into something dramatic.

Another useful manipulation may be to pull down the heights of points near the front and/or the back of the surface in order to eliminate the impression of a surface that has been artificially cut off in the front and the back.

2.7.7 Color, anti-aliasing and shadows

The surface may have different colors at different places. E.g. color may be linked to height and possibly also to surface normals as seen in the color plates. It is easy to add color to our little rendering system just by multiplying the color of each projected point with the intensity as calculated in the algorithm. However, it must be noted that this is straightforward only in the case that a full color (24 bit) graphics system is eventually used for display. Some restrictions apply for smaller systems with color lookup tables.

The algorithm DisplayHeightField() can easily be extended to include anti-aliasing by means of supersampling. The principle is to first interpolate (multilinearly or with splines) the surface to obtain a representation in a higher resolution. Then the algorithm is applied to the "inflated" surface and finally pixel averaging results in the anti-aliased image. This process can take advantage of the piecewise linear structure in the interpolated surface, thus, the computational burden of anti-aliasing is not as severe as it seems. In addition, distance fading as in [38] or simulated fog as in the color plates may be employed to further enhance picture quality.

The same algorithm that eliminates the hidden surfaces in our rendering may be used to calculate shadows from several light sources. This is the case because a shadow reflects nothing but the visibility from a given light source. Due to the simple structure of the algorithm only infinitely far light sources with directions parallel to the xz-plane or the yz-plane can be handled. A complication arises: The rendering of the different data columns is no longer independent. Points in one column may cast a shadow on points of other columns. Thus, one complete row of data has to be rendered simultaneously. The cover picture was rendered with two light sources, one from behind the view point and the other from the right. The shadows from the second light source are clearly visible.

2.7.8 Data storage considerations

... the first differences

    Δ_ij = z_ij − z_{i−1,j}   for i = 1, ..., N−1.

These first differences will be much smaller than the values z_ij, thus, they may be stored in fewer bits per number. Still the original data may be recovered exactly by summation. If the height function h is also differentiable then the second differences

    Δ²_ij = Δ_ij − Δ_{i−1,j}   for i = 2, ..., N−1,

or, more explicitly,

    Δ²_ij = z_ij − 2 z_{i−1,j} + z_{i−2,j}   for i = 2, ..., N−1,

can be expected to be even smaller in magnitude than the first differences. Thus, we may be able to store the numbers ...
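The idea of storing first or second differences and recovering the heights exactly by summation can be sketched directly (an illustration of the scheme, not code from the book; the smooth test surface is an arbitrary choice):

```python
import numpy as np

def second_differences(z):
    """Column-wise second differences of an elevation array; for smooth
    data these are much smaller in magnitude than the heights themselves,
    so they can be quantized with fewer bits."""
    return np.diff(z, n=2, axis=0)

def reconstruct(z0, z1, d2):
    """Exactly invert second differencing, given the first two rows."""
    first = np.concatenate([(z1 - z0)[None, :], d2]).cumsum(axis=0)  # 1st differences
    return np.concatenate([z0[None, :], z0[None, :] + first.cumsum(axis=0)])

# a smooth-ish illustrative surface
z = np.random.default_rng(0).normal(size=(8, 8)).cumsum(axis=0).cumsum(axis=0)
d2 = second_differences(z)
z_back = reconstruct(z[0], z[1], d2)
print(np.allclose(z_back, z))   # True
```

Only the first two rows plus the (small) second differences need to be kept, which is where the storage savings come from.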
two major building blocks used in random fractals. Here we review some of the basic definitions (see e.g. [8,107]). [44] is another elementary text for probability theory.

Sample space. An abstract representation for a trial and its outcomes such as one throw of a die or ideally also one call to a (pseudo) random number generator is the sample space. Each distinguishable and indecomposable outcome, or simple event, is regarded as a point in sample space S. Thus, for a typical random number generator the sample space would consist of all integers between 0 and a very large number such as 2³¹ − 1. Every collection of simple events, i.e. a set of points in S, is called an event.

Probability. For every event E in the sample space ...

(b) f(x) has at most a finite number of discontinuities in every finite interval on the real line.

(c) ∫_{−∞}^{∞} f(x) dx = 1.

Then X is said to be a continuous random variable with probability density function f(x).

Distribution function. A useful picture of the spread of probability in a distribution is provided by the distribution function F(x), which is defined as

    F(x) = P(X ≤ x).

From its definition, 0 ≤ F(x) ≤ 1, F(−∞) = 0, F(∞) = 1, and P(a < X ≤ b) = F(b) − F(a) ≥ 0. For a continuous random variable with probability density f(x) we have ... For example, a random variable which is uniformly distributed over an interval [a, b] has the density

    f(x) = 1/(b − a),  if a ≤ x ≤ b,
           0,          otherwise.
provided that the integral

    ∫_{−∞}^{∞} |x| f(x) dx

converges. Otherwise, E(X) is said not to exist. The name has its justification in the weak law of large numbers, which states, roughly speaking, that if X₁, X₂, ..., Xₙ is a random sample from a distribution for which E(X) exists, then the average

    x̄ = (1/n) Σ_{i=1}^{n} X_i

converges in probability to the mean E(X). For the uniformly distributed random variable defined two paragraphs above we have of course E(X) = (a + b)/2, whereas for a random variable with a normal distribution N(μ,σ) we obtain E(X) = μ. The expectation E is a linear function in the sense that for two random variables X and Y and real numbers a and b we have E(aX + bY) = aE(X) + bE(Y). If g is a function, then g(X) is another random variable and its expectation is

    E(g(X)) = ∫_{−∞}^{∞} g(x) f(x) dx.

Variance. Generally a random variable is not adequately described by just stating its expected value. We should also have some idea of how the values are dispersed around the mean. One such measure is the variance, denoted by var X and defined as

    var X = E[ (X − E(X))² ].

The positive square root of the variance is known as the standard deviation of X, and is denoted by σ. We also have

    var X = E(X²) − E(X)².

The variance of a random variable which is uniformly distributed over [a, b] is easily calculated as

    var X = (1/12)(b − a)²

and for random variables with normal distribution N(μ,σ) the variance is σ². If two random variables X, Y are independent, i.e.

    P(X ≤ x and Y ≤ y) = P(X ≤ x) · P(Y ≤ y)

for all x, y ∈ R, then we have for all a, b ∈ R

    var(aX + bY) = a² var X + b² var Y.

Covariance and correlation. A measure which associates two random variables is given by the covariance. The covariance of X, Y, written C(X,Y), is defined as

    C(X,Y) = E[ (X − E(X))(Y − E(Y)) ].

If the two random variables are independent, then their covariance is 0, of course. If units of X or Y are changed then the covariance also changes. This defect is removed by considering the correlation coefficient, which is the covariance divided by both standard deviations.

Random functions. Let T be a subset of the real line. By a random function of an argument t ∈ T we mean a function X(t) whose values are random variables. Thus, a random function X is a family of random variables X(s), X(t), etc. Formally, a random function is considered to be specified if for each element t ∈ T we are given the distribution function

    F_t(x) = P(X(t) ≤ x)

of the random variable X(t), and if also for each pair of elements t₁, t₂ ∈ T we are given the distribution function

    F_{t₁,t₂}(x₁,x₂) = P( X(t₁) ≤ x₁ and X(t₂) ≤ x₂ )

of the two-dimensional random variable Z(t₁,t₂) = (X(t₁), X(t₂)), and so on for all finite dimensional random variables

    Z(t₁,t₂,...,tₙ) = (X(t₁), X(t₂), ..., X(tₙ)).

Stationarity. The random function will be called stationary, if all the finite dimensional distribution functions defined above remain the same when all points t₁, t₂, ..., tₙ are shifted along the time axis, i.e. if

    F_{t₁+s, t₂+s, ..., tₙ+s}(x₁, ..., xₙ) = F_{t₁, t₂, ..., tₙ}(x₁, ..., xₙ)

for any t₁, t₂, ..., tₙ and s. The physical meaning of the concept of stationarity is that X(t) describes the time variation of a numerical characteristic of an event such that none of the observed macroscopic factors influencing the occurrence of the event change in time.

Stationary increments. Fractional Brownian motion is not a stationary process, but its increments

    X(t + s) − X(t)

are. In general, we say that the random function X(t) has stationary increments, if the expected value of the increment is 0,

    E[ X(t + s) − X(t) ] = 0,

and if the mean square increments

    E[ |X(t + s) − X(t)|² ]
102
all disciplines. Finally, as the reader will see, the
do not depend on time mathematics itself is quite beautiful. All in all, this
t.Chapter 3 field of mathematics offers quite a lot to scientists:
aesthetic appeal, accessibility, and applicability.
We hope that this chapter convey some of these
104
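The uniform-distribution identities quoted above, E(X) = (a + b)/2 and var X = (b − a)^2/12, are easy to confirm with a quick simulation. The following Python sketch (not part of the original text) estimates both from samples:

```python
import random

def sample_stats(a, b, n=200_000):
    """Estimate the mean and variance of a random variable
    uniformly distributed over [a, b] from n samples."""
    xs = [random.uniform(a, b) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

random.seed(1)
m, v = sample_stats(2.0, 6.0)
print(m, v)  # near (2+6)/2 = 4 and (6-2)**2/12 = 1.333...
```

With 200,000 samples the estimates agree with the exact values to about two decimal places.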
This is the unpredictable nature of this process. Certain k-values lead to results which are quite predictable: a fixed or periodically repeating limiting value. But other k-values lead to results which are, for all intents and purposes, random.

The reader may object that the quite limited table of values for P_n when k = 4 can in no way be interpreted as a proof that the values do behave randomly. Nevertheless, it is a true fact which may be proved quite easily with techniques from dynamical systems theory. See, for example, [25].

3.1.3 Iteration

The principal ingredient in the mathematical formulation of the example from ecology was iteration. Iteration means repeating a process over and over again. That is just what we did to generate the list of successive population values for the logistic equation. Let us formulate this process in a slightly different manner. Consider the mathematical function F(x) = kx(1 − x). This function assigns to each number x a new number, the image of x, which we call F(x). So, as is usual with mathematical functions, F gives a rule for converting x to a new value, in this case F(x), or kx(1 − x).

We may repeat this process by computing F(F(x)). This means that we compute

F(F(x)) = F(kx(1 − x)) = k[kx(1 − x)][1 − kx(1 − x)]

and we can do it again, finding

F(F(F(x))) = F(k[kx(1 − x)][1 − kx(1 − x)])

and so forth. To make the notation less cumbersome, let us introduce the notation F^n(x) to mean the n-fold iteration of F; that is,

F^1(x) = F(x), F^2(x) = F(F(x)), F^3(x) = F(F(F(x))), …

Note that F^2(x) does not mean (F(x))^2 = F(x) · F(x). Rather, F^2(x) is the second iterate of the expression F.

Iteration is a very simple process that is perhaps easiest to explain using a scientific calculator. Consider such a calculator with its many special function keys like "x^2", "√x", "sin x", "exp x", and so forth. Each of these represents a mathematical function. Suppose we input a particular number and then repeatedly strike the "√x" key. What are we doing? We are simply iterating the square root function. For example, if we let S(x) = √x and input x = 256 into the calculator, then repeated strikes of "√x" yield the following iterates of 256:

S(256) = 16
S^2(256) = 4
S^3(256) = 2
S^4(256) = 1.414214…
S^5(256) = 1.189207…
S^6(256) = 1.090508…
S^7(256) = 1.044274…
⋮
S^20(256) = 1.000005…

Notice that successive iterates of S started with 256 converge quite quickly to 1. In fact, this happens no matter which number is initially input into the calculator. For example, if the initial input is 200, we get

S(200) = 14.14214…
S^2(200) = 3.760603…
S^3(200) = 1.939227…
S^4(200) = 1.392561…
⋮
S^20(200) = 1.000005…

Even if the initial input x satisfies 0 < x < 1, we still find that iterates tend to 1.

Let's iterate more of these functions. For example, what happens if we repeatedly strike the "x^2" key? Clearly, if x > 1, repeated squaring tends to enlarge the results. In fact, after only a few iterations, repeated squaring leads to an overflow message from the computer. If we write T(x) = x^2, another way of saying this is T^n(x) → ∞ as n → ∞ if x > 1. That is, the nth iterate of any x > 1 gets arbitrarily large.

Iterating the sine function S(x) = sin x (in radians) is quite different. The orbit of 123 begins with the numbers 123, −0.459…, −0.443…, −0.429…, and we have

S(123) = −0.459…
S^2(123) = −0.443…
S^3(123) = −0.429…
S^4(123) = −0.416…
⋮
S^17(123) = −0.312…
S^18(123) = −0.307…
S^19(123) = −0.302…
S^20(123) = −0.298…

Slowly, ever so slowly, successive iterates of sin x tend to 0:

S^73(123) = −0.185…
S^74(123) = −0.184…
S^75(123) = −0.183…

Another way of saying this is S^n(123) → 0 as n → ∞. That is, the orbit of 123 tends asymptotically to 0. We leave it to the reader to check that this happens no matter what x is initially input.

The cosine function behaves differently again. After many iterations the orbit of 123 settles down:

C^100(123) = 0.739085…
C^101(123) = 0.739085…

So the orbit of 123 is a sequence of numbers that eventually repeats, to the displayed accuracy, the single value 0.739085…. Nevertheless, once you have seen a few such examples of iteration of cosine, you come quickly to the conclusion that all orbits tend to 0.739085… or, to introduce a technical term, all orbits of the cosine function are stable. More precisely, a stable orbit is one which has the property that, if you change the initial input slightly, the resulting orbit behaves more or less similarly. This happens for S(x) = √x, since all initial inputs lead to orbits which tend to 1. This happens for T(x) = x^2, as long as 0 < x ≠ 1, for orbits of x < 1 tend to 0 while orbits of x > 1 tend to ∞.
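The calculator experiments above translate directly into a few lines of Python (a sketch, not from the original text):

```python
import math

def iterate(f, x, n):
    """Return the n-th iterate f^n(x)."""
    for _ in range(n):
        x = f(x)
    return x

print(iterate(math.sqrt, 256.0, 20))   # S^20(256) = 1.000005...
print(iterate(math.sqrt, 200.0, 20))   # S^20(200) = 1.000005...
print(iterate(math.cos, 123.0, 100))   # C^100(123) = 0.739085...
```

Any positive starting value for the square root gives the same limit 1, and any starting value at all for cosine gives the same limit 0.739085…, just as described in the text.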
One should think of stable orbits as "good" orbits. We mean good here in the following sense. Suppose your dynamical system represents a physical process whose outcome you would like to predict. Now, in setting up the physical model, you will probably make small errors in your observations that lead to your choice of an initial input. If the resulting orbit is stable, then chances are that these small errors will not alter the ultimate behavior of your system, and so the predictions based on the model will be more or less accurate.

Another reason why stable orbits are good orbits arises when round-off errors are encountered. Generally, when a dynamical system is iterated on a computer or calculator, each successive iteration yields only an approximation of the next value. That is, the computed orbit may differ slightly at each stage from the actual orbit. It is true that these round-off errors may accumulate and lead to major errors in the predictions. But usually, if the initial orbit is stable, these round-off errors will not matter. Small changes introduced at each stage of the computation will not affect the ultimate behavior of the orbit.

3.2 Chaotic dynamical systems

3.2.1 Instability: The chaotic set

Unfortunately, not all orbits of dynamical systems are stable. It is an unpleasant fact of life that even the simplest of dynamical systems possess orbits which are far from being stable. We call these orbits unstable. An unstable orbit is one for which, arbitrarily close to the given initial input, there is another possible input whose orbit is vastly different from the original orbit.

Unstable orbits come in many different guises. The unpredictable orbits of F(x) = 4x(1 − x), which seem to jump around randomly, are one type of unstable orbit. But there are other, simpler orbits which should also be termed unstable. For example, consider again the squaring function

T(x) = x^2.

The orbit of 1 is quite simple: T(1) = 1, so T^n(1) = 1 for all n. This is an example of a fixed point for the dynamical system. But this simple orbit is unstable. The reason is, suppose we make a slight error in computing the initial value 1. Then our input is x ≠ 1, and, as we have already seen, if x > 1 then T^n(x) → ∞ whereas if 0 < x < 1, then T^n(x) → 0. Thus, nearby initial conditions lead to vastly different types of orbits, some tending to 0 and others tending to ∞.

Clearly, it is of great importance to understand the set of all points in a given dynamical system whose orbits are unstable. Is the set of unstable orbits large, indicating a high degree of instability or unpredictability in the system? Or is it small, suggesting that the model we are using possesses a high probability of actually producing good results? Unfortunately, many simple dynamical systems possess large sets of initial conditions whose orbits are unstable. We will call the set of all points whose orbit is unstable the chaotic set. As we shall see, small changes of parameter may radically alter the makeup of the chaotic set. Nevertheless, to mathematicians, the important question is: what does this chaotic set look like? Is it large or small? Is it a simple set or a complicated one? And how does it vary as the dynamical system itself changes?

This is where fractals enter the field of dynamical systems. Very often, the set of points whose orbits are unstable forms a fractal. So these fractals are given by a precise rule: they are simply the chaotic set of a dynamical system.

3.2.2 A chaotic set in the plane

Up to now, all of our examples of dynamical systems have been one dimensional. But most phenomena in nature depend upon more than one variable (often many more variables), so it is appropriate to study higher dimensional dynamical systems. For the remainder of this paper we will consider two dimensional dynamical systems. The reason for this is twofold. First, mathematically speaking, much of what occurs in one-dimensional dynamics is fairly well (but not completely!) understood. On the other hand, two-dimensional dynamics presents a completely different set of possibilities which, to this day, remain unsolved. Second, as can be imagined, two dimensional dynamics are readily studied using computer graphics, with its conveniently two-dimensional graphics screen.

Let us begin with a rather simple dynamical system known as the Hénon map, named for the French astronomer M. Hénon who first encountered this map in 1974. This dynamical system involves two variables (x_n, y_n) which evolve according to the rule

x_{n+1} = 1 + y_n − a·x_n^2,  y_{n+1} = b·x_n

(in Hénon's classical example a = 1.4 and b = 0.3).

So what is happening? Many orbits tend under iteration to land on the same set A as depicted in Figure 3.1. A is called an attractor because all sufficiently nearby orbits converge to it. A is a strange attractor because it is not a "simple" object like a point or a cycle of points (a "periodic orbit"). In fact, A has all of the attributes of a fractal, as demonstrated in Figure 3.2. This figure depicts several successive enlargements which tend to separate bands in A, a typical occurrence in Cantor-like sets.
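The Hénon iteration, in its classical form with a = 1.4 and b = 0.3 (assumed here), fits in a few lines of Python (a sketch, not from the original text):

```python
def henon(x, y, a=1.4, b=0.3):
    """One step of the Henon map: (x, y) -> (1 + y - a*x^2, b*x)."""
    return 1.0 + y - a * x * x, b * x

# Orbit of (0,0); skip a short transient so the points lie on the attractor.
x, y = 0.0, 0.0
points = []
for i in range(10_000):
    x, y = henon(x, y)
    if i >= 100:
        points.append((x, y))

print(len(points))                             # 9900
print(all(abs(px) < 1.5 for px, _ in points))  # True: the orbit stays bounded
```

Plotting the collected (x, y) pairs reproduces the banded set A of Figure 3.1.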
Fig. 3.1: Two plots of the Hénon attractor. The orbit of (0,0) is depicted in Figure 3.1a, and the orbit of (−1,0) is shown in Figure 3.1b. Horizontally x varies from −1.5 to 1.5; vertically y varies from −0.4 to 0.4.

Fig. 3.2: Two successive enlargements of the Hénon attractor. The same 100,000 points as in Figure 3.1a are used. The regions shown are 0 ≤ x ≤ 1 and 0 ≤ y ≤ 0.3 on the left and 0.7 ≤ x ≤ 0.8 and 0.15 ≤ y ≤ 0.18 on the right.
We remark again that the exact structure of A is not understood and is an important open problem in mathematics. We also encourage the reader to experiment with the Hénon map with different parameter values, i.e., with different values for a and b. Compare the two orbits plotted in Figure 3.1: they are quite distinct from one another. Nevertheless, if you now plot these orbits, you see that the exact same picture results. That is to say, nearby orbits both tend to A, but once they are there, they are hopelessly scrambled. So the strange attractor A forms our set of chaotic orbits.

3.2.3 A chaotic gingerbreadman

Here is another example of a chaotic set in the plane. Consider the dynamical system given by

x_{n+1} = 1 − y_n + |x_n|,  y_{n+1} = x_n.

On a hexagonal region H this map acts as a periodic rotation of period 6 about (1,1), so the orbit of any point which remains in H (except (1,1)) is a finite set of 6 points.

This should be contrasted to the orbit of a point just outside of this region: Figure 3.3a displays the first 100,000 iterates of (−0.1, 0). Note how this orbit "fills" a region that looks like a "gingerbreadman." There is the stable hexagonal region H forming the belly and five other regions forming the legs, arms, and head of the gingerbreadman. In these five regions there is a unique orbit of period 5, and all other points have period 6 · 5 = 30. These points are, of course, points with stable orbits. The orbit which gives the shaded region in Figure 3.3 apparently enters every small region except H and the period 5 hexagon. Indeed, it may be proved rigorously that there is such an orbit that wanders around the gingerbreadman, coming arbitrarily close to any preassigned point.

An enlargement of a portion of Figure 3.3a is shown in Figure 3.3b. Note how the chosen orbit is evenly distributed in this enlargement. For these facts, we refer the reader to [24].

3.3 Complex dynamical systems

3.3.1 Complex maps

Now we come to one of the most alluring areas in dynamical systems, the study of the dynamics of complex maps. This study was initiated by P. Fatou and G. Julia in the early twentieth century. After a brief period of intense study during 1910-1925, the field all but disappeared. It was revived about ten years ago, thanks mainly to the discovery by Mandelbrot of the well known set that bears his name as well as to the spectacular computer graphics images that typically accompany these dynamical systems.
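Before moving on, the period-6 rotation inside the gingerbreadman's hexagonal region H (Section 3.2.3) is easy to verify numerically. A Python sketch, using the map in the form assumed above, x_{n+1} = 1 − y_n + |x_n|, y_{n+1} = x_n:

```python
def gingerbread(x, y):
    """One step of the gingerbreadman map (form as assumed in the text)."""
    return 1.0 - y + abs(x), x

p = (1.5, 1.0)      # a point inside the hexagonal region H
orbit = [p]
for _ in range(6):
    p = gingerbread(*p)
    orbit.append(p)

print(orbit[6] == orbit[0])   # True: the orbit closes up after exactly 6 steps
print(len(set(orbit[:6])))    # 6 distinct points before it repeats
```

Starting instead from (−0.1, 0), the same loop (run for 100,000 steps) produces the chaotic orbit filling the gingerbreadman of Figure 3.3a.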
To specify a complex number, we must specify both the real and the imaginary part. Since complex numbers are given by pairs of real numbers, they may be identified with points in the Cartesian plane in the obvious way, with the real part giving the x-coordinate and the imaginary part giving the y-coordinate.

We also define the absolute value of a complex number z = x + iy by

|z| = √(x^2 + y^2).

That is, |z| just measures the distance from the origin to z. This of course is exactly what the usual absolute value on the real line does.

Now let us consider what happens when we iterate F(z) = z^2. Recall that we investigated this exact dynamical system on the real line in Section 3.1.3 when we introduced the notion of iteration. Suppose we start with a point z_0 = x_0 + iy_0 with |z_0| < 1. Then

|F(z_0)| = |x_0^2 − y_0^2 + 2ix_0y_0|
         = √(x_0^4 + y_0^4 − 2x_0^2·y_0^2 + 4x_0^2·y_0^2)
         = √((x_0^2 + y_0^2)^2)
         = |z_0|^2.

Hence |F(z_0)| < |z_0| as long as 0 < |z_0| < 1. It follows that one iteration of F moves z_0 closer to 0. Continuing this process (and using the fact that repeated squaring of a number |z_0| < 1 tends to 0) we see that any z_0 with |z_0| < 1 has an orbit which tends to 0. Thus all of these orbits are stable. Similar arguments show that, if |z_0| > 1, then the orbit of z_0 tends to ∞ under iteration. Hence all points z_0 in the plane with |z_0| ≠ 1 have stable orbits.

What about points with |z_0| = 1? These are points which lie on the unit circle in the plane. Clearly, any such point lies in the chaotic set, since points just outside the circle have orbits tending to ∞ while points just inside have orbits tending to 0. The reader may object that this does not seem to be too chaotic a situation. After all, we know exactly what is happening. All points off the unit circle have predictable behavior, and all points on the unit circle have orbits which remain there forever. Indeed, if |z_0| = 1, then

|F(z_0)| = x_0^2 + y_0^2 = 1.

However, the dynamics of F on the unit circle are exactly the same as those of F(x) = 4x(1 − x) on the interval 0 < x < 1! While we will not prove this here, the proof is quite simple and we refer the reader to [25] for more information about this.

Thus we see that the complex plane decomposes into two regions: the stable set consisting of those points with orbits tending to 0 and ∞, and the chaotic set which is the unit circle. This is typical, as we shall see, for complex maps. The chaotic set, the object of our interest, is called the Julia set, after the French mathematician Gaston Julia who first studied this set in the early twentieth century.

3.3.2 The Julia set

How do we use computer graphics to find the Julia set? Actually, there are several ways. The first, and easiest to program and to compute, is the inverse iteration method (see IIM and its variations in Chapter 4). This method works as follows. Let us work with F(z) = z^2 for the moment. Given z_0, we ask which points map to z_0 under F. Naturally, these points are the square roots of z_0, and, just as in the real case, there are two of them. To compute the two square roots of a complex number z_0, we need to introduce the polar representation of z_0. In this representation, z_0 is determined by the polar angle and the distance from z_0 to the origin. The polar angle is the angle that the ray from the origin to z_0 makes with the positive x-axis, measured in a counterclockwise direction (usually in radians). Negative angles are measured in the clockwise direction. We let θ_0 denote the polar angle of z_0. See Figure 3.5. The radius of z_0 is simply its absolute value r_0 = |z_0|. The two numbers r_0 and θ_0 completely specify z_0.
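The trichotomy just described (orbits with |z_0| < 1 tend to 0, orbits with |z_0| > 1 tend to infinity, and orbits with |z_0| = 1 stay on the circle) can be watched directly in Python (a sketch, not from the original text):

```python
import math

def modulus_after(z, n):
    """|F^n(z)| for F(z) = z*z."""
    for _ in range(n):
        z = z * z
    return abs(z)

print(modulus_after(0.9 + 0.1j, 10))   # essentially 0
print(modulus_after(1.1 + 0.1j, 10))   # astronomically large
print(modulus_after(complex(math.cos(1.0), math.sin(1.0)), 10))  # still 1.0...
```

Ten squarings already raise the starting modulus to the power 2^10 = 1024, which is why the separation of the three cases is so dramatic.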
Now how do we compute √z_0? The two square roots are given in terms of the polar representation (r_0, θ_0) of z_0. The new polar radius is simply √r_0 (where we choose the positive square root) and the two new angles are θ_0/2 and π + θ_0/2.

For example, if z_0 = i, then r_0 = 1 and θ_0 = π/2. Thus √i has the polar representation either r_0 = 1, θ = π/4, or r_0 = 1, θ = π + π/4.

To compute a backward orbit of z_0, we simply take successive square roots. At each stage we are free to choose one of two polar angles. Hence there are lots of different possible backward orbits of z_0 (as long as z_0 ≠ 0). The amazing fact is that any of them converges to the Julia set of F! That is, if we plot successive square roots of z_0 ≠ 0, omitting the first few points along the backward orbit, we produce the set of unstable orbits which, in this case, is just the unit circle.

The Julia set for any function of the form F(z) = z^2 + c may be obtained in precisely the same way. The displayed algorithm JuliaIIM() gives a simple BASIC program to plot the Julia set of z^2 + c for various c-values. Note that this can be interpreted as an example of an iterated function system (IFS) with {w_n, p_n : n = 1, 2} where …

3.3.3 Julia sets as basin boundaries

There is another method that is commonly used to plot Julia sets of polynomials like F(z) = z^2 + c (see the Boundary Scanning Method BSM and its variants in Chapter 4). This method plots all points which under iteration of F do not escape to ∞. It is a fact that the boundary or frontier of this set is precisely the Julia set; examples appear in Figure 3.7. These plots were obtained as follows: each point in a 1024 × 1024 grid defined by |x|, |y| < 2 was iterated up to 30 times. If |x|, |y| < 2 and |F^n(x + iy)| > 3 for some n, then it can be proved that such a point has an orbit which tends to ∞. Such a point therefore does not lie in the Julia set. We have colored such points white. On the other hand, points whose first 30 iterates remain within the circle of radius 3 are colored black. It can be proved that points whose orbits remain inside this circle for all iterations lie in the so-called filled-in Julia set, i.e., in the set whose boundary is precisely the Julia set.

As a remark, we note that the filled-in Julia set takes much, much longer
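The JuliaIIM() listing itself is in BASIC and not reproduced here; the inverse iteration idea, however, fits in a few lines of Python (a sketch, not from the original text): repeatedly pick one of the two square roots of z − c at random, and discard the first few points.

```python
import cmath
import random

def julia_iim(c, n=2000, skip=20):
    """Backward orbit of z -> z*z + c: each preimage of z is one of the
    two square roots of z - c, chosen at random."""
    z = complex(1.0, 1.0)   # (almost) any starting point works
    points = []
    for i in range(n):
        z = cmath.sqrt(z - c)
        if random.random() < 0.5:
            z = -z
        if i >= skip:
            points.append(z)
    return points

pts = julia_iim(0j)   # for c = 0 the Julia set is the unit circle
print(max(abs(abs(z) - 1.0) for z in pts))  # tiny: all points sit on |z| = 1
```

For other values of c, plotting the returned points sketches the corresponding Julia set.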
to compute than the simple Julia set of z^2 + c as described in the last section. Somehow, though, these pictures are more appealing! Perhaps it is the length of time it takes to compute, or perhaps the proportion of black to white which results on the screen, but, for whatever reason, filled-in Julia sets make much more interesting pictures. However, mathematically speaking, the previous algorithm gives precisely the set of points which comprise the chaotic region.

Figure 3.7 shows the result of applying this algorithm to depict the Julia sets of various quadratic functions.

Fig. 3.7: The Julia sets of a. z^2 − 1, b. z^2 + 0.25, c. z^2 − 0.9 + 0.12i, and d. z^2 − 0.3. The region shown is −2 ≤ Re z ≤ 2, −2 ≤ Im z ≤ 2.

3.3.4 Other Julia sets

The next two sections of this paper are significantly more mathematical than the previous sections. They represent some of the recent advances in the field. We will not give many of the technical details; however, the algorithms that generate the Julia sets of these maps are just as accessible as those in previous sections.

Any complex analytic function has a Julia set. As with the quadratic family, the Julia sets of other complex functions are often fractals. Besides polynomials, the classes of maps most often studied are rational maps (i.e., quotients of two polynomials) and entire transcendental functions (i.e., functions with no singularities in the finite plane and which are not polynomials). In this section we will describe a method for computing the Julia sets of this latter class.

First, everything we say will be true for the class of entire transcendental functions which are of finite type, that is, for maps with only finitely many critical values and asymptotic values. A point q is a critical value if q = F(p) where F′(p) = 0. A point q is an asymptotic value if there is a curve tending to ∞ along which the values of F tend to q. For example, the maps λ sin z and λ cos z have infinitely many critical points but only two critical values, ±λ.
One reason for the importance of these maps is that the Julia set has an alternative characterization which makes it quite easy to compute. For maps like λe^z, λ sin z, and λ cos z, the Julia set is also the closure of the set of points whose orbits tend to infinity. Note that this is quite different from the definition of the Julia set for polynomials (see Chapter 4). In the latter case, the Julia set formed the boundary of the set of escaping orbits. In the entire case, points whose orbits escape actually lie in the Julia set.

The reader may protest that there is nothing apparently chaotic about an orbit tending to ∞. However, in the entire case, ∞ is an essential singularity, which is the main factor distinguishing these maps from polynomials. By the Great Picard Theorem, any small neighborhood of ∞ is smeared infinitely often over the entire plane, and this suggests that orbits tending to ∞ experience quite a bit of stretching. Also, the Julia set is the closure of the set of escaping orbits. That is, the Julia set contains not only the escaping points, but also any limit point of a sequence of such points. Included among these limit points are all of the repelling periodic points, so the Julia set is by no means solely comprised of escaping points.

Thus, to produce computer graphics images of Julia sets of entire functions like e^z or sin z, we simply test orbits for escape. For E_λ, we test whether an orbit ever contains a point with real part greater than 50 (since e^50 is quite large). A similar test works using the absolute value of the imaginary part for sin z. All of the pictures of the Julia sets of λe^z and λ sin z were generated using this algorithm, with the exception of Fig. 3.8, which uses a modification.

To be more specific, let us outline the algorithm that produces the Julia set of sin z, as depicted in Figure 3.10. If we write z = x + iy and sin z = x_1 + iy_1, then

x_1 = sin x cosh y
y_1 = cos x sinh y.

This gives us the formula to compute the main body of the loop in the program. If |y_1| > 50 at any stage in the iteration, then we "pretend" that the original point has escaped to ∞ and color it white. On the other hand, if the original point never escapes, we color it black. These points do not lie in the Julia set. We used 25 iterations to draw the Julia set in Figure 3.10.
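The escape test just described for λ sin z, using sin(x + iy) = sin x cosh y + i cos x sinh y, looks like this in Python (a sketch for real λ, not from the original text):

```python
import math

def escapes(x, y, lam=1.0, maxiter=25, bound=50.0):
    """Escape test for z -> lam*sin(z) with z = x + iy and real lam:
    report True (white pixel) once |Im z| exceeds the bound."""
    for _ in range(maxiter):
        x, y = lam * math.sin(x) * math.cosh(y), lam * math.cos(x) * math.sinh(y)
        if abs(y) > bound:
            return True
    return False

print(escapes(0.0, 2.0))   # True: points far from the real axis blow up quickly
print(escapes(1.0, 0.0))   # False: on the real axis the orbit is attracted to 0
```

Running this test over a grid of pixels and coloring the escaping points white reproduces pictures like Figure 3.10.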
Fig. 3.10: The Julia set of S_λ(z) = λ sin z, where λ = 1 + ε, is shown in white. The region is −2π < Re z < 2π, −2π < Im z < 2π. a. ε = 0, b. ε = 0.1i, c. ε = 0.4i.
… of the singular values(11).) When λ > 1/e the two fixed points for λe^z disappear from the real line and E_λ^n(0) → ∞.

Fig. 3.11: The graphs of E_λ(z) = λe^z where λ < 1/e and λ > 1/e.

(11) See e.g. the discussion in Section 4.2.2.

The fact that the escape of all singular values implies that the Julia set explodes is a recent theorem of D. Sullivan, and was extended to the entire case by L. Goldberg and L. Keen. What Sullivan's theorem actually says is: if there is any component of the complement of the Julia set, then it must have associated with it at least one singular value. Thus, if all singular values tend to ∞ (and thus are in the Julia set), there can be no stable behavior whatsoever.
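The threshold λ = 1/e for the fixed points of λe^x can be checked with one line of calculus: g(x) = λe^x − x has its minimum at x = −ln λ, where g takes the value 1 + ln λ, so real fixed points exist exactly for λ ≤ 1/e. A Python sketch (not from the original text):

```python
import math

def fixed_point_gap(lam):
    """Minimum of lam*e^x - x over the reals; negative means
    lam*e^x = x has two real solutions, positive means none."""
    x = -math.log(lam)          # where the derivative lam*e^x - 1 vanishes
    return lam * math.exp(x) - x

print(fixed_point_gap(0.2))          # < 0: two real fixed points (lam < 1/e)
print(fixed_point_gap(0.5))          # > 0: no real fixed points (lam > 1/e)
print(fixed_point_gap(1 / math.e))   # ~ 0: the saddle-node bifurcation value
```

At λ = 1/e the two fixed points q and p of the text coalesce, which is exactly the saddle-node bifurcation discussed next.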
A word about the bifurcation which occurs for λe^z at λ = 1/e. At this value the two fixed points q and p coalesce into one neutral fixed point. This is called the saddle-node bifurcation. When λ > 1/e it seems that these fixed points disappear. However, these fixed points have become complex. Indeed, they form the central points of the main spiral in Figure 3.9. The reason that we see black blobs in this picture is that these fixed points are very weakly repelling: it takes a long time for nearby orbits to go far away. Since our algorithm only computed the first 25 points along an orbit, certain nearby orbits had not yet escaped. If one were to run the same program with 50 or 100 or 200 iterations, one would see a gradual shrinking of the black region. On the other hand, no larger number of iterations would change Figs. 3.8 or 3.10.

There are other examples of similar explosions in the dynamics of complex functions. For example, the function iλ cos z experiences a similar saddle-node explosion as λ increases through 0.67.

The typical situation occurs when we have a graph as in Figure 3.12. Here the function P_0 has a saddle-node fixed point at q. Note that the orbit of p depicted at first escapes, but then returns to be attracted by q. This is an example of a homoclinic orbit. If we perturb this map to

P_{−ε}(x) = P_0(x) − ε

then we clearly produce two new fixed points, one attracting and one repelling. On the other hand, for P_ε(x) = P_0(x) + ε there are no fixed points on the real line. As in the last section, as ε increases through 0, the family P_ε undergoes a saddle-node bifurcation. See Figure 3.13.

There is a difference between this bifurcation and the saddle-node bifurcation for λe^z described in the previous section. In this case, the critical point …
… for C_λ. At 2.97 the saddle-node occurs; the basin for this neutral fixed point …

Chapter 4

Heinz-Otto Peitgen

4.1 Introduction

… attention is given to a background and graphics discussion of

• the Mandelbrot set.

… approach seems to be the most effective way to … the action of f_c on C translates to an action on S^2, i.e. …
Fig. 4.2: Basin of attraction of ∞ in black.

(Readers who feel uncomfortable with ∞ may of course work with F_c and discuss the point 0.) A_c(∞) has a natural boundary, i.e. there are always points z_0 which generate orbits which do not approach ∞ and thus stay bounded. This is seen immediately because, for example, f_c always has two fixed points (i.e. solutions of z^2 + c = z). The boundary of A_c(∞) is denoted by ∂A_c(∞) and is called the Julia set of f_c. We also use the symbol J_c = ∂A_c(∞). This is the central object of interest in the theory of iterations on the complex plane. Together with A_c(∞) and J_c goes a third object (sometimes called the filled-in Julia set):

K_c = C \ A_c(∞) = {z_0 ∈ C : f_c^k(z_0) stays bounded for all k}.   (4.4)

Obviously, we have that …

J_c = closure {z : f_c^k(z) = u_0 for some integer k}.   (4.6)

Note that in general f_c^k(z) = u_0 has 2^k solutions, i.e. the total number of iterated preimages of u_0 obtained by recursively solving the equation z^2 + c = a is

n(k) = 2^(k+1) − 1,  k = 0, 1, …

(beginning with a = u_0 yielding u_1 and u_2, then a = u_1 and a = u_2 yielding u_3, u_4, u_5, u_6, etc.; see Chapter 3 for solving equations in C). The recursion is nicely represented in a tree. Figure 4.4a shows a labeling of the n(k) knots, representing solutions in the recursion, which is induced naturally. The notation "closure" in Eqn. (4.6) means that each point in J_c can be obtained as a limit of a sequence composed of points from the iterated preimages. Let …
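The preimage recursion behind Eqn. (4.6) can be sketched in a few lines of Python (not from the original text): each node a of the tree contributes its two square roots of a − c on the next level, giving n(k) = 2^(k+1) − 1 nodes through level k.

```python
import cmath

def preimage_tree(u0, c, k):
    """Levels 0..k of iterated preimages of u0 under f_c(z) = z*z + c,
    obtained by recursively solving z*z + c = a."""
    levels = [[u0]]
    for _ in range(k):
        nxt = []
        for a in levels[-1]:
            r = cmath.sqrt(a - c)
            nxt.extend([r, -r])
        levels.append(nxt)
    return levels

tree = preimage_tree(1 + 0j, -1 + 0j, 4)
print(sum(len(level) for level in tree))   # n(4) = 2**5 - 1 = 31
```

Plotting the deeper levels of such a tree (and omitting the first few) approximates J_c, exactly as in the IIM of Chapter 3.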
Then p is called completely labeled, provided the vertices of p have labels which are not all the same. A good approximation of J_c is obtained by coloring all completely labeled pixels in the lattice. Thus it remains to decide (in approximation) whether v_i ∈ A_c(∞). The answer is yes, provided that |f_c^k(v_i)| > R for some k ≤ N_max. Otherwise, it is assumed (with some error depending on N_max and R, which define the 'iterative resolution' of the experiment) that v_i ∈ K_c. Note that N_max should be chosen as a function of the pixel resolution of the graphical device and the magnification factor, if one computes blow-ups. Also, in some cases (similar to IIM), no matter how large one may choose N_max, the resulting image will practically not improve. The mathematical reasons for such effects are sometimes very delicate and are exemplified and discussed in [83].

4.2.1 The Mandelbrot set

The Mandelbrot set, discovered by B.B. Mandelbrot in 1980 [67], is our next object. It is considered to be the most complex object mathematics has ever "seen". It is a rather peculiar fractal in that it combines aspects of self-similarity with the properties of infinite change. Subsequently we will discuss how the Mandelbrot set can be considered as the pictorial manifestation of order in the infinite variety of Julia sets (for each c there is a filled-in Julia set K_c). The key to understanding the Mandelbrot set is the following rather crude classification: each filled-in Julia set K_c is(14) … In other words, M lives in the complex c-plane, which serves as a plane of "control parameters" for the K_c's. Figure 4.6 shows an image of M (in black).

(14) Figures 4.3a,b,c,e,g are connected Julia sets. A set is connected if it is not contained in the union of two disjoint open sets. Figures 4.3d,f,h are Cantor sets. A Cantor set …
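The image of M in the c-plane is usually produced with the standard orbit-of-the-critical-point test: c belongs to M exactly when the orbit of 0 under z → z^2 + c stays bounded. A Python sketch of this widely used criterion (not from the original text):

```python
def in_mandelbrot(c, nmax=200, radius=2.0):
    """Crude membership test for M: iterate z -> z*z + c from z = 0
    and report False as soon as the orbit leaves the disk |z| <= radius."""
    z = 0j
    for _ in range(nmax):
        z = z * z + c
        if abs(z) > radius:
            return False
    return True

print(in_mandelbrot(0j))        # True  (K_0 is the closed unit disk)
print(in_mandelbrot(-1 + 0j))   # True  (orbit 0, -1, 0, -1, ... is periodic)
print(in_mandelbrot(1 + 0j))    # False (orbit 0, 1, 2, 5, ... escapes)
```

Coloring each pixel of a c-plane grid by this test yields pictures like Figure 4.6; nmax plays the role of the iterative resolution discussed above.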
IIM). Given any point u_{m,k} ≠ u_0 on the k-th level of the tree in Figure 4.4a there is a unique path on the tree from u_{m,k} to u_0 which is determined by the forward iteration of u_{m,k} (k times):

f_c^k(u_{m,k}) = u_0.

Now, the idea is to stop using u_{m,k} in the preimage recursion (i.e. to cut off the subtree starting at u_{m,k}) provided that the derivative

(f_c^k)'(u_{m,k}) = ∏_{l=0}^{k−1} f_c'(z_l),  z_l = f_c^l(u_{m,k}),  l = 0, …, k,

…
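The chain-rule product above, with f_c'(z) = 2z, is straightforward to evaluate along an orbit. A Python sketch (not from the original text):

```python
def orbit_derivative(z, c, k):
    """|(f_c^k)'(z)| for f_c(z) = z*z + c, computed as the product of
    |2*z_l| over the orbit points z_l = f_c^l(z), l = 0, ..., k-1."""
    deriv = 1.0
    for _ in range(k):
        deriv *= abs(2 * z)
        z = z * z + c
    return deriv

print(orbit_derivative(2 + 0j, 0j, 3))  # 1024.0 = d/dz z**8 at z = 2
```

In the modified IIM one prunes the subtree below u_{m,k} once this quantity exceeds a chosen bound, since further preimages there contribute almost no new detail.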
Fig. 4.7: Approximation of K_c by preimages of S_R: K_c connected.

We shall now proceed recursively: For k = 1, 2, … we compute the preimage of S_{k−1} under f_c, i.e. with S_0 = S_R

S_k = {z : f_c(z) ∈ S_{k−1}}.   (4.11)

This amounts to solving equations z^2 + c = b, b ∈ S_{k−1}. Let us investigate the S_k's. We claim that S_1 is a closed curve which can be thought of as a deformed circle (see Figure 4.7). This is certainly true if R is very large, because near z = ∞ (see Eqn. (4.2)) f_c acts like f_0(z) = z^2, slightly perturbed by c. Since the preimage of S_R under f_0 is a concentric circle with radius √R we may conclude that S_1 is indeed a circle-like curve. Next we discuss the S_k's, k > 1. The following is a tautology:

• either all S_k's are circle-like,
• or there is k* > 1, so that the S_k's are circle-like for 0 ≤ k < k* and they are not circle-like for k ≥ k*.   (4.12)

The nested sequence of circle-like S_k's in the first case approximates K_c better and better as k → ∞ and, therefore, K_c must be connected. What can be said about the second case? Note that typically any b ∈ S_{k−1} has two preimages a_1 and a_2 under f_c. The crucial observation is that for the preimages constituted by a_1 and a_2 only the following three cases are possible:

Fig. 4.8: Possible preimages of S_{k−1}.

Either, as b turns around once on S_{k−1}, a_1 sweeps out one half circle-like curve (1) and a_2 sweeps out the other one, or they sweep out two different curves (3), or a_1 sweeps out the upper leaf and a_2 sweeps out the lower leaf of the figure-eight in (2). Why is it that the two circle-like curves obtained as a_1 and a_2 move around must be identical, separate, or coincide in exactly one point? The answer leads to the heart of the matter: Assume, for example, that a_1 and a_2 sweep out curves which intersect in two
points w\ and W2 (see Figure 4.9). 4.2.3 Level sets
Hence, wi is a double root of z 2 + c = bi and u>2 is
The approximation of Kc by the S\’s has a very nice
a double root of z2 + c = 62 (bi y 62 , because
potential for graphical representations. (—>
otherwise one has a contradiction with fc being of
LSM/J). The Sk’s look like level sets of a profile in
2nd degree). But this is impossible, because the
central projection from infinity. Figure 4.11 gives
equation z2 + c = b allows double roots only when
an example.
b = c and those are wi = 0 = u>2. This justifies
situation (2) in Figure 4.8 . One uses the following The contours of the level sets obtained by
definition : LSM - i.e. the Sk's discussed above, if T is the
complement of a large disk - have a beautiful and
2
For /c( z) = z + c c is called a critical value. Its important interpretation as equipotential curves of
preimage 0 is called a critical point. the electrostatic potential of Kc- This is our next
objective which will lead to some more
sophisticated algorithms.
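The recursion (4.11) is easy to explore numerically: every b on Sk-1 has the two preimages ±√(b − c) under fc. A minimal Python sketch (function and variable names are my own, not from the text):

```python
import cmath

def preimage(points, c):
    """One step of recursion (4.11): map each b on S_{k-1} to its two
    preimages under f_c(z) = z^2 + c, i.e. the solutions of z^2 + c = b."""
    curve = []
    for b in points:
        root = cmath.sqrt(b - c)
        curve.extend([root, -root])
    return curve

# S_0 = S_R, a circle of large radius R, sampled at a few points
R, c = 100.0, 0.3 + 0.2j
S0 = [R * cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]
S1 = preimage(S0, c)

# every point of S_1 is mapped back onto S_0 by f_c, and |z| stays close
# to sqrt(R), matching the circle-like description of S_1 above
assert all(abs(abs(z * z + c) - R) < 1e-9 for z in S1)
assert all(abs(abs(z) - R ** 0.5) < 1.0 for z in S1)
```

Iterating `preimage` yields S2, S3, ..., the nested level curves of Figure 4.7.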
Fig. 4.9.

          φc
C \ D  ←──  C \ Kc
 f0 ↓          ↓ fc        (4.14)
C \ D  ←──  C \ Kc
          φc
Fig. 4.13: Renderings of the potential of the Mandelbrot set, Eqs. (4.27) and (4.28). The top image shows the region of the Scientific American cover, August 1985 (compare the front cover); the bottom image displays the same region near the boundary of the Mandelbrot set as a Cantor set.
LSM/J - Level Set Method: This algorithm is just a very powerful variant of BSM. We fix a square lattice of pixels, choose a large integer Nmax (iteration resolution) and an arbitrary set T (target set) containing ∞, so that Kc ⊂ C \ T. For example, T = { z : |z| ≥ 1/ε }, ε small, is a disk around ∞. Now we assign for each pixel p from the lattice an integer label lc(p; T) in the following way:

Fig. 4.12: Escher-like tilings: Level sets of fc with c = 0 around 0, with target set T = Kc.
(T is a target set containing ∞.) If T is given by T = { z : |z| ≥ 1/ε }, then

|Ĝ(c) − L(c;T)| is uniformly bounded for bounded |c| and c ∉ M,   (4.25)

i.e. the contours defined by L outside of M are approximations of equipotential curves for the electrostatic potential of M. The above results about the potential of Kc and M give rise to several very interesting algorithms (LSM/M, CPM/J, CPM/M).

LSM/M - Level Set Method for M: This algorithm is strictly analogous to LSM/J. We fix a square lattice of pixels, choose a large integer Nmax (iteration resolution) and an arbitrary set T (target set) containing ∞, so that M ⊂ C \ T. Modify Eqn. (4.24) by:

L(c;T) = k, if fc^k(0) ∈ T and fc^i(0) ∉ T for 0 ≤ i < k and 1 ≤ k ≤ Nmax
         0, else   (4.26)

Again it might be useful to identify a certain number of level sets near M to obtain reasonable pictures. Figure 4.14 shows a result of this method and a blow up near the boundary of M, where alternating levels are colored black and white. The boundary of the levels in Figure 4.14 can be interpreted as equipotential curves according to Eqn. (4.25), see Figure 4.6.

4.2.5 Distance estimators

Another useful property is due to J. Milnor and W. Thurston [77]: Using the potential function G(c) one can show that for c outside of M the distance d(c, M) from c to the boundary of M can be estimated as

d(c, M) < 2 sinh G(c) / |G'(c)|,   (4.29)

where z'n = dzn / dc.

4.2.6 External angles and binary decompositions

The potential of M is most crucial for an understanding of M. While we have discussed equipotential curves already from various points of view, we will now briefly sketch a discussion of field lines. These are orthogonal to the equipotential curves and can be formally defined to be the images of the field lines of the potential of D = { z : |z| ≤ 1 } under Φ⁻¹ (see Figure 4.17):

Lα = { Φ⁻¹(r e^(2πiα)) : r > 1 }

Note that all level curves of Pot are circle-like, even close to M.

ALGORITHM MSetCPM (MSet, nx, ny, xmin, xmax, ymin, ymax, maxiter)
  Title: Mandelbrot set via Continuous Potential Method (CPM)
  Arguments
    MSet[][]    output array of real type, size nx by ny
    nx, ny      image resolution in x- and y-direction
    xmin, xmax  low and high x-value of image window
    ymin, ymax  low and high y-value of image window (aspect ratio of window given by nx by ny)
    maxiter     maximal number of iterations
  Variables     ix, iy integer; cx, cy real
  Functions     MSetPot() returns potential of a point
BEGIN
  FOR iy = 0 TO ny-1 DO
    cy := ymin + iy * (ymax - ymin) / (ny - 1)
    FOR ix = 0 TO nx-1 DO
      cx := xmin + ix * (xmax - xmin) / (nx - 1)
      MSet[ix][iy] := MSetPot (cx, cy, maxiter)
    END FOR
  END FOR
END

ALGORITHM MSetPot (cx, cy, maxiter)
  Function returning potential of a point
BEGIN
  x := cx; x2 := x * x
  y := cy; y2 := y * y
  iter := 0
  WHILE (iter < maxiter) AND (x2 + y2 < 10000.0) DO
    temp := x2 - y2 + cx
    y := 2 * x * y + cy
    x := temp
    x2 := x * x
    y2 := y * y
    iter := iter + 1
  END WHILE
  IF (iter < maxiter) THEN
    potential := 0.5 * log (x2 + y2) / power (2.0, iter)
  ELSE
    potential := 0.0
  END IF
  RETURN (potential)
END
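The MSetPot listing translates almost line for line into Python; a minimal sketch (function name is mine):

```python
import math

def mset_potential(cx, cy, maxiter=100):
    """Continuous potential of c = cx + i*cy, following MSetPot: iterate
    z -> z^2 + c until |z|^2 >= 10000, then return 0.5*log(|z|^2)/2^iter;
    points that never escape (c in M, up to maxiter) get potential 0."""
    x, y = cx, cy
    x2, y2 = x * x, y * y
    it = 0
    while it < maxiter and x2 + y2 < 10000.0:
        x, y = x2 - y2 + cx, 2.0 * x * y + cy
        x2, y2 = x * x, y * y
        it += 1
    if it < maxiter:
        return 0.5 * math.log(x2 + y2) / 2.0 ** it
    return 0.0

# c = 0 lies in M: the orbit never escapes and the potential is 0
assert mset_potential(0.0, 0.0) == 0.0
# outside M the potential is positive and increases with distance
assert mset_potential(4.0, 0.0) > mset_potential(2.5, 0.0) > 0.0
```

For c outside M the value approximates the potential G(c); pixels inside M get 0.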
z'k+1 = 2 zk z'k + 1,  z'0 = 1,  k = 0, 1, ..., n − 1

If in the course of the iteration of this equation we get an overflow, i.e., if

|z'k+1| > OVERFLOW

for some k, then c should be very close to M; thus we label c by −1. If no overflow occurred, then we estimate the distance of c from M by … and set

1, if Dist(c, M) < DELTA
2, otherwise

It turns out that the images depend very sensitively on the various choices (R, Nmax, OVERFLOW, DELTA and blow-up factor when magnifying M). Plotting all pixels which represent c-values with |L(c)| = 1 provides a picture of the boundary of M. Figure 4.21 was obtained in this way. Figure 4.16 shows a comparison between BSM and DEM/M. The estimated value of d(c, M) given by Dist(c, M) can be used to define a surface

19 M would be locally connected provided that for every c ∈ M and every neighborhood U of c there is a connected neighborhood V of c in U. Figure 4.18 is an example of a set which is connected but not locally connected. Here any neighborhood of c in M is obtained as the intersection of M with a neighborhood of c in C.
…general idea is to subdivide the target set in the LSM/J (resp. LSM/M) algorithm into two parts, and then label according to the alternative whether zn, n = lc(z0;T) (resp. n = L(c;T)), hits the first or the second part.

DEM/J - Distance Estimator Method for Julia Sets: Let c be fixed. Choose Nmax (maximal number of iterations) and R = …

z'k+1 = 2 zk z'k,  z'0 = 1,  k = 0, 1, ..., n − 1   (4.32)

If in the course of the iteration of Eqn. (4.32) we get an overflow, i.e. if

|z'k+1| > OVERFLOW

for some k, then z0 should be very close to Kc; thus we label z0 by −1. If no overflow occurred, then we estimate the distance of z0 from Kc by

dist(z0, Kc) = (2 |zn| / |z'n|) log |zn|   (4.33)

and set

1, if dist(z0, Kc) < DELTA
2, otherwise

Figure 4.15 shows a result of a comparison between DEM/J and MIIM.

  … (cx, cy, maxiter)
      IF dist < delta THEN
        MSet[ix][iy] := 1
      ELSE
        MSet[ix][iy] := 0
      END IF
    END FOR
  END FOR
END

ALGORITHM MSetDist (cx, cy, maxiter)
  Arguments
    cx, cy      point to be tested
    maxiter     maximal number of iterations
  Variables
    iter, i     integers
    x, y, x2, y2   point coordinates and squares
    temp        real scratch variable
    xder, yder  derivative, real and imaginary parts
    xorbit[]    array of real parts of orbit, size > maxiter
    yorbit[]    array of imaginary parts of orbit, size > maxiter
    dist        real, distance estimate
    huge        large number used for stopping the iteration
    flag        boolean, indicates overflow in derivative calculation
  Constants
    overflow    maximal size of derivative
BEGIN
  x := y := x2 := y2 := dist := xorbit[0] := yorbit[0] := 0.0
  iter := 0
  huge := 100000.0
  WHILE (iter < maxiter) AND (x2 + y2 < huge) DO
    temp := x2 - y2 + cx
    y := 2 * x * y + cy
    x := temp
    x2 := x * x
    y2 := y * y
    iter := iter + 1
    xorbit[iter] := x
    yorbit[iter] := y
  END WHILE
  IF x2 + y2 > huge THEN
    xder := yder := 0.0
    i := 0
    flag := FALSE
    WHILE (i < iter) AND (NOT flag) DO
      temp := 2 * (xorbit[i] * xder - yorbit[i] * yder) + 1
      yder := 2 * (yorbit[i] * xder + xorbit[i] * yder)
      xder := temp
      flag := max (abs (xder), abs (yder)) > overflow
      i := i + 1
    END WHILE
    IF (NOT flag) THEN
      dist := log (x2 + y2) * sqrt (x2 + y2) / sqrt (xder * xder + yder * yder)
    END IF
  END IF
  RETURN (dist)
END
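Eqns. (4.32) and (4.33) combine into a compact Python sketch of the Julia-set distance estimate (my transcription, with names of my choosing):

```python
import math

def julia_distance(z0, c, maxiter=200, escape=1e5, overflow=1e12):
    """Distance estimate from z0 to the Julia set K_c, following DEM/J:
    iterate z -> z^2 + c together with the derivative z' -> 2 z z',
    z'_0 = 1, and return (2 |z_n| / |z'_n|) * log |z_n| after escape.
    Returns -1.0 when z0 does not escape or the derivative overflows."""
    z, dz = z0, 1.0 + 0.0j
    for _ in range(maxiter):
        if abs(z) > escape:
            break
        z, dz = z * z + c, 2.0 * z * dz
        if abs(dz) > overflow:
            return -1.0
    else:
        return -1.0
    return 2.0 * abs(z) / abs(dz) * math.log(abs(z))

# For c = 0 the Julia set K_c is the unit circle; at z0 = 2 the estimate
# works out to 2 |z0| log |z0| = 4 log 2, and the true distance 1 lies
# between d/4 and d, as the theory predicts
d = julia_distance(2.0 + 0.0j, 0.0 + 0.0j)
assert abs(d - 4.0 * math.log(2.0)) < 1e-9
```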
BDM/J - Binary Decomposition Method for Julia Sets (with respect to a fixed angle α0): This algorithm is a variant of LSM/J. We take T = { z : |z| ≥ 1/ε } and assume that each pixel has a label, i.e. lc(z0; T) is computed, i.e. the (discrete) escape time for z0 outside of Kc is known. We now extend our labeling function by determining a second component.

Here α0 is any angle between 0 and 2π and arg(zk) = β if zk = r e^(2πiβ). In Figure 4.19 we chose α0 = 0. The second component can be used as additional color information (see [83], p. 91) or, as in Figure 4.19, the first component is simply disregarded and 0 is coded white and 1 is coded black for the second component.
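The second component is simply a test of where the escaping orbit point lands relative to the angle α0. A hedged Python sketch (the 0/1 coding and names below are my own, modeled on the description above with α0 = 0):

```python
import cmath, math

def bdm_label(z0, c, eps=0.01, maxiter=100, alpha0=0.0):
    """Escape time plus binary-decomposition bit for z0 under f_c(z) = z^2 + c.
    Target set T = { z : |z| >= 1/eps }; the second component records in
    which half of T, split at angle alpha0, the orbit first lands."""
    z = z0
    for n in range(1, maxiter + 1):
        z = z * z + c
        if abs(z) >= 1.0 / eps:
            beta = (cmath.phase(z) - alpha0) % (2.0 * math.pi)
            return n, (0 if beta < math.pi else 1)
    return 0, 0    # z0 did not reach the target set (z0 close to K_c)

n, bit = bdm_label(1.5 + 0.0j, 0.0 + 0.0j)
assert (n, bit) == (4, 0)   # escapes after 4 steps, on the alpha0 side
```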
to become perfect (except for a change in scale)
when one increases the magnifying power. We
illustrate this marvelous “image compression”
property in Figure 4.22 . A region along the
cardioid (see top of Figure 4.22) is continuously
blown up and stretched out, so that the respective
segment of the cardioid becomes a line segment 10.
The result is shown in the center of Figure 4.22.
Attached to each of the disk-components there one
observes dendrite structures with 3, 4, ..., 8
branches (from left to right). Now we choose as
particular c-values the major branch points of these
dendrites (see arrows). For each of these 6 c-values
we have computed the associated Julia sets (using
DEM/J) which are displayed in the bottom of
Figure 4.22. We observe a striking resemblance in
the structures and the combinatorics of the dendrite
structures. Figure 4.23 is yet another example.
There we show a tiny window at the boundary of
M blown up by a factor of approximately 10^6 (top
part) and compare it with a blow up (factor ~ 10^6)
of an associated Julia set (bottom part).
For all other properties of the Mandelbrot set, for example the structure in its interior, we refer to [83].

Fig. 4.22: Mandelbrot set and Julia sets; images obtained by DEM/M and DEM/J.
10 The blow-up is obtained in the following way: Attached to the cardioid of M one observes an infinity of disk-like components. Each of these characterizes certain periodic cycles of fc. For example, the cardioid itself collects all c-values for which fc has an attracting fixed point; the large disk attached to the cardioid collects all c-values for which fc has an attracting 2-cycle; the two next larger disks attached to the cardioid collect all c-values for which fc has an attracting 3-cycle; etc. The conjecture that the size of a disk characterizing a p-cycle is proportional to 1/p² was recently proven by J. Guckenheimer and R. McGehee [47]. Our blow-up factor is chosen according to this result, so that all disks in Figure 4.22 (center) have the same size. They characterize, from left to right, attracting 3-, 4-, ..., 8-cycles.
Fig. 4.23: "Image compression" in the Mandelbrot set demonstrated for the c-value −0.745429 + 0.113008i. The top left shows a close up view of the Mandelbrot set around this c-value. The vertical size of the window is 0.000060. The Julia set for the c-value is shown in the bottom right image. The close up view of this Julia set in the bottom left is centered around the same c-value. It reveals the same double spiral as in the picture above. The vertical window size is 0.000433, and the Julia set has been rotated by 55° counter clockwise around the center of the window.

4.3 Generalizations and extensions

Most of the algorithms introduced in Section 4.2 for fc(z) = z² + c have natural extensions to general rational mappings of the complex plane (for extensions to other maps like, for example, transcendental maps see Chapter 3). So let R(z) be a general rational mapping, i.e.

R(z) = p(z) / q(z),   (4.35)

where both p(z) and q(z) are polynomials of degree dp and dq without a common divisor. Then the degree of R is dR = max{dp, dq}. Let's assume that dR ≥ 2. First of all, the definition of the Julia set JR of R is literally the same as that in Eqn. (4.7) or Eqn. (4.8), i.e. the algorithms based on these characterizations - like IIM and MIIM - are exactly the same. In general, however, the role of ∞ is a different one. For example, if

R(z) = ((z − 2) / z)²,   (4.36)

then ∞ ↦ 1 ↦ 1 and R'(1) = −4, i.e. 1 is a repelling fixed point and ∞ is one of its two preimages. Actually, Eqn. (4.36) has been a very important example in the history of Julia sets: It is one of the exotic examples for which JR = C ∪ {∞}, a situation which never appears in the family fc discussed in Section 4.2. Another important example is

R(z) = (2z³ + 1) / (3z²),   (4.37)

which the reader will immediately recognize if we rewrite it as

R(z) = z − (z³ − 1) / (3z²),

i.e. Eqn. (4.37) is the familiar Newton method to solve the equation z³ − 1 = 0. In fact the problem to understand Eqn. (4.37) goes back to A. Cayley of Cambridge (1879) and has been one of the guiding problems for Julia and Fatou to develop their beautiful theory (see [84] for a historical discussion). Note that the 3 zeros of z³ − 1 = 0 (being zk = exp(2πik/3), k = 1, 2, 3) are attracting fixed points of R constituting 3 basins of attraction A(zk), k = 1, 2, 3. The amazing fact is that

JR = ∂A(zk),  k = 1, 2, 3,   (4.38)

i.e. ∂A(z1) = ∂A(z2) = ∂A(z3), which means that JR is a set of triple-points with respect to the three basins. It turns out that this property, seen in Figure 4.24, is in fact a general property:
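Eqn. (4.37) is easy to experiment with numerically; a small Python sketch (helper names are my own) iterates R(z) = z − (z³ − 1)/(3z²) and reports which of the three roots attracts a starting point:

```python
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in (1, 2, 3)]  # cube roots of 1

def newton_basin(z, maxiter=100, tol=1e-10):
    """Iterate R(z) = z - (z^3 - 1)/(3 z^2), i.e. Newton's method for
    z^3 = 1, and return the index 1, 2 or 3 of the attracting root the
    orbit settles on (0 if undecided after maxiter steps)."""
    for _ in range(maxiter):
        if z != 0:
            z = z - (z ** 3 - 1) / (3 * z ** 2)
        for k, root in enumerate(ROOTS, start=1):
            if abs(z - root) < tol:
                return k
    return 0

# real starting points converge to the real root z3 = 1 ...
assert newton_basin(2.0 + 0.0j) == 3
# ... while a point near exp(2*pi*i/3) converges to that root
assert newton_basin(-0.4 + 0.8j) == 1
```

Coloring each pixel of a grid of starting points by `newton_basin` reproduces the triple-point structure of Figure 4.24.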
Property (4.39) has an impact on BSM, MBSM, LSM/J and BDM/J. In fact they translate literally by just replacing the role of ∞ with that of any of the zeros of the polynomial under consideration for Newton's method, say z1 and z2. Then, rather than testing orbits going to ∞ versus orbits remaining bounded, one simply tests orbits going to z1 versus orbits going to z2. In fact, Figure 4.24 was obtained by BSM for the mapping (4.37) and Figure 4.25b is a result of BDM/J.
Fig. 4.24: Julia set of R(z) = (2z³ + 1)/(3z²) and blow ups; a set of triple-points.
such as the golden mean α = (√5 − 1)/2 (Herman-ring case).

While the first three cases are related to fixed (or, more generally, periodic) points, the fourth case is not. It was discovered by the French mathematician M. Herman a few years ago. The Siegel-disk 20 case is named after the great German mathematician C.L. Siegel, who discovered this case in 1942. His discovery gave birth to an exciting theory (→ Kolmogorov-Arnold-Moser Theory) which tries, for example, to understand the stability of the solar system.

20 Let R(z) = az + z²; then R(0) = 0 and R'(0) = a. Thus, if a = exp(2πiα) and α is the golden mean, then we have an example. The reader may try to compute JR and observe the dynamics near 0 (see [83] for details).

It goes beyond our possibilities and aims to describe more details here, but one more result is noteworthy: The critical points of R (i.e. solutions of R'(z) = 0, to be solved in C ∪ {∞}) "indicate" precisely the "character" of R, i.e. which of the four cases appear (maybe all, maybe none). "Indication" means that if one of the four cases is present, then the orbit of at least one critical point zc (i.e. R'(zc) = 0), zc ↦ R(zc) ↦ R²(zc) ↦ ..., will "lead" to that case.

4.3.3 The quadratic family revisited

Projecting this information onto our knowledge about fc(z) = z² + c gathered in the first paragraph, it becomes much clearer what the different possibilities hidden in the various choices for c are about. Firstly, note that the only critical points of the fc are {0, ∞}. Indeed, solving fc'(z) = 0 in C ∪ {∞} decomposes into two steps. The only finite solution is 0. For z = ∞ we must look at Eqn. (4.1), i.e., Fc'(u) = 0, which for u = 0 corresponds to z = ∞. Since fc(∞) = ∞ for all c, everything about the nature of fc will depend only on the orbit

0 ↦ c ↦ c² + c ↦ ...

If 0 ∈ Ac(∞), i.e. c ∉ M, then there cannot be any of the four cases of Eqn. (4.40), which is reflected by the fact that then Kc = Jc is a Cantor set. If, however, 0 ∉ Ac(∞), i.e., c ∈ M, then one may in principle have one of the four cases. It is known, however, that the fourth case does not exist for fc (all c), and it is conjectured that the first case (aside from Ac(∞)) only appears if c is from the interior of M. The second and third case occur on the boundary of M.

A family which is closely related to fc is the family (sometimes called the "logistic family", see Section 3.1.2)

rλ(z) = λ z (1 − z).   (4.41)

In fact one easily derives an affine change of coordinates h(z), such that

fc(z) = h⁻¹(rλ(h(z))).

This family has the advantage that rλ(0) = 0 for all λ and rλ'(0) = λ. Thus, if λ = exp(2πiα) and α changes between 0 and 1, then λ will "hit" case 2 infinitely often and also case 3 infinitely often. In fact an arbitrarily small perturbation of λ suffices to push situation 2 into situation 3 and vice versa, indicating that the Jc's can jump drastically when c is changed very little. In summary, if one of the three cases 1 to 3 in Eqn. (4.40) exists aside from Ac(∞) for fc, then the orbit of 0 will lead to it and, therefore, for each choice of c only one case can appear. It may, however, happen that none of the four cases (aside from Ac(∞)) appears. For example, if c = i, then

0 ↦ i ↦ −1 + i ↦ −i ↦ −1 + i ↦ ...

In other words, the "critical" orbit leads to a cycle of length 2, which one easily finds to be repelling. That is, Ac(∞) is the only domain in the complement of Jc, and in fact we know that Jc is a dendrite (see Figure 4.3e). The "fate" of critical points also explains the exotic example (4.36). One easily finds that there R has the critical points {0, 2} and

2 ↦ 0 ↦ ∞ ↦ 1 ↦ 1   (and 1 is a repelling fixed point),

i.e., none of the four cases is possible, which means that JR = C ∪ {∞}. Note also that the Mandelbrot set, in view of critical points, is just the locus of c-values for which 0 ∉ Ac(∞). This leads to ideas for related experiments with more general mappings:

4.3.4 Polynomials

Note that any polynomial p(z) (of degree ≥ 2) has the property that p(∞) = ∞ and p'(∞) = 0. Thus, if we have a one-parameter family pc(z) (e.g. pc(z) = zⁿ + c) we can easily identify a Mandelbrot-like set (Ac(∞) again denotes the basin of attraction of ∞):

Mpc = { c ∈ C : all finite critical points of pc ∉ Ac(∞) }.

Actually, since ∞ is attracting, one always has that … For example, for

pa,b(z) = z³ − 3a²z + b,  where a, b ∈ C,

Ma,b = { (a, b) ∈ C² : +a ∉ Aa,b(∞) and −a ∉ Aa,b(∞) }.

Ma,b corresponds to the Mandelbrot set of pa,b.

… are attracting fixed points. The critical points of Rc are {1, ∞, 1 − c, ±√(1 − c), …}. Furthermore, … ↦ ∞ and ±√(1 − c) ↦ 0. Thus, to study the fate of critical points it remains to investigate the orbits of 1 − c and 0 only. Figure 4.26 shows an experiment (and a close up), in which the critical orbits of c-values are tested. It is quite remarkable that the

Fig. 4.26: a. For black values of c the orbit of 0 approaches neither 1 nor ∞ under iteration of Rc; b. Blow up, for black values of c the orbit of 0 approaches 1 under iteration of Rc.
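Returning to the logistic family (4.41): the affine conjugacy fc = h⁻¹ ∘ rλ ∘ h can be verified numerically. The concrete form h(z) = 1/2 − z/λ and the parameter correspondence c = λ/2 − λ²/4 used below are my own derivation; the text only asserts that such an h exists:

```python
def f(z, c):            # the quadratic family f_c
    return z * z + c

def r(z, lam):          # the logistic family r_lambda
    return lam * z * (1.0 - z)

def h(z, lam):          # affine change of coordinates (assumed form)
    return 0.5 - z / lam

def h_inv(w, lam):      # inverse of h
    return lam * (0.5 - w)

lam = 0.7 + 0.4j
c = lam / 2.0 - lam * lam / 4.0     # corresponding c-value (assumed)

# check f_c = h^{-1} o r_lambda o h at a few sample points
for z in (0.3 + 0.1j, -1.2 + 0.5j, 2.0 - 0.3j):
    assert abs(f(z, c) - h_inv(r(h(z, lam), lam), lam)) < 1e-12
```

Matching coefficients of h(fc(z)) = rλ(h(z)) for h(z) = az + b forces a = −1/λ, b = 1/2 and the stated relation between c and λ.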
Fig. 4.27: Newton’s method for real equations, basins of attraction of some fixed points obtained by an
adaptation of LSM/J.
interpolation is considerably lower.

Fig. 4.29: Piecewise constant functions.
(4.43)

Obviously, Eqn. (4.43) is just the integral of … over …, normalized by P(AD(i,j)). This takes care of the "noise effects". There remains one problem, and that is to pass from one key frame Kn to the next one Kn+1 without seeing a discontinuity. The homotopy in Eqn. (4.44) will solve this problem and provide appropriate key frame values:

v(i,j) = (1 − p) vKn(i,j) + p [(1 − t) vKn(i,j) + t vKn+1(i,j)],   (4.44)

where p = … / P(AD(i,j)). In other words, p can be … key frame. Finally, t should be 0 at key frame n and t should be 1 at key frame n + 1.

5.1 Introduction

Mankind seems to be obsessed with straight lines. I am not sure where it all began, but I like to imagine two Druids overseeing the construction of Stonehenge, and using a piece of stretched string to check the straightness of the edges of those huge rectangular stone blocks, or the path that the light beam from the equinox sun would follow. It was at least an efficient form of quality control; for very little effort by those two Druids it kept the laborers with their eyes sharply on the job.

Whatever the real history of the matter is, it doesn't alter the fact that now a telltale and hallmark of Man's artifacts is the straight line. Look around the room at the edges of the tables, the raster scan lines … wood. From our earliest learning moments we are encouraged to manipulate, to rotate and translate, cubes; to roll circles with toy wheels; to underline, straighten, measure along straight lines and to use graph paper. The Romans made their direct roads slice across barbarian France and England; and
zn and the point labelled with the same name as the coin shows. Keep doing this for a long time, and mark say 1,000,000 points on the paper: z1, z2, z3, ..., z999,999, z1,000,000. Now erase the first eight points z1, z2, z3, ..., z8. What do the remaining points look like? Do they make a random dust on the paper? At first one expects nothing much! However, what you actually see, if your pencil is sharp enough, your measuring accurate enough, and if the odds are not dreadfully against you, will be the picture in Figure 5.2.

together. That is, if d(z1, z2) is the distance between a pair of points z1 and z2, then for some 0 ≤ k < 1

d(wj(z1), wj(z2)) ≤ k · d(z1, z2)

for all pairs of points z1 and z2 in R², for each transformation wj. Such a transformation is called a contraction mapping. An example of such a transformation is w : R² → R², which acts on the point z = (x, y) according to

w(x, y) = (… x + … y + 1, … x + … y + 0.5).

In matrix and coordinate form, an affine transformation is

w(x, y) = (a b; c d) (x; y) + (e; f) = (ax + by + e, cx + dy + f).   (5.2)
probabilities assigned to the different transformations, as long as they are all positive (however, if any probability is too small, one might have to play the Chaos Game for a very long time indeed before the whole picture P would appear).
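The Chaos Game itself is only a few lines of code. The sketch below uses the three-map IFS whose attractor is the Sierpinski triangle (my choice of maps and equal probabilities, not the ones behind Figure 5.2):

```python
import random

CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]   # vertices of a triangle

def chaos_game(n, seed=1):
    """Play the Chaos Game with three maps, each moving the current
    point half way toward a randomly chosen corner (an IFS whose
    attractor is the Sierpinski triangle)."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0            # a corner, hence a point of the attractor
    points = []
    for _ in range(n):
        cx, cy = rng.choice(CORNERS)
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        points.append((x, y))
    return points

pts = chaos_game(10000)[8:]    # erase the first eight points
# all points stay inside the bounding box of the triangle ...
assert all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in pts)
# ... but none falls near the centroid (0.5, 1/3) of the removed
# central triangle: the familiar Sierpinski hole stays empty
assert not any(abs(x - 0.5) < 0.05 and abs(y - 1.0 / 3.0) < 0.05 for x, y in pts)
```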
5.3.2 How two ivy leaves lying on a sheet of paper can specify an affine transformation w : R² → R²
It is assumed that V has positive measure, namely μ(V) > 0. Let a viewing resolution be specified by partitioning V into a grid of L × M rectangles as follows. The interval [xmin, xmax) is divided into L subintervals [xl, xl+1) for l = 0, 1, ..., L − 1, where

xl = xmin + l (xmax − xmin) / L.

… window V and resolution L × M, by digitizing at resolution L × M the part of the attractor which lies within the viewing window. The rendering values for this digitization are determined by the measure (which corresponds to the masses of sand which lie upon the pixels).
5.4.4 The algorithm for computing rendered images

The following algorithm starts from an IFS code { wn, pn : n = 1, 2, ..., N } together with a specified viewing window V and resolution L × M. It computes the associated IFS image, as defined in the previous section. In effect a random walk in R² is generated from the IFS code, and the measures for the pixels are obtained from the relative frequencies with which the different rectangles are visited. That the algorithm works is a consequence of a deep theorem by Elton [32].

An initial point (x0, y0) needs to be fixed. For simplicity assume that the affine transformation w1(x) = Ax + b is a contraction. Then, if we choose (x0, y0) as a fixed point of w1, we know a priori that (x0, y0) belongs to the image. This is obtained by solving the linear equation

(x0; y0) = A (x0; y0) + (b1; b2).

  Variables   lmax   maximum element in I[][]
              sum    real number
  Globals     Arand  rand() returns values between 0 and Arand
  Functions   srand()  initialization of random numbers
              rand()   random number generator
              int()    integer part
              w(x, y, k)  returns image of (x, y) under wk
BEGIN
  srand (seed)
  lmax := 0
  FOR l = 0 TO L-1 DO
    FOR m = 0 TO M-1 DO
      I[l][m] := 0
    END FOR
  END FOR
  FOR n = 1 TO num DO
    /* choose a map wk */
    r := rand() / Arand
    sum := P[1]
    k := 1
    WHILE (sum < r) DO
      k := k + 1
      sum := sum + P[k]
    END WHILE
    /* apply the map wk */
    (x, y) := w (x, y, k)
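A Python sketch of this random-walk rendering (the two-map IFS code at the bottom is my own toy example, not the fern of Table 5.2; visit counts stand in for the measure):

```python
import random

def render_ifs(maps, probs, L, M, num, seed=7):
    """Chaos-Game rendering of an IFS over the window [0,1] x [0,1]:
    pixel (l, m) counts the visits of the random walk, so relative
    frequencies approximate the invariant measure of the IFS."""
    rng = random.Random(seed)
    I = [[0] * M for _ in range(L)]
    x, y = 0.0, 0.0            # fixed point of the first map below
    for _ in range(num):
        # choose a map w_k with probability p_k, then apply it
        a, b, c, d, e, f = rng.choices(maps, weights=probs)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        l, m = int(x * L), int(y * M)
        if 0 <= l < L and 0 <= m < M:
            I[l][m] += 1
    return I

# a toy IFS code with two maps whose attractor is the diagonal
# segment from (0, 0) to (1, 1)
maps = [(0.5, 0.0, 0.0, 0.5, 0.0, 0.0),   # shrink toward (0, 0)
        (0.5, 0.0, 0.0, 0.5, 0.5, 0.5)]   # shrink toward (1, 1)
I = render_ifs(maps, [0.5, 0.5], L=8, M=8, num=2000)
# only diagonal pixels are ever visited
assert all(I[l][m] == 0 for l in range(8) for m in range(8) if l != m)
```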
Fig. 5.6: Illustrations of the Collage Theorem. The upper collage is good, so the corresponding attractor, shown upper right, resembles the leaf target.

5.5 Determination of IFS codes: The Collage Theorem

In an IFS code { wn, pn : n = 1, 2, ..., N }, it is the wn's which determine the geometry of the underlying model, i.e. the structure of the attractor, while it is the pn's which provide rendering information through the intermediary of the measure. Here we describe an interactive two-dimensional geometric modelling algorithm for determining the wn's corresponding to a desired model. (Some comments on the way in which the pn's are chosen to produce desired rendering effects are made in Section 5.6.) The algorithm has its mathematical basis in the Collage Theorem [4].

The algorithm starts from a target image T which lies within a viewing window V, here taken to be [0,1] × [0,1]. T may be either a digitized image (for example a white leaf on a black background) or a polygonal approximation (for example a polygonalized leaf boundary). T is rendered on the graphics workstation monitor. An initial transformation w1 is introduced, and w1(T) is displayed in a different color from T. w1(T) actually consists of a quarter size copy of T centered closer to the point (0,0). The user now interactively adjusts w1 using keypresses or a mouse, so that the image w1(T) is variously translated, rotated, and sheared on the screen. The goal of the user is to transform w1(T) so that it lies over part of T: that is, w1(T) is rendered as a subset of the pixels which represent T. It is important that the dimensions of w1(T) are smaller than those of T, to ensure that w1 is a contraction. Once w1(T) is suitably positioned it is fixed, and a new subcopy of the target w2(T) is introduced. Then w2 is interactively adjusted until w2(T) covers a subset of those pixels in T which are not in w1(T). Overlap between w1(T) and w2(T) is allowed, but for efficiency it should be as small as possible.

Table 5.1: IFS parameters for Figure 5.7.
  a1 = 0.45 + 0.90i
  a2 = 0.45 + 0.30i    s2 = 0.6
  a3 = 0.60 + 0.30i    s3 = 0.4 − 0.3i
  a4 = 0.30 + 0.30i    s4 = 0.4 + 0.3i

In this way the user determines a set of contractive affine transformations { w1, w2, ..., wN } with this property: the original target T and the set

T' = ⋃ (n = 1 to N) wn(T)

are visually close, while N is as small as possible. The mathematical indicator of the closeness of T … Then

h(T, A) ≤ ε / (1 − s).
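The Collage Theorem bound is easy to exercise numerically. The sketch below (my own construction) takes T to be a crude three-point target, uses the three half-size contractions of the Sierpinski collage (contractivity s = 1/2), and checks that the collage error bounds the Hausdorff distance to the attractor:

```python
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def w(i, p):
    """Contraction with ratio s = 1/2 toward corner i: together the three
    maps form a collage whose attractor is the Sierpinski triangle."""
    cx, cy = CORNERS[i]
    return ((p[0] + cx) / 2.0, (p[1] + cy) / 2.0)

def hausdorff(P, Q):
    """Hausdorff distance h(P, Q) between two finite point sets."""
    def d(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return max(max(min(d(a, b) for b in Q) for a in P),
               max(min(d(a, b) for b in P) for a in Q))

T = list(CORNERS)                               # a crude 3-point target
WT = [w(i, p) for i in range(3) for p in T]     # the collage W(T)
eps = hausdorff(T, WT)                          # collage error

A = list(T)                                     # iterate W toward the attractor
for _ in range(7):
    A = [w(i, p) for i in range(3) for p in A]

# Collage Theorem with s = 1/2: h(T, A) <= eps / (1 - s)
assert hausdorff(T, A) <= eps / (1.0 - 0.5) + 1e-9
```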
which can be viewed at many resolutions. … viewing parameters. The number num of iterations is increased with magnification to keep the number of points landing within the viewing window constant.
Fig. 5.15: Demonstrates how branch and growth models are subsumed by IFS theory. The IFS codes
for these images were obtained by applying the geometrical modelling algorithm to the leaf image
given by Oppenheimer [80].
wi = ( r cos θ   −s sin φ ; r sin θ   s cos φ ),   i = 1, 2, 3, 4
Table 5.2: Fern IFS.

…leaf. Different choices of probability lead to different renderings without changing the geometry.

Appendix A

Fractal landscapes without creases and with rivers

Benoit B. Mandelbrot
have shown that this undesirable condition is greatly improved by after-the-fact non linear processing. Furthermore, our "mountains with rivers" involve extreme asymmetry, meaning that this paper tackles this issue head on. But, in addition, this text sketches a different and much simpler path towards the same end. It shows that it suffices to use the familiar midpoint displacement method with displacements whose distribution is itself far from being Gaussian, in fact extremely asymmetric.

In the search for algorithms for landscape design, hence in this text, a central issue is that of context dependence. The familiar midpoint displacement method is context independent, which is the source both of its most striking virtue (simplicity) and of its most striking defect (creasing). In real landscapes, however, the very fact that rivers flow down introduces very marked context dependence; it is theoretically infinite, and practically it extends to the overall scale of the rendering.

This introduction has listed our goals in the order of decreasing importance, but this is also the order of increasing complexity, and of increasing divergence from what is already familiar to computer graphics experts. Therefore, the discussion that follows is made easier by moving towards increasing complexity. It is hoped that the new procedures described in this text will inject fresh vitality into the seemingly stabilized topic of fractal landscape design. In any event, landscape design deserves investigation well beyond what is already in the record, and beyond the relatively casual remarks in this paper.

Fig. A.1: Mount Archimedes, constructed by positive midpoint displacements.

Fig. A.2: Archimedes Saddle, constructed by midpoint displacements.

A.1 Non-Gaussian and non-random variants of midpoint displacement

A.1.1 Midpoint displacement constructions for the paraboloids

One of the purposes of this text is to explain midpoint displacement better, by placing it in its historical context. The Archimedes construction for the parabola, as explained in the Foreword, can be expanded to the surface S closest to it, which is the isotropic paraboloid z = P(x, y) = a − bx² − by². Here, the simplest initiator is a triangle instead of the interval, the section of S along each side of the triangle being a parabola. A second stage is to systematically interpolate at the nodes of an increasingly tight lattice made of equilateral triangles. This procedure, not known to Archimedes, is a retrofit to the non-random classical geometry of the method Fournier et al. use to generate fractal surfaces. When all the interpolations along the cross lines go up, because b > 0 (resp., down, because b < 0), we converge to Mount Archimedes (Figure A.1) (resp., to the Archimedes Cup, which is the Mount turned upside down). Suitable slight changes in this construction yield the other paraboloids. In reduced coordinates, a paraboloid's equation is z = P(x, y) = a − bx² − cy², with b > 0 in all cases, and with c > 0 if the paraboloid is elliptic (a cap), but c < 0 if it is hyperbolic (a saddle). By suitable "tuning" of the up and down midpoint displacements, one can
achieve either result. The Archimedes saddle is shown on Figure A.2.

Fig. A.3: Takagi profile, constructed by positive midpoint displacements.

A.1.2 Midpoint displacement and systematic fractals: The Takagi fractal curve, its kin, and the related surfaces

(When 0 < w < 1/2 but w ≠ 1/4, the curves are rectifiable, with a fractal dimension saturated at the value 1. Left and right derivatives exist at all points. The two derivatives differ when the abscissa is dyadic, that is, of the form p 2^(−k) with an integer p, but there is a well defined derivative when the abscissa is non-dyadic.)
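Takagi-type profiles can be generated either by midpoint displacements or directly from the series Tw(x) = Σ wⁿ s(2ⁿx), with s the distance to the nearest integer; the sketch below uses the series form (my formulation of the standard construction, not code from the text):

```python
def takagi(x, w=0.5, terms=40):
    """Takagi-type profile T_w(x) = sum_n w^n * s(2^n x), where s(u) is
    the distance from u to the nearest integer; w = 1/2 gives the
    classical (nowhere differentiable) Takagi curve."""
    total = 0.0
    for n in range(terms):
        u = (2.0 ** n) * x
        s = abs(u - round(u))        # distance to nearest integer
        total += (w ** n) * s
    return total

# the profile vanishes at the endpoints and is symmetric about x = 1/2
assert takagi(0.0) == 0.0 and takagi(1.0) == 0.0
assert abs(takagi(0.25) - takagi(0.75)) < 1e-12
# the classical Takagi curve attains T(1/2) = 1/2
assert abs(takagi(0.5) - 0.5) < 1e-12
```

Sampling `takagi` on a fine grid reproduces the profile of Figure A.3; varying w between 0 and 1/2 moves between the rectifiable and fractal regimes discussed above.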
parameter p: X = 1 with the probability p and … low angle.

(The question of what is a high or a low angle, compared to the vertical, is related to the counterpart of the quantity b in the definition of Mount Archimedes, hence to the fact that our various reliefs are self-affine, rather than self-similar.)
… "sum of b reduced exponentials". These very simple changes beyond the Gaussian have proved to lead to a very marked increase in realism. Examples due to Réjean Gagné are shown on Figures A.6 and A.7.

A.2 Random landscapes without creases

A.2.1 A classification of subdivision schemes: One may displace the midpoints of either frame wires or of tiles

Irrespective of the issue of distribution, a second development begins by carefully examining the different published implementations of random midpoint displacement, to see what they have in common and how they differ.

First common feature. Starting with an initial lattice of nodes or vertices, one "nests" it within a finer lattice, which contains both the "old vertices", where H(P), the height at P, is already known, and many "new vertices" where H(P) must be interpolated.

Second common feature. Computing H(P) at N points of a midpoint fractal requires a number of calculations proportional to N. By contrast, the best-controlled approximations to fractional Brownian motion require N log N calculations. The factor log N in the cost of computing seems to us of low significance compared to the cost of rendering. But this seems, somehow, to be a controversial opinion.

A distinguishing feature. Various methods of interpolation fall into two kinds I propose to call wire frame displacement and tile displacement. (Some physicists among the readers may be helped by knowing that the distinction echoes that between bond and site percolation.)

In wire frame midpoint displacement, the surface is thought of as a "wire frame", a collection of intervals, and one displaces the midpoint of every interval. The procedure is known to be context independent.

In tile midpoint displacement, the surface is thought of as a collection of tiles, and one displaces points in the "middle" of every tile. The procedure will be seen to be context dependent.

The only possible implementation of pure frame midpoint displacement is the approach of Fournier et al., as applied to the subdivision of triangles.

When the initial lattice is made of squares, it is necessary to displace the relief, not only at the frame midpoints, but also at the square centers, which are midpoints of tiles. The resulting construction is a frame-tile hybrid.

In the body of this book, the desire for a cleaner algorithm to interpolate within a square lattice has pushed the authors away from the hybrid frame-tile approach, and has led them to a tile midpoint procedure. (I had failed to notice they had done so, until told by Voss.)

A.2.2 Context independence and the "creased" texture

The major difference between frame and tile midpoints involves context dependence, a notion from formal linguistics that has been brought into graphics in [96].

Frame midpoint fractals are context independent, in the sense that the only external inputs that affect the shape of a surface within a triangle are the altitudes at the vertices. Everyone in computer graphics knows this is an extraordinary convenience.

On the other hand, the fractional Brown surfaces incorporate an infinite span of context dependence, so that interpolating them within even a small area requires some knowledge over an unboundedly wide surrounding area. Relief models that include long range river networks necessarily share this property. It is essential to landscape modelling.

What about tile midpoint fractals? It is important to know that they are context dependent.

Unfortunately, frame midpoint fractals have a most unwelcome texture. They reminded me of folded and crumpled paper, and (after first arguing back) Fournier et al. have acknowledged my criticism. Fractional Brown surfaces do not have this undesirable "creasing" texture. This discrepancy has led me long ago to wonder whether or not context dependence is the sole reason for creasing. Now we have a counterexample: the implementations of tile midpoint fractals … possess a "creasing" texture, though perhaps to a less extreme degree.

Digging deeper, the undesirable texture could perhaps be traced to either of the following causes. A) "The Curse of the Squares": The programmers' and the physicists' "folklore" suggests that spurious effects due to the lattice structure tend to be strongest in the case of square lattices, in which privileged directions number two. This suggests trying triangular or hexagonal lattices, in which privileged directions number three. B) "The Nesting of the Frames": The strongest texture problems always appear along the frames that bound the largest tiles. Let us dwell on this point. As has been previously observed, the tile vertices at the k-th stage of construction are always nested among the tile vertices at all later stages. But, in addition, triangular tiles have the far stronger property that the network of tile sides at the k-th stage of construction is nested in (i.e., part of) the network of tile sides at the (k+2)-th stage, the (k+4)-th, etc. I believe this feature is responsible for creasing and propose to demonstrate how it can be avoided.

    Conceptually, one must also combine interpolation with extrapolation. The latter amounts to changing the character of the values of H(P) on the nodes …
My own recent renewal of active interest in
fractal reliefs happens to provide tests of both of
the above explanations of creasing. A first
approach avoids squares and uses the classical
triangular tiles, whose frames are nested. A second
approach uses tiles that are based on the hexagon,
but are actually bounded by fractal curves; their
frames are not nested.
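As a concrete illustration of the "wire frame" scheme classified above (a Python sketch with assumed Gaussian offsets and my own naming, not the authors' code), note that each new value depends only on the two endpoints of its interval:

```python
import random

def midpoint_displacement(levels, H=0.8, scale=1.0, seed=1):
    """One-dimensional 'wire frame' random midpoint displacement: every
    interval midpoint becomes the mean of its endpoints plus a random
    Gaussian offset whose standard deviation shrinks stage by stage."""
    rnd = random.Random(seed)
    n = 2 ** levels
    h = [0.0] * (n + 1)          # endpoints pinned at height 0
    step, sd = n, scale
    while step > 1:
        half = step // 2
        for i in range(0, n, step):
            h[i + half] = 0.5 * (h[i] + h[i + step]) + rnd.gauss(0.0, sd)
        step = half
        sd *= 2.0 ** (-H)        # offsets shrink with the Hurst exponent H
    return h
```

The N interior heights are produced with O(N) work, which is exactly the point made above under "Second common feature".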
Starting with a regular hexagon, the first stage of construction of this tile is shown by comparing the bold lines on Figures A.8d and A.8e: here the side that one meets first (when moving from A clockwise) is rotated by 30°; it may have been rotated by either +30° or -30°, and the remainder of this construction stage is fully determined by this single choice. At the same time, Figure A.8d, H(P) is interpolated at the center of the initial hexagon by the equal weights average of six neighbors, plus a displacement. But H(P) is also interpolated by suitable averages at three other points off-center, where three sub-hexagons meet (hence "midpoint" is not synonymous with "center"). The averaging weights can be chosen in many different ways, with considerable effect on the texture. Displacements are added to all these weighted averages. The next interpolation stage, shown by comparing the bold lines on Figures A.8e and A.8f, again involves an arbitrary decision between +30° and -30°; thus the rotation is -30°. At each stage, the boundary of the modified hexagon becomes increasingly crumpled, and it never nests in the combination of higher stage boundaries. While an undesirable texture may well be present along these crumpled curves, the fact will matter little, because these curves fail to catch the eye, contrary to straight lines that stand out.

A.3 Random landscape built on prescribed river networks

The idea behind the two landscape algorithms to be described now is to first construct a map, with its river and watershed trees, and then to "fill in" a random relief that fits the map. The two implementations available today have the weakness that the maps themselves fail to be random.

A major ingredient of this approach (beyond the current implementation) amplifies on the already discussed asymmetry between valleys and mountains, by drawing very asymmetric rivers and watersheds. The relief along a watershed can and should go up and go down, with local minima at the sources, and this requirement can be (and will be in this text) implemented by a randomized procedure of positive midpoint displacements. On the other hand, the relief along a river necessarily goes downstream, hence must follow a fractal curve that differs profoundly in its structure from anything that has been used so far in this context.

A.3.1 Building on a non-random map made of straight rivers and watersheds, with square drainage basins

Our first example of prescribed river and watershed trees is very crude, but worth describing, because it is easy to follow. It is suggested by Figure A.9, which reproduces a quarter of Plate 65 of [68], rotated by 45° clockwise. One sees thin black triangles: one is of length 1 and runs along the vertical diagonal, two are of length 2^-1, eight of length 2^-2, etc. Each triangle is meant to represent a straight river flowing towards its very short side. Similarly, one sees thin white triangles, meant to represent a straight watershed. One builds up a non-random highly regular map by making all those triangles become infinitely thin.

Relief along the watershed. This is the most conspicuous feature in the rendering of the previous relief models. This is also where the midpoint displacement curves are least controversial, and we may as well use them to model relief here. The local minima along this relief are sharp cusps; they will be the rivers' sources, and the map prescribes their positions. The simplest implementation uses positive displacements, and consists in modeling the relief along the watersheds by the Landsberg functions defined in Section I. One may also multiply the positive midpoint displacement by a random factor that is positive, hence does not displace the sources. … by a variable randomly selected multiplier.

Fractal structure of the saddle points. These points are exemplified by the point (1/8, 1/8) on Fig. A.9, after the figure has been mentally transformed …
… and suggest that the procedures advanced in this text may revive the topic of fractal …

… in their work if one looked. Leonardo's sketches of turbulent water display this. I can think of no more explicitly fractal image than Hokusai's Great Wave off Kanagawa. The great capital C shape of the wave on the left, as it threatens to engulf the small sampan-like boat, repeats itself in ever smaller …
… of these stones it is necessary to include in the very near foreground a boulder in which the texture is adequately revealed. You can then say that the photograph "reads well". While you cannot see the texture in the distant boulders, you can see it in the near boulder, and you assume that all the boulders are the same material. It is this awareness of substance that is of vital importance in interpretative photography. The photograph, no matter what its function may be, must "read" clearly.
Fig. B.0: Mount Williamson, Sierra Nevada. Photograph by Ansel Adams, 1944. Courtesy of the trustees of the Ansel Adams Publishing Rights Trust. All rights reserved.
Fig. B.4: Pahoehoe lava II, Hawaii
Fig. B.2: California oak tree II,
Arastradero Preserve, Palo Alto
Fig. B.5: Ohaiu Forest, Kauai, Hawaii
… and plants

Dietmar Saupe

• the axiom,
• the definition for the actions of the turtle, which draws the picture from the output string.
… papers, where the interested reader will also find a more detailed account of the history of L-systems and further references. (See also the recent book of P. Prusinkiewicz and J. Hanan, "Lindenmayer Systems, Fractals and Plants", Lecture Notes in Biomathematics, Springer-Verlag, 1989.)

Arguments  none
Globals    maxlevel    number of cycles in string generation
           axiom       string containing the axiom of OL-system
           Kar[]       character array (size num), inputs of production rules
           Rule[]      string array (size num), outputs of production rules
           num         number of production rules
           TurtleDirN  number of possible directions of turtle
           str         string to be generated and interpreted
           xmin, xmax  range covered by curve in x direction
           ymin, ymax  range covered by curve in y direction

BEGIN
GenerateString (maxlevel, str)
CleanUpString (str)
GetCurveSize (str, xmin, xmax, ymin, ymax)
TurtleInterpretation (str, xmin, xmax, ymin, ymax)
END

ALGORITHM GenerateString (maxlevel, str)
Title      Generate string of OL-system
Arguments  maxlevel    number of cycles in string generation
           str         string to be generated
Variables  level, k, i integers
           command     character
           str0        string
Functions  strlen(s)       returns the length of string s
           getchar(s, k)   returns k-th character of string s
           strcpy(s1, s2)  copies string s2 into s1
           strapp(s1, s2)  appends string s2 to s1

BEGIN
strcpy (str0, axiom)
FOR level=1 TO maxlevel DO
   strcpy (str, "")
   FOR k=1 TO strlen (str0) DO
      command := getchar (str0, k)
      i := 0
      WHILE (i < num AND NOT command = Kar[i]) DO
         i := i + 1
      END WHILE
      IF (command = Kar[i]) THEN
         str := strapp (str, Rule[i])
      END IF
   END FOR
   strcpy (str0, str)
END FOR
END
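The same rewriting loop reads naturally in Python (a sketch under my own naming, not the book's code; here characters without an explicit production are copied unchanged, so identity rules such as + → + need not be listed):

```python
def generate_string(axiom, rules, maxlevel):
    """Apply the OL-system productions maxlevel times to the axiom."""
    s = axiom
    for _ in range(maxlevel):
        # every character is replaced simultaneously, as in an OL-system
        s = "".join(rules.get(c, c) for c in s)
    return s

# Koch-curve L-system from the text: axiom "F", production F -> F-F++F-F
koch2 = generate_string("F", {"F": "F-F++F-F"}, 2)
```

Each 'F' of one stage contributes eight characters to the next, so stage 2 has 4·8 + 4 = 36 characters.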
and in stage 3 we obtain …

ALGORITHM CleanUpString (str)

BEGIN
strcpy (str0, "")
FOR i=1 TO strlen (str) DO
   c := getchar (str, i)
   IF (c='F' OR c='f' OR c='+' OR c='-' OR c='|' OR c='[' OR c=']') THEN
      str0 := strappc (str0, c)
   END IF
END FOR
strcpy (str, str0)
END
In summary we have that when proceeding from one stage to the next we must replace a character 'F' by the string "F-F++F-F", while the characters '+' and '-' are preserved. Thus, the L-system consists of the axiom "F" and the production rules F → F-F++F-F, + → +, - → -.

C.3 Formal definitions and implementation

There are different kinds of L-systems. Here we consider only OL-systems, in which characters are …

ALGORITHM GetCurveSize (str, xmin, xmax, ymin, ymax)
Title      Compute the size of curve given by string
Arguments  str         string
           xmin, xmax  range covered by curve in x direction
           ymin, ymax  range covered by curve in y direction
Variables  i           integer
           command     character
Globals    TurtleX     real x position of turtle
           TurtleY     real y position of turtle
           TurtleDir   direction of turtle (initialized to 0)
           TurtleDirN  number of possible directions (coded as integer)
           CO[]        array of TurtleDirN cosine values
           SI[]        array of TurtleDirN sine values
Functions  strlen(s)     returns the length of string s
           getchar(s, k) returns k-th character of string s

BEGIN
FOR i=0 TO TurtleDirN-1 DO
   CO[i] := cos (2 * Pi * i / TurtleDirN)
   SI[i] := sin (2 * Pi * i / TurtleDirN)
END FOR
TurtleDir := 0
TurtleX := TurtleY := 0.0
xmax := xmin := ymax := ymin := 0.0
FOR i=1 TO strlen (str) DO
   command := getchar (str, i)
   UpdateTurtleState (command)
   IF (command='F' OR command='f') THEN
      xmax := MAX (TurtleX, xmax)
      xmin := MIN (TurtleX, xmin)
      ymax := MAX (TurtleY, ymax)
      ymin := MIN (TurtleY, ymin)
   END IF
END FOR
END

ALGORITHM UpdateTurtleState (command)
Title      Update position and direction of turtle according to command
Arguments  command     character
Globals    TurtleX     real x position of turtle
           TurtleY     real y position of turtle
           TurtleDir   direction of turtle (coded as integer)
           TurtleDirN  number of possible directions of turtle
           CO[]        array of TurtleDirN cosine values
           SI[]        array of TurtleDirN sine values
           TStackX[]   stack of turtle x positions
           TStackY[]   stack of turtle y positions
           TStackDir[] stack of turtle directions
           TStackSize  size of turtle stack
           TStackMax   maximal size of turtle stack

BEGIN
IF (command = 'F' OR command = 'f') THEN
   TurtleX := TurtleX + CO[TurtleDir]
   TurtleY := TurtleY + SI[TurtleDir]
ELSE IF (command = '+') THEN
   TurtleDir := TurtleDir - 1
   IF (TurtleDir < 0) THEN
      TurtleDir := TurtleDirN - 1
   END IF
ELSE IF (command = '-') THEN
   TurtleDir := TurtleDir + 1
   IF (TurtleDir = TurtleDirN) THEN
      TurtleDir := 0
   END IF
ELSE IF (command = '|') THEN
   TurtleDir := TurtleDir + TurtleDirN / 2
   IF (TurtleDir >= TurtleDirN) THEN
      TurtleDir := TurtleDir - TurtleDirN
   END IF
ELSE IF (command = '[') THEN
   IF (TStackSize = TStackMax) THEN
      PRINT ("ERROR : Maximal stack size exceeded.")
      EXIT PROGRAM
   END IF
   TStackX[TStackSize] := TurtleX
   TStackY[TStackSize] := TurtleY
   TStackDir[TStackSize] := TurtleDir
   TStackSize := TStackSize + 1
ELSE IF (command = ']') THEN
   IF (TStackSize = 0) THEN
      PRINT ("ERROR : Stack empty.")
      EXIT PROGRAM
   END IF
   TStackSize := TStackSize - 1
   TurtleX := TStackX[TStackSize]
   TurtleY := TStackY[TStackSize]
   TurtleDir := TStackDir[TStackSize]
END IF
END

ALGORITHM TurtleInterpretation (str, xmin, xmax, ymin, ymax)
Title      Plot the curve given by string
Arguments  str         string
           xmin, xmax  range covered by curve in x direction
           ymin, ymax  range covered by curve in y direction
Variables  i           integer
           command     character
           Factor      real
           ix, iy      integers
Globals    xsize, ysize  size of screen in pixels
Functions  strlen(s)     returns the length of string s
           getchar(s, k) returns k-th character of string s
           MoveTo()      move the graphics position
           DrawTo()      draw a line to the new graphics position
           XScreen(x) = INT (Factor * (x - xmin))
           YScreen(y) = INT (Factor * (y - ymin))

BEGIN
Factor := MIN ((xsize-1)/(xmax-xmin), (ysize-1)/(ymax-ymin))
TurtleDir := 0
TurtleX := TurtleY := 0.0
MoveTo (XScreen (TurtleX), YScreen (TurtleY))
FOR i=1 TO strlen (str) DO
   command := getchar (str, i)
   UpdateTurtleState (command)
   IF (command='F') THEN
      DrawTo (XScreen (TurtleX), YScreen (TurtleY))
   ELSE IF (command='f') THEN
      MoveTo (XScreen (TurtleX), YScreen (TurtleY))
   END IF
END FOR
END
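The turtle state update and the size computation can be condensed into a short Python sketch (my names and structure, not the book's code; it assumes TurtleDirN = 4, i.e. 90° turns, and '+' decreases the direction index as in UpdateTurtleState()):

```python
import math

def curve_points(s, dirs=4):
    """Trace an OL-system string with a turtle; returns the visited points."""
    x = y = 0.0
    d = 0                      # direction, coded as an integer mod dirs
    stack = []                 # saved states for '[' and ']'
    pts = [(x, y)]
    for c in s:
        if c in "Ff":          # one unit step in the current direction
            x += math.cos(2 * math.pi * d / dirs)
            y += math.sin(2 * math.pi * d / dirs)
            pts.append((x, y))
        elif c == "+":
            d = (d - 1) % dirs
        elif c == "-":
            d = (d + 1) % dirs
        elif c == "|":         # turn around
            d = (d + dirs // 2) % dirs
        elif c == "[":
            stack.append((x, y, d))
        elif c == "]":
            x, y, d = stack.pop()
    return pts

def curve_size(s, dirs=4):
    """Bounding box of the traced curve, as in GetCurveSize()."""
    xs, ys = zip(*curve_points(s, dirs))
    return min(xs), max(xs), min(ys), max(ys)
```

For instance, the string "F+F+F+F" (the square of the quadratic Koch island axiom) traces a closed unit square.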
Fig. C.3: The 8-th stage in the generation of the classic Sierpinski gasket. The OL-system is given by the axiom "FXF--FF--FF", the angle δ = π/3 and the production rules X → --FXF++FXF++FXF-- and F → FF.
corresponding production rule and then recurse for
each of the characters of the resulting string until
the desired level of detail is achieved. The final
characters can immediately be interpreted
graphically provided that the overall dimensions of
the picture are known a priori.
Fig. C.4: The quadratic Koch island (5-th stage) [68]. The OL-system is given by the axiom "F+F+F+F", the angle δ = π/2 and the production rule F → F+F-F-FF+F+F-F.
Fig. C.6: Dragon curve (16-th stage). The OL-system is given by the axiom "X", the angle δ = π/2 and the production rules X → X+YF+ and Y → -FX-Y.

… one (6-th stage) is given by the axiom "F", the angle δ = π/7 and the production rule F → F[+F]F[-F]F. The right one (8-th stage) is given by the axiom "G", the angle δ = π/7 and the production rules G → GFX[+G][-G] and X → X[-FFF][+FFF]FX. Figure on page 272: Bush (5-th stage) with axiom "F", the angle δ = π/8 and the production rule F → FF+[+F-F-F]-[-F+F+F].

The pseudo code is not optimized for speed. Many improvements can be made. For example, characters can be identified with their integer ASCII representation. For each of the 128 characters there can be one (possibly empty) rule stored in an array of rules. In this setup the search for the appropriate rule to apply in the expansion of a letter is simpler and faster. As a second remark let us point out that the system routine strapp(), which appends a string to another string, is called very often in the algorithm GenerateString(). This process generates very long strings, and as a consequence the performance of strapp() may decrease dramatically. It is easy to circumvent this problem, but this again depends to a large extent on the choice of the programming language. Thus, we omit these details here.
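In Python, for instance, the repeated-append cost warned about above can be avoided by collecting the pieces of a generation in a list and joining once (an illustration, not part of the original text):

```python
def rewrite(s, rules):
    """One generation of an OL-system without quadratic string appends."""
    pieces = []
    for c in s:
        pieces.append(rules.get(c, c))   # unknown characters pass through
    return "".join(pieces)               # single concatenation at the end
```

A call such as rewrite("F+F", {"F": "FF"}) yields "FF+FF".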
In this section we compute a lower bound on the distance from a point c to M using two beautiful results of complex analysis. The first is a classical theorem due to P. Koebe, and the second is a relatively new result in complex dynamics due to Douady and Hubbard.

The Koebe theorem states that if f : D → C is an analytic function from the open unit disk D = {z : |z| < 1} into the complex plane, and if f(z) is …

… and then D(c₀) ⊂ Ψ(D). That is, the distance from c₀ to M is at least …

²³ D(r, z) denotes the disk of radius r centered at z in C.
²⁴ If d = x + iy then d̄, the complex conjugate of d, is d̄ = x - iy. The map is analytic and bijective on D, which is important since we need a one-to-one analytic map in order to apply the Koebe theorem.
²⁵ This means that we are "throwing away" some points in D in our estimate. A more accurate and somewhat cumbersome bound can be derived by considering the inverse of m_d ∘ r ∘ Φ ∘ r : C\r(M) → D.
Substituting G(c) = log(|Φ(c)|) in Eqn. (D.2) we get

    R = 2 sinh G(c₀) / ( e^G(c₀) |G′(c₀)| ),        (D.3)

proving Eqn. (D.1).

When c is near M, G(c) is close to 0, and we can use Eqn. (4.30) and the estimate sinh G(c) ≈ G(c) to approximate Eqn. (D.3) by

    R ≈ 2 |z_n| log|z_n| / ( |ż_n| · |z_n|^(2^-n) ),        (D.4)

with z_n = f^n(c₀) and ż_n = d/dc f^n(c₀).

When c₀ is far from M, that is, when the estimate sinh G(c₀) ≈ G(c₀) is poor, we can approximate R in Eqn. (D.2) using

    |Φ(c₀)| ≈ |z_n|^(2^-n),        (D.5)
    |Φ′(c₀)| ≈ 2^-n |z_n|^(2^-n - 1) |ż_n|,        (D.6)

which are a consequence of Hubbard and Douady's construction of the map Φ(c) [29].

DBM/M - Distance Bound Method for Mandelbrot set: For c outside of M we want to compute the radius of a disk centered at c which will be completely outside of M. We iterate z_(n+1) = z_n² + c until |z_n| > R_max for some R_max > 2. For n large enough, the 1/|z_n|^(2^-n) term in Eqn. (D.4) is very nearly 1. For example, with R_max = 100 and n = 20, 1 and 1/|z_n|^(2^-n) differ by only 0.01 percent. So in practice we simply ignore this term. Since Eqn. (D.4) is very similar to Eqn. (4.30), we estimate the lower bound on the distance from c to M as one fourth of the estimate of the upper bound computed in MSetDist(). That is, for |z_n| large, …

… to set an unmarked pixel as belonging to the boundary of M. We do this when such a pixel at the point c satisfies dist(c, M) < DELTA. Thus, DELTA determines the "thickness" of the boundary in the image. If dist(c, M) > DELTA, we mark c as being in C\M.

The program parameter RECUR determines when the recursion stops. That is, if dist(c, M) < RECUR then we do not call the algorithm recursively. Varying RECUR determines how quickly and how crudely a rough outline of M is generated.

We can modify the algorithm slightly by varying the parameter RECUR during runtime. By doing this, we produce successively refined images of M very quickly. Initially, we fix RECUR at some large value. When we compute a disk with radius smaller than RECUR, we store its center in a list. When no more disks with radius greater than RECUR are found, we reduce RECUR and call the algorithm recursively at several points along the boundary of the disks on the list. Figure D.2 shows how this modification generates successively refined images of M.

Finally, it is possible to use a somewhat different recursive algorithm when, for example, no primitive routine to fill a disk exists. In this algorithm, we break the image into square regions, and compute the lower bound on the distance to M from the center of each region. If the region is wholly outside of M we mark it as such; otherwise, we break it up into four smaller square regions and repeat the algorithm on each of these. When the regions contain only one pixel, we can repeat the procedure above using DELTA to label the pixel as belonging to the boundary of M or not.
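A minimal Python sketch of the DBM/M exterior estimate just described (my function name and constants; as in MDisk(), a caller takes one fourth of the returned value as the lower bound, and the 1/|z_n|^(2^-n) factor is ignored):

```python
import math

def mset_distance(c, maxiter=100, rmax=100.0):
    """Estimate 2*|z_n|*log|z_n| / |z_n'| for the first n with |z_n| > rmax.
    Returns 0.0 when the orbit does not escape within maxiter steps."""
    z, dz = 0j, 0j
    for _ in range(maxiter):
        dz = 2.0 * z * dz + 1.0       # d z_n / d c, by the chain rule
        z = z * z + c
        if abs(z) > rmax:
            az = abs(z)
            return 2.0 * az * math.log(az) / abs(dz)
    return 0.0                        # assume c lies in (or very near) M
```

For c = 3, for example, one fourth of the returned estimate is a positive lower bound on the true distance to M, which is 2.75 along the real axis.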
ALGORITHM FastM ()
Variables  ix, iy   integers which scan through MSet[][]
Comments   This routine scans through the array MSet[][], which will contain the result.
           It calls MDisk(), which draws the appropriate disks in MSet[][] and then calls
           itself recursively in order to draw a rough outline of M quickly. M is
           approximated by the elements of MSet[][] set to zero.

BEGIN
FOR iy=0 TO n-1 DO
   FOR ix=0 TO n-1 DO
      IF (MSet[ix][iy] = 0) THEN
         MDisk (ix, iy)
      END IF
   END FOR
END FOR
END

Fig. D.2: Progression of the modified FastM() algorithm, showing a crude outline of M and then successively refined images of a portion of M. The parameter RECUR was first set to a large value and then reduced (the values 20, 50, 100, 1000 and 8000 appear among those used). The successive images required on the order of 40 to 60,552 calls to MDisk(), considerably less than the number of pixel points a standard scanning would need to iterate.

D.2 Finding disks in the interior of M

In this section we derive a lower bound on the distance from a point c ∈ M to the boundary of M. As in the previous section, we combine the Koebe theorem with a theorem of Douady and Hubbard.

Douady and Hubbard construct maps from subsets of the interior of M to D. The interior of the
Mandelbrot set is conjectured to consist of components (or regions) where the iterates of f_c(z) = z² + c starting at z = 0 converge to a periodic cycle of some period p. A component of the interior of M where the numbers f^n(0) converge to a periodic cycle of a fixed period p is called hyperbolic. It is conjectured that all of the components of the interior of M are hyperbolic. For example, the main cardioid of M is a hyperbolic component where the iterates f^n(0) converge as n → ∞ to a fixed point (which depends on the point c).

Douady and Hubbard demonstrate that each hyperbolic component of the interior of M can be mapped in a one-to-one fashion onto D. We will apply the Koebe theorem to the inverse of the map they construct, composed with the appropriate Möbius transformation. Specifically, they show that if W is a hyperbolic component of M with c ∈ W, and if z₀(c), ..., z_(p-1)(c) is a periodic cycle of period p to which f^n(0) converges as n → ∞, then the map

    ρ : W → D,   c ↦ (d/dz) f_c^p (z₀(c))

is analytic, one-to-one and onto. If c₀ ∈ W and ρ(c₀) = d ∈ D then we can compose ρ(c) with m_d(z) = (z - d)/(1 - d̄z) to get a map m_d ∘ ρ : W → D which maps c₀ ↦ 0. Now the map (m_d ∘ ρ)⁻¹(z) satisfies the hypothesis of the Koebe theorem, by construction.

ALGORITHM MDisk (ix, iy)
Title      Mark a disk at the point ix, iy in MSet[][]
Arguments  ix, iy     point in MSet[][] to be tested
Globals    MSet[][]   square array of boolean type, size n by n, initialized to 0
           n          image resolution in both x- and y-direction
           xmin, ymin low x-value and y-value of image window
           side       side length of image window - which is square
           maxiter    maximal number of iterations for routine MSetDist()
           delta      parameter determining when to mark a pixel as being in the boundary of M
           recur      minimum distance from M for which we call MDisk() recursively
Variables  dist       estimate of lower bound on distance to M
           cx, cy     point corresponding to MSet[ix][iy]
           irad       lower bound on distance to M in pixel units
Functions  MSetDist() returns distance estimate of a point to M
           INT()      returns the integer part of its argument
           FILLDISK() fill a disk in MSet[][] with the value 1
Comments   delta should be about side / (n-1). It determines the "thickness" of the boundary.
           recur should be some small integer multiple of delta.
           FILLDISK (MSet[][], ix, iy, irad) should fill a disk centered at MSet[ix][iy] with radius irad.

BEGIN
IF (ix >= 0 AND ix < n AND iy >= 0 AND iy < n AND MSet[ix][iy] = 0) THEN
   cx := xmin + side * ix / (n-1)
   cy := ymin + side * iy / (n-1)
   dist := 0.25 * MSetDist (cx, cy, maxiter)
   irad := INT (dist * (n-1) / side)
   IF (irad = 1) THEN
      MSet[ix][iy] := 1
   ELSE IF (irad > 1) THEN
      FILLDISK (MSet[][], ix, iy, irad)
   ELSE IF (dist > delta) THEN
      MSet[ix][iy] := 1
   END IF
   IF (dist > recur) THEN
      IF (irad > 1) THEN
         irad := irad + 1
         MDisk (ix, iy + irad)
         MDisk (ix, iy - irad)
         MDisk (ix + irad, iy)
         MDisk (ix - irad, iy)
         irad := INT (0.5 + irad / sqrt(2))
         MDisk (ix + irad, iy + irad)
         MDisk (ix - irad, iy - irad)
         MDisk (ix - irad, iy + irad)
         MDisk (ix + irad, iy - irad)
      END IF
   END IF
END IF
END

In order to compute the radius of a disk which is contained in W we must find |d/dc (m_d ∘ ρ)⁻¹(0)|, which we can compute by inverting |d/dc (m_d ∘ ρ)(c₀)|.
Differentiating gives

    ρ′(c) = ∂²f^p/∂c∂z (c, z₀(c)) + ∂²f^p/∂z² (c, z₀(c)) · dz₀(c)/dc.        (D.9)

This is not the most pleasant expression, but fortunately each term in Eqn. (D.9) can be computed recursively by applying the chain rule to f^(n+1)(c, z) = [f^n(c, z)]² + c. This yields

    ∂f^(n+1)/∂z    = 2 f^n · ∂f^n/∂z,        (D.10)
    ∂f^(n+1)/∂c    = 2 f^n · ∂f^n/∂c + 1,        (D.11)
    ∂²f^(n+1)/∂z²  = 2 [ (∂f^n/∂z)² + f^n · ∂²f^n/∂z² ],        (D.12)
    ∂²f^(n+1)/∂c∂z = 2 [ ∂f^n/∂c · ∂f^n/∂z + f^n · ∂²f^n/∂c∂z ],        (D.13)

for n = 0, ..., p-1, with f⁰ = z₀(c₀), ∂f⁰/∂z = 1 and ∂f⁰/∂c = ∂²f⁰/∂z² = ∂²f⁰/∂c∂z = 0.

Since z₀(c) is periodic, z₀(c) = f^p(c, z₀(c)) and we can compute

    dz₀(c)/dc = d/dc [ f^p(c, z₀(c)) ] = ∂f^p/∂c (c, z₀(c)) + ∂f^p/∂z (c, z₀(c)) · dz₀(c)/dc,

so

    dz₀(c)/dc = ∂f^p/∂c (c, z₀(c)) / ( 1 - ∂f^p/∂z (c, z₀(c)) ).        (D.14)

Combining the equations above we arrive at the desired estimate

    |d/dc (m_d ∘ ρ)⁻¹ (0)| = R = ( 1 - |ρ(c₀)|² ) / |ρ′(c₀)|,        (D.15)

where ρ(c₀) = ∂f^p/∂z and

    ρ′(c₀) = ∂²f^p/∂c∂z + ∂²f^p/∂z² · ( ∂f^p/∂c / ( 1 - ∂f^p/∂z ) ),

evaluated at (c, z) = (c₀, z₀(c₀)). With this, D(R/4, c₀) ⊂ W ⊂ M. That is, all the points whose distance from c₀ is smaller than R/4 will be in M also.

DBMI/M - Distance Bound Method for the Interior of the Mandelbrot set: Let c be a point in the interior of M and let z₀, ..., z_(p-1) be a periodic cycle of period p to which f^n(0) is attracted as n → ∞. We want to compute the radius of a disk centered at c which is contained completely in M. This involves using Eqn. (D.10) to Eqn. (D.14) to recursively compute Eqn. (D.9) and Eqn. (D.15). To overcome the cumbersome complex arithmetic involved, a small complex arithmetic library can be written.

In order to compute the equations recursively, we need to find the period of the periodic cycle to which f^n(0) is attracted and a way to estimate z₀. The problem of finding the period does not have a satisfactory solution, and on occasion bad estimates of the lower bound may occur due to an error in computing the period.

The period and z₀ can be found in the following way. As in MSetDist(), we iterate f^n(0) until n = MAXITER, or until the iterates obtain a large magnitude. In the first case, we assume that c ∈ M and hope that f^n(0) has been attracted close to the periodic cycle. We then check whether f^(n-p)(0) is near f^n(0) for p increasing from 1 to some maximum value. When f^(n-p)(0) and f^n(0) are near, we guess that p is the period and let z₀ = f^n(0). In practice, it is sufficient to check whether |f^(n-p)(0) - f^n(0)| is smaller than 5 × 10⁻⁶ times the side length of the region in C we are imaging.

After using Eqn. (D.15) to compute the radius of a disk at c which is contained in M, we incorporate this computation into the MSetDist() algorithm so that it returns a bound on the distance to the boundary of M for points in both the exterior and interior of M. The FastM() algorithm will then work for the interior of M as well as the exterior.
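The period search described above is easy to prototype (a Python sketch with illustrative constants, not the book's implementation):

```python
def find_period(c, maxiter=1000, pmax=64, tol=5e-6):
    """Iterate f_c(z) = z*z + c from 0.  If the orbit stays bounded,
    return the smallest p with |z_(n-p) - z_n| < tol; return 0 when the
    orbit escapes or no period is detected (c presumed outside M)."""
    orbit = [0j]
    z = 0j
    for _ in range(maxiter):
        z = z * z + c
        if abs(z) > 2.0:
            return 0            # orbit escapes: c is outside M
        orbit.append(z)
    for p in range(1, pmax + 1):
        if abs(orbit[-1 - p] - orbit[-1]) < tol:
            return p
    return 0
```

For instance, c = 0 (main cardioid) yields period 1, c = -1 (the period-2 bud) yields period 2, and an escaping parameter such as c = 1 yields 0. As the text warns, the detected period can be wrong near component boundaries, where convergence to the cycle is slow.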
D.3 Connected Julia sets

The discussion in Section D.1 applies almost identically when we substitute a connected Julia set J for M. The distance estimate turns out to be …

[1] Amburn, P., Grant, E., Whitted, T.
Managing geometric complexity with enhanced procedural methods
Computer Graphics 20,4 (1986)

[2] Aono, M., and Kunii, T.L.
Botanical tree image generation
IEEE Computer Graphics and Applications 4,5 (1984) 10-33

[3] Barnsley, M.F. and Demko, S.
Iterated function systems and the global construction of fractals
The Proceedings of the Royal Society of London A399 (1985) 243-275

[4] Barnsley, M.F., Ervin, V., Hardin, D. and Lancaster, J.
Solution of an inverse problem for fractals and other sets
Proceedings of the National Academy of Sciences 83 (1985)

[5] Barnsley, M.F.
Fractal functions and interpolation
Constructive Approximation 2 (1986) 303-329

[6] Barnsley, M.F., Elton, J.
A new class of Markov processes for image encoding
to appear in: Journal of Applied Probability

[7] Barnsley, M.F.
Fractals Everywhere
to appear, (Academic Press, 1988)

[8] Beaumont, G.P.
Probability and Random Variables
(Ellis Horwood Limited, 1986)

[9] Becker, K.-H. and Dörfler, M.
Computergraphische Experimente mit Pascal
(Vieweg, Braunschweig, 1986)

[14] Bouville, C.
Bounding ellipsoids for ray-fractal intersections
Computer Graphics 19,3 (1985)

[15] LaBrecque, M.
Fractal Applications
Mosaic 17,4 (1986) 34-48

[16] Brolin, H.
Invariant sets under iteration of rational functions
Arkiv f. Mat. 6 (1965) 103-144

[17] Cochran, W.T. et al.
What is the Fast Fourier Transform?
Proc. IEEE 55 (1967) 1664-1677

[18] Coquillart, S. and Gangnet, M.
Shaded display of digital maps
IEEE Computer Graphics and Applications 4,7 (1984)

[19] Carpenter, L.
Computer rendering of fractal curves and surfaces
Computer Graphics (1980) 109ff.

[20] Clark, K.
Landscape Painting
(Charles Scribner's Sons, New York, 1950)

[21] Coulon, F. de
Signal Theory and Processing
(Artech House, 1986)

[22] Coxeter, H.S.M.
The Non-Euclidean symmetry of Escher's picture 'Circle Limit III'
Leonardo 12 (1979) 19-25

[23] Demko, S., Hodges, L., and Naylor, B.
Construction of fractal objects with iterated function systems
Computer Graphics 19,3 (July 1985) 271-278
[24] Devaney, R.L.
A piecewise linear model for the zones of instability of an area preserving map
Physica D10 (1984) 387-393

[25] Devaney, R.L.
An Introduction to Chaotic Dynamical Systems
(Benjamin Cummings Publishing Co., Menlo Park, 1986)

[26] Devaney, R.L.
Chaotic bursts in nonlinear dynamical systems
Science 235 (1987) 342-345

…
Kinetics of Aggregation and Gelation
(North-Holland, New York, 1984)

[37] Fatou, P.
Sur les équations fonctionelles
Bull. Soc. Math. Fr. 47 (1919) 161-271, 48 (1920) 33-94, 208-314

[38] Fishman, B. and Schachter, B.
Computer display of height fields
Computers and Graphics 5 (1980) 53-60

[39] Foley, J.D. and van Dam, A.
Fundamentals of Interactive Computer Graphics
(Addison-Wesley, Reading, Mass., 1982)
[51] Hénon, M.
A two-dimensional mapping with a strange attractor
Commun. Math. Phys. 50 (1976) 69-77

[52] Hentschel, H.G.E. and Procaccia, I.
The infinite number of generalized dimensions of fractals and strange attractors
Physica 8D (1983) 435-444

[56] Jürgens, H., Peitgen, H.-O. and Saupe, D.
The Mandelbrot Set: A Computer Animation of Complex Dynamical Systems
(Institut für den Wissenschaftlichen Film, Göttingen, 1988) (to appear)

[57] Jürgens, H., Saupe, D. and Voss, R.
Fast rendering of height fields
in preparation

[64] Mandelbrot, B.B. and Wallis, J.R.
Some long-run properties of geophysical records
Water Resources Research 5 (1969) 321-340

[65] Mandelbrot, B.B.
Intermittent turbulence in self-similar cascades: Divergence of higher moments and dimension of the carrier
J. Fluid Mech. 62 (1974) 331-358

[69] Mandelbrot, B.B.
Comment on computer rendering of fractal stochastic models
Comm. of the ACM 25,8 (1982) 581-583

[70] Mandelbrot, B.B.
Fractal curves osculated by sigma-discs, and construction of self-inverse limit sets
Mathematical Intelligencer 5,2 (1983) 9-17
190
Index

AdditionsFM1D(), 86, 90
affine transformation, 228
  contractive, 228
  expansive, 228
aliasing, 54, 216
Almgren, F., 15
alphabet, 273, 281
Amburn, P., 221
animation, 215
  zoom, 215
anti-aliasing, 130
Archimedes, 11, 244
Archimedes Cup, 245
Arnold, V.I., 210
ASCII representation, 284
asymptotic value, 157, 159
attractor, 148, 229
  strange, 147
autocorrelation function, 65, 69, 91, 110
average contractivity condition, 229
axiom, 273, 277, 281

barbarian France, 219
Barnsley, J., 226
Barnsley, M., 18, 125, 229
basin of attraction, 172
Berger, J.M., 5
bifurcation, 163
  saddle node, 163
binary decomposition, 196
Binary Decomposition Method for Julia Sets, 203
Binary Decomposition Method for the M-set, 203
binomial, asymmetric, 248
Boundary Scanning Method, 176
box coverings, 61
branching patterns, 273
broccoli, 58
Brownian motion, 39, 40, 42, 48, 69, 74, 76, 91
  fractional, 42, 44, 45, 58, 82
  on a circle, 81
  on a sphere, 24, 33
bushes, 285, 286

calculator, 141, 143
calculus, 42, 219
Campbell's theorem, 69
Cannon, J., 15
Cantor bouquet, 16, 159
Cantor set, 148, 179, 182, 186, 192, 211
Carpenter, L., 10
Carter, E., 13, 41
Cartesian plane, 151
Cayley, A., 207
Central Limit Theorem, 78
chaos, 148
Chaos Game, 223, 224, 226
chaotic set, 146, 149, 152
chimneys, 238
Circle Limit III, 4
CleanUpString(), 277
closure, 173
cloudlet, 236, 238
clouds, 12, 13, 23, 35, 113, 237, 238
clusters of galaxies, 35
coastline, 22, 30, 35, 46, 62
collage, 235
Collage Theorem, 235
color, 31, 130
color map, 113, 215, 230
completely labeled, 178
complex number, 151
  absolute value, 151
  polar representation, 153
compress, 132
compression ratio, 238
condensing matter, 36
connected, 179, 181, 187
  locally, 196, 199
context dependence, 251, 286
Continuous Potential Method for Julia Sets, 190
Continuous Potential Method for Mandelbrot Set, 190
correlation, 41, 53, 59, 84, 136
cosine, 144
covariance, 135
Coxeter, H.S.M., 4
craters, 22, 31, 32
creases, 53, 243
critical point, 165, 181
critical value, 156, 159, 165, 181

Damerau, F.J., 5
data compression, 132
degree, 207
Demko, S., 221
dendrite, 203
derivative, logarithmic, 67
deterministic systems, 139
Devaney, R.L., 17, 125
Dietrich, H.C., 5
diffusion limited aggregation, 14, 37
Digital Productions, 10
digitized model, 230
dimension, 28, 29, 246
  box, 61
  Euclidean, 28
  fractal, 28, 30, 45, 55, 61, 63, 246
  Hausdorff, 159
  latent fractal, 64
  mass, 62, 66
  similarity, 28, 60
  topological, 33, 45
displacement methods, 96
DisplayHeightField(), 128, 129
Distance Bound for Interior of M, 296
Distance Bound Method, 290
Distance Bound Method for Julia sets, 296
distance estimator, 192
Distance Estimator Method, 196, 199
distance fading, 131
distribution, 134
  density, 175
  finite dimensional, 136
  Gaussian, 58, 244
  normal, 77, 134, 135
  uniform, 134, 135
Dobkin, D., 15
Douady, A., 15, 189, 288, 294
Dow Jones, 138, 144
dragon curve, 284
Drain, 123
Druid, 219
dyadic, 247
dynamical system, 138, 145
  chaotic, 145

ecology, 139
Elton, J., 229, 231
Epstein, D., 15
equipotential curves, 184
ergodic theorem, 231
escape time, 203
Escher, M.C., 4
Escher-like, 187
essential singularity, 157
Euclidean factories, 26
Euclidean geometry, 22, 25
expectation, 134
explosion, 164
external angle, 192

FastM(), 291
Fatou, P., 150, 172
Feller, W., 2
fern, 240, 241, 242
field lines, 192, 197
film, 8, 167, 215
fixed point, 146, 171
  attracting, 171
  indifferent, 171
  parabolic, 173, 177
  repelling, 171, 179
flakes, fractal, 34, 35, 46
flappers, 58
floating horizon method, 126
floating point numbers, 131
Flower Field, 123
fog, 131
Fourier filtering method, 105
Fourier transform, 65, 69, 90
  discrete, 49
  inverse, 105
Fournier, A., 9, 221
Fractal planet, 22, 24, 33, 48
frequency, 50
frequency cutoff, 57
Fricke, R., 2
function, 141
  entire, 157
  exponential, 157
  transcendental, 156
Fussell, D., 9

Gagne, R., 5, 250, 260
Galileo, 22
Gardner, M., 9
Gauss(), 76, 77, 134
GenerateString(), 275
Genesis, 10
GetCurveSize(), 279
gingerbreadman, 149
Given, J.A., 5
Gleick, J., 18
gold film, 36
  percolation, 37
Gouraud shading, 112, 130
graftal, 11
Great Picard Theorem, 157
Greenberg Associates, 8
grid types, 96
Guckenheimer, J., 203
Guder, F., 5
Gulliver, 58

Hanan, J., 125
Handelman, S.W., 5, 6, 8
Hausdorff distance, 235
Hawkes, J., 5
Hénon map, 147
Herman, M., 210
hidden surface problem, 126
Hilbert curve, 278
Hironaka, E., 18
histogram, 175
Hodges, L., 221
Hollywood, 8
Hubbard, J.H., 15, 17, 189, 287, 288, 294
Huffman encoding, 132

IBM, 5, 6
illumination model, 128
image compression, 203
increments, independent, 59
infinity, 170
InitGauss(), 77
initial point, 231
initial values, 147
intermittency, 163, 165
InterpolatedFM(), 104, 112
InterpolatedFM1D(), 100
interpolation, 216
  multilinear, 103, 112
Inverse Iteration Method, 152, 173, 177
isotropic, 95
iterated function system, 177, 221
  attractor, 235
  code, 229
iteration, 141, 158
ivy leaves, 227

Jacquin, A., 125
Jolet, 41
Julia set, 203
  exploding, 159
  fast algorithm, 296
  filled-in, 154, 172, 187
  Lebesgue measure, 159
Julia, G., 17, 150, 172
JuliaIIM(), 154, 177
jumps, independent, 48, 80
Jürgens, H., 125

Kawaguchi, Y., 221
key frames, 215
Klein, F., 2
Kleinian groups, 14
Koch island, 283
Koch snowflake curve, 26, 27, 28, 51, 275, 276
Koebe, P., 288
Kolmogorov, A.N., 210

L-systems, 273, 274, 279
  deterministic, 281
  non-deterministic, 286
  probabilistic, 286
  pseudo, 286
labeling, 203
lacunarity, 55, 56, 68, 87, 88
Laff, M.R., 15
Landsberg, 246
landscape, 32, 47, 70, 249
  lunar, 31
Laputa, 58
Last Starfighter, 10
lattice, 178, 251
  hexagonal, 252, 254
  triangular, 252
leaf target, 233, 235
Lempel-Ziv-Welch encoding, 132
length, 63
Level Set Method, 187
Level Set Methods for M, 190
level sets, 182
Lewis, J.P., 110, 125
Lewitan, H., 5
Lichtenberger, G.B., 5
light scattering, 35
Lilac Twig, 123
Lindenmayer, A., 274
Lipschitz constant, 228, 235
Loch Ness, 35
logistic equation, 139, 211
LOGO, 273
Lovejoy, S., 12
Lucasfilm, 10

Malassenet, F., 125
Mandelbrot set, 130, 165, 177, 186, 199, 288
  fast algorithm, 287
  universality, 213
Mandelbrot, B.B., 15, 25, 125, 177, 221
maple leaf, 241
Marden, A., 15
Mareda, J.F., 125
Mastin, G., 111, 125
McGehee, R., 203
McKenna, D.M., 5
McMullen, C., 159
MDisk(), 293
mean, 134
mean square increments, 75
mesh size, 178
microscope, 203
midpoint displacement, 11, 51, 56, 78, 85, 96, 243, 244, 248
  tile, 250
  wire frame, 250
MidPointBM(), 79
MidPointFM1D(), 85, 87
MidPointFM2D(), 100, 101, 112, 113
MidPointRecursion(), 79
Miller, G.S.P., 221
Milnor, J., 15, 192, 287
Modified Boundary Scanning Method, 179
Modified Inverse Iteration Method, 178
Moiré pattern, 160
Moldave, P., 15
moments, 67
monster, 25
Moser, J., 210
Mount Ararat, 8
Mount Archimedes, 245
Mount Takagi, 247
MSetCPM(), 191
MSetDEM(), 197
MSetDist(), 198
MSetLevel(), 188
MSetLSM(), 188
MSetPot(), 191
multi-fractal, 67, 68
Mumford, D., 14, 15
Musgrave, K., 5, 125, 243, 260
music, 39, 42

Naylor, B., 221
Ness, J.W. van, 9
Newton method, 17, 207, 213
Nile river, 39
noise, 39, 40, 216
  fractional Gaussian, 59
  non-stationary, 66
Norton, V.A., 15
Nyquist sampling theorem, 54

Oneto, J.L., 5, 7
Oppenheimer, P., 13, 221, 240
orbit, 143, 148, 157, 163, 164
  backward, 153
  escaping, 157, 159
  homoclinic, 164
  periodic, 148, 150
  stable, 145
  unstable, 145, 153
orthogonality principle, 110

painter's algorithm, 113
parabola, 11
paraboloid, 244
parallel projection, 126
particle, sticky, 38
Peano curve, 29, 284
Peitgen, H.-O., 18, 125
percolation, 36
percolation clusters, 62
periodic cycle, 294
Perrin, J., 2
Peyrière, J., 5
phase transitions, 213
Phong shading, 130
piecewise linear mapping, 149
Plato, 42
Plot-OL-System(), 274
Poincaré, H., 15
polar caps, 33
polygons, 113, 150, 286
polynomial, 212
population, 139
potential, 130, 186, 187
potential function, 189
  electrostatic, 182
probability, 133
probability density function, 134
probability measures, 66
production rules, 274, 281
projection, 126
  perspective, 110
Prusinkiewicz, P., 13, 125, 274
pseudo code, 73
pulses, independent, 69

rand(), 77
random additions, 96
  successive, 54, 56, 86, 90
random field, 95
random function, 136
random number, 77, 133
random phase, 57
random variable, 133
  Gaussian, 51
random walk, 231
RandomCutsBM(), 82
raytracing, 109, 112
rectangles, 230
recursion, 173
reflected light vector, 127
relief models, 255
RenderIFS(), 223, 231, 232
rendering, 113, 128
  cost of, 250
rescaled, properly, 74, 75, 83
Reuter, L., 125
RGB space, 113
Richardson, 30
Richter, P.H., 18, 125
rivers, 1, 243, 256
Riznychok, J.T., 5
Roeckerath-Ries, M.-T., 125
round-off errors, 145
run length encoding, 132, 238
Ruysdael, S. van, 13

sample space, 133
Sander, L., 14
Saupe, D., 18, 125
scaling, 25, 28, 47, 50, 59
seed, 77
Seiter, L., 5
self-affine, 59, 62, 64

variance, 135
Vasta, L., 5
view vector, 128
Vol Libre, 10
von Koch curve, 68
Voss, R.F., 5, 6, 125

Wallis, J.R., 5
water vapor density, 33
Watterberg, P.A., 125
waves, 111
Weierstrass function, 57
white noise, 39, 48, 50, 69, 75, 76, 91, 112
WhiteNoiseBM(), 75, 76
Wiener, N., 51
Wiener-Khintchine relation, 65, 91, 110
Wilkes, A., 287
Witten, T., 14
Wright, D., 15
1987 Award
for Distinguished
Technical Communication
Intellectually stimulating and full of beautiful color plates, this book by German scientists Heinz-Otto
Peitgen and Peter H. Richter is an unusual and unique attempt to present the field of Complex
Dynamics. The astounding results assembled here invite the general public to share in a new
mathematical experience: to revel in the charm of fractal frontiers.
In 88 full color pictures (and many more black and white illustrations), the authors present
variations on a theme whose repercussions reach far beyond the realm of mathematics. They show
how structures of unseen complexity and beauty unfold by the repeated action of simple rules. The
implied unpredictability of many details in these processes, in spite of their complete determination
by the given rules, reflects a major challenge to the prevailing scientific conception.
• Learn more about the Mandelbrot set and enjoy the beauty and complexity of this fascinating object.
• Experiment with fractals using the algorithms outlined in the book.
• Four invited contributions, by leading scientists, including Benoit Mandelbrot, and one artist, complement the book with further background and provocative views about the relation of science to art.
“. . . With its stunning images in black and white and in color, it is both a mathematics textbook and a coffee table adornment . . .” Scientific American
“. . . But until very recently not even a science-fiction writer would have anticipated this picture book celebrating the incredibly
rich geometry encoded in the humble quadratic equation ...” Nature
1986/199 pp./184 figures in 221 separate illus., mostly in color/hardcover/$39.00/ISBN 0-387-15851-0
This gives the desired result σ² only for the case H = 1/2 of normal Brownian motion.
As a consequence, points generated at different stages have different statistical properties in their neighborhoods. This often leaves a visible trace that does not disappear as more stages are added. The effect is more pronounced as H → 1. These artifacts, which are a consequence of the lack of stationarity of the mathematical approximation, are particularly visible on fractal
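The stage-dependent statistics described above can be seen directly in the one-dimensional midpoint displacement construction. Below is a minimal Python sketch, not the book's MidPointFM1D() pseudocode itself; the function name and defaults are mine. At stage n the Gaussian offset added to each midpoint has standard deviation sigma · 2^(-nH) · sqrt(1 − 2^(2H−2)), which makes the increments exactly stationary only for H = 1/2 (ordinary Brownian motion).

```python
import math
import random

def midpoint_fm1d(max_level, sigma=1.0, H=0.5, seed=42):
    """Approximate fractional Brownian motion on [0,1] by midpoint
    displacement (a sketch; assumes 0 < H <= 1).

    Returns 2**max_level + 1 samples X[0..N] with X[0] = 0.
    """
    rng = random.Random(seed)
    N = 2 ** max_level
    X = [0.0] * (N + 1)
    X[N] = rng.gauss(0.0, sigma)       # endpoint drawn once at stage 0
    step = N
    for n in range(1, max_level + 1):
        # offset std at stage n; exact stationarity only when H = 1/2
        delta = sigma * 0.5 ** (n * H) * math.sqrt(1.0 - 2.0 ** (2 * H - 2))
        half = step // 2
        for i in range(half, N, step):
            # midpoint = average of neighbors + scaled Gaussian offset
            X[i] = 0.5 * (X[i - half] + X[i + half]) + rng.gauss(0.0, delta)
        step = half
    return X
```

For H ≠ 1/2 the points fixed at early stages never receive further refinement of their neighborhood statistics, which is exactly the nonstationarity the text refers to.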
The norm of a complex number z is defined as |z| = sqrt(Re(z)² + Im(z)²), where Re(z) and Im(z) denote the real and imaginary parts of z. Let R be a mapping on C and R(z) = z. Then z is a fixed point and
• z is attracting, provided |R'(z)| < 1,
• z is repelling, provided |R'(z)| > 1,
• z is indifferent, provided R'(z) = exp(2πia).
In the latter case a may be rational or irrational, and accordingly z is called rationally or irrationally indifferent. If R^k(z) = R(R^(k-1)(z)) = z and R^j(z) ≠ z for 1 ≤ j < k, we call z a point of period k, and the set {z, R(z), ..., R^(k-1)(z)} is called a k-cycle, which may be attracting, repelling or indifferent, depending on |(R^k)'(z)|.
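This classification can be sketched in a few lines of Python. The helper names classify and cycle_modulus are mine; the example uses R(z) = z² + c, for which R'(z) = 2z, and the chain rule (R^k)'(z) = R'(z₀)·R'(z₁)···R'(z_{k−1}) over a k-cycle.

```python
def classify(modulus):
    """Classify a fixed point or cycle from |R'(z)| (resp. |(R^k)'(z)|)."""
    if modulus < 1.0:
        return "attracting"
    if modulus > 1.0:
        return "repelling"
    return "indifferent"

def cycle_modulus(R_prime, cycle):
    """Chain rule: |(R^k)'(z)| = product of |R'(z_i)| over the k-cycle."""
    m = 1.0
    for z in cycle:
        m *= abs(R_prime(z))
    return m

R_prime = lambda z: 2 * z  # derivative of R(z) = z^2 + c

# c = 0: the fixed points of z^2 are z = 0 and z = 1
print(classify(cycle_modulus(R_prime, [0])))      # attracting
print(classify(cycle_modulus(R_prime, [1])))      # repelling

# c = -1: {0, -1} is a 2-cycle of z^2 - 1; |(R^2)'(0)| = |2*0|*|2*(-1)| = 0
print(classify(cycle_modulus(R_prime, [0, -1])))  # attracting
```

A cycle whose derivative product is 0, as in the last example, contains a critical point and is called superattracting.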