The Rust Age


ribbonfarm.com, 2007-2012
Venkatesh Rao

Ribbonfarm Inc.
2014

Copyright 2014 by Venkatesh Rao


All rights reserved. This book or any portion thereof may not be reproduced or used in
any manner whatsoever without the express written permission of the publisher except
for the use of brief quotations in a book review or scholarly journal.
First Printing: 2014
www.ribbonfarm.com

Contents

Part 0: Legibility
A Big Little Idea Called Legibility

Part 1: The Art of Refactored Perception
The Art of Refactored Perception
The Parrot
Amy Lin and the Ancient Eye
The Scientific Sensibility
Diamonds versus Gold
How to Define Concepts
Concepts and Prototypes
How to Name Things
How to Think Like Hercule Poirot
Boundary Condition Thinking
Learning From One Data Point
Lawyer Mind, Judge Mind
Just Add Water
The Rhetoric of the Hyperlink
Seeking Density in the Gonzo Theater
Rediscovering Literacy

Part 2: Towards an Appreciative View of Technology
Towards an Appreciative View of Technology
An Infrastructure Pilgrimage
Meditation on Disequilibrium in Nature
Glimpses of a Cryptic God
The Epic Story of Container Shipping
The World of Garbage
The Disruption of Bronze
Bay's Conjecture
Hall's Law: The Nineteenth Century Prequel to Moore's Law
Hacking the Non-Disposable Planet
Welcome to the Future Nauseous
Technology and the Baroque Unconscious
The Bloody-Minded Pleasures of Engineering
Towards a Philosophy of Destruction
Creative Destruction: Portrait of an Idea

Part 3: Getting Ahead, Getting Along, Getting Away
Getting Ahead, Getting Along, Getting Away
The Crucible Effect and the Scarcity of Collective Attention
The Calculus of Grit
Tinker, Tailor, Soldier, Sailor
The Turpentine Effect
The World is Small and Life is Long
My Experiments with Introductions
Extroverts, Introverts, Aspies and Codies
Impro by Keith Johnstone
Your Evil Twins and How to Find Them
Bargaining with your Right Brain
The Tragedy of Wiio's Law
The Allegory of the Stage
The Missing Folkways of Globalization
On Going Feral
On Seeing Like a Cat
How to Take a Walk
The Blue Tunnel
How Do You Run Away from Home?
On Being an Illegible Person
The Outlaw Sea by William Langewiesche
The Stream Map of the World

Part 4: The Mysteries of Money
The Mysteries of Money
Ancient Rivers of Money
Fools and their Money Metaphors
Time and Money: Separated at Birth?
The Eight Metaphors of Organization
The Lords of Strategy by Walter Kiechel
A Brief History of the Corporation: 1600 to 2100
Marketing, Innovation and the Creation of Customers
The Milo Criterion
Ubiquity Illusions and the Chicken-Egg Problem
The Seven Dimensions of Positioning
Coloring the Whole Egg: Fixing Integrated Marketing
How to Draw and Judge Quadrant Diagrams
The Gollum Effect
Peak Attention and the Colonization of Subcultures
Acting Dead, Trading Up and Leaving the Middle Class
Can Hydras Eat Unknown-Unknowns for Lunch?
The Return of the Barbarian
Glossary

Part 0:
Legibility

This is an edited collection of the first five years of ribbonfarm (2007-2012), retroactively
named the Rust Age.
The Rust Age also generated a book, Tempo, and two ebooks: The Gervais Principle and
Be Slightly Evil.

A Big Little Idea Called Legibility


July 26, 2010
James C. Scott's fascinating and seminal book, Seeing Like a State:
How Certain Schemes to Improve the Human Condition Have Failed,
examines how, across dozens of domains, ranging from agriculture and
forestry, to urban planning and census-taking, a very predictable failure
pattern keeps recurring. The pictures below, from the book (used with
permission from the author) graphically and literally illustrate the central
concept in this failure pattern, an idea called legibility.

States and large organizations exhibit this pattern of behavior most
dramatically, but individuals frequently exhibit it in their private lives as
well.
Along with books like Gareth Morgan's Images of Organization,
Lakoff and Johnson's Metaphors We Live By, William Whyte's The
Organization Man and Keith Johnstone's Impro, this book is one of the
anchor texts for this blog. If I ever teach a course on Ribbonfarmesque
Thinking, all these books would be required reading. Continuing my
series on complex and dense books that I cite often, but that are too
difficult to review or summarize, here is a quick introduction to the main idea.
The Authoritarian High-Modernist Recipe for Failure
Scott calls the thinking style behind the failure mode "authoritarian
high modernism," but as we'll see, the failure mode is not limited to the
brief intellectual reign of high modernism (roughly, the first half of the
twentieth century).
Here is the recipe:

Look at a complex and confusing reality, such as the social
dynamics of an old city
Fail to understand all the subtleties of how the complex reality
works
Attribute that failure to the irrationality of what you are looking at,
rather than your own limitations
Come up with an idealized blank-slate vision of what that reality
ought to look like
Argue that the relative simplicity and platonic orderliness of the
vision represents rationality
Use authoritarian power to impose that vision, by demolishing the
old reality if necessary
Watch your rational Utopia fail horribly

The big mistake in this pattern of failure is projecting your subjective
lack of comprehension onto the object you are looking at, as
irrationality. We make this mistake because we are tempted by a desire
for legibility.

Legibility and Control


Central to Scott's thesis is the idea of legibility. He explains how he
stumbled across the idea while researching efforts by nation states to settle
or sedentarize nomads, pastoralists, gypsies and other peoples living
non-mainstream lives:
The more I examined these efforts at sedentarization,
the more I came to see them as a state's attempt to make a
society legible, to arrange the population in ways that
simplified the classic state functions of taxation,
conscription, and prevention of rebellion. Having begun to
think in these terms, I began to see legibility as a central
problem in statecraft. The pre-modern state was, in many
crucial respects, particularly blind; it knew precious little
about its subjects, their wealth, their landholdings and
yields, their location, their very identity. It lacked anything
like a detailed map of its terrain and its people.
The book is about the 2-3 century long process by which modern
states reorganized the societies they governed, to make them more legible
to the apparatus of governance. The state is not actually interested in the
rich functional structure and complex behavior of the very organic entities
that it governs (and indeed, is part of, rather than above). It merely
views them as resources that must be organized in order to yield optimal
returns according to a centralized, narrow, and strictly utilitarian logic. The
attempt to maximize returns need not arise from the grasping greed of a
predatory state. In fact, the dynamic is most often driven by a genuine
desire to improve the lot of the people, on the part of governments with a
popular, left-of-center mandate. Hence the subtitle (don't jump to the
conclusion that this is a simplistic anti-big-government
conservative/libertarian view though; this failure mode is ideology-neutral,
since it arises from a flawed pattern of reasoning rather than values).
The book begins with an early example, scientific forestry
(illustrated in the picture above). The early modern state, Germany in this
case, was only interested in maximizing tax revenues from forestry. This
meant that the acreage, yield and market value of a forest had to be
measured, and only these obviously relevant variables were comprehended
by the statist mental model. Traditional wild and unruly forests were
literally illegible to the state surveyors eyes, and this gave birth to
scientific forestry: the gradual transformation of forests with a rich
diversity of species growing wildly and randomly into orderly stands of
the highest-yielding varieties. The resulting catastrophes (better
recognized these days as the problems of monoculture) were inevitable.
The picture is not an exception, and the word legibility is not a
metaphor; the actual visual/textual sense of the word (as in readability)
is what is meant. The book is full of thought-provoking pictures like this:
farmland neatly divided up into squares versus farmland that is confusing
to the eye, but conforms to the constraints of local topography, soil quality,
and hydrological patterns; rational and unlivable grid-cities like Brasilia,
versus chaotic and alive cities like Sao Paulo. This might explain, by the
way, why I resonated so strongly with the book. The name ribbonfarm
is inspired by the history of the geography of Detroit and its roots in
ribbon farms (see my About page and the historic picture of Detroit
ribbon farms below).

High-modernist (think Bauhaus and Le Corbusier) aesthetics
necessarily lead to simplification, since a reality that serves many purposes
presents itself as illegible to a vision informed by a singular purpose. Any
elements that are non-functional with respect to the singular purpose tend
to confuse, and are therefore eliminated during the attempt to
rationalize. The deep failure in thinking lies in the mistaken assumption
that thriving, successful and functional realities must necessarily be
legible. Or at least more legible to the all-seeing statist eye in the sky
(many of the pictures in the book are literally aerial views) than to the
local, embedded, eye on the ground.
Complex realities turn this logic on its head; it is easier to
comprehend the whole by walking among the trees, absorbing the gestalt,
and becoming a holographic/fractal part of the forest, than by hovering
above it.
This imposed simplification, in service of legibility to the states eye,
makes the rich reality brittle, and failure follows. The imagined
improvements are not realized. The metaphors of killing the golden goose,
and the Procrustean bed come to mind.
The Psychology of Legibility
I suspect that what tempts us into this failure is that legibility quells
the anxieties evoked by apparent chaos. There is more than mere stupidity
at work.
In Mind Wide Open, Steven Johnson's entertaining story of his
experiences subjecting himself to all sorts of medical scanning
technologies, he describes his experience with getting an fMRI scan.
Johnson tells the researcher that perhaps they should start by examining
his brain's baseline reaction to meaningless stimuli. He naively suggests a
white-noise pattern as the right starter image. The researcher patiently
informs him that subjects' brains tend to go crazy when a white noise
(high Shannon entropy) pattern is presented. The brain goes nuts trying to
find order in the chaos. Instead, the researcher says, they usually start with
something like a black-and-white checkerboard pattern.

If my conjecture is correct, then the High Modernist failure-through-legibility-seeking formula is a large-scale effect of the rationalization of
the fear of (apparent) chaos.
[Techie aside: Complex realities look like Shannon white noise, but in
terms of deeper structure, their Kolmogorov-Chaitin complexity is low
relative to their Shannon entropy; they are like pseudo-random numbers,
rather than real random numbers; I wrote a two-part series on this
long ago, that I meant to continue, but never did].
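(To make the aside concrete, here is a minimal Python sketch of the distinction; the generator, seed, and sequence length below are illustrative choices, not anything from the original two-part series. The point is just that a sequence with near-maximal empirical Shannon entropy can still have a tiny generating description, which is the Kolmogorov-Chaitin side of the gap.)

    import math
    import random
    from collections import Counter

    def entropy_bits_per_byte(data):
        # Empirical Shannon entropy of the byte distribution, in bits per byte.
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # A long, complex-looking sequence produced from a tiny description:
    # a standard PRNG plus a one-integer seed. Its descriptive
    # (Kolmogorov-style) complexity is bounded by these few lines,
    # not by the length of the output.
    rng = random.Random(42)
    data = bytes(rng.randrange(256) for _ in range(100_000))

    print(len(data), "bytes")
    print(round(entropy_bits_per_byte(data), 3),
          "bits/byte of empirical Shannon entropy (max is 8)")

Run as-is, this reports an entropy very close to the 8-bits-per-byte maximum, even though the whole sequence can be regenerated from a few lines of code and a single seed: high apparent randomness, low descriptive complexity.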
The Fertility of the Idea
The idea may seem simple (though it is surprisingly hard to find
words to express it succinctly), but it is an extraordinarily fertile one, and
helps explain all sorts of things. One of my favorite unexpected examples
from the book is the rationalization of people names in the Philippines
under Spanish rule (I won't spoil it for you; read the book). In general, any
aspect of a complex folkway, in the sense of David Hackett Fischer's
Albion's Seed, can be made a victim of the high-modernist authoritarian
failure formula.
The process doesn't always lead to unmitigated disaster. In some of
the more redeeming examples, there is merely a shift in a balance of
power between more global and more local interests. For example, we
owe to this high-modernist formula the creation of a systematic, global
scheme for measuring time, with sensible time zones. The bewilderingly
illegible geography of time in the 18th century, while it served a lot of
local purposes very well (and much better than even the best atomic clocks
of today), would have made modern global infrastructure, ranging from
the railroads (the original driver for temporal discipline in the United
States) to airlines and the Internet, impossible. The Napoleonic era saw the
spread of the metric system; again an idea that is highly rational from a
centralized bird's-eye view, but often stupid with respect to the subtle local
adaptations of the systems it displaced. Again this displaced a good deal of
local power and value, and created many injustices and local
irrationalities, but the shift brought with it the benefits of improved
communication and wide-area commerce.
In all these cases, you could argue that the formula merely replaced a
set of locally optimal modes of social organization with a globally optimal
one. But that would be missing the point. The reason the formula is
generally dangerous, and a formula for failure, is that it does not operate
by a thoughtful consideration of local/global tradeoffs, but through the
imposition of a singular view as best for all in a pseudo-scientific sense.
The high-modernist reformer does not acknowledge (and often genuinely
does not understand) that he/she is engineering a shift in optima and
power, with costs as well as benefits. Instead, the process is driven by a
naive "best for everybody" paternalism that genuinely intends to improve
the lives of the people it affects. The high-modernist reformer is driven by
a naive-scientific Utopian vision that does not tolerate dissent, because it
believes it is dealing in scientific truths.
The failure pattern is perhaps most evident in urban planning, a
domain which seems to attract the worst of these reformers. A generation
of planners, inspired by the crazed visions of Le Corbusier, created
unlivable urban infrastructure around the world, from Brasilia to
Chandigarh. These cities end up with deserted empty centers populated
only by the government workers forced to live there in misery (there is
even a condition known as Brasilitis apparently), with slums and shanty
towns emerging on the periphery of the planned center; ad hoc, bottom-up,
re-humanizing damage control as it were. The book summarizes a very
elegant critique of this approach to urban planning, and the true richness
of what it displaces, due to Jane Jacobs.

Applying the Idea


Going beyond the book's own examples, the ideas shed a whole new
light on other stories/ideas. Two examples from my own reading should
suffice.
The first is a book I read several years back, by Nicholas Dirks,
Castes of Mind: Colonialism and the Making of Modern India, which
made the argument (originally proposed by the orientalist Bernard Cohn),
that caste, in the sense of the highly rigid and oppressive 4-varna scheme,
was the result of the British failing to understand a complex social reality,
and imposing on it their own simplistic understanding of it (the British Raj
is sometimes called the anthropological state due to the obsessive care it
took to document, codify and re-impose as a simplified, rigidified,
Procrustean prescription, the social structure of pre-colonial India). The
argument of the book, obviously one that appeals to Indians (we like to
blame the British or Islam when we can), is that the original reality was
a complex, functional social scheme, which the British turned into a rigid
and oppressive machine by attempting to make it legible and governable.
While I still don't know whether the argument is justified, and whether the
caste system before the British was as benevolent as the most ardent
champions of this view make it out to be, the point here is that if it is true,
Scotts failure model would describe it perfectly.
The second example is Gibbon's Decline and Fall of the Roman
Empire, which I am slowly reading right now (I think it is going to be my
personal Mount Everest; I expect to summit in 2013). Perhaps no other
civilization, either in antiquity or today, was so fond of legible and
governable social realities. I haven't yet made up my mind, but reading
the history through the lens of Scott's ideas, I think there is a strong case to
be made that the fall of the Roman empire was a large-scale instance of
the legibility-failure pattern. Like the British 1700 years later, the Romans
did try to understand the illegible societies they encountered, but their
failure in this effort ultimately led to the fall of the empire.
Aside: if you decide to attempt Mount Everest along with me, take
some time to explore the different editions of Gibbon available; I am
reading a $0.99 19th century edition on my Kindle, all six volumes with
annotations and comments from a decidedly pious and critical
Christian editor. Sometimes I don't know why I commit these acts of
large-scale intellectual masochism. The link is to a modern, abridged
Penguin edition.
Is the Model Relevant Today?
The phrase "high-modernist authoritarianism" might suggest that the
views in this book only apply to those laughably optimistic, high-on-science-and-engineering high modernists of the 1930s. Surely we don't
fail in these dumb ways in our enlightened postmodern times?
Sadly, we do, for four reasons:
1. There is a decades-long time lag between the intellectual high-water mark of an ideology and the last of its effects
2. There are large parts of the world, China in particular, where
authoritarian high-modernism gets a visa, but postmodernism does
not
3. Perhaps most important: though this failure mode is easiest to
describe in terms of high-modernist ideology, it is actually a basic
failure mode for human thought that is time and ideology neutral.
If it is true that the Romans and British managed to fail in these
ways, so can the most postmodern Obama types. The language will
be different, thats all.
4. And no, the currently popular "pave the cowpaths" and behavioral-economic "choice architecture" design philosophies do not provide
immunity against these failure modes. In fact paving the cowpaths
in naive ways is an instance of this failure mode (the way to avoid
it would be to choose to not pave certain cowpaths). Choice
architecture (described as Libertarian Paternalism by its
advocates) seems to merely dress up authoritarian high-modernism
with a thin coat of caution and empirical experimentation. The
basic and dangerous "I am more scientific/rational than thou"
paternalism is still the central dogma.

[Another Techie aside: For the technologists among you, a quick (and
very crude) calibration point should help: we are talking about the big
brother of waterfall planning here. The psychology is very similar to the
urge to throw legacy software away. In fact Joel Spolsky's post on the
subject Things You Should Never Do, Part I, reads like a narrower version
of Scott's arguments. But Scott's model is much deeper, more robust, more
subtly argued, and more broadly applicable. I haven't yet thought it
through, but I don't think lean/agile software development can actually
mitigate this failure mode any more than choice architecture can mitigate
it in public policy]
So do yourself a favor and read the book, even if it takes you months
to get through. You will elevate your thinking about big questions.
High-Modernist Authoritarianism in Corporate and Personal Life
The application of these ideas in the personal/corporate domains
actually interests me the most. Though Scott's book is set within the
context of public policy and governance, you can find exactly the same
pattern in individual and corporate behavior. Individuals lacking the
capacity for rich introspection apply dumb 12-step formulas to their lives
and fail. Corporations: well, read the Gervais Principle series and Images
of Organization. As a point of historical interest, Scott notes that the
Soviet planning model, responsible for many spectacular legibility failures, was derived from corporate Taylorist precedents, which Lenin
initially criticized, but later modified and embraced.
Final postscript: these ideas have strongly influenced my book
project, and apparently, I've been thinking about them for a long time
without realizing it. A very early post on this blog (I think only a handful
of you were around when I posted it), on the Harry Potter series and its
relation to my own work in robotics, contains some of these ideas. If I'd
read this book before, that post would have been much better.

Part 1:
The Art of Refactored Perception

The Art of Refactored Perception


May 31, 2012
When I made up the tagline, "experiments in refactored perception,"
back in 2007, I had no idea how deeply that line would come to define the
essence of ribbonfarm. So in this first post in my planned month-long
retrospective on five years in the game, I decided to look back on the
evolution and gradual deepening of the idea of refactoring perceptions.
I've never attempted an overt characterization of what the phrase
means, but over the years, I've explored it fairly systematically. This
sequence of posts should help you appreciate what I mean by the phrase.
Ive arranged the sequence as a set of fairly natural stages:
Perceiving
1. The Parrot
2. Amy Lin and the Ancient Eye
3. The Scientific Sensibility
Preparing to Think
1. Diamonds versus Gold
2. How to Define Concepts
3. Concepts and Prototypes
4. How to Name Things
Thinking
1. How to Think Like Hercule Poirot
2. Boundary Condition Thinking
3. Learning from One Data Point
4. Lawyer Mind, Judge Mind
Writing
1. Just Add Water
2. The Rhetoric of the Hyperlink
3. Seeking Density in the Gonzo Theater
4. Rediscovering Literacy

Much of the refactoring happens in the second stage.


OODA for Thinking-by-Writing
I don't know if this is an accident, a case of context-specific
rediscovery, or some unconscious channeling, but this is pretty much an
OODA loop for thinking by writing. The preparing to think stage
corresponds to the crucial orientation piece of OODA. As with OODA,
this sequence is actually a ridiculously interconnected set of thought
processes. Each of the four stages feeds back to each of the others.
Staring at this sequence, I begin to understand why I've never
seriously considered attempting to teach others this writing-to-think
model. Besides the obvious problem that I've been figuring it out myself, I
don't think this is very teachable. It's just a crap-load of practice, to drive
certain patterns deep into mental muscle memory.
I suppose some of these pieces are amenable to translation into how-to presentations, but I suspect the market for this comprises exactly 3.5
starving bloggers with perverse instincts. This is practically black-belt
level training in How not to Make Money Blogging. I suspect I manage to
survive financially despite this model, not because of it.
But if somebody wants to PowerPoint-ize this material into a teaching
resource, you have my blessing. I only ask that you make the thing
publicly available.
I will probably slap a preface onto this sequence and Kindle-ize it into
a cheap e-book when I get some time.
The Retrospective Process
For those of you interested in how I am doing this retrospective,
here's the brief description of the process so far. I started with a first-cut
selection. Of over 300 posts, just around 70 made the first cut. I cut out
everything that was badly written, off-voice or not part of a broader
exploration theme. I also cut out stuff that I revisited in more solid ways
later. I did not consider popularity at all, but most of the popular posts
made the cut.
So it was definitely a very personal and autocratic selection.
The yield rate was depressingly low, at less than 25%. But the good
news is that it has been steadily increasing. As you will see from this and
upcoming posts, the lists are dominated by later posts. In the first couple
of years, I wrote an awful lot of posts I would now consider terrible.
After the selection, I sorted the set into 5-6 clusters, and forced myself
to completely uncouple the clusters (i.e., each post can belong in only one
cluster). I then sequenced each in some meaningful way. I will be doing
one post on each sequence.
It was surprisingly (and depressingly) easy to do the pruning. I
expected to spend many agonizing hours figuring out what to include and
what to exclude, but it took me about 15 minutes to do the cutting, and
another 15 minutes to do a first, basic sorting/clustering. The hardest part
is developing a narrative arc through the material to sequence each cluster.
On Voice
Yesterday, I posted a beta aphorism on Facebook that many people
seemed to like: "integrity is an aesthetic, not a value."
A blogging voice is not just an expression of a coherent aesthetic
perspective; it is also an expression of a certain moral stance. Developing
a high-integrity blogging voice is about learning to recognize, in a moral
sense, on-voice/off-voice drafts and developing the discipline to
systematically say no to off-voice material, no matter how tempting it is to
post it, based on expedient considerations like topicality or virality. As
your filters develop, you write fewer off-voice drafts to begin with.
Eventually, you don't even think off-voice.
One of the hardest challenges for me in selecting posts for this month
of retrospectives was posts that were partly on-voice and partly off-voice.

I erred on the side of integrity and dropped most such posts, except for a
few that were logically indispensable in some sequence.
Learning to recognize off-voice stuff (especially while your voice is
still developing) is more like learning to be a tea taster than studying to be
a priest at a seminary.
Though I suppose, practiced at sophisticated levels, "what would Jesus
do?" is an integrity aesthetic rather than a 0/1 litmus test. Few religious
types seem to transcend the bumper-sticker value though.

The Parrot
August 13, 2007
This piece was written in Ithaca, in 2005, and is as accurate a
phenomenological report of an actual mental response to real events as I
am capable of. At the time I thought, and still do, that a very careful
observation of your own thoughts as you react to sensory input is a very
useful thing. Not quite meditation. Call it meditative observation.
Stylistically, it is inspired by Camus.
-1-
From my window table on the second floor of the coffee shop,
looking down at the Commons (the determinedly medieval, pedestrians-only town square of Ithaca) I saw the parrot arrive. It was large and a
slightly dirty white. Its owner carefully set a chair on top of a table and the
parrot hopped from his finger onto the back of the chair and perched there
comfortably. I suppose the owner wanted to keep it out of the reach of any
dogs. He gave it a quick second glance, and stepped inside a restaurant.
The parrot ruffled its feathers a bit, looked around, preened a little
(showing off some unexpected pink plumage on the back of its neck,
hidden in the dirty white), and then settled down.
-2-
The Ithaca Commons is a ring of shops and restaurants around an
open courtyard, occupying the city block between Green and Seneca
streets. The shops are an artfully arranged sequence of mildly unexpected
experiences. Tacky used clothing and dollar stores sit next to upscale
kitchen stores, craft shops, art galleries and expensive restaurants. The
central promise of the Commons is that of the Spectacle. Street musicians,
hippies meditatively kicking hackeysacks, the occasional juggler: they all
make their appearance in the Commons. A visibly political Tibetan store
and experiential restaurants such as the Moosewood and Just a Taste
complete the tableau. The Commons is crafted for the American liberal, a
cocoon that gently reinforces her self-image as a more evolved, aware, and

thoughtful creature than her parochial suburban, beer-guzzling, football-fan cousin.


But in any world, the presence of a large, dirty-white parrot is a
definite non sequitur. Wall Street, Hollywood, sell-out Suburbia (and
Exurbia), Southern Baptist congregations and the liberal Ithaca Commons,
are all equally at a loss to accommodate the parrot. The grab-bag of varied
oppressed Others that mill about University towns, I suspect, would also
be at a loss to handle the parrot. Those of us who claim to be governed by
eclectic, deeply considered and original world views (and I count myself
among these) are also forced to admit that for all our treasured
iconoclasm, we cannot accommodate the parrot. We are therefore forced,
out of sheer necessity, to look at it.
-3-
I am no deep observer of real life. When I work in public areas, it is
for the steady supply of low-intensity human contact. The mass of
unremarkable humanity does not register, except as a pleasant backdrop.
Pretty girls, babies, dogs and notably ugly people do register, leaving a
gentle and piquant trail of unexamined visual flavor. I am not a true people
watcher.
I didn't quite know what to do with a parrot though, so I was forced to
look at it. It triggered no runaway train of thought, so for a while it was
just me and the parrot, separated by a pane of glass, and about fifty yards.
The impression of "parrot" did not fade, get filtered away or get
overwhelmed by free association. It lingered long enough that I began to
watch. The parrot seemed happy. It sat there, awake, but not alert or wary.
It looked straight ahead. Presumably it did not find the scene interesting
enough to strain its neck.
-4-
I wonder how Hegel would have reacted to the parrot. Would it have
triggered, through some improbable sequence of dominoes, a fresh insight
concerning the Self and the Other? Would he have gazed inattentively at
the parrot and chased gleefully after some new thought (bird... freedom...)?
Would it be just another little nudge powering the inexorable
progress of his snowballing philosophy of everything? Would it occur to
him that whatever lofty abstractions it triggered, the parrot qua parrot
would not make an appearance in the edifice he was building? Sadly, I
must suspect that the thought would not have occurred to him.
To be fair, I must also suspect that the existentialists would have done
no better, despite their protestations to the contrary. I must conclude that
Camus would have looked at the parrot and instantly exulted, "There it is,
the Absurd manifest!" The parrot would again have been lost, subsumed
here by the Absurd. As far as the parrot is concerned, Camus and Hegel
differ little.
-5-
The parrot, without its owner, was sitting there, qua parrot, indifferent
to its impact on passersby. Most people looked at it. Some did a double
take. One man stopped, turned to face it squarely and stared at it for a
minute, as if waiting for it to acquire some significance. A decrepit old
man in a wheelchair rolled by, glancing at it with a painfully slow motion,
before letting his head sink again to his chest, weighed down, I suppose,
by illness and unseen burdens. A black mother, pushing a stroller, walked
by, glancing at the parrot without interest. I wonder why "black" registered.
A pretty girl in faded red pants stepped out of a shop, talking on a cell
phone. She took in the parrot absent-mindedly, absorbed several
network hops away. She exited my field of vision, stage right, but returned
a few minutes later. This time she stopped and genuinely stared at the
parrot before heading back into a shop.
A hippie, dread-locked and tie-dyed, stopped and grinned delightedly
at it. There was no discernible transition from "see" to "grin," and
something about that bothered me. There was something scripted about the
response; her engagement of the parrot was not authentic.

-6-
You know you are a slave to the life of the mind if a phrase like
"her engagement of the parrot was not authentic" crosses your mind quite
naturally, and it takes you more than a minute to laugh.
But consider what it means if your response to the parrot is measured,
seemingly scripted, or otherwise deliberate in any way. A mind with
"parrot" on it should not look like anything recognizable. A frown might
mean you are trying to rapidly assimilate the parrot, but in that case, the
process of assimilation, rather than the parrot itself, must be occupying
your mind. You cannot, at the same time, think "parrot" and engage in the
task of wrapping up the parrot in a bundle of associations and channeling
it to the right areas of long-term memory. The hippie's grin is equally
symptomatic of a non-parrot awareness. The hippie is probably self-indulgently enjoying a validated feeling of "one must be one with nature"
or something along those lines.
So an authentic engagement of the parrot must have an element of the
unscripted in it. It can neither be deliberative, nor reactive. Furious and
active thinking will not do. Nor the "Awww!" you might direct at a puppy.
A puppy is a punch you can roll with.
-7-
Two moms with three babies wandered onto the scene. It being a nice
day, the babies were visible, one squirming in the arms of its mother and
the others poking their snouts out of the stroller. The mom carrying the
baby stopped immediately upon spotting the parrot and approached it (she
was the first to do so). As is the wont of moms, she immediately began
trying to direct her infant's attention to the parrot, shoving its face within a
foot of the parrot. Mothers are too engaged in scripting the experiences of
their babies to experience anything other than the baby themselves. The
parrot obliged with a display of orange (I suspect it was stretching,
disturbed from its contemplative reverie). The baby, however, seemed
entirely uninterested in the parrot. Perhaps the parrot was unclear to its
myopic eyes, or perhaps it was simply no more worthy of note than any of
the other exciting blobs of visual experience all around. At any rate, the mom
stopped trying after a few moments, and the five of them rolled on.
The pretty girl in faded red pants was back. This time, she had two
waitress friends along, and took a picture of the parrot with her cell phone.
The three girls (the other two were rather dumpy looking, but I suppose it
was the aprons) chattered for a bit and then stared at the parrot some more.
Two more pretty girls walked past, and though the parrot clearly
registered, walked past without a perceptible turning of their heads.
Something about that worried me. They were of the indistinguishable
dressed-in-season species of young college girl that swarm all over
American university towns. These could have been either Ithaca College
or Cornell; I can't tell them apart. Two more of the breed walked by, again
with the same non-reaction.
A black-guy-white-girl couple walked by. The girl turned to look at
the bird as they walked past, while the guy looked at it very briefly.
Shortly after, an absorbed black teenager walked by. She looked at it as
she walked past, with no change in her expression. The parrot was clearly
on Track Two. Track One continued thinking about whatever it was she
was thinking about. I suppose parrot might have consciously registered
with her a few minutes later, but she did not walk by again. Something
about black responses to the parrot was sticking in my mind. The owner
came back out of the store, carrying a cup of coffee.
-8-
Now, a parrot is not an arresting sort of bird. It does not have the
ostentation of the peacock, the imposing presence of the ostrich or the
latent lethality of a falcon or hawk. Even in context, at a zoo, a typical
white parrot is not remarkable in the company of its more gaudy relatives.
Any of these more dramatic creatures would, I suppose, instantly draw a
big gawking crowd, perhaps even calls to the police. Undivided attention,
active curiosity and action would certainly be merited (try to feed him
some of your bagel).
The parrot though, had neither the domesticated presence of a dog,
nor the demanding presence of a truly unexpected creature. A dog elicits
smiles, pats or studied avoidance, while an ostrich would certainly call for
a cascade of conversation into activity, culminating in the arrival of a
legitimate authority (though, I suppose, most communities would be hard
pressed to generate a legitimate response to an ostrich. Cornell though, is
an agricultural university, so I suppose eventually one of the many animal
experts would arrive on the scene).
So a dog elicits a conventional ripple of cognitive activity as it
progresses through the town square, soon displaced by other
preoccupations. An ostrich presumably triggers a flurry of deliberation,
followed by actual activity. So what does the parrot cause, living as it does
in the twilight zone between conventionally expected and actionably
unexpected? You cannot have the comfort of either action or practiced
thoughts, with a parrot in your field of view. Yet, the parrot is not a threat,
so you clearly cannot panic or be overwhelmed. The parrot, I think, lives in
the realm of pure contemplation. The parrot is rare in adult life. For the
child, everything is a parrot.
-9-
The return of the owner annoyed me briefly. With his return, the non
sequitur instantly became an instance of the signature of the Commons: a
spectacle. The owner was clearly used to handling his parrot. He had it
hop on his hand again and swung it up and down. The parrot spread its
wings and did various interesting things with its feathers which I do not
have the vocabulary to describe. With the owner, the context of a small
bubble-zoo had arrived. The owner chatted with the girl in faded red pants,
who had come out again. Fewer adults stared. The ensemble was now
clearly within the realm of the expected. Most people walked on without a
glance, while some, emboldened by the new legitimacy of the situation,
stopped and watched with interest. The owner tired of active display and
set the parrot back on its perch, and turned his attention to the girl.
For a minute, I was sorry, but then a girl, about six years old, walked
by with her mother. It was a classic little girl, in orange pants and ice
cream cone. She stopped and stared at the bird very carefully. It was not a
curious probing look, or the purposeful look that kids sometimes get when
they are looking about for a way to play with a new object. This little girl
did not look like she would be going home and looking up parrots on the
Discovery channel website. She did not look like she was gathering up
courage to pet it or imagining it in the role of a chase-able dog or cat. She
was just looking at it. Clearly her powers of abstraction had yet to mature
to the point where she could see the bubble circus.
A pair of middle-aged women stopped by the parrot. After an initial
look at the parrot, they turned and started chatting with the owner. I expect
the conversation began, "Does he talk?" or "Doesn't he fly away?" Shortly
after, I saw them wander off a little to the side, where there was a fountain.
One woman took a picture of the other, standing next to the fountain, with
a disposable camera. Local resident showing visiting Cousin Amy the
town, I guessed. All is legitimate on a vacation, including a parrot.
-10-
I don't think children are necessarily curious when presented with a
new experience. The little girl presented a clearer display of authentic
engagement of the parrot than all the adults. It was what I have been
describing all along as a "stare." But "stare" doesn't quite cover it. "Stare"
does not have the implicit cognitive content of the hippie's grin. Happy,
bemused, smiling, frowning, eager curiosity: these are visible
manifestations of minds occupied by the workings of deliberative or
reactive responses to the parrot. "Parrot" flits too quickly across the face to be
noticed, and is replaced by more normal cognitions.
So, here is a question: what is the expression on the face of a person
who has authentically engaged a parrot? I must propose, in all seriousness,
the ridiculous answer: it looks like the face of a person who has seen a
parrot.
-11-
The people talking to the owner had left. He now sat reading a book,
while the parrot ate seeds of some sort off the table. Three teenage
skateboarders wandered to a spot about a dozen yards away. One of them
nudged the others and pointed to the parrot. They looked at it in
appreciation. It wasn't quite clear what they were appreciating, but they
clearly approved of the parrot. That made me happy.

Now, a large brood of little black children came by, herded by two
young women who might have been nannies, I suppose. The black kids all
stopped and stared intently at the parrot. The nannies chatted with the
owner, who looked on approvingly at the children while he talked. The
conversation looked left-brained from fifty feet away. Some tentative
petting ensued. As the nannies led the children away, after allowing them a
decent amount of time to engage the parrot, one little boy had to be
dragged away; he managed to turn his head full circle, Exorcist style, to
look at the bird.
Now, five young black men, perhaps eighteen to twenty, walked by.
Theirs was clearly a presence to rival that of the parrot-owner duo as a
spectacle. Their carefully layered oversized sports clothes and reversed
baseball hats demanded attention. I suppose spectacles, be they man-parrots or a group of swaggering young black men, do not supply
attention, but demand it. But you cannot really compete with a parrot. The
parrot is entirely unaware that it is competing. The black group almost
rolled past, but suddenly one of them stopped and turned around to look at
the parrot. He looked like he'd suddenly reconsidered the studied
indifference that I suppose was his response to competing spectacles. A
visible recalibration of response played across his face, and suddenly, he
was authentically engaging the parrot in a demanding, direct way. The
others stopped and looked too. The first man then pulled out his cell phone,
still staring at the parrot, and took a picture. He then briefly interrogated
the owner about the parrot, and the group rolled on.
-12-
I wonder now, why are black responses to the parrot more noteworthy
than generic white responses? And while I mull that, why have the
responses of one other group, pretty young girls, stuck in my mind
(besides the fact that I notice them more)?
Now, for an authentic engagement of the parrot, there must be parrot
on your mind. Your face must look like the face of a person who has seen
a parrot. This is not an ambiguous face, or a face marked visibly by the
presence of other thoughts or a subtext. A parrot-mind may wrestle briefly
with cell phone mind or preoccupied-with-race-and-oppression mind, but
the outcome is all or nothing. There is no useful way a constantly active
subtext of race can inform your engagement of a parrot.
I suppose I was looking for evidence that there is room in the black
mind for at least a small period of unambiguous engagement with the
parrot. If your preoccupation with race and injustice occupies you so
completely that even the parrot cannot dislodge it, then it must be a sad
life. In a very real sense, your mind is not free, and therefore neither are
you, if there is not even temporary room for the parrot. The parrot can
only occupy a free mind. To my list of profundities, I will add the
following: a free mind is one which the parrot can occupy easily, and stay
in as long as it chooses.
Now, the little black children engaged the parrot as completely as the
little white girl. So if the little kids are born free and demonstrably remain
free until at least age six, as demonstrated by the parrot, why and when do
they choose to give away their freedom to a pre-occupation with the
subtext of race, which makes those happy six-year-old faces sad? Or is it
that the mist of preoccupation descends on them, whether they want it or
not?
-13-
I suppose enough actual watching eventually teaches you to observe
better. It suddenly occurred to me that the neck-language of parrot-engagement said a lot.
The clearest response is the snap, or double-take. It signals
computation. A slight glance on the other hand, no different from the
casual scanning of everyday scenery, with no special attention, must mean
filtering. I refuse to believe that everybody has a nontrivial scripted
response to parrot, so it must mean that the scripted response simply treats
the parrot as noise to be filtered. In the casual glance, there is no parrot on
the mind.
Now, a more complex response, one signaled by a snap, is one where
there is a perceptible pause or break in stride, followed by a turning away.
That is a response that is looking for an explanation. The sort of response

that might be hooked by a lone parrot, but would ignore the contextually
appropriate owned parrot. Most of the time, when we look for an
explanation, we can only see an explanation. Sometimes, when the mind
hiccups on the path to the explanation, we see the parrot.
Viktor Frankl said, "between stimulus and response there is a space.
In that space is our power to choose our response. In our response lies our
growth and our freedom." Self-improvement gurus like to use that quote to
preach, but to me, it seems that this space is primarily interesting because
the parrot can live there for a bit, so your mind can be parrot for a bit.
You might hesitate and never visit that space. You might react so fast
you leave the space before it registers on your awareness. Or you might
dwell there awhile.

Amy Lin and the Ancient Eye


March 23, 2010
Last weekend, I went to see Amy Lin's new show, Kinetics, at the
Addison-Ripley gallery in DC (the show runs till April 24; go). Since I
last wrote about her [May 5, 2008: Art for Thought], she has started
exploring patterns that go beyond her trademark dots. Swirls, lines and
other patterns are starting to appear. Amy's art represents the death of
both art and science as simple-minded categories, and the rediscovery of a
much older way of seeing the world, which I'll call the Ancient Eye. Yes,
she nominally functions in the social skin of a modern artist, and is also
a chemical engineer by day, but really, her art represents a way of seeing
the world that is more basic than either artistic or scientific ways of
seeing. Take this piece from the Kinetics collection for instance, my
favorite, titled Cellular.

Is it inspired by diffraction patterns? (image from gatan.com, this one
is an electron diffraction pattern from a Ti2Nb10O29 crystal recorded by a
CCD camera)

Or is it pure art in some sense? The question is actually deeply silly
(though she seems to have been asked it multiple times), since it assumes
that a superficial and recent social divide between art and science is a deep
feature of the universe.
The Ancient Eye is a precursor to both the scientific type of
imagination that invented diffraction patterns, and a specific kind of
artistic eye that can see this way without having ever encountered the idea
of diffraction. Possibly it emerges from the very structure of our minds (I
once watched a documentary about a math savant who could instantly tell
if a number was prime; he apparently saw numbers in his head as a sort
of landscape, within which primes appeared in some special way).
It is tempting to call this the Renaissance Eye (and Amy a
Renaissance Woman), but that would be a bad mistake, since the
Renaissance is what nearly killed it, by introducing the great art/science
schism. Da Vinci was the last possessor of the Ancient Eye before its
recent rediscovery, not the first possessor of the Renaissance Eye(s). If
you look in the period before Da Vinci, in the so-called Dark Ages,
you'll see a lot more of this way of seeing than after. It strikes me that we
admire Da Vinci for the wrong reasons, for being what seems in our time
to be a multi-talented mind. No; the divisions that blind us didn't exist
in his age. He was just a seer, and what he saw is more impressive than the
fact that his seeing spanned a multiplicity of our 20th century
categories.
C. P. Snow and the Two Cultures
It is sad that writers like C. P. Snow (of The Two Cultures fame) in
the last century ended up widening and institutionalizing the chasm
between the humanities and the sciences while attempting to bridge it. To
be fair to them though, the humanists started it, by attempting to take
scientists, mathematicians and engineers down a social peg or two. C. P.
Snow quotes number theorist G. H. Hardy: "Have you noticed how the
word 'intellectual' is used nowadays? There seems to be a new definition
which certainly doesn't include Rutherford or Eddington or Dirac or
Adrian or me? It does seem rather odd, don't y'know."
The natural anxieties and suspicions of humanist literary intellectuals
are old and deep-rooted (Coleridge: "the souls of 500 Newtons would go
to the making up of a Shakespeare or Milton"), and cannot be wished
away by lecturing (see my post The Bloody-Minded Pleasures of
Engineering [September 1, 2008]). Humanism is a retreat to a secularized
notion of humans being spiritually special, as a way of combating a
sense of insignificance within our huge, mysterious universe. But perhaps
the way to bridge the gap and bring humanists back to this universe, that
we share with other atom-sets, is to show that the eye of science and the
eye of art are both descended from the Ancient Eye.
Before the Great Divide
Okay, C. P. Snow is yesterday's news; we need to dig further in the
archives to understand the Ancient Eye. The post-reformation notions of
both art and science were distortions of the Ancient Eye way of seeing
(and connecting to) everything from atoms to galaxies. Possibly what
created the disconnect was the rise of late enlightenment era Christianity
(post Martin Luther (1483-1546)) and its disdain of the profane material
plane as a sort of waiting room in front of a doorway into a spiritual plane.
Or perhaps it was a result of the thoroughly meaningless idea of scientific
objectivity that was partly the fault of Descartes (1596-1650).
Either way, the result was an anomaly that caused a great divide. Let's
pick up the story just before the Great Blinding of the Ancient Eye, with
Da Vinci. My favorite Da Vinci piece is neither the Mona Lisa, nor his
amazing engineering sketches, but his iconic image, The Vitruvian Man
(public domain), dating from 1485, or two years before the birth of Martin
Luther. Da Vinci's image is a representation of anatomical proportions and
their relation to the classical orders proposed by the Roman architect,
Vitruvius. This is the Ancient Eye pondering anatomy and seeing
architecture.

This way of seeing the human body is reminiscent of another iconic
image: the image of Shiva in the Chola Nataraja (Lord of Dance)
bronzes. The Chola bronzes, which began evolving in the 8th and 9th
centuries, and stabilized into their modern iconic form by the 12th century,
were an attempt to see a creative-destructive cosmological metaphysics in
the dynamic human form. This is the Ancient Eye seeing cosmic order in
frozen human dance. When I was a kid, an art teacher taught me the
Nataraja formula (it starts with the inscription of a hexagon inside a circle;
Shiva's navel is the center). You can create very stylized and abstract
Natarajas once you learn the basic geometry (this image is from the New
York Metropolitan museum, Creative Commons)

This Nataraja-Vitruvian Man story actually continues in interesting
ways with Marcel Duchamp (Nude Descending a Staircase) and another
local DC artist, Larry Morris. I wrote about this in The Solemn Whimsies
of Larry Morris [February 21, 2009]. You can think more about that rabbit
trail if you like, but let's go from the Vitruvian Man and the Nataraja
towards more abstract stuff.
Another Ancient Eye inscribed-circle image, which emerged across
the Himalayas from the Chola Nataraja, is the Yin-Yang symbol. It

represents roughly the same idea, creative-destruction. The white fish and
black fish chase each other. Their eyes contain their duals, and the seeds of
their own destruction, and the creation of the other.

I like to think that in some lost prehistoric time, the distant ancestors
of Da Vinci and the unknown creators of the Nataraja and Yin-Yang
symbols, got drunk together after a boar hunt, and talked about transience
and transformation, while pondering the fact that the death of the boar had
sustained their life.
The Ancient Eye truly comes into its own at a somewhat greater
remove from representation of reality or even metaphysical ideas like Yin-Yang. One of my pilgrimage dreams is to visit the Alhambra in Spain,
reputed to contain depictions of all the major mathematical symmetries. It
will be the atheist hajj of an unapologetic kafir. The Alhambra (14th
century) provides proof that we could see the symmetries of the universe
within ourselves, long before Galois (1811-1832) and Sophus Lie (1842-1899) gave us the mathematical language of group theory, and the ability
to see the same symmetries in electrons, muons and superstrings.

This particular story of Ancient Eye seeing evolved through classical
tessellation, to the familiar art of Escher, to the weird non-repeating
Penrose tilings of the twentieth century. Here is a picture (Creative
Commons) of Penrose standing on a Penrose-tiled floor at Texas A&M
University:

This particular story had its grand finale of profound Ancient Eye
seeing only a few years ago, when the E8 symmetry group (the last beast,
an exceptional Lie group, in a complete classification of symmetries in
mathematics) was visualized (Creative Commons):

The language invented by Galois and Lie helped launch the program
of cataloging all the universe's symmetries, a program of breathtaking
mathematical cartography that finally drew to a close with the mapping of
E8.
And in case you have a naive view of symmetry and dissonance in
how we see, and disdain such symmetries as not artistic, consider the
enormously dissonant and messy beauty of an object called the
Mandelbulb, found along the way in a holy grail search among
mathematicians for a 3D Mandelbrot set.

No, this isn't a photograph of a cave in Antarctica. This is a
Mandelbulb detail that has been titled "Hell Froze Over." And somehow
this thing must emerge from the more visually obvious symmetries of
things like the E8 group.
The Ancient Eye and the Ancient Hand
But it is perhaps in engineering that the Ancient Eye has been best preserved, waiting to be rediscovered by Amy Lin's generation of artists. Engineering is so strongly associated with human doing that we sometimes forget that it too begins with human seeing. Before there were engineering schools, there was still this Ancient Eye seeing and an Ancient Hand building. It was in the Dark Ages, not in ancient Greece or Rome, that modern engineering was born, as Joel Mokyr demonstrates in The Lever of Riches. Today, the Ancient Hand has become engineering and it creates vastly more powerful things. But the Ancient Eye and Ancient Hand still lurk in the background. I talked about this schematic of the world's largest railroad classification yard, Bailey Yard, in my recent piece, An Infrastructure Pilgrimage [March 7, 2010].

But you don't have to go to Nebraska to appreciate the workings of the Ancient Hand. Next time you're on a plane, pick up your in-flight magazine, skip to the end, and ponder the airport terminal layout drawings. It helps to turn the magazine upside down. Here's one of my favorites, Miami International Airport. Turned upside down so you aren't distracted by the words.

But perhaps it would be good to finish this retrospective with Da Vinci. I have heard no more heartwarming bridge-the-gap tale than this: Da Vinci's brainchild, the helicopter, was finally made real by Igor Sikorsky, who was funded by the composer Sergei Rachmaninoff, who supported Sikorsky's research with a $5000 check. So much for facile ideas that art is some kind of precious flower that must at once be protected from, and funded by, the rapacious endeavor of engineering. That killing machine of Vietnam and life-saving machine of emergency rescues might not exist if a rich artist had not decided to support a starving engineer. Perhaps that is why the enduring symbol of the Ancient Eye is neither the test-tube, nor the paintbrush, but the notebook. Here is Da Vinci's notebook helicopter sketch from the 15th century (Public Domain).

And here's an Igor Sikorsky sketch (Library of Congress) from his 1930 notebook. Makes you think, doesn't it?

Maybe the gap between science/engineering and art isn't the vast gulf C. P. Snow imagined it to be. Maybe it is merely the distance between the 2H pencil used in engineering drafting and the 2B pencil, the mainstay of line art. It is a gap that can easily be bridged by something as simple as a notebook. Doesn't seem that far, does it?

The Ancient Eye in the Age of TED


I feel deeply ambivalent about the current trend in information
visualization that somehow treats it as a mind-candy production discipline
designed to persuade, manipulate and titillate, to sell pretty illusions of
understanding. A great deal has been written about TED, the elitism it
represents and how it encourages a deep-rooted television-science
mentality among the best thinkers, by tempting them to pander. But it is

perhaps this, an elevation of a way of showing above a way of seeing, that


sometimes makes me uncomfortable about TED. Once more, we are
limiting the grandeur of the universal to the expediencies of the merely
human. Once again we are saying, Look at me! instead of saying Look
at that! We wont get to You ARE that! anytime soon. And yes, I am
aware of the irony of this sentiment being expressed in a very TEDesque
blog post.
Like anybody else fascinated by ways of seeing, I have my unread Edward Tufte books reverentially placed on my bookshelf, but something about the whole discipline he has spawned (and the "ideas worth spreading" ethos it has spawned in the glossy technology-entertainment-design bridge-building project that is TED) bothers me at a deep level.
Perhaps it is this. To me, the most soul-stirring direction in which to turn the Ancient Eye is towards the unknown, towards what we don't know, towards doubt. Stuff that we can't even explain to ourselves, let alone teach others or spread. Here are two such images, whose significance I don't yet fully understand, that have had me pondering a lot lately. One is a screenshot from Michael Ogawa's Code Swarm visualization of the evolution of the Eclipse software project.

And the other is another Amy Lin piece, Hydrolysis.
I have no idea whether these are Ideas Worth Spreading, but I like looking at them. It is my substitute for prayer. Blake's "Tyger," with its immortal symmetries, also helps.

The Scientific Sensibility


August 26, 2011
I don't like or use the term "scientific method." Instead, I prefer the phrase "scientific sensibility." The idea of a scientific method suggests that a certain subtle approach to engaging the world can be reduced to a codified behavior. It confuses a model of justification with a model of discovery. It attempts to locate the reliability of a certain subjective approach to discovery in a specific technique.
It is sometimes useful to cast things you discover in a certain form to
verify them, or to allow others to verify them. That is the essence of the
scientific method. This form looks like the description of a sequential
process, but is essentially an origin myth. Discovery itself is an anarchic
process. Like the philosopher Paul Feyerabend, I believe in
methodological anarchy: there is no privileged method for discovering
truths. Dreaming of snakes biting their tails by night is as valid as pursuing
a formal hypothesis-proof process by day. Reading tea leaves is valid too.
Not all forms of justification are equally valid, but that's a different thing.
But methodological anarchy does not mean (at least not to me) that there is no commonality at all to processes of discovery. The sensibility that informs reliable processes of discovery has a characteristic feature: it is unsentimental.
An unsentimental perspective is at the heart of the scientific
sensibility. But first, why sensibility?
Susan Sontag's description of a sensibility in her classic essay, "Notes on Camp," gets it exactly right:
Taste has no system and no proofs. But there is something like a logic of taste: the consistent sensibility which underlies and gives rise to a certain taste… Any sensibility which can be crammed into the mold of a system, or handled with the rough tools of proof, is no longer a sensibility at all. It has hardened into an idea… [t]o snare a sensibility in words, especially one that is alive and powerful, one must be tentative and nimble.
The scientific method is a sensibility crammed into the mold of a
system. It is an attempt to externalize something subtle and internal into
something legible and external. The only reason to do this is to scale it into
an industrial mode of knowledge production, which can be powered by
participants who actually lack the sensibility entirely. Such knowledge
production has been characteristic of the bulk of twentieth century science
(in terms of number of practitioners, not in terms of value). Hence the
Hollywood stereotype of the scientist as a methodological bureaucrat;
someone who worships at the altar of a specific method. Sadly, Hollywood
gets it right. The typical scientist is a caricature of a human.
When we objectify discovery into a legible system and a specific
method, the subjective attitude with respect to that system and method
becomes impoverished in proportion to the poverty of the system and
method itself.
So to characterize our subhuman scientist, we use words like "objective," "emotionless" and "disinterested." The first is a reductive
characterization: the unsentimental scientific sensibility can turn its gaze
onto purely subjective realities and discover riches. To limit it to
objectivity is to limit it to the narrow realm of the experimental method.
Similarly, lack of emotion turns into a virtue instead of a crippling
blindness. And finally when we say that to do science is to adopt a
disinterested stance, we institutionalize it. The scientist becomes an
impersonal judge in a courtroom of evidence, free from any conflicts of
interest. It is no wonder that when film-makers attempt to humanize
scientist characters, they have them succumb to personal motivations.
The scientific sensibility, however, is both broader and more fertile than this combination of an impoverished system and a sub-human caricature (objective, emotionless and disinterested). To look at the world with the scientific sensibility is to be more human, not less.

The word "unsentimental" is central here. To be unsentimental is to be


self-aware. To be unsentimental, you must first deal with your inner
realities at the level of sentiments rather than emotions. You do so by
creating mental room for emotions to drift out of your subconscious,
recognizing the desires that generate them and labeling the results. If you
can go beyond that and bracket the sentiments for further contemplation,
you can be unsentimental. The sentiments that accompany you on a
journey of discovery are part of the phenomenology that you must process
on that journey.
To have a perfectly unsentimental sensibility is to be free to look at
reality without expectations about what you will see.
You can be trained in the scientific method. In fact the method, in all
its impoverished glory, can actually be programmed into a computer for
certain problems. You cannot, however, achieve the scientific sensibility
through a training process or program it into a computer. At least not yet.
You cannot achieve this sensibility via a mechanical process of
identifying and neutralizing a laundry list of cognitive biases. Nor can you
get there through an effort of will or by struggling to suppress emotions.
To be unsentimental is not about suppressing your humanity, it is about
making your humanity irrelevant so you are reduced to the pure act of
seeing.
The only way to get there is by making a sacrifice: you must give up
the pleasures of a sentimental engagement with life. The unsentimental
eye, once opened, cannot be closed. The adoption of the scientific
sensibility is an irreversible step. Your experience of love, friendship and
fun will change. Expect your passions to be tragic passions. If you are
religious, expect a troubled existence. The scientific method is not
incompatible with religion, but the scientific sensibility is, because
religion presupposes a sentimental engagement with life.
There is one consolation though. The scientific sensibility makes
humor and irony your constant companions for life.

Diamonds versus Gold


July 14, 2011
I divide my writing into two kinds: gold versus diamonds. Sometimes
I knowingly palm cubic zirconia or pyrite onto you guys, but mostly I
make an honest attempt to produce diamonds or gold. On the blog, I
mainly attempt to hawk rough diamonds and gold ore. Tempo was more of
an attempt at creating a necklace: polished, artistically cut diamonds set in
purified gold.
I find the gold/diamond distinction useful in most types of creative
information work.
What do I mean here? Both are very precious materials. Both are
materials that are already precious in their natural state, as rough diamonds
or gold ore. Refinement only adds limited amounts of additional value.
Both are mostly useless, but do have some uses: gold in conducting
electricity, diamonds for polishing other materials. But there the
similarities end.
Gold is almost infinitely adaptable. It is malleable and ductile. It can
be worked very easily and finely using very little energy, and with tools made of nearly any other metal. It is a nearly perfectly fungible commodity.
Financially, it is practically a liquid rather than a solid. It plays very well
with other materials and adapts to them. Its purity can be measured with
near-perfect objective precision, and its value is entirely market-driven. It
has no identity. Its value is entirely intrinsic and based on the rarity of the
metal itself.
Gold can be melted, drained of history, and reshaped into new
artifacts. When you add gold to gold, the whole is equal to the sum of the
parts. When you subtract gold from gold, the pieces retain all the value of
the whole. You can work gold in reversible ways.

Diamonds are not adaptable at all. They are the hardest things around, and the only thing that can work a diamond is another diamond. They are nearly perfectly non-fungible. The more precious ones are so non-fungible, they have names, personalities and histories that are nearly impossible to erase. As economic goods, they transcend mere brandhood and aspire to sentience: we speak of cursed or lucky diamonds. Diamonds do not play well with other materials. Other materials (gold in particular) must adapt to them. Purity and refinement are not very useful concepts to apply to a diamond. In fact, a diamond is defined by its impurities. The famous Hope diamond is blue because of trace quantities of boron. Color, clarity and flaws can be assessed, but ultimately working a diamond is about revealing its personality rather than molding it. Diamonds that win personality contests go on to become famous. Those that fail to impress the contest judges are murdered: broken up into smaller pieces or degraded to industrial status.
The value of a diamond is in the eye of the beholder. At birth, rough
diamonds are assessed by expert sightholders, and at every subsequent
transaction, human judges assess value. A diamond is born as a brand. An
extreme, immutable brand that can only be destroyed by destroying the
diamond itself. And finally (perhaps most importantly), a diamond's value has nothing to do with its material constitution. Carbon is among the commonest elements on earth. A diamond's value is entirely based on the immense amounts of energy required to fuel the process that
creates it. They are found in places where deep, high-energy violence has
occurred, such as the insides of volcanic pipes.
Diamonds are forever. They cannot be drained of history. When you break up a diamond (you cannot add diamonds), the pieces have less value than the whole. Diamonds can only be worked in irreversible, destructive ways.
Diamonds represent a becoming kind of value; the products of creative destruction. If you've read my Be Slightly Evil newsletter issue, "Be Somebody or Do Something," you know the symbolism I am getting at here. You also know where my sympathies lie.
I find it particularly amusing that the value of gold is measured in
purity carats, while the value of diamonds is measured in weight carats.

Purity and weight are what are known as intensive and extensive
measures. The size of a diamond is a measure of the quantity of tectonic
violence that created it.
I prefer diamonds to gold, perhaps because I am not an original
thinker, but a creative-destructive one. I am not very good at discovering
rare things. I am better at applying intense pressure to commonplace
things, in the hopes of producing a diamond. Sometimes I stumble upon
natural rough diamonds, but more often, I attempt to manufacture artificial
ones from coal. They are not as pretty, and it is very hard to manufacture
large ones, but when I succeed, I produce legitimate diamonds, born under
pressure.
Gold, I rarely mine myself (and earth-bound humans cannot manufacture it; only dying suns can). I buy gold in the form of second-hand jewelry at the bookstore, melt it down, and rework it into other things. Most often, into settings for diamonds.

How to Define Concepts


June 21, 2007
Let us say you are the sort of thoughtful (or idle) person who
occasionally wonders about the meaning of everyday concepts. So there
you are, at the fair, laughing at yourself in a concave mirror, when
suddenly it hits you. You don't really know what "concave" means. You just recall vague ideas of concave and convex lenses and mirrors from
high school and using the term in general conversation to describe certain
shapes. So you decide to figure out a definition.
What do you do? How do you make up a definition? Let's get you into some trouble.
So the first thought you have is: concavity has something to do with indentations or inward curvature of shapes. You quickly abandon open-ended curves, of the sort you see in graphs of population growth and the like. Being smart, you realize that the concavity there is not fundamental: you could turn a concave graph upside down, and make it convex with respect to your preferred visual orientation of "up is against gravity."
So you decide that the notion of concavity probably only makes sense for closed curves: things with an inside and an outside (congrats, you just found a use for the Jordan curve theorem!). You draw yourself a prototypical closed curve, like the one on the left below, and stare at it:

You think, "hmm… really, what is going on with concavity is that I can sort of take a shortcut across some parts by going outside it." That leads to your first stab at a definition: a figure is concave if there exists a pair of points in it such that the straight line between them is not contained entirely within it. You draw a couple of lines and convince yourself, like on the right. At this point, if you like to rush to math, you might even write down an equation like this one:
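(Something like the parametric form of the segment joining your two chosen points p and q: x(θ) = θ·p + (1 − θ)·q, with θ running from 0 to 1.)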

and go, "Aha! A closed curve is concave if and only if you can find a pair of points like so, and for some theta, the point on the line given by my clever equation isn't inside the figure!"
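(If you like rushing to code rather than math, here is a minimal sketch of the same straight-line test in Python; the shapely geometry library and the two example figures are incidental choices for illustration, and any point-in-polygon routine would do.)

from itertools import combinations
from shapely.geometry import LineString, Polygon

def looks_concave(vertices):
    # Straight-line test: flag the figure as concave if some chord between
    # two of its vertices is not contained entirely within it. (The definition
    # above quantifies over all pairs of points; vertex pairs are a cheap
    # proxy that catches the obvious dents.)
    figure = Polygon(vertices)
    return any(
        not figure.covers(LineString([p, q]))
        for p, q in combinations(vertices, 2)
    )

square = [(0, 0), (2, 0), (2, 2), (0, 2)]               # no dents
ell = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]  # an inward dent

print(looks_concave(square))  # False
print(looks_concave(ell))     # True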
You've just found an attribute of convexity that you think is necessary and sufficient to define it. But then, suddenly a thought occurs to you. You
sketch:

and go, "Uh oh!"
What just happened here? Why does this bother you? You've found a way to draw a line across two closed figures that don't look concave to you, and satisfied a formal notion of concavity. You really want to say that concavity is a notion that only applies to "single" figures. So what do you mean by single? Is a Figure 8 single? Is it concave?

You are in trouble. At this point, if you really cared enough, you'd go on to reinvent a good deal of topology, invent the notion of "simply connected," figure out that you need the notion of closed and open sets and interiors and boundaries (to handle the Figure 8 case) and so forth. But let's not go down that road. Let's ask the more interesting question: why didn't you just define concavity to be anything that satisfies your original straight-line test? (For many purposes in math, that is in fact exactly what you do: use the definition without worrying about connectedness. That's the impatient, technical, "let's get on with it" aspect of mathematics, but you and I like to fuss over what we mean instead of getting somewhere.)
The interesting thing about the way our minds work is that math and formalism are subservient to a fuzzier notion of "what I want to get at." As we refine technical definitions (or natural language definitions of entities like "culture") we tend to move the definition to get at an understood but inexpressible concept. We practically never reduce the concept itself to the definition we are working with. This sort of thing is an example of the operation of what philosophers like to call intension (with an "s"). Intension, roughly speaking, is the true meaning of a concept we are after. The difference between definition and meaning is what philosophers like to characterize as primary (or a priori) and secondary (or a posteriori) intension. The primary intension of water is "watery stuff." That is why a sentence like "Ammonia is the water of Titan" makes sense to us: we imagine ammonia oceans. By contrast, "Water is H2O" is a secondary intension. David Chalmers has a beautiful discussion of intension in The Conscious Mind: In Search of a Fundamental Theory.
Does this apply to this example? Concavity, unlike water, is an abstraction of real-world things like inkblots, bays, dents, holes and so forth. It references too many things in the real world for us to usefully say something like "concavity is sort of like bays or dents." Despite this, however, our brains seem to work with a primary intension of concavity that draws efforts at definition spiraling towards itself. We grope towards what we mean through attribute-based tests expressed in terms of simpler concepts (like "straight line" in our case).
What makes math special is that starting with a few prototypes that suggest a useful notion, we can often converge in a finite number of steps to a watertight characterization of an abstract concept within a useful closed domain. Brouwer was perhaps the only major mathematician who tried to articulate this fundamental aspect of the structure of mathematical thought: that technicalities follow from trying to capture intuition.
Leaky abstractions like "culture" and "war," though, are another matter. I don't yet have a good handle on how to think about the process of achieving clarity with such concepts. Until then, all I can offer is my own rule of thumb: "Seek to capture the intension!"

Concepts and Prototypes


June 14, 2007
We think about abstract concepts in terms of prototypical instances.
These prototypical instances inform how we construct arguments using
these concepts. At a more basic level, they determine how we go about
constructing definitions themselves. Prototypes pop up in all sorts of conceptual domains, ranging from "war" to "airplane" to "bird." So how do prototypes work in our thinking? Let's start with an apparently simple example (the concept of "triangle") that can get tricky really quickly.
If I asked you to draw a triangle, you would probably draw one that looked something like the one below, a scalene triangle, almost certainly drawn with the longest side as the base and the obtuse angle, if there is one, on top. Call this a prototypical triangle, understood as the sort of instance most people would draw. Why we draw such instances is the question of interest here. Let's exercise this instance in a simple argument to see what role prototypicity plays in thinking. We will convince ourselves of the validity of the formula for the area of a triangle (half base times height) through mental visual manipulations.

Imagine a line dropping from the top vertex vertically down to the
base. This line enables you to visualize two right triangles. Now imagine a
copy of the triangle on the left being rotated clockwise 180 degrees.
Position this imaginary triangle so that you now have a complete rectangle
on the left. Repeat the process for the right. The two imaginary rectangles
now form a larger rectangle. The area of this rectangle is the product of the
base and height of the original triangle. Since you constructed this
rectangle by copying, rotating and pasting two triangles that exactly
covered the original triangle, the original triangle must have an area given
by half the product of the base and height.
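(The same conclusion is easy to sanity-check numerically. Here is a minimal sketch in Python, for one arbitrary scalene triangle with its longest side on the x-axis; the specific vertices are incidental, and the shoelace formula stands in for an independent measure of area.)

def shoelace_area(pts):
    # Standard shoelace formula for the area of a triangle from its vertices.
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Prototypical scalene triangle: longest side as the base, apex off-center.
triangle = [(0.0, 0.0), (7.0, 0.0), (2.0, 3.0)]
base = 7.0     # length of the side lying on the x-axis
height = 3.0   # vertical distance of the apex from that side

print(shoelace_area(triangle))   # 10.5
print(0.5 * base * height)       # 10.5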

You used imaginary visual manipulations to convince yourself of the


formula for the area of a triangle. Now ask yourself, why did you not start
with any of the following set of perfectly legal triangles:

We have here an isosceles right triangle, an isosceles triangle, an equilateral triangle, a long, skinny triangle, a straight line segment and a point. The first three cases possess symmetries and the last two contain degeneracies.
Now work through the visual proof of the area of the triangle for each
of these cases. The last two are the easiest: the answer is zero by
examination, so the formula is trivially correct. More importantly, the answer does not tell us much: any constant instead of 1/2 would work, so we have validated a non-unique candidate. The first three, I assert, are also degenerate. Not in terms of their explicit geometric structure, but because the visual proof of their satisfying a particular asserted property (the area formula in our example) collapses to a simpler case than in
the scalene triangle case. Recall your mental manipulations for the scalene
triangle. Can you see how there are fewer steps in convincing yourself of
the truth of the area formula? In each case, a single cut-and-paste
visualization suffices. The scalene triangle proof visualization works (if
inefficiently) for the symmetric cases, but the reverse is not true.
So one answer to "why do we choose as prototypes the instances we do?" is an information-theoretic one. A prototypical instance of a concept is one that contains the maximum information that an entity satisfying the
definition possibly could. We are naturally inclined to work with the
richest-information-structure case. In the case of the explicitly degenerate
cases, the information poverty showed up in the non-uniqueness of the

formula that was satisfiable. In the more subtle cases with symmetries, it showed up in terms of the degeneracy in the proof construction, which would not work in the general case.
That doesn't explain it all. What about our long skinny triangle, which is close to degenerate, but not strictly so? Why didn't we draw something like that? I suspect this has to do with the precision of comparisons we need to make when we mentally manipulate geometric figures: we want enough asymmetry and non-degeneracy to clearly illustrate the information capacity of our concept, but not so much that the precision required of the representation is too high. I am not completely happy with this hand-wavy account, so I'll revisit this when I come up with something better. If you have a better account right now, post a comment.
A final point: why did we choose to draw our original scalene triangle
with the longest side as a visual base? One proximal reason is that the
necessary manipulations require more effort if we were to draw it with,
say, the obtuse angle at the base (try it). A less obvious reason is that we
operate with orientational metaphors that determine notions like "up" and "down" when dealing with abstractions. These metaphors inform both our language ("base" and "height") and probably explain why non-standard orientations are mentally harder to work with, even though the explicit visual-proof steps are orientation-agnostic. These conceptual framing metaphors will come up later when I talk about George Lakoff and his work on metaphor, so I'll defer discussion of this aspect of prototypicity to later.
When we move from instantiations of abstractions to sets of entities
(real or imagined) that we want to define, we run into problems with other
methods of picking out elements, such as archetypes and stereotypes, that
get in the way. We also run into issues of intension, with an "s." That's for later.
I first encountered this notion of prototypicity in a biology class when
I was about 13. The teacher asked the class clown to come up to the
blackboard and draw an amoeba. He drew a neat block L shape, and the
class burst out laughing. The teacher got mad and told him to stop
clowning around and draw a proper amoeba. He countered that since we'd been taught that Amoeba proteus could take on any shape, a regular "L" was as much an "any shape" as an irregular blob.

How to Name Things


February 2, 2012
1
Naming and counting are the two most basic behaviors in our divided
brains. Naming is the atomic act of association, recognition,
contextualization and synthesis. Counting is the atomic act of separation,
abstraction, arrangement and analysis. Each behavior contains the seed of
the other.
To name a thing is to invite it to ensnare itself in your mind; to distill
and compress the essence of a gestalt into a single evocative motif, from
which it can be regenerated at will. Just add attention and stir.
Here are three very different American gestalts that I bet many of you
will recognize without clicking: Babbitt, Bobbitt, Rabbit.
We name and count babies, products, species, theorems, countries, asteroids, ships, drugs, essays, wars, gods, dogs, foods, alcohols, pieces of legislation, judicial pronouncements, subcultures, ocean currents and seasonal winds.
We try to name and number every little transient vortex, in William James' "blooming, buzzing confusion," that persists long enough for us to form a thought about it.
As with plans, so with names. Names are nothing; naming is
everything. To name a thing is to truly know it. As Ursula Le Guin said, "for magic consists in this, the true naming of a thing."
It is the process of naming that is important. The actual name that you
settle on at the end is secondary.

2
Vanity and pragmatism wrestle for control of the act of naming. We
bend one ear towards history and the other towards posterity. We parse for
unfortunate rhymes and garbled pronunciations. We attempt at once to
situate and differentiate. We count syllables and look for domain names.
We walk around the name, viewing it as parent, lover, friend, bully,
journalist, lexicographer and historian. We embed it in imaginary
headlines and taunting rhymes.
In Bali, to name is to number. It is an unsatisfying synthesis that only
works in limited contexts.
The firstborn is Wokalayan (or Yan, for short),
second is Made, third is Nyoman or Komang (Man or
Mang for short), and fourth is Ketut (often elided to Tut).
I am not sure what happens if Wokalayan dies young. Does Made
replace his older sibling and become the new Wokalayan?
In cryptography, the first named character in an example scenario is Alice. The second one is Bob. And so on down an alphabetic cast of characters. This is not the world of interchangeable John and Jane Doe figures. The order matters.
When birth order is more important than individual personality, you get a social order in naming that inhabitants of individualistic modernity struggle to understand.
3
Counting is both ordinal and cardinal. It takes a while to appreciate the difference between "one, two, three" and "first, second, third."
To truly count is to know both processes intimately. In naming,
ordinality has to do with succession and replacement. Cardinality has to do

with interchangeability. You cannot master naming without mastering


counting.
The ordinal, cardinal and nominal serve to situate and uniquely
identify, but do not necessarily indicate the presence of something real.
Hence the query: name, rank and number?
There was once a substance with rank 0, number 0. It was named
ether. It did not actually exist. Substances 1-1 through 1-4, though (earth, fire, water and wind), were real enough, and became the founding fathers and mothers of the modern discipline of chemistry.
It is in fact useful to think of naming as an interrogative act that creates what it questions. Demand insistently enough to know the name, rank and number of a thing, and you will eventually find out. Even if your mind has to manufacture an answer.
When you understand both kinds of counting, you can count and
name in both ways, without using actual numbers.
That gives you iMac, iPod, iPhone and iPad on the one hand, and
Kodiak, Cheetah, Puma, Jaguar, Panther, Tiger, Leopard, Snow Leopard
and Lion, on the other. I'll leave you to guess why the first-born is a bear here, while the rest are cats. Don't give up and click too soon.
Not many languages can efficiently express questions of ordinality. In English, for instance, the question "what is your birth-order ordinality among your siblings?" sounds downright weird, but I cannot find a simpler, grammatical way to express it.
It is much easier to ask the related cardinality question: how many siblings do you have?
Curiously, the ordinal question is very easy to ask in my nominal native language of Kannada. It would translate to something like "How many-eth son are you of your father?", if such constructs were allowed in English. At least, that was the best I could come up with when my father challenged me to translate the line as a kid.

It would be a useful construct to have in English. We could ask, "What-ieth major version of Mac OS X is Lion?"
The naming practices in Bali and the Ursula Le Guin quote made me
think of a rather clever idea for a short story about a culture where the
young start out with ordinal names as in Bali, but are given true names if
and when wise elders first spot the child in an act that expresses a unique
individuality.
At this point, a coming-of-age naming ceremony is conducted, and
the child is declared an adult with special privileges over the un-named.
Rather complicated things happened to the hero's name in the story, having to do with self-referential paradoxes. I've forgotten the plot, but I remember that at the time I had to diagram the events in the story.
I never wrote the story because coming up with names for the
characters was too hard.
4
We name to liberate, and we name to imprison. We name to flatter, and we name to insult. We name to own, and we name to be owned. We name to subsume, and we name to be subsumed. We name to frame, and we name to reframe.
Google bought Urchin on Demand and turned it into Google Analytics. It bought YouTube and left the name alone.
The Left calls it Right to Choose. The Right calls it Right to Life. The
debate itself is partly about naming: at what point does something deserve
the name human?
The British and the French built a plane together and fought over the
name. The French won. It became the Concorde rather than the Concord.
Gandhi attempted to rename the untouchables "Harijans." God's people. They resented being patronized, and chose for themselves the name "Dalit." The oppressed.

Priests weigh in on the numerological significance of names, and marketing mavens opine about syllable counts.
States step in with Procrustean templates to tax and conscript: last
name, first name, middle initial. Under Spanish rule, the entire Philippines
became a geographic-lexicographic state.
Philosophers ponder the metaphysics of naming and Greek scholars
hunt for their linguistic roots.
As one anthropologist said (I have never managed to find the source), "naming is never a culturally insignificant act."
5
To name is to appreciate the crucial distinction, due to urban theorist John Friedmann, between "appreciative knowledge" and "manipulative knowledge." The one allows us to construct satisfying images of the world. The other allows us to gain mastery over it.
To either number or name is to both appreciate and manipulate. To
number is to appreciate timeless order; to name is to appreciate
transformative chaos.
You number to extend and preserve. Archival is the ultimate act of
numbering.
You name to create, destroy, fragment and churn. You name a product
and launch it. You give a dog a bad name and hang it.
In a break with family tradition, I was not named after my paternal
grandfather. The timeless sequence, ABABAB was broken.
6
Agent 007, James Bond, was named after an ornithologist.

In his numbered world, he is part of a greater order. A world of


conversations between 007 and M, where technology comes from Q and
even the secretary is a very countable Moneypenny. It is a timeless world
where the Ms and Qs are replaceable and 00s are both replaceable and
interchangeable.
In his named world, first he situates, then he differentiates.
"My name is Bond. James Bond."
A tough, hard and unusual name, for a tough, hard guy, who allows
glimpses of a dark past to shine through the veneer of shaken-not-stirred
cocktails and social polish. He blends in, but makes his presence felt. It is
a name that is at once a trust and a threat. Bank of England to friends,
gunboats to foes.
Is that a threat? No, it's a promise.
Commander Bond was once a naval reserve officer. It was in the maritime world that the line, "my name is my bond," gained currency.
It is a name of narrative belonging. It situates the man strongly as
British, but differentiates him not at all among Britishers. In Bond is the
veiled threat of a still-potent dying empire. In James lies identification
with, and anonymity within, that dying Empire.
Fleming once wrote to the real Bond's wife: "It struck me that this brief, unromantic, Anglo-Saxon and yet very masculine name was just what I needed, and so a second James Bond was born."
7
The story of Windows is the story of a wild tree of apparently
domesticated numbers seeking its way in the world, rather than an orderly
parade of tamed wild cats.
1.0, 2.0, 3.0, NT, 3.1, 95, 98, ME, 2000, XP, Vista, 7, 8.

This is no accident. Microsoft has always been a company that has sought its way in the existing world, rather than inviting the world into a fabricated universe of non sequiturs like Apple, Macintosh and Lisa.
The original portmanteau, MICRO-computer SOFT-ware, was a
seeking of a place in a world defined by others. The micro-computer was
ordinally a lesser thing than the mini-computer. Soft-ware was one of three
wares: hard, soft and firm. An element in a set of cardinality three. It was a
shy, retiring and polite name that knew its place in the scheme of things.
But the personality worked, and Microsoft quietly took over the
universe it entered so politely. Windows was a literal-minded appropriation
of the name of a key element of the desktop metaphor. Office seeks to
belong in the workplace rather than redefine it. Internet Explorer remains
the only browser that presumes to name itself after the thing it explores.
How a company names itself, its products and services, and its
organizational parts, tells you a great deal about it.
To number something (implicitly or explicitly, cardinally or ordinally) is the first step in a grander project to order, tag and classify a part of reality; to prepare it for timeless forms of manipulation: replacement and interchange. To number is to subsume the particular within the general.
But to really name something, in the sense of Le Guin, is to disrupt that project at every turn by discovering new magic that confounds the creeping logic of a rigidly ontological enterprise.
To really name is to find leaks as quickly as the number-givers find water-tight categories. To break connections thought secure and make new ones, previously considered impossible. To create difference (irreplaceability and non-interchangeability) as fast as numbering creates homogeneity.
This is perhaps why I still trust Microsoft more than I trust Apple. In the mess that is the Windows sequence-numbering, I find reassurance.

8
To position is to number and name at the same time, and create
something that is both a being and a becoming. Something rooted, that
seeks to connect and get along, and something restless that seeks to get
ahead and away.
To position a thing is to teach it to get ahead, get along, and get away.
We project onto the memetic world of names our own fundamental genetically-ordained proclivities. Evolutionary biology tells us that getting
ahead and getting along are the basic drives that govern life for a social
species. To this, as a species that invented individualism sometime in the
10th century AD, we must add getting away. The drive to become more
than a rank and number. To become a name, even if the only available one,
alpha, is taken.
The Microsoft version soup is Darwin manifest.
Getting ahead, getting along and getting away. Ordinal numbering,
cardinal numbering and naming. Name, rank and number.
Perhaps it is naming and numbering that are fundamental, not biology.
To number well is to comprehend symmetries and anticipate as-yet-unnamed realities; holes in schemata, to be filled in the future. And so we name new elements before discovering them, imagine antimatter when we only know of matter. To categorize well is to create timeless order. Mendeleev's bold leap advanced both chemistry and the art and science of naming.
To number poorly is to squeeze, stuff and snip. To constrain reality to
our fearful and limited conception of it.
To name well is to challenge and court numbers.
To name poorly is to kill or be killed by numbers.

Naming without numbering creates a chaotic unraveling. Numbering


without naming creates orderly emptiness.
It takes discipline to couple the two forces together. And sometimes, numbers and names dance together beautifully to create magic, as when Murray Gell-Mann found inspiration in James Joyce's line, "three quarks for Muster Mark."
9
To name is also to hide and cloak. To switch stories and manufacture
realities. This is the world of Don Draper. He dons a mask, and drapes
new realities over old ones. Starting with his own life.
And so Operation Infinite Justice became Operation Enduring
Freedom.
I was supposed to be named after my grandfather, in keeping with the
timeless ABABAB rhythm. I would have been Rama Rao. But then
they broke with tradition.
My mother wanted to name me Rahul, but my grandmother objected: it is a name with deep significance for Buddhists (the name of the Buddha's son).
Fortunately, in the (cardinal and ordinal) universe of a thousand names that is Vishnu (there is actually a long hymn known as the Vishnu Sahasranama, "Vishnu of the Thousand Names"), a close cousin of Rama was found.
And so I came into the world as Venkatesh. A break from tradition, but
not quite a complete break. Certainly not a defection to a competing
tradition. That would have upset my grandmother.
I once wanted to name an algorithm I'd developed "Mixing Bandits," since it used mechanisms inspired by bandit processes. I gave a draft of my paper to a distinguished professor in the field. He liked my work, but objected to the name. My allusive overloading of a precise term did not sit

well with him. Mathematically, my algorithm was not related enough to


bandit processes.
So this grandmother rejected the baby, refusing to absorb it into the family tradition. It wanders the world today as an illegitimate orphan of the noble clan that has disavowed it, under the clumsy and undistinguished name "MixTeam scheduling."
10
In the genealogy of a single name you can trace entire grand
narratives.
Once upon a time, there was a company in Rochester called Haloid. It
made photographic paper and lived in the giant shadow of a company
across town called Kodak.
Haloid wanted to grow up. So it acquired a technology called
xerography: a name coined by a Greek scholar to situate the idea of "dry writing" within the illegible history of that long intellectual tradition within which the West seeks to situate everything it does.
Ironically, the technology was not the result of a long, gradually
evolving tradition that can be traced back to the Greeks. Not only did the Greeks have nothing to do with it; as the biographer of the technology, David Owen, notes, "There was no one in Russia or France who was working on the same thing. The Chinese did not invent it in the 11th century BC."
Xerography sprang almost fully-formed from the mind of one man,
Chester Carlson. He systematically set about the project of inventing and
patenting something truly new. He managed to do so by putting an obscure
property of the element Selenium to a completely unexpected use.
So Haloid became Haloid Xerox, and eventually just Xerox. It is a
powerful name. So powerful that it subsumed the name of the man who
created it, Joe Wilson. During my time at Xerox, the Wilson Center for
Research and Technology (WCRT) became the Xerox Research Center,

Webster (XRCW). Across the world you will find XRCE (Europe), XRCC
(Canada) and XRCI (India). To earn its right to a unique name within this
orderly namespace, the sole rebel, PARC, had to unleash planet-disrupting
forces.
Xerography eventually became electrophotography, in the hands of
envious competitors who appeared after the trust-busters had done their
work. The name that had gotten ahead and away now had to get along. "My name is photography. Electro-photography."
They still call it xerography at Xerox though.
11
And across town, Kodak slowly declined and began to die. There is
irony here as well.
Photography does have a long history. The ancient Greeks did have
something to do with it. The ancient Chinese did know about pinhole
cameras. The French did play a role.
But Kodak is one of those rare names that was born through an act of pure invention. George Eastman is quoted as saying about the letter "k": "it seems a strong, incisive sort of letter." Yes, incisive like a knife.
The story goes that Eastman and his mother created the name from an
anagrams set. Wikipedia says about the process:
Eastman said that there were three principal concepts
he used in creating the name: it should be short; one cannot
mispronounce it, and it could not resemble anything or be
associated with anything but Kodak.
The first two principles are still adhered to by marketers when
possible. The last has been abandoned since the 1970s, when the
positioning era began.
As with Wilson, the child soon eclipsed the father. Eastman Kodak
became just Kodak to the rest of the world. In proving the soundness of his

principles of memetic stability, Eastman ceded his own place in the history
of naming to a greater name.
Haloid, incidentally, is a reference to the binary halogen compounds of silver used in photography. The word halogen was coined by Berzelius from the words hals ("sea" or "salt") and gen ("come to be"). Coming to be of the sea. It may be the most perfect name, suggesting the being and becoming that is the essence of both naming and chemistry.
Jöns Jacob Berzelius is a founding father of chemistry in large part due to his prolific naming. He came up with "protein" as well. He was also responsible for naming Selenium. From the Greek Selene, for Moon.
It was no small achievement. Chemistry is a science of variety and difference. It deals in so many different things that a narrowly taxonomic mind will fail to appreciate its broader patterns.
In declaring that "Physics is the only real science, all the rest are just stamp collecting," Rutherford failed to appreciate chemistry the way Berzelius did. As an ongoing grand narrative with lesser and greater patterns.
Some deserving names, like protein; others merely abstract, categorical formulas, like CnH2n+2; and names that just fall short of cohering into semantic atoms, like "completely saturated hydrocarbon."
12
Counting and naming are at once trivial and profound activities.
Toddlers learn to count starting with "One, Two, Three…"
Terence Tao has won a Fields Medal and lives numbers like nobody
else alive today. And he is still basically learning to count. At levels you
and I would consider magic, but it is counting nevertheless.
Toddlers learn to name, starting with "me," "mama" and "dada."

Ursula Le Guin has won five Hugo and six Nebula awards, but is
fundamentally still a name-giver.
Names are born of universes, be they small ones that contain only
Kodak or large ones that contain all of Western civilization between alpha
and omega.
It is very hard to make up universes. It is easier to borrow and
disguise them, as Tolkien and Frank Herbert did.
And it is very hard to do so without accidentally causing collisions
between large, old namespaces that might not like each other, as my mom
found out with Rahul.
Lazy novelists are laziest with names, and the work falls apart. When
you have named every character in your novel perfectly, your novel is
finished. Plot and character converge towards perfection as names do.
Names in turn create universes. Carnegie Hall, Carnegie Foundation,
Carnegie-Mellon University.
To name is to choose one universe to draw from and another to create.
Rockefeller gave his name to few things. He preferred bland names like
Standard Oil and The University of Chicago.
And so it is that the Carnegie Universe is very visible, while the much
larger Rockefeller Universe is more hidden from sight.
13
Rockefeller chose to create, and hide much of what he created. But
you can go further. Beyond hiding lies un-naming. To un-name is to deny
identity.
To un-name and un-number is to anonymize completely.
It is useful for the name-giver to ponder the complementary problem of un-naming. If to position is to name and number, to de-position is to un-name and un-number.

You must seek randomness to disrupt the timeless order imposed by


numbering, disconnection to counter the narrative order created by
naming. Like Dorian Taylor, you must seek cryptonyms.
Cryptonym itself is from the Greek words for "hidden" and "name."
Randomness is hard.
To un-name is to fight the natural. Given enough time, even a set of cryptonyms will fail to arrest a cohering identity. To truly arrest a name, even changing the cryptonym at a random frequency is not enough. The underlying cohering realities must be disrupted.
14
Names demand to be born, and hijack numbers if no worthy ones
appear. And so we have 9-11 and Chapter 11.
At other times, names strain to hang on to life, with no stories to tell.
In the arid, random desert that is bingo, where numbers rule, names
struggle.
Only to a Bingo player is 22 "two little ducks."
Few numbers truly rise to the level of human meaning, and they are
all small: 13, 42, 867-5309.
The largest number in my life that is also a name with permanent
narrative significance is 1174831686.
When I was nine or ten, our local newspaper, The Telegraph, launched
a club for kids in its Sunday edition, called the Wiz Biz Club. I signed up
excitedly, to belong and to make new friends. That was my membership
number.
I received a badge, some stickers and an ID card with that number.

So Venkatesh Rao became 1174831686. That cryptonym was


probably the start of my struggle to own my name instead of being owned
by it.
I am glad to report that despite it being an extremely common Indian
name, I now own venkateshrao.com (it redirects to this site) and almost
the entire first page of Google results. Vishnu can have the other 999
names, but I plan to pwn this one, at least for one lifetime.
15
We dimly recognize, even without the aid of mathematicians who
study such things, that numbers win this decidedly unequal contest of
appreciation and manipulation in the long-term.
In the beginning, we generously allowed our businesses, products and
services to share the older namespaces of people and geographies. East India Company, Jardine Matheson, Carnegie Steel, Johnson & Johnson.
That strategy quickly exhausted itself, and so we energetically began
manufacturing Xeroxes, Kodaks, Microsofts and Apples.
The first really-big-numbers company decided to name itself after a
number, Google. Its home became an even bigger number, Googleplex.
After Google, the Internet began throwing up naming needs faster
than humans could manufacture them, and the orderly taxonomy
unexpectedly imposed on the world by the Internet Domain Name system
suddenly made life very difficult indeed.
So far, we've kept up by inventing quasi-algorithmic models: flickr, dopplr, e-widget, i-doodad.
But eventually naming as a way to understand and construct reality
will fail. Technology creates complexity that creeps inexorably towards
the unnameable-but-significant.
When semantic genealogies in naming give way to syntactic and
lexicographic genealogies, you are halfway to the world of pure numbers

(there is a cute scene in Neal Stephenson's Cryptonomicon, where


members of an online group decide to abandon names and stick to purely
numbering and ranking the world; the split occurs between those who seek
cryptonyms and those who seek a fundamental order within which, for
instance, Earth might be numbered 1).
The march that begins with Aachen and Aardvark cannot keep up
with a universe that throws countable, but not-nameable, variety at us. We
count on, long after we can no longer name. And eventually we cannot
count, either, and must stare at an unnameable, uncountable void and wonder (as some mathematicians do) whether it even exists, given how it eludes characterization.
Yet we persist with both naming and numbering, finding solace in
imposing a partial lexicographic order on reality, even as the struggle gets
harder.
16
I have not used the word "brand" even once in this post, until just now.
Over the years, I have lost confidence in the utility of the concept.
It is appropriate only for the cardinal-ordinal world of mass
manufacturing, where everything has a rank and number, but very few
things have real names. Most brands are McBrands. Billions upon billions
have been served up by marketers and fond parents. Most represent no
deeper reality than the first answer to the question, name, rank and
number.
It is not surprising. After all, the very word originates in processes that evolved to superficially distinguish the essentially interchangeable. In the
world of cows, and pottery before that, to brand was to mark for
identification and counting, and little else.
Brand is an abstraction that adds very little to the more fundamental
concepts of naming and numbering, and the key derivative concept of
positioning. In fact, it is distracting. The word makes it far too easy to lose
yourself in abstractions. Naming and numbering keep you honest and

focused on the gestalt you are trying to distill, with repeated tests. The
story of these attempts is what we know as PR, and with each proposed naming and positioning test you can ask, "do I understand this story yet?"
Without such test-driven naming, branding is an exercise in waterfall
marketing.
To the extent that it is a useful word at all, it describes a consequence
rather than an action. Away from the concrete world of cows being
tortured with red-hot irons, there is no actual action that you can call
branding.
You name, number and position. You then make up non-verbal correlates (colors and logos) that derive from these basic elements. These are things you do.
Brand happens.

How to Think Like Hercule Poirot


August 31, 2009
Last fall, I spent a long weekend in the Outer Banks region, a few
hours south of Washington, DC, reading a collection of Agatha Christie
pastiches called Malice Domestic, Volume 1 (now the title of an annual
mystery conference). The summer tourist season was over, and the hordes
had moved on to Maine and Vermont to chase the Fall colors. The days
were gray, windy, rainy and chilly. The beach front properties had mostly
emptied out, and most of the summer attractions were closed. We had a
large three-level beach front house to ourselves, with a porch facing the
troubled, ominous sea.

The ocean view from our hotel at Cape Hatteras, Outer Banks

Perfect conditions for bundling up in a blanket with a cup of hot


cocoa and a mystery. Reading Malice Domestic was a revelation. None of
the included writers even came close to creating Christie-like magic.
Which led me to wonder: does Poirot endure because he represents certain
truths about how to think effectively, which lesser fictional detectives
lack? I think so.

The Poirot Doctrine


I learned from the varied failures in Malice Domestic that period
settings, isolated cozy contexts (such as locked libraries) and quirky
detective personalities are not necessary, let alone sufficient, for an
effective mystery story. Neither is parlor-trick deductive rationality of the
Holmes variety.
What makes Poirot endure is his capacity for what I call narrative
rationality (the title of a chapter of the book I am writing): the ability to
understand and influence a situation through stories. What saves the quirks
of his character (such as his penchant for merely arranging facts,
borderline obsessive-compulsive fastidiousness and sybarite comfort-seeking) from seeming arbitrary is that they integrate seamlessly and
logically into his thinking style.
One element of narrative rationality is particularly important in Poirot's style: the fact that it is strongly driven by a doctrine, a set of beliefs about how the world works and should work. Poirot's doctrine constrains and defines his narrative imagination, which helps drive the plot.
Poirot's doctrine comprises several sorts of right-brained, left-brained
and moral beliefs that allow him to quickly get beyond a myopic
Holmesian preoccupation with footprints and cigarette ash. He can
therefore think more effectively at higher levels of abstraction and
ambiguity. Sure, as a literary creation, Poirot is rather crude, and yes, the
contrived nature of his cases can make his thinking style itself seem
contrived. Still, his thought processes, unlike those of Sherlock Holmes, say, are surprisingly useful as a model for us non-fictional humans in the
real world.
Poirot's psychological doctrine, in particular, is a robustly intelligent one, based on subtle ideas about human behavior and skepticism of jargon-happy Freudian-technical theorizing. An example is the assertion he offers (I forget in which novel): "women are sometimes tender, but they are never kind." I forget how Poirot uses the idea in his reasoning, but I

remember immediately feeling a great sense of clarity and relief when I


read it. It is a personality heuristic (one that I find to be true) that
requires the vocabulary of a storyteller rather than that of the theorist or
experimentalist, and proves powerful in reasoning about human (in this
case, female) behavior.
This is a right-brained sort of doctrinal element, one that enables him
to recognize patterns. But Poirot can go left-brained as well. For instance,
at one point he explains his bachelorhood to Captain Hastings as follows: "In my experience, I know of five cases of wives being murdered by their devoted husbands. And twenty-two husbands being murdered by their devoted wives. So thank you, no. Marriage, it is not for me." Poirot is a
Bayesian rationalist: he applies the spouse-as-prime-suspect principle
frequently in stories. In fact it is so likely that a husband or wife will turn
out to be the murderer in a Christie novel that she has to expend much of
her ingenuity in muddying marital equations.
Poirot's Moral-Philosophical Universe
But even right-brained and left-brained tendencies do not add up to whole-brained narrative rationality. This is where Poirot truly rises above other
fictional detectives: there is a moral-philosophical dimension to his
thinking that is at once fatalistic (people do not change) and normative.
Though he is Catholic, his views are actually closer to the Protestant
doctrine of predestination, and the Poirot plots are, as a consequence, often
Greek-tragic in their inevitability (Death on the Nile is a good example).
His most frequent normative doctrinal utterance is probably "I do not approve of murder." The line usually appears after Poirot has provided a
nuanced and sympathetic exposition of the motives and actions of all
concerned, and it seems like he has practically justified the murderer's actions. But once he presents his compelling theory of the case, he draws
his line in the sand. Unlike the non-fictional francophone Madame de Staël, who is credited with the quote "to understand all is to forgive all" (Tout comprendre rend très indulgent), Poirot never allows the
murkiness of psychology to cloud his moral vision, thereby saving the
Poirot stories from the tedious and self-absorbed agonies of many modern
fictional detectives.

Poirot's moral philosophy mostly seems to be inherited from Christie herself: Poirot, like Christie, is a religious conservative who is deeply suspicious of socialist save-the-world tendencies. Curiously, some of his moral strengths seem to arise from Christie's subconscious awareness of, and overcompensation for, her own moral flaws. Christie herself is blatantly xenophobic and racist (see Hickory Dickory Dock for instance).
Poirot began his career in The Mysterious Affair at Styles like any other
xenophobia-inspired Christie caricature, full of ridiculous, unreconstructed
Latin pomposity. But he evolves through later novels into an ironically
self-aware egoist. By the time of his death in Curtain, he has evolved in
ways that the English, with their misguided sense of modesty and self-deprecation, never can.
To the extent that the moral elements of Poirot's doctrine represent
philosophical truths, they simplify his detective work and allow him to
drive events towards decisive outcomes. This again, is an element of his
thinking style that I find useful in the real world: keep your psychology
complex, but your morality simple. Otherwise you'll never get anything
done.
There is one last element in Poirot's doctrine: the recognition and exploitation of the flaws of others' doctrines. The best known exploit, of
course, is his tendency to exaggerate his foreignness and play on the
xenophobic prejudices and assumptions of civilizational superiority on the
part of the English characters (who always seem to describe him with
archaic words like "mountebank" and "jackanapes"). The key moment of redemption in a Poirot novel, the one that anchors the reader's
identification with him, is when a shrewd English character calls Poirot
out on his charade, at which point he can assume his fully-realized
character. But this is not just a recurring motif of exposition and
identification in the Poirot canon. The very point of a Poirot novel is to
validate and reinforce the superiority of Poirot's doctrine over lesser doctrines. The moment of truth is not really the revelation of the murderer, but the point in the story at which it becomes clear that Poirot's world view provides the best perspective with which to make moral sense of the
plot. The solution to the murder validates the doctrine.
The whole-brained Poirot doctrine (right-brained, left-brained and moral) allows him to reason around more ambiguous situations than any other fictional detective. The integrated unit of thought in Poirot-style thinking is the story. He urges witnesses to talk freely, speculate, and tell
their story as they please, correctly understanding that people think,
remember and talk (whether they are lying or telling the truth) through
narratives. His own theories, in turn, take the form of evolving stories,
which he continually tests for both psychological and empirical
plausibility. Though he has the dramatic imagination of a playwright, he
never loses sight of the distinction between bald facts and the accounts of
those facts; he never hesitates to kill beautiful theories if they fail to account for even a single trivial observation or psychological implausibility. And of course, like any good fictional detective, the
significance he assigns to specific facts in his stories is often very different
from the significance attributed to them by his witnesses in their stories.
Poirot stories are really stories about stories.
Christie frequently highlights the complexities of Poirots thought
processes by juxtaposing them against those of other characters, who
operate by simpler doctrines. Compared to the lurid and sensationalist
imagination of Captain Hastings and the damn-the-facts fantasies of
Ariadne Oliver, Poirot's own theories of the case can appear very prosaic.
On the other hand, the lack of imagination of Inspector Japp and Miss
Lemon can make Poirot seem like Shakespeare. Again, this is not to say
that Poirot is not capable of fantastic imagination when the situation
warrants it, as it does in Murder on the Orient Express. When the facts
justify bold leaps of faith, Poirot leaps.
Beyond Poirot
Perhaps I am backward-looking, but to my mind, Poirot has never
been topped in the annals of fictional detection. Christie's other creations
can mostly be dismissed. Tommy and Tuppence are the worst secret agent
characters ever, Parker Pyne is a bore and Superintendent Battle rarely
does anything except look enigmatic while others solve the crime. Even
Miss Marple is pretty much a one-trick right-brained pony. Her stock-in-trade is identifying similarities in personality patterns across widely disparate social situations (an urbane Duke in London might remind her of Tommy the Butcher's Boy). The entire holographic Marple universe is based on the dubious one-element doctrine that people are much the same everywhere, which allows for specious extrapolations of the social psychology of St. Mary Mead to the rest of the world.
Within the Christie universe, only the mysterious Mr. Quin is
something of a match for Poirot, when it comes to doctrine-driven
detection. In many ways, thanks to being partly a supernatural-allegorical
construct, Mr. Quin is often more sublime than Poirot. If you haven't read the Mr. Quin books (there are only a few), you should.
Among fictional detectives who have appeared since Poirot (at least the ones I've read/watched on TV), only Dr. House, solver of medical mysteries, comes close. Though nominally a Holmes-inspired character (the show is full of insider Holmes references), the character of House is much closer to that of Poirot, once you discard the superficial Holmes connections. Like Poirot, House is an ironic-doctrinaire mix of right-brained intuition, left-brained statistical skepticism, and a complex-but-black-and-white moral compass. The fact that most of us understand absolutely nothing of the medical jargon in the show underlines the fact that House's appeal lies at a doctrinal level.
The Short Version
Trust your right-brained pattern-spotting. Be a skeptical, data-driven
empiricist. Add a moral compass. Tie it all together with storytelling. Be
aware of, and exploit, the flawed doctrines of others. Do not be concerned
about the morality of this: doctrinal flaws provide the moral justification
for their own exploitation.

Boundary Condition Thinking


January 19, 2011
It is always interesting to recognize a simple pattern in your own
thinking. Recently, I was wondering why I am so attracted to thinking
about the margins of civilization, ranging from life on the ocean (for
example, my review of The Outlaw Sea [August 27, 2009]) to garbage,
graffiti, extreme poverty and marginal lifestyles that I would never want to
live myself, like being in a motorcycle gang. Lately, for instance, I have gotten insatiably curious about the various ways one can be non-mainstream. In response to a question I asked on Quora about words that mean "non-mainstream," I got a bunch of interesting responses, which I turned into this Wordle graphic.

Then it struck me: even in my qualitative thinking, I merely follow the basic principles of mathematical modeling, my primary hands-on techie skill. This interest of mine in the non-mainstream is more than a romantic attraction to dramatic things far from everyday life. My broader, more clinical interest is simply a case of instinctively paying attention to what are known as boundary conditions in mathematical modeling.

Mathematical Thought
To build mathematical models, you start by observing and brain-dumping everything you know about the problem, including key unknowns, onto paper. This brain-dump is basically an unstructured take on what's going on. There's a big word for it: phenomenology. When I do
quotes, questions, little pictures, mind maps, fragments of equations,
fragments of pseudo-code, made-up graphs, and so forth.
You then sort out three types of model building blocks in the
phenomenology: dynamics, constraints and boundary conditions
(technically all three are varieties of constraints, but never mind that).
Dynamics refers to how things change, and the laws that govern those changes. Dynamics are front and center in mathematical thought. Insights
come relatively easily when you are thinking about dynamics, and sudden
changes in dynamics are usually very visible. Dynamics is about things
like the swinging behavior of pendulums.
Constraints are a little harder. It takes some practice and technical
peripheral vision to learn to work elegantly with constraints. When
constraints are created, destroyed, loosened or tightened, the changes are
usually harder to notice, and the effects are often delayed or obscured. If I were to suddenly pinch the middle of the string of a swinging string-and-weight pendulum, it would start oscillating faster. But if you are paying attention only to the swinging dynamics, you may not notice that the actual noteworthy event is the introduction of a new constraint. You might start thinking, "there must be a new force that is pushing things along faster," and go hunting for that mysterious force.
This is a trivial example, but in more complex cases, you can waste a
lot of time thinking unproductively about dynamics (even building whole
separate dynamic models) when you should just be watching for changes
in the pattern of constraints.
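To make the pinch example concrete, here is a minimal simulation sketch (mine, with the physics of the pinch itself idealized away; the numbers are arbitrary). The dynamical law never changes; the only event at t = 5 seconds is a constraint change, yet all you see in the output is a faster swing:

    import math

    g = 9.81                                  # gravity, m/s^2
    length_full, length_pinched = 1.0, 0.5    # string length before/after the pinch, m
    theta, omega = 0.2, 0.0                   # initial conditions: small tilt, no kick
    dt, t_pinch, t_end = 0.001, 5.0, 10.0

    t = 0.0
    while t < t_end:
        # the only thing that changes at t_pinch is the constraint (effective length)
        length = length_full if t < t_pinch else length_pinched
        omega += -(g / length) * math.sin(theta) * dt   # same dynamical law throughout
        theta += omega * dt
        t += dt
        # logging theta here shows the period shrinking by a factor of about sqrt(2)

Watching only the swing, the change at t = 5 looks like something started pushing harder; watching the constraints, it is just a shorter string.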
Inexperienced modelers are often bored by constraints because they
are usually painful and dull to deal with. Unlike dynamics, which dance around in exciting ways, constraints just sit there, usually messing up the dancing. Constraints involve tedious-to-model facts like "if the pendulum swings too widely, it will bounce off that wall." Constraints are
ugly when you first start dealing with them, but you learn to appreciate
their beauty as you build more complex models.
Boundary conditions though, are the hardest of all. Most of the raw,
primitive, numerical data in a mathematical modeling problem lives in the
description of boundary conditions. The initial kick you might give a
pendulum is an example. The fact that the rim of a vibrating drum skin
cannot move is a boundary condition. When boundary conditions change,
the effects can be extremely weird, and hard to sort out, if you arent
looking at the right boundaries.
The effects can also be very beautiful. I used to play the Tabla, and
once you get past the basics, advanced skills involve manipulating the
boundary conditions of the two drums. Thats where much of the beauty of
Tabla drumming comes from. Beginners play in dull, metronomic ways.
Virtuosos create their dizzy effects by messing with the boundary
conditions.
In mathematical modeling, if you want to cheat and get to an illusion
of understanding, you do so most often by simplifying the boundary
conditions. A circular drum is easy to analyze; a drum with a rim shaped
like Lake Erie is a special kind of torture that takes computer modeling to
analyze.
A little tangential kick to a pendulum, which makes it swing mildly in
a plane, is a simple physics homework problem. An off-tangent kick that
causes the pendulum bob to jump up, making the string slacken, before
bungeeing to tautness again, and starting to swing in an unpleasant conic,
is an unholy mess to analyze.
But boundary conditions are where actual (as opposed to textbook)
behaviors are born. And the more complex the boundary of a system, the
less insight you can get out of a dynamics-and-constraints model that
simplifies the boundary too much. Often, if you simplify boundary conditions too much, the behaviors that got you interested in the first place will vanish.
Dynamics, Constraints and Boundaries in Qualitative Thinking
Without realizing it, many smart people without mathematical training
also gravitate towards thinking in terms of these three basic building
blocks of models. In fact, it is likely that the non-mathematical approach is the older one, with the mathematical kind being a codified and derivative kind of thinking.
Historians are a great example. The best historians tend to have an
intuitive grasp of this approach to building models using these three
building blocks. Here is how you can sort these three kinds of pieces out
in your own thinking. It involves asking a set of questions when you begin
to think about a complicated problem.
1. What are the patterns of change here? What happens when I do various things? What's the simplest explanation here? (dynamics)
2. What can I not change, where are the limits? What can break if
things get extreme? (constraints)
3. What are the raw numbers and facts that I need to actually do
some detective work to get at, and cannot simply infer from what I
already know? (boundary conditions).
Besides historians, trend analysts and fashionistas also seem to think
this way. Notice something? Most of the action is in the third question.
That's why historians spend so much time organizing their facts and numbers.
This is also why mathematicians are disappointed when they look at the dynamics and constraints in models built by historians. Toynbee's monumental work seems, to a dynamics-focused mathematical thinker, much ado about an approximately second-order under-damped oscillator (the cycle of Golden and Dark ages typical in history). Hegel's historicism and End of History model appear to be a dull observation about an asymptotic state.

How the World Works


In a way, the big problem that interests me, which I try to think about through this blog, is simply: how does the world work?
At this kind of scale, the hardest part of building good models is
actually in wrestling with the enormous amount of boundary conditions
data. That's where you either get up off the armchair, or turn to Google or Amazon. Thinking about boundary conditions (organizing the facts and numbers in elegant ways) becomes an art form in its own right, and you have to work with stories, metaphors and various other crutches to get at the right set of raw data to inform your problem. Only after you've done that do dynamics and constraints get both tractable and interesting.
Abstractions and generalizations, if they can be built at all, live in the
middle. Stories live on the periphery.
This is part of the reason I don't like traditional mathematical models at "how the world works" scale, like System Dynamics. They ignore or oversimplify what I think is the main raw material of interest: boundary
conditions. A theory of unemployment, slum growth and housing
development cycles in big cities that ignores distinctions among
vandalism, beggary and back-alley crime is, in my opinion, not a theory
worth much. If you could explain elegantly why some cities in decline
turn to crime, while others turn to vandalism or beggary, then you'd have
interesting, high-leverage insights to work with.
It's not surprising, therefore, that one of the most seductive ideas in abstract thinking about history, the deceptively simple center-periphery idea (basically, the idea that change and new historical trends emerge on the peripheries and in the interstices of centers), is extremely hard to analyze mathematically, since it involves a weird switcheroo between boundary conditions and center conditions. Some day, I'll blog about center-periphery stuff. I have a huge, unprocessed phenomenology brain-dump on the subject somewhere.
So in a way, thinking about things like the words in the graphic is my way of wrapping my mind around the boundary conditions of the problem "how does the world work?" If I just made up a theory of the mainstream world based on mainstream dynamics, it would be very impoverished. It would offer an illusion of insight and zero predictive power. A theory of the middle that completely breaks down at the boundaries, and doesn't explain the most interesting stories around us, is deeply unsatisfying.
I have proof that this approach is useful. Some of my most popular
posts have come out of boundary-condition thinking. The Gervais Principle series was initially inspired by the question, "how is Office funny different from Dilbert funny?" That led me to thinking about
marginal slackers inside organizations, who always live on the brink of
being laid off. My post from last week, The Gollum Effect [January 6,
2011], came from pondering extreme couponers and hoarders at the edge
of the mainstream.
So I operate by the vague heuristic that if I pay attention to things on
the edge of the mainstream, ranging from motorcycle gangs to extreme
couponers and hoarders, perhaps I can make more credible progress on big
and difficult problems.
Or at least, that's the leap of faith I make in most of my thinking.

Learning From One Data Point


September 28, 2010
Sometimes I get annoyed by all the pious statistician-types I find all around me. They aren't all statisticians, but there are a lot of people who raise "analytics" and "data-driven" to the level of a holy activity. It isn't that I don't like analytics. I use statistics whenever it is a cost of doing business. You'd be dumb not to take advantage of ideas like A/B testing for messy questions.
What bothers me is that there are a lot of people who use statistics as
an excuse to avoid thinking. Why think about what ONE case means,
when you can create 25 cases using brute force, and code, classify, cluster,
correlate and regress your way to apparent insight?
This kind of thinking is tempting, but dangerous. I constantly
remind myself of the value of the other approach to dealing with data:
hard, break-out-in-a-sweat thinking about what ONE case means. No
rules, no formulas. Just thinking. I call this learning from one data point.
It is a crucially important skill because by the time a statistically
significant amount of data is in, the relevant window of opportunity might
be gone.
Randomness and Determinism
The world is not a random place. Causality exists. Patterns exist. In
grad school, I learned that there are two types of machine learning models in AI: models based on reasoning, and models based on statistics and probability. This applies to both humans and machines. Both are driven by
feedback, but one kind is driven mainly by statistical formulas, while the
other kind is driven by thinking about the new information.
The probability models, like reinforcement or Bayesian learning, are
very easy to understand. They involve a few variables and a lot of clever
math, mostly already done by smart dead people from three centuries ago,
and programmed into software packages.

The reasoning models, on the other hand, are complex, but largely qualitative, and most of the thinking is up to you, not Thomas Bayes. Explanation-Based Learning (EBL) is one type. A slightly looser form is Case-Based Reasoning (CBR). Both rely on what are known as rich domain theories.
Most of the hard thinking in EBL and CBR is in the qualitative thinking
involved in building good domain theories, not in the programming or the
math.
The probability models require lots of data involving a few variables. Do
people buy more beer on Fridays? Easy. Collect beer sales data, and you
get a correlation between time t and sales s. Gauss did most of the
necessary thinking a couple of hundred years ago. You just need to push a
button.
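For what it's worth, the button push really is about this small once Gauss has done the thinking; a minimal sketch, with the numbers invented purely for illustration:

    import numpy as np

    t = np.arange(7)                            # days of the week, coded 0..6
    s = np.array([40, 42, 45, 50, 70, 95, 60])  # made-up beer sales
    slope, intercept = np.polyfit(t, s, 1)      # ordinary least-squares line: the button
    print(f"sales ~ {slope:.1f} * t + {intercept:.1f}")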
EBL, CBR and other similar models are different. A textbook example
is learning endgames in chess. If I show you an endgame checkmate
position involving a couple of castles and a king, you can think for a bit
and figure out the general explanation of why the situation is a checkmate.
You will be able to construct a correct theory of several other checkmate
patterns that work by the same logic. One case has given you an
explanation that covers many other cases. The cost: you need a rich
domain theory in this case a knowledge of the rules of chess. The
benefit: you didn't waste time doing statistical analyses of dozens of
games to discover what a bit of simple reasoning revealed.
Looser case-based reasoning involves stories rather than 100%
watertight logic. Military and business strategy is taught this way. Where
the explanation of a chess endgame could potentially be extended
perfectly to all applicable situations, it is harder to capture what might
happen if a game starts with a Sicilian defense. You can still apply a lot
of logic and figure out the patterns and types of game stories that might
emerge, but unlike the 2-castles-and-king situation, you are working in too
big a space to figure it all out with 100% certainty. But even this looser
kind of thinking is vastly more efficient than pure brute-force statistics-based thinking.
There's a lot of data in the qualitative model-based kinds of learning as well, except it's not two columns of x and y data. The data is a fuzzy set of hard and soft rules that interact in complex ways, and lots of
information about the classes of objects in a domain. All of it deployed in
the service of an analysis of ONE data point. ONE case.
Think about people for instance. Could you figure out, from talking to
one hippie, how most hippies might respond to a question about drilling
for oil in Alaska? Do you really need to ask hundreds of them at Burning
Man? It is worth noting that random samples of people are
extraordinarily hard to construct. And this is a good thing. It gives people
willing to actually think a significant advantage over the unthinking data-driven types.
The more data you have about the structure of a domain, the more you
can figure out from just one data point. In our examples, one chess
position explains dozens. One hippie explains hundreds.
People often forget this elementary idea these days. I've met idiots (who shall remain unnamed) who run off energetically to do data collection and statistical analysis to answer questions that take me 5 minutes of
careful qualitative thought with pen and paper, and no math. And yes, I
can do and understand quite a bit of the math. I just think 90% of the
applications are completely pointless. The statistics jocks come back and
are surprised that I figured it out while sitting in my armchair.
The Real World
Forget toy AI problems. Think about a real world question: A/B
testing to determine which subject lines get the best open rates in an email
campaign. Without realizing it, you apply a lot of model-based logic and
eliminate a lot of crud. You end up using statistical methods only for the
uncertainties you cannot resolve through reasoning. Thats the key:
statistics based methods are the last-resort, brute force tool for resolving
questions you cannot resolve through analysis of a single prototypical
case.
Think about customer conversations. Should you talk to 25 customers
about whether your product is good or bad? Or will one deep conversation
yield more dividends?

Depends. If there is a lot of discoverable structure and causality in the domain, one in-depth customer conversation can reveal vastly more than 25 responses to a 3-question survey. You might find out enough to make
the decision you need to make, and avoid 24 other conversations.
But it takes work. A different kind of work. You can go have lunch
with just ONE well-informed person in an organization and figure out
everything important about it, by asking about the right stories, assessing
that person's personality, factoring out his/her biases, applying everything you know about management theory and human psychology, and spending a few hours putting your analysis together. You won't produce pretty graphs and hard evidence of the sort certain idiots demand, but you will know. Through your stories and notes, you will know. And nine times out of ten, you'll be right.
That's the power of one data point. If you care to look, a single data point or case is an incredibly rich story. Just listen to the story, tease out the logic within it, and you'll learn more than by attempting to listen to
fifty stories and fitting them all into the same 10-variable codification
scheme. Examples of statistical insights that I found incredibly stupid
include:
1. Beyond a point, more money doesn't make people happier
2. Religious people self-report higher levels of happiness than
atheists
Duh. These and other "insights" are accessible much more easily if you just bother to think. Usually the thinking path gets you more than the statistics path in such cases. I cite such results to people who look for that kind of verification, but I personally don't bother analyzing such statistical results deeply.
Sure, it is good to be humble and recognize when you don't have enough modeling information from one case. Sure, data can prove you wrong. It doesn't mean you stop thinking and start relying on statistics for everything. Look at the record of statistics-based thinking. How often are you actually surprised by a data-driven insight? I bet you are like me. Nine out of ten times you ask, "they needed a study to figure THAT out?"
And the 1/10 times you get actual insight? Well, consider the beer and
diapers story. I don't tell that story. Statistics-types do.
This means going with your gut-driven deep qualitative analysis of
one anecdotal case will be fine 9 out of 10 times.
The Real Reason "Data-Driven" is Valued
So why this huge emphasis on quants and "data-driven" and "analytics"? Could a good storyteller have figured out and explained (in
an EBL/CBR sense) the subprime mortgage crisis created by the quants? I
believe so (and I suspect several did and got out in time).
I think the emphasis is due to a few reasons.
First, if you can do stats, you can avoid thinking. You can plug and
chug a lot of formulas and show off how smart you are because you can
run a logistic regression and the Black-Scholes derivative pricing formula
(sorry to disappoint you; no, you are not that smart. The people who
discovered those formulas are the smart ones).
Second, numbers provide safety. If you tell a one-data-point story and
you turn out to be wrong, you will get beaten up a LOT more badly than if
your statistical model turns out to be based on an idiotic assumption.
Running those numbers looks more like real work than spinning a
qualitative just-so story. People resent it when you get to insights through
armchair thinking. They think the honest way to get to those insights is
through data collection and statistics.
Third: runaway behavioral economics thinking by people without
the taste and competence to actually do statistics well. I'll rant about that
another day.
Don't be brute-force statistics-driven. Be feedback-driven. Be prepared to dive into one case with ethnographic fervor, and keep those analytics programs handy as well. Judge which tool is most appropriate given the richness of your domain model. Blend the two together: qualitative storytelling and reasoning, and statistics.
And if I were forced to choose, I'd go with the former any day. Human beings survived and achieved amazing things for thousands of years before statistics ever existed. Their secret was thinking.

Lawyer Mind, Judge Mind


March 29, 2012
Several recent discussions on a variety of unrelated topics with
different people have gotten me thinking about two different attitudes
towards dialectical processes. They are generalized versions of the
professional attitudes required of lawyers and judges, so I'll refer to them as "lawyer mind" and "judge mind."
In the specialized context of the law, the dialectical process is
structurally constrained and the required attitudes are codified and legally
mandated to a certain extent. Lawyers must act as though they were
operating from a lawyer-mindset, even if internally they are operating with
a judge-mind. And vice-versa. Outside of the law, the distinction acquires
more philosophical overtones.
I want to start with the law, but get to a broader philosophical,
psychological and political distinction that applies to all of us in all
contexts.
The Two Minds in Law
The lawyer mind allows you to make up the best possible defense or
prosecution strategy with the available evidence. Within limits, even if the
defense lawyer is convinced his client is guilty, s/he is duty-bound to make
the best possible case and is not required to share evidence that
incriminates the defendant or weakens the case. I asked several questions
about this sort of thing on Quora and got some very interesting answers
from lawyers. If you are a lawyer or judge and have opinions on these
basic questions, you may want to add them as answers to the questions
rather than as comments here.
The legal system is designed so that lawyers are under an ethical and
legal obligation to try and win, rather than get at the truth in any sense.
So a defense lawyer with a flimsy case, who is convinced of his client's guilt, but who wins anyway because the prosecution is incompetent, is doing his job. S/he should not pull his/her punches.
What's more, there is a philosophy behind the attitude. It is not letter
over spirit. It is letter in service of the spirit. If things are working well, the
lawyer should not suffer agonies to see justice not being served in the
specific case, but find solace in the fact of the dialectic being vital and
evolving as it should.
The lawyer, by pulling out all the stops for a legal win, regardless of the merits of the case, is philosophically trusting the search for truth to the dialectic itself, and where the dialectic fails in a particular instance, s/he (I expect) views it as necessary inefficiency in the interests of the longer-term evolution of the legal system. It's the difference between "not in my job description" small-mindedness and "trusting the system" awareness of one's own role and its limitations.
The judge's nominal role is to act as a steward of the dialectic itself and make sure it is as fair as can be at any given time, without attempting to push its limits outside of certain codified mechanisms. The judge is charged with explicitly driving towards the truth in the particular case, and also improving the system's potential (its dialectical vitality) so that it discovers the truth better in the future (hence the importance of writing judgments with an eye on the evolution of case law, which is supposed to run a few steps ahead of legislation as a vanguard, and discover new areas that require legislative attention).
When Does This Work?
Now, if you think about it, this scheme of things works well when the
system is actually getting wiser and smarter over time. If the system is
getting dumber and more subverted over time, it becomes harder and
harder for either the lawyer or the judge to morally justify their
participation in and perpetuation of the system (assuming they care about
such things).
A challenge for a judge might be, for instance, an increasing influence
of money in the system, with public defenders getting worse over time, and rich people being able to buy better and better lawyers over time. If
this is happening, the whole dialectic is falling apart, and trust in the
system erodes. Dialectical vitality drains away and the only way to operate
within the system is to become good at gaming it without any thought to
larger issues. This is the purely predatory vulture attitude. If a legal system
is full of vulture-lawyers and vulture-judges, it is a carcass.
A moral challenge for a lawyer might be, for instance, deciding
whether or not to use race to his/her advantage in the jury selection
process, effectively using legal processes to get racial discrimination
working in his client's favor. Should the lawyer use such tactics, morally
speaking? It depends on whether the dialectic is slowly evolving towards
managing race more thoughtfully or whether it is making racial
polarization and discrimination worse.
This constant presence of the process itself in peripheral vision means
that both lawyers and judges must have attitudes towards both the specific case and the legal system in general. So an activist judge, for instance, might be judge-minded with respect to the case, but lawyer-minded with respect to the dialectic (i.e., being visibly partisan in their
philosophy about if and how the system should evolve, and either being
energetic or conservative in setting new precedents). You could call such a
person a judge-lawyer.
A lawyer who writes legal thrillers on the side, with a dispassionate,
apolitical eye on process evolution, might be called a lawyer-judge. A
lawyer with political ambitions might be a lawyer-lawyer. I can't think of a good archetype label for judge-judge, but I can imagine the type: an apolitical judge who is fair in individual cases and doesn't try too hard to
set precedents, but does so when necessary.
The x-(x)-X-(X) Template
Because of the existence of an evolving dialectic framing things, you really have four possible types of legal professionals: lawyer-lawyers, judge-judges, lawyer-judges and judge-lawyers, where the first attitude is the (legally mandated and formal-role-based) attitude towards a specific case, and the second is the (unregulated) political attitude towards the
dialectic.
When the system is getting better all the time, all four roles are
justifiable. But when it is gradually worsening beyond the point of no
return, none of them is. When things head permanently south, a mismatch
between held and demonstrated beliefs is a case of bad faith. Since all
hope for reform is lost, the only rational responses are to abandon the
system or be corrupt within it.
To get at the varieties of bad faith possible in a collapsing dialectic,
you need to distinguish between held and demonstrated beliefs at both
case and dialectic levels to identify the specific pattern.
So you might have constructs like lawyer-(judge)-lawyer-(lawyer).
This allows you to slice and dice various moral positions in a very fine-grained way. For example, I think a legalist, in the sense that the term has been used in history, is somebody who adopts a lawyer-like role in a specific case within a dialectic that's decaying and losing vitality, while knowing full well that it is decaying. Legalists help perpetuate a dying dialectic. You could represent this as lawyer-(judge)-judge-(lawyer). I'll let you parse that.
This is getting too meta even for me, so I'll leave it to people who are better at abstractions to make sense of the possibilities here. I'll just leave it at the abstract template expression I've made up: x-(x)-X-(X).
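For the mechanically minded, the possibility space behind the template is at least easy to list; a minimal sketch, under one possible reading of the slots (demonstrated-(held) attitude at the case level, then demonstrated-(held) attitude at the dialectic level):

    from itertools import product

    minds = ("lawyer", "judge")
    # four binary slots, so 2 x 2 x 2 x 2 = 16 possible positions
    for case, held_case, dialectic, held_dialectic in product(minds, repeat=4):
        print(f"{case}-({held_case})-{dialectic}-({held_dialectic})")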
The special case of the law illuminates a broader divide in any sort of
dialectical process. Some are full of judge-mind types. Others are full of
lawyer-mind types.
The net behavior of a dialectic depends not just on the type of people
within it, but on its boundary conditions: at the highest level of appeal, do
judge-minds rule or lawyer-minds?
Within the judiciary, even though there are more lawyer minds, the
boundary conditions are at the Supreme Court, where judge minds rule. So
the dialectic overall is judge-minded due to the nature of its highest appeal
process.

In other dialectics, things are different because the boundary conditions are different.
Governance Dialectics
The watershed intellectual difference that separates conservative
(more lawyer-like) and liberal discourses (more judge-like) around a
particular contentious subject is framed by the boundary conditions of the
governance dialectic itself.
Politics exists within the dialectic that in principle subsumes all
others: the governance dialectic. "In principle" because if the governance
dialectic loses vitality, the subsumed dialectics can devour their parents.
You could argue that in a democracy where the legislative branch has
the ability, in principle, to amend the constitution arbitrarily, the overall
governance dialectic is one where the lawyer mind is the ultimate source
of authority, since the top body is a bunch of formally lawyer-mind types.
There are no judge-mind types with any real power, especially in
parliamentary democracies. Nominally judicial roles like the Speaker are
mostly procedural rather than substantive.
The theory of an independent judiciary does not in practice give
judge-mind people equal authority. The check-and-balance powers of the
judiciary are based on seeking to make the law more internally consistent
rather than improving its intentions or governing values. Of course, if the
legislative arm is slow in keeping up with the landscape being carved out
by case law, the judiciary gains more de facto power. That's a subsumed
dialectic devouring its parent.
So in a democracy, lawyer-minds are structurally advantaged, since
the most powerful institution is set up for lawyer minds. Bipartisanship
(judge minds operating in a legislature) takes a special effort to go beyond
the structural default through an act of imagination.
Among the other institutions in a free-market democracy, the judiciary, executive and free press are nominally judge-minded at their boundaries, while the market is lawyer-minded (more on that in a bit). So there is a structural lawyer-mind bias in the top-level institutions (the legislature and the market) and a structural judge-mind bias in the secondary institutions (the judiciary, the press and the executive branch).
Traditional Imperial China was the opposite. The legal system
ultimately derived its authority from a judge-mind figure, the Emperor.
The lawyers were second-class citizens.
Other Dialectics
The notion of a "free press" is currently being radically transformed
due to the fundamental tension between journalism and blogging.
Journalism, at least nominally, is driven by a judge-mind dialectic.
Journalists nominally aspire to a fair-and-balanced (without the Fox News
scare quotes) role in society.
Blogging is driven by a lawyer-mind dialectic. Bloggers trust that the
truth will out in some larger sense, and feel under no moral obligation to
present or even see all sides of an issue. If the opposed side has no
credible people, well, tough luck. The truth will just take a little longer to
out. This gradual transformation of dialectical boundary conditions has
been particularly clear in the various run-ins between Michael Arrington
and newspapers like the Washington Post. This too is a case of a subsumed
dialectic devouring its parent, since the government basically has no idea
what its role in the new media world should be.
Science is another important dialectic. I won't attempt to analyze it though, since it exists in a feedback loop with the rest of the universe, and is too complicated to treat here. Religion used to be dialectical in nature, but isn't any more. But science is unimportant socially because it is very
fragile, and in a world that is socially messy, it is easily killed. It never
rules primarily because it takes a certain minimum amount of talent to
participate in the scientific dialectic, which makes it similar to a minority
dialectic.

Religion used to be a real dialectic. Now it is mostly theater in service of political dialectics.
Capitalism is another dialectic with the capacity to devour
governance, just like the judiciary. But it is lawyer-like, not judge-like.
The idea of a "fiduciary duty" to maximize shareholder wealth in the US is a lawyer-like duty towards society. The trend towards "social businesses" (B-corporations in the US) is an attempt to invent companies
with more judge-like duties towards society. For the former to work, the
market has to be closer to truly competitive, and getting better all the time.
The invisible hand must be guided by an invisible and emergent judicial
mind.
In an environment where pure competition has been greatly
subverted, it is hard to justify this fiduciary duty. The rise of B-corporation philosophy indicates a failure in the governance dialectic,
since emergent judge-mind attitudes that should exist at the legislative
level are being devolved to the corporate level.
In the US, the legislature has abdicated the spirit, if not the letter, of
its responsibilities. Fiduciary duty may be a terrible idea, but the better
solution would be to shift to a different, but still lawyer-mind model. This
is because the market has a far lower capacity to manifest an emergent
judge-mind. Since it is the governance dialectic that controls the nature
and future of money, the principal coordination mechanism for the market,
the market is ultimately subservient in principle, just like the judiciary.
Since the top-level emergent judge-mind requires a culture of
bipartisan legislative imagination to exist, a legislative branch that cannot
define imaginative visions on occasion enables a takeover by the
structurally advantaged lawyer minds that comprise it, which leads to
polarization and a power vacuum, which in turn leads to the devouring by
nominally subsumed dialectics.
This is not an accident. By its very nature, you cannot structurally
advantage judge-minds at the ultimate boundary of a social system. If you
do, you are essentially legitimizing a sort of divine authority. The top level has to be lawyer-minds arguing by default, with an occasional lawyer gaining enough trust across the board to temporarily play judge.
Societies fail when their governance processes fail to demonstrate
enough imagination for sufficiently long periods. We are living through
such a period in the US today, as well as in many other parts of the world.
Governance processes across the world have lost their vitality, and there is a lot of devouring by dialectics they are supposed to subsume.
In the past, during periods of such failure, violent adjustments have occurred. War is, after all, the social dialectic of last resort. Both world
wars and the US Civil War represented such adjustments. In each case, the
governance dialectic was revitalized, but at enormous cost in the short
term.
Empathy and Passion
When you approach all reality with an intrinsic lawyer mind, you fundamentally believe that no matter how powerful your perspective-shifting abilities, you cannot adopt all relevant points of view. Not even all human points of view. With a judge-mind, by contrast, your starting assumption is that you will eventually be able to appreciate all points of view in play. It is a somewhat arrogantly visionary perspective in that sense, and requires exhibition of a sufficient imagination to justify itself.
With a lawyer-mind, for instance, if you are white, you don't presume to understand the black point of view. With a judge-mind, you assume you can. Your emotions can also be lawyer-like (polarized passion) or judge-like (dispassionate).
If you are aware of, and unconflicted about, your role in a given
dialectic, you don't try to either suppress or amplify your emotions. You
try to be mindful about how they influence your intellectual processes and
control that influence if you think it is counter-productive. Up to a point,
passion improves a lawyer mind and lack of passion improves a judge
mind. Too much passion, and a lawyer-mind becomes emotionally
compromised. Too little passion and a judge mind becomes apathetic.
Both pathologies lead to procedural mistakes.

Passion cannot be conjured up out of nothing, nor can it be created or destroyed independently of intellectual reactions. So if you need more or
less passion for your role, you have to either change your role via a true
intellectual shift, or borrow or lend passion. This requires empathy.
Depending on whether the passion is on your side or the opposite
side, empathy can make you more lawyer-minded or more judge-minded.
Empathy for a friend makes you more lawyer-like. Empathy for a rival
makes you more judge-like. This is how dialectics get more or less
polarized. A dialectic with vitality can swing across this range more easily.
One that lacks vitality gets locked into a preferred state.
So there is a sort of law of conservation of passion in a given
situation, with passions of different polarities canceling out via cross-divide empathy, or reinforcing via same-side empathy.
There is a certain irreversibility and asymmetry though. Judge-minds, being fundamentally dispassionate, cannot absorb passion and become
lawyer-like as easily as lawyer-minds can absorb opposed passions and
become more judge-like. This means judge-minds are more stable than
lawyer-minds. To lower polarization, all the minds in a dialectic must mix
more and let passions slosh and cancel out somewhat via empathy. This
means breaking down boundaries and creating more human-to-human
contact. To preserve or increase polarization on the other hand, artificial
barriers must be created and maintained. Or you need a situation where
material dialectics, like war and natural calamities, happen to be highly
active.
This is fundamentally why the labels "conservative" and "progressive"
mean what they do in politics. This is also why conservatives are typically
better organized institutionally. They have walls to maintain to prevent
contamination of their lawyer minds.
And finally, this is also why the governance dialectic is structurally
set up to advantage lawyer-minds at the highest levels: they need the
structure more. It is up to judge-minds to transcend existing structures and
imagine more structure into existence.

Knowing Your Place


With a lawyer mind in improving times, you conclude that your job is
merely to do your absolute best with the perspectives you can access
directly or via empathy, and trust larger processes to head in sane
directions.
The lawyer mind is therefore an open system view that is more robust
to unknown-unknowns. It trusts things it does not actually comprehend. It
is intellectually conservative in that it knowingly limits itself. The judge
mind is a closed system view that is less robust to unknown unknowns. It is intellectually ambitious in that it presumes to adopt a see-all/know-all stance. It does not trust what it cannot comprehend and is limited by what
it can imagine.
Paradoxically, what makes a judge-mind closed is its capacity for
imagination, while a lawyer-mind is open by virtue of its lack of
imagination. The ability to adopt many conflicting perspectives
dispassionately fuels imaginative synthesis, but this synthesis then
imprisons the judge mind. The reverse paradox holds for lawyer minds.
These paradoxes suggest that each type of mind contains the seed of
the other, yin-yang style. I'll leave you to figure out how. The fundamental
delusion of a frozen judge-mind is the belief that this yin-yang state can
exist in one mind all the time. The fundamental delusion of a frozen
lawyer-mind is the belief that it never can.
In the Myers-Briggs system, J(udging) and P(erceiving) represent what I've been calling the lawyer and judge mindsets respectively. Ironic that the labels are somewhat reversed.
Psychologically, I am a P (a fairly strong INTP), but intellectually, over the years I've become increasingly lawyer-minded rather than judge-minded. Perhaps it is the effect of blogging. Perhaps it is a growing sense of the limits of my own abilities.

In terms of more artistic archetypes, the fox and hedgehog reflect lawyer and judge minds.

Just Add Water


February 29, 2012
A Roy Amara quote (often attributed to Bill Gates) that I encountered last week reminds me
strongly of compound interest.
We always overestimate the change that will occur in
the next two years and underestimate the change that will
occur in the next ten.
I hadn't heard this line before, but based on anecdotal evidence, I think Amara was right to zeroth order, and it is a very smart comment. The question is why this happens. I think the answer is that we are naturally wired for arithmetic, but exponential thinking is unnatural. But I haven't
quite worked it out yet. We probably use some sort of linear prediction
that first over-estimates and then under-estimates the underlying
exponential process, but where does that linear prediction come from?
Anyone want to take a crack at an explanation? I could be wrong.
Compound interest/exponential thinking might have nothing to do with it.
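As a toy illustration of the hunch (my own made-up numbers, nothing more): anchor a straight line on the first two years of an exponential trend and extrapolate it.

    import math

    growth = lambda t: math.exp(0.3 * t)    # an assumed exponential process
    slope = (growth(2) - growth(0)) / 2.0   # linear prediction fitted to years 0 through 2
    linear = lambda t: growth(0) + slope * t

    for year in (1, 2, 5, 10):
        print(year, round(linear(year), 2), round(growth(year), 2))
    # the line runs ahead of the exponential inside the two-year window,
    # then falls further and further behind it

Whether something like this is actually what our brains do is exactly the open question above.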
***
When I write, I generally start with some sort of interesting motif, like the Amara quote, that catches my eye, which I then proceed to attempt to unravel. Sometimes it turns out there's nothing there, and sometimes a trivial starting point can fuel several thousand words of exploration.
I call this the "just add attention" model of writing. It's like just-add-water concentrates. A rich motif will yield a large volume of mind fuel if you just dissolve it in a few hours of informed attention.
The previous nugget is an example. If I were to let it simmer for a few days and then sit down to do something with the Amara quote, I would probably be able to spin a 4000-word post from it. I figured I'd let you guys take a crack at this one.

My hit rate has been steadily improving. Nowadays, when I suspect that something will sustain exploration to such and such a depth, I am almost always right.
I prefer the word "motif" to words like "pattern" or "clue," because it is more general. A motif merely invites attention. By contrast, a pattern
attracts a specific kind of analytical attack, and a clue sets up a specific
kind of dissonance.
***
The nature of just-add-attention writing explains why it is hard for me
to write short posts. If I wrote short posts, they'd just be too-clever
questions with no answers, or worse, cryptic motifs offered with no
explanation.
You cannot really compress just-add-attention writing. You can only
dehydrate it back into a concentrate. Just-add-attention writing has a
generative structure but no clear extensive structure. It is like a tree rather
than a human skeleton.
By this I mean that you can take the concentrate (the motif) and repeatedly apply a particular generative process to it to get to what you might call an extensive form. But this extensive form has no clear structure at the
extensive level. At best, it has some sort of fractal structure. A human
skeleton is a spine with four limbs, a rib cage and a skull attached. A tree
is just repeated tree-iness.
But I hesitate to plunge forward and call all generative-extensive
forms fractal, as you might be tempted to do. Fractal structures have more
going on.
***
Just-add-attention writing is partially described well by Paul Graham's essay about writing essays, which somebody pointed out to me after I posted my dense writing piece a few weeks back. But I don't think it is the same as the Graham model. I think the Graham model involves
more conscious guidance from a separate idea about the aesthetics of
writing, sort of like bonsai.
Just-add-attention writing is driven by its own aesthetic. This can lead
to unpredictable results, but you get a more uncensored sense of whether
an idea is actually beautiful.
Dense writing is related to just-add-attention in a very simple way:
making something dense is a matter of partially dehydrating an extensive
form again, or stopping short of full hydration in the first place. Along
with pruning of bits that are either hard to dilute or have been irreversibly
over-diluted.
Why would you want to do that? Because just-add-attention writing
can sort of sprawl untidily all over the place. Partially dehydrating it again
makes it more readable, at the cost of making it more cryptic.
This add-attention/dehydrate-again process can be iterated with some care and selectivity to create interesting artistic effects. It reminds me of a one-word answer Xianhang Zhang posted on Quora to the question, "how do you chop broccoli?" Answer: recursively.
Regular writing can be chopped up like a potato. Just-add-attention writing must be chopped up like broccoli. It is more time-consuming.
That's why I cannot do what some people innocently suggest, simply
serializing my longer pieces as a sequence of arbitrarily delineated parts. I
have never successfully chopped up a long piece into two shorter pieces.
At best, I have been able to chop off a straggling and unfinished tail end
into another draft and then work that separately.
***
Not all generative processes lack extensive structure. The human
skeleton is, after all, also the product of a generative process (ontogeny).
To take a simpler example, the multiplication table for 9 is defined by a
generative rule (9 times n), but also has an extensive structure:

09
18
27
36
45
54
63
72
81
90
In case you didn't learn this trick in grade school, the extensive
structure is that you can generate this table by writing the numerals 0-9
twice in adjacent columns, in ascending and descending order.
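The two descriptions really do coincide; a quick check of each, if you want to verify:

    generative = [9 * n for n in range(1, 11)]   # the rule: 9 times n
    extensive = [10 * tens + units               # the two-column trick
                 for tens, units in zip(range(0, 10), range(9, -1, -1))]
    assert generative == extensive == [9, 18, 27, 36, 45, 54, 63, 72, 81, 90]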
If you wanted to blog the multiplication table for 9, and had to keep it to one line, you could use either:
1. The nine times table is generated by multiplying 1, 2, ..., n by 9, or
2. Write down 0-9 in ascending order and then in descending order in the next column

Both are good compressions, though the second is more limited. But
this is rare. In general, a sufficiently complex generative process will
produce an extensive-form output that cannot then be compressed by any
means other than rewinding the process itself.
***
Just-add-attention writing is easy for those who can do it, but not
everybody can do it. More to the point, of the people who can do it, a
significant majority seem to find it boring to do. It feels a little bit like
folding laundry. It is either a chore, or a relaxing experience.
What sort of people can do it?

On the nature front, I believe you need a certain innate capacity for
free association. Some people cannot free associate at all. Others free
associate wildly and end up with noise. The sweet spot is being able to
free associate with a subconscious sense of the quality of each association
moderating the chain reaction. You then weave a narrative through what
you've generated. The higher the initial quality of the free association, the
easier the narrative weaving becomes.
On the nurture front, this capacity for high-initial-quality free
association cannot operate in a vacuum. It needs data. A lot of data,
usually accumulated over a long period of time. What you take in needs to
age and mature first into stable memories before free association can work
well on this foundation. The layers have to settle. By my estimate, you
have to read a lot for about 10 years before you are ready to do just-add-water writing effectively.
Unfortunately, initial conditions matter a lot in this process, because
our (n+1)th reading choice tends to depend on choices n and n-1. The reading path itself is guided by free association. But since item n isn't usable for fertile free association until, say, you've read item n+385, there is a time lag. So your reading choices are driven by partly digested reading choices in the immediate past.
So if you make the wrong choices early on, your "fill the hopper" phase of about 10 years could go horribly wrong and fill your mind with
crap. Then you get messed-up effects rather than interesting ones.
So there is a lot of luck involved initially, but the process becomes a
lot more controlled as your memories age, adding inertia.
***
This idea that just-add-attention writing is driven by aged memories
of around 10 years of reading suggests that the process works as follows.
When you recognize a motif as potentially interesting, it is your stored memories sort of getting excited about company. "Interesting" is a lot of existing ideas in your head clamoring to meet a new idea. That's why you are sometimes captivated by an evocative motif but cannot say why. You won't know until your old ideas have interviewed the new idea and hired it. Motif recognition is a screening interview conducted by the ideas already resident in your brain.
Or to put it in a less overwrought way, old ideas act as a filter for new
ones. Badly tuned filters lead to too-open or too-closed brains. Well-tuned
ones are open just the right amount, and in the right ways.
Recognition must be followed by pursuit. This is the tedious-to-some
laundry-folding process of moderated free association. It is all the ideas in
your head interrogating the new one and forming connections with it.
Finally, the test of whether something interesting has happened is
whether you can extract a narrative out of the whole thing, once the
interviewing dies down.
A good free association phase will both make and break connections.
If your brain only makes connections, it will slowly freeze up because
everything will be connected to everything else. This is as bad as nothing
being connected, because you have no way to assess importance.
The pattern of broken and new connections (including those
formed/broken in distant areas) guides your narrative spinning.
The Rhetoric of the Hyperlink
July 1, 2009
The hyperlink is the most elemental of the bundle of ideas that we call
the Web. If the bit is the quark of information, the hyperlink is the
hydrogen molecule. It shapes the microstructure of information today.
Surprisingly though, it is nearly as mysterious now as it was back in July
1945, when Vannevar Bush first proposed the idea in his Atlantic Monthly
article, As We May Think. July 4th will mark the second anniversary of
Ribbonfarm (I started on July 4th, 2007), and to celebrate, I am going to
tell you everything I've learned so far about the hyperlink. That is the lens through which I tend to look at more traditional macro-level blog-introspection topics, such as "how to make money blogging," and "will blogs replace newspapers?" So with a "Happy Second Birthday, Ribbonfarm!" and a "Happy 64th Birthday, Hyperlink," let's go explore the hyperlink.
Hyper-Grammar and Hyper-Style
The hyperlink is not a glorified electronic citation-and-library-retrieval mechanism. The electronic library perspective, that the hyperlink is merely a convenience that comes with the cost of amplifying distraction, is a myopic one. But lousy though it is, that is our starting point. The first airports were designed to look like railway stations after all. As McLuhan said, "We see the world through a rear-view mirror. We march backwards into the future." Turn around, backward march.
This is the default mental model most people have of hyperlinks, a
model borrowed from academic citation, and made more informal:
Implicit inline: Nick Carr believes that Google is making us stupid.

Explicit inline: Nick Carr believes that Google is making us stupid (see this article in the Atlantic).
Both are simple ports of constructs like Nick Carr believes [Carr,
2008] that Google is making us stupid. There are a couple of mildly
interesting things to think about here. For instance, the hyper-grammatical
question of whether to link the word believes, as I have done, or the title.
Similarly, you can ask whether "this" or "article" or "this article in the Atlantic" should be used as the anchor in the explicit version. There is also the
visual-style question of how long the selected anchor phrase should be: the
more words you include, the more prominent the invitation to click. But
overall, this mental model is self-limiting. If links were only glorified
citations, a Strunk-and-White hyper-grammar/hyper-style guide would
have little new raw material to talk about.
Let's get more sophisticated and look at how hyperlinks break out of the glorified-citation mould. Turn around, forward march.
Hyperlinking as Form-Content Mixing
Here are two sentences that execute similar intentions:
There are many kinds of fruit (such as apples, oranges and bananas).

There are many kinds of fruit.
I don't remember where I first saw this clever method of linking (the second one), but I was instantly fascinated, and I use it when I can. This method is a new kind of grammar. You are mixing form and content, and blending figure and ground into a fun "open the secret package" game.
From a traffic-grubbing perspective, the first method will leak less, because if you already know all about apples, the link tells you what it is about, and you won't click. So if you want to reference a really unusual take on apples, the second method is more effective.
Real hyperlink artists know that paradoxically, the more people are
tempted to click away from your content, the more they want to keep
coming back. There is a set of tradeoffs involving compactness,
temptation to click, foreshadowing to eliminate surprise (and retain the
reader), and altruism in passing on the reader. But the medium is friendlier
to generosity in yielding the stage. This "yielding the stage" metaphor is important and we will come back to it.
But possibly, this and similar tricks seem trivial to you. Let's do a more serious example.
Hyperlinking as Blending of Figure, Ground and Voice
A while ago, on the Indian site, Sulekha.com, I wrote an article
pondering the interesting difficulties faced by non-European-descent
writers (like me) in developing an authentic voice in English.
Postmodernists, especially those interested in post-colonial literature,
obsess a lot about this sort of thing. Gayatri Spivak, for instance, wrote a
famous article, Can the Subaltern Speak? But scarily-impenetrable people
like Spivak are primarily interested in themes of oppression and politics. I
am primarily interested in the purely technical problem of how to write
authentically in English, in cases where my name and identity are
necessary elements of the text. Consider these different non-hyperlinked
ways of writing a sentence, within a hypothetical story set in Bollywood.
Fishbowl/Slumdog Millionaire method: Amitabh stared grimly from a tattered old Sholay poster.

Expository: Amitabh Bachchan, the Bollywood superstar, stared grimly from a tattered old Sholay poster. Sholay, as everybody knew, was the blockbuster that truly established Bachchan.

Global contextual: Amitabh Bachchan, the Clint Eastwood of India, stared grimly down from a tattered old Sholay poster. Sholay, that odd mix of Kurosawa and John Wayne that drove India wild.

Salman Rushdie method: Amitabh, he-of-boundless-splendor, stared down, a-flaming, from a tattered old Sholay poster.
Critics and authors alike agonize endlessly about the politics of these
different voices. This particular example, crossing as it does linguistic and
cultural boundaries, in the difficult domain of fiction, is extreme. But the
same sorts of figure/ground/voice dynamics occur when you write in-culture or non-fiction.
The first simply ignores non-Indian readers, who must look in at
Indians constructing meaning within a fishbowl, with no help. It is simple,
but unless the intent is genuinely to write only for Indians (which is
essentially impossible on the Web, in English), not acknowledging the
global context is a significant decision (whether deliberate or unthinking).
The second method is simply technically bad. If you can't solve the problem of exposition-overload, you shouldn't be writing fiction.
The third method is the sort of thing that keeps literary scholars up at
nights, worrying about themes of oppression. Is acknowledging Clint
Eastwood as the prototypical strong-silent action hero a political act that
legitimizes the cultural hegemony of the West? What if I'd said Bruce Lee
of India or Toshiro Mifune of India? Would those sentences be acts of
protest?
Rushdie pioneered the last method, the literary equivalent of theater
forms where the actors acknowledge the audience and engage them in
artistic ways. Rushdie finesses the problem by adopting neither simplicity
nor exposition, but a deliberate, audience-aware self-exoticization-with-a-wink. If you know enough about India, you will recognize he-of-boundless-splendor as one literal meaning of the name Amitabh, while Sholay means flames. By putting in cryptic (to outsiders) cultural
references, Rushdie simultaneously establishes an identity for his voice,
and demands of non-Indians that they either work to access constructible
meaning, or live with opacity. At the same time, Indians are forced to look
at the familiar within a disconcerting context.
But Rushdie's solution is far from perfect. In Midnight's Children, for instance, he translates chand-ka-tukda, an affectionate phrase for a child in Hindi, literally as piece-of-the-moon. A more idiomatically appropriate translation would be something like sweetie-pie. Depending on the
connotations of moon in non-Indian languages, the constructed meaning
could be anywhere from weird to random. That gets you into the whole
business of talking about languages, local and global metaphors,
translation, and the Sapir-Whorf hypothesis. Fine if that rich tapestry of
crap is what you want to write about. Not so good if you actually just want
to write a story about a pampered child.
Here is a solution that was simply not available to writers in the past:

Amitabh stared down grimly from a ratty old Sholay poster.

(a version of this solution, curiously, has been available to comic-book artists. If the sentence above had been the caption of a panel showing a boy staring at an Amitabh Bachchan Sholay poster, you would have achieved nearly the same effect).
This is an extraordinarily complex construct, because the sentence is a
magical, shape-shifting monster. It blends figure and ground compactly;
the gestalt has leaky boundaries limited only by your willingness to click.
Note that you can kill the magic by making the links open in new windows
(which reduces the experience to glorified citation, since you are
insistently hogging the stage and forcing context to stay in the frame).
What makes this magical is that you might never finish reading the story
(or this article) at all. You might go down a bunny trail of exploring the
culture and history of Bollywood. Traditionally, writers have understood
that meaning is constructed by the reader, with the text (which includes the
authors projected identity) as the stimulus. But this construction has
historically been a pretty passive act. By writing the sentence this way, I
am making you an extraordinarily active meaning-constructor. In fact, you
will construct your own text through your click-trail. Both reading and
writing are always political and ideological acts, but here I've passed on a
lot more of the burden of constructing political and ideological meaning
onto you.
The reason this scares some people is rather Freudian: when an author
hyperlinks, s/he instantly transforms the author-reader relationship from
parent-child to adult-adult. You must decide how to read. Your mom does
not live on the Web.
That's not all. The writer, as I said, has always been part of the
constructed meaning, but his/her role has expanded. Literary theorists
have speculated that bloggers write themselves into existence by
constructing their online persona/personas. The back-cover author
biography in traditional writing was a limited, unified and carefully
managed persona, usually designed for marketing rather than as a
consciously-engineered part of the text. Online however, you can click on
my name, and explore how I present myself on LinkedIn, Facebook and
Twitter. How deeply you explore me, and which aspects you choose to
explore, will change how you construct meaning from what I write.
So, in our three examples, we've gone from backward-looking, to clever, to seriously consequential. But you ain't seen nothing yet. Let's talk about how the hyperlink systematically dismantles and reconstructs
our understanding of the idea of a text.
Fractured-Ludic Reading
The Kindle is a curiously anachronistic device. Bezos' desire was to
recreate the ludic reading experience of physical books. To be ludic, a
reading experience must be smooth and immersive to the point where the
device vanishes and you lose yourself in the world created by the text. It
is the experience big old-school readers love. Amazon attempted to make
the physical device vanish, which is relatively unproblematic as a goal.
But they also attempted to sharply curtail the possibilities of browsing and
following links.
In light of what we've said about constructing your own text, through your click-trail, and your meaning from that text, it is clear that Bezos' notion of ludic is not a harmless cognitive-psychology idea. It is a
political and aesthetic idea, and effectively constitutes an attitude towards
that element we know as dissonance. It represents classicism in reading.
Writers (of both fiction and non-fiction) have been curiously lagging
when it comes to exploring dissonance in their art. Musicians have gone
from regular old dissonance through Philip Glass and Nirvana to today's
experimental musicians who record, mix and play back random street
noises as performance. Visual art has always embraced collages and more
extreme forms of non sequitur juxtaposition. Dance (Stravinsky), film
(David Lynch) and theater (Beckett) too, have evolved towards extreme
dissonance. Writers though, have been largely unsuccessful in pushing
things as far. The amount of dissonance a single writer can create seems to
be limited by a very tight ceiling, beyond which lies incomprehensible nonsense (Beckett's character Lucky, in Waiting for Godot, beautifully
demonstrates the transition to nonsense).
In short, we do not expect musical or visual arts to be unfragmented
or smooth or allow us to forget context. We can tolerate extreme
closeness to random noise in other media. Most art does not demand that
our experience of it be ludic the way writing does. Our experience can
be disconnected, arm's-length and self-conscious, and still constitute a
legitimate reading. Word-art though, has somehow been trapped within its
own boundaries, defined by a limited idea of comprehensibility and an
aesthetic of intimacy and smooth flow.
There are two reasons for this. First, sounds and images are natural,
and since our brains can process purely unscripted stuff of natural origin,
there is always an inescapable broader sensory context within which work
must be situated. The color of the wall matters to the painting in a way that
the chair does not matter to the reading of a book. Words are unnatural
things, and have always lived in isolated, bound bundles within which
they create their own natural logic. The second reason: music and visual
art can be more easily created collaboratively and rely on the diversity of
minds to achieve greater levels of dissonance (an actor and director for
example, both contribute to the experience of the movie). Writing has
historically been a lonely act since the invention of Gutenberg's press. We
are now returning to a world of writing that is collaborative, the way it
was before Gutenberg.
So what does this mean for how you understand click-happy online
reading? You have two choices:
You could think of click-happy Web browsing as non-ludic cognition: behavior that is destroying the culture of reading. This is the view adopted by those who bemoan "continuous partial attention." This is Nick Carr in Is Google Making us Stupid?

Or you could think of browsing as a new kind of ludic: an unsettling, fragmented experience that is still comprehensible in the sense that a David Lynch movie is comprehensible. It is a kind of ludic that can never be created within one brain. Click trails are texts whose coherence derives from your mind, but whose elements derive from multiple other minds.
In other words, when you browse and skim, you aren't distracted and unfocused. You are just reading a very dissonant book you just made up.
Actually, you are reading a part of a single book. The single book. The one
John Donne talked about. I first quoted this in my post The Deeper
Meaning of Kindle. The second part is well-known, but it is the first part
that interests us.
All mankind is of one author, and is one volume;
when one man dies, one chapter is not torn out of the book,
but translated into a better language; and every chapter
must be so translated... As therefore the bell that rings to a
sermon, calls not upon the preacher only, but upon the
congregation to come: so this bell calls us all: but how
much more me, who am brought so near the door by this
sickness... No man is an island, entire of itself... any man's death diminishes me, because I am involved in mankind;
and therefore never send to know for whom the bell tolls; it
tolls for thee.
The Hyperlink as the Medium
If you start with McLuhan, as most people do, there are two ways to
view the Web: as a vast meta-medium, or as a regular McLuhanesque
medium, with nothing meta about it. For a long time I adopted the meta-medium view (after all, the Web can play host to every other form: text,
images, video and audio), but I am convinced now that the other view is
equally legitimate, and perhaps more important. The Web is a regular
medium whose language is the hyperlink. The varieties of hyperlinking
constitute the vocabulary of the Web. If I give you an isolated URL to type
into your browser, for a stand-alone web page with a video or a piece of
text, you are not really on the Web. If there is no clickable hyperlink
involved, you are just using the browser as a novel reading device.
But though hyperlinking can weave through any sort of content, it has
a special relation to the written word. The Gutenberg era was one where
writing was largely an individual act. Before movable type, the epics and
religious texts of the ancient world were harmonies of multiple voices. But
what we have today is not a resurrected age of epics. The multi-voiced
nature of today's hyper-writing is different. The difference lies in the fact
that the entire world of human experience has been textualized online.
Remember what I said about walls and paintings? The color of the wall
matters to the painting in a way that the chair does not matter to the
reading of a book. In the Web of hyperlinks, writing has found its wall.
Seeking Density in the Gonzo Theater
January 11, 2012
Consider this thought experiment: what if you were only allowed
2000 words with which to understand the world?
With these 2000 words, youd have to do everything. Youd be
allowed to occasionally retire some words in favor of others, or invent
new words, but youd have to stick to the budget.
Everything would have to be expressible within the budget: everyday
conversations and deep conversations, shallow thoughts and profound
ones, reflections and expectations, scientific propositions and vocational
instruction manuals, poetry and stories, emotions and facts.
How would you use your budget? Would you choose more nouns or
verbs? How many friends would you elevate to a name-remembered
status? How many stars and bird species would you name? Would you
have more concrete words or more reified ones in your selection? How
many of the most commonly used words would you select? Counting
mathematical symbols as words, how many of those would you select?
Would you mimic others selections or make up your own mind?
***
When I read old texts, I am struck by the density of the writing.
Words used to be expensive. You had to make one word do many things.
That last sentence contains a simple example. I originally had "convey many meanings" in place of "do many things." For some readers, the
substitution will make no difference. To others, it will make a great deal of
difference.
We talk of dense texts as being layered. They lend themselves to re-reading from many perspectives over a long period of time. Even as late as
the nineteenth century, we find that the average professional writer wrote
with a density that rivals the densest writing today. With the exception of scientific writing (best understood as a social-industrial process for increasing the density of words) every other kind of writing today has become less layered. Most writing admits one reading, if that.
Dense writing is not particularly difficult. Merely time-consuming. As
the word layering suggests, it is something of a mechanical craft, and you
become better with practice. Even mediocre writers in the past, working
with starter material no denser than today's typical Top 10 blog post, could
sometimes achieve sublime results by putting in the time.
If the mediocre can become good by pursuing density, the good can
become great. Robert Louis Stevenson famously wrote gripping action
sequences without using adverbs and adjectives. His prose has a sparse
elegance to it, but is nevertheless dense with meaning and drama. I once
tried the exercise of avoiding adverbs and adjectives. I discovered that it is
not about elimination. The main challenge is to make your nouns and
verbs do more work.
***
In teaching and learning writing today, we focus on the isolated virtue
of brevity. We do not think about density. Traditions of exegesis (the dilution, usually oral, of dense texts to the point where they are consumable by many) are confined to dead rather than living texts.
We have forgotten how to teach density. In fact, we've even forgotten
how to think about it. We confuse density with overwrought, baroque
textures, with a hard-to-handle literary style that can easily turn into
tasteless excess in unskilled hands.
The 2000-word thought experiment, if you try it, will likely force you
to consider density of meaning as a selection factor. Some words, like
schadenfreude, are intrinsically dense. Others, like love, are dense because
they are highly adaptable. Depending on context, they can do many
things.
Density is a more fundamental variable than the length of a text. It is
intrinsic to writing, like the density of a fluid; what is known in fluid dynamics as an intensive property. The length of an arbitrarily delineated piece of text, on the other hand, is an extensive property, like the volume of
a specified container of fluid.
Choosing words precisely and crafting dense sentences is important.
Choosing small containers is not.
***
Writing used to be a form of making. I sometimes wonder what it
would be like to have to carve your thoughts onto stone tablets. One of
these days I am going to try carving the first draft of a post in stone.
Writing on paper is also an expensive luxury. There was a time when
writers made their own paper and ink. You had to write with
temperamental things like quills. The practice of calligraphy was not a
writerly affectation. It was a necessary skill in the days of temperamental
media.
The scribe was more of an archivist than a writer. The other ancestor
of the writer, the bard-sage, was both composer and performer. The
average person did not read, but relied on the bard or priest to expand
upon and perform the written, archived word. Particularly good
performances would lead to revisions of the written texts.
When fountain pens and cheap factory-made paper made their
appearance, writers were able to waste paper, and as a consequence,
written words. In the history of thought, the invention of the ability to
waste words was probably as important as the invention of the ability, famously noted by Alan Kay, to waste bits in the history of
programming.
With cheap paper was born that iconic image of the twentieth century
writer: a writer sitting alone in a room, crumpling up a piece of writing
in frustration, and tossing it into an overflowing waste-paper basket.
Unlike the sage-bard, enacting old texts and beta-testing new ones through
public oral performances, or the scribe, committing tested, quality-controlled and expensive texts to stone, the modern, pre-Internet writer was a resource-rich creature of profligate excess.
The very idea of a waste paper basket would have been unthinkable
at one time.
***
It is difficult today to get a sense of how expensive writing used to be.
I once watched a traditional temple scribe demonstrate the process of
making the palm-leaf manuscripts that were used in India until Islam
brought paper-making to the subcontinent. That probably happened a few
centuries after the Abbasids defeated the Tang empire at the Battle of Talas
in 751 AD, and extracted the secret of paper-making from Chinese
prisoners of war.
Palm leaves are easily the worst writing technology ever invented by
a major culture. They make leather, papyrus, paper and silk look like
space-age media by comparison. A good deal that seems strange about
India as an idea suddenly makes sense once you get that the civilization
was being enacted through this ridiculous medium (and equally ridiculous
ones like tree bark) until about 1000 AD. Imagine a modern civilization
that had to keep its grand narrative going using only tweets, and you get
some sense of what was going on.
Here's how you make palm-leaf manuscripts. First you cut little index-card sized rectangles out of palm fronds and dry them flat. Then you carefully use a needle to scratch out the text, typically a few lines per
leaf. Then you make an ink out of ground charcoal, carefully rub it into
the scratches, and swab away the excess. Finally, you carefully pierce a
hole through the middle (not the edge, since the thing is brittle) and thread
a piece of string through a sheaf of loosely-related leaves.
Congratulations, you have a book.
Since the sheaf is more unstable than individual leaves, you have to
plan for graceful degradation. Expect individual leaves to be lost or
damaged. Expect accidental shuffling and page numbering turning to
garbage. Expect new leaves to be inserted, like viruses. Don't expect multi-leaf stories to remain stable. Expect narrative trunks to sprout branches added by later authors.
The palm leaf manuscript was brittle and easily damaged, available in
one unhelpful size, with a lifespan of perhaps a few decades on average
(carefully preserved ones lasted around 150 years I believe). After that you
had to make a copy if you wanted to keep the ideas alive. If you were rich
or powerful, you could get stuff carved onto stone or copper plates by
slaves. If not, your best bet was to go with palm leaves and hope that
people would descend on your home to make copies.
***
When you look at old writing technology, poetry suddenly makes
sense.
It is modular content that comes in fixed-length chunks, with
redundancy and error-correcting codes built in. It is designed to be
transmitted and copied across time and space through unreliable and noisy
channels, one stone tablet, palm leaf or piece of handmade paper at a time.
The technology was still unreliable enough that the oral tradition remained
the primary channel. Writing began as a medium for backups. Scribes
were the first data warehousing experts. They did more than merely
transcribe the spoken word. They compressed, corrected and encrypted as
well, and periodically updated texts to reflect the extant state of the oral
tradition.
That is why verses are so eminently quotable outside the context of
poems. Poems are extensive oral containers of arbitrary length, in some
cases delineated after the fact. Verses are standardized containers designed
to carry intense, dense, archival-quality words around.
Today we view traditional verse epics as single works. The Iliad has about 9000 verses. The Mahabharata has about 24,000. It makes far more sense to talk about both as data-warehoused records of extremely long (in both time and words) convergent conversations. They are closer to Google's index than to books.
For the ancients, texts had to be little metered packets. But as paper
technology got cheaper and more reliable, poetry, like many other obsolete
technologies before and after, turned into an art form. Critical function
turned into dispensable style. Meter and rhyme ceased to be useful as
error-correcting coding mechanisms and turned into free dimensions for
artistic expression.
Soon, individual verses could be composed under the assumption of
stable, longer embedding contexts. Extensive works could be delineated a
priori, during the composition of the parts. And the parts could be safely
de-containerized. Rhyming verse could be abandoned in favor of blank
verse, and eventually meter became entirely unnecessary. And we ended
up with the bound book of prose.
Technologically, it was something of a backward step, like reverting
to circuit-switched networks after having invented packet switching, or
moving back from digital to analog technology. But it served an important
purpose: allowing the individual writer to emerge. The book could belong
to an individual author in a way a verse from an oral tradition could not.
***
Poetry gets it right: length is irrelevant. You can standardize and
normalize it away using appropriate containerization. It is density that
matters. Evolving your packet size and vocabulary over time helps you
increase density over time.
My posts range between 2000-5000 words, and I post about once a
week here on ribbonfarm. But there are many bloggers who post two or
three 300-word posts a day, five days a week. They also log 2000-5000
words.
So I am not particularly prolific. I merely have a different packet size
compared to other bloggers, optimized for a peculiar purpose: evolving an
idiosyncratic vocabulary. It seems to take several thousand words to
characterize a neologism like gollumize or posturetalk. But once that is
done, I can reuse it as a compact and dense piece of refactored perception.
You could say that what I am really trying to do on this blog is compose a speculative dictionary of dense words and phrases. Perhaps one
day this blog will collapse under its own gravity into a single super-dense
post written entirely with 2000 hyperlinked neologisms, like a neutron
star.
Poetry (functional ancient poetry, the cultural TCP/IP of the world before around 1000 AD) is necessarily a social process, involving, at the very least, a sage-bard, a scribe, an audience and a patron. The oral culture
refines, distills, tests, reworks, debates and judges. Iterative performance is
a necessary component. When oral exegesis of an unstable verse dies
down, and memorization and repetition validate the quality of the finished
verse, the scribe breaks out his chisel.
The prose book can stand apart from broader social processes in
radically individual ways. It can travel from writer to readers largely
unaltered, setting up a hub-spoke pattern of conversational circuits.
***
I've occasionally described my blogging as a sort of performance art.
But something about that self-description has been bothering me. I have
now concluded that if the description applies at all, it applies to a different
kind of blogger, not me.
The Web obscures the crucial and necessary distinction between oral
and written cultures. Some bloggers perform and talk. Others are scribes.
I think I am a scribe, not a performer.
Yet, there is no easy correspondence between pre-Gutenberg bard-sages and scribes and today's bloggers. In the intervening centuries, we have seen the rise and fall of the individualist writer, working alone, filling waste-paper baskets.
History does not rewind. It synthesizes. The blogosphere, I am convinced, synthesizes the collectivist pre-Gutenberg culture of sage-bards and scribes with the individualist post-Gutenberg culture of paper-crumpling waste-paper-basket fillers.
In the process of synthesis, virtual circuits must ride once more on top
of a revitalized packet-switched network. The oral/written distinction must
be replaced by a more basic one that is medium-agnostic, like the Internet
itself.
***
According to legend, the sage Vyasa needed a scribe to write down
the Mahabharata as he composed it. Ganesha accepted the challenge, but
demanded that the sage compose as fast as he could write. Wary of the
trickster god, Vyasa in turn set his own condition: Ganesha would have to
understand every verse before writing it down. And so, the legend
continues, they began, with Vyasa throwing curveball verses at Ganesha
whenever he needed a break.
The figure of Vyasa the composer is best understood as a literary
device to represent a personified oral tradition (that perhaps included a
single real Vyasa or family of Vyasas).
But the legend gets at something interesting about the role of a scribe
in a dominantly oral culture. A second-class citizen like a minute-taker or
official record-keeper, the scribe must nevertheless synthesize and
interpret an ongoing cacophony in order to produce something coherent to
write down. When the spoken word is cheap and the written word is
expensive, the scribe must add value. The oral tradition may be the
default, but the written one is the court of final appeal in case of conflict between two authoritative individuals.
There is a brilliant passage in Yes, Prime Minister, where the Cabinet Secretary Humphrey Appleby helps the Prime Minister, Jim Hacker, cook the minutes of a cabinet meeting after the fact, to escape from an informal oral commitment. Appleby's exposition of the principle of accepting the minutes as the de facto official memory gets to the heart of the Vyasa-Ganesha legend:
Sir Humphrey: It is characteristic of all committee discussions and decisions that every member has a vivid recollection of them and that every member's recollection of them differs violently from every other member's recollection. Consequently, we accept the convention that the official decisions are those and only those which have been officially recorded in the minutes by the Officials, from which it emerges with an elegant inevitability that any decision which has been officially reached will have been officially recorded in the minutes by the Officials and any decision which is not recorded in the minutes has not been officially reached, even if one or more members believe they can recollect it, so in this particular case, if the decision had been officially reached it would have been officially recorded in the minutes by the Officials. And it isn't so it wasn't.
The key point here is that the scribe must do more than merely
transcribe. He must interpret and synthesize. I suspect the Vyasa-Ganesha
legend was invented by the first scribe paid to write down the hitherto-oral
Mahabharata, to legitimize his own interpretative authority in capturing
something coherent from a many-voiced tradition, with each voice
claiming the authority of a mythical Vyasa.
***
So if the modern blogosphere is neither the collectivist, negotiated
recording of a Grand Narrative, arrived at via a conversation between
scribes and sage-bards, nor the culture of purely individual expression that
reigned between Gutenberg and Tim Berners-Lee, what is it?
For blogging to be performance art, the performer must live an
interesting life and do interesting things. For a while I thought I qualified,
but then I reflected and was forced to admit that my dull daily routine does
not qualify as raw material for performance art.
How about this: instead of a half-coherent oral tradition or the
relatively coordinated doings of the British Cabinet, the blogosphere is
primarily an uncoordinated theater of large-scale individual gonzo
blogging. As culture is increasingly enacted by this theater of decentered
gonzo blogging instead of traditions that enjoy received authority, minute-taking scribe bloggers must increasingly interpret what they are seeing.
The first human scribe who wore the mask of Ganesha could
reasonably assume that there was a coherent trunk narrative with
discriminating judgments required only at the periphery. He would only
be responsible for smoothing out the rough edges of an evolving oral
consensus. Equally Humphrey Appleby could hope for a coherent
emergent intentionality in the deliberations of the cabinet.
But the scribe-blogger cannot assume that there is anything coherent
to be discovered in the gonzo blogging theater. At best he can attempt to
collect and compress and hope that it does not all cancel out.
There is another difference. When words are literally expensive, as
words carved in stone are, anything written has de facto authority,
underwritten by the wealth that paid for the scribe. Scribes were usually
establishment figures associated with courts, temples or monasteries,
deriving their interpretative authority from more fundamental kinds of
authority based on violence or wealth.
With derived authority comes supervision. The compensation for lost
derived authority is the withdrawal of supervision. The scribe-blogger is
an unsupervised and unauthorized chronicler in a world of contending
gonzos. Any authority he or she achieves is a function of the density and
coherence of the interpretative perspective he or she offers on the gonzo-blogging theater.
***
I wish I could teach dense blogging. I am not sure how I am gradually
acquiring this skill, but I am convinced it is not a difficult one to pick up.
It requires no particular talent beyond a generic talent for writing and
thinking clearly. It is merely time-consuming and somewhat tedious.
Sometimes I strive for higher density consciously, and at other times,
dense prose flows out naturally after a gonzo-blogger memeplex has
simmered for a while in my head. I rarely let non-dense writing out the
door. You need gonzo-blogging credibility to successfully do Top 10 list
posts. I can manufacture branded ideas, but lack the raw material needed
to sustain a personal brand.
Writing teachers with a doctrinaire belief in brevity urge students to
focus. They encourage selection and elimination in the service of explicit
intentions. The result is highly legible writing. Every word serves a
singular function. Every paragraph contains one idea. Every piece of prose
follows one sequence of thoughts. There is a beginning, a middle and an
end. Like a city laid out by a High-Modernist architect, the result is
anemic. The text takes a single prototypical reader to a predictable
conclusion. In theory. More often, it loses the reader immediately, since no
real reader is anything like the prototypical one assumed by (say) the
writer of a press release.
An insistence on focus turns writing into a vocational trade rather than
a liberal art.
Both gonzo blogging and scribe blogging lead you away from the
writing teacher.
Striving for density, attempting to compress more into the same
number of words, inevitably leads you away from the legibility prized by
writing teachers. Ambiguity, suggestion and allusion become paramount.
Coded references become necessary, to avoid burdening all readers with
selection and filtration problems. Like Humpty-Dumpty, you are
sometimes forced to enslave words and chain them to meanings that they
were not born with.
***
Dense writing creates illegible slums of meaning. To the vocational
writer, it looks discursive, messy and randomly exploratory.
But what the vocational writer mistakes for a lack of clear intention is
actually a multiplicity of intentions, both conscious and unconscious.
Francine Prose, in Reading Like a Writer, remarked that beginning novelists obsess about voice, the question of who is speaking. She goes on to remark that the more important question is: who is listening?
The failure to ask who is listening is peculiar to pre-Internet book
writers. You cannot possibly fail that way as a blogger.
The modern extensive-prose, word-wasteful book represents the
apogee of a certain kind of individualism. An individualism that writes
itself into existence through self-expression unmodulated by in-process
feedback, something only entire cultures could afford to do in the age of
stone-carved words. For this kind of writer, the reader was a distant
abstraction, easily forgotten.
A muse was an optional aid to the process rather than a necessary
piece of cognitive equipment. At most, modern pre-blogging book writers wrote for a single archetypal reader.
For the blogger, a multiplicity of readerly intentions is a given. At the
very least, you must constantly balance the needs of the new reader
against the needs of the long-time reader. Every frequent commenter or
email/IM correspondent becomes an unavoidable muse. This post for
instance, was triggered by a particularly demanding muse who accused
me, over IM, of having gotten lazy over the last few posts and neglecting
this blog in favor of my more commercial, less-dense writing.
She was right. Mea culpa. Having to pay the rent is not a valid excuse
for failing to rise to the challenge of a tricky balancing act.
Density is the natural consequence of trying to say many things to
many distinct people over long periods of time without repeating yourself
too much or sparking flame wars. The long-time reader gets impatient
with repetition and demands compaction of old ideas into a shorthand that
can be built upon. The newcomer demands a courteous, non-cryptic
welcome. Active commenters demand a certain kind of room for their own
expansion, elaboration and meaning construction.
The exegesis of living texts is not the respectful affair that it is around
dead ones. If you blog, there will be blood.
***
In the days of 64k memories, programmers wrote code with as much
care as ancient scribes carved out verses on precious pieces of rock, one
expensive chisel-pounding rep at a time.
In the remarkably short space of 50 years, programming has evolved
from rock-carving parsimony to paper-wasting profligacy.
Still-living machine-coding gray eminences bemoan the verbosity and
empty abstractions of the young. My one experience of writing raw
machine code (some stepper-motor code, keyed directly into a controller
board, for a mechatronics class) was enlightening, but immediately
convinced me to run away as fast as I could.
But why shouldn't you waste bits or paper when you can, in service of clarity and accessibility? Why layer meaning upon meaning until you get to near-impenetrable opacity?
I think it is because the process of compression is actually the process
of validation and comprehension. When you ask repeatedly, who is
listening, every answer generates a new set of conflicts. The more you
resolve those conflicts before hitting Publish, the denser the writing. If
you judge the release density right, you will produce a very generative
piece of text that catalyzes further exploration rather than ugly flame wars.
Sometimes, I judge correctly. Other times I release too early or too
late. And of course, sometimes a quantity of gonzo-blogger theater
compresses down to nothing and I have to throw away a draft.
And some days, I find myself staring at a set of dense thoughts that
refuse to either cohere into a longer piece or dissolve into noise. So I
packetize them into virtual palm-leaf index cards delimited by asterisks,
and let them loose for other scribes to shuffle through and perhaps sinter
into a denser mass in a better furnace.
It is something of a lazy technique, ultimately no better than list-blogging in the gonzo blogosphere. But if it was good enough for Wittgenstein, it's good enough for me.
***
Rediscovering Literacy
May 3, 2012
I've been experimenting lately with aphorisms. Pithy one-liners of the sort favored by writers like La Rochefoucauld (1613-1680). My goal was to turn a relatively big idea, the sort I would normally turn into a 4000-word post, into a one-liner. After many failed attempts over the last few
months, a few weeks ago, I finally managed to craft one I was happy with:
Civilization is the process of turning the incomprehensible into the
arbitrary.
Many hours of thought went into this 11-word candidate for eternal
quotability. When I was done, I was tempted to immediately unpack it in a
longer essay, but then I realized that that would defeat the purpose.
Maxims and aphorisms are about more than terseness in the face of
expensive writing technology. They are about basic training in literacy.
The aphorism above is possibly the most literate thing I have ever written.
By stronger criteria I'll get to, it might even be the only literate thing I've ever written, which means I've been illiterate until now.
This post isn't about the aphorism itself (I'll leave you to play with it), but about literacy.
I used to think that the terseness of written language through most of
history was mostly a result of the high cost and low reliability of writing
technologies in pre-modern times. I now think these were secondary
issues. I have come to believe that the very word literacy meant something
entirely different before around 1890, when print technology became
cheap enough to sustain a written form of mass media.
Literacy as Sophistication
Literacy used to be a very subtle concept that meant linguistic
sophistication. It used to denote a skill that could be developed to arbitrary
levels of refinement through practice. Literacy meant using mastery over language, both form and content, to sustain a relentless and increasingly sophisticated pursuit of greater meaning. It was about an
appreciative, rather than instrumental use of language. Language as a
means of seeing rather than as a means of doing.
Reading and writing, the ability to translate language back and forth between oral and written forms, was a secondary matter. It was a vocational pursuit of limited depth.
The written form itself was merely a convenience for transmitting
language across space and time, and a mechanism by which to extend the
limits of working memory. It had little to do with language skills per se.
Confusing the two is like confusing the ability to read and write
musical notation with musical ability. You can have exceptional musical
ability without knowing how to read music. And conversely, you might
have no musical ability whatsoever, but still be able to read and write
musical notation and translate back and forth between the keyboard and
paper. Being able to read and write musical notation really has almost
nothing to do with musical ability.
When writing was expensive, conflating the two skills (two-way
translation and sophisticated use) was safe and useful. If somebody knew
how to read and write, you could safely assume that he or she was also a
sophisticated user of language.
It was never considered a necessary condition though, merely a
sufficient one. A revealing sign is that many religious messiahs have been
illiterate in the reading/writing sense, and have had scribes hanging on
their every word, eagerly transcribing away for posterity.
Exposition and Condensation
Before Gutenberg, you demonstrated true literacy not by reading a
text out aloud and taking down dictation accurately, but through
exposition and condensation.
You were considered literate if you could take a classic verse and
expound upon it at length (exposition) and take an ambiguous idea and
distill its essence into a terse verbal composition (condensation).
Exposition was more than meaning-extraction. It was a demonstration
of contextualized understanding of the text, skill with both form and
content, and an ability to separate both from meaning in the sense of
reference to non-linguistic realities.
Condensation was the art of packing meaning into the fewest possible
words. It was a higher order skill than exposition. All literate people could
do some exposition, but only masters could condense well enough to
produce new texts considered worthy of being added to the literary
tradition.
Exposition and condensation are in fact the fundamental learned
behaviors that constitute literacy, not reading and writing. One behavior
dissolves densely packed words using the solvent that is the extant oral
culture, enriching it, while the other distills the essence into a form that
can be transmitted across cultures.
Two literate people in very different cultures, if they are
skilled at exposition, might be able to expand the same maxim (the Golden
Rule for instance) into different parables. Conversely, the literary masters
of an era can condense stories and philosophies discovered in their own
time into culturally portable nuggets.
So the terseness of an enduring maxim is as much about cross-cultural
generality as it is about compactness.
The right kind of terseness allows you to accomplish a difficult
transmission challenge: transmission across cultures and mental models.
Reading and writing by contrast, merely accomplish transmission across
time and space. They are much simpler inventions than exposition and
condensation. Cultural distance is a far tougher dimension to navigate than
spatial and temporal dimensions. By inventing a method to transmit across
vast cultural distances, our earliest neolithic ancestors accidentally turned
language into a tool for abstract thinking (it must have existed before then
as a more rudimentary tool for communication, as in other species that possess more basic forms of language).
So how did we come to focus on reading and writing? Why is it reading, 'riting and 'rithmetic and not exposition, condensation and arithmetic?
Reading and Writing
Today the ability to read and write is ubiquitous in the developed
world, and what was once a safe conflation of literacy and transcription
ability has become more than meaningless. It has become actively
dangerous.
To see why, it is useful to consider the relative status of the spoken
word with respect to the written word in pre-modern times.
Before Gutenberg, reading and writing were considered not just
secondary skills, but lowly ones, much as typing in the days before
personal computing. It is revealing that the first designs for a personal
computer at Xerox included one that had no keyboard next to the monitor,
but was equipped instead with a dictaphone connection to a secretary who
did any typing necessary. It was assumed that executives would not want
to do their own typing, but would watch the action scroll by on a monitor.
Reading and writing were for students and scribes. Career scribes
were not scholars. Reading and writing skills by themselves represented a
vocation, not learning.
Where both the written and spoken word could be used, the latter was
in fact preferred. Scholars demonstrated linguistic virtuosity through the
spoken rather than the written word. When they gained enough
prominence, they acquired students and scribes who would do the lowly
work of translation between oral and written forms in exchange for the
privilege of learning from a master.
But we haven't explained why the spoken word was preferred. What has confused us is the red herring of preservation through memorization. If preservation through memorization were the only purpose of oral cultures, they should have all vanished long ago. As McLuhan famously argued, it wasn't until the Gutenberg revolution that the spoken word was finally
dethroned by the written word.
The traditional explanation for the mysterious persistence of oral
cultures has been that pre-Gutenberg written-word technologies were
either too expensive to be generally accessible, or simply not reliable
enough. The characteristic practices of oral cultures, by this theory,
evolved to aid accurate preservation through memorization.
This is a bit like saying that people continued to eat fresh foods after
refrigeration was invented because early refrigerators were not reliable
enough or inexpensive enough to allow everybody to eat frozen foods.
The memorization-for-preservation explanation falls apart when you
poke a little. You find that typical oral cultures contain practices that we
moderns loosely label memorization because we don't understand what they actually accomplish.
I am going to use Indian oral culture as an example because it is the
one I know best, and because it possesses some illuminating extreme
features. But I suspect you will find similar unexplained complexity in
every oral culture, particularly ones associated with major religions, such
as Latin.
Oral Culture is Not About Memorization
This was a radical realization for me: oral culture is not about
preservation-by-memorization. One strong piece of evidence can be found
in this Wikipedia description of memorization practices in ancient India.
Ignore the commentary and pay attention to the actual descriptions of the
recitation techniques:
Prodigious energy was expended by ancient Indian culture in ensuring that these texts were transmitted from generation to generation with inordinate fidelity. For example, memorization of the sacred Vedas included up to eleven forms of recitation of the same text. The texts were subsequently proof-read by comparing the different recited versions. Forms of recitation included the jaṭā-pāṭha (literally "mesh recitation") in which every two adjacent words in the text were first recited in their original order, then repeated in the reverse order, and finally repeated again in the original order. The recitation thus proceeded as:
word1 word2, word2 word1, word1 word2;
word2 word3, word3 word2, word2 word3;
In another form of recitation, dhvaja-pāṭha (literally "flag recitation") a sequence of N words were recited (and memorized) by pairing the first two and last two words and then proceeding as:
word1 word2, wordN-1 wordN; word2 word3, wordN-3 wordN-2; ...; wordN-1 wordN, word1 word2;
The most complex form of recitation, ghana-pāṭha (literally "dense recitation"), took the form:
word1 word2, word2 word1, word1 word2 word3, word3 word2 word1, word1 word2 word3;
word2 word3, word3 word2, word2 word3 word4, word4 word3 word2, word2 word3 word4;
For fun, I will offer you these recitation forms of my newly-minted
maxim.
Original: Civilization is the process of turning the incomprehensible
into the arbitrary.
Mesh recitation: civilization is, is civilization, civilization is, the
process, process the, the process, of turning, turning of, of turning
Flag recitation: civilization is, the arbitrary, is the, into the, the
process, incomprehensible into, process of, the incomprehensible
Dense recitation: civilization is, is civilization, civilization is the, the is civilization, civilization is the; is the, the is, is the process, process the is, is the process
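These recitation patterns are mechanical enough to express in code. Below is a minimal, purely illustrative sketch in Python (nothing here is from the original; the function names are my own) of the mesh (jaṭā) and flag (dhvaja) patterns, following the adjacent-pair description in the quoted Wikipedia passage rather than the abbreviated demonstrations above.

    # Illustrative sketch only: generates jata-patha (mesh) and dhvaja-patha
    # (flag) recitations of a word sequence, per the quoted description.

    def mesh_recitation(words):
        # For each adjacent pair: forward, backward, forward again
        groups = []
        for a, b in zip(words, words[1:]):
            groups.append(f"{a} {b}, {b} {a}, {a} {b}")
        return "; ".join(groups)

    def flag_recitation(words):
        # Pair the i-th pair from the front with the i-th pair from the back,
        # stopping roughly where the two ends meet in the middle
        n = len(words)
        groups = []
        for i in range((n - 1) // 2):
            front = f"{words[i]} {words[i + 1]}"
            back = f"{words[n - 2 - i]} {words[n - 1 - i]}"
            groups.append(f"{front}, {back}")
        return "; ".join(groups)

    maxim = ("civilization is the process of turning "
             "the incomprehensible into the arbitrary").split()
    print(mesh_recitation(maxim))
    print(flag_recitation(maxim))

Mangling a line this way, as the next section argues, is less a memory aid than a stress test of every word choice in it.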
If you are practicing eleven different forms of combinatorial recitation, there is clearly something going on beyond preservation-by-memorization. One piece of evidence is that though the Vedas were accurately preserved, the oral culture also sustained torrents of secondary expository literature that was not accurately preserved. The Mahabharata is an example. Not only was no canonical version preserved, there was no canonical version. The thing grew like a Wikipedia of mythological fan-fiction.
From my own experiences with memorization, the recitation routines
seem like extreme overkill. Straightforward repetition, aided by meter and
rhyme, is sufficient if preservation-by-memorization (as an alternative to
unreliable writing), is the only goal. I memorized two Shakespeare plays
that way (though admittedly I have now forgotten most of them).
So what is going on here?
Recitation as Creative Destruction
Once you try this out loud, you realize what is happening. This is
microcosmic creative destruction. Try to do this sort of recitation really
mindlessly. You will find it extraordinarily difficult. The recitation patterns
will force you to pay attention to meaning as well.
Far from being about mindless rote memorization, recitation is about
mindful attention to a text.
Youre taking a permutations-and-combinations blender to the words,
juxtaposing them in new ways, and actively performing combinatorial
processing. You are rigorously testing the strength of every single word
choice and ordering decision. You are isolating and foregrounding
different elements of the logical content, such as implication, subject-verb
and verb-object agreement, and so forth. There is a functional-aesthetic
element too. Terseness does not preclude poetry (and therefore, redundancy). In fact it requires it. Despite the compactness of a text, room
must be made for various useful symmetries.
If the original has any structural or semantic weaknesses at all, this
torture will reveal it. If the original lacks the robustness that poetry brings,
it will be added.
Not only does all this not help plain memorization, I claim that it
makes it harder. You destabilize the original line in your head and turn it
into a word soup. If the original is any way confused or poorly ordered,
you will soon end up in a state of doubt about which sequence of words is
the correct one.
For many students, practicing recitation must have been mindless
tedium, but for a few, it would have catalyzed active consideration and
reworking of the underlying ideas, in search of new wisdom. These
students must have evolved into new masters, the source of beneficial
mutations and crossovers in the cultural memeplexes they were charged
with preserving.
Being forced to juggle words like this must have helped cultivate a
clear awareness of the distinction between form and content. It must have
helped cultivate an appreciation of language as a medium for performance
rather than a medium for transmission or preservation. It must have forced
students to pay careful attention to precision of word choice in their own
compositions. It must have sustained a very mindful linguistic culture.
The analogy to music is again a useful one. The description of the
varied forms of recitation sounds less like tedious memorization and more
like music students practicing their scales. The only reason that you
remember the basic scale (do re mi fa so la te do in Western solfege
notation) is that the sequence has the simplest and most complete
progression among all the permutations and combinations of the notes.
But if you could only sing the one pattern, you wouldn't be a musician
(actually, there is more than an analogy here; music and language are
clearly deeply related, but I haven't thought that idea through).
Being only able to faithfully transcribe between oral and written
forms is rather like being only able to sing the default do-re-mi sequence.
The former can no more be a true measure of literacy than the latter can be
a measure of musical ability.
The only way the original can survive such mangling is if it is actually
a beautifully dense condensation that has a certain robust memetic
stability. At the risk of losing most of you, I think of a carefully composed
set of related aphorisms as eigenvectors spanning a space of meaning. It is
the space itself, and the competence to explore it, that define a literate
comprehension of the text. Not the ability to reproduce or translate
between written and oral forms.
We can make a fairly strong claim:
Oral cultures are not just, or even primarily, about quality assurance
in transmission. They are primarily about quality assurance in
composition, and training in the basic moves of exposition and
condensation.
When you think about it this way, there is no mystery. Oral culture
persisted long after the development of writing because it was not about
accurate preservation. It was about performance and cultural enactment
through exposition and condensation.
The Costs of Gutenberg
And then Gutenberg happened.
The results were not immediately apparent. The old culture of literacy
persisted for several centuries. The tipping point came in the 1890s, when
printing technology became sufficiently cheap to support mass media
(there is a world of difference between ubiquity of bibles and a culture of
daily newspapers).
So sometime in the twentieth century, we lost all the subtlety of oral
culture, turned our attention to the secondary vocational skills of reading
and writing, and turned literacy into a set of mechanical tests.
Today, to be literate simply means that you can read and write
mechanically, construct simple grammatical sentences, and use a minimal,
basic (and largely instrumental) vocabulary. We have redefined literacy as
a 0-1 condition rather than a skill that can be indefinitely developed.
Gutenberg certainly created a huge positive change. It made the raw
materials of literary culture widely accessible. It did not, however, make
the basic skills of literacy, exposition and condensation, more ubiquitous.
Instead, a secondary vocational craft from the world of oral cultures
(one among many) was turned into the foundation of all education, both
high-culture liberal education and the vocational education that anchors
popular culture.
The Fall of High Culture
I won't spend much time on high culture, since the story should be
familiar to everybody, even if this framing is unfamiliar.
The following things happened.
Instead of condensing new knowledge into wisdom, we began encrypting it into jargon.
Exposition as creative performance gave way to critical study as
meaning-extraction.
The art of condensation turned into the art of light, witty party
banter.
Conversation turned into correspondence and eventually into
citation.
Natural philosophy turned into science, and lost its literary
character.
Interpretation and re-enactment became restricted to narrowly
political ends.
Poetry was transformed from an intermediate-level literacy skill to
a medium for self-indulgence.
The result of these changes on high culture was drastic. Discovery
began to outpace interpretation and comprehension. We began to discover
more and more, but know less and less. Science seceded from the rest of
culture and retreated behind walls of jargon. The impoverished remains
outside those walls were re-imagined as a shrill and frozen notion of
humanism.
Mathematics and programming, two specialized derivatives of
language that I consider part of high culture, retained the characteristics of
oral cultures of old, with an emphasis on recombinant manipulation,
terseness, generality and portability.
Both are now being threatened (by increasingly capable forms of
computing). I will leave that story for another day.
The Fall of Popular Culture
But it is perhaps the transformation of popular culture that has been
most dramatic. If you have ever talked to an intelligent and articulate, but
illiterate (in the modernist reading-writing sense) member of a popular
folk culture that has been relatively well-shielded from modern mass
culture, you will understand just how dumb the latter is.
Pre-modern folk cultures are as capable as their high-culture cousins
of sustaining linguistic traditions based on exposition and condensation.
They are the linguistic minor leagues in relation to the major leagues of
high culture, not spectator-cultures.
A pre-modern village does not rely, for intellectual sustenance, on
stories brought from imperial capital cities by royal bards. At best, a few
imported elements from distant imperial cultures become political
integration points within larger grand narratives. I encountered a curious
example of this sort of thing in Bali: a minor character in the
Mahabharata, Sahadeva, apparently serves as the integration point
between the localized version of Hinduism and purely local elements like
the Barong and Rangda, which do not appear anywhere in the
Mahabharata to my knowledge.
By contrast, modern mass culture is a spectator culture, linguistically
speaking. You read stories but you do not necessarily attempt to rewrite
them. You watch movies, but you do not attempt to re-enact them as plays
that incorporate elements of local culture. The analogy to music is again
useful. Before the gramophone and radio, most families around the world
made their own music.
The effects of print, radio and television based mass media were to
basically destroy popular literary (but not necessarily written) cultures
everywhere. Was it an accident or an act of deliberate cultural violence?
I believe it was an accident that proved so helpful for the industrial
world that repairs were never made, like smallpox decimating the ranks of
Native Americans.
For the industrial world, exposition and condensation were useless
skills in the labor force. The world needed workers who could follow
instructions: texts with one instrumental meaning instead of many
appreciative meanings:
Turn on Switch A.
Watch for the green light to come on.
Then push the lever.
As the finely differentiated universe of local folk cultures was
gradually replaced by a handful of mass, popular cultures, ordinary
citizens lost their locally enacted linguistic cultures, and began to feed
passively on mass-produced words. In the process, they also lost the basic
skills of literacy: exposition and condensation, and partially regressed to
pre-Neolithic levels of linguistic sophistication, where language sustains
social interaction and communication, but not critical, abstract thought.
What does this world look like?
Can the Gollum Speak?
I previously proposed the Gollum as an archetype of an ordinary
person turned into a ghost by consumer culture. What I am talking about
here is the linguistic aspect of that transformation.
If you consider the decline of popular literary culture and its
replacement by mass culture a sort of consumerization of language, you
have to ask, can highly gollumized people use language in a literate way at
all?
The Gollum can read, write and repeat, but I've slowly concluded that
it cannot actually think with language. And not because it isn't smart, but
because it has been educated.
Everywhere around me I find examples of written and spoken
language that I find bizarrely Frankenstein-monster like. Clumsy
constructions based on borrowed parts, and rudely assembled (PR pitches
and resume cover letters are great examples of modern Frankenstein
writing).
The language of the true Gollum is a language of phrases borrowed
and repeated but never quite understood.
Words and phrases turn into mechanical incantations that evoke
predictable responses from similarly educated minds. Yes there is meaning
here, but it is not precise meaning in the sense of a true literary culture.
Instead it is a vague fog of sentiment and intention that shrouds every
spoken word. It is more expressive than the vocalizations of some of our
animal cousins, but not by much.
Curiously, I find the language of the illiterate (in the reading-writing sense)
to usually be much clearer. When I listen to some educated people talk, I get
the curious feeling that the words don't actually matter. That it is all a
behaviorist game of aversion and attraction and basic affect overlaid on
the workings of a mechanical process. That mechanical process is enacted
by instrumental meaning-machines manufactured in schools to generate,
and respond appropriately to, a narrow class of linguistic stimuli without
actually understanding anything.
When I am in a public space dominated by mass culture and its native
inhabitants, such as a mall, I feel like I am surrounded by philosophical
zombies. Yes, they talk and listen, but it is not clear to me that what they
are using is language.
And it isn't just the "I'm like, duh" and "she's like uh-oh" crowd that I am
talking about. I am including here the swarms of barely-literate (in the
thinking sense) liberal arts graduates who can read and write phrases like
always-already and dead-white-male (why not already-always or
deceased-European man? I suspect Derrida and Foucault could tell you,
but none of the millions who parrot them could).
This might sound like engineering elitism, but I find that the only
large classes of people who appear to actually think in clearly literate ways
today are mathematicians and programmers. But they typically only do so
in very narrow domains.
To learn to think with language, to become literate in the sense of
linguistically sophisticated, you must work hard to unlearn everything
built on the foundation of literacy-as-reading-and-writing.
Because modern education is not designed to produce literate people.
It is designed to produce programmable people. And this programmability
requires less real literacy with every passing year. Today, genuinely literate
reading and writing are specialized arts. Increasingly, even narrowly
instrumental read-write literacy is becoming unnecessary (computers can
do both very well).
These are not stupid people. You only have to listen to a child
delightedly reciting supercalifragilisticexpialidocious or indulging in other
childish forms of word-play to realize that raw skill with language is a
native capability in the human brain. It must be repressed by industrial
education since it seeks natural expression.
So these are not stupid people. These are merely ordinary people who
have been lobotomized via the consumerization of language, delivered via
modern education.
We dimly realize that we have lost something. But appreciation for
the sophistication of oral cultures mostly manifests itself as mindless
reverence for traditional wisdom. We look back at the works of ancients
and deep down, wonder if humans have gotten fundamentally stupider
over the centuries.
We haven't. We've just had some crucial meme-processing software
removed from our brains.
Towards a Literacy Renaissance
This is one of the few subjects about which I am not a pessimist. I
believe that something strange is happening. Genuine literacy is seeing a
precarious rebirth.
The best of today's tweets seem to rise above the level of mere bon
mots ("gamification is the high-fructose corn syrup of user engagement")
and achieve some of the cryptic depth of esoteric verse forms of earlier
ages.
The recombinant madness that is the fate of a new piece of Internet
content, as it travels, has some of the characteristics of the deliberate
forms of recombinant recitation practiced by oral culture.
The comments section of any half-decent blog is a meaning factory.
Sites like tvtropes.org are sustaining basic literacy skills.
The best of today's stand-up comics are preserving ancient wordplay
skills.
But something is still missing: the idea that literacy is a cultivable
skill. That dense, terse thoughts are not just serendipitous finds on the
discursive journeys of our brains, but the product of learnable exposition
and condensation skills.
I suppose paying attention to these things, and actually attempting to
work with archaic forms like maxims and aphorisms in 2012 is something
of a quixotic undertaking. When you can store a terabyte of information
(about 130,000 books, or about 50% larger than a typical local public
library) on a single hard disk, words can seem cheap.
But try reading some La Rochefoucauld, or even late holdouts like
Oliver Wendell Holmes and J. B. S. Haldane, and you begin to understand
what literacy is really about. The cost of words is not the cost of storing
or distributing them, but the cost of producing them. Words are cheap
today because we put little effort into their production, not because we can
store and transmit as much as we like.
It is as yet too early to declare a literacy renaissance, but one can
hope.
Part 2:
Towards an Appreciative View of Technology

Towards an Appreciative View of Technology
June 5, 2012
Recently I encountered the perfect punchline for my ongoing
exploration of technology: any sufficiently advanced technology is
indistinguishable from nature. The timing was perfect, since I've been
looking for an organizing idea to describe how I understand technology.
Looking back over the technology-related posts in my archives over
the last five years, this technology-is-nature theme pops out clearly, as
both a descriptive and normative theme. I don't mean that in the sense of
naive visions of bucolic bliss (though that is certainly an attractive
technology design aesthetic) but in the sense of technology as a
manifestation of the same deeper lawfulness that creates forests-and-bears
nature. Technology at its best allows for the fullest expression of that
lawfulness, without narrow human concerns getting in the way.
I will explain the title in a minute but first, here is my technology
sequence of 14 posts written over the last five years. The organizing
narrative for the sequence comes from this technology-is-nature idea that
informs my thinking, whether I am pondering landfills or rusty ships.
Contemplating Technology
1. An Infrastructure Pilgrimage
2. Meditation on Disequilibrium in Nature
3. Glimpses of a Cryptic God
4. The Epic Story of Container Shipping
5. The World of Garbage
What Technology Wants
1. The Disruption of Bronze
2. Bays Conjecture
3. Halls Law: The Nineteenth Century Prequel to Moores Law
4. Hacking the Non-Disposable Planet
5. Welcome to the Future Nauseous
6. Technology and the Baroque Unconscious
Engineering Consolations
1. The Bloody-Minded Pleasures of Engineering
2. Towards a Philosophy of Destruction
3. Creative Destruction: Portrait of an Idea
Technology is central to all my thinking, but my relationship with it is
complicated. I no longer think of myself as an engineer. On the other hand,
though I think and write a lot about the history, sociology, psychology and
aesthetics of technology, I do not have the humanities mindset. I am not a
humanist (or that very humanist creature, a transhumanist). I am pretty
solidly on the science-and-engineering side of C. P. Snow's famous two-cultures divide.
Engineering is generally understood in instrumental terms by both
practitioners and observers: as something you do with a set of skills. By
contrast, I have tended in recent years to understand it in appreciative
terms.
So you could say that my writing on technology over the years has
turned into something resembling art history of the critical variety (the
connection is made somewhat explicit in one of my posts in the previous
sequence).
Perhaps as a result, I have been accused in the past (with some
justification) of turning my technology writing and thinking into a sort of
sloppy anthropomorphic thermodynamic theology based on loose notions
of technological agency, entropy and decay.
While there is certainly a degree of wabi sabi in my technological
thinking, in my defense I would say that when I lurch into purple prose or
bad poetry, it has less to do with deeply held conceptual beliefs and more
to do with attempting to convey the sense of grandeur that I think is
appropriate for the proper appreciation of technology.
We reserve for overtly showy things like cathedrals the kind of awe
that should really be extended (multiplied several times) to apparently
mundane things like shipping containers. We cannot make sense of the
modern human condition until we begin to understand that
interchangeable parts for everyday machines are actually a far greater
achievement than more narrowly humanist expressions of who we are.
I will leave it at that. I think I am going to be writing more about
technology appreciation in the future.
An Infrastructure Pilgrimage
March 7, 2010
In Omaha, I was asked this question multiple times: "Err... why do
you want to go to North Platte?" Each time, my wife explained, with a hint
of embarrassment, that we were going to see Bailey Yard. "He saw this
thing on the Discovery Channel about the world's largest train yard..." A
kindly, somewhat pitying look inevitably followed: "Oh, are you into
model trains or something?" I've learned to accept reactions like this.
Women, and certain sorts of infidel men, just don't get the infrastructure
religion. "No," I explained patiently several times, "I just like to look at
such things." I was in Nebraska as a trailing spouse on my wife's business
trip, and as an infrastructure pilgrim. When boys grow into men, the
infrastructure instinct, which first manifests itself as childhood car-plane-train
play, turns into a fully-formed religion. A deeply animistic religion
that has its priests, mystics and flocks of spiritually mute, but faithful
believers. And for adherents of this faith, the five-hour drive from Omaha
to North Platte is a spiritual journey. Mine, rather appropriately, began
with a grand cathedral, a grain elevator.
As you leave the unlikely financial nerve center of Omaha behind,
and head west on I-80 (itself a monument), you hit the heartland very
suddenly. We stopped by the roadside just outside of Lincoln, about an
hour away from Omaha, to spend a few meditative minutes in the
company of this giant grain elevator. There is no more poignant symbol of
Food Inc and global agriculture. It is an empire of the spirit at its peak,
facing a necessary decline and fall. The beast is rightly reviled for the
cruelty it unleashes on factory-farmed animals. The problems with
genetically modified seeds are real. The horrendous modern corn-based
diet it has created cannot be condoned. Yet, you cannot help but
experience the awe of being in the presence of a true god of modernity. An
unthinking, cruel, beast of a god, but a god nevertheless.
After a quick pause at Lamar's donuts (an appropriate sort of highly-processed religious experience) we drove on through the increasingly
desolate prairie. Near Kearney, you find the next stop for pilgrims, the
Great Platte River Archway Monument.
This is a legitimately religious archway, and a thoroughly American
experience. There are penny-stamping machines, kitschy souvenirs (which
must be manufactured in China and container-shipped to Nebraska to
count as holy objects; no locally-manufactured profanities for me) and the
inescapable period-costumed guide who insists on speaking in character.
Once you are inside, it is a curiously unsettling experience. Through
multiple levels that cross back and forth across I-80, you experience the
history of the nineteenth century expansion of Europeans into the
American West, from the journeys of the Mormons to Utah, to those
trudging up the Oregon trail, to the 49ers headed to California in search of
gold. There is also stuff about the Native American perspective, but this is
fundamentally a story about Europeans.
You take an escalator into the archway, where you work your way
through the exhibits. There are exhibits about stagecoaches, dioramas of
greedy, gossiping gold-seekers by campfires, and paintings of miserable-looking Mormons braving a wintry river crossing. There are other
exhibits about the development of the railroad and the great race that
ended with the meeting of the Union and Pacific railroads in Utah. In the
last segment, you find the story of the automobile, trucking, and the early
development of the American highway system. From the window, you can
watch the most recent of these layers of pathways, Eisenhower's I-80,
thunder by underneath at 70-odd miles an hour.
It is a pensive tale of one great struggle after another, with each age of
transportation yielding, with a creative-destructive mix of grace and
reluctance, to the next. The monuments of the religion of infrastructure are
monuments to change.
As you head further west from Kearney along I-80, the Union Pacific
railroad tracks keep you company. I watched a long coal train rumble
slowly across the prairie, and nearer North Platte, a massive double-decker
container train making its way towards Bailey. If you know enough about
container shipping, which I've written about before, watching a container
train go by is like watching the entire globalized world put on a parade for
your benefit. You can count off names of major gods, such as Hanjin and
Maersk, as they scroll past, like beads on a rosary. From the great ports of
Seattle and Los Angeles, the massive flood of goods that enters America
from Asia pours onto these long snakes, and they all head towards that
Vatican of the modern world, Bailey Yard.
All of the Union Pacific rail traffic is controlled today by computer
from a command center in Omaha, but the heart of the hardware is in
North Platte. Bailey Yard is a classification (train dis-assembly and re-assembly) yard. Trains enter from the east or west and are slowly pushed
up one of two humps. At the crest of the hump, single cars detach and roll
down by gravity, with remote-controlled braking. Each is directed towards
one of around 114 bowl tracks (assembly segments), where new trains are
assembled. Once a year, in September, during the North Platte Rail Fest,
you can visit the yard itself. It's sort of a railroad Christmas. On other
days, you must be content with the view from the Golden Spike tower
overlooking Bailey Yard.
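For anyone who wants the sorting logic in more familiar terms, here is a toy sketch of what a classification yard does. The one-destination-per-bowl-track rule, the field names and the sample trains are all assumed purely for illustration; only the track count comes from the description above, and none of this models Bailey's actual routing.

from collections import defaultdict

def classify(inbound_trains, bowl_track_count=114):
    # Cars roll off the hump one at a time and are switched onto a bowl track
    # chosen by destination; each bowl track's contents become an outbound train.
    bowl_tracks = defaultdict(list)
    for train in inbound_trains:
        for car in train:
            bowl_tracks[car["dest"]].append(car["id"])
            if len(bowl_tracks) > bowl_track_count:
                raise RuntimeError("more destinations than bowl tracks")
    return dict(bowl_tracks)

inbound = [
    [{"id": "UP1001", "dest": "Seattle"}, {"id": "UP1002", "dest": "Chicago"}],
    [{"id": "UP2001", "dest": "Chicago"}, {"id": "UP2002", "dest": "Los Angeles"}],
]
for dest, cars in classify(inbound).items():
    print(dest, cars)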
It is hard to describe the grandeur of what you see. On the viewing
deck there are benches (pews, practically) that encourage you to sit
and watch a while, and the inevitable retiree volunteer, anxious to explain
things. During my time there, I spoke to a thin and rather sad-looking old
man, a 33-year UP veteran. His tired face, covered with age spots, lit up
briefly when I asked what must have been an unexpectedly technical
question about the lack of turntables (platforms to turn engines around,
since you cannot do a U-turn on a railway track). He explained patiently
that with modern locomotives, you dont need turntables, since they are
equally efficient in either direction.
He also explained the humps and the sorting process, all new stuff for
me. His was the non-meditative variety of infrastructure religiosity. Facts
and figures, seared into memory, were his prayers. I listened as I always
do on these occasions, with the reverence due to the recitation of
scriptures.
The pace of activity at Bailey is deceptively slow. There is an
appropriate gravitas to this grand mechanical opera that makes you
wonder how the place can possibly process 10,000 cars a day, sorting
about 3,000. And then you contemplate the scale of operations and
understand. From the panoramic viewpoint at the top of the Golden Spike,
it can be hard to appreciate this scale. At first I did not believe that there
were 114 bowl tracks. There are 60 that lead off the west hump alone, and
I had to do a rough count before I believed it. You have to remind yourself
that you are looking at real-sized engines and cars sprawling across the flat
Nebraska landscape. I'd be a bug underneath just one wheel of one of the
four locomotives in the picture below. I am small. Trains are big. Nebraska
is even bigger.
I hunted in the gift store for a schematic map of the yard, but there
wasn't one. The storekeeper initially thought I was looking for something
like a model train or calendar, but when I explained what I wanted, a look
of understanding and recognition appeared on his face. I had risen in his
estimation. I was no longer an average, uninformed and accidental visitor.
I was a fellow seeker of spiritual truths who knew what was important. "A
lot of people ask for that," he said, and explained that after 9/11, the
Department of Homeland Security stopped the sale of the posters. He told
me he expected the restrictions to be eased soon. Anyway, I took a picture
of a beautiful large-scale map of the yard that was hanging in the lobby.
Since Google Maps (search for North Platte and zoom in) seems to show
about as much detail as my picture, I feel safe sharing a low-resolution
version. If somebody scarily official objects, I'll take it down.
To the religious, such schematics are sacred texts. You contemplate
the physical reality to experience the awe, but you contemplate artifacts
like this when you want to meditate on the universality of it all. Staring at
the schematic, I was struck by its resemblance to schematics of integrated
circuits. And that after all, is what a railway classification yard is. A large-scale
circuit with bowl tracks for capacitors, humps for potentials and
brakes for resistors. It is the beauty of thoughts like this, that connect
microscopic chips of silicon to railroad yards too big even for a wide-angle
lens, that gets us monks and nuns joining those monasteries of
modernity, engineering schools. Laugh if you will, but I can get misty-eyed
when looking at something like this schematic.
The next morning, we headed back, rushing to outrace a storm. We
had enough time though, to stop at the Strategic Air and Space Museum.
One of the first things I did after landing in New York for the first time in
1997, was to visit the USS Intrepid. Since then, I've visited the Wright-Patterson AFB museum, the Pima Air and Space Museum in Tucson, AZ
(with its acres of aircraft parked in the desert heat), oddball little airplane
museums in obscure places, and of course, the Smithsonian. These days, I
live close to Reagan National Airport. Sometimes, I take a walk along
Mount Vernon trail, which at one point swings right past the end of the
Reagan runway. There, you can stand and contemplate airplanes roaring
overhead every few minutes.
But the Strategic Air and Space Museum is a different sort of
experience, an experience designed to remind you that gods of
infrastructure are angry, vengeful gods capable of destroying the planet.
This is not the friendly god who lends you wings to fly across the world
for work and life. These are the darker gods that can rain nuclear wrath
down on us. And no god is more wrathful than the Convair B-36, a
massive six-engined behemoth, with the largest wingspan of any combat
aircraft in history, and appropriately called the "peacemaker". Between
1949 and 1959, these beasts were the instruments of Cold War foreign
policy.
To look at pictures of the B-36 in the sky is something of a
sacrilegious act. You must never look at all of a B-36 at once. And within
the confines of this museum, you cannot. Not with the human eye, and not
with the camera. The picture above is one you can safely look at. It took
some maneuvering to get all three engines of one wing into the frame.
And here I am, a tiny zit of a human being, standing below the belly
of the beast, next to a hydrogen bomb (that stubby thing next to me).
No infrastructure pilgrimage can be complete without a reverential
pause before a phallic god of destructive power. Menhirs and obelisks will
not do for our age. Neither will skyscrapers, which are merely symbols of
humanity's child-like greedy grasping at earthly pleasures. Out in the
heartland, among the grain silos (cornucopias?) where I began my
pilgrimage, are scattered very different sorts of silos. Silos containing
ballistic missiles, designed to soar up and kiss space, home to our loftiest
aspirations, before diving back down to destroy us. Outside the museum,
there are three Cold War ballistic missiles on display. Here is one. I forgot
to look at the sign, but I think it is an early Atlas.
Meditation on Disequilibrium in Nature
October 15, 2007
The idea of stability is a central organizing concept in mathematics
and control theory. Lately I have been pondering a more basic idea:
equilibrium, which economists prefer to work with. Looking at some
fallen trees this weekend, a point I had appreciated in the abstract hit me in
a very tangible form: both stability and equilibrium are intellectual
fictions. Here is the sight which sparked this train of thought:
These are the fallen trees that line the southern shore of Lake Ontario,
under the towering Chimney Bluffs cliffs left behind in disequilibrium
by the last Ice Age. For about a half mile along the base of the cliffs, you
see these trees. Here is the longest shot I could take with my camera.
No, evil human loggers did not do this. The cliffs are naturally
unstable, and chunks fall off at regular intervals. Here are a couple of the
more dramatic views you see as you walk along the trail at Chimney
Bluffs State Park.
Signs line the trail, warning you to keep away from the edge. Unlike
the Grand Canyon or the Niagara Falls, whose state of disequilibrium
requires an intellectual effort to appreciate, the Chimney Bluffs are in
disequilibrium on a time scale humans can understand. A life form we can
understand, trees, can actually bet on an apparently stable equilibrium
and lose in this landscape, fall, and rot in the water, while we watch.
Even earthquakes, despite their power, don't disturb our equilibrium-centric mental models of the universe. We view them as anomalous rather
than characteristic phenomena. The fallen trees of the Chimney Bluffs
cannot be easily dismissed this way. They are signs of steady, creeping
change in the world around us, going on all the time; creative-destruction
in nature.
The glaciated landscape of Upstate New York, of which the Chimney
Bluffs are part, is well known. The deep, long Finger Lakes, ringed by
waterfalls, have anchored my romantic fascination with this region for
several years now. The prototypical symbol of the region is probably
Taughannock falls:
Unless you live in the region for a while, you wont get around to
visiting the Chimney Bluffs. But visit even for a weekend, and everybody
will urge you to go visit the falls.
We create tourist spots around sights which at once combine the
frozen drama of past violence in nature, and a picture of unchanging calm
in the present. Every summer and fall, the falls pour into Lake Cayuga and
tourists take pictures. Every winter, they slow to a trickle. Change is so
slow that we even let lazy thinking overpower us and make preservation
the central ethic of any concern for the environment. Even the entire
ideology and movement is called conservation.
We forget cataclysmic extinction events that periodically wipe out
much of life. We forget to sit back and visualize and absorb the
implications of the dry quantitative evidence of ice ages. Moving to the
astronomical realm, we rarely stop and ponder the thought that the earth is
cooling down, that its magnetic poles seem to flip every tens of thousands
of years, that its rotation has slowed from a once fast 22-hour day. We
forget that our Sun will eventually blow up into a Red Giant that will be
nearly as large as the orbit of Mars.
We forget that nature is the first and original system of evolving
creative destruction. Schumpeter's model of the economy came along
later.
Towards Disequilibrium Environmentalism
This troubles me. On the one hand, environmental concerns are
certainly very high on my list of ethical and practical concerns. Yet, when
nature itself is chock full of extinctions, unsteady heatings and coolings
and trembles and crumbles, why are we particularly morally concerned
about global warming and other unsustainable patterns of human activity?
A practical human concern is understandable (tough luck, Seychelles), but
to listen to Al Gore, you would think that it is somehow immoral to not
think entirely in terms of preservation, conservation, equilibrium and
stability. So nature decides to slowly destroy the Chimney Bluffs. We
decide to draw down oil reserves, slowly saturate the oceans with CO2
and melt the ice caps. Why is the first fine, but the others are somehow
morally reprehensible? If you worry that we are destroying a planet that
we share with other species, well, nature did those mass extinctions long
before we came along.
In this respect, the political left is actually rather like the right: it is
truly a conservative movement. Instead of insisting on the preservation of
an unchanging set of cultural values and societal forms, it insists on an
unchanging environment.
To be truly powerful, the environmentalist movement must be
reframed in ways that accommodates the natural patterns of
disequilibrium, change, and ultimate tragic, entropic death of the universe.
I don't know how to do this.
Why Disequilibrium instead of Instability?
Stability is a comforting idea. It is the idea that there is a subset of
equilibria that, when disturbed slightly, return to their original conditions.
But it is a false comfort. Every instance of stability lives within an
artificial bubble of time, space and mass-energy isolation. Expand the
boundaries enough, or let the external universe inject a sharp enough
disturbance, and your stability will vanish. Unstable equilibria are even
sillier, because even infinitesimal disturbances can knock them out.
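For reference, the textbook notions being gestured at here look roughly like this; the notation is my gloss from control theory, not the essay's own formalism.

\[
\dot{x} = f(x), \qquad f(x^{\ast}) = 0 \quad \text{(equilibrium)}
\]
\[
x^{\ast} \text{ is asymptotically stable if } \exists\, \delta > 0 :\;
\lVert x(0) - x^{\ast} \rVert < \delta \;\Longrightarrow\; x(t) \to x^{\ast}
\text{ as } t \to \infty .
\]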
Which means disequilibrium, not equilibrium, is the natural state.
Using the word disequilibrium suggests a steady, sustained absence of
stability: the universe is one long transient signal where every illusion
of stability will be destroyed, given enough time.
Glimpses of a Cryptic God
February 16, 2012
I rarely listen to music anymore. Strange anxieties and fears seem to
flood into my head when I try. When I seek comfort in sound these days, I
tend to seek out non-human ones. The sorts of soundscapes that result
from technological and natural forces gradually inter-penetrating each
other.
At the Mira Flores lock, the gateway into the Pacific Ocean at the
southern end of the Panama Canal, you can listen to one such soundscape:
the idling of your vessel's engine, mixed with the flapping and screeching
of seabirds. The draining of the lock causes fresh water to pour into salt
water, killing a new batch of freshwater fish every 30-45 minutes. The
seabirds circle, waiting for the buffet to open.
***
The seabirds have adapted to a world created by human forces better
than humans themselves. They reconcile the technological and the natural
without suffering agonies. They have smoothly reconstructed their
identities without worrying about labels like transhumanist or
paleohumanist. There is neither futurist eagerness nor primitivist yearning
to their adaptation. They do not strain impatiently to transform into dimly
glimpsed future selves, nor do they strive with quixotic energy to return to
an imagined original hunter-gatherer self.
If they do not strain to transform, they also do not strive for
constancy. No doctrine of seabirdism elevates current contingencies into
eternal values that imprison. The seabirds feast without worry on the
unexpected bounty of salinity-killed fish. They do not ponder whether it is
natural.
They are thankfully unburdened by the sorts of limiting self-perceptions that we humans enshrine into the doctrine of humanism. I
think of humanism as an overweening conception of being flash-frozen
into a prescription during a brief window of time in early-modern Europe.
A time when humans had just gotten comfortable transforming nature, but
had not yet been themselves transformed enough by the consequences to
understand what they were doing.
That naked label, humanism, unadorned by prefixes like paleo- or
trans-, reveals our continued failure to center our sense of self within
larger non-human realities. Our big social brains can invent elaborate
anthropomorphic gods and social realities within which we gladly
subsume ourselves, but struggle to manufacture a sense of belonging to
anything that includes dead fish, seabirds, engineered canal locks and
seawater. Belonging has become an exclusively human idea to humans.
We are still mean little inquisitors at the ongoing trial of Copernicus,
resisting decentering realities that cannot be recursively reduced to the
human. Man makes gods in his own image, blind to the non-human.
And so we distract ourselves with debates about the distinction
between natural and artificial while ignoring the far more basic one
between human and non-human.
Sometimes being a bird-brain helps. Last year, I decided I was going
to be a barbarian. I am going for bird-brain-barbarian this year.
***
So I rarely listen to music.
Music these days feels like a fog descending on my brain, obscuring
visibility and tugging me gently inward into a cocoon of human belonging
that promises warmth and security, but delivers an unsettling estrangement
from non-human realities. Realities that are knocking with increasing
urgency at the door of our species-identity.
Technology is more visual landscape than soundscape, but listening to
pleasing human rhythms makes it harder to see technological ones. So
even when there are no interesting soundscapes, I prefer silence. It is easy
to miss frozen visual music when a soothing voice is piping fog into your
brain through your ears. Perhaps all songs are lullabies.
Visible function lends lyricism to the legible but alien rhythms and
melodies of technology-shaped landscapes. You can make out some of the
words, like crane and unloading, but the song itself is generally
impenetrable.
It is perhaps when the lyrics are at their most impenetrable that you
can most pay attention to the song. To understand is to explain. To explain
is to explain away and turn your attention elsewhere. Obviousness of
function can sometimes draw a veil across form, by encouraging a too-quick settling into a comforting instrumental view of technology.
Oscillating slowly back and forth across sections of the Panama
Canal, you will see strange boats carrying dancing fountains. I missed
what the tour guide said, so I have no idea what this is.
Perhaps a fire-fighting boat of some sort, or a dredging vessel. But I
don't need to know. One of the minor benefits of an engineering education
is a confidence in your ability to fathom function if the need arises,
leaving you free to appreciate pure form without a sense of anxiety.
Looking at this water-dancer of a boat, I found myself wondering about
the place of this beast on a larger spectrum.
On one end you find the Old Faithful geyser at Yellowstone National
Park:
And at the other end, you find the orderly, authoritarian high-modernist fountains at the Bellagio in Las Vegas, which dance to human
music, for human entertainment.
Each is a glimpse of a different stratum of techno-natural geology.
The human layer is built on top of the cryptohuman layer. The
cryptohuman layer on top of the natural. Each layer offers up a water-dancer
emissary to explain itself to us.
As an engineer, you no longer suffer those sudden stabs of
uncomprehending anxiety that can be triggered in more humanistic brains
by glances under the hood. When I hear a non-engineer seeking an answer
to what does that thing do, half the time I hear, not curiosity, but fear. An
urge to comprehend intimidating realities through the reassuring lens of
human intention.
It takes an unnatural, inhuman instinct to ponder artificial form
divorced from its intended function. But increasingly, this instinct is a
necessary one if you seek to inhabit the twilight zone between human and
non-human.
The Panama Canal is as much freshwater-fish-killer and seabird-free-lunch-kitchen as it is a narrowly human shipping shortcut.
And it is also a manifestation of strange symmetries and cryptic
generative laws, whose nature we do not completely understand, but feel
an urge to unleash ever more completely. Technological landscapes have
yet to experience their Watson and Crick moment.
And so we stand aside and ponder the deeper mysteries of banks of
cranes, and wonder about the connection between Old Faithful, Water
Dancer boats and the Bellagio fountains.
***
The Panama Canal is a great place to get up close and personal with
container ships. I pursue ship-spotting opportunities with a mildly
obsessive tenacity.
One of my evil twins, Alain de Botton, appears initially sympathetic
to ship spotters in his writing, but admiration for their willingness to
engage technology soon gives way to a sort of mildly patronizing
humanism.
Admittedly, the ship spotters do not respond to the
objects of their enthusiasm with particular imagination.
They traffic in statistics. Their energies are focused on
logging dates and shipping speeds, recording turbine
numbers and shaft lengths. They behave like a man who
has fallen deeply in love and asks his companion if he
might act on his emotions by measuring the distance
between her elbow and her shoulder blade. But whatever
their inarticulacies, the ship-spotters are at least
appropriately alive to some of the most astonishing aspects
of our time.
For de Botton, to resort to numbers as a mode of appreciation is
inarticulacy. A visible symptom of a lack of poetic eye. It is a very
humanist stance. One that reminds me of that famous quote (I forget the
source) that claimed that it would take 500 Newton souls to make one
Milton soul.
Rather ironic that that comparison required a number.
And there is something deeply sad about the fact that de Botton feels
the urge to compare engagement with technology to the very inadequate
benchmark of human love. Would kissing a ship, or singing a sonnet to it,
be a more appropriate response than recording turbine numbers?
What is of immense importance to us as humans is not necessarily of
importance to the non-human-centric universe qua NHCU. The implicit
suggestion that writing a sonnet might perhaps be a better reaction than
recording turbine numbers says more about our self-absorption than about
turbines.
Taking refuge in numbers when faced with technological complexity
is in part an acknowledgment of the poverty of a poetically enacted
humanist life script. Numbers are how we grope for the trans-human.
I save my number-appreciation for private contemplation, and
sometimes wax lyrical on this blog, but there is never any doubt in my
mind. Numbers are the more fundamental mode of appreciation. And if
your mathematical abilities limit you to mere counting, so be it. That's
better than pretending a container ship is a girl to be romanced.
When I was a kid, I used to visit my uncle who worked for the
railways and lived in a railway town right by some trunk routes. I would
sit on the porch and count the number of wagons on trains that went by,
for hours on end. The delight of spotting the rare two-locomotive,
hundred-plus-car train is not for the innumerate.
Counting is contemplation. Trains and container ships are our
rosaries.
***
I recently finished Neal Stephenson's Cryptonomicon (recommended
by many of you).
He is no great master of narrative or character development, but it is
that very failing that elevates his writing to interesting. There is no
denying that he looks at technology the way it ought to be looked at.
Given a choice between saying something interesting about technology
and crafting a better narrative by human literary aesthetics, he consistently
chooses the former. And we're better off for it.
When he occasionally attempts to capture in words the very non-verbal engagement of the world that is characteristic of technologists,
he offers a glimpse of what an alternative to poetry looks like. An example
is an extended passage in Cryptonomicon where archetypal nerd Randy
Waterhouse ponders the dynamics of dust storms in the eastern desert side
of Oregon, and reaches conclusions about the open-ended strangeness of
the natural world. That sort of idle train of thought is a far more
appropriate reaction to technological reality than de Botton's more
articulate and poetic, but ultimately depth-limited engagement of the non-human.
Daniel Pritchett, a frequent email correspondent, IM buddy and my
host in Memphis on my road trip last year, pointed me to a passage in
Stephensons essay, In the Beginning was the Command Line, which reads
thus:
Contemporary culture is a two-tiered system, like the
Morlocks and the Eloi in H. G. Wells's The Time Machine,
except that it's been turned upside down. In The Time
Machine the Eloi were an effete upper class, supported by
lots of subterranean Morlocks who kept the technological
wheels turning. But in our world it's the other way round.
The Morlocks are in the minority, and they are running the
show, because they understand how everything works. The
much more numerous Eloi learn everything they know
from being steeped from birth in electronic media directed
and controlled by book-reading Morlocks. So many
ignorant people could be dangerous if they got pointed in
the wrong direction, and so we've evolved a popular
culture that is (a) almost unbelievably infectious and (b)
neuters every person who gets infected by it, by rendering
them unwilling to make judgments and incapable of taking
stands.
Morlocks, who have the energy and intelligence to
comprehend details, go out and master complex subjects
and produce Disney-like Sensorial Interfaces so that Eloi
can get the gist without having to strain their minds or
endure boredom.
A little too harsh perhaps, but on the whole a fair indictment of the
techno-illiterate. I wonder if Stephenson would consider de Botton one of
the Eloi. I suspect he would. The acquittal argument for de Botton, in a
Stephensonian court for technology-appreciation crimes, is that he is more
romanticist than politically compromised postmodernist. His crimes are
ultimately forgivable.
Stephenson's typology helps us at least distinguish between two of the
three fountains. The Water Dancer boat is a serendipitous Morlock
fountain. The Bellagio fountain is an Eloi fountain constructed by
Morlocks.
My reactions to the three fountains were different in interesting ways.
With Old Faithful, I found myself basically speechless and
thoughtless. A division by zero moment.
With the Water Dancer fountain, I found myself in a state of happy
contemplation.
With the Bellagio fountain, my mind immediately wandered to
speculations about the control algorithms and valve designs that would
be needed to build the thing.
***
To be among the Eloi is to lack a true sense of scale, and a sense of
when the clumsiest numerical groping with numbers is philosophically a
better response than the most sublime poetry. The Eloi fundamentally do
not get when to give up on words and turn to numbers.
It is a difference not of degree, but of kind. In Stephensons terms, de
Botton finding the ship-spotters response inarticulate is a case of one of
the adult Eloi making fun of a Morlock baby.
Certainly some of the ship-spotters may never venture beyond a
stamp-collector/model-builder/cataloger approach to ships (all very noble
pursuits). But some will eventually end up in places where the Eloi would
be entirely blind. Places where only numbers allow you to feel your way
forward, away from the limited sphere where the light of humanist poetry
shines.
Scale is perhaps the first aspect of reality where innumeracy severely
limits your ability to engage reality.
Scale is a curious thing. Out on the open water, a container ship can
seem normal-sized by some intuitive sense of normal.
But if you watch from the observation tower at Mira Flores, the sheer
size of one of these beasts starts becoming apparent. You get the
sense that something abnormal is going on.
And once it is really close, little cues start to alter your sense of the
various proportions involved, like this lifeboat and Manhattan fire-escape
style stairways.
Cruise ships give you a sense that a large modern ship is something
between a luxury hotel and a small city in terms of scale, but container
ships give you a sense of the non-human scales involved.
Partly this is because cruise ship designers go to great lengths to make
you forget that you are on a ship (which lends a whole new meaning to
Disney-like sensorial interfaces). But mainly it is because our minds
cling so eagerly to the human that even the slightest foothold is sufficient
for anthropocentric perspectives to dominate thought. I am no more
immune than anybody else. My eyes instinctively sought out the lifeboat
and stairways: human-scale things. Earlier in this essay, I felt obliged to
describe the technological landscape by analogy to human music-making.
You can see why I think de Botton is my evil twin. He embraces
tendencies that I also see in myself, but am intensely suspicious of. I don't
trust my own attraction to poetry when it comes to appreciating
technology.
***
Scale is not just about comparisons and proportions. It is also about
precision.
Take this little engine that runs along the side of the lock on tracks,
steadying the ship. The clearance for some ships is in the inches, and it
takes many of these little guys to keep a large ship moving slowly, safely
and steadily through the lock. Inches in a world of miles. Ounces in a
world of tons.
It is when one scale must interact with another in this manner that you
get a true sense of what scale means. This is another reason numbers
matter. You cannot appreciate precision without numbers (I remember the
first time I experienced scale-shock in the numerical-precision sense of the
term: when I learned that compressors in rocket engines must spin at over
40,000 RPM. I remember spending something like half an hour trying to
understand that number, 40,000, as a mechanical rotation rate).
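As a back-of-the-envelope aside (mine, not the essay's; the tip radius is a round number assumed purely for illustration), the conversion that makes the number vivid is:

\[
40{,}000\ \text{RPM} = \tfrac{40{,}000}{60}\ \text{rev/s} \approx 667\ \text{rev/s},
\qquad
v_{\text{tip}} = 2\pi r \times 667 \approx 420\ \text{m/s} \quad (r = 0.1\ \text{m}).
\]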
Scale and precision make for a non-verbal aesthetic. To have a true
sense of scale is to give up the sense of being human. You cannot identify
with the very large and very small if much of your identity is linked to an
object that can be contained within a box about six feet long.
***
The more I study technology, the more I tend to the view that it is a
single connected whole. Recurring motifs like container ships can turn
into obsessions precisely because they offer glimpses of a cryptic God. An
object for the devoutly atheist and anti-humanist soul to seek in perpetuity,
but never quite comprehend.
I go on infrastructure pilgrimages. I write barely readable pop-theology treatises with ponderous titles like The Baroque Unconscious in
Technology [November 11, 2011], and I do my little dabbling with math,
software and hardware on the side.
But I still haven't seen It. Just an elbow here, a shoulder blade there.
And I make my modest attempts to measure those distances.
***
This essay is my sneaky way of getting around my own no-PowerPoint rule for Refactor Camp 2012, where my talk will be on motifs,
mascots and muses. The event has sold out. Thanks everybody for your
great support, and looking forward to meeting everybody.

If you put yourself on the waitlist, I'll see what I can do. I am waiting
to hear from the venue staff about whether there is capacity beyond the
nominal maximum of 45.
Also, for those of you in Chicago, a heads-up. I'll be there for the
ALM Chicago conference next week, Feb 22-23, where I'll be doing a talk
titled Breathing Data, Competing on Code. The Neal Stephenson quote is
involved.
Make it if you can. Or email me, and perhaps we can do a little
meetup if there's a couple of readers there.
The Epic Story of Container Shipping
July 7, 2009
If you read only one book about globalization, make it The Box: How
the Shipping Container Made the World Smaller and the World Economy
Bigger, by Marc Levinson (2006). If your expectations in this space have
been set to low by the mostly obvious, lightweight and mildly
entertaining stuff from the likes of Tom Friedman, be prepared to be
blown away. Levinson is a heavyweight (former finance and economics
editor at the Economist), and the book has won a bagful of prizes. And
with good reason: the story of an unsung star of globalization, the shipping
container, is an extraordinarily gripping one, and it is practically a crime
that it wasn't properly told till 2006.
There are no strained metaphors (like Friedman's Flat) or attempts
to dazzle with overworked, right-brained high concepts (Gladwell's books
come to mind). This is an important story of the modern world,
painstakingly researched, and masterfully narrated with the sort of
balanced and detached passion one would expect from an Economist
writer. It isn't a narrow tale though. Even though the Internet revolution,
spaceflight, GPS and biotechnology don't feature in this book, the story
teases out the DNA of globalization in a way grand sweeping syntheses
never could. Think of the container story as the radioactive tracer in the
body politic of globalization.
The Big Story
(Note: I've tried to make this more than a book review/summary, so a
BIG thank-you is due to @otoburb, aka Davison Avery, a dazzlingly well-informed
regular reader who provided me with a lot of the additional
material for this piece.)
What is amazing about The Box is that despite being told from a
finance/economics perspective, the story has an edge-of-the-seat quality of
excitement to it. This book could (in fact, should) become a movie; a
cinematic uber-human counterpoint to Brando's On the Waterfront. The
tale would definitely be one of epic proportions, larger than any one
character; comparable to How the West Was Won, or Lord of the Rings.
The Box Movie could serve as the origin-myth for the world that Syriana
captured with impressionistic strokes.
The movie would probably begin with a montage of views of
containerships sounding their whistles in homage around the world on the
morning of May 30, 2001. That was the morning of the funeral of the
colorful character at the center of this story, Malcolm McLean. McLean
was a hard-driving self-made American trucking magnate who charged
into the world of shipping in the 1950s, knowing nothing about the
industry, and proceeded, over the course of four decades, to turn that
world upside down. He did that by relentlessly envisioning and driving
through an agenda that made ships, railroads and trucks subservient to the
intermodal container, and in the process, made globalization possible. In
doing so, he not only destroyed an old economic order while creating a
new one, he also destroyed a backward-looking schoolboy romanticism
anchored in ships, trucks and steam engines. In its place, he created a new,
adult romanticism, based on an aesthetic of networks, boxes, speed and
scale. Reading this story was a revelation: McLean clearly belongs in the
top five list of the true titans of the second half of the twentieth century.
Easily ahead of the likes of Bill Gates or even Jack Welch.
Levinson is too sophisticated a writer to construct simple-minded
origin myths. He is careful not to paint McLean as an original visionary or
Biblical patriarch. From an engineering and business point of view, the
container was a somewhat obvious idea, and many attempts had been
made before McLean to realize some version of the concept. While he did
contribute some technological ideas to the mix (marked more by
simplicity and daring than technical ingenuity), McLean's is the central plot line because of his personality. He brought to a tradition-bound, self-romanticizing industry a mix of high-risk, opportunistic drive and a
relentless focus on abstractions like cost and utilization. He seems to have
simultaneously had a thoroughly bean-counterish side to his personality,
and a supremely right-brained sense of design and architecture. Starting
with the idea of a single coastal route, McLean navigated and took full
advantage of the world of regulated transport, leveraged his company to
the hilt, swung multi-million dollar deals risking only tens of thousands of
his own money, manipulated New-York-New-Jersey politics like a Judo
master and made intermodal shipping a reality. He dealt with the nitty-gritty of crane design, turned the Vietnam war logistical nightmare into a
catalyst for organizing the Japan-Pacific coast trade, and finally, sold the
company he built, Sea-Land, just in time to escape the first of many slow
cyclic shocks to hit container shipping. His encore though, wasn't as
successful (an attempt to make an easterly round-the-world route feasible,
to get around the problem of empty westbound container capacity created
by trade imbalances). The entire story is one of ready-fire-aim audacity;
Kipling would have loved McLean for his ability to repeatedly make a
heap of all his winnings and risk it on one turn of pitch-and-toss. He
walked away from his first trucking empire to build a shipping empire.
And then repeated the move several times.
McLean's story, spanning a half-century, doesn't overwhelm the plot though; it merely functions as a spinal cord. A story this complex necessarily has many important subplots, which I'll cover briefly in a minute, but the overall story (which McLean's personal story manifests, in
a Forrest Gumpish way) also has an overarching shape. On one end, you
have four fragmented and heavily regulated industries in post World-War
II mode (railroads, trucking, shipping and port operations). It is a world of
breakbulk shipping (mixed discrete cargo), when swaggering, Brando-like
longshoremen unloaded trucks packed with an assortment of items,
ranging from baskets of fruit and bales of cotton to machine parts and
sacks of coffee. These they then transferred to dockside warehouses and
again into the holds of ships whose basic geometric design had survived
the transitions from sail to steam and steam to diesel. It was a system that was costly, inefficient, almost designed for theft, and mind-numbingly slow, keeping transportation systems stationary and losing money for far too much of their useful lives.
On the other end of the big story (with a climactic moment in the
Vietnam war), is the world we now live in: where romantic old-world
waterfronts have disappeared and goods move, practically untouched by
humans, from anywhere in the world to anywhere else, with an
orchestrated elegance that rivals that of the Internet's packet switching
systems. Along the way the container did to distribution what the assembly
line had done earlier to manufacturing: it made mass distribution possible.
The fortunes of port cities old and new swung wildly, railroads clawed
back into prominence, regulation fell apart, and supply chains got globally
integrated as manufacturing got distributed. And yes, last but not the least,
the vast rivers of material pouring through the world's container-based plumbing created the quintessential security threat of our age: terror sneaking through security nets struggling to monitor more than a percent or two of the world's container traffic.
Now if you tell me that isn't an exciting story, I have to conclude you have no imagination. Let's sample some of the subplots.
The Top Five Subplots
There are at least a dozen intricate subplots here, and I picked out the
top five.
One: The Financial/Business Subplot
At heart, containerization is a financial story, and nothing illustrates
this better than some stark numbers. At the beginning of the story, total
port costs ate up a whopping 48% (or $1163 of $2386) of an illustrative
shipment of one truckload of medicine from Chicago to Nancy, France, in
1960. In more comprehensible terms, an expert quoted in the book
explains: "a four thousand mile shipment might consume 50 percent of its costs in covering just the two ten-mile movements through two ports." For
many goods then, shipping accounted for nearly 25% of total cost for a
product sold beyond its local market. Fast forward to today: the book quotes economists Edward Glaeser and Janet Kohlhase: "It is better to assume that moving goods is essentially costless than to assume that moving goods is an important component of the production process." At this moment in time, this is almost literally true, due to the recession.
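Since so much of the argument rides on these percentages, here is a quick sanity check of the 1960 numbers in Python (a minimal sketch; the only inputs are the dollar figures quoted above from the book):

# Sanity check of the 1960 port-cost figures quoted above (from the book).
port_costs = 1163        # port costs, in 1960 dollars, for the Chicago-to-Nancy truckload
total_shipping = 2386    # total shipping cost for the same truckload

print(f"Port share of shipping cost: {port_costs / total_shipping:.1%}")  # ~48.7%, i.e. the ~48% quoted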
These sorts of odd dynamics are due to the fact that world shipping
infrastructure changes very slowly but inexorably (and cyclically) towards
higher, more aggregated capacity, and lower costs. This is due to the
highly capital-intensive nature of the business, and the extreme economies
of scale (leading to successively larger ships in every generation). Ships,
though they are moving vehicles, are better thought of as somewhere
between pieces of civic infrastructure (due to the large legacy impact of
government regulation and subsidies) and fabs in the semiconductor
industry (which, like shipping, undergoes a serious extinction event and
consolidation with every trough in the business cycle). Currently the top 10 companies pretty much account for 100% of the capacity, as a visualization from gcaptain.com shows, which tells us that today there are over 6,048 container ships afloat, with a total capacity of around 13 million TEU (twenty-foot equivalent units).

The mortgaging and financial arrangements dictate that ships


absolutely must be kept moving at all costs, so long as the revenue can at
least make up port costs and service debt. As the most financially

constrained part of the system, ships dominate the equation over trains and
trucks. One tidbit about the gradual consolidation: as of the books
writing, McLeans original company, Sea-Land, is now part of Maersk.
How this came to be is the most important (though not the most fun)
subplot. Things didn't proceed smoothly, as you might expect. All sorts of forces, from regulation, to misguided attempts to mix breakbulk and containers, to irrationalities and tariffs deliberately engineered in to keep longshoremen employed, held back the emergence of the true efficiencies of containerization. But finally, by the mid-seventies, today's business dynamics had been created.
Two: The Technology Subplot
If the dollar figures and percentages tell the financial story, the heart
of the technology action is in the operations research. While McLean and
Sea-Land were improvising on the East Coast, a West Coast pioneer,
Matson, involved primarily in the 60s Hawaii-California trade, drove this
storyline forward. The cautious company hired university researchers to
throw operations research at the problem, to figure out optimal container
sizes and other system parameters, based on a careful analysis of goods
mixes on their routes. Today, container shipping, technically speaking, is
primarily this sort of operations research domain, where systems are so
optimized that an added second of delay in handling a container can
translate to tens of thousands of dollars lost per ship per year.
If you are wondering how port operations involving longshore labor
could have been that expensive before containerization, the book provides
an illuminating sample manifest from a 1954 voyage of a C-2 type cargo
ship, the S. S. Warrior. The contents: 74,903 cases, 71,726 cartons,
24,036 bags, 10,671 boxes, 2,880 bundles, 2,877 packages, 2,634 pieces,
1,538 drums, 888 cans, 815 barrels, 53 wheeled vehicles, 21 crates, 10
transporters, 5 reels and 1,525 undetermined. That's a total of 194,582
pieces, each of which had to be manually handled! The total was just
5,015 long tons of cargo (about 5,095 metric tons). By contrast, the
gigantic MSC Daniela, which made its maiden voyage in 2009, carries
13,800 containers, with a deadweight tonnage of 165,000 tons. That's a 30x improvement in tonnage and a 15x reduction in number of pieces for a single port call. Or in other words, a change from about 0.026 tons (26 kg) per handling to about 12 tons per handling, or a roughly 460x improvement in handling efficiency (somebody check my arithmetic but I think I did this right). And of course, every movement in the MSC Daniela's world is precisely choreographed and monitored by computer. Back in 1954,
Brando time, experienced longshoremen decided how to pack a hold, and
if they got it wrong, loading and unloading would take vastly longer. And
of course there was no end-to-end coordination, let alone global
coordination.
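Since I invited readers to check the arithmetic, here is the calculation spelled out in Python (a minimal sketch; all the inputs are the figures quoted above for the S.S. Warrior and the MSC Daniela):

# Warrior (1954) vs. MSC Daniela (2009) handling arithmetic, using the figures quoted above.
warrior_pieces = 194_582       # individually handled pieces on the 1954 S.S. Warrior voyage
warrior_tons = 5_095           # metric tons of cargo (5,015 long tons)
daniela_boxes = 13_800         # containers carried by the MSC Daniela
daniela_tons = 165_000         # deadweight tonnage

tons_per_handling_1954 = warrior_tons / warrior_pieces    # ~0.026 tons, i.e. ~26 kg per handling
tons_per_handling_2009 = daniela_tons / daniela_boxes     # ~12 tons per handling
print(daniela_tons / warrior_tons)                         # ~32x more tonnage per port call
print(warrior_pieces / daniela_boxes)                      # ~14x fewer pieces to handle
print(tons_per_handling_2009 / tons_per_handling_1954)     # ~460x improvement in handling efficiency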
That's not to say the mechanical engineering part of the story is
uninteresting. The plain big box itself is simple: thin corrugated sheet
aluminum with load-bearing corner posts capable of supporting a stack
about 6 containers high (not sure of this figure), with locking mechanisms
to link the boxes. But this arrangement teems with subtleties, from
questions of swing control of ship-straddling cranes, to path-planning for
automated port transporters, to the problem of ensuring the stability of a 6-high stack of containers in high seas, with the ship pitching and rolling violently up to 30 degrees away from the vertical. Here is a picture of the twist-lock mechanism that holds containers together and to the ship/train/truck-bed and endures enormous stresses, to make this magic possible:

I am probably a little biased in my interest here, since I am fascinated by the blend of OR, planning and mechanical engineering (particularly stability and control) problems represented by container handling operations. I actually wrote a little simulator for my students to use as the basis for their term project when I taught a graduate course on complex engineering systems at Cornell in 2006 (it is basically a Matlab visualization and domain model with swinging cranes and stacking logic; if you are interested, email me and I'll send you the code). But if you are
interested in this aspect, try to get hold of the Rotterdam and Singapore
port episodes of the National Geographic Channel Megastructures show.
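For readers who want a concrete feel for what a "domain model with stacking logic" looks like, here is a minimal toy sketch in Python. To be clear, this is not the Matlab simulator mentioned above; the yard layout, the naive stacking policy and the six-high limit are all illustrative assumptions.

# Toy container-yard model: a few LIFO stacks with a height limit. Real terminals
# spend serious operations research effort minimizing "rehandles" (digging out a
# buried box), which this naive policy makes painfully visible.

class ContainerYard:
    def __init__(self, num_stacks=4, max_height=6):
        self.stacks = [[] for _ in range(num_stacks)]
        self.max_height = max_height

    def store(self, box_id):
        # Naive policy: drop the box on the first stack that has room.
        for stack in self.stacks:
            if len(stack) < self.max_height:
                stack.append(box_id)
                return
        raise RuntimeError("yard full")

    def retrieve(self, box_id):
        # Digging out a buried box means rehandling everything stacked above it.
        for stack in self.stacks:
            if box_id in stack:
                rehandles = len(stack) - stack.index(box_id) - 1
                stack.remove(box_id)
                return rehandles
        raise KeyError(box_id)

yard = ContainerYard()
for box in ["A", "B", "C", "D", "E"]:
    yard.store(box)
print(yard.retrieve("A"))  # 4 rehandles: B through E were piled on top of A

A smarter storage policy (spreading boxes across stacks, or stacking in expected retrieval order) drives the rehandle count toward zero, which is the flavor of optimization problem this whole subplot is about.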
There is a third thread to this subplot, which is probably the dullest part
of the book: the story of how American and International standards bodies
under heavy pressure from various business and political interests
struggled and eventually reached a set of compromises that allowed the
container to reach its full potential as an interoperability mechanism. The
story was probably a lot more interesting than Levinson was able to make
it sound, but that's probably because it would take an engineering eye, rather than an economist's eye, to bring out the richness underneath the
apparently dull deliberations of standards bodies. There are also less
important, but entertaining threads that have to do with the technical
challenges of getting containers on and off trains and trucks, the sideshow
battle between trucking and railroads, the design of cells in the ships
themselves, the relationship between a ship's speed/capacity tradeoffs and
oil prices, and so forth.
Three: The Labor and Politics Subplot
This is the subplot that most of us would instinctively associate with
the story of shipping, thanks to Marlon Brando. The big picture has a story
with two big swings. First, in the early part of the century, dock labor was
a truly Darwinian world of competition, since there were spikes of demand
for longshore labor followed by long periods of no work. Since it was a
low-skill job, requiring little formal education and a lot of muscle, there
was a huge oversupply of willing labor. Stevedoring companies (general contractors for port operations) picked crews for loading and unloading
operations through a highly corrupt system of mustering, favors, bribes,
kickbacks and loansharking. The longshoremen, for their part, formed
close brotherhoods, usually along ethnic lines (Irish, Italian, Black in the
US) that systematically kept out outsiders, and maintained a tightly territorial system of controls over individual piers. This capitalist exploitation system then gave way to organized labor, but a very different
sort of labor movement than in other industries. Where other workers
fought for steady work and regular hours and pay, longshoremen fought to
keep their free-agent/social-network driven labor model alive, and resist
systematization. This local, highly clannish and tribal labor movement had
a very different kind of DNA from that of the inland labor movements, and
as containerization proceeded, the two sides fought each other as much as
they fought management, politicians and automation. Though longshore
labor is at the center of this subplot, it is important not to forget the labor
movements in the railroad and trucking worlds. Those stories played out
equally messily.
East and West coast labor reacted and responded differently, as did
other parts of the world, but ultimately, the forces were much too large for
labor to handle. Still, the labor movement won possibly its most
significant victory in this industry, and came to be viewed as a model by
labor movements in other industries.
The labor story is essentially a human one, and it has to be read in
detail to be appreciated, for all its drama. The story has its bizarre
moments (at one point, West Coast labor had to actually fight management
to advocate faster mechanization and containerization, for reasons too
complex to go into in this post), and is overall the part that will interest the
most people. It is important though, not to lose sight of the grand epic
story, within which labor was just one thread.
The other big part of this subplot, inextricably intertwined with the
labor thread, is the politics thread. And here I mean primarily the politics
of regulation and deregulation, not local/urban. To those of us who have
no rich memory of regulated economies, the labyrinthine complexities of
regulation-era industrial organization are simply incomprehensible. The
star of this thread was the all-powerful Interstate Commerce Commission
of the US (ICC), and its sidekicks, the government-legitimized price-fixing cartels of shipping lines on major routes. The ICC controlled the world of transport at a bizarre level of detail, ranging from commodity-level pricing, to dictating route-level access, to carefully managing
competition between rail, road and sea, to keep each sector viable and
stable. And of course, there was a massive money-chest of subsidies,

loans and direct government infrastructure investment in ports to be fought


over. The half-century long story can in fact be read as the McLean bull in
the china shop of brittle and insane ICC regulations, simultaneously
smashing the system to pieces, and taking advantage of it.
Four: The Urban Geography and History Subplot
This is the subplot that interested me the most. Containerization
represented a technological force that old-style manual-labor-intensive
ports and their cities simply were not capable of handling. The case of
New York vs. Newark/Elizabeth is instructive. New York, the greatest port
of the previous era of shipping, was an economy that practically lived off
shipping, with hundreds of thousands employed directly or indirectly by
the sector. Other industries ranging from garments to meatpacking
inhabited New York primarily because the inefficiencies of shipping made
it crucial to gain any possible efficiency through close location.
Containerization changed all that. While New York local politics
around ports was struggling with irrelevant issues, it was about to be
blindsided by containers. The bistate Port Authority, finding itself cut out
of New York power games, saw an opportunity when McLean shipping
was looking to build the first northeastern container handling wharf. This
required clean sheet design (parallel parking wharfs instead of piers
perpendicular to shore), and plenty of room for stacking and cranes. While
nominally supposed to work towards the interests of both states, the Port
Authority essentially bet on Newark, and later, the first modern container
port at Elizabeth. The result was drastic: New York cargo traffic collapsed
over just a decade, while Newark went from nothing to gigantic. Today,
you can see signs of this: if you ever fly into Newark, look out the window
at the enormous maze of rail, truck and sea traffic. The story repeated
itself around the US and the world. Famous old ports like London,
Liverpool and San Francisco declined. In their place arose fewer and far
larger ports in oddball places: Felixstowe in the UK, Rotterdam, Seattle,
Charleston, Singapore, and so forth.
This geographic churn had a pattern. Not only did new displace old,
but there were far fewer new ports, and they were far larger and with a
different texture. Since container ports are efficient, industry didn't need to locate near them, and they became vast box parking lots in otherwise empty areas. The left-behind cities not only faced a loss of their port-based economies, but also saw their industrial base flee to the hinterland. Cities like New York and San Francisco had to rethink their entire raison d'être, figure out what to do with abandoned shorelines, and reinvent
themselves as centers of culture and information work.
There is a historical texture here: the rise of Japan, Vietnam, the Suez
Crisis, oil shocks, and the Panama Canal all played a role. Just one
example: McLean, through his Vietnam contract, found himself with fully-paid-up, return-trip empty containers making their way back across the
Pacific. Anything he could fill his boxes with was pure profit, and Japan
provided the contents. With that, the stage was set for the Western US to
rapidly outpace the East Coast in shipping. Entire country-sized
economies had their histories shaped by big bets on container shipping
(Singapore being the most obvious example). At the time the book was
written, 3 of the top 5 ports (Hong Kong, Singapore, Shanghai, Shenzhen
and Busan, Korea) were in China. Los Angeles had displaced
Newark/New York as the top port in the US. London and Liverpool, the
heart of the great maritime empire of the Colonial British, did not make
the top 20 list.
Five: The Broad Impact Subplot
Let's wrap up by looking at how the narrow world of container
shipping ended up disrupting the rest of the world. The big insight here is
not just that shipping costs dropped precipitously, but that shipping
became vastly more reliable and simple as a consequence. The 25%
transportation fraction of global goods in 1960 is almost certainly an
understatement because most producers simply could not ship long
distances at all: stuff got broken, stolen and lost, and it took nightmarish
levels of effort to even make that happen. Instead of end-to-end shipping
with central consolidation, you had shipping departments orchestrating ad
hoc journeys, dealing with dozens of carriers, forwarding agents, transport
lines and border controls.
Today, shipping has gotten to a level of point-to-point packet-switched efficiency, where the shipper needs to do a hundredth of the work and can expect vastly higher reliability, on-time performance, far lower insurance costs, and lower inventories. That means a qualitatively new level of thinking, one driven by the axiom that realistically, the entire
world is your market, no matter what you make. The dependability of the
container-plumbing makes you rethink every business.
In short, container shipping, through its efficiency, was a big cause of
the disaggregation of vertically integrated industry structures and the
globalization of supply chains along Toyota-like just-in-time models. Just
as the Web (1.0 and 2.0) sparked a whole new world of business models,
container shipping did as well.
The deepest insight about this is captured in one startling point made
in the book. Before container shipping, most cargo transport involved
either raw materials or completely finished products. After container
shipping, the center of gravity shifted to intermediate (supply chain)
goods: parts and subassemblies. Multinationals learned the art of sourcing
production in real time to take advantage of supply chain and currency
conditions, and moving components for assembly and delivery at the right
levels of disaggregation. Thanks to container shipping, manufacturers of
things as messy and complicated as refrigerators, computers and airplanes
are able to manage their material flows with almost the same level of ease
that the power sector manages power flows on the electric grid through
near real-time commodity trading and load-balancing.
My clever-phrase-coinage of the day. The container did not only make
just-in-time possible. It made just-in-place possible.
Conclusion: Towards Box: The Movie
I wasn't kidding: I think this story deserves a big, epic-scale movie.
Not some schmaltzy piece-of-crap story about a single longshoreman
facing down adversity and succeeding or failing in the container world,
but one that tells the tale in all its austere, beyond-human grandeur; one
that acknowledges and celebrates the drama of forces far larger than any
individual human.
An end-note: this is my first post in a deliberate attempt to steer my
business/management posts and book reviews 90 degrees: from a focus on general management topics like leadership, and functional topics like HR and marketing, towards vertical topics. I have felt for some time that business writing is facing diminishing returns from horizontal and functional foci, and while I'll return to those views when I find good
material, my pipeline is mainly full of this kind of stuff now. Hope you
guys like the change in direction: steering a major theme of a very wordy,
high-inertia blog like this one is as hard as steering a fully-laden
container ship. It is going to take some time to overcome the directional
inertia. Some verticals on my radar, besides shipping, include the
garbage/trash industry, healthcare, and infrastructure. If you are
interested in particular verticals, holler 'em out, along with suggested
books.

The World of Garbage


November 6, 2010
For the last two years, I've had three books on garbage near the top of my reading pile, and I've gradually worked my way through two of them
and am nearly done with the third. The books are Rubbish: The
Archeology of Garbage by William Rathje and Cullen Murphy (1992),
Garbage Land: On the Secret Trail of Trash by Elizabeth Royte (2005),
and Gone Tomorrow: The Hidden Life of Garbage by Heather Rogers
(2005). Last week, I also watched the CNBC documentary, Trash Inc.:
The Secret Life of Garbage. Notice something about the four subtitles?
Each hints at the hidden nature of the subject. It is a buried, hidden secret
physically and philosophically. And there are many reasons why
uncovering the secret is an interesting and valuable activity. The three
books are motivated by three largely separate reasons: Rathje and Murphy bring an academic, anthropological eye to the subject. Royte's book is a mix of amateur curiosity and concerned citizenship, while Rogers' is
straight-up environmental activism. But reading the 3 books, I realized
that none of those reasons interested me particularly. I was fascinated by a
fourth reason: garbage (along with sewage, which I won't cover here) is
possibly the only complete, empirical big-picture view of humanity you
can find.
The Boundary Conditions of Civilization
Sometimes an engineering education can lead to very curious ideas
about what is important. Garbage is important and interesting in an
engineering sense because it illuminates one of the boundary conditions of
any systemic view of the world. If you cut through the crap (no pun
intended) of all our lofty views of ourselves, humanity is essentially a
giant system that feeds on low-entropy resources on one end (mines,
forests, oilfields) and defecates high-entropy waste at the other. Among
other things, this transformation allows us to create low-entropy islands of
order around ourselves (cities, buildings and everything else physical that
we build). If this flow from resources to garbage were to shut down,
nature would rapidly reclaim every inch of civilization, and you can read about this fascinating thought experiment in The World Without Us by Alan Weisman, which I've mentioned before.
Here's the thing about this view: the input end is simply too complex
to comprehend in any summary sense. We suck resources out of the planet
in extremely complicated and diversified ways. The processing part is also
far too complex to understand (it is basically civilization), but thought
experiments like Weisman's at least help us get a non-empirical sense of
the scale and complexity of our presence on this planet.
But the output end? Easy. Just drill into the nearest landfill. Or follow
the course of a single man-made artifact. In Trash Inc., there is a revealing
example: plastic beverage bottles.
Message in a Bottle
The story of plastic water/soda bottles from a trash perspective is
simple. According to Trash Inc., in the US, about 51 billion bottles are
used every year (this number seems incredible. It amounts to about 1
bottle per person every 2 days. But it seems to be correct).
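A quick check of that one-bottle-every-two-days claim in Python (a minimal sketch; the 51 billion figure is the one from Trash Inc., and the roughly 310 million US population for that period is my assumption):

# Sanity-checking the per-person bottle rate quoted from Trash Inc.
bottles_per_year = 51e9     # US plastic beverage bottles per year, per the documentary
us_population = 310e6       # approximate US population around 2010 (assumed)

per_person = bottles_per_year / us_population
print(per_person)           # ~165 bottles per person per year
print(365 / per_person)     # ~2.2 days per bottle, i.e. roughly one every two days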
Only about 22% are recycled. The recycled stuff goes to make
polyester fabrics, mats and the like. Ironically, a manufacturer of such
recycled plastic goods in the US profiled in the documentary noted that he
was forced to import about 70% of his bottle needs from countries like
Canada.
What happens to the rest? Those that get thrown away with the
regular trash make it into the regular waste stream, with companies like
Waste Management working hard to figure out how to cheaply separate
the bottles out (since they represent a significant revenue opportunity; a
WM talking head in the documentary noted that WM could potentially
increase its revenues from $13 billion to $23 billion if it could just figure
out how to cheaply separate valuable recyclables from the waste stream
headed to landfills).
And there is a third category: stuff that doesn't even get to landfills, but washes down streams and rivers into the open ocean, where it drifts for hundreds of miles to form garbage islands in the middle of the ocean, such
as the Great Pacific Garbage Patch.
The story of the plastic water bottle serves as a sort of radioactive
tracer through the garbage industry, touching as it does every piece of the
puzzle.
The three books and the documentary explore different aspects of the system, so let's briefly review them.
Rubbish by Rathje and Murphy
Rubbish, though a little dated, is the most professional of the three
books, since it is the result of a large, long-term academic study, with no
particular agenda in mind, and written by the godfather of the entire field
of Garbology. To the principals of the University of Arizona Garbage
project, garbage is just archeological raw material. The fact that drilling
into modern, active landfills tells us about modern humans, while digging
into ancient mounds tells us about Sumerians, is irrelevant to them. The
perspective lends an interesting kind of objectivity to the book.
The first and most basic thing I learned from the book surprised me
no end, and answered a question that I had always wondered about. Why
do ancient civilizations seem to get buried under mounds?
Turns out that for much of history, waste simply accumulated on
floors inside dwellings. Residents would simply put in new layers of fresh
clay to cover up the trash. Every dwelling was a micro landfill. When the
floor rose too high, they raised the ceiling and doorways.
The result was that most ancient civilizations rose (literally) on a pile
of their own trash. There is even a table of historical waste accumulation
rates included. South Asia is the winner in this contest: the Bronze Age
Indus Valley Civilization apparently had the fastest accumulation of waste
at nearly 1000 cm/century. (I can't resist a little subcontinental humor:
how about we attribute all the great cultural achievements of the Indus
Valley Civilization to modern India, and the trash to modern Pakistan,
where the major archeological sites are situated today?)

Ancient Troy was also quite the trash generator, at about 140 cm/century. Since those ancient times, accumulation rates have declined dramatically (this doesn't mean we've been producing less trash per capita; merely that we've stopped burying it under our own floors).
Historically, trash was also thrown out onto streets, and burned
outside cities. The composition of trash has changed as well. If you think
today's plastic water bottles are a menace, you should read the description
of the horse-manure problem that (literally) buried New York before the
automobile.
Skipping ahead a few thousand years, you get the modern sanitary
landfill. But the takeaway here is a sense of perspective. Historically
speaking, our modern times are not the trashiest time in our history.
Though the scale and chemical diversity of the trash management problem
is huge in our time simply because of the size of the global population, we
are relatively far ahead of older civilizations in managing our trash.
Much of the work described in the book is about the insights you can obtain by drilling into landfills, or collecting garbage bags directly from
households. The findings provide fascinating glimpses into the delusions
of human beings. Take food habits for instance. One interesting research
exercise the book describes is a study comparing self-reported food habits
to the revealed food habits based on trash analysis. The authors call this
the Lean Cuisine Syndrome:
People consistently underreport the amount of regular
soda, pastries, chocolate, and fats that they consume; they
consistently over-report the amount of fruits and diet soda.
The book notes a related phenomenon called the Surrogate
Syndrome: people are able to describe the actual habits of family members
and neighbors with chilling accuracy.
Another fascinating analysis involves pull-tabs of beer cans. These
seem to be a sort of carbon-dating tool for modern garbage.

The unique punch-top on Coors beer cans, for


example, was used only between March of1974 and June of
1977 In landfills around the country, wherever Coors
beer cans were discarded, punch-top cans not only identify
strata associated with a narrow band of dates but also
separate two epochs fone from another.
Perhaps the most fascinating part of the book is the demographic
detective work stories. It turns out you can accurately figure out a lot of
things about neighborhoods: income levels, race, number of children,
consumption patterns and the like, simply by looking at and classifying the
trash. Trash also appears to be a goldmine of market research (I am
surprised there isn't a market research agency out there offering
segmentation reports based on personas/clusters derived from trash
analysis. Or perhaps there is). Interestingly, the hardest thing to infer from
trash is the proportion of men in a population. A Census Bureau funded
project failed to find any convincing models. For other variables, reliable
equations are available. For example,
Infant Population = 0.01506 * (Number of diapers in a 5-week collection)
There are similar correlates for women. For men though, such
indicators are unreliable: "Men are not exactly invisible in garbage, but garbage is a more unreliable indicator of their live-in presence than it is for any other demographic group."
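To make the flavor of these garbage-census regressions concrete, here is the infant-population formula above applied to a made-up collection in Python (the 0.01506 coefficient is the one quoted from the book; the diaper count is invented for illustration):

# Applying the Rubbish infant-population estimator quoted above to a hypothetical sample.
def estimate_infants(diapers_in_5_week_collection):
    return 0.01506 * diapers_in_5_week_collection

print(estimate_infants(400))  # ~6 infants implied by 400 diapers in a five-week collection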
Overall, the book is fascinating in the sense that Levitt's Freakonomics is fascinating. There is no overarching conceptual
framework, just an entertainingly told story that weaves together a few
broad themes and dozens of anecdotes chosen as much for entertainment
as insight.
Garbage Land by Elizabeth Royte
Royte's book is much more of a popular science treatment. The interesting part is her "follow the trail" approach to her subject.

She starts with an account of an urban adventure: canoeing in the Gowanus Canal, a highly polluted waterway in Brooklyn, in 2002, with volunteers dedicated to keeping it clean. From there she moves on to an analysis of her own life by examining her own garbage, an amateur self-study along the lines of the Rathje-Murphy study of larger communities.
Among her reflections:
Picking through garbage was smelly and messy and
time-consuming, but it was revelatory in a way. I hadn't
realized my diet was so boring. Anyone picking through my
castoffs would presume my family survived on peanut
butter, jelly, bread, orange juice, milk, and wine. And,
largely, we did.
The opening chapter includes a page from her garbage diary, and it
inspired me enough to stop and reflect on my own garbage and recycling
that week. Suffice it to say, the lessons were not pleasant.
From her home, Royte moves on to the next logical step: the curbside.
She arranges a ride-along with a garbage truck. This section is a
fascinating portrait of New York's Strongest, as the sanitation department workers call themselves (the cops are the Finest and the firefighters are the Bravest). The NYC garbagemen lift about five to six tons a day, in
seventy-pound bags. The view from the garbageman's perspective is
disturbing. Royte notes:
I knew, after just one day on the job, that san men
constantly made judgments about individuals. They
determined residents' wealth or poverty by the artifacts
they left behind. They appraised real estate by the height of
a discarded Christmas tree, measured education level by the
newspapers and magazines stacked on the curb. Glancing at
the flotsam and jetsam as it tumbled through their hopper,
they parsed health status and sexual practices.
It is not entirely a first-person narrative though. Bits of history and
research are woven through the narrative. There is an interesting section on New York's sanitation history, and the horse manure

problem I mentioned before. In 1880, we learn, 15,000 dead horses had to be cleared from city streets. City horses dumped 500,000 pounds of
manure and 45,000 gallons of urine onto city streets daily. The situation
needed a hero, and Colonel George Waring was that hero. He created the
first modern civic garbage-handling infrastructure in the US.
The rest of the book continues in this vein, chronicling Royte's
explorations of landfills, incinerator plants, toilets and sewage. The story
is by turns alarming, amusing, disgusting and scary. While there is no
overt alarmism, the book, by virtue of being a very personal exploration,
gets to you in a way that the more detached and objective Rathje-Murphy
book does not.
Gone Tomorrow by Heather Rogers
For completeness, I'll offer just a note about Gone Tomorrow, since I haven't finished reading it. It covers much of the same ground as the first
two books, but primarily from an environmentalist perspective (there is
also a documentary). It lacks the open-ended curiosity and sense of
discovery you get from the other two books, but you do get the right
pattern of highlighting if you are interested in the environmental angle.
Trash Inc.
And let's wrap with the CNBC documentary. While rather shallow,
the documentary does have the largest scope of all the material I went
through. Of particular interest is a segment on the garbage problem in
China, another on the MIT Trash Track project, and the plastic water bottle
story I told in the beginning. Catch a rerun if you can.
Landfills
Through the three books and the documentary, the star of the show is
definitely the landfill. One particular landfill, the Fresh Kills landfill in
New York (closed about a decade ago) plays a role in all the stories (the
largest landfill in the US today is the Apex landfill in Nevada).

The closing of Fresh Kills turned out to be a big event in garbage history, since it triggered possibly the biggest trash transport program in history, as the city orchestrated a massive garbage trucking program that today ships its trash out all over the country. Of New York City's 1.3 billion dollar annual sanitation budget, about $330 million a year goes towards exporting the trash.
New York's statistics are astounding: 12,000 tons a day, 24,000 lb per person per year, garbagemen making $70,000 a year with overtime (the
most experienced making six figures), a 300 square mile territory, a Mafia
angle, 1500 trucks, and a transport network that fans out hundreds of miles
into the American hinterland.
At the other end of the distribution chain are towns like Fox Township
in Pennsylvania, neighbor to the Greentree landfill owned by Veolia, a
French company. The residents are understandably ambivalent about the
presence of a giant garbage can in their backyard. On the one hand, the
landfill is a constant threat to the local environment, the water quality in
particular. But on the other hand, half the town's budget comes from the
fees paid by the landfill, which charges $3 per ton as tipping fees to
customers, and passes along a cut to the city.
The landfills themselves are fascinating civil engineering structures.
Today's modern sanitary landfills are dry landfills (the old theory that
garbage should be wet so it can degrade faster has been discarded in
favor of keeping it as dry as possible and sealing it in so that a landfill is
effectively forever). Liquid runoff (leachate: exactly the same stuff that
you sometimes find at the bottom of your trash can, the brown smelly
liquid) is carefully directed to the sewage stream, while vents release the
gases. The gases include methane and are a source of revenue, via power
generation (there is a BMW plant that runs off landfill gas).
But despite the engineering complexity, these are basically just large
trash cans. Lined with plastic like the one in your kitchen. The only
difference is that the trash has nowhere to go. Once it is full, it is capped
and landscaped, and you get all those strangely beautiful platonic
mountains you see when you drive along country highways (you can tell
when you are looking at a trash mountain: you will see venting pipes sticking out, and the slopes will be at a precise 30 degree gradient). There doesn't appear to be any need for alarmism though. America, at least, has
plenty of room. Other parts of the world may not be as lucky.
There are 2300 landfills around the country. You could say the United
States is a collection of 2300 large families, each with one giant trash can.
The Global Picture
I haven't found a good source that provides a global picture. The
CNBC documentary provides a glimpse into China, where Beijing alone
has a catastrophe looming (the city is overflowing with garbage in
unauthorized dump sites, because the available government-owned
landfills are insufficient for the growing city's waste stream).
Growing up in India, I have some sense of the world of garbage there.
There are both positives and negatives. On the positive side, the large-scale consumerist levels of trash production are still relatively rare in India, and limited to the most well-off, westernized households. Growing up, we generated practically no trash, simply because we mostly ate home-cooked food and did not consume the bewildering array of consumer
products that Americans routinely consume. As I recall, we owned a small
2-3 gallon trash basket, and generated perhaps one basket-full a week,
most of which was organic matter (which went to our garden). There was
little packaging. Groceries came in recycled newspaper bags, which we
recycled again.
But what little waste we did generate was poorly captured in the
organized waste stream. There were many disorganized small dumps in
the back alleys and few dumpsters.
By my teenage years in the 80s, modernity began catching up. Thin
plastic bags made from recycled (downcycled actually) plastic caught on
and replaced the newspaper bags. After reigning for about a decade, they
thankfully declined in popularity (thanks in part to an unanticipated
consequence: stray cows eating them and then dying as the plastic choked
their intestines), and I believe have actually been banned, at least in major
cities.

On the other end, though much of the waste is basically un-managed, recycling is probably vastly more efficient than anywhere in the West. But
the efficiency comes at a great human cost: there is an entire hierarchy of
impoverished classes (and socially immobile castes) that makes its living
off the waste stream. At the very top (which isn't saying much) are the
door-to-door used-newspaper buyers, who make paper bags or sell to
recycling plants (our gardener made some money on the side in this trade,
and I spent many evenings as a kid happily helping him and his son, who
was about my age, make paper bags). Also at the top are the wandering
traders who exchange junk and scrap metal for new aluminum
kitchenware. Below them you find a variety of roles, from the ragpickers
and scavengers, who clamber over landfills looking for anything of value,
to entire shantytowns of scrap merchants that spring up around the
landfills, buying from the scavengers. The system is efficient and picks the
waste-stream clean of anything of even the lowest potential value. But yes,
it involves humans running a daily risk of all sorts of infection and other
dangers.
To foreigners, looking out the window as an airplane comes in to land
at Mumbai can be a shock. The landing/take off glide paths often go right
over the main garbage dumps of Mumbai and the sprawling mess is
anything but pleasant to look at. But if you ever drive through the city's neighborhoods where the scavenger trade shops line the streets, you
cannot help but admire the gritty resourcefulness with which so many
people manage to live off garbage.
But the situation is gradually getting worse, driven both by the
exploding population and the rise of American-style consumerism. During
my last visit to India in 2008, I noticed that while my mother still ran the
same tight, low-footprint household she always has, many of the younger
yuppie couples seemed to have adopted the same lifestyle that had
shocked me when I first arrived in America in 1997. A lifestyle whose
story is written with discarded paper cups, too many paper napkins, water
bottles, product packaging and discarded, broken appliances. A culture of
home-cooked food is gradually transforming into a culture of take-out
food. And it isn't American-style fast-food that is to blame. You can now buy frozen or packaged versions of almost everything that I thought of as home-made Indian food, growing up. And I have to admit, every passing
year here in the States, I cook less, and buy more frozen, packaged foods
from my local Indian grocery store. Pizza boxes may be appearing in
Indian trash cans, but frozen chana masala boxes are appearing in
American trash cans as well (looking around the world though, it seems to
me that the Japanese are possibly the most in love with ridiculous amounts
of packaging).
But there's even more to the globalization of garbage than just
different country-level views. There is the international trade in garbage.
Places like India and China import garbage and recycling at all levels, from entire ships destined for the scrap-metal yard (which I wrote about earlier, on January 28, 2010), to lead batteries, to paper meant for recycling. The
waste stream is more than a network of dump routes that fans out from
cities like New York. It is a huge circulatory system that spans the globe.
Exploring Further
I have to admit, despite reading a ton of material on the subject, I am
merely a lot more informed, not much wiser. What is the true DNA of the
world of garbage? What is its significance within an overall understanding
of our world? Is it merely a treasure-trove of anthropological insights, or is
there a deeper level of analysis we can get to? The books left me with the
uncomfortable feeling that the garbage professionals were so absorbed in
the immediate details that they were missing something bigger. But I don't
know what that is. Somehow garbage in the literal sense probably fits into
the End of the World theme that I blogged about before (where I proposed
my garbage eschatology model of how the world might end).
Anyway, I expect my interest in this topic will continue to evolve.
I've started a trail on the subject (click the image below), which you can explore. Do send me link/resource suggestions to add to it. As you can tell by the relative incoherence of the trail, I don't yet have a good idea about
how to put the jigsaw puzzle together in a more meaningful way.

The Disruption of Bronze


February 2, 2011
I pride myself on my hard-won sense of history. World history is
probably the subject I've studied the most on my own, starting with Plantagenet Somerset Fry's beautifully illustrated DK History of the World at age 15. I studied the thing obsessively for nearly a year, taking copious notes and neglecting my school history syllabus. It's been the best intellectual investment of my life. Since then, I periodically return to history to refresh my brain whenever I think my thinking is getting stale. Most recently, I've been reading Gibbon's Decline and Fall of the Roman Empire and Alfred Thayer Mahan's The Influence of Sea Power Upon
History. My tastes have gradually shifted from straightforward histories by
modern historians to analytical histories with a specific angle, preferably
written by historians from eras besides our own.
The big value to studying world history is that no matter how much
you know or think you know, one new fact can completely rewire your
perspectives. The biggest such surprise for me was understanding the real
story (or as real as history ever gets) of how iron came to displace bronze,
and what truly happened in the shift between the Bronze Age and the Iron
Age.
What comes to mind when you think "bronze"? Hand-crafted artifacts, right?
What about "iron"? Big, modern steel mills and skyscrapers, right? Iron
metallurgy is obviously the more advanced and sophisticated industry in
our time.
The Iron Age displaced the Bronze Age sometime in the late second
millennium BC. The way the story is usually told, iron was what powered
the rise of the obscure barbarian-nomads known as the Aryans throughout
the ancient world.

You could be forgiven for thinking that this was a sudden event based
on iron being suddenly discovered and turning out to be a superior
material for weaponry, and the advantage accidentally happening to fall to
the barbarian-nomads rather than the civilization centers.
Far from it.
Here's the real (or less wrong) story in outline.
The Clue in the Tin
You see, iron and bronze co-existed for a long time. Iron is a plentiful
element, and can be found in relatively pure forms in meteorites (think of
meteorites as the starter kits for iron metallurgy). Visit a geological
museum sometime to see for yourself (I grew up in a steel town).
It is hard to smelt and work, but basically once you figure out some
rudimentary metallurgy and can generate sufficiently high temperatures to
work it, you can handle iron, at least in crude, brittle and easily rusted
forms. Not quite steel, but then who cares about rust and extreme hardness
if the objective is to split open the skull of another guy in the next 10
seconds.
Bronze on the other hand is a very difficult material to handle. There
have been two forms in antiquity. The earlier Bronze Age was dominated
by what is known as arsenical bronze. That's copper alloyed with arsenic to make it harder. That's not very different from iron. Copper is much
scarcer and less widely-distributed of course, but it does occur all over the
place. And fortunately, when you do find it, copper usually has trace
arsenic contamination in its natural form. So you are starting with all the
raw material you need.
The later Bronze Age though, relied on a much better material: tin
bronze. Now this is where the story gets interesting. Tin is an extremely
rare element. It only occurs in usable concentrations in a few isolated
locations worldwide.

In fact, known sources during the Bronze Age were in places like
England, France, the Czech Republic and the Malay peninsula. Deep in
barbarian-nomad lands of the time. As far as we can tell, tin was first
mined somewhere in the Czech Republic around 2500 BC, and the
practice spread to places like Britain and France by about 2000 BC.
Notice something about that list? They are very far from the major
Bronze Age urban civilizations around the Nile, in the Middle East and in
the Indus Valley, of 4000-2000 BC or so.
This immediately implies that there must have been a globalized long-distance trade in tin connecting the farthest corners of Europe (and
possibly Malaya) with the heart of the ancient world. Not only that, you
are forced to recognize that the metallurgists of the day must have had
sophisticated and deliberate alloying methods, since you cannot assume,
as you might be tempted to in the case of arsenical bronze, that the
ancients didn't really know what they were doing. You cannot produce tin-bronze by accident. Tin implies skills, accurate measurements, technology,
guild-style education, and land and sea trade of sufficient sophistication
that you can call it an industry.
What's more, the use of tin also implies that the Bronze Age civilizations didn't just sit around inside their borders, enjoying their
urban lifestyles. They must have actually traded somehow with the far
corners of the barbarian-nomad world that eventually conquered them.
Clearly the precursors of the Aryans and other nomadic peoples of the
Bronze Age (including the Celts in Europe, the ethnic Malays, and so
forth) must have had a lot of active contact with the urban civilizations
(naive students of history often don't get that humans had basically dispersed through the entire known world by 10,000 BC; civilization may have spread from a few centers, but people didn't spread that way, they spread much earlier).
In fact, tin almost defines civilization: only the 3-4 centers of urban
civilization of that period had the coordination capabilities necessary to
arrange for the shipping of tin over land and sea, across long distances. It
is well recognized that they had trade with each other, with different trade
imbalances (there is clear evidence of land and sea trade among the Mesopotamian, Nile and Indus river valleys; the Yellow River portions of
China were a little more disconnected at that time).
What is not as well recognized is that the evidence of commodities
like tin indicates that these civilizations must have also traded extensively
with the barbarian-nomad worlds in their interstices and beyond their
borders in every direction. The iron-wielding barbarians were not
shadowy strangers who suddenly descended on the urban centers out of
the shadows. They were marginal peoples with whom the civilizations had
relationships.
So tin implies the existence of sophisticated international trade. I
suspect it even means that tin was the first true commodity money
(commodity monies don't just emerge based on their physical properties and value; they must provide a raison d'être for trade over long distances).
Iron vs. Bronze
So what about iron? Since it was all over the place, we cannot trace
the origins of iron smelting properly, and in a sense there is no good
answer to the question "where was iron discovered?" It was in use as a
peripheral metal for a long period before it displaced bronze (possibly
inside the Bronze Age civilizations and the barbarian-nomad margins). As
the Wikipedia article says, with reference to iron use before the Iron Age:
Meteoric iron, or iron-nickel alloy, was used by various ancient peoples thousands of years before the Iron Age. This iron, being in its native metallic state, required no smelting of ores. By the Middle Bronze Age, increasing numbers of smelted iron objects (distinguishable from meteoric iron by the lack of nickel in the product) appeared throughout Anatolia, Mesopotamia, the Indian subcontinent, the Levant, the Mediterranean, and Egypt. The earliest systematic production and use of iron implements originates in Anatolia, beginning around 2000 BCE. Recent archaeological research in the Ganges Valley, India showed early iron working by 1800 BC. However, this metal was expensive, perhaps because of the technical processes required to make steel, the most useful iron product. It is attested in both documents and in archaeological contexts as a substance used in high value items such as jewelry.
Unlike tin-bronze, which probably required a specific sequence of
local inventions near the ore sources followed by diffusion, iron use could
(and probably did) arise and evolve in multiple places in unrelated ways,
because it didn't depend on special ingredients. The idea that it might have
been expensive enough, in the form of steel, to be jewelry, is reminiscent
of the modern history of another metal: aluminum. Like iron, it is one of
the most commonplace metals, and like iron, until a cheap manufacturing
process was discovered, it was treated as a precious metal. Rich people ate off
aluminum ware and wore aluminum jewelry.
So you can tell a broader, speculative history: since you didn't need
complicated shipping and smelting to make a basic use of iron, its use
could develop on the peripheries of civilization, among barbarian-nomads
who didn't demand the high quality that the tin-bronze markets did. Iron didn't need the complicated industry that bronze did. What's more,
chances are, the bronze guilds were likely quite snooty about the crappy,
rusty material outside of highly-refined and expensive jewelry uses.
But the margins, which didn't have the tin trade or industry, had a good reason to pay attention to iron. I speculate that for the barbarian-nomad cultures that were far from the Bronze Age urban centers, the
upgrade that iron provided over stone, even with the problems of rust and
brittleness that plagued primitive iron, was enough for them to take down
the old Bronze-powered civilizations, and then leisurely evolve iron to its
modern form. I suspect a bronze-leapfrogging transition from stone to iron
happened in many places, as with cellphones today in Africa.
(As an aside, I assume there is an equally sophisticated story about how bronze displaced stone; neolithic stone age cultures, like the ones the Europeans encountered in America, were far from grunting cave-dwellers.
They had evolved stone use to a high art).

By the time iron got both good enough and cheap enough to take on
bronze as a serious contender for uses like weaponry, the incumbent
Bronze Age civilizations couldn't catch up. The pre-industrial barbarian-nomads had the upper hand.
Iron didn't completely displace bronze in weaponry until quite late. As late as Alexander's conquests, he still used bronze; iron technology
was not yet good enough at the highest levels of quality, but the point is, it
was good enough initially for the marginal markets, and for masses of
barbarian soldiers.
Sound familiar?
This is classic disruption in the sense of Clayton Christensen. An
initially low-quality marginal market product (iron) getting better and
eventually taking down the mainstream market (bronze), at a point where
the incumbents could do nothing, despite the extreme sophistication of their civilization, with its evolved tin trading routes and deliberate
metallurgical practices.

Rewinding History
Understanding the history of bronze and iron better has forced me to
rewind my sense of when history proper starts by at least 11,000 years.

The story has given me a new appreciation for how sophisticated human
beings have been, and for how long. I used to think that truly psychologically modern humans didn't emerge till about 1200 AD. The
story of bronze made me rewind my assessments to 4000 BC. Now,
though I don't know the details (nobody does), I think psychologically
modern human culture must have started no later than 10,000 BC, the
approximate period of what is called the Neolithic revolution.
Now I think the most interesting period in history is probably 10,000
BC to 4,000 BC. Even 20,000 BC to 10,000 BC is fascinating (that's when the caves in Lascaux were painted), but let's march backwards one
millennium at a time.

Bay's Conjecture
May 21, 2009
A few years ago, I was part of a two-day DARPA workshop on the
theme of "Embedded Humans." These things tend to be brain-numbing, so
you know an idea is a good one if it manages to stick in your head. One
idea really stayed with me, and well call it Bays conjecture (John Bay,
who proposed it, has held several senior military research positions, and is
the author of a well-known technical textbook). It concerns the effect of
intelligent automation on work. What happens when the matrix of
technology around you gets smarter and smarter, and is able to make
decisions on your behalf, for itself and the overall system? Bays
conjecture is the antithesis of the Singularity idea (machines will get
smarter and rule us, a la Skynet I admit I am itching to see Terminator
Salvation). In some ways its implications are scarier.
The Conjecture
Bay's conjecture is simply this: autonomous machines are more demanding of their operator than non-autonomous machines. The implication is this picture:

The point of the picture is this: when technology gets smarter, the total work being performed increases. Or in Bay's words, "force multiplication through accomplishment of more demanding tasks."
Humans are always taking on challenges that are at the edge of the current
capability of humans and machines combined. So like a muscle being
stressed to failure, total capacity grows, but work grows faster. We never
build technology that will actually relieve the load on us and make things
simpler. We only end up building technology that creates MORE work for
us.
The one exception is what we might call Bay's corollary: he asserts that if you design systems with the principle of human override protection, total work capacity collapses back to the capability of humans alone. We are both too greedy and too lazy for that. We are motivated by the delusional picture in Case 1, and we end up creating Case 2.
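Here is a toy sketch of that dynamic as I read it (my own illustration, not Bay's formalism; the growth and overreach rates are made-up assumptions):

```python
# Toy model of the conjecture as described above: humans keep taking on work
# at the edge of combined human+machine capability, so smarter machines never
# reduce the load; with "human override protection" (the corollary), effective
# capacity collapses back to the humans alone.

def capacity(human: float, machine: float, override_protection: bool) -> float:
    return human if override_protection else human + machine

human, machine = 1.0, 1.0
for year in range(5):
    cap = capacity(human, machine, override_protection=False)
    work_taken_on = 1.1 * cap   # we always bite off a bit more than we can chew (assumed)
    print(year, round(cap, 2), round(work_taken_on, 2))
    machine *= 1.5              # automation gets smarter every cycle (assumed growth rate)
```

The numbers are meaningless; the point is that demanded work tracks capacity from above at every step, so smarter machines never actually make life simpler.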
Here's why this is the opposite of Skynet/Singularity. Those ideas are based (in the caricature sci-fi/horror version) on the idea that machines, once they get smarter than us, will want to enslave us. In the Matrix, humans are reduced to batteries. In the Terminator series, it is unclear what Skynet wants to do with humans, though I am guessing we'll find out and it will probably be some sort of naive enslavement.
The point is: the greed-laziness dynamic will probably apply to computer AIs as well. To get the most bang for the buck, humans will have to be at their most free/liberated/creative within the Matrix. So that's good news. But on the other hand, the complexity of the challenges we take on cannot increase indefinitely. At some point, the humans+machines matrix will take on a challenge that's too much for us, and we'll do it with a creaking, high-entropy worldwide technology matrix that is built on rotting, stratified layers of techno-human infrastructure. The whole thing will fail to rise to the challenge and will collapse, dumping us all back into the stone age.

Hall's Law:
The Nineteenth Century Prequel to Moore's Law
March 8, 2012
For the past several months, I've been immersed in nineteenth century history. Specifically, the history of interchangeability in technology between 1765, when the Système Gribeauval, the first modern technology doctrine based on the potential of interchangeable parts, was articulated, and 1919, when Frederick Taylor wrote The Principles of Scientific Management.
Here is the story represented as a Double Freytag diagram, which
should be particularly useful for those of you who have read Tempo. For
those of you who havent, think of the 1825 Hall Carbine peak as the
Aha! moment when interchangeability was first figured out, and the
1919 peak as the conclusion of the technology part of the story, with the
focus shifting to management innovation, thanks in part to Taylor.

The unsung and rather tragic hero of the story of interchangeability was John Harris Hall (1781–1841), inventor of the Hall carbine. So I am naming my analog to Moore's Law for the 19th century "Hall's Law" in his honor.

The story of Hall's Law is in a sense a prequel to the unfinished story of Moore's Law. The two stories are almost eerily similar, even to believers in the "history repeats itself" maxim.
Why does the story matter? For me, it is enough that it is a fantastically interesting story. But if you must have a mercenary reason for reading this post, here it is: understanding it is your best guide to the Moore's Law endgame.
So here is my telling of this tale. Settle in, it's going to be another long one.
Onion Steel
In A Brief History of the Corporation, I argued that there were two distinct phases: an early mercantile-industrial phase that was primarily European in character, extending from about 1600 to 1800, and a later Schumpeterian-industrial phase, extending from about 1800 to 2000, that was primarily American and Russian in character.
Each phase was enabled by a distinct technological culture. In the early, British phase, a scientific sensibility was the exception rather than the rule. The default was the craftsman sensibility. In the later, American-Russian phase, the scientific sensibility was the rule and the craftsman sensibility the exception (it is notable that the American-Russian phase was inspired by French thought rather than British; call it Napoleon's revenge).
What was this (much romanticized today) craftsman sensibility? Consider this passage about the state of steel-making in Sheffield, the leading early nineteenth century technology center for the industry, before the rise of American steel. The quote is from Charles Morris's excellent book The Tycoons, my primary reference for this post (it is nominally about the lives of Rockefeller, Carnegie, J. P. Morgan and Jay Gould, but is actually a much richer story about the broad sweep of 19th century technology history; I am not done with it yet, but it has been such a stimulating read that I had to stop and write this post):

Making a modest batch of steel could take a week or more, and traditional techniques were carefully passed down from father to son; one Sheffield recipe started by adding the juice of four white onions.
Morris attributes the onion story to Thomas Misa's Nation of Steel, which is now on my reading list.
American steel displaced British steel not because it was based on the
Bessemer and open hearth processes (Bessemer was English), but because
the industry was built from the ground up along scientific lines, with no
craftsman-baggage slowing it down.
The interesting thing about this recipe for onion steel is that it
illustrates both the strengths and the weaknesses of the craftsman
sensibility. You can only imagine the tedious sort of uninformed
experimentation it took to consider adding onions to a steel recipe. There
is something beautiful about the absence of preconceived notions in this
sensibility. No modern metallurgist would even think to add onions to a
metal recipe.
On the other hand, if a modern metallurgist were faced with data showing that onions improved the properties of steel, he or she would not rest until they'd either disproved the effect or explained it in less bizarre terms. The recipe would certainly not get passed down from father to son (mentor to mentee today) unexplained.
What America brought to manufacturing was a wholesale shift from
craftsman-and-merchant thinking about technology and business to
engineer-and-manager thinking. The shift affected every important 19th
century business sector: armaments, railroads, oil, steel, textile equipment.
And it created a whole new sector: the consumer market.
But this was not the result of an abstract, ideological quest for
scientific engineering and manufacturing, or a deliberate effort to replace
high-skill/high-wage craftsmen with low-skill/low-wage/interchangeable
machine operators.

It was a consequence of a relentless pursuit of interchangeability of parts, which in turn was a consequence of a pursuit of greater scale, profits and competition for market share (which drove greater complexity in offerings) on the vast geographic canvas that was America. Craft was merely a casualty along the way.
So why was interchangeability of parts a holy grail in this pursuit?
Interchangeability, Complexity and Scaling
The problem is that even the highest-quality craft does not scale. When something like a rifle is mass-produced using interchangeable parts, breakdowns can be fixed using parts cannibalized from other broken-down rifles (so two broken rifles can be mashed up to make at least one that works) or with spare parts shipped from a warehouse. Manufacturing can be centralized or distributed in optimal ways, and constantly improved. Production schedules can be decoupled from demand schedules.
A craftsman-made rifle, on the other hand, requires a custom-made/fitted replacement part. The problem is especially severe for an object like a rifle: small, widely dispersed geographically, and liable to break down in the unfriendliest of conditions. Conditions where minimizing repair time is of the essence, and skilled craftsmen are rather thin on the ground. It is no surprise that the problem was first solved for guns.
Let's do some pidgin math to get a sense of what a true mathematical model might look like.
Roughly speaking, scaling production for any mechanical widget involves three key dimensions: production volume V, structural complexity S (the number of interconnections in an assembly is a good proxy measure for S, just like the number of transistors on a chip is a good proxy for its complexity) and operating tempo of the machine in use, T (since the speed of operation of a machine determines the stress and wear patterns, which in turn determine breakdown frequency; clock-rate is a similar measure for Moore's Law).

For complex widgets, scaling production isn't just (or even primarily) about making more new widgets; it is about keeping the widgets in existence in the field functioning for their design lifetime through post-sales repair and maintenance. The greater the complexity and cost, the more the game shifts to post-sales.
You can combine the three variables to get a rough sense of manufacturing complexity and how it relates to scaling limits. Something like C = S × T provides a measure of the complexity of the artifact itself. Breakdown rate B is some function of complexity and production volumes, B = f(C, V). At some point, as you increase V, you get a corresponding increase in B that overwhelms your manufacturing capability. To complete this pidgin math model, you can think in terms of some B_max = f(C, V_max) above which V cannot increase without interchangeability.
Modern engineers use much more sophisticated measures (this crude
model does not capture the tradeoff between part complexity and
interconnection complexity for example, or the fact that different parts of a
machine may experience different stress/wear patterns), but for our
purposes, this is enough.
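Here is a minimal sketch of that pidgin model in code, just to make the moving parts concrete. The functional forms and every constant in it are my own illustrative assumptions, not estimates from the post:

```python
# Pidgin scaling model sketch: C = S x T, B = f(C, V), and the largest volume
# V_max that stays under a craft-repair ceiling B_max. All constants are
# illustrative assumptions.

def complexity(S: float, T: float) -> float:
    """C = S x T: interconnection count times operating tempo."""
    return S * T

def breakdown_rate(C: float, V: float, k: float = 1e-4) -> float:
    """B = f(C, V): assume breakdowns scale with complexity and fielded volume."""
    return k * C * V

def v_max(C: float, B_max: float, k: float = 1e-4) -> float:
    """Largest fielded volume whose repair load the available craftsmen can absorb."""
    return B_max / (k * C)

if __name__ == "__main__":
    B_ceiling = 50.0  # breakdowns per day that craft repair can keep up with (assumed)
    for name, S, T in [("musket lock", 40, 1.0), ("late steam engine", 3000, 5.0)]:
        C = complexity(S, T)
        print(f"{name}: C={C:.0f}, V_max ≈ {v_max(C, B_ceiling):,.0f} units")
```

With these made-up numbers, the simple widget can be fielded in the tens of thousands before repair demand swamps the craftsmen, while the complex one stalls at a few dozen units. That is the scaling wall the next paragraph describes.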
To scale production volume above V_max without introducing
interchangeability, you have to either lower complexity and/or tempo or
increase the number of skilled craftsmen. The first two are not options
when you are trying to out-do the competition in an expanding market.
That would be unilateral disarmament in a land-grab race. The last method
is simply not feasible, since education in a craft-driven industrial
landscape means long, slow and inefficient (in the sense that it teaches
things like onion recipes) 1:1 apprenticeship relationships.
There is one additional method that does not involve interchangeability: moving towards disposability for the whole artifact, which finesses the parts-replacement problem entirely. But in practice, things get cheap enough for disposability to be a workable strategy only after mass production is achieved. Disposability is rarely a cost-effective strategy for craft-driven manufacturing, though I can think of a few examples.
These facts of life severely limited the scale of early nineteenth century technology. The more machines there are in existence, the greater the proportion of craftsmen whose time must be devoted to repair and maintenance rather than new production. Since breakdowns are unpredictable and parts unique, there is no way to stockpile an inventory of spare parts cheaply. There is little room for cannibalization of parts in the field to temporarily mitigate parts shortages.
What was needed in the 19th century was a decoupling of scaling
problems from manufacturing limitations.
Interchangeability and the Rise of Supply Chains
Interchangeability of parts breaks the coupling between scaling and
manufacturing capacity by substituting supply-chain limits for
manufacturing limits. For a rifle, you can build up a stockpile of spare
parts in peace time, and deliver an uninterrupted supply of parts to match
the breakdown rate. There is no need to predict which part might break
down in order to meaningfully anticipate and prepare. You can also
distribute production optimally (close to raw material sources or low-cost
talent for instance), since there is no need to locate craftsmen near the
point-of-use.
So when interchangeability was finally achieved and had diffused
through the economy as standard practice (a process that took about 65
years), demand-management complexity moved to the supply chain, and
most problems could be solved by distributing inventories appropriately.
These happy conditions lasted for nearly a century after widespread interchangeability was achieved, from about 1880 to 1980, when supply chains met their own nemesis, demand variability (that problem was partially solved using lean supply chains, which relied in turn on the idea of interchangeability applied to transportation logistics: container shipping. But I won't get into that story here, since it is conceptually part of the unfinished Moore's Law story).

The price that had to be paid for this solution was that the American economy had to lose the craftsmen and work with engineers, technicians and unskilled workers instead. This creates a very different technology culture, with different strengths and weaknesses. For example, the scope of innovation is narrowed by such codification and scientific systematization of crafts (prima facie nutty ideas like onion steel are less likely to be tried), but within the narrower scope, specific patterns of innovation are greatly amplified (serendipitous discoveries like penicillin or x-rays are immediately leveraged to the hilt).
Why must craft be given up? Even the best craftsmen cannot produce
interchangeable parts. In fact, the craft is practically defined by skill at
dealing with unique parts through carefully fitted assemblies.
(Interchangeability is of course a loose notion that can range from
functional replaceability to indistinguishability, but craft cannot achieve
even the coarsest kind of interchangeability at any meaningful sort of
scale).
Put another way, craft is about relative precision between unlike parts.
Engineering based on interchangeability is about objective precision
between like parts. One requires human judgment. The other requires
refined metrology.
From Armory Practice to the American System
It was the sheer scale of America, the abundance of its natural
resources (and the scarcity of its human resources), that provided the
impetus for automation and the interchangeable parts approach to
engineering.
As agriculture moved westward through New York, Pennsylvania and
Michigan, the older settled regions began to turn to manufacturing for
economic sustenance. The process began with the textile industry, born of
stolen British designs around what is now Lowell, Massachusetts. But
American engineering in the Connecticut river valley soon took on a
distinct character.

Like the OSD/DARPA/NASA-driven technology boom after World War II, the revolution was driven by the (at the time, fledgling) American military, which had begun to acquire a mature and professional character after the War of 1812 (especially during the John Quincy Adams administration).
The epicenter of the action was the Springfield Armory, the PARC of
its day, and outposts of the technology scene extended as far south as
Harpers Ferry, West Virginia.
John Hall was among the hundreds of pioneers who swarmed all over
the Connecticut valley region, dreaming up mechanical innovations and
chasing local venture capitalists, much like software engineers in Silicon
Valley today.
There were plenty of other extraordinary people, including other mechanical engineering geniuses like Thomas Blanchard, inventor of the Blanchard gun-stock lathe (which was actually a general solution for turning any kind of irregular shape using what is known today as a pattern lathe). By the time he was done with gun stocks (a bottleneck part in gun-making, with all sorts of subtle curves along multiple axes), he had created a system of 16 separate machines at the Springfield Armory that pretty much automated the whole process, squeezing all craft out of what had been the single most demanding component in gun-making.
British gun-making was like British steel-making before people like Blanchard and Hall blew up the scene. Here is Morris again:

The workings of the British gun industry were reasonably typical of mid-nineteenth-century manufacturing. It was craft-based and included at least forty trades, each with its own apprenticeship system and organizations. The gun-lock, the key firing mechanism, was the most complicated, while the most skilled men were the lock-filers… [who] spent years as apprentices learning to painstakingly hand-file the forty or so separate lock pieces to create a unified assembly… When the Americans breezily described machine-made stocks, and locks that required no hand fitting, they sounded as if they were smoking opium.
Among the opium-smoking geniuses, Blanchard at least enjoyed a
good deal of success. Hall did not.
He put together almost the entire American System through his
single-minded drive, in the technology-hostile Harpers Ferry location far
from the Connecticut Valley hub. When he was done, he had created an
integrated manufacturing system of dozens of machines that produced
interchangeable parts for every component of his carbine. Even parts from
production runs from different years could be interchanged, a standard
some manufacturing operations struggle to reach even today.
The achievement was based on relentless automation to eliminate
human sources of error, increasingly specialized machines, and rigorous
and precise measurements (there were three of every measurement
instrument, one for production use, one for calibration, and a master
instrument to measure wear on the other two).
It was a massive systems-engineering accomplishment. The Hall
carbine was the starter pistol for the American industrial revolution.
Overtake, Pause, Overdrive
Hall did not reap much of the rewards. Thanks to unfortunate, exploitative relationships (in particular with a shameless patent troll, William Thornton, a complete jerk by Morris's account), he was banished to Harpers Ferry rather than being allowed to work in Springfield. And his work, when completed, was acknowledged grudgingly, and with poor grace. The Hall carbine itself was obsolete by the time his system was mature, and others who applied it to newer products reaped the benefits.
Between 1825 and the 1910s, the methods pioneered by Hall spread through the region and beyond, and were refined and generalized. In the process, first America, and then the world, experienced a Moore's Law-type shock: rapidly increasing standards of living provided by an increasing variety of goods whose costs kept dropping.

Culturally, the period can be divided into three partially overlapping phases: an overtake phase (1851–1876) when America clearly pulled ahead of Britain as the first nation in the technology world, a pause represented by the recession of the 1870s, and finally an overdrive phase beginning in the 1880s and continuing to the beginning of World War I, when the American model became the global model (and in particular, the Russian model, as Taylorism morphed into state doctrine).
Overtake: 1851–1876
The overtake phase has a pair of useful bookend events marking it. It began with the 1851 Crystal Palace Exhibition, the first of the great 19th century world fairs, when the world began to suspect that America was up to something (McCormick's harvester and Colt's revolver were among the items on display), and ended with the 1876 Centennial World Fair in Philadelphia, when all remaining doubt was erased and it became obvious that America had now comprehensively overtaken Britain in technology.
When Britain finally caught on and hastily began copying American
practices following the Philadelphia fair, the result was a revitalization of
British industry that produced, among other things, the legendary Enfield
rifle (the rifle subplot in the story of interchangeability has an interesting
coda that is shaping the world to this day, the Russian AK-47, as pure an
example of the power of interchangeability-based mass manufacturing as
has ever existed).
It wasnt just guns. In every industry America began to show up
Britain. Much of the credit went to showboating hustlers who claimed
credit for interchangeability and the American System/Armory Practice,
and made a lot of money without actually contributing very much to core
technological developments. These included Eli Whitney of cotton gin
fame, the McCormicks of the harvester, Samuel Colt (revolvers) and Isaac
Singer (sewing machines). While they certainly contributed to the
development of individual products, the invention of the American model
itself was due to technologists like Blanchard and John Hall.

In the initial decades of the overtake, fueled in part by opportunity (and profiteering) associated with the Civil War and government-subsidized building out of the railroad system, much of the impact was invisible. But by the 1890s, as the infrastructure phase was completed, the same methods were unleashed on everyday life, creating modern consumer culture and the middle class within the short space of a single generation.
The Pause: the 1870s
The Civil War looms large as the major political-economic event in this history (1861–1865), but the bulk of the impact was felt in the decade that followed, once the dust had settled and interrupted infrastructure projects were completed.
This impact took the form of the rather strange long recession of the 1870s, which was culturally very similar to the one we are currently experiencing (increased economic uncertainty and a fall in nominal incomes, hidden technology-driven increases in standard of living, foundational shifts in the nature of money; back then it was a greenbacks vs. gold thing).
One way to understand this process is that the infrastructure phase had created both tycoons and an extremely over-leveraged economy. It was the uncertain gap between "build it" and "they will come." It was a huge, collective pause, a national decade of breath-holding as people wondered whether the chaos unleashed by the new infrastructure would create a better social order or destroy everything without creating something new in its place.
Starting in the 1880s, the bet began paying off in spades. The recession ended and the overdrive boom began, as people figured out what to do with the newfound capabilities in their environment.
Overdrive: 1880s–1913
A good early marker here is probably the first Montgomery Ward catalog in 1872, the first major sign that the new infrastructure allowed old businesses to be rethought, leading to the creation of the modern consumer economy.
The mail-order catalog was by itself a simple idea (the first catalog was just a single page), but the reason it disrupted old-school merchants was that it relied on all the infrastructure complexity that now existed.
Trains that ran on reliable schedules to deliver mail, telegraph lines that brought instant price updates on western grain to the East Coast, steel to build everything, oil and electricity to light up (and later, fuel) everything, new financial systems to move money around, and of course, the application of interchangeability technology to everything in sight.
It took Sears, starting in 1888, to scale the idea and truly take down
the merchant elites who had defined the old business culture, but by World
War I, middle-class consumer culture had emerged and had come to define
America. In another 50 years, it would come to define the world.
It was such a powerful boom that globally, it lasted a century, with
two world wars and a Great Depression failing to arrest its momentum (as
an aside, I wonder why people pay so much attention to the 1930s
depression to make sense of the current recession; the 1870s recession
makes for a far more appropriate comparison).
What ultimately killed it was its own success. Semiconductor
manufacturing probably represents the crowning achievement of the
Armory Practice/American System that began with a lonely John Hall
pushing ahead against all odds at Harpers Ferry.
Moores Law was born as the last and greatest achievement of the
parent it ultimately devoured: Halls Law.
Halls Law
When you step back and ponder the developments between 1825 and
1919, it can be hard to make sense of all the action.

There is the pioneering work in manufacturing technology. There is the explosion of different product types as the American System diffused through the industrial landscape. There is the story of the rise of the first tycoons. There is the rise of consumerism and the gradual emergence of the middle class. There is the connectivity by steam and telegraph.
Then there is the increasingly confident and strident American
presence on the global scene (especially through the World Fairs, two of
which I already talked about). And of course, you have the Civil War, the
California Gold Rush, the cowboy culture that existed briefly (and
permanently reshaped the American identity) before Jay Gould killed it by
finishing the railroad system.
There was the rise of factory farming and the meatpacking and
refrigerator-car industries together killing the urban butcher trade and
suddenly turning Americans into the greatest meat eaters in history.
Paycheck economics took over as the tycoon economy killed the free
agent.
In fact, there was a lot going on, to put it mildly. And that was just America. The rest of the world wasn't exactly enjoying peace and stability either. Perry had kicked down the doors of Japan, the Opium Wars had ravaged China, the East India Company (the star of my History of Corporations post) had been quietly put out to pasture and the Mughal empire had collapsed. The Ottomans were starting on a terminal decline. Continental Europe had begun its century-long post-Napoleon march towards World War I (the US Civil War served as a beta test for the post-Bismarck model of total war, just as the Spanish Civil War served as a beta test for World War II).
But just as Moore's Law provides something of a satisfying explanatory framework for almost everything that has happened in the last 50 years, the drive towards the holy grail of interchangeability provides a satisfying explanatory framework for much of this action. Here's my attempt at capturing what happened (someone enlighten me if something like this has already been proposed under a different name):

Hall's Law: the maximum complexity of artifacts that can be manufactured at scales limited only by resource availability doubles every 10 years.
I believe this law held between 1825 and 1960, at which point the law hit its natural limits.
Here, I mean complexity in the loose sense I defined before: some function of mechanical complexity and operating tempo of the machine, analogous to the transistor count and clock-rate of chips.
I don't have empirical data to accurately estimate the doubling period, but 10 years is my initial guess, based on the anecdotal descriptions from Morris's book and the descriptions of the increasing presence of technology in the world fairs.
Along the complexity dimension, mass-produced goods rapidly got more complex, from guns with a few dozen parts to late-model steam engines with thousands. The progress on the consumer front was no less impressive, with the Montgomery Ward catalog offering mass-produced pianos within a few years of its introduction, for instance. By the turn of the century, you could buy entire houses in mail-order kit form. The cost of everything was collapsing.
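As a back-of-envelope consistency check (my own, using round numbers drawn from the anecdotes above): going from roughly 40 parts in a gun to roughly 4,000 in a late-model steam engine is

$$\log_2\!\left(\frac{4000}{40}\right) \approx 6.6 \text{ doublings},$$

which at the conjectured 10-year doubling period is about 66 years; starting the clock at 1825 lands you near 1890, comfortably inside the period the anecdotes describe. This proves nothing, but at least the guess is not wildly inconsistent with the stories.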
Along the tempo dimension, everything got relentlessly faster as well. Somewhere along the way, things got so fast, thanks to trains and the telegraph, that time zones had to be invented and people had to start paying attention to the second hand on clocks.
There is a ton of historical research on all aspects of this boom, but I
suspect nobody has yet compiled the data in a form that can be used to fit
a complexity-limit growth model and figure out the parameters of my
proposed Halls Law, since it is the sort of engineering-plus-history
analysis that probably has no hope of getting any sort of research funding
(it would take some serious archaeology to discover the part-count,
operating speed and production volumes for a sufficient number of sample
products through the period to fit even my simple model, let alone a model
that includes things like breakdown rates and actual, as opposed to
theoretical, interchangeability).

But even without the necessary empirical grounding, I am fairly sure the model would turn out to be an exponential, just like Moore's Law. Nothing else could have achieved that kind of transformation in that short a period, or created the kind of staggering inequality that emerged by the Gilded Age.
Break Boundaries and Tycoon Games
Both Moore's Law and Hall's Law, in the speculative form that I have proposed, are exponential trajectories. These trajectories generally emerge when some sort of runaway positive-feedback process is unleashed, through the breaking of some boundary constraint (the term "break boundary" is due to Marshall McLuhan).
The positive-feedback part is critical (if you know some math, you can guess why: a doubling law in difference/differential equation form has to be at least a first-order process; something like compound interest, if you don't know what the math terms mean).
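To spell out that parenthetical (my own gloss, not notation from the post): a fixed doubling period is exactly what a first-order growth law produces, because the growth rate is proportional to the current level:

$$C(t) = C_0 \, 2^{t/\tau} \quad\Longleftrightarrow\quad \frac{dC}{dt} = \frac{\ln 2}{\tau}\, C(t),$$

with $\tau \approx 10$ years for the conjectured Hall's Law and $\tau \approx 2$ years for Moore's Law. The discrete, compound-interest analogue is $C_{n+1} = (1 + r)\,C_n$ with $r = 2^{1/\tau} - 1$.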
Loosely speaking, this implies a technological process that can be applied to itself, improving it. Better machines with interchangeable parts also mean better machine tools that are themselves made with interchangeable parts and can therefore run continuously at higher speeds, with low downtime. Computers can be used to design more complex computers. This is not true of all technological processes. Better plastics do not improve your ability to make new plastics, for instance, since they do not play much of a role in their own manufacturing processes.
This is the inner, technological positive-feedback loop (think of an
entire technology sector engaging in a sort of 10,000 hours of deliberate
practice; a major sign is that the most talented people turn to tool-building:
Blanchard and Hall for Halls Law, people like the late Dennis Ritchie and
Linus Torvalds for Moores Law).
But the technological positive-feedback loop requires an outer
financial positive-feedback loop around it to fuel it. You need conditions
where the second million is easier to make than the first million.

This means tycoons who spot some vast new opportunity and play
land-grabbing games on a massive scale.
Both Halls Law and Moores Law led to wholesale management and
financial innovation by precisely such new tycoons.
For Hall's Law, the process started with Cornelius Vanderbilt, the hero of T. J. Stiles's excellent The First Tycoon, who figured out how to tame the strange new beast, the post-East-India-Company corporation, and in the process sidelined old money.
It is revealing that Vanderbilt was blooded in business through a major legal battle for steamboat water rights, Gibbons v. Ogden (1824), that helped define the relationship of corporations to the rest of society.
From there, he went from strength to strength, inventing new business and
financial thinking along the way. Only in his old age did he finally meet
his match: Jay Gould, who would go on to become the archetypal Robber
Baron, taking over most of Vanderbilts empire from his not-so-talented
children.
Vanderbilt was something of a transition figure. He straddled both
management and finance, and old and new economies: he was a cross
between an old-economy merchant-pirate in the Robert Clive mold (he ran
a small war in Nicaragua for instance) and a new-economy corporate
tycoon. He transcended the categories that he helped solidify, which
helped define the next generation of tycoons.
Among the four tycoons in Morris's book, Rockefeller (Chernow's Titan, on Rockefeller, is another must-read) and Carnegie appear on one side, as the archetypes of modern managers and CEOs. Both were masters of Wall Street as well, but were primarily businessmen.
On the financial side, we find the Joker-Batman pair of Gould and
Morgan. Jay Gould was the loophole-finder-and-destabilizer; J. P. Morgan
was the loophole-closer and stabilizer. While Gould was a competent, if
unscrupulous manager during the brief periods that he actually managed
the companies he wrangled, he was primarily a financial pirate par
excellence.

It makes for a very good story that he made his name by giving the
elderly Vanderbilt, who pretty much invented the playbook along with his
friends and rivals, the only financial bloody nose of his life (though
Vanderbilt exacted quite a revenge before he died). Through the rest of
his career, he exposed and exploited every single flaw in the fledgling
American corporate model, turning crude Vanderbilt-era financial tactics
into a high art form. When he was done, he had generated all the data
necessary for J. P. Morgan to redesign the financial system in a much
stronger form.
Morgans model would survive for a century until the Moores Law
era descendants of Gould (the financial pirates of the 1980s) started
another round of creative destruction in the evolution of the corporate
form.
From Halls Law to Moores Law
Hall's Law was the prequel to Moore's Law in almost every way. The comparison is not a narrow one based on just one dimension like finance or technology. It spans every important variable. Here is the corresponding Double Freytag:

I'll save my analysis of the Moore's Law era for another day, but here is a short point-by-point mapping/comparison of fundamental dynamics (i.e. things that were a consequence of the fundamental dynamics rather than historical accidents).
1. Obviously, Hall's Law maps to Moore's Law.
2. Increasing interchangeability in mechanical engineering maps to increasing transistor counts in semiconductor manufacturing. Increasing machine speeds map to increasing chip clock-rates.
3. Both technologies radically drove down the costs of goods and created de facto higher standards of living.
4. Both technologies saw the emergence of a new breed of tycoons within a few leadership generations. Jack Welch maps to Cornelius Vanderbilt. Bill Gates and Michael Dell map to Rockefeller and Carnegie. Jeff Bezos maps to Montgomery Ward and Sears.
5. The newer, younger digital-native tycoons, starting with Zuckerberg, map to the post-1890 third-generation innovators who were native to the new world of interchangeability rather than pioneers, similar to the early 20th century automobile and airplane industry tycoons (it is revealing that the Wrights were bicycle mechanics; bicycles were the first major consumer product to be designed around interchangeability from the ground up; the airplane was a result of the careful application of precisely the sorts of scientific measurement, experimentation and optimization that had been developed in the previous 75 years).
6. Each era was punctuated in the middle by a recessionary
decade marked by financial excesses, as the economy retooled
around the new infrastructure. The 1870s maps to the 2000s.
7. Each era enabled, and was in turn fueled by, new kinds of
warfare, exemplified by major wars that disturbed a balance of
power that had been maintained by old technology. The
American Civil War maps to the Cold War, while the wars of
the 1990s and 2000s are analogous to World War I.
8. Guns (including high-tempo machine guns) with interchangeable parts map to nuclear weapons. John Hall's stint at Harpers Ferry was the Manhattan Project of its day (here the mapping is not exact, since semiconductors were spawned by the military-industrial research infrastructure around electronics that emerged after World War II, rather than through the Manhattan Project itself).
9. Lincolns assassination is eerily similar to Kennedys. Just
checking to see if you are still paying attention. The first
person to call bullshit on this point gets a free copy of The
Tycoons.
10. The Internet and container shipping taken together are to Moore's Law as the railroad, steamship and telegraph networks taken together were to Hall's Law. The electric power grid provides the continuity between Hall's Law and Moore's Law.
11. Each era changed employment patterns and class structures wholesale. Hall's Law destroyed nobility-based social structures, created a new middle class defined by educational attainments and consumer goods, and created paycheck employment. Moore's Law is currently destroying each of these things, creating a Trading Up class and a new model of free agency, and killing education-based reputation models.
12. A new mass entertainment model started in each case. With Hall's Law it was Broadway (which led on to radio, movies and television). With Moore's Law, I'd say the analogy is to reality TV, which like Broadway represents new-era content in an old-era medium.
13. At the risk of getting flamed, Id say that Seth Godin is
arguably the Horatio Alger of today, but in a good way.
Somebody has to do the pumping-up and motivating to inspire
the masses to abandon the old culture and embrace the new by
offering a strong and simple message that is just sound enough
to get people moving, even if it cannot withstand serious
scrutiny.
14. Halls Law led on to the application of its core methods to
people, leading to new models of high-school and college
education and eventually the perfect interchangeable human,
The Organization Man. Moores Law is destroying these
things, and replacing them with Y-Combinator style education
and co-working spaces (this will end with the Organization
Entrepreneur, a predictably-unique individual, just like
everybody else).

15. Hall's Law led to the industrial labor movement. Moore's Law is leading to a new labor movement defined, in its early days, by things like standardized term-sheets for entrepreneurs (the "5-day/40-hour week" issue of our times; YC entrepreneurs are decidedly not the new capitalists. They are the new labor. That's a whole other post).
16. And perhaps most importantly, each era suffered an early crisis
of financial exploitation which led first to loophole closing,
and then to a new financial system and corporate governance
model. Jay Gould maps to the architects of the subprime crisis.
No J. P. Morgan figure has emerged to really clean up the
mess, but new corporate models are already emerging that look
so unlike traditional ones that they really shouldnt be called
corporations at all (hence the pointless semantic debate around
my history of corporations post; it is really irrelevant whether
you think corporations are dying or being radically reinvented.
You are talking about the same underlying creative-destruction
reality).
The New Gilded Age
When Mark Twain coined the term "Gilded Age," he wasn't exactly being complimentary. For some reason, the term seems to be commonly used as a positive one today, by those who want to romanticize the period.
I started to read the book and realized that Twain had completely missed the point of what was happening around him (the focus of the novel is political corruption; an element that loomed large back then, but was ultimately a sideshow), so I abandoned it.
But he got one thing right: the name.
Hall's Law created a culture that was initially a layer of fake gloss on top of much grimmer realities. Things were improving dramatically, but it probably did not seem like it at the time, thanks to the anxiety and uncertainty. Just as you and I aren't exactly celebrating the crashing cost of computers in the last two decades, those who lived through the 1870s were more worried about farming moving ever westward (outsourcing) and strange new status dynamics that made them uncertain of their place in the world.
It took time for Gilded to turn into Golden (about 50 years by my estimate; things became truly golden only after World War II). There were decades of turmoil which made the lives of transitional generations quite miserable. The 1870s were a you'll-thank-me-later decade, but for those who lived through the decade in misery, that is no consolation.
I abandoned The Gilded Age within a few pages. It is decidedly tedious compared to Tom Sawyer and Huckleberry Finn. Sadly, Twain's affection for a vanishing culture, which made him such an able observer of one part of American life, made him a poor observer of the new realities taking shape around him.
He makes a personal appearance in the stories of both Vanderbilt and
Rockefeller, and appears to have strongly disliked the former and admired
the latter, though both were clearly cut from the same cloth.
To my mind, Twain's best stab at describing the transformation (probably A Connecticut Yankee in King Arthur's Court; note the significance of Connecticut) is much worse than the attempts of younger writers like Edith Wharton and later, of course, everybody from Horatio Alger to F. Scott Fitzgerald.
We are clearly living through a New Gilded Age today, and Bruce Sterling's term Favela Chic (rather unfortunately cryptic; perhaps we should call it Painted Slum) is effectively analogous to Gilded Age.
We put on brave faces as we live through our rerun of the 1870s. We
celebrate the economic precariousness of free agency as though it were a
no-strings-attached good thing. We read our own Horatio Alger stories,
fawn over new Silicon Valley millionaires and conveniently forget the
ones who dont make it.
New Media tycoons like Arrington and Huffington fight wars that would have made the Hearsts and Pulitzers of the Gilded Age proud, while we lesser bloggers go divining for smaller pockets of attention with dowsing rods, driven by the same romantic hope that drove the tragicomic heroes of P. G. Wodehouse novels to pitch their plays to Broadway producers a century ago.
History is repeating itself. And the rerun episode we are living right
now is not a pleasant one.
The problem with history repeating itself, of course, is that sometimes it does not. The fact that 1819–1880 maps pretty well to 1959–2012 does not mean that 2012–2112 will map to 1880–1980. Many things are different this time around.
But assuming history does repeat itself, what are we in for?
If the Moore's Law endgame is the same century-long economic overdrive that was the Hall's Law endgame, today's kids will enter the adult world with prosperity and a fully-diffused Moore's Law all around them.
The children will do well. In the long term, things will look up.
But in the long term, you and I will be dead.
Some thanks are due for this post. It was inspired in part by Chris McCoy of YourSports.com, who badgered me about the "Internet = Railroad" analogy enough that I was motivated to go hunt for the best place to anchor a broader analogy. His original hypothesis is now the generalized point 10 of my list. Thanks also to Nick Pinkston for interesting discussions on the future of post-Moore's Law manufacturing; the child may resurrect its devoured parent after all. Also thanks to everybody who commented on the History of Corporations piece.

Hacking the Non-Disposable Planet


April 18, 2012
Sometime in the last few years, apparently everybody turned into a hacker. Besides computer hacking, we now have lifehacking (using tricks and short-cuts to improve everyday life), body-hacking (using sensor-driven experimentation to manipulate your body), college-hacking (students who figure out how to get a high GPA without putting in the work) and career-hacking (getting ahead in the workplace without paying your dues). The trend shows no sign of letting up. I suspect we'll soon see the term applied in every conceivable domain of human activity.
I was initially very annoyed by what I saw as a content-free
overloading of the term, but the more I examined the various uses, the
more I realized that there really is a common pattern to everything that is
being subsumed by the term hacking. I now believe that the term hacking
is not over-extended; it is actually under-extended. It should be applied to
a much bigger range of activities, and to human endeavors on much larger
scales, all the way up to human civilization.

I've concluded that we're reaching a technological complexity threshold where hacking is going to be the main mechanism for the further evolution of civilization. Hacking is part of a future that's neither the exponentially improving AI future envisioned by Singularity types, nor the entropic collapse envisioned by the Collapsonomics types. It is part of a marginally stable future where the upward lift of diminishing-magnitude technological improvements and hacks just balances the downward pull of entropic gravity, resulting in an indefinite plateau, as the picture above illustrates.
I call this possible future hackstability.
Hacking as Anti-Refinement
Hacking is the term we reach for when trying to describe an
intelligent, but rough-handed and expedient behavior aimed at
manipulating a complicated reality locally for immediate gain. Two
connotations of the word hack, rough-hewing and mediocrity, apply to
some extent.
I'll offer this rather dense definition that I think covers the phenomenology, and unpack it through the rest of the post.
Hacking is a pattern of local, opportunistic
manipulation of a non-disposable complex system that
causes a lowering of its conceptual integrity, creates
systemic debt and moves intelligence from systems into
human brains.
By this definition, hacking is anti-refinement. It is therefore a
barbarian mode of production because it moves intelligence out of systems
and into human brains, making those human brains less interchangeable.
Yet, it is not the traditional barbarian mode of predatory destruction of a
settled civilization from outside its periphery.
Technology has now colonized the planet, and there is no outside
for anyone to emerge from or retreat to. Hackers are part of the system,
dependent on it, and aware of its non-disposable nature. In evolutionary

terms, hacking is a parasitic strategy: weaken the host just enough to feed
off it, but not enough to kill it.
Breaching computer systems is of course the classic example. Another example is figuring out hacks to fall asleep faster. A third is coming up with a new traffic pattern to reroute traffic around a temporary construction site.

In our first example, the hacker has discovered and thought through the implications of a particular feature of a computer system more thoroughly than the original designer, and synthesized a locally rewarding behavior pattern: an exploit.
In our second example, the body-hacker has figured out a way to manipulate sleep neurochemistry in a corner of design space that was never explored by the creeping tendrils of evolution, because there was never any corresponding environmental selection pressure.
In our third example, the urban planner is creating a temporary hack in service of long-term systemic improvement. The hacker has been co-opted and legitimized by a subsuming system that has enough self-awareness and foresight to see past the immediate dip in conceptual integrity.

Urban planning is a better prototypical example to think about when talking about hacking than software itself, since it is so visual. Even programmers and UX designers themselves resort to urban planning metaphors to talk about complicated software ideas. If you want to ponder examples for some of the abstractions I am talking about here, I suggest you think in terms of city-hacking rather than software hacking, even if you are a programmer.
For the overall vision of hackstability, think about any major urban
region with its never-ending construction and infrastructure projects
ranging from emergency repairs to new mass-transit or water/sewage
projects. If a large city is thriving and persisting, it is likely hackstable.
Increasingly, the entire planet is hackstable.

The atomic prototype of hacking is the short-cut. The urban planner has a better map and understands cartography better, but in one small neighborhood, some little kid knows a shorter, undocumented A-to-B path than the planner. Even though the planner laid out the streets in the first place. What's more, the short-cut may connect points on the map that are otherwise disconnected for non-hackers, because the documented design has no connections between those points.
Disposability and Debt
I got to my definition of hacking after trying to assemble a lot of folk wisdom about programming into a single picture.
The most significant piece for me was Joel Spolsky's article about things you should never do, and in particular his counter-argument to Frederick Brooks' famous idea that you should "plan to throw one away" (the idea in software architecture that you should plan to throw away the first version of a piece of software and start again from scratch).
Spolsky offers practical reasons why this is a bad idea, but what I took away from the post was a broader idea: that it is increasingly a mistake to treat any technology as disposable. Technology is fundamentally not a do-over game today. It is a cumulative game. This has been especially true in the last century, as all technology infrastructure has gotten increasingly inter-connected and temporally layered into techno-geological strata of varying levels of antiquity. We should expect to see disciplines emerge with labels like techno-geography, techno-geology and techno-archaeology. Some layers are functional (techno-geologically active), while others are compressed garbage, like the sunken Gold Rush-era ships on which parts of San Francisco are built.
Non-disposability along with global functional and temporal connectedness means technology is a single evolving entity with a memory. For such systems the notion of technical debt, due to Ward Cunningham, becomes important:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.
For me, the central implicit idea in the definition is the notion of disposability. Everything hinges on whether or not you can throw your work away and move on. We are so used to dealing with disposable things in everyday consumer life that we don't realize that much of our technological infrastructure is in fact non-disposable.
How ubiquitous is non-disposability? I am tempted to conclude that almost nothing of significance is disposable. And by that I mean disposable with insignificant negative consequences, of course. Anything can be thrown away if you are willing to pay the costs.
Your body, New York City and the English language are obviously
non-disposable. Reacting to problems with those things and trying to do
over is either impossible or doomed. The first is impossible to even do
badly today. You can try to do over New York City, but youll get
something else that will probably not serve. If you try to do-over English,
you get Esperanto.
Obviously, the bigger the system and the more interdependent it is with its technological environment, the harder it is to do it over. The dynamics of technical debt naturally lead us to non-disposability, but let's make the connection explicit and talk about the patchwork of hacks and workarounds in a complex system that, as Spolsky argues, represents value that should not be thrown away.
Quantified Technical Debt and Metis
If a system must last indefinitely, cutting corners in an initial design leads to a necessary commitment to doing it right later. This deferral is due to lack of both resources and information in an initial design. You lack the money/time and the information to do it right.

When a new contingency arises, some of the missing information becomes available. But resources do not generally become available at the same time, so the design must be adapted via cheaper improvisation to deal with the contingency (a hack) and the real solution deferred. A hack turns an unquantified bit of technical debt into a quantified bit: when you have a hack, you know the principal, the interest rate and so forth.
It is this quantified technical debt that is the interesting quantity. The designer's original vague sense of incompleteness and inadequacy becomes sharply defined once a hack has exposed a failing, illuminated its costs, and suggested a more permanent solution. The new information revealed by the hack is, by definition, not properly codified and embedded in the system itself, so most of it must live in a human brain as tacit design intelligence (the rest lives in the hack itself, representing the value that Spolsky argues should not be thrown away).
When you have a complex and heavily-used, but slowly-evolving technology, this tacit knowledge accumulating in the heads of hackers constitutes what James Scott calls metis: distributed and contentious barbarian intelligence. It can only be passed on from human to human via apprenticeship, or inform a risky and radical redesign that codifies and embeds it into a new version of the system itself. The longer you wait, the more the debt compounds, increasing risk and the cost of the eventual redesign.
Technological Deficit Economics
This compounding rate is very high because the longer a system
persists, the more tightly it integrates into everything around it, causing
co-evolution. So eventually replacing even a small hack in a relatively
isolated system with a better solution turns into a planet-wide exercise, as
we learned during Y2K.
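Here is a minimal sketch of this compounding dynamic, with the co-evolution effect modeled as an interest rate that itself creeps upward over time; the principal and both rates are invented purely for illustration, not drawn from any real system:

```python
# Quantified technical debt as compound interest. The twist: as the hack gets
# more tightly integrated into its surroundings (co-evolution), the effective
# interest rate on the deferred fix rises. All numbers are illustrative.

def redesign_cost(principal: float, base_rate: float, coupling_growth: float,
                  years: int) -> float:
    """Cost of the eventual proper fix after deferring it for `years`."""
    cost = principal
    rate = base_rate
    for _ in range(years):
        cost *= (1.0 + rate)
        rate += coupling_growth   # deeper integration makes the debt dearer to service
    return cost

if __name__ == "__main__":
    # A hack that would cost 10 units to fix properly today...
    for wait in (1, 5, 10, 20):
        print(wait, "years:", round(redesign_cost(10.0, 0.10, 0.02, wait), 1))
```

With these made-up rates the deferred cost roughly doubles within five years and then runs away; the exact numbers are meaningless, but the faster-than-exponential shape is the point.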
Isolated technologies also get increasingly situated over time, no matter how encapsulated they appear at conception, so that what looks like a do-over from the point of view of a single subsystem (say Linux) looks like a hack with respect to larger, subsuming systems (like the Internet). So debt accumulates at levels of the system that no individual agent is nominally responsible for. This is collective, public technical debt.
Most complex technologies incur quantified technical debt faster than
they can pay it off, which makes them effectively non-disposable. This
includes non-software systems. Sometimes the debt can be ignored
because it ends up being an economic externality (pollution for
automobiles, for instance), but the more all-encompassing the system gets,
the less room there is for anything to be an unaccounted-for externality.
The regulatory environment can be viewed as a co-evolving part of technology and subject to the same rules. The US Constitution and the tax code, for instance, started off as high-conceptual-integrity constructs which have been endlessly hacked through case law and tax code exceptions to the point that they are now effectively non-disposable. It is impossible, as a practical matter, to even conceptualize a Constitution 2.0 to cleanly accommodate the accumulated wisdom in case law.
In general, following Spolsky's logic through to its natural conclusion, it is only worth throwing a system away and building a new one from scratch when it is on the very brink of collapse under the weight of its hacks (and the hackers on the brink of retirement or death, threatening to take the accumulated metis with them). The larger the system, the costlier the redesign, and the more it makes sense to let more metis accumulate.
Beyond a certain critical scale, you can never throw a system away
because there is no hope of ever finding the wealth to pay off the
accumulated technical debt via a new design. The redesign itself
experiences scope creep and spirals out of the realm of human capability.
All you can hope for is to keep hacking and extending its life in
increasingly brittle ways, and hope to avoid a big random event that
triggers collapse. This is technological deficit economics.
Now extend the argument to all of civilization as a single massive
technology that can never be thrown away, and you can make sense of the
idea of hackstability as an alternative to collapse. Maybe if you keep
hacking away furiously enough, and grabbing improvements where
possible, you can keep a system alive indefinitely, or at least steer it to a
safe soft-landing instead of a crash-landing.
Hacker Folk Theorems
With disposability as the anchor element, we can try to arrange a lot
of the other well-known pieces of hacker folk-wisdom into a more
comprehensive jigsaw puzzle view.
The pieces of wisdom are actually precise enough that I think of them
as folk theorems (item 5 actually suggests a way to model hackstability
mathematically, as a sort of hydrostatic, bug-o-static equilibrium; a toy
sketch of that reading follows the list below):
1. "Given enough eyeballs, all bugs are shallow" (Linus's Law,
formulated by Eric S. Raymond)
2. "Perspective is worth 80 IQ points" (Alan Kay)
3. "Fixing a bug is harder than writing the code" (not sure who
first said this)
4. "Reading code is harder than writing code" (Joel Spolsky)
5. "Fixing a bug introduces 2 more" (not sure where I first
encountered this quote)
6. "Release early, release often" (Eric S. Raymond)
7. "Plan to throw one away" (Frederick Brooks, The Mythical
Man-Month)
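To make that bug-o-static reading concrete, here is a minimal simulation
sketch. Everything in it is an illustrative assumption rather than anything
established above: the entropy inflow, the fixing capacity, and the
bugs-spawned-per-fix ratio are made-up parameters, and the model is just
one naive way to operationalize folk theorem 5.

    # Toy model: bugs flow in from entropy, hackers fix what they can,
    # and every fix spawns some new bugs (folk theorem 5). Hackstability
    # is the regime where the fix loop just keeps up; collapse is the
    # regime where it does not. All parameters are illustrative guesses.
    def simulate(entropy_rate=10.0, fix_capacity=40.0, bugs_per_fix=0.7,
                 initial_bugs=50.0, steps=100):
        """Return the open-bug count over time for one parameter setting."""
        bugs = initial_bugs
        history = [bugs]
        for _ in range(steps):
            fixes = min(fix_capacity, bugs)  # hackers fix what they can reach
            bugs = bugs - fixes + fixes * bugs_per_fix + entropy_rate
            history.append(bugs)
        return history

    if __name__ == "__main__":
        stable = simulate(bugs_per_fix=0.7)   # fixes win: bugs plateau (~33)
        runaway = simulate(bugs_per_fix=1.2)  # fixes lose: debt compounds
        print("hackstable plateau:", round(stable[-1], 1))
        print("runaway debt:", round(runaway[-1], 1))

In this toy version, hackstability is simply the equilibrium where each fix
spawns fewer than one new bug and the fixing capacity stays ahead of the
entropy inflow; push either parameter past its threshold and the same loop
produces the collapse scenario instead.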
Take a shot at using these ideas to put together a picture of how
complex technological systems evolve, using the definition of hacking that
I offered and the idea of technical debt as the anchor element (I started
elaborating on this full picture, but it threatened to run to another 5000
words).
When you're done, you may want to watch (or rewatch) Alan Kay's
talk, Programming and Scaling, which I've referenced before.
I don't know of any systematic studies of the truth of these folk-wisdom phenomena (I think I saw one study of the bugs-eyeballs
conjecture that concluded it was somewhat shaky, but I can't find the
reference). But I have anecdotal evidence from my own limited experience
with engineering, and somewhat more extensive experience as a product
manager, that all the statements have significant substance behind them.
So these are not casual, throwaway remarks. Each can sustain hours
of thoughtful and stimulating debate between any two people whove
worked in technology.
The Ubiquity of Hacking
At this point, it is useful to look for more examples that fit the
definition of hacking I offered. The following seem to fit:
1. The pick-up artist movement should really be called female-brain hacking (or alternatively, alpha-status hacking)
2. Disruptive technologies represent market-hacking.
3. Lifestyle design can be viewed as standard-of-living hacking
4. One half of the modern smart city/neo-urbanist movement can
be understood as city-hacking ("smart cities" includes clean-sheet high-modernist smart cities in China, but let's leave those
out)
5. All of politics is culture hacking
6. Guerrilla warfare and terrorism represent military hacking
7. Almost the entire modern finance industry is economics-hacking
8. Most intelligence on both sides of any adversarial human table
(VCs vs. entrepreneurs, interviewers vs. interviewees) is
hacker intelligence.
9. Fossil fuels represent energy hacking
Looking at these, it strikes me that not all examples are equally
interesting. Anything that has the nature of a human-vs.-human arms race
(including the canonical black-hat vs. white-hat information security race
and PUA) is actually a pretty wimpy example of hackstability dynamics.
The really interesting cases are the ones where one side is a human
intelligence, and the other side is a non-human system that simply gets
more complex and less disposable over time.
But interesting or not, all these are really interconnected patterns of
hacking in what is increasingly Planet Hacker.
The Third Future
So what is the hackstable future? What reason is there to believe that
hacking can keep up with the downward pull of entropy? I am not entirely
sure. The way big old cities seem to miraculously survive indefinitely on
the brink of collapse gives me some confidence that hackstability is a
meaningful concept.
Collapse is the easiest of the three scenarios to understand, since it
requires no new concepts. If the rate of entropy accumulation exceeds the
rate at which we can keep hacking, we may get sudden collapse.
The Singularity concept relies on a major unknown-unknown type
hypothesis: self-improving AI. A system that feeds on entropy rather than
being dragged down by it. This is rather like Taleb's notion of anti-fragility, so I am assuming there are at least a few credible ideas to be
discovered here. These I have collectively labeled autopoietic lift. Anti-gravity for complex systems that are subject to accumulating entropy, but
are (thermodynamically) open enough that they might still evolve in
complexity. So far, we've been experiencing two centuries of lift as the
result of a major hack (fossil fuels). It remains to be seen whether we can
get to sustainable lift.
Hackstability is the idea that well get enough autopoietic lift through
hacks and occasional advances in anti-fragile system design to just balance
entropy gravity, but not enough to drive exponential self-improvement.
Viewed another way, it is a hydrostatic balance between global hacker
metis (barbarian intelligence) and codified systemic intelligence
(civilizational intelligence). In this view, hackstability is the slow
dampening of the creative-destruction dialectic between barbarian and
civilized modes of existence that has been going on for a few thousand
years. If you weaken the metis enough, the system collapses. If you
strengthen it too much, again it collapses (a case of the hackers shorting
the system as predators rather than exploiting it parasitically).
I dont yet know whether these are well-posed concepts.
I am beginning to see the murky outlines of a clean evolutionary
model that encompasses all three futures though. One with enough
predictive power to allow coarse computation of the relative probabilities
of the three futures. This is the idea Ive labeled the Electric Leviathan,
and chased for several years. But it remains ever elusive. Each time I
think I've found the right way to model it, it turns out I've just missed my
mark. Maybe the idea is my white whale and I'll never manage a digital-age update to Hobbes.
So I might be seeing things. In a way, my own writing is a kind of
idea-hacking: using local motifs to illuminate some sort of subtlety in a
theme and invalidate some naive Grand Unified Theory without offering a
better candidate myself. Maybe all I can hope for is to characterize the
Electric Leviathan via a series of idea hacks without ever adequately
explaining what I mean by the phrase.

Welcome to the Future Nauseous


May 9, 2012
Both science fiction and futurism seem to miss an important piece of
how the future actually turns into the present. They fail to capture the way
we dont seem to notice when the future actually arrives.
Sure, we can all see the small clues all around us: cellphones, laptops,
Facebook, Prius cars on the street. Yet, somehow, the future always seems
like something that is going to happen rather than something that is
happening; future perfect rather than present-continuous. Even the nearest
of near-term science fiction seems to evolve at some fixed receding-horizon distance from the present.
There is an unexplained cognitive dissonance between changing-reality-as-experienced and change as imagined, and I don't mean specifics
of failed and successful predictions.
My new explanation is this: we live in a continuous state of
manufactured normalcy. There are mechanisms that operate (a mix of
natural, emergent and designed) that work to prevent us from realizing
that the future is actually happening as we speak. To really understand the
world and how it is evolving, you need to break through this manufactured
normalcy field. Unfortunately, that leads, as we will see, to a kind of
existential nausea.
The Manufactured Normalcy Field
Life as we live it has this familiar sense of being a static, continuous
present. Our ongoing time travel (at a velocity of one second per second)
never seems to take us to a foreign place. It is always 4 PM; it is always
tea-time.
Of course, a quick look back to your own life ten or twenty years back
will turn up all sorts of evidence that your life has, in fact, been radically
transformed, both at a micro-level and the macro-level. At the micro-level,
I now possess a cellphone that works better than Captain Kirk's
communicator, but I don't feel like I am living in the future I imagined
back then, even a tiny bit. For a macro example, back in the eighties,
people used to paint scary pictures of the world with a few billion more
people and water wars. I think I wrote essays in school about such things.
Yet we're here now, and I don't feel all that different, even though the
scary predicted things are happening on schedule. To other people (this is
important).
Try and reflect on your life. I guarantee that you wont be able to feel
any big change in your gut, even if you are able to appreciate it
intellectually.
The psychology here is actually not that interesting. A slight
generalization of normalcy bias and denial of black-swan futures is
sufficient. What is interesting is how this psychological pre-disposition to
believe in an unchanging, normal present doesnt kill us.
How, as a species, are we able to prepare for, create, and deal with,
the future, while managing to effectively deny that it is happening at all?
Futurists, artists and edge-culturists like to take credit for this. They
like to pretend that they are the lonely, brave guardians of the species who
deal with the real future and pre-digest it for the rest of us.
But this explanation falls apart with just a little poking. It turns out
that the cultural edge is just as frozen in time as the mainstream. It is just
frozen in a different part of the time theater, populated by people who seek
more stimulation than the mainstream, and draw on imagined futures to
feed their cravings rather than inform actual future-manufacturing.
The two beaten-to-death ways of understanding this phenomenon are
due to McLuhan (We look at the present through a rear-view mirror. We
march backwards into the future.) and William Gibson (The future is
already here; it is just unevenly distributed.)
Both framing perspectives have serious limitations that I will get to.
What is missing in both needs a name, so I'll call the familiar sense of a
static, continuous present a Manufactured Normalcy Field. For the rest of
this post, I'll refer to this as the Field for short.
So we can divide the future into two useful pieces: things coming at
us that have been integrated into the Field, and things that have not. The
integration kicks in at some level of ubiquity. Gibson got that part right.
Lets call the crossing of the Field threshold by a piece of futuristic
technology normalization (not to be confused with the postmodernist
sense of the term, but related to the mathematical sense). Normalization
involves incorporation of a piece of technological novelty into larger
conceptual metaphors built out of familiar experiences.
A simple example is commercial air travel.
The Example of Air Travel
A great deal of effort goes into making sure passengers never realize
just how unnatural their state of motion is, on a commercial airplane.
Climb rates, bank angles and acceleration profiles are maintained within
strict limits. Back in the day, I used to do homework problems to calculate
these limits.
Airline passengers don't fly. They travel in a manufactured normalcy
field. Space travel is not yet common enough, so there is no manufactured
normalcy field for it.
When you are sitting on a typical modern jetliner, you are traveling at
500 mph in an aluminum tube that is actually capable of some pretty scary
acrobatics. Including generating brief periods of zero-g.
Yet a typical air traveler never experiences anything that one of our
ancestors could not experience on a fast chariot or a boat.
Air travel is manufactured normalcy. If you ever truly experience
what modern air travel can do, chances are, the experience will be framed
as either a bit of entertainment ("fighter pilot for a day!" which you will
understand as "expensive roller-coaster") or a visit to an alien-specialist
land (American aerospace engineering students who participate in NASA
summer camps often get to ride on the "vomit comet," modified Boeing
727s that fly the zero-g training missions).
This means that even though air travel is now a hundred years old, it
hasnt actually arrived psychologically. A full appreciation of what air
travel is has been kept from the general population through manufactured
normalcy.
All we're left with is out-of-context data that we are not equipped to
really understand in any deep way ("Oh, it used to take months to sail
from India to the US in the seventeenth century, and now it takes a 17-hour
flight, how interesting.")
Think about the small fraction of humanity who have actually
experienced air travel qua air travel, as a mode of transport distinct from
older ones. These include fighter pilots, astronauts and the few air
travelers who have been part of a serious emergency that forced (for
instance) an airliner to lose 10,000 feet of altitude in a few seconds.
Of course, manufactured normalcy is never quite perfect (passengers
on the Concorde could see the earths curvature for instance), but the point
is, it is good enough that behaviorally, we do not experience the censored
future. We dont have to learn the future in any significant way (what
exactly have you learned about air travel that is not a fairly trivial port
of train-travel behavior?)
So the way the future of air travel in 1900 actually arrived was the
following:
1. A specialized future arrived for a subset who were trained and
equipped with new mental models to comprehend it in the
fullest sense, but in a narrowly instrumental rather than
appreciative way. A fighter pilot does not necessarily
experience flight the way a bird does.
2. The vast majority started experiencing a manufactured
normalcy, via McLuhan-esque extension of existing media.
3. Occasionally, the manufactured normalcy broke down for a
few people by accident, who were then exposed to the
future without being equipped to handle it.
Air travel is also a convenient metaphor for the idea of existential
nausea I'll get to. If you experience air travel in its true form and are not
prepared for it by nature and nurture, you will throw up.
The Future Arrives via Specialization and Metaphor Expansion
So this is a very different way to understand the future: it doesnt
arrive in a temporal sense. It arrives mainly via social fragmentation.
Specialization is how the future arrives.
And in many cases, arrival-via-specialization means psychological
non-arrival. Not every element of the future brings with it a visceral
human experience that at least a subset can encounter. There are no
pilots in the arrival of cheap gene sequencing, for instance. At least not
yet. When you can pay to grow a tail, that might change.
There is a subset of humanity that routinely does DNA sequencing
and similar things every day, but if the genomic future has arrived for
them, it has arrived as a clean, purely cerebral-instrumental experience,
transformed into a new kind of symbol-manipulation and equipment-operation expertise.
Arrival-via-specialization requires potential specialists. Presumably,
humans with extra high tolerance for g-forces have always existed, and
technology began selecting for that trait once airplanes were invented.
This suggests that only those futures arrive for which there is human
capacity to cope. This conclusion is not true, because a future can arrive
before humans figure out whether they have the ability to cope. For
instance, the widespread problem of obesity suggests that food-abundance
arrived before we figured out that most of us cannot cope. And this is one
piece of the future that cannot be relegated to specialists. Others cannot
eat for you, even though others can fly planes for you.

So what about elements of the future that arrive relatively
successfully for everybody, like cellphones? Here, the idea I called the
Milo Criterion kicks in: successful products are precisely those that do not
attempt to move user experiences significantly, even if the underlying
technology has shifted radically. In fact the whole point of user
experience design is to manufacture the necessary normalcy for a product
to succeed and get integrated into the Field. In this sense user experience
design is reductive with respect to technological potential.
So for this bucket of experiencing the future, what we get is a
Darwinian weeding out of those manifestations of the future that break the
continuity of technological experience. So things like Google Wave fail.
Just because something is technically feasible does not mean it can
be psychologically normalized into the Field.
The Web arrived via the document metaphor. Despite the rise of the
stream metaphor for conceptualizing the Web architecturally, the user-experience metaphor is still descended from the document.
The smartphone, which I understand conceptually these days via a
pacifier metaphor, is nothing like a phone. Voice is just one clunky feature
grandfathered into a handheld computer that is engineered to loosely
resemble its nominal ancestor.
The phone in turn was a gradual morphing of things like speaking
tubes. This line of descent has an element of conscious design, so
technological genealogy is not as deterministic as biological genealogy.
The smartphone could have developed via metaphoric descent from
the hand-held calculator; "Oh, I can now talk to people on my calculator"
would have been a fairly natural way to understand it. That it was the
phone rather than the calculator is probably partly due to path-dependency
effects and partly due to the greater ubiquity of phones in mainstream life.
What Century Do We Actually Live In?
I haven't done a careful analysis, but my rough, back-of-the-napkin
working out of the implications of these ideas suggests that we are all
living, in user-experience terms, in some thoroughly mangled, overloaded,
stretched and precarious version of the 15th century that is just good
enough to withstand casual scrutiny. I'll qualify this a bit in a minute, but
stay with me here.
What about edge-culturists who think they are more alive to the real
oncoming future?
I am convinced that they are frozen in time too. The edge today looks
strangely similar to the edge in any previous century. It is defined by
reactionary musical and sartorial tastes and being a little more outrageous
than everybody else in challenging the prevailing culture of manners.
Edge-dwelling is a social rather than technological phenomenon. If it
reveals anything about technology or the future, it is mostly by accident.
Art occasionally rises to the challenge of cracking open a window
onto the actual present, but mostly restricts itself to creating dissonance in
the mainstreams view of the imagined present, a relative rather than
absolute dialectic.
Edge culturists end up living lives that are continuously repeated
rehearsal loops for a future that never actually arrives. They do experience
a version of the future a little earlier than others, but the mechanisms they
need to resort to are so cumbersome, that what they actually experience is
the mechanisms rather than the future as it will eventually be lived.
For instance, the Behemoth, a futuristic bicycle built by Steven
Roberts in 1991, had many features that have today eventually arrived for
all via the iPhone. So in a sense, Roberts didnt really experience the
future ahead of us, because what shapes our experience of universal
mobile communication definitely has nothing to do with a bicycle and a
lot to do with pacifiers (I dont think Roberts had a pacifier in the
Behemoth).
At a more human level, I find that I am unable to relate to people who
are deeply into any sort of cyberculture or other future-obsessed edge
zone. There is a certain extreme banality to my thoughts when I think
about the future. Futurists as a subculture seem to organize their lives as
future-experience theaters. These theaters are perhaps entertaining and
interesting in their own right, as a sort of performance art, but are not of
much interest or value to people who are interested in the future in the
form it might arrive in, for all.
It is easy to make the distinction explicit. Most futurists are interested
in the future beyond the Field. I am primarily interested in the future once
it enters the Field, and the process by which it gets integrated into it. This
is also where the future turns into money, so perhaps my motivations are
less intellectual than they are narrowly mercenary. This is also a more
complicated way of making a point made by several marketers:
technology only becomes interesting once it becomes technically boring.
Technological futurists are pre-Fieldists. Marketing futurists are post-Fieldists.
This also explains why so few futurists make any money. They are
attracted to exactly those parts of the future that are worth very little. They
find visions of changed human behavior stimulating. Technological
change serves as a basis for constructing aspirational visions of changed
humanity. Unfortunately, technological change actually arrives in ways
that leave human behavior minimally altered.
Engineering is about finding excitement by figuring out how human
behavior could change. Marketing is about finding money by making sure
it doesnt. The future arrives along a least-cognitive-effort path.
This suggests a different, subtler reading of Gibson's unevenly-distributed line.
It isnt that what is patchily distributed today will become widespread
tomorrow. The mainstream never ends up looking like the edge of today.
Not even close. The mainstream seeks placidity while the edge seeks
stimulation.
Instead, what is unevenly distributed are isolated windows into the
un-normalized future that exist as weak spots in the Field. When the
windows start to become larger and more common, economics kicks in
and the Field maintenance industry quickly moves to create specialists,
codified knowledge and normalcy-preserving design patterns.

Time is a meaningless organizing variable here. Is gene-hacking
more or less futuristic than pod-cities or bionic chips?
The future is simply a landscape defined by two natural (and non-temporal) boundaries. One separates the currently infeasible from the
feasible (hyperspatial travel is unfortunately infeasible), and the other
separates the normalized from the un-normalized. The Field is
manufactured out of the feasible-and-normalized. We call it the present,
but it is not the same as the temporal concept. In fact, the labeling of the
Field as the present is itself part of the manufactured normalcy. The
labeling serves to hide a complex construction process underneath an
apparently familiar label that most of us think we experience but dont
really (as generations of meditation teachers exhorting us to live in the
present try to get across; they mostly fail because their sense of time has
been largely hijacked by a cultural process).
What gets normalized first has very little to do with what is easier,
and a lot to do with what is more attractive economically and politically.
Humans have achieved some fantastic things like space travel. They have
even done things initially thought to be infeasible (like heavier-than-air
flight) but other parts of a very accessible future lie beyond the
Manufactured Normalcy Field, seemingly beyond the reach of economic
feasibility forever. As the grumpy old man in an old Reader's Digest joke
grumbled, "We can put a man on the moon, but we cannot get the jelly
into the exact center of a jelly doughnut."
The future is a stream of bug reports in the normalcy-maintenance
software that keeps getting patched, maintaining a hackstable present
Field.
Field Elasticity and Attenuation
A basic objection to my account of what you could call the futurism
dialectic is that 2012 looks nothing like the fifteenth century, as we
understand it today, through our best reconstructions.

My answer to that objection is simple: as everyday experiences get
mangled by layer after layer of metaphoric back-referencing, these
metaphors get reified into a sort of atemporal, non-physical realm of
abstract experience-primitives.
These are sort of like Platonic primitives, except that they are reified
patterns of behavior, understood with reference to a manufactured
perception of reality. The Field does evolve in time, but this evolution is
not a delayed version of real change or even related to it. In fact
movement is a bad way to understand how the Field transforms. Its
dynamic nature is best understood as a kind of stretching. The Field
stretches to accommodate the future, rather than moving to cover it.
It stretches in its own design space: that of ever-expanding, reifying,
conceptual metaphor. Expansion as a basic framing suggests an entirely
different set of risks and concerns. We neednt worry about acceleration.
We need to worry about attenuation. We need not worry about not being
able to keep up with a present that moves faster. We need to worry about
the Field expanding to a breaking point and popping, like an over-inflated
balloon. We need not worry about computers getting ever faster. We need
to worry about the document metaphor breaking suddenly, leaving us
unable to comprehend the Internet.
Dating the planetary UX to the fifteenth century is something like
chronological anchoring of the genealogy of extant metaphors to the
nearest historical point where some recognizable physical basis exists.
The 15th century is sort of the Garden of Eden of the modern experience
of technology. It represents the point where our current balloon started to
get inflated.
When we think of differences between historical periods, we tend to
focus on the most superficial of human differences that have very little
coupling to technological progress.
Quick, imagine the fifteenth century. You're thinking of people in
funny pants and hats, right (if you're of European descent. Mutatis
mutandis if you are not)? Perhaps you are thinking of dimensions of social
experience like racial diversity and gender roles.

Think about how trivial and inconsequential changes on those fronts
are, compared to the changes on the technological front. We've landed on
the moon, we screw around with our genes, we routinely fly at 30,000 feet
at 500 mph. You can repeat those words a thousand times and you still
won't be able to appreciate the magnitude of the transformation the way
you can appreciate the magnitude of a radical social change (a Black man
is president of the United States!).
If I am still not getting through to you, imagine having a conversation
over time-phone with someone living in 3000 BC. Assume theres a Babel
fish in the link. Which of these concepts do you think would be easiest to
get across?
1. In our time, women are considered the equal of men in many
parts of the world
2. In our time, a Black man is the most powerful man in the world
3. In our time, we can sequence our genes
4. In our time, we can send pictures of what we see to our friends
around the world instantly
Even if the 3000 BC guy gets some vague, magic-based sense of what
item 4 means, he or she will have no comprehension of the things in our
mental models behind that statement (Facebook, Instagram, the Internet,
wireless radio technology). Item 3 will not be translatable at all.
But this does not mean that he does not understand your present. It
means you do not understand your own present in any meaningful way.
You are merely able to function within it.
Appreciative versus Instrumental Comprehension
If your understanding of the present were a coherent understanding
and appreciation of your reality, you would be able to communicate it. I
am going to borrow terms from John Friedman and distinguish between
two sorts of conceptual metaphors we use to comprehend present reality:
appreciative and instrumental.

Instrumental (what Friedman misleadingly called "manipulative")
conceptual metaphors are basic UX metaphors like scrolling web pages,
or the metaphor of the keypad on a phone. Appreciative conceptual
metaphors help us understand present realities in terms of their
fundamental dynamics. So my use of the metaphor "smartphones are
pacifiers" (it looks like a figurative metaphor, but once you get used to it,
you find that it has the natural depth of a classic Lakoff conceptual
metaphor) is an appreciative conceptual metaphor.
Instrumental conceptual metaphors allow us to function. Appreciative
ones allow us to make sense of our lives and communicate such
understanding.
So our failure to communicate the idea of Instagram to somebody in
3000 BC is due to an atemporal and asymmetric incomprehension: we
possess good instrumental metaphors but poor appreciative ones.
So this failure has less to do with Arthur C. Clarke's famous assertion
that a sufficiently advanced technology will seem like magic to those from
more primitive eras, and more to do with the fact that the Field actively
prevents us from ever understanding our own present on its own terms.
We manage to function and comprehend reality in instrumental ways
while falling behind in comprehending it in appreciative ways.
So my update to Clarke would be this: any sufficiently advanced
technology will seem like magic to all humans at all times. Some will
merely live within a Field that allows them to function within specific
advanced technology environments.
Take item 4 for instance. After all, it is Instagram, a reference to a
telegram. We understand Facebook in terms of school year-books. It is
exactly this sort of pattern of purely instrumental comprehension that leads
to the plausibility of certain types of Internet hoaxes, like the one that did
the rounds recently about Abraham Lincoln having patented a version of
the Facebook idea.
The fact that the core idea of Facebook can be translated to the
language of Abe's world of newspapers suggests that we are papering over
(I had to, sorry) complicated realities with surfaces we can understand.
The alternative conclusion is silly (that the technology underlying
Facebook is not really more expressive than the one underlying
newspapers).
Facebook is not a Yearbook. It is a few warehouse-sized buildings
containing racks and racks of electronic hardware sheets, each containing
etched little slivers of silicon at their core. Each of those little slivers
contains more intricacy than all the jewelry designers in history together
managed to put into all the earrings they ever made. These warehouses are
connected via radio and optic-fiber links to...
Oh well, forget it. It's a frikkin' Yearbook that contains everybody.
That's enough for us to deal with it, even if we cannot explain what we're
doing or why to Mr. 3000 BC.
The Always-Unreal
Have you ever wondered why Alvin Toffler's writings seem so
strange today? Intellectually you can recognize that he saw a lot of things
coming. But somehow, he imagined the future in future-unfamiliar terms.
So it appears strange to us. Because we are experiencing a lot of what he
saw coming, translated into terms that would actually have been
completely familiar to him.
His writings seem unreal partly because they are impoverished
imaginings of things that did not exist back then, but also partly because
his writing seems to be informed by the idea that the future would define
itself. He speaks of future-concepts like (say) modular housing in terms
that make sense with respect to those concepts.
When the future actually arrived, in the form of couchsurfing and
Airbnb, it arrived translated into a crazed-familiarity. Toffler sort of got
the basic idea that mobility would change our sense of home. His failure
was not in failing to predict how housing might evolve. His failure was in
failing to predict that we would comprehend it in terms of Bed and
Breakfast metaphors.

This is not an indictment of Toffler's skill as a futurist, but of the very
methods of futurism. We build conceptual models of the world as it exists
today, posit laws of transformation and change, simulate possible futures,
and cherry-pick interesting and likely-sounding elements that appear
robustly across many simulations and appear feasible.
And then we stop. We do not transform the end-state conceptual
models into the behavioral terms we use to actually engage and understand
reality-in-use, as opposed to reality-in-contemplation. We forget to do the
most important part of a futurist prediction: predicting how user
experience might evolve to normalize the future-unfamiliar.
Something similar happens with even the best of science fiction.
There is a strangeness to the imagining that seems missing when the
imagined futures finally arrive, pre-processed into the familiar.
But here, something slightly different plays out, because the future is
presented in the context of imaginary human characters facing up to
timeless Campbellian human challenges. So we have characters living out
lives involving very strange behaviors in strange landscapes, wearing
strange clothes, and so forth. This is what makes science fiction science
fiction after all. George Lucas' space opera is interesting precisely because
it is not set in the Wild West or Mt. Olympus.
We turn imagined behavioral differences that the future might bring
into entertainment, but when it actually arrives, we make sure the
behavioral differences are minimized. The Field creates a suspension of
potential disbelief.
So both futurism and science fiction are trapped in an always-unreal
strange land that must always exist at a certain remove from the
manufactured-to-be-familiar present. Much of present-moment science
fiction and fantasy is in fact forced into parallel universe territory not
because there are deep philosophical counterfactuals involved (a lot of
Harry Potter magic is very functionally replicable by us Muggles) but
because it would lose its capacity to stimulate. Do you really want to read
about a newspaper made of flexible e-ink that plays black-and-white
movies over WiFi? That sounds like a bad startup pitch rather than a good
fantasy novel.
The Matrix was something of an interesting triumph in this sense, and
in a way smarter than one of its inspirations, Neuromancer, because it
made Gibson's cyberspace co-incident with a temporally frozen reality-simulacrum.
But it did not go far enough. The world of 1997 (or wherever the
Matrix decided to hit Pause) was itself never an experienced reality.
1997 never happened. Neither did 1500 in a way. What we did have
was different stretched states of the Manufactured Normalcy Field in 1500
and 1997. If the Matrix were to happen, it would have to actually keep that
stretching going.
Breathless
There is one element of the future that does arrive on schedule,
uncensored. This is its emotional quality. The pace of change is
accelerating and we experience this as Field-stretching anxiety.
But emotions being what they are, we cannot separate future anxiety
from other forms of anxiety. Are you upset today because your boss yelled
at you or because subtle cues made the accelerating pace of change leak
into your life as a tear in the Field?
Increased anxiety is only one dimension of how we experience
change. Another dimension is a constant sense of crisis (which has,
incidentally, always prevailed in history).
A third dimension is a constant feeling of chaos held at bay (another
constant in history), just beyond the firewall of everyday routine (the Field
is everyday routine).
Sometimes we experience the future via a basic individual-level "it
won't happen to me" normalcy bias. Things like SARS or dying in a plane
crash are uncomprehended future-things (remember, you live in a
manufactured reality that has been stretching since the fifteenth century)
that are nominally in our present, but havent penetrated the Field for most
of us. Most of us substitute probability for time in such cases. As time
progresses, the long tail of the unexperienced future grows fatter. A lot
more can happen to us in 2012 than in 1500, but we try to ensure that very
little does happen.
The uncertainty of the future is about this long tail of waiting events
that the Field hasnt yet digested, but we know exists out there, as a space
where Bad Things Happen to People Like Me but Never to Me.
In a way, when we ask, is there a sustainable future, we are not really
asking about fossil fuels or feeding 9 billion people. We are asking can the
Manufactured Normalcy Field absorb such and such changes?
We arent really tied to specific elements of todays lifestyles. We are
definitely open to change. But only change that comes to us via the Field.
Weve adapted to the idea of people cutting open our bodies, stopping our
hearts and pumping our blood through machines while they cut us up. The
Field has digested those realities. Various sorts of existential anesthetics
are an important part of how the Field is manufactured and maintained.
Our sense of impending doom or extraordinary potential has to do
with the perceived fragility or robustness of the Field.
It is possible to slide into a sort of technological solipsism here and
declare that there is no reality; that only the Field exists. Many
postmodernists do exactly that.
Except that history repeatedly proves them wrong. The Field is
distinct from reality. It can and does break down a couple of times in
every human lifetime. We're coming off a very long period of Field
stability since World War II. Except for a few poor schmucks in places like
Vietnam, the Field has been precariously preserved for most of us.
When larger global Fields break, we experience dark ages. We
literally cannot process change at all. We grope, waiting for an age when it
will all make sense again.

So we could be entering a Dark Age right now, because most of us
don't experience a global Field anymore. We live in tiny personal fields.
We can only connect socially with people whose little-f fields are similar
to ours. When individual fields also start popping, psychic chaos will start
to loom.
The scary possibility in the near future is not that we will see another
radical break in the Field, but a permanent collapse of all fields, big and
small.
The result will be a state of constant psychological warfare between
the present and the future, where reality changes far too fast for either a
global Field or a personal one to keep up. Where adaptation-by-specialization turns into a crazed, continuous reinvention of oneself for
survival. Where the reinvention is sufficient to sustain existence
financially, but not sufficient to maintain continuity of present-experience.
Instrumental metaphors will persist while appreciative ones will collapse
entirely.
The result will be a world population with a large majority of people
on the edge of madness, somehow functioning in a haze where past,
present and future form a chaotic soup (have you checked out your
Facebook feed lately?) of drunken perspective shifts.
This is already starting to happen. Instead of a newspaper feeding us
daily doses of a shared Field, we get a nauseating mix of news from
forgotten classmates, slogan-placards about issues trivial and grave,
revisionist histories coming at us via a million political voices, the future
as a patchwork quilt of incoherent glimpses, all mixed in with pictures of
cats doing improbable things.
The waning Field, still coming at us through weakening media like
television, seems increasingly like a surreal zone of Wonderland madness.
We aren't being hit by Future Shock. We are going to be hit by Future
Nausea. You're not going to be knocked out cold. You're just going to
throw up in some existential sense of the word. I'd like to prepare. I wish
some science fiction writers would write a few nauseating stories.

Welcome to the Future Nauseous.


For the record, I haven't read Sartre's novel Nausea. From
Wikipedia, it seems vaguely related to my use of the term. I might read it.
If somebody has read it, please help connect some dots here.

Technology and the Baroque Unconscious


November 11, 2011
Engineering romantics fall in love with the work of Jorge Luis Borges
early in their careers. Long after Douglas Hofstadter is forgotten for his
own work in AI (which seems dated today), he will be remembered with
gratitude for introducing Borges to generations of technologists.
Borges once wrote:
I should define the baroque as that style which
deliberately exhausts (or tries to exhaust) all its own
possibilities and which borders on its own parody... I would
say that the final stage of all styles is baroque when that
style only too obviously exhibits or overdoes its own
tricks.
The baroque in Borges' sense is self-consciously humorous. Borges'
own work in this sense is a baroque exploration of the processes of
thought. As one critic (see the footnote on this page) noted, Borges'
writings serve to dramatize the process of thought in the apprehension of
truth.
Unlike art, complex and mature technology (not all technology) is
baroque without being self-conscious. At best there is a collective
sensibility informing its design that can be called a baroque unconscious.
This post is a sequel of sorts to The Gollum Effect. You can read it
stand-alone, but you will probably get more out of it if you read that first.
Within the Lord of the Rings metaphor I developed in that post, baroque
unconscious is basically my answer to the question, if extreme consumers
are Gollums, who is Sauron?
This idea of a baroque unconscious helps clarify things about the
phenomenon of technological refinement that have been bothering me for
a while. In particular, it helps distinguish among three kinds of refinement
in technological artifacts: refinement that is useful to the user, refinement
(often exploitative) that is useful to somebody besides the user, and
refinement that benefits nobody at all.
It is this last characteristic that interests me. Refinement that benefits
nobody (anything that attracts the adjective "overwrought") is what I
attribute to the workings of the baroque unconscious. And I write this fully
aware of the irony that this kind of post might be viewed as overwrought
analysis by some.
Interestingly though, viewed from this perspective, the other two
kinds of apparently intentional refinement can be seen as opportunistic
exploitation. They arise through manipulation of those elements of the
workings of the baroque unconscious that happen to be consciously
recognized.
In other words, I am arguing that the collective unconscious
component in the evolution of technology is primary. The conscious
component is peripheral.
Or to borrow another idea from art, it is technology for technologys
sake. And unlike in art, there is no primary artist.
The Baroque in Art
There is no such thing as the baroque unconscious in art.
When art exhausts its own possibilities unintentionally we generally
characterize it as camp (what Susan Sontag aptly called "failed
seriousness" in Notes on "Camp"). The baroque element in the work is
evident to observers, even if the creator lacks the self-awareness to
recognize it.
When art exhausts its own possibilities as a side-effect, while
pursuing other objectives, we do not call it baroque. We call it either
cynical or tasteless. The auteur theory of art applies well enough that if we
cannot reasonably impute baroque intentions to the artist, we feel safe
assuming that the artist was aware of the baroque consequences of his/her
decisions. Michael Bay's Transformers movies (especially the last
installment) are examples. They are both tasteless and cynical, but they are
not campy or baroque.
Technology is generally more complex and collaborative than even
the most collaborative kinds of art, such as movies. The process can create
things that exhaust certain possibilities, with no single creator or observer
being fully conscious of it. Yet, we cannot call such things campy, cynical
or tasteless.
To understand this, suspend for a moment your default idea of what it
means for something to be baroque. You are probably thinking of
European architecture of a certain period with an exaggerated and visible
sort of drama on the surface. That prototypical idea of the baroque is what
we tend to apply, in unreconstructed form, to technology: clunky user
interfaces and a degree of featuritis that has us groaning.
This is a narrow sense of the baroque. The original architectural
instances served a specific function: to impress and intimidate commoners
with a display of awe-inspiring grandeur (some art historians have argued
that the original examples of baroque were therefore not baroque at all, but
cynical). The exhaustion of possibilities in that kind of baroque is all on
the surface.
But things can be baroque without being visibly so, depending on the
audience for the original function. The key is that the governing aesthetic
must seek to self-consciously exhaust its own possibilities.
Invisible, but still intentional baroque is particularly common in
modern American pop culture. Most viewers of The Simpsons for instance,
miss the bulk of the hidden pop-culture references in the show. A loyal
subculture of fans devotedly mines these references and discusses them
online. While this sort of thing is often cynical (deliberate creation of
baroque plots to create addiction, as in the show Lost), in the case of The
Simpsons, I suspect the writers genuinely seek to exhaust the possibilities
of the artistic technique of reference, without annoying the mainstream
audience.

The Baroque in Technology
In technology, Apple's products border on the baroque in their
exaggerated simplicity. Once the iPad achieves the edge-to-edge display
and maximal technically feasible thinness for instance, it is hard to
imagine how one would parody it; there is no room left for exaggeration
in the physical form at least. Certain possibilities will have been
exhausted.
This sort of intentional (and therefore artistic) baroque in technology,
however, is not really what interests me. What fascinates me is technology
that grows baroque without anyone consciously intending to exhaust any
design possibilities. Social forces, such as the competitive pressures of an
arms race, or the demands of extreme lead customers, dont seem to be
sufficient explanations.
Art is usually the outcome of a singular vision. But technology, even
the auteur form of technology practiced by Steve Jobs, is deeply
collectivist. Engineering real things is far too hard for one mind to impose
a singular vision on all but the simplest of products. When a piece of
technology appears to be the work of a single mind and possesses the
dense layers of coherent complexity that can only be the product of a large
team, it is evidence of a deep coherence in the team itself. In such a team,
individuals trust the collective to the point that they feel comfortable
narrowing their domain of conscious concern to their own work.
The baroque sensibility resides in the collective unconscious of the
team that produces it. The baroque in the whole is greater than the sum of
the baroque accounted for by the self-awareness of the many individuals.
Moderately obsessive-compulsive attention to detail at the level of
individuals oblivious to larger purposes, eventually turns into baroque
exhaustion of possibilities at the level of the whole product.
This brings us to the idea of refinement, and the question of when,
why and how wrought keels over into overwrought.

Refinement and the Baroque
When I first started thinking about refinement, in the context of
addictive consumption (as in, refined cocaine), I had examples such as
American fast food in mind: precisely engineered concoctions of key
refined substances (salt, sugar and fat) designed to cause addictive overconsumption.
The pathologies of consumerism can be traced to an entire universe of
such refined goods. I offered the term gollumized to describe humans who
end up being entirely defined by a pattern of such consumptive behavior,
much like the character of Gollum in the Lord of the Rings, with his
addictive, enslaving attachment to the One Ring: a highly refined, pure
essence.
Something bothered me however, about the implicit equation of
refinement with pathological addictive dependence on the one hand, and
cynical exploitation on the other.
The refinement in the construction of something like the space shuttle
does not seem pathological. It seems necessary.
A highly refined kitchen knife that plays a role in your creative self-expression as a chef seems somehow different from a McDonald's
hamburger or an expensive wine, both of which are consumption-addiction refined in their own ways.
Even with hamburgers, while acknowledging that they are effectively
exploitative and addictive foods designed to enrich the food industry by
ruining the health of consumers, it is clearly farfetched to believe that
there is some vast conspiracy that includes every biochemist.
The idea that the creation and sale of such foods is more a matter of
cynical opportunism is more reasonable. You could accuse the industry of
carefully engineering high-fructose corn syrup as a way to make money
off corn surpluses, but the industry didn't create the necessary
biochemistry knowledge or surplus-creating agricultural advances with the
idea of eventually selling cheap and addictive burgers (for one thing, the
evolutionary processes took longer than the lifetime of any individual
involved in the story). You could say that the existence of HFCS is 10%
intentional and 90% a consequence of the baroque unconscious driving
food technology.
In other words, the existence of a Gollum does not imply the
existence of a Gollumizer. Sauron in The Lord of the Rings is at best a
personification of the baroque unconscious (with Saruman being one of
the cynical exploiters, an HFCS creator so to speak).
But let's figure out what refinement in technology really means.
Consider the following senses of the word refinement:
1. Refinement as in purity or purification of substances: ore, oil,
drugs, foods
2. Refinement in the sense of highly developed and cultivated
sensibilities, as in refined palate
3. Refinement in the sense of elaborate sophistication of mature
or declining cultures
4. Refinement in the sense of detailed, attentive design in
advanced technologies
5. Refinement in the sense of an Apple product (or any other
possibility-exhausting product aesthetic)
How do these different senses of the idea of refinement relate to each
other and to the baroque? What distinguishes the space shuttle, quality
kitchen knife from an iPad, an expensive wine, or a McDonalds
hamburger?
The Sword, the Nail and the Machine Gun
I found a key clue when Greg Rader decided (to my slight discomfort)
to overload this sense of refinement with an economic meaning in his 2x2
model of types of economies.
In Greg's model, the economic role of refinement is to make it easy to
value artifacts in an impersonal way, in a cash economy. Unrefined
artifacts get you attention or help build social capital in relationships.
Refined artifacts help you earn money or participate in the gift economy.
But why should refinement lead to easier valuation and thence to
exchange for money?
The crucial missing piece is the role of interchangeability in mass
production. As John Ellis writes in The Social History of the Machine
Gun:
It was always theoretically possible to conceive of a
gun that would spew out vast numbers of bullets or
whatever in a short period of time... manufacturing
techniques [were not] sufficiently well-advanced to allow
individual craftsmen to work to the fractional tolerances
demanded for every part of such a complex gun.
The key point here is often lost in discussions of industrialization that
use Adam Smith's simple example of a nail to highlight the division of
labor aspect of industrial production. Nail manufacture illustrates the
reductionist capacities of industrialization, but it is the integration capacity
of industrialization that drives refinement.
The machine gun illustrates the dynamics of integration. It is a
complex machine, and as such, liable to break down more easily.
Reliability involves network effects within a complex artifact. Roughly
speaking, in a design with no redundancy, the more parts you have, and
the more complex and fast-moving the linkages among them, the less
reliable the machine.
Unless you find an opposed network effect that can scale at least as
fast, machines will get less reliable as they scale.
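As a toy illustration of that scaling claim (the numbers below are
assumptions for the sake of arithmetic, not anything from Ellis): if a
no-redundancy machine needs every one of its N parts to work, and each
part independently works with probability r, the whole machine works with
probability r raised to the power N, which falls off brutally fast.

    # Illustrative series-reliability arithmetic (assumed numbers):
    # a no-redundancy machine works only if every single part works.
    r = 0.99   # assumed reliability of each individual part
    n = 500    # assumed number of parts in the machine
    print(r ** n)   # ~0.0066: the complex machine almost never works

Even very reliable parts, multiplied a few hundred times over, produce a
machine that is broken most of the time.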
The opposed network effect that was discovered late in the industrial
revolution was interchangeability. Interchangeability creates a network
effect between artifacts. Crucially, they need not be functionally similar.
They only need share a structural language. A machine gun can be
cannibalized to repair a telescope for instance.

The significance of Ellis' point about fractional tolerances has to do
with replacement and cannibalization. Craftsmen are capable of very
refined work, but the work tends to be unique. It involves fitting this hilt
on this sword with great precision. You can get away with this because
craft also tends to involve fewer parts, static linkages and performance
regimes where breakdowns are infrequent.
With interchangeability comes the possibility of easy valuation, since
it is possible to talk of supply and demand at the level of many non-unique
parts that can be compared to each other. That helps connect the dots to
Gregs economic hypotheses.
But we still havent fingerprinted the essence of refinement itself.
Replacement and Repair
The first key threshold crossed on the road to industrialization was the
replacement of human, animal and uncontrolled inanimate power (wind or
water) with controlled inanimate power: coal and oil. Much of the
attention in attempts to characterize industrialization is given over to the
study of this threshold-crossing.
The second key threshold crossed was the shift from repair to
replacement. When breakdowns became frequent enough that anticipatory
manufacture of replacement parts became cheaper than reactive repair or
replacement, the network effects of industrialization truly kicked in.
The network effects of reliability in a sword are not strong enough
that you need to counteract them with interchangeability effects. In fact,
much of the complexity in a sword may well be in baroque artistic
elements that serve no purpose (a sword that loses a diamond from its hilt
is still equally effective on the battlefield).
Even early industrial-age artifacts do not have enough complexity
and speed to really require interchangeability. This is one reason I find
elaborate steampunk fantasies fundamentally uninteresting. They involve
imagined machines that come across as laughably Rube Goldberg-esque
precisely because they don't comprehend reliability problems, and the
methods actually created during the industrial age to mitigate them.
When you get to something like a machine gun though, where
breakdowns are frequent and waiting for custom replacement parts is
hugely expensive, you must meet absolute tolerances, so that any
replacement part can replace any broken part (and equally crucially, so
that two broken, complex assemblies can be cannibalized to produce at
least one working assembly).
So we can conclude that:
1. Refinement in craft based on relative tolerances leads to
uniqueness.
2. Refinement in manufacturing based on absolute tolerances
leads to interchangeability.
From these two basic kinds of refinement, we get the five
connotations of the word I listed earlier. This happens via the appearance
of a refinement surplus.
The Refinement Surplus
Interchangeable parts based on absolute tolerances solve the
reliability problem and then some. The network effects of
interchangeability turn out to be stronger than the network effect of
increasing unreliability in individual complex artifacts.
What's more, since interchangeability limits the need for
communication among collaborating makers, refinement of component
technologies can progress much faster (as Adam Smith noted). This is
what we call specialization. It happened in physical engineering before
object-oriented programming ported the idea to software engineering.
You could say that work previously achieved by communication
among makers is now achieved via communication among artifacts. This
is most obvious with software objects, but the core idea is present even
when you shift from a custom-made nut-bolt pair to a standardized pair
that communicates via numerical absolute tolerances.
So interchangeability creates a social network of (say) machine guns.
There are functional linkages within complex artifacts that make them
useful, and substitution and reuse linkages between them that make them
reliable (redundancy inside an artifact is merely a semantic distinction:
think of it as carrying interchangeable spare parts inside the boundary of
the artifact, with the capacity to automatically switch out broken parts).
Interchangeability and standardization make every machine gun less
unique, and more a part of a sort of hive-machine-gun beast.
Dramatic as this effect is, it pales in comparison to the effect of
commonalities across the needs of different types of complex systems.
This connects all complex artifacts into a giant social network. The One
Machine.
A high-tolerance part can serve a low-tolerance function, but not vice
versa. Economies of scale then kick in and dictate that many components
become more refined than they need to be for the typical artifacts that make
use of them. The result is that systems gradually get more refined than
they functionally need to be, based on immediate intentions. The needs of a
few artifacts drive the refinement levels in all technologies.
This creates a refinement surplus. Industrial technology, unlike craft
work, runs a continuous refinement surplus. The surplus was initially
triggered by the need for interchangeability to solve the reliability
problem, but that turned out to be a case of using a sledgehammer to kill a
fly.
Or so it might seem if you only look at individual artifacts. I'll argue
in a future post that once software and the Internet kick in, reliability
problems can once again overtake what interchangeability can mitigate. As
the One Machine gets increasingly interconnected, the unreliability
network effect may overtake the interchangeability network effect, hence
the fundamental Singularity-vs.-Collapse debate.

The possibilities represented by limiting refinement levels are always


greater than the universe of artifacts in existence at any given time.
Exploitation of this refinement surplus is fundamentally what creates
the predictable growth in industrial-age Schumpeterian creative
destruction. But it isn't the intent to exploit that drives the evolution. It is a
collective unconscious drive to exhaust possibilities and find limits,
independent of any specific need.
The Platonic Baroque
The Lord of the Rings captures artistic anxieties about engineering:
the good races create beautiful craft, the evil ones engineer ugly
things.
Where LOTR goes wrong is in focusing on beauty in craft as the
distinguishing factor (there is a line in The Hobbit which goes something
like "the Goblins create many clever things, but few beautiful ones").
In LOTR, evil engineering artifacts are crude, unrefined and possess
little symmetry. Good ones made with craft are intricate, refined and
highly symmetric.
This is obviously the exact opposite of what actually happens.
Open up a laptop and compare what you see to (say) a beautiful handcrafted necklace. Not only is the inside of the laptop more intricate than
the necklace, it is more intricate than you can even see. You would need
electron microscopes to get a sense of how unbelievably intricate, refined
and symmetric a laptop is.
The technological landscape is defined by two kinds of beauty. On the
one hand, you have the possibility-exhausting conscious baroque artifacts
that we view as pushing the envelope. Both the iPad and the space
shuttle belong on this end of the spectrum. One contains chips at the limit
of fabrication technology, the other contains materials that can handle
enormous heat and cold, produce unimaginable levels of thrust, and so on.

On the other hand you have things that are not at the edge of
technological capability, but manufactured out of component and process
technologies created for those leading-edge technologies. And I don't just
mean obviously over-engineered things like space pens that write upside
down (which you can buy at NASA museums). I mean everything.
Regular Bics included.
In this category, makers strive to exhaust the possibilities, but always
lag behind. The surplus refinement potential shows up in the
unnecessarily clean lines of modernism. Unused bits. Unbroken
symmetries. Blank engineering canvases that expand faster than designers
and technicians can paint.
The interaction of the two kinds of beauty is what creates the texture
of the modern technological landscape. I call it platonic baroque. This
may seem like a contradiction in terms, but bear with me for a moment.
The baroque unconscious is the force that drives technological
evolution: a force whose potential increases faster than it can be exploited.
Recall that the baroque seeks to exhaust its own possibilities. It is a
technical exercise in exploring process limits, not an exercise in
expressing ideas or creating utility. But this process needs ideas to fuel it.
In the days when royalty and religion loomed large in the minds of
creators, it was natural to exhaust possibilities by filling them up with the
content of the mythology associated with the power and money that drove
their work. It was natural to fill up blank walls with gargoyles and
cherubs, popes and princes.
But when the power and money come from a force whose main
characteristic is vast and featureless potential, the baroque aesthetic seeks
to exhaust possibilities by expressing that emptiness with platonic forms.
So the Bauhaus chair is not a rejection of the baroque. The modernist
designer merely seeks to build cathedrals to his new master: a vast
emptiness of possibility within the refinement surplus. This possibility is
the father of industrial invention, a restless, paternalist force that replaces
necessity, the mother of craft-like invention.
I am tempted to explore that male/female symbolism further, but I'll
limit myself to one overwrought metaphor. This unexploited possibility
that is the father of industrial invention is at once a Dark Lord and
engineering Dark Matter.
Maker Addiction and Exponential Technology
Where there is surplus, it will be exploited. Possibility, rather than
necessity, drives invention. When ideas for exploitation lag the potential to
be exploited you get baroque unconscious design.
Why would somebody build something simply because it is possible?
Both craft and engineering are driven by an addiction to making. It
does not matter whether needs or possibilities enable the making. Makers
will make. What determines how fast they make is whether they are able
to focus on their strengths or whether they are limited by their weaknesses.
This is the shift in maker psychology due to industrialization: from
deliberative craft work limited by individual weaknesses, to reactive
engineering work that is not limited in this way, thanks to specialization.
Need-driven making requires a focus on function and utility. Non-functional making in craft is easily recognized as artistic embellishment.
The idealized craftsman (and it was usually a he) was a deliberate
and mindful creator. He made the whole, and he made the parts. When
things broke, he made repairs or crafted new parts. Each whole was
unique. When craftspeople collaborated on larger projects (stonemasons
making blocks for cathedrals, say), assembly itself became a craft
that was limited by the skill of the best (if you look at the history of
masonry, you can see an obvious and gradual progression from rough-hewn
blocks carefully fitted together, to more refined blocks that look
increasingly interchangeable in late pre-modern architecture).

In industrial artifacts based on interchangeability, however, the role of
craftsman bifurcates into the twin roles of technician and engineer-designer
(for now, we can safely conflate engineer and designer). Both are
reactive roles where function and utility take a backseat to sheer maker
addiction.
The technician reacts to component work defined in terms of absolute
tolerances by pushing the boundaries of process capabilities and
component quality with addictive urgency. I explored this earlier in my
post, The Turpentine Effect (though I didn't connect the dots until now).
The result is Six Sigma, an explosion of process tools, and the dominance
of an intrinsic and abstract notion of potential future value over an
extrinsic and specific notion of realizable current value. "Somebody will
use this in the future" beats "nobody can use this right now." By and large,
this trust is justified: increasing demands for refinement from the most
demanding applications keep up with the possibilities.
In this process of reactive design, refinement in available components
and processes starts to drive refinement levels in complete artifacts that
have already been invented, and suggests new inventions. A positive
feedback loop is set in motion: increasing component and process
refinement overtakes application needs as individual artifacts mature, but
then new applications emerge as pace-setters. Design bottlenecks migrate
freely across the entire technological landscape, via the coupled
technological web, instead of remaining confined within the design space
of individual artifacts.
For those of you who are familiar with the S-curve models of
technology maturation and disruption, imagine disruption S-curves
bleeding across unrelated artifact categories via shared components and
processes, creating an overall exponential technology evolution curve of
the sort that both Singularity and Collapsonomics watchers like to obsess
about, and that I will obsess about in future posts.
Across the fence from the technician, the engineer-designer loses
mindfulness by shifting from deliberately dreaming up useful ideas to
reacting to the possibilities of available component and process
sophistication levels.

A perfect example is Moore's Law: semiconductor companies began
pushing fabrication technology to extremes before applications for the
increased capability became clear.
On the other end we have Alan Kay's reaction to Moore's Law in the
early 70s: the idea that computing should strive to waste bits in
anticipation of decreasing cost. Computer design shifted from
fundamentally deliberative before PARC to fundamentally reactive after.
Effects, Large and Small
So the net effect of maker addiction faced with refinement surplus is
that existing artifacts get pulled into a baroque stage of their evolution and
new artifacts appear to exploit possibilities rather than respond to
necessities. I am not sure this is much better than Gollumizing
consumption.
The One Machine gets increasingly integrated, and takes on an eerily
coherent appearance due to uniform refinement levels and the operation of
the platonic baroque aesthetic at the level of individual artifact design.
Design bottlenecks drift around within this technological body politic,
making it more coherent, more eerily platonic-baroque over time.
If the creation of unrealized refinement potential ever slows, and
exploitation starts to catch up, you can expect the platonic baroque to
become less platonic and more visibly overwrought. The blank canvas will
start to fill up.
Thanks to this eerie collectively created aesthetic coherence, the One
Machine takes on the appearance of subsuming intelligence and
intentionality that suggests visions of a Singularity-AI to some. Whether
this is a case of anthropomorphic projection onto a smooth facade beneath
which unreliability-driven collapse lurks, or whether there is an emerging
systemic intelligence to the process, is something I still haven't made up
my mind about. If you've been following my writing, you know that at the
moment, I lean towards the collapse interpretation. Darwinian evolution as
refined complexity created by a blind watchmaker is too much of a
precedent to ignore.
At more mundane levels, the baroque unconscious creates a critical
shift in the nature of engineering: the pull of under-exploited refinement
surplus is so strong that nominally less useful things that exploit the
surplus can diffuse, and suck away resources, far faster than
nominally more useful things that ignore it.
All you need is a human behavior with potential for escalating
addiction. You can then move as fast as the refinement surplus will allow. I
explored this idea in The Milo Criterion.
Ignoring this leads to the classic entrepreneurial mistake: attempting
to build useful things instead of things that exploit refinement surplus. The
most high-impact technologies of the day are almost never whatever the
wisdom of the day identifies as the most potentially useful ones. They are
the ones that can spread most rapidly through The One Machine, mopping
up refinement surplus.
So the best and brightest flock to Facebook or Google, and cancer
remains uncured. Again, I am not sure whether this is a good thing or not.
Perhaps, from the perspective of the Dark Lord optimizing the One
Machine, now is simply not the right time to cure cancer. One day
perhaps, the design bottlenecks will drift to that corner of the
technological Web. Until then, we'll have to content ourselves with
doctors who tweet during surgeries and webcast the proceedings, but still
cannot cure cancer.
I'll stop here for now. This post has been something of a stream-of-consciousness
expression of my own baroque-unconscious addicted-maker
tendencies. But then, I figure I can allow myself one of these self-indulgent
posts every once in a while. Especially since my birthday is
coming up in a couple of days.

The Bloody-Minded Pleasures of Engineering


September 1, 2008
Welcome back. Labor Day tends to punctuate my year like the eye of
a storm (I've been watching too much Hurricane-Gustav-TV). For those,
like me, who do not vacation in August, it tends to be the hectic anchor
month for the years work. On the other side of Labor Day, September
brings with it the first advance charge of the year to come. The tense
clarity of Labor Day is charged with the urgency of the present. There is
none of the optimistic blue-sky vitality of spring-time visioning. But
neither is there the wintry somnolence and ritual banality of New-Year-Resolution visioning. So I tend to pay attention to my Labor Day thoughts.
This year I asked myself: why am I an engineer? The answer I came up
with surprised me: out of sheer bloody-mindedness. In this year of viral
widgetry, when everyone, degreed or not, became an engineer with a click
on an install-this dialog on Facebook, this answer is important, because
the most bloody-minded will win. Here is why.
***
Why Engineer?
There are three answers that preceded mine (out of sheer bloody-mindedness).
Engineering outgrew its ancestry in the crafts, and acquired a unique
identity, around the turn of the century. Between about 1880-1910, as
engineering transformed the world with electricity, steam and oil, the
answer to the question, "Why engineer?" was an officiously triumphalist one:
to conquer nature and harness its powers for humanity. Then, as World
War I and II left the world with the mushroom cloud as the enduring
symbol of engineering, the answer went from apologetic and defensive to
subtle. Samuel Florman, in his 1976 classic The Existential Pleasures of
Engineering, reconstructed engineering as primarily a private,
philosophical act. The social impact of engineering was the responsibility,
he suggested, of all of society. Making his case retroactive, he suggested
that the triumphalist answer was largely an imputed one: part of a social
perception of engineering that was mostly manufactured by non-engineers.
Florman's answer to "Why engineer?" can probably be reduced to
"because it helps me become me."
Curiously, this denial of culpability on the part of engineers was
largely accepted as legitimate. Possibly because it was true. As James
Scott argues brilliantly in Seeing Like a State, to the extent that there is
blame to be assigned, it attaches itself rather clearly to every citizen who
participates in the legitimization of a state. Sign here on the social
contract; we'll try to make sure bullies don't beat you up; you consent to
be governed by an entity (the State) with less than 20/20 vision; you
accept your part of the blame if we accidentally blow ourselves up by
taking on large-scale engineering efforts.
So the first shift in the Big Answer, post WWII (let's arbitrarily say
1960), was the one from triumphalist to existential. The third answer,
which succeeded the existential one around 1980, was the ironic one.
The ironic rhetorical non-answer goes, in brief, "Why not?"
***
Let's return for a moment to the surging waters pounding the levees of
New Orleans as I write this. Levees are a symbol of that oldest of all
engineering disciplines, civil engineering. As I watch Hurricane Gustav
pound at this meek and archaic symbol of human defiance, with anxious
politicians looking on, it is hard to believe that we ever had the hubris to
believe that we could either discipline or destroy nature. The
environmentalists of the 90s and the high modernists of 1910 were both
wrong. They are as wrong about, say, Facebook, as they were about the
dams and bridges of 1908.
This isn't because technology cannot destabilize nature. It is because
nature does such a bang-up job on its own. For every doomsday future we
make possible (say, nuclear holocaust or a nasty-minded all-conquering
post-Singularity global AI), nature cheerfully aims another asteroid at
Earth. I was particularly amused by all the talk of the Large Hadron

Collider possibly destroying the planet. I don't understand the physics of
the possibility, but I suspect there is an equal probability that nature will
randomly lob a black hole at us.
We are not capable of being the stewards of nature, any more than we
are capable of mastering it. The most damage we are likely to do is just
destroy ourselves as a species, after which Nature will probably shed a
tear at the stupidity of a disobedient child, and move on.
The ironic answer, then, is based on two observations. The first
observation is that the legitimacy of the let's-preserve-nature ethic, at least
as an objective, selfless stance, is suspect. Postmodernist critiques of
simple-minded environmentalism have been around for a while, and it
seems to me that the takeaway is that the only good reason to have
environmental concerns is a selfish one: to save ourselves. And maybe
to be nicer to the cows in our factory farms.
The other observation that leads to the ironic answer is that, unlike in
Florman's time, no credible person today is asking the question "why
engineer?" in the sort of accusatory tone that engineers endured in the 70s.
Nobody is suggesting a "return to nature" in the sense of an ossified,
never-changing Garden-of-Eden stable ecosystem. An entire cyber-generation
has grown up with William Gibson as its saint. And this
generation, rather shockingly, is the first human generation that
understands at a deep, subconscious level that there is no such thing as
"technology." It is all nature. Herbert Simon may have been the first to
articulate this idea at an intellectual level, but the Millennials are the first
generation to get it at gut-level. Even if they haven't read Gibson, the
irony of creating a Facebook group to "Save the Rainforests By
Abandoning Technology" isn't lost on them.
So the ironic answer to "why engineer?" is really one that does not
differentiate technology at all from, say, art, science or any other human
endeavor or natural phenomenon. The rhetorical non-answer, "why not?", is
not quite as shy, private and retiring as the existential one. And the point of
this rhetorical non-answer is not (just) to create a decentered debate about
engineering, but to legitimize a view of engineering as a socially-engaged
aesthetic enterprise.

The iPod, perhaps, is the apotheosis of ironic engineering. It is
because it can be, and because Steve Jobs chose to make it be. Its
utilitarian inevitability (something like it had to disrupt the music industry)
is overwhelmed by its overweening sense of Big-D Design; the aspects of
it that didn't have to be. By being so essentially artistic, the iPod
reductively defines technology as art. Which is why the ironic answer
fails.
And its child, the iPhone, is a symbol of the end of the short, few-decades-long
age of Ironic Engineering. As another iconic designer of our
times, James Dyson, said, "I have an iPhone and a BlackBerry. And I have
to confess that I use the BlackBerry more." Steve Jobs, for all his
phenomenal creativity, seems to be missing an essential idea about what
technology is.
So ironic engineering will not do. The raison d'être of engineering
cannot be borrowed from art or science (both of which, I think, may truly
be at a terminally ironic stage).
***
C. P. Snow (he of the Two Cultures fame) was wrong. There aren't
two opposed cultures in the world today. There are three. Besides the
sciences and the humanities, engineering represents a third culture. One
that is only nominally rooted in the epistemic ethos of the sciences and the
design ethos of the fine arts. At heart, engineering is a wild, tribal,
synthetic culture that builds before it understands. Its signature
characteristics are quantity, energy and passion. By contrast, the dominant
characteristics of science and the humanities are probably reason and
emotion. Nature, in an editorial dated 22nd June, 2006, titled "The Mad
Technologist", discusses this very subtle distinction:
We find that pure scientists are often treated kindly by
film-makers, who have portrayed them sympathetically, as
brooding mathematicians (A Beautiful Mind) and heroic
archaeologists (Raiders of the Lost Ark). It is technology
that movie-makers seem to fear. Even the best-loved
science-fiction films have a distinctly ambivalent take on it.

Blade Runner features a genetic designer without empathy
for his creations, who end up killing him. In 2001: A Space
Odyssey, computers turn against humans, and Star Wars
has us rooting for the side that relies on spiritual power
over that which prefers technology, exemplified by the
Death Star.
Science is content to poke at nature with just enough force to help
verify or falsify its models. The humanities, to the extent that they engage
the non-human at all, through art, return quickly to anthropocentric self-absorption, with entirely human levels of energy and passion.
Engineering asks, for the hell of it, "just how powerfully can I mess
with the world, in all its intertwined natural and artificial beauty?"
Sometimes (and this is why engineering is sometimes the agnostic force
that the Hitlers and Saddams co-opt) the most interesting answer is
"blow it up."
***
Let's return to today, and the it-idea of the Singularity. To the idea that
some form of artificial intelligence might surpass human intelligence
(scroll to the end of this earlier piece for some pointers on this interesting
topic).
Here is a simple illustration of the sorts of reasoning that make people
panic about a Googlezon global intelligence taking over the world. Start
with the (reasonable) axiom that it takes a smarter person to debug a
computer program than to write it in the first place. Conclusion: if the
smartest programmer in the world were to write a flawed program, nobody
will be able to debug it. If it happens to be some sort of protean, self-reconfiguring, critical-to-the-Internet sort of program, it might well trigger
the Singularity.
This particular line of reasoning is suspect (a too-complex-for-anyone-to-debug
program is far more likely to acquire entropy than
intelligence), but the overall line of thinking is not. The idea that the
connected beast of technology might become too complex to manage is a
sound one. I personally suspect that in this sense, the Singularity actually
occurred with the invention of agriculture.
So contemplate, as an engineer (and remember, this includes
anyone who has ever chosen to install a Facebook widget), this globe-spanning beast called nature+technology (or nature-including-technology).
It has a life of its own, and it is threatening today to either die of a
creeping entropy that we arent smart enough to control, or become
effectively sentient and smarter than us.
How can you engage it productively?
By being even more creatively-destructive than it is capable of being
without human intervention. Bloody-minded, in short.
***
Let me make it more concrete. Imagine engineers from 1900, 1965,
1995 and 2008 (time-ported as necessary) answering the question "why are
you an engineer?" within the 2008 context.
1900-engineer: I thought it was to make the world a better place, but
clearly technology is so complex today that any innovation is as likely to
spawn terrorism or exacerbate climate change as it is to improve our lot. I
quit; I will become a monk.
1965-engineer: I thought I was doing this to self-actualize within my
lonely existence, but clearly engineering in 2008 has become as much self-indulgent art as engagement of the natural world. I will not write a
Facebook widget. I will become a monk.
1995-engineer: I thought I did it for the same reasons that drive that
guy to make art and that other guy to do science, but it seems like
whatever I do, be it designing a fixture or writing a piece of code, I am
fueling the emergence of this strange Googlezon beast. That's scarily large
and impactful. It changes reality far more than any piece of art or science
could, and I want no part of it. I am off to become a monk.

2008-engineer: Crap! This will either blow up in our faces or it will
be the biggest thrill-ride ever. Awesome! Lemme dive in! Carpe Diem.
***
I spent ten days in August in California, mostly in the Bay Area. It is a
part of the world that cannot be matched for the sheer obscenity of its
relentlessly positive technological energy. There is none of the sense of the
tragic that pervades the air on the East Coast.
California is full of people who are cheerfully bloody-minded about
their engagement of technology.
Here is a thought experiment about these curious folks. Imagine that a
mathematician proved conclusively that a particular type of Uber-Machine
was the most complex piece of technology theoretically possible. Call this
the Uber-Machine theorem. Maybe the Uber-Machine is the theoretically
most complex future-Internet possible, powered by the theoretically most-complex computer chip within its nodes.
Nothing more complex, intelligent or capable is theoretically possible.
But there is a corollary. The theorem also implies that it is possible to
make a different kind of ultimate artifact, call it Uber-Machine B. One that
annihilates the Universe completely. Maybe Uber-Machine B is some
descendant of the Large Hadron Collider, capable of provably destroying
the fabric of space-time.
Which would you choose to help build? Secretly, I believe the
bloody-minded technologists (and I am among them) would want to build
Uber-Machine B because it represents the most impact we could ever have
on reality. Uber-Machine A would depress us as representing a
fundamental plateau.
There is even a higher morality to this. Technology-fueled growth
(what Joel Mokyr called Schumpeterian growth) is the only kind of
growth, towards the unknown, that leaves open the possibility that we may
solve the apparently intractable problems of today. The cost is that we may
create the truly intractable problems of tomorrow: civilizational death-forces
that we may have to accept the way we accept the inevitability
of our individual deaths. Maybe we've already created these problems.
And that is why bloody-mindedness is the only defensible motivation
for being a technologist today. You may delude yourself with culturally
older reasons, but this is the only one that holds up. It is also the only
reason that will allow you to dive in without second-guessing yourself too
much, with enough energy to have any hope of having an impact. Because
the people shaping the technology tomorrow aren't holding back out of
fear of (say) green-house emissions from large data centers.
***
Alright. Holiday over. Back to recycling tomorrow.

Towards a Philosophy of Destruction


July 21, 2008
Somewhere in the back of our minds, we know that creation and
growth must be accompanied by destruction and decline. We pay lip
service to this essential dichotomy, or attempt to avoid it altogether, by
using false-synthesis weasel words like renewal. I too have been guilty of
this, as in this romanticized treatment of creative destruction (though I
think that was a fine piece overall). Though I define innovation as
creative destruction in the sense of Schumpeter, most of the time I spend
thinking about this subject is devoted to creativity and growth. The
reasons for this asymmetry are not hard to find. Destruction is often
associated (and conflated) with evil. More troubling, it is often
associated with pain, even if there is no evil intent involved. Finally,
destruction (let's loosely define it as any entropy-increasing process)
is also more likely to happen naturally. It therefore requires less deliberate
attention, and is easier to deny and ignore. Still, the subject of destruction
does deserve, say, at least 1/5 the attention that creation commands. A
thoughtful philosophy of destruction is essential to a rich life, at the very
least because each of us must grapple with his/her own mortality. So here
is a quick introduction to non-evil destruction, within the context of
business and innovation. Before we begin, lodge this prototypical example
of creative destruction, the game of Jenga, in your mind:

Destruction in Business and Innovation


It is relatively easy to separate out obviously evil destruction (Hitler,
9/11). It is also easy to separate out non-evil and non-painful destruction
(demolition of unsafe, derelict buildings, controlled burns to contain the
risk of forest fires). Here are three gray-area examples:

Version 6.2 of your company's great software, everybody
recognizes, represents an end-of-life technology (for example, a
famous product beginning with "V" and ending in "ista"). Layers of
band-aid bug-fixes and patches have destroyed the architectural
integrity of the original product. You must make the painful
decision of completely discarding it and starting with a clean-sheet
design based on a more advanced architecture. Maybe
some key employees, for whom the product represents their life
work, and who still believe in it, quit in bitter disappointment,
seething with a sense of betrayal.
Widget Inc. has an old legacy product A, that is nearing the end
of its design life. Most new investment is going towards a new
product B, that requires a completely new set of business and
technical competencies. There is buzz and excitement around
B, while A is surrounded by an atmosphere of quiet despair.
You gradually stop hiring A-relevant skills and increase hiring
of B-relevant skills. A population of employees, too old or too
set in its ways to learn new skills, is left providing legacy
system support as the product slowly dies out of the economy.
As CEO, you eventually offer a very attractive trade-in program
to the few remaining customers, stop support, and lay off the
few remaining employees who dont adapt.
What do you think of all the great (including life-saving)
technology that came out of both the Allied and Axis sides of
World War II (radar, microwaves, rocketry, computing, jet
engines, the Volkswagen Beetle)? Is it morally possible to
appreciate these technologies without condoning Hitler?

These examples illustrate the complexity of thinking about
destruction. All reasonable people, I suspect, try to simplify things and
operate with an attitude of kindness and gentleness. But does the world
always allow our actions to be kind or gentle?
The Phenomenology of Destruction
Creation and growth can be gradual, steady, linear and calm, but this
is rarely the case. More often, we either see head-spinning Kool-Aid
exponential dynamics, critical-mass effects, tipping points and the like. Or
slowing, diminishing-returns effects. Steady progress is a myth.
Destruction is the same way. We'd like all destruction to be strictly
necessary, linear and peaceful. That's why phrases like "graceful
degradation" are engineering favorites. That's why my friend and animal
rights activist Erik Marcus champions "dismantlement" of animal agriculture
rather than its destruction. The world, unfortunately, rarely behaves that
way. Our rich vocabulary around destruction is an indication of this:
decay, rot, neglect, catastrophe, failure mode, buckle, shatter, collapse,
death, life-support, apocalypse. Destruction isn't this messy simply
because we are unkind or evil. Destruction is fundamentally messy, and
keeping it gentle takes a lot of work.
I once read that nearly 70% of deaths are painful (no clue whether this
is true, but much as my first experience of euthanasia hurt, I still believe in
it). Reliability engineering provides some clues as to why this is so:
IEEE Spectrum had this excellent cover story a few years ago, analyzing
biological death from a reliability engineering perspective. The shorter
version: complex systems admit cascading, exponentially-increasing
failure modes that are hard to contain. Any specific failure can be
contained and corrected, but as failures pile on top of failures, and the
body starts to weaken and destabilize overall as a system, doctors can
scramble, but eventually cannot keep up. The shortest version: He died of
complications following heart surgery.
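To make that cascading-failure intuition concrete, here is a minimal Python sketch with entirely made-up parameters (the spawn rate, repair capacity and starting backlog are illustrative, not taken from the IEEE Spectrum analysis): each outstanding failure tends to trigger new ones, while repair capacity stays fixed. Below a certain spawn rate the repairs drain the backlog; above it, failures compound and run away.

    # Minimal sketch with illustrative numbers only: failures breed failures,
    # while repair capacity stays fixed per step.
    def cascade(initial=5, spawn_rate=0.5, repair_capacity=2, steps=12):
        failures = initial
        history = [failures]
        for _ in range(steps):
            new = failures * spawn_rate              # each failure tends to trigger more
            fixed = min(failures, repair_capacity)   # doctors/engineers work at a fixed pace
            failures = failures + new - fixed
            history.append(round(failures, 1))
        return history

    print(cascade(spawn_rate=0.1))  # repairs keep up; the backlog drains away
    print(cascade(spawn_rate=0.5))  # failures compound faster than repairs can contain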
Jenga as Metaphor
The game of Jenga illustrates why it is so hard to keep destruction to
linear-dismantlement forms. Once you throw in an element of creation in
parallel (removing blocks and stacking them on top to make the tower
higher), you are constrained. If you had the luxury of time, you could
unstack all the blocks carefully, and restack them in a taller, hollow
configuration with only 2 bricks per layer. Thats graceful reconstruction.
The world rarely allows us to do this. We must reconstruct the tower while
deconstructing it, and eventually the growth creates the kind of brittle
complexity where further attempts at growth cause collapse.
Milton, the real star of Office Space, provides a more true-to-life
example of the Jenga mode of destruction.

Remember how Lumbergh gradually took away Milton's work and
authority, degraded his office space, took him off the payroll, stole his
stapler and consigned him to the basement? When Milton ultimately
snaps, he burns down the office. He escapes to a tropical island paradise
with a lot of loot, but his victory does not last: waiters ignore his drink
requests, causing him to mumble about further arson attempts.
In less dramatic forms, you can observe similar dynamics in any
modern corporation. Look away from the bright glow of the new
product/service lines and exciting areas with plenty of growth and cool
technology. Look into the darkness that defines the halo around the new,
and youll see the slow undermining and ongoing multi-faceted
destruction of the old. Resources are moved, project priorities are lowered,
incentives are handed out to the participants in the growth. Things
crumble, with occasional smaller and larger collapses. Watch closely, and
you will feel the actual pain. You will participate in the tragedy.
If you happen to be part of new growth, recognize this. One day, a
brighter light will put you in the shadows, and you will have to face the
mortality of your own creations. One of my favorite Hindi songs gets at
this ultimately tragic, Sisyphean nature of all human creation:
Main pal do pal ka shayar hun, pal do pal meri kahani hain
pal do pal meri hasti hai, pal do pal meri jawaani hain
Mujhse pehle kitne shayar, aaye aur aa kar chale gaye
kuch aahe bhar kar laut gaye, kuch naghme gaa kar chale gaye
woh bhi ek pal ka kissa they, main bhi ek pal ka kissa hun
kal tumse juda ho jaoonga, jo aaj tumhara hissa hun
Kal aur aayenge naghmo ki, khilti kaliyan chunne wale
Mujhse behtar kehne waale, tumse behtar sunne wale
kal koi mujhko yaad kare, kyon koi mujhko yaad kare
masroof zamaana mere liye, kyon waqt apna barbaad kare?
Which roughly translates to the following (better translators, feel free
to correct me):
I am but a poet of a moment or two, a moment or two is as
long as my story lasts
I exist but for a moment or two, for a moment or two does
my youth last
Many a poet came before me, they came and then they
faded away
they took a few breaths and left, they sang a few songs and
left
they too were but anecdotes of the moment, I too am an
anecdote of a moment
tomorrow, I will be parted from you, though today I am a
part of you

And tomorrow, there will come other pickers of blooming
flower-songs
Poets who speak more eloquently than I, listeners more
sophisticated than you
Were somebody to remember me tomorrow why would
anybody remember me?
this busy, preoccupied world, why should it waste its time
on me?
Life After People
It seems likely that the universe at large is a place of
destruction-by-entropy. Yet, on our little far-from-equilibrium home here
on earth, the picture, at least for a few millennia, is one of renewal,
emphasizing creation over destruction.
The History Channel recently aired a show about what would happen
to our planet if all humans were to suddenly vanish. There is also a
brilliant book devoted to this thought experiment, which I am currently
reading:
Though the events in both the show and book are largely about how
human-created reality would collapse, the overall story is an uplifting one
of growth and renewal, as nature (not as brittle and in-danger as we like
to think) gradually reclaims the human sphere.

Creative Destruction: Portrait of an Idea


February 6, 2008
The phrase creative destruction has resonated with me since I first
heard it, and since then, it has been an organizing magnet in my mind for a
variety of ideas. I was reminded of the concept again this weekend while
reading William Duggan's Strategic Intuition, which mentioned Joseph
Schumpeter as a source of inspiration. Visually, I associate the phrase most
with Escher's etching, Liberation, which shows a triangular tessellation
transforming into a flock of birds. As the eye travels up the etching, the
beauty of the original pattern must be destroyed in order that the new
pattern may emerge.

I don't know when I first heard the phrase, but I first used it in the
frontispiece of my PhD thesis. Here are the three quotes I put there, back
in 2003, when I was searching for just the right sort of imagery to give my
research the right-brained starting point it needed. My first quote was a
basic, bald statement due to Schumpeter:
Creative Destruction is the essential fact about capitalism.
Joseph Schumpeter, Capitalism, Socialism, and Democracy
I followed that up with a Rabindranath Tagore bit that I'd found
somewhere (update: Googling rediscovered the "somewhere" on the
frontispiece of Hugo Reinert's draft version of a paper on Creative
Destruction, which seems to have finally appeared in the collection
Friedrich Nietzsche: Economy and Society), and for which, to this day, I
haven't found a citation (update: Hail! Google Books; a work-colleague,
Tom K., dug the reference out for me. The extract is from Brahma,
Vishnu, Siva, which appears in Radice's translation of selections from
Tagore. So much for the detractors of Google's book scanning project:
plain Googling did not get me the source).
From the heart of all matter
Comes the anguished cry
Wake, wake, great Siva,
Our body grows weary
Of its law-fixed path,
Give us new form
Sing our destruction,
That we gain new life
Rabindranath Tagore
And concluded the Grand Opening of my Immortal Thesis with a
dash of Nietzsche:
[H]ow could you wish to become new unless you had first become
ashes!
Friedrich Nietzsche, Thus Spake Zarathustra

Curiously, I haven't read any of these (something I don't mind
admitting, since I actually read a lot more of the books I quote than most
people). For me creative-destruction has always been a right-brained sort
of thing. In fact I almost titled my thesis "The Creation and Destruction of
Teams" but I decided that was way too ponderous and self-important, even
for me, and settled for the more prosaic "Team Formation and Breakup in
Multiagent Systems." But throughout the process of doing the research
and writing up the results, the metaphor of creative destruction and the
associated imagery was in my mind. Sometimes I dreamed of swarms of
airplanes making and breaking formation (formation flight was one of the
applications I worked on).
But looking back, I can see that my first serious infatuation
with the idea goes further back, to a beautiful Urdu poem by Ali Sardar
Jaffri, Mera Safar, ably translated by Philip Nikolayev. Nowhere else have
I encountered the idea captured with such poetic precision. If you know
Hindi/Urdu, reading the original is well worth it.
The infatuation continues: creative destruction is at the heart of my
latest research at work.
A Possible History of the Idea
A long time ago, I read on the Web one speculative history of the idea
of creative destruction that traced it from a particular form of a school of
Saivaite philosophy called Kashmir Saivism, through Schopenhauer,
through to Nietzsche and finally to the most familiar name associated with
it today: Schumpeter. It is curious that this abstract idea went from
religious philosophy, through metaphysics and finally to economics. I
wouldn't be surprised if this story were apocryphal; the basic
abstraction of renewal and change as continuous creation and destruction
is pretty elemental, and I'd expect it to have been rediscovered multiple
times.
Certainly the idea is a favorite in classical Indian
metaphysics, and mostly approached through the metaphor of Siva
(usually characterized as the destructive aspect of the creator-preserver-destroyer
trinity of Brahma-Vishnu-Siva, but, I am told by more
knowledgeable people, better understood as symbolizing continuous
renewal through creative-destruction). Alain Danielou seems to have
written a lot on this topic, including a book relating Siva to Dionysus, and
I've seen references to the idea in comments by people like Camille
Paglia.
Elsewhere, both Hegel and Nietzsche seem to have had this idea in
their head (the former particularly through what we now know as the
Hegelian dialectic). I suspect it is also at the heart of the methodological
anarchy model of discovery proposed by Feyerabend in Against Method.
In short, there is probably enough to this idea to fuel a dozen PhDs.
But curiously, I've always felt a little reluctant to go read all this stuff.
To me, the idea of creative destruction is so fundamental and basic,
practically axiomatic, that I am wary of contaminating my raw intuition of
the idea with a lot of postmodern (or for that matter, Vedantic) verbiage. In
a way, monosyllabic gym-jocks making cryptic remarks about muscles
being torn down and rebuilt stronger get it better than academics do. I fear
the magic of the idea may disappear if I over-analyze it.

Part 3:
Getting Ahead, Getting Along,
Getting Away

Getting Ahead, Getting Along, Getting Away


June 13, 2012
Sometimes I think that if I were much more famous, female and in
Hollywood instead of the penny theater circuit that is the blogosphere, I'd
be Greta Garbo. Constantly insisting that I want to be left alone while at
the same time being drawn to a kind of work that is intrinsically public
and social. Simultaneously inviting attention and withdrawing from it.
Which I suppose is why ruminations on the key tensions of being a
self-proclaimed introvert, in a role that seems better suited to extroverts,
occupy so much bandwidth on this blog. That's the theme of this third
installment in my ongoing series of introductory sequences to ribbonfarm
(here are the first two). This is the longest of the sequences, at 21 posts,
and also has the most commentary. So here you go. I hope this will be
useful to both new and old readers.
Future of Work The Human Condition
This sequence probably represents the single biggest category of
writing on ribbonfarm. It originally started out with several posts on the
Future of Work theme, which was a popular blogosphere bandwagon
around 2007-08, when I was still half-heartedly trying various
bandwagons on for size.
Though I had a few modest hits in that category, it took me a couple
of years to realize that I was fundamentally not interested in the subject of
work per se. I was primarily interested in work as a lens into the human
condition.
Once I realized that, the writing in this category got a lot more fluid,
and I got off the bandwagon. I still use work as the primary approach
vector, rather than relationships or family, since I think in the modern
human condition, work is the most basic (and unavoidable) piece of the
puzzle.

The best tweet-sized description of the human condition I've
encountered is due to personality psychologist Robert Hogan: getting
along and getting ahead. To this I like to add the instinct towards self-exile
and perverse (for our species) seeking out of solitude: getting away.
So I've divided the selections into three corresponding sections.
Here's the sequence. There's a little more commentary at the end.
Getting Ahead
1. The Crucible Effect and the Scarcity of Collective Attention
2. The Calculus of Grit
3. Tinker, Tailor, Soldier, Sailor
4. The Turpentine Effect
5. The World is Small and Life is Long
Getting Along
1. My Experiments with Introductions
2. Extroverts, Introverts, Aspies and Codies
3. Impro by Keith Johnstone
4. Your Evil Twins and How to Find Them
5. Bargaining with your Right Brain
6. The Tragedy of Wiios Law
7. The Allegory of the Stage
8. The Missing Folkways of Globalization
Getting Away
1. On Going Feral
2. On Seeing Like a Cat
3. How to Take a Walk
4. The Blue Tunnel
5. How Do You Run Away from Home?
6. On Being an Illegible Person
7. The Outlaw Sea by William Langewiesche
8. The Stream Map of the World

Triumph and Tragedy


Since I am not a credentialed social scientist, but frequently stomp
rudely into areas where academic social scientists rule, a few words of
warning and contextualization are in order.
The warning first. It should be clear that my approach to these
subjects is nothing like the academic approach. It is amateurish,
speculative, fanciful (occasionally bordering on the literary or mystical)
and resolutely narrative-driven. Empiricism plays second fiddle to
conceptualization, if it is present at all. And this at a time when narrative is
becoming a dirty word in mainstream intellectual culture. If you like any
of the ideas in the posts above, you would probably be well advised to
hide or disguise the fact that you've gone shopping in an intellectually
disreputable snake-oil marketplace.
Surprisingly though, I don't think those are the most important
differences between the way I approach these subjects and the way
academics do. Criticism I get is more often due to my overall
philosophical stance rather than my lack of credentials or non-empiricist
snake-oil methods.
I approach these themes with a sort of tragic-realist philosophical
stance, while the academic world is going through a seriously positivist
phase that is marked by extreme self-confidence and optimism about its
own future potential for somehow fixing the world. At least for a chosen
few.
Social scientists are going through a period of extreme belief in their
own views and methods. This is most true of behavioral economics, which
exhibits an attitude that borders on triumphalism. The attitude appears to
have spilled over to the rest of the social sciences. Thanks to tools and
concepts like social graphs, fMRI mapping and so forth, a great
mathematization, quantification and apparent empiricization of the social
sciences is now underway. Freud and Jung are in the doghouse. There is a
good chance that Shakespeare and Dostoevsky will follow.

This is not a new kind of attitude, but the last time we saw this kind of
social science triumphalism, it was derivative. The triumphalism of late
19th century engineering triggered a wave of High Modernist social
engineering in its wake that lasted till around 1970. That project failed
across the world and social scientists quickly abandoned the engineers and
turned into severe critics overnight (talk about fair weather friends). But
social scientists today have found a native vein of confidence to mine.
They are now rushing in boldly where engineers fear to tread.
It is rather ironic that much of the confidence stems from discoveries
made by the Gotcha Science of cognitive biases. In case it isn't obvious,
the irony is that revelations about the building blocks of the tragic DNA of
the human condition have been pressed into service within a
fundamentally bright-sided narrative. This narrative (though the believers
deny that there is one) is based on the premise that cataloging and
neutralizing biases will eventually leave behind a rationally empiricist
core of perfectible humanity, free of deluded narratives. One educational
magic bullet per major bias. The associated sociological grand narrative is
about separating the world of the Chosen Ones from the world of the
Deluded Masses, and using some sort of Libertarian Paternalism as the
basis for the former to benevolently govern the latter without their being
aware of it.
I suppose it is this sort of overweening patronizing attitude that leads
me to occasionally troll the Chosen Ones by triggering completely
pointless Batman vs. Joker Evil Twin debates.
Sometimes I feel like going to a behavioral economics conference and
yelling out from the audience, "you're reading the evidence wrong, you
morons; it is turtles (biases and narratives) all the way down; we should be
learning to live with and through them, not fighting them!"
Unlike the woman who yelled the original line at an astronomer in the
apocryphal story, I think I'd be right. In this case, anthropocentric thinking
lies in believing that there is a Golden Universal Turing Machine Running
the Perfect Linux Distro at the bottom. There is no good reason to believe
that natural selection designed us as perfect (or perfectible) cores wrapped
in a mantle of biases and narrative patterns.

In my more mean-spirited and uncharitable moments, I like to think
of Biasocial Science as an enterprise driven by the grand-daddy of all
biases: the bias towards believing that cataloging biases advances our
understanding of the human condition in a fundamental way that can
enable the construction and enactment of a progressive Ascent of
Quantified Man narrative.
Oh well, I am probably going to be proved wrong. I seem to have a
talent for championing lost causes. Anyway, that warning and
contextualization riff aside, go ahead and dive in. You've been warned of
the dangers.

The Crucible Effect and the Scarcity of Collective Attention
July 21, 2009
This article is about a number I call the optimal crucible size. I'll
define this number (call it C) in a bit, but I believe its value to be
around 12. This article is also about an argument that I've been
unconsciously circling for a long time. Chris Anderson's Free provided me
with the insight that helped me put the whole package together: economics
is fundamentally a process driven by abundance and creative-destruction
rather than scarcity. The reason we focus on scarcity is that at any given
time, the economy is constrained by a single important bottleneck
scarcity. Land, labor, factories, information and most recently, individual
attention, have all played the bottleneck role in the past. I believe we are
experiencing the first major bottleneck-shift in a decade. Attention, as
an unqualified commodity, is no longer the critical scarcity. Collective
attention is: the coordinated, creative attention of more than 1 person. It is
scarce and it is horrendously badly allocated in the economy today. The
free-agent planet under-organizes it, and the industrial economy
over-organizes it. That's the story of C, the optimal size of a creative group.
There are seven other significant numbers in this tale: 0, 1, 7, 150, 8, 1000
and 10,000. The big story is how the economy is moving closer to C-driven
allocation of creative capital. But the little story starts with my
table tennis clique in high school.
A Table-Tennis Story
R and I played table-tennis nearly every day in high school. We were
regular partners in a loose clique of serious players at our club, comprising
approximately a dozen players. The score in nearly every 3-game match
would go something like 21-14, 21-7, 23-22. It wasn't that I was getting
creamed every time; I'd occasionally take a game off R. He was only
slightly better than me, in just about every department, but that all added
up to him beating me nearly every time. He knew his strengths
(defense/offense, forehand/backhand) enough to always pick a better
strategy for each game. He selected his shots better and executed them

better. The net result was that I was beaten mentally and physically. Errors
would accumulate, and I'd invariably choke.
Then one day, I managed to convince S, whose father had been a
state-level champion, to practice with me (there was no point playing, he
would have beaten me 21-0, 21-0, 21-0). S was the sort of calm,
unflappable guy who simply cannot be psyched-out or forced into error.
He had an almost robotic level of perfection in all basic elements of the
game. S put me through half an hour of very basic forehand-to-forehand
top spin practice rallies, and it completely changed my game. After that, I
still mostly got beaten by R, my regular partner (who was fundamentally
more talented than me), but I actually began winning the occasional
match, and all games were a lot closer.
Fast-forward 15 years. At the University of Michigan, I organized an
informal tournament at the residential scholarship house I was living in at
the time. Out of the field of about 8-10, I came in second. Most Americans
in the house fared as well as you'd expect; since they view ping pong as
not really a sport, most of them lack basic skills. I beat most of them
relatively easily, but was beaten pretty handily by a Korean-American guy.
A final data point. About 2 years ago, with rather foolhardy
confidence, I joined in a Saturday afternoon group of serious Chinese
players. The result: I was beaten comprehensively by everybody. In
particular, by a bored, tired-looking 14 year old (clearly first-generation)
who looked like he hated the game and had been dragged there by his
immigrant father.
Collective Attention and Arms Races
Now step back and analyze this for a moment. Table tennis is
primarily information work. It is not among the more physically
demanding games except at the highest levels. My serious table-tennis
clique in an apathetic-to-the-game country, with a lousy athletic culture
(India), got me to a certain level of competence: enough to beat many
casual players in a vastly more athletic country (the US). But a disengaged
kid from the diaspora of an athletic country that is crazy about the game

(China) was able to beat me with practically no effort, despite being far
less interested (apparently) in the game than me.
This little story captures the most essential features of collective
attention. It exists at all scales (from small clique to country to planet).
Within a group that is paying coordinated attention to any information-work domain, skill levels rapidly escalate, leaving isolated individuals far
behind. I call this the arms race effect, and it is a product of a fertile mix of
elements in the crucible: competition, mutual teaching, constant practice
and sufficient, but not overwhelming variety. This is a very particular kind
of attention. It isn't passive consumption by spectators, and it isn't
performance for an audience. It is co-creation of value: that same dynamic
that is starting to drive the entire economy, blurring lines between
producers and consumers.
So our challenge in this article is to answer the question: what is the
optimal size of a creative group? Is country-level attention the best (China
and table tennis) or clique (my high school)? Is it perhaps 1 (solo lone-ranger
creative blogger)? Our quest starts with the first of our supporting-cast
numbers, 10,000. As in the 10,000-hour rule studied by K. Anders
Ericsson and made famous by Gladwell in Outliers.
10,000 Hours and Gladwell's Staircase
Gladwell is a jump-the-gun trend-spotter. He nearly always finds a
uniquely interesting angle on a subject, and nearly always analyzes it
prematurely in flawed ways. That's a story for another day, but let's talk
about his latest, Outliers. The basic thesis of the book is that there are all
sorts of subtly arbitrary effects in the structure of nurture (Gladwell's way
too smart to play up a naive nature/nurture angle) that make runaway
success a rather unfair and random game of chance. In particular, Gladwell
focuses on a key argument: that to get really good at anything, you need
about 10,000 hours of steadily escalating practice, with opportunities to
take your game to the next level becoming available at the right times.
For instance, due to some weird cutoff-date effects, nearly all top
Canadian hockey players are born in winter (thereby, Gladwell implies,
unfairly penalizing burly talents born in warmer months). This basic
argument is just plain wrong for the simple reason that no human talent is that specifically matched to particular arbitrary opportunity paths like hockey. No talented human being is starkly “hockey star or schmuck.”
There are presumably other things demanding strength and athletic ability
available in Canada and other parts of the world, that have no winter bias
(or perhaps, complementary summer biases). As Richard Hamming put it
eloquently in his famous speech at Bell Labs, “You and Your Research”: “There is indeed an element of luck, and no, there isn't. The prepared
mind sooner or later finds something important and does it. So yes, it is
luck. The particular thing you do is luck, but that you do something is
not.
But that said, Gladwell is on to something. The pattern of increasing
opportunity stage-gates he spotted is real, but most of the arbitrary effects
he talks about (being born at certain times, your university having one of
the first computers, and so forth) are red herrings/minor elements that
confuse the issue. But one effect is not a red herring, and that is the fact
that the staircase of opportunity puts you in increasingly intense crucibles
of collective co-creative attention.
The Distillation Effect
Start with 1728 (12^3) people and let them learn widget-making
in 144 groups of 12, for 3000 hours. Then take the top talent in each group
and make 12 groups of 12, and again let them engage in an arms race for
3000 hours. Then take the final top 12 and throw in another 4000 hours.
With two levels of distillation, youve got yourself a widget-making dream
team. Or a fine scotch. A team that will be leaving the remaining 1716 far,
far behind. You can watch this process accelerated and live today on
Americas Got Talent and American Idol. Imagine the same process
playing out more slowly over 20 years. What does that transformation
look like?
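If you prefer the arithmetic spelled out, here is a minimal sketch of that distillation staircase (the group size, the stage hours and the top-talent selection come straight from the thought experiment above; the random skill scores are a made-up stand-in for talent):

```python
# Sketch of the distillation staircase: 1728 widget-makers -> 144 crucibles of 12
# -> 12 crucibles of 12 -> one dream team, after 3000 + 3000 + 4000 = 10,000 hours.
import random

def distill(population, group_size=12, hours_per_stage=(3000, 3000, 4000)):
    hours_logged = 0
    for hours in hours_per_stage:
        random.shuffle(population)
        groups = [population[i:i + group_size]
                  for i in range(0, len(population), group_size)]
        hours_logged += hours
        if len(groups) > 1:
            # Keep the top talent of each crucible for the next, more intense stage.
            population = [max(group) for group in groups]
        else:
            # Final stage: the surviving dozen just practice together.
            population = groups[0]
    return population, hours_logged

widget_makers = [random.random() for _ in range(12 ** 3)]  # 1728 hopefuls
dream_team, hours = distill(widget_makers)
print(len(dream_team), hours)  # -> 12 10000
```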
That is what is scarce. Collective attention. Thats what creates the
10,000 hour staircase-of-opportunities that Gladwell talks about.
Information may want to be free, but live attention from other humans
never will be (AI is a different story).

A note of irony here: Gladwell was also among the first to stumble
across the importance of such dream-team crucibles, in The Tipping
Point. Today, researchers like Duncan Watts have pointed out that viral
effects dont necessarily depend on particularly talented or connected
special people (the sort Gladwell called mavens and salesmen). But
special people do have a special role in shaping culture. It is just that
their most important effect isnt in popularizing things like Hush Puppies,
but in actually creating their own value. New kinds of music, science,
technology, art or sporting culture.
This is the signal in the noise, and here is the lesson. Information
work in any domain is like weight training: you only grow when you
exercise to failure. The only source of weight to overload your mental
muscles is other people. And the only people who can load you without
either boring you or killing you are people of approximately the same
level of talent development. And that leads to the question: what happens
when you hit the top crucible of 12 in your chosen field? Where do you go
when there are no more levels (or if youve reached the highest level you
can, short of the top)? That brings us to the next two numbers in our story:
how you innovate and differentiate as a creative.
1 Free Agent and 1000 Raving Fans?
Ive hated the phrase raving fan since the day I heard it. If you are
not familiar with the argument, Kevin Kelly, who originated the idea,
claims that an individual creative (blogger or musician, say) can scrape along and subsist in Chris Anderson's Long Tail, by attracting a
1000 raving fans who buy everything he/she puts out (blogs, books,
special editions, t-shirts, mousepads; 1000 raving fans times $100 per year
per fan is a $100,000 income). Kelly's original adjective is a less-objectionable “true” rather than “raving”, but “raving” has caught on, and
the intended meaning is the same.
This basic model of creative capital is just not believable for two
reasons. First, it reduces a prosumer/co-creation economic-cultural
environment to a godawful unthinking bleating-sheep model of
community. I try to imagine my blog, for instance, as the focal point of a
stoned army of buy-anything idiot groupies, and fail utterly. I would not want to serve such a community, and I don't believe it can really form
around what I do. I certainly refuse to sell ribbonfarm.com swag.
The second problem is the tacit assumption that creation is
prototypically organized in units of 1. The argument is seductive. The bad
old corporations will die, along with their committees of groupthink. The
brave new solo free agent, wandering in the woods of cultural anarchy,
finds a way to lead his tribe to the promised land of whatever his niche is
about. Tribe is a related problematic term that Seth Godin recently ran
amok with.
The reason Kelly (and others like Godin) ends up here is that he
answers my question after the dream team, what? with individuals
break away, brand themselves and become individual innovators. Kinda
like Justin Timberlake leaving NSync. A dream team of 12, in this view,
turns into 12 soloists. Not that he ignores groups, but his focus is on the
individual.
Individuals vs. Groups
Thats not what happens. You cannot break the crucible rule. 12 is
always the magic number for optimal creative production. The reason
people make this mistake is because they draw a flawed inference from the
(correct) axiom that the original act of creativity is always an individual
one. Ive talked about this before: I am a believer in radical individualism;
I believe, as William Whyte did, that innovation by committee is
impossible. Good ideas nearly always come from a single mind. What
makes the crucible of 12 important is that it takes a group of
competing/co-operating individuals, each operating from a private
fountainhead of creative individual energy, to come up with enough of a
critical mass of individual contributions to spark major revolutions.
Usually thats about 12 people for major social impact, though sometimes
it can happen with smaller crucibles. These groups arent the deadening
committees of groupthink and assumed consensus. They are the fertile,
fiercely contentious and competitive collaborators who at least partly hate
the fact that they need the others, but grudgingly admire skills besides
their own.

What happens when you exit the dream team level in a mature
disciplinary game is that you get out there and start innovating beyond
disciplinary boundaries; places where there are no experts and no managed
progression of levels with ritualistic gatekeeper tests. But you dont do
that by going solo. You look for crucibles of diversity, multidisciplinary
stimulation and cross-pollination. But you still need the group of 12 or so,
training your brain muscles to failure.
This gives me a much more believable picture. As a blogger, I am the
primary catalyst on this site, but I am not creating the value solo. If I try to
think of the most valuable commenters on this site, I can think of no more
than 12. My best writing has come from trying to stay ahead of their
expectations, and running with themes they originally introduced me to.
But thats far from optimal, since I still am the dominant creator on this
blog. The closer I get that number to 12 via regular heavy-weight
commenters, guest bloggers and mutually-linked blogroll friends (I've
turned my blogroll off for now for unrelated reasons), the closer Ill get to
optimum. Think of all the significant power blogs: they are all team-acts.
Now, I may never get there, and theres multiple ways to get to 12, but the
important thing is to be counting to 12. At work these days, I am pretty
close to that magic number 12, and enjoying myself a lot as a result.
So the important number for the creative of the future is 12, not 1 or
1000. But what about money and volume? Dont we need a number like
1000? Not really. As the creative class matures, you wont really ever find
1000 uncritical sheep-like groupie admirers. That is a relic of the celebrity
era. The real bigger-than-crucible number is not 1000 but 150. Dunbar's
number.
The Dunbar Number and $0.00
Why 150? Thats the Dunbar number. The most people you can
cognitively process as individuals (the dynamics are entertainingly
described in the famous Monkeysphere article). Thats the right number to
drive long-tail logic. By Kellys logic though, I have to get to, say,
100,000 casual occasional customers before I find my 1000 raving fans
(1% conversion is realistic).

Face it: there's no way in hell most of us will get there. If I accidentally did, through this blog, I'd probably erect walls to keep the
scary crowds out somehow. That picture makes sense for almost nobody. I
write long, dense epic posts and dont bother to be accessible. I look to
attract readers who can keep up with me. Unapologetic intellectuals in
fact, whose own eclectic interests overlap sufficiently with mine to create
the right mix of resonance, dissonance and dissent. In terms of Geoffrey
Moore's classic pair of business models, complex systems (a few high-touch, high-personalization customers) and volume operations (mass-consumption stuff), this blog is a complex-systems play. I have written posts entirely with one reader-muse in mind, and can do so again. I have more chance of making a living off 100% of a base of 150 powerful micropatrons than from 1% of a base of 100,000. The question is: which is actually the right type of model for the individual creative (in a crucible of 12 similar-minded others; not selling to each other, but collectively representing a
high-value-concentration crucible)?
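To see why the question is worth asking, compare the raw arithmetic of the two models (the $100-per-fan figure and the 1% conversion come from the argument above; the per-patron price in the second model is a purely hypothetical placeholder):

```python
# Back-of-the-envelope comparison: Kelly's long-tail model vs. a Dunbar-scale model.
casual_audience  = 100_000   # what the long-tail model needs to attract
conversion_rate  = 0.01      # the "realistic" 1% conversion mentioned above
fan_spend        = 100       # Kelly's $100/year per raving fan
long_tail_income = casual_audience * conversion_rate * fan_spend   # $100,000

micropatrons      = 150      # a full Dunbar neighborhood
micropatron_spend = 700      # hypothetical yearly value of close personal attention
dunbar_income     = micropatrons * micropatron_spend               # $105,000

print(long_tail_income, dunbar_income)
```

The totals land in the same ballpark; the difference is that the first model needs 100,000 strangers to find you, while the second needs only 150 relationships you can actually sustain.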
I am going to make a prediction: personalization and customization
will rule. Without that common prefix of the day, mass-. Mass
customization/personalization is a good model for Enterprise 2.0, but
individual creatives have a far better chance of creating an economically
sustainable lifestyle by paying close individual attention to 150 people
than by selling the same thing to 100,000 and hoping 1% of the sheep
convert to your religion. This isnt to say that volume games cant
succeed. But it isnt the way most people will succeed, because the
numbers will not add up. Can you really imagine a significant proportion
of the worlds information worker/creative class being able to draw
100,000 unique visitors per month to their blogs, most of whom will be
other creatives trying to build their own 100,000/1000?
The 100,000 base argument can be safely ignored for most of us.
And thats what Ive done to most of the 62,000 unique visitors Google
Analytics tells me have visited this blog since I opened up shop in July
2007. An overwhelming majority of them bounced away before I could
even say Hi! Some read one article and never came back, leaving only
an IP address behind. In an age where superhits and celebrities are on their
way out, thats what any crowd of ~100,000 will do. Your actual goal as
creative today is to find and keep your 150, to whom you pay individual attention. Pass-through crowds don't deserve much attention. In fact, the monetary value of your transaction with them is exactly $0.00. Anderson hammered home the point that to the masses, the right price for your work is $0.00, but he didn't address the flip side. They are also worth only $0.00 to you on average. Which means you should put no marginal effort into pleasing them. If one of them finds something you did for your 150 useful, let them have it. You get paid in word-of-mouth, they get free stuff. A small, serendipitous barter transaction. Aggregate over 100,000 and the net hard-dollar value is still 100,000 x $0 = $0. The barter is non-zero-sum, but doesn't pay your rent.
Personal Economic Neighborhoods
By carefully curating your Dunbar neighborhood of at most 150 (in
practice, likely much less), in collaboration with your crucible of 12 (each
curating their own 150-neighborhoods, with a good deal of overlap),
through actual personal attention, you create the foundation for your life as
a cultural creative and information worker. Free agency is an important
piece of this, but dont dismiss traditional economics: a good part of your
150 is likely to remain inside the formal organizations you are part of.
The Kelly number, 1000, is important, but not in his sense. If you and
your crucible of 12 are creating value in a loose coalition, and each have a
150 circle with some high-value overlap, the total is probably near 1000.
So thats 12 people sharing a community of 1000, each of whom gets
personal attention from at least 1 of the 12. The members of the 1000 get
the overhead savings of finding more than 1 useful, personally-attentive
creator in one place.
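A toy simulation makes the overlap claim concrete (the pool size and the way the circles are drawn are invented assumptions; only the 12 creators, the roughly 150-person circles and the ~1000 total come from the argument above):

```python
# Toy model: 12 creators, each with a Dunbar-sized circle drawn partly from a shared scene.
import random

POOL = range(5_000)               # hypothetical wider population
SCENE = random.sample(POOL, 400)  # the shared scene the 12 all draw from

def dunbar_circle():
    # ~100 contacts from the shared scene plus ~50 from elsewhere (~150 total).
    return set(random.sample(SCENE, 100)) | set(random.sample(POOL, 50))

circles = [dunbar_circle() for _ in range(12)]

community = set().union(*circles)
pairs = [(a, b) for i, a in enumerate(circles) for b in circles[i + 1:]]
avg_overlap = sum(len(a & b) for a, b in pairs) / len(pairs)

print(len(community))       # shared community: on the order of 1000
print(round(avg_overlap))   # average pairwise overlap: comfortably double digits
```

Run it a few times: the union hovers around a thousand and the pairwise overlaps sit in the double digits, which is all the critical-mass test in the next paragraph asks for.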
Count the 12 most valuable co-creators you work with. Now consider
the overlap in your Dunbar neighborhoods. If the average level of overlap
isnt in the double digits (the actual set-theoretic math is tricky), you
probably havent reached critical mass yet. Guess where you can still find
such critical mass today? Inside large corporations. Any pair of people in
my immediate workgroup of around 12 can probably find 20-30 common
acquaintances. Our collective personalized-attention audience is
probably around 1000. Large corporations still allocate collective attention
pretty badly (they hit the numbers, but get the composition wrong), but still do a better job than, say, the blogosphere. But the free-agent nation is
catching up rapidly. The wilderness is becoming more capable of
sustaining economics-without-borders-or-walls every day.
So how will you create and monetize your Dunbar neighborhood? By
definition, there are no one-size-fits-all answers, because the point of
working this way is that youll find opportunities through personalized
attention. Not a great answer, I know, but still easier for most of us than
dreaming up ideas that can net 100,000 regulars of whom 1000 turn into
raving fans.
8: The Maximal Span of Control
Weve argued that the optimal crucible size must be greater than 1 and
less than 150, but we still havent gotten to the reasoning behind 12 rather
than 30 or 5. Another number will help get us there: 8, the upper end of the
range of a number known as the span of control. The number of direct
reports a manager can effectively handle, and still keep the individualized
calculus of interpersonal relationships tractable.
What happens when you exceed the span of control? You get
hierarchies. You cannot organize complex, coupled work (think space shuttle) requiring more than 8 people in a flat structure. But here's the
dilemma: between 9 and 15, if you split the group into 2, you may get high
overhead and micromanagement by managers with too little to do, and
other pathologies. So between the limit of a single managers abilities, and
the optimal point at which to force cell division, ontogeny and
organization, you get a curious effect: the edge of control. Single-manager
structures fail, but team chemistry can take over. The whole thing is just
barely in control, and teetering on chaos.
Should sound familiar. Those are the conditions, complexity theorists
have been telling us for decades, that spark creative output. More than 8,
less than 16. Why 12, besides being a nice mean? Anecdotal data.

The Ubiquity of 12
I hope you are too smart to conclude that I am making 12 a number of
religious significance. It is simply the mean of a fairly narrow distribution.
Still, it turns up in a surprising number of creative crucible places in
practice:
1. The dirty dozen (alright, there were also the 7 samurai)
2. Juries (creative judicial decision-making)
3. Teams in cricket and soccer (~12)
4. The number of apostles required to start a major religion
5. The approximate size of famous cliques of mathematicians, scientists, engineers, philosophers, writers and so forth.
6. Ideal class sizes in education
7. G-8, G-12
8. Ensemble casts (Friends, Seinfeld, counted with frequent regular side characters who appear often enough that you recognize them).
9. Improv comedy groups (typical size of a generation of SNL regulars).

(I believe there is some research related to the Dunbar number that actually talks about how small-world groupings, where you really intimately know the others in the group, tend to be around 12. All our craze for “weak links” has tended to distract us from the fact that the small world is small in 2 ways: the maximum distance on the social graph between 2 nodes being 6, and the fact that we cluster in small groups).
The Magic Number 7
Lets get to the last of our big list of numbers. Seven. As in, Millers
famous magic number, the number of unrelated chunks of information you
can manage in your short-term memory at a given time, and a big implicit
hidden variable in everything weve talked about so far. Its why lists of 7
are effective. So lets make up a list of 7 to summarize the key concepts so
far.

1. Collective Attention: a group of people paying attention to the same thing, with the group size varying from 2 to 6 billion.
2. Arms Race: The effect by which groups paying collective
attention to something force individuals within the group to
rapidly improve their skills and separate the group from
outsiders.
3. Mental Exercise-to-Failure: The fact that only people close to
your talent level can load your mind in ways that cause you to
grow
4. Crucible: The optimal-sized creative group. Stages of crucibles
reach successively higher plateaus and culminate at the dream-team level, beyond which lies innovation.
5. Innovation: The graduation of a creative from a dream-team
level in a disciplinary game to a more diverse and unstructured
type of crucible, with few rules.
6. Dunbar personalization: The idea that you are more likely to
succeed as a creative in the new economy by paying personal
attention to up to 150 people, than by paying mass attention to
100,000 in hopes of harvesting a 1000 raving fans.
7. Span of Control: The threshold group size for a crucible.
Above this number creativity is possible. Below this, the group
can be brought under the dictatorial and low-creativity control
of a single individual.
Thats CAMCIDS for you acronym buffs. I know. I am a terrible
person.
Managing Collective Attention Scarcity: The Dynamics of 12
A little paper math shows you why collective attention is scarce.
Marketers will recognize a classic example: marketing sugary cereals to a
kid, but closing the sale with Mommy, is much trickier than marketing and
selling to the same person. You need the collective coordinated attention
of both, and you need that attention to have good chemistry. But unlike
individual-level attention scarcity and its complement, the myth of
information overload, you cannot solve the collective attention problem
through appropriate reframing and a good set of automated filters. Also unlike individual attention dynamics, the action isn't in stuff like advertising, fame or too many emails. It is in co-creation groups.
Individual attention economics was merely poorly managed, and technology now exists to manage it well, even though it hasn't diffused
completely. Collective attention, even if it is optimally allocated, is
fundamentally scarce, since it requires live people in the other 11 empty
spots in the crucible.
Optimal allocation is hard because the numbers blow up in your face.
A Dunbar community of 150 admits 11,175 unique pairings. Most will be
fights, divorces and toxic messes. A few will create great value. Searching
that space for the Gates-Ballmers and the Jobs-Wozniaks is horrendously
hard. Thats why it is a scarcity. Things do get simpler once a core of 2-3
start to attract enough to reach that critical mass of 12, but not by much.
The math is much harder for more complex ways of thinking about
groups, but you get the idea. Can you think of all possible ways to break
down the worlds population of 6 billion into constantly shifting crucibles
of 12?
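For the record, the pairing count above is just the handshake formula, and the search space blows up fast as the group structure gets richer (the second number is a rough back-of-the-envelope figure added for scale, not one from the argument above):

$$\binom{150}{2} = \frac{150 \times 149}{2} = 11{,}175, \qquad \binom{150}{12} \approx 1.7 \times 10^{17}$$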
But lets at least work on the right problem.

The Calculus of Grit


August 19, 2011
I find myself feeling strangely uncomfortable when people call me a
generalist and imagine that to be a compliment. My standard response is
that I am actually an extremely narrow, hidebound specialist. I just look
like a generalist because my path happens to cross many boundaries that
are meaningful to others, but not to me. If youve been reading this blog
for any length of time, you know the degree to which I keep returning to
the same few narrow themes.
I think I now understand the reason I reject the generalist label and
resonate far more with the specialist label. The generalist/specialist
distinction is an extrinsic coordinate system for mapping human potential.
This system itself is breaking down, so we have to reconstruct whatever
meaning the distinction had in intrinsic terms. When I chart my life course
using such intrinsic notions, I end up clearly a (reconstructed) specialist.
The keys to this reconstruction project are: the much-abused idea of
10,000 hours of deliberate practice, the notion of grit, and an approach to
keeping track of your journey through life in terms of an intrinsic
coordinate system. Think of it as replacing compass or GPS-based
extrinsic navigation with accelerometer and gyroscope-based inertial
navigation.
I call the result the calculus of grit. It is my idea of an inertial
navigation system for an age of anomie, where the external world has too
little usable structure to navigate by.
The Generalist-Specialist Distinction
The generalist/specialist distinction constitutes an extrinsic coordinate
system. We think of our environment as containing breadth and depth
dimensions. The breadth dimension is chopped up by disciplinary
boundaries (whether academic, trade-based or business-domain based),
while the depth dimension is chopped up by markers of validated progressive achievement. What you get is a matrix of domains of endeavor: bounded loci within which you can sustain deepening practice
of some skilled behavior.
The boundedness is key. Mathematicians do not suddenly discover, in
the 10th year of their practice, that they need advanced ballroom dancing
skills to progress further. Ballroom dancers do not suddenly encounter a
need for advanced aircraft engine maintenance skills after a few years of
practice. Based on your strengths, you can place fairly safe bets early on
about what you will/will not need to do if you make your home
somewhere in the matrix.
Or at least, you used to be able to. Ill get to how these expectations
from the twentieth century are breaking down.
There is a social structure that conforms to these breadth/depth
boundaries as well. A field of practitioners in each domain, stacked in a
totem pole of increasing expertise, that legitimizes the work of individuals
and provides the recognition needed for both pragmatic ends (degrees and
such) and existential ends (recognition in the sense of say, Hegel).
In his book Creativity, Mihaly Csikszentmihalyi made up exactly such
a definition of extrinsically situated creativity as the behavior of an
individual within a field/domain matrix.
We are now breaking away from this model. Ironically,
Csikszentmihalyi's own work makes little sense within this model that he
helped describe in codified ways; his work makes a lot more sense if you
dont attempt to situate it within his nominal home in psychology.
Extrinsically situated creativity with reference to some global,
absolute scheme of generalist/specialist dimensions is unworkable. At best
we can hope for local, relative schemes and an idea of intrinsically
situated individual lives.

The Vacuity of Multi-Disciplinarity


The problem with this generalist/specialist extrinsically situated
creativity model is that the extrinsic frames of references are getting
increasingly dynamic, chaotic and murky. To the point that the distinction
is becoming useless. Nobody seems to know which way is up, which way
is down, and which way is sideways. If you guess and get lucky, the
answers may change next year, leaving you disoriented once more.
The usual response to this environment is to invoke notions of multi-disciplinarity.
Unfortunately, this is worse than useless. In the labor market for
skilled capabilities, and particularly in academia, multi-disciplinarity is the
equivalent of gerrymandering or secession on an already deeply messed-up political map. Instead of votes, you are grubbing for easily won
markers of accomplishment. Its main purpose (in which it usually fails) is
to create a new political balance of power rather than unleash human
potential more effectively.
The purpose is rarely to provide a context for previously difficult
novice-to-master journeys.
How do I know this? Its patently obvious. If it takes 10,000 hours (K.
Anders Ericssons now-famous threshold of deliberate practice, thanks to
Gladwell, which translates to about 10 years typically) to acquire mastery
in any usefully bounded domain, and you assume that there is at least one
generation of pioneers who blazed that path to a new kind of mastery,
what are you to make of fields that come and go like fruit flies in 2-3
years, in sync with business or funding cycles? The suspicious individual
is right to suspect faddishness.
I have come to the conclusion that if I cannot trace a coherent history
of at least 20 years for something that claims the label discipline, it isnt
one.

The problem with this, though, is that an increasing amount of valuable stuff is happening outside disciplines by this definition. It isn't multi-disciplinary. It isn't inter-disciplinary. It is simply non-disciplinary. It's in the “miscellaneous” folder. It is so fluid that it resists extrinsic organization.
So given that most excitement centers around short-lived fruit-fly non-disciplines, how do people even manage to log 10,000 deliberate practice
hours in any coherent journey to mastery? Can you jump across three or
four fruit-fly domains over the course of a decade and still end up with
mastery of something, even if you cannot define it?
Yes. If you drop extrinsic frames of reference altogether.
The Compass and the Gyroscope
We are used to describing movement in terms of x, y and z
coordinates, with respect to the Greenwich meridian, the Equator and sea
level. Our sense of space is almost entirely based on such extrinsic
coordinate systems (or landmarks within them). Things that we understand
via spatial metaphors naturally tempt us into metaphoric coordinate
systems like the depth/breadth one we just talked about. In academic
domains, for instance, you could say the world is mapped with reference
to an origin that represents a high-school graduate, with disciplinary
majors and years of study forming the two axes that define further
movement.
Somewhere in graduate school, I encountered an idea that blew my
mind: you can also describe movement entirely intrinsically. Actually, I
had encountered this idea before, in vague popular science treatments of
Einsteins general theory of relativity, but learning the basics of the math
is what truly blows your mind.
The central idea is not hard to appreciate: imagine riding a
complicated roller coaster and keeping track of how far along you are on
the track, how youve been turning, and how youve been twisting. That
much is easy.

What is not easy is appreciating that thats all you need. You can
dispense with extrinsic coordinate systems entirely. Just keeping track of
how those three variables (known as arc-length, curvature and torsion if
my memory serves me) are changing, is enough. For short periods, you
can roughly measure them using just your intrinsic sense of time and how
your stomach and ears feel. To keep the measurements precise over longer
periods, you need a gyroscope, an accelerometer and a watch.
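For readers who want the underlying mathematics, the claim that these three intrinsic variables suffice is the classical Frenet-Serret result from differential geometry (a standard textbook statement, included here for reference rather than drawn from the essay): with arc-length $s$, curvature $\kappa$, torsion $\tau$, and the moving tangent/normal/binormal frame $(\mathbf{T}, \mathbf{N}, \mathbf{B})$,

$$\frac{d\mathbf{T}}{ds} = \kappa\,\mathbf{N}, \qquad \frac{d\mathbf{N}}{ds} = -\kappa\,\mathbf{T} + \tau\,\mathbf{B}, \qquad \frac{d\mathbf{B}}{ds} = -\tau\,\mathbf{N}$$

Specify $\kappa(s)$ and $\tau(s)$ and the curve is determined up to a rigid motion, which is exactly the extrinsic information (where you started, which way you were initially facing) that intrinsic navigation chooses to drop.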
If you want motifs for the two modes of operation, think of it as the
difference between a magnetic compass and a gyroscope (these days, GPS
might be a better motif for the former, but the phrase the compass and the
gyroscope has a certain ring to it that I like).
We need another supporting notion before we can construct an
intrinsic coordinate system for human lives.
Behavioral Boundedness
Remember that the primary real value of an extrinsically defined
discipline in a field/domain matrix is predictable boundedness.
Mathematicians can trust that they wont have to suddenly start dancing
halfway through their career to progress further.
This predictability allows you to form reasonable expectations for
decades of investment, and make decisions based on your upfront
assessment of your strengths, and expectations about how those strengths
will evolve as you age.
If I decide that I have certain strengths in mathematics and that I want
to bet on those strengths for a decade, to get to mastery, I shouldnt
suddenly stumble into a serious weakness along the way that blocks me,
like a lack of natural athleticism.
So a disciplinary boundary is very useful if it provides that kind of
predictability. I call this behavioral boundedness. An expectation that your
expected behaviors in the future wont wander too far out of certain
strengths-based comfort zones you can guess at fairly accurately, upfront.
Before putting in 10,000 hours.

What happens when that sort of predictability breaks down? It is certainly happening all over the place. For instance, I didn't realize I
lacked the strengths needed for a typical career in aerospace engineering
(the sort high-school kids fantasize about when they first get interested in
airplanes and rockets) until well into a PhD program in the subject.
Fortunately, I was able to pivot and head in another direction with almost
no wasted effort. Few people are that lucky.
There are domains where the boundedness is very weak indeed. The
upfront visible boundedness is a complete illusion. Marketing is one such
domain. You might get into it because you love creative messaging or
talking to people. You may discover the idea of positioning two years into
the journey and realize that creativity in messaging is a sideshow, and the
real job is somewhat tedious analysis of the mental models of prospects. A
further two years down the road, you may discover that to level-up your
game once more, you need to become a serious quantitative analytics ninja
and database geek.
This can also work out in positive ways. You might wake up one fine
day and realize that your life, which makes no sense in nominal terms,
actually adds up to expertise in some domain youd never identified with
at all. That actually happened to me with respect to marketing. On paper, I
am the opposite of a marketer. I have a PhD in aerospace engineering, am
introverted, and write in long-winded and opaque ways rather than in
catchy sound bites.
Nevertheless, at some point I realized with a shock that I had
accidentally logged several thousand hours along a marketing career path
without realizing it. I had just completely misunderstood what
marketing meant based on the popular image the field presents to
novices.
When I went free-agent a few months ago, most of the consulting leads I had coming in had to do with marketing work. This did not surprise
me, but it certainly surprised my father and several close friends, who
assumed I was doing some sort of technical consulting work around
computational modeling and scientific computing.

I'd never thought of myself as a marketer. A computational modeler, yes. A hustler perhaps. A fairly effective corporate guerrilla, yes. A
marketer, not really. I viewed my previous marketing work as the work of
a curious tourist in a strange land. I viewed my marketing writing as
outsider-anthropology amongst strange creatures. But apparently, thats
not how others view me.
Looking back, and trying to make sense of my life in retrospect as
the training of an accidental marketer, it makes sense though: Ive
logged the right mix of complementary experiences. Marketing is still not
my primary identity though (that would mean returning to a Procrustean
bed of disciplinary identity).
Many people luck out like me, accidentally. We recognize what
particular path to mastery were on, long after we actually get on it.
Many do not. They bum around in angsty anomie, craving structure
where none exists, and realizing after a decade of wandering that theyve
unfortunately gotten nowhere.
Is it possible to systematically do things to put yourself on a path to
mastery, and know youre on one, without actually knowing what that path
is until youre already far down it?
Inside and Outside Views of Grit
If there is no external frame of reference, how do you know where
you are, where you are going and whether you are progressing at all, as
opposed to bumming around?
Can you log any old time-sheet of 10,000 hours, slap a label on it, and
claim mastery?
Thankfully, intrinsic navigation is not quite that trite.
A clue to the mystery is the personality trait known as grit, probably
the best predictor of success in the modern world.

Grit is the enduring intrinsic quality that, for a brief period in recent
history, was coincident with the pattern of behavior known as progressive
disciplinary specialization.
Grit has external connotations of extreme toughness, a high apparent
threshold for pain, and an ability to keep picking yourself up after getting
knocked down. From the outside, grit looks like the bloody-minded
exercise of extreme will power. It looks like a super-power.
I used to believe this understanding of grit as a superhuman trait. I
used to think I didnt possess it. Yet people seem to think I exhibit it in
some departments. Like reading and writing. They are aghast at the
amount of reading I do. They wonder how I can keep churning out
thousands of words, week after week, year after year, with no guarantee
that any particular piece of writing will be well-received.
They think I must possess superhuman willpower because they make
a very simple projection error: they think it is hard for me because it
would be hard for them. Well of course things are going to take
superhuman willpower if you go after them with the wrong strengths.
For a while, I went around calling this faux-grit. The appearance of
toughness. But the more I looked around me at other people who seemed
to display grit in other domains, the more I realized that it wasnt hard for
them either. What they did would merely be superhuman effort for me.
Faux grit and true grit are the same thing (the movie True Grit is actually
quite a decent showcase of the trait; it showcases the superhuman
outside/fluid inside phenomenon quite well).
So what does the inside view of grit look like? I took a shot at
describing the subjective feel in my last post on the Tempo blog. It simply
feels like mindful learning across a series of increasingly demanding
episodes that build on the same strengths.
But the subjective feel of grit is not my concern here. I am interested
in objective, intrinsically measurable aspects of grit that can serve as an
internal inertial navigation system; a gyroscope rather than GPS.

The Grit Gyroscope: Reworking, Referencing, Releasing


In physical space, latitude, longitude and altitude get replaced by arc-length, curvature and torsion when you go intrinsic.
In endeavor space, field, domain and years of experience get replaced
by three variables that lend themselves to a convenient new 3Rs acronym:
reworking, referencing, releasing (well, technically, it is internal
referencing and early-and-frequent releasing, but lets keep the phrase
short and alliterative). I believe the new 3Rs are as important to adults as
the old ones (Reading, wRiting and aRithmetic) are for kids.
Reworking
I stumbled upon rework as a key variable when I tried to answer a
question on Quora: what are some tips for advanced writers?
Since writing is something everybody does, logging 10,000 writing
hours is something anyone can do. My aha! moment came when I realized
that it isnt the writing hours that count, it is the rewriting hours.
Everybody writes. People who are trying to walk the path towards mastery
rewrite. I wont say more about this variable. If you want a worked
example, read my Quora answer. If you want a quick and pleasant read on
the subject, Jason Frieds Rework gets at some of the essential themes
(though perhaps in a slightly gimmicky way).
Referencing
For referencing, my clue was my recent discovery that new readers of
this blog often dive deep into the archives and read nearly everything Ive
written in the last four years. I dubbed it the ribbonfarm absurdity
marathon because I didnt understand what would possess anyone to
undertake it.
But then I realized that I write in ways that practically demand this
reading behavior if people really want to get the most value out of what I
am talking about: I reference my own previous posts a lot. Not to tempt people into reading related content, but out of sheer laziness. I don't like
repeating arguments, definitions or key ideas. So I back-link. I do like
most of my posts to be stand-alone and comprehensible to a new reader
though, so I try to write in such a way that you can get value out of
reading a post by itself, but significantly more value if youve read what
Ive written before. For example, merely knowing what I mean by the
word legibility, which I use a lot, can increase what you get out of some
posts by 50%. This is one reason blogging is such a natural medium for
me. The possibilities of hyperlinking make it easy to do what would be
extremely tedious with paper publishing.
The key here is internal referencing. I use far fewer external reference
points (theres perhaps a dozen key texts and a dozen papers that I
reference all the time). It sounds narcissistic, but if youre not referencing
your own work at least 10 times as often as youre referencing others,
youre in trouble in the intrinsic navigation world. Instead of developing
your own internal momentum and inertia, you are being buffeted by
external forces, like a grain of pollen being subjected to the forces of
Brownian motion.
Releasing
And finally, releasing. As in the agile software dictum of release
early and often. In blogging, frequency isnt about bug-fixing or
collaboration. It isnt even about market testing (none of my posts are
explicitly engineered to test hypotheses about what kind of writing will do
well). It is purely about rational gambling in the dollar-cost averaging
sense. It is the investing advice dont try to time the market applied to
your personal work.
If the environment is so murky and chaotic that you cannot
strategically figure out clever moves and timing, the next best thing you
can do is just periodically release bits of your developing work in the form
of gambles in the external world. I think there's a justifiable leap of faith here: if your work admits significant reworking and internal referencing, you're probably on to something that is of value to others.
If a post happens to say the right thing at the right time, it will go
viral. If not, it won't. All I need to do is to keep releasing. This realization, incidentally, has changed my understanding of phenomena like iteration in lean startups and serial entrepreneurs who succeed on their fifth attempt. It's mostly about averaging across risk/opportunity exposure events, in an
environment that you cannot model well. I am pretty sure you can apply
this model beyond blogging and entrepreneurship, but Ill leave you to
figure it out.
These three variables together can measure your progress along any
path to mastery. Whats more, they can be measured intrinsically, without
reference to any external map of disciplinary boundaries. All you have to
do is to look for an area in your life where a lot of rework is naturally
happening, maintain an adequate density of internal referencing to your
own past work in that area, and release often enough that you can forget
about timing the market for your output.
What does navigating by these three variables look like from the
outside?
If you only do a lot of internal referencing, thats like marching along
a straight, level road.
If you do a lot of internal referencing and a lot of rework, thats like
marching along a steady uphill road thats gradually getting steeper from
an external point of view (in other words, you are on your own
exponential path of progress). What you are doing will look impossible to
observers. It may look like you are marching up a vertical cliff. A great
example is the Silicon Valley archetype of the 10x engineer.
And finally, if you are releasing frequently, thats like turning and
twisting: spiraling around an increasingly steep mountain (or zig-zagging
up via a series of switchbacks).
The Path of Least Resistance
Navigating with the 3Rs as an adult isnt enough. You still have to
recover the value the old disciplinary model provided: behavioral
boundedness. Whether you are navigating intrinsically or extrinsically,
suddenly running into a mountain (a major weakness) is just as bad.

The key here is very simple and very Sun Tzu: with respect to the
external world, take the path of least resistance.
Why? Think of it this way. The disciplinary world very coarsely
measured your aptitudes and strengths once in your lifetime, pointed you
in a roughly right direction and said Go! The external environment had
been turned into a giant obstacle course designed around a coarse global
mapping of everybodys strengths.
So there was no distinction between the map of the external world
you were navigating, and the map of your internal strengths. The two had
been arranged to synchronize. If you navigated through a map of external
achievement, landmarks and honors, youd automatically be navigating
safely through the landscape of your internal strengths.
But when you cannot trust that youve been pointed in the right
direction in a landscape designed around your strengths, you cannot afford
to navigate based on a one-time coarse mapping of your own strengths at
age 18.
If you run into an obstacle, it is far more likely that it represents a
weakness rather than a meaningful real-world challenge to be overcome,
as a learning experience.
Dont try to go over or through. It makes far more sense to go around.
Hack and work around. Dont persevere out of a foolhardy superhuman
sense of valor.
Hard Equals Wrong
If it isnt crystal clear, I am advocating the view that if you find that
what you are doing is ridiculously hard for you, it is the wrong thing for
you to be doing. I maintain that you should not have to work significantly
harder or faster to succeed today than you had to 50 years ago. A little
harder perhaps. Mainly, you just have to drop external frames of reference
and trust your internal navigation on a landscape of your own strengths. It may look like superhuman grit to an outsider, but if it feels like that inside to you, you're doing something wrong.
This is a very contrarian position to take today. Thomas Friedman in
particular has been beating the harder is better drum for a decade now,
most recently in his take on the London riots, modestly titled A Theory of
Everything (Sort Of):
Why now? It starts with the fact that globalization and
the information technology revolution have gone to a whole
new level. Thanks to cloud computing, robotics, 3G
wireless connectivity, Skype, Facebook, Google, LinkedIn,
Twitter, the iPad, and cheap Internet-enabled smartphones,
the world has gone from connected to hyper-connected.
This is the single most important trend in the world
today. And it is a critical reason why, to get into the middle
class now, you have to study harder, work smarter and
adapt quicker than ever before. All this technology and
globalization are eliminating more and more routine
work the sort of work that once sustained a lot of
middle-class lifestyles.
The environment that really matters isnt the external world. It is
pretty much pure noise. You can easily find and process the subset that is
meaningful for your life. It isnt about harder, smarter, faster. If it were, Id
be dead. Ive been getting lazier, dumber and slower. Its called aging. I
think Friedman is going to run out of superlatives like hyper- before I
run out of life. If I am wrong, the world is going to collapse before he gets
around to writing The World is Hyper-Flatter-er. Humans are simply not
as capable as Friedmans survival formula requires them to be.
Exhortation is pointless. Humans dont suddenly become superhuman just because the environment suddenly seems to demand
superhuman behavior for survival. Those who attempt this kill themselves
just as surely as those dumb kids who watch a superman movie and jump
off buildings hoping to fly.

It is the landscape of your own strengths that matters. And you can set
your own, completely human pace through it.
The only truly new behavior you need is increased introspection. And
yes, this will advantage some people over others. To avoid running faster
and faster until you die of exhaustion, you need to develop an increasingly
refined understanding of this landscape as you progress. You twist and
turn as you walk (not run) primarily to find the path of least resistance on
the landscape of your strengths.
The only truly new belief you need is that the landscape of
disciplinary endeavors and achievement is meaningless. If you are too
attached to degrees, medals, prizes, prestigious titles and other extrinsic
markers of progress in your life, you might as well give up now. With 90%
probability you arent going to make it. Its simple math: even if they were
worth it, as our friend Friedman notes with his characteristic scaremongering, there simply isnt enough to go around:
Think of what The Times reported last February: At
little Grinnell College in rural Iowa, with 1,600 students,
nearly one of every 10 applicants being considered for the
class of 2015 is from China. The article noted that dozens
of other American colleges and universities are seeing a
similar surge as well. And the article added this fact: Half
the applicants from China this year have perfect scores of
800 on the math portion of the SAT.
If youre paying attention to the Chinese kids who score a perfect 800,
youre paying attention to the wrong people. I mean, really? You should
worry about some Chinese kid terrorized into achieving a perfect-800
math score by some Tiger Mom, and applying to Grinnell College?
Its the Chinese kids who are rebelling against their Tiger Moms,
completely ignoring the SAT, and flowing down the path of least
resistance that you should be worried about. After all Sun Tzu invented
that whole idea.
So rework, reference, release. Flow through the landscape of your
own strengths and weaknesses. Count to 10,000 rework hours as you walk.

If you aren't seeing accelerating external results by hour 3300, stop and introspect. That is the calculus of grit. It's the exponential human psychology you need for exponential times. Ignore everything else.
Factoid: this entire 4000-plus-word article is a working out of a 21-word footnote on page 89 of Tempo. That's how internally referenced my writing has become. Never say I don't eat my own dogfood.

Tinker, Tailor, Soldier, Sailor


February 17, 2010
What did you want to grow up to be, when you were a kid? Where did
you actually end up? For a few weeks now, I have been idly wondering
about the atavistic psychology behind career choices. Whenever I develop
an odd intellectual itch like this, something odder usually comes along to
scratch it. In this case, it was a strange rhyme that emerged in Britain
sometime between 1475 and 1695, which has turned into one of the most
robust memes in the English language:
tinker, tailor, soldier, sailor
richman, poorman, beggarman, thief
Everybody from John LeCarre to the Yardbirds seems to have been
influenced by this rhyme. For the past week, it has been stuck in my head;
an annoying tune that was my only clue to an undefined mystery about the
nature of work that I hadnt yet framed. So I went a-detecting with this
clue in hand, and ended up discovering what might be the most
fundamental way to view the world of work.
The Clue in the Rhyme
With the tinker, tailor rhyme stuck in my head, I was browsing
some old books in a library last week. A random 1970s volume, titled In
Search of History, caught my eye. In the prologue was this interesting
passage:
Most ordinary people lived their lives in boxes, as bees
did in cells. It did not matter how the boxes were labeled:
President, Vice President... butcher, baker, beggarman, thief, doctor, lawyer, Indian chief... the box shaped their identity. But the box was an idea. Sir Robert Peel had put London policemen on patrol one hundred fifty years ago and the “bobbies” in London or the “cops” in New York now lived in the box invented by Sir Robert Peel... All ordinary people below the eye level of public recognition were either captives or descendants of ideas... Only a very, very rich man, or a farmer, could escape from this system of boxes. The very rich could escape because wealth itself shelters or buys identity... And farmers too... or perhaps not even a farmer could escape. After all [in the 1910s] more than half of all Americans lived in villages or tilled the fields. And now only four percent worked the land. Some set of ideas... must have had something to do with the dwindling of their numbers.
It was rather a coincidence that I found this passage just when I was
thinking of the tinker-tailor rhyme (the butcher, baker bit is an
American variant). A case of serendipitously mistaking the author,
Theodore Harold White, who Id never heard of, for Terence Hanbury
White, author of The Once and Future King, which I love. That sort of
coincidence doesnt happen too often outside of libraries, but oh well.
The important insight here is that the structure of professions and
work-identities is neither fundamental, nor a consequence of the industrial
revolution. Between macroeconomic root causes and the details of your
everyday life, there is an element of deliberate design. Design of
profession boxes that is constrained by some deeply obvious natural
laws, and largely controlled by those who are not themselves in boxes.
The tinker, tailor archetypes began emerging four centuries before the
modern organization of the workforce took shape, during the British
industrial revolution (which started around the 17th century).
Besides the peculiar circumstances of late medieval Britain, and the
allure of alliteration and rhyme, ask yourself, why has this rhyme become
such a powerful meme? Well return to this question shortly. But for now,
lets run with Theodore Whites insight about professions being conceptual
boxes created by acts of imagination, rather than facts of economics, and
see where it gets us. Well also get to the meaning of a revealing little
factoid: the rhyme was originally part of a counting game played by young
girls, to divine who they might marry.

And yes the basic political question of capitalism versus social justice
rears its ugly head here. Choosing a calling is a political act, and Ill
explain the choices you have available.
The Central Dogma in the World of Work
There are three perspectives we normally utilize when we think about
the world of work.
The first is that of the economist, who applies the laws of demand and
supply to labor markets. In this world, if a skill grows scarce in the
economy, wages for that skill will rise, and more people will study hard to
acquire that skill. Except that humans perversely insist on not following
these entirely reasonable laws. As BLS (Bureau of Labor Statistics)
statistics reveal, people insist on leaving the skilled nursing profession
perennially thirsting for new recruits, while the restaurant industry in Los
Angeles enjoys bargain labor prices, thanks to those hordes of Hollywood
hopefuls, who are good for nothing other than acting, singing and waiting
tables.
Then there is the perspective of the career counselor. That theatrical
professional who earnestly administers personality and strengths tests, and
solemnly asks you to set career goals, think about marketability of
skills, weigh income against personal fulfillment, and so forth. I say
theatrical because the substance of what they offer is typically the same,
whether the mask is that of a drill sergeant, guardian angel or an earth
mother; whether the stance is one of realism, paternalism or romanticism.
Somewhere in the hustle and bustle of motivational talk, resume critiquing
and mock interviews, they manage to cleverly hide a fact that becomes
obvious to the rest of us by the time we hit our late twenties: most of us
have no clue what to do with our lives until we've bummed around, test-driven, and failed at, multiple callings. Until we've explored enough to
experience a career Aha! moment, most of us cant use counselors. After
we do, they cant really help us. If we never experience the Aha!
moment, we are lost forever in darkness.
And finally there is the perspective of the hiring manager. That
hopeful creature who does his or her best to cultivate a pipeline of “fungible labor,” in the fond and mostly deluded hope that “cheap talent”
will fit neatly into available positions. It is a necessary delusion. To
admit otherwise would be to admit that the macroeconomic purpose an
organization appears to fulfill is the random vector sum of multiple people
pulling their own way, with some being fortunate enough to be pulling in
the accidental majority direction, while others are dragged along, kicking
and screaming, until they let go, and still others pretend to pull whichever
way the mass is moving. Mark Twains observations of ants are more
applicable than hiring managers ideas that talent-position fit is a
strongly-controllable variable.
Heres the one common problem that severely limits the value of each
of these perspectives. There is a bald, obvious and pertinent fact that is so
important, yet so rarely acknowledged, let alone systematically
incorporated, that each of these perspectives ends up with a significant
blind spot.
That bald fact is this: it takes two kinds of work to make a society
function. First, there is the sexy, lucrative and powerful (SLP) work that
everybody wants to do. And then there is the dull, dirty and dangerous
(DDD) work that nobody wants to do. There is a lot of gray stuff in the
middle, but thats the basic polarity in the world of work. Everything
depends on it, and neither pole is dispensable.
The economist prefers not to model this fact. The career counselor
does not want to draw attention to it. The hiring manager has good reason
to deny it.
This brings us to the central dogma in the world of work: everyone
can simultaneously climb the Maslow pyramid, play to their strengths, and
live rewarding lives. That somehow, magically, in this orgy of self-actualization, Adam Smith will ensure that the trash will take itself out.
Like all dogmas, it is false, but still manages to work, magically.
The dull, dirty and dangerous work does get done. Trash gets hauled,
sewers get cleaned, wars get fought by cannon-fodder types. And yet the
dogma is technically never violated. You see, there is a loophole that allows the dogma to remain technically true, while being practically false.
The loophole is called false hope.
The False Hope Tax and Dull, Dirty and Dangerous (DDD)
The phrase dull, dirty or dangerous became popular in the military in
the last decade, as a way to segment out and identify the work that suits
UAVs (Unmanned Aerial Vehicles, like the Predator) the best. It also
describes the general order in which we will accept work situations that do
not offer any hope of sex, money, or power. Most of us will accept dull
before dirty, and dirty before dangerous. Any pair is worse than any one
alone, and all three together represent hell. Theres a vicious spiral here.
Dull can depress you enough that you are fired and need to work at dull
and dirty, which only accelerates the decline into dull, dirty and
dangerous. And I am not talking dangerous redeemed by Top Gun
heroism. I am talking die in stupid, pointless ways dangerous.
William Rathje, a garbologist (a garbage-archeologist) notes in his
book, Rubbish (to be reviewed), that once you get used to it, garbage in
landfills has a definite bouquet that is not entirely unpleasant. But then, he
is a professor, poking intellectually at garbage rather than having to
merely haul and pile it, with no time off to write papers about it. Dull,
dirty and dangerous work is stuff that takes scholars to make interesting,
priests to ennoble, and artists to make beautiful. But in general, it is
actually done by some mix of the deluded hopeful, the coerced, and the
broken and miserable, depending on how far the civilization in question
has advanced. You might feel noble about recycling, but somewhere out
there, near-destitute people are risking thoroughly stupid deaths (like
getting pricked by an infected needle) to sort your recycling. Downcycling
really, once you learn about how recycling works. On the other side of
the world, ship-breakers are killing themselves through a mix of toxic
poison and slow starvation, to sustain the processes that bring your cheap
Walmart goods to you from China.
The reasons behind the mysteriously perennial talent scarcity and inelastic wages in the nursing profession, or the hordes of waitstaff in LA hopefully (Pandora be praised!) waiting for their big Hollywood break, are blindingly obvious. The obviously germane facts are that one profession involves bedpans and adult diapers, to be paid for by people on fixed incomes (so there's a limit to how much nurses can make), while the other involves tantalizingly close-at-hand hopes of sex, money and fame.
False hope is the key phrase there. Nurses hope from afar, waiters in LA hope from the front row. The trick Adam Smith uses to get the dull, dirty and dangerous work done (work that took slavery and coercion until very recently) is to sustain hope. American Idol is the greatest expression of this false hope: a quick ticket from dull, dirty and dangerous to sexy, lucrative and powerful. The fact that one in a million will make it allows the other 999,999 to sustain themselves. It is one year of hope after the other, until you accept the mantra of "if you don't get what you like, you'll be forced to like what you get."
That is why the Central Dogma of work is never technically violated.
You could self-actualize, no matter where on the SLP-DDD spectrum you
are. It is just that in the Dull, Dirty and Dangerous part of the world, the
probability that you will do so becomes vanishingly small. To believe in a
probability that small, you have to be capable of massive delusions. You
have to believe you can win American Idol.
But that one technically possible, but highly improbable, piece of hope can replace the whips of an entire class of slave-drivers and dictators with something called democracy.
Snarky probability theorists like to call lotteries a "stupidity tax" imposed on people who cannot compute expected values. What they don't realize is that most professions (probability theorists included) carry a heavy stupidity-tax load: the extraordinarily low-probability hope of leaping into the world of Sexy, Lucrative and Powerful. The only difference is that, unlike the lottery, you have no option but to participate (actually, by this reasoning, the hope of winning a lottery is possibly more reasonable than the more organic sorts of false hope embedded in most work).
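To see the size of the tax, it helps to put rough numbers on the two bets. Here is a minimal back-of-the-envelope sketch in Python; apart from the one-in-a-million odds quoted above, every figure is an illustrative assumption, not a real statistic:

    # Back-of-the-envelope expected values. All numbers are illustrative
    # assumptions chosen only to make the structure of the bet visible.
    def expected_value(p_win, prize, stake):
        """Expected net gain of a bet won with probability p_win."""
        return p_win * prize - stake

    # A lottery ticket: tiny stake, astronomical odds.
    lottery = expected_value(p_win=1 / 100_000_000, prize=50_000_000, stake=2)

    # The career lottery: one-in-a-million odds of an SLP break, a large
    # prize, but the stake is years of foregone earnings, and the game is
    # not optional.
    career = expected_value(p_win=1 / 1_000_000, prize=10_000_000, stake=100_000)

    print(f"lottery ticket EV: {lottery:,.2f}")   # about -1.50
    print(f"career lottery EV: {career:,.2f}")    # about -99,990.00

The specific numbers do not matter; the asymmetry does. The organic false hope costs far more per player than the lottery ticket, and you cannot decline to play.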

Sexy, Lucrative and Powerful (SLP)


The promised land may not be all it seems to those who arent there
yet (rock stars certainly whine, with drug-addled words, about it), but it
certainly exists.
Again, the order is important. Just as dull, dirty and dangerous is a
vicious spiral towards a thoroughly stupid death, sexy, lucrative and
powerful is a virtuous cycle that gets you to a thoroughly puzzling nirvana.
If you can do rock star or model, it is a relatively easy slide downhill from sexy to lucrative and from lucrative to powerful. If you are not blessed with looks or a marketable voice (and Beyoncé's dad), but can hit lucrative by, say, starting a garbage-hauling business staffed by Mexican immigrants, you could still claw uphill to sexy. Or you could start with powerful and trade the gossamer currency of influence for hard cash, and hard cash for sex (figuratively and literally).
I have much less to say about sexy, lucrative and powerful because
most of you know all about it. Because, like me, youve been dreaming
about it since you were 10. You can easily tell SLP work apart from DDD
work by the structure of labor demand and supply. In one sector, people
are dragged down, kicking and screaming. In the other, they need to be
barricaded out, as they hurry from their restaurant shift to auditions. You
dont need a behavioral economist to tell you that career choices are not
entirely defined by the paychecks associated with them.
So let's move straight on to the reason little girls play their tinker, tailor counting games.
The Developmental Psychology of Work
In Time Wars (to be reviewed), Jeremy Rifkin cites a study that shows
that young girls typically switch from fantasy career dreams to more
pragmatic ones around the age of ten and a half. For boys, it is about
eleven and a half. For both, the switch from fantasy to reality occurs on
the cusp of adolescence. It is fairly obvious what drives childish job
fantasies. Little children like being the center of attention. They like to feel important and powerful. What drives realism-modulated adolescent dreams, which have a more direct impact on career choices, is less clear. What is clear is that the SLP dreams of pre-adolescents are not abandoned, merely painted over with some realism.
The first profession I can remember wanting to join desperately was
road-roller driver. Growing up, my house was down the street from a lot
where the city administration parked its road rollers. They were big and
powerful, and I wanted to drive one for the rest of my life. Later, I
expanded my horizons. An uncle who worked in the railways took me for
a ride in a tower wagon (a special kind of track-maintenance locomotive),
and I was convinced I wanted to drive some sort of locomotive for the rest
of my life.
When I hit adolescence, my twin passions were military aircraft and
astronomy. I was already realistic enough to not hanker after Top-Gun
sexy (revealingly, my one classmate who joined the Indian Air Force
dropped out within a year). I was headed for engineering or science, which
were neither sexy nor lucrative, but held out a vague promise of powerful.
Somewhere in college, by turning down an internship at a radio astronomy
center, and picking one in a robotics lab, I abandoned the slightly more
romantic world of astronomy for the less romantic world of aerospace
engineering (I did work on space telescopes in grad school though, so I
guess I didnt really grow up till I was 30).
You probably have your own version of this story. You think it is heartwarming, don't you?
In actual fact, this sort of story reveals something deeply, deeply ugly about childhood and adolescent yearnings; something on par with Golding's Lord of the Flies: our brains are prepared for, and our environment encourages, a hankering for sexy, lucrative, powerful. No kid ever dreams of a career sorting through smelly, toxic garbage. Or even of the merely dull (and not dangerous or dirty) work of data entry.
But the world does not run on SLP alone. It needs DDD, and no matter how much we automate things, it always will. By hankering after SLP, we are inevitably legitimizing the cruelty that the world of DDD suffers.
Tinker, Tailor, Soldier, Sailor
Let's circle back and revisit "tinker, tailor, soldier, sailor, richman, poorman, beggarman, thief."
Why did little 17th century girls enjoy counting stones and guessing
who their future husbands might be? Was their choice of archetypes mere
alliterative randomness?
We tend to think of specialization and complex social organization as consequences of the industrial age, but the forces that shape the imaginative division of labor have been at work for millennia.
Macroeconomics and Darwin only dictate that there will be a spectrum
with dull, dirty and dangerous at one end, and sexy, lucrative and
powerful at another. This spectrum is what creates and sustains social and
economic structures. I am not saying anything new. I am merely restating,
in modern terms, what Veblen noted in Theory of the Leisure Class. From
one century to the next, it is only the artistic details that change. Tinker,
tailor evolves to a different set of archetypes.
Weve moved from slavery to false hope as the main mechanism for
working with the spectrum, but whatever the means, the spectrum is here
to stay. Automation may nip at its heels, but fundamentally, it cannot be
changed. Why? The rhyme illustrates why.
At first sight, the tinker, tailor rhyme represents major category
errors. Richman and poorman are socioeconomic classes, while tailor,
sailor and soldier are professions. Tinker (originally a term for a
Scottish/Irish nomad engaged in the tinsmith profession) is a lifestyle.
Beggarman and thief are categories of social exodus behaviors.
Relate them to the DDD-SLP spectrum, and you begin to see a
pattern. As Theodore White noted, Richman enjoys the ultimate privilege:
buying his own social identity at the SLP end of the spectrum. Poorman is
stuck in the DDD end. Beggarman and thief have fallen off the edge of society, the DDD end of the spectrum, by either giving up all dignity, or sneaking about in the dark. Sailor and Tinker are successful exodus archetypes. The former is effectively a free agent. Remember that around the time this rhyme captured the popular imagination in the 17th century, the legitimized piracy and seaborne thuggery that was privateering had created an alternative path to sexy, lucrative and powerful; one that did not rely on rising reputably to high office (the path that Samuel Pepys followed between 1633 and 1703; The Diary of Samuel Pepys remains one of the most illuminating looks at the world of work ever written). The latter, the tinker, was a neo-nomad, substituting tin-smithing for pastoralism in pre-industrial Britain.
The little girls had it right. In an age that denied them the freedom to
create their own destiny, they wisely framed their tag-along life choices in
the form of a rhyme that listed deep realities. Today, the remaining modern
women who look to men, rather than to themselves, to define their lives,
might sing a different song:
blogger, coder, soldier, consultant
rockstar, burger-flipper, welfareman, spammer
Everything changes. Everything remains the same.
The Politics of Career Choices
Somewhere along the path to growing up, if you bought into the moral legitimacy argument that justified striving for sexy, lucrative, powerful, you implicitly took on the guilt of letting dull, dirty and dangerous work, done by others, enable your life. If that guilt is killing
you, you are a liberal. If you think this is an unchangeable reality of life,
you are a conservative. If you think robots will let us all live sexy,
lucrative, powerful lives, you are deluded. You see, the SLP-DDD spectrum is not absolute; it is relative. Because our genes program us to strive for relative reproductive success in complicated ways. There is a ponderous theory called relative deprivation theory that explains this phenomenon. So no matter how much DDD work robots take off the table, we'll still be the same pathetic fools in our pajamas.

Can you live with what youve chosen?


Here's what makes me, at a first approximation, a business conservative/social liberal. I can live with it, and shamelessly pursue SLP, without denying the unpleasant reality that starving and poisoned ship-breakers and American Idol-hopeful garbage haulers make my striving possible. In my mind, it isn't the pursuit of SLP that is morally suspect. It is the denial of the existence of DDD.
So what are you? Tinker, tailor, soldier, sailor, richman, poorman,
beggarman or thief?
There is a trail associated with this post that explains the history of the
rhyme.
I wrote this post while consuming three vodka tonics, so if it turns out to be a successful post, I might change that link down there to say "buy me a vodka tonic" instead of "buy me a coffee."

The Turpentine Effect


March 18, 2010
Picasso once noted that when art critics get together they talk about
Form and Structure and Meaning. When artists get together they talk about
where you can buy cheap turpentine. When you practice a craft you
become skilled and knowledgeable in two areas: the stuff the craft
produces, and the processes used to create it. And the second kind of
expertise accumulates much faster. I call this the turpentine effect. Under
normal circumstances, the turpentine effect only has minor consequences.
At best, you become a more thoughtful practitioner of your craft, and at
worst, you procrastinate a little, shopping for turpentine rather than
painting. But there are trades where tool-making and tool-use involve
exactly the same skills, which has interesting consequences.
Programming, teaching, writing and mechanical engineering are all such
trades.
Self-Limiting and Runaway Turpentine Effects
Any sufficiently abstract craft seems to cause some convergence of
tool-making and tool-use. Painters aren't normally also chemists, so that's actually not a great example. But I don't doubt that some of Picasso's forgotten technician contemporaries, who had more ability to say things with art than things to say, set up shop as turpentine sellers, paint-makers or art teachers. But in most fields the turpentine effect is self-limiting. As customers, pilots can only offer so many user insights to airplane designers. To actually become airplane designers, they'd have to learn
aerospace engineering. But in domains where tool-making involves few
or no new skills, you can get runaway turpentine effects.
As Paul Graham famously noted, hackers and painters are very
similar creatures. But unlike painting or aircraft, programming is a domain
where tool-use skills can easily be turned into tool-making skills. So it is
no surprise that programmers are particularly susceptible to the runaway
turpentine effect. Joel Spolsky struck me very forcefully as the runaway-turpentine-effect type, when I read his Guerrilla Guide to Interviewing (a process designed to allow technically brilliant programmers to clone themselves). It is no surprise that his company produces tools (really good
ones, I am told) for programmers, not software for regular people. And
their hiring process is guaranteed to weed out (with rather extreme
disrespect and prejudice) anyone who could get them to see problems that
are experienced by non-programmers. 37 Signals is another such company
(project management software). If you see a tool-making company,
chances are it was founded entirely by engineers. And the consequences
arent always as pretty as these two examples suggest.
Linus Torvalds' most famous accomplishment was an act of thoroughly unoriginal cloning (Unix to Linux, via Minix). But among programmers, he seems to be most admired for his invention of git, a version control system whose subtle and original design elements only programmers can appreciate. This discussion, with a somewhat postal Torvalds comment (I hope it is authentic), is a revealing look at a master-tool-maker mind. Curiously, Torvalds is going postal over a C vs. C++ point, and it is interesting to read his comment alongside this interview with another programmer's programmer, Bjarne Stroustrup, the inventor of C++.
Eric Raymond codified, legitimized and spiritualized this path for
programmers, in The Cathedral and the Bazaar, when he noted that most
open-source projects begin with a programmer trying to scratch a very
personal itch, not other-user needs. The open source world, as a result,
has produced far more original products for programmers than for end users. Off the top of my head, I actually can't think of a single great end-user open source product that is not a clone of a commercial original (aside: there is a crying need for open-source market research).
When I was a mechanical engineering undergraduate, my computer science peers created a department t-shirt that said "I'd rather write programs to write programs than write programs." That about sums it up. In my home territory of mechanical engineering, some engineers naturally like to build machines that do useful things. Others build machine tools, machines that build machines, and wouldn't have a raison d'être without the first category.

Next door to programming and engineering, the turpentine effect can occur in science as well. Stephen Wolfram is my favorite example of this.
His prodigal talents in physics and mathematics are probably going to be
forgotten in 50 years, because he never did anything worthy of them
(according to his peers, neither his early work, nor A New Kind of
Science, is as paradigm-shattering as he personally believes). But the
paradigm-shifting tool he built, Mathematica, is going to be in the history
books much longer.
Teaching is a very basic creative skill that seems to emerge through
runaway turpentine effects. I knew a professor at Cornell who had, outside his door, a sign that said, "Those who can, do. Those who can do better, teach." Methinks the professor doth protest too much. There is a reason the actual cliché is "those who can, do. Those who can't, teach." But the insinuation that teachers are somehow not good enough to do is too facile. That is often, but not always, the case.
What happens is that all talented people engage in deliberate practice
(a very conscious and concentrated form of self-aware learning) in
acquiring a skill. But if you can't find (or get interested in) things to do that are worthy of your skill, you turn to the skill itself as an object of attention, and become better at improving the skill rather than applying it.
Great coaches were rarely great players in their time. John Wright, a
mostly forgettable cricket player, had a phenomenal second innings in his
life, as the coach who turned the Indian cricket team around.
But this effect of producing great teachers has a dark side as well,
especially in new fields, where there are more learners than teachers.
Thankfully, despite being tempted several times, I never started a "how to blog" blog. More generally, this is the writing/speaking/teaching/consulting (W/S/T/C) syndrome that hits people who go free agent. We talked about this before in my review of One Person, Multiple Careers (check out the comments as well).
This relation to teaching (via self-learning) has actually been studied
in psychology. In Overachievement, John Eliot talks about a training
mindset and a performance mindset. The former involves meta-cognition
and continuously monitoring your own performance. The latter involves

an ability to shut off the meta-cognition and just get lost in doing. Great
teachers were probably great learners. Great doers may be slower learners,
but are great at shutting off the meta-cognition.
Causes and Consequences
I think the turpentine effect is caused by (and I am treading on dangerous territory here) the lack of a truly artistic eye in the domain defined by a given tool (so it is ironic that it was Picasso who came up
with the line). Interesting art arises out of a combination of refined skills
and a peculiar, highly original way of looking at the world through that
skill. If you have the eye without the skills, you become an idiosyncratic
eccentric who is never taken seriously. If you have the skills without the
eye, you become susceptible to the turpentine effect. The artistic eye is
innate and requires no real refinement. In fact, the more you learn, the
more the eye is blinded. The adult artistic eye is largely a matter of
protecting a childlike way of seeing, but coupling it to an adult way of
processing what you see. And to turn it into value, you need a second
coupling to a skill that translates your unique way of seeing into unique
ways of creating.
There is a feedback loop here. Sometimes acquiring a skill can make you see things you didn't see before. When you have a hammer in your hand, everything looks like a nail. On the other hand, if you can't see nails, all you see are opportunities to make better hammers.
The artistic eye is also what you need to make design decisions that
are not constrained by the tools. A complete absence of artistic instincts
leads to an extreme lack of judgment. In a Seinfeld episode, Jerry gets
massively frustrated with a skilled but thoroughly inartistic carpenter
whom he has hired to remodel his kitchen. The carpenter entirely lacks
judgment and keeps referring every minor decision to Jerry. Finally Jerry
screams in frustration and tells him to do whatever, and just stop bothering
him. The result: the carpenter produces an absolute nightmare of a kitchen.
In Wonder Boys (a movie based on a Michael Chabon novel), the writer/professor character played by Michael Douglas tells his students that a good writer must make decisions. But he himself completely fails to do so, and his book turns into an unreadable, technically perfect, 1000-page monster. No artistic decisions usually means doing everything rather than doing nothing. Artists mainly decide what not to do.
What about consequences?
The most obvious and important one is a negative consequence:
creative self-indulgence. Nicky Hilton designs expensive handbags (which is still, admittedly, a more admirable way of spending a life than the one her sister models). There is a reason most product and service ideas in the world are created for and by rich or middle-class people for their own classes. The turpentine effect is far more prevalent than its utility requires. There is a limit to how many people can be absorbed in safe and socially-useful turpentine-effect activities like tool-building or teaching. Let loose where a content focus, an artistic eye and judgment are needed, it leads to
over-engineered monstrosities, products nobody wants or needs, and a
massive waste of resources. Focusing on the problems of others, rather
than your own (or of your own class), requires even more effort.
The positive effects are harder to see, but they are important. The
turpentine effect is how isolated creatives can get together and form
creative communities that help refine and evolve a discipline, sometimes
over centuries, and take it much further than any individual can. Socially,
this emerges as the aesthetic of classicism in any field of craft.

The World is Small and Life is Long


January 18, 2012
In the Harry Potter series, J. K. Rowling repeatedly uses a very
effective technique: turning a character, initially introduced as part of the
background, into a foreground character. This happens with the characters
of Gilderoy Lockhart, Viktor Krum and Sirius Black for instance. In fact
she uses the technique so frequently (with even minor characters like Mr.
Ollivander and Stan Shunpike) that the background starts to empty out.
This is rather annoying because the narrative suggests and promises a very large world (comparable in scope and complexity to the Lord of the Rings world, say) but delivers a very small world in which everybody knows everybody. You are promised an epic story about the fate of human civilization, but get what feels like the story of a small town. Characters end up influencing each other's lives a little too frequently, given the apparent size of the canvas.
We are used to big worlds that act big and small worlds that act small.
We are not used to big worlds that act small.
Which is a problem, because that's the sort of world we now live in. Our world is turning into Rowling's world.
The Double-Take Zone
Our lives are streams of mostly inconsequential encounters with
people who momentarily break away from the nameless and faceless
social dark matter that surrounds our personal worlds. But most of the
time, they return to the void.
Each of us is at the center of a social reality surrounded by a foreground social zone of 150-odd people with names and faces, a 7-billion-strong world of social dark matter outside, and an annular social gray zone in between, comprising a few thousand people.

This last category contains people who are neither completely anonymous and interchangeable, nor possessed of completely unique identities in relation to us. Included in this annular ring are old classmates and coworkers who still register as unique individuals but have turned into insubstantial ghosts, associated only with a dim memory or two. Also in this ring are public figures and celebrities whom we recognize individually, but who don't rise above the archetypes that define their respective classes. And then there are all those baristas and receptionists whom you see regularly.
It is this social gray zone that interests me, and there's a simple test for figuring out if somebody is in this zone with respect to you: if you meet them out of context, you'll do a double-take.
If the barista at your coffee shop shows up at the grocery store, you'll do a quick double-take. Then you'll make the appropriate context switch, and recognition will turn into identification. Our language accurately reflects this thought process: we say "I can't place her" and "I just figured out where I know her from."
This happens with celebrities too. I am pretty good at the game of
recognizing lesser-known actors in new roles. When watching TV, I often say things like, "Oh, the villain's sister in Dexter... I just realized, she played Alma Garrett in Deadwood." I tend to spot these connections across shows and movies faster than most people.
Context-Dependent Relationships
The reason for the double-take effect is obvious. Most of the people
we recognize enough to distinguish from faceless/nameless social dark
matter are still one-dimensional, context-dependent figures: the barista
who works mornings at the Starbucks on Sahara Avenue. Double-take
zone people are literally part of the social background.
It takes a few serendipitous encounters in different contexts to pry
someone loose from context, but mostly, nothing happens. They merely
turn into slightly more well-defined elements of their default contexts: the barista who works at the Starbucks on Sahara Avenue, whom I once ran into at Whole Foods.
This still isnt the same as actually knowing someone, but it is a
necessary first step (as an aside, this is the reason why the three
media/three contacts rule in sales works the way it does). Double-take
moments are relationship-escalation options with expiry dates. They create
a window of opportunity within which the relationship can escalate into a
personal one.
There is a reason "haven't we met before?" is the mother of all pick-up lines.
So let's say there are three zones around you: the context-free zone of personal relationships, surrounded by a context-dependent double-take zone (call it the don't-I-know-you-from-somewhere zone if you prefer), and finally, social dark matter.
The Real and Abstract Parts of the Social Graph
The personal, context-free zone is the part of the social graph that is real for you. Here, you don't deal in abstractions like "It's not what you know, but who you know." You deal in specifics like, "You need to get yourself a meeting with Joe. Let me send an introductory email." You could probably sketch out this part of the social graph fairly accurately on paper, with real names and who-knows-whom connections. You don't need to speculate about degrees of separation here. You can count them.
The dark matter world is the part of the social graph that is an abstraction for you. You have abstract ideas about how it works (Old Boy networks, people taking board seats in each other's companies, the idea that weak links lead to jobs, the idea that Asians have stronger connections than Americans), but you couldn't actually sketch it out except in coarse, speculative ways using groups rather than individuals.
The double-take zone is populated by people who are socially part of
the abstract social network that defines the dark matter, but physically or
digitally are concrete entities in your world, embedded in specific contexts

that you frequent. Prying someone loose from the double-take zone means
moving them from the abstract social graph into your real, neighborhood
graph. They go from being concrete and physically or virtually situated in
your mind to being concrete and socially situated, independent of specific
contexts. If mathematicians and theoretical computer scientists ran the world, the socially correct thing to say in a double-take situation would be: "Oh, we're context-independent now; do you want to take this on-graph?"
In these terms, Rowling's little trick involves introducing characters in the double-take zone and then moving them to the context-free zone. In
the process, she socially situates them. Lockhart goes from abstract
celebrity author making an appearance at a bookstore to teacher with
specific relationships to the lead characters. Sirius Black initially appears
as an abstract criminal on television, but turns into Harrys godfather.
Viktor Krum is a distant celebrity Quidditch player who turns into Rons
rival for the affections of Hermione.
The Active, Unstable Layer
The double-take zone is defined by the double-take test, but such tests
are rare. What happens when they do occur? Since an actual double take creates a window of opportunity to personalize a relationship (an active option), you could call this the active and unstable layer of the double-take zone. The more actual double takes are happening, the more the zone is active and unstable.
Our minds deal badly with the double-take zone when it is stable and
dormant. And we really fumble when it gets active and unstable. Why?

Our social instincts are based on physical-geographic separation of scales. In the pre-urban world, the double-take zone was empty. You either knew somebody personally as a fellow villager, or as a stranger visiting from the dark-matter world. Strangers couldn't stay strangers. They either
went away soon and were forgotten, or stayed and became fellow
villagers.
We are used to being careful around people from our village, and
more careless in our dealings with strangers passing through. We take the
long view of relationships within local communities, and are more willing
to pick fights with strangers. There is less likelihood of costs escalating
out of control via vendettas in the latter case. It is also easier. The obvious
tourist is more easily cheated than the local.
Our psychological instincts appear to have evolved to deal with this
type of social reality. We are more likely (and able) to dehumanize
strangers before dealing roughly with them.
Urbanization created the double-take zone. Mass media expanded it
vastly, but asymmetrically (mass media creates relationships that are
double-take in one direction, dark-matter in the other). The Internet is expanding it vastly once again, this time with more symmetry, thanks to the explosion in the number of contexts it offers for encounters to occur.
This wouldnt matter so much if the expansion didnt affect stability.
We know how to deal with stable and dormant double-take zones.
The Rules of Civility
Before the Internet began seriously destabilizing and activating the
double-take zone, it was an unnatural social space, but we knew how to
deal with it.
The double-take zone merely requires learning a decent and polite,
but impersonal approach to interpersonal behavior: civility. It requires a
capacity for an abstract sort of friendliness and a baseline level of mutual
helpfulness among strangers. We learn the non-Duchenne smile: something that sits uncomfortably in the middle of a triangle defined by a genuine smile, a genuine frown, and a blank stare.
We think of such baseline civility as the right way to deal with the
double-take zone. This is why salespeople come across as insincere: they
act as though double-take zone relationships were something deeper.
The pre-Internet double-take zone was fairly stable. Double-take events were truly serendipitous and generally didn't go anywhere. Most
relationship options expired due to low social and geographic mobility. A
random encounter was just a random encounter. Travel was stimulating,
but poignant encounters abroad rarely turned into anything more.
The rules of conduct that we know as civility have an additional
feature: they are based on an assumption of stable, default-context status
relationships that carry over to non-default contexts. A century ago, if a
double-take moment did occur, once the parties recognized each other
(made easier by obvious differences in clothing and other physical
markers of class membership), the default-context status relationship
would kick in. If a lord decided to take a walk through the village market
on a whim, and ran into his gardener, once the double-take moment passed, the gardener would doff his hat to the lord, and the lord would
confer a gracious nod upon the gardener.
But this sort of prescribed, status-dependent civility is no longer
enough. The rules of civility cannot deal with an explosion of
serendipitous encounters.
Social Mobility versus Status Churn
Since double-take encounters temporarily dislocate people from the default context through which you know them, and make them temporarily more alive afterwards, you could say the double-take zone is coming alive with nascent relationships: relationships that have been dislodged from a fixed physical or digital context, but haven't yet been socially situated.
There is an additional necessary condition for more to happen: the
double-take moment must also destabilize default assumptions about
relative status.
Double-take events today destabilize status, unlike similar events a
century ago. This is because we read them differently. A lord strolling through a market a century ago (a domain marked for the service class) knew that he was a social tourist. Double-take events, if they happened,
were informed by the assumption that one party was an alien to the
context, and both sides knew which one was the alien. Everybody wore
the uniform of their home class, wherever they went.
Things are different today. A century ago, social classes were much
more self-contained. Rich, middle class and poor people didnt run into
each other much outside of expected contexts. They shopped, ate and
socialized in different places for instance. This is why traditional romantic
stories are nearly always based on the trope of the heroine temporarily escaping from a home social class to a lower one, and having a status-destabilizing encounter with a lower-class male (the reverse, a prince going walkabout and meeting a feisty commoner girl, seems to be a less common premise, but that's a whole other story).

But today, one of the effects of the breakdown of the middle class and
trading-up is that status relationships become context-dependent. There is
no default context.
Let's say you're an administrative assistant at a university, have an associate's degree, and frequent a coffeeshop where the barista is a graduate student. You both shop at Whole Foods. She's trading up, as far as dietary lifestyles go, to shop at Whole Foods, while it is normal for you because you have a higher household income.
In the coffeeshop, you're the higher-status party, as the customer. If you run into each other at Whole Foods, you're equals. If you run into each other on campus, she's the superior.
Short of becoming President, there is almost nothing you can do that will earn you a default status with everybody. It's up in the air.
This isnt social mobility. The whole idea of social mobility, at least in
the sense of classes as separate, self-contained social worlds, is breaking
down. Instead you have context-dependent status churn. Double-take
moments don't necessarily indicate that one party is a tourist outside their class. They are merely moments that highlight that class is a shaky construct today.
Worlds are mixing, so double-takes become more frequent. But what
makes the increased frequency socially disruptive is that status
relationships are different in the different contexts.
Temporal Churn
Even more unprecedented than status churn is temporal churn.
People from the same nominal class, who once knew each other, can move into each other's double-take zones simply by drifting apart in space. That's why you do a double-take when you randomly run into an old classmate, whom you haven't seen for decades, in a bookstore (happened to me once). Or when you run into a hallway-hellos-level coworker, whom you've never worked directly with, at the grocery store (this happened to me as well).
It is not changes in appearance or social status that make immediate
recognition difficult. It is the unfamiliar context itself.
This sort of thing doesn't happen much anymore. We don't catch up as much anymore because we never disconnect. Unexpected encounters are rare because online visibility never drops to zero. Truly serendipitous encounters turn into opportunistically planned ones via online early-warning signals.
One effect of this is that relationships can go up or down in strength
over a lifetime, since they are continuously unstable and active. Once
you've friended somebody on Facebook, and their activities keep showing
up in your stream, you are more likely to look them up deliberately for a
meeting or collaboration. Social situation awareness is not allowed to fade.
The active and unstable double-take layer is constantly suggesting
opportunities and ideas for deeper interaction.
It's not that time doesn't matter anymore, but that time does more
complicated things to relationships. In the pre-Internet world, relationships
behaved monotonically in the long term. You either lost touch, and the
relationship weakened over time, or you stayed in touch and the
relationship got stronger over time. Some relationships plateaued at a
certain distance.
Few relationships went up and down in dramatic swings as they
routinely do today.
Beyond Civility
Mere static-status civility is no longer enough to deal with a world of volatile relationships created by status churn across previously distinct classes, and temporal churn that ensures that relationships never quite die. Relationships that move in and out of the double-take zone (or even
just threaten to do so) need a very different approach.

You never know when you might turn a barista into a new friend after
a double-take encounter, or renew a relationship with an old one via a
Facebook Like.
The sane default attitude today is "the world is small and life is long." Reinventing yourself is becoming prohibitively expensive. You have to navigate under the expectation that the real part of your social graph will grow over time, even if you move around a lot. If you are immortal and can move sufficiently fast in space and time, the abstract social graph may vanish altogether, like it did for Wowbagger the Infinitely Prolonged in The Hitchhiker's Guide to the Galaxy, who made it the mission of his immortal life to insult everybody in the galaxy, in person, by name, and in alphabetical order.
The phrase "the world is small and life is long" came up in a conversation with an acquaintance in Silicon Valley. We'd been talking about how the Silicon Valley technology world, despite being quite large, acts like a small world. We'd been talking, in particular, about the dangers of burning bridges and picking fights. We both agreed that that's a very dangerous thing to do. That's when my acquaintance trotted out that phrase, with a philosophical shrug.
Of the two parts of the phrase, "the world is small" is easier to understand. I don't think it has much to do with the much-publicized four-degrees finding on Facebook. Status and temporal churn within the six-degree world is sufficient to explain what's happening.
"Life is long" is the bit people often fail to appreciate. The social graph throbs with actual encounters every minute that are constantly rewiring it. If you are in a particular neck of the woods for a long enough time, you'll eventually run into everybody within it more than once. It's the law of large numbers applied to accumulating random encounters.
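For a feel of how fast that accumulation works, here is a minimal simulation sketch in Python; the community size and encounter counts are illustrative assumptions, not data. It is the classic coupon-collector setup: count uniformly random encounters until everyone in a fixed neck of the woods has been met at least once.

    import random

    def encounters_to_meet_everyone(community_size):
        """Random encounters until every member has been met at least once
        (coupon collector: roughly N * ln(N) encounters on average)."""
        met = set()
        encounters = 0
        while len(met) < community_size:
            met.add(random.randrange(community_size))
            encounters += 1
        return encounters

    # Illustrative only: a professional scene of about 2,000 people,
    # averaged over a few simulated lifetimes.
    trials = [encounters_to_meet_everyone(2000) for _ in range(20)]
    print(sum(trials) / len(trials))  # typically in the 15,000-17,000 range

At even a handful of genuinely random encounters a day, that is a decade or two: the "life is long" half of the phrase.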
Silicon Valley is a place where worlds collide frequently in different
status-churning contexts, and circulation through different roles over time
creates temporal churn. There are other worlds that exhibit similar
dynamics. Most of the world is going to look like this in a few decades.

It is increasingly going to be a world of shifting alliances and status relationships within a larger, far more active and unstable layer in a much
larger double-take zone. A world where you will never be quite sure where
you stand in relation to a large number of potentially important people.
Some people love this emerging, charged social world, always poised
on the edge of serendipity. They seem to come alive with this much static
in the air. They thrive on status churn. They hoard relationships, turning
every chance encounter into a rest-of-life relationship.
Others fantasize about declaring relationship bankruptcy and starting
a new life somewhere else. At one time, this was actually very easy to do.
Today, you need the Witness Protection Program to pull it off.
I am not certain whether I like or dislike this emerging world. I think I
am leaning towards dislike. The slogan "the world is small and life is long" describes a tense and anxious world of constant social shadow-boxing. One where you must always be "on," socially. A world where burning
bridges is more dangerous, and open conflict becomes ever costlier,
leading to less dissent and more stupidity.
It is a situation of false harmony. One where peace is less an indicator
of increasing empathy and human connection, and more an indicator of
increasing wariness. You never know which world your world will collide
with next, with what consequences. You never know what missed
opportunity or threat could decisively impact your life.
So far, we've been able to do without the opportunities, and avoid the threats. We try to teach teenagers what we think are the right kinds of cautious lessons: it boils down to "be careful what you post on Facebook, it could affect your job."
But this is a transient stage. Soon we won't be able to do without the opportunities, and our lives will come to depend on the serendipity catalyzed by the active, unstable double-take layer. Nice-to-have has a way of turning into must-have. This dependence will come with necessary exposure to the threats. "The world is small and life is long" will not be enough protection.

Motivational speakers used to preach a few decades back that we should all "think global and act local."
It has happened. But I don't think this is quite what they had in mind.

My Experiments with Introductions


May 7, 2011
Introductions are how unsociable introverts do social capital.
Community building is for extroverts. But introductions I find stimulating.
Doing them and getting them. This is probably a direct consequence of the
type of social interaction I myself prefer. My comfort zone is 1:1, and an
introduction is a 3-way that is designed to switch to a 2-way in short order,
allowing the introducer to gracefully withdraw once the introducees start
talking. As groups get larger than two, my stamina for dealing with them
starts to plummet, and around 12, I basically give up (I dont count
speaking/presentation gigs; those feel more like performance than
socializing to me).
I am pretty good at introductions. I've helped a few people get jobs, and helped one entrepreneur raise money. Off the top of my head, I can think of at least a half-dozen very productive relationships that I have catalyzed. I think my instincts around when I should introduce X to Y are pretty good: 2 out of 3 times that I do an introduction, at the very least an interesting conversation tends to start. Since I've been getting involved in a lot of introductions lately, I thought I'd share some thoughts based on my experiments with introductions.
Weak-Link Hubs vs. Strong-Link Hubs
Introductions are the atomic unit of social interaction. They are
central to the creation and destruction of communities, but arent
themselves a feature of communities. Rather they drive the creative
destruction process within the universe of communities, as Romeo and
Juliet illustrates particularly well. Introductions are constantly rewiring the
social graph, causing old communities to collapse and new ones to cohere.
To understand how introductions work, you have to understand a
subtle point: stereotypical extroverted community types are actually pretty
bad at introductions, except for one special variety: introducing a
newcomer into an existing group, as a gatekeeper. Stereotypical

host/hostess community types are great at helping existing communities


grow stronger and endure. Their social behaviors are therefore in direct
conflict with uncensored introduction activity, which causes social
creative destruction to intensify. I call the stereotypical community types
strong-link social hubs. They know everybody in a given local (physical or
virtual) community. They are a friend, mentor or mentee to every
individual within that community. They are the ultimate insiders. When a
strong-link social hub makes an introduction, it is usually quick and superficial: "I am sure you two will find that you have a lot in common, you're both engineers!" Or the half-joking "everybody, this is X; X, this is everybody, ha ha!" Enough to sustain party conversations, but usually not enough to catalyze relationships except by accident.
The real hubs of introduction activity on the social graph though, are
what I call weak-link hubs. It is both a personality type and a structural
position in the social graph. It is easiest for me to explain what this means
via a personal anecdote.
When I was a kid in high school, I resisted being sucked into any particular group. For their part, the 2-3 major groups in my class saw me as a puzzle: I was not "one of us" or "one of them." Neither was I one of the social outcasts. I did 1:1 friendships or hung out occasionally as a guest in groups, but I rarely joined in group activities.
One day, I remarked to a friend, "I guess I am equally inside all the groups." His retort: "No, you are equally outside all the groups." I realized that not only was he right, that was pretty much my identity. It hardened into a sort of reactionary tendency towards self-exile (one of my nicknames in college was "hermit") that has stayed with me. Whenever I find myself getting sucked too deeply into any group, I automatically start withdrawing to the edge. Physically, if the group is in a room.
That is what I mean by weak-link hubs being both a personality type
and a structural position. You have to have the personality that makes you
retreat from centers and you have to have centers around you to retreat
from. This retreat is an interesting dynamic. You cannot really be attracted
to the edge around a single center, since that is a diffuse place. But if you
are retreating simultaneously from multiple centers, you will find yourself in a position in the illegible and chaotic intersection lands. Why illegible?


Try drawing a random set of overlapping circles and making sense of the pattern of intersections. Here's an example:
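(If you want to generate such a picture yourself, a throwaway sketch along these lines will do; it assumes Python with matplotlib, and every position and radius is arbitrary, since the only point is that the overlaps have no legible structure.)

    # Draw a handful of randomly placed, overlapping circles; the pattern
    # of intersections is deliberately arbitrary and illegible.
    import random
    import matplotlib.pyplot as plt

    random.seed(3)
    fig, ax = plt.subplots(figsize=(5, 5))
    for _ in range(8):
        x, y = random.uniform(0, 10), random.uniform(0, 10)
        r = random.uniform(2, 4)
        ax.add_patch(plt.Circle((x, y), r, fill=False))
    ax.set_xlim(-6, 16)
    ax.set_ylim(-6, 16)
    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()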

This retreating from all nearby centers is not exactly the personality
description of a great social hub. So why is it a great position for
introduction-making? It's the same reason Switzerland is a great place for international negotiations: neutrality and small size anchoring credibility, but with sufficient actual clout to enforce good behavior. If you are big or powerful, you have an agenda. If you are from the center of a community, you have an agenda. Another great example is the Bocchicchio family in The Godfather: not big enough to be one of the Five Families, but bloody-minded enough to effectively play intermediary in negotiations by offering themselves up as hostages.
Edge Blogging and the Introduction Scaling Problem
This post actually grew out of a problem I haven't yet solved. My instincts around introductions aren't serving me well these days. Over the last few months, the number of potential connection opportunities that go above my threshold triggers has been escalating. Two years ago, I'd spot one potential connection every few months and do an introduction. Now I spot one or two a week, and it's accelerating. I am getting the strange feeling that I might turn into one of those cartoon characters at a switchboard who starts out all calm and in control and is reduced to crazed scrambling. In case it isn't obvious, the growth of ribbonfarm is the driver that is creating this scaling problem.

The answer is obvious for extroverts: create a community and start dealing with people in one-to-many and many-to-many ways in group
contexts. This allows you to simply create a social field around yourself
where people can connect without overt catalysis from you. The cost is
that you must turn yourself into a human social object. You must become a
new center. You will no longer be in the illegible intersection lands where
creativity and originality live. Call me selfish, but that's the big reason I don't like the idea that readers frequently propose: formal ribbonfarm meetups or an online ribbonfarm community.
The anatomy of the problem is simple. Blogging is often an edge role.
If you see a blog that sprawls untidily across multiple domains rather than
staying within a tidy niche, chances are you are reading an edge blog.
They tend to be small and slow-growth, with weird numbers in their traffic
anatomy.
The social graph of an edge blogger is very different from the social
graphs of both celebrities and regular people without much public
visibility. Regular people have many active strong links and many more
weak links that used to be strong links (old classmates, colleagues from
former jobs and the like). For regular people weak links are usually either
strong links weakened by time or intrinsically weak links catalyzed by a
short sequence of strong links (like a friend-of-a-friend or an in-law). In
both cases, the weak links of regular people tend to be quiescent.
Celebrities on the other hand have a huge number of active weak
links, but they only go one way: a lot of people know Obama but Obama
doesnt know 99.9999% of them. Even if you count only those who have
shaken hands with Obama, the asymmetry is still massive. Center bloggers
are effectively celebrities. In fact they often are celebrities who have taken
to blogging, like Seth Godin.
Edge bloggers though are an odd species. They are perhaps most like
professional headhunters, used car salesmen or other types of people who
regularly come into weak two-way contact with total strangers. Unlike
those rather transactional roles though, bloggers do a whole lot of weak
social rather than financial transactions with a lot of total strangers. Many
of you (I've lost count) have ongoing email conversations with me, usually about a specific theme that I've blogged about or mentioned somewhere online (container shipping, martial arts, organizational decay
and s/w design are some of the themes). The intensity ranges from several
times a week to once every couple of months (for the infrequent ones, I
usually have to do an inbox search to remember who the person is). With
some correspondents, I have periodic bursts of activity. With a small
handful of people, thanks to phone or face-to-face meetings, I have made
the jump to actual friendship.
Edge bloggers are natural weak link hubs. We have vastly more active
two-way weak link relationships going on than regular people or
celebrities (or center bloggers). These are not forgotten classmates or
friends-of-friends who can be called upon when you are job-hunting. Nor
are they one-way-recognition handshakes.
I got a visceral sense of what it means to be a weak-link hub when I
compared my LinkedIn graph visualization to that of a couple of regular
people friends. Though my friends had comparable numbers of contacts,
most of their contacts fell into very obvious small-world categories, like
workplace, school, customers or industry associations. My social graph on
the other hand, has a huge bucket that I could only label miscellaneous.
Many are from ribbonfarm, but I suppose my weak link hub style carries
over to regular life as well. For instance, I have a lot more random
connections to people in widely separated parts of Xerox, my former
employer, compared to most of my former coworkers.
Keeping Edges Edgy
Make no mistake, this is fun for me and hugely valuable. But I have
to admit, it takes a lot of time to keep up a whole bunch of 1:1 email
relationships, and it is getting steadily harder. So far, my clean-inbox
practices have helped me keep up, but there has been some of the inevitable increase in response time, and sometimes a decrease in my response quality.
The big temptation is of course to ignore my personality and preferences and allow ribbonfarm to become a center. It's not necessarily a bad thing. You trade off continued creativity and vitality for deeper collaborative cultivation of established value. I don't like doing that much. I get distracted too quickly. My brain is not built for depth in that sense, even around things I trigger, like the Gervais Principle memeplex.
The conundrum is that I don't think raising the threshold for potential connection quality is the right answer. That's the wrong filter variable for scaling. I am not sure what the right one is, but I won't attempt to jump to synthesis. So far, I've simply been letting a steadily-increasing fraction of introduction opportunities go by. Mostly I try to avoid making introductions to people who are already oversubscribed.
Though I don't have a theory, I do have one heuristic that serves me well: closer potential direct connection. If I know A and B, and I sense that A and B would have a more fertile relationship with each other than either has with me, I make the connection and exit. It is the opposite logic of marketplaces whose organizers are afraid of disintermediation. To me, being an intermediary in the social sense is mostly cost and little benefit.
But that one heuristic isn't enough. I have been experimenting with introductions in different ways lately, and learning new ideas and techniques.
Here's one new idea I've learned. To keep edges edgy, and prevent them from becoming centers, you need feedback signals. One I look for is symmetry: introducer types tend to be introducees equally often. If the ratio changes, I get worried.
As an illustration of the symmetry of this process of mutual cross-catalysis among sociopath weak-link hubs, consider this: while I was conducting my experiments with introductions, others have been introducing me to their friends. Hang Zhang of Bumblebee Labs introduced me to Tristan Harris, CEO of Apture, and Seb Paquet formally introduced me to Daniel Lemire (whom I knew indirectly before, through comments on each other's blogs, but had never directly emailed or interacted with).
We are all lab rats running in each other's mazes. I like that thought.

Extroverts, Introverts, Aspies and Codies


April 7, 2011
Lately I've been thinking a lot about extroversion (E) and introversion (I). As a fundamental spectrum of personality dispositions, E/I represents a timeless theme in psychology. But it manifests itself differently during different periods in history. Social psychology is the child of a historicist discipline (sociology) and an effectively ahistorical one (psychology). The reason I've been thinking a lot about the E/I spectrum is that a lot of my recent ruminations have been about how the rapid changes in social psychology going on around us might be caused by the drastic changes in how E/I dispositions manifest themselves in the new (online+offline) sociological environment. Here are just a few of the ideas I've been mulling:

As more relationships are catalyzed online than offline, a great sorting is taking place: mixed E/I groups are separating into purer groups dominated by one type
Each trait is getting exaggerated as a result
The emphasis on collaborative creativity, creative capital and teams is disturbing the balance between E-creativity and I-creativity
Lifestyle design works out very differently for Es and Is
The extreme mental conditions (dubiously) associated with each type in the popular imagination, such as Asperger's syndrome or co-dependency, are exhibiting new social phenomenology

It was the last of these that triggered this train of thought, but I'll get to that.
I am still working through the arguments for each of these
conjectures, but whether or not they are true, I believe we are seeing
something historically unprecedented: an intrinsic psychological variable
is turning into a watershed sociological variable. Historically, extrinsic and
non-psychological variables such as race, class, gender, socio-economic
status and nationality have dominated the evolution of societies.

Psychology has at best indirectly affected social evolution. For perhaps the
first time in history, it is directly shaping society.
So, since so many interesting questions hinge on the E/I distinction, I
figured it was time to dig a little deeper into it.
Wrong, Crude and Refined Models
I'll assume you are past the lay, wrong model of the E/I spectrum.
Introversion has nothing to do with shyness or social awkwardness.
If you have taken a Psychology 101 course at some point in your life,
you should be familiar with the crude model: extroverts are energized by
social interactions while introverts are energized by solitude. Every major
personality model has an introversion/extroversion spectrum that roughly
maps to this energy-based model. It is arguably the most important of the
Big Five traits.
For the ideas I am interested in exploring, the Psychology 101 model
is too coarse. We sometimes forget that there are no true solitary types in
homo sapiens. As a social species, we merely vary in the degree to which
we are sociable. We need a more refined model that distinguishes between
varieties of sociability.
A traditional mixed group of introverts and extroverts exhibits these
varieties clearly. Watch a typical student group at a cafeteria. The
extroverts will be in their mutually energizing huddle at the center, while
the introverts will be hovering at the edges, content to get the low dosage
social energy they need either through one-on-one sidebar conversations
or occasional contributions tossed like artillery shells into the extrovert
energy-huddle at the core. Usually contributions designed to arrest
groupthink or runaway optimism/pessimism.
As this example illustrates, a more precise and accurate view of the
distinction is that introverts need less frequent and less intense social
interaction, and can use it to fuel activities requiring long periods of
isolation. Extroverts need more frequent and more intense social

interactions, and can only handle very brief periods away from the group.
They prefer to use the energy in collaborative action.
While true solitude (like being marooned on an island without even a
pet) is likely intolerable to 99% of humanity, introverts prefer to spend the
social energy they help create individually. This leads naturally to a
financial metaphor for the E/I spectrum.
E/I Microeconomics
Positive social interactions generate psychological energy, while
negative ones use it up. One way to understand the introvert/extrovert
difference is to think in terms of where the energy (which behaves like
money) is stored.
Introverts are transactional in their approach to social interactions;
they are likely to walk away with their share of the energy generated by
any exchange, leaving little or nothing invested in the relationship itself.
This is like a deposit split between two individually held bank accounts.
This means introverts can enjoy interactions while they are happening,
without missing the relationships much when they are inactive. In fact, the
relationship doesn't really exist when it is inactive.
Extroverts are more likely to invest most of the energy into the
relationship itself, a mutually-held joint account that either side can draw
on when in need, or (more likely) both sides can invest together in
collaboration. This is also why extroverts miss each other when separated.
The mutually-held energy, like a joint bank account, can only be accessed
when all parties are present. In fact, strong extroverts don't really exist
outside of their web of relationships. They turn into zombies, only coming
alive when surrounded by friends.
In balance sheet terms, introverts like to bring the mutual social debts
as close to zero as possible at the end of every transaction. Extroverts like
to get deeper and deeper into social debt with each other, binding
themselves in a tight web of psychological interdependence.

This shared custodial arrangement of relationship energy is one
reason strong relationships are the biggest predictor of happiness: as
Jonathan Haidt has put it, happiness is neither inside, nor outside, but in
between. Happiness is the energy latent in interpersonal bonds that helps
smooth out the emotional ups and downs of individual lives. The more
you put into them, the happier you will be.
Continuing the financial analogy, the small pools of individually-held
stores of introvert energy tend to be more volatile in the short term but
better insulated from the exposures of collectivization. The large
collectively held stores of extrovert energy tend to be less volatile in the
short term, but more susceptible to dramatic large scale bubbles of
optimism and widespread depression.
Both sides, of course, pay a price for their preferred patterns of social
energy management. But that's a topic for another day. In this post, I am
more interested in the bald behavioral implications of this model:
Introverts
1. require a minimum period of isolation every day to survive
psychologically
2. are energized by weak-link social fields, such as coffee shops,
where little interaction is expected
3. are energized by occasional, deeper 1:1 interactions, but still at
arms length; no soul-baring
4. are energized by such 1:1 encounters with anyone, whether or
not a prior relationship exists
5. are drained by strong-link social fields such as family gatherings
6. are reduced to near-panic by huddles: extremely close many-many encounters such as group hugs
7. have depth-limited relationships that reach their maximum depth
very fast
Extroverts
1. need a minimum amount of physical contact every day, even if it is
just lying around with a pet

2. are energized by strong-link social fields such as family gatherings


3. like soul-baring 1:1 relationships characterized by swings between
extreme intimacy and murderous enmity
4. are not willing to have 1:1 encounters with anyone unless they've
been properly introduced into their social fields
5. are made restless and anxious by weak-link social fields such as
coffee shops unless they go with a friend
6. are reduced to near panic by extended episodes of solitude
7. have relationships that gradually deepen over time to extreme
levels
It took me a long time to learn point 4 in particular, because it is so
counter-intuitive with respect to the wrong-but-influential conflation of
introversion and shyness. I am a classic introvert. You might even say I
am an extreme introvert. One of my nicknames in college was hermit.
Yet, I find that I am far more capable of talking with random strangers
than most extroverts.
Extroverts tend to enjoy spending a lot of time with people they know
well. Talking to strangers is less rewarding to them because most E-E
transactions are maintenance transactions that help maintain, spend or
appreciate the invested capital in the relationships. Some of my extrovert
friends and family members are even offended by how easily and openly I
talk to random strangers: to them it seems obvious that depth of sharing
should correlate to length of interpersonal history. People like me simply
don't get that, since our approach to relationships is to pretty much bring
the depth back to zero at the end of every conversation.
The E-I Tension
Introverts (Is) and extroverts (Es) have a curiously symbiotic, love-hate relationship as a result. Both E-E and I-I interactions tend to be
harmonious, since there is consensus on what to do with any energy
generated. Positive E-E interactions strengthen bonds over time. Positive
I-I interactions generate energy that is used up before the next interaction,
with no collective storage.

It is E-I interactions that create interesting tensions. Extroverts accuse
introverts of selfishness: from their point of view, the introverts are taking
out loans against jointly-held wealth, to invest unilaterally in risky
ventures. Introverts in turn accuse extroverts of being overly possessive
and stifling, since they cannot draw on the energy of the relationship
without the other party being present. The confusion is simple if you note
that the introvert is thinking in terms of two individually held bank
accounts, while the extrovert is thinking in terms of a single jointly held
one.
The tension between introverts and extroverts is most visible in the
loose, non-clinical mental health diagnoses they make up for each other as
insults. Introverts are likely to accuse extroverts of codependency.
Extroverts are likely to accuse introverts of Asperger's syndrome. I only
recently learned about the slang term extroverts have for introverts: aspie.
Introverts don't have an equivalent short slang term for codependency that
I know of (probably because by definition they don't gossip enough to
require such shorthand). So let's simply make one up for the purpose of
symmetry: codie.
I've met people suffering from clinical versions of both co-dependency and Asperger's, so I know that most of the aspie/codie
accusations flying around are baseless.
Lately I've seen a lot more aspie accusations flying around than codie
accusations. This is perhaps partly due to Asperger's becoming an
aspirational disease in places like Silicon Valley (along with dyslexia), due
to a presumed correlation with startup success, but I believe there is more
to it. Recent shifts in the social landscape have made introversion far
more visible. This is among the many cracks in E-I relationships that I
mentioned earlier. There are seismic shifts going on in social psychology.
We may see a re-organization of social geography comparable to the great
racial and socio-economic sortings created by the flight to suburbia and
exurbia at the peak of the urban sprawl era.

Impro by Keith Johnstone


January 23, 2010
Once every four or five years, I find a book that is a genuine life-changer. Impro by Keith Johnstone joins my extremely short list of such
books. The book crossed my radar after two readers mentioned it, in
reactions to the Gervais Principle series: Kevin Simler recommended the
book in an email, and a reader with the handle angelbob mentioned it in
the discussion around GP II on Hacker News. Impro is ostensibly a book
about improvisation and the theater. Depending on where you are coming
from, it might be no more than that, or it might be a near-religious
experience.
The Alien Soulmate
In Your Evil Twins and How to Find Them, I defined an evil twin as
somebody who thinks exactly like you in most ways, but differs in just a
few critical ways that end up making all the difference. I listed Alain de
Botton and Nassim Nicholas Taleb among my evil twins. Johnstone has
defined for me a category that I didn't know existed, alien soulmate:
someone whose life has been shaped by radically different life
experiences, and thinks with a completely different conceptual language,
but is like you in just a few critical ways that make you soulmates.
Johnstone's life (described in the opening chapter, Notes on Myself)
seems to have been shaped by extremely unpleasant early educational
experiences. Mine has been shaped largely by rewarding ones. He loves
teaching and is clearly unbelievably good at it; the sort of teacher who
changes lives. I dislike teaching, and though I've done a fair amount of it,
I am not particularly good at it. His life revolves around theater, while
mine revolves around engineering, which are about as far apart as
professions can get. I could go on, but you get it. Polar opposites on paper.
We seem to share two critical similarities. First, like me, he seems to
stubbornly think things through for himself, with reference to his own
observations of the world, even if it means clumsily reinventing the wheel

and making horrible mistakes. Second, like me, he seems to adopt


methodological anarchy in groping for truths. Anything goes, if it gets you
to a valuable insight; no religious adherence to any particular
methodology, scientific or otherwise.
There is also a connection that may or may not be important: I was
active in theater for about a decade, from sixth grade through college. In
school, I was mostly the go-to guy for scripting class productions, and in
college I expanded my activities to acting and directing. I even won a
couple of inter-hostel (intramural to you Americans) acting prizes, and was
the dramatics secretary for my hostel for a year. Not that that means much.
It was pretty much a case of the one-eyed man being king in the land of
the blind. Engineering schools are not known for producing eventual
movie stars.
But though I was pretty much a talentless hack among other talentless
hacks, in retrospect, my experience with amateur theater did profoundly
shape how I think. I suppose that's why I resonated strongly with Impro.
I am pretty sure, though, that experience with theater is not necessary
for the book to have a deep impact on you. It seems to have attained a cult
status with a wide audience that extends well beyond the theater
community, so if you like this blog, you will probably like the book.
The Book
The book, first published in 1981, is a collection of loosely-connected
essays on various aspects of improvisational theater. The essays are not
philosophical (which is why their philosophical impact is so startling).
They are about very specific details of stagecraft. There are exercises
designed to teach particular skills, acting tips, short explanations
motivating the descriptions of the exercises, and insider references to
famous theater personalities (the only name I recognized among all the
references was Stanislavsky, he of the Method School). This is what
makes the non-theater reader feel so pleasantly blindsided. You shouldn't
be getting epiphanies about life, death and the universe while reading
about how to put on a mask or strike a pose. But more on that later; here's
a quick survey of the contents.

Chapter 1, Notes on Myself, begins with an exercise designed to get
you seeing the world differently. Literally. The exercise is to simply walk
around looking at things and shouting out the wrong names for things you
see (for example, look at your couch and yell "apple"). The effect, he
asserts, of doing this for a few minutes, is that everything seems to come alive
and acquire the intensity it held for you when you were a child. Try it for a
bit. It works, though I did not experience as much intensifying as he
claims his students typically experience. After that unsettling start, we get
a short and unsentimental, yet poignant and intimate, autobiographical
sketch of his early educational experiences. The descriptions of the
experiences are accompanied by deft insights into the nature of education.
This chapter includes the philosophical premise of the book, that adults are
atrophied children, and that traditional education accelerates rather than
slows this process of atrophy. But the point is not made with any sort of
political intent. It is simply presented as a useful perspective from which
to view what he has to say, and why theater training has the effects it does.
Chapter 2, Status, is particularly spectacular, and the most accessible
chapter in the book. It is based on the idea that the only thing you really
need to do, in preparing to improvise a scene, is to decide what status to
play, high or low, in relation to the other actors on stage. Through a series
of explanations and descriptions of startlingly original exercises,
Johnstone illustrates the working of status dynamics in interpersonal
interactions. One that I found both enlightening and hilarious was this: you
have a completely boring, everyday conversation with your improv
partner, but include an insult in every line you make up. Here's one of his
example fragments:
Can I help you, fool?
Yes, Bugeyes!
Do you want a hat, slut?
I've done just enough theater to be able to visualize this clearly, but I
suspect, even if you have no experience with theater, you can imagine how
this strange exercise can turn quickly into drama that helps you understand
status. There are other surgically precise exercises that are designed to

teach how personal space relates to status, and how master-servant
dynamics play out. One true Aha! moment for me was a throwaway
remark on Beckett's Waiting for Godot, which I saw in New York last fall.
I knew of the play by reputation of course, but I had no idea what to
expect, and whether I would get it. I only got it at a fairly superficial
level, but enjoyed it immensely nevertheless, for reasons that I did not
understand. Yet, others in the audience seemed to not get it at all, to the
point of being bored.
Impro completely explained the play for me. The play's appeal lies in
the fact that it is a showcase for status dynamics. The four characters,
Vladimir, Estragon, Pozzo and Lucky, perform what amounts to a status
opera. Though a good deal of the content is nonsensical, the status
interactions are not.
Chapter 3, Spontaneity, describes exercises and acting principles that
seem like they would take you perilously close to madness if you tried
them unsupervised. Having had a lifelong preference for learning by
myself rather than listening to teachers, I don't often tell myself, this
material needs a teacher. So that should give you an idea of just how
unusual this is likely to be for most people. Johnstone recognizes this, and
he notes that the work described in this chapter is closer to intensive
therapy than to learning a skill. In fact, it sounds like it would be more
intense and more effective than therapy (therapy being, like teaching, yet
another process that I don't trust to others). I am surprised nobody has
invented theater-therapy. Actually, I take that back. I once knew a girl who
did prison theater. I never understood the point of that. Now I do. Done
right, I suspect prison theater could lower rates of recidivism. Maybe there
are other examples of theater as therapy.
Chapter 4, Narrative Skills, is close to the best fiction-writing advice
I've ever read, probably second only to Francine Prose's Reading Like a
Writer (also recommended by a regular reader, Navin Kabra). The
material in this chapter actually got me curious enough that I put down the
book and tried out one of the exercises right then. At the time, I happened
to be on a long flight from DC to Tokyo (on my way to Bali), so I actually
sat there for an hour with my eyes closed, thinking up a story, and then
spent another hour scribbling like crazy, writing it down. I came up with

probably the best plot outline of my life. I might actually flesh it out and
post it here at some point (I dabbled in fiction a fair amount about a
decade ago, but somehow never pursued it very far).
Chapter 5, Masks and Trance, is easily the most intense, disturbing
and rewarding chapter. The subject is acting with masks on, a stylized sort
of theater that seems to have been part of every culture, during every time
period, until enlightenment values began stamping it out. Since I had
just returned from Bali when I read this chapter (examples from Bali
feature prominently in the book's treatment), and seen glimpses of what he was
talking about during my trip, the material came alive in particularly vivid
ways. The chapter deals, with easy familiarity, with topics that would
make most of us very uncomfortable: trances, possession and atavistic
archetypes. Yet, despite the disturbing raw material, the ideas and concepts
are not particularly difficult to grasp and accept. They make sense.
The Book, Take Two
So much for the straightforward summary of the book. That it teaches
theater skills effectively should not be surprising. What is surprising is the
light it sheds on a variety of other topics. Here are just a few:
1. Body Language: I've always found body language a somewhat
distasteful subject, whether it is of the traditional covering your
mouth means you think the other person is lying variety, or
neurolinguistic programming, or the latest craze, the study of
microexpressions. Despite the apparent validity of specific
insights, the field has always seemed to me intellectually
disreputable and shoddy. Impro does something I didn't think was
possible: it lends the subject dignity and intellectual respectability.
The trick, with hindsight, is to view the ideas in the field in the
context of art, not psychology.
2. Interpersonal Relationships: I spend a good deal of time thinking
about the principles of interpersonal interaction, and writing up
my thoughts. The reason Impro sheds a unique sort of light on the
subject is that it describes simulations of what-if scenarios that
would never happen in real life, but serve to validate theories that
do apply to real-life situations.

3. Psychology: Elsewhere in recent posts, I've recommended the
classic books on transactional analysis (TA), Eric Berne's Games
People Play and What Do You Say After You Say Hello? and
Thomas Harris' I'm OK-You're OK. I've always felt, though, that
TA, while useful as an analytical framework, isn't very helpful if
you are trying to figure out what to do. Impro is pretty much the
how-to manual for TA, and it works through a sort of
experimental reductio ad absurdum. There is no better way to
recognize the stupidity of game playing than to act out (or at least
think out) game scripts in exaggerated forms.
You'll probably find insights into other subjects if you look harder. I
suspect the reason there is so much to learn from the practice of theater is
that the humanities and social sciences lack a strong culture of
experimentation. Theater is, in a sense, the true laboratory for the
humanities and social sciences.
I'll finish up with one thought. I explain the tagline of this blog,
experiments in refactored perception, as geekspeak for seeing the world
differently. If you ignore the theater-manual aspect, that pretty much
describes the book: it is a textbook that teaches you how to see the world
differently.

Your Evil Twins and How to Find Them


September 17, 2009
Recently a reader emailed me a note: I just wanted to bring to your
radar The Pleasures and Sorrows of Work by Alain de Botton, and what
you thought of its theses. Now de Botton (The Pleasures and Sorrows of
Work, The Consolations of Philosophy, How Proust Can Change Your
Life) has been on my radar for a while. I had browsed his books at Barnes
and Noble a few times, but always put them down due to strange, sick
feelings in my stomach. Thanks to this reader's gentle nudge, I finally
caved and read the first of the three, and managed to figure out why de
Botton's books had made me viscerally uncomfortable at first glance: he is
my evil twin. An evil twin is defined as somebody who thinks exactly like
you in most ways, but differs in just a few critical ways that end up
making all the difference. Think the Batman and the Joker. Here's why
evil twins matter, and how to discover yours.
Why Evil Twins Matter
In the closing scene of Batman Begins, Commissioner Gordon tells
the Batman that a new villain is abroad who has a taste for theatrics, like
you, and shows him the Joker's calling card. The premise of the evil twin
setup plays out in the sequel, The Dark Knight. Towards the end, Heath
Ledger's disturbing Joker elaborates on the logic: I wouldn't kill you!
What would I do without you? You complete me.
Comic book universes provide plenty of examples of this fundamental
idea, that your nemesis is not a polar opposite, but an eerily similar person
who is just different in a few subtle but critical ways. Some narratives in
fact present the nemesis as a polarity within one character, as in the Jekyll
and Hyde model and more recently, the Hulk.
If you think about it, this makes sense. Your nemesis has to be
interested in the same things as you, operate in the same areas, and think
and act at levels of sophistication similar to yours. Polar opposites would
live lives that would likely not even intersect. List the 10 most important

elements of your social (not private) identity. In my case for instance, they
might be PhD, researcher, omnivorous reader, writer, individualist,
polymath-wannabe, coffee-shop person, non-athletic, physically lazy,
amoral, atheistic and so forth. If you turned them all around, you'd get
something like high-school drop-out, non-reader, groupie, parochial, pub
person, sportsy, physically active, moral and religious. I am no snob, but it
is highly unlikely that I'd have much to do with somebody with that
profile.
On the other hand, if you meet somebody to whom every adjective
applies, but they rub you the wrong way at a deep level, what are you to
conclude? The clash has to be at the most subtle levels of your personality.
Meeting your evil twin helps you find yourself, which is why you should
look. Of course, I am being somewhat facetious here. You don't have to
hate your evil twin or battle him/her to the death. You can actually get
along fine and even complement each other in a yin-yang way.
de Botton, Taleb and Me
Take Alain de Botton for instance. Despite my evil twin adjective, I
think I'd like him a lot and get along with him quite well. No climactic
battles. The Pleasures and Sorrows of Work is just beautiful as a book. As
you know if you've been reading this blog for a while, I write a lot on the
philosophy of work. The book literally produced dozens of thoughts and
associations in my head on every page. Since I was reading it on the
Kindle, I was annotating and highlighting like crazy. We think about the
same things. He opens with a pensive essay on container shipping
logistics, something I've written about. The Shawshank Redemption with
its accountant hero is one of my favorite movies; de Botton finds romance
in the profession as well. I've written about ship-breaking graveyards, he
writes about airplane graveyards. He seems fascinated by aerospace stuff.
I am an aerospace engineer. He sees more romance in a biscuit factory
than in grand cathedrals. So do I. Like me (only more successfully) he
shoots for an introspective, lyrical style. But as I continued reading, I
realized I was intellectually a little too close to the guy.
When I tried putting my notes all together, the feelings of discomfort
only intensified. There was no coherent pattern to my responses. I realized

that, in a way, you can only build one picture at a time with a given set of
jigsaw pieces. Writers normally leave enough room for you to construct
meaning so you feel a sense of control over the reading experience. With
evil twins, thats not possible, since you are trying to build different
pictures. I felt absorbed in the book, but also confused and disoriented by
it.
Thinking harder, I realized that the points of conflict in our
worldviews were at a very abstract level indeed. In a deep sense, de
Botton's worldview is that of an observer. Mine, though I do observe and
write a lot, is primarily that of a get-in-the-fray doer. He is content to
watch. I feel compelled to engage. He admires engineers and engineering;
I felt compelled to become one and get involved in building stuff. It is a
being-vs.-becoming dynamic. To a certain extent, he is driven by needs of
an almost religious nature: to overcome his sense of separateness and be
part of something larger than himself. My primary instinct is to separate
myself. It is a happiness vs. will-to-power dynamic. One last example. de
Botton is clearly a humanist: he wants to be kind and feel for others, and
paradoxically, ends up being quite cruel in places. I, on the other hand, am
mainly driven by a deep ubermensch tendency towards hard/cold
interpersonal attitudes, but end up surprising myself by being kind and
compassionate more often, in practice. Kind cruelty vs. tough love. I could
go on.
Another of my evil twins is Nassim Nicholas Taleb (Fooled by
Randomness, The Black Swan). I am re-reading the latter at the moment,
and I noticed that Taleb describes himself as a flaneur. In the comments to
my piece, Is there a Cloudworker Culture?, a reader noted that my self-description as a cloudworker sounded a lot like the idea of a flaneur.
Again, a lot of the exact same things interest us, and we share opinions on
a lot of key fronts (the nature of mathematics, empiricism and
falsifiability, unapologetic elitist tastes, long-windedness, low tolerance
for idiots and the accidentally wealthy, a preference for reading books
rather than the news). And again, we part ways at a deep level. That's a
story for another day.
So before we move on to the How-To section, a recommendation. If
you feel strangely attracted to my writing, and yet rebel against it at some
deep level, you might really (and unreservedly) love de Botton and/or

Taleb. I am too close to their thinking to do justice to them with book


reviews, but you should read them. If the books help you clarify who you
are, and you end up dropping ribbonfarm from your reading list, Ill
consider it my good deed for the day.
How to Find Your Evil Twin
In my case, my evil twins mostly turn out to be writers I've never
met. Sometimes dead writers. That's because so much of my life revolves
around books and ideas. I suspect most people have a pretty good chance
of actually meeting and getting to know their evil twins.
The key things to look for are the following:
1. You share a lot of interests, down to very specific details like books
read, places visited, socio-economic and cultural backgrounds
(though oddly enough, not race or ethnicity).
2. Your thinking levels are similar, and your conceptual categories for
viewing the world are similar
3. You try to act in the world in very similar ways; you choose similar
means and ends
4. You reach similar conclusions about what is, what ought to be,
what you should do and how
5. If you ever meet them in person, you instantly resonate with them
That sounds like a soulmate, right? Now for the differential that will
discriminate between soulmate and evil twin:
1. If you are straight, they are the same gender as you. If you are gay,
I don't know.
2. You lean in different directions on key philosophical tradeoffs. For
example, if you both believe truth vs. kindness is a fundamental
tradeoff, you lean towards truth, while he/she leans towards
kindness.
3. On the important question of attitude towards others, you are
clearly different. You want different things from other people and
the world at large.

So go, look for your evil twin. You will be enlightened by what you
find. If you already know who yours are, I am curious. Post a comment
(suitably anonymized if necessary).

Bargaining with your Right Brain


March 16, 2008
At the straw market in Nassau, in the Bahamas, famous for stuff
like the straw handbags below, I recently encountered a distinctive
culture of bargaining that made me stop and ponder the subject (on a
beach, aided by rum). The pondering resulted in a neat little flash of
insight that allowed me to synthesize everything I know about the subject
in a way that surprised me. The short version: game-theoretic and
information-theoretic approaches to the subject are something between
irrelevant and secondary. What drives bargaining behaviors and outcomes
is story-telling skill. Here's how you can learn the skills that really matter
in being a successful bargainer.

The Simple, Elegant and Wrong Answer


Buyer A and seller B are haggling over product P. Neither knows what
price the other is willing to settle for, but A wants P at the lowest price B
might sell, and B wants to sell at the highest price A might be willing to
pay. You model the situation as a sequential move game, with each move
being, at its simplest, a price. You might represent the progress of the
game thus, as pairs of B-A call-response moves:

($200, $100), ($180, $120), ($160, $140), ($150, SOLD!)


At each turn, each player uses the history of previous offers/counteroffers to estimate the true limit of the other party, and makes an offer that
induces the other to move his price-point as much as possible while
revealing as little as possible of his own limit. Throw in some bounded
rationality and a value on time, and you have the sort of framing
economists like.
You could mathematically model this (no doubt, somebody already
has analyzed this to death and proved all sorts of convergence results).
One example of such an analysis is in the classic game-theoretic decision
analysis book, Thinking Strategically (an excellent read, by the way, for
what it sets out to do).
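For concreteness, here is a minimal sketch of the kind of naive model being described. The function name and the fixed per-round concession are illustrative assumptions of mine, chosen only to reproduce the ($200, $100) ... ($150, SOLD!) sequence above; they are not drawn from Thinking Strategically or any particular game-theoretic analysis.

# Toy version of the "wrong" model: bargaining as symmetric concession
# toward a bisection point. Each side concedes a fixed amount per round
# until the offers meet in the middle.
def naive_bargain(seller_ask, buyer_bid, concession=20):
    history = [(seller_ask, buyer_bid)]
    while seller_ask - buyer_bid > 2 * concession:
        seller_ask -= concession   # seller comes down a notch
        buyer_bid += concession    # buyer comes up a notch
        history.append((seller_ask, buyer_bid))
    return history, (seller_ask + buyer_bid) / 2

moves, price = naive_bargain(200, 100)
print(moves)   # [(200, 100), (180, 120), (160, 140)]
print(price)   # 150.0 -- the dull bisection discussed later in the post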
But neat though this mathematical formulation is, it is fundamentally
wrong-headed. Bargaining isn't primarily driven by parties attempting to
(bounded) rationally guess each other's limit points by doing (non-trivial)
real-time analysis of number sequences. Yes, the alternating sequence of
numbers does carry information, but in most real situations, information is
primarily conveyed in other, non-quantitative ways, and the relevant
information isn't even about price. Let's examine two examples before I
present my alternate model of bargaining.
Real-World Examples
Example 1: The Bahamian Turtle
In Nassau, I bought this coconut-shell toy turtle for my nephew, for
$5:

The seller, a pretty young woman, came up to us and engaged us with,
not one, but three moves all at once, and persuaded me to close, before I
had a chance to make a counter-offer:
Seller: You like this turtle? Nice toy for child! Eight dollars!
Seller (conspiratorial whisper, eyes darting left and right): Tell you
what, I make you a deal, only six dollars.
Me (doubtfully): Hmm
My Wife (enthusiastically): oh, its so cute; we should get one for
Arjun [my nephew]!
Seller: Alright, I give it to you for five dollars. What color bead you
want lady? [the thing has a bead on the string that is not visible in this
picture].
This one is, in a sense, a non-starter example, since I got played
before we got to bargaining. Let's take a longer example where I acquitted
myself better.
Example 2: The Jaipur Good-Cop/Bad-Cop
In Jaipur, India, on a vacation recently, my wife wanted to buy herself
a traditional Rajasthani kurta, or shirt. Translating from Hindi, the
exchange went roughly as follows (I may be misremembering the prices,
but the gist is accurate):
Me: How much?
Seller: 300 rupees
Me: Thats too much, how about 175?
Seller: Come on sir, at that price, I won't even recover my costs!

Me: No, 175 is the reasonable price for this kind of item.
Seller: Arrey, come on sir! Just look at this fine needlework; you
may have seen similar stuff for less in other shops, but if you look closely,
the work isn't as delicate!
Me: Of course I can see the quality of the work, that's why we want
to buy it, now come on, quote me the right price.
Seller: Okay sir, for you, I'll let it go for 250 (starts folding up the
kurta).
Me: No no, this lady may not be Indian, but I am; be reasonable
[my wife is Korean, and since I hadn't mentioned that she was my wife,
the shopkeeper had almost certainly assumed I was her local guide; many
other shopkeepers had in fact called out to me to bring her into their
stores, offering me a commission!]
Seller: But I did quote the price for you sir, for foreigners, we
normally ask for at least 400-500!
Me: Fine, tell you what, I'll give you 190.
Seller: Come on sir, at that price, I don't even make a profit of 10
rupees!
Me: Fine, let's do this deal. 200; final offer.
Seller (looking upset): But
At this point, the seller's boss, probably the store owner, who'd been
poring over a ledger in the background, looked up, interrupted and said
shortly, Can't you see the lady wants it? Just give it to them for 200, let's
cut this short!
I have several other examples I could offer (in the US, bargaining
tends to be restricted to larger purchases like cars), but these two examples
suffice to illustrate the points I want to pick out.

The Phenomenology
There are several features of interest here. Here is a round dozen:
1. Fake moves: In the Bahamian example, consider the rapid series of
three prices offered with a very quick change of subject to the color
of the bead at the first sign that I wanted to buy. This bargaining is
clearly fake, the numbers being part of the initial courtship ritual
rather than the actual price negotiations, which were short-circuited.
2. Bargaining as bait: The sellers in the Nassau marketplace promote
their wares with a curious mix of American retail rhetoric (C'mon
honey! Everything 50% off today) and more traditional bargain-hunter bait (You want a handbag sir, for the pretty lady? C'mon, I
make you a deal!). I suspect very little serious bargaining actually
takes place, since the customers are largely American cruise ship
tourists, who are not used to bargaining for the small amounts in
play in these transactions.
3. Qualitative Re-valuation: Consider the variety of non-quantitative
moves in the Jaipur example. In the fine needlework move, the
seller attempted to change my valuation of the object, rather than
move the price point. I accepted the point, but indicated I'd already
factored that in.
4. Narrative: A narrative also developed, inviting me to cast myself
as the knowledgeable insider who was being offered the smart
Indian deal, as opposed to the high-mark-up offered to clueless
foreigners. This is a key point that I will return to.
5. Deal-breaker feints: Twice, the seller attempted to convince me
that I was offering him a price he could not accept. These are
rhetorical feints. A similar move on the customer's part is to
pretend to walk away (that old saw about the key to negotiation
being the willingness to walk away isn't much use in practice, but
pretending to walk away is very useful).
6. Closure Bluffs: Another interesting feature of the Indian example is
the closure bluff; a non-serious price accompanied by closure
moves (such as starting to package the item), on the off-chance that
the other party may panic and fold early.

7. Good Cop: Finally, note the second individual stepping in to make
the deal towards the end (this dynamic is particularly common in
traditional retail stores in India, where the owner, or seth, and
accountant, or munim, will often be watching the salesmen at work,
stepping in at the right moment. The psychological key to this is an
implicit sense of escalation and respect: forget that unimportant
lackey, clearly you're a smart customer and I, the boss, will deal
with you personally and cut you a special deal that my lackey isn't
authorized to offer.)
8. Ritual: In most cultures of bargaining, there are also moves of
ritual significance. In India, the best-known one is the bohni, or
first sale of the day. Sellers will often plead with customers to close
the deal, since it would constitute the bohni and also assert that the
price being offered is a really good deal because of the seller's
anxiety to finish his bohni. This is not entirely a pragmatic sort of
move: the bohni does matter to traditional merchants, who will
often actually take a bit of a hit on the first transaction for luck, to
get the cash flow started. Curiously, I also encountered a much
more open version of this in the Bahamas, where one shopkeeper
said it plainly, C'mon honey, first customer gets best deal! I
suspect anthropologists would find an equivalent in every culture
besides modern fixed-price retail.
9. Knowledge Bluffs: Though I didn't use them, knowledge bluffs
are common in bargaining (I saw that same thing in Delhi for half
the price!). These are bluffs because if the buyer really knows
something about the cost structure of the seller, that information is
usually quickly deployed and factored out, reducing the bargaining
to a matter of what is the convenience-value to me of buying here
rather than in that other place? Why do they still work? I'll tell
you in a moment.
10. The Justice Bluff: Surprisingly, in the form of a bluff, a notion of
fair price often enters even the purest free-market exchanges. This
is usually brought into play via displays of mock anger at being
treated unfairly. Surprisingly, even sellers will do this, attempting
to convey a clear sense of disgust by putting away the items under
discussion. But there are boundaries: sellers will rarely plead to
make a sale on the basis of personal need, since this subtly moves
the situation from a peer-transaction to a (morally unacceptable)
charity transaction. My mom once bought some vaseline from a
tearful door-to-door saleswoman who apparently genuinely broke
down and said, buy some just out of sympathy please! My mom
caved.
11. Boundary Blurring: If a full transaction has three parts
(discovery/selection, negotiations, closing), elements of bargaining
often creep out of their nominal home in the middle section. For
example, in traditional Indian full-service retail, the sales staff will
often pull out a vast selection, more than necessary or asked for,
visibly doing a lot of work, creating a clear sense of pressure.
On the other side, towards the end of a transaction, sometimes the
customer will throw in surprise in-kind requests after the deal
seems closed. Alright, I am buying this car from you for more
than I wanted, how about you throw in some floor mats?
12. Non-influence of actual knowledge: Though there is a lot of
bluffing, there is not much actual information about price limits in
play. In every example Ive encountered, at least for small
amounts, buyers and sellers do not attempt to guess limits directly
(beyond having a ballpark reasonable figure in mind that may be
utterly irrational). It is obvious that the buyer may not even have a
walk-away limit price in his/her head, but what is not so obvious
is that even the seller may not have such a price in mind. Cost
structures can be murky even to sellers, and other hard-to-value
elements may be in play, like prospects for future sales, desire to
clear slow-moving inventory, and the like. Actual price information
usually enters the picture only when the amounts are significant, in
which case the parties generally do their research beforehand, and
attempt to factor that information out of the bargaining stage
altogether. Bargaining is primarily about ownership of the
unknown factors in pricing, and is a high-cost process, and it is in
the interest of both parties not to bother bargaining about mutually acknowledged certainties.
The Right Model
So that's all very well. There are lots of psychological subtleties
involved. But is this all really important? Is it possible to just cut the
Gordian knot and become really good at some sort of game-theoretic
model? Would all the bluffing and qualitative nuances vanish under the
right sort of time-series modeling?
The answers are yes and no respectively. Yes, you do need to work
with the full thing; game theory won't cut the Gordian knot for you. And
no, you will not be able to subsume all the bluffing and complexity no
matter how much you crunch the numbers. So you do need to appreciate
the qualitative sound-track of the bargaining, but no, don't be discouraged;
I am not suggesting that the only meaningful model is a localized sui
generis ethnography. Universal models and approaches to bargaining are
possible.
What actually happens in a bargaining transaction is the co-construction of a storyline that both sides end up committing to. Every
move has a qualitative and a quantitative part. The prototypical transaction
pair can be modeled roughly as:
((p1, v1), (p2, v2))
where the p's are qualitative statements and the v's are price
statements. The key is that the qualitative parts constitute a language game
(in the sense of people like Stalnaker). Each assertion is either accepted or
challenged by subsequent assertions. The set of mutually accepted
assertions serves to build up a narrative of increasing inertia, since every
new statement must be consistent with previous ones to maintain
credibility, even if it is only the credibility of a ritual rather than literal
storyline.
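As a toy illustration of this structure (my own sketch, not a formal model from the post or from the language-game literature, with names and the accept/challenge bookkeeping invented purely for illustration), each move can be represented as an (assertion, price) pair, with unchallenged assertions accumulating into the narrative:

# Toy representation of a bargaining move as (qualitative assertion, price),
# with accepted assertions accumulating into a narrative of increasing inertia.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Move:
    assertion: str            # the qualitative part (p)
    price: Optional[float]    # the quantitative part (v); None if no number offered

@dataclass
class Bargain:
    narrative: List[str] = field(default_factory=list)   # mutually accepted assertions
    history: List[Move] = field(default_factory=list)

    def play(self, move: Move, accepted: bool) -> None:
        self.history.append(move)
        if accepted:                      # unchallenged assertions gain inertia
            self.narrative.append(move.assertion)

b = Bargain()
b.play(Move("this is fine needlework", 300), accepted=True)
b.play(Move("I have already factored the quality in", 175), accepted=True)
b.play(Move("at that price I will not even recover costs", 250), accepted=False)
print(b.narrative)   # the price barely moved, but the storyline did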
This is the real reason why there is apparent spinning-of-wheels
where the price point may not move for several iterations. For example in
the Indian kurta case, I rejected the seller's assertion that 175 would
represent a loss, but acknowledged (but successfully factored out) the this
is fine needlework assertion. Though the price point wasn't moving, the
narrative was. At a more abstract level, a full narrative with characters and
plot may develop. This is also the reason why knowledge bluffs work:
even if the seller knows the buyer cannot have seen the same item for half
the price in another store, he cannot call out the bluff in an obvious way
since that would challenge the (always positive) role in which the buyer is
cast.

The key conclusion from all this? The transaction moves to closure
when the emerging logic of the narrative becomes overwhelming, not
when price transparency has been achieved. To bargain successfully, you
must be able to control the pace and direction of the development of the
narrative. At a point of narrative critical mass, something snaps and either
a new narrative must displace the old one (rare), or there must be a
movement towards closure.
Becoming a Right-Brained Bargainer
So here is my magic solution: become good at story-telling-based
conversations.

Walk in, not with a full-fledged plan/story, but a sense of what roles
you can comfortably fill (straight-dealer? cynic? know-it-all?
Innocent student without much to spend?)
As the conversation progresses, try to sense what roles the other
party is trying on for size, and suggest ones favorable to you
(Look, I try to buy only from local merchants, and you guys are
doing a great job for the economy of our town, but). Say
things that move towards a locked-in role on both sides that favors
you. In the example above, I got locked into the role of
knowledgeable local on disadvantageous terms.
Look out for the narrative logic as it develops. For example, I
successfully resisted an attempt to bring the fine needlework
assertion into play, which would have moved the story from guy
looking for cheap deal to connoisseur transaction and a
premium-value storyline.
There are critical/climactic points where you can move decisively
for closure; watch and grab. In my case, I thought I had one when
the seller offered the Not even 10 rupees move, but the owner
cutting in for the kill and accepting was a clue that I could have
pushed lower.
Be aware of the symbolic/narrative significance of your numerical
moves. If the seller moves from 200 to 180, and you move from
100 to 120, the very symmetry of your move shows that you have
no information at all to use for leverage, and the transaction is
likely to proceed dully to a bisection unless you do something
creative. If the seller offers 500 and you say 250, that reveals that
you may be using a start-at-half heuristic, which might create an
opening for the seller to launch a storyline of really, 500 is a fair
price, here's why. Offering 275 instead creates the right sort of
ambiguity. If you do want to drive towards a symmetric-bisection
storyline, make sure you pick an irrational starting point, but not
one so irrational that it reveals you know nothing about the price
(irrationally low opening offers can work, but you need a 201-level
bargaining course to learn why).
Now, this isn't easy. You have to become a storyteller. But I never
said I was going to offer an easy answer; merely a better one than a
misguided attempt to do real-time game-theoretic computations.

The Tragedy of Wiio's Law


March 26, 2009
The game-break is to 1:1 interpersonal relationships what the Aha!
moment is to individual introspection. The rare moment, shortly after
meeting for the first time, when two people experience a sudden,
uncontracted moment of connection, shared meaning and resonance. A
moment that breaks through normal social defenses. I call it uncontracted,
because I mean the kind of moment that occurs when there isn't an
obvious subtext of sexual tension, or a potential buy/sell transaction,
limiting behavior to the boundaries of an informal social contract. The best
examples are the ones that happen between people who aren't trying to
sleep with, or sell to each other (at least not right then). I call it a game-break, because you momentarily stop playing social games and realize
with a shock that there is some part of an actual person on the other side
that perfectly matches a part of you that you thought was unique. A
moment that elevates human contact from the level of colliding billiard
balls to the level of electricity or chemistry. It is the moment when a
relationship can be born. Our fundamental nature as a social species rests
on the anatomy of this moment. Here is a picture: lowered masks, a spark
breaking through invisible shells.

The Interpersonal Double-Take


The game-break is not the same as the forced-festivity ritual of the
party ice-breaker, which merely preempts social discomfort without
catalyzing genuine connection. In fact, by giving people something

ritualistic to do, the ice-breaker often delays the game-break, or prevents it
altogether. The game-break cannot be engineered with certainty, but you
can do things that make it more likely. But this is not a how-to article; it is
a dissection of the phenomenon itself. Let's start with a specimen.
I stumbled upon the meaning of the game-break a few years ago,
during the course of a routine "you really should reach out to" meeting.
This particular meeting happened while I was booting up my postdoctoral
stint at Cornell. I was meeting a Russian PhD student I'll call Z. My
adviser had recommended I talk to his adviser, who had recommended I
talk to Z. So I did the usual thing: asking questions about his research,
mentioning my own work where I saw a connection, and so on. Like many
European researchers, he was unsmiling and taciturn, providing precise,
sealed-and-complete answers to even the most open-ended of questions,
and making only the bare minimum socially-acceptable effort to ask
questions in turn. Fortunately, I have a good deal of stamina for this sort of
thing. My ability to talk authoritatively about pretty much any subject
under the sun for one minute serves me well in such situations. So for
about 15 minutes we did our little immovable object, irresistible force
dance.
And then suddenly, the game-break. Something I shared got through
to him, and it was like watching a bulb light up. It was obvious that I went
from being guy who works for that other professor, to hey, I thought
only I thought/felt that way; who is this guy? On my end, I too was able
to instantly fingerprint his inner life. For a moment, roles and social
identities fell away. It was particularly dramatic because of his previous
impenetrability. With Americans, you often get such over-the-top fake
resonance that you can easily miss the moment when it turns genuine. Z
instantly became voluble and excited, eager to brainstorm and have a big
old mind-meld. I had already experienced many such moments, but I
detected something new in my own reaction: a clinical checking of a
mental box.
Wiio's Law
That conversation led, as I fully expected, to nothing. We did have a
great mind-meld, and the meeting paid for itself socially, but we didn't

actually collaborate on anything afterward. For professional collaboration,
the game-break moment is necessary, but not sufficient. A lot of other
pieces have to fall into place for that.
But my clinical box-checking intrigued me for quite a while, and I
only understood it months later when I ran across a Finnish epigram called
Wiio's Law: communication usually fails, except by accident. The Finnish
original, if you care, reads: Viestintä yleensä epäonnistuu, paitsi
sattumalta.
I understood what my little mental check-box act represented. I was
noting and filing away one of Wiio's accidental moments of true
communication. We view the world through mental models. The
culturally-inherited and shared parts of these models don't feel very
personal to us. If you and I met and talked about, say, shopping or cricket,
we wouldn't be too surprised to find that we think about these things in
roughly similar terms. Nor would we share a sense of intimacy through the
shared meaning.
But there are parts that derive from our own experiences, which we
mistakenly believe are completely unique to us. Things we think others
will never understand (a source of relief to those, like me, who are
existentialist by preference, and unholy fear for those who yearn for a
sense of complete connection to something bigger than themselves). I
earnestly hope there are irreducibly subjective elements of being, but I
have been through enough game-break moments to realize that we are far
less existentially unique than I think.
The Golden Rule of the Game Break
The game break is so rare because we are at once desperate for, and
terrified of, genuine connection. Watch people at a busy intersection.
Here's a portion of a scene from New York's Times Square, for instance
(Wikimedia Commons):

Intersections illustrate how efficiently we avoid contact and maintain
the coherence of our groupings in confined settings. If we shook a
comparable box of marbles around, we'd get hundreds of collisions.
Living things turn those hundreds into zero, most of the time.
But if our physical obstacle avoidance skills are amazing, our social
mind-bump collision avoidance skills are even better. We know exactly
when to break eye-contact to prevent polite from turning into unwanted
intimacy. We know exactly the appropriate kind of smile for every
situation. Our sense of our own, and others', personal space is finely-tuned,
and we know what level of closeness requires apology, and what levels
require snubs, sharpness or displays of anger.
Though they certainly trap us in prisons of existential solitude, if we
didn't have these mechanisms, we'd be rubbed raw by the sheer volume of
social contact in modern society. Nearly all of the contact would be
annoying or draining. But in every situation, even the most random, there
are always tiny leftover gaps in our defenses, and we are grateful to have
them. Through these gaps and imperfections, sometimes the game-break
can sneak through. Which leads us to:
The Golden Rule of the Game Break: Given enough time, any two
people forced into proximity will experience a game break.
I learned this growing up as a kid in India, when we used to go on
long train journeys, between 24 and 48 hours, to visit relatives. That's a long
time for anybody to be cooped up together with strangers. The
unavoidable morning intimacy of tousled hair and teeth-brushing breaks
down reserves even more. You can reliably expect at least one game break
on every long train ride.
What happens before the game-break, of course, is the gradual
approach, as time-driven norms dictate the lowering of all the outer layers
of defenses. Once enough layers have been peeled away, the probability of
a game break starts climbing. It might take 4 hours or 4 years, but given
enough time, it will happen. The Golden Rule is merely an application of
what statisticians call the law of large numbers. What happens after,
though, is more interesting (and for some of you, philosophically
depressing).
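To make the probabilistic intuition concrete (a rough gloss of mine, assuming, purely for illustration, a constant small chance p of a game break in each hour of proximity once the outer defenses are down):

P(at least one game break in n hours) = 1 - (1 - p)^n, which climbs toward 1 as n grows. Even a modest p of 0.05 per hour gives 1 - 0.95^48, roughly 0.91, over a 48-hour train ride.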
The Tragedy of Wiio's Law
Unfortunately for the romantic, Wiio's law is a grim and agnostic one.
The accident of genuine communication can lead to both deep trust and
implacable enmity. You can get Harry Potter-Voldemort outcomes, a level
of intimacy in enmity that makes decisive conflict as much suicide as
murder. Worse, even when the outcome is a positive, resonant, mind-meld,
our minds cannot stand the deep connection for very long. While the full
defenses never return, the uncontracted quality of the game-break does not
last for long. A social contract descends, to structure and codify any
continuation of the relationship.
Which explains why some of our most precious social memories are
of brief, accidental encounters with strangers, often nameless. Encounters
which didn't have a chance to get codified and structured, and remain in
limbo in our memories as highly significant episodes. Moments of deep
loneliness can arise out of familiar situations, when we are surrounded by
people with whom we have achieved very intimate, but controlled
relationships, and we recall our still-raw contacts with strangers. We have
had our game-break moments with those who surround us, but the
memories of those moments lie irretrievably buried under the reality of
active relationship contracts. There is a beautiful Hindi film song that
goes:
Aate-jaate khoobsurat, aawara sadakon pe

kabhi-kabhi ittefaq se, kitne anjan log mil jate hain,


unme se kuch log bhool jaate hain, kuch yaad reh jaate hain
Which translates roughly to:
Wandering through beautiful lonely streets,
sometimes, by accident, you meet so many strangers
of these, some you forget, some you remember
The game-break is the gateway to the real relationship: two people
who have unique mental models of each other, and keep up contact
frequently enough to keep those models fresh and evolving. Friendship,
romance, hatred, professional collaboration, marriage, business
partnerships and investments all follow from game-break moments. Yet,
each of those is a contracted relationship, safely bounded away from
further unscripted and uncontracted game-break moments. We may have
more game-break moments with the same people; that is what we mean by
taking the relationship to another level, but each time, the fog of a
(possibly deeper) social contract descends, shrouding the memory of the
moment.
The tragedy of Wiio's law is this. Our most connected moments are
with people we know we will never meet again. The moments of
connection stay with us only to the extent that relationships do not follow.
I am no exception. There are too-long glances burned in my brain (as
many exchanged with wrinkled old men as with pretty girls) that I will
never forget. There are conversations I occasionally replay, there are
memories jogged by old letters and photographs.
One of the most poignant such memories for me is from three weeks I
spent backpacking in Europe in 1998. As you might expect, I experienced
several game-break moments (that's the main reason we do things like
backpacking). One particular evening, towards the end of the trip, in
Brussels, I ran into a group of Indian folk musicians practicing in the park
(they were members of the Jaipur Kawa Brass Band and Musafir, who
were touring Europe together, and were due to perform as part of the
World Cup Soccer festivities). I was then newly-minted as a global citizen (having left India in 1997), and still capable of homesickness. A game-break moment followed, and I went back to their lodgings with them for a magical evening of rustic Indian food, unbelievable conversation about music, and listening to them playing, not for an audience (I was only one, and they were clearly not counting me), but for themselves.
Several years later, one of the groups, Musafir, toured the US, and
played in Ann Arbor (I was then a graduate student at the University of
Michigan). With a great deal of anticipation and excitement I went to their
concert, and then backstage after. They remembered me, and their eyes lit
up too, very briefly, at the memory. Then discomfort descended, as we
realized we would not be able to recreate that magic. Now I was on the
edge of a contracted relationship: artists-and-fan.
I chose to say goodbye quickly, and walk away with my memory.
That is the tragedy of Wiio's law.

The Allegory of the Stage


September 23, 2009
Have you ever taken a deep breath and stepped out on a stage of some
sort to perform? Time slows down. Sounds quiet down and you can
actually hear the thudding of your heart. And then, just as suddenly, as
your performance starts, your acute sense of self-consciousness is forced
to recede. Time speeds back up and the audio gets turned up again. You
are left with a hallucination-like memory of that moment of transition.
This experience, which I call the "trigger moment," is at the heart of the
allegory of the stage.

Movie directors didn't make up this subjective feeling. They copied something very real with the slow-motion camera and the sound mixer.
Trigger moments are such powerful experiences that we are tempted to
weave our life stories around them. Shakespeare's "all the world's a stage" bit from As You Like It is of course the most famous take on the idea.
I consider it more an allegory than a metaphor, but let's not quibble.
Think of it as a metaphor if you like.
In school and college, I used to get on the stage quite a bit to perform
debates, public-speaking contests, quizzes, and yes, theater. Not all
performances have a hallucinatory trigger moment between normal-mind
and stage-mind. If you have just had a few drinks, and are fooling around to entertain friends, you won't experience this transition. But what my literal stage experiences did was sensitize me to the trigger-moment
feeling. Once you learn to recognize it, you realize that plenty of life
experiences have the same subjective signature. Submitting a homework
assignment, releasing a product, jumping off the high board, pushing a
button to start a chemistry experiment, pulling a trigger, hitting "publish" to release a blog post into the wild. Even hitting "send" in your email
editor. At the other end of the spectrum, on large collective scales, you get
the first wave of soldiers landing on a beach on D-day, or the moment in
medieval battles when two opposed armies begin to charge at each other.
Or thousands of people holding their breath as someone goes "3, 2, 1, we have ignition."
What all these trigger-moment experiences have in common is that
they represent thresholds beyond which you are no longer in control of the
consequences of your actions. Something you are creating goes from
being protected by you (and your delusions) to facing the forces of the
wild world. Every time a comic steps on stage, he is about to either bomb
or kill, putting his theory of life to the test. Movie script writers design
entire stories around such moments. There are plenty of overused and
clichéd lines to choose from ("This is it" or "So it begins" or, most dramatically, "my whole life has been building up to this moment").
The moments themselves are infrequent and short-lived, but there is
no mistaking the transition, or the feeling of being on stage on the other
side. You know whether you are performing, or on the sidelines, waiting.
There is truth underlying the angsty teenager's sense of anticipation, the feeling that something significant hasn't yet happened, that life has not yet begun. You haven't started living until you experience and survive your first powerful stepping-on-stage moment. The bitter, depressed middle-aged adult who tells the 18-year-old that "real life isn't like the movies" is actually wrong. He has merely never dared to step onto a significant stage himself, so he doesn't know that such powerful crossing-the-threshold moments are possible. That every life can be the Hero's journey.
Sure, the rather crude and vulgar yearnings of teenagers (they all want
to be rock stars, sports stars or novelists) are mostly unlikely to be
fulfilled. But whether your life feels like it is playing out as a series of on-stage episodes doesn't depend on whether you head towards the more
obvious stages. It depends entirely on whether you have the mental
toughness to recognize and not shy away from big trigger moments. This
mental toughness is what allows you to say "damn the torpedoes, full speed ahead." You accept the worst that can happen and step on stage
anyway. The exhilaration that can follow is not the exhilaration of having
impressed an audience. It is the exhilaration of having cheated death one
more time.
The allegory of the stage is the story of your life told around the
moments when you faced death, and charged ahead anyway.
But life is more than a series of step-on-stage/step-off vignettes; there
is a narrative logic to the whole thing. Each trigger moment prepares you
for larger trigger moments. Each time you shy away from a trigger
moment, you become weaker. There is a virtuous cycle of increasingly
difficult trigger moments, and if you can get through them all, you are
ready for the biggest trigger moment of all: the jump into eternal oblivion.
Everybody dies. Not everybody can make it an intentional act of stepping
onto a pitch-black stage.
There is also a vicious cycle of increasing existential stage fright. Do
that enough, and you will find yourself permanently in the darkness, life
having passed you by. As you might expect, the universe has a sense of
humor. You can only experience living to the fullest if you are able to
get through death-like trigger moments. Shy away from these death-like
moments, and your life will actually feel like living death.
Curiously though, in this allegory of the stage, it isn't other people who are spectators of your life. Everybody is either on the stage or waiting backstage for their moment. What's out there is the universe itself, random, indifferent to your strutting. That's what separates teenagers from
adults: the realization that other people are not your audience.

The Missing Folkways of Globalization


June 16, 2010
Between individual life scripts and civilization-scale Grand
Narratives, there is an interesting unit of social analysis called the
folkway. Historian David Hackett Fischer came up with the modern
definition in 1989, in his classic, Albion's Seed: Four British Folkways in
America:
the normative structure of values, customs and meanings that exist in any culture. This complex is not many things but one thing, with many interlocking parts ... Folkways do not rise from the unconscious in even a symbolic sense, though most people do many social things without reflecting very much about them. In the modern world a folkway is apt to be a cultural artifact, the conscious instrument of human will and purpose. Often (and increasingly today) it is also the deliberate contrivance of a cultural elite.
Ever since I first encountered Fischer's ideas, I've wondered whether
folkways might help us understand the social landscape of globalization.
As I started thinking the idea through, it struck me that the notion of the
folkway actually does the opposite. It helps explain why a force as
powerful as globalization hasn't had the social impact you would expect. The phrase "global citizen" rings hollow in a way that even the officially
defunct Yugoslavian does not. Globalization has created a good deal of
industrial and financial infrastructure, but no real social landscape,
Friedman-flat or otherwise. Why? I think the answer is that we are missing
some folkways. Why should you care? Let me explain.
Folkway Analysis
Folkways are a particularly useful unit of analysis for America, since
the sociological slate was pretty much wiped clean with the arrival of
Europeans. As Fischer shows, just four folkways, all emerging in 17th

and 18th century Britain, suffice to explain much of American culture as it


exists today. It is instructive to examine the American case before jumping
to globalization.
So what exactly is a folkway? It's an interrelated collection of default
ways of conducting the basic, routine affairs of a society. Fischer lists the
following 23 components: speech ways, building ways, family ways,
gender ways, sex ways, child-rearing ways, naming ways, age ways, death
ways, religious ways, magic ways, learning ways, food ways, dress ways,
sport ways, work ways, time ways, wealth ways, rank ways, social ways,
order ways, power ways and freedom ways.
Even a cursory examination of this list should tell you why this is
such a powerful approach to analysis. If you were to describe any society
through these 23 categories, you would have pretty much sequenced its
genome (curious coincidence, 23 Fischer categories, 23 chromosome pairs
in the human genome). You wouldn't necessarily be able to answer every
interesting social or cultural question immediately, but descriptions of the
relevant folkways would contain the necessary data.
The four folkways examined by Fischer (the Puritans of New
England, the Jamestown-Virginia elites, the Quakers in Pennsylvania, and
migrants from northern parts of Britain to Appalachia), constitute the
proverbial 20% of ingredients that define 80% of the social and cultural
landscape of modern America. These four original folkways created the
foundations of modern American society. It is fairly easy to trace
recognizable modern American folkways, such as Red and Blue state
folkways, back to the original four.
Other folkways that came later added to the base, but did not
fundamentally alter the basic DNA of American society (one obvious sign:
the English language as default speech way). Those that dissolved
relatively easily into the 4-folkway matrix (such as German, Irish, Dutch
or Scandinavian) are barely discernible today if you don't know what to
look for. Call them mutations. Less soluble, but high-impact ones, such as
Italian, and Black (slave-descended), have turned into major subcultures
that accentuate, rather than disrupt, the four-folkway matrix; rather like
mitochondrial DNA. And truly alien DNA, such as Asian, has largely
remained confined within insular diaspora communities; intestinal fauna, so to speak. The one massive exception is the Latino community. In both size (current and potential) and cultural distance from the Anglo-Saxon
core, Latinos represent the only serious threat to the dominance of the
four-folkway matrix. The rising Latino population led Samuel Huntington,
in his controversial Foreign Policy article, "The Hispanic Challenge," to raise an alarm about the threat to the American socio-cultural operating system. To complete our rather overwrought genetic analogy, this is a heart transplant, and Huntington was raising concerns about the risks of rejection (this is my charitable reading; there is also clearly some xenophobic anxiety at work in Huntington's article).
I offer these details from the American case only as illustrations of the
utility of the folkway concept. What interests me is the application of the
concept to globalization. And I am not attempting to apply this definition
merely as an academic exercise. It really is an extraordinarily solid one. It
sustains Fischer's extremely dense 2-inch-thick tome (which I hope to finish by 2012). This isn't some flippant definition made up by a shallow
quick-bucks writer. It has legs.
Globalization and Folkways
Globalization is, if Tom Friedman is to be believed, an exciting
process of massive social and cultural change. A great flattening.
Friedman's critics (who have written books with titles like The World is Curved) disagree about the specifics of the metaphoric geometry, but don't contest the idea that globalization is creating a new kind of society. I agree that globalization is creating new technological, military and economic landscapes, but I am not sure it is creating a new social landscape.
We know what the "before" looks like: an uneasy, conflict-ridden
patchwork quilt of national/civilizational societies. It is a multi-polar
world where, thanks to weapons of mass destruction, refined models of
stateless terror, and multi-national corporations binding the fates of
nations in what is starting to look like a death embrace, no one hegemon
can presume to rule the world. Nobody seriously argues anymore that
Globalization is reducible to Americanization (in the sense of a
wholesale export of the four-folkway matrix of America). That was a
genuine fear in the '80s and '90s that has since faded. The Romanization of Europe in antiquity, and the Islamization of the Middle East and North
Africa in medieval times, have been the only successful examples of that
dynamic.
But it still seems reasonable to expect that this process,
globalization, is destroying something and creating something equally
coherent in its place. It is reasonable to expect that there are coherent new
patterns of life emerging that deserve the label "globalized lifestyles," and that large groups of people somewhere are living these lifestyles. It is reasonable, in short, to expect some folkways of globalization.
Surprisingly, no candidate pattern really appears to satisfy the
definition of folkway.
With hindsight, this is not surprising. What is interesting about the list
of ways within a folkway is the sheer quantity of stuff that must be
defined, designed and matured into common use (in emergent ways of
course), in order to create a basic functioning society. Even when a
society is basically sitting there, doing nothing interesting (and by
interesting I mean living out epic collective journeys such as the
settlement of the West for America or the Meiji restoration in Japan) there
is a whole lot of activity going on.
The point here is that the activity within a folkway is not news, but
that doesn't mean nothing is happening. People are born, they grow up,
have lives, and die. All this background folkway activity frames and
contextualizes everything that happens in the foreground. The little and
big epics that we take note of, and turn into everything from personal
blogs to epic movies, are defined by their departure from, and return to,
the canvas of folkways.
That is why, despite the power of globalization, there is "no there there," to borrow Gertrude Stein's phrase. There is no canvas on which to
paint the life stories of wannabe global citizens itching to assert a social
identity that transcends tired old categories such as nationality, ethnicity,
race and religion.
This wouldn't be a problem if these venerable old folkways were in good shape. They are not. As Robert Putnam noted in Bowling Alone, old folkways in America are eroding faster than the ice caps are melting.
Globalization itself, of course, is one of the causes. But it is not the only
one. Folkways, like individual lives and civilizations, undergo rise and fall
dynamics, and require periodic renewals. They have expiry dates.
Every traditional folkway today is an end-of-life social technology;
internal stresses and entropy, as much as external shocks, are causing them
to collapse. The erosion has perhaps progressed fastest in America, but is
happening everywhere. I am enough of a nihilist to enjoy the crash-and-burn spectacle, but I am not enough of an anarchist to celebrate the lack of
candidates to fill the vacuum.
The Usual Suspects
We've described the social "before" of globalization. What does the "after" look like? Presumably there already is (or will be) an "after," and
globalization is not an endless, featureless journey of continuous
unstable change. That sounds like a dark sort of fun, but I suspect humans
are not actually capable of living in that sort of extreme flux. We seek the
security of stable patterns of life. So we should at some point be able to
point to something and proclaim, "there, that's a bit of globalized society."
I once met a 19-year-old second-generation Indian-American who, clearly uneasy in his skin, claimed that he thought of himself as a "global citizen." Is there any substance to such an identity?
How is this "global citizen" born? What are the distinguishing peculiarities of his "speech ways" and "marriage ways"? What does he eat for breakfast? What are his "building ways"? How does this creature
differ from his poor old frog-in-the-well national-identity ancestors? If
there were four dominant folkways that shaped America, how many
folkways are shaping the El Dorado landscape of globalization that he
claims to inhabit? One? Four? Twenty? Which of this set does our hero's story conform to? Is the Obama folkway (for want of a better word) a neo-American folkway or a global folkway?
These questions, and the difficulty of answering them, suggest that the
concept of a "global citizen" is currently a pretty vacuous one. Fischer's point that the folkway is a "complex of interlocking parts" is a very important one. Most descriptions of "globalized" lifestyles fail the folkway test either because they are impoverished (they don't offer substance in all 23 categories) or are too incoherent; they lack the
systematic interlocking structure.

Multicultural societies are no more than many decrepit old folkways living in incongruous juxtaposition, and occasionally coming together in Benetton ads and anxious mutual-admiration culture fests
Melting pot societies are merely an outcome of some folkways
dissolving into a dominant base, and others forming
distinguishable subcultural flavors
Cyberpunk landscapes are more fantasy than fact; a few people
may be living on this gritty edge, but most are not.
Intentional communities, which date back to early utopia
experiments, have the characteristic brittleness and cultural
impoverishment of too-closed communities, which limits them to
marginal status.
Purely virtual communities are not even worth discussing.
Click-and-mortar communities, that might come together
virtually, have so far been just too narrow. Take a moment to
browse the groups on meetup.com. How many of those interest
groups do you think have the breadth and depth to anchor a
folkway?

The genetic analogy helps explain why both coverage (of the 23
categories) and complex of interlocking parts are important. Even the
best a la carte lifestyle is a bit of a mule. In Korea for instance, or so I am
told, marriages are Western style but other important life events draw from
traditional sources. Interesting, perhaps even useful, but not an
independent folkway species capable of perpetuating itself as a distinct
entity. That's because a la carte gives you coverage, but not complex
interlocking. On the other hand, biker gangs have complex interlocking
structures and even perpetuate themselves to some extent, but do not have
complete coverage. I've been watching some biker documentaries lately, and it is interesting how their societies default back to the four-folkway base for most of their needs, and only depart from it in some areas. They
really are subcultures, not cultures.
Latte Land
I don't know if there is even one coherent folkway of globalization, let alone the dozen or so that I think will be necessary at a minimum (some of you might in fact argue that we need thousands of micro-Balkan folkways, but I don't think that is a stable situation). But I have my
theories and clues.
Here's one big clue. Remember Howard Dean and the "tax-hiking, government-expanding, latte-drinking, sushi-eating, Volvo-driving, New York Times-reading, body-piercing, Hollywood-loving, left-wing freak show" culture?
Perhaps that's a folkway? It wouldn't be the first time a major folkway derived its first definition from an external source. It sounds a la carte at first sight, but there's some curious poetic resonance suggestive of
deeper patterns.
For a long time I was convinced that this was the case; that Blue
America could be extrapolated to a Blue World, and considered the
Promised Land of globalization, home to recognizable folkways. That it
might allow (say) the Bay Area, Israel, Taiwan and Bangalore to be tied
together into one latte-drinking entrepreneurial folkway for instance. And
maybe via a similar logic, we could bind all areas connected, and socially
dominated by, Walmart supply chains into a different folkway. If Latte
Land is one conceptual continent that might one day host the folkways of
globalization, Walmartia would be another candidate.
I think there's something nascent brewing there, but clearly we're
talking seeds of folkways, not fully developed ones. There are tax-hiking,
latte-drinking types in Bangalore, but it is still primarily an Indian city,
just as the Bay Area, despite parts achieving an Asian majority, is still
recognizably and quintessentially American.

But there are interesting hints that suggest that even if Latte Land isn't
yet host to true globalized folkways, it is part of the geography that will
eventually be colonized by globalization. One big hint has to do with walls
and connections.
In the Age of Empires, the Chinese built the Great Wall to keep the
barbarians out, and a canal system to connect the empire. The Romans
built Hadrian's Wall across Britain to keep the barbarians out, and the
famed Roman roads to connect the insides.
Connections within, and walls around, are characteristic features of an
emerging social geography. Today the connections are fiber optic and
satellite hookups between buildings in Bangalore and the Bay Area. In
Bangalore, walled gated communities seal Latte Land off from the rest of
India, their boundaries constituting a fractal Great Wall. In California, if
you drive too far north or south of the Bay Area, the cultural change is
sudden and very dramatic. Head north and you hit hippie-pot land. Head
south and you hit a hangover from the 49ers (the Gold Rush guys, not the
sports team). In some parts of the middle, it is easier to find samosas than
burgers. Unlike in Bangalore, there are no physical walls, but there is still
a clear boundary. I don't know how the laptop farms of Taiwan are sealed off, or the entrepreneurial digital parts of Israel from the parts fighting messy 2000-year-old civilizational wars, but I bet they are.
Within the walls people are more connected to each other
economically than to their host neighborhoods. Some financial shocks will
propagate far faster from Bangalore to San Jose than from San Jose to
(say) Merced. I know at least one couple whose "marriage way" involves the longest geometrically possible long-distance relationship, a full 180 degrees of longitude apart, and maintained through frequent 17-hour flights.
Curiously, since both the insides and outsides of the new walls are
internally well-connected, though in different ways, the question of who
the barbarians are is not easy to answer. My tentative answer is that our
side of the wall is in fact the barbarian side. Our nascent folkways have
more in common with the folkways of pastoral nomads than settled
peoples. Unlike the ancient Chinese and Romans, we've built the walls to seal the settled people in. I'll argue that point another day. Trailer: the key is that "barbarians" in history haven't actually been any more "barbaric" than settled peoples, and the ages of their dominance haven't actually been "dark ages." We may well be headed for a "digital dark age" driven by
digital nomad-barbarians.
Our missing folkways, I think, are going to start showing up in Latte
Land in the next 20 years. Also in Walmartia and other emerging
globalization continents, but I dont know as much about those.
In the meantime, I am curious if any of you have candidate folkways.
Remember, it has to cover the 23 categories in complex and
interconnected ways, and there should be a recognizable elite whose
discourses are shaping it (the folkway itself can't be limited to the elite
though: the elite have always had their own globalized jet-setting
folkways; we are talking firmly middle class here). How many folkways
do you think will emerge? 0, 1, 10 or 1000? Where? How many
conceptual continents?
Random side note: This post has officially set a record for longest
gestation period. I started this essay in 2004, two years before I started
blogging. It's kinda been a holding area for a lot of globalization ideas,
about 20% of which made it into this post. I finally decided to flush it out
and evolve the thread in public view rather than continue it as a working
(very hard-working) paper.
Random side note #2: There are lots of books that are so thick, dense
and chock-full of fantastic ideas that I could never hope to review or
summarize them. In a way, this post is an alternative sort of book
review, based on plucking one really good idea from a big book. Fischer's
book is a worthwhile reading project if you are ready for some intellectual
heavy lifting.

On Going Feral
August 19, 2009
Yesterday, a colleague looked at me and deadpanned, "aren't you supposed to have a long beard?" When you remote-work for an extended period (it's been six months since my last visit to the mother ship), you
can expect to hear your share of jokes and odd remarks when you do show
up. Once you become a true cloudworker, a ghost in the corporate
machine who only exists as a tinny voice on conference calls, perceptions
change. So when you do show up, you find that people react to you with
some confusion. You're not a visitor or guest, but you don't seem to truly
belong either.
I hadn't planned on such a long period without visits to the home base, but the recession and a travel freeze got in the way of my regular monthly visits for a while. The anomalous situation created an accidental social-psychological experiment with me as guinea pig. What's the difference between six months and one month, you might ask? Everything. Monthly visits keep you domesticated. Six months is long enough to make you go feral. I've gone feral.

Consider the meaning of the word: the condition of a domesticated species that goes back to its wild ways. The picture above is of a feral cat.
Curiously enough, this one is apparently from Virginia, where I live.
Most common domesticated animals can go feral: dogs, pigs, cats,
horses and sheep, for instance. We tend to forget though, that the most
impossibly ornery species weve managed to domesticate is ourselves,
homo sapiens. Settled agriculture, urbanization, religion, the nation-state
and finally, industrialization, each added one more layer of domestication.
It's not for nothing that primate ethologist Desmond Morris titled one of
his books about human sociology The Human Zoo. Modern work styles
are ripping away all the layers at once. I am an atheistic, post-nationalist
immigrant from the other side of the planet, living in a neo-urban (though
not bleak-cyberpunk) landscape. I inhabit physical environments where
old communities are crumbling, and people are tentatively groping for
social structure through meetups (aside: I just started a writers' meetup, "1000 words a day," in the DC area). I am tethered to a corporation too
loosely to be a significant part of it socially. No Friday happy hours or
regular lunch-buddy for me.
I've become to the society that is my parent company what the
privateers of old used to be to the big naval powers of the 18th century. A
sort of barely-legal socio-economic quasi-outlaw. Maybe I am yielding to
self-romanticizing temptations, but there are some hard truths here.
Political scientists often use a fictional construct, "man in the state of nature," as a starting point for their conceptual models of the gradual domestication, or civilization, of humans. William Golding offered one
fictional imagining of what might happen if humans went feral at an early
enough age, in Lord of the Flies. But until now, the idea of modern feral
humans has largely been a theoretical one.
Cloudworker lifestyles (mobile, home-based, unshaven, pajama-clad and Starbucks-swilling) create a psychological transformation that is very similar to what happens when animals go feral. In animals, it takes a couple of generations of breeding for the true wild nature to re-emerge. Cats, for instance, revert to a basic, hardy, stocky, short-haired, robustly interbred tabby variety. Dogs become mutts. But in humans it can happen faster, since most of our domestication is through education and socialization rather than breeding.
You might think that the true tabby-mutt human must live outside the
financial system, maybe as a wilderness survivalist or fight-club member.
Maybe engage in desperate and deadly Lord-of-the-Flies-style lifestyles, all nature-red-in-tooth-and-claw. But that's actually a mistaken notion,
because that sort of officially checked-out or actively nihilistic person is
defined and motivated by the structure of human civilization. To rebel is to
be defined by what you rebel against. Criminals and anarchists are
civilized creatures. Feral populations are agnostic, rather than either
dependent on, or self-consciously independent of, codified social
structures. Feral cloudworkers use social structures where it accidentally
works for them, rather like feral cats congregating near fish markets, and
improvise ad-hoc self-support structures for the rest of their needs.
As a truly feral cloudworker, you simply end up being thrown to your
own devices. Social infrastructure no longer works for you, except by
accident. You don't get friends for free just because you have a job or
belong to a bowling league. You improvise. You find some social contact
at Starbucks. You go for long walks and learn to appreciate solitude more.
You become more closely attuned to your personal bio-rhythms. You nap
(well, I do). You have left your cubicle for the wide world, but you pay the
cost. You have to learn to survive in the social wilderness. Much of it is as
bleak as the deep open ocean, where it takes the personality of the oceanic
white-tip shark to survive.
But the contrast is most vivid when you do, on occasion, rejoin
society as a physical guest. I was surprised at how different I felt, starting
with my shoes and badge (I am barefoot or in flip-flops at home. At
work I have to wear close-toed shoes because it is a lab environment).
The regular rhythms of morning coffee-hello rituals and meeting behaviors
seem strangely alien. It all seems like a foreign country you've only read
about in theoretical org charts. Names and faces drift apart, starved of
nourishing, daily reinforcement, and you struggle to conjure up names of
people you used to pass by in the hallways every day. Out of cc, out of mind. The logic of promotions, team staffing and budgeting seems as obscure as the rituals of Martian society. Even though you know that, in theory, you are being affected by it.

As my colleague's beard joke illustrates, you are perceived differently. You are some strange off-the-org-chart species, and people don't know
what to do with you. You are disconnected from water-cooler gossip to a
significant extent, but the fact that you are clearly surviving, productive,
and effective, I suspect, makes the regular workers introspect as much as
us aliens. I imagine they wonder whether all the seemingly solid reality all
around them can really be what it seems if somebody like me can
randomly show up and disappear occasionally and still impact things as
much as their next-cubicle neighbor. Anybody with imagination who is
still desk-bound in traditional ways, I suspect, is feeling reality and its
walls, floors and ceilings dissolve around him/her, Matrix style.
On Friday, I'll return to my natural wild habitat. Will life re-domesticate me at some point? I don't know.

On Seeing Like a Cat


August 6, 2009
Cats and dogs are the most familiar among the animal archetypes
inhabiting the human imagination. They are to popular modern culture
what the fox and the hedgehog are to high culture, and what farm animals
like cows and sheep were to agrarian cultures. They also differ from foxes,
hedgehogs, sheep and cows in an important way: nearly all of us have
directly interacted with real cats and dogs. So let me begin this meditation
by introducing you to the new ribbonfarm mascot: the junkyard cat,
Skeletor, and my real, live cat, Jackson. Here they are. And no, this isn't
an aww-my-cat-is-so-cute post. I hate those too.

The Truth about Cats and Dogs


I am a cat person, not in the sense of liking cats more (though I do),
but actually being more catlike than doglike. Humans are more complex
than either species; we are the products of the tension between our doglike and cat-like instincts. We do both sociability and individualism in
more complicated ways than our two friends; call it hyper-dogginess plus
hyper-cattiness. That is why reductively mapping yourself exclusively to one or the other is such a useful exercise. You develop a more focused
self-awareness about who you really are.
Our language is full of dog and cat references. Dogs populate our
understanding of social dynamics: conflict, competition, dominance,
slavery, mastery, belonging and otherness:

Finance is a dog-eat-dog world


He's the alpha-dog/underdog around here
He's a pit bull
Dhobi ka kutta, na ghar ka, na ghat ka (trans: the washermans
dog belongs neither at the riverbank, nor at the house; i.e. a
social misfit everywhere)
He follows her around like a dog
He looks at his boss with dog-like devotion

Cat references speak to individualism, play, opportunism, risk, comfort, mystery, luck and curiosity:

A cat may look at a king


She curled up like a cat
A cat has nine lives
Managing this team is like herding cats
Look what the cat dragged in
Its a cat-and-mouse game
Curiosity killed the cat

There is a dark side to each: viciousness and deliberate cruelty (dogs), coldness and lack of empathy (cats). We also like to use the idealized cat-dog polarity to illuminate our understanding of deep conflicts: they are "going at it like cats and dogs." Curiously, though the domestic cat is a far
less threatening animal than the domestic dog (wolf vs. tiger is a different
story), we are able to develop contempt for certain dog-archetypes, but not
for cat-archetypes. You can't really insult someone in any culture by
calling him/her a cat (to my knowledge). But there is fear associated with
cats (witches, black cats for bad luck) in every culture. Much of this fear, I believe, arises from the cat's clear indifference to our assumptions about
our own species-superiority and intra-species status.
That point is clearly illustrated in the pair of opposites "he looks at his boss with dog-like devotion" / "a cat may look at a king." The latter is my
favorite cat-proverb. It gets to the heart of what is special about the cat as
an archetype: being not oblivious, but indifferent to ascriptive authority
and social status. You can wear fancy robes and a crown and be declared
King by all the dogs, but a cat will still look quizzically at you, trying to
assess whether the intrinsic you, as opposed to the socially situated,
extrinsic you, is interesting. Like the child, the cat sees through the
Emperor's lack of clothes.
Our ability to impress and intimidate is mostly inherited from
ascriptive social status rather than actual competence or power. Cats call
our bluff, and scare us psychologically. Dogs validate what cats ignore.
But it is this very act of validating the unreal that actually creates an
economy of dog-power, expressed outside the dog society as the power of
collective, coordinated action. Dogs create society by believing it exists.
In the Canine-Feline Mirror
We map ourselves to these two species by picking out, exaggerating
and idealizing certain real cat and dog behaviors. In the process, we
reveal more about ourselves than either cats or dogs. "Cats are loyal to places, dogs to people" is an observation that is more true of people than
either dogs or cats. Just substitute interest in the limited human sphere (the
globalized world of gossipy, politicky, watercoolerized, historicized and
CNNized human society; feebly ennobled as humanism) versus the
entire universe (physical reality, quarks, ketchup, ideas, garbage, container
ships, art, history, humans-drawn-to-scale). There are plenty of such
dichotomous observations. A particularly perceptive one is this: dog-people think dogs are smarter than cats because they learn to obey commands and do tricks; cat-people think cats are smarter for the exact same reason. Substitute interest in degrees, medals, awards, brands and titles versus interest in snowflakes and Saturn's rings. I don't mean to be derisive here: medals and titles are only unreal to cats. Remember, dogs

make them real by believing they are real. They lend substance to the
ephemeral through belief.
Cat-people, incidentally, can develop a pragmatic understanding of
the value of dog-society things even if deep down they are puzzled by
them. You can get that degree and title while being ironic about it. Of
course, if you never break out and go cat-like at some point, you will be a
de facto dog (check out the hilarious Onion piece a commenter on this
blog pointed out a while back: "Why can't anyone tell I am wearing this suit ironically?").
But let's get to the most interesting thing about cats, an observation that led to the title of this article. My copy of The Encyclopedia of the
Cat says:
It is not entirely frivolous to suggest that whereas pet
dogs tend to regard themselves as humans and part of the
human pack, the owner being the pack leader, cats regard
the humans in the household as other cats. In many ways
they behave towards people as they would towards other
kittens in the nest, grooming them, snuggling up with
them, and communicating with them in the ways that they
would use with other cats.
There is in fact an evolutionary theory that while humans deliberately
domesticated wild dogs, cats self-domesticated by figuring out that
hanging around humans led to safety and plenty.
I want to point out one implication of these two observations: cats
aren't unsociable. They just use lazy mental models for the species-society
they find themselves in: projecting themselves onto every other being they
relate to, rather than obsessing over distinctions. They only devote as
much brain power to social thinking as is necessary to get what they want.
The rest of their attention is free to look, with characteristic curiosity, at
the rest of the universe.
To summarize, dog identities are largely socially constructed, in-species (actual or adopted, which is why the reverse-pet "raised by wolves" sort of story makes sense). Cat identities are universe-constructed. Which brings us to a quote from Kant (I think).
Personal History, Identity and Perception
It was Kant, I believe, who said, "we see not what is, but who we are." We don't start out this way, but as our world-views form by accretion,
each new layer is constructed out of new perceptions filtered and distorted
by existing layers. As we mature, we get to the state Kant describes, where
identity overwhelms perception altogether, and everything we see
reinforces the inertia of who we are, sometimes leading to complete
philosophical blindness. Neither cats nor dogs can resist this inevitability,
this brain-entropy, but our personalities drive us to seek different kinds of
perceptions to fuel our identity-construction.
Dogs, and dog-like people, end up with socially-constructed, largely extrinsic identities because that's what they pay attention to as they
mature: other individuals. People to be like, people to avoid being like. It
is at once a homogenizing and stratifying kind of focus; it creates out of
self-fulfilling beliefs an identity mountain capped by Ken and Barbie
dolls, with foothills populated by hopeless, upward-gazing peripheral
Others, who must either continue the climb or mutiny.
Cats and cat-like people, though, simply aren't autocentric/species-centric (anthropomorphic, canino-morphic and felino-morphic).
Wherever they are on the identity mountain believed into existence by
dogs, they are looking outwards, not at the mountain itself. They are
driven to look at everything from quarks to black holes. In this broad
engagement of reality, there isnt a whole lot of room for detailed mental
models of just one species. In fact, the ideal cat uses exactly one space-saving mental (and, to dogs, wrong) model: "everyone is basically kinda like me." Appropriate, considering we are one species on one insignificant speck of dust circling an average star in a humdrum galaxy. The Hitchhiker's Guide to the Galaxy, remember, has a two-word entry for Earth: "Mostly Harmless." This indiscriminate, non-autocentric curiosity is
dangerous though: curiosity does kill the cat. Often, it is dogs that do the
killing. We may be mostly harmless to Vogons and Zaphod Beeblebrox,
but not to ourselves.

Paradoxically, this leads cat-people, through the Kantian dynamic, to


develop identities with vastly more diversity than dog-people. It is quite
logical though: random-sampling a broader universe of available
perceptions must inevitably lead to path-dependent divergence, while
imitative-selective-sampling of a subset of the universe must lead to some
convergence. By looking inward at species-level interpersonal differences,
dog-people become more alike. By caricaturing themselves and everybody
else to indistinguishable stick-figure levels, cats become more
individualized and unique. The more self-aware among the cats realize
that who I am and what I see are two aspects of the same reality: the sum
total of their experiences. Their identities are at once intrinsic and
universal.
That's why the title of the article is "Seeing Like a Cat." We see not what is, but who we are. Cats become unique by feeding on a unique
history of perceptions. And that makes their perspectives unique. To see
the world like a cat is to see it from a unique angle. Equally, it is the
inability to see it from the collective perspective of dogs.
If you are a lucky cat, your unique cat perspective has value in dog
society. That brings us to Darwin.
Dogs, Cats and Darwin
To intellectualize something as colloquial as the cat-dog discourse
might seem like a pointless exercise to some. And yes, as the hilariously
mischievous parody of solemn analysis, Why Cats Paint: A Theory of
Feline Aesthetics demonstrates, it is easy to get carried away.
Yet I think there is something here as fundamental as the
fox/hedgehog argument. As I said when I started, we have both cat-like
and dog-like tendencies within us, and the two are not compatible. Both
sorts of personalities are necessary for the world to function, but you can
really only be like one or the other, and the course is set in childhood, long
before we are mature enough to consciously choose.
Where does this dichotomy come from though? Darwin, I think.

When we think in Darwinist terms, we usually pay the most attention


to the "natural selection" and "survival of the fittest" bits, which dog-belief societies replicate as artificial selection and social competition. But there's the other side of Darwin: variation. It is variation and natural selection.
Variation is the effect of being influenced by randomness. Without it, there
is no selection.
Cats create the variation, and mostly die for their efforts. The
successful (mostly by accident) cats spawn dog societies. That's why, at the very top of the identity pyramids constructed by dog-beliefs, even above the prototypical Barbie/Ken abstractions, you will find cats. Cats who didn't climb the mountain, but under whom the mountain grew.
Those unsociable messed-up-perspective neurotics who are as puzzled by
their position as the dogs who actually want it.

How to Take a Walk


August 9, 2010
It was cool and mildly breezy around 8 PM today. So I went for a
walk, and I noticed something. Though I passed a couple of hundred
people, nobody else was taking a walk. There were people returning from
work, people going places with purpose-laden bags, people running,
people going to the store, people sipping slurpies. But nobody taking a
walk. Young women working their phones, but not taking a walk. People
walking their dogs, or pushing a stroller, with the virtuous air of one
performing a chore for the benefit of another, but not themselves taking a
walk. I was the only one taking a walk. The closest activity to taking a
walk that I encountered was two people walking together and forgetting,
for a moment, to talk to each other. The moment passed. One of them said
something and they slipped back into talking rather than taking a walk.
My observation surprised me, and I tried to think back to other walks.
I take a lot of walks, so there are a lot of memories to comb through. In
my 13 years of taking walks in the United States, I could remember only
ever seeing one native-born American taking a walk. All other examples I
could remember were clearly immigrants. Middle-aged eastern European
matrons strolling. Old Chinese men walking slowly with their hands
behind their backs. Even elderly Americans dont seem to take walks the
way elderly immigrants do. They walk slowly, but they look like they're
doing it for the exercise. They often look resentfully at young runners.
It is not hard to take a walk. The right shoes are the ones nearest the
door. The right clothes are the ones you happen to be wearing. You will
not sweat. You may need a jacket if it is cold, or an umbrella if it is
raining. If you pass anybody, you are not walking slowly enough for it to
be taking a walk. If you need to make up a nominal purpose like "get more bananas from the store," you are not taking a walk.
Taking walks is the entry drug into the quiet, solitary heaven of
idleness (the next level up is sitting on a bench without a view). For
modern Americans, idleness is a shameful, private indulgence. If they

attempt it in public, they are stricken by social anxiety. They seem to fear
that the slow, solitary, and obviously purposeless amble that marks taking
a walk signals social incompetence or a life unacceptably adrift. If a
shopping bag, gym bag, friend or dog cannot be manufactured, nominal
non-idleness must be signaled through an ostentatious "I have friends" phone call, or email-checking. If all else fails, hands must be placed defiantly in pockets, to signal a brazen challenge to anyone who dares look askance at you: "Yeah, I'm takin' a walk! You got a problem with that?"
In America, visible idleness is a luxury for the homeless, the
delinquent and immigrants. The defiantly tautological protest, "I have a life," is quintessentially American. The American life does not exist until it
is filled up.
Even a pause at a bench must be justified by a worthwhile view or a
chilled drink.
Worthwhile. Now, there's an American word. Worth-while. Worth-your-while. The time value of money. Someone recently remarked that the
iPad has lowered the cost of waiting. Americans everywhere heaved a sigh
of relief, as their collective social anxiety dipped slightly. The rest of the
world groaned just a little bit.
The one American I remember seeing taking a walk was Tom Hales,
then a professor at the University of Michigan. He was teaching the
differential geometry course I was auditing that semester. One dark,
solitary Friday, while the rest of America was desperately trying to
demonstrate to itself that it had a life, I was taking a walk in an empty,
desolate part of the campus. I saw Hales taking a walk on the other side of
the street. He did not look like he was pondering Deep Matters. He merely
looked like he was taking a walk.
That year he proved the Kepler conjecture, a famous unsolved
problem dating back to 1611. A beautifully pointless problem about how
to stack balls. I like to think that Kepler must have enjoyed taking walks
too.

The Blue Tunnel


February 21, 2008

How Do You Run Away from Home?


April 11, 2012
My Big History reading binge last year got me interested in the
history of individualism as an idea. I am not entirely sure why, but it
seems to me that the right question to ask is the apparently whimsical one: "How do you run away from home?"
I don't have good answers yet. So rather than waiting for answers to come to me in the shower, I decided to post my incomplete thoughts.
Let's start with the concept of individualism.
The standard account of the idea appears to be an ahistorical one; an "ism" that modifies other "isms" like libertarianism, existentialism and anarchism.
Fukuyama argues, fairly persuasively, that the individual as a
meaningful unit only emerged in the early second millennium AD in
Europe, as a consequence of the rise of the Church and the resultant
weakening of kinship-based social structures. This immediately suggests a
follow-on question: is the slow, 600-700-year rise of individualism an
expression of an innate drive, unleashed at some point in history, or is it an
unnatural consequence of forces that weaken collectivism and make it
increasingly difficult to sustain? Are we drifting apart or being torn apart?
Do we possess a fundamental "run away from home" drive, or are we torn away from home by larger, non-biological forces, despite a strong attachment drive?
Chronic Disease or Natural Drive?
If the former is true, individualism is a real personality trait that was
merely expensive to express before around 1300 AD. The human
condition prior to the rise of individualism could be viewed as a sort of

widespread diseased state. Only the rare prince or brave runaway could
experience an individualistic lifestyle.
If the latter is true, individualism is something like an occasional
solitude-seeking impulse that has been turned into a persistent chronic
condition by modern environments. That would make individualism the
psychological equivalent of chronic physiological stress.
According to Robert Sapolsky's excellent book Why Zebras Don't Get Ulcers, chronic stress is the diseased state that results when natural and healthy acute stress responses (the kind we use to run away from lions) get turned on and never turned off. This is more than an analogy.
If individualism is a disease, it probably works by increasing chronic
stress levels.
The interesting thing about this question is that the answer will seem
like a no-brainer to you depending on your personality. To someone like
me, there is no question at all that individualism is natural and healthy. To
someone capable of forming very strong attachments, it seems equally
obvious that individualism is a disease.
The data apparently supports the latter view, since happiness and
longevity are correlated with relationships, as is physical health. Radical
individualism is physically stressful and shortens lifespans. I bet if you
looked at the data, you'd find that individualists do get ulcers more
frequently than collectivists.
But to conclude from this data that individualism is a disease is to
reduce the essence of being human to a sort of mindlessly sociable
existence within a warm cocoon called home. If individualism is a disease,
then the exploratory and restless human brain that seeks to wander alone
for the hell of it is a sort of tumor.
Our brains, with their capacity for open-ended change, and restless
seeking of change and novelty (including specifically social change and
novelty), make the question non-trivial. We can potentially reprogram
ourselves in ways that muddy the distinctions between natural and
diseased behaviors.

The social perception of individualism through history has been decidedly mixed, and we have popular narratives around both possibilities thrown at us from early childhood (think of two classic children's books: The Runaway Bunny and Oh, the Places You'll Go!).
Around the world (and particularly in the West), individualism has
superficially positive connotations. Correlations to things like creativity
and originality are emphasized.
But the social-institutional structure of the world possesses a strong immune defense against individualism everywhere. We don't realize this because mature Western-style institutions allow for a greater variety of
scripts to choose from.
This variety represents a false synthesis of individualism and
collectivism. A domestication of individualist instincts. A better synthesis
is likely to be psychological rather than sociological, since we are talking
about intrinsic drives.
The Runaway Drive
The existence of an attachment drive is not a matter for debate. It
clearly exists, and is just as clearly healthy and natural. Nobody has
suggested (to my knowledge) that the ability to form attachments and
relationships is a disease. There do exist fundamentally unsociable species
(such as tigers and polar bears) for which adult sociability could be
considered a disease, but homo sapiens is not among them.
The attachment drive breaks down into two sub-drives, "getting ahead" and "getting along" (competition and cooperation), that both require being attached to the group.
The question is whether a third drive, "getting away," exists. This is not
the same as being an exile or outcast. Those are circumstantial and
contingent situations: self- or other-imposed punishments. I am also not
talking about running away from home as a response to toxic
communities or abusive families. That is merely a case of lower-level
survival drives in Maslow's pyramid overriding higher-level social drives.

The "getting away" drive is the drive to voluntarily leave a group because it is a natural thing to do. A drive that is powerful enough to permanently overpower "getting ahead" and "getting along" drives, resulting
in a persistent state of solitary nomadism and transient sociability in the
extreme case, like that of George Clooney in Up in the Air. In his case, it
turns out to be empty bravado, a pretense covering up a yearning for
home. But I believe real (and less angsty) versions exist.
If we do possess such a drive, it presumably shows up as a weaker or
stronger trait, with some individuals remaining strongly attached and
others itching to cut themselves loose. In my post, "Seeing Like a Cat," I
argued that:
I am a cat person, not in the sense of liking cats more
(though I do), but actually being more catlike than doglike.
Humans are more complex than either species; we are the
products of the tension between our dog-like and cat-like
instincts. We do both sociability and individualism in more
complicated ways than our two friends; call it hyper-dogginess plus hyper-cattiness. That is why reductively
mapping yourself exclusively to one or the other is such a
useful exercise.
To argue for a "getting away" drive is to argue for the presence of a cat-like element to our nature (specifically, tiger-like unsociability, not lion-like; in the latter, individualism is exile imposed on young males.
Domestic cats appear to be an in-between species).
Fukuyama does not get to the evolutionary psychology of
individualism, and appears to be agnostic towards the question. He merely
marshals evidence to show that the original human condition was a
strongly collectivist one, from which at some point a widespread pattern of
individualist behavior emerged. Since his focus is on the institutional
history of civilization, he limits his treatment to the necessary level of
institutional development and externalized trust required for individualism
to exist.

Graeber appears to believe that it is a disease. For him, identity is
social identity. The individual is defined in terms of a nexus of
relationships. To be torn away from this nexus is slavery and loss of
identity. While the theory is a workable one if you are talking about actual
slavery (he treats the history of the African slave trade at considerable
length), things get murky when you get to other situations.
The Three Rs of Rootedness
Homesickness provides a good lens through which to understand
attachment drives. Diasporas and expat communities provide a good
illustration of the dynamics of both wanderlust and homesickness.
For some, the expat condition is torture. They return to some place
that feels like home every chance they get. If they cannot, they recreate
home wherever they are, as a frozen museum of memories. Home in this
sense is doggie home. It is a social idea, not a physical idea. Physical
elements of home serve as triggers for memories of social belonging.
There is a third kind of response to the diaspora state, integration into the
new environment, that is also an expression of homesickness. It is merely
a more adaptable variety that is capable of building a new home in
unfamiliar surroundings (which can be either a new stationary geography
or a moving stream). This takes effort. Many of my Indian friends who
came to America at the same time as I did are now rabid football fans.
They used to be rabid cricket fans back in India. It's just a small part of
their careful (and ongoing) effort to construct a new sense of home.
All these responses are a reaction to the pain of homesickness: return,
recreation, rerooting. The three Rs of rootedness.
It is tempting to believe that some sense of home is necessary from a
pragmatic point of view. After all, life would be hell for practical purposes
if you were always in highly unfamiliar physical and social environments.
Perhaps you don't need the pain of homesickness in order to want a home.
Perhaps practical considerations are enough.
It's more complicated than that.

Utilitarian and Psychological Homes


Utilitarian familiarity in the environment to support a low-friction,
efficient life does not require a full-blown sense of home. Something
much simpler will suffice. Practical needs are much easier to satisfy than
existential ones.
For instance, Starbucks can supply a familiar work environment
anywhere in the world, but it hardly seems meaningful to call Starbucks a
part of a sense of home.
Highly developed civil societies can provide, with greater ubiquity,
much of the utilitarian support structure that home supplies in less
developed ones. Starbucks represents a mass-produced modular piece of
an abstract sense of home that can be manufactured from interchangeable
environmental pieces.
This is a useful thought, but you need to distinguish between
utilitarian homes (defined primarily by sufficiently familiar material
environments that don't require new learning) and psychological homes
(defined primarily by social environments and specific relationships) to
make the model hang together.
I actually resist the notion of "my Starbucks" wherever I am, and if
possible, I try to find multiple Starbucks locations that I then rotate
through. I seem to naturally resist the tendency of utilitarian homes to turn
into psychological homes. I like to keep my cafes interchangeable. I have
never personalized a cubicle or office. I am not quite as extreme as George
Clooney in Up in the Air though, who prefers hotel rooms to his own
apartment. I do personalize some parts of my home environment, but the
need has been diminishing.
There is some evidence that people are starting to manufacture
interchangeable ideas of psychological homes as well. For example, there
is the trope of fashionable urban women looking for a gay male friend
when they move to a new city. The role becomes defined in terms of the
interchangeable-parts individuals capable of filling it, and home is
anywhere your set of roles can be easily filled.
Ensemble television shows are full of references to this idea of
interchangeable people in roles. In South Park for instance, when Cartman
ends up in jail, the other kids look for a new fat kid. When Kenny is sent
to a foster home, Cartman looks for a new poorest kid in school. On
Seinfeld (I think I am allowed to make Seinfeld references till 2017),
Elaine at one point drops the other three characters and finds three new
friends who are very similar, but with a small change (they are nice and
positive instead of mean and negative, an example of a simple change in
Elaine's design pattern for home).
But it is not clear to me that interchangeable psychological homes are
possible beyond a point. Still, the social trends are suggestive.
But the result of these developments is that we are now living with a
strange successor to the idea of home.
Homes as Design Patterns
The utilitarian home is digital rather than physical in its dynamics.
Home becomes a design pattern in your head (RAM) that can be saved to
disk anywhere in the world where the substrate of civil society is
sufficiently evolved. This is not the same as living out of a suitcase. This is
not minimalism. This is virtualization. My design pattern, for example,
includes bike paths, a gym nearby, at least 2-3 coffee shops (preferably
Starbucks) within walking/biking distance, a Chinese restaurant, and an
Indian grocery store. I could probably write down the full specification in
a couple of pages. It is very easy to instantiate this pattern in any
American city above a certain size.
A slightly more complex metaphor is that home is now a program that
can be recompiled, with a few changes, in any new environment. The
physical pieces of the pattern are simply those that must be physical, and
are too expensive to rent or sell/rebuy as you move. You cart these
physical elements around in a U-Haul. Only a few pieces are in there due
to their emotional significance. Most could be virtualized if cost structures
changed.
Such environments are not new. Roman military camps were
expressly designed this way. What is new is the ubiquity and general
accessibility of such environments, and the rise in the number of people
who choose to live this way, with a digital sense of home.
Since individual ideas of home constitute such a large proportion of
what we call civilization, this has big consequences. The planet is turning
into a hardware platform for a fluid idea of civilization that exists as a
collection of design patterns for home.
It is less clear what the psychological idea of home has turned into.
For some people, psychological home has clearly moved online. I recall an
op-ed somewhere several years ago, comparing cellphones to pacifiers.
Appropriate, if they represent a connection to psychological home.
Putting your phone away is like suddenly being teleported away from
home to a strange new place.
For others, the three Rs still dominate the idea of home. Online life is
not satisfying for these people. I think this segment will shrink, just as the
number of people who are attached to paper books is shrinking.
For a speculative third category, we have the sitcom-ish idea of
interchangeable people in roles. I am not sure this category is real yet. I
see some evidence for it in my own life, but it is not compelling.
But for a fourth category of people, the need for a psychological home
itself is reduced. A utilitarian home is enough. The getting away drive has
irreversibly altered psychology.
Running Away from Home
I am afraid I am going to have to abandon you to your own devices
abruptly at this point. This is as far as I've gotten. Questions that I am still
thinking about include:

The relationship between individualism and introversion/extroversion
Developing the idea of utilitarian homes as design patterns that can be compiled anywhere
What does the Freudian idea of superego map to in this model?
A more satisfactory account of the evolution of psychological home.

The interesting thing about thinking about home in this digital sense
is that running away from home is no longer about physical movement
between unique social-physical environments (though that can play a
part). If your sense of home is a pattern that you can instantiate anywhere
the environment supports it, you cannot actually run away from it. But you
can throw it away and make up or borrow a new design pattern.
I'll write more about that at some point.
This post was partly inspired by discussions with reader MFH.

On Being an Illegible Person


July 31, 2011
I've been drifting slowly through California for the past three weeks
at about 100 miles/week, and several times I've been asked an apparently
simple question that has become nearly impossible for me to answer:
What are you here for?
Unlike regular travelers, I am not here for anything. I am just here,
like area residents. The only difference is that I'll drift on out of the Bay
Area in a week. The true answer is "I am nomadic for the time being. I
just move through places, the way you stay put in places. I am doing
things that constant movement enables, just like you do things that staying
put enables." That is of course too bizarre an answer to use in everyday
conversation.
My temporary nomadic state is just one aspect of a broader fog of
illegibility that is starting to descend on my social identity. And I am not
alone. I seem to run into more illegible people every year. And we are not
just illegible to the IRS and to regular people whose social identities can
be accurately summarized on business cards. We are also illegible to each
other. Unlike nomads from previous ages, who wandered in groups within
which individuals at least enjoyed mutual legibility, we seem to wander
through life as largely solitary creatures. Our scripts and situations are
mostly incomprehensible to others.
***
Since my particular variety of nomadism has me couchsurfing
through readers' homes, they sometimes have to explain my visit to others.
Most people are simply puzzled; I've had second-hand reports of
conversations that appear to have gone as follows: "Wait, what? You read
some blog and you've never met the blogger, but he is randomly coming
to live on our couch for a few days?" When readers introduce me to others,
they struggle. Some simply give up with, "I have no idea how to introduce
you." If I make up some ritual response to move on, such as "I am a
blogger, I write mostly about business," they protest, "wait, that's not
really it… your blog isn't really about business, and you do more than
blogging."
Curiously, while long-time readers at least subconsciously realize that
"blogger" doesn't quite cover it, people who nominally know me far
better, but don't read my blog (such as old high school friends) often don't
even get that there is something to get, since their substantial memories of
me from long ago distract them from the current reality that blogger (at
least at my level) is too insubstantial a label to account for an average
human life. It is a non-job, like the other non-job title I sometimes claim,
independent consultant. Both are usually taken as euphemisms for
unemployed. For the legible, the choice is between gainful employment
and lossy unemployment. For the illegible, the choice is between gainful
unemployment and lossy employment.
Nomadism is the sine qua non of this general phenomenon of
individual illegibility. The homeless, the destitute and seasonal migrant
workers bum around. Billionaires with yachts and private jets bum around
in a rather more luxurious way through each others mansions. Regular
middle-class people generally stay put; nomadism hasnt been an option
until recently. This little piggy stayed at home.
***
Nomad is a concept that rooted-living people think they understand
but don't. I know this because I myself thought I understood it, but
realized I didn't once I'd actually tried it for a few weeks.
I used to think of nomadism as a functional and pragmatically
necessary behavior, related to things like having to follow the migratory
paths of herd animals in the case of pastoral nomads. Or having to work at
client sites, in the case of road-warrior consultant types. Or even having to
travel the world in order to satisfy an eat-pray-love urge.
Now I've come to realize that's not really it. When voluntarily
chosen, nomadism is not a profession, lifestyle, or restless spiritual quest.
It is a stable and restful state of mind where constant movement is simply
a default chosen behavior that frames everything else. True nomads decide
they like stable movement better than rootedness, and then decide to fill
their lives with activities that go well with movement. How you are
moving matters a lot more than where you are, were, or will be. Why you
are moving is an ill-posed question.
This is not really as strangely backwards as it might seem. Rooted
people often decide to relocate somewhere based on a general sense of
opportunities and lifestyle possibilities, and then figure out how theyll
live their lives there. Smart rooted people usually target regions first, jobs,
activities and relationships second. Nomads pick a pattern of movement
first, and then figure out the possibilities of that pattern later. While I
haven't found a sustainable pattern yet, I've experienced several
unsustainable ones.
Moving in a slow and solitary way through cheap hotels helps me
write better and reflect more deeply.
Moving slightly faster through peoples couches slows down my
writing (as my recent posts show), but helps me experience relationships
in brief, poignant ways.
Moving through a corporate social geography (in the past week, I've
sampled three Bay Area company buffets) helps me understand the world
of work.
Shuttling around on a lot of long-distance flights helps me get through
piles of reading.
House-sitting helps me understand others' lives in a role-playing
sense.
So I've changed my perspective. I am not on the road to promote the
book. I am promoting the book because I am on the road. The activity fits
the pattern of movement. The pattern itself is too fertile to be merely a
means to a single end. Nomadism is not an instrumental behavior. It is a
foundational behavior like rootedness, the uncaused cause of other things.
Book promotion is simply one of the many activities that benefits from
constant movement, just like growing a garden is one that benefits from
staying in the same place.
***
All this is very complex to convey, so I don't use the nomad answer.
But on the other hand, I also don't like getting dragged into long-winded
explanations. So if people insist on a substantial answer, I just say "Well, I
am promoting my new book, meeting blog readers and consulting clients."
That instrumental description satisfies people. But it annoys me that I have
to basically mislead because the language of rootedness lacks the right
words to explain behaviors that arise from nomadism.
The follow-up question is also predictable: "where are you from?"
When I was a much more rooted person, this question was always a
politically correct way of asking about my ethnicity and nationality;
people wanting to plot me on the globe with as much accuracy as their
knowledge of world geography allows. But as a nomad, the question is
always about my current base of operations. Movement makes you
unplottable, which apparently provokes more social anxiety among the
rooted than unclear ethnicity or nationality. People want to tag you with
current, physical x, y coordinates before probing other dimensions of your
social identity. This conversation also tends to be bizarre:
"Where are you from?"
"Vegas."
"Vegas? (look of puzzlement) Why Vegas?"
"It's cheap."

Vegas confuses people. Most regions are understood in terms of their
attractions for rooted people. California is for techies and entertainment
types who like good weather. New York is for finance types who like
gritty, tough urban living. Chicago is for easygoing types who work in

areas like logistics and commodities. Vegas doesn't have a clear raison
d'être on the rooted-living map (except perhaps as a retirement location).
You travel there for a bit of hedonism; you don't live there. For nomads on
the other hand, Vegas does have a very clear raison d'être. It is a great city
to pass through (not so great to grow roots in).
I've taken to making a weak joke: "Vegas is like the miscellaneous
file; you meet a lot of random people there." I was initially having fun
watching them, but then I realized I am one of them.
At this point, if I am in the mood, I explain that we are subletting and
house-sitting my in-laws' house for cheap while they summer in
Michigan, and that our stuff is in storage. That we originally meant to
make Vegas a temporary, low-cost and geographically strategic base while
we figured out where to go next, but that my wife has now found a job
there, so we'll be staying on indefinitely after the summer. The variables
that made us pick Vegas are classic nomad variables: cost, seasonal
considerations, and strategic positioning for further movement.
I have been nomadic since May 1, almost three months now. I've
spent six of those weeks living out of a car, and another five living out of a
temporary, borrowed home out of a couple of suitcases and boxes (this has
been like playing house; my first experience living in a single family home
with all the accouterments of American suburban life).
***
In the past three months, my understanding of the nomadic state has
been slowly but radically altered. The best way I can explain what I've
learned is to offer this comparison: nomadism has almost nothing to do
with the rooted-living behavior it nominally resembles, travel.
The modern world is organized around rooted living, with travel as its
subservient companion concept. Travel is unstable movement away from
home with a purpose, even if the purpose is something ambiguous like
exploration or self-discovery. It is always a loop from home to home, or a
move from old home to new home. For the rooted living person, travel is
a story. A disturbed equilibrium that requires explanation and eventual
correction, resulting in a return to equilibrium. A small handful of stories
explain most cases: business trip, visiting family, tourism, backpacking,
finding myself. Even hippie-drifting, karma-trekking, eat-pray-loving and
backpacking are purposeful patterns of movement in a world that is a
landscape shaped by rootedness.
For the rooted person, previous and next equilibrium points, with
associated departure and arrival dates, and a focal climactic point in the
journey between, suffice to model any movement. Once upon a time, a
couple lived in New York. They traveled to California to go camping in
Yosemite. Then they returned to New York. The End. A dead giveaway that
you are looking at travel rather than nomadism is that the paths to and
from the focal points are generally very efficient; often shortest/cheapest
paths. A good story doesn't dawdle.
For the nomad, that data is annoying to supply because even when it
is nominally available, it conveys almost no information. Better questions
concern the quality and shape of the movement itself:
How quickly are you moving? (a 100-mile/week drift rate is a very
different stable movement rate than a 1000-mile/week drift rate).
What route are you taking? (to the nomad, whether you take a
northerly or southerly route between DC and Vegas isn't a minor detail, it
is the main question; it is the end points that are minor details).
Are you living out of hotels, camping or couchsurfing?
Are you living out of a car or a backpack?
Where are you headed for Fall?
Are you moving up the coast or through the forests?
How do you pack? Do you prefer a sleeping bag or a Hennessy
hammock?
For the nomad, the question of why you are temporarily somewhere is
simply ill-posed. It's like asking a settled person, "why aren't you
moving?" For the nomad, a period of rootedness is unstable, like travel for
the rooted. It is a disturbed equilibrium that requires explanation. An
explanation of non-movement, and eventual resumption of movement, are
required. The associated stories can range from a car breakdown, to
insufficient funds to fuel the next phase of movement, to unexpected
weather conditions. Once upon a time, a guy who lived out of a car was
heading south for the winter. His car broke down in Kansas City, and he
was stuck there for a week. Fortunately he was able to find a place to
couchsurf, get it repaired and move on.
***
In a way, nomadism is a more basic instinct for humans. Rootedness
is natural for trees. Legs demand movement. The movement is the cause,
not the effect. Just as the mantra for rootedness is location, location,
location, it is movement, movement, movement for nomadism. When
humans grow roots, strange new adaptations appear to accommodate
restless brains.
If I have romanticized nomadism it is because nomadism is a
fundamentally romantic state of being. If you can sustain it, it is somehow
fulfilling without any further need for achievement or accomplishment.
The pursuit of success is, for the rooted, the price they must pay for
immobilizing themselves geographically. The reward is something
equivalent to the state of stable movement that is, for the nomad, a natural
state of affairs.
Success itself in a way is very much a notion for the rooted; it is the
establishment of some sort of stable self-propelled movement pattern
through some sort of achievement space: up a career ladder; down a rabbit
hole of skilled specialization; sideways through a series of stimulating
project experiences. When there is no true north, no physical landmarks
growing smaller behind you, and no fresh sights constantly appearing over
the horizon, you need abstract markers of movement: degrees, money, a
sequence of more expensive cars, a series of increasingly successful
books, a growing readership for a blog, increasingly prestigious speaking
gigs.

When you bind naturally restless feet, the minds that have evolved to
animate them seek movement elsewhere.
I misunderstood the psychology of travel badly when I was younger.
About 12 years ago, when I was 24, I went backpacking for three weeks in
Europe. After that, somehow I lost my wanderlust. I explained my
reluctance to travel to myself, and to others, with the lofty line, "I've kinda
tired myself of exploring the geographic dimensions of experience; I am
now exploring more conceptual directions."
Bullshit. Geography is just too fundamental to our psychology. If we
aren't moving, it is because there is too much friction and cost. Wanderlust
never goes away. It merely becomes too costly to sustain as you age.
Recently, when I traded my Indian passport for an American one (which
allows me to travel far more freely, without the annoyances of the Great
Wall of Visas that is designed to keep the developing world from getting
too footloose), the old itch to travel instantly reappeared. So much for my
pretentious other dimensions of experience. It was mere paperwork
friction that was holding me back. But sadly, while one source of friction
has disappeared, others have grown. In my late 30s now, the fact of my
wife's non-portable job and the complexities of moving our two cats
across national borders are what keep us from simply embarking on some
extended nomadism around the world. But at least we don't have a
mortgage and school-going kids.
***
Scott's notion of illegibility was originally inspired by the nomadic
state and its incomprehensibility to the governance apparatus of settled
cultures. To the stationary eye of the state, a moving person is a blur rather
than a sharply-defined identity; it is harder to tax, conscript, charge with
crimes or even reward nomads. To the stationary eye of the corporation,
the nomad appears harder to hire, manage or pay.
The blurriness extends to other aspects of rooted life. Ownership and
community life change from being stock concepts (defined by things you
accumulate) to flow concepts (defined by things you pass through and that
pass through you). Identity starts to anchor to what you are doing rather
than who you are. Social life acquires, due to its permanently transient
nature, a certain poignancy that it lacks in rooted contexts. Even routine
errands like grocery shopping and doing the laundry become minor
adventures that require your full attention and engagement.
Everyday rituals acquire a monastic depth. The difference between
nomadism and travel even shows up in how you pack. Packing a suitcase
for extended travel is very different from packing for a period of
nomadism. In the first case, you pack for compactness and unpack at your
destination. It is an exercise in efficiency. In the second case, you pack for
daily in-out access in a changing context. You have to think harder about
what you are doing. You need constant mindful repacking, rather than
efficient one-time packing.
Even the most basic, unexamined rituals change. For instance, I stay
so often with people who don't drink coffee that I've taken to carrying
a small bottle of instant coffee with me. But it's a different kitchen every
few days.
Nomadism is, in a way, the most accessible pattern of mindful living.
***
The romanticism aside, true permanent nomadism is not really an
option today. This particular romantic episode will end around October,
and I will be rooted once more. All the neuroses of the rooted will come
flooding back. I will once more start to worry about my next book and my
next hit blog post.
The direct costs of living aren't actually very different for nomads and
settled people. It is the indirect costs that kill you. If it weren't for the
burden of an address-and-nationality anchored paperwork identity and the
tyranny of 12-month leases and 30-year mortgages, nomadic living would
be no more difficult than static living at the same income level. Newton's
law applies approximately: a human in a state of rest or steady motion
continues in that state unless an external force acts to change it. A nomad
is a human in a state of steady motion. Not in a Newtonian sense, but in a
cognitive sense. Once you've settled into a particular pattern of living out
of a car, you are in a steady state that has inertia.

Movement is not expensive if the environment is set up to support it. I
am not an extremist or minimalist. I don't want to be living off a few
packs on a bicycle for the rest of my life. I like warm beds, hot showers
and large, well-equipped kitchens as much as anybody else. I like having
access to lots of useful things like washing machines and gyms. It is not
inconceivable that the world could be arranged to provide all these in a
way that supports both rootedness and nomadism. Thanks to online
friendships, and emerging infrastructure around couchsurfing and
companies like Airbnb, it is becoming easier every year. I'd like to see
trains getting cheaper, tent-living becoming available for the non-destitute
classes, health insurance becoming more portable, public toilets acquiring
shower stalls, and government identity documents becoming anchored to
something other than physical addresses. I'd like to see the time-share
concept expand beyond vacations to regular living. I'd like to see
executive suites and coworking spaces sprout up all over, and acquire
cheap bedrooms that you can live out of. I'd like to be able to rent nap-pods at Starbucks. I'd rather own or rent a twelfth of a home in twelve
cities than one home in one city.
There is no necessary either-or between nomadism and rooted living.
Technology has evolved to the point where the apparatus of the state
should be able to accommodate illegible people without pinning them
down.

The Outlaw Sea by William Langewiesche


August 27, 2009
To most of us, the oceans are about romance, not shipping logistics.
Violent thirty-foot waves and gripping piracy tales are conspicuously
missing from The Box, the first shipping-themed book I reviewed. While
that story (see my post the epic story of container shipping) had all the
passion and high drama of a business thriller, it was essentially a human
and technology story. The Outlaw Sea: A World of Freedom, Chaos, and
Crime tells a parallel tale, one focusing on the realities of the oceans
themselves. There are plenty of waves and pirates here, and this is easily
the most absorbing maritime-themed book I've read since Treasure Island,
which is saying a lot, since it is non-fiction.

The "Alondra Rainbow", pirated and renamed "Mega Rama"


(picture from Indian Coast Guard site)
Old Mankind and the Sea
Unlike deep space, the oceans seem just within reach of the grasping,
civilizing instincts of humanity. On land, especially as consumers who can
safely ignore the question of how their stuff gets to them from China, it
can seem as if the oceans have been tamed by the Amazonian one-click.
After all, we get our iPods and Wii consoles delivered pretty reliably,
don't we?

The good news for us romantic landlubbers is that despite steel hulls,
GPS and diesel engines, the oceans remain untamed. The bad news is that
despite steel hulls, GPS and diesel engines, the oceans remain untamed. As
Katrina reminded us, the oceans can still take a casually violent swipe at
us and wreak havoc. The reliability of modern shipping does not imply
that we have domesticated the oceans. The big and believable suggestion
in the book is that we never will.
Langewiesche's is a near-flawless modern, global voice. I bought the
book because I was enthralled by an extract in The Atlantic a few years
ago. The book tells the stories of a bewildering cast of characters: Eastern
European captains, Pakistani crews, Malaysian pirates, Indian
shipbreaking yards, bleeding-heart European Greenpeace activists, and
Alaskan oil-spill investigators. In less competent hands, this could have
ended up as a sea-cowboy story for overgrown boys (think Deadliest
Catch), a self-absorbed tale of human-scale tragedies (think Perfect
Storm), an overwrought tale of environmentalism (think Whale Wars) or a
random leftist screed about the exploitation of third world humans by
Western mega-corporations.
Fortunately Langewiesche avoids all those temptations. With precise
strokes, he first humanizes, and then dehumanizes, both first and third
world nations and peoples, gently getting you to focus on the grandeur of
the oceans themselves. Whether he is forcing you to vicariously
experience the chilling horror of being in a sinking ferry (the Estonia) in a
violent Baltic storm, or presenting the farcical aftermath of the tragedy
within the byzantine world of European maritime politics, he brings a sort
of ironic compassion to every story.
The raw material is almost too rich for a single book. There are oil
spills and shipwrecks, the chaos of international flags of convenience
and tales of tradeoffs between avoiding expensive delays and foolhardy
storm-defying navigation. There are pirates haunting the Straits of
Malacca, terrorists and dirty bombs hiding in containers, and desperate
navies and coast-guards trying hopelessly to catch them all. Above it all
looms a single theme: the cluelessness of us landlubbers about the
medieval anarchy that your Chinese-made iPod navigates, in the process
of getting to you somewhere else on the planet. The people dealing with
the oceans come across as the last true frontier folk, the last adults
protecting the rest of us children from a universe that is far wilder than we
think.
Though it is about modern shipping, the whole book has a timeless
quality to it. You could be reading The Odyssey, the tales of Sinbad the
Sailor or Treasure Island. A particularly eerie bit of timelessness is in the
briefly-sketched story of the trial and execution, in China, of the pirates
who hijacked the Cheung Son and murdered its crew in 1998:
On the way to the execution ground, a group of them,
who were drunk on rice wine, defiantly sang, "Go, go, go!
Ale, ale, ale!", the chorus from a pop song called "Cup of
Life."
No wonder Eric Cartman went off to Somalia to become a modern-day pirate. My own fascination with the sea began when my dad
introduced me to Treasure Island. Yo ho ho and a bottle of rum. Stevenson
wrote that book in 1883. It wasn't until after I turned thirty though, that I
managed to experience the ocean first-hand, on a cruise to the Caribbean.
It did not disappoint; the oceans lived up to all my romantic expectations,
and even the crassness of cruise-ship buffets could not ruin it for me.
There is nothing quite like being on the deck of a ship in the open ocean,
out of sight of land.
Blue Planet
A series of stories of tragedies at sea forms the backbone narrative.
The book opens with the story of a rusty tanker, on its last legs, the
Kristal, making its way from India to Europe with a load of molasses, with
a Ukrainian captain and a Spanish-Pakistani crew. The Kristal broke in half
in stormy seas and killed most of its crew, and this opening anecdote
serves to shatter your notions of the ocean as a benign place. The book
then moves on to the Exxon Valdez and other tales of oil spills, and finally
to a detailed telling of the story of the sinking of the passenger ferry,
Estonia. There are other vignettes scattered throughout.

This is more than a collection of exciting tales of the sea. A bigger
picture emerges through the stories. We learn that there are really no
governing authorities at sea, besides a near-toothless IMO working
through obscure trans-national certification companies. We learn that ships
mature and gradually get downcycled, as they age and rust. In the story of
the Kristal, we are informed, as an aside, that molasses tends to be the sort
of cargo carried by tankers on their last legs, since spills don't cause much
damage. In the oil spills section, we learn that European nations selfishly
maneuver to direct oil spills to their neighbors' shores.
The point of this wreck-to-wreck tale is not to focus on how unsafe
ships can be. It is to highlight the fact that human technology is much
flimsier than we think, when faced with an environment that routinely
unleashes earthquake-level forces. The Titanic is not a tale of isolated
hubris: the ocean still retains the capacity to destroy our best efforts at
ship-building if we are not properly respectful. I found myself wondering:
what would land technology be like if trucks and cars had to deal with
roads that routinely bucked and swayed like demonically-possessed
mountains?
This core narrative is about our fundamental limitations when it
comes to dealing with the oceans. The logic of the other narratives flows
from this basic one.
One key supporting story illustrates the failure of the human political
imagination to really comprehend the oceans. This is the story of how we
ended up with todays bizarre state, which can contain a ship built in Japan
flying the flag of Malta, owned by a holding company in Italy, but really
owned by somebody else altogether, certified seaworthy by a French
company, being captained by a Ukrainian and crewed by Pakistanis. Far
from being a situation of heartwarming international cooperation, it is a
dangerous, nearly ungovernable, stateless mess. The most we've been able
to extend the logic of land-bound nation-states is twelve nautical miles, the
extent of territorial waters, which are still too much for most navies and
coastguards to deal with.

There are plenty of other themes, but I'll highlight just two more,
piracy and shipbreaking, since they highlight the limits of the idea of the
nation state, and provide an unusual perspective on globalization.
Nation and Ocean
The piracy and ship-breaking stories in the book both involve India,
which was particularly illuminating for me, since I have never thought
about my identity as an Indian citizen being derived from my more basic
identity as a land-based primate. Barring the doings of one 11th century
emperor, India itself has very little of note in its maritime history,
compared to say, the European nations or Japan. Despite its 7000 km
coastline, India's national self-perception is primarily a land-based and
isolationist one. So the view from the oceans, which connect the world
physically, is rather unsettling.
Like the legal business of shipping, the structure of modern piracy too
is the outcome of the confused stateless anarchy of the seas (unlike the
older epoch of Caribbean piracy, much of which was state-sponsored).
The Straits of Malacca are where much of the action takes place (not
Somalia, as most Americans imagine). What makes piracy in this region so
surprising is that it is a very narrow, massively busy seaway that would
seem like the most civilized part of the oceans. Over 50,000 vessels pass
through every year, through the 2.8 kilometer wide chokepoint near
Singapore. All around are the industrialized and heavily populated
shipping-dependent countries of South East Asia. This is as close as you
can get to oceanic bumper-to-bumper highway traffic. Yet, pirates
routinely vanish with entire ships, with millions of dollars worth of cargo.
The big piracy story in the book involves the Alondra Rainbow (the
picture at the top of this article), which was hijacked in a carefully planned
and coordinated attack by a group of Malaysian and Indonesian pirates in
1999, while carrying a cargo of aluminum ingots worth around $10
million. The ship vanished and the Filipino crew, along with their Japanese
captain, were cast adrift in the Indian ocean (they were rescued). The ship
managed to transfer half of its booty to another ship, and then apparently
got rechristened the Global Venture before fleeing across the Indian ocean,
eluding searchers. Most such stories apparently end there, with a vanished
ghost ship, but in this case the story had a non-ghostly ending. It was
spotted, sailing under the name Mega Rama, by the captain of a Kuwaiti
freighter, the al-Shuhadaa, who alerted the nearest country, which
happened to be India. The Indian coast guard patrol boat Tarabai
responded and chased the ship down, and with the help of a Navy missile
corvette, the Prahar, finally managed to arrest it as it was attempting to
flee into Pakistani waters.
The Indian Navy and coast guard apparently had a good deal of fun
with the exercise, and were rather proud of having actually caught a
pirated vessel for once, and enjoyed quite a bit of media attention as they
shepherded the stolen ship into Mumbai harbor. The Mumbai courts and
police, however, were decidedly less happy about having a high-profile
international piracy case being dropped into their already overburdened
laps.
What followed was a piece of international silliness, as a country with
no stake in the ship, crew, pirates or victims, ended up having to use
taxpayer money to prosecute a complex precedent-setting piracy case. The
case worked its way slowly through the Indian courts as the world figured
out how to apply nation-state level laws to a crime that obviously
transcended the very concept of a nation. Langewiesche reports a
particularly revealing conversation with a Mumbai police officer, about
why they were reluctant to accept the captured ship:
"What would happen," he asked, "if India convicted and
imprisoned them, but after their release Indonesia refused
to accept them?" "What did you conclude?" I asked.
"That they would become stateless people." Then the
problem for India, he said, would be where to send them. I
suggested that they could be repatriated to their natural
environment at sea. He smiled wanly.
The leading maritime attorney in India prosecuted the case pro bono,
and easily outmaneuvered the poor public defender assigned to the pirates
by the court. The pirates were found guilty, and imprisoned. They were
mostly the underlings, not the kingpins, and some seemed to have no idea
they'd been recruited into a piracy plot by a manning agent. The real
culprits remained mysterious citizens of the oceans.
If this story puts the nations involved in the background and the ocean
itself into the foreground, the next story, also involving India, is even
weirder, and involves all the oceans of the world.
The center of the action here is Alang, the coastal city in Gujarat
which is home to nearly half the shipbreaking trade in the world. India,
Pakistan and Bangladesh among them handle nearly the entire
international trade of scrapping old ships for steel, a dangerous business
involving explosions, toxic chemicals and awful conditions. The trade
ended up in the region over the course of half a century, as both the labor
costs and safety issues made it politically impossible to conduct in other
parts of the world.

View of Alang (from globalsecurity.org). Google Images turns up many more fascinating pictures.
Through the late nineties and early 2000s, controversy erupted,
spearheaded by Greenpeace, over the idea that rich shipping companies
were exploiting the developing world and not paying the true lifecycle
costs of disposing of their floating, toxic deathtraps safely. As you might
expect, the workers in the industry (escaping worse relative poverty) were
entirely hostile to European do-gooders acting on their behalf, arguing
with grim pragmatism that death from toxic chemicals was rather better
than death by starvation. Langewiesche tells the various versions of this
story with an unsparing eye, but the tale of this activism, framed by ideas

of nationhood, ends on a surreal note, which underlines the
meaninglessness of ideas like "Western" and "Developing" land-worlds
where the oceans are concerned.
But others in the business told me that the more likely
effect of such reforms would simply be a new and less
direct route to Asia: ships would pass through more hands,
would maybe live longer plying faraway waters under new
names and flags, and would still end up dying on some
filthy beach. Already, there was evidence that European
shippers had begun to find new foreign buyers for vessels
that they would normally have sold directly to scrappers.
In fact the whole story is surreal. Lyrical descriptions of the careful
orchestration of the ship-breaking process (which made me itch to visit
Alang) are interspersed with unsentimental indictments of all parties.
Included is a drive-by shooting at people like me, alongside a spirited
defense of the shipbreaking merchants:
They were direct men, who walked willingly among
the laborers; and though they had grown wealthy on the
backs of the poor, they maintained a connection to them
nonetheless. The alternative seemed to be the
disengagement I had witnessed in New Delhi and Mumbai,
where the upper levels of society were floating free of the
ground, aided by the airlines and the Internet, as if the
poverty of India were a geographic inconvenience. [His]
own daughter had graduated from the University of
Chicago with a degree in computer science but standing
beside him on the beach, in the midst of his piles of scrap, I
suspected he knew that shipbreakers were unfashionable
among the Indian elites… Alang was becoming an
embarrassment.
Guilty as charged, though I think the charge applies to the entire
global elite, studiously ignoring the problem of disposing of the biggest
physical artifacts humans build. My first thought was that the Internet is to
free floating people like me what the oceans are to the impoverished
thousands living off it: a stateless anarchy (we are not yet at the stage
where anyone can claim to be a global citizen, a phrase I detest for its
vacuousness). My next thought was that this is a self-serving view. The
Internet is nothing like the oceans.
Between the Nation-State and the Globe
As it happens, some of the other reading I am doing right now deals
with rarefied subjects, far removed from messy things like ship-breaking,
like the rise of global financial integration through bond markets, the
history of the first true multi-national corporation, the British East India
company and yes, undersea Internet cables. Within all these tales,
spanning several centuries, there is a constant subtext of assumptions
about the oceans.
The Outlaw Sea precisely nails the big point about oceans: they are
the physical manifestation of the stuff between the global system of
nation-states and the abstraction of the globalized world, which really
only exists on the Internet today. But we forget that the transnational
anarchy that is the Internet could be rapidly and comprehensively
fragmented and shoehorned into nation-state boundaries by the flipping of
a few key router switches, and the reconfiguring of a handful of satellites.
The ocean though is not, never has been, and (it seems) never can be
subsumed within the nation-state system. It will always form a gray zone
of anarchy sandwiched between global and national contexts. Despite its
grim implications, in an odd way it is an uplifting thought that the oceans
will never be within our control. Looking back, I think I realized this
point, and grew fascinated by it, very early. I have always been fascinated
by maps, but as a schoolkid, one set of maps in particular fascinated me.
This was a series of maps included with special issues of the National
Geographic, that presented the world with the oceans in the foreground.
There were maps for each of the major oceans, with finely detailed
depictions of mid-ocean ridges, mountain ranges, volcanoes and currents.
The oceanic areas of the maps were a riot of blues. Landmasses on those
maps were shown in background-white, with barely any annotation. This,
I thought, is a better way of looking at Planet Earth.

My friendly librarian allowed me to steal the maps from the library's
copies of the National Geographic. For a while I had a couple tacked to
my bedroom walls, but for many years, I just had them folded away. I
would frequently take them out to look at; meditate upon.
I think what fascinated me back then was the same thing that
fascinates me today: the incredible richness and complexity hidden behind
a simple statistic: our world's surface is 70% water. Land is a sideshow.

The Stream Map of the World


October 4, 2011
For most of the last decade, Israeli soldiers have been making the
transition back to civilian life after their compulsory military service by
going on a drug-dazed recovery trip to India, where an invisible stream of
modern global culture runs from the beaches of Goa to the mountains of
Himachal Pradesh in the north. While most of the Israelis eventually
return home after a year or so, many have stayed as permanent expat
stewards of the stream. The Israeli military stream is changing course
these days, and starting to flow through Thailand, where the same pattern
of drug-use and conflict with the locals is being repeated.
This pattern of movement among young Israelis is an example of
what I've started calling a stream. A stream is not a migration pattern,
travel in the usual sense, or a consequence of specific kinds of work that
require travel (such as seafaring or diplomacy). It is a sort of slow, lifelong communal nomadism, enabled by globalization and a sense of shared
transnational social identity within a small population.
I've been getting increasingly curious about such streams. I have
come to believe that though small in terms of absolute numbers (my
estimate is between 20-25 million worldwide), the stream citizenry of the
world shapes the course of globalization. In fact, it would not be
unreasonable to say that streams provide the indirect staffing for the
processes of modern technology-driven globalization. They are therefore a
distinctly modern phenomenon, not to be confused with earlier mobile
populations they may partly resemble.
Stream Citizenship
Stream citizens are not global citizens (a vacuous high-modernist
concept that is as culturally anemic as the UN). Their social identities are
far narrower and richer. They are (undeclared) stream citizens, whose
identities derive from their slow journey across the world.

But the individualist, existential notion of nomadism that I wrote
about in On Being an Illegible Person does not apply. In particular, stream
citizens are not necessarily nomadic in literal ways (such as living out of
cars, boats or mobile homes). They may buy or rent property, accumulate
material possessions, and so forth.
Streams are highly sociable collectives, not individuals. The stream
itself may be illegible on a map of nation-states, but individuals within it
are fairly legible at least to fellow citizens within the same stream. In this
sense, streams are like David Hackett Fischer's folkways. Unlike
folkways, streams use geographic movement to structure themselves
internally. You could also apply the John Hagel model in The Power of
Pull and think of traditional folkways as stock folkways and streams as
flow folkways. The running example in the book (global surfer culture)
is not quite a stream, however.
The argument for a distinct new construct, the stream, is not based on
a single clear criterion that separates it from other kinds of population
movements. Instead, we have a distinctive pattern of deviations from other
kinds of population movements.
I have a few examples in mind (such as the Israeli one), but to avoid
the dangers of over-fitting, I'll characterize the idea of the stream via a
dozen abstract features, and follow it up with a very primitive and sketchy
world stream map, without trying to describe specific streams in these
abstract terms.
1. Distinct social identity: Streams possess a unique and distinct social
identity, unlike more inchoate movements that may share some of
the features of streams. Unlike rite-of-passage travel patterns
though (such as karma-trekkers), they tend not to have named,
brand-like identities. Instead, they have unmistakeable, but implicit
identities.
2. Partial subsumption: Streams subsume the lives of their citizens
more strongly than more diffuse population movements, but less
strongly than focused intentional communities like the global
surfing community. There is a great deal more variety and
individual variation. In particular, there is no solidarity around
grand ideologies in the sense of Benedict Anderson's Imagined
Communities. In this, streams differ from nation-states, even
though they provide something of an alternative organizational
scheme. Not only is the subsumption at about a middling level at
any given point in time, it varies in intensity throughout life, being
particularly weak early and late in life.
3. Voluntary slowness: a stream is a pattern of movement where
individual movements take place over years or decades, spanning
entire development life stages. Unlike a decade-long limbo state
imposed by (say) waiting for an American green card, which has
individuals impatient to get the process over with and settle down
in either a new home, or return to an old one, stream citizens don't
experience their state as a limbo state. They are always home.
Being a relatively new phenomenon, there are no streams that are
life-encompassing as yet. But I believe those will emerge:
distinctive cradle-to-grave geographic journeys.
4. Exclusionary communality: streams provide a great deal of social
support to those who are eligible to join and choose to do so, but
are highly exclusionary with respect to very traditional variables
like race, ethnicity and gender. The exclusionary nature of streams
is not self-adopted, but a consequence of the fact that streams pass
through multiple host cultures. A shared social identity in one host
culture may splinter in another, while distinct ones may be
conflated in unwanted ways.
So only relatively tightly circumscribed social identities can survive these forces intact. I am
really tempted to illustrate this particular point with examples, but
I'll leave it as an abstraction.
5. Distinct economic identity: unlike commercial travel that is part of
broader economic activity (such as sea-faring), or non-commercial
travel (such as tourism), streams tend to be at least partially self-sustaining within every host culture that they pass through. This
partial self-sustainability often involves patterns of global
commercial activity that lends money a different meaning within
the stream. So even though streams don't issue currencies, and
merely borrow the economic apparatus of their host cultures, the
money behaves in very different ways while it is circulating within
the stream.
6. Non-tribal: Streams are not completely self-sufficient though, in
the sense of segmentary tribes. This is a crucial distinction from
nomads or barbarians in the classical sense. They do not seek to
form bonds of mechanical solidarity with other streams. Instead
they seek to form fairly strong bonds of organic solidarity (mutual
interdependence) with host cultures.
7. Vorticity: Streams contain higher-tempo patterns of travel among
the waypoints, especially to old home bases, due to obligations
and attachments inherited from pre-stream home cultures.
8. Partial self-absorption: stream citizens are not very interested in
the host cultures they pass through except to the extent of
maintaining economic and practical relationships. There is no sense
of being on the periphery, looking on with longing at the action at
the center. There is no oppressive sense of being trapped in a
diaspora-ghetto.
9. Relative poverty: unlike the global jet-setting (think Davos) elite,
streams are generally impoverished. In fact a great deal of the
motivation for living in a stream is to leverage limited means. But
this does not mean we are only talking about lifestyle-designing
Internet marketers in Bali. We are also talking about migrant labor
from Asia to the Middle East that starts with a "let me save money
working in construction in Dubai for a few years" motivation, but
ends up extending to a whole lifetime.
10. High adaptability: Unlike nomads who carry their lives around
with them, creating tiny shells of reassuring familiarity around
themselves, stream citizens behave more like hermit crabs. They
cobble together the necessities of life (shelter, income, patterns of
diet and exercise) from whatever is around them. Stream citizens
eat Chinese food in China and Thai food in Thailand, not because
they are particularly curious about local cuisines, but because the
sustainability of the stream lifestyle is based in part on such
adaptation. Nostalgia is weak for stream citizens, as is the faraway-home/near-exotic sense of alienation from surroundings. Stream
citizens are both home and abroad at the same time.
11. Direct connection to globalization: In a sense, the notion of
stream I am trying to construct is a generalization of the Internet-enabled lifestyle designer, which I think is much too narrow. But
streams are definitely a modern phenomenon, and owe their
capacity for stable existence to some connection with the
infrastructure of globalization. The Internet is the major one for the

creative class, but anything from container shipping to the
Chimerica manufacturing trade to the globalized high-rise
construction industry qualifies.
12. Lack of an arrival dynamic: this is perhaps the most important
feature. There is no sense of anticipation of an arrival event such
as getting an American green card, after which "real life can
begin." There is a "wherever you go, there you are" indifference to
rootedness. This psychological shift is the central individual act. By
abandoning arrival-based frames, stream citizens free themselves
from yearning for geographically rooted forms of social identity.
The Scale and Impact of Streams
In terms of sheer numbers, global migration does not seem to be a
very powerful force. In World 3.0, Pankaj Ghemawat notes that only about
3% of the world's population comprises first-generation immigrants. Over
90% of the world's population will never leave their home country.
As a small subset of global migration and travel, the total population
of stream citizenry is unlikely to exceed about 0.3% of the world
population by my estimate (about 20-25 million perhaps). In terms of
populations of individual streams, given the level of cultural complexity I
am talking about, you would need between about ten thousand to a million
people to create a stream.
This suggests that there are fewer than a few hundred streams, with
perhaps a few lower levels of differentiation into sub-streams and sub-sub-streams. This means a project to catalog and map the streams of the world
should not be too hard.
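As a rough sanity check on those numbers, here is a minimal back-of-envelope sketch in Python (the world population figure and the notion of a "typical" stream size are my own illustrative assumptions, not part of the original estimate):

# Rough bounds on the number of streams, using illustrative figures only
world_population = 7_000_000_000              # assumed roughly 7 billion, circa 2012
stream_citizens = 0.003 * world_population    # 0.3% of the world: about 21 million people

for typical_size in (10_000, 100_000, 1_000_000):
    n_streams = stream_citizens / typical_size
    print(f"typical stream of {typical_size:>9,} people -> about {n_streams:,.0f} streams")

A count of a few hundred streams corresponds to a typical stream of around a hundred thousand people, which sits comfortably inside the ten-thousand-to-a-million range above.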
In terms of impact, however, I suspect streams are hugely important.
Viewed as a process of increasing global integration on multiple fronts
(commodities, money, products, services and people), globalization is
developing painfully slowly on most of those fronts. Measured with an
appropriate set of metrics, according to Ghemawat, globalization is
generally somewhere between 10% and 30% of its theoretical potential for
maximal integration along most fronts.

Human movement is actually one of the least-developed fronts.
However, since moving humans is the most efficient way to move ideas,
and since ideas are very high-leverage things to move across borders, this
slow front is also the highest-impact front. Two African students returning
to Eritrea infected with the Y Combinator virus can do more than several
container loads of iPads.
Another way to think about the increasing impact of streams is to
compare them to their ancestors. Consider the populations that staffed the
diffusion of previous waves of technology-driven globalization, such as
sailing ships (which created, among other archeo-streams, a population of
lascars who formed a stream stretching from South Asia to the Caribbean,
for several centuries).
Compared to such populations, the modern stream citizenry of the
world is much larger. Perhaps an order of magnitude larger. Thanks to the
more mature and stable substrate (container shipping is not going away
anytime soon for instance), the cultures that take root along patterns of
movement are much more robust and fully-formed.
They may lack the romantic transience of older archeo-streams (such
as a putative Silk Road culture, which may or may not have ever had a
distinct identity), but they are a lot more substantial internally.
Stream Mapping
I didn't try to illustrate the idea of a stream with reference to specific
examples because they interact among themselves and with host cultures
in such complicated ways. The only meaningful way to understand
streams is to start with a more global situation awareness of a sort of
stream map of the world.
I have no idea how to make one (other than to follow the contours of
globalization), so I'll illustrate the geography to the extent that I've
traversed it.
The Israeli stream, in its path across India, collides with the Tibetan
exile community in Dharamshala, itself a lake created by an older stream
of migration that flowed for a few brief years during the 1950s, when the
Dalai Lama fled Tibet and landed in India.
Along this route, the Israelis get into fights with the locals, run an
underground drug culture and in general recover from their PTSD in the
messy ways you might expect. The modern Israeli stream runs along
roughly the same course that, decades ago, played host to the hippies on
journeys of self-discovery from Goa to Kathmandu. Ecstasy has replaced
LSD, and the culture is a darker, cyberpunk echo of the naive spirituality
that marked the questing of the swami-seeking hippies.
Today, the stream is shifting course towards Thailand, as I noted
earlier. The Indian branch may dry up, or slow to a trickle. I suspect a
branch of the stream continues, post an Israel-return, to America, via
high-tech startups founded by friends who perhaps were blooded in combat
together, or met in India or Thailand.
Curiously, even though the Israeli stream runs right through Bombay,
where I lived for years, I had no idea it existed while I was there.
I learned the story partly from an Israeli anthropologist (from whom I
borrowed the term "liminal passage," which I used in Tempo) and partly
from a Romanian-born Australian, herself an expat in Bali, married to a
Dutch expat (Indonesia was once a Dutch colony). The two of them run
canoeing tours on Lake Batur for tourists. We'd gotten started on the
subject of nomadic expat cultures after I'd asked, rather innocently, if the
success of Eat, Pray, Love had had an impact on Bali tourism. "Oh My
God!" my guide exploded, "All these annoying American women in their
30s landing here and expecting to find their Argentinian Man!"
Eat, Pray, Love might well be the motif of a new emerging stream,
involving older single Western women. It is probably a gyre rather than a
one-way stream, originating in, and returning to, an American home base.
I personally am a product of a one-way migration pattern that matured
into a full-blown stream-and-gyre just around the time I joined it. Post
9/11 and Y2K, as the US economy began slowing down, and the Indian
economy began to heat up, increasing numbers of Indians began choosing
to inhabit a vague loop between the two countries instead of settling down
in one, trying to have their cake and eat it too: the economic
opportunities of India and the lifestyle of the US. The first observers of
this loop tended to classify them as "global citizens" but I find the term to
be pretty non-descriptive of what is actually happening.
The Tibetan community and the India-US stream-gyre are well-known. The Israeli PTSD Stream is less well-known. The Eat-Pray-Love
gyre is just starting to mature.
Around the globe, streams slosh about, run into each other, branch,
loop, and in general carve out new cultural landscapes within a
hydrologically active layer that exists above earlier landscapes.
This is a complicated view of cultural geography. But I bet it could be
properly represented on a map. As I said, the number of important streams
cannot be more than a few hundred, about comparable to the number of
nation states or significant multinational corporations.
Globalization as Liquefaction
This post is really about my dissatisfaction with the static units of
analysis for globalization. We are reluctant to embrace more fluid units
like streams because they seem so small in terms of population sizes. It
seems wrong to basically ignore the 90% of the world who are never
going to venture beyond the borders they were born within.
Yet, I find that it is far easier to understand globalization as a system
of such human flows, than it is to understand it in terms of nations, states
and multi-national corporations. It is the actions of the 0.3% that will
ultimately drive the fates of the 90%. The cultures that play host to
streams are starting to see their evolution being driven by the very act of
hosting streams. There are entire regions in the Indian state of Kerala for
instance, whose culture can only be explained with reference to the gyre
that transports Keralites back and forth from the Middle East.
The word "globalization" itself is a clue.
Globalization signifies an incomplete process, not a state. For a long
time I was convinced that there was a bit of semantic confusion
somewhere. Why is there a becoming without discernible being states
before and after? The reason is that the word globalization works like the
word liquefaction. Liquids aren't a transition from one solid state to
another. They are a transition from a fundamentally static state to a
fundamentally dynamic one.
The world is not getting flatter, rounder or spikier. It is liquefying.
There you go, Thomas Friedman, that's my modest little challenge to your
metaphor.
More seriously, I'd like to get started building a stream map of the
world. If you have candidate streams to propose, or some cartographic
insights to offer, please do so in the comments.
So far my list includes:
1. The Israeli stream
2. The Indo-US technology stream
3. Eat-Pray-Love
4. Tibetan expats
5. Americans camping out in Eastern Europe for several years
6. Mainland Americans moving to Hawaii to set up what appears
to be an economy based entirely on yoga studios
7. Lifestyle designers converging on Thailand and Bali

Part 4:
The Mysteries of Money

The Mysteries of Money


June 20, 2012
There was a brief period early in the life of ribbonfarm when I
thought the blog was about business. But I was never quite comfortable
with that idea, though I do write a lot about business matters.
I finally realized where I was going wrong: businesses, markets,
products, even society, culture and civilization itself are all clumsy
constructs that revolve around money. Money is the most basic stuff in this
universe of consensual fictions that we call civilized life.
I am terrible at making money, but I have never understood people
who don't take money seriously, and have even managed to develop a
disdain for it. I suspect it is sour grapes, pure and simple. Which is a pity,
since money is absolutely fascinating stuff even if you don't have enough
of it to appreciate close-up or swim around in, like Scrooge McDuck. It is
the fabric of social reality (stuff that is real because we collectively
believe in it) the way space-time is the fabric of physical reality.
So with that bit of purple prose, I give you: the fourth and last
sequence through the ribbonfarm archives, 2007-2012.
Money is more fascinating than the products that earn it, the violence
it causes inside and outside our heads, the things it buys, and yes, the
relationships it makes and breaks. Not because it is great to have it
(though it certainly is), but because it reveals so much about everything it
touches, while itself remaining ineffable. More ineffable than even its
closest cousins, like information and risk. You can get to roughly
equivalent results in thinking about social realities by following the
principles "follow the money," "follow the information" and "follow the risk."
But "follow the money" tends to be the most tractable heuristic.
This realization led to one of my personal all-time favorite posts, and
the first one in this sequence, Ancient Rivers of Money. I think I
understood something about money for the first time in my life with this
post, in 2010, at age 36. It ceased to be a completely impenetrable mystery
to me. It is now merely 99.999% impenetrable. I am still terrible at making
money, but I am starting to slowly appreciate it. Maybe the making will
follow.
I now see money as the implicit organizing concept for all of my
writing about social reality. Organizing along those lines, I have broken
down this sequence into posts about money itself, posts about
organizations (understood as things that move money around), posts about
markets (understood as fields of money) and finally, civilization itself
(understood as the space where money matters). Barbarian or exiled states
of being, and possible post-civilizational futures, are best understood as
the negative space of social reality. Their common salient feature is a
vastly attenuated role for money, broadly understood. These states never
quite rise above shared, communal, interpersonal realities to shared,
impersonal, social realities.
Money
1. Ancient Rivers of Money
2. Fools and their Money Metaphors
3. Time and Money: Separated at Birth?
Moving Money
1. The Eight Metaphors of Organization
2. The Lords of Strategy by Walter Kiechel
3. A Brief History of the Corporation: 1600 to 2100
Fields of Money
1. Marketing, Innovation and the Creation of Customers
2. The Milo Criterion
3. Ubiquity Illusions and the Chicken-Egg Problem
4. The Seven Dimensions of Positioning
5. Coloring the Whole Egg: Fixing Integrated Marketing
6. How to Draw and Judge Quadrant Diagrams
7. The Gollum Effect
8. Peak Attention and the Colonization of Subcultures

Life Outside Money


1. Acting Dead, Trading Up and Leaving the Middle Class
2. Can Hydras Eat Unknown-Unknowns for Lunch?
3. The Return of the Barbarian
Next week, I'll do a wrap-up of the wrap-ups and attempt to construct
a big-picture view of what this blog is ultimately about, and situate the two
crucial keystone pieces required for making sense of ribbonfarm. I expect
a few of you can guess what those two pieces are. They haven't been
included in any of these sequences.

Ancient Rivers of Money


November 5, 2010
Sometimes a single phrase will pop into my head and illuminate a
murky idea for me. This happened a few days ago. The phrase was
"ancient rivers of money" and suddenly it helped me understand the idea
of inertia as it applies to business in a deeper way. Inertia in business
comes from predictable cash flows. That's not a particularly original
thought, but you get to new insights once you start thinking about the age
of a cash flow.
We think of cash-flow as a very present-moment kind of idea. It is
money going in and out right now. But actually, major cash flow patterns
are the oldest part of any business. It is the very stability of the cash flow
that allows a business to form around it. In fact, most cash flows are older
than the businesses that grow around them. They emerge from older cash
flows. When you buy a sandwich at Subway, the few dollars that change
hands are part of a very ancient river of money indeed. Through countless
small and large course changes, the same river of money that once allowed
some ancient Egyptian to buy some bread from his neighbor now allows
you to buy a sandwich.
Buyers and sellers alike see markets as an illegible and turbulent
churn of transaction opportunities. But really, they are landscapes carved
out by great, ancient rivers of money and their tributaries. These rivers
change course rarely. Cash flows are also among the most basic financial
ideas. Only businesses make profits, but governments and non-profits
form around cash flows too.
These ancient rivers carve out both a spatial and temporal landscape.
Spatially, the flow metaphor suggests old, dried-up river beds, gorges and
ravines, flood plains, ox-bow lakes, watersheds, and of course, the rivers
themselves. This plays well with the idea of segment.
But markets also have a temporal dimension, based on which river of
money you are talking about, and how long ago it last changed course.

If you think of markets that way, things look very different. Some
rivers of money are very old and very stable. You can at most fight to
displace others from prime positions along the banks. Others are new and
unstable and may change course frequently, creating and destroying
fortunes through their vagaries. Others may be maturing, with dams being
built to stabilize them. People have always bought food and clothes. They
are only now beginning to buy iPads. They are starting to not buy CDs.
Generalizing, you can even think of an average age of the market as
a whole. An interesting question to ask is whether early adopters as a
group should be considered as living in a future market, or whether the
mainstream should be thought of as living in the past. I prefer the latter
model.
Organizations are like riverbank communities. They are as old as the
last significant course change or waterfront battle. The stability of the
river, not the attitudes of people, is what makes old organizations seem set
in their ways. Perhaps people resist new ideas not because they have
specific personalities, but because they have settled on the banks of a river
of money of a certain age. Or perhaps there is self-selection. Possibly the
hidebound kinds go settle on the banks of the most ancient rivers. Tax
rivers are among the oldest and most stable rivers of money (and the only
ones protected by the threat of legitimate force), and people attracted to
government work aren't exactly known for being passionate champions of
creative destruction.
Some startups are about finding and colonizing the banks of minor
unknown tributaries of old rivers. Others are about creating new rivers.
Still others are about building canals between vigorous new rivers and
somnolent old ones. And of course, there are those that are about
displacing incumbents from prime waterfront locations.
The nice thing about thinking this way is that the market is now a
system of cash flows that exists independently of the specific set of
businesses serving it in a given era. You can map the system and look for
an unoccupied waterfront spot.

I would like to create a visualization of the oldest and most stable
rivers of money, around things like food, clothing, taxes and shelter. I
don't know how to do that yet.
I first mentioned the metaphor of money as a system of flows (with
things like glaciers mapping to "frozen assets") in my old post, Fools and
their Money Metaphors, and this particular one stuck in my head. Then in
a comment to my Eight Metaphors of Organization post, a reader used the
phrase "high inertia cash flow."
When I first read that comment, an image popped into my head
unbidden: a dark subterranean cavern with a river flowing through, with
goblin-like creatures swarming around it, holding torches. Like Gringotts
bank in the Harry Potter movies. "Ancient" is how I would describe the
feel of that image.
I'd like a t-shirt with a skull-and-crossbones graphic and the
line "Don't touch my cash flows!" below it. The attitude pretty much
defines anybody who is effective in the world of business. When you meet
a tough, no-bullshit businessperson, no matter what function they come
from, chances are, they see their job as protecting a cash flow.

Fools and their Money Metaphors


March 2, 2009
This has always puzzled me: why do people with similar backgrounds
and intellects vary so widely in their effectiveness in dealing with money?
One guy goes to work straight out of college, saves strategically, quits and
starts his own SAP consultancy in 5 years, and is worth a few million by
age 30. Another gets an MBA, gets sucked into a high-class lifestyle of
expensive suits and dinners, and ends up with a BMW and barely $50,000
saved by age 30. And yet another, for reasons obscure even to himself
(ahem!) goes off into a PhD program, and emerges, blinking at the harsh
sunlight, at age 30, with exactly $0. Last weekend, I finally began to
understand. Here is the secret: depending on your direct experience of the
money you manage, you think about it with different metaphors. Your
metaphors, not your financial or mathematical acumen, determine the
outcome of your dealings with money.
Money Mindsets
Let's get two misconceptions out of the way. Money as a conceptual
or theoretical construct (academic debates about fiat vs. gold-backed) or
as a technical definition (the M0, M1 stuff) is mostly irrelevant to
managing money at any level from big bailouts to a kid with a weekly
allowance. So is advanced mathematics (in fact an interest in mathematics
makes it harder to be interested in money, because money-math is among
the dullest kind). You don't need more than basic arithmetic and some
trivial algebra to get money mathematically. Even all the statistics and
optimization doesn't get you to truly interesting math.
What trips us up is money metaphors. I began to realize this when I
noticed that I was relating very differently to the completely trivial
amounts of money this blog makes, compared to my paycheck. The first
clue that put me on the track of this idea was this bit from The
Organization Man (I haven't yet blogged about this part in my ongoing
series about the book):

[Organization men] have little sense of capital. The
benevolent economy has insulated them from having to
manage large personal sums…
Whyte's point was that those of us who have gotten our finances onto
the amortization autopilot thanks to paycheck deductions, even for big
sums like annual income taxes and house purchases, have simply never
learned how to think about large quantities of money at a personal level.
Thanks to the Nanny State and the Nanny Corporation, our financial
horizon is the month, and through autopilot math, 80% of the cash flow
that we are aware of in our lives passes by like clockwork: we watch it,
but don't actively manage it. The remaining 20%, we do manage
consciously. This gives us the basic financial calibration point:

High water mark: the largest amount of personal money you've ever dealt with

Middle class people in the US have a high-water mark of around
$5000, whatever their actual income levels. This leads to two metaphors
we are comfortable with: money as a clock (the limits between which we
watch our bank accounts rise and fall predictably, like a pendulum) and
money as renewable energy (like a cellphone battery). Here is the mental
model of money this leads to, for an average middle class person in the
United States (applicable, mutatis mutandis to the rest of the world):

Note that everything we commonly associate with different amounts
of money falls into the "renewable fuel" or "clockwork" bucket. Even if
we are nominally dealing with much larger sums (say $350,000 as the
value of a house), the high water mark of $5000 or so limits our
imagination. At this level, for instance, we don't take interest seriously
(8% of $5000 is just $400 a year) since it maps to an apparently trivial sort of expense (a
couple of nights at a nice hotel).
But if you take somebody operating with an entrepreneurial mindset
in the same rough range, that person actively thinks about and works with
money very differently. He or she thinks on a scale at which the clockwork
and fuel metaphors break down, and other metaphors work better. The
paycheck person above might be a $100K-a-year employee. For this
person, $1 million is not an obscene amount. If he had the true capitalist
mindset, and lived with Protestant Ethic frugality on just $40,000 a year,
and invested the rest at 8%, then $1 million is roughly what he'd have built up as
start-up capital to strike out on his own in 10 years, at age 32 (yeah, yeah, I
know, nobody is talking 8% returns at the moment). So why is this path so
rare? I've met many people with the right level of frugality (mostly
immigrants), but they are still stuck in clock/battery metaphors.
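That back-of-envelope compounding is easy to check. Here is a minimal sketch in Python, under the assumptions stated above (a $60,000 annual surplus, an 8% return, contributions at the end of each year, taxes ignored):

# Future value of saving a fixed annual surplus at a fixed return (illustrative)
annual_income = 100_000
annual_spend = 40_000
annual_savings = annual_income - annual_spend    # $60,000 saved per year
rate = 0.08                                      # assumed 8% annual return
years = 10

balance = 0.0
for _ in range(years):
    balance = balance * (1 + rate) + annual_savings   # contribution added at year end

print(f"Capital after {years} years: ${balance:,.0f}")  # roughly $870,000

That lands in the right ballpark of the $1 million figure; contributing at the start of each year instead, or saving a little more, closes the gap.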
For the entrepreneurial mindset, the same money is viewed with
metaphors of "building material" and "time to deadline." Thinking of
money as time to a deadline, or non-renewable fuel (for example, time to
build up a certain capital position, or time to burn it down at a particular
"burn rate") or as building material (this is what it would take to buy a
McDonald's franchise), leads to a very different view of the same levels
of money:

Since I worked at a startup as the first employee for a year, I've had a
ring-side seat to this mindset. But even that doesn't get to the visceral
reality of living this metaphor by managing money with this mindset. But
curiously, even something as simple as a blog can put your mind in this
gear. I feel a child-like sense of emotion and excitement when somebody
uses the "buy me a cappuccino" link on posts to send me $3.00, yet I feel
no excitement actually buying my daily coffee at Starbucks. The
difference comes from earning as a capitalist, but spending as a paycheck-guy.
These are just two different money mindsets based on two different
sets of metaphors. So what are the others out there, and what happens
when you use them in the wrong contexts?
Thirteen Money Metaphors and their Uses and Misuses
1. Money as a clock: the predictable paycheck-in, auto-payments-out
oscillator is a good idea only for recurring necessary payments.
Any money dynamics that don't need to be on an auto-pilot should
be taken off and managed actively.
2. Money as renewable fuel/rechargeable battery: this is only
good for living expenses up to a middle class level. A misuse is to
divide the national debt by the population to get a per capita debt.
This may give the man on the street the illusion of
comprehensibility, but trillions of dollars simply behave differently
than thousands. At the trillions level, money is NOT renewable
fuel, and it is dumb to let policy be informed by this metaphor.
3. Money as time-to-deadline/non-renewable fuel: good for small-time entrepreneurs, but really bad for countries. Applying startup
burn-rate thinking to the cost of the war in Iraq is probably a
terrible idea.
4. Money as building/growth material: this is great for young
businesses, but inefficient for older businesses. Kids consume
calories and grow taller. Adults consume calories and grow fatter.
5. Money as freedom: beyond about $1 million, money represents
freedom, since you could live very well off the interest alone if
you were frugal. Good for lazy trust-fund kids and endowments
that fund otherwise un-fundable causes. Bad for nearly everybody
else.
6. Money as an organic creature: what if you get beyond the
endowment mindset of money as something to live off, interest-wise? At this point, your surplus is big enough that depreciation of
cash assets is noticeable even to millionaires. This is money as
rotting vegetables. If you don't change your mindset and plant it in
the open economy so it grows, you'll mismanage money. But I am
beginning to believe this is a TERRIBLE mindset for the middle
class, and what keeps us trapped there. Mutual funds, like monthly
payroll deductions, have made us much stupider financially, and
lazy about thinking about money as capital/building material for
specific projects.
7. Money as commodity: at a certain level, money becomes just
another commodity like iron rods or size #9 bolts. It flows through
your systems like a river, in large quantities, can be kept as
inventory, depreciates, clogs up supply chains, and depending on
the vagaries of the markets and interest rates, it may make sense to
hold more or less of it.
8. Money as a lever: I can't even begin to imagine the level of
wealth where you get used to thinking of your money as a way to
move more money. At the level we are familiar with (home equity)
it tends to be a dicey business. But at a sufficiently high level, it is
dumb NOT to think of it in terms of leverage. I suspect a lot of
people use leverage mindsets when they play musical chairs with
credit cards. Leverage is a bad metaphor to apply if you are using
it to consume beyond your paycheck means, but a good one if you
want to influence what money plants get fertilized.
9. Money as water: though this is about flows, sources and sinks, it
is subtly different from the commodity/supply chain metaphor,
since it has natural origins. Think in terms of dams, rainwater,
artesian wells, money frozen up in old families as glaciers/polar
ice caps, and so on. This is probably the best way to think about
money culturally and socially. Paris Hilton lives on a glacier.
10. Money as power: I suspect that there is a point of wealth at which
entrepreneurs who got started thinking about money as building
material realize that they've now made enough money themselves
that they should think of themselves as investors helping others
worry about the building material metaphor. I strongly suspect,
though, that this is probably a bad metaphor for governance. This
is why I think Thomas Friedman is being dumb in advising Obama
to "invest like a venture capitalist."
11. Money as a work of fiction: once you get to the level of
government, with control over the printing presses, where the only
checks and balances are the hopes and fears of the Chinese
government, you MUST think of money as cultural fiction. It is
what you want it to be. There is enough obscurity in the
international web of debts, bonds, export-import controls and
currency trading controls that the only meaningful way to engage
money is that of the high-integrity, truth-seeking artist.
12. Money as blood: this is where all those haemorrhage and
tourniquet metaphors come from. Money as circulating stuff that,
if pumping pressure gets low enough, causes structural collapses
and death. Though most of us resent the idea of the government
bailing out fat cats, pure Darwinist "let 'em pay the costs"
opinions miss the fact that the world is not only like an ecosystem
of many organisms, but also like a single connected living thing.
You can cause a lot of pain or kill the whole thing if you are not
careful. This is also the sort of metaphor switch that made Bill
Gates switch from wealth creation to AIDS fighting.
13. Money as nonsense: we've all heard of those wartime economies
with inflation at ridiculous levels like 1000%. This metaphor is
related to the fiction metaphor: if you do the fiction part badly,
you get nonsense. And then everybody has to scramble to buy gold
or guns.
There are probably many other metaphors. I haven't even looked at
the entire leisure side of metaphors: the mindsets behind yachts, Louis
Vuitton purses and the emerald jewelry worn by Angelina Jolie at the
Oscars (reportedly worth millions). Many people with excellent metaphors
on the management side have awful ones on the leisure side. That's how
you get the tasteless McMansions, crude new-wealth social behaviors and
so on.
The metaphors you use determine your money personality, and how
much you will be able to do with it. To get to the next level of money, you
probably need to think with the metaphors appropriate to that level. Think
too far above your league, and you'll be reduced to daydreaming. Stick to
your own level of metaphors, and you'll never move anywhere. Change
your leisure metaphor without changing your management metaphor, and
you are in for frustration.
So much for the armchair lecture. When it comes to practicing what I
preach, I admit I still haven't got my mind out of the paycheck level of
metaphors.

Time and Money: Separated at Birth?


September 9, 2009
An intriguing theme keeps popping up in finance discussions: the
relationship between time and money. The best-known line of thinking is
the one that Ben Franklin popularized, that "time is money." This is the
Protestant ethic in three words. Then there is the transactional view that
says that time can be traded for money. Let's call it the Catholic ethic.
There is a third view, which I'll call the Zen ethic. The first two lead to
misery. The third, I speculate, does not.

The first two views differ in how they treat leisure. Ben Franklin was
an opportunity-cost focused buzz-kill. Ben's ghost seems to admonish
you: yes, you are having fun, but remember, you COULD be earning $10
an hour cranking widgets. So you'd better be improving your mental
health enough that your earnings increase by at least $10 in the future.
Adding modern math does not change things much. The cost-of-leisure
equation just acquires the trappings of net-present-value analysis. You
want to sleep eight hours today? Make sure the marginal discounted
future cash flow due to increased productivity is greater than $80 (eight hours at $10 an hour).
The Catholic ethic (or what William Whyte called the "social ethic")
naturally leads to viewing leisure as time-profit rather than money-cost.
No-strings-attached discretionary time. You trade as little of your time as
you can to meet your basic needs, and the rest is surplus. If you want more
stuff to enhance your leisure, a pool toy for your swimming pool, say, you have
to trade more time. This has the effect of creating a firewall between two
preference economies. On the supply side, you prefer the work that offers
the biggest cash returns per minute. On the demand side, you end up
deciding whether, for instance, an hour splashing in the pool without a
pool toy is better than a half-hour in the pool with one, and whether either
is better than an hour watching TV. You could be running at a loss. If your
job requires more caloric output than you are able to replace with food you
can afford with your earnings, you will slowly starve to death. Many of
the world's poorest people are forced into this loss-making economic
equation. And of course, to finish up the logic, you can buy your time
(leisure) with money (debt). That, of course, is the moral of the ant and
grasshopper fable: spend leisure you haven't earned, and fake remorse and
hope the ants bail you out. The analogy to priestly absolution for sins at
confession is nearly exact. Of course, there is the gray area of cash-profitable "my work is my hobby" time which you can double-book in
both ledgers, but that does not conceptually add anything to the
philosophy. If you have a lot of that going on, good for you.
The Catholic ethic does not oppressively mess with your experience
of leisure the way the Protestant ethic does. The agenda in Protestant-ethic
time management is to maximize lifetime wealth accumulation (few
modern Protestant-ethic-ers actually get or operate by the underlying
theology of predestination). The agenda in Catholic-ethic money
management is to maximize immediate time profit. Capitalists operate by
the former and end up time-poor/cash-rich. Worker bees operate by the
latter and end up cash-poor/time-rich. Both get into debt: capitalists for
leverage, worker-bees to front-load leisure in youth. Both ultimately lead
to misery.
Here is the third angle that I think is interesting, and has the potential
to combine the wealth-creating tendencies of the Protestant ethic and the
hedonistic pleasures of the Catholic ethic, without leading to misery. The
third view says that time and money are near-perfect Yin-Yang opposites.
Hence the name Zen ethic. The underlying thing is not
either/or/neither/both. It is one of those paradox thingies. Some evidence:
Money is the most liquid thing imaginable, more liquid than water
even. Time is the most illiquid thing imaginable. You cannot save it, move
it, transfer it or trade it for anything else (you can sell the output of your
time, not your experience of it). About the only thing you can do is
modify your psychological experience of it: drugs and adrenaline can
make time pass more slowly, age and long memories can make it pass
faster. In certain cultures, you can sort of pool it and experience it in a
collective way, but still, it is illiquid. No matter how fast, slow or
collective you make the experience, you still cannot experience one time
instead of another or something other than time in place of time. Yet,
somehow, time can dance with money.
Time is the most deeply foundational thing imaginable. Even if you
are blind and deaf, and suspended in a sensory-deprivation chamber so
your sense of space and proprioception is messed up, you will still
experience time. I think. Money, by contrast, is the most completely
artificial thing ever invented. It is arbitrariness manifest, and it will
become instantly meaningless if you are put on a desert island. Yet,
somehow, time can dance with money.
I have no idea what to do with these thoughts. I didn't say I had
answers, just an interesting third angle. Maybe a theory of work can be
built on top of it.

The Eight Metaphors of Organization


July 13, 2010
Gareth Morgan's Images of Organization is a must-read for those who
want to develop a deeper understanding of a lot of the stuff I talk about
here. Though I've cited the book lots of times, it is one of those dense,
complex books that I am never going to attempt to review or summarize.
You'll just have to read it. But I figured since I refer to it so much, I need
at least a simple anchor post about it. So I thought I'd summarize the main
idea with a picture, and point out some quick connections to things I have
written/plan to write.

Morgan's book is based on the premise that almost all our thinking
about organizations is based on one or more of eight basic metaphors. The
main reason this book is hugely valuable is that 99% of organizational
conversations stay exclusively within one metaphor. Worse, most people
are permanently stuck in their favorite metaphor and simply cannot
understand things said within other metaphors. So these are not really 8
perspectives, but 8 languages. Speaking 8 languages is a lot harder than
learning to appreciate 8 perspectives. I consider myself a bit of an
organizational linguist: I speak languages 2, 5, 6 and 7 fluently, 1 and 3
passably well (enough to get by), and 8 poorly.
1. Organization as Machine: This is the most simplistic metaphor,
and is the foundation of Taylorism. Any geometrically structuralist
approach also falls into this category, which is why I have little
patience for people who use words/phrases like top-down, bottom-up, centralized, decentralized and so forth, without realizing how
narrow their view of organizations is. The entire mainstream
Michael Porter view of business is within this metaphor.
2. Organization as Organism: This is a slightly richer metaphor and
suggests such ideas as organizational DNA, birth, maturity and
death, and so forth. I really like this one a LOT, and have so much
to say about it that I havent said anything yet. I even bought a
domain name (electricleviathan.com) to develop my ideas on this
topic separately. Maybe one day Ill do at least a summary here.
3. Organization as Brain: This may sound like a subset of the
Organism metaphor (and there is some overlap), but there is a
subtle and important shift in emphasis from life processes to
learning. Organization as brain is the source of informationtheoretic ways of understanding collectives (who knows what,
how information spreads and informs systems and processes). The
System Dynamics people like this a lot, especially Peter Senge
(The Fifth Discipline). I cannot recommend the SysDyn approach
though; I think it is fundamentally flawed. But the learning view
itself is very valuable.
4. Organization as Culture: I've written about this stuff before
(There is No Such Thing as Culture Change on the E2.0 blog), and
plan to do so soon, when I review Tony Hsieh's Delivering
Happiness and in the next Gervais Principle post. I honestly dislike
this metaphor, but can understand its appeal objectively. More so
than others, culturalists tend to be extremists; they think the culture
metaphor is the most important one, and this rigidity traps them in
peculiar ways.
5. Organization as Political System: Most of the Gervais Principle
series falls within the boundaries of this metaphor, though I
sometimes step out to the Psychic Prison metaphor.
6. Organization as Psychic Prison: I chose to represent this as a guy
in a prison, since that is immediately obvious to everybody, but the
right symbol (and the one Morgan uses) is the Plato's cave symbol,
which would be obscure to most people even if I could sketch it in
a recognizable form. We've talked about this on the edges of the
Gervais Principle series, through our discussions of exile/exodus,
and also extensively in my old Cloudworker series.
7. Organization as System of Change and Flux: Think of a
dynamically stable whirlpool or eddy in a flowing stream, and you
get this one. It highlights some of the same aspects of organizations
as the Organism metaphor, but in different ways. For example,
notions of stability, dissipation, entropy, and other physics ideas are
used. This is where things like GTD, lean startups and agile
programming fit. The idea of creative destruction also fits in here.
If the Machine metaphor is the dominant one, this one is the
market-leading alternative metaphor.
8. Organization as Instrument of Domination: This is NOT the
same as the political metaphor, since it involves naked aggression
in some form. This is where you get themes of oppression, sweatshops, social costs (such as the BP oil spill), the military-industrial
complex and so forth. This used to be a lot more important than it
is now, because humans are selfish creatures. So long as the
subjects of oppression were human laborers, this was the leading
metaphor. The moment that variety of oppression began to wane,
and corporations shifted their oppressive gaze to animals, via
factory farming, and the environment, via wanton damage out of
public view, we stopped caring as much. Fortunately, that is
starting to change, because "out of public view" is an increasingly
difficult state to maintain. Cases in point: Iran, Burma and BP.

There is a lot to be said about each metaphor. Morgans book is not


particularly original in its analysis, but it is magisterial in its scope,

coverage and organization. It surveys and contextualizes a lot of work by


others in organizational theory. Bits of it can be tedious and too
cautious/conservative, but overall, this is one of those get your
foundational education books that you truly must read. I dont want to
tempt you into an illusion of understanding with this post, but just give
you a taste of what is in store for you, if you choose to read the book.
I plan to do a series of such quick-tastes of books that I consider very
important, but dont plan to review/summarize.

The Lords of Strategy by Walter Kiechel


May 4, 2010
It takes some guts to subtitle a business book "The Secret Intellectual
History of the New Corporate World." Even for a genre whose grand
overstatements are only rivaled by the diet-books aisle, that is an
ambitious tagline. The Lords of Strategy lives up to that subtitle and then
some. It is a grand, sweeping saga that tells the story of how the ill-defined function known as corporate strategy emerged in the '60s,
systematically took over boardrooms and MBA classrooms, and altered
the business landscape forever. Even though we are only 4 months into
2010, it is pretty likely this is going to be the best business book of the
year for me. If you are considering, currently in, or recently graduated
from, an MBA program, you really must read this book. If this book had
been written 10 years ago, it would have saved me a good deal of trouble
making my own career decisions.
Does Strategy Matter?
The book is a dense, but deftly told story of how, starting in the '60s,
and armed with little more than 2x2 matrices (you may enjoy my post on
these) and spreadsheets, a new breed of strategy professionals completely
reshaped the business landscape.
Refreshingly, the book starts out by tackling the elephant in the room
immediately. Could an industry devoted to manufacturing intangibles
really have had such a huge impact? Especially an industry whose
products are considered by skeptics to be a series of vacuous flavors of the
month? Whose members are viewed as mercenaries brought in, under the
cover of new ideas, primarily to make layoffs politically feasible? Can
you actually take seriously a profession that prides itself on staying away from
execution, at its own estimation?
Even otherwise charitable business people are inclined to view
strategy as a function that at best has no real impact, and at worst,
legitimizes wanton acts of corporate destruction by creating paper trails of
justification for fait accompli decisions. In this view, the entire output of
the strategy profession is a nonsensical smokescreen obscuring more
fundamental machinations.
These are serious charges. Any book that attempts to spin a positive
story around strategy starts out with its hero in the dock, presumed
guilty. Kiechel succeeds in his main objective: acquitting the profession of
the charges against it, and demonstrating the true impact of the literary-industrial complex that is strategy.
As an idea-peddler myself, I am obviously playing devil's advocate
here. I personally have no doubt that strategy does matter. That skeptics
who itch to dive in and do "real work" eventually pay a high price for their
skepticism. In the long run, it is the deliberate types, who take strategy
seriously, who prevail. In a way, the mark of the true strategy type is the
ability to use that very disdain and skepticism as cover for getting the
right things done.
The book pointedly avoids offering a definition of strategy (though it
cites several), so that Drucker phrase is probably a good operating
definition to start with: strategy is about getting the right things done. The
problem of defining strategy is surprisingly hard, but let's look at the
major themes of the book before considering why.
The Historical Development of Strategy
The major narrative arc in the book is a straightforward historical one.
Strategy as a function did not really exist before the '60s. To the extent
that the growth economies of the post-WW II decades needed such a
construct, the implicit ones in the heads of CEOs sufficed. The
Organization Man era was about what Michael Porter (a key figure in the
book) would characterize, in the '90s, as "operational effectiveness," in
low-competition growth markets.
The events comprising the origin myth are fairly straightforward and
distinctly American. Bruce Henderson invented the sector by founding the
Boston Consulting Group in 1963. A textbook "maverick idea guy" type,
Henderson pioneered the now familiar practices of hiring the best and
brightest from the top MBA programs, especially those with engineering
backgrounds (driving up the intake IQ and prestige of the programs in the
process, with the result that the MBA slowly caught up, in terms of
respectability, to law degrees and PhDs). BCG, when it began, was
primarily a high-concept idea company, relying on carefully-crafted
conceptual insights, applied to specific clients, to drive its business.
Very quickly competition emerged. Bill Bain, the top salesman in
BCG, broke away, taking some of the best talent with him. The result was
Bain Consulting. Bruce Henderson had only himself to blame: he had
taken his own advice a little too well, organizing his young company as a
crucible of internal Darwinian competition. Bain and his entourage were
the fittest, and they not only survived and thrived, they decided to head out
and turn the mock competition into a real one. And as befits a mutiny,
Bain's signature style was distinctly non-high-concept and non-BCG. It
was all about working closely, secretively, and at length, with only one
client in a given industry. Alone in the strategy consulting world, Bain was
also committed to participating in execution. This would both position
them for serious growth in the eighties, when shareholder value became
the sole metric of strategic success, and get them into serious trouble, due
to their extreme intimacy with their clients. But through their ups and
downs, Bain remained the un-BCG, with a cult-like (to their competitors,
who called them "Bainies") devotion to helping clients execute their
strategy recommendations. Their calling card was the line, "we don't sell
advice by the hour; we sell profits at a discount."
These events, and the early successes of BCG and Bain, did not go
unnoticed. The genteel white-shoes at McKinsey, who had been running a
trusted and somnolent business since 1926, with no strategy offering,
realized that they had to react. And under the leadership of Fred Gluck,
who joined the firm in 1967 and took over the helm in 1978, they did. In
their response, they relied on neither ideas, nor execution, but on learning
quickly. As a result, they took over the strategy revolution started by BCG
by commoditizing (by their own admission) and hawking in volume the
ideas that BCG had pioneered. The upstarts were going to be put in their
place.
The last significant origin event was the entrance of Michael Porter,
who around 1979 took on the task of dignifying and elevating the
emerging ideas from the consulting world into an entire academic
discipline.
With those four players on the stage: BCG, the idea company, Bain,
the all-the-way-to-execution cult, McKinsey the behemoth commoditizer,
and Porter the intellectual heavyweight, the strategy revolution was
underway.
Kiechel's explanation for why strategy arose at all is a well-worn
one: the slowing of growth and demand, and the rise of competition in
sector after sector. The management literature before that time had very
little to say about competition, and operated under the (flawed)
assumption that in a given industry, cost structures would largely be the
same across players, and that there was enough room for everybody. Of
the classic three Cs of strategy (costs, customers and competition), the
last element was largely missing in the thinking of senior managers in the
fifties.
Compelling though the explanation is, it is not completely satisfying.
The first two decades after World War II were extremely anomalous.
Competition clearly was a feature in the Robber Baron era (recall the mad
race to lay the most miles of track between the Union Pacific and Central Pacific railroads
in the 1860s). Competition was also present and literally brutal (involving
actual wars) in the mercantilist era of national-charter companies in the
two centuries before that, a subject I'll get to shortly in my review of Nick
Robins' excellent 2006 book about the East India Company, The
Corporation That Changed the World. Surely there is a prehistory of
strategy to be mined from those eras, even if there were no Bains, BCGs
and McKinseys to capture it in 2x2s?
Be that as it may, corporate strategy arose to solve real problems. The
rise of competition was merely a stressor, and the obscurity of costs just a
symptom. Larger forces were at work, what Kiechel calls the "four
horsemen": deregulation, technology, changes in capital markets and
globalization. This particular list of four forces is intriguing: they were the
exact same four forces that shaped the world of mercantilist corporations
between 1600 and 1850, but in rather different ways. But I am getting
ahead of myself. That's a whole other book review.

The Rise of Strategy


Among them, the four key players (BCG, Bain, McKinsey and the
Michael Porter one-man show) created the intellectual landscape on
which corporate stories have played out over the last four decades. The
history of modern strategy ideas begins with BCG, which adopted its
regular Perspectives newsletter (the high-end blog-like content marketing
vehicle of its day) as its main calling card. Through it, they introduced the
world to the famous Growth Share Matrix, which introduced the colorful
terms "dog," "cow," "wildcat" and "stars" into the business lexicon, and its
comrade-in-arms, the experience curve. Together, these two constructs
helped strategy create its first great success: showing business leaders that
costs could not be assumed equal across players in an industry; that the
market leader, by learning the most and having the largest economies of
scale, was positioned to dominate the market in terms of costs. BCG
taught its clients that costs could be systematically and predictably driven
down as a product matured; that cash cows should be milked to feed the
stars; that dogs should be put out of their misery; that wildcats should be
given careful attention. All ideas that are taken for granted today.
These crisp, unqualified high-concept ideas were full of problems of
course, and there were plenty of unintended consequences. BCG's ideas
helped create brutal cost competition in entire sectors, driving everybody's
margins down. Their undervaluing of the "dog" quadrant helped create
the leveraged buyout (LBO) sector in the late '80s (a business activity that
has recently re-emerged in the guise of Private Equity). On the
conceptual front, it would take Michael Porter much of the eighties to
refine the ideas with his voluminous writings, and plug the major holes.
But the important thing to note is that BCG started the conversation.
The march of new ideas continued, and while some were true flavors
of the month (the re-engineering bubble of the early nineties would
later be dismissed by Porter as not strategy at all, but glorified Taylorist
operational effectiveness, and flawed at that) there was indeed a gradual
accumulation of real and solid ideas that led to what Kiechel calls the
"intellectualization of business." The foundations had been laid for
analytical leadership, driven by ideas and data.

Porter's contributions, of course, are well-known, but to my mind,
surprisingly unoriginal. Starting with his five-forces framework, all the
way to his coda, "What is Strategy?" (which he wrote in 1996, upon his
return to strategy after a brief flirtation with government and state
policies), much of Porter's work, if you trace the lineage of ideas, is about
working out the details and filling in the gaps. Crude and over-simplified
they may have been, but it was the consultants (and the micro-economists
across the street) who actually came up with the original ideas. Porter's
most significant idea-hijacking victim was likely McKinsey. The book
notes (and this was news to me) that the idea of Value Chain Analysis,
which Porter named and worked out in detail, actually originated in the
one piece of original work that can be attributed to McKinsey, the idea of
the "business system":
If there were an award for the most famous footnote in
management literature, a strong candidate would be the
first one in chapter 2 of Porter's book [On Competition].
Among McKinsey veterans, mere mention of it still causes
certain sets of teeth to grind. In this footnote, the professor
acknowledged that the business system concept captures
the idea that a firm is a series of functions, and that
analyzing how each is performed relative to competitors
can provide useful insights. He also conceded that
McKinsey stress the power of redefining the business
system to gain competitive advantage, an important
idea. But then, in two quick sentences, Porter contrasted
the system to his own ideas and dismissed its relevance to
the rest of his discussion, which would go on for five
hundred pages.
Moving on, Bain's role in the history of strategy strikes me as the
most significant, and their contribution was at once a philosophical idea
and an operating doctrine: they insisted on working only with one client in
an industry, staying in the background, and not publishing their ideas in
codified forms for others to use. Besides finessing the practical problem of
deciding who owns the intellectual property arising from a client
engagement (an issue that caused resentment among BCG and McKinsey
clients), the stance also reflects what I consider a fundamental and
defining feature of strategy: it is primarily a temporary informational
advantage, and it is only valuable to the extent that it cannot be copied.
This means that the very act of codification and broad dissemination turns
a strategic idea into a commodity. Porter would later popularize the idea
that certain ideas, which he arbitrarily labeled "operational effectiveness"
ideas, were merely costs of doing business. But the very fact that
Porter's own ideas (like BCG's and McKinsey's) are available to all, in a
way, makes them non-strategic cost-of-business commodities as well.
Bain also, alone among the major players, stayed true to another idea
(that dominated academia before Porter) that I consider central to strategy:
a strategy by definition is unique to a situation. It is not a formula that can
be applied all over; what can be codified is not strategic. Generality and
strategy do not go together, and this is reflected in the very work processes
of both business and military strategists: they rely primarily not on general
ideas, but on case studies, and treating each new case as a mystery to be
cracked with a case-specific insight, in a way that delivers competitive
advantage. To the extent that you are successfully applying formulas in
uncreative ways, you are not solving strategy problems at all. If you and
your competitor are both BCG clients, and both are applying an idea like
the experience curve, the result is not a temporary strategic advantage for
one player, but a race to the bottom, with customers and suppliers
enjoying the benefits.
This idea is not new, and goes back to Clausewitz's notion of the coup
d'oeil ("strike of the eye": seeing and grasping the essential and unique
advantage in a situation, by reference to similar, but not identical, cases
encountered in the past). But among the major strategy firms, only Bain
appears to have invested in this notion.
Of course, this strength was also a weakness. As the book recounts,
Bain's secretiveness and unique, situational execution support would get
them into bed with their clients, most famously with Guinness in Britain,
leading to conflicts of interest and barely-legal activities. Bain would also
lay the groundwork for establishing stock market performance, rather than
CEO hagiographies, as the main barometer of success, and help create,
through Bain Capital, the LBO sector, to take advantage of BCG's
systematic undervaluing of "dogs." Bain's work was also indirectly the
cause of the rise of a whole breed of smaller boutique firms specializing in
propping up share prices.
Those, then, are the highlights of the story created by the main players.
In the nineties, besides the re-engineering bubble, other ideas would be
added to the mix, including that of "core competency" due to Prahalad and
Hamel, but the main thread of the story essentially winds down after the
mid-'90s. All three of the top firms focused on international expansion and
industry-specific practice development rather than further management
innovation. Porter mysteriously vanished from strategy conversations.
Kiechel notes that after publishing "What is Strategy?" in 1997, and
putting the re-engineering mavens in their properly humble place, he
quickly received a contract to publish a book
expounding and further developing the ideas in What is
Strategy? Thirteen years after the article appeared, that
book remains unpublished. A mystery perhaps, but not as
intriguing as the question of why this man, whose work has
had more effect on how companies chart their future than
any other living scholar's, has yet to receive the Nobel
Memorial Prize in economics.
Kiechel is being perhaps too respectful. Influential though they have
been, Porter's ideas have not yet been proven solid enough to merit the
honor. I personally have had a fat volume of his collected works sitting on
my shelf for a couple of years, but have never been inspired enough to
finish it. Unlike the equally prolific Drucker, Porter is rather turgid and not
very readable. There is a reason: there is a deep conceptual problem with Porter's work, and with the entire main line of development of strategy as a discipline, that makes it deeply suspect: the fact that people are missing.
With characteristic fairness, Kiechel also tells the alternative story
of strategy, involving very different actors. If the Positioning School (Porter and the Big Three) constitutes the Keynesian school of strategy, arrayed on the other side is the equivalent of the Friedman school: the People School, a minority school that is gaining prominence today.

People or Position?
Kiechel correctly notes that the main tension in the literature on
strategy is the one between positioning (driven by numbers and models) and people (driven by organizational theory ideas).
At the heart of everything accomplished by Porter and the Big 3 is an
assumption that people don't really matter. This makes the main story of strategy a story about positioning and formulas. We've already seen one problem: that codification, generalization and dissemination turn strategies into costs of doing business. This creates an ever-faster arms race by
eroding competitive advantage faster than new ideas can create it. Entire
industry sectors start to tick faster and faster, benefiting customers and
suppliers, but not corporations, when strategy ideas take hold across the
board. This problem was implicitly solved by Bain through secrecy,
exclusivity and non-publication. Among the mainstream players, it was
again Bain consultants who implicitly acknowledged, through their
preference for long engagements and participation in execution, the fact
that people and strategy are not separable, and neither are strategies and
execution.
The alternate approach to strategy focuses directly on dynamics, and
by dynamics, we mean the patterns of change created by that most
unpredictable variable in the equation, people.
Porter here is the target of most of the criticism, and the leading lights
of the People school begin their critiques with the question, "where are the people in a Porter strategy?" Kiechel neatly brings out the nuances of Porter's reaction to this charge through carefully selected quotes. At one point, he describes how Porter insists that his framework is dynamic, protesting, "to this day I completely accept the premise that every company is different, that every company is unique." At another point, he has Porter resignedly saying, "Where I fail is in the human dimension."
To be fair to the positioning school though, people, the driver of
unpredictable dynamics, are not easy to model and integrate into strategy.
And it is not for lack of trying. The People school began its work by drawing inspiration from the work of Herbert Simon, who introduced the idea of bounded rationality and the idea that people satisfice rather than optimize. From there, the march over several decades to behavioral-economics-inspired approaches to strategy was inevitable.
Besides my favorite, William Whyte (who gets a too-brief mention),
the important thinkers in this school are not as well-known as the
positioning school's Big Four: Richard Cyert (The Behavioral Theory of the Firm), Karl Weick (Collective Sense-Making), Henry Mintzberg (Mintzberg on Management) and Jeffrey Pfeffer (whom Kiechel calls "the Porter of Organizational Behavior").
One name, though, should be familiar: Tom Peters was the lone rebel in the mainstream strategy world, trying to draw attention to the people aspects. Though Thriving on Chaos was the first big business book I ever read (in the mid-80s, as a teenager), I am frankly not a fan. But he
must be given credit for an entirely different achievement: creating the
best-selling business book sector.
Though it mostly lost the war, the People School achieved its
greatest success playing defense in the early 80s, with Richard Pascale's 1984 article in the California Management Review, "Perspectives on Strategy: The Real Story Behind Honda's Success." The significance of the article was that Pascale showed that Honda's seemingly deliberate and modeling/data-based invasion of America was really an outcome of serendipity mixing with in-market adaptive learning and the peculiar
personalities of the principals. In other words, the actions and successes of
humans within an agile, quick-learning startup were being attributed, by
mainstream strategists, to deliberate modeling and data, and the use of
elaborate constructs. A case of post-hoc rationalization.
Though Pascale won the battle, the People philosophy did not win
the war, and for good reasons.
One good reason is simply that the People school is pre-paradigmatic. There is very little agreement among a multitude of contending schools of thought. The book quotes one study which found that 105 experts polled for key ideas from the school produced 146 candidates, of which 106 were unique. With that much dissent, the People school doesn't stand a chance in the commercial marketplace for
retail business ideas (which is why, by my reasoning, it is automatically
more valuable, since fewer people understand the ideas). By contrast, in
the Positioning school, there are perhaps a couple of dozen key ideas
that everybody agrees are important, which every MBA learns, and most
non-MBA managers eventually learn through osmosis.
Add to this the fact that any People-focused school is necessarily based on metaphysical, rather than psychological, axioms, and you get a mess. If you believe in an idealist "perfectibility of Man" doctrine, you will follow Maslow and end up with high-minded ideas about
organizations allowing their people to self-actualize, resulting in their
banding together into missionary tribes that proceed to Save the World.
If you are skeptical of human perfectibility, you get People models like
my Gervais Principle series.
The Left and Right Brains of Corporate Strategy
The Positioning school is basically a half-century's worth of codification and dissemination of ideas under an assumption that
companies are run by sound operating management, capable of execution.
Every idea developed by the school either creates a flavor-of-the-month
bubble, or gets validated and incorporated into the very structure of the
broader business environment, as an across-the-board cost of doing
business. The result has been a gradual acceleration of change and a
shortening of the advantage offered by any given idea. Ideas go from
being secret strategies to codified commodities so quickly that they barely
pay for themselves. Okay, I won't repeat that idea again.
The People school has come down to a basic position that good
people with a bad system/process will always outperform bad people with
a good system/process. Hence the Good to Great idea that you must get
the right people on the bus, the wrong people off the bus, and then decide
where to drive. It is a fundamentally adaptive, experimental, local and
entrepreneurial approach to business problems. It is also a model that does
not naturally lead to industry-wide acceleration, since it is people, not
ideas, that matter, and people and teams cannot (yet) be cloned.

The professionals may disagree in public, but I've never yet met anyone in the real world who does not mix and match ideas from both worlds. The Pascale Honda story is clever, but does not belie things like
the Growth Share Matrix and its descendants. To some extent, the
Positioning and People schools are the left and right brains of strategy,
and smart people tend to operate in whole-brained ways.
Other Threads
There is plenty more in the book, all of it illuminated by fascinating
and fresh anecdotes, and statistics on the growth of the sector.
One thread deals with the endgames for consultants. Since the sector operates by an up-or-out dynamic, with only about 10% making partner, it creates an endless supply of exiting experienced business professionals. There is an extended discussion of one endgame: the emergence of a consulting stint as a fast-track path to senior
management in client companies (which created a whole generation of
consultant-turned-VPs, who became more demanding customers, raising
the stakes for the whole sector). Another currently popular endgame is
apparently the Private Equity (PE) sector (the descendant of LBOs and the
big brother of Venture Capital, in case you don't know what that is).
Another interesting thread deals with the relative failure of the
industry in dealing with innovation problems as opposed to cost control
problems (which has led to the perhaps unfair association between strategy
consultants and layoffs).
Yet another thread deals with the emergence of the "literary-industrial complex," including a discussion of conferences, the business book packaging industry, and the dominant influence of the Harvard Business Review (one insider is quoted as saying, "You can get a year's worth of business, maybe two, on the strength of one article.")
Perhaps the most significant minor thread is the story of the rise of shareholder value as the key metric (an idea Jack Welch is quoted as calling "the dumbest idea in the world"). Related to that is an entire chapter on the role of strategy consultants in the financial crisis (short version: "we didn't do it; we were down in the basement laying off people while the evil Quants were whispering stupid ideas in the CEOs' ears").
There are also plenty of juicy nuggets of insider information for fresh
MBAs to chew on. For instance, a McKinsey veteran is quoted as saying that on a scale of 1-10, the relative power of various players is as follows:

Office Manager: 10
Industry Practice Manager: 4-5
Function Manager: 1-2

The Decline and Fall of Strategy


The tension between the resurgent People school (which is gaining
ground thanks to the Strengths movement and books like Good to Great)
and the Positioning school is just one of the main cracks in the edifice of
modern corporate strategy.
But though a synthesis may occur there, other forces may yet create
trouble for strategy as a discipline. Though Kiechel takes great pains to
avoid this interpretation, his story lends itself very well to a rise and fall
interpretation. Trends unfolding today might well be undermining the
foundations of strategy.
Not least among these trends is the disruption of the very ontology of strategy, as one of the new thinkers quoted towards the end of the book, Philip Evans, suggests. Evans, of course, is talking about the things that are very familiar to the readers of this blog: the rise of social media, the increasing ambiguity of constructs such as "corporation" and "market," and the uneasy new constructs (such as "Enterprise 2.0" and "Ecosystem") that seem poised to take the place of the old ones. Besides this evidence, the book also notes that there hasn't really been a notable new idea in strategy since the mid-nineties. Rather to my delight, Kiechel dismisses the one candidate new idea, Blue Ocean Strategy, as self-serving, and
later in the book asks, rhetorically:

[N]ame one strategy guru on the order of a Porter or Hamel who established his reputation after 1995, or the title
of a best-selling book on the topic published since then.
How many out there snap to at the names of W. Chan Kim
and Renee Mauborgne?
Since Blue Ocean Strategy is the only book I have completely panned
on this blog, it is nice to have my views validated.
So while Kiechel doesn't overtly suggest that the strategy revolution may be in trouble after four decades of success, he (very fair-minded of him) does lay out the evidence required for those who want to draw that conclusion.
That said, and whether or not the discipline and function survive, the
accomplishments in the decades of its reign are truly impressive. Even if
you have a fairly good mastery of major business ideas (I consider myself
fairly well-read on the subject), the book deftly weaves a story through all
the major ideas of the last four decades, in a way that truly brings out the
highlights. This is definitely a sum-greater-than-parts book.
What the Book Does Not Cover
There are many potentially relevant topics the book leaves out, and
most are defensible exclusions (for example, the ecosystem role of the
industry analyst players like Gartner and Forrester is not considered, nor is
the pre-history of strategy, which I already mentioned).
But one omission is worth discussing: explicit definitions. Kiechel,
adopting a journalistic stance, avoids offering up his own definition.
Instead, he characterizes strategy in a functional way as
the framework by which companies understand
what they're doing and want to do, the construct through
which and around which the rest of their efforts are
organized

This is clearly not a definition, and not intended as one. You could be
reading tea-leaves and calling it strategy. Or, if you are a pure Druckerian,
you could declare that the purpose of a business is to create and keep a
customer, and use that static doctrine as your framework and construct,
and worry no further. Yet, strategy is clearly more than that.
The positive definitions on offer appear only as illustrations of the thinking of specific schools or people. For instance, the pre-Porter state of management thinking in academia is characterized through a quote from Ken Andrews (a Harvard faculty member who taught courses eventually taken over by Porter):
Corporate strategy is the pattern of major objectives,
purposes, or goals and established plans for achieving those
goals, stated in such a way as to define what business the
company is in or is to be in and the kind of company it is or
is to be.
As Kiechel notes, that grand, overarching definition says everything
and nothing. But it creates the intellectual room for viewing strategy as
highly unique and individual to companies and situations, a process of
creative story-telling. By contrast, Porters formula offerings, and the
industrys, are more confining, for example, The essence of formal
corporate strategy is relating a company to its environment (which
suggests that strategy is essentially about responding to competition).
I should mention here that I have a vested interest in raising the
question of definitions, since I actually offer one in my upcoming book,
Tempo (you can find a really old version of my ideas in my 2007 post, "Strategy, Tactics, Operations and Doctrine: A Decision-Language Tutorial," but my thinking has evolved a LOT since then, so don't hold me to the details).
But getting back to the question of definitions for the specific context
of corporate strategy, if thinkers like Andrews were being too general,
Porter and his group too formulaic, and the People school too implicit,
where are we to look? I personally believe the heart of the matter goes back to Clausewitz and Napoleon's coup d'oeil: a whole-brained, local, and specific insight, in the context of a narrative. I first stumbled across this idea in William Duggan's book, Strategic Intuition, which I reviewed a while back, where he noted that Porter's thinking is really a modern-day version of the thinking of Jomini, a contemporary of Clausewitz, who offered very formulaic explanations of Napoleon's successes where Clausewitz offered explanations based on Napoleon's unique capacity for strategic insight. If that's too historical for you, you may want to read my post "How to Think Like Hercule Poirot." If strategy is a matter of cracking a case, it is probably not a bad idea to learn from fictional detectives.
Looking back, I've clearly been beating this particular drum for a
while.
I'll close with a little piece of personal disclosure. I have endured two
significant professional rejections in my career so far. I failed
spectacularly to land a tenure-track position in academia, and I crashed out
of the second round of a McKinsey interview. Curiously, while it took me
months of nursing unworthy sour-grapes feelings to get over the first
rejection, my reaction to being rejected by McKinsey was one of
immediate relief. I never understood why until today. Now I know: despite
my deep interest in the subject, my personality and strengths would have
made me a train-wreck at a place like McKinsey, so I am glad I failed fast,
and that they were wise enough to kick me out of the process early. If I had known then what I know now, I'd probably have applied to Bain instead. From what I've read, that seems more like my style. Or most likely, I'd have avoided the mainstream altogether and gone over and found a niche in the People school somewhere.
But wait a moment. This blog is that niche. So maybe I am in the
People School strategy business after all. Hmm... Ribbonfarm Consulting Group (RCG) has a nice ring to it. Egggggssellent. Maybe I'll
hire my cat as my first employee.

A Brief History of the Corporation: 1600 to 2100


June 8, 2011
On 8 June, a Scottish banker named Alexander Fordyce shorted the collapsing Company's shares in the London markets. But a momentary bounce-back in the stock ruined his plans, and he skipped town leaving £550,000 in debt. Much of this was owed to the Ayr Bank, which imploded. In less than three weeks, another 30 banks collapsed across Europe, bringing trade to a standstill. On July 15, the directors of the Company applied to the Bank of England for a £400,000 loan. Two weeks later, they wanted another £300,000. By August, the directors wanted a £1 million bailout. The news began leaking out, and seemingly contrite executives, running from angry shareholders, faced furious Parliament members. By January, the terms of a comprehensive bailout were worked out, and the British government inserted its czars into the Company's management to ensure compliance with its terms.
If this sounds eerily familiar, it shouldn't. The year was 1772, exactly 239 years ago today, the apogee of power for the corporation as a business construct. The company was the British East India Company (EIC). The bubble that burst was the East India Bubble. Between the founding of the EIC in 1600 and the post-subprime world of 2011, the idea of the corporation was born, matured, over-extended, reined in, refined, patched, updated, over-extended again, propped up and finally widely declared to be obsolete. Between 2011 and 2100, it will decline, hopefully gracefully, into a well-behaved retiree on the economic scene.
In its 400+ year history, the corporation has achieved extraordinary
things, cutting around-the-world travel time from years to less than a day,
putting a computer on every desk, a toilet in every home (nearly) and a
cellphone within reach of every human. It even put a man on the Moon
and kinda-sorta cured AIDS.
So it is a sort of grim privilege for the generations living today to
watch the slow demise of such a spectacularly effective intellectual
construct. The Age of Corporations is coming to an end. The traditional

corporation won't vanish, but it will cease to be the center of gravity of economic life in another generation or two. Corporations will live on as religious institutions do today, as weakened ghosts of more vital institutions from centuries ago.
It is not yet time for the obituary (and that time may never come), but
the sun is certainly setting on the Golden Age of corporations. It is time to
review the memoirs of the corporation as an idea, and contemplate a post-corporate future framed by its gradual withdrawal from the center stage of the world's economic affairs.
Framing Modernity and Globalization
For quite a while now, I have been looking for the right set of frames
to get me started on understanding geopolitics and globalization. For a
long time, I was misled by the fact that 90% of the available books frame
globalization and the emergence of modernity in terms of the nation-state
as the fundamental unit of analysis, with politics as the fundamental area
of human activity that shapes things. On the face of it, this seems
reasonable. Nominally, nation-states subsume economic activity, with
even the most powerful multi-national corporations being merely
secondary organizing schemes for the world.
But the more I've thought about it, the more I've been pulled towards a business-first perspective on modernity and globalization. As a result, this post is mostly woven around ideas drawn from five books that provide appropriate fuel for this business-first frame. I will be citing, quoting and otherwise indirectly using these books over several future posts, but I won't be reviewing them. So if you want to follow the arguments more
closely, you may want to read some or all of these. The investment is
definitely worthwhile.

The Corporation that Changed the World by Nick Robins, a history of the East India Company, a rather unique original prototype of the idea
Monsoon by Robert Kaplan, an examination of the re-emergence of
the Indian Ocean as the primary theater of global geopolitics in the
21st century

The Influence of Sea Power Upon History: 1660-1783 by Alfred Thayer Mahan, a classic examination of how naval power is the
most critical link between political, cultural, military and business
forces.
The Post-American World by Fareed Zakaria, an examination of
the structure of the world being created, not by the decline of
America, but by the rise of the rest.
The Lever of Riches by Joel Mokyr, probably the most compelling
model and account of how technological change drives the
evolution of civilizations, through monotonic, path-dependent
accumulation of changes

I didn't settle on these five lightly. I must have browsed or partly-read-and-abandoned dozens of books about modernity and globalization
before settling on these as the ones that collectively provided the best
framing of the themes that intrigued me. If I were to teach a 101 course on
the subject, I'd start with these as required reading in the first 8 weeks.
The human world, like physics, can be reduced to four fundamental
forces: culture, politics, war and business. That is also roughly the order of
decreasing strength, increasing legibility and partial subsumption of the
four forces. Here is a visualization of my mental model:

Culture is the most mysterious, illegible and powerful force. It includes such tricky things as race, language and religion. Business, like gravity in physics, is the weakest and most legible: it can be reduced to a few basic rules and principles (comprehensible to high-school students) that govern the structure of the corporate form, and descriptive artifacts
like macroeconomic indicators, microeconomic balance sheets, annual
reports and stock market numbers.
But one quality makes gravity dominate at large space-time scales:
gravity affects all masses and is always attractive, never repulsive. So
despite its weakness, it dominates things at sufficiently large scales. I don't want to stretch the metaphor too far, but something similar holds
true of business.
On the scale of days or weeks, culture, politics and war matter a lot
more in shaping our daily lives. But those forces fundamentally cancel out
over longer periods. They are mostly noise, historically speaking. They
don't cause creative-destructive, unidirectional change (whether or not you think of that change as progress is a different matter).
Business, though, as an expression of the force of unidirectional technological evolution, has a destabilizing unidirectional effect. It is technology, acting through business and Schumpeterian creative-destruction, that drives monotonic, historicist change, for good or bad.
Business is the locus where the non-human force of technological change
sneaks into the human sphere.
Of course, there is arguably some progress on all four fronts. You
could say that Shakespeare represents progress with respect to Aeschylus,
and Tom Stoppard with respect to Shakespeare. You could say Obama
understands politics in ways that, say, Hammurabi did not. You could say
that General Petraeus thinks of the problems of military strategy in ways
that Genghis Khan did not. But all these are decidedly weak claims.
On the other hand, the proposition that Facebook (the corporation) is
in some ways a beast entirely beyond the comprehension of an ancient
Silk Road trader seems vastly more solid. And this is entirely a function of
the intimate relationship between business and technology. Culture is
suspicious of technology. Politics is mostly indifferent to and above it.
War-making uses it, but maintains an arm's-length separation. Business? It
gets into bed with it. It is sort of vaguely plausible that you could switch
artists, politicians and generals around with their peers from another age
and still expect them to function. But there is no meaningful way for a businessman from (say) 2000 BC to comprehend what Mark Zuckerberg does, let alone take over for him. Too much magical technological water
has flowed under the bridge.
Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic, but technology (and science) aren't what create the visible magic. Most of the magic never leaves journal papers or
discarded engineering prototypes. It is business that creates the world of
magic, not technology itself. And the story of business in the last 400
years is the story of the corporate form.
There are some who treat corporate forms as yet another technology
(in this case a technology of people-management), but despite the
trappings of scientific foundations (usually in psychology) and
engineering synthesis (we speak of organizational design), the corporate
form is not a technology. It is the consequence of a social contract like the
one that anchors nationhood. It is a codified bundle of quasi-religious
beliefs externalized into an animate form that seeks to preserve itself like
any other living creature.
The Corporate View of History: 1600 to 2100
We are not used to viewing world history through the perspective of
the corporation for the very good reason that corporations are a recent
invention, and instances that had the ability to transform the world in
magical ways did not really exist till the EIC was born. Businesses, of course, have been around for a while. The oldest continuously surviving
business, until recently, was Kongo Gumi, a Japanese temple construction
business founded in 584 AD that finally closed its doors in 2009. Guilds
and banks have existed since the 16th century. Trading merchants, who
raised capital to fund individual ships or voyages, often with some royal
patronage, were also not a new phenomenon. What was new was the idea
of a publicly traded joint-stock corporation, an entity with rights similar to
those of states and individuals, with limited liability and significant
autonomy (even in its earliest days, when corporations were formed for
defined periods of time by royal charter).

This idea morphed a lot as it evolved (most significantly in the aftermath of the East India Bubble), but it retained a recognizable DNA throughout. Many authors, such as Gary Hamel (The Future of Management), Tom Malone (The Future of Work) and Don Tapscott (Wikinomics), have talked about how the traditional corporate form is getting obsolete. But in digging around, I found to my surprise that nobody has actually attempted to meaningfully represent the birth-to-obsolescence evolution of the idea of the corporation.
Here is my first stab at it (I am working on a much more detailed,
data-driven timeline as a side project):

To understand history (world history in the fullest sense, not just economic history) from this perspective, you need to understand two important points about this evolution of corporations.

The Smithian/Schumpeterian Divide


The first point is that the corporate form was born in the era of
Mercantilism, the economic ideology that (zero-sum) control of land is the
foundation of all economic power.
In politics, Mercantilism led to balance-of-power models. In business,
once the Age of Exploration (the 16th century) opened up the world, it led
to mercantilist corporations focused on trade (if land is the source of all
economic power, the only way to grow value faster than your land
holdings permit is to trade on advantageous terms).
The forces of radical technological change (the Industrial Revolution) did not seriously kick in until after nearly 200 years of corporate evolution (1600-1800) in a mercantilist mold. Mercantilist models of economic growth map to what Joel Mokyr calls Smithian growth, after Adam Smith. It is worth noting here that Adam Smith
published The Wealth of Nations in 1776, strongly influenced by his
reading of the events surrounding the bursting of the East India Bubble in
1772 and debates in Parliament about its mismanagement. Smith was both
the prophet of doom for the Mercantilist corporation, and the herald of
what came to replace it: the Schumpeterian corporation. Mokyr
characterizes the growth created by the latter as Schumpeterian growth.
The corporate form therefore spent almost 200 years (nearly half of its life to date) being shaped by Mercantilist thinking, a fundamentally
zero-sum way of viewing the world. It is easy to underestimate the impact
of this early life since the physical form of modern corporations looks so
different. But to the extent that organizational forms represent externalized
mental models, codified concepts and structure-following-strategy (as
Alfred Chandler eloquently put it), the corporate form contains the inertia
of that early formative stage.
In fact, in terms of the two functions that Drucker considered the only
essential ones in business, marketing and innovation, the Mercantilist
corporation lacked one. The archetypal Mercantilist corporation, the EIC, understood marketing intimately and managed demand and supply with extraordinary accuracy. But it did not innovate.
Innovation was the function grafted onto the corporate form by the
possibility of Schumpeterian growth, but it would take nearly an entire
additional century for the function to be properly absorbed into
corporations. It was not until after the American Civil War and the Gilded
Age that businesses fundamentally reorganized around time instead of space, which led, as we will see, to a central role for ideas and therefore the innovation function.
The Black Hills Gold Rush of the 1870s, the focus of the Deadwood
saga, was in a way the last hurrah of Mercantilist thinking. William
Randolph Hearst, the son of gold-mining mogul George Hearst (who took over Deadwood in the 1870s), made his name with newspapers. The baton
had formally been passed from mercantilists to schumpeterians.
This divide between the two models can be placed at around 1800, the
nominal start date of the Industrial Revolution, as the ideas of Renaissance
Science met the energy of coal to create a cocktail that would allow
corporations to colonize time.
Reach versus Power
The second thing to understand about the evolution of the corporation
is that the apogee of power did not coincide with the apogee of reach. In
the 1780s, only a small fraction of humanity was employed by
corporations, but corporations were shaping the destinies of empires. In
the centuries that followed the crash of 1772, the power of the corporation was curtailed significantly, but in terms of sheer reach, corporations continued to grow, until by around 1980, a significant fraction of humanity was
effectively being governed by corporations.
I don't have numbers for the whole world, but for America, less than 20% of the population had paycheck incomes in 1780, and over 80% in 1980, and the percentage has been declining since (I have cited these figures before; they are from Gareth Morgan's Images of Organization and Dan Pink's Free Agent Nation). Employment fraction is of course only one

of the many dimensions of corporate power (which include economic, material, cultural, human and political forms of power), but this graph
provides some sense of the numbers behind the rise and fall of the
corporation as an idea.

It is tempting to analyze corporations in terms of some measure of overall power, which I call reach. Certainly corporations today seem far more powerful than those of the 1700s, but the point is that the form is
much weaker today, even though it has organized more of our lives. This
is roughly the same as the distinction between fertility of women and
population growth: the peak in fertility (a per-capita number) and peak in
population growth rates (an aggregate) behave differently.
To make sense of the form, the divide between the Smithian and
Schumpeterian growth epochs is much more useful than the dynamics of
reach. This gives us a useful 3-phase model of the history of the
corporation: the Mercantilist/Smithian era from 1600 to 1800, the Industrial/Schumpeterian era from 1800 to 2000 and, finally, the era we are entering, which I will dub the Information/Coasean era. By a happy
accident, there is a major economist whose ideas help fingerprint the
economic contours of our world: Ronald Coase.

This post is mainly about the two historical phases, and is in a sense a macro-prequel to the ideas I normally write about, which are more individual-focused and future-oriented.
I. Smithian Growth and the Mercantilist Economy (1600-1800)
The story of the old corporation and the sea
It is difficult for us in 2011, with Walmart and Facebook as examples
of corporations that significantly control our lives, to understand the sheer
power the East India Company exercised during its heyday. Power that
makes even the most out-of-control of today's corporations seem tame by
comparison. To a large extent, the history of the first 200 years of
corporate evolution is the history of the East India Company. And despite
its name and nation of origin, to think of it as a corporation that helped
Britain rule India is to entirely misunderstand the nature of the beast.
Two images hint at its actual globe-straddling, 10x-Walmart
influence: the image of the Boston Tea Partiers dumping crates of tea into
the sea during the American struggle for independence, and the image of
smoky opium dens in China. One image symbolizes the rise of a new
empire. The other marks the decline of an old one.
The East India Company supplied both the tea and the opium.
At a broader level, the EIC managed to balance an unbalanced trade
equation between Europe and Asia whose solution had eluded even the
Roman empire. Massive flows of gold and silver from Europe to Asia via
the Silk and Spice routes had been a given in world trade for several
thousand years. Asia simply had far more to sell than it wanted to buy.
Until the EIC came along...
A very rough sketch of how the EIC solved the equation reveals the
structure of value-addition in the mercantilist world economy.
The EIC started out by buying textiles from Bengal and tea from
China in exchange for gold and silver.

Then it realized it was playing the same sucker game that had trapped
and helped bankrupt Rome.
Next, it figured out that it could take control of the opium industry in
Bengal, trade opium for tea in China with a significant surplus, and use the
money to buy the textiles it needed in Bengal. Guns would be needed.
As a bonus, along with its partners, it participated in yet another
clever trade: textiles for slaves along the coast of Africa, who could be
sold in America for gold and silver.
For this scheme to work, three foreground things and one background
thing had to happen: the corporation had to effectively take over Bengal
(and eventually all of India), Hong Kong (and eventually, all of China,
indirectly) and England. Robert Clive achieved the first goal by 1757. An
employee of the EIC, William Jardine, founded what is today Jardine
Matheson, the spinoff corporation most associated with Hong Kong and
the historic opium trade. It was, during its early history, what we would
call today a narco-terrorist corporation; the Taliban today are
kindergarteners in that game by comparison. And while the corporation
never actually took control of the British Crown, it came close several
times, by financing the government during its many troubles.
The background development was simpler. England had to take over
the oceans and ensure the safe operations of the EIC.
Just how comprehensively did the EIC control the affairs of states?
Bengal is an excellent example. In the 1600s and the first half of the
1700s, before the Industrial Revolution, Bengali textiles were the
dominant note in the giant sucking sound drawing away European wealth
(which was flowing from the mines and farms of the Americas). The
European market, once the EIC had shoved the Dutch VOC aside,
constantly demanded more and more of an increasing variety of textiles,
ignoring the complaining of its own weavers. Initially, the company did no
more than battle the Dutch and Portuguese on water, and negotiate
agreements to set up trading posts on land. For a while, it played by the
rules of the Mughal empire and its intricate system of economic control

based on various imperial decrees and permissions. The Mughal system kept the business world firmly subservient to the political class, and
ensured a level playing field for all traders. Bengal in the 17th and 18th
centuries was a cheerful drama of Turks, Arabs, Armenians, Indians,
Chinese and Europeans. Trade in the key commodities (textiles, opium, saltpeter and betel nuts) was carefully managed to keep the empire on top.
But eventually, as the threat from the Dutch was tamed, it became
clear that the company actually had more firepower at its disposal than
most of the nation-states it was dealing with. The realization led to the first
big domino falling, in the corporate colonization of India, at the Battle of Plassey. Robert Clive, along with Indian co-conspirators, managed to take
over Bengal, appoint a puppet Nawab, and get himself appointed as the
Mughal diwan (finance minister/treasurer) of the province of Bengal,
charged with tax collection and economic administration on behalf of the
weakened Mughals, who were busy destroying their empire. Even people
who are familiar enough with world history to recognize the name Robert
Clive rarely understand the extent to which this was the act of a single
sociopath within a dangerously unregulated corporation, rather than the
country it was nominally subservient to (England).
This history doesn't really stand out in sharp relief until you contrast it with the behavior of modern corporations. Today, we listen with shock to rumors about the backroom influence of corporations like Halliburton or BP, and politicians being in bed with the business leaders in the Too-Big-to-Fail companies they are supposed to regulate.
The EIC was the original too-big-to-fail corporation. The EIC was the
beneficiary of the original Big Bailout. Before there was TARP, there was
the Tea Act of 1773 and the Pitt India Act of 1783. The former was a failed
attempt to rein in the EIC, which cost Britain the American Colonies. The
latter created the British Raj as Britain doubled down in the east to recover
from its losses in the west. An invisible thread connects the histories of
India and America at this point. Lord Cornwallis, the loser at the Siege of
Yorktown in 1781 during the Revolutionary War, became the second
Governor General of India in 1786.
But these events were set in motion over 30 years earlier, in the
1750s. There was no need for backroom subterfuge. It was all out in the open because the corporation was such a new beast that nobody really understood the dangers it represented. The EIC maintained an army. Its merchant ships often carried vastly more firepower than the naval ships of lesser nations. Its officers were not only not prevented from making money on the side; private trade was actually a perk of employment (it was exactly this perk that allowed William Jardine to start a rival business that took over the China trade in the EIC's old age). And finally, the cherry on the sundae: there was nothing preventing its officers, like Clive, from simultaneously holding political appointments that legitimized
conflicts of interest. If you thought it was bad enough that Dick Cheney
used to work for Halliburton before he took office, imagine if he'd worked
there while in office, with legitimate authority to use his government
power to favor his corporate employer and make as much money on the
side as he wanted, and call in the Army and Navy to enforce his will. That
picture gives you an idea of the position Robert Clive found himself in, in
1757.
He made out like a bandit. A full 150 years before American corporate barons earned the appellation "robber."
In the aftermath of Plassey, in his dual position of Mughal diwan of
Bengal and representative of the EIC with permission to make money for
himself and the company, and the armed power to enforce his will, Clive
did exactly what you'd expect an unprincipled and enterprising adventurer
to do. He killed the golden goose. He squeezed the Bengal textile industry
dry for profits, destroying its sustainability. A bubble in London and a
famine in Bengal later, the industry collapsed under the pressure (Bengali
economist Amartya Sen would make his bones and win the Nobel two
centuries later, studying such famines). With industrialization and
machine-made textiles taking over in a few decades, the economy had
been destroyed. But by that time the EIC had already moved on to the next
opportunities for predatory trade: opium and tea.
The East India Bubble was a turning point. Thanks to a rare moment
of the Crown being more powerful than the company during the bust, the
bailout and regulation that came in the aftermath of the bubble
fundamentally altered the structure of the EIC and the power relations
between it and the state. Over the next 70 years, political, military and economic power were gradually separated, and modern checks and balances against corporate excess came into being.
The whole intricate story of the corporate takeover of Bengal is told in detail in Robins' book. The Battle of Plassey is actually almost irrelevant; most of the action was in the intrigue that led up to it, and that followed. Even if you have some familiarity with Indian and British history during that period, chances are you've never drilled down into the intricate details. It has all the elements of a great movie: there is deceit, forgery of contracts, licensing frauds, murder, double-crossing, arm-twisting and everything else you could hope for in a juicy business story.
As an enabling mechanism, Britain had to rule the seas,
comprehensively shut out the Dutch, keep France, the Habsburgs, the
Ottomans (and later Russia) occupied on land, and have enough firepower
left over to protect the EIC's operations when the EIC's own guns did not suffice. It is not too much of a stretch to say that for at least a century and a half, England's foreign policy was a dance in Europe in service of the EIC's needs on the oceans. That story, with much of the action in Europe, but most of the important consequences in America and Asia, is told in Mahan's book. (Though boats were likely invented before the wheel, surprisingly, the huge influence of sea power upon history was not generally recognized until Mahan wrote his classic. The book is deep and dense. It's worth reading just for the story of how Rome defeated Carthage through invisible negative-space non-action on the seas by the Roman Navy. I won't dive into the details here, except to note that Mahan's book is the essential lens you need to understand the peculiar military
conditions in the 17th and 18th centuries that made the birth of the
corporation possible.)
To read both books is to experience a process of enlightenment. An
illegible period of world history suddenly becomes legible. The broad
sweep of world history between 1500 and 1800 makes no real sense (between
approximately the decline of Islam and the rise of the British Empire)
except through the story of the EIC and corporate mercantilism in general.
The short version is as follows.

Constantinople fell to the Ottomans in 1453 and the last Muslim ruler
was thrown out of Spain in 1492, the year Columbus sailed the ocean blue.
Vasco da Gama found a sea route to India in 1498. The three events
together caused a defensive consolidation of Islam under the later
Ottomans, and an economic undermining of the Islamic world (a process
that would directly lead to the radicalization of Islam under the influence
of religious leaders like Abd al-Wahhab (1703-1792)).
The 16th century makes a vague sort of sense as the Age of
Exploration, but it really makes a lot more sense as the startup/first-mover/early-adopter phase of corporate mercantilism. The period was
dominated by the daring pioneer spirit of Spain and Portugal, which
together served as the Silicon Valley of Mercantilism. But the maritime
business operations of Spain and Portugal turned out to be the MySpace
and Friendster of Mercantilism: pioneers who could not capitalize on their
early lead.
Conventionally, it is understood that the British and the Dutch were
the ones who truly took over. But in reality, it was two corporations that
took over: the EIC and the VOC (the Dutch East India Company,
Vereenigde Oost-Indische Compagnie, founded one year after the EIC), the Facebook and LinkedIn of Mercantile economics respectively. Both were fundamentally more independent of the nation-states that had given birth
to them than any business entities in history. The EIC more so than the
VOC. Both eventually became complex multi-national beasts.
A lot of other stuff happened between 1600 and 1800. The names from
world history are familiar ones: Elizabeth I, Louis XIV, Akbar, the Qing
emperors (the dynasty is better known than individual emperors) and the
American Founding Fathers. The events that come to mind are political
ones: the founding of America, the English Civil War, the rise of the
Ottomans and Mughals.
The important names in the history of the EIC are less well-known:
Josiah Child, Robert Clive, Warren Hastings. The events, like Plassey,
seem like sideshows on the margins of land-based empires.

The British Empire lives on in memories, museums and grand monuments in two countries. Company Raj is largely forgotten. The
Leadenhall docks in London, the heart of the action, have disappeared
today under new construction.
But arguably, the doings of the EIC and VOC on the water were more
important than the pageantry on land. Today the invisible web of
container shipping serves as the bloodstream of the world. Its foundations
were laid by the EIC.
For nearly two centuries they ruled unchallenged, until finally the
nations woke up to their corporate enemies on the water. With the reining
in and gradual decline of the EIC between 1780 and 1857, the war
between the next generation of corporations and nations moved to a new
domain: the world of time.
The last phase of Mercantilism eventually came to an end by the 1850s, as events ranging from the First War of Independence in India (known in Britain as the Sepoy Mutiny) to the first Opium War and Perry prying Japan open signaled the end of the Mercantilist corporation worldwide. The EIC wound up its operations in 1876. But the Mercantilist
corporation died many decades before that as an idea. A new idea began to
take its place in the early 19th century: the Schumpeterian corporation that
controlled, not trade routes, but time. It added the second of the two
essential Druckerian functions to the corporation: innovation.
II. Schumpeterian Growth and the Industrial Economy (1800-2000)
The colonization of time and the apparently endless frontier
To understand what changed in 1800, consider this extremely misleading table about GDP shares of different countries between 1600 and 1870. There are many roughly similar versions floating around in
globalization debates, and the numbers are usually used gleefully to shock
people who have no sense of history. I call this the most misleading table
in the world.

Chinese and Indian jingoists, in particular, are prone to misreading this table as evidence that colonization "stole" wealth from Asia (the collapse of GDP share for China and India actually went much further, into the low single digits, in the 20th century). The claim of GDP theft is true if you use a zero-sum Mercantilist frame of reference (and it is true in a different sense of "steal" that this table does not show).
But the Mercantilist model was already sharply declining by 1800.
Something else was happening, and Fareed Zakaria, as far as I know,
is the only major commentator to read this sort of table correctly, in The
Post-American World. He notes that what matters is not absolute totals,
but per-capita productivity.
We get a much clearer picture of the real standing of
countries if we consider economic growth and GDP per
capita. Western Europe's GDP per capita was higher than that of both China and India by 1500; by 1600 it was 50% higher than China's. From there, the gap kept growing. Between 1350 and 1950 (six hundred years) GDP per capita remained roughly constant in India and China (hovering around $600 for China and $550 for India). In
the same period, Western European GDP per capita went
from $662 to $4,594, a 594 percent increase.

Sure, corporations and nations may have been running on Mercantilist logic, but the undercurrent of Schumpeterian growth was taking off in Europe as early as 1500 in the less organized sectors like agriculture. It
was only formally recognized and tamed in the early 1800s, but the
technology genie had escaped.
The action shifted to two huge wildcards in world affairs of the 1800s:
the newly-born nation of America and the awakening giant in the east,
Russia. Per capita productivity is about efficient use of human time. But
time, unlike space, is not a collective and objective dimension of human
experience. It is a private and subjective one. Two people cannot own the
same piece of land, but they can own the same piece of time. To own
space, you control it by force of arms. To own time is to own attention. To
own attention, it must first be freed up, one individual stream of
consciousness at a time.
The Schumpeterian corporation was about colonizing individual
minds. Ideas powered by essentially limitless fossil-fuel energy allowed it to actually pull this off.
By the mid-1800s, as the EIC and its peers declined, the battle seemingly shifted back to land, especially in the run-up to, and aftermath of, the American Civil War. I haven't made complete sense of the Russian half of the story, but that peaked later and ultimately proved less important than the American half, so it is probably reasonably safe to treat the story
of Schumpeterian growth as an essentially American story.
If the EIC was the archetype of the Mercantilist era, the Pennsylvania Railroad Company was probably the best archetype for the Schumpeterian corporation. Modern corporate management, as well as Soviet forms of statist governance, can be traced back to it. In many ways the railroads solved a vastly speeded-up version of the problem solved by the EIC: complex coordination across a large area. Unlike the EIC, though, the railroads
coordination across a large area. Unlike the EIC though, the railroads
were built around the telegraph, rather than postal mail, as the
communication system. The difference was like the difference between the
nervous systems of invertebrates and vertebrates.

If the ship sailing the Indian Ocean ferrying tea, textiles, opium and
spices was the star of the mercantilist era, the steam engine and steamboat
opening up America were the stars of the Schumpeterian era. Almost
everybody misunderstood what was happening. Traveling up and down the
Mississippi, the steamboat seemed to be opening up the American interior.
Traveling across the breadth of America, the railroad seemed to be
opening up the wealth of the West, and the great possibilities of the Pacific
Ocean.
Those were side effects. The primary effect of steam was not that it
helped colonize a new land, but that it started the colonization of time.
First, social time was colonized. The anarchy of time zones across the vast
expanse of America was first tamed by the railroads for the narrow
purpose of maintaining train schedules, but ultimately, the tools that served to coordinate train schedules (the mechanical clock and time zones) served to colonize human minds. An exhibit I saw recently at the Union
Pacific Railroad Museum in Omaha clearly illustrates this crucial
fragment of history:

The steam engine was a fundamentally different beast than the sailing
ship. For all its sophistication, the technology of sail was mostly a very refined craft, not an engineering discipline based on science. You can trace a relatively continuous line of development, with relatively few new scientific or mathematical ideas, from early Roman galleys, Arab dhows and Chinese junks, all the way to the amazing Tea Clippers of the mid-19th century (Mokyr sketches out the story well, as does Mahan, in more
detail).
Steam power, though, was a scientific and engineering invention. Sailing ships were the crowning achievements of the age of craft guilds. Steam engines created, and were created by, engineers, marketers and business owners working together with (significantly disempowered) craftsmen in genuinely industrial modes of production. Scientific principles about gases, heat, thermodynamics and energy were applied to practical ends, resulting in new artifacts. The disempowerment of craftsmen would continue through the Schumpeterian age, until Frederick Taylor found ways to completely strip-mine all craft out of the minds of craftsmen, and put it into machines and the minds of managers. It sounds
craftsmen, and put it into machines and the minds of managers. It sounds
awful when I put it that way, and it was, in human terms, but there is no
denying that the process was mostly inevitable and that the result was
vastly better products.
The Schumpeterian corporation did to business what the doctrine of
Blitzkrieg would do to warfare in 1939: move humans at the speed of
technology instead of moving technology at the speed of humans. Steam
power used the coal trust fund (and later, oil) to fundamentally speed up
human events and decouple them from the constraints of limited forms of
energy such as the wind or human muscles. Blitzkrieg allowed armies to
roar ahead at 30-40 miles per hour instead of marching at 5 miles per hour.
Blitzeconomics allowed the global economy to roar ahead at 8% annual
growth rates instead of the theoretical 0% average across the world for
Mercantilist zero-sum economics. Progress had begun.
The equation was simple: energy and ideas turned into products and
services could be used to buy time. Specifically, energy and ideas could be
used to shrink autonomously-owned individual time and grow a space of
corporate-owned time, to be divided between production and

consumption. Two phrases were invented to name the phenomenon:


productivity meant shrinking autonomously-owned time. Increased
standard of living through time-saving devices became code for the fact
that the freed up time through labor saving devices was actually the
de facto property of corporations. It was a Faustian bargain.
Many people misunderstood the fundamental nature of Schumpeterian
growth as being fueled by ideas rather than time. Ideas fueled by energy
can free up time which can then partly be used to create more ideas to free
up more time. It is a positive feedback cycle, but with a limit. The
fundamental scarce resource is time. There is only one Earth's worth of space to colonize. Only one fossil-fuel store of energy to dig out. Only 24 hours per person per day to turn into captive attention.
Among the people who got it wrong was my favorite visionary, Vannevar Bush, who talked of science as "the endless frontier." To believe that
there is an arguably limitless supply of valuable ideas waiting to be
discovered is one thing. To argue that they constitute a limitless reserve of
value for Schumpeterian growth to deliver is to misunderstand how ideas
work: they are only valuable if attention is efficiently directed to the right
places to discover them and energy is used to turn them into businesses,
and Arthur-Clarke magic.
It is fairly obvious that Schumpeterian growth has been fueled so far
by reserves of fossil fuels. It is less obvious that it is also fueled by
reserves of collectively-managed attention.
For two centuries, we burned coal and oil without a thought. Then
suddenly, around 1980, Peak Oil seemed to loom menacingly closer.
For the same two centuries it seemed like time/attention reserves
could be endlessly mined. New pockets of attention could always be
discovered, colonized and turned into wealth.
Then the Internet happened, and we discovered the ability to mine
time as fast as it could be discovered in hidden pockets of attention. And
we discovered limits.

And suddenly a new peak started to loom: Peak Attention.


III. Coasean Growth and the Perspective Economy
Peak Attention and Alternative Attention Sources
I am not sure who first came up with the term Peak Attention, but the
analogy to Peak Oil is surprisingly precise. It has its critics, but I think the
model is basically correct.
Peak Oil refers to a graph of oil production with a maximum called
Hubbert's peak, which represents peak oil production. The theory behind it is
that new oil reserves become harder to find over time, are smaller in size,
and harder to mine. You have to look harder and work harder for every
new gallon, new wells run dry faster than old ones, and the frequency of
discovery goes down. You have to drill more.
There is certainly plenty of energy all around (the Sun and the wind,
to name two sources), but oil represents a particularly high-value kind.
Attention behaves the same way. Take an average housewife, the
target of much time mining early in the 20th century. It was clear where
her attention was directed. Laundry, cooking, walking to the well for
water, cleaning, were all obvious attention sinks. Washing machines,
kitchen appliances, plumbing and vacuum cleaners helped free up a lot of
that attention, which was then immediately directed (as corporate-captive
attention) to magazines and television.
But as you find and capture most of the wild attention, new pockets of
attention become harder to find. Worse, you now have to cannibalize your
own previous uses of captive attention. Time for TV must be stolen from
magazines and newspapers. Time for specialized entertainment must be
stolen from time devoted to generalized entertainment.
Sure, there is an equivalent to the Sun in the picture. Just ask anyone
who has tried mindfulness meditation, and you'll understand why the
limits to attention (and therefore the value of time) are far further out than
we think.

The point isn't that we are running out of attention. We are running
out of the equivalent of oil: high-energy-concentration pockets of easily
mined fuel.
The result is a spectacular kind of bubble-and-bust.
Each new pocket of attention is harder to find: maybe your product
needs to steal attention from that one obscure TV show watched by just
3% of the population between 11:30 and 12:30 AM. The next
displacement will fragment the attention even more. When found, each
new pocket is less valuable. There is a lot more money to be made in
replacing hand-washing time with washing-machine plus magazine time,
than there is to be found in replacing one hour of TV with a different hour
of TV.
What's more, due to the increasingly frantic zero-sum competition
over attention, each new well of attention runs out sooner. We know this
idea as shorter product lifespans.
So one effect of Peak Attention is that every human mind has been
mined to capacity using attention-oil drilling technologies. To get to Clay
Shirky's hypothetical notion of cognitive surplus, we need Alternative
Attention sources.
To put it in terms of per-capita productivity gains, we hit a plateau.
We can now connect the dots to Zakaria's reading of global GDP
trends, and explain why the action is shifting back to Asia, after being
dominated by Europe for 600 years.
Europe may have increased per capita productivity 594% in 600
years, while China and India stayed where they were, but Europe has been
slowing down and Asia has been catching up. When Asia hits Peak
Attention (America is already past it, I believe), absolute size, rather than
big productivity differentials, will again define the game, and the center of
gravity of economic activity will shift to Asia.

If you think that's a long way off, you are probably thinking in terms
of living standards rather than attention and energy. In those terms, sure,
China and India have a long way to go before catching up with even
Southeast Asia. But standard of living is the wrong variable. It is a derived
variable, a function of available energy and attention supply. China and
India will never catch up (though Western standards of living will
decline), but Peak Attention will hit both countries nevertheless, within the next 10 years or so.
What happens as the action shifts? Kaplan's Monsoon frames the
future in possibly the most effective way. Once again, it is the oceans,
rather than land, that will become the theater for the next act of the human
drama. While American lifestyle designers are fleeing to Bali, much
bigger things are afoot in the region.
And when that shift happens, the Schumpeterian corporation, the oil
rig of human attention, will start to decline at an accelerating rate.
Lifestyle businesses and other oddball contraptions (the solar panels and wind farms of attention economics) will start to take over.
It will be the dawn of the age of Coasean growth.
Adam Smith's fundamental ideas helped explain the mechanics of
Mercantile economics and the colonization of space.
Joseph Schumpeter's ideas helped extend Smith's ideas to cover
Industrial economics and the colonization of time.
Ronald Coase turned 100 in 2010. He is best known for his work on
transaction costs, social costs and the nature of the firm. Where most
classical economists have nothing much to say about the corporate form,
for Coase, it has been the main focus of his life.
Without realizing it, the hundreds of entrepreneurs, startup-studios
and incubators, 4-hour-work-weekers and lifestyle designers around the
world, experimenting with novel business structures and the attention
mining technologies of social media, are collectively triggering the age of
Coasean growth.

Coasean growth is not measured in terms of national GDP growth. That's a Smithian/Mercantilist measure of growth.
It is also not measured in terms of 8% returns on the global stock
market. That is a Schumpeterian growth measure. For that model of
growth to continue would be a case of civilizational cancer ("growth for the sake of growth is the ideology of the cancer cell," as Edward Abbey put it).
Coasean growth is fundamentally not measured in aggregate terms at
all. It is measured in individual terms. An individual's income and
productivity may both actually decline, with net growth in a Coasean
sense.
How do we measure Coasean growth? I have no idea. I am open to
suggestions. All I know is that the metric will need to be hyper-personalized and relative to individuals rather than countries, corporations or the global economy. There will be a meaningful notion of Venkat's rate
of Coasean growth, but no equivalent for larger entities.
The fundamental scarce resource that Coasean growth discovers and
colonizes is neither space, nor time. It is perspective.
The bad news: it too is a scarce resource that can be mined to a Peak
Perspective situation.
The good news: you will likely need to colonize your own unclaimed
perspective territory. No collectivist business machinery will really be able
to mine it out of you.
Those are stories for another day. Stay tuned.

Marketing, Innovation and the Creation of Customers
June 15, 2009
Recently, one of my projects went through a rapid, but nearly
imperceptible phase change. It went from being an innovation-first
project to being a marketing-first project. The marketing hat feels at
once comfortable and uncomfortable, familiar and unfamiliar. It feels like
listening to unfamiliar lyrics set to a familiar tune. This disconcerting
feeling of being caught up in the dance of a yin-yang pair has had me
pondering the best-known Drucker quote (from The Essential Drucker) for
weeks now:
Because the purpose of business is to create a
customer, the business enterprise has two, and only two,
basic functions: marketing and innovation.
Marketing and innovation define each other in yin-yang ways.
Thinking about the fundamentals of this dual pair of concepts led me to a
curious definition of the most important word in business: customer.

The Yin-Yang Evidence


Let me first treat you to two lists of weird symmetries and similarities
between marketing and innovation, in the Lincoln-Kennedy-assassination
similarities mode. I want to convince you there is a pattern here and,
unlike the Lincoln-Kennedy stuff, that there is a reason for the pattern.
Let's start with the similarities, and then look at the polarities.
Similarities

Both functions are systematically misunderstood, compared to other business functions. Innovation is often reductively
understood as invention and R&D, while marketing is often
reductively understood as promotion and advertising. By
contrast, nobody seriously misunderstands what HR, accounting
or manufacturing are about.
Ideally, both are in a balance: the optimal ratio of marketing
spend to engineering spend in a product launch (known as the
Grabowski Ratio) is 1.
Both functions lay claim to the DNA of the organization.
Marketing owns the overt form, the brand, that integrates the
self-image and story of the company, while innovation owns the
individuation behind the brand, within the society of
corporations.
Both functions frame their basic processes in terms of an
increasing certainty funnel metaphor. In innovation, the funnel
narrows from basic R&D through various stage gates to
successful commercialized product/service while in marketing,
the funnel narrows from a customer who is unaware of your
offering, through stages of awareness, interest, purchase, repeat
purchase and the ultimate stage, loyalty. In both cases, the
fundamental dynamic is a weeding out, a filtration. Of ideas in
one case, of people in the other.

Both functions are numbers games by definition. You try to factor in all the information you have, and are still left with a situation
of incomplete information, at which point you have to make a
leap of faith and push a button. The prototype either flies, or it
does not. The customer either pays attention, or s/he does not. By
contrast, every other function in business can reach much higher
levels of certainty.
Both functions revolve around the concept of differentiation.
Innovators deal in differentiation in product/service features,
while marketers deal in differentiation in customer perceptions.
A notion of creativity is at the heart of both functions. The idea
that it is easier to invent the future than to predict it applies to
both marketing and innovation. Innovators seek to shape future
material realities. Marketers seek to shape perceptions.
Both have a love/hate relationship with a downstream partner
function (production and sales respectively) that deals in scale and repetition. One design, a production run of a thousand. One user-story, a thousand registered users. One advertisement, a
thousand sales calls. Even in the age of mass customization, you
can always tell the two sides apart. Production and sales are
always repeating something (indeed, their raison d'être is skill in
creating perfect repetitions efficiently, even if the repetition is at
a level of abstraction above mass customization). Marketing
and innovation, on the other hand, depend on novelty and
uniqueness to add value. This is necessary. If innovation and
marketing did not create repetition opportunities downstream,
you would not have a business. You'd have a one-off project.
There's a lot more that I am sure you can dream up. But let's look at
some polarities.
Polarities

The stereotype of the innovator is the unsociable recluse, hiding unkempt in a lab. The stereotype of the marketer is the uber-sociable, immaculately turned-out social sophisticate.
Both groups are known for extreme precision in their use of
language, but in one case it leads to discourses impenetrable to
others, while in the other, it leads to models of clarity, brevity and
comprehensibility.
Marketing and innovation play a zero-sum game driven by the
clarity of the customer. When the customer has been created
with great clarity, marketing leads innovation and you get
sustaining and/or incremental innovations. When the customer is a
mystery, innovation leads, and you get disruptive and/or radical
innovations (here's what those adjectives mean, if you don't
know).
Though the ideal balance of marketing to engineering spend may
be 1, this is a dynamic balance. In general, you cannot be
doing both at once. The two have to dance, creating a
conversation.

There seem to be fewer polarities than similarities. We'll ponder that another day. Right now, let's move on to the theory that can elevate this
above Lincoln-Kennedy level.
The Creation of the Customer
To understand this yin-yang stuff, we have to tease out the substance
behind the "create a customer" bit from Drucker's famous quote. What exactly is this deified construct known as "the customer"? Why do we speak in terms of reverential awe? Why do we impute Godly attributes to this creature (omniscience: "the customer is always right," omnipotence: "the customer will ultimately decide")? Why does this creator-like creature need to be created rather than discovered? You and I are customers, and I don't exactly feel Godly when deigning to buy socks at the store. Rather small, controlled, overwhelmed by choices and intimidated, actually.
That's because a customer isn't a human being. Repeat after me:
A customer is a novel and stable pattern of human behavior.
You shouldn't be surprised. What, after all, are user stories
(innovation) and psychographic profiles (marketing) but hypotheses about
the stability of human behavior patterns? Underneath humanistic rhetoric
about authenticity, the creation of a customer is an act of control.
A beautiful example of this principle can be found on the streets of
Bombay, where vendors will offer to sell you "time pass." Time-pass, as it happens, is a paper cone of roasted and salted peanuts. But you don't buy peanuts. You don't even buy a snack. The branding as "time pass" tells
you what you buy: a way to pass the time as you stroll along on the beach,
or wait for your train to arrive. The customer created by the innovation
(peanuts in paper cones made available at certain locations) is the pattern
of modification of waiting and relaxation behaviors. You used to stroll.
Now you stroll munching peanuts. You used to fret, looking at your watch,
cursing railway delays. Now you peacefully munch peanuts instead.

This explains why customers need to be created, and what innovations really are. Innovation isn't about creating novel products or services.
An innovation is a stimulus that causes a novel and stable pattern
of human behavior to emerge.
Google isn't fundamentally an innovative search engine. It is a stimulus that creates a novel pattern of information-discovery behavior known as "Googling" that is different from what "searching" used to be before Google. Cars aren't products that replace horses. They are novel
patterns of human movement and settlement.
That, then, is why marketing and innovation are deeply linked in a
yin-yang pattern. They are both exploring the same uncertainties in free
human behavior, and seeking ways to stabilize it into predictable patterns.
When both look at uncertainties in human behavior, or uncertainties in
potential stimuli, you get similarities and harmonies. When they are
looking in different directions (typically, marketing looking at the
customer, while innovation is looking at the stimulus), you get polarities.
This tension is necessary. If ever innovation becomes truly customer-led, you'll be in a universe of faster horses. If ever marketing becomes truly product-led, you'll be in a universe of stuff nobody will buy.
The Death of the Customer
This definition of a customer also explains the life cycle of a product.
With every enhancement of the stimulus, the pattern that is the customer
evolves and becomes more complex. At some point though, there is no
more novelty emerging in the pattern. The customer is rather predictably
asking for a faster horse, and it is costing you a lot more to add every extra
bit of speed to your horses. At this point, you either have to look for less
mature patterns on the periphery (disruption-ready categories) or look for
new base patterns. The category is dead from the point of view of
innovation and marketing. The product/service is end-of-life, and more
importantly, the customer is end-of-life. The pattern cannot sustain
itself. You put both in a hospice, and harvest residual value.

The Milo Criterion


September 23, 2011
There is a saying that goes back to Milo of Croton: lift a calf every day
and when you grow up, you can lift a cow. The story goes that Milo, a
famous wrestler in ancient Greece, gained his immense strength by lifting
a newborn calf one day when he was a boy, and then lifting it every day as
it grew. In a few years, he was able to lift the grown cow. The calf grew
into a cow at about the rate that Milo grew into a man. A rather freakish
man apparently, since grown cows can weigh over 1000 lb. The point is,
the calf grew old along with the boy.
I have been pondering this story for a couple of years, and it has led
me to a very fertile idea about product design and entrepreneurship.
I call it the Milo Criterion: products must mature no faster than the
rate at which users can adapt. Call that ideal maximum rate the Milo rate.
It seems like a simple and almost tautological thought, but it leads to
some subversive consequences, which is one reason I have been reluctant
to talk about it. The most subversive effect is that it has led me to abandon
lean startup theory, which is now orthodoxy in the startup world.
As a consequence, I have mostly abandoned notions like product-market fit, minimum viable product, pivots and the core value of lean. I only use the terms to communicate with people who think in those terms. And I can't communicate very much within that vocabulary.
Physical Products and Services
Commercial airline travel is an example of a service product that
satisfied the Milo Criterion during its evolution. In the early days, the user
experience was not very different from riding a train or bus. Airport
designers modeled their early efforts on railroad stations, leading to a familiar experience for early air travelers.

But modern air travel, which has evolved over nearly a century, is a
very different complex of behaviors that has drifted far from bus and train
travel. Just look at the enormous number of complex behaviors we've
learned:
1. Checking in (online and off)
2. Security checks and rules about carrying liquids
3. Gates and air-bridges that look nothing like railroad stations
4. Checked baggage and hand baggage rules
5. Seat belts and rules about staying seated at certain times
6. Baggage carousels for retrieving luggage
7. Dealing with layovers
8. Online bidding for cheap ticket deals
9. Airport parking and car rental options
10. Duty-free shopping
11. Visas, passports, immigration, customs
12. Rules about when you can use electronic devices
We've been able to get this far successfully because we took our time.
By a happy coincidence, the physical constraints of the technology limited
the rate at which airline travel could evolve.
Another example is driving, which is estimated to involve close to
1500 separate sub-skills. It took us about a century to get to modern
driving, GPS, zipcars and all, starting with horse-drawn carriages.
This sort of long evolution trajectory is generally the case for physical
products and services. They are naturally rate-limited by a variety of
factors, so they tend to evolve and mature in ways that naturally satisfy the
Milo Criterion.
Web Products
Thanks to the lack of physical constraints, Web products can go from
paper napkin to fully realized vision in months rather than decades. They
can evolve at rates that far exceed the Milo rate.

It takes decades to build out airline infrastructure in a country. Even the Chinese government cannot move arbitrarily fast.
For Web products though, there appear to be no real limits, other than
typing speed, to how fast you can build things. And thanks to certain
pathological externalities, they perversely go as fast as they can. In fact
going faster and faster has become the motto of the sector.
Successful Web products do seem to satisfy the Milo criterion though.
I tried applying the criterion to a whole bunch of products, and it turns out
to be a pretty reliable way to sort the two classes. Google Wave fails the
criterion. Google+ satisfies it.
The criterion seems to be descriptive. Is it prescriptive?
Consider modern software development. A set of behaviors have
emerged in the last decade that appear to increase the success rate of Web
products:
1. Starting small and simple
2. Building, testing and iterating rapidly
3. Testing with active users as frequently as possible, starting as
early as possible
It is important to note that these principles were discovered bottom-up
by the practitioners, rather than prescribed top-down by the theorists.
Theoretical codification followed rather than led. So it is possible to
criticize the theories while accepting the empirically validated practices.
I have come to believe that much of the theorizing about why these
methods work is mostly noise. These theories, including lean startup theory, are mostly a set of just-so explanations that serve to motivate
practically effective behaviors, the way religions motivate moral behavior.
So even though the theories lead to the diffusion of useful behaviors,
the flaws limit their potential.
I won't attempt a full critique here but offer a basic axiom for an
alternative theory:

The primary reason these behaviors are effective is that they slow
down the process of software development and maintain the optimal
behavior modification rate for humans.
In other words, the Milo Criterion is not just descriptive. It is
prescriptive. It is the dominant dynamic for successful products.
It leads to alternative explanations for why the effective practices
work. It leads to building blocks that are different from the ones
recommended by lean startup theory.
In fact it is a pretty fertile starting point for a whole different approach
to thinking about entrepreneurship and product development. I've been
developing these ideas, mostly in private, and applying them to my own
business decisions.
Slow Marketing
I don't like being cryptic, but in this case, I am not going to elaborate
further (at least not right now) because the very thought of the tedious and
potentially acrimonious arguments that might result is enough to turn me
off. I don't have enough skin in the game to make it worthwhile. Perhaps I
am getting old and conflict-averse.
So I am not going to share my explanations or alternative building
blocks. In fact, I deleted a couple of much longer draft posts, something I
rarely do, since I hate wasting writing effort.
I wrote this post primarily as a way of saying hello to others who
might already be thinking along the same lines I am. If you are, chances
are the Milo Criterion will spark some productive thinking for you. If not,
at least you learned the story of Milo of Croton, for use at cocktail parties.
I will share one more clue: I've started calling my developing theory "slow marketing." Read into that what you will.

Ubiquity Illusions and the Chicken-Egg Problem


September 29, 2011
I enjoy thinking about chicken-and-egg problems. They lead to a lot
of perception-refactoring. Some common examples include:
1. You need relevant experience to get a good job, you need a good
job to get relevant experience.
2. You need good credit to get a loan, you need to get loans to
develop good credit.
3. You need users to help you build a better product, you need a
better product to get users.
This post is about one particular way to solve the problem, using what
I call a ubiquity illusion. It is one version of what is colloquially known as
the fake-it-till-you-make-it method.

Creating a ubiquity illusion is the most readily available method for solving a chicken-egg problem. It is, to be perfectly honest, not the best
method. There are other methods that are superior, but they are generally
not available to most people.

Ubiquity illusions are like the sculpture above (The Awakening, by J. Seward Johnson, photograph by Ryan Sandridge, Creative Commons 2.5
Attribution). It is actually five separate pieces strategically buried to give
the impression of a much larger buried sculpture, of which three are
visible above.
Let's talk magic.
A Note on Redacted Prescriptions
Last week's post, The Milo Criterion, generated some criticism that I was being uncharacteristically coy, bordering on concept baiting. A commenter on Hacker News grumbled that instead of offering refactored perception I had basically provided a "redacted prescription."
I really like the "redacted prescription" phrase, so I am going to steal it. Instead of completely self-censoring the broader thinking behind last week's post as I'd originally planned, I'll offer bits and pieces of a larger redacted prescription, as and when I am able to carve out relatively uncontroversial chunks. Since I'll be deliberately withholding key pieces,
what comes out is going to look somewhat random and uncorrelated, but
each post should make sense as a stand-alone post.
Frankly, it's not just the lean startup world that I don't want to
needlessly antagonize. Other thoughts I am working out are likely to be
viewed as me spoiling for a fight with other groups I have no interest in
antagonizing.
But there is at least a handful of ideas that I think I can write about
without inviting flame wars. This is one such idea.
Slowly, Painfully, Unfairly or Untruthfully
Chicken-egg problems combine a positive-feedback loop problem
with a logical paradox involving two primitive categories that form a
duality (chicken and egg).

The first feature implies that there will be an iterative element in the
solution.
The second feature implies that somewhere along the way, you're
going to have to question implicit assumptions, frames and definitions of
the primitive elements. Like Einstein said, you aren't going to solve the
problem at the same level that you encountered it.
For example, in the job/experience loop, you can question the
atomicity of the definition of a job (work for pay) by pondering such
constructs as unpaid internships that loosen the notion of what a job is,
allowing you to trigger the positive feedback loop.
Stated in a general form, the chicken-egg problem is: how do you get
X, when you need Y to get X, and X to get Y?
There are at least four correct answers:
1. Slowly
2. Painfully
3. Unfairly
4. Untruthfully

Slowly is my favorite answer, and is also the answer to the literal biological question. I am not a biologist, but I assume the chicken-egg
reproductive process evolved very slowly and fuzzily from older
reproductive processes, until at some point recognizable and differentiated
chicken-like and egg-like elements emerged in the positive feedback
reproduction loop with some early reptile species. This is my favorite
solution.
Painfully is about using money or brute force to hunt for the rare
chicken that did not come from an egg, or the rare egg that did not come
from a chicken. If you look hard enough, you might find a good job
without showing evidence of experience. If you make enough pitches, you
might find the one customer willing to make a first-lemming-like leap of
faith and adopt a new product without social proof. Many chicken-egg
problems that are constructed out of relatively arbitrary primitives (such as
the job and experience pair) are solvable this way if you have enough energy. The more truly atomic the primitive categories are (more like real
chickens and eggs), the more painful this process is. This is my least
favorite solution. This is also the most widespread solution people attempt.
Unfairly is the cheapest and fastest solution, if it is available.
Somebody might just give you a chicken or an egg. Daddy might pull
strings and get you a job. You might have incriminating photographs of a
banker that allow you to get a loan on suspiciously good terms with no
credit or collateral. But not all unfair advantages are sleazy or nepotistic
advantages. Included in the general category of unfair advantage is
everything that falls under the umbrella term strategy. My definition of
strategy in Tempo basically boils down to unfair advantage. Anything
from privileged information to exclusive access to a key distribution
channel, to owning the rights to a key invention, counts as an unfair
advantage and a basis for strategy. This is my second-choice solution.
Untruthfully, or the fake it till you make it solution, is my third-choice
solution, but the one I want to talk about today.
My clients often ask me
If you want your metaphoric t's crossed and i's dotted, the solution we
are talking about is: fake the chicken while the egg incubates.
There are many ways to do this, most of them both stupid and illegal.
For instance, you could doctor your resume and make up fake letters of
recommendation.
A ubiquity illusion is a much more subtle mechanism, and in most
cases, is not illegal.
The simplest example is using "we" to refer to a business that is really just yourself or a partnership of two people. By concealing some information, and with enough self-confident copywriting, you can convey the
impression that your business is much bigger than it is.
A slightly smarter example is any sentence that begins: "My clients often ask me."

Without actually revealing how many clients you have, you've conveyed an impression to the gullible that you have a thriving business
with many clients, and that the clients are adoringly hanging on your every
word and pestering you with questions.
Such statements are often used by struggling new consultants. If
you're like me, you immediately look for some sort of proof behind the
bluster, like a page with actual testimonials from a large roster of live
clients, or a private email to someone able to verify credentials.
For the record, I've closed exactly three paying clients since I went free agent 6 months ago. Not enough that I can legitimately begin any sentence with "my clients often ask me." At best, I am at the "one of my clients once asked me" point (there is also no "we" to my consulting practice; it's just me. But on the plus side, if one of you refers just one more client to me, I'll register a staggering 33% growth in my clientele
this quarter).
On the other hand, since this blog is past the chicken-egg phase (I
solved the problem slowly, as I am now attempting to solve the problem
of growing my consulting business), I can quite honestly make many
statements that begin "my readers often tell me."
Not ask me; you guys tell me things a lot more often than you ask me
things, and usually, unless I ask a specific question, you tend to email me
to tell me I am wrong about something or to educate me on some
advanced subtlety that I've missed (damn know-it-alls).
Ironically, it is one of the things many readers have told me
(genuinely many, since I've lost count) that led me to more subtle ubiquity illusions beyond "my clients often ask me" bravado.
The Three Contacts, Three Media Rule
My favorite question to ask readers is "how did you find ribbonfarm?" It is what passes for market research in this one-horse
operation.

About half the time, the answer is "that post about The Office," which
makes me groan silently, but the other half of the time, the answer I get is
something along these lines:
"I think a friend forwarded some post to me once a while back, but I didn't really start reading regularly until I was searching for something and one of your posts came up."
The media often differ (tweets, Facebook, email forwards, party conversations, workplace conversations, Google searches) but the
pattern is usually the same: new readers encounter ribbonfarm at least
twice in two different ways before turning into regular readers.
In the cases where the two media initially appeared to be the same
(for example, two email forwards), it usually turned out that the context
differed: one forward from a coworker and one from a family member, for
instance. I buy the theory that in social media, the actual media are
individual people (the billion-channel marketing theory), so with some
overloading, you can call this the two-contacts-two-media rule.
On my 7-week road-trip across the country over the summer meeting
readers (you guys clearly aren't hanging on my every word to the point
that you hunt me down and interrupt my life; I have to run around hunting
you down, interrupting your lives), I collected many examples of the 2-contacts-2-media rule.
This curious phenomenon reminded me of a classic rule-of-thumb in
sales: to prime a prospect for a close, you need to first prepare them by
engineering three contacts via three media. For example, a face-to-face
encounter at a conference, a passing mention in an innocent-seeming
email exchange, and perhaps a referral from a friend at a party.
The two vs. three distinction is mostly irrelevant (it has to do with
online versus offline dynamics), but the key point is that you've got a deliberately engineered process that looks like the natural process, resulting in an acceleration of the selling process.

Ubiquity Illusions
What you need to fake is ubiquity. Faking ubiquity is about faking
social proof.
If something is ubiquitous in a given environment, you will naturally
encounter it in somewhat random and uncorrelated situations. The
randomness and uncorrelatedness is critical. The Amazon Kindle is a
perfect example of a product that spread via genuine ubiquity. After first
hearing about it on technology news sites, I didn't actually buy it until I'd spotted it "in the wild" at a couple of different coffee shops. I doubt
Bezos planted them.
You need the randomness. Seeing a bear at the zoo does not lead you
to suspect that bears are common in the area, but seeing one randomly in a
public park would lead you to that suspicion.
The multiple encounters must also be uncorrelated. Seeing two bears
in the same zoo means nothing. But seeing two bears loose in different
city parks will confirm your suspicions that bears are running wild in the
area.
The reason ubiquity illusions work is obvious: you are basically
gaming human pattern recognition instincts.
In fact, in some cases, you don't even need to run around planting fake
random-and-uncorrelated signs in the environment. Since ubiquity usually
goes along with oversubscription of the producer of the ubiquity, you can
get away with just planting signs of oversubscription. Ubiquity illusions
and oversubscription illusions are two sides of the same coin.

Get people to call you while you are meeting a new client.
Plant a few friends at a party and walk around graciously shaking
hands, faking Big Man on Campus.
Pay people to stand in a line outside your new coffee shop.
Accidentally flash a view of your packed calendar while setting
up your laptop for a presentation.

Run an artificial-scarcity beta-invite process for your new software product.

Of the two approaches, I prefer ubiquity illusions. They are harder to manufacture than oversubscription illusions, but are more robust,
adaptable to many marketing situations, and the actions you need to take
are closer in spirit to slow, natural diffusion.
Suggesting Deep Structure
The basic three-contacts-three-media approach is enough to create a
basic illusion of abundance, but there is a reason this particular technique
is usually restricted to sales operations. It just inflates your apparent size
in limited ways.
This is a case of sales doing a very limited amount of local marketing.
To go beyond the basics, you need to think about what ubiquity
illusions look like from the point of view of the subject of such an illusion.
I'll stick to the special case of growing markets for products and leave
other chicken-egg problems like job hunting and getting loans for you to
figure out on your own.
The key is to use the random-and-uncorrelated evidence from the
basic ubiquity illusion to suggest deeper structure. Make them slightly less
random and more correlated to suggest something other than a diffuse
sense of larger size.
In the case of the sculpture, The Awakening, if you are looking from
sufficiently far away, you'll immediately manufacture the theory that there
is a buried statue, with only the head, hands and feet showing. In other
words, given apparently random and uncorrelated perceptions relating to
the same thing, we try to manufacture the simplest theory that can connect
them.
This tendency to theorize is susceptible to suggestion. In particular, it is susceptible to suggestions of archetypal superhuman agency: in other words, our tendency to see mysterious gods in randomness.
A classic example (which I think I first read about in Schelling's
Micromotives and Macrobehavior) is a strategy used by highway police to
enforce speed limits.
In the contest between speeding drivers and a small police force, the
latter are at a serious numerical disadvantage. They cannot be everywhere
at once or catch and ticket every speeding driver.
Random enforcement is also insufficient, since encounters with
pulled-over vehicles might be too rare to reinforce a fear of speeding.
One strategy that works is a concentrated, unpredictable and very
visible enforcement drive in a few locations. By issuing a burst of
speeding tickets over a few days in a very visible way, you can reinforce a
reluctance to speed in a large population. A reluctance that will likely
persist until your next (unpredictable) burst of reinforcement. You have
manufactured a very specific kind of ubiquity illusion: that you could be
waiting to pounce anywhere, anytime. Or more precisely, that you can be
anywhere, anytime, with a much higher probability than you actually are
(geek aside: I vaguely recall a bunch of machine-learning papers about
techniques to speed up learning algorithms using such deliberate bias in
the training inputs).
Drivers overestimate the likelihood of getting caught, and behave as
though the police were anywhere/anytime superheroes.
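To make that geek aside a little more concrete: here is a minimal sketch of the general idea, plain importance sampling of a rare event. The numbers and names are illustrative, not taken from any particular paper.

import random

# Illustrative sketch: estimating the probability of a rare compound event
# converges much faster if the sampler is deliberately biased toward producing
# that event, with each sample re-weighted by its likelihood ratio so the
# estimate stays unbiased (classic importance sampling).

P_TRUE = 0.1     # true per-trial probability of a "bad" outcome
P_BIAS = 0.5     # deliberately biased sampling probability
TRIALS = 20      # trials per episode
THRESHOLD = 8    # rare event: at least this many bad outcomes in one episode

def estimate(n_episodes, p_sample, seed=0):
    """Monte Carlo estimate of P(bad count >= THRESHOLD) under the true
    distribution, while actually sampling each trial at p_sample."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_episodes):
        weight, bad = 1.0, 0
        for _ in range(TRIALS):
            if rng.random() < p_sample:
                bad += 1
                weight *= P_TRUE / p_sample              # ratio for a bad trial
            else:
                weight *= (1 - P_TRUE) / (1 - p_sample)  # ratio for an ok trial
        if bad >= THRESHOLD:
            total += weight
    return total / n_episodes

if __name__ == "__main__":
    print("unbiased sampler:", estimate(5000, P_TRUE))  # rarely sees the event
    print("biased sampler:  ", estimate(5000, P_BIAS))  # sees it constantly

With the unbiased sampler the event turns up only a couple of times in 5,000 episodes, so the estimate is mostly noise; the biased sampler sees it constantly, and the re-weighting keeps the answer honest.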
I don't know if cops add this twist: for the illusion to be really effective, they shouldn't restrict such isolated enforcement efforts to
predictable locations like busy highways. They should also target medium
and low traffic locations, suggesting that they have enough numbers to
even cover unimportant, low-traffic locations.

Ghosts, Godfathers and Gorillas


The traffic-enforcement case is an example of what I call the ghost
archetype. You've signaled a deep structure: a ghostly ability to strike anywhere, anytime. Guerrillas also practice this model.
Another archetype you can project is the Godfather archetype. Here
you create an illusion of a vast hidden capacity for influence by projecting
varied and indirect signs all over the place. Where a pulled-over car is
basically the same scene everywhere, with the agent (the cop) being
directly visible, the actions of a Godfather are varied and indirect.
Consider the typical signs of mafia activity in a city (and here I have
to rely on movies and The Sopranos). In criminal prosecutions, you have
key witnesses suddenly recanting testimony, judges who
you thought were inclining in your favor mysteriously changing their tune,
key evidence disappearing. More broadly, in a city, you have suspiciously
profitable restaurants and strangely counter-intuitive business dynamics in
sectors like construction or garbage collection.
Unlike the ghost illusion, which is basically one-dimensional, a
Godfather illusion suggests a complex, highly intelligent, and powerful
hidden entity orchestrating affairs in hidden ways, with a capacity to
influence anything, anywhere, even in places you thought were out of
reach.
The scene in the Godfather movie where the movie producer finds the
severed head of his favorite horse on his bed is a great illustration. The
poor producer had his false sense of security shattered: the mafia was
capable of reaching deep into his privileged upper-class life. It wasn't just
about street thuggery.
Besides mafia dons, others who project Godfather illusions include
anyone projecting a Big Man on Campus, It Girl or Cool New Kid on the
Block presence.

Hot Silicon Valley startups that have everybody crazily scrambling for beta invites are an example. The beta invite infrastructure is just the tip
of the iceberg of perception management activity. You can orchestrate an
entire illusion of hidden connections, powerful people trying to get in
early, and hidden mystery that not everybody is privy to. For the special
case of the Cool New Kid on the Block, you can signal an enigmatic
arrival by concentrating the hype buildup in time. The sign that you are
succeeding is the comment, "hey, lately I've been hearing about you everywhere."
It's happened to me without orchestration on a couple of occasions,
but the effect can be precisely engineered. The game of doing this
engineering is known as PR. PR people are the special forces in the world
of marketing.
Finally, a third major archetype is the Gorilla.
Here, you don't convey the "can be anywhere/anytime" illusion (ghosts) or the "hidden and intelligent power structure of unknown reach" illusion (Godfather). You simply convey sheer, overwhelming size. You
are everywhere, all the time.
In particular, you try to not convey intelligence. In fact, you attempt
to convey an impression of slightly blundering stupidity.
The classic symptom is use of the phrase "800 lb Gorilla." The key
tactic in achieving this is local and visible homogeneous saturation. The
homogeneity is what conveys the impression of slight stupidity: it suggests
you have so much power you don't have to be particularly thoughtful
about how you spend it.
A great example is TD Bank. When I set up a business bank account, I
did some online research that showed that TD Bank was generally well
regarded by small business owners. Driving around on the East Coast, I
found TD Bank ATMs and branches all over the place. Their slogan,
"America's Most Convenient Bank," added to the illusion of ubiquity. So I
signed up.

Then I moved out West to Nevada and discovered that TD Bank is merely the most convenient bank in the Northeast with almost no
presence elsewhere. By saturating one market in a very visible and
homogeneous way, and with an exaggeration in their slogan, they
managed to convince me that it would be a good bank to go with,
nationally. Unfortunately, I didn't bother to check their nationwide branch
distribution. Fortunately, they are still a pretty convenient bank for me
since I mostly need online banking. I am not too annoyed by them, since it
was partly my fault for not doing more research.
The study of ubiquity illusion archetypes is fascinating and you can
spend hours archetype-spotting. The three major ones, ghosts, godfathers
and gorillas, are just three relatively obvious ones. You can spot these,
and other species, everywhere. I'll just point out one particularly rich
source of ubiquity archetype examples, the recent Christopher Nolan
Batman movies:
1. Wayne Enterprises (good 800 lb Gorilla)
2. Commissioner Gordon (good Godfather)
3. Batman (good Ghost)
4. Ra's al Ghul (evil 800 lb Gorilla)
5. The Mob (bad Godfather)
6. The Joker (bad Ghost)

A little genealogical note: the modern Gorilla/Godfather/Ghost trinity is descended from medieval political structure (roughly, Robin
Hood/Sheriff of Nottingham/King).
Is this post a ubiquity illusion?
In the Hacker News thread about the Milo Criterion, another
commenter wrote: "I can't help if there isn't something meta going on, slow marketing and all."
With this post, I am sure you are going to start entertaining the
suspicion that my "slow marketing" phrase is itself a ubiquity illusion; me pretending there are deeper and more radical ideas here, behind a "redacted prescription," than I am letting on.

Am I faking it?
Hee hee hee! (that's my slightly evil laugh)

The Seven Dimensions of Positioning


September 21, 2010
I don't like reinventing the wheel, so for months now, I've been trying
to reconcile everything I know about traditional business (think Peter
Drucker and the Harvard Business Review) with all the seductive ideas
I've been learning from the Lean Startup movement (and I'll admit I am
simultaneously attracted to, and wary of, those ideas). Some instinct led
me to focus on a single word: positioning.
It seemed to be the key word, and I think my instincts were correct.
I've concluded that positioning, defined in a 7-dimensional way, is the
single most important word in business. So what is positioning? To
paraphrase Marc Andreessen, it is the only thing that matters. It is the
controlled, but not deterministic, crossing of a threshold beyond which the
business suddenly seems to come alive and work. The emotion changes
from depressed to excited. The energy changes from languid to explosive.
The rhythms change from weak and uncertain to harmonious, vigorous
and steady. Positioning happens when a business has an Aha! moment,
and discovers identity, profitability and sustainability. The business has
found its groove and tempo (the business word for tempo is clockspeed). Positioning involves throwing seven firing switches from the Off to the On position, and all 7 cylinders firing steadily enough that anyone in the
business can take a real vacation without everything going to hell.

Seven is not an arbitrary number. I looked hard and that's all I could find. I'll tell you about two that didn't make the cut later. Each of the 7
switches, if it causes successful firing, induces an S-curve (if not, you get
a peak and collapse).

If the S-curves are clustered close together in time, you get one big
Aha! Otherwise you get a series of smaller Ahas! All 7 must be switched
on. Otherwise you'll get a change in emotion and energy, but not a true business positioning. The characteristic sign is that you get a frenzied, high-anxiety, manic energy tempo instead of a harmonious, vigorous and steady tempo. I call the former the fire alarm situation, and it will collapse if it isn't corrected. Steady rhythms are a sign that you are in a predictable place. So let's explore the seven dimensions of positioning and see if there's anything useful to be found.
1. Marketing: Positioning as Hole-in-the-Head
Positioning in marketing is Al Ries' classic theory of marketing. People's heads are overstuffed. The only way to get in is to associate yourself with what's already in there. Avis' "We're No. 2, so we try harder," and "The Uncola" are examples. These either fill a hole in the head (creneau, to use Ries' French term), or reposition an incumbent to create a
space for yourself. Nyquil created a position against strong incumbent
cold remedies by turning their 24-hour nature against them. There was a
creneau that could be created for a night-time cold remedy.
What happens when you get this right? Simple. An anemic demand-driven business turns into an overbooked supply-limited business. This is
what Drucker meant when he said the job of marketing is to make sales
superfluous. One killer positioning concept smoked up by some Mad Men
can bring the business to you, so you don't have to pound the pavements. Your selling costs shrink spectacularly. (Aside to readers who've been
demanding I watch and write about Mad Men: I finally caved and
watched the whole series to date on DVD over the last few months; thank
you all for a great recommendation, and stay tuned.)
Marketing positioning is not the same thing as finding a repeatable
sales road map in the sense of customer development inside lean startups.
Yes, you still have to be agile, and pivot, and get outside the building. But
you don't use the customer development model, which is optimized for
sales-led discovery. You focus on typical marketing things like finding
good names and taglines. If you talk to potential customers at all, you do
so in different ways, to find a creneau. You look for inspiration in pop-culture trends. If it works really well, you may not have to do the sales
pavement-pounding and hypothesis testing at all. At the risk of losing half
of you, here's the football metaphor. Customer development is a rush
offense. One yard at a time. One problem or product presentation to a customer/group at a time. Effective marketing positioning is a Hail-Mary passing offense. Touchdown in one pass if you are lucky and skillful.
For ribbonfarm, I did no customer development, hypothesis testing or
anything of the sort. I just wrote whatever the hell I liked. Then the
Gervais Principle happened. Now ribbonfarm is positioned as a blog
selling a certain darkly-humorous, realist, dystopian view of life, the
universe and everything. Marketing positioning and luck, not customer
development.
Getting marketing positioning right is at once liberating and
confining. In the case of ribbonfarm, it is liberating because the blog is
operating cash-flow positive (not counting my time, which I view as
ongoing in-kind capital infusion). It is also enough of a believable
insurance policy that I think I could make a living off it if I had to.
Constraining because ribbonfarm now means very specific things to
readers. Now if I want to experiment outside this core, I'll need a different
blog and brand. But within this core, my marketing costs are near zero.
The Be Slightly Evil list is a natural line extension, but not a brand
extension. With near zero additional marketing, and ONE email to a few
readers counting as customer discovery, I was able to launch it. And in
less than 3 months it is already getting close to 500 subscribers. But I
could not have done this if I'd wanted to build a non sequitur or dissonant
brand off the ribbonfarm base, like a blog about inspiring quotes or great
shopping deals.

2. Operations: Positioning as Rapidly Improving Margins

Chronologically, this notion of positioning came first, with BCG's pioneering role in the strategy industry (a long story I've told elsewhere),
and focus on the fact that market leaders grow rapidly, learn and drive
down cost curves, setting a pace that followers cannot keep up with. At
heart of it is an accelerating trajectory of increasing margins, generating
growth money, leading to more revenues at better margins, a virtuous
cycle that leaves competitors far behind until you are the entrenched low-cost leader. Only true disruption (item 7, wait for it) can displace you.
Until then, others can fight over your table scraps at the margins. This is
the growth curve you get to after Crossing the Chasm. This is a
positioning problem start-ups rarely have to solve, since principals often
exit before they are forced to solve it. You can get roaring rivers of
revenue and still bleed margins for a long time.
This is also not the free-cash-flow positive threshold. It is the
accelerating margin improvements threshold. You can limp along with
razor-thin margins for quite a while and call yourself cash-flow positive,
but until you hit this phase transformation, your position is very shaky
indeed. Specific things happen to trigger this phase transformation. Startup
types think of it as mechanical (introduce big company systems and processes) but there's a lot more. You have to find the artistically right
kind of systems and processes that can put you on the accelerating margins
trajectory. For Zappos, for instance, it appears to have been the decision to
move away from drop shipping. So it is not a matter of just hiring a few
bureaucrats to create some tedious forms. Big companies know all about
this transition. I've done work on this dimension, but unfortunately it isn't
work I can talk about publicly.
The fully-refined version of this gets you the classic positioning
model of Michael Porter (the five forces model). Practitioners like to call
it strategy but it doesn't deserve that lofty term. It's operations they are
talking about. Very useful nevertheless.
In BCG Growth Share Matrix language, the switch gets thrown when
an uncertain wildcat (or question mark) business suddenly turns into a
Star (moving from the top-right to the top-left quadrant). From here you
can drive down costs faster than competitors can, and move the business
into a relatively unassailable high-margin cash cow position.
3. Sales: Positioning as a Pain Point Relief

If you plow through the Lean Startup material, you'll find that the entire customer development process hinges on one crucial decision: you only go after a small subset of early customers who a) have a problem you can solve, b) are aware that they have a problem, c) are actively shopping for a solution, and d) are actually improvising temporary solutions.
This is a "customer in pain," as it were. Product-Market Fit (PMF) in
this narrow sense relieves a pain for someone. Focusing on customers
in pain is a very specific way to find a market.
In an earlier Drucker-inspired article, I defined a customer as a novel
pattern of human behavior based on Drucker's notion of customer
creation. Creation is expensive, but it can be done. But in CD-driven
businesses, you don't create this novel pattern so much as you recognize it
in the wild and then offer a less painful substitute. This is significantly
cheaper, which is why it is so popular in the startup world.
It is a slightly worrying metaphor, but I like it: in customer
development, you domesticate a wild customer.
Here is my example. I was the first employee at Sulekha.com, after
the two founders, 10 years ago. Today, it is sort of the Craigslist-plus-Facebook-plus-Fandango of India. I witnessed (and, in modest ways,
contributed to) the PMF phase change, when we found our first strong
revenue model (online ticket sales). And yes, the script ran exactly as the
lean startup people describe it, with pivots and everything. We just used
different language to talk about what was happening.

4. Engineering: Positioning as Killer App

Everybody hates us engineers when it comes to the business side of things. Even engineers themselves, when they move over to the dark side, have a tendency to speak disparagingly about the narrow mindset they've left behind. I've done the leap, but I don't do the disparagement. For
positioning to work you also need an engineering switch to fire: from
platform concept to killer-app.
Visicalc is everyone's favorite example of a killer app. Killer App is primarily an engineering dimension of positioning. Engineers, like
mathematicians, are lazy. They like to generalize and come up with
powerful solutions that can do lots of things. This generality is what
ultimately creates value, otherwise we'd be living in a flood of what Alton
Brown (in the context of kitchen equipment) calls unitasker products.
But a journey of a thousand apps must still begin with a first app.
The story repeats itself all over the place. Walk through this trail of
killer apps to see more examples (Atari and Pong, Nintendo and Mario
Brothers, Gutenberg's Press and the Bible, and many more).
Brad Feld has labeled "platform" the annoying word of 2010. He correctly notes that you cannot build a platform, any more than you can make a viral video. The best you can do is build a platform-intent product
or service, or a viral-intent video. But platform-intent thinking is crucial.
Otherwise if your first and only application idea fails, well, you're
screwed. Nor will a generic multi-tasking minimum-viable product do
the trick. That gets you a Swiss Army knife. That still has only one shot at
success. You don't just want a multi-tasker product. You want multiple
cheap shots at making an application catch on.
Once you ask the question "minimum viable product that does WHAT?" you'll see why Killer App is a useful separate term. It is that last 20% of
the engineering that brings in 80% of the value. First you build a
minimum-viable platform, and then you start doing several 20% stabs to
find your first killer app. Each stab is a minimum-viable product
hypothesis, but each stab is not necessarily a full repositioning or pivot.
Think of a startup as a new PC, and each MVP stab as a half-assed app
like Microsoft Works. If you find that a lot of people are using Microsoft
Works, well, go ahead and build and sell Office. That's your killer app. But if it doesn't work, you shouldn't have to retool 100%. Only 20%.
Most high-value engineering products turn out to be platforms with
applications. So platform-intent is the right strategy. Unitaskers, such as
combs or toothbrushes, are rarely enough to build a business (unitaskers
are usually made by companies that maintain portfolios based on
similarities in manufacturing or service delivery processes).
But don't let the word "platform" intimidate you. A platform does not
have to be as complex as an operating system or a new fighter plane. A
knife is a very simple instrument, but it is a platform in the kitchen
because it can do so many things. The killer app turned out to be
chopping, but it can still do some mean squashing, stirring, serving and
spatula-ing. Some caveman or cavewoman probably started the search for
a business model with a stick, and figured out that sharpening one edge
created the first killer app. Pun intended.
Note: there are two engineering styles, which I call "vertical first" (the
first app comes before the minimum-viable platform) and "horizontal
first" (the other way around). I think both can work, but the risk-benefit
tradeoff does favor at least some platform work upfront, in my opinion.
Pure vertical-first too easily leads to a series of narrow visions, none of
which is worth much.
5. Public Relations: Positioning as Brand Socialization

While a pure marketing brand can exist just as a service or product,
entrenched and strong brands also become part of the society within which
they live. Levi's is not just a famous (and now trashed through
mismanagement) brand. It is part of the story of the American West. Ford
stars in the story of American ingenuity, with its role in the growth of the
assembly line. The Tatas are the story of industrialization in India. The
East India Company is the story of 17th Century Britain.
PR is the difference between a strong marketing position for an
unsocialized brand and a socialized brand with a role in the grand
narrative of its host society. The story doesn't just happen. But it can't be
created in controlled ways like advertising either. You have to scan for
sparks of genuine social integration in the environment and pour fuel on
them. Volkswagen's ongoing "punch-dub" series of commercials is an
attempt to do exactly this: talk up something to do with VW customer
culture. I am not sure it will work though, because this is a case of trying
to make marketing do PR's job. PR is essentially a hidden and delicate
backstage influencer activity. You are trying to co-opt a story that's
already out there, in service of your brand. Many people have a stake in
that story, so at best you can influence the story, not tell it. VW may
regret its punch-dub series of commercials. It may have killed the golden
goose: now, I bet, people who play the game might want to stop. If, on the
other hand, VW had spent its money on a grassroots word-of-mouth
campaign around the punch-dub game, a lot more could have happened.
Groundswell has several great examples. I could be totally wrong on this
one. Only time will tell.
Aside: this is why the new continent of social media has primarily
been colonized by PR people. The marketing and sales people are talking a
lot about the potential, but it is PR people who are making the medium
work for them. Good marketing talks more than it listens. Good sales
listens more than it talks. Good PR strikes a conversational balance. Social
media is fundamentally friendlier to PR than to either sales or marketing. In
the past, companies had to have either marketing or sales cultures. You
could not lead with PR. Today you can. This is especially true because
rank-and-file employees can be turned into a PR army. To use them in
marketing means cheesy employee photos in brochures. Using them in
sales means sales people bringing customers in for insider visits.
Though word-of-mouth can work for sales (forwarding discount
coupons/referral/lead generation schemes), marketing (contests, viral
videos) or PR, it works best for PR.
This is where the classic reading of the Google origin myth gets it
wrong. The story goes that Brin and Page, when told they had to choose
between a marketing or a sales culture (and this is engineering
braggadocio pure and simple), chose to create an engineering culture
instead. This is wrong on two levels. First, it is a three-way fork today, not
two-way, and Google is a company built on effective PR. "Don't be Evil"
and stories about great buffets (and ironically, the story of Brin and Page
choosing an engineering culture) are basically the core of a PR
socialization narrative (how many people know Google's marketing
tagline of "organizing the world's information"? Or have encountered its
AdSense/AdWords sales face?). Second, culture isn't yours to choose.
Your business model completely determines it, and it will always be a
culture driven by a customer-facing function. More on that later.

6. Finance: Positioning as Pricing Sweet Spot

You didn't think the bean counters would have nothing to say, did
you? Pricing confuses a lot of people because they think it is some sort of
objective, if inexact, science. The most naive people think: if only I had
perfect information and could construct my demand/supply curves,
identify my substitutes and measure elasticity, I could price this thing
perfectly to maximize earnings.
Wrong. Economics constrains, but does not determine, pricing design.
Economics will make you crash and burn if you get it wrong, but it won't
tell you how to get it right. It'll just create a canvas. Getting the pricing
model right is a positioning switch in its own right.
Creative finance people know that pricing is a positioning art. There
are many famous products that made it via the right pricing strategy.
Gillette (cheap razors, expensive blades), Xerox (originally, lease the
copier, sell the toner) and Netflix (no late fees) are examples. And of
course the whole world of $0.99, $19.99, introductory prices, artificial-scarcity
limited editions, and the like are all pricing design ideas.
The entire cloud computing sector is driven by a pricing idea: pay-by-the-sip
$0.10 offerings for enterprises that are used to paying by the million.
To innovate in the cellphone market, pricing should be your top concern.
I recently tried myfooddiary.com (a great calorie counting tool) for a
couple of weeks. They advertise $0.29 a day. Not the equivalent $8.70 a
month. Why? Monthly subscriptions are better, right? No. This has to do
with the psychology, calibration points and money metaphors at work in
the prospect's mind. See my "Fools and their Money Metaphors" article.
Calorie counting is a daily activity for dieters. Health and fitness run on
daily-tempo mental models. The most effective pricing models are likely
to be daily. That way you can compare the price to other daily health/nutrition
expenses like food purchases. Gyms would do well to shift to a daily-price
advertising model. A $90/month gym membership is a $3/day
membership. So I know that it costs me about as much to ruin my healthy
day with a slice of pizza as it does to redeem it with a workout. Why would
you want me to think about my gym membership with a mental model that
contains things like rent checks and phone bills? If some gym uses this
daily-price advertising idea, I demand a royalty!
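The conversion itself is trivial arithmetic; all the leverage is in which frame you advertise. Here is a minimal illustrative sketch (mine, not from the original article; the 30-day month is a deliberately rough calibration point) of the same price expressed on both tempos:

```python
# Minimal sketch: one subscription price, expressed on a daily vs. a monthly
# tempo. Numbers match the examples above; the 30-day month is an assumed,
# rough calibration point.
DAYS_PER_MONTH = 30

def daily_to_monthly(daily_price: float) -> float:
    return round(daily_price * DAYS_PER_MONTH, 2)

def monthly_to_daily(monthly_price: float) -> float:
    return round(monthly_price / DAYS_PER_MONTH, 2)

print(daily_to_monthly(0.29))   # 8.7  -- the "$0.29/day" calorie counter
print(monthly_to_daily(90.00))  # 3.0  -- the "$90/month" gym membership
```

Same price either way; the only thing that changes is the mental model it lands in.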
Money metaphors are complex beasts. Entrepreneurs think with the
entrepreneurship (capitalist) metaphor. But to sell stuff, you must think
and talk within the customer's active metaphor. Get it right, and the
pricing cylinder fires.

7. Innovation: Positioning as Disruptive Breakthrough

Disruption theory is the most fundamental explanation of
differentiation. It is an innovation model, and while it can seem very close
to engineering, it isn't. Innovation can come from a platform-creating
scientific breakthrough, but it can just as easily be an enabling
breakthrough along any of the other 6 positioning dimensions. It may be
technically major or trivial (or to use the correct terms, radical or
incremental), but you won't know what it enables until after it has
happened.
Two conditions have to be met for disruptive breakthrough. First, an
innovation is disruptive because the place it is born is not the place it
can grow. So it needs to be transplanted into a new business unit run by a
logic within which the idea is sustaining. Second, you need a grow-in-peace
peripheral position next to a major disruptee market, where you are
too small for the incumbent to pay attention to, but too big for it to kill once you
gain traction. If you don't do the first, the business is stillborn. If you
transplant, but there is no big disruptee market, you create a small niche
business. But if you do both, you can get breakthrough.

The theory of disruption is highly evolved, and the relevant phase
change happens when your adoption S-curve crosses an older one. Read
my primer if you are not sure about what disruption means (and most
people who use the term without having read Clayton Christensen's book
don't know what it means, but think they do).
Is every new business disruptive? Is this an optional switch? I'll leave
that for later.
What Really is Business Strategy?
The 7-dimensions model allows you to view the essence of business
in a very simple way. It is a matter of turning 7 switches to the "On"
position, and hoping the corresponding cylinder fires. If you're lucky,
you'll start out with one or more of the cylinders already firing. If not,
you'll have to keep trying each switch till all cylinders are firing.

Are there more than 7 switches? I thought about this really hard,
especially about two very attractive candidates for an eighth switch: a
"culture" switch (going from an inchoate culture of random types of
people to a distinctive one) and an "ecosystem fit" switch (where the corporation
is socialized into a supply chain).

After much thought, I gave up on culture. A distinctive culture is an
outcome, not a control variable. How you throw the 7 basic switches
determines what a corporate culture looks like. Equally, when a culture
seems to be going wrong or toxic, it is almost certain that one of the 7
basic cylinders is misfiring, and the switch has been reset to "Off." I think
if you try direct cultural design rather than hiring against your 7-switch
needs, you are asking for trouble. And once a culture has emerged, naming,
codifying or ritualizing it is a very dangerous game. All you can do is try
subtle things to not screw up a working culture, and to protect it from too
much toxic disruption. At the same time, you shouldn't protect it too
much, otherwise the culture will ossify, and when the business
environment makes a particular cylinder misfire, the culture will lack the
ability to adapt.
The last candidate is ecosystem fit. Normally, this would be part of
operational fit (strong, effective and mutually beneficial supplier and
distributor relationships are a big part of switching from Wildcat to Cash
Cow). But there is a difference between inside-the-corporation fit, as
processes stabilize and fit into a jigsaw puzzle, and outside-the-corporation
fit, as a vertical or horizontal integration structure emerges in
a sector. But overall, I don't think this is a meaningfully separate
distinction with separate legal control variables. Antitrust laws see to that.
When these laws can be bent or broken without consequence, or the
government gets involved, then you've got an eighth switch. Ecosystem-fit
design is therefore just a part of organization design. Where you draw the
boundary of the organization is a somewhat arbitrary legal issue.
That's it for now. This is Part I of a two-part article (the whole thing
was starting to weigh in at over 6000 words, which I've decided is too
much even for me, so I split the idea into two parts). I'll
finish and post Part II if people like this one. Call this the MVP of a
potential series.

Coloring the Whole Egg: Fixing Integrated Marketing
October 20, 2010
Three kids are selling lemonade in their neighborhoods one hot day,
to passers-by.
Kid Red yells things like "The best lemonade in town!"
Kid Green yells things like "Hey Joe, how 'bout some lemonade?"
Kid Blue yells things like "It's hot today! Get your lemonade before
you head to the beach!"
Can you identify the future marketer, salesperson and PR guy? It turns
out there is a systematic way of guessing. On this important question
hinge many things: business vision, market positioning and corporate
culture. The answer also drives a mutually-exclusive 3-way choice that
sorts companies into marketing-, sales-, and PR-driven kinds. And perhaps
most important, the mutual exclusivity means that the most seductive idea
in selling, a 1972 idea known as the "Whole Egg" (an integrated
sales+PR+marketing model), originated by Ed Ney, then president of
Young & Rubicam, needs an update. The Whole Egg is not a white egg.
One primary color will dominate. One of the three functions will always
lead. Looking for balance is a recipe for failure. To get to a whole egg, you
must first pick a color to paint it.
Telling Marketing, PR and Sales Apart
By 1991, the Whole Egg idea had evolved into an academic theory,
proposed by Clarke Caywood and Donald Schultz of Northwestern
University (curiously, they were from the journalism department, not the
business school) with the impressive name Integrated Marketing
Communications. I admit I've not read the original IMC work, only
second-hand summaries.

But here's the gist. There is a lot of overlap among the three
functions. So much so that it is hard to tell them apart, and there is a good deal of
potential value in integrating them. Each is a customer-facing function.
Each is about crafting messages designed to sell things. Each is about
managing a portfolio of channels. Each listens and talks to the market.
This procedural similarity is what confuses people and leads them to
misguided partitioning based on channel: marketing is about advertising,
sales is about face-to-face pitching, and PR is about getting journalists
interested. No, no and no. You can put a sales pitch in an advertisement, a
marketing positioning idea into a news story, and a newsy idea into a sales
pitch or advertisement. You can market face-to-face and sell en masse.
You can sell with a news story, and turn an advertisement into a
newsworthy event in its own right.
With old media you could at least make a medium-is-the-message
argument. Yes, in traditional media, advertising is friendlier to marketing.
Face-to-face is friendlier to sales. The news is friendlier to PR. In new
media though, these distinctions fall apart immediately. Every new
medium (blogs, Twitter, Facebook) can be personalized, customized, made
as one-way or two-way as you like, and customized for word-of-mouth or
broadcast. These media have no message. Or every message, if you like.
But the three are different. You see, the distinction lies in the type of
message. Especially with new media. They can work together to form a
whole egg, but never confuse them.
In the example I started with, Kid Red is a marketer, Kid Green is a
salesperson and Kid Blue is a PR prodigy. Marketers like themselves,
salespeople like other people, and PR people like ideas. Each turns his or
her personality into a selling strength.
Smart people who like themselves soon realize that other people like
themselves too. They understand self-indulgence. They understand what it
means to always be conscious of, and care about, how you are perceived.
All marketing messaging is based on self-perception, whether it appears as
an ad, a lifestyle-section trend story, or a sales strategy that relies on your
salespeople wearing hipster clothes. Kid Red knows this unconsciously.
He knows some people have a self-perception based on elitism. They like

"the best lemonade" (as opposed to, say, "the cheapest lemonade" or "the
weirdest-colored lemonade"). He probably likes the best lemonade himself.
His elitism translates into a marketing strategy that focuses on hooking
elitist self-perceptions.
Now Kid Green likes others. And she realizes that others like others
too. They like their friends, enjoy interpersonal interactions, and buy from
friends if possible. So she personalizes the interaction as much as she can.
Trust matters more than product attributes. Kid Green and her customers
would both rather buy from someone they know than someone who claims
to have the best lemonade. Even if they do have elitist tastes, they are
likely to go to a friend. Even if the stranger's lemonade booth has a queue
of a dozen people and the friend's stand has no queue.
And finally PR, the latest kid on the block (I'll explain why in a
minute). Kid Blue has a message that isn't about people at all, but about an
idea: the role of lemonade in a hot-day story. You can see why this so
easily segues into the news: you could pay the local radio station host to
talk about beaches and lemonade as part of the weather report on hot days.
Note that there is a subtlety here. Given the same power to tweak an
offering, marketers naturally customize, salespeople naturally personalize,
and PR people naturally contextualize. All three lead to differentiation.
Like many Indians I like a dash of salt in my lemonade. If the kid in my
neighborhood notices and greets me every time with a "the usual? with a
pinch of salt?" he is personalizing. But if he reacts by making up a menu
with "Regular" and "Salty" options, he's a marketer. Marketers don't care
to know you; they only care about how you know yourself. And finally, if
he runs a promotion on Diwali selling "Indian salty lemonade," well, he's
pulling off a PR stunt.
And with new media, all three can scale. To use the example closest
to traditional media, you can use variable print technology in paper direct
mail to personalize (put people's names and their kids' pictures into a
message), customize (use revealed preferences to include a beer picture in
some messages, and a wine picture in others), or contextualize (insert
excerpts of your product's reviews from media you know your prospect
consumes).

If you have trouble remembering these subtle distinctions, here is a
mnemonic: you personalize to people's identities, customize to people's
tastes, and contextualize to people's environments. Identities, tastes,
environments.
But enough about lemonade. Let's talk grown-ups and grown-up
companies.
Microsoft, Apple and Google
In my previous post, The Seven Dimensions of Positioning, I made a
remark about Google:
the classic reading of the Google origin myth gets it
wrong. The story goes that Brin and Page, when told they
had to choose between a marketing or a sales culture
(and this is engineering braggadocio pure and simple),
chose to create an engineering culture instead. This is
wrong on two levels. First, it is a three-way fork today, not
two-way, and Google is a company built on effective PR.
"Don't be Evil" and stories about great buffets (and
ironically, the story of Brin and Page choosing an
engineering culture) are basically the core of a PR
socialization narrative
The reason people haven't thought about a three-way choice is that
until recently PR was just too uncontrolled a variable. In the age of
broadcast media, journalists were just too few and too powerful to be
managed well enough to drive a selling strategy. This is no longer true.
The breadth, depth and diversity of new media channels means that PR is
a function with vastly increased leverage today. Enough that it can sit at
the big boys' table with marketing and sales. The original PR guy, P. T.
Barnum, only succeeded because he was in the circus business, a business
that naturally lends itself to spectacle and buzz-creation. But today, anyone
can choose to lead with PR.

So, with that detour out of the way, how does the 3-way play out? The
computing industry offers a near perfect case study. Apple is as pure a
marketing-led company as you can hope to find. Microsoft breathes sales.
And Google is entirely a PR-constructed narrative.
Do the three selling strategies support my basic psychological claims?
Absolutely.
Apple is led by a guy who likes himself to the point that he doesn't
care at all what others think about him. And his customers are all people
who like themselves too. The best piece of evidence is probably the Mac
vs. PC ads. The entire campaign was about self-perceptions. The product-focused
ads? They sell to self-perceptions and personal identities as well.
Their effectiveness relies on people knowing that they strongly prefer
highly visual and tactile interfaces. The archetypical Apple customer is so
well-defined that he or she is practically a caricature: a dancing hipster
with eclectic musical tastes who drives certain types of cars.
Which is why Microsoft's response was so effective in turn. Rather
than accept the self-perception/identity-based framing, they reframed the
contest. The entire "I am a PC" campaign was highly personal. You get faux-real
people with names and faces. Not actors modeling abstract Claritas
PRIZM psychographic personas. And Microsoft's entire selling strategy is
sales-driven: OEM partnerships, large enterprise sales, institutional
channel partnerships and the like; it's all 1:1 work. We all know you can
only buy Macs at certain prices from a few places. Microsoft software?
You are a complete sucker if you routinely pay sticker price. If you can't
find a deal through your company or school, you are subsidizing the rest
of us. The "likes other people" bit is also at work. Most Microsoft people
I've met tend to be friendly, down-to-earth and dressed-down (one sales
guy I met wore a suit but carried a backpack, a bit of gaucherie that would
probably invite a death sentence in an Apple store). Spend five minutes
talking to any Microsoft rep, and they will have ruefully but confidently
acknowledged and laughed at Microsoft's brand image issues, and made
sure you like them even if you don't like Microsoft. Interacting with Apple
people in an Apple store, on the other hand, is a slightly intimidating
experience, like shopping at an upscale clothing store.

And what about Google? They don't advertise. They know your name
and everything about you, but they don't even attempt to personalize or
customize your experience. Instead they spread stories about great buffets,
whiteboards with "Don't Be Evil" scribbled on them, and how Brin and
Page insist on fewer than 7 +/- 2 items on the Google home page. They
make sure that every geek knows that in PageRank, it is "Page" as in Larry,
not as in web page. Every marketer recoils in horror at a brand name being
commoditized into the category name (Aspirin, Kleenex, Xerox). But
Google doesn't care that "google" has become a generic verb. Unlike
marketing and sales brand equity, PR brand equity is amplified when a
brand becomes the generic category name. And perhaps the most
compelling evidence of Google's PR-driven culture? They mangle their
logo every chance they get (know any other major brand that allows this?)
to reflect PR opportunities. Remember our hypothetical kid selling salty
lemonade on Diwali? Google offered this Diwali logo to Indian users in
2008:

I rest my case.
But let's get back to my colored egg argument. Why can't you do all
three? Why can't the marketing department focus on identity and
customization, sales on tastes and personalization, and PR on ideas in the
environment and contextualization?
There are two reasons: people and product. But first let's marshal the
evidence that you cannot do all three. It is only a weak proof-by-non-existence,
but strong enough for me.
And One Function Shall Rule Them All
The IMC/Whole Egg idea is largely viewed as a failed vision today.
Some are resurrecting the idea based on the convergence of media, but
unconverged media was never what held the Whole Egg idea back in the
first place. It was the mutual exclusivity among messaging styles. A
personalized, customized and contextualized message is a complicated and
schizophrenic message: "Hey Joe, how about taking some of the best
lemonade in town to the beach today?" The passer-by has walked past by
the time you can get that sentence out. Effective messaging is about
making choices.
Today, most companies clearly reveal their selling colors. Integrated
or not, there are no white eggs to be seen. It's all red-, green- or blue-dominated.
In our computer industry example, I pointed out how the dominant
function colors (or contaminates, depending on your point of view) the
subservient functions. Another place you can find evidence of One Must
Rule dynamics is in post-sales. This is the fourth major customer-facing
function that usually goes unnoticed in discussions like this one. But it is a
selling function all the same: retention is cheaper than acquisition in
general, and customer service is the major retention (and upselling)
touchpoint. When a company has its act together and is doing post-sales
well, you can ask: what differentiates a given high-quality customer-service
department?
Does the service optimize on customization attributes? Lots of ability
to tweak or change your relationship? Speed for the impatient, simplicity
for the easily confused? That's marketing-driven post-sales. Does the rep
know you by name, and does your call get routed to the same rep
every time? Sales is in the driving seat. Receiving a lot of contextualized
offers like relevant holiday specials? That's a PR post-sales show (this is
as yet quite rare, but Amazon, another PR-driven idea company, is a good
example: look at their rare advertising; it is about ideas. They fought back
against the me-me-me iPad ads with the "read on the beach in sunlight" idea).
Initial Conditions and Egg Color
So why does this happen?

First, people drive the equation. The founder vision is based on the
foundational selling personality. Jobs is a marketer; we know a lot about
him because he likes himself (all those black shirt stories). Ballmer is a
high-energy salesman, and Gates is down-to-earth. Where Jobs appears on
a stage alone, holding an audience in thrall, Gates shared the screen with
Jerry Seinfeld, an entertainer who might have overshadowed him, and the
focus was on the banter between them. You see him in conversation (with
Warren Buffett, for instance) more often than you see him speaking from a
stage.
Brin and Page clearly like their personalities to fuel the news, rather
than cultivating either a personal brand or an interpersonal style. I know
nothing about either of them. I've only once seen a video of Brin
addressing a classroom. Both have been reduced to the ideas they
represent. Heck, Page is part of their main idea, PageRank. They've even
ceded the people stuff to Schmidt, and made it hard to even tell them apart
(compared to how clearly you can tell the two Apple Steves apart, or
Gates, Allen and Ballmer apart).
Why does this matter? Like attracts like and you get massive initial
condition effects, both in terms of customer base and employee base (and
remember, many of the best employees start as passionate customers).
People who like people join Microsoft. People who like themselves join
Apple. People who like ideas join Google.
Second, product drives the equation, also indirectly via people. People
who like themselves build what they want, and then sell it to others
through the force of their personalities. People who like others do
customer-driven product development. It is blindingly obvious that Jobs
has designed every major Apple product to his tastes, and sold them to
people who share those tastes. Microsoft? Well, apparently Windows 7
was your idea. Even before Windows 7, you could always personalize PCs
more than you could Macs. And Google of course, is the quintessential
idea product (the core ideas for both Apple and Microsoft, by contrast,
came from various outside sources, which included my mothership
Xerox).
Curiously, Facebook is apparently none of the above, and a true
engineering culture. Zuckerberg reportedly tries to hire engineers even for
non-engineering functions. Perhaps that explains why Facebook appears to
suck at all three: their brand image is weak (poor marketing), and they've
ceded control of their PR to an unfriendly Aaron Sorkin (their sales face is
an unknown quantity to me, but I am betting it is weak as well).
But engineering cannot sell products. A customer-facing function
must lead. Facebook has bought time by being miles ahead on
engineering, and by getting mileage out of selling to engineers (by pioneering
the open API/developer relationships strategy), but at some point, I think
they'll be forced to choose. Unless they end up as a true monopoly, in
which case they'll be able to get away with it.
What is Corporate Culture?
In my review of Tony Hsieh's book about Zappos, I voiced my deep
ambivalence towards the very idea of corporate culture. Besides the
obvious problems of fostering groupthink and hiring clones, which I
pointed out, the very idea of identifying culture through a bottom-up-plus-top-down
shared-values exercise deeply bothers me. In the seven
dimensions post, I went further and argued that culture is an outcome of
other variables, especially the customer-facing functions, and that
attempting to control it directly or indirectly is a dangerous thing.
On the face of it, the shared-values model of corporate culture is
clearly ridiculous. For most people, religious values (including the choice
to be non-religious) are at the core of their value system. If you truly
wanted to base corporate culture on shared values, you'd be dead before
you started. In fact it would be illegal in most democracies.
So apparently we only look for shared values in some areas. And non-core
areas at that.
Which brings me to my final conclusion on the subject of corporate
culture: you don't need to share core values (impossible) or all values
(idiotic and impossible). You just need to share your selling values.
This means that when you are wondering whether or not to join a
particular company based on cultural fit, you should ask: what's your
preferred selling style (and everybody's got one, whether or not they are in
a selling profession)? Do you like selling based on self-perceptions,
starting with your own self-perception (sign: you can sell best to people
like yourself)? Join a marketing-driven company. Do you like getting to
know people and selling in personalized ways (sign: you can sell to
anybody)? Join a sales-driven company. And finally, do you like selling
ideas (sign: you can sell to anyone who "gets it"; they don't have to like
you or be like you)? Join a PR-driven company.
As companies mature, the original culture remains, but weakens and
diversifies. If your selling style is strongly defined, join an early-stage
company with a very strong culture. A primary-colored egg. If you don't
lean strongly one way or the other, join a mature company with a
weakened founding culture and lots of local silo flavors. A more colorful
egg, but still with a dominant primary hue.
The Story of this Post
Besides my previous posts logically leading up to this one, some
interesting recent events led me to this conclusion. In the last month or so,
I met three people who seemed to be strongly influenced by my ideas, but
disagreed with me in very specific and puzzling ways. Thinking about it
led to a personal realization: I am an idea-driven sales guy, and PR is my
medium. I almost never personalize or customize, but I often contextualize
(though I don't lean towards PR in an extremist way, which explains why I
am comfortable in a more mature, sales-first company like Xerox). But of
the three people I met, two have been sales-first people, and one has been
marketing-first (the three of you know who you are!).
So that explains that mystery. It also explains why, looking back on
my personal history of selling or hiring people to sell, I've pretty much
always gone with a PR-first decision. When I haven't, the decision has
backfired badly. I can now read a new meaning into a really old (2007)
post of mine, How to be an Idea Person. All my lemonade stand
experiences that I described there were PR-driven.

By the way, the solution to the problem posed in the title: to fix your IMC
strategy, you need to paint your whole egg (or rather, stop being in denial
about the fact that it is colored).

How to Draw and Judge Quadrant Diagrams


April 20, 2009
The quadrant diagram has achieved the status of an intellectual farce.
If you, as a presenter, do not make an ironic joke when you throw one on
the screen, you will automatically lose a lot of credibility. For some very
good reasons though, the diagram is an indispensable one for the
presenter's toolkit. As a listener, if you have a default dismissive attitude
towards the thing, you will have to sit out far too many important
conversations with a cynical, superior smile. So here's a quick tutorial on
quadrant diagrams. I'll tell you both how to make them and how to
evaluate them. Here's a made-up one to get the basics clear. You basically
take two spectra (or watersheds) relevant to a complex issue, simplify each
down to a black/white dichotomy, and label the four quadrants you
produce, like so:

This particular one is nonsense, and falls apart at the slightest poking
(we'll poke later in the article), and I made it up for fun. Let us discuss
three real examples from business books before we develop a critical
theory and design principles. The three I will use are from The Power of
Full Engagement by Jim Loehr and Tony Schwartz, Making It All Work by
David Allen, and Listening to the Future by Dan Rasmus and Rob
Salkowitz.
The Dynamics of Energy
The Power of Full Engagement by Jim Loehr and Tony Schwartz is a
pretty neat little self-improvement book that is based on the premise that
managing energy is more important than managing time, and that we
should do so the way top athletes do: by balancing training and
performance. The book offers this quadrant diagram:

This example illustrates the difficulty of working with complex,
ambiguous, multi-faceted issues. Energy at the level of physics is well-defined,
but when we are talking about the more ambiguous sort that goes
with people's behaviors, there are a lot of nuances. The book itself
discusses several other aspects, like a classification into physical,
emotional and spiritual energy.

Notice one thing about the quadrants: they do not have evocative
names, but mere structural labels like "high positive," alongside lists of
features, which are clearly variables deemed to be of lesser importance,
but too important to leave out. The diagram picks two specific
attributes out of the ambiguity for highlighting: subjective intensity and pleasantness.
While this is a reasonable thing to do, it is not a necessary
choice. You could defend these choices, but they do not seem particularly
compelling. Why not, you might ask, "steady" vs. "spiky" energy, or
"physical" and "mental" energies? The choices are also weakened by the
low chemistry between the two variables.
You would not expect this diagram to support a conceptually strong
theory, and it doesn't. The book stakes its credibility on case studies and
anecdotes, and fortunately, the structural strength of this diagram is not
tested. This is basically a quick-and-dirty conceptual framework for
organizing subject matter and ideas that are largely empirical in origin.
This should not be surprising, since the source of the book's ideas is data
from performance coaching of athletes and executives.
Overall, this one rates a C-. As I will argue, it uses a quadrant for the
wrong material, and does so poorly at that (the book itself is decent
though).
The Self-Management Matrix
Moving to a more analytical, concept-driven quadrant, consider this
one, from David Allen's Making It All Work, a reflective analysis of his
earlier book, Getting Things Done (GTD).

This diagram is an immediate improvement over the previous one on
two fronts: the quadrants get evocative labels, and the chosen dichotomies
along the x and y axes, perspective and control, rather than seeming like
arbitrary lead variables plucked out of a list of many, have a yin-yang
fundamental quality to them. The evocative labels serve an important
function. Unlike "High-high," the phrase "Master and commander" picks
out a prototypical example of being in the state of high perspective and
high control. It at once suggests more implied complexity than the long
feature lists in the Loehr/Schwartz diagram, while retaining a coherence
which the Loehr/Schwartz diagram lacks. You get the sense that while
each quadrant is a fuzzy set, they do represent fundamental pure types.
Though we are not talking math or rigorous logic here, you would accept
perspective/control as metaphysically foundational concepts, rather like
Euclid's axioms. You are willing to make a leap of faith and assume the
pair as basic, important dichotomies.

This diagram, unlike the previous one, is the result of a more
deliberate effort at fundamental analysis. More thought has gone into it,
and you could (and Allen does) build more of a sound theory on top of it.
This is also not surprising, because though the diagram is based on the
empirically validated GTD methodology, the methodology itself grew out
of conceptual analysis, not data analysis.
This diagram rates a B+. Pretty decent. Points lost for insufficient
qualification of the rigor of the argument.
The Microsoft Scenarios
Our last example is from business strategy rather than self-improvement,
and is a diagram that organizes four "future of the world"
scenarios that Microsoft uses to test its strategies, and is the basis of
Listening to the Future by Dan Rasmus and Rob Salkowitz. The diagram
is used for scenario planning, and the idea is that if a strategy seems
robust to the four scenarios (the metaphor used is wind-tunnel testing),
then it is a good strategy. Here's the diagram:

This is perhaps the most interesting one of the three. The diagram
takes on the formidable task of thinking about the future of the entire
planet. The framework is based neither on experimental/field data (we are
talking about the future, the product of thousands of trends gathering
momentum today, and uncertainties that nobody can guess at), nor is it
conceptual in origin. There is no possible fundamental theory that would
tell you that globalization and labor market organization are the two most
important variables. Maybe the important ones are the evolution of Islam, water
wars or the global aging population. The choices made here are essentially
artistic ones, not statistical key indicators or first-principles self-evident
concepts. Though globalization and labor dynamics are important, they
simply are not metaphysically primitive constructs like "control" or
"perspective" (or "line" and "point"). Instead, they represent observable
patterns at the other end of the scale, the most complex sorts of patterns we can
process and understand, what we call mega-trends.
Which is why the labels in this diagram are crucially important. They
go beyond evocative to purely artistic. They suggest entire stories and
science-fiction trilogies. At the risk of sounding like a bad fiction
reviewer, I'd call the quadrant names rich background tapestries. What's
more, the supporting text provides the right sort of nuanced and ironic
meta-analysis of the diagram itself.
This rates an A grade.
When Should You Use a Quadrant Diagram?
In summary, the three diagrams rate C-, B+ and A. The grades are a
reflection of both the difficulty of applying quadrant diagrams to the
source material in the particular cases, and the effectiveness of the
actual application. Let's create a quadrant diagram to illustrate when to use
quadrant diagrams, and when to do something else.

The key is to use it when there is high ambiguity, overlap and
fuzziness in the basic categories, and apparent high dimensionality (lots of
variables with complex coupling), but somehow, when they mix together, a
few dominant patterns leap out. In the soup that is predicting the future of
the world, despite the complexity, a few things obviously leap out, like
climate change and globalization, as useful top-level constructs. In
talking metaphysics of being, somehow "yin" and "yang" capture
something important.
If the concepts were clearer, and dominant patterns were visible
(bottom right), you would either be able to apply first-principles analysis
and identify the foundational concepts, axioms and inference rules, OR
you would be able to measure things and apply statistical techniques like
regression, principal component analysis and clustering. Imagine a
contemporary of Euclid offering a quadrant diagram with "closed/open"
on one axis and "small/big" on the other, and marking the quadrants
"lake," "river," "speck of dust" and "hair." He would have been blasted
away by Euclid's more fundamental point/line approach. Similarly, the
dumb example I opened with (rich/poor, smart/stupid) begs for statistical
analysis, because it lazily uses an unnecessary quadrant diagram for stuff
that would yield to systematic number-crunching (IQ/personality
type/wealth correlations). The Loehr/Schwartz diagram is interesting
because it is on the cusp between statistical tractability and intractability.
The book's source material is just coherent enough that you get the feeling
a good statistician could have eliminated the need for the quadrant
diagram. A good part of the blame for the low grade of the diagram is that
the material should not have been quadrantized in the first place.
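To make "systematic number-crunching" concrete, here is a minimal illustrative sketch (entirely mine, with synthetic data and assumed numbers, not anything from the books discussed) of how the rich/poor, smart/stupid material would be treated statistically rather than quadrantized:

```python
# Minimal sketch: measure first, then let correlation and clustering tell you
# whether any quadrant-like structure actually exists. Data is synthetic and
# purely illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
iq = rng.normal(100, 15, 500)                    # hypothetical IQ scores
wealth = 900 * iq + rng.normal(0, 30_000, 500)   # hypothetical wealth, loosely tied to IQ

print(np.corrcoef(iq, wealth)[0, 1])             # is there even a relationship to talk about?

X = StandardScaler().fit_transform(np.column_stack([iq, wealth]))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))                       # four natural clusters, or four imposed boxes?
```

If four clusters are really there, the data will show them; if not, the quadrant was drawn onto the data rather than discovered in it.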
If things are ambiguous and no dominant patterns leap out (top left),
you are better off creating a dictionary or glossary of example types to
illustrate diversity and differentiation within the soup of ambiguity. This
can mean case studies, collections of anecdotes or examples, and so forth.
If concepts are clear-edged, but nothing seems any more important than
anything else, the material is ripe for a taxonomist. Which is why the
alchemists, with their earth/fire/water/air/ether model, did quite well, as did
the periodic-table folks, before the subject of chemistry yielded to
fundamental analysis at the level of physics. The same goes for Carolus
Linnaeus and his binomial nomenclature, before the double helix came
along. Most taxonomies, though, will never find a more fundamental layer
below the arbitrary one.
If you DO use a Quadrant Diagram
If you ARE in the top right quadrant, you still have work to do. Your
primary job is to identify four interesting and complex clusters of
phenomenology, without the aid of statistical or first-principles analysis,
and think up two interesting lines that will separate them. These are the
dominant patterns, and the organizing spectra/watersheds. You are
effectively doing right-brained statistics and first-principles guesswork.
If your lines end up being spectra, or related to each other in nice
ways (for example, the perspective/control dualism), that's a bonus. The
Microsoft example is interesting because the two axes do not represent
simple spectra. Between hierarchy and decentralization, a lot of variables
change.

The value of your diagram will be validated by your ability to think
up evocative names for the quadrants. If people see your diagram and
instantly feel a sense of relief and recognition, it means you are
articulating and clarifying something they've already subconsciously
noticed. Naming is important in other ways as well. Structural indicators
like "high/low," as in the Loehr/Schwartz diagram, put people on their
guard, because they recognize that you are shoehorning a
multidimensional issue into two dimensions. The list of additional
qualifiers makes things worse, since it suggests you are ignoring complex
couplings. All adjectives, no nouns. Evocative names, on the other hand,
suggest complexity, as well as soundness and coherence.
Finally, depending on your source material, you will need different
types of evocative names for your quadrants, and different types of
supporting qualitative commentary. Here is a quadrant on how to do
quadrants:

Remember, if you are doing quadrants at all, you are in the
ambiguous/dominant-patterns zone. If you are playing with seriously
conceptual stuff (things like yin/yang and center/periphery) that
people understand intellectually, in the abstract, you are in the top half. If
you are dealing with things individuals and groups experience, and can
relate to specific memories, histories, or entities they've encountered, you
are in the bottom half. If you are dealing with things that seem like they
could apply to green aliens in the Andromeda galaxy (say
being/becoming or light/dark), you are in the left half. If you are
dealing with things like global warming or jerks who talk too much at
parties, you are tied to the specific, path-dependent history of planet earth
and the unique attributes of homo sapiens, and are in the right half.
You already know you need evocative labels. This quadrant tells you
what types of evocative labels to manufacture, and how to support the
diagram with argumentation. If you are in the top right, you have to make
up names that sound like high-concept Hollywood movie names, and be
prepared to tell imaginary stories: what-ifs about possible worlds. If you
are in the bottom right, be prepared to provide examples of real people,
events and places. You want to talk about "classic" (or archetypal)
members of the quadrant. You might give them abstract names, but you
should be able to map the abstract names to real examples, as I did in the
previous section. A good real-world example of such an archetypal
quadrant is the Keirsey temperament sorter, which lumps the 16 possible
Myers-Briggs types (based on 4 variables) into four groups called
temperaments. This diagram is particularly interesting because each
quadrant holds two variables constant and leaves two free, which means the
axes do not represent anything simple (work out the axial logic if you are
really curious).

Continuing clockwise, in the lower left, you have experienced
abstractions, which yield best to metaphor. If Loehr/Schwartz had read this
article, they might have named their quadrants (clockwise from top left)
something like "Mind like water," "Evening on the beach," "Ghost town"
and "Hurricane Katrina." The point isn't to be cute. It is to suggest a
pattern of complexity, and a calibration/benchmark point, without a
hopeless attempt to list all the attributes that create the complexity. But of
course, the Loehr/Schwartz diagram is a poor quadrant candidate in the
first place. It is better as statistics fodder.
Finally, in the top left, we have metaphysics. David Allen's diagram,
rather schizophrenically, uses both archetype labels ("Master and
commander," "Crazy-maker") and metaphysical ones ("Implementer,"
"Visionary"). If this confuses you, think of it this way: archetypes are
thumbnail portraits of an entire typical entity. Metaphysical categories
emphasize a fundamental or dominant attribute. A "Crazy-maker" is a very
richly defined image that says far more than the label "Visionary" (which
could be good or bad). Things are clearer if you pick just one type, and when
in doubt, go more right-brained. This has already happened: "crazy-maker,"
"victim" and "micromanager" have already become the preferred terms in the
discussions of Allen's diagram. I still put his diagram in the
"Metaphysician" category though, since I think he is working with
context-free categories (perspective/control) that are not restricted to
humans. The Keirsey diagram, by contrast, is more closely tied to human
psychology.
Evaluating Quadrants
The discussion so far should suggest obvious evaluation criteria. First
ask the question: should this be a quadrant diagram at all? If not, probe the
speaker with respect to the quadrant of the "should this be a quadrant
diagram" diagram where you think the subject belongs. Ask statistical,
first-principles, variety and taxonomy questions as appropriate. If
quadrants are indeed appropriate, apply the second quadrant diagram to
classify what you are looking at, and look for (or ask for) the right sort of
supporting argumentation. A speaker talking about global warming
swamping coastal cities and citing examples of historical floods is
providing the wrong sort of evidence: even the worst localized flooding in
known history is not the right sort of reference point. You need something
like an imaginative science fiction story.
Wrapping Up: Other Diagrams
Visual constructs live in a special sweet spot inhabited by issues that
are too complex for rigorous analysis, and too structured or impoverished
to support full-blown narrative treatments in the form of novels or stories.
Within this universe, quadrant diagrams are in the Goldilocks position.
One dimension (spectrum scales and circular life cycles) is fairly
limiting and needs a lot of verbal support. Three dimensions gets you to a
place where sheer visual processing overshadows the content of what you
are saying. There are also interesting special cases like triangles. Beyond
that, you are reduced to things that start to look quantitative or operational:
multiple sliders on scales, tables, and flow charts. Beyond that, qualitative
analysis through stories and metaphor is the only thing that will work.
So appreciate the quadrant diagram. In the right hands, it defuses
polarizations, reframes arguments, separates out coherent alternatives and
makes everybody's life a lot easier. In the wrong hands, it produces
amusement, supplies fodder for Dilbert jokes, and gives mediocre
consultants a picturesque path for their descent into madness.

The Gollum Effect


January 6, 2011
Throughout the last year, I've been increasingly troubled by a set of
vague thoughts centered on the word "addiction." Addiction as a concept
has expanded for me, over the last few months, beyond its normal
connotations, to encompass the entire consumer economy. Disturbing
shows like Hoarders have contributed to my growing sense that
conventional critiques of consumerism are either missing or marginalizing
something central, and that addiction has something to do with it. These
vague, troubling thoughts coalesced into a concrete idea a few weeks ago,
when I watched this video of a hand supermodel talking about her work, in
a way that I can only describe as creepy.
The concrete idea is something I call the Gollum effect. It is a
process by which regular humans are Gollumized: transformed into
hollow shells of their former selves, defined almost entirely by their
patterns of consumption.
The Creation and Consumption of Gollum
There is a sense in which Gollum, rather than Frodo, is the central
protagonist in The Lord of the Rings, since his destiny is tied to the
inanimate star of the show, the One Ring. He is the only character who
truly rises above the standard two-dimensional archetypes of the fantasy
genre, and elevates Tolkien's works to a near-literary status.
Gollum is a real character. He does not evoke a one-dimensional
emotional response such as identification, annoyance, pity, disgust, fear,
suspicion or hate. He evokes a full-spectrum response that involves all
those feelings and more.
And yet paradoxically, he is in fact one-dimensional, almost as
featureless as the object that holds him in thrall, the One Ring.

It is tempting to conclude that the featurelessness of the One Ring
symbolizes the abstract nature of the malignancy of which it is an agent.
But you can read a much deeper meaning into The Lord of the Rings if you
interpret the featurelessness as symbolizing purity and refinement: in the
sense of cocaine.
That Gollum is the archetypal addict is not a particularly novel
reading of the character. In their parody of The Lord of the Rings, the
writers of South Park turned the character of Butters into Gollum, a
newly-minted porn addict, following a porn video tape through the plot,
calling it his "precious," and ultimately falling into the tape return slot at
the video store (Gollum falls into the fires of Mount Doom along with the
One Ring).
Gollum is a creature created, and ultimately consumed by, the One
Ring. Smeagol, the ordinary living being with a single fatal flaw, is
transformed into a pure pattern of addictive consumption. He sustains the
ring through its lost years, and is sustained by it.
If it weren't for the spirit-like remnants of Smeagol in his character,
Gollum would be no more than a dead finger defined entirely by the ring,
a ring-wearing appendage. The ring only allows the ghost of Smeagol to
persist because it brings with it the capacity for cunning, deception and
trickery, which it needs to further its own objectives.
The ring itself though, remains unchanged by Smeagol-Gollum, even
as it transforms and consumes him. It is important to note that the One
Ring does not actually destroy Gollum till its own end is imminent; it
keeps Gollum alive to serve.
I want to offer you this thought as a starting point for understanding
Gollumization: consumerism is not about humans consuming products. It
is about products consuming humans.
Again, this is not a novel thought, but it is marginalized to the status
of a joke in our discourses around consumerism. In an episode of The
Simpsons, for instance, a hippie tells Principal Skinner: "Do you own the
car, or does your car own you? Simplify, man!"

It is rather ironic that this potent and consequential message is only
heard today from an impotent and inconsequential peripheral subculture,
one so predictably ineffective that nothing need be done about it by the forces and
institutions of consumerism that it threatens. In the hands of hippies, the
message reduces itself to farce.
But Gollum is not truly the sort of hollowed-out and useless addict
created by something like cocaine, a product that is more predatory than
parasitic, since it destroys its host prematurely. The scariest thing about
Gollum is that he is just functional and lucid enough to be usefully
employable within the tale. This high-functioning state of addictive
collapse makes him a creature of mainstream consumer culture, rather
than of the back-alley culture where we first meet him (hiding, murdering
and thieving among the Goblins in subterranean caverns in The Hobbit).
The One Ring does not just drain Gollum to feed itself, the way a
drug like cocaine sucks a victim dry of wealth. It also needs Gollum's
more creative and productive servitude, and for that, it needs him to be
functional.
Gollum is both employee and consumer. A prosumer locked in a death
embrace with a product. He is a raving fan.
Gollumization Showcased
What is utterly scary about Ellen Sirot, the hand model in the video, is
that like Gollum, she is not a cocaine-devastated creature living a wrecked
life on the margins of society. She is an employable, functional creature
living at the very center of it, in the spotlight. She is a mainstream
Gollumized creature, whose particular pattern of Gollumization just
happens to be a little more extreme and visible than the patterns that
define the rest of us.
As I watched Sirot's Gollum-like mannerisms in the video, my hair
actually stood on end. I was that creeped out by her, as she caressed her
own hands lovingly throughout her conversation with Katie Couric. I fully
expected her to say "My Precioussss" at some point. I found the video via
a post on kottke.org, in which Jason Kottke notes:

This is a really strange and fascinating video. Sirot is
constantly performing with her hands but it's also like she
hasn't got any hands, not functional ones anyway. She
holds them like atrophied T. Rex arms!
Sirot is a poster-Gollum for consumerism. I expect she is a leading
and discerning consumer of hand-care products, which must help feed
what appears to be a narcissistic obsession with her own hands, one that goes
well beyond pragmatic concern for her means of income.
The economy that produces those hand-care products has found a
larger, life-consuming role for her. One that requires reducing her not just
to her hands, but to a single aspect of her hands: their camera-friendliness.
You and I aren't as different from her as you might think. She is a fully-realized
Gollum, whose special talents attract special attention. Her ring
demands her extreme services under the glare of studio lights. You and I
are lesser Gollums; what saves us is not strength of will on our part, but
the fact that we are just not useful enough for our rings to completely
possess.
Watch the video. Sirot's hands seem like lifeless cul-de-sacs within
which her humanity is trapped. She refers to her hands as elite Olympic
athletes (my athletesssess!!?), but unlike, say, a pianist's hands, her
hands are not instruments through which she can express her entire human
nature. Her fingers are the bars of a gilded cage. As she says later in the
interview, her life is all about constraints and saying no to the merely
human. Forget playing the piano with her "elite athlete" hands; she can't
do the simplest things that the rest of us take for granted, like twisting
open bottle caps, pushing elevator buttons, or picking up things.
The only spark of humanity I saw in the entire interview was a bit of
mischievous, self-deprecating humor: she noted how ironic it was that her
hands frequently feature in commercials for dishwashing products, but she
cannot afford to actually risk that most mundane of household chores in
her own life. In fact, she wears gloves all day. I had assumed, based on the
Seinfeld episode where I first heard about hand models, that this was just
comedic exaggeration. Apparently not.

But like I said, you and I are not that far removed from Ellen Sirot.
Combinatorial Consumption and Gollumization
The sheer variety of things that we consume obscures and moderates,
but does not entirely prevent, our collective Gollumization. The
subsuming envelope of consumption behaviors we adopt helps each of us
sustain an illusion of fully-expressed and uniquely individual humanness.
As a line in a recently-popular song goes, "I am wearing all my favorite brands, brands, brands."
Put us all together, and you get what we call mainstream culture.
What separates us from the fully-realized Gollums is that we mostly lack
the talents to deserve complete possession. Our very mediocrity as food,
with respect to the devouring appetites of the products that choose us,
saves us. Each of our consumption behaviors feeds on us every day, but
slowly enough that we can heal ourselves and achieve a fragile stalemate
with the forces of complete Gollumization.
But the equilibrium state falls well short of fully-human.
The apparent variety and uniqueness in our personalities is as illusory
as the apparent variety in what we consume. This illusory variety in our
consumption homogenizes us, while supplying each of us with the raw
material we need, to construct illusory notions of our own uniqueness.
Take the choices offered by the food industry for instance:
permutations and combinations of a few pure and highly-refined (a lot of
them corn-based) ingredients, all designed to hook our three main
addiction circuits that crave salt, simple sugars and fat respectively. It
doesn't matter whether you are addicted to burgers, pizza, french fries or chips (my particular poison). To the extent that you don't cook your own
meals from scratch, you have been partially Gollumized by the food
industry.
Our food choices are only a subset of our overall mode of
consumption, which I call combinatorial consumption. Combinatorial
consumption reduces the universe of human potential to a deeply-impoverished ghost of itself; a potentially infinite range of creative consumption behaviors reduced to paint-by-numbers consumption. Our lives are about choosing within the confines of a giant macro version of the Starbucks drink-construction decision tree. The dizzying but finite variety on offer helps distract us from the general impoverishment of what's on the decision tree, with respect to the unbridled bounty of nature that is not on it.
We live in a cartoon universe where Claritas PRIZM psychographics
categories have morphed from a partial description of a population of
human beings to a nearly-complete, Procrustean prescription for the
construction of a universe of Gollums.
Within the realm of food consumption, we are prisoners of what
Michael Pollan calls nutritionism: a highly-legible combinatorial food
consumption universe reductively captured in Nutrition Information
labels.
Real food is simply so time-consuming to prepare that we cannot be
allowed to indulge in it too much, lest it steal time from our reductive
roles as crank-widget producers. The widget-cranker is necessarily a
frozen-meal-eater. Only true free agents, like my friend Erik Marcus, who
have chosen to trade their talents for time instead of money, can actually
afford to eat real food routinely (Erik is responsible for some of the finest,
and cheapest, home-made food that I've ever eaten; his recipes for vegan
chili and japonica rice with stir-fried kale are to die for).
For the rest of us, real food is an occasional luxury.
To the extent that his value as a producer lies in a few simple and
optimal motions dictated by time-and-motion studies, like Gollum's limited repertoire of tricks, the widget-cranker's consumption of food must be imprisoned within the Nutrition Information box. A marginal market for heirloom tomatoes, on the edges of the three-dimensional salt-sugar-fat universe, is all that can be tolerated, to allow him to retain a
sense of connection to the natural.
For the most part the widget-cranker must eat, not food, but what Pollan calls "processed food-like substances." Functionally, he is not actually distinguishable from the Mad-Cow cannibalistic humans of The Matrix.
Thanks to established critiques like The Organization Man, we have
come to understand, and partially defend against, the forces that map us to
our reductive roles as producers in cookie-cutter jobs. We can turn to
things like Dilbert, or to my own modest contribution, the Gervais
Principle, for succor. There are survival strategies both inside and outside
the workplace.
This is due to the liberating and self-actualizing effects of even the
meanest kind of widget-cranking production work. All but the clueless
retain their humanity as long as they are actually producing. Gollum,
recall, remains Smeagol only to the extent that the ring needs his producer-skills, the cunning and craft of his forgotten Smeagol-hood. That little
foothold might have been enough for Gollum to claw his way back to
existential health, in a different telling of the story.
Now, not all products and services are like the abominations that are
fattening America up for slaughter, but the point is that the cheapest stuff
at the heart of mainstream culture almost entirely comprises Gollumizing,
pure-and-refined products and services, starting with the eternally-youthful Barbies, Kens and Ellens (now available in different pure-and-refined racial flavors) acting out the life scripts that teach us how to consume the rest of what's on offer.
What makes these core products such a potent force is that their low
cost makes them the stable attractors for the weak and at-risk. If you
stumble even slightly on the periphery, where you can be close to luxuries
like farmers markets that can serve as life-preservers, you will spiral down
into the hell of Gollumhood, optimizing calories-per-dollar along the way.
Answers to the question "what does it feel like to be poor?" reveal the horrifying fact that Pop-Tarts are the calorie-optimal food for the poor.
So heirloom tomatoes on the periphery (the butt of another Simpsons
joke) notwithstanding, addiction to the pure-and-refined is at the heart of
consumerism. And this is so uncontroversial that even well-intentioned
entrepreneurs uncritically declare that their goal is to create addictive products and services that can attract a small core group of raving superfans who can organize (if you pay them a sub-minimum wage via games and coupons) an inchoate crowd into a synchronized raving tribe.
So the world of combinatorial consumption that Gollumizes our lives
as consumers is a more complete prison than the world of work that
imprisons us as producers. True escape is nearly impossible, except
through extreme acts of rebellion, self-imposed exile, and marginalized
live-off-the-land self-sufficiency.
In our consumption behaviors, unlike our production behaviors, there
is no natural source of redemption to be found. The world of
combinatorial consumption provides a pseudo-richness that is so
superficially close to the richness of nature in fact, that one of the survival
strategies in the world of work, loser-dom, actually relies on discovering a
sufficiently interesting pattern of Gollumizing consumption outside the
workplace. This is the person who endures cubicle farm days,
daydreaming about the slightly richer pleasures of (say) football-fandom
on evenings and weekends.
And if you decide to fight Gollumization from within, you must
venture dangerously close to the thin line dividing those fighting for their
souls from those who have already lost it.
So let's talk about extreme couponers and hoarders.
Couponers and Hoarders
On one side of the line separating those fighting for their souls and
those who have lost it, you have the deadly game of existential chess
played by the protagonists of Extreme Couponing, who exult every time
they game the system and manage to buy $1000 worth of groceries for
$20.
These are people who spend all their spare time collecting,
organizing, investing in, and analyzing their coupon collections, to mount
weekly attacks on grocery stores, like card-counting blackjack players at
casinos. This is what Gollumized raving-fandom looks like.

For the most part, these are not resellers or rational participants in a
supply chain; they literally stock up on 150 years' worth of hand soap and
deodorant. As with the Sirot video, there were a few glimpses of humanity
in the Extreme Couponing show (catch a rerun if you can). In one rare,
human moment, an extreme couponer managed to score thousands of
boxes of cereal essentially free, which he then gave away to the homeless.
The lives of couponers are apparently about gaming the Big, Bad
marketing machine. One extreme couponer constantly made references to
chess, beating the house, and gambling with a strategy that allows him to
win every time. He conveniently discounted his hours of preparatory labor
as a fun hobby. He clearly viewed the marketing machinery of his grocery
store as an adversary to be beaten, and himself as some sort of hacker.
You might wonder then, why does the marketing machine tolerate
such acts of sedition? Is it only because they are not worth the cost of
completely stamping out, and are unlikely to grow into widespread
revolt? Perhaps occasional patching of particular exploits in the arbitrary
universe of couponing is enough for the marketing machine to stay one
step ahead in the arms race?
This seductive analysis, and the implied analogy to hackers attacking
a computer system, is deeply misguided. When hackers compromise a
valuable site via an undocumented exploit, they can steal or cause millions or even billions of dollars' worth of damage. The process is in no way
controlled, let alone legitimized, by the site owners.
By contrast, the extreme couponers, if you count the value of their
time, basically make a modest living doing below-minimum-wage
marketing work for the coupon-based marketing universe that welcomes
them as raving fans.
From the point of view of the stores, far from being hostile opponents
in some asymmetric game of chess, these are merely cheap and committed
marketers. They are encouraged to model, in extreme ways, the very
couponing behaviors that the marketing machine wants others to emulate
in less extreme ways.

Which is exactly what happens. So long as you and I casually clip and
use coupons, inspired by the extreme couponers in our midst, the grocery store still comes out on top. If the extreme couponers' leadership
behavior were to actually lead to large-scale loss-driving sedition by too
many customers, the store could easily staunch the losses overnight, by
making minor changes to coupon-redemption rules.
The coupon-based raving-fan gambling industry is merely a less-regulated version of Las Vegas. Instead of the temptations of low-probability jackpots, the house strategy for coming out on top merely relies on making profitable couponing so difficult, boring and time-consuming that only the destitute or obsessive, in possession of more time
than money and underutilized sunk-cost home warehouse space, would
attempt it.
If you need proof that this is a gambling industry rather than a hacker
subculture, you need only look at the support the stores provide to extreme
couponers. In the show, the store employees actually applaud when the
extreme couponers check out with their ridiculous hauls. Letting a hard-working couponer walk away with winnings of $5000 worth of
groceries for $200 is basically cheap marketing. The store makes more
than its money back through the cheaply-inspired loyalty of the less-disciplined casual couponers, who halfheartedly mimic the extreme
Gollums.
If you want more validation, simply visit a Vegas casino and wait for
someone to win reasonably big. You will see the exact same applause and
encouragement from the staff. And the applauding front-line service
employees in both cases aren't faking it. They genuinely believe the little guy has beaten the house rather than provided it with cheap marketing. If you've been reading this site for a while, you should be able to figure
out why the applause is genuine (hint: losers).
On the other side of the dividing line, you have the hollow shells of
human beings profiled on Hoarders. These are human beings whose
patterns of addictive consumption have reduced their homes to toxic
garbage dumps. Literally. The interventions are triggered by the threat of
having their residential properties (you can hardly call them homes) condemned by health inspectors. Where extreme couponers carefully stockpile supplies in their garages under relatively sanitary conditions, the
hoarders have homes full of refuse, decay, cockroaches and mold.
One episode almost made me throw up: it featured an elderly woman,
a real-life Miss Havisham, who began buying dolls to cope with some
traumatic life event. She lived in a house that was packed with thousands
and thousands of dolls. In all my years of television watching, I have seen
few creepier scenes than this one: the interventionists gingerly parading
her dolls past her, one at a time, allowing her to make individual keep/give
away decisions, letting her have just enough of a sense of control over the
intervention to avoid triggering a full-blown psychotic episode.
Here is what should worry you: both the extreme couponers and the
hoarders map better, conceptually, to the center of our consumerist world
than to the margins. The margins are for drop-out exiles who have
managed to flee sufficiently far away that they can live semi-redeemed
human lives. Couponers and hoarders, by contrast, straddle the event
horizon of the black hole at the very heart of things.
And around the black hole, sandwiched in an annular ring between
the full-blown Gollums and the exiles, is the mainstream world you and I
inhabit. Not far enough out to have escaped, not close enough to have
been torn apart and assimilated like couponers and hoarders.
The mainstream world, as I said, is characterized by the reassuring
faux-variety that stands in for diversity, within which individual
uniqueness is replaced by the faux-uniqueness induced by a sufficiently
rare combination of consumption choices.
This is a universe within which your doppelganger is not an eerie
existential twin with whom you might share a mystic bond, but merely
that hard-to-find person who also happens to live at the intersection of Coke-over-Pepsi, McDonald's-over-Burger King, DC-over-Marvel and Nike-over-Reebok.
Rather curiously, the Harry Potter series manages to incorporate both
kinds of connection in the relationship between Harry and Voldemort: the mystic connection created by Harry's scar, and the more prosaic one created by the twin phoenix feathers in their respective wands, from the same phoenix.
Any day now, I expect to see a doppelganger app on Facebook based on Likes. It will likely be named "phoenix feather."
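(A minimal illustrative sketch of how such a matcher might work; the Likes, names and cutoff below are invented, and "phoenix feather" is just the imagined label from above, not a real product. Treat each person as a set of Likes and declare a doppelganger when the overlap is high enough.)

    # Hypothetical "phoenix feather" doppelganger matcher: two consumers match
    # when their sets of Likes overlap heavily. All data and numbers are made up.
    def jaccard(a, b):
        """Overlap between two sets of Likes: 0 means disjoint, 1 means identical."""
        return len(a & b) / len(a | b)

    likes = {
        "you":    {"Coke", "McDonalds", "DC", "Nike"},
        "me":     {"Pepsi", "Burger King", "Marvel", "Reebok"},
        "double": {"Coke", "McDonalds", "DC", "Nike", "Starbucks"},
    }

    CUTOFF = 0.7  # invented threshold for declaring a doppelganger
    for name, their_likes in likes.items():
        if name != "you" and jaccard(likes["you"], their_likes) >= CUTOFF:
            print(name, "is your doppelganger")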
When that happens, the black hole at the center of our universe, now
equipped with a social-graph fishing net, will begin gaining mass at an
accelerating rate, drawing more of us into the embrace of subterranean
Social Gollumization, caught up in some surreal world of addictive,
mobile-app-based coupon-trading games.
From Customers to Consumers
In a rather popular post of mine from a while back, I derived, from
Druckerian first principles, a definition of a customer.
A customer isn't a human being. A customer is a novel and stable
pattern of behavior.
I have since reused that definition in other popular posts, which have
served to validate its soundness. But with each new and successful post
that rests on that definition, I become more uncomfortable about its
implications.
When I came up with the definition, I finessed its obviously dehumanizing implications with the idea that it was merely a functional
definition that relied on an aspect of the underlying human being. The
whole, I allowed myself to believe, was still fully human, and greater than
the isolated stable behaviors of interest to the marketer.
I now believe that is a deeply disingenuous stance, based on a
perverse assumption that combinatorial consumption of a sufficient variety
of products and services is equivalent to fully-experienced humanity.

I believe that the definition of customer unfortunately implies another definition: of an abject inhabitant of the macro-economy called a consumer:
A consumer is a human being reduced to the sum total of the behaviors that define his/her various customer-roles in relation to the products and services s/he consumes.
This ideal addict of an abstract economic process (the One Ring is
perhaps shopaholism) is also the perfect Gollum.
While each business is morally responsible for the individual behavior (the customer role) that it creates, the problem is that no one product or service can be deemed culpable for the creation of the emergent sub-human: the consumer.
Each business, in codifying the microeconomic behaviors that define
its customer, contributes to making the market as a whole more
reductively legible, in the sense of James Scott. We become, as I have
said, our psychographic personas, defined by our Likes rather than our
likes.
Is there any kind of escape that does not involve couponing on the
edge of hoarding-madness, or log-cabin survivalism?
Beyond Gollumhood
If you've been following my writing, you know that I've been inching
reluctantly towards this contrarian position with respect to prevailing
marketing orthodoxy (especially the uncritical, unironic and frothy 2.0
kind). I have been reluctant to talk openly about this viewpoint, because I
know a lot of you are believers in the raving fan/raving tribe school of
marketing, and I know you try hard to view your customers as humans,
even as you think about how to acquire and retain them.
In my own marketing work (of products that I hope liberate rather
than enslave, including this blog), I have been extremely reluctant to
engage in raving fan/raving tribe tactics.

On this blog in particular, I have immediately disengaged from anyone who shows any signs of becoming a true raving fan (and there
have been a couple whose obsessive and uncritical consumption of my
writing has bordered on stalking). If marketing discipline is about being
willing to fire your customers, I am a terrible marketer: I fire my best
customers instead of my worst ones.
I do not want sub-human addicts around me or anything I help
market, let alone entire zombie-tribes of them. Perhaps I will live to regret
this decision. Or perhaps there is a sustainable economic model that does
not involve zombie-tribes at all.
There's a good deal more to be said here. When you look at the other
side of the free market, at entrepreneurs and capitalists in particular, very
troubling questions arise. Starting with this one: within the Lord of the
Rings metaphor, what do Dark Lord characters like Sauron really want?
So the ideas in this post are threatening to snowball into yet another
series (these serieses consumes us! my precious!), which I may or may not
continue, depending on the reactions to this piece.

Peak Attention and the Colonization of Subcultures


January 27, 2012
Coded, informal communication (significant messages buried inside innocuous messages) has long interested me. I don't mean things like "NX398 VJ899 ABBX3" that the NSA might deal with (though that's related). I mean things like this:
You: "let's get coffee sometime"
Me: "Sure, that'd be great"
We both know that the real exchange was:
You: "let's pretend we want to take this further"
Me: "yeah, let's do that"
The question of how such coded language emerges, spreads and
evolves is a big one. I am interested in a very specific question: how do
members of an emerging subculture recognize each other in public,
especially on the Internet, using more specialized coded language?
The question is interesting because the Web is making traditional
subcultures (historically illegible to governance mechanisms, and therefore hotbeds of subversion) increasingly visible and open to cheap,
large-scale economic and political exploitation. This exploitation takes the
form of attention mining, and is the end-game on the path to what I called
Peak Attention a while back.
Does this mean the subversive potential of the Internet is an illusion,
and that it will ultimately be domesticated? Possibly.

Mining Subcultural Attention


Manipulation of subcultures through the Internet has been limited to
date because the tools are still very new. The mining of large reserves of
attention (the largely one-way kind directed at work, beautiful sunsets, or the manufactured pop celebrity du jour, for instance) is now a mature science.
Social attention though, trapped within relationships, is the shale oil
of attention mining. The institutional world has not yet learned to
efficiently mine the attention that is locked up today within subculture-scale social interactions.
As they learn over the next decade, today's garden-variety subcultures
will turn into docile and domesticated micro-markets for businesses, and
micro-constituencies for politics. They will cease to be subversive threats,
much as the old labor movement, which formed as a reaction to Gilded
Age capitalism, ceased to be a threat within about a century. The world
moves faster now. The new models of subcultural collective action, I
predict, will last less than a decade or two before they become irrelevant.
All attention that lives within subcultures is now vulnerable to external
control.
Their weakness is that they seek to externalize their structure into
digital institutions. Loose and transient P2P network institutions perhaps,
but still institutions, due to their reliance on externalized trust, impersonal
organizing principles and most importantly, social scaling.
They rely on the power of numbers rather than intelligence. Smart
mobs are still mobs. As we will see, they are vulnerable to control, and
attractive targets for attention mining. Rather ironically, most of the
mechanisms required to observe and control subcultures are being
invented by subcultures themselves. External forces are merely stepping in
to co-opt them.
But let's return to coded communication. That's where our journey begins.

Impersonal Secret Handshakes


The bulk of coded communication is designed to sustain the polite
fictions of civil society, to limit relationships to the depth of immediate
transactions, as in the example I started with.
But a proportion of such communication goes the other way: it serves
to deepen relationships. Some of this is a matter of widespread convention
and ritual, like the classic "would you like to come upstairs for a drink?"
This one is not particularly interesting, because there is no content beyond
the accepted meaning of the ritual incantation. It is visible culture, not
invisible subculture.
More interesting is coded communication that allows members of a
subculture to recognize and interact with each other, without an
institutional context.
The most common way to do this is to use a linguistic motif that
signals membership of a subculture, via reference to a recognized
subcultural text.
If I use the word "discourse" in a specific way, it will signal baseline membership in the postmodernist-pretender subculture.
If I begin an essay with the words "You can check out of Facebook any time you like, but you can never leave," the dropped reference signals a basic awareness of American music to others with a comparable awareness, but seems merely like an odd turn of phrase to others (my parents, for instance, would not get this reference).
If you understand the coded message, you'll respond with a coded message of your own that shows that you got it (perhaps using a phrase like "always-already" in the first case, or with a reference to a different
classic song in the second case).

These are impersonal secret handshakes and have existed forever. They are based on shared cultural texts like the lyrics of Hotel California
or immersion in the peculiar vocabulary of an academic subculture.
Hipsters might distinguish themselves from generic pop-culture
aficionados by dropping references from Haruki Murakami novels instead
of Hotel California, but it is still an impersonal secret handshake, since it
is based on recognized common knowledge (stuff that everybody knows
everybody knows) within an existing group, defined by its core texts.
The membership precedes the mutual recognition, and the secret
handshake serves to validate membership of the group rather than
knowledge of the text. The text is a social object with a limited role (note
that the manufacture of social objects is slowly becoming a codified
science in its own right, a development that is part of the ongoing
colonization of subcultural attention).
Impersonal secret handshakes are fundamentally weak, and the groups
they protect are vulnerable to infiltration in very basic ways. Since the
group is defined by impersonal texts that serve as common knowledge,
strangers can acquire knowledge of the same impersonal texts and become
pretenders (such as trustafarians faking poverty to gain access to hipster
culture). Some subcultures are much easier to penetrate than others (the
cute-kitten-picture subculture for instance), but they are all vulnerable.
Vulnerable to what or whom? To answer the question, we need to
switch gears and talk about patterns of social organization for a bit, and
where subcultures fit in the larger scheme of things.
Patterns of Social Organization
We are used to thinking about the global social order in terms of a
class-culture matrix. This is the scheme upon which institutional social order (the world of nation-states, corporations and religions) is based.
When you rebel, this is the scheme you try to disrupt. Both types of
groupings rely on recognizable markers and boundaries to distinguish
themselves from others, and cryptic in-group behaviors and language to
sustain necessary opacity.

When a great deal of power is involved, cryptic in-group behaviors can give rise to a refined inner core of formal institutional secrecy,
creating a hidden social order. Though they increasingly seem ludicrous
today, secret societies have always been an essential part of maintaining
the social order, becoming more or less visible in concert with the waning
and waxing of institutional power.
This class-culture organizing scheme is best understood as a global
matrix. It is global in scope because it documents mutual recognition
between maximally-distant parts: the Chinese Party-Member/Non-Member distinction is recognized globally, as is the American
Republican/Democrat distinction. It is a matrix because it is understood in
ordered, visual-spatial terms. Class is horizontal, culture is vertical. This
abstract visual ordering induces a literal geographic ordering. So rich and
poor, black and white, sort themselves out at every geographic scale from
town to nation, fractally embodying a fundamentally simple scheme.
There is another type of social organization, based on subcultures, that
has historically served as a check and balance to the power of the class-culture matrix.
Contrary to popular belief, subcultures are not vague constructs. They
have a precise, if negative, definition: a subculture is a pattern of social
order that is not worth codifying and institutionalizing for the purposes of
governance or economic exploitation, under normal circumstances. So
subcultures have historically relied on their obscurity, illegibility and
unimportance to ensure autonomy and security.
The very existence of a subculture is only known to neighboring
subcultures. This limited local visibility suggests that the world of
subcultures is not a matrix, but a web. Classic Rock fans can tell Punk
Rock apart from other kinds. It all sounds the same to a non-Rock-fan.
Imperceptible distinctions that make no difference in the larger scheme of
things.
Under abnormal circumstances, when seditious sentiments are
brewing in the subcultural web, the zero-sum game of power swings in its favor, causing a reaction from the class-culture matrix: increased and more visible action by the hidden institutional order to restore the balance.
When slums start to seethe, the secret police gets going in not-very-secret ways.
If the slums win, subversive subcultures become institutionalized, and displaced institutions turn into subcultures. If the slums lose, things stay roughly
the same. Either way, the scheme of social organization remains the same:
a balance of power between an institutional class-culture matrix and a
subcultural web.
This is the world we are used to, and this is the world the Internet is
changing. The subcultural web is now being made legible and governable
under the harsh light of Facebook Like actions. Just in time too, since the
returns on coarser forms of political and economic exploitation are now
rapidly diminishing. Obama's victory in the last Presidential election, and
the penetration of entities like Groupon into local food subcultures, are
just the early signs of where we are headed.
This is a contrarian conclusion. Most commentators today are arguing
that the subcultural world is getting stronger, more incomprehensible and
increasingly ungovernable.
This is a mix of an illusion, a poor sense of history, and the effects of
a temporary learning phase on the part of class-culture matrix institutions.
The world of subcultures is about to be comprehensively explored,
mapped, tamed and domesticated. The larger the subculture, the faster it
will fall.
The subcultural web looks increasingly incomprehensible (and
therefore stronger and more ungovernable) to you and me as humans. It
does not seem incomprehensible if you peer at it through the increasingly
sophisticated instruments of digital governance. Facebook is to marketers
and politicians what Google Maps is to travelers.
The poor sense of history is due to the passing of the last living
generation that experienced truly terrifying levels of global conflict.

Twitter revolutions pale in comparison to World Wars and the immense conflicts of the nineteenth century.
Which brings us to the only serious reason behind the temporary
resurgence of subcultural power on an overall downward trajectory:
learning lag in the institutional world.
The Taming of Subcultures
I remarked earlier that subcultures are sub-institutional in resolution.
There is no Federation of American Hipster Societies with a national
president and member organizations each with their own chairpersons,
badge-printing machines and envelope-stuffing volunteers. There is no
Annual National Hipster Convention that attempts to influence elections,
and no zoning ordinances and tax laws that specifically target hipster
neighborhoods. And perhaps most importantly, there is no master email
list of hipsters that you can use to survey and promote.
But just because subcultures lack impersonal institutions in the
traditional sense does not mean that they are personal patterns of social
organization. They are not. They are merely illegible to the class-culture
matrix working with pre-Internet tools.
Since they only serve a subset of the functions of formal organizations
(relying on the class-culture matrix for basics like cars and underwear),
they need fewer pieces of externalized infrastructure.
Shared common knowledge texts are often enough. Secret handshakes
serve the purpose of one-to-one mutual recognition, and three-way
introductions are enough to allow small local groups to cohere. Dress
codes, popular haunts and the active-use texts change slowly enough that
secret handshakes suffice for all information diffusion. No envelope
stuffing or email lists are needed. Punishment for defection (shunning and expulsion) is generally weak and local, because the value of
membership is generally weak and local (friends to hang out with, parties
to go to, a local economy of favor trading).

Before the Internet came along, it was the sheer number and
insignificance of local subcultures that made governance too expensive to
bother with. The risk of the rare seditious uprising could not justify the
cost of more fine-grained pre-Internet governance mechanisms.
Businesses sold a modest selection of mass-produced shoes, for instance, and produced more of the varieties that sold better. It wasn't
particularly useful to know that hipsters liked Converse sneakers. For
politicians, a coarse color-coding of Red and Blue states (in America) and
a certain amount of county-level intelligence sufficed to inform election
campaigns.
The Internet, though, has changed all this. It has allowed subcultures
to scale (by moving their secret-handshake institutions online), and
become more valuable in the process. While mass-manufactured celebrity
cultures have been weakening, we are not returning to pre-mass-media
patterns of local culture. Instead, we've evolved to mega-subcultures that
scale without developing institutions.
And at the same time, the visibility of subcultural behaviors has made
governance and exploitation much cheaper and easier. You don't have to
go to a specific neighborhood, in specific clothes, and drop specific
references. You can sit at your desk, dress any way you want, and fake
your way into any subculture. Long enough to sell a whole lot of shoes.
It will not take long for businesses and politicians to completely
master this game.
The outcome is inevitable. Subcultures will be comprehensively
tamed. Institutional sociopaths within the class-culture matrix are now in a
position to detect and take control of subcultures before they even come
into existence. This will lead on to control over the very inception of
subcultures.
The Fabrication of Subcultures
Subcultures are vulnerable because they form around shared
common-knowledge texts (even if the shared text in question comprises nothing more than a particular vocabulary of new urban slang). In Web terms, today's "invisible to all but the eye of Big Data crunching AI" pattern of preferences is tomorrow's subcultural small world on the global Interest Graph. And tomorrow's Interest Graph is next week's Social Graph.
The day is not far off when Amazon will be able to predict, based on
book-sales correlations in a given geography, the formation of a new
subculture before the first defining event (say a party where an origin-myth is created) ever takes place. It won't be long before influence mechanisms emerge, to complement the detection mechanisms.
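(To make the detection half concrete, here is a purely illustrative sketch, with invented baskets, geographies and thresholds, and no claim that this is how Amazon actually works: flag a geography when some pair of titles is being co-purchased there by an unusually large share of local customers.)

    # Toy detector for a "subculture-in-formation" signal: a cluster of titles
    # being co-purchased in one geography. All data below is invented.
    from collections import defaultdict
    from itertools import combinations

    # purchases[geography] = one purchase set per local customer
    purchases = {
        "94110": [{"zine_a", "synth_manual"}, {"zine_a", "synth_manual", "tape_loops"}],
        "10001": [{"bestseller_1"}, {"bestseller_2"}],
    }

    def co_purchase_counts(baskets):
        """Count how often each pair of titles shows up together locally."""
        pair_counts = defaultdict(int)
        for basket in baskets:
            for pair in combinations(sorted(basket), 2):
                pair_counts[pair] += 1
        return pair_counts

    THRESHOLD = 0.5  # invented cutoff: share of local customers co-buying a pair
    for geo, baskets in purchases.items():
        for pair, count in co_purchase_counts(baskets).items():
            if count / len(baskets) >= THRESHOLD:
                print(geo, pair, "looks like a shared text forming")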
Today, naive marketers try to clumsily set up online communities
framed by their products or services, to attract target subcultures, and
generally fail.
Somewhat smarter ones try to own relevant conversations, based on
identifying core subcultural texts that are adjacent to the product-positioning conversation (the classic example is: want to own the teen tampon market? Set up a community for girltalk). This is marketing-by-peripheral-vision.
The smartest ones try to infiltrate and co-opt existing subcultural
communities online.
But all these mechanisms have had very limited success. Because they
are all about taming wild subcultures.
But once marketers working with Big Data get ahead of the cultural
curve, you can expect the balance of power to shift decisively in their
favor. From detecting subcultures before future members themselves do,
to actively seeding, breeding and shaping desirable subcultures, is not a
big leap to imagine. It will be a world of pre-cognitive marketing, run by
quants in data vats.
Taming will turn into domestication.

Today, the marketing machine can at best put its muscle behind a
Justin Bieber and create coarse, large-scale culture whose manufactured
nature is obvious to all but the dimmest of observers.
Tomorrow, it will be able to create tiny, niche cultures whose
members will either sincerely believe that the subculture is their own
creation, or ironically not care that it has been manufactured for them to
find through engineered serendipity.
A sort of Moore's Law of cultural fabrication will get underway, and it will eventually be capable of etching an entire subculture within a few city blocks.
Heck, let me go out on a limb and make a Moore's Law-type prediction: the smallest manufacturable subculture will halve in size and transience every 18 months. In 10 years, we'll have a microprocessor moment: the ability to etch culture at a one-city-block-for-one-month level of resolution. Working in concert with neo-urbanists, the
new marketers will be able to pack a thousand domesticated hyperlocal
subcultures in every major city, and entirely reprogram it culturally every
few months, to sell a new crop of products and services.
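(For concreteness, the arithmetic behind that prediction, and nothing more: halving every 18 months over 10 years is roughly six to seven halvings, i.e. about a hundredfold shrinkage in the smallest subculture that can be fabricated.)

    # Back-of-envelope check of the halving prediction; illustrative arithmetic only.
    years = 10
    halvings = years * 12 / 18       # one halving every 18 months -> ~6.7 halvings
    shrink_factor = 2 ** halvings    # ~100x smaller, shorter-lived subcultures
    print(round(halvings, 1), round(shrink_factor))   # 6.7 102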
That future (either utopian or dystopian, depending on where you
stand) is a ways off, but we'll get there.
Three of the four companies that dominate the Web today, Facebook (Like patterns), Google (search patterns) and Amazon (purchase patterns),
are equipped with extremely powerful cultural early-warning radars, based
on massive data flows. Data flows so massive that only large institutions
within the class-culture matrix will have the power to crunch them into
usable intelligence.
Apple, the fourth company, curiously does not have the capacity to
lead the zeitgeist this way. Their historic competitive advantage (the mind of Steve Jobs) has turned into a serious weakness with his passing.
Because he was preternaturally good at following the zeitgeist, Apple
squandered its potential to lead it. A key kind of cultural early-warning
radar (based on music tastes) was ceded to startups. It was cheaper to let
Jobs stay one step ahead of other gut-driven pre-Internet marketers than to invest in assets that could be exploited by less-talented post-Internet data-driven marketers, capable of staying ahead of culture itself.
This is why Bruce Sterling was right to label Apple an example of
Gothic-High-Tech zeitgeist following rather than zeitgeist leading, but I
believe he is wrong in thinking that all marketing is going to be this way;
much of it is now going to get ahead of the zeitgeist and actively shape it,
within the decade.
As a revealing sign, it is noteworthy that subcultures have already
been subverted so completely that they voluntarily self-document their
doings online on privately-owned platforms. Every party or group lunch is
now likely to be photographed, video-taped and archived online as part of
collective memory. Group-life streams and grand narratives are out there,
for the reading.
If you're not paying, you're the product. Indeed.
But the nitty-gritty aside, the conclusion is inevitable. The subcultural
web is now open for colonization. It will retain a potential for very coarse
and rough kinds of subversion (#OccupyWallStreet is sort of the Swan
Song of subcultural power). This potential will soon peak, and then begin
to decline.
The Fortune at the Bottom of the Attention Pyramid
How big is the potential value of subcultural attention mining? The
rumored valuation of the Facebook IPO provides a hint: $100 billion. That
suggests a market that is big enough (when you consider all players) to move global GDP a few percentage points. Is that a lot or a little?
Depends on your frame of reference.
One way to frame the value is to imagine a pyramid of social
groupings, representing various levels of social attention (not attention
devoted to the non-human world).
At the bottom you have 7 billion little pools of individually-directed
attention. At the very top, you have a single point, the group called humanity. There are moments, like 9/11, when all available attention
floods to the top.
One organizational rung below, you have perhaps 18 groupings at the
coarsest resolution level of the global class-culture matrix: the three basic
social classes (rich, middle-class, poor) times the half-dozen or so major
civilizations.
Then you have perhaps 700-odd nation-class groupings, and so on
down, past cities, kinship groups, traditional family-societies and various
other kinds of groupings that were long ago domesticated and subsumed
within the class-culture matrix.
At some level of resolution, past a gray transition zone, the class-culture matrix gives way to the untamed subcultural web. The gray zone is
moving relentlessly downwards, domesticating the subcultural web and
subsuming it within the class-culture matrix.
This is not like the fortune at the bottom of the C. K. Prahalad
pyramid. This is the cultural equivalent of the "plenty of room at the bottom" remark by Richard Feynman, which serves as inspiration today
for the entire field of nanotechnology.
Except that there isn't plenty of room. Though the social space
occupied by the subcultural web is vast, it is being domesticated so fast
that we can expect complete colonization within a decade. Recall what
happened with the nineteenth-century railroad boom in America.
Settlement processes that had been crawling painfully along for three and
a half centuries, suddenly accelerated and finished the job within a few
decades (the marker was a major 5-year depression that began in 1873).
So from that perspective, $100 billion seems both reasonable and not
particularly large. It seems like a market that should take no more than a
decade to occupy. At that point, I'd expect Facebook to turn into a mature
company with declining margins.
At that point, we will hit the limit I called Peak Attention. Once all
subcultural attention is mined, only two kinds of attention will remain: the attention currently trapped within personal relationships, and the attention controlled by individualist instincts.
Both are likely to be resistant to industrial-scale attention-mining
techniques. All genuine subversive instincts will retreat to these lowest
two layers of the attention pyramid: groups of size one and two
respectively (there are likely around half a trillion one-on-one
relationships in the world; I'll leave you to figure out why).
We will move past Peak Attention, and a new game will begin.

Acting Dead, Trading Up


and Leaving the Middle Class
December 8, 2011
I want to share the story behind approximately $2700 worth of my spending this year that reveals how I am finally starting to leave the middle class, materially, financially and psychologically. No, I am not moving up into the rich class or down into the poor class. I am doing something complicated called "trading up."
This $2700 is money that, if I'd decided to pull the trigger and spend it a few months earlier, would have spared me a ton of unnecessary frustration. Why didn't I spend it when I should have?
One reason is that I still have residual middle-class financial
programming in my head, expertly misguiding me to the wrong answers.
Getting it out of my head feels like getting a bad malware and virus
infection off a computer. It is painful and messy, and there are really no
completely reliable tools that work in all cases. And you're never quite
sure if you got the last infected file off the system, when the infection is
really bad.
Another reason is that I was (and remain to some extent) guilty of
what science fiction writer Bruce Sterling calls "acting dead": being
irrationally averse to spending money where it matters, in a misguided
attempt to save money to the point that the behavior paralyzes you. A
large segment of the middle class is starting to act dead these days. Which
makes sense since the class itself is dying. To stop acting dead, you have
to resolve to exit the traditional middle class as well, unless you want to
go down with it.

Not acting dead involves a strategic spending pattern that marketers are starting to call "trading up": buying premium in some areas of your life, while buying budget or entirely forgoing spending in other areas. This pattern of conscious, discriminating consumption defines the emerging replacement for the middle class. As the picture above illustrates, there isn't really one New Middle Class. Instead, it is a fragmented social space, with each little island being defined by a specific pattern of trading-up, and an associated lifestyle design script.
This effect is sort of the opposite of what I called Gollumization
earlier this year: unthinking, undiscriminating consumption to the point
that consumption defines you.

There's a pretty neat book about it, Trading Up by Michael Silverstein and Neil Fiske, which you should read if you, like me, have exited or are
planning to exit the traditional middle class.
But back to acting dead and my $2700, which I'll use as my
running example to get at various things.
The Dead Great-Grandfather Test
Sterling was using the term specifically to describe the hairshirt-green lifestyle that is driven by eco-anxieties. For hairshirt-green types, life is all
about saving water, recycling, composting, reducing eco-footprints and
various other behaviors marked by a kind of fearful, non-generative retreat
from living. Permanent existential hibernation.
Sterling's rule of thumb for spotting acting-dead behaviors is a great one: if it's something your dead great-grandfather can do better than you, it's a case of acting dead. Your dead great-grandfather uses no water or
plastic, and is actually recycling himself as we speak, not just his
possessions. Try and top that.
But acting dead goes beyond hairshirt-green behaviors. While spartan
frugality is a virtue, when it becomes the entire purpose of your life,
there's a problem. For a portion of the dying American middle class,
frugality has turned into a life purpose.
An example is extreme couponing, which is why I used that as an
example of radical Gollumization. It is saving gone amok: never buying
anything not on sale (and therefore never buying things that never go on
sale) and systematically being a jerk to businesses that may be running
loss-leader sales to get new customers.
So how should you spend?
Spending Money
In his talk, Sterling offers up a simple rule for how to spend money. If
it is something you use a lot every day, spend the money, and get the good stuff. Don't buy cheap. Look for deals, but don't let deal-seeking make you compromise on quality or wait too long. It will cost you more in the long term. Sterling's examples are obvious and physical: a good-quality bed and work chair, for instance. You might spend up to 8 hours a day in each; that's 2/3 of your life.
I own both an excellent bed and a great chair. I am not sure the latter
was a good investment for me in particular, since I spend most of my
sitting hours in coffee shops, but in principle, it is a great example. Other
examples include: a great kitchen knife, a nice car if you spend many
hours commuting per day, plenty of quality gym clothes and a membership
at a good gym, so you never have an excuse not to work out. Good quality
produce to cook with.
If you work mostly at your desk, a large monitor. Heck, multiple
monitors. The best keyboard.
Sterling also has ideas on what not to buy, or what to get rid of if you already own it. Expensive china sets, for example, if you never do any formal
entertaining. Things you think are assets but are actually liabilities. Things
you are being unnecessarily sentimental about.
Sterling's ideas seem to have been independently rediscovered by a
growing segment of the middle class. Hence the phenomenon of trading
up (the book has lots of data and anecdotal evidence for the trend).
I think of these sorts of examples as physical furniture. Stuff in your
life that can make it hoarder hell if you buy the wrong things, or heaven if
you buy the right things.
$2700 Worth of Acting-Dead
My acting-dead behaviors this year were more about mental furniture.
Here's the breakdown of the $2700 that I eventually spent when I stopped
acting dead:
1. About $250 to get Tempo converted to epub and Kindle formats
2. About $300-odd to get an agent to file some Nevada business paperwork for me
3. $2100 for a Matlab (scientific computing software) license
In each case, I procrastinated for months, with the vague idea of
saving money. Actually, it was worse than mere procrastination, since I
was expending useless effort. In each case, my dead great-grandfather
could have achieved what I did around those tasks during those months:
nothing. And he'd have done it more efficiently.
In the first two cases, I tried to do it all myself, even though I have an
aversion to fussy kinds of technical formatting work and paperwork to the
point that they should count as phobias. When I finally pulled the trigger
and outsourced the work, it was like a major load being taken off my
mind, coupled with severe regret for the time already spent on pointless
frustration.
In the third case, it was again about saving money. I spent months
mucking around with Python, R and various other open source alternatives
to Matlab. Here, the messiness of having to deal with unwieldy and weakly-integrated open-source tools, along with my own serious aversion
(similar to my paperwork aversion) to fussy configuration issues, and my
generally poor ability to pick up new programming skills, had me wasting
months in frustrated spinning-of-wheels.
And in the meantime, I was not doing things I wanted to do, simply
because I was too cheap to buy a quality tool that I was familiar with, and that could save me months of painful learning (especially painful now due to
the Python 2.x to 3.x transition). As with the other two cases, finally
pulling the trigger made me intensely relieved.
You could say that each poor decision (each a case of delaying the
right decision) was caused by specific phobias, aversions and irrationality.
But there is also a general pattern here. I really was not able to
rationally assess the costs and benefits of each decision until after I had
persisted with the wrong decision for months and made the right decision
out of frustration. I could only see the simple logic after I'd made the right
decision and stopped rationalizing the wrong one.

The general pattern that causes such poor decision-making is the middle-class financial script.
The Middle-Class Financial Script
The middle-class financial script is simple, really. It involves uniform
spending habits within a large class, based on norms that are learned via
imitation.
If you are in the middle class, you are expected to own certain things,
do certain things and do so at quality levels that exceed the quality
purchased by the poor class (if they purchase that category of things at all)
but don't hit luxury levels.
You are also expected to not buy certain things that are either above or
beneath you, or do certain things for yourself. Vanity, humility and a
sense of entitlement are all at work here. For the middle class, there are
things that are beneath your station and things that are above your station.
For the rich and poor, things are much more one-sided.
To take some simple examples, you'd be looked upon with suspicion
if you bought a car that was either too luxurious or too cheap for
somebody claiming middle class status. You are expected to vacation in
certain places and not others.
In fact, imitation and uniformity in consumption define the middle
class. In countries where the middle class is burgeoning instead of dying,
especially in Asia, the growth of the class is tracked via measurement of
ownership rates of certain typical goods at typical quality levels. By
contrast, there is much more variety in how the poor are poor, and how the
rich are rich.
Why does the middle class script (or any script) exist?
Mainly because it makes financial management easy. Constantly
computing the total costs of ownership, potential returns and risks around all spending decisions is hard. And it doesn't seem worthwhile when the income side is predictable and comfortable. Why bother to control costs when revenues are fixed and somebody else has already made up a
predictable-costs script with reasonable margins designed to get you
through retirement?
In other words, the middle class in recent history has been defined by
its ability to both earn and spend money in very predictable ways.
Then of course, the risks started creeping back in, around 1980,
slowly at first, and then with increasing rapidity over the last few years.
All the things the middle class relied on (job security, defined-benefit pensions, affordable mortgages, predictably rising real-estate values) began to break down, one by one.
But autopilot spending has persisted, long after the new patterns of
exposure to financial risk have become clear. The reason, of course, is that the old financial habits were not really financial per se; they were driven
by class norms rather than financial risk-management calculations.
My own examples are a case in point. My behavior is readily
explained with reference to middle class norms:
1. The eBook conversion example: Middle class people do not hire
other middle class people outside of a few approved exceptions
such as doctors, lawyers and accountants; they work for the rich
and hire the poor.
2. The business paperwork example: Middle class people do not
indulge in luxuries like hiring administrative help to do
paperwork. That's for rich people with complicated financial
affairs. Honest middle-class people should be able to do their own
paperwork, with at most some professional help at tax time.
Needing help probably means you are up to shady things.
3. The Matlab example: Middle class people do not pay for their
tools. In fact, they shouldn't need tools beyond the basic tools of
literacy (books, pen and paper 100 years ago, a computer today).
Poor people use specialized tools. Rich people buy them. Middle
class people merely supervise the use of the rich people's tools
(capital) by the poor (labor). Even today, if you use specialized
tools to work, your membership in the middle class is suspect.

Above all this, the middle class script involves a certain aversion to
talking about or dealing with tough financial decisions. It is considered
unseemly. Decent people don't talk about money, let alone risk. If you work hard and play by the rules, the money should take care of itself. If it isn't doing that, you are probably looking for dishonest and exploitative
shortcuts like the evil rich or doing dumb things like the stupid poor, and
deserve what you get.
If you have to budget and watch your money too closely, you were
probably being irresponsible with credit cards and deserve your pain. For
decent people, paycheck-in, on-time-credit-card-payments-out should
work smoothly on autopilot.
And above all, you don't speculate. If forced to speculate by pensions being turned into 401(k)s (American stock-based defined-contribution retirement plans), decent people leave the actual risk-taking decisions to professional fund managers, telling themselves things like "you cannot beat the professionals."
So what will happen to people operating by such obviously dangerous
attitudes in difficult times?
Turns out, we've been here before. They'll die out.
Middle Class Declines in History
This is not a new phenomenon. Middle classes have appeared and disappeared several times before in history.
Tennessee Williams' plays (A Streetcar Named Desire, The Glass
Menagerie) tell exactly such poignant fall-from-the-middle-class stories
set in early 20th century America.
Early twentieth century British novels set during the decline of empire
(such as Agatha Christie novels) often contain aging spinsters desperately
keeping up appearances and surviving on small incomes derived from
being companions to richer old women.

You can also find examples outside the Western world. In nineteenth-century India, for example, the Urdu- and Sanskrit-literate middle classes, which had grown around the courts of the Nawabs and Maharajas in older medieval cities, went into severe decline. The new English-literate middle class began supplanting them in the newer cities of the British Raj.
I suspect similar middle class declines can be found in the Middle
East (during the Ottoman decline), China (after the Boxer Rebellion) and
Latin America (after the Monroe Doctrine perhaps? I am not too familiar
with Latin American history).
When a middle class goes into decline, you get a large segment of the
population engaging in a desperate scramble to keep up appearances,
while switching from collective-norm-based to individual-risk-based
financial thinking.
Keeping up with the Joneses becomes far harder, because the financial
support starts to collapse at different times for different people, but
everybody agrees to pretend that everybody is in it together. For the
current American decline, there have already been a couple of good
movies chronicling it: The Joneses (2009) and The Company
Men (2010).
A norm-based social class will persist with disastrous financial
choices long after the secure financial environment, on which its scripts
are based, collapses. Simply because membership of the class is the source
of all social identity and access to social capital.
Except that the social capital, which the members are clinging to, is
eroding rapidly as well. There is no point in two non-swimmers with
immense trust between them, clinging to each other while drowning.
Mutual trust and social capital within a group only mean something when
there are objective reasons to expect a prosperous future of indefinite
length stretching out ahead.
When this is not the case, it makes sense to cash out your hard assets,
rethink your financial life more directly, write off investments in the social

capital of the declining class, and look for an alternative emerging class to
join.
Trading Up and Fragmentation
As the picture I started with shows, a key effect of the trading-up
phenomenon is that it causes serious fragmentation. The social landscape
starts to get restructured along new lines. Cultural geography changes, as
governing financial scripts change from one city block to the next (you see
a lot of this in San Francisco in particular).
The transition from a monolithic middle class to one of many trading-up classes is a very tough one. First, you have to go through a period
where you manage your finances very directly, with no help from a script
that simplifies decision-making.
Then you have to evaluate various alternative trading-up scripts to
figure out which ones might actually fit your situation and encode
meaningful adaptations to the new environment. Not every lifestyle design
script is likely to work.
In the last few months, going back to the broader context of my three
examples, I've done a good deal of very direct financial decision-making.
I've made up detailed scenario planning spreadsheets, risk models and the
like. I've done minute tracking of spending (only for a month, to sort of
calibrate; it is far too difficult and depressing to do on an ongoing basis).
Here's the funny thing: doing this kind of very direct financial
management around my small-business book-keeping felt good. It felt
smart, like I was learning valuable new skills. But doing it around personal
and household finances still felt somehow dirty. That's how deeply
embedded the middle class script is.
The three examples were interesting and particularly tough because
they bridged the two mental models: my healthy business mental model
(within which the right spending decisions would have been easy) and my
toxic middle-class-paycheck mental model (within which they were
unnecessarily hard).

Scared, Foolhardy and Brave New Scripts


Once you've worked with your finances directly for a while (it's like
working in assembly language, on a computer without an operating
system) to start the transition away from the middle class script, you have
to end the transition. Staying in limbo doesn't work.
The transition can end in three ways:
1. Prolonged Misery: You get so scared, you retreat to the middle
class and do your best to delay the inevitable.
2. Waiting for Godot: You latch onto some script and stick to it even
after it becomes clear that it isn't working for you.
3. Quick-Change Artists: You try on different scripts for size,
attempting to force outcomes and fast failures, until you find one
that fits and works, the way those quick-change artists change
clothes.
Prolonged misery makes for the best tragic literature but is entirely
unpleasant to live through. You act increasingly dead, get increasingly
frugal, gradually squeeze out all the generativity in your life, and then
finally you die.
The characteristic sign that you are practicing unhealthy acting-dead
frugality is that you cut back on core expenses that might help you be
more generative, in order to keep up appearances as long as possible.
If you are cutting back on the quality of the food you eat (trading
fresh vegetables for canned, say), in order to buy the same clothes your
friends wear, you are on the prolonged misery path. This, incidentally, may
be part of the reason why the middle class has become so attached to
recycling and other hairshirt-green behaviors (outside of the actual merits
of the behaviors) during exactly the period that the class itself has been in
decline.
Waiting for Godot is your classic arrival fallacy. You fixate on
specific narrative elements (like moving to Bali or working for 4 hours a

week), make the few big moves, and spend the rest of your life waiting for
the Big Event signifying that it is working, while slipping slowly into
destitution and denial. I see a lot of people in this mode right now.
They've never really stopped to analyze the logic of the script, but
accepted it on faith based on assurances from a few for whom it has
worked.
Quick-change artistry is, of course, the card I think you should pick. It
is a turbulent, experimental approach, where there are no absolute life
truths, no permanent commitments to any script, no one-book formulas,
and no easy no-brainer decisions.
It involves trying different trading-up patterns until you find one that
works. It involves a commitment to stop acting dead. It involves a
conscious decision to leave the middle class.
Or you can wait for all the king's men and all the king's horses to put
Humpty-Dumpty together again.
This piece is sort of a continuation of my Las Vegas Rules series, but
I've abandoned the attempt to keep a coherent larger narrative going.
This is going to be more of an occasional diary-entry sort of thing.

Can Hydras Eat Unknown-Unknowns for Lunch?


March 22, 2012
There is a fascinating set of ideas that has been swirling around in the
global zeitgeist for the past decade, around the quote that will keep Donald
Rumsfeld in the history books long after his political career is forgotten. I
am referring, of course, to the famous unknown-unknowns quote from
2002. Here it is:
[T]here are known knowns; there are things we know
we know. We also know there are known unknowns; that is
to say we know there are some things we do not know. But
there are also unknown unknowns: there are things we do
not know we don't know.
Rumsfeld put his finger on a major itch that set off widespread
scratching. This scratching, which is about the collective human condition
in the face of fundamental uncertainties, shows no sign of slowing down a
decade later. But the conversation has taken an interesting turn that I want
to call out.

Out of all this scratching, four broad narratives have emerged that can
be arranged on a 2×2 with analytic/synthetic on one axis and
optimistic/pessimistic on the other. Three are rehashes of older narratives.
But the fourth, the Hydra narrative, is new. I have labeled it the
Hydra narrative after Taleb's metaphor in his explanation of anti-fragility:
you cut one head off, two emerge in its place (his book on the subject is
due out in October).
The general idea behind the Hydra narrative in a broad sense (not just
what Taleb has said/will say in October) is that hydras eat all unknown
unknowns (not just Taleb's famous black swans) for lunch. I have heard at
least three different versions of this proposition in the last year. The
narrative inspires social system designs that feed on uncertainty rather
than being destroyed by it. Geoffrey West's ideas about superlinearity are
the empirical part of an attempt to construct an existence proof showing
that such systems are actually possible.

My own favorite starting point for thinking about these things, as
some of you would have guessed, is James Scott's idea of illegibility,
which is poised diplomatically at the origin, equally amenable to being
incorporated in any of the narratives. It is equally capable of informing
either skepticism or faith in any of the narratives, and can be employed
towards both analysis and synthesis.
I haven't made up my mind about the question in the title of the post,
but am on alert for new ideas relating to it, from Taleb and others. So this
is something of an early-warning post.
A Timeline of Significant Events
The Rumsfeld quote captures the widespread (but mistaken) sense
that this decade has been unusually full of unexpected major disasters, and
the sense that systemic global reactions to those events have been
inadequate.
Here's the rough timeline of some major and/or representative events
in this particular trend.

1999: James Scott publishes Seeing Like a State
2001: The 9/11 attacks
2002: Donald Rumsfeld enters the history books with unknown-unknowns
2004: Indian Ocean tsunami
2005: Hurricane Katrina
2007: Nassim Nicholas Taleb publishes The Black Swan
2010: Haiti earthquake
2010: BP Deepwater Horizon oil spill
2011: Fukushima nuclear disaster
2011: Geoffrey West of the Santa Fe Institute starts talking about
new research on superlinearity, and why cities are immortal while
corporations and people die
2012: Global Guerrillas blogger John Robb starts a new site,
Resilient Communities
2012: Nassim Nicholas Taleb's book, Anti-fragility (due out in
October)

It is important to note that the decade itself has not been exceptional.
As Fareed Zakaria noted in The Post-American World, we simply hear
about big, unexpected, global disasters much faster than we used to, and in
much greater (and more gory) detail.
If you don't believe me, simply take an honest inventory of any other
decade in the last century (you could go further back if you know enough
history). You'll find big natural disasters and political cataclysms in every
decade.
What has been exceptional about the 2002-2012 decade is not what
happened, but our intellectual response to it. The responses go beyond the
well-known ones in the timeline above. There appear to be hundreds of
people thinking seriously along such lines and taking on significant
projects related to such interests.
In the last year alone, I've been introduced to two such people in my
local virtual neighborhood: Jean Russell (who coined the word thrivability
as an alternative to sustainability) and Ed Beakley, who has been studying
preparedness for unconventional crises through his Project White Horse
since Katrina.
You might say a major movement is afoot. Whether it will go
anywhere is unclear.
An Exceptional Response to an Unexceptional Decade
Two things are responsible for our exceptional response as a global
culture.
The first is simply the slow decline of Americas relative role in
global affairs, and the corresponding rise of a chaotic political energy
around the globe, at all spatial frequencies from neighborhood block to
planet-wide. It feels like there's nobody in charge. This feels both
liberating and scary.

The second is related to Zakaria's point about information
dissemination. The speed and completeness of our knowledge of global
affairs has done more than expand our circle of concern. The potential of
the Internet to enable new forms of collective action has also convinced us
that we can act on those concerns in improved ways.
Unusually visible chaos, plus an authority vacuum, plus a perceived
sense of greater control equals a deep restlessness.
It is a popular restlessness, not just elitist hand-wringing. The latter is
a permanent feature of world history; it is hard to find a period when the
intellectual elites have not been animated by a sense of both crisis and
opportunity. This is not true of popular restlessness (which is different
from popular unrest).
The popular restlessness has also been amplified by the collapse of
traditional publishing. Not only is nobody in charge anymore, there are no
official-sounding voices even pretending to be in charge. "Newspaper of
record" sounds almost archaic today.
The restlessness represents a social energy that seeks to do big things
and looks for both intellectual and political leadership. It is a social energy
that swings wildly between a sense of limitless potential and deep despair,
and is hungry for both meaningful perspectives and rallying cries.
In other words, the social energy sloshes violently across the four
quadrants, fueling a demand for all four of the emergent narratives.
The Rehash Quadrants
I don't have much to say about the three older quadrants.
The bottom left is basically fatalist, and the label is due to Bruce
Sterling. He uses it to cover the top left quadrant as well (in his scheme
such hairshirt green thinking is a subset of "acting dead" and therefore
part of "Dark Euphoria"), but I think this is a little unfair, since the
thinking generally includes the idea of regeneration after a Dark Age. So
"Spore thinking" seems to me to be a more accurate label than "acting
dead."
The bottom right quadrant includes your usual suspects who offer
revisionist counter-narratives to every Dark Euphoria narrative.
Contemporary thinkers in this quadrant include Matt Ridley (The Rational
Optimist), Steven Pinker (The Better Angels of Our Nature) and the
late Michael Crichton (State of Fear).
Their general rhetorical strategy is to focus on data showing that
things are actually improving and that perceptions of impending doom are
either mistaken or overblown. Zakaria and most pro-globalists also belong
in this quadrant. Their revisionist attempts enjoy varying degrees of
success.
The optimistic-synthetic quadrant is the one where the most fresh
thinking has emerged.
The Hydra Quadrant
There are two elements to the Hydras-eat-Unknown-Unknowns-for-lunch narrative.
One is simply a massive amount of Gung-Ho sentiment around
Internet-tool-enabled individual empowerment. This is a mob of Horatio
Alger heroes busily connecting the dots between 3D printing and
worldwide abundance and peace. It almost feels as though, given the right
cue, they would break out in a collective, worldwide song-and-dance flash
mob involving a billion people.
This (non-dark) euphoria element is not new. It accompanies every
major wave of technology.
What is new is the idea that we might be on the brink of a successful
theory of social engineering.
The great hope is that we might somehow be able to put together
ideas about anti-fragility, immortal cities and resilience to solve the

problems that defeated the similarly-inspired authoritarian high-modernist
(a term due to Scott) social engineers of a century ago.
The old failure, in the Hydra narratives, is framed as both a moral
failure (a case of hubris and hamartia) and a technical failure (they didn't
understand bottom-up, organic, open-systems, network thinking).
It is important to note that no believer in the resurrected social
engineering narrative has any clue what bottom-up, organic, open-systems network thinking actually means. In fact, they typically
understand what they mean far less clearly than Le Corbusier understood
authoritarian high modernism.
What lends them confidence in their narrative is, firstly, a sense that
their efforts are now informed by an appropriate humility and a penitent
understanding of past failures, and secondly, the (unfalsifiable) idea that
bottom-up and organic cannot (or even should not) be comprehensible
to any individual. There is a sense that an understanding of the idea can
only exist at some higher, collective level. Gaia knows, and we shall not
want.
The moral dimension of the confidence can basically be ignored. It is
merely secularized religiosity and a yearning for a moral calculus to
confirm an analysis-by-faith.
There are, of course, psychological
consequences of hubris that can be analyzed and understood, but there is
nothing special about hubris as a source of failure modes. Humility and
penitence generate their own failure modes.
The "should not" part is the culturally interesting reaction. True
believers take offense at the very idea of studying the apparently ineffably-collective.
On occasion, when I've had this sort of discussion with the religiously
Hydra-minded, and sketched out some sort of tentative model, they've
looked at me aghast, as if I were King Nimrod attempting to build the
Tower of Babel.

Building with Illegibility


I suppose I resonate with the idea of illegibility so much because it is
so neutral with respect to the four narratives, and because it provides a
useful amoral framework of analysis, within which things like hubris,
over-reach and humility are merely minor psychological variables rather
than central concerns (though Scott's own leanings are clear, he keeps
them clearly separated).

In the bottom left quadrant, you can use the idea to understand why
some grand social engineering projects fail.
In the bottom right, you can use it to understand why other projects
succeed.
In the top left, it suggests design principles for resilient survival.
And in the top right, the interesting new quadrant, it suggests the
right questions that need to be asked in order to test, and if
possible, realize, Hydra narratives.

It is this last project that interests me. Some questions that occur to
me include:

Can illegibility be understood as a reservoir of spare hydra heads in
some information-theoretic sense?
Is perfect illegibility equivalent to a renewable flow of maximally
compressed information potential to fuel behavior?
What dynamic mix of epistemic knowledge and metis knowledge
best informs the growth and stewardship of Hydras?
What is the ideal amount of illegibility in a given social system?
What are the failure modes associated with too little legibility?
(Scott documents the failure modes of too much legibility well, but
mostly ignores the other end of the spectrum).

But to ask such questions, you must first give up the near-religious
reverence for ineffable bottom-up, network models and the idea that
attempting to understand them clearly within a single head rather than a
swarm-head is a sinful act. It is merely a tricky one.

I am really looking forward to hearing what Taleb has to say in his
book. I suspect, even if I disagree with all of it, it will fuel some fertile
thinking for me. Evil twins tend to be reliably stimulating.

The Return of the Barbarian


March 10, 2011
Our cartoon view of history goes straight from the Flintstones to the
Jetsons without developmental stages of any consequence in between.
Hunter-gatherers and settled modern civilizations loom large, as bookends,
in our study of history. The more I study history, though, the more I realize
that hunter-gatherer lifestyles are mostly of importance in evolutionary
prehistory, not in history proper. If you think about history proper, a
different lifestyle, pastoral nomadism, starts to loom large, and its
influence on the course of human history is grossly underestimated. This is
partly because civilizations and pastoral nomad cultures have a figure-ground relationship. You need to understand both to understand the gestalt
of world history.
Modern hunter-gatherer lifestyles are cul-de-sacs in cultural evolution
terms. They stopped mattering by around 4000 BC, and haven't
significantly affected world events since. Pastoral nomads, though, played
a crucial role until at least World War I. Until about 1405 (the year Timur
died), they actually played the starring role. And in reconstructed form, the
lifestyle may again start to dominate world affairs within the next few
decades. Their eclipse over the last 500 or so years, I am going to argue,
was an accident of history that is finally being corrected.

The barbarians are about to return to their proper place at the helm of
the world's affairs, and the story revolves around this picture:

I am about to zoom from about 15,000 BC to 2011 AD in less than
4000 words, so you may want to fasten your seat belts and grab a few
pinches of salt.
Savagery, Barbarism and Civilization
From hunter-gatherers to early pastoral nomads, you get a gradual
evolution, and at some point (the Neolithic revolution, probably between
15,000 and 10,000 BC) you get a fork in the road. One path leads to settled
civilizations and the other leads to increasingly sophisticated modes of
pastoralism. Pre-Columbian Plains Indians could be viewed as being right
at the fork: they didn't quite herd domesticated beasts so much as follow
buffalo around on their normal migratory routes. There were also other
tribes that were more sedentary, but didn't develop into full-blown settled
civilizations like their cousins further south in Central and Latin America.
On the pastoral nomad branch of the fork, you get, in reverse
chronological order of influence on world history, Turks, Mongols, Arabs,
Northern Europeans and Proto Indo-Europeans.

On the sedentary branch, you get, in no particular order, American,
Soviet, British, Continental-European, Persian, Graeco-Roman, Ancient
Near Eastern, later-stage Arabic (the Abbasids more than the Umayyads),
Sinic and Indian. There aren't actually more of them, though it looks that
way. They are merely easier to count off since they stay in one place and
give each other names that stick.
I like Thorstein Veblen's labels for hunter-gatherers, pastoral nomads
and settled peoples (savage, barbarian and civilized respectively, from his
1899 classic, The Theory of the Leisure Class) but lest you take offense
(and in case it isn't obvious), in this post, "barbarian" is a term of
approbation, while "civilized" is an insult. The term for hunter-gatherers,
"savage," is neutral. They don't feature much in this story, but they will if
I ever do a post on prehistory between 100,000 BC and 10,000 BC.
My treatment also differs from Veblen's in one crucial way: what he
views as a linear progression, I view as a forking path with barbarian and
civilized branches evolving interdependently and in parallel. Like other
thinkers of the 19th century, he also used the metaphor of progression
from childlike to adult stages (a sort of "ontogeny recapitulates
phylogeny" idea applied to cultural evolution) to think about the linear
model, which I think is fundamentally mistaken (though it persists as a
trope in movies and television). So to acknowledge my debt to Veblen
while distinguishing my views from his, I am going to call the anchor
picture the Neo-Veblen Fork.
This post is partly an attempt to reconstruct a portion of Veblen's
ideas, but you can read it independently of the book. I strongly
recommend the book though, another one of my top 10 reads. It covers
vastly more territory than this post (though mostly within the context of
late 19th century Robber Baron America), and most of it applies without
any reconstruction in 2011.
The Idle Savage
Hunter-gatherers need and create very little technology. They manage
to live in a stable relationship with their environments. To the extent that
they follow their main prey species around, they are more like
proto-nomads. To the extent that they live around their main plant food sources,
they are like proto-sedentary cultures. These are the lifestyles Veblen
labeled savage.
The biblical archetype for hunter-gatherers has traditionally been the
Garden of Eden. Savages are minimalist predators, and simply live off the
bounty of nature, in areas where it is effectively inexhaustible. To the
extent that their gathering has evolved into agriculture, it is slash-and-burn
agriculture based on immediate consumption and natural renewal rather
than accumulation and storage of vast quantities of non-perishable food
over long periods of time. You could call their style of farming nomadic
farming, since they move from cultivating one cleared patch of forest to
the next, rather than staying put and practicing crop rotation in a small
confined (and owned) patch of land.
For the record, I think the Garden of Eden story has it right. Savagery
is the most pleasurable state of existence, if you can get it (until you annoy
the witch doctor or get a toothache). Not in the sense of noble savage (an
idea within what is known as romantic primitivism that is currently
enjoying a somewhat silly revival thanks to things like the Paleo diet), but
in the sense of what you might call the idle savage state. In some ways, an
idle savage is what I am, in private, on weekends.
Though they don't play a big part in this story, don't underestimate
what they did when they were center-stage: fire, spoken language, art and
archery are all savage inventions. Wisely, they didn't get addicted to
invention and stayed idle.
Idle savagery is basically unsustainable today unless you retreat
completely from the mainstream, so though I'd like to be an idle savage,
I've settled for the compromise state of being a barbarian. That's where it
gets interesting.
The Illegible Barbarian
Pastoral nomads need, and develop, a good deal more technology, and
in areas that matter to them, are usually ahead of settled civilizations. They
are not quite as predatory as hunter-gatherers. Unlike hunter-gatherers,
they don't just follow prey around. They consciously domesticate and
manage their herds. Rather than let the herds move by instinct, they direct
their migratory instincts (hence herding). They don't just occasionally
slaughter what they need for food and clothing. They develop dairy,
husbandry and veterinary practices as well. You could say they cultivate
animals (a more demanding task than cultivating plants). The biblical
reference point is, of course, Abel the shepherd, of killed-by-Cain fame (at
one point I was enamored of Daniel Quinn's reading of the Cain-Abel tale
in Ishmael, which I now think is completely mistaken, and a case of
confusing hunter-gatherers with pastoral nomads).
I've already argued that barbarians were responsible for the
development of iron technology. I'd also credit them for the invention of
the wheel, chariots, leather craft, rope-making, animal husbandry, falconry
and sewing (via sewing of hide tents with gut-string and bone needles,
which clearly must have come before cloth woven from plant fibers
needed sewing). Basically, if anything looks like it came out of a mobile
lifestyle, pastoral nomads probably invented it. At a more abstract level,
barbarian cultures create fundamentally predatory technologies:
technologies that allow you to do less work to get the same returns, freeing
up time for idleness. What Hegel would have called Master
technologies. The barbarian works to earn the idleness which the luckier
savage gets for free.
Barbarian technologies, like savage technologies, are fundamentally
sustainable, since using them tends to fulfill immediate needs rather than
causing wealth accumulation. The connection to mobility is central to this
characteristic: nomadic cultures do not accumulate useless things. It is a
naturally self-limiting way of life. If it doesn't fit in saddlebags or is too
heavy to be carried by pack animals, it isn't useful.
Mobility is also the fundamental reason why barbarian cultures are
illegible (see my post A Big Little Idea Called Legibility) to civilized ones
in literal and abstract ways.
They self-organize in sophisticated ways, but you cannot draw
organization charts (the Romans tried and failed).

For most of history, they've owned most of the map of the world, yet
you cannot draw boundaries and identify proto-nations, since they are
defined by patterns of movement rather than patterns of settlement.
They practice the most evolved forms of leadership, but actual leaders
change from one situation to the next (a fact which confused the Roman
army no end when it fought them).
Pastoral nomads come in two varieties, which Veblen called lower
and higher barbarian stages. Lower barbarian pastoral nomads include
groups like the 12th century Mongols. Higher barbarian stages look like
settled civilizations on the surface, but (and this was Veblen's enduring
contribution in his book) are characterized by a vigorous ruling class, with
roots in pastoral nomadism, that generally maintains at least a metaphoric
version of that lifestyle.
Among the more obvious symbols, as late as the 19th century, the
higher barbarians often maintained herds of unnecessary domestic
animals, hunted for sport (rather than for sustenance, unlike the hunter-gatherers) and generally spent their wealth recreating idealized pastoral
nomad landscapes.
When the vigorous leaders of a higher barbarian culture start to settle
down like their subjects, you get civilization.
The Stationary Civilized
Veblen's notion of civilized roughly corresponds to agrarian (or
more generally, production-accumulation based) cultures governed by
social contracts and non-absolute rulers. By this measure, parts of the
Near East became civilized by about 1500 BC (I regard the Hittites as
the first true examples), followed by southern Europe around 800 BC and
northern Europe around the time of the Magna Carta.
Asian cultures are much harder to track: Veblen considered them all
higher barbarian, but depending on how you read the history of Persia,
China and India, they've oscillated between higher barbarian and
civilized over the centuries (for instance, the growth and consolidation
reigns of Ashoka and Akbar were civilized while the entrepreneurial
startup reigns of their respective grandfathers, Chandragupta Maurya
and Babur, were higher barbarian; I don't know Persian and Chinese
history well enough to cite equivalent examples).
The mark of civilization is the replacement of sustainable
predatory patterns of life based on immediate consumption with
unsustainable non-predatory ones based on accumulation.
Civilized cultures create different types of technology compared to
barbarian cultures. What Hegel would have called Slave technologies.
Technologies that keep you working harder and harder to accumulate stuff.
Civilization is the opposite of idleness. It is a treadmill of increasing
industriousness and productivity.
This isn't irrational: sedentary lifestyles allow you to store everything
from grain to gold in large quantities and lower the risk of future
starvation. The carrot and stick of surplus-fueled hedonism and starvation-avoiding accumulation lock sedentary people into human zoos that
become fundamentally harder to break out of over time.
But the effects are inevitable. As you settle down and accumulate
stuff, the risks of existence gradually decrease and the surpluses available
for hedonism increase. The net effect of both is that less actual thinking,
but more work, is required to exist.
To peek ahead a bit, settled civilization is a fundamentally
Gollumizing force. It makes you comfortable, stupid and addicted to the
security and accumulated fruits of your labor.
Which brings us to the figure-ground interaction pattern that scripts
world history.
The Barbarians and the Civilized
The most famous lower and higher barbarians in history are Genghis
Khan and his grandson Kublai Khan respectively. They represent the
classic historical pattern of interaction between pastoral nomads and
civilized peoples.
The pattern is a simple one: a settled civilization grows old, stupid
and tired, and a vigorous barbarian culture swoops in and takes over from
the top, and gradually gets civilized and stupid in turn, until it too is ripe
for destruction by pastoral nomads on its periphery.
Modern Europeans since the time of Gibbon (Decline and Fall of the
Roman Empire) have managed to rejoice in a rather contradictory view of
themselves: they celebrate their dual origins in the vigorous barbarian
cultures of the North and the exhausted cultures of antiquity. Over the
protests of modern Italians and Greeks, Northern Europeans have
successfully managed to appropriate for themselves the role of true
stewards of the achievements of Greece and Rome, cultures that their
barbarian forbears were instrumental in destroying (if you want to know
which origin myth is closer to the hearts of Europeans, look no further
than the tattoos of white gangs in prisons: they tend to be drawn from
Scandinavian mythologies).
Here's a rather suggestive piece of European history that illustrates
the barbarian/civilized dynamic. In the traditional account of the
civilization of Europe, wine played an interesting role. The Gauls (so the
story goes, according to Gibbon) became Romanized first, as Roman
wine-making techniques spread to what is today modern France. The
Goths were interested in many of the luxuries of Rome, but the one that
tempted them the most was wine, which they grew to prefer over the
cruder spirits they themselves distilled.
I don't want to hang my entire theory of civilization on this little item,
but it is interesting that the barbarians were civilized, in part, through the
temptations of an addiction: better booze, the refined product of an
agrarian accumulation culture.
Enough examples; let's note the two interesting questions that emerge
and deserve analysis:
First, how is it that apparently inferior cultures have repeatedly
swooped in and destroyed and/or taken over superior cultures? Why was
Genghis Khan able to take over China, and how did his grandson
successfully create the Yuan dynasty? How did Arab armies conquer the
vastly more civilized and sophisticated Persian society? How did Turks
pretty much take over most of South Asia, the Middle East and North
Africa? Going further back, how did the Proto Indo-Europeans (or
Aryans) take down the entire Bronze Age family of civilizations?
Second, given the astounding win record of the barbarians against
the civilized, how come history isn't written from the point of view of
the pastoral nomads? Why aren't the histories of Egypt, Greece, Rome,
Babylon, Persia, India and China sideshows, with pride of place being
given to Mongols, Turks, Arabs and Northern Europeans (pre 1000 AD)?
Isn't history supposed to be written by the winners?
Refinement and Stupidity
Here's the answer to the first question: barbarians are, on average,
individually smarter, but collectively stupider than a thriving settled
civilization.
One-on-one, a lower barbarian can outthink, outfight, and out-innovate a civilized citizen any day.
But a settled civilization at its peak can blow a lower barbarian
civilization away. Not least because at the very top, you still have Veblen's
uncivilized higher barbarians (or, to use the Ribbonfarm term,
sociopaths). But once it begins its decline, the greater live intelligence of
the barbarians begins to take effect.
The explanation for this contradiction is a very simple one: by
definition, civilization is the process of taking intelligence out of human
minds and putting it into institutions. And by institution I mean
something completely general: any codified organizational form based on
writing will do. Writing, as Plato noted in Phaedrus, is the main medium
through which intelligence passes from humans to institutions.
[Writing] will introduce forgetfulness into the soul of
those who learn it: they will not practice using their
memory because they will put their trust in writing, which
is external and depends on signs that belong to others,
instead of trying to remember from the inside, completely
on their own… You'd think they [written words] were
speaking as if they had some understanding, but if you
question anything that has been said because you want to
learn more, it continues to signify just that very same thing
forever. When it has once been written down, every
discourse roams about everywhere, reaching
indiscriminately those with understanding no less than
those who have no business with it, and it doesn't know to
whom it should speak and to whom it should not. And
when it is faulted and attacked unfairly, it always needs its
father's support; alone, it can neither defend itself nor come
to its own support. (Phaedrus 275d-e)
In the short term this works brilliantly. The ideas of the smartest
people (usually embedded higher barbarians) are externalized and encoded
into the design of institutions, which can then make far stupider people
vastly more effective than their raw capabilities would allow (this is the
reason why the modern economic notion of productivity is so
misleading).
But in the long term this fails. The smart people die, and their ideas
become obsolete and ritualized. Initially, more intelligence is being
externalized into institutions than is being taken away through
ritualization, but at some point, you get a peak, and the decline begins. As
entropy accumulates, it becomes a simple matter for another wave of
lower barbarians on the periphery to take down the civilization.
The reason this seems like a strange phenomenon is that we confuse
refinement with advancement. Finely-crafted jewelry is not more
advanced than roughly-hewn jewelry. A Boeing 747 is about a million
times more capable than the Wright Flyer I, but it does not contain a
million times as much intelligence. It is merely more refined (in the sense
of cocaine, by the same logic I applied in The Gollum Effect). The
difference between advancement and refinement is clearest in disruption.
A beautifully-crafted sword is not more advanced than a crude gun. It is
merely more refined.

Or to go back to our earlier example, wine isn't more intelligent than
a crude country-brew. It is merely more refined.
The intelligence manifest in an artifact is simply the amount of human
thought that has been externalized into it. Refinement on the other hand, is
a measure of the amount of work that has gone into it. In Hegelian terms,
intelligence in design is fundamentally a predatory quality put in by
barbarian-Masters. Refinement in design is a non-predatory quality put in
by civilized-Slaves.
We miss this dynamic because of a curious phenomenon: history is
only written by the winners if the winners can actually write. At their
apogee, when civilizations have the most surplus wealth, they indulge in
the most refined forms of writing: writing histories with autocentric
conceit, they focus on the visibly-refined glories of their own age, rather
than the higher-barbarian sensibilities at the foundations. Genghis Khan is
the sole exception in being more famous than his grandson. In the other
two examples I've mentioned, Ashoka and Akbar both traditionally get
"the Great" added to their names. Their empire-founding barbarian
grandfathers do not. The most famous symbol of the Mughal empire is the
Taj Mahal, which was built by Shah Jahan, who bankrupted his empire in
the process, hastening the fall that followed his reign. Babur's tomb is a
modest little building in Kabul that few would recognize in a photograph.
As a civilization becomes increasingly refined, and far less intelligent,
it becomes easy prey for pastoral nomads on the margins, who swoop in,
cleanse the culture of accumulated stupidity, and revitalize it with a fresh
infusion of barbarian blood at the top.
You might even say that barbarians operate at a meta-level: they plant
and harvest value out of civilizations. They are civilization farmers, just as
they are animal herders.
The Eclipse and Return of the Barbarian
The reign of Timur was the last time a true barbarian ruled a
significant proportion of the world. Since his death in 1405, the barbarian

has been in decline. The process reached its peak during the Cold War. In
America, the Organization Man threatened to squeeze higher barbarians
out of the capitalist world, while in Soviet Russia, forced settlement and
collectivization in Siberia and Mongolia threatened to corral the last of the
wandering lower barbarians.
It almost seemed like the fountain of barbarian culture at which
humanity drinks to renew itself was about to be completely exhausted
once and for all.
The moment, thankfully, passed. The Gervais Principle kicked in to reinvigorate capitalism, and the High Modernist doctrines of the Soviet state
collapsed (followed by a remarkably quick return to pastoral nomadism in
Mongolia and Siberia).
That was just the opening act. Today, as institutions of all sorts
crumble and collapse, and the written word becomes a living, dancing,
hyperlinked thing that would have made Plato happy, the barbarian is set
to return. I'll blog about this in a future piece, when I extrapolate this
speculative history into a speculative future.
Note: some of the ideas in this post were inspired by Seb Paquet's
two-part series on how social movements happen. I don't entirely agree
with Seb's model, but you should check it out if these things interest you.
This was also partly motivated by the impending April 12th release of
Francis Fukuyama's new book, The Origins of Political Order. I wanted
to get my own thoughts on the subject down before tackling his. His first
book, The End of History and the Last Man, was in many ways my
personal introduction to this sort of subject matter. And no, I am not a
neocon.

Glossary
Ancient Eye: An approach to perceiving reality that precedes modern
categories of professionalized disciplinary knowledge such as science,
engineering or art.
Babytalk (GP): The language spoken by Sociopaths and Losers to the
Clueless.
Barbarian: On ribbonfarm, a term of approbation, while "civilized" is
an insult. Somebody whose lifestyle pattern is not based on accumulation
or externalization of cognition into institutions. The definition is based on
Thorstein Veblen's model in Theory of the Leisure Class.
Baroque Unconscious: The idea that technology can be understood as
an entity that behaves as though it is a sentient agent unconsciously
groping towards realization of its own extreme baroque form.
Clueless (GP): Employees who overperform and believe in the
benevolence of the organization.
Crucible Effect: A crucible is a group of optimal size for doing creative
information work. The number of people is about 12. It is too large to be
managed and too small to split up, balancing on the brink of chaos.
Members of crucibles focus collective attention into an arms race of
constant practice, backed by an established culture around its particular
kind of information work. The escalation into increasingly more refined
crucibles allows for the 10,000 hours of deliberate practice that is needed
for elite performance.
Curse of Development (GP): If the situational developmental gap
between two people is sufficiently small, the more evolved person will
systematically lose more often than he/she wins.
Evil Twin: Somebody who thinks exactly like you in most ways, but
differs in just a few critical ways that end up making all the difference.

Future Nausea: The subjective reaction to being exposed to un-normalized futures. See Manufactured Normalcy Field.
Game Talk (GP): The language spoken by Losers among themselves.
Gervais Principle (GP): The conjecture that Sociopaths promote the
Clueless to middle management and fast-track a subset of enlightened
Losers to upper management as new Sociopaths.
Gollum Effect: The reduction of a consumer to a subhuman creature
defined purely by patterns of consumption. Verb form: gollumize.
Hackstability: A postulated stable equilibrium state created by a
balance of forces: exponentially increasing technological capability and
entropy-driven technology collapse.
Hall's Law: A speculative Moore's Law analog for the 19th century,
based on the growing sophistication of manufacturing as measured by
progress in creating interchangeable parts.
HIWTYL (GP): Heads-I-Win-Tails-You-Lose, pronounced
HIWTYL. The general design principle behind incentive structures
designed by Sociopaths.
Legibility: A system is legible if it is comprehensible to a calculative-rational observer looking to optimize the system from the point of view of
narrow utilitarian concerns and eliminate other phenomenology. It is
illegible if it serves many functions and purposes in complex ways, such
that no single participant can easily comprehend the whole. The terms
were coined by James Scott in Seeing Like a State. Illegible systems are
generally more robust than legible ones, and Scott's model is mainly about
the failures caused by imposing legibility on an initially illegible reality.
See State.
Loser (GP): A bare-minimum effort, rationally disengaged employee
who seeks fulfillment outside of work.

Manufactured Normalcy Field: A large-scale engineered perception
that makes radical technologies appear normal, thereby preventing the
future from arriving for most people.
Milo Criterion: Products must mature no faster than the rate at which
users can adapt.
Posturetalk (GP): The language spoken by the Clueless.
Powertalk (GP): The language spoken by Sociopaths.
Refinement: Refinement is a measure of the amount of work that has
gone into an artifact. Intelligence in design is fundamentally a predatory
quality put in by Barbarians. Refinement in design is a non-predatory
quality put in by civilized-Slaves.
Scientific Sensibility: Perceiving reality in a dispassionate and
mindful way. I argued that this is a more basic foundation for science than
the scientific method and formal metaphysical motions like falsifiability
or empiricism/analyticity.
Sociopath (GP): A self-aware employee who understands how
organizations really work and takes reasoned risks to acquire power and
influence by manipulating it.
Straight Talk (GP): The language spoken between Sociopaths and
Losers.
Stream: A sort of slow, life-long communal nomadism, enabled by
globalization and a sense of shared transnational social identity within a
small population.
Turpentine Effect: The tendency of practitioners of a skilled craft to
gravitate to tool-making over application.
Ubiquity Illusion: Creating a perception of ubiquity around a new
product or service to fake social proof.
