The Rust Age
ribbonfarm.com, 2007-2012
Ribbonfarm Inc.
2014
Contents
Part 0: Legibility
A Big Little Idea Called Legibility
Part 1: The Art of Refactored Perception
The Art of Refactored Perception
The Parrot
Amy Lin and the Ancient Eye
The Scientific Sensibility
Diamonds versus Gold
How to Define Concepts
Concepts and Prototypes
How to Name Things
How to Think Like Hercule Poirot
Boundary Condition Thinking
Learning From One Data Point
Lawyer Mind, Judge Mind
Just Add Water
The Rhetoric of the Hyperlink
Seeking Density in the Gonzo Theater
Rediscovering Literacy
Part 2: Towards an Appreciative View of Technology
Towards an Appreciative View of Technology
An Infrastructure Pilgrimage
Meditation on Disequilibrium in Nature
Glimpses of a Cryptic God
The Epic Story of Container Shipping
The World of Garbage
The Disruption of Bronze
Bay's Conjecture
Hall's Law: The Nineteenth Century Prequel to Moore's Law
Hacking the Non-Disposable Planet
Part 0:
Legibility
This is an edited collection of the first five years of ribbonfarm (2007-2012), retroactively
named the Rust Age.
The Rust Age also generated a book, Tempo, and two ebooks: The Gervais Principle and
Be Slightly Evil.
Organization Man and Keith Johnstone's Impro, this book is one of the
anchor texts for this blog. If I ever teach a course on Ribbonfarmesque
Thinking, all these books would be required reading. Continuing my
series on complex and dense books that I cite often, but are too difficult to
review or summarize, here is a quick introduction to the main idea.
The Authoritarian High-Modernist Recipe for Failure
Scott calls the thinking style behind the failure mode "authoritarian
high modernism," but as we'll see, the failure mode is not limited to the
brief intellectual reign of high modernism (roughly, the first half of the
twentieth century).
Here is the recipe:
meant that the acreage, yield and market value of a forest had to be
measured, and only these obviously relevant variables were comprehended
by the statist mental model. Traditional wild and unruly forests were
literally illegible to the state surveyor's eyes, and this gave birth to
"scientific" forestry: the gradual transformation of forests with a rich
diversity of species growing wildly and randomly into orderly stands of
the highest-yielding varieties. The resulting catastrophes, better
recognized these days as the problems of monoculture, were inevitable.
The picture is not an exception, and the word "legibility" is not a
metaphor; the actual visual/textual sense of the word (as in "readability")
is what is meant. The book is full of thought-provoking pictures like this:
farmland neatly divided up into squares versus farmland that is confusing
to the eye, but conforms to the constraints of local topography, soil quality,
and hydrological patterns; rational and unlivable grid-cities like Brasília,
versus chaotic and alive cities like São Paulo. This might explain, by the
way, why I resonated so strongly with the book. The name "ribbonfarm"
is inspired by the history of the geography of Detroit and its roots in
ribbon farms (see my About page and the historic picture of Detroit
ribbon farms below).
If my conjecture is correct, then the High Modernist failure-through-legibility-seeking formula is a large-scale effect of the rationalization of
the fear of (apparent) chaos.
[Techie aside: Complex realities look like Shannon white noise, but in
terms of deeper structure, their Kolmogorov-Chaitin complexity is low
relative to their Shannon entropy; they are like pseudo-random numbers,
rather than real random numbers. I wrote a two-part series on this
long ago that I meant to continue, but never did.]
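The pseudo-random-number analogy in the aside can be made concrete with a small sketch (my own illustration, not from the book or from that old series): a seeded generator emits a bit stream whose measured Shannon entropy sits near the one-bit-per-symbol maximum, even though the entire stream compresses to this short program plus its seed, i.e., its Kolmogorov-Chaitin complexity is tiny.

```python
import math
import random
from collections import Counter

# A long bit stream generated from a tiny seed. Its per-symbol Shannon
# entropy is near the 1-bit maximum (it "looks like" white noise), yet
# its Kolmogorov-Chaitin complexity is bounded by the length of this
# short program plus its seed -- the whole stream compresses to almost
# nothing.
random.seed(42)
bits = [random.randint(0, 1) for _ in range(10_000)]

# Empirical Shannon entropy of the stream, in bits per symbol.
n = len(bits)
shannon = -sum((c / n) * math.log2(c / n) for c in Counter(bits).values())
print(f"empirical entropy: {shannon:.4f} bits/symbol")  # near 1.0
```

The point of the sketch is the gap between the two measures: a statistical eye (Shannon) sees maximal disorder, while a structural eye (Kolmogorov-Chaitin) sees a ten-line description.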
The Fertility of the Idea
The idea may seem simple (though it is surprisingly hard to find
words to express it succinctly), but it is an extraordinarily fertile one, and
helps explain all sorts of things. One of my favorite unexpected examples
from the book is the rationalization of people's names in the Philippines
under Spanish rule (I won't spoil it for you; read the book). In general, any
aspect of a complex folkway, in the sense of David Hackett Fischer's
Albion's Seed, can be made a victim of the high-modernist authoritarian
failure formula.
The process doesn't always lead to unmitigated disaster. In some of
the more redeeming examples, there is merely a shift in a balance of
power between more global and more local interests. For example, we
owe to this high-modernist formula the creation of a systematic, global
scheme for measuring time, with sensible time zones. The bewilderingly
illegible geography of time in the 18th century, while it served a lot of
local purposes very well (and much better than even the best atomic clocks
of today), would have made modern global infrastructure, ranging from
the railroads (the original driver for temporal discipline in the United
States) to airlines and the Internet, impossible. The Napoleonic era saw the
spread of the metric system; again an idea that is highly rational from a
centralized bird's-eye view, but often stupid with respect to the subtle local
adaptations of the systems it displaced. Again this displaced a good deal of
local power and value, and created many injustices and local
irrationalities, but the shift brought with it the benefits of improved
communication and wide-area commerce.
In all these cases, you could argue that the formula merely replaced a
set of locally optimal modes of social organization with a globally optimal
one. But that would be missing the point. The reason the formula is
generally dangerous, and a formula for failure, is that it does not operate
by a thoughtful consideration of local/global tradeoffs, but through the
imposition of a singular view as best for all in a pseudo-scientific sense.
The high-modernist reformer does not acknowledge (and often genuinely
does not understand) that he/she is engineering a shift in optima and
power, with costs as well as benefits. Instead, the process is driven by a
naive "best for everybody" paternalism that genuinely intends to improve
the lives of the people it affects. The high-modernist reformer is driven by
a naive-scientific Utopian vision that does not tolerate dissent, because it
believes it is dealing in scientific truths.
The failure pattern is perhaps most evident in urban planning, a
domain which seems to attract the worst of these reformers. A generation
of planners, inspired by the crazed visions of Le Corbusier, created
unlivable urban infrastructure around the world, from Brasília to
Chandigarh. These cities end up with deserted empty centers populated
only by the government workers forced to live there in misery (there is
even a condition known as "Brasilitis," apparently), with slums and shanty
towns emerging on the periphery of the planned center; ad hoc, bottom-up,
re-humanizing damage control as it were. The book summarizes a very
elegant critique of this approach to urban planning, and the true richness
of what it displaces, due to Jane Jacobs.
reading a $0.99 19th century edition on my Kindle: all six volumes, with
annotations and comments from a decidedly pious and critical
Christian editor. Sometimes I don't know why I commit these acts of
large-scale intellectual masochism. The link is to a modern, abridged
Penguin edition.
Is the Model Relevant Today?
The phrase "high-modernist authoritarianism" might suggest that the
views in this book only apply to those laughably optimistic, high-on-science-and-engineering high modernists of the 1930s. Surely we don't
fail in these dumb ways in our enlightened postmodern times?
Sadly, we do, for four reasons:
1. There is a decades-long time lag between the intellectual high-watermark of an ideology and the last of its effects
2. There are large parts of the world, China in particular, where
authoritarian high-modernism gets a visa, but postmodernism does
not
3. Perhaps most important: though this failure mode is easiest to
describe in terms of high-modernist ideology, it is actually a basic
failure mode for human thought that is time and ideology neutral.
If it is true that the Romans and British managed to fail in these
ways, so can the most postmodern Obama types. The language will
be different, that's all.
4. And no, the currently popular "pave the cowpaths" and behavioral-economic "choice architecture" design philosophies do not provide
immunity against these failure modes. In fact paving the cowpaths
in naive ways is an instance of this failure mode (the way to avoid
it would be to choose to not pave certain cowpaths). Choice
architecture (described as "Libertarian Paternalism" by its
advocates) seems to merely dress up authoritarian high-modernism
with a thin coat of caution and empirical experimentation. The
basic and dangerous "I am more scientific/rational than thou"
paternalism is still the central dogma.
[Another Techie aside: For the technologists among you, a quick (and
very crude) calibration point should help: we are talking about the big
brother of waterfall planning here. The psychology is very similar to the
urge to throw legacy software away. In fact Joel Spolsky's post on the
subject, "Things You Should Never Do, Part I," reads like a narrower version
of Scott's arguments. But Scott's model is much deeper, more robust, more
subtly argued, and more broadly applicable. I haven't yet thought it
through, but I don't think lean/agile software development can actually
mitigate this failure mode any more than choice architecture can mitigate
it in public policy.]
So do yourself a favor and read the book, even if it takes you months
to get through. You will elevate your thinking about big questions.
High-Modernist Authoritarianism in Corporate and Personal Life
The application of these ideas in the personal/corporate domains
actually interests me the most. Though Scott's book is set within the
context of public policy and governance, you can find exactly the same
pattern in individual and corporate behavior. Individuals lacking the
capacity for rich introspection apply dumb 12-step formulas to their lives
and fail. Corporations: well, read the Gervais Principle series and Images
of Organization. As a point of historical interest, Scott notes that the
Soviet planning model, responsible for many spectacular legibility
failures, was derived from corporate Taylorist precedents, which Lenin
initially criticized, but later modified and embraced.
Final postscript: these ideas have strongly influenced my book
project, and apparently, I've been thinking about them for a long time
without realizing it. A very early post on this blog (I think only a handful
of you were around when I posted it), on the Harry Potter series and its
relation to my own work in robotics, contains some of these ideas. If I'd
read this book before, that post would have been much better.
Part 1:
The Art of Refactored Perception
later. I did not consider popularity at all, but most of the popular posts
made the cut.
So it was definitely a very personal and autocratic selection.
The yield rate was depressingly low, at less than 25%. But the good
news is that it has been steadily increasing. As you will see from this and
upcoming posts, the lists are dominated by later posts. In the first couple
of years, I wrote an awful lot of posts I would now consider terrible.
After the selection, I sorted the set into 5-6 clusters, and forced myself
to completely uncouple the clusters (i.e., each post can belong in only one
cluster). I then sequenced each in some meaningful way. I will be doing
one post on each sequence.
It was surprisingly (and depressingly) easy to do the pruning. I
expected to spend many agonizing hours figuring out what to include and
what to exclude, but it took me about 15 minutes to do the cutting, and
another 15 minutes to do a first, basic sorting/clustering. The hardest part
is developing a narrative arc through the material to sequence each cluster.
On Voice
Yesterday, I posted a beta aphorism on Facebook that many people
seemed to like: "integrity is an aesthetic, not a value."
A blogging voice is not just an expression of a coherent aesthetic
perspective; it is also an expression of a certain moral stance. Developing
a high-integrity blogging voice is about learning to recognize, in a moral
sense, on-voice/off-voice drafts and developing the discipline to
systematically say no to off-voice material, no matter how tempting it is to
post it, based on expedient considerations like topicality or virality. As
your filters develop, you write fewer off-voice drafts to begin with.
Eventually, you don't even think off-voice.
One of the hardest challenges for me in selecting posts for this month
of retrospectives was posts that were partly on-voice and partly off-voice.
I erred on the side of integrity and dropped most such posts, except for a
few that were logically indispensable in some sequence.
Learning to recognize off-voice stuff (especially while your voice is
still developing) is more like learning to be a tea taster than studying to be
a priest at a seminary.
Though I suppose, practiced at sophisticated levels, "what would Jesus
do?" is an integrity aesthetic rather than a 0/1 litmus test. Few religious
types seem to transcend the bumper-sticker value though.
The Parrot
August 13, 2007
This piece was written in Ithaca, in 2005, and is as accurate a
phenomenological report of an actual mental response to real events as I
am capable of. At the time I thought (and still do) that a very careful
observation of your own thoughts as you react to sensory input is a very
useful thing. Not quite meditation. Call it meditative observation.
Stylistically, it is inspired by Camus.
-1-

From my window table on the second floor of the coffee shop,
looking down at the Commons (the determinedly medieval, pedestrians-only town square of Ithaca) I saw the parrot arrive. It was large and a
slightly dirty white. Its owner carefully set a chair on top of a table and the
parrot hopped from his finger onto the back of the chair and perched there
comfortably. I suppose the owner wanted to keep it out of the reach of any
dogs. He gave it a quick second glance, and stepped inside a restaurant.
The parrot ruffled its feathers a bit, looked around, preened a little
(showing off some unexpected pink plumage on the back of its neck,
hidden in the dirty white), and then settled down.
-2-

The Ithaca Commons is a ring of shops and restaurants around an
open courtyard, occupying the city block between Green and Seneca
streets. The shops are an artfully arranged sequence of mildly unexpected
experiences. Tacky used clothing and dollar stores sit next to upscale
kitchen stores, craft shops, art galleries and expensive restaurants. The
central promise of the Commons is that of the Spectacle. Street musicians,
hippies meditatively kicking hackeysacks, the occasional juggler: they all
make their appearance in the Commons. A visibly political Tibetan store
and experiential restaurants such as the Moosewood and Just a Taste
complete the tableau. The Commons is crafted for the American liberal, a
cocoon that gently reinforces her self-image as a more evolved, aware, and
-6-

You know you are a slave to the life of the mind if a phrase like
"her engagement of the parrot was not authentic" crosses your mind quite
naturally, and it takes you more than a minute to laugh.
But consider what it means if your response to the parrot is measured,
seemingly scripted, or otherwise deliberate in any way. A mind with
"parrot" on it should not look like anything recognizable. A frown might
mean you are trying to rapidly assimilate the parrot, but in that case, the
process of assimilation, rather than the parrot itself, must be occupying
your mind. You cannot, at the same time, think "parrot" and engage in the
task of wrapping up the parrot in a bundle of associations and channeling
it to the right areas of long-term memory. The hippie's grin is equally
symptomatic of a non-parrot awareness. The hippie is probably self-indulgently enjoying a validated feeling of "one must be one with nature"
or something along those lines.
So an authentic engagement of the parrot must have an element of the
unscripted in it. It can neither be deliberative, nor reactive. Furious and
active thinking will not do. Nor the "Awww!" you might direct at a puppy.
A puppy is a punch you can roll with.
-7-

Two moms with three babies wandered onto the scene. It being a nice
day, the babies were visible, one squirming in the arms of its mother and
the others poking their snouts out of the stroller. The mom carrying the
baby stopped immediately upon spotting the parrot and approached it (she
was the first to do so). As is the wont of moms, she immediately began
trying to direct her infant's attention to the parrot, shoving its face within a
foot of the parrot. Mothers are too engaged in scripting the experiences of
their babies to experience anything other than the baby themselves. The
parrot obliged with a display of orange (I suspect it was stretching,
disturbed from its contemplative reverie). The baby, however, seemed
entirely uninterested in the parrot. Perhaps the parrot was unclear to its
myopic eyes, or perhaps it was simply no more worthy of note than any of
the other exciting blobs of visual experience all around. At any rate, the mom
stopped trying after a few moments, and the five of them rolled on.
The pretty girl in faded red pants was back. This time, she had two
waitress friends along, and took a picture of the parrot with her cell phone.
The three girls (the other two were rather dumpy looking, but I suppose it
was the aprons) chattered for a bit and then stared at the parrot some more.
Two more pretty girls walked past, and though the parrot clearly
registered, they walked past without a perceptible turning of their heads.
Something about that worried me. They were of the indistinguishable,
dressed-in-season species of young college girl that swarms all over
American university towns. They could have been from either Ithaca College
or Cornell; I can't tell them apart. Two more of the breed walked by, again
with the same non-reaction.
A black-guy-white-girl couple walked by. The girl turned to look at
the bird as they walked past, while the guy looked at it very briefly.
Shortly after, an absorbed black teenager walked by. She looked at it as
she walked past, with no change in her expression. The parrot was clearly
on Track Two. Track One continued thinking about whatever it was she
was thinking about. I suppose "parrot" might have consciously registered
with her a few minutes later, but she did not walk by again. Something
about black responses to the parrot was sticking in my mind. The owner
came back out of the store, carrying a cup of coffee.
-8-

Now, a parrot is not an arresting sort of bird. It does not have the
ostentation of the peacock, the imposing presence of the ostrich or the
latent lethality of a falcon or hawk. Even in context, at a zoo, a typical
white parrot is not remarkable in the company of its more gaudy relatives.
Any of these more dramatic creatures would, I suppose, instantly draw a
big gawking crowd, perhaps even calls to the police. Undivided attention,
active curiosity and action would certainly be merited ("try to feed him
some of your bagel").
The parrot though, had neither the domesticated presence of a dog,
nor the demanding presence of a truly unexpected creature. A dog elicits
smiles, pats or studied avoidance, while an ostrich would certainly call for
a cascade of conversation into activity, culminating in the arrival of a
legitimate authority (though, I suppose, most communities would be hard
pressed to generate a legitimate response to an ostrich. Cornell though, is
an agricultural university, so I suppose eventually one of the many animal
experts would arrive on the scene).
So a dog elicits a conventional ripple of cognitive activity as it
progresses through the town square, soon displaced by other
preoccupations. An ostrich presumably triggers a flurry of deliberation,
followed by actual activity. So what does the parrot cause, living as it does
in the twilight zone between conventionally expected and actionably
unexpected? You cannot have the comfort of either action or practiced
thoughts, with a parrot in your field of view. Yet, the parrot is not a threat,
so you clearly cannot panic or be overwhelmed. The parrot, I think, lives in
the realm of pure contemplation. The parrot is rare in adult life. For the
child, everything is a parrot.
-9-

The return of the owner annoyed me briefly. With his return, the non
sequitur instantly became an instance of the signature of the Commons: a
spectacle. The owner was clearly used to handling his parrot. He had it
hop on his hand again and swung it up and down. The parrot spread its
wings and did various interesting things with its feathers which I do not
have the vocabulary to describe. With the owner, the context of a small
bubble-zoo had arrived. The owner chatted with the girl in faded red pants,
who had come out again. Fewer adults stared. The ensemble was now
clearly within the realm of the expected. Most people walked on without a
glance, while some, emboldened by the new legitimacy of the situation,
stopped and watched with interest. The owner tired of active display and
set the parrot back on its perch, and turned his attention to the girl.
For a minute, I was sorry, but then a girl, about six years old, walked
by with her mother. She was a classic little girl, in orange pants, ice
cream cone in hand. She stopped and stared at the bird very carefully. It was not a
curious probing look, or the purposeful look that kids sometimes get when
they are looking about for a way to play with a new object. This little girl
did not look like she would be going home and looking up parrots on the
Discovery channel website. She did not look like she was gathering up
courage to pet it or imagining it in the role of a chase-able dog or cat. She
was just looking at it. Clearly her powers of abstraction had yet to mature
to the point where she could see the bubble circus.
A pair of middle-aged women stopped by the parrot. After an initial
look at the parrot, they turned and started chatting with the owner. I expect
the conversation began, "Does he talk?" or "Doesn't he fly away?" Shortly
after, I saw them wander off a little to the side, where there was a fountain.
One woman took a picture of the other, standing next to the fountain, with
a disposable camera. "Local resident showing visiting Cousin Amy the
town," I guessed. All is legitimate on a vacation, including a parrot.
-10-

I don't think children are necessarily curious when presented with a
new experience. The little girl presented a clearer display of authentic
engagement of the parrot than all the adults. It was what I have been
describing all along as a stare. But "stare" doesn't quite cover it. "Stare"
does not have the implicit cognitive content of the hippie's grin. Happy,
bemused, smiling, frowning, eager curiosity: these are visible
manifestations of minds occupied by the workings of deliberative or
reactive responses to the parrot. "Parrot" flits too quickly across the face to be
noticed, and is replaced by more normal cognitions.
So, here is a question: what is the expression on the face of a person
who has authentically engaged a parrot? I must propose, in all seriousness,
the ridiculous answer: it looks like the face of a person who has seen a
parrot.
-11-

The people talking to the owner had left. He now sat reading a book,
while the parrot ate seeds of some sort off the table. Three teenage
skateboarders wandered to a spot about a dozen yards away. One of them
nudged the others and pointed to the parrot. They looked at it in
appreciation. It wasn't quite clear what they were appreciating, but they
clearly approved of the parrot. That made me happy.
Now, a large brood of little black children came by, herded by two
young women who might have been nannies, I suppose. The black kids all
stopped and stared intently at the parrot. The nannies chatted with the
owner, who looked on approvingly at the children while he talked. The
conversation looked left-brained from fifty feet away. Some tentative
petting ensued. As the nannies led the children away, after allowing them a
decent amount of time to engage the parrot, one little boy had to be
dragged away; he managed to turn his head full circle, Exorcist-style, to
look at the bird.
Now, five young black men, perhaps eighteen to twenty, walked by.
Theirs was clearly a presence to rival that of the parrot-owner duo as a
spectacle. Their carefully layered oversized sports clothes and reversed
baseball hats demanded attention. I suppose spectacles, be they man-parrots or a group of swaggering young black men, do not supply
attention, but demand it. But you cannot really compete with a parrot. The
parrot is entirely unaware that it is competing. The black group almost
rolled past, but suddenly one of them stopped and turned around to look at
the parrot. He looked like he'd suddenly reconsidered the studied
indifference that I suppose was his response to competing spectacles. A
visible recalibration of response played across his face, and suddenly, he
was authentically engaging the parrot in a demanding, direct way. The
others stopped and looked too. The first man then pulled out his cell phone,
still staring at the parrot, and took a picture. He then briefly interrogated
the owner about the parrot, and the group rolled on.
-12-

I wonder now, why are black responses to the parrot more noteworthy
than generic white responses? And while I mull that, why have the
responses of one other group (pretty young girls) stuck in my mind
(besides the fact that I notice them more)?
Now, for an authentic engagement of the parrot, there must be parrot
on your mind. Your face must look like the face of a person who has seen
a parrot. This is not an ambiguous face, or a face marked visibly by the
presence of other thoughts or a subtext. A parrot-mind may wrestle briefly
with cell-phone mind or preoccupied-with-race-and-oppression mind; a
mind might be hooked by a lone parrot, but would ignore the contextually
appropriate owned parrot. Most of the time, when we look for an
explanation, we can only see an explanation. Sometimes, when the mind
hiccups on the path to the explanation, we see the parrot.
Viktor Frankl said, "between stimulus and response there is a space.
In that space is our power to choose our response. In our response lies our
growth and our freedom." Self-improvement gurus like to use that quote to
preach, but to me, it seems that this space is primarily interesting because
the parrot can live there for a bit, so your mind can be "parrot" for a bit.
You might hesitate and never visit that space. You might react so fast
you leave the space before it registers on your awareness. Or you might
dwell there awhile.
If you look in the period before Da Vinci, in the so-called Dark Ages,
you'll see a lot more of this way of seeing than after. It strikes me that we
admire Da Vinci for the wrong reasons, for being what seems in our time
to be a multi-talented mind. No; the divisions that blind us didn't exist
in his age. He was just a seer, and what he saw is more impressive than the
fact that his seeing spanned a multiplicity of our 20th century
categories.
C. P. Snow and the Two Cultures
It is sad that writers like C. P. Snow (of The Two Cultures fame) in
the last century ended up widening and institutionalizing the chasm
between the humanities and the sciences while attempting to bridge it. To
be fair to them though, the humanists started it, by attempting to take
scientists, mathematicians and engineers down a social peg or two. C. P.
Snow quotes number theorist G. H. Hardy: "Have you noticed how the
word 'intellectual' is used nowadays? There seems to be a new definition
which certainly doesn't include Rutherford or Eddington or Dirac or
Adrian or me? It does seem rather odd, don't y'know."
The natural anxieties and suspicions of humanist literary intellectuals
are old and deep-rooted (Coleridge: "the souls of 500 Newtons would go
to the making up of a Shakespeare or Milton"), and cannot be wished
away by lecturing (see my post The Bloody-Minded Pleasures of
Engineering [September 1, 2008]). Humanism is a retreat to a secularized
notion of humans being "spiritually special," as a way of combating a
sense of insignificance within our huge, mysterious universe. But perhaps
the way to bridge the gap and bring humanists back to this universe, that
we share with other atom-sets, is to show that the eye of science and the
eye of art are both descended from the Ancient Eye.
Before the Great Divide
Okay, C. P. Snow is yesterday's news; we need to dig further in the
archives to understand the Ancient Eye. The post-Reformation notions of
both art and science were distortions of the Ancient Eye way of seeing
(and connecting to) everything from atoms to galaxies. Possibly what
created the disconnect was the rise of late-Enlightenment-era Christianity
(post Martin Luther, 1483-1546) and its disdain of the profane material
plane as a sort of waiting room in front of a doorway into a spiritual plane.
Or perhaps it was a result of the thoroughly meaningless idea of scientific
"objectivity" that was partly the fault of Descartes (1596-1650).
Either way, the result was an anomaly that caused a great divide. Let's
pick up the story just before the Great Blinding of the Ancient Eye, with
Da Vinci. My favorite Da Vinci piece is neither the Mona Lisa, nor his
amazing engineering sketches, but his iconic image, The Vitruvian Man
(public domain), dating from 1485, or two years after the birth of Martin
Luther. Da Vinci's image is a representation of anatomical proportions and
their relation to the classical orders proposed by the Roman architect,
Vitruvius. This is the Ancient Eye pondering anatomy and seeing
architecture.
the dynamic human form. This is the Ancient Eye seeing cosmic order in
frozen human dance. When I was a kid, an art teacher taught me the
Nataraja formula (it starts with the inscription of a hexagon inside a circle;
Shiva's navel is the center). You can create very stylized and abstract
Natarajas once you learn the basic geometry (this image is from the New
York Metropolitan Museum, Creative Commons).
represents roughly the same idea: creative destruction. The white fish and
black fish chase each other. Their eyes contain their duals, and the seeds of
their own destruction, and the creation of the other.
I like to think that in some lost prehistoric time, the distant ancestors
of Da Vinci and the unknown creators of the Nataraja and Yin-Yang
symbols, got drunk together after a boar hunt, and talked about transience
and transformation, while pondering the fact that the death of the boar had
sustained their life.
The Ancient Eye truly comes into its own at a somewhat greater
remove from representation of reality or even metaphysical ideas like YinYang. One of my pilgrimage dreams is to visit the Alhambra in Spain,
reputed to contain depictions of all the major mathematical symmetries. It
will be the atheist hajj of an unapologetic kafir. The Alhambra (14th
century) provides proof that we could see the symmetries of the universe
within ourselves, long before Galois (1811-1832) and Sophus Lie (1842-1899) gave us the mathematical language of group theory, and the ability
to see the same symmetries in electrons, muons and superstrings.
This particular story had its grand finale of profound Ancient Eye
seeing only a few years ago, when the E8 symmetry group (the last beast,
an exceptional Lie group, in a complete classification of symmetries in
mathematics) was visualized (Creative Commons):
The language invented by Galois and Lie helped launch the program
of cataloging all the universe's symmetries, a program of breathtaking
mathematical cartography that finally drew to a close with the mapping of
E8.
And in case you have a naive view of symmetry and dissonance in
how we see, and disdain such symmetries as not artistic, consider the
enormously dissonant and messy beauty of an object called the
Mandelbulb, found along the way in a holy-grail search among
mathematicians for a 3D Mandelbrot set.
Maybe the gap between science/engineering and art isn't the vast gulf
C. P. Snow imagined it to be. Maybe it is merely the distance between the
2H pencil used in engineering drafting and the 2B pencil, the mainstay of
line art. It is a gap that can easily be bridged by something as simple as a
notebook. Doesn't seem that far, does it?
I have no idea whether these are Ideas Worth Spreading, but I like
looking at them. It is my substitute for prayer. Blake's Tyger, with its
immortal symmetries, also helps.
Diamonds are not adaptable at all. They are the hardest things around,
and the only thing that can work a diamond is another diamond. They are
nearly perfectly non-fungible. The more precious ones are so non-fungible
that they have names, personalities and histories that are nearly
impossible to erase. As economic goods, they transcend mere brandhood
and aspire to sentience: we speak of cursed or lucky diamonds. Diamonds
do not play well with other materials. Other materials, gold in particular,
must adapt to them. Purity and refinement are not very useful concepts
to apply to a diamond. In fact, a diamond is defined by its impurities. The
famous Hope diamond is blue because of trace quantities of boron. Color,
clarity and flaws can be assessed, but ultimately working a diamond is
about revealing its personality rather than molding it. Diamonds that win
personality contests go on to become famous. Those that fail to impress
the contest judges are murdered: broken up into smaller pieces or
degraded to industrial status.
The value of a diamond is in the eye of the beholder. At birth, rough
diamonds are assessed by expert sightholders, and at every subsequent
transaction, human judges assess value. A diamond is born as a brand. An
extreme, immutable brand that can only be destroyed by destroying the
diamond itself. And finally, and perhaps most importantly, a
diamond's value has nothing to do with its material constitution. Carbon is
among the commonest elements on earth. A diamond's value is entirely
based on the immense amounts of energy required to fuel the process that
creates it. They are found in places where deep, high-energy violence has
occurred, such as the insides of volcanic pipes.
Diamonds are forever. They cannot be drained of history. When you
break up a diamond you cannot add diamonds; the pieces have less
value than the whole. Diamonds can only be worked in irreversible,
destructive ways.
Diamonds represent a becoming kind of value; the products of
creative destruction. If you've read my Be Slightly Evil newsletter issue,
"Be Somebody or Do Something," you know the symbolism I am getting at
here. You also know where my sympathies lie.
I find it particularly amusing that the value of gold is measured in
purity carats, while the value of diamonds is measured in weight carats.
Purity and weight are what are known as intensive and extensive
measures. The size of a diamond is a measure of the quantity of tectonic
violence that created it.
I prefer diamonds to gold, perhaps because I am not an original
thinker, but a creative-destructive one. I am not very good at discovering
rare things. I am better at applying intense pressure to commonplace
things, in the hopes of producing a diamond. Sometimes I stumble upon
natural rough diamonds, but more often, I attempt to manufacture artificial
ones from coal. They are not as pretty, and it is very hard to manufacture
large ones, but when I succeed, I produce legitimate diamonds, born under
pressure.
Gold, I rarely mine myself (and earth-bound humans cannot
manufacture it; only dying suns can). I buy gold in the form of second-hand
jewelry at the bookstore, melt it down, and rework it into other things.
Most often, into settings for diamonds.
and go, "Aha! A closed curve is concave if and only if you can find a
pair of points like so, and for some theta, the point on the line given by my
clever equation isn't inside the figure!"
You've just found an attribute of convexity that you think is necessary
and sufficient to define it. But then, suddenly a thought occurs to you. You
sketch:
You are in trouble. At this point, if you really cared enough, you'd go
on to reinvent a good deal of topology, invent the notion of "simply
connected," figure out that you need the notion of closed and open sets
and interiors and boundaries (to handle the Figure 8 case) and so forth. But
let's not go down that road. Let's ask the more interesting question: why
didn't you just define concavity to be anything that satisfies your original
straight-line test? (For many purposes in math, that is in fact exactly
what you do: use the definition without worrying about connectedness.
That's the impatient, technical, "let's get on with it" aspect of mathematics,
but you and I like to fuss over what we mean instead of getting
somewhere.)
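The straight-line test is mechanical enough to sketch in code. Here is a minimal, hypothetical Python version: it represents a region as a point-membership predicate and merely samples random segments, so a "convex" verdict is evidence, not proof.

```python
import random

def is_convex(inside, samples=10000, bounds=(-1.0, 1.0)):
    """Straight-line test: a region is convex iff every segment between
    two of its points stays inside it. `inside` is a predicate taking
    (x, y); we sample random segments instead of checking all of them."""
    lo, hi = bounds

    def random_point():
        # Rejection-sample a point of the region from the bounding box.
        while True:
            x, y = random.uniform(lo, hi), random.uniform(lo, hi)
            if inside(x, y):
                return x, y

    for _ in range(samples):
        (x1, y1), (x2, y2) = random_point(), random_point()
        theta = random.random()
        x = theta * x1 + (1 - theta) * x2
        y = theta * y1 + (1 - theta) * y2
        if not inside(x, y):
            return False  # found a concavity witness
    return True  # no witness found (not a proof)

disk = lambda x, y: x * x + y * y <= 1.0  # convex
# The same disk with a notch cut out of its right side: concave.
crescent = lambda x, y: disk(x, y) and not (x > 0 and abs(y) < 0.3)
```

Note that the Figure 8 worry from the text survives here too: a disconnected or pinched region will fail this test for reasons that have nothing to do with visual "dents."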
The interesting thing about the way our minds work is that math and
formalism are subservient to a fuzzier notion of "what I want to get at." As
we refine technical definitions (or natural language definitions of entities
like "culture") we tend to move the definition to get at an understood but
inexpressible concept. We practically never reduce the concept itself to the
definition we are working with. This sort of thing is an example of the
operation of what philosophers like to call intension (with an "s").
Intension, roughly speaking, is the true meaning of a concept we are
after. The difference between definition and meaning is what philosophers
like to characterize as primary (or a priori) and secondary (or a posteriori)
intension. The primary intension of "water" is "watery stuff." That is why
a sentence like "Ammonia is the water of Titan" makes sense to us: we
imagine ammonia oceans. By contrast, "Water is H2O" is a secondary
intension. David Chalmers has a beautiful discussion of intension in The
Conscious Mind: In Search of a Fundamental Theory.
Does this apply to this example? Concavity, unlike water, is an
abstraction of real-world things like inkblots, bays, dents, holes and so
forth. It references too many things in the real world for us to usefully say
something like "concavity is sort of like bays or dents." Despite this,
however, our brains seem to work with a primary intension of concavity
that draws efforts at definition spiraling towards itself. We grope towards
what we mean through attribute-based tests expressed in terms of simpler
concepts (like "straight line" in our case).
What makes math special is that starting with a few prototypes that
suggest a useful notion, we can often converge in a finite number of steps
Imagine a line dropping from the top vertex vertically down to the
base. This line enables you to visualize two right triangles. Now imagine a
copy of the triangle on the left being rotated clockwise 180 degrees.
Position this imaginary triangle so that you now have a complete rectangle
on the left. Repeat the process for the right. The two imaginary rectangles
now form a larger rectangle. The area of this rectangle is the product of the
base and height of the original triangle. Since you constructed this
rectangle by copying, rotating and pasting two triangles that exactly
covered the original triangle, the original triangle must have an area given
by half the product of the base and height.
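The rotate-and-paste construction compresses into one line of algebra. Writing b1 and b2 for the two pieces into which the altitude of length h splits the base b (choosing the longest side as base guarantees the altitude's foot lands inside the base, so b1 + b2 = b):

```latex
A = \tfrac{1}{2} b_1 h + \tfrac{1}{2} b_2 h
  = \tfrac{1}{2}(b_1 + b_2)\,h
  = \tfrac{1}{2}\,b\,h
```

Each half-product is half the area of one of the two imaginary rectangles built from the right triangles.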
formula that was satisfiable. In the more subtle cases with symmetries, it
showed up in terms of the degeneracy in the proof construction which
would not work in the general case.
That doesn't explain it all. What about our long skinny triangle,
which is close to degenerate, but not strictly so? Why didn't we draw
something like that? I suspect this has to do with the precision of
comparisons we need to make when we mentally manipulate geometric
figures: we want enough asymmetry and non-degeneracy to clearly
illustrate the information capacity of our concept, but not so much that the
precision required of the representation is too high. I am not completely
happy with this hand-wavy account, so I'll revisit this when I come up
with something better. If you have a better account right now, post a
comment.
A final point: why did we choose to draw our original scalene triangle
with the longest side as a visual base? One proximal reason is that the
necessary manipulations would require more effort if we were to draw it
with, say, the obtuse angle at the base (try it). A less obvious reason is that
we operate with orientational metaphors that determine notions like "up"
and "down" when dealing with abstractions. These metaphors inform both
our language ("base" and "height") and probably explain why non-standard
orientations are mentally harder to work with, even though the explicit
visual-proof steps are orientation-agnostic. These conceptual framing
metaphors will come up later when I talk about George Lakoff and his
work on metaphor, so I'll defer discussion of this aspect of prototypicity to
later.
When we move from instantiations of abstractions to sets of entities
(real or imagined) that we want to define, we run into problems with other
methods of picking out elements, such as archetypes and stereotypes, that
get in the way. We also run into issues of intension, with an "s". That's for
later.
I first encountered this notion of prototypicity in a biology class when
I was about 13. The teacher asked the class clown to come up to the
blackboard and draw an amoeba. He drew a neat block "L" shape, and the
class burst out laughing. The teacher got mad and told him to stop
clowning around and draw a proper amoeba. He countered that since we'd
been taught that Amoeba proteus could take on any shape, a regular "L"
was as much an "any shape" as an irregular blob.
2
Vanity and pragmatism wrestle for control of the act of naming. We
bend one ear towards history and the other towards posterity. We parse for
unfortunate rhymes and garbled pronunciations. We attempt at once to
situate and differentiate. We count syllables and look for domain names.
We walk around the name, viewing it as parent, lover, friend, bully,
journalist, lexicographer and historian. We embed it in imaginary
headlines and taunting rhymes.
In Bali, to name is to number. It is an unsatisfying synthesis that only
works in limited contexts.
The firstborn is Wokalayan (or Yan, for short),
second is Made, third is Nyoman or Komang (Man or
Mang for short), and fourth is Ketut (often elided to Tut).
I am not sure what happens if Wokalayan dies young. Does Made
replace his older sibling and become the new Wokalayan?
In cryptography, the first named character in an example scenario is
Alice. The second one is Bob. And so on down an alphabetic cast of
characters. This is not the world of interchangeable John and Jane Doe
figures. The order matters.
When birth order is more important than individual personality, you get a
social order in naming that inhabitants of individualistic modernity
struggle to understand.
3
Counting is both ordinal and cardinal. It takes a while to appreciate
the difference between "one, two, three" and "first, second, third."
To truly count is to know both processes intimately. In naming,
ordinality has to do with succession and replacement. Cardinality has to do
8
To position is to number and name at the same time, and create
something that is both a being and a becoming. Something rooted, that
seeks to connect and get along, and something restless that seeks to get
ahead and away.
To position a thing is to teach it to get ahead, get along, and get away.
We project onto the memetic world of names our own fundamental
genetically ordained proclivities. Evolutionary biology tells us that getting
ahead and getting along are the basic drives that govern life for a social
species. To this, as a species that invented individualism sometime in the
10th century AD, we must add getting away. The drive to become more
than a rank and number. To become a name, even if the only available one,
alpha, is taken.
The Microsoft version soup is Darwin manifest.
Getting ahead, getting along and getting away. Ordinal numbering,
cardinal numbering and naming. Name, rank and number.
Perhaps it is naming and numbering that are fundamental, not biology.
To number well is to comprehend symmetries and anticipate as-yet-unnamed realities: holes in schemata, to be filled in the future. And so we
name new elements before discovering them, imagine antimatter when we
only know of matter. To categorize well is to create timeless order.
Mendeleev's bold leap advanced both chemistry and the art and science of
naming.
To number poorly is to squeeze, stuff and snip. To constrain reality to
our fearful and limited conception of it.
To name well is to challenge and court numbers.
To name poorly is to kill or be killed by numbers.
Webster (XRCW). Across the world you will find XRCE (Europe), XRCC
(Canada) and XRCI (India). To earn its right to a unique name within this
orderly namespace, the sole rebel, PARC, had to unleash planet-disrupting
forces.
Xerography eventually became electrophotography, in the hands of
envious competitors who appeared after the trust-busters had done their
work. The name that had gotten ahead and away now had to get along. My
name is photography. Electro-photography.
They still call it xerography at Xerox though.
11
And across town, Kodak slowly declined and began to die. There is
irony here as well.
Photography does have a long history. The ancient Greeks did have
something to do with it. The ancient Chinese did know about pinhole
cameras. The French did play a role.
But Kodak is one of those rare names that was born through an act of
pure invention. George Eastman is quoted as saying about the letter "k":
"it seems a strong, incisive sort of letter." Yes, incisive like a knife.
The story goes that Eastman and his mother created the name from an
anagram set. Wikipedia says about the process:
Eastman said that there were three principal concepts
he used in creating the name: it should be short; one cannot
mispronounce it, and it could not resemble anything or be
associated with anything but Kodak.
The first two principles are still adhered to by marketers when
possible. The last has been abandoned since the 1970s, when the
positioning era began.
As with Wilson, the child soon eclipsed the father. Eastman Kodak
became just Kodak to the rest of the world. In proving the soundness of his
principles of memetic stability, Eastman ceded his own place in the history
of naming to a greater name.
Haloid, incidentally, is a reference to the binary halogen compounds of
silver used in photography. The word halogen was coined by Berzelius
from the Greek hals (sea or salt) and gen (come to be). "Coming to
be of the sea." It may be the most perfect name, suggesting the being and
becoming that is the essence of both naming and chemistry.
Jöns Jacob Berzelius is a founding father of chemistry in large part
due to his prolific naming. He came up with protein as well. He was also
responsible for naming Selenium. From the Greek Selene, for Moon.
It was no small achievement. Chemistry is a science of variety and
difference. It deals in so many different things that a narrowly taxonomic
mind will fail to appreciate its broader patterns.
In declaring that "physics is the only real science; all the rest are just
stamp collecting," Rutherford failed to appreciate chemistry the way
Berzelius did. As an ongoing grand narrative with lesser and greater
patterns.
Some deserving names like protein, and others merely abstract,
categorical formulas like CnH2n+2, and names that just fall short of
cohering into semantic atoms, like "completely saturated hydrocarbon."
12
Counting and naming are at once trivial and profound activities.
Toddlers learn to count, starting with "one, two, three…"
Terence Tao has won a Fields Medal and lives numbers like nobody
else alive today. And he is still basically learning to count. At levels you
and I would consider magic, but it is counting nevertheless.
Toddlers learn to name, starting with "me," "mama" and "dada."
Ursula Le Guin has won five Hugo and six Nebula awards, but is
fundamentally still a name-giver.
Names are born of universes, be they small ones that contain only
Kodak or large ones that contain all of Western civilization between alpha
and omega.
It is very hard to make up universes. It is easier to borrow and
disguise them, as Tolkien and Frank Herbert did.
And it is very hard to do so without accidentally causing collisions
between large, old namespaces that might not like each other, as my mom
found out with Rahul.
Lazy novelists are laziest with names, and the work falls apart. When
you have named every character in your novel perfectly, your novel is
finished. Plot and character converge towards perfection as names do.
Names in turn create universes. Carnegie Hall, Carnegie Foundation,
Carnegie-Mellon University.
To name is to choose one universe to draw from and another to create.
Rockefeller gave his name to few things. He preferred bland names like
Standard Oil and The University of Chicago.
And so it is that the Carnegie Universe is very visible, while the much
larger Rockefeller Universe is more hidden from sight.
13
Rockefeller chose to create, and hide much of what he created. But
you can go further. Beyond hiding lies un-naming. To un-name is to deny
identity.
To un-name and un-number is to anonymize completely.
It is useful for the name-giver to ponder the complementary problem
of un-naming. If to position is to name and number, to de-position is to un-name and un-number.
focused on the gestalt you are trying to distill, with repeated tests. The
story of these attempts is what we know as PR, and with each proposed
naming and positioning test you can ask, "do I understand this story yet?"
Without such test-driven naming, branding is an exercise in waterfall
marketing.
To the extent that it is a useful word at all, it describes a consequence
rather than an action. Away from the concrete world of cows being
tortured with red-hot irons, there is no actual action that you can call
branding.
You name, number and position. You then make up non-verbal
correlates (colors and logos) that derive from these basic elements.
These are things you do.
Brand happens.
The ocean view from our hotel at Cape Hatteras, Outer Banks
Mathematical Thought
To build mathematical models, you start by observing and brain-dumping
everything you know about the problem, including key
unknowns, onto paper. This brain-dump is basically an unstructured take
on what's going on. There's a big word for it: phenomenology. When I do
a phenomenology-dumping brainstorm, I use a mix of qualitative notes,
quotes, questions, little pictures, mind maps, fragments of equations,
fragments of pseudo-code, made-up graphs, and so forth.
You then sort out three types of model building blocks in the
phenomenology: dynamics, constraints and boundary conditions
(technically all three are varieties of constraints, but never mind that).
Dynamics refers to how things change, and the laws that govern those
changes. Dynamics are front and center in mathematical thought. Insights
come relatively easily when you are thinking about dynamics, and sudden
changes in dynamics are usually very visible. Dynamics is about things
like the swinging behavior of pendulums.
Constraints are a little harder. It takes some practice and technical
peripheral vision to learn to work elegantly with constraints. When
constraints are created, destroyed, loosened or tightened, the changes are
usually harder to notice, and the effects are often delayed or obscured. If I
were to suddenly pinch the middle of the string of a swinging string-and-weight
pendulum, it would start oscillating faster. But if you are paying
attention only to the swinging dynamics, you may not notice that the
actual noteworthy event is the introduction of a new constraint. You might
start thinking, "there must be a new force that is pushing things along
faster," and go hunting for that mysterious force.
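A back-of-envelope check of the pinch example, assuming the small-angle pendulum formula and treating the pinch as simply halving the effective length (the numbers are illustrative):

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

T_free = pendulum_period(1.0)     # original 1 m string
T_pinched = pendulum_period(0.5)  # pinched at the midpoint: a new constraint

# No new force appeared; the constraint shortened the effective length,
# speeding the oscillation up by a factor of sqrt(2).
speedup = T_free / T_pinched
```

Nothing in the dynamics changed; only the constraint did, which is exactly the trap described above.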
This is a trivial example, but in more complex cases, you can waste a
lot of time thinking unproductively about dynamics (even building whole
separate dynamic models) when you should just be watching for changes
in the pattern of constraints.
Inexperienced modelers are often bored by constraints because they
are usually painful and dull to deal with. Unlike dynamics, which dance
around in exciting ways, constraints just sit there, usually messing up the
dancing. Constraints involve tedious-to-model facts like "if the
pendulum swings too widely, it will bounce off that wall." Constraints are
ugly when you first start dealing with them, but you learn to appreciate
their beauty as you build more complex models.
Boundary conditions, though, are the hardest of all. Most of the raw,
primitive, numerical data in a mathematical modeling problem lives in the
description of boundary conditions. The initial kick you might give a
pendulum is an example. The fact that the rim of a vibrating drum skin
cannot move is a boundary condition. When boundary conditions change,
the effects can be extremely weird, and hard to sort out, if you aren't
looking at the right boundaries.
The effects can also be very beautiful. I used to play the Tabla, and
once you get past the basics, advanced skills involve manipulating the
boundary conditions of the two drums. That's where much of the beauty of
Tabla drumming comes from. Beginners play in dull, metronomic ways.
Virtuosos create their dizzy effects by messing with the boundary
conditions.
In mathematical modeling, if you want to cheat and get to an illusion
of understanding, you do so most often by simplifying the boundary
conditions. A circular drum is easy to analyze; a drum with a rim shaped
like Lake Erie is a special kind of torture that takes computer modeling to
analyze.
A little tangential kick to a pendulum, which makes it swing mildly in
a plane, is a simple physics homework problem. An off-tangent kick that
causes the pendulum bob to jump up, making the string slacken, before
bungeeing to tautness again, and starting to swing in an unpleasant conic,
is an unholy mess to analyze.
But boundary conditions are where actual (as opposed to textbook)
behaviors are born. And the more complex the boundary of a system, the
less insight you can get out of a dynamics-and-constraints model that
simplifies the boundary too much. Often, if you simplify boundary
conditions too much, the behaviors that got you interested in the first place
will vanish.
Dynamics, Constraints and Boundaries in Qualitative Thinking
Without realizing it, many smart people without mathematical training
also gravitate towards thinking in terms of these three basic building
blocks of models. In fact, it is likely that the non-mathematical
approach is the older one, with the mathematical kind being a codified and
derivative kind of thinking.
Historians are a great example. The best historians tend to have an
intuitive grasp of this approach to building models using these three
building blocks. Here is how you can sort these three kinds of pieces out
in your own thinking. It involves asking a set of questions when you begin
to think about a complicated problem.
1. What are the patterns of change here? What happens when I do
various things? What's the simplest explanation here? (dynamics)
2. What can I not change, where are the limits? What can break if
things get extreme? (constraints)
3. What are the raw numbers and facts that I need to actually do
some detective work to get at, and cannot simply infer from what I
already know? (boundary conditions).
Besides historians, trend analysts and fashionistas also seem to think
this way. Notice something? Most of the action is in the third question.
That's why historians spend so much time organizing their facts and
numbers.
This is also why mathematicians are disappointed when they look at
the dynamics and constraints in models built by historians. Toynbee's
monumental work seems, to a dynamics-focused mathematical thinker,
much ado about an approximate 2nd-order under-damped oscillator (the
cycle of Golden and Dark ages typical in history). Hegel's historicism and
"End of History" model appear to be a dull observation about an
asymptotic state.
how does the world work? If I just made up a theory of the mainstream
world based on mainstream dynamics, it would be very impoverished. It
would offer an illusion of insight and zero predictive power. A theory of
the middle that completely breaks down at the boundaries and doesn't
explain the most interesting stories around us, is deeply unsatisfying.
I have proof that this approach is useful. Some of my most popular
posts have come out of boundary condition thinking. The Gervais
Principle series was initially inspired by the question, "how is Office
funny different from Dilbert funny?" That led me to thinking about
marginal slackers inside organizations, who always live on the brink of
being laid off. My post from last week, The Gollum Effect [January 6,
2011], came from pondering extreme couponers and hoarders at the edge
of the mainstream.
So I operate by the vague heuristic that if I pay attention to things on
the edge of the mainstream, ranging from motorcycle gangs to extreme
couponers and hoarders, perhaps I can make more credible progress on big
and difficult problems.
Or at least, that's the leap of faith I make in most of my thinking.
The reasoning models, on the other hand, are complex, but largely
qualitative, and most of the thinking is up to you, not Thomas Bayes.
Explanation-Based Learning is one type. A slightly looser form is Case-Based
Reasoning. Both rely on what are known as rich domain theories.
Most of the hard thinking in EBL and CBR is in the qualitative thinking
involved in building good domain theories, not in the programming or the
math.
The former kind requires lots of data involving a few variables. Do
people buy more beer on Fridays? Easy. Collect beer sales data, and you
get a correlation between time t and sales s. Gauss did most of the
necessary thinking a couple of hundred years ago. You just need to push a
button.
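The "button" here is ordinary least squares, which Gauss worked out and which is small enough to write by hand. The sales numbers below are invented purely for illustration:

```python
# Hypothetical data: day-of-week index t (0 = Monday) vs. beer sales s.
days  = [0, 1, 2, 3, 4, 5, 6]
sales = [20, 22, 25, 30, 48, 45, 28]  # invented numbers, peaking on Friday

# Ordinary least squares fit of s = slope * t + intercept.
n = len(days)
mean_t = sum(days) / n
mean_s = sum(sales) / n
slope = sum((t - mean_t) * (s - mean_s) for t, s in zip(days, sales)) / \
        sum((t - mean_t) ** 2 for t in days)
intercept = mean_s - slope * mean_t
```

A positive slope and a correlation coefficient are all the "thinking" this kind of learning produces; everything interesting about why Fridays differ is left out.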
EBL, CBR and other similar models are different. A textbook example
is learning endgames in chess. If I show you an endgame checkmate
position involving a couple of castles and a king, you can think for a bit
and figure out the general explanation of why the situation is a checkmate.
You will be able to construct a correct theory of several other checkmate
patterns that work by the same logic. One case has given you an
explanation that covers many other cases. The cost: you need a rich
domain theory, in this case a knowledge of the rules of chess. The
benefit: you didn't waste time doing statistical analyses of dozens of
games to discover what a bit of simple reasoning revealed.
Looser case-based reasoning involves stories rather than 100%
watertight logic. Military and business strategy is taught this way. Where
the explanation of a chess endgame could potentially be extended
perfectly to all applicable situations, it is harder to capture what might
happen if a game starts with a "Sicilian defense." You can still apply a lot
of logic and figure out the patterns and types of game stories that might
emerge, but unlike the 2-castles-and-king situation, you are working in too
big a space to figure it all out with 100% certainty. But even this looser
kind of thinking is vastly more efficient than pure brute-force statistics-based
thinking.
There's a lot of data in the qualitative model-based kinds of learning
as well, except it's not two columns of x and y data. The data is a fuzzy set
of hard and soft rules that interact in complex ways, and lots of
information about the classes of objects in a domain. All of it deployed in
the service of an analysis of ONE data point. ONE case.
Think about people for instance. Could you figure out, from talking to
one hippie, how most hippies might respond to a question about drilling
for oil in Alaska? Do you really need to ask hundreds of them at Burning
Man? It is worth noting that random samples of people are
extraordinarily hard to construct. And this is a good thing. It gives people
willing to actually think a significant advantage over the unthinking data-driven
types.
The more data you have about the structure of a domain, the more you
can figure out from just one data point. In our examples, one chess
position explains dozens. One hippie explains hundreds.
People often forget this elementary idea these days. I've met idiots
(who shall remain unnamed) who run off energetically to do data collection
and statistical analysis to answer questions that take me 5 minutes of
careful qualitative thought with pen and paper, and no math. And yes, I
can do and understand quite a bit of the math. I just think 90% of the
applications are completely pointless. The statistics jocks come back and
are surprised that I figured it out while sitting in my armchair.
The Real World
Forget toy AI problems. Think about a real world question: A/B
testing to determine which subject lines get the best open rates in an email
campaign. Without realizing it, you apply a lot of model-based logic and
eliminate a lot of crud. You end up using statistical methods only for the
uncertainties you cannot resolve through reasoning. That's the key:
statistics-based methods are the last-resort, brute-force tool for resolving
questions you cannot resolve through analysis of a single prototypical
case.
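And the brute-force residue itself is small. A sketch of the standard two-proportion z-test on invented open-rate counts (both the numbers and the 1.96 cutoff convention are assumptions of the example, not anything from the text):

```python
import math

# Invented results for the two surviving subject lines.
opens_a, sent_a = 180, 1000
opens_b, sent_b = 150, 1000

p_a = opens_a / sent_a
p_b = opens_b / sent_b
p_pool = (opens_a + opens_b) / (sent_a + sent_b)

# Standard error of the difference under the pooled null hypothesis.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_a - p_b) / se  # |z| > 1.96 would be significant at the 5% level
```

Everything upstream of these few lines, deciding which subject lines even make it into the test, is the reasoning part.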
Think about customer conversations. Should you talk to 25 customers
about whether your product is good or bad? Or will one deep conversation
yield more dividends?
me. Nine out of ten times you ask, "they needed a study to figure THAT
out?"
And the 1/10 times you get actual insight? Well, consider the beer and
diapers story. I don't tell that story. Statistics-types do.
This means going with your gut-driven deep qualitative analysis of
one anecdotal case will be fine 9 out of 10 times.
The Real Reason Data Driven is Valued
So why this huge emphasis on quants and "data-driven" and
"analytics"? Could a good storyteller have figured out and explained (in
an EBL/CBR sense) the subprime mortgage crisis created by the quants? I
believe so (and I suspect several did and got out in time).
I think the emphasis is due to a few reasons.
First, if you can do stats, you can avoid thinking. You can plug and
chug a lot of formulas and show off how smart you are because you can
run a logistic regression and the Black-Scholes derivative pricing formula
(sorry to disappoint you; no, you are not that smart. The people who
discovered those formulas are the smart ones).
Second, numbers provide safety. If you tell a one-data-point story and
you turn out to be wrong, you will get beaten up a LOT more badly than if
your statistical model turns out to be based on an idiotic assumption.
Running those numbers looks more like real work than spinning a
qualitative just-so story. People resent it when you get to insights through
armchair thinking. They think the honest way to get to those insights is
through data collection and statistics.
Third: runaway behavioral-economics thinking by people without
the taste and competence to actually do statistics well. I'll rant about that
another day.
Don't be brute-force statistics-driven. Be feedback-driven. Be
prepared to dive into one case with ethnographic fervor, and keep those
and rich people being able to buy better and better lawyers over time. If
this is happening, the whole dialectic is falling apart, and trust in the
system erodes. Dialectical vitality drains away and the only way to operate
within the system is to become good at gaming it without any thought to
larger issues. This is the purely predatory vulture attitude. If a legal system
is full of vulture-lawyers and vulture-judges, it is a carcass.
A moral challenge for a lawyer might be, for instance, deciding
whether or not to use race to his or her advantage in the jury selection
process, effectively using legal processes to get racial discrimination
working in the client's favor. Should the lawyer use such tactics, morally
speaking? It depends on whether the dialectic is slowly evolving towards
managing race more thoughtfully or whether it is making racial
polarization and discrimination worse.
This constant presence of the process itself in peripheral vision means
that both lawyers and judges must have attitudes towards both the specific
case and about the legal system in general. So an activist judge, for
instance, might be judge-minded with respect to the case, but lawyer-minded with respect to the dialectic (i.e., being visibly partisan in their
philosophy about whether and how the system should evolve, and being
either energetic or conservative in setting new precedents). You could call such a
person a judge-lawyer.
A lawyer who writes legal thrillers on the side, with a dispassionate,
apolitical eye on process evolution, might be called a lawyer-judge. A
lawyer with political ambitions might be a lawyer-lawyer. I can't think of
a good archetype label for judge-judge, but I can imagine the type: an
apolitical judge who is fair in individual cases and doesn't try too hard to
set precedents, but does so when necessary.
The x-(x)-X-(X) Template
Because of the existence of an evolving dialectic framing things, you
really have four possible types of legal professionals: lawyer-lawyers,
judge-judges, lawyer-judges and judge-lawyers, where the first attitude is
the (legally mandated and formal-role based) attitude towards a specific
case, and the second is the (unregulated) political attitude towards the
dialectic.
When the system is getting better all the time, all four roles are
justifiable. But when it is gradually worsening beyond the point of no
return, none of them is. When things head permanently south, a mismatch
between held and demonstrated beliefs is a case of bad faith. Since all
hope for reform is lost, the only rational responses are to abandon the
system or be corrupt within it.
To get at the varieties of bad faith possible in a collapsing dialectic,
you need to distinguish between held and demonstrated beliefs at both
case and dialectic levels to identify the specific pattern.
So you might have constructs like lawyer-(judge)-lawyer-(lawyer).
This allows you to slice and dice various moral positions in a very
fine-grained way. For example, I think a legalist, in the sense that the term has
been used in history, is somebody who adopts a lawyer-like role in a
specific case within a dialectic that's decaying and losing vitality, while
knowing full well that it is decaying. Legalists help perpetuate a dying
dialectic. You could represent this as lawyer-(judge)-judge-(lawyer). I'll
let you parse that.
This is getting too meta even for me, so I'll leave it to people who are
better at abstractions to make sense of the possibilities here. I'll just leave
it at the abstract template expression I've made up: x-(x)-X-(X).
The special case of the law illuminates a broader divide in any sort of
dialectical process. Some dialectics are full of judge-mind types. Others
are full of lawyer-mind types.
The net behavior of a dialectic depends not just on the type of people
within it, but on its boundary conditions: at the highest level of appeal, do
judge-minds rule or lawyer-minds?
Within the judiciary, even though there are more lawyer minds, the
boundary conditions are at the Supreme Court, where judge minds rule. So
the dialectic overall is judge-minded due to the nature of its highest appeal
process.
it is the same as the Graham model. I think the Graham model involves
more conscious guidance from a separate idea about the aesthetics of
writing, sort of like bonsai.
Just-add-attention writing is driven by its own aesthetic. This can lead
to unpredictable results, but you get a more uncensored sense of whether
an idea is actually beautiful.
Dense writing is related to just-add-attention in a very simple way:
making something dense is a matter of partially dehydrating an extensive
form again, or stopping short of full hydration in the first place, along
with pruning the bits that are either hard to dilute or have been irreversibly
over-diluted.
Why would you want to do that? Because just-add-attention writing
can sort of sprawl untidily all over the place. Partially dehydrating it again
makes it more readable, at the cost of making it more cryptic.
This add-attention/dehydrate again process can be iterated with some
care and selectivity to create interesting artistic effects. It reminds me of a
one-word answer Xianhang Zhang posted on Quora to the question "How
do you chop broccoli?" Answer: recursively.
Regular writing can be chopped up like a potato. Just-add-attention
writing must be chopped up like a broccoli. It is more time consuming.
That's why I cannot do what some people innocently suggest: simply
serializing my longer pieces as a sequence of arbitrarily delineated parts. I
have never successfully chopped up a long piece into two shorter pieces.
At best, I have been able to chop off a straggling and unfinished tail end
into another draft and then work that separately.
***
Not all generative processes lack extensive structure. The human
skeleton is after all, also the product of a generative process (ontogeny).
To take a simpler example, the multiplication table for 9 is defined by a
generative rule (9 times n), but also has an extensive structure:
09
18
27
36
45
54
63
72
81
90
In case you didn't learn this trick in grade school, the extensive
structure is that you can generate this table by writing the numerals 0-9
twice, in adjacent columns, in ascending and descending order.
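The two structures can be checked against each other in a few lines of code (a sketch of my own, not from the original essay): the generative rule "9 times n" and the grade-school extensive trick produce the same column of numbers.

```python
# Generative rule: the first ten multiples of 9.
generative = [9 * n for n in range(1, 11)]

# Extensive structure: tens digits 0-9 ascending, paired with units
# digits 9-0 descending, read off as two adjacent columns.
extensive = [10 * tens + units
             for tens, units in zip(range(10), range(9, -1, -1))]

assert generative == extensive  # both give 9, 18, 27, ..., 90
```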
If you wanted to blog the multiplication table for 9, and had to keep it
to one line, you could use either the generative rule (9 times n) or the
digit trick. Both are good compressions, though the second is more limited. But
this is rare. In general a sufficiently complex generative process will
produce an extensive-form output that cannot then be compressed by any
means other than rewinding the process itself.
***
Just-add-attention writing is easy for those who can do it, but not
everybody can do it. More to the point, of the people who can do it, a
significant majority seem to find it boring to do. It feels a little bit like
folding laundry. It is either a chore, or a relaxing experience.
What sort of people can do it?
On the nature front, I believe you need a certain innate capacity for
free association. Some people cannot free associate at all. Others free
associate wildly and end up with noise. The sweet spot is being able to
free associate with a subconscious sense of the quality of each association
moderating the chain reaction. You then weave a narrative through what
you've generated. The higher the initial quality of the free association, the
easier the narrative weaving becomes.
On the nurture front, this capacity for high-initial-quality free
association cannot operate in a vacuum. It needs data. A lot of data,
usually accumulated over a long period of time. What you take in needs to
age and mature first into stable memories before free association can work
well on this foundation. The layers have to settle. By my estimate, you
have to read a lot for about 10 years before you are ready to do just-add-water writing effectively.
Unfortunately, initial conditions matter a lot in this process, because
our n+1 reading choice tends to depend on choices n and n-1. The reading
path itself is guided by free association. But since item n isn't usable for
fertile free association until, say, youve read item n+385, there is a time
lag. So your reading choices are driven by partly digested reading choices
in the immediate past.
So if you make the wrong choices early on, your "fill the hopper"
phase of about 10 years could go horribly wrong and fill your mind with
crap. Then you get messed-up effects rather than interesting ones.
So there is a lot of luck involved initially, but the process becomes a
lot more controlled as your memories age, adding inertia.
***
This idea that just-add-attention writing is driven by aged memories
of around 10 years of reading suggests that the process works as follows.
When you recognize a motif as potentially interesting, it is your
stored memories sort of getting excited about company. "Interesting" is a
lot of existing ideas in your head clamoring to meet a new idea. That's
why you are sometimes captivated by an evocative motif but cannot say
why. You won't know until your old ideas have interviewed the new idea
and hired it. Motif recognition is a screening interview conducted by the
ideas already resident in your brain.
Or to put it in a less overwrought way, old ideas act as a filter for new
ones. Badly tuned filters lead to too-open or too-closed brains. Well-tuned
ones are open just the right amount, and in the right ways.
Recognition must be followed by pursuit. This is the tedious-to-some
laundry-folding process of moderated free association. It is all the ideas in
your head interrogating the new one and forming connections with it.
Finally, the test of whether something interesting has happened is
whether you can extract a narrative out of the whole thing, once the
interviewing dies down.
A good free association phase will both make and break connections.
If your brain only makes connections, it will slowly freeze up because
everything will be connected to everything else. This is as bad as nothing
being connected, because you have no way to assess importance.
The pattern of broken and new connections (including those
formed/broken in distant areas) guides your narrative spinning.
point. The first airports were designed to look like railway stations, after
all. As McLuhan said, "We see the world through a rear-view mirror. We
march backwards into the future." Turn around, backward march.
This is the default mental model most people have of hyperlinks, a
model borrowed from academic citation, and made more informal:
Both are simple ports of constructs like "Nick Carr believes [Carr,
2008] that Google is making us stupid." There are a couple of mildly
interesting things to think about here. For instance, the hyper-grammatical
question of whether to link the word "believes," as I have done, or the title.
Similarly, you can ask whether "this" or "article" or "this article in the Atlantic"
should be used as the anchor in the explicit version. There is also the
visual-style question of how long the selected anchor phrase should be: the
more words you include, the more prominent the invitation to click. But
overall, this mental model is self-limiting. If links were only glorified
citations, a Strunk-and-White hyper-grammar/hyper-style guide would
have little new raw material to talk about.
Let's get more sophisticated and look at how hyperlinks break out of
the glorified-citation mould. Turn around, forward march.
Hyperlinking as Form-Content Mixing
Here are two sentences that execute similar intentions:
I don't remember where I first saw this clever method of linking (the
second one), but I was instantly fascinated, and I use it when I can. This
method is a new kind of grammar. You are mixing form and content, and
blending figure and ground into a fun "open the secret package" game.
Sholay, that odd mix of Kurosawa and John Wayne that drove
India wild.
Salman Rushdie method: "Amitabh, he-of-boundless-splendor,
stared down, a-flaming, from a tattered old Sholay poster."
Critics and authors alike agonize endlessly about the politics of these
different voices. This particular example, crossing as it does linguistic and
cultural boundaries, in the difficult domain of fiction, is extreme. But the
same sorts of figure/ground/voice dynamics occur when you write in-culture or non-fiction.
The first simply ignores non-Indian readers, who must look in at
Indians constructing meaning within a fishbowl, with no help. It is simple,
but unless the intent is genuinely to write only for Indians (which is
essentially impossible on the Web, in English), not acknowledging the
global context is a significant decision (whether deliberate or unthinking).
The second method is simply technically bad. If you cant solve the
problem of exposition-overload, you shouldnt be writing fiction.
The third method is the sort of thing that keeps literary scholars up at
nights, worrying about themes of oppression. Is acknowledging Clint
Eastwood as the prototypical strong-silent action hero a political act that
legitimizes the cultural hegemony of the West? What if I'd said "Bruce Lee
of India" or "Toshiro Mifune of India"? Would those sentences be acts of
protest?
Rushdie pioneered the last method, the literary equivalent of theater
forms where the actors acknowledge the audience and engage them in
artistic ways. Rushdie finesses the problem by adopting neither simplicity
nor exposition, but a deliberate, audience-aware self-exoticization-with-a-wink. If you know enough about India, you will recognize "he-of-boundless-splendor" as one literal meaning of the name Amitabh, while
"Sholay" means flames. By putting in cryptic (to outsiders) cultural
references, Rushdie simultaneously establishes an identity for his voice,
and demands of non-Indians that they either work to access constructible
meaning, or live with opacity. At the same time, Indians are forced to look
at the familiar within a disconcerting context.
(A version of this solution, curiously, has been available to comic-book artists. If the sentence above had been the caption of a panel showing
a boy staring at an Amitabh Bachchan Sholay poster, you would have
achieved nearly the same effect.)
This is an extraordinarily complex construct, because the sentence is a
magical, shape-shifting monster. It blends figure and ground compactly;
the gestalt has leaky boundaries limited only by your willingness to click.
Note that you can kill the magic by making the links open in new windows
(which reduces the experience to glorified citation, since you are
insistently hogging the stage and forcing context to stay in the frame).
What makes this magical is that you might never finish reading the story
(or this article) at all. You might go down a bunny trail of exploring the
culture and history of Bollywood. Traditionally, writers have understood
that meaning is constructed by the reader, with the text (which includes the
author's projected identity) as the stimulus. But this construction has
historically been a pretty passive act. By writing the sentence this way, I
am making you an extraordinarily active meaning-constructor. In fact, you
will construct your own text through your click-trail. Both reading and
writing are always political and ideological acts, but here I've passed on a
lot more of the burden of constructing political and ideological meaning
onto you.
The reason this scares some people is rather Freudian: when an author
hyperlinks, s/he instantly transforms the author-reader relationship from
parent-child to adult-adult. You must decide how to read. Your mom does
not live on the Web.
That's not all. The writer, as I said, has always been part of the
constructed meaning, but his/her role has expanded. Literary theorists
have speculated that bloggers "write themselves into existence" by
constructing their online persona/personas. The back-cover author
biography in traditional writing was a limited, unified and carefully
managed persona, usually designed for marketing rather than as a
consciously-engineered part of the text. Online however, you can click on
my name, and explore how I present myself on LinkedIn, Facebook and
Twitter. How deeply you explore me, and which aspects you choose to
explore, will change how you construct meaning from what I write.
So, in our three examples, we've gone from backward-looking, to
clever, to seriously consequential. But you ain't seen nothing yet. Let's
talk about how the hyperlink systematically dismantles and reconstructs
our understanding of the idea of a text.
Fractured-Ludic Reading
The Kindle is a curiously anachronistic device. Bezos's desire was to
recreate the ludic reading experience of physical books. To be ludic, a
reading experience must be smooth and immersive to the point where the
device vanishes and you lose yourself in the world created by the text. It
is the experience big old-school readers love. Amazon attempted to make
the physical device vanish, which is relatively unproblematic as a goal.
But they also attempted to sharply curtail the possibilities of browsing and
following links.
In light of what weve said about constructing your own text, through
your click-trail, and your meaning from that text, it is clear that Bezos's
notion of "ludic" is not a harmless cognitive-psychology idea. It is a
political and aesthetic idea, and effectively constitutes an attitude towards
that element we know as dissonance. It represents classicism in reading.
Writers (of both fiction and non-fiction) have been curiously lagging
when it comes to exploring dissonance in their art. Musicians have gone
from regular old dissonance through Philip Glass and Nirvana to today's
experimental musicians who record, mix and play back random street
noises as performance. Visual art has always embraced collages and more
extreme forms of non sequitur juxtaposition. Dance (Stravinsky), film
(David Lynch) and theater (Beckett) too, have evolved towards extreme
dissonance. Writers though, have been largely unsuccessful in pushing
things as far. The amount of dissonance a single writer can create seems to
be limited by a very tight ceiling, beyond which lies incomprehensible
nonsense (Beckett's character Lucky, in Waiting for Godot, beautifully
demonstrates the transition to nonsense).
In short, we do not expect musical or visual arts to be unfragmented
or smooth, or to allow us to forget context. We can tolerate extreme
closeness to random noise in other media. Most art does not demand that
our experience of it be "ludic" the way writing does. Our experience can
be disconnected, arms-length and self-conscious, and still constitute a
legitimate "reading." Word-art, though, has somehow been trapped within its
own boundaries, defined by a limited idea of comprehensibility and an
aesthetic of intimacy and smooth flow.
There are two reasons for this. First, sounds and images are natural,
and since our brains can process purely unscripted stuff of natural origin,
there is always an inescapable broader sensory context within which work
must be situated. The color of the wall matters to the painting in a way that
the chair does not matter to the reading of a book. Words are unnatural
things, and have always lived in isolated, bound bundles within which
they create their own natural logic. The second reason: music and visual
art can be more easily created collaboratively and rely on the diversity of
minds to achieve greater levels of dissonance (an actor and director for
example, both contribute to the experience of the movie). Writing has
historically been a lonely act since the invention of Gutenberg's press. We
are now returning to a world of writing that is collaborative, the way it
was before Gutenberg.
So what does this mean for how you understand click-happy online
reading? You have two choices:
In other words, when you browse and skim, you aren't distracted and
unfocused. You are just reading a very dissonant book you just made up.
Actually, you are reading a part of a single book. The single book. The one
John Donne talked about. I first quoted this in my post "The Deeper
Meaning of Kindle." The second part is well-known, but it is the first part
that interests us.
All mankind is of one author, and is one volume;
when one man dies, one chapter is not torn out of the book,
but translated into a better language; and every chapter
must be so translated…As therefore the bell that rings to a
sermon, calls not upon the preacher only, but upon the
congregation to come: so this bell calls us all: but how
much more me, who am brought so near the door by this
sickness…No man is an island, entire of itself…any man's
death diminishes me, because I am involved in mankind;
and therefore never send to know for whom the bell tolls; it
tolls for thee.
The Hyperlink as the Medium
If you start with McLuhan, as most people do, there are two ways to
view the Web: as a vast meta-medium, or as a regular McLuhanesque
medium, with nothing meta about it. For a long time I adopted the
meta-medium view (after all, the Web can play host to every other form: text,
images, video and audio), but I am convinced now that the other view is
with a density that rivals the densest writing today. With the exception of
scientific writing (best understood as a social-industrial process for
increasing the density of words), every other kind of writing today has
become less layered. Most writing admits one reading, if that.
Dense writing is not particularly difficult. Merely time-consuming. As
the word layering suggests, it is something of a mechanical craft, and you
become better with practice. Even mediocre writers in the past, working
with starter material no denser than today's typical Top 10 blog post, could
sometimes achieve sublime results by putting in the time.
If the mediocre can become good by pursuing density, the good can
become great. Robert Louis Stevenson famously wrote gripping action
sequences without using adverbs and adjectives. His prose has a sparse
elegance to it, but is nevertheless dense with meaning and drama. I once
tried the exercise of avoiding adverbs and adjectives. I discovered that it is
not about elimination. The main challenge is to make your nouns and
verbs do more work.
***
In teaching and learning writing today, we focus on the isolated virtue
of brevity. We do not think about density. Traditions of exegesis (the
dilution, usually oral, of dense texts to the point where they are
consumable by many) are confined to dead rather than living texts.
We have forgotten how to teach density. In fact, we've even forgotten
how to think about it. We confuse density with overwrought, baroque
textures, with a hard-to-handle literary style that can easily turn into
tasteless excess in unskilled hands.
The 2000-word thought experiment, if you try it, will likely force you
to consider density of meaning as a selection factor. Some words, like
schadenfreude, are intrinsically dense. Others, like love, are dense because
they are highly adaptable. Depending on context, they can do many
things.
Density is a more fundamental variable than the length of a text. It is
intrinsic to writing, like the density of a fluid; what is known in fluid
For the ancients, texts had to be little metered packets. But as paper
technology got cheaper and more reliable, poetry, like many other obsolete
technologies before and after, turned into an art form. Critical function
turned into dispensable style. Meter and rhyme ceased to be useful as
error-correcting coding mechanisms and turned into free dimensions for
artistic expression.
Soon, individual verses could be composed under the assumption of
stable, longer embedding contexts. Extensive works could be delineated a
priori, during the composition of the parts. And the parts could be safely
de-containerized. Rhyming verse could be abandoned in favor of blank
verse, and eventually meter became entirely unnecessary. And we ended
up with the bound book of prose.
Technologically, it was something of a backward step, like reverting
to circuit-switched networks after having invented packet switching, or
moving back from digital to analog technology. But it served an important
purpose: allowing the individual writer to emerge. The book could belong
to an individual author in a way a verse from an oral tradition could not.
***
Poetry gets it right: length is irrelevant. You can standardize and
normalize it away using appropriate containerization. It is density that
matters. Evolving your packet size and vocabulary over time helps you
increase density over time.
My posts range between 2000 and 5000 words, and I post about once a
week here on ribbonfarm. But there are many bloggers who post two or
three 300-word posts a day, five days a week. They also log 2000-5000
words a week.
So I am not particularly prolific. I merely have a different packet size
compared to other bloggers, optimized for a peculiar purpose: evolving an
idiosyncratic vocabulary. It seems to take several thousand words to
characterize a neologism like gollumize or posturetalk. But once that is
done, I can reuse it as a compact and dense piece of refactored perception.
In the process of synthesis, virtual circuits must ride once more on top
of a revitalized packet-switched network. The oral/written distinction must
be replaced by a more basic one that is medium-agnostic, like the Internet
itself.
***
According to legend, the sage Vyasa needed a scribe to write down
the Mahabharata as he composed it. Ganesha accepted the challenge, but
demanded that the sage compose as fast as he could write. Wary of the
trickster god, Vyasa in turn set his own condition: Ganesha would have to
understand every verse before writing it down. And so, the legend
continues, they began, with Vyasa throwing curveball verses at Ganesha
whenever he needed a break.
The figure of Vyasa the composer is best understood as a literary
device to represent a personified oral tradition (that perhaps included a
single real Vyasa or family of Vyasas).
But the legend gets at something interesting about the role of a scribe
in a dominantly oral culture. A second-class citizen like a minute-taker or
official record-keeper, the scribe must nevertheless synthesize and
interpret an ongoing cacophony in order to produce something coherent to
write down. When the spoken word is cheap and the written word is
expensive, the scribe must add value. The oral tradition may be the
default, but the written one is the court of final appeal in case of conflict
between two authoritative individuals.
There is a brilliant passage in Yes, Prime Minister, where the Cabinet
Secretary Humphrey Appleby helps the Prime Minister, Jim Hacker, cook
the minutes of a cabinet meeting after the fact, to escape from an informal
oral commitment. Appleby's exposition of the principle of accepting the
minutes as the de facto official memory gets to the heart of the Vyasa-Ganesha legend:
Sir Humphrey: It is characteristic of all committee
discussions and decisions that every member has a vivid
recollection of them and that every member's recollection
gonzo blogging instead of traditions that enjoy received authority, minute-taking scribe bloggers must increasingly interpret what they are seeing.
The first human scribe who wore the mask of Ganesha could
reasonably assume that there was a coherent trunk narrative with
discriminating judgments required only at the periphery. He would only
be responsible for smoothing out the rough edges of an evolving oral
consensus. Equally Humphrey Appleby could hope for a coherent
emergent intentionality in the deliberations of the cabinet.
But the scribe-blogger cannot assume that there is anything coherent
to be discovered in the gonzo blogging theater. At best he can attempt to
collect and compress and hope that it does not all cancel out.
There is another difference. When words are literally expensive, as
words carved in stone are, anything written has de facto authority,
underwritten by the wealth that paid for the scribe. Scribes were usually
establishment figures associated with courts, temples or monasteries,
deriving their interpretative authority from more fundamental kinds of
authority based on violence or wealth.
With derived authority comes supervision. The compensation for lost
derived authority is the withdrawal of supervision. The scribe-blogger is
an unsupervised and unauthorized chronicler in a world of contending
gonzos. Any authority he or she achieves is a function of the density and
coherence of the interpretative perspective he or she offers on the gonzo-blogging
theater.
***
I wish I could teach dense blogging. I am not sure how I am gradually
acquiring this skill, but I am convinced it is not a difficult one to pick up.
It requires no particular talent beyond a generic talent for writing and
thinking clearly. It is merely time-consuming and somewhat tedious.
Sometimes I strive for higher density consciously, and at other times,
dense prose flows out naturally after a gonzo-blogger memeplex has
simmered for a while in my head. I rarely let non-dense writing out the
door. You need gonzo-blogging credibility to successfully do Top 10 list
posts. I can manufacture branded ideas, but lack the raw material needed
to sustain a personal brand.
Writing teachers with a doctrinaire belief in brevity urge students to
focus. They encourage selection and elimination in the service of explicit
intentions. The result is highly legible writing. Every word serves a
singular function. Every paragraph contains one idea. Every piece of prose
follows one sequence of thoughts. There is a beginning, a middle and an
end. Like a city laid out by a High-Modernist architect, the result is
anemic. The text takes a single prototypical reader to a predictable
conclusion. In theory. More often, it loses the reader immediately, since no
real reader is anything like the prototypical one assumed by (say) the
writer of a press release.
An insistence on focus turns writing into a vocational trade rather than
a liberal art.
Both gonzo blogging and scribe blogging lead you away from the
writing teacher.
Striving for density, attempting to compress more into the same
number of words, inevitably leads you away from the legibility prized by
writing teachers. Ambiguity, suggestion and allusion become paramount.
Coded references become necessary, to avoid burdening all readers with
selection and filtration problems. Like Humpty-Dumpty, you are
sometimes forced to enslave words and chain them to meanings that they
were not born with.
***
Dense writing creates illegible slums of meaning. To the vocational
writer, it looks discursive, messy and randomly exploratory.
But what the vocational writer mistakes for a lack of clear intention is
actually a multiplicity of intentions, both conscious and unconscious.
***
In the days of 64k memories, programmers wrote code with as much
care as ancient scribes carved out verses on precious pieces of rock, one
expensive chisel-pounding rep at a time.
In the remarkably short space of 50 years, programming has evolved
from rock-carving parsimony to paper-wasting profligacy.
Still living machine-coding gray eminences bemoan the verbosity and
empty abstractions of the young. My one experience of writing raw
machine code (some stepper-motor code, keyed directly into a controller
board, for a mechatronics class) was enlightening, but immediately
convinced me to run away as fast as I could.
But why shouldn't you waste bits or paper when you can, in service of
clarity and accessibility? Why layer meaning upon meaning until you get
to near-impenetrable opacity?
I think it is because the process of compression is actually the process
of validation and comprehension. When you ask repeatedly, "who is listening?", every answer generates a new set of conflicts. The more you resolve those conflicts before hitting Publish, the denser the writing. If
you judge the release density right, you will produce a very generative
piece of text that catalyzes further exploration rather than ugly flame wars.
Sometimes, I judge correctly. Other times I release too early or too
late. And of course, sometimes a quantity of gonzo-blogger theater
compresses down to nothing and I have to throw away a draft.
And some days, I find myself staring at a set of dense thoughts that
refuse to either cohere into a longer piece or dissolve into noise. So I
packetize them into virtual palm-leaf index cards delimited by asterisks,
and let them loose for other scribes to shuffle through and perhaps sinter
into a denser mass in a better furnace.
It is something of a lazy technique, ultimately no better than list-blogging in the gonzo blogosphere. But if it was good enough for Wittgenstein, it's good enough for me.
***
Rediscovering Literacy
May 3, 2012
I've been experimenting lately with aphorisms: pithy one-liners of the sort favored by writers like La Rochefoucauld (1613-1680). My goal was to turn a relatively big idea, the sort I would normally turn into a 4,000-word post, into a one-liner. After many failed attempts over the last few months, a few weeks ago I finally managed to craft one I was happy with:
Civilization is the process of turning the incomprehensible into the
arbitrary.
Many hours of thought went into this 11-word candidate for eternal
quotability. When I was done, I was tempted to immediately unpack it in a
longer essay, but then I realized that that would defeat the purpose.
Maxims and aphorisms are about more than terseness in the face of
expensive writing technology. They are about basic training in literacy.
The aphorism above is possibly the most literate thing I have ever written.
By stronger criteria I'll get to, it might even be the only literate thing I've ever written, which means I've been illiterate until now.
This post isn't about the aphorism itself (I'll leave you to play with it),
but about literacy.
I used to think that the terseness of written language through most of
history was mostly a result of the high cost and low reliability of writing
technologies in pre-modern times. I now think these were secondary
issues. I have come to believe that the very word literacy meant something
entirely different before around 1890, when print technology became
cheap enough to sustain a written form of mass media.
Literacy as Sophistication
Literacy used to be a very subtle concept that meant linguistic
sophistication. It used to denote a skill that could be developed to arbitrary
levels of refinement through practice. Literacy meant using mastery over language to do two things well: exposition and condensation.
You were considered literate if you could take a classic verse and
expound upon it at length (exposition) and take an ambiguous idea and
distill its essence into a terse verbal composition (condensation).
Exposition was more than meaning-extraction. It was a demonstration
of contextualized understanding of the text, skill with both form and
content, and an ability to separate both from meaning in the sense of
reference to non-linguistic realities.
Condensation was the art of packing meaning into the fewest possible
words. It was a higher order skill than exposition. All literate people could
do some exposition, but only masters could condense well enough to
produce new texts considered worthy of being added to the literary
tradition.
Exposition and condensation are in fact the fundamental learned
behaviors that constitute literacy, not reading and writing. One behavior
dissolves densely packed words using the solvent that is the extant oral
culture, enriching it, while the other distills the essence into a form that
can be transmitted across cultures.
Two literate people in very different cultures, if they are
skilled at exposition, might be able to expand the same maxim (the Golden
Rule for instance) into different parables. Conversely, the literary masters
of an era can condense stories and philosophies discovered in their own
time into culturally portable nuggets.
So the terseness of an enduring maxim is as much about cross-cultural
generality as it is about compactness.
The right kind of terseness allows you to accomplish a difficult
transmission challenge: transmission across cultures and mental models.
Reading and writing by contrast, merely accomplish transmission across
time and space. They are much simpler inventions than exposition and
condensation. Cultural distance is a far tougher dimension to navigate than
spatial and temporal dimensions. By inventing a method to transmit across
vast cultural distances, our earliest Neolithic ancestors accidentally turned language into a tool for abstract thinking (it must have existed before then in some cruder form).
Today, to be literate simply means that you can read and write
mechanically, construct simple grammatical sentences, and use a minimal,
basic (and largely instrumental) vocabulary. We have redefined literacy as
a 0-1 condition rather than a skill that can be indefinitely developed.
Gutenberg's press certainly created a huge positive change: it made the raw materials of literary culture widely accessible. It did not, however, make
the basic skills of literacy, exposition and condensation, more ubiquitous.
Instead, a secondary vocational craft from the world of oral cultures
(one among many) was turned into the foundation of all education, both
high-culture liberal education and the vocational education that anchors
popular culture.
The Fall of High Culture
I won't spend much time on high culture, since the story should be
familiar to everybody, even if this framing is unfamiliar.
The following things happened.
Turn on Switch A.
Watch for the green light to come on.
Then push the lever.
So these are not stupid people. These are merely ordinary people who
have been lobotomized via the consumerization of language, delivered via
modern education.
We dimly realize that we have lost something. But appreciation for
the sophistication of oral cultures mostly manifests itself as mindless
reverence for traditional wisdom. We look back at the works of ancients
and deep down, wonder if humans have gotten fundamentally stupider
over the centuries.
We haven't. We've just had some crucial meme-processing software
removed from our brains.
Towards a Literacy Renaissance
This is one of the few subjects about which I am not a pessimist. I
believe that something strange is happening. Genuine literacy is seeing a
precarious rebirth.
The best of today's tweets seem to rise above the level of mere bon mots ("gamification is the high-fructose corn syrup of user engagement")
and achieve some of the cryptic depth of esoteric verse forms of earlier
ages.
The recombinant madness that is the fate of a new piece of Internet
content, as it travels, has some of the characteristics of the deliberate
forms of recombinant recitation practiced by oral culture.
The comments section of any half-decent blog is a meaning factory.
Sites like tvtropes.org are sustaining basic literacy skills.
The best of today's stand-up comics are preserving ancient wordplay
skills.
But something is still missing: the idea that literacy is a cultivable
skill. That dense, terse thoughts are not just serendipitous finds on the
Part 2:
Towards an Appreciative View
of Technology
An Infrastructure Pilgrimage
March 7, 2010
In Omaha, I was asked this question multiple times: "Err… why do you want to go to North Platte?" Each time, my wife explained, with a hint of embarrassment, that we were going to see Bailey Yard. "He saw this thing on the Discovery Channel about the world's largest train yard…" A kindly, somewhat pitying look inevitably followed: "Oh, are you into model trains or something?" I've learned to accept reactions like this. Women, and certain sorts of infidel men, just don't get the infrastructure religion. "No," I explained patiently several times, "I just like to look at such things." I was in Nebraska as a trailing spouse on my wife's business trip, and as an infrastructure pilgrim.
When boys grow into men, the infrastructure instinct, which first manifests itself as childhood car-plane-train play, turns into a fully-formed religion. A deeply animistic religion that has its priests, mystics and flocks of spiritually mute but faithful believers. And for adherents of this faith, the five-hour drive from Omaha to North Platte is a spiritual journey. Mine, rather appropriately, began with a grand cathedral, a grain elevator.
facing a necessary decline and fall. The beast is rightly reviled for the
cruelty it unleashes on factory-farmed animals. The problems with
genetically modified seeds are real. The horrendous modern corn-based
diet it has created cannot be condoned. Yet, you cannot help but
experience the awe of being in the presence of a true god of modernity. An
unthinking, cruel, beast of a god, but a god nevertheless.
After a quick pause at Lamar's Donuts (an appropriate sort of highly-processed religious experience), we drove on through the increasingly
desolate prairie. Near Kearney, you find the next stop for pilgrims, the
Great Platte River Archway Monument.
You take an escalator into the archway, where you work your way
through the exhibits. There are exhibits about stagecoaches, dioramas of
greedy, gossiping gold-seekers by campfires, and paintings of miserable-looking Mormons braving a wintry river crossing. There are other
exhibits about the development of the railroad and the great race that
ended with the meeting of the Union and Pacific railroads in Utah. In the
last segment, you find the story of the automobile, trucking, and the early
development of the American highway system. From the window, you can
watch the most recent of these layers of pathways, Eisenhowers I-80,
thunder by underneath at 70-odd miles an hour.
It is a pensive tale of one great struggle after another, with each age of
transportation yielding, with a creative-destructive mix of grace and
reluctance, to the next. The monuments of the religion of infrastructure are
monuments to change.
As you head further west from Kearney along I-80, the Union Pacific
railroad tracks keep you company. I watched a long coal train rumble
slowly across the prairie, and nearer North Platte, a massive double-decker
container train making its way towards Bailey. If you know enough about
container shipping, which I've written about before, watching a container
train go by is like watching the entire globalized world put on a parade for
your benefit. You can count off names of major gods, such as Hanjin and
Maersk, as they scroll past, like beads on a rosary. From the great ports of
Seattle and Los Angeles, the massive flood of goods that enters America
from Asia pours onto these long snakes, and they all head towards that
Vatican of the modern world, Bailey Yard.
I hunted in the gift store for a schematic map of the yard, but there
wasn't one. The storekeeper initially thought I was looking for something
like a model train or calendar, but when I explained what I wanted, a look
of understanding and recognition appeared on his face. I had risen in his
estimation. I was no longer an average, uninformed and accidental visitor.
I was a fellow seeker of spiritual truths who knew what was important. "A lot of people ask for that," he said, and explained that after 9/11 the Department of Homeland Security stopped the sale of the posters. He told me he expected the restrictions to be eased soon. Anyway, I took a picture
of a beautiful large-scale map of the yard that was hanging in the lobby.
Since Google Maps (search for North Platte and zoom in) seems to show
about as much detail as my picture, I feel safe sharing a low-resolution
version. If somebody scarily official objects, I'll take it down.
Reagan runway. There, you can stand and contemplate airplanes roaring
overhead every few minutes.
These are the fallen trees that line the southern shore of Lake Ontario,
under the towering Chimney Bluffs, cliffs left behind in disequilibrium
by the last Ice Age. For about a half mile along the base of the cliffs, you
see these trees. Here is the longest shot I could take with my camera.
No, evil human loggers did not do this. The cliffs are naturally
unstable, and chunks fall off at regular intervals. Here are a couple of the
more dramatic views you see as you walk along the trail at Chimney
Bluffs State Park.
Signs line the trail, warning you to keep away from the edge. Unlike
the Grand Canyon or the Niagara Falls, whose state of disequilibrium
requires an intellectual effort to appreciate, the Chimney Bluffs are in
disequilibrium on a time scale humans can understand. A life form we can
understand, trees, can actually bet on an apparently stable equilibrium
and lose in this landscape, fall, and rot in the water, while we watch.
Even earthquakes, despite their power, don't disturb our equilibrium-centric mental models of the universe. We view them as anomalous rather
than characteristic phenomena. The fallen trees of the Chimney Bluffs
cannot be easily dismissed this way. They are signs of steady, creeping
change in the world around us, going on all the time; creative-destruction
in nature.
The glaciated landscape of Upstate New York, of which the Chimney
Bluffs are part, is well known. The deep, long Finger Lakes, ringed by
waterfalls, have anchored my romantic fascination with this region for
several years now. The prototypical symbol of the region is probably
Taughannock Falls.
Unless you live in the region for a while, you won't get around to
visiting the Chimney Bluffs. But visit even for a weekend, and everybody
will urge you to go visit the falls.
We create tourist spots around sights which at once combine the
frozen drama of past violence in nature, and a picture of unchanging calm
in the present. Every summer and fall, the falls pour into Lake Cayuga and
tourists take pictures. Every winter, they slow to a trickle. Change is so
slow that we even let lazy thinking overpower us and make preservation
the central ethic of any concern for the environment. The entire ideology and movement is even called "conservation."
We forget cataclysmic extinction events that periodically wipe out
much of life. We forget to sit back and visualize and absorb the
implications of the dry quantitative evidence of ice ages. Moving to the
astronomical realm, we rarely stop and ponder the thought that the earth is
cooling down, that its magnetic poles seem to flip every few hundred thousand years, that its rotation has slowed from a once-fast 22-hour day. We
forget that our Sun will eventually blow up into a Red Giant that will be
nearly as large as the orbit of Mars.
We forget that nature is the first and original system of evolving
creative destruction. Schumpeters model of the economy came along
later.
Towards Disequilibrium Environmentalism
This troubles me. On the one hand, environmental concerns are
certainly very high on my list of ethical and practical concerns. Yet, when
nature itself is chock full of extinctions, unsteady heatings and coolings
and trembles and crumbles, why are we particularly morally concerned
about global warming and other unsustainable patterns of human activity?
A practical human concern is understandable (tough luck, Seychelles), but
to listen to Al Gore, you would think that it is somehow immoral to not
think entirely in terms of preservation, conservation, equilibrium and
stability. So nature decides to slowly destroy the Chimney Bluffs. We
decide to draw down oil reserves, slowly saturate the oceans with CO2
and melt the ice caps. Why is the first fine, but the others are somehow
morally reprehensible? If you worry that we are destroying a planet that
we share with other species, well, nature did those mass extinctions long
before we came along.
In this respect, the political left is actually rather like the right: it is
truly a conservative movement. Instead of insisting on the preservation of
an unchanging set of cultural values and societal forms, it insists on an
unchanging environment.
To be truly powerful, the environmentalist movement must be
reframed in ways that accommodate the natural patterns of
disequilibrium, change, and ultimate tragic, entropic death of the universe.
I don't know how to do this.
Why Disequilibrium instead of Instability?
Stability is a comforting idea. It is the idea that there is a subset of
equilibria that, when disturbed slightly, return to their original conditions.
At the Miraflores lock, the gateway into the Pacific Ocean at the
southern end of the Panama Canal, you can listen to one such soundscape:
the idling of your vessels engine, mixed with the flapping and screeching
of seabirds. The draining of the lock causes fresh water to pour into salt
water, killing a new batch of freshwater fish every 30-45 minutes. The
seabirds circle, waiting for the buffet to open.
***
Visible function lends lyricism to the legible but alien rhythms and
melodies of technology-shaped landscapes. You can make out some of the
words, like "crane" and "unloading," but the song itself is generally
impenetrable.
It is perhaps when the lyrics are at their most impenetrable that you
can most pay attention to the song. To understand is to explain. To explain
is to explain away and turn your attention elsewhere. Obviousness of
function can sometimes draw a veil across form, by encouraging a too-quick settling into a comforting instrumental view of technology.
Oscillating slowly back and forth across sections of the Panama
Canal, you will see strange boats carrying dancing fountains. I missed
what the tour guide said, so I have no idea what this is.
And at the other end, you find the orderly, authoritarian high-modernist fountains at the Bellagio in Las Vegas, which dance to human
music, for human entertainment.
But if you watch from the observation tower at Miraflores, the sheer size of one of these beasts starts becoming apparent. You get the
sense that something abnormal is going on.
And once it is really close, little cues start to alter your sense of the
various proportions involved, like this lifeboat and Manhattan fire-escape
style stairways.
Cruise ships give you a sense that a large modern ship is something
between a luxury hotel and a small city in terms of scale, but container
ships give you a sense of the non-human scales involved.
Partly this is because cruise ship designers go to great lengths to make
you forget that you are on a ship (which lends a whole new meaning to "Disney-like sensorial interfaces"). But mainly it is because our minds
cling so eagerly to the human that even the slightest foothold is sufficient
for anthropocentric perspectives to dominate thought. I am no more
immune than anybody else. My eyes instinctively sought out the lifeboat
and stairways, human-scale things. Earlier in this essay, I felt obliged to
describe the technological landscape by analogy to human music-making.
You can see why I think de Botton is my evil twin. He embraces
tendencies that I also see in myself, but am intensely suspicious of. I don't
trust my own attraction to poetry when it comes to appreciating
technology.
***
Scale is not just about comparisons and proportions. It is also about
precision.
Take this little engine that runs along the side of the lock on tracks,
steadying the ship. The clearance for some ships is measured in inches, and it
takes many of these little guys to keep a large ship moving slowly, safely
and steadily through the lock. Inches in a world of miles. Ounces in a
world of tons.
It is when one scale must interact with another in this manner that you
get a true sense of what scale means. This is another reason numbers
matter. You cannot appreciate precision without numbers (I remember the
first time I experienced scale-shock in the numerical-precision sense of the
term: when I learned that compressors in rocket engines must spin at over
40,000 RPM. I remember spending something like half an hour trying to
understand that number, 40,000, as a mechanical rotation rate).
If you put yourself on the waitlist, I'll see what I can do. I am waiting
to hear from the venue staff about whether there is capacity beyond the
nominal maximum of 45.
Also, for those of you in Chicago, a heads-up: I'll be there for the ALM Chicago conference next week, Feb 22-23, where I'll be doing a talk titled "Breathing Data, Competing on Code." The Neal Stephenson quote is
involved.
Make it if you can. Or email me, and perhaps we can do a little
meetup if there are a couple of readers there.
container was a somewhat obvious idea, and many attempts had been
made before McLean to realize some version of the concept. While he did
contribute some technological ideas to the mix (marked more by
simplicity and daring than technical ingenuity), McLean's is the central plot line because of his personality. He brought to a tradition-bound, self-romanticizing industry a mix of high-risk, opportunistic drive and a
relentless focus on abstractions like cost and utilization. He seems to have
simultaneously had a thoroughly bean-counterish side to his personality,
and a supremely right-brained sense of design and architecture. Starting
with the idea of a single coastal route, McLean navigated and took full
advantage of the world of regulated transport, leveraged his company to
the hilt, swung multi-million dollar deals risking only tens of thousands of
his own money, manipulated New-York-New-Jersey politics like a Judo
master and made intermodal shipping a reality. He dealt with the nittygritty of crane design, turned the Vietnam war logistical nightmare into a
catalyst for organizing the Japan-Pacific coast trade, and finally, sold the
company he built, Sea-Land, just in time to escape the first of many slow
cyclic shocks to hit container shipping. His encore, though, wasn't as
successful (an attempt to make an easterly round-the-world route feasible,
to get around the problem of empty westbound container capacity created
by trade imbalances). The entire story is one of ready-fire-aim audacity;
Kipling would have loved McLean for his ability to repeatedly make a
heap of all his winnings and risk it on one turn of pitch-and-toss. He
walked away from his first trucking empire to build a shipping empire.
And then repeated the move several times.
McLean's story, spanning a half-century, doesn't overwhelm the plot
though; it merely functions as a spinal cord. A story this complex
necessarily has many important subplots, which I'll cover briefly in a minute, but the overall story (which McLean's personal story manifests, in
a Forrest Gumpish way) also has an overarching shape. On one end, you
have four fragmented and heavily regulated industries in post World-War
II mode (railroads, trucking, shipping and port operations). It is a world of
breakbulk shipping (mixed discrete cargo), when swaggering, Brando-like
longshoremen unloaded trucks packed with an assortment of items,
ranging from baskets of fruit and bales of cotton to machine parts and
sacks of coffee. These they then transferred to dockside warehouses and
again into the holds of ships whose basic geometric design had survived
the transitions from sail to steam and steam to diesel. It was a system that
constrained part of the system, ships dominate the equation over trains and
trucks. One tidbit about the gradual consolidation: as of the book's writing, McLean's original company, Sea-Land, is now part of Maersk.
How this came to be is the most important (though not the most fun)
subplot. Things didn't proceed smoothly, as you might expect. All sorts of
forces, from regulation, to misguided attempts to mix breakbulk and
containers, to irrationalities and tariffs deliberately engineered in to keep
longshoremen employed, held back the emergence of the true efficiencies
of containerization. But finally, by the mid-seventies, today's business
dynamics had been created.
Two: The Technology Subplot
If the dollar figures and percentages tell the financial story, the heart
of the technology action is in the operations research. While McLean and
Sea-Land were improvising on the East Coast, a West Coast pioneer,
Matson, involved primarily in the '60s Hawaii-California trade, drove this
storyline forward. The cautious company hired university researchers to
throw operations research at the problem, to figure out optimal container
sizes and other system parameters, based on a careful analysis of goods
mixes on their routes. Today, container shipping, technically speaking, is
primarily this sort of operations research domain, where systems are so
optimized that an added second of delay in handling a container can
translate to tens of thousands of dollars lost per ship per year.
If you are wondering how port operations involving longshore labor
could have been that expensive before containerization, the book provides
an illuminating sample manifest from a 1954 voyage of a C-2 type cargo
ship, the S. S. Warrior. The contents: 74,903 cases, 71,726 cartons,
24,036 bags, 10,671 boxes, 2,880 bundles, 2,877 packages, 2,634 pieces,
1,538 drums, 888 cans, 815 barrels, 53 wheeled vehicles, 21 crates, 10
transporters, 5 reels and 1,525 undetermined. That's a total of 194,582
pieces, each of which had to be manually handled! The total was just
5,015 long tons of cargo (about 5,095 metric tons). By contrast, the
gigantic MSC Daniela, which made its maiden voyage in 2009, carries
13,800 containers, with a deadweight tonnage of 165,000 tons. That's a 30x improvement in tonnage and a 15x reduction in number of pieces for a single port call. Or in other words, a change from roughly 0.026 tons (26 kg) per handling to about 12 tons per handling, or a roughly 460x improvement in handling efficiency (somebody check my arithmetic, but I think I did this right). And of course, every movement in the MSC Daniela's world is precisely choreographed and monitored by computer. Back in 1954,
Brando time, experienced longshoremen decided how to pack a hold, and
if they got it wrong, loading and unloading would take vastly longer. And
of course there was no end-to-end coordination, let alone global
coordination.
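The author invites readers to check the arithmetic, so here is a minimal sketch in Python that does exactly that, using only the figures quoted in the paragraph above (reading the bag count as 24,036, the value that makes the manifest sum to the stated 194,582 pieces). It comes out to about 26 kg per handling for the Warrior and roughly a 457x overall improvement, in the same ballpark as the estimate in the text:

```python
# Check of the handling-efficiency arithmetic, using the figures
# quoted from the book.

# 1954 S.S. Warrior manifest: cases, cartons, bags, boxes, bundles,
# packages, pieces, drums, cans, barrels, wheeled vehicles, crates,
# transporters, reels, undetermined.
warrior_manifest = [74_903, 71_726, 24_036, 10_671, 2_880, 2_877,
                    2_634, 1_538, 888, 815, 53, 21, 10, 5, 1_525]
warrior_pieces = sum(warrior_manifest)
print(warrior_pieces)  # -> 194582

warrior_tons = 5_095     # metric tons of cargo (5,015 long tons)
daniela_pieces = 13_800  # containers aboard the MSC Daniela
daniela_tons = 165_000   # deadweight tonnage

# Tons moved per individual handling operation, then the ratio.
per_handling_1954 = warrior_tons / warrior_pieces    # ~0.026 tons (26 kg)
per_handling_2009 = daniela_tons / daniela_pieces    # ~12 tons
improvement = per_handling_2009 / per_handling_1954
print(round(improvement))  # -> 457
```

The tonnage ratio (about 32x) times the piece-count ratio (about 14x) gives the same result, since both calculations reduce to the same fraction.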
That's not to say the mechanical engineering part of the story is
uninteresting. The plain big box itself is simple: thin corrugated sheet
aluminum with load-bearing corner posts capable of supporting a stack
about 6 containers high (not sure of this figure), with locking mechanisms
to link the boxes. But this arrangement teems with subtleties, from
questions of swing control of ship-straddling cranes, to path-planning for
automated port transporters, to the problem of ensuring the stability of a 6-high stack of containers in high seas, with the ship pitching and rolling
violently up to 30 degrees away from the vertical. Here is a picture of the
twist-lock mechanism that holds containers together and to the
ship/train/truck-bed, and endures enormous stresses to make this magic possible:
locate near them, and they became vast box parking lots in otherwise
empty areas. The left-behind cities not only faced a loss of their port-based economies, but also saw their industrial base flee to the hinterland.
Cities like New York and San Francisco had to rethink their entire raison d'être, figure out what to do with abandoned shorelines, and reinvent
themselves as centers of culture and information work.
There is a historical texture here: the rise of Japan, Vietnam, the Suez
Crisis, oil shocks, and the Panama Canal all played a role. Just one
example: McLean, through his Vietnam contract, found himself with fully-paid-up, return-trip empty containers making their way back across the
Pacific. Anything he could fill his boxes with was pure profit, and Japan
provided the contents. With that, the stage was set for the Western US to
rapidly outpace the East Coast in shipping. Entire country-sized
economies had their histories shaped by big bets on container shipping
(Singapore being the most obvious example). At the time the book was
written, 3 of the top 5 ports (Hong Kong, Singapore, Shanghai, Shenzhen and Busan, Korea) were in China. Los Angeles had displaced
Newark/New York as the top port in the US. London and Liverpool, the
heart of the great maritime empire of the Colonial British, did not make
the top 20 list.
Five: The Broad Impact Subplot
Let's wrap up by looking at how the narrow world of container
shipping ended up disrupting the rest of the world. The big insight here is
not just that shipping costs dropped precipitously, but that shipping
became vastly more reliable and simple as a consequence. The 25%
transportation fraction of global goods in 1960 is almost certainly an
understatement because most producers simply could not ship long
distances at all: stuff got broken, stolen and lost, and it took nightmarish
levels of effort to even make that happen. Instead of end-to-end shipping
with central consolidation, you had shipping departments orchestrating ad
hoc journeys, dealing with dozens of carriers, forwarding agents, transport
lines and border controls.
Today, shipping has gotten to a level of point-to-point packet-switched efficiency, where the shipper needs to do a hundredth of the work
and can expect vastly higher reliability, on-time performance, far lower
hundreds of miles to form garbage islands in the middle of the ocean, such
as the Great Pacific Garbage Patch.
The story of the plastic water bottle serves as a sort of radioactive
tracer through the garbage industry, touching as it does every piece of the
puzzle.
The three books and the documentary explore different aspects of the
system, so let's briefly review them.
Rubbish by Rathje and Cullen
Rubbish, though a little dated, is the most professional of the three
books, since it is the result of a large, long-term academic study, with no
particular agenda in mind, and written by the godfather of the entire field
of Garbology. To the principals of the University of Arizona Garbage
project, garbage is just archeological raw material. The fact that drilling
into modern, active landfills tells us about modern humans, while digging
into ancient mounds tells us about Sumerians, is irrelevant to them. The
perspective lends an interesting kind of objectivity to the book.
The first and most basic thing I learned from the book surprised me
no end, and answered a question that I had always wondered about. Why
do ancient civilizations seem to get buried under mounds?
Turns out that for much of history, waste simply accumulated on
floors inside dwellings. Residents would simply put in new layers of fresh
clay to cover up the trash. Every dwelling was a micro landfill. When the
floor rose too high, they raised the ceiling and doorways.
The result was that most ancient civilizations rose (literally) on a pile
of their own trash. There is even a table of historical waste accumulation
rates included. South Asia is the winner in this contest: the Bronze Age
Indus Valley Civilization apparently had the fastest accumulation of waste
at nearly 1000 cm/century. (I can't resist a little subcontinental humor:
how about we attribute all the great cultural achievements of the Indus
Valley Civilization to modern India, and the trash to modern Pakistan,
where the major archeological sites are situated today?)
Ancient Troy was also quite the trash generator, at about 140 cm/century. Since those ancient times, accumulation rates have declined
dramatically (this doesn't mean we've been producing less trash per capita; merely that we've stopped burying it under our own floors).
Historically, trash was also thrown out onto streets, and burned
outside cities. The composition of trash has changed as well. If you think
todays plastic water bottles are a menace, you should read the description
of the horse-manure problem that (literally) buried New York before the
automobile.
Skipping ahead a few thousand years, you get the modern sanitary
landfill. But the takeaway here is a sense of perspective. Historically
speaking, our modern times are not the trashiest time in our history.
Though the scale and chemical diversity of the trash management problem
is huge in our time simply because of the size of the global population, we
are relatively far ahead of older civilizations in managing our trash.
Much of the work described in the book is about the insights you can
obtained by drilling into landfills, or collecting garbage bags directly from
households. The findings provide fascinating glimpses into the delusions
of human beings. Take food habits for instance. One interesting research
exercise the book describes is a study comparing self-reported food habits
to the revealed food habits based on trash analysis. The authors call this
the Lean Cuisine Syndrome:
People consistently underreport the amount of regular
soda, pastries, chocolate, and fats that they consume; they
consistently over-report the amount of fruits and diet soda.
The book notes a related phenomenon called the Surrogate
Syndrome: people are able to describe the actual habits of family members
and neighbors with chilling accuracy.
Another fascinating analysis involves pull-tabs of beer cans. These
seem to be a sort of carbon-dating tool for modern garbage.
sticking out, and the slopes will be at a precise 30-degree gradient). There
doesn't appear to be any need for alarmism though. America, at least, has
plenty of room. Other parts of the world may not be as lucky.
There are 2300 landfills around the country. You could say the United
States is a collection of 2300 large families, each with one giant trash can.
The Global Picture
I haven't found a good source that provides a global picture. The
CNBC documentary provides a glimpse into China, where Beijing alone
has a catastrophe looming (the city is overflowing with garbage in
unauthorized dump sites, because the available government-owned
landfills are insufficient for the growing city's waste stream).
Growing up in India, I have some sense of the world of garbage there.
There are both positives and negatives. On the positive side, the large-scale consumerist levels of trash production are still relatively rare in
India, and limited to the most well-off, westernized households. Growing
up, we generated practically no trash, simply because we mostly ate home-cooked food and did not consume the bewildering array of consumer
products that Americans routinely consume. As I recall, we owned a small
2-3 gallon trash basket, and generated perhaps one basket-full a week,
most of which was organic matter (which went to our garden). There was
little packaging. Groceries came in recycled newspaper bags, which we
recycled again.
But what little waste we did generate was poorly captured in the
organized waste stream. There were many disorganized small dumps in
the back alleys and few dumpsters.
By my teenage years in the 80s, modernity began catching up. Thin
plastic bags made from recycled (downcycled actually) plastic caught on
and replaced the newspaper bags. After reigning for about a decade, they
thankfully declined in popularity (thanks in part to an unanticipated
consequence: stray cows eating them and then dying as the plastic choked
their intestines), and I believe have actually been banned, at least in major
cities.
home-made Indian food, growing up. And I have to admit, every passing
year here in the States, I cook less, and buy more frozen, packaged foods
from my local Indian grocery store. Pizza boxes may be appearing in
Indian trash cans, but frozen chana masala boxes are appearing in
American trash cans as well (looking around the world though, it seems to
me that the Japanese are possibly the most in love with ridiculous amounts
of packaging).
But there's even more to the globalization of garbage than just
different country-level views. There is the international trade in garbage.
Places like India and China import garbage and recycling at all levels from
entire ships destined for the scrap-metal yard (which I wrote about earlier),
[January 28, 2010] to lead batteries to paper meant for recycling. The
waste stream is more than a network of dump routes that fans out from
cities like New York. It is a huge circulatory system that spans the globe.
Exploring Further
I have to admit, despite reading a ton of material on the subject, I am
merely a lot more informed, not much wiser. What is the true DNA of the
world of garbage? What is its significance within an overall understanding
of our world? Is it merely a treasure-trove of anthropological insights, or is
there a deeper level of analysis we can get to? The books left me with the
uncomfortable feeling that the garbage professionals were so absorbed in
the immediate details that they were missing something bigger. But I dont
know what that is. Somehow garbage in the literal sense probably fits into
the End of the World theme that I blogged about before (where I proposed
my garbage eschatology model of how the world might end).
Anyway, I expect my interest in this topic will continue to evolve.
I've started a trail on the subject (click the image below), which you can
explore. Do send me link/resource suggestions to add to it. As you can tell
by the relative incoherence of the trail, I don't yet have a good idea about
how to put the jigsaw puzzle together in a more meaningful way.
You could be forgiven for thinking that this was a sudden event based
on iron being suddenly discovered and turning out to be a superior
material for weaponry, and the advantage accidentally happening to fall to
the barbarian-nomads rather than the civilization centers.
Far from it.
Here's the real (or less wrong) story in outline.
The Clue in the Tin
You see, iron and bronze co-existed for a long time. Iron is a plentiful
element, and can be found in relatively pure forms in meteorites (think of
meteorites as the starter kits for iron metallurgy). Visit a geological
museum sometime to see for yourself (I grew up in a steel town).
It is hard to smelt and work, but basically once you figure out some
rudimentary metallurgy and can generate sufficiently high temperatures to
work it, you can handle iron, at least in crude, brittle and easily rusted
forms. Not quite steel, but then who cares about rust and extreme hardness
if the objective is to split open the skull of another guy in the next 10
seconds?
Bronze on the other hand is a very difficult material to handle. There
have been two forms in antiquity. The earlier Bronze Age was dominated
by what is known as arsenical bronze. That's copper alloyed with arsenic
to make it harder. That's not very different from iron. Copper is much
scarcer and less widely-distributed of course, but it does occur all over the
place. And fortunately, when you do find it, copper usually has trace
arsenic contamination in its natural form. So you are starting with all the
raw material you need.
The later Bronze Age though, relied on a much better material: tin
bronze. Now this is where the story gets interesting. Tin is an extremely
rare element. It only occurs in usable concentrations in a few isolated
locations worldwide.
In fact known sources during the Bronze Age were in places like
England, France, the Czech Republic and the Malay peninsula. Deep in
barbarian-nomad lands of the time. As far as we can tell, tin was first
mined somewhere in the Czech Republic around 2500 BC, and the
practice spread to places like Britain and France by about 2000 BC.
Notice something about that list? They are very far from the major
Bronze Age urban civilizations around the Nile, in the Middle East and in
the Indus Valley, of 4000-2000 BC or so.
This immediately implies that there must have been a globalized long-distance trade in tin connecting the farthest corners of Europe (and
possibly Malaya) with the heart of the ancient world. Not only that, you
are forced to recognize that the metallurgists of the day must have had
sophisticated and deliberate alloying methods, since you cannot assume,
as you might be tempted to in the case of arsenical bronze, that the
ancients didn't really know what they were doing. You cannot produce tin-bronze by accident. Tin implies skills, accurate measurements, technology,
guild-style education, and land and sea trade of sufficient sophistication
that you can call it an industry.
What's more, the use of tin also implies that the Bronze Age
civilizations didn't just sit around inside their borders, enjoying their
urban lifestyles. They must have actually traded somehow with the far
corners of the barbarian-nomad world that eventually conquered them.
Clearly the precursors of the Aryans and other nomadic peoples of the
Bronze Age (including the Celts in Europe, the ethnic Malays, and so
forth) must have had a lot of active contact with the urban civilizations
(naive students of history often don't get that humans had basically
dispersed through the entire known world by 10,000 BC; civilization
may have spread from a few centers, but people didn't spread that way,
they spread much earlier).
In fact, tin almost defines civilization: only the 3-4 centers of urban
civilization of that period had the coordination capabilities necessary to
arrange for the shipping of tin over land and sea, across long distances. It
is well recognized that they had trade with each other, with different trade
imbalances (there is clear evidence of land and sea trade among the
Mesopotamian, Nile and Indus river valleys; the Yellow River portions of
China were a little more disconnected at that time).
What is not as well recognized is that the evidence of commodities
like tin indicates that these civilizations must have also traded extensively
with the barbarian-nomad worlds in their interstices and beyond their
borders in every direction. The iron-wielding barbarians were not
shadowy strangers who suddenly descended on the urban centers out of
the shadows. They were marginal peoples with whom the civilizations had
relationships.
So tin implies the existence of sophisticated international trade. I
suspect it even means that tin was the first true commodity money
(commodity monies don't just emerge based on their physical properties
and value; they must provide a raison d'être for trade over long distances).
Iron vs. Bronze
So what about iron? Since it was all over the place, we cannot trace
the origins of iron smelting properly, and in a sense there is no good
answer to the question "where was iron discovered?" It was in use as a
peripheral metal for a long period before it displaced bronze (possibly
inside the Bronze Age civilizations and the barbarian-nomad margins). As
the Wikipedia article says, with reference to iron use before the Iron Age:
Meteoric iron, or iron-nickel alloy, was used by
various ancient peoples thousands of years before the Iron
Age. This iron, being in its native metallic state, required
no smelting of ores. By the Middle Bronze Age, increasing
numbers of smelted iron objects (distinguishable from
meteoric iron by the lack of nickel in the product) appeared
throughout Anatolia, Mesopotamia, the Indian
subcontinent, the Levant, the Mediterranean, and Egypt.
The earliest systematic production and use of iron
implements originates in Anatolia, beginning around 2000
BCE. Recent archaeological research in the Ganges Valley,
India showed early iron working by 1800 BC. However,
this metal was expensive, perhaps because of the technical
By the time iron got both good enough and cheap enough to take on
bronze as a serious contender for uses like weaponry, the incumbent
Bronze Age civilizations couldn't catch up. The pre-industrial
barbarian-nomads had the upper hand.
Iron didn't completely displace bronze in weaponry until quite late.
As late as Alexander's conquests, he still used bronze; iron technology
was not yet good enough at the highest levels of quality, but the point is, it
was good enough initially for the marginal markets, and for masses of
barbarian soldiers.
Sound familiar?
This is classic disruption in the sense of Clayton Christensen. An
initially low-quality marginal market product (iron) getting better and
eventually taking down the mainstream market (bronze), at a point where
the incumbents could do nothing, despite the extreme sophistication of
their civilization, with its evolved tin trading routes and deliberate
metallurgical practices.
Rewinding History
Understanding the history of bronze and iron better has forced me to
rewind my sense of when history proper starts by at least 11,000 years.
The story has given me a new appreciation for how sophisticated human
beings have been, and for how long. I used to think that truly
psychologically modern humans didn't emerge till about 1200 AD. The
story of bronze made me rewind my assessments to 4000 BC. Now,
though I don't know the details (nobody does), I think psychologically
modern human culture must have started no later than 10,000 BC, the
approximate period of what is called the Neolithic revolution.
Now I think the most interesting period in history is probably 10,000
BC to 4,000 BC. Even 20,000 BC to 10,000 BC is fascinating (that's when
the caves in Lascaux were painted), but let's march backwards one
millennium at a time.
Bay's Conjecture
May 21, 2009
A few years ago, I was part of a two-day DARPA workshop on the
theme of Embedded Humans. These things tend to be brain-numbing, so
you know an idea is a good one if it manages to stick in your head. One
idea really stayed with me, and we'll call it Bay's conjecture (John Bay,
who proposed it, has held several senior military research positions, and is
the author of a well-known technical textbook). It concerns the effect of
intelligent automation on work. What happens when the matrix of
technology around you gets smarter and smarter, and is able to make
decisions on your behalf, for itself and the overall system? Bay's
conjecture is the antithesis of the Singularity idea (machines will get
smarter and rule us, à la Skynet; I admit I am itching to see Terminator
Salvation). In some ways its implications are scarier.
The Conjecture
Bay's conjecture is simply this: Autonomous machines are more
demanding of their operator than non-autonomous machines. The
implication is this picture:
The point of the picture is this: when technology gets smarter, the
total work being performed increases. Or in Bay's words, "force
multiplication through accomplishment of more demanding tasks."
Humans are always taking on challenges that are at the edge of the current
capability of humans and machines combined. So like a muscle being
stressed to failure, total capacity grows, but work grows faster. We never
build technology that will actually relieve the load on us and make things
simpler. We only end up building technology that creates MORE work for
us.
The one exception is what we might call Bay's corollary: he asserts
that if you design systems with the principle of human override
protection, total work capacity collapses back to the capability of humans
alone. We are both too greedy and too lazy for that. We are motivated by
the delusional picture in Case 1, and we end up creating Case 2.
Here's why this is the opposite of Skynet/Singularity. Those ideas are
based (in the caricature Sci-Fi/horror version) on the idea that machines,
once they get smarter than us, will want to enslave us. In the Matrix,
humans are reduced to batteries. In the Terminator series, it is unclear
what Skynet wants to do with humans, though I am guessing we'll find out
and it will probably be some sort of naive enslavement.
The point is: the greed-laziness dynamic will probably apply to
computer AIs as well. To get the most bang for the buck, humans will have
to be at their most free/liberated/creative within the Matrix. So that's good
news. But on the other hand, the complexity of the challenges we take on
cannot increase indefinitely. At some point, the humans+machines matrix
will take on a challenge thats too much for us, and well do it with a
creaking, high-entropy worldwide technology matrix that is built on
rotting, stratified layers of techno-human infrastructure. The whole thing
will fail to rise to the challenge and will collapse, dumping us all back into
the stone age.
Hall's Law:
The Nineteenth Century Prequel to Moore's Law
March 8, 2012
For the past several months, Ive been immersed in nineteenth century
history. Specifically, the history of interchangeability in technology
between 1765, when the Système Gribeauval, the first modern technology
doctrine based on the potential of interchangeable parts, was articulated,
and 1919, when Frederick Taylor wrote The Principles of Scientific
Management.
Here is the story represented as a Double Freytag diagram, which
should be particularly useful for those of you who have read Tempo. For
those of you who haven't, think of the 1825 Hall Carbine peak as the
"Aha!" moment when interchangeability was first figured out, and the
1919 peak as the conclusion of the technology part of the story, with the
focus shifting to management innovation, thanks in part to Taylor.
For complex widgets, scaling production isn't just (or even primarily)
about making more new widgets; it is about keeping the widgets in
existence in the field functioning for their design lifetime through
post-sales repair and maintenance. The greater the complexity and cost,
the more the game shifts to post-sales.
You can combine the three variables to get a rough sense of
manufacturing complexity and how it relates to scaling limits. Something
like C = S × T provides a measure of the complexity of the artifact itself.
Breakdown rate B is some function of complexity and production
volumes, B = f(C, V). At some point, as you increase V, you get a
corresponding increase in B that overwhelms your manufacturing
capability. To complete this pidgin math model, you can think in terms of
some B_max = f(C, V_max) above which V cannot increase without
interchangeability.
Modern engineers use much more sophisticated measures (this crude
model does not capture the tradeoff between part complexity and
interconnection complexity for example, or the fact that different parts of a
machine may experience different stress/wear patterns), but for our
purposes, this is enough.
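The pidgin model is easy to play with in code. Here is a minimal sketch in Python; the specific functional form of B = f(C, V) and the constant k are my own illustrative assumptions, since the text only requires that breakdowns grow with complexity and volume:

```python
def complexity(S, T):
    """Pidgin complexity measure from the essay: C = S x T."""
    return S * T

def breakdown_rate(C, V, k=0.01):
    """Assumed form of B = f(C, V): field breakdowns grow with
    artifact complexity times fielded volume. The constant k is a
    hypothetical proportionality factor, not from the essay."""
    return k * C * V

def v_max(C, B_max, k=0.01):
    """Largest production volume sustainable without
    interchangeability: solve B_max = k * C * V_max for V_max."""
    return B_max / (k * C)

# A craft shop whose repair capacity tops out at 50 breakdowns per
# period, producing an artifact with C = 20 x 5 = 100:
C = complexity(20, 5)
print(v_max(C, B_max=50))  # 50.0
```

Past that V_max, every extra unit of volume adds breakdowns faster than a craft workforce can hand-fit replacement parts, which is exactly the scaling wall that interchangeability removes.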
To scale production volume above V_max without introducing
interchangeability, you have to either lower complexity and/or tempo or
increase the number of skilled craftsmen. The first two are not options
when you are trying to out-do the competition in an expanding market.
That would be unilateral disarmament in a land-grab race. The last method
is simply not feasible, since education in a craft-driven industrial
landscape means long, slow and inefficient (in the sense that it teaches
things like onion recipes) 1:1 apprenticeship relationships.
There is one additional method that does not involve
interchangeability: moving towards disposability for the whole artifact,
which finesses the parts-replacement problem entirely. But in practice,
things get cheap enough for disposability to be a workable strategy only
after mass production is achieved. Disposability is rarely a cost-effective
The price that had to be paid for this solution was that the American
economy had to lose the craftsmen and work with engineers, technicians
and unskilled workers instead. This creates a very different technology
culture, with different strengths and weaknesses. For example the scope of
innovation is narrowed by such codification and scientific systematization
of crafts (prima facie nutty ideas like onion steel are less likely to be
tried), but within the narrower scope, specific patterns of innovation are
greatly amplified (serendipitous discoveries like penicillin or x-rays are
immediately leveraged to the hilt).
Why must craft be given up? Even the best craftsmen cannot produce
interchangeable parts. In fact, the craft is practically defined by skill at
dealing with unique parts through carefully fitted assemblies.
(Interchangeability is of course a loose notion that can range from
functional replaceability to indistinguishability, but craft cannot achieve
even the coarsest kind of interchangeability at any meaningful sort of
scale).
Put another way, craft is about relative precision between unlike parts.
Engineering based on interchangeability is about objective precision
between like parts. One requires human judgment. The other requires
refined metrology.
From Armory Practice to the American System
It was the sheer scale of America, the abundance of its natural
resources (and the scarcity of its human resources), that provided the
impetus for automation and the interchangeable parts approach to
engineering.
As agriculture moved westward through New York, Pennsylvania and
Michigan, the older settled regions began to turn to manufacturing for
economic sustenance. The process began with the textile industry, born of
stolen British designs around what is now Lowell, Massachusetts. But
American engineering in the Connecticut river valley soon took on a
distinct character.
This means tycoons who spot some vast new opportunity and play
land-grabbing games on a massive scale.
Both Hall's Law and Moore's Law led to wholesale management and
financial innovation by precisely such new tycoons.
For Hall's Law, the process started with Cornelius Vanderbilt, the hero
of T. J. Stiles's excellent The First Tycoon, who figured out how to tame
the strange new beast, the post-East-India-Company corporation and in the
process sidelined old money.
It is revealing that Vanderbilt was blooded in business through a
major legal battle for steamboat water rights: Gibbons v. Ogden (1824)
that helped define the relationship of corporations to the rest of society.
From there, he went from strength to strength, inventing new business and
financial thinking along the way. Only in his old age did he finally meet
his match: Jay Gould, who would go on to become the archetypal Robber
Baron, taking over most of Vanderbilt's empire from his not-so-talented
children.
Vanderbilt was something of a transition figure. He straddled both
management and finance, and old and new economies: he was a cross
between an old-economy merchant-pirate in the Robert Clive mold (he ran
a small war in Nicaragua for instance) and a new-economy corporate
tycoon. He transcended the categories that he helped solidify, which
helped define the next generation of tycoons.
Among the four tycoons in Morris's book, Rockefeller (Chernow's
Titan on Rockefeller is another must-read) and Carnegie appear on one
side, as the archetypes of modern managers and CEOs. Both were masters
of Wall Street as well, but were primarily businessmen.
On the financial side, we find the Joker-Batman pair of Gould and
Morgan. Jay Gould was the loophole-finder-and-destabilizer; J. P. Morgan
was the loophole-closer and stabilizer. While Gould was a competent, if
unscrupulous manager during the brief periods that he actually managed
the companies he wrangled, he was primarily a financial pirate par
excellence.
It makes for a very good story that he made his name by giving the
elderly Vanderbilt, who pretty much invented the playbook along with his
friends and rivals, the only financial bloody nose of his life (though
Vanderbilt exacted quite a revenge before he died). Through the rest of
his career, he exposed and exploited every single flaw in the fledgling
American corporate model, turning crude Vanderbilt-era financial tactics
into a high art form. When he was done, he had generated all the data
necessary for J. P. Morgan to redesign the financial system in a much
stronger form.
Morgan's model would survive for a century until the Moore's Law
era descendants of Gould (the financial pirates of the 1980s) started
another round of creative destruction in the evolution of the corporate
form.
From Hall's Law to Moore's Law
Hall's Law was the prequel to Moore's Law in almost every way. The
comparison is not a narrow one based on just one dimension like finance
or technology. It spans every important variable. Here is the corresponding
Double Freytag:
I'll save my analysis of the Moore's Law era for another day, but here
is a short point-by-point mapping/comparison of fundamental dynamics
around electronics that emerged after World War II, rather than
through the Manhattan project itself).
9. Lincoln's assassination is eerily similar to Kennedy's. Just
checking to see if you are still paying attention. The first
person to call bullshit on this point gets a free copy of The
Tycoons.
10. The Internet and container shipping taken together are to
Moore's Law as the railroad, steamship and telegraph networks
taken together were to Hall's Law. The electric power grid
provides the continuity between Hall's Law and Moore's Law.
11. Each era changed employment patterns and class structures
wholesale. Hall's Law destroyed nobility-based social
structures, created a new middle class defined by educational
attainments and consumer goods, and created paycheck
employment. Moore's Law is currently destroying each of
these things and creating a Trading Up class, a new model of
free agency, and killing education-based reputation models.
12. A new mass entertainment model started in each case. With
Hall's Law it was Broadway (which led on to radio, movies
and television). With Moore's Law, I'd say the analogy is to
reality TV, which like Broadway represents new-era content in
an old-era medium.
13. At the risk of getting flamed, I'd say that Seth Godin is
arguably the Horatio Alger of today, but in a good way.
Somebody has to do the pumping-up and motivating to inspire
the masses to abandon the old culture and embrace the new by
offering a strong and simple message that is just sound enough
to get people moving, even if it cannot withstand serious
scrutiny.
14. Hall's Law led on to the application of its core methods to
people, leading to new models of high-school and college
education and eventually the perfect interchangeable human,
The Organization Man. Moore's Law is destroying these
things, and replacing them with Y-Combinator style education
and co-working spaces (this will end with the Organization
Entrepreneur, a predictably-unique individual, just like
everybody else).
15. Hall's Law led to the industrial labor movement. Moore's Law
is leading to a new labor movement defined, in its early days,
by things like standardized term-sheets for entrepreneurs (the
5-day/40-hour-week issue of our times; YC-entrepreneurs are
decidedly not the new capitalists. They are the new labor.
That's a whole other post).
16. And perhaps most importantly, each era suffered an early crisis
of financial exploitation which led first to loophole closing,
and then to a new financial system and corporate governance
model. Jay Gould maps to the architects of the subprime crisis.
No J. P. Morgan figure has emerged to really clean up the
mess, but new corporate models are already emerging that look
so unlike traditional ones that they really shouldn't be called
corporations at all (hence the pointless semantic debate around
my history of corporations post; it is really irrelevant whether
you think corporations are dying or being radically reinvented.
You are talking about the same underlying creative-destruction
reality).
The New Gilded Age
When Mark Twain coined the term Gilded Age, he wasn't exactly
being complimentary. For some reason, the term seems to be commonly
used as a positive one today, by those who want to romanticize the period.
I started to read the book and realized that Twain had completely
missed the point of what was happening around him (the focus of the
novel is political corruption; an element that loomed large back then, but
was ultimately a sideshow), so I abandoned it.
But he got one thing right: the name.
Hall's Law created a culture that was initially a layer of fake gloss on
top of much grimmer realities. Things were improving dramatically, but it
probably did not seem like it at the time, thanks to the anxiety and
uncertainty. Just as you and I aren't exactly celebrating the crashing cost
of computers in the last two decades, those who lived through the 1870s
were more worried about farming moving ever westward (outsourcing)
and strange new status dynamics that made them uncertain of their place
in the world.
It took time for Gilded to turn into Golden (about 50 years by my
estimate; things became truly golden only after World War II). There were
decades of turmoil which made the lives of transitional generations quite
miserable. The 1870s were a you'll-thank-me-later decade, but for those
who lived through the decade in misery, that is no consolation.
I abandoned The Gilded Age within a few pages. It is decidedly
tedious compared to Tom Sawyer and Huckleberry Finn. Sadly, Twain's
affection for a vanishing culture, which made him such an able observer of
one part of American life, made him a poor observer of the new realities
taking shape around him.
He makes a personal appearance in the stories of both Vanderbilt and
Rockefeller, and appears to have strongly disliked the former and admired
the latter, though both were clearly cut from the same cloth.
To my mind, Twain's best stab at describing the transformation
(probably A Connecticut Yankee in King Arthur's Court; note the
significance of Connecticut) is probably much worse than the attempts of
younger writers like Edith Wharton and later, of course, everybody from
Horatio Alger to F. Scott Fitzgerald.
We are clearly living through a New Gilded Age today, and Bruce
Sterling's term Favela Chic (rather unfortunately cryptic; perhaps
should call it Painted Slum) is effectively analogous to Gilded Age.
We put on brave faces as we live through our rerun of the 1870s. We
celebrate the economic precariousness of free agency as though it were a
no-strings-attached good thing. We read our own Horatio Alger stories,
fawn over new Silicon Valley millionaires and conveniently forget the
ones who don't make it.
New Media tycoons like Arrington and Huffington fight wars that
would have made the Hearsts and Pulitzers of the Gilded Age proud, while
we lesser bloggers go divining for smaller pockets of attention with
dowsing rods, driven by the same romantic hope that drove the tragicomic
heroes of P. G. Wodehouse novels to pitch their plays to Broadway
producers a century ago.
History is repeating itself. And the rerun episode we are living right
now is not a pleasant one.
The problem with history repeating itself of course, is that sometimes
it does not. The fact that 1819-1880 maps pretty well to 1959-2012 does
not mean that 2012-2112 will map to 1880-1980. Many things are
different this time around.
But assuming history does repeat itself, what are we in for?
If the Moore's Law endgame is the same century-long economic
overdrive that was the Hall's Law endgame, today's kids will enter the
adult world with prosperity and a fully-diffused Moore's Law all around
them.
The children will do well. In the long term, things will look up.
But in the long term, you and I will be dead.
Some thanks are due for this post. It was inspired in part by Chris
McCoy of YourSports.com, who badgered me about the Internet =
Railroad analogy enough that I was motivated to go hunt for the best
place to anchor a broader analogy. His original hypothesis is now the
generalized point 10 of my list. Thanks also to Nick Pinkston for
interesting discussions on the future of post-Moore's-Law manufacturing;
the child may resurrect its devoured parent after all. Also thanks to
everybody who commented on the History of Corporations piece.
terms, hacking is a parasitic strategy: weaken the host just enough to feed
off it, but not enough to kill it.
Breaching computer systems is of course the classic example. Another
example is figuring out hacks to fall asleep faster. A third is coming up
with a new traffic pattern to reroute traffic around a temporary
construction site.
interesting in their own right, as a sort of performance art, but are not of
much interest or value to people who are interested in the future in the
form it might arrive in, for all.
It is easy to make the distinction explicit. Most futurists are interested
in the future beyond the Field. I am primarily interested in the future once
it enters the Field, and the process by which it gets integrated into it. This
is also where the future turns into money, so perhaps my motivations are
less intellectual than they are narrowly mercenary. This is also a more
complicated way of making a point made by several marketers:
technology only becomes interesting once it becomes technically boring.
Technological futurists are pre-Fieldists. Marketing futurists are post-Fieldists.
This also explains why so few futurists make any money. They are
attracted to exactly those parts of the future that are worth very little. They
find visions of changed human behavior stimulating. Technological
change serves as a basis for constructing aspirational visions of changed
humanity. Unfortunately, technological change actually arrives in ways
that leave human behavior minimally altered.
Engineering is about finding excitement by figuring out how human
behavior could change. Marketing is about finding money by making sure
it doesn't. The future arrives along a least-cognitive-effort path.
This suggests a different, subtler reading of Gibson's unevenly-distributed line.
It isn't that what is patchily distributed today will become widespread
tomorrow. The mainstream never ends up looking like the edge of today.
Not even close. The mainstream seeks placidity while the edge seeks
stimulation.
Instead, what is unevenly distributed are isolated windows into the
un-normalized future that exist as weak spots in the Field. When the
windows start to become larger and more common, economics kicks in
and the Field maintenance industry quickly moves to create specialists,
codified knowledge and normalcy-preserving design patterns.
movies over WiFi? That sounds like a bad startup pitch rather than a good
fantasy novel.
The Matrix was something of an interesting triumph in this sense, and
in a way smarter than one of its inspirations, Neuromancer, because it
made Gibson's cyberspace coincident with a temporally frozen reality-simulacrum.
But it did not go far enough. The world of 1997 (or wherever the
Matrix decided to hit Pause) was itself never an experienced reality.
1997 never happened. Neither did 1500 in a way. What we did have
was different stretched states of the Manufactured Normalcy Field in 1500
and 1997. If the Matrix were to happen, it would have to actually keep that
stretching going.
Breathless
There is one element of the future that does arrive on schedule,
uncensored. This is its emotional quality. The pace of change is
accelerating and we experience this as Field-stretching anxiety.
But emotions being what they are, we cannot separate future anxiety
from other forms of anxiety. Are you upset today because your boss yelled
at you or because subtle cues made the accelerating pace of change leak
into your life as a tear in the Field?
Increased anxiety is only one dimension of how we experience
change. Another dimension is a constant sense of crisis (which has,
incidentally, always prevailed in history).
A third dimension is a constant feeling of chaos held at bay (another
constant in history), just beyond the firewall of everyday routine (the Field
is everyday routine).
Sometimes we experience the future via a basic individual-level "it
won't happen to me" normalcy bias. Things like SARS or dying in a plane
crash are uncomprehended future-things (remember, you live in a
manufactured reality that has been stretching since the fifteenth century)
that are nominally in our present, but haven't penetrated the Field for most
of us. Most of us substitute probability for time in such cases. As time
progresses, the long tail of the unexperienced future grows fatter. A lot
more can happen to us in 2012 than in 1500, but we try to ensure that very
little does happen.
The uncertainty of the future is about this long tail of waiting events
that the Field hasn't yet digested, but we know exists out there, as a space
where Bad Things Happen to People Like Me but Never to Me.
In a way, when we ask, "is there a sustainable future?" we are not really
asking about fossil fuels or feeding 9 billion people. We are asking, "can the
Manufactured Normalcy Field absorb such-and-such changes?"
We aren't really tied to specific elements of today's lifestyles. We are
definitely open to change. But only change that comes to us via the Field.
We've adapted to the idea of people cutting open our bodies, stopping our
hearts and pumping our blood through machines while they cut us up. The
Field has digested those realities. Various sorts of existential anesthetics
are an important part of how the Field is manufactured and maintained.
Our sense of impending doom or extraordinary potential has to do
with the perceived fragility or robustness of the Field.
It is possible to slide into a sort of technological solipsism here and
declare that there is no reality; that only the Field exists. Many
postmodernists do exactly that.
Except that history repeatedly proves them wrong. The Field is
distinct from reality. It can and does break down a couple of times in
every human lifetime. We're coming off a very long period of Field
stability since World War II. Except for a few poor schmucks in places like
Vietnam, the Field has been precariously preserved for most of us.
When larger global Fields break, we experience dark ages. We
literally cannot process change at all. We grope, waiting for an age when it
will all make sense again.
idea of eventually selling cheap and addictive burgers (for one thing, the
evolutionary processes took longer than the lifetime of any individual
involved in the story). You could say that the existence of HFCS is 10%
intentional and 90% a consequence of the baroque unconscious driving
food technology.
In other words, the existence of a Gollum does not imply the
existence of a Gollumizer. Sauron in The Lord of the Rings is at best a
personification of the baroque unconscious (with Saruman being one of
the cynical exploiters, an HFCS creator so to speak).
But let's figure out what refinement in technology really means.
Consider the following senses of the word refinement:
1. Refinement as in purity or purification of substances: ore, oil,
drugs, foods
2. Refinement in the sense of highly developed and cultivated
sensibilities, as in refined palate
3. Refinement in the sense of elaborate sophistication of mature
or declining cultures
4. Refinement in the sense of detailed, attentive design in
advanced technologies
5. Refinement in the sense of an Apple product (or any other
possibility-exhausting product aesthetic)
How do these different senses of the idea of refinement relate to each
other and to the baroque? What distinguishes the space shuttle or a quality
kitchen knife from an iPad, an expensive wine, or a McDonald's
hamburger?
The Sword, the Nail and the Machine Gun
I found a key clue when Greg Rader decided (to my slight discomfort)
to overload this sense of refinement with an economic meaning in his 2x2
model of types of economies.
In Greg's model, the economic role of refinement is to make it easy to
value artifacts in an impersonal way, in a cash economy. Unrefined
On the other hand you have things that are not at the edge of
technological capability, but manufactured out of component and process
technologies created for those leading edge technologies. And I don't just
mean obviously over-engineered things like space pens that write upside
down (which you can buy at NASA museums). I mean everything.
Regular Bics included.
In this category, makers strive to exhaust the possibilities, but always
lag
behind. The surplus refinement potential shows up in the
unnecessarily clean lines of modernism. Unused bits. Unbroken
symmetries. Blank engineering canvases that expand faster than designers
and technicians can paint.
The interaction of the two kinds of beauty is what creates the texture
of the modern technological landscape. I call it platonic baroque. This
may seem like a contradiction in terms, but bear with me for a moment.
The baroque unconscious is the force that drives technological
evolution: a force whose potential increases faster than it can be exploited.
Recall that the baroque seeks to exhaust its own possibilities. It is a
technical exercise in exploring process limits, not an exercise in
expressing ideas or creating utility. But this process needs ideas to fuel it.
In the days when royalty and religion loomed large in the minds of
creators, it was natural to exhaust possibilities by filling them up with the
content of the mythology associated with the power and money that drove
their work. It was natural to fill up blank walls with gargoyles and
cherubs, popes and princes.
But when the power and money come from a force whose main
characteristic is vast and featureless potential, the baroque aesthetic seeks
to exhaust possibilities by expressing that emptiness with platonic forms.
So the Bauhaus chair is not a rejection of the baroque. The modernist
designer merely seeks to build cathedrals to his new master: a vast
emptiness of possibility within the refinement surplus. This possibility is
that the triumphalist answer was largely an imputed one: part of a social
perception of engineering that was mostly manufactured by non-engineers.
Florman's answer to "Why engineer?" can probably be reduced to
"because it helps me become me."
Curiously, this denial of culpability on the part of engineers was
largely accepted as legitimate. Possibly because it was true. As James
Scott argues brilliantly in Seeing Like a State, to the extent that there is
blame to be assigned, it attaches itself rather clearly to every citizen who
participates in the legitimization of a state. Sign here on the social
contract; we'll try to make sure bullies don't beat you up; you consent to
be governed by an entity (the State) with less than 20/20 vision; you
accept your part of the blame if we accidentally blow ourselves up by
taking on large-scale engineering efforts.
So the first shift in the Big Answer, post WWII (let's arbitrarily say
1960) was the one from triumphalist to existential. The third answer,
which succeeded the existential one around 1980, was the ironic one.
The ironic rhetorical non-answer goes, in brief, "Why not?"
***
Let's return for a moment to the surging waters pounding the levees of
New Orleans as I write this. Levees are a symbol of that oldest of all
engineering disciplines, civil engineering. As I watch Hurricane Gustav
pound at this meek and archaic symbol of human defiance, with anxious
politicians looking on, it is hard to believe that we ever had the hubris to
believe that we could either discipline or destroy nature. The
environmentalists of the 90s and the high modernists of 1910 were both
wrong. They are as wrong about, say, Facebook, as they were about the
dams and bridges of 1908.
This isn't because technology cannot destabilize nature. It is because
nature does such a bang-up job on its own. For every doomsday future we
make possible, say nuclear holocaust or a nasty-minded all-conquering
post-Singularity global AI, nature cheerfully aims another asteroid at
Earth. I was particularly amused by all the talk of the Large Hadron
sound one. I personally suspect that in this sense, the Singularity actually
occurred with the invention of agriculture.
So contemplate, as an engineer (and remember, this includes
anyone who has ever chosen to install a Facebook widget), this globe-spanning beast called nature+technology (or nature-including-technology).
It has a life of its own, and it is threatening today to either die of a
creeping entropy that we arent smart enough to control, or become
effectively sentient and smarter than us.
How can you engage it productively?
By being even more creatively-destructive than it is capable of being
without human intervention. Bloody-minded in short.
***
Let me make it more concrete. Imagine engineers from 1900, 1965,
1995 and 2008 (time-ported as necessary) answering the question "why are
you an engineer?" within the 2008 context.
1900-engineer: I thought it was to make the world a better place, but
clearly technology is so complex today that any innovation is as likely to
spawn terrorism or exacerbate climate change as it is to improve our lot. I
quit; I will become a monk.
1965-engineer: I thought I was doing this to self-actualize within my
lonely existence, but clearly engineering in 2008 has become as much self-indulgent art as engagement of the natural world. I will not write a
Facebook widget. I will become a monk.
1995-engineer: I thought I did it for the same reasons that drive that
guy to make art and that other guy to do science, but it seems like
whatever I do, be it designing a fixture or writing a piece of code, I am
fueling the emergence of this strange Googlezon beast. That's scarily large
and impactful. It changes reality far more than any piece of art or science
could, and I want no part of it. I am off to become a monk.
forces that we may have to accept the way we accept the inevitability
of our individual deaths. Maybe we've already created these problems.
And that is why bloody-mindedness is the only defensible motivation
for being a technologist today. You may delude yourself with culturally
older reasons, but this is the only one that holds up. It is also the only
reason that will allow you to dive in without second-guessing yourself too
much, with enough energy to have any hope of having an impact. Because
the people shaping the technology of tomorrow aren't holding back out of
fear of (say) greenhouse emissions from large data centers.
***
Alright. Holiday over. Back to recycling tomorrow.
operate with an attitude of kindness and gentleness. But does the world
always allow our actions to be kind or gentle?
The Phenomenology of Destruction
Creation and growth can be gradual, steady, linear and calm, but this
is rarely the case. More often, we either see head-spinning Kool-Aid
exponential dynamics, critical-mass effects, tipping points and the like. Or
slowing, diminishing-returns effects. Steady progress is a myth.
Destruction is the same way. We'd like all destruction to be strictly
necessary, linear and peaceful. That's why phrases like "graceful
degradation" are engineering favorites. That's why my friend and animal
rights activist Erik Marcus champions dismantlement of animal agriculture
rather than its destruction. The world, unfortunately, rarely behaves that
way. Our rich vocabulary around destruction is an indication of this:
decay, rot, neglect, catastrophe, failure mode, buckle, shatter, collapse,
death, life-support, apocalypse. Destruction isn't this messy simply
because we are unkind or evil. Destruction is fundamentally messy, and
keeping it gentle takes a lot of work.
I once read that nearly 70% of deaths are painful (no clue whether this
is true, but much as my first experience of euthanasia hurt, I still believe in
it). Reliability engineering provides some clues as to why this is so:
IEEE Spectrum had an excellent cover story a few years ago, analyzing
biological death from a reliability engineering perspective. The shorter
version: complex systems admit cascading, exponentially-increasing
failure modes that are hard to contain. Any specific failure can be
contained and corrected, but as failures pile on top of failures, and the
body starts to weaken and destabilize overall as a system, doctors can
scramble, but eventually cannot keep up. The shortest version: "He died of
complications following heart surgery."
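The cascading-failure logic can be sketched as a toy simulation (illustrative only; the component count, base failure rate and coupling strength are made-up parameters, not anything from the Spectrum article):

```python
import random

def cascade_sim(components=100, base_p=0.01, coupling=0.15, steps=50, seed=42):
    """Each step, every surviving component fails independently with a
    probability that grows with the fraction already failed, so early
    failures make later failures more likely: the cascading pattern."""
    rng = random.Random(seed)
    failed = 0
    history = []
    for _ in range(steps):
        p = min(1.0, base_p + coupling * failed / components)
        new_failures = sum(
            1 for _ in range(components - failed) if rng.random() < p
        )
        failed += new_failures
        history.append(failed)
        if failed == components:
            break
    return history

history = cascade_sim()
# failure counts only ever rise: slowly at first, then an accelerating pile-up
```

Any specific failure is survivable; it is the feedback term (the `coupling * failed` part) that turns a trickle of contained problems into a collapse no amount of scrambling can keep up with.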
Jenga as Metaphor
The game of Jenga illustrates why it is so hard to keep destruction to
linear-dismantlement forms. Once you throw in an element of creation in
parallel (removing blocks and stacking them on top to make the tower
higher), you are constrained. If you had the luxury of time, you could
unstack all the blocks carefully, and restack them in a taller, hollow
configuration with only 2 bricks per layer. Thats graceful reconstruction.
The world rarely allows us to do this. We must reconstruct the tower while
deconstructing it, and eventually the growth creates the kind of brittle
complexity where further attempts at growth cause collapse.
Milton, the real star of Office Space, provides a more true-to-life
example of the Jenga mode of destruction.
crumble, with occasional smaller and larger collapses. Watch closely, and
you will feel the actual pain. You will participate in the tragedy.
If you happen to be part of new growth, recognize this. One day, a
brighter light will put you in the shadows, and you will have to face the
mortality of your own creations. One of my favorite Hindi songs gets at
this ultimately tragic, Sisyphean nature of all human creation:
Main pal do pal ka shayar hun, pal do pal meri kahani hain
pal do pal meri hasti hai, pal do pal meri jawaani hain
Mujhse pehle kitne shayar, aaye aur aa kar chale gaye
kuch aahe bhar kar laut gaye, kuch naghme gaa kar chale gaye
woh bhi ek pal ka kissa they, main bhi ek pal ka kissa hun
kal tumse juda ho jaoonga, jo aaj tumhara hissa hun
Kal aur aayenge naghmo ki, khilti kaliyan chunne wale
Mujhse behtar kehne waale, tumse behtar sunne wale
kal koi mujhko yaad kare, kyon koi mujhko yaad kare
masroof zamaana mere liye, kyon waqt apna barbaad kare?
Which roughly translates to the following (better translators, feel free
to correct me):
I am but a poet of a moment or two, a moment or two is as
long as my story lasts
I exist but for a moment or two, for a moment or two does
my youth last
Many a poet came before me, they came and then they
faded away
they took a few breaths and left, they sang a few songs and
left
they too were but anecdotes of the moment, I too am an
anecdote of a moment
tomorrow, I will be parted from you, though today I am a
part of you
I don't know when I first heard the phrase, but I first used it in the
frontispiece of my PhD thesis. Here are the three quotes I put there, back
in 2003, when I was searching for just the right sort of imagery to give my
research the right-brained starting point it needed. My first quote was a
basic, bald statement due to Schumpeter:
Creative Destruction is the essential fact about capitalism.
Joseph Schumpeter, Capitalism, Socialism, and Democracy
I followed that up with a Rabindranath Tagore bit that I'd found
somewhere (update: Googling rediscovered the somewhere on the
frontispiece of Hugo Reinert's draft version of a paper on Creative
Destruction, which seems to have finally appeared in the collection
Friedrich Nietzsche: Economy and Society), and for which, to this day, I
haven't found a citation (update: Hail! Google books; a work-colleague,
Tom K., dug the reference out for me. The extract is from "Brahma,
Vishnu, Siva," which appears in Radice's translation of selections from
Tagore. So much for the detractors of Google's book scanning project:
plain Googling did not get me the source).
From the heart of all matter
Comes the anguished cry
Wake, wake, great Siva,
Our body grows weary
Of its law-fixed path,
Give us new form
Sing our destruction,
That we gain new life
Rabindranath Tagore
And concluded the Grand Opening of my Immortal Thesis with a
dash of Nietzsche:
[H]ow could you wish to become new unless you had first become
ashes!
Friedrich Nietzsche, Thus Spake Zarathustra
Part 3:
Getting Ahead, Getting Along,
Getting Away
This is not a new kind of attitude, but the last time we saw this kind of
social science triumphalism, it was derivative. The triumphalism of late
19th century engineering triggered a wave of High Modernist social
engineering in its wake that lasted till around 1970. That project failed
across the world and social scientists quickly abandoned the engineers and
turned into severe critics overnight (talk about fair weather friends). But
social scientists today have found a native vein of confidence to mine.
They are now rushing in boldly where engineers fear to tread.
It is rather ironic that much of the confidence stems from discoveries
made by the Gotcha Science of cognitive biases. In case it isn't obvious,
the irony is that revelations about the building blocks of the tragic DNA of
the human condition have been pressed into service within a
fundamentally bright-sided narrative. This narrative (though the believers
deny that there is one) is based on the premise that cataloging and
neutralizing biases will eventually leave behind a rationally empiricist
core of perfectible humanity, free of deluded narratives. One educational
magic bullet per major bias. The associated sociological grand narrative is
about separating the world of the Chosen Ones from the world of the
Deluded Masses, and using some sort of Libertarian Paternalism as the
basis for the former to benevolently govern the latter without their being
aware of it.
I suppose it is this sort of overweening patronizing attitude that leads
me to occasionally troll the Chosen Ones by triggering completely
pointless Batman vs. Joker Evil Twin debates.
Sometimes I feel like going to a behavioral economics conference and
yelling out from the audience, "you're reading the evidence wrong, you
morons; it is turtles, biases and narratives, all the way down; we should be
learning to live with and through them, not fighting them!"
Unlike the woman who yelled the original line at an astronomer in the
apocryphal story, I think I'd be right. In this case, anthropocentric thinking
lies in believing that there is a Golden Universal Turing Machine Running
the Perfect Linux Distro at the bottom. There is no good reason to believe
that natural selection designed us as perfect (or perfectible) cores wrapped
in a mantle of biases and narrative patterns.
better. The net result was that I was beaten mentally and physically. Errors
would accumulate, and I'd invariably choke.
Then one day, I managed to convince S, whose father had been a
state-level champion, to practice with me (there was no point playing, he
would have beaten me 21-0, 21-0, 21-0). S was the sort of calm,
unflappable guy who simply cannot be psyched-out or forced into error.
He had an almost robotic level of perfection in all basic elements of the
game. S put me through half an hour of very basic forehand-to-forehand
top spin practice rallies, and it completely changed my game. After that, I
still mostly got beaten by R, my regular partner (who was fundamentally
more talented than me), but I actually began winning the occasional
match, and all games were a lot closer.
Fast-forward 15 years. At the University of Michigan, I organized an
informal tournament at the residential scholarship house I was living in at
the time. Out of the field of about 8-10, I came in second. Most Americans
in the house fared as well as you'd expect; since they view ping pong as
not really a sport, most of them lack basic skills. I beat most of them
relatively easily, but was beaten pretty handily by a Korean-American guy.
A final data point. About 2 years ago, with rather foolhardy
confidence, I joined in a Saturday afternoon group of serious Chinese
players. The result: I was beaten comprehensively by everybody. In
particular, by a bored, tired-looking 14-year-old (clearly first-generation)
who looked like he hated the game and had been dragged there by his
immigrant father.
Collective Attention and Arms Races
Now step back and analyze this for a moment. Table tennis is
primarily information work. It is not among the more physically
demanding games except at the highest levels. My serious table-tennis
clique in an apathetic-to-the-game country, with a lousy athletic culture
(India) got me to a certain level of competence: enough to beat many
casual players in a vastly more athletic country (the US). But a disengaged
kid from the diaspora of an athletic country that is crazy about the game
(China) was able to beat me with practically no effort, despite being far
less interested (apparently) in the game than me.
This little story captures the most essential features of collective
attention. It exists at all scales (from small clique to country to planet).
Within a group that is paying coordinated attention to any information-work domain, skill levels rapidly escalate, leaving isolated individuals far
behind. I call this the arms race effect, and it is a product of a fertile mix of
elements in the crucible: competition, mutual teaching, constant practice
and sufficient, but not overwhelming variety. This is a very particular kind
of attention. It isn't passive consumption by spectators, and it isn't
performance for an audience. It is co-creation of value: that same dynamic
that is starting to drive the entire economy, blurring lines between
producers and consumers.
So our challenge in this article is to answer the question: what is the
optimal size of a creative group? Is country level attention the best (China
and table tennis) or clique (my high school)? Is it perhaps 1 (solo lone-ranger creative blogger)? Our quest starts with the first of our supporting-cast numbers, 10,000. As in the 10,000-hour rule studied by K. Anders
Ericsson and made famous by Gladwell in Outliers.
10,000 Hours and Gladwell's Staircase
Gladwell is a jump-the-gun trend-spotter. He nearly always finds a
uniquely interesting angle on a subject, and nearly always analyzes it
prematurely in flawed ways. That's a story for another day, but let's talk
about his latest, Outliers. The basic thesis of the book is that there are all
sorts of subtly arbitrary effects in the structure of nurture (Gladwell's way
too smart to play up a naive nature/nurture angle) that make runaway
success a rather unfair and random game of chance. In particular, Gladwell
focuses on a key argument: that to get really good at anything, you need
about 10,000 hours of steadily escalating practice, with opportunities to
take your game to the next level becoming available at the right times.
For instance, due to some weird cutoff-date effects, nearly all top
Canadian hockey players are born in winter (thereby, Gladwell implies,
unfairly penalizing burly talents born in warmer months). This basic
argument is just plain wrong for the simple reason that no human talent is
A note of irony here: Gladwell was also among the first to stumble
across the importance of such dream-team crucibles, in The Tipping
Point. Today, researchers like Duncan Watts have pointed out that viral
effects don't necessarily depend on particularly talented or connected
special people (the sort Gladwell called "mavens" and "salesmen"). But
special people do have a special role in shaping culture. It is just that
their most important effect isn't in popularizing things like Hush Puppies,
but in actually creating their own value. New kinds of music, science,
technology, art or sporting culture.
This is the signal in the noise, and here is the lesson. Information
work in any domain is like weight training: you only grow when you
exercise to failure. The only source of weight to overload your mental
muscles is other people. And the only people who can load you without
either boring you or killing you are people of approximately the same
level of talent development. And that leads to the question: what happens
when you hit the top crucible of 12 in your chosen field? Where do you go
when there are no more levels (or if you've reached the highest level you
can, short of the top)? That brings us to the next two numbers in our story:
how you innovate and differentiate as a creative.
1 Free Agent and 1000 Raving Fans?
I've hated the phrase "raving fan" since the day I heard it. If you are
not familiar with the argument, Kevin Kelly, who originated the idea,
claims that an individual creative (a blogger or musician, say) can
scrape along and subsist in Chris Anderson's Long Tail, by attracting
1000 raving fans who buy everything he/she puts out (blogs, books,
special editions, t-shirts, mousepads; 1000 raving fans times $100 per year
per fan is a $100,000 income). Kelly's original adjective is a less
objectionable "true" rather than "raving," but "raving" has caught on, and
the intended meaning is the same.
This basic model of creative capital is just not believable for two
reasons. First, it reduces a prosumer/co-creation economic-cultural
environment to a godawful unthinking bleating-sheep model of
community. I try to imagine my blog, for instance, as the focal point of a
stoned army of buy-anything idiot groupies, and fail utterly. I would not
want to serve such a community, and I don't believe it can really form
around what I do. I certainly refuse to sell ribbonfarm.com swag.
The second problem is the tacit assumption that creation is
prototypically organized in units of 1. The argument is seductive. The bad
old corporations will die, along with their committees of groupthink. The
brave new solo free agent, wandering in the woods of cultural anarchy,
finds a way to lead his tribe to the promised land of whatever his niche is
about. "Tribe" is a related problematic term that Seth Godin recently ran
amok with.
The reason Kelly (and others like Godin) ends up here is that he
answers my question "after the dream team, what?" with "individuals
break away, brand themselves and become individual innovators." Kinda
like Justin Timberlake leaving NSync. A dream team of 12, in this view,
turns into 12 soloists. Not that he ignores groups, but his focus is on the
individual.
Individuals vs. Groups
That's not what happens. You cannot break the crucible rule. 12 is
always the magic number for optimal creative production. The reason
people make this mistake is because they draw a flawed inference from the
(correct) axiom that the original act of creativity is always an individual
one. I've talked about this before: I am a believer in radical individualism;
I believe, as William Whyte did, that innovation by committee is
impossible. Good ideas nearly always come from a single mind. What
makes the crucible of 12 important is that it takes a group of
competing/co-operating individuals, each operating from a private
fountainhead of creative individual energy, to come up with enough of a
critical mass of individual contributions to spark major revolutions.
Usually thats about 12 people for major social impact, though sometimes
it can happen with smaller crucibles. These groups aren't the deadening
committees of groupthink and assumed consensus. They are the fertile,
fiercely contentious and competitive collaborators who at least partly hate
the fact that they need the others, but grudgingly admire skills besides
their own.
What happens when you exit the dream team level in a mature
disciplinary game is that you get out there and start innovating beyond
disciplinary boundaries; places where there are no experts and no managed
progression of levels with ritualistic gatekeeper tests. But you don't do
that by going solo. You look for crucibles of diversity, multidisciplinary
stimulation and cross-pollination. But you still need the group of 12 or so,
training your brain muscles to failure.
This gives me a much more believable picture. As a blogger, I am the
primary catalyst on this site, but I am not creating the value solo. If I try to
think of the most valuable commenters on this site, I can think of no more
than 12. My best writing has come from trying to stay ahead of their
expectations, and running with themes they originally introduced me to.
But that's far from optimal, since I still am the dominant creator on this
blog. The closer I get that number to 12 via regular heavyweight
commenters, guest bloggers and mutually-linked blogroll friends (I've
turned my blogroll off for now for unrelated reasons), the closer I'll get to
optimum. Think of all the significant power blogs: they are all team-acts.
Now, I may never get there, and there are multiple ways to get to 12, but the
important thing is to be counting to 12. At work these days, I am pretty
close to that magic number 12, and enjoying myself a lot as a result.
So the important number for the creative of the future is 12, not 1 or
1000. But what about money and volume? Don't we need a number like
1000? Not really. As the creative class matures, you won't really ever find
1000 uncritical, sheep-like groupie admirers. That is a relic of the celebrity
era. The real bigger-than-crucible number is not 1000 but 150. Dunbar's
number.
The Dunbar Number and $0.00
Why 150? That's the Dunbar number: the most people you can
cognitively process as individuals (the dynamics are entertainingly
described in the famous Monkeysphere article). That's the right number to
drive long-tail logic. By Kelly's logic though, I have to get to, say,
100,000 casual occasional customers before I find my 1000 raving fans
(1% conversion is realistic).
still do a better job than, say, the blogosphere. But the free-agent nation is
catching up rapidly. The wilderness is becoming more capable of
sustaining economics-without-borders-or-walls every day.
So how will you create and monetize your Dunbar neighborhood? By
definition, there are no one-size-fits-all answers, because the point of
working this way is that you'll find opportunities through personalized
attention. Not a great answer, I know, but still easier for most of us than
dreaming up ideas that can net 100,000 regulars of whom 1000 turn into
raving fans.
8: The Maximal Span of Control
We've argued that the optimal crucible size must be greater than 1 and
less than 150, but we still haven't gotten to the reasoning behind 12 rather
than 30 or 5. Another number will help get us there: 8, the upper end of the
range of a number known as the span of control. The number of direct
reports a manager can effectively handle, and still keep the individualized
calculus of interpersonal relationships tractable.
What happens when you exceed the span of control? You get
hierarchies. You cannot organize complex, coupled work (think space
shuttle) requiring more than 8 people in a flat structure. But here's the
dilemma: between 9 and 15, if you split the group into 2, you may get high
overhead and micromanagement by managers with too little to do, and
other pathologies. So between the limit of a single manager's abilities, and
the optimal point at which to force cell division, ontogeny and
organization, you get a curious effect: the edge of control. Single-manager
structures fail, but team chemistry can take over. The whole thing is just
barely in control, and teetering on chaos.
Should sound familiar. Those are the conditions, complexity theorists
have been telling us for decades, that spark creative output. More than 8,
less than 16. Why 12, besides being a nice mean? Anecdotal data.
The Ubiquity of 12
I hope you are too smart to conclude that I am making 12 a number of
religious significance. It is simply the mean of a fairly narrow distribution.
Still, it turns up in a surprising number of creative crucible places in
practice:
What is not easy is appreciating that that's all you need. You can
dispense with extrinsic coordinate systems entirely. Just keeping track of
how those three variables (known as arc-length, curvature and torsion if
my memory serves me) are changing, is enough. For short periods, you
can roughly measure them using just your intrinsic sense of time and how
your stomach and ears feel. To keep the measurements precise over longer
periods, you need a gyroscope, an accelerometer and a watch.
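The author's memory serves him correctly: these three quantities are the ones in the classical Frenet-Serret description of a space curve. For the curious, the standard equations for the moving frame (tangent T, normal N, binormal B) read:

```latex
\frac{d\mathbf{T}}{ds} = \kappa\,\mathbf{N}, \qquad
\frac{d\mathbf{N}}{ds} = -\kappa\,\mathbf{T} + \tau\,\mathbf{B}, \qquad
\frac{d\mathbf{B}}{ds} = -\tau\,\mathbf{N}
```

Here s is arc length, κ is curvature and τ is torsion. No external coordinate system appears anywhere in these equations, and specifying κ(s) and τ(s) determines the curve up to a rigid motion. That is the mathematical content of navigating intrinsically.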
If you want motifs for the two modes of operation, think of it as the
difference between a magnetic compass and a gyroscope (these days, GPS
might be a better motif for the former, but the phrase "the compass and the
gyroscope" has a certain ring to it that I like).
We need another supporting notion before we can construct an
intrinsic coordinate system for human lives.
Behavioral Boundedness
Remember that the primary real value of an extrinsically defined
discipline in a field/domain matrix is predictable boundedness.
Mathematicians can trust that they won't have to suddenly start dancing
halfway through their career to progress further.
This predictability allows you to form reasonable expectations for
decades of investment, and make decisions based on your upfront
assessment of your strengths, and expectations about how those strengths
will evolve as you age.
If I decide that I have certain strengths in mathematics and that I want
to bet on those strengths for a decade, to get to mastery, I shouldn't
suddenly stumble into a serious weakness along the way that blocks me,
like a lack of natural athleticism.
So a disciplinary boundary is very useful if it provides that kind of
predictability. I call this behavioral boundedness: an expectation that your
behaviors in the future won't wander too far out of certain
strengths-based comfort zones you can guess at fairly accurately, upfront.
Before putting in 10,000 hours.
Grit is the enduring intrinsic quality that, for a brief period in recent
history, was coincident with the pattern of behavior known as progressive
disciplinary specialization.
Grit has external connotations of extreme toughness, a high apparent
threshold for pain, and an ability to keep picking yourself up after getting
knocked down. From the outside, grit looks like the bloody-minded
exercise of extreme will power. It looks like a super-power.
I used to believe this understanding of grit as a superhuman trait. I
used to think I didn't possess it. Yet people seem to think I exhibit it in
some departments. Like reading and writing. They are aghast at the
amount of reading I do. They wonder how I can keep churning out
thousands of words, week after week, year after year, with no guarantee
that any particular piece of writing will be well-received.
They think I must possess superhuman willpower because they make
a very simple projection error: they think it is hard for me because it
would be hard for them. Well, of course things are going to take
superhuman willpower if you go after them with the wrong strengths.
For a while, I went around calling this faux-grit. The appearance of
toughness. But the more I looked around me at other people who seemed
to display grit in other domains, the more I realized that it wasn't hard for
them either. What they did would merely be superhuman effort for me.
Faux grit and true grit are the same thing (the movie True Grit is actually
quite a decent showcase of the trait; it showcases the superhuman
outside/fluid inside phenomenon quite well).
So what does the inside view of grit look like? I took a shot at
describing the subjective feel in my last post on the Tempo blog. It simply
feels like mindful learning across a series of increasingly demanding
episodes that build on the same strengths.
But the subjective feel of grit is not my concern here. I am interested
in objective, intrinsically measurable aspects of grit that can serve as an
internal inertial navigation system; a gyroscope rather than GPS.
people into reading related content, but out of sheer laziness. I don't like
repeating arguments, definitions or key ideas. So I back-link. I do like
most of my posts to be stand-alone and comprehensible to a new reader
though, so I try to write in such a way that you can get value out of
reading a post by itself, but significantly more value if you've read what
I've written before. For example, merely knowing what I mean by the
word "legibility," which I use a lot, can increase what you get out of some
posts by 50%. This is one reason blogging is such a natural medium for
me. The possibilities of hyperlinking make it easy to do what would be
extremely tedious with paper publishing.
The key here is internal referencing. I use far fewer external reference
points (there are perhaps a dozen key texts and a dozen papers that I
reference all the time). It sounds narcissistic, but if you're not referencing
your own work at least 10 times as often as you're referencing others,
you're in trouble in the intrinsic navigation world. Instead of developing
your own internal momentum and inertia, you are being buffeted by
external forces, like a grain of pollen being subjected to the forces of
Brownian motion.
Releasing
And finally, releasing. As in the agile software dictum of "release
early and often." In blogging, frequency isn't about bug-fixing or
collaboration. It isn't even about market testing (none of my posts are
explicitly engineered to test hypotheses about what kind of writing will do
well). It is purely about rational gambling in the dollar-cost averaging
sense. It is the investing advice "don't try to time the market" applied to
your personal work.
If the environment is so murky and chaotic that you cannot
strategically figure out clever moves and timing, the next best thing you
can do is just periodically release bits of your developing work in the form
of gambles in the external world. I think there's a justifiable leap of faith
here: if your work admits significant reworking and internal referencing,
you're probably on to something that is of value to others.
If a post happens to say the right thing at the right time, it will go
viral. If not, it won't. All I need to do is to keep releasing. This realization
The key here is very simple and very Sun Tzu: with respect to the
external world, take the path of least resistance.
Why? Think of it this way. The disciplinary world very coarsely
measured your aptitudes and strengths once in your lifetime, pointed you
in a roughly right direction and said "Go!" The external environment had
been turned into a giant obstacle course designed around a coarse global
mapping of everybody's strengths.
So there was no distinction between the map of the external world
you were navigating, and the map of your internal strengths. The two had
been arranged to synchronize. If you navigated through a map of external
achievement, landmarks and honors, you'd automatically be navigating
safely through the landscape of your internal strengths.
But when you cannot trust that you've been pointed in the right
direction in a landscape designed around your strengths, you cannot afford
to navigate based on a one-time coarse mapping of your own strengths at
age 18.
If you run into an obstacle, it is far more likely to represent a
weakness of yours than a meaningful real-world challenge to be overcome
as a learning experience.
Don't try to go over or through. It makes far more sense to go around.
Hack and work around. Don't persevere out of a foolhardy superhuman
sense of valor.
Hard Equals Wrong
If it isn't crystal clear, I am advocating the view that if you find that
what you are doing is ridiculously hard for you, it is the wrong thing for
you to be doing. I maintain that you should not have to work significantly
harder or faster to succeed today than you had to 50 years ago. A little
harder perhaps. Mainly, you just have to drop external frames of reference
and trust your internal navigation on a landscape of your own strengths. It
may look like superhuman grit to an outsider, but if it feels like that inside
to you, you're doing something wrong.
This is a very contrarian position to take today. Thomas Friedman in
particular has been beating the "harder is better" drum for a decade now,
most recently in his take on the London riots, modestly titled "A Theory of
Everything (Sort Of)":
Why now? It starts with the fact that globalization and
the information technology revolution have gone to a whole
new level. Thanks to cloud computing, robotics, 3G
wireless connectivity, Skype, Facebook, Google, LinkedIn,
Twitter, the iPad, and cheap Internet-enabled smartphones,
the world has gone from connected to hyper-connected.
This is the single most important trend in the world
today. And it is a critical reason why, to get into the middle
class now, you have to study harder, work smarter and
adapt quicker than ever before. All this technology and
globalization are eliminating more and more routine
work, the sort of work that once sustained a lot of
middle-class lifestyles.
The environment that really matters isn't the external world. It is
pretty much pure noise. You can easily find and process the subset that is
meaningful for your life. It isn't about harder, smarter, faster. If it were, I'd
be dead. I've been getting lazier, dumber and slower. It's called aging. I
think Friedman is going to run out of superlatives like "hyper-" before I
run out of life. If I am wrong, the world is going to collapse before he gets
around to writing The World is Hyper-Flatter-er. Humans are simply not
as capable as Friedman's survival formula requires them to be.
Exhortation is pointless. Humans don't suddenly become superhuman
just because the environment suddenly seems to demand
superhuman behavior for survival. Those who attempt this kill themselves
just as surely as those dumb kids who watch a Superman movie and jump
off buildings hoping to fly.
It is the landscape of your own strengths that matters. And you can set
your own, completely human pace through it.
The only truly new behavior you need is increased introspection. And
yes, this will advantage some people over others. To avoid running faster
and faster until you die of exhaustion, you need to develop an increasingly
refined understanding of this landscape as you progress. You twist and
turn as you walk (not run) primarily to find the path of least resistance on
the landscape of your strengths.
The only truly new belief you need is that the landscape of
disciplinary endeavors and achievement is meaningless. If you are too
attached to degrees, medals, prizes, prestigious titles and other extrinsic
markers of progress in your life, you might as well give up now. With 90%
probability you aren't going to make it. It's simple math: even if they were
worth it, as our friend Friedman notes with his characteristic
scaremongering, there simply isn't enough to go around:
Think of what The Times reported last February: "At
little Grinnell College in rural Iowa, with 1,600 students,
nearly one of every 10 applicants being considered for the
class of 2015 is from China." The article noted that dozens
of other American colleges and universities are seeing a
similar surge as well. And the article added this fact: "Half
the applicants from China this year have perfect scores of
800 on the math portion of the SAT."
If you're paying attention to the Chinese kids who score a perfect 800,
you're paying attention to the wrong people. I mean, really? You should
worry about some Chinese kid terrorized into achieving a perfect-800
math score by some Tiger Mom, and applying to Grinnell College?
It's the Chinese kids who are rebelling against their Tiger Moms,
completely ignoring the SAT, and flowing down the path of least
resistance that you should be worried about. After all, Sun Tzu invented
that whole idea.
So rework, reference, release. Flow through the landscape of your
own strengths and weaknesses. Count to 10,000 rework hours as you walk.
If you aren't seeing accelerating external results by hour 3300, stop and
introspect. That is the calculus of grit. It's the exponential human
psychology you need for exponential times. Ignore everything else.
Factoid: this entire 4000-plus word article is a working out of a
21-word footnote on page 89 of Tempo. That's how internally referenced my
writing has become. Never say I don't eat my own dogfood.
And yes, the basic political question of capitalism versus social justice
rears its ugly head here. Choosing a calling is a political act, and I'll
explain the choices you have available.
The Central Dogma in the World of Work
There are three perspectives we normally utilize when we think about
the world of work.
The first is that of the economist, who applies the laws of demand and
supply to labor markets. In this world, if a skill grows scarce in the
economy, wages for that skill will rise, and more people will study hard to
acquire that skill. Except that humans perversely insist on not following
these entirely reasonable laws. As BLS (Bureau of Labor Statistics)
statistics reveal, people insist on leaving the skilled nursing profession
perennially thirsting for new recruits, while the restaurant industry in Los
Angeles enjoys bargain labor prices, thanks to those hordes of Hollywood
hopefuls, who are good for nothing other than acting, singing and waiting
tables.
Then there is the perspective of the career counselor. That theatrical
professional who earnestly administers personality and strengths tests, and
solemnly asks you to set "career goals," think about "marketability of
skills," weigh income against personal fulfillment, and so forth. I say
theatrical because the substance of what they offer is typically the same,
whether the mask is that of a drill sergeant, guardian angel or an earth
mother; whether the stance is one of realism, paternalism or romanticism.
Somewhere in the hustle and bustle of motivational talk, resume critiquing
and mock interviews, they manage to cleverly hide a fact that becomes
obvious to the rest of us by the time we hit our late twenties: most of us
have no clue what to do with our lives until we've bummed around,
test-driven, and failed at, multiple callings. Until we've explored enough to
experience a career "Aha!" moment, most of us can't use counselors. After
we do, they can't really help us. If we never experience the "Aha!"
moment, we are lost forever in darkness.
And finally there is the perspective of the hiring manager. That
hopeful creature who does his or her best to cultivate a pipeline of
fungible labor, in the fond and mostly deluded hope that cheap talent
will fit neatly into available positions. It is a necessary delusion. To
admit otherwise would be to admit that the macroeconomic purpose an
organization appears to fulfill is the random vector sum of multiple people
pulling their own way, with some being fortunate enough to be pulling in
the accidental majority direction, while others are dragged along, kicking
and screaming, until they let go, and still others pretend to pull whichever
way the mass is moving. Mark Twain's observations of ants are more
applicable than hiring managers' ideas that talent-position fit is a
strongly controllable variable.
Here's the one common problem that severely limits the value of each
of these perspectives. There is a bald, obvious and pertinent fact that is so
important, yet so rarely acknowledged, let alone systematically
incorporated, that each of these perspectives ends up with a significant
blind spot.
That bald fact is this: it takes two kinds of work to make a society
function. First, there is the sexy, lucrative and powerful (SLP) work that
everybody wants to do. And then there is the dull, dirty and dangerous
(DDD) work that nobody wants to do. There is a lot of gray stuff in the
middle, but that's the basic polarity in the world of work. Everything
depends on it, and neither pole is dispensable.
The economist prefers not to model this fact. The career counselor
does not want to draw attention to it. The hiring manager has good reason
to deny it.
This brings us to the central dogma in the world of work: everyone
can simultaneously climb the Maslow pyramid, play to their strengths, and
live rewarding lives. That somehow, magically, in this orgy of
self-actualization, Adam Smith will ensure that the trash will take itself out.
Like all dogmas, it is false, but still manages to work, magically.
The dull, dirty and dangerous work does get done. Trash gets hauled,
sewers get cleaned, wars get fought by cannon-fodder types. And yet the
dogma is technically never violated. You see, there is a loophole that
allows the dogma to remain technically true, while being practically false.
The loophole is called false hope.
The False Hope Tax and Dull, Dirty and Dangerous (DDD)
The phrase "dull, dirty or dangerous" became popular in the military in
the last decade, as a way to segment out and identify the work that suits
UAVs (Unmanned Aerial Vehicles, like the Predator) the best. It also
describes the general order in which we will accept work situations that do
not offer any hope of sex, money, or power. Most of us will accept dull
before dirty, and dirty before dangerous. Any pair is worse than any one
alone, and all three together represent hell. There's a vicious spiral here.
Dull can depress you enough that you are fired and need to work at dull
and dirty, which only accelerates the decline into dull, dirty and
dangerous. And I am not talking dangerous redeemed by Top Gun
heroism. I am talking "die in stupid, pointless ways" dangerous.
William Rathje, a garbologist (a garbage archaeologist), notes in his
book, Rubbish (to be reviewed), that once you get used to it, garbage in
landfills has a definite bouquet that is not entirely unpleasant. But then, he
is a professor, poking intellectually at garbage rather than having to
merely haul and pile it, with no time off to write papers about it. Dull,
dirty and dangerous work is stuff that takes scholars to make interesting,
priests to ennoble, and artists to make beautiful. But in general, it is
actually done by some mix of the deluded hopeful, the coerced, and the
broken and miserable, depending on how far the civilization in question
has advanced. You might feel noble about recycling, but somewhere out
there, near-destitute people are risking thoroughly stupid deaths (like
getting pricked by an infected needle) to sort your recycling. Downcycling
really, once you learn about how recycling works. On the other side of
the world, ship-breakers are killing themselves through a mix of toxic
poison and slow starvation, to sustain the processes that bring your cheap
Walmart goods to you from China.
The reasons behind the mysteriously perennial talent scarcity and
inelastic wages in the nursing profession, or the hordes of waitstaff in LA
hopefully (Pandora be praised!) waiting for their big Hollywood break, are
blindingly obvious. The obviously germane facts are that one profession
SLP, we are inevitably legitimizing the cruelty that the world of DDD
suffers.
Tinker, Tailor, Soldier, Sailor
Let's circle back and revisit "tinker, tailor, soldier, sailor, richman,
poorman, beggarman, thief."
Why did little 17th century girls enjoy counting stones and guessing
who their future husbands might be? Was their choice of archetypes mere
alliterative randomness?
We tend to think of specialization and complex social organization as
consequences of the industrial age, but the forces that shape the
imaginative division of labor have been at work for millennia.
Macroeconomics and Darwin only dictate that there will be a spectrum
with dull, dirty and dangerous at one end, and sexy, lucrative and
powerful at another. This spectrum is what creates and sustains social and
economic structures. I am not saying anything new. I am merely restating,
in modern terms, what Veblen noted in Theory of the Leisure Class. From
one century to the next, it is only the artistic details that change. "Tinker,
tailor" evolves to a different set of archetypes.
Weve moved from slavery to false hope as the main mechanism for
working with the spectrum, but whatever the means, the spectrum is here
to stay. Automation may nip at its heels, but fundamentally, it cannot be
changed. Why? The rhyme illustrates why.
At first sight, the "tinker, tailor" rhyme represents major category
errors. Richman and poorman are socioeconomic classes, while tailor,
sailor and soldier are professions. Tinker (originally a term for a
Scottish/Irish nomad engaged in the tinsmith profession) is a lifestyle.
Beggarman and thief are categories of social exodus behaviors.
Relate them to the DDD-SLP spectrum, and you begin to see a
pattern. As Theodore White noted, Richman enjoys the ultimate privilege:
buying his own social identity at the SLP end of the spectrum. Poorman is
stuck in the DDD end. Beggarman and thief have fallen off the edge of
society, the DDD end of the spectrum, by either giving up all dignity, or
sneaking about in the dark. Sailor and Tinker are successful exodus
archetypes. The former is effectively a free agent. Remember that around
the time this rhyme captured the popular imagination in the 17th century,
the legitimized piracy and seaborne thuggery that was privateering, had
created an alternative path to sexy, lucrative and powerful; one that did not
rely on rising reputably to high office (the path that Samuel Pepys
followed between 1633 and 1703; The Diary of Samuel Pepys remains one
of the most illuminating looks at the world of work ever written). The
latter, the tinker, was a neo-nomad, substituting tin-smithing for
pastoralism in pre-industrial Britain.
The little girls had it right. In an age that denied them the freedom to
create their own destiny, they wisely framed their tag-along life choices in
the form of a rhyme that listed deep realities. Today, the remaining modern
women who look to men, rather than to themselves, to define their lives,
might sing a different song:
blogger, coder, soldier, consultant
rockstar, burger-flipper, welfareman, spammer
Everything changes. Everything remains the same.
The Politics of Career Choices
Somewhere along the path to growing up, if you bought into the
moral legitimacy argument that justified striving for sexy, lucrative,
powerful, you implicitly took on the guilt of letting dull, dirty and
dangerous work, done by others, enable your life. If that guilt is killing
you, you are a liberal. If you think this is an unchangeable reality of life,
you are a conservative. If you think robots will let us all live sexy,
lucrative, powerful lives, you are deluded. You see, the SLP-DDD
spectrum is not absolute; it is relative. Because our genes program us to
strive for relative reproductive success in complicated ways. There is a
ponderous theory called relative deprivation theory that explains this
phenomenon. So no matter how much DDD work robots take off the table,
we'll still be the same pathetic fools in our pajamas.
an ability to shut off the meta-cognition and just get lost in doing. Great
teachers were probably great learners. Great doers may be slower learners,
but are great at shutting off the meta-cognition.
Causes and Consequences
I think the turpentine effect is caused by (and I am treading on
dangerous territory here) the lack of a truly artistic eye in the domain
defined by a given tool (so it is ironic that it was Picasso who came up
with the line). Interesting art arises out of a combination of refined skills
and a peculiar, highly original way of looking at the world through that
skill. If you have the eye without the skills, you become an idiosyncratic
eccentric who is never taken seriously. If you have the skills without the
eye, you become susceptible to the turpentine effect. The artistic eye is
innate and requires no real refinement. In fact, the more you learn, the
more the eye is blinded. The adult artistic eye is largely a matter of
protecting a childlike way of seeing, but coupling it to an adult way of
processing what you see. And to turn it into value, you need a second
coupling to a skill that translates your unique way of seeing into unique
ways of creating.
There is a feedback loop here. Sometimes acquiring a skill can make
you see things you didn't see before. When you have a hammer in your
hand, everything looks like a nail. On the other hand, if you cant see
nails, all you see is opportunities to make better hammers.
The artistic eye is also what you need to make design decisions that
are not constrained by the tools. A complete absence of artistic instincts
leads to an extreme lack of judgment. In a Seinfeld episode, Jerry gets
massively frustrated with a skilled but thoroughly inartistic carpenter
whom he has hired to remodel his kitchen. The carpenter entirely lacks
judgment and keeps referring every minor decision to Jerry. Finally Jerry
screams in frustration and tells him to do whatever, and just stop bothering
him. The result: the carpenter produces an absolute nightmare of a kitchen.
In Wonder Boys (a movie based on a Michael Chabon novel), the
writer/professor character played by Michael Douglas tells his students
that a good writer must make decisions. But he himself completely fails to
do so, and his book turns into an unreadable, technically-perfect, 1000-
barista who works at Starbucks on Sahara Avenue, whom I once ran into at
Whole Foods.
This still isn't the same as actually knowing someone, but it is a
necessary first step (as an aside, this is the reason why the three
media/three contacts rule in sales works the way it does). Double-take
moments are relationship-escalation options with expiry dates. They create
a window of opportunity within which the relationship can escalate into a
personal one.
There is a reason "haven't we met before?" is the mother of all pick-up
lines.
So let's say there are three zones around you: the context-free zone of
personal relationships, surrounded by a context-dependent double-take zone
(call it the "don't-I-know-you-from-somewhere" zone if you prefer), and
finally, social dark matter.
The Real and Abstract Parts of the Social Graph
The personal, context-free zone is the part of the social graph that is
real for you. Here, you don't deal in abstractions like "It's not what you
know, but who you know." You deal in specifics like, "You need to get
yourself a meeting with Joe. Let me send an introductory email." You
could probably sketch out this part of the social graph fairly accurately on
paper, with real names and who-knows-whom connections. You don't
need to speculate about degrees of separation here. You can count them.
The dark matter world is the part of the social graph that is an
abstraction for you. You have abstract ideas about how it works (Old Boy
networks, people taking board seats in each other's companies, the idea
that weak links lead to jobs, the idea that Asians have stronger connections
than Americans), but you couldn't actually sketch it out except in coarse,
speculative ways using groups rather than individuals.
The double-take zone is populated by people who are socially part of
the abstract social network that defines the dark matter, but physically or
digitally are concrete entities in your world, embedded in specific contexts
that you frequent. Prying someone loose from the double-take zone means
moving them from the abstract social graph into your real, neighborhood
graph. They go from being concrete and physically or virtually situated in
your mind to being concrete and socially situated, independent of specific
contexts. If mathematicians and theoretical computer scientists ran the
world, the socially correct thing to say in a double-take situation would be:
"Oh, we're context-independent now; do you want to take this on-graph?"
In these terms, Rowling's little trick involves introducing characters
in the double-take zone and then moving them to the context-free zone. In
the process, she socially situates them. Lockhart goes from abstract
celebrity author making an appearance at a bookstore to teacher with
specific relationships to the lead characters. Sirius Black initially appears
as an abstract criminal on television, but turns into Harry's godfather.
Viktor Krum is a distant celebrity Quidditch player who turns into Ron's
rival for the affections of Hermione.
The Active, Unstable Layer
The double-take zone is defined by the double-take test, but such tests
are rare. What happens when they do occur? Since an actual double take
creates a window of opportunity to personalize a relationship (an active
option), you could call this the active and unstable layer of the
double-take zone. The more actual double takes are happening, the more the zone
is active and unstable.
Our minds deal badly with the double-take zone when it is stable and
dormant. And we really fumble when it gets active and unstable. Why?
expanding it vastly once again, this time with more symmetry, thanks to
the explosion in the number of contexts it offers for encounters to occur.
This wouldn't matter so much if the expansion didn't affect stability.
We know how to deal with stable and dormant double-take zones.
The Rules of Civility
Before the Internet began seriously destabilizing and activating the
double-take zone, it was an unnatural social space, but we knew how to
deal with it.
The double-take zone merely requires learning a decent and polite,
but impersonal, approach to interpersonal behavior: civility. It requires a
capacity for an abstract sort of friendliness and a baseline level of mutual
helpfulness among strangers. We learn the non-Duchenne smile:
something that sits uncomfortably in the middle of a triangle defined by a
genuine smile, a genuine frown, and a blank stare.
We think of such baseline civility as the right way to deal with the
double-take zone. This is why salespeople come across as insincere: they
act as though double-take zone relationships were something deeper.
The pre-Internet double-take zone was fairly stable. Double-take
events were truly serendipitous and generally didn't go anywhere. Most
relationship options expired due to low social and geographic mobility. A
random encounter was just a random encounter. Travel was stimulating,
but poignant encounters abroad rarely turned into anything more.
The rules of conduct that we know as civility have an additional
feature: they are based on an assumption of stable, default-context status
relationships that carry over to non-default contexts. A century ago, if a
double-take moment did occur, once the parties recognized each other
(made easier by obvious differences in clothing and other physical
markers of class membership), the default-context status relationship
would kick in. If a lord decided to take a walk through the village market
on a whim, and ran into his gardener, once the double-take moment
passed, the gardener would doff his hat to the lord, and the lord would
confer a gracious nod upon the gardener.
But this sort of prescribed, status-dependent civility is no longer
enough. The rules of civility cannot deal with an explosion of
serendipitous encounters.
Social Mobility versus Status Churn
Since double-take encounters temporarily dislocate people from the
default context through which you know them, and leave them
temporarily more "alive" afterwards, you could say the double-take zone is
coming alive with nascent relationships: relationships that have been
dislodged from a fixed physical or digital context, but haven't yet been
socially situated.
There is an additional necessary condition for more to happen: the
double-take moment must also destabilize default assumptions about
relative status.
Double-take events today destabilize status, unlike similar events a
century ago. This is because we read them differently. A lord strolling
through a market a century ago a domain marked for the service class
knew that he was a social tourist. Double-take events, if they happened,
were informed by the assumption that one party was an alien to the
context, and both sides knew which one was the alien. Everybody wore
the uniform of their home class, wherever they went.
Things are different today. A century ago, social classes were much
more self-contained. Rich, middle-class and poor people didn't run into
each other much outside of expected contexts. They shopped, ate and
socialized in different places, for instance. This is why traditional romantic
stories are nearly always based on the trope of the heroine temporarily
escaping from her home social class to a lower one, and having a status-destabilizing
encounter with a lower-class male (the reverse, a prince
going walkabout and meeting a feisty commoner girl, seems to be a less
common premise, but that's a whole other story).
But today, one of the effects of the breakdown of the middle class and
trading-up is that status relationships become context-dependent. There is
no default context.
Let's say you're an administrative assistant at a university, have an
associate's degree, and frequent a coffeeshop where the barista is a
graduate student. You both shop at Whole Foods. She's trading up, as far
as dietary lifestyles go, to shop at Whole Foods, while it is normal for you
because you have a higher household income.
In the coffeeshop, you're higher status, as the customer. If you run into
each other at Whole Foods, you're equals. If you run into each other on
campus, she's the superior.
Short of becoming President, there is almost nothing you can do that
will earn you a default status with everybody. It's up in the air.
This isn't social mobility. The whole idea of social mobility, at least in
the sense of classes as separate, self-contained social worlds, is breaking
down. Instead you have context-dependent status churn. Double-take
moments don't necessarily indicate that one party is a tourist outside their
class. They are merely moments that highlight that class is a shaky
construct today.
Worlds are mixing, so double-takes become more frequent. But what
makes the increased frequency socially disruptive is that status
relationships are different in the different contexts.
Temporal Churn
Even more unprecedented than status churn is temporal churn.
People from the same nominal class, who once knew each other, can
move into each other's double-take zones simply by drifting apart in
space. That's why you do a double take when you randomly run into an
old classmate, whom you haven't seen for decades, in a bookstore
(this happened to me once). Or when you run into a hallway-hellos-level
coworker, with whom you've never worked directly, at the grocery store
(this happened to me as well).
It is not changes in appearance or social status that make immediate
recognition difficult. It is the unfamiliar context itself.
This sort of thing doesn't happen much anymore. We don't catch up
as much because we never disconnect. Unexpected encounters
are rare because online visibility never drops to zero. Truly serendipitous
encounters turn into opportunistically planned ones via online
early-warning signals.
One effect of this is that relationships can go up or down in strength
over a lifetime, since they are continuously unstable and active. Once
you've friended somebody on Facebook, and their activities keep showing
up in your stream, you are more likely to look them up deliberately for a
meeting or collaboration. Social situation awareness is not allowed to fade.
The active and unstable double-take layer is constantly suggesting
opportunities and ideas for deeper interaction.
It's not that time doesn't matter anymore, but that time does more
complicated things to relationships. In the pre-Internet world, relationships
behaved monotonically in the long term. You either lost touch, and the
relationship weakened over time, or you stayed in touch and the
relationship got stronger over time. Some relationships plateaued at a
certain distance.
Few relationships went up and down in dramatic swings as they
routinely do today.
Beyond Civility
Mere static-status civility is no longer enough to deal with a world of
volatile relationships created by status churn across previously distinct
classes, and temporal churn that ensures relationships never quite
die. Relationships that move in and out of the double-take zone (or even
just threaten to do so) need a very different approach.
You never know when you might turn a barista into a new friend after
a double-take encounter, or renew a relationship with an old one via a
Facebook Like.
The sane default attitude today is "the world is small and life is long."
Reinventing yourself is becoming prohibitively expensive. You have to
navigate under the expectation that the real part of your social graph will
grow over time, even if you move around a lot. If you are immortal and
can move sufficiently fast in space and time, the abstract social graph may
vanish altogether, like it did for Wowbagger the Infinitely Prolonged in
The Hitchhiker's Guide to the Galaxy, who made it the mission of his
immortal life to insult everybody in the galaxy, in person, by name, and in
alphabetical order.
The phrase "the world is small and life is long" came up in a
conversation with an acquaintance in Silicon Valley. We'd been talking
about how the Silicon Valley technology world, despite being quite large,
acts like a small world. We'd been talking, in particular, about the dangers
of burning bridges and picking fights. We both agreed that that's a very
dangerous thing to do. That's when my acquaintance trotted out the
phrase, with a philosophical shrug.
Of the two parts of the phrase, "the world is small" is easier to
understand. I don't think it has much to do with the much-publicized
four-degrees finding on Facebook. Status and temporal churn within the
six-degree world is sufficient to explain what's happening.
"Life is long" is the bit people often fail to appreciate. The social graph
throbs every minute with actual encounters that are constantly rewiring it.
If you are in a particular neck of the woods for long enough, you'll
eventually run into everybody within it more than once. It's the law of
large numbers applied to accumulating random encounters.
Silicon Valley is a place where worlds collide frequently in different
status-churning contexts, and circulation through different roles over time
creates temporal churn. There are other worlds that exhibit similar
dynamics. Most of the world is going to look like this in a few decades.
This retreating from all nearby centers is not exactly the personality
description of a great social hub. So why is it a great position for
introduction-making? It's the same reason Switzerland is a great place for
international negotiations: neutrality and small size anchoring credibility,
but with sufficient actual clout to enforce good behavior. If you are big or
powerful, you have an agenda. If you are from the center of a community,
you have an agenda. Another great example is the Bocchicchio family in
The Godfather: not big enough to be one of the Five Families, but
bloody-minded enough to effectively play intermediary in negotiations by offering
themselves up as hostages.
Edge Blogging and the Introduction Scaling Problem
This post actually grew out of a problem I haven't yet solved. My
instincts around introductions aren't serving me well these days. Over the
last few months, the number of potential connection opportunities that go
above my trigger threshold has been escalating. Two years ago, I'd spot
one potential connection every few months and do an introduction. Now I
spot one or two a week, and it's accelerating. I am getting the strange
feeling that I might turn into one of those cartoon characters at a
switchboard who starts out all calm and in control and is reduced to crazed
scrambling. In case it isn't obvious, the growth of ribbonfarm is the driver
that is creating this scaling problem.
that much. I get distracted too quickly. My brain is not built for depth in
that sense, even around things I trigger, like the Gervais Principle
memeplex.
The conundrum is that I don't think raising the threshold for
potential connection quality is the right answer. That's the wrong filter
variable for scaling. I am not sure what the right one is, but I won't
attempt to jump to synthesis. So far, I've simply been letting a
steadily increasing fraction of introduction opportunities go by. Mostly I try
to avoid making introductions to people who are already oversubscribed.
Though I don't have a theory, I do have one heuristic that serves me
well: prefer the closer potential direct connection. If I know A and B, and I
sense that A and B would have a more fertile relationship with each other
than either has with me, I make the connection and exit. It is the opposite
of the logic of marketplaces whose organizers are afraid of disintermediation. To
me, being an intermediary in the social sense is mostly costs and little
benefit.
But that one heuristic isn't enough. I have been experimenting with
introductions in different ways lately, and learning new ideas and
techniques.
Here's one new idea I've learned. To keep edges edgy, and prevent
them from becoming centers, you need feedback signals. One I look for is
symmetry. Introducer types tend to be introducees equally often. If
the ratio changes, I get worried.
As an illustration of the symmetry of this process of mutual
cross-catalysis among sociopath weak-link hubs, consider this: while I was
conducting my experiments with introductions, others have been
introducing me to their friends. Hang Zhang of Bumblebee Labs
introduced me to Tristan Harris, CEO of Apture, and Seb Paquet formally
introduced me to Daniel Lemire (whom I knew indirectly through comments
on each other's blogs, but had never directly emailed or interacted
with).
We are all lab rats running in each other's mazes. I like that thought.
It was the last of these that triggered this train of thought, but I'll get
to that.
I am still working through the arguments for each of these
conjectures, but whether or not they are true, I believe we are seeing
something historically unprecedented: an intrinsic psychological variable
is turning into a watershed sociological variable. Historically, extrinsic and
non-psychological variables such as race, class, gender, socio-economic
status and nationality have dominated the evolution of societies.
Psychology has at best indirectly affected social evolution. For perhaps the
first time in history, it is directly shaping society.
Since so many interesting questions hinge on the E/I distinction, I
figured it was time to dig a little deeper into it.
Wrong, Crude and Refined Models
I'll assume you are past the lay, wrong model of the E/I spectrum.
Introversion has nothing to do with shyness or social awkwardness.
If you have taken a Psychology 101 course at some point in your life,
you should be familiar with the crude model: extroverts are energized by
social interactions while introverts are energized by solitude. Every major
personality model has an introversion/extroversion spectrum that roughly
maps to this energy-based model. It is arguably the most important of the
Big Five traits.
For the ideas I am interested in exploring, the Psychology 101 model
is too coarse. We sometimes forget that there are no true solitary types in
Homo sapiens. As a social species, we merely vary in the degree to which
we are sociable. We need a more refined model that distinguishes between
varieties of sociability.
A traditional mixed group of introverts and extroverts exhibits these
varieties clearly. Watch a typical student group at a cafeteria. The
extroverts will be in their mutually energizing huddle at the center, while
the introverts will be hovering at the edges, content to get the low-dosage
social energy they need either through one-on-one sidebar conversations
or occasional contributions tossed like artillery shells into the extrovert
energy-huddle at the core. Usually contributions designed to arrest
groupthink or runaway optimism/pessimism.
As this example illustrates, a more precise and accurate view of the
distinction is that introverts need less frequent and less intense social
interaction, and can use it to fuel activities requiring long periods of
isolation. Extroverts need more frequent and more intense social
interactions, and can only handle very brief periods away from the group.
They prefer to use the energy in collaborative action.
While true solitude (like being marooned on an island without even a
pet) is likely intolerable to 99% of humanity, introverts prefer to spend the
social energy they help create individually. This leads naturally to a
financial metaphor for the E/I spectrum.
E/I Microeconomics
Positive social interactions generate psychological energy, while
negative ones use it up. One way to understand the introvert/extrovert
difference is to think in terms of where the energy (which behaves like
money) is stored.
Introverts are transactional in their approach to social interactions;
they are likely to walk away with their share of the energy generated by
any exchange, leaving little or nothing invested in the relationship itself.
This is like a deposit split between two individually held bank accounts.
This means introverts can enjoy interactions while they are happening,
without missing the relationships much when they are inactive. In fact, the
relationship doesn't really exist when it is inactive.
Extroverts are more likely to invest most of the energy into the
relationship itself, a mutually-held joint account that either side can draw
on when in need, or (more likely) both sides can invest together in
collaboration. This is also why extroverts miss each other when separated.
The mutually-held energy, like a joint bank account, can only be accessed
when all parties are present. In fact, strong extroverts don't really exist
outside of their web of relationships. They turn into zombies, only coming
alive when surrounded by friends.
In balance sheet terms, introverts like to bring the mutual social debts
as close to zero as possible at the end of every transaction. Extroverts like
to get deeper and deeper into social debt with each other, binding
themselves in a tight web of psychological interdependence.
probably the best plot outline of my life. I might actually flesh it out and
post it here at some point (I dabbled in fiction a fair amount about a
decade ago, but somehow never pursued it very far).
Chapter 5, "Masks and Trance," is easily the most intense, disturbing
and rewarding chapter. The subject is acting with masks on, a stylized sort
of theater that seems to have been part of every culture, during every time
period, until enlightenment values began stamping it out. Since I had
just returned from Bali when I read this chapter (examples from Bali
feature prominently in the book's treatment), and had seen glimpses of what he was
talking about during my trip, the material came alive in particularly vivid
ways. The chapter deals, with easy familiarity, with topics that would
make most of us very uncomfortable: trances, possession and atavistic
archetypes. Yet, despite the disturbing raw material, the ideas and concepts
are not particularly difficult to grasp and accept. They make sense.
The Book, Take Two
So much for the straightforward summary of the book. That it teaches
theater skills effectively should not be surprising. What is surprising is the
light it sheds on a variety of other topics. Here are just a few:
1. Body Language: I've always found body language a somewhat
distasteful subject, whether it is of the traditional "covering your
mouth means you think the other person is lying" variety, or
neurolinguistic programming, or the latest craze, the study of
microexpressions. Despite the apparent validity of specific
insights, the field has always seemed to me intellectually
disreputable and shoddy. Impro does something I didn't think was
possible: it lends the subject dignity and intellectual respectability.
The trick, with hindsight, is to view the ideas in the field in the
context of art, not psychology.
2. Interpersonal Relationships: I spend a good deal of time thinking
about the principles of interpersonal interaction, and writing up
my thoughts. The reason Impro sheds a unique sort of light on the
subject is that it describes simulations of what-if scenarios that
would never happen in real life, but serve to validate theories that
do apply to real-life situations.
elements of your social (not private) identity. In my case, for instance, they
might be PhD, researcher, omnivorous reader, writer, individualist,
polymath-wannabe, coffee-shop person, non-athletic, physically lazy,
amoral, atheistic, and so forth. If you turned them all around, you'd get
something like high-school dropout, non-reader, groupie, parochial, pub
person, sporty, physically active, moral and religious. I am no snob, but it
is highly unlikely that I'd have much to do with somebody with that
profile.
On the other hand, if you meet somebody to whom every adjective
applies, but they rub you the wrong way at a deep level, what are you to
conclude? The clash has to be at the most subtle levels of your personality.
Meeting your evil twin helps you find yourself, which is why you should
look. Of course, I am being somewhat facetious here. You don't have to
hate your evil twin or battle him/her to the death. You can actually get
along fine and even complement each other in a yin-yang way.
de Botton, Taleb and Me
Take Alain de Botton, for instance. Despite my "evil twin" adjective, I
think I'd like him a lot and get along with him quite well. No climactic
battles. The Pleasures and Sorrows of Work is just beautiful as a book. As
you know if you've been reading this blog for a while, I write a lot on the
philosophy of work. The book literally produced dozens of thoughts and
associations in my head on every page. Since I was reading it on the
Kindle, I was annotating and highlighting like crazy. We think about the
same things. He opens with a pensive essay on container shipping
logistics, something I've written about. The Shawshank Redemption, with
its accountant hero, is one of my favorite movies; de Botton finds romance
in the profession as well. I've written about ship-breaking graveyards; he
writes about airplane graveyards. He seems fascinated by aerospace stuff.
I am an aerospace engineer. He sees more romance in a biscuit factory
than in grand cathedrals. So do I. Like me (only more successfully) he
shoots for an introspective, lyrical style. But as I continued reading, I
realized I was intellectually a little too close to the guy.
When I tried putting my notes all together, the feelings of discomfort
only intensified. There was no coherent pattern to my responses. I realized
that, in a way, you can only build one picture at a time with a given set of
jigsaw pieces. Writers normally leave enough room for you to construct
meaning so you feel a sense of control over the reading experience. With
evil twins, that's not possible, since you are trying to build different
pictures. I felt absorbed in the book, but also confused and disoriented by
it.
Thinking harder, I realized that the points of conflict in our
worldviews were at a very abstract level indeed. In a deep sense, de
Botton's worldview is that of an observer. Mine, though I do observe and
write a lot, is primarily that of a get-in-the-fray doer. He is content to
watch. I feel compelled to engage. He admires engineers and engineering;
I felt compelled to become one and get involved in building stuff. It is a
being-vs.-becoming dynamic. To a certain extent, he is driven by needs of
an almost religious nature: to overcome his sense of separateness and be
part of something larger than himself. My primary instinct is to separate
myself. It is a happiness vs. will-to-power dynamic. One last example: de
Botton is clearly a humanist: he wants to be kind and feel for others, and,
paradoxically, ends up being quite cruel in places. I, on the other hand, am
mainly driven by a deep ubermensch tendency towards hard/cold
interpersonal attitudes, but end up surprising myself by being kind and
compassionate more often in practice. Kind cruelty vs. tough love. I could
go on.
Another of my evil twins is Nassim Nicholas Taleb (Fooled by
Randomness, The Black Swan). I am re-reading the latter at the moment,
and I noticed that Taleb describes himself as a flaneur. In the comments to
my piece, "Is There a Cloudworker Culture?", a reader noted that my
self-description as a cloudworker sounded a lot like the idea of the flaneur.
Again, a lot of the exact same things interest us, and we share opinions on
a lot of key fronts (the nature of mathematics, empiricism and
falsifiability, unapologetic elitist tastes, long-windedness, low tolerance
for idiots and the accidentally wealthy, a preference for reading books
rather than the news). And again, we part ways at a deep level. That's a
story for another day.
So before we move on to the How-To section, a recommendation. If
you feel strangely attracted to my writing, and yet rebel against it at some
deep level, you might really (and unreservedly) love de Botton and/or
Taleb.
So go, look for your evil twin. You will be enlightened by what you
find. If you already know who yours are, I am curious. Post a comment
(suitably anonymized if necessary).
Me: "No, 175 is the reasonable price for this kind of item."
Seller: "Arrey, come on sir! Just look at this fine needlework; you
may have seen similar stuff for less in other shops, but if you look closely,
the work isn't as delicate!"
Me: "Of course I can see the quality of the work; that's why we want
to buy it. Now come on, quote me the right price."
Seller: "Okay sir, for you, I'll let it go for 250" (starts folding up the
kurta).
Me: "No no, this lady may not be Indian, but I am; be reasonable."
[My wife is Korean, and since I hadn't mentioned that she was my wife,
the shopkeeper had almost certainly assumed I was her local guide;
many other shopkeepers had in fact called out to me to bring her into their
stores, offering me a commission!]
Seller: "But I did quote the price for you sir; for foreigners, we
normally ask for at least 4-500!"
Me: "Fine, tell you what, I'll give you 190."
Seller: "Come on sir, at that price, I don't even make a profit of 10
rupees!"
Me: "Fine, let's do this deal. 200; final offer."
Seller (looking upset): "But..."
At this point, the seller's boss, probably the store owner, who'd been
poring over a ledger in the background, looked up, interrupted, and said
shortly, "Can't you see the lady wants it? Just give it to them for 200; let's
cut this short!"
I have several other examples I could offer (in the US, bargaining
tends to be restricted to larger purchases like cars), but these two examples
suffice to illustrate the points I want to pick out.
The Phenomenology
There are several features of interest here. Here is a round dozen:
1. Fake moves: In the Bahamian example, consider the rapid series of
three prices offered, with a very quick change of subject to the color
of the bead at the first sign that I wanted to buy. This bargaining is
clearly fake, the numbers being part of the initial courtship ritual
rather than the actual price negotiations, which were short-circuited.
2. Bargaining as bait: The sellers in the Nassau marketplace promote
their wares with a curious mix of American retail rhetoric ("C'mon
honey! Everything 50% off today") and more traditional
bargain-hunter bait ("You want a handbag sir, for the pretty lady? C'mon, I
make you a deal!"). I suspect very little serious bargaining actually
takes place, since the customers are largely American cruise ship
tourists, who are not used to bargaining for the small amounts in
play in these transactions.
3. Qualitative re-valuation: Consider the variety of non-quantitative
moves in the Jaipur example. In the "fine needlework" move, the
seller attempted to change my valuation of the object, rather than
move the price point. I accepted the point, but indicated I'd already
factored that in.
4. Narrative: A narrative also developed, inviting me to cast myself
as the knowledgeable insider who was being offered the smart
"Indian deal," as opposed to the high mark-up offered to clueless
foreigners. This is a key point that I will return to.
5. Deal-breaker feints: Twice, the seller attempted to convince me
that I was offering him a price he could not accept. These are
rhetorical feints. A similar move on the customer's part is to
pretend to walk away (that old saw about the key to negotiation
being the willingness to walk away isn't much use in practice, but
pretending to walk away is very useful).
6. Closure bluffs: Another interesting feature of the Indian example is
the closure bluff: a non-serious price accompanied by closure
moves (such as starting to package the item), on the off-chance that
the other party may panic and fold early.
model? Would all the bluffing and qualitative nuances vanish under the
right sort of time-series modeling?
The answers are yes and no, respectively. Yes, you do need to work
with the full thing; game theory won't cut the Gordian knot for you. And
no, you will not be able to subsume all the bluffing and complexity no
matter how much you crunch the numbers. So you do need to appreciate
the qualitative soundtrack of the bargaining. But no, don't be discouraged:
I am not suggesting that the only meaningful model is a localized sui
generis ethnography. Universal models and approaches to bargaining are
possible.
What actually happens in a bargaining transaction is the co-construction
of a storyline that both sides end up committing to. Every
move has a qualitative and a quantitative part. The prototypical transaction
pair can be modeled roughly as:
((p1, v1), (p2, v2))
where the p's are qualitative statements and the v's are price
statements. The key is that the qualitative parts constitute a language game
(in the sense of people like Stalnaker). Each assertion is either accepted or
challenged by subsequent assertions. The set of mutually accepted
assertions serves to build up a narrative of increasing inertia, since every
new statement must be consistent with previous ones to maintain
credibility, even if it is only the credibility of a ritual rather than a literal
storyline.
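As a toy sketch of this model (the data structure, "accepted" flags, and numbers below are my own illustrative assumptions, loosely based on the kurta example, not part of any formal treatment), a transcript can be represented as a sequence of (assertion, price) moves, with narrative inertia accumulating even while the price point stands still:

```python
# Toy representation of a bargaining transcript as a sequence of
# (speaker, qualitative assertion p, price point v, accepted?) moves.

moves = [
    ("seller", "this is fine needlework",     250, True),   # accepted, but factored out
    ("buyer",  "I am a local, not a tourist", 190, True),
    ("seller", "no profit at that price",     190, False),  # deal-breaker feint, rejected
    ("buyer",  "final offer",                 200, True),   # closure move
]

def narrative_inertia(moves):
    """Each mutually accepted assertion adds to the storyline that later
    statements must stay consistent with, even if the price never moves."""
    return sum(1 for _, _, _, accepted in moves if accepted)

def price_movement(moves):
    """Net movement of the price point over the same span."""
    return moves[-1][2] - moves[0][2]

print(narrative_inertia(moves))  # 3 accepted assertions
print(price_movement(moves))     # net price movement of -50
```

In this transcript the narrative accumulates three accepted assertions while the price point moves only 50 rupees: the "spinning of wheels" pattern, where the storyline advances even when the numbers do not.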
This is the real reason for the apparent spinning of wheels, where
the price point may not move for several iterations. In the Indian kurta
case, for example, I rejected the seller's assertion that 175 would
represent a loss, but acknowledged (and successfully factored out) the "this
is fine needlework" assertion. Though the price point wasn't moving, the
narrative was. At a more abstract level, a full narrative with characters and
plot may develop. This is also the reason why knowledge bluffs work:
even if the seller knows the buyer cannot have seen the same item for half
the price in another store, he cannot call out the bluff in an obvious way,
since that would challenge the (always positive) role in which the buyer is
cast.
The key conclusion from all this? The transaction moves to closure
when the emerging logic of the narrative becomes overwhelming, not
when price transparency has been achieved. To bargain successfully, you
must be able to control the pace and direction of the development of the
narrative. At a point of narrative critical mass, something snaps and either
a new narrative must displace the old one (rare), or there must be a
movement towards closure.
Becoming a Right-Brained Bargainer
So here is my magic solution: become good at storytelling-based
conversations.
Walk in, not with a full-fledged plan/story, but with a sense of what roles
you can comfortably fill (straight dealer? cynic? know-it-all?
innocent student without much to spend?).
As the conversation progresses, try to sense what roles the other
party is trying on for size, and suggest ones favorable to you
("Look, I try to buy only from local merchants, and you guys
are doing a great job for the economy of our town, but..."). Say
things that move towards a locked-in role on both sides that favors
you. In the example above, I got locked into the role of
knowledgeable local on disadvantageous terms.
Look out for the narrative logic as it develops. For example, I
successfully resisted an attempt to bring the "fine needlework"
assertion into play, which would have moved the story from "guy
looking for a cheap deal" to a connoisseur transaction and a
premium-value storyline.
There are critical/climactic points where you can move decisively
for closure; watch and grab. In my case, I thought I had one when
the seller offered the "not even 10 rupees" move, but the owner
cutting in for the kill and accepting was a clue that I could have
pushed lower.
Be aware of the symbolic/narrative significance of your numerical
moves. If the seller moves from 200 to 180, and you move from
100 to 120, the very symmetry of your move shows that you have
no information at all to use for leverage, and the transaction is
likely to proceed dully to a bisection unless you do something
creative. If the seller offers 500 and you say 250, that reveals that
you may be using a "start at half" heuristic, which might create an
opening for the seller to launch a storyline of "really, 500 is a fair
price, here's why." Offering 275 instead creates the right sort of
ambiguity. If you do want to drive towards a symmetric-bisection
storyline, make sure you pick an irrational starting point, but not
one so irrational that it reveals you know nothing about the price
(irrationally low opening offers can work, but you need a 201-level
bargaining course to learn why).
Now, this isn't easy. You have to become a storyteller. But I never
said I was going to offer an easy answer; merely a better one than a
misguided attempt to do real-time game-theoretic computations.
stage episodes doesn't depend on whether you head towards the more
obvious stages. It depends entirely on whether you have the mental
toughness to recognize and not shy away from big trigger moments. This
mental toughness is what allows you to say "damn the torpedoes, full
speed ahead." You accept the worst that can happen and step on stage
anyway. The exhilaration that can follow is not the exhilaration of having
impressed an audience. It is the exhilaration of having cheated death one
more time.
The allegory of the stage is the story of your life told around the
moments when you faced death, and charged ahead anyway.
But life is more than a series of step-on-stage/step-off vignettes; there
is a narrative logic to the whole thing. Each trigger moment prepares you
for larger trigger moments. Each time you shy away from a trigger
moment, you become weaker. There is a virtuous cycle of increasingly
difficult trigger moments, and if you can get through them all, you are
ready for the biggest trigger moment of all: the jump into eternal oblivion.
Everybody dies. Not everybody can make it an intentional act of stepping
onto a pitch-black stage.
There is also a vicious cycle of increasing existential stage fright. Do
that enough, and you will find yourself permanently in the darkness, life
having passed you by. As you might expect, the universe has a sense of
humor. You can only experience living to the fullest if you are able to
get through death-like trigger moments. Shy away from these death-like
moments, and your life will actually feel like living death.
Curiously though, in this allegory of the stage, it isn't other people
who are spectators of your life. Everybody is either on the stage or waiting
backstage for their moment. What's out there is the universe itself,
random, indifferent to your strutting. That's what separates teenagers from
adults: the realization that other people are not your audience.
Europe in antiquity, and the Islamization of the Middle East and North
Africa in medieval times, have been the only successful examples of that
dynamic.
But it still seems reasonable to expect that this process,
globalization, is destroying something and creating something equally
coherent in its place. It is reasonable to expect that there are coherent new
patterns of life emerging that deserve the label "globalized lifestyles," and
that large groups of people somewhere are living these lifestyles. It is
reasonable, in short, to expect some folkways of globalization.
Surprisingly, no candidate pattern really appears to satisfy the
definition of folkway.
With hindsight, this is not surprising. What is interesting about the list
of ways within a folkway is the sheer quantity of stuff that must be
defined, designed and matured into common use (in emergent ways of
course), in order to create a basic functioning society. Even when a
society is basically sitting there, doing nothing interesting (and by
interesting I mean living out epic collective journeys such as the
settlement of the West for America or the Meiji restoration in Japan) there
is a whole lot of activity going on.
The point here is that the activity within a folkway is not news, but
that doesn't mean nothing is happening. People are born, they grow up,
have lives, and die. All this background folkway activity frames and
contextualizes everything that happens in the foreground. The little and
big epics that we take note of, and turn into everything from personal
blogs to epic movies, are defined by their departure from, and return to,
the canvas of folkways.
That is why, despite the power of globalization, there is "no there
there," to borrow Gertrude Stein's phrase. There is no canvas on which to
paint the life stories of wannabe global citizens itching to assert a social
identity that transcends tired old categories such as nationality, ethnicity,
race and religion.
This wouldn't be a problem if these venerable old folkways were in
good shape. They are not. As Robert Putnam noted in Bowling Alone, old
folkways in America are eroding faster than the ice caps are melting.
Globalization itself, of course, is one of the causes. But it is not the only
one. Folkways, like individual lives and civilizations, undergo rise and fall
dynamics, and require periodic renewals. They have expiry dates.
Every traditional folkway today is an end-of-life social technology;
internal stresses and entropy, as much as external shocks, are causing them
to collapse. The erosion has perhaps progressed fastest in America, but is
happening everywhere. I am enough of a nihilist to enjoy the crash-and-burn
spectacle, but I am not enough of an anarchist to celebrate the lack of
candidates to fill the vacuum.
The Usual Suspects
We've described the social "before" of globalization. What does the
"after" look like? Presumably there already is (or will be) an after, and
globalization is not an endless, featureless journey of continuous
unstable change. That sounds like a dark sort of fun, but I suspect humans
are not actually capable of living in that sort of extreme flux. We seek the
security of stable patterns of life. So we should at some point be able to
point to something and proclaim, "there, that's a bit of globalized society."
I once met a 19-year-old second-generation Indian-American who,
clearly uneasy in his skin, claimed that he thought of himself as a "global
citizen." Is there any substance to such an identity?
How is this global citizen born? What are the distinguishing
peculiarities of his speech ways and marriage ways? What does he
eat for breakfast? What are his building ways? How does this creature
differ from his poor old frog-in-the-well national-identity ancestors? If
there were four dominant folkways that shaped America, how many
folkways are shaping the El Dorado landscape of globalization that he
claims to inhabit? One? Four? Twenty? Which of this set does our hero's
story conform to? Is the Obama folkway (for want of a better word) a
neo-American folkway or a global folkway?
These questions, and the difficulty of answering them, suggest that the
concept of a "global citizen" is currently a pretty vacuous one. Fischer's
The genetic analogy helps explain why both coverage (of the 23
categories) and a complex of interlocking parts are important. Even the
best a la carte lifestyle is a bit of a mule. In Korea for instance, or so I am
told, marriages are Western-style but other important life events draw from
traditional sources. Interesting, perhaps even useful, but not an
independent folkway species capable of perpetuating itself as a distinct
entity. That's because a la carte gives you coverage, but not complex
interlocking. On the other hand, biker gangs have complex interlocking
structures and even perpetuate themselves to some extent, but do not have
complete coverage. I've been watching some biker documentaries lately,
and it is interesting how their societies default back to the four-folkway
base for most of their needs, and only depart from it in some areas. They
really are subcultures, not cultures.
Latte Land
I don't know if there is even one coherent folkway of globalization,
let alone the dozen or so that I think will be necessary at a minimum
(some of you might in fact argue that we need thousands of micro-Balkan
folkways, but I don't think that is a stable situation). But I have my
theories and clues.
Here's one big clue. Remember Howard Dean and the "tax-hiking,
government-expanding, latte-drinking, sushi-eating, Volvo-driving, New
York Times-reading, body-piercing, Hollywood-loving, left-wing freak
show" culture?
Perhaps that's a folkway? It wouldn't be the first time a major
folkway derived its first definition from an external source. It sounds a la
carte at first sight, but there's some curious poetic resonance suggestive of
deeper patterns.
For a long time I was convinced that this was the case; that Blue
America could be extrapolated to a Blue World, and considered the
Promised Land of globalization, home to recognizable folkways. That it
might allow (say) the Bay Area, Israel, Taiwan and Bangalore to be tied
together into one latte-drinking entrepreneurial folkway, for instance. And
maybe via a similar logic, we could bind all areas connected, and socially
dominated by, Walmart supply chains into a different folkway. If Latte
Land is one conceptual continent that might one day host the folkways of
globalization, Walmartia would be another candidate.
I think there's something nascent brewing there, but clearly we're
talking seeds of folkways, not fully developed ones. There are tax-hiking,
latte-drinking types in Bangalore, but it is still primarily an Indian city,
just as the Bay Area, despite parts achieving an Asian majority, is still
recognizably and quintessentially American.
But there are interesting hints that suggest that even if Latte Land isn't
yet host to true globalized folkways, it is part of the geography that will
eventually be colonized by globalization. One big hint has to do with walls
and connections.
In the Age of Empires, the Chinese built the Great Wall to keep the
barbarians out, and a canal system to connect the empire. The Romans
built Hadrian's Wall across Britain to keep the barbarians out, and the
famed Roman roads to connect the insides.
Connections within, and walls around, are characteristic features of an
emerging social geography. Today the connections are fiber optic and
satellite hookups between buildings in Bangalore and the Bay Area. In
Bangalore, walled gated communities seal Latte Land off from the rest of
India, their boundaries constituting a fractal Great Wall. In California, if
you drive too far north or south of the Bay Area, the cultural change is
sudden and very dramatic. Head north and you hit hippie-pot land. Head
south and you hit a hangover from the 49ers (the Gold Rush guys, not the
sports team). In some parts of the middle, it is easier to find samosas than
burgers. Unlike in Bangalore, there are no physical walls, but there is still
a clear boundary. I don't know how the laptop farms of Taiwan are sealed
off, or the entrepreneurial digital parts of Israel from the parts fighting
messy 2,000-year-old civilizational wars, but I bet they are.
Within the walls people are more connected to each other
economically than to their host neighborhoods. Some financial shocks will
propagate far faster from Bangalore to San Jose than from San Jose to
(say) Merced. I know at least one couple whose "marriage way" involves
the longest geometrically possible long-distance relationship, a full 180
degrees of longitude apart, and maintained through frequent 17-hour flights.
Curiously, since both the insides and outsides of the new walls are
internally well-connected, though in different ways, the question of who
the barbarians are is not easy to answer. My tentative answer is that our
side of the wall is in fact the barbarian side. Our nascent folkways have
more in common with the folkways of pastoral nomads than settled
peoples. Unlike the ancient Chinese and Romans, we've built the walls to
seal the settled people in. I'll argue that point another day. Trailer: the key
is that "barbarians" in history haven't actually been any more "barbaric"
than settled peoples, and the ages of their dominance haven't actually been
"dark ages." We may well be headed for a "digital dark age" driven by
digital nomad-barbarians.
Our missing folkways, I think, are going to start showing up in Latte
Land in the next 20 years. Also in Walmartia and other emerging
globalization continents, but I don't know as much about those.
In the meantime, I am curious if any of you have candidate folkways.
Remember, it has to cover the 23 categories in complex and
interconnected ways, and there should be a recognizable elite whose
discourses are shaping it (the folkway itself can't be limited to the elite
though: the elite have always had their own globalized jet-setting
folkways; we are talking firmly middle class here). How many folkways
do you think will emerge? 0, 1, 10 or 1000? Where? How many
conceptual continents?
Random side note: This post has officially set a record for longest
gestation period. I started this essay in 2004, two years before I started
blogging. It's kinda been a holding area for a lot of globalization ideas,
about 20% of which made it into this post. I finally decided to flush it out
and evolve the thread in public view rather than continue it as a working
(very hard-working) paper.
Random side note #2: There are lots of books that are so thick, dense
and chock-full of fantastic ideas that I could never hope to review or
summarize them. In a way, this post is an alternative sort of book
review, based on plucking one really good idea from a big book. Fischer's
book is a worthwhile reading project if you are ready for some intellectual
heavy lifting.
On Going Feral
August 19, 2009
Yesterday, a colleague looked at me and deadpanned, "aren't you
supposed to have a long beard?" When you remote-work for an extended
period (it's been six months since my last visit to the mother ship), you
can expect to hear your share of jokes and odd remarks when you do show
up. Once you become a true cloudworker, a ghost in the corporate
machine who only exists as a tinny voice on conference calls, perceptions
change. So when you do show up, you find that people react to you with
some confusion. You're not a visitor or guest, but you don't seem to truly
belong either.
I hadn't planned on such a long period without visits to the home
base, but the recession and a travel freeze got in the way of my regular
monthly visits for a while. The anomalous situation created an accidental
social-psychological experiment with me as guinea pig. What's the
difference between six months and one month, you might ask? Everything.
Monthly visits keep you domesticated. Six months is long enough to make
you go feral. I've gone feral.
one or the other is such a useful exercise. You develop a more focused
self-awareness about who you really are.
Our language is full of dog and cat references. Dogs populate our
understanding of social dynamics: conflict, competition, dominance,
slavery, mastery, belonging and otherness:
believe, arises from the cat's clear indifference to our assumptions about
our own species-superiority and intra-species status.
That point is clearly illustrated in the pair of opposites "he looks at his
boss with dog-like devotion" / "a cat may look at a king." The latter is my
favorite cat-proverb. It gets to the heart of what is special about the cat as
an archetype: being not oblivious, but indifferent to ascriptive authority
and social status. You can wear fancy robes and a crown and be declared
King by all the dogs, but a cat will still look quizzically at you, trying to
assess whether the intrinsic you, as opposed to the socially situated,
extrinsic you, is interesting. Like the child, the cat sees through the
Emperor's lack of clothes.
Our ability to impress and intimidate is mostly inherited from
ascriptive social status rather than actual competence or power. Cats call
our bluff, and scare us psychologically. Dogs validate what cats ignore.
But it is this very act of validating the unreal that actually creates an
economy of dog-power, expressed outside the dog society as the power of
collective, coordinated action. Dogs create society by believing it exists.
In the Canine-Feline Mirror
We map ourselves to these two species by picking out, exaggerating
and idealizing certain real cat and dog behaviors. In the process, we
reveal more about ourselves than either cats or dogs. "Cats are loyal to
places, dogs to people" is an observation that is more true of people than
either dogs or cats. Just substitute interest in the limited human sphere (the
globalized world of gossipy, politicky, watercoolerized, historicized and
CNNized human society; feebly ennobled as humanism) versus the
entire universe (physical reality, quarks, ketchup, ideas, garbage, container
ships, art, history, humans-drawn-to-scale). There are plenty of such
dichotomous observations. A particularly perceptive one is this: dog-people
think dogs are smarter than cats because they learn to obey
commands and do tricks; cat-people think cats are smarter for the exact
same reason. Substitute interest in degrees, medals, awards, brands and
titles versus interest in snowflakes and Saturn's rings. I don't mean to be
derisive here: medals and titles are only unreal to cats. Remember, dogs
make them real by believing they are real. They lend substance to the
ephemeral through belief.
Cat-people, incidentally, can develop a pragmatic understanding of
the value of dog-society things even if deep down they are puzzled by
them. You can get that degree and title while being ironic about it. Of
course, if you never break out and go cat-like at some point, you will be a
de facto dog (check out the hilarious Onion piece a commenter on this
blog pointed out a while back: "Why can't anyone tell I am wearing this
suit ironically?").
But let's get to the most interesting thing about cats, an observation
that led to the title of this article. My copy of The Encyclopedia of the
Cat says:
It is not entirely frivolous to suggest that whereas pet
dogs tend to regard themselves as humans and part of the
human pack, the owner being the pack leader, cats regard
the humans in the household as other cats. In many ways
they behave towards people as they would towards other
kittens in the nest, grooming them, snuggling up with
them, and communicating with them in the ways that they
would use with other cats.
There is in fact an evolutionary theory that while humans deliberately
domesticated wild dogs, cats self-domesticated by figuring out that
hanging around humans led to safety and plenty.
I want to point out one implication of these two observations: cats
arent unsociable. They just use lazy mental models for the species-society
they find themselves in: projecting themselves onto every other being they
relate to, rather than obsessing over distinctions. They only devote as
much brain power to social thinking as is necessary to get what they want.
The rest of their attention is free to look, with characteristic curiosity, at
the rest of the universe.
To summarize, dog identities are largely socially constructed, in-species
(actual or adopted, which is why the reverse-pet "raised by
wolves" sort of story makes sense). Cat identities are universe-constructed.
Which brings us to a quote from Kant (I think).
Personal History, Identity and Perception
It was Kant, I believe, who said, "we see not what is, but who we are."
We don't start out this way, but as our world-views form by accretion,
each new layer is constructed out of new perceptions filtered and distorted
by existing layers. As we mature, we get to the state Kant describes, where
identity overwhelms perception altogether, and everything we see
reinforces the inertia of who we are, sometimes leading to complete
philosophical blindness. Neither cats nor dogs can resist this inevitability,
this brain-entropy, but our personalities drive us to seek different kinds of
perceptions to fuel our identity-construction.
Dogs, and dog-like people end up with socially-constructed, largely
extrinsic identities because that's what they pay attention to as they
mature: other individuals. People to be like, people to avoid being like. It
is at once a homogenizing and stratifying kind of focus; it creates out of
self-fulfilling beliefs an identity mountain capped by Ken and Barbie
dolls, with foothills populated by hopeless, upward-gazing peripheral
Others, who must either continue the climb or mutiny.
Cats and cat-like people, though, simply aren't autocentric/species-centric
(anthropomorphic, canino-morphic and felino-morphic).
Wherever they are on the identity mountain believed into existence by
dogs, they are looking outwards, not at the mountain itself. They are
driven to look at everything from quarks to black holes. In this broad
engagement of reality, there isn't a whole lot of room for detailed mental
models of just one species. In fact, the ideal cat uses exactly one space-saving
mental (and, to dogs, wrong) model: "everyone is basically kinda
like me." Appropriate, considering we are one species on one insignificant
speck of dust circling an average star in a humdrum galaxy. The
Hitchhiker's Guide to the Galaxy, remember, has a two-word entry for
Earth: "Mostly Harmless." This indiscriminate, non-autocentric curiosity is
dangerous though: curiosity does kill the cat. Often, it is dogs that do the
killing. We may be mostly harmless to Vogons and Zaphod Beeblebrox,
but not to ourselves.
attempt it in public, they are stricken by social anxiety. They seem to fear
that the slow, solitary, and obviously purposeless amble that marks taking
a walk signals social incompetence or a life unacceptably adrift. If a
shopping bag, gym bag, friend or dog cannot be manufactured, nominal
non-idleness must be signaled through an ostentatious "I have friends"
phone call, or email-checking. If all else fails, hands must be placed
defiantly in pockets, to signal a brazen challenge to anyone who dares
look askance at you: "Yeah, I'm takin' a walk! You got a problem with
that?"
In America, visible idleness is a luxury for the homeless, the
delinquent and immigrants. The defiantly tautological protest, "I have a
life," is quintessentially American. The American life does not exist until it
is filled up.
Even a pause at a bench must be justified by a worthwhile view or a
chilled drink.
Worthwhile. Now, there's an American word. Worth-while. Worth-your-while.
The time value of money. Someone recently remarked that the
iPad has lowered the cost of waiting. Americans everywhere heaved a sigh
of relief, as their collective social anxiety dipped slightly. The rest of the
world groaned just a little bit.
The one American I remember seeing taking a walk was Tom Hales,
then a professor at the University of Michigan. He was teaching the
differential geometry course I was auditing that semester. One dark,
solitary Friday, while the rest of America was desperately trying to
demonstrate to itself that it had a life, I was taking a walk in an empty,
desolate part of the campus. I saw Hales taking a walk on the other side of
the street. He did not look like he was pondering Deep Matters. He merely
looked like he was taking a walk.
That year he proved the Kepler conjecture, a famous unsolved
problem dating back to 1611. A beautifully pointless problem about how
to stack balls. I like to think that Kepler must have enjoyed taking walks
too.
widespread diseased state. Only the rare prince or brave runaway could
experience an individualistic lifestyle.
If the latter is true, individualism is something like an occasional
solitude-seeking impulse that has been turned into a persistent chronic
condition by modern environments. That would make individualism the
psychological equivalent of chronic physiological stress.
According to Robert Sapolsky's excellent book Why Zebras Don't
Get Ulcers, chronic stress is the diseased state that results when natural
and healthy acute stress responses (the kind we use to run away from
lions) get turned on and never turned off. This is more than an analogy.
If individualism is a disease, it probably works by increasing chronic
stress levels.
The interesting thing about this question is that the answer will seem
like a no-brainer to you depending on your personality. To someone like
me, there is no question at all that individualism is natural and healthy. To
someone capable of forming very strong attachments, it seems equally
obvious that individualism is a disease.
The data apparently supports the latter view, since happiness and
longevity are correlated with relationships, as is physical health. Radical
individualism is physically stressful and shortens lifespans. I bet if you
looked at the data, you'd find that individualists do get ulcers more
frequently than collectivists.
But to conclude from this data that individualism is a disease is to
reduce the essence of being human to a sort of mindlessly sociable
existence within a warm cocoon called home. If individualism is a disease,
then the exploratory and restless human brain that seeks to wander alone
for the hell of it is a sort of tumor.
Our brains, with their capacity for open-ended change, and restless
seeking of change and novelty (including specifically social change and
novelty), make the question non-trivial. We can potentially reprogram
ourselves in ways that muddy the distinctions between "natural" and
"diseased" behaviors.
The relationship between individualism and introversion/extroversion
Developing the idea of utilitarian homes as design patterns that
can be compiled anywhere
What does the Freudian idea of superego map to in this model?
A more satisfactory account of the evolution of "psychological
home."
The interesting thing about thinking about "home" in this digital sense
is that "running away from home" is no longer about physical movement
between unique social-physical environments (though that can play a
part). If your sense of home is a pattern that you can instantiate anywhere
the environment supports it, you cannot actually run away from it. But you
can throw it away and make up or borrow a new design pattern.
I'll write more about that at some point.
This post was partly inspired by discussions with reader MFH.
blogger, I write mostly about business," they protest: "wait, that's not
really it... your blog isn't really about business, and you do more than
blogging."
Curiously, while long-time readers at least subconsciously realize that
"blogger" doesn't quite cover it, people who nominally know me far
better, but don't read my blog (such as old high school friends), often don't
even get that there is something to get, since their substantial memories of
me from long ago distract them from the current reality that blogger (at
least at my level) is too insubstantial a label to account for an average
human life. It is a non-job, like the other non-job title I sometimes claim,
"independent consultant." Both are usually taken as euphemisms for
"unemployed." For the legible, the choice is between gainful employment
and lossy unemployment. For the illegible, the choice is between gainful
unemployment and lossy employment.
Nomadism is the sine qua non of this general phenomenon of
individual illegibility. The homeless, the destitute and seasonal migrant
workers bum around. Billionaires with yachts and private jets bum around
in a rather more luxurious way through each other's mansions. Regular
middle-class people generally stay put; nomadism hasn't been an option
until recently. This little piggy stayed at home.
***
"Nomad" is a concept that rooted-living people think they understand
but don't. I know this because I myself thought I understood it, but
realized I didn't once I'd actually tried it for a few weeks.
I used to think of nomadism as a functional and pragmatically
necessary behavior, related to things like having to follow the migratory
paths of herd animals in the case of pastoral nomads. Or having to work at
client sites, in the case of road-warrior consultant types. Or even having to
travel the world in order to satisfy an eat-pray-love urge.
Now I've come to realize that's not really it. When voluntarily
chosen, nomadism is not a profession, lifestyle, or restless spiritual quest.
It is a stable and restful state of mind where constant movement is simply
a default chosen behavior that frames everything else. True nomads decide
they like stable movement better than rootedness, and then decide to fill
their lives with activities that go well with movement. How you are
moving matters a lot more than where you are, were, or will be. Why you
are moving is an ill-posed question.
This is not really as strangely backwards as it might seem. Rooted
people often decide to relocate somewhere based on a general sense of
opportunities and lifestyle possibilities, and then figure out how they'll
live their lives there. Smart rooted people usually target regions first, jobs,
activities and relationships second. Nomads pick a pattern of movement
first, and then figure out the possibilities of that pattern later. While I
haven't found a sustainable pattern yet, I've experienced several
unsustainable ones.
Moving in a slow and solitary way through cheap hotels helps me
write better and reflect more deeply.
Moving slightly faster through people's couches slows down my
writing (as my recent posts show), but helps me experience relationships
in brief, poignant ways.
Moving through a corporate social geography (in the past week, I've
sampled three Bay Area company buffets) helps me understand the world
of work.
Shuttling around on a lot of long-distance flights helps me get through
piles of reading.
House-sitting helps me understand others' lives in a role-playing
sense.
So I've changed my perspective. I am not on the road to promote the
book. I am promoting the book because I am on the road. The activity fits
the pattern of movement. The pattern itself is too fertile to be merely a
means to a single end. Nomadism is not an instrumental behavior. It is a
foundational behavior like rootedness, the uncaused cause of other things.
Book promotion is simply one of the many activities that benefits from
constant movement, just like growing a garden is one that benefits from
staying in the same place.
***
All this is very complex to convey, so I don't use the "nomad" answer.
But on the other hand, I also don't like getting dragged into long-winded
explanations. So if people insist on a substantial answer, I just say "Well, I
am promoting my new book, meeting blog readers and consulting clients."
That instrumental description satisfies people. But it annoys me that I have
to basically mislead because the language of rootedness lacks the right
words to explain behaviors that arise from nomadism.
The follow-up question is also predictable: "where are you from?"
When I was a much more rooted person, this question was always a
politically correct way of asking about my ethnicity and nationality;
people wanting to plot me on the globe with as much accuracy as their
knowledge of world geography allows. But as a nomad, the question is
always about my current base of operations. Movement makes you
unplottable, which apparently provokes more social anxiety among the
rooted than unclear ethnicity or nationality. People want to tag you with
current, physical x, y coordinates before probing other dimensions of your
social identity. This conversation also tends to be bizarre:
"Where are you from?"
"Vegas."
"Vegas? (look of puzzlement) Why Vegas?"
"It's cheap."
areas like logistics and commodities. Vegas doesn't have a clear raison
d'être on the rooted-living map (except perhaps as a retirement location).
You travel there for a bit of hedonism; you don't live there. For nomads on
the other hand, Vegas does have a very clear raison d'être. It is a great city
to pass through (not so great to grow roots in).
I've taken to making a weak joke: "Vegas is like the miscellaneous
file; you meet a lot of random people there." I was initially having fun
watching them, but then I realized I am one of them.
At this point, if I am in the mood, I explain that we are subletting and
house-sitting my in-laws' house for cheap while they summer in
Michigan, and that our stuff is in storage. That we originally meant to
make Vegas a temporary, low-cost and geographically strategic base while
we figured out where to go next, but that my wife has now found a job
there, so well be staying on indefinitely after the summer. The variables
that made us pick Vegas are classic nomad variables: cost, seasonal
considerations, and strategic positioning for further movement.
I have been nomadic since May 1, almost three months now. I've
spent six of those weeks living out of a car, and another five living out of a
temporary, borrowed home out of a couple of suitcases and boxes (this has
been like playing house; my first experience living in a single family home
with all the accouterments of American suburban life).
***
In the past three months, my understanding of the nomadic state has
been slowly but radically altered. The best way I can explain what I've
learned is to offer this comparison: nomadism has almost nothing to do
with the rooted-living behavior it nominally resembles, travel.
The modern world is organized around rooted living, with travel as its
subservient companion concept. Travel is unstable movement away from
home with a purpose, even if the purpose is something ambiguous like
exploration or self-discovery. It is always a loop from home to home, or a
move from old home to new home. For the rooted living person, travel is
a story. A disturbed equilibrium that requires explanation and eventual
correction, resulting in a return to equilibrium. A small handful of stories
moving? For the nomad, a period of rootedness is unstable, like travel for
the rooted. It is a disturbed equilibrium that requires an explanation of
the non-movement, and an eventual resumption of movement. The
associated stories can range from a car breakdown, to
insufficient funds to fuel the next phase of movement, to unexpected
weather conditions. Once upon a time, a guy who lived out of a car was
heading south for the winter. His car broke down in Kansas City, and he
was stuck there for a week. Fortunately he was able to find a place to
couchsurf, get it repaired and move on.
***
In a way, nomadism is a more basic instinct for humans. Rootedness
is natural for trees. Legs demand movement. The movement is the cause,
not the effect. Just as the mantra for rootedness is "location, location,
location," for nomadism it is "movement, movement, movement." When
humans grow roots, strange new adaptations appear to accommodate
restless brains.
If I have romanticized nomadism it is because nomadism is a
fundamentally romantic state of being. If you can sustain it, it is somehow
fulfilling without any further need for achievement or accomplishment.
The pursuit of success is, for the rooted, the price they must pay for
immobilizing themselves geographically. The reward is something
equivalent to the state of stable movement that is, for the nomad, a natural
state of affairs.
Success itself in a way is very much a notion for the rooted; it is the
establishment of some sort of stable self-propelled movement pattern
through some sort of achievement space: up a career ladder; down a rabbit
hole of skilled specialization; sideways through a series of stimulating
project experiences. When there is no true north, no physical landmarks
growing smaller behind you, and no fresh sights constantly appearing over
the horizon, you need abstract markers of movement: degrees, money, a
sequence of more expensive cars, a series of increasingly successful
books, a growing readership for a blog, increasingly prestigious speaking
gigs.
When you bind naturally restless feet, the minds that have evolved to
animate them seek movement elsewhere.
I misunderstood the psychology of travel badly when I was younger.
About 12 years ago, when I was 24, I went backpacking for three weeks in
Europe. After that, somehow I lost my wanderlust. I explained my
reluctance to travel to myself, and to others, with the lofty line, "I've kinda
tired myself of exploring the geographic dimensions of experience; I am
now exploring more conceptual directions."
Bullshit. Geography is just too fundamental to our psychology. If we
aren't moving, it is because there is too much friction and cost. Wanderlust
never goes away. It merely becomes too costly to sustain as you age.
Recently, when I traded my Indian passport for an American one (which
allows me to travel far more freely, without the annoyances of the Great
Wall of Visas that is designed to keep the developed world from getting
too footloose), the old itch to travel instantly reappeared. So much for my
pretentious "other dimensions of experience." It was mere paperwork
friction that was holding me back. But sadly, while one source of friction
has disappeared, others have grown. In my late 30s now, the fact of my
wife's non-portable job and the complexities of moving our two cats
across national borders are what keep us from simply embarking on some
extended nomadism around the world. But at least we don't have a
mortgage and school-going kids.
***
Scott's notion of illegibility was originally inspired by the nomadic
state and its incomprehensibility to the governance apparatus of settled
cultures. To the stationary eye of the state, a moving person is a blur rather
than a sharply-defined identity; it is harder to tax, conscript, charge with
crimes or even reward nomads. To the stationary eye of the corporation,
the nomad appears harder to hire, manage or pay.
The blurriness extends to other aspects of rooted life. Ownership and
community life change from being stock concepts (defined by things you
accumulate) to flow concepts (defined by things you pass through and that
pass through you). Identity starts to anchor to what you are doing rather
than who you are. Social life acquires, due to its permanently transient
nature, a certain poignancy that it lacks in rooted contexts. Even routine
errands like grocery shopping and doing the laundry become minor
adventures that require your full attention and engagement.
Everyday rituals acquire a monastic depth. The difference between
nomadism and travel even shows up in how you pack. Packing a suitcase
for extended travel is very different from packing for a period of
nomadism. In the first case, you pack for compactness and unpack at your
destination. It is an exercise in efficiency. In the second case, you pack for
daily in-out access in a changing context. You have to think harder about
what you are doing. You need constant mindful repacking, rather than
efficient one-time packing.
Even the most basic, unexamined rituals change. For instance, I stay
so often with people who don't drink coffee that I've taken to carrying a
small bottle of instant coffee with me. But it's a different kitchen every
few days.
Nomadism is, in a way, the most accessible pattern of mindful living.
***
The romanticism aside, true permanent nomadism is not really an
option today. This particular romantic episode will end around October,
and I will be rooted once more. All the neuroses of the rooted will come
flooding back. I will once more start to worry about my next book and my
next hit blog post.
The direct costs of living aren't actually very different for nomads and
settled people. It is the indirect costs that kill you. If it weren't for the
burden of an address-and-nationality anchored paperwork identity and the
tyranny of 12-month leases and 30-year mortgages, nomadic living would
be no more difficult than static living at the same income level. Newton's
law applies approximately: a human in a state of rest or steady motion
continues in that state unless an external force acts to change it. A nomad
is a human in a state of steady motion. Not in a Newtonian sense, but in a
cognitive sense. Once you've settled into a particular pattern of living out
of a car, you are in a steady state that has inertia.
The good news for us romantic landlubbers is that despite steel hulls,
GPS and diesel engines, the oceans remain untamed. The bad news is that
despite steel hulls, GPS and diesel engines, the oceans remain untamed. As
Katrina reminded us, the oceans can still take a casually violent swipe at
us and wreak havoc. The reliability of modern shipping does not imply
that we have domesticated the oceans. The big and believable suggestion
in the book is that we never will.
Langewiesche's is a near-flawless modern, global voice. I bought the
book because I was enthralled by an extract in The Atlantic a few years
ago. The book tells the stories of a bewildering cast of characters: Eastern
European captains, Pakistani crews, Malaysian pirates, Indian
shipbreaking yards, bleeding-heart European Greenpeace activists, and
Alaskan oil-spill investigators. In less competent hands, this could have
ended up as a sea-cowboy story for overgrown boys (think Deadliest
Catch), a self-absorbed tale of human-scale tragedies (think Perfect
Storm), an overwrought tale of environmentalism (think Whale Wars) or a
random leftist screed about the exploitation of third world humans by
Western mega-corporations.
Fortunately Langewiesche avoids all those temptations. With precise
strokes, he first humanizes, and then dehumanizes, both first and third
world nations and peoples, gently getting you to focus on the grandeur of
the oceans themselves. Whether he is forcing you to vicariously
experience the chilling horror of being in a sinking ferry (the Estonia) in a
violent Baltic storm, or presenting the farcical aftermath of the tragedy
within the byzantine world of European maritime politics, he brings a sort
of ironic compassion to every story.
The raw material is almost too rich for a single book. There are oil
spills and shipwrecks, the chaos of international flags of convenience
and tales of tradeoffs between avoiding expensive delays and foolhardy
storm-defying navigation. There are pirates haunting the Straits of
Malacca, terrorists and dirty bombs hiding in containers, and desperate
navies and coast-guards trying hopelessly to catch them all. Above it all
looms a single theme: the cluelessness of us landlubbers about the
medieval anarchy that your Chinese-made iPod navigates, in the process
of getting to you somewhere else on the planet. The people dealing with
the oceans come across as the last true frontier folk, the last adults
protecting the rest of us children from a universe that is far wilder than we
think.
Though it is about modern shipping, the whole book has a timeless
quality to it. You could be reading The Odyssey, the tales of Sinbad the
Sailor or Treasure Island. A particularly eerie bit of timelessness is in the
briefly-sketched story of the trial and execution, in China, of the pirates
who hijacked the Cheung Son and murdered its crew in 1998:
On the way to the execution ground, a group of them,
who were drunk on rice wine, defiantly sang "Go, go, go!
Ale, ale, ale!", the chorus from a pop song called "Cup of
Life."
No wonder Eric Cartman went off to Somalia to become a modern-day
pirate. My own fascination with the sea began when my dad
introduced me to Treasure Island. Yo ho ho and a bottle of rum. Stevenson
wrote that book in 1883. It wasn't until after I turned thirty, though, that I
managed to experience the ocean first-hand, on a cruise to the Caribbean.
It did not disappoint; the oceans lived up to all my romantic expectations,
and even the crassness of cruise-ship buffets could not ruin it for me.
There is nothing quite like being on the deck of a ship in the open ocean,
out of sight of land.
Blue Planet
A series of stories of tragedies at sea forms the backbone narrative.
The book opens with the story of a rusty tanker, on its last legs, the
Kristal, making its way from India to Europe with a load of molasses, with
a Ukrainian captain and a Spanish-Pakistani crew. The Kristal broke in half
in stormy seas and killed most of its crew, and this opening anecdote
serves to shatter your notions of the ocean as a benign place. The book
then moves on to the Exxon Valdez and other tales of oil spills, and finally
to a detailed telling of the story of the sinking of the passenger ferry,
Estonia. There are other vignettes scattered throughout.
There are plenty of other themes, but I'll highlight just two more,
piracy and shipbreaking, since they highlight the limits of the idea of the
nation state, and provide an unusual perspective on globalization.
Nation and Ocean
The piracy and ship-breaking stories in the book both involve India,
which was particularly illuminating for me, since I have never thought
about my identity as an Indian citizen being derived from my more basic
identity as a land-based primate. Barring the doings of one 11th century
emperor, India itself has very little of note in its maritime history,
compared to say, the European nations or Japan. Despite its 7000 km
coastline, Indias national self-perception is primarily a land-based and
isolationist one. So the view from the oceans, which connect the world
physically, is rather unsettling.
Like the legal business of shipping, the structure of modern piracy too
is the outcome of the confused stateless anarchy of the seas (unlike the
older epoch of Caribbean piracy, much of which was state-sponsored).
The Straits of Malacca are where much of the action takes place (not
Somalia, as most Americans imagine). What makes piracy in this region so
surprising is that it is a very narrow, massively busy seaway that would
seem like the most civilized part of the oceans. Over 50,000 vessels pass
through every year, through the 2.8-kilometer-wide chokepoint near
Singapore. All around are the industrialized and heavily populated
shipping-dependent countries of South East Asia. This is as close as you
can get to oceanic bumper-to-bumper highway traffic. Yet, pirates
routinely vanish with entire ships, with millions of dollars worth of cargo.
The big piracy story in the book involves the Alondra Rainbow (the
picture at the top of this article), which was hijacked in a carefully planned
and coordinated attack by a group of Malaysian and Indonesian pirates in
1999, while carrying a cargo of aluminum ingots worth around $10
million. The ship vanished and the Filipino crew, along with their Japanese
captain, were cast adrift in the Indian ocean (they were rescued). The ship
managed to transfer half of its booty to another ship, and then apparently
got rechristened the Global Venture before fleeing across the Indian ocean,
eluding searchers. Most such stories apparently end there, with a vanished
ghost ship, but in this case the story had a non-ghostly ending. It was
spotted, sailing under the name Mega Rama, by the captain of a Kuwaiti
freighter, the al-Shuhadaa, who alerted the nearest country, which
happened to be India. The Indian coast guard patrol boat Tarabai
responded and chased the ship down, and with the help of a Navy missile
corvette, the Prahar, finally managed to arrest it as it was attempting to
flee into Pakistani waters.
The Indian Navy and coast guard apparently had a good deal of fun
with the exercise, and were rather proud of having actually caught a
pirated vessel for once, and enjoyed quite a bit of media attention as they
shepherded the stolen ship into Mumbai harbor. The Mumbai courts and
police, however, were decidedly less happy about having a high-profile
international piracy case being dropped into their already overburdened
laps.
What followed was a piece of international silliness, as a country with
no stake in the ship, crew, pirates or victims, ended up having to use
taxpayer money to prosecute a complex precedent-setting piracy case. The
case worked its way slowly through the Indian courts as the world figured
out how to apply nation-state level laws to a crime that obviously
transcended the very concept of a nation. Langewiesche reports a
particularly revealing conversation with a Mumbai police officer, about
why they were reluctant to accept the captured ship:
"What would happen," he asked, "if India convicted and
imprisoned them, but after their release Indonesia refused
to accept them?" "What did you conclude?" I asked.
"That they would become stateless people." Then the
problem for India, he said, would be where to send them. I
suggested that they could be repatriated to their natural
environment at sea. He smiled wanly.
The leading maritime attorney in India prosecuted the case pro bono,
and easily outmaneuvered the poor public defender assigned to the pirates
by the court. The pirates were found guilty, and imprisoned. They were
mostly the underlings, not the kingpins, and some seemed to have no idea
they'd been recruited into a piracy plot by a manning agent. The real
culprits remained mysterious citizens of the oceans.
If this story puts the nations involved in the background and the ocean
itself into the foreground, the next story, also involving India, is even
weirder, and involves all the oceans of the world.
The center of the action here is Alang, the coastal city in Gujarat
which is home to nearly half the shipbreaking trade in the world. India,
Pakistan and Bangladesh among them handle nearly the entire
international trade of scrapping old ships for steel, a dangerous business
involving explosions, toxic chemicals and awful conditions. The trade
ended up in the region over the course of half a century, as both the labor
costs and safety issues made it politically impossible to conduct in other
parts of the world.
thousands living off it: a stateless anarchy (we are not yet at the stage
where anyone can claim to be a "global citizen," a phrase I detest for its
vacuousness). My next thought was that this is a self-serving view. The
Internet is nothing like the oceans.
Between the Nation-State and the Globe
As it happens, some of the other reading I am doing right now deals
with rarefied subjects, far removed from messy things like ship-breaking,
like the rise of global financial integration through bond markets, the
history of the first true multi-national corporation, the British East India
company and yes, undersea Internet cables. Within all these tales,
spanning several centuries, there is a constant subtext of assumptions
about the oceans.
The Outlaw Sea precisely nails the big point about oceans: they are
the physical manifestation of the stuff between the global system of
nation-states and the abstraction of the globalized world, which really
only exists on the Internet today. But we forget that the transnational
anarchy that is the Internet could be rapidly and comprehensively
fragmented and shoehorned into nation-state boundaries by the flipping of
a few key router switches, and the reconfiguring of a handful of satellites.
The ocean though is not, never has been, and (it seems) never can be
subsumed within the nation-state system. It will always form a gray zone
of anarchy sandwiched between global and national contexts. Despite its
grim implications, in an odd way it is an uplifting thought that the oceans
will never be within our control. Looking back, I think I realized this
point, and grew fascinated by it, very early. I have always been fascinated
by maps, but as a schoolkid, one set in particular captivated me.
This was a series of maps included with special issues of the National
Geographic, that presented the world with the oceans in the foreground.
There were maps for each of the major oceans, with finely detailed
depictions of mid-ocean ridges, mountain ranges, volcanoes and currents.
The oceanic areas of the maps were a riot of blues. Landmasses on those
maps were shown in background-white, with barely any annotation. This,
I thought, is a better way of looking at Planet Earth.
of migration that flowed for a few brief years during the 1950s, when the
Dalai Lama fled Tibet and landed in India.
Along this route, the Israelis get into fights with the locals, run an
underground drug culture and in general recover from their PTSD in the
messy ways you might expect. The modern Israeli stream runs along
roughly the same course that, decades ago, played host to the hippies on
journeys of self-discovery from Goa to Kathmandu. Ecstasy has replaced
LSD, and the culture is a darker, cyberpunk echo of the naive spirituality
that marked the questing of the swami-seeking hippies.
Today, the stream is shifting course towards Thailand, as I noted
earlier. The Indian branch may dry up, or slow to a trickle. I suspect a
branch of the stream continues, after a return to Israel, to America, via
high-tech startups founded by friends who perhaps were blooded in combat
together, or met in India or Thailand.
Curiously, even though the Israeli stream runs right through Bombay,
where I lived for years, I had no idea it existed while I was there.
I learned the story partly from an Israeli anthropologist (from whom I
borrowed the term "liminal passage," which I used in Tempo) and partly
from a Romanian-born Australian, herself an expat in Bali, married to a
Dutch expat (Indonesia was once a Dutch colony). The two of them run
canoeing tours on Lake Batur for tourists. Wed gotten started on the
subject of nomadic expat cultures after Id asked, rather innocently, if the
success of Eat, Pray, Love had had an impact on Bali tourism. "Oh my
God!" my guide exploded. "All these annoying American women in their
30s landing here and expecting to find their Argentinian Man!"
Eat, Pray, Love might well be the motif of a new emerging stream,
involving older single Western women. It is probably a gyre rather than a
one-way stream, originating in, and returning to, an American home base.
I personally am a product of a one-way migration pattern that matured
into a full-blown stream-and-gyre just around the time I joined it. Post
9/11 and Y2K, as the US economy began slowing down, and the Indian
economy began to heat up, increasing numbers of Indians began choosing
to inhabit a vague loop between the two countries instead of settling down
in one, trying to have their cake and eat it too the economic
opportunities of India and the lifestyle of the US. The first observers of
this loop tended to classify them as "global citizens," but I find the term to
be pretty non-descriptive of what is actually happening.
The Tibetan community and the India-US stream-gyre are well-known.
The Israeli PTSD Stream is less well-known. The Eat, Pray, Love
gyre is just starting to mature.
Around the globe, streams slosh about, run into each other, branch,
loop, and in general carve out new cultural landscapes within a
hydrologically active layer that exists above earlier landscapes.
This is a complicated view of cultural geography. But I bet it could be
properly represented on a map. As I said, the number of important streams
cannot be more than a few hundred, about comparable to the number of
nation states or significant multinational corporations.
Globalization as Liquefaction
This post is really about my dissatisfaction with the static units of
analysis for globalization. We are reluctant to embrace more fluid units
like streams because they seem so small in terms of population sizes. It
seems wrong to basically ignore the 90% of the world who are never
going to venture beyond the borders they were born within.
Yet, I find that it is far easier to understand globalization as a system
of such human flows, than it is to understand it in terms of nations, states
and multi-national corporations. It is the actions of the 0.3% that will
ultimately drive the fates of the 90%. The cultures that play host to
streams are starting to see their evolution being driven by the very act of
hosting streams. There are entire regions in the Indian state of Kerala for
instance, whose culture can only be explained with reference to the gyre
that transports Keralites back and forth from the Middle East.
The word "globalization" itself is a clue.
Part 4:
The Mysteries of Money
If you think of markets that way, things look very different. Some
rivers of money are very old and very stable. You can at most fight to
displace others from prime positions along the banks. Others are new and
unstable and may change course frequently, creating and destroying
fortunes through their vagaries. Others may be maturing, with dams being
built to stabilize them. People have always bought food and clothes. They
are only now beginning to buy iPads. They are starting to not buy CDs.
Generalizing, you can even think of an "average age" of the market as
a whole. An interesting question to ask is whether early adopters as a
group should be considered as living in a future market, or whether the
mainstream should be thought of as living in the past. I prefer the latter
model.
Organizations are like riverbank communities. They are as old as the
last significant course change or waterfront battle. The stability of the
river, not the attitudes of people, is what makes old organizations seem set
in their ways. Perhaps people resist new ideas not because they have
specific personalities, but because they have settled on the banks of a river
of money of a certain age. Or perhaps there is self-selection. Possibly the
hidebound kinds go settle on the banks of the most ancient rivers. Tax
rivers are among the oldest and most stable rivers of money (and the only
ones protected by the threat of legitimate force), and people attracted to
government work aren't exactly known for being passionate champions of
creative destruction.
Some startups are about finding and colonizing the banks of minor
unknown tributaries of old rivers. Others are about creating new rivers.
Still others are about building canals between vigorous new rivers and
somnolent old ones. And of course, there are those that are about
displacing incumbents from prime waterfront locations.
The nice thing about thinking this way is that the market is now a
system of cash flows that exists independently of the specific set of
businesses serving it in a given era. You can map the system and look for
an unoccupied waterfront spot.
and invested the rest at 8%, then $1 million is what he'd have built up as
start-up capital to strike out on his own in 10 years, at age 32 (yeah, yeah, I
know, nobody is talking 8% returns at the moment). So why is this path so
rare? I've met many people with the right level of frugality (mostly
immigrants), but they are still stuck in clock/battery metaphors.
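The arithmetic behind that path is worth making concrete. Here is a minimal sketch of the compounding involved; the $1 million target, the 8% return, and the 10-year horizon come from the text, while the framing as equal end-of-year contributions is my simplifying assumption:

```python
def future_value(annual_saving, rate=0.08, years=10):
    """Future value of equal end-of-year contributions at a fixed rate."""
    total = 0.0
    for _ in range(years):
        # Last year's balance grows by the rate, then this year's saving lands.
        total = total * (1 + rate) + annual_saving
    return total

# Annual saving needed to reach $1M in 10 years at 8%, via the annuity factor:
annuity_factor = ((1 + 0.08) ** 10 - 1) / 0.08  # roughly 14.5
required = 1_000_000 / annuity_factor           # roughly $69,000 per year
```

A frugal high earner banking on that order each year does land at the $1 million figure; the point of the metaphor is that most people never frame a paycheck this way.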
For the entrepreneurial mindset, the same money is viewed with
metaphors of "building material" and "time to deadline." Thinking of
money as time to a deadline, or non-renewable fuel (for example, time to
build up a certain capital position, or time to burn it down at a particular
burn rate) or as building material ("this is what it would take to buy a
McDonald's franchise"), leads to a very different view of the same levels
of money:
Since I worked at a startup as the first employee for a year, I've had a
ring-side seat to this mindset. But even that doesn't get to the visceral
reality of living this metaphor by managing money with this mindset.
Curiously, even something as simple as a blog can put your mind in this
gear. I feel a child-like sense of emotion and excitement when somebody
uses the "buy me a cappuccino" link on posts to send me $3.00, yet I feel
no excitement actually buying my daily coffee at Starbucks. The
difference comes from earning as a capitalist, but spending as a
paycheck-guy.
These are just two different money mindsets based on two different
sets of metaphors. So what are the others out there, and what happens
when you use them in the wrong contexts?
Thirteen Money Metaphors and their Uses and Misuses
1. Money as a clock: the predictable paycheck-in/auto-payments-out
oscillator is a good idea only for recurring necessary payments.
Any money dynamics that don't need to be on auto-pilot should
be taken off and managed actively.
2. Money as renewable fuel/rechargeable battery: this is only
good for living expenses up to a middle class level. A misuse is to
divide the national debt by the population to get a per capita debt.
This may give the man on the street the illusion of
comprehensibility, but trillions of dollars simply behave differently
than thousands. At the trillions level, money is NOT renewable
fuel, and it is dumb to let policy be informed by this metaphor.
3. Money as time-to-deadline/non-renewable fuel: good for small-time
entrepreneurs, but really bad for countries. Applying startup
burn-rate thinking to the cost of the war in Iraq is probably a
terrible idea.
4. Money as building/growth material: this is great for young
businesses, but inefficient for older businesses. Kids consume
calories and grow taller. Adults consume calories and grow fatter.
5. Money as freedom: beyond about $1 million, money represents
freedom, since you could live very well off the interest alone if
you were frugal. Good for lazy trust-fund kids and endowments
too far above your league, and you'll be reduced to daydreaming. Stick to
your own level of metaphors, and you'll never move anywhere. Change
your leisure metaphor without changing your management metaphor, and
you are in for frustration.
So much for the armchair lecture. When it comes to practicing what I
preach, I admit I still haven't got my mind out of the paycheck level of
metaphors.
The first two views differ in how they treat leisure. Ben Franklin was
an opportunity-cost focused buzz-kill. Ben's ghost seems to admonish
you: yes, you are having fun, but remember, you COULD be earning $10
an hour cranking widgets. So you'd better be improving your mental
health enough that your earnings increase by at least $10 in the future.
Adding modern math does not change things much. The cost-of-leisure
equation just acquires the trappings of net-present-value analysis. You
want to sleep eight hours today? Make sure the marginal discounted
future cash flow due to increased productivity is greater than $64.
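Franklin's opportunity-cost logic, dressed up with net-present-value math, can be sketched as follows; the eight hours and the $64 echo the text (which implies an $8 hourly rate there), while the 5% discount rate and one-year payoff horizon are my illustrative assumptions:

```python
def leisure_break_even(hours, wage, rate=0.05, years_until_payoff=1):
    """Earnings foregone now, and the future productivity gain needed to
    justify them once that gain is discounted back to the present."""
    cost_now = hours * wage
    # A gain received years_until_payoff from now must beat cost_now after
    # discounting, i.e. exceed cost_now * (1 + rate) ** years_until_payoff.
    required_future_gain = cost_now * (1 + rate) ** years_until_payoff
    return cost_now, required_future_gain

cost, needed = leisure_break_even(8, 8.0)  # $64 foregone today
```

On these assumptions, a night's sleep must yield a bit over $67 of extra productivity a year from now to clear the hurdle, which is exactly the kind of accounting the Franklin ghost demands.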
The Catholic ethic (or what William Whyte called the "social ethic")
naturally leads to viewing leisure as time-profit rather than money-cost.
No-strings-attached discretionary time. You trade as little of your time as
you can to meet your basic needs, and the rest is surplus. If you want more
stuff to enhance your leisure, a pool toy for your swimming, say, you have
to trade more time. This has the effect of creating a firewall between two
preference economies. On the supply side, you prefer the work that offers
the biggest cash returns per minute. On the demand side, you end up
deciding whether, for instance, an hour splashing in the pool without a
pool toy is better than a half-hour in the pool with one, and whether either
is better than an hour watching TV. You could be running at a loss. If your
job requires more caloric output than you are able to replace with food you
can afford with your earnings, you will slowly starve to death. Many of
the worlds poorest people are forced into this loss-making economic
equation. And of course, to finish up the logic, you can buy your
time-leisure with money-debt. That, of course, is the moral of the ant and
grasshopper fable: spend leisure you haven't earned, then fake remorse and
hope the ants bail you out. The analogy to priestly absolution for sins at
confession is nearly exact. Of course, there is the gray area of
cash-profitable "my work is my hobby" time, which you can double-book
in both ledgers, but that does not conceptually add anything to the
philosophy. If you have a lot of that going on, good for you.
The Catholic ethic does not oppressively mess with your experience
of leisure the way the Protestant ethic does. The agenda in Protestant-ethic
time management is to maximize lifetime wealth accumulation (few
modern Protestant-ethic-ers actually get or operate by the underlying
theology of predestination). The agenda in the Catholic-ethic money
management is to maximize immediate time profit. Capitalists operate by
the former and end up time-poor/cash-rich. Worker bees operate by the
latter and end up cash-poor/time-rich. Both get into debt: capitalists for
leverage, worker-bees to front-load leisure in youth. Both ultimately lead
to misery.
Here is the third angle that I think is interesting, and has the potential
to combine the wealth-creating tendencies of the Protestant ethic and the
hedonistic pleasures of the Catholic ethic, without leading to misery. The
third view says that time and money are near-perfect Yin-Yang opposites.
Hence the name Zen ethic. The underlying thing is not
either/or/neither/both. It is one of those paradox thingies. Some evidence:
Money is the most liquid thing imaginable, more liquid than water
even. Time is the most illiquid thing imaginable. You cannot save it, move
it, transfer it or trade it for anything else (you can sell the output of your
time, not your experience of it). About the only thing you can do is
modify your psychological experience of it: drugs and adrenaline can
make time pass more slowly, age and long memories can make it pass
faster. In certain cultures, you can sort of pool it and experience it in a
collective way, but still, it is illiquid. No matter how fast, slow or
collective you make the experience, you still cannot experience one time
instead of another or something other than time in place of time. Yet,
somehow, time can dance with money.
Time is the most deeply foundational thing imaginable. Even if you
are blind and deaf, and suspended in a sensory-deprivation chamber so
your sense of space and proprioception is messed up, you will still
experience time. I think. Money, by contrast, is the most completely
artificial thing ever invented. It is arbitrariness manifest, and it will
become instantly meaningless if you are put on a desert island. Yet,
somehow, time can dance with money.
I have no idea what to do with these thoughts. I didn't say I had
answers, just an interesting third angle. Maybe a theory of work can be
built on top of it.
Morgan's book is based on the premise that almost all our thinking
about organizations is based on one or more of eight basic metaphors. The
main reason this book is hugely valuable is that 99% of organizational
conversations stay exclusively within one metaphor. Worse, most people
are permanently stuck in their favorite metaphor and simply cannot
understand things said within other metaphors. So these are not really 8
perspectives, but 8 languages. Speaking 8 languages is a lot harder than
learning to appreciate 8 perspectives. I consider myself a bit of an
organizational linguist: I speak languages 2, 5, 6 and 7 fluently, 1 and 3
passably well (enough to get by), and 8 poorly.
1. Organization as Machine: This is the most simplistic metaphor,
and is the foundation of Taylorism. Any geometrically structuralist
approach also falls into this category, which is why I have little
patience for people who use words/phrases like "top-down," "bottom-up," "centralized," "decentralized," and so forth, without realizing how
narrow their view of organizations is. The entire mainstream
Michael-Porter view of business is within this metaphor.
2. Organization as Organism: This is a slightly richer metaphor and
suggests such ideas as organizational DNA, birth, maturity and
death, and so forth. I really like this one a LOT, and have so much
to say about it that I haven't said anything yet. I even bought a
domain name (electricleviathan.com) to develop my ideas on this
topic separately. Maybe one day I'll do at least a summary here.
3. Organization as Brain: This may sound like a subset of the
Organism metaphor (and there is some overlap), but there is a
subtle and important shift in emphasis from life processes to
learning. Organization as brain is the source of information-theoretic ways of understanding collectives (who knows what,
how information spreads and informs systems and processes). The
System Dynamics people like this a lot, especially Peter Senge
(The Fifth Discipline). I cannot recommend the SysDyn approach
though; I think it is fundamentally flawed. But the learning view
itself is very valuable.
4. Organization as Culture: I've written about this stuff before
("There is No Such Thing as Culture Change" on the E2.0 blog), and
plan to do so soon, when I review Tony Hsieh's Delivering
Happiness and in the next Gervais Principle post. I honestly dislike
5. Organization as Political System
6. Organization as Psychic Prison
7. Organization as Flux and Transformation
8. Organization as Instrument of Domination
justification for fait accompli decisions. In this view, the entire output of
the strategy profession is a nonsensical smokescreen obscuring more
fundamental machinations.
These are serious charges. Any book that attempts to spin a positive
story around strategy starts out with its hero in the dock, presumed
guilty. Kiechel succeeds in his main objective: acquitting the profession of
the charges against it, and demonstrating the true impact of the literary-industrial complex that is strategy.
As an idea-peddler myself, I am obviously playing devil's advocate
here. I personally have no doubt that strategy does matter, and that skeptics
who itch to dive in and do "real work" eventually pay a high price for their
skepticism. In the long run, it is the deliberate types, who take strategy
seriously, who prevail. In a way, the mark of the true strategy type is the
ability to use that very disdain and skepticism as cover for getting the
right things done.
The book pointedly avoids offering a definition of strategy (though it
cites several), so that Drucker phrase is probably a good operating
definition to start with: strategy is about getting the right things done. The
problem of defining strategy is surprisingly hard, but let's look at the
major themes of the book before considering why.
The Historical Development of Strategy
The major narrative arc in the book is a straightforward historical one.
Strategy as a function did not really exist before the '60s. To the extent
that the growth economies of the post-WW II decades needed such a
construct, the implicit ones in the heads of CEOs sufficed. The
Organization Man era was about what Michael Porter, a key figure in the
book, would characterize in the '90s as "operational effectiveness," in
low-competition growth markets.
The events comprising the origin myth are fairly straightforward and
distinctly American. Bruce Henderson invented the sector by founding the
Boston Consulting Group in 1963. A textbook maverick idea guy type,
Henderson pioneered the now familiar practices of hiring the best and
brightest from the top MBA programs, especially those with engineering
backgrounds (driving up the intake IQ and prestige of the programs in the
process, with the result that the MBA slowly caught up, in terms of
respectability, to law degrees and PhDs). BCG, when it began, was
primarily a high-concept idea company, relying on carefully-crafted
conceptual insights, applied to specific clients, to drive its business.
Very quickly competition emerged. Bill Bain, the top salesman in
BCG, broke away, taking some of the best talent with him. The result was
Bain Consulting. Bruce Henderson had only himself to blame: he had
taken his own advice a little too well, organizing his young company as a
crucible of internal Darwinian competition. Bain and his entourage were
the fittest, and they not only survived and thrived, they decided to head out
and turn the mock competition into a real one. And as befits a mutiny,
Bains signature style was distinctly non high-concept and non-BCG. It
was all about working closely, secretively, and at length, with only one
client in a given industry. Alone in the strategy consulting world, Bain was
also committed to participating in execution. This would both position
them for serious growth in the eighties, when shareholder value became
the sole metric of strategic success, and get them into serious trouble, due
to their extreme intimacy with their clients. But through their ups and
downs, Bain remained the un-BCG, with a cult-like (to their competitors,
who called them "Bainies") devotion to helping clients execute their
strategy recommendations. Their calling card was the line, "we don't sell
advice by the hour; we sell profits at a discount."
These events, and the early successes of BCG and Bain, did not go
unnoticed. The genteel white-shoes at McKinsey, who had been running a
trusted and somnolent business since 1926, with no strategy offering,
realized that they had to react. And under the leadership of Fred Gluck,
who joined the firm in 1967 and took over the helm in 1978, they did. In
their response, they relied on neither ideas, nor execution, but on learning
quickly. As a result, they took over the strategy revolution started by BCG
by commoditizing (by their own admission) and hawking in volume the
ideas that BCG had pioneered. The upstarts were going to be put in their
place.
The last significant origin event was the entrance of Michael Porter,
who around 1979 took on the task of dignifying and elevating the
People or Position?
Kiechel correctly notes that the main tension in the literature on
strategy is the one between positioning (driven by numbers and models),
and people (driven by organizational theory ideas).
At the heart of everything accomplished by Porter and the Big 3 is an
assumption that people dont really matter. This makes the main story of
strategy a story about positioning and formulas. Weve already seen one
problem: that codification, generalization and dissemination turn strategies
into costs of doing business. This creates an ever-faster arms-race by
eroding competitive advantage faster than new ideas can create it. Entire
industry sectors start to tick faster and faster, benefiting customers and
suppliers, but not corporations, when strategy ideas take hold across the
board. This problem was implicitly solved by Bain through secrecy,
exclusivity and non-publication. Among the mainstream players, it was
again Bain consultants who implicitly acknowledged, through their
preference for long engagements and participation in execution, the fact
that people and strategy are not separable, and neither are strategies and
execution.
The alternate approach to strategy focuses directly on dynamics, and
by dynamics, we mean the patterns of change created by that most
unpredictable variable in the equation, people.
Porter here is the target of most of the criticism, and the leading lights
of the People school begin their critiques with the question, "where are
the people in a Porter strategy?" Kiechel neatly brings out the nuances of
Porters reaction to this charge through carefully selected quotes. At one
point, he describes how Porter insists that his framework is dynamic,
protesting, "to this day I completely accept the premise that every
company is different, that every company is unique." At another point, he
has Porter resignedly saying, "Where I fail is in the human dimension."
To be fair to the positioning school though, people, the driver of
unpredictable dynamics, are not easy to model and integrate into strategy.
And it is not for lack of trying. The school began its work by drawing
inspiration from the work of Herbert Simon, who introduced the idea of
bounded rationality and the idea that people satisfice rather than optimize.
From there, the march to behavioral economics-inspired approaches to
strategy over several decades, was inevitable.
Besides my favorite, William Whyte (who gets a too-brief mention),
the important thinkers in this school are not as well-known as the
positioning school Big Four: Richard Cyert (A Behavioral Theory of the
Firm), Karl Weick (Collective Sense-Making), Henry Mintzberg
(Mintzberg on Management) and Jeffrey Pfeffer (whom Kiechel calls "the
Porter of Organizational Behavior").
One name though, should be familiar: Tom Peters was the lone rebel
in the mainstream strategy world, trying to draw attention to people
aspects. Though Thriving on Chaos was the first big business book I
ever read (in the mid-'80s, as a teenager), I am frankly not a fan. But he
must be given credit for an entirely different achievement: creating the
best-selling business book sector.
Though it mostly lost the war, the People School achieved its
greatest success playing defense in the early '80s, with Richard Pascale's
1984 article in the California Management Review, "Perspectives on
Strategy: The Real Story Behind Honda's Success." The significance of
the article was that Pascale showed that Honda's seemingly deliberate and
modeling/data-based invasion of America was really an outcome of
serendipity mixing with in-market adaptive learning and the peculiar
personalities of the principals. In other words, the actions and successes of
humans within an agile, quick-learning startup were being attributed, by
mainstream strategists, to deliberate modeling and data, and the use of
elaborate constructs. A case of post-hoc rationalization.
Though Pascale won the battle, the "People" philosophy did not win
the war, and for good reasons.
One good reason is simply that the People school is pre-paradigmatic. There is very little agreement between a multitude of
contending schools of thought. The book quotes one study which found
that 105 experts polled for key ideas from the school produced 146
candidates, of which 106 were unique. With that much dissent, the
"People" school doesn't stand a chance in the commercial marketplace for
retail business ideas (which is why, by my reasoning, it is automatically
more valuable, since fewer people understand the ideas). By contrast, in
the Positioning school, there are perhaps a couple of dozen key ideas
that everybody agrees are important, which every MBA learns, and most
non-MBA managers eventually learn through osmosis.
Add to this the fact that any People-focused school is necessarily
based on metaphysical, rather than psychological, axioms, and you get a
mess. If you believe in an idealist "perfectibility of Man" doctrine, you
will follow Maslow and end up with high-minded ideas about
organizations allowing their people to self-actualize, resulting in their
banding together into missionary tribes that proceed to Save the World.
If you are skeptical of human perfectibility, you get People models like
my Gervais Principle series.
The Left and Right Brains of Corporate Strategy
The Positioning school is basically a half-century worth of
codification and dissemination of ideas under an assumption that
companies are run by sound operating management, capable of execution.
Every idea developed by the school either creates a flavor-of-the-month
bubble, or gets validated and incorporated into the very structure of the
broader business environment, as an across-the-board cost of doing
business. The result has been a gradual acceleration of change and a
shortening of the advantage offered by any given idea. Ideas go from
being secret strategies to codified commodities so quickly that they barely
pay for themselves. Okay, I won't repeat that idea again.
The People school has come down to a basic position that good
people with a bad system/process will always outperform bad people with
a good system/process. Hence the Good to Great idea that you must get
the right people on the bus, the wrong people off the bus, and then decide
where to drive. It is a fundamentally adaptive, experimental, local and
entrepreneurial approach to business problems. It is also a model that does
not naturally lead to industry-wide acceleration, since it is people, not
ideas, that matter, and people and teams cannot (yet) be cloned.
The professionals may disagree in public, but I've never yet met
anyone in the real world who does not mix and match ideas from both
worlds. The Pascale Honda story is clever, but does not belie things like
the Growth Share Matrix and its descendants. To some extent, the
Positioning and People schools are the left and right brains of strategy,
and smart people tend to operate in whole-brained ways.
Other Threads
There is plenty more in the book, all of it illuminated by fascinating
and fresh anecdotes, and statistics on the growth of the sector.
One thread deals with the endgames for consultants. Since the
sector operates by an up-or-out dynamic, with only about 10% making
partner, the strategy sector creates an endless supply of exiting
experienced business professionals. There is an extended discussion of one
endgame: the emergence of a consulting stint as a fast-track path to senior
management in client companies (which created a whole generation of
consultant-turned-VPs, who became more demanding customers, raising
the stakes for the whole sector). Another currently popular endgame is
apparently the Private Equity (PE) sector (the descendant of LBOs and the
big brother of Venture Capital, in case you don't know what that is).
Another interesting thread deals with the relative failure of the
industry in dealing with innovation problems as opposed to cost control
problems (which has led to the perhaps unfair association between strategy
consultants and layoffs).
Yet another thread deals with the emergence of the literary
industrial complex, including a discussion of conferences, the business
book packaging industry, and the dominant influence of the Harvard
Business Review (one insider is quoted as saying "You can get a year's
worth of business, maybe two, on the strength of one article.")
Perhaps the most significant minor thread is the story of the rise of
shareholder value as the key metric (an idea Jack Welch is quoted as
calling "the dumbest idea in the world"). Related to that is an entire
Office Manager: 10
Industry Practice Manager: 4-5
Function Manager: 1-2
This is clearly not a definition, and not intended as one. You could be
reading tea-leaves and calling it strategy. Or, if you are a pure Druckerian,
you could declare that "the purpose of a business is to create and keep a
customer," and use that static doctrine as your framework and construct,
and worry no further. Yet, strategy is clearly more than that.
The positive definitions offered are only offered as illustrations of the
thinking of specific schools or people. For instance, the pre-Porter state of
management thinking in academia is characterized through a Ken
Andrews quote (a Harvard faculty member who taught courses eventually
taken over by Porter):
Corporate strategy is the pattern of major objectives,
purposes, or goals and established plans for achieving those
goals, stated in such a way as to define what business the
company is in or is to be in and the kind of company it is or
is to be.
As Kiechel notes, that grand, overarching definition says everything
and nothing. But it creates the intellectual room for viewing strategy as
highly unique and individual to companies and situations, a process of
creative story-telling. By contrast, Porter's formula offerings, and the
industry's, are more confining, for example, "The essence of formal
corporate strategy is relating a company to its environment" (which
suggests that strategy is essentially about responding to competition).
I should mention here that I have a vested interest in raising the
question of definitions, since I actually offer one in my upcoming book,
Tempo (you can find a really old version of my ideas in my 2007 post,
"Strategy, Tactics, Operations and Doctrine: A Decision-Language
Tutorial," but my thinking has evolved a LOT since then, so don't hold me
to the details).
But getting back to the question of definitions for the specific context
of corporate strategy, if thinkers like Andrews were being too general,
Porter and his group too formulaic, and the People school too implicit,
where are we to look? I personally believe the heart of the matter goes
back to Clausewitz and Napoleon's coup d'oeil: a whole-brained local, and
I didn't settle on these five lightly. I must have browsed or partly-read-and-abandoned dozens of books about modernity and globalization
before settling on these as the ones that collectively provided the best
framing of the themes that intrigued me. If I were to teach a 101 course on
the subject, I'd start with these as required reading in the first 8 weeks.
The human world, like physics, can be reduced to four fundamental
forces: culture, politics, war and business. That is also roughly the order of
decreasing strength, increasing legibility and partial subsumption of the
four forces. Here is a visualization of my mental model:
that govern the structure of the corporate form, and descriptive artifacts
like macroeconomic indicators, microeconomic balance sheets, annual
reports and stock market numbers.
But one quality makes gravity dominate at large space-time scales:
gravity affects all masses and is always attractive, never repulsive. So
despite its weakness, it dominates things at sufficiently large scales. I
dont want to stretch the metaphor too far, but something similar holds
true of business.
On the scale of days or weeks, culture, politics and war matter a lot
more in shaping our daily lives. But those forces fundamentally cancel out
over longer periods. They are mostly noise, historically speaking. They
don't cause creative-destructive, unidirectional change (whether or not you
think of that change as progress is a different matter).
Business though, as an expression of the force of technological
evolution, has a destabilizing unidirectional effect. It is
technology, acting through business and Schumpeterian creative-destruction, that drives monotonic, historicist change, for good or bad.
Business is the locus where the non-human force of technological change
sneaks into the human sphere.
Of course, there is arguably some progress on all four fronts. You
could say that Shakespeare represents progress with respect to Aeschylus,
and Tom Stoppard with respect to Shakespeare. You could say Obama
understands politics in ways that, say, Hammurabi did not. You could say
that General Petraeus thinks of the problems of military strategy in ways
that Genghis Khan did not. But all these are decidedly weak claims.
On the other hand the proposition that Facebook (the corporation) is
in some ways a beast entirely beyond the comprehension of an ancient
Silk Road trader seems vastly more solid. And this is entirely a function of
the intimate relationship between business and technology. Culture is
suspicious of technology. Politics is mostly indifferent to and above it.
War-making uses it, but maintains an arms-length separation. Business? It
gets into bed with it. It is sort of vaguely plausible that you could switch
artists, politicians and generals around with their peers from another age
and still expect them to function. But there is no meaningful way for a
This post is mainly about the two historical phases, and is in a sense
a macro-prequel to the ideas I normally write about, which are more
individual-focused and future-oriented.
I: Smithian Growth and the Mercantilist Economy (1600–1800)
The story of the old corporation and the sea
It is difficult for us in 2011, with Walmart and Facebook as examples
of corporations that significantly control our lives, to understand the sheer
power the East India Company exercised during its heyday. Power that
makes even the most out-of-control of today's corporations seem tame by
comparison. To a large extent, the history of the first 200 years of
corporate evolution is the history of the East India Company. And despite
its name and nation of origin, to think of it as a corporation that helped
Britain rule India is to entirely misunderstand the nature of the beast.
Two images hint at its actual globe-straddling, 10x-Walmart
influence: the image of the Boston Tea Partiers dumping crates of tea into
the sea during the American struggle for independence, and the image of
smoky opium dens in China. One image symbolizes the rise of a new
empire. The other marks the decline of an old one.
The East India Company supplied both the tea and the opium.
At a broader level, the EIC managed to balance an unbalanced trade
equation between Europe and Asia whose solution had eluded even the
Roman empire. Massive flows of gold and silver from Europe to Asia via
the Silk and Spice routes had been a given in world trade for several
thousand years. Asia simply had far more to sell than it wanted to buy.
Until the EIC came along.
A very rough sketch of how the EIC solved the equation reveals the
structure of value-addition in the mercantilist world economy.
The EIC started out by buying textiles from Bengal and tea from
China in exchange for gold and silver.
Then it realized it was playing the same sucker game that had trapped
and helped bankrupt Rome.
Next, it figured out that it could take control of the opium industry in
Bengal, trade opium for tea in China with a significant surplus, and use the
money to buy the textiles it needed in Bengal. Guns would be needed.
As a bonus, along with its partners, it participated in yet another
clever trade: textiles for slaves along the coast of Africa, who could be
sold in America for gold and silver.
For this scheme to work, three foreground things and one background
thing had to happen: the corporation had to effectively take over Bengal
(and eventually all of India), Hong Kong (and eventually, all of China,
indirectly) and England. Robert Clive achieved the first goal by 1757. An
employee of the EIC, William Jardine, founded what is today Jardine
Matheson, the spinoff corporation most associated with Hong Kong and
the historic opium trade. It was, during its early history, what we would
call today a narco-terrorist corporation; the Taliban today are
kindergarteners in that game by comparison. And while the corporation
never actually took control of the British Crown, it came close several
times, by financing the government during its many troubles.
The background development was simpler. England had to take over
the oceans and ensure the safe operations of the EIC.
Just how comprehensively did the EIC control the affairs of states?
Bengal is an excellent example. In the 1600s and the first half of the
1700s, before the Industrial Revolution, Bengali textiles were the
dominant note in the giant sucking sound drawing away European wealth
(which was flowing from the mines and farms of the Americas). The
European market, once the EIC had shoved the Dutch VOC aside,
constantly demanded more and more of an increasing variety of textiles,
ignoring the complaining of its own weavers. Initially, the company did no
more than battle the Dutch and Portuguese on water, and negotiate
agreements to set up trading posts on land. For a while, it played by the
rules of the Mughal empire and its intricate system of economic control
open because the corporation was such a new beast, nobody really
understood the dangers it represented. The EIC maintained an army. Its
merchant ships often carried vastly more firepower than the naval ships of
lesser nations. Its officers were not only not prevented from making
money on the side, private trade was actually a perk of employment (it
was exactly this perk that allowed William Jardine to start a rival business
that took over the China trade in the EIC's old age). And finally, the
cherry on the sundae: there was nothing preventing its officers, like
Clive, from simultaneously holding political appointments that legitimized
conflicts of interest. If you thought it was bad enough that Dick Cheney
used to work for Halliburton before he took office, imagine if hed worked
there while in office, with legitimate authority to use his government
power to favor his corporate employer and make as much money on the
side as he wanted, and call in the Army and Navy to enforce his will. That
picture gives you an idea of the position Robert Clive found himself in, in
1757.
He made out like a bandit. A full 150 years before American corporate
barons earned the appellation "robber."
In the aftermath of Plassey, in his dual position of Mughal diwan of
Bengal and representative of the EIC with permission to make money for
himself and the company, and the armed power to enforce his will, Clive
did exactly what you'd expect an unprincipled and enterprising adventurer
to do. He killed the golden goose. He squeezed the Bengal textile industry
dry for profits, destroying its sustainability. A bubble in London and a
famine in Bengal later, the industry collapsed under the pressure (Bengali
economist Amartya Sen would make his bones and win the Nobel two
centuries later, studying such famines). With industrialization and
machine-made textiles taking over in a few decades, the economy had
been destroyed. But by that time the EIC had already moved on to the next
opportunities for predatory trade: opium and tea.
The East India bubble was a turning point. Thanks to a rare moment
of the Crown being more powerful than the company during the bust, the
bailout and regulation that came in the aftermath of the bubble
fundamentally altered the structure of the EIC and the power relations
between it and the state. Over the next 70 years, political, military and
Constantinople fell to the Ottomans in 1453 and the last Muslim ruler
was thrown out of Spain in 1492, the year Columbus sailed the ocean blue.
Vasco da Gama found a sea route to India in 1498. The three events
together caused a defensive consolidation of Islam under the later
Ottomans, and an economic undermining of the Islamic world (a process
that would directly lead to the radicalization of Islam under the influence
of religious leaders like Abd al-Wahhab (1703-1792)).
The 16th century makes a vague sort of sense as the Age of
Exploration, but it really makes a lot more sense as the startup/first-mover/early-adopter phase of corporate mercantilism. The period was
dominated by the daring pioneer spirit of Spain and Portugal, which
together served as the Silicon Valley of Mercantilism. But the maritime
business operations of Spain and Portugal turned out to be the MySpace
and Friendster of Mercantilism: pioneers who could not capitalize on their
early lead.
Conventionally, it is understood that the British and the Dutch were
the ones who truly took over. But in reality, it was two corporations that
took over: the EIC and the VOC (the Dutch East India Company,
Vereenigde Oost-Indische Compagnie, founded one year after the EIC),
the Facebook and LinkedIn of Mercantile economics respectively. Both were
fundamentally more independent of the nation states that had given birth
to them than any business entities in history. The EIC more so than the
VOC. Both eventually became complex multi-national beasts.
A lot of other stuff happened between 1600 and 1800. The names from
world history are familiar ones: Elizabeth I, Louis XIV, Akbar, the Qing
emperors (the dynasty is better known than individual emperors) and the
American Founding Fathers. The events that come to mind are political
ones: the founding of America, the English Civil War, the rise of the
Ottomans and Mughals.
The important names in the history of the EIC are less well-known:
Josiah Child, Robert Clive, Warren Hastings. The events, like Plassey,
seem like sideshows on the margins of land-based empires.
If the ship sailing the Indian Ocean ferrying tea, textiles, opium and
spices was the star of the mercantilist era, the steam engine and steamboat
opening up America were the stars of the Schumpeterian era. Almost
everybody misunderstood what was happening. Traveling up and down the
Mississippi, the steamboat seemed to be opening up the American interior.
Traveling across the breadth of America, the railroad seemed to be
opening up the wealth of the West, and the great possibilities of the Pacific
Ocean.
Those were side effects. The primary effect of steam was not that it
helped colonize a new land, but that it started the colonization of time.
First, social time was colonized. The anarchy of time zones across the vast
expanse of America was first tamed by the railroads for the narrow
purpose of maintaining train schedules, but ultimately the tools that
served to coordinate those schedules, the mechanical clock and time zones,
went on to colonize human minds. An exhibit I saw recently at the Union
Pacific Railroad Museum in Omaha clearly illustrates this crucial
fragment of history.
The steam engine was a fundamentally different beast than the sailing
ship. For all its sophistication, the technology of sail was mostly a very
refined craft, not an engineering discipline based on science. You can trace
a relatively continuous line of development, with relatively few new
scientific or mathematical ideas, from early Roman galleys, Arab dhows
and Chinese junks, all the way to the amazing Tea Clippers of the mid
19th century (Mokyr sketches out the story well, as does Mahan, in more
detail).
Steam power though was a scientific and engineering invention.
Sailing ships were the crowning achievements of the age of craft guilds.
Steam engines created, and were created by, engineers, marketers and
business owners working together with (significantly disempowered)
craftsmen in genuinely industrial modes of production. Scientific
principles about gases, heat, thermodynamics and energy were applied to
practical ends, resulting in new artifacts. The disempowerment of
craftsmen would continue through the Schumpeterian age, until Frederick
Taylor found ways to completely strip mine all craft out of the minds of
craftsmen, and put it into machines and the minds of managers. It sounds
awful when I put it that way, and it was, in human terms, but there is no
denying that the process was mostly inevitable and that the result was
vastly better products.
The Schumpeterian corporation did to business what the doctrine of
Blitzkrieg would do to warfare in 1939: move humans at the speed of
technology instead of moving technology at the speed of humans. Steam
power used the coal trust fund (and later, oil) to fundamentally speed up
human events and decouple them from the constraints of limited forms of
energy such as the wind or human muscles. Blitzkrieg allowed armies to
roar ahead at 30-40 miles per hour instead of marching at 5 miles per hour.
Blitzeconomics allowed the global economy to roar ahead at 8% annual
growth rates instead of the theoretical 0% average across the world for
Mercantilist zero-sum economics. Progress had begun.
The equation was simple: energy and ideas turned into products and
services could be used to buy time. Specifically, energy and ideas could be
used to shrink autonomously-owned individual time and grow a space of
corporate-owned time, to be divided between production and
consumption.
The point isn't that we are running out of attention. We are running
out of the equivalent of oil: high-energy-concentration pockets of easily
mined fuel.
The result is a spectacular kind of bubble-and-bust.
Each new pocket of attention is harder to find: maybe your product
needs to steal attention from that one obscure TV show watched by just
3% of the population between 11:30 and 12:30 AM. The next
displacement will fragment the attention even more. When found, each
new pocket is less valuable. There is a lot more money to be made in
replacing hand-washing time with washing-machine plus magazine time,
than there is to be found in replacing one hour of TV with a different hour
of TV.
What's more, due to the increasingly frantic zero-sum competition
over attention, each new well of attention runs out sooner. We know this
idea as shorter product lifespans.
So one effect of Peak Attention is that every human mind has been
mined to capacity using attention-oil drilling technologies. To get to Clay
Shirky's hypothetical notion of cognitive surplus, we need Alternative
Attention sources.
To put it in terms of per-capita productivity gains, we hit a plateau.
We can now connect the dots to Zakaria's reading of global GDP
trends, and explain why the action is shifting back to Asia, after being
dominated by Europe for 600 years.
Europe may have increased per capita productivity 594% in 600
years, while China and India stayed where they were, but Europe has been
slowing down and Asia has been catching up. When Asia hits Peak
Attention (America is already past it, I believe), absolute size, rather than
big productivity differentials, will again define the game, and the center of
gravity of economic activity will shift to Asia.
If you think that's a long way off, you are probably thinking in terms
of living standards rather than attention and energy. In those terms, sure,
China and India have a long way to go before catching up with even
Southeast Asia. But standard of living is the wrong variable. It is a derived
variable, a function of available energy and attention supply. China and
India will never catch up (though Western standards of living will
decline), but Peak Attention will hit both countries nevertheless, within
the next 10 years or so.
What happens as the action shifts? Kaplan's Monsoon frames the
future in possibly the most effective way. Once again, it is the oceans,
rather than land, that will become the theater for the next act of the human
drama. While American lifestyle designers are fleeing to Bali, much
bigger things are afoot in the region.
And when that shift happens, the Schumpeterian corporation, the oil
rig of human attention, will start to decline at an accelerating rate.
Lifestyle businesses and other oddball contraptions (the solar panels and
wind farms of attention economics) will start to take over.
It will be the dawn of the age of Coasean growth.
Adam Smith's fundamental ideas helped explain the mechanics of
Mercantile economics and the colonization of space.
Joseph Schumpeter's ideas helped extend Smith's ideas to cover
Industrial economics and the colonization of time.
Ronald Coase turned 100 in 2010. He is best known for his work on
transaction costs, social costs and the nature of the firm. Where most
classical economists have nothing much to say about the corporate form,
for Coase, it has been the main focus of his life.
Without realizing it, the hundreds of entrepreneurs, startup-studios
and incubators, 4-hour-work-weekers and lifestyle designers around the
world, experimenting with novel business structures and the attention
mining technologies of social media, are collectively triggering the age of
Coasean growth.
But modern air travel, which has evolved over nearly a century, is a
very different complex of behaviors that has drifted far from bus and train
travel. Just look at the enormous number of complex behaviors weve
learned:
1. Checking in (online and off)
2. Security checks and rules about carrying liquids
3. Gates and air-bridges that look nothing like railroad stations
4. Checked baggage and hand baggage rules
5. Seat belts and rules about staying seated at certain times
6. Baggage carousels for retrieving luggage
7. Dealing with layovers
8. Online bidding for cheap ticket deals
9. Airport parking and car rental options
10. Duty-free shopping
11. Visas, passports, immigration, customs
12. Rules about when you can use electronic devices
We've been able to get this far successfully because we took our time.
By a happy coincidence, the physical constraints of the technology limited
the rate at which airline travel could evolve.
Another example is driving, which is estimated to involve close to
1500 separate sub-skills. It took us about a century to get to modern
driving, GPS, Zipcars and all, starting with horse-drawn carriages.
This sort of long evolution trajectory is generally the case for physical
products and services. They are naturally rate-limited by a variety of
factors, so they tend to evolve and mature in ways that naturally satisfy the
Milo Criterion.
Web Products
Thanks to the lack of physical constraints, Web products can go from
paper napkin to fully realized vision in months rather than decades. They
can evolve at rates that far exceed the Milo rate.
The primary reason these behaviors are effective is that they slow
down the process of software development and maintain the optimal
behavior modification rate for humans.
In other words, the Milo Criterion is not just descriptive. It is
prescriptive. It is the dominant dynamic for successful products.
It leads to alternative explanations for why the effective practices
work. It leads to building blocks that are different from the ones
recommended by lean startup theory.
In fact it is a pretty fertile starting point for a whole different approach
to thinking about entrepreneurship and product development. I've been
developing these ideas, mostly in private, and applying them to my own
business decisions.
Slow Marketing
I don't like being cryptic, but in this case, I am not going to elaborate
further (at least not right now) because the very thought of the tedious and
potentially acrimonious arguments that might result is enough to turn me
off. I don't have enough skin in the game to make it worthwhile. Perhaps I
am getting old and conflict-averse.
So I am not going to share my explanations or alternative building
blocks. In fact, I deleted a couple of much longer draft posts, something I
rarely do, since I hate wasting writing effort.
I wrote this post primarily as a way of saying hello to others who
might already be thinking along the same lines I am. If you are, chances
are the Milo Criterion will spark some productive thinking for you. If not,
at least you learned the story of Milo of Croton, for use at cocktail parties.
I will share one more clue: I've started calling my developing theory
"slow marketing." Read into that what you will.
The first feature implies that there will be an iterative element in the
solution.
The second feature implies that somewhere along the way, youre
going to have to question implicit assumptions, frames and definitions of
the primitive elements. Like Einstein said, you aren't going to solve the
problem at the same level that you encountered it.
For example, in the job/experience loop, you can question the
atomicity of the definition of a job (work for pay) by pondering such
constructs as unpaid internships that loosen the notion of what a job is,
allowing you to trigger the positive feedback loop.
Stated in a general form, the chicken-egg problem is: how do you get
X, when you need Y to get X, and X to get Y?
There are at least four correct answers:
1. Slowly
2. Painfully
3. Unfairly
4. Untruthfully
energy. The more truly atomic the primitive categories are (more like real
chickens and eggs), the more painful this process is. This is my least
favorite solution. This is also the most widespread solution people attempt.
"Unfairly" is the cheapest and fastest solution, if it is available.
Somebody might just give you a chicken or an egg. Daddy might pull
strings and get you a job. You might have incriminating photographs of a
banker that allow you to get a loan on suspiciously good terms with no
credit or collateral. But not all unfair advantages are sleazy or nepotistic
advantages. Included in the general category of unfair advantage is
everything that falls under the umbrella term "strategy." My definition of
strategy in Tempo basically boils down to unfair advantage. Anything
from privileged information to exclusive access to a key distribution
channel, to owning the rights to a key invention, counts as an unfair
advantage and a basis for strategy. This is my second-choice solution.
"Untruthfully," or the fake-it-till-you-make-it solution, is my third-choice
solution, but the one I want to talk about today.
If you want your metaphorical t's crossed and i's dotted, the solution we
are talking about is: fake the chicken while the egg incubates.
There are many ways to do this, most of them both stupid and illegal.
For instance, you could doctor your resume and make up fake letters of
recommendation.
A ubiquity illusion is a much more subtle mechanism, and in most
cases, is not illegal.
The simplest example is using "we" to refer to a business that is really
just yourself or a partnership of two people. By concealing some
information, and with enough self-confident copywriting, you can convey the
impression that your business is much bigger than it is.
A slightly smarter example is any sentence that begins: "My clients
often ask me…"
About half the time, the answer is "that post about The Office," which
makes me groan silently, but the other half of the time, the answer I get is
something along these lines:
"I think a friend forwarded some post to me once a while back, but I
didn't really start reading regularly until I was searching for something
and one of your posts came up."
The media often differ (tweets, Facebook, email forwards, party
conversations, workplace conversations, Google searches) but the
pattern is usually the same: new readers encounter ribbonfarm at least
twice in two different ways before turning into regular readers.
In the cases where the two media initially appeared to be the same
(for example, two email forwards), it usually turned out that the context
differed: one forward from a coworker and one from a family member, for
instance. I buy the theory that in social media, the actual media are
individual people (the billion-channel marketing theory), so with some
overloading, you can call this the two-contacts-two-media rule.
On my 7-week road-trip across the country over the summer meeting
readers (you guys clearly aren't hanging on my every word to the point
that you hunt me down and interrupt my life; I have to run around hunting
you down, interrupting your lives), I collected many examples of the
2-contacts-2-media rule.
This curious phenomenon reminded me of a classic rule-of-thumb in
sales: to prime a prospect for a close, you need to first prepare them by
engineering three contacts via three media. For example, a face-to-face
encounter at a conference, a passing mention in an innocent-seeming
email exchange, and perhaps a referral from a friend at a party.
The two vs. three distinction is mostly irrelevant (it has to do with
online versus offline dynamics), but the key point is that you've got a
deliberately engineered process that looks like the natural one, resulting
in an accelerated selling process.
Ubiquity Illusions
What you need to fake is ubiquity. Faking ubiquity is about faking
social proof.
If something is ubiquitous in a given environment, you will naturally
encounter it in somewhat random and uncorrelated situations. The
randomness and uncorrelatedness is critical. The Amazon Kindle is a
perfect example of a product that spread via genuine ubiquity. After first
hearing about it on technology news sites, I didn't actually buy it until I'd
spotted it in the wild at a couple of different coffee shops. I doubt
Bezos planted them.
You need the randomness. Seeing a bear at the zoo does not lead you
to suspect that bears are common in the area, but seeing one randomly in a
public park would lead you to that suspicion.
The multiple encounters must also be uncorrelated. Seeing two bears
in the same zoo means nothing. But seeing two bears loose in different
city parks will confirm your suspicions that bears are running wild in the
area.
The reason ubiquity illusions work is obvious: you are basically
gaming human pattern recognition instincts.
In fact in some cases, you don't even need to run around planting fake
random-and-uncorrelated signs in the environment. Since ubiquity usually
goes along with oversubscription of the producer of the ubiquity, you can
get away with just planting signs of oversubscription. Ubiquity illusions
and oversubscription illusions are two sides of the same coin.
Get people to call you while you are meeting a new client.
Plant a few friends at a party and walk around graciously shaking
hands, faking Big Man on Campus.
Pay people to stand in a line outside your new coffee shop.
Accidentally flash a view of your packed calendar while setting
up your laptop for a presentation.
Am I faking it?
Hee hee hee! (that's my slightly evil laugh)
Seven is not an arbitrary number. I looked hard and that's all I could
find. I'll tell you about two that didn't make the cut later. Each of the 7
switches, if it causes successful firing, induces an S-curve (if not, you get
a peak and collapse).
If the S-curves are clustered close together in time, you get one big
Aha! Otherwise you get a series of smaller Ahas! All 7 must be switched
on. Otherwise you'll get a change in emotion and energy, but not a true
business positioning. The characteristic sign is that you get a frenzied,
processes but there's a lot more. You have to find the artistically right
kind of systems and processes that can put you on the accelerating margins
trajectory. For Zappos, for instance, it appears to have been the decision to
move away from drop shipping. So it is not a matter of just hiring a few
bureaucrats to create some tedious forms. Big companies know all about
this transition. I've done work on this dimension, but unfortunately it isn't
work I can talk about publicly.
The fully-refined version of this gets you the classic positioning
model of Michael Porter (the five forces model). Practitioners like to call
it "strategy" but it doesn't deserve that lofty term. It's operations they are
talking about. Very useful nevertheless.
In BCG Growth Share Matrix language, the switch gets thrown when
an uncertain wildcat (or question mark) business suddenly turns into a
Star (moving from the top-right to the top-left quadrant). From here you
can drive down costs faster than competitors can, and move the business
into a relatively unassailable high-margin cash cow position.
3. Sales: Positioning as Pain-Point Relief
If you plow through the Lean Startup material, you'll find that the
entire customer development process hinges on one crucial decision: you
only go after a small subset of early customers who a) have a problem you
can solve, b) are aware that they have a problem, c) are actively shopping
for a solution, and d) are actually improvising temporary solutions.
This is a "customer in pain," as it were. Product-Market Fit (PMF) in
this narrow sense relieves a pain for someone. Focusing on customers
in pain is a very specific way to find a market.
In an earlier Drucker-inspired article, I defined a customer as a novel
pattern of human behavior, based on Drucker's notion of customer
creation. Creation is expensive, but it can be done. But in CD-driven
businesses, you don't create this novel pattern so much as you recognize it
in the wild and then offer a less painful substitute. This is significantly
cheaper, which is why it is so popular in the startup world.
It is a slightly worrying metaphor, but I like it: in customer
development, you domesticate a wild customer.
Here is my example. I was the first employee at Sulekha.com, after
the two founders, 10 years ago. Today, it is sort of the
Craigslist-plus-Facebook-plus-Fandango of India. I witnessed (and, in modest ways,
contributed to) the PMF phase change, when we found our first strong
revenue model (online ticket sales). And yes, the script ran exactly as the
lean startup people describe it, with pivots and everything. We just used
different language to talk about what was happening.
make a viral video. The best you can do is build a platform-intent product
or service, or a viral-intent video. But platform-intent thinking is crucial.
Otherwise if your first and only application idea fails, well, you're
screwed. Nor will a generic multi-tasking minimum-viable product do
the trick. That gets you a Swiss Army knife, which still has only one shot at
success. You don't just want a multi-tasker product. You want multiple
cheap shots at making an application catch on.
Once you ask the question "minimum viable product that does WHAT?"
you'll see why Killer App is a useful separate term. It is that last 20% of
the engineering that brings in 80% of the value. First you build a
minimum-viable platform, and then you start doing several 20% stabs to
find your first killer app. Each stab is a minimum-viable product
hypothesis, but each stab is not necessarily a full repositioning or pivot.
Think of a startup as a new PC, and each MVP stab as a half-assed app
like Microsoft Works. If you find that a lot of people are using Microsoft
Works, well, go ahead and build and sell Office. That's your killer app.
But if it doesn't work, you shouldn't have to retool 100%. Only 20%.
Most high-value engineering products turn out to be platforms with
applications. So platform-intent is the right strategy. Unitaskers, such as
combs or toothbrushes, are rarely enough to build a business (unitaskers
are usually made by companies that maintain portfolios based on
similarities in manufacturing or service delivery processes).
But don't let the word "platform" intimidate you. A platform does not
have to be as complex as an operating system or a new fighter plane. A
knife is a very simple instrument, but it is a platform in the kitchen
because it can do so many things. The killer app turned out to be
chopping, but it can still do some mean squashing, stirring, serving and
spatula-ing. Some caveman or cavewoman probably started the search for
a business model with a stick, and figured out that sharpening one edge
created the first killer app. Pun intended.
Note: there are two engineering styles which I call "vertical first" (the
first app comes before the minimum-viable platform) and "horizontal
first" (the other way around). I think both can work, but the risk-benefit
tradeoff does favor at least some platform work upfront, in my opinion.
already out there, in service of your brand. Many people have a stake in
that story, so at best you can influence the story, not tell it. VW may
regret its "Punch Dub" series of commercials. It may have killed the golden
goose. Now I bet people who play the game might want to stop. If, on the
other hand, VW had spent its money on a grassroots word-of-mouth
campaign around the "Punch Dub" game, a lot more could have happened.
Groundswell has several great examples. I could be totally wrong on this
one. Only time will tell.
Aside: this is why the new continent of social media has primarily
been colonized by PR people. The marketing and sales people are talking a
lot about the potential, but it is PR people who are making the medium
work for them. Good marketing talks more than it listens. Good sales
listens more than it talks. Good PR strikes a conversational balance. Social
media is fundamentally friendlier to PR than either sales or marketing. In
the past companies had to have either marketing or sales cultures. You
could not lead with PR. Today you can. This is especially true because
rank-and-file employees can be turned into a PR army. To use them in
marketing means cheesy employee photos in brochures. Using them in
sales means sales people bringing customers in for insider visits.
Though Word-of-Mouth can work for sales (forwarding discount
coupons/referral/lead generation schemes), marketing (contests, viral
videos) or PR, it works best for PR.
This is where the classic reading of the Google origin myth gets it
wrong. The story goes that Brin and Page, when told they had to choose
between a marketing or a sales culture (and this is engineering
braggadocio pure and simple) chose to create an engineering culture
instead. This is wrong on two levels. First, it is a three-way fork today, not
two-way, and Google is a company built on effective PR. "Don't Be Evil"
and stories about great buffets (and ironically, the story of Brin and Page
choosing an engineering culture) are basically the core of a PR
socialization narrative (how many people know Google's marketing
tagline of "organizing the world's information," or have encountered its
AdSense/AdWords sales face?). Second, culture isn't yours to choose.
Your business model completely determines it, and it will always be a
culture driven by a customer-facing function. More on that later.
You didn't think the bean counters would have nothing to say, did
you? Pricing confuses a lot of people because they think it is some sort of
objective, if inexact, science. The most naive people think: if only I had
perfect information and could construct my demand/supply curves,
identify my substitutes and measure elasticity, I could price this thing
perfectly to maximize earnings.
Wrong. Economics constrains, but does not determine, pricing design.
Economics will make you crash and burn if you get it wrong, but it won't
tell you how to get it right. It'll just create a canvas. Getting the pricing
model right is a positioning switch in its own right.
Creative finance people know that pricing is a positioning art. There
are many famous products that made it via the right pricing strategy.
Gillette (cheap razors, expensive blades), Xerox (originally, lease the
copier, sell the toner) and Netflix (no late fees) are examples. And of
course the whole world of $0.99, $19.99, introductory prices, artificial
scarcity limited editions, and the like are all pricing design ideas.
The entire cloud computing sector is driven by a pricing idea: pay-by-the-
sip $0.10 offerings for enterprises that are used to paying by the million.
If you want to innovate in the cellphone market, pricing should be your top concern.
I recently tried myfooddiary.com (a great calorie counting tool) for a
couple of weeks. They advertise $0.29 a day. Not the equivalent $8.70 a
month. Why? Monthly subscriptions are better, right? No. This has to do
with the psychology, calibration points and money metaphors at work in
the prospect's mind. See my "Fools and their Money Metaphors" article.
Calorie counting is a daily activity for dieters. Health and fitness run on
daily tempo mental models. The most effective pricing models are likely
to be daily. That way you can compare it to other daily health/nutrition
expenses like food purchases. Gyms would do well to shift to a daily price
advertising model. A $90/month gym membership is a $3/day
membership. So I know that it costs me about as much to ruin my healthy
day with a slice of pizza as it does to redeem it with a workout. Why would
you want me to think about my gym membership with a mental model that
contains things like rent checks and phone bills? If some gym uses this
daily price advertising idea, I demand a royalty!
Money metaphors are complex beasts. Entrepreneurs think with the
entrepreneurship (capitalist) metaphor. But to sell stuff, you must think
and talk within the customers active metaphor. Get it right, and the
pricing cylinder fires.
Are there more than 7 switches? I thought about this really hard,
especially about two very attractive candidates for an eighth switch: the
"culture" switch (going from an inchoate culture of random types of
people to a distinctive one) and an "ecosystem fit" switch (where the
corporation is socialized into a supply chain).
But here's the gist. There is a lot of overlap among the three
functions. So much so that it is hard to tell them apart, and a good deal of
potential value to integrating them. Each is a customer-facing function.
Each is about crafting messages designed to sell things. Each is about
managing a portfolio of channels. Each listens and talks to the market.
This procedural similarity is what confuses people and leads them to
misguided partitioning based on channel: marketing is about advertising,
sales is about face-to-face pitching, and PR is about getting journalists
interested. No, no and no. You can put a sales pitch in an advertisement, a
marketing positioning idea into a news story, and a newsy idea into a sales
pitch or advertisement. You can market face-to-face and sell en masse.
You can sell with a news story, and turn an advertisement into a
newsworthy event in its own right.
With old media you could at least make a medium-is-the-message
argument. Yes, in traditional media, advertising is friendlier to marketing.
Face-to-face is friendlier to sales. The news is friendlier to PR. In new
media though, these distinctions fall apart immediately. Every new
medium (blogs, Twitter, Facebook) can be personalized, customized, made
as one-way or two-way as you like, and customized for word-of-mouth or
broadcast. These media have no message. Or every message, if you like.
But the three are different. You see, the distinction lies in the type of
message. Especially with new media. They can work together to form a
whole egg, but never confuse them.
In the example I started with, Kid Red is a marketer, Kid Green is a
salesperson and Kid Blue is a PR prodigy. Marketers like themselves,
salespeople like other people, and PR people like ideas. Each turns his or
her personality into a selling strength.
Smart people who like themselves soon realize that other people like
themselves too. They understand self-indulgence. They understand what it
means to always be conscious of, and care about, how you are perceived.
All marketing messaging is based on self-perception, whether it appears as
an ad, a lifestyle-section trend story, or a sales strategy that relies on your
salespeople wearing hipster clothes. Kid Red knows this unconsciously.
He knows some people have a self-perception based on elitism. They like
the best lemonade (as opposed to, say, the cheapest lemonade or the
weirdest-colored lemonade). He probably likes the best lemonade himself.
His elitism translates into a marketing strategy that focuses on hooking
elitist self-perceptions.
Now Kid Green likes others. And she realizes that others like others
too. They like their friends, enjoy interpersonal interactions, and buy from
friends if possible. So she personalizes the interaction as much as she can.
Trust matters more than product attributes. Kid Green and her customers
would both rather buy from someone they know than someone who claims
to have the best lemonade. Even if they do have elitist tastes, they are
likely to go to a friend. Even if the stranger's lemonade booth has a queue
of a dozen people and the friend's stand has no queue.
And finally PR, the latest kid on the block (I'll explain why in a
minute). Kid Blue has a message that isn't about people at all, but about an
idea: the role of lemonade in a hot-day story. You can see why this so
easily segues into the news: you could pay the local radio station host to
talk about beaches and lemonade as part of the weather report on hot days.
Note that there is a subtlety here. Given the same power to tweak an
offering, marketers naturally customize, salespeople naturally personalize,
and PR people naturally contextualize. All three lead to differentiation.
Like many Indians, I like a dash of salt in my lemonade. If the kid in my
neighborhood notices and greets me every time with "the usual, with a
pinch of salt?" he is personalizing. But if he reacts by making up a menu
with "Regular" and "Salty" options, he's a marketer. Marketers don't care
to know you, they only care about how you know yourself. And finally, if
he runs a promotion on Diwali selling "Indian salty lemonade," well, he's
pulling off a PR stunt.
And with new media, all three can scale. To use the example closest
to traditional media, you can use variable print technology in paper direct
mail to personalize (put people's names and their kids' pictures into a
message), customize (use revealed preferences to include a beer picture in
some messages, and a wine picture in others), or contextualize (insert
excerpts of your product's reviews from media you know your prospect
consumes).
So, with that detour out of the way, how does the 3-way play out? The
computing industry offers a near perfect case study. Apple is as pure a
marketing-led company as you can hope to find. Microsoft breathes sales.
And Google is entirely a PR-constructed narrative.
Do the three selling strategies support my basic psychological claims?
Absolutely.
Apple is led by a guy who likes himself to the point that he doesn't
care at all what others think about him. And his customers are all people
who like themselves too. The best piece of evidence is probably the Mac
vs. PC ads. The entire campaign was about self-perceptions. The
product-focused ads? They sell to self-perceptions and personal identities as well.
Their effectiveness relies on people knowing that they strongly prefer
highly visual and tactile interfaces. The archetypical Apple customer is so
well-defined that he or she is practically a caricature: a dancing hipster
with eclectic musical tastes who drives certain types of cars.
Which is why Microsofts response was so effective in turn. Rather
than accept the self-perception/identity based framing, they reframed the
contest. The entire "I am a PC" campaign was highly personal. You get faux-real
people with names and faces. Not actors modeling abstract Claritas
PRIZM psychographic personas. And Microsofts entire selling strategy is
sales-driven: OEM partnerships, large enterprise sales, institutional
channel partnerships and the like; it's all 1:1 work. We all know you can
only buy Macs at certain prices from a few places. Microsoft software?
You are a complete sucker if you routinely pay sticker price. If you can't find a deal through your company or school, you are subsidizing the rest
of us. The "likes other people" bit is also at work. Most Microsoft people I've met tend to be friendly, down-to-earth and dressed-down (one sales
guy I met wore a suit but carried a backpack; a bit of gaucherie that would
probably invite a death sentence in an Apple store). Spend five minutes
talking to any Microsoft rep, and they will have ruefully but confidently acknowledged and laughed at Microsoft's brand-image issues, and made sure you like them even if you don't like Microsoft. Interacting with Apple
people in an Apple store on the other hand, is a slightly intimidating
experience, like shopping at an upscale clothing store.
And what about Google? They don't advertise. They know your name and everything about you, but they don't even attempt to personalize or
customize your experience. Instead they spread stories about great buffets,
whiteboards with "Don't Be Evil" scribbled on them, and how Brin and Page insist on less than 7 +/- 2 items on the Google home page. They
make sure that every geek knows that in PageRank, it is Page as in Larry, not as in Web page. Every marketer recoils in horror at a brand name being commoditized into the category name (Aspirin, Kleenex, Xerox). But Google doesn't care that Google has become a generic verb. Unlike
marketing and sales brand equity, PR brand equity is amplified when a
brand becomes the category generic name. And perhaps the most
compelling evidence of Google's PR-driven culture? They mangle their logo every chance they get (know any other major brand that allows this?) to reflect PR opportunities. Remember our hypothetical kid selling salty
lemonade on Diwali? Google offered this Diwali logo to Indian users in
2008:
I rest my case.
But let's get back to my colored egg argument. Why can't you do all
three? Why can't the marketing department focus on identity and customization, sales on tastes and personalization, and PR on ideas in the environment and contextualization?
There are two reasons: people and product. But first let's marshal the evidence that you cannot do all three. It is only a weak proof-by-non-existence, but strong enough for me.
And One Function Shall Rule Them All
The IMC/Whole Egg idea is largely viewed as a failed vision today.
Some are resurrecting the idea based on the convergence of media, but
unconverged media was never what held the Whole Egg idea back in the
first place. It was the mutual-exclusivity among messaging styles. A
personalized, customized and contextualized message is a complicated and schizophrenic message: "Hey Joe, how about taking some of the best lemonade in town to the beach today?" The passer-by has walked past by the time you can get that sentence out. Effective messaging is about making choices.
Today, most companies clearly reveal their selling colors. Integrated or not, there are no white eggs to be seen. It's all red-, green-, or blue-dominated.
In our computer industry example, I pointed out how the dominant
function colors (or contaminates, depending on your point of view) the
subservient functions. Another place you can find evidence of One Must
Rule dynamics is in post-sales. This is the fourth major customer-facing
function that usually goes unnoticed in discussions like this one. But it is a
selling function all the same: retention is cheaper than acquisition in
general, and customer service is the major retention (and upselling)
touchpoint. When a company has its act together and is doing post-sales well, you can ask: what differentiates a given high-quality customer-service department?
Does the service optimize on customization attributes? Lots of ability
to tweak or change your relationship? Speed for the impatient, simplicity
for the easily confused? That's marketing-driven post-sales. Does the rep know you by name, and does your call get routed to the same rep every time? Sales is in the driving seat. Receiving a lot of contextualized
offers, like relevant holiday specials? That's a PR post-sales show (this is as yet quite rare, but Amazon, another PR-driven idea company, is a good example: look at their rare advertising; it is about ideas. They fought back against the me-me-me iPad ads with the "read on the beach in sunlight" idea).
Initial Conditions and Egg Color
So why does this happen?
First, people drive the equation. The founder vision is based on the foundational selling personality. Jobs is a marketer; we know a lot about him because he likes himself (all those black-shirt stories). Ballmer is a high-energy salesman, and Gates is down-to-earth. Where Jobs appears on
a stage alone, holding an audience in thrall, Gates shared the screen with
Jerry Seinfeld, an entertainer who might have overshadowed him, and the
focus was on the banter between them. You see him in conversation (with
Warren Buffett for instance) more often than you see him speaking from a
stage.
Brin and Page clearly like their personalities to fuel the news, rather
than cultivating either a personal brand or an interpersonal style. I know nothing about either of them. I've only once seen a video of Brin addressing a classroom. Both have been reduced to the ideas they represent. Heck, Page is part of their main idea, PageRank. They've even ceded the people stuff to Schmidt, and made it hard to even tell them apart
(compared to how clearly you can tell the two Apple Steves apart, or
Gates, Allen and Ballmer apart).
Why does this matter? Like attracts like, and you get massive initial-condition effects, both in terms of customer base and employee base (and remember, many of the best employees start as passionate customers).
People who like people join Microsoft. People who like themselves join
Apple. People who like ideas join Google.
Second, product drives the equation, also indirectly via people. People
who like themselves build what they want, and then sell it to others
through the force of their personalities. People who like others do
customer-driven product development. It is blindingly obvious that Jobs
has designed every major Apple product to his tastes, and sold them to
people who share those tastes. Microsoft? Well, apparently Windows 7 "was your idea." Even before Windows 7, you could always personalize PCs
more than you could Macs. And Google, of course, is the quintessential idea product (the core ideas for both Apple and Microsoft, by contrast, came from various outside sources, which included my mothership Xerox).
Curiously, Facebook is apparently none of the above, and a true engineering culture; Zuckerberg reportedly tries to hire engineers even for non-engineering roles. So think about your preferred selling style (and everybody's got one, whether or not they are in a selling profession). Do you like selling based on self-perceptions,
starting with your own self-perception (sign: you can sell best to people
like yourself)? Join a marketing-driven company. Do you like getting to
know people and selling in personalized ways (sign: you can sell to
anybody)? Join a sales-driven company. And finally, do you like selling ideas (sign: you can sell to anyone who "gets it"; they don't have to like you or be like you)? Join a PR-driven company.
As companies mature, the original culture remains, but weakens and
diversifies. If your selling style is strongly defined, join an early stage
company with a very strong culture. A primary-colored egg. If you don't lean strongly one way or the other, join a mature company with a weakened founding culture, and lots of local silo flavors. A more colorful egg, but still with a dominant primary hue.
The Story of this Post
Besides my previous posts logically leading up to this one, some
interesting recent events led me to this conclusion. In the last month or so,
I met three people who seemed to be strongly influenced by my ideas, but
disagreed with me in very specific and puzzling ways. Thinking about it
led to a personal realization: I am an idea-driven sales guy, and PR is my medium. I almost never personalize or customize, but I often contextualize (though I don't lean towards PR in an extremist way, which explains why I am comfortable in a more mature sales-first company like Xerox). But of
the three people I met, two have been sales-first people, and one has been
marketing first (the three of you know who you are!).
So that explains that mystery. It also explains why, looking back on my personal history of selling or hiring people to sell, I've pretty much always gone with a PR-first decision. When I haven't, the decision has backfired badly. I can now read a new meaning into a really old (2007) post of mine, How to be an Idea Person. All my lemonade-stand experiences that I described there were PR-driven.
By the way, the solution to the problem posed in the title: to fix your IMC strategy, you need to paint your whole egg (or rather, stop being in denial about the fact that it is colored).
This particular one is nonsense, and falls apart at the slightest poking (we'll poke at it later in the article); I made it up for fun. Let us discuss
three real examples from business books before we develop a critical
theory and design principles. The three I will use are from The Power of
Full Engagement by Jim Loehr and Tony Schwartz, Making It All Work by
David Allen, and Listening to the Future by Dan Rasmus and Rob
Salkowitz.
The Dynamics of Energy
The Power of Full Engagement by Jim Loehr and Tony Schwartz is a
pretty neat little self-improvement book that is based on the premise that
managing energy is more important than managing time, and that we
should do so the way top athletes do: by balancing training and
performance. The book offers this quadrant diagram:
Notice one thing about the quadrants: they do not have evocative names, but mere structural labels like "high positive," alongside lists of features, which are clearly variables deemed to be of lesser importance, but too important to leave out. The diagram picks two specific attributes out of the ambiguity for highlighting: subjective intensity and pleasantness. While this is a reasonable thing to do, it is not a necessary choice. You could defend these choices, but they do not seem particularly
compelling. Why not, you might ask, "steady vs. spiky" energy, or "physical" and "mental" energies? The choices are also weakened by the low chemistry between the two variables.
You would not expect this diagram to support a conceptually strong theory, and it doesn't. The book stakes its credibility on case studies and
anecdotes, and fortunately, the structural strength of this diagram is not
tested. This is basically a quick-and-dirty conceptual framework for
organizing subject matter and ideas that are largely empirical in origin.
This should not be surprising, since the source of the book's ideas is data from the performance coaching of athletes and executives.
Overall, this one rates a C-. As I will argue, it uses a quadrant for the
wrong material, and does so poorly at that (the book itself is decent
though).
The Self-Management Matrix
Moving to a more analytical, concept-driven quadrant, consider this one, from David Allen's Making It All Work, a reflective analysis of his earlier book, Getting Things Done (GTD).
This is perhaps the most interesting one of the three. The diagram
takes on the formidable task of thinking about the future of the entire
planet. The framework is based neither on experimental/field data (we are
talking about the future, the product of thousands of trends gathering
momentum today, and uncertainties that nobody can guess at), nor is it
conceptual in origin. There is no possible fundamental theory that would tell you that globalization and labor-market organization are the two most important variables. Maybe the important ones are the evolution of Islam, water
wars or the global aging population. The choices made here are essentially
artistic ones, not statistical key indicators or first-principles self-evident
concepts. Though globalization and labor dynamics are important, they simply are not metaphysically primitive constructs like "control" or "perspective" (or "line" and "point"). Instead, they represent observable
patterns at the other end, the most complex sorts of patterns we can
process and understand, what we call mega-trends.
Which is why the labels in this diagram are crucially important. They
go beyond evocative to purely artistic. They suggest entire stories and
science-fiction trilogies. At the risk of sounding like a bad fiction reviewer, I'd call the quadrant names rich background tapestries. What's more, the supporting text provides the right sort of nuanced and ironic meta-analysis of the diagram itself.
This rates an A grade.
When Should You Use a Quadrant Diagram?
In summary, the three diagrams rate C-, B+ and A. The grades are a reflection of both the difficulty of applying quadrant diagrams to the source material in the particular cases, and the effectiveness of the actual application. Let's create a quadrant diagram to illustrate when to use quadrant diagrams, and when to do something else.
"Victim" and "micromanager" have already become the preferred terms in the discussions of Allen's diagram. I still put his diagram in the "Metaphysician" category though, since I think he is working with context-free categories (perspective/control) that are not restricted to humans. The Keirsey diagram, by contrast, is more closely tied to human psychology.
Evaluating Quadrants
The discussion so far should suggest obvious evaluation criteria. First ask the question: should this be a quadrant diagram at all? If not, probe the speaker with respect to the quadrant of the "should this be a quadrant diagram" diagram where you think the subject belongs. Ask statistical, first-principles, variety and taxonomy questions as appropriate. If quadrants are indeed appropriate, apply the second quadrant diagram to classify what you are looking at, and look for, or ask for, the right sort of
supporting argumentation. A speaker talking about global warming
swamping coastal cities and citing examples of historical floods is
providing the wrong sort of evidence: even the worst localized flooding in
known history is not the right sort of reference point. You need something
like an imaginative science fiction story.
Wrapping Up: Other Diagrams
Visual constructs live in a special sweet spot inhabited by issues that
are too complex for rigorous analysis, and too structured or impoverished
to support full-blown narrative treatments in the form of novels or stories.
Within this universe, quadrant diagrams are in the Goldilocks position.
One dimension (spectrum scales and circular life cycles) is fairly
limiting and needs a lot of verbal support. Three dimensions gets you to a
place where sheer visual processing overshadows the content of what you
are saying. There are also interesting special cases like triangles. Beyond
that, you are reduced to things that start to look quantitative or operational: multiple sliders on scales, tables, and flow charts. Beyond even that, qualitative analysis through stories and metaphor is the only thing that will work.
So appreciate the quadrant diagram. In the right hands, it defuses
polarizations, reframes arguments, separates out coherent alternatives and
But like I said, you and I are not that far removed from Ellen Sirot.
Combinatorial Consumption and Gollumization
The sheer variety of things that we consume obscures and moderates,
but does not entirely prevent, our collective Gollumization. The
subsuming envelope of consumption behaviors we adopt helps each of us
sustain an illusion of fully-expressed and uniquely individual humanness.
As a line in a recently popular song goes, "I am wearing all my favorite brands, brands, brands."
Put us all together, and you get what we call mainstream culture.
What separates us from the fully-realized Gollums is that we mostly lack
the talents to deserve complete possession. Our very mediocrity as food,
with respect to the devouring appetites of the products that choose us,
saves us. Each of our consumption behaviors feeds on us every day, but
slowly enough that we can heal ourselves and achieve a fragile stalemate
with the forces of complete Gollumization.
But the equilibrium state falls well short of fully-human.
The apparent variety and uniqueness in our personalities is as illusory
as the apparent variety in what we consume. This illusory variety in our
consumption homogenizes us, while supplying each of us with the raw material we need to construct illusory notions of our own uniqueness.
Take the choices offered by the food industry for instance:
permutations and combinations of a few pure and highly-refined (a lot of
them corn-based) ingredients, all designed to hook our three main
addiction circuits that crave salt, simple sugars and fat respectively. It
doesn't matter whether you are addicted to burgers, pizza, french fries or chips (my particular poison). To the extent that you don't cook your own meals from scratch, you have been partially Gollumized by the food industry.
Our food choices are only a subset of our overall mode of
consumption, which I call combinatorial consumption. Combinatorial
consumption reduces the universe of human potential to a deeply-
products and services that can attract a small core group of raving superfans who can organize (if you pay them a sub-minimum wage via games and coupons) an inchoate crowd into a synchronized raving tribe.
So the world of combinatorial consumption that Gollumizes our lives
as consumers is a more complete prison than the world of work that
imprisons us as producers. True escape is nearly impossible, except
through extreme acts of rebellion, self-imposed exile, and marginalized
live-off-the-land self-sufficiency.
In our consumption behaviors, unlike our production behaviors, there
is no natural source of redemption to be found. The world of
combinatorial consumption provides a pseudo-richness that is, in fact, so superficially close to the richness of nature that one of the survival strategies in the world of work, loser-dom, actually relies on discovering a
sufficiently interesting pattern of Gollumizing consumption outside the
workplace. This is the person who endures cubicle farm days,
daydreaming about the slightly richer pleasures of (say) football-fandom
on evenings and weekends.
And if you decide to fight Gollumization from within, you must venture dangerously close to the thin line dividing those fighting for their souls from those who have already lost them.
So lets talk about extreme couponers and hoarders.
Couponers and Hoarders
On one side of the line separating those fighting for their souls from those who have lost them, you have the deadly game of existential chess
played by the protagonists of Extreme Couponing, who exult every time
they game the system and manage to buy $1000 worth of groceries for
$20.
These are people who spend all their spare time collecting,
organizing, investing in, and analyzing their coupon collections, to mount
weekly attacks on grocery stores, like card-counting blackjack players at
casinos. This is what Gollumized raving-fandom looks like.
For the most part, these are not resellers or rational participants in a
supply chain; they literally stock up on 150 years' worth of hand soap and
deodorant. As with the Sirot video, there were a few glimpses of humanity
in the Extreme Couponing show (catch a rerun if you can). In one rare,
human moment, an extreme couponer managed to score thousands of
boxes of cereal essentially free, which he then gave away to the homeless.
The lives of couponers are apparently about gaming the Big, Bad
marketing machine. One extreme couponer constantly made references to
chess, beating the house, and gambling with a strategy that allows him to
win every time. He conveniently discounted his hours of preparatory labor
as a fun hobby. He clearly viewed the marketing machinery of his grocery
store as an adversary to be beaten, and himself as some sort of hacker.
You might wonder then, why does the marketing machine tolerate
such acts of sedition? Is it only because they are not worth the cost of
completely stamping out, and are unlikely to grow into wide-spread
revolt? Perhaps occasional patching of particular exploits in the arbitrary
universe of couponing is enough for the marketing machine to stay one
step ahead in the arms race?
This seductive analysis, and the implied analogy to hackers attacking
a computer system, is deeply misguided. When hackers compromise a
valuable site via an undocumented exploit, they can steal or cause millions
or even billions of dollars worth of damage. The process is in no way
controlled, let alone legitimized, by the site owners.
By contrast, the extreme couponers, if you count the value of their
time, basically make a modest living doing below-minimum-wage
marketing work for the coupon-based marketing universe that welcomes
them as raving fans.
From the point of view of the stores, far from being hostile opponents
in some asymmetric game of chess, these are merely cheap and committed
marketers. They are encouraged to model, in extreme ways, the very
couponing behaviors that the marketing machine wants others to emulate
in less extreme ways.
Which is exactly what happens. So long as you and I casually clip and use coupons, inspired by the extreme couponers in our midst, the grocery store still comes out on top. If the extreme couponers' leadership behavior were to actually lead to large-scale loss-driving sedition by too many customers, the store could easily staunch the losses overnight by making minor changes to coupon-redemption rules.
The coupon-based raving-fan gambling industry is merely a less-regulated version of Las Vegas. Instead of the temptations of low-probability jackpots, the house strategy for coming out on top merely relies on making profitable couponing so difficult, boring and time-consuming that only the destitute or obsessive, in possession of more time than money and underutilized sunk-cost home warehouse space, would attempt it.
If you need proof that this is a gambling industry rather than a hacker
subculture, you need only look at the support the stores provide to extreme
couponers. In the show, the store employees actually applaud when the extreme couponers check out with their ridiculous hauls. Letting a hard-working couponer walk away with "winnings" of $5000 worth of groceries for $200 is basically cheap marketing. The store makes more than its money back through the cheaply-inspired loyalty of the less-disciplined casual couponers, who halfheartedly mimic the extreme Gollums.
If you want more validation, simply visit a Vegas casino and wait for
someone to win reasonably big. You will see the exact same applause and
encouragement from the staff. And the applauding front-line service employees in both cases aren't faking it. They genuinely believe the little guy has beaten the house rather than provided it with cheap marketing.
If you've been reading this site for a while, you should be able to figure out why the applause is genuine (hint: losers).
On the other side of the dividing line, you have the hollow shells of
human beings profiled on Hoarders. These are human beings whose
patterns of addictive consumption have reduced their homes to toxic
garbage dumps. Literally. The interventions are triggered by the threat of
having their residential properties (you can hardly call them homes)
mystic connection created by Harry's scar, and the more prosaic one
created by the twin phoenix feathers in their respective wands, from the
same phoenix.
Any day now, I expect to see a doppelganger app on Facebook based on Likes. It will likely be named "phoenix feather."
When that happens, the black hole at the center of our universe, now
equipped with a social-graph fishing net, will begin gaining mass at an
accelerating rate, drawing more of us into the embrace of subterranean
Social Gollumization, caught up in some surreal world of addictive,
mobile-app-based coupon-trading games.
From Customers to Consumers
In a rather popular post of mine from a while back, I derived, from
Druckerian first principles, a definition of a customer.
A customer isn't a human being. A customer is a novel and stable pattern of behavior.
I have since reused that definition in other popular posts, which have
served to validate its soundness. But with each new and successful post
that rests on that definition, I become more uncomfortable about its
implications.
When I came up with the definition, I finessed its obviously dehumanizing implications with the idea that it was merely a functional definition that relied on an aspect of the underlying human being. The whole, I allowed myself to believe, was still fully human, and greater than the isolated stable behaviors of interest to the marketer.
I now believe that is a deeply disingenuous stance, based on a
perverse assumption that combinatorial consumption of a sufficient variety
of products and services is equivalent to fully-experienced humanity.
favor, causing a reaction from the class-culture matrix: increased and more
visible action by the hidden institutional order to restore the balance.
When slums start to seethe, the secret police gets going in not-very-secret ways.
If the slums win, subversive subcultures become institutionalized, and displaced institutions turn into subcultures. If the slums lose, things stay roughly the same. Either way, the scheme of social organization remains the same:
a balance of power between an institutional class-culture matrix and a
subcultural web.
This is the world we are used to, and this is the world the Internet is
changing. The subcultural web is now being made legible and governable
under the harsh light of Facebook Like actions. Just in time too, since the
returns on coarser forms of political and economic exploitation are now
rapidly diminishing. Obama's victory in the last presidential election, and the penetration of entities like Groupon into local food subcultures, are just the early signs of where we are headed.
This is a contrarian conclusion. Most commentators today are arguing
that the subcultural world is getting stronger, more incomprehensible and
increasingly ungovernable.
This is a mix of an illusion, a poor sense of history, and the effects of
a temporary learning phase on the part of class-culture matrix institutions.
The world of subcultures is about to be comprehensively explored, mapped, tamed and domesticated. The larger the subculture, the faster it will fall.
The subcultural web looks increasingly incomprehensible (and
therefore stronger and more ungovernable) to you and me as humans. It
does not seem incomprehensible if you peer at it through the increasingly
sophisticated instruments of digital governance. Facebook is to marketers
and politicians what Google Maps is to travelers.
The poor sense of history is due to the passing of the last living
generation that experienced truly terrifying levels of global conflict.
Before the Internet came along, it was the sheer number and
insignificance of local subcultures that made governance too expensive to
bother with. The risk of the rare seditious uprising could not justify the
cost of more fine-grained pre-Internet governance mechanisms.
Businesses sold a modest selection of mass-produced shoes for
instance, and produced more of the varieties that sold better. It wasn't particularly useful to know that hipsters liked Converse sneakers. For
politicians, a coarse color-coding of Red and Blue states (in America) and
a certain amount of county-level intelligence sufficed to inform election
campaigns.
The Internet though, has changed all this. It has allowed subcultures
to scale (by moving their secret-handshake institutions online), and
become more valuable in the process. While mass-manufactured celebrity
cultures have been weakening, we are not returning to pre-mass-media patterns of local culture. Instead, we've evolved to mega-subcultures that scale without developing institutions.
And at the same time, the visibility of subcultural behaviors has made
governance and exploitation much cheaper and easier. You don't have to
go to a specific neighborhood, in specific clothes, and drop specific
references. You can sit at your desk, dress any way you want, and fake
your way into any subculture. Long enough to sell a whole lot of shoes.
It will not take long for businesses and politicians to completely
master this game.
The outcome is inevitable. Subcultures will be comprehensively
tamed. Institutional sociopaths within the class-culture matrix are now in a
position to detect and take control of subcultures before they even come into existence. This will lead to control over the very inception of subcultures.
The Fabrication of Subcultures
Subcultures are vulnerable because they form around shared
common-knowledge texts (even if the shared text in question comprises
Today, the marketing machine can at best put its muscle behind a
Justin Bieber and create coarse, large-scale culture whose manufactured
nature is obvious to all but the dimmest of observers.
Tomorrow, it will be able to create tiny, niche cultures whose
members will either sincerely believe that the subculture is their own
creation, or ironically not care that it has been manufactured for them to
find through engineered serendipity.
A sort of Moore's Law of cultural fabrication will get underway, and it will eventually be capable of etching an entire subculture within a few city blocks.
Heck, let me go out on a limb and make a Moore's Law type prediction: the size and transience of the smallest manufacturable subculture will halve every 18 months. In 10 years, we'll have a "microprocessor moment": the ability to etch culture at a one-city-block-for-one-month level of resolution. Working in concert with neo-urbanists, the new marketers will be able to pack a thousand domesticated hyperlocal subcultures into every major city, and entirely reprogram it culturally every few months, to sell a new crop of products and services.
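As a back-of-the-envelope check on that prediction, here is a minimal sketch of the compounding arithmetic; the only inputs, the 18-month halving period and the 10-year horizon, come from the text, and the function name is mine:

```python
# Back-of-the-envelope arithmetic for the halving prediction:
# if the smallest manufacturable subculture halves every 18 months,
# what fraction of its current size remains after 10 years?

def shrink_factor(months: float, halving_period_months: float = 18.0) -> float:
    """Fraction of the original size remaining after `months`."""
    return 0.5 ** (months / halving_period_months)

remaining = shrink_factor(10 * 12)  # 10 years = 120 months, ~6.7 halvings
print(round(remaining, 4))  # ~0.0098, i.e. roughly a 100x reduction
```

Roughly a hundredfold reduction in a decade, which is the kind of compounding that would put a city-block-sized, month-long subculture within reach.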
That future (either utopian or dystopian, depending on where you stand) is a ways off, but we'll get there.
Three of the four companies that dominate the Web today, Facebook (Like patterns), Google (search patterns) and Amazon (purchase patterns), are equipped with extremely powerful cultural early-warning radars, based on massive data flows. Data flows so massive that only large institutions within the class-culture matrix will have the power to crunch them into usable intelligence.
Apple, the fourth company, curiously does not have the capacity to lead the zeitgeist this way. Their historic competitive advantage, the mind of Steve Jobs, has turned into a serious weakness with his passing.
Because he was preternaturally good at following the zeitgeist, Apple
squandered its potential to lead it. A key kind of cultural early-warning
radar (based on music tastes) was ceded to startups. It was cheaper to let Jobs stay one step ahead of other gut-driven pre-Internet marketers than to invest in assets that could be exploited by less-talented post-Internet data-driven marketers, capable of staying ahead of culture itself.
This is why Bruce Sterling was right to label Apple an example of Gothic High-Tech zeitgeist-following rather than zeitgeist-leading, but I believe he is wrong in thinking that all marketing is going to be this way; much of it is now going to get ahead of the zeitgeist and actively shape it, within the decade.
As a revealing sign, subcultures have already
been subverted so completely that they voluntarily self-document their
doings online on privately-owned platforms. Every party or group lunch is
now likely to be photographed, video-taped and archived online as part of
collective memory. Group-life streams and grand narratives are out there,
for the reading.
If you're not paying, you're the product. Indeed.
But the nitty-gritty aside, the conclusion is inevitable. The subcultural
web is now open for colonization. It will retain a potential for very coarse
and rough kinds of subversion (#OccupyWallStreet is sort of the swan
song of subcultural power). This potential will soon peak, and then begin
to decline.
The Fortune at the Bottom of the Attention Pyramid
How big is the potential value of subcultural attention mining? The
rumored valuation of the Facebook IPO provides a hint: $100 billion. That
suggests a market that is big enough, when you consider all players,
to move global GDP a few percentage points. Is that a lot or a little?
Depends on your frame of reference.
One way to frame the value is to imagine a pyramid of social
groupings, representing various levels of social attention (not attention
devoted to the non-human world).
At the bottom you have 7 billion little pools of individually-directed
attention. At the very top, you have a single point, the group called
humanity. There are moments, like 9/11, when all available attention
floods to the top.
One organizational rung below, you have perhaps 18 groupings at the
coarsest resolution level of the global class-culture matrix: the three basic
social classes (rich, middle-class, poor) times the half-dozen or so major
civilizations.
Then you have perhaps 700-odd nation-class groupings, and so on
down, past cities, kinship groups, traditional family-societies and various
other kinds of groupings that were long ago domesticated and subsumed
within the class-culture matrix.
At some level of resolution, past a gray transition zone, the class-culture matrix gives way to the untamed subcultural web. The gray zone is
moving relentlessly downwards, domesticating the subcultural web and
subsuming it within the class-culture matrix.
This is not like the fortune at the bottom of the C. K. Prahalad
pyramid. This is the cultural equivalent of the "plenty of room at the
bottom" remark by Richard Feynman, which serves as inspiration today
for the entire field of nanotechnology.
Except that there isn't plenty of room. Though the social space
occupied by the subcultural web is vast, it is being domesticated so fast
that we can expect complete colonization within a decade. Recall what
happened with the nineteenth-century railroad boom in America.
Settlement processes that had been crawling painfully along for three and
a half centuries, suddenly accelerated and finished the job within a few
decades (the marker was a major 5-year depression that began in 1873).
So from that perspective, $100 billion seems both reasonable and not
particularly large. It seems like a market that should take no more than a
decade to occupy. At that point, I'd expect Facebook to turn into a mature
company with declining margins.
At that point, we will hit the limit I called Peak Attention. Once all
subcultural attention is mined, only two kinds of attention will remain: the
stuff. Don't buy cheap. Look for deals, but don't let deal-seeking make
you compromise on quality or wait too long. It will cost you more in the
long term. Sterling's examples are obvious and physical: a good quality
bed and work chair for instance. You might spend up to 8 hours a day in
each; that's 2/3 of your life.
I own both an excellent bed and a great chair. I am not sure the latter
was a good investment for me in particular, since I spend most of my
sitting hours in coffee shops, but in principle, it is a great example. Other
examples include: a great kitchen knife, a nice car if you spend many
hours commuting per day, plenty of quality gym clothes and a membership
at a good gym, so you never have an excuse not to work out. Good quality
produce to cook with.
If you work mostly at your desk, a large monitor. Heck, multiple
monitors. The best keyboard.
Sterling also has ideas on what not to buy, or get rid of if you already
own it. Expensive china sets for example, if you never do any formal
entertaining. Things you think are assets but are actually liabilities. Things
you are being unnecessarily sentimental about.
Sterling's ideas seem to have been independently rediscovered by a
growing segment of the middle class. Hence the phenomenon of "trading
up" (the book of the same name has lots of data and anecdotal evidence for the trend).
I think of these sorts of examples as physical furniture. Stuff in your
life that can make it hoarder hell if you buy the wrong things, or heaven if
you buy the right things.
$2700 Worth of Acting-Dead
My acting-dead behaviors this year were more about mental furniture.
Here's the breakdown of the $2700 that I eventually spent when I stopped
acting dead:
1. About $250 to get Tempo converted to epub and Kindle formats
Above all this, the middle class script involves a certain aversion to
talking about or dealing with tough financial decisions. It is considered
unseemly. Decent people don't talk about money, let alone risk. If you
work hard and play by the rules, the money should take care of itself. If it
isn't doing that, you are probably looking for dishonest and exploitative
shortcuts like the evil rich, or doing dumb things like the stupid poor, and
deserve what you get.
If you have to budget and watch your money too closely, you were
probably being irresponsible with credit cards and deserve your pain. For
decent people, paycheck-in, on-time-credit-card-payments-out should
work smoothly on autopilot.
And above all, you don't speculate. If forced to speculate by pensions
being turned into 401(k)s (American stock-based defined-contribution
retirement plans), decent people leave the actual risk-taking decisions to
professional fund managers, telling themselves things like "you cannot
beat the professionals."
So what will happen to people operating by such obviously dangerous
attitudes in difficult times?
Turns out, we've been here before. They'll die out.
Middle Class Declines in History
This is not a new phenomenon. Middle classes have
appeared and disappeared several times before in history.
Tennessee Williams' plays (A Streetcar Named Desire, The Glass
Menagerie) tell exactly such poignant fall-from-the-middle-class stories
set in early 20th century America.
Early twentieth century British novels set during the decline of empire
(such as Agatha Christie novels), often contain aging spinsters desperately
keeping up appearances and surviving on small incomes derived from
being companions to richer old women.
You can also find examples outside the Western world. In nineteenth-century India, for example, the Urdu- and Sanskrit-literate middle
classes, which had grown around the courts of the Nawabs and Maharajas
in older medieval cities, went into severe decline. The new English-literate
middle class began supplanting them in the newer cities of the British Raj.
I suspect similar middle class declines can be found in the Middle
East (during the Ottoman decline), China (after the Boxer Rebellion) and
Latin America (after the Monroe Doctrine perhaps? I am not too familiar
with Latin American history).
When a middle class goes into decline, you get a large segment of the
population engaging in a desperate scramble to keep up appearances,
while switching from collective-norm-based to individual-risk-based
financial thinking.
Keeping up with the Joneses becomes far harder, because the financial
support starts to collapse at different times for different people, but
everybody agrees to pretend that everybody is in it together. For the
current American decline, there have already been a couple of good
movies chronicling it: The Joneses (2009) and The Company
Men (2010).
A norm-based social class will persist with disastrous financial
choices long after the secure financial environment, on which its scripts
are based, collapses. Simply because membership of the class is the source
of all social identity and access to social capital.
Except that the social capital, which the members are clinging to, is
eroding rapidly as well. There is no point in two non-swimmers with
immense trust between them, clinging to each other while drowning.
Mutual trust and social capital within a group only mean something when
there are objective reasons to expect a prosperous future of indefinite
length stretching out ahead.
When this is not the case, it makes sense to cash out your hard assets,
rethink your financial life more directly, write off investments in the social
capital of the declining class, and look for an alternative emerging class to
join.
Trading Up and Fragmentation
As the picture I started with shows, a key effect of the trading-up
phenomenon is that it causes serious fragmentation. The social landscape
starts to get restructured along new lines. Cultural geography changes, as
governing financial scripts change from one city block to the next (you see
a lot of this in San Francisco in particular).
The transition from a monolithic middle class to one of many trading-up classes is a very tough one. First, you have to go through a period
where you manage your finances very directly, with no help from a script
that simplifies decision-making.
Then you have to evaluate various alternative trading-up scripts to
figure out which ones might actually fit your situation and encode
meaningful adaptations to the new environment. Not every lifestyle design
script is likely to work.
In the last few months, going back to the broader context of my three
examples, I've done a good deal of very direct financial decision-making.
I've made up detailed scenario-planning spreadsheets, risk models and the
like. I've done minute tracking of spending (only for a month, to sort of
calibrate; it is far too difficult and depressing to do on an ongoing basis).
Here's the funny thing: doing this kind of very direct financial
management around my small-business book-keeping felt good. It felt
smart, like I was learning valuable new skills. But doing it around personal
and household finances still felt somehow dirty. That's how deeply
embedded the middle class script is.
The three examples were interesting and particularly tough because
they bridged the two mental models: my healthy business mental model
(within which the right spending decisions would have been easy) and my
toxic middle-class-paycheck mental model (within which they were
unnecessarily hard).
week), make the few big moves, and spend the rest of your life waiting for
the Big Event signifying that it is working, while slipping slowly into
destitution and denial. I see a lot of people in this mode right now.
They've never really stopped to analyze the logic of the script, but
accepted it on faith based on assurances from a few for whom it has
worked.
Quick-change artistry is, of course, the card I think you should pick. It
is a turbulent, experimental approach, where there are no absolute life
truths, no permanent commitments to any script, no one-book formulas,
and no easy no-brainer decisions.
It involves trying different trading-up patterns until you find one that
works. It involves a commitment to stop acting dead. It involves a
conscious decision to leave the middle class.
Or you can wait for all the king's horses and all the king's men to put
Humpty Dumpty together again.
This piece is sort of a continuation of my Las Vegas Rules series, but
I've abandoned the attempt to keep a coherent larger narrative going.
This is going to be more of an occasional diary-entry sort of thing.
Out of all this scratching, four broad narratives have emerged that can
be arranged on a 2×2 with analytic/synthetic on one axis and
optimistic/pessimistic on the other. Three are rehashes of older narratives.
But the fourth, the Hydra narrative, is new. I have labeled it the
Hydra narrative after Taleb's metaphor in his explanation of anti-fragility:
you cut one head off, two emerge in its place (his book on the subject is
due out in October).
The general idea behind the Hydra narrative in a broad sense (not just
what Taleb has said/will say in October) is that hydras eat all unknown
unknowns (not just Taleb's famous black swans) for lunch. I have heard at
least three different versions of this proposition in the last year. The
narrative inspires social system designs that feed on uncertainty rather
than being destroyed by it. Geoffrey West's ideas about superlinearity are
the empirical part of an attempt to construct an existence proof showing
that such systems are actually possible.
It is important to note that the decade itself has not been exceptional.
As Fareed Zakaria noted in The Post-American World, we simply hear
about big, unexpected, global disasters much faster than we used to, and in
much greater (and more gory) detail.
If you don't believe me, simply take an honest inventory of any other
decade in the last century (you could go further back if you know enough
history). You'll find big natural disasters and political cataclysms in every
decade.
What has been exceptional about the 2002-2012 decade is not what
happened, but our intellectual response to it. The responses go beyond the
well-known ones in the timeline above. There appear to be hundreds of
people thinking seriously along such lines and taking on significant
projects related to such interests.
In the last year alone, I've been introduced to two such people in my
local virtual neighborhood: Jean Russell (who coined the word thrivability
as an alternative to sustainability) and Ed Beakley, who has been studying
preparedness for unconventional crises through his Project White Horse
since Katrina.
You might say a major movement is afoot. Whether it will go
anywhere is unclear.
An Exceptional Response to an Unexceptional Decade
Two things are responsible for our exceptional response as a global
culture.
The first is simply the slow decline of Americas relative role in
global affairs, and the corresponding rise of a chaotic political energy
around the globe, at all spatial frequencies from neighborhood block to
planet-wide. It feels like there's nobody in charge. This feels both
liberating and scary.
In the bottom left quadrant, you can use the idea to understand why
some grand social engineering projects fail.
In the bottom right, you can use it to understand why other projects
succeed.
In the top left, it suggests design principles for resilient survival.
And in the top right, the interesting new quadrant, it suggests the
right questions that need to be asked in order to test, and if
possible realize, Hydra narratives.
It is this last project that interests me. Some questions that occur to
me include:
But to ask such questions, you must first give up the near-religious
reverence for ineffable bottom-up network models and the idea that
attempting to understand them clearly within a single head rather than a
swarm-head is a sinful act. It is merely a tricky one.
The barbarians are about to return to their proper place at the helm of
the world's affairs, and the story revolves around this picture:
nomads. To the extent that they live around their main plant food sources,
they are like proto-sedentary cultures. These are the lifestyles Veblen
labeled "savage."
The biblical archetype for hunter-gatherers has traditionally been the
Garden of Eden. Savages are minimalist predators, and simply live off the
bounty of nature, in areas where it is effectively inexhaustible. To the
extent that their gathering has evolved into agriculture, it is slash-and-burn
agriculture based on immediate consumption and natural renewal rather
than accumulation and storage of vast quantities of non-perishable food
over long periods of time. You could call their style of farming nomadic
farming, since they move from cultivating one cleared patch of forest to
the next, rather than staying put and practicing crop rotation in a small
confined (and owned) patch of land.
For the record, I think the Garden of Eden story has it right. Savagery
is the most pleasurable state of existence, if you can get it (until you annoy
the witch doctor or get a toothache). Not in the sense of the "noble savage" (an
idea within what is known as romantic primitivism that is currently
enjoying a somewhat silly revival thanks to things like the Paleo diet), but
in the sense of what you might call the idle savage state. In some ways, an
idle savage is what I am, in private, on weekends.
Though they don't play a big part in this story, don't underestimate
what they did when they were center-stage: fire, spoken language, art and
archery are all savage inventions. Wisely, they didn't get addicted to
invention and stayed idle.
Idle savagery is basically unsustainable today unless you retreat
completely from the mainstream, so though I'd like to be an idle savage,
I've settled for the compromise state of being a barbarian. That's where it
gets interesting.
The Illegible Barbarian
Pastoral nomads need, and develop, a good deal more technology, and
in areas that matter to them, are usually ahead of settled civilizations. They
are not quite as predatory as hunter-gatherers. Unlike hunter-gatherers,
they don't just follow prey around. They consciously domesticate and
manage their herds. Rather than let the herds move by instinct, they direct
their migratory instincts (hence herding). They don't just occasionally
slaughter what they need for food and clothing. They develop dairy,
husbandry and veterinary practices as well. You could say they cultivate
animals (a more demanding task than cultivating plants). The biblical
reference point is of course Abel the shepherd, of killed-by-Cain fame (at
one point I was enamored of Daniel Quinn's reading of the Cain-Abel tale
in Ishmael, which I now think is completely mistaken, and a case of
confusing hunter-gatherers with pastoral nomads).
I've already argued that barbarians were responsible for the
development of iron technology. I'd also credit them for the invention of
the wheel, chariots, leather craft, rope-making, animal husbandry, falconry
and sewing (via sewing of hide tents with gut-string and bone needles,
which clearly must have come before cloth woven from plant fibers
needed sewing). Basically, if anything looks like it came out of a mobile
lifestyle, pastoral nomads probably invented it. At a more abstract level,
barbarian cultures create fundamentally predatory technologies:
technologies that allow you to do less work to get the same returns, freeing
up time for idleness. What Hegel would have called "Master"
technologies. The barbarian works to earn the idleness which the luckier
savage gets for free.
Barbarian technologies, like savage technologies, are fundamentally
sustainable, since using them tends to fulfill immediate needs rather than
causing wealth accumulation. The connection to mobility is central to this
characteristic: nomadic cultures do not accumulate useless things. It is a
naturally self-limiting way of life. If it doesn't fit in saddlebags or is too
heavy to be carried by pack animals, it isn't useful.
Mobility is also the fundamental reason why barbarian cultures are
illegible (see my post A Big Little Idea Called Legibility) to civilized ones
in literal and abstract ways.
They self-organize in sophisticated ways, but you cannot draw
organization charts (the Romans tried and failed).
For most of history, they've owned most of the map of the world, yet
you cannot draw boundaries and identify proto-nations, since they are
defined by patterns of movement rather than patterns of settlement.
They practice the most evolved forms of leadership, but actual leaders
change from one situation to the next (a fact which confused the Roman
army no end when it fought them).
Pastoral nomads come in two varieties, which Veblen called lower
and higher barbarian stages. Lower barbarian pastoral nomads include
groups like the 12th century Mongols. Higher barbarian stages look like
settled civilizations on the surface, but (and this was Veblen's enduring
contribution in his book) are characterized by a vigorous ruling class, with
roots in pastoral nomadism, that generally maintains at least a metaphoric
version of that lifestyle.
Among the more obvious symbols, as late as the 19th century, the
higher barbarians often maintained herds of unnecessary domestic
animals, hunted for sport (rather than for sustenance, unlike the hunter-gatherers) and generally spent their wealth recreating idealized pastoral
nomad landscapes.
When the vigorous leaders of a higher barbarian culture start to settle
down like their subjects, you get civilization.
The Stationary Civilized
Veblen's notion of "civilized" roughly corresponds to agrarian (or
more generally, production-accumulation-based) cultures governed by
social contracts and non-absolute rulers. By this measure, parts of the
Near East became "civilized" by about 1500 BC (I regard the Hittites as
the first true examples), followed by southern Europe around 800 BC and
northern Europe around the time of the Magna Carta.
Asian cultures are much harder to track: Veblen considered them all
higher barbarian, but depending on how you read the history of Persia,
China and India, they've oscillated between higher barbarian and
civilized over the centuries (for instance, the growth and consolidation
Genghis Khan able to take over China, and how did his grandson
successfully create the Yuan dynasty? How did Arab armies conquer the
vastly more civilized and sophisticated Persian society? How did Turks
pretty much take over most of South Asia, the Middle East and North
Africa? Going further back, how did the Proto-Indo-Europeans (or
"Aryans") take down the entire Bronze Age family of civilizations?
Second, given the astounding win record of the barbarians against
the civilized, how come history isn't written from the point of view of
the pastoral nomads? Why aren't the histories of Egypt, Greece, Rome,
Babylon, Persia, India and China sideshows, with pride of place being
given to Mongols, Turks, Arabs and Northern Europeans (pre 1000 AD)?
Isn't history supposed to be written by the winners?
Refinement and Stupidity
Here's the answer to the first question: barbarians are, on average,
individually smarter, but collectively stupider, than a thriving settled
civilization.
One-on-one, a lower barbarian can out-think, out-fight and out-innovate a civilized citizen any day.
But a settled civilization at its peak can blow a lower barbarian
civilization away. Not least because at the very top, you still have Veblens
uncivilized higher barbarians (or, to use the Ribbonfarm term,
sociopaths). But once it begins its decline, the greater live intelligence of
the barbarians begins to take effect.
The explanation for this contradiction is a very simple one: by
definition, civilization is the process of taking intelligence out of human
minds and putting it into institutions. And by "institution" I mean
something completely general: any codified organizational form based on
writing will do. Writing, as Plato noted in Phaedrus, is the main medium
through which intelligence passes from humans to institutions.
[Writing] will introduce forgetfulness into the soul of
those who learn it: they will not practice using their
has been in decline. The process reached its peak during the Cold War. In
America, the Organization Man threatened to squeeze higher barbarians
out of the capitalist world, while in Soviet Russia, forced settlement and
collectivization in Siberia and Mongolia threatened to corral the last of the
wandering lower barbarians.
It almost seemed like the fountain of barbarian culture at which
humanity drinks to renew itself was about to be completely exhausted
once and for all.
The moment, thankfully, passed. The Gervais Principle kicked in to reinvigorate capitalism, and the High Modernist doctrines of the Soviet state
collapsed (followed by a remarkably quick return to pastoral nomadism in
Mongolia and Siberia).
That was just the opening act. Today as institutions of all sorts
crumble and collapse, and the written word becomes a living, dancing,
hyperlinked thing that would have made Plato happy, the barbarian is set
to return. I'll blog about this in a future piece, when I extrapolate this
speculative history into a speculative future.
Note, some of the ideas in this post were inspired by Seb Paquet's
two-part series on how social movements happen. I don't entirely agree
with Seb's model, but you should check it out if these things interest you.
This was also partly motivated by the impending April 12th release of
Francis Fukuyamas new book, The Origins of Political Order. I wanted
to get my own thoughts on the subject down before tackling his. His first
book, The End of History and the Last Man, was in many ways my
personal introduction to this sort of subject matter. And no, I am not a
neocon.
Glossary
Ancient Eye: An approach to perceiving reality that precedes modern
categories of professionalized disciplinary knowledge such as science,
engineering or art.
Babytalk (GP): The language spoken by Sociopaths and Losers to the
Clueless.
Barbarian: On ribbonfarm, a term of approbation, while "civilized" is
an insult. Somebody whose lifestyle pattern is not based on accumulation
or externalization of cognition into institutions. The definition is based on
Thorstein Veblen's model in The Theory of the Leisure Class.
Baroque Unconscious: The idea that technology can be understood as
an entity that behaves as though it is a sentient agent unconsciously
groping towards realization of its own extreme baroque form.
Clueless (GP): Employees who overperform and believe in the
benevolence of the organization.
Crucible Effect: A crucible is a group of optimal size for doing creative
information work. The number of people is about 12. It is too large to be
managed and too small to split up, balancing on the brink of chaos.
Members of crucibles focus collective attention into an arms race of
constant practice, backed by an established culture around its particular
kind of information work. The escalation into increasingly more refined
crucibles allows for the 10,000 hours of deliberate practice that is needed
for elite performance.
Curse of Development (GP): If the situational developmental gap
between two people is sufficiently small, the more evolved person will
systematically lose more often than he/she wins.
Evil Twin: Somebody who thinks exactly like you in most ways, but
differs in just a few critical ways that end up making all the difference.
Future Nausea: The subjective reaction to being exposed to unnormalized futures. See Manufactured Normalcy Field.
Game Talk (GP): The language spoken by Losers among themselves.
Gervais Principle (GP): The conjecture that Sociopaths promote the
Clueless to middle management and fast-track a subset of enlightened
Losers to upper management as new Sociopaths.
Gollum Effect: The reduction of a consumer to a subhuman creature
defined purely by patterns of consumption. Verb form: gollumize.
Hackstability: A postulated stable equilibrium state created by a
balance of forces: exponentially increasing technological capability and
entropy-driven technology collapse.
Hall's Law: A speculative Moore's Law analog for the 19th century,
based on the growing sophistication of manufacturing as measured by
progress in creating interchangeable parts.
HIWTYL (GP): Heads-I-Win-Tails-You-Lose. The general design
principle behind incentive structures designed by Sociopaths.
Legibility: A system is legible if it is comprehensible to a calculative-rational observer looking to optimize the system from the point of view of
narrow utilitarian concerns and eliminate other phenomenology. It is
illegible if it serves many functions and purposes in complex ways, such
that no single participant can easily comprehend the whole. The terms
were coined by James Scott in Seeing Like a State. Illegible systems are
generally more robust than legible ones, and Scott's model is mainly about
the failures caused by imposing legibility on an initially illegible reality.
See State.
Loser (GP): A bare-minimum effort, rationally disengaged employee
who seeks fulfillment outside of work.