Ten Years To The Singularity
If We Really Really Try
and other Essays
on AGI and its Implications
Ben Goertzel
Table of Contents

Preface
Next Steps For And Beyond Humanity
The Top Priority for Mankind
Why AGI?
Why the Time is Ripe for AGI
Why So Little Work on AGI?
Ten Years to the Singularity If We Really Really Try
How Might a Positive Singularity Get Launched In 10 Years From Now?
Does it Take a Manhattan Project?
Keeping it Positive
Ten Years to the Singularity?
How Long Till AGI?
Two Earlier Surveys of Expert Opinion on the Timing of Human-Level AI
The Questions Asked of the Experts
What the Experts Said About The Timing of Human-Level AI
What Kind of Human-Level AI Will Come First?
What Kind of Technical Approach Will First Achieve Human-Level AI?
Impacts of AGI
Conclusions of the Survey
What Is General Intelligence?
Defining General Intelligence in Theory and Practice
Why an Intelligence Explosion is Probable
The One Clear Prerequisite for an Intelligence Explosion
Distinguishing the Explosion from the Build-Up
What Constitutes an Explosion?
Defining Intelligence (Or Not)
Key Properties of the Intelligence Explosion
Objection 1: Economic Growth Rate and Investment Availability
Objection 2: Inherent Slowness of Experiments and Environmental Interaction
Objection 3: Software Complexity
Objection 4: Hardware Requirements
Objection 5: Bandwidth
Objection 6: Lightspeed Lags
Preface
I've been thinking about Artificial General Intelligence (AGI) and the future of human and transhuman intelligence since I was 4 or 5 years old and first encountered such ideas in SF novels. As time went on my thinking veered from the purely speculative and futuristic to the practical and engineering-oriented; in the last two decades the pursuit of real-world AGI systems in the here and now has been the central theme of my career (most recently largely via the OpenCog project). But I've never taken my eye off the broader potential AGI holds, and its connection to various other aspects of current and future life -- i.e. to almost anything you can think of!
I thoroughly believe that the transition from human intelligence to transhuman artificial intelligence is going to be a far bigger change than any transformation in the history of humanity -- more akin to the transition from bacteria to humans, or from rocks to bacteria, if one wants to draw historical analogies. In the shorter term, there will be plenty of dramatic implications for human society and psychology. And in the longer term, the implications will go way, way beyond anything we can imagine. Both the shorter-term and longer-term implications are amazing and exciting to think about.
During the period 2009-2011, when I had a bit more time for writing than I do now, I wrote a series of essays for H+ Magazine -- the online zine I co-edit -- on the future of AGI, the Singularity and related topics. While these don't exactly flow together like chapters of a single-focused book, they are preoccupied with a common set of themes and present a coherent point of view. So I have gathered them together here in a single volume, along with a handful of non-H+-Mag essays from the same period -- a couple that were published elsewhere, and a couple that languished on my hard drive unpublished till now.
The cover art for the book was done by Zarko Paunovic.
1
Next Steps For And Beyond
Humanity
Some introductory musings
We humans have come such a long way in such a short time! Yet, we're just at the beginning of our evolution, with extraordinarily dramatic transformations soon to come.
According to our best understanding, the universe has existed for around 14 billion years, Earth for around 4 billion, multicellular life for around 2 billion years, and humans with roughly modern-style brains and bodies for only a couple million years or so. Human civilization dates back only 10,000 years or so, and much less on most of the planet. In our relatively short time of existence, we humans have invented or discovered an amazing variety of stuff: language, mathematics, science, religion, fashion, sports, social networking, romance, art, music, corporations, computers, spacecraft... well, you get the idea.
The pace of our invention and discovery has been accelerating
fairly consistently. There was a lot more new creation in the
human world from 1000-2000 A.D. than from 0-1000 A.D.; and a
lot more from 0-1000 A.D. than from 1000 B.C.-0 A.D., and so
forth.
Certainly history has had its ups and downs, and the rate of
human progress along various axes is difficult to quantify. But
qualitatively, the accelerating rate of progress is plain to see. I've even felt it in my own lifetime: the pace at which new gadgets and scientific discoveries appear now is incredible compared to when I was a kid in the 1970s.
What will the next step beyond this be? Where will the acceleration of technology ultimately lead us? Of course it's impossible for us to say, at this point. By analogy, I like to imagine the first humans to create a language with complex sentences, sitting around the campfire speculating about where the wild new invention of language is ultimately going to lead them. They might have some interesting insights, but would they foresee mathematics, Dostoevsky, hip-hop, Photoshop, supersymmetry, remote-piloted cruise missiles, World of Warcraft or the Internet?
However, basic logic lets us draw a few conclusions about the nature of a world where powerful AGIs exist. One is that, if humans can create AGIs more intelligent than themselves, most likely these first-generation AGIs will be able to create second-generation AGIs with yet greater intelligence, which in turn will be able to create yet smarter AGIs, and so on. This is what mathematician I.J. Good, back in the 1960s, called the "intelligence explosion". The dramatic potential consequences of this sort of intelligence explosion led science fiction writer Vernor Vinge, in the 1980s, to speak of a coming technological "Singularity".
Mathematically, one interpretation of the term "Singularity" would be a point at which some curve or surface changes infinitely fast. Of course, the rate of technological change is unlikely to actually approach infinity. Though the physical constraints we now perceive may well be surmountable, there are probably other limitations beyond our current understanding. But even so, the intelligence explosion may well bring a Singularity in the sense of a dramatic, incredibly hard-to-predict qualitative change in the world we live in.
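To make the mathematical sense of the term concrete, here is a standard toy example (my illustration, not part of the original essay): a quantity whose growth rate is proportional to the square of its current value reaches infinity in finite time.

\[
\frac{dx}{dt} = x^2, \qquad x(0) = x_0 \quad\Longrightarrow\quad x(t) = \frac{x_0}{1 - x_0 t}
\]

The solution blows up as t approaches 1/x_0: a genuine mathematical singularity. Talk of a technological Singularity gestures at a curve of this shape, with the understanding that physical limits will cut off the divergence before any actual infinity.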
A lot has been written about the Singularity and accelerating technological change in recent years. Famed inventor Ray Kurzweil, in particular, has done a huge amount to bring the Singularity meme to the world's attention, via his 2005 book The Singularity Is Near and his numerous speeches and articles. (I
2
The Top Priority for Mankind
In early 2009 I was contacted by some folks associated with the World Economic Forum -- best known for their annual conference in Davos, Switzerland -- to write an article for distribution at the Summer Davos World Economic Forum that summer in Dalian, China. The attendees at Davos are the world's political and business movers and shakers -- top politicians and CEOs and philanthropists and so forth. My contribution was to be included (in Chinese translation) in a special issue of Green Herald Magazine, comprised of articles on the theme of "Mankind's Top Priority".
I like to get right to the point, so the title of my article was: "Mankind's Top Priority Should Be the Creation of Beneficent AI With Greater than Human Intelligence". The Summer Davos Forum is also called the "Annual Meeting of the New Champions", so I thought this was particularly appropriate. The "New Champions" phrase was presumably chosen to refer to the leaders of China, India and other emerging markets. But I wanted to question the assumption that the new champions leading humanity onwards would always continue to be humans.
Here is a section of what I wrote for the Davos participants:
What should mankind's top priority be going forwards? This may seem to be a hard question, because there are so many different problems in the world, and also so many different opportunities for positive development. But the answer I'll propose is a very simple one. Our top priority should be the creation of beneficent artificial minds
3
Why AGI?
Before going any further, I suppose I'd better deal with the "AGI" issue. Everyone knows what AI is, so why do I insist on using the less familiar term "AGI"?
This tweak of terminology points at deeper issues.
The basic reason for talking about AGI is: Modern AI has a split personality.
In movies and SF novels, "AI" refers to robots or computer programs with a high level of intelligence and autonomy -- R2-D2 or C-3PO or HAL 9000 or the Terminator or whatever.
However, in university computer science departments and industry research labs, "AI" usually refers to something far more prosaic -- mainly the creation of highly specialized software carrying out particular functions that, when humans do them, are generally considered to require intelligence. The machine at the paint store that mixes your colors into the white paint, for instance, is AI at work -- AI so pervasive that we don't even think of it as AI when we're at the paint store. In practice the term "AI" has come to refer mainly to these very specific sorts of intelligent programs, which are a far cry from an AGI: an actual thinking machine, capable of integrating information coming in simultaneously from a number of varied and specialized sources.
So far, the only AGI in existence is the human brain. To many, the promise of building a software-based thinking machine requires reverse-engineering our real brains, setting up some kind of real-time brain simulation before we start feeding questions into it. I do not believe this is the proper
it's not a free and flexible interaction like when a person picks up an object with their hand, or when a dog picks up a stick with its mouth. Robot simulation software doesn't have this limitation -- but it's slow and not very scalable. Nobody has yet built a massive multiplayer robot simulator. But one thing is sure -- they will.
Cognitive science, as exciting as it is and as fast as it has progressed, is still a lot stronger on structures than dynamics. In 1991 I joined a fantastically fun group of researchers called the Society for Chaos Theory in Psychology, devoted to exploring implications of nonlinear dynamics for intelligence. The group still exists, and a portion of it, centered around Fred Abraham's lively Winter Chaos Conference in Vermont, maintains the revolutionary spirit the group had in the early '90s. One of the core ideas of this group is that most of the important features of intelligence are not static structural things, but rather dynamic attractors -- complex emergent patterns of system dynamics that are neither stable nor repetitive nor random, but display more complex temporal structures. More and more data supporting this view has emerged, yet the mainstream of cognitive science hasn't yet incorporated these ideas into its thinking. The boxes and links in Figure 1 above aren't discrete brain regions with bundles of wires between them -- in large part, they're dynamically assembled, continually self-organizing networks of neurons that maintain themselves via complex nonlinear "chaotic" dynamics.
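To get a concrete feel for "neither stable nor repetitive nor random", here is the standard textbook example from nonlinear dynamics (my illustration, not from the original essay): the logistic map, a one-line deterministic system whose trajectory never settles down, and whose sensitivity to initial conditions is easy to see by running two almost-identical starting points side by side.

```python
# The logistic map x -> r*x*(1-x): the simplest textbook example of
# deterministic chaos. At r = 4 the trajectory is aperiodic, and two
# nearly identical starting points diverge rapidly.
r = 4.0
x, y = 0.3, 0.3000001   # two trajectories, differing by one part in three million
for step in range(25):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print(f"after 25 steps: x={x:.6f}, y={y:.6f}")  # typically far apart by now
```

Brain dynamics are of course vastly more complex than this one-dimensional map, but the claim in the paragraph above is that the same qualitative phenomenon, writ large, underlies intelligence.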
Much of the reason cognitive science hasn't yet embraced nonlinear dynamics is that it's really hard to measure. We don't have good enough neuroimaging tools to measure the formation and interaction of chaotic attractors in the brain, and dynamical phenomena are pretty hard to study in psychology lab experiments. But as neuroimaging improves, our understanding of dynamics in the brain will improve, and cognitive science will get richer and deeper.
when the toys it will be interacting with are a bit less potent. It
would be a little like giving a toddler a loaded gun, but without
the risk of the toddler shooting itself.
AGI is coming -- probably faster than you think -- and it's going to be really, really interesting.
4
Ten Years to the Singularity If We Really
Really Try
We've discussed the Vinge-ean, Kurzweil-ian argument that human-level AGI may be upon us shortly. By extrapolating various key technology trends into the near future, in the context of the overall dramatic technological growth the human race has seen in the past centuries and millennia, it seems quite plausible that superintelligent artificial minds will be here much faster than most people think.
This sort of objective, extrapolative view of the future has its strengths, and is well worth pursuing. But I think it's also valuable to take a more subjective and psychological view, and think about AGI and the Singularity in terms of the power of the human spirit: what we really want for ourselves, and what we can achieve if we really put our minds to it.
I presented this sort of perspective on the timeline to Singularity and advanced AGI at the TransVision 2006 futurist conference, in a talk called "Ten Years to a Positive Singularity (If We Really, Really Try)". The conference was in Helsinki, Finland, and I wasn't able to attend in person, so I delivered the talk by video; if you're curious you can find it online (http://www.youtube.com/watch?v=BelOkx4Jxyg).
The basic point of the talk was that if society put the kind of money and effort into creating a positive Singularity that we put into things like wars or television shows, then some pretty amazing things might happen. To quote a revised version of the talk, given to a different audience just after the financial crisis of Fall 2008:
Even if OpenCog were the wrong path, there could still be a lot of sense in a broader bet that funding 100 dedicated AGI teams to work on their own independent ideas will result in one of them making the breakthrough. What's shocking, given the amount of money and energy going into other sorts of technology development, is that this isn't happening right now. (Or maybe it is, by the time you are reading this!!)
Keeping it Positive
I've talked more about AI than about the Singularity or positiveness. Let me get back to those.
It should be obvious that if you can create an AI vastly smarter than humans, then pretty much anything is possible. Or at least, once we reach that stage, there's no way for us, with our puny human brains, to really predict what is or isn't possible. Once the AI has its own self, and has superhuman-level intelligence, it's going to start learning and figuring things out on its own.
But what about the "positive" part? How do we know this AI won't annihilate us all? Why won't it just decide we're a bad use of mass-energy, and re-purpose our component particles for something more important?
There's no guarantee of that not happening, of course. Just like there's no guarantee that some terrorist won't nuke your house tonight, or that you won't wake up tomorrow morning to find the whole life you think you've experienced has been a long strange dream. Guarantees and real life don't match up very well. (Sorry to break the news.)
However, there are ways to make bad outcomes unlikely, based on a rational analysis of AI technology and the human context in which it's being developed.
5
How Long Till AGI?
with Ted Goertzel and Seth Baum
This chapter originated as an H+ Magazine article. A longer,
more technical version appeared in the journal Technological
Forecasting and Social Change.
What do other experts think about Kurzweil's projection of an AI-powered Singularity around 2045?
Opinions are all over the map -- but one interesting data point is a survey that Seth Baum, my father Ted Goertzel and I conducted at the AGI-09 conference, a gathering of Artificial General Intelligence researchers in Washington DC in March 2009, on the specific topic of "How Long Till Human-Level AI?" Rather than a shallow survey of a large number of people, this was an in-depth survey of a small number of experts -- what is known in the trade as an "expert elicitation". The sample of experts was avowedly biased -- most (though not all) researchers who would bother to attend an AGI conference are relatively optimistic about the near-term feasibility of AGI. But even given this bias, it's very interesting to sample the perspective of AGI-interested experts and see where their opinions fall on various aspects of the question: when will human-level AI be here?
The AI@50 and Klein studies are interesting because they show that significant numbers of experts and interested persons believe that AGI with intelligence at least equaling the human level will exist within upcoming decades. Our own study probes more deeply into the thinking of people with substantial AGI expertise, and comes up with similar findings, while giving more details about the experts' underlying beliefs.
Impacts of AGI
In science fiction, intelligent computers frequently become
dangerous competitors to humanity, sometimes even seeking to
exterminate humanity as an inferior life form. And indeed, based on our current state of knowledge, it's hard to discount this as a real possibility, alongside much more benevolent potential outcomes. We asked experts to estimate the probability of a
negative-to-humanity outcome occurring if an AGI passes the
Turing test. Our question was broken into three parts, for each of
three possible development scenarios: if the first AGI that can
pass the Turing test is created by an open source project, the
United States military, or a private company focused on
commercial profit.
This set of questions marked another instance in which the
experts wildly lacked consensus. Four experts estimated a less
than 20% chance of a negative outcome, regardless of the
development scenario. Four experts estimated a greater than
60% chance of a negative outcome, regardless of the
development scenario. Only four experts gave the same
estimate for all three development scenarios; the rest of the
experts reported different estimates of which development
scenarios were more likely to bring a negative outcome. Several
experts were more concerned about the risk from AGI itself,
whereas others were more concerned that AGI could be misused
by humans who controlled it.
Some interesting insights can be found in the experts' orderings of the riskiness of the development scenarios. Of the 11 experts who gave different dangerousness estimates for each of the three scenarios, 10 gave the private company scenario the middle value. Of these 10, 6 gave the US military scenario the highest value and 4 gave it the lowest value. Thus the open source scenario and the US military scenario tend to be perceived as opposites in terms of danger -- but experts are divided on which is the safe one of the pair! Experts who estimated that the US military scenario is relatively safe noted that the US
6
What Is General Intelligence?
AGI researchers Shane Legg and Marcus Hutter wrote a paper in 2007 collecting 70+ different definitions of intelligence from the scientific literature. The ten definitions that most closely reflect Legg and Hutter's perspective, and ours, are as follows:
1. "It seems to us that in intelligence there is a fundamental faculty, the alteration or the lack of which, is of the utmost importance for practical life. This faculty is judgment, otherwise called good sense, practical sense, initiative, the faculty of adapting oneself to circumstances." -- A. Binet
2. "The capacity to learn or to profit by experience." -- W. F. Dearborn
3. "Ability to adapt oneself adequately to relatively new situations in life." -- R. Pintner
4. "A person possesses intelligence insofar as he has learned, or can learn, to adjust himself to his environment." -- S. S. Colvin
5. "We shall use the term 'intelligence' to mean the ability of an organism to solve new problems..." -- W. V. Bingham
6. "A global concept that involves an individual's ability to act purposefully, think rationally, and deal effectively with the environment." -- D. Wechsler
7. "Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought." -- American Psychological Association
8. "I prefer to refer to it as 'successful intelligence.' And the reason is that the emphasis is on the use of your intelligence to achieve success in your life. So I define it
general type that we've evolved for. We're land animals and we're pretty good at managing situations involving piecing solid objects together -- but put us underwater where it's all about subtle interactions between fluid flows, and we become a lot stupider. 30% of our brains are devoted to vision processing, so if you put us in a dark place where sonar is the best way to get around, we become a lot dumber. We're so terrible at mathematical calculations that we focus on areas of math where we can use analogies to things like vision (e.g. geometry) and language (e.g. logic, algebra) that we are better evolved to think about. We evolved to handle social situations involving the <300 people in a Stone Age tribe, so we get easily bollixed by the larger, more diffuse social networks that modern society presents us with. And so forth.
All this may make you wonder just how general our intelligence really is, after all!! But still, there's something to the fact that, in principle, given enough time and resources, we have the flexibility of mind to solve just about any problem. And there's no doubt that our intelligence is dramatically more general than that of Deep Blue, or a mouse.
Harking back to the formal theory of general intelligence, one can quantify the generality of an intelligent system by the entropy (the breadth or spread) of the class of environments in which it's good at achieving goals. A system that's really good at achieving goals in a narrow class of environments may be smart, but doesn't have much generality of intelligence. A system that's moderately good at achieving goals in a broad variety of environments may be said to have more generality of intelligence. Humans have both more intelligence, and more generality of intelligence, than Deep Blue or a mouse! A mouse has way more generality of intelligence than Deep Blue, even though mice are lousy chess players.
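For readers who want the formal flavor: Legg and Hutter's "universal intelligence" measure (from the same line of work as their definitions paper; the gloss here is mine) scores an agent \(\pi\) by its expected reward across all computable environments \(\mu\), weighted toward simpler environments:

\[
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
\]

where E is the class of computable environments, K(\mu) is the Kolmogorov complexity of \mu, and V_\mu^\pi is the expected total reward the agent \pi achieves in \mu. In these terms, the generality of an intelligence corresponds to how broadly its performance is spread across the terms of this sum -- the entropy idea above -- rather than to the sheer size of the total.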
You might also question all this emphasis on rewards and goals. People do all sorts of things, some of which seem to have little to do with achieving goals, and some of which seem to have little to do with achieving near- or long-term rewards. We're complex self-organizing systems displaying all manner of different behaviors. Goals are things we sometimes try to follow, or ascribe to each other descriptively, rather than things that rigorously govern all our behaviors. And while sometimes we seek short- or long-term rewards, we also engage in a lot of activities that seem to have little reward at all, for ourselves or anybody. But if you want to measure or define a system's intelligence, it still seems the best way to do this is relative to some collection of goals. Even if not everything a system does is intelligent, you can still measure its intelligence via the complexity of the goals it can achieve in complex environments. Otherwise, it's hard to see how to distinguish intelligence from more general self-organizing complexity coupled with the environment -- which is also interesting and important, but a broader sort of concept.
So, when we talk about building an artificial general intelligence in practice, what we're mostly talking about is building a system that in principle could achieve nearly any goal in any environment given sufficient time and resources; and that in practice is good at achieving complex goals similar to those goals needed for the survival and flourishing of a humanlike organism in a human society, in environments like the ones humans lived in historically or live in presently. That's not very compact or elegant, but it's what AGI in reality is mainly about. Eventually the AGI field may branch beyond this, and get concerned with building broader kinds of AGI systems, displaying varieties of intelligence further detached from human history and perhaps even incomprehensible to the human mind.
Viewed in this way, the practical AGI task has a lot to do with the specifics of human bodies, societies and environments -- which is somewhat messy and not very mathematical, but seems to be the reality of the matter. The ideal theory of AGI, in my view, would be something like a recipe for producing intelligent systems from environments. You would feed the theory a description of the environment and goals that an intelligent system needs to deal with, and the theory would spit out a design for a system capable of achieving those goals in that environment.
7
Why an Intelligence Explosion is
Probable
The path from here to vastly superhuman AGI, I've suggested, can meaningfully be broken down into two phases: Phase 1, the creation of human-level AGI; Phase 2, the path from human-level AGI into the unknown realm beyond. The second of these phases is what I.J. Good referred to as the "intelligence explosion", in his famous 1965 quote:
Let an ultra-intelligent machine be defined as a machine that
can far surpass all the intellectual activities of any man
however clever. Since the design of machines is one of
these intellectual activities, an ultra-intelligent machine could
design even better machines; there would then
unquestionably be an intelligence explosion, and the
intelligence of man would be left far behind. Thus the first
ultra-intelligent machine is the last invention that man need
ever make.
Some would argue against the wisdom of isolating this kind of
intelligence explosion as an event unto itself, preferring to
emphasize the continuity of such a potential explosion, not only
with the creation of superhuman AGI, but also with the larger,
longer explosion of intelligence on Earth, beginning with the
origins of life (or maybe even before). But while I think this
broad-scope view is valuable, I also think it's important to
recognize the singular nature of the intelligence explosion a la
I.J. Good. Engineering is different from evolution, even though it
was invented by evolved beings in the indirect pursuit of their
evolutionary goals; and the recursive self-improvement of
engineered minds, if it does indeed occur, will have different
characteristics than the natural selection driven evolution of
intelligence that preceded it.
speed limits. If that were not true, it would mean that we had quite coincidentally reached the limits of science talent and physical speed of data collecting at the same time -- a coincidence that we do not find plausible.
This picture of the current situation seems consistent with
anecdotal reports: Companies complain that research staff are
expensive and in short supply; they do not complain that nature
is just too slow. It seems generally accepted, in practice, that
with the addition of more researchers to an area of inquiry,
methods of speeding up and otherwise improving processes can
be found.
So based on the actual practice of science and engineering
today (as well as known physical theory), it seems most likely
that any experiment speed limits lie further up the road, out of
sight. We have not reached them yet, and we lack any solid
basis for speculation about exactly where they might be.
Overall, it seems we do not have concrete reasons to believe that this will be a fundamental limit that stops the intelligence explosion from taking an AGI from H (human-level general intelligence) to (say) 1,000 H. Increases in speed within that range (for computer hardware, for example) are already expected, even without large numbers of AGI systems helping out, so it would seem that physical limits, by themselves, would be very unlikely to stop an explosion from 1 H to 1,000 H.
number of copies of the first AGI that can be created. Why would
this be critical? Because the ability to copy the intelligence of a
fully developed, experienced AGI is one of the most significant
mechanisms at the core of an intelligence explosion. We cannot
do this copying of adult, skilled humans, so human geniuses
have to be rebuilt from scratch every generation. But if one AGI
were to learn to be a world expert in some important field, it
could be cloned any number of times to yield an instant
community of collaborating experts.
However, if the first AGI had to be implemented on a
supercomputer, that would make it hard to replicate the AGI on a
huge scale, and the intelligence explosion would be slowed
down because the replication rate would play a strong role in
determining the intelligence-production rate.
However, as time went on, the rate of replication would grow, as
hardware costs declined. This would mean that the rate of arrival
of high-grade intelligence would increase in the years following
the start of this process. That intelligence would then be used to
improve the design of the AGIs (at the very least, increasing the
rate of new-and-faster-hardware production), which would have
a positive feedback effect on the intelligence production rate.
So if there was a supercomputer-hardware requirement for the
first AGI, we would see this as something that would only
dampen the initial stages of the explosion. Positive feedback
after that would eventually lead to an explosion anyway.
If, on the other hand, the initial hardware requirements turn out to
be modest (as they could very well be), the explosion would
come out of the gate at full speed.
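To see the shape of this feedback concretely, here is a deliberately crude simulation (a sketch of my own; every parameter is made up for illustration, and nothing here is a forecast): the number of running copies is limited by hardware cost, hardware cost falls faster as more machine intelligence joins the work, and each copy adds to the pool of intelligence.

```python
# Toy model of a replication-limited intelligence explosion.
# All parameters are illustrative, not empirical.

def simulate(years=25, budget=1e9, cost=1e8, base_decline=0.3, boost=0.01):
    """budget: yearly hardware spend; cost: hardware cost per AGI copy."""
    intelligence = 0.0                    # cumulative "AGI-years" of work done
    for year in range(years):
        copies = budget / cost            # replication limited by hardware cost
        intelligence += copies
        # Feedback: accumulated machine intelligence speeds hardware progress
        decline = base_decline + boost * intelligence ** 0.5
        cost /= 1 + decline
        print(f"year {year:2d}: {copies:16,.0f} copies running")

simulate()
```

Run it and the copy count crawls for the first several years, then takes off as the feedback term dominates -- the "dampened initial stages, explosion anyway" shape described above.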
Objection 5: Bandwidth
In addition to the aforementioned cloning of adult AGIs, which
would allow the multiplication of knowledge in ways not currently
available in humans, there is also the fact that AGIs could
8
Ten Common Objections to AGI And
Why They Don't Scare Me
(originally an H+ Magazine article)
I've heard a lot of supposed reasons why AGI can't be engineered, or why this won't happen soon. Some of them don't deserve to be dignified by any response. But some are reasonably sensible, even though I don't quite buy them.
This chapter gives a quick list of ten of the commoner objections I've heard, with each objection followed by my own -- highly opinionated and openly AGI-optimist -- perspective. In the interest of concision and punchiness, I won't take many pains to explicitly justify my take on each objection here; the justifications are given elsewhere in the book!
1. Objection via Quantum Computing
The Claim: The brain is a quantum system, in the strong sense
that its intelligent behaviors rely on macroscopic quantum
coherence among its parts. If this is the case, emulating brain
function on a classical digital computer would be extremely
inefficient (though possible in principle).
My take: There's no evidence this is the case, and it would
require some revolutionary changes in the science of
macroscopic quantum coherence. However, this claim doesn't
seem to violate known science in a totally egregious way.
But even if it's true, all this means is that we might need to engineer an AGI to run on quantum computers rather than digital computers -- which would delay the advent of AGI but not pose a fundamental obstacle.
Also note that, even if the brain uses quantum computing, that
doesn't mean a human-level AGI also needs to. There are often
many different ways to achieve the same engineering functions.
Note that the brain is well known to make use of various quantum phenomena at the lower levels of its operation, like every other physical system. That's not the same as manifesting macroscopic quantum coherence in a way that would require use of a quantum computer for efficient brain emulation.
2. Objection via Hypercomputing
Even further out (see http://en.wikipedia.org/wiki/Hypercomputation)
would be a nicety (and I've put some effort into working toward one, e.g. via co-organizing the 2009 AGI Roadmap Workshop at the University of Tennessee, Knoxville -- see http://web.eecs.utk.edu/~itamar/AGI_Roadmap.html), but why is it posited as a necessity?
7. Objection via Consciousness
The Claim: We humans seem to be conscious; our laptops seem
not to be conscious in this same sense; human consciousness
seems associated with human intelligence; therefore, computers
can never be intelligent like humans.
My take: Philosophers have nothing near a consensus on what consciousness is. Furthermore, philosophers have never come up with a compelling argument as to why any of us should consider our friends, wives or children conscious (as opposed to adopting a purely solipsistic view). So it seems most sensible to ignore these philosophical objections as regards AGI, until such point as a good theory of consciousness emerges. I can easily envision sitting around in a café with some AGI robots debating the same old philosophical issues about who is, or is not, conscious.
My own personal philosophy of consciousness tends to be panpsychist (see http://cosmistmanifesto.blogspot.com/2009/05/blogpost.html) -- I think everything in the universe is conscious to an extent, and each sort of system manifests this universal consciousness differently. I strongly suspect that if we build a digital mind with similar structures and dynamics to the human one, it will manifest consciousness in similar ways.
8. Objection via Free Will
The Claim: Humans have free will; we're not just deterministic automata. But a computer has no free will, it just does what its programming tells it. Human free will is closely tied with human
9
Approaches to AGI
This chapter, and the following one on chatbots, are updated/improved versions of material that earlier appeared in the would-be pop-sci book The Path to Posthumanity, which I wrote with my good friend Stephan Vladimir Bugaj. (That book was my first foray into nontechnical futurist writing, and it didn't really take off; but there was some good stuff in there.) This is a topic on which I have a huge amount to say, and have written a lot before and will surely write again. The present chapter presents more of a broad-scope historical focus on the field than my other nontechnical writing has tended to do.
As I noted in the intro above, one thing this book DOESN'T contain is a detailed overview of my own work on AGI, which I've described in a variety of technical works before, and which is currently centered on the OpenCog open source AGI platform. I'm currently (mid-2014) cooking a non-technical book called Faster Than You Think which will cover this ground. This chapter and the next give some additional background on the AGI field, and give some perspective on my own AGI approach, but don't dig into the details.
Creating an AGI is not a narrowly defined problem like, say, building an airplane or designing a book. It's a lot more open-ended, more like "building a flying machine" or "building a device for transmitting linguistic information" or "building a device for generating electricity". We have balloons, blimps, copters, planes, pedal-gliders, rockets, Space Shuttles, catapults... We have books, ebooks, books on tape, videos, etc. And we have solar power satellites, windmills, waterwheels, nuclear fission and fusion, garbage-to-gas, fossil fuels, fuel cells... Each type of flying machine or electrical power generator has its own particularities -- its own science and its own engineering practicalities -- and the same is true with the various approaches to AGI.
today. Neural wetware is efficient at doing certain kinds of low-level operations, and contemporary computer hardware is efficient at doing other kinds of low-level operations. Achieving a given cognitive function via emulating neurons on a contemporary computer is almost certainly never going to be the most efficient way to get that function done on that same computer -- the neural approach is almost surely going to take far more memory and processing power than needed.
This inefficiency issue certainly doesn't make AGI via brain emulation infeasible. Computing power gets cheaper all the time, at an impressive rate; and if we knew how to make an advanced AGI using neural modeling, we'd have a lot of motivation to figure out clever software strategies for making its implementation more efficient. And there is also the possibility of creating custom hardware for brain emulation, varying on current hardware designs in a way specifically tailored to make brain emulation more efficient. This may well be feasible, but it's hard to say much about it in detail till we're clearer on how low-level a brain simulation needs to be to give rise to cognitive phenomena appropriately. Several research groups are already making "brain chips", including Dharmendra Modha's group at IBM and Kwabena Boahen's group at Stanford, but it's currently quite unclear whether these brain chips are going to be adequate for brain emulation. They lack mechanisms supporting efficient simulation of many neural phenomena of possible importance for cognition (directed spiking, dendro-dendritic synapses, extracellular charge diffusion and more) -- but it may be that these phenomena aren't so critical for AGI after all; we just don't know at this point.
A different sort of issue regarding brain emulation as an approach to AGI is the fact that once you succeed, what you get is something with a very humanlike mind -- and it's not clear whether this is a bug or a feature, in the scope of things. Digital humans would be fantastically interesting for science, but would suffer many limitations compared to digital minds created with a more flexible cognitive architecture. Creating non-human AGIs
Good Old-Fashioned AI

What I'm calling symbolic AI is, in its simplest, classic form, at least sometimes referred to as Good Old-Fashioned AI (GOFAI). GOFAI systems are based on rules much like this classic one from the 1970s medical expert system MYCIN:

IF the stain of the organism is gram-negative, and the morphology of the organism is rod, and the aerobicity of the organism is aerobic
THEN there is strongly suggestive evidence (0.8) that the class of the organism is enterobacteriaceae
A MYCIN consultation looked something like this (excerpted):

Patient Information
1) Patient's name: PT538
2) Age: 34 years
3) Sex: Male
...
Diagnosis and Recommended Therapy
1) AMPICILLIN -- Dose 3.5 g q4h IV
2) GENTAMYCIN -- Dose 119 mg q8h IV
Since high concentration of penicillins can inactivate aminoglycosides, do not mix these two antibiotics in the same IV bottle.
All this is quite impressive, and quite useful. What makes this sort of approach worrisome from an AGI perspective is that the expert system didn't really have any understanding of what it meant by terms like "dose" or "ampicillin". It knew them only as symbolic tokens, taking part in rules encoded by humans.
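To see how little machinery is involved, here is a minimal sketch of the kind of certainty-factor rule engine that drives such systems (my own illustration -- MYCIN itself was written in Lisp, and the rules below are simplified stand-ins for its several hundred):

```python
# Minimal MYCIN-style forward chainer with certainty factors (CFs).
# Illustrative only; real expert systems also questioned the user
# interactively and chained backward from diagnostic goals.

rules = [
    # (premises, conclusion, certainty factor of the rule)
    ({"gram_negative", "rod_shaped", "aerobic"}, "enterobacteriaceae", 0.8),
    ({"enterobacteriaceae", "compromised_host"}, "consider_gentamicin", 0.6),
]

def forward_chain(facts):
    """facts: dict mapping fact name -> certainty in [0, 1]."""
    fired = set()
    changed = True
    while changed:
        changed = False
        for i, (premises, conclusion, rule_cf) in enumerate(rules):
            if i in fired or not all(p in facts for p in premises):
                continue
            # Conjunction takes the weakest premise; then scale by the rule's CF
            cf = rule_cf * min(facts[p] for p in premises)
            old = facts.get(conclusion, 0.0)
            facts[conclusion] = old + cf * (1 - old)  # CF combination rule
            fired.add(i)
            changed = True
    return facts

print(forward_chain({"gram_negative": 1.0, "rod_shaped": 1.0,
                     "aerobic": 1.0, "compromised_host": 1.0}))
```

The worry voiced above stands out clearly in code: "gram_negative" and "enterobacteriaceae" are just strings. All of the medical understanding lives in the humans who wrote the rules.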
Expert systems can actually do very well in some areas -- medical expert systems like MYCIN being one example. They have been shown to diagnose diseases better than most human doctors.
very different than the ones that work on small sets of crisply defined rules. Humans can indeed handle crisp, formalistic rules like those existing in expert systems or SOAR -- but they do so using methods that evolved to be guided by other, unconscious cognitive processes acting on huge networks of highly uncertain knowledge at varying levels of abstraction.
Any AI system will need to learn some of its knowledge base for itself -- no Cyc-like system can contain all possible knowledge an AI system will ever need to know for interaction with any environment, as it's totally impossible to predetermine every eventuality it may encounter during its existence. Any knowledge base that isn't structured in a way that naturally matches the structure of learned knowledge will be effectively useless; but how can the people building a database like Cyc know what knowledge representations will match with learned knowledge, if they aren't building a learning system? For a learning system, a system like Cyc is, at best, a kind of basic encyclopedia that a sufficiently advanced learning system could learn to read -- but which is not foundational to the system becoming intelligent in the first place.
Despite more than twenty years of development, Cyc never succeeded in emulating an eight-year-old child. Nor has anyone yet found much use for a CD-ROM full of formal, logical definitions of common-sense information. The company Cycorp is doing OK, supported largely by government research grants. Cycorp has been quite careful not to mention anything about artificial general intelligence or any long-term scientific mission underlying the firm's work. Instead, they characterize their goal as producing a useful database intended for embedding within various specialized software products. This is a worthwhile mission to be sure, but very different from the grand goal of AGI.
In fairness to Doug Lenat, I have to admit that he's a deep thinker and his computational-psychology perspective does have some depth to it -- far more than is represented in the public face of Cyc. He has a reasonably solid theory of general heuristics -- problem-solving rules that are abstract enough to apply to any context. His pre-Cyc programs, AM and EURISKO, applied his general heuristics theory to mathematics and science respectively. Both of these programs were moderately successful, exemplars in their field, but far from true general intelligence. Their design lacks a holistic view of the mind. In the big picture of AGI, getting the mind's heuristic problem-solving
animal) and then fed into the AI system from a file or via an
interactive interface. Knowledge may also be fed into the system
via natural language, using language parsers with specially
contrived semantic analysis modules intended to enable the
semantically unambiguous interpretation of simple cases of
English sentences.
Hybrid: Of course, it's also possible to build AGI systems that derive knowledge both experientially and via hand-coded rules. Indeed, nearly everyone building systems relying on hand-coded knowledge rules intends to eventually have their system handle and learn from experiential data. And my own OpenCog approach, though fundamentally experiential-learning based, enables loading in of knowledge rules as an option. However, in practice, nearly all in-principle hybrid AGI systems have a primary orientation toward either experiential learning or hand-coded rules. OpenCog's primary orientation is experiential-learning based, as is seen from the fact that the system can meaningfully operate without hand-coded rules, but not without experiential learning. On the other hand, many of the Good Old-Fashioned AI systems from the 1970s and '80s (e.g. Cyc and Soar, which I'll discuss in the next chapter) are clearly primarily oriented toward hand-coded rules: They can't be run without them, and in practice tailoring them for a new application is mainly a matter of adding new rules to the rule-base.
I don't think the hand-coding of knowledge is a good approach to AGI, but for a long while it was the dominant approach in the AI community, so it's worth understanding because of its historical value and the influence it's had on the field. For now, though, let's focus on the experiential learning approach (including hybrid approaches that are primarily experiential-learning focused). I think this sort of approach is much more promising. Experiential-learning based approaches can be categorized via the amount of biasing that is provided to the learning.
Minimal Biasing: There are AGI systems that try to do
experiential learning with as little a priori biasing as possible. Of
A note to the reader who has studied a little math or AI: Many of these GOFAI systems used varieties of formal logic to carry out reasoning and learning, and because of this, the impression spread in the AI community that formal-logic based systems are intrinsically tied to an explicit, human-encoded knowledge based AGI approach. But actually this is not accurate. It would be quite
If you haven't had any exposure to the AI field before, this brief run-through of various approaches to AI may have been slightly dizzying and confusing. But I'll ask you to please stick with me through the next few chapters, where I'll discuss a number of AGI approaches in detail -- including many of the ones briefly alluded to above. Among other things, these chapters will give some concrete examples of what various AI systems in the various categories have done already -- examples often make things a bit clearer! For the rest of this chapter, I'm going to discuss some more high-level, less nitty-gritty aspects of the quest to create powerful AGI.
10
Will Bots Feel Joy?
Will machines ever really feel, in the same sense that humans
do?
This is -- at least according to many philosophies -- a separate question from whether machines can be intelligent, or whether they can act like they feel. The question is whether machines, if suitably constructed and programmed, can have awareness, passion, subjective experience... consciousness?
I certainly think so... But generally speaking there is no consensus among experts. It's fair to say that -- even without introducing machines into the picture -- consciousness is one of the most confused notions in the lexicon of modern science and philosophy.
I became acutely aware of this confusion when, in summer 2009,
I organized a workshop on Machine Consciousness in Hong
Kong, as part of the larger Toward a Science of Consciousness
conference and Asia Consciousness Festival. The TSC
conference as a whole attracted hundreds of participants, but
only a couple dozen deigned to venture into the riskier domain of
machine consciousness; and among these brave souls, I reckon
there were more than a couple dozen views on the matter at
hand!
First we have the materialists. Joscha Bach -- a German AI researcher and entrepreneur and the author of Principles of Synthetic Intelligence, who will pop up a bit later in this book -- summarizes their perspective elegantly: "The notion of the mind
Basically he concludes that the best way to bridge the gap posed by the hard problem is to posit some sort of common substance binding together the subjective-experience and objective structural/behavioral realms.
I've thought a great deal about some of the "easy problems" of consciousness, in particular the aspects of human consciousness via which we can reflect on ourselves, and via which our consciousness analyzes itself, thus creating consciousness of consciousness of consciousness of... While in a sense this is an infinite reflexive process -- because in this sense consciousness contains itself, and only infinite entities can contain themselves -- it can be approximated by finite structures such as occur in the human brain as it's modeled by physics. But this is a part of the story I'll come back to a little later!
Another point worth emphasizing is that, among my colleagues working on OpenCog, there's a large diversity of opinions on consciousness. Panpsychists are probably in the minority. It seems the practical work of engineering a mind is largely independent of philosophical issues regarding consciousness -- within limits, at any rate. If you believe that consciousness is inextricably tied up with quantum phenomena, and also that consciousness and intelligence are intertwined, then you're obviously not going to like AI approaches focused on digital computation!
Regarding the "hard problem", I think one very real possibility is that we might create human-level, human-like AI systems before we puzzle out the mysteries of consciousness. These AIs might puzzle over their own consciousness, much as we do over ours. Perhaps at the 2019 or 2029 Machine Consciousness workshop, AIs will sit alongside humans, collectively debating the nature of consciousness.
11
Deep Blue Reflections
I wrote the first version of this essay in 1997, a few days after Deep Blue's classic chess victory. It was written for the Simon's Rock College alumni email list, in response to some other alumni asking for my thoughts on Deep Blue's achievement, since I was the best-known (and maybe the only?) AI researcher on the list. I tweaked the essay slightly at some later date.
If you're both ancient and geeky like me, you may remember May 11, 1997, the day when a computer program defeated the (human) world chess champion -- an event that led many people to think that computers were already on the verge of rivaling human intelligence. More precisely, that was the day when, for the first time ever, a computer defeated the world chess champion in a standard six-game match. Deep Blue, a computer chess system developed at IBM by a team with roots at Carnegie Mellon University, split the first two games with Garry Kasparov. The next three were draws, and the final game went to Deep Blue. Kasparov was a sore loser. Deep Blue remained dispassionate -- it hadn't been programmed with emotions.
Admittedly, this was only one match, but the tournament was not
a fluke. Previous versions of Deep Blue were already able to
consistently beat all but the greatest chess grandmasters prior to
the Kasparov match. And computer hardware has gotten far
better since then, enabling the same basic algorithms used in
Deep Blue to play even more powerfully. The capability of the
human brain, on the other hand, has remained essentially
constant since 1997.
Although Deep Blue is not a very generally-intelligent entity according to our definition, there's something to be learned from a study of its accomplishments and the mechanisms underlying them. Deep Blue follows the same rules as human chess players, but it doesn't think at all like humans. Human chess masters rely on intuition, pattern recognition and the selective exploration of a small number of promising lines of play; Deep Blue relied on raw, exhaustive search of the game tree.
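The contrast can be made concrete. Stripped of its chess-specific evaluation functions and custom hardware, the core of a Deep Blue style engine is plain alpha-beta game-tree search. Here is a bare-bones sketch (mine, purely illustrative), demonstrated on a trivial stick-taking game standing in for chess:

```python
# Bare-bones alpha-beta search: the algorithmic core of brute-force
# chess programs. Deep Blue ran an elaborately optimized version on
# special-purpose hardware, examining on the order of 200 million
# positions per second.

def moves(n):
    """Toy game: take 1 or 2 sticks from a pile of n; taking the last stick wins."""
    return [m for m in (1, 2) if m <= n]

def evaluate(n):
    return 0  # heuristic guess at the search horizon (chess uses material, etc.)

def alphabeta(state, depth, alpha, beta, maximizing):
    legal = moves(state)
    if not legal:
        return -1 if maximizing else 1   # the player who cannot move has lost
    if depth == 0:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in legal:
            value = max(value, alphabeta(state - m, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # prune: opponent would avoid this line
        return value
    else:
        value = float("inf")
        for m in legal:
            value = min(value, alphabeta(state - m, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

print(alphabeta(4, 10, float("-inf"), float("inf"), True))  # 1: first player can force a win
```

A human grandmaster consciously examines perhaps a few dozen positions per move; Deep Blue evaluated hundreds of millions per second. Same rules, utterly different cognition.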
12
Today Jeopardy!
Tomorrow the World?
This essay was written for H+ Magazine in 2011 shortly after IBM's Watson computer won the game show Jeopardy! on TV. Since that time Watson has been developed as a more general supercomputer architecture, intended for a diversity of applications, with an initial focus on biomedicine. In 2014, a few months before the writing of this introductory paragraph, IBM announced the roll-out of 3 of its Watson machines in Africa (in Kenya, Nigeria and South Africa). Discussion of the Watson computing platform in its full generality would be worthwhile, but this essay doesn't go that far -- what you're reading here is my 2011 essay, just as written in the heat of the moment, right after watching Watson's original televised moment of glory.
My initial reaction to reading about IBM's Watson supercomputer and software (http://mashable.com/2011/02/11/ibm-watson-jeopardy/) was a big fat "ho-hum." OK, I figured, a program that plays Jeopardy! may be impressive to Joe Blow in the street, but I'm an AI guru so I know pretty much exactly what kind of specialized trickery they're using under the hood. It's not really a high-level mind, just a fancy database lookup system.
But while that cynical view is certainly technically accurate, I
have to admit that when I actually watched Watson play
Jeopardy! on TV and beat the crap out of its human
self, its goals, and the world. Rather, it's an NLP-based search system -- a purpose-specific system that matches the syntactic and semantic structures in a question with comparable structures found in a database of documents, and in this way tries to find answers to the questions in those documents.
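The flavor of this can be captured in a few lines (a drastic simplification of my own devising; Watson's actual DeepQA pipeline combined a parser, many specialized scorers and a learned ranking model): score each candidate answer's supporting text by its word overlap with the question, weighting rarer words more heavily.

```python
# Toy question answering by weighted text overlap (TF-IDF flavored).
# A drastic simplification of the NLP-based search idea described above.
import math
from collections import Counter

corpus = {
    "criminology": "criminology is the ology within sociology using the "
                   "theory of differential association to explain crime",
    "panama canal": "the panama canal a 1904 to 1914 engineering project "
                    "carried the motto the land divided the world united",
}

def tokens(text):
    return text.lower().split()

def score(question, answer_text):
    q = set(tokens(question))
    doc = Counter(tokens(answer_text))
    s = 0.0
    for term in q & set(doc):
        # Words appearing in fewer documents count for more
        df = sum(1 for t in corpus.values() if term in tokens(t))
        s += doc[term] * math.log((len(corpus) + 1) / df)
    return s

question = "this ology part of sociology uses the theory of differential association"
print(max(corpus, key=lambda k: score(question, corpus[k])))  # -> criminology
```

No understanding of crime or sociology is involved; the question's rare words simply point at the document that happens to contain the answer -- which is exactly the worry about calling such a system a "mind".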
Looking at some concrete Jeopardy! questions may help make the matter clearer; here are some random examples I picked from an online archive (http://www.j-archive.com/showgame.php?game_id=3561):
1. This -ology, part of sociology, uses the theory of
differential association (i.e., hanging around with a bad
crowd)
2. "Whinese" is a language they use on long car trips
3. The motto of this 1904-1914 engineering project was
"The land divided, the world united"
4. Built at a cost of more than $200 million, it stretches from
Victoria, B.C. to St. John's, Newfoundland
5. Jay Leno on July 8, 2010: The "nominations were
announced today... There's no 'me' in" this award
(Answers: criminology, children, the Panama Canal, the Trans-Canada Highway, the Emmy Awards.)
Its worth taking a moment to think about these in the context of
NLP-based search technology.
Question 1: This -ology, part of sociology, uses the theory of
differential association (i.e., hanging around with a bad crowd)
This stumped the human Jeopardy! contestants on the show, but I'd expect it to be easier for an NLP-based search system, which can look for the phrase "differential association" together with the morpheme "ology".
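To make the flavor of this concrete, here is a minimal Python sketch of the kind of lookup involved. This is purely illustrative; the tiny corpus and the function name are my own inventions, and Watson's actual pipeline is vastly more elaborate:

    # A deliberately naive sketch of NLP-based lookup for Question 1:
    # find an "-ology" word co-occurring with "differential association".
    corpus = [
        "Criminology applies the theory of differential association to crime.",
        "Geology is the study of the solid Earth.",
        "Sociology includes subfields such as criminology and demography.",
    ]

    def answer_ology_question(key_phrase, corpus):
        # Collect the -ology words found in sentences mentioning the key phrase.
        candidates = set()
        for sentence in corpus:
            if key_phrase in sentence.lower():
                for word in sentence.split():
                    word = word.lower().strip(".,")
                    if word.endswith("ology"):
                        candidates.add(word)
        return candidates

    print(answer_ology_question("differential association", corpus))
    # -> {'criminology'}

Of course, a real system must also rank candidates by confidence and handle thousands of phrasings; but the underlying move, matching structures in the clue against structures in stored text, is just this.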
[33] http://www.j-archive.com/showgame.php?game_id=3561
Wolfram on Watson
Stephen Wolfram, the inventor of Mathematica and Wolfram Alpha, wrote a very clear and explanatory blog post on Watson[34], contrasting Watson with his own Wolfram Alpha system.
In his article he also gives some interesting statistics on search
engines and Jeopardy!, showing that a considerable majority of
the time, major search engines contain the answers to the
Jeopardy! questions in the first few pages. Of course, this
doesn't make it trivial to extract the answers from these pages, but it nicely complements the qualitative analysis I gave above, where I looked at 5 random Jeopardy! questions, and helps give a sense of what's really going on here.
Neither Watson nor Alpha uses the sort of abstraction and
creativity that the human mind does, when approaching a game
like Jeopardy! Both systems use pre-existing knowledge bases
filled with precise pre-formulated answers to the questions they
encounter. The main difference between these two systems, as
Wolfram observes, is that Watson answers questions by
matching them against a large database of text containing
questions and answers in various phrasings and contexts,
whereas Alpha deals with knowledge that has been imported into
[34] http://blog.stephenwolfram.com/2011/01/jeopardy-ibm-and-wolframalpha/
Kurzweil on Watson
Ray Kurzweil has written glowingly of Watson as an important technology milestone[35]:
"Indeed no human can do what a search engine does, but
computers have still not shown an ability to deal with the
subtlety and complexity of language. Humans, on the other
hand, have been unique in our ability to think in a
hierarchical fashion, to understand the elaborate nested
structures in language, to put symbols together to form an
idea, and then to use a symbol for that idea in yet another
such structure. This is what sets humans apart.
That is, until now. Watson is a stunning example of the
growing ability of computers to successfully invade this
supposedly unique attribute of human intelligence."
I understand where Kurzweil is coming from, but nevertheless,
this is a fair bit stronger statement than I'd make. As an AI
researcher myself, I'm quite aware of all the subtlety that goes
into "thinking in a hierarchical fashion", "forming ideas", and so
forth. What Watson does is simply to match question text
against large masses of possible answer text -- and this is very
different from what an AI system will need to do to display
human-level general intelligence. Human intelligence has to do
with the synergetic combination of many things, including
linguistic intelligence but also formal non-linguistic abstraction,
non-linguistic learning of habits and procedures, visual and other
sensory imagination, creativity of new ideas only indirectly
related to anything heard or read before, etc. An architecture like
Watson barely scratches the surface!
[35] http://www.pcmag.com/article2/0,2817,2376027,00.asp
Ray Kurzweil knows all this about the subtlety and complexity of
human general intelligence, and the limited nature of the
Jeopardy! domain; so why does Watson excite him so much?
Although Watson is just an NLP-based search system, it's still not a trivial construct. Watson doesn't just compare query text to potential-answer text; it does some simple generalization and inference, so that it represents and matches text in a somewhat abstracted symbolic form. The technology for this sort of process has been around a long time, and is widely used in academic AI projects and even a few commercial products; but the Watson team seems to have done the detail work to get the extraction and comparison of semantic relations from certain kinds of text working extremely well. I can quite clearly envision how to make a Watson-type system based on the NLP and reasoning software currently working inside our OpenCog AI system[36], and I can also tell you that this would require a heck of a lot of work, and a fair bit of R&D creativity along the way.
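To give a rough sense of what "matching text in abstracted symbolic form" means, here is a toy sketch. The triple format and the absurdly simplified extractor are my own stand-ins, not IBM's or OpenCog's actual machinery:

    # Toy "abstracted symbolic" matching: reduce both stored text and the
    # question to (subject, relation, object) triples, and match on triples
    # rather than comparing raw strings.

    def extract_triples(text):
        # Absurdly simplified relation extraction: "X built Y" -> a triple.
        triples = []
        for sentence in text.split("."):
            words = sentence.split()
            if len(words) >= 3 and words[1] in ("built", "wrote", "invented"):
                triples.append((words[0], words[1], " ".join(words[2:])))
        return triples

    knowledge = extract_triples("Ferdinand built the canal. Weizenbaum wrote ELIZA.")

    def answer(question):
        # A question is a triple with "?" marking the unknown subject.
        subj, rel, obj = question
        for s, r, o in knowledge:
            if subj == "?" and r == rel and o == obj:
                return s
        return None

    print(answer(("?", "wrote", "ELIZA")))  # -> Weizenbaum

The point of the abstraction layer is that "Who wrote ELIZA?" and "ELIZA's author was..." can both map to the same triple, so matching succeeds even when surface wordings differ.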
Kurzweil is a master technology trendspotter, and he's good at
identifying which current developments are most indicative of
future trends. The technologies underlying Watson aren't new,
and don't constitute much direct progress toward the grand goals
of the AI field. What they do indicate, however, is that the
technology for extracting simple symbolic information from
certain sorts of text, using a combination of statistics and rules,
can currently be refined into something highly functional like
Watson, within a reasonably bounded domain. Granted, it took an IBM team 4 years to perfect this, and, granted, Jeopardy! is a very narrow slice of life; but still, Watson does show that semantic information extraction technology has reached a certain level of maturity. While Watson's use of natural language understanding and symbol manipulation technology is extremely narrowly-focused, the next similar project may be less so.
[36] http://opencog.org
13
Chatbots Versus Cognition Bots
The ability to hold everyday, human-like conversations in English
or other natural languages occupies a special place in the AI
field. AI these days is a diverse endeavor, encompassing many
areas with little relationship to everyday conversation: robotics, mathematical theorem proving, planning and scheduling, fraud detection, financial prediction, etc. And in practical terms, while reproducing the ability to hold an everyday human-like conversation would surely have great economic value, it is hardly a worthy end goal for the AI field; I'd prefer an AI with awkwardly robotic conversation but massively superhuman scientific, ethical and artistic capabilities, over an AI with the capability for fluent human-like cocktail party chatter but nothing else. But nonetheless, partly for historical reasons and partly due to its conceptual simplicity, the task of emulating human conversational ability is one of the first things that comes to mind when one thinks about AIs that are as smart as people.
The historical reason is a 1950 paper by Alan Turing, who proposed naturalistic human-like conversation as a sufficient condition for artificial intelligence. That is, he suggested that if an AI could hold an ordinary conversation in a manner indistinguishable from a human being, then it should be considered to possess intelligence in the same sense that humans do. He proposed to assess indistinguishability from a human using a panel of judges: if an AI could talk to the judges and fool them into thinking they were talking to a human, then, according to Turing, the AI should be considered to possess human-like, human-level intelligence.
It's sometimes forgotten that Turing proposed this criterion, now called the Turing Test, mainly as a counterargument against those who doubted the meaningfulness of calling any computer program intelligent in the same sense as humans. Turing's
I'll also briefly raise the possibility of a third path to passing the Turing Test: a Statistical Turing Bot trained on truly massive conversation corpora gathered via online chat or (after speech-to-text advances a bit) phone calls. It seems at least conceivable that such a system could pass the Turing Test without having human-level general intelligence, and maybe even without serving as a useful platform for ongoing AGI development. If so, this would subvert Turing's original intuition in proposing the test, but might well teach us a great deal, even if not directly helping us toward AGI.
[Example chat transcripts appeared here, involving the chatbot Suzette and the handles Quabbe and ilag; most turns were lost in extraction. Surviving fragments include: "Hello!", "I lost my job...", "You're a robot" / "No. I am human.", "*picks up grenade*", and "Did i win?"]
ELIZA
The original chatbot was ELIZA, the computer psychotherapist, invented by Joseph Weizenbaum in the mid-1960s. I remember this program well from its incarnation in the early 1980s on the Commodore 64! The program was so simple that one could go in and modify the BASIC code to make it say naughty words and other entertaining things. ELIZA pretends it's a psychotherapist; not a Freudian analyst, but a therapist following a school of psychology called Rogerian therapy, which is very convenient for chatbots because it involves the therapist saying very little and just making supportive comments, encouraging the client to keep talking and expressing their feelings. You type in your comments and ELIZA types out responses.
[A sample dialogue between Eliza and a user, Ted, appeared here; most of it was lost in extraction, leaving only Ted's "Yes." and "I am quite sure."]
To the 2011 computer nerd's eye, ELIZA's techniques are all too transparent. She dodges the question rather than answering it, using a fairly small and repetitive repertoire of simple rhetorical tricks. Modern Loebner winners do the same sort of thing with a bit more finesse and larger knowledge bases.
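The flavor of these tricks is easy to convey in code. Here is a minimal sketch of an ELIZA-style responder; it is my own toy, not Weizenbaum's actual script, but it uses the same two moves, reflecting pronouns and bouncing statements back as questions:

    import random

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(text):
        # Swap first-person words for second-person ones: "my job" -> "your job".
        return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

    def eliza_reply(user_input):
        lowered = user_input.lower()
        if lowered.startswith("i feel"):
            return "Why do you feel" + reflect(user_input)[len("you feel"):] + "?"
        if "mother" in lowered:
            return "Tell me more about your family."
        # The dodge: a small repertoire of content-free supportive comments.
        return random.choice(["Please go on.", "I see.", "How does that make you feel?"])

    print(eliza_reply("I feel sad about my job"))
    # -> Why do you feel sad about your job?

A few dozen such pattern rules, plus the content-free fallbacks, are enough to keep a credulous user typing for quite a while.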
AliceBot
Perhaps the best-known recent chatbot is Richard Wallace's program Alicebot[37], arguably a direct descendant of ELIZA. ALICE won the Loebner Prize three years in a row.
[A transcript of chatting with ALICE appeared here; much of it was lost in extraction. Surviving fragments include:]

BEN: How's it hanging?
...
BEN: My name is Ben.
[37] http://www.alicebot.org/
...
ALICE: Wow you are very polite. "I don't think that what me mean by exist you suppose is the only factor." Rephrase it?
Human: Yes, please.
...
Human: He is me.
...
Human: Tee hee
...
ALICE: All of them?
...
Human: GISUS.
...
Human: Not really.
...
Human: But.
ALICE: But what?
...
Human: Congratulations!
MegaHAL
A more amusing example of a computer conversationalist is Jason Hutchens' bot MegaHAL. Hutchens actually won the Loebner Prize sometime in the 1990s with his program HeX, which was a very well done system also in the conceptual lineage of ELIZA. But I found his MegaHAL bot more interesting, because its programming goes beyond a strictly rule-based approach. It detects statistical patterns and responds to them.
[Transcripts of chats with MegaHAL appeared here; most of MegaHAL's replies were lost in extraction, leaving mainly user fragments such as "Joke.", "Joke. Lumberjack.", "Chicken joke.", "Hmm.", "Shakespeare said", "Shaun.", and "Let's begin."]

[38] http://nlpb.blogspot.com/2011/01/experiments-with-statistical-language.html
The bot also has a controllable personality: one can adjust its whimsicality and its nerdiness, and watch its responses change accordingly.

Here I'll show a few of our conversations with Ramona 4. The curious reader may try to puzzle out which responses are from the rule engine, which are from MegaHAL, and which are from PowerSet or True Knowledge.

First, a chat with Murilo Queiroz, the AI developer who did most of the work on Ramona 4:
[Two Ramona 4.1 transcripts appeared here, largely lost in extraction. Of the first, with muriloq, only fragments survive ("Hi." / "Hello there !" / "Interesting." / "I see."); of the second, only the user's questions "Am I conscious?" and "What is consciousness?"]
Ramona is surely not AGI, but occasionally she does rise to the level of poetry!!
OpenCog (i.e., an OCP-controlled avatar) in conversation with a human and with Bob (an additional agent, which could be human- or AI-controlled); turns lost in extraction are marked with ellipses:

Bob: Hi there.
OpenCog: Hi Bob.
...
OpenCog: I don't know.
Human: Why not?
...
OpenCog: Yes.
...
OpenCog: I guess that Bob will say the ball is under the red cup.
Human: Why?
...
Human: Look over there on the other side of the table. What do you see?
...
OpenCog: Bob.
Human: Where's Bill?
OpenCog: (points to Bill) Over there.
...
Human: Can you use the blocks to show me what Bill and Bob are doing?
OpenCog: (takes the green ball and moves it back and forth in the air between the Bob-blocks and the Bill-blocks, to illustrate the game of catch that Bob and Bill are playing) They're throwing the ball to each other like this.
...
Human: No, you're right, it isn't. That's silly. What do you think it could break?
OpenCog: A stack of blocks!
...
Human: Okay. (Gets up and returns with a set of paint guns, each one of which shoots small adhesive balls of a certain color; and also with a piece of paper.) Paint me a picture.
real world of a child, yet much more flexible than the traditional
toy blocks worlds used in experimenting with historical AI
programs, and much richer in cognitively relevant ways than
typical robotics laboratory environments. A huge variety of 3D
models, and 2D pictures of these 3D models, may be imported
into the virtual world and animated; and the AI may also be
exposed to a variety of interactions between human-controlled
avatars in the context of the various objects in the virtual world.
While the focus would be on testing conversation in the context
of a well-defined set of cognitive tasks, the system should also
be able to converse more broadly about objects, events and
interactions in its world.
A Phase 2 AGI Preschool-based system in this world could not be expected to know that dogs are more dangerous than cats, or that winter is colder than summer. On the other hand, it could be expected to learn that people tend to talk more to their friends than to other people; that balls roll whereas blocks don't; that men like to throw things more than women do; that people like to dance when music is playing; that 7 is more than 2; etc. It could also be expected to carry out simple reasoning: e.g. if it is told that men are mortal, and Bob is a man, it should be able to figure out that Bob is mortal. Various quantitative IQ-type tests could be devised to test the system's capability in various respects, beginning from the test tasks that will be used to guide the development process and potentially including other factors as well, and this may be a worthwhile endeavor; but we consider it best to focus on achieving qualitatively intelligent, cognitive-task-focused conversation ability, rather than on tuning the system to maximize performance on quantitative intelligence tests.
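As an aside, the "Bob is mortal" sort of inference mentioned above is easy to sketch in code. The following toy forward-chainer uses a made-up mini-format and is a far cry from OpenCog's probabilistic reasoning, but it shows the shape of the capability being asked for:

    facts = {("Bob", "isa", "man")}
    # One rule: anything that is a man is mortal ("?x" is a variable).
    rules = [(("?x", "isa", "man"), ("?x", "isa", "mortal"))]

    def forward_chain(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (ps, pr, po), (cs, cr, co) in rules:
                for (fs, fr, fo) in list(derived):
                    if fr == pr and fo == po:  # premise matches; bind ?x to fs
                        new_fact = (fs if cs == "?x" else cs, cr, co)
                        if new_fact not in derived:
                            derived.add(new_fact)
                            changed = True
        return derived

    print(("Bob", "isa", "mortal") in forward_chain(facts, rules))  # -> True

The hard part for an AGI preschool system is not this deduction step itself, but acquiring the facts and rules from grounded experience and conversation rather than having them typed in.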
The ordering of these two phases may seem peculiar as
compared to human cognitive development, but this appears the
most natural order in which to proceed given the specific
technology bases available to drive development at present.
And then Phase 3 of the project would be to move on to a robot preschool, not just a virtual preschool. I've already discussed the
all the conversational data being collected, and all the computing power available, and all the funds going into statistical computational linguistics these days, it's almost obvious somebody is going to try it, and I'll be fascinated to see the results.
14
The AGI Roadmap Workshop
This chapter comprises a mainly nontechnical description of a workshop Itamar Arel and I organized at the University of Tennessee in 2009. The scientific content here is essentially the same as in the paper "Mapping the Landscape of Human-Level Artificial General Intelligence", published in AI Magazine in 2011. But that article is dry and academic, whereas the write-up here presents a bit more of a narrative.
As of mid-2014, as I'm pulling together a book comprised of essays sitting on my hard drive and writing these words, I can't quite remember why some past version of Ben Goertzel wrote this nontechnical, narrative-ish description of the AGI Roadmap Workshop (maybe he was bored sitting in the audience at some conference?), but reading it over, it seems pretty interesting, so I'm glad he did.

I have to say, this semi-narrative version is a lot more direct and honest than the AI Magazine write-up. I'm not saying that the AI Magazine version was DIShonest -- it was just formal and objective rather than human and fuzzy, i.e. it was written as befits a more academic venue. Sometimes I feel seriously frustrated with the way scientific journals require one to dress up one's ideas and discoveries in false objectivity, removing the human element and the adventure of discovery. But that's a topic for another essay, or book, or whatever. For now: the AGI Roadmap Workshop.
The place: University of Tennessee, Knoxville.
The time: October, 2009.
The cast of characters was impressive...
Rod Furlan, Brazilian/Canadian Silicon Valley technology entrepreneur and faculty member at Ray Kurzweil's Singularity University
[39] Ironically, this latter choice ruled out many of the current leaders of the AI field, whose approaches aren't exactly '70s-style expert systems, but still do rely on files of hand-coded knowledge rules.
committed to AI approaches where you feed the system hand-built knowledge rules, expert-system style; others were committed to pure experiential learning approaches. Some felt robotics was critical to AGI; others voiced the opinion that time spent on robotics was utterly wasted where AGI is concerned. Some liked the virtual-worlds approach; others felt it was basically worthless, either because it lacked the richness of sensation and actuation provided by real robots, or because any kind of embodiment (even virtual) is a distraction from focusing on the core problems of cognition and language. Some felt it important to try an early-stage proto-AGI system on a variety of different problems in a variety of contexts; others felt the best thing was to single-focus on one hard problem with an "AGI-hard" nature; e.g. Nick Cassimatis, who runs the Human-Level AI Lab at RPI, argued for object tracking as a focus. In Nick's view, if you can make an AI system visually track moving objects in the manner that humans do, then you've solved the crux of the AGI problem, and the rest will be relatively straightforward; so he saw no strong reason to pay attention to any problem besides that one. The scope of ideas and views presented was fascinating, but the diversity of contradictory views was somewhat intimidating.
The first of the two Evaluation and Metrics workshops resulted in a paper presented at one of the AGI conferences, written by John Laird and Robert Wray[40], summarizing a list of requirements that any AGI system should fulfill (these are pretty simple and are summarized in Table 1 below). The second one didn't lead to any written deliverable, or any clear conclusion, so
[40] It's worth noting that this paper, while inspired by the discussions in the workshop, was not produced collaboratively by the workshop participants, because this would have required a lot more discussion and argumentation than just having a couple folks write up their own views afterwards, informed by the discussions. This is another indication of the remarkable diversity and fragmentation of the AGI field today.
...
R2. Modality-specific knowledge
...
R8. Meta-cognitive knowledge
R9. Support a spectrum of bounded and unbounded deliberation
...
C2. Environment is dynamic
...
Broad competency areas and their subareas:

Perception: Vision, Audition, Touch, Proprioception, Crossmodal
Actuation: Physical skills
Memory: Implicit, Working, Episodic, Semantic, Procedural
Learning: Imitation
Reasoning: Physical, Associational
Planning: Tactical, Strategic, Physical
Attention: Visual, Social, Behavioral
Motivation: Subgoal creation, Affect-based
Emotion
Modeling self and other
Social interaction: Appropriate behavior, Social communication, Social inference, Cooperation (e.g. group play)
Communication: Gestural, Verbal, Pictorial
Quantitative: Counting observed entities, Grounded small-number arithmetic, Comparison of quantitative properties of observed entities, Measurement using simple tools
Building/creation: Physical construction with objects, Formation of novel concepts, Verbal invention, Social organization
fact this sort of idea has a long and venerable history in the AI field: Alan Turing's original 1950 paper on AI, where he proposed the Turing Test, contains the suggestion that "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?"
This childlike-cognition-based approach seems promising for many reasons, including its integrative nature: what a young
child does involves a combination of perception, actuation,
linguistic and pictorial communication, social interaction,
conceptual problem solving and creative imagination. Human
intelligence develops in response to the demands of richly
interactive environments, and a preschool is specifically
designed to be a richly interactive environment with the capability
to stimulate diverse mental growth. The richness of the
preschool environment suggests that significant value is added
by the robotics based approach; but a lot can also potentially be
done by stretching the boundaries of current virtual world
technology.
Another advantage of focusing on childlike cognition is that child
psychologists have created a variety of instruments for
measuring child intelligence. So in a preschool context, one can
present one's AI system with variants of tasks typically used to
measure the intelligence of young human children.
It doesn't necessarily make sense to outfit a virtual or robot preschool as a precise imitation of a human preschool; this would be inappropriate, since a contemporary robotic or virtual body is rather differently capable than that of a young human child. The aim in constructing an AGI preschool environment should rather be to emulate the basic diversity and educational character of a typical human preschool.
To imitate the general character of a human preschool, I would aim to create several centers in a virtual or robot preschool. The precise architecture would be adapted via experience, but initial centers might be, for instance:
...
Since I really enjoy playing with young children, I find the idea of
an AGI preschool not only theoretically compelling, but an awful
lot of fun.
Story/Scene Comprehension
Joscha Bach liked the preschool approach OK, but he felt it
placed too much emphasis on actuation (moving around, building things, etc.), which he felt was fairly peripheral to AGI. So he prefers to focus a little later in the school curriculum, on a set of tasks he calls scene and story comprehension.
Scene comprehension here does not mean only illustrations, but real-world scenes, which can be presented at different granularities, media and difficulties (cartoons, movies, or theatrical performances, for instance). This approach differs from the reading curriculum scenario, in that it more directly provides a dynamic environment. If group exercises are included, then all the Laird/Wray criteria are fulfilled in a direct and obvious way. For instance, a scene comprehension task might involve watching 10 minutes of a Hollywood movie, and concisely explaining what's happening, or working together with a few others to concisely explain what's happening. Or it might involve
228
229
230
232
233
234
235
Scenario | Competency Area | Sub-area
Virtual Preschool | Learning | Dialogical
Virtual Preschool | Modeling Self and Other | Theory of Mind
Virtual School Student | Learning | Via Written Media
Virtual School Student | Modeling Self and Other | Other-Awareness
Robot Preschool | Actuation | Proprioception
Robot Preschool | Memory | Episodic

Scenario | Competency Area | Sub-area
Wozniak Coffee Test | Communication | Gestural
Wozniak Coffee Test | Actuation | Navigation
Wozniak Coffee Test | Reasoning | Physical
Wozniak Coffee Test | Reasoning | Induction
15
Mind Uploading
While my research career has focused mainly on figuring out how to build AGI, using ideas not that closely based on human brain structure, I also have a strong interest in figuring out how to make machines that embody human minds in an exact way. This is not just out of intellectual interest, but because I'd like to port my OWN mind, and the minds of my friends and family members and anyone else who wants it, into the Internet or superior robot bodies or wherever they want to go. Once a human mind is uploaded, it would have the option to remain human, or potentially to upgrade itself into some radically superior form, maybe fusing with other AGIs. Life extension in the human body form is interesting too, but ultimately it's not as fascinating and exciting to me as the prospect of moving my mind through various different bodies, including those supporting much greater intelligence than the traditional human body evolution has supplied us with.
I wrote these thoughts on mind uploading in early 2011. Shortly after that I pursued my interest in mind uploading in a more intense way, via editing the first-ever scientific journal issue on mind uploading (the Special Issue on Mind Uploading of the International Journal of Machine Consciousness[41]; kudos are due to editor Antonio Chella for being visionary enough to allow such a special issue to exist; and thanks to my co-editor
[41] See http://www.worldscientific.com/toc/ijmc/04/01 for the official site of the Special Issue on Mind Uploading; and http://wp.goertzel.org/?page_id=368 for unofficial preprints of some of the articles.
Matt Iklé for helping with putting the issue together, and James Hughes and Randal Koene for help with gathering papers). I haven't done any actual research directly toward the goal of mind uploading yet, but I'd certainly love to. As neurotech progresses and more and more data about the brain is gathered, there will doubtless be chances to apply AGI and/or narrow-AI tech to the problem of reconstructing minds in new substrates.
I posted this to my Gmail account on Tuesday, September 28, 2010:
To whom it may concern:
I am writing this in 2010. My Gmail account has more than
20GB of data, which contain some information about me and
also some information about the persons I have exchanged
email with, including some personal and private information.
I am assuming that in 2060 (50 years from now), my Gmail
account will have hundreds or thousands of TB of data,
which will contain a lot of information about me and the
persons I exchanged email with, including a lot of personal
and private information. I am also assuming that, in 2060:
1. The data in the accounts of all Gmail users since 2004 is
available.
2. AI-based mindware technology able to reconstruct
individual mindfiles by analyzing the information in their
aggregate Gmail accounts and other available
information, with sufficient accuracy for mind uploading
via detailed personality reconstruction, is available.
3. The technology to crack Gmail passwords is available,
but illegal without the consent of the account owners (or
their heirs).
4. Many of today's Gmail users, including myself, are
already dead and cannot give permission to use the data
in their accounts.
similar to what I've left behind; but what if that weren't one's goal? What if one's goal were to re-create ME, via creating an entity operating by closely human-brain-like dynamics, yet still with the capability of giving rise to my texts, videos, chats, etc.? Could there really be another human being, another intelligence with a human-brain-like architecture, fundamentally different from Ben Goertzel, yet capable of giving rise to the same large corpus of utterances, gestures and interactions as Ben Goertzel? My heart says no; it seems like to say all those things and make all those face and body movements the same as me, using a human brain and body or something similar, you'd have to BE me.
Of course, we lack the technology today to perform this kind of reverse engineering, to infer the brain from the behavior. But storing data is relatively cheap these days, so it's quite viable to store massive amounts of data about a person and keep it around till the technology develops. And if you doubt the possibility of any human technology successfully performing this kind of reverse engineering, what about an advanced AGI? In the end, what we're talking about here is an optimization problem: find a neural-like structure whose behaviors, in a certain set of contexts, would emulate a certain database of behaviors. This is a difficult optimization problem, yet bounded in size, with bounds expressible in terms of human brain size and behavior database size. It's hard for me to believe it will exceed the capabilities of post-Singularity AGIs.
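Just to make the framing concrete, here is a cartoon of that optimization problem in code. Everything in it is a hypothetical stand-in: the three-entry behavior database, the two-parameter linear "brain", and the crude random search; the real search space would be astronomically larger:

    import random

    # Toy behavior database: context -> recorded behavior.
    behavior_db = {0.1: 0.22, 0.5: 1.03, 0.9: 1.81}

    def candidate_behavior(params, context):
        # Stand-in for a "neural-like structure": a two-parameter linear model.
        a, b = params
        return a * context + b

    def mismatch(params):
        # Objective: total discrepancy between candidate and recorded behaviors.
        return sum(abs(candidate_behavior(params, c) - beh)
                   for c, beh in behavior_db.items())

    # Crude random search over a bounded parameter space.
    best = min(((random.uniform(-3, 3), random.uniform(-3, 3))
                for _ in range(10000)), key=mismatch)
    print(best, mismatch(best))

The point is only that the problem is well-posed: a bounded space of candidate structures, a finite behavior database, and an objective function measuring the mismatch between the two.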
16
AGI Against Aging
I wrote this essay originally in 2009 for Life Extension Magazine, after some discussions about the power of AI for longevity research with Life Extension Foundation founder/chief exec Bill Faloon (who is a great guy); but it was too long for them and I never got around to writing a shorter version. But I posted it online and it got a lot of positive responses. I've given a load of talks on this theme as well.

2009 was far from the start of my work on AI and biology; I first plunged into the area in 2001, shortly after my first AI company Webmind Inc. disappeared. I founded Biomind LLC in 2002, with a vision of applying advanced AI tech to genomics and proteomics so as to enable humanity to more rapidly cure diseases and end aging. By 2009 I had already gotten enough practical sense for the biology and longevity research spaces to feel comfortable prognosticating and pontificating about the best research directions for the field to follow.
Since writing "AGI Against Aging" in 2009, I've done a fair bit more work in the AI-for-longevity-research arena. For one thing, my work with Genescient Corp., applying AI to analyze data from their long-lived flies, briefly discussed in "AGI Against Aging", has gone a lot further by now; in fact I'm right in the middle of writing up a technical paper on some of our findings. Also, a few supplements are now on the market, providing novel combinations of herbs, derived via a combination of my team's AI analysis with human biological insight, that address various ailments (one for inflammation, one for brain aging, and one for aging in general).

And my OpenCog / Biomind colleagues and I are gradually moving toward implementation of more of the advanced AI ideas described here. Just recently (mid-2014), we've finally started loading various biological ontologies and datasets into
Problem | Discovery | SENS Solution
Cell loss, tissue atrophy | 1955 | http://www.sens.org/research/introduction-to-sensresearch/cell-loss-and-atrophy
Nuclear [epi]mutations (only cancer matters) | |
Mutant mitochondria | 1972 | http://www.sens.org/research/introduction-to-sensresearch/mitochondrial-mutations
Death-resistant cells | 1965 | http://www.sens.org/research/introduction-to-sensresearch/deathresistant-cells
Tissue stiffening | |
Extracellular aggregates | 1907 | http://www.sens.org/research/introduction-to-sensresearch/extracellular-junk
Intracellular aggregates | 1959 | http://www.sens.org/research/introduction-to-sensresearch/intracellular-aggregates
makes them live so long, and how to draw lessons from them
about what pharmaceutical or nutritional substances may cure
human disease in order to prolong a healthy human life. His
overall approach to aging involves building a huge integrated
database of biological knowledge about multiple organisms, and
progressively growing this database in synchrony with
experimental evolution on flies and other model organisms.
Unlike de Grey, Rose doesn't even like to talk about aging; he prefers to talk about disease, figuring that aging is essentially a combination of one disease on top of another on top of another. De Grey's initiatives may resolve some of the problems associated with aging, Rose's effort may resolve others, and there are many other researchers out there. Through their collective activity, the average human lifespan is going to progressively increase, as the science advances. But what might cause a rapid acceleration in the progress toward increasing healthy lifespans? Might it be the application of AGI technology to life extension research? Narrow AI has already proved itself valuable for the study of aging-related diseases, so this certainly seems a plausible conjecture.
macroscopic, sexually-reproducing organisms, immortal organisms would not have had an evolutionary advantage. Rather, there was an evolutionary pressure toward organisms that could evolve faster. If a species is going to evolve rapidly, it's valuable for it to have a relatively quick turnover from one generation to the next.

There doesn't seem to be any single cellular grim reaper process causing soma cell senescence. Rather, it would appear that there are several distinct mechanisms, all acting in parallel and in concert.
There are junk molecules, accumulating inside and outside of
cells, simply clogging up the works. Then there are various
chemical modifications that impair the functioning of molecular
components such as DNA, enzymes, membranes and proteins.
Of all these chemical reactions, oxidation has attracted the most
attention, and various anti-oxidant substances are on the market
as potential aging remedies. Another major chemical culprit is
cross-linking, the occasional formation of unwanted bridges
between protein molecules in the DNAbridges which cannot be
broken by the cell repair enzymes, interfering in the production of
RNA by DNA. Cross-linkages in protein and DNA can be caused
by many chemicals normally present in cells as a result of
metabolism, and also by common pollutants such as lead and
tobacco smoke.
As time passes, signaling pathways and genetic regulatory
networks within cells can be altered for the worse, due to subtle
changes in cellular chemistry. The repair mechanisms that would
normally correct such errors appear to slow down over time.
Telomeres, the ends of chromosomes, seem to get shorter
each time a cell divides, causing normally suppressed genes to
become activated and impair cell function. Finally, the brain
processes that regulate organism-wide cell behavior decline over
time, partly as a result of ongoing cell death in the brain.
different life-stages, and their effects at later ages (after the most typical reproductive age) are generally adapted by evolution at a much slower pace. So, our genes' effects on later life-stages are likely more hunter-gatherer-ish than our genes' effects on earlier life-stages, making a Paleo approach even more beneficial for those of us 40 or over.

It's worth noting that Michael is in this age range himself, and he does walk the walk as well as talk the talk: he keeps to the Paleo diet and lifestyle pretty strictly, and he looks and feels great! I myself have been keeping a sort of quasi-Paleo diet and lifestyle, and have noticed some definite improvements in my feeling of physical well-being also. Whereas when I tried the CR diet for a while a few years back, I found myself feeling pleasingly light-headed and positive, but also relatively devoid of drive and energy, and, well... sometimes annoyingly hungry!!!
strong general intuition on this point, but it's often a good bet to say that variety amongst approaches, not a single orthodoxy, will yield the best results. The mainstream molecular biology community seems to think de Grey's proposed solutions to his seven problems reveal a strange taste; but this doesn't mean very much, as the mainstream's scientific taste may well be mortally flawed. Science, like any human endeavor, has its fashions and trends. What is seen as weird science today may be a commonplace field of study in a decade.

Regarding mitochondrial DNA damage, de Grey's current proposal is to fix it in a rather direct way, by replacing the flawed proteins produced by the flawed mitochondrial DNA. This could work because there is already an in-built biological mechanism that carries proteins into mitochondria: the TIM/TOM complex, which carries about 1000 different proteins produced from nuclear DNA into the mitochondria.

What de Grey proposes is to make copies of the 13 protein-coding genes in the mitochondrial genome, with a few simple modifications to make them amenable to the TIM/TOM mechanism, and then insert them into the nuclear chromosomes. In this way they'll get damaged much more slowly, because the nuclear chromosomes are much more protected from mutations than mitochondrial genes.
Sensible enough, no?
On the other hand, I recall a conversation over dinner a few years back, somewhere in northern Virginia, between Aubrey and Rafal Smigrodzki, the biologist who got me involved in Parkinson's research when he was at the University of Virginia. Rafal's worry about moving mitochondrial DNA into the nucleus is that its ordinary operations may depend on other things that are happening outside the nucleus, so maybe after it's moved it won't be able to do its thing properly. In other words, maybe Aubrey's engineering approach overlooks too much of the
insight. But we're not there yet. So at the present time, the best strategy for AI data analysis is to incorporate all available sources of information, including direct experimental data and text humans have produced based on interpreting that data.
In 2006, I co-organized the sixth annual Bio-NLP workshop, as part of the annual HLT-NAACL Computational Linguistics conference. At previous Bio-NLP workshops, nearly all the work presented had pertained to fairly simple problems, such as recognizing gene and protein names in research papers (a task made more difficult than it should be by the presence of multiple naming conventions among biologists). But starting in 2006 we saw more and more researchers creating software with the capability to recognize relationships between biological entities, as expressed in natural language text; and this trend has intensified subsequently. The latest Bio-NLP software (see Rzhetsky's work[53] for an impressive example) takes in a research paper and tells you which genes, proteins, chemicals and pathways are mentioned, and how they are proposed by the authors to relate to each other (which genes are in which pathways, which enzymes catalyze which reactions, which genes upregulate which others, etc.). This is a far cry from full understanding of the contents of research papers, but it's definitely a start.
AI-Based Logical Inference Based on Information Automatically Extracted from PubMed Abstracts
The paper I presented at Bio-NLP 2006 concerned a research prototype called BioLiterate, which we built for the NIH Clinical Center in 2005. What the BioLiterate prototype did was extract relationships from various biomedical research abstracts, and try to glue them together using logical reasoning. So, for example, if one paper said that p38 map kinase inhibition prevents bone loss, and another paper said that DLC inhibits p38, then the software would put the two together, deciding (using logical reasoning) that maybe DLC prevents bone loss (the actual sentences the AI used in these inferences, found in PubMed abstracts, are shown in the figure above). The logical inference was provided by the Probabilistic Logic Networks module of the Novamente Cognition Engine[54]. BioLiterate was a prototype, rather than a robust and deployable software solution, but it made its point: if you build a Bio-NLP system and then use the right sort of rules to pipe its output into a computational reasoning system, you get an automated biological hypothesis-making system.
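The chaining step itself is simple enough to sketch. Here is a toy reconstruction of the idea; it is mine, not the actual BioLiterate code, which used probabilistic rather than crisp inference over parser output:

    # Relations as they might come out of a Bio-NLP extractor.
    extracted = [
        ("DLC", "inhibits", "p38"),
        ("inhibition of p38", "prevents", "bone loss"),
    ]

    def hypothesize(relations):
        # Chain "A inhibits B" with "inhibition of B prevents C"
        # to propose "A may prevent C".
        hypotheses = []
        for a, r1, b in relations:
            if r1 != "inhibits":
                continue
            for x, r2, c in relations:
                if r2 == "prevents" and x == "inhibition of " + b:
                    hypotheses.append((a, "may prevent", c))
        return hypotheses

    print(hypothesize(extracted))
    # -> [('DLC', 'may prevent', 'bone loss')]

The hard engineering, of course, lies upstream (reliably extracting such relations from messy abstracts) and downstream (weighing the strength of the combined evidence), which is where the probabilistic reasoning earns its keep.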
17
AGI and Magical Thinking
In March 2011, while the nuclear reactor disaster in Fukushima, Japan was all over the news, a friend asked me to write him a few paragraphs on the topic of AGI and nuclear disasters. How, he asked me, might disasters like Fukushima be prevented if we already had powerful artificial general intelligence?

I actually didn't think this was a great idea for an article, because it seemed too obvious to me that problems like Fukushima could easily be avoided if one had radically superhuman intelligence at one's disposal (although new and currently unforeseen problems might well arise!). But he insisted, and said he had a particular group of colleagues he wanted to distribute my thoughts to. So I went ahead and wrote him the paragraphs. And having done that, I figured I might as well post the result as a brief item in H+ Magazine.
The H+ post received a fair bit of negative attention, even to the point of getting slammed by Forbes Magazine blogger Alex Knapp (whom I emailed with a bit afterwards, and who turned out to be a really nice guy with a good knowledge of science). Overall the discussion with the article's critics was interesting, and highlighted the difficulties of getting people, even highly intelligent and educated people, to really understand the power that advanced AGI would have.

First, the little article in question...
[57] http://www.cse.msu.edu/~weng/
[58] http://www.im-clever.eu/
19
Plenty More Room at the Bottom
(Beyond Nanotech to Femtotech)
This H+ Magazine article emerged in 2010, as a result of discussions I was having with my good friend Hugo de Garis. Hugo began his scientific career as a physicist and then veered into AI, working e.g. on novel software/hardware architectures for evolving neural nets in FPGA chips; he has also made a name for himself with outrageous futurist prognostications, such as his predictions of a coming Artilect War between pro- and anti-technology forces. But in 2010, after retiring from his position as an AI professor at Xiamen University in China (where he introduced me to my now-wife Ruiting Lian, while I was visiting him there co-running the 2009 AGI Summer School), he returned to his first love of mathematical physics, and began speculating seriously about technologies way smaller than nano. The concept had occurred to me before, but he got me to take it more seriously than I had been, and I put a bit of thought into it, and wrote this article.
I'm pretty sure that advanced AGI, at the human level and significantly beyond, can be achieved using plain ordinary digital computers; for example, server farms like the kind powering Google and Yahoo today. Even if the human brain does make use of some funky quantum nonlocality effects, or even weirder physics (as some claim, though there's no evidence of this currently), I really doubt such effects are necessary for achieving humanlike (and transhuman) general intelligence.

But, even if my view on this is right (and the vast majority of physicists and AI researchers, even AGI skeptics, agree with my views on this), that's not to discount the potential for achieving yet greater intelligence by using more exotic computing fabrics. In order to unlock the full depths of computing power implicit in
[59] http://www.nanomedicine.macromol.in/
[60] http://www.nanomedicine.com/
their dynamics. But quarks also have some interaction with the
weak force, e.g. the weak force can cause the transmutation of
quarks into different quarks, a phenomenon that underlies some
kinds of radioactive decay such as beta decay.
[61] http://www.scipub.org/fulltext/ajeas/ajeas22501514.pdf
[62] http://www.youtube.com/watch?v=rwGAzy0noU0
[63] http://www.bnl.gov/rhic/physics.asp
[64] http://www.wisegeek.com/what-is-a-strangelet.htm
[65] http://public.web.cern.ch/public/en/lhc/safety-en.html
[66] http://prd.aps.org/abstract/PRD/v4/i6/p1601_1
[67] http://prd.aps.org/abstract/PRD/v30/i2/p272_1
The hypothetical chain reaction via which strangelets eat the Earth
So, where does the End of the World come in? There are some interesting (albeit somewhat speculative) arguments to the effect that if a strangelet encounters ordinary matter, it could trigger a chain reaction in which the ordinary matter gets turned into strangelets, atom by atom, at an accelerating pace. Once one strangelet hits a nucleus, it would likely turn it into strange matter, thus producing a larger and more stable strangelet, which would in turn hit another nucleus, etc. Goodbye Earth, hello huge hot ball of strange matter. This was the source of the worries about the LHC; worries which did not eventuate, since no strangelets were noticeably produced when the LHC was run.
One of the many unknowns about strangelets is their surface tension; nobody knows how to calculate this, at present. If strangelets' surface tension is strong enough, large stable strangelets should be possible; and potentially, strangelets with complex structure, as femtotech requires.

And of course, nobody knows what happens if you vibrate strangelets very, very fast with small amplitude; can you
20
The Singularity Institute's Scary Idea
(and Why I Don't Buy It)
This essay, posted on my blog "The Multiverse According to Ben" in 2010, attracted more comments than any other blog post I've made[68], because the organization it discussed, SIAI, possessed a great number of passionate supporters, as well as a great number of passionate detractors. The version given here was edited by me at some later point, but is essentially the same as the blog post version.

I felt some reluctance about including this essay here in this book, because SIAI has somewhat moved on from where it was back in 2010, including a name change (to MIRI, Machine Intelligence Research Institute) and a bunch of new management. However, in the end I decided to include it, because in spite of MIRI's greater professionalism, the basic SIAI perspective I critiqued in my essay is still there at the heart of MIRI; and in various ways the debate that this essay represents is still ongoing.

As I write these words in 2014, Oxford philosopher Nick Bostrom's book Superintelligence has recently been published, and in large part it consists of a more academic and sober presentation of a minor variant of the line of thinking I call here "The Singularity Institute's Scary Idea".

In a 2012 online dialogue with new MIRI executive Luke Muehlhauser, I delved into these same issues in an updated way, including responses to some of Bostrom's essays which form the conceptual core of his Superintelligence book. That
[68] http://multiverseaccordingtoben.blogspot.hk/2010/10/singularity-institutes-scary-idea-and.html
[69] http://opencog.org/
[70] http://lesswrong.com/lw/y3/value_is_fragile/
Practical Implications
The above discussion of SIAI's Scary Idea may just sound like
fun science-fictional speculation, but this chapter originated out
Conclusion
Although I think SIAI is performing a useful service by helping
bring these sorts of ideas to the attention of the futurist
21
Does Humanity Need an AI Nanny?
This H+ Magazine article, from 2011, is one of my lines of thought that I feel most ambivalent about. It's a case where my logical mind pushes in one direction, and my emotions in another. Emotionally, I'm a freedom-loving American anarchist freak by nature, and I hate the idea of being constantly observed and monitored by critical eyes, especially eyes connected to actuators with the power to stop me from doing what I want. Logically, though, it seems to me that, given the increasing dangers coming along with increasingly advanced technology, universal surveillance or sousveillance are probably the only rational courses, if we want to survive long enough to transcend to the next stages of intelligence evolution.

The article attracted a bit of attention, including getting me invited to participate in an episode of Michio Kaku's popular science TV show, for which I was filmed talking about the AI Nanny in the office of a company doing video surveillance of various offices, sitting in front of a bunch of monitors showing the outputs of security cameras. Not my usual work environment! But meeting Kaku was cool; and we discussed string theory and various more out-there physics ideas on the van ride to the filming location.
One possible solution to the acute risks posed by rapidly
advancing technology development is to build a powerful yet
limited AGI system, with the explicit goal of keeping things on the
planet under control while we figure out the hard problem of how
to create a probably positive Singularity. That is: to create an AI
Nanny.
[71] http://strategicphilosophy.blogspot.com/2009/06/how-fast-will-future-arrive-how-will.html
[72] http://hplusmagazine.com/2010/02/05/how-long-till-human-level-ai/
possible within the next few decades, and that slightly superhuman AGI will follow shortly after that.

Regarding the "Nannifying an AGI is hard" complaint, I think its validity depends on the AGI architecture in question. If one is talking about an integrative, cognitive-science-based, explicitly goal-oriented AGI system like, say, OpenCog[73] or MicroPsi[74] or LIDA[75], then this is probably not too much of an issue, as these architectures are fairly flexible and incorporate explicitly articulated goals. If one is talking about, say, an AGI built via closely emulating human brain architecture, in which the designers have relatively weak understanding of the AGI system's representations and dynamics, then the "nannification is hard" problem might be more serious. My own research intuition is that an integrative, cognitive-science-based, explicitly goal-oriented system is likely to be the path via which advanced AGI first arises; this is the path my own work is following.
"It's impossible to build an AI Nanny; the surveillance technology is too hard to implement." But is it really? Surveillance tech is advancing bloody fast, for all sorts of reasons more prosaic than the potential development of an AI Nanny. Read David Brin's book The Transparent Society[76] for a rather compelling argument that before too long, we'll all be able to see everything everyone else is doing.

"Setting up an AI Nanny, in practice, would require a world government." OK, yes it would... Sort of. It would require either a proactive assertion of power by some particular party, creating and installing an AI Nanny without asking everybody
[73] http://opencog.org/
[74] http://micropsi.com/publications/Bach_AGI_2011.pdf
[75] http://en.wikipedia.org/wiki/LIDA_%28cognitive_architecture%29
[76] http://www.davidbrin.com/transparent.htm
Yes, someone really made this counterargument to me; but as a scientist, mathematician and engineer, I find this wholly implausible. Maintenance of goals under radical self-modification and self-improvement seems to pose some very thorny philosophical and technical problems; and once these are solved (to the extent that they're even solvable), then one will have a host of currently-unforeseeable engineering problems to consider. Furthermore, there is a huge, almost surely irreducible uncertainty in creating something massively more intelligent than oneself. Whereas creating an AI Nanny is merely a very difficult, very large-scale science and engineering problem.

"If someone creates a new technology smarter than the AI Nanny, how will the AI Nanny recognize this and be able to nip it in the bud?" Remember, the hypothesis is that the AI Nanny is significantly smarter than people. Imagine a friendly, highly intelligent person monitoring and supervising the creative projects of a room full of chimps or intellectually challenged individuals.
[77] http://www.amazon.com/Multis-Multicultured-Monocultured-Towards-Creation/dp/0882801627/
"Why would the AI Nanny want to retain its initially pre-programmed goals, instead of modifying them to suit itself better? For instance, why wouldn't it simply adopt the goal of becoming an all-powerful dictator and exploiting us for its own ends?" But why would it change its goals? What forces would cause it to become selfish, greedy, etc.? Let's not anthropomorphize. "Power corrupts, and absolute power corrupts absolutely" is a statement about human psychology, not a general law of intelligent systems. Human beings are not architected as rational, goal-oriented systems, even though some of us aspire to be such systems and make some progress toward behaving in this manner. If an AI system is created with an architecture inclining it to pursue certain goals, there's no reason why it would automatically be inclined to modify these goals.

"But how can you specify the AI Nanny's goals precisely? You can't, right? And if you specify them imprecisely, how do you know it won't eventually come to interpret them in some way that goes against your original intention? And then if you want to tweak its goals, because you realize you made a mistake, it won't let you, right?" This is a tough problem, without a perfect solution. But remember, one of its goals is to be open-minded about the possibility that it's misinterpreting its goals. Indeed, one can't rule out the possibility that it will misinterpret this meta-goal and then, in reality, closed-mindedly interpret its other goals in an incorrect way. The AI Nanny would not be a risk-free endeavor, and it would be important to get a feel for its realities before giving it too much power. But again, the question is not whether it's an absolutely safe and positive project, but rather whether it's better than the alternatives!
[78] http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/
22
AGI, Consciousness, Spirituality, Life, the
Universe, and Everything
My futurist friend Giulio Prisco has been striving for some time to bring together transhumanism and religion. His Turing Church blog pursues this theme, and he has also organized a series of online conferences. Conversations with Giulio formed much of my inspiration for writing A Cosmist Manifesto, though that book ended up philosophical rather than religious in nature; in the end I don't quite share Giulio's vision that futurism should, or will, end up having all that much in common with traditional religions. In any case, the following lengthy H+ Magazine article (edited a bit for inclusion here) was also inspired by my ongoing dialogue with Giulio, and represents an attempt by my 2012 self to clarify the various philosophical and spiritual perspectives at play regarding future technologies, the Singularity, and so forth.
Artificial General Intelligence is a technical topic, but it's also much more than that. It has big implications about humanity's place in the universe; both what humanity's place is, logically and scientifically speaking, and what we want it to be, normatively. It raises a host of dizzying possibilities regarding our possible futures, including the potential of fusing our minds with AGIs cyborgically, and becoming more than human in various senses (some quite strong). It even raises the possibility of understanding the universe and finding answers to the Big Questions of Life, the Universe and Everything with the help of engineered transhuman minds, to aid us in venturing where the human mind is too weak to travel. And then there are the possibilities highlighted by folks like Hans Moravec, Hugo de Garis and the SIAI, of human extinction, of our replacement by transhuman AGIs who don't care any more for humans than humans do for flies or bacteria.
One critical aspect of the relation between AGI and the Big Picture has to do with consciousness. If one thinks AGIs can be conscious in the sense that humans are, or maybe even more broadly and richly conscious, then that certainly informs one's sense of AGI's place in the Cosmos. On the other hand, if one thinks AGIs are intrinsically just feeling-less, experience-less tools, then the story becomes quite different.

There are many, many ways of fitting AGI into one's view of Cosmos and consciousness; a few possibilities are as follows[79]:
[79] Note that the labels in this list are ones I've adopted for my own convenience, rather than necessarily because they're the most standard ones.
Below, I'll dwell more on those views I find most interesting! Also, note that
two of my own favorite perspectives appear at the end of the
chapter (Cosmism and the Implicate Order; my other favorites
of the bunch being Informational Monism and Panpsychism). So
if you don't read to the end you'll miss the good stuff!
Physical Monism
What I refer to as physical monism is the view that the physical
world is in some sense absolutely real and out there and existent,
and everything else is a bunch of malarkey. Intelligence, mind,
consciousness and experience are viewed as patterns or
configurations (or whatever) of physical entities like particles or
waves.
To my mind, this is a potentially useful (though rather limited)
perspective for getting through everyday life, but somewhat
deficient as a real intellectual theory.
After all, how do we know about this supposedly absolutely
existent physical world? Either we take it on faith, like a
prototypical religious person's belief in God. Or we infer it from
various observations, i.e. various pieces of information. But if it's
the latter, then isn't the more fundamental reality associated
with the information, rather than with the physical world whose
existence we infer from the information?
A classic expression of this view is G.E. Moore's observation that
when you kick a rock, you know it's real, and philosophical
babbling becomes irrelevant in the light of this direct evidence of
the rock's reality. But what is it that you actually know is real,
when you kick the rock? The definite experienced reality is
attached to the feelings coming to you from your foot (i.e.
feelings that you have learned to attach to the concept/percept
network called foot), and the sensations coming into your eye
when you look at the rock and the foot. Actually there might be
no rock present in physical reality at all; your brain might be
receiving those sensations from some quite different source.
Informational Monism
The view of the world as bottoming out in some absolute
physical reality seems hopelessly naïve to me -- but the view of
the world as bottoming out in information seems much less so. I
don't quite adhere to this perspective myself, but nor do I know
of any rational, scientific arguments against it. AGI researcher
Joscha Bach, whom I interviewed above, has described the
perspective well:
We grow up with the illusion of a direct access to an outside
world, and this intuition is reflected in the correspondence
theory of truth: our concepts derive their meaning from their
correspondence to facts in a given reality. But how does this
correspondence work? According to our current
understanding of biology, all access to the world is mediated
through a transfer of data, of bits (i.e., discernible
differences) expressed by electrical impulses through
sensory and motor nerves. The structure of the world, with
percepts, concepts, relations and so on, is not part of this
data transfer, but is constructed by our minds. It represents
encodings over the regularities found in the data patterns at
the mind's interface. In theory, all the data entering my mind
over my lifetime could be recorded as a finite (but very long)
vector of bits, and everything that I consider to be knowledge
of the world is a more concise re-encoding of parts of this
vector.
Of course, even the concept that the world is mediated
through sensory nerves is not directly accessible. It is an
encoding, too (it just happens to be the best encoding that
we found so far). And because we cannot know the "real"…
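Bach's image of knowledge as "a more concise re-encoding" of a long bit vector can be made concrete with a toy experiment. The sketch below is my illustration, not Bach's; the data-generating rule and all names in it are invented. It builds a long observation stream with hidden regularities, then shows that a general-purpose compressor, standing in here for a mind's model-building, can re-encode the stream far more concisely than the raw data:

```python
import random
import zlib

# A toy "lifetime of experience": a long stream of observations with
# regularities (a slow alternating pattern, occasionally flipped by noise).
random.seed(0)
bits = []
for t in range(10_000):
    signal = (t // 50) % 2                  # slow day/night-like cycle
    noise = 1 if random.random() < 0.05 else 0
    bits.append(signal ^ noise)             # mostly pattern, rare flips

raw = bytes(bits)                           # one byte per observed "bit"

# "Knowledge of the world" as a concise re-encoding: a general-purpose
# compressor exploits the regularities and stores far fewer bytes.
compressed = zlib.compress(raw, 9)

print(f"raw observation stream: {len(raw)} bytes")
print(f"concise re-encoding:    {len(compressed)} bytes")
```

The particular compressor is beside the point; what matters in this picture is that "knowledge of the world" is whatever re-encoding of the experience vector comes out substantially shorter than the raw data.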
Experiential Vitalism
The perspectives I've considered so far in this chapter are
basically scientific in nature, focused on things like physics and
information theory. But there are also some serious AGI
researchers who take quite different views, drawing more from
the religious or spiritual side of life.
I'm not a religious person by nature or culture; according to
facile high-level classifications of religious belief, I've veered
between atheist, agnostic and spiritual but not religious in
various forms at various times of my life. However, I've always
been fascinated by religious people's beliefs and attitudes,
feeling that they do address important aspects of the universe
that science (so far, at least) has given us little insight about.
After all, no matter how much you value math and science, they
can't tell you everything. Mathematical derivations start with
assumed axioms; and (as David Hume was the first to carefully
argue) science requires some inductive bias or it can't infer
anything specific from observed data. Everybody's got to start
somewhere…
Cognitive Vitalism
A related perspective on AGI and the cosmos, fairly common in
the general population though not so much among scientists, is
that human brains contain some kind of special quality, which
lies outside the domain of empirical science and is responsible
for some key aspects of human intelligence.
Perhaps the most articulate advocate of this perspective that
I've encountered is Selmer Bringsjord, an AI researcher and
logician who is also a devout Christian. Where Changle says
Suchness, Selmer says Soul, but their perspectives seem
closely related. However, there's a key difference, because
Selmer also argues that some functional aspects of human-level
intelligence are reliant on the Soul, which digital computers will
never possess. This, he believes, is the core reason why the AI
field has not yet led to software with human-level general
intelligence.
Selmer goes on to associate the soul with non-Turing
hypercomputation, a kind of computing that cannot be achieved
via physical computers built according to any kind of currently
known physics, and that also cannot be measured using
empirical science as currently understood. That is: science as
we know it consists of the collection of scientific data, which
consists of finite sets of bits (two other ways to phrase finite sets
of bits would be finite amounts of information, or finite sets of
finite-precision numbers), and then the extrapolation of these to
predict the outcomes of future experiments, which will also take
the form of finite sets of bits. There is no scientific experiment,
conductible within the contemporary understanding or practice
of science, that would be able to identify a hypercomputable
process, or distinguish it from a conventional computing process.
In that sense, hypercomputing is a non-empirical concept,
beyond the realm of measurement!
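One way to see the force of this argument is computational: any finite table of experimental results, whatever actually produced it, can be matched perfectly by an ordinary Turing-computable lookup table, so the data alone can never certify that hypercomputation was at work. Here is a minimal sketch of that point (my illustration, not Bringsjord's; the mystery_box rule is an invented stand-in):

```python
# Why no finite experiment can certify hypercomputation: any finite
# record of observations is reproduced exactly by an ordinary,
# Turing-computable lookup table.

def mystery_box(x: int) -> int:
    """Stand-in for an allegedly hypercomputable process (hypothetical:
    in practice we only ever see finitely many of its outputs)."""
    return (x * x + 7) % 13  # any rule would do; pretend it's unknown

# A finite experiment: query the box on finitely many inputs.
observations = {x: mystery_box(x) for x in range(100)}

# A perfectly conventional computable "theory" matching every data point.
def computable_theory(x: int) -> int:
    return observations[x]  # plain lookup; runs on any digital computer

assert all(computable_theory(x) == mystery_box(x) for x in observations)
print(f"a computable lookup fits all {len(observations)} observations")
```

However exotic the box's internals, the finite record is explained without remainder by the lookup table, which is exactly the sense in which hypercomputation lies beyond empirical test.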
Panpsychism
Panpsychism occurs in various forms, but in the broad sense it
refers simply to the idea that mind is a fundamental feature of the
universe and each of its parts, rather than something that is the
exclusive property of specific kinds of systems like humans,
other higher animals, intelligent computer programs, etc.
Though not a common view in contemporary Western society,
philosophy or science, panpsychism does have a long history in
Western philosophy, encompassing thinkers like
Leibniz, James, Whitehead, Russell, Fechner and Spinoza. A
host of recent books treat the topic, including Skrbina's Mind that
Abides: Panpsychism in the New Millennium and Strawson's
Consciousness and its Place in Nature.
Panpsychism also has a long and rich history in Eastern
philosophy; e.g. the modern Vedantic thinker Swami Krishnananda
observes:
The Vedanta philosophy concludes that matter also is a phase of
consciousness, and objects of knowledge embody in themselves…
Spiritual Monism
Panpsychism holds that everything in the cosmos has at least a
spark of mind, of consciousness, in it. Quite often, though not
always, this perspective comes along with a more religious
view, which holds that everything in the cosmos has a spark of
God in it (in some sense or another), so that ultimately
everything is part of the Mind of God. I call this spiritual
monism. Of course this may be interpreted in many, many
different ways, and it would be out of place to review them all
here, but I can't resist giving one particularly fascinating
example.
Around the same time I interviewed Changle, I also did a long
and fascinating interview with Lincoln Cannon, the leader of the
Mormon Transhumanist Association. As well as advocating a
sort of Mormon spiritual monism, he advocates the intriguing
position that Mormonism is the most transhumanist religion,
because it explicitly advocates human beings improving
themselves incrementally until they effectively become gods. It
also contains the idea that God used to be an ordinary being like
us, until he self-improved and became, well, transhuman…
…explicate orders. Not that the explicate order uses classical logic
and the implicate order uses quantum logic; but rather that
quantum logic captures some aspects of the explicate/implicate
interaction that classical logic misses.
Recently I've begun thinking about the implicate order from a
different perspective, and looking at models of the implicate
order as a logic of questions rather than a logic of answers. I'm
experimenting with modeling the implicate order as something I
call QP, a questioning process: not a process of questioning
anything in the everyday world, but rather a process of
questioning itself. To quote a manuscript I wrote on this a while
ago (tentatively titled ?):
If I had to summarize QP in a brief phrase of (almost)
ordinary English, I suppose I'd go with something like: the
process of a complex, autopoietic pattern/process-system
growing and developing via self-referential (and joyful,
autonomous and interconnected) self-questioning. Sorry
if that sounds like gobbledygook! It makes a lot of sense to
me, and I hope it will make a lot of sense to you after you
finish the book!
QP: the process of questioning. Questioning everything,
including the process of questioning everything, and so forth!
What I've been studying is how one might model the universe as
something that fundamentally emerges from this kind of
self-questioning process.
Another famous quantum physicist, John Wheeler, speculated
about the possibility of deriving quantum mechanics and general
relativity theory (two great physics theories that remain
un-unified, leaving modern physics in a state of unacceptable
contradiction) from some sort of statistical analysis of the space
of logical propositions, so that physics would emerge from a
pre-geometry made of logic. My QP approach is actually
somewhat similar, except that I'm looking at a logic of questions
rather than a logic of propositions…
Conclusion
So, at the end of this romp through strange ideas and
perspectives, what's the take-home point about AGI and the
Cosmos?
As you already know, I'm not a Mormon. I'm not religious at
all. I'm not even really a Jew, in terms of beliefs, in spite of my
Jewish racial background. And unlike my colleague Selmer, I
don't place much stock in the hypercomputable Soul.
I feel like existing religious and spiritual notions of God and
Soul are getting at important aspects of the universe which…