Why Natural Science Needs Phenomenological Philosophy
Steven M. Rosen
[This article is published in Progress in Biophysics and Molecular Biology 119 (2015): 257–269.]
ABSTRACT
Through an exploration of theoretical physics, this paper suggests the need for
regrounding natural science in phenomenological philosophy. To begin, the
philosophical roots of the prevailing scientific paradigm are traced to the thinking of
Plato, Descartes, and Newton. The crisis in modern science is then investigated,
tracking developments in physics, science’s premier discipline. Einsteinian special
relativity is interpreted as a response to the threat of discontinuity implied by the
Michelson-Morley experiment, a challenge to classical objectivism that Einstein
sought to counteract. We see that Einstein’s efforts to banish discontinuity
ultimately fall into the “black hole” predicted in his general theory of relativity. The
unavoidable discontinuity that haunts Einstein’s theory is also central to quantum
mechanics. Here too the attempt has been made to manage discontinuity, only to
have this strategy thwarted in the end by the intractable problem of quantum
gravity. The irrepressible discontinuity manifested in the phenomena of modern
physics proves to be linked to a merging of subject and object that flies in the face of
Cartesian philosophy. To accommodate these radically non-classical phenomena, a
new philosophical foundation is called for: phenomenology. Phenomenological
philosophy is elaborated through Merleau-Ponty’s concept of depth and is then
brought into focus for use in theoretical physics via qualitative work with topology
and hypercomplex numbers. In the final part of this paper, a detailed summary is
offered of the specific application of topological phenomenology to quantum gravity
that was systematically articulated in The Self-Evolving Cosmos (Rosen 2008a).
KEYWORDS: phenomenological philosophy; relativity theory; Michelson-Morley
experiment; discontinuity; quantum gravity; string theory; subject and object;
topology; dimension
1. CLASSICAL SCIENCE IN CRISIS
The unquestioned point of departure for Newtonian science is its self-evident
intuition of object-in-space-before-subject (Rosen 2004, 2008a). This formulation
derives from Plato, who stated in the Timaeus that “we must make a threefold
distinction and think of that which becomes, that in which it becomes, and the
model which it resembles” (1965, 69). “That which becomes” corresponds to the
object term in the formula; this ontological category comprises the things and events
that we observe and measure. The context in which we make these observations is
what Plato called the “receptacle,” a concept that evolved into our modern idea of
space. And the “model” that the transitory object “resembles” is the “eternal object,”
the changeless form or archetype. For Plato, this perfect form is eidos, a rational idea
or ordering principle in the mind of the Demiurge. Using his archetypal thoughts as
his blueprints and the receptacle as his container, this Divine Creator fashions an
orderly world of particular objects and events. The Cartesian descendant of the
Platonic Demiurge constitutes the third term of science’s axiomatic formula: the
subject, idealized in classical mechanics as a “Laplacean demon,” a global observer
that is detached from the concrete world but that has “an instantaneous bird’s eye
view of everything” (Matsuno and Salthe 1995, 311). To summarize the underlying
trichotomy of classical metaphysics implicit in the work of Descartes, Newton, and
their successors: the object is what is observed, space is the continuous medium
through which the observation occurs, and the subject is the transcendent
perspective from which the observation is made. These three terms are taken to be
categorically separate from each other.
Despite science’s long-held ideal of detached, purely objective observation, in
actual practice the observing subject has always interacted with the observations
made. Though this “human factor” in science had never been wholly deniable, up to
the middle of the nineteenth century it was possible for the subjective element to be
minimized and marginalized, attributed to errors in measurement that were readily
manageable and thus had little impact on the primary aim of apodictic certainty. But
the Newtonian ideal was seriously challenged toward the end of the nineteenth
century.
Philosopher Karl Jaspers commented on how science changed in the century
that followed the death of Kant in 1804: “It extends further than in Kant’s time; it is
more radical than ever, both in the precision of its methods and in its consequences”
(1941/1975, 166). But Jaspers goes on to tell us that, in the very extension and
refinement of science, the limitations of scientific knowing have become much more
transparent: “We experience the limits of science as the limits of our ability to know
and as limits of our realization of the world through knowledge...the knowledge of
science fails in the face of all ultimate questions” (1941/1975, 167). For my part, I
would emphasize that the barrier science reached as it progressed into the
nineteenth century was not merely an external one. It was not simply that the
scientific approach was found to be inapplicable to traditionally nonscientific fields
of knowledge. It was that fundamental problems arose within science itself. As
physics pressed toward ever higher levels of exactitude, extending itself to the
extremes of measurement, to the limits of its scales—ultra-high velocities, sub-microscopic distances, and so on—some of its most basic expectations were upset.
The initial upheaval came with the Michelson-Morley experiment on light
(1887). This research raised doubts about the luminiferous ether that Maxwell had
imagined to be the medium for propagating electromagnetic energy. Just as
relatively crude mechanical phenomena like water waves and sound waves could be
taken as transmitted through Newtonian space via the media of water and air
(respectively), Maxwell supposed that the subtler electromagnetic energy he was
investigating was transmitted through the ether, a highly refined medium thought to
pervade the whole universe. Possessing few properties and no action of its own, the
ether was presumed to serve exclusively as the framework within which the
continuous motions of coarser substances could be measured and analyzed—
including the motion of light. Maxwell’s ether hypothesis reflected the underlying
idea that light could be viewed as a mechanical force that passed through the
Newtonian continuum like any other force—in other words, that light could be
treated as an object in space that could be observed objectively by a Newtonian
subject detached from that space. In so postulating the existence of a luminiferous
ether, the old formula of object-in-space-before-subject was tacitly maintained. But
the postulate proved untenable.
If it were true that light moved through a motionless ethereal continuum,
then a key principle of classical mechanics should apply: the addition of velocities.
Assuming light to propagate through the ether at the absolute speed of c (~186,000
miles per second), a traveler moving toward a beam of light should observe the beam to be
approaching her at a velocity greater than c, her own velocity being added to c to
obtain the higher relative velocity. Similarly, if the light beam and the observer are
moving away from each other through the ether, the relative velocity of the light
beam would be less than c, the observer’s velocity now being subtracted from c.
What Michelson and Morley discovered was that the velocity of light actually always
appeared to be the same, regardless of its direction of motion relative to the
observer. This astonishing result sounded the death knell of the ether theory.
The result of the Michelson-Morley experiment was indeed baffling to the
classical “eye.” Is it not an obvious fact of perception that, if I change my perspective
on an object I am viewing, its appearance will change accordingly? What the
experiment demonstrated in its abstract way was that, when the “object” being
considered is light, the familiar principle of perspective does not apply. It would
certainly look strange to me if I got up from this computer screen I am sitting at,
moved all the way to the right of it so that I was viewing the screen at an acute angle,
but found that the screen had the same full, square appearance as when I was sitting
directly in front of it! Analogously, this is what Michelson and Morley “saw” when
they looked at light from different “angles” (reference frames). This strange
outcome made it clear that the phenomenon of light does not behave the way
mechanical phenomena do, thus suggesting that electromagnetic phenomena are
not strictly governed by the classical laws of Newtonian space.
But just why was it that the velocity of light did not change regardless of the
frame of reference that Michelson and Morley adopted? Why did light “look” the
same to them no matter what perspective upon it they assumed? I propose it was
because, in confronting the phenomenon of light, they were not encountering an
object to be seen, but that by which they saw. As the physicist Mendel Sachs put it in
his inquiry into the meaning of light: “What is ‘it’ that propagates from an emitter of
light, such as the sun, to an absorber of light, such as one’s eye? Is ‘it’ truly a thing on
its own, or is it a manifestation of the coupling of an emitter to an absorber?” (1999,
14). Sachs’s rhetorical question intimates that light—instead of lending itself to
being treated as an object open to the scrutiny of a subject that stands apart from
it—must be understood as entailing the inseparable blending of subject and object
(Rosen 2004, 20; 2008a, 164). This computer screen surely does not look the same
to me from every perspective, but would not my viewing of the screen look the
same? In attempting to observe the light by which the screen is perceived, it seems I
would be confronted with the prospect of “viewing my own viewing,” and this would
mean that I would not encounter the concrete variations in appearance that attend
the observation of an object from a viewpoint that itself is not viewed. At bottom
then, the finding of Michelson and Morley evidently called into question the classical
intuition of object-in-space-before-subject that had implicitly governed the work of
science for many centuries.
Let me be clear that the classical formula is hyphenated to indicate that its
three main terms are mutually interdependent. So, if there is no separation of
subject and object in the phenomenon of light, there can be no space, since the
existence of space presupposes that separation. Space is therefore no free-standing
abstraction. Instead of simply existing on its own, it exists as a container of concrete
objects, and these objects necessarily are contained in such a way that they are
categorically divided from the subject, he who is uncontained (cf. Descartes’ sharp
distinction between res extensa, an object extended in space, and res cogitans, a
mental substance or thinking subject not contained in space). If the objects were not
thus sealed into space, if the subject was not sealed out, the spatial seal would be
broken. Just such a breach is signified by the fusion of subject and object
encountered in the phenomenon of light. It was for this reason that the findings of
Michelson and Morley, instead of confirming the existence of the “ethereal
continuum” as expected, pointed to an alarming loss of continuity.
2. RELATIVITY: FROM ONE “BLACK HOLE” TO ANOTHER
The crisis precipitated by the Michelson-Morley experiment was seemingly
addressed by the Einsteinian revolution in physics. But to what extent was
Einstein’s theory truly revolutionary?
The theory of relativity is essentially co-optative (Rosen 2004, 22). It
accommodates the electromagnetic challenge to classical intuition in such a way
that the challenge loses its force. The brilliance of Einstein lay in the fact that, at the
very same time that he accepted the rupture of the classical continuum (he could
hardly do otherwise, given the demise of the ether), he found a way to seamlessly
repair it. It is this that is responsible for the popular confusion over the meaning of
the term “relative” in Einstein’s theory.
Space and time are indeed relativized in Einstein’s theory. Prior to Einstein,
these dimensions were assumed to be absolute parameters of change that did not
themselves change. The space and time of Newton and Descartes were taken as
perfectly homogeneous continua. What did change on the classical view, what was
subject to discontinuity, were not space and time per se but the concrete objects
contained therein. How did Einstein respond when the Michelson-Morley
experiment cast doubt on the older assumptions about space and time? In effect, he
said: Let space and time themselves change (in the theory, time dilates and space
contracts at high relative velocities). This transformation of space and time certainly
appears to dynamize physics and render it concrete, for now, not only do objective
events entail dynamic processes; also in process is what previously had been taken
as the abstract, utterly static framework for those events. In thus introducing change
at the fundamental level of space and time themselves, Einstein did seem to be
challenging the classical order of object-in-space(-and-time)-before-subject. And
yet, the philosopher Bertrand Russell (1925) was prompted to declare that
Einstein’s theory was misnamed, since the theory actually seeks a description of
nature that is anything but relative!
It is clearly an oversimplification of Einsteinian relativity to say, without
qualification, that, in it, space and time change. Einstein did not simply posit the
variability of space and time. Instead he declared that, while these terms previously
had been taken as invariant, they must now be seen as undergoing change within a
new context of invariance. Yes, time is now deemed relative. No longer can we
overcome the “human factor” by assuming a universal clock that enables local
observers in different frames of reference, traveling at different velocities, to
synchronize their watches with absolute precision. But while time is no longer
absolute, space-time is. All concrete observational perspectives, involving variations
in velocity, are rendered strictly equivalent in relation to the four-dimensional
space-time continuum whose unity is conferred by c, the constant velocity of light.
The Michelson-Morley experiment had intimated the possibility that light’s
constancy was indicative of a blending of subject and object that confounded
classical intuition. Einstein foreclosed this interpretive option before it could even
reach the threshold of conscious consideration. For Einstein, light is hardly a
merging of subject and object but is simply an abstract object: it is the empirical
constant necessary for the objective determination of space-time events. Thus, most
essentially, Einsteinian “relativity” was actually not about relativizing or dynamizing
nature; it did not embody a genuine recognition that there is fundamental change or
discontinuity in the world, that the world is in process. Einstein’s success, his
profound influence on twentieth century physics, was rooted in his ability to
accommodate the nineteenth century challenge to classical physics in such a way
that the classical viewpoint is basically upheld. The old order of space and time is
supplanted by Einstein, yet, with scarcely a pause, it is replaced by an even more
abstract order of this kind: that of the four-dimensional space-time continuum. Here
there is still the object, or rather, the objectified relativistic event; still the static
continuum that contains the event, divesting it of its vitality; and still the detached,
idealized subject who analyzes all this from afar. To be sure, Einstein significantly
updated the details of the classical formula, but he did this in order to maintain the
viability of its basic terms.
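As a hedged illustration of the invariance just described (a sketch of my own, not the author's), the following lines apply a Lorentz transformation to a simple event and confirm that, while the time coordinate dilates, the interval built from c remains unchanged; the relative velocity of 0.8c is an arbitrary choice.

import math

c = 299_792_458.0
v = 0.8 * c                                   # assumed relative velocity of the two frames
gamma = 1 / math.sqrt(1 - (v / c) ** 2)       # Lorentz factor

t, x = 1.0, 0.0                               # an event: one second elapsed at the origin

# Lorentz transformation into the moving frame.
t_prime = gamma * (t - v * x / c ** 2)        # time dilates
x_prime = gamma * (x - v * t)                 # position shifts

interval = (c * t) ** 2 - x ** 2              # invariant space-time interval
interval_prime = (c * t_prime) ** 2 - x_prime ** 2

print(f"t' = {t_prime:.3f} s for t = {t:.3f} s (dilation)")
print(f"interval preserved: {math.isclose(interval, interval_prime)}")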
To all appearances, Einstein’s theory of relativity was a resounding success.
However, when he unveiled this idea in 1905, he was well aware that it was
incomplete. Einstein came to call his initial theory “special relativity” because it was
limited to the ideal case of coordinate systems that moved uniformly. In the real
world, however, systems typically change their state of motion, speed up or slow
down. With the special theory published, Einstein turned to the task of accounting
for the relative motion of all reference frames, whether or not the motion was
uniform. This effort eventuated in the 1915 publication of the general theory of
relativity. By switching from the Minkowski flat space of special relativity to the far
more general Riemannian manifold, Einstein could now explain the interaction of
systems in non-uniform relative motion. The flexibility of Riemannian geometry
permitted Einstein to gauge the degree of non-uniformity of motion in precise terms
by associating it with the degree of curvature in the manifold. Space-time is without
curvature for systems in uniform motion and becomes progressively more curved as
the acceleration of the reference frame increases. Applying the principle of general
relativity that establishes the equivalence of inertial and gravitational masses,
space-time curvature is related to gravitational effects: the greater the gravitational
mass of a body, the more curved is the space-time continuum.
Now, while Einstein found it necessary to adopt this approach, he soon
realized that it had its limitations. For, there were solutions to the field equations of
general relativity that predicted infinite curvature. That is, if a gravitational body
were massive enough, the curvature of space-time would become so great that a
singularity would be produced in the continuum. What this meant is that analytic
continuity would be lost and the theory would fail! However, for that to happen, the
mass density of the gravitational body indeed would have to be enormous. When the
general theory was first propounded in 1915, the existence of such astrophysical
bodies was taken as purely hypothetical. But, as the twentieth century wore on, the
possibility of stellar objects whose masses were sufficient to produce “black holes”
in space began to be considered more seriously. This led physicist Brandon Carter
(1968) to raise explicit doubts about Einstein’s theory: Would it be able to survive
its prediction of gravitational collapse? By the end of the twentieth century,
empirical evidence for black holes had only grown stronger, and, now, in the new
millennium, the evidence seems almost irrefutable. One might think that, as a
consequence, Einstein’s theory might have lost significant influence. Before
considering why that is not the case, let me summarize the theory’s course of
development and reflect on its meaning.
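For a rough sense of scale (an illustrative calculation of my own, not drawn from the paper), the Schwarzschild radius gives the size below which a given mass curves space-time so severely that a black hole forms; the constants are standard values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    # Radius below which the given mass curves space-time into a black hole.
    return 2 * G * mass_kg / c ** 2

print(f"Sun (1.989e30 kg):   {schwarzschild_radius(1.989e30) / 1000:.1f} km")   # about 3 km
print(f"Earth (5.972e24 kg): {schwarzschild_radius(5.972e24) * 100:.1f} cm")    # about 0.9 cm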
Einsteinian relativity evolved out of the attempt to circumvent the “black
hole” that was created when Michelson and Morley could not confirm the existence
of the luminiferous ethereal continuum. The effect of Einstein’s theory was to plug
the implicit gap in three-dimensional space by postulating a four-dimensional
space-time continuum. To generalize the new account to non-uniform motion,
Einstein posited the curvature of space-time. What we are seeing, in effect, is that
the four-dimensional approach used to compensate for the absence of continuity in
three-dimensional space winds up re-introducing discontinuity. Even though
general relativity permits one to establish invariances involving non-uniform
motion, invariances that presuppose continuity, the greater the non-uniformity, the
greater is the curvature of space-time, and the closer one then approaches to the
point where invariance breaks down and continuity is lost. So it seems that the
moment curved Riemannian geometry was applied to generalize Einstein’s remedy
for discontinuity, a new order of discontinuity was presaged. In the end then,
Einsteinian relativity does not effectively address the underlying crisis in theoretical
physics precipitated by the Michelson-Morley experiment. Why has the inherent
discontinuity of Einstein’s theory not undermined its influence more completely?
Perhaps it is because the other preeminent field of modern physics has created an
atmosphere in which discontinuity can better be tolerated and its ultimate
consequences better denied.
3. QUANTUM MECHANICS, QUANTUM GRAVITY, AND THE NEED FOR A NEW
FOUNDATION
At the close of the nineteenth century, just around the time when physicists were
digesting the Michelson-Morley findings, another groundbreaking investigation of
electromagnetism was being conducted. To recapitulate this famous experiment,
Max Planck was studying blackbody radiation, the emission of electromagnetic
energy in a completely absorbent medium (a closed cavity that does not reflect light
but soaks it up, discharging the energy internally). Classical theory faced a difficulty
here that was on a par with the problem engendered by the Michelson-Morley
experiment. If the traditional analysis was correct, energy should be transmitted in a
smoothly continuous fashion. Yet this assumption leads to the peculiar prediction
that, if a non-reflective body is exposed to intense heat, it should radiate an infinite
amount of energy—a result that clearly is not borne out by empirical observation.
Planck responded to the contradiction by boldly amending the underlying classical
assumption. He proposed that light, rather than radiating in a continuous manner, is
transmitted in discrete bundles, quanta. The introduction of discontinuity into the
theory now brought a remarkable correspondence with empirical data. The new
quantum theory could predict laboratory findings to a high degree of accuracy by
adding just one parameter, h. This is the constant of proportionality that relates the
energy (E) of a quantum of radiation to the frequency (ν) of the oscillation that
produced it: E = hν. The numerical value of h is 6.63 × 10⁻³⁴ joule-seconds. The
extremely small value of Planck’s constant is consistent with the fact that, in the
familiar world of large-scale happenings, energy does appear to propagate in a
smoothly continuous fashion. It is only when we “look more closely,” examining the
microscopic properties of light, that we notice its discontinuous, quantized grain.
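A brief numerical aside (my own, using the value of h cited above) shows why the quantized grain of light escapes everyday notice: the energy of a single quantum of visible light is minute, so macroscopic emission involves astronomically many quanta; the chosen frequency is merely illustrative.

h = 6.63e-34          # Planck's constant, joule-seconds (value cited above)
nu = 5.0e14           # frequency of green light in Hz, an illustrative value

E_quantum = h * nu    # E = h*nu, the energy of a single quantum
print(f"energy per quantum: {E_quantum:.2e} J")              # roughly 3.3e-19 J

# A one-watt source therefore emits on the order of 10^18 quanta every second,
# so the discrete steps blur into apparent continuity at ordinary scales.
print(f"quanta per second from a 1 W source: {1.0 / E_quantum:.1e}")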
It took a generation for the truly revolutionary implications of quantum
mechanics to become clear. Under the lingering sway of classical thinking, it was
natural to assume that the discontinuity of energy found in QM was not really
fundamental. For, if the properties of a quantum of energy were to be subject to
complete scientific determination, it seemed as if the discontinuity ultimately had to
be reducible to continuous expression via an underlying space-time substrate. Yet,
by 1930, most physicists had arrived at the conclusion that no such reduction is
possible. At this point, the majority of researchers felt obliged to accept the idea that
Planck’s microscopic quantization implies a basic indivisibility of energy that
confounds analytic continuity (which assumes infinite divisibility; see Rosen 2004).
With the continuity principle thus subverted, all classical thinking about space and
time, including that of Einstein, was called into question. It was this implication that
led philosopher Milič Čapek to comment that, in light of quantum mechanics, “the
concepts of spatial and temporal continuity are hardly adequate tools for dealing
with the microphysical reality” (1961, 238).
The microscopic loss of continuity may be better understood by considering
more closely Planck’s constant, h. This number gives a quantum of action. If we
rewrite Planck’s basic equation, E = hν, by replacing frequency (ν) with its inverse,
namely, time, we then have E = h/T or h = ET, and in physics, energy multiplied by
time is a measure of action. The angularity of quantized action, its internal “spin,” is expressed by the application of phase, as given in the formula h/2π = ℏ. Here h is operated upon by a phase of 2π radians, equivalent to a turn of 360°. In quantum mechanics, ℏ is regarded as an indivisible “atom of process,” one not reducible to smaller units that could be applied in its quantitative analysis. Thus, at the sub-microscopic Planck threshold of 10⁻³⁵ meter, the analytical continuity of space gives
way to a “graininess” or discreteness that admits of no further quantitative
determination. We see here the intimate relationship between the indivisibility of
the quantum domain and its basic indeterminacy or uncertainty. According to
Heisenberg’s uncertainty principle, there is a built-in limit to the information we can
obtain about the physical properties of quantum systems. This limitation can be
stated in terms of Planck’s constant: ΔpΔq ≈ ℏ, where p and q are variables such as
position and momentum, or time and energy (variables that are paired or
conjugated so as to be essentially indivisible from each other). The formula says that
the product of the uncertainties (Δs) of such paired terms approximately equals
(cannot be less than) the value of Planck’s constant. Clearly then, the phasic
indivisibility (h/2π) of Planck-level action is equivalent to its uncertainty (ΔpΔq).
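To give the relation ΔpΔq ≈ ℏ some numerical flesh (a hedged sketch of mine, not the author's), confining an electron to atomic dimensions already forces a velocity uncertainty on the order of a million meters per second; the confinement length is an assumed illustrative value.

import math

h = 6.63e-34                      # Planck's constant, J·s
hbar = h / (2 * math.pi)          # reduced constant, ℏ = h / 2π
m_e = 9.11e-31                    # electron mass, kg

delta_q = 1e-10                   # assume the electron is confined to ~1 angstrom
delta_p = hbar / delta_q          # minimum momentum uncertainty from Δp·Δq ≈ ℏ
delta_v = delta_p / m_e           # corresponding spread in velocity

print(f"Δp ≈ {delta_p:.2e} kg·m/s")
print(f"Δv ≈ {delta_v:.2e} m/s")  # on the order of 10^6 m/s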
There is another way to look at the quantum uncertainty. Nearing the sub-microscopic Planck length, it appears that precise objective measurement is
thwarted by the fact that the energy that must be transferred to a system in order to
observe it disturbs that system significantly. This well-known “problem of
measurement” in quantum mechanics expresses quantum indivisibility in terms of
the indivisibility of the observer and the observed. It seems that in QM, the observer
no longer can maintain the classical posture of detached objectivity; unavoidably,
s/he will be an active participant. Evidently this means that quantum mechanical
action cannot be regarded merely as objective but must be seen as entailing an
intimate merging of object and subject that defies Newtonian order. Therefore, just
as the old formula of object-in-space-before-subject was thrown into doubt by the
Michelson-Morley experiment on the velocity of light, so too was it challenged by
Planck’s blackbody research. Could we say that, whereas Einstein attempted to plug
the hole in the spatial vessel by denying it (via his proposal of a four-dimensional
space-time continuum), Planck and his successors fully accepted the discontinuity?
Did quantum physics give up Einstein’s effort to uphold objectivism? Did it embrace
the indivisibility of object and subject? These questions must be answered in the
negative. QM certainly did not just relinquish continuity and the objectivity it
conferred. Instead, the implicit attempt was made to retain continuity through an
approach that is even more abstract than Einstein’s.
Let us consider a central feature of the quantum theoretic formalism:
analysis by probability. According to the classical ideal, the extensive continuum is
infinitely differentiable, which means that the position of a system within it is
always uniquely determinable. When QM was confronted with the inability to
precisely determine the position of a particle in microspace, it did not merely resign
itself to the lack of continuity that creates this fundamental uncertainty. Instead of
allowing the conclusion that a microsystem in principle cannot occupy a completely
distinct position—which would be tantamount to admitting that microspace is not
completely continuous—a multiplicity of continuous spaces was axiomatically
invoked to account for the “probable” positions of the particle: “it” is locally “here”
with a certain probability, or “there” with another. This collection of spaces is
known as Hilbert space. N-dimensional Hilbert space plays a role not unlike that
played by Einstein’s four-dimensional space-time continuum: it responds to the
threat of discontinuity by restoring continuity through an act of abstraction. And, as
with Einsteinian relativity, the quantum mechanical abstraction of classical space
brings with it an abstraction of subjectivity.
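The probabilistic bookkeeping described above can be sketched in a few lines (my own illustration; the two-dimensional state space and the amplitudes are hypothetical): a state vector assigns complex amplitudes to the alternative positions, and the Born rule converts them into the probabilities of finding the particle "here" or "there."

import math

# Hypothetical amplitudes for the two position alternatives "here" and "there".
amp_here = complex(math.sqrt(0.7), 0.0)
amp_there = complex(0.0, math.sqrt(0.3))

# Born rule: the probability of each alternative is the squared magnitude
# of its amplitude.
p_here = abs(amp_here) ** 2
p_there = abs(amp_there) ** 2

print(f"P(here)  = {p_here:.2f}")                 # 0.70
print(f"P(there) = {p_there:.2f}")                # 0.30
print(f"total    = {p_here + p_there:.2f}")       # normalized to 1.00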
There is a substantial difference between the pre-Einsteinian and Einsteinian
versions of the classical posture. In the former, we have objective events occurring
in three-dimensional space before the observing gaze of an idealized global subject
(a “Laplacean demon”). In the latter—where the local space and time of the concrete observer could not merely be discounted—subjectivity itself is taken as object, with the “object” now being regarded as an observational event transpiring in four-dimensional space-time. Whereas three-dimensional events are concretely
observable, the fourth dimension of Einsteinian relativity is an abstraction. The
higher-order Einsteinian observer of these four-dimensional acts of observation
functions as a kind of “hyper-Laplacean demon,” for this omniscient being is a
further step removed from concrete reality than was his Newtonian predecessor.
Nevertheless, in both cases, the traditional stance is strictly maintained. In both, we
have object-in-space-before-subject.
Like Einsteinian relativity, quantum mechanics implicitly transforms the old
subject into an object cast before a more abstract, higher-order subject. In effect, the
quantum mechanical analyst assumes a superordinate vantage point from which
s/he is able to consider alternative acts of classical observation and weight them
probabilistically, with each act corresponding to a different subspace of the Hilbert
space. Similar to relativistic analysis, the “objects” to be analyzed are not mere
concrete substances but observations themselves—what Max Planck called the “run
of our perceptions” (Planck quoted in Jahn and Dunne 1984, 9). If the “scientific
objectivity” of QM’s analysis of observation is to be maintained, the implicit
observational activity of the analyst of observation must itself be exempted from the
analysis. That is to say, two ontologically distinct levels of observational or
subjective activity have to exist: that which is to be analyzed, and that through
which the analysis is to take place. The former is constituted by the old subjective
activity that is now objectified within the framework of the Hilbert space, whereas
the latter corresponds to the more abstract, higher-order, wholly implicit activity of
the quantum mechanical subject standing outside of Hilbert space. It is clear that
this QM subject assumes the same detached, “purely objective” stance as did his
Newtonian forerunner. Still operative in its essential relations is the basic formula of
object-in-space-before-subject.
Nevertheless, Hilbert space does not retain its usefulness for all levels of
energy at all scales of magnification. Its range of applicability is limited to the
comparatively low-energy regime that lies above the Planck length of 10⁻³⁵ meter. It
is true that, in studying the phenomenon of electromagnetic radiation, Planck
brought us into an energy domain in which classical continuity was shaken, and,
along with it, the certitude of classical objectivism. But while the domain in question
is surely microscopic and Planckian uncertainty becomes a significant factor here
(whereas, in the large-scale classical world, it does not), this realm of interaction
remains considerably above the ultra-microscopic, ultra-energetic Planck scale
where discontinuity becomes completely unmanageable.
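As a point of reference (an illustrative computation of my own, not from the paper), the Planck length invoked throughout this discussion is the unique length that can be assembled from ℏ, G, and c, and it comes out near the 10⁻³⁵ meter figure cited above.

import math

hbar = 1.055e-34     # reduced Planck constant, J·s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0    # speed of light, m/s

planck_length = math.sqrt(hbar * G / c ** 3)
print(f"Planck length ≈ {planck_length:.2e} m")   # about 1.6e-35 m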
However, in the course of the twentieth century, physicists probed the
microworld ever more deeply as they sought to advance their project of arriving at a
unified understanding of nature. Whereas the fundamental forces of nature appear
irreconcilable at lower energy levels and orders of magnification, physicists, by
pushing their quantum mechanical research into the high-energy, sub-microscopic
domain, could now account for the atomic decay force (the weak interaction) and
the electromagnetic force in a unified manner. Going still further into the
microworld, impressive progress was made on a “grand unification” that
incorporated the strong nuclear force. And yet, in drawing closer and closer to the
Planck length, the element of uncertainty only grew greater.
To complete its quest for unity, physics now faces one final task. It must
include in its quantum mechanical analysis the one force of nature hitherto
unaccounted for, namely, gravitation. The problem is that gravity, unlike the non-gravitational forces, resists QM treatment until the bitter end. That is, gravitational
energy behaves classically, appears to retain its continuity all the way down the
scale of magnitude to the Planck length itself. It is precisely here that a QM theory of
gravitation would have to operate to fulfill its aim of total unification. Of course, the
Planck length is the threshold at which spatiotemporal turbulence goes out of
control and uncertainty becomes all-consuming. Crossing this threshold, the quasi-continuity of Hilbert space yields to utter discontinuity. Not even a probabilistic
analysis of nature is possible here, as is reflected in the unworkable probability
values obtained for equations dealing with sub-Planckian reality.
It is interesting to note how quantum mechanics and Einstein’s theory of
gravitation converge in negation. It is not merely that the former reaches its
Planckian limit and encounters irrepressible discontinuity in a manner that is
analogous to the black-hole limitation confronting the latter. For what may seem at
first like analogous but different limitations actually can be said to constitute the
very same limitation.
The work of physicist Arthur Eddington (1946) contributed to an
understanding of this. His own effort at unification carries the implication that
quantum mechanical discontinuity (and its associated uncertainty) is equivalent to
relativistic curvature. Speaking of the fundamental relation “between the
microscopic constant σ and the cosmological constants R₀, N,” Eddington declared
that “curvature and [quantum mechanical] wave functions are alternative ways of
representing distributions of energy and momentum” (1946, 46). Eddington’s
findings are consistent with the fact mentioned earlier that, the greater the
curvature of space-time, the closer we approach to the loss of continuity realized in
the singularity of the black hole. Therefore, the production of curvature in general
relativity, which culminates in the infinite warping of space-time found in the heart
of a black hole, maps onto the production of Planckian discontinuity, the degree of
which progressively increases as we descend into the microcosmos. It seems then
that the black hole singularity of general relativity is none other than the
manifestation of quantized gravitational energy at the Planck length. But aren’t
black holes large-scale phenomena, astrophysical events taking place at the opposite
end of the scale of magnitude from the microphysical happenings of quantum
mechanics? On the contrary, upon entering the singularity of the black hole, the
pervasive uncertainty about distance that arises here (owing to the loss of
continuity) renders any notion of “large-scale” vs. “small-scale” inoperative. Simply
stated, the scale of magnitude collapses.
What we are seeing is that Einstein’s “macroscopic” theory comes to an end
at the very same place where quantum mechanics ends: at the “microscopic”
Planckian limit. Here, in this singularity, relativity theory and quantum mechanics
coalesce. Of course, this “unification of the field” is hardly what science had
intended, since the unity is realized in negation, marking as it does the failure of
determinative analysis.
It was in the 1970s, following the progress achieved with grand unification,
that work on a theory of quantum gravity began in earnest. And this is when
confrontation with the chaos of the Planck realm could no longer be avoided. The
equations that would unify all four forces of nature were now completely unable to
contain the wildly fluctuating Planckian energies, as manifested by the infinite
probabilities that turned up to render those equations useless. Consequently,
progress was now blocked and has continued to be thwarted up to the present time.
Over the past forty years, there has been little meaningful movement toward an
effective theory of quantum gravity. Musing ironically on this, physicist Lee Smolin
(2006) observed that, “for more than two centuries…our understanding of the laws
of nature expanded rapidly…. [yet] today, despite our best efforts, what we know for
certain about these laws is no more than what we knew back in the 1970s” (viii).
What “best efforts” is Smolin referring to? Since the 1970s, the quest for a
mathematical unification of nature has largely been dominated by an approach
known as string theory. In this endeavor, the attempt is made to avoid probing
below the Planck threshold simply by assuming that the smallest constituents of
nature are not indefinitely minuscule point-particles as previous theory had
assumed, but string-like vibrating elements of finite extension conveniently scaled
at the Planck length. It is because this stratagem has managed to eliminate infinite
terms from quantum gravitational equations that it has become the preferred
approach. But the price paid for this positivistic ploy has come to be acknowledged
(Smolin 2006, Woit 2006). In my own explorations of the matter (Rosen 2004,
2008a, 2008b, 2013), I have identified several problems with string theory.
First, while it is true that string theory serves the classical ontology by
sidestepping sub-Planckian ambiguity, an epistemic ambiguity takes its place. String
theory’s general equations may be free of unmanageable infinities, but theorists
must be able to solve these highly abstract equations in a manner that produces a
specific description of the world as we know it. As things now stand, the equations
yield a vast array of possible solutions with no guiding principle by means of which
the field can be narrowed in unique correspondence with known physical reality. A
second limitation of the theory is the evident impossibility of objectively testing it in
a direct fashion since, according to physicist Brian Greene, the test would have to be
conducted on a scale “some hundred million billion times smaller than anything we
can directly probe experimentally [!]” (1999, 212). Finally, the theory seems to
contradict itself in its assumption of fundamental particles with finite extension.
“Strings are truly fundamental,” says Greene, “they are ‘atoms,’ uncuttable
constituents” of nature. So, “even though strings have spatial extent, the question of
their composition is without any content” (141). But isn’t this a contradiction? For, at least according to the classical concept of the continuum (not explicitly challenged by string theory), to be spatially extended is to be cuttable, in fact infinitely divisible.
How then could a string be a fundamental particle, an atomic or indivisible
ingredient of nature, when it is spatially extended? In sum, string theory is
ambiguous, objectively untestable, and self-contradictory when seen in classical
terms.
In his book The Trouble With Physics, Smolin (2006) winds up calling for a
different style of doing physics than what has been practiced since the advent of
string theory. He advocates a “more reflective, risky, and philosophical style” (294)
that confronts “the deep philosophical and foundational issues in physics” (290). I
applaud this call for a more philosophically-oriented physics, and I propose that the
recent stalemate in physics suggests it will no longer be possible for us to rely on the
old philosophical foundation. With the coming to prominence of the quantum
gravity issue, theoretical physics evidently has reached an unprecedented
watershed. The problems confronted by string theory, and by quantum gravity in
general, are not merely theoretical ones that can be resolved within the extant
philosophical framework of object-in-space-before-subject. Rather, the difficulty lies
squarely with that framework itself. The trans-Planckian dissolution of
spatiotemporal continuity and fusion of subject and object strike at the very heart of
the ancient formula. I therefore venture to say that any new theory presupposing
said formula will fail to bring the unification that is sought. But if the long-dominant
tradition of philosophy is not equal to the task of effectively grounding a unified
physics, is there any alternative philosophical foundation that can serve in this
capacity? I believe there is.
4. A BRIEF INTRODUCTION TO PHENOMENOLOGICAL PHILOSOPHY
What I am proposing is that meeting the challenge of quantum gravity requires that
physics be regrounded not merely in a new theory, but in a new philosophy, one
that can accommodate the intimate interplay of subject and object. Beginning in the
twentieth century, the classical tradition has been perceptively questioned by the
proponents of a philosophical initiative known as phenomenology. After describing
the general features of this approach in the present section, in the next section I will
focus on a phenomenological concept that has immediate relevance for the current
impasse in theoretical physics.
The phenomenological movement is rooted in the nineteenth century
existentialist writings of thinkers like Kierkegaard, Nietzsche, and Dostoevsky. It
takes its contemporary form through the work of its principal figures: Edmund
Husserl, Martin Heidegger, and Maurice Merleau-Ponty. In terms of the present
paper, phenomenology can be seen most essentially as a critique of the classical
trichotomy of object-in-space-before-subject. To the phenomenologist, the activities
of the detached Cartesian subject are idealizing objectifications of the world that
conceal the concrete reality of the lifeworld (Husserl 1936/1970). Obscured by the
lofty abstractions of European science, this earthy realm of lived experience is
inhabited by subjects that are not anonymous, that do not fly above the world,
exerting their influence from afar. In the lifeworld, the subject is a fully situated,
fully-fledged participant engaging in transactions so intimately entangling that it can
no longer rightly be taken as separated either from its objects, or from the worldly
context itself. As Heidegger put it, the down-to-earth, living subject is a being-in-the-world (1927/1962), a being involved in
a much richer relation than merely the spatial one of being located in the
world.... This wider kind of personal or existential “inhood” implies the whole
relation of “dwelling” in a place. We are not simply located there, but are
bound to it by all the ties of work, interest, affection, and so on. (Macquarrie
1968, 14-15)
It is clear that all three terms of the classical formulation are affected by the
phenomenological move. To reiterate the traditional account, the object is what is
experienced, the subject is the transcendent perspective from which the experience
is had, and space is the continuous medium through which the experience occurs. In
this approach, objects are taken as simply external to each other and as appearing
within a spatial continuum of sheer externality—space’s infinite divisibility, or, in
Heidegger’s words, the “‘outside-of-one-another’ of the multiplicity of points”
(1927/1962, 481). The agents operating upon the objects constitute a third kind of
externality, acting as they do from a transcendent vantage point beyond the objects
in space. It is this privileging of external relations that is counteracted in the
phenomenological approach. Notwithstanding the Platonic/Cartesian idealization of
the world, in the underlying lifeworld there is no object with boundaries so sharply
defined that it is closed off completely from other objects. The lifeworld is
characterized instead by the transpermeation of objects (the quantum scientist
might say “superposition”), by their mutual interpenetration, by the “reciprocal
insertion and intertwining of one in the other,” as Merleau-Ponty put it (1968, 138).
With objects thus related by way of mutual containment, no separate container is
required to mediate their relations, as would have to be the case with externally
related objects. Objects are therefore no longer to be thought of as contained in
space like things in a box, for, in containing each other, they contain themselves. At
the same time, it must also be understood that, in the lifeworld, there can be no
peremptory division of object and subject. The lifeworld subject, far from being the
disengaged, high-flying deus ex machina of Descartes, finds itself down among the
objects, is “one of the visibles” (Merleau-Ponty 1968, 135), is itself always an object
to some other subject, so that the simple distinction between subject and object is
confounded and “we no longer know which sees and which is seen” (139). The
phenomenological grounding of the subject is thus indicative of the close interplay
of subject and object in the lifeworld. Generally speaking then, what the move from
classical thinking to phenomenology essentially entails is an internalization of the
relations among subject, object, and space.
5. THE DIMENSION OF DEPTH
The link between the lifeworld and the quantum world should already be broadly
evident. With the former, the classical continuum is supplanted by an internally
constituted space of overlapping entities featuring the intimate interaction of
subject and object. A more specific articulation of the phenomenological response to
the problem of quantum gravity can be derived from another work of Merleau-Ponty, his essay “Eye and Mind” (1964), where his concept of depth provides an account
of dimensionality that permits us to better understand the limitations of Cartesian
space and to surpass them.
For Descartes, notes Merleau-Ponty, a dimension is an extensive continuum
entailing “absolute positivity” (1964, 173). Descartes’s assumption is that space
simply is there, that it subsists as a positive presence possessing no folds or
nuances; no shadows, shadings, or subtle gradations; no internal dynamism. Space is
thus taken as the utterly explicit openness that constitutes a field of strictly external
relations wherein unambiguous measurements can be made. Along with height and
width, depth is but the third dimension of this hypostatized three-dimensional field.
Merleau-Ponty contrasts the Cartesian view of depth with the animated depth of the
lifeworld, where we discover in the dialectical action of perceptual experience a
paradoxical interplay of the visible and invisible, of identity and difference:
The enigma consists in the fact that I see things, each one in its place,
precisely because they eclipse one another, and that they are rivals before
my sight precisely because each one is in its own place. Their exteriority is
known in their envelopment and their mutual dependence in their autonomy.
Once depth is understood in this way, we can no longer call it a third
dimension. In the first place, if it were a dimension, it would be the first one;
there are forms and definite planes only if it is stipulated how far from me
their different parts are. But a first dimension that contains all the others is
no longer a dimension, at least in the ordinary sense of a certain relationship
according to which we make measurements. Depth thus understood is,
rather, the experience of the reversibility of dimensions, of a global ‘locality’
— everything in the same place at the same time, a locality from which
height, width, and depth [the classical dimensions] are abstracted. (1964,
180)
Speaking in the same vein, Merleau-Ponty characterizes depth as “a single
dimensionality, a polymorphous Being,” from which the Cartesian dimensions of
linear extension derive, and “which justifies all [Cartesian dimensions] without
being fully expressed by any” (1964, 174). The dimension of depth is “both natal
space and matrix of every other existing space” (176).
Merleau-Ponty proceeds to explore the depth dimension via the artwork of
Cézanne. Through the painter, he demonstrates that primal dimensionality is self-containing. For Cézanne works with a visual space that is not abstracted from its
content but flows unbrokenly into it. Or, putting it the other way around, the
contents of a Cézanne painting overspill their boundaries as contents so that, rather
than merely being contained like objects in an empty box, they fully participate in
the containment process. Inspired by Cézanne’s paintings, Merleau-Ponty comments
that “we must seek space and its content as together” (1964, 180).
Merleau-Ponty also makes it clear that the primal dimension engages
embodied subjectivity: the dimension of depth “goes toward things from, as starting
point, this body to which I myself am fastened” (1964, 173). In commenting that,
“there are forms and definite planes only if it is stipulated how far from me their
different parts are” (180; italics mine), Merleau-Ponty is conveying the same idea. A
little later, he goes further:
The painter’s vision is not a view upon the outside, a merely “physical-optical” relation with the world. The world no longer stands before him
through representation; rather, it is the painter to whom the things of the
world give birth by a sort of concentration or coming-to-itself of the visible.
Ultimately the painting relates to nothing at all among experienced things
unless it is first of all ‘autofigurative.’ . . . The spectacle is first of all a
spectacle of itself before it is a spectacle of something outside of it. (1964,
181)
In this passage, the painting of which Merleau-Ponty speaks, in drawing upon the
originary dimension of depth, draws in upon itself. Painting of this kind is not
merely a signification of objects but a concrete self-signification that surpasses the
division of object and subject.
In sum, the phenomenological dimension of depth, as described by Merleau-Ponty, is (1) the “first” dimension, inasmuch as it is the source of the Cartesian
dimensions, which are idealizations of it; it is (2) a self-containing dimension, not
merely a container for contents that are taken as separate from it; and it is (3) a
dimension that blends subject and object concretely, rather than serving as a static
staging platform for the objectifications of a detached subject. Therefore, in realizing
depth, we go beyond the concept of space as but an inert container and come to
understand it as an aspect of an indivisible cycle of lifeworld action in which the
“contained” and “uncontained”—object and subject—are integrally incorporated.
Have we not previously encountered an action cycle of this kind? In section 3,
we considered the fundamental “atom of process” that lies at the core of quantum
mechanics: ℏ, the quantum of action. The discontinuity associated with quantized
microphysical action bespeaks the fact that this indivisible circulation undermines
the infinitely divisible classical continuum, and, along with it, the idealized objects
purported to be enclosed in said continuum and the idealized subject alleged to
stand outside it. We know that it is only through probabilistic artifice that
microphysical action can be accommodated while maintaining the old trichotomy,
and that this stratagem is only effective above the Planck length, where the full
impact of quantized action can be avoided. In addressing the problem of quantum
gravity, however, no longer can we remain safely above the Planck length. And it is
at or below the Planck scale that quantized action is simply unmanageable as a
circumscribed object contained within an analytical continuum from which the
analyst is detached. The action in question entails the indivisible transpermeation of
object, space, and subject—something utterly unthinkable when adhering to the
classical formula. Yet just such a dialectic defines the depth dimension as described
by Merleau-Ponty. Broadly speaking, this suggests that, when the problem of
quantum gravity can no longer be deferred in the quest for unification, science can
no longer conduct its business as usual. Instead, a whole new basis for scientific
activity is required, a new way of thinking about object, space, and subject, one cast
along the lines of Merleau-Pontean depth.
6. PHENOMENOLOGY, TOPOLOGY, AND THE KLEIN BOTTLE
I have intimated that the Planckian action integral to the account of quantum gravity
is better understood when approached from the standpoint of phenomenological
philosophy than from that of traditional philosophy. Whereas the Platonic-Cartesian
intuition of object-in-space-before-subject makes it impossible to come to grips with
the discontinuity and intimate subject-object interaction of the Planckian realm,
Merleau-Ponty’s depth dimensional intuition gives us the insight we need. It is
obvious, however, that a full-fledged phenomenology of quantum gravity must be
delivered in comprehensive detail, not just as a broad philosophical sketch. This task
was undertaken in The Self-Evolving Cosmos (Rosen 2008a). In the present
introductory paper, I will limit myself to a synopsis of that work. But first, in the
section at hand, I want to pave the way for the synopsis by turning to topology. This
qualitative field of mathematics will help flesh out the connection between the
philosophical notion of depth and the more sharply defined concepts and
phenomena of theoretical physics.
To conventional thinking, topology is generally defined as the branch of
mathematics that concerns itself with the properties of geometric figures that stay
the same when the figures are stretched or deformed. In algebraic topology,
structures from abstract algebra are employed to study topological spaces. A more
concrete approach to topology is exemplified by the practical experiments of
mathematician Stephen Barr (1964). In either case, however, the underlying
philosophical default setting tacitly operates, with topological structures regarded
strictly as objects under the scrutiny of a detached analyst. Yet, in Heidegger’s
enigmatic invocation of a “topology of Being” (1954/1971, 12), and in Merleau-Ponty’s reference to “topological space as…constitutive of life” (1968, 211), there is
a first intimation of a phenomenologically-based, non-objectifying topology. As a
matter of fact, when Merleau-Ponty metaphorically describes this topological space
as “the image of a being that…is older than everything and ‘of the first day’” (210),
we are reminded of the concept of dimension he had outlined in his earlier work:
the concept of depth (1964). Can we sharpen our focus on the depth dimension by
going further with topology? A well-known topological curiosity appears especially
promising in this regard: the Klein bottle.
Elsewhere, I have used the Klein bottle to address a variety of philosophical
issues (see, for example, Rosen 1994, 1997, 2004, 2006, 2014). For our present
purpose, we begin with a simple illustration.
Figure 1. Parts of the Klein bottle (after Ryan 1993, 98)
Figure 1 is my adaptation of communication theorist Paul Ryan’s linear
schemata for the Klein bottle (1993, 98). According to Ryan, the three basic features
of the Klein bottle are “part contained,” “part uncontained,” and “part containing.”
Here we see how the part contained opens out (at the bottom of the figure) to form
the perimeter of the container, and how this, in turn, passes over into the
uncontained aspect (in the upper portion of Fig. 1). The three parts of this structure
thus flow into one another in a continuous, self-containing movement that flies in
the face of the classical trichotomy of contained, containing, and uncontained—
symbolically, of object, space, and subject. But we can also see an aspect of
discontinuity in the diagram. At the juncture where the part uncontained passes into
the part contained, the structure must intersect itself. Would this not break the
figure open, rendering it simply discontinuous? While this is indeed the case for a
Klein bottle conceived as an object in ordinary space, the true Klein bottle actually
enacts a dialectic of continuity and discontinuity, as will become clearer in further
exploring this peculiar structure. We can say then that, in its highly schematic way,
the one-dimensional diagram lays out symbolically the basic terms involved in the
“continuously discontinuous” dialectic of depth. Depicted here is the process by
which the three-dimensional object of the lifeworld, in the act of containing itself, is
transformed into the subject. This blueprint for phenomenological interrelatedness
gives us a graphic indication of how the mutually exclusive categories of classical
thought are surpassed by a threefold relation of mutual inclusion. It is this relation
that is expressed in the primal dimension of depth.
When Merleau-Ponty says that the “enigma [of depth] consists in the fact that
I see things...precisely because they eclipse one another,” that “their exteriority is
known in their envelopment,” he is saying, in effect, that the peremptory division
between the inside and outside of things is superseded in the depth dimension. Just
this supersession is embodied by the Klein bottle. What makes this topological
surface so surprising from the classical standpoint is its property of one-sidedness.
More commonplace topological figures such as the sphere and the torus are two-sided; their opposing sides can be identified in a straightforward, unambiguous
fashion. Therefore, they meet the classical expectation of being closed structures,
structures whose interior regions (“parts contained”) remain interior. In the
contrasting case of the Klein bottle, inside and outside are freely reversible. Thus,
while the Klein bottle is not simply an open structure, neither is it simply closed, as
are the sphere and the torus. In studying the properties of the Klein bottle, we are
led to a conclusion that is paradoxical from the classical viewpoint: this structure is
both open and closed. The Klein bottle therefore helps to convey something of the
sense of dimensional depth that is lost to us when the fluid lifeworld relationships
between inside and outside, closure and openness, continuity and discontinuity, are
overshadowed in the Cartesian experience of their categorical separation.
However, must the self-containing one-sidedness of the Klein bottle be seen
as involving the spatial container? Granting the Klein bottle’s symbolic value, could
we not view its inside-out flow from “part contained” to “part containing” merely as
a characteristic of an object that itself is simply “inside” of space, with space
continuing to play the classical role of that which contains without being contained?
In other words, despite its suggestive quality, does the Klein bottle not lend itself to
classical idealization as a mere object-in-space just as much as any other structure?
A well-known example of a one-sided topological structure that indeed can
be treated as simply contained in three-dimensional space is the Moebius strip.
Although its opposing sides do flow into each other, this is classically interpretable
as but a global property of the surface, a feature that depends on the way in which
the surface is enclosed in space but one that has no bearing on the closure of space
as such. Here the topological structure of the Moebius, the particular way its
boundaries are formed (one end of the strip must be twisted before joining it to the
other), can be seen as unrelated to the sheer boundedness of the infinitely many
structureless point elements tightly packed into the spatial continuum itself. So,
despite the one-sidedness of the Moebius strip, the three-dimensional space in
which it is embedded can be taken as retaining its simple closure. The maintenance
of a strict distinction between the global properties of a topological structure and
the local structurelessness of its spatial context is mathematics’ way of upholding
the underlying classical relation of object-in-space. Given that the Moebius strip
does lend itself to drawing said categorical distinction, can we say the same of the
Klein bottle? Although conventional mathematics answers this question in the
affirmative, I will suggest the contrary.
The schematic representation of the Klein bottle provided by Figure 1 shows
that it possesses the curious property of passing through itself. When we consider
the actual construction of a Klein bottle in three-dimensional space (by joining one
boundary circle of a cylinder to the other from the inside), we are confronted with
the fact that no structure can penetrate itself without cutting a hole in its surface, an
act that would render the model topologically imperfect (simply discontinuous). So
the Klein bottle cannot be assembled effectively when one is limited to three
dimensions.
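By way of a concrete illustration (a minimal sketch assuming Python with numpy, added here and not drawn from Cosmos), the standard "figure-8" parametrization of the Klein bottle can be generated numerically; precisely because no embedding in three dimensions exists, the resulting surface necessarily passes through itself:

```python
# Minimal sketch: the standard figure-8 immersion of the Klein bottle in R^3.
# Because the Klein bottle cannot be embedded in three dimensions, this
# surface unavoidably self-intersects; no choice of parameters removes that.
import numpy as np

def klein_figure8(u, v, r=2.5):
    """Return (x, y, z) points of the figure-8 immersion for u, v in [0, 2*pi)."""
    w = r + np.cos(u / 2) * np.sin(v) - np.sin(u / 2) * np.sin(2 * v)
    x = w * np.cos(u)
    y = w * np.sin(u)
    z = np.sin(u / 2) * np.sin(v) + np.cos(u / 2) * np.sin(2 * v)
    return x, y, z

u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 200),
                   np.linspace(0, 2 * np.pi, 200))
x, y, z = klein_figure8(u, v)  # a closed, one-sided, self-intersecting mesh
```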
Mathematicians observe that a form that penetrates itself in a given number
of dimensions can be produced without cutting a hole if an added dimension is
available. The point is imaginatively illustrated by Rudolf Rucker (1977). He asks us
to picture a species of “Flatlanders” attempting to assemble a Moebius strip, which
is a lower-dimensional analogue of the Klein bottle. Rucker shows that, since the
reality of these creatures would be limited to two dimensions, when they would try
to make an actual model of the Moebius, they would be forced to cut a hole in it. Of
course, no such problem with Moebius construction arises for us human beings, who
have full access to three external dimensions. It is the making of the Klein bottle that
is problematic for us, requiring as it would a fourth dimension. Try as we might, we
find no fourth dimension in which to execute this operation.
However, in contemporary mathematics, the fact that we cannot create a
proper model of the Klein bottle in three-dimensional space is not seen as an
obstacle. The modern mathematician does not limit him- or herself to the concrete
reality of space but feels free to invoke any number of higher dimensions. Notice
though, that in summoning into being these extra dimensions, the mathematician is
extrapolating from the known three-dimensionality of the concrete world. This
procedure of dimensional proliferation is an act of abstraction that presupposes that
the nature of dimensionality itself is left unchanged. In the case of the Klein bottle,
the “fourth dimension” required to complete its formation remains an extensive
continuum, though this “higher space” is acknowledged as but a formal construct;
the Klein bottle per se is regarded as an abstract mathematical object simply
contained in this hyperspace (whereas the sphere, torus, and Moebius strip are
relatively concrete mathematical objects, since tangibly perceptible models of them
may be successfully fashioned in three dimensions). We see here how the
conventional analysis of the Klein bottle unswervingly adheres to the classical
formulation of object-in-space. Moreover, whether a mathematical object must be
approached through hyperdimensional abstraction or it is concretizable, the
mathematician’s attention is always directed outward toward an object, toward that
which is cast before his or her subjectivity. This is the aspect of the classical stance
that takes subjectivity as the detached position from which all objects are viewed
(or, better perhaps, from which all is viewed as object); here, never is subjectivity as
such opened to view. Thus the posture of contemporary mathematics is faithfully
aligned with that of Plato, Descartes, and Newton in whatever topic it may be
addressing. Always, there is the mathematical object (a geometric form or algebraic
function), the space in which the object is contained, and the seldom-acknowledged
uncontained subjectivity of the mathematician who is carrying out the analysis.
Now, in his study of topology, Barr advised that we should not be intimidated
by the “higher mathematician…. We must not be put off because he is interested
only in the higher abstractions: we have an equal right to be interested in the
tangible” (1964, 20). The tangible fact about the Klein bottle that is glossed over in
the higher abstractions of modern mathematics is its hole. Because the standard
approach has always presupposed extensive continuity, it cannot come to terms
with the inherent discontinuity of the Klein bottle created by its self-intersection.
Therefore, all too quickly, “higher” mathematics circumvents this concrete hole by
an act of abstraction in which the Klein bottle is treated as a properly closed object
embedded in a hyper-dimensional continuum. Also implicit in the mainstream
approach is the detached subjectivity of the mathematician before whom the object
is cast. I suggest that, by staying with the hole, we may bring into question the
classical intuition of object-in-space-before-subject.
Let us look more closely at the hole in the Klein bottle. This loss in continuity
is necessary. One certainly could make a hole in the Moebius strip, torus, or any
other object in three-dimensional space, but such discontinuities would not be
necessary inasmuch as these objects could be properly assembled in space without
rupturing them. It is clear that whether such objects are cut open or left intact, the
closure of the space containing them will not be brought into question; in rendering
these objects discontinuous, we do not affect the assumption that the space in which
they are embedded is simply continuous. With the Klein bottle it is different. Its
discontinuity does speak to the supposed continuity of three-dimensional space
itself, for the necessity of the hole in the bottle indicates that space is unable to
contain the bottle the way ordinary objects appear containable. We know that if the
Kleinian “object” is properly to be closed, assembled without merely tearing a hole
in it, an “added dimension” is needed. Thus, for the Klein bottle to be
accommodated, it seems the three-dimensional continuum itself must in some way
be opened up, its continuity opened to challenge. Of course, we could attempt to
sidestep the challenge by a continuity-maintaining act of abstraction, as in the
standard mathematical analysis of the Klein bottle. Assuming we do not employ this
stratagem, what conclusion are we led to regarding the “higher” dimension that is
required for the completion of the Klein bottle? If it is not an extensive continuum,
what sort of dimension is it? I suggest that it is none other than the dimension of
depth adumbrated by Merleau-Ponty.
Depth is not a “higher” dimension or an “extra” dimension; it is not a fourth
dimension that transcends classical three-dimensionality. Rather—as the “first
dimension” (1964, 180), depth constitutes the dynamic source of the Cartesian
dimensions, their “natal space and matrix” (176). Therefore, in realizing depth, we
do not move away from classical experience but move back into its ground where
we can gain a sense of the primordial process that first gives rise to it. The depth
dimension does not complete the Klein bottle by adding anything to it. Instead, the
Klein bottle reaches completion when we cease viewing it as an object-in-space and
recognize it as the embodiment of depth. It is the Kleinian pattern of action (as
schematically laid out in Fig. 1) that expresses the in-depth relations among object,
space, and subject from which the old trichotomy is abstracted as an idealization. So
it turns out that, far from the Klein bottle requiring a classical dimension for its
completion, it is classical dimensionality that is completed by the Klein bottle,
since—in its capacity as the embodiment of depth—the Klein bottle exposes the
hitherto concealed ground of classical dimensionality. Here is the key to
transforming our understanding of the Klein bottle so that we no longer view it as
an imperfectly formed object in classical space but as the dynamic ground of that
space: we must recognize that the hole in the bottle is a hole in classical space itself,
a discontinuity that—when accepted in dialectical relation to continuity rather than
evaded—leads us beyond the concept of dimension as Cartesian continuum to the
idea of dimension as depth.
By way of summarizing the paradoxical features of the Klein bottle, I refocus
on the threefold disjunction implicit in the standard treatment of the bottle:
contained object, containing space, uncontained subject. (1) The contained
constitutes the category of the bounded or finite, of the immanent contents we
reflect upon, whatever they may be. These include empirical facts and their
generalizations, which may be given in the form of equations, invariances, or
symmetries. (2) The containing space is the contextual boundedness serving as the
means by which reflection occurs. (3) The uncontained or unbounded is the
transcendent agent of reflection, namely, the subject. It is in adhering to this
classical trichotomy that the Klein bottle is conventionally deemed a topological
object embedded in “four-dimensional space.” But the actual nature of the Klein
bottle suggests otherwise. The concrete necessity of its hole indicates that, in reality,
this bottle is not a mere object, not simply enclosed in a continuum as can be
assumed of ordinary objects, and not open to the view of a subject that itself is
detached, unviewed (uncontained). Instead of being contained in space, the Klein
bottle may be described as containing itself, thereby superseding the dichotomy of
container and contained. Instead of being reflected upon by a subject that itself
remains out of reach, we may say that the self-containing Kleinian "object" is self-reflexive: it flows back into the subject thereby disclosing—not a detached cogito,
but the dimension of depth that constitutes the dialectical lifeworld.
7. PHENOMENOLOGICAL QUANTUM GRAVITY: A SUMMARY
In The Self-Evolving Cosmos (2008a), I offer a phenomenological rendition of
quantum gravity accounting for the four forces of nature, the matter particles of
physics’ standard model, and the transformation of particles and fields in the course
of cosmogony. Having demonstrated in the previous sections of the present paper
why a unified physics requires phenomenological philosophy, my intention now is
to show through a summary of Cosmos how the specific application of
phenomenology can yield significant concrete results. A synoptic review of the
results will be presented here. The reader is referred to the book itself for the more
detailed arguments that support those findings.
As noted above, the primary "atom of process" in microphysics is ℏ, the quantum of action associated with the emission of radiant energy. This quantized action takes the form of an odd spinning that Wolfgang Pauli modeled by using complex numbers. Musès (1976) suggested that Pauli's spin matrices for the electron are actually based on a kind of complex number or "hypernumber" that goes beyond Pauli's imaginary i: the hypernumber ε (defined as ε² = +1, but ε ≠ ±1). What I demonstrate in Cosmos is that the geometric counterpart of ε is the Klein bottle. In the form of εℏ, the Klein bottle is thus seen to implicitly embody the angular action that lies at the core of quantum mechanics. And this Kleinian spin is the basic building block of phenomenological quantum gravity.
In Pauli's matrices, ℏ/2 is taken as the fundamental unit of electron spin. In fact, ℏ/2 is the basis for determining the spin of all subatomic particles, fermions
and bosons alike. Given the essential role played by spin in quantum mechanics and
the underlying significance of the Klein bottle in said spin, I propose in Cosmos that
all microworld dynamics arise from spin of the Kleinian kind: εℏ/2.
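To make the ε-like character of Pauli's matrices tangible, a small numerical check may help (an illustration added here, assuming Python with numpy; it uses the standard Pauli matrices rather than Musès' own hypernumber formalism): each Pauli matrix squares to the identity, i.e. to "+1" in matrix form, while being distinct from ±1, and the corresponding spin operators have eigenvalues ±1/2 in units of ℏ.

```python
# Illustration: the Pauli matrices satisfy the epsilon-like relation
# sigma**2 = +1 (the identity) although sigma itself is neither +1 nor -1.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
identity = np.eye(2)

for name, s in [("sigma_x", sigma_x), ("sigma_y", sigma_y), ("sigma_z", sigma_z)]:
    print(name,
          np.allclose(s @ s, identity),                                   # squares to +1
          not (np.allclose(s, identity) or np.allclose(s, -identity)))    # yet is not +/-1

# Spin operators S_i = (hbar/2) * sigma_i; in units of hbar their eigenvalues are +/-1/2.
print(np.linalg.eigvalsh(sigma_z / 2))  # [-0.5, 0.5]
```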
Now, in section 3 of the present paper, I offer a critique of what has been the
favored approach to quantum gravity: string theory. One of the problems I note is
that the theory’s quantum gravitational equations lead to a vast multiplicity of
possible solutions with no guiding principle by means of which the field can be
narrowed. But if we take the vibratory pattern of the fundamental strings as
essentially Kleinian in nature—with Kleinian spin not objectified but understood in
its phenomenological depth—string theory can gain greater coherence. In fact, I
demonstrate in Cosmos that by reformulating the theory in the context of topological
phenomenology, it can be cast in a form that provides a detailed and definitive
(albeit qualitative) account of quantum gravity, one that unambiguously yields the
fundamental particles of the standard model. Let me summarize these findings.
In his further exploration of the hypernumber ε, Musès indicated a “higher
epsilon-algebra" wherein "√εₙ involves iₙ, the subscripts of course referring to the (n + 1)th dimension since i ≡ i₁ already refers to D2" (1968, 42). Bearing in mind the
intimate relationship between ε and the Klein bottle, can Musès’ implication of a
dimensional hierarchy of hypernumber values be given topo-phenomenological
expression? The Klein bottle does lend itself to such a generalization.
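One elementary way to glimpse the flavor of Musès' remark, offered here purely as an illustration (assuming Python with numpy and scipy) and not as his construction, is to represent an ε-like element by the real matrix σ_x: any square root of it must involve i, since a real root R would require det(R)² = det(σ_x) = −1.

```python
# Illustration: "the square root of epsilon involves i," seen in a matrix model.
# Represent an epsilon-like element (squares to +1, is not +/-1) by sigma_x.
import numpy as np
from scipy.linalg import sqrtm

eps = np.array([[0.0, 1.0], [1.0, 0.0]])  # eps @ eps = identity, eps != +/-identity
root = sqrtm(eps)                          # principal square root of eps

print(np.allclose(root @ root, eps))       # True: it really squares back to eps
print(np.iscomplexobj(root))               # True: the root has imaginary entries
# No real 2x2 matrix R can satisfy R @ R = eps, since det(R)**2 = det(eps) = -1
# is impossible over the reals.
```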
Mathematicians have investigated the transformations that result from
bisecting topological surfaces. If the Klein bottle is bisected, cut down the middle, it
will fall into a pair of oppositely-oriented Moebius strips. Next, bisecting the one-sided Moebius strip produces a two-sided lemniscatory surface, its sides
being related enantiomorphically (i.e., as mirror opposites). Finally, cutting the
lemniscate down the middle yields interlocking lemniscates. The transformation
brought about by this bisection is clearly the last one of any significance, since
additional bisections, being bisections of lemniscates, can only produce the same
result: interlocking lemniscates. The bisection series is completed then when we
obtain interlocking lemniscates, a structure termed the sub-lemniscate. By
experimenting with the bisection of the Klein bottle in this way, a closed family of
four nested topological structures is discovered (Fig. 2).
Figure 2. Topological bisection series. From top to bottom: Klein bottle, Moebius strip, lemniscate,
sub-lemniscate
In Cosmos, dimensional differences among the four members of the bisection
series are studied phenomenologically. While to ordinary observation each member
appears as but a two-dimensional surface in three-dimensional space,
phenomenological reflection leads to the insight that each actually constitutes a
depth-dimensional lifeworld unto itself. Whereas the Klein bottle is three-dimensional, its nested correlates are of progressively lower dimension: the Moebius is two-dimensional, the lemniscate is one-dimensional, and the sub-lemniscate is zero-dimensional. This account of several different topodimensional
lifeworlds embedded within each other is consistent with the hierarchy of ε-like
spin structures suggested by Musès.
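The series and the dimensional assignments just described can be recorded compactly; the following sketch (assuming Python; an encoding added here of the assignments stated above, not code from Cosmos) lists each member, its depth-dimensional value, and what its bisection yields.

```python
# Plain encoding of the topological bisection series and the depth-dimensional
# values assigned to its members in the discussion above.
bisection_series = [
    {"structure": "Klein bottle",   "depth_dim": 3, "spinor": "epsD3",
     "bisects_into": "two oppositely oriented Moebius strips"},
    {"structure": "Moebius strip",  "depth_dim": 2, "spinor": "epsD2",
     "bisects_into": "a two-sided lemniscate (mirror-opposed sides)"},
    {"structure": "lemniscate",     "depth_dim": 1, "spinor": "epsD1",
     "bisects_into": "interlocking lemniscates (the sub-lemniscate)"},
    {"structure": "sub-lemniscate", "depth_dim": 0, "spinor": "epsD0",
     "bisects_into": "interlocking lemniscates again (the series closes)"},
]

for m in bisection_series:
    print(f"{m['structure']:14s} {m['spinor']}  ->  {m['bisects_into']}")
```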
εD0        εD0/εD1    εD0/εD2    εD0/εD3
εD1/εD0    εD1        εD1/εD2    εD1/εD3
εD2/εD0    εD2/εD1    εD2        εD2/εD3
εD3/εD0    εD3/εD1    εD3/εD2    εD3

Table 1. Interrelational matrix of topodimensional spin structures
Table 1, the topodimensional spin matrix, gives the ε-based counterpart of
the topological bisection series. The three-dimensional Kleinian spinor is written
εD3, with lower-dimensional members of the tightly knit spin family designated εD2,
εD1, and εD0 (corresponding to the Moebial, lemniscatory, and sub-lemniscatory
circulations, respectively). These terms are arrayed on the principal diagonal of the
matrix (extending from upper left to lower right). The interrelationships among the
four principal matrix elements, taken two at a time, are reflected in the elements
appearing off the main diagonal.
Generally speaking, Table 1 unpacks the dialectical structure of
topodimensional interrelations. In keeping with the “musical” implications of string
theory, we may regard topodimensional action as inherently vibratory in nature.
The principal diagonal of the table contains a depth-dimensional series of
fundamental vibrations or tones, and these four principal terms are coupled to each
other two at a time by six pairs of overtone-undertone intervals related to each
other in the mirror-opposed fashion of enantiomorphs. The dimensional overtone
ratios are the values extending below the fundamental tones, whereas the
undertone ratios are the values appearing to the right of the fundamentals. (In
Cosmos, the topodimensional action matrix is seen as analogous to the old
Pythagorean table, which is portrayed as an expanding series of musical intervals,
with fundamental tones on the principal diagonal, flanked by overtones and
undertones.)
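Read as a rule, the matrix is easy to regenerate: fundamentals on the principal diagonal, and in each off-diagonal cell the ratio of the row term to the column term, so that overtone ratios fall below a fundamental and undertone ratios lie to its right. The sketch below (assuming Python; a restatement added here, not code from Cosmos) reproduces the layout of Table 1.

```python
# Reconstruct the layout of Table 1: principal spinors epsD0..epsD3 on the
# diagonal; the off-diagonal cell in row i, column j holds epsDi/epsDj.
def spin_matrix(labels=("epsD0", "epsD1", "epsD2", "epsD3")):
    n = len(labels)
    return [[labels[i] if i == j else f"{labels[i]}/{labels[j]}" for j in range(n)]
            for i in range(n)]

for row in spin_matrix():
    print("  ".join(f"{cell:12s}" for cell in row))
# Cells below a diagonal term (same column) are its dimensional overtones;
# cells to its right (same row) are its undertones, as described above.
```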
Consider in Table 1 the two principal tones of highest dimensionality: εD2 and
εD3. These matrix elements are linked by the overtone and undertone given in the
two corresponding non-principal cells, εD3/εD2 and εD2/εD3 (respectively). The
enantiomorphically-related coupling cells in question are the depth-dimensional
counterparts of the concretely observable, oppositely oriented Moebius strips
which, when glued together, form the Klein bottle. Taken strictly as a principal
matrix element, the depth-dimensional Moebius vibration is the spin structure that
constitutes the two-dimensional lifeworld (εD2). But when we shift our view of the
Moebius, consider it in relation to higher, Kleinian dimensionality, a kind of
“doubling” takes place in which the εD2 singular Moebius spin structure becomes a
pair of asymmetric, mirror opposed twins, εD3/εD2 and εD2/εD3. It is through the
fusion of these dimensional enantiomorphs that Kleinian dimensionality is
crystallized. Since the Table-1 matrix indicates that all four principal
dimensionalities or fundamental tones are interrelated by accompanying off-diagonal overtone-undertone pairs, we can draw the general conclusion that higher
dimensions emerge through processes of enantiomorphic fusion (this is fully
detailed in Cosmos).
The process of dimensional generation can be clarified in broad terms by
relating it to a reverse movement through the bisection series wherein topological
structures are not divided but glued together. To begin, we imagine the fusion of
interlocking lemniscates that yields the single lemniscate. This corresponds to the
generation of the one-dimensional lifeworld (εD1). Next, we picture the
enantiomorphically-related sides of the two-sided lemniscate merging to form the
one-sided Moebius structure, this being associated with the genesis of the two-dimensional lifeworld (εD2). Finally, we imagine Moebius enantiomorphs fusing to produce the Klein bottle, which corresponds to the evolution of our three-dimensional lifeworld (εD3). With each fusion, a lower-dimensional lifeworld is
absorbed by a world of higher dimension, taken into it in such a way that the lower
dimension is concealed. In the end, we have three lower-dimensional vibratory
structures concealed within the three-dimensional Kleinian vibration, much as
lower dimensions are hidden by becoming “curled up” within visible 3 + 1-
dimensional space-time in the conventional string theoretic account of dimensional
cosmogony. It turns out, in fact, that the phenomenological approach arrives at the
same total number of dimensions as does the conventional theory.
What I demonstrate in Cosmos is that the depth-dimensional Kleinian spinor,
εD3, is not itself an extended three-dimensional space, but is a quantized three-dimensional blend of space and time that first gives birth to our familiar 3 + 1-dimensional space-time (the Kleinian spinor is a "natal space," to echo Merleau-Ponty's metaphor). In like manner, the two-dimensional Moebius spinor (εD2) would
spin out a 2 + 1-dimensional space-time, the lemniscatory spinor (εD1) would send
forth a 1 + 1-dimensional space-time, and the sub-lemniscatory spinor (εD0) would
project a 0 + 1-dimensional space-time. A simple summation of projected space-time
dimensions gives us a total of ten, with the six lower dimensions—(2 + 1) + (1 + 1) +
(0 + 1)—being hidden like Matryoshka dolls within the larger 3 + 1-dimensional
space-time. This picture of overall ten-dimensionality, with six dimensions
concealed, accords with the basic account provided by string theory. Thus we may
say that our four depth-dimensional spinors spin out the ten space-time dimensions
of string theory.[1]

[1] With the extension of string theory known as M-theory, eleven dimensions are actually entailed, though the eleventh dimension is not like the other ten. This "extra" dimension in fact may be interpreted as intimating the depth dimension. See The Self-Evolving Cosmos (2008a).
Yet despite the general agreement between conventional and
phenomenological interpretations of string theory, important differences exist.
Mainstream theorists have approached cosmogony by adopting the concept of
symmetry breaking. In this narrative, the four forces of nature are conceived as
vibrating strings that initially existed in a purely symmetric ten-dimensional space
scaled around the Planck length. Subsequently, the perfect primordial symmetry
was spontaneously broken by a dimensional bifurcation in which four of the original
dimensions expanded to produce the visible universe we know today, with the other
dimensions remaining hidden. Coupled with this was the breaking of force-field
symmetry to create the appearance of irreconcilable differences among the forces.
However, while the foregoing account of cosmogony incorporates both
dimensional and force-field symmetry breaking, the two are not precisely aligned
with each other in the theoretical reckoning. This reflects the fact that contemporary
theorists have been unable to articulate a detailed geometric rendering of cosmic
evolution. For the geometric program fully to be realized, the physical events
described in the standard and inflationary models of cosmic development would
need to be specifically expressible as dimensional events. What Heinz Pagels noted
twenty years ago in discussing the extra-dimensional (Kaluza-Klein) interpretation
of cosmogony remains true today: “No one has yet been able to find a realistic
Kaluza-Klein theory which yields the standard model" (1985, 328). In the string-theoretic application of Kaluza-Klein theory, one obvious reason for this limitation is
the absence of a conceptual principle that could guide the analyst to unambiguous
solutions of the ten-dimensional general equations, solutions specifying the exact
shapes of the hidden dimensions that would correspond to the physical facts of the standard model. Of course, if the prevailing theory cannot tell us what the
dimensional structures are that correspond to physical reality, it can hardly inform
us on how these dimensions develop. In point of fact, there is really no positive
feature intrinsic to the theory that provides for the evolution of dimensions. From
what I can tell, the only reason dimensional bifurcation is assumed to have taken
place at all is that theorists must somehow account for the present inability to
observe six of the ten dimensions needed for a consistent rendering of quantum
gravity (one that avoids untenable probability values).
Smolin seems to put his finger on the underlying problem in calling attention
to the “wrong assumption” physicists “are all making” when they present the “whole
history of constant motion and change…as something static and unchanging” (2006,
256–57). When authentic change is thus denied, it is not surprising that no natural,
parsimonious way of accounting for cosmogony is forthcoming. Conventional string
theory well exemplifies this adherence to the classical intuition of changelessness in
the primacy it gives to the notion of symmetry. It is in assuming an initial state of
“perfect symmetry” that theorists must resort to the artifice of “spontaneous
symmetry breaking,” an alleged event that—far from being a natural consequence of
the purely symmetric theory—is gratuitously invoked without a compelling
explanation of its basis.
The inherent dynamism of phenomenological string theory affords a way out
of the impasse. Instead of artificially appending asymmetry to a primordially perfect
symmetry, a dialectic of symmetry and asymmetry is offered that permits an
unequivocal, intrinsically meaningful account of the evolving forces of nature. This
principle of “synsymmetry” (Rosen 1975, 1994, 2006, 2008a) is implicit in the
topological bisection series and its associated topodimensional spin matrix (Table
1).
For a simple illustration, consider the Moebius strip. It arises from the fusion
of mirror-opposed, asymmetrically-related sides of the lemniscate. We can say that,
through this union of opposites, the asymmetry of lemniscatory sides is rendered
symmetric. However, while the Moebius can be deemed symmetric vis-à-vis the
fused lemniscatory sides that constitute it, at the same time it is itself a member of
an enantiomorphically asymmetric pair whose own fusion produces the Klein bottle.
Generally speaking, we may conclude that the members of our topodimensional
family are neither simply asymmetric nor simply symmetric, but synsymmetric: a
given member combines symmetry and asymmetry in such a way that it is
symmetric in relation to its lower-dimensional counterpart and asymmetric in
relation to its higher one (the sub-lemniscate is an exception to this, since it has no
lower-dimensional counterpart). I propose that the synsymmetry concept, viewed
dynamically in terms of enantiomorphic fusion events, constitutes a guiding
principle for cosmogony. The forces and particles of nature evolve by a general
process wherein asymmetric dimensional enantiomorphs fuse to create a
dimensional symmetry that at once inherently gives way to new asymmetry. My
topo-phenomenological interpretation of cosmogony is detailed in Cosmos.
Presently, I will restrict myself to a synoptic sketch.
What I am suggesting is that a full account of the elementary forces of string theory
may be afforded by embedding the theory in the matrix of primordial spin
structures given in Table 1. This matrix constitutes a special application of the
hypernumber idea, one that provides a highly specific rendition of primordial spin
action. The topodimensional array of four fundamental spinors (shown on the
principal diagonal of the matrix) can be directly associated with the four types of
gauge bosons found in nature. The gauge-boson correlates of Table 1 are displayed
in Table 2. What is the basis of these correlations?
G           G/g         G/(W, Z)       G/γ
g/G         g           g/(W, Z)       g/γ
(W, Z)/G    (W, Z)/g    W, Z           (W, Z)/γ
γ/G         γ/g         γ/(W, Z)       γ

Table 2. Spin matrix of gauge bosons. G is the graviton; g is the strong gauge boson; W, Z is the weak gauge boson particle pair; and γ is the photon.
We know that Table 1 signifies a process of generation in which higher
topological dimensions evolve from lower ones. The facts of physical evolution lend
themselves to straightforward, one-to-one correlation with topogenetic process.
The first force particle to “freeze out” of the Big Bang’s hot primordial soup is the
hypothesized graviton, G. The graviton of Table 2 is associated with εD0, the zero-dimensional sub-lemniscatory action of Table 1, which can be written εD0(ℏ/2) to give expression to subatomic particle spin; thus, G ≡ εD0(ℏ/2). Next to separate itself from the primordial chaos is the strong gauge boson, g, and we relate it to εD1 lemniscatory action, writing g ≡ εD1(ℏ/2). Then the weak force emerges, given by the boson pair W and Z, which we identify with εD2(ℏ/2). When the three orders of lower-dimensional gauge bosons have "frozen out," what remains is γ, the photon, topodimensionally expressed as εD3(ℏ/2).
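The identifications just listed amount to a simple correspondence between gauge bosons and topodimensional spinors; the sketch below (assuming Python; a tabulation added here of the identifications stated above) writes each boson as εDn(ℏ/2).

```python
# Tabulate the gauge-boson correlations stated above: each boson is identified
# with a topodimensional spinor carrying the basic quantum of spin, hbar/2.
gauge_boson_spinors = {
    "G (graviton)":             "epsD0 * (hbar/2)",
    "g (strong gauge boson)":   "epsD1 * (hbar/2)",
    "W, Z (weak gauge bosons)": "epsD2 * (hbar/2)",
    "gamma (photon)":           "epsD3 * (hbar/2)",
}

for boson, spinor in gauge_boson_spinors.items():
    print(f"{boson:26s} ~ {spinor}")
```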
Having focused our attention on the principal terms or “fundamental tones”
of our matrices, let us now inquire into the physical significance of the "overtone-undertone" couplings appearing off the principal diagonals. In Table 1, these are the
topodimensional enantiomorphs whose synsymmetric fusions drive the process of
dimensional generation. The overtone-undertone couplings appear in Table 2 as
enantiomorphically-related boson ratios. It is from their interactions that the
primary gauge bosons emerge. Since nature’s force fields evolve by a process in
which the universe expands, boson-ratio fusion may be regarded as impelling said
expansion. I conjecture accordingly that these primordial boson ratio interactions,
which are not themselves directly observable, comprise the mysterious “dark
energy” said to fuel the accelerated expansion of the cosmos.
In phenomenological string theory, boson-ratio interaction not only accounts
for the generation of the four kinds of gauge bosons, but for the production of the 12
fermions of the standard model as well. The six pairs of ratios involved in distilling
the bosons also interact to yield the six pairs of fermions (three lepton pairs and
three quark pairs). Geometrically speaking, the fermions function as “dimensional
bounding elements,” local features of global bosonic dimensionality, with local and
global aspects intimately interwoven (in keeping with Merleau-Ponty’s notion of the
depth dimension as a “global ‘locality’”; see Section 5). Needless to say, this requires
clarification, but I will not elaborate further on it here (see Cosmos). I will only
suggest that the purely geometric account of boson-fermion interrelatedness I am
proposing obviates the need for the unparsimonious and unsubstantiated
postulation of particle “super-partners” given in the notion of “supersymmetry.”
8. CONCLUSION
To make the case for why natural science needs phenomenological philosophy, I
have focused on what has come to be known as the “king of the sciences,” the
discipline of physics. It is physics that all other natural sciences (and many social
sciences) have adopted as their paradigm. And it is physics—considered the most
advanced and refined of sciences—in which the necessity for a phenomenological
approach becomes most obvious. In this paper, I have shown that the refinement of
physics that was to bring its long-sought unity ultimately reached the point (in the
1970s) where the facts of the Planck world could no longer be avoided effectively.
And it is when we cross the Planckian threshold that objectivist philosophy must be
left behind and a philosophical stance adopted that unites subject and object as sub-Planckian reality demands: the phenomenological stance.
Because I believe the challenge of quantum gravity provides the clearest
evidence of the need for phenomenology in theoretical science, I have chosen to
highlight this challenge in my introduction to our Special Issue. Here the reader is
able to see at the outset that, in the key field of unification physics, regrounding
natural science in a phenomenological approach is indispensable for solving
science’s own problems. In the pages that follow, you will find many other examples
of the importance of phenomenology to the natural sciences. There are additional
works on physics in this Special Issue, and a number of papers on the life sciences,
mathematics, and (bio)semiotics. A previous Special Issue of this journal (Simeonov,
Matsuno, and Root-Bernstein 2013) already paved the way for what is presently set
forth. There too, the Newtonian paradigm was called into question (see Gare 2013)
and elements of phenomenological thinking were in evidence (see Matsuno 2013,
Simeonov 2013). In the Issue now before you, phenomenological philosophy takes
center stage and its relations to the natural sciences are examined in a
comprehensive, thoroughgoing manner. I trust the reader will enjoy the rich
assortment of innovative explorations that carry us beyond the obsolete formula of
object-in-space-before-subject into exciting new territory.
REFERENCES
Barr, Stephen. 1964. Experiments in Topology. New York: Dover.
Čapek, Milič. 1961. Philosophical Impact of Contemporary Physics. New York: Van
Nostrand.
Carter, Brandon. 1968. “Global Structure of the Kerr Family of Gravitational Fields.”
Physical Review 174:1559–1571.
Eddington, Arthur Stanley. 1946. Fundamental Theory. Cambridge: Cambridge
University Press.
Gare, Arran. 2013. “Overcoming the Newtonian Paradigm: The Unfinished Project of
Theoretical Biology from a Schellingian Perspective.” Progress in Biophysics and
Molecular Biology 113(1):5–24.
Greene, Brian. 1999. The Elegant Universe. New York: W. W. Norton.
Heidegger, Martin. 1927/1962. Being and Time, translated by John Macquarrie and
Edward Robinson. New York: Harper and Row.
———. 1954/1971. “The Thinker as Poet.” In Poetry, Language, Thought, translated
by Albert Hofstadter, 1–14. New York: Harper and Row.
Husserl, Edmund. 1936/1970. Crisis of the European Sciences and Transcendental
Phenomenology, translated by David Carr. Evanston, IL: Northwestern University
Press.
Jahn, Robert, and Brenda Dunne. 1984. On the Quantum Mechanics of Consciousness
(Appendix B). Princeton, NJ: Princeton University School of Engineering/ Applied
Sciences.
Jaspers, Karl. 1941/1975. “On My Philosophy.” In Existentialism from Dostoevsky to
Sartre, edited by Walter Kaufmann, 158–185. New York: New American Library.
Macquarrie, John. 1968. Martin Heidegger. Richmond, VA: John Knox.
Matsuno, Koichiro. 2013. “Making Biological Theory More Down to Earth.” Progress
in Biophysics and Molecular Biology 113(1):46–56.
Matsuno, Koichiro and Stanley N. Salthe. 1995. “Global Idealism/Local Materialism.”
Biology and Philosophy 10:309–337.
Merleau-Ponty, Maurice. 1964. “Eye and Mind.” In The Primacy of Perception, edited
by James M. Edie, 159–90. Evanston, IL: Northwestern University Press.
———. 1968. The Visible and the Invisible, translated by Alphonso Lingis. Evanston,
IL: Northwestern University Press.
Musès, Charles. 1968. “Hypernumber and Metadimension Theory.” Journal of
Consciousness Studies 1:29–48.
———. 1976. “Applied Hypernumbers: Computational Concepts.” Applied
Mathematics and Computation 3:211–226.
Pagels, Heinz R. 1985. Perfect Symmetry. New York: Bantam.
Plato. 1965. Timaeus and Critias, translated by Desmond Lee. New York: Penguin.
Rosen, Steven M. 1975. “Synsymmetry.” Scientia 110:539–549.
———. 1994. Science, Paradox, and the Moebius Principle. Albany: State University of
New York Press.
———. 1997. “Wholeness as the Body of Paradox.” Journal of Mind and Behavior
18:391–423.
———. 2004. Dimensions of Apeiron. Amsterdam: Editions Rodopi.
———. 2006. Topologies of the Flesh. Athens, OH: Ohio University Press.
———. 2008a. The Self-Evolving Cosmos. Hackensack, NJ: World Scientific.
———. 2008b. “Quantum Gravity and Phenomenological Philosophy.” Foundations
of Physics 38:556–582.
———. 2013. “Bridging the ‘Two Cultures’: Merleau-Ponty and the Crisis in Modern
Physics.” Cosmos and History 9:1–12.
———. 2014. “How Can We Signify Being?” Cosmos and History 10:250–277.
Rucker, Rudolf. 1977. Geometry, Relativity, and the Fourth Dimension. New York:
Dover Books.
Russell, Bertrand. 1925. The ABC of Relativity. New York: Harper & Brothers.
Ryan, Paul. 1993. Video Mind/Earth Mind: Art, Communications, and Ecology. New
York: Peter Lang.
Sachs, Mendel. 1999. “Fundamental Conflicts in Modern Physics and Cosmology.”
Frontier Perspectives 8:13–19.
Simeonov, Plamen. 2013. “On Some Recent Insights in Integral Biomathics.” Progress
in Biophysics and Molecular Biology 113(1):216–228.
Simeonov, Plamen, Koichiro Matsuno, and Robert Root-Bernstein. 2013. "Can Biology Create a Profoundly New Mathematics and Computation?" Special Theme Issue on Integral Biomathics. Progress in Biophysics and Molecular Biology 113(1).
Smolin, Lee. 2006. The Trouble With Physics. Boston: Houghton Mifflin.
Woit, Peter. 2006. Not Even Wrong. New York: Basic Books.