
The Myth of the Physical Given


William M. R. Simpson
Wolfson College, University of Cambridge, UK; wmrs2@cam.ac.uk

1 Preliminary Remarks

What is the physical world made of, and how do living things fit into it?* This is one of those Big Questions that is likely to raise smiles among professional philosophers. In a time in which the humanities are being squeezed to make way for STEM subjects, we find ourselves tempted to focus on more narrowly defined problems in the hope of achieving some measurable success. Nonetheless, whilst the substance of what I wish to discuss in this paper is fairly narrow in scope, I am seeking to make room for a ‘neo-Aristotelian’ answer to this question that is emerging in contemporary philosophy; one which locates an immanent teleology within nature outside of anyone’s thoughts or intentions. To do this, I want to call into question something I’ve dubbed ‘the Myth of the Physical Given’, modifying a phrase coined by the philosopher Wilfrid Sellars (‘The Myth of the Given’).

For Sellars, it is by providing a general account of ‘how things in the broadest possible sense of the term hang together in the broadest possible sense of the term’ (Sellars 1962) that philosophy adds value to an academy which, left to itself, tends to ramify and fragment into ever-increasing specialisations. I am inclined to think that what Sellars meant by ‘The Myth of the Given’ is that there is nothing given to us in experience which comes ‘pre-judged’, as it were, with a truth-value that cannot be questioned (Sellars 1997). What I mean by ‘the Myth of the Physical Given’ is rather different (although not unrelated). It does not concern an alleged datum of phenomenal experience, but rather a categorical conception of ‘physical reality’ that excludes teleology; one which has tended to dominate philosophers’ interpretations of physical theories since the ‘Scientific Revolution’.

I have two objectives. First, I want to argue that the neo-Aristotelian vision of how biological entities fit into the physical world (which I shall begin by describing) is inconsistent with ‘the Physical Given’ (which I shall endeavour to characterise). Secondly, I want to raise doubts about whether this conception of physical reality is something given to us by our ‘best physics’ so much as imposed on it. Having cleared space for a more expansive conception of nature, which includes teleology at the inorganic level, I will briefly reflect on an existing teleological model of the quantum physical world and its possible extension to the biological.

* This paper is based on the Cardinal Mercier Prize lecture given by the author in 2022. The prize was awarded for the author’s doctoral thesis (Simpson 2020).

2 A neo-Aristotelian revival

It is not uncommon for analytic philosophers to be caricatured by their colleagues as downsizing the Big Questions of Great Philosophers like Plato and Aristotle to a set of much narrower concerns arising from our use of language. If you live in Cambridge, you might be excused for having such impressions. With the revival of metaphysics in the last half of the twentieth century, however, analytic philosophy can no longer justly be dismissed as footnotes to Russell or Wittgenstein. A variety of traditions are now competing for its soul, which marshal different intuitions about the nature of reality and are shaped by different convictions about what we can know and where we should start our inquiries.
In particular, there has been a rise in what we might call ‘neo-Aristotelianism’ among contemporary analytic philosophers, which has witnessed both a return to essentialism and the revival of the notion of causal powers. Although the philosopher Willard Quine had insisted that Aristotle’s distinction between essential and accidental properties is ‘surely indefensible’ for us today (Quine 1960, p. 199), the concept of essence was reintroduced into analytic philosophy by Saul Kripke and given a robust logical foundation (Kripke 1981). Although David Hume had denounced the Aristotelian notion that things have powers to bring about change as a projection of our tendency to associate events that follow in succession, the concept of causal powers as properties which are essentially directed toward their manifestations was reintroduced within analytic philosophy by Rom Harré, Edward Madden (Harré & Madden 1973) and George Molnar (Molnar 2006). And although neo-Humeans like David Lewis and Frank Ramsey had insisted that laws of nature are merely contingent generalisations of how properties just happen to be distributed in space and time (Ramsey 1978; Lewis 1973, pp. 73-75; Lewis 1987, postscript), the so-called ‘Ramsey-Lewis’ view of laws is now widely criticised for its anthropocentricity, its difficulties accommodating causal direction or asymmetry, and its inability to capture intuitively correct counterfactuals concerning what would happen in other possible worlds. It is becoming acceptable within analytic metaphysics to maintain that laws express the essence of powers.

Yet that is not the end of the story. According to a small but growing number of neo-Aristotelians, the natural world is made of substances (Inman 2017), which are entities that constitute a uniquely fundamental level of reality, in the sense that everything that exists within nature is wholly contained in the totality of substances, and any changes that take place within nature are wrought through the exercise of their causal powers (Koons 2019). For these neo-Aristotelians, as for Aristotle, living beings – such as plants, animals, and persons – are to be counted as substances. They are unified entities which exhibit goal-directed behaviour and they have powers to settle certain matters of fact concerning the motion of their physical bodies in the pursuit of their natural goals. They can be studied in terms of their integral parts, and corrupted into various chemicals, but they cannot be reduced to mere aggregates of more fundamental stuff.

Within the ‘Aristotelian’ tradition, this is a view of the building blocks of nature in which the matter of a substance (the Greek word is hule) is physically inseparable from the form (the Greek word is morphe) which determines the essential powers that it exercises in pursuing its ends. In this hylomorphic philosophy of nature, substances have both active powers to cause change to other substances and passive powers to suffer change. In this teleological vision of reality, both the minded and the un-minded, both the living and the non-living, exist within a shared normative space of ends and goals. Something of the appeal of this picture of nature is captured for me by this refrain from the musical Oliver!: “Consider yourself… at home. Consider yourself… part of the furniture.”

The Origin of the Physical Given

Now, to a Whiggish reading of history such developments will seem highly regressive.
The medieval-Aristotelian philosophy of nature, from which such ideas draw their inspiration, could not see beyond what Sellars called the ‘manifest image’ of the world; a view of reality that is founded upon reflection on ordinary experience. But such ontologies are called into question by those seeking to construct what Sellars called the ‘scientific image’ of the world, which is a critical view of reality founded upon scientific inquiry. The mechanical or corpuscularian philosophies which supplanted medieval Aristotelianism, in the wake of the Scientific Revolution, dispensed with the idea of there being ‘forms’ which determine things’ ‘matter’, and settled upon a wholly determinate conception of matter as a physical stuff with intrinsic properties which obey universal laws (Simpson 2018, 2021).

By a process of methodological doubt in which he questioned the verity of his bodily senses, Descartes arrived at a picture of reality that is fundamentally divided into physical and mental domains. On the one hand, physics is supposed to tell us the laws that govern how material reality behaves; on the other hand, introspection is supposed to tell us about a mental reality that lies beyond the physical world, which encompasses human purposes and goals. Descartes believed immaterial mind has the power to cause matter to move, although he ran into difficulties in explaining how causal interactions are supposed to take place between human minds and those material mannequins they are meant to have under their control. For Descartes, the existence of mind was indubitable, but the physical and mental were only contingently related. Human beings were real but displaced from the physical world, whilst animals and plants were merely machines which obeyed physical laws.

For contemporary philosophers, mind occupies a less privileged position than it did in Descartes’ era, whilst Cartesian dualism is often judged to have been a blind alley. It is the physical world – or, rather, a certain conception of the physical world – that is taken as an ontological given. This is evident in the kind of question that provides the starting point for many discussions in the philosophy of mind: to adapt from Kripke, “When God made the physical world, did he have to add anything in order for there to be minds?” The philosophical answers to this kind of question are permitted to vary within certain limits: for the reductive physicalist, minds are reducible to the physical; for non-reductive physicalists, they are supposed to supervene in some way upon the physical. What the post-Cartesian consensus retains, however, is a commitment to the existence of a ‘Physical Given’ outside of the realm of ends, purposes and goals (which pertain to the mind). This requirement has been packaged as part of a success story about the sciences, and therefore part of a tribute philosophers must pay if they wish to be deemed ‘scientifically minded’.

Teleology and the Physical Given

So how does a commitment to the Physical Given rig the way philosophers interpret physical theories so that they exclude teleology, banning goal-directed things from nature? It is widely supposed among contemporary analytic metaphysicians that to interpret a physical theory is to identify the set of worlds that are possible according to that theory (Ruetsche 2011, chap. 1).
On this view, a possible world is a complete and internally consistent possible state of affairs, and a physical theory contributes to our knowledge of nature by declaring some of these states permissible whilst excluding others. It is the universal laws which are specified by this theory which are supposed to determine the set of ‘physically possible worlds’. According to how many contemporary philosophers have interpreted the relation between physics and metaphysics, the task of interpreting a physical theory involves identifying some set of fundamental physical constituents to which this theory refers, and elucidating their possible patterns or configurations according to this theory’s laws. These basic constituents may be microscopic entities or modifications of a single substance. Either way, the total set of their possible configurations determines the ‘state space’ within which the cosmos evolves without reference to anything’s intrinsic ends or goals. Having identified these basic constituents, propositions about the physical world may be evaluated as true or false just in case they can be understood as referring to their possible configurations.

This nomological conception of possibility is presupposed both by ‘microphysicalists’ like David Lewis, who believe the world is made fundamentally of microscopic constituents, and by ‘priority monists’ like Jonathan Schaffer, who believe the cosmos is fundamentally one thing (Schaffer 2010). Microphysicalists and priority monists are divided concerning the number of fundamental entities, but united in excluding from the fundamental ontology any entities which exist between the microscopic and cosmic scales – such as plants, animals or persons. Lewis made it clear how his vision of reality displaces the macroscopic when he said: ‘all there is to the world is a vast mosaic of local matters of particular fact, just one little thing and then another... We have geometry: a system of external relations of spatiotemporal distances between points... And at those points we have local qualities: perfectly natural intrinsic properties which need nothing bigger than a point at which to be instantiated. For short, we have an arrangement of qualities. And that is all.’ (Lewis 1986, p. ix). Likewise, Schaffer made it clear that his vision of reality downplays the macroscopic when he dismissed ‘folk intuitions’ about what exists as being ‘based on a crude teleologically-laden conception of when composition occurs’ that is ‘fit for debunking’ (Rose & Schaffer 2017).

The nomological conception of physical possibility goes hand in glove with a hierarchical conception of how the ‘special sciences’ such as biology or psychology are supposed to relate to our ‘best physics’. In Oppenheim and Putnam’s (1958) influential paper, “The Unity of Science as a Working Hypothesis”, nature is conceived as a hierarchy in which cells are composed of molecules, molecules of atoms, and atoms of whatever microscopic constituents are identified by physics at the ‘unique lower level’ (ibid., p. 9). This vision of nature can be pictorially represented as a giant pyramid, in which everything rests upon a lower level of microscopic constituents. For Schaffer, the reduction goes in the opposite direction, such that everything studied by the special sciences is ultimately grounded in the Cosmos. The picture of nature, in this case, is of an inverted pyramid, in which everything rests on a single point. But we don’t have to be strict reductionists today.
For ‘weak’ emergentists, ‘less is more’: we can think of the higher-level causal powers of the entities studied by the special sciences as involving a subset of the lower-level causal powers of their emergence bases (Wilson 2015). Whilst acknowledging failures in reduction due to our epistemic limitations, weak emergentists affirm the supervenience of higher-level properties upon lower-level properties, and continue to conceive all of the physical possibilities as being ‘closed’ under lower-level laws. At base, nothing in nature has the power to make a causal difference in pursuing its own ends or goals.

In summary, the Physical Given requires:

i. firstly, the existence of some set of fundamental constituents characterised by determinate physical properties;
ii. secondly, some fundamental state of the world that is closed under physical laws, such that all possible worlds are determined without reference to ends or goals;
iii. thirdly, that this fundamental state should admit a unique representation in terms of our ‘best physics’, where everything else supervenes upon this physical state.

This view of the Physical Given conceives the space in which nature conducts her affairs as a closed space of physical laws, dividing it from the normative space of things which pursue ends and goals. According to the philosopher of science and atheist activist Alex Rosenberg, the message given by science is “absolutely clear: no teleology, no purposes, goals, or ends” (Rosenberg 2012, p. 43). Under these rules, the aim of the game for philosophers of biology, philosophers of mind, theologians – or anybody whose subject matter is deemed less respectable than particle physics – is to relate those things they want to talk about to the ‘physical facts’ without helping themselves, ontologically speaking, to anything more than is strictly necessary. This is a balancing act in which the stakes are high. The scientific image may be sparse, but not too sparse; otherwise, we risk being unable to cash out any of the truth claims of our best physics. The manifest image may be thrown into doubt, but not too much doubt; otherwise, we risk sawing off the epistemic branch upon which the sciences are sitting.

The Myth of the Physical Given

I’d like to consider each of the three ‘givens’ that I have identified in the light of quantum mechanics, and to ask whether they are really given by physics so much as imposed.

a. The problem of quantum entanglement¹

¹ This section draws on the discussion in (Simpson 2017).

According to David Lewis, nature is simply a spatiotemporal mosaic of properties picked out by our best physics. In standard quantum mechanics, however, the state of a system is not represented by a distribution of physical properties in ordinary space and time, but by a ‘wave function’ defined in an abstract high-dimensional configuration space. This wave function evolves according to the famous Schrödinger equation and determines the probabilities of the various ‘observables’ which can be measured. Significantly, different wave functions can be combined into ‘quantum-entangled superpositions’ where the state of the composite system cannot be factored into the states of its spatially separated parts. For example, in the famous ‘EPR experiment’ involving two quantum-entangled particles, one particle is constrained to be ‘spin-up’ when another is measured to be ‘spin-down’, and vice versa, however far apart the two particles are separated (Einstein et al. 1935; Bohm 1951).
For a quantum-entangled system in what is called the ‘singlet state’, there’s a probability of 1/2 that we will observe particle 1 to be ‘spin up’ and particle 2 to be ‘spin down’, and a probability of 1/2 that we will observe particle 1 to be ‘spin down’ and particle 2 to be ‘spin up’. As long as the two measuring devices being used to measure the two particles have both been set to measure vertical spin, those are the only possible outcomes.

Whilst this anti-correlation is curious, it doesn’t by itself prove that there is some special connection between the two spatially separated particles. After all, one occasionally encounters students in Cambridge who persistently wear odd socks: either red on their left feet and blue on their right, or vice versa; the choice is random. Suppose that during a laboratory demonstration involving such a group of students, an unfortunate accident takes place in which their feet are suddenly separated by several miles, before their socks are collected and their colours noted. Whilst the anti-correlations in left and right sock-colours might be put down to some sort of social signalling, we have no cause to attribute the redness of one sock and the blueness of the other to any kind of ‘spooky’ connection spanning the gaps between their feet.

But there is an important empirical distinction between these two cases. Once the measuring devices being used to detect the spins of the two particles in the EPR experiment are rotated in relation to their axes of polarisation, the probabilities of measuring ‘Up’ or ‘Down’ for either particle will fall somewhere between one and zero. Significantly, the assumption of ‘classical’ physics that the spin of each particle is locally determined prior to measurement – like the colour of each sock – results in one set of statistical predictions, whilst quantum mechanics produces another. It is the quantum predictions that are borne out in experiments. The physicist John Bell famously demonstrated that quantum physics diverges from classical physics in predicting the dependence of the correlations of the measurement outcomes upon the relative angle between the two polarisers – a fact which neither of the particles, considered separately, should be in a position to ‘know’. Bell’s theorem is widely accepted and continues to make trouble for microphysicalists like David Lewis, who famously declared his unwillingness to “take lessons in ontology from quantum physics”.

Taken at face value, I suggest, a quantum system looks more like an anthill in which every passage opens into every other than a Lewisian mosaic made of one little piece and then another. Of course, there are ways to reconceive the mosaic to make it compatible with quantum mechanics, such as stripping it down to just particle positions or taking it out of spacetime. Yet there is no universally agreed way to interpret quantum mechanics in terms of a set of microscopic constituents, and each attempt to preserve such an ontology comes with theoretical costs. I see no reason to treat the existence of such a mosaic as given.

b. The measurement problem²

² This section draws on the discussion of the measurement problem in (Simpson & Horsley 2022).

What about the second requirement, that the temporal development of the whole physical world should be governed by universal laws? In the EPR experiment, we discovered that if we treat the two particles as having the local and determinate property of being either ‘spin up’ or ‘spin down’ before making a measurement, we end up predicting the wrong statistics.
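To make the contrast concrete, the singlet state and its characteristic correlations can be written in the standard textbook form (my notation; the formulas are not displayed in the paper itself):

\[
|\psi_s\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(\,|{\uparrow}\rangle_1 |{\downarrow}\rangle_2 \;-\; |{\downarrow}\rangle_1 |{\uparrow}\rangle_2\,\bigr),
\qquad
E(\theta) \;=\; -\cos\theta,
\]

where \(E(\theta)\) is the expected product of the two \(\pm 1\) spin outcomes when the detectors are set at a relative angle \(\theta\). Any ‘local’ pre-assignment of spin values, of the sock-colour variety, constrains combinations of such correlations to satisfy the CHSH inequality \(|E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2\), whereas the quantum prediction attains \(2\sqrt{2}\).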
If the whole world is governed by the famous Schrödinger equation, however, then it’s a puzzle why this indeterminacy is not transmitted up to the macroscopic level. After all, if the macroworld is made of indeterminate microscopic constituents, then why can’t we have situations in which there are macroscopic measuring devices whose pointers exist in an indeterminate superposition of pointing up and down at the same time? Why don’t we live in a world in which cats exist in a superposition of being both dead and alive, as in Schrödinger’s notorious thought-experiment? It seems quantum mechanics, if it is taken as a theory of everything, undermines itself, making it impossible to recover the manifest image.

Let’s consider the quantum dynamics in a little more detail. Prior to any measurement of a quantum system, the quantum state evolves according to the time-dependent Schrödinger equation. Equivalently, the state of a system at some arbitrary time can be obtained from its state at some earlier time through the action of what is called a unitary operator upon the earlier state:

\[
|\psi(t)\rangle \;=\; \hat{U}(t)\,|\psi(0)\rangle, \qquad \hat{U}(t) = e^{-i\hat{H}t/\hbar},
\]

where \(\hat{H}\) is the Hamiltonian of the system. This formula tells us how to start from a given state of a system (at time t = 0) and evolve the probability amplitudes for all the possible configurations of the system at some arbitrary time t.

Yet suppose we perform what is called a ‘non-demolition’ measurement on the system at time t. After this measurement, we know more about the state of the system than the information contained in the wave function. Specifically, the measurement outcome of the EPR experiment will have ruled out one of the combined states of the two particles to which the wave function assigns a non-zero probability; perhaps the state in which particle 1 is spin-down and particle 2 is spin-up. To obtain the correct results for future experiments, we have to update the wave function of the system that we are measuring with the empirical knowledge that we have gained from our experiment. Yet this updating is not performed at any stage by the time evolution operator. The wave function has to undergo a discontinuous modification. Just prior to time t, the wave function was evolving according to the evolution operator, and the system was in a superposition. Just after time t, we’re left with a situation in which particle 1 is spin-up and particle 2 is spin-down.

This discontinuous change in a system’s quantum state is known as ‘the collapse of the wave function’, and it is necessary to account properly for the determinate outcomes of our experiments. However, there is no agreed understanding of this physical process, with physicists disagreeing about when the wave function collapses, whether it really collapses, and what (if anything) causes it to collapse. This problem of reconciling the indeterminate world of quantum mechanics with the world of determinate measurement outcomes is called ‘the measurement problem’, and it has convinced many philosophers and physicists that the standard theory of quantum mechanics must in some sense be incomplete.
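The discontinuity at issue can be stated in the textbook form of von Neumann’s projection postulate (my formulation, supplementing the text): upon obtaining outcome k, the state is updated as

\[
|\psi\rangle \;\longmapsto\; \frac{\hat{P}_k\,|\psi\rangle}{\lVert \hat{P}_k\,|\psi\rangle \rVert}
\quad\text{with probability}\quad
p_k = \lVert \hat{P}_k\,|\psi\rangle \rVert^{2},
\]

where \(\hat{P}_k\) projects onto the subspace associated with outcome k. No choice of \(\hat{H}\) in the unitary dynamics above generates a stochastic, non-linear map of this kind, which is why the collapse has to be appended to the theory rather than derived from it.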
According to the physicist John Bell, any realist approach to quantum mechanics confronts a dilemma: either the dynamics of standard quantum mechanics is wrong and the wave function evolves according to a modified Schrödinger equation that permits it to collapse, or standard quantum mechanics is incomplete and there are ‘hidden variables’ that evolve according to some non-linear dynamics of their own (Bell 1987). Tim Maudlin has argued, on the assumption that the world is governed by universal laws, that there are two basic options: either we should adopt something like the GRW theory of the collapse of the wave function, or something like Bohmian mechanics (Maudlin 1995).

On the one hand, the GRW model suggested by Ghirardi, Rimini and Weber supplements standard quantum mechanics with a stochastic mechanism which produces random ‘hits’ on the wave function that result in an objective collapse (Ghirardi, Rimini, and Weber 1986). The effects of this rather ad hoc modification become significant when a large number of entangled particles are involved, such as the particles composing a macroscopic instrument of measurement. On the other hand, the pilot wave theory of de Broglie and Bohm, lately championed under the name of ‘Bohmian mechanics’, posits a configuration of particles with definite positions governed by a supplementary guiding equation (Bohm 1951, 1952a, 1952b; de Broglie 1928). This guiding equation depends upon a ‘universal wave function’ that evolves according to the standard Schrödinger equation and does not collapse. Agreement with the Born Rule, which gives the probability that a measurement of a quantum system will yield a given result, is secured via something called the ‘quantum equilibrium hypothesis’; a move which restricts the space of possible solutions to those predicted by standard quantum mechanics and which has been criticised for being ‘artificial’. In both cases, then, the standard textbook theory of quantum mechanics has to be adjusted in rather ad hoc ways to produce a theory which can specify universal laws for the microscopic domain that do not depend upon the existence of a macroscopic ‘observer’.

Another solution to the measurement problem exists, however, which drops the assumption that the temporal development of every microscopic system is closed under the same universal dynamics. According to the contextual wave function collapse (CWC) theory proposed by Barbara Drossel and George Ellis, quantum systems are causally open to their ‘classical’ environments, and it is the interaction of a quantum system with the intrinsic heat bath of a finite-temperature, macroscopic system that causes the collapse of its wave function (Drossel and Ellis 2018). Like GRW theory, CWC theory seizes the first horn of Bell’s dilemma, allowing the wave function of a microscopic system to collapse. Unlike GRW theory, the stochastic corrections that collapse the wave function depend upon the macroscopic context of the physical system. In short, the CWC model incorporates a feedback loop – from a particle, via the intrinsic heat bath of the measuring device, back to the particle – which introduces non-linear terms in the equation governing the evolution of the system that are specific to the system’s context. According to Drossel and Ellis, these extra terms can be accounted for using thermodynamics and solid-state physics. CWC theory is thus able to avoid introducing an ad hoc collapse mechanism into quantum mechanics (Drossel & Ellis 2018, pp. 13-19).
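Before turning to what CWC implies, it is worth displaying the kind of law the rival ‘hidden variables’ strategy posits. In its standard non-relativistic form (my statement of the textbook theory, not a quotation from this paper), the de Broglie-Bohm guiding equation reads:

\[
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,\mathrm{Im}\!\left[\frac{\nabla_k \Psi}{\Psi}\right]\!(Q_1,\dots,Q_N),
\]

where \(Q_k\) is the actual position of the k-th particle and \(\Psi\) is the universal wave function. The quantum equilibrium hypothesis is the further postulate that the configuration is distributed according to \(\rho = |\Psi|^{2}\); it is this postulate, rather than the dynamics alone, that secures agreement with the Born Rule.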
Whilst CWC theory is empirically equivalent to other interpretations of quantum mechanics, it implies that the whole world is not a single closed system which evolves according to universal laws. Rather, the world contains ‘open’ quantum systems whose temporal development is context-dependent. These quantum systems are embedded in ‘classical’ environments which are characterised by higher-level properties that are not governed by lower-level laws and make a difference to the dynamics of quantum systems. They derive these causal powers from the role they play in defining the Hilbert spaces and time scales in which the unitary time evolution of an open quantum system takes place.

Taken at face value, I suggest, the quantum world looks more like a world in which macroscopic things play a top-down role in modifying microscopic behaviour than a world governed by mechanical laws from which higher-level properties are ‘causally excluded’. Of course, there are ways to fix quantum mechanics to make it compatible with universal laws, such as adding a collapse mechanism or positing hidden variables. Yet there is no universally agreed way to interpret quantum mechanics in terms of a set of laws that govern everything for all time, and each attempt to turn quantum mechanics into such a theory comes with theoretical costs. I see no reason to regard the universality of microscopic laws as given.

c. The problem of unitarily inequivalent representations³

³ This section draws on the discussion of representations in (Simpson & Horsley 2022).

And now onto the third and last requirement I wish to consider: namely, that all of the properties investigated by the so-called ‘special sciences’ should supervene upon that fundamental state which admits a unique representation in terms of our ‘best physics’. According to the standard approach to scientific realism, a scientific theory’s explanatory virtues give us good reason to attribute representational content to some part of a physical theory. There seems to be a striking disparity, however, between the standard approach to realism and the way that physical theories are often used to explain phenomena in practice, which calls into question whether physics converges upon a unique lower level. I shall focus here on phenomena described by the theory of quantum statistical mechanics, but the same difficulties arise in quantum field theory.

At the core of any quantum theory are the canonical commutation relations between conjugate quantities such as position and momentum, which encode Heisenberg’s famous ‘uncertainty principle’. Any quantum theory which specifies a quantum state for a physical system defined in a Hilbert space, and which specifies a set of ‘observables’ in terms of bounded self-adjoint operators that act on this state, must realise something called ‘the Weyl algebra’ that is associated with these relations. When the operators on a Hilbert space conform to these commutation relations, they are said to be a ‘representation’ of these relations.

Unitary equivalence is widely considered the standard of empirical equivalence: if two representations are unitarily equivalent, there is some unitary operator that transforms one representation into the other, such that they both determine the same long-term experimental averages for the various observables which they define.
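In symbols (a standard formulation with \(\hbar\) set to 1, supplied here for concreteness rather than taken from the paper), the canonical commutation relation and its bounded ‘Weyl’ form read:

\[
[\hat{q},\hat{p}] = i, \qquad
U(a) = e^{ia\hat{p}},\;\; V(b) = e^{ib\hat{q}}, \qquad
U(a)\,V(b) = e^{iab}\,V(b)\,U(a).
\]

The unitary Weyl operators \(U(a)\) and \(V(b)\) encode the commutation relation in a form suitable for rigorous analysis; two representations of these relations may then be compared by asking whether some unitary operator intertwines them.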
However, the theory of quantum statistical mechanics generates a continuum of unitarily inequivalent representations in the so-called thermodynamic limit, which is used to describe systems with thermal properties. Whilst the Stone-von Neumann theorem establishes that any two (irreducible, regular) representations for a system with finitely many degrees of freedom will be unitarily equivalent, systems generated in the thermodynamic limit are infinite systems whose Hilbert-space representations fall outside the scope of the theorem.

Let us consider the example of a ferromagnet.⁴ When a physical system experiences a phase transition, certain properties of the system undergo discontinuous change due to some macroscopic change in their external conditions. An iron bar that is at thermal equilibrium, for instance, exhibits a paramagnetic phase above a critical temperature, in which it experiences no net magnetization. Below this critical temperature, however, it exhibits a ferromagnetic phase, in which it experiences spontaneous magnetization. In the presence of an external magnetic field, the ferromagnet admits two possible metastable states which are characterised by opposite magnetic polarisations. These two states are defined in the thermodynamic limit using unitarily inequivalent representations. The use of these infinite models turns out to be necessary for achieving empirical adequacy in describing systems like ferromagnets which are able to undergo phase transitions at critical temperatures. The statistical physics of finite systems identifies equilibrium states with unique Gibbs states (Ruetsche 2011, p. 3), implying that the phase available to a system at some temperature T is unique for all T. Yet systems like ferromagnets admit multiple metastable states at some temperature T. According to Laura Ruetsche, it is ‘only in the thermodynamic limit [that] one can introduce a notion of equilibrium that allows what the Gibbs notion of equilibrium for finite systems disallows: the multiplicity of equilibrium states at a finite temperature implicated in phase structure’.

⁴ For an extended discussion of this issue, see (Ruetsche 2006) and (Simpson & Horsley 2022).

The problem with the standard approach to realism, as Ruetsche points out, is that ‘there often isn’t a single interpretation under which a theory enjoys the full range of virtues that realists are wont to cite as reasons for believing that theory is true or approximately true’ (Ruetsche 2011, p. 5). In our example of the ferromagnet, it is evident that the quantum theory that describes the behaviour of the system admits of (at least) two representations and that these representations are not empirically equivalent. How might we interpret the theory to get rid of this element of pluralism and make sure that our theory only determines one set of possible worlds? On the one hand, to privilege the physical content of one particular representation – a move that Ruetsche calls ‘Hilbert space conservatism’ – would be to reduce the number of physically significant states to a subset of those that are generally accepted within successful scientific practices. On the other hand, to confine the physical content of a quantum theory to the algebraic structure that is shared by different Hilbert space representations – a move that Ruetsche calls ‘algebraic imperialism’ – would be to reduce the number of physically significant observables that are measured within successful scientific practices.
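The root of this pluralism can be made vivid with a standard toy model (my addition, following familiar textbook treatments rather than the paper’s text): consider an infinite chain of spins and the mean magnetization

\[
\hat{m} \;=\; \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \hat{\sigma}_z^{(i)}.
\]

In the representation built upon the ‘all spins up’ state, \(\hat{m}\) takes the value +1; in the representation built upon the ‘all spins down’ state, it takes the value -1. Since any operation constructed from local observables can flip only finitely many spins, it cannot change the value of \(\hat{m}\); no unitary map connects the two representations, and the global polarisation behaves like a classical, macroscopic quantity.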
Taken at face value, I suggest, the thermalised world looks more like a world in which the macroscopic enjoys a certain autonomy from the microscopic than a world in which the macroscopic merely supervenes upon the microscopic. Of course, there is a plurality of strategies that we can deploy to generate a microscopic supervenience base, yet there is no universally agreed way to reduce quantum systems that admit multiple inequivalent representations to systems which admit a unique representation, and each attempt to fix upon a unique supervenience base drives a wedge between theory and practice. I see no reason to regard the existence of a unique microscopic supervenience base as given.

Beyond the Myth of the Physical Given

It is time to step back from the more technical details of this discussion to summarise the misgivings I have expressed concerning ‘the Myth of the Physical Given’. But first, please don’t hear what I’m not saying: I am not claiming there is no way in which philosophers might interpret quantum theories in terms of some set of microscopic constituents governed by universal laws, in a world in which everything else supervenes upon a unique microphysical state. I am questioning whether philosophers should privilege this particular way of thinking about the relationship between physics and metaphysics, and I am also questioning its claim to superior explanatory power. The Myth of the Physical Given, as I have presented it, is a tendentious story about the success of the physical sciences which seeks to normalise a certain interpretive structure that has been imposed upon the scientific image; one in which the whole truth about nature supervenes upon a physical reality that exists outside of the space of ends and goals. It’s a mythology I think we have reasonable grounds to reject in the light of recent developments.

In the first place, I think the ‘Quantum Revolution’ that has taken place within physics has exposed a certain arbitrariness among the mythologisers concerning what counts as the physically significant states or observables with which they are going to build their totalising physical theory. It has also opened a conceptual space for hybrid approaches that admit irreducibly higher-level powers which avoid adding epicycles to the dynamics of microscopic systems.

In the second place, I think the ‘turn to powers’ in contemporary metaphysics has exposed certain conceptual problems in how the mythologisers attempt to account for lawlike regularities in nature without reference to anything’s ends or goals, such as the famous Ramsey-Lewis account of nature’s laws. It has also opened a conceptual space for a kind of teleology – or a built-in directedness – in our understanding of nature at the inorganic level.

In the third place, I think the ‘turn to practices’ in philosophy of science has exposed a chasm between how the mythologisers think about the success of the natural sciences and how our best scientific theories explain things in practice. It has also opened a conceptual space for a scientific realism centred on powers rather than laws, including (in principle) chemical or biological powers that do not supervene upon microphysical properties.
What might it look like, then, if we rejected the Myth of the Physical Given and sought to interpret physical theories like quantum mechanics within a more self-consistently neo-Aristotelian framework, taking at face value: the holism suggested by quantum entanglement, the openness to top-down causation invited by the measurement problem, and the emergence of higher-level properties in complex physical systems? This is something I’ve been considering recently, along with a small number of other ‘neo-Aristotelian’ philosophers. Let me briefly suggest one way to proceed.

I have argued elsewhere that CWC theory can be interpreted in terms of Robert Koons’s theory of ‘staunch hylomorphism’ (Simpson 2020, Simpson 2021), in which a form is united to the matter of a substance by grounding its causal powers (Koons 2014). In this theory, the inorganic world is wholly contained in a totality of ‘thermal substances’ (Koons 2019), which have both quantal and classical properties. There is no microscopic supervenience base which is uniquely represented by our ‘best physics’. Indeed, the world is not made of microscopic particles. Rather, every microscopic system is a dependent part of a thermal substance, which is a hylomorphic composite of both matter and form. Any changes that take place in the inorganic world are wrought through the causal powers of thermal substances, which have a tendency towards thermal equilibrium.

Of course, this metaphysical theory only describes the inorganic world and says nothing directly about life. It affirms an openness of lower-level processes to top-down causation from higher levels, however, which could in principle be repeated at multiple levels. One challenge facing neo-Aristotelians, going forward, will be to extend this picture rigorously to include other levels of reality such as biological systems, and to consider how this incorporation of multiple levels – requiring a harmonisation of reciprocal powers between different levels – might enliven our perception of how gracefully everything ‘hangs together’.

Bibliography

Bell, J. S. (1987). Speakable and Unspeakable in Quantum Mechanics. Cambridge: Cambridge University Press.
Bohm, D. (1951). Quantum Theory. Englewood Cliffs: Prentice-Hall.
Bohm, D. (1952a). A Suggested Interpretation of the Quantum Theory in Terms of “Hidden” Variables. I. Physical Review 85(2): 166–179.
Bohm, D. (1952b). A Suggested Interpretation of the Quantum Theory in Terms of “Hidden” Variables. II. Physical Review 85(2): 180–193.
de Broglie, L. (1928). La nouvelle dynamique des quanta [The new dynamics of quanta]. In Électrons et photons: Rapports et discussions du cinquième Conseil de physique tenu à Bruxelles du 24 au 29 octobre 1927 sous les auspices de l’Institut international de physique Solvay, pp. 105–132. Paris: Gauthier-Villars. English translation in Bacciagaluppi, G. & Valentini, A. (eds.), Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, pp. 341–371. Cambridge: Cambridge University Press.
Drossel, B. & Ellis, G. (2018). Contextual Wavefunction Collapse: An Integrated Theory of Quantum Measurement. New Journal of Physics 20, 113025.
Einstein, A., Podolsky, B. & Rosen, N. (1935). Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Physical Review 47(10): 777–780.
Ghirardi, G. C., Rimini, A. & Weber, T. (1986). Unified Dynamics for Microscopic and Macroscopic Systems. Physical Review D 34(2): 470–491.
Harré, R. & Madden, E. H. (1973). Natural Powers and Powerful Natures. Philosophy 48, 209–230.
Inman, R. D. (2017). Substance and the Fundamentality of the Familiar: A Neo-Aristotelian Mereology. London: Routledge.
Koons, R. C. (2014). Staunch vs. Faint-hearted Hylomorphism: Toward an Aristotelian Account of Composition. Res Philosophica 91, 151–177.
Koons, R. C. (2019). Thermal Substances: A Neo-Aristotelian Ontology of the Quantum World. Synthese 4, 1–22.
Kripke, S. A. (1981). Naming and Necessity. Oxford: Wiley-Blackwell.
Lewis, D. (1973). Counterfactuals. Cambridge, MA: Harvard University Press.
Lewis, D. (1986). On the Plurality of Worlds. Oxford: Blackwell Publishing.
Lewis, D. (1987). A Subjectivist’s Guide to Objective Chance. In Philosophical Papers, Volume II, pp. 83–113. Oxford: Oxford University Press.
Maudlin, T. (1995). Three Measurement Problems. Topoi 14, 7–15.
Molnar, G. (2006). Powers: A Study in Metaphysics. Oxford: Oxford University Press.
Oppenheim, P. & Putnam, H. (1958). Unity of Science as a Working Hypothesis. In Feigl, H., Scriven, M. & Maxwell, G. (eds.), Minnesota Studies in the Philosophy of Science (vol. 2), pp. 3–36. Minneapolis: University of Minnesota Press.
Quine, W. V. (1960). Word and Object. Cambridge, MA: MIT Press.
Ramsey, F. P. (1978). Universals of Law and Fact. In Mellor, D. H. (ed.), Foundations: Essays in Philosophy, Logic, Mathematics and Economics. London: Routledge & Kegan Paul.
Rose, D. & Schaffer, J. (2017). Folk Mereology is Teleological. Noûs 51(2): 238–270.
Rosenberg, A. (2012). The Atheist’s Guide to Reality: Enjoying Life without Illusions. New York: W. W. Norton & Co.
Ruetsche, L. (2006). Johnny’s So Long at the Ferromagnet. Philosophy of Science 73, 473–486.
Ruetsche, L. (2011). Interpreting Quantum Theories. Oxford: Oxford University Press.
Schaffer, J. (2010). Monism: The Priority of the Whole. Philosophical Review 119(1): 31–76.
Sellars, W. (1962). Philosophy and the Scientific Image of Man. In Colodny, R. (ed.), Frontiers of Science and Philosophy, pp. 35–78. Pittsburgh, PA: University of Pittsburgh Press.
Sellars, W. (1997). Empiricism and the Philosophy of Mind, Robert Brandom (ed.). Cambridge, MA: Harvard University Press.
Simpson, W. M. R. (2017). Half-Baked Humeanism. In Neo-Aristotelian Perspectives on Contemporary Science, pp. 123–145. New York: Routledge.
Simpson, W. M. R. (2018). Knowing Nature: Beyond Reduction and Emergence. In Torrance, A. B. & McCall, T. H. (eds.), Knowing Creation. Grand Rapids: Zondervan.
Simpson, W. M. R. (2020). What’s the Matter? Toward a Neo-Aristotelian Ontology of Nature. Doctoral Thesis, Peterhouse, University of Cambridge.
Simpson, W. M. R. (2021). From Quantum Physics to Classical Metaphysics. In Simpson, W. M. R., Koons, R. C. & Orr, J. (eds.), Neo-Aristotelian Metaphysics and the Theology of Nature. New York: Routledge.
Simpson, W. M. R. & Horsley, S. A. R. (2022). Toppling the Pyramids: Physics without Physical State Monism. In Marmodoro, A., Austin, C. & Roselli, A. (eds.), Time, Law and Free Will. Springer, forthcoming.
Wilson, J. (2015). Metaphysical Emergence: Weak and Strong. In Bigaj, T. & Wüthrich, C. (eds.), Metaphysics in Contemporary Physics, pp. 251–306.