Free Mind

Philosophers of free will usually focus on bodily action. Does anything change if we switch to purely mental actions such as thinking and imagining? Suppose a man is imprisoned: we would normally say that he is not free to do what he wants as far as his body is concerned, but he is free so far as his mind is concerned. He is not free to go to the corner store, but he is free to imagine doing that. He can think whatever he likes, but he can’t move his body in whatever way he likes; he is physically coerced or controlled, but not mentally. The compatibilist says he is mentally free but not physically free. That seems like an eminently reasonable thing to say—simple common sense. It would sound paradoxical to say that he was equally unfree both physically and mentally. If we add that the prisoner’s mental acts are determined by his desires, we seem not to detract from his freedom.[1] The reason he imagines as he does is that he has certain mental states, variously called desires, wants, wishes, inclinations, feelings, likes, and attitudes, which lead him to act as he does mentally; that is precisely why he acts freely. So the compatibilist maintains, and he seems right on the money—free action is doing what one likes because one likes to do it. It is going to take philosophical work to dislodge the compatibilist from this commonsense position. Notice that no reference to the body is made in this telling of things: the action takes place wholly within the mind; the body doesn’t move. Intuitively, the action was as free as any action could be—what more could possibly be wanted of freedom? This kind of action is a paradigm of free action: how can there be a problem of whether free will exists with regard to such actions? Free action is acting on one’s desires (etc.) and that is what we do when performing purely mental actions. 
A shade more theoretically, free action is action caused by desires (using the term in the broadest sense), and we do that all the time; therefore, we are free. If an action is not caused by one’s desires, then it is not free—say, it is caused by someone else’s desires or some sort of brain defect. An action is free if, but only if, it is (appropriately) caused by one’s desires.[2] It is harder for other people to influence your ability to think or imagine what you like than to control your bodily movements, so you are freer in the former area than the latter. But in both cases, you can act according to your desires, so you can be free to act with your body as well as your mind. All this seems pretty straightforward.

But things get more complicated for the compatibilist when we consider bodily action. In that case, the action involves movements of the body, and these are caused by internal states of the agent’s nervous system. This may also be true of purely mental acts, but it is not so obviously true—we can ignore it in the case of mental acts. But in the case of bodily actions there is now a clear rival to the explanation in terms of desires and other mental states, viz. internal states of the body. Indeed, such internal states look like the correct explanation of the movement in question, since physical events have physical causes. When the effect is mental, however, we have no such compelling reason to wheel in extra-mental causes. That is, bodily action involves us in the idea that actions have physical causes; maybe mental actions do too, but this is not something pressed upon us by the phenomena. And now we start to see a threat to freedom of action: for the physical causes of bodily movements are not desires, and freedom exists only when the action is caused by a desire. It isn’t determinism that undermines freedom; it is determinism by non-mental causes. Freedom actually requires determination by mental causes, chiefly desire, but it is ruled out by the existence of non-mental (i.e., physical) causes. Free action must (according to the compatibilist conception) be caused by desires, but bodily movements are not so caused, so they are not free. There is no such argument against the freedom of mental actions, since they are not bodily and hence don’t cry out for physical explanation. In other words, the incompatibilist position is more compelling for bodily action than mental action. Determinism is not the issue; the issue is what kind of determinism, mental or physical. Physical determinism rules out freedom even if you are a compatibilist, because it invokes non-mental causation in the explanation of action.
An action is free only if it is caused by desire, not by brain states. If epiphenomenalism were true, desires would not cause actions, only brain states would, so action would not be free according to the compatibilist credo. Desires have to be the reason people act as they do, or else they are effectively coerced by something other than their desires, something non-mental. So it is natural to suppose.

There is an obvious way out: identify desires with brain states. Then causation by suitable brain states is causation by desires. But this involves taking a controversial stand on the mind-body problem: we are free only if the mind-brain identity theory is true, i.e., freedom requires physical reductionism. We might have hoped not to be saddled with such a heavy metaphysical commitment in our efforts to save freedom. We could try claiming that bodily movements are not caused by brain states, but only by desires, but that is a hard pill to swallow. Or we could go for a more complicated position (supervenience, token identity without type identity, functionalism, etc.); but now we are up to our neck in the mind-body problem. The lesson is that the free will problem is deeply connected to the mind-body problem: we won’t know whether we are free until we have solved the mind-body problem. That is, we won’t know whether we are free in the way required by the (plausible) compatibilist definition of freedom until we are clear about how, if at all, the mind acts causally, i.e., how it relates to the causal machinery of the brain. If the mind really does cause movements of the body, and nothing else does, then we have freedom in the defined sense; but if the brain, as distinct from the mind, carries the causal burden, then we are not free, because then desire isn’t the cause of bodily action. The question of freedom must remain murky so long as these questions are not resolved. We may be free, if desires cause actions in the required way, but we may not be, if they don’t. To put it intuitively, desire must cause action directly and intimately in order that action be free, but we don’t really know whether this is true or not, because we don’t understand the mind-brain nexus. This is why freedom comes under threat: not from determinism as such but from the involvement of the physical world in the causation of action.
If anything other than desire is responsible for the production of action, then the compatibilist conception of free action is undermined: but that is something far from clear, pending a resolution of the mind-body problem. Of course, this means that if the mind-body problem cannot be solved, neither can the free will problem be solved. We know what it would be to be free, but we don’t know whether those conditions are in fact satisfied in the real world. The incompatibilist thinks he knows we are not free (because of the truth of determinism); the typical compatibilist thinks he knows we are free (given the correct definition of freedom): but really, we only know that we may be free (but may not be), given the uncertainty about the relation of mind and brain. My own feeling is that desire really does cause and explain action, both mental and bodily, so that the compatibilist position is correct: but I acknowledge that this is not demonstrably true. It feels like desire causes action, but that may be illusory, or partly illusory. I think mind and brain are inextricably connected, so that the causal story will inevitably award desire a central role in action causation, in which case it will be true that we are free in the required sense—desire will be the reason we act as we do. If so, bodily action will fall into line with mental action: both will involve causation by desire and only by desire (no intrusion of causes that can act as alternatives to the agent’s desires).

Perhaps ironically, it is dualism that acts as the greatest threat to freedom, because it suggests that action has a causal history that excludes desire, and hence it takes freedom away from the agent. The causation of action could become detached from the agent’s psychology, thus undermining the idea of doing what one likes as definitive of freedom. The identity theory, on the other hand, keeps desire in the center of free action by assimilating it to brain states, which cause bodily movements; but that theory is deeply controversial (perhaps not even intelligible in its classic formulations). We need a theory of mind and body that makes desire the true cause of action but doesn’t sideline the brain completely. This is none too easy a thing to do. Thus, the difficulty of the free will problem owes a lot to the difficulty of the mind-body problem. Still, the bogeyman of determinism is beside the point: freedom entails determination, though it must be determination of a specific kind, viz. desire-based determination. What is incompatible with freedom is the idea that our actions are not caused by our desires, because then we are not doing what we would like to do (what we like could easily include what we think is morally right). True, freedom consists in acting from one’s desires, but do we really act from our desires, as opposed to physical states of the brain? That is the question.[3]

[1] In this paper I presuppose the compatibilist position; the problem I discuss arises from within this tradition. I have defended compatibilism elsewhere.

[2] I put aside the deviant causal chains problem.

[3] If you take a moment and review the possibility that your actions stem from deep within your brain, instead of from your manifest desires, you will feel your sense of your own freedom evaporate; you will start to feel like a puppet. You will not feel like the master of your own destiny.

Cosmological Phenomenology

There are many types of intentional object, each with its associated phenomenology: physical, psychological, mathematical, linguistic, ethical, aesthetic, spatial, temporal, non-existent. I will be concerned with a rather extensive object—the universe. How does the universe present itself to consciousness? What suite of seeming (if I may put it so) is peculiar to this intentional object? What meaning does it have for us? The first point to note is that this meaning has changed over time, dramatically so, because astronomy has changed. I am talking about cosmic phenomenology under contemporary astronomy in contrast to pre-modern astronomy. I won’t rehearse the modern astronomical picture, or the old one, assuming that it is familiar enough.[1] We used to think there was one sun that revolved around us, and we had no reason to doubt that other planets (“stars”) had other civilizations on them. We now know that there are billions of stars just like the Sun, and we have every reason to suppose that life is rare in the universe, certainly in the nearby universe. So, there has been a kind of reversal in our picture of the universe: we thought the Sun was special and unique, while conscious beings were common and plentiful; now we see that the Sun is just one among a great many such objects, while conscious life is markedly confined in scope. The Sun is nothing special, but life is very special, only occurring in isolated pockets of the universe. Stars are ten a penny; minds are rare gems. Conscious beings are common on Earth, to be sure, but planets like Earth are few and far between. From an economist’s standpoint, stars are not scarce in the universe, and thus a dime a dozen, while minds are in relatively short supply (so far as we know) and therefore worth a king’s ransom. Stars are also just condensed clouds of dust (hot squashed dirt), while conscious beings are impenetrable mysteries of nature. What even is consciousness? How did it arise?
It isn’t squashed (or stretched) anything. Thus, our perspective on the universe has changed quite drastically: suns don’t command worship anymore, but conscious beings arguably do (reverence at least). Animals on Earth, particularly humans, are now the gods of the universe, not those glittering points of light in the night sky. The extraterrestrial universe has been de-mystified, naturalized, demythologized.

How else has our cosmic phenomenology changed? There are three basic aspects of the way we now view the universe, which I abbreviate to SIC—Small, Improbable, and Contingent. We now appreciate how small we really are compared to the rest of the universe, i.e., how small our neck of the woods is; even everything visible to us is only a tiny part of all there is out there. In the past we had no reason to think the universe much larger than the Earth (the Sun seemed relatively close and small); now we appreciate how minuscule our sector of the universe is compared to the whole. Thus, our current cosmic phenomenology is that of unimaginable distances and sizes. Second, we now understand that we are extremely improbable: it is only by remote chance that we exist at all; in most parts of the universe there is nothing like us (as far as we know). On the earlier conception, our existence was not improbable at all—we were what the universe was designed for. We were the point of the whole thing, what it naturally leads to or accommodates. Now we see that it was just a massive fluke that we came to exist at all. Third, our existence is highly contingent: we are not a necessary feature of the universe. We used to think of ourselves as essential to the universe, but now we see ourselves as radically contingent—much more so than the stars and planets. This makes us feel alienated from the universe: it is not generally hospitable to us, but inimical to us. There are very few places, apart from planet Earth, where beings like us could survive. The universe could easily have skipped life and consciousness altogether, as it has for vast stretches of its geography. The universe is an in-itself that contains no hint of a for-itself that takes itself as intentional object. It is mind-indifferent in its general nature, only producing mind in isolated spots, possibly only one spot among countless billions of other potential spots.
We are not at its ontological center but in its anomalous periphery, a freak exception. There could have been no beings like us and the universe would have existed in its present form nonetheless. The thought of the universe is thus the thought of something that is sublimely indifferent to our existence. Big hot stars are part of the predictable natural order, but we sentient beings are just a kind of local curiosity, despite our importance to ourselves. This is all part of the phenomenology of our intentional relation to the universe as it is now constituted.  We used to be the point of the whole arrangement, but now we are just a point. We have been demoted in the great scheme of things. We just happen to be, and will die out as unceremoniously as we arrived. We are small, improbable, and contingent—a mere blip in the universe’s history. This is what modern astronomy has taught us, shaping our consciousness of the universe and our place in it. It is not easy to digest.

A vivid illustration of this is provided by black holes (aka “dark stars”). There are two salient features of black holes: their absolute darkness and their extreme power. The black hole is antithetical to life: an inescapable dark place of implacable gravitational force. No light can escape its grip, and it crushes everything that falls into it. There can be no life in a black hole. This is not the traditional image of a cosmos generously created by God for human habitation and flourishing; you can’t raise a family in a black hole, or enjoy a game of croquet. But black holes are everywhere, apparently, including at the center of our galaxy; they are inevitable, a result of deep laws of nature. Yet they are totally inhuman—the very opposite of life-affirming. They are life-denying. A lot of the universe is like that: life-denyingly hot or cold, destructively stormy and cruelly crushing, no place for life to take hold. The black hole is the embodiment of annihilating power. This is why it grips the human imagination, shaping our modern consciousness of the world in which we live. That is what nature is all about, its true identity, not soft life and gentle consciousness. It would be different if we had discovered other hospitable planets and nurturing suns, full of teeming life and mind; but instead, we have found lifeless, life-denying brute matter—even the water out there is frozen solid for all eternity! Phenomenologically, the universe is a bleak and unforgiving desert. And it begins at our back door: even our neighboring planets are bereft of life and quite inimical to it—lumps of dead rock, basically. We exist by the skin of our teeth and extinction is an all-too-real possibility (global warming exemplifies the destructive power of cosmic chemistry). Our consciousness of the universe is shot through with images of peril, reflecting the universe in its true colors; a welcoming place it is not.
The black hole is just an extreme manifestation of the mindless violence of nature. Thus, our cosmic phenomenology is permeated by ideas of death and destruction. Stars are created only to violently destroy themselves; they have predictable life-spans and self-annihilate on a regular basis, sometimes quietly, sometimes spectacularly. The universe itself will one day run out of fuel and end in dark deadness. Starlight will be extinguished completely—there will be no sparkling stars anymore. The intentional object known as “The Universe” has gone from being God’s eternal creation designed for human flourishing to being a death machine destined to annihilate all life and eventually itself. That is the phenomenology that modern cosmology has bequeathed to us, though one seldom hears of its bleaker pronouncements (it’s always reported as dealing with “the beauties of the night sky” etc.). It hardly bears thinking about and I wonder how much human gloom stems from it.[2]

[1] Readers may wish to immerse themselves in the Smithsonian Universe (2020), edited by Martin Rees, a chastening experience (exhilarating too). The book is as big as its subject (nearly).

[2] This is an essay in philosophical-scientific belles-lettres and should be read as such. It is neither astronomy nor philosophy, except in an extended sense. Still, there is room for such ruminations, perhaps a need for them. There is very real despair at the heart of modern cosmology.

Developmental Philosophy of Mind

Developmental philosophy of mind is an undeveloped field. There are two questions: phylogenetic and ontogenetic. How did the mind as it now exists develop over evolutionary time, and how does it develop in the individual? As in developmental psychology, it is natural to adopt a stage conception of these processes: what stages does the mind go through to reach its mature state (as currently conceived)? There are successive discrete stages, characterized by distinctive principles, that predictably occur and which prepare the organism for subsequent stages. Each stage enriches the previous stage, possibly subtracting certain features, and is required for the more sophisticated stages to emerge (think Piaget). There are continuities and discontinuities, smooth ascents and abrupt leaps; there is no reduction of the later stages to the earlier stages. What we find are modifications of earlier traits to serve new functions, neither complete novelties nor mere re-applications. Thus, we may speak of an X-type stage giving rise to a Y-type stage. Given that ontogeny often recapitulates phylogeny, we might expect some parallelism in these two sequences. Of course, the brain of the organism in question will contain the necessary equipment for each stage once the phylogenetic process has done its work, but it may be that it manifests its evolutionary history in particular cases. I will be focusing here on the phylogenetic question, while keeping an eye on the ontogenetic question; I want to know the likely evolutionary history of the mind. What progressive sequence led to the mind as it now exists in human beings (and perhaps other animals with a relatively sophisticated psychological set-up)? What intelligible process of modification might have led to the mind as it now is? What is the natural history of human psychology? How did it start and what transformations did it undergo over evolutionary time?

The story I will outline should not be unfamiliar, though the ordering might seem eccentric. It runs as follows: sensation—perception—memory—imagination—thought—language. I am going to be brutally brief; the field is enormous, though well-trodden. We begin with sensation conceived as information-bearing but not fully representational, rather like the sensation of pain.[1] The sensation is correlated with worldly magnitudes and this correlation is relevant to the survival prospects of the creature in question—as it might be, subjective intensity and chemical gradients indicating a food source, coupled with suitable motor capacities. Yet there is nothing corresponding to the predicative attribution of a feature to objects in the environment—nothing that can be evaluated as veridical or inaccurate. This is sensation without perception proper. The next stage, then, will be the development of genuine perception, which does involve a kind of primitive semantics. This is a large step forward (it might have taken millions of years to establish itself in some sea-dwelling creature, say an octopus). The sensation is preserved but modified into a new psychological category: seeing x as F—a particular thing in the immediate environment represented as being a certain way. Next, we find memory: the perception is retained in some sort of storage facility, available for later use. This also is a major step forward, however inevitable it seems to us today—animals might never have remembered anything, or very little. However, it does exploit pre-existing psychological features in the form of perceptions: it is these that are remembered, stored for later use. We might conjecture that sensations per se were not remembered; only representational perceptions were deemed fit to be preserved. So, now we have remnants of perceptions stored in memory in some form, cut loose from their originating causal connections to the external world.
What follows is predictable: the emergence of the imagination. Mental images are formed from the materials of memory: the organism becomes capable of conjuring up such images (say, of its regular prey). Stimulus-freedom has entered the mental world of Earth-bound creatures. Soon the isolated image is subjected to manipulations and emendations, so that imagination begins to get a grip—the boundless capacity for invention, novelty, free expression.[2] At this point we might postulate a hiatus: the mind gets stuck at the imaginative stage; millions of years go by without any major developments to report. Then, slowly and tentatively, something new and different begins to stir: concepts, the building blocks of thought. Some enterprising species (probably that innovative octopus) develops thought, along with reasoning. This is certainly a giant step forward, though its biological function might be obscure; it takes a considerable architectural re-configuration. There is still sensation, perception, memory, and imagination in the organism’s repertoire, folded into the new cognitive capacity, but something original has come into the world—the thing we call thinking. In due course, this would be capped by the upsurge of language, a vehicle for thinking and communicating thoughts. We thus reach the level of words. There was no straight path from sensations to language; sensation was not a suitable pre-adaptation that might intelligibly lead to words and sentences. But the intermediate stages provide an adequate jumping-off point, when suitably supplemented, for language to get a grip on the mind. Thought and language are the culmination of the series of developmental changes that resulted from sensation in its primitive form. As nebulous cosmic clouds lead eventually to star formation, with associated solar systems etc., so inarticulate clouds of sensation lead to the formation of more differentiated psychological characteristics, one step at a time.
There is a natural developmental sequence, a predictable history. Of course, we don’t know much (if anything) about the mechanisms of such psychological ascent—we know much more about star formation—but we can suppose that natural processes account for the sequence we observe or postulate. History can be a mystery, but it happened somehow. The point is that we have a plausible story to tell about how it might have happened in the case of the phylogeny of mind. It’s rather like the emergence of feathers and flight in birds: initially feathers functioned as thermal regulators in dinosaurs, but in the fullness of time they were coopted to serve as means of flight. Evolution makes use of what it finds lying about; it can’t just magically conjure complex organs from nowhere. According to the developmental story I have sketched, just this kind of opportunistic tinkering is what drove the evolutionary development of mind. It is what made the modern mind possible. There is a natural ordering of mental faculties. Perhaps the child’s mind goes through a similar sequence: from sensation to perception, then memory, followed by imagination, leading to thought and language. Much of this is no doubt shrouded in mystery and occurs early on in the child’s mental life, but it doesn’t sound too farfetched or thrown together. The child achieves in about three years what it took life on Earth to achieve in billions of years (but then the child has a brain that is the upshot of those billions of years).

Superimposed on the developmental story I have told is a grand dichotomy: that between the propositional and the pre-propositional. We might think of this as the analogue of the dichotomy between the cold-blooded and the warm-blooded. Up to time t animals got by without anything propositional running through their heads; after t propositions found their way in. Minds began to grasp propositions and think real thoughts. I doubt this happened at the early stages of mental history—not even including the time of the imagination. It arrived late in the game—before language, I would say, but not before thought proper. Even today the human imagination is not essentially propositional, though propositions have infiltrated it (imagining-that); it is still largely perceptual in nature, though not a form of perception. Natural language is heavily propositional, though not exclusively so. The proposition now enjoys a kind of psychological hegemony, but it isn’t an absolute tyrant; it coexists with other psychological ingredients and remnants, with which it has obscure historical connections. It may be regarded as a watershed adaptation, requiring a new and challenging kind of mental athletics (logical reasoning etc.). Did it have any precursor in the evolution of animal brains? How much of a saltation is it? What is it a modification of? Whatever the answers to these questions may be, there is clearly a big distinction between two types of animal mind: those that can engage with propositions and those that cannot.

Animal bodies evolved in an orderly sequence, each body plan building on earlier ones. This was a long, drawn-out process, subject to all the pressures of natural selection. What we see now is the end product of this complicated history. Likewise, the mind evolved over billions of years, subject to the same pressures, each adaptation building on previous adaptations, with deletions and additions. It went through specific eras and phases. This sequence is not random or shapeless; it has a certain “logic”. Developmental philosophy of mind tries to discern the patterns and interrelations, the innovations and consolidations. It doesn’t just happen any old way. Broadly speaking, it is a story of increasingly refined intentionality, culminating in the phenomenon of linguistic meaning, but with useful remnants of the past still in play.[3]

[1] See Tyler Burge, Origins of Objectivity (2010), chapter 9, for a discussion of this distinction.

[2] See my Mindsight (2004) for a discussion of imagination.

[3] If we want to understand the human body, we do well to consider its evolutionary origins. Similarly, if we want to understand the human mind, we do well to consider its evolutionary origins. I am campaigning for a Darwinian perspective in the philosophy of mind, as there is already a Darwinian perspective in psychology. Of course, this is perfectly consistent with a more synchronic investigation alongside the diachronic one. (None of this means that I sign on to all so-called Darwinian approaches to the mind, and I don’t.)

Stephen Hawking: Logical Positivist

Reading Stephen Hawking’s The Universe in a Nutshell (2001), I came upon the following passage: “Any sound scientific theory, whether of time or of any other concept, should in my opinion be based on the most workable philosophy of science: the positivist approach put forward by Karl Popper and others. According to this way of thinking, a scientific theory is a mathematical model that describes and codifies the observations we make… If one takes the positivist position, as I do, one cannot say what time actually is. All one can do is describe what has been found to be a very good mathematical model for time and say what predictions it makes” (31). Later we read: “But as a positivist, the question ‘Do extra dimensions really exist?’ has no meaning” (54). Then: “From the point of view of positivist philosophy, however, one cannot determine what is real. All one can do is find which mathematical models describe the universe we live in.” (59) More: “From a positivist viewpoint, one is free to use whatever picture is most useful for the problem in question” (118). Additionally: “The mathematical model of black holes as made of p-brains gives results similar to the virtual-particle pair picture described earlier. Thus from a positivist viewpoint, it is an equally good model, at least for certain classes of black hole” (127). Finally: “However, from a positivist viewpoint, one cannot ask: which is reality, brane or bubble? They are both mathematical models that describe the observations” (198). In his glossary Hawking defines positivism as follows: “The idea that a scientific theory is a mathematical model that describes and codifies the observations we make” (206).

What should we say about these pronouncements, none of which is defended in the book? One would not think that positivism has been a dead letter in philosophy for many decades, for well-known reasons (which I will not rehearse). Alarm bells are sounded in the first quotation when Hawking identifies Popper as a positivist: he was explicitly and vociferously not a positivist. The idea that Hawking is taken with is that scientific theories don’t describe reality or purport to say what is true but rather provide “mathematical models” (whatever they are—we are not told) that can be “useful” in making “predictions”. They don’t tell us what things are or how they work but merely provide “good” models (“pictures”); the former type of question “has no meaning”. A scientific theory merely sums up (“describes and codifies”) the observations; it does not attempt to arrive at the truth about what these observations are observations of. This is good old-fashioned instrumentalism, a descendant of classical empiricism. So, the heliocentric theory of the solar system is not true or accurate—a correct description of real things—but merely a useful device for predicting the movements of the planets (themselves just ways of summing up our observations). Darwin’s theory of evolution is not a true account of how actual species came to exist but just a “mathematical model” of our biological observations. Anything else is literally meaningless. I won’t go into the very familiar arguments against such ideas; what is remarkable is the way Hawking adopts an extreme positivism without even acknowledging that he is saying something highly controversial (I would say complete rubbish). He is a physicist attempting to talk philosophy and making a complete hash of it. He obviously has no idea what he is talking about, but that is no impediment to making confident philosophical pronouncements.
Has he ever read any positivist literature (e.g., Ayer’s Language, Truth and Logic) or had a look at Popper’s writings? I doubt it. Instead, we are told what his “opinion” is, as if he has a right to say whatever he likes when it comes to philosophy, which is just a bunch of “opinions” anyway.

But there is a deeper and more disturbing point to be made: at least Hawking knows he is a positivist—he is aware that he is taking a philosophical stance in his physics. I don’t know how many times I have read a physicist (Einstein is a prime example) and thought, “He is clearly making positivist assumptions but is quite oblivious to the fact”. They think they are just talking plain common sense with which no one could sanely disagree. Obviously, this kind of attitude is deeply embedded in the culture of physics as it is now practiced. It is simple verificationism: what is real is what is verifiable. Any questions that don’t yield to verification must be deemed meaningless. Reality reduces to what is humanly knowable by means of the senses. That is just terrible epistemology. Yet it is tacitly taken for granted by supposedly educated people. So, I am grateful to Stephen Hawking for laying his cards on the table, shocking as it may be to see what those cards reveal.[1]

[1] From the point of view of human vices, it is the sheer overconfidence of many physicists that really shocks me.


Is the Universe Large?


If you study astronomy, it will be impressed upon you that the universe is large—very very very large, unimaginably so. The galaxies, their number, the distance between them, the travel times (even for light)—the universe is an extremely big object, much bigger than you thought, much bigger than anyone thought until quite recently. A feeling of awe routinely follows. I am going to argue that this is not true: the universe is not extremely large—it isn’t even large simpliciter. This is not because I have discovered that cosmologists have got their measurements wrong and the universe is actually much smaller than they thought; it follows rather from the semantics of the word “large”, from its ordinary meaning. The sentence “The universe is large” is not true (nor is the sentence “The universe is small”). The argument is in fact quite simple and obvious. Consider “Jumbo is large” said of an elephant. This sentence is true if and only if Jumbo is large for an elephant.[1] To be large for an elephant is to be larger than most elephants (or the typical elephant, or a normal adult elephant, or some such). That is, there is a (non-empty) comparison class presupposed in the original sentence, viz. the class of elephants. That is why a large flea is smaller than a small elephant—different comparison classes. We can thus define the positive use of the adjective in terms of the comparative use (and the superlative use too—the largest elephant is larger than all other elephants). Crucially, there is no sense in the positive use unless there exists a suitable comparison class. So, what is the comparison class for “The universe is large”? A large what, we must ask. A large universe, of course: This (pointing at our universe) is a large universe—the adjective now standing in attributive position. It is large for a universe—i.e., larger than most universes. But there aren’t any other universes! There is just this universe; there are no other universes hanging out in the wings. 
There are no other universes, some smaller, some larger, for ours to be compared with. Of course, the universe (note the definite article) is larger than the solar system or the whole Milky Way or a cluster of galaxies; but it is not larger than some other universe. That is the sortal term we need to make sense of the original claim, not “galaxy” and the like. The universe is certainly much larger than the things contained in it, but it is not larger than the other universes, because there are none. Maybe it is larger than some other possible universes, but that is irrelevant, since a small elephant is not rendered large by the merely possible existence of yet smaller elephants (nor is a large elephant rendered small by the possible existence of still larger elephants). No, the universe can only be meaningfully described as large if there are actual universes smaller than it—but there are none such. It’s like saying the Eiffel Tower is a large Eiffel Tower when there is only one Eiffel Tower. The universe is not larger than some other existing universe, so it makes no sense to speak of it as large—larger than what exactly? Other objects exist within a class of similar objects between which comparisons of size can be made, but that is precisely what cannot be said of the universe (everything that actually exists). If there were such co-existing universes, then it would make sense to say that this one is large by comparison with them (say, twice as far across), but that is what is signally lacking. Nothing is inherently large or small—a comparison class is needed for such judgments—so it is meaningless to suggest that the universe itself might be large (or small).

Why then do we insist on talking this way? The answer is that we are tacitly describing the universe subjectively, by reference to ourselves and our local environment. Yes, the universe is vastly larger than us or our particular neck of the cosmic woods, but it doesn’t follow that it is large in any objective sense. We tend to think anything larger than us is large in an absolute sense: to be larger than us is to be large, period. But that is an anthropocentric perspective: there is nothing intrinsically large about the spatial-material macro universe, as there is nothing intrinsically small about the micro world of atoms, quarks, etc. A possible world containing only free-floating atoms has nothing small in it, as a world containing many objects the size of our universe has nothing large in it. When we speak of such things as large or small tout court, we are thinking of them in comparison with our size, but really there is nothing to these descriptions, objectively speaking. The world does not come into existence containing small things and big things, only bigger or smaller things. The terms are completely relative, either to us or to suitable comparison classes. Things have shape and other qualities intrinsically, but their size is a relational matter. If I describe a mountain as huge, I am tacitly comparing it to my own body; compared to a whole planet it is a mere speck.

The same point applies to other attributive adjectives used in astronomy and cosmology and physics—“hot”, “heavy”, “fast”, “strong”, and their antonyms. The Sun is said to be extremely hot in its interior; some elements are described as heavy; the speed of light is said to be very fast; some forces are said to be strong. But none of these uses is truly objective: things are said to be “hot” or “cold” relative to normal human (or animal) temperatures, and the same for “heavy”, “fast”, and “strong”. These uses are subjective intrusions into our descriptions of nature, or else technical terms defined by objective relations between things. To say that light travels very fast can only mean much faster than we can or faster than other physical things. Nothing is inherently hot or heavy or fast or strong: there is nothing of our subjective nature in them, and they all come down to physical relations described in comparative terms. Gravity, say, is only described as a weak force in comparison with the electromagnetic force; it is not weak in any absolute sense. In a possible world in which things regularly travel faster than the speed of light, it could be correct to say that light travels slowly, even very slowly. That is just the logic of attributive adjectives (of this class). Semantically, light could be a slow mover and the Sun’s interior pretty cool and black holes quite light (weight-wise)—it all depends on what is true of other things that form the comparison classes for these attributions. We must not commit the fallacy of misplaced absoluteness. The language of astronomy, cosmology, and physics is logically misleading and could do with an overhaul. It needs de-subjectivizing.[2]

[1] Attributive adjectives like “large” are said to be syncategorematic, needing the appended noun in order to have meaning. The sentence “Jumbo is a large elephant” does not mean “Jumbo is large and an elephant”.

[2] There are even emotional connotations to these words that have no place in rigorous, austere, objective science: “hot” and “cold” evoke more or less hospitable environments; “heavy” suggests something hard to carry or potentially crushing; “fast” is something we like in ourselves but not in a predator; “strong” connotes an admirable quality. Such words humanize what should not be humanized, i.e., the physical universe. Even words like “attraction” and “repulsion” are suspiciously anthropomorphic. The constellations are clearly human projections, not hard objective astronomical facts. The Sun and Moon have been personified since the dawn of man.


Bad Philosophers


Time for a bit of academic sociology. Who are the world’s worst philosophers? I don’t mean individual philosophers within the profession; I mean academics in other fields who like to comment on philosophy. What disciplines produce the worst philosophical commentators? We have quite a full list to choose from: physicists, mathematicians, psychologists, biologists, literary theorists, linguists, neuroscientists, playwrights, novelists, and dishwashers (have I omitted anyone?). I won’t mention any names, but individuals will no doubt spring to mind. Nor will I cite compelling evidence; I will rely on my own reading and memories of encounters. Ready? I think neuroscientists come out the worst (closely followed by dishwashers, though we will discount them as lacking any academic specialty). Psychologists are slightly less bad because they keep their opinions more to themselves (glass houses and all that). Neuroscientists, amazingly, think they are cock of the walk. Literary theorists are notoriously inept philosophically, but they lack much in the way of prestige anyway, so people don’t take much notice of them (except other literary theorists). Biologists are not too bad, perhaps because they are engaged in doing real science and know the difficulties thereof (origin of life, anyone?). Linguists are really not bad at all, maybe because they are quite close to philosophers of language (and some are actually pretty smart). But it is physicists that really let the side down: they have no idea what philosophy is about. They seem to think it is physics without math and observation. And they are far too convinced of their own infallibility, or at least intellectual superiority. The best, it seems to me, are mathematicians, many of whom become professional philosophers: they understand the abstract, the “non-empirical”, the infinite. They are not lab-obsessed.

It’s the method not the subject matter that makes the difference. Not what the discipline is about but how it goes about it. Academics always make the mistake of thinking that their method is the only respectable one. That’s why mathematicians are the best at philosophy and neuroscientists are the worst: the a priori versus the a posteriori—the eyes versus the brain. Neuroscientists look at and into the brain and think that is the only way to arrive at sound conclusions; mathematicians don’t look at anything but deploy their rational faculties. Numbers are not like neurons. Psychologists are methodologically insecure, so they avoid methodological dogmatism when it comes to philosophy (though there are exceptions). Biologists have to use highly inferential methods in order to reconstruct the past, so they are more methodologically lenient. But physicists with their expensive machines and their calculators think anything not methodologically like physics is illegitimate. They are also invariably closet logical positivists who don’t know they are.

What about philosophy itself—who are the best and worst philosophers among philosophers? Some may say that ethicists are the worst, because they know the least about philosophy in general; and that is not wide of the mark. But I sense humility in them, which saves them from the worst excesses (they are just happy to be tolerated). Actually, I think the worst philosophers are the philosophers of physics (again!), mainly because they are often trained as physicists and then move into philosophy departments. They simply don’t know much philosophy and don’t care, but no physics department would accept them to do what they like to do. Also, they suffer from physics narcissism (the counterpart to physics envy): they think what they do is inherently better than what (real) philosophers do. Not that they are not high-IQ people; the trouble is they think too well of themselves and too little of people working in other areas. If they weren’t experts in physics, they wouldn’t be let near a philosophy department (we don’t have professors of the philosophy of chemistry or physiology). Philosophers of physics are given a free pass, and sometimes even admired! Perhaps this is because they can’t and don’t do actual philosophy, that unscientific discipline. So, who is the best at philosophy among the philosophers? I am inclined to say the philosophers of logic and mathematics—that kind of area, bordering on metaphysics. Some real philosophy gets done in these areas, which are taxing and abstract, genuinely difficult, but with a degree of rigor. Philosophers of mind strike me as too ideological, too sectarian; they posture and preen but don’t suffer for their calling. Really, though, the best philosophers are the ones that do it all, and there are not many of them, for understandable reasons. They at least appreciate the full extent of the subject and are not biased in favor of one department of it over the others; they don’t believe that their specialty is superior to all others. 
It also takes a lot of brain power to do it all.[1]

[1] I hope this piece is taken in the spirit in which it was intended, as a complete denunciation of everybody.


Proof of an External World


Kant famously (and ruefully) remarked that it was a scandal of philosophy that it had been unable to come up with a proof of the external world. He was right: it is a matter of some embarrassment that philosophy should be unable to prove something so obvious, so commonsensical. What good is philosophy if it can’t even prove something that elementary? The proof need not be simple or obvious (that also would be to the detriment of philosophy as an interesting enterprise); it could be intricate and convoluted, with spots of uncertainty. I am going to offer such a proof: it has a Kantian ring, but is not to my knowledge to be found in Kant (or anywhere else). This should remove the scandal and prove the worth of the discipline of philosophy. It should also be personally satisfying (I myself feel a great sense of relief).

Let’s start with a simple thought, which will point us in the right direction. Suppose the skeptic says that our perceived world might be pure projection—a figment of the human imagination, corresponding to no further reality. After all, we already agree that much of it is projection—as with the perception of color and other secondary qualities. Why not all—why shouldn’t primary qualities also be subjective projections? We might think there is an obvious reply to this: projections need a screen onto which to project, which is not itself a projection. Thus, material objects in space provide the screen onto which colors (etc.) are projected; they are the equivalent of the movie screen that pre-exists the pattern of light thrown onto it. So, the perceived world can’t all be projected image; it must include a non-mental background. If so, we have a proof of the external world: it follows from the fact of subjective projection that something other than projection must exist, viz. material objects in space. But, of course, the skeptic will not be deterred by this simple-minded maneuver: he will suggest that the alleged non-mental screen is really just a virtual world, an imaginary world, a fictional world. So-called objects in space are non-existent objects, or may be for all we know. It only seems to us as if such objects exist; they might all be non-existent intentional objects, like objects in dreams or works of fiction or hallucinations. It is that hypothesis that needs to be disproved in order to prove that there is an external world. For example, there is an appearance of a square object in my visual field, but this could be a non-existent square object, not one that really inhabits objective space. How can we rule this possibility out? I could be dreaming of a square object in front of me, this object being a mere figment of my imagination.

Here is the problem with this alternative skeptical hypothesis: we normally think there is a definite number of things that fall under a perceived (or conceived) attribute, but this will not be so if its extension consists only of non-existent objects. If lions and square things exist, then there is a definite number of them, known or unknown; but if they don’t exist, then there is no definite number of them. The point is familiar: there is no definite number of moles on Hamlet’s back or unicorns or angels or fairies. Such things are numerically indeterminate. But we normally think that ordinary objects of perception come in definite quantities, so they can’t just be non-existent entities. It follows from the fact of numerical determinacy that the objects of perception are not non-existent. Indeed, it is their existence in space that accounts for their numerical determinacy, since material objects are individuated by their location in space. Since non-existent objects do not exist in space, they can have no spatial principle of individuation that underpins their numerical determinacy. So, the skeptical hypothesis can be ruled out and our normal conception accepted. However, the skeptic is not beaten yet: why not say that there is no definite number of square things or lions since they are non-existent intentional objects? Why not bite the bullet and accept that consequence?

First, we should note that even if we do bite the bullet, we are still accepting that there are non-mental objects, because non-existent square things are not mental entities, any more than existent square things are (same for lions). We can quantify over them and they are not mental, so we have still proved that there are non-mental things (that don’t exist). But second, it is not so easy to give up on the numerical determinacy of attribute extensions: for attributes like these (sortal attributes) provide principles of counting, criteria of individuation, and these will generate assignments of cardinality. It is easy to miss this when an attribute applies to both existent and non-existent objects, but what sense does it make to suppose that an attribute that applies to pluralities of objects applies to no definite plurality of objects? If we claim that the attribute lion corresponds to no definite number of lions, how can it be said to distinguish one lion from another? Not in virtue of position in space, to be sure, because non-existent lions don’t exist in (real) space. We lose the idea of a totality of individual lions standing in spatial relations to each other and adding up to a specific number of lions. That idea requires existence; it can’t survive in the realm of non-existence. The notion of non-existent lions is parasitic on that of existent lions, but then we are back with the external world as naively conceived. A fictionalist about minds (a mental eliminativist) has a problem about the individuation of minds—how many non-existent fictional minds are there?—and a fictionalist about bodies has the same problem about their quantity. There really must be a definite number of minds and bodies for those concepts to have any intelligible content, but that idea goes out the window once we give up on existence altogether. 
Even the concepts of identity and difference begin to wobble when we enter the land of the non-existent (when are non-existent gods identical and when different?).

The attitude of sophisticated common sense is that we perceive a world of objects laid out in space, numerically distinct from each other, and forming totalities of specific cardinality. The skeptic tries to convince us that what we perceive are just non-existent intentional objects, but this involves abandoning the idea that we have concepts with definite cardinalities attached to them; and that is not a possible position, given the nature of our concepts (and associated attributes). Thus, an external world exists. The essential move in this proof is the observation that non-existence can provide no grounds for determining the number of things falling into the extension of a concept; only existence in space (in the case of material objects) can provide a basis for this determination. Things that don’t exist are not really countable in the way we normally (and rightly) take objects to be. Countability implies objectivity.[1]

[1] The proof here offered comes at the problem from a surprising direction. I think this is what we should expect, since no obvious method of proof has succeeded in removing the scandal. It would be surprising if the proof were not surprising.


Subjective and Objective


The distinction between subjective and objective is often used in philosophy, but it is less often articulated, still less analyzed.[1] I will do that. The task is not particularly difficult, though there are glitches to be ironed out. The distinction is well-founded and its basic nature easily understood. We can begin with the dictionary (OED) definition: for “subjective” we have “based on or influenced by personal feelings, tastes, or opinion; dependent on the mind for existence”; for “objective” we have “not dependent on the mind for existence; actual”. For philosophical purposes, the second definition is the appropriate one, not the definition in terms of personal feelings etc. Many things count as subjective without being based on feelings, tastes, or opinions (see below). The main limitations of the mind-dependence definition are (a) specifying what kind of dependence, (b) saying what is meant by “the mind”, (c) the lack of any positive characterization of the objective, and (d) the restriction to the mind as the sole source of subjectivity. We don’t want to say that logical inference produces subjectivity simply because the premises of an argument are beliefs (states of the mind) on which a conclusion depends. Also, the mind is a very various thing, so what specifically generates subjectivity? Is it anything mental or only some mental things? But more important are (c) and (d). First, let’s make it explicit that we are talking about mental representations and their content: we want to know what makes a representation subjective or objective—a perception, a thought, a sentence, an item of knowledge. Then question (c) is what a mental representation is dependent on when it is objective, granted that it is not dependent on the mind (whatever that is taken to include). Also, must the subjectivity-producing fact always be a mental fact? Can it ever be a physical fact?

The answer to the first question must surely be: the world, what exists outside the mind. For simplicity, let’s just speak of the physical world: then we can say that a representation of the physical world is objective if and only if it depends on the physical world. To be more precise, it must depend on the physical world beyond the subject’s body—ordinary objects surrounding the subject. These cause instances of the representation to occur; they explain the occurrence of the representation. In short, objective representations are dependent on external objects, while subjective representations are dependent on internal states of the subject. I say “internal states” because I want to lift the restriction to mental states, for several reasons. First, a mindless zombie could in principle have subjective representations, given that it allows its representations to be influenced by internal physical states just like those occurring in someone with genuine feelings etc. Second, an eliminative materialist can make use of the subjective-objective distinction while denying that minds exist at all, so long as internal states of the subject exercise control over the formation of representations instead of external facts. Third, illness or injury could induce the subject to form beliefs that are not appropriately sensitive to external facts but stem from internal physiological pathologies—these need not be mental. Representations can be subjective just by virtue of their internal causation, whether mental or otherwise; what matters is their detachment from external reality. We might even say that the dictionary has it the wrong way round: a belief about the external world is objective if and only if it is appropriately caused by that world, while a subjective belief is one not so caused (being caused by an internal state of the subject, mental or physical). 
What matters, intuitively, is what led up to the formation of the belief—external objects or internal states cut off from such objects. Is the belief object-generated or subject-generated? Is it held because of the facts it concerns or is it held because of certain internal perturbations? That is the crux of the distinction.

With these glitches taken care of we can now turn to classifying philosophical positions as subjectivist or objectivist: does our definition capture the intended notions? First, color (and other secondary qualities): perceptions of color are subjective in that the origin of such perceptions lies within the perceiver, and similarly for beliefs about color. The cause of color perceptions is internal to the perceiver (according to subjectivist views), so they come out as subjective by our criterion. Perception of primary qualities comes out as objective, since it is external features of objects that cause these perceptions; exogenous not endogenous causation. Colors come from the subject; shapes come from the object. Likewise, hallucinations stem from within, whereas veridical perceptions stem from without, so the former are subjective and the latter objective. In the case of perceptual constancies, objectivity arises when (for example) the retinal image is corrected in the direction of veridicality; the image is an aspect of the organism, and it can create subjective impressions of things that belie the objective facts.[2] Image and object are out of sync and objectivity requires correction of what the image suggests. Constancies are the result of amending the proximal retinal image to fit the distal objects, i.e., removing subjectivity from the visual output. Ethical subjectivism is precisely the doctrine that values originate from inside the subject: desires, inclinations, emotions. Ethical objectivism, by contrast, claims that moral judgment is responsive to facts external to the subject. So, this pair of doctrines tracks the definition I have proposed. The same goes for aesthetic values: if beauty is in the eye of the beholder, then it is in the subject, not in the object: hence aesthetic subjectivism and objectivism. Questions of taste obviously follow this model: what you happen to like is a fact about you not about the object, and therefore subjective. 
According to Kant, space and time are in us not in mind-independent reality; hence, representations of space and time are deemed subjective (mere appearances). Indexical representations rely on the location of the subject in space and time, so they too will count as subjective, as opposed to non-indexical representations which reflect the condition of the object represented without reference to the subject. It is all a question of whether the representational content owes its allegiance to something within (or about) the subject, or to the nature of the object represented.

We should not confuse subjectivity and objectivity with subject and object as such. All representation (intentionality) involves subject and object: a thing representing and a thing represented. So, every act of representation is both subject-involving and object-involving. There is no such thing as a view from nowhere (a subject-less view), and there is no such thing as a view of nothing (an object-less view). Subject and object are locked inexorably together (that is the logic of intentionality). But it doesn’t follow that all such acts are both subjective and objective; it all depends on the generation and composition of the representing content. Does it owe its existence to the world or to the mind (strictly, the internal)? The picture we have is that the external world exerts some control over how we represent it but that our inner nature also shapes how we represent things. Thus, we speak of the “subjective” and the “objective”. On occasion, both coexist in the same representation, as when we see both color and shape. Sometimes the subjectivity is undesirable, if it leads the subject into error (e.g., misperceptions of size when size constancy breaks down); but sometimes it is beneficial to the subject (e.g., in food selection or color vision). Both traits can be defended; neither is exclusively correct or useful. The search for an “absolute conception” is motivated by the reasonable ambition of expunging ourselves from our maximally objective picture of reality, but it would be misguided zealotry to try to eliminate all subjectivity from our modes of representation. Subjectivity has its uses, its virtues. Subjective and objective both deserve a place in the sun.[3]

[1] For background, see Colin McGinn, The Subjective View (1983) and Thomas Nagel, The View from Nowhere (1986).

[2] For a discussion of objectivity and perceptual constancy see Tyler Burge, Origins of Objectivity (2010).

[3] Setting aside technical issues of formulation, I take it that what I suggest here is not particularly controversial; indeed, it might be thought not terribly exciting. I agree, but sometimes it is nice just to have something obvious for a change. And clarity is never a bad thing.
