Experience Pluralism

We have five senses and five types of sensory experience. This is doubly contingent: we might have had fewer or more senses, and we might not have had a different phenomenological type of experience corresponding to each sense. The second claim is less obvious than the first, but evident on reflection. First, note that the relationship between stimulus-type and experience-type is contingent: the physical nature of the stimulus doesn’t entail the phenomenological nature of the perceptual response. Thus you can’t infer what visual experience is like from the physical nature of light or what auditory experience is like from the physical nature of sound waves (similarly for touch, smell, and taste). Nor can you infer the physics of the stimulus from the nature of the experience. There is a lawful correlation between stimulus and response, but there is no identity or metaphysically rigid relation between them. One could exist without the other. This lack of necessity underlies some familiar thought experiments: we can imagine rerouting the inputs from the ears into the visual cortex, so producing visual experiences from auditory stimuli, and vice versa. Or there could be beings initially set up to convert sound waves into visual experience and light into auditory experience. The stimulus contains information about the environment and the brain interprets this by using alternative modes of phenomenological response. Isn’t this what the human senses already do to some extent? The same (distal) stimulus can be seen or touched or even heard, and smell and taste respond to the same molecular stimuli. There is also the phenomenon of synesthesia, in which the same stimulus produces a response in two sense modalities. How the brain codes sensory inputs is not dictated by the physical stimulus, distal or proximal; in principle, we could invert the relations that actually obtain. There are possible worlds in which light produces olfactory sensations and people taste visually.

But there is a thought experiment of this class that I have never seen mentioned: the idea that all the senses of a given creature might be served by the same phenomenological type.[1] For instance, our five senses might be centrally manifested in only visual experience—we only see things when stimuli impinge on the ears, skin, nose, and mouth. We reduce the phenomenological range to a single sensory type that is common to all the senses. For humans this set-up would require some major (futuristic) surgery, so let’s assume we are dealing with a Martian that is born this way. Symphonies are “heard” as complex patterns of shifting light, objects are “felt” as they are seen, food “tastes” like a visual mosaic. For any type of input, there is just one mode of experiential response: instead of experience pluralism we have experience monism. Whether we should describe this situation as involving a single sense that responds to a variety of stimuli or several senses that are mediated by a single type of experience is not critical to decide; what matters is that there is a leveling of phenomenology combined with the usual types of sensory impingements. The same variegated physical world is represented by a uniform type of phenomenal world. This seems like a logical possibility, not ruled out by the concepts or by some deep metaphysical necessity. Granted, we don’t find any actual instances of it on planet Earth, but there might be other planets that are home to beings like this.

This thought experiment raises interesting questions. First, is there some biological reason that we don’t find actual instances of creatures like this? On the face of it such a set-up is more parsimonious than the actual situation, and doesn’t nature prefer parsimony? The genes would only need to engineer a single type of central sensory nerve to handle input from all the senses—the visual type. This would serve as well, so why complicate the physiology? It may be retorted that representing all the senses in a single phenomenological type would be confusing for the organism, since it wouldn’t know whether it was tasting or seeing; but this could presumably be accommodated by assigning different visual types to the two sorts of stimulus. Isn’t this what we already do within the visual sense—as when we have distinct sensations for shape and color? Couldn’t the all-encompassing visual sense contain a reference to the part of the body being stimulated, so that it was clear what sense was being activated? Why should this be any more confusing than simultaneously receiving inputs from senses with different phenomenological character, since a central unit has to separate and integrate the inputs so received in this case too? The purely visual organism could be constructed so as to keep track of the origin of its visual experiences, in part by assigning different visual types to each type of input. Brighter colors might be assigned to one sense compared to another, or different colors entirely. Visual experience is already very various and dependent on varying aspects of the light stimulus, so there seems no problem of principle preventing a purely visual subject from existing (perhaps one that is perceptually simpler than us). More strongly, this might be a better way to increase sensory bandwidth: smell and taste might become more discriminating when mediated visually. 
To the objection that visual tasting wouldn’t have the motivational force of ordinary tasting, we could stipulate that gustatory visual sensations be genetically linked to the pleasure centers of the brain, so that certain visual arrays elicit pleasure in the hungry eater. Don’t some tastes become pleasurable to us that were once repugnant or bland? Why not have menus listing the particularly tasty color combinations on offer tonight? You bite into an oyster and your visual sensorium lights up with an accompanying rush of pleasure.

So parsimony recommends experience monism, but so do other aspects of the organism. Don’t we find a conspicuous absence of florid pluralism in the anatomy and physiology of the body? The bones are much the same in point of composition throughout the body, despite differences of function and structure—we don’t find different types of bone composition according to where the bone is located. What would be the point of that? It would just make ontogenesis more difficult. And the underlying physiology of the nervous system is likewise homogeneous: the nerve cells associated with the different senses are of basically the same type (a nucleus, an axon, dendrites, and the same suite of chemical neurotransmitters); we don’t find radically different histological characteristics from sense to sense. Moreover, the distal stimulus is likewise uniform: the same physical world is present to each sense—consisting of atoms, forces, etc. But the sensory systems inject a marked heterogeneity into nature: they are more richly various from a phenomenological point of view than the external world or the physiology of the brain. They provide the pageantry and pizzazz. So we have a puzzle: why so much variety when parsimony and the general laws of nature recommend uniformity? Why make seeing so very different from hearing—or smelling so very different from touching? It seems like an act of generosity from nature to the experiencing organism—making life a little less boring and monotonous. But natural selection and the genes are not known for their generosity; they like things as simple as possible (such complexity as we find is forced on organisms by the rigors of survival). Our thoughts don’t exhibit as much phenomenological variety, no matter what their subject matter may be, so why do our senses insist on the gaudy plurality of our sensory experience? It seems surplus to requirements, a gratuitous gift, an unnecessary extravagance.
What would you say if we had fifteen senses each equipped with its own distinctive phenomenology when far fewer would do just as well? That would seem like biological largesse above and beyond the cause of gene propagation; why not strip it down a bit? The natural thought is that the variety we experience must possess some hidden biological utility, but it is not clear what this utility is, given the informational powers of visual experience (or the other senses in their most advanced forms). The cell serves every biological purpose in the body, but it is fundamentally the same from organ to organ. To be sure, cells vary somewhat from heart to kidney, skin to brain, but no more than visual experiences differ among themselves. What we don’t find is organisms (or organs) made of completely different chemicals, or partly cell-based and partly continuous, or bones that are sometimes made of calcium and sometimes made of metal. We find variations on a theme; sensory experience, by contrast, changes the theme itself. Seeing is really nothing like tasting. To lack a sense is to lack something sui generis, to miss out on something unique. A purely visual organism might go blind but still be replete with visual sensation; a blind man, however, can get at best hints of what vision might be like. Each type of sensory experience is, we might say, a world unto itself.

One possible view is that the present sensory set-up is a temporary and dispensable holdover from earlier evolutionary times. The senses evolved separately as solutions to survival challenges and the senses that now populate the planet build upon these early forays (much the same is true of basic anatomy). This is not a matter of ideal optimality but of contingent evolutionary history. Conceivably, the process could have started with greater uniformity and stayed that way, or it might eventually work out the kinks and favor sensory homogeneity. If we were building sentient robots, we might be faced with a design decision—one type of central component that delivers only visual phenomenology or several types that afford sensory variety. That decision, optimal or not, could affect future production. If reasons of economy favor the single-component approach, we might end up producing purely visual robots (though capable of responding to the full variety of physical inputs). This might correspond to life on other planets, depending on the actual course of evolutionary history. On our planet the earlier “decisions” favored distinct types of sensory experience, and thereafter organisms were stuck with them. This arrangement might be highly sub-optimal despite its universality in terrestrial life forms. If we imagine an early life form equipped only with visual sensations responsive to light, wondering how to expand into other stimulus fields, it would be intelligible if this form plumped for retention of its existing phenomenological capabilities extended to other types of stimulus. It could either devise new modes of sensory response to sound waves and other types of stimulus or stick with what it has onboard already. The latter choice might be preferable, given the engineering demands created by branching out.
So we must not simply assume that experience pluralism is the biological ideal; it might just be an adventitious artifact of how evolution on earth has actually progressed. Aliens might view our mixed phenomenology as distinctly old-world, pre-technological, and recommend switching to a more streamlined approach (they promise it will not be boring compared to the cumbersome system we now employ). Or there might be hearing-obsessed aliens (of bat-like aspect) who urge the merits of their sensory world and disparage the purely visual species. After all, whoever said that we humans are biologically perfect? Surely pain is not the best possible way to cope with injury in every possible world, so why should sensory diversity be the best possible way to handle information in every possible world? Among the life forms of the universe it might be quite parochial. Certainly some life forms on earth manage quite well without the full panoply of the five human senses—bacteria, worms, and much marine life.

I will mention another possibility, if only for completeness. This is that our sensory phenomenology might be less various than we suppose. Obviously, introspection plays a determining role here—we experience ourselves as experientially plural. We seem to ourselves to contain phenomenological multitudes. But perhaps this appearance is misleading; perhaps we are more uniform than we think—as the external world is more uniform than we naively suppose given the way we experience it. From a more abstract or objective point of view we may be more uniform than we appear. We already accept that there are commonalities in perceptual experience—intentionality, spatial embedding, functionality—and it may be that there is a way of describing experience that will render it more unified than our current ways. A more objective phenomenology might be a more uniform phenomenology; there may be structural universals across sense modalities.[2] Synesthesia suggests as much. Just as science can reveal hidden universals, so a scientific phenomenology might reveal experiential universals beyond our current grasp. Then the variety of sense experience would be revealed as superficial. Chomsky sometimes suggests that there is really only one human language when you get right down to it, despite superficial appearances; well, is it ruled out that there might be just one type of human sense experience? Call this Universal Phenomenology (UP for short): just like Universal Grammar, Universal Phenomenology might unite all human experience and distinguish it from other possible types of sensory awareness (reptilian, Martian). If that were so, phenomenology might be as uniform as physiology at a deeper level. 
I don’t think we could ever conclude that really there is just the sense of vision, with every other sense a minor variation on it; but we might conclude that the deep structure of all sensory experience is common to every type—no more various than the cell types that correlate with experience in the brain. At any rate, this is a possibility to keep in mind, especially since otherwise we seem confronted by a genuine biological puzzle (the puzzle of excessive phenomenological variety, as we might call it).

Our language is hooked up to our senses, so that we can comment on what we see and hear (etc.), but we don’t have a separate language for each sense equipped with its own sound system, syntax, and semantics. That would be pointless and biologically redundant, as well as confusing and energy-consuming. So why do we have separate phenomenological systems hooked up to our senses instead of a single system? Why isn’t our sensory system more like our language system? The language system is a singular and separate module with its own distinctive internal structure; it is not divided into five different modules each with its own grammar and lexicon. Evidently, this kind of architecture could in principle characterize our sensory system—say, a single visual module hooked up to our several sense organs. Yet that is not what we find, but instead a diverse and divided set of systems that must all be integrated somehow. It seems unduly complicated and unwieldy, like speaking five languages when one would suffice. Why the difference? Why not speak a single phenomenological language?


[1] This thought experiment emerged during a conversation with Tom Nagel on October 10, 2019.

[2] Here we might be reminded of Nagel’s discussion of “objective phenomenology” in “What Is It Like to Be a Bat?” The more a phenomenological description prescinds from the specifics of a given type of experience, the more universal it is apt to be. Thus we might aspire to cross-modality phenomenology.


What is Belief?



For all the work that has been done on the topic of belief, do we really know what belief is?[1] What kind of state (if state it be) is the belief state? Two suggestions have been prominent: belief is a feeling and belief is a disposition. Either belief is a state of consciousness analogous to sensation (pain, seeing red, feeling sad) or it is a tendency to behave in a certain way (assenting to a proposition, combining with desire to produce action). The OED defines “believe” as “feel sure that (something) is true”, thus categorizing belief as a type of feeling: not “be sure” but “feel sure”. What that feeling might be is left undetermined, though the definition has the ring of truth. And indeed belief is connected to feeling: your feelings tend to change when you acquire a belief, and there is such a thing as feeling sure. But what about beliefs you hold without thinking about them—are those beliefs all associated with feelings? Do you feel sure that London is in England, for example, even when the thought has not crossed your mind in months? Here is where the dispositional theory suggests itself: belief isn’t an episodic state of consciousness but a readiness to act in a certain way—to respond “yes” when asked whether London is in England, say. Ramsey said belief is a “map by which we steer”, emphasizing that beliefs guide action (but do we inspect our beliefs as we inspect maps?). And certainly beliefs and dispositions are tightly connected (as are desires and dispositions): your dispositions change when you acquire a belief, and belief encourages assent behavior. But is this what a belief is? Isn’t it rather the mental state that gives rise to the disposition? What if you had a tendency to assent verbally to propositions not because you believe them but because you have been rigged up that way by a clever scientist intent on simulating the state of belief?
In general, dispositional theories confound properties (states, facts) with their causal consequences; and we want to know what belief is, not what it does. The OED also has this under “believe”: “accept the statement (of someone) as true”. But don’t we accept statements because of what we believe? It isn’t that the belief is the acceptance. It is hard to avoid the impression that the dictionary (and the usual philosophical theories) conflates the symptoms of belief—feelings and dispositions—with belief itself. But then what is belief itself exactly?

Are we acquainted with belief itself? We are acquainted with sensations and behavior, both signs of belief, but are we acquainted with beliefs? The answer is not obvious. If we are, it seems curious that we draw a blank when considering the nature of belief; but if we are not, why do we bandy the concept around with such confidence? Is it perhaps that the concept is logically primitive and hence admits of no explanation in other terms? But that can’t be the reason for our ignorance, because the same is true of many concepts and yet we are not blind to the nature of their reference (pain, seeing red, maybe moral goodness). Or is it that the felt ignorance is an illusion born of a mistaken assumption, namely that we only know what a mental phenomenon is if we can reduce it either to a feeling or to a disposition? Maybe we know exactly what belief is but we think we don’t because beliefs are not sensational or behavioral, these being our preferred touchstones of mental reality when thinking philosophically. But that approach, though not unsound in principle, is hard to square with an evident fact: we really don’t know what it is to believe something—we have no conception of what fact is at issue. Once belief is distinguished from its symptoms its elusiveness becomes evident (compare Hume on causation).

This leaves us with another possibility—that “believes” is really a name for an I-know-not-what that we introduce to denote something that we reasonably believe to exist but can’t properly conceptualize. Belief is thus that state, whatever it is, that has such and such symptoms and plays such and such a role but whose nature we find elusive. In short, “belief” is a theoretical term—not just in application to others but also in application to oneself. Our knowledge of belief operates at one remove from the thing itself, which is why we have such an indeterminate conception of it. A similar approach might be suggested for the concepts of meaning and the self: these too are not directly encountered constituents of consciousness, which is why we can’t reconstruct them in such terms, but they are real nonetheless, just at some epistemic distance from our cognitive faculties. That is, not all parts of what we think of as the mind exist at the same epistemic level (and not because of a detached Freudian unconscious); some are not objects of direct inspection (perceptual or introspective). The ontology of folk psychology is an amalgam of these two types of fact (and we can add desire to belief): the mind consists of directly known constituents and relatively unknown constituents. Differently stated, belief (desire, meaning, the self) is a state that we refer to but are not acquainted with; we know many of its properties, but not its intrinsic nature. We know it is a propositional attitude (but what is an attitude exactly?) and that it involves the exercise of concepts, as well as being a truth-bearer, subject to referential opacity, and capable of combining with desire to lead to action: but we don’t grasp what kind of state it is—not in its intrinsic nature. The state gives rise to inner feelings and to outer behavior, but we have no clear idea of what it is in itself. 
We experience shadows of it, fleeting intimations and glimpses, but we have no firm conception of the thing itself: it is just “that which gives rise to these symptoms”. Ask yourself what kind of mental state you are in when you are asleep: you have various beliefs, but what is their mode of existence exactly? You might be tempted to reach for the concept of a disposition, but we have been down that road before—what is the ground of such a disposition? Let’s face it: you don’t know what to say, and yet you don’t doubt that you are in some sort of mental state. You might sputter that you are in a “cognitive state”, but that raises the same question over again: what kind of state is that? Not a feeling state and not a disposition, but a sui generis state that confounds comprehension. As we might say, we have only a partial grasp of what belief is. And the part we don’t grasp intrigues us the most, i.e. the very being of belief.

I grant that this position might sound counterintuitive. Doesn’t the Cogito express certain knowledge (“I believe, therefore I am”)? But how can that be if we don’t know what thinking (believing) is? However, this is really not such a paradoxical position to be in: we know that we think and believe, and that this entails our existence, but it doesn’t follow that we know what thinking and believing are—or what the self is for that matter. And did Descartes ever claim anything to the contrary—did he suppose that the nature of thinking is totally transparent to us? Knowing that something exists is not the same as knowing its nature. If Descartes had claimed that thinking is processing sentences in the language of thought, he could have been wrong about that; but this wouldn’t undermine the Cogito. In fact, I would say that if you focus really hard on what is going on when you believe something you will see that nothing determinate comes into view—you never catch your belief in flagrante, as it were. And you have no clear conception of what it is that you attribute when you ascribe beliefs to others (beyond their conceptual content). Nor does knowledge of the brain help: identifying belief with neural excitation in the B-fibers, say, affords no knowledge of what belief is in the ordinary sense. The problem is that neither does anything else—crucially, not introspection. We didn’t come by the concept of belief by noticing feelings of belief in ourselves (where would those feelings be located?), or by observing the operation of dispositions to behavior; rather, we introduced a term for a type of psychological state whose nature was not evident to us but which we were sure existed. I have evidence for my beliefs drawn from my experience (e.g. feelings of conviction), but I don’t believe in beliefs because I can grasp them whole. I see them through a glass darkly.
I have a nebulous sense that certain propositions attract my assent, as if gravitationally, but what exactly my mind is up to I cannot tell. Even the strongest of our beliefs, say religious or moral or scientific beliefs, fail to disclose their inner nature—we just find ourselves filled with passionate conviction about certain things. It isn’t like feeling a headache or a hunger pang in the stomach. Nor is it like hearing a sentence in your head. It isn’t like anything.

Psychology used to be conceived as an introspective science, and then later as a science of observable behavior, but these ideas were predicated on a certain conception of the essence of the mind. Either the mind consists of inner episodes of consciousness of which we have immediate introspective awareness, or it consists of outer behavior that can be perceived externally. But the case of belief (also desire) shows that these alternatives are not exhaustive and are fundamentally on the wrong track. In so far as psychology is about belief and kindred states, it is not about feelings or behavioral dispositions, but about facts we find systematically elusive, which fit into neither category. Beliefs are not feelings and they are not dispositions to behavior, yet they are fully mental phenomena, paradigmatically so. As Hume would say, we have no impression of belief, yet belief is real and knowable (in some of its aspects). Belief is yet another example of the limits of human cognition. Psychology thus has an elusive subject matter.[2]



[1] The background to this essay is scattered. The issues discussed bubble under the surface of Wittgenstein’s Philosophical Investigations and are explicitly posed in Kripke’s Wittgenstein on Rules and Private Language (as well as my Wittgenstein on Meaning). In addition, the emphasis on ignorance reflects my standing interest in human mysteries as they pertain to philosophy. Hume is hovering paternally in the wings. Russell makes a brief appearance.

[2] It might be said that belief is a computational state and that this gives its essential nature. There is a lot to be said about this suggestion; suffice it to remark that this doesn’t give us a conception of belief comparable to our intuitive notions of pain or seeing red. Belief may well have computational properties, but it is another thing to claim that this is what belief is (would it follow that computers believe?).


Knowledge and Human Nature



An alien observer of human cognitive development would be struck by a fact he might be tempted to describe as paradoxical. This is that in the first five years or so of life development is rapid and impressive while subsequent learning tends to be slow and laborious. The typical five-year-old already has excellent sensory awareness of the world, a mature language, and a fully functioning conceptual scheme—all without apparent effort. They may be small, but they are smart. The reason for this precocity, we conjecture, is that much of what they have achieved by that age is the unfolding of an innate program or set of programs: all this cognitive sophistication is written into the genes awaiting read-out.[1] It is not picked up by diligent inspection of the environment. It comes quickly because it was already present in substantial outline. Thereafter the child must learn things the hard way—by learning them. Hence school, memorization, studying, instruction, concentration. Knowledge becomes willed, while before it was unwilled, spontaneous, given.[2] Cognitive development turns into work.

It could be otherwise for our alien observers: they are accustomed to school virtually from birth, because their children are born knowing practically nothing. They learn language by painstaking instruction, having no innate grammar; concepts are acquired by something called “deliberate abstraction”, which is arduous and time-consuming; even their senses need years to get honed into something usable. They don’t reach the cognitive level of a typical human five-year-old till the age of fifteen. Empiricism is true of them, and it takes time and effort. However, they have excellent memories and powers of concentration, as well as an aversion to play, so their later cognitive development is rapid and smooth: they are superior to college-educated humans by the age of seventeen and they go on to spectacular intellectual achievements in later life, vastly outstripping human adults. They are slow at first, given the paucity of their innate endowments, but quick later, while humans are quick at first but slow later (our memory is weak and our powers of concentration lamentable). To the alien observers this seems strange, almost paradoxical: why start so promisingly and then lapse into mediocrity? They continue to gain in intellectual strength while we seem to lose that spark of genius that characterized the first few years of life. That’s just the way the two species are cognitively set up: an initial large genetic boost for us, and a virtual blank slate for them (but excellent capacities of attention, memory, and studiousness). Our five-year-olds outshine theirs, but their adults put ours to shame.

I tell this story to highlight an important point about the human capacity for knowledge—an existential point. The existentialists thought that freedom was the essence of human nature, conditioning many aspects of our lives, individual and social; but a case can be made that human knowledge plays a similar life-determining role. For we suffer under a fundamental ambivalence about knowledge—which is to say, about our cognitive nature (an ambivalence not confined to the non-affective parts of our lives). We are simultaneously very good at knowledge and quite poor at it. Some things come to us naturally and smoothly, especially in our earliest experience (pre-school); but other things tax us terribly, calling for intense effort and leading to inevitable frustration. Rote memory becomes the bane of our lives. Examinations loom over us. School is experienced as a kind of prison. Calculus is hard. History refuses to stick. Geography is boring. What happened to that earlier facility when everything came so easily? We were all equal then, but now we must compete with each other to achieve good test results, which determine later success in life. We seem to go from genius to dunce overnight. Imagine if you could remember your earlier successes and compare them with your current travails: it was all so easy and enjoyable then, as the innate program unfurled itself, but now the daily need to absorb new material has become trial and tribulation. Getting an education is no cakewalk. Wouldn’t it be nice if it could just be uploaded into your brain as you slept, as your genes uploaded all that innate information? It’s like a lost paradise, a heavenly pre-existence (shades of Plato), with school as the fall from blessedness. You are condemned to feel unintelligent, a disappointment, an intellectual hack.
Maybe you will make your mark in society by dint of great effort and a bit of luck, but you are still a member of a species that has to struggle for knowledge, for which knowledge is elusive and hard-won. Suppose you had to live in a society in which those late-developing aliens also lived: they would make you look like a complete ignoramus, an utter nincompoop—despite their initial slow start.

A vice to which human beings are particularly prone is overestimating their claims to knowledge. It is as if they need to do this—it serves some psychic purpose. Reversion to childhood would be one hypothesis (“epistemic regression”). But the actual state of human knowledge renders it intelligible: within each of us there exists a substantial core of inherited solid knowledge combined with laboriously acquired knowledge, some of it shaky at best. Take our knowledge of language, including the conceptual scheme that goes with it: we are right to feel confident that we have this under control—the skeptic will not meet fertile ground here (I know how to speak grammatically!). Generalizing, we may come to the conclusion that our epistemic skills are well up to par: so far as knowledge is concerned, we are a credit to our species. But this is a bad induction: some of our knowledge is indeed rock solid, but a lot isn’t. Being good at language is not being good at politics or medicine or metaphysics or morals. We are extrapolating from an unrepresentative sample. When we are young children our knowledge tends to be well founded, because it is restricted to certain areas; but as adults we venture into areas in which we have little inborn expertise, and here we are prone to error, sometimes fantastically so. We know what sentences are grammatical but not what political system is best. But we overestimate our cognitive powers because some of them are exemplary. It would be different if all our so-called knowledge were shaky from the start; then we might have the requisite humility. But our early-life knowledge gives us a false sense of security, which we tend to overgeneralize. We believe we are as clever about everything as we are about some things.

I recommend accepting that we have two sorts of knowledge—that we are split epistemic beings. On the one hand, we have the robust innately given type of knowledge; but on the other hand, we have a rather rickety set of aptitudes that we press into service in order to extend our innately given knowledge. Science and philosophy belong to the latter system. Thus they developed late in human evolution, are superfluous to survival, and are grafted on by main force not biological patrimony. There is no established name for this distinction between types of knowledge, though it seems real enough, and I can’t think of anything that really captures what we need; still, it is a distinction that corresponds to an important dimension of human life—an existential fact. We are caught between an image of ourselves as epistemic experts and a contrasting image of epistemic amateurishness. We are not cognitively unified. We have a dual nature. We are rich and poor, advantaged and disadvantaged. Other animals don’t suffer from this kind of divide: they don’t strive to extend their knowledge beyond what comes naturally to them. Many learn, but they don’t go to school to do it. They don’t get grades and flunk exams and read books. Reading is in some ways the quintessential human activity—an artificial way to cram your brain with information not given at birth or vouchsafed by personal experience. Reading is hard, unnatural, and an effort. It is an exercise in concentration management. We may come to find it enjoyable[3], but no one thinks it is a skill acquired without training and dedication (and reading came late in the human story). It is also fallible. And it hurts your eyes. This is your secondary epistemic system in operation (we could label the types of knowledge “primary knowledge” and “secondary knowledge” just to have handy names).

Animals are not divided beings in this way (they do not lament their lack of reading ability); nor do they apprehend themselves as so divided. But we are well aware of our dual nature, and we chafe at it (as the existentialists say that we chafe at the recognition of our freedom). We wish we could return to epistemic Eden, when knowledge came so readily; but we are condemned to conscious ignorance, with little inroads here and there—we are aware of our epistemic limits and foibles. We know how much we don’t know and how hard it would be to know it (think of remote parts of space). We know, that is, that we fall short of an ideal. We can’t even remember names and telephone numbers! Yet our knowledge of convoluted grammatical constructions is effortless. If we are that good at knowledge, why are we so bad? Skepticism is just the extreme expression of what we all know in our hearts—that we leave a lot to be desired from an epistemic point of view.[4] We are both paragons and pariahs in the epistemic marketplace. In some moods we celebrate our epistemic achievements, in others we rue our epistemic failures. The reason is that we are genuinely split, cognitively schizoid. Perhaps in the prehistoric world the split was not so evident, in those halcyon hunter-gatherer days, before school, writing, and transmissible civilization; but modern humans, living in large organized groups, developing unnatural specialized skills, have the split before their eyes every day—the specter of the not-known. We thus experience epistemic insecurity, epistemic neurosis, and epistemic anxiety. Our self-worth is bound up with knowledge (“erudite” is not a pejorative). It is as if we contain an epistemic god (already manifest by age 5) existing side by side with an epistemic savage: the high and the low, the ideal and the flawed. I don’t mean that we shouldn’t value what we acquire with the secondary system, or that it isn’t really knowledge, just that it contrasts sharply with the primary system.
The secondary system might never have existed, in which case no felt disparity would have existed; but with us as we are now we cannot avoid the pang of awareness that our efforts at knowledge are halting and frequently feeble. The young child does not suffer from epistemic angst, but the adult has epistemic angst as a permanent companion. School is the primary purveyor of that angst today. Education is thus a fraught venture, psychologically speaking, in which our dual nature uneasily plays itself out. The existentialists stressed the agony of decision, but there is also the agony of ignorance (Hamlet is all about this subject, as is Othello).[5]

Freud contended that the foundations of psychic life are laid down in the first few years of life (and sex, not freedom or knowledge, is the dominant theme), shaping everything that comes later. The stage was set and then the drama played out. I am suggesting something similar: the first few years of cognitive life lay down the foundations, and they are relatively trouble-free. Knowledge grows in the child quite naturally and spontaneously without any strenuous effort or difficulty. Only subsequently does the acquisition of knowledge become a labor, calling upon will power and explicit instruction. We might view this transition, psychoanalytically, as a kind of trauma: from ease to unease, from self-confidence to self-doubt. Who ever thought knowledge could be so hard! Compare acquiring a first language with learning a second language: so effortless the first time, so demanding the second. What happened? Now learning has become a chore and a trial. It is a type of fall from grace. The reason we don’t feel the trauma more is that it happens at such an early age (I assume there is no active repression)—though many a child remembers the misery of school. Knowledge becomes fraught, a site of potential distress. Cramming becomes a way of life, a series of tests and trials. But all the while the memory of a happier time haunts us, when knowledge came as easily as the dawn.[6] And then there is death, when all that knowledge comes to nothing—when all the epistemic effort is shown to be futile. Our divided nature as epistemic beings thus has its significance for how we live in and experience the world. It is not just a matter of bloodless ratiocination.



[1] I won’t rehearse all the evidence and arguments that have been convincingly given for this conjecture, save to mention the existence of critical periods for learning. Would that such periods could occur during high school mathematics training!

[2] Of course, we still pick up a lot of information without effort just by being in the world, but for many areas of knowledge something like school is required (this is true even for illiterate tribes).

[3] Logan Pearsall Smith: “People say that life is the thing, but I prefer reading.”

[4] Is it an accident that one of the prime distinguishing characteristics of God is his omniscience? He knows automatically what we can never hope to.

[5] The Internet, with its seemingly infinite resources, drives this point home. It also leads to varied and grotesque deformities in our cognitive lives.

[6] Here you see me lapsing into weak poetry, as all theorists of the meaning of life must inevitably do. Sartre’s Being and Nothingness is one long dramatic poem: who can forget his puppet-like waiter, or the woman in bad faith whose hand remains limp as her would-be suitor grasps it, or Pierre’s vivid absence from the cafe? My illustrative vignette would feature a bleary-eyed student studying in a gloomy library while recollecting her carefree sunlit days of cheerful effortless knowing.


Are There Subjective Reasons?







I like coffee and you like tea. This gives me a reason to choose coffee, but it doesn’t give you a reason to make that choice. The reason is relative to me—to my preferences. You would choose tea given the choice. Thus we might say that reasons of this type—desire-based reasons—are “subjective reasons”: they are relative to the individual subject making the choice. They are not like “objective reasons” that apply to everyone equally, such as (allegedly) moral reasons, which are indifferent to the individual’s personal preferences. Everyone has a moral reason not to murder his neighbor, no matter how much he might prefer him dead—viz. that it would be morally wrong to do it. But some reasons (perhaps most) are subjective in the sense that they don’t generalize: they apply only to individuals with appropriate desires or wishes or tastes or inclinations. They have no rational hold over anyone else. It would be wrong to criticize someone for not acting on them, given their personal preferences. When it comes to matters of taste, the right response is: “It’s all completely subjective”.

But this is mistaken for two reasons. The first is that your preferring tea gives me a reason to offer you tea, while I contentedly stick to coffee: that is, the fact that you have a preference for tea works as a reason applicable to me to act in certain ways in relation to you. You have a certain property—being a tea-fancier—and that gives me a reason to supply you with tea in appropriate circumstances. So that reason applies to everyone equally: it is objective. It is objectively the case that everyone has a reason to give you tea, not coffee: there is nothing subjective about that. Second, if I shared that property I too would have a reason to choose as you do. So we can generalize as follows: everyone is such that if they have a preference for tea they have a reason to choose tea. It is not as if you could have that preference and it still be a question what you have reason to do. It isn’t “up to you” what it is rational to do, a matter of subjective whim. True, you may not actually have the property in question, but it is an entirely objective matter that if you do a certain choice is rational. It is an objective fact about the property that it requires a certain choice. It functions as an objective reason whenever it is instantiated. There is nothing subjective about the reason once the facts are fixed. The reason may be said to be a conditional reason, i.e. it depends on instantiating certain properties, but there is nothing “subjective” about it. Salt only dissolves if certain conditions obtain—that doesn’t make it “subjective”. We might call desires “subjective states” because they are psychological properties of conscious subjects, but that doesn’t imply that they provide merely subjective reasons. Whenever a reason applies it always generates objective requirements: on others to act in certain ways, and on anyone who has the property that grounds the reason.
There is never any purely subjective (or “agent-relative”) rationality: all rationality is objective (impersonal, absolute, general).

We might compare this to subjective facts. There are no purely subjective facts, i.e. facts that have no objective reality. There are psychological facts about subjects, but these are objective facts in the sense that they exist absolutely, not for some people and not others. Bat experiences are facts in the objective world (there is no other). They might be known only by bats, but their existence is not relative to bats—they are part of objective reality (not fictions or dreams or projections). To be is to be objective. Not everyone has bat experiences, but they don’t exist only from the perspective of bats (whatever that might mean). In the same way not everyone has a preference for tea, but that preference exists objectively and gives rise to objective reasons for action that apply to anyone. Even a taste shared by no one else, say a fondness for grilled cactus, has its objective reason-giving power: this idiosyncratic individual can expect to be offered grilled cactus at a barbecue, and if anyone else were to acquire the taste they would have every reason to act on it. There are no reasons that apply to an individual in isolation without implications for anyone else. Rationality is never purely personal in this sense.[1]



[1] We might then say that there are two sorts of objective reason for action: the sort that depends on the psychological make-up of the individual and the sort that doesn’t so depend. The former would include personal tastes; the latter would apply to moral reasons (assuming we accept this view of morality). There are not “subjective reasons” and “objective reasons”.


A Problem in Hume






Early in the Treatise Hume sets out to establish what he calls a “general proposition”, namely: “That all our simple ideas in their first appearance are deriv’d from simple impressions, which are correspondent to them, and which they exactly represent” (Book I, Section I, p. 52).[1] What kind of proposition is this? It is evidently a causal proposition, to the effect that ideas are caused by impressions, and not vice versa: the word “deriv’d” indicates causality. So Hume’s general proposition concerns a type of mental causation linking impressions and ideas; accordingly, it states a psychological causal law. It is not like a mathematical generalization that expresses mere “relations of ideas”, so it is not known a priori. As if to confirm this interpretation of his meaning, Hume goes on to say: “The constant conjunction of our resembling perceptions [impressions and ideas], is a convincing proof, that the one are the causes of the other; and this priority of the impressions is an equal proof, that our impressions are the causes of our ideas, not our ideas of our impressions” (p. 53). Thus we observe the constant conjunction of impressions and ideas, as well as the temporal priority of impressions over ideas, and we infer that the two are causally connected, with impressions doing the causing. In Hume’s terminology, we believe his general proposition on the basis of “experience”—our experience of constant conjunction.

But this means that Hume’s own critique of causal belief applies to his guiding principle. In brief: our causal beliefs are not based on insight into the real powers of cause and effect but on mere constant conjunctions that could easily have been otherwise, and which interact with our instincts to produce non-rational beliefs of an inductive nature. It is like our knowledge of the actions of colliding billiard balls: the real powers are hidden and our experience of objects is consistent with anything following anything; we are merely brought by custom and instinct to expect a particular type of effect when we experience a constant conjunction (and not otherwise). Thus induction is not an affair of reason but of our animal nature (animals too form expectations based on nothing more than constant conjunction). Skepticism regarding our inductive inferences is therefore indicated: induction has no rational foundation. For example, prior to our experience of constant conjunction ideas might be the cause of impressions, or ideas might have no cause, or the impression of red might cause the idea of blue, or impressions might cause heart palpitations. We observe no “necessary connexion” between cause and effect and associate the two only by experience of regularity—which might break down at any moment. Impressions have caused ideas so far but we have no reason to suppose that they will continue to do so—any more than we have reason to expect billiard balls to impart motion as they have hitherto. Hume’s general proposition is an inductive generalization and hence falls under his strictures regarding our causal knowledge (so called); in particular, it is believed on instinct not reason.

Why is this a problem for Hume? Because his own philosophy is based on a principle that he himself is committed to regarding as irrational—mere custom, animal instinct, blind acceptance. He accepts a principle—a crucial principle—that he has no reason to accept. It might be that the idea of necessary connexion, say, is an exception to the generalization Hume has arrived at on the basis of his experience of constant conjunction between impressions and ideas—the equivalent of a black swan. Nothing in our experience can logically rule out such an exception, so we cannot exclude the idea based on anything we have observed. The missing shade of blue might also simply be an instance in which the generalization breaks down. There is no necessity in the general proposition Hume seeks to establish, by his own lights—at any rate, no necessity we can know about. Hume’s philosophy is therefore self-refuting. His fundamental empiricist principle—all ideas are derived from impressions—is unjustifiable given his skepticism about induction. Maybe we can’t help accepting his principle, but that is just a matter of our animal tendencies not a reflection of any foundation in reason. It is just that when we encounter an idea our mind suggests the existence of a corresponding impression because that is what we have experienced so far—we expect to find an impression. But that is not a rational expectation, merely the operation of brute instinct. Hume’s entire philosophy thus rests on a principle that he himself regards as embodying an invalid inference.

It is remarkable that Hume uses the word “proof” as he does in the passage quoted above: he says there that the constant conjunction of impressions and ideas gives us “convincing proof” that there is a causal relation that can be relied on in new cases. Where else would Hume say that constant conjunction gives us “convincing proof” of a causal generalization? His entire position is that constant conjunction gives us no such “proof” but only inclines us by instinct to have certain psychological expectations. And it is noteworthy that in the Enquiry, the more mature work, he drops all such talk of constant conjunction, causality, and proof in relation to his basic empiricist principle, speaking merely of ideas as “derived” from impressions. But we are still entitled to ask what manner of relation this derivation is, and it is hard to see how it could be anything but causality given Hume’s general outlook. Did he come to see the basic incoherence of his philosophy and seek to paper over the problem? He certainly never directly confronts the question of whether his principle is an inductive causal generalization, and hence is subject to Humean scruples about such generalizations.

It is clear from the way he writes that Hume does not regard his principle as a fallible inference from constant conjunctions with no force beyond what experience has so far provided. He seems to suppose that it is something like a conceptual or necessary truth: there could not be a simple idea that arose spontaneously without the help of an antecedent sensory impression—as (to use his own example) a blind man necessarily cannot have ideas of color. The trouble is that nothing in his official philosophy allows him to assert such a thing: there are only “relations of ideas” and “matters of fact”, with causal knowledge based on nothing but “experience”. His principle has to be a causal generalization, according to his own standards, and yet to admit that is to undermine its power to do the work Hume requires of it. Why shouldn’t the ideas of space, time, number, body, self, and necessity all be exceptions to a generalization based on a past constant conjunction of impressions and ideas? Sometimes ideas are copies of impressions but sometimes they may not be—there is no a priori necessity about the link. That is precisely what a rationalist like Descartes or Leibniz will insist: there are many simple ideas that don’t stem from impressions; it is simply a bad induction to suppose otherwise.

According to Hume’s general theory of causation, we import the idea of necessary connexion from somewhere “extraneous and foreign”[2] to the causal relation itself, i.e. from the mind’s instinctual tendency to project constant conjunctions. This point should apply as much to his general proposition about ideas and impressions as to any other causal statement: but then his philosophy rests upon the same fallacy—he has attributed to his principle a necessity that arises from within his own mind. He should regard the principle as recording nothing more than a constant conjunction that he has so far observed, so that his philosophy might collapse at any time. Maybe tomorrow ideas will not be caused by impressions but arise in the mind ab initio. Nowhere does Hume ever confront such a possibility, but it is what his general position commits him to.



[1] David Hume, A Treatise of Human Nature (Penguin Books, 1969; originally published 1739).

[2] The phrase is from Section VII, [26], p. 56 of An Enquiry Concerning Human Understanding (Oxford University Press, 2007).


Is Solipsism Logically Possible?






It has been commonly assumed that solipsism is logically or metaphysically possible. I could exist without anything else existing. There are possible worlds in which I exist and nothing else does. I can imagine myself completely alone. Seductive as such thoughts may appear, I think they are mistaken; they arise from a confusion of metaphysical and epistemic possibility.

Suppose someone claims that this table in front of me could exist in splendid isolation, the sole occupant of an ontologically impoverished world—no chairs, planets, people, birds, etc. Well, that seems true—those absences are logically possible. But what about the piece of wood the table is made of? This table is made of that piece of wood in every possible world in which it exists, so the table cannot exist without the piece of wood. But that piece of wood came from a particular tree—it could not have come from any other tree. So this table can only exist in a world that also contains the tree in question, since it was a part of that tree. The table and the tree are distinct existences, so the table cannot exist without something else existing—the tree that donated the part that composes it. The table is necessarily composed of that piece of wood and that piece of wood necessarily derives from a particular tree: there are necessities linking the table with another object, viz. the tree. Thus “solipsism” with respect to this table is not logically possible.

Now consider a person, say me. I could not exist without my parents existing, since no person could be this individual and not be born to my parents. This is the necessity of origin as applied to persons. In any world in which I exist my parents exist; more precisely, in any world in which I exist a particular sperm and egg exist (and they can exist only because of the human organisms that produced them). So my existence implies the existence of my parents. Therefore solipsism is not logically possible. But the existential ramifications go further: my parents cannot exist in a world in which their parents don’t exist. And so on back down the ancestral line, till we get to the origin of life: no later organism can exist without the procreative organisms in its ancestral line. Every organism has an origin, and that origin is essential to its identity. But it goes even further, because the very first organism must have had its own inorganic origin, presumably in a clump of molecules, and that origin is essential to it—it could not exist without that clump existing. And that clump of molecules also had an origin, possibly in element-forming stars; so it couldn’t exist without the physical entities that gave rise to it. And those physical entities go back to the big bang, originating in some sort of super-hot plasma. So I (this person) could not exist unless the whole chain existed, up to and including certain components of the big bang. Colin McGinn could not exist without millions and millions of other things existing, granted the necessity of origin. I am linked by hard necessity to an enormous sequence of distinct particulars. I couldn’t be me without them.

Of course, there could be someone just like me that exists in the absence of my specific generative sequence—though he too will necessarily carry his own generative sequence. Perhaps in some remote possible world this counterpart of mine arises not by procreation but by instantaneous generation—say, by lightning rearranging the molecules in a swamp. But even then that individual would not be able to exist without his particular origins—his collection of swampy molecules and that magical bolt of lightning. Solipsism will not be logically possible even for him. In any case, the question is irrelevant to whether I could exist without my generative sequence: my counterparts are not identical to me. All we are claiming is that solipsism is logically impossible so far as I am concerned—this specific human being. It is my existence that logically (metaphysically) requires the existence of other things—lots of other things. I (Colin McGinn) could never exist in another possible world and peer out over it to find nothing but myself (at least throughout history—I might exist without any other organism existing at the same time as me, my parents both being dead). The same applies to any person with the kind of origin I have, i.e. all human beings.

Why do we feel resistance to these crushingly banal points? I think it is in part because we confuse a metaphysical question with an epistemological question; and we cannot answer the epistemological question by appealing to our answer to the metaphysical question. The epistemological question is whether I can now prove that solipsism is false: can I establish that I am not alone in the universe? In particular, can I establish that my parents really exist (or existed)? Maybe they are just figments of my imagination; maybe I was conceived by lightning and swamp. I cannot be certain that I was not. I cannot even be certain that I have a body. I can establish that I think and exist, but I cannot get beyond that in the quest for certainty. So the existence of my parents is not an epistemic necessity. If I could prove that I am a member of a particular biological species, then maybe I could prove that I must have arisen by sexual reproduction from other members of that species: but the skeptic is not going to let that by—she will demand that I demonstrate that I am a particular kind of organism arising by sexual reproduction. And I will not be able to meet that challenge, since there are conceivable alternatives to it (the hand of God, swamp and lightning, the dream hypothesis). Maybe I just imagine that I am a biological entity with parents and an evolutionary history. So we cannot disprove solipsism in the epistemological sense: for all I know, there is nothing in the universe apart from me.

But this is perfectly compatible with the thesis that it is not in fact logically possible for me to exist without other entities existing along with me: for if I am a biological entity born by procreation, then my existence logically implies the existence of many other things. It is just that I cannot prove to the skeptic’s satisfaction (or my own) that that is what I am. I might come to the conclusion that I had no parents after all, but that will not make it the case that there are metaphysically possible worlds in which I had no parents—this is a matter of the facts about me, not my beliefs about the facts. Thus solipsism is an epistemic possibility but not a metaphysical possibility. It is just like the table being both necessarily made of wood (metaphysical) and also being possibly not made of wood (epistemic). Given that I arose from biological parents, I necessarily did; but it is an epistemic possibility that I did not so arise—I could be mistaken about this.

It would be nice to disprove solipsism, but it isn’t insignificant to show that it is not in fact logically possible, given the actual nature of persons. Persons are the kind of thing that implies the existence of other things (granted that we are right in our commonsense view of what a person is). In this they resemble many ordinary biological and physical entities, which also have non-contingent origins. We may feel ourselves to be removed from the world that surrounds us, as if we are self-standing individuals, ontologically autonomous—as if our essential nature could subsist alone in the world. But that is a mistake—we are more dependent on other things than we are prone to suppose. We are more enmeshed in what lies outside of us than we imagine. We suffer from illusions of transcendence and autonomy. We are not free-floating egos that owe no allegiance to anything else; we are essentially relational beings, our identity bound up in our history. We cannot be metaphysically detached from our origins, proximate and remote.

The same point applies to our mental states: they too cannot be separated from other things. Could this pain exist in complete isolation? That may seem like a logical possibility, but on reflection it is not: first, this pain’s identity depends on its bearer—it could not be this pain unless it had that bearer; and second, the identity of the bearer depends on the kind of history it has. So this pain could not exist without the generative sequence that gave rise to its bearer, a particular living organism; and that depends upon billions of years of history, going back to the big bang (and before). There is no possible world in which this pain exists and certain remote physical occurrences don’t exist. There are necessary links connecting present mental states with remote physical occurrences—from the joining of a particular sperm and egg, to the origin of mammals, to the production of chemical elements. My pains can’t exist in a world without me (you can’t have my pains), but I can’t exist in a world without my parents, and my parents can’t exist in a world without their remote primate ancestors, and these ancestors too had their own necessary origins. The pains that now occur on planet Earth (those pains) could not exist in a possible world without an elaborate biological and physical history that coincides with their actual history.

It is an interesting fact that we recognize these necessities. On the one hand, we have quite strongly Cartesian intuitions about the person and the mind, which is why dualism and solipsism appeal to us—these seem like logical possibilities. But on the other hand, we are willing to accept that the person and mind are tied to other entities with bonds of necessity—as with the necessity of personal origin. We recognize that the identity of a person cannot be radically detached from all extrinsic and bodily things—parents, sperms, and eggs. These are anti-Cartesian intuitions insofar as they dispute the self-subsistence of the self.[1] We are thus both Cartesian and anti-Cartesian in our modal instincts about persons. It is as if we know quite well that the self cannot be a self-subsistent non-material substance without logical ties to anything beyond itself, even though in certain moods we fall prey to such thoughts. We know that our essence implies the existence of other things—as demonstrated by the necessity of origin—and therefore solipsism is not in fact logically possible. We are modally ambivalent about self and mind, but not confused.


Colin McGinn

[1] Kripke mentions the anti-Cartesian consequences of the necessity of origin at the very end of Naming and Necessity (footnote 77, p. 155). What is surprising is that neither he nor anyone else seems to have noticed the consequences for solipsism (including myself, and I published an article on the necessity of origin in 1976). But it is really just a fairly obvious deduction from the necessity of origin (originally proposed by Sprigge in 1962, as Kripke notes).


The Mind-World Nexus





According to a dominant tradition, appearances are not “in” objects: that is, how an object appears is not (completely) determined by its objective properties but depends on the mode of sensibility employed to perceive it. The classic example is color: objects are not colored independently of how they seem but in virtue of the color sensations they elicit in perceivers. Thus we can conceive of variations of color without an intrinsic variation in the object but merely in virtue of its being differently perceived (Martians may see as green what we see as red). Color is then relative to a type of perceiver—and not just perceived color but actual color. An object is red if and only if it seems red to a suitable group of perceivers. We could put this point by saying that color is extrinsic to objects; it depends on what kind of perceiver exists in the object’s environment. If the environment contains one kind of perceiver (humans), then it is red; but if it contains another type of perceiver (Martians), then it is green. The color depends on context—on how the object is hooked up to experience. It would be wrong to think that color is internal to objects, as if objects could have determinate colors no matter how they are perceived. And much the same can be said of other sensible qualities associated with hearing, touch, smell, and taste. Perhaps it is true that not all apparent qualities are thus subjective, such as shape and size, but many are. As is often said, such qualities are projected by the mind, generated from within, and spread on objects. They depend on the “psychological environment” of the object (no perceivers, no qualities).

I have put the point by using terms drawn from another debate, namely the debate between internalism and externalism about the mind. It is claimed that what kind of mental state a person has depends on her environment and is not a result of purely internal factors.[1] We can vary a person’s environment while keeping her internal states the same (Twin Earth cases), and when we do so we find that mental states track the environment. So mental states are extrinsically fixed (in part anyway) and environmentally sensitive. They are not “in” the subject—not locally supervenient, not a matter of internal facts. They depend on the physical context, on how the person is hooked up to her environment. So there is an abstract analogy between certain views of color and certain views of mental states: both are regarded as relational and context-dependent. In the slogan, mental states are not “in the head”, but neither are sensible qualities “in the object”. The mental world is not independent of the physical world, and the physical world is not independent of the mental world. The subjective embeds the objective, and the objective embeds the subjective. Thus mind and world are mixed together, each incorporating the other, each flowing into the other. It is not that the whole being of the mind is sealed off from the environment, but neither is the whole being of the external world sealed off from the mind. The world contains projected properties, and the mind contains introjected properties. The mind shapes the world (in part), and the world shapes the mind (also in part). So there is no fundamental dualism here: the world is partly formed by the mind, while the mind is partly formed by the world. When you are aware of external objects you are aware of your own mental contribution to their appearance, but equally when you are aware of your mental states you are aware of the world’s contribution to them.
The mind absorbs and projects; the world also “absorbs” and “projects” (it “absorbs” color and “projects” mental content). Mind and world work together to produce a reality of colored objects and content-bearing mental states (though I don’t suppose there is any teleology coming from the world). In other terminology, the mind externalizes color and internalizes content—as we might say (metaphorically) that the world “internalizes” color and “externalizes” content. Mind and world are mirror images of each other, abstractly considered.

In fact, we shouldn’t really be speaking any longer of mind and world, as if there is an exclusive dichotomy, since each is woven into the other: there are traces of mind in the perceived world and there are traces of the world in the formations of the mind. What we have is a mind-world nexus: a joining, a merging, an overlapping. What we call “the world” is not purely objective in nature, and what we call “the mind” is not purely subjective in nature. The mind is (partly) constituted by the world, while the world is (partly) constituted by the mind: properties drawn from one side of this divide are found located on the other side. The world I perceive is partly internal to me (i.e. projected), and the mind I introspect is partly external to me (not “in my head”). From the point of view of objects (not that they have one), the colors (etc.) they wear are donated from the outside, while they provide their own service by constituting the mental content of subjects. Fancifully, we might view this arrangement as a quid pro quo: give me your colors and I will give you content in return. More soberly, the mind has two capacities: the capacity to absorb (internalize) and the capacity to project (externalize). It employs external properties to form its conceptual landscape, and it draws on its own resources to confer perceptible properties on things (it is useful to see objects as colored, etc.).

This is not to say that there is no mind-independent external world, or that there is no world-independent mental reality. On the contrary, I would strongly deny both assertions.[2] It is only to say that the lived world is infused with both—both the world of external objects and the world of inner perception and thought. This is quite consistent with allowing that there is another level of description under which objects have purely internal properties (call it physics) and a level of description under which minds also have purely internal properties (call it narrow psychology). We don’t in physics describe objects in terms of mind-dependent qualities, and we don’t in narrow psychology advert to environmentally fixed psychological kinds. Color doesn’t affect the motion of bodies and they can exist without it; similarly, the operations of mind can be characterized without reliance on wide content and minds can exist without such content. For the purposes of science, we could accept that the two worlds don’t overlap; and it would certainly be quite wrong to conclude that either idealism or materialism is true given the considerations advanced so far (not everything about the world is contributed by the mind and not everything about the mind is contributed by the world). Rather, the phenomenal world—both mental and external—the world we directly experience—that world is a mixture of mental and physical. The world I see is partly made up of projected properties, and the mind with which I am directly acquainted is up to its neck in externalities (e.g. my concept water). There are two levels of description here: one is inherently dualistic and the other is not. The one that is not concerns the world that we commonly occupy—the world that we sense, feel, talk about, and take for granted (which includes the mind). The other world is largely theoretical, which is not to say any less real. Think manifest image and scientific image.

I want to point out how remarkable the aforementioned capacities of mind actually are. When the mind internalizes an external property it converts that property from being a feature of external objects to being a vehicle of thought—and these are completely different roles. The property becomes bound up with a concept, and a concept has all sorts of distinctive properties—notably being a constituent of thoughts. This is a brand new career for the property and not one for which it had any prior training. Once it is a constituent of a proposition, it is required to participate in logical reasoning as well as mental representation of states of affairs. What has water (the H2O stuff) got to do with that, or being square or being arthritic? How does the mind perform this conversion operation—repurposing a property to start a new life as a concept? Externalists never answer this question—they just point to examples that (purport to) establish the doctrine. But it is really very puzzling: for how can a feature of the environment enter the mind in such a way as to shape its operations? What is this internalization that we speak of? How, for example, is the property of being square made to function as a constituent of perceptual experience and of thought? Not by making the mind square! It seems to undergo a metamorphosis, but the mechanism of this metamorphosis is obscure at best and impossible at worst. It might even make one give up on externalism completely. Likewise, we speak blithely of projection, but how is this supposed to work? It is not that the mind literally throws color at objects! Nor does it secrete color onto objects. No, the operation is purely mental—an operation of spreading, in Hume’s metaphor. This is both unhelpful and positively misleading. How does the mind externalize color, when it is not colored itself? How does it generate the qualities projected?
How does the projected quality always manage to hit its target, painting the leaves green and the roses red with such precision? Somehow the brain produces an impression of a single object that has both color and shape, exactly coordinated, but it is not supposed that shape is projected; so how does it manage to project one quality and introject the other? Projecting color seems like a magic power—quite unlike what a film projector does (here there is actual transmission of light waves). All we have are vague metaphors but no theoretical understanding. This doesn’t mean that the mind doesn’t perform the action in question; it only means that we don’t understand how. In other words, the projective and introjective powers of mind are a mystery. Yet they are fundamental to our entire view of things.

Look at how perception must operate second by second. At a given moment a state of affairs presents itself to the senses—say, a red bird 10 feet in front of your eyes. Your visual sense must internalize this scene, its various properties and arrangements. The scene must so imprint itself that a suitable percept is formed that can then function as input to behavior: this is a highly complex conversion process whose workings are still poorly understood. But at the same time the brain must carry out a projection operation that bestows color on the seen objects, which is rapidly updated over time. It must take in but it must also give out. These operations have to be coordinated and unified. The stimulus for color projection just consists of impinging light rays in which no color is to be found; on reception of these rays the brain must issue an instruction to retrieve a certain color impression, which must then be combined with various shape impressions. The input is not colored but the output is. How the brain does this is a mystery. We know that the cones of the retina must be involved, but how the nervous system contrives to generate and project color is unknown except in gross outline. The result is that the perceiver sees a colored object, where the existence of the color depends on the existence of perceivers to project it. So there is a continuous interplay between internalizing and externalizing operations, tightly intertwined. It is not that perception is all internalization, as a very naïve realism might suggest; but nor is it all projection, as idealism might maintain. The world is contributing to the mind and the mind is contributing to the world. In this two-way nexus we find the world as it is lived. We internalize the world (hence psychological externalism) and we externalize the mind (hence color subjectivism). Thus mind and world become intermingled.[3]


Colin M

[1] I won’t defend externalism here or even fuss over formulation; neither will I defend the subjectivist view of color. I am more concerned with their implications when conjoined. I defend externalism in Mental Content (1989) and subjectivism in The Subjective View (1983).

[2] I would count myself a staunch externalist and internalist about the mind, and a staunch subjectivist and objectivist about objects of perception. The key is to make distinctions between types of property or fact.

[3] Much the same can be said of language and meaning: semantic externalism brings the world into meaning and hence involves internalization, but the structure of language also shapes our view of reality, since our concepts are bound up with the structures of language (verbs and nouns, objects and properties). We need not accept the extreme view that our entire conception of reality is fixed by our language, which can vary from speaker to speaker, in order to recognize that language can function in a projective manner, imposing its internal architecture on our view of things. Language takes in but it also reaches out—it spreads itself onto perceived reality. Thus grammar can function like color perception. Then too, there is Freudian projection, in which traits of oneself are projected onto others, while the mind also introjects authority figures like parents. It seems that the mind is fond of the introjection-projection dialectic.


Al Franken

I just read the recent article in the New Yorker about Al Franken by Jane Mayer (July 29, 2019). It is a model of responsible journalism and lays the facts out admirably. He should clearly not have been forced to resign. It reminded me of when I met him in 2013 at George Soros’s wedding (though I had met him briefly a few years before at my gym in New York just after George W. Bush “won” the presidency for the second time). I told him my favorite Soros joke: “What is the difference between a Hungarian and a Romanian? They will both sell you their mother, but the Hungarian will deliver”. Franken said he thought it was a good joke. Now he has had his life destroyed.