Anthropocentric Physical Empiricism

Empiricism is the doctrine that all knowledge derives from something called “experience”. Alternatively, all (non-trivial) knowledge comes from the senses. Knowledge is ultimately reducible to “impressions” or “sense data” originating in the human sense organs. In some forms it takes on a metaphysical cast: all of reality derives from experience and is reducible to it. Thus, phenomenalism is empiricism with respect to the external world. Both knowledge and reality come down to sense data—epistemological and metaphysical empiricism. But it is seldom observed that this doctrine can be pushed further: all knowledge (and possibly all reality) derives from, or reduces to, the activities of the sense organs. Impressions cause ideas, sense data cause beliefs, but activity in the eyes and ears (etc.) causes impressions and sense data—the latter are “copies” of the former. The physical sense organs are the conduits through which information flows. Let’s be more specific: it is retinal stimulation and eardrum vibration that cause experiences—the activation of the rods and cones in the eye, the quivering of the tympanic membrane in the ear. These connect with deeper structures, such as the optic nerve and the cochlea, but they are where the body and the world make initial contact. So, really, the empiricist should be saying that all knowledge traces back to the retina and the eardrum (also the skin, nose, and mouth, but these are less important to knowledge than the other two senses, especially vision). Clearly, the retina and the eardrum are physical (bodily) and anthropocentric (species-specific): so, knowledge is held to reduce to bits of human anatomy and physiology, these being the causal origins of experience (whatever exactly that is). There is no human knowledge in which these bodily structures are not involved—none that does not pass through them. Likewise for metaphysical empiricism.

Already we are wondering if that can be right, given that knowledge is mental and the retina and eardrum are physical (biological). But putting that aside, there is this worry: how can such small and localized entities constitute all of human knowledge (and possibly all the world)? Does our knowledge of reality really reduce to irritations in the small patch of tissue known as the retina? You might reply that visual experience is a good deal more than the retina: it has representational content, is conscious, and figures in reasoning. That is what true empiricism takes to be the foundation and form of all knowledge—full-blooded human experience. Then does experience add something to the retinal stimulations? Of course, you retort—we can’t reduce experience to physical processes in the retina! You are quite right, but notice that you have helped yourself to the inner resources of the mind by insisting on the transcendence of experiences over retinas. And that isn’t true empiricism, since the mind is now not a blank tablet on this way of looking at things: it contributes to, enriches, the retinal input—not everything derives from the senses alone. Thus, we have diluted empiricism with rationalism or nativism by crediting the mind with properties not derivable from bare sensory input. The mind is not remotely a blank tablet under the current dispensation; it is a seething, amply furnished hothouse. So, the determined empiricist might dig in his heels at this point and assert that it is retinal input that is the source of all knowledge (Quine held such a view). The claim might seem preposterous given our customary conception of human knowledge, but the physical empiricist might junk this mentalistic conception and replace it with some scaled-back notion of cortical configurations. Would that be the end of his troubles?

No, because of the implied anthropocentrism. There won’t be anything universal about human knowledge as so viewed. The human eye and ear are quite species-specific (or genus-specific), far more so than our system of knowledge purports to be—we take that knowledge to be universal, objective, absolute. We don’t think our scientific knowledge, say, is relative to us, like our anatomy in general; that would undermine its claim to be knowledge. Rods and cones are hardly epistemological universals, woven into everything we know. Nor is our knowledge of reality confined to objects of a size that can stimulate the retina differentially, thus giving rise to perceptions of particular types of objects (medium-sized dry goods); for we know about other things too (e.g., atoms). Consider the following (admittedly extreme) thought experiment: there are microscopic men that are no bigger than atoms, and they have a thirst for knowledge. But their eyes respond only to objects at their scale, seeing only atoms and their parts (electrons, protons). They have no perception of macroscopic objects (as we understand that term), yet they wonder whether the particles they see might compose such large objects. The empiricist philosophers among them insist that all their knowledge is derived from, and reduces to, the evidence of their senses, anything else being problematic at best. It would clearly be wrong for them to claim that reality reduces to what they can see with their eyes, just as it is wrong for us to claim that reality reduces to what we can see with ours—the microscopic and the macroscopic, respectively. Their visual set-up is biased in its picture of the world, just as ours is (very low resolution). Similarly, a giant intelligence that sees only whole galaxies, never their constituent stars and planets, has a biased view of the universe—is, in fact, blind to huge swathes of it. The senses are highly selective and species-relative, providing biased pictures of reality. If knowledge seeks to correct this bias, as it clearly does, sense-based empiricism must be false: our knowledge attains a level of universality, and hence objectivity and absoluteness, that cannot be accommodated by what might be called “retinal empiricism”. We can only satisfy the demands of knowledge by moving away from the senses considered as items of human anatomy. Just so, reality itself possesses a degree of universality that is inconsistent with retinal empiricism. The human senses simply don’t have the scope and generality required to constitute human knowledge or reality as a whole. They are too circumscribed, species-specific, idiosyncratic, and variable to fix any knowledge worthy of the name, still less any reality worthy of the name. True, we can learn things by deploying our eyes, but that is a far cry from constituting the entire nature of human knowledge. We certainly cannot hope to define anything by means of language referring to the excitations of the retina, or the vibrations of the tympanic membrane. Such sensory activities cannot create human knowledge by themselves, nor can they suffice to construct an external world. When empiricism is pushed to the limit its limitations become apparent. The sense organs are not the sole organs of knowledge.[1]

[1] The original empiricists knew little about the workings of the sense organs and tended to adopt a first-person perspective on the nature of perception. Once we learn more and take an objective perspective on the senses, their inadequacy as a source of knowledge becomes apparent. They are just physical transducers of impinging energy; they are not mirrors held up to reality. All the talk of “impressions” and “sense data” is scientifically naive; a complex multi-stage process leads up to their formation. At what point does the empiricist want to fix his epistemic origins? Aren’t there many things that could qualify as the foundations of the whole enterprise? The lesson is that human knowledge is an active corrective to the senses, not a passive reflection (mirroring) of them.

Knife Throwing

I have been working on my knife throwing recently. It’s not a mainstream sport perhaps, but it has its own charm. I heard someone the other day describe it as “like darts but more macho”; so it is, but it is more than that. It is technically more difficult to stick the knife in the target than it is to stick the dart in the board, because the knife rotates; so, the skill element is more demanding. Plus, it is more dangerous, potentially lethal. To me it has three characteristics that appeal: aesthetic, athletic, and scientific. The knife flies through the air, spinning beautifully, then it pierces the target with a satisfying thud, as if by magic. Knives are quite beautiful in themselves, but their ability to stick in a target when thrown from a distance is a sight to see. The action of throwing a knife so as to achieve this end is athletically demanding and takes a good deal of practice (plus innate talent). There are a lot of clanging misses, rebounding blades, frustrating failures, but when you have the skill down it is like a well-executed tennis stroke. Scientifically, the trajectory of the knife follows strict laws that have to be respected, especially when gauging distance from the target: it rotates at its own pace. The sport is mathematically precise. It isn’t just macho but also artistic, skilled, and scientific. I recommend it.
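
To make the “mathematically precise” remark concrete, here is a toy sketch of the relevant arithmetic, assuming a simplified model in which the knife spins at a constant rate and travels at a constant speed (all the figures are invented; real throws vary with grip, release, and gravity):

```python
# Toy model of knife flight: constant spin rate, constant speed, straight-line
# flight (gravity and drag ignored). All numbers below are invented.

def rotations_in_flight(distance_m, speed_m_per_s, spin_rev_per_s):
    """Approximate number of full rotations the knife completes before impact."""
    flight_time = distance_m / speed_m_per_s
    return spin_rev_per_s * flight_time

def sticking_distances(speed_m_per_s, spin_rev_per_s, max_extra_rotations=4):
    """Distances at which the knife arrives point-first, assuming a blade grip,
    i.e., a half rotation plus some whole number of extra rotations."""
    return [(n + 0.5) * speed_m_per_s / spin_rev_per_s
            for n in range(max_extra_rotations)]

if __name__ == "__main__":
    # Hypothetical thrower: 6 m/s release speed, 2 revolutions per second of spin.
    for d in sticking_distances(speed_m_per_s=6.0, spin_rev_per_s=2.0):
        turns = rotations_in_flight(d, 6.0, 2.0)
        print(f"point-first arrival at roughly {d:.1f} m ({turns:.1f} rotations)")
```

On these made-up numbers the knife arrives point-first at roughly 1.5, 4.5, 7.5, and 10.5 meters; the skill lies in matching your distance to your spin.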

2024 Resolutions

I don’t have any, except one I can’t implement. I would like to ban all teaching of my work in American philosophy departments. Why should I let the products of my labor be used for free by people who refuse to employ me? Shouldn’t there be a law against this? Shouldn’t I have the right to prohibit this kind of exploitation? But it isn’t so: people can use my work in their teaching completely against my wishes, and be paid to do so! Why should people who have been cancelled have to accept that other people can use their work for profit? Suppose someone uses my writing on the mind-body problem in a couple of classes but would refuse to invite me to give a talk, or even have me on campus: is that morally acceptable? It is not acceptable to me—I don’t want someone like that teaching my work. So, please desist. Moreover, I don’t want philosophers in America to use my work for research purposes—discussing me in print, citing me, or otherwise benefitting from my labors. I wish they would stop, because I have no desire to be part of the conversation in this country. Read me if you must, but don’t take my name in vain. There may be exceptions to this rule, where I would be willing to relax my ban—I may grant special permission to teach and cite me—but in general I forbid people to make use of my work, all fifty years of it. I must be the first academic in history who doesn’t want his work discussed by a large section of the academic community (sic). It’s a pity I can’t bring the weight of the law upon violators.[1]

[1] I have no objection to the rest of the world making use of my work. I particularly resent being taught at the University of Miami, where I am banned from the campus, though there is probably no danger of that.

Does the Mind Age?

The mind has an age, but does it age? The body also has an age, and it does age. The person has an age, and that thing too ages. How old are these things? The body is probably the oldest, because its existence pre-dates the existence of both mind and person (these two are connected): the fetal body in the womb exists before the mind or person does. Clearly, all three are older than they are customarily taken to be, since someone’s age is conventionally reckoned from the date of birth—and our body, mind, and selfhood pre-date that (to the best of our knowledge). We really have three ages, none of them coinciding with our conventionally defined age. But that is not my question: my question is whether the mind can be said to age (verb), as the body and person can be said to age. The OED gives the following definition for “age” (verb): “Grow old, mature, show the effects of the passage of time; begin to appear older”. What are the usual effects of the passage of time on the human being (also animals generally)? They are very familiar: sagging and wrinkled skin, grey hair, muscular atrophy, stooped posture, slowness of movement, joint stiffness, bone fragility, proneness to fatigue, and the like. These are generally expressed in appearances—the person looks old. Ageing is the process of coming to appear old as thus understood. They are the defining symptoms of age. The person is declared old in virtue of these bodily changes; ageing is the accumulation of such changes over time.

But notice that they are all bodily: nothing about the mind is mentioned. And with good reason—because none of these apply to the mind (you don’t have a wrinkled mental skin, because the mind doesn’t have a skin). It would be a kind of category mistake to attribute such changes to the mind. Based on these criteria, then, the mind doesn’t age. But aren’t they the only criteria of ageing that we have? If so, the mind doesn’t age. The mind changes with time, it grows and matures, it may even decline: but it doesn’t age. Clothes and shoes age, as do houses and cars, as do animal bodies, but minds don’t undergo standard ageing processes: they don’t bear the tell-tale marks of the passage of time (“wear and tear”). Many things have an age but don’t age: regions of space, atoms, oceans, planets, doctrines; and some things neither have an age nor age: numbers, universals (according to Plato), time itself, modus ponens, to name a few. So, the mind might belong to one of these groups: minds don’t undergo the ageing process, though they evidently have an age. They change with time, but they don’t grow old, or appear to grow old.

You might reply that minds have their own type of ageing process, their own way of growing old, admittedly not the same as the body’s ageing process: forgetfulness, mental slowness, concentration problems, confusion. But these are not the effects of time (the rub of the world), and they are not confined to the old (people whose bodies have been around for a comparatively lengthy period). Some young people are forgetful, mentally slow, can’t concentrate, and get confused—that doesn’t mean their minds have prematurely aged, or that they are mentally old. What is true is that brains age, like the body as a whole, and this can affect mental functioning; but it doesn’t follow that the mind ages. The mind doesn’t appear old—whatever that might be in the case of the mind. Alcohol and disease can cause these kinds of psychological conditions, but they have nothing intrinsically to do with age. They may be correlated with age, but they aren’t examples of ageing—any more than being unemployed is a type of ageing, or having right-wing opinions.

And isn’t it simply a fact that one’s consciousness does not change its nature as we (our bodies) grow old? It feels the same as it used to when we were young: my visual experience, say, has undergone no degradation due to ageing—it isn’t fainter or slower or more wrinkled. It is the same as it always was. This is why people say, as they grow older, that they don’t internally grow older, as the body indubitably does. Really, the mind is not the kind of thing that ages; to suppose otherwise is a category mistake, based on viewing the mind through the lens of the body. The brain may shrink with age, so that it is appropriate to speak of an ageing brain, but the mind doesn’t shrink—that is just a category mistake. The mind changes during the period known as old age, as it changes during the period known as adolescence, but in neither case is it appropriate to speak of mental ageing—all we have is age-related change.[1] Hence the feeling that I have not aged (my mind, my soul, my consciousness, my-self)—though my body palpably has. Ageing is what you can see in the mirror, but you can’t see your mind in a mirror. Nor can you introspect and notice that your mind appears a lot older than it was a few years ago—though you might notice that you forget more than you used to (as you tend to think about different things now). Change with age is not ipso facto ageing. If someone becomes forgetful at the age of twenty, that doesn’t imply that he or she has aged—they might just have suffered an accident to the head. The concept of ageing is really defined by the various symptoms of age that I listed earlier and has no life outside of these symptoms, but the mind doesn’t exemplify such symptoms—therefore, it doesn’t age. Not that anyone seriously denies this; we simply don’t talk that way in the normal course of things. We don’t suppose that the mind literally ages in the same sense in which the body ages (and hence the person). But the metaphor might prove irresistible, given our tendency to model the mind on the body; it is therefore salutary to inoculate ourselves against such a tendency—not least because of the dangers of ageism as a prejudice. The whole idea of eternal life in disembodied form is premised on the agelessness of the soul, and to that extent is not conceptually incoherent, as the same idea about the body arguably is (how could a material animal body not age?). The concept of mind is the concept of an un-aging thing; the self of the Cogito knows no ageing process. Thus, the negative connotations of ageing don’t apply to the mind (perhaps they shouldn’t apply to the body); the mind remains spanking new, never scuffed and worn, flabby and bent, wrinkled and discolored. Some regrettable mental changes might be caused by the ageing of the body, but they are not thereby instances of ageing. The mind remains forever young.[2]

[1] Would anyone say that a marked increase of intelligence that reliably occurs in one’s seventies is an example of ageing? I think not.

[2] This sharp contrast between the mind and the body—one ageless, the other inevitably afflicted with ageing—is surely part of the human condition, as the existentialists conceived it. We are conscious of ourselves not just as destined to die but also as ageing steadily in that direction, while consciousness itself is free of such degradation. Thus, we are mixed beings, confusingly so. We embody the “contradiction” of both ageing and not ageing, with each vying for supremacy in our self-conception. Am I old or am I young? I am both. Or neither.

Experimental Atomic Psychology

Is there any evidence for the atomic hypothesis in psychology, however slender? It certainly doesn’t seem to us that our consciousness is composed of little psychic particles separated in space—the analogue of physical particles. But there is one area in which the hypothesis enjoys some phenomenological support—I mean, the experience we have when we close our eyes in the dark. Our visual field seems populated with tiny dots, light against a dark background, and these dots are visited by edges and blobs that move slowly around. The dots are shaped into primitive forms that seem to seek greater solidity and sharpness, like ghosts of the real world. In this experience, admittedly unusual, our visual field appears granular, corpuscular, pixelated, like a pointillist painting—an assembly of point-like particles. Might this be evidence of an underlying particulate structure operating at the level of the brain, one neuron per point perhaps? I want you to do an experiment: next time you are in bed at night close your eyes and focus on the tiny dots in your visual field, trying to get a sense of their sharpness and clarity. Now open your eyes and gaze into the gloom: vague forms will appear in the darkness, say the shape of an overhead fan. Can you still see the dots? If you are anything like me, you will still see them, but they are slightly less well-defined. The perceived shapes have somehow reduced the dots’ phenomenological salience without eliminating them altogether. If you close one eye, you find that the dots gain slightly in salience as the external shape is less clearly perceived. And if you close both eyes again, they come back in all their understated glory. The external form has co-opted what was only vestigially and virtually present in those edges and blobs. It would be possible to replicate this experiment more systematically: assemble a group of subjects and gather reports under varying conditions of illumination, beginning with pitch dark. At what point do the visual pixels disappear from consciousness completely? Is there much intersubjective variation? Can input from another sense interfere with the disappearance? I will venture a hypothesis: pixelation is inversely proportional to form—that is, the greater the perceived form the less the apparent pixelation, and vice versa. At the point of ordinary daytime illumination, pixelation is zero, except perhaps in abnormal conditions. When there is no form to see, as in the closed eyelid condition, the pixelation is at its height; but as forms enter the visual field, even in low illumination, the dots disappear from view. This is not to say they no longer exist, just that we have no awareness of them. Perhaps blind people have vivid pixelation and their pixels never disappear from view; perhaps hallucinogenic drugs can heighten their presence; perhaps certain diseases cause them to occur in normal vision—these are all empirical questions. We do know there is such a thing as visual snow syndrome, which sounds a lot like pathological pixelation. The idea, then, is that the brain employs two mechanisms in the production of visual percepts: one mechanism generates mental atoms or points; the other mechanism organizes these into visual forms, generally controlled by an external stimulus. The mental atoms are organized into wholes that represent shapes and other qualities. If psychological atomism is true, the same should hold of the other senses, though the atoms may be less accessible to introspection.
Taste, for example, operates by way of innumerable receptors that work to create (in conjunction with the brain) gustatory points, the totality of which constitutes (say) the taste of pineapple. The taste is not an ontological simple having neither parts nor structure; instead, it is a complex sensation made up of many elements (if you have ever partially lost your sense of taste, you will know what I mean). Then the general hypothesis is that all mental phenomena obey the same basic principles—atoms combining to produce complex mental states. Is there anything analogous to closed-eye vision for thought, i.e., unorganized dots of thought awaiting assembly into a coherent whole? Not that I know of, but it would be worth investigating whether certain kinds of degenerative brain disease produce such effects, e.g., Alzheimer’s disease. What about sleep and dreams in normal humans? Rational thought seems to fall to pieces there. Could drugs have this kind of effect? Surely it would be possible for the brain to cease to be able to put concepts together to form coherent thoughts. Couldn’t concepts themselves break down into parts that refuse to join with other concepts (neurons can clearly lose the ability to connect with other neurons)? Empirical, indeed experimental, work could be done to determine the answer to such questions. It need not be left up to philosophy. So, the atomic hypothesis could be subjected to empirical test, beginning with the bedroom experiment I suggested above. Returning to vision, we can confidently report that the retina and the brain have a pixelated structure—rods and cones in the retina, neurons in the brain—so it is on the cards that the mind itself duplicates this structure. We may not be conscious of it (what biological point would there be in that?) but it may yet be present in consciousness, hovering just below the surface. Eyelid vision hints at these subterranean depths, and it may be that they exist elsewhere too. It is true that we can’t detect the mental particles by the use of particle accelerators that bombard the mind with supercharged particles and reveal the hidden gems, but we have other ways of determining the fine structure of mental phenomena (such as introspecting our closed-eye visual field). The brain is thus (we conjecture) a synthesizer of basic mental atoms that together form mental life as we experience it. First it manufactures the elementary particles, then it assembles them into mental complexes. What the most elementary particles are is, as they say, a matter for further research. We already know the mind is combinatorial at more coarse-grained levels; atomic psychology simply extends this basic idea down to smaller scales. The search for the elusive mental quark is now on.[1]
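
Purely by way of illustration, here is a minimal sketch of how the reports from such an experiment might be analyzed; the rating scales and data are hypothetical, and the hypothesis predicts a strongly negative correlation between perceived form and apparent pixelation:

```python
# Illustrative analysis of the proposed experiment. Assumed (invented) setup:
# at each illumination level a subject rates, on a 0-10 scale, (a) how clearly
# external forms are visible and (b) how salient the closed-eye dots appear.
# The inverse-proportionality hypothesis predicts a strongly negative correlation.

from statistics import mean

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient (no external libraries)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings from one subject, ordered from pitch dark to daylight.
form_visibility = [0, 1, 3, 5, 7, 9, 10]
dot_salience = [9, 8, 6, 4, 2, 1, 0]

if __name__ == "__main__":
    r = pearson_r(form_visibility, dot_salience)
    print(f"correlation between form and pixelation: {r:.2f}")
    # A value near -1 would be consistent with the hypothesis.
```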

[1] I first discussed mental atomism in “Consciousness, Atomism, and the Ancient Greeks” in Consciousness and its Objects (2004). Research is proceeding slowly.

Atomic Psychology

Atomic physics has achieved the status of common sense. It is hard now to understand why it took so long to arrive at it. Despite the efforts of a couple of pre-Socratics, it took until the nineteenth and twentieth centuries for atomic physics to come into its own, driven by technology. People just didn’t have the idea of the minute constituents of matter, largely uniform, and constituting the whole of the physical universe. They didn’t envisage a hidden particulate (“corpuscular”) level of physical reality. Same with biology: biologists didn’t get the idea of the cell till fairly late in the game, let alone the molecular structure of the gene. Now biology is an atomic biology: bodies made of organs, organs made of sub-organs, sub-organs made of cells, cells made of nuclei, mitochondria, and other tiny structures, and so on down to biochemical molecules. Atomic physics and atomic biology are just part of the modern intellectual landscape, despite being invisible for centuries. But there is no such thing as atomic psychology. You would think that atomic psychology should be well developed by now, given our close proximity to the mind; but in fact, it doesn’t exist even as a twinkle in the eye of the aspiring mind scientist (I use that phrase because the term “psychologist” conjures up a rather limited picture of what a student of the mind might hope to produce). Why aren’t the atoms of mind staring us in the face, if there are such? Is it because the atomic conception simply doesn’t fit the mind? However, there are reasons to believe that some sort of atomic psychology must be true, even if it is not evident to us introspectively. First, it is hard to believe that mental states, as they are phenomenologically presented and commonsensically conceived, are ontologically primitive; it is hard to believe they have no further analysis—decomposition, part-whole hierarchy. What, do they just spring into being as they are, as indivisible wholes? Is there no micro to their macro? Clearly, some kind of breakdown does exist, because there are complex mental states that are composed of simpler mental states (e.g., regret is composed of belief and desire). Also, propositional attitudes have complex conceptual content—propositions are decomposable entities. So, why shouldn’t the breakdown go deeper? Second, there appear to be commonalities between mental states that suggest recurring constituents: for example, pains, though very various, all partake of a single phenomenological quality—painfulness. Indeed, aversive mental states generally share a phenomenological feature: fear, hunger, and sexual frustration all display a quality of disagreeableness that marks them as belonging together. So, is there a psychological atom corresponding to this trait—the “nasty-tron”? It is negatively charged, like the electron, and unlike the “fun-tron” that corresponds to pleasant feelings, which is positively charged (I speak metaphorically). Why not suppose that there is a deeper level of psychological atoms underlying anything we can detect introspectively? Why not go the whole hog and see where this idea takes us—to a panoply of finitely many psychic particles that exist in the mind-brain and combine together to yield what we know as the mind? These particles could constitute a kind of periodic table of psychic elements—the basic constituents of the psychological universe.
The situation is analogous to what obtains in linguistics (which is close to psychology): from sentences to phrases to words to morphemes to underlying constituents of morphemes.[1] In other words, the brain is a place where the atoms of mind and language live, hitherto evading inspection. The brain is made of biological cells (neurons—note the suffix), which are made of molecules and atoms, and it is also made of psychical cells that break down further into more elementary components. Then, we achieve unification with physics and biology: psychology emerges as also atomic in structure. There is macro psychology and micro psychology, big mind and little mind. As a bonus, we might find that micro psychology brings us closer to understanding the mind-brain link, because the psychic particles are more intimately tied to the physics of the brain (they might not look much like the macro mental states they constitute). Possibly these mental particles are to be found outside the brain too, so that we end up embracing a sort of panpsychism (God help us), but they might also be peculiar to the brain for some reason. In either case, the mind comes to have an atomic architecture: the gross resulting from the minuscule, the observable composed of the unobservable. It has its lines and points, its planes and solids. The mind scientist will want to trace these compositional relations, discover their laws, and formulate theories that impose order on multiplicity. When asked what his academic specialty is, he will say, “Atomic psychology”. Others in his department might reply, “Macro psychology” or “Cosmo-psychology” (aka “social psychology”). Maybe there will be a small sub-department devoted to interdisciplinary work between physicists and atomic psychologists called “Department of Micro Cognitive and Physical Science” (mainly consisting of string theorists and people called “bling theorists”—the ultimate particles of the mind being deemed “incandescent” in some way). The excitement will be palpable, yet dignified—after all, this is Deep Stuff. Seriously, though, we should not dismiss the idea of atomic psychology; and isn’t this what many psychologists have hankered after these many years—simple unanalyzable sensations, elementary conditioned reflexes, “bits” of information, units of psychic force, just noticeable differences, unconscious primitive drives, discrete bumps on the skull, IQ points, little homunculi in the head? Maybe one day atomic psychology will reach maturity just like atomic physics and atomic biology.[2]
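
The compositional picture suggested by the linguistics analogy can be put in a toy sketch; the labels and “atoms” below are invented for illustration, not proposed analyses:

```python
# Toy compositional hierarchy: larger units decompose into smaller ones down to
# leaves ("atoms"). The linguistic analysis is standard textbook fare; the
# psychological decomposition is purely made up, to show the shape of the idea.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Unit:
    label: str
    parts: List["Unit"] = field(default_factory=list)

    def atoms(self) -> List[str]:
        """Collect the leaves of the decomposition."""
        if not self.parts:
            return [self.label]
        return [atom for part in self.parts for atom in part.atoms()]

# Sentence -> words -> morphemes.
sentence = Unit("unhappiness returns", [
    Unit("unhappiness", [Unit("un-"), Unit("happy"), Unit("-ness")]),
    Unit("returns", [Unit("return"), Unit("-s")]),
])

# Speculative psychological parallel: regret -> belief + desire -> hypothetical atoms.
regret = Unit("regret", [
    Unit("belief that one acted badly", [Unit("belief-atom?")]),
    Unit("desire to have acted otherwise", [Unit("desire-atom?")]),
])

if __name__ == "__main__":
    print(sentence.atoms())   # ['un-', 'happy', '-ness', 'return', '-s']
    print(regret.atoms())     # ['belief-atom?', 'desire-atom?']
```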

[1] Much the same is true of what might be called “atomic logic”—analyzing propositions and their logical relations in terms of logical atoms and their molecular compounds; indeed, just this terminology already exists.

[2] We could also say that mental states must already be partly composed of physical atoms, since their causal powers rely on the actions of physical atoms ultimately. If causal role is intrinsic to mental states, and causal role requires physical implementation, then mental states must harbor a physical atomic structure somewhere in their total constitution. This would mean that pains, for example, have both a physical and a mental atomic nature—both sorts of atoms exist within them. They are not as simple as they seem. The brain is a kind of atomic hothouse, contrary to initial appearances. It is not an undifferentiated grey lump or a continuous flowing river.

The Cruel Gene

          Ditto this paper.


I can forgive the genes their selfishness; it is their cruelty I can’t forgive.[1] I understand their need to build survival machines to preserve themselves until they can replicate: they need the secure fortress of an animal body. But why did they have to build suffering survival machines? Hunger, thirst, pain, and fear—why did they have to make animal bodies feel these things? Granted the survival machines benefit from having a mind, but it was cruel of the genes to produce so much suffering in those minds. Couldn’t they have found another way? Are they sadists?

            The answer is that suffering is an excellent adaptation. Genes build animals that suffer because suffering keeps the animal on its toes. If the body is the genes’ bodyguard, it pays to make the bodyguard exceptionally careful. Since pain signals danger, and hunger and thirst signal deprivation, and fear motivates, the genes will build bodyguards that are rich in these traits. To build a bodyguard that suffered less would be to risk losing out to genes that build one that suffered more. This is why we find suffering so widely in the animal kingdom—because it is so useful from the genes’ point of view. It probably evolved separately many times, like the eye or the tail. Pain also comes in many varieties, again like the eye and the tail. There doesn’t seem to be any complex animal that lives without suffering, so the trait is clearly not dispensable. Surviving and suffering therefore go hand in hand.

            Most adaptations have a downside: a thick warm coat is a heavy coat, brains use up a lot of energy, and fur must be groomed. In fact, all adaptations have some downside, because all need maintenance, which calls upon resources. But pain and suffering have very little downside from the point of view of the genes. They don’t slow the animal down or make it lethargic or confused; on the contrary, they keep it alert and primed. The avoidance of pain is a powerful stimulus; hunger is a terrible state to be in. Animal behavior is organized around these aversive psychological states—and the genes know it. They are cruel to be kind—to themselves: suffering helps protect the survival machine from injury and death, so the animal lives longer with it than without it, with its cargo of genes. The reason the genes favor suffering is not from altruistic concern for the life of the animal, but merely because a longer life helps them replicate. The genes aim to reproduce themselves, and this requires a fortress that can withstand adversity; suffering is a means they have devised for keeping their fortress alive and functioning until reproduction can occur. Since there is so little downside to pain, from their perspective, they can afford to be lavish in its production. Thus the animal suffers acutely so that they may survive. They know nothing of pain themselves (or anything else), but natural selection has seen to it that pain is part of animal life. Nature has selected animals according to the adaptive power of their suffering. Genes for suffering therefore do well in the gene pool.

            Suffering has no meaning beyond this ruthless gene cruelty. It exists only because natural selection hit upon it as an adaptive trait. A mutation that produced a talent for pain, probably slight pain initially, turned out to have selective advantage, and then the adaptation developed over the generations, until spectacular amounts of pain became quite routine. As giraffes evolved long necks, and cheetahs evolved fast legs, so animals evolved high-intensity pain. As an adaptation, pain is very impressive, a clever and efficient way for genes to keep themselves in the gene pool; it is just that pain is very bad for the animal. Pain is an intrinsically bad thing for the sufferer—but it is very beneficial to the genes. But they don’t care how bad it is for the sufferer—they don’t give it a second thought. Pain is just one adaptation among many, so far as they are concerned. Maybe if there were another way to obtain the beneficial effects of suffering—another way to keep the survival machines on their toes—the genes would have favored that: but as things are, suffering is the optimal solution to a survival problem. The genes are unlikely to spare the animals that contain them by devising another method more compassionate but less efficient. Suffering just works too well, biologically. It wasn’t used for the first couple of billion years of life on earth, when only bacteria populated the planet; but once complex organisms evolved, pain soon followed. It probably came about as a result of an arms race, as one animal competed with another. Today plants survive and reproduce without suffering: it is not an element in their suite of adaptations. They are the lucky ones, the ones spared by the ruthlessly selfish genes. Mammals probably suffer the most, and maybe humans most of all, at least potentially. We suffer acutely because the genes decided they needed an especially finely tuned and sensitive survival machine to get themselves into future generations. The possibility of excruciating torture was the price they left us to pay. They don’t suffer as their human vehicle endures agonies; yet the reason the agonies exist is to benefit the genes. The genes are the architects of a system of suffering from which they are exempt.

            Animals are probably tuned better for suffering than for pleasure and happiness. It is true that the contented sensation of a full belly is a good motivator for an animal to eat, but then the animal has already eaten. Far more exigent is the demand that an empty belly prick the animal into action. The pleasure of grooming might motivate animals to groom, thus avoiding parasites and the like. Far more exigent is the need to avoid injuries from bites and battering. The system must be geared to avoidance, more so than to approach. Thus animals are better at suffering than at enjoyment—their suffering is sharper and more pointed. Some animals may be capable of suffering but not enjoyment, because their pattern of life makes that combination optimal. But no animal feels enjoyment in the absence of a capacity to suffer, not here on earth. Suffering is essential to life at a complex level, but enjoyment is optional.

            This is why I can’t forgive the genes: with callous indifference they have exploited the ability of animals to suffer, just so that they can march mindlessly on. They have no purpose, no feelings, just a brute power to replicate their molecular kind; and they do so by constructing bodies that are exquisite instruments of pain and suffering. If they were gods, they would be moral monsters. As it is, their cruelty is completely mindless: they have created a world that is terrible to behold, yet they know nothing of it. It just so happens that animal suffering follows from their prime directive—to reproduce themselves. Animal suffering is how the genes lever themselves into the future. It is one tactic, among others, for successful replication. Its moral status is of no concern to them. The genes are supremely cruel, but quite unknowingly so—like blind little devils.

Colin McGinn

[1] I indulge in rampant personification in this paper, knowing that some may bristle. I assure readers that it is possible to eliminate such talk without change of truth-value. Actually it is a helpfully vivid way to convey the sober truth.

Pain and Unintelligent Design

This is an earlier paper that I am re-posting because of the interest shown in “Evolution of Pain”.


Pain is a very widespread biological adaptation. Pain receptors are everywhere in the animal world. Evidently pain serves the purposes of the genes—it enables survival. It is not just a by-product or holdover; it is specifically functional. To a first approximation we can say that pain serves the purpose of avoiding danger: it signals danger and it shapes behavior so as to avoid it. It hurts of course, and hurting is not good for the organism’s feeling of wellbeing: but that hurt is beneficial to the organism because it serves to keep it from injury and death. So the story goes: evolution equips us with the necessary evil of pain the better to enable our survival. We hurt in order to live.  If we didn’t hurt, we would die. People born without pain receptors are exceptionally prone to injury. So nature is not so cruel after all. Animals feel pain for their own good.

            But why is pain quite so bad? Why does it hurt so much? Is the degree of pain we observe really necessary for pain to perform its function? Suppose we encountered alien creatures much like ourselves except that their pain threshold is much lower and their degree of pain much higher. If they stub their toe even slightly, the pain is excruciating (equivalent to us having our toe hit hard with a hammer); their headaches are epic bouts of suffering; a mere graze has them screaming in agony. True, all this pain encourages them to be especially careful not to be injured, and it certainly aids their survival, but it all seems a bit excessive. Wouldn’t a lesser amount of pain serve the purpose just as well? And note that their extremes of pain are quite debilitating: they can’t go about their daily business with so much pain all the time. If one of them stubs her toe, she is laid up for a week and confined to bed. Moreover, the pain tends to persist when the painful stimulus is removed: it hurts just as much after the graze has occurred. If these creatures were designed by some conscious being, we would say that the designer was an unintelligent designer. If the genes are the ones responsible, we would wonder what selective pressure could have allowed such extremes of pain. Their pain level is clearly surplus to requirements. But isn’t it much the same with us? I would be careful not to stub my toe even if I felt half the pain I feel now. The pain of a burn would make me avoid the flame even if it was much less fierce than it is now. And what precisely is the point of digestive pain or muscle pain? What do these things enable me to avoid? We get along quite well without pain receptors in the brain (or the hair, nails, and tooth enamel), so why not dispense with them for other organs too? Why does cancer cause so much pain? What good does that do? Why are we built to be susceptible to torture? Torture makes us do things against our wishes—it can be used coercively—so why build us to be susceptible to it? A warrior who can’t be tortured is a better warrior, surely. Why allow chronic pain that serves no discernible biological function? A more rational pain perception system would limit pain to those occasions on which it can serve its purpose of informing and avoiding, without overdoing it in the way it seems to. In a perfect world there would be no pain at all, just a perceptual system that alerts us non-painfully to danger; but granted that pain is a more effective deterrent, why not limit it to the real necessities? The negative side effects of severe pain surely outweigh its benefits. It seems like a case of unintelligent design.

            Yet pain evidently has a long and distinguished evolutionary history. It has been tried and tested over countless generations in millions of species. There is every reason to believe that pain receptors are as precisely calibrated as visual receptors. Just as the eye independently evolved in several lineages, so we can suppose that pain did (“convergent evolution”). It isn’t that pain only recently evolved in a single species and hasn’t yet worked out the kinks in its design (cf. bipedalism); pain is as old as flesh and bone. Plants don’t feel pain, but almost everything else does, above a certain level of biological complexity. There are no pain-free mammals. Can it be that mammalian pain is a kind of colossal biological blunder entailing much more suffering than is necessary for it to perform its function? So we have a puzzle—the puzzle of pain. On the one hand, the general level of pain seems excessive, with non-functional side effects; on the other hand, it is hard to believe that evolution would tolerate something so pointless. After all, pain uses energy, and evolution is miserly about energy. We can suppose that some organisms experience less pain than others (humans seem especially prone to it)—invertebrates less than vertebrates, say—so why not make all organisms function with a lower propensity for pain? Obviously, organisms can survive quite well without being quite so exquisitely sensitive to pain, so why not raise the threshold and reduce the intensity?

            Compare pleasure. Pleasure, like pain, is motivational, prompting organisms to engage not avoid. Food and sex are the obvious examples (defecation too, according to Freud). But the extremes of pleasure are never so intense as the extremes of pain: pain is really motivational, while pleasure can be taken or left. No one would rather die than forfeit an orgasm, but pain can make you want to die. Why the asymmetry? Pleasure motivates effectively enough without going sky-high, while excruciating pain is always moments away. Why not regulate pain to match pleasure? There is no need to make eating berries sheer ecstasy in order to get animals to eat berries, so why make being burnt sheer agony in order to get animals to avoid being burnt? Our pleasure system seems designed sensibly, moderately, non-hyperbolically, while our pain system goes way over the top. And yet that would make it biologically anomalous, a kind of freak accident. It’s like having grotesquely enlarged eyes when smaller eyes will do. Pleasure is a good thing biologically, but there is no need to overdo it; pain is also a good thing biologically (not otherwise), but there is no need to overdo it.

            I think this is a genuine puzzle with no obvious solution. How do we reconcile the efficiency and parsimony of evolution with the apparent extravagance of pain, as it currently exists? However, I can think of a possible resolution of the puzzle, which finds in pain a unique biological function, or one that is uniquely imperative. By way of analogy consider the following imaginary scenario. The local children have a predilection for playing over by the railway tracks, which feature a live electrical line guaranteed to cause death in anyone who touches it. There have been a number of fatalities recently and the parents are up in arms. There seems no way to prevent the children from straying over there—being grounded or conventionally punished is not enough of a deterrent. The no-nonsense headmaster of the local school comes up with an extreme idea: any child caught in the vicinity of the railway tracks will be given twenty lashes! This is certainly cruel and unusual punishment, but the dangers it is meant to deter are so extreme that the community decides it is the only way to save the children’s lives. In fact, several children, perhaps skeptical of the headmaster’s threats, have already received this extreme punishment, and as a result they sure as hell aren’t going over to the railway tracks any time soon. An outsider unfamiliar with the situation might suspect a sadistic headmaster and hysterical parents, but in fact this is the only way to prevent fatalities, as experience has shown. Someone might object: “Surely twenty lashes is too much! What about reducing it to ten or even five?” The answer given is that this is just too risky, given the very real dangers faced by the children; in fact, twenty lashes is the minimum that will ensure the desired result (child psychologists have studied it, etc.). Here we might reasonably conclude that the apparently excessive punishment is justified given the facts of the case—death by electrocution versus twenty lashes. The attractions of the railway tracks are simply that strong! We might compare it to taking out an insurance policy: if the results of a catastrophic storm are severe enough, we may be willing to part with a lot of money to purchase an insurance policy. It may seem irrational to purchase the policy given its steep price and the improbability of a severe storm, but actually it makes sense because of the seriousness of the storm if it happens. Now suppose that the consequences of injury for an organism are severe indeed—maiming followed by certain death. There are no doctors to patch you up, just brutal nature to bring you down. A broken forelimb can and will result in certain death. It is then imperative to avoid breaking that forelimb, so if you feel it under dangerous stress you had better relieve that stress immediately. Just in case the animal doesn’t get the message, the genes have taken out an insurance policy: make the pain so severe that the animal will always avoid the threatening stimulus. Strictly speaking, the severe pain is unnecessary to ensure the desired outcome, but, just in case, the genes ramp it up to excruciating levels. This is like the homeowner who thinks he should buy the policy just in case there is a storm; otherwise he might be ruined. Similarly, the genes take no chances and deliver a jolt of pain guaranteed to get the animal’s attention. It isn’t like the case of pleasure because not getting some particular pleasure will not automatically result in death, but being wounded generally will.
That is, if injury and death are tightly correlated, it makes sense to install pain receptors that operate to the max. No lazily leaving your hand in the flame as you snooze and suffering only mild discomfort: rather, deliver a jolt of pain guaranteed to make you withdraw your hand ASAP. Call this the insurance policy theory of pain: don’t take any chances where bodily injury is concerned—ensure you are covered in case of catastrophe.[1] If it hurts like hell, so be it—better to groan than to die. So the underlying reason for the excessiveness of pain is that biological entities are very prone to death from injury, even slight injury. If you could die from a mere graze, your genes would see to it that a graze really stings, so that you avoid grazes at all costs. Death spells non-survival for the genes, so they had better do everything in their power to keep their host organism from dying on them. The result is organisms that feel pain easily and intensely. If it turned out that those alien organisms I mentioned that suffer extreme levels of pain were also very prone to death from minor injury, we would begin to understand why things hurt so bad for them. In our own case, according to the insurance policy theory, evolution has designed our pain perception system to carefully track our risks in a perilous world. It isn’t just poor design and mindless stupidity that have made us so susceptible to pain in extreme forms; this is just the optimum way to keep us alive as bearers of those precious genes (in their eyes anyway). We inherit our pain receptors from our ancestors, and they lived in a far more dangerous world, in which even minor injuries could have fatal consequences. Those catastrophic storms came more often then.
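
The insurance logic can be made vivid with a toy calculation; the numbers are entirely invented and the units are arbitrary “badness” points, but they show how a small certain cost can be rational when it removes even a modest chance of catastrophe:

```python
# Toy expected-cost comparison. Units are arbitrary "badness" points and all
# numbers are invented: severe pain = 10, death = 1000, and a 5% chance that an
# unheeded injury proves fatal.

def expected_badness(certain_cost, catastrophe_cost, p_catastrophe):
    """Expected badness: a certain cost now plus a chance of catastrophe later."""
    return certain_cost + p_catastrophe * catastrophe_cost

if __name__ == "__main__":
    # Severe pain reliably makes the animal avoid the injury: pay 10, avoid the risk.
    with_severe_pain = expected_badness(certain_cost=10, catastrophe_cost=1000,
                                        p_catastrophe=0.0)
    # Mild or no pain: no cost now, but a 5% chance of the fatal outcome.
    with_mild_pain = expected_badness(certain_cost=0, catastrophe_cost=1000,
                                      p_catastrophe=0.05)
    print(f"expected badness with severe pain: {with_severe_pain:.0f}")  # 10
    print(f"expected badness with mild pain:   {with_mild_pain:.0f}")    # 50
    # On these invented numbers the apparently excessive pain is the cheaper policy.
```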

            This puts the extremes of romantic suffering in a new light. It is understandable from a biological point of view why romantic rejection would feel bad, but why so bad? Why, in some cases, does it lead to suicide? Why is romantic suffering so uniquely awful?[2] After all, there are other people out there who could serve as the vehicle of your genes—plenty more fish in the sea, etc. The reason is that we must be hyper-motivated in the case of romantic love because that’s the only way the genes can perpetuate themselves. Sexual attraction must be extreme, and that means that the pain of sexual rejection must be extreme too. Persistence is of the essence. If people felt pretty indifferent about it, it wouldn’t get done; and where would the genes be then? They would be stuck in a body without any means of escape into future generations. Therefore they ensure that the penalty for sexual and romantic rejection is lots of emotional pain; that way people will try to avoid it. It is the same with separation: the reason lovers find separation so painful is that the genes have built them to stay together during the time of maximum reproductive potential. It may seem excessive—it is excessive—but it works as an insurance policy against reproductive failure. People don’t need to suffer that much from romantic rejection and separation, but making them suffer as they do is insurance against the catastrophe of non-reproduction. It is crucial biologically for reproduction to occur, so the genes make sure that whatever interferes with that causes a lot of suffering. This is why there is a great deal of pleasure in love, but also a great deal of pain—more than seems strictly necessary to get the job done. The pain involved in the loss of children is similar: it acts as a deterrent to neglecting one’s children and thus terminating the genetic line. Emotional excess functions as an insurance policy taken out on a biologically crucial event. Extreme pain is thus not so much maladaptive as hyper-adaptive: it works to ensure that appropriate steps are taken when the going gets tough, no matter how awful for the sufferer. It may be, then, that the amount of pain an animal suffers is precisely the right amount all things considered, even though it seems surplus to requirements (and nasty in itself). So at least the insurance policy theory maintains, and it must be admitted that accusing evolution of gratuitous pain production would be uncharitable to evolution.

            To the sufferer pain seems excessive, a gratuitous infliction, far beyond what is necessary to promote survival; but from the point of view of the genes it is simply an effective way to optimize performance in the game of survival. It may hurt us a lot, but it does them a favor. It keeps us on our toes. Still, it is puzzling that it hurts quite as much as it does.[3]

Colin McGinn

[1] We can compare the insurance policy theory of excessive pain to the arms race theory of excessive biological weaponry: they may seem pointless and counterproductive but they result from the inner logic of evolution as a mindless process driven by gene wars. Biological exaggeration can occur when the genes are fighting for survival and are not too concerned about the welfare of their hosts.

[2] Romeo and Juliet are the obvious example, but the case of Marianne Dashwood in Jane Austen’s Sense and Sensibility is a study in romantic suffering—so extreme, so pointless.

[3] In this paper I simply assume the gene-centered view of evolution and biology, with ample use of associated metaphor. I intend no biological reductionism, just biological realism.
