Biological Philosophy of Language

Linguistics has grown accustomed to viewing human language as a biological phenomenon. This view stands opposed to two other views: supernaturalism and cultural determination. Ancient thought conceived of language as a gift from God, closely adjoined to the immaterial soul: this accounted for its origin, its seemingly miraculous nature, and its uniqueness to the human species (we are God’s chosen ones). Recent thought instead insisted that language is a cultural product, a human invention, an artifact: this too accounts for its origin, nature, and uniqueness to humans (only humans have this kind of creative power). Both views deny that language is a species-specific adaptation driven by natural selection and arising in the individual by a process of organic maturation—rather like other natural organs. The “biological turn” in linguistics maintains that language is not supernatural or cultural but genetically based, largely innate, founded in physiology, modular, a product of blind evolution, organically structured, developmentally involuntary, invariant across the human species, and part of our natural history. Biological naturalism is the right way to think about language.[1] No one would doubt this in the case of the “languages” (systems of communication) of other species like bees, birds, whales, and dolphins; human language is also part of our biological heritage and our phenotype (as well as our genotype). But this perspective, though now standard in linguistics, is not shared by contemporary philosophy of language: we don’t see these questions framed as questions of biology. Not that existing philosophy of language overtly adopts a supernatural or cultural conception of the nature of language in preference to a biological conception; rather, it is studiously neutral on the issue. The question I want to address is whether the received debates in philosophy of language can be recast as questions of biology, in line with the prevailing biological perspective in linguistics. And I shall suggest that they can, illuminatingly so. I thus propose that philosophy of language take a biological turn and recognize that it is dealing with questions of natural biology (if the pleonasm may be excused). This will require no excision of questions but merely a reformulation of them. Philosophy of language is already steeped in biology.

Let’s start with something relatively innocuous: the productivity of language. Instead of seeing this as a reflection of God’s infinite nature or the creative power of human invention, we see it as a natural fact about the structure of a certain biological trait, analogous to the structure of the eye or the musculature. Finitely many lexical units combine to generate a potential infinity of possible sentences—that is just a genetically encoded fact about the human brain. It arose by some sort of mutation and it develops during the course of individual maturation according to a predetermined schedule. It is humanly universal and invariant just like human anatomy and physiology. It should not be viewed as a purely formal or mathematical structure but as an organic part of the human animal. So when the philosopher of language remarks on the ability of speakers to construct infinitely many sentences from a finite set of words by recursive procedures, he or she is recording a biological fact about the human species—just like bipedal posture or locomotion or copulation or digestion. Nothing prevents us from saying that the human phenotype includes an organ capable of unbounded productivity—the language faculty. It isn’t supernatural and it isn’t cultural (whatever exactly that means). It is, we might say, animal.

But what about theories of meaning—are they also biological theories in disguise? The biological naturalist says yes: truth conditions, for example, are a biological trait of certain biological entities. The entities are sentences (strings of mental representations—“words”) and their having truth conditions is a biological fact about them. Truth conditions evolved in the not too distant past, they mature in the individual’s brain, and they perform a biological function. Truth conditions constitute meaning (according to the theory), and having meaning is a trait of certain external actions and internal symbols. Meanings are as organic as eyeballs. So a theory of meaning is a theory of a certain biological phenomenon—a biological theory. It says that the trait of meaning is the trait of having truth conditions. Suppose we base the theory on Tarski’s theory of truth: then Tarski’s definition of truth for formalized languages is really a recursive theory of an organic structure. It is mathematical biology. Sentences are part of biology and their having truth conditions is too; so a theory of truth is tacitly an exercise in biological description. No one would doubt this for a theory of bee language or whale language, because there is no resistance to the idea that these are biological traits—a theory of truth conditions here would naturally be interpreted as a theory of a biological phenomenon. Bee dances don’t have their truth conditions in virtue of the bee god or bee culture, but in virtue of genetically based hardwired facts of bee physiology. It isn’t that bees collectively decide to award their dances with meanings—and neither do human infants decide such things. Sentences have truth conditions in virtue of biological facts about their users, whether bee or human. Semantics is biology.

Consider Davidson’s project of translating sentences of natural language into sentences of predicate calculus and then applying Tarski’s theory to them. Suppose that, contrary to fact, there existed a species that spoke only a language with the structure of predicate calculus; and suppose too that we evolved from this species. It would then be plausible to suppose that our language faculty descended from theirs with certain enrichments and ornamentations. Then Davidson could claim that their language gives the logical form of our language and that it can in principle translate the entirety of our language. This would be a straightforward biological theory, claiming that one evolved trait is equivalent (more or less) to another evolved trait. The “deep structure” of one trait is manifest in another trait. Likewise, if we view a formalized language as really a fragment of our natural language, then a claim like Davidson’s is just the claim that one trait of ours is semantically equivalent to another trait—that is, its semantic character is exhausted by the formalized fragment, the rest being merely stylistic flourish. For example, the biological adaptation of adverbs is nothing more than the surface appearance of the underlying trait of predicates combining with quantification over events. Thus we convert the Davidsonian program into a biological enterprise—to describe one trait in terms of other traits. This is the analogue of claiming that the anatomy of the hand is really the anatomy of the foot, because hands evolved from feet—just as our language evolved from the more “primitive” language of our predicate-calculus-speaking ancestors in my imaginary example. Our language organ is both meaningful and combinatorial, and Davidson has a theory about what these traits consist in: he is a kind of anatomist of the language faculty.

Then what is Dummett up to? He is contending that the trait of meaning is not actually the trait of having truth conditions but rather the trait of having verification conditions.[2] We don’t have the former trait because it has no functional utility so far as communication is concerned (it can’t be “manifested”). So Dummett is claiming that a better biological theory is provided by verification conditions. This is a bit like claiming that the function of the eye is not to register distal conditions but to respond to more proximate facts about the perceiver, these being of greater concern to the organism (cf. sense-datum theory and phenomenalism); or that the function of feathers is not flight but thermal regulation (as apparently it was for dinosaurs). Dummett is a kind of skeptic about orthodox descriptions of biological traits. He might be compared to someone who claims that there are no traits for aiding species or group survival but only traits for aiding individual or gene survival (“the selfish meaning”). Quine is in much the same camp: he claims that no traits have determinate meaning, whether truth conditions or verification conditions. The alleged trait of meaning is like the ill-starred entelechy—a piece of outdated mythology. A proper science of organisms will dispense with such airy-fairy nonsense and stick to physical inputs and outputs. For Quine, meaning is bad biology. Nor would Quine be very sanguine about the notion of biological function: for what is to stop us from saying that the function of the wolves’ jaws is to catch undetached rabbit parts? Our usual assignments of function are far too specific to be justified by the physical facts, so we should dispense with them altogether. We need desert landscape biology: no vital spirits, no meanings, and no functions, just bodies being stimulated and responding to stimulation—Pavlovian (Skinnerian) biology. Quine is really a biological eliminativist.

Where does Wittgenstein fit in? He emerges as a biological pluralist and expansionist. He denies that morphology is everything; he prefers to emphasize the biological deed. He forthrightly asserts that language is part of our “natural history” (not much discussion of genetics though).[3] The Tractatus employed an austere biology of pictures and propositions, while the Investigations plumps for a great variety of sentences and words as making up human linguistic life. Wittgenstein is like a zoologist who once thought there were only mammals in the world and now discovers that there are many types of species, very different from each other. He also decides it is better to describe them accurately than try to force them into predetermined forms. His landscape is profuse and open-ended, like a Brazilian jungle. He is resolutely naturalistic in the sense of rejecting all supernatural (“sublime”) conceptions of language. What he would have made of Chomsky I don’t know, but he would surely have applauded Chomsky’s focus on the natural facts and phases of a child’s use of language. His anti-intellectualism about meaning (and the mind generally) is certainly congenial to the biological point of view.

What about Frege? Frege is the D’Arcy Thompson of philosophical linguistics, seeking the mathematical laws of the anatomy of thought. He discerns very general structures of a binary nature (sense and reference, object and concept, function and argument) and finds them repeated everywhere, like the recurrent body-plans of the anatomical biologist. The human skeleton resembles the skeletons of other mammals and indeed of fish (from which all are derived), and Frege finds the same abstract structure in the most diverse of sentences (function and argument is everywhere, like the spinal column or cells). But these abstract structures are not antithetical to biology, just its most general features. When a laryngeal event occurs it carries with it a cargo of semantic apparatus that confers meaning on it, intricate and layered. The speech organs are impregnated with sense and reference as a matter of their very biology, not bestowed by God or human stipulation (the underlying thoughts are certainly not imbued with sense and reference as a matter of culture). Thus it is easy to transpose Frege’s logical system into a biological key—whether Frege himself would approve or not. Again, we should think of the developing infant acquiring a spoken language: his words have sense and reference as a matter of course, not as a matter of cultural instigation—this is why language precedes culture for the child. Acquiring language is no more cultural than puberty is cultural (and I have never heard of an ancient theory to the effect that puberty is a gift from God). Meaning comes with the territory, and the territory is thoroughly biological.

Ordinary language philosophy? Why, it’s just ecologically realistic biological theorizing, instead of rigid attachment to over-simple paradigms. It’s rich linguistic ethology instead of desiccated linguistic anatomy. It’s looking at how the human animal actually behaves in the wild instead of clinically dissecting it on the laboratory table. Austin, Grice, Strawson—all theorists of in situ linguistic behavior. Nothing in their work negates the idea of an innate language faculty expressed in acts of speech and subject to biological constraints. When Austin analyzes a speech act into its locutionary meaning and its illocutionary force he is dissecting an act with a biological substructure, because the language faculty that permits the act is structured in that way. Words are strung together according to biologically determined rules, and the same is true of different types of illocutionary force. Zoology took an ethological turn when scientists stopped examining rats and pigeons in the laboratory and turned their attention to animal behavior in its natural setting; ordinary language philosophy did much the same thing (at much the same time). This led to considerable theoretical enrichment in both cases as the biological perspective widened. One can imagine aliens visiting earth and making an ethological study of human linguistic behavior, combining it with organic studies of speech physiology. They would add this to their other investigations of bee and whale linguistic behavior. All of it would come under the heading of earth biology.

Of course, biologically based language activity interacts with cultural formations, as with speech acts performed within socially constructed institutions (e.g. the marriage ceremony). But the same thing is true of other biological organs—say, the hands: that doesn’t undermine the thesis that basic biological adaptations are in play. It is not being claimed that everything about language and its use is biologically based. But the traits of language of interest to philosophers of language tend to be of such generality that they are bound to be biological in nature. For example, the role of intention in creating speaker meaning, as described by Grice, introduces a clearly biological trait of the organism—purposive goal-directed action. We don’t have intentions as a result of divine intervention or cultural invention; intention is in the genes. Intention grows in the infant along with motor skills and doesn’t depend upon active teaching from adults. Intentions will play a role in cultural activities, but they are not themselves products of culture. The same is true of consciousness, perception, memory, and so on—all biological phenomena.

According to Chomsky, a grammar for a natural language simply is a description of the biologically given human language faculty. Following that model, philosophical theories of meaning have the same status: they are attempted descriptions of a specific biological trait. Semantic properties are as much biological properties as respiration and reproduction. Philosophy of language is thus a branch of biology. The standard theories are easily construed this way. Semantics follows syntax and phonetics in making the biological turn. Fortunately, existing philosophy of language can incorporate this insight.[4]

 

Colin McGinn

[1]For an authoritative study see Eric Lenneberg, Biological Foundations of Language (1967) and the many works of Noam Chomsky. If we ask who is the Darwin of language studies, the consensus seems to be Wilhelm von Humboldt (1767-1835).

[2]The positivists may be construed as claiming that no sentence can have the trait of meaning without having the trait of verifiability. One trait is necessary for the other. This is like claiming that no organ can circulate the blood without being a pump or that no organ can be the organ of speech without expelling air. Thus a metaphysical sentence can’t be meaningful because it lacks the necessary trait of verifiability. No evolutionary process could produce a language faculty that included sentences that mean without being verifiable. Put that way, it looks like a pretty implausible doctrine—why couldn’t there be a mutation that produced meaningful sentences that exceed our powers of verification? Meaning is one thing, our powers of verification another.

[3]“Commanding, questioning, recounting, chatting, are as much a part of our natural history as walking, eating, drinking, playing.” Philosophical Investigations, section 25.

[4]It would be different if existing philosophy of language tacitly presupposed some sort of divine dispensation theory, or a brand of extreme cultural determination; but as things stand we can preserve it by recasting its questions as biological in nature. There is nothing reductionist about this; it is simply taxonomically correct.


Innate Blank Slates

Even the most hardline nativist will agree that not everything that passes before the mind, or exists in it, is innately fixed. In particular, memory contains contents that derive from experience. Memory may be defined as the ability to learn, and animals with memory absorb information from the environment that was not in them at birth. To this extent the mind is a blank slate—a receptacle waiting to be filled by post-natal experience. There may be (there is) a lot that is innate, but not everything is innate: fresh input reaches the mind to be added to its original resources. Does this mean that nativism has to concede a local victory to empiricism? Is it that the mind is partly innately structured and partly formless? Is it well stocked in some departments and entirely empty in others? Is the library of the mind a collection of written texts existing alongside an empty volume waiting for the world to inscribe messages on it? Is the mind partly nativist and partly empiricist? I think this is the wrong way to look at the matter; in fact, the so-called blank slate (memory) is really just another iteration of the nativist doctrine. The mind is genetically determined all the way down—including the blank slate. The blank slate is just another innately fixed biological component of the mind.[1]

It might not have been so. Consider this theory, conceivably true in some possible world: the blank slate is acquired by experiencing blank slates in the world and copying them inwardly. You observe empty spaces or sheets of white paper or wax tablets and this creates in you a blank mental canvas on which experience can subsequently write. You “abstract” inner blankness from perceptible blank and formless things, and this forms the basis of memory. Thus the blank slate is an acquired characteristic. If this is the entirety of the mind, then the whole thing is acquired. But that is clearly a completely wacky theory, held by exactly no one. For one thing, wouldn’t the mind need an antecedent blank slate in order to acquire one by means of observing external blank slates? And how on earth could the mind “abstract” blankness and then internalize it—wouldn’t that be just an idea of blank slates? No, the reasonable view—and the one held by all empiricists—is that the mind is innately blank. That is its character at birth, its genetically determined nature, its intrinsic essence. So the blank slate is itself an innate component of the mind existing alongside other innate components. Its distinguishing characteristic is its flexibility: it is a receptive epistemic faculty—it accepts novelty and change. It is modified by experience instead of being oblivious to experience. We might better call it “the receptive slate” in order to emphasize its function and mode of operation. It is genetically fixed and yet malleable, inborn and also plastic. In this respect it is like the perceptual faculties: we are not born already seeing all the things we will ever see (!), yet vision is an innately fixed faculty. The point of vision (and the other senses) is to permit variation in what is seen, i.e. sensitivity to environmental contingencies; but that is quite consistent with a strongly nativist conception of vision. The form is innate and the content is acquired, as we might put it. The perceptual categories might all be innate, but the particular state of affairs perceived is a result of environmental influence. Similarly, memory is innate even though what is remembered results from the impact of the world. There are clearly advantages to such flexibility and receptivity, but in no way does this cast doubt on the innateness of the faculty in question. Even the humble earthworm can sense and learn, but its ability to do so depends on its innate constitution.

Why should we classify the blank slate existing inside every learning organism alongside its other innate characteristics? Why should we deem it biological? Why is it not, say, “cultural”? Why resist a form of dualism about what traits an organism possesses, the blank traits and the non-blank traits? There are several reasons. First, it is genetically determined: there is a gene (or complex of genes) for blankness (flexibility, receptivity). Blankness is certainly not the result of an absence of genes! The genes construct an epistemic organ whose specialization is openness to experience, as opposed to one that already knows all the answers. It is as if the genes constructed an amorphous bodily organ that could be modified by experience—say, a limb that could be molded into a different appendage depending upon environmental demands (a leg, a fin, a wing). That could be a useful adaptation in certain conditions, as a modifiable memory is a useful adaptation. An animal with a gene for blankness will enjoy a selective advantage. Some creatures lack such genes, harboring merely a set of instinctive reflexes, but others contain them, the better to survive in a changing world. So the blank slate is as genetically engineered as any physical or mental organ (say, the human language faculty).[2] Animals are genetically designed that way. They are born to be blank (in part).

Second, the blank slate is not as blank as all that: it is not a featureless nothing, devoid of all inner structure. Consider paper: paper is a highly structured and carefully designed piece of technology, not just a mere absence. It must absorb and retain ink, not blurring or running. It stacks and folds. It can be bound into volumes. It is durable. It took centuries to invent and perfect paper. Paper has a certain intrinsic constitution precisely designed to accept ink. It has as much of an inner nature as the ink that adorns it. And memory must be very similar: memory too took a long time to evolve, and it must be possessed of an inner nature that makes its feats of retention possible. The genes for memory are entrusted with a difficult and intricate job: to construct a system that absorbs and retains information, while letting it degrade if it is no longer useful. Compare the hard drive of a computer—also an intricate and inventive piece of technology. These are not just empty boxes waiting to have stuff thrown into them. So there is no viable dualism of the structured and the unstructured—the mental plenum versus the mental vacuum. It’s all sophisticated architecture. The blank slate is blank only relative to what may be written on it; in itself it is plenitude, no more vacant than any other natural object.[3]

Third, the blank slate has a biological function, which may be characterized as follows. The world is divided into general facts and specific facts; it is useful to know both kinds. An individual organism has a specific history and it is useful for the organism to learn from its history—to know the specific facts that aid its survival, such as where food is to be found. So it is adaptive to install an organ that can record particular facts for later use—that is, a memory organ. Thus the blank slate is as functionally adaptive as any non-blank organ of the body or mind. It isn’t just for frivolous “culture” and knowing historical dates to pass examinations; it functions as an adaptive trait, no less biological than digestion or locomotion. The genes design it to perform this function. Animals learn new things so as to get the genes that make them into future generations (fundamentally).

Fourth, and very important, there is not the blank slate, there are many blank slates: that is, each species has its own type of blank slate designed to serve its particular mode of life. Memory is species-specific. Squirrels remember where they have stowed their nuts, birds remember which direction to fly in, social animals remember their conspecifics, humans remember birthdays and to pick up the dry cleaning. Memory faculties vary in their storage capacity and in their contents, with inbuilt biases to remember some things and not others. They are like eyes: they all do the same thing, but they vary in their architecture and acuity. The phenomenology and physiology of memory varies from species to species, as does its functional character. Memory systems are shaped by natural selection like any other trait, and they are as species-specific as other traits of evolved organisms. The metaphor of the blank slate should not be allowed to obscure this fact—as if all forms of blankness were the same (compare different sizes and shapes of paper, or paper and hard drives and vinyl discs). The larynx serves the purpose of emitting sound in every vocal animal that has one, but larynxes come in different designs and produce different sounds—there is not some universal larynx common to all species.[4] Just as the larynx of one species will not allow it to make the sounds of another species, so the memory of one species will not allow it to remember what another species remembers. Species-specific means functionally limited. The blank slate of an organism is thus tied to a particular ecological niche, a specific biological set-up.

We can accordingly say that blank slates are genetically determined, intrinsically structured, biologically functional, and species-specific. They are part of an animal’s organic endowment—certainly not a product of environmental contingencies or “culture”. True, they can receive information from experience, but that doesn’t render them non-biological, or introduce a sharp line between the innate and the learned—any more than the varying objects of the senses show the senses to be non-biological. So it isn’t that memory reveals the limits of nativism; nativism, rightly understood, simply includes the blank slate of memory. If there were a “blank limb” capable of assuming one specific form or another depending on environmental demands, then that limb would not thereby cease to be biological. It would simply be innately adaptable instead of innately fixed. The traditional opposition between nativism (rationalism) and empiricism is thus misconceived, since even the empiricist is a (closet) nativist. The only issue is how much of human (and animal) knowledge is due to memory based on experience and how much to what is known at birth (without the use of memory); whatever view you take, the faculties in question are innate and biological. The terminology should really be dropped and replaced by talk of memory knowledge and non-memory knowledge. The question then will be whether specific areas of knowledge are known by memory or otherwise—knowledge of logic, mathematics, morals, language, laws of nature, colors, shapes, historical events, science, geography, etc. It is misleading to speak of nativism versus empiricism, as if empiricism could escape nativism about its preferred model of human knowledge. Talk of a blank slate is really a misleading way to talk about memory. Traditional empiricists claim that all knowledge is based on memory, while traditional nativists claim that much knowledge is not based on memory (though some certainly is). Knowledge based on experience is possible only if experience is remembered; knowledge not based on experience is knowledge not remembered. Do we know mathematics because we remember what we were taught or because we have that knowledge built into our minds before being taught anything? That is the real question, not whether the knowledge is innate or acquired.[5] Even if it is acquired by means of experience and memory, the knowledge rests on an innate faculty, as biological as anything else about us. And again, the fact that vision “acquires” different objects as the eyes rove around the world doesn’t show that vision is not an innate faculty—just as the different objects you might pick up with your hands don’t demonstrate that your hands are not innately determined structures. A nativist who held that which objects you pick up in life is genetically determined would clearly be out to lunch, but that is not required in order to maintain the sensible position that hands themselves are genetically determined. It is the same with memory and the blank slate. The blank slate is empty just in the sense in which the empty hand is empty—neither of which entails a lack of innateness. We could rename the debate “memory-ists versus non-memory-ists”. Even the extreme empiricist who believes that all knowledge is based on memory is committed to the innateness of the faculty of memory, with the four characteristics I listed above; the inner constitution of the mind would still be independent of all experience (learning, environmental impact). There is no way to avoid nativism as the foundation of knowledge.

Let me end with some reflections on knowledge of language in the light of the foregoing observations. We can accept that some knowledge of language is innate, i.e. knowledge of universal grammar—memory plays no role in possessing such knowledge. But in addition to this we also have knowledge of the particular human language that we learn to speak—and here memory indisputably plays a role. Does this mean that biology leaves off where knowledge of a particular language begins? No, because memory is itself a biological endowment programmed into the genes. Human speakers thus exploit two innate endowments in their acquisition of language. But there is a further point to be made: the specific form of memory that is exploited in learning a particular language is likely dedicated to that task. We possess a remarkable memory for linguistic information—phonetic, syntactic, and semantic—and it is plausible that this is specific to language. So we are not just using our general-purpose species-specific form of memory but also a special memory module dedicated to language learning.[6] Of course, this is an empirical hypothesis, but its distinct possibility allows us to make a conceptual point, namely that we have a genetically fixed and highly specific form of memory that is employed in language acquisition—a third type of innate mechanism. Thus language acquisition employs three levels of innate machinery: an innate knowledge of universal grammar; an innate general memory faculty directed to knowledge of a particular language; and an innate memory module dedicated to linguistic memory of a specific language. So it is not just innateness at the first, universal level but also at the levels that deal with learning a specific language. We possess not just a blank slate peculiar to humans but several blank slates devoted to different cognitive tasks—and all are innate. The blank slate might be as modular as the non-blank systems that make up our general knowledge. In any case, the blank slate is not the negation of innateness but a special case of it.[7]

 

[1]I am not the first person to make this point, but I think it is still underappreciated. Even if all ideas are acquired, the thing that acquires them isn’t. The blank slate is as innate as anatomy and eye color.

[2]It might be that blank slates are more complex genetically than determinate organs, because of the engineering requirements of extreme receptivity; certainly, they require big brains. Empiricism could not then claim biological parsimony.

[3]Notice that paper is selectively receptive: ink leaves a mark on it but wind doesn’t. There could be a form of “paper” that is receptive to wind but not ink. Thus bias is built in as part of the nature of the thing.

[4]There is an illuminating discussion of larynxes in Eric Lenneberg, Biological Foundations of Language (1967), Chapter Two. They serve as a good model for all adaptive traits.

[5]Of course, innate knowledge is also “acquired” in the sense that organisms come to have it at a certain time by certain processes—by gene activity (and earlier by natural selection). The usual use of “innate” and “acquired” in these debates is quite unsatisfactory.

[6]Mimicry is one expression of this type of linguistic memory.

[7]Why would anyone think that the blank slate is not a robust biological trait of the organism? Perhaps for epistemological reasons: we can’t perceive or introspect the blank slate (i.e. the memory faculty); we can only apprehend its contents (ideas, concepts). Thus we are inclined to doubt its reality. And something unreal can’t be a biological fact. I won’t take time to dissect the errors in this way of thinking.


Attitudes to Reality

I am interested in devising a general taxonomy of epistemic attitudes towards reality as a whole. This taxonomy can be expected to have an historical interpretation, and heuristically that is a good way to understand it. So let us consider the human attitude to reality in pre-historic times—possibly as far back as our arboreal ancestors. At this time our attitude would resemble the attitude of other animals: there would be no religion and no science, and not even any recognition of why-questions. We did not seek explanations or general causal understanding; we simply accepted the world in which we found ourselves. No one asked why things happen one way rather than another, or how things originated, or what the laws of nature are. Our relation to the world was practical not theoretical: we needed food, shelter, and other biologically given goods; and such knowledge as we had was geared to those ends. Perhaps we also had some nascent aesthetic sense, as other animals seem to today. Our attitude to reality was unquestioning and unreflective: reality, to us, was just the given.

But at some point humans began to ask questions and generate answers to them. We wanted to know why things happen as they do. How exactly we moved to this stage is obscure—other animals never seem to get beyond the first stage of unquestioning acceptance. In reply to these questions we fashioned explanations with frankly supernatural elements: spiritual forces, divine beings, malign agencies, the gods. Why does thunder occur?  It’s the anger of the gods. Why is there disease? It’s punishment for our sins. This epistemic mode continued for thousands of years and still prevails in many human populations today. It is ancient and deep-seated, if utterly misguided. But it is an intellectual step beyond blind unquestioning acceptance of reality—a cognitive advance of sorts. Even our primate relatives seem not to have entered the superstition stage.

Then came science, and none too soon (the human brain had been ready for it for a long time). In fits and starts science began to replace the supernatural mode of explanation. No divine agencies were permitted in the explanation of natural phenomena—just natural mechanisms and laws. The scientific attitude replaced the supernatural attitude as an epistemic stance, at least to some degree and for some people. For many of us now this is the attitude we take for granted, though it is a comparatively recent development. It motivates us to live as we do. We are curious about reality and we accept the scientific method of discovering the truth about it. We have become dedicated students of nature. We delight in scientific results, facts, and theories. Reality, for us, is not (merely) practical or aesthetically pleasing; it is an object of investigation, discovery, and understanding. It exists to be explained—scientifically. We regard reality as a problem to be solved.

The scientific attitude may be accompanied by a secondary critical attitude, a kind of frustration or disappointment. For all the virtues of the scientific method, we understand that it is limited and fallible. We don’t apprehend reality as God might, in one sweeping glance, directly and incontrovertibly. We proceed by cautious inference from what our limited senses reveal of the world: we generate hypotheses, conjectures or guesses—and then we seek evidence to support what we have surmised. Thus we possess the idea of a superior mode of knowledge, which is unavailable to us—godlike knowledge. Still, our very limitations spur us on, in an effort of human transcendence. We enjoy the thrill of the chase, handicapped as we are, and not just the attainment of the knowledge we seek. We feel driven by our curiosity, and our life is given meaning by it. Even though we acknowledge our epistemic limitations, we fight through them in acts of intellectual conquest. Our attitude is passionate and heroic, though tinged with self-doubt. We earnestly hope that science will yield the ultimate truth of the universe, but we have to admit that it might ultimately fail us. Our attitude is optimistic, but with an undercurrent of pessimism. We certainly feel that the scientific attitude is superior to the attitude of blind acceptance or the attitude of supernatural mythmaking. Science enhances our self-image. It makes us feel special. For some, it makes us godlike.

These three attitudes are not the only possible ones, though they are surely the most common. Another attitude would be one of total skepticism: we can ask why-questions and recognize our state of ignorance, but we cannot answer such questions—not by religion or by science. Human knowledge of how the world works is impossible. This attitude is not like the acceptance attitude, which involves no stance with respect to whether reality can be known. The skeptical attitude allows that the questions can be asked, and agrees that they have answers, but denies that we can discover the answers. Reality must remain enigmatic. We cannot know what we desire to know.

A further attitude, combining both skepticism and science, is what might be called “scientific mysterianism”: this is the idea that not all scientific questions admit of answers that we humans can discover or understand, though some do. The attitude can come in degrees, ranging from local mysteries (e.g. the fate of the dinosaurs) to broader mysteries (e.g. consciousness and the ultimate nature of matter). A person who adopts this attitude accepts that the scientific drive must be curtailed in certain cases: we cannot satisfy our curiosity about everything in nature. The attitude is nothing like the supernatural attitude, which does claim to provide answers. The scientific mysterian is someone who believes that for scientific reasons not all of reality can be understood by humans. This individual holds that science is a human construct, constrained by limited human intelligence, and evolved for biological purposes: so the science of science implies the real possibility of mysteries of nature.[1] This attitude is quite distinct from any of the other attitudes described so far, and deserves its own place in the general taxonomy. I believe it is the attitude that best fits the state of human knowledge at present, at least in some areas; I expect it to become more prevalent in the coming decades, as the limits of science become apparent. It results from applying the outlook of science to science itself, i.e. a particular kind of human knowledge. The science-forming faculty (as Chomsky calls it) is a natural faculty endowed with inherent strengths and weaknesses.

But what primarily interests me here is the epistemic attitude that will prevail once science comes to an end. For end it will, given some relatively obvious facts. The end of science can come about in two ways: either everything is discovered or some areas resist scientific understanding and always will. In either case there will be no more science for humans to do. Science is a finite enterprise, because the world itself is finite—there are only finitely many laws, explanations, and truths.[2] We have already discovered a great deal about the world, and we will doubtless discover a great deal more; but at some point the discovering will end, either because there is nothing left to discover or because we cannot in principle reach any further. There is only so much botany to do—and zoology, psychology, chemistry, and so on. And before the end point is reached the supply of unanswered questions will shrink visibly: we will be aware that science is drawing to a close and that the pickings that remain are slim. Quite possibly we have already discovered the major theories of nature. What reason is there to believe that science will exist forever, constantly making major new discoveries? Geography effectively came to an end once the earth had been thoroughly explored—there are no more continents to discover. Couldn’t physics come to an end in the next 50 years?[3]

However, my question is less whether and when science might come to an end than what impact this would have on our attitudes to reality and our own lives. What will this recognition do to us? How will our mental attitude change? It will, obviously, deprive us of scientific motivation. We will no longer be able to dedicate ourselves to the pursuit of scientific knowledge. Our curiosity will not find a rich vein in applying the scientific method to reality. We will no longer define ourselves as working scientists. Accordingly, we will have to transform ourselves into a new kind of cognitive being with a different attitude to reality. We will have to find a different meaning and purpose in life. Call this the “post-scientific attitude”. It is not like the pre-scientific attitude, because now science lies before us complete (or as complete as it can get), but it shares one trait with that attitude: we are no longer scientifically motivated. There is no reason, however, to expect that this would lead to a recrudescence of the supernatural attitude, because science has made that obsolete—and certainly it won’t lead to the old unquestioning acceptance of the world. It requires a quite new attitude, a new stance. We will no longer see ourselves as explorers of nature, put here to perfect human knowledge. All scientific knowledge will be available at the push of a button on a giant computer. There will be no laboratories, no experiments, no scientific instruments, no professional scientists, no Nobel prizes, and no scientific breakthroughs. What will we do with our intellectual energy, our restless curiosity, our idealism? We will have to adjust to a new epistemic world. Children could still learn science at school, and they might well derive intellectual pleasure from doing so, but the primary drive to discover will no longer be one that can be satisfied. The excitement of ongoing science will no longer be there to stimulate and motivate. No one will thirst to become a practicing scientist revealing the truths of nature. No more Darwins and Einsteins.

I think this will be a difficult cultural adjustment, given our human cognitive nature and its emotional associations. There might be widespread depression and a sense of existential emptiness. But perhaps there is a more hopeful future to contemplate: all that mental energy might go into other areas of human concern. We might rediscover things that science has distracted us from; we might find a fresh sense of value in other areas. If our idealism can no longer strike out in search of scientific knowledge, then it might be channeled into art, morality, politics, the preservation of the planet, and so on. We might start to view nature (including human nature) not as a puzzle to be solved, with which we are engaged in a titanic intellectual struggle, but more as an aesthetic object, or an object requiring dedicated preservation. Instead of striving to uncover its secrets, which it zealously conceals from us (with our feeble senses and rickety inferences), we might come to celebrate nature’s beauty more than we do now and seek to preserve and enhance it. Improving the general condition of humankind (and animal kind) might also seem more compelling once we are no longer focused on trying to understand nature. There will be no conflict between searching for knowledge and improving human wellbeing (resources being limited). Art will not be merely a pastime we pursue when the day’s scientific work is done, but something to occupy our fullest attention. In other words, different values will take up the space heretofore occupied by science.

If this prediction is along the right lines, then the science-free world of the future may be a better world than the world we occupy now.[4] However, none of this might come to pass and all that will be left is a gaping hole where science used to be. It might even be that we will turn from nature in an attitude of inconsolable boredom, expressing our despair in destructive acts. Unending war might be the outcome. Nature might no longer engage our interest or even our respect, once its secrets are laid bare, and the nasty side of the human animal might come to the fore. It is hard to say, but things will not go on as before. We would do well to prepare ourselves for the end of science, taking whatever precautions seem necessary, and actively encouraging positive alternatives. Not science education, but post-science education. Philosophy might come to the rescue: it could fill the intellectual vacuum, providing challenges to ambitious young minds (assuming it has not also reached its end point). I don’t expect science to wind down any time soon, but in a hundred years or so its demise might become a reality.[5]

Let me make a methodological point. It is worth trying to articulate and understand the kinds of general attitudes I have described because they shape the entire way we view the world and ourselves (we might call it “epistemic psychology”). Once Homo sapiens got beyond the brute animal acceptance of nature, various epistemic options opened up, and these have shaped the course of human history. A taxonomy of these attitudes helps in grasping them in their full generality—they are natural facts too, capable of study. And just because we are in the middle of the scientific period doesn’t mean that this period will last forever. We should begin to consider the future of the human spirit once science loses its centrality, at least as an active area of human endeavor. We should be ready for the transition. My own view is that the era of science is a strictly temporary form of human existence, which will inevitably be succeeded by something different. We will always have the results of science (barring some terrible catastrophe, physical or cultural), but we will not always have science as a living form of human endeavor—as something that absorbs our interest and energy. Indeed, science may come to an end before religion, fading into the cultural background, because religion does not have finiteness built into it. My guess is that both religion and science will effectively end in less than two hundred years, given the rate of change we are seeing now; and then we will need to figure out where to go next. I hope that aesthetics and morality will occupy the center of our new attitude to reality, not war and personal enrichment, but I wouldn’t bet on it. Science will be safely tucked away in the reference books (if books still exist) to be enjoyed, savored, and used as need be. We will then inhabit a new form of human consciousness in which other concerns have become salient. The world spirit will have moved on.

 

 

 

Colin McGinn

 

 

 

 

 

[1]There are also limits arising from distance in time and space that may never be overcome.

[2]I don’t mean to deny the infinite—of space, time, numbers, and so on. My point is just that there are only so many scientific facts and explanations to be known, so that science won’t go on for all eternity: it has a natural end. This is especially true of natural laws and general theories: there are only so many of these.

[3]See John Horgan, The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age (Basic Books, 2015) for a discussion of this possibility.

[4]Of course, science will still exist in the form of established knowledge; what will be gone is the scientist as discoverer—there will no longer be scientific research. The position of prestige currently occupied by scientists will shift to other members of society. Scientific skills will not be prized as they presently are.

[5]We should distinguish science and technology: I am talking about pure theoretical science not applied science. Technology might have a much more protracted future than science. Discovering the fundamental secrets of nature might not take much longer (or realizing that some things are inherently beyond us), but making new machines by applying scientific knowledge could go on indefinitely.


Introspective Invariance

Our knowledge of the external world is subject to much variation in type and degree of access. We don’t always perceive accurately or clearly or with the same amount of revelation. There are illusions, occlusions, blurring, darkness, variations in appearance, constancy effects, blindness (partial or total), stimulus overload, perspectival disparities, squinting, habituation, priming, etc. Some things are too small to see, some too large. One sound can drown out another. It can thus be hard to know what is going on around you and mistakes are common. But this kind of variation doesn’t apply to our knowledge of the internal world: here we know everything to the same degree with no variation of access. I know my pains as well as I know my intentions and beliefs; and I know individual instances of these types with the same degree of transparency. There are no analogues of perceptual illusions or occlusions or absences of light. Traditionally, it is supposed that such knowledge is certain and incorrigible; but it is also uniform, as if each mental state is bathed in an equal amount of illumination and appears quite unimpeded. There is epistemic invariance in introspection, unlike perception.

This should strike us as remarkable, because mental states themselves are very various. Sensations, thoughts, emotions, intentions, beliefs, and acts of will differ widely among themselves—as physical objects do. Yet they are all presented in the same uniform manner to introspection; it isn’t that some are more difficult to introspect than others in the way physical objects vary in their ease of perceptibility. There are no mental analogues of remote galaxies or invisible germs or atoms or things buried underground. Everything seems presented just as it is without any impediment to knowledge. So in addition to the traditional attributes of infallibility, incorrigibility, first-person authority, and certainty, we have epistemic invariance—the property of being always equally accessible. The contents of the mind don’t vary in their degree of availability to introspection. But that seems odd and inexplicable, since the mind is not homogeneous in itself; and one would expect some variation of access depending on the prevailing conditions of introspection. Why isn’t introspection more like perception in this respect? Surely there could be a mind that exhibited introspective variance: the different types of mental state are variously known, with the possibility of error, and analogues of blurring, darkness, blindness, and so on. Isn’t that what we would expect given the realities of knowledge in an imperfect world? Why is our knowledge of our own mind like God’s omniscient knowledge of everything? It seems nothing short of miraculous.

It may be replied that the traditional picture is wrong and epistemic variation is the way things really are. That picture of introspective knowledge is a Cartesian myth: we are not infallible and incorrigible with respect to our own minds, and there is variation in quality and degree of access from case to case. Thus we have unconscious mental states, unattended pains, being unsure what you really believe or desire, not knowing whether you are in love. So there is variance in degree and type of epistemic access with respect to one’s own mind. But these points, though not mistaken in themselves, don’t really restore the analogy to perception: we still don’t have the kind of variance that characterizes perceptual knowledge. Ordinary occurrent conscious mental states are all apparently known in the same way with the same degree of clarity and certainty. They are laid out before the introspective eye in equal measure, whether they are sensations, thoughts, acts of will, etc. A pain in the toe is as present to introspection as a thought in the head, despite its relative remoteness. No matter what your beliefs are about, they are equally present to you. Sensations of touch are not more introspectively available than sensations of sight. Here it may be said that this is not really as surprising or remarkable as I am making out, for all of these mental states are really in the same place—the brain. The pain in my toe is really located in my brain, just like my thoughts; there is no difference of epistemic proximity. But this just raises another puzzle: why do different parts of the brain produce the same kind of introspective access? Suppose the introspective faculty is located in a certain part of the brain, say the prefrontal cortex, while the pain and thought centers are located in other parts: won’t those other parts be differently hooked up to the prefrontal cortex, more or less distant from it and employing different nerve fibers? If so, shouldn’t we expect a difference in degree of access, with signals from one brain part taking longer to reach the introspection center than signals from another brain part? How is introspective invariance consistent with cerebral variance? Situating all mental states in the brain doesn’t support introspective invariance; it undermines it. We still have the puzzle of why different compartments of the mind converge in their introspective accessibility.

Here is another way to put the point. You can selectively lose a sense but you can’t selectively lose the ability to detect the sensations delivered by a sense. You can go blind but you can’t go “blind” to your visual sensations. I have never heard of a case of someone losing their entire introspective faculty (they go “mind-blind”), still less of someone ceasing to detect their own visual sensations while still being aware of their auditory and tactual sensations. There are no such introspective breakdowns or pathologies. But they seem like logically conceivable scenarios—couldn’t they occur in some imaginary creature? Then there would be a very distinct kind of introspective variance—knowledge of some sensations but not of others (which nevertheless exist). Suppose we adopt a biological perspective, always a salutary procedure, and consider the evolution of introspection. First consider sensations and introspective knowledge of these sensations: that is one possible kind of species psychology. Then consider thoughts and emotions along with their own introspective faculty. Why should that faculty be just like the faculty directed at sensations? The faculties could exist in different species, arising at different times, and with different objects—why should they function identically? If we put both faculties together in a single species, why should the result be epistemic invariance? These are different biological adaptations, so why should there be such strong convergence? Yet in our case the entire contents of our mind present themselves with exactly the same transparency. There is a uniformity here that is at odds with biological reality as well as mental heterogeneity. To put it simply: why shouldn’t thoughts be better known than pains (or vice versa)?

It might be retorted that the puzzle arises only under a misguided perceptual model of introspection (the term itself might be contested). If we insist on viewing so-called self-knowledge as a type of inner vision, then we shall feel puzzled about why it doesn’t have the characteristics of vision; but that picture isn’t compulsory, so the puzzle dissolves. I don’t think we need to be committed to an inner vision model to feel the force of the puzzle, but anyway this response doesn’t really advance the discussion, because the same puzzle arises under other conceptions of reports of one’s own mental states. Why should all mental phenomena be expressively identical? I express my pains with the same alacrity and finesse as I express my thoughts or emotions—there isn’t some sort of temporal delay or potential for selective breakdown. Intuitively, I have the mental state and I am aware of it, so I express it at will: it isn’t that in some cases the expression is thwarted or compromised. Logically speaking, a creature could exhibit selective expression, but we don’t do that—why? That is, why does our (conscious) mind always present itself to us with the kind of uniform availability that it does? The objects around me present themselves to my senses in all sorts of different ways, with great differences of accessibility, but the mental states inside me don’t do that—they just sit there with an equal degree of accessibility, like peas in a pod (or rather not like that). This is a fact so familiar that it takes work even to notice it, but once noticed it cannot but appear puzzling. The physical world varies enormously in its degree of perceptual accessibility, but the mental world is unvarying in its degree of introspective accessibility (with the qualifications made earlier).[1] It’s as if we always have 20/20 vision as far as the contents of our own (conscious) minds are concerned.

Consider animal minds before introspection ever evolved. At some point it did evolve and mental states began to be known by their bearers. Did it operate over all mental states initially or only a subset of them? Was it equally adept for all existing mental states? Did it go through a phase of epistemic variance? Do other animals have the same invariance that we have? What is the explanation of this invariance? These are puzzling questions indeed.

 

Colin McGinn

[1]Compare knowledge of one’s own body: here too we have marked epistemic variance, since some parts of the body are better known than other parts, even in the case of proprioception. I can’t see my back or feel my brain, for example—yet these body parts are as much parts of my body as any. But the interior of my mind isn’t like that: it is the analogue of a completely visible body. The mind is thus epistemically anomalous, puzzlingly so.


Thinking as the Good

 

 

 

The Good Life As Thinking Well

 

 

What is the good life for a human being? It is hard to think of a more pressing and important question, or an older one.[1] Two remarks on the question, as so formulated, should be made immediately. The first is that there might be several goods for human beings, which may or may not be ranked, not a single good. I mean to be asking what is the deepest and most distinctive good for human beings: what, given our nature, is the highest form of good for us? What is the ultimate human good? Second, the question concerns what is good specifically for human beings (and possible creatures relevantly similar to us), not for animals in general. The highest good for a dog or a snake is doubtless different from the highest good for a human, because these three species have different natures, especially psychologically. Our specific form of good depends on what we are—centrally, essentially, distinctively. I shall accordingly say that I am concerned to discover the core good for humans—the good that is closest to our specific nature.

Two answers to our question have been historically prominent: hedonism and moralism. Hedonism says that the good life consists in feeling good—in individual pleasure, happiness, or wellbeing. Your life is good if and only if it is full of pleasurable or agreeable sensations and emotions. Promoting the good life is maximizing such sensations and emotions, in oneself and others. Moralism, by contrast, says that the good life consists in doing good—that is, in moral or virtuous actions. Your life is good if and only if it is full of virtuous acts. The worthwhile life is the moral life. Hedonism presupposes that we are beings with the capacity to experience pleasure, while moralism presupposes that we are moral agents. If we had neither attribute, it would be pointless to claim that the good life for us consists in either thing. And if we were to lose either attribute, as a result of some catastrophe, then human life would become devoid of value, given the truth of hedonism or moralism: for there would then be nothing about us that could constitute a good life. If the pleasure centers were removed from our brain, or we were rendered unable to act virtuously (say by total paralysis), then human life would be made meaningless, empty of value, not worth living. There would literally be nothing of value for us to live for (this reflection might already make us suspicious of both doctrines, at least as complete accounts of the good life).

The thesis I wish to advocate is that we have a third type of capacity and that this is where human good ultimately resides. I do not say that pleasure and virtue are not human goods (I think they clearly are); my point is that there is a third type of human good that is more central—that is closer to our core nature. This good I call thinking well. We have this good because it is in our nature to be thinking beings: we are rational, reflective, meditative, and cognitive. Hence my title: the good life for humans is thinking well. There is no established label for views of this type (which hark back to the ancient Greeks), but just for the sake of a name we can call the view “intellectualism” (though this has some misleading connotations). According to intellectualism, the reason it is better to be Socrates dissatisfied than a pig satisfied is that Socrates is still thinking well, even if he is not feeling any pleasure or satisfaction (and not acting virtuously either). But a pig dissatisfied has no other good to fall back on (assuming virtuous action is also out of its reach). I shall now elaborate this intellectualist thesis.

 

What do I mean by "thinking well"? What does cognitive value consist in? Two views may be distinguished, according as the value in question is construed intrinsically or instrumentally. A very widespread opinion is that all value consists in the satisfaction of desire—the utilitarian position. Thus the value of thought must depend instrumentally upon its ability to aid in the satisfaction of desire. Thought is taken to be a useful device for producing desire satisfaction, and its value resides wholly in that instrumental function. Perhaps the rational capacity evolved so as to help in furthering the organism's goals, and hence its value is instrumental in relation to these goals ("the better you think the more you get what you want"). Now I do not doubt that thought can have this kind of instrumental value, but I do not believe this exhausts its value; what I maintain is that thought also has intrinsic value—value qua thought. Thinking well is not just thinking effectively in the desire-satisfaction sense; there are other characteristics thought has that render it intrinsically valuable, i.e. valuable independently of the good states of affairs it can bring about. An incomplete list of the characteristics in question would include: clarity, precision, creativity, profundity, truth, justification, explanatory power, importance, acuity, brilliance, and objectivity. These features have value in themselves, I maintain, and not merely instrumentally in relation to goals and desires. Thus it is a good thing for thinking to be clear independently of any desirable state of affairs such clarity may bring about. Clarity is in itself a desirable trait of thought—a good way for thought to be, inherently. By contrast, if a thought is unclear or muddled, confused or tangled, then this is a bad way for thought to be—even if it might happen to bring about some amount of desire satisfaction. To characterize someone's thought with the adjectives I listed is ipso facto to praise or commend his or her thought: it is to assert that the thought in question is good (in some respect). So what I am saying is that thinking well consists in having the traits listed, where these traits themselves have intrinsic value. And if thinking well is our highest good, then our highest good consists in having thoughts with these valuable characteristics. For instance, our highest good consists in having clear thoughts—as well as creative and brilliant thoughts, and so on. If our thoughts are good in this sense, then our life is (to that extent) good.

We could put this by saying that thoughts can have various "cognitive virtues". This notion is close to what is sometimes called "epistemic virtue", but that notion tends to focus on procedural aspects of belief formation—how one assesses evidence and so on. I mean to be speaking more about the kinds of virtue that thoughts in themselves can have, as opposed to the virtues attaching to investigation or enquiry. In any case, we are moving in the realm of value—praise and blame, norms, assessment, and evaluation. Just as bodily actions can fall into the realm of value, so can mental actions—but this cognitive value is not ordinary moral value. The value at issue is peculiar to acts of thinking, and reflects the nature of such acts (that they are propositional, in particular). To think well is to think in such a way as to attract positive evaluation—qua thinker (not as a moral agent in the usual sense). To be a good thinker is to manifest the traits listed, since these constitute the value that thoughts per se possess. For a thought to be, say, clear, original, and brilliant is for it to have high intrinsic value, these being value-making characteristics. This is what I mean by "cognitive virtues".

There is another aspect of thought and its value worth mentioning, though I won't explore it here: namely, that thought connects us to things that in themselves have been regarded as having positive value. Thus Plato famously held that thought connects us to universals and universals possess a special kind of value, being eternal, timeless, unchanging, perfect, etc. In thought we are in direct contact with this higher world, according to Plato—the intellect is our route to a superior mode of existence. In particular, the form of the Good is accessible by means of our intellectual faculties, and apprehension of the Good elevates us correspondingly. Similar ideas have attached themselves to generality and modality: human thought acquaints us with generality and necessity, with universal law and how things must be. Since these concepts take us beyond the ordinary empirical world of particulars, they are deemed "transcendent" in some way—an ontological cut above. Thought, it is felt, takes us beyond the senses and into a deeper reality—where this deeper reality confers value on the thoughts that achieve that feat. As Russell would say, we become acquainted with what is fine and noble in itself—mathematics being the favored example. These inchoate ideas may or may not be philosophically defensible, but they do seem to exert a hold on those who see in thought a route to something higher. Thought takes us to the superlunary, where things are purer and more perfect.

 

I have said that our highest good consists in thinking well, but I have not yet said anything about happiness. Can we say that happiness is thinking well? That sounds a bit off as stated—how can you be made happy simply by having objectively valuable thoughts? What if we had such thoughts but didn't know it? I think we need to add another ingredient to the picture: knowing that we are thinking well. Then we can say, more plausibly, that happiness—or at least one type of happiness—comes from knowing that one is thinking well—being self-aware in this kind of way. When your thinking is going well and you know it, then you are happy (with respect to your thinking). If your thinking were good and you didn't know it, even doubting it, then happiness would not be yours. But if your thinking is both excellent and known by you to be so, then happiness will be the outcome. You happily think: "I am having good thoughts today!" Thus happiness (or one type of it) results from recognition of one's intellectual or cognitive excellence; we might say it is enjoyment of one's good thoughts. It is a kind of self-congratulation, if I may put it so. This is something over and above just having excellent thoughts; it is self-knowledge with respect to the quality of the thoughts one has. You won't be a happy thinker unless you perceive your thoughts in this way; you might even be quite miserable ("My thoughts are terrible today!"). Intellectual happiness thus requires not just first-order intellectual excellence but second-order awareness of one's intellectual excellence. And knowledge of one's intellectual excellence will naturally give rise to a type of pride, which is a source of happiness. It is self-attribution that links cognitive excellence to happiness—you are happy that you are thinking well. When the ancients spoke of a "love of wisdom" they presupposed that wisdom was recognizable, in oneself and others. Since in general we love wisdom (understanding, knowledge, insight), we love it in ourselves, as well as in others; but we need to be able to recognize it in ourselves if we are to experience the love in question. Recognizing wisdom in oneself will naturally produce self-love—and self-love is part of what happiness involves (someone with self-hatred is not happy). A being that had wisdom but could not recognize it would not be happy in the way we are. (Recognizing stupidity and ineptitude in oneself will correspondingly produce self-loathing and unhappiness: the intellectual life is not always and ipso facto the happy life.) The happily wise person thinks well and knows that she thinks well; and since she loves wisdom, she loves herself for being wise. Thus she is made happy by her knowledge of the wisdom in herself that she loves generally. She is proud that she instantiates a quality that she loves (esteems, admires) in others.

You might object as follows: "That all sounds very fancy and high-minded, but isn't it rather elitist? Can only philosophers be happy? That seems unfair, and also not true. What about the happy practical man and the joyful doer of good deeds?" But, I reply, there is practical wisdom too, and there is a cognitive dimension to right action. The view I am defending does not privilege theoretical reason over practical reason: good thoughts are available in all domains of thinking. The carpenter can think well (or badly) as well as the philosopher. In fact, since everyone loves wisdom, whether practical or theoretical, everyone counts as a "philosopher" in the original meaning of the term. So everyone is able, in principle, to achieve "philosophical excellence". The doctrine I call "intellectualism" does not advocate the life of the "intellectual" above all others; it simply celebrates the virtues of reason, in all of its many forms. So the view is not elitist in the sense that it elevates some kinds of cognitive activity over other kinds. To do that would be to make a further claim, right or wrong. But it is elitist in the sense that it recognizes that some people are better thinkers than others, whether they are carpenters or epistemologists. Some people have more cognitive virtue than others, whether innately or by training or by strength of will. Whether some areas of thought are more valuable than others is a question I won't go into, though there is room for more elitism here too (physics versus economics, say). Maybe thinking well about thinking well has the highest value of all… But the idea is not that the "intellectual life" is higher or better or more worthwhile than the "practical life". It is just that whenever we are thinking—and we always are—we should strive for excellence in our thinking. No matter what we are thinking about, our thinking should always be excellent, according to the criteria cited earlier. Stupid, lazy, fuzzy thinking–about any subject matter–is never good.

 

We are condemned to think. We wake up in the morning and start thinking immediately, and we go on thinking till we fall asleep at night (and then the dreaming starts and thinking returns). You can't stop yourself from thinking or take a ten-minute break from it—not if you are conscious and awake. You can close your eyes or block your ears if you want to stop seeing or hearing, but you can't block your thinking organ (while remaining conscious)—it just keeps ticking away, relentlessly. The existentialists declared that we were condemned to freedom, as an inevitable part of the human condition; that may or may not be so, but we are certainly condemned to cognition—to being ceaselessly bombarded with thoughts at every waking moment. We can, to some extent, choose what to think about, but we cannot choose whether to think at all. To be human is to be constantly thinking, reflecting, and fretting. And we think a great many thoughts in the average day—I would estimate more thoughts than there are seconds. Not for nothing did Descartes announce that our essence is to be a thinking thing, a "res cogitans". Whatever may be said about dreamless sleep, our conscious lives are replete with thinking, and we would not be what we are without it. Thus we could enunciate, in Cartesian spirit, the "inverted Cogito": "I am, therefore I think". If a conscious human self exists, it is necessarily a thinking being. Remove my capacity to think and you remove my essence—what it is to be me. I could go blind and deaf and still be myself, thinking my thoughts; I could even lose all bodily awareness and tactual sensation and persist as myself. I might conceivably even exist in a totally disembodied form yet still exist as a thinking thing. But if my ability to think is destroyed, then I become a mere "vegetable" and I no longer exist as a self or person. As things stand, I am also an acting thing and a feeling thing, but these attributes do not form my core in the way my thinking does. Moreover, I experience myself as a thinking thing—I am aware of myself as a thinker. I know that I think. It is my nature to think thoughts thinkingly, if I can put it so.

These are remarks about the metaphysics of the human self—about what kind of being it is. What consequences do they have for the good life for humans? Well, if I am essentially a thinking thing, then the highest good for me will depend upon the proper form of excellence for such an entity. If it is my nature to think, then what is good for me will depend upon that nature flourishing (as Aristotle would say). For what is good for me will depend on the form of excellence appropriate to my essential nature. The form of excellence for thought is simply thinking well, as defined earlier: so my nature achieves its highest good when I am thinking well. I am also a digesting thing, inter alia, but the form of excellence appropriate to digestion—"digesting well"—is not part of my very nature as a conscious self (similarly for seeing, hearing, etc). In the case of thinking, though, the appropriate excellence does concern my essential nature, since I am necessarily a thinking thing. Thus, if we combine a Cartesian metaphysics of the self with a broadly Aristotelian conception of human good, then we reach the thesis that our highest and most central good consists in thinking well. The good of an essentially thinking thing is precisely the good attaching to thought itself. There are no doubt other goods appropriate to us because we are also animals that perceive, feel, digest, act, and so on; but the good proper to thinking is the good that most centrally concerns us in our core being. The good of an insect or a reptile, by contrast, will not include cognitive excellence, since these creatures are not to be defined as thinking things. But human beings are equipped with rational thought, and so their good concerns the proper functioning of that faculty. If I am thinking well, then I am well—qua thinking thing. We might interpret this conclusion as supplying the missing Cartesian ethics: Descartes focused on metaphysics, but his metaphysical conception of the self leads naturally to the intellectualist account of human wellbeing. A good state of affairs, for a committed Cartesian, is precisely one in which thinking beings think well. That is what we have a moral duty to bring about. Cartesian happiness is accordingly knowledge that one is thinking well. We may also be happy that we are feeling good or performing good actions (hedonism and moralism), but these are contingent and extrinsic forms of human happiness. Core happiness concerns our core, and our core is to be a thinking thing. In a slogan: the happy man is the man blessed with a happy intellect. Or again, the good of the human self consists in the good of her thoughts. There is also, to be sure, affective good and moral good; but cognitive good is what concerns us most centrally and intimately.

 

Perhaps I can clarify the position defended here by considering the meaning of life in a utopian world. By a utopian world I mean one in which all material and basic needs are met—no one wants for anything. It is a world in which scarcity has been abolished. In particular, any pleasure can be obtained with a snap of the fingers, and no one needs any help. We are to understand, then, that in utopia any hedonic or moral striving is redundant. Altruistic action is pointless and one’s own desires can be satisfied without effort or expertise. So is there nothing left to strive for—no good that has still to be attained? Has life become perfect? In utopia, as so defined, has man reached his highest good, fully and completely? Well, suppose that in utopia people have become stupid, muddle-headed, intellectually lazy, prone to numbing non-sequiturs, easily duped, mentally dull, incapable of rational thought, and generally shabby in the cognitive department. They may have all their basic desires satisfied and be always brimming with pleasure, and no one can call them immoral, but they can’t think straight for two minutes at a time and never have a creative idea. I suggest that they are lacking an essential ingredient of the good life: thinking well. There is surely something shameful about their condition; they are not living a good human life, everything considered. We might even say that they are failing to live a good life in a deep and fundamental way–their seeming contentment and blamelessness is hollow, mindless, and undignified. What is the point of bliss without intelligence? And is it really bliss if their thinking is that debased? These are not fully admirable people (if they are admirable at all).

The obvious thing to say is that the denizens of utopia have a lot of work to do if they are to live a really good life. They have much to rectify and improve. They are lacking a basic human good. They must, that is, strive for excellence of the intellect. Notice that their thoughts do not lack instrumental value, since in utopia thoughts don't have to be inherently excellent in order to conduce to the satisfaction of desire. What they lack is the kind of value intrinsic to thoughts—clarity, creativity, truth, and so on. The good life is therefore not all about desire satisfaction; there are other distinct kinds of good that must exist too. A fool with satisfied desires is still lacking an essential type of good—good of the intellect. In fact, we all instinctively recognize this source of value, because we tend to despise "happy" people who don't have an intelligent thought in their head; and we are embarrassed when our own thinking fails to measure up. No one will ever tell you they are remarkably stupid and be perfectly happy about it. People wish to be intelligent. Why? Because they value intelligence (whether it's intelligence in philosophy or carpentry)—they see goodness in it. Hence intelligence must be cultivated and celebrated. From a Cartesian perspective, the mindless utopians are failing to realize the good appropriate to their very nature: they are thinking things that do not care to instantiate the good proper to thinking things. They instantiate other goods, such as pleasure, but they don't instantiate the good that pertains to their essence. And if they are aware that they fail to instantiate this central good, then they cannot be completely happy, because they realize that they are lacking in a crucial type of value. In order to achieve the highest good available to them, they must clean up their thinking: they must discover the joys of beautiful, well-formed thoughts, of healthy cognition, of a well-oiled rational faculty. They will be better off once they acquire the capacity for clear intelligent thought, and be the happier for it. They have been neglecting a key component of the life well lived.

 

One of the nice things about having good thoughts is communicating them to other people. There is a social or interpersonal dimension to all that solitary inner cogitation. Here one might be tempted by a couple of mistaken ideas. One idea is that the value of thinking well reduces to personal desire satisfaction or moral action after all. For, it may be said, when I recognize that I have just had a good thought, especially a creative one, I see the benefits that will accrue to me by communicating that thought to others: I will be rewarded or admired for having the thought in question. So the value of the thought depends upon what it will bring to me in the way of desire satisfaction. Or again, I might reflect, altruistically, that by sharing my thought with others I can improve their thinking and hence the quality of their life—that is, I see the possible moral benefits of my thought. But both these ideas are off the mark: my thought has value independently of its communicative payoff, whether prudential or altruistic. It has value in itself, qua inner thought; its value does not derive from its potential relation to other people. And this point puts paid to a second tempting idea: that we need other people for thought to have value, either as sources of reward for the thinker or as targets of cognitive altruism. The truth is rather that the basic value of thinking is essentially solitary: my thoughts have value just by occurring in me, whether I share them with others or not (in this respect they resemble pleasure). This is because they have the value-making traits earlier enumerated quite independently of the existence of other people. We therefore do not need other people in order to enjoy the goodness of good thoughts. Good thoughts may be communicated, of course, with rewards received and lives enhanced, but the goodness of the good thought does not consist in such consequences. The value would still be there in a totally solipsistic world. Thus the good life, in its highest form, does not require other people; it can be enjoyed in complete solitude. This may strike us as a comforting reflection, what with the vagaries of other people and all that. I can quietly appreciate the merits of my excellent thinking in perfect solitude, without having to take into account the impact my thoughts might have on other people—and so, of course, can you. I can therefore be happy alone (at least qua thinking thing). There could be value in my thoughts, and hence in my life, even if I were the last human being alive. This contrasts with the moralist view of human good, which requires other people to exist to be recipients of my good deeds; I cannot live a good life in complete solitude for the moralist. In its most austerely pure form, the intellectualist view of the good life allows that a human life might be good and happy in the complete and lifelong absence of other people–as on a desert island that is conducive to excellent thinking. We do not, in principle, need others in order to be happy, i.e. to experience our core happiness. Our highest good is not a social good. Even if others are taken from us, Cartesian happiness is still possible, though other kinds of happiness are not. Thinking well is something you can do on your own, and indeed is usually best done in solitude.

 

I will now spell out some practical consequences of the conception of human good defended here. I will be brief, since the consequences are fairly obvious. The first concerns education. Although we are born with an ability to think, good thinking is not a developmental given. Bad thinking is rife and apparently quite “natural”. It takes work and discipline to acquire good thinking. I would recommend telling all children that they innately possess a wonderful thinking faculty—something to celebrate and take pride in—but I would also insist that special attention be paid to that innate faculty in order for it to develop its full potential. I would focus education precisely on cultivating the thinking faculty, with emphasis placed on its value-making role in human life. The ultimate aim of a general education should be to maximize the good thinking of the student, as adumbrated earlier. Thinking should not be regarded as merely instrumentally valuable, as in securing a “good job” (i.e. one that pays well); nor should it be taken to be a skill with purely practical benefits. It has value in and of itself. Since it is not a given that people’s thinking will achieve excellence, it is necessary to educate people systematically in how to think well. Teaching them logic would be an essential part of this, formal and informal. The quality of the student’s thinking should be the focus, not how much he or she has memorized or even his or her knowledge of a particular subject (knowledge without intelligence is sometimes worse than ignorance). That is, good thinking must be the explicit object of educational effort. Students should be tested in it too (the ETT—the Excellent Thinking Test).

This perspective should give us a new appreciation for some traditional modes of education—reading, writing, and (I would add) conversation. Reading, especially of good writing, stimulates and encourages thought; it trains thought. Reading is really an interaction between thought and text, not merely a taking in of information (at least the right kind of reading). Writing is the careful expression of thought, its embodiment; and it also disciplines thought. Mastery of written language (including punctuation!) is essential to good thinking—as to both clarity and creativity. Proverbially, those who write poorly are apt to think poorly. Conversation is also important in cultivating excellent thinking, because it tests clarity and comprehension. Students should be taught how to have an intelligent conversation—how to listen, absorb, and respond. Bad conversational habits should be discouraged. Speaking well to others is part of thinking well, because it requires quality in the underlying thought. In each of these areas, the emphasis should not be on external performance but on the inner process of thought that lies behind performance. For instance, ask the students what they thought about as they read a certain passage, not merely what the passage contained. Make sure the student's writing is expressing his or her thoughts properly, not merely meeting standards of spelling and grammar (though these are important). Don't have a behaviorist view of education but a mentalist view—what matters is what is going on inside the student's head, the quality of the hidden thought. Try to find out if a conversation improved a student's thoughts about a particular subject. Did it clarify anything? Did it lead to any new thoughts on the part of the student? The concern should not be with what "response" can be "elicited" from the student by a given "stimulus" but rather with the virtues manifest in the student's inner cognitive process. Given that the student is essentially a thinking thing, education should address itself to the nature of the student's essence. This will produce true human excellence, and with it happiness (if our earlier suggestions are along the right lines). In sum, the central point of a general education is to enable the student to think well, and education should be conceived as such: cogitation, not recitation.

With respect to politics, the practical upshot is equally obvious. Societies should be arranged so that the populace has the best chance to develop their cognitive abilities. An "open society" with genuine freedom of speech is critical to achieving this, as is the absence of propaganda and devices of conformity. Technology is valuable if, but only if, it furthers the ultimate goal of improved thinking; if it interferes with that goal it should be deplored. The aim of government should be to provide the conditions under which the cognitive wellbeing of all can be maximized (the provision of the right kind of education is thus a crucial duty of government). Distributive justice requires that the means of cognitive enhancement should be freely available and not concentrated in the hands of a few. We should judge political systems by their record in improving the intellectual wellbeing of the population (remember that this will naturally lead to success in more practical areas, such as manufacture and trade). Political turmoil, financial crashes, fanaticism, prejudice, and so on, are often the result of shoddy thinking. No doubt other sources of political difficulty exist, but better thinking can often work to block some of the worst results of dangerous forces. To put it bluntly, political evil is often enabled by human stupidity, i.e. less than optimal rational thought. Bad judgment is more often the culprit than an inherently evil will. Politicians should therefore be thinkers of the highest caliber (so we have a very long way to go).

Lastly, some quick comments about art. Our response to art typically involves perception, emotion, and thought. An attractive thesis, prompted by our reflections so far, is that the value of art depends, at least in part, on the quality of the thoughts the artwork occasions in us, especially as these thoughts infuse the emotions evoked. The mere sensory perception of the stimulus is not enough to constitute an aesthetic response, since one can imagine a creature whose senses respond as ours do but which has none of the thoughts or emotions that we have. It might be argued that it is the emotions that are crucial, not the perception as such. But surely the emotions are what they are in virtue of the cognitive response of the onlooker; considered independently of the thoughts, the emotions have no aesthetic value. The artwork makes us think in a certain way (the novel is the obvious case); and if this way is itself valuable, then the artwork is. If we had no thoughts at all when experiencing an artwork, just sensory experiences and raw emotions, then it is hard to see what value art could have. But if the artwork clarifies our thoughts, or gives rise to new and interesting thoughts, then it will have value, as a trigger to such valuable thoughts. Or better, perhaps we should say that the artwork unifies perception, emotion, and thought—with the thought ultimately providing the source of value. The thoughts evoked could be of many kinds, but unless they are worthwhile thoughts themselves nothing of value has been gained from the artwork. An artwork that evoked in the audience only confused, inept, false, prejudicial, boring, and trivial thoughts could hardly claim to possess aesthetic value—it would be a failed work of art. If this is on the right lines, then art depends for its existence on thought, and on thought having value. Art may not always improve us ethically, but it might (when it is good) improve us cognitively. Art that made people worse in their thinking would not warrant our esteem. We might say then that art that improves the art of thinking is good art. What seems clear is that the value of art cannot be separated from the value of the thoughts it evokes. This should make Plato more reconciled to art than he was.

 

My general thesis has been that thought has its own distinctive kind of intrinsic value, not to be assimilated to moral value or to desire satisfaction. This kind of value lies close to our metaphysical essence as thinking things. Thus our duty is to improve our thinking in such a way as to realize our highest good. If we do so, we can achieve a kind of happiness not otherwise achievable. Feeling good and doing good both matter to the life well lived, but good thinking matters most.

 

Colin McGinn

 

 

 

[1]The tone and style of this paper reflect its origin: it was the text for an annual lecture I gave to students and members of the general public at a small American college–hence its didactic and hortatory quality. I am not meaning to lecture my fellow professional philosophers on the value of thinking well!


Impressions of Existence

 

 


 

 

You wake up in the morning and you become conscious of the world again. For a while nothing existed for you, but now existence floods back. You become aware of external objects, of space, of time, of yourself, of your mental states. I shall say that you have impressions of existence. I am interested in the nature of these impressions—their psychological character. They are of a special kind, not just an instance of other psychological categories. I want to say they are neither beliefs nor sensations; they are a sui generis psychological state. They need to be recognized as such in both philosophy of mind and epistemology. Intuitively, they are sensory states without qualitative content—perceptual but not phenomenal (though these terms are really too crude to capture all the distinctions we need). Let me try to identify what is so special about them, acknowledging that we are in obscure conceptual territory.

First, impressions of existence are not beliefs: existential beliefs are neither necessary nor sufficient for existential impressions. I might have the impression that there is a cat in front of me, but actually be hallucinating, and know it. My sense experience gives me the impression of an existing cat, but I know better, so I don't believe a cat exists in my vicinity. Just as I can have an impression of an object with certain properties but decline to believe there is an object with those properties (I know I'm hallucinating), so I can have an impression that something exists and yet not believe it does. So existential belief is not necessary for existential impression. Nor is it sufficient, because I can believe in the existence of things that I don't have impressions of existence of—such as remote galaxies or atoms or other minds. These points make it look as if impressions of existence are standard perceptual states, like seeing red things and square things. But there is no quality that I see when I have a visual impression of existence: it strikes me visually that a certain object exists, but there is no quality of the object that is presented to me as its existence (this is an old point about existence). Redness and rectangularity can enter the content of my experience, but existence can't. It isn't a sensory quality, primary or secondary. I have the impression that a certain object exists—that's how things seem to me—but there is no quality of existence that is recorded by my senses. Even theories of existence as a first-order property don't claim that existence is perceptible in the way color and shape are; and theories that identify existence with a second-order property certainly don't regard it as perceptible. I don't see the existence of a thing, as I see its color and shape. Yet I have an impression of existence, and that impression belongs with my experience (not my beliefs). I describe myself as "under the impression" that various things exist—my experience is not neutral as to existence—but this impression is not a belief I have, and it is not a type of sensation either.

Not all experience carries impressions of existence: not imaginative experience, for example. If I form an image of a unicorn, I am not thereby under the impression that a unicorn exists. Nor do I have existential impressions of fictional characters. There is sensory content to these experiences (note how strained language is here), but I would never say that I have impressions of existence with respect to imaginary objects. On the contrary, I would say that I have impressions of non-existence. Impressions of existence are not constitutive of consciousness as such, though they are certainly a common feature of consciousness. Do I have such impressions in the case of numbers and other abstract objects? That is not an easy question, but I am inclined to say no, which is perhaps why Platonism strikes us as bold. We might have an intuition of existence here (and elsewhere), but that is not the same as an impression of existence. We don't say, "It sure as hell looks like there's a number here!" Impressions of existence belong with the senses (including introspection), not the intellectual faculties. Do I have impressions of existence with respect to language? Well, I certainly have the feeling that words exist—I keep hearing and seeing them—but as to meanings the answer is unclear. The meanings of words don't impress themselves on my senses in the way material objects do. Physical events impress me with their existence too, but fields of force not so much. We seem more or less inclined to believe in the existence of things according as they provide impressions of existence or not. We are impressed with impressions of existence, though we extend our existential beliefs beyond this basic case.

Impressions of existence undermine traditional conceptions of sense experience, such as sense-datum theory, sensory qualia, and the phenomenal mosaic, rather as seeing-as undermines these conceptions. Seeing-as is not to be conceived as a "purely sensory" visual state either. Sense experience contains more than qualitative atoms of sensation ("the given"); there is a variety and richness to it that is not recognized by traditional notions. Impressions of existence are not instances of Humean "impressions" or Lockean "ideas". To be sure, there is something it is like to have an impression of existence, which is not available to someone that has only theoretical existential beliefs, and we can rightly describe such impressions as phenomenological facts; but we are not dealing here with what are traditionally described as "ideas of sensible qualities", such as ideas (sic) of primary and secondary qualities. The impressions in question sit loosely between what we are inclined to call (misleadingly) perception and intellect, sensation and cognition, seeing and thinking. They are neither hills nor valleys, but something in between. It is thus hard to recognize their existence, or to describe them without distortion. Existence is woven into ordinary experience, but not as one thread intertwined with others (color, shape). One is tempted to describe such impressions as assumptions or presuppositions or tacit beliefs, but none of these terms does justice to their immediate sensory character—for it really is as if we are directly informed of an object's existence, as if it announces its existence to our senses. As Wittgenstein might say, we see things as existing (even when they don't). The sensory world is not an existentially neutral manifold. Seeing-as shows that seeing is not just a passive copy of the stimulus, and "seeing-existence" carries a similar lesson. It doesn't fit the paradigm of seeing a color, but so what?

This has a bearing on skepticism. It is not merely that the skeptic questions our existential beliefs; he questions our existential impressions. We don't feel a visceral affront when someone questions our belief in galaxies, atoms, and other minds—we feel such things to be negotiable—but we jib when we are told that the very nature of our experience is riddled with falsehood. Our galaxy without other galaxies is one thing, but a brain in a vat is something else entirely. The brain in a vat is brimming with impressions of existence, as a matter of basic phenomenological fact, but these impressions are all false—there are no objects meeting the conditions laid down in its experience. Here, we want to say, the skepticism is existential—it shakes us to the core. How could our experience mislead us so badly, so dramatically? It is like being lied to by an intimate friend. How could experience do that to us! It seduces us into believing that things exist, but they don't! So the shock of skepticism is magnified by the experiential immediacy of impressions of existence; it isn't just theoretical, academic. It is different with skepticism about other minds, because in this case we don't have such impressions of existence; so the skeptic isn't contradicting ordinary experience, just commonsense assumption. We assume other people have minds, but we don't have sensory impressions of other minds (pace Wittgenstein and others). We might then say there are two kinds of skepticism: belief skepticism and impression skepticism. The skeptic about other minds is a belief skeptic, but the skeptic about the external world (or the self) is an impression skeptic. Skepticism about the past, the future, and the unobservable falls into the former category, while the latter category might extend to include skepticism about our own mental states, as well as the self that has them. And certainly we have a very strong impression that our own mental states exist (not merely a firm belief). Of course, there is always a distinction between actual existence and the impression of existence, but it is surely indisputable that we have an impression that our own mental states exist—whether they really do is another question. In any case, the skeptic who questions the veridicality of our impressions, as opposed to our beliefs, is always a more nerve-racking figure.

I will mention a few issues that arise once we have accepted this addition to the phenomenological inventory. First, animals: I take it that sensing animals enjoy impressions of existence, even though they may not be capable of existential beliefs. They may not have the concept of existence but they have a sense of it—the world they experience impresses them as real. If they have mental images, there will be a contrast in this respect in their mind. This shows how primitive and biologically rooted impressions of existence are. Second, training: is it possible to train someone out of her impressions of existence? We can't train someone not to experience perceptual illusions (the system is modular), but could we train someone to cease to experience the world as existing? It's an empirical question, but I doubt it—this too is part of the encapsulated perceptual system, hard-wired and irreversible. Of course, beliefs can be readily changed by suitable training—as by pointing out their falsity. The brain in a vat will never be able to reconfigure its perceptual experience to rid itself of the impression of existence, even when thoroughly persuaded of its true situation. No matter how much it believes its experiences not to be veridical, they will keep on seeming that way (we could always do an experiment to check my conjecture). Third, are impressions of existence capable of varying by degree? Can we have stronger impressions of existence in some cases than in others? A Cartesian might think that the impression is at its strongest with respect to the self, with external objects trailing. A Humean might deny any strong impression of existence for the self, but insist on it for impressions and ideas. Judging from my own case, it seems pretty constant: the impression itself is always the same, though the associated beliefs may vary by degree. Even when I know quite well that an experience is illusory, it still seems to assert existence, just as much as when I am certain an experience is veridical. So I am inclined to think the impression doesn't vary from case to case. It is all or nothing. Fourth, are there other cases in which we have sensory impressions that fail to fit traditional categories? Are there impressions of necessity or identity or causation or moral rightness? That would be an interesting result, because then we could claim that these cases are still sensory in the broader sense without accepting that they belong with impressions of color and shape. We could thus widen the scope of the perceptual model. I leave the question open.

 

Colin McGinn


Pain and Unintelligent Design

 


 

 

Pain is a very widespread biological adaptation. Pain receptors are everywhere in the animal world. Evidently pain serves the purposes of the genes—it enables survival. It is not just a by-product or holdover; it is specifically functional. To a first approximation we can say that pain serves the purpose of avoiding danger: it signals danger and it shapes behavior so as to avoid it. It hurts, of course, and hurting is not good for the organism's feeling of wellbeing: but that hurt is beneficial to the organism because it serves to keep it from injury and death. So the story goes: evolution equips us with the necessary evil of pain the better to enable our survival. We hurt in order to live. If we didn't hurt, we would die. People born without pain receptors are exceptionally prone to injury. So nature is not so cruel after all. Animals feel pain for their own good.

But why is pain quite so bad? Why does it hurt so much? Is the degree of pain we observe really necessary for pain to perform its function? Suppose we encountered alien creatures much like ourselves except that their pain threshold is much lower and their degree of pain much higher. If they stub their toe even slightly the pain is excruciating (equivalent to us having our toe hit hard with a hammer); their headaches are epic bouts of suffering; a mere graze has them screaming in agony. True, all this pain encourages them to be especially careful not to be injured, and it certainly aids their survival, but it all seems a bit excessive. Wouldn't a lesser amount of pain serve the purpose just as well? And note that their extremes of pain are quite debilitating: they can't go about their daily business with so much pain all the time. If one of them stubs her toe she is off work for a week and confined to bed. Moreover, the pain tends to persist when the painful stimulus is removed: it hurts just as much after the graze has occurred. If these creatures were designed by some conscious being, we would say that the designer was an unintelligent designer. If the genes are the ones responsible, we would wonder what selective pressure could have allowed such extremes of pain. Their pain level is clearly surplus to requirements. But isn't it much the same with us? I would be careful not to stub my toe even if I felt half the pain I feel now. The pain of a burn would make me avoid the flame even if it were much less fierce than it is now. And what precisely is the point of digestive pain or muscle pain? What do these things enable me to avoid? We get along quite well without pain receptors in the brain (or the hair, nails, and tooth enamel), so why not dispense with them for other organs too? Why does cancer cause so much pain? What good does that do? Why are we built to be susceptible to torture? Torture makes us do things against our wishes—it can be used coercively—so why build us to be susceptible to it? A warrior who can't be tortured is a better warrior, surely. Why allow chronic pain that serves no discernible biological function? A more rational pain perception system would limit pain to those occasions on which it can serve its purpose of informing and avoiding, without overdoing it in the way it seems to. In a perfect world there would be no pain at all, just a perceptual system that alerts us non-painfully to danger; but granted that pain is a more effective deterrent, why not limit it to the real necessities? The negative side effects of severe pain surely outweigh its benefits. It seems like a case of unintelligent design.

Yet pain evidently has a long and distinguished evolutionary history. It has been tried and tested over countless generations in millions of species. There is every reason to believe that pain receptors are as precisely calibrated as visual receptors. Just as the eye independently evolved in several lineages, so we can suppose that pain did (“convergent evolution”). It isn’t that pain only recently evolved in a single species and hasn’t yet worked out the kinks in its design (cf. bipedalism); pain is as old as flesh and bone. Plants don’t feel pain, but almost everything else does, above a certain level of biological complexity. There are no pain-free mammals. Can it be that mammalian pain is a kind of colossal biological blunder entailing much more suffering than is necessary for it to perform its function? So we have a puzzle—the puzzle of pain. On the one hand, the general level of pain seems excessive, with non-functional side effects; on the other hand, it is hard to believe that evolution would tolerate something so pointless. After all, pain uses energy, and evolution is miserly about energy. We can suppose that some organisms experience less pain than others (humans seem especially prone to it)—invertebrates less than vertebrates, say—so why not make all organisms function with a lower propensity for pain? Obviously, organisms can survive quite well without being quite so exquisitely sensitive to pain, so why not raise the threshold and reduce the intensity?

Compare pleasure. Pleasure, like pain, is motivational, prompting organisms to engage, not avoid. Food and sex are the obvious examples (defecation too, according to Freud). But the extremes of pleasure are never so intense as the extremes of pain: pain is really motivational, while pleasure can be taken or left. No one would rather die than forfeit an orgasm, but pain can make you want to die. Why the asymmetry? Pleasure motivates effectively enough without going sky-high, while excruciating pain is always moments away. Why not regulate pain to match pleasure? There is no need to make eating berries sheer ecstasy in order to get animals to eat berries, so why make being burnt sheer agony in order to get animals to avoid being burnt? Our pleasure system seems designed sensibly, moderately, non-hyperbolically, while our pain system goes way over the top. And yet such excess would make pain biologically anomalous, a kind of freak accident. It's like having grotesquely enlarged eyes when smaller eyes will do. Pleasure is a good thing biologically, but there is no need to overdo it; pain is also a good thing biologically (not otherwise), but there is no need to overdo it.

I think this is a genuine puzzle with no obvious solution. How do we reconcile the efficiency and parsimony of evolution with the apparent extravagance of pain, as it currently exists? However, I can think of a possible resolution of the puzzle, which finds in pain a unique biological function, or one that is uniquely imperative. By way of analogy consider the following imaginary scenario. The local children have a predilection for playing over by the railway tracks, which feature a live electrical line guaranteed to cause death in anyone who touches it. There have been a number of fatalities recently and the parents are up in arms. There seems no way to prevent the children from straying over there—being grounded or conventionally punished is not enough of a deterrent. The no-nonsense headmaster of the local school comes up with an extreme idea: any child caught in the vicinity of the railway tracks will be given twenty lashes! This is certainly cruel and unusual punishment, but the dangers it is meant to deter are so extreme that the community decides it is the only way to save the children's lives. In fact, several children, perhaps skeptical of the headmaster's threats, have already received this extreme punishment, and as a result they sure as hell aren't going over to the railway tracks any time soon. An outsider unfamiliar with the situation might suspect a sadistic headmaster and hysterical parents, but in fact this is the only way to prevent fatalities, as experience has shown. Someone might object: "Surely twenty lashes is too much! What about reducing it to ten or even five?" The answer given is that this is just too risky, given the very real dangers faced by the children; in fact, twenty lashes is the minimum that will ensure the desired result (child psychologists have studied it, etc.). Here we might reasonably conclude that the apparently excessive punishment is justified given the facts of the case—death by electrocution versus twenty lashes. The attractions of the railway tracks are simply that strong! We might compare it to taking out an insurance policy: if the results of a catastrophic storm are severe enough we may be willing to part with a lot of money to purchase an insurance policy. It may seem irrational to purchase the policy given its steep price and the improbability of a severe storm, but actually it makes sense because of the seriousness of the storm if it happens. Now suppose that the consequences of injury for an organism are severe indeed—maiming followed by certain death. There are no doctors to patch you up, just brutal nature to bring you down. A broken forelimb can and will result in certain death. It is then imperative to avoid breaking that forelimb, so if you feel it under dangerous stress you had better relieve that stress immediately. Just in case the animal doesn't get the message, the genes have taken out an insurance policy: make the pain so severe that the animal will always avoid the threatening stimulus. Strictly speaking, the severe pain is unnecessary to ensure the desired outcome, but just in case the genes ramp it up to excruciating levels. This is like the home insurer who thinks he should buy the policy just in case there is a storm; otherwise he might be ruined. Similarly, the genes take no chances and deliver a jolt of pain guaranteed to get the animal's attention. It isn't like the case of pleasure because not getting some particular pleasure will not automatically result in death, but being wounded generally will.
That is, if injury and death are tightly correlated it makes sense to install pain receptors that operate to the max. No lazily leaving your hand in the flame as you snooze and suffering only mild discomfort: rather, deliver a jolt of pain guaranteed to make you withdraw your hand ASAP. Call this the insurance policy theory of pain: don’t take any chances where bodily injury is concerned; insure you are covered in case of catastrophe.[1] If it hurts like hell, so be it—better to groan than to die. So the underlying reason for the excessiveness of pain is that biological entities are very prone to death from injury, even slight injury. If you could die from a mere graze, your genes would see to it that a graze really stings, so that you avoid grazes at all costs. Death spells non-survival for the genes, so they had better do everything in their power to keep their host organism from dying on them. The result is organisms that feel pain easily and intensely. If it turned out that those alien organisms I mentioned that suffer extreme levels of pain were also very prone to death from minor injury, we would begin to understand why things hurt so badly for them. In our own case, according to the insurance policy theory, evolution has designed our pain perception system to carefully track our risks in a perilous world. It isn’t just poor design and mindless stupidity that have made us so susceptible to pain in extreme forms; this is just the optimum way to keep us alive as bearers of those precious genes (in their eyes anyway). We inherit our pain receptors from our ancestors, and they lived in a far more dangerous world, in which even minor injuries could have fatal consequences. Those catastrophic storms came more often then.

This puts the extremes of romantic suffering in a new light. It is understandable from a biological point of view why romantic rejection would feel bad, but why so bad? Why, in some cases, does it lead to suicide? Why is romantic suffering so uniquely awful?[2] After all, there are other people out there who could serve as the vehicle of your genes—plenty of fish in the sea, etc. The reason is that we must be hyper-motivated in the case of romantic love because that’s the only way the genes can perpetuate themselves. Sexual attraction must be extreme, and that means that the pain of sexual rejection must be extreme too. Persistence is of the essence. If people felt pretty indifferent about it, it wouldn’t get done; and where would the genes be then? They would be stuck in a body without any means of escape into future generations. Therefore they ensure that the penalty for sexual and romantic rejection is lots of emotional pain; that way people will try to avoid it. It is the same with separation: the reason lovers find separation so painful is that the genes have built them to stay together during the time of maximum reproductive potential. It may seem excessive—it is excessive—but it works as an insurance policy against reproductive failure. People don’t need to suffer that much from romantic rejection and separation, but making them suffer as they do is insurance against the catastrophe of non-reproduction. It is crucial biologically for reproduction to occur, so the genes make sure that whatever interferes with it causes a lot of suffering. This is why there is a great deal of pleasure in love, but also a great deal of pain, more than seems strictly necessary to get the job done. The pain involved in the loss of children is similar: it acts as a deterrent to neglecting one’s children and thus terminating the genetic line. Emotional excess functions as an insurance policy against the failure of a biologically crucial event. Extreme pain is thus not so much maladaptive as hyper-adaptive: it works to ensure that appropriate steps are taken when the going gets tough, no matter how awful for the sufferer. It may be, then, that the amount of pain an animal suffers is precisely the right amount all things considered, even though it seems surplus to requirements (and nasty in itself). So at least the insurance policy theory maintains, and it must be admitted that accusing evolution of gratuitous pain production would be uncharitable.

To the sufferer pain seems excessive, a gratuitous infliction, far beyond what is necessary to promote survival; but from the point of view of the genes it is simply an effective way to optimize performance in the game of survival. It may hurt us a lot, but it does them a favor. It keeps us on our toes. Still, it is puzzling that it hurts quite as much as it does.[3]

 

Colin McGinn

[1]We can compare the insurance policy theory of excessive pain to the arms race theory of excessive biological weaponry: they may seem pointless and counterproductive but they result from the inner logic of evolution as a mindless process driven by gene wars. Biological exaggeration can occur when the genes are fighting for survival and are not too concerned about the welfare of their hosts.

[2]Romeo and Juliet are the obvious example, but the case of Marianne Dashwood in Jane Austen’s Sense and Sensibility is a study in romantic suffering—so extreme, so pointless.

[3]In this paper I simply assume the gene-centered view of evolution and biology, with ample use of associated metaphor. I intend no biological reductionism, just biological realism.


Being Here

 

 

 

Human Contingency

 

 

There are two views about the existence of humans on this planet: one view says that human existence was inevitable, a natural culmination, just a matter of time; the other view says that human existence is an accident, an unpredictable anomaly, just a matter of luck (I am discounting theological ideas). I can think of myself as the kind of being whose existence was built into the mechanism of evolution, or I can think of myself as a bizarre aberration of evolution. The first view is often defended (or found natural) because evolution is thought to produce superiority, and we are superior—the pinnacle of the evolutionary process. Evolution is conceived as a process that tends towards superior intelligence, and we are the most intelligent creatures of all. The second view notes that our kind of intelligence is unique in the animal kingdom and therefore hardly a prerequisite for evolutionary success; indeed some of the most successful animals as judged by biological criteria are the least intelligent (bacteria do pretty well for themselves). Big brains are biologically costly and can be hazardous, hardly the sine qua non of survival and reproductive success. I hold to the second view of the evolutionary process (which is standard among evolutionary biologists) but I won’t try to defend it here; my aim is rather to adduce some considerations that support the view that human existence (and human success) is highly contingent in quite specific ways—we really are a complete anomaly, an extremely improbable biological phenomenon. It is a miracle that we are here at all (though a natural miracle). We might easily not have existed.

First, there are no other mammals like us on the planet: upright, bipedal, ground-dwelling. Most land animals are quadrupeds (with the obvious exception of birds, whose forelimbs are wings, and who spend a lot of time in the air), and that body plan makes perfect sense given the demands of terrestrial locomotion. Our body plan, by contrast, makes little sense, and no other species has followed us down this evolutionary path. Even our closest relatives don’t go around on their hind legs all the time, living in all manner of environments (are there any apes that live on the open plains or in the Arctic?). There is no evolutionary convergence of traits here, as with eyes or a means of communication. Natural selection has not favored our bipedal wandering in other species (contrast the vast number of quadruped species). This is by no means the natural and predictable mode of locomotion and posture that evolution homes in on. It is strange and unnatural (and fraught), not somehow logical or design-optimal. No sensible god would design his favorite species this way—unbalanced, top-heavy, swollen of head. (Note how slow even our fastest runners are compared to many other mammalian species.) Nor does evolution seem to have a penchant for large ingenious brains; it prefers compact efficient brains that stick to the point. Whatever the reason for these characteristics, it is not that our bodily design is a biological engineer’s dream: evolution has not all along been dying to get this design instantiated in its proudest achievement (as if expecting huge applause from the evolutionary judges of the universe—“And the first prize goes to…”). Cats, yes, who have been a long time in the making; but hardly humans, who arrived on the scene only yesterday and never looked the part to begin with.

Second, imagine what would happen if you drove gibbons down from the trees. Up there they are well adjusted, at home, finely tuned, grasping and swinging; but down on the ground they would be miserably out of place, athletically talentless, scarcely able to survive. Indeed, they would not survive—they would go rapidly extinct. They evolved to live in the trees, not on the ground, and you can take a gibbon out of a tree but you can’t take a tree out of a gibbon. Yet we (or our ancestors) were driven down from the trees and forced to survive in alien territory, subject to terrifying predators, cut off from our natural food supply, poorly designed to deal with life on level ground. We should have gone extinct, but by some amazing accident we didn’t—something saved us from quick extinction (and it is possible to tell a plausible story about this). Descending from the trees is not something built into the evolutionary trajectory of tree-dwelling animals, as if it were a natural promotion or development, life on the ground being somehow preferable, like a fancy neighborhood and upward mobility. That’s why other species have not followed us—those gibbons are still happily up there, as they have been for millions of years. Our descent and eventual success were not a natural progression but a regression that happened to pan out against all odds. It could easily not have happened. There is certainly no general evolutionary trend that favors animals that make the descent—which is why birds haven’t abandoned their aerial lifestyle and taken up residence on the ground. There is no biological analogue of gravity causing animals to cling to the earth’s surface. That we made a go of it is more a cause for astonishment than for confident expectation.

Third, and perhaps most telling of all, the other evolutionary experiments in our line have not met with conspicuous success. We are the only one left standing (literally). We now know there were many hominid species in addition to the branch called Sapiens, which flourished (if that is the word) for a while, but they are all now extinct—things just didn’t work out for them. And it’s not like the dinosaurs, where a massive catastrophe caused the extinction (of them as well as innumerable other species); no, these hominid species went extinct for more local and mundane reasons—they just couldn’t cut it in the evolutionary struggle. They just weren’t made of the right stuff, sadly. Slow, ungainly, unprotected, weak—they simply didn’t have what it takes. Yet we, amazingly, are still here: we made it through the wilderness despite the obstacles and our lack of equipment. How did we do it? That’s an interesting question, but the point I want to make here is that it is remarkable that we did—no other comparable species managed it. Evolution experimented with the hominid line and it didn’t work out too well in general (most mammal species living at the time of our early hominid relatives are still robustly around), but somehow we managed to beat the odds. We look like a bad idea made good—here by the skin of our teeth. The characteristics that set our extinct relatives apart from other animals did not prove advantageous in the long run, but by some miracle the Sapiens branch won out—we did what they could not. And we didn’t just survive; we dominated. Not only are we still here; we are here in huge numbers, everywhere, pushing other species around, the top of the pile. We have unprecedented power over other animals and indeed over the planet. But this is not because evolution came up with a product (the bipedal brainy animals descended from the trees) that had success written into its genes; most such animals fell by the wayside, with only us marching triumphantly forward. And notice how recently our dominance came about: we weren’t the alpha species for a very long time, no sudden success story whose innate talent shone forth from the start; instead we scraped and struggled for many thousands of years before we started to bloom—the proverbial late developers. None of it was predictable: only in hindsight do we look like the evolutionary success we have turned out to be. Anyone paying a visit to the planet before and after our improbable rise would exclaim, “I never saw that one coming!” You could safely predict the continuing success of cats and elephants, sparrows and centipedes, given their track record; but the spectacular success of those weedy two-footed creatures seems like pure serendipity. You would have expected them to be extinct long ago! You would want to inquire into the reasons for their unlikely success, looking again at their distinguishing characteristics (language, imagination, a tendency to congregate, dangling hands). These characteristics turned out to be a lot more potent than anyone could have predicted. Certainly there is no general trend in evolution favoring animals designed this way. It is not as if being driven from one’s natural habitat and being made to start over is a recipe for biological success.

For these reasons, then, the existence and success of Homo sapiens was not a foregone conclusion, a mere natural unfolding. It was vastly improbable and entirely accidental. It was like making a car from old bits of wood and newspapers that ends up winning the Grand Prix.[1]

 

Colin McGinn

 

[1]This essay recurs to themes explored in my Prehension: The Hand and the Emergence of Humanity (MIT Press, 2015). Of course, there is an enormous literature dealing with these themes. I think there is room for a type of writing about them that emphasizes the human significance of the scientific facts (one of the jobs of philosophy). It matters to us whether we are an accident or a preordained crescendo.
