Impressions of Existence

You wake up in the morning and you become conscious of the world again. For a while nothing existed for you, but now existence floods back. You become aware of external objects, of space, of time, of yourself, of your mental states. I shall say that you have impressions of existence. I am interested in the nature of these impressions—their psychological character. They are of a special kind, not just an instance of other psychological categories. I want to say they are neither beliefs nor sensations; they are a sui generis psychological state. They need to be recognized as such in both philosophy of mind and epistemology. Intuitively, they are sensory states without qualitative content—perceptual but not phenomenal (though these terms are really too crude to capture all the distinctions we need). Let me try to identify what is so special about them, acknowledging that we are in obscure conceptual territory.

First, impressions of existence are not beliefs: existential beliefs are neither necessary nor sufficient for existential impressions. I might have the impression that there is a cat in front of me, but actually be hallucinating, and know it. My sense experience gives me the impression of an existing cat, but I know better, so I don’t believe a cat exists in my vicinity. Just as I can have an impression of an object with certain properties but decline to believe there is an object with those properties (I know I’m hallucinating), so I can have an impression that something exists and yet not believe it does. So existential belief is not necessary for existential impression. Nor is it sufficient, because I can believe in the existence of things that I don’t have impressions of existence of—such as remote galaxies or atoms or other minds. These points make it look as if impressions of existence are standard perceptual states, like seeing red things and square things. But there is no quality that I see when I have a visual impression of existence: it strikes me visually that a certain object exists, but there is no quality of the object that is presented to me as its existence (this is an old point about existence). Redness and rectangularity can enter the content of my experience, but existence can’t. It isn’t a sensory quality, primary or secondary. I have the impression that a certain object exists—that’s how things seem to me—but there is no quality of existence that is recorded by my senses. Even theories of existence as a first-order property don’t claim that existence is perceptible in the way color and shape are; and theories that identify existence with a second-order property certainly don’t regard it as perceptible. I don’t see the existence of a thing, as I see its color and shape. Yet I have an impression of existence, and that impression belongs with my experience (not my beliefs). I describe myself as “under the impression” that various things exist—my experience is not neutral as to existence—but this impression is not a belief I have, and it is not a type of sensation either.

Not all experience carries impressions of existence: not imaginative experience, for example. If I form an image of a unicorn, I am not thereby under the impression that a unicorn exists. Nor do I have existential impressions of fictional characters. There is sensory content to these experiences (note how strained language is here), but I would never say that I have impressions of existence with respect to imaginary objects. On the contrary, I would say that I have impressions of non-existence. Impressions of existence are not constitutive of consciousness as such, though they are certainly a common feature of consciousness. Do I have such impressions in the case of numbers and other abstract objects? That is not an easy question, but I am inclined to say no, which is perhaps why Platonism strikes us as bold. We might have an intuition of existence here (and elsewhere), but that is not the same as an impression of existence. We don’t say, “It sure as hell looks like there’s a number here!” Impressions of existence belong with the senses (including introspection), not the intellectual faculties. Do I have impressions of existence with respect to language? Well, I certainly have the feeling that words exist—I keep hearing and seeing them—but as to meanings the answer is unclear. The meanings of words don’t impress themselves on my senses in the way material objects do. Physical events impress me with their existence too, but fields of force not so much. We seem more or less inclined to believe in the existence of things according as they provide impressions of existence or not. We are impressed with impressions of existence, though we extend our existential beliefs beyond this basic case.

Impressions of existence undermine traditional conceptions of sense experience, such as sense-datum theory, sensory qualia, and the phenomenal mosaic, rather as seeing-as undermines these conceptions. Seeing-as is not to be conceived as a “purely sensory” visual state either. Sense experience contains more than qualitative atoms of sensation (“the given”); there is a variety and richness to it that is not recognized by traditional notions. Impressions of existence are not instances of Humean “impressions” or Lockean “ideas”. To be sure, there is something it is like to have an impression of existence, which is not available to someone who has only theoretical existential beliefs, and we can rightly describe such impressions as phenomenological facts; but we are not dealing here with what are traditionally described as “ideas of sensible qualities”, such as ideas (sic) of primary and secondary qualities. The impressions in question sit loosely between what we are inclined to call (misleadingly) perception and intellect, sensation and cognition, seeing and thinking. They are neither hills nor valleys, but something in between. It is thus hard to recognize their existence, or to describe them without distortion. Existence is woven into ordinary experience, but not as one thread intertwined with others (color, shape). One is tempted to describe such impressions as assumptions or presuppositions or tacit beliefs, but none of these terms does justice to their immediate sensory character—for it really is as if we are directly informed of an object’s existence, as if it announces its existence to our senses. As Wittgenstein might say, we see things as existing (even when they don’t). The sensory world is not an existentially neutral manifold. Seeing-as shows that seeing is not just a passive copy of the stimulus, and “seeing-existence” carries a similar lesson. It doesn’t fit the paradigm of seeing a color, but so what?

This has a bearing on skepticism. It is not merely that the skeptic questions our existential beliefs; he questions our existential impressions. We don’t feel a visceral affront when someone questions our belief in galaxies, atoms, and other minds—we feel such things to be negotiable—but we jib when we are told that the very nature of our experience is riddled with falsehood. Our galaxy without other galaxies is one thing, but a brain in a vat is something else entirely. The brain in a vat is brimming with impressions of existence, as a matter of basic phenomenological fact, but these impressions are all false—there are no objects meeting the conditions laid down in its experience. Here, we want to say, the skepticism is existential—it shakes us to the core. How could our experience mislead us so badly, so dramatically? It is like being lied to by an intimate friend. How could experience do that to us! It seduces us into believing that things exist, but they don’t! So the shock of skepticism is magnified by the experiential immediacy of impressions of existence; it isn’t just theoretical, academic. It is different with skepticism about other minds, because in this case we don’t have such impressions of existence; so the skeptic isn’t contradicting ordinary experience, just commonsense assumption. We assume other people have minds, but we don’t have sensory impressions of other minds (pace Wittgenstein and others). We might then say there are two kinds of skepticism: belief skepticism and impression skepticism. The skeptic about other minds is a belief skeptic, but the skeptic about the external world (or the self) is an impression skeptic. Skepticism about the past, the future, and the unobservable falls into the former category, while the latter category might extend to include skepticism about our own mental states, as well as the self that has them. And certainly we have a very strong impression that our own mental states exist (not merely a firm belief). Of course, there is always a distinction between actual existence and the impression of existence, but it is surely indisputable that we have an impression that our own mental states exist—whether they really do is another question. In any case, the skeptic who questions the veridicality of our impressions, as opposed to our beliefs, is always a more nerve-racking figure.

I will mention a few issues that arise once we have accepted this addition to the phenomenological inventory. First, animals: I take it that sensing animals enjoy impressions of existence, even though they may not be capable of existential beliefs. They may not have the concept of existence but they have a sense of it—the world they experience impresses them as real. If they have mental images, there will be a contrast in this respect in their minds. This shows how primitive and biologically rooted impressions of existence are. Second, training: is it possible to train someone out of her impressions of existence? We can’t train someone not to experience perceptual illusions (the system is modular), but could we train someone to cease to experience the world as existing? It’s an empirical question, but I doubt it—this too is part of the encapsulated perceptual system, hard-wired and irreversible. Of course, beliefs can be readily changed by suitable training—as by pointing out their falsity. The brain in a vat will never be able to reconfigure its perceptual experience to rid itself of the impression of existence, even when thoroughly persuaded of its true situation. No matter how much it believes its experiences not to be veridical, they will keep on seeming that way (we could always do an experiment to check my conjecture). Third, are impressions of existence capable of varying by degree? Can we have stronger impressions of existence in some cases than in others? A Cartesian might think that the impression is at its strongest with respect to the self, with external objects trailing. A Humean might deny any strong impression of existence for the self, but insist on it for impressions and ideas. Judging from my own case, it seems pretty constant: the impression itself is always the same, though the associated beliefs may vary by degree. Even when I know quite well that an experience is illusory, it still seems to assert existence, just as much as when I am certain an experience is veridical. So I am inclined to think the impression doesn’t vary from case to case. It is all or nothing. Fourth, are there other cases in which we have sensory impressions that fail to fit traditional categories? Are there impressions of necessity or identity or causation or moral rightness? That would be an interesting result, because then we could claim that these cases are still sensory in the broader sense without accepting that they belong with impressions of color and shape. We could thus widen the scope of the perceptual model. I leave the question open.

 

Colin McGinn


Pain and Unintelligent Design

Pain is a very widespread biological adaptation. Pain receptors are everywhere in the animal world. Evidently pain serves the purposes of the genes—it enables survival. It is not just a by-product or holdover; it is specifically functional. To a first approximation we can say that pain serves the purpose of avoiding danger: it signals danger and it shapes behavior so as to avoid it. It hurts, of course, and hurting is not good for the organism’s feeling of wellbeing: but that hurt is beneficial to the organism because it serves to keep it from injury and death. So the story goes: evolution equips us with the necessary evil of pain the better to enable our survival. We hurt in order to live. If we didn’t hurt, we would die. People born without pain receptors are exceptionally prone to injury. So nature is not so cruel after all. Animals feel pain for their own good.

But why is pain quite so bad? Why does it hurt so much? Is the degree of pain we observe really necessary for pain to perform its function? Suppose we encountered alien creatures much like ourselves except that their pain threshold is much lower and their degree of pain much higher. If they stub their toe even slightly the pain is excruciating (equivalent to us having our toe hit hard with a hammer); their headaches are epic bouts of suffering; a mere graze has them screaming in agony. True, all this pain encourages them to be especially careful not to be injured, and it certainly aids their survival, but it all seems a bit excessive. Wouldn’t a lesser amount of pain serve the purpose just as well? And note that their extremes of pain are quite debilitating: they can’t go about their daily business with so much pain all the time. If one of them stubs her toe she is off work for a week and confined to bed. Moreover, the pain tends to persist when the painful stimulus is removed: it hurts just as much after the graze has occurred. If these creatures were designed by some conscious being, we would say that the designer was an unintelligent designer. If the genes are the ones responsible, we would wonder what selective pressure could have allowed such extremes of pain. Their pain level is clearly surplus to requirements. But isn’t it much the same with us? I would be careful not to stub my toe even if I felt half the pain I feel now. The pain of a burn would make me avoid the flame even if it was much less fierce than it is now. And what precisely is the point of digestive pain or muscle pain? What do these things enable me to avoid? We get along quite well without pain receptors in the brain (or the hair, nails, and tooth enamel), so why not dispense with them for other organs too? Why does cancer cause so much pain? What good does that do? Why are we built to be susceptible to torture? Torture makes us do things against our wishes—it can be used coercively—so why build us to be susceptible to it? A warrior who can’t be tortured is a better warrior, surely. Why allow chronic pain that serves no discernible biological function? A more rational pain perception system would limit pain to those occasions on which it can serve its purpose of informing and avoiding, without overdoing it in the way it seems to. In a perfect world there would be no pain at all, just a perceptual system that alerts us non-painfully to danger; but granted that pain is a more effective deterrent, why not limit it to the real necessities? The negative side effects of severe pain surely outweigh its benefits. It seems like a case of unintelligent design.

Yet pain evidently has a long and distinguished evolutionary history. It has been tried and tested over countless generations in millions of species. There is every reason to believe that pain receptors are as precisely calibrated as visual receptors. Just as the eye independently evolved in several lineages, so we can suppose that pain did (“convergent evolution”). It isn’t that pain only recently evolved in a single species and hasn’t yet worked out the kinks in its design (cf. bipedalism); pain is as old as flesh and bone. Plants don’t feel pain, but almost everything else does, above a certain level of biological complexity. There are no pain-free mammals. Can it be that mammalian pain is a kind of colossal biological blunder entailing much more suffering than is necessary for it to perform its function? So we have a puzzle—the puzzle of pain. On the one hand, the general level of pain seems excessive, with non-functional side effects; on the other hand, it is hard to believe that evolution would tolerate something so pointless. After all, pain uses energy, and evolution is miserly about energy. We can suppose that some organisms experience less pain than others (humans seem especially prone to it)—invertebrates less than vertebrates, say—so why not make all organisms function with a lower propensity for pain? Obviously, organisms can survive quite well without being quite so exquisitely sensitive to pain, so why not raise the threshold and reduce the intensity?

Compare pleasure. Pleasure, like pain, is motivational, prompting organisms to engage, not avoid. Food and sex are the obvious examples (defecation too, according to Freud). But the extremes of pleasure are never so intense as the extremes of pain: pain is really motivational, while pleasure can be taken or left. No one would rather die than forfeit an orgasm, but pain can make you want to die. Why the asymmetry? Pleasure motivates effectively enough without going sky-high, while excruciating pain is always moments away. Why not regulate pain to match pleasure? There is no need to make eating berries sheer ecstasy in order to get animals to eat berries, so why make being burnt sheer agony in order to get animals to avoid being burnt? Our pleasure system seems designed sensibly, moderately, non-hyperbolically, while our pain system goes way over the top. And yet that would make it biologically anomalous, a kind of freak accident. It’s like having grotesquely enlarged eyes when smaller eyes will do. Pleasure is a good thing biologically, but there is no need to overdo it; pain is also a good thing biologically (not otherwise), but there is no need to overdo it.

I think this is a genuine puzzle with no obvious solution. How do we reconcile the efficiency and parsimony of evolution with the apparent extravagance of pain, as it currently exists? However, I can think of a possible resolution of the puzzle, which finds in pain a unique biological function, or one that is uniquely imperative. By way of analogy consider the following imaginary scenario. The local children have a predilection for playing over by the railway tracks, which feature a live electrical line guaranteed to cause death in anyone who touches it. There have been a number of fatalities recently and the parents are up in arms. There seems no way to prevent the children from straying over there—being grounded or conventionally punished is not enough of a deterrent. The no-nonsense headmaster of the local school comes up with an extreme idea: any child caught in the vicinity of the railway tracks will be given twenty lashes! This is certainly cruel and unusual punishment, but the dangers it is meant to deter are so extreme that the community decides it is the only way to save the children’s lives. In fact, several children, perhaps skeptical of the headmaster’s threats, have already received this extreme punishment, and as a result they sure as hell aren’t going over to the railway tracks any time soon. An outsider unfamiliar with the situation might suspect a sadistic headmaster and hysterical parents, but in fact this is the only way to prevent fatalities, as experience has shown. Someone might object: “Surely twenty lashes is too much! What about reducing it to ten or even five?” The answer given is that this is just too risky, given the very real dangers faced by the children; in fact, twenty lashes is the minimum that will ensure the desired result (child psychologists have studied it, etc.). Here we might reasonably conclude that the apparently excessive punishment is justified given the facts of the case—death by electrocution versus twenty lashes. The attractions of the railway tracks are simply that strong! We might compare it to taking out an insurance policy: if the results of a catastrophic storm are severe enough we may be willing to part with a lot of money to purchase an insurance policy. It may seem irrational to purchase the policy given its steep price and the improbability of a severe storm, but actually it makes sense because of the seriousness of the storm if it happens. Now suppose that the consequences of injury for an organism are severe indeed—maiming followed by certain death. There are no doctors to patch you up, just brutal nature to bring you down. A broken forelimb can and will result in certain death. It is then imperative to avoid breaking that forelimb, so if you feel it under dangerous stress you had better relieve that stress immediately. Just in case the animal doesn’t get the message, the genes have taken out an insurance policy: make the pain so severe that the animal will always avoid the threatening stimulus. Strictly speaking, the severe pain is unnecessary to ensure the desired outcome, but just in case the genes ramp it up to excruciating levels. This is like the homeowner who thinks he should buy the policy just in case there is a storm; otherwise he might be ruined. Similarly, the genes take no chances and deliver a jolt of pain guaranteed to get the animal’s attention. It isn’t like the case of pleasure because not getting some particular pleasure will not automatically result in death, but being wounded generally will.
That is, if injury and death are tightly correlated it makes sense to install pain receptors that operate to the max. No lazily leaving your hand in the flame as you snooze and suffering only mild discomfort: rather, deliver a jolt of pain guaranteed to make you withdraw your hand ASAP. Call this the insurance policy theory of pain: don’t take any chances where bodily injury is concerned—insure you are covered in case of catastrophe.[1] If it hurts like hell, so be it—better to groan than to die. So the underlying reason for the excessiveness of pain is that biological entities are very prone to death from injury, even slight injury. If you could die from a mere graze, your genes would see to it that a graze really stings, so that you avoid grazes at all costs. Death spells non-survival for the genes, so they had better do everything in their power to keep their host organism from dying on them. The result is organisms that feel pain easily and intensely. If it turned out that those alien organisms I mentioned that suffer extreme levels of pain were also very prone to death from minor injury, we would begin to understand why things hurt so bad for them. In our own case, according to the insurance policy theory, evolution has designed our pain perception system to carefully track our risks in a perilous world. It isn’t just poor design and mindless stupidity that have made us so susceptible to pain in extreme forms; this is just the optimum way to keep us alive as bearers of those precious genes (in their eyes anyway). We inherit our pain receptors from our ancestors, and they lived in a far more dangerous world, in which even minor injuries could have fatal consequences. Those catastrophic storms came more often then.
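The cost-benefit logic behind the insurance policy theory can be put schematically (a rough illustrative sketch of my own, with the symbols invented purely for exposition): let c be the cost to the organism of the extra suffering, D the cost of death by injury, and p and p' the probabilities of fatal injury under mild and severe pain respectively. The expected cost of the mild system is then pD, that of the severe system c + p'D, so the severe version pays its way whenever

\[ c < (p - p')\,D. \]

Since D dwarfs c, even a tiny reduction in the probability of fatal injury can justify a great deal of extra hurt; the apparent excess is simply the premium on the policy.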

This puts the extremes of romantic suffering in a new light. It is understandable from a biological point of view why romantic rejection would feel bad, but why so bad? Why, in some cases, does it lead to suicide? Why is romantic suffering so uniquely awful?[2] After all, there are other people out there who could serve as the vehicle of your genes—too many fish in the sea, etc. The reason is that we must be hyper-motivated in the case of romantic love because that’s the only way the genes can perpetuate themselves. Sexual attraction must be extreme, and that means that the pain of sexual rejection must be extreme too. Persistence is of the essence. If people felt pretty indifferent about it, it wouldn’t get done; and where would the genes be then? They would be stuck in a body without any means of escape into future generations. Therefore they ensure that the penalty for sexual and romantic rejection is lots of emotional pain; that way people will try to avoid it. It is the same with separation: the reason lovers find separation so painful is that the genes have built them to stay together during the time of maximum reproductive potential. It may seem excessive—it is excessive—but it works as an insurance policy against reproductive failure. People don’t need to suffer that much from romantic rejection and separation, but making them suffer as they do is insurance against the catastrophe of non-reproduction. It is crucial biologically for reproduction to occur, so the genes make sure that whatever interferes with that causes a lot of suffering. This is why there is a great deal of pleasure in love, but also a great deal of pain—more than seems strictly necessary to get the job done. The pain involved in the loss of children is similar: it acts as a deterrent to neglecting one’s children and thus terminating the genetic line. Emotional excess functions as an insurance policy about a biologically crucial event. Extreme pain is thus not so much maladaptive as hyper-adaptive: it works to ensure that appropriate steps are taken when the going gets tough, no matter how awful for the sufferer. It may be, then, that the amount of pain an animal suffers is precisely the right amount all things considered, even though it seems surplus to requirements (and nasty in itself). So at least the insurance policy theory maintains, and it must be admitted that accusing evolution of gratuitous pain production would be uncharitable to evolution.

To the sufferer pain seems excessive, a gratuitous infliction, far beyond what is necessary to promote survival; but from the point of view of the genes it is simply an effective way to optimize performance in the game of survival. It may hurt us a lot, but it does them a favor. It keeps us on our toes. Still, it is puzzling that it hurts quite as much as it does.[3]

 

Colin McGinn

[1] We can compare the insurance policy theory of excessive pain to the arms race theory of excessive biological weaponry: they may seem pointless and counterproductive but they result from the inner logic of evolution as a mindless process driven by gene wars. Biological exaggeration can occur when the genes are fighting for survival and are not too concerned about the welfare of their hosts.

[2] Romeo and Juliet are the obvious example, but the case of Marianne Dashwood in Jane Austen’s Sense and Sensibility is a study in romantic suffering—so extreme, so pointless.

[3] In this paper I simply assume the gene-centered view of evolution and biology, with ample use of associated metaphor. I intend no biological reductionism, just biological realism.


Being Here

Human Contingency

There are two views about the existence of humans on this planet: one view says that human existence was inevitable, a natural culmination, just a matter of time; the other view says that human existence is an accident, an unpredictable anomaly, just a matter of luck (I am discounting theological ideas). I can think of myself as the kind of being whose existence was built into the mechanism of evolution, or I can think of myself as a bizarre aberration of evolution. The first view is often defended (or found natural) because evolution is thought to produce superiority, and we are superior—the pinnacle of the evolutionary process. Evolution is conceived as a process that tends towards superior intelligence, and we are the most intelligent creatures of all. The second view notes that our kind of intelligence is unique in the animal kingdom and therefore hardly a prerequisite for evolutionary success; indeed some of the most successful animals as judged by biological criteria are the least intelligent (bacteria do pretty well for themselves). Big brains are biologically costly and can be hazardous, hardly the sine qua non of survival and reproductive success. I hold to the second view of the evolutionary process (which is standard among evolutionary biologists) but I won’t try to defend it here; my aim is rather to adduce some considerations that support the view that human existence (and human success) are highly contingent in quite specific ways—we really are a complete anomaly, an extremely improbable biological phenomenon. It is a miracle that we are here at all (though a natural miracle). We might easily not have existed.

First, there are no other mammals like us on the planet: upright, bipedal, ground-dwelling. Most land animals are quadrupeds (with the obvious exception of birds, whose forelimbs are wings, and who spend a lot of time in the air), and that body plan makes perfect sense given the demands of terrestrial locomotion. Our body plan, by contrast, makes little sense and no other species has followed us down this evolutionary path. Even our closest relatives don’t go around on their hind legs all the time, existing in all manner of environments (are there any apes that live on the open plains or in the arctic?). There is no evolutionary convergence of traits here, as with eyes or a means of communication. Natural selection has not favored our bipedal wandering in other species (contrast the vastly many species of quadruped). This is by no means the natural and predictable mode of locomotion and posture that evolution homes in on. It is strange and unnatural (and fraught), not somehow logical or design-optimal. No sensible god would design his favorite species this way—unbalanced, top-heavy, swollen of head. (Note how slow even our fastest runners are compared to many other mammalian species.) Nor does evolution seem to have a penchant for large ingenious brains; it prefers compact efficient brains that stick to the point. Whatever the reason for these characteristics, it is not that our bodily design is a biological engineer’s dream: evolution has not all along been dying to get this design instantiated in its proudest achievement (as if expecting huge applause from the evolutionary judges of the universe—“And the first prize goes to…”). Cats, yes, who have been a long time in the making; but hardly humans, who arrived on the scene only yesterday and never looked the part to begin with.

Second, imagine what would happen if you drove gibbons down from the trees. Up there they are well adjusted, at home, finely tuned, grasping and swinging; but down on the ground they would be miserably out of place, athletically talentless, scarcely able to survive. Indeed, they would notsurvive—they would go rapidly extinct. They evolved to live in the trees not on the ground, and you can take a gibbon out of a tree but you can’t take a tree out of a gibbon. Yet we (or our ancestors) were driven down from the trees and forced to survive in alien territory, subject to terrifying predators, cut off from our natural food supply, poorly designed to deal with life on level ground. We should have gone extinct, but by some amazing accident we didn’t—something saved us from quick extinction (and it is possible to tell a plausible story about this). Descending from the trees is not something built into the evolutionary trajectory of tree-dwelling animals, as if it is a natural promotion or development, life on the ground being somehow preferable, like a fancy neighborhood and upward mobility. That’s why other species have not followed us—those gibbons are still happily up there, as they have been for millions of years. Our descent and eventual success was not a natural progression but a regression that happened to pan out against all odds. It could easily not have happened. There is certainly no general evolutionary trend that favors animals that make the descent—which is why birds haven’t abandoned their aerial life-style and taken up residence on the ground. There is no biological analogue of gravity causing animals to cling to the earth’s surface. That we made a go of it is more a reason for astonishment than confident confirmation.

Third, and perhaps most telling of all, the other evolutionary experiments in our line have not met with conspicuous success. We are the only one left standing (literally). We now know there were many hominid species in addition to the branch called Sapiens, which flourished (if that is the word) for a while, but they are all now extinct—things just didn’t work out for them. And it’s not like the dinosaurs where a massive catastrophe caused the extinction (of them as well as innumerable other species); no, these hominid species went extinct for more local and mundane reasons—they just couldn’t cut it in the evolutionary struggle. They just weren’t made of the right stuff, sadly. Slow, ungainly, unprotected, weak—they simply didn’t have what it takes. Yet we, amazingly, are still here: we made it through the wilderness despite the obstacles and our lack of equipment. How did we do it? That’s an interesting question, but the point I want to make here is that it is remarkable that we did—no other comparable species managed it. Evolution experimented with the hominid line and it didn’t work out too well in general (most mammal species living at the time of our early hominid relatives are still robustly around), but somehow we managed to beat the odds. We look like a bad idea made good—here by the skin of our teeth. The characteristics that set our extinct relatives apart from other animals did not prove advantageous in the long run, but by some miracle the Sapiens branch won out—we did what they could not. And we didn’t just survive; we dominated. Not only are we still here; we are here in huge numbers, everywhere, pushing other species around, the top of the pile. We have unprecedented power over other animals and indeed over the planet. But this is not because evolution came up with a product (the bipedal brainy animals descended from the trees) that had success written into its genes—most such animals fell by the wayside, with only us marching triumphantly forward. And notice how recently our dominance came about: we weren’t the alpha species for a very long time, nor a sudden success story once our innate talent shone forth; instead we scraped and struggled for many thousands of years before we started to bloom—the proverbial late developers. None of it was predictable: only in hindsight do we look like the evolutionary success we have turned out to be. Anyone paying a visit to the planet before and after our improbable rise would exclaim, “I never saw that one coming!” You could safely predict the continuing success of cats and elephants, sparrows and centipedes, given their track record; but the spectacular success of those weedy two-footed creatures seems like pure serendipity. You would have expected them to be extinct long ago! You would want to inquire into the reasons for their unlikely success, looking again at their distinguishing characteristics (language, imagination, a tendency to congregate, dangling hands). These characteristics turned out to be a lot more potent than anyone could have predicted. Certainly there is no general trend in evolution favoring animals designed this way. It is not as if being driven from one’s natural habitat and being made to start over is a recipe for biological success.

For these reasons, then, the existence and success of Homo sapiens was not a foregone conclusion, a mere natural unfolding. It was vastly improbable and entirely accidental. It was like making a car from old bits of wood and newspapers that ends up winning the Grand Prix.[1]

 

Colin McGinn

 

[1] This essay recurs to themes explored in my Prehension: The Hand and the Emergence of Humanity (MIT Press, 2015). Of course, there is an enormous literature dealing with these themes. I think there is room for a type of writing about them that emphasizes the human significance of the scientific facts (one of the jobs of philosophy). It matters to us whether we are an accident or a preordained crescendo.


Puzzling Pimples

The philosophy of pimples is an underdeveloped subject. Why do we react to them with such revulsion? The other night I was watching TV (Jimmy Kimmel Live, 14 August, 2018) and was treated to some footage of assorted people looking at film of pimples being burst. My philosophical antennae twitched (I have written a book on disgust).[1] The reactions were striking: turning of the head, averting of the eyes, grimaces, expressions of nausea, protests at having to watch this stuff. Why the extreme reaction? It would be hard to maintain that fear of contamination lay behind it, since these were just computer images they were looking at—and why would pimples be carriers of disease? Nor did anyone complain that they might catch something by looking at these images. Yet the reaction of disgust was intense and uniform. So what exactly was their psychological state? Presumably there was some property P such that the subjects of the experiment judged that pimples have P, where P is marked as disgusting: but what is this property P? It can’t just be the whiteness of pimples or their hilly shape or the fact that they can burst and discharge their contents—lots of things are like that and cause no revulsion at all. If plants had pimples, would we be quite so disgusted by them? The physical properties of pimples don’t constitute P; nor does their purely sensory appearance. Occurring on a human body, particularly the face, seems to make all the difference: but why? Why pimples and not freckles?

Theories have been proposed (being out of place, being a reminder of our animal nature, genital connotations, signs of ill-health, tokens of death, unruly life, and so on), but what is striking is that none of this is evident to the subject of the experience. It is not as if the subject responds by citing these theories when asked what he or she finds so disgusting. Instead subjects become strangely inarticulate when asked to explain their disgust reactions, even perplexed. And the theories do not command general assent, as well as being vague and poorly formulated. It is quite a mystery why we find certain stimuli disgusting and not others. So (a) people reliably have disgust reactions to pimples (inter alia) and (b) they don’t know what it is that so disgusts them. Indeed they are certain that pimples are disgusting (especially when squeezed) but they are ignorant about the source of the disgust: they can’t say what it is that triggers their visceral reaction. In the case of fear people can specify why the object produces the reaction of fear, because of the dangers presented by the object, but the objective properties that elicit the disgust reaction are elusive and inscrutable. The puzzle is how this is possible—what the explanation of the ignorance is. Why can’t we say what bothers us so? People are apt to resort to asking, “Can’t you just see that bursting pimples are disgusting?” When pressed to justify their reaction they fall silent, perhaps admitting that they don’t know what to say.[2]

The Freudian will insist that the reasons for disgust are unconscious, so it is not surprising if the subject can’t access them—as with revulsion at snakes (phallic symbols etc.). But this is not a credible explanation in the case of many disgust objects, including pimples—do we really have repressed sexual emotions surrounding pimples? It is not that we have repressed knowledge of the significance of pimples and that’s why we can’t articulate our revulsion. Nor would it be plausible to assimilate the case to cases of tacit knowledge, holding that we have tacit knowledge of what P is but we don’t have explicit knowledge of it (compare our tacit knowledge of the definition of knowledge, say). This does not explain why it is so difficult to excavate the grounds of disgust—why we can’t complete “x is disgusting if and only if…” Is it perhaps a cognitively unmediated reflex that has no articulation, like the patellar reflex? Does the stimulus just tap into brain circuits that initiate a reaction of nausea without any conceptual mediation? That too seems implausible: why is the reaction found only in mature humans, and why is it accompanied by a judgment of disgust? So the puzzle remains: not just the puzzle of what prompts disgust, but also the puzzle why we don’t know what prompts disgust. Why are we so baffled by our own reactions? It can hardly be that disgustingness is just a primitive property that resists all attempts at articulation—a perceptual simple. Pimples don’t have ordinary perceptible properties and in addition a further simple property of being disgusting. Compare beauty and ugliness: here we can make a shot at saying why we find things beautiful or ugly, but we are not similarly able to spell out our judgments of disgust. Thus we are liable to accusations of irrationality in the disgust case (why do we find mucus and ear wax disgusting but not tears?). Freckles we are fine with for some reason, but pimples powerfully repel us—why? Are we just arbitrarily sounding off?

Here is one thing that seems right to say: when a person is disgusted by something he or she seeks to avoid sensory contact with it. We don’t want to look at or touch or smell or taste the disgusting thing. Again, disgust differs from fear in this respect: we want to flee the fearful object yet we don’t mind observing it from a safe distance, but the disgusting object we want out of our sight whether it is close or distant. The mark of disgust is averting the gaze, as with those pimple viewers I mentioned. And it goes beyond that: we don’t even want to hear about disgusting things. Embedded in the disgust reaction is a desire not to know—we don’t want to be acquainted with, or cognitively linked to, the disgusting stimulus. We would be happier never to have encountered a disgusting object. It is torture to be subjected to unremitting perception of disgusting objects—feces being the obvious example. Fear is not like this: we don’t desire not to know fearful objects, only not to be exposed to their dangerous tendencies. We don’t find lions disgusting—we are quite happy to gaze at them—but we don’t want to be caged up with a hungry or aggressive lion. So we can say that disgust is anti-epistemic—it is a positive wish not to know. If we are unfortunate enough to witness a pimple burst on someone’s face, we want to forget the experience as soon as possible—we want our memory to fail us. We are against having this type of knowledge. If the inanimate material world produced strong disgust reactions in us, we might not want to know about it (at least we would be ambivalent about physical knowledge); in the case of the organic world, we definitely want to avoid certain kinds of knowledge about it, and might need to be trained to overcome our natural disgust reactions (as with medical training and cadavers).[3] The central message of disgust is: “I don’t want to know!” This again is rather puzzling: why are there things that we don’t want to know about? Isn’t knowledge generally a good thing? Don’t we want to add to our stock of knowledge? But the thirst for knowledge runs up against an obstacle in the shape of disgust—there are some things we prefer not to know about, especially if the knowledge is by acquaintance.

So there are two epistemological puzzles about disgust: the puzzle of why we can’t formulate what disgusts us, though we make confident judgments about it; and the puzzle of why we prefer to limit knowledge in the way we do. These are to be added to the puzzle of what makes something disgusting, i.e. what its necessary and sufficient conditions are. Pimples are a problem.

 

[1] Colin McGinn: The Meaning of Disgust (OUP, 2011). In this book I defend a general theory of what makes an object disgusting, emphasizing death-in-life and life-in-death; in the present essay I take up some ancillary puzzles.

[2] People can certainly say that slimy things are generally disgusting and that gleaming things are not disgusting, but they can give no general characterization of the class of disgusting objects. They tend merely to list the things that particularly revolt them. By contrast, they have no difficulty saying what scares them, namely dangerous things.

[3] It is fortunate that our disgust reactions are confined in the way they are: just think how difficult science would be if its subject matter made us want to vomit! What if psychological states elicited disgust reactions? Numbers? It would all be like studying pimples.


A Theory of Evil

The Uniformity of Evil

Evil comes in many varieties. A typical list would include: genocide, murder, torture, terrorism, slavery, sadism, the sexual and physical abuse of children, slander, betrayal of trust, desecration of the sacred, disfiguring, maiming, and crippling. We might count as evil the willful destruction of great works of art or architecture, in addition to such standard examples as the extermination of innocent populations. Physical harm to persons is not always involved, though it often is, along with emotional pain. Given this variety, we might be tempted to suppose that the class of evil acts is irreducibly heterogeneous, united by nothing more than brute disjunction or family resemblance. That is, we might deny that there is any one feature common and peculiar to all evil acts. The concept of evil, it may be said, is just too vague and open-textured to admit of informative definition. We must accordingly accept the diversity of evil.

I shall suggest, to the contrary, that evil is a unitary quality common to all acts rightly classified as evil. Moreover, it is quite a simple quality, which is not to say that it is easily identified in practical life. My definition of evil, to get right to it, is that it is the intentional destruction of the good—but this will need some unpacking. First, destruction: by this I simply mean, “causing to cease to exist”. The world contains a certain entity or quality at a certain time and to destroy that entity or quality is to bring about its cessation. This may be done violently or insidiously, quickly or slowly. It is the opposite of creation: instead of causing something to exist, it removes that thing from reality. So destruction is explained through the notion of existence and its negation. It is therefore a highly general notion applicable to a wide variety of cases—people, animals, artifacts, states of mind, social movements, bits of nature.

Second, the good: by this I mean any good state of affairs. Without going into the matter fully, the following list will serve our purposes (we could add to it if need be): life, happiness, knowledge, innocence, freedom, friendship, and aesthetic quality. If you think some of these items reduce to others, or should not be on the list of intrinsic goods at all, by all means amend as you see fit; the definition of evil will remain the same, even if its extension differs. I favor keeping the list fairly long and non-reductive, because I think that the good is best seen in all its variety; we don’t want theories that try to reduce every basic value to one (such as pleasure). Despite the variety of the goods, there is something they all have in common—that they are precisely good—and that is what matters to the definition of evil.

Third, intentional: by this I mean that the act in question must be intended in a certain way. If an agent destroys something good by accident, through no fault of his own, and is horrified by what he has wrought, he cannot be adjudged evil, merely unlucky. So we should say that an evil act is one that is intentional under the description “destruction of the good”: the agent foresees and intends the destruction of the good and acts as he does in order to bring this destruction about. He “knows what he is doing”. In a typical case he plans the destructive act and self-consciously carries it out.

Thus an evil act is one that involves an agent intentionally destroying what he knows to be good. The mental state of the agent incorporates the concepts of destruction and goodness—this is the content of his intention in acting. It is the intention that defines the evil agent. Is there a second-order intention associated with this first-order intention? Grice argued that communicative acts require a second-order intention—not only the intention to produce a belief in one’s audience, but also an intention that the first intention should be recognized by the audience. Thus the basic intention is transparent, not concealed and secret. In the case of the evil agent, there is also a second-order intention, but it is not a transparency intention—it is an opacity intention. The agent intends that his first-order intention should not be recognized by observers (he may even try to shield himself from knowledge of his intention). The evil agent is trying to destroy the good, but he doesn’t want people to know that this is what he is doing, possibly including himself. Even if he feels safe in his actions, fearing no repercussions, he does not want it to be apparent that his aim is precisely to destroy something good. So he will often characterize his actions in other ways—say, by arguing that he is serving a greater good. I might put it by saying that there is always a level of shame about evil actions, and hence a desire for concealment. The agent is not proud of what he does, even if he tells himself it is somehow necessary. For the agent has set about intentionally destroying what he acknowledges to be good, and this is not something he can happily admit. That is why there is often a degree of self-deception involved in evil actions (not so for virtuous actions). For this reason there will typically be a second-order intention to conceal the first-order intention. The easiest way to fulfill that intention is to commit the evil act secretly, away from prying eyes—as it might be, in a dungeon or concentration camp or in the dark. The evil agent is by nature deceptive; secrecy is his cover, his protection.

The conception of evil I am suggesting limits it to creatures capable of certain kinds of “sophisticated” attitudes. I doubt that animals are capable of evil in the sense I have defined, though they are certainly capable of impressive feats of destruction. Animals may maim or kill but they don’t do so with the kinds of intentions I have described (some of our primate relatives may have such intentions, in which case my claim applies to non-primate animals). They may cause great suffering and death but they do not do so under the description “destroy the good”. They just don’t have the concept. Evil is what results when a creature acquires such abstract concepts, so it is a uniquely human achievement. Perhaps, indeed, the very acquisition of the concept of the good (as well as the concept of destruction) is what opens the human species up to feats of evil not possible for other species. We do evil things precisely because we know what good is; we destroy the good because we apprehend things as good. Evil thus requires a certain intellectual attainment. The necessity to conceal evil acts also requires a cognitive sophistication absent in other animals (possibly with certain exceptions). It is not that animals do less harm than we do—though that is doubtless true—but rather that the harm they do does not spring from evil motives and intentions.

Now we must see how the definition fits the various types of evil I have listed. Let’s start with a hypothetical example. Suppose a university administrator, call her Eva, receives a complaint against a distinguished professor, call him Carl. The complaint is completely fictitious, being motivated by malice and a bad grade. Eva knows this, but she also knows that taking disciplinary action against Carl will, in the current climate, score her political points, help with funding, and appease the radical feminists. She decides to initiate dismissal proceedings against Carl, fully aware that this will ruin his reputation, take away his livelihood, and prevent him from any further achievements as a scholar and teacher. She also knows that he cannot fight her actions legally because it would bankrupt him to do so. Eva thus uses her power, quite cynically, to destroy Carl in order to advance her political and personal goals. Carl is duly forced out of his position, becoming impoverished and bitter. I hope we can agree that Eva was evil in acting as she did, and the reason is clear: she intentionally destroyed something good. Carl was an innocent man, a good man, and also a productive and brilliant scholar. Eva destroyed his ability to work and teach, as well as his happiness and security, along with that of his family. She did so deceptively, unethically, and callously. Her evil actions fit the definition perfectly.[1]

Next consider an artist who is tired of being unfavorably compared to another artist, whose work is vastly superior. He decides to destroy the superior artist’s work, stealing into his studio one night and burning all his paintings. Let’s suppose that he manages to destroy every one of the great artist’s works and also to prevent him producing any more (he is so traumatized by the destruction). Now the second-rate artist gets more attention and makes more sales, with his main rival eliminated. Again, these actions are clearly evil, and they fit the definition perfectly: the evil artist has intentionally destroyed works of great aesthetic value for his personal gain and out of envy.

David is a bitter man and a failure in life. He lashes out at anyone he can, belittling and insulting people. His young son Patrick becomes a target of his ire because David cannot stand the thought that his son might succeed where he failed. He sets out to damage Patrick psychologically, even going to the extreme of raping his five-year-old son. He succeeds in his aim and Patrick is so traumatized that he becomes a heroin addict and eventually commits suicide. Again, the evil is obvious, and again we can see why: David has destroyed Patrick’s innocence and happiness in order to satisfy his own warped needs. His express aim was to prevent his son from achieving anything good in life, including any chance of happiness: he destroys the good in order not to suffer the pangs of his own sense of failure.[2]

Terrorists bomb a city center, killing dozens of innocent men, women, and children. They do so because the people they have targeted practice a different religion from theirs and appear to be happy and prosperous doing so, making their own religion look shabby and regressive. Their aim is not just to kill and maim but also to undermine the peace of mind of people living in the city in question. Their actions are evil and for the usual reason: they have destroyed life, happiness, and peace of mind among the target population, because of their misguided religious zealotry.

The Nazis undertake a program of mass extermination against the Jews. Their motivation is that the Jews are far too successful in German society, owing to their intellectual and cultural superiority. The Nazis covertly acknowledge the qualities of the Jewish minority and wish to rid themselves of a people that challenge their sense of racial superiority. They accordingly murder six million Jews by means of starvation, gunshots, and poison gas. They are defeated before they can realize their project of total genocide, but they would have carried it through to the end if they could. No one can doubt the evil of the Nazis, and their actions clearly conform to the theory: they intentionally destroyed the good—life, well-being, culture, achievement—in order to gratify their own (shaky) sense of superiority.

Liz is a friend of Susan, who is also friends with Wendy. But Liz doesn’t like the friendship between Susan and Wendy; she wants Susan to herself. She decides to undermine the friendship between Susan and Wendy by telling lies about Wendy to Susan, to the effect that Wendy has been making advances to Susan’s boyfriend. Liz convinces Susan of this falsehood, using doctored photographs and what not. Susan consequently drops Wendy as a friend, causing her considerable distress. This is not evil on a grand scale, like the previous example, but it is evil nonetheless. Here the good that has been destroyed is friendship.

Iago sets out to destroy Othello, who is respected as a great general and honorable man (Iago’s reasons are obscure), by making him jealous. He succeeds in reducing the normally unflappable Othello to a blubbering heap and a murderer of Desdemona, his wife. Iago’s evil consists in this act of destruction, more of the soul than the body, in the case of Othello. Macbeth betrays the trust of King Duncan, murdering him while he sleeps, in order to advance his own ambitions, and then murders others to cover his crime. He doesn’t think Duncan is a bad king; on the contrary, he likes and admires Duncan. So he has knowingly destroyed something good. Judas betrays Jesus, despite believing him to be the Son of God, for thirty pieces of silver; he thus destroys the embodiment of goodness for a tawdry sum.

I don’t think I need to multiply examples any further: it is easy to see how the definition of evil I have presented works, and indeed it is an intuitive and natural way to characterize evil. The definition is simple and straightforward; and it offers a uniform account of what evil is. Are there any counterexamples to it? Someone might suggest that the definition does not provide a necessary condition for evil, since some evil consists in positively producing harm, not just removing the good. The evil of torture, say, is that it produces a lot of harm, either pain or injury. But I take it that this is just another way to phrase the theory under discussion: to produce harm is just to annihilate a good, i.e. the good of notbeing harmed. Harms are defined relative to goods: for example, pain is bad because it is good not to be in pain. The trouble with stating the theory in terms of harm is that it loses generality—not all cases involve an intention to harm. The envious artist was not attempting to harm his rival exactly, though he did; his intention was to destroy the good—the harm to his rival was just a by-product.  The same can be said of the desecration of sacred sites or buildings. The harm formulation gets the emphasis wrong: the evil agent recognizes the good in something and seeks to destroy it despite this; he is not just out to do harm. A run-of the-mill thug might be out to create harm by punching anyone within range, but he is not evil in the sense I am trying to capture. Evil is the intentional abolition of the good, recognized as the good. Iago, say, is not interested in bringing down some undistinguished nobody; what incites him is Othello’s distinction—the good that he embodies. And what marks Judas out is not just a betrayal of any old goat-herder from Palestine, but the fact that he betrayed the Son of God (allegedly). The harm caused might be the same in both cases, but the evil agent is doing more than just maximizing harm—he is destroying that which is indisputably good. It is true that one way to destroy what is good is to cause harm, as in crippling an athletic rival, but the evil resides in the negation of goodness, not in the harm as such. Nor is it clear that negating the goodness of a person is always harming her: if a scientist reduces the intelligence of a rival by putting a chemical in her drink, this is definitely evil, but it is not clear that the target has been harmed—she might be quite happy having average intelligence. I might set out to make you happier by chemical means, so that you spend less time at home working, and more time out having fun—as a way to lessen your intellectual output. This would be evil, but it is not clear what harm I have done to you—you might even decide you want to change your life-style in that direction anyway. What if I introduce you to a very seductive partner so as to distract you from your important intellectual work—have I harmed you?

Now it might be claimed that the conditions are not sufficient for evil, since it is possible to intend to destroy the good for morally praiseworthy reasons. Thus we have vaccination and surgery—we remove a person's tranquility and freedom from suffering by subjecting them to these procedures. Are dentists necessarily evil? The obvious answer is that the agent is aiming for the greater good of the patient, and rightly so: the short-term removal of the good is justified by the long-term creation of the good. It wasn't that Iago believed that only by destroying Othello and Desdemona could he save the city of Venice from a terrible fate: he did not commit his harmful acts with a heavy heart, with everyone's best interests in mind. So we should add that evil is the intentional destruction of the good all things considered—that is, when the destruction of one good is not justified by the production of a further good. Of course, this is not to deny that some evil agents use such justifications spuriously, as the Nazis did to excuse their genocidal actions. But in cases like dentistry it is clear that no evil is committed, since the intention is to produce long-term dental goodness in the (temporarily) suffering patient. The dentist is promoting the good, not negating it.

Let me return for a moment to the destruction of reputation, because I think it is particularly instructive. This does not involve physical harm or death, so it doesn't fit a crude definition of evil as simply "causing suffering". A person can no doubt suffer from the unjust destruction of his reputation, but that suffering does not pinpoint wherein the evil lies. The slanderer is taking aim at a manifest good and seeking to annihilate it: the good character or good standing of the person unjustly accused. Suppose the target's reputation is well earned and fully justified—it is backed by undeniably good qualities. Then the slanderous accuser is attempting to negate this manifest good—say, with a view to preventing the person accused from gaining employment. The intention is precisely to destroy a human good—that is its exact focus. This epitomizes evil, perhaps more clearly than any other case, because the good that is destroyed is specifically targeted as such. It is close to another paradigm of evil—the intentional undermining of trust. If an evil agent sets out to gain the trust of another person, a person himself without evil intent, by encouraging such trust with a view to betraying it later, the agent has attacked a deep and central human good—the ability to trust another person. A person treated in this way may never be able to trust again, which undermines many other human goods. The betrayer has destroyed something precious and precarious, and we rightly reserve our severest criticism for such actions. This is precisely what Iago and Macbeth do. It is particularly heinous because it specifically targets a central human good for annihilation. Just as a person values his good name, so he values being able to trust other human beings: to destroy these things is evil in the purest sense. Neither of these forms of evil is calculated to cause pain or death (though they may cause both of these things); what they are calculated to do is to take a certain kind of good from a person that is highly valued. Both involve depriving the target of normal social relations. The evil here consists in destroying a fundamental social good—being well thought of and kindly received, and being able to place one's trust in another. Hence these are my paradigms of evil, not the usual cases of torture and murder—because they exemplify the abstract form of evil so clearly.

We need to make a minor amendment to the definition. I have been speaking of evil agents, but there are also those who are passively complicit in evil—bystanders or onlookers. There are not just those who do the deeds; there are those who allow them to be done. It is not only the agents of the action who are evil but also the observers of it: the wife who lets her husband rape his son, those who tolerate atrocities committed by others, people who make no protest when those with power persecute the innocent—the whole sorry crew of cowards, toadies, and the morally numb. These enablers of evil should also be included under the concept. It is easy to do so: just add "or those who tolerate the destruction of the good". We thus recognize two categories of evil: active and passive.
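It may help to set the definition out schematically; the letters are mere labels, and the formulation is only a compressed restatement of what has been said so far, not a new piece of machinery:

\[
\begin{aligned}
\textbf{Active evil: } & A \text{ acts evilly in doing } \varphi \text{ when, roughly,}\\
& (1)\ \varphi \text{ destroys some good } G;\\
& (2)\ A \text{ recognizes } G \text{ as a good};\\
& (3)\ A \text{ intends the destruction of } G \text{ as such, not merely as an incidental by-product of some other aim};\\
& (4)\ \text{the destruction is not justified, all things considered, by some further good (the dentist's excuse).}\\[4pt]
\textbf{Passive evil: } & B \text{ is evil in the passive sense when } B \text{ knowingly tolerates an act satisfying (1)--(4).}
\end{aligned}
\]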

We should also make a distinction between ideological evil and non-ideological evil. Iago and Macbeth are not evil ideologues, like Stalin or Hitler. They stop when the count of corpses reaches double figures, and no general ideology drives their homicidal tendencies. But the evil ideologue envisages a much wider field of operations—sometimes totaling in the millions. Here entire sections of the population are targeted for destruction: Jews, gypsies, homosexuals, the bourgeoisie, heretics, racial minorities, and many others. The guiding ideologies are by now very familiar to us, but it is easy to miss them when they emerge, because they masquerade as moral crusades. It is often only in retrospect that an evil ideology reveals itself for what it is. Ideological evil allows people to destroy the good while telling themselves they are working for a greater good, so it is especially sinister and dangerous. Such ideologies make people think that their evil acts are not evil at all. Whenever you see people justifying destructive acts by reference to an ideology, be on the lookout for ideological evil. One sees in the ideologue a wild-eyed enthusiasm, a disregard for basic principles of fairness and justice, violent imagery and extreme response, blanket condemnation, sloganeering, demonizing, prejudice and pre-judgment, sectarianism, and social conformity. The psychology of ideology is murky, but the human mind clearly has a weakness for ideology, and the results can be devastating (consult history). I don't doubt that one of the principal attractions of ideologies is that they permit people to do evil in the guise of promoting the good.

It is important for any conception of evil to distinguish it from merely bad or immoral acts. Evil acts are always immoral, trivially, but not all immoral acts are evil. It is not ipso facto evil to break promises or steal or tell lies or defraud or assault. In certain circumstances all these can be evil, but they are not evil in all circumstances. So we had better hope that they don't turn out to be evil according to our definition. Nor do they: breaking a promise or stealing things are not intentional under the description "destroying the good". They are not even cases of intending to do harm, even if they do in fact do harm. When I break a promise to you I have not identified a good in you that I proceed intentionally to eliminate; I simply act selfishly or lazily. Nor is it my aim in stealing from you to remove a good from your life; it is simply to add a good to my life. I would be quite happy to enhance my life by leaving yours undisturbed, so long as I get what I want; taking your things is just my means to enhancing my life. It is entirely contingent that my gain is your loss.

By contrast, if I decided to steal from you in order to deprive you of something precious to you, even if it meant nothing to me, then I would be acting evilly. But ordinary instrumental theft, in which I am merely trying to accumulate more goods for myself, does not exemplify the evil schema; I am not so much destroying a good as transferring it from you to me. Even assaulting another person, say in the course of robbery, is not evil by the criterion laid down here, since this is merely a means for me to get what I want. I am not trying to obliterate a good that you have; I am simply using the means necessary to my obtaining a good that I want. I would be quite happy to get what I want without assaulting you, but as it happens I have to. If I assault you intending to destroy your happiness and future, then I am acting evilly; but not all assault is so motivated. A crime it may be, and it is certainly immoral; but it is not evil, intuitively or according to our theory. It all depends on the motive behind the assault.

This is why, if the assault is disproportionate to the intended theft, it veers into the realm of the evil. If all I need to do is twist your arm, but I hit you on the head with a brick, then I have acted evilly, because I have removed more good from you than if I had used the minimal means to enact the theft. My action is immoral either way, but it is only evil when I destroy a good as an end in itself. Just war and self-defense both involve destroying good things, notably lives, but they are not evil because there is no intention to destroy the good as an end, just as a (proportionate) means. I would even distinguish between very bad acts and the subclass of bad acts I am calling evil acts. It is very bad to steal from helpless old ladies, and more so to assault them, but this is not a case of downright evil, as when you decide to terrorize old ladies for its own sake. It is when you take aim at their wellbeing itself that you become evil. The hardened criminal is not necessarily opposed to the good of others; he is merely out for his own good, irrespective of the deprivations he brings to others. But Iago is not just a self-centered criminal using Othello for his own enrichment; his intention is rather to destroy Othello, mind and body, without regard for how he might benefit. A career criminal would find Iago irrational, given the risks and potential payoffs, but Iago is quite rational given his real aims. He is in the business of removing the good, not acquiring goods.

The evildoer is therefore often quite difficult to distinguish from the mere criminal or immoralist. The actions look the same from the outside; it is the inner attitude that makes the difference. The same act of violence can be motivated by evil intentions or by merely criminal intentions. It would be easier if all evil actions were purely evil, i.e. motivated by nothing more than a desire to destroy something good. But some evil is instrumental—the agent expects to get something out of it himself. Here is where evil can shade into mere criminality or wrongdoing. Suppose I have a selfish aim and I am not too particular about how I achieve my aim: then I am not ipso facto evil, just rather unscrupulous. I might cheat people or coerce them or rob them to get what I want. This is not yet to act evilly towards others, because my focus is not destroying what is good for them. It is said by historians that the Germans at the beginning of their persecution of the Jews sought only to have them leave Germany: they made life difficult for Jews in the hope that they would voluntarily leave the country. These were no doubt deplorable and vicious policies, but they do not compare to the policies that succeeded them. In the Nazi mind, if the Jews were not willing to leave voluntarily, then they would have to be exterminated. At first this was achieved by mass executions conducted wherever Jews lived, using bullets, but that was deemed inefficient, so special extermination camps were set up, where starvation and gas were used to kill people. Here the intentions of the Germans were nakedly sadistic and designed to bring about extreme degradation. They wanted to remove as much as possible of what makes life good from the Jews in their captivity. In this they entered the realm of evil quite decisively. They began to make the destruction of soul and body an end in itself. At the beginning they had an instrumental desire to force Jews into exile, but as time went on this was replaced by a desire to annul everything Jewish. They went from the merely criminal and bad to outright evil and depravity. They sought systematically and ruthlessly to destroy the good as exemplified in a population of people.

We find evil shocking in a way we don't find routine crime shocking. Why? The theory gives us the answer: because the evil will is aimed at the destruction of the good. The criminal will is not: it is aimed rather at the good of the criminal, with indifference towards the good of others. But the evil agent is bent on the destruction of the good as such—in the purest case, he wishes simply to destroy what is good without any benefit accruing to himself. This is shocking, because we normally think that the pursuit of good states of affairs is what human motivation is all about. The evil agent inverts that assumption and aims to annihilate the good, not create it (in himself or others). We wonder why anyone would do anything so negative; hence the evil agent strikes us as a monster, a freak, even a paradox. The merely self-interested criminal, by contrast, is normal in his motivation, just unscrupulous. We wonder what the point of evil is, if it is aimed solely at the reduction of the amount of good in the world. No one's utilities are being maximized. This raises the question of motivation, which I don't want to get into here. Suffice it to say that envy, competition, and Schadenfreude often play a role. There is also, apparently, a brute appetite for destruction for its own sake—a kind of generalized vandalism. It may have to do with assertions of power, and certainly evil shadows power. In any case it is the opposite of the normal desire to bring about the good.[3]

Let me end with the question of natural evil, i.e. the kind that arises in the world independently of anyone's will—earthquakes, floods, fires, disease, etc. This appears to be a counterexample to the theory defended here, since the natural destruction of the good is not an intentional destruction. Of course, if there is an agency behind it (say, Satan), then it fits our definition—these events are instances of intentional action. But suppose they are purely natural—what should we say about this kind of evil? My answer is that this is not a kind of evil; it is simply the occurrence of bad states of affairs. Talk of evil here is just a holdover from antiquated ways of thinking about the natural world, as if everything that happens must be willed by somebody. There are evil agents, but there aren't evil facts or events or conditions. So the notion of "natural evil" is an oxymoron, unless we explicitly postulate an agent behind the bad events. A child dying of cancer is no doubt a horrible thing, but it is not an evil thing. What is called "the problem of evil" only arises when we introduce an agent like God. The problem is usually posed by asking why God allows horrible things to happen, as if he is a passive bystander too lazy or indifferent to lift a finger; and indeed, that is a form of evil ("passive onlooker evil"). Then evil is involved, but only because of an assumed agent—not because of the horrible event in itself.

But there is also the problem of active divine evil if we suppose that God is responsible for everything that happens—if he is the cause of all natural events. Then it looks as if God is actively, intentionally, and knowingly producing very bad states of affairs—that is, he is destroying the good on a grand scale. He then appears as an evil agent. This problem of evil ("active agent evil") is even worse than the kind in which God is conceived as a mere onlooker, since it is his will that actively creates the bad state of affairs. How can God be good and yet intentionally produce very bad states of affairs? The only conceivable answer relies on the model of the benevolent dentist, but that rings very hollow to most people. In any case, there is no counterexample here to the definition, since God would be evil if he intentionally destroyed the good (without some excusing instrumental explanation). In either case (God or no God) the existence of "natural evil" poses no problem for our theory.

I hope that the theory I have presented strikes the reader as natural and intuitive, almost a truism. Truism or not, it still serves to bring order to our thinking about evil, by providing an account that discerns uniformity in the many varieties of evil. We don't have to fall back on a disjunctive analysis or a vague family resemblance story, i.e. no definition at all. We now know what to look for when we are keeping our eyes open for evil. Thus a theoretical advance might lead to a practical advance: we might become better at detecting evil, and hence preventing it. It is also good to reserve a special label for one particular kind of human badness, and we need to be able to justify the use of the concept of evil in our classifications of human actions. We need to know that the word "evil" denotes a coherent and well-defined natural kind—a distinctive moral natural kind. My view is that the concept of evil is a vital part of our moral conceptual scheme, corresponding to a very real type of human act. My aim has been to buttress the concept by providing a clear and straightforward definition of it, applicable to the major kinds of evil that exist. Absolute precision may not be possible, and borderline cases can no doubt be constructed, but I hope to have shown that the concept of evil deserves a place in our repertoire of moral concepts. Actually getting rid of evil may not be so easy.

 

Colin

[1] I do not intend to describe any actual case here; it is purely fictitious. This paper is philosophy not history.

[2] This case is based on, but departs from, the novel sequence The Patrick Melrose Trilogy by Edward St Aubyn, a study of evil.

[3] I discuss evil motivation at length in Ethics, Evil, and Fiction (Oxford University Press, 1997). Here I am defining what evil is; in that book I was concerned with its psychology.


Philosophy as Biology

In the 1960s linguistics took a biological turn with the work of Lenneberg and Chomsky.[1] Language was held to be genetically fixed, a species universal, just like the anatomy of the body. It is a biological aspect of human beings, not something cultural or learned, more like digestion than chess. Language evolved, became encoded in the genes, and is present in the brain at birth. Since linguistics is properly viewed as a branch of psychology, according to these theorists, this means that part of psychology is also biological, not something separate from and additional to biology. But then it is reasonable to ask whether more of psychology might fall under biological categories; and succeeding years saw psychology as a whole taking a biological turn. Many of our mental faculties turn out to have biological origins and forms of realization in the organism. Indeed, learning itself must be genetically based and qualifies as a biological phenomenon: what an animal learns is part of its biological nature, not something set apart from biology. True, what is learned is not innate, but many things are not innate that are part of the natural life of the organism (e.g. a bee's knowledge of the whereabouts of nectar). Dying by predation is not innate but it is certainly biological. Biology is the science of living things, and living things learn as part of their natural way of life. In any case, psychology turned from cultural conditioning to biological naturalism; it became evolutionary. How could it not, given that minds evolved along with bodies? The mind of an organism is part of its nature as a living thing; it doesn't exist outside the sphere of biology (as the soul was supposed to). The organism is a psychophysical package.

The basic architecture of language is thus a biological architecture. Syntax is an organic structure; the lexicon is a biological system too. When we study these things we are studying the properties of an organism, just like its other biological properties. They had an evolutionary origin in mutation and natural selection, and they have a biological function (probably to enhance thought, as well as serve in communication). One of the organs of the body, the brain, serves as the organic basis for language, as the heart serves as the basis for blood circulation. So linguistics (descriptive grammar) is not discontinuous with biology but part of biology.[2] It had conceived itself differently, perhaps out of a feeling that language raises us above the level of the beasts, but in these post-Darwinian times it should be relegated to biological science. Freud had made similar moves in affective psychology; the biological school in linguistics was moving in the same general direction. This broke down the old dualism and established the study of language as a department of biology, even when it came to the fine structure of grammar.

This is an oft-told tale (though still not without its detractors), but it has not yet colonized the entire intellectual landscape. Recently there has been a movement to classify consciousness as a biological phenomenon: it too is innately determined and biologically functional. Organisms have consciousness the way they have blood and bile—as a result of biological evolution and bodily mechanisms. It is not something supernatural, an immaterial infusion. That certainly seems of a piece with the biological naturalism that has dominated psychology in recent decades, but does it go far enough? Can't we also announce that phenomenology is a branch of biology? That is, the systematic phenomenology of Husserl is really a form of biology: the very structures of consciousness are biological facts. Husserl doesn't suspend the natural sciences (the epoche); he promotes one of them. Phenomenology is the study of a biological aspect of the human mind (and bats have their phenomenology too), just as linguistics is the study of a biological aspect of the human mind (and bees have their language too). When Sartre characterizes consciousness as nothingness and explores its modalities he is doing biology, because consciousness is a biological phenomenon—evolved, innately programmed, functional, and rooted in tissues of the body. To be sure, it is not reducible to other biological facts (such as brain structures); it is a biological fact in its own right. But it is a biological fact nevertheless—part of the life of a living thing. Its essence is nothingness, as the essence of the heart is pumping and the essence of the kidneys is filtering. It has a certain natural architecture, established by the genes, in both humans and animals. We certainly don't choose its essence. In so far as consciousness exhibits universals (intentionality, qualia, transparency), those are biological universals, like the universals of human grammar. Phenomenology thus belongs with psychology as a branch of biology. Biology deals with living things—as opposed to physics, which deals with non-living things—and the mind is an aspect of life. Husserl could have cited Darwin (correctly understood): The Origin of Species of Consciousness. This is not biological reductionism, simply the acknowledgment that biology extends beyond the body. It is not that religion takes up where biology leaves off.

I take it I am not shocking the reader unduly. Isn't this all part of our current secular scientific worldview? Biology by definition encompasses the life sciences, and linguistics, psychology, and phenomenology are all parts of the life sciences. Speaking, thinking, and experiencing are all modes of living—what living things do (some of them). They are, as Wittgenstein would say, aspects of our "form of life", part of our "natural history". Maybe we need to expand our conception of biology beyond the typical curriculum, but it is not difficult to see that these aspects of our nature properly belong to biology, broadly conceived (certainly not to the physical sciences). However, I now wish to assert something that may strike readers as pushing it just a bit too far: philosophy too is a branch of biology. I don't say this because I think philosophical questions reduce to biological questions; I say it because of the methodology of philosophy. We hear about the "linguistic turn" in philosophy—using the study of language as a means of arriving at philosophical conclusions about ground-floor questions. But given the biological turn in linguistics this implies that philosophy has already turned into a branch of biology. Language is a biological phenomenon and it is held to be the foundation of philosophy, so philosophy is based on a sub-discipline of biology. If the logical form of sentences is deemed central to philosophy, then it is the form of a biological entity that is in question. Logical form, like syntax, is an aspect of an evolved and biologically based entity—the architecture of a biological trait of humans. If speech acts are deemed central, then this aspect of living things will assume methodological importance—as opposed to acts of reproduction or respiration or excretion. The combinatorial power of language has rightly received considerable attention, but this too is an evolved biological trait. The biological turn in linguistics combined with the linguistic turn in philosophy together imply the biological turn in philosophy.

But what if we reject the linguistic turn? What was it a turn from? Mainly it was a turn from a more direct investigation of concepts. But investigating concepts is also investigating a biological phenomenon. Let me put it bluntly: a concept is a living thing. A concept is like a cell of the mind (and note that biological cells were so called because of their resemblance to the living quarters occupied by monks). Concepts are the units that make up thoughts and other mental states, as words make up sentences. Concepts have functions, they evolved, and they are rooted in organic structures of the brain. So when we study concepts philosophically we are studying entities as biological as blood cells or enzymes. We scrutinize these things for their philosophical yield, not for their contributions to biology as such, but they are still biological entities. To be sure, we are interested in their content, not their physiology, but having content is just another biologically fixed fact about them. Even if you think concepts are acquired by abstraction, they are still entities that exist in the context of a living organism (like big muscles or manicured nails). Conceptual analysis is the dissection of a biological entity; it is not the examination of a disembodied abstract form. There might be such forms, but they must be reflected in the natural traits of organisms at some level. We have no trouble recognizing that an animal's concepts are biological forms; human concepts are not different in kind. Bee philosophers can reflect on their bee concepts (or turn their attention to bee language), and human philosophers are in the same case—reflecting on their biologically given traits.[3] How they do that must also be rooted in biology, but the important point is that thinking is a biological fact; and in so far as philosophy concerns itself with "the structure of thought" it is a biological enterprise. The results don't concern biological matters, as opposed to matters in the world at large, but the method involves surveying a certain class of biological entities. Analyzing a concept is analyzing a living thing—as much a living thing as any organ of the body. Our intellectual faculties are indisputably aspects of our life as organic beings, and concepts are just their basic components—as cells are the basic components of bodily organs. It follows that philosophy is (a branch of) biology. Philosophy could be called conceptual biology.

I want to emphasize how biological concepts are. First, they arise through the evolutionary process (though we have little understanding of how this happened). Second, they are manufactured during embryonic development as a result of genetic realization (or if you think they are acquired later, it is by biological means, e.g. abstraction). Third, they have a biological function—to enable thought, which enables rational action. Fourth and crucially, they must be realized in some neural mechanism that enables them to have their characteristic features, chief among which are their combinatorial powers. The neurons must be able to hook up with other neurons so as to produce complex thoughts; and this hooking up must respect the logical relations inherent in thought (it's not just a matter of brute aggregation). There must be a physiology of thinking, one specific to thinking. So concepts cannot somehow float above the biological substructure; they depend upon it. Presumably this implies some sort of hidden structure to concepts analogous to the hidden structure of the cell (nucleus, mitochondria, etc.). Concepts are biological through and through. So if they are what philosophy investigates, philosophy is up to its ears in biology. It would be different if philosophy could pursue its interests without recourse to concepts, say by simply looking at the extra-conceptual world, but that idea is hopelessly wide of the mark. And even if you think that some parts of philosophy require no reference to concepts, much of it clearly does (the parts that expressly analyze concepts, in particular). Philosophy is thus one of the life sciences and should be understood as such. There are the sciences of the inorganic world—physics, chemistry, astronomy, geology—and there are the sciences of the organic world—zoology, biology, genetics, biochemistry: and within this broad grouping linguistics, psychology, phenomenology, and philosophy fall into the latter category. As I say, this is no form of biological reductionism or determinism, simply a taxonomic observation. It is making explicit what has been implicit since the time of Darwin.[4]

I want to end with a point about mathematics. The kinship between mathematics and philosophy has long been recognized; in particular, the status of mathematics as a non-empirical conceptual inquiry makes it similar to philosophy. So is mathematics also a department of biology? Well, if we view it as investigating the implications of basic mathematical concepts it presumably is, for the same reasons philosophy is. Mathematical concepts are products of evolution too, and they must have an underlying physiology. They too are living things. To the extent that mathematical concepts are part of the subject matter or method of mathematics, that subject is also fundamentally biological. Suppose mathematical ideas are innate, just as the classical rationalists supposed; then they must have evolved by mutation and natural selection, become genetically encoded, and matured in the individual organism's brain to become the conscious entities we now know. Investigating these concepts is thus an exercise in biological exploration—discovering what these evolved traits have hidden in them. How they evolved we don't know, but if they did evolve then mathematics is another kind of life science, mathematics being part of human life. The concept of number, say, is part of our evolved form of life (quite literally). Counting is like speaking—a human universal. Mathematical theory is the spelling out of the mathematical concepts we inherited from our ancestors.

 

[1] See Eric Lenneberg, Biological Foundations of Language (1967), and Noam Chomsky, Language and Mind (1968), and many other works.

[2] The work of Ruth Millikan is also an instance of the biological turn in linguistics and psychology, to be set beside Lenneberg and Chomsky. The biological concept she emphasizes is function as distinct from innateness.

[3] No one would doubt that the study of bee language belongs to biology (zoology to be precise), but it took some persuasion to get people to accept that human language is part of human biology (zoology). If bees had philosophers it would be clear enough that these philosophers are studying a biological phenomenon—bee language or bee thought. Is it that there is resistance to the very idea of human biology?

[4] In retrospect we can see the work of Locke and Hume (among others) as a form of human biology: they undertook a naturalistic study of the human mind, turning away from scholastic essences and the like. If they had known about Darwin, they might have welcomed the biological naturalism inherent in his work.


Psychological Science?

 

Is Psychology a Science?

 

 

The question is only as precise as the word "science", which isn't very precise. But I don't propose to quibble about that word (I incline to a wide application of it); instead I will compare psychology to some established sciences and note various gaps in what psychology has accomplished compared to these sciences. We might express the upshot of these reflections as denying that psychology is a mature science, or that it is a real science, or that it is an explanatory science; what matters is the reality of the distinctions I identify. Psychology is not as other sciences are, dramatically so. It is signally lacking in the chief characteristics of the sciences, as they now exist. It is weak science, proto-science, science in name but not in substance.

First consider the physical sciences—physics (pure and applied), cosmology, astronomy, and chemistry. What has physical science achieved? I would say that it has achieved success in three (interconnected) areas: origin, structure, and dynamics. To summarize: it explains the origin of the physical universe (big bang cosmology); it has uncovered the hidden composition of physical things (atoms, molecules, fields); and it has developed a dynamic theory of how the physical world evolves over time (specifying the basic forces and the laws that govern them). I shall say that it has achieved OSD success: it has established theories of how matter came to exist in its present form, how it is composed, and how it changes its properties with time. This is what we would expect of an adequate empirical scientific theory of some aspect of reality: an account of its origins, its underlying structure, and its behavior over time. Not to have answered these questions would make physics into a merely embryonic science, hardly worthy of the name (think of physics in the age of Aristotle or earlier). Now turn to biology—anatomy, physiology, evolution, and genetics. It too can boast real achievements in the three areas identified: how life originated, the structure of living things, change of biological forms over time. We now know that life on earth began with bacteria some four billion years ago (though we don't have a clear idea of how bacteria came to exist), and it has been evolving by means of mutation and natural selection ever since. We also know about the fine cellular structure of organisms, as well as the molecular structure of the genetic material. And we have a well-established theory of how organisms change over time (the aforementioned evolution by natural selection), as well as how individual organisms function biologically (blood flow, enzymes, digestion, photosynthesis, etc.). Granted, we don't know everything about life—as we don't know what preceded the big bang or how to integrate quantum physics and gravity—but we have made serious progress in understanding these three aspects of the biological world. Biology is well advanced in OSD studies. It is not that a student of the subject would have fundamental questions in these three areas about which biology has established nothing. What we expect of a reputable science is that it can tell us where its proprietary entities came from, how they are internally structured down to the microscopic level, and what explains change in them over time. Biology and physics satisfy these criteria.

But what about psychology—can it boast comparable achievements? The short answer is no. What theory in psychology plays the role of big bang cosmology in physics and Darwinian evolution in biology? None: psychology has no theory of how minds as they currently exist came to be. The best it can do is piggyback on biology, but there is no explanatory theory of how minds with their characteristic properties came to be—subjectivity, consciousness, intentionality, reason, introspection, and more. How did these develop from more primitive traits? How did the whole process begin? If a mind is like a galaxy, how did the mental galaxy form? Psychology just accepts minds as they are, animal and human, but it doesn't explain how they came to be, what triggered them, what shaped them. There is no origin story in psychology. What about structure? We can say what the parts of the mind are—the analogues of bodily organs—but we have nothing to say about the ultimate constituents of the mind, especially its hidden structure. People mumble about "bits" of information, as if these were the atoms of mentation, but really this is hand waving, not solid science. There are no microscopes of the mind, no diffraction chambers, no spectral analyses, no supercolliders. Psychology makes do with commonsense divisions into belief and desire, memory and perception, emotions and sensations; but there is no elaborated theory of fundamental constituents analogous to atoms and molecules, cells and DNA. We don't know how our mental life is built up. And what about dynamics? How does psychology explain the flow of conscious thoughts or the changing behavior of the organism? What laws are cited to predict how one thought will follow another, or how emotions influence overall mental state, or how a subject will act in a novel situation? Psychologists like to talk about various "effects" (e.g. the Zeigarnik effect), but where is the analogue to Newton's three laws of motion? We just don't have a theory of how a psychological system changes over time; at best we have rough hints about what might lead to what (as in "laws of association"). Where is the unified theory of psychological dynamics? Where are the equations of thought and action? A physicist or biologist encountering psychology for the first time might wonder how the subject accounts for origin, structure, and dynamic change—the basic facts she is familiar with in her own discipline—but her psychology professor will have little to say about these questions. He will report some experiments, maybe some established "effects", but he won't have comprehensive theories to offer in these three areas. He won't say, "I'm glad you asked that question because we have great theories of how minds originated, what composes them, and how they change with time". If he is honest, he will mutter in a low voice, "Good question, we're working on it", perhaps followed by some boilerplate about psychology being a young science and all that (but is it really any younger than physics and biology?).

Compare linguistics. Chomsky has long pointed out areas of ignorance in that field, mainly relating to the evolutionary origins of language and to the free use of language in speech ("performance"). The evolution of language is largely a mystery, especially the origins of the lexicon, and the stimulus-freedom of speech makes language use hard to subsume under predictive laws. Some progress has been made with linguistic structure, but even here it is reasonable to wonder whether we have reached linguistic bedrock. So linguistics has not achieved what the established sciences have. Linguistics is really a branch of psychology, and it looks as if psychology in general has the limitations Chomsky finds in this branch of it. There is some grasp of structure, basically extrapolated from commonsense psychology (including commonsense linguistics), though it has nothing like the depth we find in physics and biology. But the origins of the language faculty in evolutionary history, and how that faculty is manifested in action, are shrouded in mystery. Whether the mystery is temporary or permanent, contingent or necessary, is another question; what is clear is that psychology and linguistics do not have the kinds of explanatory success found in the established sciences. And what holds for linguistics and psychology also holds for sociology and anthropology (and maybe economics): how social structures and cultures came about is unexplained except in the most rudimentary terms, and there is no generally accepted dynamic theory of how they change over time (Hegel and Marx, anyone?). Human history is not like the history of the physical universe or of the biological world. Freud made some heroic efforts to do for psychology and human history what physics and biology have achieved in their domains, but his efforts are not generally lauded. The simple fact is that the psychological "sciences" are nowhere near as advanced as the physical and biological sciences. They suffer from OSD deficiency. This is not, of course, the fault of psychologists, as if they were just too lazy or incompetent to bring the subject to maturity; it is inherent in the subject itself. It is very difficult to explain how minds originated, what their compositional structure is, and how they change over time.[1] I intend no aspersions on the field or its practitioners; I merely point out certain significant asymmetries. Presumably the mind has some sort of intelligible origin (it didn't just spring into existence from nothing), and some sort of internal structure (the "cells of thought"), and some dynamic principles (not just stochastic chaos): but we are far from understanding what any of these might be. Nor do I see any relief on the horizon. It is pretty amazing that we have achieved the kind of insight in physics and biology that we have, and it didn't happen overnight; there is really no guarantee that psychology will ever repeat these successes. Psychology might always remain a semi-science.[2]

 

[1] Note the contrast with the brain as an organ of the body. There is no more difficulty explaining its origin than that of other organs of the body; it is composed of cells that are composed of molecules; and its dynamic mode of operation is the nerve impulse that changes the brain's state over time. We have a science of the brain, much as we have a science of matter and life, though of course it still has a long way to go. But that doesn't provide us with the right level of explanation to account for the mind. Perhaps this is (partly) why people tend to favor neural reductionism: it enables psychology to mirror the theoretical successes of the other sciences.

[2] That is not to say that it can't be useful or illuminating, just that it may never mimic the OSD successes of physics and biology.


Conceptual Skepticism

 

 

 

Skepticism About the Conceptual World

 

 

I will describe a startling new form of skepticism, to be set beside more familiar forms. It lurks beneath the surface of recent work on meaning and reference. Consider "water": it has both a meaning (sense, connotation) and a reference (denotation, extension). Suppose its meaning is equivalent to "tasteless transparent liquid found in lakes and flowing from taps". These are the properties a typical speaker associates with the word (its "stereotype"). Combining these words with "water" produces an analytic a priori truth. They provide an analysis of the concept we associate with "water". Yet they are not epistemically necessary: it could turn out that water is none of these things, as it could turn out that water is not H2O. Perhaps we have all been under a giant illusion about these properties of water: by some quirk of our nervous system a yellowish lemony-tasting liquid has appeared to be transparent and tasteless, and what fills lakes and flows from taps is some other liquid than water. These possibilities are not beyond the powers of an evil demon to contrive. We cannot be certain that water has the properties we typically associate with it—mistakes are possible, illusions conceivable. Water might turn out to have none of the properties included in its stereotype, i.e. its meaning or connotation. Yet we would still be referring to water by "water", whatever water is. The reason this is possible is that we fix the reference of "water" in a certain way, namely by pointing to a sample of a certain natural kind and saying, "Let 'water' designate whatever natural kind underlies these appearances"—whether those appearances are veridical or not. We might thus have successfully referred to a certain liquid and yet acquired quite false beliefs about its properties. Reference is independent of opinion: appearances can be inaccurate even as reference succeeds. If we adopt a causal theory of reference, we can say that the reference-establishing causal relations are logically independent of whatever beliefs we form about the extension of the term. What this means is that the sentence "Water is a tasteless transparent liquid found in lakes and flowing from taps" is both analytic and conceivably false. It could be false because water might actually have none of these properties and yet the meaning we assign to the term includes them: they are contained in the connotation but the denotation lacks them. Thus we have an analytic but false statement—it makes a false ascription of properties to the thing we refer to.
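At the risk of over-formalizing, the structure might be displayed in a rough schema; the notation is introduced here purely for bookkeeping and adds nothing to the argument:

\[
\begin{aligned}
&S(t):\ \text{the stereotype of the term } t\ \text{(the properties speakers associate with it)}\\
&\mathrm{ref}(t):\ \text{the natural kind } K \text{ actually underlying the sample used to introduce } t\\
&\text{Analytic for us: ``} t \text{ is } S\text{''}, \text{ since the stereotype is part of the meaning of } t\\
&\text{Not guaranteed: that } K \text{ instantiates } S, \text{ since reference is fixed independently of } S\\
&\text{Upshot: ``} t \text{ is } S\text{'' can be analytic in our mouths and yet false of what } t \text{ denotes}
\end{aligned}
\]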

Now consider skepticism. Hearing about the semantics of natural kind terms the skeptic will seize his chance: he will insist that all our natural kind terms are vulnerable to a skeptical doubt, namely that propositions formed from them are not knowable with certainty, even when analytic. We might be wrong that water is tasteless and transparent, that lightning is bright and precedes thunder, or that gold is shiny and malleable. These are characteristic skeptical claims, but the extra turn of the screw is that analytic truth does nothing to preserve them from skepticism. The semantics of the terms combines demonstrative reference with fallible descriptive stereotype, so that the reference could succeed while the stereotype is inaccurate. Sense (descriptive content) doesn’t determine reference, but it can generate analytic truths nonetheless. Since the descriptive content of sense is possibly erroneous, we can generate fallibly known analytic truths—we can’t be certain that these are truths. Water might turn out not to be what we suppose, however much what we suppose fixes its meaning (one aspect of it at least). In the extreme, water might be a bitter tasting dark-colored solid that has presented a totally misleading appearance to us all these years—so the skeptic will contend, and he is notoriously difficult to thwart. What we have been designating by the term is quite different from the way it appears (if one day the scales fell from our eyes, we would exclaim “So that’s what we’ve been drinking all this time!” while beholding a mud-like substance).

How far can the skeptic push it? Consider knowledge (the word, the concept, and the thing): we customarily suppose that the meaning of "know" includes belief, truth, and justification—that is its sense or connotation. It also refers to a specific mental state of a person. We take it that this reference instantiates the properties contained in the meaning of the term—that it involves belief, truth, and justification. That's what we mean and it instantiates. But the skeptic wants to know what makes us so sure that the thing we refer to has the properties we ascribe to it: why assume that the nature of the mental state that "know" refers to actually includes the properties we ascribe to it? Why couldn't it be like the case of water? Suppose we are confronted by a sample of a certain mental kind and we announce, "Let 'know' refer to the mental state before us", while believing that the state in question is an instance of true justified belief—maybe that's just the way the sample happens to strike us. But suppose that, contrary to our impression, none of this is true: the state in question is unjustified belief in a falsehood, or not even belief at all but disbelief (the sample is being insincere in its assertions). Then, the skeptic maintains, by citing the semantics of natural kind terms we can say that knowledge is not true justified belief—the state we are referring to is unjustified false belief! Now the question is what we can say to rule this possibility out in our case: might it not turn out that knowledge is not true justified belief at all but false unjustified belief? This is epistemically possible, the skeptic contends, given the way the term "know" was introduced and given the facts of the case. So we should admit that it might turn out that knowledge is not true justified belief, because the term designates something quite different from what we supposed—we had false beliefs about the extension of the term as it actually was at the moment of reference-fixing. But that implies that the analytic truth "Knowledge is true justified belief" might turn out to be false, simply because the state originally designated lacked the properties we thought it had. Our false ideas entered its meaning, but reality failed to confirm these ideas. The proposition might be analytic but false, and the skeptic wants to know what we can say to rule out this possibility. Of course, it is also epistemically possible that knowledge is true justified belief, but the skeptic is asking why we prefer that alternative to his skeptical possibility. We should be agnostic.

Or consider "bachelor" and suppose that the initial sample is quite other than what the introducers of the term think: they think they are confronted with a bunch of unmarried males but in fact they are confronted by a group of married females. These individuals are masquerading as unmarried males while being just the opposite. The fooled introducers then stipulate, "Let the word 'bachelor' designate the marital and gender status of this group". They fix reference to the property of being married and female while mistakenly believing that the group in question is male and unmarried. Then the sentence "Bachelors are unmarried males" is false for these speakers, despite their firm belief in its truth. It may indeed be analytic in their language, but it is still false. And now the skeptic asks how we can rule this out in our own case: couldn't it turn out that bachelors are married females? Maybe our ancestors introduced the term in the way described and thereby fixed its reference to married females; their beliefs were false of these individuals, but so what? Thus we today refer with "bachelor" to the natural kind of married females, even though we think we refer to unmarried males. Or suppose all the people we have ever met who called themselves "bachelor" and gave every appearance of being male and unmarried were really married women in disguise—wouldn't that tie down the reference to that group, not the group we thought we were referring to? If this is the way reference works in general, then such skepticism would seem indicated. It might turn out that bachelors are married females! It is not epistemically necessary that bachelors are unmarried males, despite the analyticity of the corresponding sentence. The skeptic thus extends his doubts to knowledge of analytic truths.

Let us make explicit what is going on in this argument. On the one hand, we have the concept, an item in the mind; it contains various components, which fix the set of analytic truths with respect to that concept. On the other hand, we have the extension of the concept, an item in the world; it has a certain objective nature, which fixes its de re essence. We normally suppose there is a correspondence between these two levels: the components of the concept actually capture the objective nature of the thing designated. In the water case it is easy to see how this correspondence could be disrupted, because we can be wrong about the properties of the natural kinds we are referring to. The skeptic then seeks to extend this point to other concepts by adopting the same type of analysis: there is the concept we have of knowledge, and there is the fact of knowledge itself; but the former might not correspond to the latter—knowledge itself might not have the properties the concept ascribes to it. What guarantees that the objective thing has the properties we think it has? It might be like the case of water. Similarly, the concept we have of a triangle implies that triangles have three sides, but the skeptic conjures a scenario in which we introduce the term "triangle" in reference to things that are actually four-sided, thereby referring to such things with the word "triangle". Then "Triangles are three-sided" would be analytic for us, given our beliefs, but actually false. And the skeptical question is how to demonstrate that our present use of "triangle" is not like this: maybe we actually refer to four-sided figures with the term "triangle"! Might we not one day discover that triangles have four sides, contrary to what we now believe? We might discover that we are brains in vats, and we might discover that we refer to quadrangles with "triangle". That would be strange, to be sure, but not logically impossible.

How could we respond to the skeptic? Gap closing is the obvious maneuver: don't let the concept and the property diverge. Then there will be no logical space between what we think and what is. Thus we might identify the property with the concept: for x to have P is just for x to have C (correctly) ascribed to it. But this gives rise to an idealism that destroys objectivity—as is typically the case with this kind of counter to skepticism. Clearly there was water before there was the concept of water, and similarly for knowledge, triangles, and bachelors. At the other extreme we could try going radically externalist and make the concept nothing more than the property: then what is in the mind will not be separate from what is in the object. The trouble is that this will entail that we can't be misled about the nature of water, or mistaken about what knowledge involves, because our concept will simply be whatever these things are objectively. A more realistic suggestion is that there is a kind of pre-established harmony between the concept and the property: the constituents of the concept necessarily correspond to the constituents of the fact (the nature of the property). But again, this fails to allow both for error and for incompleteness: our concept may misrepresent the property and it may fail to exhaust its nature. For example, there may be more to knowledge than we think, and our conception of knowledge may be inaccurate in some respects. This is precisely what the skeptic is capitalizing on by pointing out the epistemic possibilities: water may not be as we suppose it to be, no matter how central to our concept a certain feature is; and similarly for other natural kinds. His point is that analytic containment in the concept is no proof against the possibility of such errors about reality. The world contains various phenomena and we are trying to capture them in our concepts, but we may fail; so it might always turn out that things are not as we take them to be. Water might not be transparent, knowledge might not be true, triangles might not have three sides. Of course, if these things have those properties, then it is plausible that they have them necessarily; but the question is whether we know with certainty that they do, and the skeptic finds reason to doubt this. Metaphysical necessity does not imply epistemic necessity.

It might be said that the underlying semantics presupposed by the skeptic applies to semantically simple expressions like proper names and natural kind terms but that not all terms fit this mold. The former terms denote by mechanisms independent of their descriptive content, which forms a separate component of meaning; but terms like “knowledge” and “triangle” and “bachelor” don’t work like that—here the descriptive content is active in fixing reference. Thus the meaning of “know” must imply truth in knowledge itself because that is simply how the term determines its reference—“know” refers by definition to what is believed and true (etc.). It is semantically complex and works as a logical conjunction of conditions, unlike a proper name. This, however, is all very debatable and anyway faces an obvious retort: what about the simple elements that make up the meaning of the term? These will be subject to the same skeptical argument that we started with: maybe “believe” and “true” denote properties other than those we suppose because they were introduced under conditions of fallibility. We announce “Let ‘believe’ denote that mental state” in front of a sample, convinced that we are referring to a state of internal assent, but in fact our sample is in a state of suspended assent or outright dissent. We don’t have infallible access to other minds! Same for triangles (so we can’t wriggle out of the problem by appealing to introspective authority): we suppose there are three-sided figures in front of us and we stipulate that “triangle” refers to that geometrical form, pointing at the sample; but in fact we are suffering from a visual illusion and four-sided figures constitute the sample. The skeptic is saying that we can always misrepresent the properties of the sample that forms our semantic anchor, which is why it may turn out that we have actually done so. Analytic containment in the concept is no protection against this possibility. Skepticism about the external world thus generalizes to skepticism about what we regard as definitional. That is to say, we can be wrong about the essence of things as well as about their accidental properties, even when that essence is supposedly contained in our concepts. Since complex concepts resolve into simpler ones, the skeptical problem can always recur at the basic level.

This skeptical problem deserves to be called a skeptical paradox because whether or not I know anything I surely know what it would be to have knowledge—I surely know that I cannot know what I disbelieve or what is false! Similarly, I may not know whether there are any triangles in nature but I surely know what a triangle is—I know it’s not a circle! But the skeptic is denying, startlingly, that I do have such knowledge; his claim is that it is epistemically possible that knowledge is not of truths and doesn’t require belief. We just don’t have that degree of apodictic insight into the nature of the things in question; we merely conjecture that this is the nature of what we refer to. We may be profoundly ignorant of the objective nature of the kinds of which we speak. Philosophers took it for granted that knowledge of analytic truths is free from skeptical doubt, but it turns out that analytic truths are swallowed up too. How far can this skepticism go? What about our knowledge of what “red” means, or “plus” or “and” or “ought”: can we conceive of scenarios in which people are radically mistaken about what these terms designate? Could it turn out that genocide falls into the extension of “good”? Could “red” turn out to designate blue? Could “and” mean disjunction? These would be paradoxical results indeed, so any skepticism that implied such things would deserve that label.[1]

The skepticism I have been expounding doesn’t apply to our knowledge of our concepts as such: we can know with certainty what our concepts contain. We know with certainty that our concept of water includes being transparent and tasteless, and similarly for our concept of knowledge in relation to truth. The skeptic questions the move from this to our putative knowledge of the reference of our concepts—whether we know that water itself is transparent and tasteless, or that knowledge itself involves truth. What holds of the concept is not the same as what holds of the object it refers to. Thus I can be certain of analytic truths in so far as they concern what is true of my concepts, but I can’t (with certainty) infer from this anything about the essence of what I am referring to with these concepts. Hence (according to the skeptic) water might turn out not to be transparent and knowledge might turn out not to be true and triangles might turn out not to have three sides.[2]

 

Colin McGinn

[1] Note that the skepticism I am considering does not contend that there is no fact of the matter about what words mean, only that we cannot know such facts. We could be radically mistaken about what words actually do mean.

[2] I have said nothing here about skepticism concerning rule following, as expounded in Saul Kripke’s Wittgenstein on Rules and Private Language (1982), but that is certainly a useful comparison point for the skepticism presented here (they are not at all the same thing).
