The Inefficacy of Skepticism
There is something peculiar about skepticism, psychologically speaking. It seems both cogent and ineffectual. Our epistemic practices are roundly criticized, convincingly so, and yet we decline to modify them; we soldier on regardless. We don’t refute the skeptic, but we don’t heed him either. Why are we so complacent, stubborn, and irrational? We are like flat-earth true believers who know all the refutations of their position but cling to it anyway. Of course, there are those who seek, by a variety of stratagems, to blunt the force of skepticism, thus bolstering their initial naïve epistemology; but this always reeks of cognitive dissonance, and is generally unpersuasive. And there are many others who reject such soothing stances and yet still decline to alter their epistemological assumptions—what is it with them? Skepticism thus retains both its power and its impotence: hard to rebut, but also hard to go along with.
We can imagine that not being so. Suppose some possible beings develop the kind of commonsense epistemology possessed by normal humans—the usual concepts of belief, knowledge, justification, reason, evidence, etc., along with the usual distribution of these concepts. They believe, and take themselves to know, various truths about their environment, other minds, the past, the future, etc. But also suppose that at the very outset skepticism starts to rear its head—it comes with the territory. Skeptical possibilities become widely entertained, and not just in university departments. Children start to bring them up, spontaneously and nervously, around the age of ten after a few blissful years of epistemological complacency. People quickly realize there is something seriously amiss with their assumptions. It is rather like starting life with superstitious beliefs and then soon discovering science. We would expect some major revisions of opinion, on pain of irrationality. Maybe some diehards will try to cling to their earlier views, but most will capitulate to reason, accepting that they are really not certain that there is a table there, or that they are not brains in a vat, or that other people have minds. They might well also agree that such beliefs are not even adequately justified and are not genuine instances of knowledge. They accordingly cease to talk and think this way—as people outgrow superstition, religious dogma, and prejudiced politics. Skepticism is given its due; earlier opinion is discarded. One can be brought up to believe the tenets of a particular religion, but it is within one’s power to abandon those tenets under cogent criticism—and so it might be for one’s “epistemological religion”. And it is not as if you will go to hell if you discard your callow epistemological dogmas.
Or consider this possibility: a group of beings holds to commonsense epistemology for thousands of years with nary a skeptical thought among them. No one has so much as thought of skepticism, so there isn’t any pressure for revision. Children never wonder about what colors other people see, or ask if it’s all a dream, or how we know that the future will resemble the past. Then a particular individual, call her Helena, through some quirk of genius, comes up with skeptical arguments, which (we are supposing) are thoroughly convincing. Helena publishes these arguments in a scientific journal and they reach the public via global press coverage (think Einstein on relativity). Headline: Scientist Discovers We Don’t Know Anything. It is conceivable that our beings will accept the findings in question, as people have accepted other earth-shattering discoveries (species evolved by natural selection, the earth moves, we are not the center of the universe, slavery is wrong). They thus revise their epistemic belief system to fit the new ideas, declining to throw around words like “certain”, “know”, and “justified” with the abandon of yore. It becomes part of received opinion that these words don’t apply as widely as was believed heretofore, maybe applying only to one’s knowledge of one’s own mental state. So late-onset skepticism could have the same efficacy as early-onset skepticism; the mere fact of arriving late on the scene doesn’t imply that skepticism must be resisted. It is just that epistemic error has been around for a very long time. So why don’t we fit either of these models?
It is an empirical question when skepticism first arose for humans. We can safely assume that it wasn’t coeval with the onset of epistemic concepts and their associated beliefs: humans didn’t discover skepticism at the precise moment they began to deploy epistemic concepts, or a short time thereafter. We know that skepticism was much discussed in ancient Greece, but that must post-date by thousands of years the existence of commonsense epistemology. So we would expect some inertia about abandoning this aspect of ordinary belief. But such inertia didn’t stop other ancient canards from going extinct. Would humans have found it easier to let skepticism have its way if it had cropped up later in history, as in my second scenario? It might be argued that skepticism has grown stale for us and thus has lost its initial sharp edge; if it had been discovered at the time of the scientific revolution, it might have had more potency. But this seems implausible: it would still have encountered stiff resistance if discovered in 1680 or 1952. Why? It is conceptually possible for there to be beings who accept its urgings, so why are we so intransigent? It would be different if the skeptical arguments were simply fallacies and follies, easily defeated and defanged, but (we are assuming) that is not so. So what is going on here? Why are we so internally divided on the subject, schizoid almost? Why do we accept the force of skepticism and yet do nothing about it, even when we are quite self-conscious about our position? Why the psychological split?
I have a theory. Consider perceptual illusions such as the Müller-Lyer illusion: these are products of our epistemic faculties—they purport to represent aspects of the world, inviting belief, and offering justification (i.e., how things seem). Yet they are false: those lines are not really unequal in length. The perceiver need not know this and so may form the belief that the lines are not equal. This counts as a piece of commonsense perceptual epistemology. We can imagine people being under this illusion for centuries, never realizing that it is an illusion. Or consider the moon illusion: people might really believe that the moon shrinks as it rises over the horizon into the night sky. Now some scientist points out that these are indeed illusions—things seem that way but they are not really that way. Some people may persist in their earlier beliefs in the face of compelling evidence to the contrary, but most will come to accept that they were wrong—they have been under an illusion. But there is no corresponding change in their perceptual responses: it still looks to everyone as if the lines are unequal and that the moon shrinks. The perceptual system is not penetrable by outside information; it is “encapsulated”. It is modular, stimulus-dependent, and bottom-up. No amount of reasoning, no matter how cogent, can prevent this system from mechanically delivering its standard output. It is, we might say, impervious to reason—a kind of reflex that we have to live with even when we know it is leading us astray. You might have very good reasons for suppressing your patellar reflex, but find it impossible to do, given the way your nervous system is wired; well, the visual system is like that, sublimely indifferent to higher rationality. From this we may infer causal isolation: central beliefs cannot cause the visual module to vary in its mode of response. Such beliefs are quite inefficacious when it comes to controlling vision.
So we evidently contain two sorts of informational system that are insensitive to each other (the visual system can’t usurp the central cognitive system either). We thus arrive at a dual-system epistemic psychology. The two systems function autonomously, have a different ontogenesis, and may even employ distinct sorts of mental representation; no doubt they also have different neural substrates. They co-exist and can interact but they don’t mingle—one has no veto power over the other. Sometimes they hardly seem to communicate with each other. We might say that the visual system inclines one to believe what its output (a visual impression) suggests, but this inclination doesn’t prevent one from fully accepting that the facts are otherwise. The result is an oddly ambivalent state of mind, which accounts for the fascination of such illusions: on the one hand, you are inclined to believe what your senses tell you, but on the other you firmly reject their intimations. If someone were to tell you that all this illusion stuff is just a hoax thought up by manipulative scientists—the lines really are unequal and the moon really does shrink—you would be tempted to respond, “I knew it all along!” It is hard to accept that your visual system, otherwise so reliable, should so cruelly deceive you—as if you have a little evil demon living inside your head. As it is, however, you conclude that the scientists are right and your visual sense is wrong—while all the while your senses stick to their own version of things. They will not be moved.
My theory, then, is this: our commonsense epistemic system is also an autonomous module separate from the system that trades in skeptical arguments. The commonsense system (“folk epistemology”) pre-dates the reflective skepticism-involving system and is largely independent of it. We started by describing ourselves using epistemic terms at some point in human evolution well before written language and sophisticated culture developed, probably as a result of an innate faculty, and only later did human thought begin to reflect upon itself and issue criticisms. Just so, human vision evolved before science and objective measurement, only later becoming subject to rational criticism, but unable to respond to such criticism by altering its mode of operation. Thus skepticism failed to penetrate the cognitive system that pre-existed it. Put simply, people reflexively apply epistemic concepts to themselves and others, but these applications are independent of higher-order reflection of the kind employed in skeptical arguments. They are encapsulated, automatic, and hard-wired. It is as if we are under the illusion that we know: that’s how we seem to each other. I seem to know many things about the external world, other minds, etc., but skepticism teaches me that this is an illusion, exposed by consideration of our true epistemic situation. The hamster may be under a similar illusion and be equally impervious to skepticism (if the hamster could understand skepticism)—precisely because it is programmed to accept its natural perspective on things.
We go around the world representing it (wrongly, I am assuming) as containing knowledge, justification, certainty, and evidence, by courtesy of our innate epistemic endowment; but skepticism can make no inroads into this system, despite its superior wisdom, because the system is encapsulated and largely oblivious. Thus we have a dual epistemic architecture: a primitive quasi-reflexive uncritical system, and a more sophisticated reflective critical system. The former is reluctant to take orders from the latter, as an adolescent doesn’t listen to his elders and betters. These faculties have different developmental schedules and neural implementations, and they might even differ in their coding properties and mode of operation. It would be possible to have the former without the latter, as presumably children are for some time in this condition. This would explain our divided epistemic self, our ambivalence, our schizoid tendencies, and our confusion. The commonsense faculty is a lot more ancient and primitive, a holdover from simpler times (and look how much illusion and error has been alleged by contemporary science!); the skeptical line of thought is something superadded, a product of late human culture, by no means integral to what preceded it. No wonder it has so little impact on its set-in-its-ways predecessor! But we mixed-up humans contain both systems and so have to live with their quarrels: we are aware of what we are inclined naturally to think and of what our reflective reason says about those inclinations. We want to attribute knowledge, but we are cognizant that this attribution is questionable. We are caught between these two judgments—as we are caught between the conflicting claims of perception and cognition. This is why skepticism is psychologically peculiar: in it we are witness to our own psychological plurality—the fact that we are an assembly not a unity. We suffer from internal conflicts, no doubt with an evolutionary basis.
Our faculties don’t always fit harmoniously together. This is a familiar thought for our emotional nature, but the same is true of our cognitive nature, as the case of perceptual illusion so vividly illustrates. I have suggested that our attitude to skepticism has similar roots. One faculty tells us that we know; another says not so fast. The former persists in its ways despite the superior wisdom of the latter.
Here we may be reminded of Hume’s famous response to the skeptic: we can believe skepticism in our study but we lapse back into common sense when we leave it. Thus we have “natural beliefs” and “philosophical beliefs”—each the province of different cognitive faculties. (Contemporary cognitive science is in many ways a reversion to seventeenth and eighteenth century thought.)
The same might be said of the problem of free will: first we believe in it as a result of an ancient cognitive-cum-affective system, then we come to question it because of scientific ideas such as determinism and genetics. Even when convinced, we have a tendency to cleave to old ways—thus we can find ourselves both disbelieving in free will and yet unable to shed our commitment to it.
Psychologists have postulated a “theory of mind” module that grows spontaneously in the young child, much as the language faculty does; it is quite unhindered by skeptical thoughts concerning other minds. Those thoughts come later and from a different source (“critical reason”): they make little dent in the earlier module. Thus we automatically view others as having minds, even though we may be intellectually convinced that we have no grounds for this supposition. Similarly, we have a “theory of world” module that pre-dates and resists later skeptical reflections, however unreasonably. If we put the point in terms of organ psychology, it is as if the primitive mental organ keeps pumping out what it is designed to pump out even when the organ of rational thought counsels otherwise. After all, evolution has only a passing interest in objective truth; what matters to the genes is what works (which may, or may not, coincide with the truth). Whether anyone really knows anything, or has justified certainty, is not a matter of great concern to evolution.