The Concept of Miracle

Where do we get the concept of the miraculous? Why does that concept seem compelling to us? Why do we take to it so readily? It is not, to be sure, from the observation of miracles, in the style of empiricism—we don’t have perceptions of actual miracles. Nor, presumably, is it innate: what would be the use of a concept so inapplicable? Apparently it is a complex concept, so it could be constructed from simpler components, but why does it grip us—why this concept and not any of the indefinitely many other concepts that we might construct? Why does it seem so natural, so inevitable? It is a conceptual universal, but nothing about the world to which we apply it suggests its necessity. What explains its presence in the human conceptual scheme? Even those who reject its application most fervently are familiar with the concept itself.

Here is how the OED defines “miracle”: “an extraordinary and welcome event that is not explicable by natural or scientific laws, attributed to divine agency”. This is too narrow for our purposes, since not all supernatural events are thought to be welcome, nor assigned to divine agency. Some are unwelcome and assigned to malign forces—the Devil is deemed capable of devilish “miracles”, i.e. extraordinary events not explicable by natural or scientific law. The broader concept we are interested in connotes the uncanny, the inexplicable, the exempt from natural law—the weird, the spooky. How does that idea enter our thoughts? Whence the concept of magic, whether for good or ill? Might it simply never have occurred to us? Is it just a dispensable historical accident, a piece of cultural detritus with no discernible foundation? Or does it have deep roots in our experience of the world, including ourselves?

I once compared the emergence of consciousness from the brain to the miracle of converting water into wine.[1] Why did I do that? It was because the concept of the miraculous suggests itself when considering the way consciousness arises from the physical world: this seems uncanny, contrary to natural law, freakish, inexplicable. I emphasized that this can only be an appearance—consciousness is not really miraculous. The mind is not objectively a miracle; rather, it is a mystery that looks like a miracle. But now I want to invert that thought and make a speculative suggestion: we get the concept of a miracle from our sense of ourselves as conscious beings. We strike ourselves as freakish and uncanny, at least when we reach a certain level of self-consciousness, and we then project this idea onto things outside of us. The dependence of mind on body appears unintelligible, extraordinary, possibly a sign of divine agency (assuming we find consciousness a “welcome event”). So it is not so much that the brain is like water and the mind is like wine; rather, water is like the brain and wine is like the mind. The emergence of mind from matter is the paradigm of the miraculous—everything else is projection and extension. Thus the concept arises spontaneously in us as a consequence of our very nature, at least as that nature strikes us; it isn’t just an adventitious eccentricity of culture. We don’t regard the mind-brain connection as miraculous because we already have the concept of a miracle from some other source; we derive the concept of the miraculous from our apprehension of ourselves as psychophysical beings. This is why it is universal and deep-seated. This is why the concept seems so familiar, so easy to grasp. No wonder people often feel that the supernatural is all around them and ever-present—because it is part of our nature (as we apprehend ourselves). We see the world as spooky because we are spooky. In fact, of course, we are not objectively spooky, just deeply mysterious; but we convert a mystery into a miracle and then spread the concept outwards. After all, if we are a miracle inside, why can’t there be miracles outside? When someone miraculously rises from the dead (allegedly), isn’t this just like the way conscious life rises from dead matter, even when that matter resides in a living brain? There is that peculiar sense of getting something from nothing that attends all putative miracles. And the miracle of consciousness does occur all the time—every time a baby is conceived, every time we wake from sleep, every time our brain causes a thought. So why can’t external miracles happen regularly too? Clearly they are possible because they happen all the time in our own lives. Is water turning into wine any more impossible than brain chemicals turning into consciousness? In fact, it looks a lot more possible, what with water and wine both being liquids and all.

The form of explanation I am suggesting resembles Hume on causation and the origins of animism. Hume couldn’t find a source for the concept of causation in external objects (no impression of necessary connection), so he sought it within the mind in our habit of anticipation; we then project this inner impression outwards and populate the world with causal relations. It is not that we derive our concept of causation from external objects and then project it inwards; we get the idea from our inner feeling of expectation and then project it outwards. By analogy, we sense miracle within ourselves (erroneously but intelligibly) and then project it onto the outside world. We are under the illusion that we are miraculous and we suppose that we are not alone in this. In the case of animism, we attribute the qualities of living things to inanimate objects, mistakenly assimilating them to ourselves: we find intention and will where they do not objectively exist. We have a first-person awareness of life and we spread it around indiscriminately—as we have a first-person awareness of the (seemingly) miraculous and then ascribe it to the world outside. No doubt we are motivated to do this in various ways, but the cognitive groundwork is prepared by our knowledge (sic) of ourselves. The idea of miracle is all too familiar from our ordinary experience.[2] Presumably other animals don’t have the concept of a miracle, because they don’t have the kind of self-consciousness that gives rise to it; but we humans apprehend ourselves as enigmas, which we then convert into the idea of a miracle. Suppose that dualism were really true and that causal interaction takes place in the pineal gland: that would strike us as a type of miracle and God might be invoked to make sense of it. This could be the origin of the concept of the miraculous, and it would be an intelligible explanation of how the general concept arises. But the same is true even if we don’t accept that kind of metaphysics, because emergence is mysterious anyway. The enigma of emergence is readily converted into the idea of miracle, and then projection does the rest.

The obvious next question is whether the concept of God has a similar type of origin. I won’t go into this deeply, but I will make a couple of suggestions. The concept of God is clearly a compound of other concepts: omnipotence, omniscience, moral perfection, immateriality, and infinity. The last two are the hardest to explain: where do we get them? We obviously don’t see and touch immaterial spirits and derive the concept by abstraction; and the concept of an infinite being is likewise not derived from perceptual acquaintance with such entities. A plausible hypothesis is that we derive them from knowledge of our own nature, or at least the kind of limited awareness we have of our nature. We certainly don’t experience our own consciousness as material, so it is at least intelligible that we could form the idea of an immaterial being on this basis—even if we are not rightly so described. Crudely, we have an illusion of immateriality. In the case of infinity we have more than an illusion of infinity: we ourselves are infinite beings. I don’t mean that as spatial beings we are infinitely divisible; I mean that we have attributes that are characterized by infinity—namely, language and thought. I intend nothing mystical here; I am just making the familiar point that language and thought admit of infinitely many combinations of primitive elements. And we are aware of this fact about ourselves: we know that we have this kind of infinite potential. So we have no trouble forming the idea of an infinite being, combining it with the other attributes that define God. We thus come by the idea of an immaterial infinite being via contemplation of our own make-up: this concept is not alien to us. So it is not that we have an antecedent idea of God that we subsequently apply to ourselves, casting ourselves in his exalted image; rather, we use ourselves as a model to construct the complex idea of God, which we then proceed to project onto the world. Whether the world really contains anything answering to this concept is another question, but the concept itself has its origin in our own nature. How else could we get it? The concept of the supernatural is ultimately based on a distorted picture of ourselves, as a result of partial understanding and incorrigible projection. Religion begins at home.[3]
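To see the combinatorial point in miniature, here is a standard illustration from generative grammar (added only as a gloss; nothing in the essay turns on the particular rules chosen): a single recursive rule over a finite vocabulary already generates an unbounded set of distinct sentences.

\[
S \;\to\; \text{it is raining} \qquad\qquad S \;\to\; \text{Mary believes that } S
\]

Applying the second rule repeatedly yields “it is raining”, “Mary believes that it is raining”, “Mary believes that Mary believes that it is raining”, and so on without limit: discrete infinity from finite means.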

 

Colin McGinn

[1] In “Can We Solve the Mind-Body Problem?” (1989).

[2] Does anyone ever really get over the discovery that his or her precious consciousness, in all its glory, is the result of that furrowed and frightful thing called the brain? The miracle seems almost cruel in its absurdity!

[3] Let me stress that this is a speculative proposal—other theories might be suggested. The advantage of the present proposal is that it finds a firm place for the concept of the supernatural in the natural world. We don’t want to discover that only the existence of the supernatural can explain the presence of that concept in our minds—not if we want a secular psychology anyway.

Discrete and Continuous

Philosophy is awash in grand dichotomies—particular and general, mind and body, fact and value, finite and infinite, being and nothingness. Reality is held to divide into two large categories and the relations between them are mapped. But there is one dichotomy that is seldom discussed by philosophers, though it is generally recognized elsewhere: that between the discrete and the continuous. These concepts are not easy to define, though they are widely accepted at an intuitive level, no doubt because they pervade our everyday experience. The discrete consists of separate, distinct, self-contained objects that can be distinguished and counted: animals, mountains, tables, cells, atoms, words, concepts, numbers, propositions, gods. The continuous consists of undivided, unbroken, uninterrupted, seamless, smooth, homogeneous…what? Not objects or things—for then they would be discrete—but what we call mediums or manifolds or dimensions or magnitudes: stuff of some sort. Space and time are the paradigms, but we also regard other things as continuous: intensity of emotion, milk and honey, geometrical figures, colors, motion, fundamental matter. Of course, things that seem continuous have sometimes been discovered to be discrete, as with the atomic theory of matter or the quantum theory of energy; but we have a clear idea of what continuity might be even in these cases. We have a commonsense concept of the continuous that meshes with our ordinary perception of things, in which discrete objects are perceived to be internally continuous (possibly falsely). We thus feel ourselves to be surrounded by two kinds of being: discrete separated entities that can be counted, on the one hand, and smoothly varying continua that can only be measured, on the other. There are the discrete objects in space and time and the continuous mediums of space and time. The latter require their own mathematics, which nowadays involves the real numbers, infinitesimals, the concept of a limit, and calculus. We employ the modern notion of a dense array of points between any two of which there is always a third (this may be viewed as a way to discretize continuity). There is even a distinctive type of paradox associated with continuity (Zeno et al.). So we accept a kind of ontological dualism: two kinds of being with different essential natures. Descartes used the concept of extension to unite space and matter, but that concept papers over the deep difference between the discrete and the continuous, both of which can be said to have extension—though we should note that not everything that is continuous is physical, e.g. emotional strength. The discrete-continuous distinction cuts across the mental-physical distinction, and brings its own brand of dualism.
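As a gloss on that modern notion (standard real analysis, added here only for concreteness): density is easy to state, but density alone does not yet give continuity, and the two kinds of being even differ in the size of their infinities. The rationals are dense yet riddled with gaps; what the continuum adds is completeness, and with it Cantor’s larger infinity.

\[
\text{Density:}\quad \forall a,b\;\bigl(a < b \;\Rightarrow\; \exists c\,(a < c < b)\bigr), \qquad \text{e.g. } c = \tfrac{a+b}{2}.
\]

\[
\text{Completeness:}\quad \{q \in \mathbb{Q} : q^2 < 2\} \text{ has no least upper bound in } \mathbb{Q}, \text{ but its supremum in } \mathbb{R} \text{ is } \sqrt{2}.
\]

\[
\text{Size:}\quad |\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}|.
\]

So the discrete can be counted even when infinite, while the continuum strictly outruns any counting: the dualism shows up inside mathematics itself.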

Like other dualisms, this one invites philosophical scrutiny. How solid is the distinction? Might we not view each as a special case of the other? Is one derivable or emergent from the other? Are there illusions of continuity and discreteness? Is it possible to be a monist with respect to one or the other type of being? For instance, we are told that in the first moments after the big bang the temperature was so high that no particles could exist, so there were no discrete objects then—they came into being only when the universe cooled. Then wasn’t physical reality entirely continuous at that early point? If so, our current discrete universe emerged from a continuous universe, rather as we suppose that mind emerged from matter (which took more than mere cooling). Might not other universes stay at that initial high temperature and never evolve into discrete universes? On the other hand, it has been maintained that continuity is a mathematical fiction—everything real consists of discrete entities with no smooth transitions anywhere. Motion is really jerky and jumpy, space and time are particulate, and the mind is purely digital. Or we could just decide to eliminate entities that don’t meet our ontological expectations: there is no such thing as motion, space and time are unreal, and there are no emotions to vary continuously. We have the usual panoply of philosophical options to choose from: dualism, monism, reductionism, elimination, and invocations of God to get over ontological humps (e.g. the miracle of discrete entities springing from continuous stuff). Our experience suggests a dualism of the discrete and the continuous, but maybe reality is not so constituted; maybe in the noumenal world all is discrete (or all continuous). Continuity certainly presents problems of understanding, and it was only in the nineteenth century that mathematicians began to be comfortable with it (but at what cost—is a smooth line really reducible to a collection of points?). And why is the universe made this way to begin with? Why the ontological division? Wouldn’t it be simpler to make a universe that was just one way or the other? Why did God introduce continuity at all, given that his main purpose was to create discrete moral beings like us? What has continuity got to do with morality? We appear to live in a mixed world, but this doesn’t seem like a logical necessity—unless it really is once you get down to basics (maybe space and time couldn’t exist without their smooth structure). It is all quite puzzling—the mark of a good philosophical problem.[1]

That was about the metaphysics of the discrete and continuous, but there is also the epistemology. Do we know about these things in the same way? Do we perceive continuity as we perceive discreteness? How do we get the concepts? There is a kind of primitive impression of continuity in vision that exists side by side with impressions of discreteness, but what exactly does this amount to? Is it just an absence of perceptible discreteness or is it a positive sense datum in itself? Is the child’s mind a continuous visual blur until sensations of discreteness supervene? What does it mean to say that a surface looks continuous—does it look as if all potential gaps have been filled? What if we look closer and see that the object is made up of lots of little discrete entities? Were we under an illusion? But is it even possible to see a discrete object without some parts of the visual field giving an impression of continuity? The gaps between objects look to be filled with continuous space and the objects themselves look like they are composed of continuous matter. And the cognitive mechanisms that process perception must recognize the discrete-continuous distinction: they deliver different kinds of mental representation to handle the sensory input. Is consciousness itself continuous or discrete or both? Is it quantized or infinitely divisible? Are the features of the brain that account for consciousness discrete properties of neurons or continuous features? Neural firings are discrete, but electrical charges can vary as continuous magnitudes—do both contribute to generating consciousness? Behaviorism in effect treated the mind as continuous, because behavior is just a type of motion, but how does that square with the discrete character of so much of the mind, particularly language and concepts? There is no such thing as applying half a concept, but the body can move half a meter. Your utterances must be either meaningful or not, but your voice can be louder or softer. How do we derive the discrete mental notions from concepts of continuous bodily motion? That is like trying to define atomic structure in terms of motions of matter—a sort of category mistake.

The natural position to take is that the world contains two sorts of ontological structure corresponding to two types of mathematics: discrete structure and continuous structure. The former can be dealt with using finite mathematics (or the mathematics of discrete infinity), while the latter requires the infinite mathematics of the continuum. Space and time have a continuous structure, while atoms and species have a discrete structure. This is just an irreducible fact. The two coexist and intermingle. Correspondingly, we have two sorts of phenomenology and mental representation geared to these objective structures—discrete cognition and continuous cognition. These might be conceived as distinct modules located somewhere in the brain. We know how to handle continuous magnitudes and we know how to handle discrete objects. When we see an object in motion we separate it from its surroundings as a distinct individual thing (using our discrete module) and we also track its movement through space as a continuous path with no gaps or interruptions (using our continuous module). We are capable of seeing the world in both ways simultaneously. The dualism is present but it is integrated, fused. It is rather like the perception of shape and color: different properties, different perceptual modules, but a unified perception. Just as there is a division of primary and secondary qualities despite perceptual unity, so there is a division of discrete and continuous properties despite perceptual unity. We see the same object as a discrete entity and as moving through a space without internal discreteness. Phenomenology thus recapitulates ontology. The distinction between discrete and continuous deserves a place in the pantheon of philosophical dualities.[2]

 

Colin McGinn

[1] We could call it the problem of the granular and the gradual: are both equally real, and how do they meet up? The grainy and the graded, the chopped up and the smoothed out, the lumpish and the soupy: which form does reality prefer, and how does it combine them?

[2] The distinction is entrenched in mathematics along with other dualities (finite and infinite, odd and even, rational and irrational, prime and non-prime); time for philosophy to catch up.

Realisms and Anti-Realisms

We reflexively speak of realism and anti-realism, as if we had a dichotomy of positions: you are either a realist or an anti-realist about a given subject matter. But this is too simple: there is a range of positions going from one extreme to another, depending upon the ostensible relation between the subject matter in question and human knowledge and experience. Consider material objects: the further the nature of such objects is removed from human knowledge and experience, the more of a realist one becomes; the closer it is, the more of an anti-realist. Suppose you hold that material objects have only unknowable properties, so that their nature is totally distinct from anything revealed to the human mind. Then you hold an extreme realist view of material objects—they are maximally unrelated to human knowledge. The objects are objectively a certain way, but that way transcends the capacities of the human mind. But you might hold a weaker thesis, namely that the properties of objects are knowable and yet independent of the mind: objects would be that way even if no minds existed, though all properties are in principle knowable. This also would be a realist thesis. You might even hold that all properties of objects are actually known and still be a realist about them.

But you might want to go a step further towards the mind: every property of an object must be the categorical basis of a disposition for the object to appear in a certain way. This position combines a non-identity thesis about properties and their dispositions to appear with a restriction of properties to those that form the basis of such dispositions; if a putative property fails to correlate with any such disposition, it is deemed not to exist. Next you might choose to dispense with the categorical bases and simply identify properties with dispositions to appear: to be square, say, is just to have a disposition to appear square. Now you are moving into phenomenalist territory—objects as possibilities of sensations. That sounds decidedly anti-realist, but you are not quite there yet: for it might be supposed that the features of sensations that define material objects are not known to the subject of experience. This possible position is not typically recognized in the area of philosophy dedicated to these questions, but it exists in logical space. Thus it might be supposed that sensations are identical to brain processes, so that being square is identified with a disposition to cause certain brain processes in subjects (those correlated with its seeming that there is something square there). But these brain processes are not known to the subject, so objects are both dependent on sentient beings and yet removed from such beings’ knowledge. This strikes us as a weak form of anti-realism about material objects; and we can envisage variations on this theme that posit hidden aspects of conscious experience.[1] A stronger form of anti-realism maintains, familiarly, that properties of objects are identical to dispositions to produce experiences that appear a certain way (“sense-data”). And then we have the most extreme anti-realist thesis of all, namely that so-called material objects have no existence save that of actually producing experiences with a certain appearance—to be square is just to actually appear square to someone at some time. We thus move from the thesis that material objects have a completely unknown nature to the thesis that they are nothing but fully known experiences—but this movement goes through a number of intermediate steps, not an abrupt switch from realism to anti-realism. That dichotomy does not do justice to the range of philosophical positions available to the theorist of material objects.

We can run a similar gamut for realism and anti-realism about the mind. A strong realist might hold that mental states exist but are never revealed in behavior, being cut off completely from third-person knowledge: the properties of mental states are completely unknowable to the outside observer, so that their existence and nature are independent of any behavioral evidence that might be adduced. Or they might be thought to be knowable via behavior but not identical to behavior (or dispositions to behavior). Or they might be thought to be the categorical basis of dispositions to behavior. Or they might be taken to be the dispositions themselves. Now we are squarely in anti-realist behaviorist territory, but again there is a neglected position here: what if the aspects of behavior that constitute mental states are hidden to observers? It might be supposed that they are the muscular and neural events that underlie observable behavior, so that they are not the evidence we normally use to ascribe mental states to others. This position makes an ontological claim about the nature of mental states—they are identical to episodes of behavior—while removing them from ordinary third-person observation. The standard behaviorist position is that mental states are identical to observable behavior (under some notion of observation). This position is clearly anti-realist because it ties the mental facts to our evidence for asserting these facts, but the weaker behaviorist position doesn’t claim this connection to evidence—it is quasi-anti-realist. Once again, the usual dichotomy of realist and anti-realist fails to capture the full range of options. We have gone from maximally realist to maximally anti-realist, as gauged by proximity to evidence, via a number of intermediate positions. The usual dichotomy is too crude.

The moral case follows a similar pattern. You could hold that some moral truths are completely unknowable, or even that all are (though that is certainly hard to credit). Or you could hold that, though knowable, moral truths are not mental in nature. Or you could hold that moral values are dispositions to elicit approval, or just approval itself. The neglected position would be that moral facts reduce to mental facts but that mental facts have a hidden aspect that constitutes moral facts (substitute for “facts” here anything you like). That is, moral facts might be mind-dependent and yet inaccessible to the subject, because the relevant mental states have a hidden aspect—as it might be, brain states or some other inaccessible property. What if you thought that moral approval is really an unconscious matter inaccessible to the conscious mind? Again, we have a range of conceivable positions, differing in strength, not a simple dichotomy. If someone says he is a moral realist, we need to ask what specific kind of realism he has in mind; and similarly for the moral anti-realist. There are clearly degrees of inaccessibility relating facts and our knowledge of them, and these define different kinds of metaphysical position. The same situation obtains for the mathematical and modal cases, though I won’t spell out how this goes, as it should be obvious by now.[2]

The case of time is instructive. An extreme realist view would be that we can’t know anything of the past and the future, even though these stretches of time exist as determinately as the present. A less extreme view is that some facts about the past and future can be known but these facts have nothing intrinsically to do with what obtains in the present—they are in no way constituted by present facts. The anti-realist will insist, by contrast, that past and future facts are constituted by present facts or else do not exist at all; but again there is room for two strengths of position here—do these present facts provide our evidence for assertions about the past and future, or are they merely present whether evidential or not? That is, might they be currently inaccessible present facts? Maybe the past consists in current microphysical facts of which we have no knowledge, or the future consists of current facts of divine dispensation about which we are ignorant. This is recognizably anti-realist, though there is no claim of reduction to evidence. The stronger kind of anti-realism claims that past and future reduce to current evidence, presumably observable evidence of the kind we typically rely on to make statements about other times. This in turn can be regarded as mental in nature, as when it is held that the past reduces to present facts about sense data and not to external physical traces. There is thus no such thing as the anti-realist about the past, since a number of different positions can qualify for that description; and similarly for the realist about the past. What we really have is the comparative locution “A is more realist than B”, and likewise for “anti-realist”.

This also shows that it is not a helpful formulation to describe the realism versus anti-realism dispute as a contest between truth-condition semantics and assertion-condition semantics. That distinction is indeed dichotomous, but the distinction of metaphysical realism and anti-realism is not; so we had better not try to formulate the latter in terms of the former. We should not, here as elsewhere, let the metaphysical debate take a linguistic turn.[3]

 

Colin McGinn

[1] We might think that experiences have an “objective phenomenology” that is not apparent to the subject of the experience, or a computational profile that is also not apparent.

[2] I will mention that nominalism admits of varying strengths of anti-realism depending on how symbols are conceived. If symbols are observable marks on paper, then we get a close proximity between numbers and sensory evidence; but if instead they are viewed as internal mental entities in an unconscious language of thought, then we get distance between numbers and perceptual evidence, since such symbols are not perceived at all.

[3] This is contrary, in particular, to the views of Michael Dummett in influential work on realism and anti-realism.

Fiona and Me

I was watching the impeachment hearings on Thursday waiting for Dr. Fiona Hill to begin her testimony, expecting to hear an American woman speak. When she started speaking I immediately knew she was English by origin and a few seconds later recognized her accent as characteristic of the north east of England. This piqued my interest as I too was born in that part of the country. She went on to explain that in England her accent would have acted as an impediment to her professional success, which I don’t doubt, but that in America it had not counted against her—and indeed she had done little to smooth its edges (to her credit). She was, and is, what is called a Geordie in England. I remarked to George Stephanopoulos, who was covering the hearings live for ABC News, that people from that part of England are as tough as nails and very blunt (she is a coalminer’s daughter and my own father worked “down the pit” for a while as a teenager). He replied, “Clearly”, and indeed she went on to demonstrate the correctness of my description. It took an immigrant Geordie woman to teach a bunch of American members of Congress a lesson in intelligence and integrity, and I hope the lesson wasn’t lost on them.

Anyway it prompted in me a series of reflections on language acquisition. I must have spoken with a strong Geordie accent as a young boy, even after we moved to Kent when I was three. This will have continued till I went to school at age five as my parents naturally carried their accent with them to the south of England. Of course, I have no recollection of any of this. At that point I must have gradually made the transition to the accent characteristic of that part of England—an accent close to the London accent, exemplified well by Mick Jagger and Keith Richards. It is completely different from the Geordie accent. When my family moved back north to Blackpool when I was twelve I was taken to have a Cockney accent by my schoolfellows, while my parents continued with their original accent. How did I do it? I went from Geordie to Cockney without apparent effort or consciousness. How long did it take? What did I work on first? Was it at all difficult for me to pronounce the words so differently? I can’t even do a Geordie accent today. Evidently I still had enough brain plasticity at age five to move smoothly from one accent to another, and there is no reason to suppose that I was deficient in either accent at the time I spoke them. A few years later and I would probably have had the Geordie accent for life, but my brain enabled me to pick up the brand new accent with remarkable facility. I wonder what my parents made of it. So, Fiona, you evoked strong memories and deep reflections in me. Thanks for everything, hinny.

 

Colin McGinn


Understanding the Duck-Rabbit

It was Wittgenstein who sparked philosophical interest in what psychologists call ambiguous figures.[1] The phrase “seeing as” became a staple of philosophical vocabulary and various uses were made of it. I want to revisit the topic in the hope of gaining some clarity on the matter. There are many instances of so-called ambiguous figures: Rubin’s vase, the Necker cube, Schroeder’s stairs, old woman/young woman, etc. What they all have in common is well illustrated by the duck-rabbit drawing, so I will focus on it. A single physical stimulus—lines drawn on paper—can appear to be a picture of a duck as well as a picture of a rabbit. I will begin by simply describing the case compendiously so that we have it as clearly in our mind as possible (I recommend having another look at it).

First, the two aspects alternate over time, now a duck, now a rabbit; they never appear at the same time—it is impossible to see both simultaneously. Second, the physical stimulus remains unchanged and is perceived to remain unchanged; we don’t perceive it to alter in any way as aspect succeeds aspect. This is a perceptual object as much as the aspects it affords—a certain fixed pattern of lines that we see as such. It is a phenomenological invariant. Third, the aspects presented are not themselves ambiguous: they are clearly either duck or rabbit, as clear as simply seeing a duck picture or a rabbit picture. Fourth, the alternation is only partially a matter of the subject’s will: it typically happens automatically, though the process can be accelerated by an effort of will. The shift of perception does not emanate from any change in the stimulus and it is also not a matter of simple choice; it seems natural to say that the brain does it, an agency unto itself. Fifth, there is no reason to suppose that the stimulus is incapable of being seen in only one way: it is perfectly conceivable that some perceivers will see it only as a duck and some only as a rabbit—say, if they had knowledge of only one kind of animal. The very same stimulus array would be, for these perceivers, simply a picture of a duck or a picture of a rabbit—while for us it is a picture of both. Sixth, it is noteworthy that the ambiguity is invariably binary: there are just two possible aspects associated with the stimulus in question. This seems entirely contingent: why not three aspects or even seventeen? Some possible perceivers might increase the cardinality considerably. Seventh, there is what Wittgenstein called the dawning of an aspect, often experienced as wondrous or surprising, and then there is the steady perception of the aspect for some extended period of time, typically a few seconds. Eighth, specific parts of the stimulus are experienced as different parts of the animal depicted—the same lines are seen now as ears and now as a beak. There is part seeing-as and whole seeing-as, the latter dependent on the former. Ninth, it is not possible to produce an imaginary array that admits of this kind of ambiguity: a mental image will either be a duck picture or a rabbit picture, with no alternation between them. So the ambiguity (but see below) is a feature of the visual sense not of the visual imagination. Tenth, the effect is not confined to pictures: we can contrive cases of a stimulus in the wild that can be seen in either way. I am not aware of any experimentalist actually doing this, but it seems easy enough to envisage presenting a three-dimensional stimulus at a suitable distance from the perceiver that similarly underdetermines the type of animal seen yonder—it might elicit the same kind of alternation that a drawn picture does. Is that a duck or a rabbit in the bushes? Now it looks like a duck, now like a rabbit.

Those, I take it, are the main phenomenological facts. Now there is the question of how to interpret them, classify them, and fit them into a theory. Are they a special case of some more general phenomenon? Do they show that there is more than one type of seeing? What concepts best characterize such perception? I think most of what philosophers have had to say about these cases is wrong, ironically because of a need to overgeneralize (Wittgenstein’s bête noire). First, there is the persistent tendency to describe them in terms of ambiguity, as if this is just like linguistic ambiguity—as with the use of the phrase “ambiguous figures”. Words can be ambiguous, and visual arrays can be too: thus they belong together conceptually. But this is wrong for a number of reasons. The ambiguity of language stems from the conventional character of the relation between sound and meaning: the word “bank” can conventionally mean either money bank or riverbank. But the visual array that gives rise to the duck-rabbit effect is not conventionally related to the type of animal seen—that is really what ducks and rabbits look like. What we have here is under-determination, not ambiguity: there is nothing arbitrary about the relation between the array and the animals depicted—it is simply consistent with both. Also, there is no phenomenon of meaning alternation with ambiguous words: it is not that if you stare at the word “bank” or hear it uttered many times your perception of its meaning changes, now meaning money bank and now meaning riverbank. Nor is it true that the shift of aspect is a shift of meaning: we don’t perceive the array as a symbol that can mean one thing or another—we perceive it as a picture of one thing or another. Nor, further, is there any question of intended meaning, since the stimulus is not a linguistic act. At best it is a metaphor to speak of ambiguity here, and a misleading metaphor at that. This is a point specifically about visual perception, not about language or symbolism generally. It is thus quite wrong to assimilate the duck-rabbit case to that of “bank” and the like.

Second, calling the phenomenon in question “noticing an aspect” (as Wittgenstein does) does not do it justice, since that phrase applies far more widely. We are always noticing aspects of things on the basis of perception, but we are not often subject to a duck-rabbit type of case. I may notice an aspect of your face, say the shape of your nose, but this is not an ambiguous (sic) figure case. What is crucial in such a case (as Wittgenstein himself stresses) is that we have a core of perception that does not change under a change of aspect, corresponding to the physical stimulus. But merely noticing an aspect does not involve anything like that; generally speaking, it involves noticing precisely an intrinsic feature of the stimulus (say, the way your nose physically curves). I would prefer to call duck-rabbit cases “alternating aspect” cases, not “noticing an aspect” cases (or “ambiguous figure” cases). The same goes for the phrase “seeing as”: that phrase applies to all seeing not just to what happens in a duck-rabbit case. I see you as tall, my cat as speckled, my car as shiny: that is, I see things as having properties. But that doesn’t capture what is distinctive about duck-rabbit cases, as I described them above—particularly, the constancy of the visual core and the unwilled alternation of the aspects. Granted, it is difficult to describe the phenomenon concisely in a single phrase, but these standard descriptions are positively misleading (and have misled). I think the phrases “imaginative seeing” or “interpretative seeing” invite similar objections, since they too apply more widely, but I won’t labor the point further.

A more interesting question is whether the apparatus of sense and reference applies here. On the face of it, it does: two modes of presentation associated with a common object. The same patch of lines can give rise to two ways of seeing it, as the same planet can be perceived in two ways corresponding to “the evening star” and “the morning star”. Why not say that the common object corresponds to two “senses”, a duck sense and a rabbit sense? Couldn’t someone see the patch as a duck in one context and a rabbit in another, and then come to realize that it is the same patch that is involved? Isn’t the structure much the same in alternating aspect cases and sense-reference cases? And didn’t Frege himself characterize modes of presentation as “aspects”? If so, his apparatus applies more widely than he thought: the duck-rabbit drawing is a special case of sense and reference. That is certainly a pleasant conjecture, but it is flawed at a crucial point, namely that the common element is actually perceived in the duck-rabbit case, i.e. it occurs as a phenomenological datum. We see the duck, the rabbit, and the lines; but in the case of sense and reference we don’t have a separate presentation of the reference aside from its two modes of presentation. We are not seeing the same physical stimulus, perceptually represented as such, giving rise to two aspects in the case of the evening star and the morning star, as we are in the duck-rabbit case. Nor, of course, do we oscillate from one sense to the other while gazing intently at Venus. So the structure isn’t the same in the two cases despite a superficial resemblance. The perceived unchanging core is not present in Frege-type cases, so the one is not a special case of the other.

It seems to me that alternating aspect cases are genuinely sui generis. There is really nothing like them, which is why a general label is elusive (like “ambiguous figure” or “noticing an aspect” or “seeing as”). But it doesn’t follow that they involve a special type of seeing, as opposed to a unique type of perceptual phenomenon. On the contrary, it seems to me that the same type of seeing is involved here as elsewhere. Suppose you see a picture of a duck but without any alternation with a picture of a rabbit. This could be exactly like the experience you have when looking at a duck-rabbit picture and seeing its duck aspect: the experience is not altered by being caused by an “ambiguous figure”. No new type of seeing is occasioned by such figures in addition to the experiences occasioned by unambiguous duck pictures. Similarly, if an experimenter could contrive a stimulus that could be perceived as a duck or as a rabbit (not as a picture of such), that would not cause any experiences additional to those caused by ducks and rabbits. The possibility of alternation doesn’t alter the nature of the experience had when seeing a single aspect. So the duck-rabbit case and others like it don’t require us to expand our phenomenological inventory beyond the seeing of ducks and rabbits (or pictures of them). Indeed, we might well claim that all seeing is seeing-as (an object as having a property) and that the duck-rabbit cases add nothing to this simple picture. They merely show that there can be alternations of aspect under conditions of stimulus identity.

It is sometimes supposed that the kind of seeing that goes on in duck-rabbit cases is relevant to pictorial perception: this kind of perception is supposed to differ from object perception and to be a special case of seeing-as, as that notion is illustrated by duck-rabbit cases. But this idea is confused: the seeing of an aspect is the same whether there is alternation or not, so these cases cannot provide a new type of seeing. Also, the essential feature of such cases is conspicuously missing in pictorial perception, namely the alternation of aspects.  Normally a painting depicts a single aspect; it isn’t “ambiguous”. Trivially, seeing a picture is a case of seeing-as because all seeing is seeing-as; but the kind of seeing-as that occurs in duck-rabbit cases is nothing special, so nothing new can be learned from it about pictorial perception. The concept of seeing-as, as philosophers have come to employ it, should really be retired or else explicitly extended to all types of seeing.

Duck-rabbit cases are highly unusual, indeed carefully contrived: they are not instances of something more general, and they shed no light on anything beyond themselves. It is surprising they exist at all, being an anomaly of the human visual system (I don’t know of any experiments that have shown other animals capable of such strange oscillations). They only occur under very special and manufactured conditions (the duck-rabbit drawing was first introduced in a German humor magazine in 1892). They appear to have no analogue in other sense modalities: there is no such case for smelling, tasting, touching or hearing. We would be in no way worse off without them; they appear to have no use except as entertainment. Contrary to Wittgenstein’s advocacy, they have no philosophical significance, except perhaps to illustrate how very peculiar things can be. Their significance is their insignificance, their sheer quirkiness.[2]

 

[1] See Philosophical Investigations, pp. 193–208.

[2] It used to be suggested by psychologists that duck-rabbit cases are to be explained by invoking the idea of hypothesis formation: the visual system constructs a hypothesis on the basis of exiguous data and this hypothesis corresponds to a visual aspect. However, this doesn’t explain the oscillation characteristic of these cases: why does the visual system switch from one hypothesis to another for no apparent reason? That is not what scientists do when they propose a hypothesis. So this attempt to subsume the cases under the wider category of hypothesis formation also fails. As far as I know, the phenomenon has still not been satisfactorily explained.


Experience Pluralism

We have five senses and five types of sensory experience. This is doubly contingent: we might have had fewer or more senses, and we might not have had a different phenomenological type of experience corresponding to each sense. The second claim is less obvious than the first, but evident on reflection. First, note that the relationship between stimulus-type and experience-type is contingent: the physical nature of the stimulus doesn’t entail the phenomenological nature of the perceptual response. Thus you can’t infer what visual experience is like from the physical nature of light or what auditory experience is like from the physical nature of sound waves (similarly for touch, smell, and taste). Nor can you infer the physics of the stimulus from the nature of the experience. There is a lawful correlation between stimulus and response, but there is no identity or metaphysically rigid relation between them. One could exist without the other. This lack of necessity underlies some familiar thought experiments: we can imagine rerouting the inputs from the ears into the visual cortex, so producing visual experiences from auditory stimuli, and vice versa. Or there could be beings initially set up to convert sound waves into visual experience and light into auditory experience. The stimulus contains information about the environment and the brain interprets this by using alternative modes of phenomenological response. Isn’t this what the human senses already do to some extent? The same (distal) stimulus can be seen or touched or even heard, and smell and taste respond to the same molecular stimuli. There is also the phenomenon of synesthesia, in which the same stimulus produces a response in two sense modalities. How the brain codes sensory inputs is not dictated by the physical stimulus, distal or proximal; in principle, we could invert the relations that actually obtain. There are possible worlds in which light produces olfactory sensations and people taste visually.

But there is a thought experiment of this class that I have never seen mentioned: the idea that all the senses of a given creature might be served by the same phenomenological type.[1] For instance, our five senses might be centrally manifested in only visual experience—we only see things when stimuli impinge on the ears, skin, nose, and mouth. We reduce the phenomenological range to a single sensory type that is common to all the senses. For humans this set-up would require some major (futuristic) surgery, so let’s assume we are dealing with a Martian that is born this way. Symphonies are “heard” as complex patterns of shifting light, objects “feel” as they are seen, food “tastes” like a visual mosaic. For any type of input, there is just one mode of experiential response: instead of experience pluralism we have experience monism. Whether we should describe this situation as possessing a single sense that responds to a variety of stimuli or several senses that are mediated by a single type of experience is not critical to decide; what matters is that there is a leveling of phenomenology combined with the usual types of sensory impingements. The same variegated physical world is represented by a uniform type of phenomenal world. This seems like a logical possibility, not ruled out by the concepts or by some deep metaphysical necessity. Granted, we don’t find any actual instances of it on planet Earth, but there might be other planets that are home to beings like this.

This thought experiment raises interesting questions. First, is there some biological reason that we don’t find actual instances of creatures like this? On the face of it such a set-up is more parsimonious than the actual situation, and doesn’t nature prefer parsimony? The genes would only need to engineer a single type of central sensory nerve to handle input from all the senses—the visual type. This would serve as well, so why complicate the physiology? It may be retorted that representing all the senses in a single phenomenological type would be confusing for the organism, since it wouldn’t know whether it was tasting or seeing; but this could presumably be accommodated by assigning different visual types to the two sorts of stimulus. Isn’t this what we already do within the visual sense—as when we have distinct sensations for shape and color? Couldn’t the all-encompassing visual sense contain a reference to the part of the body being stimulated, so that it was clear what sense was being activated? Why should this be any more confusing than simultaneously receiving inputs from senses with different phenomenological character, since a central unit has to separate and integrate the inputs so received in this case too? The purely visual organism could be constructed so as to keep track of the origin of its visual experiences, in part by assigning different visual types to each type of input. Brighter colors might be assigned to one sense compared to another, or different colors entirely. Visual experience is already very various and dependent on varying aspects of the light stimulus, so there seems no problem of principle preventing a purely visual subject from existing (perhaps one that is perceptually simpler than us). More strongly, this might be a better way to increase sensory bandwidth: smell and taste might become more discriminating when mediated visually. To the objection that visual tasting wouldn’t have the motivational force of ordinary tasting, we could stipulate that gustatory visual sensations be genetically linked to the pleasure centers of the brain, so that certain visual arrays elicit pleasure in the hungry eater. Don’t some tastes become pleasurable to us that were once repugnant or bland? Why not have menus listing the particularly tasty color combinations on offer tonight? You bite into an oyster and your visual sensorium lights up with an accompanying rush of pleasure.

So parsimony recommends experience monism, but so do other aspects of the organism. Don’t we find a conspicuous absence of florid pluralism in the anatomy and physiology of the body? The bones are much the same in point of composition throughout the body, despite differences of function and structure—we don’t find different types of bone composition according to where the bone is located. What would be the point of that? It would just make ontogenesis more difficult. And the underlying physiology of the nervous system is likewise homogeneous: the nerves associated with the different senses are of basically the same type (a nucleus, axons, dendrites, and the same suite of chemical neurotransmitters); we don’t find radically different histological characteristics from sense to sense. Moreover, the distal stimulus is likewise uniform: the same physical world is present to each sense—consisting of atoms, forces, etc. But the sensory systems inject a marked heterogeneity into nature: they are more richly various from a phenomenological point of view than the external world or the physiology of the brain. They provide the pageantry and pizzazz. So we have a puzzle: why so much variety when parsimony and the general laws of nature recommend uniformity? Why make seeing so very different from hearing—or smelling so very different from touching? It seems like an act of generosity from nature to the experiencing organism—making life a little less boring and monotonous. But natural selection and the genes are not known for their generosity; they like things as simple as possible (such complexity as we find is forced on organisms by the rigors of survival). Our thoughts don’t exhibit as much phenomenological variety, no matter what their subject matter may be, so why do our senses insist on the gaudy plurality of our sensory experience? It seems surplus to requirements, a gratuitous gift, an unnecessary extravagance. What would you say if we had fifteen senses each equipped with its own distinctive phenomenology when far fewer would do just as well? That would seem like biological largesse above and beyond the cause of gene propagation; why not strip it down a bit? The natural thought is that the variety we experience must possess some hidden biological utility, but it is not clear what this utility is, given the informational powers of visual experience (or the other senses in their most advanced forms). The cell serves every biological purpose in the body, but it is fundamentally the same from organ to organ. To be sure, cells vary somewhat from heart to kidney, skin to brain, but no more than visual experiences differ among themselves. What we don’t find is organisms (or organs) made of completely different chemicals, or partly cell-based and partly continuous, or bones that are sometimes made of calcium and sometimes made of metal. We find variations on a theme: but sensory experience varies the theme. Seeing is really nothing like tasting. To lack a sense is to lack something sui generis, to miss out on something unique. A purely visual organism might go blind but still be replete with visual sensation; a blind man, however, can get at best hints of what vision might be like. Each type of sensory experience is, we might say, a world unto itself.

One possible view is that the present sensory set-up is temporary and the result of a dispensable holdover from earlier evolutionary times. The senses evolved separately as solutions to survival challenges and the senses that now populate the planet build upon these early forays (much the same is true of basic anatomy). This is not a matter of ideal optimality but of contingent evolutionary history. Conceivably, the process could have started with greater uniformity and stayed that way, or it might eventually work out the kinks and favor sensory homogeneity. If we were building sentient robots, we might be faced with a design decision—one type of central component that delivers only visual phenomenology or several types that afford sensory variety. The decision could affect all future production, whether or not it is the optimal one. If reasons of economy favor the single-component approach, we might end up producing purely visual robots (though capable of responding to the full variety of physical inputs). This might correspond to life on other planets, depending on the actual course of evolutionary history. On our planet the earlier “decisions” favored distinct types of sensory experience, and thereafter organisms were stuck with them. This arrangement might be highly sub-optimal despite its universality in terrestrial life forms. If we imagine an early life form equipped only with visual sensations responsive to light, wondering how to expand into other stimulus fields, it would be intelligible if this form plumped for retention of its existing phenomenological capabilities extended to other types of stimulus. It could either devise new modes of sensory response to sound waves and other types of stimulus or stick with what it has onboard already. The latter choice might be preferable, given the engineering demands created by branching out. So we must not simply assume that experience pluralism is the biological ideal; it might just be an adventitious artifact of how evolution on earth has actually progressed. Aliens might view our mixed phenomenology as distinctly old-world, pre-technological, and recommend switching to a more streamlined approach (they promise it will not be boring compared to the cumbersome system we now employ). Or there might be hearing-obsessed aliens (of bat-like aspect) who urge the merits of their sensory world and disparage the purely visual species. After all, whoever said that we humans are biologically perfect? Surely pain is not the best possible way to cope with injury in every possible world, so why should sensory diversity be the best possible way to handle information in every possible world? Among the life forms of the universe it might be quite parochial. Certainly some life forms on earth manage quite well without the full panoply of the five human senses—bacteria, worms, and much marine life.
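Since the essay puts the point as a robot-building design decision, here is a toy sketch of the two architectures (a minimal illustration under invented names; nothing here is offered as an actual design). It renders the monist option discussed above: a single central component delivering only visual phenomenology, with each experience tagged by its bodily channel of origin so that tasting can still be told apart from seeing.

```typescript
// Pluralist architecture: each sense delivers its own phenomenal type.
type Visual = { kind: "visual"; hue: number; brightness: number };
type Auditory = { kind: "auditory"; pitch: number; loudness: number };
type PluralExperience = Visual | Auditory; // ...and so on for the other senses

// Monist architecture: every input channel is rendered in one phenomenal type,
// tagged with its bodily origin so the robot can tell tasting from seeing.
type Channel = "eye" | "ear" | "skin" | "nose" | "tongue";
type MonistExperience = {
  kind: "visual";
  origin: Channel;
  hue: number;
  brightness: number;
};

// One central component suffices under monism: any stimulus, whatever its
// channel, is mapped to visual phenomenology (hypothetical mapping rule).
function render(origin: Channel, intensity: number): MonistExperience {
  return { kind: "visual", origin, hue: intensity % 360, brightness: intensity };
}
```

The design trade-off is then visible at a glance: the pluralist needs a new phenomenal type (and the machinery to realize it) for every sense added, while the monist needs only a new tag on the one existing type.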

I will mention another possibility, if only for completeness. This is that our sensory phenomenology might be less various than we suppose. Obviously, introspection plays a determining role here—we experience ourselves as experientially plural. We seem to ourselves to contain phenomenological multitudes. But perhaps this appearance is misleading; perhaps we are more uniform than we think—as the external world is more uniform than we naively suppose given the way we experience it. From a more abstract or objective point of view we may be more uniform than we appear. We already accept that there are commonalities in perceptual experience—intentionality, spatial embedding, functionality—and it may be that there is a way of describing experience that will render it more unified than our current ways. A more objective phenomenology might be a more uniform phenomenology; there may be structural universals across sense modalities.[2] Synesthesia suggests as much. Just as science can reveal hidden universals, so a scientific phenomenology might reveal experiential universals beyond our current grasp. Then the variety of sense experience would be revealed as superficial. Chomsky sometimes suggests that there is really only one human language when you get right down to it, despite superficial appearances; well, is it ruled out that there might be just one type of human sense experience? Call this Universal Phenomenology (UP for short): just like Universal Grammar, Universal Phenomenology might unite all human experience and distinguish it from other possible types of sensory awareness (reptilian, Martian). If that were so, phenomenology might be as uniform as physiology at a deeper level. I don’t think we could ever conclude that really there is just the sense of vision, with every other sense a minor variation on it; but we might conclude that the deep structure of all sensory experience is common to every type—no more various than the cell types that correlate with experience in the brain. At any rate, this is a possibility to keep in mind, especially since otherwise we seem confronted by a genuine biological puzzle (the puzzle of excessive phenomenological variety, as we might call it).

Our language is hooked up to our senses, so that we can comment on what we see and hear (etc.), but we don’t have a separate language for each sense equipped with its own sound system, syntax, and semantics. That would be pointless and biologically redundant, as well as confusing and energy-consuming. So why do we have separate phenomenological systems hooked up to our senses instead of a single system? Why isn’t our sensory system more like our language system? The language system is a singular and separate module with its own distinctive internal structure; it is not divided into five different modules each with its own grammar and lexicon. Evidently, this kind of architecture could in principle characterize our sensory system—say, a single visual module hooked up to our several sense organs. Yet that is not what we find, but instead a diverse and divided set of systems that must all be integrated somehow. It seems unduly complicated and unwieldy, like speaking five languages when one would suffice. Why the difference? Why not speak a single phenomenological language?

 

[1] This thought experiment emerged during a conversation with Tom Nagel on October 10, 2019.

[2] Here we might be reminded of Nagel’s discussion of “objective phenomenology” in “What is it like to be a Bat?” The more a phenomenological description prescinds from the specifics of a given type of experience the more universal it is apt to be. Thus we might aspire to cross-modality phenomenology.


What is Belief?

 

 


For all the work that has been done on the topic of belief, do we really know what belief is?[1] What kind of state (if state it be) is the belief state? Two suggestions have been prominent: belief is a feeling and belief is a disposition. Either belief is a state of consciousness analogous to sensation (pain, seeing red, feeling sad) or it is a tendency to behave in a certain way (assenting to a proposition, combining with desire to produce action). The OED defines "believe" as "feel sure that (something) is true", thus categorizing belief as a type of feeling: not "be sure" but "feel sure". What that feeling might be is left undetermined, though the definition has the ring of truth. And indeed belief is connected to feeling: your feelings tend to change when you acquire a belief, and there is such a thing as feeling sure. But what about beliefs you hold without thinking about them—are those beliefs all associated with feelings? Do you feel sure that London is in England, for example, even when the thought has not crossed your mind in months? Here is where the dispositional theory suggests itself: belief isn't an episodic state of consciousness but a readiness to act in a certain way—to respond "yes" when asked whether London is in England, say. Ramsey said belief is a "map by which we steer", emphasizing that beliefs guide action (but do we inspect our beliefs as we inspect maps?). And certainly beliefs and dispositions are tightly connected (as are desires and dispositions): your dispositions change when you acquire a belief, and belief encourages assent behavior. But is this what a belief is? Isn't it rather the mental state that gives rise to the disposition? What if you had a tendency to assent verbally to propositions not because you believe them but because you have been rigged up that way by a clever scientist intent on simulating the state of belief? In general, dispositional theories confound properties (states, facts) with their causal consequences; and we want to know what belief is, not what it does. The OED also has this under "believe": "accept the statement (of someone) as true". But don't we accept statements because of what we believe? It isn't that the belief is the acceptance. It is hard to avoid the impression that the dictionary (and the usual philosophical theories) conflates the symptoms of belief—feelings and dispositions—with belief itself. But then what is belief itself exactly?

Are we acquainted with belief itself? We are acquainted with sensations and behavior, both signs of belief, but are we acquainted with beliefs? The answer is not obvious. If we are, it seems curious that we draw a blank when considering the nature of belief; but if we are not, why do we bandy the concept around with such confidence? Is it perhaps that the concept is logically primitive and hence admits of no explanation in other terms? But that can’t be the reason for our ignorance, because the same is true of many concepts and yet we are not blind to the nature of their reference (pain, seeing red, maybe moral goodness). Or is it that the felt ignorance is an illusion born of a mistaken assumption, namely that we only know what a mental phenomenon is if we can reduce it either to a feeling or to a disposition? Maybe we know exactly what belief is but we think we don’t because beliefs are not sensational or behavioral, these being our preferred touchstones of mental reality when thinking philosophically. But that approach, though not unsound in principle, is hard to square with an evident fact: we really don’t know what it is to believe something—we have no conception of what fact is at issue. Once belief is distinguished from its symptoms its elusiveness becomes evident (compare Hume on causation).

This leaves us with another possibility—that “believes” is really a name for an I-know-not-what that we introduce to denote something that we reasonably believe to exist but can’t properly conceptualize. Belief is thus that state, whatever it is, that has such and such symptoms and plays such and such a role but whose nature we find elusive. In short, “belief” is a theoretical term—not just in application to others but also in application to oneself. Our knowledge of belief operates at one remove from the thing itself, which is why we have such an indeterminate conception of it. A similar approach might be suggested for the concepts of meaning and the self: these too are not directly encountered constituents of consciousness, which is why we can’t reconstruct them in such terms, but they are real nonetheless, just at some epistemic distance from our cognitive faculties. That is, not all parts of what we think of as the mind exist at the same epistemic level (and not because of a detached Freudian unconscious); some are not objects of direct inspection (perceptual or introspective). The ontology of folk psychology is an amalgam of these two types of fact (and we can add desire to belief): the mind consists of directly known constituents and relatively unknown constituents. Differently stated, belief (desire, meaning, the self) is a state that we refer to but are not acquainted with; we know many of its properties, but not its intrinsic nature. We know it is a propositional attitude (but what is an attitude exactly?) and that it involves the exercise of concepts, as well as being a truth-bearer, subject to referential opacity, and capable of combining with desire to lead to action: but we don’t grasp what kind of state it is—not in its intrinsic nature. The state gives rise to inner feelings and to outer behavior, but we have no clear idea of what it is in itself. We experience shadows of it, fleeting intimations and glimpses, but we have no firm conception of the thing itself: it is just “that which gives rise to these symptoms”. Ask yourself what kind of mental state you are in when you are asleep: you have various beliefs, but what is their mode of existence exactly? You might be tempted to reach for the concept of a disposition, but we have been down that road before—what is the ground of such a disposition? Let’s face it: you don’t know what to say, and yet you don’t doubt that you are in some sort of mental state. You might sputter that you are in a “cognitive state”, but that raises the same question over again: what kind of state is that? Not a feeling state and not a disposition, but a sui generis state that confounds comprehension. As we might say, we have only a partial grasp of what belief is. And the part we don’t grasp intrigues us the most, i.e. the very being of belief.

I grant that this position might sound counterintuitive. Doesn't the Cogito express certain knowledge ("I believe, therefore I am")? But how can that be if we don't know what thinking (believing) is? However, this is really not such a paradoxical position to be in: we know that we think and believe, and that this entails our existence, but it doesn't follow that we know what thinking and believing are—or what the self is for that matter. And did Descartes ever claim anything to the contrary—did he suppose that the nature of thinking is totally transparent to us? Knowing that something exists is not the same as knowing its nature. If Descartes had claimed that thinking is processing sentences in the language of thought, he could have been wrong about that; but this wouldn't undermine the Cogito. In fact, I would say that if you focus really hard on what is going on when you believe something, you will see that nothing determinate comes into view—you never catch your belief in flagrante, as it were. And you have no clear conception of what it is that you attribute when you ascribe beliefs to others (beyond their conceptual content). Nor does knowledge of the brain help: identifying belief with neural excitation in the B-fibers, say, affords no knowledge of what belief is in the ordinary sense. The problem is that neither does anything else—crucially, not introspection. We didn't come by the concept of belief by noticing feelings of belief in ourselves (where would those feelings be located?), or by observing the operation of dispositions to behavior; rather, we introduced a term for a type of psychological state whose nature was not evident to us but which we were sure existed. I have evidence for my beliefs drawn from my experience (e.g. feelings of conviction), but I don't believe in beliefs because I can grasp them whole. I see them through a glass darkly. I have a nebulous sense that certain propositions attract my assent, as if gravitationally, but what exactly my mind is up to I cannot tell. Even the strongest of our beliefs, say religious or moral or scientific beliefs, fail to disclose their inner nature—we just find ourselves filled with passionate conviction about certain things. It isn't like feeling a headache or a hunger pang in the stomach. Nor is it like hearing a sentence in your head. It isn't like anything.

Psychology used to be conceived as an introspective science, and then later as a science of observable behavior, but these ideas were predicated on a certain conception of the essence of the mind. Either the mind consists of inner episodes of consciousness of which we have immediate introspective awareness, or it consists of outer behavior that can be perceived externally. But the case of belief (also desire) shows that these alternatives are not exhaustive and are fundamentally on the wrong track. In so far as psychology is about belief and kindred states, it is not about feelings or behavioral dispositions, but about facts we find systematically elusive, which fit into neither category. Beliefs are not feelings and they are not dispositions to behavior, yet they are fully mental phenomena, paradigmatically so. As Hume would say, we have no impression of belief, yet belief is real and knowable (in some of its aspects). Belief is yet another example of the limits of human cognition. Psychology thus has an elusive subject matter.[2]

 


[1] The background to this essay is scattered. The issues discussed bubble under the surface of Wittgenstein’s Philosophical Investigations and are explicitly posed in Kripke’s Wittgenstein on Rules and Private Language (as well as my Wittgenstein on Meaning). In addition, the emphasis on ignorance reflects my standing interest in human mysteries as they pertain to philosophy. Hume is hovering paternally in the wings. Russell makes a brief appearance.

[2] It might be said that belief is a computational state and that this gives its essential nature. There is a lot to be said about this suggestion; suffice it to remark that this doesn’t give us a conception of belief comparable to our intuitive notions of pain or seeing red. Belief may well have computational properties, but it is another thing to claim that this is what belief is (would it follow that computers believe?).


Knowledge and Human Nature

 

 


An alien observer of human cognitive development would be struck by a fact he might be tempted to describe as paradoxical. This is that in the first five years or so of life development is rapid and impressive while subsequent learning tends to be slow and laborious. The typical five-year-old already has excellent sensory awareness of the world, a mature language, and a fully functioning conceptual scheme—all without apparent effort. They may be small, but they are smart. The reason for this precocity, we conjecture, is that much of what they have achieved by that age is the unfolding of an innate program or set of programs: all this cognitive sophistication is written into the genes awaiting read-out.[1] It is not picked up by diligent inspection of the environment. It comes quickly because it was already present in substantial outline. Thereafter the child must learn things the hard way—by learning them. Hence school, memorization, studying, instruction, concentration. Knowledge becomes willed, while before it was unwilled, spontaneous, given.[2] Cognitive development turns into work.

It could be otherwise for our alien observers: they are accustomed to school virtually from birth, because their children are born knowing practically nothing. They learn language by painstaking instruction, having no innate grammar; concepts are acquired by something called "deliberate abstraction", which is arduous and time-consuming; even their senses need years to get honed into something usable. They don't reach the cognitive level of a typical human five-year-old till the age of fifteen. Empiricism is true of them, and it takes time and effort. However, they have excellent memories and powers of concentration, as well as an aversion to play, so their later cognitive development is rapid and smooth: they are superior to college-educated humans by the age of seventeen and they go on to spectacular intellectual achievements in later life, vastly outstripping human adults. They are slow at first, given the paucity of their innate endowments, but quick later, while humans are quick at first but slow later (our memory is weak and our powers of concentration lamentable). To the alien observers this seems strange, almost paradoxical: why start so promisingly and then lapse into mediocrity? They continue to gain in intellectual strength while we seem to lose that spark of genius that characterized the first few years of life. That's just the way the two species are cognitively set up: an initial large genetic boost for us, and a virtual blank slate for them (but excellent capacities of attention, memory, and studiousness). Our five-year-olds outshine theirs, but their adults put ours to shame.

I tell this story to highlight an important point about the human capacity for knowledge—an existential point. The existentialists thought that freedom was the essence of human nature, conditioning many aspects of our lives, individual and social; but a case can be made that human knowledge plays a similar life-determining role. For we suffer under a fundamental ambivalence about knowledge, which is to say about our cognitive nature (which is not confined to non-affective parts of our lives). We are simultaneously very good at knowledge and quite poor at it. Some things come to us naturally and smoothly, especially in our earliest experience (pre-school); but other things tax us terribly, calling for intense effort and leading to inevitable frustration. Rote memory becomes the bane of our lives. Examinations loom over us. School is experienced as a kind of prison. Calculus is hard. History refuses to stick. Geography is boring. What happened to that earlier facility when everything came so easily? We were all equal then, but now we must compete with each other to achieve good test results, which determine later success in life. We seem to go from genius to dunce overnight. Imagine if you could remember your earlier successes and compare them with your current travails: it was all so easy and enjoyable then, as the innate program unfurled itself, but now the daily need to absorb new material has become trial and tribulation. Getting an education is no cakewalk. Wouldn’t it be nice if it could just be uploaded into your brain as you slept, as your genes uploaded all that innate information? It’s like a lost paradise, a heavenly pre-existence (shades of Plato), with school as the fall from blessedness. You are condemned to feel unintelligent, a disappointment, an intellectual hack. Maybe you will make your mark in society by dint of great effort and a bit of luck, but you are still a member of a species that has to struggle for knowledge, for which knowledge is elusive and hard-won. Suppose you had to live in a society in which those late-developing aliens also lived: they would make you look like a complete ignoramus, an utter nincompoop—despite their initial slow start.

A vice to which human beings are particularly prone is overestimating their claims to knowledge. It is as if they need to do this—it serves some psychic purpose. Reversion to childhood would be one hypothesis ("epistemic regression"). But the actual state of human knowledge renders it intelligible: within each of us there exists a substantial core of inherited solid knowledge combined with laboriously acquired knowledge, some of it shaky at best. Take our knowledge of language, including the conceptual scheme that goes with it: we are right to feel confident that we have this under control—the skeptic will not meet fertile ground here (I know how to speak grammatically!). Generalizing, we may come to the conclusion that our epistemic skills are well up to par: so far as knowledge is concerned, we are a credit to our species. But this is a bad induction: some of our knowledge is indeed rock solid, but a lot isn't. Being good at language is not being good at politics or medicine or metaphysics or morals. We are extrapolating from an unrepresentative sample. In young childhood our knowledge tends to be well founded, because it is restricted to certain areas; but as adults we venture into areas in which we have little inborn expertise, and here we are prone to error, sometimes fantastically so. We know what sentences are grammatical but not what political system is best. But we overestimate our cognitive powers because some of them are exemplary. It would be different if all our so-called knowledge were shaky from the start; then we might have the requisite humility. But our early-life knowledge gives us a false sense of security, which we tend to overgeneralize. We believe we are as clever about everything as we are about some things.

I recommend accepting that we have two sorts of knowledge—that we are split epistemic beings. On the one hand, we have the robust innately given type of knowledge; but on the other hand, we have a rather rickety set of aptitudes that we press into service in order to extend our innately given knowledge. Science and philosophy belong to the latter system. Thus they developed late in human evolution, are superfluous to survival, and are grafted on by main force, not biological patrimony. There is no established name for this distinction between types of knowledge, though it seems real enough, and I can't think of anything that really captures what we need; still, it is a distinction that corresponds to an important dimension of human life—an existential fact. We are caught between an image of ourselves as epistemic experts and a contrasting image of epistemic amateurishness. We are not cognitively unified. We have a dual nature. We are rich and poor, advantaged and disadvantaged. Other animals don't suffer from this kind of divide: they don't strive to extend their knowledge beyond what comes naturally to them. Many learn, but they don't go to school to do it. They don't get grades and flunk exams and read books. Reading is in some ways the quintessential human activity—an artificial way to cram your brain with information not given at birth or vouchsafed by personal experience. Reading is hard, unnatural, and an effort. It is an exercise in concentration management. We may come to find it enjoyable,[3] but no one thinks it is a skill acquired without training and dedication (and reading came late in the human story). It is also fallible. And it hurts your eyes. This is your secondary epistemic system in operation (we could label the types of knowledge "primary knowledge" and "secondary knowledge" just to have handy names).

Animals are not divided beings in this way (they don't lament their lack of reading ability); nor do they apprehend themselves as so divided. But we are well aware of our dual nature, and we chafe at it (as the existentialists say that we chafe at the recognition of our freedom). We wish we could return to epistemic Eden, when knowledge came so readily; but we are condemned to conscious ignorance, with little inroads here and there—we are aware of our epistemic limits and foibles. We know how much we don't know and how hard it would be to know it (think of remote parts of space). We know, that is, that we fall short of an ideal. We can't even remember names and telephone numbers! Yet our knowledge of convoluted grammatical constructions is effortless. If we are that good at knowledge, why are we so bad? Skepticism is just the extreme expression of what we all know in our hearts—that we leave a lot to be desired from an epistemic point of view.[4] We are both paragons and pariahs in the epistemic marketplace. In some moods we celebrate our epistemic achievements, in others we rue our epistemic failures. The reason is that we are genuinely split, cognitively schizoid. Perhaps in the prehistoric world the split was not so evident, in those halcyon hunter-gatherer days, before school, writing, and transmissible civilization; but modern humans, living in large organized groups, developing unnatural specialized skills, have the split before their eyes every day—the specter of the not-known. We thus experience epistemic insecurity, epistemic neurosis, and epistemic anxiety. Our self-worth is bound up with knowledge ("erudite" is not a pejorative). It is as if we contain an epistemic god (already manifest by age five) existing side by side with an epistemic savage: the high and the low, the ideal and the flawed. I don't mean that we shouldn't value what we acquire with the secondary system, or that it isn't really knowledge, just that it contrasts sharply with the primary system. The secondary system might never have existed, in which case no felt disparity would have arisen; but with us as we are now we cannot avoid the pang of awareness that our efforts at knowledge are halting and frequently feeble. The young child does not suffer from epistemic angst, but the adult has epistemic angst as a permanent companion. School is the primary purveyor of that angst today. Education is thus a fraught venture, psychologically speaking, in which our dual nature uneasily plays itself out. The existentialists stressed the agony of decision, but there is also the agony of ignorance (Hamlet is all about this subject, as is Othello).[5]

Freud contended that the foundations of psychic life are laid down in the first few years of life (and sex, not freedom or knowledge, is the dominant theme), shaping everything that comes later. The stage was set and then the drama played out. I am suggesting something similar: the first few years of cognitive life lay down the foundations, and they are relatively trouble-free. Knowledge grows in the child quite naturally and spontaneously without any strenuous effort or difficulty. Only subsequently does the acquisition of knowledge become a labor, calling upon will power and explicit instruction. We might view this transition, psychoanalytically, as a kind of trauma: from ease to unease, from self-confidence to self-doubt. Whoever thought knowledge could be so hard! Compare acquiring a first language with learning a second language: so effortless the first time, so demanding the second. What happened? Now learning has become a chore and a trial. It is a type of fall from grace. The reason we don't feel the trauma more is that it happens at such an early age (I assume there is no active repression)—though many a child remembers the misery of school. Knowledge becomes fraught, a site of potential distress. Cramming becomes a way of life, a series of tests and trials. But all the while the memory of a happier time haunts us, when knowledge came as easily as the dawn.[6] And then there is death, when all that knowledge comes to nothing—when all the epistemic effort is shown to be futile. Our divided nature as epistemic beings thus has its significance for how we live in and experience the world. It is not just a matter of bloodless ratiocination.

 


[1] I won’t rehearse all the evidence and arguments that have been convincingly given for this conjecture, save to mention the existence of critical periods for learning. Would that such periods could occur during high school mathematics training!

[2] Of course, we still pick up a lot of information without effort just by being in the world, but for many areas of knowledge something like school is required (this is true even for illiterate tribes).

[3] Logan Pearsall Smith: “People say that life is the thing, but I prefer reading.”

[4] Is it an accident that one of the prime distinguishing characteristics of God is his omniscience? He knows automatically what we can never hope to know.

[5] The Internet, with its seemingly infinite resources, drives this point home. It also leads to varied and grotesque deformities in our cognitive lives.

[6] Here you see me lapsing into weak poetry, as all theorists of the meaning of life must inevitably do. Sartre’s Being and Nothingness is one long dramatic poem: who can forget his puppet-like waiter, or the woman in bad faith whose hand remains limp as her would-be suitor grasps it, or Pierre’s vivid absence from the cafe? My illustrative vignette would feature a bleary-eyed student studying in a gloomy library while recollecting her carefree sunlit days of cheerful effortless knowing.
