On Not Denoting

The literature on descriptions tends to operate with a limited class of examples, mainly descriptions of people and places (“the queen of England”, “the capital of France”). This can bias us in favor of certain theories of their semantics. We can think of descriptions as referring to people and places (Frege and Strawson), or we can think of descriptions as quantifying over people and places (Russell). But there are many other types of descriptions that lend themselves far less readily to such theories: “the end of the line”, “the impossible dream”, “the shortest way home”, “the day after tomorrow”, “the big home sale”, “the height of absurdity”, “the reason why”, “the truth about Mary”, “the logic of existence”, “the problem of consciousness”, “the Greek gods”, “the mystery of life”, etc. etc. Here we are not dealing with concrete particulars, unabashed reference to which can be assumed to be unproblematic; some ontological hesitancy might be expected. Are we really to believe that the use of such descriptions necessarily commits the speaker to a robust ontology of corresponding entities? The speaker might turn around and say that of course she has no belief that such things really exist: if a theory of descriptions commits her to such existence, she might demur from the theory.  [1] When a speaker says, “the golden mountain must be worth a lot of money”, are we to assume that she believes that the golden mountain exists to be referred to and quantified over? It is noteworthy that these descriptions don’t have names or demonstratives associated with them, unlike descriptions of people and places: there isn’t an antecedent assumption of existence-committed reference of the kind suggested by names and demonstratives. We are not already referring to such things in other ways; the description is the only way we have to speak of them. So a referential or quantificational treatment of them is not intuitively natural: their meaning is not naturally taken to involve the kind of ontological commitment we associate with other referential devices. Their “logical grammar” is not that of a straightforward assertion (or presupposition) of existence, whether by singular term (name or demonstrative) or by existential quantifier.

            The idea that descriptions are name-like or demonstrative-like has never seemed particularly attractive—they seem semantically sui generis—and this has fueled the ascendancy of Russell’s quantificational theory. But that theory too has some untoward consequences, notably the consequence that a freestanding description is really a whole sentence: if I utter the sentence fragment “the queen of England”, I have really uttered the whole sentence “there exists a unique queen of England”. The film title The French Connection is equivalent to There Exists a Unique Connection that is French or some such. Maybe descriptions imply or presuppose existence and uniqueness (or maybe not: see above), but it is hard to accept that they state existence and uniqueness (this was Strawson’s point). If they did, it would be possible to negate them; but we can’t say “It’s not the case that the queen of England”. So neither of the two standard theories looks very attractive, despite their hegemony: the suspicion grows that they have been accepted mainly because of a lack of any better alternative. The appearances suggest that descriptions are not referring devices like names and demonstratives (which don’t contain the particle “the”) and they are not like quantified expressions either: they are what they are and not some other thing. The meaning of “the” is unique—which is why we have the word. But then, what kind of semantics do they have? Their meaning appears not to be referential (in any clear sense of that elastic term), either in acts of singular reference or in acts of quantification; but if not, what kind of meaning do they have? Are we to say that they have non-referential meaning? That would not be unprecedented, since words like “not”, “and”, and “if” are also used non-referentially; but descriptions are at least nouns, unlike these words, so the idea of reference clings to them more tightly. This is really a problem: descriptions fit none of our referential paradigms, but they also don’t fit non-referential paradigms. They occupy a curious semantic no-man’s land. We might say that they serve to “introduce a topic” or “identify a subject”, but these phrases don’t really help to pin down how they specifically function. Indeed, they seem to challenge the whole referential framework: the concept of reference (or lack of reference) is unable to cope with them. They don’t denote, but they don’t not denote either. Russell could have written a paper called “On Not Denoting” and proceeded to discuss the semantic peculiarities of “the”. He was right that descriptions don’t denote like names and demonstratives, but he was wrong to suppose that they denote in the way quantifiers do, i.e. with variables ranging over existent entities. There is no denoting at all going on with descriptions as such, as opposed to certain instances of them, but what is going on remains obscure.  [2]

            A radical solution may be proposed: the whole framework of referential semantics deriving from Frege and developed by others needs to be abandoned. It is hard to remember that in the old days meaning and reference were not closely associated: meaning was held to consist in ideas of the mind; it was not a matter of word-world relations. Today we would say that meaning is a matter of concepts, which are psychological entities: they have a role in the mind but any supposed relation to things outside the mind is purely incidental. This is non-referential semantics (meaning is completely “in the head”). According to a view like that, descriptions have meaning in virtue of internal psychological factors, so that the idea of reference never comes into the picture. In the extreme, this internalist semantics says the same thing about all expressions, including names and demonstratives, but we could restrict it to the case of descriptions: they express concepts but they don’t refer to anything or quantify over anything (i.e. refer by means of variables). The case of descriptions, then, provides support for such a non-referential semantics—though it is certainly a contrarian point of view (compared to orthodoxy). Short of that, we have an unsolved semantic problem on our hands, despite the enormous amount of attention paid to the word “the”: we still don’t know what this word means. We know what “that” means and we know what “there is” means, but “the” leaves us baffled. Russell was right to fret mightily over this little word, and Strawson was right to question his theory, but in fact it remains as puzzling now as it was over a hundred years ago.  [3]

 

  [1] I am not saying that the descriptions listed clearly fail to refer to existent entities; I am saying that a theory that says they definitely do is going out on a limb that no semantics should venture onto. We don’t want to end up saying that a speaker’s sentences commit him to the existence of things he expressly repudiates: a nominalist, say, should be able to use these descriptions with a clean conscience.

  [2] The semantics of descriptions should not be dependent on their type of subject matter: it should be the same whether we are speaking of entities acknowledged to exist or things to which we are reluctant to ascribe existence. Indeed, it should be neutral with respect to whether anything exists. Building assertions of existence into the very meaning of descriptions burdens them with far too much ontological responsibility. Questions of existence are far too controversial to be presupposed by semantics. 

  [3] Note to experts: I am well aware of the complexities of this subject, and the myriad ways of wriggling out of objections, and the passions aroused by the meaning of “the”; but I am trying to cut through all that to expose a basic weakness in our thinking about descriptions since Russell wrote “On Denoting”.

Particle Psychology

Physics is particle physics. The physical world consists of atoms that consist of particles (electrons, protons, neutrons). This was discovered not so long ago, though it was conjectured by the ancient Greeks. It is not part of common sense and is not suggested by perceptual appearance. If anything, physical objects look continuous not granular. We have empirically discovered, as a result of arduous experimentation, that big objects are made of invisible small objects, and that these small objects have a characteristic structure consisting of a nucleus surrounded by orbiting particles. This is one of the greatest scientific discoveries of all time, on a par with the discovery of the heliocentric theory; it is something to emulate, envy, and aspire to. It sets an example to all other sciences. But is it an example that has been followed? Have the other sciences discovered anything analogous to the atomic theory of particle physics? In particular, are there psychological particles analogous to physical particles? Is psychology particle psychology?

            Do the other sciences speak explicitly of particles and atoms? We might cite the doctrine of logical atomism: language consists of atomic sentences compounded into molecular sentences, where the atomic sentences contain subatomic parts, viz. words. The picture is that elementary semantic particles (words) combine to form atomic wholes that can then combine to form molecular compounds (conjunctions, negations, etc.). These molecular compounds can then form larger semantic entities such as paragraphs or speeches or books. The semantic universe is thus a universe of elementary semantic particles that join with other such particles to form larger units—ultimately the whole of semantic reality. We could even go cosmological: there are semantic planets and solar systems and galaxies (e.g. the works of Shakespeare). Language has infinite potential, so we can envisage an enormous proliferation of semantic entities existing alongside the physical galaxies described by cosmology. Maybe there are just a few primitive semantic particles, analogous to the physical particles, which generate this whole hierarchy of existence. Logicians talk about the “logical particles” and we might seek to model logic on particle physics (including cosmology). We have found higher-order uniformity in nature: nature is organized atomically, with elementary particles lying at the bottom, physical or semantic. It is particulate, corpuscular, and combinatorial. Similarly, linguists speak of grammatical particles, defined by the OED as follows: “a minor function word that has comparatively little meaning and does not inflect, e.g. in, up, off, or over used with verbs to make phrasal verbs”. Admittedly, this restricts grammatical particles to a small subclass of semantic units, but we can envisage a more generalized notion of grammatical particle that takes in verbs and nouns. Isn’t linguistics really about the constituent structure of language, and hence a theory of linguistic atoms and their combinations? And what about biology—aren’t cells the analogue of the particles of physics? A whole organism is like a medium-sized physical object: it contains organs as parts (physical and mental) and these organs are made of cells, the atoms of the body. The cells in turn have subatomic constituents (e.g. mitochondria) and even a nucleus. So biology looks like an atomic theory too: it follows the pattern of particle physics—an encouraging emulation. Biology is particle biology. The cells are invisible to the naked eye and took some discovering; their discovery was a major piece of scientific progress. Again, a high-level uniformity in nature is revealed: things turn out to be collections of invisible units, not continuous substances. Reality is particulate, discrete, and reticulated, though it could have been continuous, smooth, and unbroken. Geometrically, it is like a matrix, not a continuous solid. We could say that science is particle science.  [1]

            But what about psychology—is it too an atomic theory? Superficially, it may appear so: it deals in such units as concepts, phenomenal points, and combinations thereof. We look at a train of thought and discern constituent structure: the train consists of carriages, the individual thoughts, and these are made of simpler elements, usually designated as concepts. The concepts are like the subatomic particles of logical atomism, with thoughts as the atoms. In vision, likewise, we have an array of components corresponding to points in the visual field: here green, there red, with an impression of square woven in. The brain is certainly a particulate entity, with its neurons and their simpler parts, reaching down to the level of chemicals. But in the case of psychology the analogy starts to creak (maybe it should have started to creak earlier): the whiff of metaphor starts to permeate the proceedings. It is true enough that mental processes such as reasoning consist of simpler cognitive elements, and it is true that thoughts are made up of concepts: but are concepts really like subatomic particles? Here an uncomfortable dilemma presents itself: concepts are either the end of the line, or they are not. If they are, then our atomic theory peters out disappointingly early—there are no deeper psychological particles to be discovered. But then we don’t have the kind of empirical discovery that characterizes atomic theory in physics, since we know a priori that thoughts are composed of concepts. But if concepts are not the end of the mereological line, and there are more basic psychological particles to be discovered, then we have hitherto failed to identify what the basic particles of the mind are. This means that psychology has not made the kinds of empirical discoveries that physics so spectacularly and arduously made. It is merely reporting a boring fact of common sense, namely that we have concepts and they compose our thoughts. This is like “discovering” that animal bodies have limbs or that trees have leaves. The mere existence of parts does not an atomic theory make. Physicists knew that physical objects have parts long before they established the atomic theory we now justly celebrate; parts are not atoms. Likewise, parts of thoughts are not atoms in the epistemologically significant sense exemplified by physics. Concepts are not like electrons and protons, but more like limbs and leaves.

            The thing about atoms is that they have a specific structure that has an explanatory aspect. They are not merely very small parts but articulated structures: they have a distinctive architecture. Thus they have a discrete nucleus made up of protons and neutrons surrounded by a shell consisting of orbiting electrons of varying numbers. The constituent particles have various properties, notably electric charge, positive or negative. All this explains the behavior of the matter composed of these particles, thus allowing for the reduction of chemistry to physics. It is not just a matter of conjoined chunks, mere aggregation. Atoms and their constituent particles are organized in a certain way, not anticipated by common sense or contained in the very concept of matter. That is why atomic theory ranks as a momentous scientific discovery; it is not the mere assertion of invisible granular parts. It is the analogue of the discovery of the solar system as a system of interrelated parts orbiting a sun under the force of gravity. This is not merely the claim that the planets and the sun are parts of a larger whole; it is a theory of how the parts hang together. The atom is often compared to the solar system in its internal structure, but we could equally compare the solar system to the atom (and might have done so if we had revealed the structure of the atom first): both are tightly organized complex wholes held together by forces and laws, the nature of which we have managed to articulate (if not finally explain). But nothing like this is true in psychology: the mere observation of constituent structure is a far cry from the kind of explanatory atomic theory supplied by physics. Where is the nucleus of the concept, where the conceptual electrons, and where the law-governed orbits? At least in the case of the biological cell we have something approximating to this, but in the case of concepts (or points in the visual field) nothing comparable suggests itself: we just have the banal observation that thoughts are made up of concepts. In order for psychology to become particle psychology it needs to do a lot more than that. I don’t say it cannot do more, but in its current state it does not. It is an entirely open question whether psychology can mimic the model of particle physics; certainly nothing in it now deserves to be compared to that model. And nothing we now know of the mind suggests a research program capable of development into a full-blooded particle psychology (the same is true of linguistics once we take the measure of genuine particle physics). So nothing in our current psychological knowledge warrants any claim of prestige deriving from a supposed analogy with particle physics. Not that this is a common claim—but it is worth making explicit how feeble the analogy to physics actually is. Physics is particle physics, but psychology isn’t particle psychology. The gulf is wide and deep. One might well suppose that a quite different paradigm would be appropriate for psychology—though what this might be remains to be determined. At any rate, we should not tacitly assume any reassuring analogy to physics based on the idea of atomic composition. The “corpuscular theory”, so beloved by Locke and Boyle, primitive though it was, finds no counterpart in psychology (with “ideas” as the mental corpuscles).
By all means let us continue to stress the combinatorial nature of the mind—its generative, recursive, discretely digital character—but let us not interpret this as a vindication of a hankering for the glories of modern physics. We have not discovered the hidden particles of the mind, along with their architectural features, in anything like the sense in which physicists have unveiled the hidden particles of the physical world.  [2] Whether this shows that psychology is still in its infancy, or that its maturity is the same as its infancy, remains moot.

 

Colin McGinn                    

 

  [1] It is an interesting question whether there could be a general particle theory, not just a variety of special particle theories. Are there any abstract properties shared by all particle theories (beyond truisms)? Is the concept of a nucleus essential? Does the idea of an orbit find a place in all atomic theories? Is there always some sort of glue holding the atom together? It seems unlikely that such concepts generalize in the way required, in which case the idea of a general theory of particles seems infeasible. Still, the question is worth pursuing.

  [2] Much the same can be said of mathematics: composition without atomicity. We certainly have the idea of elements that combine according to fixed rules, addition being the obvious example, and maybe we can make sense of numbers as parts of other numbers; but it would be stretching a point to claim a significant analogy with particle physics. Again, where is the idea of a laboriously discovered structure with the abstract architecture of a physical atom? Are there nuclear numbers and orbital numbers? Are there mathematical forces that hold numbers together? The idea seems metaphorical at best. Mathematics is not particle mathematics, though it is a generative system with divisions and discontinuities (as well as continuous quantities). Physics really is special in that it reveals a hidden layer of reality that is not anticipated by our ordinary perceptions and conceptions, and is in many ways alien to them. The physical atom is a universe apart (so to speak), and this is before we get to the peculiarities of quantum theory. Psychology has yet to encounter its subversive quantum theory—particles as anti-particles, in effect. It is conservatively Newtonian, steeped in common sense.

Sexuality and the Transsexual

Consider a person, Alec, who believes he was born into the wrong kind of body. He believes himself to be essentially female, despite his anatomy. Accordingly, he chooses to become the woman he inwardly perceives himself to be: he dresses in women’s clothes, wears make-up, and acts the part of a woman (but no surgery or hormone treatment). He feels his authentic female self in his new role. He now calls himself (or she calls herself) Alice and feels the better for it. After a while, Alice comes to the realization that she is a lesbian—a female homosexual. She decides to change her appearance into a more masculine image: she cuts her hair short, wears manly clothes, and acts like a man (in so far as she can). In this guise Alice frequents places where lesbians mingle and successfully hooks up with likeminded others, most of whom are regular women. Sexually, Alice performs in the standard male manner, this being what her anatomy demands. To all appearances Alice looks and behaves just like the old Alec. Question: is Alice homosexual or heterosexual? She seems heterosexual when you consider her anatomy and sexual behavior, but from the inside she feels herself to be a woman making love to another woman, and thus homosexual. A male body is interacting with a female body in the usual heterosexual fashion, and yet psychologically the person is a woman interacting with a woman. Is this same-sex sexual behavior or different-sex sexual behavior?

            Neither alternative seems to capture the facts. If we dwell on the external appearances, we find heterosexuality; while if we look inside, we see homosexuality. Objectively, the erstwhile Alec is a straight male; subjectively, the new Alice is a committed lesbian. Which is it to be? It seems to me that the only answer can be “Both”. A single person is biologically heterosexual and psychologically homosexual. Generally these two attributes go together—your psychological sexuality tracks your biological sexuality—but in the case described the two come apart. In other words, we can’t deduce psychological sexuality from biological sexuality: Alice is a counterexample. She has it both ways: she is a heterosexual homosexual. She is partly heterosexual and partly homosexual. Not bisexual, mind you: she only has sex with women; the duality lies in her, not in her partners. On the one hand, Alice is a biological male copulating only with females (both biologically and psychologically); on the other, she is a psychological female copulating with other females. She lives a dual existence: gay and straight, queer and ordinary. Some may say she has the best of both worlds; at any rate, she is familiar with both. Of course, the same is true if we switch genders: Joan may wish she were male and enact her wishes, becoming John. In this new identity she discovers herself to be gay, i.e. John identifies as homosexual. But John has a female body and uses it accordingly. So John uses his female body to sleep with men: it may look like typical heterosexual sex, but from the inside the erstwhile Joan is a man sleeping with another man. John is both homosexual and heterosexual, and quite happy with the combination. He is attracted to people of the same sex as him, considered psychologically, but his biological identity implies heterosexual sexual behavior. He feels himself to be a gay man, but his way of expressing this is to use his female body in the biologically indicated manner.  [1]

            Here is an even more challenging case: Percy has a very special psychological identity—he feels himself to be a woman who identifies as a man. At present he is simply a man with an unusual yearning, but he wishes to transform himself into something closer to his subjective identity. Accordingly, he transitions to a female persona by changing his clothes, manner, etc. Now Percy is a woman who wishes she were a man: call this person Pearl. Pearl would like to become a man, or so she says. Percy has achieved his wish, but Pearl has wishes of her own—she wants to become a man. Maybe she does, by adopting male accouterments: but then we are back where we started—a man who wishes he were a woman wanting to be a man. If Pearl decides she is a lesbian, is she a heterosexual or a homosexual? She is a woman sleeping with other women using a male body, but she also identifies as male: she is homosexual psychologically, heterosexual biologically, and also heterosexual psychologically (since she identifies as male while being psychologically female). It sounds like a very difficult psychological state to be in, and perhaps one that can never actually occur. But it shows what can happen when biological sexuality and psychological sexuality diverge in logically possible ways. We may need to reckon with far more complex forms of sexuality than the few forms currently recognized. Possible worlds may be sexually much richer than we can easily imagine.            

 

  [1] If you were to transfer the brain of a homosexual man into a woman’s body, you would get something like John: the brain would want men, while the female body would have to suffice for sexual relations.

Night and Day

How firm is the distinction between night and day? What is night and what is day? What kind of distinction is this? Clearly there are periods during which it is neither night nor day—at dusk and dawn. Night is turning into day, but it is neither one nor the other. We don’t have a simple name for this time period, which lasts I would say for about 20 minutes; we could call it “nay” or “dight”. So the distinction between night and day is not sharp and not exhaustive. Nor is it universal: there is no night on the sun, and no day at the center of the earth. Things that are always bathed in light have no night, and things that are always dark have no day. In the land of the blind the distinction presumably makes no sense, since light is not sensed there; the distinction is not apparent to blind people, except perhaps by testimony. Is what we would call night really day for nocturnal creatures with superior vision to ours? They can see at night as well as we can see during the day, and they conduct their waking activities in our night; perhaps sunlight blinds them so that everything is dark for them during our day.  [1] Do they invert our day and night? What if the moon were larger and always reflected a lot of sunlight onto the earth—would that abolish night for us? The sleeping hours might be just like an overcast day. But then, is it really night for us now when the moon is bright and full? What about eclipses—do they literally turn day into night? And what if heat replaced light as the stimulus for vision—wouldn’t that undermine the night-day distinction? If things were warmer at night than during the day, people would see better at night, so there might be a linguistic switch with the words “night” and “day”. The distinction between day and night is friable and relative, pragmatic and contextual; there is no rigid absolute dichotomy. I can imagine a principled eliminativism about night and day: there are really no such things, not as aspects of objective reality; there is just a continuously varying amount of light and grades of visual acuity. You don’t find physicists talking much about the physics of day and night; the whole idea is centered on human concerns and powers. We can certainly imagine beings that have no use for the distinction, but without missing out on any objective matter of fact. When the Beatles sang about a Hard Day’s Night, they were reflecting on the superficiality of the distinction. If night is defined as the hours of sleep, its status as objective looks distinctly questionable; it is perfectly possible to do a day’s work in the night hours (so called). People often say that two things differ as night and day, as if this were a rigid and sharp distinction; but in fact the distinction is vague, pragmatic, and relative.

            Note that in another use of “day” we include the night as part of the day. You can say with perfect propriety, “What day is it today?” in the dead of night. The names of days of the week clearly include the period we call night. These distinctions map onto objective motions of the planet and are not as soft and pliable as the use of “day” in contrast to “night”. The semantics of these words ties them to certain anthropocentric conditions: “x is day” is true if and only if x has enough light to make human activity feasible, or the human eye can make things out clearly. We then reify this notion into something that seems to us to transcend human concerns, but reflection suggests that night and day are not independent of our particular contingent place in nature (they are part of the “relative conception” not the “absolute conception”). Just imagine if we became nocturnal because of some mutation in our eyes: we sleep when the sun casts the most light, are able to see only when the light is low, and are most active between the hours of 10pm and 6am; then we would surely start to refer to the hours of the 24-hour day quite differently. Night is when we can’t see so well, tend to feel sleepy, and want to stay home; day is when we see clearly, are wide awake, and feel like venturing forth. We do better, conceptually, to think of night as just another part of the day—the part in which the day is darker, quieter, and scarier. Alternatively, day is just the attenuation of night, the time when night becomes less opaque. If the difference of sunlight were less dramatic, with a lighter night and a gloomier day, we would surely think this way. This is one of those cases in which ordinary language (and ordinary thought) mislead us about reality: the words and their use make us think that we are dealing with a more solid and durable distinction than we really are. Rectifying this tendency, we could find ourselves more willing to stay at home during the day and more willing to go out at night (the “late day”); and be less prone to feeling guilty about afternoon naps and middle-of-the-night snacks. We need to free ourselves from the tyranny of the night-day dichotomy. The sun and the moon have vied for human worship; it is time to grant them equal status. Sun-time and moon-time are all parts of the same continuous day. The distinction between day and night is not written into the fabric of the universe. The day is bright night and the night is dark day.

 

Colin McGinn

  [1] Is it analytic that the night is dark? How dark?  What if we had artificial light everywhere? The starry sky is quite bright at night. Is it analytic that the day is light? How light? What if the earth were enveloped in a light-blocking mist?  The cloudy sky can be quite dark at noon. So it is unclear that lightness and darkness can be used to define day and night more precisely.

The Tyranny of the Majority

Here is de Tocqueville in Democracy in America (1835): “But the dominating power in the United States [the majority] does not understand being mocked like that. The slightest reproach offends it, the smallest sharp truth stimulates its angry response and it must be praised from the style of its language to its more solid virtues. No writer, however famous, can escape from this obligation to praise his fellow citizens. The majority lives therefore in an everlasting self-adoration. Only foreigners or experience might be able to bring certain truths to the ears of Americans” (from “The Power Exercised by the Majority in America Over Thought”).

Do I need to make any comment?

Oh America!

I have lived in this country for 30 years, but I count myself an outsider. What we have witnessed in the last week, and in the last few years, has undoubtedly been ugly in the extreme, but to my eye it is not so far removed from business as usual in America. The same tendencies towards conspiracy theories, hysteria, gullibility, brutality, and delight in destruction are everywhere, including in American universities. They are not the exclusive property of the political right (appalling as that is) but are quite independent of political affiliation. They are features of the American psyche, but Americans are oblivious to them. They are part of the national character, going back a long way. Of course, there are some good things about America, but people need to wake up to their blind spots and habitual reactions. Yes, I am talking to you.

Self-Blindness and "I"

Hume’s point that the self is not introspectively detectable has always been met with intuitive acceptance. Its significance is more contentious. Does it show that the self does not exist, or that introspection is limited, or that the self is really just a congeries of mental states? A possible view is that introspection has blind spots and the self falls into one of them. Introspection doesn’t reveal the body or the brain either, but they clearly exist; perhaps the self is a real entity that happens to fall outside introspection’s possible field of acquaintance. God can see it perfectly well, but humans are blind to this aspect of their nature. They are introspectively blind to the self in the same way they are blind to other things—through lack of acuity, lack of coverage, and lack of receptivity. They are not completely blind to the mind, since introspection can reveal other mental phenomena, but the self eludes their introspective powers. Some people are totally blind in the ocular sense, some partially so, and everyone is blind to some things (elementary particles, remote galaxies, parts of the electromagnetic spectrum); well, humans are introspectively blind to the self. Hume could have made the same point about the senses: search through the data of smell, taste, and hearing and you will not find any presentation of an object given to these senses. They deliver information about qualities, states, events, and processes, but they don’t include the perception of a continuant object—the objective source of the phenomena in question. You hear the bark of a dog but hearing gives you no impression of the dog itself as an object existing through time. Only vision and touch offer impressions of continuant particulars (and some have contested even this); the other senses are blind to such entities. Their intentional objects include only fragmentary passing occurrences: smells, tastes, and sounds. Introspection seems to be like this: it never presents the continuing self to the inner eye but only its states and contents. It suffers from perceptual closure with respect to the self—as well as for the brain and body (not to mention the rest of the world). Every sense is limited in some way, and introspection is no different. Self-blindness is really just par for the course. It certainly doesn’t imply the non-existence of the self.

            But Hume’s point does raise an interesting question about the word “I”: how does it refer? It seems not to be equivalent to a description, but it doesn’t correlate with a mode of acquaintance either, since we have no acquaintance with the self (accepting Hume’s point). It isn’t like reference to pain or color sensations: here we do know what we are talking about—these things are immediately presented to us inwardly. In these cases we “see” what we refer to. But we are constitutionally blind to the self, so we are referring to something we can neither perceive nor describe. We are like a blind man referring to what he can neither see nor pick out descriptively. This is perfectly possible: he may say “that dog” while pointing forward and happen to pick out a particular dog in front of him. He has no acquaintance with the dog and cannot describe it uniquely, but the demonstrative enables him to achieve reference nevertheless. The dog is not referentially closed to him—just perceptually and descriptively closed. Reference can transcend acquaintance and description. Hallelujah! We can refer to what we cannot otherwise access (compare remote galaxies, elementary particles, future persons, and other universes). Similarly, according to this line of thought, we refer to the self in just this kind of way: “I” refers in roughly the way “that dog” refers for the blind man. And what is that way? By means of context, indexical mechanisms, and the semantics of content and character.  [1] We are like someone cut off epistemically from an object yet able to deploy the apparatus of indexical reference to make reference to that thing. Otherwise we would be referentially impotent with respect to the object. What this means is that we have no perception (and no real conception) of the self that we so effortlessly denote all the time—thanks to the semantics of “I”. We have nothing of epistemic substance in mind when we use “I”, but that doesn’t stop us referring to the self. This is a species of ignorant reference—reference in the absence of knowledge (by acquaintance or description).  [2] This may account for some of the peculiarities of the word “I”—in particular, its air of airiness. It seems totally devoid of content, a mere schema or skeleton, perhaps a pseudo-singular term, not really denotative at all. The reason is that it is a case of blind reference—reference not backed by knowledge. It seems like a shot in the dark, a mere gesture at reference—like putting your hand over your eyes and enunciating the word “that”, hoping to net something to refer to. Even if you succeed, you have nothing much to say about the thing you have referred to, with no mental act of identification to back up your stab in the dark. Introspection is blind to the self (and external observation does nothing to remedy the lack) but the indexical semantics of “I” enables you to hang onto reference by the skin of your teeth. It is the constitutional weakness of introspection combined with the elastic power of indexical reference that characterizes the use of “I”: a sort of blind strength.

            Other indexical words conform to roughly the same pattern, particularly “here” and “now”. Imagine someone subjected to complete sensory deprivation: absolutely no input is received via the senses from the external world. Thus nothing is known by the deprived subject about what is going on around her: she has no perception of what is occurring at the present time, nor does she have any descriptive knowledge that could uniquely identify the place and time involved. Yet she pronounces the magic words “here” and “now”, outwardly or inwardly, and evidently makes determinate reference thereby: a certain place and time are picked out. The reference is not mediated by acquaintance or by individuating description, but it proceeds nonetheless. This is an extreme case of what I am calling ignorant reference. There is clearly a lot of ignorant indexical reference going on in typical language use, and it can be used to anchor other reference, such as with proper names. It might even be argued that these cases are like the case of the self in that they involve blindness to the entities actually denoted: we never really encounter places and times as such in our epistemic searches. We don’t perceive them directly with the senses and we don’t have adequate descriptive knowledge of them; but we can refer to them by exploiting the mechanisms of indexical reference. And they are always there to be referred to, like the self (and unlike the material objects of perception): there is never simply no such thing as time or space or self to reciprocate the referential act. The self, though contingent, is always there because the act of referring guarantees it—where there is a referrer there must be a self that is referred to by “I”. There cannot be cases of empty self-reference. There is no need to rely on perception to provide evidence of existence as a precondition of successful reference; existence comes with the referential territory. We exist in space and time and the self is always present whenever reference occurs: so “here”, “now”, and “I” always find a target. Blind spots don’t undermine successful reference in any of these cases. There has been a tendency to link reference with knowledge (by acquaintance or by description) but the interesting fact about reference is that it is free of such epistemic constraints: it can proceed in blissful ignorance of the thing referred to. Hume’s point is correct (though not damaging to the self’s existence) but it is no bar to reliable and useful reference to the self. Self-blindness does not entail referential self-blankness: we can denote what we cannot see (or otherwise sense). And this is true even if we necessarily can never encounter the self.  [3]

 

  [1] See David Kaplan on indexical semantics.

  [2] This is consistent with a causal theory of reference: the self might be the cause of reference to itself. But if so, it is a cause of which we have no knowledge. The causal theory of names is typically formulated in terms of observable objects causing chains of reference leading up to a particular use, but a causal theory of reference by “I” would more naturally be formulated in terms of an elusive self that nevertheless causes acts of reference to itself. Even a totally transcendental or noumenal self could operate causally in the production of occurrences of “I”. Some causes of acts of reference might be completely invisible and unknowable, yet indispensable.

  [3] The theory of reference has been dominated by consideration of reference to public middle-sized material objects—people, animals, cities, etc. But this is parochial and possibly misleading: we also refer to selves (construed as private mental entities) as well as to places and times. Thus ignorant reference might be more paradigmatic than knowledge-based reference. The emphasis on reference to ordinary material objects probably traces to an empiricist (or positivist) tendency to put sense perception at the center of cognition, including linguistic understanding. But once we see the limitations of perception-based reference as a model for self-reference we are free to recognize that reference can proceed in conditions of ignorance. We don’t (can’t) perceive (introspect) the self, but we can refer to it with the greatest of ease. In the case of the self we appear to have an extreme example of the divorce between the epistemic and the semantic: deep ignorance combined with infallible reference. No wonder “I” is deemed so philosophically problematic.      

Death, Time, and Other Minds

It is often said that we are “social animals”. What seems to be intended is twofold: we have an emotional craving for human contact, and we are similar to other social animals in this respect.  [1] Both observations are surely correct, but could there be anything more to our gregarious disposition? Does it reflect any deeper need? I want to suggest that death comes into it—death and consciousness. We seek the company of others because of our peculiar relationship with our own consciousness and its extinction. Suppose you were the last man alive, indeed the last sentient organism alive: when you die consciousness dies too—all of it. Suppose also that it will not return: that will be the end of the line for consciousness, with the universe reverting to its pre-conscious state. That seems like a momentous annihilation. It is bad enough that you yourself will be gone—that localized center of consciousness—but in addition the entire field of consciousness will be no more. If the bleakness of your solitary existence inclines you to suicide, you need to consider that your decision also concerns consciousness itself. Presumably consciousness is a thing of value, maybe the only thing of value, so you will be putting this valuable thing out of existence for good—in addition to your individual self. This would not be so if there were other centers of consciousness in existence; then your death would not be the end of consciousness as such. And not only is the cessation of consciousness tragic; it is also mind-bending—hard to get your mind around. Suddenly there will be nothing but brute existence with no conscious record of it: and the idealist in you rebels and reels at the thought. The universe is now empty. The same feeling applies if you simply don’t know whether there is anybody else: so far as you are concerned, your death could be the death of consciousness as such. It will be worse if this is so than if other centers of consciousness exist. The case is like species extinction: the death of the last mongoose is worse than the death of one mongoose among other mongooses—and consciousness is far more significant than the mongoose species (sorry mongooses, I mean no disrespect). The extinction of all consciousness is a pretty big deal, a major cosmic catastrophe. A lot hangs on your continued existence.

            But isn’t our actual predicament disturbingly similar to this? There are two factors at work: first-person salience, and the problem of other minds. For obvious reasons our own consciousness seems the most real to us—the most in your face. It is right here, up very close and personal: we are saturated in it. The consciousness of others, by contrast, is remote and occluded. So when my consciousness goes the most conspicuous instance of consciousness goes: its absence will be all too evident, because there won’t even be a hint of it from my point of view (since my point of view is gone). This bias towards my own consciousness feeds into the second factor: I don’t really know that other people have consciousness. Maybe they don’t, in which case when mine disappears that’s it for consciousness in general. So my death is particularly critical so far as the existence of consciousness is concerned, since I am the only clear and indisputable case of it. For all I know, my death is the death of all consciousness—that is an epistemic possibility. It would be good to be assured that this is not the actual situation—other minds do exist. And it would be good to know this in the context of impending death—especially death by suicide. Then I could calculate the cosmic momentousness of my own death. Of course, given the difficulty of the problem of other minds, combined with first-person salience, I am not going to obtain the information I seek, so I have to go to my grave not knowing if my end is the end of everything worthwhile. But it would be natural for me at least to try to gain an impression of consciousness elsewhere—to feel the existence of consciousness in other beings even if I can never prove it. Thus I might naturally seek out the company of others: I might try to sense the existence of other minds as strongly as possible. This will be particularly true while I am on my deathbed, but it could also be a lifelong project. I want to believe in other minds because this will give me the feeling that my death is not the death of all consciousness, so I pursue social relations with a particular intensity. We are social beings in part because we are mortal beings haunted by the problem of other minds. Our access to our own consciousness, by contrast to other centers of consciousness, is what (partly) fuels our propensity to social intercourse (which includes sexual intercourse). We nurture a kind of cosmic altruism in relation to other instances of consciousness that conditions our attitude toward death. We want to go on individually, of course, but we also want consciousness to go on, and we can’t be certain that it will, given the problem of other minds.

            This consideration may seem a bit high-minded, as well as suspiciously abstract, but there is another more obvious way that the problem of other minds affects our attitude towards death, namely that we need other centers of consciousness to exist in order that we shall carry on. After all, if there are no other minds, then there is no one to remember me. If everyone is a zombie or a robot, then no one will grieve for you, or love you, or remember your precious self (as Achilles grieved so passionately for his murdered friend Patroclos). Imagine if you were to believe that at death your spirit literally flows into the minds of others; then if it turns out that there are no other minds, there is no such flowing going on. You can only have an afterlife if other minds exist—your continuing life depends on their having an inner life (or so you believe). But even if no such literal afterlife is possible, it is still true that each of us requires other minds to exist in order to affect their state after we are gone. If there are no other minds, then my death will be that much worse, because there will be no conscious beings to care about and remember me. Even the production of great works will be meaningless, because there will be no minds around to appreciate them. So I will naturally try to assure myself that other minds are real, insofar as I can—I will attach myself to a social group. I will try to feel part of a collective consciousness. My need for society is thus rooted (in part) in my existential fears and anxieties concerning my own death. I need to feel that I am not alone or else my death will be the complete and utter end of me, with nary a trace remaining. If only I could get hooked up to other minds by some sort of brain linkage and experience them directly! Then I could be sure that my mind is not the only one. As it is I must rely on whatever methods of social contact are available, even if they are not really satisfactory. The life of the hermit leaves me disconcertingly susceptible to skeptical fears regarding my own death: that it might for all I know be the end of all conscious life, and that my own life will definitively end when I die, because it will leave no remnant in the minds of others.  [2]

            It is instructive to consider the case of animals in this connection. Animals don’t ponder the meaning of their death. They don’t see themselves as living a finite lifespan at a specific moment of history, with a beginning and end. It is as if they are immortal from a subjective point of view; they don’t lament their mortality. They don’t regard themselves as part of natural history, as occupying a certain finite stretch of time. They have no conception of time at all as an all-encompassing medium. The relationship between their own short lives and eternity is not apparent to them (we on the other hand are consumed by this relationship). Nor are they afflicted with the problem of other minds, haunting their attitudes towards their eventual death. So nothing of what I said above applies to them. Thus they don’t have any of those reasons to seek the society of others: they don’t have any philosophical reasons to be “social animals”. They are social animals for purely pragmatic reasons: that is just the best way of living for them in the light of their reproductive and survival requirements. They have no need to consider what their personal demise means for consciousness as a whole, nor whether there are minds out there that will fondly dwell on memories of them. So they are social beings for reasons that fall short of our reasons (no doubt we have their kinds of reasons too): we are not social animals in the sense that we are social for no reasons that transcend animal reasons. For us the question of society is mixed up with existential questions—about consciousness, other minds, value, finitude, and the afterlife. This is why the scenario of the last man alive illuminates our actual predicament: it expresses a deep truth about our attitudes towards death. It is as if every human death is, for the subject of that death, the death of the sole example of consciousness. Death forces us to think about the value of consciousness and the reality (or otherwise) of other conscious minds. If the problem of other minds were more of an everyday problem, not just a philosopher’s conundrum, these points would be more evident to us. Suppose a disease came along that renders the victim a functioning zombie, but only with a certain probability: 50% of sufferers literally lose their mind while the remainder is not affected, but you can’t tell which is which. Thus you really don’t know whether your social group is conscious or not—it could be that your entire nation has been turned into zombies. Then your own death will seem like the all-too possible total end of consciousness, at least so far as your field of acquaintance is concerned; and you have no assurance at all that any conscious being will remember you. You will be effectively living a last-man-alive life, and social existence will seem largely futile (save for pragmatic reasons). You will not have the reassurance social life provides that you are not alone—that there are other sentient beings like you in the world. Epistemologically uncritical social intimacy is what softens the blow delivered by individual death (which is still a hard enough blow), but absent that your reasons for seeking a social existence are significantly diminished. You may as well live alone if everyone around you is a zombie! The costs of social life will begin to outweigh its benefits. You will not be living a genuinely social life if your “companions” are all mindless robots—your attitudes towards them will be quite different. 
And this will affect your feelings regarding your own death, making it seem more tragic, more of a loss, more catastrophic. The death of a diehard solipsist is the worst death of all.  [3]

 

  [1] It is less often remarked that we are also anti-social animals. We like to be alone too: then we don’t have to feel the burden of another consciousness. We can indulge our natural solipsism. It is when the thought of death intrudes on our solitude that we feel the tug of the social group (though not only then). We thus live with two countervailing impulses: to be alone and to be with others. I wonder if other animals ever feel the same tension.

  [2] This is the meaning of ostracism: social death. The fully ostracized individual is deprived of continued existence in the minds of others. Exile makes death sting more poignantly because there is no prospect of continuing in other minds. To be alone is to die without significant remainder or residue.

  [3] There is this consolation: the solipsist has no envy towards those who continue living while he perishes—for their “living” is no better than a rock continuing in existence. In fact, the solipsist has no envy at all.
