A Plurality of Selves

  1. Human beings are persons or selves and they have a specific nature: they have a certain type of psychology and a certain type of biological make-up. Not all possible sentient beings share this nature. For instance, humans have personal memories, consciousness, self-reflection, rationality, and a brain with two hemispheres that is in principle detachable from the body. Arguments about personal identity take these facts for granted and contrive various thought experiments on their basis: transferred brains, divided brains, memory loss, memory upload, personality alteration, and so on. Thus we arrive at theories of personal identity for humans. One well-known argument proceeds from the possibility of brain splits to conclude that personal survival does not logically require personal identity.  [1] But what about other possible types of being that don’t share our human nature? Can’t they be persons or selves too? If so, we can’t expect to derive general “criteria of personal identity” just by considering the human case: we need to look at the full range of possible cases if our theory is to have the generality we seek (and it might turn out not to have that generality).

            Consider sentient beings that don’t have brains that can be divided or transferred: the brains of these beings don’t have two equipotential hemispheres and they are distributed throughout the organism’s body (rather like an octopus). There are thus no possible scenarios in which their brain is divided and the hemispheres placed in separate bodies, so there is no way that they can survive without being identical to some future being (at least so far as the standard fission arguments are concerned). We can’t consult our intuitions about what we would say under conditions of brain bisection and relocation, since these are not possible (such surgeries would result in certain death). For these beings there would be no survival under the imagined conditions. In fact, a theory that ties personal identity to the body would be more plausible for them than in the human case: having that brain in that body would be tightly correlated to future survival. There would be no pressure to accept psychological continuity theories if it were not possible to dissociate survival from bodily identity, as in the standard thought experiments. Lesson: be careful not to accept a general theory of personal identity based on the contingent peculiarities of the human organism. That might lead to chauvinism about personal identity, i.e. ruling out bona fide persons as not really so.

            Now consider this hypothetical case: sentient beings without personal memories. We can allow that these beings possess general factual memory; what they lack is memory of their past experiences and deeds. Their earlier life is a complete blank to them, though they live and love. They clearly persist through time, but this persistence cannot be a matter of remembering different periods of their existence: they don’t persist through time because of the power of memory. So for these beings personal identity cannot consist in memory links to earlier selves, however it may be for us. We can’t say that A is identical to B because A can experientially remember what B did. These beings may have the anatomy described in the previous case, so their identity is better explained in terms of bodily continuity, not in terms of memory links. Not that bodily continuity will work for everyone: for some possible beings the body changes over time to become a different body, as with bodily metamorphosis (including the brain). Butterfly persons would persist through time while they acquired a brand new body at puberty. So it would be wrong to generalize from the no-memory type of person to all possible types, as it would be wrong to generalize from the human case to all possible cases. The butterfly adults might have vivid memories of their pupa childhood while not sharing their body with that being; in their case a memory criterion might well seem attractive. It all depends on the being.

            Here is an even more radical case to consider: the no-consciousness self. It might be claimed that personal identity consists in the persistence of a subject of consciousness over time: and certainly for conscious beings that theory has some appeal (though it doesn’t seem very explanatory). But consider a hypothetical case in which a conscious being loses consciousness during the course of life yet retains an unconscious mind; or a species that was once conscious but now, through natural selection, has abandoned that trait and survives by means of unconscious psychological mechanisms. Such beings might have memories, beliefs, desires, personalities, and so on—they just aren’t conscious. They are, if you like, zombie selves (though with an elaborate unconscious psyche). They would look and sound like conscious persons, living their lives like such persons to outside observation (maybe a bit wooden in certain respects). So they exist through time and possess the usual attributes of persons (except one)—picture them on a remote planet with a functioning civilization. For these beings a theory based on continuity of a conscious subject would be wide of the mark—more like continuity of an unconscious subject.  [2] They have a psychology and they exist through time, but there is no consciousness in there: “unconscious self” is not an oxymoron.

            We can thus refute bodily theories, memory theories, and consciousness theories as general theories of personal identity by considering the full range of possible persons.  [3] Maybe there is no general theory available, just different theories for different types of being, or maybe some other theory can be contrived; what is clear, however, is that the human case is a special case, not characteristic of all possible cases. Methodologically, then, it is unwise to proceed from this case alone; that will only lead to parochialism and special pleading. We have personal memories, consciousness, and divisible transferable brains, but that is not true of all possible selves, and perhaps not of all actual ones (animals, aliens). The case is unlike theories of persistence for material objects in that the material objects around us are characteristic of material objects in general: if material identity is explicable in terms of spatiotemporal continuity or some such for the objects on earth, then it will be explicable in this way for objects elsewhere, actual and possible. There are no non-spatiotemporal objects to deal with and accommodate. Similarly for set identity: the criterion in terms of identity of membership generalizes to all possible sets—it isn’t limited to the sets we encounter every day. The thing about selves is that they can be “multiply realized” both physically and psychologically, so we don’t want to tie the concept down to a specific type of self—as it might be, adult humans with consciousness, memory, and a divisible anatomy. That would be like defining set identity purely in terms of sets of elephants or ants. The plurality of possible selves imposes a constraint on theories of personal identity, and one that is not easy to meet.

 

  2. Let me now turn to a different question involving plural selves, namely whether I could have been a different self: that is, is there a plurality of selves in metaphysically possible worlds that could be said to be possible selves of mine? The question is tricky because I clearly could not have been a different human being: I am necessarily Colin McGinn, given that a certain human being is denoted at both places. Any human being in a possible world that is not identical to this human being is not me. Someone could look and sound like me, but if they are not the same human being they are not me. No member of an animal species can ever be identical to a different member of that species. But it doesn’t follow that I could not have been a different person (associated with the same human being). In fact, this is quite easy to imagine: we just have to suppose that I undergo very different experiences in some possible world. Suppose my experiences in world w involve being born into poverty in a war-stricken land where abuse is rampant and education non-existent: I suffer various life-altering traumas and end up with emotional problems radically unlike those I now have. My personality, my memories, and my abilities are totally different in w: am I not then a different person from what I am today? The person you become is a function of your life experiences, among other things, but these are contingent, so you could have become a different person. You could even be subjected to chemical attacks that rewire your nervous system, or suffer genetic alteration in the womb. It would be the same organism, but it wouldn’t be the same person, because psychology counts in the latter respect. If we call that person you could have become “Albert”, then we can say that you might have been Albert, in the sense that the human being you are could have been associated with (“housed”) another person, namely Albert. 
You, qua person, could not have been identical to Albert, but your organism could have been his residence instead of yours. Thus we derive the paradoxical-seeming proposition, “I could have been a different person”, which translates roughly as, “My organism could have housed a different person”. The word “I” can slip from referring to a human being to referring to the person housed by that human being, but there is a clear sense in which it is true to say, “I might not have been me”: that is, “This human being might have housed someone other than my actual self” expresses a truth. Indeed, I might have been any number of people in this sense, given the plurality of possible lives I (sic) might have led. What my name actually stands for is an interesting semantic question: is it a human being or a person (self)? It seems ambiguous between the two in actual use, which is why I can say, “Colin McGinn might not have been Colin McGinn” without sensing contradiction, where the first occurrence of the name refers to a certain human being and the second refers to the person currently occupying that human being. I am necessarily the person I am, and I am necessarily the human being I am, but that person is not necessarily identical to that human being—in fact, they are not identical at all. In one sense “I am not a (particular) human being” is true, and in another sense “I am not a (particular) person” is true; but it is equally true that I am a human being and also a person! The word “I” is flexible enough to allow for all these statements to be true under the right interpretation.

            We can say, then, that across modal space I have many counterpart selves that could each have occupied this particular organism. I have no such human being counterparts—in this respect I am a unity. But I am (associated with) a plurality of selves in the sense that possible worlds contain many such selves corresponding to me. This is not a denial that proper names are rigid designators, since each of these entities has its own name: it isn’t that my counterpart selves are all designated by “Colin McGinn”, construed as a name of a particular person in the actual world. It is quite true that Colin McGinn is necessarily Colin McGinn (under the right interpretation), even though I might have had many numerically distinct counterparts that inhabit my actual body (and were all called “Colin McGinn”). This can be verbally confusing, but the underlying logic and metaphysics are not: one human being, many selves, with names for each of these separate entities. This enables us to say such potentially confusing things as, “Colin McGinn (human being) might not have been (associated with) Colin McGinn (person)”. The name seems capable of referring to both.

 

  3. I now take up another issue in which the notion of a plurality of selves suggests itself, namely whether we actually contain more than one self. It is commonly assumed that we contain at most one self, though there have been dissenters to that conservative opinion (as we will see). Hume argued that we contain zero selves, having conducted an internal survey; but most people put the number at unity after no survey at all. It is an interesting question why we do this so readily: has anyone ever actually counted the number of selves he or she contains? Is it that you can tell just by looking that you contain a single self, as you can tell by looking that you have a single body? But you can’t look at yourself and then proceed to count the number of selves in the vicinity. Is it that the ordinary use of “I” suggests unity? But that seems a flimsy way to get at the cardinality. Is it perhaps just a lazy prejudice like assuming there is only one type of person in the world? At any rate, it is apparently a general belief on the part of (human) selves that there is only one of them per organism. If we ask for a demonstration, we are apt to be dismissed as blind to the obvious. Is this just how we appear to ourselves? Maybe, maybe not, but maybe the appearances are misleading: we need a reason to accept that we really are thus unitary. At least we should be open to evidence that such unity is illusory. People used to think there was only one sun in the universe, but more careful investigation revealed a plurality of suns; might the same thing be true of the self in our own personal universe?

            Let me list some putative reasons for dissent from the common assumption: the Freudian division into ego, id, and superego; the phenomenon of multiple personality; brain bisection experiments; modular conceptions of mind; the theatrical conception of the self; division into private and public self; a general sense of self splintering (R.D. Laing, The Divided Self). I don’t propose to discuss each of these in detail; I am more interested in the general idea of multiple selves. I certainly think it is logically possible for a single organism to house more than one psychic entity deserving the name of self; and I think there is good empirical evidence that this is normal for ordinary adult humans. I am with Erving Goffman (and William Shakespeare) in believing that a given individual presents a number of distinct selves in different social contexts, and that these are deeply entrenched. The person is something dramatically constructed—and we can construct a plurality of these things. I myself have always felt that I am made up of three distinct selves—an intellectual self, an athletic self, and a musical self—with little overlap between them; and I fancy I am not alone in having this kind of impression. Is my impression to be disputed? I also wrote a novel, The Space Trap, in which I played with the idea of a phobic self and an imaginary self in addition to the self we ordinarily recognize. Such ideas are quite common in writers trying to represent the complexity of human psychological reality. People feel they are not the simple unity that we tend to speak of; there are significant divisions and separations (hence the famous Walt Whitman remark, “I contain multitudes”). Just as people feel themselves to change dramatically over time, becoming “a different person”, so they feel that at a given time there is a plurality lurking inside. 
Pathological conditions like schizoid personality or multiple personality are not so far from the norm, maybe just extreme cases of it. If someone sincerely believes himself to have a divided self, what evidence can be used to refute him? What kind of counting procedure would undermine such a claim? Might there not be degrees of division with the normal case of personal separation just at the far end? Whence the dogmatic conviction that there must be only one self each? We have got used to the idea that we possess more than one mind, what with the unconscious and generalized modularity, so why should the self be treated as uniquely unitary? If I contain many minds, don’t I thereby contain many selves? If Freud were right about the unconscious, surely he would have discovered another self in us in addition to the conscious self—an autonomous agent with its own agenda. True, the conscious self that is encountered in introspection has a certain salience, but why should that determine the full extent of our selfhood? And that self might divide into a number of sub-selves upon closer examination. We are often torn, internally conflicted, and doesn’t that suggest a separation of selves? No one ever told the genes, or our life experiences, that they were to construct only a single self, so the possibility is open that they construct a plurality of selves uneasily (or easily) conjoined. We are more like a constellation of selves than a single unified self, a galaxy not a solitary star.

            If this is so, then our identity through time consists of the persistence of many selves, not one. There is not a single self that exists from one moment to the next but a plurality of selves. Some of these selves may perish while others march on; all may perish at some point to be replaced by new ones. What we call our personal identity, and picture as a single persistent capsule, is really a mixture of separate elements held tenuously together: an identity of selves in the plural not the singular. Conceivably, these selves might have different conditions of identity: for example, there may be a biological self fixed by the genes that is tied to the constitution of the organism, existing alongside a number of theatrical selves freely constructed to serve suitable social purposes and revocable at will. A theatrical self may disappear at a certain time when the context no longer demands it, while the biological self goes on regardless. Once we accept a plurality of selves we have the possibility of separate existence through time. It is really too simple to speak of “personal identity” as if we had a single well-defined thing called a “person” whose identity is at issue; the human psyche is too complex for that. Surely we can imagine a being that regards himself as such a plurality and speaks spontaneously of one of his selves going out of existence while others continue. If we insist on his answering the question whether he survived such and such an event, he might give us a puzzled look and reply, “Well, this self and that self survived, though that other one didn’t”. For this being it would be wrong by stipulation to speak only of a single self that survives or fails to. To what extent we approximate to his condition is an empirical question, and one that has a good deal of evidence in its favor.

            I believe it is true to say that we experience our body as more of a unity than it really is. It comes as a surprise to discover all those separate organs each doing its specific job—and illness can deliver a jolt to our assumption of unity. If we ask after its persistence conditions, we quickly come to see that many organs are involved, and some can survive what others may not. If we insist on asking whether the body survives such and such an event, we can see that this question is too simple, given the complexity of the body (the plurality of its organs). The person is a bit like that: in principle some parts may survive while others perish (consider Alzheimer’s). I can lose one hand while retaining the other because I have two hands; why couldn’t I have more than one self where each can survive separately? If the brain realizes human personality in more than one location, then damage to one location may destroy one instance but leave another: wouldn’t this be the loss of one self and the retention of the other? Here we would have different tokens of the same type, or similar types, but different types may also coexist with one another. It may be convenient to talk as if we are a single entity, as it is convenient to talk of the body as a single entity, but both are made up of other units. What we call the self is really a plurality of distinct self-like entities.  [4]

            There is a plurality of types of self; there is a plurality of possible selves corresponding to each human (and animal) individual; and there is a plurality of actual selves within each individual. There is not just the human type of self; there is not just a single possible self for each individual; and there is not just a single actual self for each individual.      

              

Colin McGinn

           

  [1] Derek Parfit, Reasons and Persons.

  [2] In considering these beings it might help to adopt a higher-order thought theory of consciousness.

  [3] I haven’t considered so-called psychological continuity theories in relation to hypothetical persons. This is because I am not convinced such theories have ever been properly formulated, and because they seem open to obvious counterexamples concerning sufficiency (continuity is a “cheap relation”). And couldn’t there be beings that revel in their psychological discontinuities, changing their beliefs and desires dramatically from day to day? They might regard this flexibility as essential to their identity.

  [4] Of course, the parts of the body are not themselves bodies but organs of the body, and parts of the self may also not themselves be selves; but there is reason to accept that some parts of what we call our self are also self-like. If they existed alone, we would still call them selves.


A Plea for Persuasion

Jane Austen’s sixth and final novel is entitled Persuasion. There is a reason it is so entitled—it deals with the role of persuasion in human life (as exemplified in Anne Elliot being persuaded against her better judgment not to marry Captain Wentworth). But we might see the whole sequence of her novels as occupied with the topic of persuasion in one way or another. In any case, she clearly believes that persuasion is central to human life, for good or ill. It is not hard to see why: persuasion is heavily implicated in personal relations (courtship, seduction), in politics and diplomacy, in business and finance, in law, in science, in philosophy, in scholarly discourse generally, and in any form of leadership. To the most persuasive go the spoils, we might say. Accordingly, psychology has studied the workings of persuasion, exploring the principles whereby persuasion operates (the role of authority, conformity, reciprocity, commitment, liking, etc.).  [1] But philosophy has not been much concerned with the topic: the philosophy of language has little to say about it, and epistemology has not found a place for persuasion as a source of both knowledge and error. Plato was certainly interested in it because of its place in the armory of the sophists (there is good persuasion and bad persuasion), but recent philosophy has been silent on the subject. Here I will make some remarks intended to bring persuasion into the conversation. Given its centrality to human life, it might be useful to get a bit clearer about it.

            Consider speech act theory. We are told that there are several kinds of speech act, each irreducible to the others—assertion, command, question, performative, etc. Wittgenstein took this plurality to the extreme, contending that there are “countless” ways of using language with nothing significant in common. The idea that persuasion might be the common thread has not been mooted. But note that, while one can only assert that and order to, one can both persuade that and persuade to. That is, persuasion can aim at both belief and action, while assertion aims only at belief and command only at action. The OED has two definitions for “persuade”: “cause to do something through reasoning or argument” and “cause to believe something”. So persuasion is a genus with two species, corresponding to assertion and command—inducing the other to believe or to act. Whether these can be unified is an interesting question: might belief formation be a type of action, or action a result of a specific type of belief (say, the belief that this action is best all things considered)?  [2] Maybe all persuasion is persuasion-that, with practical belief the kind aimed at by command. In any case, persuasion covers both types of speech act; so we need not accept irreducible plurality. Questioning might then take its place as persuading the other to provide information (a special case of command perhaps)—“I wonder whether you would be so kind as to tell me the time”. This seems like an attractive all-encompassing conception: speech as persuasion. If it is objected that not all talk is talking-into (or out of), because speakers are not always offering arguments, we can reply that persuasion need not always be explicit—there is also implicit persuasion. 
All speech acts are implicitly (or explicitly) argument-like because they offer reasons for the hearer to respond in a certain way: assertion involves inviting the hearer to reason from the speaker’s making an utterance to the likelihood of its being true, and command involves getting the hearer to recognize that the speaker is in a position to enforce what he commands (or would be displeased if ignored).  [3] The hearer is always reasoning from premises about what the speaker has said and responding accordingly. So even a simple speech act is tacitly argument-like: if I just shout, “Help!” I am trying to persuade you to come to my assistance by reasoning about why I would make that noise. In a benign sense, I am manipulating you—trying to get you to do (or think) what I want. Even when a cat meows to go out she is trying to persuade you to open the door for her. We have a strong interest in getting people to act so as to promote our desires; speech is a way of making this happen, and so persuasion is central to it. In talking we are always talking people into believing and doing (compare: all seeing is seeing-as). Thus persuasion is the general type of all speech acts.  [4]

            Conceptually, persuasion is necessarily intentional: when we persuade we do so intentionally. This means that we can never try to persuade someone of what we know he will not do or believe: we don’t set out to persuade the unpersuadable. You may try to entertain or embarrass someone by talking to him even if you know he won’t be convinced, but you won’t be trying to persuade him of what you are saying. You only try to persuade people you regard as (minimally) rational. So the practice of persuasion presupposes an assumption of rationality; it takes place against a background of respect for the other as a rational agent. When this is lacking, persuasion might be replaced by brute force—making the other do what you want him to (rightly or wrongly). Thus you don’t try to persuade toddlers to do what you want them to; you simply impose your will on them. Persuasion occurs within what Kant would call the kingdom of ends—respect for others as autonomous rational agents. Crucially, persuasion calls upon consent (unlike the brute exercise of power): you are trying to get someone to agree with what you are saying. And they may not: they may reject your arguments, refusing to shape their beliefs or actions as you suggest. The consent may be of many kinds, from sexual to political, scientific to economic. Advertising is trying to persuade people to buy things, but people may not consent to spending their money as you wish them to. It takes two to persuade successfully: the would-be persuader and the targeted consenter. The persuader is trying to secure the free assent of the consenter. There are many possible ways of doing this, ranging from outright psychological manipulation to the purest rational argument; but there is no skipping the obligation to secure assent if persuasion is the name of the game. Thus persuasion is always preferable to coercion and should not be regarded as a special case of coercion. Never coerce where you can persuade.

            Persuasion may be a step up from coercion, but it is still inherently problematic: this is what so exercised Plato, as well as Jane Austen. For any good act of persuasion there are many bad acts. There is education, but also propaganda; there is logical reasoning, but also bullshit and manipulation. Moreover, it is not always easy to tell one from the other (they don’t come in different color ink). The credulity of human beings is as obvious as their educability. People can be persuaded of the most arrant nonsense if it suits them to be so persuaded.  The con man can be as convincing as the wisest sage. The trick is to be persuadable just when one ought to be persuadable, but that is no easy task. Memes and fakery lurk around every corner. The Internet is a cesspool of toxic phony persuasion. It’s enough to make you want to give up on persuading anyone of anything—abandoning the very idea of persuasion! But no, we must persist in sorting out the wheat from the chaff. I am laboring the obvious, but we must always be aware of the potential evil inherent in persuasion, always on the lookout for its pernicious forms and manifestations. Just think of human history without pernicious persuasion!

            Logically, persuasion is a four-place relation: x persuades y of p by means of m. We can allow for reflexive persuasion, as when you persuade yourself of something, but persuasion is always directed at some object. The value of p might be a proposition or an action, depending on whether the speech act is assertive or imperative. There must always be a means m that may vary while keeping p constant: you may try different m to secure the same p. This too is essential to persuasion: it is not like logical proof, but a matter of individual psychology (Euclid was more of a deducer than a persuader). Persuading is like teaching someone to dance: there are many ways to do it so long as you get them dancing (but please, no coercion!). What we must not do is persuade by lying (except in very special cases): in the general case, the recipient assumes that the means you are employing does not involve outright falsehood—that is part of the pact of persuasion. I am prepared to be persuaded by you, but only if you tell me the truth. Truthfulness and persuasion go together. So persuasion is quite a complex operation, not one available to organisms generally. Add this to the condition that persuasion is always intentional and we get the result that an agent can persuade only if she possesses reflective knowledge of the means and ends integral to a given persuasive act (this includes the meowing cat). And you can only be good at instantiating this relation if you are skilled in the arts of persuasion; indeed, you do well to learn those arts as you would any complex skill. You should take Persuasion 101 and possibly get an advanced degree in it (if you want to be a diplomat, say). Practice your persuasive skills daily (the good kind, of course).

            We should not neglect the use of the concept of persuasion in “I am persuaded that p”: what kind of state of mind does this describe? This state could come about otherwise than by some other person persuading you; it could issue from the facts themselves (and we do sometimes speak of facts as persuasive, usually in relation to a theory). This locution appears to suggest something stronger, more potent, than mere belief: I don’t just believe that p; I’m prepared to act on it. Thus it edges towards the conative—it is motivational. If someone announces that she is persuaded that eating meat is wrong, we expect abstinence from meat eating (note the conceptual connection between persuasion and doing). Thus the concept appears to straddle belief and desire, i.e. it suggests motivating beliefs. This seems like the right notion to employ in moral psychology: we don’t merely believe certain moral principles; we find them persuasive. So the concept of persuasion seems to have a role in moral motivation: to be persuaded that eating meat is wrong you have to take yourself to have very good reasons for not eating meat. You don’t just think it’s wrong; you are persuaded it’s wrong. That is your persuasion, your conviction, and your commitment. When Anne Elliot was persuaded not to marry Captain Wentworth she acted on it; it wasn’t just a state of her cognitive apparatus. Jane Austen’s novel has a title that denotes both a verbal act and a state of mind: the act of persuading and the state of being of a certain persuasion. We could say that though Miss Elliot was persuaded at age 19 not to marry Captain Wentworth, it was never her persuasion that she should not marry him—which is why she did marry him seven years later. It was not her deep conviction and she regretted her earlier decision. You can be persuaded to do something without it being your persuasion.

            A cluster of concepts has captured the attention of philosophers: knowledge, belief, certainty, intention, assertion, reason, justification, testimony, and argument. I suggest we add the concept of persuasion to this list.  [5]

 

  [1] A classic text is Robert Cialdini, Influence: The Psychology of Persuasion (1984).

  [2] I discuss this in “Actions and Reasons” in Philosophical Provocations (2017).

  [3] I discuss this view in “Meaning and Argument” in Philosophical Provocations.

  [4] We have become accustomed to speaking of speech as communication, but that term is loaded, connoting the transfer of something from speaker to hearer (the OED has “share or exchange information or ideas”). But the persuasion conception suggests rather the idea of influence: speaking is causing the hearer to react in a certain way, not giving him something. The speaker is exercising a certain power over the hearer, not conveying something precious.

  [5] Here is an interesting question for the new field of persuasion studies: how do performatives persuade? Not by stating language-independent facts but by the very issuing of them. I am persuaded that you have promised to meet me precisely because I just heard you utter the sentence “I promise to meet you”. This type of persuasion is very effective because the speaker doesn’t need to rely on the cooperation of outside facts—the speech act alone suffices to make it so. Hence performatives are uniquely persuasive (possible paper title, “Persuasive Performatives”).


A New Theory of Color

I will first state the theory as simply and clearly as possible, and then I will consider what may be said in its favor. I call the theory “Double Object Dispositional Primitivism” (DODP) or just “the double object theory”.  [1] Its tenets are as follows: When you see an ordinary object, the color it is seen to have is a simple monadic property of the object’s surface (not a disposition). This property is generated from within the mind and is projected onto the seen object. The mind has a disposition (power) to project color qualities onto seen objects, and it is a necessary and sufficient condition for being (say) red that the object should elicit this disposition. The object is red in virtue of the fact that it triggers the mind’s disposition to project redness onto objects. Another type of mind (a Martian mind, say) might have a disposition to project blueness onto the same objects, and then the object would be blue for them. Color is relative. In addition to this object there is another object (intuitively, an object of physics) that is not itself red but which has a disposition to interact with the first disposition to give rise to experiences of red. This disposition is by no means identical to redness, but it is closely related to it: the object that has it triggers perceptions of the primitive property of redness. The physical object interacts with a mind to make that mind activate its disposition to see things as red; it is “red-inducing”. The object induces perceptions of red (in certain perceivers) in which the primitive property of redness is projected onto an object that is not identical with the inducing object. So there are two connected ontological levels: a perceptual object that has the primitive property of being red, and a physical object that lacks that property but which has a disposition to cause perceptions of red (in conjunction with the mind’s disposition to see certain objects as red).
It is strictly false to describe this second object as red, though it is natural to do so given its actual role in producing experiences of red. In short: perceptual objects are primitively red while physical objects are dispositionally red (i.e. not really red). We see the primitive property of redness, but we never see the dispositional property associated with it. These two properties are possessed by two distinct objects (think the manifest image and the scientific image).

            The mind has a disposition to see certain objects as red and this disposition can be triggered by physical objects. When this happens an object is seen as red, but that object is not the triggering object. The mental disposition can in principle be triggered in other ways too, as when a brain in a vat sees things as red because of stimulation of the visual centers. Here no physical object operates to trigger the disposition (i.e. an object in the perceiver’s environment acting on the eyes) and yet an object is seen. Hallucinated objects can be red too. This is hard to account for under the classical dispositional theory, since that theory supposes that only (existing) physical objects with certain dispositions can be red. But to be red is not just to be a physical object that is disposed to produce red experiences by normal perception, because hallucinated objects can also be red. The important factor is the mind’s disposition to generate such experiences, not the de facto dispositions of the physical world. Generally the mental disposition is triggered by the usual environmental objects, but it can also be triggered in other ways, as with the brain in a vat scenario. Suppose I say, “That tomato is red”: this is true if I am referring to a perceptual object of a certain kind, whether real or hallucinated, but not if I am referring to the physical object associated with that object. The objects of physics have no color (they lack primitive color properties), though they do have dispositions to produce color experiences (in conjunction with suitable minds). Those objects had no color before perceivers came along and they have none now, though they do now possess a disposition they lacked earlier. They do not have secondary as well as primary qualities (or else physics would be required to mention them). The objects that have colors are different objects—perceptual objects. And colored objects can be perceived even by a brain in a vat. 
To use a familiar terminology, phenomenal objects are red but noumenal objects are not, despite being closely tied to phenomenal objects.  [2]

            What nice things can be said about the double object theory? First, we do justice to the phenomenal primitiveness of colors, their manifest simplicity, and to the fact that we can see them (as we could not if colors were identical to dispositions).  Second, we acknowledge the role of mental dispositions in grounding attributions of color, as well as the role of external objects in eliciting perceptions of color. We are not completely wide of the mark in calling physical objects colored—though it is a question how often we really do this, given that we are normally talking about perceptual objects not the objects of physics. The relationship between our ordinary ontology of tables, tomatoes, and tulips, on the one hand, and the objects described in physics, on the other, is obscure; and it is by no means obvious that we speak of the latter when referring to the former. In any case, according to DODP there are two objects at play here, one of which is red, and the other of which is not (though it has a disposition to cause experiences of objects of the first kind to look red). It is not one and the same entity that is both red and disposed to look red. Certainly, the color red is not the categorical basis of the disposition to appear red—that will be a matter of the physical properties of the object belonging to physics. There are three levels at work here: the physical properties of the physical object that ground its disposition to give rise to appearances; the disposition itself; and the primitive property that perceptual objects possess (and appear to possess). Only the last of these is perceptible. The crucial component is the disposition of the mind to see things a certain way: once that disposition is activated color comes into the world, projected by the mind.

            What might be said against the theory? Perhaps some will find the doubling of objects objectionable—they will prefer to attribute color to the objects of physics. In fact the spirit of the theory would be largely preserved by this move, with the primitive color properties instantiated in the same object as the disposition to cause color experiences—we can still keep these properties distinct, as well as invoke projection to explain the presence of the primitive property in the object. I have formulated the theory in the double-object way because I favor this position on independent grounds (having to do with hallucinations, intentional objects, and brains in vats). I also think it undesirable to locate colors in the world studied by physics, since physics makes no mention of these properties (their relativity to perceivers disqualifies them to begin with). I think of perceptual objects as an ontological layer over and above the objects described by physics (compare Eddington on the “two tables”). Artifacts and organisms, in particular, should not be seen as individuated and constituted by the categories proper to physics, but as a distinct ontological layer (though no doubt dependent on the physical level in some way). Color properties attach to this common sense level not to the rarefied level occupied by physics (the “absolute conception”). Still, it would be possible to apply the apparatus of DODP to a single-level ontology, the essential idea being that colors are primitive properties bestowed on the world by dispositions of the mind (coupled with the action of physical objects).

            That idea might itself provoke further dissent: for how can the mind generate these properties from within its own resources? Isn’t this mysterious and magical-seeming? I totally agree: how the mind (brain) manufactures color properties is indeed mysterious, like many things about the mind. But this is not a fatal objection to the theory, simply a fact about the mind that needs to be acknowledged, i.e. that there is much about it that we cannot explain. Other theories avoid such mysteries by advocating reductive accounts of color—as that colors are reducible to electromagnetic wavelengths or that they are logical constructions from subjective qualia conceived as inner sensations. But these attempts at reduction are implausible (for reasons I won’t go into), the primitive property theory emerging as superior—though it does indeed lead to problems of intelligibility. Where do these remarkable properties come from? Does the mind create them itself or find them elsewhere (in a Platonic world of color universals, say)? How exactly are they “projected” onto objects? The theory raises plenty of puzzles, to be sure, but it might yet be true, since the truth is sometimes mysterious.  [3] What the theory does is arrange the facts into an intelligible structure, aiming to respect phenomenology and logical coherence. Instead of working just with physical objects and their dispositions, it invokes an extra layer of non-dispositional properties and places them within a mind possessing certain projective dispositions. Perceptual objects thus have exactly the properties they appear to have, while we avoid treating colors as mind-independent. Colors have no place in physics, but they are front and center in our ordinary experience of things, just as they seem to be.

 

Colin McGinn  

  [1] I first wrote about color in The Subjective View (1983), then in “Another Look at Color” (1996), and now in this paper (2018). At each point I have modified the position that came before, while retaining the basic outlook. The successive theories have become more complicated as time has gone by.

  [2] This terminology is not strictly accurate because “noumenal” is generally taken to entail “unknowable”, but the objects of physics are not unknowable. Still, the terminology may be helpful in capturing the structure of the position.

  [3] It is true that we should not multiply mysteries beyond necessity, but necessity sometimes requires that we face up to mysteries.


A New Riddle of Induction

 

Suppose that tomorrow the sun does not rise, bread does not nourish, and swans are blue. Does that show that nature is not uniform, that the past is not projectable to the future, and that induction has broken down? Can we conclude that what we observe tomorrow does not resemble the past? Not unless we know the past—unless we know that the sun used to rise every day, that bread used to nourish, and that previous swans were white. But memory is fallible and vulnerable to skepticism. If we are wrong about the past in these respects, then when we suppose that the future diverges from the past, we are mistaken—actually the future does resemble the past (blue swans, etc.). So unless we have an answer to skepticism about the past we cannot infer from an apparent breakdown in the uniformity of nature that there is a real breakdown.  [1] Given that we have no such answer, we cannot know that the future fails to resemble the past. If bread never actually nourished in the past, then its failure to nourish tomorrow is perfectly uniform and projectable from its past properties. So it is not just that we can’t establish that nature is uniform; we also can’t establish that it is not uniform. We can’t describe a situation in which we discover that the previous laws of nature have broken down, or were not laws after all, for it is always possible that we are wrong about how things were in the past. This makes the skeptical problem of induction even harder. We can know that our predictions have been falsified, but it doesn’t follow that we can know that the future does not resemble the past, since we could be wrong about the past. Even a total failure in all our inductive predictions would not establish that the future diverges from the past. Nature might be completely uniform and yet appear to us not to be. We can’t know that nature will continue the same into the future and we can’t know that it has not continued the same.

 

  [1] There are two sources of potential error about the past: first, we might just be wrong that bread ever nourished (we have false memories); second, we might have made an inductive error about bread in the past, inferring that all past bread nourished from the limited sample of bread we have encountered (maybe the uneaten bread was poisonous). If we make the latter error, our observation tomorrow that some bread is poisonous actually conforms to the way bread was in the past, so there is no breakdown of uniformity.


A Model of Language Acquisition

Psycholinguists report that the child “internalizes” the grammar of his or her native language. Beginning with an innate schema of universal grammar (UG), the child hears the speech of adults and somehow extracts the rules that govern the particular language in question. That heard language is external to the child’s mind, but it becomes internalized as the language is gradually acquired. At some point the acquired language is externalized in the form of overt speech, as the child’s inner competence gets expressed by means of a sensorimotor system. We have internalization followed by externalization. But what kind of internalization is this—in what form is the outer language internalized? A natural answer is: memory. The child remembers what he or she has heard, suitably processed and generalized, and acquires the ability to speak by using these memories. Memories are internal, so that is the form the internalization takes: outer speech is internalized in the form of memories. The child possesses an innate internal UG combined with a memorized internal PG (particular grammar)—and also a lexicon of some sort, innate or acquired. Innate schema plus memory equals linguistic competence.

            I want to enrich this picture somewhat. In addition to internal memories, I want to say that language acquisition involves inner speech: the child first learns how to speak inwardly, only subsequently expressing his or her linguistic mastery externally. So the internalization involves becoming a linguistic agent—a speaker. It is not just a matter of acquiring memories of what is heard, but also of acquiring an ability to engage in internal speech acts. Memory is presupposed in this, but it is not all that is going on internally. When outer speech develops inner speech is hooked up to a sensorimotor system, typically hearing and oral action (but in the deaf it can be vision and manual action). The child does not go directly from hearing a language and remembering it to being an agent of external speech; she takes the intermediate step of acquiring internal speech, a type of purely mental action. So the internalization consists of more than stored memories; it is full-blown internal linguistic agency. The external speech of others is internalized in the form of internal speech in oneself.

            The psychological structure here may be compared to mental imagery. A person perceives an external object, forms a mental image corresponding to that perception, and then acts on the basis of the image (say, by drawing a picture of the imaged object from memory). This involves an extra step beyond merely perceiving an object and acting on the perception: an additional psychological layer is introduced. It is apt to describe the process of image formation as a type of internalization: an external object is internalized in the form of an image (not just a perception). The internal image acts as a kind of replica of the external object. Likewise, someone may hear a piece of music and retain the tune in memory, rehearsing it in her mind silently: this is not just storing the tune in memory, but also actively engaging in musical performance internally. We can think of this as inner musical action analogous to outward musical action like singing or playing an instrument. Learning to sing or play an instrument will typically involve developing the ability to perform inwardly—inner musicianship, we might say. You don’t just hear the violin with your ears and then play it outwardly; you also hear it inwardly and rehearse inwardly. You have internalized the (sound of the) violin. If someone lacked the ability to perform inwardly, they would presumably lack something important to learning the instrument. We might say that “musical imagination” is an important (essential?) component of musical ability. Imagination is “subject to the will” (as Wittgenstein says) and musical imagery is as willed as other forms of imagery. You can whistle with your mouth or you can whistle in your head. And there are other forms of internalization that proceed in much the same way—for example, internalizing a set of moral commands. It isn’t just that the child hears the moral commands of adults and commits them to memory, thus acquiring moral competence.
He or she also incorporates these commands into an internal moral system—commonly known as conscience. Freud took the superego to consist of internalized parental commands—telling the child what to do and not do. This was taken to be essential to moral development: not just remembering what others have commanded but also commanding oneself—the “voice of conscience”. Whether Freud was right about the details doesn’t matter: what is important is that moral development involves the internalization of moral prescriptions—you tell yourself what to do (a form of inner speech). So: imagery, music, and morals incorporate this kind of strong internalization, as does language. They are not like merely memorizing the dates of battles or the capitals of countries, because they involve inner action analogous to outer action. In particular, language acquisition goes through a stage of acquiring a highly structured set of internal abilities generating inner speech acts. Conceivably it might stop at that point, never progressing to the next stage of acquiring an ability to communicate—remaining a language dedicated purely to thought. Language acquisition is not just a matter of stimulus-memory-response, but of stimulus-memory-inner action-response. To put it baldly, the child primarily acquires inner speech, which may or may not lead to outer speech.

            This is an empirical hypothesis. I don’t know if it is true of actual human children. It certainly could be true of logically possible children, and it fits with the fact that children do acquire both inner and outer speech. Investigators would have to examine language development to see whether there is evidence that inner speech is acquired before outer speech. That might not be so easy to determine, given that inner speech is silent and invisible. But we could observe whether the child engages in self-directed monologue or shows signs of internal contemplation. Perhaps such investigations have already been undertaken: I am merely suggesting a plausible-sounding model that might or might not receive empirical confirmation. What I do think is that such a model would fall foul of traditional behaviorist prejudices and so might not be taken as seriously as it should; and also that it fits a general conception of learning that has many merits—the idea of learning as internalization in a strong sense. I gave several examples where such internalization operates and the case of language seems a natural addition to the list. The alternatives to the hypothesis are that inner speech develops in tandem with outer speech, but does not precede or enable outer speech; or that inner speech is the internalization of the child’s own outer speech. Obviously these are empirical questions, but the hypothesis I offer seems to me antecedently at least as plausible as the others: inner speech is the mechanism whereby outer speech develops, not merely something additional to it or the result of it. For it provides a psychologically natural way to construct linguistic competence: first master language internally without worrying about how it will be publicly expressed, and only then search for a way to link linguistic competence with the body—whether the mouth or the hands, the ears or the eyes.
For example, if you are mute but not deaf, you will naturally acquire a language by internalizing what you hear, but you will not externalize it by using your mouth. The ability to engage in communicative speech goes significantly beyond merely mastering grammar and vocabulary, which can be done purely inwardly. I imagine the child hearing outer speech, rehearsing it in his head, acquiring the ability to form internal linguistic strings, playing with these strings inwardly, and only later wondering how best to express his burgeoning thoughts to others.

            This picture fits well the idea of language as primarily a vehicle of thought not communication. If language is mainly a medium of thought, its natural form of existence is as an internal symbolic system, silent and solitary; no need to recruit bodily organs that can produce externally observable signals to others. So the child first internalizes outer speech to aid it in cognitive processes—employing a language of thought—and then uses what has been so acquired to lever external communicative speech into existence. First we have symbolic thinking, then symbolic communication—the inner as the foundation of the outer. UG is already internal and intrinsically unconnected to communication; so PG can occupy the same psychological territory—an internal system dedicated initially to thought. Silent speech is the natural medium for thought, so it develops first; only subsequently do noise and gesture enter the picture to permit language to be used for speaking to others. The larynx is very much a Johnny-come-lately. The speech centers of the brain make contact with the larynx late in the game, and might not make contact with it at all. After all, if people had no use for communication, they would still need a language to express their thoughts inwardly: a language of thought has a point even if a language of communication does not. Granted that language enhances thought, silent speech is the way to go, the noisy kind being redundant if communication is not on the menu. You are going to need a language to enhance thought no matter what, so you might as well get that under your belt as soon as possible; how far you will need it for communication is a far more chancy affair and can be left as a secondary accomplishment.

            Inner speech is certainly a reality of adult linguistic life. For solitary individuals it constitutes most of linguistic life, and even for the very social it rumbles as ceaseless background chatter. It also mingles with outer speech in myriad ways. An interesting question is whether inner speech regularly precedes outer speech: do we first say it inwardly and then give it outward utterance? We (our brains) certainly plan what to say before engaging the larynx, constructing in silence a pre-formed string of words (often only milliseconds before the utterance). This is a form of inner speech, the production of symbolic strings independently of external manifestation, and it precedes external speech; so we can say that adult outer speech is subsequent to inner speech, expressing what existed antecedently. In the child language acquisition proceeds from inner to outer too, according to the hypothesis: outer spoken language externalizes a prior inner language. This is certainly contrary to the behaviorist assumptions of nearly all psychology (and philosophy) in the last hundred years, but being contrary to that tradition is surely a mark of truth in these more enlightened times.  [1] The whole point of the mind is that it cannot be observed; any theory of its achievements should respect that fact.

 

Colin McGinn              

              

  [1] The idea that the primary reality of language is its appearance in outer speech is shared by nearly all approaches to language in the last hundred years, but it is belied by the simple fact that inner speech is common and arguably basic. Language is essentially larynx-independent, sub-vocal not vocal.


A Negative Definition of Truth

 

 

Consider a tribe that speaks a language containing no truth predicate. They do, however, have a falsity predicate, which they put to good and frequent use, for this is an argumentative tribe. They are forever telling each other that what they are saying is false—false! Their philosophers have naturally given some thought to the meaning of the falsity predicate and they have a theory: they think that “false” expresses negation. When a speaker asserts that p and her interlocutor objects, “That’s false”, this is equivalent to asserting that not-p. Thus: it is false that p if and only if not-p. They call this the “negation theory” of falsity. The reason members of the tribe don’t just assert the negation of what someone else has asserted is simply that it is quicker to say, “That’s false”, because then you don’t have to repeat what the speaker said. For a general statement like, “Everything the tribal leader says is false” they offer the paraphrase, “For any proposition p, if the leader says that p, then not-p”.
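The negation theory of falsity, together with the quantified paraphrase of “Everything the tribal leader says is false”, can be rendered in a short Lean sketch (the names `F`, `Says`, and `allFalse` are my own illustrative labels, not the tribe's):

```lean
-- The tribe's negation theory: the falsity operator is just negation.
def F (p : Prop) : Prop := ¬p

-- "It is false that p if and only if not-p" holds by definition:
example (p : Prop) : F p ↔ ¬p := Iff.rfl

-- "Everything the leader says is false", paraphrased as:
-- for any proposition p, if the leader says that p, then not-p.
def allFalse (Says : Prop → Prop) : Prop := ∀ p : Prop, Says p → ¬p
```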

            But being an argumentative tribe they don’t let it rest there: they often respond to an allegation of falsity by denying the allegation—“It’s not false!” they exclaim. In this way they reject an imputation of falsity—they deny a denial. The correct analysis of “p is not false” is “It is not the case that not-p”: a negation of a claim of falsity is equivalent to a double negation. At this point our tribe introduces an abbreviation for “not false” in the form of the word “true”, though somewhat reluctantly given their argumentative ways—they are uncomfortable with a word that expresses commendation. Still, no one ever ascribes “true” outright to another’s assertion: the word “true” is only used in rebuttal of someone else’s imputation of falsity. Someone asserts that p and receives the usual caustic response, “That’s false”. He hotly replies, “No, it’s true” meaning simply “It’s not false”. In the language of the tribe “true” means “not false” and “not false” means double negation, so “true” means double negation in that language.

            This suggests a possible theory of the meaning of our word “true”: it means double negation. According to this way of looking at things, falsity comes first, being analyzed as negation, and then truth is the negation of falsity, i.e. of negation. It isn’t that “false” is a mere device of disquotation, since we can’t say, “’Snow is black’ is false if and only if snow is black”—we have to insert a negation sign before “snow is black”. The falsity predicate does not simply disappear on the right hand side; it is replaced with a negation sign. This is a substantive piece of analysis: “p is false” means “not-p”. It is not like the claim that “p is true” means “p”, where “p” contains nothing corresponding to the word “true”: negation is what falsity consists in, its proper analysis. But now “not false” surely means the same as “true” (assuming bivalence), so truth is double negation. We can say: “’Snow is white’ is not false if and only if snow is white”. The difference is purely rhetorical, a matter of sounding more positive. Truth is basically the absence of falsity—the opposite of error. Just as we can think of falsity as the absence of truth, so we can think of truth as the absence of falsity. First we had negation, used to deny what someone else says; then we abbreviated to “false” in order to avoid repetition; then we had negation of falsity; then we arrived at truth. Truth is a logical construction out of negation. The predicate “true” just means “not-not”.

            At this point an objection is likely: double negation is simply equivalent to the proposition doubly negated, so if truth is double negation, then it is nothing—it is just the proposition being doubly negated. The double negation theory collapses into the redundancy theory, since doubly negating a proposition just gives the original proposition—formally it is just like disquotation. But this objection conflates logical equivalence with propositional identity: propositions can be logically equivalent without being the same proposition. Clearly adding negation to a proposition changes the proposition (it will go from one truth-value to the other), so it is hard to see how adding an extra negation will return us to the original proposition. We have enriched or extended the proposition when we doubly negate it; we have not left it exactly where it was. That is why it is harder cognitively to process “not-not-p” than “p”: there are more propositional components to go through. All we really have is a mutual logical entailment between the two propositions, but this is a far cry from strict propositional identity. Doubly negating a proposition is adding an extra negation operation to the singly negated proposition, not subtracting the first negation. We generate new propositions every time we add a negation sign, and of course we can do this arbitrarily many times; we do not thereby stand in one place, simply reiterating the original proposition. Very soon the sentences become impossible to process, which they shouldn’t be if we are not moving anywhere in propositional space. Sentences and their double negations are not merely stylistic variations.
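The distinction between logical equivalence and propositional identity drawn above shows up vividly in a proof assistant: “p” and “not-not-p” are distinct formulas whose equivalence must be proved, and only one direction goes through without the classical (bivalence) assumption the text mentions. A minimal Lean sketch:

```lean
-- p → ¬¬p is provable constructively:
example (p : Prop) : p → ¬¬p := fun hp hnp => hnp hp

-- ¬¬p → p needs classical logic (excluded middle), matching the
-- text's "assuming bivalence":
example (p : Prop) : ¬¬p → p :=
  fun hnnp => (Classical.em p).elim id (fun hnp => absurd hnp hnnp)
```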

            The idea, then, is that truth can be analyzed as an operation of double negation without a collapse into a redundancy theory, or equivalently negation plus the falsity predicate. It may seem redundant or merely disquotational because of the logical equivalence of “p” and “not-not-p”, but it is not really so; it is an amalgam of specific conceptual elements—falsity and negation, with falsity itself resolving into negation. The best way to understand truth, therefore, is to begin with falsity, because truth itself gives an illusion of redundancy or vacuity, which falsity does not. Then we construct truth from falsity. It turns out that negation is the underlying logical reality in the analysis of truth. In the beginning was negation, and negation begot falsity, which in turn begot truth. The correct analysis of “p is true” is thus, “It is not the case that not-p”, not simply “p”. If I say, “Everything the pope says is true”, my meaning is best expressed by, “For any proposition p, if the pope says that p, then not-not-p”. When Frege remarked that “it is true that p” and “p” express the same thought, he was strictly speaking wrong, though close to being right; rather, “it is true that p” expresses the same thought as “it is not the case that not-p”. Thus truth is not strictly speaking disquotational, since the double negation of a sentence is not identical to that sentence, i.e. not the result of removing the quotation marks.
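On this analysis the truth operator is a double application of the same negation operator that analyzes falsity; a minimal Lean rendering (the names `T`, `Says`, and `allTrue` are illustrative assumptions):

```lean
-- The double negation theory of truth: "true" means "not-not".
def T (p : Prop) : Prop := ¬¬p

-- "true" means "not false": T p is literally the negation of ¬p.
example (p : Prop) : T p ↔ ¬(¬p) := Iff.rfl

-- "Everything the pope says is true", paraphrased as:
-- for any proposition p, if the pope says that p, then not-not-p.
def allTrue (Says : Prop → Prop) : Prop := ∀ p : Prop, Says p → ¬¬p
```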

            It is true that the double negation theory is in roughly the same spirit as the classic redundancy theory, in contrast to the other standard definitions of truth in terms of coherence or correspondence; but it is importantly different, since it gives a real analysis of the concept of truth. The theory allows us to see what is right in the old redundancy theories (as found in Frege, Wittgenstein, Strawson, Tarski, Ramsey, et al.) but also to see where they overstate the matter by claiming that “true” is empty of content. Truth does have an analysis (in terms of negation), but it also generates sentences that are logically equivalent to sentences not containing it (nor the concepts used to analyze it), since “not-not-p” entails “p”. The theory also suggests that “true” is not logically a predicate, since it reduces to an operator, as “false” reduces to an operator (the same one, but with a single application). The reason the theory has not been recognized and favored is that people have tended to investigate truth without investigating falsity, where the role of negation is quite obvious. No doubt there are many things that can be meant by a “definition of truth”, and each may have its value, but the double negation theory is one sort of definition—and quite satisfactory as far as it goes.  [1] It “catches the actual meaning of the word ‘true’”, to borrow Tarski’s phrase. Negation turns out to be integral to its meaning. It should be added to the other standard theories of truth.

            One final point: the double negation theory imposes a condition on the possible bearers of truth, namely that they should be logically subject to negation. Any sentence that can be negated is a potential bearer of truth, and none that cannot be negated can be true (or false). Thus moral sentences are capable of truth, given that they can be doubly negated, which they clearly can be. But imperative sentences, say, can’t be true if the theory is correct, since you can’t say, “It is not the case that shut the door!” The theory indeed explains how truth distributes over sentences, because it provides a necessary and sufficient condition for sentences to be capable of truth-value: viz. whether the sentence can be coherently negated.

  [1] The concept of truth can be approached from different directions and different aspects of its significance explored; these different approaches need not be incompatible. Perhaps we shouldn’t be surprised if truth turns out to be multi-dimensional, given its many liaisons.


A Causal Theory of Truth

We have been inundated with causal theories: of perception, knowledge, memory, and reference. But no one (to my knowledge) has proposed a causal theory of truth. On the face of it this is surprising, since truth is so closely bound up with reference. If reference to both objects and properties is subject to a causal theory, why isn’t truth? I will explore a causal theory of truth that seems rather natural, indeed a natural extension of the causal theory of reference. Put simply, the theory says that a belief or statement is true if and only if it is caused by the facts. Some beliefs or statements are caused by the facts and some are not, being caused instead by desires or errors or fictions or fantasies. That is the difference between a true belief and a false belief: its causal relation to the facts. Some beliefs are brought about by objective reality and some are otherwise brought about (say, by subjective factors): to be true is to be caused in the former kind of way. Where the correspondence theory says that truth is correspondence to the facts, the causal theory says that truth is causation by the facts.  [1]

The theory assumes that the world consists of facts (objects having properties) and that these facts causally shape beliefs, making them true. If it is a fact that p, then it is true that p, so there can be no problem with the theory as far as sufficiency is concerned. But then couldn’t the theory dispense with the causal element and simply equate truth with fact? No: because truth is a property of representations (beliefs or statements or sentences or propositions), so we need something to connect facts with truth. Traditionally that has been a correspondence relation; according to the causal theory, it is a causal relation. For a belief to be true is for it to be caused by a fact, not just for the fact to be a fact: the belief that snow is white is true in virtue of its being caused by the fact that snow is white—that is, the belief is caused by what it represents (what is believed to obtain). In the most straightforward case a person is in perceptual contact with a fact and he or she forms the belief that it is a fact, thus forming a true belief: you see that it’s raining and this fact causes you to believe that it’s raining—so you have a true belief. If you were to be hallucinating rain because of a drug, you might form the same belief but it would be false, since your belief would not be caused by the fact that it’s raining but by the drug. If you dream that p and form the belief that p, then your belief is not true, since it was not caused by the fact that p. If I tell you a lie, my statement is false because it was not caused by the fact I purport to state but by my desire to deceive you; while in the case of a truthful statement a fact causes my true belief and my statement transmits the causal relation to you—you have a true belief because your belief was (indirectly) caused by the fact that I stated to obtain.  [2] When the facts shape belief we have truth, but when illusion, error, deception, and fantasy shape belief we have falsehood. 
Truth depends on the causal antecedents of belief: do they stem from objective reality or from other factors (often internal to the subject)? Is belief caused by the factual or the fictional?
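The core biconditional of the causal theory can be sketched in a toy model (my own illustration; the essay offers no formalism, and the names here are hypothetical): a belief is counted true just when its cause is the very fact it represents, and false when it has some other source, such as a drug or a dream.

```python
from dataclasses import dataclass

# Toy model of the causal theory of truth (an illustrative sketch, not the
# author's own formalism): a belief is true iff its cause is the very fact
# it represents.

@dataclass(frozen=True)
class Fact:
    content: str            # e.g. "it is raining"

@dataclass(frozen=True)
class Belief:
    content: str            # what the belief represents
    cause: object           # a Fact, or some other source (drug, dream, desire)

def true_belief(b: Belief) -> bool:
    """True iff the belief is caused by the fact it represents."""
    return isinstance(b.cause, Fact) and b.cause.content == b.content

rain = Fact("it is raining")
perceived = Belief("it is raining", cause=rain)       # caused by the fact
hallucinated = Belief("it is raining", cause="drug")  # same content, other cause

print(true_belief(perceived))     # True
print(true_belief(hallucinated))  # False
```

The two beliefs have identical contents; what distinguishes the true one from the false one in this model is solely its causal provenance, which is the intuitive starting point the next paragraph emphasizes.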

That is the simple way to put the theory, but of course it needs to be refined and complicated. Still I wish to emphasize its intuitive starting point: true belief is the kind brought about by the facts; false belief is the kind brought about by things other than the facts. Compare: veridical perception is the kind brought about by external objects; illusory perception is the kind brought about by other factors, such as intoxication or defects of the perceptual system. You believe truly if the facts impress themselves on your belief system; you believe falsely if your beliefs arise from some source other than the facts, such as biases or blind spots. Of course, all factors that influence belief are trivially facts, but truth is having your belief caused by the fact represented by the belief in question. If the fact that p causes you to believe that p, then you have a true belief.

We can compare this account to causal theories of reference.  A speaker refers to an object x with a name “a” if and only if there exists a (suitable) causal connection between x and “a”—say, a chain of causal links leading back to an initial baptism. A speaker refers to a property P with a predicate “F” if and only if instances of P regularly elicit utterances of “F” (or some such). In the case of whole sentences we are dealing with fact-like entities (states of affairs, situations, ways things are) not objects and properties, so these are the appropriate entities to stand in causal relations to sentences.  [3] We simply extend the causal theory from names and predicates to sentences: reference to an object is being caused by that object to utter its name, reference to a property is being regularly caused by instances of that property to utter a predicate, making a true statement is making an utterance caused by an appropriate fact. We thus use the word “true” to distinguish this kind of causation from other kinds—the kinds that produce false statements. To say that a belief or statement is true is to say that it is a consequence of the facts; to say that a belief or statement is false is to say that it is not a consequence of the facts, but of fictions, fantasies, errors, etc. In its strongest form the theory says that the property of truth is that property a belief has when it is caused by a fact (the fact represented). Instead of saying, “Your belief is true” we could equally say, “Your belief is factually caused”.

At this point a swarm of questions assails the causal theorist; they are for the most part quite familiar. Are the causal conditions necessary and sufficient for truth? How do we handle truths about non-causal facts? What about deviant causal chains? Do facts really cause anything? To spare the reader (and myself) tedium, I will be as speedy as possible with these well-worn issues. Are the conditions necessary? Couldn’t we have true beliefs and yet there be no causal links between belief and fact (pre-established harmony)? What about the truths of mathematics, modality, and morality? Here we can reply by amending the theory from its simple causal formulation: we can invoke the concepts of reason or explanation or counterfactual dependence. Thus: the reason (but not the cause) for forming the belief that p is the fact that p; the explanation for believing that p is the fact that p (where this is not causal explanation); a person would not believe that p were it not for the fact that p. We just weaken the causal relation to accommodate the awkward cases—just as we have to for causal theories of reference and knowledge. The causal theory of truth is thus no worse off than these causal theories (no better either). We can also remark, with a knowing wink, that this is actually a desirable result for the theory, since these non-causal cases are precisely those in which the concept of truth carries dubious credentials. Causal dependence is what truth basically consists in, so anything non-causal will struggle to qualify as true—except perhaps by extension or metaphorically or fictitiously. In the clearest cases truth amounts to causation by fact–we needn’t get too worked up about peripheral cases. Or we could simply stipulate that there are two kinds of truth requiring two kinds of theory: causal theory for one kind and correspondence theory for the other (or coherence or deflationary theory). 
It depends on the type of subject matter involved (and we already know there is a distinction between analytic truth and synthetic truth).

As to deviant causal chains: there are none–so long as a fact causes a belief in that fact we will have truth. As to facts as causes: we should be liberal with the notion of cause, but if we decline so to be, we can always choose another kind of cause (say, event causation), and let that be the cause of belief. If you don’t think beliefs have any causes at all, I invite you to substitute whatever else you think is responsible for beliefs; and if you think nothing is responsible, you are beyond help. We can thus make the standard dialectical moves in response to the standard objections. At worst we concede that no causal analysis of the concept of truth is possible but suggest instead that we are offering a better picture of truth, one that sees truth as a passive effect of reality not as an active mapping onto reality (as with the correspondence theory). The world gives us truth by acting on us; we don’t achieve truth by contriving to depict it. This is a theory that works nicely for animal truth: animals have true beliefs because the world acts on them to install beliefs (or some more primitive representational state); they have no need to strive for truth. When facts cause beliefs they automatically produce truth, whether in mouse or man.

Here is a more difficult counterexample: the case of random truth. Suppose I am making random statements about the color of things in some unknown part of the world, most of which are false, but by chance I hit on a true statement about the color of a flower there—I have said something true but the fact in question was not the cause of my saying it. The case must be admitted: there is such a thing as an accidentally true statement (similarly for a case of wishful thinking that just happens to produce a true belief). But surely the case is exceptional: the vast majority of cases are those in which the belief’s truth results from the fact in question—where we can know the belief is true just by knowing the person’s causal history. In the random truth case we can’t infer truth from knowledge of the person’s causal history. It’s a bit like introducing by stipulation a name for an unknown soldier and succeeding thereby in referring to a certain individual long dead: you do name a person without there existing any causal link to that person, but the case is quite unlike standard cases of naming. Truth is rooted in causation by facts though it can break free of these confines in unusual circumstances; we shouldn’t give up the basic insight in order to accommodate exceptional cases.  [4] Hard cases make bad law and all that. At a pinch we can retreat to a genealogical theory: this is how the concept of truth started out, but it might develop new forms alien to its origins. We must cling to the initial insight derived from perceptual beliefs: their truth consists in the fact that they are caused in a certain way, i.e. by the very fact they represent. The fact by itself will guarantee truth; we just need to add the relational conditions that enable beliefs to be true—that they exist and are externally caused. 
Once all this has been stated there is nothing further for talk of truth to add: the distinction between truth and falsity emerges from the distinction between fact-caused belief and fantasy-caused belief (to put it simply). What does an ascription of truth add to the assertion that a person’s belief that p was caused by the fact that p? It is quite redundant.

            The causal theory of truth, like other causal theories, can lay claim to the honorific label “naturalistic”: truth is primarily a property of empirical particulars (beliefs, statements) not abstract propositions, and it consists in a causal relation between agent and world. It is not conceived as a mysterious mapping or isomorphism or picturing; nor is it declared an irreducible primitive. It is a relation between the mind and the world that consists in a kind of causal connection, particularly via the senses. We observe that people’s beliefs are shaped by the world of fact and we call those beliefs true because of it; we also observe that sometimes people’s beliefs result from other factors (bias, illusion, wishful thinking) and these beliefs we call false (though they might in odd cases be true by chance).

  [1] One version of the correspondence theory (there are many) equates truth with “designating an existing state of affairs”: the causal theory replaces the designation relation with a causal relation but retains the general form of the correspondence theory. We could view it as proposing a causal theory of the designation relation between beliefs (or statements) and states of affairs. It thus “naturalizes” such designation—as a causal theory of names “naturalizes” the naming relation.

  [2] Note the analogy to causal theories of names: there is a social dimension to the causal relations involved, as well as experts and deference. Thus some beliefs are directly caused by facts while some are caused via chains of communication radiating out from an original encounter. Testimony exploits causality to transmit truth—as chains of communication can transmit reference.

  [3] An attractive feature of the causal theory is that it explains the referential transparency of truth: if “Hesperus is a planet” is true, so is “Phosphorus is a planet”. This is explained by the fact that causal statements are themselves transparent. The transparency feature is not captured by disquotational theories, since the disquoted statement is just the original statement. But causation is indifferent to mode of presentation or verbal formulation.

  [4] One thing we can say is that in standard cases true statements about color are caused by the facts. So the theory can be reformulated to assert that a given belief is true if and only if it is in standard cases caused by the facts.


A Difficulty with Utilitarianism


Utilitarianism maintains that the value of a state of affairs depends solely on its level of utility. For a state of affairs to be good (desirable, valuable) it is necessary and sufficient that it contains the best possible level of wellbeing (pleasure, happiness, preference satisfaction). So if two situations contain the same level of utility they must be indistinguishable morally: value supervenes on good feelings (roughly). But consider the following possible states of affairs: (a) people enjoy a level l of happiness and know that l is their level of happiness; (b) people enjoy level l of happiness but don’t know that l is their level. In condition (b) they have false beliefs about how happy they are, either underestimating it or overestimating it; while in condition (a) their beliefs are just right. The level of utility is the same in both cases but the epistemic facts are quite different. Are these situations indistinguishable from the point of view of value? It might well be supposed that they are not: (a) is a better situation than (b). If so, utilitarianism cannot be a complete account of value. Knowledge of utility adds value to utility itself. The utilitarian typically assumes that knowledge of utility tracks utility, so there is no gap of the kind exploited by cases (a) and (b); but we can pull these apart in conceivable cases, and then the insufficiency of utility reveals itself.

            A number of responses may be made to this simple argument. One response is that the case I described is not logically possible: people can’t be wrong about their level of happiness, since happiness is a mental state and people can’t be wrong about their mental states. However, whatever may be true about mental states in general, it is clearly possible to wrongly estimate one’s state of happiness. A change for the worse may make you realize how happy you used to be (“I didn’t know how lucky I was”), and you might think yourself happier than you really are because you have been so deprived for so long. People are not infallible about their level of wellbeing, though they may be generally reliable. What if you have been brainwashed into believing yourself brimming with joy when in fact you are only moderately content? Don’t people habitually underestimate their level of wellbeing until things turn nasty for them? Happiness is more elusive to knowledge than sensations of pain or experiences of red. If someone asks how happy you are, you might have to pause and reflect before giving an answer.

            Second, it may be claimed that the cases don’t actually differ in value: if the utility level is the same, the value is the same. But this is so much biting of the bullet: surely it is better to know than not to know, especially when it comes to one’s own happiness. Isn’t this a rather vital piece of knowledge? A person who went through life believing himself a miserable wretch when in fact he was quite happy would not be living as good a life as one who gets it right; and similarly for someone who regards himself as unusually happy but in fact has a rotten time of it. There is positive value in knowing where you stand happiness-wise.

            Third, it might be maintained that the knowledge in question contributes to the level of happiness, and that’s why we judge (a) and (b) differently. That is, knowing your correct level of happiness is a form of happiness: the person who gets it right will therefore be a happier person. If so, we can subsume the value of knowledge under the heading of utility. But this is not plausible: judging your degree of utility correctly does not add to your utility count, any more than other knowledge does. These are two separate things: utility on the one hand, knowledge of utility on the other. Belief isn’t a feeling, so it can’t contribute to the good feelings a person has. Knowledge isn’t a form of pleasure.  [1] Whether someone’s beliefs about their own happiness are true or false doesn’t affect how happy they are.

            So we are compelled to accept that happiness plus true belief about happiness is better than happiness alone, which means that happiness is not the only valuable thing. Of course, it has been held that knowledge is a value separate from utility, but what the cases of (a) and (b) show is that knowledge of happiness has intrinsic value. The utilitarian failed to see this because of the assumption of transparency—that happiness will necessarily communicate itself to belief. But once we recognize that that is false we have to accept that knowledge carries its own value, even when (especially when) it is knowledge concerning happiness.  [2] Nor can we suppose that such knowledge has merely instrumental value in producing further happiness, because we can stipulate a case in which no such variation in happiness is present—the two people converge exactly and for all time in their utilities while differing in their utility knowledge. Not only is happiness a good thing, but knowledge of happiness is also a good thing—though a good thing of a different type. In a sense, then, utilitarianism is self-refuting, because it presupposes a value it refuses to acknowledge. It assumes that knowledge tracks happiness, thus avoiding acceptance of the separate value of knowledge, but pulling the two apart shows that utility is not enough. The good life is not just the happy life; it is a life in which one is also properly apprised of one’s happiness.

Colin McGinn           


  [1] We don’t analyze knowledge by saying: “x knows that p if and only if x believes that p, x feels good about believing that p, etc.”.

  [2] Once it is accepted that utility and knowledge constitute separate values, the question of priority arises: which value is more important? Granted limited resources, we have to assign them to promoting our accepted values, so we have to decide how much to allocate to utility and how much to knowledge of utility. This means that we will have to allocate less to utility than we would under the pure utilitarian doctrine, since we have to allocate resources for the production of knowledge of utility too. So the extended utilitarian doctrine will contradict the recommendations of the simple utilitarian doctrine. And there will always be difficult questions about which value to promote in a given situation. The dent in utilitarianism is therefore not trivial.  
