The Selfish Biosphere

The gene is selfish with respect to whole organisms: it replicates itself by using organisms as survival machines that might have to sacrifice themselves for the genetic good, as in raising young. Genes bear the same kind of relation to individual organisms that people used to think applied to the relation between species or groups and individual organisms (acting “for the good of the species”). However, it doesn’t follow from this that genes are not themselves subject to a similar type of selfishness. Elsewhere I have argued that genes act as survival machines for the process of natural selection; here I will expand on this idea.[1] In the case of so-called artificial selection just such a relation holds: the human breeder may literally survive (make a living) by the intentional selection of genes and hence whole organisms. The genes are selected by the breeder in such a way as to sell the maximum numbers of dogs (say). This may even be accomplished by chemical analysis and advanced genetic technology. In such a case the gene is the entity that serves the interests of its designer, thus maximizing the designer’s survival prospects. Likewise, in the case of so-called natural selection, where no such conscious designer is present, that impersonal process generates genes that enable it (that process) to continue in existence and even to spread far and wide. What I want to add to this picture now is the suggestion that we can refer to the selfish entity (a process or mechanism or pattern of relations) as the “biosphere”. The term is variously defined, sometimes meaning the regions of the earth occupied by life forms and sometimes meaning the totality of all earthly life forms; I wish to mean by it the fundamental processes or mechanisms or traits of life forms. It translates directly as “life-sphere” and this is helpful for my purposes: I mean the sphere of reality in which living processes are manifested. 
We might coin the abstract term “life-ness”, to be contrasted with non-living objects and systems (rocks, atoms). It is the idea of the biological as such—the vital, the animate. Then the claim is that it is the biosphere in this sense that is the ultimately selfish biological reality. The biosphere constructs genes as its survival machines via natural selection, which in turn construct whole organisms (as well as the external products of organisms such as webs or burrows). It is what survives and prospers when genes survive and prosper in virtue of the survival machines they build. The processes of life are the ends of the biological line, the final beneficiaries of biological activity. Life itself is “selfish”.

Let me be more specific about what these processes are. We can certainly cite natural selection as a life process—the main engine of life creation. It is a biological mechanism or procedure. But we can also list certain universal traits of life that survive because of the excellence of the genes they produce (in a certain sense of “produce”): reproduction, respiration, digestion, growth, maturation, embryogenesis, locomotion, and no doubt others. These traits are responsible for the success of organisms, which leads to the success of genes, which leads to these traits continuing in existence, i.e. their success. The biosphere is the totality of these traits, processes, mechanisms, and events—life elements, we might say. So the biosphere is what benefits from excellent genes (hence excellent organisms): it thrives and expands when effective genes exist, and it would wither and die if it started manufacturing lousy genes (ones that produced short-lived non-reproducing organisms). So we can say, for instance, that the trait or process of reproduction is what survives when genes survive—they enable it to survive. The biosphere is to genes what dog breeders are to genes: except that it is a mindless cause of differential survival not an intentional agent. If we imagine that the entire biosphere was invented by a super-scientist, precisely in order to create living processes, with genes brought in just in order to bring about that end, then this super-scientist would be the equivalent of a dog breeder. As it is, the biosphere acts like a dog breeder, i.e. it keeps going because of what it creates–hence “the selfish biosphere”. Dawkins called his book The Selfish Gene, but other titles would have been possible: The Successful Gene, The Megalomaniac Gene, The Ruthless Gene, The Predatory Gene, The Immortal Gene, etc. Likewise, we can transfer these titles to the biosphere: it is what fits these (mostly metaphorical) descriptions—the apex beneficiary and mastermind. 
And just as we might correspondingly refer to the individual organism as “the exploited organism”, “the slave organism”, “the victim organism”, or “the employee organism” in relation to its genetic masters, so we could speak of the gene as “the exploited gene”, “the slave gene”, etc. in relation to the biosphere. It is the ultimate boss, controller, and beneficiary. The point of a gene, for the biosphere, is to act as a means for its own survival and flourishing; and indeed the genes have served the biosphere well over the eons, as witness its extent and richness. Just as we are the “lumbering robots” of the genes (to use Dawkins’ phrase), so the genes are the lumbering robots of the biosphere—or perhaps better its faithful nanobots. The gene master has its servants in the form of organisms, but the master is in turn a servant to a super-master, viz. the life process.[2] The biosphere is the ultimate employer, creator, and beneficiary, logically speaking. Of course, neither genes nor biosphere get to enjoy their dominance, not being conscious beings, but they are the de facto masters of the biological universe: they are the things that organisms (those moist plebeian units) labor to maintain. What comes as a shock to the genes’ egotism is that they are exploited too—or perhaps we should say they are in existence only because of the process of natural selection and should therefore be grateful (as organisms should be grateful to genes for creating them at all). If you are happy to exist, then you can thank those tiny scheming selfish genes, and ultimately the life force that brought them into existence (i.e. natural selection with its host of life-promoting traits). Your mind and brain exist only because the biosphere was desperate to keep itself in existence, if I may put it so. Remember, when organisms go, it goes (and that, we are told, is just a matter of time). The extinction of all species is the extinction of the biosphere, as well as the extinction of all genes. The days of the biosphere are numbered, but meanwhile the genes labor to keep it in existence.

One might wonder whether there is an analogue in the biosphere of competition among genes. Genes don’t just build survival machines; they build survival machines that are better than those of their genetic rivals. Natural selection is selective–picky, discriminating. The gene that survives best is the gene that outperforms its rivals. Is there anything similarly competitive at the level of the biosphere? Indeed there is, as we may see by considering the fate of the dinosaurs. As everyone knows, the dinosaurs went extinct as a result of a cataclysmic event, soon to be replaced by a burgeoning mammal population, leading eventually to us. The biosphere underwent a convulsion, altering its composition. This can be described as follows: the section of the biosphere consisting of dinosaur traits was replaced by mammalian traits in accordance with natural selection, since the post-cataclysmic environment was more hospitable to mammals than to dinosaurs. In other words, the genes that produce dinosaurs were not as effective as contributors to the biosphere as the genes that produce mammals: the biosphere did better under one genetic dispensation than the other. Traits that characterize mammals became more numerous than traits that characterize dinosaurs in the competition for survival. Living processes did better with mammals as their vehicles than with dinosaurs: better reproduction, respiration, locomotion, etc. Traits compete as genes compete (indeed genes compete in virtue of the traits they produce). The biosphere fluctuates over time as new traits and processes replace old ones, and it survives better under changing conditions according to the genes it produces. Particular types of life compete with other types of life under natural selection, just as particular types of genes so compete. So sections of the biosphere not only survive; they survive by competing with other sections. Life forms outperform other life forms.[3]

We might reasonably say that the central fact of biology is reproduction and inheritance. This is what distinguishes the biological world from the non-biological world (mountains don’t have baby mountains). Reproduction is the biological process par excellence. The primary job of the gene is to enable effective reproduction—to make sure the next survival machine is born.[4] Given that, it is the reproductive process in all its variety that is maximized by effective genes: the job of genes is to produce successful acts of reproduction, lots of them. Genes are reproduction maximizers. Not all reproduction is sexual, but most of it is, so we can say that genes operate to generate fruitful sexual acts. This means that the section of the biosphere that is most benefitted by good genes is the sexual section: good genes lead to an active sex life. So genes are sex maximizers: they act as they do so as to maximize the amount of sex in the biosphere. Less anthropomorphically, it is a de facto consequence of good genes that sex is maximized in the biosphere. The less sex there is, the less reproduction there is, and hence the less inheritance there is, which means the less transmission of genes there is. Genes and sex go hand in hand. The gene is really a sex machine—a device for bringing sex about. Thus the genes act as machines for generating sex in the biosphere. They are obsessed with it, dedicated to it. The ultimate rationale of genes is maximizing sex in the biosphere, because sex is required for reproduction, and reproduction is the basic fact of biological existence (the biosphere is the reproduction-sphere).[5]

 

Colin McGinn

 

[1] See “The Selfless Machine”; also my “Trait Selection” in Philosophical Provocations (2017).

[2] We could refer to this as “process biology” as distinct from “object biology”: processes are taken to be theoretically basic not objects like organisms and genes. This would bring biology closer to physics in some respects.

[3] The biosphere is usually limited to life on earth, but we can extend the notion to life elsewhere in the universe—so there can be many biospheres. In principle these could compete with each other for survival, though that would require tremendous feats of relocation or undreamt of forms of travel. There could be competition for natural resources as well as direct conflict. In such an eventuality the genes would act as competitive weapons in the struggle for biosphere survival; they would be means of preservation for biospheres. We would then have the analogue of competition among genes (and organisms) at the level of biospheres: the intergalactic selfish competitive mindless biosphere.

[4] If we ask what are the reproductive organs of the body, we quickly see that the genitals are just part of the story. In addition to the womb, we can say that the whole animal body is geared towards reproduction: the overall anatomy as well as the brain and mind. The animal needs to be able to mount its mate correctly, find its mate, entice its mate, and then carry out the necessary actions. The whole animal is involved in reproductive activity, this being the central biological imperative (you eat so you can mate). Looked at this way, the whole animal is a reproductive organ—a device for transmitting its genes into future generations. The entire biosphere is taken up with reproductive machinery and reproductive behavior, conceived broadly. The body is a survivalist sex machine dedicated to genetic reproduction.

[5] Let me put it cheekily as follows (we need not remain stone-faced about this): the DNA is a device for filling the biosphere with sex, a veritable libertine. And the amount of sex in it has increased exponentially as the population of living beings has increased. The biosphere is now having a lot of sex (courtesy of the genes and their bodily appendages). If the whole system had been set up by a superhuman erotomaniac, she would no doubt feel as if her work had been done, since planet Earth is pullulating with sex. Sex, you see, is what is preserved by biological activity. Before, there was a sexless universe; then the biosphere came along and lo! sex was born. Sex has been going strong ever since, aided by genes and their bodily assistants. Selfish sex, as we might say.


Two Concepts of Freedom

It is hard not to feel the pull of both of the standard positions on free will. On the one hand, it seems right to say that a free action is one that is in accordance with the agent’s desires, as opposed to one that is forced on the agent in some way (the OED defines “free” as simply “not under the control or in the power of another”). This is quite compatible with determinism–physical, psychological, or divine. On the other hand, it seems right to insist that freedom requires the ability to do otherwise, which is ruled out by determinism. If an agent has no alternative to acting as he did, how can his act be free? But surely the future course of nature is always necessitated by antecedent conditions, so there are no alternative actions the agent could have performed. Thus the will is both free and not free, a contradiction. Depending upon what conception of freedom we choose to adopt, we get different answers to the question of whether free will exists. But it is assumed that there is a single thing (denoted by “free will”) over which the combatants are contending.

I want to suggest that this debate is afflicted by a methodological problem, and once this problem is fixed the solution drops out quite naturally. The problem consists in extracting the word “free” from its normal linguistic context and trying to analyze it in isolation. In fact, there are two very different notions expressed by standard locutions, which generate different answers to the question whether free will exists. Both answers are correct, so that one type of locution has application while the other does not. The locutions are “free from” and “free to”. We say that an agent is free from constraints or influences that potentially limit his range of actions: illnesses, obligations, engagements, coercion, upbringing, genes, or divine interference. In this vein we can sensibly ask if the agent is free from his desires and free from his physical condition (including his brain states): here the answer appears to be universally in the negative. I am not free from my own psychology or my own physiology—though I may be free from external coercion or prior obligations or God’s dominion. The determinist adds up all the antecedent states of the world and declares that we are not free from this totality. Again, this seems logically permissible: we simply ask whether my freedom-from extends to all of the factors bearing down on me, specifically my mental and physical states. And the answer is clear: I am not free from all of that. I don’t have that kind of freedom. In the relevant sense, I could not have acted otherwise (though there are perhaps other senses in which I could have acted otherwise, e.g. I could logically have had a different psychology). Put simply, we don’t have freedom from the past—the locution “free from” does not apply to the totality of past facts (though it applies to various subsets of these facts). It is quite true that we are free from X for many values of X, so we are free relative to these values, but we are not free from all values of X. We don’t have complete freedom-from. So we can forget having that kind of freedom. Determinism rules out freedom-from.

But it doesn’t follow that we don’t have freedom-to. I have freedom to do Y if I can act on my desire to do Y. The locution “free to” allows application in conditions in which I do as I please, as opposed to acting against my desires because of external (or internal) coercion. That is what “free to” means (as the OED records): I have a set of desires (wishes, inclinations, commitments, etc.) and I can either act in accordance with them or against them, thus acting freely or not.[1] This has nothing to do with being free from all prevailing conditions: indeed I am not free from my desires (which may causally determine me to act as I do), but that doesn’t mean that I can’t act in accordance with them! I am free to follow my desires, because not prevented from doing so, even though I am not free from them. I may sometimes not be free to follow my desires, if I am imprisoned or shackled or subject to physiological upsurges that prevent me from acting as I wish; but much of the time I am free to do pretty much as I please (but see below). Quite often I am free to do exactly as I please, with no impediment at all to my freedom to do as I please. This is in no way compromised by my lack of freedom from antecedent conditions. Freedom-to is just a different concept from freedom-from; the locutions have quite different meanings and conditions of application. The compatibilist is thus right to insist that freedom-to is consistent with determinism, while the incompatibilist is right to maintain that freedom-from is inconsistent with the facts of historical determination. But the two theorists are not disagreeing with each other, once we distinguish between the two sorts of locution with their different meanings. The reason we feel the pull of each position is that both positions are perfectly correct so far as they go; we only get confused because we conflate the two concepts. 
And the reason we do that is that we yank the word “free” from its normal linguistic context and ask questions like “Does free will exist?” or “Is free will compatible with determinism?” Strictly speaking, these questions are ill formed, because they try to sever the concept of freedom from its surrounding grammatical context, which alone gives the word sense. We violate Frege’s context principle, or we fail to heed Wittgenstein’s warning about the perils of taking language on holiday. We are like someone who perplexes herself about freedom by trying to integrate the meaning of the locution “free with” (“John is rather free with his money”) with “free from” and “free to”. Is it that a free agent is one who is free with his actions? Can we be free with our past? Are our desires free with us? None of these sentences makes sense and can only generate pseudo-problems. Likewise, we should not try to shuttle between “free from” and “free to”, as if asking whether we are free to change our past or free from the future. In fact “free from” is a backwards-looking locution while “free to” is a forwards-looking locution: one connotes independence from the past; the other connotes dependence on desire in relation to the future. Am I free to act as my desires prompt me to? That is the question of freedom-to. Am I free from everything that has led up to this moment? That is the question of absolute freedom-from. And the answers are respectively: yes, I am free to act on my desires as opposed to being made to go against them; but no, I am not free from the conditions leading up to and surrounding my action, including my desires. I have freedom-to but I don’t have freedom-from. 
That is all that needs to be said, or can be said; there is no further question expressible as “Am I free?” or “Do I have free will?” It is not that I both have free will and don’t have it, or that I have to reject the plausible things said by the compatibilist and the incompatibilist; rather, I just have to return the word “free” to its natural environment tightly coupled with the prepositions “from” and “to”. Then (and only then) I will understand the import of our talk of freedom. The correct assessment of the philosophical upshot of this examination is thus twofold: (a) we are not free from our past, since our actions are determined by it; but (b) this does not rule out a robust sense in which we are free to act on our desires (the only kind of freedom-to there is). As a matter of fact, if we were free from our past, that would not provide an acceptable notion of freedom, since it would amount merely to randomness; and if we had the ability to act otherwise than our desires indicate (including our moral and prudential desires), that would not be a form of freedom-to. No occurrence in nature is free from the past, including human action; and nothing but acting on desire can add up to freedom-to. Nor is there any notion of freedom that is purer or better than freely acting on one’s desires—as if we are only really free when discarding our desires and acting in a vacuum.[2] For instance, a person who acts on his desire to save the world (perhaps putting aside his other selfish desires) is the paradigm of a free agent—and it is no impediment to this that his desire follows strictly from his genes and his upbringing. He couldn’t have acted otherwise, but so what—he was free to act on his most cherished desire. He was free to act on his altruistic desire despite attempts by others to thwart him, though his action wasn’t miraculously free from his mind and nervous system. The former freedom is not undermined by lack of the latter freedom.[3]

The difference between the two concepts is illustrated by a difference in their logic. An action is either free from a factor X or not; in particular, either actions are determined by antecedent conditions or they are not. It is an all-or-nothing matter. But forward-looking freedom-to is not so simple: a good case can be made that we are partially free in this way but not completely free. Am I really free to do exactly what I please, even in the most favorable conditions? Don’t I have all sorts of unrealistic desires that I can never act on? I would dearly love to fly like a bird, but I am not free to do so—the laws of nature prevent me from so acting. Don’t I also have a lot of conflicting desires that keep me from fulfilling all of them? Realistically, we can’t always do exactly what we please—we are not completely free. We are pretty free most of the time (if we are lucky), or more free than our neighbor, but we are not totally free. Freedom-to is not an all-or-nothing matter, unlike freedom-from. It operates in different conceptual terrain. It doesn’t breathe the same air. The logical behavior of “free to” is not the same as that of “free from”. This is why the compatibilist and the incompatibilist often seem like they are talking past each other: for they are talking about different things. The word “free” crops up in their discourse about these things but not because they have an identical subject matter—any more than its occurrence in “free with” discourse (compare also “tax-free”, “free as a bird”, “free society”, “free radicals”, “degrees of freedom”, “stimulus-free”, etc.). We mustn’t mix language games; we mustn’t tear “free” free of its linguistic auxiliaries. We mustn’t confuse one concept with another. Then we can accept that we have plenty of “free to” freedom but zero “free from” freedom—though remaining wary of that dangling use of “freedom”. Both are legitimate uses of the word “free”, but the constructions in which they occur have quite different import.

The intuitive idea of determinism is that the future is bound by the past, not able to escape its clutches, its shackles. This conflicts with the idea that we are free from the past and hence have alternative courses of action open to us. Thus we don’t have this kind of freedom. The intuitive idea of voluntary action is that we are often able to act without constraint or interference from sources external to our own desires, wishes, inclinations, preferences, values, and so on. This in no way precludes our actions from being genuinely free: to do what you want because of what you want is the very essence of freedom. Not indeed freedom-from, since we are not, in so acting, free from our desires (values etc.), but freedom to follow these desires without external impediment. Both lines of thought are perfectly sound: but, contrary to traditional thinking, they are not in conflict with each other. The plain fact is that we are free (to) but we are not free (from). I would recommend never using the word “free” in philosophical discourse without its attached preposition. That will make us free from confusion and free to stop worrying about the problem of freedom.[4]

 

[1] The fundamental idea of free action is surely freedom from other people: it is doing what you want irrespective of the wishes and actions of others. Internal factors can operate like other people, but the basic idea is that of interpersonal constraint or restraint. This has nothing to do with determinism; it is purely a matter of being free to act on one’s desires independently of others. So the notion of freedom is a social notion at root: if there are no other people, the question of freedom cannot arise. It is other people who put one’s freedom in jeopardy, not the past or one’s internal physiology or the laws of nature or one’s own desires. Have philosophers and others succumbed to a kind of anthropomorphism about such factors, modeling them on interfering human agents? That would explain a lot.

[2] It is sometimes said, quite correctly, that we are not compelled to act on our desires—we can resist their urgings and refuse to act on them. However, this is so only because we have other mental states that countermand these desires, typically commitments to values that conflict with the desire in question. These may take the form of second-order desires to the effect that the first-order desire ought to be resisted. The fact that a certain desire may incline us to action without compelling us should not be converted into an argument against psychological determinism, let alone physical determinism; for the reason for resistance will itself be another desire, possibly a value judgment, or some other psychological factor.

[3] It is instructive to consider free animal action. We recognize the difference between a caged or bound animal and a free-ranging one: the difference is the difference between the animal acting as it desires and being prevented from so acting. To be sure, its actions are determined by its genes, upbringing, and impinging stimuli, but that has nothing to do with the distinction between being caged and being free-ranging. Animals can be free to act on their desires or not so free, but they are certainly not free from the antecedent state of the world—they must act as they do. Still, there is all the difference in the world between being free to act on their desires and being coerced in various ways. The case is precisely analogous in the case of humans.

[4] I have been fretting about freedom for over fifty years and have wavered between different positions, most recently favoring the compatibilist position. This is my attempt to lay the subject to rest, at least so far as I am concerned.


Truth and Meaning

What have truth and meaning got to do with each other? A dominant view has it that the two are deeply connected—specifically, meanings are truth conditions.[1] The view comes in several varieties, but the central thought is that the meaning of a sentence consists of the state of affairs that would make it true. Alternatively, meaning is given by the conditions of the world that would render the corresponding sentence true. Thus the meaning of “snow is white” can be identified with the condition of snow being white, i.e. the state of affairs consisting of snow being white, i.e. snow being white. Truth conditions can hold or fail to hold (as with “snow is black”), so that false sentences can still be meaningful; but when they do hold they coincide with facts (otherwise they are merely possible states of affairs). Meanings are accordingly to be identified (generally) with non-psychological states of the world—states that could exist without the existence of minds. There were states of affairs (truth conditions) before there were minds and language. Language hooks onto these antecedent states of affairs and thereby becomes meaningful. Meanings are thus an extra-mental matter—combinations of objects and properties, according to a standard position (sets of possible worlds in one version).

There is a fundamental difficulty with this doctrine (as well as several non-fundamental difficulties): meanings are always for someone but truth conditions are not necessarily for anyone. No sentence is meaningful except in relation to a speaker or hearer; there is no such thing as a meaningful sentence that no one understands or could understand. Some sentences are difficult to understand, but no sentence has a meaning that transcends human understanding, i.e. contains concepts that no one possesses. A sentence to be meaningful must be meaningful to someone. Meanings are inherently graspable things; they are essentially human. They are objects of apprehension. But the same is not true of truth conditions: being objective aspects of the world, there is no guarantee that they are grasped by humans. They may not be humanly graspable at all. Certainly many states of affairs eluded human knowledge for a long time, and no doubt there are many that still do: they existed but were not apprehended by us. None of these constituted meanings. They can only enter the realm of meaning if they are grasped—and they may not be. Some speakers may not even grasp the state of affairs of snow being white, so the sentence “snow is white” will not have a graspable truth condition for them; the sentence is not meaningful to them. But then meanings can’t be truth conditions: for meanings are essentially graspable while truth conditions are not. Truth conditions are the wrong kind of thing to constitute meanings—too divorced from a speaker’s understanding. Meanings are speaker-relative, but truth conditions are speaker-independent. Intuitively put, meanings are necessarily things that speakers know, but truth conditions are not necessarily things that speakers know. This is why it is possible to be a skeptic about truth conditions but not about meanings: maybe we don’t know what states of affairs there are in the world, but we surely know what our words mean. Then how could the latter be the former? In short, meanings are psychological, but truth conditions are not.

Here we see a sharp contrast between truth conditions and every other factor that meanings have been identified with. Verification conditions are not speaker-independent in the way truth conditions are, since they depend on attributes of the speaker, viz. his powers of knowledge acquisition. A sentence is always verifiable (or not) for a speaker. Similarly for the use of a sentence: this too is an attribute of the speaker—his ability to perform acts of speech with the sentence. Equally in the case of image theories: images are items existing in people’s minds, not realities that can exist independently of human consciousness. Ditto for intentions and beliefs. It is generally assumed that whatever meaning is it had better relate to the speaker’s mind or behavior or brain, but truth conditions theories locate meaning in the extra-mental world, where it can possibly transcend human knowledge. But even when the states of affairs are known, they are logically of the wrong nature to constitute meaning, since meanings are meanings for someone but truth conditions are not truth conditions for anyone. To be a state of affairs is not to be a state of affairs to someone, but meanings are always meanings to someone. Meanings are always meaningful to someone, but states of affairs are not always meaningful to anyone—they are not intrinsically accessible to the mind. Meanings are things that are communicable or usable in thought, but states of affairs are not subject to this constraint; they exist independently of the human subject. They can’t be meanings because they are too objective.

It might be replied that meanings should not be identified with truth conditions themselves but with mental representations of truth conditions. Truth conditions are admittedly too mind-independent to constitute meaning, but the same cannot be said of mental representations of them. Two questions arise about this reply. First, what is the nature of these mental representations? Perhaps they are senses construed as modes of presentation: this is obscure for many meanings, but in any case the suggestion threatens to make truth conditions redundant in the theory of meaning—why not just make do with the mental representations themselves? Second, the question must arise as to what constitutes the meaning of these mental representations: won’t this lead to the same problem as before? If their meaning is constituted by truth conditions, then we have chosen the wrong kind of thing for meaning to be; but if not, we have moved to another theory of meaning altogether. This is very clear if the mental representation is a sentence in the language of thought: it either has a truth conditions type of meaning or it does not—the former is untenable, while the latter amounts to abandonment. No, the theory must be that meanings are truth conditions; but then we are faced with the objectivity problem. We are locating meaning too far “outside the head”, making it hostage to ignorance and skepticism. To repeat: states of affairs are (generally) extra-mental in nature and bear no necessary relation to a speaker’s understanding, while meanings are necessarily objects of understanding. Identifying the two cannot then be correct. What a speaker means by an utterance is not what obtains in the world when that utterance is true. The two are subject to quite different epistemic constraints.

What then is meaning? I have no pat answer: none of the alternative theories strikes me as adequate for one reason or another. But that’s okay: maybe we just don’t know what meaning is (lord knows it has been a problem). Wittgenstein repudiated the truth conditions theory of the Tractatus in the Investigations, but he put no new theory in its place; and he was Wittgenstein. Certainly the considerations adduced in the Investigations are far more psychological and speaker-centered than the abstract truth conditions theory of the Tractatus, but they don’t add up to a nice neat theory of what meaning really is. Wittgenstein saw that human meaning cannot emerge from objective states of affairs, but he also rejected both image theories and simple behaviorist theories. His solution, in effect, was to question the hunt for a theory. I prefer to say simply that I don’t know. What I would accept is that truth conditions are connected to meaning in some way: we do often grasp the truth conditions of our sentences. We know that “snow is white” is true if and only if snow is white, and this fact is somehow connected to the meaning we grasp. So we can say that meaning somehow involves or leads to or points to truth conditions, but we can’t say how. Meaning serves to bring truth conditions into view, rendering them visible to us: we see that the sentence is true under certain conditions. Meaning makes truth conditions apparent to us, but not because it is identical to truth conditions. If meaning were use (whatever exactly that means) we could say that the use of a sentence renders its truth conditions manifest to us: we see the relevant state of affairs in the use. But this is horribly obscure and leaves us none the wiser. Still, one can appreciate how the truth conditions theory gained its popularity: truth conditions are connected to meaning somehow and in certain cases (but not when they transcend human knowledge); we just can’t see how.
Indeed, it is problematic how this can be so, given that truth conditions are external to the human subject: how can we apprehend objective truth conditions in virtue of any fact about our inner nature? It’s not as if we literally perceive them with our senses, or stub our toe against them. So we not only don’t know what meaning is; we also don’t know how meaning brings truth conditions into the picture, as it apparently does. But if I am right, we at least know that it isn’t truth conditions—we have some negative knowledge. And of course our not knowing what meaning is explains a lot about the history of the subject (all that floundering and fighting).

I began by asking what truth and meaning have to do with each other. I just argued that truth conditions cannot constitute meaning as a matter of deep principle, putting aside questions of extensionality and so on. But there is another line of attack that is far more obvious and well trodden, namely that many meaningful sentences are not true at all, such as imperatives. The limitation of this style of argument is that it is open to ingenious response: it might be contended that imperatives are really disguised indicatives, or that the spirit of the truth conditions theory can be preserved by switching to obedience conditions. However, in the light of what I argued above these maneuvers look suspiciously ad hoc: for the thrust of the present objection is simply that the concept of truth plays no obvious role in the theory of the meaning of non-indicative sentences. If a language consisted solely of imperatives, the truth conditions theory would look like a non-starter. What does the meaning of these sentences have to do with whether states of affairs obtain? They command actions; they don’t describe states of affairs. Or suppose a language were totally expressive of emotions with no world-directed assertions in it at all: wouldn’t it contain meaningful sentences with nary a hint of truth conditions? There is just nothing in the concept of meaning as such to entail that truth conditions are necessarily involved. So long as there is communication there is meaning, but communication can take many forms. If truth conditions constitute meaning for indicative sentences, that is a special case by no means generalizable to other sentence types. But even in that case there is a principled problem about identifying meaning with truth conditions—the objectivity problem. It is noteworthy that such a theory only gained traction relatively recently with the work of Frege, early Wittgenstein, Carnap, Tarski,[2] and Davidson: before that it had no defenders.
Earlier theorists took a far more psychological approach, preferring images, ideas, feelings, thoughts, or behavior. Did they tacitly sense that truth conditions place meaning too far beyond the human subject? Did they realize that states of the world are not intrinsically meaningful or meaning conferring? Meaning is always meaning to someone, but the world doesn’t point to the human subject in this way. Meaning refers itself to us, as it were, but the world makes no such reference—so how could it be meaning? How can facts be meanings given that facts bear no essential relation to the beings that grasp meanings? The fact that snow is white is not a fact for us, but the meaning of “snow is white” is a meaning for us. Identifying the two looks like a category mistake. Alternatively put, possible worlds are not worlds for subjects, but meanings are intrinsically subject-directed. As Wittgenstein would say, meanings are part of our form of life, but objective states of affairs are external to that (they constitute the form of the world). Meaning can certainly be about the objective world, but it isn’t the same thing as that world.[3]

Here is another way to put the same basic point. Meaning essentially attaches to symbols: no symbols, no meaning. If I say, “’Snow is white’ means that snow is white”, this is by no means equivalent to any statement that omits reference to the symbols “snow is white”. But notoriously, if I say, “’Snow is white’ is true”, this is equivalent to a statement that omits reference to those symbols, viz. “snow is white”. Statements about truth are equivalent to statements about the world, but statements about meaning are not thus equivalent. So statements about truth point directly to the world whereas statements about meaning do not. Meaning has to do with symbols, but truth has to do with reality. How then could truth conditions constitute meaning? How, in particular, could the statement “’snow is white’ means that snow is white” be analyzable by the statement “’snow is white’ is true if and only if snow is white”? The word “true” cancels quotation, but the word “means” does not. Truth takes you to the world, but meaning keeps you within the domain of symbols. Thus meaning and truth are logically different kinds of concepts. Accordingly, we cannot hope to explain meaning by invoking truth: truth belongs out in the non-symbolic world while meaning belongs with symbols grasped by the mind.
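The disquotational asymmetry can be rendered schematically (a rough sketch, using quotation names for sentences; this is only a restatement of the point, not a formal proof):

```latex
% Truth cancels quotation: a statement about the sentence
% is equivalent to a statement about the world.
\text{``Snow is white'' is true} \;\Longleftrightarrow\; \text{snow is white}

% Meaning does not: the left-hand side mentions the symbols
% ineliminably, so no such equivalence holds.
\text{``Snow is white'' means that snow is white} \;\not\equiv\; \text{snow is white}
```

The first biconditional is just Tarski’s familiar schema; the point is that no analogous symbol-free paraphrase exists for the meaning statement.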

[1] See, for example, Davidson’s “Truth and Meaning”.

[2] It is very much a moot point whether Tarski’s theory of truth qualifies as a truth conditions theory in the classic sense. He does not speak of truth conditions or states of affairs in the development of his definition, and it is not unreasonable to take it that the notion amounts to nothing more than a sentence of the meta-language. He is not referring to states of affairs or anything of the kind but simply using meta-language sentences on the right-hand side of biconditionals referring to object language sentences. It is quite unclear that he is advancing a truth conditional semantics in the classical sense (in fact, I think he is not).

[3] Is there some kind of use-mention confusion at work here?


The Selfless Machine

 

 


In a striking passage from The Selfish Gene Richard Dawkins writes as follows: “Other replicators perhaps discovered how to protect themselves, either chemically, or by building a physical wall of protein around themselves. This may have been how the first living cells appeared. Replicators began not merely to exist, but to construct for themselves containers, vehicles for their continued existence. The replicators that survived were the ones that built survival machines for themselves to live in. The first survival machines probably consisted of nothing more than a protective coat. But making a living got steadily harder as new rivals arose with better and more effective survival machines. Survival machines got bigger and more elaborate, and the process was cumulative and progressive.”[1] He goes on to say of these encased replicators: “They are in you and me; they created us, body and mind; and their preservation is the ultimate rationale for our existence. They have come a long way, these replicators. Now they go by the name of genes, and we are their survival machines”. This is the core of the “selfish gene” perspective: animal bodies (and minds) are the selfless conveyors of selfish genes into future generations. Organisms are artifacts constructed by genes to get themselves propagated. Dawkins is obviously thinking of the survival machines consciously constructed by humans (but obviously not consciously constructed by genes): houses, clothes, weapons, body armor, heating systems, etc. We construct artifacts that aid our survival, and genes do much the same. Their survival machines are squishy, organic, and sometimes conscious, while ours are dry, mechanical, and unconscious (so far at least): but the logic is the same—survival-enhancing devices. 
He could equally have spoken of survival kits or suits or sheaths or vehicles or pods or crafts or envelopes—the idea is just that of a complex entity that contains the replicators and enables them to reproduce safely. Thus a bodily trait is functionally just like a spider’s web or a beaver’s dam: it is a device for increasing the probability of gene survival in a competitive world. The spider’s genes are machines for making other machines (spider bodies), and these machines make still other machines (spider webs). The genes are the architects and beneficiaries of organisms, which in turn are the architects and beneficiaries of bits of external machinery (nests, burrows, mounds, tools, etc.). Genes are indirectly the producers of these bits of machinery, because they build organisms that build these bits. The external machinery is not selfish, as organisms are not selfish (in the technical sense of “selfish”); these things work to aid the selfish genes. They are selfless machines working for selfish genes.

If we add the extended phenotype to this picture, we can say that the genes build survival machines that incorporate both bodies and the external products of bodies (as well as minds in some cases)—webs as well as spiders. It might seem that this is a rather limited class of cases, since many animals don’t create anything tool-like. But closer examination reveals that actually the extended phenotype is the rule not the exception, since animals modify their environment in myriad ways the better to aid survival. Many animals dig holes and these act as useful survival devices (so do plants with their roots). Animals tend to prepare their food before ingestion by chewing, chopping up, ripping, or infusing saliva. Birds use twigs as perches as well as building nests. Snakes use friction between themselves and the ground to propel themselves forward. Cats use spatial proximity to catch prey.[2] Fish use water to push their fins against. Organisms take advantage of the environment external to them in order to aid their survival: sometimes they actively construct useful artifacts and sometimes they find them already in existence. The survival machine includes all the facts that promote gene survival, extending out into the environment. It is arbitrary to locate a cut-off point at the animal’s epidermis. Put differently, the genes must take into account the environment external to the organism as well as the organism itself when building a good survival machine. Natural selection favors good webs, but it also operates with hospitable found objects: it selects organism-environment combinations. A house includes not just the walls but the spaces in between, as well as proximity to water, etc. The phenotype is always extended in some way. In any case, external machinery should be reckoned to the machines the genes have created, as suggested by the extended phenotype. The phenotype survival machine built by the genotype is an extended phenotype survival machine.

So far this is pure Dawkins, stated in other language: it’s all selfish genes, selfless survival machines, and extended phenotypes. Now I want to venture further afield and push this perspective a little harder. The first question is whether the gene is the end of the line: might genes be survival machines for something else? Webs are survival machines for spiders, and spiders are survival machines for genes, but are genes survival machines for another type of entity? Do they function so as to keep something else in existence? Does their survival promote the survival of an even more basic biological entity? What about the chemicals that make them up? If a chemically complex gene survives, so do its chemical components—they become more common than potential rivals. So maybe the components of genes are entities that benefit from the survival of whole genes—selfish gene parts. After all, if a human machine proliferates, so do its parts: the more BMW cars there are in the world the more BMW car components there are in the world. The preservation of whole cars leads to the preservation of parts of cars (say, a particular type of hood). Thus we arrive at the idea of the selfish molecule. Second, isn’t it at least logically possible that we don’t know everything about genes, and lurking within them is some new kind of entity that shapes them? Maybe there exist more basic biological units that construct DNA sequences, so that the survival of DNA sequences ensures the survival of the more basic units. There is no reason that I know of to believe this is actually true, but its mere logical possibility shows that the gene is not necessarily the end of the line. Physicists keep discovering new and more basic particles: is it inconceivable that geneticists might discover even more basic genetic units?
But third—and this is the point on which I wish to place particular emphasis—can’t we say that genes are the product of the process of natural selection, so that that process is the most biologically basic reality? For genes enable that process to survive: no genes, no organisms, and so no process of natural selection. Natural selection builds genes and without them it (that process) cannot exist: its very survival depends on the survival of genes. If natural selection started producing dud genes, ones that fail to build bodies that get the genes propagated, then the whole process would grind to a sickening halt—with nothing biological left. There is no process of natural selection on planets that contain no genes (or some equivalent): the process needs genes (replicators) if it is to gain a foothold. Looked at this way the genes function as devices for keeping natural selection around. So natural selection had better build good genes or it will go extinct. If we picture natural selection as a tinker whose livelihood depends on good gene tinkering, then an incompetent tinker is not long for this world—he goes to Valhalla if he produces bad tinkering work. Similarly, the process of natural selection depends for its survival on good gene construction (the kind that leads to robust organisms): the genes act as survival machines for that process. Genes are artifacts of that process, as organisms are artifacts of genes, and as tools are artifacts of animals. So the chain of production leads back to natural selection: it is the driving force of evolution, and genes act as its survival-enhancing devices. Natural selection is thus the ultimate “selfish” biological reality. To put it anthropomorphically, the genes are the indispensable servants or slaves of the process of natural selection: if they fail, it fails; if they die, it dies. The more genes there are the more natural selection there is, as the more organisms there are the more genes there are.
It is in the “interests” of the process of natural selection that genes should survive, as it is in the “interests” of the genes that organisms should survive. We have existence-dependence at multiple levels: the spiders need the webs to survive, the genes need the spiders (and the webs) to survive, and natural selection needs the genes to survive (as well as the spiders and the webs). There is thus a whole hierarchy of survival machines terminating in the process of natural selection. Natural selection is the biological reality—the ultimate basis of the whole shebang. To paraphrase Dawkins, the process of natural selection, which may have looked paltry in the beginning, has gone far in its multi-million year history on earth, producing some remarkable survival machines. The DNA molecule itself is an impressive vehicle for the continuance of natural selection—it has enabled natural selection to colonize every corner of the planet. Natural selection now flourishes everywhere, while millions of years ago it was relatively confined and primitive. The gene was one of its greatest inventions, providing an excellent vehicle for it to travel far and wide. It is what has multiplied and proliferated beyond all imagining, aided by the trusty gene: without the gene (specifically DNA) it might have gone nowhere. If we think of natural selection as a species of process, comparable to species of chemical processes, then we can say that it is one of the most widespread species of process in the world. The genes are proud to have acted as its survival method. By building better bodies they improved its prospects dramatically by improving their own prospects. They gave natural selection a chance to operate widely and ingeniously; without them it would be eking out a living at a very low level of replication. The genes have maximized the spread of natural selection—as organisms have maximized the spread of genes. 
The logic is the same in both cases, though the entities are very different. Genes are complexes of molecules while natural selection is an abstract process. Complex molecules have enabled an abstract process to become an entrenched feature of planet earth. It is almost as if natural selection consciously designed them with that purpose in mind—as it can seem like the genes were conscious designers too. But no, in both cases we just have a mindless mechanical system following its own inevitable logic. It all follows from the very nature of natural selection as an abstract creative process.

Now that we are expanding the conceptual scheme of foundational biology let us revisit the extended phenotype. That was a good insight into the arbitrary nature of what might be called narrow phenotype—the kind limited to the animal’s body. The functional unit is really the body plus its adaptive effects (spider plus web); we could even reckon the web to be part of the extended body of the spider—it is that body spread out a bit further. But what about the genotype—must it be understood narrowly? Is the genotype confined by the boundaries of the DNA? It is telling that we regularly speak of genes by referring to their effects: genes for the kidneys, genes for the heart, and genes for the brain. We identify the gene by reference to the phenotypic trait it produces not by reference to its molecular composition. So why not work with the idea of the extended genotype? Why not take the bodily trait the gene produces to be part of the gene, as the web is part of the spider (i.e. its phenotype)? Not indeed part of the chemical composition of the DNA molecule, but part of a functional unit comprising molecule and bodily trait. Dawkins speaks of the “long arm of the gene”: yes, and it extends to the body it constructs in embryogenesis. The organism has a long arm, reaching to its adaptive artifacts; well, the gene reaches out that way too, to include its adaptive phenotype. Let’s call the external artifacts of an organism its “exotype”: then we can say that phenotype includes exotype and genotype includes phenotype—which includes exotype. By transitivity, genotype includes exotype—the web is part of the gene! That is, we reconfigure the boundaries of the gene so as to incorporate the external effects of the gene: the wide gene, as we might call it. There is the narrow body (spider alone) and the wide body (spider plus web); likewise there is the narrow gene (DNA) and the wide gene (DNA plus phenotypic trait). 
The canonical form of a wide gene description is thus “gene for X”, where X is some adaptive trait of the body or its products. It is the functional unit that is selected for by natural selection—a molecule plus the trait produced by that molecule (with accompanying apparatus). We thus insert the survival machine into the gene, widely construed—just as we insert the survival machinery of the exotype into the phenotype, widely construed. The web becomes part of the spider; the spider becomes part of the genes, or its traits do. We still have the narrow gene and the narrow spider in our ontology, but for theoretical purposes we also recognize another level of description that blurs such boundaries. If we imagine another biological world in which the same molecules are coupled with different traits, by virtue of different laws obtaining in that world, then we can see the point of carving things up this way: for in that world the same narrow genes will have different phenotypic expression, which obviously affects survival value. The same narrow gene will be adaptive in our world but not in that different world, given that it produces different disadvantageous traits. But if we individuate the gene widely we can say that adaptive value is preserved, since these are different genes in that world: the body comes along with the (wide) gene. What types of genes exist in a given biological world? Are they chemical types or trait-dependent types? If the latter, we only have the same genes when the traits are included in the total package—the extended genotype. Given that we are already comfortable with the extended phenotype, I see no reason why we shouldn’t accept the extended genotype (while retaining the narrow genotype—the gene without its long arm).

The initial replicators built their survival machines and even added external devices to enhance their survival capabilities. But later they recognized that they had merged with their machines and become extended beings. It is the same with spiders and webs: the initial spider built its web machinery and then stood proudly back admiring its handiwork. But later it recognized that it and the web were parts of a larger whole—it had merged with its web. The web survival machine became part of what the spider is—not significantly different from its legs and jaws. Similarly, the replicators were not just sitting snugly inside a survival machine but had joined forces with it: they were made up of bodily traits as well as tightly localized chemicals. To avoid confusion we might introduce a new term, since “gene” has become so strongly associated with spatially confined chemicals: we could call the entire complex of chemical material and bodily trait the “trene”, a combination of “trait” and “gene”. So trenes get selected by natural selection, just as combinations of spiders and webs get so selected. Trenes constitute the extended genotype. Organisms are collections of trenes, i.e. DNA molecules plus their associated bodily traits. A given trene would include, for example, a specific molecule in the spider’s DNA, a set of anatomical characteristics, and a distinctive type of web: this is what gets selected by natural selection (“selective holism”) not each of its components alone. Reverting to the point about natural selection as a beneficiary of the genes, we can say that natural selection builds survival machines with three main components: DNA, bodily traits, and external artifacts. Thus the process of natural selection survives because it builds machines that incorporate these three elements—that is, collections of trenes. 
Natural selection keeps going, keeps proliferating, because this is a good design for a survival machine—better than just bare replicators and better than organisms that can’t make anything. We ourselves, then, with our DNA, bodies, minds, and artifacts are vehicles for the continued existence of the process of natural selection.[3] We have become accustomed to thinking of ourselves as survival machines for our genes, but actually we and our genes are survival machines for an underlying biological process. We exist in all our glory because natural selection built us so that it could survive and flourish. We are the machines needed to allow natural selection to keep a foothold. We thus join the vast array of organisms on earth, from plants to people, bacteria to bats, whose job in life is to keep natural selection going. We are the devices invented by this abstract process so that it can survive. Natural selection created the gene (a certain type of replicator) and the gene created us (a certain type of reproducer): everything in the biological world is a survival machine for mindless natural selection to remain in existence and expand its domain. This is the ultimate rationale (to use Dawkins’ words) for the whole biological world. If you thought gene survival was desiccated enough, then natural selection survival surely brings desiccation to a new level. Even the mighty gene is made to feel secondary in the great scheme of things.

 

[1] The Selfish Gene (1989), pp.19-20. I am going to assume the perspective of this book in what follows.

[2] We could also say that cats use the device of injury in order to secure their prey: their teeth construct an injury that spells the demise of the prey animal. In the case of many big cats they use the windpipe of the prey as a device to subdue it, which is not fundamentally different from throwing a lasso around its neck. The environment takes on a function for the cat in its pursuit of food.

[3] This is not to say that we are only that—which is why I didn’t say we are just such vehicles. This is just one of the things we are, though quite an important thing. We are also moral beings, creative forces, free agents, romantic partners, scientists, etc. But from the point of view of biology we are the survival machines (conscious ones!) of the impersonal process of natural selection. This is ultimately why we exist.


Criteria of Meaningfulness

 


The positivists created a question that had not existed before, viz. what is the criterion for whether a string of words is meaningful? Their proposal was that such a string is meaningful if and only if it is empirically verifiable (with due allowance made for analytic sentences). The intention was to place metaphysical sentences on the wrong side of the line—to declare sentences that seemed meaningful not to be meaningful at all. They were trying to expose illusions of meaning—sentences with pseudo meanings. As there is fake jewelry, so there is fake meaning. Verifiability was to be the test of authentic meaning. As is well known, their efforts came to naught for a whole series of reasons (mainly the criterion kept ruling out too much); but the enterprise itself was not cast into doubt. The positivists simply had the wrong criterion—some other criterion would do better. Accordingly, other criteria were mooted: falsifiability, truth conditions, use, inferential role, grammaticality, language-games, etc. Without going into detail all these proposals ran into problems of one kind or another: the existence of meaningful non-indicative sentences, excessive narrowness, vagueness of formulation, lack of selective bite, and so on. Perhaps most instructive is the grammaticality criterion: it sounds eminently reasonable at first but quickly comes to grief. A sentence is meaningful just if it is grammatical, i.e. is well formed according to the rules of grammar. This is generous enough to let in what intuitively counts as meaningful (including metaphysical sentences) but strict enough to rule out nonsense strings and things like rocks. But two questions can be pressed: (a) what makes a word meaningful, and (b) grammatical according to which rules, exactly? Can’t there be meaningless words (“brillig” and the like)? And can’t grammars vary from language to language? What is grammatical in one language may not be grammatical in another.
Even if there is a common basic grammar to all natural human languages, what about invented languages or languages spoken by aliens—can’t they be meaningful too? So this proposal also flops. The question seems hard to answer, like many philosophical questions, despite the plethora of attempted answers. Is this one more puzzle to be added to an already long list? The mind-body problem and the problem of meaningfulness—both are resistant to solution.

An array of possible responses to the difficulty suggests itself. Maybe the property of meaningfulness is a primitive property, like Moore’s simple indefinable good; maybe “meaningful” is a family resemblance term with no property shared by all instances of the meaningful; maybe we just haven’t found the answer yet but should keep trying; maybe it is a bona fide philosophical mystery like the mind-body problem; maybe the whole idea of meaning is an illusion and should be eliminated from our thought. Again, I will not discuss these options, noting merely that they all seem like using a sledgehammer to crack nuts. Instead I will defend my own answer: the question is not a sensible question—not as philosophers intend it anyway. It has no interesting philosophical answer. It is an illusory question. We can certainly distinguish between the meaningful and the meaningless: words and sentences are meaningful but rocks and mountains are meaningless. It would be odd to say of a mountain that it is meaningless—it isn’t even a candidate for being meaningful. A mountain isn’t made up of symbols or even of purported symbols: it’s just a big lump of stuff. It isn’t verifiable or falsifiable or grammatical simply because it isn’t symbolic at all. Most things aren’t—they are devoid of meaning, meaning-less. But this distinction is not what the philosophers who sought a criterion of the meaningful were getting at—of course rocks and mountains are not meaningful! Whoever thought they were? They give no impression of meaning, no illusion of it; no one is tempted to regard them as meaningful. So the genuine distinction illustrated by the difference between words and rocks is irrelevant to the philosopher’s quest; the philosopher is interested in the distinction as it applies within the class of symbols. But that is exactly where the project runs aground: for anything that counts as a symbol is already meaningful.
Recall Grice’s theory of speaker meaning: an utterance is meaningful if it is used to induce in an audience a belief by way of the audience’s recognition of an intention to do just that. But any symbol can be used to achieve that aim just by being a symbol: metaphysicians can certainly induce beliefs in others by using metaphysical strings of symbols. They could even do this by using a system of whistles or hand gestures or even heads on platters (to use Grice’s own example). It is only too easy to be meaningful given that there are speakers intent on communicating thoughts. That’s all there is to it really—what is meaningful is what people treat as meaningful, i.e. use to communicate. What kind of speech acts did the positivists think the metaphysicians were performing as they talked to each other? Obviously they were getting things across to each other, so they must have been speaking meaningfully. And the same for all the other interesting criteria proposed by philosophers. In so far as the question has any answer, it is entirely trivial. The sentences are meaningful because you can mean things by them, i.e. communicate thoughts.

But wait: can’t we reformulate our question to focus on beliefs and thoughts? What is the criterion for having a contentful thought? Now we have created a new concept (and a new word)—the concept of the contentful—and raised the question of when it is satisfied. When does a belief have content and when does it not? Certainly we can sensibly report that some things have content and some do not—again, states of mind do but rocks and mountains do not. But that doesn’t give us any sense in which the distinction applies within the class of beliefs and thoughts—as it might be, beliefs that give the illusion of having content but don’t really have it. And that distinction is barely intelligible: to lack content is to lack a condition necessary for being a belief. What did metaphysicians have according to the positivists—fake beliefs, non-beliefs masquerading as beliefs, illusions of introspection? That is absurd: of course they had beliefs with content just by having beliefs. They reasoned with them, disputed each other’s, and abandoned them under argumentative pressure. But then their sentences were meaningful by Gricean standards, since they served to communicate such beliefs to each other. The idea of a belief just is the idea of something that is contentful, as the idea of a symbol just is the idea of something meaningful (given its employment in an act of communication). So the notion of a criterion of meaningfulness, as sought by philosophers, is an incoherent notion, a kind of displacement of a genuine distinction into an area in which it can’t properly apply.

It is rather like trying to find a criterion of the humorous. It is true that some things are humorous and some things are not: jokes are but funerals are not (or rocks and mountains). But it would be folly to try to find a criterion within the class of jokes: jokes are funny qua jokes—some funnier than others perhaps—but it would be bizarre to suggest that a certain class of things that are regarded as jokes are not really jokes. A joke is a joke and a sentence is a sentence. A joke can be lacking in humor as a sentence can be lacking in significance, but that is not to say that either can belong to a class of the literally non-humorous or non-meaningful. It might be said that a joke can be in bad taste or tedious or crappy in some other way, as a sentence uttered by a metaphysician can be trivial or absurd or crappy in some other way; but that is not a matter of being an illusory joke or an illusory meaning—something that someone might be tempted to wrongly classify in these ways. What is humorous is what makes people laugh, as what is meaningful is what makes people believe (according to Grice). It would be strange to say that a metaphysician’s humor is not real humor by some tendentious standard, as a metaphysician’s meaning is not real meaning, or a metaphysician’s belief is not real belief. One might wish to say that such humor or meaning or belief is a waste of time, or is nothing like science, or is totally obscure, or is a mere game with words—but not that it isn’t even humor or meaning or belief at all.

Another way to see what is wrong with the search for a criterion of meaningfulness is to note that it treats meaningfulness as an evaluative notion: it is supposed to be good to be meaningful and bad to be meaningless. The positivists didn’t think that metaphysics is perfectly fine but entirely meaningless! But this attitude is clearly mistaken with regard to things that really are meaningless—like rocks and mountains. These things are not defective through lack of meaning; they simply are not the kind of thing for which we expect meaningfulness. What are supposed to be defective are symbols that lack meaning, since they purport to mean something. It is the illusion of meaning that they present that is deemed deplorable, not the mere fact that they lack meaning. That would no doubt be a bad thing—a kind of deceptiveness—but it is not really possible, as I argued above. On the other hand, real lack of meaning is perfectly possible—most things not being symbols to start with—but this is not what the seeker after a criterion of meaningfulness is looking for. There is all the difference in the world between denouncing the metaphysician’s words as meaningless and remarking that his bowtie is meaningless (or the soles of his feet). The philosopher should never have taken the concept of meaningfulness as evaluative in the first place; that notion simply marks a natural division between two kinds of fact, semantic and non-semantic. If the metaphysician were simply practicing elocution by uttering his metaphysical sentences, he would not be disturbed by the allegation that his words are meaningless; what hurts is that he intends thereby to speak deep truths. But this whole idea is misguided because there cannot be a criterion of meaningfulness of the kind the philosopher seeks. A grammar book will tell you that a sentence is a part of speech that serves to convey a complete thought: that is quite correct, and the condition is easily satisfied.
No criterion of meaningfulness can be acceptable that entails that sentences in this sense can fail to be meaningful. The concept of meaningfulness is not a philosophically interesting concept, at least as philosophers approached it in the first half of the twentieth century (perhaps the question of whether life is meaningful is a real question). This could be why it was not a topic of philosophical interest before that time (contrary to some projective history of philosophy).[1]

 

Colin McGinn

[1] For instance, it would be quite wrong to interpret Hume in this way, as some positivists were inclined to do. One might try to claim that Heidegger’s “Nothing noths” is literally meaningless, but (a) Heidegger offers a gloss on that sentence that gives it a kind of sense and (b) if it is meaningless that is because it is ungrammatical not because it is unverifiable. In any case, it is hardly representative of the class of metaphysical sentences (if indeed there is such a class). I note that it is hard to find anything in Wittgenstein’s Philosophical Investigations that can be construed as an attempt to give a criterion of meaningfulness, in contrast to the Tractatus. That whole project was pretty much dead by the latter half of the twentieth century, though the idea of it lingered and was never fully repudiated.


Action and Acting

Jack gets up, goes to the kitchen, opens the fridge, takes out a beer, pops the cap, and drinks it. Why did he do that? Because he wanted a beer and thought there was one in the fridge. The philosopher says that Jack’s action is explained by his having a desire for a beer and a belief that this course of action will bring about the satisfaction of that desire. The action fits the desire via an instrumental belief. The belief-desire pair constitutes the agent’s reason for acting; some say it causes the action (others deny this but still hold that the action is explained by a belief-desire pair). Isn’t this plain common sense? You want something, you figure out a way to get it, and you act based on those two factors. That all sounds very reasonable and convincing: actions are explained by the agent’s having desires and beliefs that lead to the action in question. This is what folk psychology is all about.

But consider an actor’s actions. John is sitting on a sofa on a stage with an audience in front of him. He gets up, walks across the stage, opens a fridge, takes out a beer, uncaps it, and drinks the contents. Why did he do that? Was it because he fancied a beer and figured the fridge would contain one?  No: he had no desire for a beer, and did not form an instrumental belief about how to satisfy such a desire. So John appears to be a counterexample to the classic story: his action, though just like Jack’s, has no such explanation. Yet it was intentional, intelligent, and motivated. John might even hate the taste of beer; he was merely pretending to desire a beer and acting so as to satisfy that non-existent desire. Pretending to want a beer does not entail wanting a beer (maybe the opposite), so John’s action cannot be explained in the way Jack’s was. Can it be explained by any belief-desire pair? Maybe this one: he desired to give the impression that he desired a beer and he reasoned that by acting as he did he would give that impression. This cannot be quite right, however, because then he wouldn’t be surprised if a member of the audience handed him a beer—after all, he contrived to give them the impression that he wanted one. He pretended to want a beer in a setting in which such a response is contraindicated. We needn’t go further into the precise nature of John’s psychology, noting simply that he was engaged in an act of pretense in which he desired to give a certain impression. The point is that the impression he desired to give is a false impression: he had no desire for a beer. His action is explained (according to the belief-desire model) by another desire—the desire to be perceived in a certain way. Thespian action, then, is different from ordinary action, requiring a wrinkle in the explanatory apparatus. It is governed by a special sort of desire—the desire to be seen in a certain way, as an actor portraying a character. 
The belief-desire theorist thus breathes a sigh of relief that his preferred model covers the case of the actor’s actions (though there may be a lingering disquiet). The actor just has a funny sort of desire.

Theatrical action is not confined to the conventional setting of the stage. People often contrive to give the impression that they have desires they don’t have (or don’t have desires they do have) for motives both innocent and nefarious, thus inviting an explanation that gets things wrong. I may want you to think that I like you and wish to spend time with you, while all along hating your guts: in such a case my actions are explained by my wishing to give you a false impression of my true feelings. I don’t spend time with you because I desire to but because I want you to think that I desire to. This kind of point has prompted some theorists of human psychology to propound a theatrical view of human behavior in a social context. We all know Shakespeare’s line about the world being a stage and we being merely players on it, and Erving Goffman did much to entrench this view of social interactions. I won’t go into the reasons for holding this view, merely observing that if it is true then a great many of our actions are like an actor’s actions. We perform roles designed to convey a certain impression—dutiful husband, kindly professor, tough guy—without actually having the desires we project. Our actions are a front we offer to the world in order to present ourselves in a certain light, and they may not correspond to our actual desires.[1] Goffman spoke of the “theatrical self”; we may equally speak of the “theatrical agent” performing “theatrical acts”—acts of pretense, simulation, deception. Moreover, the roles we play can become internalized, so that we don’t shed them even when unobserved: your entire personality can be the result of habitual role-playing reinforced by social pressures. Maybe we rarely act according to our true desires (whatever that might mean) but rather act in such a way as to project a desired impression—even to ourselves. 
Suppose that were so; suppose indeed that there are people who never act on their actual desires in the manner of Jack but always act on theatrical desires in the manner of John. Everything they do is impression management guided by the desire to appear a certain way, never by what they really want. For example, someone might have a longing for beer but live in a social world in which that desire is frowned upon, so they always act so as to give the impression that they hate the taste of beer (sexual desires might provide a more obvious example). The point I am making is that the standard story of human action assumes that it is not thus theatrical, but this is an empirical and contested question. In fact, ordinary action is shot through with such histrionic elements—acts of theatrical pretense. This type of action needs to be included in any general theory of the nature of human action (animals are not similarly histrionic).

Once this point is acknowledged doubts arise about the standard scheme. Is it really true that the average human desires to give certain impressions to others? Maybe the professional actor does, but what about someone who acts so as to cover up perfectly acceptable desires that other people happen to disapprove of? In the bad old days, did homosexuals really want to act like heterosexuals? Did they have a yearning to present themselves as other than they were? Did they come home after a long day of acting straight and feel happy about their day, feeling that their desires had been satisfactorily met? The truth is that they judged their actions to be in their best interests all things considered—their true desires notwithstanding. It is simply a misuse of language to say that they desired to act straight. They desired to act according to their own sexual preferences, not according to how they were expected to act. Pretending may sometimes be socially necessary, but it is not always enjoyable. The case of Jack gives a quite misleading impression of the general nature of human action, as if we always act so as to satisfy our real desires; but often we have to dissimulate, suppressing what we really desire in order to manage social interactions. We do what we think will serve us best (though with understandable lapses), not according to what we really want. This gives a very different picture of human agency from what the standard model implies. It is not so much desire that prompts our actions as social necessity (often internalized). Action is all about maintaining self-image, not the free flow of appetite. The standard model forgets that we are social beings whose actions must be tailored to fit with the demands of others.[2] That can be a strain, not a release—the denial of desire, not its free expression. We often act contrary to our desires, not from them.

It is fair to say that this perspective represents human motivation as more cognitive than appetitive. We must think about how we are perceived and act accordingly, not just go with the flow of internal desire. The actor is always thinking, calculating, reflecting. Hamlet is nothing if not a thinker: not for him the spontaneous expression of desire. The gay man must be constantly vigilant, constantly monitoring his behavior, for fear of exposure (in the bad old days). Our lives are burdened with such thoughts: we can’t act without thinking about appearances most of the time. Intelligent social judgment is required of us (perhaps this is why people often need to “let their hair down”). So the correct model is not unmediated desire spilling out into action but tightly controlled judgment about what is best socially. This makes a cognitivist view of moral motivation less exceptional—more the standard case. True, moral action is not like going for a beer when you feel like one; it is more like judging what would be best for you from a social point of view. Genuine desire can be the cause of action, but very often the cause is something more cognitive and ratiocinative. Value-directed reasoning is the normal case. Self-control is the rule, given that we are actors on a stage, not giving vent to what we happen to be feeling (the professional actor must often suppress his actual feelings on the night in order to turn in a decent performance). Action is rarely desire made visible, but more desire filtered and disguised. Even Jack as he heads for the fridge is wondering what his mother would think of all his drinking, resolving to put on a good show of sobriety when next they meet; perhaps he even pictures seeing her in his mind’s eye and stays where he is on the sofa (he’s already had a few). He must play the part of a responsible drinker not someone who simply can’t resist the booze (alcoholics are notoriously fine actors). 
Even Jack is a skilled thespian, the rival of John (“No thanks, Mum, I’ve had two already”). Do we ever act on a desire and not wonder what people might think of us? Our social role is always part of our practical reasoning, even when acting alone. That was the insight of Shakespeare and Goffman: we are inescapably social beings assiduously managing our image. We are not solitary creatures free to act on whatever we feel like without considering the opinions of others. The standard belief-desire model is unrealistically utopian, picturing us as isolated beings free to express whatever desires we may have. In reality our actions are always socially mediated, even if only notionally. It’s always: how would this look?

There is another respect in which the standard model is unrealistic. When Jack goes to get his beer he performs a large number of actions: his action of getting a beer consists of a series of sub-actions, such as putting his hand on the fridge door. How far down this subdivision can go is an interesting question, but let’s stop at the relatively molar level. Now does Jack have sub-desires corresponding to these sub-actions—did he desire to put his hand on the fridge door? Well, this is an action that could have occurred outside of the sequence of actions Jack performed in obtaining his beer, so presumably there is a desire that corresponds to it. That is the standard model: for each action there is a desire-belief combination corresponding to that action. But did Jack really desire to put his hand on the fridge door? He might have been indifferent to it, or he might have been actively opposed to it—perhaps he is abnormally sensitive to cold. In general friend Jack doesn’t like making an effort at anything, including getting a beer inside him. It is true that he judges that it is necessary to achieving his goal that he should put his hand on the fridge, but it is a stretch to say that he desires to do this. In general it is not plausible to suggest that we desire to do every part of the means we employ to obtain a given end. Means are just undesired necessities. People don’t generally want to study; they do it because it is a necessary means to obtaining an end that they do want. But then it is not true that every action is explained by a corresponding desire. These sub-actions are explained by a cognitive state to the effect that this is a necessary part of the means to a desired end; they are not themselves desired. You might try saying that Jack’s action of clutching the fridge door is explained by his desire for a beer, but that doesn’t explain the specific character of the sub-action in question. 
Desire may initiate the process and explain its whole existence, but it doesn’t explain the details of the process—belief does (or something like it). But then there are actions that are not explained by a desire, since the overall desire can’t explain them. We simply don’t always desire what we intentionally do, though we may judge that the action is necessary in the circumstances. Don’t we often desire not to do what we do, though deeming it necessary given our other goals? If you ask a person digging a hole whether he wants to dig a hole, he will likely say no, but then point out that it is necessary if he is to reach the water hidden underground. He wants to get the water but not to dig the hole that exposes it. Jack may in fact think it’s a pain in the butt to go to the fridge, but how else is the poor man to get a beer? Indeed, Jack may not really want a beer at all (!) but rather thinks it is a necessary means for relieving his indigestion (my own mother used to drink Guinness in order to put on weight, not because she liked Guinness). Lots of the time we act so as to achieve distant goals without desiring to perform the actions necessary to achieve them. Shall we say that we desire to stay alive (or experience pleasure) and this explains all our actions when combined with suitable instrumental beliefs? That would be a reductio of the standard model, not a vindication of it. The attraction of the theory is that it offers to explain each action by means of a distinctive belief-desire pair, but this breaks down for complex actions. We don’t desire to do an awful lot of what we in fact do.

The picture that emerges is that judgment plays a far greater role in human action than has often been supposed. It is not that existing desires trigger actions with a little help from beliefs; rather, judgments about what is best are the main determinants of action. This is true for actions geared to social relations as well as for actions that make up sequences of actions aimed at achieving certain ends (the ends might not be desires either but value judgments). Desires can play a role but they are generally mediated and filtered, suppressed and dissembled, not given free rein (or reign). As agents we are much more cognitive creatures than appetitive ones (though these categories are themselves rather too simple); thought is the main engine of action. We think about how our actions will be perceived by others, and we think about what we must do in order to achieve our aims—neither of which has much to do with our actual desires. Action is more embodied thought than embodied desire (in humans anyway[3]).

 

Colin

[1] Even when we are acting on our actual desires, say when eating lunch, we are conscious of the impression we make on others (though they might not be present), so there is always the influence of an accompanying higher-order desire to create a good or passable impression. You desire that your desires not be expressed repulsively or awkwardly or embarrassingly. You monitor your desire-satisfying actions for their social impact. Hell, as Sartre remarked, is other people. Or, as Freud observed, all our actions are subject to criticism from an internalized parent: we are acting a part to gain parental approval. We must always please a demanding audience.

[2] Can we imagine creatures that literally never act on their desires but always shape their behavior to fit social expectations—pure politicians, as it were? They are always insincere, dissembling, repressing their real desires. So do they never eat or relieve themselves? Suppose this is done for them so that no action on their part is necessary: then it seems that their behavior could be entirely governed by other-regarding desires to the effect that they create a good impression. It’s wall-to-wall theater from morning till night, pure pretense. This is a far cry from the standard model. These creatures have desires, perhaps very much like ours, but they never ever act on them. Their actions are never explained by their desires (save the desire to create a good impression in others—which may not be a real desire anyway but simply arise from fear). It is logically possible never to do what you want but to act nonetheless.

[3] My pet African lizard, Ramon, agrees, seeing a stark contrast between his reasons for action and those of his keeper. He just bites at his lettuce whenever he feels like it without regard for what anyone else might think of him, whereas his keeper has to consider how his actions will be perceived by others. Ramon is no actor; I perforce am. Do I wish I had his freedom of action? You bet I do.


Manners and Morals

The topic of manners, good or bad, is neglected in philosophy, receiving scant attention in moral philosophy.[1] Perhaps it is felt to be trivial compared to the weighty matters of morality. But I think the topic is not without philosophical interest and I propose to explore it programmatically. First, what is meant by “manners”? The OED provides some useful hints: “manners” is defined as “polite or well-bred social behavior”. Turning to “polite” we find “respectful or considerate of other people” (with the word deriving from a Latin word meaning “polished or smooth”). Then “respect” is rendered as “due regard for the feelings and rights of others”.[2] Clearly the notion is normative and redolent of moral notions: good manners consist of correct actions in relation to other people concerning their feelings and their rights—actions of respect and consideration. For example, it is good manners not to interrupt people when they are speaking: this is something that a well-mannered person does not do, because he should not do it. It is good manners to greet people when you meet them and to signal to them when you are leaving, not to raise your voice unnecessarily, and to consider their feelings with respect to their appearance and deportment. In some cultures bowing is considered good manners, in others smiling is regarded as polite. Not to act in such ways is regarded as reprehensible, mildly or strongly. Children are therefore taught to behave politely.

Moving beyond the dictionary, I would say that acting politely involves three main elements: it is theatrical, symbolic, and self-referential. By “theatrical” I mean that good manners are a type of performance akin to acting on a stage: this is why they are often ritualized and stylized, and people can vary in their ability to act politely. You have to put on a good performance—it is no use giving a half-hearted bow or emitting an inaudible “hello”. In previous ages good manners were often quite elaborate, requiring much training and practice, especially court manners, or how to “treat a lady”. Even now people being presented to the Queen have to execute a series of theatrical maneuvers in order to conform to protocol. Professional actors can be expected to have excellent manners. Manners often require pretense, since one may not particularly like or approve of the person towards whom good manners are expected. The polite action may not be sincere; indeed good manners are supposed to counter the effects of social hostility or coolness. Good manners are a front we present to the world akin to the theatrical self, as explored by certain writers (Shakespeare, Erving Goffman). The smooth operator is above all a talented thespian. By “symbolic” I mean that the polite act is intended to signify something, namely that the agent is a trustworthy and safe person to deal with. The hearty handshake and accompanying steady eye contact are intended to symbolize a person who is respectful and considerate, not a shifty customer who can’t be trusted with the family jewels. Again, this is why good manners tend to be stylized and codified, like a kind of language of respect and consideration. The bow performs no genuine service, but it indicates a certain kind of reliable and deferential individual: it is symbolic. It needs to be decoded, and will not be if the recipient is unfamiliar with the culture in which it occurs. Good manners are signs, signals, messages, declarations. 
Third, polite behavior is self-referential in the sense that it is intended to be perceived as such: the agent wants the audience of his performance to understand that he is acting politely. I don’t just intend to act in a well-mannered way but also to be seen to be so acting. Moreover, I intend that my audience should recognize this intention (shades of Grice): I want my audience, before whom I am symbolically acting, to grasp that I am intending to treat them politely. It is not necessarily so with moral action: here it is not essential that the recipient should grasp that the action was intended morally (he may not even know that he is the beneficiary of any moral action). Thus the polite person must act conspicuously politely (it’s no use bowing behind a curtain) so as to make his intention plain. Good manners thus require the ability to project good manners—to make them evident, salient. So manners require a fairly complex set of intentions as well as theatrical skill and a grasp of symbolism. We are not born knowing these things but need to have them inculcated—hence all those etiquette textbooks of yore and costly lessons in the art of behaving in “polite society”. Miss Manners earns her keep as an instructor in the Theater of Symbolic Good Impressions. Good manners are not for ignoramuses.[3]

Moral action does not have these characteristics: it isn’t essentially theatrical, symbolic, and self-referential. When one person benefits another or keeps a promise or tells another the truth this is not a theatrical performance intended to symbolize something meritorious about the agent: it is the fulfillment of a duty, an act with real consequences, an instance of practical reason. It is not a type of play-acting calculated to create a favorable impression (this is not to say that agents never do this in the guise of acting morally). It is not merely good manners to give money to charity or to treat other people fairly. An unethical person is not one who needs to improve her manners (her manner isn’t the problem). This connects with two other features of manners that distinguish them from morality. First, good manners are not appropriate for animals and small children: we don’t have to treat our pets and babies courteously. Why? Because they don’t understand the symbolic theater of manners: good manners are lost on them. By all means treat them kindly, but there is no need to worry about hurting their feelings by social snubs or snobbish behavior (or even by leaving the house without saying goodbye). Good manners require the recognition of good manners, but moral behavior does not. We don’t need to be instructed in how to treat a dog politely at a social event. Second, good manners do not extend to ourselves: I don’t need to watch my behavior in connection with myself in case I offend myself by a lapse of politeness. Good manners are essentially other-directed: they concern social behavior, not solitary behavior. I don’t need to be taught the correct way to address myself.[4] Personal hygiene may be a courtesy issue in interaction with others, but it is not impolite of me to eschew deodorant on a lone trip. I don’t have to avoid being rude to myself. Again, morality is different: I do have duties to myself as one person among many, not merely to other people.
Prudence may be understood as self-directed morality. When I act so as to benefit my future self I am acting rationally and morally, but it would not be rational or moral to put on a good performance to myself of consideration and respect. Good manners are an effort to give a positive impression of myself to others and to make them feel at ease, but I don’t need to convince myself that I am a solid sort of chap; I don’t need to manage my perception of myself by deft indications of decency. I can interrupt myself in mid-sentence without incurring any self-censure regarding my manners.

Now I can discuss the question of the relation between manners and morality. I suspect I am not alone in being ambivalent about the claims of proper etiquette. On the one hand, it seems like a pretty suspect sort of business: all that contrivance, self-consciousness, self-advertising, insincerity, and brand promotion. And correct etiquette is certainly no substitute for sound morality. Just think of its associations with social rank, snobbery, the caste system, sexism, etc. Hasn’t the emphasis on good manners done more harm than good? Hasn’t it had a tendency to displace real morality? Who wants to go back to the days of, “Kind sir, may I have the honor of extending to you an invitation to partake of a libation?” and suchlike rigmarole? Must ladies be stood up for whenever they enter a room and be deemed incapable of opening doors? Must the rude rustic be condemned as a lesser being because of his rough country manners? The whole artifice can seem like a relic from the past that we could well do without. Isn’t a more relaxed view of manners more conducive to human happiness? And wasn’t it always more about social acceptance and self-advancement than genuine concern for others? Away with manners! Let morality suffice to govern human interactions—doing your duty, maximizing happiness, that sort of thing. No more bowing and scraping, but plenty of helping and giving. On the other hand, isn’t the core concept of good manners really an instance of sturdy morality? How could it be wrong to respect the feelings and rights of other people? Isn’t politeness a means to that end? It might be objected that it is really just putting on a show of such respect, a kind of pantomime, not actually making sure that those feelings and rights are respected and protected. How does bowing ensure that someone is not assaulted or wrongly imprisoned or slandered? But isn’t the show itself a valuable thing? Don’t we need to see that people care about us as beings with feelings and rights? 
Isn’t this a kind of social cement enabling us to function harmoniously together? Good manners are a kind of assertion of the importance of morality without themselves being morality. When you behave politely you are saying “I am a moral being” and people need to hear that. Of course, such statements can be deceptive, which is why manners can aid the villain, but they are nevertheless important ingredients in a social network. Good manners are pleasing precisely because they reassure us that morality is still in force (even if deceptively in some instances). When you stand up when a lady enters the room are you not indicating by your action that you would not sit idly by if she were in mortal danger? When you say hello to someone aren’t you letting him know that you respect him as a human being—that he is not just a piece of furniture to you? This may not be the same as actually doing something just and good, but it’s something—it’s a step in the right direction. At least you are acknowledging that you have duties towards the person in question. So manners may not be morality but they are an indicator of it—they are not an entirely separate sphere of human activity.[5] If someone shows you consideration by politely welcoming you in, they may show you consideration when things get challenging. So we shouldn’t jettison etiquette just because of its abuses and absurdities; it plays an important moral role as a symbolic recognition of the claims of morality. You may feel slighted when someone doesn’t remember your name or ignores you at a party, despite knowing that no material harm has been done to you thereby; but this isn’t irrational oversensitivity because such impoliteness indicates a person who is unlikely to treat you considerately in the event of a fire or a fight. It may not be true that “manners maketh the man”—only morality can do that—but it is true that manners indicateth the man. 
At least the solid core of manners has that function, putting aside all the silly rituals that are used to put down one sort of person in order to elevate another. Immoral etiquette does not rule out a morality-driven etiquette. Looking down on a stranger who doesn’t know our mannerly ways is no doubt deplorable—a case of really bad manners on our part—but it isn’t wrong to teach good manners as a token of good morals. It is just that manners should never become detached from morals, a kind of elaborate theatrical game designed to weed out the not “clubbable”; manners should be the servant of morals, never their rival. In other words, manners are a tool to be wielded responsibly, not a hammer with which to crush people socially. The ambivalence I mentioned is not unreasonable, but it is possible to preserve what is valuable in good manners while rejecting their worst excesses. I myself am fond of the bright and graceful hello, as well as the slightly melancholy but hopeful goodbye. I also like to see to it that my guest is seated comfortably without the sun in her eyes, and I make a point of not interrupting her verbal flow. It’s not much, I know, but it serves to convey my respect for the guest’s feelings and rights. So on balance I am an enthusiast of good manners, though I am sensitive to their pitfalls, and would never prefer them to morals.[6]

 

[1] How would the standard types of normative ethics treat manners? Presumably the utilitarian would say that manners are good or bad according as they increase or decrease total utility or something of the sort. On this account they may turn out to be immoral, since poor manners (by some standard) often lead to unjust discrimination and consequent suffering. Deontological ethics would need to include a specific set of duties listing all the forms of politeness that exist. Interestingly, no such thing is ever attempted, and standard theories don’t even include manners as belonging to our moral duties. Generally, normative ethics steers clear of the ethics of politeness (though I am sure someone must have talked about it).

[2] The word “courteous” is defined as “polite, respectful, and considerate”, and we learn that it derives from a Middle English word meaning “having manners fit for a royal court”. Today the word has lost its royal connotations but survives in humbler environs such as shops and buses. The vocabulary surrounding this universal human institution seems notably thin and lacking in descriptive power (the French word “etiquette” had to be adopted rather late in the game).

[3] There is no name for the field of study that focuses on good manners, politeness, or etiquette—nothing analogous to “ethics” or “morality”. My suggestion for such a name is “politics” but pronounced like “polite-ics”. Admittedly the written form is easily confused with another field of study with that name, but we can remind ourselves that the words “polite” and “politic” have different roots: the former comes from a Latin word meaning “polish” or “smooth”, while the latter comes from a Greek word for “city” (“polis”). In any case, we do well to have a name for this neglected field of study and I think “politics” will do nicely, properly pronounced (po-light-ics). We can then form derivatives such as “politically correct”, using the recommended pronunciation.

[4] A lot of etiquette concerns the proper rules governing polite speech—not too loud, no profanity, no mumbling, speaking only when you are spoken to, etc. But inner speech is subject to no such prohibitions—the idea of impolite inner speech sounds like a category mistake.

[5] Might it be that we are far less polite than we should be—as it has been argued that we are far less moral than we should be? Is a form of skepticism possible that questions our normal politeness assumptions? Is our perception of the norms governing polite behavior radically mistaken? The idea seems preposterous, but perhaps something can be made of it. Maybe we should be far more attentive to our guests than we are.

[6] I was tickled to discover recently that Philip Stanhope, the Fourth Earl of Chesterfield, shared my view of laughter as bad manners, especially when loud and “merry”; we both, however, thoroughly approve of smiling as an instance of good manners. See his Letters to His Son on the Art of Becoming a Man of the World and a Gentleman (1774).


Rigidity Revisited


A rigid designator is one that designates the same object in every possible world. Thus “Plato” designates Plato in every world; in no world does it designate anyone else. We must hasten to add that names are only rigid with respect to a language, i.e. under a particular assignment of meaning; no name is rigid in virtue of being the sound or mark that it is. Words are conventionally attached to meanings, so that they only contingently denote whatever it is they actually denote. Clarity might be served by saying that the meaning of a name is what is properly rigid (similarly the meaning of a description is what is properly non-rigid). The meaning or sense of a name rigidly designates its reference. It doesn’t follow that the mode of presentation associated with a name is rigid, if that concept is taken qualitatively, i.e. how the reference seems to the speaker. And that would not be a plausible view given that numerically distinct objects can appear the same way. Nor do the ideas in the speaker’s mind rigidly designate (this is one reason description theories of names run into trouble). Names have a special kind of meaning that ties them to their actual bearer across possible worlds. The standard view of this is that the meaning of the name is its bearer, so that constancy of meaning guarantees constancy of bearer, by virtue of strict identity. If the meaning of “Plato” is Plato, then of course it designates the same person in every world, since the meaning just is the reference: this is like saying that Plato is Plato in every world. The statement “The meaning of ‘Plato’ is identical to the reference of ‘Plato’” is true, and identities hold necessarily. Nothing like this can be said of definite descriptions, so they fail to be rigid. We could say that the general terms forming the description rigidly designate the properties they actually designate, but not that the description rigidly designates the object that contingently satisfies it.
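The contrast can be put schematically. The notation below is my own, introduced only for illustration: den(d, w) for the denotation of a designator d at a world w, and @ for the actual world.

```latex
% A designator d is rigid iff it denotes at every world
% what it denotes at the actual world:
\mathrm{Rigid}(d) \;\iff\; \forall w \,\big(\mathrm{den}(d, w) = \mathrm{den}(d, @)\big)

% A definite description "the F" instead denotes whatever uniquely
% satisfies F at each world, which may vary from world to world:
\mathrm{den}(\text{``the } F\text{''}, w) = \iota x\, F_{w}(x)
```

On this schema “Plato” satisfies the first condition, while “the teacher of Aristotle” satisfies only the second: it picks out Plato at the actual world but may pick out other individuals elsewhere.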

So far, so orthodox: but now I want to raise the question of what kind of necessity is in play here. We could rephrase the concept of rigidity as follows: names have the property of necessarily denoting what they actually denote. It is part of the essence of “Plato” (its meaning) that it denotes Plato; in no world does it denote anyone else. So names have essences just as objects have essences: Plato is necessarily a man and “Plato” necessarily denotes this man. The name “Plato” has other essential properties (remember we mean its meaning) such as that it is a name or is part of language or is not identical to the name “Aristotle”: meanings have essences too. It is part of the essence of the meaning of “Plato” that it designates Plato—but it is not part of the essence of the meaning of “the teacher of Aristotle” that it designates Plato, even though he did teach Aristotle. So we can say that the same notion of necessity is used to characterize rigidity as is used to characterize the essence of objects—good old metaphysical necessity. We say that a person necessarily has the parents she actually has, and we can equally say that a name necessarily refers to what it actually refers to; while a person does not necessarily attend the school she actually attends, and does not necessarily satisfy the descriptions she actually satisfies. Semantic properties can be essential (or contingent) properties too. Languages are bearers of modality just as non-linguistic reality is. Rigidity is just another species of necessity.

Now I can raise the following heterodox question: is the necessity involved in rigidity reducible to other categories of necessity? Kripke gave us four categories of necessity: identity, kind, constitution, and origin. Is the rigidity of names a special case of one of these categories? The alternative is that it is not but is a sui generis category of necessity that we need to add to our inventory of categories of metaphysical necessity (“necessities of reference”). I am going to suggest that rigidity is reducible to the necessity of constitution plus the necessity of origin: a name’s having a certain reference essentially is the upshot of a particular type of necessity of constitution plus necessity of origin. Thus we can explain these necessities of language in terms of more general types of necessity applicable to the non-linguistic world. This may sound strange, but on reflection it is quite intuitive, once we understand how general the notions of constitution and origin are. Suppose we say that the meaning of a name is constituted by its bearer; and we compare this to saying that this table is constituted by a particular piece of wood. In the latter case it is right to say that the table is essentially so constituted—in every world in which the table exists it is made of the same piece of wood as in the actual world. Similarly, in the former case, if the meaning of the name is constituted by its actual bearer, then it is so constituted in any world, since constitution generates necessities. If x is made of y, then you can’t have x without y. Of course, you can have something that is like x that is not made of y, but not that very thing—a table that looks like x, say. Likewise, you can have a meaning that resembles the meaning of “Plato” without its being constituted by Plato, but you can’t have that meaning without Plato.
Thus two speakers may be exactly alike physically and mentally and use a name “Plato” but refer to different people by that name, because the meaning is constituted by different references in the two cases. Two meanings can seem the same but not be the same because of a difference in actual constitution—just like two tables. According to the direct reference conception of names (the “Millian” view), the meaning of a name is constituted by its bearer; but then it is necessarily so constituted, by the necessity of constitution, in which case it will be rigid. We could say metaphorically that the table “rigidly designates” the piece of wood it is actually made from, just as a name literally rigidly designates its actual bearer: the necessity of constitution is at work in both cases. And don’t object that this latter must be metaphorical because only physical objects have constitutions: clearly the concept of constitution can be applied outside of the physical realm, for example to states of mind and to mathematical entities (emotions and geometric figures, for example).[1] Identity can be applied with this kind of generality (and is often invoked to express the Millian view), and there is no metaphysical reason to restrict the idea of constitution to material objects (the Constitution is not wrongly named). Thus the referential rigidity of names falls out as a consequence of the necessity of constitution: the former follows from the latter.

How does the necessity of origin enter the picture? First we must note the generality of that notion; it isn’t just parents and children but any generative historical relation. Clearly it applies to any organism and its ancestors: each organism necessarily has the ancestors it actually has, going back to the origin of life on Earth. But also historical events fall under the necessity of origin as well as human artifacts: WWI (that war) could not exist in a world in which its actual antecedents do not exist (though there could be a war similar to it but differently caused), and no one other than Leonardo could have painted the Mona Lisa (that very painting).[2] We can’t completely change history and leave the identity of the objects and events intact. In the case at hand, a name has a certain history originating in the initial baptism of a particular object (say, baby Plato): then a chain of linguistic events connects this origin to later uses of the name. Let us then say that the name “Plato” has origin O: accordingly, it (that name) could not exist without O. The name (its meaning) owes its identity to its actual origin: just as Plato has to come from his actual parents, so the name “Plato” (with its actual meaning) has to come from baby Plato in an act of baptism. If we substitute another baby in a possible world, we get a different name (a different meaning), despite any resemblance of baby—just as we get a different child if we substitute different parents, despite resemblance of progeny. If so, rigidity follows from origin: the name could not refer to anyone not at the origin of the causal chain that exists in the actual world, i.e. the one culminating in baby Plato.  If the origin of “Plato” is baby Plato, then it could not have had any other origin, by the necessity of origin; but then the name must designate the same person in all worlds. 
That name requires that origin, so there is no world in which that name exists but is anchored to a different origin (as it might be, baby Aristotle). Nothing like this is true of descriptions, of course, since they are not individuated by origin at all: they don’t refer in virtue of a causal chain leading back to an object’s baptism. Accordingly, descriptions are free to be non-rigid, as flexible as the occasion demands. But names are strictly tied to their historical antecedents in babies, baptisms, and the like. If so, rigidity follows from the necessity of origin, and is a special case of it.
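The argument from origin can be compressed into a three-step derivation. Again the shorthand is mine: n for the name (with its actual meaning), O for its actual baptismal origin, target(O) for the object baptized at O, and E(n) for the name’s existing at a world.

```latex
% 1. Necessity of origin: the name exists at a world only with
%    its actual origin:
\Box\big(E(n) \rightarrow \mathrm{Origin}(n, O)\big)

% 2. The origin fixes the referent: a name denotes whatever was
%    baptized at its origin:
\Box\big(\mathrm{Origin}(n, O) \rightarrow \mathrm{den}(n) = \mathrm{target}(O)\big)

% 3. Hence, wherever the name exists, it denotes the same object:
\Box\big(E(n) \rightarrow \mathrm{den}(n) = \mathrm{target}(O)\big)
```

Step 3 is just rigidity, obtained from the necessity of origin (step 1) together with the causal-historical picture of how reference is fixed (step 2).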

There is a question, then, about which of these two necessities is basic in the modal semantics of names. We need not take a firm stand, but I am inclined to think that origin is basic: it is because names are introduced in the way they are that their meaning is constituted in the way it is. Having that origin determines what constitutes the name’s meaning: there is nothing else for the meaning to be given that names originate as they do. Tracing back to a particular object is what fixes their meaning (not a cluster of associated descriptions), and hence we say that the meaning is constituted by the object. So origin is primary, though both are equally correct as modal claims. The important point is that semantic rigidity is not some new type of necessity but is a special case of necessities already recognized. We don’t have Kripke’s four categories and the necessity of reference as an additional primitive category; the latter is an instance of the former. It is what the necessity of constitution and origin look like when manifested in language. This is good because it is not clear what else referential necessity might be, given that we seem to have covered the bases with the four categories.[3] Rigidity is a type of essence found in language, but what other types of essence are there other than the big four? They seem to exhaust the field, in which case linguistic essence needs to emerge as a form of one or more of those. Clearly the concepts of constitution and origin apply to language, and are so employed quite spontaneously by theorists, so it is in the cards that we can explain referential necessity by appeal to these concepts. Referential necessity thus arises from a combination of the necessity of constitution and the necessity of origin.[4]

 

Colin McGinn

[1] It can also be applied to phrases and sentences: a string of words is constituted by the individual words that compose it. As a consequence, we can say things like, “The sentence ‘snow is white’ is necessarily constituted by the words ‘snow’, ‘is’, and ‘white’”. The same applies to thoughts and their constitutive concepts.

[2] We can also define the notion of rigid portrayal: a painting rigidly portrays a certain individual if it portrays the same individual in every possible world. The claim that paintings are sometimes rigid portrayers is plausible: the painting must portray the same individual in any world in which it (that painting) exists—no Mona Lisa no Mona Lisa painting (her twin will not do). This is different from a painting just happening to fit a certain individual. In this kind of case the origin theory of the necessity seems very plausible; so the Mona Lisa painting needs both Leonardo and Mona herself in order to exist in any world.

[3] It might be said that there are strictly five categories: in addition to constitution by a particular object we need to recognize the type of the constituting object. Thus we can say that this table is necessarily made of wood not just this piece of wood (as is the piece itself). The analogue for names would be the fact that the meaning of the name is necessarily of the human being type: the name “Plato” must refer to a human being and not (say) to a goat, in addition to necessarily referring to Plato (who is himself necessarily a human being). But this isn’t to accept that there are irreducibly semantic types of de re necessity. The name itself might be necessarily composed of certain sounds, which are necessarily sounds. We just have iterations of the same metaphysical necessities we had before we got to the de re necessities of language.

[4] If we choose to say that predicates rigidly designate their corresponding properties, we can give the same type of explanation of this semantic fact, namely that the properties denoted form the constitution and origin of (the meaning of) their denoting predicates. The property of being red constitutes the meaning of “red” and is the origin of that predicate (in its actual meaning): hence “red” rigidly designates the property red. The meaning of “red” could not be constituted by any other property or originate in any other property.
