An Obvious Theory of Truth

Truisms are welcome in the theory of truth. Here is one: the sentence “London is rainy” is true if and only if the entity referred to by “London” has the property expressed by “rainy”. Generalizing, a sentence (or proposition) is true just in case the reference of the subject expression instantiates the property expressed by the predicate expression. This formula combines two concepts: a semantic concept of reference (denotation, expression) and the concept of instantiation understood as a non-semantic relation between objects and properties. Truth results when the entities denoted (objects and properties) stand in the instantiation relation. So we can say that truth consists of a combination of a semantic relation and a non-semantic relation: it is the “logical product” of these two relations. The analysis of truth is given by a “vertical” relation to the world and a “horizontal” relation between worldly entities. Thus “true” expresses a complex property comprising representation and instantiation—that is what the concept amounts to. Both are necessary for truth and together they are sufficient. Moreover, the formula is the most banal of truisms: of course a sentence is true if the things it talks about have the properties the sentence attributes to them. The sentence “snow is white” is true just if the stuff it refers to (snow) has the property the sentence ascribes to it (being white). How could this fail to be correct?[1]
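The truism admits a schematic statement (the notation below is my own shorthand, not a standard formalism):

```latex
% Schematic form of the truism: a subject-predicate sentence s is true
% iff the object referred to by its subject term instantiates the
% property expressed by its predicate term.
\mathrm{True}(s) \iff \mathrm{Inst}\bigl(\mathrm{ref}(\mathrm{subj}(s)),\ \mathrm{expr}(\mathrm{pred}(s))\bigr)
```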

Some minor wrinkles can be quickly ironed out. Is the theory (let’s call it that) ontologically committed to properties in some objectionable platonic sense? I stated it that way, but this is not integral to the theory (though metaphysically unobjectionable, in my view): we could state it in terms of concepts or even just predicates—as in the notion of an object falling into the extension of a predicate. Nor is the theory committed to sentences as truth-bearers: we can run it on propositions, statements, beliefs, what have you, so long as we have a relation like denotation to work with.  It might be thought that the theory is restricted to subject-predicate sentences and won’t extend to quantified sentences, but this limitation is easily remedied by adding that the objects referred to or quantified over should instantiate whatever is predicated of them. Whatever objects are semantically relevant are the ones that need to do the instantiating if the sentence is to be true. What about moral truths? Well, if there are such truths the theory commits us to the idea that moral sentences can be true only if there are moral properties (or concepts or predicates) for objects to instantiate—but this will presumably be so if there are moral truths to start with. What we don’t get are nonsensical truths, because there will be no objects and properties to stand in the instantiation relation (e.g. borogroves and mimsiness). We just have the commonsense thought that whether a sentence is true depends on what objects have which properties. If you say that an object has a property and it does, your statement is true; but if you say that an object has a property and it doesn’t, your statement is false. Clear?

What is surprising is that this theory, if we can dignify it with that word, has not been mooted (at least to my knowledge), since it seems blindingly obvious.[2] Some theories in its vicinity have been mooted, but not this theory exactly. It certainly carries the whiff of the correspondence theory, but it invokes no relation between whole propositions and facts, speaking instead of objects and properties and associated sentence-parts. The world comes into the picture, but not by way of a correspondence relation between facts and propositions. Nor is it a redundancy theory, since it defines truth as a complex property constituted by substantive relations; still less is the theory deflationary. It is also not the same as Tarski’s theory: the schema employed does not repeat on the right the sentence mentioned on the left (so it doesn’t satisfy Convention T) but rather embeds semantic vocabulary and the notion of instantiation. It is possible to universally quantify an instance of the schema and produce a well-formed result, whereas that is not possible for Tarski’s schema. We can say, “For all propositions x, x is true if and only if the objects referred to in x instantiate the properties expressed in x”, but we can’t say, “For all propositions x, x is true if and only if x”, because that is not well-formed (“x” being an individual variable not a sentence letter). Also, the definition proposed by the obvious theory is explicit, not inductive, and applies to any sentence in any language (we are not defining “true-in-L”).[3] The theory is closer to a formulation championed by P.F. Strawson: a statement is true if and only if “things are as they are thereby stated to be”. The spirit looks the same, but what are these “things”, and where is the reference to properties and their instantiation? It sounds a lot like saying, “if and only if reality is as stated”: but that is not the same as the formulation in terms of objects and properties. 
Perhaps the obvious theory could be read as a more explicit version of this type of theory; and indeed it looks very much like what people were driving at all along. For surely we want to say that the truth of a statement turns on the instantiation of properties by objects combined with suitable semantic relations to those objects and properties. To say something true you have to refer to an object and then assign a property to it that it actually has—obviously.

Consider the locution “true of”: what is its analysis? Obviously this: a predicate is true of an object if and only if the object has the property expressed by the predicate. This is the core of the obvious theory: truth itself is defined by reference to “true of” (as Tarski defines truth in terms of “satisfies”). We might say that “true of” is the basic notion in the theory of truth. We reach truth of propositions by plugging in a singular term: from “F is true of x” we derive “F is true of a” where “a” is a closed singular term (say a proper name). Thus the sentence “Fa” is true just if the predicate “F” is true of the object referred to by “a”. The other theories of truth remain neutral on the analysis of “true of”, which is a limitation in any attempt to define the concept of truth generally; but the obvious theory puts it at the center. To say something true you have to apply a predicate to what it is true of. And that is a matter of picking a predicate that expresses a property that applies to the object.
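The derivation from “true of” to truth can be made concrete in a toy model (a sketch of my own; the dictionaries and function names are invented for illustration and are no part of the theory itself):

```python
# Toy model of the obvious theory: truth for an atomic sentence "Fa"
# is defined via "true of". A predicate F is true of an object x iff
# x instantiates the property F expresses (modeled here, crudely, as
# membership in an extension). All names below are illustrative.

ref = {"a": "London"}                    # singular terms -> objects
ext = {"rainy": {"London", "Glasgow"}}   # predicates -> extensions

def true_of(pred, obj):
    """A predicate is true of an object iff the object has the
    property the predicate expresses."""
    return obj in ext[pred]

def is_true(pred, term):
    """The sentence 'Fa' is true iff 'F' is true of the referent of 'a'."""
    return true_of(pred, ref[term])

print(is_true("rainy", "a"))  # True: London instantiates being rainy
```

The point of the sketch is only structural: truth is reached by plugging a semantic relation (reference) into the prior notion of a predicate's being true of an object.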

The OED defines “true” as “in accordance with fact or reality”. Fair enough, but what is “in accordance with” and what is “fact or reality”? The correspondence theory suggests some sort of isomorphism between propositions and complexes called facts. The obvious theory says that truth is a matter of identified objects instantiating assigned properties; so accordance is simply objects having the properties they are said to have. A statement is in accordance with reality just on the condition that it assigns properties to objects as they are actually distributed, i.e. as they are. Fact and reality are just objects having properties. This is a substantive definition of truth meeting standard conditions of adequacy: it defines truth in terms of notions severally necessary and jointly sufficient; it is non-circular; and it permits a universally quantified formula that captures our intuitions about truth. To repeat it in a slightly different language, a proposition is true if and only if its subject matter (objects and properties) exemplifies suitable instantiation relations. Truth is a matter of objects instantiating properties in the way alleged by a proposition. To understand the concept of truth, then, we need to grasp this complex of concepts: reference, object and property, instantiation. It is not simply a device of semantic ascent or essentially redundant or logically simple or merely a means of abbreviation. It is a thick analytically deep concept with a definite nature. Yet its nature is entirely (indeed painfully) obvious—not in the least bit surprising. The truth about truth is a true truism.[4]

 

[1] The same form of analysis can be applied to the concept of justification, which I take to be confirmation of the theory: a proposition is justified if and only if there are good reasons to believe that the objects referred to instantiate the property expressed. Likewise, we can say that it is a fact that p if and only if a certain object instantiates a given property, e.g. London instantiates being rainy (notice that no semantic relation is involved here).

[2] Why this should be is not clear to me: perhaps it is thought too obvious, or perhaps less obvious theories are confounded with it (correspondence theories).

[3] Devotees of Tarski’s theory will want to know how to provide recursion clauses for logical connectives. This is easily done: for example, “p and q” is true if and only if the objects and properties referred to in “p” stand in the instantiation relation and the objects and properties referred to in “q” stand in the instantiation relation; and similarly for “or” and “not”.

[4] Why is the truth about truth a truism while the truth about (say) knowledge is not? Because there is nothing more to the truth of propositions than objects instantiating properties combined with the fact that propositions stand for things. There is nothing hidden here, nothing to be discovered. Other theories purport to say something interesting, but the obvious theory is content with mere accuracy.


Quantifier Concepts

Would it be quixotic to suppose that quantifiers hold the secret to human success?[1] Could the student of quantification theory be studying the ultimate differentia that separates humans from the rest of nature? That would be a delightful result for the logically minded; and I think there is actually a good deal to be said for it. For it is plausible to suppose that what other animals lack, cognitively speaking, and we splendidly possess, is the ability to engage in quantifier-driven reasoning. We grasp what Quine called the “apparatus of quantification” but they do not—though they no doubt grasp much else. That apparatus, to put it briefly, involves the existential and universal quantifiers, variable binding, embedding, scope, domain, and a distinctive syntactic form—not to mention the non-standard quantifiers “most”, “a few”, “many”, and others. Suppose all this is represented in the human language of thought. Then we can surmise that other animals, though cognitively gifted in many ways, lack an internalization of the apparatus of quantification—though they may well entertain singular and general concepts, truth functions, psychological concepts, etc. At any rate, there are possible beings that have mastery of a conceptual apparatus just like ours except that quantification is not included. The question is what capacities they would thereby lack that we possess, and which confer signal advantages on us. What do quantifiers do for you? What mental achievements do they make possible?

Quantifiers are obviously deeply embedded in our thinking, so it is not easy to tease out their contribution, but certain areas of human thought clearly depend on them. First, science: where would science be without the universal quantifier? A law is precisely a generalization about all things of a certain kind (we can include ceteris paribus laws). If you don’t grasp the concept all, you don’t know what a law is. Similarly, you have to grasp that if some things lack a certain property then it is not a law that all things have that property (I omit some obvious qualifications). Quantificational reasoning is essential to scientific thought (animals show no aptitude for science): science consists of universally quantified propositions. Second, mathematics: this too is shot through with quantificational structure (it was mathematics that caused Frege to invent modern quantification theory). The most basic axiom of arithmetic is universally quantified: for every number, there exists a successor number. Peano’s axioms are quantificational in form. The embedding of quantifiers is rife in mathematics. Geometry is much the same: we have theorems about all triangles, circles, etc. Moreover, according to some views, arithmetic reduces to quantification theory (plus set theory, which is itself formulated by means of quantifiers). Standard first-order predicate logic is clearly quantificational, but so is second-order logic (which greatly increases expressive power). Propositional logic discerns no quantifiers in its formulas, but it is tacitly quantificational itself, since sentence letters are interpreted generally: for any p and q… We understand it to express universal propositions. That is what logical necessity consists in. Modal logic involves quantification over possible worlds (necessity and universality are close cousins). Inductive logic involves moving from singular premises to general conclusions and would be impossible in the absence of the concept everything.
Falsification depends on there being some counter-instance to a generalization. All this would be impossible for beings without a mental representation of the apparatus of quantification. When we reason we move from the particular to the general and the general to the particular, and this requires grasping how all and some work; not to grasp these principles would be a severe cognitive deficit (“quantifier derangement syndrome”). If Russell is right, definite descriptions are not possible without quantification (do animals grasp definite descriptions?); they are built from the quantifiers “all” and “some”. Many pronouns function as bound variables. Lastly, cosmology requires the use of the ultimate universal quantifier: for it concerns the nature of everything (ditto metaphysics). Here we ascend from specific domains to the entire domain of the universe. It is remarkable that we have such an all-encompassing concept—we can think about everything there is. Can animals ever think about the whole enchilada? I doubt it: they think specific and particular, local and limited. Maybe their thought is largely demonstrative, or maybe it employs a medium alien to human thought. In any case, our cognitive resources include the extensive and intricate apparatus of quantification, which greatly expands our powers of mental representation and hence our understanding of the world.[2] In turn, this enhanced understanding feeds into our actions and mode of life. We are quantifying creatures (other creatures could be rational beings but not quantifying beings).

Let me note two further features of quantifier concepts that set them apart. We know from the work of logicians that they are not semantically singular terms but a sui generis type of expression; they occupy their own category in mental grammar. It is sometimes said that they are second-order concepts, i.e., concepts of concepts, and this sets them apart from their first-order brethren. To grasp them, you have to be able to ascend a level and predicate them of a concept: this requires a cognitive leap, a new mode of mental representation. Creatures with only first-order concepts are not guaranteed to be capable of achieving this new level, however hard they think. Presumably, it occurred at some point in human cognitive evolution, perhaps triggered by a specific mutation affecting brain circuitry, and not shared by other species. Perhaps we have a gene for quantification! Some piece of brain rewiring caused us to be able to grasp second-order concepts like all and some, where there was no such grasp before. Then we were off to the races, with science, logic, mathematics, and cosmology on the horizon. A new cognitive trick catapulted us to the next intellectual level. Imagine if you lacked these concepts and were stuck at the level of the specific and particular: then a super-scientist rewires your brain to give you a grasp of quantification. Wouldn’t that be a stunning intellectual breakthrough, opening up vast avenues of new understanding and reasoning power? The child picks it all up automatically along with the other remarkable resources of human language, but that doesn’t mean it isn’t a signal achievement—imagine losing it one day! Quantification is a classy mental act, belonging only to the intellectual elite, by no means proletarian.
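The claim that quantifiers are second-order concepts, predicated of concepts rather than of objects, can be pictured as follows (a toy sketch of my own; the domain and function names are invented for illustration):

```python
# Quantifiers modeled as second-order (higher-order) operations: each
# takes a predicate -- a concept -- and yields a truth value, rather
# than applying directly to objects. The finite domain is illustrative.

domain = [1, 2, 3, 4, 5]

def every(pred):
    """The universal quantifier: pred holds of all objects in the domain."""
    return all(pred(x) for x in domain)

def some(pred):
    """The existential quantifier: pred holds of at least one object."""
    return any(pred(x) for x in domain)

def most(pred):
    """A non-standard quantifier: pred holds of more than half the
    domain. Not reducible to 'all' and 'some' in first-order terms."""
    return sum(1 for x in domain if pred(x)) * 2 > len(domain)

is_even = lambda x: x % 2 == 0
print(every(is_even))  # False
print(some(is_even))   # True
print(most(is_even))   # False: only 2 of 5
```

A creature equipped only with first-order concepts would have the predicates but nothing corresponding to `every`, `some`, or `most`: the level-ascending operations themselves.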

Secondly, quantifier concepts are unique as to content: no other concept is such a bad candidate for empiricist treatment.[3] How could the concepts all and some be derived by a copying operation from sensory stimuli? They are not concepts of a sensory quality, or of any mental operation. I am tempted to call them abstract, but that is just a vague way to register their distinctness from other types of concept. I would guess they are innate—for how could they be picked up from observation of the environment? They enter our thought at an early age and shape it pervasively, but their origins are obscure. They are part of the universal human lexicon, but they name nothing and describe nothing. Form the thought “Everything changes” and ask yourself what is going on in your mind: you will find no discernible constituent corresponding to the quantifier—no image or feeling or disposition. There is nothing…concrete here. Yet you mentally took in all of reality! How is that possible? What do you have that your cat doesn’t have? I mean: how is the concept of universality mentally represented? Can we have a description theory of it? How about a causal theory? Neither seems remotely feasible. We are used to the words (and their corresponding logical symbols) but what is the content exactly? Where is the cognitive science of quantification? What we have here is a complex and intricate biological adaptation of enormous utility but quite opaque in its mode of operation. It took logicians thousands of years to identify it and describe its logical character, but its psychology is not even in its infancy (or its neuroscience). The point I am urging is that it has some claim to distinguish us from other thinking beings on our planet. 
Let us grant that bees, whales, and dolphins have communication systems, along with associated cognitive structures—but it is a further claim to maintain that they understand quantification as humans do.[4] All humans do understand it (short of pathology), but there is no evidence that other animals can engage in quantificational reasoning (just consider the difficulties of embedded quantifiers).

It is not implausible to suppose that humans go through an ontogenesis in which “all” begins locally and then gradually widens to take in more and more of reality. Thus the child initially applies “all” to all the marbles on the table or all the apples he can see, later expanding the domain to include all the marbles or apples on earth. But that isn’t enough to yield the adult concept: the child must include all past and future marbles and apples, as well as any found elsewhere in space. Then there are all the possible marbles and apples. Finally, we reach everything there is. The original concept (innately present, we can suppose) already contained this potential, but it undergoes a process of maturation that ends with the cosmic all. This would be in conformity with standard views of linguistic and conceptual development. But the process has a special interest because the concept is so all-encompassing in its nature: its enormous reach signifies a kind of supremacy among concepts—it is the king of all concepts, as it were. Every other concept is subordinate to it, literally. Doubtless, it is a concept that has fueled the acts of many a despot or madman, or metaphysician or cosmologist (a “theory of everything”). God is described as all-powerful, all-knowing, and all-virtuous: the recipient of every estimable universal quantification. So much majesty revolves around this concept—its place in human thought is unrivaled.[5] Once the child has fully absorbed this concept (or it has fully matured within her) she becomes a being of a different cognitive order from the run of terrestrial animals, including her former self. Morality is stamped with it too: duty is what everyone ought always to do in any circumstances (remember Kant’s categorical imperative, in which universalization is paramount). We would not be the cognitive (and emotional) beings we are without this capacious and ubiquitous concept. 
When Aristotle enunciated his famous syllogism beginning “All men are mortal” he was drawing attention to the mighty power of that little word “all”: once you know that all F’s are G you know something of high significance from which many interesting things follow. In it may reside our capacity for the type of thought that defines human nature.[6]


Colin McGinn

[1] I am slightly misusing the word “quixotic” here, but the alliteration was irresistible.

[2] George Eliot reminds us of a downside to this mental advantage over other animals: “But this power of generalizing which gives men so much the superiority in mistake over the dumb animals…” (Middlemarch, 592) Our ability to generalize lays us open to errors of thought unknown to animals lacking this capacity; and it must be said that quantifiers can cause us no end of trouble—especially the standing temptation to abuse “all” in the presence of “some” (quantificational malfeasance).

[3] I know this is saying a lot given empiricism’s poor track record, but a bit of overstatement may be forgiven in the light of the fact that one never hears much about quantifier concepts from empiricists (I don’t recall Hume discussing them at all). They are expected to take care of themselves.

[4] Given other differences between human and animal thought, it might be more apt to compare humans to other hominids now extinct. What if Neanderthals matched humans cognitively except where quantification is concerned? That could be the reason for our relative success.

[5] What is the connection between death and the universal quantifier? Simply this: when you die it is all over. Everything about you has gone. You are now nothing. The quantifiers say it all. We understand what death is because we can use quantifiers this way.

[6] People often discuss this question as if it is an all-or-nothing matter—either we share thought with animals or we don’t. But a more nuanced discussion can focus on whether there are any areas of human thought inaccessible to other thinking beings. Thought may not be homogeneous in its nature and origins (similarly for language). Quantification may have been added quite late in the game.


The Concept of Meaning

The concept of meaning is recalcitrant to analysis, elucidation, or theory. There is almost no consensus about what constitutes meaning. We possess the concept, but we don’t know what to say about it—it is opaque to us. Thus we are treated to a wide variety of opposed suggestions: mental images, dispositions to behavior, truth conditions, verification conditions, criteria, possible worlds, functions, intentions, use, modes of presentation, mental models, nothing at all. Compare the concept of knowledge: there is wide agreement that knowledge is a type of true belief. Granted, there are differences of opinion, especially when it comes to filling out the idea of true belief, but the involvement of truth and belief is not contested. Our concept of knowledge makes this clear to us; it is not a complete cipher. But the concept of meaning is silent about itself, or speaks with many voices. We don’t know what it entails. We search for a central concept with which to understand it, say the concept of truth, but soon encounter difficulties with sentences that are not truth-bearers (imperatives, questions, exclamations, performatives, ejaculations), among other difficulties. In the case of knowledge the concept of truth is clearly central, but in the case of meaning it is only disputably so—hardly a necessary condition of meaning.

Why does the concept of meaning contrast so strikingly with the concept of knowledge? They are both concepts we possess, yet one is relatively transparent and the other maddeningly opaque. Indeed, it is hard to think of a concept of philosophical interest that is quite as opaque as the concept of meaning—quite as fundamentally contentious. Take belief, intention, necessity, causation, truth, free will, and consciousness: at least there is some consensus here—all is not darkness. People know what they are talking about, more or less. Why then is meaning so obscure, elusive, and slippery? Take the locution “x knows that s means that p”: we know what “know” means in this type of sentence, but when it comes to “means” we are brought up short. Thus we have a meta-puzzle about the puzzle of meaning: the puzzle of why it is so puzzling. Why is the puzzle a puzzle? It ought not to be, given that the concept is very familiar to us, but apparently it is. What is the concept of meaning such that it is puzzling in the way it is? Phonetics and syntax are not similarly puzzling, so why is semantics so up in the air? Why is the theory of meaning such a quagmire? Wittgenstein veered sharply from a truth conditions theory in the Tractatus to a use “theory” in the Investigations: how did the concept of meaning make that possible? How could it give rise to such contradictory intimations?

The question has not been asked, so far as I know, but possible answers suggest themselves. It might be said that the word “meaning” is ambiguous: the reason no single central concept carries the day is that the word signifies quite different things. Likewise, there is no satisfactory theory of banks, if we insist on supposing that “bank” is univocal: x is a bank if and only if x is a river with money floating in it! When we say “Snow is white” has meaning in the same sense that “Shut the door!” has meaning we speak erroneously; rather, these two types of sentence mean in different senses. I don’t know of anyone who has ever propounded such an ambiguity thesis, but it is surely implausible in the extreme, for reasons too obvious to be worth going into. More plausible is the idea that “meaning” is a family resemblance term, so that the search for a single definition of meaning is misguided. Some meanings are constituted by truth conditions and some by verification conditions, while some have their meaning by dint of use, or the association of mental images. Thus the different theories that have been proposed are correct for some varieties of meaning but not for all; we have here the familiar philosophical vice of overgeneralization. Again, I don’t know that anyone has ever held this view—certainly not Wittgenstein in the form just described (he held that all meaning is use). Or again, it might be suggested that “meaning” is an empty term and the concept of meaning a pseudo-concept: that’s why we can’t come up with an adequate theory of it. There is nothing for the theory of meaning to be a theory of, so the wheels are turning in a vacuum. How can there be agreement about the content of a concept that has no determinate content? This account of the puzzle is also hard to swallow: words and sentences certainly seem to mean something, even if we find it hard to say what this consists in. 
But there does seem to be something in the idea that the concept is exceptional in some way—that it is a concept of a certain type—and that this type precludes it from the usual kind of analytic treatment. It is a concept that belongs in a different category from the concept of knowledge and similar concepts. We are mistaking the category and then cudgeling our brains over how it should be analyzed. Let’s pursue this hint.

The dictionary is always a useful point of departure. The OED gives this definition for “meaning”: “what is meant by a word, text, concept, or action”. The broad scope of the word “meaning” is registered here, though the definition looks disappointingly circular, what with the word “meant” occurring in it. However, the definition does offer the suggestion that meaning should be understood in the context of what is meant by agents—those who utter words, write texts, possess concepts, and perform actions. For x to have meaning is for x to be meant in a certain way by agents. Presumably, this relation is a type of action or process or event; so what has meaning is what is usable to mean something in such acts, etc. But what is it for an agent to mean something? The dictionary doesn’t say, but we can: it is to employ a symbol in order to communicate—to get something across, to convey something to somebody. This is a highly neutral description with nothing specific contained in it—nothing about truth or verification or images or dispositions or criteria or use. Meaning is simply what is meant when people communicate. This could include gestures and facial expressions (“She gave him a meaning look”) as well as elements of grammar or signs of arithmetic. Notice that there is no requirement for all the things that can be meant to resemble each other, either by sharing a common property or by dint of family resemblance. There need not be anything in virtue of which the class of things that can be meant mean what they do; the class is united merely by its relation to agents. So, in particular, the class is not united by the property of having truth conditions or verification conditions, but merely by being usable in a certain way, i.e. to get something across.[1] This is not to say that meaning is use in the manner of Wittgenstein; it is just to say that meaning is a matter of getting things across by employing some kind of symbolic entity or other. 
This isn’t a theory of meaning, just an indication of its scope and context. The question of interest here is whether this definition of “meaning” resolves our puzzle.

Consider the concept of furniture. The OED defines “furniture” as “the movable articles that are used to make a room or building suitable for living or working in, such as tables, chairs, or desks”. Notice that this is not a family resemblance concept: there is no suggestion that all items of furniture have any such resemblance. Rather, the class of items is determined by the use the items are put to, supplemented with some examples. We could call it a functional concept, except that would align it with the concept of biological function. I prefer to call it a “collectivity concept” because it gathers together a widely heterogeneous collection of items according to how they are used (“suitable for living”). It would obviously be a mistake to try to define this class by fastening onto certain of its members, as if the shape of chairs (say) could define it. This would give rise to pointless controversies as other theorists select a different subset of furniture items (beds, say, instead of chairs). It is not that furniture has a hidden essence not apparent on the surface to be discovered by empirical methods. Furniture has no nature beyond what the dictionary definition specifies. It is not like water or heat—or even knowledge. Nor is the concept elusive or obscure, though it may be vague and interest-relative. Well, the concept of meaning is like that—a collectivity concept held together by what agents mean (strive to communicate). There is no property with a submerged nature that we might investigate and articulate. Items with meaning might well have properties with such natures—such as truth conditions or verification conditions—but these properties are not what meaning in general is. There are no identity statements of the form “Meaning is X”, where X might be truth conditions or verification conditions (or use, etc.). There are many properties in virtue of which an item can be meaningful, but none of these is what meaning is. 
There are many properties in virtue of which an item can be an instance of furniture, but none of these is what being furniture is (e.g. being shaped to fit the human body). I will put this point by saying that the concept of meaning is a collectivity concept not a property concept, acknowledging the inadequacy of these portmanteau terms. The intuitive idea is that meaning is not a single attribute common to all meaningful items but what items come to have when agents use them to get things across.

The point of this proposal is to explain why the theory of meaning takes the form that it does. We are taking a concept of one type and assuming that it is of another type—a category error. We search for a single central concept because we assume that meaning is a property that meaningful items have—like the property of knowledge. It is true that meaning involves various properties, such as truth conditions, but it involves many properties, so that it cannot be united by any one of them. So there cannot really be a theory of meaning, i.e. a specification of what all meaningful items have in common (including use). The concept of meaning is not the concept of any property or trait of the sort proposed by putative theories of meaning, just as the concept of furniture is not the concept of any property or trait of items of furniture such as comfort or human shape or intentional design or location in the home. We tend to think the concept belongs to the same category as the concept of knowledge or belief or intention, which do have a uniform nature, but in fact, it is like the concept of furniture or tool, whose principle of unity is quite different. In effect, we are reifying the concept—taking it to connote something over and above using a symbol to get something across. Asking the question, “What is meaning?” or “What does meaning consist in?” invites the kind of category error I am diagnosing. Better to ask, “In virtue of what does this act of meaning work?” Then we can specify what property is being exploited in the act of meaning, such as truth conditions, verification conditions, felicity conditions, intensions, extensions, images, conventional use, etc. It is not that the word “meaning” is ambiguous between these various properties, any more than “furniture” is ambiguous between chairs and beds; rather, these words connote collections of things united by patterns of human employment, namely in living and communicating. 
There cannot therefore be a general theory of meaning of the kind that people have sought. To be specific, the idea that meaning is truth conditions is a category mistake. There can be theories of truth conditions (like Tarski’s theory of truth) but there cannot be theories of meaning, not because they are false and some other type of theory is true, but because it is misguided to seek theories of meaning to begin with.[2] So it isn’t that the concept of meaning is maddeningly opaque but rather that we misconstrue what kind of concept it is. Semantics isn’t so controversial because the concept of meaning has a content that we can’t easily access; rather, it’s because the concept has no such content, being what I am calling a collectivity concept. This resolves the puzzle.

 

[1] Much the same point can be made about the meaning of individual words: names have meaning and so do predicates, but there doesn’t have to be anything else they have in common that makes them usable in acts of communication, such as a denotation. There is no theory of meaning common to names and predicates, only the fact that both compose sentences that can be used to communicate, i.e. be meant in a certain way. It would be a mistake to cudgel our brains in the search for the common semantic property possessed by different categories of expression (compare chair legs and chair seats).

[2] In a sense the position defended here is more Wittgensteinian than Wittgenstein. He took language and meaning to be family resemblance concepts, assuming genuine resemblance, and opposed this to a common essence view. I am suggesting a view on which there is no resemblance at all between different meaningful items, but only a similarity of employment. We can thus allow that there is nothing remotely alike about facial expressions and sentences—not sound, not grammar, not truth conditions—and yet both count as meaningful items. All we can say to unite them is that both can be used to get things across. Contrast members of a family and people who happen to do the same job: the former look alike, but not the latter. So meaning is even more heterogeneous and unsystematic than Wittgenstein supposed.


Skepticism and Possible Worlds

Picture all the possible worlds laid out in logical space in the style of David Lewis.[1] They all objectively exist just like the actual world—real and concrete entities. There are people in some of them who know about the world they inhabit, as we take ourselves to know about the actual world. Now consider skepticism: the contention that we don’t know much, if anything, about the world we live in. That is, we don’t know much about the actual world—whether it contains material objects or other minds or a future like the past. We can’t be certain what objects, facts, and events constitute the actual world. If the actual world is the totality of facts, we don’t know what this totality is—it might be quite otherwise than what we normally suppose. Maybe it is a totality of facts about a solitary brain in a vat, or a disembodied mind being deceived by an evil demon. The contents and nature of the actual world are subject to skeptical doubt.

But is the same thing true of merely possible worlds? Is it possible to be skeptical about our knowledge of them? Suppose I set out to consider a possible world in which everything is just like the actual world except that it contains one less penguin. Is it possible for a skeptic to question whether I really know that the world I am considering contains one less penguin than the actual world? Can the skeptic say that I have no right to make such a claim because I might be wrong about the contents of that world? Obviously not: I know with certainty that the world in question is as I say it is. It is not that I might be in a situation analogous to a brain in a vat with respect to that possible world. I can’t say, “For all I know, I might be considering a world in which there are 10 more penguins than the actual world”. I can’t be sure how many penguins there are in the actual world, but I can be sure about this question with respect to a possible world. Here I am quite certain of the contents of the world in question. Yet, by hypothesis, possible worlds are existing entities distributed in logical space, just like the actual world. So there is an epistemological asymmetry between the equally real actual world and all the possible worlds: the former is subject to skepticism while the latter are not. There is no such thing as skepticism with regard to our knowledge of possible worlds. Possible worlds are transparent to us while the actual world is opaque (at least according to the skeptic).[2]

What does this tell us about possible worlds? You might suppose it tells us nothing, ontologically speaking: it just so happens that possible worlds are available to our knowledge in a way the actual world is not. They are still entities just like the actual world, considered intrinsically. Granted, the epistemological asymmetry might be puzzling given the ontological symmetry, but lots of things are puzzling—no need to question the ontology. On the other hand, you might take the asymmetry to demonstrate that possible worlds are merely mental constructions, matters of stipulation, not mind-independent entities—so that our knowledge of them is really knowledge of our own minds. Neither of these responses is attractive, which is why the asymmetry with respect to skepticism is interesting. It poses a philosophical problem. What I would venture to suggest is that it reflects the different roles of perception and imagination in grounding knowledge of worlds. I can coherently say that what I perceive might be otherwise than I perceive it to be, but I can’t say that what I imagine might be otherwise than I imagine it to be.  If I imagine some possible flying pigs, I can’t say, “These pigs I’m imagining might not be flying”—for the possibility I am imagining must be the possibility I seem to be imagining. If I imagine a certain possible world, there can be no doubt about what I am imagining: but the same is not true of perception. This is why the asymmetry exists, because of the different epistemic roles of perception and imagination in producing knowledge of the actual and the possible, respectively. Thus it is that skepticism applies in the one case but not in the other.

A radical response to the asymmetry would be to claim that it shows that realism about possible worlds is more acceptable than realism about the actual world. That is, it is better to believe in entities that can be known than entities that cannot be known. The actual world cannot be known, according to the skeptic—it is entirely conjectural—but possible worlds are transparent to knowledge. The actual world is like an unobservable while possible worlds are like an observable—we can only guess about the former, but the latter are presented to us just as they are. Possibilities are part of the given while actualities are merely “theoretical”. This point of view is not without philosophical interest—and I can see David Lewis’s eyes lighting up at the mention of it—because it turns the tables on dull commonsense realism. It’s the actual world that is philosophically suspect! Possible worlds are entities in good standing, ontologically and epistemologically, while the actual world is riddled with uncertainty. Wouldn’t Descartes welcome possible worlds over the actual world given their indubitable status? Isn’t a skepticism-proof ontology superior to a skepticism-prone ontology? One can imagine a Platonist favoring the possible over the actual, i.e. the ordinary empirical world. Maybe we should just junk the actual world!

Let me put the point another way. It is coherent to say, “The actual world may consist only of brains in vats”, but it is not coherent to say, “All possible worlds may consist only of brains in vats”. The reason is that we know that there certainly are possible worlds that consist of people seeing ordinary objects in their environment in the way we normally suppose; we just don’t know if our world is one such. Thus from an epistemological point of view, we stand in quite a different relation to the actual world and possible worlds—ignorance and knowledge, respectively. This is why there has never been a skeptic about our knowledge of possible worlds: for we can’t misperceive logical space.[3]

 

[1] See On the Plurality of Worlds (2001).

[2] It is not the same with space and time. Skepticism applies to places other than here and times other than now: the spatially and temporally remote are not privileged over the here and now. But remote possible worlds are known to have just the properties we take them to have.

[3] Of course, people in a possible world can misperceive the world they are in, so that skepticism always gets a purchase on the inhabitants of a world; but outsiders are granted special access to the content of a possible world—they don’t perceive it but conceive (i.e. imagine) it. In a sense, we know more about a possible world than the inhabitants of it know. Of course, we can make modal errors, but we can’t misperceive a possible world once we have it in our sights, since we don’t perceive it to start with. No one has ever seen possible pigs flying, though they are frequently conceived.


Falsehood and Meaning

In a famous paper entitled “Truth and Meaning” Donald Davidson argues that meaning is constituted by truth conditions. A recursive theory of truth for a language in the style of Tarski is thus a theory of meaning for that language. Understanding a sentence consists in grasping its truth conditions. The meaning of a word is its contribution to determining truth conditions. Truth is the central concept of semantic theory. Davidson says nothing about falsity in relation to meaning; that concept has no place in the theory of meaning. Perhaps the reason is obvious: falsity conditions are not what a sentence means. Suppose we say, evidently correctly, that “snow is white” is false (in English) if and only if snow is not white—the falsity condition is given by inserting negation into the sentence whose meaning is in question. Then clearly it would be wrong to say that “snow is white” means that snow is not white—it means the opposite of that! So falsity conditions don’t constitute meaning. I will return to this point, but at present, I merely observe that falsity is not the concept chosen to characterize meaning, by Davidson or by the many others who have seen meaning as residing in truth conditions. I propose to argue that this is a mistake—that falsehood is as closely intertwined with meaning as truth.

The first point to make is that understanding a sentence involves knowing under what conditions it is false. If I understand “snow is white” I know that this sentence is false if and only if snow is not white—just as I know that it is true if and only if snow is white. I know its truth conditions and I know its falsity conditions. It is perfectly true that we cannot replace “is false if and only if” with “means that”, but this doesn’t imply that knowing falsity conditions isn’t part of understanding a sentence. For the same thing is true of many sentences in relation to truth: we can’t replace a statement of truth conditions for indexical sentences with a “means that” clause either (“I am hot” uttered by me doesn’t mean that Colin McGinn is hot at the time of utterance), and most sentences of a natural language are at least implicitly indexical. Similarly, a biconditional for “Shut the door!” employing the concept of obedience doesn’t license the proposition that the sentence means such a condition (the sentence doesn’t mean that the addressee shuts the door in response to the command to shut it). And there is really no reason to suppose that what constitutes grasp of meaning should be susceptible of statement in the “means that” form. It is just an accident that this holds for truth conditions in the case of context-independent sentences (actually it doesn’t even hold for “snow is white” because of the indexicality of tense). If you say that meaning is use, you are not saying that a given word or sentence means anything about use. In any case, it is not an objection to a claim about meaning that it won’t go over into the “means that” form; and intuitively it is a platitude that to understand a sentence (in the indicative) one needs to know under what conditions it is false. You wouldn’t understand “snow is black” unless you knew that the circumstance of snow being white renders that sentence false. 
We could test someone’s grasp of meaning precisely by asking her whether the sentence would be true or false under such and such conditions.

But is it possible to give a Tarski-type theory of falsehood analogous to his theory of truth? That was certainly part of the appeal of a truth conditions theory of meaning for Davidson: it permits the employment of Tarski’s powerful and rigorous theory of truth. If falsehood cannot be treated in this way, then it lacks one of the most attractive aspects of the concept of truth in semantic theory. To my knowledge neither Tarski nor anyone else has investigated this question, so mesmerized are they by Tarski’s formidable apparatus; but the question is easily answered in the affirmative—falsehood is just as amenable to recursive formal treatment as truth (which is just what we should expect). I will run quickly through the basic clauses for falsity; it is really a routine matter. For any sentence s, s is false if and only if not-p (where p is a sentence of the meta-language translating s). A conjunction “p and q” is false if and only if either p is false or q is false (not if and only if p is false and q is false). A disjunction “p or q” is false if and only if p is false and q is false (not if and only if p is false or q is false). Notice how disjunction is used in the meta-language to give falsity conditions for “and” and conjunction is used to give falsity conditions for “or”, instead of the usual alignment of connectives for truth conditions. A universal quantification “For all x, Fx” is false if and only if something x is not F. An existential quantification “For some x, Fx” is false if and only if everything x is not F. Again notice the inversion of the quantifiers compared to the standard clauses for truth. With these clauses, we can construct a recursive theory of falsity entirely parallel to Tarski’s construction for truth. The analogue of a satisfaction clause will simply be: an object x counter-satisfies F if and only if x is not F, where “counter-satisfies” means the converse of “false of” (alternatively, “dissatisfies”).
We can then speak of “Convention F” which specifies that a definition of falsehood should entail all instances of the schema, “s is false if and only if not-p”; and even define falsehood as “dissatisfaction by all sequences”. There would be F-sentences as well as T-sentences. The apparatus is exactly as for truth but with suitable amendments. Tarski could have written an appendix to his famous 1933 monograph with the title “The Concept of Falsehood in Formalized Languages” and said much the same things as he said about truth. It would be surprising if he couldn’t, given the close connection between the two concepts—it would constitute an important theorem!
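The inverted clauses can be made vivid with a toy implementation. What follows is a minimal sketch of my own devising (nothing in Tarski or Davidson, and the tuple encoding and names are purely illustrative): a recursive falsity evaluator for a small formal language in which the clauses for “and”, “or”, and the quantifiers are exactly the inversions just described.

```python
# A toy recursive definition of falsity, mirroring the inverted clauses:
# a conjunction is false iff either conjunct is false; a disjunction is
# false iff both disjuncts are false; a universal claim is false iff some
# object fails the predicate; an existential claim is false iff every
# object fails it. Formulas are nested tuples; "interp" maps sentence
# letters to truth values; "domain" is a finite set of objects.

def is_false(formula, interp, domain=()):
    op = formula[0]
    if op == "atom":    # ("atom", "p"): false iff the letter is assigned False
        return not interp[formula[1]]
    if op == "not":     # ("not", A): false iff A is not false (bivalence assumed)
        return not is_false(formula[1], interp, domain)
    if op == "and":     # false iff either conjunct is false
        return is_false(formula[1], interp, domain) or is_false(formula[2], interp, domain)
    if op == "or":      # false iff both disjuncts are false
        return is_false(formula[1], interp, domain) and is_false(formula[2], interp, domain)
    if op == "forall":  # ("forall", pred): some object "counter-satisfies" pred
        return any(not formula[1](x) for x in domain)
    if op == "exists":  # ("exists", pred): every object counter-satisfies pred
        return all(not formula[1](x) for x in domain)
    raise ValueError(f"unknown operator: {op}")

# With p true and q false: "p and q" is false, "p or q" is not.
interp = {"p": True, "q": False}
print(is_false(("and", ("atom", "p"), ("atom", "q")), interp))   # True
print(is_false(("or", ("atom", "p"), ("atom", "q")), interp))    # False
# "For all x, x > 0" is not false over {1, 2, 3}; "For some x, x > 5" is.
print(is_false(("forall", lambda x: x > 0), interp, {1, 2, 3}))  # False
print(is_false(("exists", lambda x: x > 5), interp, {1, 2, 3}))  # True
```

Note the duality: given bivalence, a truth evaluator could be defined simply as the negation of this one, which is just the parallelism between the two Tarskian constructions claimed above.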

So we now add a Tarski-style theory of falsehood to a Davidson-type theory of meaning to produce a theory of falsity conditions for sentences of natural language (or disobedience conditions for the case of imperatives). This will be part of our theory of meaning for the language. It joins with a theory of truth conditions to give (allegedly) a complete theory of meaning. Both theories are necessary and neither is sufficient by itself. A speaker of the language grasps both the truth conditions and the falsity conditions of the sentences of that language. Thus I know that “snow is white” is true if and only if snow is white and that “snow is white” is false if and only if snow is not white. These are separate pieces of knowledge concerning distinct properties and employing different concepts (notably negation in the case of falsity). We can imagine possible beings that embrace one sort of knowledge while eschewing the other—they might be softhearted relativists that reject the notion of falsity altogether or stern skeptics about truth that recognize only falsity—but in our case, we have and embrace both sorts of knowledge. Our understanding of sentences includes both truth-conditions knowledge and falsity-conditions knowledge. This implies that a theory of meaning is based around two central concepts, truth and falsehood, not a single concept—which is not what we have been traditionally taught. Word meaning is now geared to two concepts: this is not truth-theoretic semantics but truth-value–theoretic semantics. Truth and falsehood play coordinate roles in the overall theory. Linguistic understanding has two parts or aspects. We could say that a meaning is a location in logical space that comprises both a positive condition and a negative condition: both snow being white and also snow not being white. Meanings are both inclusive and exclusive.

This opens up some interesting perspectives. Suppose you are a hardboiled Popperian: you don’t think truth can ever be established, but you do think falsehood can be. You hold that “all swans are white” cannot be confirmed as true, but can be falsified by observing a single instance of a non-white swan. You believe the concept of truth is irrelevant to science, but you think the concept of falsehood plays an important role. Verification is out of the question, but falsification is the engine of progress. Suppose you even go so far as to believe truth should be eliminated from our conceptual scheme, while retaining falsehood. You accordingly don’t accept that meaning is constituted by truth conditions (any more than you accept that scientific progress is the accumulation of truths) or by verification conditions (there are no such conditions): but you do believe that sentences can be false and can be established to be false. Then you may well find yourself attracted to a pure falsity conditions theory of meaning: the meaning of “all Fs are G” is given by the condition that this sentence is falsified by the fact that an F has been observed not to be a G. That is, we understand a sentence by constructing its falsification conditions, which embed its falsity conditions, and truth conditions be hanged. You thus don’t much care for Tarski’s definition of truth—for what use is the concept of truth?—but you do fancy his implied definition of falsity. It enshrines your general “critical epistemology”—your dedication to the notion of falsification. You embrace falsity-theoretic semantics done in the general style of Tarski, as adopted by Davidson. This seems like a coherent position, however radical or misguided it may be. It serves to bring out the change of perspective that results from taking falsehood seriously in semantics.

Falsity and negation go together—notice how often I used negation in explaining falsity conditions semantics. Similarly for Popperian epistemology: we are always discovering that theories are not true (i.e. false). So negation plays a critical role in the theory of meaning (and in Popperian epistemology): we don’t know the meaning of a sentence unless we know under what conditions it is not true. The concept of negation thus enters into our understanding of any and every sentence, even when the sentence doesn’t contain negation. Hence negation is integral to meaning as such. I doubt that so-called animal languages incorporate negation in this way, even if the animal in question possesses the concept of negation. We might then speak of negation-theoretic semantics—theories that emphasize the role of negation in constituting meaning. This makes a better understanding of negation desirable, and indeed I think negation is an underexplored topic (not counting Sartre’s Being and Nothingness). Would a good analysis of negation shed light on the nature of meaning?


Prehension Prehended

Did I mention that my book Prehension recently came out? I have held it in my hands. It’s a funny book. It’s not really a philosophy book, but a science book. But it’s more like nineteenth-century science, informal and personal, as well as “scientific”. The title alludes both to gripping with the hands and grasping with the mind (any reference to Whitehead is quite accidental). I adopt a very biological view of the mind, though without the usual reductionism. I intend it to be “meaningful” in the sense of summing up the human condition. We are very odd creatures when you look at it closely; my book is odd too.


Quine and Groucho

I was watching the Marx Brothers film Horse Feathers and noticed a reference to the distinction between de re and de dicto readings of vernacular sentences. Groucho says to Chico, “You have the brain of a four year old…and I bet he was glad to get rid of it”. The joke depends upon switching from a narrow scope reading of the quantifier to a wide scope reading—as if Chico had stolen the brain of an actual four year old. I wonder if Quine ever saw the film and had his attention drawn to the ambiguity of the original sentence. The unexpected reading is: “There is a four year old whose brain you have”.


Prehension

My book on the hand finally comes out next week (August 14). I just got copies: nice cover, good paper. Oh what trouble that book has caused me! It is pretty academic stuff actually. I’m curious to know what people make of it. It’s really a science book, laced with philosophy. People might object that it is more than conceptual analysis, and don’t I say that philosophy is conceptual analysis? But philosophers can do things other than philosophy, and might even be helped by their strictly philosophical expertise. I am all in favor of cross-disciplinary work, though it is harder to do than people think.
