Multi-Dimensional (Inclusive) Semantics
I address you today in a spirit of inclusiveness and diversity. For too long semantics (the theory of meaning) has been the preserve of a single type of entity held to constitute all that meaning encompasses (or a couple of entities, closely related). We must broaden our horizons and recognize that many kinds of entity contribute to the overall significance of an expression, often emanating from different traditions and regions. Above all it is reference that has proved hegemonic, squeezing out other contenders for semantic acceptance. Whether that notion is phallocentric to boot I shall not venture to say; what I shall say is that we need a far more inclusive and diversity-driven approach to semantics. Semantics correctly conceived is a rainbow.
It used to be that only reference (denotation) was admitted into the semantic club: the meaning of an expression was its denotation. This was the view of Lord Bertrand Russell, English aristocrat and logic whiz (Western logic). Definite descriptions had to be distorted beyond recognition in order to fit them into this narrow picture (a form of linguistic colonialism perhaps). In any case, this approach, hailing from John Stuart Mill, another privileged upper-class Englishman (and we duly note the gender), held sway until a rebellious German, a certain Gottlob Frege, added an extra element to the story—what he provocatively labeled sense. This was an improvement, breaking the stranglehold of the English referential aristocrats, but sense was conceived as the mode of presentation of the reference; so reference was still occupying center stage, with sense acting merely as its reflection or image, i.e. how we view reference. (Can we say that while reference is the phallus sense is its codpiece?) Still, the basic monism is firmly in place: semantics remains one-dimensional, or at least one-and-a-half-dimensional. Not till Ludwig Wittgenstein arrived (also a white male aristocrat) was this monism seriously questioned and a certain kind of pluralism put in its place—with all the variety of language emphasized and celebrated. This was a welcome development in the openness of semantic studies, even allowing for the existence of actual workingmen (those builders of the early Philosophical Investigations—though again we must note the gender bias). But instead of embracing diversity the Austrian aristocrat insisted on imposing a new one-dimensional hegemony—all meaning is use. Reference drops out of the picture entirely, as if use has ousted it altogether. We don't have use and reference but use and not reference. The old exclusiveness survives in a new form, less rigid perhaps, but with the same drive towards uniformity.
One half expects the use to be restricted to only the most privileged of users! This entire trajectory then reaches its climax, i.e. nadir, in the person of Sir Michael Dummett, a white male Oxford philosopher, whose main mantra is that everything about meaning should be explained by one central concept—such as truth or verification. There could not be a more blatant hegemony! Nothing is to be included in meaning except what can be subsumed under a single conceptual category: you are welcome to join the semantic club, but only if you are properly related to the concept of truth (or verification). No diversity allowed!
At this point I shall drop the political backstory and proceed immediately to theoretical matters, though I trust my enlightened readers to keep that political context always in mind. And let me lay my cards on the table right away: I am all in for maximum semantic inclusiveness with as much diversity as possible (within reason of course). Not just two-dimensional semantics, or even three or four, but many dimensions, indefinitely many—as many as we can come up with. Fortunately, we have this diversity already lying around—it requires no strenuous inventing on our part. I have prepared a long list: reference, sense (mode of presentation), tone, character and content, intension and extension, grammatical mood, inferential role, rules, stereotype, mental image, individual and social understanding, ideas, brain states, use, conceptual analysis, truth conditions, criteria, causal chains, and whatever else comes to mind. For my contention is that all of these may be reckoned to the meaning of a word or sentence: not one of them and not the others, but the whole lot. They don't exclude each other but coexist peacefully. For example, a proper name, say "Aristotle", has reference, sense, an intension and extension, a character (constant in all contexts), a role in inference, an associated stereotype ("bearded cogitating Greek man"), individual grasp and socially agreed grasp, a use, a contribution to truth conditions, criteria of application (see stereotype), a causal-historical chain, even a tone (vaguely distinguished and admirable). From among this variegated list we may pick out sense and intension for instructive contrast: the former is defined in epistemic terms (mode of presentation and interchangeability in belief contexts) while the latter is defined in modal terms (functions from worlds to extensions). These are by no means the same notion, but they equally belong to a single name, existing side by side in perfect harmony.
There is no point in arguing that one is the real meaning and the other a mere impostor: both belong to the overall semantic significance of the name. Both are attributes the name has, and they clearly flow from what it means (not what it sounds like). Meaning is multi-dimensional, diverse, and inclusive. No doubt there are interesting relations of dependency between these various elements, which may be studied, but the plurality is irreducible—part of meaning's rich pageant. We can even throw in some Meinong-style ontology if that is to our taste, assigning to so-called empty names a subsistent entity as reference, or what is called an "intentional object". A committed Kantian might insist that reference be divided into phenomenal reference and noumenal reference. A follower of Sir Arthur Eddington might propose a double reference for "chair": the commonsense chair and the chair of physics. The possibilities are endless, to be considered on their merits; but they should not be rejected simply because of some presumed one-dimensionality in meaning. In the theory of meaning our adage should be, "The more the merrier". Plurality is a sign that we have not omitted anything, not a symptom of conceptual chaos or indecision.
It may be remarked that the situation in other departments of linguistic theory is already happily pluralist. Consider the theory of syntax, taken to include the study of the sound system of a language. There is no one central concept here to which others must bow down; instead there are layers and dimensions. We can study speech as an acoustic phenomenon (as with a speech spectrograph), or as an articulatory system, or as embodied in the brain, or computationally. None of these competes with the others; all are legitimate and important. Syntax more narrowly conceived is typically understood as consisting of layers of rules, which may be viewed computationally or in terms of brain mechanisms. These are all aspects of the "formal" properties of language, and they all coexist—people don't go around complaining that someone else's pet theory isn't really about syntax. Syntax isn't one-dimensional. Similarly, in pragmatics there is room for a diversity of perspectives—not a single overarching concept. Thus there is no inconsistency between Gricean, Austinian, and Wittgensteinian approaches to (philosophical) pragmatics: all can be true and illuminating in their different ways. After all, there are many aspects to the employment of language by people, and we should not expect to be able to subsume all of them neatly under a single heading. For example, an utterance of "Shut the door!" may be made with Gricean intentions, while having an Austinian perlocutionary effect, and occurring within a Wittgensteinian language game. Then too, we may approach pragmatics from an individual's perspective, studying the way language is used as a tool of thought (say), or we can approach it socially, studying how language is used in interpersonal communication. There are indefinitely many possible ways to do pragmatics, as there are multiple ways to do syntax; and there is no reason semantics should be an exception. There are multiple components across the board.
The fact is that the list of concepts I gave represents a variety of insights into meaning on the part of different thinkers, each valuable in its own way, and there is no necessity to reject some in favor of others. I don't mean to say that no semantic theory can conceivably be false, just that the fault is usually incompleteness, not outright error. Apparent inconsistencies often melt under more tolerant investigation (as with Fregean versus Kaplanian approaches to indexicals). I used to be all in favor of "dual component" semantics, but really we should expand the dimensions dramatically to accommodate everything that characterizes meaning. The concept of meaning is a multi-dimensional concept incorporating a large variety of factors. It is not a simple thing like being square or red; it is more like the concepts of democracy or marriage or success. It contains multitudes.
Let me return to my political platform, because I was not being entirely frivolous (though mainly so). In ethics there has historically been a tendency towards monolithic theories, as with utilitarianism and Kantian ethics. It was left to more ecumenical ethicists like W.D. Ross to advocate a pluralist reconciliation between these apparently competing systems, thus producing a multi-dimensional ethical system. It is easy to see this development as an integration of different political perspectives—the pure will of the privileged autonomous agent versus the maximization of happiness in a suffering population. In the case of semantics we also have a politically contested domain, because language is spoken by diverse groups of people, each with their own purposes, positions, and ways of life. It would not be amazing if a certain kind of linguistic hegemony were in effect according to which only certain aspects of meaning are deemed "proper", the rest consigned to illegitimacy and disdain. Hence we get the idea of the logically perfect language. The messy reality of meaning might not receive its due recognition because of an ingrained habit of favoring some things over others. There is always something evaluative in theories of meaning, as if only a certain dimension is deserving of respect. Why has tone not received the attention it deserves? Could it be that its prime examples are racial slurs and sexist language? Why would people want to explore the expression of their own prejudices and hostilities? Speaking very broadly, there is something democratic about meaning: everyone speaks no matter his or her social class or place in society, and meaning itself combines disparate elements jostling together. Oversimplifying culture from political motives is not so far removed from oversimplifying language from similar motives. The habit of exclusivity is deeply rooted and ubiquitous.
At the least it can operate as a factor in determining what theoretical options people tend to take seriously. Semantics is political too.
I have no wish to wax psychoanalytic, but isn’t the notion of reference suspiciously phallic (at least as phallic as some of Freud’s phallic symbols)? It seems to involve a kind of mental protrusion, as the act of reference extends outward to make contact with objects in the environment. People sometimes talk of reference as like tentacles reaching out to grasp, but other organs of the body can reach out and make contact too. And what about pointing? The pointing finger has a rigidity and angle not unlike… And then there is “rigid designation”, a phrase that trips suspiciously easily off the tongue. Just saying.
Light can appear homogeneous, but the rainbow resolves it into an array of separable hues. Meaning can seem homogeneous too until we resolve it into its components.
For all I know intellectual traditions from beyond the West have suggested aspects of meaning Western thinkers have missed. If so, I cordially invite them in.
Alan Sokal would be proud of you. You should have submitted this one to "The Journal of Philosophy". Nevertheless, Chomsky, in his longish essay "Language and Nature", dismisses outright "philosophical" approaches to meaning and reference. We should like you to address directly some of his reservations. Your book on Wittgenstein was/is brilliant. It influenced me greatly in college, as it does still.
Yes, Chomsky has no place for reference in semantics at all—just not politically kosher, I guess.
Brilliant analysis. It never occurred to me before what an evil son of a bitch that old logic whiz Lord Russell was. Too long have the people groaned under his cruel monist regime. Too long have their honorable pluralist instincts been crushed by his linguistic rapacity. May his western soul find no respite in the fiery pits of hell.
Well, he did spend time in jail for protesting against WWI, so he was punished for his wicked ways. On the other hand, Principia Mathematica is a phallic monument of gigantic proportions, clearly meant to discourage anyone from speaking who is not privy to the authors’ private elite language.
This philosophy of language rigmarole is a little over my head. I was able to appreciate the political analysis of this fine essay. My takeaway: that old logic whiz Lord Russell was one evil son of a bitch. White male aristocrat bastard. May he find no respite in the fiery pits of hell for imposing his phallic straitjacket on the honorable language instincts of the people.
My comments below, especially on reification and faith, apply more broadly than to the discipline of semantics, so apologies for being tangential. (Let’s say I am encouraged by the commitment to plurality in this blog.)
“No doubt there are interesting relations of dependency between these various elements, which may be studied, but the plurality is irreducible—part of meaning’s rich pageant.”
This is a critical sentence in that it highlights the need for balance. For sure, the denial of plurality leads to dogma; but also, by not acknowledging that relations between different perspectives actually exist (and need to be discovered, fashioned, compared, tested), and that these relations may among them have some properties or structure, you run the risk of turning science into fashion or style.
However, even acknowledging this, we have a powerful tendency to reify – even at the level of the collection of abstractions and the relations. It's as if our desire to compose a system of knowledge compels us (punishing us as Tantalus was punished). I don't think reification itself is a bad thing; in fact it is essential to thought. But the danger lies in believing that each reification is an ending and an absolute, rather than a relative conception, which itself needs to be understood in relation to, transformed into, or combined with other views. (This is easier said than done. The concept of universe, or multiverse, is an example of a type of limit of this process. How many of us don't just accept in a common-sense way that there "is" a universe?)
Ironically, what seems to be required to carry out science constructively is a type of faith. Take the Eastern parable of the blind men and the elephant. Though each man may have a different perspective, as none of them can actually see (or feel) the elephant as a whole, they need faith that there is one 'thing' that exists, or at least continues to manifest in a coherent way, and that binds the elements together – even if they will never really fully know what it is. But witness here my inability to express myself without reification (referring to some total thing that exists). Even though this theoretical thing (the elephant) contains plurality, is not reifying it itself a denial of plurality? Hence my comment that some type of faith is required, rather than the belief that some specific thing, though not yet fully known or theorized, exists. (A westerner's comment on an eastern parable: the parable could have gone further, and taken a more positive view on knowledge, whereby the blind men uncover some relations between the different parts of the elephant, and thereby work together to understand more about what the elephant is and aspects of how it works as a whole.)
With the remarks above, I have had in mind the theoretical sciences and approaches to the foundations of mathematics. But to bring it closer to language, I'll reference Roberto Calasso's little gem of a book Literature and the Gods. One of his points (expressed crudely by me) is that the writer-poet who thinks that the gods are manifestations or fragments of the psyche will be impoverished as an artist relative to one who thinks in terms of the converse. I think this and other points made in this book are relevant to your post. (Calasso does briefly discuss absolutism in 20th century politics, the dangerous aspects of the notion of "good community", which implies a homogeneity, especially when combined with technology.)
There is reification and there is unification, i.e. thinking things are more unified than they really are. People have often objected to reifying meaning, but not to unifying it. The idea of meanings as units of something is the culprit (or one of them).
Burton Dreben, the Harvard logician, was another scarifying skeptic of the value of philosophy (à la Chomsky). Had you any traffic with him? There's a scene in Thomas Mann's "The Magic Mountain" (another great bed-book over the years) in which Hans Castorp has a dream about being in flight from Dr. Krokowski—resident psychotherapist at the Berghof. Young Hans, in his extremity, had climbed a flagpole, and awoke, perspiring, just as the doctor grabbed his pant-leg. The narrator of my (virtual) book, Alan Reynolds, enthusiastic student of philosophy at MIT, has a similarly fearful dream about Dreben.
I went to some lectures of his in London in the 1970s, but that’s about all. Short and lively.
Hi, Dr. McGinn,
I've got scant formal schooling but am trying to understand how natural languages function. Can you please tell me what is wrong or misguided about the following summary of my view of how natural languages function?
Users of a natural language, like English, define its terms’ meanings through their use of the terms (incidentally, these users themselves constitute a fuzzy set); this use is tracked by dictionaries, the more precise (but never perfect) tracking being done by the better ones. Accordingly, words can take on multiple meanings (i.e., they can be polysemantic) when different sets of users of the language use terms in different ways.
The line between when a term's meaning is registered by dictionaries as the chief one and when it goes unregistered is vague: a seeming majority of users have to use a term to mean X in order for the term to have that meaning codified in dictionaries as its chief meaning (with many terms taking on a hierarchy of meanings ranked by their roughly determined frequency of use). E.g., the English word "atheism" is polysemantic; it is used by English speakers and writers to mean roughly "an absence of belief in God or gods" and "a denial of the existence of God"—the latter being the historically more frequent meaning.
Moreover, natural languages like English are continually in flux: their words stop being used; sometimes, their terms are revived; the syntax changes; the morphology of their terms changes; the pronunciations of their terms change; the meanings of their terms change; and the natural languages themselves eventually go extinct, i.e., mutate into something wholly different or stop being used due to the death of their users.
There’s nothing misguided about what you say here.