Philosophy and Politics

It would be naïve to suppose that philosophy in the twentieth century was sealed off from the political turmoil of the period, particularly two World Wars followed by a Cold War. Philosophers, being intellectual people, would naturally look for the causes of war (and oppression generally) in various forms of defective thinking: ideologies, social conformity, propaganda, delusion, irrationality, and sheer nonsense. They would then set themselves to rectifying these deformities of thought, hoping thereby to prevent war and other forms of violence. They would see themselves as thought doctors and intellectual scolds. Thus religion, pseudoscience, superstition, and political ideologies (fascism, communism, etc.) would be subjected to philosophical scrutiny, followed by condemnation. Positivism is one extreme (and therefore attractive) form of this tendency, declaring much of traditional discourse literally meaningless: the chief cause of war is nonsense, to put it simply. This diagnosis fits religious wars and, with some tweaking, carries over to wars powered by ideology. Karl Popper, though not a positivist, shared the aim of cordoning off the shabbier precincts of intellectual culture—particularly Marxism and Freudianism. Other philosophers put their faith in symbolic logic or the down-to-earth prescriptions of ordinary language. Philosophers felt the need to oppose forms of thinking and talking that fostered dangerous tendencies, so there was a political rationale for their activities. The search for a criterion of meaningfulness (or a demarcation criterion in the case of Popper) was at least in part prompted by political concerns—concerns about war, human welfare, and human enlightenment. It was not a purely intellectual matter. Philosophy was seen as politically useful. The same could be said of phenomenology and existentialism, notably in the shape of Heidegger and Sartre, whose writings have a clear political motivation. We need to focus on lived experience, Dasein, the for-itself and the in-itself, the indisputable facts of Being. These philosophers oppose tendencies of thought that produce social discord, alienation, and sickness unto death. Twentieth-century philosophy is thus shaped by political considerations; it is not a pure inquiry cut off from “the real world”. It is, in a word, relevant.[1]

            Nor were previous centuries all that different. Without rehearsing all this, we can report that philosophy was shaped a good deal by religion, that it was concerned to promote the reputation of science, and that it addressed itself to issues of political authority in an age of monarchy. Opposing or supporting the Church was a central concern of philosophy throughout the Middle Ages, coming to a head in the Renaissance. And, of course, there was plenty of war to fuel interest in its intellectual causes (I am speaking mainly of Europe, but elsewhere in the world much the same situation existed). Philosophy was not disinterested—disengaged from large political and social issues. Only in the time of the ancient Greeks does philosophy appear serenely removed from politics, though here too appearances may be deceptive. True, Plato and Aristotle undertake inquiries whose political relevance is at best obscure (particulars and universals, substance and form), but politics occasionally intrudes in the form of the Sophists, the nature of the ideal state, democracy, and the death of Socrates. Were Socrates’ own motives completely apolitical? He went around exposing lazy thinking, questioning traditional pieties (the Euthyphro argument), and upsetting the authorities: why would he do this unless he thought that the errors he exposed were harmful? If they were just harmless eccentricities, silly personal foibles, he might have devoted his energies to other pursuits; he clearly thought it mattered that people are so intellectually inept. It mattered to the body politic. Still, I think it is true to say that the spirit of pure inquiry was very much alive in this period of intellectual history; one doesn’t sense the philosopher looking over his shoulder at the political ramifications of his studies. Perhaps this is because the time was one of relative peace, with little internal conflict and a sense of freedom. The ancient Greeks did not feel themselves to be at odds with dangerous ideologies that philosophers must be conscripted to combat (at least that is my impression). However, this period didn’t last long, and philosophy took up the cudgels against foes and phonies. The philosopher conceived himself in oppositional terms—always fighting with someone, always aiming for the Greater Good. The philosopher was ipso facto a political philosopher.

            The same is not true of other disciplines. Mathematicians and physicists are not politicians: they don’t conceive their subject as primarily concerned to combat dangerous intellectual error on the part of others, or to correct shabby thinking, or to root out pernicious nonsense. Teachers of these subjects don’t announce that their purpose is to enable you to think clearly—the assumption being that clear thinking will have practical benefits. Critical thinking courses, by contrast, are intended to foster an ability to resist propaganda and sophistry of the kind offered by the unscrupulous politician. Mathematics and physics simply study a certain range of questions in an impartial spirit, without regard for political considerations. Wars are not caused by false physical theories or unproved mathematical conjectures! The same is true of biology, geology, history, and botany. But philosophy has always been thought of as politically relevant: philosophers are expected to have political opinions, to be politically engaged. Even within philosophy there are political battles—battles for power, prestige, and funds. Analytical philosophy versus continental philosophy, history of philosophy versus timeless problems, ethics versus metaphysics: philosophers are always engaged on some sort of crusade, busily denouncing and demoting. The philosophy profession is apt to be highly politicized in one way or another. Philosophers, we are told, are concerned with how to live, and politics is about that very question on a larger scale. Nowadays philosophers are much exercised by questions of gender and racial equality, this being a new arena of political engagement for them. That is a continuation of an old tradition: the philosopher as political operative. How could it be otherwise?

            Let’s pursue that question: what would philosophy be like without political input and influence? Suppose the twentieth century had been a century of peace and tranquility: no war, no oppression, no genocide, no imperialism, no class division, no religious controversy, etc. That is, suppose the century had featured nothing of consequence in the political sphere. Do you think there would have been the same obsession with meaningfulness, demarcation principles, the hygiene of formal logic, and the bracing breeze of ordinary language? I doubt it: for none of the usual political enemies would have existed to combat. All these concerns were prompted by perceived errors that required philosophical correction—errors with “real world” consequences. Wouldn’t philosophy have looked a lot more like physics or botany? And what would that be in the case of philosophy? A concentration on philosophical problems as such—that’s what. Philosophers would simply confront the perennial problems of philosophy without regard for any political ramifications—trying to understand them, debating them, and even solving them (well, that may be expecting too much). Not that nothing of that type was occurring during the time of the politically engaged kind of philosophy (for those problems have an irresistible allure), but it was not free to go its own way without inhibition or restraint. The subject was surrounded and infiltrated by political questions. In fact, I think that the last few decades of the twentieth century were unusually free of political preoccupations and that philosophy benefited hugely thereby. There wasn’t much need for political engagement on the part of philosophers, so they could (to some degree) pursue questions that are politically irrelevant. They rediscovered the joys of pure inquiry: they could, for example, indulge in metaphysics without fearing that they were abetting stultifying religion or war-inciting ideological nonsense. That is, the psychology of philosophers changed during this period—not completely but partially. They felt freer to pursue their vocation.[2] Imagine what it would be like if politics intruded not at all on philosophy—a type of utopia no doubt, but an imaginable one. Philosophers could then focus on the problems of philosophy without having to think about anything extraneous to them. It would be like Popper without the (alleged) pseudosciences of Marxism and Freudianism to contend with—there would simply be no need to labor over formulating a demarcation criterion. If there is no war, then we need not fret over its intellectual causes: we need not spend our time skewering bellicose ideologies with superior reasoning. We could be exclusively concerned with discovering the truth without regard to its political utility. As I say, I think some of this has been in the ascendant, but it is good to state clearly what philosophy would look like in its unadulterated form—apolitical philosophy. Politics is fine in its place, and philosophers have a role to play in improving the practice of politics, but we should not lose sight of philosophy in its ideal and primal form. It is not in its essence prophylactic or therapeutic or anti-irrational or politically progressive. It is not in the business of war prevention or bullshit detection or oppression removal. It isn’t even to be understood as a good way to think clearly about things in general (that is not its essential point).
It is an attempt to come to grips with certain age-old problems—a completely non-political enterprise. And there is always a danger that philosophy aimed at political ends will be dragged down to the level of politics. In my view philosophy is intrinsically apolitical, despite its history. That is indeed a main part of its attraction. We are not trying to improve the world qua philosophers but to formulate and answer the distinctive problems of philosophy. Movements like positivism are an aberration in philosophy, not part of its central mission (and let’s not forget that the avowed aim of positivism was to destroy and suppress metaphysicians, to “cancel” them).[3] In an ideal world the philosopher would have no interest in politics at all (except as a hobby)—though political philosophy would be perfectly kosher. The ivory tower would be sealed off from the outside world.

            I fear I may be misunderstood. It is not that I am against politics, and it is not that I think philosophy has no relevance to it, and it is not that I think philosophy has been wholly political for its entire history. I am merely suggesting that the exigencies of politics have shaped and distorted philosophy, which in its essence is not a political subject. Its connection with politics is contingent. In its pure form philosophy is an apolitical attempt to come up with the truth about a certain range of (pretty abstract) problems. Anything else is a corruption of its true nature.[4]


[1] I am obviously speaking in broad generalities here; the usual caveats apply.

[2] I want to avoid mentioning names for fear of omissions and wrong inclusions, but just to convey a sense of what I am talking about let me cite the following as (relatively) apolitical in their philosophy: Kripke, Nagel, Fodor, Strawson, Davidson, Lewis, and others. On the political side I would mention Rorty and Scruton as explicitly political in their approach to philosophy. More ambiguous figures would be Austin, Quine, Wittgenstein, and Rawls.

[3] Isn’t it remarkable that the obsession with meaningfulness, once so urgent, has now completely disappeared from the philosophical agenda? I conjecture that this has to do with the change of political climate since the early and middle twentieth century, particularly regarding the dominance of religion. Nor are philosophers so focused these days on the question of freedom and personal authenticity, ever since the political revolution of the 1960s rendered such concerns nugatory (at least to some degree). 

[4] What is valuable about the philosophical state of mind (the philosophical life) is that it is not a political state of mind—not obsessed with power, advantage, competition, winning and losing, popularity, and influence. It is free from such worldly concerns. Not that it is easy to achieve.

The Alphabet of Thought

An alphabet consists of a relatively small number of letters correlated with simple sounds. The modern English alphabet (deriving from the Latin alphabet) has 26 letters. The sounds represented are those found in speech, so the alphabet is a way to code the sound structure of speech. Writing consists of strings of letters that correspond to the sounds of spoken language. Any sentence of a language can be represented by an alphabet—infinitely many such sentences. Words can be represented as sequences of letters in combination. We can also say that spoken language has an alphabet—the collection of basic sounds that make up vocal utterances. These too are relatively few in number (around 30), as dictated by the human articulatory system. Clearly combination plays a large role in allowing these primitive elements to generate so many meaningful strings. A notable, indeed defining, feature of an alphabet is that the basic elements are not themselves meaningful: the sounds and marks that compose an alphabet have no meaning in isolation (with very few exceptions). For example, “red” consists of three letters and three sounds (phonemes), corresponding to “r”, “e”, and “d”, none of which has any meaning on its own. We might call this “the principle of non-semantic composition”, or “the principle of alphabetical composition”. Phrases and sentences obey a principle of semantic composition because words are meaningful units, but words (spoken or written) are made of elements that are not meaningful in their own right. Meanings are not being combined when words are formed from marks or sounds. This is what enables an alphabet to do its work: if the principle of combination were purely semantic, we would soon see an explosion of primitive elements—as many as there are basic meanings. Such an alphabet would be unwieldy to use and a strain on the memory. So we make do with 30 or so basic elements and let combination do the job of generating the infinitely many sentences that language contains. Speech and writing are thus economical as to primitives but fertile as to combinations—a great many meanings and a handful of letters or sounds.
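The arithmetic behind this economy is easy to exhibit. Here is a minimal sketch in Python (the cap on word length is my own illustrative assumption, not anything argued for above):

```python
# Illustrative only: a small stock of meaningless primitives
# yields an enormous space of possible combinations.
letters = 26       # letters of the modern English alphabet
max_length = 8     # illustrative cap on word length

# Count every string of length 1 through max_length over 26 letters.
total = sum(letters ** n for n in range(1, max_length + 1))
print(f"Strings up to length {max_length}: {total:,}")
# => Strings up to length 8: 217,180,147,158
# Billions of candidate forms from 26 meaningless marks, of which
# any actual lexicon uses only a tiny fraction.
```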

            The question I am interested in is whether thought works alphabetically: does it too have an alphabet in the sense just outlined? Let’s assume there is a language of thought: then the question is whether this language is composed of alphabetic elements in the way speech and writing are so composed. We already know it is composed of words—so much is implicit in calling it a language—but it is a further question whether the words themselves have an alphabet-like composition. Are the internal words made up of mental letters or mental sounds? Are they spelled a certain way? I don’t mean letters of the current English alphabet (though this cannot be ruled out on logical grounds) or actual sonic events (that would make thought noisy); I mean: does it contain anything comparable to these alphabets? That is, are the words of LOT composed non-semantically? Is there an alphabet for the language of thought? Given that thoughts are made up of concepts, this is the question of whether concepts are expressed in internal versions of an alphabet. Does the concept red, say, carry with it components corresponding to “r”, “e”, and “d”? Does the word of LOT that represents that concept have components that are themselves not conceptually significant? The word of LOT that represents the concept red is a meaningful word, but is this word composed of meaningless elements in the manner of an alphabet? Is thought alphabetically represented in the mind? When you think, are you somehow uttering “sounds” or making “marks” that have no individual meaning? Does LOT work with about 30 such elements, using them to construct infinitely many mental representations?

            We have no direct evidence that this is so. There aren’t any letters written on the brain that we can decipher, and no audible susurrations issuing from the cortex; nor has anyone compiled a list of alphabetic thought elements. Neither is it true that we can introspect the alphabetic components of LOT, as we can hear and see the sounds and letters that make up our ordinary alphabet. But presumably that is not to be expected given that LOT is an unconscious mental reality; and we don’t generally insist that every decent psychological construct be directly detectable. Maybe we could find indirect evidence for such an alphabet, even identifying its constituents; or maybe not, even though it is a psychological fact. Still, we can inquire into whether there are any plausibility considerations that might favor the idea. First, we have the precedent supplied by spoken language: it is accompanied by an alphabetic system, so why not the internal language? This might conceivably be true of outer language even though we couldn’t directly detect the sounds and marks that constitute the alphabet, perhaps because they are hidden away somehow and only manifest themselves in acts of communication (going straight into the brain without any sensory representation).[1] Second, we know that concepts have alphabetic vehicles, since they are expressed by spoken and written words, so it is perfectly possible for them to have internal vehicles of the same kind too. Third, not having an alphabetic structure is massively inconvenient—it would require the brain to have distinct primitive symbols for every basic concept. Why not take advantage of the combinatorial powers supplied by an alphabet? This will give us computational economy and efficiency—the brain only has to manipulate about 30 basic representational units. Fourth, inner speech presumably mirrors outer speech by deploying a parallel internal alphabet: when we say “red” to ourselves silently we mentally rehearse bits corresponding to “r”, “e”, and “d” (you can perform this experiment on yourself now). If so, granted the close connection between inner speech and thought, it is a small step to recognize the same structure in the realm of what we call pure thought. Maybe what happens here is just that the alphabet goes underground while still chugging away imperceptibly. Fifth, the idea of a language without an alphabet is not easy to make realistic sense of: the speaker needs a manageable system of elements with which to form words, or else words become difficult to produce and understand, or few in number. Speech and writing work so well because they operate from a few primitives with a small number of combinatorial operations: thus they are enormously repetitive, which is helpful. LOT should avail itself of the same convenience.

            But what are these elements, you might wonder. What are the non-semantic units that combine to form semantic units? They have to exist in the brain and be capable of impressive feats of rapid dexterity. Electrical patterns! We know how closely electrical activity in the brain maps onto mental activity, so why not suppose that words of LOT are composed of electrical patterns that combine to generate a meaningful word? There have to be such electrical patterns when a word in LOT is tokened, so why not accept that there are constituent such patterns corresponding to the alphabetic structure of the word? In other words, the “r” part of “red” corresponds to an electrical pattern that combines with other electrical patterns to generate the complex electrical pattern corresponding to “red”. This is non-semantic composition—alphabet-like word construction. Those elementary electrical configurations can recur in the employment of other internal words, as the same sound or letter can appear in different words; it is the combination that produces the specific meaningful word in question. Thus the LOT word for “red” is made up of a combination of electrical patterns that function like letters and phonemes. Of course, there may also be higher-level descriptions of these electrical constituents that we don’t now know about, but at a basic level the brain is producing electrical patterns that act as “hardware” for the abstract alphabet (“software”) associated with LOT. The important point is that there is non-semantic composition in LOT as well as semantic composition—just like spoken and written language. LOT’s alphabet is electrical in nature with charge and voltage corresponding to sound and shape (at least at the basic level).
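One crude way to picture the proposal: let each primitive correspond to a reusable pattern, and let a word's pattern be the combination of its constituents' patterns. A minimal sketch, with bit strings standing in for electrical patterns (the patterns and the encoding scheme are illustrative assumptions of mine, not claims about actual neural coding):

```python
# Bit strings stand in for elementary "electrical patterns".
# No individual pattern means anything on its own.
patterns = {
    "r": "0101",
    "e": "1100",
    "d": "0011",
    "a": "1010",
    "n": "0110",
}

def word_pattern(word):
    # Non-semantic composition: the word's pattern is assembled
    # from constituent patterns that are individually meaningless.
    return "".join(patterns[ch] for ch in word)

print(word_pattern("red"))  # => 010111000011
print(word_pattern("ran"))  # => 010110100110
# The "r" pattern recurs in both words, just as the same letter or
# phoneme recurs across different spoken and written words.
```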

            So it is not unreasonable to credit LOT with an alphabetic architecture, just like speech and writing. What this essentially means is that it obeys a principle of non-semantic composition. Does anything else obey such a principle? Why, intentional action does: it too proceeds by constructing complex units from units that are not themselves intentional actions. Whenever you perform an intentional action there are countless bodily events that you don’t also intend but which combine to produce what you do intend. Actions are not composed only of other actions, as words are not composed only of other words. In both cases there is a generative system that manufactures one sort of thing from another sort of thing—actions from non-intentional bodily events (e.g. nerve impulses innervating muscles); words from non-meaningful sounds or marks. We might even postulate a psychological law—the law of non-mental to mental composition. Just to be a bit snappier, let’s call it “the law of alphabetical composition”. This law says that certain things—speech, writing, thought, and intentional action—are constructed from elements drawn from outside these domains. To be more exact, there are two phases of construction: first, we construct an instance of these categories from elements that don’t fall within them; second, we construct further such instances using previously constructed instances. For example, we make spoken words from meaningless sounds and then we make further word-like items (phrases and sentences) from words. Words in LOT are likewise made from non-words by alphabetic procedures, and then these words are used to construct further meaningful units. Better, the vehicles of meaning are the result of two generative processes: those that produce meaningful vehicles from non-meaningful vehicles, and those that produce meaningful vehicles from other meaningful vehicles. Perception might obey the same law, producing percepts from elements not themselves percepts; and these elements too might have alphabetic structure. It is often remarked what a marvelous invention the alphabet was, making written records finally available and usable; well, evolution seems to have been there before us, exploiting alphabetic structure in the management of speech, thought, action, and perception. The idea of using a pre-semantic reality consisting of a small number of primitives in order to represent an infinite totality of semantic units is just too good an idea to pass up. The mind thus evolved as an alphabetic machine, and we cottoned on to this late in the game when we invented the first written alphabets.
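The two phases can be kept apart in a toy model. A minimal sketch, with an invented three-word lexicon (the vocabulary and the placeholder "meanings" are mine, purely for illustration):

```python
# Phase 1: meaningful words from meaningless primitives (spelling).
# Phase 2: meaningful wholes from meaningful words (syntax).

lexicon = {
    "dog": "DOG-concept",   # the mapping from letter string to
    "ran": "RAN-concept",   # meaning is stipulated wholesale; the
    "red": "RED-concept",   # letters contribute no meaning of their own
}

def phase1(letter_string):
    # Alphabetical (non-semantic) composition: letters in, word out.
    return lexicon.get(letter_string)

def phase2(*word_meanings):
    # Semantic composition: the whole's meaning is a function of
    # the meanings of its parts.
    return " + ".join(word_meanings)

print(phase2(phase1("dog"), phase1("ran")))  # => DOG-concept + RAN-concept
```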

            There is a tendency to think that thought is like language but with the words removed, leaving only the pure proposition. But this has to be wrong, because the mind and the brain need a medium of representation for thought to occur in; they can’t just grasp a meaning without any mediation. The language of thought is brought in to supply such a medium, but once we take that step we must ask about the nature of its composition; and once we do that the idea of an alphabet-like structure starts to take shape. The internal language must be like the external language in this respect. Thus we end up with the idea that the words of LOT are composed like the words of speech and writing—combinations of a limited number of basic meaningless elements, such as sounds and shapes. Thought is alphabetically organized too.[2]


[1] We might have blindsight and deafhearing that prevent us from having any conscious perception of marks and sounds, and yet those things have their impact on our nervous systems.

[2] A spoken or written language can be segmented according to two sorts of principle: semantic (which includes syntactic) and phonetic or figural. These are independent principles; both are essential to human language as we find it. Sentences sound or look a certain way as well as mean something or other. Similarly, we should think of the sentences of LOT as having these two dimensions: segmentation by meaning and segmentation by intrinsic character. The latter is where alphabetic structure comes in.  

Naming and Knowledge

A long and winding tradition in philosophy has it that naming is the essence of language. You name it, it’s a name. Or at least all words are name-like: names are representative of language in general. Names denote, and language is in the denotation business. I am going to argue that this position is sorely mistaken, not because most words are not like names but because names are not like words—in fact, they are not words at all. Names are a very special kind of linguistic unit, quite distinct from words in general: they are not even meaningful. But this is not just a truism about the syntactic form of names; it reflects a deep fact about the connection between meaning and knowledge. If this sounds obviously wrong to you, do me the honor of reading on: names, unlike words in general, convey or embed no knowledge; they have a quite different character and function. Names are really incursions from outside language proper; they are alien imports, foreign to what language essentially is.

            The fundamental point to grasp is not unfamiliar: names are labels. That is, they are mere labels—tags, surrogates, pointers, placeholders. They have no conceptual content, no intrinsic meaning, and no information-value. The OED defines “name” as “a word or set of words by which someone or something is known, addressed, or referred to”. This is a very special type of expression: we would not describe other words in such terms. A definite description, say, is not a word or set of words by which someone or something is known or addressed; it is not a mere label for something. Naming is labeling, but describing is not labeling. We confer a name on someone in a baptism or something similar, so that henceforth they will be known by that name, but we don’t confer a description on something in this way. A person (or other entity) earns the descriptions that apply to it in virtue of the properties it instantiates, but names are not earned in this way—they are merely stipulated. They are what we call something, not what something is. They are ceremonial, practical, and arbitrary: suitable for nametags, birth certificates, and roll calls. They are not attempts to capture some aspect of reality in words. This is why the description theory of names strikes us as so contrary to the evident nature of names: it makes names into conveyors of information, repositories of knowledge. No, a name is just a label, a mere device of identification, a convenience, an empty sound or mark. It has a bearer, to be sure, but it is not a meaningful element of language: labels on shirts have bearers too (the shirt itself), but they are not thereby meaningful entities. The essence of a name is to be meaningless, to convey no information, to express nothing about its bearer. Thus we pick meaningless sounds or marks to name people, not meaningful descriptions (we don’t baptize someone “the future president of the United States”). We need labels for people, so we pick linguistic units that carry no semantic baggage—units quite unlike words in general. It doesn’t matter what the name says, only what it sounds like, so we choose sounds that don’t say anything. We choose arbitrary bits of noise or script devoid of meaning. Nor do these bits magically acquire meaning by being so picked; they just function as empty labels. This is why you have to ask someone what he or she is called; it can’t be inferred from descriptive knowledge about the person (“Ah, you must be called ‘he of the flaming red hair’”).[1]

            I will put this by saying that names contain no knowledge. You can grasp and use them in the absence of any knowledge about their bearer (save that he or she is so named). You only know that this person is labeled “Sam Adams”, for example. The same is not true of any associated definite descriptions: here your grasp includes actual knowledge about the satisfier of the description. Hence a name cannot be equivalent to a description: one is a mere label and the other is a conveyor of information (knowledge, fact)—or purports to be. Probably the very existence of names as a linguistic institution derives from limitations on human knowledge: we just don’t know enough to refer to people and things by way of their properties, so we employ labels to circumvent our ignorance. Keeping track of a person’s changing properties is beyond most speakers, and names supply a nice convenient way of referring to them—precisely because they carry no epistemic commitments. Labels are what we use when identifying descriptions elude us. This is evident in the case of names for natural kinds: you may spot a new type of animal in a remote jungle and possess no uniquely identifying information about its species, so you just stipulate that this type of animal will be named a widger. Then you can talk about widgers without knowing much about them. When you learn more you may replace the name with something more descriptive, or invest the name with this new knowledge, but for now the label enables you to refer to this unknown type of animal. It’s just a label, not a piece of zoological knowledge, but it serves your practical purposes. Thus names are devices for remedying ignorance in the furtherance of speech (demonstratives can function similarly[2]). Accordingly, we can say that the naming parts of speech are knowledge-independent while other parts of speech are knowledge-involving.

            Is this true of words in general? What about adjectives, connectives, and quantifiers? Quick inspection confirms the thesis that all such words are cognitively demanding—they are not mere labels. The word “red”, say, is not a mere label of redness but expresses that property: you can’t understand it without knowing what redness is. The same goes for other adjectives like “square” or “brave” or “tardy”: to grasp these words is to have knowledge of something substantive, namely what the denoted (expressed) properties are, not merely what certain things are called. The same is true for “and”, “or”, “all”, and “some”: these are not mere labels for things whose nature may escape us; they express in their meaning certain substantive facts concerning possible states of affairs. You have to know what a general fact is to grasp the meaning of the word “all”: you can’t claim to understand it and then say, “I don’t know anything about what this word names; I am just using it as a label for the thing it stands for”. Hence we say that these words are meaningful—that they are words. The OED defines “word” as follows: “a single distinct meaningful element of speech or writing, used to form sentences with others”. Here the operative term is “meaningful”—exactly what names as labels are not. This is why I said that names are not really words: words are meaningful while names are meaningless (in a perfectly straightforward sense). Nothing meaningless can be a word. As the tradition puts it, names have no connotation, but words always have connotation. Other elements of discourse can be functional without being strictly words, such as emphasis, gestures, and facial expressions (even vocal sounds like “Boo!”); names belong in this general category—they are word-like but not really words. They can act as placeholders for real words, but they are just senseless labels. We could say that they are not part of the language faculty proper, i.e. the system of meaningful elements that combine to form sentences; they are rather like gestural elements (pointing, head orientation). They are a bit like saying “la-di-da” or “blah-blah-blah”. They have to be learned as add-ons to linguistic mastery proper; they are not part of the initial innate language program. They are not part of an ideal language in which every element is a meaningful word (excuse the pleonasm). Ordinary language is clear on this point: we don’t ask, “What is your word?” but “What is your name?”, and we don’t ask, “What is the name of red in Italian?” but “What is the word for red in Italian?” We know the difference between words and names, between meaningful signs and meaningless labels. In no way, then, could language consist of a collection of names—it would contain no words! Nor are names representative of language in general. Spoken language is a complex multi-faceted thing, and it divides into the names and the non-names. This division correlates with knowledge and the absence of knowledge. Most of language is knowledge-involving, but names are ways to speak without possessing knowledge: they function to label we know not what. Names are inarticulate and content-free, just empty sounds. Naming an object is the opposite of knowing about it. The only knowledge names require is knowledge of language itself (“This person has been dubbed ‘Sam Adams’”), while words in general require that one knows something outside of language, whether by acquaintance or description.
You have to know what red is to understand “red” and what conjunction is to understand “and” and what a general fact is to understand “all”. Knowledge of names is always meta-linguistic knowledge, but this is not so for words in general. We need to have knowledge of things in order to understand words for things. Meaning and knowledge are intertwined. 

            We have learned that a single object can have two names and that an identity statement formed from them is a necessary truth; also that two descriptions can apply to the same object and in some cases form a necessary identity statement (when the descriptions are both rigid designators). But these are completely different semantically: the former consists of two labels that happen to be assigned by stipulation to the same object; the latter consists of descriptions that express properties that are necessarily co-instantiated by the object in question (e.g. “the successor of 4 is identical to the predecessor of 6”). We cannot analyze one in terms of the other, since they involve different kinds of fact. Nor should we suppose that their epistemic status derives from the same source: true, they both express synthetic propositions, but one derives from a coincidence of stipulations while the other arises from a non-linguistic fact about numbers. The same point would apply to any a posteriori identities involving descriptions. The sentences may look superficially similar, but the occurrence of names in one and descriptions in the other guarantees a complete difference of analysis. We can have two labels for the same thing and the same thing can instantiate different properties—these are very different states of affairs.

            Given that names are not really words (meaningful units of language), how is it that they can occur in sentences combined with real words? Now that is a good question—one would think that other words would shun their company. How do they manage to sneak into sentences disguised as words? Why don’t the sentences reject them as fake words? Get thee out, you senseless labels—you empty vessels! A possible answer, which sounds more radical than it really is, is that names don’t occur as constituents of sentences. To be sure, they occur as parts of sound sequences and written marks, but all sorts of things belong there that are not part of sentences proper. Sentences are far more abstract than that, belonging to the unconscious computational mechanisms of the language faculty—they are composed of elements quite far removed from the sensory phenomena in which they are expressed by the human sensorimotor system. So it is possible that names are tacked on by some extraneous part of the mind and are not strictly sentence components at the deep level. The question, then, is what does occur in deep sentences when spoken sentences appear to contain names. The natural thought is that these elements are reference fixers of some sort: possibly definite descriptions, or (more likely) demonstratives. These lexical elements compose the underlying sentences—being genuine words—and names come into the picture by being tacked on from outside and uttered in outer speech. In any case, there are ways to explain how names can appear to be real words without actually being real words (a not uncommon phenomenon). The point always to remember is that names are empty meaningless labels brought in to serve a particular pragmatic purpose, one divorced from knowledge of the extra-linguistic world.

            As a coda, let me note that names are bits of spoken language that we can like or dislike, abbreviate and distort, that can be changed at will, are subject to fashions and fads, and are easily forgotten—while the regular words of language are not subject to these whims. No one hates the word for red in their language, or proposes changing it, or feels it is so last year, or simply can’t recall it: these facts all suggest that names and words belong in different parts of our cognitive economy. Names are ancillary to language, orthogonal to it, not its truest representatives. All the more odd, then, that philosophers should have decided that they form the very essence of language; they are anything but.[3] Language would be quite happy to do without names and only tolerates them because of the epistemic deficits of its users (frankly, they are a bit of an embarrassment). What’s in a name? Nothing—and that is exactly their point.


[1] If you look up the name “John” in the dictionary you will draw a blank, but the word “john” does appear, defined as “toilet” and “a prostitute’s client”. Dictionaries contain words with meanings, not names. To ask for a definition of an ordinary name is bizarre, a kind of linguistic category mistake.

[2] But demonstratives and other indexical expressions are not mere labels like names. And sure enough they demand a lot more epistemic engagement than names, typically requiring perceptual acquaintance or something close to it (as with “now”). We don’t baptize something “this”.

[3] It may seem especially odd that Russell required that a genuine name have a bearer with which we are acquainted—the most immediate and intimate form of knowledge. But the oddity ebbs when we remember that for Russell the prime example of a name is a demonstrative for a sense datum, not an ordinary proper name at all. As Wittgenstein remarked, it is strange that philosophers have chosen as paradigms of names expressions that are not names at all (such as “this”). For Russell, in fact, ordinary proper names are dispensable parts of language, scarcely belonging there at all (in this his intuitions were on the right track). There is some irony in the fact that recent philosophy of language has focused on a type of expression that is quite peculiar and hardly belongs to language at all. It is demonstratives and descriptions that are the genuine article, both firmly hooked up to knowledge of reality.

The Many Minds Problem

When it comes to other minds we are notably weak from an epistemic point of view. We are just not very good at knowing about them. Epistemic inadequacy is our standing condition. This comes out in two ways: first, we find it difficult to justify our ascriptions of mental states to others—the traditional other minds problem; second, we come up short in trying to grasp the nature of minds different from our own—the alien minds problem (bats and the like). We don’t have these problems with our own minds or with other bodies: it is not mind as such that presents epistemic challenges, because we know our own mind remarkably well; and we have no special problem describing the bodies of other beings. But other minds strike us as private, elusive, inscrutable. They seem like an area of reality we are prohibited from entering: we can’t see them, touch them, or smell them. And why are they so hidden from us? Not because they are too far away or too small: we can’t even explain our ignorance of them. Nor is the problem purely philosophical: the inscrutability of others is part of normal human life. It sometimes feels as if the whole effort is a complete waste of time: why not just accept that we are incurably ignorant of other minds? But then social relations would grind to a halt, so we blindly soldier on.

            I want to draw attention to another epistemic problem of other minds—what I am calling the many minds problem. We also have difficulty with the notion of another mind—one that isn’t mine. How is this different from my mind in someone else’s body? How can I imagine another center of consciousness? Don’t I model it on my own consciousness—but then how is it different from my own consciousness elsewhere?[1] How do I form the concept of a mind distinct from my own? I can easily form the concept of a body distinct from my own: it exists at a different position in space, over there, and can’t occupy the same place as my body. I don’t model the idea of another body on my body: I just see other bodies—even bodies quite different from mine. But this epistemic resource is not available to me in thinking about other minds: here I can’t form the idea of a plurality of minds laid out in space at specific distances from each other and precluding co-occupation. All I have to go on is my own mind and my perception of bodies, but these are insufficient to give me what I need. Do I really have the idea of other minds? The skeptic thinks not: sure, I can talk that way, but I don’t really grasp what I mean by such talk. This is a case of reference without understanding: naming without describing. I can say “George’s mind”, but I can’t form an adequate conception of what I am referring to (compare Hume on causation). Maybe I even know that there are many minds; what I don’t know is what it is to be a mind other than my own. Even if I recognize that other people have minds, I can’t conceive of what that state of affairs amounts to. I am like a blind man trying to understand colors: I can talk about them and assert their existence, but their nature escapes me. I am condemned to solipsism in the sense that I only really understand my own existence. I understand the concept of a plurality of bodies but not the concept of a plurality of minds.[2]

            It might be thought that this is an exaggeration (though exaggerations are often useful in philosophy), since we can conceive of minds under causal and functional descriptions. In a roomful of people I can see their bodies laid out in space, nicely distinguished, but I can also see that different physical inputs and outputs apply to different people. Thus I can distinguish one mind from another by the fact that people perceive different objects and perform different actions. I see the different ways the minds are laid out in the physical world—as a plurality of causal loci. But this doesn’t supply what we need, because it doesn’t add up to a conception of minds as such: it doesn’t help us form the idea of a mind separate from ours that is analogous to our conception of our own mind. It doesn’t give us a conception of a plurality of those private inscrutable things of which I am one. Isn’t this just putting myself in the place of others, not forming the idea of genuine others? The problem is that we have no conceptual framework—better, no conceptual surrounding—in which to situate our conception of many minds. We have nothing like the apparatus of space and perception to give content to our talk of other minds. Thus our thinking is hazy, unformed, and merely heuristic. Animals show awareness that others have minds too, but they surely have no articulate grasp of what they are aware of: they don’t have clear and distinct ideas of a plurality of minds. But neither do we, because we lack any conceptual scaffolding that would hold such an idea in place. We certainly can’t obtain that idea from a combination of our own mind and the general notion of body. Maybe if we could literally see other minds we would have a clear and distinct idea of their plural existence, but this is precisely what we lack. We don’t perceive other minds laid out before us like peas in a pod.

            It is hard to find a good parallel for our epistemic position in this regard. Not in abstract objects like numbers, because here the plurality is generated by an operation such as the successor function: we form the idea of a totality of discrete numbers by constructing them from an initial basis by iterations of this operation. To do anything analogous for the concept of mind or self we would need an operation that generates the idea of distinct selves from a basis beginning in our own mind or self: but there is no such operation. The same goes for propositions and sentences, or for points in space and time. The plurality of minds is an empirical and contingent plurality governed by no such logical operations. Perhaps the closest analogy would be the idea of a plurality of universes, as in the “many worlds” hypothesis in physics. Here too there is no prospect of a perceptual basis for the totality in question, as there is for objects within a universe, and indeed we have trouble with the idea of such a plurality of universes. For what makes it the case that these alleged universes (cut off from us as they are) form a real plurality? Aren’t they just parts of the original universe? Likewise, what makes it the case that other minds are really distinct from my mind: what grounds this idea? Do we really know what we mean when we speak of a plurality of universes? We might think of each mind as a subjective universe in its own right, distinct from our own subjective universe, but again how do we perform the necessary individuation? How do I form the thought, he is not me? I am not thinking of his body when I think this, nor of his causal embedding in the world, but of his mind as such: but what is this thought exactly? Isn’t this talk just a necessary device for negotiating the social world, not a well-grounded cognitive representation? I can talk about a bat’s echolocation experience and attribute such experiences to bats, but I have no real conception of what I am talking about; similarly, my talk of multiple selves is not tethered to any intelligible substantive conception. Thus I don’t know what it is for many minds to exist, though I am convinced that such a thing is a fact. To put it differently, I have a cognitive bias in favor of my own mind and think of other minds in the light (or dark) of this bias. I am thinking most clearly when I project my own mind into other bodies, but of course that is not the concept of other minds. My own mind thus exerts a gravitational pull on my thinking about other minds, occluding their independent reality. Hence that sense of puzzlement you feel when you find yourself suddenly wondering what it is to be another person—confronted by a distinct self as real as the self you are confronted by all the time. Don’t you feel like you are facing a complete blank when you try to form an idea of that other self? You grasp what her thoughts and feelings might be, because you have such thoughts and feelings yourself, but you never have any experience of being another person. You never get into an epistemic state in which you are someone else; you are always just boring old you. Hence the difficulty we have in conceiving (really conceiving) of another person’s existence. At least you can see and touch someone else’s body, so you can appreciate its distinctness from your own body, but you can’t see and touch someone else’s mind—so you are stuck with your own mind as a model for everyone else’s.
And that is a terrible model, because you are, by definition, not someone else. Conceptual solipsism is thus forced on us by our very constitution; we can’t transcend ourselves to form a clear idea of other minds. So we must add the problem of many minds to the traditional problem of other minds and the problem of alien minds—we are just unavoidably inept when it comes to knowing minds other than our own. This is not full cognitive closure, perhaps, but it is at least partial cognitive blockage.[3]
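For contrast, the numerical construction mentioned above can be written out in full. A minimal sketch of the successor-style generation of a plurality (illustrative only; the point is precisely that no analogous generator exists for minds):

```python
# A plurality of discrete items generated from an initial basis
# by iterating a single operation, Peano-style.
def successor(n):
    return n + 1

numbers = [0]                       # the initial basis
for _ in range(4):
    numbers.append(successor(numbers[-1]))
print(numbers)  # => [0, 1, 2, 3, 4], each item distinct by construction

# Nothing comparable is available for minds: no operation takes
# my mind as input and yields a genuinely distinct other mind.
```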


[1] Wittgenstein was exercised by this question: “If one has to imagine someone else’s pain on the model of one’s own, this is none too easy a thing to do: for I have to imagine pain which I do not feel on the model of the pain which I do feel” (Philosophical Investigations, 302). As he says, thinking of the pain of another person is not the same as thinking of a pain in one part of my body as in another part of my body.

[2] It would be possible to imagine intelligent beings that explicitly lack the concept of other minds despite grasping their own minds. They just accept that they have no idea what another mind might be, happily or unhappily. Perhaps the thought has occurred to them that their own minds might not be the only ones, but they freely admit that they have no idea what it would be for another mind to exist: the idea of a plurality of minds is beyond their comprehension. They don’t even talk this way, limiting their psychological remarks to self-attributions. At the other end of the spectrum we can imagine godlike beings whose grasp of minds is as solid and extensive as our grasp of bodies, perhaps more so. Humans lie somewhere between these two extremes, possessing language for other minds and a hazy grasp of what might be involved; but this grasp is inchoate and shaky, a mere sketch of a possible concept. Other minds are like missing shades of blue as far as we are concerned: we can recognize their possibility but we can’t give the idea any color (so to speak). 

[3] The binary open-closed picture of human knowledge is no doubt oversimplified: in many cases we have partial cognitive closure combined with a modicum of openness. Mystery comes in degrees or layers or shades of grey. Thus we might say that there is a high degree of mystery where other minds are concerned, i.e. a serious amount of cognitive closure (epistemic boundedness). We have an inkling, but that’s all we have. 

The “Notorious” Nabokovian RBG

I was pleased to read in today’s (September 20, 2020) New York Times these words from Ruth Bader Ginsburg: “At Cornell University, my professor of European literature, Vladimir Nabokov, changed the way I read and I write. Choosing the right word, and the right word order, he illustrated, could make an enormous difference in conveying an image or an idea”. Well said, I thought, especially noting the feminist cloud currently hanging over Nabokov (what with Lolita and all). But this thought was quickly replaced by puzzlement over the word “notorious”: is that really the word we want for this distinguished Supreme Court justice? According to the OED, it means “famous for some bad quality or deed”. Appropriate for a rapper, perhaps, but not for Justice Ginsburg—any more than the word “infamous”. Has someone confused “notorious” with “notable” or “noteworthy” (with a touch of “victorious”)? In any case, the moniker is ill chosen. To my mind RBG proves a maxim I have long espoused, namely that courage and intelligence are inversely correlated with physical size. I find her notable and noteworthy—as well as large in the best sense.

Other Brains

I was watching a nature documentary the other night about slime (The Secret Mind of Slime, PBS). Scientists have experimented on slime and discovered that it can perceive, process information, learn, memorize, and even decide. Slime is smart. Slime is intelligent. One of the scientists (“slimatologists”) speculated agreeably that slime could be the evolutionary origin of brains, because slime uses electrical circuits to conduct its intelligent business, and that is what the brain uses too. Electricity is the source of intelligence in slime and in brains, with brains just a more sophisticated form of slime. We must therefore think again before we denigrate something with the word “slime”. A slimy brain is a smart brain. In the course of elevating slime’s image, the program also drew attention to roots—because roots also turn out to be intelligent. They can perceive, learn, and decide—at least in a primitive form. One of the scientists put it by saying that the plant’s brain is under ground with its reproductive parts above ground, thus inverting the typical arrangement. He illustrated the point by producing a plant pot with a human doll head-down in the soil, feet pointing upwards. It is as if the tree has its head buried in the earth with its limbs held aloft: the clever parts of the tree spend their lives under ground while the lowlier parts waft in the breeze. It is a striking image, illustrating the power of images, and one that set me thinking. Apparently two types of life-form are conceivable and even actual: a brain-up form like almost any animal you can think of, and a brain-down form exemplified in a primitive fashion in plants. Head in the air or head buried in the ground—two possible phenotypes, two evolutionary options.

            This raises intriguing questions. Why haven’t plants capitalized on the brain-down option, having already blazed the trail? Why don’t we see sophisticated brains stuck in the ground while the rest of the organism pokes out above ground? Wouldn’t natural selection favor the emergence of such brains, as it has favored brains that float above the earth’s surface? Slime and roots are pretty impressive intelligence-wise, but why not ascend to the heady heights (so to speak) of lizards and llamas? Yet nothing like this populates our planet. That is an evolutionary puzzle to be set beside other evolutionary puzzles such as the origins of sex or consciousness. There seems no natural obstacle to it, and better brains are generally a good idea. The sense organs needn’t be buried as well, preventing them from responding to events above ground; they could hover in the clear air and relay their messages to the brain lurking beneath. After all, the sense organs of terrestrial animals are generally at some distance from the brain, with non-zero time intervals between reception and response. The eyes could be attached to branchlike structures (think giraffes) and still serve the visual cortex hidden under ground. The brain could then issue orders to sway or withdraw or bristle or spray poison—whatever might prove useful in the battle to survive. Plants already have ingenious ways of thwarting predators, and they could no doubt use some more intelligent adaptations orchestrated by a smart root-brain. Is it that given their niche they simply have no need for smarter brains? That is hard to credit: other things being equal, brains are handy organs to possess, which is why so many roaming animals possess them. Is it that plants don’t move and so don’t need brains?[1] Now we are talking, but again this doesn’t really explain the absence of brain-down creatures, since it is not clear that locomotion is the sole reason that brains evolve. There could be plenty of aboveground limb motion to organize—and why shouldn’t the creature move around under ground? Why shouldn’t the brain-down creature be a burrowing creature? It might even withdraw its air-dwelling appendages as it burrows, thus making subterranean motion easier, protruding them again when it finds a new place to live; or it might simply slice through the earth with parts both below and above ground. There is no good reason why its dual ecosystem should rule out developing a bigger brain. So it is a real question why we don’t find truly intelligent trees with brains the size of an elephant’s. After all, some animals are amphibious, or exist partially under water and partially above water: why shouldn’t the same be true of the earth medium? Wouldn’t the brain be safer ensconced under ground, away from predators that exist only on the surface of the planet? That is how many whole animals lead safer lives, so why not keep the vital brain under ground with only the organism’s leaves and branches sticking out? Trees have been around for an awfully long time and you would think that evolving a better brain might have occurred to them. It seems peculiar that only brain-up organisms have taken advantage of this dandy little adaptation.

            Imagine a planet on which things are very different: here there are many organisms that have adopted the brain-down lifestyle. It is nothing unusual, with colonies and even cities composed of such inverted (to us) creatures. They might have developed technologies that compensate for their relatively static mode of life, such as flying machines for delivering mail. They stay put but they like it that way (like Philip Larkin). Perhaps the atmosphere on this planet is toxic to brains like theirs, or just too hot or too cold. In any case their brains do better comfortably placed under the planet’s surface; indeed, they might be the dominant life form on the planet, with the up-brainers relatively low in the biological order. Here it pays handsomely to store your brain snugly under ground. It is as if plant life has undergone the kind of acceleration that animal life underwent on planet earth during the Cambrian—with many types of intelligent and brainy organism now enjoying the partially belowground lifestyle. It may even be thought de rigueur to exist in this form—a cut above, so to speak. The brain-downers may find the brain-uppers rather a pathetic lot, not fully “evolved”, mildly absurd. Why carry your brain around in a bony box in the open air where it can easily get knocked about? Better to keep it within the soft embrace of the earth, sheathed in a nice flexible waterproof membrane. Brain injury is virtually unknown among these creatures, and not having to keep the brain aloft enables brains to grow to enormous size. These are some seriously smart subterranean brains. So there is nothing logically problematic about the brain-down lifestyle—nothing contrary to sound biological theory. It is just an accident that earth is devoid of such eminently reasonable creatures. We brain-uppers suffer from a prejudice that makes us think that our way is the only way, but plants already contain the seeds of a different way of living—the head-down, genitals-up way.

            From a broader perspective, indeed, the difference is not as great as we might parochially suppose. After all, plants do move around a fair bit: they sway and bend in the wind, they grow upwards and downwards, they drop their leaves, they swarm up walls, they send out their pollen, and they travel with the earth’s diurnal rotation and orbit round the sun. They are not the static entities we are apt to imagine. They are a lot livelier than mountains or rocks. On the other hand, how much do organisms equipped with legs or wings actually move? Some never stray far from their place of birth during their whole life, a matter of feet in some cases. Even the sprinting cheetah doesn’t cover that much ground and is a good deal slower than light. From the point of view of the universe, things on planet earth are pretty sluggish, pretty earthbound. The brains of animals are stuck in the earth’s atmosphere, even if not buried under ground, and they really don’t move that much more than plants. To an alien species used to freer modes of travel, all of life on earth might seem as if it is rooted to the spot. And what exactly is the difference between under ground and over ground? In addition to the gases of earth’s atmosphere there are dust, water vapor, pollutants, flying insects, and birds—all contributing to the solidity of what we call the air. Compared to empty space, earth’s atmosphere is part of the ground, just a bit less cluttered (compare the oceans). And our heads encounter all sorts of resistance as we wander strenuously around: wind, water, obstacles, and projectiles. It is true that our brains are located above our feet, but in the wider scheme of things this doesn’t seem so crucial (consider snakes); and anyway they are not always above our feet, as when we are lying down or upside down. The difference between creatures with their heads in the sand or mud and creatures with their heads in the air is just not that deep. So, again, it is puzzling why we don’t find organisms whose brains are stowed under ground like roots. There is nothing about the location of our brains—and those of most animals on earth—that sets the standard for how a brain has to be located. Other brains might live out their days enjoying other accommodations. Other creatures might be like that scientist’s doll with its head stuck in the soil. Elsewhere in the universe subterranean slime might be the material basis of most intelligence.[2]


[1] Let me note that it is the function of roots to keep a plant rooted in place (among other functions), i.e. to keep the plant at rest. This enables plants to avoid the depredations of wind and other forms of displacement—which less rooted organisms have more trouble with. As the function of legs is to enable movement, so the function of roots is to prevent it—both are functions concerned with locomotion. A tree is not like a rock, which doesn’t move as a matter of simple physical inertia; roots are ingenious devices of motion prevention.

[2] Someone should write a science fiction story based on this possibility entitled Slime Wars or Under Dune or Planet of the Trees. What about the idea of creatures whose brains exist at the center of their planet with the rest of their life conducted at the surface?


Sameness and Skepticism

The concept of identity is central to philosophy. Philosophers are characteristically concerned with whether A is identical to B. Is the mind identical to the brain, is knowledge true justified belief, is the good maximum utility, are numbers sets, is meaning reference, is the good life the intellectual life? Some philosophers favor identity, keeping things to a minimum, while others insist on difference, multiplying what there is. Naturally, then, they have been interested in identity as such: its logic, its metaphysics, and its epistemology. How is it related to indiscernibility, is it known a priori, is it absolute or relative, is it definable? Multiplicity is surely a basic fact about the world—it is presented to us as consisting of many distinct objects—and identity comes with multiplicity (this thing is not that thing but it is itself). It is natural to wonder how diverse the world really is: what is identical to what, and what is different? The very idea of conceptual analysis presupposes the concept of identity: is the concept of knowledge, say, identical to the concept of true justified belief? Synonymy is just the identity relation applied to words. Then too, there is identity through time, identity across worlds, and personal identity. Philosophers are constantly making judgments of identity and difference; you might suppose that it is their main occupation. Philosophers are identity hounds, sniffing out what is the same and what is different. That, we might say, is what philosophy is identical with.
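
            For concreteness, the “logic” of identity just mentioned has a textbook formalization (a standard background sketch, not something spelled out in the text): identity is governed by reflexivity together with the indiscernibility of identicals, Leibniz’s Law:

\[ \forall x\, (x = x) \qquad\qquad \forall x \forall y\, \big( x = y \rightarrow (\varphi(x) \rightarrow \varphi(y)) \big) \]

The converse principle, the identity of indiscernibles, is the contested one; hence the question of how identity is related to indiscernibility.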

            It would be nice if such judgments were skepticism-proof. Then philosophy would enjoy an epistemic privilege. It might even elevate philosophy above science, which is not skepticism-proof. But this seems like a forlorn hope: identity judgments are often wrong. As every philosopher knows, people used to think that the Morning Star is not the same as the Evening Star, but actually they are the same planet, as we have now discovered. Now is that judgment infallible? Sadly no, since we could have made an astronomical mistake: maybe there are really two planets after all—it just seems as if there is one. Perhaps there are doppelgangers everywhere, so that we are constantly meeting different people we think are the same. It might turn out that all of our identity beliefs are false: this seems like an epistemic possibility. Nor is this possibility limited to a posteriori identity beliefs: people used to think on analytic grounds that knowledge is identical to true justified belief, but then they discovered counterexamples to that claim. And maybe that verdict will itself turn out to be wrong—perhaps we misunderstand the concept of justification. Might it turn out that the meaning of “bachelor” is not identical with the meaning of “unmarried man”? These are not easy questions, and it would be wrong to dismiss them as absurd. Skepticism is nothing if not resourceful.

            However, it seems to me that there is one area in which skepticism about identity cannot gain a foothold—an area in which we can be certain that our identity and distinctness judgments are true. I don’t just mean with respect to the mind or numbers or meanings; I mean with respect to the external world, the skeptic’s go-to position. We may not be certain whether external objects exist or what their nature is, but we do infallibly know propositions about their identity and difference. For example, if I now judge that this cup is not identical to that computer, I cannot be wrong about that: it could not turn out that cup and computer are one and the same. Nor can I be wrong about the cup being identical to itself: this is a rock-solid identity judgment. It is true that I might be wrong about whether the cup and the computer exist, or about whether I am dreaming, or about whether such objects are ideas in the mind of God; but none of that implies that I could be wrong about their identity and distinctness. Whatever the truth about the cup and computer is, I must be right in judging them non-identical. They might not have the shape and color they appear to have, but it is certain that they are not the same thing. And note that this is not a proposition about their appearance: it isn’t just that I know the appearances are non-identical. I know that, whatever lies on the other side of the appearances, even if it is just a pair of non-existent intentional objects, these things are not identical.[1] This is as certain as the proposition that each thing is identical to itself. Maybe both objects came into existence a second ago and are not identical to anything preceding them, despite the appearances, but still they are self-identical and other-distinct. Thus reality—whatever it is—is necessarily composed of distinct objects, a great many of them.[2] Not metaphysically necessary, but epistemologically necessary: it could not turn out that our impression of multiplicity is incorrect, though there are possible worlds containing a single object. There are no illusions of multiplicity in which this cup is really identical to that computer. Maybe the cup is itself a computer, and the computer a cup, but still they are not identical. In the same way objects in dreams are not identical: it could not turn out that every object you dreamed about last night is really one and the same object. I can see that the cup and the computer are distinct, though I can’t see that they exist. It is not true to say that every question of identity is self-evident, but in some cases it is. So far as I can determine, this is the only fact about external objects that is proof against the skeptic: not existence, not materiality, not shape or color, not even being a cup or a computer. But whatever the real truth may be about these things, I at least know that they are not the same. Here appearance does entail reality. Presumably this is because identity and difference are such minimal conditions—they commit us to so little. Still, they are something: they form a roadblock to the outright victory of the skeptic. They are a counterexample to the claim that everything about our judgments concerning the external world is fallible.
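
            The modal background here can be made explicit (a standard Kripke-style sketch, not the author’s argument, which is epistemic rather than metaphysical). From Leibniz’s Law together with the necessity of self-identity one derives the necessity of identity:

\[ \Box\,(x = x), \qquad x = y \rightarrow \big(\Box\,(x = x) \rightarrow \Box\,(x = y)\big), \qquad \text{hence} \quad x = y \rightarrow \Box\,(x = y). \]

The companion principle for distinctness, \( x \neq y \rightarrow \Box\,(x \neq y) \), requires further modal assumptions. The claim above is the epistemic analogue: perceived distinctness suffices for knowledge of distinctness, whatever else about the objects remains unknown.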

            I think we can generalize this point beyond perceptually presented objects. We know with certainty that red is distinct from blue and triangles are distinct from rectangles. What these properties ultimately are, and whether they are ever instantiated, is as may be; but that they are distinct is indisputable. Nor is there any doubt that red is identical to itself (etc.).[3] Again, appearance and reality cannot diverge. Likewise, I can be sure that 2 is not 3 and that pain is not belief and that democracy is not monarchy. These identity facts are infallibly known to me—and they are objective facts, not facts about the mind. When they are known they are incorrigibly known. They are as certain as the Cogito. Indeed, the Cogito presupposes such facts, since we must accept that thinking is not identical to the self that thinks (or else it is a mere tautology). All our judgments presuppose conceptual distinctness, ruling out the possibility that our concepts are all identical. The skeptic may convince us that meanings don’t exist, or that we don’t know what they are, but he can’t convince us that the meaning of “plus” is the same as the meaning of “minus”. If he could, we could turn round and ask him whether “ignorance” means the same as “knowledge”. Both skeptic and anti-skeptic presuppose that they know that words and concepts are distinct from other words and concepts. Did Quine ever argue that “rabbit” might mean the same as “table”, or Kripke’s Wittgenstein contend that “plus” might mean “pain”? As with external objects, such judgments are invulnerable to skeptical challenge, which puts them in a very special class. And don’t I also know with complete certainty that I am not you? Descartes could have proposed the “distinct-self Cogito”: I think, therefore I am not you. That is, my knowledge of myself rules out the possibility that I am (so to speak) someone else—I cannot suppose that this self might actually not be distinct from another self. It could not turn out that I am you. That is not a real epistemic possibility. Maybe you don’t exist at all, but if you do I know with certainty that I am not identical to you. There may not be any other minds, but I know that my mind is not the same as yours if there are other minds. I know that I am identical to myself and that I am distinct from you: no skeptical scenario could convince me otherwise. I also know with certainty that I am not identical to this cup, so my distinctness from other things is guaranteed. No evil demon can persuade me that I am merely dreaming that I am different from other selves and from the objects around me. This is why I never dream such things: the very idea is preposterous, verging on nonsensical.

            Given this, the philosopher’s interest in identities and differences is not vulnerable to the skeptic’s corrosive doubts. Such knowledge is uniquely positioned to avoid these doubts because beliefs about identity and difference are so epistemologically undemanding. It is not like inductive knowledge or inference to the best explanation, which are exercises in epistemological adventure and full of risk. This doesn’t mean it is easy to acquire, but once acquired it is hard to dislodge: we really do know that knowledge is not identical to true justified belief, that use and mention are distinct, that goodness is not the same as pleasure. There are no skeptical scenarios like the brain in a vat that can persuade us that these are spurious distinctions; we couldn’t be just dreaming that use is not mention. The scientist must face the possibility that nature is not uniform, but the philosopher needn’t worry that everything may be one.[4] We know perfectly well that the world (whatever it is) contains a multitude and that each of its elements is identical to itself. This may not be much, but it is something. It curbs the skeptic’s enthusiasm.


[1] Non-existent objects can be the same or different: Hamlet is not identical to Macbeth, though he is identical to himself (and to the Prince of Denmark).

[2] In Berkeley’s system distinct objects are distinct ideas in the mind of God; there is no suggestion that object distinctness might be an illusion—that there might really be just one Big Idea. Nor have I ever heard of a skeptic who contends that multiplicity might be a perceptual illusion, as external existence might be an illusion. Even the brain in a vat is confronted by multiplicity—all those fake tables and chairs. Illusions of existence don’t entail illusions of identity. 

[3] Let this point not be underestimated: with respect to any object or property X I know with certainty that X is identical to itself. The skeptic can never take this away from me: it is a piece of substantive knowledge concerning any object that comes to my attention. Such identity knowledge is invulnerable to skepticism (and it is not trivially tautological). When Frege said, “Identity is that relation a thing has to itself and to no other thing” he was stating a profound truth—and one that places a limit on the scope of skepticism.

[4] I am not here talking about philosophical monism in any of its varieties; I am talking about commonsense judgments of distinctness, as that my two cats are distinct or that Tuesday isn’t Wednesday. The world might be composed of one type of thing or stuff, but we can rule out the possibility that all our ordinary objects are really identical (e.g. Mount Everest is identical to London, Queen Elizabeth is identical to Mars, everything is identical to Brad Pitt). Even if the world is one giant particular, it has numerically distinct parts. We can be certain that the world is Many (though not necessarily existent).  


The Parasitic Meme

When Richard Dawkins introduced the word “meme” in The Selfish Gene he did so on the model of the word “gene”, and his discussion of the concept urged an analogy with genes—notably because both are replicators. I want to urge a different analogy (possibly identity) between memes and organisms: the meme is like (maybe is) a type of organism, specifically a parasite.[1] Parasites are passed from host organism to host organism, seldom killing the host but syphoning off its energy resources. Typically they are hard to get rid of by normal mechanical means and can be a damn nuisance (sometimes worse). The parasite needs its host to survive, so too much debilitation is not desirable from its point of view: it must be relatively benign (not like a regular predator). Being a type of organism, usually small (fleas, lice, worms), it has both a phenotype and a genotype, these being geared to its life as a “guest at someone else’s table” (as the original Greek word suggests). It is also subject to random mutation and natural selection (sometimes intentional medical selection). It takes up residence in or on the body and proliferates from that snug perch. It is an animal like other animals, though a particularly crafty one. How then are memes significantly like parasites?

            First we must overcome the idea that minds are non-biological things. I take it I don’t need to say much to dislodge that old idea: minds evolve, are aspects of brains, have a genetic basis, and are subject to the usual rules of natural selection. Belief, desire, and consciousness are biological phenomena. Accordingly, memes, being mental entities, qualify as biological too—they are an aspect of the organism, a manifestation of life. But this is not sufficient to make them count as organisms themselves, even of the parasitic type. The question is whether they meet the other conditions for being (parasitic) organisms. They are clearly replicators, as organisms and genes are. Organisms typically replicate by generating offspring and not by simple division. Likewise, memes don’t divide into two when they replicate; rather, they generate copies of themselves in their hosts. So we know that memes are biological replicators, which is a good start. Further, we can say that they syphon off energy from their hosts: they take up residence in a human brain and persist there by dint of the energy available in a brain. Not too much of that energy—that could be fatal—but enough to stay in existence and replicate themselves in another brain. Like parasites, memes can be a nuisance, but they are not usually so obtrusive as to threaten the life and health of the host. The lice in your hair may make your skin itch, and the memes in your mind can make your mind irritated—but they don’t kill you. You may want to get rid of an annoying jingle or manner of speech, as you may want to get rid of any parasites that come to your attention. Some parasites, however, are okay (gut bacteria) and some memes are welcome to stay. So we can say that memes have some of the causal, behavioral, and functional aspects of parasites: they operate in the same kind of way. They are not predators exactly, but they are scroungers, freeloaders, unwelcome guests (sometimes). Both have a sort of life of their own—a will of their own, interests of their own. The meme seems intent on its own survival even if you don’t like having it around: it is “selfish”. It is more organism-like than gene-like in this respect: more like an autonomous entity with self-directed goals. It is more like an active life form than a recipe for creating life forms.

            But does the meme have a phenotype and a genotype? If we count language as a meme, a very large one, we can find literal equivalents of these notions: its phenotype is universal human grammar and its genotype is the genetic program that is innate in the human animal. Language genes build languages in brains, as limb genes build limbs in bodies. The language meme thus literally has a phenotype and genotype. But what about more standard examples of memes—such as jingles, fashions, and ideologies? Here the phenotype is the psychological profile of the meme: its phenomenology, semantic content, and behavioral dispositions. These determine (in conjunction with other factors) its staying power and contagiousness—its ability to survive and reproduce. A rapidly fading musical meme will not get transmitted, e.g. by humming. The meme’s phenotype is what seals its fate as a viable entity, particularly the way it meshes with the human mind. In this it is just like a parasitic organism. But does it have a genotype too? Possibly, if it is innately fixed, but generally not, so here the analogy may seem to break down. However, we must not be too parochial when it comes to genes. Not all life forms are necessarily grounded in DNA molecules; on other planets the underlying replicators might be chemically quite different. The abstract concept of the gene is just “whatever underlying structures account for heritability” or some such definition. Genes encode and they build—but they can do this in different ways in different conditions. So do memes harbor anything that meets this abstract description? I think they do and must. First, they are constructed from more primitive elements as they pass from one mind to another: it is not a matter of a rubber stamp or cookie-cutter. The meme can only enter another mind if it is first analyzed and then actively reproduced. The jingle must be heard and processed and converted into an inner melody: it must be capable of such analysis and conversion, and it must be constructed from simpler components (notes, rhythms, lyrics[2]). The components have to be copied and strung together so as to resemble the original. This is a non-trivial task. Importantly, the brain must contain states and processes that correspond somehow to the meme’s apparent phenotype—hidden layers of machinery. This is starting to look a lot like a genetic substrate in the wide sense, i.e. a mechanism of transmission. Not chunks of DNA, to be sure, but units that act somewhat as DNA acts—constructively and reproductively. The meme has something analogous to cellular structure; indeed in the broad sense of “cell” (the word derives from monks’ cells) it has cells—basic compositional units. It has a fine-grained mechanism of reproduction that goes beyond its manifest phenotype.[3] The efficiency of this mechanism is key to its survival and competitive potential—how well it does against other memes vying for the same mental space (“Jingle Wars”). So it looks like the analogy is holding up pretty well: memes are biological replicators with phenotypes and genotypes (widely conceived). Maybe they can even be said to grow and die: the Catholic meme grew over the centuries and it looks now to be in decline, possibly to die out eventually. Memes become extinct or lose their dominance (Beatlemania, Communism, the Twist). They are parasites that have their day in the sun but can lose their grip with the passage of time, as bodily parasites can.

            Memes are thus very like parasites, but are they literally parasites? It seems to me that this is not an unreasonable way to characterize them: once we get over the prejudice that minds are not biological, the way is clear to classifying memes as mental parasites. They parasitize the mind (brain): they get in there and live off its resources. What if there were conscious physical parasites that invaded the brain and sucked up its nutrients, affecting the way the host’s mind functioned? They might inject ideas into the host mind, possibly as an aid to their survival. Wouldn’t these be just like our memes—units of energy exploitation with a psychological nature? Parasites of the body have evolved in the usual way, and so have parasites of the mind—selfish replicators in their own right. Each has adopted the parasitic lifestyle, with the brain the focus of the mental kind. So there are actually two types of life on earth: the DNA-based type and the type preferred by the memes. We don’t know much about the basic processes and structures in the case of memes, but we have reason to believe that such processes and structures exist, maybe to be revealed by neuroscience. Perhaps there is a finite set of meme components for human memes and a finite number of combinatorial principles: these are the fundamental transmittable units of the meme universe. They may be quite ancient and fairly recondite—rather like genes—but they are the generative foundation of all meme activity. Whole ideologies may be the elephants of the memosphere, evolving from much smaller and simpler memes drawn from bygone days. If this is right, memes are literally living parasites feasting on the brain’s energy reserves.[4]


[1] In fact Dawkins alludes to this possibility, quoting Nick Humphrey: “When you plant a fertile meme in my mind you literally parasitize my brain, turning it into a vehicle for the meme’s propagation in just the way that a virus may parasitize the genetic mechanism of a host cell” (p.192). However, he does not pursue this suggestion, preferring to focus on the analogy with genes. Whether other authors have run with the idea I don’t know.  

[2] I remember an old Mike Leigh film in which a character went around the whole time singing, “Tasty, tasty, very very tasty; they’re very tasty”, clearly playing host to a meme he couldn’t control (even now that meme resonates in my mind).

[3] We have a name for this mechanism: imitation. But imitation is a complex process of analysis and synthesis, requiring underlying generative machinery, and capable of degrees of effectiveness. It is the meme analogue of the mechanism of inheritance that we now know is based on the DNA molecule (at least on planet earth).

[4] Parasites exploit weaknesses in the body’s defenses, immunological and mechanical; similarly memes exploit the mind’s natural receptivity and plasticity. It is good to be a fast learner, a cognitive sponge, but this can lead to stuff creeping in that does nobody any good. Dangerous ideologies are the price we pay for being easily educable—as parasites exploit the body’s surfeit of energy resources. The human mind is susceptible to meme intrusion because of its generous cognitive endowment; other animals are not much prone to such intrusion.
