Physiological Investigations

  1. It would be wrong to think that all the organs of the body are pumps. The heart is a pump and it features prominently in our picture of the body, but not every organ resembles the heart. We might be tempted to think that the lungs, stomach, and kidneys are pump-like, but what about the bones, skin, and blood? The organs are like tools in a toolbox: they may look similar but they do very different things. Do they all modify something? That too can be made to fit many cases, but does the skin modify anything, or the eyes? The organs are united by family resemblance, not by any shared feature. There are many organs with many functions and modes of operation; we should not try to force them into a single mold, any more than words should all be assimilated to names.

 

  2. We can call the activities of bodily organs “organ games”. They resemble each other much as games do, and they are active like games. They are part of our “form of life”. Organs can also be compared to chess pieces. In the beginning was the deed. Organs have a use, a function: they are like language in this respect. Both are part of our “natural history”. The variety of organs is like the variety of words. Organs are the words of the body, its active components.

 

  3. The relationship between organs and their function is not easy to discern or describe. This is a battle against the bewitchment of our intelligence by means of perception (the organs all look alike and have similar-sounding names). The function is not reducible to the structure, as the author of Tractatus Logico-Physiologicus wrongly supposed. Is the function “contained” in the structure? We can say this, but it is not contained in a “queer” way. What would we say if a heart suddenly stopped beating in its regular manner and instead started beating in a quite different pattern? Would we say it was malfunctioning, or that it always had this function? The case would be like someone unexpectedly giving 5 as the answer to the question “What is the sum of 13 and 278?” The heart simply finds it natural to go on in a certain way without consulting any “interpretation”. Is the function of the heart the same as its possible movements? This is like asking whether the meaning of a word is the same as its possible uses. There is a way the heart is supposed to behave and the way it does behave (normative versus descriptive), but we find it difficult to locate this difference in any discoverable fact. The heart does not beat as it does by following the example of a second invisible heart inside it (a homunculus heart): that would lead to regress and paradox. The function of the heart is “exhibited” in what it does—in its “practice”. A heart could not have the function of beating thus and so if it beat only once.

 

  4. There is nothing hidden or queer about the organs of the body. There are no private organs or beetle-like organs concealed inside boxes. Everything is open to view (though the body may need to be literally opened in order to see the organs). There is no elan vital (a “queer process”). We can provide a “perspicuous representation” of the body’s organs. There are no profound physiological problems about the organs, though there may be puzzles (Harvey’s blood circulation theory of the heart was a paradigm of physiology, comparable to Russell’s theory of descriptions). It is a mistake to intellectualize the body, or to view it solely through the lens of geometry (as D’Arcy Thompson did). The organs are parts of the living human creature; and they have ordinary criteria of ascription. We must not “sublime” them, or take them to be part of another layer of human reality. They are no more remarkable than language, which is public, functional, and heterogeneous. There is no “general form of the bodily organ” any more than there is a “general form of the proposition”. The body is like a city built over millennia: here a very old structure (four limbs), there a newer adaptation (a bipedal gait), and here something quite recent (the human larynx). Organs no more perform a unitary function than speech acts do. Nor is there any deeper level of analysis according to which the body is fundamentally uniform. It would be wrong to try to reduce the organs to simple elements (“cells”).

 

  5. Physiology can never interfere with the workings of the body—the body is perfectly in order as it is—it can only describe it. Physiology does not operate with an ideal bodily form to which actual bodily forms only approximate; that is a myth resulting from taking the body on holiday. The myth is no more plausible than the myth of an ideal logical language. The distinction between organs can be vague (where does the digestive system begin and end?), but this vagueness does not impede the body’s functioning. A tribe might carve the body up quite differently (literally in some instances), according to their “customs”, “practices” and “form of life”, and it would be pointless to argue with them about it. All organs are equally important: it isn’t that some are vital and perfectly formed while others are dispensable and poorly formed. No one would think that language divides into the superior and the inferior (mathematics versus animal husbandry, say), and it is equally misguided to think the heart and brain are superior to the kidneys and the gall bladder. Each plays its own “organ game” with its own purposes and rules. Physiology can never result in a perfect type of body shorn of all defects: that is a chimera, as mistaken as the idea that language could be replaced by a symbolic calculus. We must resist the urge to think this way by conducting a patient examination of the way the body actually works. Don’t think, look! In the end all we can do is perspicuously describe the way the organs are used. There can be no physiological theory of them, contrary to the author of the Tractatus (the body as a totality of hidden platonic forms). Above all, we must steer between a desiccated machine-like picture of the body and a queer spirit-like picture of the body, viewing the body rather as a family resemblance, form of life, natural history, deed-involving, criteria-oriented, public, game-playing, toolbox type of thing. Back to the rough ground! We must abolish the myth of the “queer physiological process”. We must attend to the body’s “grammar”. We may need “therapy”. Then physiology will finally be able to find peace and let the fly out of the fly-bottle.

 

Language, Truth, and Grammar

It would be convenient if only true statements were grammatical. Then we would be able to read truth-value off grammar. Falsehood would be signaled by ungrammaticality. But of course false statements are as grammatical as true ones. Grammar is completely indifferent to truth. The world is what determines truth, not grammar. Meaning is not indifferent to truth, since in combination with reality it determines truth; and some truths depend on meaning alone. But the grammatical form of a statement tells us nothing about its truth even in conjunction with reality; and “Bachelors are unhappy men” has the same grammatical form as “Bachelors are unmarried men”. Grammar is neutral as to truth: it will generate a falsehood just as readily as it will generate a truth. Perhaps we can say that grammatical sentences ought to be true—that is their point—but “ought” does not imply “is”. The rules of grammar are simply not designed to produce the truth, the whole truth, and nothing but the truth. The machine in our head that generates infinitely many sentences from a finite number of elements is a falsity-generating machine as much as a truth-generating machine. When a child learns how to speak grammatically, he or she learns how to construct false sentences as well as true ones. Grammar can never keep a child on the straight and narrow truth-wise (hence all the lying). Grammar will happily lend itself to either enterprise.
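By way of illustration only, here is a minimal sketch (mine, not the essay's) of the combinatorial point: a toy generative grammar in Python whose single rewrite rule builds grammatical sentences from a small invented lexicon without ever consulting the world. Nothing here is offered as a model of the actual language faculty; it merely shows that a rule system of this kind produces false sentences as readily, and in the same grammatical dress, as true ones.

import itertools

# Toy lexicon and one rewrite rule: S -> Det N V Det N.
# The rule consults only the lexicon, never the world, so true and
# false sentences are generated with equal ease.
DETERMINERS = ["the"]
NOUNS = ["cat", "moon", "bachelor"]
VERBS = ["is smaller than", "is married to", "is identical to"]

def sentences():
    """Enumerate grammatical sentences; truth plays no role in the rule."""
    for d1, n1, v, d2, n2 in itertools.product(
            DETERMINERS, NOUNS, VERBS, DETERMINERS, NOUNS):
        yield f"{d1} {n1} {v} {d2} {n2}"

for s in itertools.islice(sentences(), 12):
    print(s)  # e.g. "the moon is married to the bachelor": grammatical, false

Filtering these strings for truth would require consulting the world (or some body of knowledge about it), not the grammar; that is the sense in which the generator is truth-indifferent.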

            There are two aspects to this looseness of connection between grammar and truth: truth of belief and truth of utterance. Utterances (chiefly assertions) can be false as well as true without detriment to grammar, so that error and deception in speech are grammatically possible. People can tell lies while using grammar correctly (but not when using it incorrectly so as to produce nonsense). And beliefs can be expressed in a language of thought with full grammatical correctness and not be true. Thus grammar can be viewed as a device that permits falsehood in communication and in thought; it is no impediment to either. Indeed, it sets them up for falsehood by making falsehood eminently feasible.  It thus makes humans prone to error in a way that non-speaking creatures are not. Animals are capable of both deception and error, so falsehood is part of their mental universe, but they don’t have the facilitating aid of grammar to work with. An animal may be under a perceptual illusion and so form a false belief, and deception is analogous to the lying assertion; but animals are not confronted by the enormous array of possible falsehoods that we speakers are. Our mental universe is brimming with the possibility of falsehood just by virtue of possessing grammar: most of those infinitely many sentences are false, and we have equal mastery of the false ones. We can envisage falsehood all too easily: it is built into our linguistic brain mechanisms. And this means that we can be victims of falsehood all too easily: false sentences can form in our mind according to the rules of grammar, and speakers can transmit falsity by exploiting the mechanisms of grammar. Telling lies and believing falsehoods are things we can do with the greatest of ease, given that grammar has the power to do both. If you don’t have a grammar module in your head, your opportunities for falsehood are greatly reduced. It is much harder to lie if all you have to go on is your skin coloration, and error is limited by the possibilities of perceptual illusion. In humans false belief can be induced by means of the faculty of speech, and speech itself is extremely liable to error. Anyone can produce a false utterance at any time, intentionally or otherwise; and the language of thought can easily operate to generate a false sentence inwardly. Grammar is what makes all this possible: it is the mechanism whereby profligate falsehood enters human life. Not the only one, of course, since we are also capable of perceptual illusion and logical fallacy; but it is the mechanism that allows falsehood on a grand scale, because of its truth-indifferent combinatorial powers. It is the main engine of cognitive malfunction, i.e. false belief. Think of all those conspiracy theories: they are spread by means of language (not by actually seeing what they purport to report), and language works according to the combinatorial powers of grammar. Grammar cares nothing for their absurdity or harmfulness; it just dutifully goes about its constructive business. How did Iago induce so much falsehood in Othello’s credulous mind? Almost entirely by exploiting the power of grammar to construct a false narrative (that business with the handkerchief was strictly secondary). Iago was slyly adept at using grammar to construct linguistic strings that would act on Othello’s receptive mind so as to produce false beliefs. It would be far more difficult, if not impossible, to achieve the same deceptive ends without the power of grammar. 
Grammar is not at all falsehood-averse; it is not intrinsically oriented towards the true.  [1]

            This means that language is a two-edged sword: it enables great feats of truth production, arguably the basis of human civilization; but at the same time—and for the same reason—it permits vast quantities of falsehood production. So grammar is both a very good thing and a very bad thing. From a certain perspective, it can be viewed as a genetically determined disease—the cause of untold error, delusion, paranoia, propaganda, and madness. It fills our heads with garbage, frankly. It’s like a parasite that eats into our mental universe, constantly generating new forms of error—a kind of psychological virus. Once a person is infected with the grammar virus her mind goes berserk—or it easily can once the mechanism is in place. A person lacking a language faculty will be far less prone to error simply because he lacks the mental machinery that enables it; you will not get very far brainwashing such a person or haranguing him into accepting the latest nonsense. But this very source of epistemic pathology is also the foundation of language in all its beauty and fecundity—its ability to convey and express truth. So grammar is a gift with a distinctly double existence: the ability to create great things, but also the correlated ability to introduce virtually unlimited error into human thought and culture. Before humans evolved language they were mentally limited in obvious ways, but they were also free of the side effects of a language faculty like ours. It is the unlimited and unconstrained potential of grammar that makes it such a rich source of error in human life: it has extraordinary productive power combined with absolute indifference to truth. An utterance (or a belief) has exactly the same appearance whether it is true or false, precisely because grammar is truth-neutral, and this enables it to function as a source of false belief as well as true belief. If grammar were truth-sensitive, we could tell the true from the false just by establishing grammaticality, but that is precisely what we cannot do. Indeed, it is hard to see how it could be done given that the objective world and the grammatical faculty are distinct existences: how could the internal rules of sentence formation depend on what is going on in the external world? The grammatical faculty is a kind of formal computational system operating by its own rules, but what makes sentences true or false lies outside this system. So any language like ours will necessarily contain grammatical rules that allow the formation of false sentences, with all that that entails. Language is necessarily a falsehood generator as well as a truth vehicle. It is, in fact, the most prolific falsehood generator on planet earth. It made the human brain into a hotbed of error. Because of language we must constantly be on our guard against falsehood—we are haunted by its possibility. This is why we are always correcting each other’s false beliefs. But animals don’t live in this world of lurking falsehood: they don’t need to argue with each other all the time, criticizing, correcting, disputing. Animals mainly believe what is true without having to worry about believing the false, but humans are rightly concerned about the truth status of their belief systems, given the prevalence of falsehood. We are, in a way, the victims of our own good fortune as grammatical speakers (and thinkers). 
It is entirely possible that our propensity to error, aided and abetted by our talent for grammar, will lead to human extinction (global warming, nuclear weapons). Of course, grammar alone is not responsible for such disasters, but it is a dangerous force needing elaborate policing—anarchic in its powers of falsehood generation. It is too creative—too free, too untrammeled. Truth cannot hold it back. It is, as it were, intellectually unethical—a kind of truth psychopath. It is as capable of falsehood as it is of truth, and doesn’t much care which way it goes. It is value-neutral, yet it generates structures that have the normative properties of truth and falsehood. Grammar produces syntactic structures, but syntactic structures don’t care whether they are true or false; yet these structures shape the very architecture of the human mind. It is syntactic structure that permits propaganda, mental manipulation, and fallacious reasoning—as well as true belief, science, and logical thought.  [2] When syntactic structures participate in the transmission of falsehood they act in ways that are potentially harmful, but they are as prepared to do that as to transmit truth. The syntactic machinery goes about its nimble carefree business in complete oblivion with regard to truth or falsehood. It is rather like the biological machinery of cell production, which can produce healthy tissue or unhealthy tissue. Like many biological adaptations, it is a mixed bag of the useful and the detrimental; it remains to be seen how damaging its propensity to allow error will be. There is some reason to believe that the human mind actively prefers error to accuracy, and grammar gives it ample scope to indulge that preference.

            I can imagine a science fiction story in which an alien population is placidly going about its business without the benefit of a grammatical language, believing mainly the truth and not being susceptible to propaganda and other forms of nefarious persuasion. Then a clever scientist invents a device to be inserted into the brain that will produce things called “sentences”, which will join with existing cognitive structures to enhance thought and enable communication. The invention is greeted as a great advance, but the inventor has not reckoned with a side effect of the device: it keeps producing false sentences because the rules of sentence formation have no filtering mechanism to inhibit the production of such sentences. The result is that the population starts to experience an enormous increase of erroneous belief and misleading speech, some of it quite harmful. But the device is now firmly implanted and cannot be safely removed. The people degenerate into a nattering horde of delusional fools, incapable of distinguishing truth from falsity, believing any groundless nonsense that comes along (they have no cognitive immune system capable of excluding false beliefs). Soon there is war and famine and general strife, along with the rejection of all science—and we know where this will all end. Eventually the species dies out, victims of a technology they could not control. Death by Grammar, it might be called. If dinosaurs went extinct by being too big, where once this was an advantage, we ourselves might go extinct by being too grammatical—too prone to the falsehoods facilitated by the grammatical faculty. We will leave behind us all those creatures not endowed with a language faculty that is powerful enough to create error unlimited. We will have perished from our own language instinct.  [3]

 

  [1] Fiction uses grammar in exactly the same way factual discourse does; there is no grammatical marker for fictional language. Grammar is promiscuous as between fact and fiction, capable of functioning in both domains. And fiction is a very natural deployment of our grammatical faculty, ancient and spontaneous. We can even imagine a tribe speaking only fictional sentences. This tells us a lot about the connection between the language faculty and truth, i.e. the lack of connection. Grammar is really an abstract formal structure on which truth and falsity can equally be hung (vide Chomsky). 

  [2] Language and politics are closely connected: language is what politicians use to persuade and manipulate. Oratory is the main tool in the politician’s toolbox. They use syntax (inter alia) to encourage assent and obedience, often to falsehoods; politics might be defined as the artful promulgation of the false by means of syntax (along with semantics and pragmatics). It is not an accident that Chomsky is interested in both linguistics and politics, the latter being an application of the former. Generative grammar forms the underlying machinery of political discourse: it facilitates and promotes such discourse. No generative grammar, no politics as we know it: grammar is ideally designed to this end given that it is sublimely indifferent to truth. Political speech makes ample use of the power of grammar to generate falsehood.    

  [3] Bear in mind that spoken language hasn’t existed for long in human history, let alone in world history (about 200,000 years by most estimates). So the long-term effects of grammar on survival are not yet clear (ditto advanced intelligence): it might turn out to be maladaptive in the end. Clearly the fertile capacity to generate error is a serious design defect, which might ultimately prove calamitous (I’m thinking particularly of climate change). Are there species elsewhere in the universe made extinct by their powers of syntactic production? Mythology is an offshoot of grammar, and mythology is pure falsehood (the Homeric gods etc.). The same is true of religion and ideology. The possibility of error is the price we pay for our linguistic creativity, and the price may be greater than we realize.

Two Dogmas of Rationalism

We may suppose the rationalist to hold two theses: (a) there is an analytic-synthetic distinction, and (b) the meaning of statements can be reduced to the innate ideas expressed or denoted by the terms in them. Since rationalism accords priority to a priori statements, the odd member of the pair in (a) will be the class of synthetic statements (we will suppose for the sake of argument that the distinction between a priori and a posteriori statements maps onto the distinction between analytic and synthetic statements). What kind of truth do they have? Answer: the kind in which the predicate is not synonymous with, or contained in, the subject—that is, it is different in meaning. But how is this notion to be defined? We know well enough what synonymy is—we recognize it easily when we see it and dictionaries are full of it—but it is more problematic to say what it is for words to differ in their meaning. Is it a matter of the words not being intersubstitutable salva veritate? Or is it a matter of there being different ideas in the minds of speakers who use the words? Neither option is very appealing, but nothing else suggests itself. So the rationalist is unable to define the analytic-synthetic distinction, because of the difficulty of saying what difference of meaning amounts to. Maybe this notion should be abandoned and replaced by the idea that all words really have identical meaning: that is a simpler hypothesis and nothing in science contradicts it. Why suppose distinctions of meaning when no such differences can be defined or discerned? So this dogma is best abandoned: there are only analytically true statements, because the notion of differences of meaning cannot be made satisfactory sense of. The concept of synthetic truth is without scientifically definable content.
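For concreteness, the substitution test just mentioned can be given a schematic form (my rendering, not anything in the essay):

\mathrm{Syn}(e_1, e_2) \;\iff\; \forall S\,\bigl(\mathrm{True}(S) \leftrightarrow \mathrm{True}(S[e_2/e_1])\bigr)

\mathrm{Diff}(e_1, e_2) \;\iff\; \exists S\,\neg\bigl(\mathrm{True}(S) \leftrightarrow \mathrm{True}(S[e_2/e_1])\bigr)

where S[e_2/e_1] is the sentence S with e_2 put in place of e_1. The parallel argument then runs: the rationalist needs the second biconditional (difference of meaning) to demarcate the synthetic, and, as the paragraph above complains, neither intersubstitutability nor ideas in speakers' minds yields a satisfactory definition of it.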

            Second, the rationalist is committed to the thesis that statements are isolated constructions built from symbols that express innate concepts. This suggests that they can be verified individually, by reference to the individual concepts they contain. For example, we can verify “Bachelors are unmarried males” by consulting our innate ideas of bachelor, unmarried, and male. We need not stray beyond this statement to take into account other statements. But this account of verification is too atomistic; in fact, we can only verify analytic statements by making reference to other connected statements. Thus their meaning is more holistic than has been supposed, given the connection between meaning and verification. For example, when I verify that bachelors are unmarried males I have to understand such statements as “Marriage is a legal contract between consenting adults containing such and such commitments” and “A male is one sex of a species with two sexes characterized by a certain type of anatomy”. And these statements themselves contain words that I have to understand by reference to other statements such as “A legal contract is a binding agreement between people with such and such penalties attached”. And so it goes on indefinitely in a network of interconnected statements. There is no isolated verification of a single analytic statement but rather a holistic verification of a whole set of connected statements. Really it is whole theories (sets of sentences) that are confirmed not statements taken in isolation. So the rationalist dogma of reduction to a fixed set of innate ideas has to be wrong, with each idea attached to the individual words of the statement. Rationalist reductionism must be rejected.

            What lessons can we draw from this little exercise in parallels? First, the corresponding arguments against empiricist “dogmas” are about as plausible as these arguments, which is to say not very plausible (I will spare you the details). Second, the problems with empiricism alleged in “Two Dogmas of Empiricism” have precise counterparts in relation to rationalism, so they in no way favor rationalism over empiricism (not that the author of “Two Dogmas of Empiricism” claimed as much). What we are left with, then, if we accept these arguments, is an epistemology that is neither empiricism nor rationalism but exists in some undefined epistemological hinterland. In effect, we are left without any viable conception of sentence meaning and without any theory of how knowledge is acquired—except vague talk of networks, holism, and assent behavior. We are left, that is, with the desiccated picture later defended by the author of the original “Two Dogmas”, consisting of physical stimuli eliciting responses holistically without any viable conception of meaning (indeterminacy plus “naturalized epistemology”). Admirers of “Two Dogmas” need to take this consequence into account. I myself think that the considerations sketched above fail to refute any so-called dogmas in either empiricism or rationalism. On the contrary, the analytic-synthetic distinction is not cast into doubt, and no difficulty in sentence-by-sentence meaning has been detected (whatever “holism” exists is consistent with sentences having their meaning determined by the words they actually contain). At any rate, we can construct entirely parallel arguments designed to undermine the “dogmas” of rationalism.  [1]

 

  [1] It is particularly noteworthy that verification holism (whatever that exactly comes to) can be applied to the acceptance of analytic truths. Holism is not a unique feature of empirical verification. In other words, beliefs always come in groups, more or less extensive, whether empirically justified or based on rational insight (including grasp of what words mean).

Inverted Positivism

I wish to introduce you to the work of an obscure Austrian philosopher. His name is Otto Otto and he lives in the suburbs of Vienna.  [1] He belongs to a group called the Vienna Oval by facetious analogy with the better-known Vienna Circle. Otto (you can take this to be his first or last name according to preference) is a positivist strict and pure, old school to the core. He accepts the verifiability criterion of meaning without dilution or compromise: every meaningful statement must be verifiable. He differs from other positivists, however, in two particulars: he doesn’t think that ordinary empirical statements are verifiable, and he does think that a priori truths are. In particular, he holds that analytic truths are the paradigm of verifiability. He thus maintains that analytic and other a priori statements are straightforwardly meaningful while empirical statements are not. His reason for denying the verifiability of empirical statements is not eccentric: it lies in the power of skepticism. Skepticism teaches us that our ordinary statements of science and common sense are not rationally justifiable, i.e. not verifiable. According to the verifiability principle, then, they are not meaningful. Since we could be brains in a vat, we can’t justify statements about the external world, which means they are not verifiable; and hence they cannot be meaningful. However, we can justify analytic statements, because the skeptic cannot cast doubt on our acceptance of them: we know for sure, for example, that bachelors are unmarried males. So analytic statements are meaningful, but empirical statements are not. That is Otto’s considered position and he sees no reason to deviate from it.
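A schematic statement of the inversion may help (my formulation of the position as just described, not Otto's own notation):

\text{Classical positivism (roughly):}\quad \mathrm{Meaningful}(S) \;\iff\; \mathrm{Analytic}(S) \,\vee\, \text{$S$ is empirically verifiable}

\text{Otto:}\quad \mathrm{Meaningful}(S) \;\iff\; \mathrm{Verifiable}(S), \qquad \mathrm{Verifiable}(S) \;\iff\; \text{$S$ can be conclusively known}

Add the skeptical premise that only a priori statements can be conclusively known, and it follows that a priori statements are meaningful and empirical statements are not, which is exactly the position attributed to Otto above.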

            Now Professor Otto is not an unreasonable man: he is aware that his position might strike some as extreme, even perverse. For how can it be that science and common sense are literally meaningless? Here he is prepared to hedge a bit: they may be agreed to have a kind of secondary semantic status. Conventional positivists make this kind of compromise all the time: they accept that a priori statements are meaningful without being empirically verifiable, holding them to be tautologies devoid of real content; and they also recognize the existence of meaningful ethical statements. They accordingly speak of “cognitive meaning”, contrasting it with lesser kinds of meaning; they operate, in effect, with a semantic caste system. Likewise Otto and his comrades accept that empirical statements have a kind of second-class semantic status: they have a use, a role in communication, even if they lack meaning proper. They lack what Otto is pleased to call logical meaning or rational meaning or epistemic meaning, but they do have pragmatic meaning—meaning in the vernacular non-rigorous sense. They have the same status as tautologies in the rival positivist worldview: tautologies are not meaningful in the sense of being informative and fact-stating, but only in the lesser sense that they are grammatically well-formed and composed of meaningful elements. Likewise Otto and his associates accept that empirical statements are grammatical and composed of meaningful elements, but they deny that such statements have the kind of serious substantial meaning possessed by statements you can rationally accept as true. They wonder what the point is of statements that cannot be used to express knowledge, as empirical statements cannot (because of skepticism). True, such statements are not literal nonsense, either by grammar or lexicon, but they can’t match a priori statements for their ability to express ascertainable knowledge. The latter statements are genuinely cognitive in the sense of being knowledge-expressing, while empirical statements exist in a limbo of uncertainty. Otto privately condemns empirical statements as sheer nonsense, in his strict and pure sense, but he publicly concedes that they have a kind of degenerate meaning—just as the Circle positivists do for analytic statements and ethical statements. What Otto can’t fathom is why these putative positivists believe that empirical statements are meaningful and yet are not verifiable—given that they accept the verifiability theory of meaning. Don’t they see that no empirical statement can be established as true, or even asserted in preference to its negation, given that the skeptic is undeniably right? They fail to understand that a priori statements are the only kind that allow of conclusive verification, and hence qualify as meaningful. Even if they are mere tautologies—which Otto strongly contests—they are at least verifiable tautologies: you can at least know them to be true! You know them to be true by the exercise of reason and knowledge of meaning, whereas the senses can never deliver skepticism-proof knowledge. The problem of induction by itself shows that laws of nature cannot ever be known, so such statements are not verifiable, and hence not meaningful. On the other hand, we can know with certainty that bachelors are unmarried males and that 2 plus 2 equals 4. 
Thus the principle of verifiability shows that only a priori statements are meaningful in the gold-standard sense, with empirical statements trailing somewhere in the semantic dust. They are meaningful only in the sense that “Colorless green ideas sleep furiously” is meaningful, i.e. they are grammatical and composed of meaningful elements. Otto’s position is rather like the position that would have been taken by Popper had he been interested in criteria of meaningfulness: empirical statements can never be verified (though they may be falsified), and so cannot be strictly meaningful. Otto and friends adopt the simple view that meaning requires the possibility of knowledge, and only a priori statements can really be known to be true. They regard the rival band of positivists as sloppy weak-kneed thinkers who refuse to accept the problem of empirical knowledge—don’t they see that no empirical statement has ever been verified? Authentic positivism thus requires us to accept that empirical discourse is strictly meaningless save in a second-class by-courtesy-only sense. It is meaningful only in the way ethical discourse is meaningful, i.e. by dint of grammatical correctness and pragmatic utility.

            What view does Otto’s school of positivism take of metaphysical discourse? One might think they would be tolerant of it since it purports to be a priori, but actually they are as intolerant of it as their rivals in central Vienna. The reason is simple: such statements are never rationally justifiable. No metaphysician can ever establish the truth of his assertions: there is no method of acquiring metaphysical knowledge. It isn’t the lack of empirical content that is the problem—that applies equally to mathematical knowledge—but the problem of not having any effective method of delivering knowledge. No proof procedures and no simple unfolding of meaning, just endless wrangling and futile dispute. So metaphysics is as meaningless as the other positivists maintain but for a different reason. Empirical justification is neither here nor there; what matters is that there be some method for finding out the truth. As to ethics, Otto is ambivalent: he is inclined to regard it as a priori, and he accepts that ethical reasoning is rational, but he is disturbed at the lack of consensus about ethical questions. He is apt to call ethical statements “quasi-meaningful”: they have emotive meaning, to be sure, and they permit rational inference, but they lack the kind of certainty we find in mathematics, logic, and analytic truth. Ethical statements are not as meaningful as statements in these areas, though they are a lot more meaningful than statements of physics, say, with its unverifiable induction-based statements of natural law. Nothing meaningful is unjustifiable, so ethics squeaks in by comparison with science: after all, it has a strong a priori component. But science is stuck in unverifiable limbo: for none of it can be proved. Here Otto is a Popperian: Hume was right about induction, and that means that scientific theories can never be rationally established (though they may be refuted). As Otto likes to say, science never expresses genuine propositions—things that can be true or false—though it can bandy around sentences that are instrumentally useful and are clearly grammatical.

            Otto suspects that the other positivists are unduly influenced by religion. They see that religion is not an area of rational inquiry in good standing, and that it is dubiously meaningful, so they naturally seek to ban it. But they wrongly locate the central defect of religion: it isn’t that it lacks empirical credentials but that it lacks any procedure for establishing its claims. It doesn’t have the methodological clarity of the analytic, the logical, and the mathematical. In fact, it does have empirical criteria of justification; it is just that these criteria tend to undermine its truth (all those alleged miracles never pan out empirically). The reason it is not meaningful is that its claims are not susceptible to rational demonstration—not rationally verifiable. Faith is not rational demonstration, so faith cannot supply meaning. The other positivists wrongly contrast religion with science, thinking that science supplies the paradigm of the meaningful for the true positivist; but science is not strictly meaningful by correct positivist standards. Rather, religion and science are both condemned to semantic destitution, according to the proper form of positivism—the kind that links meaning with rational provability. The problem with religious claims is just that there is no rational way to demonstrate them. If they were analytic everything would be fine (the ontological argument was a valiant effort in that direction), but they clearly are not—they don’t just spell out what the word “God” means. Nor are they mathematical in nature. So there is no way to justify them by rational criteria. The positivists were right to find a tight connection between meaning and knowledge, but they wrongly located this connection in empirical knowledge—of which there is no such thing. They were logical empiricists where they should have been logical rationalists: pure reason can establish the truth of propositions, and hence guarantee meaning, but the senses combined with induction are impotent to establish anything, so they cannot be the source of meaning. Skepticism disproves classical positivism, but it leaves Otto’s version of positivism untouched, or so he contends.  [2]

 

  [1] Why the double “Otto”? The palindromic possibilities, the economy (two letters, a whole name), the contempt for custom: and why not? Nabokov cannot be the only one.

  [2] Did I mention that Otto Otto is a mathematician by training and also an accomplished logician? He is also fond of compiling synonyms. Empirical science leaves him cold because of its lack of formal rigor and its inconclusive methods. This may have something to do with his insistence that real meaning lies in the a priori sciences; he is certainly snooty about the mathematical capabilities of members of the Vienna Circle (those mathematical illiterates, as he refers to them). In fact, he lumps them together with the metaphysicians in their shared lack of methodological scruples. Empirical science is far too much like metaphysics, in his book.

Studying the Brain

Brain studies have proceeded apace since that clump of grey tissue in our heads was tapped as the basis of mind. First it was inspected with the naked eye, prodded and poked; then dissected and anatomized; then stained and examined under a microscope; then electrically recorded, grossly and minutely; and latterly viewed by means of MRI machines and the like. With these instruments we have developed quite a full picture of the brain’s architecture, chemistry, and mechanics—its parts, constituents, and processes. The role of human eyes has been conspicuous in this effort: by using our eyes we have learned a lot about the brain (researchers don’t tend to use smell and taste or touch and hearing). Indeed, we can think of the eye as just another instrument, along with the microscope and the electrode: the eye is the main instrument the brain uses to study itself (with its cornea, retina, fovea, etc.). Brain science is methodologically ocular. It is the eye that chiefly reveals the brain as we now know it. Even when a microscope is used the human eye is still at the epistemic center.    [1] It would be generally agreed that the same methods will continue to be used to accumulate knowledge of the brain—and that no other method would be workable, or even desirable. The brain reveals itself exclusively to these methods: third-person observation, assorted scientific instruments, and recordings of neural activity. In particular, visual perception is the best (and only) route to knowledge of the brain. For what other comparable method is available?

            But this ignores another possible avenue of discovery: introspection, i.e. knowing oneself from the inside. You might reply: but introspection reveals the mind, not the brain, so it is of no use as a means of learning about the brain. This, however, assumes that the mind is not an attribute of the brain—possibly being located in a separate substance altogether. But that is wrong: the mind—consciousness—is an aspect of the brain. The brain—a physical object in space—is the bearer of mental states, there being nothing else to be their bearer. Consciousness is a brain state. This shouldn’t be controversial when correctly understood: it does not mean that consciousness is a physical state of neurons just like the physical states they are known to have by using the observational method. It may well be a state of an entirely different kind, as private and subjective as any arch-dualist might wish; but it is still a state of neurons. Neurons have these states as they have the states discovered by the observational methods described above. Electrical activity is an aspect of the brain, ultimately its neurons; conscious activity is likewise an aspect of the brain, and hence its neural constituents. This means that in knowing about the mind by means of introspection we thereby come to know about the brain: we are learning about the brain by introspecting the mind. Knowledge of mind is knowledge of brain. Note that this kind of first-person knowledge is relatively primitive methodologically: there are no microscopes, electrodes, or MRI machines here. We are using our “inner eye” nakedly, without augmentation or upgrade; so we don’t have instruments that can enhance its resolution power or reveal the fine structure of what is revealed. Still, it provides genuine knowledge of the brain—knowledge that extends beyond what can be gleaned using the first method. So we really have two methods available for studying the brain: the perception-based method mainly centered on the eye, and the introspection-based method solely based on the “inner eye”. Put differently, the brain allows both methods to be used to investigate its nature, revealing different things to each.

            Let’s pause to interrogate the phrase “the inner eye”. Can it be taken literally? It might be thought unavoidably metaphorical since there is no eyeball in the brain responding to light given off by the mind. But that is a much too narrow interpretation of the concept of the visual. First, we have the concept of the mind’s eye, i.e. visual imagination: we see things with the mind’s eye as well as the body’s eye. This use of “see” is not metaphorical. Second, and connected, we use the concept of seeing far more widely than for the case of seeing by use of the eyes in the head: we are constantly seeing things that are not sensed by the eyes (a glance at the dictionary will assure you of this). We could call this “intellectual seeing” but even that does not do justice to the variety of ways of seeing. In fact, in this capacious use of “see”, it appears to mean something like “perceive clearly” or “perceive as a totality”, which goes well beyond the deliverances of the body’s eyes. Indeed, the eyes don’t see at all if they don’t provide seeing in this wider sense: fragmentary and indistinct visual experience doesn’t count as genuine seeing—for nothing is perceived clearly and as a totality (“blooming, buzzing confusion”). Third, the involvement of visual cortex is relevant to the question of seeing: visual imagery involves activity in the visual (occipital) cortex, and it is not to be ruled out that employment of the “inner eye” might also recruit this part of the brain. It is noteworthy that talk of the “inner eye” (like talk of the “mind’s eye”) comes very naturally to us—we don’t likewise reach for the phrases “inner ear” or “inner touch”—and this may indicate an appreciation of the visual character of introspection. Nothing rules out the idea that introspection might have a visual character in the wide sense, and it is certainly not contrary to our habitual modes of speech. In fact, once you become accustomed to the idea, it becomes quite natural to regard introspection as a mode of seeing: it is certainly an example of perceiving things clearly and as a totality. I propose, then, to speak in this way: we can say (what is agreeably neat) that we know about the brain by two kinds of seeing—seeing with the eyes embedded in the face and seeing with the inner eye. Both types of eye enable us to see aspects of the brain: physical and mental aspects, respectively. Thus we can study the brain by employing our two sorts of eye—outer and inner. Now we see it one way, now another, with different properties revealed, depending on the type of eye being used.

            The point of central interest to me at present is that neither eye tells the full story. Actually that understates it: each eye is systematically blind to what the other eye sees. The outer eye tells us nothing about the mental aspect of the brain, and the inner eye tells us nothing about the physical aspect of the brain. Each eye is perceptually closed to what the other eye is open to—rather as the human eye is closed to certain parts of the spectrum. The outer eye reveals quite a bit about the brain but stops short where the mind begins, while the inner eye is very revealing about mental aspects of the brain but cannot extend to its physical aspects. In fact, the situation is even more extreme than that: the inner eye doesn’t even give so much as a hint that the brain is a physical object located in the head, while the outer eye intimates nothing about the existence of consciousness. So far as each eye is concerned, the brain is nothing other than what it can reveal; but each offers only a very partial picture of the brain’s full reality. These two modes of seeing are thus remarkably tunnel-visioned.    [2] They don’t even acknowledge the existence of the aspect of the brain they are not geared to reveal. They are blind to each other’s domain in a very strong sense: constitutionally ignorant, dedicatedly blinkered. It is almost as if they want to deny that the brain has another aspect altogether—the one they can’t resonate to. And this means that, as means of studying the brain, they each suffer from what I shall call methodological closure. It could also be called methodological blindness or partiality or selectivity or divergence or tunnel vision or bias or ignorance, but I use the word “closure” to recall the phrase “cognitive closure”: the type of closure at work here derives from specifically methodological limitations rather than from limitations of the entire cognitive system. We could also call it “instrumental closure”, bearing in mind the point that methods involve instruments, whether natural or artificial. If the eye counts as an instrument of investigation, then we can say that the outer eye is instrumentally closed to mental aspects of the brain, while the inner eye is instrumentally closed to physical aspects of the brain. Both are useful instruments for gaining knowledge of some aspects of the brain but also useless for gaining knowledge of other aspects. They each suffer from a form of instrumental specialization: one is designed to get at physical aspects of the brain (inter alia) and the other is designed to get at mental aspects of the brain. The brain has each aspect just as objectively as it has the other, but our methods of knowing about it favor one aspect over the other, as a matter of their very structure. Outer eyes can vary in their scope and limits from organism to organism; well, our two sorts of eyes also vary in their scope and limits. Each has a blind spot where the other has clear vision. That’s just the way these eyes are made.

            Now this raises an intriguing question: are there any other aspects of the brain that these eyes don’t see? Is there anything about the brain that they are both blind to? Surely that is very possible, since not everything about the brain will be revealed by these perceptual systems: we may need theory and inference to discover properties hidden to these two modes of perception. Of course, the perceptual foundations are likely to constrain the scope of theory and inference, but we can suppose that new properties may lurk in the brain, which belong to neither sort of perceptual faculty. In fact, I think (and have argued) that this must be so, on pain of having no account of the connection between the two aspects of the brain. But I won’t repeat that now; my point is rather that the limitations of both ways of seeing suggest that we are highly confined in our methods for knowing about the brain. If both faculties are so sharply limited, what are the chances that the conjunction of them provides total coverage? Why should the capacities of these eyes, inner and outer, exhaust the objective reality of the brain? The brain might be brimming with properties to which our two eyes are completely blind. True, the two eyes do well within their respective domains of operation, yielding impressive knowledge of the brain, but the limitations of each, as revealed by the other, suggest a good deal of methodological closure—which is to say, ignorance. The two instruments have the limitations of all instruments, given their inbuilt scope and limits; and the brain might well (almost certainly does) have aspects to which they are both incapable of responding. It is left up to reason alone to try to discover what they decline to disclose, and pure reason can only go so far without a supply of primitive data to go on. What if the brain houses a completely distinct set of properties that are not hinted at by either the inner eye or the outer eye? Then we can expect methodological roadblock, instrument failure, and cognitive collapse. At any rate, the two eyes will not themselves be up to the task of disclosing what the brain contains.

            Putting that aside, what are the broader implications of seeing things this way? Phenomenology turns out to be brain science: Husserl was a brain scientist (as were Sartre and other phenomenologists). Moreover, phenomenology relies on the use of an inner eye to establish its results—it has a vision-based methodology. Psychology is also the study of the brain, even in its least neurological departments, since the mind simply is an aspect of the brain. Philosophy of mind is philosophy of the brain, for the same reason. By the same token, brain science is (partly) phenomenology, because consciousness is a property of the brain as such—not just correlated with it. Locke, Hume, and Kant (among others) were students of the brain, since “impressions” and “ideas” are states of the brain (mental states of the brain). We can even describe psychological studies as studies of the physiology of the brain, since “physiology” just means “the branch of biology concerned with the normal functions of living organisms and their parts” (OED). There is no requirement here that the functions be physical in nature (whatever quite that means): they could be irreducibly mental (and that word too has no clear meaning short of stipulation). The study of mind is a physiological study, conducted by means of the instruments available to us, both natural and artificial. The study of the brain thus includes a great many methods and disciplines, many of which are divorced from the methods adopted by what is conventionally called “brain science”. The so-called humanities are all brain science in the end—and there is nothing in the least reductive in saying that. It is just an acknowledgment that the brain is the de facto locus of the mind—where the mind happens, what bodily organ it derives from. The mind is not an aspect of the kidneys or the heart, and is not an aspect of an immaterial substance; it is an aspect of the organ we call the brain. To say that is not to reduce the mind but to expand the brain. This is why it is important to understand that introspective knowledge is knowledge of an aspect of the brain (in fact, several aspects). And it is also important to understand that the kind of knowledge contained in a neurophysiology textbook is only partial knowledge of the brain, omitting everything that can be learned about the brain by introspection—as well as by psychology and other studies of the human mind (history, literature, and science itself as a human institution). The brain is a multi-faceted thing. It is a mistake to let a single mode of access to the brain bias one’s general conception of the kind of thing the brain is. The brain is a far more remarkable entity than our untutored senses represent it as being.    [3]

 

    [1] Compare astronomy: here too the investigator must rely on the eyes and optical instruments such as the telescope. Without these devices he or she would be hopelessly stymied. And it just so happens that distant objects interact with our eyes in such a way as to permit astronomical knowledge. Thus reality and method mesh, but only just.

    [2] Isn’t it true that any instrument, including the human sense organs, contains inbuilt biases that obstruct knowledge of anything outside their range? When you look at an object through a microscope, say, you no longer take in its macro features, focused as you are on its microstructure. If you did nothing but this your whole life, you might naturally come to think that things don’t have macro features. Similarly, telescopes elide facts of distance that are apparent to ordinary vision. The eye itself with its limited visual field gives an impression of non-existence to what lies outside of it (thus fueling idealism). All instruments of knowledge tend to suppress other knowledge, if only by occupying one’s attention: so they are not just partial but also oblivious of, and biased against, realities they can’t reveal. Don’t we have a strong impression when looking at a brain that it can’t contain consciousness? Our eyes give us a biased sense of the possibilities of the brain. From a different perspective it might seem perfectly natural for the brain to be the locus of consciousness.   

    [3] It would be different if every physical object could introspect itself, thus revealing an inner mental being as well as an outer physical being, but so far as we know this isn’t so (even for the staunchest panpsychist). The brain stands magnificently alone in its dual nature. My own suspicion is that the brain is wildly different in its objective nature from what we suspect, a complete anomaly of nature, given our standard modes of knowledge acquisition. A weak analogy: the earth is really very different from other planets in the solar system, which is what allows it to have life and mind on it. It is similar to them in many ways, but in crucial respects it is not—particularly as regards water content and temperature. Likewise, beehives and ant colonies are very different from mere aggregations of insects, exhibiting another level of organization altogether. But these are only lame analogies: the brain is special in a special way (and we don’t know what that way is). I tend to picture it in my imagination as having a completely different color from every other object.  

Mental and Physical Events

Identity of properties is one thing; identity of particulars is another. Particulars can be identical without their properties all being identical. This is obvious: Superman is identical to Clark Kent but the property of being a flying man is not identical to the property of being a journalist. It is just that a single man has both these properties. This distinction has been thought useful in characterizing the relationship between the mind and the brain: hence the distinction between type identity theories and token identity theories. A type identity theory would say that the property of pain is identical to the property of C-fiber firing; a token identity theory would say that every particular instance of pain is identical with an instance of some kind of brain event or another—it need not always be C-fiber firing. The properties are different but they apply to the same particular (so a dualism of particulars is false). Thus we have “non-reductive materialism” and “anomalous monism”.  [1] Two sets of properties, one set of particulars: mental properties are not identical to physical properties, but every instance of the former is an instance of the latter. Here is an analogy for the token identity theory: each member of the set of soldiers has a certain rank—private, corporal, captain, colonel, etc.—and (we can suppose) a certain civilian occupation—lawyer, teacher, greengrocer, tailor, etc. For any instance of the former set of attributes, we can say that he or she is identical to someone with an attribute drawn from the latter set of attributes. That is, every soldier is token identical to someone of a certain civilian occupation—what has the former property also has the latter—but it would be quite wrong to identify military ranks and civilian occupations. The property of being a colonel is not identical to the property of being a lawyer, say. No one would be a “civilianist” about military ranks, holding that ranks are type identical with civilian occupations. Yet there are no soldiers who fail (we are supposing) to also have a (prior) civilian occupation: there is no dualism here of soldiers and non-military workers—as if each soldier has a kind of shadow civilian counterpart. No, he or she just is a particular civilian worker. The same people can have different attributes. Just so events can have both the attribute of being a pain and the attribute of being a C-fiber firing—without the attributes being identical. Thus we have a weaker version of materialism, one that avoids the problems encountered by type identity theories. We have materialism without reductionism.
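Using the essay's stock example, the contrast can be displayed schematically (my formalization; "Pain" and "C-fiber firing" stand in for any mental and physical types):

\text{Type identity:}\quad \mathit{Pain} = \mathit{C\text{-}fiber\ firing} \quad (\text{an identity of properties})

\text{Token identity:}\quad \forall e\,\bigl(\mathrm{Pain}(e) \rightarrow \exists P\,(\mathrm{PhysicalType}(P) \wedge P(e))\bigr)

That is, every event that instantiates a mental type also instantiates some physical type or other, while the types themselves remain distinct. The soldier analogy has the same shape: every bearer of a rank is identical to some bearer of a civilian occupation, yet no rank is identical to any occupation.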

            But do we? Is token identity theory really a form of materialism? Is it strong enough for that? And is it an adequate account of the relationship between the mind and the brain? Consider this question (which I have never seen asked): could any other type of mental event than pain have a token that is identical with a token of C-fiber firing? Could a tickle or a sensation of red be identical to an instance of C-fiber firing—in addition to instances of pain? Evidently, both colonels and corporals can be identical to people who are teachers in civilian life; so can token pains and token tickles both be identical to tokens of the type C-fiber firing? Generally, is it possible for instances of every type of mental event to be identical to instances of the same physical type? Might every mental token, whatever its mental type, be identical to a token of one and the same physical type? Evidently, nothing in logic precludes this: token identity theory is consistent with total homogeneity at the physical level. Suppose there is nothing but C-fiber firing to “realize” every mental type. If that were so, then properties of the brain would have nothing to do with properties of the mind. In the same way civilian occupations have nothing to do with military ranks: these properties are determined by quite different factors (or could be). Nothing about being a tailor makes you into a colonel rather than a corporal, since tailors can be both (they can be trained to be both). Similarly, the mental type of a mental token is not determined by its physical type—it might not even be correlated with that type. Of course, if type identity were true, then we would have such determination, but not if we only have token identity. Token identity is entirely neutral on what determines mental properties: it could be acts of God or human convention or the color of your hair. Two people could have completely different mental lives while having all their physical properties in common—they could still be such that all their mental tokens are identical to physical tokens. A given mental type can be “multiply realized”, as we have been taught, but as a matter of logic it is also true that the same physical type can be “multiply manifested”, i.e. correlated with different mental types. At any rate, token identity in no way rules this out. We may then wonder whether it deserves the name “materialism”, since it is silent on what makes an organism have the mind it has. Mere token identity is a very weak relation, hardly qualifying as a more palatable successor to classic type identity: for it says nothing about the nature or fixation of mental properties, i.e. what makes the mind the mind. What kind of mind you have has nothing to do with what kind of brain you have, according to token identity theory; the theory merely rules out the possibility that mental tokens float free of physical tokens—as soldiers might be thought (falsely) to float free of people with prior civilian lives. A monism of mental tokens allows for any old theory of mental types, or none. It is not a theory of mental types at all.
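The two patterns just contrasted can also be written out, again only as a rough schematic rendering (the second label anticipates the term used in the next paragraph and in note [3]):

\[
\textbf{Multiple realization:}\quad \exists M\,\exists P_{1}\,\exists P_{2}\,\exists e_{1}\,\exists e_{2}\,\bigl(P_{1}\neq P_{2} \wedge Me_{1} \wedge P_{1}e_{1} \wedge Me_{2} \wedge P_{2}e_{2}\bigr)
\]

\[
\textbf{Uniform realization:}\quad \exists P\,\exists M_{1}\,\exists M_{2}\,\exists e_{1}\,\exists e_{2}\,\bigl(M_{1}\neq M_{2} \wedge M_{1}e_{1} \wedge Pe_{1} \wedge M_{2}e_{2} \wedge Pe_{2}\bigr)
\]

Token identity constrains only the tokens, so it is consistent with either pattern of pairing between the types.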

            This point may be conceded (the “uniform realization” point) but it may be suggested that we need to strengthen the token identity theory in a familiar way—by invoking supervenience. We can assert that mental types are strongly dependent on brain types, so that brain type entails mental type—physical properties fix mental properties. All right, let’s go ahead and assert that: the question then is what makes it true. The crucial point is that type identity explains this but token identity does not: if mental types are physical types, then of course you can’t have one without the other; but if they aren’t, the question is left hanging. Without type identity (or something close to it) supervenience looks like a mere stipulation devoid of rationale. It leaves open the question of why the dependence goes in one direction only: why doesn’t the mind also determine the brain? How can the properties be dependent one way but not the other? We certainly don’t have supervenience in the case of the soldiers, for the obvious reason that civilian occupation doesn’t determine military rank (or shape determine color, etc.), so why is the mental case different? No answer is given—a mere logical possibility is asserted. And surely we would want to say that there must be some internal relation between the brain and the mind—something about brain properties that underlies and explains supervenience. Absent a specification of what this might be, supervenience only gives us materialism by main force: it is what we need to wheel in to bulk up token identity into something looking more like classic materialism. More strongly put, unexplained supervenience is mere postulation, not a theory of the mind-brain relation. It (purportedly) fills the gap left by abandoning type identity theory but without really supplying any filler. But token identity alone is hopelessly weak as a theory of the mind-brain relation. We may note that the asymmetry of dependence postulated by supervenience is also exaggerated at best: for there must be some determination from the mental to the physical, as a matter of hard necessity. For example, pain is necessarily linked to withdrawal behavior (or a disposition to it), but withdrawal behavior must be physically produced by the nervous system—so pain must fix some physical aspects of the organism feeling pain (viz. a physical withdrawal mechanism). It is not completely neutral about the condition of the body. Maybe C-fibers are in fact the only ones that can figure in a causal sequence that culminates in withdrawing a limb from the painful stimulus, even though this fact is not transparent to us; in that case there is partial supervenience from the mental to the physical.  [2] And then we will have mutual dependence between mental and physical types, which encourages a type identity theory after all. It turns out that token identity plus supervenience is not sufficient to capture the nature of the mind-brain relation, and that we must move in a more type-committed direction. So token identity alone is no good, and one-way supervenience is no good either; we can’t avoid assuming something like type identity (possibly type composition). Maybe the brain descriptions (and the mental descriptions too) have to go beyond our commonsense categories, but we can’t avoid assuming a close relation of types—and identity seems the only clear way to go.
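For reference, the supervenience claim being invoked can be stated roughly as follows (a simplified, individual-level formulation; versions differ in modal strength):

\[
\forall x\,\forall y\,\Bigl(\forall P\,(Px \leftrightarrow Py) \rightarrow \forall M\,(Mx \leftrightarrow My)\Bigr)
\]

Physical indiscernibility is said to guarantee mental indiscernibility; nothing in the formula says why it should, and the converse direction is simply not asserted, which is the asymmetry complained of above.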
Of course, this will lead to the classic objections to type identity theory, but that is just the old familiar mind-body problem making itself felt. My point is that the attempt to circumvent the objections to type identity by retreating to token identity (with or without supervenience) is doomed, because (a) that is not really a form of materialism and (b) we evidently need to postulate a stronger relation between mind and brain than can be supplied by those theories.

            We tacitly assume that physical types play some role in fixing mental types when we intuitively rule out the possibility that the same physical type may underlie different mental types in instances of token identity. We don’t even consider the possibility that token pains and tickles and sensations of red might all be identical to physical tokens of C-fiber firing, even though that is not logically precluded by token identity theories—because we assume that the mental is fixed in some way by the physical (hence the appeal of type identity theories). But mere token identity is quite compatible with homogeneity at the physical level combined with heterogeneity at the mental level (the analogue of soldiers of different military ranks all coming from people of the same civilian occupation).  [3] But that is not a satisfactory account of the relation between mental and physical events; and supervenience as commonly understood is not sufficient to remedy the problem. Type identity seems like the only way to go—with all the problems that are attendant upon that. Token identity theory is thus an inadequate refuge from those problems. It is too much like saying that all mental events fall under physical descriptions like “occurring n miles from the equator”: that is no doubt true, but it is not a form of materialism. Many types of mental token could correspond to the same such description (all those at a certain latitude, say), but that is not a theory of what makes mental types the types they are. The brain needs to be brought into closer proximity to the mind than that, but token identity alone is not equipped to do it. Type identity, however, is.  [4]

 

  [1] Davidson’s “Mental Events” serves as a classic expression of this type of theory.

  [2] Valves are similar: they can be made of very different materials, but they all require a physical mechanism that opens and closes. Likewise, all tables require a flat, raised, stable surface; these are physical features, though tables vary widely in physical composition. Not all physical facts are compositional facts.

  [3] If we were to observe this situation obtaining in the brain, we would surely conclude that mind and brain have little to do with each other. The same mental type can co-occur with different physical types (“multiple realization”) and the same physical type can co-occur with different mental types (“uniform realization”). The fact that each mental token is identical with some physical token would not alter our opinion. In fact, we observe quite strong correlations and these lead us, reasonably enough, to postulate type identities; but this is not part of the logic of token identity theories. So token identity theory cannot be construed as a relaxing of type identity theory that preserves its spirit while avoiding its difficulties.  

  [4] The type of type identity might be very different from any currently envisaged or even imaginable by us: it might have to be expressed using concepts quite alien to concepts we now use to think about mind and brain. In particular, C-fiber firing may be a far more exotic thing (by our standards) than we realize; it may have hidden depths. Type identity and mysterianism are not incompatible doctrines: the brain might have properties currently mysterious to us that are type identical with mental properties.

Labile Fear

 

 

Labile Fear

 

Fear is a besetting emotion. It is with us always. It is also a universal feature of animal life. Fear motivates like no other emotion. It is unpleasant, intense, and disruptive. We do well to understand it. The aspect of fear I want to focus on is its extremely labile character (OED: “liable to change, easily altered”). It is labile along two dimensions: abruptness of change, and flexibility of object. You can feel an intense and overwhelming fear at a particular time and instantly cease to feel it if circumstances suddenly change: that is, if your beliefs change (beliefs are also highly labile). This is biologically intelligible: circumstances can change rapidly and we need to update our fear emotions accordingly. You thought you were about to be attacked by a bear but you suddenly realize it is only a bush: the emotion evaporates in the instant, with barely an echo remaining. Similarly the onset can be sudden, as when what looks like a fallen branch turns out to be a rattlesnake. Again, this is evolutionarily predictable. Other emotions have more lag time, more inertia, especially attachment and love: they start more gradually, build up, and take time to dissipate. Fear is like pain: it can abruptly end and begin—and pain is one of the things we most fear (death being the other thing). Love isn’t like a sensation at all in that it has no such well-defined temporal boundaries; the closest things to it are sensations of pleasure, which may take time to take hold and time to dissolve. But fear is highly responsive to changing circumstances—hyper-labile. It is nimble, belief-dependent, and easily triggered and terminated. Phobias are a case in point: the fear can be intense in the presence of the feared object, but it quickly subsides once the object is removed. The phobic subject is not continuously assailed by fear of the phobic object if it is kept at a safe distance, but the onset is sudden when confronted by it. The point of fear is to be switched on quickly when the occasion demands and not to hang around once the danger has passed or receded.

            But it is the second labile aspect of fear that really makes it stand out. Here again there are two expressions of this: variability of object and object redirection. You can be afraid of almost anything and of nearly everything; fear is not choosy. There are people who are deathly afraid of celery or butterflies; many people are terrified of non-existent objects; the unknown inspires general dread. We are all afraid of death, disease, poverty, loneliness, failure, and rejection. I am not at all happy with heights. Again, love is far choosier: you can’t love just anything. This feature of fear seems rather counter-evolutionary: why install such an undiscriminating fight-or-flight response? Where is the biological payoff in celery phobia? Perhaps this is an overshoot of the need for flexibility of object; it is certainly puzzling (hence phobias are regarded as irrational). Freud had elaborate theories about why certain phobias exist (celery as a symbol of something genuinely dangerous). But the second aspect is particularly peculiar (in both senses)—what I called object redirection. This is a curious psychological phenomenon, though evidently common enough. I mean the tendency of fear to shift its object from one thing to another for obscure reasons. Suppose you are afraid of becoming unemployed: you then find yourself afraid of individuals of a certain ethnic group. Your fear has shifted from your own joblessness to certain people. Or you fear the police and find yourself afraid of anyone in uniform. You might recognize this as irrational but your fear mechanism has other ideas.  [1] Trauma works like this: it spreads fear around indiscriminately. Thus you are easily triggered by situations with only a slight resemblance to the original traumatic event: from gunfire to firecrackers, from near drowning to water in general. The fear spreads itself wildly from one object to another, finding similarities everywhere. You might be afraid of anyone with a certain accent because of a bad experience with someone with that accent years earlier. The spread is not entirely unintelligible, but it is certainly extreme and unruly. Whole populations can become fixated on a certain fear object as a result of their other fears. This is fear overflow, fear misdirection, fear shift. Fear will readily swap one fear object for another without much regard for rational justification. It is just too labile—too ready to attach itself to inappropriate objects. Fear fizzes away inside, searching for an outlet, and it can easily be redirected to objects not deserving it.  [2] We need a catchy phrase for this so that its prevalence can be memorably captured (compare the phrases “confirmation bias”, “cognitive dissonance”, “sublimation”, “projection”, and the like): how about “fear shift” or “fear retargeting” or “fear transference”? It is the marked tendency of fear to latch onto anything in the general vicinity—the analogue of loving any blonde person because you love one blonde person (which is not a real thing).

            We have all heard FDR’s famous statement, “The only thing we have to fear is fear itself”. Is this true—can it be true? Can we fear fear? You can be afraid that you will feel fear in the heat of battle, and you can be afraid that other people will be afraid of you and hence attack you: but can you fear fear itself? The answer would appear to be No: for what is there about fear in itself that should occasion fear? How can that emotion be a proper object of fear, any more than other emotions? Can you be frightened of hate as such (as opposed to its possible consequences)? There is nothing intrinsically dangerous about fear considered in itself: it is just a feeling. You can be afraid of the consequences of fear (you might ignobly run away when battle is joined), but the emotion itself is not fearsome. So despite the ability of fear to take objects seemingly at random, it cannot take itself as object—any more than happiness can be feared. Have you ever heard of a case of fear phobia? The statement in question is at best misleading; it must mean something like, “We should be afraid of the consequences of a certain kind of fear, such as violent action”. With respect to fear itself, it is not so labile as to be able to latch onto that.  [3] Can you fear prime numbers or remote galaxies or moral values or electrons? Doubtful—though celery and butterflies evidently can arouse real fear. So fear is not crazily labile, just pretty damn indiscriminate. In understanding and mastering it we need to be aware of its power to mutate and metamorphose and redirect, but we needn’t be concerned to curb it in relation to everything. We must not be paralyzed by fear or dominated by it or bamboozled by it, but we do need to respect its powerfully protean character. It is exceptionally plastic, malleable, and volatile, but not absolutely bonkers. Fear is not a form of insanity, though it comes close sometimes.

Freud thought that sexual desire lies behind almost every aspect of mental life, so that it needs to be understood and regulated; the more plausible view is that fear gets its talons into almost everything, so it needs to be understood and regulated. This applies as much to private life as to international politics. We don’t need to fear fear; but we do urgently need to understand it. It is reported that a few people feel no fear as a result of physical abnormality (the amygdala is supposed to be involved): it is hard for the rest of us to grasp what this must be like (envy would not be inappropriate), but certainly such individuals have a very different mental life from the rest of us. They are not plagued by this wayward, erratic, alarmingly anarchic force; they are not victims of their own cerebral fear centers. The existentialists focused on anxiety (angst) as the prime emotional mover of human life, but fear is surely the more pervasive and active force in our lives, in all its varieties and manifestations. We rightly fear a great many things, but we also unreasonably fear many things. It is hard not to see our fear responses as a botched evolutionary job—cobbled together, out of control, riddled with design defects. Apparently, different components of it evolved during different evolutionary periods, as ecological demands changed over time; it was not intelligently designed to know its proper scope and limits. Fear is a biological mess, a simmering hodge-podge, and certainly not designed with our happiness in mind (it clearly contraindicates the idea of a divine creator). Not having it at all might not be such a bad idea. Imagine going to the dentist with no fear in your heart! You could still make rational judgments about possible sources of danger, but no more of that nasty oppressive emotion clogging up your brain. We all have to master our fear by effort of will, recognizing that it is not always beneficial; why not make a drug that simply removes it from the human psyche?  [4] Pain, yes, that seems necessary to a safe and successful life; but fear we could definitely live without. Do we really need our eye-watering fear of death? That fear is a serious blight on our life (animals are happily free of it and do quite well in its absence): we don’t need that biting searing debilitating feeling clouding our days! Fear is not something we should simply accept as a fact of life; maybe it is just a temporary aberration in human history. We could certainly do with less of it, or at any rate a more rationally ordered fear economy. Wouldn’t it be nice to live just one day without fear of any kind?  [5]

 

  [1] When does fear enter human life? It doesn’t seem to exist in the newborn, except perhaps in a very rudimentary form; it awaits the development of reason. It must be a traumatic experience when fear finally makes its appearance: “What is this horribly upsetting feeling I’m having?” The Garden of Eden was clearly a fear-free zone until knowledge and sin introduced fear into human existence. Fear and knowledge are closely intertwined: you can’t fear what you don’t know. When will it be over? Only with death apparently: then fear will be no more.

  [2] Thus fear is easily manipulated: it is just so mobile and malleable. Fear is the secret weapon of the dictator—his or her raw material (fissile material, we might say).

  [3] What if someone said, “I am not afraid of anything except being afraid”? Wouldn’t we reply: “So you aren’t afraid of the consequences of being afraid either but just of the emotion itself—that makes no sense”. The only way we could be afraid of fear is in virtue of its unpleasant phenomenology—its kinship to pain. But could we be terrified of that unpleasantness? It is like the idea of being in love with love: could you be desperately in love with it? It can’t be just like other love objects. Metaphor is at work in such locutions.

  [4] This drug would make Heidegger’s philosophy virtually obsolete. And could Kierkegaard have written Fear and Trembling and The Sickness unto Death?

  [5] Then too, there is the question of shame: people tend to be ashamed of their fears and don’t like talking about them. Do we really need the burden of shame in addition to the fear that prompts it? Aren’t we burdened enough already?

Identity

 

Identity

 

Philosophical logicians usually distinguish between qualitative and numerical identity. The former can hold between one object and another, meaning exact similarity (we can also define a notion of partial qualitative identity). Numerical identity (which from now on I will simply call identity) is supposed to relate objects only to themselves: nothing can be identical, in this sense, to an object that is not it. It is supposed that every object stands in this relation to itself, using “object” in the most capacious sense to include numbers, properties, functions, processes, etc.  Identity appears to hold even between fictional objects and themselves—Sherlock Holmes is identical to himself. So the relation of identity is absolutely universal; moreover, it is necessary—everything is necessarily identical to itself. This is not true of qualitative identity, since it can be contingent that two objects are exactly similar. It is commonly accepted that the identity relation holds trivially of everything: just by being something an object is self-identical. For this reason some people have felt that identity is a pseudo relation—that there is something suspicious about it. It does seem exceptionally uninformative to be told that an object is identical to itself (tell me something I don’t know!). Anyway it is supposed that we know what we are talking about: we know what “identical” means in this special sense—to the point that we can recognize the concept as fishy in some way. But do we really know what identity is in the intended sense? Do we have a genuine concept of identity? Can we articulate what we mean by the word?

            It might be thought that we have a number of possible avenues of explication available: Leibniz’s law, a famous dictum of Frege’s, and the involvement of sortal concepts in identity.    [1] Taking the last first, it is sometimes said that identity statements are incomplete without the specification of a sortal, as in “Hesperus is the same planet as Phosphorus”; accordingly, identity is sortal-relative, or at least sortal-dependent. Thus we can explicate the nature of identity by saying that it essentially involves the kinds of the objects concerned—same planet, same animal, same number. It is not just an elusively bare abstract relation that holds indifferently of everything there is (and is not); it has specific concrete substance (in the Aristotelian sense of “substance” as well). But this doctrine cannot be right: it confuses identity statements and identity facts. Maybe statements of identity need sortal supplementation (or maybe not    [2]), but the nonlinguistic fact of identity surely does not. How can an object’s identity with itself depend on its kind? What does that even mean? Does it mean that there is no relation of identity except one that incorporates a sortal kind—as with planet-identity, animal-identity, and number-identity? But it is hard to see what this is supposed to mean: aren’t these all instances of identity tout court? Isn’t there an overarching relation of simple numerical identity? Nor is it clear how much elucidation this doctrine affords: we are still left wondering what the import and point of identity is supposed to be. I am the same human being as myself: big deal, what’s the point of saying that? We have a bunch of sortal-relative identity relations, but we still don’t know what they are exactly—and why objects bother to instantiate them. What does it mean to say that x is the same F as y? What is this sameness with oneself?

            Here we reach for Frege’s famous dictum: identity is that relation a thing has to itself and to no other thing. This is ritually intoned, as if it contains self-evident wisdom, but it is not critically examined. The thought is that other equivalence relations don’t satisfy the definition because they can relate an object to other objects: for example, I am the same height as myself, but also the same height as other people—whereas I am identical to myself, but not identical to anyone else. The identity relation can only relate an object to itself, but other equivalence relations can hold between an object and itself and other objects. There are three problems here. First, what about a universe empty save for one object? In such a universe sameness of height does not relate me to other objects, since there are none—so it would count as the identity relation according to Frege’s dictum. Intuitively, what have other objects got to do with my self-identity? Not relating me to other objects can hardly count as essential to my identity with myself. Second, the dictum is circular if offered as a definition: for we need to understand what is meant by “other things”. Surely this phrase means “things not identical to the given thing”, but then the concept of identity is being presupposed: you already have to grasp what identity is before you can understand Frege’s dictum. Third, and most telling, the dictum doesn’t single the identity relation out even in the actual world; other relations satisfy Frege’s condition. Take the part-whole relation: certain objects stand in this relation to me, but they don’t stand in this relation to anyone else. My right arm is part of me but not part of anyone else—so the part-whole relation holds between me and parts of me but not between me and parts of other objects. You might object that the part-whole relation doesn’t relate me to myself but only to parts of me (unlike the same-height relation), but consider the relation of improper part: that does relate me to myself but not to anyone else—I am not a part, proper or improper, of anyone else. Yet part-hood and identity are not the same relation. This could have been expected on intuitive grounds, because Frege’s dictum is very general—identity is being said to be any relation that relates an object to itself but not to anything else. That is unlikely to single identity out uniquely, save per accidens (this is why it fails for the single-object universe). The dictum fails to capture the specific idea of numerical identity. Not that Frege meant it as a strict definition (he was too circumspect for that) but more as a useful heuristic; in any case, it doesn’t help with the task of giving the notion of identity clear content. We can’t complacently cite the dictum as explaining what that concept consists in.
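The circularity becomes vivid if one tries to write the dictum out as a definition (a rough schematic rendering, not Frege’s own formulation):

\[
R \text{ is the identity relation} \iff \forall x\,(Rxx) \wedge \forall x\,\forall y\,(Rxy \rightarrow x = y)
\]

The second conjunct is just the clause “and to no other thing”, and it already contains the identity sign; and, as noted, the improper-part relation satisfies both conjuncts without obviously being the identity relation.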

            Third, we have Leibniz’s law of the indiscernibility of identicals: does this tell us what we are talking about when we talk of identity? The law has the advantage of being true, indeed necessarily true, but it is limited as a method for explaining the concept of identity. It offers only a necessary condition to begin with, and the converse principle is far from self-evident on a natural understanding of it (i.e. the identity of indiscernibles). But the main problem is one of triviality: what precisely does this law assert? It is awkward to state because we have to say something like, “If two objects are (numerically) identical, then they must share all their properties”. Two objects? We blushingly shift to using variables: “If x and y are identical, then they must share all their properties”. But this is scarcely any better—x AND y? What is really meant is just that an object is always exactly similar to itself: an object is always qualitatively identical to itself. Where there is numerical identity there is qualitative identity. True enough, but does it help with understanding the concept of (numerical) identity? We are being told that objects always have the properties they have and no others—again, that is hardly news. But worse, it uses qualitative identity to explain numerical identity: it derives the latter concept from the former. Construed as an effort to get a handle on identity proper, it invokes qualitative identity, stating weakly that objects are always exactly like themselves. Surely we are entitled to expect something better—something meatier, more apropos (but see below). So we are still lacking any decent account of what this alleged special relation of numerical identity comes to—some kind of elucidation, analysis, insight. Instead we just have the identity relation staring blankly and inarticulately back at us, hoping we will somehow get the hang of it. It seems unnervingly self-effacing.
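In the usual second-order notation, the two principles being distinguished here are (a standard rendering, given only for reference):

\[
\textbf{Indiscernibility of identicals:}\quad x = y \rightarrow \forall F\,(Fx \leftrightarrow Fy)
\]

\[
\textbf{Identity of indiscernibles:}\quad \forall F\,(Fx \leftrightarrow Fy) \rightarrow x = y
\]

The first is Leibniz’s law proper; the second is the converse whose self-evidence is questioned above.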

            At this point the disturbing figure of the skeptic enters the conversation: what if there is no such relation as identity? It is proving so elusive because it isn’t really there. What do we mean when we say an object is identical to itself—what are we thinking? Nothing, according to the skeptic: our wheels are spinning, our thought process deceiving us. It might be contended, skeptically, that the concept has its origin in certain epistemic and linguistic practices but that it has no reference in objective reality; it is a kind of illicit projection, a phantasm of the intellect. Things are not round and heavy and red and self-identical: that last is just not a real property of things, but a reification of our epistemic and linguistic practices. We often don’t know that we are dealing with a single object and can therefore discover the truth of a statement of the form “x is identical to y”, but that doesn’t imply that the real world contains objective identity relations. The concept of identity is useful to us in recording our epistemic dealings with the world, but it shouldn’t be taken to denote a genuine constituent of objective reality—for what kind of constituent is it? Do we see or touch it, or need it in our scientific theories, or feel it in ourselves? Why not just admit that it isn’t part of a truly objective conception of things (perhaps rather like the commonsense concept of an object). There are objective similarities between things to be sure, and we can speak of things as indistinguishable, but the idea of numerical identity is a chimera. So says the skeptic, and he is not without rational grounds for his opinion. However, the position is extreme and I am inclined to suggest something weaker, though in the spirit of the skeptical position. This is that talk of numerical identity is best interpreted as an extension of the concept of qualitative identity, which is perfectly meaningful: to say that an object is identical to itself is just to say that it is exactly similar to itself. Two distinct objects can be exactly similar, thus warranting talk of identity between them (“these two balls are identical”), and a single object can be exactly similar to itself too. So there is not an extra primitive relation in the world called “numerical identity”; this talk is really just the application of the concept of qualitative identity to solitary objects. Every object is qualitatively identical to itself—that is, every object is self-identical in just the sense that two objects can be said to be identical. There is really just the concept of qualitative identity, and it can hold between distinct things or one thing and itself. Of course, statements of qualitative identity between an object and itself are trivially true, but then so is the proposition that every object is self-identical. An advantage of this way of seeing things is that we need not recognize any ambiguity in the word “identical”: it always means so-called qualitative identity. And there is little intuitive plausibility in the view that “identical” varies in meaning as between a numerical and a qualitative sense. If this position is correct, there is no identity relation such as philosophical logicians have supposed—no separate kind of identity; there is just a single relation of similarity—but objects can stand in this relation to themselves. To be self-identical is to be self-similar. I am completely similar to myself, hence “self-identical”. 
We can easily specify what this identity relation consists in: the sharing of properties. We know what properties are and we know what sharing them is—well, that is what identity is all about. This relation can hold between several objects and it can hold between a single object and itself. If I say that I am identical to myself, I am saying that I am exactly similar to myself—just as I can be similar to other people (perhaps exactly similar). The statement is no doubt peculiar, because hardly disputable, but it is the interpretation that makes the most sense of identity talk; it’s either that or skepticism about the whole concept. We could try maintaining, feebly, that the concept of numerical identity is primitive and inexplicable—simply not capable of any articulation—but that seems unattractive in the light of the alternatives. It is preferable to hold that so-called numerical identity is analyzable as reflexive qualitative identity. After all, that relation clearly exists and has a clear content—why introduce anything further?
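The proposal can be put schematically (a rough rendering, with F ranging over properties):

\[
x = y \;=_{\mathrm{df}}\; \forall F\,(Fx \leftrightarrow Fy)
\]

All talk of “identity” then expresses exact similarity, the sharing of all properties, whether the terms pick out two objects or a single object twice over; self-identity is simply the reflexive case of qualitative identity, and Leibniz’s law falls out as a definitional truth, as the next paragraph observes.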

            What are the consequences of this revision in the way we think of identity? All those puzzles of identity must now be recast in terms of self-similarity, as must the idea of a criterion of identity. This may not (should not) make a difference to the substantive issues, but to be clear in our minds we should think in the recommended terms. There is nothing real to identity over and above self-similarity. And since philosophy is very largely concerned with questions of identity, particularly the identity of concepts and properties, the revision must have an impact on how we understand philosophical questions. Concepts (meanings, intensions, properties) can be exactly similar to themselves, this being what concept identity comes down to. If the concept of knowledge, say, has a property not possessed by the concept of true justified belief, then the two concepts cannot be identical; for then the concept of knowledge would not be qualitatively identical to the concept of true justified belief. Identity is always qualitative identity, so concepts can’t be identical unless they share the same qualities (this is Leibniz’s law in another form). In a way the concept of identity already contains Leibniz’s law, because what it means to say that x is identical to y is just that they share the same properties. It is not some further tacked-on thesis that identical objects are always exactly alike: self-identity simply is sharing the same properties—x being identical to x just is x being qualitatively identical to x. This is why Leibniz’s law is so self-evident: it is really a kind of tautology. This is as it should be.    [3]

 

    [1] I have consigned to a footnote another familiar attempt to explicate identity because the attempt barely gets off the ground and is lamentably confused, namely that identity is a relation between signs. That is, for objects to be identical is for them to be the single denotation of two terms. The trouble, obviously, is that object identity can’t depend on language. Still, the suggestion is helpful in illustrating what a substantive account of identity might look like: at least we are given a nontrivial analysis of the concept (just a wrong one). Compare: identity is the co-reference of ideas in God’s mind—substantive enough but none too plausible.

    [2] The sortals go into the fixation of reference, not into the type of identity relation involved, as in “that elephant is identical to that elephant” said while pointing to different parts of the same elephant. But objects cannot be incomplete in the way bare demonstratives are.

    [3] Old hands will see the imprint of several philosophical logicians on what I write here (Geach, Wiggins, Kripke, and others). It should be evident that what I say is radical to the point of heresy—I myself have always assumed that numerical identity is transparently a concept in good standing. I am as shocked as anyone by the skeptical reflections herein sketched (contrast my Logical Properties (2000), chapter 1).
