The Making of a Philosopher (Part Two)

The following is a sequel of sorts to my The Making of a Philosopher (2002). Like that work, this is to be an intellectual memoir, not a marital, medical, musical, or muscular one—a memoir of the mind. It’s about what has gone on in my head.

I originally applied to university to study economics. This seemed like a practical subject, destined to provide employment, and I was already taking an A-level in it (for which I subsequently obtained an A). My strong subjects in school were mathematics and English (not too much memorization), and economics combined the two nicely. I might easily have become a professional economist (I still take an interest in the subject). But I happened to read some Freud and found it fascinating, so I switched to psychology in my applications. This subject too would lead to gainful employment, possibly in the educational field (I had no thoughts of an academic career). I was trying to be sensible, but not bored; after all, it is your whole life we are talking about. This occurred around 1968, a momentous year on the world stage. I therefore studied psychology at Manchester University, obtaining my degree in 1971 (B.A., First Class), followed by an M.A. in psychology in 1972. Philosophy formed a small part of my undergraduate degree: an introductory course on Plato and Sartre and a history and philosophy of science course. I also did some independent reading in philosophy, but nothing like what a student of philosophy might undertake; I was woefully undereducated in that regard. Nevertheless, I ended up studying philosophy at Oxford on the B.Phil. in 1972 (long story, recounted in my aforementioned book). That was a considerable challenge, because everyone else on the course had a substantial (and exceptional) undergraduate education in philosophy, of a kind alien to my own undergraduate acquaintance with the subject (Husserl and Adolf Grünbaum mainly). I had a lot of catching up to do, to put it mildly. I am surprised I came out the other end in one piece.

In 1974 I began my first philosophy job at University College London, after a mere two years of studying philosophy (four years of psychology before that). I didn’t teach philosophy of mind and made no use of my two degrees in psychology (including a good deal of experimental psychology). I mainly taught philosophical logic and philosophy of language (my first lecture course was on truth). I was very conscious of the fact that my philosophical education was patchy, embarrassingly so, and that I had never had the chance to do any serious research in philosophy; I could really have used a couple of years on a JRF or something similar. From then on, I was on the academic treadmill: tutorials, lectures, committees, writing for the journals, book reviewing—the usual routine. I never had much time to immerse myself more widely and deeply in philosophy, though I tried as best I could. And so it continued for the next 38 years! I got through my career, but always going from pillar to post, always rushed, pressured, tired, anxious, barely managing to keep my head above water. I never had the opportunity to just let my mind go where it wanted to go, read whatever I wanted to read, write whatever I felt like writing, think about whatever I liked. I never had that kind of philosophical leisure. I suppose I could say that I had no philosophical freedom. I never had that couple of years to develop my philosophical mind under conditions of unimpeded reflection. I got used to it, but it always grated, rankled, irritated. I imagine it must be much the same for many people: not enough time, not enough energy, too many obligations.

Then I retired (2013). Everything suddenly changed. The pressure was off. The treadmill had been discarded. No more teaching, no more department work, precious few invitations. Each day was a free day. The year ahead was not mapped out by the demands of a university schedule. No more breaking off a train of thought because a lecture had to be delivered the next day. The immediate result was an uptick of energy and concentration: no more teaching fatigue, no more interruptions, no more having to show up for meetings of one kind or another (supervisions, office hours, department meetings, etc.). My time was my own. Let me repeat that, because it’s important: My time was my own. I could do with it whatever my heart desired; I was subject to no temporal demands (Do this! Do that!). I was thus able to immerse myself in philosophical thinking, reading, and writing without external impositions—for the first time in my life (I’m not counting childhood). This produced a qualitative change in my state of mind, my philosophical consciousness, my very existence. I could read all the things I never had time to read, think without distraction for days on end, weeks, months, years. It has been a kind of bliss, foreign to my previous existence, a rebirth of sorts. And not only philosophy: I could read all the literature and science I ever wanted to read, which also contributes to one’s philosophical development. Writing becomes a pleasure not a torment, because there isn’t that nagging feeling that you will have to break off soon in order to fulfill your professional duties. You don’t have to quit in mid-sentence, mid-thought. Can you imagine? Being a professor uses up a lot of energy—have you noticed that?—and this energy could be deployed in other pursuits. To retire is to be reborn (but don’t leave it too late). I also don’t feel that I have to sacrifice other aspects of my life to the academic treadmill, including personal relationships (not to mention sport, music, etc.). Apart from anything else, life becomes a lot more enjoyable.

But the main point I want to make, reporting on my own case (I am still a psychologist, remember), is that in this phase of my life I have achieved a degree of breadth and depth in philosophy that I would never otherwise have achieved. I would even say that I have become over the last ten years a different kind of philosopher. I wish I could characterize this exactly; it has to do with gaining a larger perspective, an ease of thought, a facility of expression (writing philosophy well takes years, decades, of effort). I can just see further. So, I think of this phase of my mental life as a new philosophical life; I am not the same person philosophically. There was a time when I was a philosophical novice, a time when I was an apprentice philosopher, then a time of professional maturity, and now a time not of advanced age or twinkly wisdom but of fresh growth, of new beginnings, of excitement and exhilaration. I could call it creative, but that doesn’t quite hit the nail on the head: it is more a matter of discovery, mastery, arrival. I could almost call this sequel to my old book The Making of a New Philosopher. It isn’t something I ever anticipated.

Of course, there is an irony in all this, a bitter irony one might say, on which I have no desire to dwell. I will put it as abstractly as possible. I am concerned with inner psychology, not external circumstances. First, and obviously, there is this blog, the fruit of innumerable hours of quietly intense lucubration. It must be a couple of thousand pages by now. This has been my preferred mode of philosophical expression during this period of personal renaissance—short, to the point, uncluttered, unbound. It is to be noticed that this material has not found its way into print, for several reasons I won’t go into. I feel fortunate that such a method of publication now exists, or else my inner world might not have made it into the outer world. I like what I have written, more so than before. But my inner world has been removed from the outer world of academic philosophy, producing a strange schism in my self-consciousness. It’s not exactly Socrates or Galileo or Russell; it is more a kind of intramural etiolation (here goes the abstraction). We might call it blank-slating, oblique erasure, identity removal. Of course, I still have good friends at the highest levels of philosophical (and other) achievement, whose names I will not mention (you can guess the reasons), so I am by no means cut off from professional contacts; and it’s true that my geographical location increases the degree of professional estrangement. Still, I feel as if nothing I say will ever be received as it once was. And, oddly enough, I don’t much care: my inner world has eclipsed my outer world—that academic carapace the professional professor carries around with him or her has been shed. My inner world has so expanded that it reaches to my subjective horizon. There has been a metamorphosis: I have become a different kind of being, curiously aloof, weirdly autonomous. It is a kind of brimming isolation, supercharged solitude. The banal life of the professional academic has been abolished, to be replaced by a peculiar kind of originality—the reborn corpse, the retired youth, the liberated prisoner. I have a paradoxical duality, the flourishing failure. And I kind of like it. My intellectual world is a world of my own creation with little extraneous intrusion.[1]

[1] I do seek out, and receive, regular feedback from my philosophical friends, so it isn’t that I rely solely on my own judgment. I am not some quivering recluse stewing in his own juices, not a bit of it.

Empiricism, Memory, and Knowledge

In pre-Socratic times there was a school of thought known as “memorism” (or so I once dreamt). The principal doctrine of this school was that all knowledge is stored in memory: whenever you know something there was a past event that laid it down in memory, and knowledge is the recall of that something. Past event, storage, recall: these are the necessary and sufficient conditions of knowledge. The memorists opposed the orthodox school of thought (the “revelationists”) which held that all knowledge arises from direct communication with the gods: whenever you know something the gods are conveying it into your mind by speaking directly to you. The revelationists found the memorists impious in their reliance on memory, a human attribute, instead of the divine action of the gods, to whom we owe everything. There is no invocation of the past, no mysterious storing of information, no ecstatic recall experience, just good old-fashioned godly beneficence. Let the gods be praised! The two schools debated the matter at length, never coming to any firm resolution. The revelationists brought forward counterexamples: what about knowledge of the present and future—surely, we don’t know these things by memory? The memorists responded either by denying the existence of such knowledge (eliminative memorism) or by explaining present and future knowledge as special cases of memory knowledge (reductive memorism). Ingeniously, they contended that by the time knowledge is acquired the thing known is past and retained in memory, and that we only know the future by remembering the past (induction etc.). So, it was either memory naturalism or revelation supernaturalism. The memorists were gaining adherents as their anti-supernaturalism spread and flourished; the revelationists seemed mired in superstition and pseudo-explanation. The gods surely had other things to occupy their time, and anyway were not deemed “empirically verifiable”. Memorism seemed to cover the ground nicely, was rooted in everyday experience, and dispensed with ad hoc appeals to divinity. And indisputably, a vast amount of human knowledge simply is stored in memory—knowledge of history, geography, animal husbandry, who your friends and enemies are. The theory looked warranted by the plain facts of human psychology.

But a new school of thought was taking shape at around this time: this school sympathized with the memorists’ anti-supernaturalism but was troubled by apparent counterexamples to the central doctrine. What about our knowledge that everything is self-identical? Is that based on memory? Was there some past event that laid this information down in memory—say, seeing a bunch of self-identical things and making an inductive leap, or hearing it from a trusted teacher? We never seemed to have learned this truth—never made an observation of it or were taught it in school. Yet we knew it. And there is a lot more knowledge like this, as they quickly pointed out: all of geometry and arithmetic, logic, conceptual truths, ethical propositions, maybe even philosophical theories. No past event triggered and justified this type of knowledge; there was no experience of recall in entertaining it; and people never suffered from difficulties of recollection over it (“I know the answer to this, but I can’t quite bring it to mind”). Such knowledge simply doesn’t bear the marks of memory.[1] Has anyone ever said “I just can’t remember whether everything is self-identical or not”? It thus appears that our knowledge exists in two places in our mind: in memory and in some other faculty not itself a type of memory. When asked what this faculty consists in, the anti-memorists grew dark and pensive: for no name suggested itself and the question was obscure. Some declared it an irresoluble mystery, while others (the “eternalists”) boldly asserted that such knowledge exists in the mind eternally (there was no moment of acquisition) and is a primitive fact of human nature. If we want a name for it, we can call it “un-memory” or “pre-memory”—in any case, it isn’t a form of memory in any normal sense. We just have this knowledge; it exists in the deepest recesses of our soul. It was never put there by anyone, divine or mortal, nor was it the result of an interaction with external reality. Remember, these were ancient times and evolution and genetic inheritance were unknown concepts. What this third school (they had no generally accepted name) was sure of was just that not all knowledge is memory knowledge. They opposed the idea that memory knowledge exhausts the whole of human knowledge; their positive theory, however, was still a work in progress. To be sure, much human knowledge is represented in memory, but there remains a substantial core of knowledge that is not so represented. Thus, there are really two types of knowledge; knowledge is not a homogeneous phenomenon. It has two species, two fundamental forms. They resisted the epistemological monism of the memorists.

Does all this ancient intellectual history remind you of anything? Is my dream a reflection of any actual history? Empiricism versus rationalism, of course: memorism is another version of empiricism and anti-memorism is the analogue of rationalism (or nativism). The memorist substitutes memory for experience: instead of saying that all knowledge derives from experience, he says that it is all dependent on memory. He thus sidesteps the standard problems with the concept of experience—whether it is conceptual or non-conceptual, given or interpreted, justificatory or epistemically idle, opaque or transparent—and replaces it with the concept of memory. Interactions with the environment lay down memories, which are later recalled; this is the source of all knowledge worthy of the name. Surely, something like this picture was implicit in traditional empiricism, since experiences had to be retained in memory in order to provide the basis of subsequent knowledge: you see something, remember it, and later recall it in an exercise of knowledge. In short, knowledge is sensory memory, according to empiricism. And rationalism is the denial of that: some knowledge (mathematics, etc.) is not memory knowledge of past interactions with the observable world; it has a different origin and modus operandi (which is hard to specify). Both empiricism and rationalism were opposed to the revelationists of their day; knowledge is not a gift from the gods (or God) but a fact of human natural psychology—an achievement of memory or a product of instinct. My pre-Socratic dream narrative thus mirrors the actual narrative of later philosophy (as Plato’s belief in innate knowledge anticipates Descartes and Leibniz). For some reason, the empiricists didn’t make memory salient, but it was hovering in the background: knowledge is experience remembered. The rationalists, by contrast, thought that not all knowledge consists in experience remembered—remembering past sensory interactions is not a part of knowledge arrived at by pure reason. The question being debated concerns the role of memory in knowledge, not so much the role of experience—whatever that might be exactly. We can even imagine a form of empiricism that eschews the concept of experience altogether, but still insists on the vital role of memory: perhaps there are just physical excitations of the sensory receptors (conscious experiences having been eliminated from the picture), and anyway we want to make room for subliminally acquired empirical knowledge that involves no conscious experience at all. Memory empiricism thus takes precedence over experience empiricism, theoretically speaking.

Putting traditional empiricism aside, the memory formulation affects our view of the distinction between a priori and a posteriori knowledge. We can now reformulate this distinction in the obvious way: a posteriori knowledge is knowledge based on memory, while a priori knowledge is knowledge not based on memory. This works pretty well: the role of individual and collective memory in the formation of scientific and commonsense knowledge is acknowledged, while its irrelevance to typical instances of a priori knowledge is highlighted (it is not a type of historical knowledge in the broad sense). If anything, this puts a priori knowledge in a better light, because it sounds like pure dogma to insist that all knowledge proceeds from memory—memory is just one way of storing information. The genes store information, in animals and humans, which is then transmitted to offspring (we call the result “instinct”), but such storage is not the faculty normally labeled “memory”. Memory is just one way of possessing information (as well as skills and pre-dispositions); and there is little prima facie plausibility in the thesis that all knowledge (etc.) is contained in acquired memories. Memory is just one method of being informed, equipped, internally configured. And it sounds completely wrong to claim that logical and mathematical knowledge is arrived at by consulting one’s memory, as if it has a basis in historical records; logical and mathematical reasons are not time-bound in that way (“I remember seeing the law of non-contradiction for the first time when I was ten years old, and I have never forgotten it”). You don’t have to ransack your memory to decide if modus ponens is a sound logical rule, nor is there any danger of forgetting it. This kind of knowledge is completely different from knowledge of historical dates, or the route home, or the results of an experiment. Thus, the distinction between a priori and a posteriori knowledge has a firm and clear foundation, which helps establish the sui generis character of the a priori. Strangely enough, the place of memory in relation to the traditional distinction has not been much recognized (if at all).
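For readers who like things schematic, the reformulation can be put as a pair of biconditionals. The predicate letters are my own shorthand, not anything standard: K(S, p) abbreviates “S knows that p”, and the existential clause simply encodes the memorist’s three conditions (past event, storage, recall):

$$K_{\text{post}}(S,p) \iff K(S,p) \wedge \exists e\,[\mathrm{Past}(e) \wedge \mathrm{Stores}(e,p,S) \wedge \mathrm{Recalls}(S,p)]$$

$$K_{\text{priori}}(S,p) \iff K(S,p) \wedge \neg\exists e\,[\mathrm{Past}(e) \wedge \mathrm{Stores}(e,p,S) \wedge \mathrm{Recalls}(S,p)]$$

Knowledge of the route home satisfies the first schema; knowledge of modus ponens, on the present proposal, satisfies the second.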

The empiricist, whether experiential or memorial, puts space and time at the center of knowledge: you can only make pertinent sensory observations at certain times and in certain places. The doctrine might, indeed, be so defined: all knowledge rests on suitable spatiotemporal proximity to the thing known. But the rationalist points to types of knowledge not restricted in this way: we don’t need to be near numbers at a certain time of day in order to know about them (or logical truths or meanings). This type of knowledge is not dependent on spatiotemporal proximity to the thing known—hence the adequacy of the armchair in arriving at such knowledge. Nor is the subject matter naturally conceived as existing in space and time (what is a number such that we could be near it?). Here we find a marked contrast between the two types of knowledge. We really should not expect that a priori knowledge could be subsumed under the a posteriori umbrella. The empiricist is guilty of overgeneralizing from properties of knowledge characteristic of only certain types of knowledge—those dependent on sensory experience or memory. Such knowledge is only so good as the experiences that (allegedly) ground it, or the memory capacities that make it possible; but rational knowledge is free of these kinds of limitations, being neither experiential nor memorial. How it does work, however, is far from clear. All we can say is that considerations of space and time make no difference to the availability of a priori knowledge.[2]

[1] I discuss this in Inborn Knowledge (2015), 44-46.

[2] I realize that I have been writing and thinking about a priori knowledge for over fifty years, and I never tire of it, difficult though it is. It’s one of the things that got me into philosophy in the first place. I think most discussions of it over the last century have been pretty feeble—exercises in problem avoidance and tendentious stipulation. The nature of a priori knowledge is one of the Big Mysteries of philosophy.

Affective Empiricism

The classic debate between empiricism and rationalism concerning the origins of the human mind focused on the cognitive aspects of the mind.[1] Descartes and Leibniz believed that some knowledge is innate, while Locke thought that all knowledge is acquired through the senses. But there is little to nothing on the affective aspects of the mind: Locke did not insist that emotions are acquired via the senses, and Descartes and Leibniz did not cite the emotions as instances of the rationalist thesis. I think I know why: it was common ground that affective nativism is true and affective empiricism is false. Not nativism about ideas of emotions but nativism about emotions themselves: we are born with these propensities, abilities, traits, dispositions. We may not feel emotions in the womb but our genes contain them in potential form, as they contain our anatomy, physiology, and other characteristics. We don’t learn to feel emotions—by observation of others, imitation, or instruction. In this respect we are like other animals: they too are not an affective tabula rasa. And there are good biological reasons for that: these are traits it is important to have for the sake of survival, so best not left to chance. How, indeed, could such traits be acquired by means of the senses—what might the mechanism be? How could they be “abstracted” from perceived objects? Maybe you could get the idea of the emotion from observing others, but could you get the emotion itself? That would be like acquiring four legs by gazing at quadrupeds. So, there is no real dispute about the origins of emotions: we are born to feel them; we don’t learn from experience to feel them (whatever that might mean). They are written into the DNA. But if that is so, isn’t it a black mark against empiricist thinking about the mind? For, if emotions are agreed to be innate, why shouldn’t “ideas” be—beliefs, knowledge, concepts, perceptual capacities? Why would the mind be hospitable to innate emotions but not to innate cognitions? Why would nativism be half-true? Why must empiricism be true of part of the mind but not of other parts? Whence the dogmatism? Granted, the environment can play a role in shaping and developing the emotions, but the preponderance must be owed to the native constitution of the organism. On this everyone seems to be agreed.[2]

What are the emotions we inherit along with our genes? It is customary to list six basic emotions: anger, fear, disgust, happiness, sadness, surprise. We might want to add lust and sexual passion to this list (also love), but let’s leave it at that. These are the emotions that lurk in our genes just waiting to see the light of day; they are the primitive elements of the periodic affective table. They may be combined into compounds such as despair or helplessness or envy or joy. They are no more learned than pain is: we don’t acquire the ability to feel pain by observing pain in others and somehow internalizing it by abstraction. Emotions are not taught but inherited—universal, spontaneous, part of human nature. But they are not cognition-independent: they have intentionality. We are afraid of things, angry at people, sad about situations; and these objects of emotion are specific—we are not afraid (say) of just anything but of a limited class of things. Prey animals are born being afraid of big cats not butterflies; they don’t learn to be afraid of being eaten by tigers (that would be too late for the learning to have any utility). So, emotions have representational content; and that means that such contents are also innate—antelopes are born knowing what tigers are, as well as being afraid of them. They have, in the old terminology, ideas of tigers that are bound up with their fear of tigers. This means that some ideas have to be innate if emotions are, which contradicts empiricism about ideas; nativism about emotions leads to nativism about emotion-relevant ideas. Not only that; emotions are correlated with a set of expectations about the world—about what kinds of things it contains. It contains things that are scary, angering, disgusting, desirable, happy, sad, or surprising: emotions thus carry with them a “world-view”. And this world-view is innate not acquired by experience, contrary to the empiricist theory of the cognitive mind. If so, what is to prevent other ideas from being innate? To put it simply, animals are born knowing a good deal about the external world just by virtue of being born with feelings about the external world: they come into the world cognitively prepared for it, not mentally blank, not blissfully ignorant. Emotion thus paves the way for a general nativism, and affective nativism is common ground between empiricists and rationalists. Of course, there are many other arguments for the nativist position, but it is instructive that emotions provide yet another argument, and not one easily avoided by the determined empiricist. A sentimentalist in ethics would be committed to nativism about ethical attitudes, since emotions are always fundamentally innate (emotivism thus implies ethical nativism). Emotions can, it is true, be shaped and modified by experience, but the basic repertoire of emotions is original to human psychology. They are instincts not cultural acquisitions. Language is an instinct too, as is perception, and also thought, but emotions are the primal instincts; they have been coded into animal genes for millions of years. Affective nativism is the basic form of nativism.

Some psychologists have claimed that all behavior is learned, whatever may be true of the inner aspects of the mind. But this position is obviously unstable: if emotions are innate, so must their associated behavioral expression be innate. The prey animal must have an innate predisposition to flee at the sight of a big cat, since fear elicits the flight response (that is the point of the emotion). Flight is like salivation—a reflexive inborn response to a stimulus. So, whole behavior patterns have an innate basis: anti-nativism about behavior is another false dogma of empiricism. The correct position is that nearly all of the mind (including behavior) is innate: emotions, desires, perception, concepts, many beliefs, anything a priori, the psychological faculties (memory, reason, mathematics, ethics, etc.).[3] This is really just biological common sense. All learning, properly so-called, is based on an innate unlearned system; we learn some things only because we don’t learn everything. The tool shed model is more psychologically realistic than the empty cabinet model.

[1] I discuss this debate in Inborn Knowledge (2015). The present paper furthers that discussion.

[2] See Hume’s note 2.9 in his Enquiry Concerning Human Understanding in which he asserts, as against Locke, that self-love, resentment, and sexual passion are all innate, adding that “all impressions are innate”.

[3] I am stating the nativist position very strongly so as to rectify previous empiricist bias; of course, we must allow for some contribution from the environment. The point is that the foundation is innately fixed. What is true of the body is true of the mind: the mind is not originally a blank slate, as the body is not originally a piece of formless stuff. True, the mind has memory which stores acquired information, but the body too bears the marks of experience as it interacts with the world outside of it. The mind is not empty at birth, as the body is not shapeless at birth.

On Substitutivity

The idea of substituting one expression for another has played a key role in logical and semantic studies. In particular, the idea of substituting terms with the same reference has featured prominently: can this always be done without changing the truth-value of the sentence in which the terms occur? Is such substitution ever truth-value disruptive? The consensus has been that it can be, for example when substituting into belief contexts. Thus, the convention has arisen of calling some contexts “referentially transparent” and some “referentially opaque”. The distinction has been thought to be binary, not divisible into finer distinctions. I think this is a mistake; it misses important differences. There are at least four kinds of case to consider, each with distinctive properties and varying explanations. Specifically, there are three types of opacity, which deserve different names; at least two of them exhibit their own kind of transparency—they are not fully opaque. Let’s accept, for the moment, that there are no other kinds of transparency wider than the usual kind; we can then ask whether there are degrees or grades of opacity, i.e., departures from full transparency.

First, consider modal, causal, and explanatory contexts: just how opaque are these? Suppose I say “The king of England is necessarily a king”, giving the modal operator wide scope: that is clearly true, analytically so. But it is not true if we substitute “Charles Windsor” for “the king of England”—hence the context is not referentially transparent. Can we substitute any other expression for the description and obtain a truth? Yes, if we restrict the substituted terms to those that make reference to kinghood: “the male monarch”, “the male head of the Church of England”, “the male hereditary ruler of an independent state”, etc. The germane consideration is that the property of being a king necessarily implies being a king (or a monarch more generally); it doesn’t matter whether anyone knows or believes this to be so—it is not dependent on how kings are thought of or mentally represented. It is a modal fact that obtains independently of how anyone thinks (compare “the successor of 2 is a number”). So, this context is less transparent than other contexts (such as negation) but more transparent than belief contexts. In honor of this fact, I will say that it is a translucent context—somewhat transparent, not completely opaque. The same holds for causal contexts: they are not fully transparent, but they are not as opaque as belief contexts either. For example, it can be true to say “The batsman is unconscious because the cricket ball hit him in the head”, but not true to say “The batsman is unconscious because the red ball hit him in the head”. The reason is that a cricket ball is always of a certain hardness and weight but a red ball isn’t—even if the cricket ball is in fact red. Being a cricket ball is causally relevant but being a red ball isn’t. This is even clearer if the context is explicitly explanatory: the explanation of the batsman’s unconsciousness is his being hit by a ball of a certain density and weight not the fact that the ball was red. We can substitute any description for the original description if it preserves reference to the causally relevant properties of the missile (“the heavy hard sphere travelling at 60 mph”), even if no one has any beliefs about these descriptions. The context is somewhat transparent, but not as transparent as a negation context—yet it falls short of the opacity of a belief context. It is transparent with respect to causally relevant explanatory properties, though not with respect to objects as such.

Belief contexts, as already implied, are more opaque than modal, causal, and explanatory contexts, because they bring in mental representation—ways of conceiving things, modes of presentation, concepts, perspectives. But they are not fully opaque, because they do allow some latitude for substitution—they don’t resist all substitutions. Thus, we may substitute synonymous terms within their scope: they are transparent with respect to sense. We could describe them as “semi-opaque” (or “semi-transparent”). Their substitutivity properties fall between full transparency and translucency, on the one hand, and a yet stricter kind of opacity, on the other. This third kind of opacity belongs to quotational contexts: they won’t even allow substitution of synonyms—you can only substitute terms that refer to the same terms (words, bits of language). These contexts may be called “hyper-opaque”, though simply calling them “opaque” would not be semantically amiss given that that word connotes an absolute condition. We have already concluded that belief contexts are not maximally opaque given their openness to synonyms—hence “semi-opaque”. So, we now have a four-way distinction: transparent, translucent, semi-opaque, and opaque (or hyper-opaque). It is not a binary business, an all-or-nothing affair. Contexts are transparent with respect to this or that class of possible substitutions, opaque to one degree or another, not rigidly either one or the other. This is because words have a range of entities associated with them: references (objects in the world), aspects of objects (mind-independent properties), senses of words (ways of representing the world), and the words themselves (marks, sounds, states of the brain). The old extensional-intensional dichotomy is too simple.
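The four grades can be displayed schematically. Take C( ) to be a sentential context containing a term, and read each line as stating the condition under which substituting b for a preserves truth-value; the notation (ref, prop, sense) is my own sketch, not a standard apparatus:

$$\begin{array}{ll}
\text{Transparent:} & \mathrm{ref}(a)=\mathrm{ref}(b)\ \Rightarrow\ [C(a)\leftrightarrow C(b)]\\
\text{Translucent:} & \mathrm{prop}(a)=\mathrm{prop}(b)\ \Rightarrow\ [C(a)\leftrightarrow C(b)]\\
\text{Semi-opaque:} & \mathrm{sense}(a)=\mathrm{sense}(b)\ \Rightarrow\ [C(a)\leftrightarrow C(b)]\\
\text{Hyper-opaque:} & a=b\ \Rightarrow\ [C(a)\leftrightarrow C(b)]
\end{array}$$

Here prop(a) is the modally or causally relevant property invoked by a (kinghood, being a hard heavy sphere), sense(a) is its Fregean sense, and the last line says that only the very same expression may be substituted. Each successive antecedent is harder to satisfy, so each grade licenses a narrower class of substitutions.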

Once we have made these distinctions in the case of singular terms, we can extend the apparatus to whole sentences. Some contexts (“and”, “not”) are truth-functional; some are fact-functional (“necessarily”, “because”, “explains”); some are sense-functional (“believes” and other propositional attitude verbs); and some are inscription-functional (direct speech, quotation). Being truth-functional has no special status on this way of looking at things; it is just an extreme type of truth-value dependence, i.e., any sub-sentence with the same truth-value can be substituted for the original without changing the truth-value of the whole. Some contexts permit a broader range of substitutions salva veritate than others: that is all. It isn’t some kind of special property of truth-values, as opposed to facts or senses, that renders them more respectable or pellucid. Nor are truth-functional contexts in any way superior because of their substitutional liberality: for all contexts have their own distinctive substitutional profile. It is true that the words “transparent” and “opaque” have different connotations, evaluatively speaking: transparency is “good”, opacity is “bad”—especially when it comes to discourse (not so much for clothes). But that is just an accident of terminology; it’s all the same merit-wise. We should be substitutivity egalitarians.

I have left till last a more startlingly unorthodox suggestion (the reader needed some softening up first). What about extending the concept of transparency beyond its normal bounds? To this end, I shall introduce the notion of extended extension: a given term extendedly refers not just to its normal referent but to a wider range of referents, e.g., any twin of the normal referent. This will allow a wide range of substitutions to retain truth-value, since most of what is true of an individual will also be true of its twin. In a universe consisting of Leibnizian pairs (indiscernible but numerically distinct individuals) everything true of one twin will be true of the other, so we can substitute one twin’s name for the other and not disrupt truth-value. The names will be transparent with respect to duplicates. This seems like a definable idea and it substantially expands the range of possible substitutions. Transparent contexts will be doubly transparent in this universe—hyper-transparent. We could also employ the distinction between objects and the matter that composes them, as in the statue and the piece of bronze that composes it. Again, nearly everything true of one will be true of the other, so substitution will be well-nigh universal. Thus, the simple dichotomy between transparent and opaque contexts yields to a more generous understanding of the semantic phenomena. This opens up new possibilities concerning formal models and interpretations, i.e., assignments of elements from a chosen domain. I rather warm to the idea of fourfold interpretation functions from the formalized language onto domains of objects, aspects, senses, and words themselves. Full substitutivity will hold with respect to each subdomain.[1]
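Extended extension admits a simple definition; the similarity relation ≈ is a parameter to be chosen (being a Leibnizian twin, sharing one’s matter), and the notation is again offered as a sketch rather than a finished theory:

$$\mathrm{ext}^{*}(t) = \{\,x : x \approx \mathrm{ref}(t)\,\}$$

Two terms are then inter-substitutable in a hyper-transparent context just when their extended extensions coincide. In the universe of Leibnizian pairs, with ≈ as indiscernibility, each twin’s name has both twins in its extended extension, so the names substitute salva veritate.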

[1] Informed readers will discern elements of Frege, Wittgenstein, Russell, Carnap, Quine, and others in this paper. In general, I am promoting greater inclusiveness in formal semantics. Many types of entity are relevant to semantic functioning. Jungle, not desert, landscapes.

Annoying Morality

Given its importance, there is something deeply unsatisfactory about morality. I don’t mean its alleged subjectivity or relativity, which is more puerile than profound, still less the existence of metaethical controversy; I mean its fragmentary and unsystematic character. It appears as a list of injunctions with no unifying principle underlying them. This is a well-known complaint, so I don’t need to expatiate on it: the miscellaneous list of deontological rules, the consequentialist additions and amendments, the need for separate principles of justice. There are 10 commandments, but why 10 and not 14 or 2 or 23? The usual list concerning lying, stealing, killing, promise-breaking, committing adultery, betraying friends, ingratitude, contract-breaking, et cetera, seems woefully heterogeneous, and refuses to submit to unification or even simplification, despite some valiant efforts. Even utilitarianism consists of two non-equivalent principles, concerning harm and benefit, where one (the no-harm principle) doesn’t entail the other (the maximize well-being principle). Considerations of justice complicate the picture still further, not being reducible to anything more primitive. There are many things we must not do, and likewise for what we must do. Some say it all comes down to one rule—treating people (and animals?) as ends in themselves, not violating the social contract, maximizing utility, obeying God, conforming to societal norms—but none of these stands up to scrutiny. Moral pluralism seems to be the inescapable predicament—a lamentable lack of system, order, and organization (where is the moral analogue of Peano’s axioms?). What is worse, this lack of unity breeds moral conflicts, dilemmas, and quandaries: one department of morality suggests one thing, another suggests another, with no resolving principle in sight. The poor moral agent is left struggling with a heap of different commands, cognitively overburdened, unable to think straight, confused and bewildered. It’s all so complicated, so messy, so all-over-the-place![1]

Prudence is different. Here things are pretty straightforward: there isn’t a lot to learn, to remember, to take into consideration. It’s mainly a matter of consequences: don’t harm yourself in the future; act so as to make your future self happy. You don’t need to worry about not treating yourself justly, not lying to yourself, not stealing from yourself, not betraying yourself, not committing adultery with yourself, and so on. Just don’t ignore your future self’s welfare—who could forget that? You may not always act prudently, but at least you know clearly and simply what you should do. But morality is not like that—not by a long chalk. Prudence is part of morality, to be sure, but morality contains a whole lot more, and it is thorny stuff—taxing, sometimes confounding. How can we teach all this to children? How do we navigate it in the heat of the moment? How can we keep our conscience clear with all this junk to think about? Some religions preach love as the unifying formula, but that is hopelessly limited, unrealistic, and prone to lapses from strict ethical correctness. It is appealing in its simplicity and sentimentality, but it fails to measure up to the complexities of the moral life. The fact is that morality is intolerably many-sided, splintered, and polymorphous. It seems cobbled together. This has consequences. Wouldn’t the world be a better place if morality had a sleek and simple nature—where one rule encompassed everything moral? Then everyone would be clear about what they ought to do—what morality requires of them. Wouldn’t this reduce moral laxity and moral skepticism? Because, frankly, the complexity is annoying, irritating, maddening—it makes you want to scream sometimes. It’s just so hard. It’s a pain to have to think about, a drag on the spirit. Even the smartest people get tripped up by it. You always feel that you might have missed something, and you frequently have. Why did God have to make it so bloody complicated, given that he wanted us to obey it? And it fuels the moral nihilists among us, who would be happy to get rid of morality entirely—who needs the hassle? Human life would be a great deal easier if morality were more straightforward.

Why do we even speak as if morality were a single unitary thing? Doesn’t this encourage simple-minded moral monism? Why not admit that so-called morality consists of heterogeneous subdepartments—moral maxims, future consequences, justice and injustice? At least let’s acknowledge that it isn’t a clean-limbed monolith but a museum of monuments of varying ages and pedigrees—a type of zoo. Also, teach it in schools like any other difficult subject of study; don’t expect everyone to get the hang of it by trial and error with no explicit instruction. It’s too academically demanding for that. Bit of history, bit of anthropology, some literature, lots of philosophy, examinations, the works. There could be an A-level in it. As things stand, morality is a haphazard collection of disparate ideas bundled together into a kind of disorganized heap. The ordinary mortal needs help finding his or her way through it.[2]

[1] W.D. Ross’s moral system (if that is the word) is agreeably realistic in its avowed pluralism, but it is complicated and far from algorithmic (“prima facie duties”). It represents the actual nature of moral thought, not some philosopher’s idealization. But it is intellectually far from readily graspable; and it doesn’t translate smoothly into right action. Still, I think it is the best moral philosophy we have. See The Right and the Good (1930).

[2] In my experience people’s moral expertise differs dramatically, and their level of moral complacency. Mostly people are just too simple-minded, too morally lazy. Professors are no exception. Moral thoughtfulness is a rare commodity. It takes work, patience, and dedication.

“Inner” and “Outer”

It is with some reluctance that I undertake a discussion of an obscure and elusive topic: what philosophers mean by “inner” and “outer”, if anything. There is a cluster of putative distinctions surrounding the mental: internal and external, private and public, subjective and objective, mental and physical, spiritual and corporeal, inner and outer. I shall only be discussing the last of these; I mean to cast no aspersions on the others. These are distinct distinctions, as the words used indicate, some more viable than others. I am concerned with the alleged distinction between the property of being inner and the property of being outer, construed as both ontological and phenomenological (so distinct from the property of intersubjective knowability or observability). What does it mean to say that the mind is something inner while the material world is something outer? It clearly doesn’t mean the same as the distinction between being internal and external in the spatial sense: that distinction applies to facts about the body—what is spatially within it and spatially outside of it. The heart and brain are within it (“internal”) while clothes and houses are outside it (“external”). But the heart and brain are not “inner”—they are “outer”. Only the mind is inner, not the internal organs of the body.

What does the dictionary have to say about “inner”? The OED (Shorter) gives: “situated (more) within or inside; (more or further) inward; internal”. This is intended to capture such uses as “inner sanctum”, “inner recesses”, “inner tube”, “inner regions”, “innermost”, “inner circle”, “inner workings”—not the nature of the mind as an “inner” reality. We are also given “close to the center” by the OED (Concise), also intended in the spatial sense—central as opposed to peripheral. This distinction makes perfect sense, but it is not the distinction as intended by psychologists and philosophers of mind. The mind is not being conceived as closer to some center than the body or the material world outside the body (literally, spatially). Nor do philosophers want to allow for gradations of innerness, as in “more inner”—the mind is not “more inner” than the body. And what exactly is it inside of—the body, the self? No, the idea is that the mind is inner intrinsically, as a matter of its very nature, as material objects are intrinsically extended, or numbers are intrinsically abstract—it isn’t more or less inner relative to some containing entity. If so, shouldn’t the property be phenomenologically detectable—part of everyday consciousness? But do I experience my mind as inner? Does it strike me as something inner? It is hard to know what this might mean, but in so far as it means anything the answer would appear to be no. It strikes me neither as inner nor as outer; it just strikes me as there. For relative to what would it be inner? What center is it closer to? In what sense of “closer”? Granted, material objects strike me as outer relative to my body—both are objects situated in space—but what does it mean to say they each strike me as outer relative to my mind? It isn’t an object in space that other objects could stand in spatial relations to. By the same token, my mind doesn’t strike me as inner relative to objects of perception, because it stands in no spatial relation to those objects (phenomenologically). It doesn’t feel somehow inner. My heart feels inner in relation to my arms or the chair I am sitting in, but my mind doesn’t feel inner in these ways—I don’t perceive it that way, or otherwise so apprehend it. Thus, this alleged innerness is not a phenomenological datum, not a part of one’s normal self-awareness. How then could its ascription be ontologically true—how could the mind be ontologically inner and yet not phenomenologically inner? How could it be inner without my knowing it? Why would we even talk this way if there was no phenomenological basis to the distinction? The natural conclusion, then, is that there is no such property as innerness (and none of outerness either, except in the innocuous “outside my body” sense).

Why do we talk this way if it has no basis in fact? Well, there is a use of “inner” and “inside” that does apply to the mind, as when I say that my beliefs and desires are inside my mind (not outside it)—that they are inner constituents of my mind. In this sense my mental states are “internal” to my mind not “external” to it. And surely, I do experience my mental states as inner parts of my mind, constituents of it, elements within it. Is that what we mean by calling the mind inner? There are two points about this. The first is that there is no entailment from “beliefs are inner parts of the mind” to “the mind is inner”. That is a simple non sequitur: parts can be inner to the thing they are parts of without the whole thing itself being inner to anything. Second, the same locution applies to material objects and their parts: the parts of an engine are internal to it (“inner” relative to the engine), but neither they nor the engine are themselves inner—unless parts of something larger. So, there is no basis here for grounding the concept of the inner, construed as a property of the mind considered in itself. This is just another way of talking about the part-whole relation and has no bearing on the supposed inner nature of the mind. The mind is neither inner nor outer; it simply is. The distinction between what is internal to the person and external to the person, where this coincides with the boundaries of the body, is perfectly intelligible and real; but the philosophical inner-outer distinction looks like a myth, a conceptual snarl-up. It is also true that we have a private-public distinction in good standing, but this epistemological distinction is not identical to the inner-outer distinction—neither entails the other. I don’t even experience my mind as analogous to the inner chambers of a building, because there is no contrast analogous to the spatial relation between periphery and center. I do have what I am pleased to call an “inner life”, but this means nothing more than that I have a mental life in addition to an organic physical life. The word “inner” just means “mental” or “psychological” or “spiritual”; it does not connote anything beyond these terms. Interestingly, the OED gives as its second definition of “inner”, “designating the mind or soul; mental; spiritual”, thus regarding the word as simply synonymous with those terms—in which case “the mind is inner” means “the mind is the mind”. It is obvious that philosophers intend more than a tautology when characterizing the mind as “inner”, but it is unclear whether there is anything more they can mean. It looks as if language has been playing tricks on them, mixing up one “language game” with another (architecture and psychology). Yet it is a trick that has proved remarkably tenacious. It is hard to think of the mind as not inner (in some nontrivial sense of the term).[1]

[1] One might think that since the mind is not conceived as in space (whether or not it really is) it is impossible for it to be outer, so it is “inner” by default.  But this is obviously wrong, because not being in space does not entail being inner in any sense (if anything, being inner implies some sort of spatiality); numbers are not in space but not intuitively inner in any way. It is really tremendously obscure what this talk of the inner amounts to. The plain fact is that I experience my mind as neutral with respect to the inner-outer axis; it is simply present. It is not experienced as within anything—the body, the soul, the cosmos. That is why idealism is thought possible.

Is Language in the Head?

It has been said that meaning is not in the head, but is language in the head? A naïve response to this odd question might be: “Well, no, because language is speech and speech is in the mouth and throat”. Technically, the mouth is in the head, of course, but the question is really asking whether language is in the brain. The speech organs are not in the brain, so speech (the sounds that come out of a person’s mouth) is not in the brain. A person would not be speaking English, say, if the sounds they made were not the sounds characteristic of English, but of Chinese, even if their brain were in the same state as that of an English speaker (suppose the articulatory organs have been artificially hooked up to a Chinese voice generator). However, if the brain were identical, it would be sending motor commands to the speech organs identical to those sent by an ordinary English speaker, so to that extent the language would be English. And the language employed in silent soliloquy would clearly be English. The speech organs merely externalize internal linguistic operations; they are not what the language fundamentally is.[1] The syntax of the language would also be in the head (brain), even if the articulatory organs were unable to externalize this syntax. Perhaps we are under some sort of perceptual illusion as to the location of language, brought on by the fact that we hear the sounds of language and see the speaker moving his or her lips; but really, language is located in the brain machinery behind these outer manifestations. After all, a person might be a master of English even if the speech organs don’t function at all (and he or she cannot perform the acts of a sign language). Language exists in the brain (e.g., Broca’s area) not in the motor systems by which it is externalized in communication. Speech acts are not the essence of language (unless we mean internal speech acts). A brain in a vat could have mastery of a language, possibly under the illusion of performing speech acts with mouth or hands. Language is like thought in this respect: thought too has modes of externalization (speech and other bodily actions), but these are not the essence of thought. Thought is in the brain, if anywhere, not in the body that expresses thought (and may fail to in cases of paralysis). Only dogmatic behaviorism could deny these virtual truisms. By all means say that speech is the embodiment of language, its vehicle in acts of communication, its overt manifestation, but don’t say that language is speech, i.e., the sounds and marks (let’s not forget writing) produced by the body. Obviously, a machine that merely replicates the sounds of English doesn’t know English. Language is in the human head not in the atmosphere (where soundwaves reside). Nor is it in the hand motions of speakers of Sign. Language is in the head as electricity is in atoms—not in the effects of electricity that we observe. How language exists in the brain is still a subject of study, and presents many mysteries, but that it does is scarcely deniable. We should therefore be internalists about language. Linguistics and philosophy of language are really discussing an internal property of the human animal not the bodily events that provide the external medium of linguistic expression (the externalization of language mastery). It is acceptable to talk as if speech is the subject of interest, given that it is publicly available and acts as a vehicle of language, but we should not make the mistake of identifying language with speech. And remember that inner speech is itself just another manifestation of language not its indispensable essence. Performance isn’t competence, here and elsewhere.

In the light of the above, we might want to revisit externalism about meaning: is it to be supposed that language is in the head but meaning is not? That sounds fishy, fantastical. First, let’s formulate the thesis more precisely. Instead of saying “in the head” we should say “in the person”: for it might justly be complained that it is a category mistake to say that meanings are in the head; rather, they are attributes of the person (same for language). Then the question is whether meaning is an internal property of the person or an external property—intrinsic or relational (compare being bipedal and being married). Second, we need to clean up the usual formulation of the twin earth thought experiment: so-called twin earth is not a twin of earth but rather a very similar (but not completely similar) variant of earth. This is because “twin” earth contains XYZ not H2O, these being distinct substances that happen to look alike. A real twin is not just a superficial copy but a deep copy—same DNA, same anatomy, a precise duplicate. But twin earth is not an exact duplicate of earth but a partial duplicate, given the difference in its liquid content. A genuine twin planet could not produce a difference of meaning for “water”. We do better to speak of a sister planet—very similar in outward appearance but not the same through and through. Nor is it necessary to the thought experiment to suppose that the planet is otherwise precisely identical to earth; all that is necessary is that “water” designates different substances on the two planets. Indeed, it is not required that we imagine two planets; two sides of this planet will do fine.

But these are minor emendations compared to the main defect in the usual formulation: the thought experiment doesn’t demonstrate that all of meaning is not in the person, only that some of it isn’t.[2] The same goes for the extension of the thesis from meaning to mind: some of the mind is not in the person as an intrinsic non-relational property, but it doesn’t follow that all is.[3] In fact, upon closer analysis we have the much more modest result that the meaning of only some expressions is (only partially) not in the person, namely those that have a demonstrative component (ordinary natural kind terms like “water”). To put it baldly, all we have is the thesis that linguistic context is not in the person—for example, the fact that one object or kind and not another is being pointed to in a given case of reference fixing (“that liquid”). And that is a truism: in this sense perception is not in the person, being determined by causal context. Nor is knowledge located solely within the person (though partially it is), simply because knowledge requires truth. Given that context helps to fix reference for indexical expressions, it of course follows that an aspect of meaning (what Kaplan calls “content”) isn’t located in the person—the person is located in a context. The context is external to the person and it helps to determine the reference of “water”; so, neither meaning nor mind is wholly within the person. But this isn’t a remarkable metaphysical discovery but rather a platitude dressed up as a startling new insight. It is certainly a truth worth knowing, and admits of solid demonstration, but it isn’t a radical new view of meaning and mind. Yes, two persons can be in the same intrinsic condition and refer to distinct objects with the same indexical term—but that is hardly surprising, given context-dependence. And terms like “water” can be semantically tied to such demonstrative reference via the act of reference-fixing, but that too is not a startling discovery about where meaning and mind are located. For it is entirely compatible with the claim that nearly all of meaning and mind are located within the individual—are completely “individualistic”. The cash-value of “meanings aren’t in the head” is simply “indexical reference is context-dependent”—big effing deal!
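The modest thesis can be put in Kaplan’s own terms: character is the in-the-head rule, content is its value in a context. The schematic formulation below is mine, and the application to “water” should be read as the reference-fixing extension just described, not as Kaplan’s official doctrine:

$$\mathrm{content}(e,c) = \mathrm{character}(e)(c)$$

$$\mathrm{character}(\text{``water''})(c_{\mathrm{earth}}) = \mathrm{H_2O}, \qquad \mathrm{character}(\text{``water''})(c_{\mathrm{sister}}) = \mathrm{XYZ}$$

Intrinsic duplicates share character(e); only the contextual argument differs, and with it the content. That difference in the argument, not some exotic fact about where minds are located, is all the thought experiment turns on.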

Thus: language is in the head, and meaning is mainly in the head. Syntax (grammar) is in the head and so is phonetics, if we mean “voice commands from the brain to the articulatory apparatus”. The lexicon is also in the head, though words can be overtly pronounced on occasion. The meaning of most words is wholly in the head (person), though some words have an indexical component that brings in context. Persons are equipped with an elaborate internal mental apparatus that they bring to the world; but they are also placed within the world, and that provides a linguistic context that can select a particular object as reference. This is all reassuringly obvious and distills the basic truths that have emerged from discussions of internalism and externalism.[4] The rhetoric has outpaced the logic.[5]

[1] I am with Chomsky on this (the “I-language” etc.). See chapter 1 of What Kind of Creatures Are We? (2016).

[2] See my Mental Content (1989) for a detailed discussion.

[3] Can we say that meaning is in the mind? I don’t see why not, even if meaning is not in the person, since the boundaries of the person correspond to the boundaries of the body (roughly). Neither mind nor meaning are (completely) in the person, though they are attributes of the person (partially relational attributes).

[4] I am thinking of Kripke, Putnam, Donnellan, Kaplan, Burge, myself, and others.

[5] To be clear, I think opponents of externalist thought experiments are mistaken, but the externalist thesis is far less momentous than has sometimes been supposed. A modest form of externalism is trivially true—but still true, which is not nothing.

The Journal of Philosophical Philosophy

I hereby perform the following performative: “I hereby name this blog ‘The Journal of Philosophical Philosophy’.” There, done. I am the editor and sole contributor (aside from such comments as I deem worthy of inclusion). That wasn’t so difficult.
