Knowledge and Human Nature

An alien observer of human cognitive development would be struck by a fact he might be tempted to describe as paradoxical. This is that in the first five years or so of life, development is rapid and impressive, while subsequent learning tends to be slow and laborious. The typical five-year-old already has excellent sensory awareness of the world, a mature language, and a fully functioning conceptual scheme—all without apparent effort. They may be small, but they are smart. The reason for this precocity, we conjecture, is that much of what they have achieved by that age is the unfolding of an innate program or set of programs: all this cognitive sophistication is written into the genes, awaiting read-out.[1] It is not picked up by diligent inspection of the environment. It comes quickly because it was already present in substantial outline. Thereafter the child must learn things the hard way—by learning them. Hence school, memorization, studying, instruction, concentration. Knowledge becomes willed, while before it was unwilled, spontaneous, given.[2] Cognitive development turns into work.

It could be otherwise for our alien observers: they are accustomed to school virtually from birth, because their children are born knowing practically nothing. They learn language by painstaking instruction, having no innate grammar; concepts are acquired by something called “deliberate abstraction”, which is arduous and time-consuming; even their senses need years to get honed into something usable. They don’t reach the cognitive level of a typical human five-year-old till the age of fifteen. Empiricism is true of them, and it takes time and effort. However, they have excellent memories and powers of concentration, as well as an aversion to play, so their later cognitive development is rapid and smooth: they are superior to college-educated humans by the age of seventeen and they go on to spectacular intellectual achievements in later life, vastly outstripping human adults. They are slow at first, given the paucity of their innate endowments, but quick later, while humans are quick at first but slow later (our memory is weak and our powers of concentration lamentable). To the alien observers this seems strange, almost paradoxical: why start so promisingly and then lapse into mediocrity? They continue to gain in intellectual strength while we seem to lose that spark of genius that characterized the first few years of life. That’s just the way the two species are cognitively set up: an initial large genetic boost for us, and a virtual blank slate for them (but excellent capacities of attention, memory, and studiousness). Our five-year-olds outshine theirs, but their adults put ours to shame.

I tell this story to highlight an important point about the human capacity for knowledge—an existential point. The existentialists thought that freedom was the essence of human nature, conditioning many aspects of our lives, individual and social; but a case can be made that human knowledge plays a similar life-determining role. For we suffer under a fundamental ambivalence about knowledge, which is to say about our cognitive nature (which is not confined to non-affective parts of our lives). We are simultaneously very good at knowledge and quite poor at it. Some things come to us naturally and smoothly, especially in our earliest experience (pre-school); but other things tax us terribly, calling for intense effort and leading to inevitable frustration. Rote memory becomes the bane of our lives. Examinations loom over us. School is experienced as a kind of prison. Calculus is hard. History refuses to stick. Geography is boring. What happened to that earlier facility when everything came so easily? We were all equal then, but now we must compete with each other to achieve good test results, which determine later success in life. We seem to go from genius to dunce overnight. Imagine if you could remember your earlier successes and compare them with your current travails: it was all so easy and enjoyable then, as the innate program unfurled itself, but now the daily need to absorb new material has become trial and tribulation. Getting an education is no cakewalk. Wouldn’t it be nice if it could just be uploaded into your brain as you slept, as your genes uploaded all that innate information? It’s like a lost paradise, a heavenly pre-existence (shades of Plato), with school as the fall from blessedness. You are condemned to feel unintelligent, a disappointment, an intellectual hack. Maybe you will make your mark in society by dint of great effort and a bit of luck, but you are still a member of a species that has to struggle for knowledge, for which knowledge is elusive and hard-won. Suppose you had to live in a society in which those late-developing aliens also lived: they would make you look like a complete ignoramus, an utter nincompoop—despite their initial slow start.

A vice to which human beings are particularly prone is overestimating their claims to knowledge. It is as if they need to do this—it serves some psychic purpose. Reversion to childhood would be one hypothesis (“epistemic regression”). But the actual state of human knowledge renders it intelligible: within each of us there exists a substantial core of inherited solid knowledge combined with laboriously acquired knowledge, some of it shaky at best. Take our knowledge of language, including the conceptual scheme that goes with it: we are right to feel confident that we have this under control—the skeptic will not meet fertile ground here (I know how to speak grammatically!). Generalizing, we may come to the conclusion that our epistemic skills are well up to par: so far as knowledge is concerned, we are a credit to our species. But this is a bad induction: some of our knowledge is indeed rock solid, but a lot isn’t. Being good at language is not being good at politics or medicine or metaphysics or morals. We are extrapolating from an unrepresentative sample. As young children, our knowledge tends to be well founded, because restricted to certain areas; but as adults we venture into areas in which we have little inborn expertise, and here we are prone to error, sometimes fantastically so. We know what sentences are grammatical but not what political system is best. But we overestimate our cognitive powers because some of them are exemplary. It would be different if all our so-called knowledge were shaky from the start; then we might have the requisite humility. But our early-life knowledge gives us a false sense of security, which we tend to overgeneralize. We believe we are as clever about everything as we are about some things.

I recommend accepting that we have two sorts of knowledge—that we are split epistemic beings. On the one hand, we have the robust innately given type of knowledge; but on the other hand, we have a rather rickety set of aptitudes that we press into service in order to extend our innately given knowledge. Science and philosophy belong to the latter system. Thus they developed late in human evolution, are superfluous to survival, and are grafted on by main force not biological patrimony. There is no established name for this distinction between types of knowledge, though it seems real enough, and I can’t think of anything that really captures what we need; still, it is a distinction that corresponds to an important dimension of human life—an existential fact. We are caught between an image of ourselves as epistemic experts and a contrasting image of epistemic amateurishness. We are not cognitively unified. We have a dual nature. We are rich and poor, advantaged and disadvantaged. Other animals don’t suffer from this kind of divide: they don’t strive to extend their knowledge beyond what comes naturally to them. Many learn, but they don’t go to school to do it. They don’t get grades and flunk exams and read books. Reading is in some ways the quintessential human activity—an artificial way to cram your brain with information not given at birth or vouchsafed by personal experience. Reading is hard, unnatural, and an effort. It is an exercise in concentration management. We may come to find it enjoyable[3], but no one thinks it is a skill acquired without training and dedication (and reading came late in the human story). It is also fallible. And it hurts your eyes. This is your secondary epistemic system in operation (we could label the types of knowledge “primary knowledge” and “secondary knowledge” just to have handy names).

Animals are not divided beings in this way (they do not lament their inability to read); nor do they apprehend themselves as so divided. But we are well aware of our dual nature, and we chafe at it (as the existentialists say that we chafe at the recognition of our freedom). We wish we could return to epistemic Eden, when knowledge came so readily; but we are condemned to conscious ignorance, with little inroads here and there—we are aware of our epistemic limits and foibles. We know how much we don’t know and how hard it would be to know it (think of remote parts of space). We know, that is, that we fall short of an ideal. We can’t even remember names and telephone numbers! Yet our knowledge of convoluted grammatical constructions is effortless. If we are that good at knowledge, why are we so bad? Skepticism is just the extreme expression of what we all know in our hearts—that we leave a lot to be desired from an epistemic point of view.[4] We are both paragons and pariahs in the epistemic marketplace. In some moods we celebrate our epistemic achievements, in others we rue our epistemic failures. The reason is that we are genuinely split, cognitively schizoid. Perhaps in the prehistoric world the split was not so evident, in those halcyon hunter-gatherer days, before school, writing, and transmissible civilization; but modern humans, living in large organized groups, developing unnatural specialized skills, have the split before their eyes every day—the specter of the not-known. We thus experience epistemic insecurity, epistemic neurosis, and epistemic anxiety. Our self-worth is bound up with knowledge (“erudite” is not a pejorative). It is as if we contain an epistemic god (already manifest by age 5) existing side by side with an epistemic savage: the high and the low, the ideal and the flawed. I don’t mean that we shouldn’t value what we acquire with the secondary system, or that it isn’t really knowledge, just that it contrasts sharply with the primary system. The secondary system might never have existed, in which case no felt disparity would have existed; but with us as we are now we cannot avoid the pang of awareness that our efforts at knowledge are halting and frequently feeble. The young child does not suffer from epistemic angst, but the adult has epistemic angst as a permanent companion. School is the primary purveyor of that angst today. Education is thus a fraught venture, psychologically speaking, in which our dual nature uneasily plays itself out. The existentialists stressed the agony of decision, but there is also the agony of ignorance (Hamlet is all about this subject, as is Othello).[5]

Freud contended that the foundations of psychic life are laid down in the first few years of life (and sex, not freedom or knowledge, is the dominant theme), shaping everything that comes later. The stage was set and then the drama played out. I am suggesting something similar: the first few years of cognitive life lay down the foundations, and they are relatively trouble-free. Knowledge grows in the child quite naturally and spontaneously without any strenuous effort or difficulty. Only subsequently does the acquisition of knowledge become a labor, calling upon will power and explicit instruction. We might view this transition, psychoanalytically, as a kind of trauma: from ease to unease, from self-confidence to self-doubt. Whoever thought knowledge could be so hard! Compare acquiring a first language with learning a second language: so effortless the first time, so demanding the second. What happened? Now learning has become a chore and a trial. It is a type of fall from grace. The reason we don’t feel the trauma more is that it happens at such an early age (I assume there is no active repression)—though many a child remembers the misery of school. Knowledge becomes fraught, a site of potential distress. Cramming becomes a way of life, a series of tests and trials. But all the while the memory of a happier time haunts us, when knowledge came as easily as the dawn.[6] And then there is death, when all that knowledge comes to nothing—when all the epistemic effort is shown futile. Our divided nature as epistemic beings thus has its significance for how we live in and experience the world. It is not just a matter of bloodless ratiocination.

 

Colin McGinn

[1] I won’t rehearse all the evidence and arguments that have been convincingly given for this conjecture, save to mention the existence of critical periods for learning. Would that such periods could occur during high school mathematics training!

[2] Of course, we still pick up a lot of information without effort just by being in the world, but for many areas of knowledge something like school is required (this is true even for illiterate tribes).

[3] Logan Pearsall Smith: “People say that life is the thing, but I prefer reading.”

[4] Is it an accident that one of the prime distinguishing characteristics of God is his omniscience? He knows automatically what we can never hope to.

[5] The Internet, with its seemingly infinite resources, drives this point home. It also leads to varied and grotesque deformities in our cognitive lives.

[6] Here you see me lapsing into weak poetry, as all theorists of the meaning of life must inevitably do. Sartre’s Being and Nothingness is one long dramatic poem: who can forget his puppet-like waiter, or the woman in bad faith whose hand remains limp as her would-be suitor grasps it, or Pierre’s vivid absence from the cafe? My illustrative vignette would feature a bleary-eyed student studying in a gloomy library while recollecting her carefree sunlit days of cheerful effortless knowing.


Are There Subjective Concepts?

I can imagine four types of position on this question: (i) there are only subjective concepts (none are objective); (ii) there are only objective concepts (none are subjective); (iii) there are both subjective concepts and objective concepts; (iv) all concepts are both subjective and objective (in some respects). I am inclined to accept (iv), with (ii) as my second favorite, so I reject some standard views on this subject. Obviously the question turns on what is meant by “subjective” and “objective” in this connection. If by “subjective” we mean “contributed by the mind and not by the world”, and by “objective” “contributed by the world and not the mind”, then the position I favor is that all concepts are partly a function of the mind and partly a function of the world. That is, our cognitive makeup partly fixes the nature of our concepts, but part is also fixed by reality, as it exists outside the mind. But I am not primarily interested in arguing for this position here; I want to discuss a more limited question—namely, is it possible for there to be both a subjective and an objective concept of the same state of affairs? Can we view (represent, describe, cognize) a single fact in two different conceptual ways, subjectively and objectively? To adopt a well-known locution, is it possible to conceive of a single property both from a particular “point of view” and also from no point of view (from “nowhere”)? Could we start by conceiving a property (fact, state of affairs) subjectively and then develop an objective way of conceiving it? Could we (do we ever) “transcend” a subjective concept and replace it with an objective concept, or simply retain both concepts? Granted, it is perfectly possible to conceive of the same property (object, kind) using two different concepts, but is it ever the case that one of these concepts is subjective and the other objective?[1]

We can accept that there are subjective and objective states or properties or facts, if by that we mean states of subjects and states of objects. Pain is a subjective state because it is a state of conscious subjects, but electric charge is a state of an object that is not a conscious subject (generally). But what about the concepts of such subjective states—are they too subjective? It is not immediately clear what this might mean, but the most obvious interpretation is that the concept of pain can be possessed only by someone who feels pain—you can only know what pain is if you have experienced it. So the concept is subject-relative: there are preconditions for possessing it that require a certain psychological makeup. There are two points to be made about this. The first is that it is not clear why this condition justifies the term “subjective”: isn’t it just a claim about the necessary conditions for possessing the concept? Why should the condition imply that the concept of pain embodies a subjective view of pain? Why not say that the concept is completely objective about pain, even though it can be acquired only by experiencing pain? Why should it imply that pain could be more objectively viewed in some other way? If the concept reveals the nature of pain as it is in itself, why is it described as “subjective”? Isn’t it entirely objective—certainly not limited or defective or biased in some way? Second, isn’t an analogous proposition true of any concept? Any concept, no matter how objective, can only be grasped by beings psychologically equipped to grasp it—isn’t that a tautology? You can only grasp the concept of electric charge if you have a certain cognitive makeup, perhaps involving language with its specific architecture (animals don’t grasp it). So that concept is also subject-relative: it requires a certain kind of mind, a certain cognitive “point of view”. No concept can be possessed by a vacuum! The notion of an objective concept had better not require that there is no kind of mind-dependence. There are sensory “points of view” and cognitive “points of view”, and concepts can be possessed only by beings that bring those points of view to the table. So far we have found no meaningful distinction between so-called subjective concepts and objective concepts. True, the concept of electric charge doesn’t require any specific sensory apparatus to be possessed; but it is equally true that the concept of pain doesn’t require any specific cognitive apparatus to be possessed, such as that required for the understanding of physics.

Consider color: we can agree that color is a subjective phenomenon since it depends on the existence of sensory appearances, but why say that our ordinary concept of color is subjective? That certainly doesn’t follow from the subjectivity of color itself—the concept might be entirely objective. Indeed, I would defend the view that our ordinary concept of color represents color just as it intrinsically is—just as it objectively is—and that it cannot be improved upon by moving in a more objective direction. There is no such thing as an objective conception of color that is distinct from the conception we have by virtue of our experience of color (given that color is a subjective phenomenon). Thinking of color under physical concepts such as wavelength is not a more objective (more accurate) conception of color but rather a mode of thinking appropriate to the physical basis of color (compare pain and C-fiber stimulation). Our concept of red, say, is not one perspective on redness that might be supplemented or superseded by some more objective concept; it tells us what redness actually (objectively) is. So it isn’t that we have a subjective view of color that can be compared with an objective view; we simply have an objective concept of a subjective phenomenon. The fact that we can have this concept only by seeing color ourselves doesn’t entail that the concept itself fails of objectivity or is somehow “subjective”. A concept is a mode of presentation of a property and our ordinary concept of red presents it as it really is, objectively; we don’t render our concepts of color any more objectively penetrating by couching them in physical terms—on the contrary. I would say, then, that our color concepts are not subjective but objective—or better, that they are objective and also subjective in the trivial sense that you can only possess them if you have a certain type of psychological makeup. The nature of color is fully captured in our ordinary concept of color (in our ordinary knowledge of it), and that is what an objective concept is supposed to do (compare the concept of pain). A subjective concept of red might be expressed by “what reminds me of my true love”—since other people don’t share my romantic associations—but that is a far cry from our ordinary concept of red. I therefore think there is no good sense in which our color concepts are subjective. They are concepts of something subjective, but that doesn’t prove that they themselves are subjective—any more than the objectivity of a fact implies the objectivity of any conception of it. Indeed, I would venture to assert that anyone who has an adequate concept of red has precisely the concept of red that I have, i.e. the concept that is derived from inner acquaintance with sensations of red. There is no more objective concept of red, and this concept is not subjective in any interesting sense. In fact, the whole idea that concepts contain “perspectives” on their reference is misguided (based on a false perceptual model); certainly our color concepts and concepts of sensations are not to be understood in that way.[2]

It might be thought that theoretical identification affords an illustration of subjective and objective concepts of the same thing. We have discovered that water is H2O and heat is molecular motion: aren’t these cases in which a subjective concept is coupled with an objective concept? The ordinary concepts embed our modes of sensibility while the scientific concepts don’t; the former can only be grasped by beings that share our “point of view”. But these cases repeat what we have already seen: the concepts are really objective concepts of a subjective fact. The subjective fact is the way water and heat appear to us in sensation, and this is incorporated into the concept (“the thing that appears thus and so”). We have a concept of this appearance—a sensory concept—and the appearance is a subjective fact, i.e. a fact about conscious beings: but the concept itself is not a subjective view of an appearance. It is an objective representation of something itself subjective. Anyone who shares the concept accurately and completely grasps the appearance in question; and the appearance can’t be grasped properly unless that concept is possessed. It is not that the scientific concept is another way to grasp that appearance, which is somehow more objective; it is a concept of a physical thing not of a mind-dependent mode of appearance.[3] Maybe it is true that you can only grasp the concept of that appearance by being subject to it, but why should this imply that the concept inherently involves a subjective way of apprehending what it represents? The concept denotes the appearance “directly”, just as it objectively is; it is not a subject-dependent “perspective” on its referent. It is not that in these cases we have two types of conceptualization of the very same fact or property, subjective and objective; rather, we have concepts of water and heat, the physical things, coupled with concepts of other facts, facts of appearance. The latter concepts are just as objective as the former, since they capture the objective nature of the appearance (which is a subjective fact). To repeat, concepts of the subjective are not thereby subjective concepts—just as concepts of the objective are not thereby objective concepts (“the metal I love best”). There is no coherent sense in which one’s concepts of one’s subjective states embed a subjective perspective on one’s subjective states—a “point of view” on them that might reveal more about the subject than about them. Of course, any concept embeds something about the constitution of the conceiver, since it must be conditioned by a given cognitive structure; but that just gives us the trivial truth that all concepts have a “subjective” dimension as well as an objective one. The paradigm of a subjective way of thinking is one in which a person lets emotion interfere with reason (“Do try to be more objective and not let your emotions run away with you!”), but our ordinary concepts of subjective states are nothing like that—they don’t let emotion affect how they represent the mind.

The correct conclusion, then, is that all concepts are objective: they represent things as they objectively are (except when they don’t, as when we pick something out by reference to our personal idiosyncrasies, e.g. “my favorite metal”, “the color I most dislike”). The ordinary concepts of color or sensation or emotion are objective concepts because they pick out what they do in virtue of actual intrinsic properties of the things in question, not by virtue of accidental relations to the conceiver’s peculiarities. It isn’t that philosophical reflection has discovered that concepts we thought were objective turn out to be merely subjective. Common sense concepts are not subjective in some way that contrasts with the concepts of science. True, we perceive the world in ways conditioned by our given modes of sensibility, which are not necessarily shared by all sentient beings, but from this it doesn’t follow that any of our concepts are themselves subjective.

 

[1] The obvious reference here is to Thomas Nagel’s discussion of subjective and objective conceptions in The View From Nowhere (1986), particularly the first two chapters. However, inspection of Nagel’s text reveals (to me) no outright contradiction between what I maintain and what he says—though there certainly seems to be a difference of attitude and terminology. Much the same can be said about my book The Subjective View (1983).

[2] It is possible to have a subjective view of reality, as when one projects one’s subjective states onto reality, perhaps not realizing that this is what one is doing. This is plausibly what happens with color. Thus one arrives at a view of reality that has subjective elements. But none of this implies that concepts of color are subjective concepts, only that one’s perceptual view of reality involves projected subjective states. One’s entire picture of reality could be constructed from such projected subjective states without any concept being itself subjective (except in the sense of being a concept of a subjective state). There is the conceptual analogue of a use-mention confusion lurking here.

[3] None of this is to deny the distinction between the manifest image and the scientific image: it is just that both “images” are objective.


Modal Metaphysics

In Naming and Necessity, Kripke gives a number of examples of essential properties in order to show that not all necessities are a priori or analytic. He is not concerned to develop a general metaphysics of modality, a systematic classification of necessities and possibilities. But that project is a worthwhile one, and relatively unexplored. I shall offer some remarks on it, hoping to show that there is some interesting structure here: there are patterns and generalizations. I won’t re-defend Kripke’s examples (most of which have sources elsewhere) but take them as given; my question is what general picture they promote. Thus I will accept that there are necessities of origin, kind, and composition: a given human being, say, essentially has the parental origin she has; she is essentially of the kind human; and she is essentially composed of certain biological materials (cells, carbon, etc.). No one could be this human being and not have those properties. These are metaphysical necessities concerning individual human beings. I couldn’t have been born to the British royal family or been a dog or been made of glass—though perhaps someone looking like me could have these properties. By contrast, certain properties of human beings are contingent and could easily be lacked without detriment to identity: I could have had a different occupation or lived in a different place or never pole-vaulted. It would still be me, just living a different life. My history is contingent, but my origin, kind, and composition are not.

Well and good: but is there any deeper story to tell? Do we just have a series of examples of essential and contingent properties with nothing to unify them, or might there be something in common to the examples? Is there a principled dualism or just a list of unrelated instances? With respect to essential properties, I think we can accept two important points. The first is that the list we have so far is complete: Kripke didn’t omit an important class of necessary truths. He never claimed completeness, but reflection suggests that he found it—there are no other de re necessities waiting to be recognized.[1] True, we can analyze the relation of origin and detect various necessities of origin (parents, sperm and egg, strands of DNA); and true, we can distinguish necessities of composition that relate to types as well as to tokens (this table is necessarily made of wood, the type, and also necessarily made of this particular piece of wood, a token); true also, we can distinguish human beings from persons and accordingly raise two different questions about necessities of kind. But there doesn’t seem to be any additional category of de re individual essence that has not been mentioned; our list appears exhaustive (there is surely no necessity regarding bodily organs, for example, since one can be given someone else’s kidney and have an artificial heart implanted).

The second point is that the three categories extracted from Kripke’s text are logically independent of each other: none entails the others. Thus we can’t deduce origin from natural kind or composition from origin. We have three distinct types of necessity here, not reducible one to the other. This is true even if we extend essentialism beyond biological entities, claiming that individual atoms, say, have necessities of origin, kind, and composition: this very hydrogen atom couldn’t have come from anywhere but the big bang (that event) or been an iron atom or been made of anything but quarks.[2] We seem to have run the gamut—that’s about it as far as essence is concerned. Where an object came from, what it is made of, what kind of thing it is—that exhausts its essential nature; everything else is contingent. We might thus declare a triune theory of individual essence—a holy trinity of separable types of necessity. It would have been nice to find a deeper unity, but it turns out that 3 is the magic number—at least it wasn’t 7 or 29! The three essences do seem naturally connected, certainly not opposed to each other, but there is no apparent way to unify them into a single attribute. Hence we can announce the doctrine of Threefold Essence.

It might be supposed that contingency will yield a richer harvest of types. Aren’t there hugely many kinds of contingent property—occupation, location, hobbies, prejudices, talents, acts performed, things owned? Where is the unity here? The class of contingent truths appears to be hopelessly heterogeneous, a mere motley. But I think, perhaps surprisingly, that this is wrong: there is really only one kind of contingent truth! Or better, all kinds of contingent truth have the same unitary basis. Consider states of motion: being at rest or traveling through space. Suppose I am at rest now, sitting quite still: I could have been in motion, pacing around, playing tennis, driving my car. It is entirely contingent what my state of motion is at any given time. The same is true for any physical object: its state of motion is a contingent property of it—it could exist and yet be in a different state of motion. In fact, if you wanted to give a clear and convincing example of a contingent property, you couldn’t do better than to pick motion—motion is the paradigm of the contingent. An object’s motion is not part of its intrinsic nature, what makes it what it is. Intuitively speaking, motion belongs to the career of an object, not its constitution–its behavior not its being. Maybe an object’s potential for motion is written into it, but its actual state of motion is just so much adventitious history—alternative motion is easily imaginable. Even when motion follows strict laws of nature, as with elliptical planetary motion, we can easily conceive it being otherwise: the earth is not what it is in virtue of tracing ellipses around the sun instead of circles. Just as the earth is not necessarily inhabited, so it is not necessarily in elliptical orbit about the sun.[3]

But what about other types of contingent property, say being a philosopher? They are not types of motion. True enough, but notice that motion is involved in their coming to obtain: I became a philosopher by taking a particular path through space, acting in specific ways, moving my hand to write philosophy essays, etc. I came to have the property of being a philosopher by virtue of certain motions (some in my brain). The same thing is true of my more athletic attributes, as well as my musical ones. So I think we can venture this generalization: every contingent property of an object supervenes on motion. Nothing happens but that motion makes it so. The property might not be a state of motion, but its instantiation depends on certain motion properties being instantiated. When I imagine myself not being a philosopher I imagine various motions not having occurred (e.g. moving from Manchester to Oxford in 1972). So what unifies the class of contingent properties is their dependence on motion—which is the paradigm of the contingent. Whenever we conceive of certain properties not holding we conceive of enabling motions not occurring. This is the basis of our sense of contingency. There is really only one kind of contingent truth—the kind that depends on episodes of motion. History is the history of movement, ultimately. Contingency is therefore monistic, tracing back to a fundamental kind of contingency. Necessity comes in three irreducible types, but contingency is always the same. The loose relation between objects and space is the ground of contingency.[4]

Immediately we notice that origin, kind, and composition have nothing to do with motion. They imply nothing about how things move. They are not part of a thing’s dynamic history—what happens to it or what it does. It is a necessary condition of a property being essential to a thing that it not be a motion-dependent property, but it also seems to be sufficient for essence that the property not involve motion. Any truth about an object that does not directly or indirectly relate to its motion is a necessary truth about it. Take color and shape: these are not essential properties, to be sure, and they seem static, but don’t they tacitly involve motion—motion of parts or particles? The shape of an object might be constant for a period of time, but apply appropriate forces and you get movement of parts—hence the shape is contingent, since it can be altered by motion. Color is contingent because it can be changed by the passage of light coming from the object and by the tiny motions of receptors responding to the incoming light. When we imagine shapes and colors being otherwise we imagine certain motions occurring or not occurring. But no change of motion in an object can change it from having the origin it has to having a different origin, and similarly for kind and composition. Properties are essential when and only when they don’t involve motion, and they are contingent when and only when they do.
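The biconditional just proposed can be compressed into quantified modal logic. As a rough symbolization of my own (not anything in the text), let $Ex$ mean that $x$ exists and $\mathrm{MotionDep}(F)$ mean that $F$ directly or indirectly involves motion; the claim is then:

$$\forall x\,\forall F\,\bigl[\,Fx \rightarrow \bigl(\Box(Ex \rightarrow Fx) \leftrightarrow \neg\mathrm{MotionDep}(F)\bigr)\,\bigr]$$

Read left to right, an essential property of a thing must be motion-independent; read right to left, motion-independence suffices for essence.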

This is a pleasing generalization, but can things really be that simple? Does the modal structure of the world divide up so neatly?  Consider numbers: motion is not involved in their having the properties they have and all their (intrinsic) properties are essential.[5] The number 2 is essentially a number, is essentially even, is essentially the predecessor of 3, is essentially a divisor of 16, etc. It has no history that could have been otherwise, no movement that we could imagine reversed—no location, job, hobby, or talent. Nothing happens to it and it does nothing. Movement is alien to its being. Thus it is all essence. It is the same with geometrical figures: they participate in no marches or street-crossings and possess no moveable parts. Contingency is accordingly not in their nature. Contingency enters the life of an object only when history comes to visit, but history consists of motions large and small. In other words, contingency feeds on events, and where there are no events there is no contingency. Then all is necessity. A purely platonic world would lack contingency because nothing would happen in it that could have been otherwise. Universals track no paths through space that they might not have tracked. No journey, no contingency.

There is a line of objection to our neat binary picture that one seldom hears urged today, though it is not without precedent, namely that there is no real contingency in the universe. Everything that happens happens by necessity. This is the opposite of the modal skeptic who denies that anything is really necessary (except maybe analytic truths). Suppose determinism is true, so that everything that happens follows from the laws of nature and hence is nomologically necessary. Suppose too that we regard nomological necessity as a form of metaphysical necessity.[6] Then we reach the conclusion that everything must have happened as it actually happened: there are no contingent facts. Granted, there are impressions of contingency, but these turn out to be illusory upon closer analysis—they confuse what is (allegedly) contingent for this object and what might be true of some counterpart object. We are familiar with the idea that what seems contingent for a natural kind is really what is possible for some other natural kind similar to the one in question (e.g. some liquid similar to water might not be H2O but not water itself). Well, according to the metaphysical view we are considering, when it seems to us that an object might have been otherwise in some respect we are really thinking of some other object that might be that way. In fact, all objects simply play out their essential nature in their actions and reactions (Leibniz held a view like this). If so, all facts are necessary facts–we are merely under an illusion of contingency. I don’t say this view is correct, only that it intelligibly has the consequence that necessity is ubiquitous. In effect, it takes motion to be the necessary unfolding of the intrinsic nature of the universe—though we may not be able to grasp the way this unfolding works. Indeed, it can be maintained that only a view like this can render the world intelligible, since pure contingency is unintelligible (it violates the principle of sufficient reason). If reason is built into the universe, it must work by rational principles, but these can only be necessary truths. Motion, in particular, cannot be arbitrary and spontaneous; it must be written into the nature of things. The world may appear to harbor a deep contingency but this is just an appearance—underneath it has a rational order. I had to become a philosopher; it wasn’t just an accident that could have been otherwise. States of motion are essential properties after all.

I mention this metaphysical position for the sake of completeness, not to endorse it. The position that seems right to me is the usual binary one: we have essences and we have accidents. The essences revolve around origin, kind, and composition, while the accidents owe their existence to the nature of motion. We can grant that motion is governed by natural laws that carry their own type of necessity, and hence strong determinism is true, but that doesn’t add up to full metaphysical necessity. We can conceive these laws being otherwise in a way we can’t conceive origin, kind, and composition being otherwise. There are metaphysically possible worlds in which I became a quantity surveyor and was born in Australia (my parents emigrated) but not worlds in which I am a tiger or made of glass or came from an acorn. The basic structure of modal reality is thus a triad of essential properties, on the one hand, and a unified class of motion-dependent contingent properties, on the other. There is nothing more and nothing less.[7]

 

Colin McGinn

[1] I don’t mean to assert dogmatically that no other necessities will ever be discovered, though that may be true; I mean only that I don’t know of any obvious ones that fail to show up in Kripke’s text.

[2] I won’t discuss whether all natural objects exhibit all three types of essence, animate and inanimate, but I am inclined to think it is true.

[3] As an exercise in astronomical essentialism we can ask what the necessary properties of the earth are. First its origin: it necessarily came from the stuff it actually came from (probably a bunch of celestial dust); second its kind: it is necessarily a planet; third its composition: it is necessarily made from a specific collection of assorted elements. The earth (that object) couldn’t have come from some other source; it couldn’t be an elephant; and it couldn’t be made of jelly. But there might be a planet that looked like earth but had a different origin and composition (and maybe was a living organism).

[4] Events and time are different: a given event couldn’t have occurred at a different time, e.g. WWI occurring in 1963 (though there could have been a similar war at that time). Whether objects can exist at other times is a difficult question: could I have been born in 1940 or 1066?

[5] Isn’t it a contingent property of the number 2 that it is the number of my cats? We can talk that way, but notice that the alleged property is relational not intrinsic; indeed, it is entirely extrinsic to the number. It is not part of the nature of 2 that it numbers my cats—not a truth of arithmetic.

[6] Kripke toys briefly with this idea in Naming and Necessity, p.99.

[7] Modality is more streamlined than we might have supposed, less variegated. God had relatively little to do in creating necessity and contingency compared to creating all the truths. When creating all the possible worlds he followed a few simple precepts. Reality is modally parsimonious.


Are There Subjective Reasons?

I like coffee and you like tea. This gives me a reason to choose coffee, but it doesn’t give you a reason to make that choice. The reason is relative to me—to my preferences. You would choose tea given the choice. Thus we might say that reasons of this type—desire-based reasons—are “subjective reasons”: they are relative to the individual subject making the choice. They are not like “objective reasons” that apply to everyone equally, such as (allegedly) moral reasons, which are indifferent to the individual’s personal preferences. Everyone has a moral reason not to murder his neighbor, no matter how much he might prefer him dead—viz. that it would be morally wrong to do it. But some reasons (perhaps most) are subjective in the sense that they don’t generalize: they apply only to individuals with appropriate desires or wishes or tastes or inclinations. They have no rational hold over anyone else. It would be wrong to criticize someone for not acting on them, given their personal preferences. When it comes to matters of taste, the right response is: “It’s all completely subjective”.

But this is mistaken for two reasons. The first is that your preferring tea gives me a reason to offer you tea, while I contentedly stick to coffee: that is, the fact that you have a preference for tea works as a reason applicable to me to act in certain ways in relation to you. You have a certain property—being a tea-fancier—and that gives me a reason to supply you with tea in appropriate circumstances. So that reason applies to everyone equally: it is objective. It is objectively the case that everyone has a reason to give you tea, not coffee: there is nothing subjective about that. Second, if I shared that property I too would have a reason to choose as you do. So we can generalize as follows: everyone is such that if they have a preference for tea they have a reason to choose tea. It is not as if you could have that preference and it still be a question what you have reason to do. It isn’t “up to you” what it is rational to do, a matter of subjective whim. True, you may not actually have the property in question, but it is an entirely objective matter that if you do, a certain choice is rational. It is an objective property of the property that it requires a certain choice. It functions as an objective reason whenever it is instantiated. There is nothing subjective about the reason once the facts are fixed. The reason may be said to be a conditional reason, i.e. it depends on instantiating certain properties, but there is nothing “subjective” about it. Salt only dissolves if certain conditions obtain—that doesn’t make it “subjective”. We might call desires “subjective states” because they are psychological properties of conscious subjects, but that doesn’t imply that they provide merely subjective reasons. Whenever a reason applies it always generates objective requirements: on others to act in certain ways, and on anyone who has the property that grounds the reason. There is never any purely subjective (or “agent-relative”) rationality: all rationality is objective (impersonal, absolute, general).
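The generalization can be put schematically. As a rough symbolization of my own, let $Px$ mean that $x$ prefers tea and $Rx$ that $x$ has a reason to choose tea; the claim is:

$$\forall x\,(Px \rightarrow Rx)$$

Who satisfies the antecedent varies from person to person; the conditional itself holds of everyone. That is the precise sense in which a desire-based reason is conditional without being subjective.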

We might compare this to subjective facts. There are no purely subjective facts, i.e. facts that have no objective reality. There are psychological facts about subjects, but these are objective facts in the sense that they exist absolutely, not for some people and not others. Bat experiences are facts in the objective world (there is no other). They might be known only by bats, but their existence is not relative to bats—they are part of objective reality (not fictions or dreams or projections). To be is to be objective. Not everyone has bat experiences, but they don’t exist only from the perspective of bats (whatever that might mean). In the same way not everyone has a preference for tea, but that preference exists objectively and gives rise to objective reasons for action that apply to anyone. Even a taste shared by no one else, say a fondness for grilled cactus, has its objective reason-giving power: this idiosyncratic individual can expect to be offered grilled cactus at a barbecue, and if anyone else were to acquire the taste they would have every reason to act on it. There are no reasons that apply to an individual in isolation without implications for anyone else. Rationality is never purely personal in this sense.[1]

 

Colin

[1] We might then say that there are two sorts of objective reason for action: the sort that depends on the psychological make-up of the individual and the sort that doesn’t so depend. The former would include personal tastes; the latter would apply to moral reasons (assuming we accept this view of morality). There are not “subjective reasons” and “objective reasons”.


A Problem in Hume

Early in the Treatise Hume sets out to establish what he calls a “general proposition”, namely: “That all our simple ideas in their first appearance are deriv’d from simple impressions, which are correspondent to them, and which they exactly represent” (Book I, Section I, p. 52).[1] What kind of proposition is this? It is evidently a causal proposition, to the effect that ideas are caused by impressions, and not vice versa: the word “deriv’d” indicates causality. So Hume’s general proposition concerns a type of mental causation linking impressions and ideas; accordingly, it states a psychological causal law. It is not like a mathematical generalization that expresses mere “relations of ideas”, so it is not known a priori. As if to confirm this interpretation of his meaning, Hume goes on to say: “The constant conjunction of our resembling perceptions [impressions and ideas], is a convincing proof, that the one are the causes of the other; and this priority of the impressions is an equal proof, that our impressions are the causes of our ideas, not our ideas of our impressions” (p. 53). Thus we observe the constant conjunction of impressions and ideas, as well as the temporal priority of impressions over ideas, and we infer that the two are causally connected, with impressions doing the causing. In Hume’s terminology, we believe his general proposition on the basis of “experience”—our experience of constant conjunction.

But this means that Hume’s own critique of causal belief applies to his guiding principle. In brief: our causal beliefs are not based on insight into the real powers of cause and effect but on mere constant conjunctions that could easily have been otherwise, and which interact with our instincts to produce non-rational beliefs of an inductive nature. It is like our knowledge of the actions of colliding billiard balls: the real powers are hidden and our experience of objects is consistent with anything following anything; we are merely brought by custom and instinct to expect a particular type of effect when we experience a constant conjunction (and not otherwise). Thus induction is not an affair of reason but of our animal nature (animals too form expectations based on nothing more than constant conjunction). Skepticism regarding our inductive inferences is therefore indicated: induction has no rational foundation. For example, prior to our experience of constant conjunction ideas might be the cause of impressions, or ideas might have no cause, or the impression of red might cause the idea of blue, or impressions might cause heart palpitations. We observe no “necessary connexion” between cause and effect and associate the two only by experience of regularity—which might break down at any moment. Impressions have caused ideas so far but we have no reason to suppose that they will continue to do so—any more than we have reason to expect billiard balls to impart motion as they have hitherto. Hume’s general proposition is an inductive generalization and hence falls under his strictures regarding our causal knowledge (so called); in particular, it is believed on instinct not reason.

Why is this a problem for Hume? Because his own philosophy is based on a principle that he himself is committed to regarding as irrational—mere custom, animal instinct, blind acceptance. He accepts a principle—a crucial principle—that he has no reason to accept. It might be that the idea of necessary connexion, say, is an exception to the generalization Hume has arrived at on the basis of his experience of constant conjunction between impressions and ideas—the equivalent of a black swan. Nothing in our experience can logically rule out such an exception, so we cannot exclude the idea based on anything we have observed. The missing shade of blue might also simply be an instance in which the generalization breaks down. There is no necessity in the general proposition Hume seeks to establish, by his own lights—at any rate, no necessity we can know about. Hume’s philosophy is therefore self-refuting. His fundamental empiricist principle—all ideas are derived from impressions—is unjustifiable given his skepticism about induction. Maybe we can’t help accepting his principle, but that is just a matter of our animal tendencies not a reflection of any foundation in reason. It is just that when we encounter an idea our mind suggests the existence of a corresponding impression because that is what we have experienced so far—we expect to find an impression. But that is not a rational expectation, merely the operation of brute instinct. Hume’s entire philosophy thus rests on a principle that he himself regards as embodying an invalid inference.

It is remarkable that Hume uses the word “proof” as he does in the passage quoted above: he says there that the constant conjunction of impressions and ideas gives us “convincing proof” that there is a causal relation that can be relied on in new cases. Where else would Hume say that constant conjunction gives us “convincing proof” of a causal generalization? His entire position is that constant conjunction gives us no such “proof” but only inclines us by instinct to have certain psychological expectations. And it is noteworthy that in the Enquiry, the more mature work, he drops all such talk of constant conjunction, causality, and proof in relation to his basic empiricist principle, speaking merely of ideas as “derived” from impressions. But we are still entitled to ask what manner of relation this derivation is, and it is hard to see how it could be anything but causality given Hume’s general outlook. Did he come to see the basic incoherence of his philosophy and seek to paper over the problem? He certainly never directly confronts the question of whether his principle is an inductive causal generalization, and hence is subject to Humean scruples about such generalizations.

It is clear from the way he writes that Hume does not regard his principle as a fallible inference from constant conjunctions with no force beyond what experience has so far provided. He seems to suppose that it is something like a conceptual or necessary truth: there could not be a simple idea that arose spontaneously without the help of an antecedent sensory impression—as (to use his own example) a blind man necessarily cannot have ideas of color. The trouble is that nothing in his official philosophy allows him to assert such a thing: there are only “relations of ideas” and “matters of fact”, with causal knowledge based on nothing but “experience”. His principle has to be a causal generalization, according to his own standards, and yet to admit that is to undermine its power to do the work Hume requires of it. Why shouldn’t the ideas of space, time, number, body, self, and necessity all be exceptions to a generalization based on a past constant conjunction of impressions and ideas? Sometimes ideas are copies of impressions but sometimes they may not be—there is no a priori necessity about the link. That is precisely what a rationalist like Descartes or Leibniz will insist: there are many simple ideas that don’t stem from impressions; it is simply a bad induction to suppose otherwise.

According to Hume’s general theory of causation, we import the idea of necessary connexion from somewhere “extraneous and foreign”[2] to the causal relation itself, i.e. from the mind’s instinctual tendency to project constant conjunctions. This point should apply as much to his general proposition about ideas and impressions as to any other causal statement: but then his philosophy rests upon the same fallacy—he has attributed to his principle a necessity that arises from within his own mind. He should regard the principle as recording nothing more than a constant conjunction that he has so far observed, so that his philosophy might collapse at any time. Maybe tomorrow ideas will not be caused by impressions but arise in the mind ab initio. Nowhere does Hume ever confront such a possibility, but it is what his general position commits him to.

 

Colin McGinn

[1] David Hume, A Treatise of Human Nature (Penguin Books, 1969; originally published 1739).

[2] The phrase is from Section VII, [26], p. 56 of An Enquiry Concerning Human Understanding (Oxford University Press, 2007).


Is Solipsism Logically Possible?

It has been commonly assumed that solipsism is logically or metaphysically possible. I could exist without anything else existing. There are possible worlds in which I exist and nothing else does. I can imagine myself completely alone. Seductive as such thoughts may appear, I think they are mistaken; they arise from a confusion of metaphysical and epistemic possibility.

Suppose someone claims that this table in front of me could exist in splendid isolation, the sole occupant of an ontologically impoverished world—no chairs, planets, people, birds, etc. Well, that seems true—those absences are logically possible. But what about the piece of wood the table is made of? This table is made of that piece of wood in every possible world in which it exists, so the table cannot exist without the piece of wood. But that piece of wood came from a particular tree—it could not have come from any other tree. So this table can only exist in a world that also contains the tree in question, since it was a part of that tree. The table and the tree are distinct existences, so the table cannot exist without something else existing—the tree that donated the part that composes it. The table is necessarily composed of that piece of wood and that piece of wood necessarily derives from a particular tree: there are necessities linking the table with another object, viz. the tree. Thus “solipsism” with respect to this table is not logically possible.

Now consider a person, say me. I could not exist without my parents existing, since no person could be this individual and not be born to my parents. This is the necessity of origin as applied to persons. In any world in which I exist my parents exist; more precisely, in any world in which I exist a particular sperm and egg exist (and they can exist only because of the human organisms that produced them). So my existence implies the existence of my parents. Therefore solipsism is not logically possible. But the existential ramifications go further: my parents cannot exist in a world in which their parents don’t exist. And so on back down the ancestral line, till we get to the origin of life: no later organism can exist without the procreative organisms in its ancestral line. Every organism has an origin, and that origin is essential to its identity. But it goes even further, because the very first organism must have had its own inorganic origin, presumably in a clump of molecules, and that origin is essential to it—it could not exist without that clump existing. And that clump of molecules also had an origin, possibly in element-forming stars; so it couldn’t exist without the physical entities that gave rise to it. And those physical entities go back to the big bang, originating in some sort of super-hot plasma. So I (this person) could not exist unless the whole chain existed, up to and including certain components of the big bang. Colin McGinn could not exist without millions and millions of other things existing, granted the necessity of origin. I am linked by hard necessity to an enormous sequence of distinct particulars. I couldn’t be me without them.
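
The structure of the argument can be displayed in modal notation (a sketch; “E” for “exists” is my shorthand, not part of the original):

$$\Box\big(E(\text{me}) \rightarrow E(\text{my parents})\big),\qquad \Box\big(E(\text{my parents}) \rightarrow E(\text{their parents})\big),\ \ldots$$

Since strict implication is transitive, the links compose: $\Box\big(E(\text{me}) \rightarrow E(o)\big)$ for every item $o$ in the generative sequence, back to the big bang. No possible world contains me and nothing else.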

Of course, there could be someone just like me that exists in the absence of my specific generative sequence—though he too will necessarily carry his own generative sequence. Perhaps in some remote possible world this counterpart of mine arises not by procreation but by instantaneous generation—say, by lightning rearranging the molecules in a swamp. But even then that individual would not be able to exist without his particular origins—his collection of swampy molecules and that magical bolt of lightning. Solipsism will not be logically possible even for him. In any case, the question is irrelevant to whether I could exist without my generative sequence: my counterparts are not identical to me. All we are claiming is that solipsism is logically impossible so far as I am concerned—this specific human being. It is my existence that logically (metaphysically) requires the existence of other things—lots of other things. I (Colin McGinn) could never exist in another possible world and peer out over it to find nothing but myself (at least throughout history; I might exist without any other organism existing at the same time as me, my parents both being dead). The same applies to any person with the kind of origin I have, i.e. all human beings.

Why do we feel resistance to these crushingly banal points? I think it is in part because we confuse a metaphysical question with an epistemological question; and we cannot answer the epistemological question by appealing to our answer to the metaphysical question. The epistemological question is whether I can now prove that solipsism is false: can I establish that I am not alone in the universe? In particular, can I establish that my parents really exist (or existed)? Maybe they are just figments of my imagination; maybe I was conceived by lightning and swamp. I cannot be certain that I was not. I cannot even be certain that I have a body. I can establish that I think and exist, but I cannot get beyond that in the quest for certainty. So the existence of my parents is not an epistemic necessity. If I could prove that I am a member of a particular biological species, then maybe I could prove that I must have arisen by sexual reproduction from other members of that species: but the skeptic is not going to let that by; she will demand that I demonstrate that I am a particular kind of organism arising by sexual reproduction. And I will not be able to meet that challenge, since there are conceivable alternatives to it (the hand of God, swamp and lightning, the dream hypothesis). Maybe I just imagine that I am a biological entity with parents and an evolutionary history. So we cannot disprove solipsism in the epistemological sense: for all I know, there is nothing in the universe apart from me.

But this is perfectly compatible with the thesis that it is not in fact logically possible for me to exist without other entities existing along with me: for if I am a biological entity born by procreation, then my existence logically implies the existence of many other things. It is just that I cannot prove to the skeptic’s satisfaction (or my own) that that is what I am. I might come to the conclusion that I had no parents after all, but that will not make it the case that there are metaphysically possible worlds in which I had no parents—this is a matter of the facts about me, not my beliefs about the facts. Thus solipsism is an epistemic possibility but not a metaphysical possibility. It is just like the table being both necessarily made of wood (metaphysical) and also being possibly not made of wood (epistemic). Given that I arose from biological parents, I necessarily did; but it is an epistemic possibility that I did not so arise—I could be mistaken about this.

It would be nice to disprove solipsism, but it isn’t insignificant to show that it is not in fact logically possible, given the actual nature of persons. Persons are the kind of thing that implies the existence of other things (granted that we are right in our commonsense view of what a person is). In this they resemble many ordinary biological and physical entities, which also have non-contingent origins. We may feel ourselves to be removed from the world that surrounds us, as if we are self-standing individuals, ontologically autonomous—as if our essential nature could subsist alone in the world. But that is a mistake—we are more dependent on other things than we are prone to suppose. We are more enmeshed in what lies outside of us than we imagine. We suffer from illusions of transcendence and autonomy. We are not free-floating egos that owe no allegiance to anything else; we are essentially relational beings, our identity bound up in our history. We cannot be metaphysically detached from our origins, proximate and remote.

The same point applies to our mental states: they too cannot be separated from other things. Could this pain exist in complete isolation? That may seem like a logical possibility, but on reflection it is not: first, this pain’s identity depends on its bearer—it could not be this pain unless it had that bearer; and second, the identity of the bearer depends on the kind of history it has. So this pain could not exist without the generative sequence that gave rise to its bearer, a particular living organism; and that depends upon billions of years of history, going back to the big bang (and before). There is no possible world in which this pain exists and certain remote physical occurrences don’t exist. There are necessary links connecting present mental states with remote physical occurrences—from the joining of a particular sperm and egg, to the origin of mammals, to the production of chemical elements. My pains can’t exist in a world without me (you can’t have my pains), but I can’t exist in a world without my parents, and my parents can’t exist in a world without their remote primate ancestors, and these ancestors too had their own necessary origins. The pains that now occur on planet Earth (those pains) could not exist in a possible world without an elaborate biological and physical history that coincides with their actual history.

It is an interesting fact that we recognize these necessities. On the one hand, we have quite strongly Cartesian intuitions about the person and the mind, which is why dualism and solipsism appeal to us—these seem like logical possibilities. But on the other hand, we are willing to accept that the person and mind are tied to other entities with bonds of necessity—as with the necessity of personal origin. We recognize that the identity of a person cannot be radically detached from all extrinsic and bodily things—parents, sperms, and eggs. These are anti-Cartesian intuitions insofar as they dispute the self-subsistence of the self.[1] We are thus both Cartesian and anti-Cartesian in our modal instincts about persons. It is as if we know quite well that the self cannot be a self-subsistent non-material substance without logical ties to anything beyond itself, even though in certain moods we fall prey to such thoughts. We know that our essence implies the existence of other things—as demonstrated by the necessity of origin—and therefore solipsism is not in fact logically possible. We are modally ambivalent about self and mind, but not confused.

 

Colin McGinn

[1] Kripke mentions the anti-Cartesian consequences of the necessity of origin at the very end of Naming and Necessity (footnote 77, p. 155). What is surprising is that neither he nor anyone else seems to have noticed the consequences for solipsism (including myself, and I published an article on the necessity of origin in 1976). But it is really just a fairly obvious deduction from the necessity of origin (originally proposed by Sprigge in 1962, as Kripke notes).


An Obvious Theory of Truth

Truisms are welcome in the theory of truth. Here is one: the sentence “London is rainy” is true if and only if the entity referred to by “London” has the property expressed by “rainy”. Generalizing, a sentence (or proposition) is true just in case the reference of the subject expression instantiates the property expressed by the predicate expression. This formula combines two concepts: a semantic concept of reference (denotation, expression) and the concept of instantiation understood as a non-semantic relation between objects and properties. Truth results when the entities denoted (objects and properties) stand in the instantiation relation. So we can say that truth consists of a combination of a semantic relation and a non-semantic relation: it is the “logical product” of these two relations. The analysis of truth is given by a “vertical” relation to the world and a “horizontal” relation between worldly entities. Thus “true” expresses a complex property comprising representation and instantiation—that is what the concept amounts to. Both are necessary for truth and together they are sufficient. Moreover, the formula is the most banal of truisms: of course a sentence is true if the things it talks about have the properties the sentence attributes to them. The sentence “snow is white” is true just if the stuff it refers to (snow) has the property the sentence ascribes to it (being white). How could this fail to be correct?[1]
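
Displayed compactly (the notation is mine): write ref for the reference relation, exp for property-expression, and Inst for instantiation; then for an atomic sentence we have:

$$\ulcorner Fa \urcorner \text{ is true} \iff \mathrm{Inst}\big(\mathrm{ref}(\ulcorner a \urcorner),\ \mathrm{exp}(\ulcorner F \urcorner)\big)$$

Here ref and exp are the “vertical” semantic relations and Inst the “horizontal” worldly one; truth is their logical product.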

Some minor wrinkles can be quickly ironed out. Is the theory (let’s call it that) ontologically committed to properties in some objectionable platonic sense? I stated it that way, but this is not integral to the theory (though metaphysically unobjectionable, in my view): we could state it in terms of concepts or even just predicates—as in the notion of an object falling into the extension of a predicate. Nor is the theory committed to sentences as truth-bearers: we can run it on propositions, statements, beliefs, what have you, so long as we have a relation like denotation to work with.  It might be thought that the theory is restricted to subject-predicate sentences and won’t extend to quantified sentences, but this limitation is easily remedied by adding that the objects referred to or quantified over should instantiate whatever is predicated of them. Whatever objects are semantically relevant are the ones that need to do the instantiating if the sentence is to be true. What about moral truths? Well, if there are such truths the theory commits us to the idea that moral sentences can be true only if there are moral properties (or concepts or predicates) for objects to instantiate—but this will presumably be so if there are moral truths to start with. What we don’t get are nonsensical truths, because there will be no objects and properties to stand in the instantiation relation (e.g. borogroves and mimsiness). We just have the commonsense thought that whether a sentence is true depends on what objects have which properties. If you say that an object has a property and it does, your statement is true; but if you say that an object has a property and it doesn’t, your statement is false. Clear?

What is surprising is that this theory, if we can dignify it with that word, has not been mooted (at least to my knowledge), since it seems blindingly obvious.[2] Some theories in its vicinity have been mooted, but not this theory exactly. It certainly carries the whiff of the correspondence theory, but it invokes no relation between whole propositions and facts, speaking instead of objects and properties and associated sentence-parts. The world comes into the picture, but not by way of a correspondence relation between facts and propositions. Nor is it a redundancy theory, since it defines truth as a complex property constituted by substantive relations; still less is the theory deflationary. It is also not the same as Tarski’s theory: the schema employed does not repeat on the right the sentence mentioned on the left (so it doesn’t satisfy Convention T) but rather embeds semantic vocabulary and the notion of instantiation. It is possible to universally quantify an instance of the schema and produce a well-formed result, whereas that is not possible for Tarski’s schema. We can say, “For all propositions x, x is true if and only if the objects referred to in x instantiate the properties expressed in x”, but we can’t say, “For all propositions x, x is true if and only if x”, because that is not well-formed (“x” being an individual variable not a sentence letter). Also, the definition proposed by the obvious theory is explicit, not inductive, and applies to any sentence in any language (we are not defining “true-in-L”).[3] The theory is closer to a formulation championed by P.F. Strawson: a statement is true if and only if “things are as they are thereby stated to be”. The spirit looks the same, but what are these “things”, and where is the reference to properties and their instantiation? It sounds a lot like saying, “if and only if reality is as stated”: but that is not the same as the formulation in terms of objects and properties. Perhaps the obvious theory could be read as a more explicit version of this type of theory; and indeed it looks very much like what people were driving at all along. For surely we want to say that the truth of a statement turns on the instantiation of properties by objects combined with suitable semantic relations to those objects and properties. To say something true you have to refer to an object and then assign a property to it that it actually has—obviously.
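
The contrast with Convention T can be put in symbols (my shorthand again, with ref(x) and exp(x) abbreviating “the objects referred to in x” and “the properties expressed in x”):

$$\forall x\,\big(\mathrm{True}(x) \leftrightarrow \mathrm{Inst}(\mathrm{ref}(x),\ \mathrm{exp}(x))\big)$$

This is well-formed, whereas $\forall x\,(\mathrm{True}(x) \leftrightarrow x)$ is not, since the right-hand side of Tarski’s schema requires a sentence where the individual variable sits.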

Consider the locution “true of”: what is its analysis? Obviously this: a predicate is true of an object if and only if the object has the property expressed by the predicate. This is the core of the obvious theory: truth itself is defined by reference to “true of” (as Tarski defines truth in terms of “satisfies”). We might say that “true of” is the basic notion in the theory of truth. We reach truth of propositions by plugging in a singular term: from “F is true of x” we derive “F is true of a” where “a” is a closed singular term (say a proper name). Thus the sentence “Fa” is true just if the predicate “F” is true of the object referred to by “a”. The other theories of truth remain neutral on the analysis of “true of”, which is a limitation in any attempt to define the concept of truth generally; but the obvious theory puts it at the center. To say something true you have to apply a predicate to what it is true of. And that is a matter of picking a predicate that expresses a property that applies to the object.
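
In the same notation, “true of” supplies the basic clause, and sentence-truth is obtained by plugging in the referent of the singular term:

$$\mathrm{TrueOf}(\ulcorner F \urcorner, x) \iff \mathrm{Inst}\big(x,\ \mathrm{exp}(\ulcorner F \urcorner)\big)$$

$$\ulcorner Fa \urcorner \text{ is true} \iff \mathrm{TrueOf}\big(\ulcorner F \urcorner,\ \mathrm{ref}(\ulcorner a \urcorner)\big)$$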

The OED defines “true” as “in accordance with fact or reality”. Fair enough, but what is “in accordance with” and what is “fact or reality”? The correspondence theory suggests some sort of isomorphism between propositions and complexes called facts. The obvious theory says that truth is a matter of identified objects instantiating assigned properties; so accordance is simply objects having the properties they are said to have. A statement is in accordance with reality just on the condition that it assigns properties to objects as they are actually distributed, i.e. as they are. Fact and reality are just objects having properties. This is a substantive definition of truth meeting standard conditions of adequacy: it defines truth in terms of notions severally necessary and jointly sufficient; it is non-circular; and it permits a universally quantified formula that captures our intuitions about truth. To repeat it in slightly different language, a proposition is true if and only if its subject matter (objects and properties) exemplifies suitable instantiation relations. Truth is a matter of objects instantiating properties in the way alleged by a proposition. To understand the concept of truth, then, we need to grasp this complex of concepts: reference, object and property, instantiation. It is not simply a device of semantic ascent or essentially redundant or logically simple or merely a means of abbreviation. It is a thick, analytically deep concept with a definite nature. Yet its nature is entirely (indeed painfully) obvious—not in the least bit surprising. The truth about truth is a true truism.[4]

 

[1] The same form of analysis can be applied to the concept of justification, which I take to be confirmation of the theory: a proposition is justified if and only if there are good reasons to believe that the objects referred to instantiate the property expressed. Likewise, we can say that it is a fact that p if and only if a certain object instantiates a given property, e.g. London instantiates being rainy (notice that no semantic relation is involved here).

[2] Why this should be is not clear to me: perhaps it is thought too obvious, or perhaps less obvious theories are confounded with it (correspondence theories).

[3] Devotees of Tarski’s theory will want to know how to provide recursion clauses for logical connectives. This is easily done: for example, “p and q” is true if and only if the objects and properties referred to in “p” stand in the instantiation relation and the objects and properties referred to in “q” stand in the instantiation relation; and similarly for “or” and “not”.

[4] Why is the truth about truth a truism while the truth about (say) knowledge is not? Because there is nothing more to the truth of propositions than objects instantiating properties combined with the fact that propositions stand for things. There is nothing hidden here, nothing to be discovered. Other theories purport to say something interesting, but the obvious theory is content with mere accuracy.


Quantifier Concepts

Would it be quixotic to suppose that quantifiers hold the secret to human success?[1] Could the student of quantification theory be studying the ultimate differentia that separates humans from the rest of nature? That would be a delightful result for the logically minded; and I think there is actually a good deal to be said for it. For it is plausible to suppose that what other animals lack, cognitively speaking, and we splendidly possess, is the ability to engage in quantifier-driven reasoning. We grasp what Quine called the “apparatus of quantification” but they do not—though they no doubt grasp much else. That apparatus, to put it briefly, involves the existential and universal quantifiers, variable binding, embedding, scope, domain, and a distinctive syntactic form—not to mention the non-standard quantifiers “most”, “a few”, “many”, and others. Suppose all this is represented in the human language of thought. Then we can surmise that other animals, though cognitively gifted in many ways, lack an internalization of the apparatus of quantification—though they may well entertain singular and general concepts, truth functions, psychological concepts, etc. At any rate, there are possible beings that have mastery of a conceptual apparatus just like ours except that quantification is not included. The question is what capacities they would thereby lack that we possess, and which confer signal advantages on us. What do quantifiers do for you? What mental achievements do they make possible?
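
A standard textbook illustration (not the author’s example) of what binding and scope add:

$$\forall x\,\exists y\ L(x, y) \qquad \text{versus} \qquad \exists y\,\forall x\ L(x, y)$$

Read $L$ as “loves”: the first says that everyone loves someone or other, the second that some one individual is loved by everyone. Nothing marks the difference except quantifier order and the pattern of bound variables: precisely the apparatus in question.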

Quantifiers are obviously deeply embedded in our thinking, so it is not easy to tease out their contribution, but certain areas of human thought clearly depend on them.  First, science: where would science be without the universal quantifier? A law is precisely a generalization about all things of a certain kind (we can include ceteris paribus laws). If you don’t grasp the concept all, you don’t know what a law is. Similarly, you have to grasp that if some things lack a certain property then it is not a law that all things have that property (I omit some obvious qualifications). Quantificational reasoning is essential to scientific thought (animals don’t seem strong with science): science consists of universally quantified propositions. Second, mathematics: this too is shot through with quantificational structure (it was mathematics that caused Frege to invent modern quantification theory). The most basic axiom of arithmetic is universally quantified: for every number, there exists a successor number. Peano’s axioms are quantificational in form. The embedding of quantifiers is rife in mathematics. Geometry is much the same: we have theorems about all triangles, circles, etc. Moreover, according to some views, arithmetic reduces to quantification theory (plus set theory, which is itself formulated by means of quantifiers). Standard first-order predicate logic is clearly quantificational, but so is second-order logic (which greatly increases expressive power). Propositional logic discerns no quantifiers in its formulas, but it is tacitly quantificational itself, since sentence letters are interpreted generally: for any p and q… We understand it to express universal propositions. That is what logical necessity consists in. Modal logic involves quantification over possible worlds (necessity and universality are close cousins). Inductive logic involves moving from singular premises to general conclusions and would be impossible in the absence of the concept everything. Falsification depends on there being some counter-instance to a generalization. All this would be impossible for beings without a mental representation of the apparatus of quantification. When we reason we move from the particular to the general and the general to the particular, and this requires grasping how all and some work; not to grasp these principles would be a severe cognitive deficit (“quantifier derangement syndrome”). If Russell is right, definite descriptions are not possible without quantification (do animals grasp definite descriptions?); they are built from the quantifiers “all” and “some”. Many pronouns function as bound variables. Lastly, cosmology requires the use of the ultimate universal quantifier: for it concerns the nature of everything (ditto metaphysics). Here we ascend from specific domains to the entire domain of the universe. It is remarkable that we have such an all-encompassing concept—we can think about everything there is. Can animals ever think about the whole enchilada? I doubt it: they think specific and particular, local and limited. Maybe their thought is largely demonstrative, or maybe it employs a medium alien to human thought. In any case, our cognitive resources include the extensive and intricate apparatus of quantification, which greatly expands our powers of mental representation and hence our understanding of the world.[2] In turn, this enhanced understanding feeds into our actions and mode of life. 
We are quantifying creatures (other creatures could be rational beings but not quantifying beings).
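
Two of the examples just mentioned, in standard symbols (the formulations are textbook ones, not drawn from the text): the successor axiom of arithmetic, and Russell’s analysis of “the F is G”:

$$\forall n\,\exists m\ \big(m = S(n)\big)$$

$$\exists x\,\big(Fx \wedge \forall y\,(Fy \rightarrow y = x) \wedge Gx\big)$$

Note the embedding: in each case one quantifier occurs within the scope of another.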

Let me note two further features of quantifier concepts that set them apart. We know from the work of logicians that they are not semantically singular terms but a sui generis type of expression; they occupy their own category in mental grammar. It is sometimes said that they are second-order concepts, i.e., concepts of concepts, and this sets them apart from their first-order brethren. To grasp them, you have to be able to ascend a level and predicate them of a concept: this requires a cognitive leap, a new mode of mental representation. Creatures with only first-order concepts are not guaranteed to be capable of achieving this new level, however hard they think. Presumably, it occurred at some point in human cognitive evolution, perhaps triggered by a specific mutation affecting brain circuitry, and not shared by other species. Perhaps we have a gene for quantification! Some piece of brain rewiring caused us to be able to grasp second-order concepts like all and some, where there was no such grasp before. Then we were off to the races, with science, logic, mathematics, and cosmology on the horizon. A new cognitive trick catapulted us to the next intellectual level. Imagine if you lacked these concepts and were stuck at the level of the specific and particular: then a super-scientist rewires your brain to give you a grasp of quantification. Wouldn’t that be a stunning intellectual breakthrough, opening up vast avenues of new understanding and reasoning power? The child picks it all up automatically along with the other remarkable resources of human language, but that doesn’t mean it isn’t a signal achievement—imagine losing it one day! Quantification is a classy mental act, belonging only to the intellectual elite, by no means proletarian.
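
Generalized quantifier theory gives the second-order point a precise form (a standard rendering, not the author’s):

$$\mathsf{All} = \lambda F.\,\forall x\,F(x) \qquad \mathsf{Some} = \lambda F.\,\exists x\,F(x)$$

Both are of type $\langle\langle e,t\rangle, t\rangle$: they take a first-order concept (type $\langle e,t\rangle$) and return a truth value. Grasping a quantifier is thus grasping a concept of concepts, one level up from the concepts it applies to.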

Secondly, quantifier concepts are unique as to content: no other concept is such a bad candidate for empiricist treatment.[3] How could the concepts all and some be derived by a copying operation from sensory stimuli? They are not concepts of a sensory quality, or of any mental operation. I am tempted to call them abstract, but that is just a vague way to register their distinctness from other types of concept. I would guess they are innate—for how could they be picked up from observation of the environment? They enter our thought at an early age and shape it pervasively, but their origins are obscure. They are part of the universal human lexicon, but they name nothing and describe nothing. Form the thought “Everything changes” and ask yourself what is going on in your mind: you will find no discernible constituent corresponding to the quantifier—no image or feeling or disposition. There is nothing…concrete here. Yet you mentally took in all of reality! How is that possible? What do you have that your cat doesn’t have? I mean: how is the concept of universality mentally represented? Can we have a description theory of it? How about a causal theory? Neither seems remotely feasible. We are used to the words (and their corresponding logical symbols) but what is the content exactly? Where is the cognitive science of quantification? What we have here is a complex and intricate biological adaptation of enormous utility but quite opaque in its mode of operation. It took logicians thousands of years to identify it and describe its logical character, but its psychology is not even in its infancy (or its neuroscience). The point I am urging is that it has some claim to distinguish us from other thinking beings on our planet. Let us grant that bees, whales, and dolphins have communication systems, along with associated cognitive structures—but it is a further claim to maintain that they understand quantification as humans do.[4] All humans do understand it (short of pathology), but there is no evidence that other animals can engage in quantificational reasoning (just consider the difficulties of embedded quantifiers).

It is not implausible to suppose that humans go through an ontogenesis in which “all” begins locally and then gradually widens to take in more and more of reality. Thus the child initially applies “all” to all the marbles on the table or all the apples he can see, later expanding the domain to include all the marbles or apples on earth. But that isn’t enough to yield the adult concept: the child must include all past and future marbles and apples, as well as any found elsewhere in space. Then there are all the possible marbles and apples. Finally, we reach everything there is. The original concept (innately present, we can suppose) already contained this potential, but it undergoes a process of maturation that ends with the cosmic all. This would be in conformity with standard views of linguistic and conceptual development. But the process has a special interest because the concept is so all-encompassing in its nature: its enormous reach signifies a kind of supremacy among concepts—it is the king of all concepts, as it were. Every other concept is subordinate to it, literally. Doubtless, it is a concept that has fueled the acts of many a despot or madman, or metaphysician or cosmologist (a “theory of everything”). God is described as all-powerful, all-knowing, and all-virtuous: the recipient of every estimable universal quantification. So much majesty revolves around this concept—its place in human thought is unrivaled.[5] Once the child has fully absorbed this concept (or it has fully matured within her) she becomes a being of a different cognitive order from the run of terrestrial animals, including her former self. Morality is stamped with it too: duty is what everyone ought always to do in any circumstances (remember Kant’s categorical imperative, in which universalization is paramount). We would not be the cognitive (and emotional) beings we are without this capacious and ubiquitous concept. When Aristotle enunciated his famous syllogism beginning “All men are mortal” he was drawing attention to the mighty power of that little word “all”: once you know that all F’s are G you know something of high significance from which many interesting things follow. In it may reside our capacity for the type of thought that defines human nature.[6]
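
The widening described above can be pictured as the progressive relaxation of a domain restriction (my gloss):

$$(\forall x \in D_{\text{table}})\,\phi(x)\ \leadsto\ (\forall x \in D_{\text{Earth}})\,\phi(x)\ \leadsto\ (\forall x \in D_{\text{past and future}})\,\phi(x)\ \leadsto\ \forall x\,\phi(x)$$

with the adult concept as the unrestricted limit.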

 

Colin McGinn

[1] I am slightly misusing the word “quixotic” here, but the alliteration was irresistible.

[2] George Eliot reminds us of a downside to this mental advantage over other animals: “But this power of generalizing which gives men so much the superiority in mistake over the dumb animals…” (Middlemarch, 592). Our ability to generalize lays us open to errors of thought unknown to animals lacking this capacity; and it must be said that quantifiers can cause us no end of trouble—especially the standing temptation to abuse “all” in the presence of “some” (quantificational malfeasance).

[3] I know this is saying a lot given empiricism’s poor track record, but a bit of overstatement may be forgiven in the light of the fact that one never hears much about quantifier concepts from empiricists (I don’t recall Hume discussing them at all). They are expected to take care of themselves.

[4] Given other differences between human and animal thought, it might be more apt to compare humans to other hominids now extinct. What if Neanderthals matched humans cognitively except where quantification is concerned? That could be the reason for our relative success.

[5] What is the connection between death and the universal quantifier? Simply this: when you die it is all over. Everything about you has gone. You are now nothing. The quantifiers say it all. We understand what death is because we can use quantifiers this way.

[6] People often discuss this question as if it is an all-or-nothing matter—either we share thought with animals or we don’t. But a more nuanced discussion can focus on whether there are any areas of human thought inaccessible to other thinking beings. Thought may not be homogeneous in its nature and origins (similarly for language). Quantification may have been added quite late in the game.
