Imagination, Knowledge, and Other Minds

We don’t know what it is like to be a bat, a shark, or an octopus. There are facts of the matter—phenomenological facts—about these things, but we don’t stand in the knowledge relation to them. We don’t grasp them, apprehend them, conceptualize them. Our knowledge reaches its limits with these facts; we can only be ignorant of them. This is a truth of epistemology, like the truth that we don’t and can’t know certain facts about the past, or remote regions of the universe.
But why don’t we know these facts? What is the source of our ignorance? The answer is that we cannot imagine them. Our imagination hits a brick wall when we try to get our minds around such facts—facts about (some) other minds. We know no such limitation when it comes to our own mind; that mind we know by direct acquaintance. If we had a similar acquaintance with the minds of others, we would not be so limited; nor would we need to use imagination to grasp the facts in question. We resort to imagination to know the minds of other creatures, there being nothing better to go on, but imagination will not serve us in the present instance. The reason is that our imagination proceeds from a basis in our own self-acquaintance, and cannot radically transcend that basis; but the minds of bats, sharks, and octopuses are too different from our own mind for our imagination to provide what is needed. We suffer from cognitive confinement brought on by imaginative poverty. Notice that we cannot hope to sidestep imagination by relying on pure reason—the faculty by which we know mathematics, among other things. We have perceptual faculties and rational faculties, but they don’t cover all of reality; imaginative faculties enable us to plug the gap in some cases—minds similar to our own—but not in all cases. All three faculties have their limitations, overcome (partially) by the other faculties we possess, but in the case of alien minds we encounter a region of reality that resists all of our epistemic faculties. And it took lowly creatures (as we think) to teach us this lesson, as if they were saying, “Just try to understand us—you won’t get far”.
But perhaps we can analyze the reasons for our imaginative failure: what exactly is it about the minds of these creatures that bars us from imagining their phenomenological interior? A natural suggestion is that they have sensations we don’t have—as sighted people have sensations blind people don’t have. So, there is a kind of localized epistemic transcendence: it isn’t that all of a bat’s consciousness is off limits for us limited humans. We know quite well what being a bat is like in other respects, of which there are a great many more; our ignorance of what it is like to be a bat is only partial. About this some have retorted that even this is not so clear: perhaps the bat’s echolocation sensations are similar to our visual sensations (they function the same way and have the same abstract structure), or they are like our auditory sensations (being processed through the bat’s ears), though higher pitched. But I think there is a deeper point to be made: we don’t know what it’s like to be a bat in a broader sense—whether or not we can grasp the nature of their sonar sensations. We don’t know (can’t imagine) what it’s like to be a bat, not merely what it’s like for a bat to use its sonar sense (same for the shark and the octopus). In this sense we don’t know what it’s like to be a bird (or a whale or a porcupine or a snake).
For these creatures are just too different from us physically and psychologically for us to be able to enter imaginatively into their mode of consciousness. At any rate, we don’t fully grasp it (some of it we can grasp). What is it we can’t grasp or imagine? There is no obvious label for this thing, or short description of it, but maybe I can point us in the right direction by saying that we don’t grasp the animal’s mental organization—its way of combining the elements that make up its mind. We don’t grasp the kind of complex self that the animal inhabits—its lived world, its overall condition of consciousness (“form of mental life”). As Descartes says, we can’t imagine a thousand-sided figure (though we can intellectually grasp the concept), not because we can’t imagine the nature of its elements, but because we can’t form a mental picture of a figure with precisely that many sides. Likewise, we can’t imagine the total mental life of an animal that is very remote from our own awareness of things, including our specific mental organization—thought patterns, sensory acuities, memory capacities, range of knowledge, linguistic mastery, and emotional make-up. It isn’t just a matter of a particular type of sensation but of the animal’s essential being. In fact, there are few if any animals whose minds are fully imaginable by us (and the same applies to human infants and earlier hominids). Our ignorance here is widespread and systematic; and it stems from our imaginative limitations. Knowledge of other minds by means of imagination is inherently fragmentary and glancing. And it isn’t going to be improved upon any time soon, since our imaginative faculties are pretty much fixed and finite (stemming from our perceptual faculties). It certainly won’t be overcome by acquiring scientific knowledge of the brain: this kind of propositional physical knowledge is not sufficient to provide for imaginative representation of an alien mind. Imaginative knowledge is sui generis and not derivable from perceptual and ratiocinative knowledge. One might be tempted to adopt an empiricist theory of imagination (as did Locke and Hume), holding that mental images are faint copies of sensory impressions; but that theory has many problems, notably that images have different properties from perceptions.[1] It may be that perceptions provide necessary conditions for images to arise in the mind, but perceptions are not sufficient for images, even with an attenuation process added. So, imaginative knowledge operates by its own principles and has its own limitations, different from those of perception and pure reason.
We are accustomed to the twofold distinction between a priori and a posteriori knowledge, but really we need to add a separate category—knowledge based neither on pure rational insight nor on sense experience (nor both) but on exercises of the imagination. What else, then, might be so based, and so limited? An obvious case would be knowledge of fiction: we use our imagination to mentally picture fictional characters and their situations, and this affords us knowledge of them (e.g., Hamlet is weak-willed and vacillating). Also, imagination enters into the production of modal knowledge, as we imagine states of affairs that test a claim to necessity or contingency. It seems plausible that ethics involves imaginative deployment too, because we have to think through possible scenarios to arrive at ethical conclusions.
Here our imaginative faculties may let us down, given the complexities of the real world (imagination is not at its best with complexity and nuance, the quantitative and the subtle). In some areas we are compelled to resort to imagination for want of anything better, e.g., knowledge of other minds, but the faculty is faltering and often feeble, leading to areas of irremediable ignorance.
As a thought experiment, consider the following: we visit a planet on which lives a people radically different from us mentally, so different that we can gain only the vaguest idea of what goes on in their heads (our imagination draws a blank). Their political system is shaped by this (to us) alien form of consciousness, so much so that our political scientists would like to explain its origins and workings. But they are precluded from arriving at the explanation they seek because that would require psychological knowledge they don’t and can’t have. They might well become mysterians concerning this planet’s politics, simply because the explanandum exists in an area of reality they are barred from understanding.
I think we often have only the faintest understanding of the social behavior of terrestrial species, because we fail to grasp the make-up of the minds of the animals in question: the sociology of sharks eludes us because their psychology does (it’s a strange world they live in—for us). Much of the biological world is hidden from us by the boundaries of our imagination, which cannot be overcome by perception and reason. We don’t know what it’s like to be a bat (emotionally, personally, existentially), so we don’t know how bats relate to each other—not completely, not in the way we understand our own social relations. Cognitive closure thus afflicts zoology as much as brain science (inasmuch as the latter seeks an explanation of the conscious mind). Indeed, we don’t understand a lot of the behavior of our house pets, individual and social, precisely because their minds are a (partially) closed book to us; and we know that we don’t. There are pockets of mystery in them as far as we are concerned: what it is psychologically to be a cat, say, is beyond our comprehension. We just can’t imagine what goes on in their secret cat minds, and we might be very wrong even in our more confident assumptions. This ignorance might well be permanent, pending an upgrade to our imaginative faculties. Imagination is certainly liberating in some respects, but it is also confining. It is not a font of unlimited understanding.[2]

[1] See my Mindsight (2004), in which I itemize the many differences between images and percepts.

[2] To what extent our imaginative limits affect our ability to solve the mind-body problem is an interesting question, but I have nothing useful to say about it at this time.

2 replies
  1. AJM says:

    As a lurker on your site and a reader of your work, I’d be interested to get your opinion on the question now fiercely debated in the AI community about the “brain as computer.” Specifically, Geoffrey Hinton has said that love must be a brain process, and since computers (deep learning networks) can replicate brain processes (even better over time, if not there yet), a neural net will be able to recreate “love.”

    I am sure you find this objectionable. But I’d be eager to understand what you see as the “biological underpinnings” of sensory phenomena. Are there things the brain does that you believe, even someday (with enough compute), a neural net will not be able to do? It seems, again, that cognitive scientists are moving toward a belief that all human thought might just be statistical inference.

    Finally, I’d be interested if you could point me to any of your writings (or those of others) on this fascinating and timely subject.

    • Colin McGinn says:

      It’s quite a big question, which I have discussed in various places (e.g., chapter 6 of The Mysterious Flame). The basic point is that we don’t know whether what the brain does computationally or functionally is all that is relevant to its ability to be the basis of consciousness. Is what a diamond “does” all that is relevant to its nature? Acting like something is not the same as being indistinguishable from that thing. Computational properties are not the only properties that exist.

