Sentience and Morality
It is often said these days that morality applies when and only when sentience is present, but the exact connection between the two is not often spelled out. The thought is that the states of insentient objects impose no obligations on us while the states of sentient beings do. Obviously it is not intended that the merely physical states of sentient beings impose such obligations; the idea is that it is in virtue of the mental states of sentient beings that moral obligations apply. That is, it is states of consciousness that form the necessary and sufficient conditions for morality to apply. But what is it about consciousness that confers value on it? Not merely its subjectivity or its intentionality or its rationality or its innerness, since many conscious states have these features but have nothing particularly to do with morality—for instance, seeing yellow or thinking about the moon. There is nothing good or bad about having such mental states considered in themselves: they impose no moral obligations on moral beings (we are not obliged to increase the incidence of seeing yellow). Rather, there is a subclass of mental states that are morally relevant—the class that includes desire, happiness and unhappiness, agreeableness and its opposite, good experiences and bad. Pain and suffering are paramount in this class: from physical injury to bad smells to romantic pangs to boredom, lassitude, and despair. All the things that we don’t like, that turn us off, that bring us down, that ruin our day: thwarted desires, nasty sensations, unpleasant forebodings. These are the mental states that create obligation and trigger moral action, along with their positive counterparts. Thus “Pain is bad” is the prime example of a moral proposition, because it leads directly to ought-statements such as “You ought to do something about that person’s pain”. It is this type of state of sentient beings that is deemed morally significant.
But this doesn’t settle the question of the connection between sentience and morality, because we have yet to explain what sentience as such has to do with morality. Why does the property of consciousness have such a central role in moral thinking? What about unconscious pain and unconscious desire? Suppose for a moment that these are possible (the supposition is not absurd): would such states also ground moral obligation? The answer is unclear: one feels that they would count to some degree but not as much as fully conscious pain and desire. It is worse to feel a conscious pain than to have one that is outside of consciousness, but surely one ought to do something about someone’s pain even if he or she is not conscious of it. Similarly, it is good to satisfy a person’s conscious desires, but is it equally good to satisfy someone’s unconscious desires? Utilitarianism exhorts us to produce the greatest happiness for the greatest number, but does that extend to unconscious happiness? The fact of being conscious makes a moral difference, though not the same difference as that between (say) being square and being in pain—the former having no moral significance at all. What if a tribe were as attentive to the unconscious minds of its members as to their conscious minds—wouldn’t that make them more morally sensitive than we are? Is the emphasis on conscious mental states a moral prejudice? Or is it just that we don’t tend to believe in the unconscious mind but would change our moral tune if we did?
The question of other minds is instructive: here there can be genuine doubt about the scope of morality. Suppose that half the people you are acquainted with are conscious and half are not—but you don’t know which is which. The distribution of sentience is opaque to you. Then you will be placed in a moral quandary, since you can’t apply the sentience-morality connection with any confidence. You agree that obligation requires sentience, but you don’t know who has it. It would be different if morality had nothing to do with sentience and everything to do with outer behavior, since then your obligations would be equal for all the individuals you encounter. But as things are, the scope of moral obligation is indeterminate so far as you are concerned, so you don’t know how to be moral in this world. Should you treat everyone as if they are sentient just to be on the safe side (but think what a waste of resources for the false positives), or should you put half your effort into each individual (but that may result in neglecting a deserving case)? The sentience criterion thus runs into the problem of other minds, which in unfavorable cases renders it unworkable as a basis for morality. You always know what is due to you morally, since you know your own consciousness directly, but you can be genuinely uncertain what you owe to others. And this theoretical problem has a real-life counterpart when it comes to other species and people with abnormalities (such as paralysis or coma). If you got really serious about the problem of other minds, you might wonder whether you had any moral obligations to others at all—and likewise with the problem of not recognizing sentience when it is present (worms, trees?). The sentience test makes morality epistemologically problematic.
Is it just a primitive fact that sentience matters? What if we came across aliens that invert our normal sense of moral obligation, treating insentient objects as deserving moral respect while disregarding sentient beings? They go around making sure that every object is polished and made straight, holding this to be their prime moral obligation, while treating suffering and unfulfilled desire as morally insignificant. Could we persuade them of the error of their ways? Would we be reduced to saying, “Can’t you see that pain is bad and roundness is value-neutral?” When they ask us why, we can only reply, “Because pain is a conscious state”. They might wonder why being a conscious state makes all the difference: what is it about consciousness that makes it the sine qua non of morality?[1] Is it the mere fact that to be conscious is for there to be something it is like for the organism? But why does what it’s like have this special relationship to morality? Why is this kind of subjectivity a condition for moral obligation to get a grip? And if it is a condition, why do moral theories not generally recognize it to be so?
For not all substantive moral theories put sentience at the heart of morality. Utilitarianism does because it expressly speaks of maximizing a certain type of mental state, assumed to be conscious; but deontological theories have no such direct connection to consciousness. The duties to tell the truth, to keep your promises, not to steal, not to commit adultery, and so on say nothing about states of consciousness: they could apply in a robot world. There might be lying, stealing, promise-breaking, adulterous robots—that is, beings that do all these immoral things. Does morality apply to them? One might say that such duties can only apply against a suitable psychological background, but that is not part of the official deontological story and threatens to reduce it to a utilitarian theory. The connection between morality and sentience is certainly looser according to deontological ethics, and less subject to some of the uncertainties of the sentience criterion. It may be true that in the actual world morality and sentience are coterminous, more or less, but it is a question whether the connection goes any deeper—whether the existence of moral obligation can be explained by the nature of sentience. That is, it is a question why being conscious should be the touchstone of morality, as a matter of conceptual necessity. There is certainly a feeling—an “intuition”—that this is so, but articulating it is less easy than one might have expected. Maybe one part of morality—the part concerned with pain and suffering—necessarily involves consciousness, but it may not be essential to other parts, such as those emphasized by deontological theories. Would an eliminative view of consciousness put an end to morality? Would a takeover of the conscious mind by the unconscious mind make the idea of moral obligation nugatory? What if a condition analogous to blindsight were to invade the entire human mind—would that mean that morality no longer applies? Kant took personhood to define the scope and limits of morality, thus excluding animals and some humans; the substitution of sentience for personhood was intended to enlarge the range of moral obligation. But perhaps we need a further enlargement to include beings whose consciousness is in question. Sentience may be too parochial (as well as too inclusive in some respects). Having interests seems to be the essence of morality, but that notion doesn’t seem necessarily tied to sentience, though having conscious desires may be the central case of having interests. Ecological ethics sometimes speaks of the interests of whole species or the biosphere or the planet, but these entities are not claimed to be conscious sentient beings. Sentience as such doesn’t seem to constitute the dividing line, given the moral neutrality of much sentience (e.g. seeing yellow); unconscious mental states seem to have some moral weight; not all moral rules have to do with promoting conscious happiness and avoiding conscious misery; and other minds can be elusive: so we do well to take the sentience criterion as just a rough rule of thumb rather than a definitive account of the scope of morality. Perhaps, indeed, it was always a bad idea to seek hard and fast limits to the scope of morality.[2]
[1] Should we say that pain can only hurt if it is conscious, and hurting is what makes pain morally significant? But then it is not consciousness as such that is morally significant but its power to make things hurt: an entity is morally considerable if and only if it can be made to hurt. This is (a) a rather narrow conception of morality and (b) not self-evident as a conceptual truth because of the possibility of unconscious pain. Here we get into debates about the metaphysics of pain, which render the sentience criterion contestable.
[2] Morality has been in a process of expansion over many centuries; it would be folly to suppose that it has reached its outer limit now. We tend to fashion theories to fit its actual scope at any given time. Now that we have acquired the power to destroy planets (our own at least) and might develop yet more destructive powers, we might have to consider our obligations to the universe as a whole; and this might prompt us to expand our notion of obligation beyond the realm of sentient beings. What if we encounter complex alien life forms to which our notion of sentience fails to apply—might we then contemplate extending the concept of moral obligation to include these insentient life forms? After all, it was only recently that the sentience criterion came to be accepted as a legitimate extension, mainly as a result of a greater awareness of animal ethics.
Deontological ethics is necessarily connected with sentience. The kinds of acts marked out as wrong by deontological moralities (lying, stealing, promise-breaking, committing adultery, or whatever) are either necessarily conscious, or else wrong only insofar as they are conscious. To take the examples mentioned: it seems to be impossible to lie unconsciously (although it’s possible to deceive unconsciously), since lying by definition occurs when a speaker makes a statement he consciously knows to be false (a person accused of lying could defend themselves by arguing that though the statement they made was one they knew to be false, this knowledge was not present to their consciousness at the time). Common usage would seem to allow that it is possible to steal while being completely unconscious of doing so, although such unconscious theft would seem not to be the kind that a rational deontological principle would proscribe. As for breaking promises, this is, I think, possible to do unconsciously (though not, I think, in a way that could be regarded as a genuine violation); but the act prior and necessary to breaking a promise, namely making one, would seem to require consciousness as a matter of conceptual necessity.
Two observations. First, deontological ethics isn’t concerned with conscious states, i.e. what is wrong with immoral acts like lying is not a matter of the mental states that accompany them (contrast utilitarianism). What is wrong is that someone will be misinformed or lose their possessions or turn up for a meeting and find no one there. Second, imagine a generalized blindsight in which things function normally but there is no consciousness of the usual kind: here a person could know what she said was false and intend to say it anyway but there is no conscious expression of these mental states. The moral rules can be stated without essential recourse to consciousness.
It seems that the degree of morality, or moral sensibility, increases with the level of sentience and the evolution of consciousness in beings. Our world-view, perception, thinking, speech, action, and livelihood ought to be guided by a mindful ethics of non-harm and directed towards the well-being of all living beings, since life at the deepest level is a network of interdependent relationships. To harm and cause pain to another by body-mind-speech action is, through the universal law of vibrational action (Karma), to harm oneself, even if this is not immediately apparent.
I agree with your ethical stance but not with its alleged metaphysical basis (“network of interdependent relationships”, “vibrational action”); that sounds like mysticism to me.