The Structure of Moral Thinking
How do we actually think when we think morally? What is the psychological reality of reasoning about moral questions? How do we arrive at moral knowledge? I propose to answer these questions by considering an imaginary example designed to separate out the several components of moral reasoning and intended as a model of how we in fact reason morally.
Our imaginary being begins with various kinds of experiences and other psychological states: contentment, calm, comfort, elation, enjoyment, fulfillment, relaxation, tranquility, inspiration, understanding, and so on through an extensive range; but also disquiet, anxiety, discomfort, depression, disappointment, annoyance, frustration, boredom, ignorance, and so on through an equally extensive range. At this stage there is no evaluation of these feelings and states by the subject, merely the undergoing of them. The subject is so far just a repository of psychological facts with no evaluative attitude towards these facts. Then a degree of self-reflection sets in: the subject starts to evaluate her psychological states. Some she deems “good”, others “bad”: she adopts an evaluative attitude with respect to her inner life. But this attitude is completely passive—she merely notes a distinction in how she reacts to her first-order psychological states, mainly her emotions. She has ventured beyond the merely factual to the evaluative, but there is no thought yet of action. In due time she makes an interesting discovery: she has the power to affect the course of her psychological life. Previously she was passive in the face of her experiences, but now she realizes that she can do something about how she feels. She discovers that she has the power of action to shape the course of her emotions: she can actively avoid some things while actively promoting others. For instance, if she eats food she can produce a pleasant sensation of satiation, and if she stays indoors she can avoid getting cold. Now she has discovered prudence: she can act so as to avoid the bad states and bring about the good ones. This is a practical discovery, not an evaluative one; any value prudence has depends wholly upon the value of the states of affairs it brings about. Our imaginary being has become the captain of her destiny—partially at least (some things she can do nothing about).
She now makes judgments of the form: I ought to do such-and-such in order to bring about, or avoid, psychological state X. First, she consults her evaluation metric; and second, she determines how best to avoid the bad states and promote the good ones—so far as she is concerned. There is no thought yet of other people and of morality—everything is egoistic (as we might say from the outside).
Let us then introduce other people into our solitary subject’s world—people with whom she interacts. How will she deal with them? From the point of view of prudence they are mere instruments for affecting her own psychological states—just like the inanimate objects of her environment. Nothing in her reasoning hitherto has prepared her for morality; the step from prudence to morality looks impassable. So how does she make the leap? The answer is clear: she must recognize that others are like her in a certain crucial respect—they too have evaluation metrics and prudential practicality. That is, other people have emotions like hers, evaluations of these emotions, and the ability to act on this information so as to promote the good and avoid the bad in their own lives. This is not an easy piece of knowledge to acquire; we can easily imagine beings that never acquire it (most animals, say). One needs to be able to appreciate the psychological reality of others, as well as their distinctness from oneself—solipsism must be transcended. But once our subject grasps this important fact about the world, she possesses a reason to act in ways she has not recognized heretofore: this reason is that her actions will help or harm the other people she has come to know exist. She acknowledges the existence of others, and this provides a reason to act towards them in certain ways. Moreover, those ways need not coincide with the ways of acting recommended by her purely prudential reasons: there may be conflict between the reasons she has. Call the basic fact she recognizes equality: other individuals are equal to her in being a source of reasons for action. She might judge, possibly truly, that others are not equal to her—that their psychological states don’t matter as much as hers. Compared to her they are mere insects (they might actually be insects). Then she will not rate the reasons deriving from their psychology as equal to her reasons, and she will act accordingly.
Or she might judge others to be superior to her, possibly truly, treating the reasons for her actions stemming from them as stronger than the reasons stemming from her own psychology—she regards them as gods (they might be gods). But in the situation we are imagining there is equality between them and her, so it is rational for her to take this equality into account in her deliberations. Her reason for acting morally towards them is that they are her equals in the relevant sense. She thus conjoins this last consideration with what she has already concluded concerning evaluation and action. Now she makes judgments of the form: I ought to do such-and-such for others. She has arrived at the moral ought.
We have then three separate elements in the origin of moral reasoning: first-person evaluation of psychological states; recognition of reasons for action deriving from that evaluation (practical prudence); and the extension of this conceptual apparatus to others by virtue of a judgment of equality (morality). Distilling it down still further, we can say that moral reasoning consists of evaluation, practicality, and equality—each element being necessary and the three jointly sufficient. That is the basic structure of moral thinking as we find it. Perhaps the order of acquisition in humans mirrors what I have expressed chronologically for my imaginary being; in any case, that is the architecture of moral reasoning. The whole thing is powered by a first-person recognition that human psychology consists (partly) of emotional states that admit of division into the good and the bad. It would be too simple to say that pain and pleasure provide the basis of moral judgment, since the disagreeable emotions don’t always involve pain (anxiety, boredom, ignorance) and the agreeable ones are only pleasurable in a wide sense (is understanding a type of pleasure?). But there exists a way of dividing the emotions that corresponds to a natural evaluation, and without that neither prudence nor morality could get off the ground.
You might object that other ways of classifying emotions would work just as well for grounding prudence and morality: what if the subject judges that certain emotions are approved by God or by members of his own family and society? That would provide a reason for action different from the kind of evaluative judgment I have cited. But for familiar reasons this kind of suggestion goes nowhere: (a) it raises the question of why God or other people regard certain emotions as good and others as bad, if not precisely because some are intrinsically good and some are intrinsically bad; and (b) such a basis would be unable to power prudence and morality without being itself based on something more than mere positive regard by others—so what if they think boredom is OK, I don’t! At some point a basic evaluation is necessary, or else there is nothing good or bad to be concerned about. If (per impossibile) people were simply indifferent to their emotions, not seeing why some are deemed good and others bad, there would be no reason to be concerned about what emotions they had—they would accept the ones we regard as bad with equanimity. Neutrality about what you feel is the death of prudence and morality; an evaluation is essential in order that action be prescribed or proscribed.
You might wonder whether the step through prudence to morality could be skipped: why not go straight from evaluation to moral judgment? That is, if a subject judges that others will have disagreeable psychological states unless he does X, he might immediately conclude that he ought morally to do X, with no thought of prudence. I think this is possible in principle, but my aim was to describe how we do think morally, not how it is logically possible to think morally. And in our case we take a detour through ourselves: we put ourselves in the position of the other and ask what prudence would require in that circumstance. This is encapsulated in the Golden Rule: treat other people as you would wish to be treated (assuming you are being prudent). We could simply say, “Treat others as they wish to be treated”, but that doesn’t carry the same psychological punch, because we prefer to think in terms of ourselves and of what we want. Thus we urge people to act towards others so as to produce the psychological states that we would like to have if we were in their position. This convoluted way of putting it embeds our grasp of how we conceive of our own good when acting prudently. In our case we think first in prudential terms and then we generalize; we don’t just jump from evaluation of our psychological states to a judgment about how we should treat others. We use our conception of prudent action to generate a moral judgment by taking ourselves as a model: I should behave towards others as I would like them to behave towards me (when I am being prudent). In principle, I suppose, a person could have no grasp of prudence and still be motivated by the moral ought, but that does not appear to be true of humans—we weave prudence and morality together. I treat other people’s psyches as I would treat my own, in effect. It is not merely that morality and prudence are parallel forms of reasoning; the latter is embedded in the former—its prototype, as it were.
Morality is what happens when prudence extends to others by way of a judgment of equality.