Utilitarianism
For the last couple of classes we’ve been discussing utilitarianism (U). U is a consequentialist doctrine, like ethical egoism, though it evaluates actions by the general good, not merely the good of the agent. An action is right if and only if it leads to more happiness and less suffering than any other action that could be performed in the circumstances, with respect to everyone affected by the action. That is, we are obliged to do whatever leads to the most happiness for the most people (and animals, on some versions). Clearly, then, U is altruistic in form, since it requires us to sacrifice some of our own happiness whenever doing so will lead to greater happiness all round. The view is universalist, egalitarian, secular, and monistic, and it is obviously onto something: surely, it might be thought, the goal of morality is at least in part to promote the general welfare, and that is just what U prescribes. It comes as a surprise, then, that the theory encounters serious and principled problems, mainly revolving around questions of justice, but also concerning whether it is too morally demanding. Such criticisms are the topics for next week’s class.