Plurality and the Big Bang

It is said that the big bang created space and time—they did not exist beforehand. Thus something existed (a “singularity”) before space and time existed; and it was some sort of empirical particular, not an abstract entity. It is generally conceived as superhot plasma, not yet differentiated into elementary particles. Now adjoin that idea to the Kantian principle that space and time are the basis of individuation for empirical particulars: there can only be a well-defined plurality of particulars if there is a spatiotemporal manifold in which these particulars are arrayed. Then we get the result that the universe at the time of the big bang was a singularity in this strong sense: it was, and could only be, a single unified entity. The conditions for plurality were not met in that early state of things: metaphysical monism prevailed of necessity. The big bang fragmented reality, taking it from unity to multiplicity, by dint of space and time. It created division. It gave the world parts. It allowed particulars to exist apart from each other.

So one thing we know about the universe before the big bang is that it was devoid of plurality. It was as the metaphysical monists conceive of reality today: a seamless whole. Some philosophers have thought that Kant’s noumenal world must be a unitary world, given that it is not subject to the categories of space and time (the conditions for plurality not being met in that world). Others have speculated that all minds must be fundamentally identical given that the mind is not a spatial entity (for what could their distinctness consist in but spatial separation?). Well, if the universe issued from a big bang that created space and time, then it too must have existed in a unitary form—as a single undifferentiated entity. We can therefore deduce that there had to be a single singularity: there could not have been a plurality of singularities each spawning a totality of discrete particulars. For these would have to exist separately in space and time, given that spatiotemporal separation is the ultimate basis for individual distinctness, and space and time did not exist until the big bang wrought them. The universe could not have resulted from a pair of singularities—no universe could, by Kant’s principle. They would have to be separated in space (if simultaneous) but there was no space at the onset of the big bang. Accordingly, there was just one big bang, and there had to be: the singularity was necessarily singular.[1]

This is a substantive piece of knowledge—a significant cosmological theorem. We know very little about the state of the universe before the big bang, but we do know that it was unitary in a very strong sense—there was no existing plurality of empirical particulars. Metaphysically, the universe was one. Plurality was a later offshoot of this underlying oneness—an emergent property rooted in a more basic reality. We might even say that reality is fundamentally singular, cosmologically speaking. Maybe the singularity comprised a unified field of force lacking particulate structure—not even consisting of matter in the sense we now conceive of it. Material plurality is a late development, a contingent offshoot: au fond the universe is undivided power (energy, oomph). This is a fact worth knowing, providing an insight into the nature of the universe before it was fragmented by that early explosion. There was an abrupt transition from the One to the Many—plurality emanating from unity. An undifferentiated whole shattered into pieces as space and time took shape. The old cosmic unity was gone: now the universe was a collection of separate particulars existing at a distance from each other. Before the big bang there was no room (literally) in the universe for distinct particulars—everything had to be jammed inextricably together as a single seamless entity. It is doubtful that we can even conceive of this reality, except in the most abstract and metaphorical terms, given that our minds have evolved to cope with a world of spatiotemporal plurality: but its general structure follows from basic cosmological principles. In creating space and time the universe brought forth plurality from unity. It broke the bonds of being. It changed the metaphysical structure of reality.

 

Colin McGinn

[1] Of course, nobody doubts that there was just one big bang as a matter of empirical fact; but what we have here is a proof that this had to be so.

Why Did Sex Evolve?

Some reproduction is sexual and some is asexual. There is no biological necessity about sexual reproduction, despite its prevalence. How could there be any such necessity, given that the basic principle of evolutionary biology is just that organisms are designed to maximize the presence of their genes in later generations? This says nothing about mixing genes or about a division between the sexes. Why not reproduce purely by cloning? Why isn’t all genesis parthenogenesis? Cloning seems perfectly possible, much simpler, and is evidently how reproduction began on planet Earth. So why did sex evolve from non-sex?

The problem is not just that no obvious adaptive rationale for sex suggests itself; there are positive reasons why the existence of sex seems to violate basic principles of evolutionary biology. The most obvious point is what might be called “gene dilution”: instead of passing down a hundred percent of one’s genes, one passes down only fifty percent. In parthenogenesis a hundred percent are inherited, so surely an organism would prefer that figure to a mere fifty percent. Genes that build bodies that pass on only a fraction of themselves will not be as frequent in later populations as genes that pass on a hundred percent of themselves. Sexual reproduction appears to thwart the prime directive governing genes: maximization. Second, in sexual reproduction it is necessary to find a suitable sexual partner, whereas in cloning you can go it alone. That requires expenditure of energy and risk of failure: why take such a hard road when parthenogenesis enables one to stay home alone and get on with the job without going out in search of unreliable mates? Third, from the female’s point of view sexual reproduction looks a lot like altruism: she is providing a service to the male by passing on his genes, using her own energy resources, for which she does not seem adequately recompensed. She seems to be doing the male a favor, but organisms built by selfish genes don’t do each other favors. The role of female looks unacceptably altruistic. We need to show what is in it for her genetic prospects. Why tolerate males at all? Why aren’t all organisms female?

Sex thus appears biologically paradoxical—inconsistent with even the most basic principles of evolution. We must find a way to explain its origin and persistence that comports with basic biology. Many theories have been proposed, which I won’t discuss here. I intend to propose, as economically as possible, another theory, which respects the game-theoretic selfish gene perspective now prevalent in the field. I call this the “genetic parasite theory”.

Imagine a population of single-celled eukaryotic organisms (ones with a sheathed nucleus containing DNA) and suppose their mode of reproduction to be asexual. They are all, in effect, female, and reproduce by cell division, transmitting one hundred percent of their genes into each offspring. Now, life being what it is, opportunities for parasitism will arise: some of these cells may adapt to exploit the resources of other cells in the population, without killing them. Let us suppose that they attach themselves to the surface of host cells and siphon off nutrition found in the host. The host will resist such nutritional theft, but it may persist nevertheless, possibly following an arms race between parasite and host. Being a parasite is always a highly attractive option for any evolved creature, having all the benefits of theft over honest toil, and is not easily foiled (it is really just a kind of non-fatal predation). But suppose that some cells are more ambitious: they seek not just food but also reproductive assistance—they want to use other cells to pass on their genes, sparing themselves the expense and trouble. They therefore evolve a pointy organ that can penetrate the surface membrane of other cells and transfer their own DNA into the nucleus of the host cell, where it can enjoy the resources of the host cell in getting itself reproduced. These cells are not nutritional parasites but genetic parasites. They might even be able to replace entirely the DNA of the host cell, by inserting all their own DNA into the nucleus. That would mean that none of the host’s genes are transmitted and all of theirs are. By the laws of gene selection such a host would soon go extinct; its offspring would be copies of the parasitic cell. We would thus expect that counter-measures to the genetic infiltration would evolve, and an arms race would develop. The host cell might develop ways of poisoning the parasite or dissolving its genetic residue once inside or blunting its pointy organ. Let us suppose that an equilibrium point is reached in which fifty percent of the parasite’s DNA is permitted inside the host’s nucleus and fifty percent of the host’s DNA remains. Perhaps if the host accepts this amount less damage is done to it by the invasive cell, which will limit its aggressive incursions if the host cooperates to some extent (otherwise it will fight to the death). Still, this is a highly unsatisfactory outcome for the host, because of gene dilution. How might it adapt to this state of affairs? It needs to find a way to get something out of the new arrangement—some sort of genetic payoff.

Some of the genetic parasites will contain better genes than others. If poor quality genes are mixed with the host’s genes, then the result will be less advantageous than if good quality genes are mixed. Given that the host is losing fifty percent of her genes in the new arrangement, it would clearly be better if she were to have good quality alien genes than poor quality ones, since the whole package will then do better, which is good for her genes. So she begins to favor parasitic genes that are better than others—she exercises quality control with respect to her genetic parasites. She becomes selective in her resistance. She is still not as well off genetically as she was before all this happened, when she reproduced by solitary cloning, since she is still suffering from gene dilution. But there is little she can do about it given the aggressive parasites she has to contend with. It is always better to have no parasites than some, but it can be better to have some parasites rather than others. You want the ones that can do you a favor in return, if that is at all feasible. Thus our host will want to select the best genes she can from her would-be parasites, because these will aid her own genes better than other parasitic genes will. The situation is still unstable, however, because there will be selective pressure to revert to the pre-parasite state of things, where all of her genes get perpetuated. She will want to resist the genetic parasites as much as possible, consistently with the arms race and the costs of resistance. How can things be made more palatable to her?

Suppose that among her “suitors” a select few have genes with the following property: if they are combined with hers they will actually increase the chances of her own genes surviving into the future, relative to their chances without such combination. Given genetic variation in the population, some cells will be more viable than others—and these are the best ones to “mate” with. In other words, the optimal strategy for an invaded host cell is to select a parasite that will improve her genetic prospects—not just relative to other potential parasites but also relative to her chances without such parasites. If she mates with such a fine specimen, then her genes will actually be better off than if she reproduced all on her own. True, there will be fewer of her genes in the next generation, but by combining with genes superior to her own she will ensure that more of her genes will eventually survive and reproduce. The result of this kind of upgrade combination is a “leg up” in terms of genetic survival, compared to the way things used to be. This, then, is how sexual reproduction evolved as a stable mode of reproduction: genetic parasitism combined with genetic selection that provides a “leg up”. What started as straight parasitism, with the usual arms race and compromise, turned into a kind of symbiosis, when the female improved her genes’ chances by selecting high quality “male” genes. Now there was something in it for her, beyond simply minimizing the bad effects of determined parasites. If she could have won the arms race against the parasites, reverting to her untroubled asexual mode of reproduction, that would have been quite satisfactory; but it is even better to find a way to exploit the parasite by selecting only parasites that serve her own genetic interests better than the old regime. And if she could not win the arms race anyway, it is better to turn a fait accompli into an unexpected triumph: the female cells that are better at selecting the male genetic parasites with the best genes will do better than those that are not so good at this. Thus we get competition among males to be selected and competition among females for the best males.

Here are a couple of analogies to bring out the logic of the situation. Suppose there was a parasitic worm that could actually affect the DNA of its host: it secretes a chemical into the DNA and changes its composition, producing new genes. Suppose some of these worms produced worse DNA and some produced better DNA. Clearly it is to the advantage of the host organism that it selects the worms that improve its DNA, since these will then have a better chance of being passed on. It might be better to have no such worm, but given that this is unavoidable, natural selection will favor the “good” worms over the “bad”. And maybe there are some worms so good that it is better to have them than to have no worms at all—since they can build organisms greatly superior to any built by the host’s original genetic composition. These super-worms produce simply outstanding children for the host. Similarly, alien DNA (deriving from a “male”) might so improve the female’s gene complex that the necessary genetic dilution is acceptable, according to the genetic calculations. Fifty percent of my genes surviving for a thousand generations is a lot better than one hundred percent surviving for only ten generations. It is like five of my ten children living to be a hundred with the other five dying at birth, compared to all of them living only to the age of three. A worm that re-tooled your genes to make them substantially better at surviving would pay its way in the unforgiving genetic arithmetic. Genes for tolerating such a worm would be more likely to be passed on.
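
The arithmetic here can be made explicit with a toy score of my own devising (nothing in the text commits the author to this exact formula): let f be the fraction of one’s genes transmitted and G the number of generations the resulting lineage persists, and compare the product fG for the two options:

$$ fG = 0.5 \times 1000 = 500 \quad \text{versus} \quad fG = 1.0 \times 10 = 10 $$

On this crude accounting the diluted but durable genome outscores the undiluted but short-lived one fifty to one, which is the point of the ten-children comparison above.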

The second analogy concerns coalitions. If someone comes to you to form a coalition in order to secure some future benefit, you must ask yourself a simple question: am I better off with her or without her? If I can secure the benefit without the coalition, I do well to decline her offer, since I would then have to share the benefit. But if I judge that I cannot achieve the end without her help, then I should join with her in a coalition, since I will get nothing otherwise. Sexual reproduction has the same logic: if my genes go it alone they have a certain probability of surviving to reproductive age (maybe zero), but if I combine them with someone else’s genes (losing fifty percent of them, say), then there is a different probability of their survival. If the latter exceeds the former, then I am rational to choose the latter over the former. A potential mate is like someone offering you a coalition: if you mate with me I assure you the chances of genetic happiness are high, compared to the chances if you mate with someone else or just decide to go it alone. Suppose you happen to have both means of reproduction available to you, sexual and asexual. You have to choose which to employ. You compute the payoffs by multiplying the number of your genes that will get passed on by the probability of their survival (over, say, the next million years). If a hundred percent get passed on by the asexual method, but there is a low probability of their long term survival, you might opt for the sexual method where fewer get passed on but the probability of survival is much higher—but only if you believe the “donor” genes have this kind of survival power. In the same way, your coalition mate has to be good enough to warrant dividing the spoils with her later, or else you will choose to go it alone and keep all the spoils for yourself. Sex arises from genetic coalitions, possibly preceded by genetic parasitism. This is better for both parties, because there is something in it for the host, and the parasite benefits because its incursions are no longer resisted. Thus we move from resistance to consent: the male benefits but so does the female. This solves the problem we started out with, which was to explain how sexual reproduction could make sense given that asexual reproduction seems so much more sensible biologically. The answer is that it results from a strategy for dealing with genetic parasitism.
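
The decision rule just described is simple enough to state in a few lines of code. Here is a minimal sketch in Python (mine, not the author’s; all probabilities are hypothetical numbers chosen purely for illustration):

```python
# Toy payoff comparison for sexual versus asexual reproduction.
# Payoff = fraction of genes passed on * probability of long-run survival.

def expected_payoff(fraction_passed: float, survival_prob: float) -> float:
    """Crude genetic payoff: share of genes transmitted, weighted by the
    chance that the lineage carrying them survives over the long run."""
    return fraction_passed * survival_prob

def choose_strategy(p_alone: float, p_combined: float) -> str:
    """Asexual cloning passes 100% of genes; the sexual 'coalition'
    passes only 50% but may buy much better survival odds."""
    asexual = expected_payoff(1.0, p_alone)
    sexual = expected_payoff(0.5, p_combined)
    return "sexual" if sexual > asexual else "asexual"

# A weak genome alone, a strong combination: the coalition pays.
print(choose_strategy(p_alone=0.1, p_combined=0.4))   # -> sexual
# A strong genome alone: better to keep all the spoils.
print(choose_strategy(p_alone=0.9, p_combined=0.95))  # -> asexual
```

The second call anticipates the “theorem” about genetic perfection discussed below: once a genome’s solo prospects are good enough, dilution can never be repaid.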

Let me restate the point in less abstract terms. Why should a female mammal allow her womb to be colonized by a male mammal with a different set of DNA? Why should her energy resources be diverted into generating his child? He should take care of it himself! He is just freeloading off her womb and energy resources. The suggested answer is that she is gambling that the influx of his genes will improve the prospects of her genes. Given that the male would just parasitize her womb anyway, it is better to be selective about mates and try to improve her own chances in the genetic lottery. In the payoff matrix that describes all the options, with their various costs and benefits, sexual reproduction seems the best choice—the best compromise, we might say. Asexual reproduction makes perfect biological sense—it is how a well-meaning Creator might have arranged things—but given the rough and tumble of evolution, with the ever-present threat of unscrupulous parasites, sex has emerged as a kind of game-theoretic solution to an inevitable problem: namely, what to do about those pesky genetic parasites. The parasites are with us always, given the attraction of that line of work; the question is what can be done about them. And the answer in one word is: compromise. Sex is a kind of accommodation to the harsh reality of biological existence—ultimately, access to energy.

If this explanation is on the right lines, what might we expect to characterize animals and their sexual behavior? One point has already been mentioned: we would expect male competition for females, selectivity from females, and competition among females for outstanding males. Of course, these are all abundant features of animal behavior. Correspondingly, we would expect female sexual anatomy to conform to the general theoretical picture: it should not be too easy to impregnate the female, which would impair her ability to be selective. Her consent should be required for copulation to be feasible for the male. Rape should be, at least, difficult and potentially hazardous. At the same time, copulation should not be so difficult that a suitable male is just not up to the task: hence difficult but not too difficult. We might also expect sex to be somewhat predatory, given that the male is always essentially exploiting the female, in order to spare himself the effort of gestating offspring; and the female will always be wary and choosy, wondering if this suitor is really “the one”. She is giving up a lot to incubate his progeny, in terms of energy and commitment; so she has to be sure there is something in it for her (i.e. her genes). The genes of the female must move her in such a way that their interests are respected, even though only fifty percent of them will end up in the next generation. The genes of the male have no such concerns, given that he is not called upon to act as incubator; and he can spread those genes around ad libitum. The underlying logic of sex predicts these kinds of phenotypic facts, and they are evident enough in animal behavior. Fundamentally, the male is still the aggressive parasite and the female the reluctant host trying to make the best of a bad job. Of course, once the sexual machinery is in place and the female has no other reproductive option, she will act with enthusiasm and commitment; but the genesis of sexual reproduction is still written into the underlying structure of the sexual relationship.

A less obvious consequence concerns sexual selection. The female exercises quality control: she evaluates her potential mates by formulating hypotheses about their genetic fitness, based on what she can observe. She cannot peer directly into the suitor’s genes but must go by outward appearance. Thus she espies the peacock’s lavish tail and infers genetic superiority within. This causes males to improve their appearance so that females will evaluate them highly: but to improve their appearance they have to improve their reality. They have to be bigger and stronger, less infested with parasites, and more able to sustain pointless bits of flamboyance. So there is selective pressure on males to improve. Thus sex leads to sexual selection, which leads to improvement. That is, sex is what powers evolution to produce ever more complex and accomplished animals, via sexual selection. But asexual reproduction has no such consequence: the organism just reproduces itself according to its original design. There is no sexual selection when reproduction is asexual, and hence no motor to drive biological progress. The result is likely to be stasis, uniformity, and dullness. You don’t get complex beautiful animals when the method of reproduction is asexual. It is not, of course, that anyone is aiming for such complex beauty; it is just that the mechanism for producing it does not exist in a world without sexual reproduction. We have sex to thank for kick-starting evolution into a higher gear; before sex the pressures for change were minimal. It was when females started to be choosy, as a way of making the best out of living in a world of genetic parasites, that sexual selection triggered the kind of evolutionary changes that we see. Without sex Earth might never have got beyond boring bacteria floating in nondescript oceans. We have the parasites among them to thank for initiating the process that led to the impressive variety of animal life that now exists.

Here is one final point—a kind of theorem: a genetically perfect female has no rationale for engaging in sex in a world in which she is subject to genetic parasitism. If she cannot improve her genetic fitness by merging her genes with those of a male, then she has no motive to permit her body to be used as incubator. For her, all genetic mixing is genetic degradation. She therefore has every reason to fight off all male incursions. But the same is not true of a genetically perfect male: he still has every reason to reproduce sexually, since he can thereby produce more copies of his genes than by solitary cloning. He just has to deposit them in as many willing (or unwilling) female bodies as possible. There is a huge logical asymmetry between being the one with the incubating body and being the one who uses someone else’s body as incubator. That asymmetry is the real basis of sexual reproduction (and indeed ultimately defines the difference between the sexes). Genetic perfection in the female leads naturally to frigidity, but genetic perfection in the male entails no diminution of sexual appetite. In other words, the genes see no point in sexual reproduction for the genetically perfect female, but they see a lot of point in it for the genetically perfect male. Of course, there is no such thing as genetic perfection in the real biological world—but I am making a purely logical point.
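
The asymmetry can be put in the same toy terms as the earlier sketch (again with made-up numbers, purely for illustration): dilution caps the perfect female’s payoff, while the male, bearing no incubation cost, gains another half-share of his genes with every additional host.

```python
# The 'theorem' in toy form (hypothetical numbers, as before).
P_BEST = 0.9  # survival probability of a genetically perfect genome

# Perfect female: mixing replaces half her genes with no better ones,
# so her sexual payoff can never beat cloning.
female_asexual = 1.0 * P_BEST          # 0.9
female_sexual = 0.5 * P_BEST           # 0.45 at best: never worth it

# Perfect male: each additional mate adds another half-share of his genes.
def male_payoff(n_mates: int) -> float:
    return 0.5 * n_mates * P_BEST      # grows without bound with n_mates

print(female_asexual > female_sexual)   # True: perfect female declines sex
print(male_payoff(4) > female_asexual)  # True: perfect male still gains
```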

 

Colin McGinn

Injustice

Injustice directed towards an individual creates a specific psychological response. This response includes anger at the perpetrator, moral indignation, resentment, a sense of futility, a desire for revenge, disillusionment, and general malaise. It can shape a person’s entire life, and destroy his or her wellbeing permanently. The injustice can be of two kinds: retributive and distributive. The victim can be blamed and punished for something he or she has not done, or punished disproportionately, or not given due process; or the victim can be subject to unfair distributions of goods to which he or she is entitled, by natural right or contract. Though both types of injustice occasion the psychological response mentioned, the former is apt to occasion it more strongly and deeply. Being wrongly blamed for something, especially where the blamers show bias or negligence, or a basic disregard for justice itself, is liable to induce in the victim a state of extreme agitation and outrage. In addition to the unjust treatment the victim has received, he or she must also deal with the sense of anger, outrage, resentment, and so on. Clearly, to treat someone unjustly is the very height of culpability, and anyone guilty of such injustice must be held accountable, especially if they have been placed in a position of authority and power over others. This is why we rightly deplore corruption in the judicial system or in quasi-legal tribunals, as well as negligence and plain stupidity. Hatred of injustice is both necessary and unavoidable.

We don’t feel the same way about other crimes against the person. If someone steals from you or strikes you or breaks a promise to you or lies to you, then you may well be upset and angry, but you don’t experience the same degree of psychological upheaval. The reaction to injustice is in a class of its own, sui generis, and not so easily shrugged off. The psychological impact is more profound and enduring. It creates a feeling of pointlessness, deep distrust, and personal isolation. This is particularly true if the injustice is repeated and systematic—if it is sustained over time in numerous unjust acts (racial discrimination, especially embodied in the law, is the obvious example). It is bad enough to blame and punish an innocent person once, but to keep on doing it is exponentially worse, especially when opportunities for just restitution arise. Then the victim is apt to feel that the system is stacked against him, that there is no escape from injustice, and that life is not worth living in such conditions. Suicide can then seem like the only possible escape from systemic injustice. This is a profoundly terrible thing to do to someone—very different from the normal run of crimes and misdeeds. While it is possible to forgive someone for stealing, lying, hitting, and so on, it is extremely difficult—perhaps impossible—to forgive someone for blatant and repeated injustice. A sense of injustice destroys personal relations between people. You cannot remain friends with someone who has treated you unjustly, nor can you respect him or her thereafter.

I take it these points are obvious, if painful to contemplate. One of the less obvious consequences of injustice is that it becomes almost impossible to treat the person who has been unjust to you in a just manner. You feel that your unjust persecutor has sacrificed the right to just treatment from you. Here injustice differs from other crimes: you don’t feel that being lied to or stolen from or struck justifies doing the same thing to the person who has done these things to you. But you do feel that injustice justifies injustice in return: “Why should I be just with you when you were so unjust with me?” Is this just psychological weakness or is it something more profound—more conceptual? Is it just “hitting out” or does it reflect something about the nature of injustice? True, you may manage to set aside your (correct) sense of injustice and treat the perpetrator justly; but you feel that this requires a special, almost superhuman, effort—as if the other person does not deserve just treatment from you. Their right to justice from you has been undermined by their own manifest injustice towards you. That is why they must be judged by someone other than the person they have wronged—by an impartial judge: because the victim of the injustice simply cannot be expected to treat them justly. Everyone has the right to be treated justly by me, but not if they have treated me unjustly; then someone else must be brought in to serve the cause of justice (hence no vigilante justice). No one can be left at the mercy of those they have treated unjustly. But the victim of an act of theft, say, is not likely to steal from the thief, or to suppose that it is morally permissible to do so. Acts of injustice, however, are affronts to morality itself—a rejection of the demands of morality—and we feel that such actors deserve special condemnation. The corrupt judge is worse than the guilty criminal, because the judge is charged with, and accepts, the role of arbiter of justice. To imprison a person unjustly is the ultimate crime; and the person imprisoned is not expected to deal leniently, or even fairly, with his unjust judge. Injustice thus breeds injustice in the victim, even if he or she tries to “rise above” it.

But there is a further consequence that is even more disturbing: the tendency to generalize injustice. If a person has been made the victim of injustice, especially if it is repeated, systematic, and unrepentant, then he or she is apt to abandon justice as a general rule of conduct. The victim thinks: “I have been treated unjustly, so why should I treat others justly?” By contrast, the victim of theft does not think: “I have been stolen from, so why should I not steal from others?” It is not entirely clear why there should be this asymmetry, but it seems to exist and to be entrenched. It may have to do with the general sense that injustice is itself a rejection of morality, not merely a violation of it. We say of the grossly unjust agent that he or she “doesn’t know right from wrong”, but we don’t tend to say that about other miscreants. We take injustice to be a more profound moral failing—and rightly so. It brings morality more fully into question, so that an unjustly treated individual feels less constrained by it.

And here I think is where the special evil of injustice shows itself—the thing that sets it apart from other crimes and misdeeds: it creates chains of injustice. Suppose A is unjust to B and that B forms the psychological response I described; then B will be apt to be unjust to C, even when C has not been unjust to B. But then C has become the victim of injustice, and will in turn be likely to be unjust to D; and so on. One act of injustice (or a series of acts directed against a particular individual) will generate a chain of unjust acts, all mediated by the psychological response I described. It doesn’t work like this with stealing, lying, and so on. Injustice has the power to propagate itself through a population, like a contagious disease, hopping from one person to the next. Previously just people are thus turned into unjust people by being themselves treated unjustly—all because at the beginning of the chain an innocent person was treated unjustly. Injustice begets injustice, while theft does not beget theft. Of course, if the theft is felt to involve injustice, then theft will generate the same kind of chain; but not otherwise. If a rich man steals from you, you feel an injustice that does not apply to a poor thief—though you may still deplore the poor thief’s action. We do not hate the thief qua thief, as we hate the unjust agent, whom we regard as morally bankrupt. Is there anything worse than a “hanging judge” who blatantly ignores evidence and follows discriminatory policies? What about a judge who knowingly sentences innocent people to death for kickbacks from the makers of electric chairs, or because she wants to look “tough” for political reasons? That is evil of a stunning magnitude.

Chains of injustice ramify and proliferate. They can spread through a whole population. They can be transmitted down the generations. And they may be triggered by a single isolated act of injustice. The injustice chain is particularly dangerous in the case of children. If the parents have been treated unjustly, they will be apt to treat their children unjustly; but children will experience the injustice in a sharp and undiluted form, without any possibility of rising above it. It will inform their entire worldview: the world will be seen as an inherently unjust place, with talk of justice meaningless and pointless. And so injustice gets passed down the generations. How many unjust acts in the world are explained by the existence of one of these chains? Here we should distinguish between the instigator of a chain and a link in a chain. If A is not himself a victim of any injustice, and yet acts unjustly, for reasons of self-interest or political expediency, say, then A is an instigator—he or she sets the chain in motion. Such a person is far more culpable than one who is a mere link in a chain instigated by someone else—the link merely inherits injustice without creating it ab initio. The link is a victim of injustice as well as a perpetrator of it, and he is the latter because (or partly because) of the former. The instigator, however, has brought a potentially endless chain of injustice into the world: not just the initial unjust act, but also all its ramifying consequences. The instigator has created the disease, not merely been one of its carriers.

The evil of injustice therefore far outweighs the evil of other kinds of immoral act, not just because of its intrinsic evil (though that is considerable), but also because of its tendency to grow and spread. You can see how it could infect an entire population, as well as succeeding generations. It deserves the name “Original Sin”: it is a sin that begets other sins. Those guilty of it, especially the instigators, deserve special condemnation, special contempt. Everyone should, of course, be conscious of the burdens of justice, and employ every means possible to ensure that justice is done. All unjust acts should be rectified fully and promptly. Restitution should be mandatory. There is simply no excuse for injustice, as there can be for other kinds of immoral act. Injustice should not be tolerated or excused, but rigorously punished (not only by law but also by social censure). The shame attaching to injustice should be unique and profound. No one should turn a blind eye to it. Ever.

What can be done to prevent chains of injustice from forming? Don’t instigate them, obviously: but what do we do if some people insist on being injustice instigators? It is all very well to exhort people not to make the same “mistakes” as those who have treated them unjustly; but that may not be very effective advice for someone who has been made bitter and cynical by the injustices done to them. You can’t expect people to be saints if they have been systematically abused. A person who has been sent to jail, knowingly and cynically, for a murder he did not commit is not likely to view the world kindly. Someone who has known nothing but injustice is unlikely to treat others justly. What is necessary is firm public support for justice, above all other values, and an intolerance of injustice—people should be rewarded and punished according to their capacity for justice. No one who has acted unjustly should be left in a position of judicial power. Also, it is necessary for justice to be seen to be done, not merely to be done: justice must be celebrated and recognized, spoken of in hushed and reverent tones. Injustice, for its part, should be despised and reviled for what it is. The word “fair” should be on everyone’s lips, and be the (or a) basic moral word. The nerve of justice should be forever taut.

Utilitarianism has a lot to answer for here: it shifts moral praise and blame from justice to consequences, so that an unjust individual can always plead that he or she was just trying to maximize the good—this being regarded as the ultimate aim of morality. An unjust act is thus excused by claiming that it will likely lead to greater happiness all round. This is an insidious way of thinking, almost bound to lead to corruption, and anyway ignores the ramifying effects of injustice—and hence is not defensible even on utilitarian grounds. Fairness is what matters, not the expectation of generalized happiness. If people feel that they will not be treated fairly, perhaps precisely because they have not been, then this will rot everything from the inside out. The psychological effects of injustice, and the resulting chains of injustice, are so damaging that injustice must never be allowed to stand. Injustice is the worst of moral failings.

 

Colin McGinn

Evolutionary Causation

Evolution is a causal process. But it is a causal process unlike any other. It is a causal process in which the effect vastly exceeds the cause, and indeed (in a loose sense) contradicts the cause. The most obvious point is that the effect is much more complex than the cause: from natural selection operating over simple entities, ever more complex entities evolve. At the origin of the causal chain leading to humans we have simple bacteria; and there is no infusion of complexity from outside, except that provided by genetic mutation and natural selection. The evolutionary causal process takes us from simplicity to complexity, dramatically so. Evolution also takes us from mindlessness to minds—from matter to consciousness. Nothing psychological directs the evolutionary process, but something psychological emerges from it. The effect is mind, but the cause is mindless. It is the same with purpose: there is no purpose to evolution by mutation and natural selection (despite the misleading connotations of that phrase), but the result is animals endowed with purpose. Teleology is caused by non-teleology; efficient causes produce final causes. Animals desire and intend things, but evolution does not. Similarly, animals—some of them—have foresight and plan ahead, avoiding future catastrophe; but evolution has no foresight and is not deterred by future catastrophe. Evolution could drive all animals to extinction and not lift a finger to avert that consequence—just as it has driven countless species extinct (nor does it lament those vanished species, as we might). Evolution is blind in its operations, but it produces animals that are not. Evolution is also amoral, wasteful, cruel, and indifferent; but it produces at least one species with a moral sense (and maybe more). Indeed, it produces moral beings that deplore the very process that led to them: we would change nature’s methods if we could. We recoil at the very process that leads to our ability to recoil. Our moral nature is not nature’s nature, but the opposite of it. Finally, evolution has no comprehension of anything—no understanding or intelligence or insight—yet it causes creatures that have comprehension of many things, including the evolutionary process and its lack of comprehension. Thus we can say that animals (especially humans) are caused by a process alien to themselves: animals have characteristics that are not found in their (remote) evolutionary causes, and which are at variance with their causes. The causality is a kind of paradoxical causality—as if it gives rise to things opposite to itself.

The same can be said of embryological causality, which is in many ways just like evolutionary causality. The embryological process takes us from the simple, mindless, purposeless, blind, amoral, and uncomprehending to the opposite of these traits—to a being with the negation of each of them. The process itself is one of mechanical self-assembly, guided by nothing, not even a blueprint, purely local in its methods of protein synthesis; yet it produces a being endowed with a rich set of traits. The causality is again paradoxical, anomalous, surprising, hard to believe, miraculous-seeming. It has the look of creating something from nothing. But it is entirely natural, governed by the laws of physics and chemistry, just like evolution.

No such paradoxical causality is assumed in creationist models of evolution or embryogenesis. According to those theories, the cause of creatures with the traits listed is another creature with the traits listed: God is precisely a being with complexity, mind, purpose, foresight, morals, and comprehension. Creationist causation is non-paradoxical (paradoxically!), since the cause has the very traits found in the effect. But that theory is known to be false, while evolution by natural selection is known to be true—so we are saddled with the kind of causation it involves. And notice that it is not just that the causation is emergent or generative; it actually involves negating the defining features of the cause—that is, producing traits that are at variance with the nature of the cause. It is just as if nature were contradicting herself.

It might be said that evolutionary (and embryological) causation is not as special as I am making out. Isn’t the early history of the universe likewise causally “peculiar”? After all, the universe was once all dispersed gas and only gradually did gravity create solid discrete bodies—stars, planets, galaxies, etc. Didn’t physical causality produce the solid and massive from the vaporous and weightless? Well, no, it didn’t, because all that gas was simply dispersed matter, with the standard properties of matter—mass, volume, electric charge, and gravity.[1] The universe never went from non-mass to mass or non-gravity to gravity. There was no paradoxical emergence, no something from nothing, just the working out of what already existed—basically, material aggregation by gravitational force. Even the production of the elements inside stars proceeded by the pre-existing laws (gravitational compression). It is only when we get to the evolution of life that the surprising kind of causality kicks in; then the universe discovers a new kind of causal process, with vastly greater generative power. Mutation and natural selection, genes and self-replication, prove to introduce a new kind of causation, capable of far more impressive feats than any observed hitherto—even if they are far more local and parochial, as well as less physically powerful (it takes only sunlight to fuel the whole process). The causation involved in the expanding universe operates on a much larger and more general scale than the causation involved in the spread of life on earth, but it is far less impressive than evolutionary causation in its innovative power. The cause-effect mismatch of the latter is not mirrored in the former. Evolution is causally prodigious.

It would be good to have a label for the two types of causation, but none currently exists. We might borrow from Kant’s discussion of the analytic-synthetic distinction, in which analytic judgments are “explicative” and synthetic judgments “ampliative”, and press those terms into service. Thus non-evolutionary causation might be described as “explicative causation” because it merely spells out what was already present: it adds nothing essentially new, though it can yield non-trivial transformations in the universe (stars, elements, black holes, etc.). By contrast, we have “ampliative causation” in which radical innovations occur, even the kind of oppositions I have described. Evolution and embryogenesis involve ampliative causation—as a synthetic judgment for Kant involves going beyond the content of the subject in the predicate. Or we could speak of “conservative causation” versus “creative causation”, or “easy causation” versus “hard causation”, or “mechanical causation” versus “innovative causation”. Of course, it might be maintained that the two types of causation really collapse into each other, because what I am calling a special type of causation really reduces to the other type. Either the evolved traits I listed do not really exist (“eliminativism”) or they can be reduced to properties that obey the explicative type of causation (“reductionism”). I have not claimed to refute such a position here; my point has been that evolutionary causation is special on the assumption that the traits in question exist and cannot be reduced away to properties found in the evolutionary causes that don’t presuppose them. That is, I have assumed that mammals, for example, instantiate a range of traits not found in bacteria (mind, purpose, morals, etc.). Given that, we need to recognize two basic kinds of causation in the universe (though they may be interwoven in complex ways).

I strongly suspect that this distinction lies behind an historical puzzle concerning the discovery of the theory of evolution (indeed I formulated the distinction while trying to solve that puzzle). It wasn’t until the nineteenth century that Darwin (as well as others) came up with the correct theory, though there was nothing to prevent earlier thinkers from discovering it, even going back to the ancient Greeks. The question was how to explain biological adaptation, and the answer is blindingly obvious once formulated; yet no one thought of it. It was quite possible to come up with the theory from the armchair by a priori reasoning, and it is not the kind of counterintuitive and difficult theory that (say) quantum theory and relativity theory are. All we need are the notions of chance variation and differential survival. The theory really could have been invented by Aristotle or Descartes well before Darwin’s time—so why wasn’t it? In fact, Darwin himself did not endorse it from the armchair but had to amass huge amounts of empirical data before he could accept the theory—the data forced the theory on him. Why the reluctance to accept what seems so blindingly obvious? (Selective breeding alone should have given the game away long before.) Something seems to have been blocking people from seeing it—and more than just piety and tradition.

My suggestion is that it was the nature of the causation involved that made the theory invisible for so long: people just assumed that the cause of animal (and human) existence had to conform to the usual kind of conservative explicative causation. Hence the easy attraction of the creationist picture: it didn’t involve counterintuitive paradoxical-looking causation, but just regular causation in which cause and effect match each other. It turns out that this view of causation was too restrictive, as we now understand, but it is not surprising that there would be resistance to it at a gut level. The production of life had to be a causal process of some sort, but the kinds with which we are familiar respect the conservative principle, so it had to be that kind of causal process. We have certainly never observed the kind of creative causation manifest in the evolutionary process, simply because it takes place over such a long span of time. The idea that things like humans, with their distinctive traits, could be caused by a blind mechanical amoral process, originating in things like bacteria, was just too much of an affront to habitual assumptions about how causality works. And indeed it is still difficult to make sense of the causal process involved, precisely because it seems to entail getting something remarkable from nothing very much. It is thus very surprising that Darwin’s theory is true—something to marvel over (mind from mindlessness, morality from amorality). This explains why it was so hard for Darwin and others to recognize its truth. If someone had come up with the basic idea centuries earlier, it would have struck them as preposterous: how could chance variation in mindless entities, combined with blind natural selection, have caused the creatures we see today? We have found out, however, that the impossible is actually true—if I may exaggerate a bit. So it is not inexplicable that Darwin’s theory lay hidden for so long, despite its simplicity and a priori availability—unlike Newton’s theory, which required enormous feats of mathematical reasoning and pure genius. No one came up with Darwin’s theory for so long because it seemed causally impossible—quite unlike the traditional creationist story (despite its falsity). Even today I suspect that many people resist Darwinism simply because of their habitual intuitions about causality, formed by ordinary observation, not because of any immovable religious commitment. People came round to the heliocentric hypothesis much more easily, because it did not challenge their intuitive notions of causality. But if what I am suggesting here is right, it is much harder to persuade people of evolution because of the causal picture it implies. To put it more positively, in order to overcome resistance we need to address ourselves directly to the question of causality and face it head on. The theory should be advertised as empirically correct but causally counterintuitive, or at least causally surprising. Simply passing over the causal question in silence, without facing it squarely, leaves people uneasy and perplexed. We need to accept that when species evolved a new species of causation evolved.[2]

 

[1] An exception might be made for the instant of the big bang in which it is said that matter, space, and time came to exist with extreme rapidity. Putting aside the question of whether that is the right thing to say, my comment is that if this is indeed so then causation at this early stage had the kind of innovative power I am attributing to evolutionary causation. Thereafter, however, physical causation followed a far more conservative trajectory.

[2] This is not to deny that evolutionary causation is grounded in ordinary physical causation (compare mental causation); it is just to say that the macro causal process of evolution involves the production of kinds of entities not present before. Teeming life comes from monotonous non-life—animals from atoms, men from molecules.

Distinctions and Differences

There is distinction in the world and distinction in the mind: things differ and so do concepts. States of affairs differ and thoughts about them differ. But how are these distinctions related? Are they dependent or independent? Clearly there can be objective distinctions that are not mirrored by cognitive distinctions—where thought fails to capture distinctions in reality (consider reality before the onset of thought). But can there be distinctions in thought that are not mirrored by distinctions in reality? Of course, distinctions in thought are real distinctions, but the question is whether distinctions in thought always reflect distinctions in what thought is about. Can two thoughts be distinct even if what they are about is not distinct? Can thoughts differ while the states of affairs they express are identical?

The orthodox view is that they can, but on closer analysis this is wrong. It is generally agreed that in the vast majority of cases cognitive distinctions are matched by objective distinctions—it isn’t that reality is a homogeneous lump that we insist on thinking about in different ways—but it is supposed that there is a special subclass of cases in which no worldly distinction can be found that corresponds to a cognitive distinction. I speak, of course, of classic sense-reference cases: the reference is the same but the sense differs. But this is confused thinking: identity of reference does not entail identity of state of affairs expressed, because of the different properties that can be connoted by a singular term. The name “Hesperus” connotes the property of being the evening star while the name “Phosphorus” connotes the property of being the morning star—and morning and evening are not the same thing. Appearing in the evening is a different worldly state of affairs from appearing in the morning. Similarly for “water is H2O”: here the word “water” connotes the property of appearing in a certain way to human subjects while “H2O” does not; the way water looks is a different property from the property of having a certain molecular structure. In these kinds of cases the cognitive difference is not independent of distinctions in the world; it corresponds precisely to such distinctions. No cognitive difference without objective difference. The content of thought cannot vary without the subject matter of the thought varying, i.e. without a distinction at the level of objective reality.

This is as it should be: for what would be the point of conceptual distinctions that fail to map onto worldly distinctions? The aim of concepts is to make discriminations among things beyond the mind (sometimes within the mind): a distinction between concepts that concern exactly the same objects and properties is a pointless distinction—why distinguish what is not distinct? Our minds track distinctions in reality; they don’t invent distinctions that don’t exist in reality. Distinctions without differences are not real distinctions. Do not multiply distinctions beyond necessity! That is, thought should track only objective distinctions—and these are many and subtle. There is more than enough fine structure in the world to occupy the discriminating thinker; anything else is redundant and pointless. Indeed impossible: concepts can’t differ without the aid of distinct states of affairs—that is their nature (this is a variant of Brentano’s thesis). A concept is always a concept of something. Externalism about conceptual distinctions is true: no concepts are distinct but objective reality makes them so.

You might think that concepts could differ in their dispositions while corresponding to identical states of affairs, thus counting as distinct concepts. But (a) the same concept could have different dispositions in different cognitive beings and (b) such dispositions would be pointless as means of discrimination. What is the point of discriminating what cannot be discriminated? You might say that different beings might have different needs with respect to the same objective world, so they might differ in how they conceptualize external things; but then the different states of affairs involve states of the organism itself—the varying needs ground the distinction in the concepts. The concept “edible” may apply to the same thing for one creature as “inedible” does for another, but that is only because of different properties of the two creatures’ digestive systems. Concepts differ only in virtue of objective differences that they represent or correlate with—there is no separate dimension of variation capable of individuating concepts. This is not to say that connotation reduces to denotation, still less that connotation doesn’t exist; it is just to say that connotation always cashes out as objective worldly difference—since what is connoted is always a property of things. Even in the case of semantic tone (“dog” versus “cur”) there is always an underlying distinction at the level of facts: different emotions are aroused by the different concepts. There are no conceptual differences that are “purely cognitive”, that exist independently of non-conceptual facts. The world is the ultimate dictator of conceptual distinctions. Conceptual distinctions are never “in the head”.[1]

The right picture is this: the world consists of the totality of objective distinctions—different ways that things can be. The aim of thought is to latch onto and exploit these distinctions, and it has nothing to work with other than the distinctions that exist in reality. Thoughts divide up according to the states of affairs that form their content. It is certainly not that concepts acquire their distinctness from some source other than objective reality, and then foist distinctions onto reality—as if they could be subjectively distinct without any objective distinctness in things. There is nothing to constitute a difference of concepts other than distinctness in what they represent—objects and properties, basically. Conceptual distinctions recapitulate ontological distinctions.

[1] In the case of indexical expressions we always have differences in spatiotemporal context that correspond to different indexical concepts, as with “here” and “there” and “now” and “then”. But this is a complex subject I won’t pursue here.

Beauty and Objectification

 

 

 

Beauty and Objectification

 

 

Beauty can be found in both people and things. In the case of people it is connected to sexual desire; not so in the case of things. One finds the object of one’s desire beautiful, but one doesn’t desire all the things one finds beautiful. I may desire a certain woman, but I don’t desire a painting of her, however beautiful it may be. Thus beauty can be connected to two sorts of attitude: the erotic attitude and the aesthetic attitude. These attitudes differ markedly: they entail different dispositions on the part of the onlooker and different wishes as to the behavior of the beautiful object. The erotic attitude entails a desire to have sex (of some sort) with the object; not so for the aesthetic attitude. The erotic attitude is physically active, while the aesthetic attitude is contemplative.

It is sometimes supposed that the erotic attitude is inherently objectifying, since its focus is the embodied self: one desires that body. Thus we hear talk of “sex objects”—the other is the object of desire. The other is reduced to her (or his) body. By contrast, the aesthetic attitude regards the other as more than a mere physical thing with which to cavort: it regards the other as belonging to the realm of disinterested contemplation and valued for its intrinsic character. The other is not merely an instrument of gratification, analogous to food, but a valuable being in its own right, like a work of art. Desire is objectifying while aesthetic contemplation is edifying. A person can enjoy being admired for her beauty but not being treated as a mere thing for someone else’s carnal pleasure. To be found beautiful in the erotic way is to be treated as a mere object, while to be found beautiful in the aesthetic way is to be elevated to the level of art (possibly the divine).

But this way of thinking is the opposite of the truth: the erotic attitude to beauty is subjectifying while the aesthetic attitude is objectifying (and potentially morally suspect). This is because sexual desire contains in its intentionality the wish that the other should behave as a sexual agent: that is, should actively engage in sexual interactions with the one doing the desiring. It is the desire for desire, and hence action. It is the desire that the other should will what we ourselves will. Of course, the desired action is the action of an embodied being, but it is essential to the desire that its object be an agent endowed with volition. One does not desire the other qua inert body but qua active self. The “object” of sexual desire is a conscious willing agent with whom one desires a certain sort of cooperation. One wishes to engage in a joint project, as it were, i.e. sexual interaction. The beauty of the other is conceived under that aspect—as an aid to the erotic project. At the moment of desire what is wished for is the agency of the other to manifest itself in a particular way.

It is quite otherwise with the aesthetic attitude. Here beauty does not excite any desire for the beautiful object to act in a certain way—it does not inspire a desire for active cooperation. On the contrary, the object of contemplation is regarded as just that—a passive object to be gazed at and appreciated. You don’t want Mona Lisa’s picture to kiss you, though you may well want Mona Lisa herself to. The enraptured gaze is caught up in the qualities of the object qua object without regard for any actions it might undertake. Thus a woman’s face can be regarded as a purely aesthetic object: not something to be kissed and adored but to be admired for its formal beauty. In this attitude the object is dwelt on as on a beautiful painting—in an attitude of disinterested aesthetic analysis. The observer’s eye will move admiringly over the eyes, lips, cheeks, and chin, noting the symmetries and sparkle, the color and texture (such exquisitely smooth skin!). The idea that there is a person within is not at the forefront of the mental act (robots can be beautiful in this sense). The viewer may have no sexual interest in the woman at all, through lack of libido or difference of sexual preference. The person is reduced to an aesthetic object—an appearance of matter, a congeries of qualities. The exact shape of nose or color of eyes will be analytically noted and appreciated. What the person within thinks or feels is irrelevant—outer appearance is all. Thus the other is objectified by the aesthetic attitude: her humanity is deemed secondary at best. Her agency is eclipsed by her beauty as a thing among other things—paintings, sculptures, landscapes. And here lies a moral danger: she may be regarded merely as an object devoid of will and agency. She might be degraded, assigned to the wrong ontological category. She might protest: “I’m not just a beautiful object—I’m a person!”

Consider the attitude of the peahen to the peacock. She finds that tail beautiful; it incites lust in her, the desire for copulation. She seeks the cooperation of the peacock to satisfy her desires, well aware that he might not reciprocate (though he probably will). In no way does she treat him as a mere object empty of agency. We, however, gazing at the peacock’s tail, adopt the aesthetic attitude not the erotic attitude; and in so doing we perceive the peacock as a thing of visual beauty—we objectify him. We are not concerned with his thoughts and feelings but with his feathers—splendid, no doubt, but merely part of his body. Which of us is more objectifying, the peahen or the connoisseur? The peahen desires to reproduce with the peacock (a cooperative act), but the connoisseur regards the peacock with a curator’s eye—what a fine candidate for taxidermy! The connoisseur will want to take a picture, the peahen to move in closer. One wants to make images, the other babies. The former requires passivity on the part of the object, the latter activity.

It is odd that beauty plays both these roles—as a stimulus to desire and as an object of contemplation. They are really very different, and yet the same thing is perceived, viz. beauty. Suppose you experience a sudden loss of libido: you no longer desire your beloved but you still perceive her beauty. Has your experience of her changed? She still looks the same to you—she is no less beautiful—but her beauty just doesn’t excite you any more. Inevitably you move from the erotic attitude to the aesthetic attitude, objectifying her in the process. You no longer want her body next to yours; you are content to gaze at her from afar. Is that progress?

 

Colin McGinn

Platonic Pragmatism

 

 

 

 

Platonic Pragmatism

 

 

The pragmatic theory of truth has this going for it: it recognizes that truth is something with value. Truth is something we ought to pursue and hence has a normative aspect. It is good to believe what is true and bad to believe what is false. Truth is a desirable property of belief. As William James says, “The true is the name of whatever proves itself to be good in the way of belief” (1907). It is contradictory to say, “We ought to believe what is true but truth is not a good thing”. Any theory of truth that fails to acknowledge the normative character of truth is defective or at least incomplete. Thus the classic correspondence theory fails to meet this condition: for what is so good about correspondence? If correspondence is a type of isomorphism, what is desirable about isomorphism? Sameness of form is not ipso facto a good thing: objects can share their form without this being something they ought to do (crystals, mice). If truth were just correspondence, it would be normatively neutral, not the desirable trait we take it to be (much the same can be said about coherence). Truth cannot reduce to a property or relation that bears no trace of the normative; it must have some type of goodness built into it. This seems like a solid insight on the part of the pragmatist and a cogent criticism of other theories. Call it “Convention G”: any adequate theory of truth must reveal truth as an inherently normative property, i.e. an instance of the Good. It must be something about which we (rightly) care.

The pragmatist, having identified this requirement, goes on to give an account of what the goodness in question consists in; and it is an account both natural and dubious. The goodness of truth is simply the way it conduces to human flourishing—the way it leads to a satisfying life. Truth is what contributes to human happiness: believing what is true will make us happy not sad. This is because true beliefs enable us to satisfy our desires more successfully than false beliefs. The farmer with true agricultural beliefs will reap a better harvest than one who has false agricultural beliefs. We will dress more comfortably for the weather if we have true beliefs about the state of the weather. A stockbroker with true beliefs about the market will make more money than one who has false beliefs. We can express these facts by saying that true beliefs have good utilitarian consequences; indeed, we could call this type of pragmatism “the utilitarian theory of truth”.[1] The truth is what maximizes utility (so it has a lot in common with the right as a utilitarian conceives it). Truth is good because self-gratification is good—good food, nice home, stimulating company. Truth is good for the same reason other things are good: it leads to pleasure, satisfaction, happiness. We can all agree that these things are good; well, truth is just one among the engines of human gratification. The pragmatist thus invokes ordinary human goods and identifies the goodness of truth with these goods.

And this is a very natural move: what else could constitute the goodness of truth? But it is also a move that has generated criticism: for surely not all true beliefs maximize utility—for example, grief will be the result of believing truly that a loved one has just died. Sometimes truth requires us to face harsh realities; the happiness-producing belief may be the false belief. And what about true belief in a society ruled by propaganda, as in George Orwell’s 1984? In Orwell’s dystopia true belief leads inevitably to Room 101 (and we know what happens to you there). Isn’t the pragmatic theory a recipe for wishful thinking, conformity, and slavery to the passions? We want to protest: you could believe the truth and have it lead to absolute disaster—it would still be the truth! Sure, truth often leads to utility, but not as a matter of definition, not as a matter of essence. A belief can be true even though it fails to maximize utility. Additionally, a belief can be true though it has nothing to do with desire satisfaction, as with abstract theoretical beliefs. The pragmatist has therefore failed to explain the nature of truth in terms of human goods of the standard sort. Is it then an incorrect theory?

But didn’t it seem to rest on an important insight—the normative nature of truth? Here we need to separate two things: (a) truth as a type of good and (b) the utilitarian theory of goodness. We can have (a) without (b). Consider Plato’s account of truth in which truth is essentially connected to goodness and beauty: for Plato, believing the truth is contemplating the sublime world of forms, chief among them the form of the Good. This makes for an elevation of the soul: communion with the perfect and eternal. This is not a matter of appetites and bodily needs, quite the contrary. Plato accepts that truth is a type of good but he doesn’t identify the good with desire satisfaction. For him, the good is contemplating the forms, and that is what true belief enables one to do. This will lead to a special higher form of happiness—the happiness of rational contemplation, roughly. There is thus room for a Platonic form of pragmatism: true belief is belief that leads to rational happiness, i.e. contemplation of the forms. This kind of happiness (soul elevation) is consistent with many kinds of ordinary unhappiness. A person may be destitute and yet in rational contact with a higher reality (Diogenes, for example): his believing is good even though it does not mitigate his material deprivations. If there are goods beyond the basic goods, then a Platonic pragmatist can appeal to these goods to explain the nature of truth.[2] We ought to pursue truth because of these goods not those identified by your typical American pragmatist, focused as he is on creaturely wellbeing. Truth is essentially connected to the Good and the Beautiful, according to Plato; so these notions can be invoked to inject a normative element into truth. We can thus be Platonic pragmatists not American-style pragmatists. At any rate, such a combination of views is logically consistent and not unattractive.

We need not agree with Plato’s view of truth in order to appreciate the architecture of his position. Truth is a good thing, but its goodness does not consist in desire satisfaction but in something more rarefied—the “good of the intellect”. Truth is an intellectual good not an appetitive good; it is superior to falsehood as a condition of the intellectual faculties. It may not be easy to specify the nature of this kind of goodness, though it commands intuitive acceptance; still, this position offers a way to agree with the basic insight of pragmatism while avoiding the standard objections to it. There is something “pragmatic” about truth in the sense that it conduces to a human good—an intellectual good—but it is not a matter of maximizing non-intellectual wellbeing. The good of truth is not the good of satiety, safety, and prosperity; it is the good of understanding, insight, and judgment. More grandly, it is the good of intellectual receptivity to reality—a kind of self-transcendence. It is the very opposite of slavery to the passions, subjection to our own needy animal nature; it opens the self to what lies beyond it. Classic pragmatism puts the human self at the center of the search for truth, identifying truth with the satisfaction of basic human needs; Platonic pragmatism puts the aim of self-transcendence at the center of the search for truth, identifying truth with the intellectual good of apprehending reality impersonally, without regard to its ability to satisfy our needs. It is both the opposite of classic pragmatism and yet a version of its basic insight, viz. that truth must be connected to goodness in order to be what we intuitively take it to be. Platonic pragmatism thus has the virtues but not the vices of classic pragmatism.

 

Colin McGinn

[1] Pragmatism is a consequentialist theory of truth that emphasizes human happiness. Formally, it resembles utilitarianism with respect to moral rightness: the right act is the one with the best utilitarian consequences. Thus utilitarianism might be characterized as “moral pragmatism”. The two doctrines have a similar form, though one concerns rightness of action and the other concerns truth of belief. Were the pragmatists influenced by the utilitarians?

[2] Another traditional conception of truth provides a direct link between truth and goodness, namely the idea that in knowing the truth about the world we come to know God’s mind. If God created the world according to his own nature, then insight into the world is insight into God’s nature, and that is in itself deemed good. Thus truth is valuable because knowledge of God is valuable; such knowledge may even enable us to live better lives by God’s standards. Again, this is a kind of “pragmatism” that does not appeal to the idea of human desire as the good that truth serves, instead invoking a “higher” type of good.

Ontological Commitment

Ontological Commitment

 

 

Can there be a criterion of ontological commitment? Can there be a formal test of what a person is ontologically committed to? What a person is committed to is a matter of what he believes or assumes or presupposes or is prepared to act on—in short, of his attitudes. So the question is whether there is a linguistic litmus test for an attitude of commitment. Can we read a person’s ontology off his verbal productions? Can I figure out my ontological commitments by inspecting my use of language?

The first thing to observe is that the question is not restricted to matters of existence. As the term is commonly used, “ontological commitment” is taken to refer to what a person takes to exist, so that it is interchangeable with “existential commitment”. That is certainly one form of commitment—what a person believes to exist—but it is not the only form. Consider “chromatic commitment”: what colors you believe things have (whether they exist or not). You may believe that things are colored and you may believe specific color claims—these are your chromatic ontological commitments. Ontology concerns what is so, and color is a matter of what is so. Roses are red and violets are blue—and Santa Claus has a white beard and a red cloak (whether he exists or not). I might believe that colors are unreal and that nothing has them; in that case I am not ontologically committed with respect to color, though I might well believe in the existence of the things commonly said to be colored. Ontological commitment can concern any fact or putative fact: do you believe in that fact or not? Do you believe in moral facts, divine facts, facts about unobservable entities, psychological facts, and so on? Existence is just one kind of ontological commitment: we might say that it concerns one type of property, viz. the property of existence. Does anything have the property of existing? Which things do? Does anything have the property of being colored? Which things do? And so for any property you care to mention. A criterion for existential commitment might be a willingness to affirm “Such-and-such exists”, and a criterion for chromatic commitment might be a willingness to affirm “Such-and-such is red” (and similarly for other kinds of fact). It is artificial to single out existence from other sorts of ontological commitment: it is just one kind of factual commitment. The proper contrast here is with “epistemological commitment”: what we are committed to in the way of knowledge. What is it that we think we know? Do we think there is any knowledge, and if so what is known? We can be committed on questions of being (fact, reality) and we can be committed on questions of knowledge; what we are committed to existentially is just a special case of a more general question.

The question of providing a criterion of ontological commitment is thus broader than that of providing a criterion of existential commitment. Quine announced, “To be is to be the value of a variable”; he has been paraphrased thus, “What you say there is, you say there is”. That is, you are committed to whatever your sentences mean: if you affirm a sentence that can be true only if certain things exist, then you are committed to the existence of those things. For example, you can’t say, “There are numbers” and then turn round and deny there are numbers: you must be taken at your word. But it is the same with all forms of ontological commitment: if you say, “Roses are red” you can’t turn round and deny that roses are red (same for “good”, “solid”, “conscious”, and so on). To be committed to red things is to describe things as red. You are committed to such facts as your sayings require for their truth. The criterion of commitment is saying. You can’t disavow what you affirm: you can’t say it and then try to take it back. You can’t say it in practice but then disavow it theoretically. You can’t have your ontological cake and eat it. You can’t weasel out of your statements.

That sounds all very reasonable (indeed trivial—what was the fuss all about?), but actually it runs into difficulties as a formal test of ontological commitment. The idea was to provide a public formal test of ontological commitment, eschewing the vagaries of what a person internally believes. We might think of it as a behavioral criterion for a mental phenomenon: what a person is committed to (believes to be so) is what he affirms in his public utterances. A person believes in unicorns if she affirms, “There are unicorns” or “Unicorns exist”. I determine what I believe in by examining what I say, and I might be surprised at what turns up (I may find that I accept, say, an ontology of events or possible worlds). Thus the criterion is formal and public: it invokes facts of language and it is interpersonally accessible. No need to delve into the inner recesses of a person’s mind.

But the proposal is obviously problematic. It hardly provides a necessary condition, since you can keep silent about what you believe or may not have language at all; and it is not sufficient, since speech is not always sincere assertion. It is possible to say something and not believe what one says, as in play-acting or elocution practice. Even in assertion you may not be committed to what you assert in the sense that you believe what you say. A liar can’t use his assertions to figure out his ontological commitments. The assertion must be sincere, i.e. you must believe what you assert. But that is what we were seeking a criterion for—belief. Speech is never a sure guide to belief, so we can’t formulate a test of ontological commitment from facts about speech. My ontological commitments can be read off my sincere assertions—if I sincerely assert, “Snow is white”, then I am committed to snow being white—but the commitment comes from the belief not the assertion. No act of speech (or writing) can add up to belief, so there cannot be a formal linguistic criterion of ontological commitment. In order to find out what I am committed to you have to find out what I believe; what I say isn’t going to get you there. It may be true that what I say there is I say there is, but it doesn’t follow that that is what I believe there is. The most that can be claimed is that we have a criterion for the ontological commitments of what someone says—a speech act is “committed” to what is required for its truth—but this is a far cry from the ontological commitments of a person. What I believe is not the same thing as what I say, since I may not give voice to my beliefs and, if I do, I may not mean what I say. My ontological commitments are fixed by my beliefs—but that is a trivial tautology not an illuminating criterion.