Does Arithmetic Rest on a Mistake?

How can the statement “1 + 1 = 2” be true? How can the operation of adding 1 to itself produce the number 2? There is only one number 1, so how could it by itself give rise to the distinct number 2? If you add the number 1 to itself, all you get is the number 1. It’s like adding Socrates to Socrates and hoping to get Plato (or “double Socrates”, whatever that may be). If anything, we have the oddity “Socrates + Socrates = Socrates”. The Concise OED has an instructive definition of “add”: “to join to or put with something else”; the Shorter OED gives “join to or unite (one thing to another) as an increase or supplement”. Both stipulate that the added things must be distinct (“something else”, “another”): but 1 is not distinct from itself, so it can’t properly be added to itself. And how would doing that “increase” anything? In our initial statement two occurrences of the numeral “1” denote the self-same number, and the statement asserts that this number added to itself gives 2 as sum. What is this strange kind of addition, and if it were to exist how could it yield the number 2? If “+” expresses a function, it would appear to have the same number occur in both argument places—yet we are told that this single number yields 2 as value of the function from 1 as argument. Notice that no one ever utters the sentence “1 added to itself equals 2”, because that makes the incoherence obvious—as with “Socrates added to himself equals Plato (or some other entity distinct from Socrates)”. On the face of it, then, arithmetic contains an absurdity—but one that escapes notice and goes unchallenged. What is going on?

            We must first observe that ordinary language contains two sorts of number word: adjectival and nominal. The arithmetical language we have been considering is nominal: nouns, singular terms and proper names that denote numbers conceived as objects. These terms form the subject of sentences to which predications are directed. But in much ordinary speech the adjectival use dominates: “five dogs”, “three cats”, “one car”. Here we are not using number words to denote objects but as components of predicates; they modify count nouns or sortals. In the adjectival use we can say things like, “One cat and one dog together add up to two animals”; or more formally, “One dog + one cat = two animals”. There is nothing puzzling here: there are many dogs and cats subject to counting and they can feature in equations (one dog and one cat are clearly distinct things). We are not trying to get two things out of one or engaging in peculiar acts of addition. We said nothing here about the object 1 and adding it to itself; we spoke only of the number of cats and dogs. I conjecture that people tend to hear the pure mathematical nominal statement as short for, or closely related to, the applied adjectival statement; and this leads them to overlook the peculiarities of the former kind of statement, logically speaking. Probably, when we are drilled in academic arithmetic in our early school years, we are introduced to its formulas by means of adjectival paraphrases that lull the mind into a sense of familiarity, while actually changing the thought in fundamental ways. An ontology of cats and dogs is covertly replaced by an ontology of numbers denoted by proper names. Thus children don’t protest, “But you can’t produce 2 just by combining 1 with itself!”
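            The adjectival reading can be made fully explicit with the familiar logical device of numerical quantifiers (a standard paraphrase, offered here only as an illustration, with “A” abbreviating the count noun “animal”). “There are exactly two animals” comes out as:

            ∃x ∃y (Ax ∧ Ay ∧ x ≠ y ∧ ∀z (Az → z = x ∨ z = y))

            Here the word “two” has dissolved into a pattern of quantifiers and identity signs; no singular term for a number appears, and nothing is being added to itself.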

            Abetting these adjectival uses in leading us to overlook the logical problems inherent in “1 + 1 = 2”, we have sign-object confusion: we see two signs for 1 and conjure two number 1’s to go with them. This gives us the illusion that 1 can be converted into 2 by being added to itself. That clearly won’t work for “4 + 4 = 8” and infinitely many sentences like it, since there are not eight occurrences of “4” here; but anyway the fallacy is too blatant to bamboozle the mind for long. There is just the number 4 here, denoted twice, and it can’t be converted into 8 by being added to itself: 4 put together with itself gives just the same old number 4. In addition to this there is vagueness and uncertainty about what precisely these mathematical objects are, which allows the mind to imagine that they can increase in magnitude simply by self-adding. One has to focus on the logical character of the statements in question to see how peculiar they are, as standardly understood. In any case there are several factors that induce us to overlook the actual intended content of these sentences, the main one being the availability of adjectival counterparts to them, which are perfectly kosher.

            The problem I have indicated infects certain attempts to define the natural numbers. Leibniz’s approach, endorsed by Frege, has it that each number is composed of a series of 1’s (apart from zero). Thus “1 + 1 + 1 = 3”: we can define 3 in this manner, and so on for all numbers. But adding 1 to 1 is not a method for generating a new number; it is simply a way to remain stuck at the number 1. We can add 2 to 1 to get 3 because these are different numbers, but adding a number to itself can’t produce a new number. Non-identity is the essence of counting. It might be thought that there is a way out by exploiting the adjectival paraphrase as follows: the statement “one collection + another collection + one more collection = three collections” is perfectly meaningful, allowing us to identify these three entities with the number 3. That is not adding one thing to itself, but rather adding three distinct things together (as it might be, collections of dogs, cats, and mice). But really this says nothing like the original statement containing tokens of “1” that all denote the same number; it merely gives the false impression that such a statement makes sense by sounding similar to it.
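            For the record, the Leibniz-style definitions (as Frege reports them) run roughly: 2 = 1 + 1, 3 = 2 + 1, 4 = 3 + 1, and so on, so that 3 unpacks into (1 + 1) + 1. Every number beyond 1 is thereby exhibited as a sum of 1’s; and the objection just raised is that the very first step, “1 + 1”, already consists in adding the number 1 to itself.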

            It might be said that we could save arithmetic by reformulating it adjectivally, ridding ourselves of nominal expressions and an ontology of numbers as objects. That sounds like a solid move in principle, but it won’t be able to save all of arithmetic as it now exists, because that subject has taken on a life of its own. We would need to be able to restate all propositions about numbers in adjectival terms—for example, propositions affirming that a number is prime, a cube, a successor, and so on. How can theorems about numbers as such be represented in a language that declines to refer to them? What is called “number theory” will find it difficult to reformulate itself using only numerical adjectives and count nouns—how can we even say that a certain number is even? Adjectival arithmetic is fine in the marketplace, but it won’t do to encompass nominalized academic arithmetic.

            Could we ban all equations of the form “n + n = m” but keep the rest of pure arithmetic? There will still be infinitely many true equations to play with, such as “5 + 3 = 8”. This doesn’t add any number to itself. But unfortunately the problem persists under the surface: for implicit in such a statement is an addition of one number to itself, viz. 3 added to 3, since 3 is part of 5 and so gets added to 3 (with 2 added to the result to give 8). Hidden in “5 + 3” is the addition “3 + 3”—as also is “4 + 1”. So we can’t avoid commitment to such equations even when they don’t appear on the surface; they lurk beneath because they are built into the whole conception. Numbers can always be broken down so as to generate them. You can’t have arithmetic, as it now exists, without these kinds of equations, despite their manifest weirdness (they don’t even fit the dictionary definition of “add”). One might even say that they claim a metaphysical impossibility: adding an object to itself (itself an impossible operation) to produce a quite distinct object (impossible ontologically). This is what you get if you nominalize adjectives illicitly.

            Here is an analogy: talk of large and small objects is common, as in “Jumbo is a small elephant” and “Mickey is a large mouse” (attributive adjectives). There is no logical problem about such sortal-relative adjectives in their proper grammatical position, but if we try to abstract them away from this position in order to form nominal expressions we get ourselves into trouble. Thus we might elect to speak of an entity called “largeness” and regard it as self-subsistent, as if “large” had a meaning independently of the nouns to which it is usually linked. Then we would wonder how a single animal could have both largeness and smallness at the same time, given that Jumbo is a large mammal but a small elephant (small for an elephant). Similarly, number words originally belong with count nouns, in which position they are unproblematic; but if you abstract them from that context and nominalize them, you find that the ontology thus created produces monsters like “1 + 1”. You can certainly add one cat to one dog and get two animals, but if you try to add the object 1 to itself to get the object 2 you run into incoherencies. In effect, there has been an illicit reification—of attributive adjectives or of numerical adjectives. Singular terms have been introduced and objects assigned to them, along with certain operations (like addition): but the coherence of the whole structure has not been demonstrated. In fact, the structure is built on equations that have no clear sense—or else are demonstrably nonsensical.
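            Spelled out, the earlier example runs: 5 + 3 = (3 + 2) + 3 = (3 + 3) + 2 = 6 + 2 = 8. The self-addition “3 + 3” surfaces as soon as 5 is broken into 3 + 2, and a similar decomposition lurks inside any sum of distinct positive numbers; the banned equations are thus recoverable from the permitted ones.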

            So what is the status of arithmetic as it is commonly understood? Is it simply nonsense? Are its propositions analogous to “Largeness is larger than smallness” or “Largeness added to largeness equals even larger largeness”? That is, does it consist of mangled adjectives forced to dress up as pseudo proper names? Should it therefore be dropped, eschewed, and ridiculed? That seems harsh. Perhaps a form of fictionalism will serve to save it: arithmetical facts in the shape of adjectival constructions have been converted into propositions about fictional entities, obeying fictional laws. Names have been introduced and formulas manufactured, so that we end up with the likes of “1 + 1 = 2”. We drill kids in this discourse, as we drill them in other fictional discourse masquerading as fact (e.g. religion), and they are forced to accept it at face value. People end up believing in the Holy Trinity, a piece of transparent nonsense; and they end up believing that there are objects that when added to themselves produce other greater objects, which is scarcely more credible than the Holy Trinity nonsense. So maybe the whole shebang is carefully curated fiction presented as sober truth. And there is no denying that the edifice rests on perfectly sensible foundations in the use of number words in adjectival form; it is not pure nonsense. Nor is nonsense always and necessarily pernicious; it may even be useful (“useful fictions”, e.g. the average man). Is it an accident that Charles Dodgson was both a mathematician and a creator of delightful nonsense? Arithmetic, as we have it, is a human construction, according to fictionalism, like the creatures of the Jabberwocky, and it does not need literal truth in order to captivate the human mind. And it’s sort of true, given its sterling adjectival origins. We can carry on cheerfully intoning such nonsense as “1 + 1 = 2” while accepting that we are engaging in metaphysical quackery. The whole history of mathematics is littered with controversy about the reality of this or that newly created mathematical entity (zero, the infinitesimal, the irrational, the negative, etc.): is it inconceivable that the arithmetic of positive whole numbers is also steeped in ontological mud?

 

Colin McGinn