## Does Arithmetic Rest on a Mistake?


How can the statement “1 + 1 = 2” be true? How can the operation of adding 1 to itself produce the number 2? There is only one number 1, so how could it by itself give rise to the distinct number 2? If you add the number 1 to itself, all you get is the number 1. It’s like adding Socrates to Socrates and hoping to get Plato (or “double Socrates”, whatever that may be). If anything, we have the oddity “Socrates + Socrates = Socrates”. The Concise OED has an instructive definition of “add”: “to join to or put with something else”; the Shorter OED gives “join to or unite (one thing to another) as an increase or supplement”. Both stipulate that the added things must be distinct (“something else”, “another”): but 1 is not distinct from itself, so it can’t properly be added to itself. And how would doing that “increase” anything? Our initial statement contains two occurrences of the numeral “1”, both denoting the self-same number, and it asserts that this number added to itself gives 2 as sum. What is this strange kind of addition, and if it were to exist how could it yield the number 2? If “+” expresses a function, it would appear to have the same number occur in both argument places—yet we are told that this single number yields 2 as the value of the function for the argument 1. Notice that no one ever utters the sentence “1 added to itself equals 2”, because that makes the incoherence obvious—as with “Socrates added to himself equals Plato (or some other entity distinct from Socrates)”. On the face of it, then, arithmetic contains an absurdity—but one that escapes notice and goes unchallenged. What is going on?
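For concreteness, the functional reading just questioned can be written out in standard notation (my gloss, not part of the essay's own text): “+” is treated as a binary function on the natural numbers, and nothing in that notation forbids filling both argument places with the same number.

```latex
% The standard functional reading of "+":
+ \colon \mathbb{N} \times \mathbb{N} \to \mathbb{N},
\qquad
{+}(1, 1) = 2
% i.e. "1 + 1 = 2" says the function + takes the pair (1, 1),
% the same number in both argument places, to the value 2.
```

The essay's complaint can then be put sharply: how can a single object, occupying both argument places, determine a distinct object as value?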

Abetting these adjectival uses in masking the logical problems inherent in “1 + 1 = 2” is sign-object confusion: we see two signs for 1 and conjure two number 1’s to go with them. This gives us the illusion that 1 can be converted into 2 by being added to itself. That clearly won’t work for “4 + 4 = 8” and infinitely many sentences like it, since there are not eight occurrences of “4” here; but anyway the fallacy is too blatant to bamboozle the mind for long. There is just the number 4 here, denoted twice, and it can’t be converted into 8 by being added to itself: 4 put together with itself gives just the same old number 4. In addition to this, there is vagueness and uncertainty about what precisely these mathematical objects are, which allows the mind to imagine that they can increase in magnitude simply by self-adding. One has to focus on the logical character of the statements in question to see how peculiar they are, as standardly understood. In any case there are several factors that induce us to overlook the actual intended content of these sentences, the main one being the availability of adjectival counterparts to them, which are perfectly kosher.

The problem I have indicated infects certain attempts to define the natural numbers. Leibniz’s approach, endorsed by Frege, has it that each number is composed of a series of 1’s (apart from zero). Thus “1 + 1 + 1 = 3”: we can define 3 in this manner, and so on for all numbers. But adding 1 to 1 is not a method for generating a new number; it is simply a way to remain stuck at the number 1. We can add 2 to 1 to get 3 because these are different numbers, but adding a number to itself can’t produce a new number. Non-identity is the essence of counting. It might be thought that there is a way out by exploiting the adjectival paraphrase as follows: the statement “one collection + another collection + one more collection = three collections” is perfectly meaningful, allowing us to identify these three entities with the number 3. That is not adding one thing to itself, but rather adding three distinct things together (as it might be, collections of dogs, cats, and mice). But really this says nothing like the original statement containing tokens of “1” that all denote the same number; it merely gives the false impression that such a statement makes sense by sounding similar to it.
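The Leibniz-style definitions that Frege discusses (in the Grundlagen, §6) can be set out explicitly; Leibniz's proof of “2 + 2 = 4” runs on them, and Frege notes that it also tacitly appeals to associativity:

```latex
% Leibniz's definitions, as reported by Frege:
2 := 1 + 1, \qquad 3 := 2 + 1, \qquad 4 := 3 + 1
% The proof of 2 + 2 = 4, substituting definitions
% (the bracket-shift is the tacit appeal to associativity):
2 + 2 = 2 + (1 + 1) = (2 + 1) + 1 = 3 + 1 = 4
```

The essay's objection targets the first of these definitions: if adding 1 to itself cannot yield anything but 1, the whole chain never gets started.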

It might be said that we could save arithmetic by reformulating it adjectivally, ridding ourselves of nominal expressions and an ontology of numbers as objects. That sounds like a solid move in principle, but it won’t be able to save all of arithmetic as it now exists, because that subject has taken on a life of its own. We would need to be able to restate all propositions about numbers in adjectival terms—for example, propositions about primes, cubes, successors, etc. How can theorems about numbers as such be represented in a language that declines to refer to them? What is called “number theory” will find it difficult to reformulate itself using only numerical adjectives and count nouns—how can we even say that a certain number is even? Adjectival arithmetic is fine in the marketplace, but it won’t do to encompass nominalized academic arithmetic.
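One standard way to cash the adjectival idiom, in the Frege–Russell tradition, is with numerically definite quantifiers: “there are exactly two Fs” can be said without referring to the number 2 at all.

```latex
% "There are exactly two Fs", with no term denoting a number:
\exists x\, \exists y\, \bigl( Fx \wedge Fy \wedge x \neq y
  \wedge \forall z\, ( Fz \rightarrow z = x \vee z = y ) \bigr)
% Note that the non-identity of x and y is written into the formula.
```

The built-in non-identity clause chimes with the remark above that non-identity is the essence of counting; but, as the paragraph observes, theorems that quantify over numbers themselves resist paraphrase by this device.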