# Does Arithmetic Rest on a Mistake?

How can the statement “1 + 1 = 2” be true? How can the operation of adding 1 to itself produce the number 2? There is only one number 1, so how could it by itself give rise to the distinct number 2? If you add the number 1 to itself, all you get is the number 1. It’s like adding Socrates to Socrates and hoping to get Plato (or “double Socrates”, whatever that may be). If anything, we have the oddity “Socrates + Socrates = Socrates”. The Concise OED has an instructive definition of “add”: “to join to or put with something else”; the Shorter OED gives “join to or unite (one thing to another) as an increase or supplement”. Both stipulate that the added things must be distinct (“something else”, “another”): but 1 is not distinct from itself, so it can’t properly be added to itself. And how would doing that “increase” anything? In our initial statement we have two occurrences of the numeral “1” denoting the self-same number, and the statement asserts that this number added to itself gives 2 as sum. What is this strange kind of addition, and if it were to exist how could it yield the number 2? If “+” expresses a function, it would appear to have the same number occur in both argument places—yet we are told that this single number yields 2 as the value of the function from 1 as argument. Notice that no one ever utters the sentence “1 added to itself equals 2”, because that makes the incoherence obvious—as with “Socrates added to himself equals Plato (or some other entity distinct from Socrates)”. On the face of it, then, arithmetic contains an absurdity—but one that escapes notice and goes unchallenged. What is going on?

Abetting these adjectival uses in overlooking the logical problems inherent in “1 + 1 = 2”, we have sign-object confusion: we see two signs for 1 and conjure two number 1’s to go with them. This gives us the illusion that 1 can be converted into 2 by being added to itself. That clearly won’t work for “4 + 4 = 8” and infinitely many sentences like it, since there are not eight occurrences of “4” here; but anyway the fallacy is too blatant to bamboozle the mind for long. There is just the number 4 here, denoted twice, and it can’t be converted into 8 by being added to itself: 4 put together with itself gives just the same old number 4. In addition to this there is vagueness and uncertainty about what precisely these mathematical objects are, which allows the mind to imagine that they can increase in magnitude simply by self-adding. One has to focus on the logical character of the statements in question to see how peculiar they are, as standardly understood. In any case there are several factors that induce us to overlook the actual intended content of these sentences, the main one being the availability of adjectival counterparts to them, which are perfectly kosher.
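The sign-object confusion can be made vivid in a few lines of code (a sketch of mine, not anything the essay proposes; Python serves purely as illustration): two distinct tokens of the numeral “1” pick out one and the same number.

```python
# Two tokens of the sign "1"...
a = int("1")
b = int("1")

# ...but only one number denoted: two signs, one object.
print(a == b)       # the tokens denote the same value
print(len({a, b}))  # the "two" 1's collapse into a single element
```

Counting the signs gives two; counting what they stand for gives one — which is exactly the gap the confusion trades on.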

The problem I have indicated infects certain attempts to define the natural numbers. Leibniz’s approach, endorsed by Frege, has it that each number is composed of a series of 1’s (apart from zero). Thus “1 + 1 + 1 = 3”: we can define 3 in this manner, and so on for all numbers. But adding 1 to 1 is not a method for generating a new number; it is simply a way to remain stuck at the number 1. We can add 2 to 1 to get 3 because these are different numbers, but adding a number to itself can’t produce a new number. Non-identity is the essence of counting. It might be thought that there is a way out by exploiting the adjectival paraphrase as follows: the statement “one collection + another collection + one more collection = three collections” is perfectly meaningful, allowing us to identify these three entities with the number 3. That is not adding one thing to itself, but rather adding three distinct things together (as it might be, collections of dogs, cats, and mice). But really this says nothing like the original statement containing tokens of “1” that all denote the same number; it merely gives the false impression that such a statement makes sense by sounding similar to it.
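For contrast, here is how the Leibniz–Frege construction looks when spelled out in Peano style, where each numeral is built by applying a successor operation and “+” is an ordinary two-place function. The datatype and names below are my own illustrative choices, not anything in the text:

```python
from dataclasses import dataclass

# Peano-style numerals: a number is zero or the successor of a number.
class Nat:
    pass

@dataclass(frozen=True)
class Zero(Nat):
    pass

@dataclass(frozen=True)
class Succ(Nat):
    pred: Nat

def add(m: Nat, n: Nat) -> Nat:
    """Addition by recursion on the second argument."""
    if isinstance(n, Zero):
        return m
    return Succ(add(m, n.pred))

one = Succ(Zero())
two = Succ(one)

# "1 + 1 = 2": the same number fills both argument places,
# and the value is a distinct numeral, not 1 merged with itself.
print(add(one, one) == two)
```

On this rendering nothing is ever added to itself in the dictionary’s sense: `add` takes the number 1 in both argument places and returns a distinct numeral, `Succ(Succ(Zero))`, which is what the definition calls 2. Whether this answers the essay’s objection or merely restates the disputed move is, of course, the question at issue.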

It might be said that we could save arithmetic by reformulating it adjectivally, ridding ourselves of nominal expressions and an ontology of numbers as objects. That sounds like a solid move in principle, but it won’t be able to save all of arithmetic as it now exists, because that subject has now taken on a life of its own. We would need to be able to restate all propositions about numbers in adjectival terms—for example, propositions affirming primes, cubes, successors, etc. How can theorems about numbers as such be represented in a language that declines to refer to them? What is called “number theory” will find it difficult to reformulate itself using only numerical adjectives and count nouns—how can we even say that a certain number is even? Adjectival arithmetic is fine in the market place, but it won’t do to encompass nominalized academic arithmetic.

So what is the status of arithmetic as it is commonly understood? Is it simply nonsense? Are its propositions analogous to “Largeness is larger than smallness” or “Largeness added to largeness equals even larger largeness”? That is, does it consist of mangled adjectives forced to dress up as pseudo proper names? Should it therefore be dropped, eschewed, and ridiculed? That seems harsh. Perhaps a form of fictionalism will serve to save it: arithmetical facts in the shape of adjectival constructions have been converted into propositions about fictional entities, obeying fictional laws. Names have been introduced and formulas manufactured, so that we end up with the likes of “1 + 1 = 2”. We drill kids in this discourse, as we drill them in other fictional discourse masquerading as fact (e.g. religion) and they are forced to accept it at face value. People end up believing in the Holy Trinity, a piece of transparent nonsense; and they end up believing that there are objects that when added to themselves produce other greater objects, which is scarcely more credible than the Holy Trinity nonsense. So maybe the whole shebang is carefully curated fiction presented as sober truth. And there is no denying that the edifice rests on perfectly sensible foundations in the use of number words in adjectival form; it is not pure nonsense. Nor is nonsense always and necessarily pernicious; it may even be useful (“useful fictions”, e.g. the average man). Is it an accident that Charles Dodgson was both a mathematician and a creator of delightful nonsense? Arithmetic, as we have it, is a human construction, according to fictionalism, like the creatures of the Jabberwocky, and it does not need literal truth in order to captivate the human mind. And it’s sort of true, given its sterling adjectival origins. We can carry on cheerfully intoning such nonsense as “1 + 1 = 2” while accepting that we are engaging in metaphysical quackery. The whole history of mathematics is littered with controversy about the reality of this or that newly created mathematical entity (zero, the infinitesimal, the irrational, the negative, etc.): is it inconceivable that the arithmetic of positive whole numbers is also steeped in ontological mud?

Colin McGinn

3 replies
1. Jeffrey Kessen says:

The right side of your post is cut off, so I couldn’t quite read the whole thing. I stunk at math in high school, but started reading some history and philosophy of mathematics before I went to college, and kind of got into it. Check out Morris Kline. I did well enough anyway to be hired as a tutor of math at a community college (oh how I boasted about that to my sisters and parents). But skip Morris Kline for now. Check out instead Three Dog Night’s “One Is the Loneliest Number” on YouTube.

• Lucas says:

I have long thought this myself, but I don’t believe arithmetic relies on a mistake so much as an implicit assumption of classes.

For example, 1 + 1 assumes the possibility of multiple identical things, which is of course oxymoronic. One apple plus another apple is not two identical objects but one object and another entirely distinct object. What allows the idea of multiples is the implicit assumption of a class of items (apples) to which the two items belong, and beyond which we make no further distinction other than the fact that they are separate things.
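Lucas’s point can be put concretely in a short sketch (mine, purely illustrative): counting never adds a thing to itself; it tallies distinct members of a class.

```python
# Two distinct objects, grouped under the one class "apple".
apples = {"this apple", "that apple"}

# "1 + 1 = 2" here reports the size of the class,
# not a single number added to itself.
print(len(apples))  # 2
```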

Therefore any assertion by maths lovers that maths is an ultimate truth I don’t buy, as it is only true within the framework of itself as an abstraction.

Nevertheless, its explanatory and predictive powers are so impressive that it is an indispensable tool in modelling – and thus understanding and working in – the complex universe we occupy. One hydrogen atom is very much like another, and knowing its atomic mass and spectrometric signature helps us understand the very large and very small with much more clarity than if we were to treat each one as unique. Similarly, one millimetre is so similar to another that I am happy not to split hairs when measuring my floor for a new rug. When talking about such things, saying 1 + 1 = 2 is so near to the truth that it is hard to see the benefit of splitting hairs other than for the purposes of philosophical distinction.

And while it may be possible to teach students this distinction, I can’t see it changing much about the way maths is taught. Can you imagine? Now, children, let’s count together: one, one, one, one… 🙂