My understanding is that Nelson doesn’t necessarily believe in exponentiation either, although his work here doesn’t have the direct potential to wreck that.
I know this isn’t the main point of your comment but I got carried away looking this stuff up.
In his 1986 book, he defines a predicate exp(x,k,f) which basically means “f is a function defined for i < k and f(i) = x^i”. And he defines another predicate epsilon(k) which is “for all x there exists f such that exp(x,k,f)”. I think you can give a proof of epsilon(k) in his regime in O(log log k) steps, via repeated squaring: a proof of epsilon(m) yields a proof of epsilon(m^2) at O(1) extra cost, so the chain from epsilon(2) up past k has about log log k stages. In standard arithmetic that takes O(1) steps using induction, but O(log log k) is pretty cheap too if you’re not going to iterate exponentiation. So that’s a sense in which Nelson does believe in exponentiation.
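If it helps to see why the count is doubly logarithmic, here is a toy sketch of my reading of the argument (my own illustration, not Nelson’s formalism), where I’m assuming a “step” means one squaring stage of O(1) inferences:

    def proof_stages(k):
        """Count the epsilon(m) -> epsilon(m*m) stages needed to pass k."""
        m, stages = 2, 0
        while m < k:
            m = m * m      # epsilon(m) yields epsilon(m^2) at O(1) extra cost
            stages += 1
        return stages

    for k in (2**10, 2**100, 2**1000):
        print(k.bit_length() - 1, proof_stages(k))   # ~log2 k vs. ~log2 log2 k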
On the other hand I think it’s accurate to say that Nelson doesn’t believe that exponentiation is total, i.e. that for all n, epsilon(n). Here’s an extract from Nelson’s “Predicative Arithmetic” (Princeton U Press 1986). Chapter heading: “Is exponentiation total?”
Why are mathematicians so convinced that exponentiation is total (everywhere defined)? Because they believe in the existence of abstract objects called numbers. What is a number? Originally, sequences of tally marks were used to count things. Then positional notation—the most powerful achievement of mathematics—was invented. Decimals (i.e. numbers written in positional notation) are simply canonical forms for variable-free terms of arithmetic. It has been universally assumed, on the basis of scant evidence, that decimals are the same kind of thing as sequences of tally marks, only expressed in a more practical and efficient notation. This assumption is based on the semantic view of mathematics, in which mathematical expressions, such as decimals and sequences of tally marks, are regarded as denoting abstract objects. But to one who takes a formalist view of mathematics, the subject matter of mathematics is the expressions themselves together with the rules for manipulating them—nothing more. From this point of view, the invention of positional notation was the creation of a new kind of number.
How is it then that we can continue to think of the numbers as being 0, S0, SS0, SSS0, …? The relativization scheme of Chapter 5 [heading: “Induction by relativization”] explains this to some extent. But now let us adjoin exponentiation to the symbols of arithmetic. Have we again created a new kind of number? Yes. Let b be a variable-free term of arithmetic. To say that the expression 2^b is just another way of expressing the variable-free term of arithmetic, SS0 · SS0 · … · SS0 with b occurrences of SS0, is to assume that b denotes something that is also denoted by a sequence of tally marks. (The notorious three dots are a direct carry-over from tally marks.) The situation is worse for expressions of the form 2^2^b—then we need to assume that 2^b itself denotes something that is also denoted by a sequence of tally marks.
Mathematicians have always operated on the unchallenged assumption that it is possible in principle to express 2^b as a numeral by performing the recursively indicated computations. To say that it is possible in principle is to say that the recursion will terminate in a certain number of steps—and this use of the word “number” can only refer to the primitive notion; the steps are things that are counted by a sequence of tally marks. In what number of steps will the recursion terminate? Why, in somewhat more than 2^b steps. The circularity in the argument is glaringly obvious.
I think I can explain that. Nelson doesn’t believe in any kind of Platonic number, but he does believe in formal number systems and that they can sometimes accurately describe the world. We have essentially two number systems: the tally-mark system (e.g. SSSSS0 and so on), and the positional system (e.g. 2011 and so on). Both of these are successful at modeling aspects of the world.
Most people regard these two number systems as equivalent in some sense. Specifically, there is a procedure for rewriting a positional number as a tally number, and most people believe that this procedure terminates “at least in principle.” (For instance, one can formally prove this in PA using the induction axiom). But Nelson denies it. He believes that the positional number system and the tally number system are different, and that the positional number system is strictly stronger. I think your remark --
This argument applies just as well to addition and multiplication.
-- applies to the tally system, but not to the positional system. But Nelson does not believe that the positional system is inconsistent. So he is happy to adjoin axioms to the tally system that allow for working with addition and multiplication.
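To make the disputed procedure concrete, here is a minimal sketch of the positional-to-tally rewriting (my illustration; nothing like this appears in Nelson’s book). Everyone agrees on what the procedure is; the disagreement is over whether its termination “in principle” may be taken as primitive, given that the output length equals the value denoted:

    def to_tally(numeral):
        """Rewrite a decimal numeral as a tally term S...S0."""
        n = 0
        for digit in numeral:       # Horner's rule, one digit at a time
            n = 10 * n + int(digit)
        return "S" * n + "0"        # n successor symbols: the costly part

    print(to_tally("7"))            # SSSSSSS0
    print(len(to_tally("2011")))    # 2012 symbols from a 4-digit numeral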
My knowledge of complexity theory is very limited, but I thought exponentiation could be done in polynomial time. Is that incorrect, or am I misunderstanding Nelson’s argument (please don’t tell me he also doubts complexity theory, we must be allowed some fun :P).
EDIT: Yes, I was wrong, the claim of polynomial time in the length of b may well be too strong, but it does seem to take fewer than 2^b steps. I think I can use the positional system to compute 2^b in O(b^2) steps, which seems to lead us straight from accepting multiplication to accepting exponentiation.
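For what it’s worth, here is the back-of-the-envelope version of that O(b^2) count (a sketch under my own accounting, where a “step” is one single-digit operation): b doubling passes, each touching the roughly 0.3*b decimal digits of the running value.

    def pow2_positional(b):
        """Compute 2^b as a decimal string by repeated doubling, counting digit ops."""
        digits, ops = [1], 0              # least-significant digit first
        for _ in range(b):                # b doubling passes
            carry = 0
            for i in range(len(digits)):
                carry, digits[i] = divmod(2 * digits[i] + carry, 10)
                ops += 1                  # one single-digit operation
            if carry:
                digits.append(carry)
        return "".join(map(str, reversed(digits))), ops

    value, ops = pow2_positional(64)
    print(value, ops)   # 18446744073709551616, a few hundred ops (~ 0.15 * b^2)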
I think so, in positional notation. That’s what I was alluding to with my O(log log k) comment. But there is no polynomial time algorithm to reduce 2^n to a tally number (obviously: the output alone is 2^n symbols long).
I once read something by Nelson about the polynomial hierarchy in complexity theory, but I can’t find it now.
See Nelson’s paper where he talks about the qualitative difference between types of recursion.