Sure, and that’s the age-old argument for why we should not take second-order logic at face value. But in this case we cannot go around blithely talking about the integers, for there is no language we could use to speak of them, or of any other infinite set. We would be forbidden from saying that there is something we cannot talk about, and this is not surprising—what is it you can’t refer to?
I’m not familiar with the literature on this argument. (It was probably clear from the tentativeness of my comment that I was thinking my own murky way through this issue.)
You seem to take it as the default that we should take second-order logic at face value. (Now that I know what you mean by “face value”, I see that you did acknowledge this issue in your earlier comment.) But I should think that the default would be to be skeptical about this. Why expect that we have a canonical model when we talk about sets or predicates if we’re entertaining skepticism that we have a canonical model for integer-talk?
We don’t. Skepticism about sets, skepticism about predicates, and skepticism about canonical integers are all the same position in the debate.
And so is skepticism of canonical Turing machines, as far as I can tell. Specifically, skepticism that there is always a fact of the matter as to whether a given TM halts.
I think you might be able to make the skeptical position precise by constructing nonstandard variants of TMs: number the time steps and tape squares with nonstandard naturals, and let the number of symbols and states be nonstandard as well. You could then relate these machines back to the nonstandard models that produced them, by using a recursive description of N to regenerate the nonstandard model of the natural numbers you started with. This would show that there are nonstandard variants of computability, each of which believes in a different ‘standard’, ‘minimal’ model of arithmetic, is unaware of the existence of smaller models, and is thus presumably unaware of the ‘weaker’ (because they halt less often) notions of Turing machine.
Now, I’m not yet sure whether this construction goes through as I described it; for me, if it does, it weighs against the existence of a ‘true’ Standard Model, and if it doesn’t, it weighs in favor.
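Here, in case it helps, is a sketch of how the definition might go. It leans on the standard arithmetization of computation (Kleene’s T-predicate), and I haven’t verified the details:

```latex
% Sketch (unverified): computation relativized to a model of PA.
% Kleene's T-predicate T(e, x, s) -- ``s codes a halting computation of
% machine e on input x'' -- is definable by an arithmetic formula, so it
% makes sense inside any model of PA, standard or not.
Let $\mathcal{M} \models \mathsf{PA}$ be a (possibly nonstandard) model.
Call any $e \in M$ an $\mathcal{M}$-machine, and say that $e$ halts on
input $x \in M$ according to $\mathcal{M}$ iff
\[
  \mathcal{M} \models \exists s \, T(e, x, s),
\]
where the index $e$, the input $x$, and the halting computation $s$ are
all elements of $M$, and hence possibly nonstandard. Every theorem of
PA about computation holds internally in $\mathcal{M}$, and
$\mathcal{M}$ cannot define the cut of standard numbers inside itself,
so from its own point of view its numbers are the minimal model.
```

Note the direction this pushes in: a larger model has more potential witnesses s, so more of its machines “halt”; the machines of a smaller model (the standard one, if it exists) halt less often, which matches the ‘weaker’ notion above.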
I’m not sure, but I think it’s impossible to construct a computable nonstandard model of the integers (one where you can implement operations like +).
It is in fact provably impossible to construct a computable nonstandard model (one where, say, S and +, or S and ×, are both computable relations, S being the successor function) in a standard model of computation. What I was referring to was a nonstandard model that is computable according to an equally nonstandard definition of computation, one that makes explicit the definitional dependence of Turing machines on the standard natural numbers and replaces them with nonstandard ones.
The question I’m wondering about is whether such a definition leads to a sensible theory of computation (at least on its own terms) or whether it turns out to be just nonsense. This may have been addressed in the literature, but if so, it’s beyond the level to which I’ve read so far.
Would you give a reference? I found it easy to find assertions such as “the completeness theorem is not constructively provable,” but this statement is a little stronger.
Tennenbaum’s Theorem
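For reference, the usual statement, which covers the S-and-+ and S-and-× versions above:

```latex
% Tennenbaum's theorem (1959): no countable nonstandard model of PA
% is computable.
\textbf{Theorem.} If $\mathcal{M} \models \mathsf{PA}$ is countable and
nonstandard, then there is no bijection between the domain of
$\mathcal{M}$ and $\mathbb{N}$ under which $+^{\mathcal{M}}$ is a
computable relation, and likewise none under which $\times^{\mathcal{M}}$
is computable.
```

Since addition alone is already non-computable in any such model, S and + (and similarly S and ×) cannot both be computable under any presentation.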
But in this case we cannot go around blithely talking about the integers, for there is no language we could use to speak of them, or of any other infinite set. We would be forbidden from saying that there is something we cannot talk about, and this is not surprising—what is it you can’t refer to?

I believe that this claim is based on a defective notion of what it takes to refer to something successfully. The issue that we’re talking about here is a manifestation of that defect. I’m trying to work out a different conception of reference, but it’s very much a work in progress.