One can construct a Turing Machine which iterates over all possible PA proofs, halting if it ever finds an inconsistency. Given this, if you’re going to hold that there’s no objective fact of the matter about whether PA is consistent, you’ll also have to hold there’s no objective fact of the matter about whether this Turing Machine halts.
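The machine described here can be sketched concretely. Below is a minimal Python stand-in for the Turing machine (a sketch, not a real implementation): the actual PA proof-checking step is a hypothetical placeholder, `proves_contradiction`, assumed to be the decidable check of whether a candidate coded by `n` is a valid PA derivation of `0 = 1`.

```python
# Sketch of the consistency-checking machine: dovetail over all candidate
# proofs, encoded as natural numbers, and halt on the first valid proof of
# a contradiction. `proves_contradiction` is a hypothetical placeholder for
# the decidable proof-checking step; no real PA checker is implemented here.

from typing import Callable, Optional

def consistency_checker(
    proves_contradiction: Callable[[int], bool],
    max_steps: Optional[int] = None,
) -> Optional[int]:
    """Return the code of the first "proof of 0=1" found (the machine halts),
    or None if max_steps candidates are exhausted. The real machine has no
    step bound: it simply runs forever if PA is consistent."""
    n = 0
    while max_steps is None or n < max_steps:
        if proves_contradiction(n):   # decidable check of candidate n
            return n                  # inconsistency found: halt
        n += 1
    return None                       # gave up; no verdict

# Toy stand-in: pretend the candidate coded by 7 were a valid proof of 0=1.
demo = consistency_checker(lambda n: n == 7, max_steps=100)
# demo == 7: the machine halts at the first "contradiction" it finds.
```

Whether PA is consistent then corresponds to whether the unbounded version of this loop ever returns.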
Which proofs are possible depends on your model of PA! In non-standard models, you can have proofs coded by non-standard numbers.
More LW posts: https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs
When we build a Turing machine that iterates over “all possible” proofs, we have to make choices about physical implementation that are more specific than PA.
When a mathematical theory says that multiple things are consistent, and you try it in the real world and only one thing happens, you should infer that the real-world trial more precisely corresponds to some other, more specific mathematical structure in which only one thing is consistent. You should not infer that only one thing was real in the original mathematics, and that trying it in the real world uncovered which one it was.
So you don’t think there’s a Turing Machine which enumerates all and only valid PA proofs?
For which proof, coded only by a non-standard number, would you endorse the claim "this proof doesn't objectively lack proof-hood"?
I'm saying that by asking about the behavior of such a machine implemented in the real world, you are being more specific than PA. To answer that question, you should think about the properties of physics and what kinds of mathematics they can implement, not about whether proofs in PA have "objective proof-hood."
Gives me a good idea for a sci-fi story, though:
Suppose rather than building a Turing machine ourselves to check proofs, we explore our infinite universe and find such a Turing machine that appears to have already been running for infinitely many timesteps. We can tell for sure that it's checking proofs of PA, but to our shock, it's actually somewhere in the middle of checking proofs coded by some nonstandard sequence of numbers. We decide to build a space station to keep watch on it, to see if it halts.
So is it impossible for me to abstractly describe a Turing Machine, and then wonder whether it would halt, with there being a fact of the matter about the answer, all without resorting to physical instantiations?
The idea I’m trying to express is that “a proof using PA axioms is a valid proof using PA axioms if and only if it is enumerated by the following TM: [standard PA TM enumerator description]”.
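The "standard PA TM enumerator" gestured at here can also be sketched, under the same caveat: a real version would verify candidate derivations against PA's inference rules, while `is_valid_proof` below is a pluggable, hypothetical checker (a toy one is substituted for illustration).

```python
# Sketch of a proof enumerator: dovetail over all finite strings in the
# proof language, in length order, and yield exactly those the (decidable)
# checker accepts. The checker supplied below is a toy stand-in, not a real
# PA proof-verifier.

from itertools import count, product
from typing import Callable, Iterator

def proof_enumerator(
    alphabet: str,
    is_valid_proof: Callable[[str], bool],
) -> Iterator[str]:
    """Yield all and only the strings the checker accepts."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if is_valid_proof(candidate):   # decidable per-candidate check
                yield candidate

# Toy stand-in checker: accept strings consisting only of 'a's.
gen = proof_enumerator("ab", lambda s: set(s) == {"a"})
first_three = [next(gen) for _ in range(3)]
# first_three == ["a", "aa", "aaa"]
```

The dispute in the surrounding comments is then over what "all finite strings" quantifies over: in the standard model this enumeration covers exactly the standard-length proofs, which is the sense in which the biconditional is meant.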
My question is: what's an example of a PA proof that you think is arguably valid but wouldn't be enumerated?
A purely logical TM would be understood to enumerate different proofs depending on the model of the axioms you used to specify it. This is how there can be one model of PA where such a TM halts, and another model of PA where such a TM doesn’t halt. So your plan doesn’t do the work you seem to think it does.
Don't think of this as "there is one actual thing, but it mysteriously has multiple behaviors." Even though it's really convenient to talk that way (I did it above just now), try instead to think of it like this: when you pick some axioms, they don't actually pick out a single thing (if they're complicated enough); instead, they're like a name shared by multiple "different things" (models), which can behave differently.