I’m saying that by asking about the behavior of such a machine implemented in the real world, you are being more specific than PA. For that question, you should think about the properties of physics and what kinds of mathematics they can implement, not about whether proofs in PA have “objective proof-hood.”
Gives me a good idea for a sci-fi story, though:
Suppose that rather than building a Turing machine ourselves to check proofs, we explore our infinite universe and find such a Turing machine that appears to have already been running for infinitely many timesteps. We can tell for sure that it’s checking proofs of PA, but to our shock, it’s actually somewhere in the middle of checking proofs coded by nonstandard numbers. We decide to build a space station to keep watch on it, to see if it ever halts.
So is it impossible for me to abstractly describe a Turing machine and then wonder whether it halts, with there necessarily being a fact of the matter, all without resorting to physical instantiations?
The idea I’m trying to express is that “a proof using PA axioms is a valid proof using PA axioms if and only if it is enumerated by the following TM: [standard PA TM enumerator description]”.
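Here’s a minimal sketch of the shape I have in mind, in Python. It’s hedged: a real enumerator would dovetail through all strings of PA’s language, and the checker would implement PA’s axiom schemas and inference rules. Below, a fixed toy axiom list plus modus ponens stands in for that, and the names `AXIOMS`, `check_proof`, and `enumerate_proofs` are my own illustrative choices, not anything standard:

```python
# Minimal sketch: dovetail through all finite line-sequences and emit
# exactly those that pass a decidable proof-check. The toy check here
# (a fixed axiom list plus modus ponens over "->" strings) is an
# illustrative stand-in for PA's actual axioms and inference rules.
from itertools import count, product

AXIOMS = {"p", "q", "p->(q->p)"}  # toy axioms, NOT Peano arithmetic

def follows_by_modus_ponens(line, earlier):
    # `line` follows if some earlier line reads "x->line" for an x
    # that also appears earlier.
    return any(f"{x}->{line}" in earlier for x in earlier)

def check_proof(lines):
    # A nonempty sequence is a proof iff every line is an axiom or
    # follows by modus ponens from strictly earlier lines.
    return bool(lines) and all(
        line in AXIOMS or follows_by_modus_ponens(line, lines[:i])
        for i, line in enumerate(lines)
    )

def enumerate_proofs():
    # Dovetail over all finite sequences drawn from a candidate pool.
    # A real PA enumerator would range over all strings of PA's language.
    pool = sorted(AXIOMS | {"(q->p)"})
    for n in count(1):
        for lines in product(pool, repeat=n):
            if check_proof(lines):
                yield lines

# Print the first few "valid proofs" the enumerator finds.
gen = enumerate_proofs()
for _ in range(4):
    print(next(gen))
```

The only structural point this is meant to show: the per-proof check is decidable, so “enumerated by this TM” is supposed to pin down “valid PA proof” extensionally.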
My question is: what’s an example of a PA proof that you think is arguably valid but wouldn’t be enumerated?
A purely logical TM would be understood to enumerate different proofs depending on the model of the axioms you used to specify it. This is how there can be one model of PA where such a TM halts, and another model of PA where such a TM doesn’t halt. So your plan doesn’t do the work you seem to think it does.
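To put the same point in a hedged formalization (the predicate name HaltsBy below is my illustrative choice, not standard notation): “the machine halts” is a Σ₁ sentence, and Σ₁ sentences are exactly the kind that can pick up nonstandard witnesses.

```latex
% "M halts" as a \Sigma_1 sentence, where HaltsBy_M(t) is a \Delta_0
% (decidable) formula saying "t codes a complete halting computation of M".
\[
  \mathrm{Halt}_M \;\equiv\; \exists t\; \mathrm{HaltsBy}_M(t)
\]
% In a nonstandard model of PA the witness t may be a nonstandard
% number, so it is consistent to have
\[
  \mathcal{M} \models \mathrm{Halt}_M
  \qquad\text{while}\qquad
  \mathbb{N} \not\models \mathrm{Halt}_M .
\]
```

That’s the precise sense in which one model of PA can satisfy “the TM halts” while another doesn’t: in the first model, the halting witness is a nonstandard number.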
Don’t think of this as “there is one actual thing, but it mysteriously has multiple behaviors.” Even though it’s really convenient to talk that way (I did it myself just above), try to think of it like this: when you pick some axioms, they don’t actually pick out a single thing (if they’re complicated enough). Instead, they’re like a name shared by multiple “different things” (models), which can behave differently.
So you don’t think there’s a Turing machine that enumerates all and only valid PA proofs?
For what proof encoded only by a nonstandard number would you endorse the claim “this proof doesn’t objectively lack proof-hood”?