In particular, I’ve gathered some quotes suggesting that the idealization was what Turing was focusing on.
Yes, Turing machines are an idealization, but they’re meant to be an idealization of processes of computation we can carry out in the real world. That’s why computability is a useful notion to study: it corresponds to something we care about in reality. In contrast, a notion of computation which includes time travel or arbitrary oracles is less useful, because as far as we know those don’t exist. Regarding the point about time boundedness, yes, Turing-computability does not exactly correspond to things we will be able to compute in reality (probably), but it’s a lot closer to reality than models of computation with oracles or time travel. Similarly, the Earth is not exactly spherical, but models which treat the Earth as spherical are a lot closer to reality than models which treat space as having 7 dimensions.
And while Turing gave some intuition for why the Church-Turing Thesis should hold, intuitions are not proof.
I think this might be the core issue with your argument: the Church-Turing thesis is not the sort of thing you can prove or disprove mathematically. It’s an attempt at matching an informal concept with a set of axioms. You can’t prove or disprove a set of axioms; you can only choose sets which are more or less useful based on philosophical argument, empirical evidence, etc. Everybody who studies computability is aware that you can define alternative models of computation and study their properties; it’s just that those other models are considered less important because they can’t actually be built. If you want to convince people to change their notion of computability, you have to argue that your new definition is more useful for modeling something in reality they care about; trying to pull a ‘gotcha’ based on a list of requirements you made up yourself is pointless.
Yes, Turing machines are an idealization, but they’re meant to be an idealization of processes of computation we can carry out in the real world. That’s why computability is a useful notion to study: it corresponds to something we care about in reality.
The problem is that by idealizing that away, we have made the notion much less useful. Universal Turing Machines as Turing defined them are basically in the same class as ideas like arbitrary oracles or time travel: they require assuming that basic physical constraints are fundamentally wrong.
In other words, idealization buys us too much power.
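To make this concrete, here is a minimal sketch (my own illustration, not something from this exchange): the Ackermann function is total and Turing-computable, so the idealized model happily “computes” it, yet its outputs outgrow any physically realizable memory almost immediately.

```python
# A minimal sketch: Turing-computable does not mean physically computable.
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # deep recursion even for tiny inputs

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9  -- fine on real hardware
print(ackermann(3, 3))  # 61 -- still fine
# ackermann(4, 2) already equals 2**65536 - 3, a number with 19,729 digits;
# an idealized machine with an unbounded tape "computes" values like this,
# but no bounded physical device gets anywhere near ackermann(4, 4).
```

In that sense the unbounded tape is doing the same kind of work as an oracle: it grants access to resources that, as far as we know, our universe does not contain.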
In particular, I disagree with this heavily:
Regarding the point about time boundedness, yes, Turing-computability does not exactly correspond to things we will be able to compute in reality (probably), but it’s a lot closer to reality than models of computation with oracles or time travel.
Yes, technically UTMs are a little better at modeling computation, but they’re still much, much farther from likely reality than that comparison suggests, because they rely on certain physical assumptions being wrong, and any way those assumptions turn out wrong would likely make the other models of computation more plausible as well. The UTM is a little less wrong, but there are other models which are far more accurate. In other words, the idealization is so unrealistic that it forfeits the claim to be modeling real-world computation.
If you want to study real-life computation without leaving theory, computational complexity already does that job: it models near-future reality while avoiding most of the physical messiness.
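For a sense of scale, here is a rough back-of-the-envelope sketch (the physical figures are my own ballpark assumptions, not from this exchange) of why complexity-theoretic distinctions track physical feasibility in a way bare computability does not:

```python
# Even a generous upper bound on the universe's computational budget is
# dwarfed by exponential-time algorithms on modest inputs.
AGE_OF_UNIVERSE_S = 4.35e17   # ~13.8 billion years, in seconds (assumed figure)
OPS_PER_SECOND = 1e18         # roughly one exascale machine (assumed figure)
budget = AGE_OF_UNIVERSE_S * OPS_PER_SECOND   # ~4.4e35 operations total

for n in (50, 100, 200):
    steps = 2 ** n            # cost of an O(2^n) brute-force search
    print(f"n={n}: 2^n = {steps:.2e} steps, within budget: {steps <= budget}")

# n=50  -> ~1.1e15 steps: minutes of exascale time.
# n=100 -> ~1.3e30 steps: "within budget" only by running an exascale
#          machine for the entire age of the universe.
# n=200 -> ~1.6e60 steps: out of reach under any physical assumptions --
#          decidable, but not computable *here*.
```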
the Church-Turing thesis is not the sort of thing you can prove or disprove mathematically. It’s an attempt at matching an informal concept with a set of axioms.
I have a few things to say here:
While Euclid’s axioms of geometry weren’t disproven, the intuitive notion that they represented the only valid/true geometry was broken, and I see a similar issue here.
Even then, there are other ways to arrive at the UTM using more formal definitions, and if you just want to focus on UTMs, that’s fine; just don’t imply that it’s somehow universal, the only true model, etc.
Also, how you do this, ‘matching an informal concept with a set of axioms’, can matter a lot: while I expect some intuitions to lead to the UTM, others won’t, and will instead produce drastically different results.
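As an illustration (a sketch of my own, with a toy machine I made up for the purpose): the same transition table gives drastically different results depending on a single axiom, namely whether the tape is unbounded, as Turing assumed, or capped at a physical bound.

```python
# A one-tape machine simulator, parameterized by an optional tape bound.
from typing import Optional

def run(delta, tape, state="q0", halt="qH",
        bound: Optional[int] = None, max_steps=10_000):
    """Run a machine; delta maps (state, symbol) -> (state, symbol, move)."""
    cells = dict(enumerate(tape))
    head, steps = 0, 0
    while state != halt and steps < max_steps:
        sym = cells.get(head, "_")                 # '_' is the blank symbol
        state, cells[head], move = delta[(state, sym)]
        head += {"L": -1, "R": 1}[move]
        if bound is not None and not (0 <= head < bound):
            return None                            # fell off the bounded tape
        steps += 1
    return "".join(cells[i] for i in sorted(cells))

# A toy machine: scan right over the input and append one more '1'.
delta = {
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("qH", "1", "R"),
}

print(run(delta, "111"))           # '1111' under the unbounded-tape axioms
print(run(delta, "111", bound=3))  # None under a bounded-tape axiomatization
```

Under the bounded axioms the machine can only distinguish finitely many configurations, so the induced notion of ‘computable’ is strictly weaker; that choice point is exactly where different intuitions about the informal concept diverge.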
I think a big difference between us is that I don’t expect computability theory to model our reality at all, only what is possible in the multiverse of logic. That’s why I’m fine with many of my results even if they can’t be applied in reality: I generally think that even the computable stuff isn’t useful in applied reality, so I usually work with arbitrary idealizations when doing computability theory.
You disagree with that because you do expect computability theory to model our reality.
If I wanted to model computational reality, I’d use very different tools than computability theory.
Edit:
Everybody who studies computability is aware that you can define alternative models of computation and study their properties,
It certainly wasn’t obvious to Turing, Church, Gödel, et al. I can agree that modern people studying computability theory may understand this, at least implicitly, but I don’t think that was true of the original founders of computability theory, and the link you gave me is quite insufficient: setting aside the other false claims in the Turing quote, it didn’t even define the concept beyond a very basic level.
Other researchers had to do this.