Even then, I’m still a bit mystified: I don’t understand why people treat it as an absolute notion that applies to all computational models, rather than simply as a definition, one that can be replaced by other equivalent or different definitions, as in this quote below:
Gödel said that “the great importance … [of] Turing’s computability” is
largely due to the fact that with this concept one has for the first time succeeded in giving an absolute definition of an interesting epistemological notion, i.e., one not depending on the formalism chosen. (Gödel 1946: 150)
or like this quote below:
Gödel also said:
The resulting definition of the concept of mechanical by the sharp concept of “performable by a Turing machine” is both correct and unique. … Moreover it is absolutely impossible that anybody who understands the question and knows Turing’s definition should decide for a different concept. (Quoted in Wang 1996: 203)
Putting it less politely, my biggest issue is that a motte-and-bailey is going on: people want to defend the correct motte that the Church-Turing thesis is a definition (albeit one with other equivalent formulations), but then go beyond that to defend the incorrect bailey that it’s the absolute/universal/only valid definition of something, much as people once at least implicitly assumed the intuitive definition of Euclidean geometry was the only possible valid geometry.
I will check out Turing’s and Church’s papers, though I will say that conditions 2.5 and 2.6 are there to express the two ideas below:
2.5 is essentially an explicit statement that no constraints were added to the problem other than the constraints above.
2.6 was not Turing’s idea or problem; it’s a modern problem. I’ll have to edit the post to say that this is not something Turing thought about.
And condition 2.3 essentially replaces creativity with rigor by asking whether a single algorithm can solve arbitrary instances of a problem. In a decision problem, where the answers are yes or no, I require the algorithm to always say yes if the instance is in the set and no if it is not, and this must hold for arbitrary instances. In other words, given that algorithm applied to the problem, you don’t need to think about it; you can just be rigorous, in the sense that you apply the same algorithm to arbitrary instances, follow it by rote, and never need a different algorithm.
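To make the “one fixed algorithm, arbitrary instances, always a yes/no answer” idea concrete, here is a minimal Python sketch of a total decider (primality is my own choice of an arbitrary decidable set, not one from the post):

```python
def is_prime(n: int) -> bool:
    """Total decider for the set of primes: it halts on every
    natural-number input and answers yes/no by rote, with no
    case-by-case creativity needed."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The same fixed procedure handles arbitrary instances:
print(is_prime(7919))   # True  (7919 is prime)
print(is_prime(7920))   # False
```

The point is that one unchanging procedure covers every instance of the problem, which is exactly the rigor-instead-of-creativity requirement in condition 2.3.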
Regarding open question 1, I think the answer to “how many functions can we compute in this model” is “all functions”. Since you are allowing arbitrary oracles to be used (by the branching argument), for any given function we can just use an oracle which computes that function.
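The branching argument can be made concrete: if the model may consult an arbitrary oracle, then for any target function f we can simply hand f itself to the machine as its oracle. A minimal Python sketch (the names here are mine, purely for illustration):

```python
from typing import Callable

def oracle_machine(oracle: Callable[[int], int], x: int) -> int:
    """A trivial 'machine' whose whole computation is one oracle
    query. Because the oracle is unconstrained, plugging in any
    function f: N -> N makes the machine compute f."""
    return oracle(x)

# The choice of oracle does all the work; a computable one is
# used here only so the example runs.
print(oracle_machine(lambda n: n * n, 12))  # 144
```

In general the oracle need not be computable at all, which is why this construction reaches “all functions” rather than just the computable ones.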
This seems like it’s instead an answer to Open Question 2: we can simulate all oracle tapes on all inputs, finite or infinite, which I think is equivalent to computing all the functions from N to N, and that’s the set of all programs. I was also asking for an upper bound, if you can supply one, but I’ll take this as a partial answer.
Is there a reason that allowing arbitrary extensions of the UTM that respect the constraints of an effective function does the same thing? That is, if we give the UTM arbitrary extensions, can we likewise get it to compute, say, all the functions from N to N, and is it upper-bounded by the set of all functions from N to N?
Edit: I’ve decided to share some preliminary results of my checking, and I already see some issues below:
For example, this statement:
That said, I think it’s very unlikely you’ll find that to be the case, since Turing was well aware that halting oracles existed, so clearly he didn’t intend to include them in his definition of effective operations.
is correct, but it misses major nuance, and importantly it assumes the oracle is not a machine. That’s a problem, because without more assumptions you can’t exclude halting oracles from your definition of effective operations. Turing assumed there could be an unspecified oracle, but in the statement below he wrongly asserts that the oracle that calculates effective functions is not a machine, when it is; critically, he didn’t know that the model could be made concrete as an actual machine. To be clear, I don’t blame Turing too much: the results in the post weren’t known, and Church and Turing built the entire foundation from scratch, so it was inevitable that mistakes would be made.
Let us suppose that we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine.
This is from the paper Systems of Logic Based On Ordinals, link is below:
https://pure.mpg.de/rest/items/item_2403325_2/component/file_2403324/content
There may be other problems, but that’s just the first problem I found.
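For reference, the standard diagonal argument behind “a halting oracle cannot be an ordinary program” can be sketched concretely. The `halts` parameter below is a hypothetical decider, not a real function; the sketch only shows how feeding the diagonal program to itself forces a contradiction:

```python
def paradox(halts):
    """Given a purported total halting decider halts(program, input),
    build the program that diagonalizes against it."""
    def d(p):
        # d halts on p exactly when halts says p does NOT halt on p.
        if halts(p, p):
            while True:  # loop forever
                pass
        return "halted"
    # Running d on itself contradicts halts either way it answers,
    # so no ordinary program can implement a correct `halts`.
    return d
```

This is why any correct halting oracle has to sit outside the class of Turing machines, which is the context for Turing’s “cannot be a machine” remark above.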