> If you have an algorithm for producing an Omega, then I run said algorithm and run Omega on myself and do whatever Omega predicts I won’t do
> So then my decision procedure simulates Omega, who simulates my decision procedure. “My decision procedure” is the same decision procedure in both cases, so I think it’s impossible for me to do what Omega predicts I won’t do. Or is this where Omega becomes super-Turing? My knowledge on that is limited.
Thanks for your comment!

> So then my decision procedure simulates Omega, who simulates my decision procedure.
Yep. The problem is the unbounded recursion here, give or take. A reasons about B reasoning about A reasoning about B reasoning about A ad infinitum.
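To make the circularity concrete, here's a minimal sketch of the mutual simulation (Python, with both sides written as ordinary functions; the names are purely illustrative, not anything from the post):

```python
def omega(agent):
    """Hypothetical predictor: forecast the agent by simulating it."""
    return agent()

def agent():
    """Do whatever Omega predicts this agent won't do."""
    prediction = omega(agent)  # agent simulates Omega simulating agent simulating Omega...
    return not prediction

try:
    agent()
except RecursionError:
    print("A reasoning about B reasoning about A... never bottoms out")
```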
> I think it’s impossible for me to do what Omega predicts I won’t do.
Hence, there’s a contradiction. Assuming the Oracle side is correct, it’s fairly straightforward to show that the agent also succeeds at doing the opposite of whatever was predicted… which contradicts the prediction being correct. Hence, the Oracle side cannot be correct[1].
It’s easiest to see this via analogy to the Halting Problem[2].
~~Omega~~ Halting Problem Solver → a decidable algorithm that predicts ~~what an arbitrary agent does~~ if an arbitrary Turing Machine halts
If you have a decidable algorithm for producing ~~an Omega~~ a Halting Problem Solver, then I run said algorithm and run said resulting ~~Omega~~ Halting Problem Solver on myself and do whatever said ~~Omega~~ Halting Problem Solver predicts I won’t do (~~take both boxes~~ go into an infinite loop if and only if said ~~Omega~~ Halting Problem Solver predicts I won’t)
This is a contradiction, therefore an algorithm for producing ~~an Omega~~ a Halting Problem Solver cannot exist.
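A rough sketch of this form of the argument, with a hypothetical `build_halting_solver` standing in for the assumed producing algorithm (both names are illustrative placeholders, not real functions):

```python
def build_halting_solver():
    """Assumed-for-contradiction: an algorithm that constructs a Halting Problem
    Solver, i.e. returns a function solver(program, arg) -> bool that is always right."""
    raise NotImplementedError("assumed to exist only for the sake of the argument")

def contrarian(program):
    solver = build_halting_solver()  # run the assumed algorithm...
    if solver(program, program):     # ...run the resulting solver on myself...
        while True:                  # ...and loop forever iff it predicts I halt
            pass
    # otherwise, halt immediately

# contrarian(contrarian) halts exactly when the built solver says it doesn't,
# so no such producing algorithm can exist.
```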
Or, in the slightly more straightforward ‘standard’ proof form:
~~Omega~~ Halting Problem Solver → a decidable algorithm that predicts ~~what an arbitrary agent does~~ if an arbitrary Turing Machine halts
If you have ~~an Omega~~ a Halting Problem Solver, then I run said ~~Omega~~ Halting Problem Solver on myself and do whatever said ~~Omega~~ Halting Problem Solver predicts I won’t do (~~take both boxes~~ go into an infinite loop if and only if said ~~Omega~~ Halting Problem Solver predicts I won’t)
This is a contradiction, therefore ~~an Omega~~ a Halting Problem Solver cannot exist.
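And a matching sketch of the standard form, where the solver is assumed directly rather than produced (`halts` is again a hypothetical placeholder; the only difference from the previous sketch is where the solver comes from):

```python
def halts(program, arg):
    """Assumed-for-contradiction Halting Problem Solver: always returns a correct bool."""
    raise NotImplementedError("assumed to exist only for the sake of the argument")

def contrarian(program):
    if halts(program, program):  # if the solver predicts I halt...
        while True:              # ...loop forever instead
            pass
    # otherwise, halt immediately

# Whatever halts(contrarian, contrarian) answers is wrong about contrarian(contrarian),
# so the solver itself cannot exist.
```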
> Or is this where Omega becomes super-Turing?
This contradiction only applies if the agent can simulate Omega. (The premise requires that Omega can simulate the agent.)
One way of avoiding this contradiction is if the agent is not Turing-complete, and Omega can simulate the agent but not vice versa.
If the agent is Turing-complete, this implies that Omega must be Turing-complete in order for Omega to simulate the agent. By the Church-Turing thesis, if both are computable this in turn implies that Omega and the agent can simulate each other, and we get the contradiction above. Which means that if the agent is Turing-complete, Omega must not be computable. So the other possibility is that the agent is Turing-complete and Omega isn’t computable (and is e.g. an Oracle Machine). This is where Omega becomes super-Turing.
(A third possibility is that Omega is allowed to be semidecidable… but in this case, if I’m an agent that will result in Omega infinite-looping, Omega shouldn’t have been able to ask the question in the first place.)
[1] In particular, I suspect the failure mode of many Omegas would be to go into an infinite loop.
[2] This is a slightly different proof than the ‘standard’ proof of the Halting Problem[3], but this proof also works.
[3] The standard proof starts with ‘assume you have a TM that solves the Halting Problem’, and directly shows that said TM is a contradiction. This proof starts with ‘assume you have an algorithm that produces a TM that solves the Halting Problem’, and shows that said algorithm is a contradiction.