If it’s purely theoretical then why can’t I have a hypercomputer? What’s wrong with simply solving the halting problem by using an oracle, or by running a Turing machine for infinitely many steps before I make my decision?
You’re asking the same question three times.
Anyway, a halting oracle can determine whether an ordinary Turing machine halts. It can’t determine whether it itself, or any other machine that consults the oracle, halts.
Any attempt to use an oracle could lead to X predicting Y, who in turn tries to predict X using an oracle. That is equivalent to the oracle trying to determine whether it itself halts.
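To make the self-reference concrete, here is a minimal sketch of the standard diagonalization argument. The names are hypothetical: `halts` stands in for the oracle (it isn’t implementable as real code), and the contradiction only appears once the oracle is asked about a program that itself consults the oracle.

```python
def halts(program, argument):
    """Hypothetical halting oracle: True iff program(argument) eventually halts.
    Assumed to work for ordinary Turing machines; not actually implementable."""
    raise NotImplementedError

def diagonal(program):
    # Ask the oracle about the program run on its own source,
    # then do the opposite of whatever it predicts.
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    else:
        return           # predicted to loop, so halt immediately

# Asking for halts(diagonal, diagonal) is exactly the "oracle trying to
# determine whether it itself halts" case: whichever answer it gives about
# a program that consults it is wrong.
```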
If I can’t have infinite time, then I might as well have 5 seconds.
This is of course true, but it just means that any finite amount of time, 5 seconds included, is inadequate.
OK, I think I’ve found a source of confusion here.
There are two fundamentally different questions one could ask:
1. What is the optimal action for X/X* to perform?
2. What computations should X/X* perform in order to work out which action ve should perform?
The first question is the standard decision-theoretic question, and in that context the halting problem is of no relevance because we’re solving the problem from the outside, not from the inside.
On the other hand, there is no point in taking the inside or “embedded” view unless we specifically want to consider computational or real-world constraints. In that context, the answer is that it’s pretty stupid for the agent to run an exact simulation of itself, because the simulation would have to contain another simulation, and so on without end.
Any decision-making algorithm in the real world has to be smart enough not to go into infinite loops. Of course, such an algorithm won’t be optimal, but it would be very silly to expect it to be optimal except in relatively easy cases.
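As a concrete illustration (all names here are hypothetical, not anything from the discussion above), here is a sketch of how an agent can avoid the infinite regress of self-simulation: every nested prediction spends part of a finite step budget, and when the budget runs out the agent falls back to a default action instead of recursing further.

```python
def agent_x(budget):
    if budget <= 0:
        return "cooperate"                    # out of compute: fall back to a default
    prediction_of_y = agent_y(budget - 1)     # bounded simulation of Y
    return "cooperate" if prediction_of_y == "cooperate" else "defect"

def agent_y(budget):
    if budget <= 0:
        return "cooperate"                    # out of compute: fall back to a default
    prediction_of_x = agent_x(budget - 1)     # bounded simulation of X
    return "cooperate" if prediction_of_x == "cooperate" else "defect"

# Unbounded mutual simulation would never terminate; with a budget it does,
# at the cost of the guaranteed optimality discussed above.
print(agent_x(budget=10))                     # -> "cooperate"
```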