OK, I think I’ve found a source of confusion here.
There are two fundamentally different questions one could ask:
what is the optimal action for X/X* to perform?
what computations should X/X* perform in order to work out which action ve should perform?
The first question is the standard decision-theoretic question, and in that context the halting problem is of no relevance because we’re solving the problem from the outside, not from the inside.
On the other hand, there is no point in taking the inside or “embedded” view unless we specifically want to consider computational or real-world constraints. In that context, the answer is that it’s pretty stupid for the agent to run a full simulation of itself, because that simulation would have to include the agent running that very simulation, and so would never finish.
Any decision-making algorithm in the real world has to be smart enough not to go into infinite loops. Of course, such an algorithm won’t be optimal, but expecting optimality outside of relatively easy cases would be silly anyway.
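To make the infinite-loop point concrete, here is a minimal Python sketch (all names are hypothetical, not anyone’s actual decision procedure): an agent that decides by simulating itself exactly never returns, while one that bounds its simulation depth always halts, at the cost of settling for an approximate self-model.

```python
# Toy illustration of why exact self-simulation can't work as a decision
# procedure, and one crude way to avoid the infinite regress.

def naive_decide(world):
    # "What would I do? Let me simulate myself deciding..."
    # This call just re-enters itself and never returns
    # (in practice it raises RecursionError).
    predicted_choice = naive_decide(world)
    return predicted_choice

def bounded_decide(world, depth=2):
    # Same idea, but the agent refuses to simulate itself past a fixed
    # depth and falls back to a crude default instead of looping.
    if depth == 0:
        return world["default_action"]
    # Approximate self-model: a shallower copy of this same procedure.
    predicted_choice = bounded_decide(world, depth - 1)
    # Decide using the prediction: take the best payoff, breaking ties
    # toward whatever the self-model predicted.
    payoffs = world["payoffs"]
    best = max(payoffs, key=payoffs.get)
    return predicted_choice if payoffs[predicted_choice] == payoffs[best] else best

if __name__ == "__main__":
    world = {"default_action": "cooperate",
             "payoffs": {"cooperate": 3, "defect": 5}}
    print(bounded_decide(world))    # halts and prints an action
    # print(naive_decide(world))    # would raise RecursionError
```

The bounded version is obviously not optimal; it exists only to show that avoiding the loop requires giving up on perfect self-prediction somewhere.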