Ultimately we do want the human to be able to run arbitrarily expensive subroutines, which prohibits using any heuristic of the form “stop this computation if it goes on for more than N steps.”
What if we keep this heuristic but also define T to have an instruction that is equivalent to calling a halting-problem oracle (with each call counting as one step)? Of course that makes it harder for the outer AGI to reason about how to maximize its utility, but the increase in difficulty doesn’t seem very large relative to the difficulty in the original proposal.
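To make the suggestion concrete, here is a minimal sketch of the intended semantics: a step-bounded interpreter in which a single oracle instruction costs exactly one step, no matter how hard the question it answers would be to compute. This is not the original proposal's definition of T; the names (`ORACLE_HALTS`, `run`, `step_budget`) are illustrative, and the oracle itself is uncomputable, so it is stubbed out.

```python
from typing import Callable, List, Tuple

# An instruction is an opcode plus its arguments, e.g. ("NOOP", ()) or
# ("ORACLE_HALTS", (program_source, program_input)).
Instruction = Tuple[str, tuple]


def halting_oracle(program: str, inp: str) -> bool:
    """Stand-in for the uncomputable halting oracle; any actual run would have
    to stub or approximate this (e.g. with a timeout-based guess)."""
    raise NotImplementedError("no computable implementation exists")


def run(program: List[Instruction],
        step_budget: int,
        oracle: Callable[[str, str], bool] = halting_oracle) -> str:
    """Execute at most `step_budget` instructions. Every instruction, including
    an oracle call, consumes one step, so the 'stop after N steps' heuristic
    still applies even though a single step can answer an arbitrarily hard
    question."""
    steps = 0
    for op, args in program:
        if steps >= step_budget:
            return "CUT OFF"          # the step-count heuristic fires
        if op == "ORACLE_HALTS":
            _answer = oracle(*args)   # counts as exactly one step
        elif op == "NOOP":
            pass
        steps += 1
    return "FINISHED"
```

The point of the sketch is just that the step budget and the oracle instruction compose cleanly: the human can invoke arbitrarily expensive subroutines through the oracle, while the outer AGI still only has to reason about a bounded number of steps.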