You kill the shell not when it is in an infinite loop, but when it takes more than a few seconds to run. We can set up such a safety net, allowing the human to run anything that takes (say) less than a million years, without risk of crashing. This is the sort of thing I was referring to by “some care.”
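For concreteness, here is a minimal sketch of such a watchdog, assuming the shell command is launched as a subprocess from Python in a POSIX-like environment; the 5-second budget and the sample command are illustrative choices, not part of the proposal.

```python
import subprocess

def run_with_budget(command, seconds=5):
    """Run a command, killing it if it exceeds a wall-clock budget."""
    try:
        result = subprocess.run(
            command,
            capture_output=True,
            text=True,
            timeout=seconds,  # the watchdog: the child is killed once this elapses
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        # The shell "took more than a few seconds," so treat it as divergent.
        return None

# Example: an infinite loop gets cut off instead of hanging the outer system.
print(run_with_budget(["sh", "-c", "while true; do :; done"]))
```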
Ultimately we do want the human to be able to run arbitrarily expensive subroutines, which prohibits using any heuristic of the form “stop this computation if it goes on for more than N steps.”
What if we keep this heuristic but also define T to have an instruction that is equivalent to calling a halting-problem oracle (with each call counting as one step)? Of course that makes it harder for the outer AGI to reason about how to maximize its utility, but the increase in difficulty doesn’t seem very large relative to the difficulty in the original proposal.
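As a toy illustration of the accounting only (my own sketch, not the original construction), here is a step-counted interpreter in which a HALTS instruction is charged as a single step; since no computable halting oracle exists, a stub stands in for it purely to show how the bookkeeping would work.

```python
def run_T(instructions, max_steps, halts_oracle):
    """Execute a list of (opcode, payload) pairs under a step budget."""
    steps = 0
    for opcode, payload in instructions:
        if steps >= max_steps:
            return "budget exhausted after %d steps" % steps
        if opcode == "HALTS":
            _ = halts_oracle(payload)  # one oracle query, charged as one step
        # ...other opcodes of T would be handled here...
        steps += 1
    return "halted normally after %d steps" % steps

# Stub oracle: in the idealized machine this would answer the halting problem
# correctly; here it just returns a fixed value so the sketch runs.
print(run_T([("NOP", None), ("HALTS", "some program"), ("NOP", None)],
            max_steps=10,
            halts_oracle=lambda program: True))
```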