Is this even possible? Flexibility/generality seems quite difficult to get if you also want to account for the long-range effects of the agent’s actions, as at some point you’re just solving the halting problem. Imagine that the agent and environment together form some arbitrary Turing machine, and halting gives low reward. Then we cannot tell in general whether it eventually halts. Even in practice, it seems we cannot tell whether a complicated machine halts within a billion steps without simulating it or doing complicated static analysis.
Yes, if you have a very high bar for assumptions or the strength of the bound, it is impossible.
Fortunately, we don’t need a guarantee this strong. One research pathway is to weaken the requirements until they no longer cause a contradiction like this, while maintaining most of the properties that you wanted from the guarantee. For example, one way to weaken the requirements is to require that the agent provably does well relative to what is possible for agents of similar runtime. This still gives us a reasonable guarantee (“it will do as well as it possibly could have done”) without requiring that it solve the halting problem.
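The question’s point about bounded halting can be made concrete with a small sketch (my own illustration, not from the discussion above): as far as we know, the only general way to decide “does this machine halt within N steps?” is to actually run it for up to N steps. The Collatz map serves here as a stand-in for a “complicated machine” whose halting behaviour is hard to see statically.

```python
def bounded_halts(step, state, budget):
    """Simulate `step` from `state`; return True if it halts
    (signalled by returning None) within `budget` steps."""
    for _ in range(budget):
        state = step(state)
        if state is None:  # machine halted
            return True
    return False  # did not halt within the budget (says nothing beyond that)

def collatz_step(n):
    """One step of the Collatz map; reaching 1 counts as halting."""
    if n == 1:
        return None
    return n // 2 if n % 2 == 0 else 3 * n + 1

print(bounded_halts(collatz_step, 27, 1000))  # True: 27 reaches 1 within 1000 steps
print(bounded_halts(collatz_step, 27, 10))    # False: it has not reached 1 after 10 steps
```

Note that a `False` answer only tells you the machine didn’t halt within the budget, which is exactly why the weakened guarantee is stated relative to agents of similar runtime rather than over unbounded behaviour.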