Yes, if you have a very high bar for assumptions or the strength of the bound, it is impossible.
Fortunately, we don’t need a guarantee this strong. One research pathway is to weaken the requirements until they no longer produce a contradiction like this, while preserving most of the properties you wanted from the guarantee. For example, one such weakening is to require that the agent provably does well relative to what is possible for agents of similar runtime. This still gives us a reasonable guarantee (“it will do roughly as well as any agent with comparable resources could have done”) without requiring that it solve the halting problem.
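A toy sketch of what a relative guarantee can look like, in the spirit of the weakening above: the Hedge (multiplicative-weights) algorithm provably earns nearly as much cumulative reward as the best single strategy in a fixed pool, without knowing in advance which strategy that is. The candidate pool, reward values, and learning rate here are illustrative assumptions, not anything specified in the original argument.

```python
import math
import random

def hedge(candidates, rounds, eta=0.5, seed=0):
    """Follow a pool of candidate strategies with multiplicative weights.

    Guarantee (relative, not absolute): the agent's total reward is
    within roughly ln(n)/eta + eta*rounds of the best candidate's
    total reward -- "as well as the best strategy in the pool,"
    rather than "optimal among all possible agents."
    """
    rng = random.Random(seed)
    n = len(candidates)
    weights = [1.0] * n
    agent_reward = 0.0
    candidate_rewards = [0.0] * n
    for t in range(rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        # Each candidate proposes an action; rewards are in [0, 1].
        rewards = [c(t) for c in candidates]
        # The agent follows one candidate, sampled by current weight.
        pick = rng.choices(range(n), weights=probs)[0]
        agent_reward += rewards[pick]
        # Upweight candidates that did well this round.
        for i in range(n):
            candidate_rewards[i] += rewards[i]
            weights[i] *= math.exp(eta * rewards[i])
    return agent_reward, max(candidate_rewards)

# Two hypothetical strategies: one consistently good, one consistently bad.
agent, best = hedge([lambda t: 0.9, lambda t: 0.1], rounds=200)
```

After a short learning phase the agent concentrates on the better strategy, so its total reward trails the best candidate by only a small additive amount, even though it never "solves" the problem of identifying the best strategy in advance.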