It’s extremely difficult for the agent to conclude that it will make any particular choice, because any proof of that (starting from your assumptions or any other assumptions) must also prove that the agent won’t stumble on any other proofs that lead to yet higher outcomes.
I.e., that the agent won’t find (contradictory) proofs that the same actions will lead to different, even higher utilities. Right, thanks.
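To make the obstacle concrete, here is a minimal sketch (hypothetical, not from this thread) of the kind of proof-search agent under discussion: it enumerates proven implications of the form "agent() == a implies utility == u" and takes the action with the highest proven utility. The `proof_search` stub and the specific actions/utilities are stand-ins, not anything the comment specifies.

```python
from typing import List, Tuple

Action = str

def proof_search(max_length: int) -> List[Tuple[Action, float]]:
    """Stand-in for enumerating all proofs up to `max_length` of
    statements of the form 'agent() == a implies utility == u'.
    A real implementation would run a theorem prover; this stub
    returns a fixed list so the sketch is runnable."""
    return [("one-box", 1_000_000.0), ("two-box", 1_000.0)]

def agent() -> Action:
    # Collect every proven (action, utility) implication found so far.
    proven = proof_search(max_length=10**6)
    # Take the action with the highest proven utility. Note the point
    # made above: to prove from outside that agent() returns a given
    # action, you must also prove that proof_search finds no further
    # implication assigning some action a strictly higher utility --
    # i.e., prove the nonexistence of certain proofs.
    best_action, _ = max(proven, key=lambda pair: pair[1])
    return best_action

print(agent())  # -> "one-box" under the stand-in proof list above
```

Under this sketch, any proof that `agent()` returns a particular action has to rule out every competing proven implication with a higher utility, which is exactly the difficulty described above.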