Well, it seems obvious that it’s true, but tricky to formalise. Subtle problems like agent-simulates-predictor (where you know more than Omega) and perhaps some diagonal agents (who apply diagonal reasoning to you) seem to be relatively believable situations. It’s a bit like Gödel’s theorem: initially the only examples were weird and specifically constructed, but then people found more natural ones.
But “do what you would have precommitted to doing” seems to be much better than other strategies, even if it’s not provably ideal.