Indeed. These are all scenarios of the form “Omega looks at the source code for your decision theory, and intentionally creates a scenario that breaks it.” Omega could do this with any possible decision theory (or at least, any that could be implemented with finite resources), so what exactly are we supposed to learn by contemplating specific examples?
It seems to me that the valuable Omega thought experiments are the ones where Omega’s omnipotence is simply used to force the player to stick to the rules of the given scenario. When you start postulating that an impossible, acausal superintelligence is actively working against you, it’s time to hang up your hat and go home, because no strategy you could possibly come up with is going to do you any good.
The trouble is when another agent wins both in this situation and in the situations you usually encounter. For example, an anti-traditional-rationalist, which always makes the opposite choice to a traditional rationalist, will one-box; it just fails spectacularly when asked to choose between different amounts of cake.
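(To make that contrast concrete, here is a minimal Python sketch, with made-up payoff numbers and names of my own choosing, of such an anti-agent: it simply inverts the traditional rationalist's dominance-style choice, which happens to one-box on Newcomb's problem but also grabs the smaller piece of cake.)

```python
# Purely illustrative sketch with made-up payoff numbers, not anyone's
# canonical decision theory: the anti-agent just inverts whatever a
# traditional (dominance-reasoning) rationalist would pick.

def traditional_rationalist(options, value):
    """Pick the option with the best payoff, holding the rest of the world fixed."""
    return max(options, key=value)

def anti_traditional_rationalist(options, value):
    """Always pick whatever the traditional rationalist would reject."""
    return min(options, key=value)

# Newcomb: holding the opaque box's contents fixed, two-boxing is always
# worth $1000 more, so the traditional rationalist two-boxes and the
# anti-agent one-boxes (and, given an accurate predictor, gets the million).
newcomb = {"one-box": 0, "two-box": 1000}
print(traditional_rationalist(newcomb, newcomb.get))        # -> two-box
print(anti_traditional_rationalist(newcomb, newcomb.get))   # -> one-box

# Cake: the same inversion now just means taking less cake.
cake = {"two slices": 2, "three slices": 3}
print(anti_traditional_rationalist(cake, cake.get))         # -> two slices
```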