I might suggest “not interesting” rather than “not fair” as the complaint. One can imagine an Omega that leaves the box empty if the player is unpredictable, or if the player doesn’t rigorously follow CDT, or one that just always leaves it empty regardless. But such a setup drives no intuition pump, and offers no analysis of why a formalization would or wouldn’t get the right answer.
When I’m in challenge-the-hypothetical mode, I defend CDT by making the agent believe Omega cheats: the box is a trick box whose contents change AFTER the agent chooses but BEFORE they are revealed. To any rational agent, that hypothesis deserves much higher probability than mind-reading or extreme predictability, and under it CDT one-boxes for straightforwardly causal reasons (sketched below).
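To make that concrete, here’s a minimal sketch (my framing and my numbers, using the standard $1,000 / $1,000,000 Newcomb payoffs, none of which appear in the comment above) of the causal expected-value calculation for an agent who assigns some credence `p_cheat` to the trick-box hypothesis:

```python
# A minimal sketch, assuming the standard Newcomb payoffs: a CDT agent
# that assigns credence p_cheat to "Omega cheats -- the box is filled or
# emptied AFTER I choose" and 1 - p_cheat to "contents are already fixed."

SMALL = 1_000        # transparent box, always collected by a two-boxer
BIG   = 1_000_000    # opaque box, if filled

def cdt_ev(action: str, p_cheat: float, p_full_if_fixed: float) -> float:
    """Causal expected value of an action.

    p_cheat         -- credence that contents are set after the choice
    p_full_if_fixed -- prior that the opaque box is full in the world
                       where its contents are causally fixed in advance
    """
    if action == "one-box":
        # If Omega cheats, one-boxing causes the box to be full;
        # otherwise the fixed contents pay off at the prior rate.
        return p_cheat * BIG + (1 - p_cheat) * p_full_if_fixed * BIG
    # Two-box: a cheating Omega empties the box; fixed contents keep their prior EV.
    return SMALL + (1 - p_cheat) * p_full_if_fixed * BIG

for p_cheat in (0.0, 0.0005, 0.01, 0.5):
    one = cdt_ev("one-box", p_cheat, p_full_if_fixed=0.5)
    two = cdt_ev("two-box", p_cheat, p_full_if_fixed=0.5)
    print(f"p_cheat={p_cheat:<7} one-box EV={one:>11,.0f}  two-box EV={two:>11,.0f}")
```

With these payoffs the crossover sits at p_cheat = SMALL/BIG = 0.001: the agent doesn’t need to find cheating likely, only less absurd than one-in-a-thousand, and a vanilla CDT agent then one-boxes without any modification to the decision theory itself.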