Sure, and sometimes people can predict things like “the agent will use UDT” and use that to punish the agent. But this kind of prediction is “unfair” because it doesn’t lead to an interesting decision theory—you can punish any decision theory that way. So to me the boundaries of “fair” and “unfair” are also partly about mathematical taste and promising-ness, not just what will lead to a better tank and such.
Right, that kind of prediction is unfair because it doesn’t lead to an interesting decision theory… but I asked why you don’t get to predict things like “the agent will randomize.” All sorts of interesting decision theory comes out of considering situations where you do get to predict such things. (Besides, such situations are important in real life.)
I might suggest “not interesting” rather than “not fair” as the complaint. One can imagine an Omega that leaves the box empty if the player is unpredictable, or if the player doesn’t rigorously follow CDT, or just always leaves it empty regardless. But none of those setups drives an intuition pump, and none invites an analysis of why a formalization would or wouldn’t get the right answer.
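To make that concrete, here’s a minimal sketch (my own toy code, not from anyone in this thread; the payoffs are the standard $1,000 / $1,000,000 Newcomb numbers, and the agent and policy labels are made up for illustration). It tabulates payoffs under each of the Omega policies above:

```python
import random

BIG, SMALL = 1_000_000, 1_000  # contents of box B (if filled) and box A

def payoff(choice, b_filled):
    """Dollars the agent walks away with."""
    total = BIG if b_filled else 0
    if choice == "two_box":
        total += SMALL  # two-boxers also take box A
    return total

# Agents: a label plus how they actually choose.
agents = {
    "one_boxer":  lambda: "one_box",
    "CDT_agent":  lambda: "two_box",
    "randomizer": lambda: random.choice(["one_box", "two_box"]),
}

# Omega policies from the comment above: each maps the agent's label
# (standing in for whatever Omega predicts) to whether box B gets filled.
omega_policies = {
    "standard":         lambda a: a == "one_boxer",   # fills iff it predicts one-boxing
    "empty_if_random":  lambda a: a != "randomizer",  # punishes unpredictability
    "empty_unless_CDT": lambda a: a == "CDT_agent",   # punishes everyone but CDT
    "always_empty":     lambda a: False,              # punishes everyone, full stop
}

random.seed(0)
for policy, fills in omega_policies.items():
    for agent, strategy in agents.items():
        print(f"{policy:>16} vs {agent:>10}: ${payoff(strategy(), fills(agent)):,}")
```

The resulting table makes the point visible: under the last three policies the outcome is fixed by fiat on the agent’s label, so there is nothing for a decision theory to analyze; only the standard policy creates any tension between prediction and choice.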
When I’m in challenge-the-hypothetical mode, I defend CDT by making the agent believe Omega cheats: the box is a trick box whose contents change AFTER the agent chooses but BEFORE they are revealed. To any rational agent, that hypothesis deserves far more credence than mind-reading or extreme predictability.
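To put a number on that, here’s a hedged sketch (my own toy calculation, standard Newcomb payoffs assumed) of the CDT expected value when the agent assigns probability p_cheat to the trick-box hypothesis. Under that hypothesis the contents causally depend on the choice, so even a small credence flips the CDT verdict:

```python
BIG, SMALL = 1_000_000, 1_000  # contents of box B (if filled) and box A

def cdt_expected_value(choice, p_cheat, p_full_if_fixed=0.5):
    """CDT expected value, counting causal consequences only.
    With probability p_cheat the box is a trick box whose contents are set
    AFTER the choice (full iff the agent one-boxed); otherwise the contents
    are already fixed and independent of the choice."""
    if choice == "one_box":
        ev_cheat = BIG                    # trick box fills itself for one-boxers
        ev_fixed = p_full_if_fixed * BIG  # fixed box: take whatever is there
    else:
        ev_cheat = SMALL                           # trick box empties itself
        ev_fixed = p_full_if_fixed * BIG + SMALL   # fixed box: keep both
    return p_cheat * ev_cheat + (1 - p_cheat) * ev_fixed

for p in (0.0, 0.001, 0.01, 0.5):
    one = cdt_expected_value("one_box", p)
    two = cdt_expected_value("two_box", p)
    best = "one_box" if one > two else "two_box"
    print(f"p_cheat={p:<6}: EV(one)=${one:>11,.0f}  EV(two)=${two:>11,.0f}  -> {best}")
```

With these payoffs the two expected values cross at exactly p_cheat = 0.001, so a CDT agent needs only about a one-in-a-thousand credence in the trick box before it one-boxes.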