Suppose my decision algorithm for the “both boxes are transparent” case is to take only box B if and only if it is empty, and to take both boxes if and only if box B has a million dollars in it. How does Omega respond? No matter how it handles box B, its implied prediction will be wrong.
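For concreteness, here is a minimal sketch in Python of that decision rule (the names are mine and purely illustrative): it enumerates Omega’s two possible predictions, assumes Omega fills box B exactly when it predicts one-boxing, and checks whether either prediction comes out right.

```python
def agent_choice(box_b_full: bool) -> str:
    """Take only box B iff it is empty; take both boxes iff it holds the million."""
    return "two-box" if box_b_full else "one-box"

for prediction in ("one-box", "two-box"):
    # Omega fills box B only when it predicts one-boxing.
    box_b_full = (prediction == "one-box")
    choice = agent_choice(box_b_full)
    print(f"Omega predicts {prediction}; box B full: {box_b_full}; "
          f"agent chooses {choice}; prediction correct: {choice == prediction}")
```

Running this prints “prediction correct: False” for both branches, which is just the diagonalization stated in words above.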
Death by lightning.
I typically include disclaimers like the one above in a footnote, or in a more precisely targeted problem specification, so as to rule out any avoid-the-question technicalities. The premise is not that Omega is an idiot or a sloppy game-designer.
Come to think of it, I could implement the second algorithm (and maybe the first) if a million dollars weighs enough compared to the boxes. Suppose my decision algorithm outputs: “Grab box B and test its weight, and maybe shake it a bit. If it clearly has a million dollars in it, take only box B. Otherwise, take both boxes.” If that’s my algorithm, then I don’t think the problem actually tells us what Omega predicts, and thus what outcome I’m getting.
You took box B. Putting it down again doesn’t help you. Finding ways to be cleverer than Omega is not a winning solution to Newcomblike problems.