But we can still break it in similar ways. Pre-commit to flipping a coin (or consulting some other random variable) to make your choice, and Omega can’t be a perfect predictor, which breaks the specification of the problem.
The premise of the thought experiment is that Omega has come to you and said, “I have two boxes here, and know whether you are going to open one box or two boxes, and thus have filled the boxes accordingly”.
If Omega knows enough to predict whether you’ll one-box or two-box, then Omega knows enough to predict whether you’re going to flip a coin, do a dance, kill yourself, or otherwise break that premise. Since the frame story is that the premise holds, then clearly Omega has predicted that you will either one-box or two-box.
Therefore, this Omega doesn’t play this game with people who do something silly instead of one-boxing or two-boxing. Maybe it just ignores those people. Maybe it plays another game. But the point is, if we have the narrative power to stipulate an Omega that plays the “one box or two” game accurately, then we have the narrative power to stipulate an Omega that doesn’t bother playing it with people who are going to break the premise of the thought experiment.
In programmer-speak, we would say that Omega’s behavior is undefined in these circumstances, and it is legal for Omega to make demons fly out of your nose in response to such cleverness.
Flipping a coin IS one-boxing or two-boxing! It’s just not doing it PREDICTABLY.
ಠ_ಠ
EDIT: Okay, I’ll engage.
Either Omega has perfect predictive power over minds AND coins, or it doesn’t.
If it has perfect predictive power over minds AND coins, then it knows which way the flip will go, and what you’re really saying is “give me a 50/50 gamble with an expected payoff of $500,500”, instead of $1,000,000 OR $1,000 - in which case you are not a rational actor and Newcomb’s Omega has no reason to want to play the game with you.
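The arithmetic behind that $500,500 figure, as a quick sketch (payoffs are the standard ones from the thought experiment: $1,000,000 for a predicted one-boxer, $1,000 for a predicted two-boxer):

```python
# Expected payoff of pre-committing to a fair coin flip, assuming Omega
# predicts the flip perfectly and fills the boxes accordingly:
# - coin says one-box  -> Omega predicted one-boxing  -> you get $1,000,000
# - coin says two-box  -> Omega predicted two-boxing  -> you get $1,000
one_box_payoff = 1_000_000
two_box_payoff = 1_000
expected = 0.5 * one_box_payoff + 0.5 * two_box_payoff
print(expected)  # 500500.0
```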
If it only has predictive power over minds, then neither it nor you know which way the flip will go, and the premise is broken. Since you accepted the premise when you said “if Omega shows up, I would...”, then you must not be the sort of person who would pre-commit to an unpredictable coinflip, and you’re just trying to signal cleverness by breaking the thought experiment on a bogus technicality.
Please don’t do that.
It’s not breaking the thought experiment on a “bogus technicality”; it’s pointing out that the thought experiment is only coherent if we make some pretty significant assumptions about how people make decisions. The more noisy we believe human decision-making is, the less perfect Omega can be.
The paradox still raises the same point for decision algorithms, but the coin flip underscores that the problem can be ill-defined for decision algorithms that incorporate noisy inputs.
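One way to see the noise point is a toy simulation (the agent model here is my own illustration, not anything from the thread): give the agent a stable disposition plus a noise term that sometimes flips its choice, and even a predictor that reads the disposition perfectly has its accuracy capped by the noise.

```python
import random

def agent_decision(disposition_one_box: bool, noise: float) -> bool:
    """Agent follows its disposition, but with probability `noise`
    does the opposite. Returns True for one-boxing."""
    if random.random() < noise:
        return not disposition_one_box
    return disposition_one_box

def predictor_accuracy(noise: float, trials: int = 100_000) -> float:
    """A predictor that knows the disposition exactly can do no better
    than guess it; noise caps its accuracy at 1 - noise."""
    hits = 0
    for _ in range(trials):
        disposition = random.choice([True, False])
        prediction = disposition  # predictor reads the disposition perfectly
        actual = agent_decision(disposition, noise)
        hits += (prediction == actual)
    return hits / trials

for noise in (0.0, 0.1, 0.5):
    print(noise, round(predictor_accuracy(noise), 2))
```

With zero noise the predictor is perfect; at 50% noise it is no better than chance, which is exactly the regime where “Omega has filled the boxes accordingly” stops being well-defined.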