If the game is rigged against you, so what? Take both boxes. You cannot lose, and there’s a small chance the conman erred.
What helps me when I get stuck in this loop (the loop isn’t incorrect exactly, it’s just non-productive) is to meditate on how the problem assumes that, for all my complexity, I’m still a deterministic machine. Omega can read my source code and know what I’m going to pick. If I end up picking both boxes, he knew that before I did, and I’ll end up with less money. If I can convince myself—somehow—to pick just the one box, then Omega will have seen that coming too and will reward me with the bonus. So the question becomes, can your source code output the decision to one-box?
The answer in humans is ‘yes’ (any human can learn to output 1-box), but it depends sensitively upon how much time the human has to think about it, to what extent they’ve been exposed to the problem before, and what arguments they’ve heard. Given all these parameters, Omega can deduce what they will decide.
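If it helps to make the ‘source code’ picture concrete, here is a toy sketch in Python (the function names and the payoff helper are my own illustration, not part of the problem statement). It just treats Omega as running the very same decision function you will later run, so the contents of box B are fixed by your algorithm before you ‘choose’:

    # Toy model: Omega predicts by running your decision function.
    # All names and numbers here are illustrative only.

    def one_boxer():
        return 'one-box'

    def two_boxer():
        return 'two-box'

    def payoff(agent):
        prediction = agent()            # Omega 'reads your source code' and runs it
        box_b = 1_000_000 if prediction == 'one-box' else 0   # boxes are filled before you act
        choice = agent()                # now you make your actual choice
        return box_b if choice == 'one-box' else box_b + 1_000

    print(payoff(one_boxer))   # 1000000
    print(payoff(two_boxer))   # 1000

The only way for the two-boxing branch to come out ahead would be for agent() to return something different on the two calls, which is exactly what the determinism assumption rules out.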
Am I at all right in assuming that 1-boxing is heavily favored in this community?
These factors have come together (time + exposure to the right arguments, etc.) on Less Wrong so that people who hang out at Less Wrong have been conditioned to 1-box. (And are thus conditioned to win in this dilemma.)
I agree with everything you say in this comment, and still find 2-boxing rational.
The reason still seems to be: you can consistently win without being rational.
By rational, I think you mean logical. (We tend to define ‘rational’ as ‘winning’ around here.*)
… and—given a certain set of assumptions—it is absolutely logical that (a) Omega has already made his prediction, (b) the stuff is already in the boxes, and (c) you can only maximize your payoff by choosing both boxes. (This is what I meant when I said this line of reasoning isn’t incorrect; it’s just unproductive in finding the solution to this dilemma.)
But consider what other assumptions have already snuck into the logic above. We’re not familiar with outcomes that depend upon our decision algorithm, so we’re not used to optimizing over anything but the choice itself. The productive direction to think along is this one: unlike in a typical situation, the content of the boxes depends upon the algorithm that outputs your choice, and only indirectly upon the choice itself.
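To put rough numbers on that way of looking at it, here is a small sketch (again my own illustration; it assumes a predictor that is right with some probability p, whereas the problem as stated makes Omega perfect). Once the prediction tracks your algorithm, one-boxing wins in expectation even though, with the boxes already fixed, two-boxing looks pointwise better:

    # Expected payoffs when the prediction tracks your decision algorithm
    # with accuracy p (illustrative assumption: the same p for both agents).

    def expected_payoff(choice, p=0.99):
        if choice == 'one-box':
            return p * 1_000_000        # box B is full iff Omega predicted one-boxing
        else:
            return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

    print(expected_payoff('one-box'))   # 990000.0
    print(expected_payoff('two-box'))   # 11000.0

At p = 1, the problem as stated, the comparison is simply $1,000,000 against $1,000.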
You’re halfway to the solution of this problem if you can see both ways of thinking about it as reasonable. You’ll feel some frustration that you can alternate between them, like flip-flopping between different interpretations of an optical illusion, even though they’re contradictory. Then the second half of the solution is to notice that which way you think about the problem is itself a willful choice: make the choice that results in the win. That is the rational (and logical) thing to do.
Let me know if you don’t agree with the part where you’re supposed to see both ways of thinking about the problem as reasonable.
* But the distinction doesn’t really matter because we haven’t found any cases where rational and logical aren’t the same thing.
This is why I find it incomprehensible that anyone can really be mystified by the one-boxer’s position. I want to say “Look, I’ve got a million dollars! You’ve got a thousand dollars! And you have to admit that you could have seen this coming all along. Now tell me who had the right decision procedure?”
My point of view is that the winning thing to do here and the logical thing to do are the same.
If you want to understand my point of view or if you want me to understand your point of view, you need to tell me where you think logical and winning diverge. Then I tell you why I think they don’t, etc.
You’ve mentioned ‘backwards causality’ which isn’t assumed in our one-box solution to Newcomb. How comfortable are you with the assumption of determinism? (If you’re not, how do you reconcile that with Omega being a perfect predictor?)
You’ve mentioned ‘backwards causality’ which isn’t assumed in our one-box solution to Newcomb.
Only to rule it out as a solution. No problem here.
How comfortable are you with the assumption of determinism?
In general, very.
Concerning Newcomb, I don’t think it’s essential, and as far as I recall, it isn’t mentioned in the original problem.
you need to tell me where you think logical and winning diverge
I’ll try again: I think you can show with simple counterexamples that winning is neither necessary nor sufficient for being logical (your term for my rational, if I understand you correctly).
Here we go: it’s not necessary, because you can be unlucky. Your strategy might be the best one, but you can still lose once luck is involved.
It’s not sufficient, because you can be lucky. You can win a game even if you’re not perfectly rational.
1-boxing seems to be a variant of the second case: instead of (bad) luck, the game is rigged.
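To make the luck point concrete with some made-up numbers (nothing here comes from the problem itself; it is just the usual expectation-versus-single-outcome distinction):

    # A strategy can be the better one and still lose this time (bad luck),
    # or the worse one and still win this time (good luck). Numbers invented.

    good_bet_ev = 0.9 * 100 + 0.1 * (-50)                  # 85.0, yet one time in ten you end up down 50
    lottery_ev  = 0.000001 * 1_000_000 + 0.999999 * (-2)   # about -1.0, yet the rare winner ends up far ahead

    print(good_bet_ev, lottery_ev)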
May I suggest again that defining rational as winning may be the problem?
(2nd reply)
I’m beginning to come around to your point of view. Omega rewards you for being illogical.
… It’s just logical to allow him to do so.
Around here, “rational” is taken to include in its definition “not losing predictably”. Could you explain what you mean by the term?