I only recently worked through this properly, and I’m a firm one-boxer. After a few discussions with two-boxers, I came to understand why: I consider myself predictable and deterministic. Two-boxers do not.
For me, the idea that Omega can predict my behaviour accurately is pretty much a no-brainer. I already think it possible to upload myself into digital form and make multiple copies (which are all simultaneously “me”), and running predictions in bulk using simulations seems perfectly reasonable. Two-boxers, on the other hand, think of consciousness and their sense of self as some mystical, magical thing that can’t be reliably predicted.
The reason I would pick only one box is roughly this: the more strongly I want to pick one box, and the more thoroughly I convince myself to pick only one box, the more likely it is that simulations of me will also pick one box.
Note that by reasoning this out in advance, prior to being presented with the actual decision, I have in all probability raised my odds of walking away with the million dollar box. I now have an established pattern of cached thoughts with a preference for selecting one box, which may improve the odds that simulated copies will also one-box.
This also points to a side effect: if Omega has a high accuracy rate even when people are caught flat-footed (without prior exposure to the problem), then my estimate of Omega’s predictive powers increases dramatically.
The high accuracy rate itself implies something, though I’m not quite sure what: for the rate to be extremely high, either people must be disinclined to choose completely at random, or humans must be so bad at generating genuine randomness that Omega can predict even their attempts to randomize.
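To put a rough number on that (my own back-of-the-envelope arithmetic, not anything from the problem statement): if some fraction of people decided by a genuinely fair coin flip, no predictor could beat the bound below.

```python
# Back-of-the-envelope bound: if a fraction r of subjects choose by a fair
# coin flip, even a predictor that is perfect on everyone else is right at
# most (1 - r) + 0.5 * r of the time overall.
for r in (0.0, 0.02, 0.2, 0.5):
    max_accuracy = (1 - r) + 0.5 * r
    print(f"fraction flipping coins: {r:.0%} -> best possible accuracy: {max_accuracy:.1%}")
```

So an observed accuracy of, say, 99% already means that at most about 2% of subjects can be choosing by genuinely fair randomization; everyone else is, in practice, predictable.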
I consider myself quite predictable and deterministic, and I’m a two-boxer.
Yes, it’s clear that the correct strategy in advance, if you thought you were going to encounter Newcomb’s problems, is to precommit to one-boxing (but as I mentioned in my comments, at least some two-boxers maintain a distinction between “ideal rational behavior” and “the behavior which, in reality, gives you the highest payoff”). Scott Aaronson goes even further: if people are running simulations of you, then you may have some anthropic uncertainty about whether you’re the original or a simulation, so deciding to one-box may in fact cause the simulation to one-box if you yourself are the simulation!
You can restore something of the original problem by asking what you should do if you were “dropped into” a Newcomb’s problem without having the chance to make precommitments.
For me, part of the reason I’m so quick to commit to one-boxing is the small improvement in outcome that two-boxing offers, as the problem is presented on the wiki.
The wiki lists $1,000 vs. $1,000,000.
If I were sitting across from Derren Brown or a similarly skilled street magician, I’d say there’s much more than a 1-in-1,000 chance that he’d correctly predict that I’d one-box.
If the problem were stated with a smaller difference, say $1,000 vs. $5,000, I might two-box, in part because a certain payoff is worth more to me than an uncertain one, even if the expected return on the gamble is marginally higher.
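To make that concrete, here’s a rough expected-value sketch (my own toy model; the function name, the accuracy values, and the treatment of the predictor’s accuracy as a single number are just for illustration and not from the original problem):

```python
# Toy expected-value comparison: `accuracy` is the assumed probability that
# the predictor calls my actual choice correctly; the payoff pairs are the
# wiki's numbers and the smaller hypothetical spread mentioned above.

def expected_values(small, large, accuracy):
    """Return (EV of one-boxing, EV of two-boxing)."""
    ev_one_box = accuracy * large                 # box B is full iff "one-box" was predicted
    ev_two_box = small + (1 - accuracy) * large   # box B is full only if the predictor erred
    return ev_one_box, ev_two_box

for small, large in [(1_000, 1_000_000), (1_000, 5_000)]:
    for accuracy in (0.55, 0.9, 0.999):
        one, two = expected_values(small, large, accuracy)
        print(f"{small:,} vs {large:,} at accuracy {accuracy}: "
              f"one-box EV {one:,.0f}, two-box EV {two:,.0f}")
```

Under this toy model the break-even accuracy is (1 + small/large) / 2: about 50.05% for the wiki’s payoffs, but 60% for the $1,000 vs. $5,000 variant, which is roughly why the size of the spread changes my answer.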
I don’t see how precommitment is relevant, whether you are “real” or a simulation. Omega knows what you will do even if you don’t, so why bother precommitting?
Precommitment isn’t relevant to Omega, but it is relevant to the person making the decision. It’s basically a way of ‘agreeing to cooperate’ with possible simulations of yourself, in an environment where there’s perhaps not as much on the line and it’s easier to think rationally about the problem.
What I have never understood is why precommitment to a specific solution is necessary, either as a way of ‘agreeing to cooperate’ with possible simulations (supposing I posit that simulations are involved), or more generally as a way of ensuring that I behave as an instantiation of the decision procedure that maximizes expected value.
There are three relevant propositions:
A: Predictor predicts I one-box iff I one-box
B: Predictor predicts I two-box iff I two-box
C: Predictor puts more money in box B than box A iff Predictor predicts I one-box
If I am confident that (A and B and C) then my highest-EV strategy is to one-box. If I am the sort of agent who reliably picks the highest-EV strategy (which around here we call a “rational” agent), then I one-box.
If A and C are true, then Predictor puts more money in box B.
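For what it’s worth, here is that argument rendered literally as a little sketch (my own illustration; the dollar amounts are the usual ones and are not part of the propositions):

```python
# Under propositions A and C, the prediction matches my actual choice and
# box B is full exactly when one-boxing is predicted.  Score each action
# and take the best; no precommitment appears anywhere.

BOX_A = 1_000        # the always-available amount
BOX_B = 1_000_000    # placed in box B iff one-boxing is predicted

def payoff(action: str) -> int:
    predicted = action                                  # propositions A and B
    box_b = BOX_B if predicted == "one-box" else 0      # proposition C
    return box_b if action == "one-box" else BOX_A + box_b

best = max(["one-box", "two-box"], key=payoff)
print(best, payoff(best))   # -> one-box 1000000
```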
None of that requires any precommitment to figure out. What does precommitment have to do with any of this?
I don’t believe that anyone in this chain said that it was ‘necessary’, and for a strictly rational agent, I don’t believe it is.
However, I am a person, and not strictly rational. My mental architecture relies on caching and precomputed decisions, and decisions made under stress may not be the same as those made in contemplative peace and quiet. Precomputation and precommitment are a way of improving the odds that I will make a particular decision under stress.
I agree that humans aren’t strictly rational, and that decisions under stress are less likely to be rational, and that precommitted/rehearsed answers are more likely to arise under stress.
Isn’t that a fully general counterargument against doing anything whatsoever in the absence of free will?
Do you mean “a fully general argument against precommitting when dealing with perfect predictors”? I don’t see how free will is relevant here, however it is defined.
Person A: I’m about to fight Omega. I hear he’s a perfect predictor, but I think if I bulk up enough, I can overwhelm him with strength anyway. He’s actually quite weak.
Person B: I don’t see how strength is relevant. Omega knows what you will do even if you don’t, so why bother getting stronger?
Feel free to make your point more explicit. What does this example mean to you?
Saying that Omega already knows what you will do doesn’t solve the problem of figuring out what to do. If you don’t precommit to one-boxing, your simulation might not one-box, and that would be bad. If you precommit to one-boxing and honor that precommitment, your simulation will one-box, and that is better.
I understand that precommitment can be a good thing in some situations, but I doubt that Newcomb is one of them.
There is no way my simulation will do anything different from me if the predictor is perfect. I don’t need to precommit to one-box. I can just one-box when the time comes. There is no difference in the outcome.
I don’t understand how that’s different from precommitting to one-box.
To me the difference is between saying that one-boxing maximizes utility and promising to one-box. In the first case, no decision has been made, or is even guaranteed to be made, when the time comes. I might even be thinking that I’d two-box, but change my mind at the last instant.
For the record, when I first really considered the problem, my reasoning was still very similar. It ran approximately as follows:
“The more strongly I am able to convince myself to one-box, the higher the probability that any simulations of me would also have one-boxed. Since I am currently able to strongly convince myself to one-box without prior exposure to the problem, it is extremely likely that my simulations would also one-box, therefore it is in our best interests to one-box.”
Note that I did not run estimated probabilities and tradeoffs based on the size of the reward, the error probability of Omega, and my confidence in my ability to one-box reliably. I am certain that there are combinations of those parameters which would make two-boxing better than one-boxing, but I did not do the math.
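For anyone who does want the math, here is a rough sketch of what such a calculation might look like under a simple toy model (the parameter names, and the blending of “resolve to one-box” with a follow-through probability, are my own simplification):

```python
# Rough parameterization of the tradeoffs mentioned above.  `omega_error` is
# the chance Omega mispredicts my final action; `follow_through` is the
# chance I actually one-box after resolving to do so.

def ev_resolve_to_one_box(small, large, omega_error, follow_through):
    ev_if_i_one_box = (1 - omega_error) * large       # box B full unless Omega erred
    ev_if_i_two_box = small + omega_error * large     # box B full only if Omega erred
    return follow_through * ev_if_i_one_box + (1 - follow_through) * ev_if_i_two_box

def ev_always_two_box(small, large, omega_error):
    return small + omega_error * large

small, large = 1_000, 1_000_000
for omega_error in (0.001, 0.3, 0.5):
    for follow_through in (1.0, 0.8):
        one = ev_resolve_to_one_box(small, large, omega_error, follow_through)
        two = ev_always_two_box(small, large, omega_error)
        print(f"Omega error {omega_error}, follow-through {follow_through}: "
              f"resolve-to-one-box EV {one:,.0f} vs always-two-box EV {two:,.0f}")
```

In this toy model two-boxing only pulls ahead once Omega’s error rate exceeds (large - small) / (2 * large), which is just under 50% for these payoffs; with a much smaller spread, or a much less reliable Omega, the combinations I alluded to do exist.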