the more likely it is that simulations of me will also pick one box.
Yes, it’s clear that the correct strategy in advance, if you thought you were going to encounter Newcomb’s problems, is to precommit to one-boxing (but as I mentioned in my comments, at least some two-boxers maintain a distinction between “ideal rational behavior” and “the behavior which, in reality, gives you the highest payoff”). Scott Aaronson goes even further: if people are running simulations of you, then you may have some anthropic uncertainty about whether you’re the original or a simulation, so deciding to one-box may in fact cause the simulation to one-box if you yourself are the simulation!
You can restore something of the original problem by asking what you should do if you were “dropped into” a Newcomb’s problem without having the chance to make precommitments.
For me, I think part of the reason I’m so very quick to commit to one-boxing is the small marginal improvement in outcomes from two-boxing as the problem was presented on the wiki.
The wiki lists 1,000 vs. 1,000,000.
If I were sitting across from Derren Brown or a similarly skilled street magician, I’d say there’s much more than a one-in-a-thousand chance that he’d predict that I’d one-box.
If the problem were stated with a smaller difference, say 1,000 vs. 5,000, I might two-box, in part because a certain payoff is worth more to me than an uncertain one, even if the expected return on the gamble is marginally higher.
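For what it’s worth, here’s a minimal sketch of that trade-off, assuming the standard setup (the small amount is always available, the large amount is in the opaque box iff one-boxing was predicted) and a predictor who is right with probability p; the threshold formula and the Python framing are my own, not from the wiki:

    # One-boxing has the higher expected value whenever the predictor's
    # accuracy p exceeds (small + big) / (2 * big):
    #   EV(one-box) = p * big
    #   EV(two-box) = small + (1 - p) * big
    def break_even_accuracy(small, big):
        return (small + big) / (2 * big)

    print(break_even_accuracy(1_000, 1_000_000))  # 0.5005
    print(break_even_accuracy(1_000, 5_000))      # 0.6

On those numbers, the predictor only needs to beat a coin flip by a sliver at 1,000 vs. 1,000,000, but needs at least 60% accuracy at 1,000 vs. 5,000.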
I don’t see how precommitment is relevant, whether you are “real” or a simulation. Omega knows what you will do even if you don’t, so why bother precommitting?
Precommitment isn’t relevant to Omega, but it is relevant to the person making the decision. It’s basically a way of ‘agreeing to cooperate’ with possible simulations of yourself, in an environment where there’s perhaps not as much on the line and it’s easier to think rationally about the problem.
What I have never understood is why precommitment to a specific solution is necessary, either as a way of ‘agreeing to cooperate’ with possible simulations (supposing I posit simulations being involved), or more generally as a way of ensuring that I behave as an instantiation of the decision procedure that maximizes expected value.
There are three relevant propositions:
A: Predictor predicts I one-box iff I one-box
B: Predictor predicts I two-box iff I two-box
C: Predictor puts more money in box B than box A iff Predictor predicts I one-box
If I am confident that (A and B and C), then my highest expected-value (EV) strategy is to one-box. If I am the sort of agent who reliably picks the highest-EV strategy (which around here we call a “rational” agent), then I one-box.
If A and C are true, then Predictor puts more money in box B.
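To make that concrete, here is a minimal sketch of the comparison, treating the prediction as always matching my choice (A and B) and borrowing the 1,000 / 1,000,000 payoffs mentioned above as an assumption:

    # Under A-C the prediction always matches the actual choice, so each
    # strategy has exactly one possible outcome. The payoff sizes are assumed.
    SMALL, BIG = 1_000, 1_000_000

    def payoff(choice):
        predicted = choice                               # A and B
        box_b = BIG if predicted == "one-box" else 0     # C (box A holds SMALL)
        return SMALL + box_b if choice == "two-box" else box_b

    print(payoff("one-box"))   # 1000000
    print(payoff("two-box"))   # 1000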
None of that requires any precommitment to figure out. What does precommitment have to do with any of this?
I don’t believe that anyone in this chain said that it was ‘necessary’, and for a strictly rational agent, I don’t believe it is.
However, I am a person, and am not strictly rational. My mental architecture relies on caching and precomputed decisions, and decisions made under stress may not be the same as those made in contemplative peace and quiet. Precomputation and precommitment are ways of improving the odds that I will make a particular decision under stress.
I agree that humans aren’t strictly rational, and that decisions under stress are less likely to be rational, and that precommitted/rehearsed answers are more likely to arise under stress.
Isn’t that a fully general counterargument against doing anything whatsoever in the absence of free will?
Do you mean “a fully general argument against precommitting when dealing with perfect predictors”? I don’t see how free will is relevant here, however it is defined.
Person A: I’m about to fight Omega. I hear he’s a perfect predictor, but I think if I bulk up enough, I can overwhelm him with strength anyway. He’s actually quite weak.
Person B: I don’t see how strength is relevant. Omega knows what you will do even if you don’t, so why bother getting stronger?
Feel free to make your point more explicit. What does this example mean to you?
Saying that Omega already knows what you will do doesn’t solve the problem of figuring out what to do. If you don’t precommit to one-boxing, your simulation might not one-box, and that would be bad. If you precommit to one-boxing and honor that precommitment, your simulation will one-box, and that is better.
I understand that precommitment can be a good thing in some situations, but I doubt that Newcomb is one of them.
There is no way my simulation will do anything different from me if the predictor is perfect. I don’t need to precommit to one-box. I can just one-box when the time comes. There is no difference in the outcome.
I don’t understand how that’s different from precommitting to one-box.
To me, the difference is between saying that one-boxing maximizes utility and promising to one-box. In the first case there is no decision made, or even guaranteed to be made, when the time comes. I might even be thinking that I’d two-box, but change my mind at the last instant.
For the record, when I first really considered the problem, my reasoning was still very similar. It ran approximately as follows:
“The more strongly I am able to convince myself to one-box, the higher the probability that any simulations of me would also have one-boxed. Since I am currently able to strongly convince myself to one-box without prior exposure to the problem, it is extremely likely that my simulations would also one-box, therefore it is in our best interests to one-box.”
Note that I did not run estimated probabilities and tradeoffs based on the sizes of the rewards, the error probability of Omega, and my confidence in my ability to one-box reliably. I am certain that there are combinations of those parameters which would make two-boxing better than one-boxing, but I did not do the math.
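For what it’s worth, here is a rough sketch of that math under one simple model of my own (not the commenter’s): Omega predicts the actual choice with error probability e, and an intention to one-box is carried out with probability c.

    # Expected value of intending to one-box (carried out with probability c)
    # versus simply two-boxing, when Omega errs with probability e.
    def ev_intend_one_box(small, big, e, c):
        ev_one_box = (1 - e) * big       # box B filled iff Omega predicted correctly
        ev_two_box = small + e * big     # box B filled only if Omega erred
        return c * ev_one_box + (1 - c) * ev_two_box

    def ev_always_two_box(small, big, e):
        return small + e * big

    # A few parameter combinations; two-boxing comes out ahead when Omega is
    # close to chance and/or the payoff gap is small.
    for small, big, e, c in [(1_000, 1_000_000, 0.01,   0.9),
                             (1_000, 1_000_000, 0.4999, 0.9),
                             (1_000, 5_000,     0.30,   0.9),
                             (1_000, 5_000,     0.45,   0.9)]:
        better = ("one-box" if ev_intend_one_box(small, big, e, c)
                  > ev_always_two_box(small, big, e) else "two-box")
        print(small, big, e, c, better)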