Do you mean “a fully general argument against precommitting when dealing with perfect predictors”? I don’t see how free will is relevant here, however it is defined.
Person A: I’m about to fight Omega. I hear he’s a perfect predictor, but I think if I bulk up enough, I can overwhelm him with strength anyway. He’s actually quite weak.
Person B: I don’t see how strength is relevant. Omega knows what you will do even if you don’t, so why bother getting stronger?
Feel free to make your point more explicit. What does this example mean to you?
Saying that Omega already knows what you will do doesn’t solve the problem of figuring out what to do. If you don’t precommit to one-boxing, your simulation might not one-box, and that would be bad. If you precommit to one-boxing and honor that precommitment, your simulation will one-box, and that is better.
I understand that precommitment can be a good thing in some situations, but I doubt that Newcomb is one of them.
There is no way my simulation will do anything different from me if the predictor is perfect. I don’t need to precommit to one-box. I can just one-box when the time comes. There is no difference in the outcome.
I don’t understand how that’s different from precommitting to one-box.
To me the difference is between saying that one-boxing maximizes utility and promising to one-box. In the first case no decision has been made, or is even guaranteed to be made, when the time comes. I might even be thinking that I’d two-box, but change my mind at the last instant.
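For what it’s worth, the “no difference in the outcome” point can be made concrete with a small sketch. This is a minimal illustration only, assuming the standard $1,000,000 / $1,000 Newcomb payoffs (which aren’t stated anywhere in this thread): if the predictor is perfect, the prediction always matches the actual choice, so only two of the four cells of the payoff table are reachable.

```python
def payoff(action: str, prediction: str) -> int:
    """Standard Newcomb payoffs (an assumption here, not figures from the
    thread): the opaque box holds $1,000,000 only if Omega predicted
    one-boxing; the transparent box always holds $1,000."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque if action == "one-box" else opaque + transparent

# With a perfect predictor, the prediction always equals the action,
# so only the two "diagonal" outcomes can actually occur.
for action in ("one-box", "two-box"):
    prediction = action  # the simulation does whatever you actually do
    print(f"{action}: ${payoff(action, prediction):,}")
```

Running it prints one-box: $1,000,000 and two-box: $1,000, which both sides above seem to accept; the disagreement is only over whether ending up on the one-box row requires a promise made in advance or just one-boxing when the time comes.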