I think the idea is that even if Omega always predicted two-boxing, it still could be said to predict with 90% accuracy if 10% of the human population happened to be one-boxers. And yet you should two-box in that case. So basically, the non-deterministic version of Newcomb’s problem isn’t specified clearly enough.
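To put rough numbers on that (a minimal sketch; the 90% / 10% figures are just the ones in this comment):

```python
# A "predictor" that always says two-box is still right 90% of the time over a
# population in which 10% one-box, even though its prediction carries no
# information about any particular person's choice.
one_box_rate = 0.10
accuracy_over_population = 1 - one_box_rate   # 0.9
print(accuracy_over_population)
```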
I disagree. To be at all meaningful to the problem, the “90% accuracy” has to mean that, given all the information available to you, you assign a 90% probability to Omega correctly predicting your choice. This is quite different from correctly predicting the choices of 90% of the human population.
I don’t think this works in the example given, where Omega always predicts 2-boxing. We agree that the correct thing to do in that case is to 2-box. And if I’ve decided to 2-box then I can be > 90% confident that Omega will predict my personal actions correctly. But this still shouldn’t make me 1-box.
I’ve commented on Newcomb in previous threads… in my view it really does matter how Omega makes its predictions, and whether they are perfectly reliable or just very reliable.
Agreed for that case, but perfect reliability still isn't necessary (consider an Omega that is 99.99% accurate while only 10% of people one-box, for example: that level of accuracy can't come from always predicting two-boxing).
What matters is that your uncertainty in Omega's prediction is tied to your uncertainty in your actions. If you're 90% confident that Omega gets it right conditional on deciding to one-box, and 90% confident that Omega gets it right conditional on deciding to two-box, then you should one-box: 0.9 × $1M > $1K + 0.1 × $1M.
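Spelling that arithmetic out (a minimal sketch, assuming the standard payoffs: $1M in the opaque box iff Omega predicted one-boxing, $1K always in the transparent box):

```python
# Expected value of each choice when Omega is 90% accurate *conditional on*
# whichever choice you actually make (standard Newcomb payoffs assumed).
ACCURACY = 0.9
MILLION, THOUSAND = 1_000_000, 1_000

# One-boxing pays off only when Omega correctly predicted one-boxing.
ev_one_box = ACCURACY * MILLION                    # 900,000

# Two-boxing always collects the $1K, plus $1M when Omega wrongly predicted one-boxing.
ev_two_box = THOUSAND + (1 - ACCURACY) * MILLION   # 101,000

print(ev_one_box > ev_two_box)  # True: one-boxing has the higher expected value
```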
Far better explanation than mine, thanks!
Good point. I don’t think this is worth going into within this post, but I introduced a weasel word to signify that the circumstances of a 90% Predictor do matter.
Very nice, thanks!
Oh. That’s very nice, thanks!