I don’t think this works in the example given, where Omega always predicts 2-boxing. We agree that the correct thing to do in that case is to 2-box. And if I’ve decided to 2-box then I can be > 90% confident that Omega will predict my personal actions correctly. But this still shouldn’t make me 1-box.
I’ve commented on Newcomb in previous threads… in my view it really does matter how Omega makes its predictions, and whether they are perfectly reliable or just very reliable.
Agreed for that case, but perfect reliability still isn’t necessary (consider, for example, Omega being 99.99% accurate in a population with a 10% base rate of one-boxers).
What matters is that your uncertainty in Omega’s prediction is tied to your uncertainty in your own actions. If you’re 90% confident that Omega gets it right conditional on deciding to one-box, and 90% confident that Omega gets it right conditional on deciding to two-box, then you should one-box: 0.9 × $1M > $1K + 0.1 × $1M.