Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations.
We could, but I’m not going to think about those unless the problem is stated a bit more precisely, so we don’t get caught up in arguing over the exact parameters again. The details on how exactly Omega determines what to do are very important. I’ve actually said elsewhere that if you didn’t know how Omega did it, you should try to put probabilities on different possible methods, and do an EV calculation based on that; is there any way that can fail badly?
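The EV calculation I have in mind can be sketched like this. The payoff amounts ($1,000,000 and $1,000) are the standard Newcomb values, and the credences and accuracies in the hypothesis list are purely illustrative assumptions, not part of any stated problem:

```python
def expected_values(p_correct):
    """Expected payoff of each strategy if Omega predicts your
    choice correctly with probability p_correct (standard payoffs:
    $1,000,000 in the opaque box, $1,000 in the transparent one)."""
    ev_one_box = p_correct * 1_000_000
    ev_two_box = (1 - p_correct) * 1_000_000 + 1_000
    return ev_one_box, ev_two_box

# Averaging over hypotheses about Omega's method: each hypothesis
# gets a credence and an implied prediction accuracy. These numbers
# are made up for illustration.
hypotheses = [
    (0.7, 0.99),  # (credence, accuracy) -- e.g. "Omega simulates you"
    (0.3, 0.50),  # e.g. "Omega is guessing at chance"
]
ev_one = sum(c * expected_values(a)[0] for c, a in hypotheses)
ev_two = sum(c * expected_values(a)[1] for c, a in hypotheses)
print(ev_one, ev_two)
```

One place this can fail badly is if your probability distribution over methods is itself badly miscalibrated, or if Omega's method is correlated with the very decision procedure you use to do the calculation.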
(Also, if there were any chance of Omega existing and taking cues from our public announcements, the obvious rational thing to do would be to stop talking about it in public.)
I agree with you that this case is no different from the original Newcomb; I think most comments here were attempting to find a difference, but there isn’t one.
I think people may have been trying to solve the case mentioned in the OP, where Omega's accuracy is less than 100%, which does make a difference.