You are right that 100% correlation requires an unrealistic situation. This is also true of the original Newcomb problem: we don’t actually expect anything in the real world to be able to predict our actions with 100% accuracy. Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations.
The genetic Newcomb requires an even more unrealistic scenario, since in the real world genes do not predict actions with anything close to 100% certainty. I agree with you that this case is no different from the original Newcomb; I think most comments here were attempting to find a difference, but there isn’t one.
Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations.
We could, but I’m not going to think about such situations unless the problem is stated a bit more precisely, so we don’t get caught up in arguing over the exact parameters again. The details of exactly how Omega determines what to do matter a great deal. I’ve actually said elsewhere that if you don’t know how Omega does it, you should put probabilities on the different possible methods and do an EV calculation based on that; is there any way that can fail badly?
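To make the "probabilities on methods, then EV" idea concrete, here is a minimal sketch (Python; the credences, payoffs, and conditional probabilities are all made up for illustration, and the way the calculation conditions on your own action is itself the kind of assumption that can fail badly):

```python
# Rough sketch, with made-up numbers: put a credence on each hypothesis
# about how Omega decides, compute the expected payoff of each action
# under each hypothesis, and weight by credence. The conditional
# probabilities below are assumptions for illustration only.

BIG = 1_000_000   # opaque box if Omega predicted one-boxing
SMALL = 1_000     # transparent box

def expected_value(action, p_million):
    """Expected payoff of `action` given P(opaque box contains the million)."""
    if action == "one-box":
        return p_million * BIG
    return p_million * BIG + SMALL  # two-box: you also take the small box

# (credence in hypothesis, P(million | you one-box), P(million | you two-box))
hypotheses = [
    (0.5, 0.99, 0.01),  # Omega simulates you closely
    (0.3, 0.60, 0.40),  # Omega reads noisy cues (genes, public announcements)
    (0.2, 0.50, 0.50),  # Omega is basically guessing
]

for action, idx in (("one-box", 1), ("two-box", 2)):
    ev = sum(h[0] * expected_value(action, h[idx]) for h in hypotheses)
    print(f"EV({action}) = {ev:,.0f}")
```

With these particular numbers one-boxing comes out ahead, but the answer swings entirely on the credences and on whether conditioning on your own choice is legitimate, which is the contested point.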
(Also, if there were any chance of Omega existing and taking cues from our public announcements, the obvious rational thing to do would be to stop talking about it in public.)
I agree with you that this case is no different from the original Newcomb; I think most comments here were attempting to find a difference, but there isn’t one.
I think people may have been trying to solve the case mentioned in the OP, where the correlation is less than 100%, and that case does have a difference.