In Newcomb's problem, before knowing the box contents, you should one-box. If you know the contents, you should two-box (or am I wrong?).
In the Prisoner's Dilemma, before knowing the opponent's choice, you should cooperate. After knowing the opponent's choice, you should defect (or am I wrong?).
If I'm right in the above two cases, doesn't the Omega scenario look more like the "after knowing" situations? If so, then I must be wrong about those two cases...
I want to be someone who does X in situation Y, but when Y&Z happens, I don't necessarily want to do X. Here, Z is the extra information: that I lost (in the Omega scenario), that the opponent has already chosen (in the Prisoner's Dilemma), or that both boxes have money in them (in Newcomb's problem). What am I missing?
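A minimal sketch of the expected-value comparison behind the Newcomb claims, assuming the standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent box) and a predictor that is correct with probability p; the function names and specific values are illustrative, not from the thread.

```python
BIG = 1_000_000    # contents of the opaque box if one-boxing was predicted
SMALL = 1_000      # contents of the transparent box

def ev_one_box(p: float) -> float:
    # With probability p the predictor foresaw one-boxing and filled the opaque box.
    return p * BIG

def ev_two_box(p: float) -> float:
    # With probability p the predictor foresaw two-boxing and left the opaque box empty.
    return p * SMALL + (1 - p) * (BIG + SMALL)

for p in (0.5, 0.51, 0.9, 0.99):
    print(f"p={p:.2f}  one-box EV={ev_one_box(p):>12,.0f}  two-box EV={ev_two_box(p):>12,.0f}")

# Once the contents are fixed, two-boxing is worth exactly SMALL more than
# one-boxing whatever the opaque box holds; that is the dominance argument
# behind "if you know the contents, you should two-box".
```

Before the contents are known, one-boxing has the higher expected value for any p above roughly 0.5; after they are known, the dominance argument pulls the other way, which is exactly the tension the question describes.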
No: in the Prisoner's Dilemma, you should always defect (presuming the payoff matrix represents utility), unless you can somehow collectively pre-commit to cooperating, or the game is iterated. The distinction you're thinking of only applies when reverse causation comes into play.
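A minimal sketch of the dominance argument for defection in a one-shot Prisoner's Dilemma, using an assumed payoff matrix (T=5, R=3, P=1, S=0); the numbers are illustrative, and any matrix with T > R > P > S gives the same conclusion.

```python
# payoff[my_move][their_move] = my utility
payoff = {
    "C": {"C": 3, "D": 0},   # I cooperate: reward if they cooperate, sucker's payoff if they defect
    "D": {"C": 5, "D": 1},   # I defect: temptation if they cooperate, punishment if they defect
}

for their_move in ("C", "D"):
    best_reply = max(("C", "D"), key=lambda my_move: payoff[my_move][their_move])
    print(f"If opponent plays {their_move}: best reply is {best_reply} "
          f"(C={payoff['C'][their_move]}, D={payoff['D'][their_move]})")

# Defection is the better reply to either opponent move, so absent
# pre-commitment, iteration, or some dependence between the players'
# choices, a utility maximiser defects.
```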