I convinced myself to one-box in Newcomb by treating it as if the contents of the boxes magically change when I make my decision. Simply draw the decision tree and maximize u-value.
I convinced myself to cooperate in the Prisoner’s Dilemma by treating it as if whatever decision I make, the other person will magically make it too. Simply draw the decision tree and maximize u-value.
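For concreteness, here is a minimal sketch of that "as if" calculation for Newcomb. The payoff amounts and the predictor accuracy are illustrative assumptions, not part of the discussion; the point is only that once you pretend the box contents track your choice, the tree favors one-boxing.

```python
# Expected u-value of each action, *pretending* the box contents track my choice.
# The numbers below (payoffs, predictor accuracy) are illustrative assumptions.

ACCURACY = 0.99            # assumed chance the predictor got my decision right
SMALL, BIG = 1_000, 1_000_000

def u_value(action: str) -> float:
    """Expected payoff if the prediction is treated as correlated with my decision."""
    if action == "one-box":
        # Predictor most likely foresaw one-boxing and filled the opaque box.
        return ACCURACY * BIG + (1 - ACCURACY) * 0
    # two-box: predictor most likely foresaw two-boxing and left the opaque box empty.
    return ACCURACY * SMALL + (1 - ACCURACY) * (SMALL + BIG)

for action in ("one-box", "two-box"):
    print(action, u_value(action))   # one-box ≈ 990000, two-box ≈ 11000
```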
It seems that Omega is different because there I actually have the information, whereas in the others I don’t.
For example, in Newcomb, if we could see the contents of both boxes, then I should two-box, no? In the Prisoner’s Dilemma, if my opponent decides before me and I observe the decision, then I should defect, no?
I suspect this means that my thought process in Newcomb and the Prisoner’s Dilemma is incorrect, and that there is a better way to think about them that makes them more like Omega. Am I correct? Does this make sense?
Yes, the objective in designing this puzzle was to construct an example where, according to my understanding of the correct way to make decisions, the correct decision looks like losing. In other cases you may say that you close your eyes, pretend that your decision determines the past or other agents’ actions, and just make the decision that gives the best outcome. In this case, you choose the worst outcome. The argument is that on reflection it still looks like the best outcome, and you are given an opportunity to think about what the correct perspective is from which it’s the best outcome. It binds the state of reality to your subjective perspective, whereas in many other thought experiments you may dispense with this connection and focus solely on the reality, without paying any special attention to the decision-maker.
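A rough sketch of that "best outcome on reflection" arithmetic, assuming the standard counterfactual-mugging numbers (which are not stated above): a fair coin, Omega asks you for $100 when it lands tails, and would have paid you $10,000 on heads if it predicted you would pay.

```python
# Evaluated *before* the coin flip (the perspective the reply argues is correct),
# a policy of paying on tails has higher expected value than refusing.
# Numbers are the conventional counterfactual-mugging ones, assumed for illustration.

P_HEADS = 0.5
REWARD_IF_HEADS = 10_000   # paid only if Omega predicts you would pay on tails
COST_IF_TAILS = 100

def expected_value(policy_pays_on_tails: bool) -> float:
    """Expected payoff of a policy, judged from before the coin is flipped."""
    if policy_pays_on_tails:
        return P_HEADS * REWARD_IF_HEADS - (1 - P_HEADS) * COST_IF_TAILS
    return 0.0   # refuse: no reward on heads, no cost on tails

print(expected_value(True), expected_value(False))   # 4950.0 vs 0.0
```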
In Newcomb, before knowing the box contents, you should one-box. If you know the contents, you should two-box (or am I wrong?).
In the Prisoner’s Dilemma, before knowing the opponent’s choice, you should cooperate. After knowing the opponent’s choice, you should defect (or am I wrong?).
If I’m right in the above two cases, doesn’t Omega look more like the “after knowing” situations above? If so, then I must be wrong about the above two cases...
I want to be someone who in situation Y does X, but when Y&Z happens, I don’t necessarily want to do X. Here, Z is the extra information that I lost (in Omega), that the opponent has already chosen (in the Prisoner’s Dilemma), or that both boxes have money in them (in Newcomb). What am I missing?
No: in the Prisoner’s Dilemma, you should always defect (presuming the payoff matrix represents utility), unless you can somehow collectively pre-commit to cooperating, or the game is iterated. The distinction you’re thinking of only applies when reverse causation comes into play.
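A minimal sketch of that dominance claim, with an assumed (conventional) payoff matrix; the specific numbers don’t matter, only that defecting pays more whatever the other player does.

```python
# payoffs[(my_move, their_move)] = my payoff; numbers are illustrative assumptions,
# only their ordering matters: temptation > reward > punishment > sucker.
payoffs = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

# Whatever the opponent does, defecting pays strictly more than cooperating,
# so defect dominates cooperate when the payoffs are taken as utilities.
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)] > payoffs[("C", their_move)]
print("Defect dominates cooperate for this payoff matrix.")
```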