According to what rules? And anyway I have preferences for all kinds of impossible things. For example, I prefer cooperating with copies of myself, even though I know it would never happen, since we’d both accept the dominance reasoning and defect.
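For concreteness, here is a minimal sketch of that dominance reasoning in Python. The payoff values (T=5, R=3, P=1, S=0) are illustrative assumptions, not taken from the thread; any ordering T > R > P > S gives the same result.

```python
# Minimal sketch of the dominance argument in a one-shot Prisoner's Dilemma.
# The payoff numbers (T=5, R=3, P=1, S=0) are illustrative assumptions,
# not from the thread; any T > R > P > S yields the same conclusion.

PAYOFF = {  # (my_move, your_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Dominance: whatever the other player does, defecting pays strictly more.
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

# Yet if both players accept that reasoning, each gets 1 instead of the 3
# that mutual cooperation would have paid; hence the stated preference
# for the (unreachable) cooperative outcome.
print("Mutual defection:", PAYOFF[("D", "D")],
      "vs mutual cooperation:", PAYOFF[("C", "C")])
```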
I think he meant according to the rules of the thought experiments. In Newcomb’s problem, Omega predicts what you do. Whatever you choose to do, that’s what Omega predicted you would choose to do. You cannot choose to do something that Omega wouldn’t predict; it’s impossible. There is no such thing as “the kind of agent who is predicted to one-box, but then two-box once the money has been put in the opaque box”.
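To make that constraint explicit, here is a small sketch in which Omega’s prediction is, by stipulation, identical to the agent’s own disposition, so the two can never come apart. The dollar amounts ($1,000,000 and $1,000) are the standard ones, assumed here rather than quoted from the thread.

```python
# Hedged sketch of the Newcomb setup: the prediction is defined to be
# the agent's actual disposition, so the outcome space has only two rows.
# The $1,000,000 / $1,000 amounts are the standard assumed values.

def newcomb_payoff(disposition):
    # There is no independent "prediction" variable that could diverge
    # from the choice; Omega predicts the disposition itself.
    prediction = disposition
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    if disposition == "one-box":
        return opaque
    else:  # "two-box"
        return opaque + transparent

for d in ("one-box", "two-box"):
    print(d, "->", newcomb_payoff(d))
# one-box -> 1000000
# two-box -> 1000
# The imagined row "predicted to one-box, then two-box" (payoff 1,001,000)
# simply does not exist in this model.
```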
Elsewhere on this comment thread I’ve discussed why I think those “rules” are not interesting. Basically, because they’re impossible to implement.
Right. The rules of the respective thought experiments. Similarly, if you’re the sort to defect against near copies of yourself in a one-shot PD, then so is your near copy. (Edit: I see now that scmbradley already wrote about that; sorry for the redundancy.)
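A matching sketch of that symmetry point: if a near copy runs the very same decision procedure, the off-diagonal outcomes are unreachable, and on the diagonal mutual cooperation beats mutual defection. The payoff values are the same illustrative assumptions as in the earlier sketch.

```python
# Sketch of the "near copy" symmetry: both players run the same decision
# procedure, so their moves cannot come apart, and only the diagonal
# outcomes (C, C) and (D, D) are reachable. Payoffs are illustrative.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def decision_procedure():
    # "The sort to defect": change "D" to "C" to see the other diagonal.
    return "D"

my_move = decision_procedure()
copy_move = decision_procedure()  # the copy runs the very same procedure

assert my_move == copy_move  # off-diagonal outcomes like ("D", "C") are unreachable
print((my_move, copy_move), "->", PAYOFF[(my_move, copy_move)])
```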