I generally share your reservations.
But as I understand it, proponents of alternative DTs are talking about a conditional PD where you know you face an opponent executing a particular DT. The fancy-DT-users all defect on PD when the prior that their PD-partner is running CDT or similar is high enough, right?
Wouldn’t you like to be the type of agent who cooperates with near-copies of yourself? Wouldn’t you like to be the type of agent who one-boxes? The trick is to satisfy this desire without using a bunch of stupid special-case rules, and show that it doesn’t lead to poor decisions elsewhere.
(Yes, you are correct!)
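To make that threshold concrete, here is a minimal sketch. The payoff numbers (standard T=5, R=3, P=1, S=0) and the two-type partner model are illustrative assumptions, not anything specified in the thread.

```python
# A minimal sketch, assuming standard PD payoffs (T=5, R=3, P=1, S=0);
# the thread itself never fixes numbers. With probability p the partner
# is a near-copy whose move mirrors mine; otherwise it runs CDT and defects.
T, R, P, S = 5, 3, 1, 0

def ev(my_move: str, p: float) -> float:
    """Expected payoff of my_move, given prior p that the partner mirrors me."""
    if my_move == "C":
        # A copy mirrors my cooperation (R); a CDT partner defects on me (S).
        return p * R + (1 - p) * S
    # If I defect, both a copy and a CDT partner defect back (P either way).
    return P

for p in (0.2, 0.5, 0.9):
    choice = "C" if ev("C", p) > ev("D", p) else "D"
    print(f"p(near-copy) = {p:.1f}: EV(C) = {ev('C', p):.1f}, "
          f"EV(D) = {ev('D', p):.1f} -> play {choice}")
# Cooperation wins only while p > P/R = 1/3; once the prior on a
# CDT-style partner rises above 2/3, defecting has the higher expected payoff.
```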
Yes, but it would be strictly better (for me) to be the kind of agent who defects against near-copies of myself when they co-operate in one-shot games. It would be better to be the kind of agent who is predicted to one-box, but then two-box once the money has been put in the opaque box.
But the point is really that I don’t see it as the job of an alternative decision theory to get “the right” answers to these sorts of questions.
The larger point makes sense. Those two things you prefer are impossible according to the rules, though.
They’re not necessarily impossible. If you have genuine reason to believe you can outsmart Omega, or that you can outsmart the near-copy of yourself in PD, then you should two-box or defect.
But if the only information you have is that you’re playing against a near-copy of yourself in PD, then cooperating is probably the smart thing to do. I understand this kind of thing is still being figured out.
According to what rules? And anyway I have preferences for all kinds of impossible things. For example, I prefer cooperating with copies of myself, even though I know it would never happen, since we’d both accept the dominance reasoning and defect.
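For reference, the dominance reasoning invoked here, as a sketch with the same assumed payoffs:

```python
# Dominance reasoning, sketched with assumed standard PD payoffs: holding
# the opponent's move fixed, defecting pays strictly more. The catch is
# the implicit assumption that the two moves are independent.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

for theirs in ("C", "D"):
    assert PAYOFF[("D", theirs)] > PAYOFF[("C", theirs)]  # D dominates C
    print(f"vs {theirs}: D pays {PAYOFF[('D', theirs)]}, "
          f"C pays {PAYOFF[('C', theirs)]}")
# Against a near-copy the independence assumption fails: the copy's move
# tracks mine, so the live comparison is (C, C) = 3 versus (D, D) = 1.
```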
I think he meant according to the rules of the thought experiments. In Newcomb’s problem, Omega predicts what you do. Whatever you choose to do, that’s what Omega predicted you would choose to do. You cannot choose to do something that Omega wouldn’t predict—it’s impossible. There is no such thing as “the kind of agent who is predicted to one-box, but then two-box once the money has been put in the opaque box”.
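A minimal sketch of that constraint, using the usual illustrative box values (an assumption; the thread gives no numbers):

```python
# Omega's prediction is defined to equal the choice, so "predicted to
# one-box, then two-box" is simply not in the outcome space.
def payoff(choice: str) -> int:
    prediction = choice  # the rule of the thought experiment
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque if choice == "one-box" else opaque + transparent

print(payoff("one-box"))  # 1000000
print(payoff("two-box"))  # 1000
```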
Elsewhere on this comment thread I’ve discussed why I think those “rules” are not interesting. Basically, because they’re impossible to implement.
Right. The rules of the respective thought experiments. Similarly, if you’re the sort to defect against near-copies of yourself in one-shot PD, then so is your near-copy. (edit: I see now that scmbradley already wrote about that—sorry for the redundancy).