I disagree. The Prisoner’s Dilemma does not specify that you are blind to the nature of your opponent.
The transparent version of the Prisoner’s Dilemma, and the more complicated ‘shared source code’ version that shows up on LW, are generally considered variants of the basic PD.
In contrast to games where you can say things like “I cooperate if they cooperate, and I defect if they defect,” in the basic game you either say “I cooperate” or “I defect.” Now, you might know some things about them, and they might know some things about you, but there’s no causal connection between your action and their action, as there would be if they were informed of your action, shown your source code, or able to perceive the future.
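For concreteness, here’s a minimal sketch of that distinction (my own toy framing, with illustrative payoff numbers respecting the standard T > R > P > S ordering; none of the names below are anyone’s canonical formalization):

```python
# Payoffs to (player A, player B) for each pair of moves; the numbers
# are illustrative but follow the standard PD ordering T > R > P > S.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (R, R)
    ("C", "D"): (0, 5),  # sucker's payoff (S) vs. temptation (T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (P, P)
}

def basic_pd(move_a, move_b):
    """Basic one-shot PD: each player submits "C" or "D", full stop.
    Neither move can depend on the other player's actual choice."""
    return PAYOFFS[(move_a, move_b)]

def transparent_pd(strategy_a, strategy_b):
    """Transparent variant: player B sees A's move before choosing,
    so B's move is a function of A's move -- a causal connection
    the basic game lacks."""
    move_a = strategy_a()
    move_b = strategy_b(move_a)
    return PAYOFFS[(move_a, move_b)]

# In the basic game you can only say "I cooperate" or "I defect":
print(basic_pd("D", "C"))  # (5, 0)

# In the transparent game you can say "I do whatever they do":
print(transparent_pd(lambda: "C", lambda their_move: their_move))  # (3, 3)
```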
I apologize for the aggravation.
“Aggravating” may have been too strong a word; “disappointed” might have been better: I saw content I mostly agreed with presented in a way I mostly disagreed with, with the added implication that the presentation was possibly more important than the content.
To me, a “vanilla” Prisoner’s Dilemma involves actual human prisoners who may reason about their partners. I don’t mean to imply that the “standard” PD involves credible pre-commitments or perfect knowledge of the opponent. While I agree that in the standard PD there’s no causal connection between actions, there can be logical connections between actions that make for interesting strategies (e.g., if you expect them to use TDT).
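To illustrate what I mean by a logical rather than causal connection, here’s a toy sketch (my own framing, not an implementation of TDT): two copies of the same deterministic procedure produce matching moves even though neither observes or influences the other.

```python
def twin_strategy(expects_twin):
    """If I expect my opponent to run this very procedure, then whatever
    I output, they output too -- so the choice is effectively between
    (C, C) and (D, D), and cooperation wins. Otherwise, defect."""
    return "C" if expects_twin else "D"

# Neither call sees or affects the other, yet the moves match:
move_a = twin_strategy(expects_twin=True)
move_b = twin_strategy(expects_twin=True)
print(move_a, move_b)  # C C -- correlated by shared logic, not causation
```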
On this point, I’m inclined to think that we agree and are debating terminology.
“Aggravating” may have been too strong a word; “disappointed” might have been better
That’s even worse! :-)
I readily admit that my presentation is tailored to my personality, and I understand how others may find it grating.
That said, a secondary goal of this post was to instill doubt about concepts that look sacred (terminal goals, epistemic rationality) and to encourage people to consider that even these may be sacrificed for instrumental gains.
It seems you already grasp the trade-offs between epistemic and instrumental rationality, can consistently reach mental states that are elusive to naively epistemically rational agents, and have come to these conclusions by different means than I did. By my analysis, there are many others who need a push before they are willing even to consider “terminal goals” and “false beliefs” as strategic tools. This post caters more to them.
I’d be very interested to hear more about how you’ve achieved similar results with different techniques!