No, I still don’t think so. The expression “defection risk” implies a one-shot Prisoner’s Dilemma context; such a situation is neither common in real life, nor do normal people think in such categories (and correctly so).
“I don’t trust someone like that” should just be interpreted directly according to its plain meaning. Not trusting someone does not imply a PD-like context and/or an expectation of defection.
“Too smart for his own good” I understand as meaning “He’s smart enough to figure out how to bend the rules or go around them, but not smart enough to figure out all the consequences of that and weigh them properly”. Again, nothing related to PD.
The expression “defection risk” implies a one-shot Prisoner’s Dilemma context; such a situation is neither common in real life, nor do normal people think in such categories (and correctly so).
You’re reading too much into what RomeoStevens wrote: at no point did he explicitly mention the one-shot Prisoner’s Dilemma.
A pretty common usage here is to use the Prisoner’s Dilemma as a simplified model (think spherical cow on a frictionless plane, or a perfect gas) of many morally-relevant situations.
This model is not what people explicitly think about (just like people don’t explicitly think about social status when they are outraged or dismissive, or don’t explicitly think about expected utility when deciding), but it may still be a good (simplified) model of what people think.
And RomeoStevens is referring to what people think, he’s just using “defection risk” as a shorthand. If you ask normal people, they’ll usually talk in terms of trust.
You may object that the model is not good enough, but you’ll need a better argument than “it’s not what people think” (nobody is claiming it is); do you similarly object to discussing people’s choices in terms of expected utility and opportunity costs?
You’re reading too much into what RomeoStevens wrote: at no point did he explicitly mention the one-shot Prisoner’s Dilemma.
The only two contexts I know of where the expression “defection risk” has meaning are the Prisoner’s Dilemma and Cold War spy/counterintelligence games.
he’s just using “defection risk” as a shorthand
I think it’s the wrong expression here, both connotationally and denotationally.
You may object that the model is not good enough
I’m not saying it’s not good enough, I’m saying it’s wrong. To repeat myself, the one-shot PD is quite rare in real life. Trying to shoehorn “many morally-relevant situations” into this mold is not the right approach.
Yes, the Prisoner’s Dilemma, but not only the one-shot version. You added that detail with no attempt at justification, and it is not justifiable. (Also, games similar to the Prisoner’s Dilemma with explicitly cooperative and defecting strategies, but details, details.)
In the iterated Prisoner’s Dilemma with communication possible, defecting is a pretty stupid move. I’m sure it happens in real life on occasion, but it’s a rarity, not the norm.
With utterly vanilla, standard rules, yes. But you can have systems that are clearly instances of the PD (or at least related enough that one would use the same term) where the payoff structure makes this not so clear.
Like, oh, defecting on a cooperator yields 100 points instead of the usual 5, and the other player gets −95. Then defection becomes profitable within 33 rounds of the end, not just 2.
Also, add in uncertainty or accidental defections, and it ceases to be so crazy to defect. What if you play with two other players and can see only the total number of defections and cooperations you faced? What if you aren’t sure how many more moves are left? What if there is a continuum from cooperation to defection?
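To make the arithmetic concrete, here is a minimal sketch under assumptions of my own (Axelrod-style payoffs and a grim-trigger opponent; the exact numbers behind the figures above aren’t spelled out, so this is an illustration, not a reconstruction). It compares cooperating to the end with switching to all-defect k rounds before the end, and shows how the break-even point moves when the temptation payoff is inflated.

```python
# Minimal sketch, assuming Axelrod-style payoffs (R = 3 for mutual cooperation,
# P = 1 for mutual defection, T = temptation for defecting on a cooperator) and
# a grim-trigger opponent who cooperates until you defect, then defects forever.
# These numbers are assumptions for illustration, not the comment's own.

def breakeven_rounds(R: int, P: int, T: int, max_k: int = 200) -> int:
    """Largest k such that switching to all-defect with k rounds left
    does at least as well as cooperating to the end."""
    best = 0
    for k in range(1, max_k + 1):
        cooperate = k * R            # mutual cooperation for the last k rounds
        defect = T + (k - 1) * P     # one temptation payoff, then mutual defection
        if defect >= cooperate:
            best = k
    return best

print(breakeven_rounds(R=3, P=1, T=5))    # 2  -- defection only pays right at the end
print(breakeven_rounds(R=3, P=1, T=100))  # 49 -- inflated temptation pays much earlier
```

The precise cutoff depends on the payoff matrix and on how the opponent retaliates, which is presumably why these numbers differ from the ones quoted above; the qualitative point is the same: inflate the temptation payoff enough and end-game defection stops being a stupid move.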
Well, let’s get back to reality. We were talking about the way normal people think, remember?
So consider a normal person. When, in the course of his typical life, does he have to make choices in a PD situation? At work? When he’s drinking beer and watching the game with his buddies or when she’s watching a show and gossiping with her girlfriends? When looking for a mate? In relationships with parents or kids?
PD is a neat construct and certainly occurs in real life—rarely. But for your regular bloke or gal it’s a non-issue and they don’t spend time thinking in terms of a theoretical situation they don’t care about.
At this point, Emile’s post seems appropriate (http://lesswrong.com/r/discussion/lw/kox/why_are_people_put_off_by_rationality/b71b).
Which I’ve already seen and replied to.
Looks like we’re going in circles. Agree to disagree?
Your response to it was that defection risk means this one very specific thing. She said that it’s LW-shorthand for a much more general thing.
Considering that you weren’t the one originally using the term, maybe you should use the definition that makes sense?