The first thing I thought when I read this first example was “didn’t Axelrod discuss the issue of the IPD with noise in his book?” The answer, it seems, is yes, but not in his book (PDF warning!). Essentially they come to the conclusion that forgiving the finger slip is optimal if it doesn’t happen very often (for their particular choice of values, <=1% of the time). Otherwise you should use the strategy where you forgive the opponent if they punish you for a finger slip, but play TFT otherwise.
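For concreteness, here is a minimal simulation sketch of the noisy IPD. Everything in it, the payoff values, the 1% slip rate, and the “generous” forgiveness rule, is my own assumption for illustration; the paper’s contrite strategy (forgive the opponent’s retaliation for your own slip) is subtler than the generous variant shown here.

```python
import random

# Standard PD payoffs: T=5, R=3, P=1, S=0 (my assumption; any payoffs
# satisfying T > R > P > S give the same qualitative story).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_match(strat_a, strat_b, rounds=1000, slip=0.01, seed=0):
    """Noisy IPD: each intended move flips with probability `slip`."""
    rng = random.Random(seed)
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b, rng)
        b = strat_b(hist_b, hist_a, rng)
        if rng.random() < slip:   # the 'finger slip'
            a = 'D' if a == 'C' else 'C'
        if rng.random() < slip:
            b = 'D' if b == 'C' else 'C'
        hist_a.append(a)
        hist_b.append(b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
    return score_a, score_b

def tft(own, other, rng):
    """Plain tit-for-tat: copy the opponent's last move."""
    return other[-1] if other else 'C'

def generous_tft(own, other, rng):
    """Tit-for-tat that forgives a defection 10% of the time."""
    if other and other[-1] == 'D' and rng.random() >= 0.1:
        return 'D'
    return 'C'

# Two plain TFT players echo a single slip back and forth as
# alternating defections; the forgiving variant lets the feud die out.
print(play_match(tft, tft))
print(play_match(generous_tft, generous_tft))
```

The point of the comparison: against itself, plain TFT turns one slip into a persistent vendetta, while even a small forgiveness probability recovers most of the cooperative payoff.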
A bias I’ve noticed: people are a lot more likely to believe that a bad event claimed to be an accident actually was an accident if it was done by someone they feel allied with, and to believe it was malice or culpable negligence if it was done by someone they already mistrust.
It’s actually rather a hard call if you don’t have solid information.
Is that really a bias? The fact that they are allied with you, or not, is some information about what they are likely to do.
It’s some information, but I think it’s very tempting (confirmation bias, halo/horns effects) to wildly overestimate how much information you’ve got.
It’s not a bias of either type so much as a recursive feedback loop of updated beliefs.
New ‘givens’ will trigger a belief update in any Bayesian belief network, propagating both backwards and forwards. However, that update is modulated by the network’s a priori givens.
This creates the potential for a runaway feedback loop of exploitation: you currently believe, with high probability, that a person is your ally. The person commits act (1), with negative utility to you, and asks for forgiveness using explanation X. You accept explanation X, since your ally wouldn’t harm you intentionally. The person then commits act (2), with negative utility to you, and asks forgiveness with the totally unrelated explanation Y. The person is your ally, and this sort of situation has never come up before, so again you accept the excuse.
Now, if you instead thought of the person as a neutral rather than an ally, you’d be far less likely to accept explanation Y, or at least less likely to continue associating with the person, though the ‘average’ person I model in my mind would likely still accept explanation X (no prior reason to assume deceit, most people are typically honest to strangers, and so on).
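To put toy numbers on that feedback (every likelihood and prior below is invented for illustration, nothing more): treat each “harmful act plus plausible excuse” episode as evidence for the hypothesis that the person is exploiting you, and compare how the posterior moves from an “ally” prior versus a “stranger” prior.

```python
def update_malice(prior, p_episode_if_malicious=0.5,
                  p_episode_if_honest=0.05):
    """One Bayes update on observing 'harmful act + plausible excuse'.

    Likelihoods are invented for illustration: an exploiter is assumed
    to produce such an episode ten times as often as an honest person.
    """
    num = p_episode_if_malicious * prior
    return num / (num + p_episode_if_honest * (1 - prior))

# Same two excuse-episodes, starting from an 'ally' prior vs. a
# 'stranger' prior (both priors also invented).
for label, p in [('ally', 0.02), ('stranger', 0.30)]:
    for excuse in ('X', 'Y'):
        p = update_malice(p)
        print(f"{label}: P(malice) after excuse {excuse} = {p:.2f}")
```

With these made-up numbers, the ally prior keeps P(malice) under 20% after excuse X, so accepting it is the reasonable call, while the very same evidence pushes the stranger past 80%. The runaway failure mode isn’t the update itself; it’s quietly resetting the prior back to “trusted ally” before excuse Y arrives, instead of carrying the posterior forward.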
What might constitute a real potential for bias in interpretation is if you get sufficiently high utility from not making an enemy of the ally, even while the ally is causing you harm. A very simple example:

You come home one day to discover a crumpled-up pair of underwear, not yours (but of your gender), in the bathroom, with your lover in bed. Your lover explains that it somehow made it into his/her gym bag by accident. This seems odd, but you deeply love your lover, and so you let it pass.

Some time later, you come home to find the house smelling of a perfume/cologne (appropriate to your gender) that isn’t yours. But again, your lover keeps a spotless home for you, cooks and cleans for you, and you really don’t want to live alone. So… you let this pass too.

Still later, you discover in the phone records that your lover has been calling your business partner repeatedly, and for extended periods. You recall the scent and realize where you had encountered it before: it was your business partner’s! In fact, the underwear was your business partner’s size! You are now faced with a realization: your lover, who provides your home world for you and keeps you in total comfort with little stress, is sleeping with your business partner, who co-owns your sole source of income. You have no legal recourse in either case (because that’s how the scenario is planted, and I’m cheating here anyhow).

So what options are you left with? Punishing your lover’s and partner’s betrayals has massive negative utility for you: the destruction of your personal and professional livelihoods. And if they discovered you knew, you are totally convinced they would each cut you out, and that would be the end of it.
So what choice do you make? The optimal utility choice here—without biases—is to pretend you still don’t know.
(There’s actually another equally optimal selection if you can pull it off, but I’m biased for it: tell them that because you love your lover so much, you find joy in him/her being happy and want to share in that happiness.)
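To make the “optimal utility” claim concrete, here is the expected-utility arithmetic with hypothetical numbers (the probability and payoffs are mine, not anything given in the scenario):

```python
# Hypothetical numbers; nothing here comes from the scenario itself.
P_DISCOVERED = 0.1     # chance they eventually learn that you know
U_CUT_OUT    = -100.0  # they end both the relationship and the business
U_COMFORT    = 15.0    # home comfort and income intact, minus the sting

eu_confront = U_CUT_OUT  # confronting reveals that you know, for certain
eu_silent = P_DISCOVERED * U_CUT_OUT + (1 - P_DISCOVERED) * U_COMFORT

print(f"confront: {eu_confront:.1f}, stay silent: {eu_silent:.1f}")
# confront: -100.0, stay silent: 3.5
```

Silence dominates as long as the chance of being found out stays small; the parenthetical option above is, in effect, a way of making discovery harmless.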
Anyhow, this thing just kinda de-railed on me a little, so I’ll stop now.
Witness the 9/11 truthers.