Quick thoughts:
people will try to use game theory to solve this
but the usual simple game-theory models are not realistic, because people sometimes care about not hurting other people (which means that the “pain” in my “opponent”’s outcome matrix also translates into a small amount of my “pain”), and sometimes they also think long-term, so they may give up an unimportant “fight” now to increase the chance of better cooperation in the future
but doing this properly requires that I have a good model of my “opponent”’s mind, so I can see how much “pain” various outcomes cause them
but I don’t have direct insight into my “opponent”’s mind, which gives them an obvious incentive to lie and exaggerate their “pain” (and even if I could read their mind, they could self-modify to actually feel more “pain” if they knew it would make me give up)
maybe there is a higher level of game theory that can deal with this situation
but I don’t know it.
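The empathy adjustment described in the list above can be sketched as a toy payoff calculation. All the numbers here (the payoffs, the empathy weight, the “loud music” framing) are invented purely for illustration:

```python
# Sketch: empathy-adjusted payoffs in a simple game (illustrative numbers only).
# Empathy folds a fraction of the other player's pain (negative payoff)
# into my own effective utility.

EMPATHY = 0.3  # assumed fraction of the other's pain that I feel as my own

def adjusted(my_payoff, their_payoff, empathy=EMPATHY):
    """My effective utility: own payoff plus a share of the other's payoff."""
    return my_payoff + empathy * their_payoff

# "Loud music" game: I choose loud or quiet; the neighbor suffers -5 if loud.
outcomes = {
    "loud":  (4, -5),   # I enjoy the music, the neighbor is in pain
    "quiet": (1,  0),   # I enjoy it less, the neighbor is fine
}

for action, (mine, theirs) in outcomes.items():
    print(action, adjusted(mine, theirs))
# With empathy = 0.3, "loud" yields 4 + 0.3*(-5) = 2.5, still above "quiet"
# at 1.0; but if the neighbor's reported pain were -15, "loud" would drop
# to 4 + 0.3*(-15) = -0.5, flipping my preferred action. That flip is
# exactly what gives the neighbor an incentive to exaggerate their pain.
```

The key point the sketch makes concrete: my decision depends on a number (their pain) that only they can report, so the reported value, not the true one, drives my behavior.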
In Yvain’s post (linked here by gjm), Yvain says:

“Although people pretending to be offended for personal gain is a real problem, it is less common in reality than it is in people’s imaginations. If a person appears to suffer from an action of yours which you find completely innocuous, you should consider the possibility that eir mind is different from yours before rejecting eir suffering as feigned.”
Uhm, it depends. My guess is that such people are rare as a fraction of the population, but if they are skilled at exploiting other people’s empathy, they can lie about their internal “pain” pretty often. So while the probability that “a person X randomly chosen from the population would do this” is very small, the probability that “a person who replied online to your post, acting offended, citing political arguments, and calling in their numerous supporters, would do this” could actually be pretty high. (Prior probability, posterior probability, selection bias.) So I would probably use the way I received a complaint as evidence.
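The prior-versus-posterior point can be made concrete with a toy Bayes calculation. All the probabilities below are invented for illustration, not estimates of real frequencies:

```python
# Sketch: how the way a complaint arrives changes the probability that it
# is feigned. All numbers are made up for illustration.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(feigned | evidence) via Bayes' rule."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# Prior: a randomly chosen person rarely feigns pain.
prior_feigned = 0.02

# Evidence: the complaint arrives wrapped in political arguments with a
# called-in crowd of supporters. Assume (arbitrarily) that feigners produce
# this presentation 50% of the time, and sincere complainers 2% of the time.
p = posterior(prior_feigned, likelihood_if_true=0.5, likelihood_if_false=0.02)
print(round(p, 2))  # 0.34 -- a small prior becomes a sizable posterior
```

This is the selection-bias point in miniature: the base rate of exploiters can be tiny while the rate among people who complain in a particular style is substantial.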
Situation A: I play music I enjoy, and my neighbor says: “Excuse me, my ears hurt, could you please turn down the volume?” I would turn down the volume, and if it is too quiet for me to enjoy, then I would simply turn the music off, or consider using headphones.
Situation B: There is an active political or religious movement X whose typical modus operandi is finding something to complain about. My neighbor is a very active member of X. This month, their topic is “make your neighbors turn down the music, because our great prophet said music is sinful”. I play music I enjoy, and my neighbor says: “Excuse me, my ears hurt from your sinful music, you should be ashamed of yourself, and you will burn in hell. Could you turn down the volume?” I would ignore them, or offer a trade (something like “I am doing you a big favor here, and I expect some favor in return in the future”), depending on my mood and my estimate of the probability of their returning the favor (the more righteous they are, the less likely).
Be careful about phrasing this in terms of lying. That framing doesn’t cover the scenario where people really do feel pain about such things, and are not lying, yet would not feel that pain if they didn’t have incentives to. Actual movements X often have adherents who behave that way.
I basically agree with you, but I think situation B, at least to that extent, is rare. And of course judging similarity to it is pretty open to bias if you just don’t like the movement in question.
Concrete example—I used to use the Hebrew name of God in theological conversations, as this was normal at my college. I noticed a Jewish classmate of mine was wincing. I discussed it with him, he found it uncomfortable, I stopped doing it. Didn’t cost me anything, happy to do it.
Also, I think some of this is bleeding over from “I am not willing to inconvenience myself” into actively enjoying making a point (possibly in some vague sense that it will help them reform, though I’m not sure that’s evidenced). I can understand that instinct, and the habit of “punishing” people who push things can make sense in game-theory terms. But the idea of not feeling duty-bound is different from reaching the position where some commenters might turn UP the music.
The difference is that the Jews have been using the same set of demands for a long time, so they’re unlikely to present new demands once you accede to their current ones.
I think the issue is not so much unconsciously exploiting it as that the amount of pain felt depends on the absence or presence of “training”. More here: http://lesswrong.com/lw/59i/offense_versus_harm_minimization/c8u7
Desensitization training is great if it (a) works and (b) is less bad than the problem it’s meant to solve.
(I’m now imagining Alice and Carol’s conversation: “So, alright, I’ll turn my music down this time, but there’s this great program I can point you to that teaches you to be okay with loud noise. It really works, I swear! Um, I think if you did that, we’d both be happier.”)
Treating thin-skinned people (in all senses of the word) as though they were already thick-skinned is not the same, I think. It fails criterion (a) horribly, and does not satisfy (b) by definition: it is the problem desensitization training ought to solve.