How could I be intelligent enough to make what seemed like convincing arguments for positions he knew were wrong, and yet stupid enough to believe them?
This… makes so much sense for the human hardware, actually.
“How can you be smart enough to discuss the topic X intelligently, and yet dumb enough to not notice that the tribe X is losing the fight and you could have easily joined the winning side instead? How can a person so epistemically rational be so instrumentally irrational?”
By the way, how much of the tension between ‘diversity of people’ and ‘diversity of ideas’ is “natural”, and how much is a self-fulfilling prophecy? I mean, was it always true that if you allow people to say opinion X, then people P will avoid your group? Or is it something we actually taught them recently, by spreading the idea that people P should be offended by hearing opinion X, should avoid any group that tolerates expressing it (even as a minority opinion within the group), and that the group should then feel guilty because these people made this choice?
Imagine that there is a group you would like to belong to. Then you hear some people in the group saying X, and you personally don’t like the opinion X. You also notice that those people are a minority within the group, but a tolerated minority; nobody sends them away for saying X. You have two options:

1) Join the group, because the non-X side is already stronger, and your presence will make it even stronger. You will get some utility from being a member, and lose some utility by being occasionally exposed to the opinion X.

2) Openly refuse to join the group, and tell them that you consider X offensive; that the group originally made a good impression on you, but by tolerating this opinion, they made you not join. You lose some utility from not being a member of the group, but there is a chance that you win a lot of utility if you succeed in making the group change its policy towards X.
Now the question is, what makes either of these choices more likely? Let’s assume that you prefer being a member to not being a member; but if you choose and publicly announce the latter option and the group refuses to change its policy towards X, you will probably remain consistent and avoid the group.
Seems to me that an important factor is the probability that the group will change its policy towards the X-saying minority; more precisely, your estimate of this probability. If you feel certain that the group will not change its policy, the first option is clearly better. On the other hand, if you feel certain that the group will change its policy once you precommit to avoiding it otherwise, the second option is clearly better. So, game-theoretically, a group that signals it really wants you encourages you to blackmail it.
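To make that trade-off concrete, here is a toy expected-utility sketch. The membership value M, the exposure disutility D, and the probability p are made-up illustrative numbers, not anything from the argument above:

```python
# Toy model of the choice between options 1) and 2) above.
# M = value of membership, D = disutility of occasionally hearing X,
# p = your estimate that the group caves in to the boycott.
# All numbers are illustrative assumptions.

def eu_join(M, D):
    # Option 1: join now; enjoy membership, tolerate occasional X.
    return M - D

def eu_boycott(M, D, p):
    # Option 2: refuse to join unless X is banned.
    # With probability p the group caves (you join, no exposure to X);
    # with probability 1 - p it refuses and you stay outside (utility 0).
    return p * M

def blackmail_threshold(M, D):
    # Smallest p at which option 2 beats option 1: p*M > M - D  <=>  p > 1 - D/M.
    return 1 - D / M

M, D = 10.0, 3.0  # you prefer membership (M - D > 0) but dislike X
for p in (0.2, 0.5, 0.9):
    better = "boycott" if eu_boycott(M, D, p) > eu_join(M, D) else "join"
    print(f"p={p}: join={eu_join(M, D)}, boycott={eu_boycott(M, D, p)} -> {better}")
print("threshold:", blackmail_threshold(M, D))  # 0.7 with these numbers
```

Under these toy assumptions, the boycott starts to pay off once your estimated p crosses 1 − D/M, so the more convinced you are that the group will cave, the more attractive the threat becomes.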
Another interesting question is what happens if we have two people: one strongly wants to join the group, the other has only a mild preference for joining. Both dislike X equally, and both assign the same probability to the group changing its policy at their request. Which one is more likely to choose the second option? The one who cares less about membership. So, game-theoretically, a group that signals it really wants people from some specific set encourages those among them who care least about the group to blackmail it most.
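The same toy threshold makes this comparison explicit. Again the numbers are made up: both people suffer the same disutility D from X and differ only in how much they value membership:

```python
# Same toy model as above: blackmail pays off once p > 1 - D/M.
# Both people dislike X equally (D = 3); they differ only in how much
# they value membership (M). Numbers are illustrative assumptions.

def blackmail_threshold(M, D):
    return 1 - D / M

eager    = blackmail_threshold(M=10.0, D=3.0)  # strongly wants to join
lukewarm = blackmail_threshold(M=4.0, D=3.0)   # only mildly wants to join

print(f"eager joiner blackmails only if p > {eager:.2f}")            # 0.70
print(f"lukewarm joiner blackmails already if p > {lukewarm:.2f}")   # 0.25
```

The lukewarm candidate has much less to lose if the boycott fails, so even a modest chance of success makes the threat worthwhile for them.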
If these assumptions are correct, then when someone tells you that you should change a group policy to not tolerate opinion X because it offends them, you should assume that this person probably does not care strongly about joining your group (they only strongly dislike X), and that you have brought this situation on yourself by showing too much willingness to suppress your members just to make hypothetical members happy. And if you accept the complaint, you should expect more similar complaints in the future, because you have shown that complaining about X works.
Short version: If you change the rules to make whining the winning strategy… expect a lot of whining.
“How can you be smart enough to discuss the topic X intelligently, and yet dumb enough to not notice that the tribe X is losing the fight and you could have easily joined the winning side instead? How can a person so epistemically rational be so instrumentally irrational?”
I doubt that such a calculation is in any way conscious, but behind the scenes, something like that is probably happening. Truth detectors for “socially advantageous” are probably stronger than those for “predictively accurate”.