I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.
This is something I haven’t observed, but it’s seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don’t they get the sort of publicity that studies which show differences get?
Speaking of AIs getting out of the box, it’s conceivable to me that an AI could talk its way out. It’s a lot less plausible that an AI could get it right the first time.
And here’s a thought which may or may not be dangerous, but which spooked the hell out of me when I first realized it.
Different groups have different emotional tones, and these are kept pretty stable by social pressure. Part of the social pressure usually takes the form of claims that the particular tone is superior to the alternatives (nicer, more honest, more fun, more dignified, etc.). The shocker was when I realized that the emotional tone is almost certainly the result of what a few high-status members of a group prefer or preferred, but the emotional tone is generally defended as though it’s morally superior. This is true even in troll groups, which claim that emotional toughness is more valuable than anything that can be gained by not being insulting.
Different groups have different emotional tones . . . (nicer, more honest, more fun, more dignified, etc.).
Downvotes have caused me to put a lot of effort into changing the tone of my communications on Less Wrong so that they are no longer significantly less agreeable (nice) than the group average.
In the early 1990s, the newsgroups about computers and other technical subjects were similar to Less Wrong: mostly male, mean IQ above 130, vastly denser in libertarians than the population of any country, and the best place online for people already high in rationality to improve their rationality.
Aside from differences in the “shape” of the conversation caused by differences in the “mediating” software used to implement the conversation, the biggest difference between the technical newsgroups of the early 1990s and Less Wrong is that the tone of Less Wrong is much more agreeable.
For example, there was much less evidence, IIRC, of a desire to spare someone’s feelings on the technical newsgroups of the early 1990s, and flames (impassioned harangues of a length almost never seen in comments here and of a level of vitriol very rare here) were very common. But then again, the mediating software probably pulled for deeper nesting of replies than Less Wrong’s software does, and most of those flames occurred in very deeply nested flamewars with only 2 or 3 participants.
Having seen both types of tone, which do you think is more effective in improving rationality and sharing ideas?
The short answer is I do not know.
The slightly longer answer is that it probably does not matter unless the niceness reaches the level at which people become too deferential towards the leaders of the community, a failure mode that I personally do not worry about.
Parenthetically, none of the newsgroups I frequented in the 1990s had a leader, unless my memory is epically failing me right now. Erik Naggum came the closest (on comp.lang.lisp), but maintaining his not-quite-leader status required him to expend a prodigious amount of time (and words) to continue to prove his expertise and commitment to Lisp and to browbeat other participants.
(And my guess is that the constant public browbeating cost him at least one consulting job. It certainly did not make him look attractive.)
The most likely reason for the emotional tone of LW is that the participants the community most admires have altruism, philanthropy, or a refined kind of friendliness as one of their primary motivations for participation, and for them maintaining a certain level of niceness is probably effortless or well-rehearsed and instrumentally very useful.
Specifically, Eliezer and Anna have altruism, philanthropy, or human friendliness as one of their primary motivations with probability .9. There are almost certainly others here with that as one of their primary motivations, but they are hard for me to read, or I just do not have enough information (in the form of either a large body of online writings like Eliezer’s or sufficient face time) to form an opinion worth expressing.
More precisely, if they were less nice than they are, it would be difficult for them to fulfill their mission of improving people’s rationality and networking to reduce existential risks; but if they were too nice, it would have too much of an inhibitory effect on the critical (judgemental) faculties of them and their interlocutors. So they end up being less nice than the average suburban Californian, say, but significantly nicer than most of the online communities frequented by programmers and others whose work relies heavily on the critical faculty, i.e., work in which success requires being able to perceive very subtle faults in something.
In other words, I have a working hypothesis that there is a tension between the internal emotional state optimal for “interpersonal” goals (like networking and teaching rationality) and the state optimal for making a rational analysis of a situation or argument. This tension certainly exists for me. I have no direct evidence that the same tension exists for the leaders of this community, but again that is my tentative hypothesis.
So, IMHO the important question is not the effects of the current level of niceness but rather the effects of altruistically motivated participants. I should share my thinking on that some day when I have more time.
I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.
This is something I haven’t observed, but it’s seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don’t they get the sort of publicity that studies which show differences get?
I would also be interested in hearing if there are any studies on this subject. For me, much of WrongBot’s argument hangs on how accurate these observations are. I’m still not sure I’d agree with the overall point, but more evidence on this point would make me much more inclined to consider it.
Also, WrongBot, it seems possible that the observations you’ve made have alternate explanations; e.g., the people you have witnessed changing their behavior based on scientific results may not have been as unbiased originally, or as reluctant to change their minds on these subjects, as you had believed them to be.
In other words, there may be a chicken/egg problem here. Did these people that you observed really become more bigoted/discriminatory after accepting the truth of certain studies, or did (perhaps subconscious) bigotry actually lead them to accept (and even seek out) studies showing results that confirmed this bigotry and gave them “cover” to discriminate?
I didn’t look hard enough for more evidence for this post, and I apologize.
I’ve recently turned up:
A study on clapping which indicated that people believe very strongly that they can distinguish between the sounds of clapping produced by men and women, when in reality they’re only slightly better than chance. The relevant section starts at the bottom of the 4th page of that PDF. This is weak evidence that beliefs about gender influence a wide array of situations, often unconsciously. (For one way to quantify “slightly better than chance,” see the sketch after this list.)
This paper on sex-role beliefs and sex-difference knowledge in schoolteachers may be relevant, but it’s buried behind a pay-wall.
Lots of studies like this one have documented how gender prejudices subconsciously affect behavior.
And here’s a precise discussion of exactly the effect I was describing. Naturally, it too is behind a pay-wall.
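To make “slightly better than chance” concrete, here is a minimal sketch of how one might test whether listeners’ accuracy at guessing a clapper’s sex exceeds the 50% chance level. The numbers are hypothetical, chosen for illustration, and are not figures taken from the clapping study.

```python
from math import comb

def binomial_p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: the probability of getting at least
    `correct` answers right out of `trials` by guessing at the `chance` rate."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical numbers: listeners guess the clapper's sex correctly 112 times
# out of 200 trials (56% accuracy, versus the 50% expected from pure guessing).
correct, trials = 112, 200
print(f"accuracy = {correct / trials:.0%}")
print(f"one-sided p-value vs. chance = {binomial_p_value(correct, trials):.3f}")
```

The contrast the study draws is that accuracy only modestly above 50% coexisted with very strong subjective confidence; a test like this quantifies how small the objective edge actually is.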
The shocker was when I realized that the emotional tone is almost certainly the result of what a few high-status members of a group prefer or preferred
Yes, if you have gained temporary influence over others one of the ways you can put that to further use is by trading that influence into an environment that accords with your preferences.
but the emotional tone is generally defended as though it’s morally superior
Regardless of how it comes to be established as a social norm, it could be that a particular tone is more suited to a particular purpose, for instance truth-seeking or community-building or fund-raising.
(For instance, academics have a strong norm of writing in an impersonal tone, usually relying on the passive voice to achieve that. This could either be the result of contingent pressure exerted by the people who founded the field, or it could be an antidote to inflamed rhetoric which would detract from the arguments of fact and inference.)
Yes, if you have gained temporary influence over others one of the ways you can put that to further use is by trading that influence into an environment that accords with your preferences.
What exactly is spent here? It looks like this is something that someone with enough status in the group can do “for free”.
I don’t think it’s ever free to use your influence over a group. Do it too often, and you come across as a despot.
As a local example, Eliezer’s insistence on the use of ROT13 for spoilerish comments carried through at some status “cost” when a few dissenters objected.
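(In case the mechanism is unfamiliar: ROT13 rotates each letter 13 places in the alphabet, so applying it twice recovers the original text, which is what makes it a low-friction way to veil spoilers. A minimal Python sketch follows; the example string is mine, not something from the thread.)

```python
def rot13(text: str) -> str:
    """Rotate each ASCII letter 13 places; everything else passes through unchanged."""
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

# Because 13 + 13 = 26, encoding and decoding are the same operation.
assert rot13("Fcbvyre grkg tbrf urer.") == "Spoiler text goes here."
assert rot13(rot13("Spoiler text goes here.")) == "Spoiler text goes here."
```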
Your point about tone being set top-down (by the high-status, or by inertia in the established community) seems to me to explain why there are so many genuinely vicious people among the netizens who talk rationally and honestly about differences between populations (essentially anti-PC), even beyond what you’d expect from the fact that they’re rebelling against an explicit “be nice” policy that most people assent to.
I’m not sure about the connection you’re making. Is it combining my points that tone is set from the top, and people are apt to overshoot their prejudices beyond their evidence?
My old theory about the nastiness of some anti-PC reactionaries was that they came to their view out of some animus.
Your suggestion that communities’ tones may be determined by those of a small number of incumbents serves as an alternative, softening explanation.
I think it’s complicated. Some of it probably is animus, but it wouldn’t surprise me if some of it isn’t about the specific topic so much as resentment at having the rules changed with no acknowledgement that rule changes have costs for those who are obeying them.