Among lurkers, the average feminism score was 3.84. Among people who had posted something—whether a post in Main, a post in Discussion, or a comment—the average feminism score was 3.8. A t-test failed to reveal any significant difference between the two (p = .49). So there is no detectable difference between lurkers and posters in feminism score.
Among people who have never posted a top-level article in Main, the average feminism score is 3.84. Among people who have posted top-level articles in Main, the average feminism score is 3.47. A t-test found a significant difference (p < .01). So top-level posters were slightly less feminist than the Less Wrong average. However, the average feminism of top-level posters (3.47) is still significantly higher than the average feminism among women (3.1).
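For concreteness, the comparisons above can be sketched as a two-sample (Welch's) t-test. The score lists below are made-up stand-ins for the survey's 1–5 feminism ratings, not the actual data:

```python
# Hypothetical sketch of the comparison described above. The score lists are
# invented stand-ins for the survey's 1-5 feminism ratings, not real data.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / (variance(a) / na + variance(b) / nb) ** 0.5

lurkers = [4, 4, 3, 5, 4, 3, 4, 4]  # made-up scores, mean ~3.9
posters = [4, 3, 4, 4, 3, 5, 4, 3]  # made-up scores, mean ~3.8

# A t statistic near zero corresponds to "no detectable difference";
# the survey's actual test would compare t against the t distribution.
print(round(welch_t(lurkers, posters), 2))
```

With samples this small and this similar, the statistic stays well below any conventional significance threshold, which is the shape of the lurkers-vs-posters result reported above.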
My conclusion is that most posters on LW have conventionally liberal views (at least on social issues), but many of them refrain from participating in the periodic discussions that erupt around these issues. Some possible reasons for this: i) they hold these opinions in a non-passionate way that does not incline them to argue for them; ii) they are more interested in other things LW has to offer, like logic or futurism, and see politics as a distraction; iii) they mistakenly believe their opinions are unpopular and they will suffer a karma hit.
I agree that this is a very plausible possibility as well. However, IADBOC (I agree denotationally but object connotationally) for two reasons.
First, a large part of views like “feminism” and “social justice” are plausibly terminal values. These terminal values are probably absorbed from the surrounding culture, but it is not clear how they could be argued for against someone who held opposite values. In addition, for the descriptive components of these views, “most people hold them absorbed from general culture and can’t argue for them” is not correlated with “unjustified, untrue beliefs”. The same description would apply to most ordinary scientific beliefs held by non-experts.
“most people hold them absorbed from general culture and can’t argue for them” is not correlated with “unjustified, untrue beliefs”
But is, as Yvain has explained on his blog, more likely to be associated with true or at least reasonable beliefs. Reasonable beliefs are more likely to become commonly accepted beliefs, and most people who hold commonly accepted beliefs absorbed them from general culture and have never seen a need to make sound arguments for them.
Observe that this argument applies even more strongly to beliefs that have lasted a long time. In particular it applies much more strongly to religion.
I don’t think that that is an important distinction. Most of the effect I was talking about is that it is easier for something reasonable (something with a relatively large probability of being true) to make the jump from controversial belief to generally accepted belief. Once something is generally accepted and people stop arguing about it, there is no strong mechanism rejecting false beliefs.
To the contrary, new beliefs can seem more reasonable by being associated with previously accepted beliefs, so beliefs in clusters of strongly held beliefs such as religions and certain ideologies are less likely to be true than the first belief in the cluster to become generally accepted.
First, a large part of views like “feminism” and “social justice” are plausibly terminal values.
Disagree here. Unless your terminal values include things like “everyone believing X regardless of its truth value” or “making everyone as equal as possible even at the cost of making everyone worse off”, the SJ policy proposals don’t actually promote the terminal values they claim to support. One could equally well claim that opposition to cryonics is based on terminal values.
In addition, for the descriptive components of these views, “most people hold them absorbed from general culture and can’t argue for them” is not correlated with “unjustified, untrue beliefs”. The same description would apply to most ordinary scientific beliefs held by non-experts.
Or for that matter religious views by non-theologian theists.
Your model of Feminism/SJ differs from mine. Most of the cluster of my-model-of-SJ-space consists of the terminal value “people should not face barriers to doing what they want to do on account of factors orthogonal to that goal” (which I endorse).
My model of SJ also includes (as a smaller component) the terminal value “no one should believe there are correlations between race/sex/gender and any other attribute or characteristic”, which I don’t endorse.
“people should not face barriers to doing what they want to do on account of factors orthogonal to that goal”
What kind of factors count as “orthogonal to that goal”? If my goal is to become a physicist, say, does the fact that I’m not very intelligent count as an “orthogonal factor”? If the answer is no, then this is one form of my claim that they are trying to make everyone as equal as possible even at the cost of making everyone worse off.
If the answer is yes, the question arises what their objection is to some disciplines having demographics that differ from the general population. Given that they tend to take this as ipso facto evidence of racism/sexism/etc., this shows that denial of correlations between race/sex and other attributes is in fact much more central to their belief system than you seem to think.
BTW, the other form of my claim can be seen in the following situation: You need to choose between three candidates A, B and C for a position. You know that A is qualified and that one of B or C is also qualified (possibly slightly more qualified than A), but the other is extremely unqualified (as it happens, B is the qualified one, but you don’t know that). However, for reasons beyond either B or C’s control it is very hard to check which of B or C is the qualified one. Does hiring A, even though this is clearly unfair to B, count as “creating a barrier orthogonal to the goal”?
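The expected-value arithmetic behind this scenario can be made explicit. All the utility numbers here are illustrative assumptions of mine, not anything stated in the comment:

```python
# Toy expected-value version of the A/B/C hiring dilemma above. The utilities
# are illustrative assumptions: a known-qualified hire is worth 1.0, the
# qualified one of B/C is worth 1.1 (slightly better than A), and the
# extremely unqualified one is worth -10.0.
U_A, U_GOOD, U_BAD = 1.0, 1.1, -10.0

ev_hire_A = U_A                              # A is known to be qualified
ev_hire_B_or_C = 0.5 * U_GOOD + 0.5 * U_BAD  # B and C are indistinguishable

print(ev_hire_A > ev_hire_B_or_C)  # True: hiring A maximizes expected value
```

Even though the qualified one of B/C may be slightly better than A, the risk of the unqualified hire dominates, so the employer rationally hires A despite the unfairness to B.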
If my goal is to become a physicist, say, does the fact that I’m not very intelligent count as an “orthogonal factor”?
No.
If the answer is no, then this is one form of my claim that they are trying to make everyone as equal as possible even at the cost of making everyone worse off.
If “they” believe that. If you know of a large number of people who believe this, I am not aware of them.
Does hiring A, even though this is clearly unfair to B, count as “creating a barrier orthogonal to the goal”?
Hiring isn’t creating the barrier; the barrier—the inability to determine which candidate is qualified—is already there.
If my goal is to become a physicist, say, does the fact that I’m not very intelligent count as an “orthogonal factor”?
No.
Did you mean to say “Yes” and get confused by the double negative? (That would be more consistent with the rest of your comment.)
If the answer is no, then this is one form of my claim that they are trying to make everyone as equal as possible even at the cost of making everyone worse off.
If “they” believe that. If you know of a large number of people who believe this, I am not aware of them.
I never said they believed that; at most they alieve it. My claim is that that is what you get if you try to steel man their position as based on terminal values rather than factual confusion.
Confused: There doesn’t appear to be a double-negative.
If you’re not very intelligent, that is relevant to your physicist aspirations. It is not orthogonal.
I do not understand how your description is a steel man. It may be an attempt to extrapolate instrumental values from a certain set of terminal values, but that doesn’t help us in our matter-of-fact disagreement about the terminal values of the SJ cluster.
If you want to steel man social justice, substitute the entire works of John Rawls.
Confused: There doesn’t appear to be a double-negative.
Sorry, my mistake.
If you want to steel man social justice, substitute the entire works of John Rawls.
The part of his work that I have read consisted of him making a social contract-type argument saying that since the contract must be made before risk preferences (i.e., whether one is risk averse or risk loving) are assigned, we should treat everyone as maximally risk averse. There was also some talk about utility that mostly consisted of him misunderstanding the concept. This did not leave me particularly inclined to read the rest.
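The disagreement over risk preferences can be illustrated with a toy contrast between the maximin rule the comment attributes to Rawls and ordinary expected-value reasoning. The income numbers are invented for illustration:

```python
# Toy contrast (my gloss, not Rawls's text) between maximin choice and
# expected-value choice. Each "society" is an invented list of incomes one
# might be born into behind the veil of ignorance.
society_a = [10, 50, 60]   # higher floor, lower average
society_b = [5, 80, 90]    # lower floor, higher average

# Maximal risk aversion: choose the society with the best worst case.
maximin_choice = max([society_a, society_b], key=min)
# Risk-neutral reasoning: choose the society with the best average.
average_choice = max([society_a, society_b], key=lambda s: sum(s) / len(s))

print(maximin_choice is society_a)   # True: maximin picks the higher floor
print(average_choice is society_b)   # True: expected value picks the higher mean
```

The two rules pick different societies here, which is why treating everyone as maximally risk averse is a substantive (and contestable) premise rather than a neutral one.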
In my case it’s something similar to (ii)… I often feel that arguing in favor of my views will not be a useful contribution to the discussions that periodically erupt on these issues, so I don’t. (Sometimes I do.)
Update: Likely that feminist-inclined LWers are less likely to comment/vote and more likely to take surveys.
Meta-update: This hypothesis is ruled highly improbable based on more data from Yvain.
I update in the direction that the model of people I form based on LW comments is pretty inaccurate.
iii) they mistakenly believe their opinions are unpopular and they will suffer a karma hit.
iv) they absorbed these views from their surrounding culture and don’t actually have good arguments for them.
Once something is generally accepted and people stop arguing about it, there is no strong mechanism rejecting false beliefs.
Memetic evolution. The fact that a belief has survived for a long time, and survived the rise and fall of civilizations, is evidence in its favor.
There was also some talk about utility that mostly consisted of him misunderstanding the concept.
Could you talk a little more about/give an example of what you have in mind here?
In my case it’s something similar to (ii)…
Possible, but I suspect that “Why our kind can’t cooperate” both has a stronger effect and is more likely.
Indeed. I weep to imagine what the author of the linked article would think of us if she decided to check out the discussion her piece had inspired.