It is. But if someone is saying “this group of people is notably bad” then it’s worth asking whether they’re actually worse than other broadly similar groups of people or not.
I think the article, at least to judge from the parts of it posted here, is arguing that rationalists and/or EAs are unusually bad. See e.g. the final paragraph about paperclip-maximizers.
I fail to see why it matters what other broadly similar groups of people do. Rationalists ought to predict and steer the future better than other kinds of people, and so should be held to a higher standard. Deflecting with “but all the other kids are equally abusive!” is just really stupid.
As for the article, I’m not concerned with the opinion of a journalist either; journalists can be confused or bombastic about the exact extent of a problem, and that’s fairly standard for them. But I don’t doubt that the problem is real and that it wasn’t preemptively addressed, which bothers me because the founders of this community are more than smart enough to have at least made an attempt to do so.
Whether it matters what other broadly similar groups do depends on what you’re concerned with and why.
If you’re, say, a staff member at an EA organization, then presumably you are trying to do the best you could plausibly do, and in that case the only significance of those other groups would be that if you have some idea how hard they are trying to do the best they can, it may give you some idea of what you can realistically hope to achieve. (“Group X has such-and-such a rate of sexual misconduct incidents, but I know they aren’t really trying hard; we’ve got to do much better than that.” “Group Y has such-and-such a rate of sexual misconduct incidents, and I know that the people in charge are making heroic efforts; we probably can’t do better.”)
So for people in that situation, I think your point of view is just right. But:
If you’re someone wondering whether you should avoid associating with rationalists or EAs for fear of being sexually harassed or assaulted, then you probably have some idea of how reluctant you are to associate with other groups (academics, Silicon Valley software engineers, …) for similar reasons. If it turns out that rationalists or EAs are pretty much like those, then you should be about as scared of rationalists as you are of them, regardless of whether rationalists should or could have done better.
If you’re a Less Wrong reader wondering whether these are Awful People that you’ve been associating with and you should be questioning your judgement in thinking otherwise, then again you probably have some idea of how Awful some other similar groups are. If it turns out that rationalists are pretty much like academics or software engineers, then you should feel about as bad for failing to shun them as you would for failing to shun academics or software engineers.
If you’re a random person reading a Bloomberg News article, and wondering whether you should start thinking of “rationalist” and “effective altruist” as warning signs in the same way as you might think of some other terms that I won’t specify for fear of irrelevant controversy, then once again you should be calibrating your outrage against how you feel about other groups.
For the avoidance of doubt, I should say that I don’t know how the rate of sexual misconduct among rationalists / EAs / Silicon Valley rationalists in particular / … compares with the rate in other groups, nor do I have a very good idea of how high it is in other similar groups. It could be that the rate among rationalists is exceptionally high (as the Bloomberg News article is clearly trying to make us think). It could be that it’s comparable to the rate among, say, Silicon Valley software engineers and that that rate is horrifyingly high (as plenty of other news articles would have us think). It could be that actually rationalists aren’t much different from any other group with a lot of geeky men in it, and that groups with a lot of geeky men in them are much less bad than journalists would have us believe. That last one is the way my prejudices lean … but they would, wouldn’t they? So I wouldn’t put much weight on them.
[EDITED to add:] Oh, another specific situation one could be in that’s relevant here: If you are contemplating Reasons Why Rationalists Are So Bad (cf. the final paragraph quoted in the OP here, which offers an explanation for that), it is highly relevant whether rationalists are in fact unusually bad. If rationalists or EAs are just like whatever population they’re mostly drawn from, then it doesn’t make sense to look for explanations of their badness in rationalist/EA-specific causes like alleged tunnel vision about AI.
[EDITED again to add:] To whatever extent the EA community and/or the rationalist community claims to be better than others, of course it is fair to hold them to a higher standard, and to take any failure to meet it as evidence against that claim. (Suppose it turns out that the rate of child sex abuse among Roman Catholic clergy is exactly the same as that in some reasonably chosen comparison group. Then you probably shouldn’t see Roman Catholic clergy as super-bad, but you should take that as evidence against any claim that the Roman Catholic Church is the earthly manifestation of a divine being who is the source of all goodness and moral value, or that its clergy are particularly good people to look to for moral advice.) How far either EAs or rationalists can reasonably be held to be making such a claim seems like a complicated question.
I am a pessimist who works from the assumption that humans are globally a bit terrible. Thus, I don’t consider the isolated data point of “humans in group x have been caught being terrible” to be particularly novel or useful.
Reporting that I would find useful would ultimately take the form “humans in group x trend toward differently terrible from humans in other groups”, whether that’s claiming that they’re worse, differently bad, or better.
Whenever someone claims that a given group is better than most of society, the obvious next question is “better at being excellent to each other, or better at covering it up when they aren’t?”
The isolated data point of “people in power are accused of using that power to harm others” is like… yes, and? That’s kind of baseline for our species.
And as a potential victim, reporting on misconduct is only useful to me if it updates the way I take precautions against it, by pointing out that the misconduct in a given community is notably different from that in the world at large.
Whataboutism is a fallacy.
No, it isn’t, especially given that “whataboutism” is often just a label used to dismiss comparisons that don’t advance a particular argument.
Writing the words “what about” does not invalidate any and all comparisons.