Unfortunately, there’s no way to publicly examine, measure, or discuss group differences that doesn’t disproportionately attract those who’d misinterpret and misuse the results against the group(s) in question, and therefore no way to do so without those groups legitimately feeling attacked by whoever gives the topic visibility.
It’s the core of politics (how to characterize groups and their relations to each other), and therefore rationality on hard mode. There are very few forums where it won’t cause more harm than good.
In fact, we don’t have good epistemic practices for studying or thinking about groups, as distinct from individuals. I suspect there is a lot of reality in self-organized sorting by visible traits, and that this causes group norms to differ via different distributions of invisible traits. But I don’t know of many studies that break it down that way.
> Unfortunately, there’s no way to publicly examine, measure, or discuss group differences that doesn’t disproportionately attract those who’d misinterpret and misuse the results against the group(s) in question, and therefore no way to do so without those groups legitimately feeling attacked by whoever gives the topic visibility.
I feel like this might be downstream of activist-caused evaporative cooling, though. If you say that anyone who studies group differences must be motivated by animus, people unmotivated by animus will be disproportionately likely to leave the field.
One of the current controversies in medicine is over whether race should be used as a factor in diagnostic decisions (one example). Now, you or I as patients might want our doctors to use all available information to provide us with the best treatment possible, and you or I as people interested in good outcomes for all might worry that this will lead to people being predictably mistreated (which, if treatments are set according to population-level averages, will disproportionately affect minority groups). It seems pretty implausible to me that the people who set up race-sensitive diagnostics and treatments for kidney disease were motivated by ill will towards individuals or groups, and much more likely that they were motivated by good will.
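To make the kind of decision rule at issue concrete, here is a minimal sketch (my own illustration, not anything from the thread) using the 2009 CKD-EPI creatinine equation, the formula the kidney-disease controversy usually centers on. The coefficients are the published ones, but the patient values are invented, and the 2021 refit of the equation dropped the race term; treat this as illustrative, not clinical.

```python
# Illustrative only, not clinical advice: the 2009 CKD-EPI creatinine equation,
# which included a race coefficient; the 2021 refit removed that term.

def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the contested population-level adjustment
    return egfr

# Same (hypothetical) patient, same lab value; the race flag alone moves the
# estimate from roughly 57 to roughly 66, across the eGFR-60 line used in CKD staging.
print(egfr_ckd_epi_2009(1.35, 60, female=False, black=False))
print(egfr_ckd_epi_2009(1.35, 60, female=False, black=True))
```

The point isn’t the specific numbers but the structure: a single population-level multiplier, keyed to group membership rather than to anything measured about the individual, can move an individual across a staging threshold.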
Similarly, you could imagine people who are motivated by good will towards everyone, who want to come up with policies and procedures in that spirit, and who want to use the most effective information available to do so. Is it really productive to drive them out of existence?
I didn’t say, nor mean to imply, that anyone who studies group differences must be motivated by animus. I do find that public discussions of such research seem to attract amateurs who ARE motivated that way, and activists who assume that others in the discussion are.
To be clear, I’m deeply in favor of careful world-modeling that understands the distribution of traits and the level of correlation between visible and invisible traits, especially when it allows better individual decisions based on actual observations and measurements. However, I don’t believe that most people, even on LessWrong, are capable of that level of rigor, and the discussions seem more painful than helpful in forums that aren’t heavily moderated and tightly focused.
Not “this topic is wrong to research and use in individual treatments”, but “this topic doesn’t go well on LessWrong”. And a little bit of “this topic is wrong to use in general non-individual policy”.
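On the world-modeling point above, here is a minimal sketch (my own toy numbers, not anything claimed in the thread) of why actual observations of an individual tend to swamp group-level statistics once you have them: in a conjugate normal model, two groups with modestly different trait distributions yield nearly identical posteriors after the same individual measurement is folded in.

```python
# Toy numbers, purely illustrative: a group-level prior on an unobserved trait
# is mostly overridden by one moderately precise individual measurement.

def normal_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update; returns (posterior_mean, posterior_var)."""
    precision = 1.0 / prior_var + 1.0 / obs_var
    mean = (prior_mean / prior_var + obs / obs_var) / precision
    return mean, 1.0 / precision

# Two groups whose trait distributions differ modestly at the population level.
group_priors = {"A": (0.0, 1.0), "B": (0.3, 1.0)}

obs, obs_var = 1.5, 0.5  # the same individual measurement for both

for group, (m, v) in group_priors.items():
    post_mean, _ = normal_update(m, v, obs, obs_var)
    print(f"group {group}: prior mean {m:+.2f} -> posterior mean {post_mean:+.2f}")
# Prior means differ by 0.30; posterior means differ by only 0.10. The individual
# observation, not the group average, carries most of the weight.
```

That is one way “better individual decisions based on actual observations and measurements” can coexist with acknowledging real differences in group-level distributions.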