And so if we have a gender taboo, I would much rather it be a “your opinion on gender politics really doesn’t matter, and to the extent you have one, you should be curious rather than idealistic” than a “let’s not talk about gender politics because it might upset X.” The first is dissolving politics; the second is surrendering to X.
Thing is, given the gender stuff in the sequences previously mentioned, it seems to me that communications intended to say the former would be likely to come across as “let’s not talk about gender politics — and therefore, Eliezer’s stuff about verthandi, boreana, catgirls, and the like, and various folks’ side comments on ev.psych, are all allowed to stand unquestioned.”
But the primary substance of her claim should have been about the epistemic role that stereotypes should play as evidence.
it seems to me that communications intended to say the former would be likely to come across as
I think that gender is on topic when discussing fun theory, self-modification, and CEV, in the same ways that politics is on topic when discussing those things. I do agree that it might be worthwhile to try to rewrite articles that are problematic; the last I heard, the sequences were being edited into a book, and that seems like a good time to attempt those changes.
Eh? That seems rather unrelated.
Is good science more likely to match or smash stereotypes? If you believe that stereotypes are Bayesian evidence for the ground truth, then good science is more likely to match stereotypes, and thus, science that smashes stereotypes is less likely to be good science. Now, this is still just Bayesian evidence, and enough well-done studies can outweigh the hastily made impressions of the public. The neat thing about this is that we can quantify how much we should believe stereotypes; the linked article suggests anti-believing in stereotypes, without explicit justification as to why.
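As a rough illustration of the “quantify” and “outweigh” claims (every likelihood ratio below is invented, not taken from the linked article), a minimal sketch: treat the stereotype and each well-done study as independent pieces of evidence and track the posterior as studies come in.

```python
# Invented likelihood ratios: the stereotype and each well-done study are treated as
# independent pieces of evidence; we track the posterior as studies accumulate.
prior_odds = 1.0        # start indifferent about the claim the stereotype asserts
lr_stereotype = 3.0     # the stereotype favours the claim 3:1
lr_study = 1 / 5.0      # each good study favours the opposite conclusion 5:1

odds = prior_odds * lr_stereotype
for n in (1, 2, 3):
    odds *= lr_study
    print(n, round(odds / (1 + odds), 3))   # 0.375, 0.107, 0.023: one study already outweighs it
```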
When someone encourages science to smash stereotypes, they need to be clear about what methodological principle they have in mind. Without that, it reads like a political rallying cry, ammunition for killing enemy soldiers, rather than a serious suggestion by an empiricist.
For example, consider this study, and its rapid promotion by feminists. It was a single study, sprinkled with warnings that a single study doesn’t prove anything and that this was, to the best of the authors’ knowledge, the only time this result had ever been observed, despite widespread experimentation. Even glancing at it briefly, I found several components of their results that looked odd and warranted investigation.
Separating what one wants to be true from what one believes to be true is a very important rationality skill, which should be applied to gender just as much as to the rest of life.
If you believe that stereotypes are Bayesian evidence for the ground truth, then good science is more likely to match stereotypes, and thus, science that smashes stereotypes is less likely to be good science.
Depends on what you mean by “stereotype”.
If everyone says that Welsh corgis weigh less than one ton, that is good evidence that they do weigh less than one ton.
However, if a group of loud Greens says that Blues are whiny, I am not so sure that this is good evidence that Blues are whiny. I think it is more likely to be something other than evidence — for instance, a rhetorical tactic to encourage Greens to steal Blues’ stuff and discourage Blues from complaining about it.
I expect there to be plenty of low-quality motivated search. That is not surprising. I also expect that if Greens hold a stereotype about the lived experience of Blues that is contrary to Blues’ reports of their own lived experience, the Greens’ stereotype is screened off as evidence by the Blues’ experience.
That … really doesn’t follow.

Suppose G is a binary variable for the ground truth, S is a binary variable for the stereotype, and E is a binary variable for the result of an experiment.
If stereotypes are Bayesian evidence for the ground truth, that means P(S|G)>P(S|~G) and P(~S|G)<P(~S|~G); if the results of good experiments are Bayesian evidence for the ground truth, that means P(E|G)>P(E|~G) and P(~E|G)<P(~E|~G). Put together, it should generally be the case that P(E|S)>=P(E|~S), and P(~E|S)<=P(~E|~S). (If you don’t see why this is, I recommend opening up a spreadsheet, generating some binary distributions which are good evidence, and then working out the probabilities through Bayes.)
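A minimal Python sketch of that spreadsheet exercise (the specific likelihoods are invented, and it assumes S and E are independent conditional on G, which the next paragraph flags as not guaranteed):

```python
# Sketch of the suggested exercise: choose a joint distribution in which the stereotype S
# and the experimental result E are each Bayesian evidence for the ground truth G, assume
# S and E are independent given G, and check that P(E|S) >= P(E|~S).
p_g = 0.5
p_s_given = {True: 0.7, False: 0.4}   # P(S|G) > P(S|~G): the stereotype is weak evidence
p_e_given = {True: 0.9, False: 0.1}   # P(E|G) > P(E|~G): the experiment is strong evidence

def joint(g, s, e):
    pg = p_g if g else 1 - p_g
    ps = p_s_given[g] if s else 1 - p_s_given[g]
    pe = p_e_given[g] if e else 1 - p_e_given[g]
    return pg * ps * pe

def p_e_given_s(s_value):
    num = sum(joint(g, s_value, True) for g in (True, False))
    den = sum(joint(g, s_value, e) for g in (True, False) for e in (True, False))
    return num / den

print(p_e_given_s(True), p_e_given_s(False))   # ~0.61 vs ~0.37: P(E|S) > P(E|~S)
```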
It’s not guaranteed to be the case, because stereotypes and the results of experiments are probably not independent once we condition on the ground truth. The important thing about using this as a criticism is noting that stereotypes prevalent in academia and stereotypes prevalent in the general population may be rather different. Looking at the suggested results in the linked article, you’ll note it’s saying “hey, you should conform to my stereotypes, even when the ground truth is probably the other way” under the guise of “smash stereotypes.”
Firstly, just because something is Bayesian evidence, it doesn’t follow that it’s strong enough to overcome the prior probability. We may have reason to believe that, say, we’re all clones, and thus the stereotype that anyone from vat 4-G is an idiot is probably unfounded. Of course, there could be something wrong with vat 4-G, and we update our probability of this, but that doesn’t make it more likely. (And the Robbers Cave experiment shows that even when two populations are drawn from the same random distribution, opposing stereotypes can and will form.)
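To put invented numbers on that first point: even if the vat 4-G stereotype is genuine Bayesian evidence, a strong enough prior leaves the posterior roughly where it started.

```python
# Invented numbers: the stereotype is real but modest evidence (likelihood ratio 2),
# while the clones-based prior that vat 4-G actually differs is tiny.
prior = 0.001                      # prior probability that vat 4-G really is defective
likelihood_ratio = 2.0             # the stereotype is twice as likely if it is

posterior_odds = (prior / (1 - prior)) * likelihood_ratio
print(posterior_odds / (1 + posterior_odds))   # ~0.002: nudged upward, still very unlikely
```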
Secondly, I suspect you may be using a more general definition of “stereotype”, whereas I (and, I’m guessing, that article) am using a definition closer to “overgeneralization” or “simplistic profile of a large group”, which are naturally contrasted with “normal distribution”. Could you taboo “stereotype” for me, please?
Firstly, just because something is Bayesian evidence, it doesn’t follow that it’s strong enough to overcome the prior probability.
Ah, that’s the issue: I don’t mean that it’s more likely than not, or P(E|S)>P(~E|S), just that it’s more likely than it would be otherwise, or P(E|S)>P(E)>P(E|~S).
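A quick numeric illustration of that distinction (all numbers invented, with S and E assumed independent given G, and E rare overall):

```python
# Invented numbers: E is unlikely even given S ("not more likely than not"),
# yet S still raises its probability ("more likely than it would be otherwise").
p_g, p_s_g, p_s_ng, p_e_g, p_e_ng = 0.5, 0.7, 0.4, 0.3, 0.05

p_s = p_g * p_s_g + (1 - p_g) * p_s_ng                                            # 0.55
p_e = p_g * p_e_g + (1 - p_g) * p_e_ng                                            # 0.175
p_e_given_s = (p_g * p_s_g * p_e_g + (1 - p_g) * p_s_ng * p_e_ng) / p_s           # ~0.21
p_e_given_not_s = (p_g * (1 - p_s_g) * p_e_g
                   + (1 - p_g) * (1 - p_s_ng) * p_e_ng) / (1 - p_s)               # ~0.13

print(p_e_given_s < 0.5, p_e_given_s > p_e > p_e_given_not_s)   # True True
```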
I suspect you may be using a more general definition of “stereotype”
Quite possibly. What I mean by ‘stereotype’ is generally ‘the general population noticing results from a distributional tendency.’ Suppose the population holds an opinion of the form “men are smarter than women.” As a logical statement, it is disproven by finding a single woman who is smarter than a single man (which is easy to do!). As a distributional statement, it could be interpreted as any of “the male intelligence mean is larger than the female intelligence mean” or “the male intelligence variance is larger than the female intelligence variance” or “high male intelligence is more visible than high female intelligence,” because all of those are distributional tendencies that could have noticeable results along the lines of “men are smarter than women.”
In particular, the ground truth of higher male variance in intelligence is interesting because it results in both “men are smarter than women” and “men are dumber than women” being valid impressions, in the sense that there are more smart men than smart women and dumb men than dumb women! This is perfectly natural if you think in distributions, and it seems to me that both of those are memes that are common in the wider culture.
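A quick check of the “more at both tails” arithmetic, with made-up parameters (equal means, different spreads, arbitrary cutoffs):

```python
# Two normal distributions with the same mean but different variance: the
# higher-variance group has more mass in both the upper and the lower tail.
from statistics import NormalDist

wide = NormalDist(mu=100, sigma=15)     # higher variance
narrow = NormalDist(mu=100, sigma=12)   # lower variance

for cutoff in (130, 70):
    if cutoff > 100:
        tails = (1 - wide.cdf(cutoff), 1 - narrow.cdf(cutoff))
    else:
        tails = (wide.cdf(cutoff), narrow.cdf(cutoff))
    print(cutoff, [round(t, 4) for t in tails])
# 130: [0.0228, 0.0062]; 70: [0.0228, 0.0062] -- the wider distribution wins both tails.
```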
Ah, that’s the issue: I don’t mean that it’s more likely than not, or P(E|S)>P(~E|S), just that it’s more likely than it would be otherwise, or P(E|S)>P(E)>P(E|~S).
Oh, right :)
As a distributional statement, it could be interpreted as any of “the male intelligence mean is larger than the female intelligence mean” or “the male intelligence variance is larger than the female intelligence variance” or “high male intelligence is more visible than high female intelligence,” because all of those are distributional tendencies that could have noticeable results along the lines of “men are smarter than women.”
Have you tried asking people what they mean? That might narrow it down.
In particular, the ground truth of higher male variance in intelligence is interesting because it results in both “men are smarter than women” and “men are dumber than women” being valid impressions, in the sense that there are more smart men than smart women and dumb men than dumb women! This is perfectly natural if you think in distributions, and it seems to me that both of those are memes that are common in the wider culture.
“X are dumber than Y” is a pretty universal “meme”. Just like “X are worse people than Y”, “X are more/less emotional than Y” and so on and so forth. Note that positive stereotypes of women usually emphasize their intuition, which is often seen as opposed to “intelligence”.
IOW, interesting, but probably a coincidence, since it fits better with the known tendency to develop opposing stereotypes than with academics foolishly ignoring sources of evidence.