For example, here is an informal writeup of a PNAS article finding evidence of bias favouring male over female job applicants when everything about the applications was exactly the same apart from the name.
That’s not necessarily irrational in general. The other information on the resume does not prevent the name from also providing potentially relevant information.
I’d suggest you look up “screening off” in any text on Bayesian inference. The explanation on the wiki is not really the greatest.
But when you have information that is closer and more specific to the property you’re trying to predict, you should expect to increasingly disregard information that is further from it. Even if your prior asserts that sex predicts competence, when you have more direct measures of competence of a particular candidate, they should screen off the less-direct one in your prior.
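A toy discrete version of this, for concreteness. All the numbers here are invented: a hypothetical prior in which 60% of men and 50% of women are highly competent, and a test that highly competent people pass 90% of the time and others 20% of the time.

```python
def posterior_hi(p_hi, p_pass_hi, p_pass_lo):
    """P(competence = hi | test passed), by Bayes' theorem."""
    num = p_hi * p_pass_hi
    return num / (num + (1 - p_hi) * p_pass_lo)

# Noisy test: the sex-based prior is only partially screened off.
noisy_m = posterior_hi(0.6, 0.9, 0.2)  # ≈ 0.871
noisy_f = posterior_hi(0.5, 0.9, 0.2)  # ≈ 0.818
# Perfect measure of competence: the prior is fully screened off.
perfect_m = posterior_hi(0.6, 1.0, 0.0)  # = 1.0
perfect_f = posterior_hi(0.5, 1.0, 0.0)  # = 1.0
```

With the noisy test the two posteriors still differ a little, which is the caveat raised below; as the measurement gets more direct and reliable, the residual difference shrinks to nothing.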
I’d suggest you look up “screening off” in any text on Bayesian inference. The explanation on the wiki is not really the greatest.
I know what screening off is—I was saying that not all the information is screened off here. There are still other issues given the premise that names taken alone predict competence to some extent. For example, one resume may be more likely to be honest than another, and even if the resume is completely honest, reversion to the mean is likely to be larger in one case than another.
So: take a look at the paper, or at the informal summary of it to which I also linked, and then tell us whether you consider that—given all the information provided to the faculty in the application—knowing whether the candidate is male or female gives anywhere near enough further information to justify the differences in rated competence found by the researchers.
It seems to me that for that to be so, there would need to be absolutely huge differences between men and women, so big that no one with any brain and any integrity would deny that men are much much much better scientists than women. Do you think that’s the case?
It seems to me that for that to be so, there would need to be absolutely huge differences between men and women, so big that no one with any brain and any integrity would deny that men are much much much better scientists than women. Do you think that’s the case?
I think that regardless of the actual facts, assuming the difference is counterfactually that large, it’s still very plausible that almost everyone would still deny any difference exists, due to political and cultural forces.
While I don’t think there is such a large difference, I don’t accept the argument from “people wouldn’t pretend a big difference doesn’t exist”.
I wasn’t merely arguing that if there were such a large difference everyone would admit it. I was also arguing that if there were such a large difference we’d all know it. Obviously this argument will be more persuasive to people who (like me) think it’s clear from observation that there isn’t so huge a difference between men and women, than to people who don’t.
Just by way of reminder: we’d be looking for a difference large enough that, knowing
what degree a person got from what institution
what their grade point average was
what their GRE scores are
what was written about them by a faculty member writing a letter of recommendation
what they wrote themselves in an application letter
the difference between male and female suffices to make a difference to their estimated competence of 0.7 points on a 5-point scale. That would have to be either a really really enormous difference between men and women, or a really weird difference—weird in that whatever it is somehow manages to make a big difference in competence without having any effect on academic performance, test scores, or reported faculty opinions. Which presumably would require it to be quite narrow in scope but, again, really really enormous in size.
And it seems about as obvious to me that there isn’t such a difference as that (say) there isn’t a difference of 20cm in typical heights between men and women. Not just because if there were then it would be widely admitted (maybe it would, maybe not) but because it would be obvious.
Now, of course I could be wrong. There could be such an enormous difference and I could be somehow blind to it for some weird cultural-political reason or something. But is it really too much to suggest that when
the exact same job application gets radically different evaluations depending on whether the candidate’s name is “John” or “Jennifer”
it’s reasonable to take that as strong evidence for bias in favour of men over women that isn’t simply a proportionate response to actual differences in competence? I mean, it’s just Bayes’ theorem. How likely is that outcome if people do have such bias? How likely is it if they don’t? (Not “is it possible if they don’t?”. The answer to that sort of question is almost always yes, regardless of what’s true.)
I wasn’t merely arguing that if there were such a large difference everyone would admit it. I was also arguing that if there were such a large difference we’d all know it.
It’s not entirely clear that these are two different things. Admitting a highly politically incorrect opinion publicly and admitting it to oneself or one’s friends aren’t really completely separate. People tend to believe what they profess, and what they hear others profess.
That would have to be either a really really enormous difference between men and women, or a really weird difference—weird in that whatever it is somehow manages to make a big difference in competence without having any effect on academic performance, test scores, or reported faculty opinions. Which presumably would require it to be quite narrow in scope but, again, really really enormous in size.
I suspect one source of the disagreement between us may be that you’re assigning a high predictive ability to academic performance, while I don’t even assign it a very high correlation. This may be because my intuition is trained on different academic fields. I don’t have any experience with scientific lab managers (the job the study’s resumes applied for). I do have experience with programmers and other related fields, mostly below the doctoral level.
And it seems about as obvious to me that there isn’t such a difference as that (say) there isn’t a difference of 20cm in typical heights between men and women. Not just because if there were then it would be widely admitted (maybe it would, maybe not) but because it would be obvious.
When I first read that I thought: but there is about a 20cm difference in the average heights of men and women! Is gjm arguing the opposite point from what I thought, or maybe being sarcastic?
So I checked the average height differences between the sexes, and the male:female ratio is typically between 1.07-1.09. This translates to 8-15 cm of difference. So while it’s not as much as 20cm, it’s “only” a 2x difference from my prediction. Maybe I’m just bad at translating what I see into centimeters and this difference is much more obvious to you than it is to me.
But is it really too much to suggest that when the exact same job application gets radically different evaluations depending on whether the candidate’s name is “John” or “Jennifer” it’s reasonable to take that as strong evidence for bias in favour of men over women that isn’t simply a proportionate response to actual differences in competence?
I don’t disagree with this. I just think the cultural power of “politically correct” thinking is strong enough to make people ignore truths of the magnitude of this being counterfactually wrong and stick to accepted explanations.
So I checked the average height differences between the sexes, and the male:female ratio is typically between 1.07-1.09. This translates to 8-15 cm of difference. So while it’s not as much as 20cm, it’s “only” a 2x difference from my prediction. Maybe I’m just bad at translating what I see into centimeters and this difference is much more obvious to you than it is to me.
Maybe gjm’s System 1 automatically compensates for the difference—I know that unless I’m deliberately paying attention to people’s height I’m much less likely to notice it if a man is six feet tall than if a woman is six feet tall, and for all we know the same might apply to gjm.
Yeah, I know, people consider my barely basic¹ cooking skills exceptional merely because I happen to have a Y chromosome. That’s male privilege for ya.
At least that’s what my System 1 tells me, and I can’t think of a way to find out whether the impostor syndrome applies short of having someone who doesn’t know my gender taste what I cook.
you’re assigning a high predictive ability to academic performance, while I don’t even assign it a very high correlation.
Academic performance is one of the things known to the faculty (and the same between the “male” and “female” conditions); it is not the only one. The relevant question is: How much predictive power does the totality of the information provided have, and conditioned on that how much predictive power does the sex of the applicant have? It looks to me as if the answers, on any account of sex differences that I find credible, are “quite a bit” and “scarcely any”.
By “academic performance” I was referring to all of these bullet points:
what degree a person got from what institution
what their grade point average was
what their GRE scores are
what was written about them by a faculty member writing a letter of recommendation
Which (from your summary) I understand is pretty much all of the information in the application letter.
I’m not claiming that sex differences have predictive power; I’m claiming that academic performance doesn’t have as much power as we’d like and recruiters have to look for more info.
For sure. My apologies if I somehow gave the impression of disagreeing with that. The second half of what I called the “relevant question” above is of course the real key here, and it sounds as if maybe we agree about that.
Obviously this argument will be more persuasive to people who (like me) think it’s clear from observation that there isn’t so huge a difference between men and women, than to people who don’t.
No. If the argument is more clear because you think that it supports the outcome that you prefer you are engaging in motivated cognition. It’s an error in reasoning.
But I didn’t say, and I don’t think it’s true, that the argument is clearer “because [I] think it supports the outcome [I] prefer”. I “prefer” that outcome, in part, because it seems clear from observation that there isn’t that sort of huge difference between men and women. That is not a reasoning error, it’s straightforward inference.
So your theory is that the observed larger number of men at the upper end of any bell curve is due to sexism? And the larger number of men at the lower end of most bell curves, e.g., more men in prison, is due to... something?
Most of the data I’ve seen suggests women have lower variance; here Robin Hanson discusses some of the implications of variance differences in test scores.
There may well be differences in average performance between men and women in various intellectual tasks, in either direction. Indeed, there are some specific categories of tasks where the evidence for such differences seems strong; the most famous example is probably “mental rotation”. The difference for mental rotation is large but not enormous (about 1sd); my understanding is that all other sex differences found in scientific studies are smaller, and there are differences going in both directions.
It seems unlikely to me that there’s a big general cognitive deficit on either side. I believe girls are currently doing better than boys in pretty much all subjects at school in my country; in the past it was the other way around; so whatever differences there are (in this kind of task) must be smaller than the size of difference that can be induced by cultural effects. Of course this is consistent with deficits in very specific areas, with variance differences that affect how many really stellar performers there are of each sex, etc.
There may well be differences in variance between men and women. These differences might be fairly big, but it seems unlikely to me that they’re large enough to make huge differences at “ordinary” ability levels.
Once you start looking at the tails of the distributions, I expect them to be quite far from being Gaussian or even symmetrical. There are a lot more ways for things to go badly wrong than for them to go exceptionally right, after all. So I am skeptical about inferences from differences at the “low” end to differences at the “high” end.
There are certainly a lot more men than women in prison, especially if you look specifically at crimes of violence. However, lumping this together with variations in ability seems like a wilful embracing of the halo/horns effect; it seems like much of it will come from variations in sociopathy, enjoyment of violence, and other such characteristics that needn’t go along with worse performance as (say) a scientific lab manager.
For those last two reasons, any inference that looks much like “there are more men in prison, so we should expect more men to win Nobel prizes in physics” seems extremely suspect to me. Still more “there are more men in prison, so we should expect more men to make good lab managers”.
Putting all the above together: there may well be differences in competence between men and women; they may well be bigger at the highest levels; I wouldn’t expect the differences to be enormous except maybe at the very highest levels where variance differences can be a really big deal.
All of that is independent from the question of whether there are sexist attitudes—by which, for present purpose, I mean: whether there are systematic biases that make men get evaluated better than women relative to their actual ability, likely performance, etc. Or, for that matter, worse.
It seems to me that there is a lot of evidence that there are such sexist attitudes, generally favouring men over women. We’ve had a lot of discussion of one study which seems to me like very strong evidence for such attitudes in one domain; I posted some links to some others. There’s a pile of anecdote too, but of course the way that looks may simply reflect what anecdotes I happen to have encountered. (I think the available anecdotage is at any rate evidence that sexist attitudes in both directions exist.)
The possibility of real ability differences has some bearing on how to interpret the apparent evidence for sexist attitudes, but in at least some cases—e.g., the study we’ve discussed so much here—it seems to me that it doesn’t make much difference, because to make the evidence not be strong evidence for sexist attitudes it would be necessary for the ability differences to be (what seems to me to be) unrealistically large.
The relevant question for most practical purposes is not the statistical difference between men and women in some particular kind of ability, but the statistical difference conditional on the information usually available when hiring (or when considering promotion, or when allocating places at a university, or whatever). Nothing I have seen so far gives me reason to think that these differences are large, even though the information in question is limited and unreliable.
(To get quantitative for a moment, let’s suppose everything in sight is normally distributed. Some underlying ability: male and female both have mean 0, but s.d. 1 for men and 0.8 for women. Some measure of ability equals the actual ability level plus noise with mean 0 and s.d. 1. Actual job performance looks like underlying ability plus other factors with mean 0 and s.d. 0.5. Then conditional on measured ability being +2 (i.e., well above average but not stratospheric), mean predicted job performance is about +1.0 for male applicants (with s.d. 0.87) and +0.8 for female applicants (with s.d. 0.80), a difference of about 0.2 (male) standard deviations. Definitely not zero, but not exactly huge either and a lot smaller than the noise. I have no idea how realistic any of the numbers I’ve assumed here actually are, and would be glad to learn of credible estimates—though of course this is a toy model at best whatever numbers one plugs in.)
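A quick numeric check of the toy model, using the standard formulas for a Gaussian prior updated by a Gaussian-noise measurement. I take the women’s ability s.d. to be 0.8, which is the value that reproduces the quoted figures:

```python
import math

def predict(sigma_ability, measured, sigma_noise=1.0, sigma_job=0.5):
    """Mean and s.d. of predicted job performance given a noisy ability measurement.

    ability ~ N(0, sigma_ability^2)
    measurement = ability + N(0, sigma_noise^2)
    job performance = ability + N(0, sigma_job^2)
    """
    v = sigma_ability ** 2
    post_mean = measured * v / (v + sigma_noise ** 2)         # shrinkage toward the prior mean
    post_var = v * sigma_noise ** 2 / (v + sigma_noise ** 2)  # posterior variance of ability
    return post_mean, math.sqrt(post_var + sigma_job ** 2)

male = predict(1.0, 2.0)    # ≈ (+1.00, 0.87)
female = predict(0.8, 2.0)  # ≈ (+0.78, 0.80)
```

The gap in predicted means, about 0.2 male standard deviations, matches the figure in the comment; note how much of the advantage comes from the lower-variance group being shrunk harder toward the mean.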
Thanks. Though that high likelihood of mindkill makes it (1) more likely that someone will try to correct my obvious stupid errors when in fact I’m right and they’re confused, and (2) more likely that someone will rightly correct my obvious stupid errors when in fact they’re right and I’m confused but I won’t believe them. Still, the best we can do is the best we can do :-).
There are certainly a lot more men than women in prison, especially if you look specifically at crimes of violence. However, lumping this together with variations in ability seems like a wilful embracing of the halo/horns effect; it seems like much of it will come from variations in sociopathy, enjoyment of violence, and other such characteristics that needn’t go along with worse performance as (say) a scientific lab manager.
Note that the number of people who are in jail doesn’t merely depend on how many commit crimes, it depends on how many get caught committing crimes, and that such a statistic would anticorrelate with intelligence is very nearly obvious to me.
Yes, that’s a good point. How big the effect is depends on how the probability of getting caught varies with intelligence: I agree that it will almost always anticorrelate, but the dependence could be very strong or very weak. Anyone got any statistics on that?
Hmm. It might be possible to indirectly get some information about them by comparing the kinds of people that get caught for premeditated crime with the kinds of people that get caught for crimes of impulse, and then adjusting for any correlation of intelligence with self-control. The latter ought to be harder to cover up.
It’s quite easy to make wrong arguments in favor of positions that are true. If you think that an argument is good just because you think its conclusion is true, it’s time to pause and reflect, and to look at a situation where the same structure of argument would lead to a conclusion that’s false.
Even if men and women are on average equally qualified, that doesn’t mean that a specific subset is. For a hiring manager it’s not important whether there’s causation. Correlation in the data set is enough.
If you think that an argument is good just because you think its conclusion is true [...]
I agree, that’s a very bad sign. On the other hand, there’s nothing very alarming about thinking an argument is more persuasive when you agree with its premises. And often the premises and the conclusions are related to one another. That seems to me to be exactly the situation here.
Premise: There pretty clearly isn’t an enormous cognitive difference between men and women that makes women much less competent at brainwork, so much less competent that a moderate amount of information about a person’s abilities leaves a lot of male-female difference un-screened-off.
Argument: If indeed there isn’t, then the best explanation of findings of the sort we’ve been discussing is prejudice in favour of men and against women that has substantial impact on hiring.
Conclusion: There probably is such prejudice, and it probably leads (among other things) to underrepresentation of women in many brainwork-heavy jobs.
(Note: the premise, the argument, and the conclusion are all sketchy approximations. Filling in all the details would make the above maybe 20x longer than it is.)
I find the argument somewhat persuasive. This is partly because I find the premise plausible; some people might not (e.g., because the evidence they think they have regarding the relative abilities of men and women differs from the evidence I think I have); those people will find it less persuasive.
The premise in question is not the conclusion of the argument. It is not equivalent to the conclusion of the argument. It neither implies nor is implied by the conclusion of the argument. It is, for sure, somewhat related to the conclusion—e.g., by the fact that they are premise and conclusion of a short and simple argument—and doubtless there is a correlation between believing one and believing the other. I do not find this sufficient reason to think that finding the argument more credible if one accepts the premise is any sort of cognitive error.
Perhaps I am misunderstanding your argument somehow. I confess I don’t find it perfectly clear. Would you like to make it more explicit what error you think I am committing and why you think that?
Even if men and women are on average equally qualified that doesn’t mean that a specific subset is.
I agree and am not aware of having said or implied otherwise. One of the many modifications that would be needed to turn the argument-sketch above into something unambiguous and quantitative would be to replace “between men and women” with “between men applying for lab manager posts and women applying for lab manager posts”. If you think this makes an actual difference in here, I’d be interested to see the details.
For a hiring manager it’s not important whether there’s causation. Correlation in the data set is enough.
Depends on the details, of course, but mostly yes. Once again, though, I am having trouble working out what I’ve said that suggests I think otherwise.
If the argument is more clear because you think that it supports the outcome that you prefer you are engaging in motivated cognition.
And/or if the argument is less clear to other people because they think it supports the outcome that they don’t like they are engaging in motivated cognition.
it’s reasonable to take that as strong evidence for bias in favour of men over women that isn’t simply a proportionate response to actual differences in competence? I mean, it’s just Bayes’ theorem. How likely is that outcome if people do have such bias?
By the same logic you could say that someone who hires people with high SRT scores engages in SRT bias. Someone who hires based on SRT scores could simply reasonably believe that people with high SRT scores are more competent.
Google’s HR department has a variety of factors on which it judges candidates. A few years afterwards they reevaluate their hiring decisions. They run a regression analysis and see which factors predict job performance at Google. They learn from that analysis and switch their hiring decision to hiring people which score highly on the factors that the regression analysis found predictive.
That’s what making rational hiring decisions looks like. In the process they found that college marks aren’t very relevant for predicting job performance. Unfortunately, neither is being good at Fermi estimates, so those LW people who train Fermi estimates no longer get a benefit when they want a job at Google.
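In miniature, the kind of analysis described might look like this. All the factor names and data below are invented; the true coefficients in the synthetic data are chosen so that, echoing the claim above, grades barely predict later performance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical hiring factors recorded for past hires.
gpa = rng.normal(3.0, 0.5, n)
interview = rng.normal(0.0, 1.0, n)
work_sample = rng.normal(0.0, 1.0, n)
# Synthetic "later job performance": driven mostly by the interview and the
# work sample, hardly at all by GPA.
performance = 0.1 * gpa + 0.6 * interview + 0.8 * work_sample + rng.normal(0.0, 1.0, n)

# Regress performance on the recorded factors; the fitted coefficients
# recover which factors actually predicted performance.
X = np.column_stack([np.ones(n), gpa, interview, work_sample])
coefs, *_ = np.linalg.lstsq(X, performance, rcond=None)
```

Having run such a regression, a rational hiring process would then weight candidates by the factors with large fitted coefficients rather than by intuition about what ought to matter.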
Given current laws Google is not allowed to put values such as gender into the mix they use to make hiring decisions. That means that Google can’t make the hiring decisions that maximize predicted job performance.
The politics of the issue also make it pretty bad PR for them to publish results about the effects of a model that includes gender, if the correct value in the regression analysis would mean worse chances of women getting a job. It would be good PR for them if the correct value meant favoring women. No big company that does regression analysis on job performance has published data showing that factoring in gender would mean hiring more women.
Factoring gender into a regression analysis would mean that any bias against women in subjective competence evaluations in interviews would be canceled by that factor.
Just imagine a big company finding that by putting gender into its regression analysis it would hire more women and get better average job performance as a result. Don’t you think those companies would lobby Washington to be allowed to put gender into hiring decisions? The silence on the issue speaks.
It could be that the silencing by feminists who want to prevent the “privileged” from talking about the issue is strong enough that rational companies don’t dare to speak about their need to change their hiring practices to hire more women by making data-driven arguments. If that’s the case, that says a lot about the concept of privilege and its problem of shutting down rational arguments.
weird in that whatever it is somehow manages to make a big difference in competence without having any effect on academic performance, test scores, or reported faculty opinions
Imagine that academic performance has a really low value for predicting job performance. People who spend a lot of time preparing for tests get better academic marks. Women spend more time than men preparing for academic tests. That means a woman of equal competence scores higher because she puts in more work. The test is then no longer a strict measure of competence but a measure of effort put into scoring highly on the test. In that scenario it makes sense to infer that a woman with the same test score as a man is likely less competent than the man, as long as you are hiring for “competence” and not for “putting in effort to game the test”.
I mean, it’s just Bayes’ theorem. How likely is that outcome if people do have such bias? How likely is it if they don’t?
If you write down the math you see that it depends on your priors for the effect size of how gender correlates with job performance.
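A minimal sketch of that dependence. The likelihoods here are pure placeholders, not estimates:

```python
def posterior_p(prior_p, p_data_if_h, p_data_if_not_h):
    """Posterior probability of hypothesis h after seeing the data (Bayes' theorem)."""
    odds = (prior_p / (1 - prior_p)) * (p_data_if_h / p_data_if_not_h)
    return odds / (1 + odds)

# How likely is a 0.7-point rating gap on identical applications
# if evaluators are biased, vs. if they are not? (invented values)
p_gap_if_bias, p_gap_if_no_bias = 0.5, 0.05
for prior in (0.1, 0.5, 0.9):
    print(prior, round(posterior_p(prior, p_gap_if_bias, p_gap_if_no_bias), 3))
```

Even with a likelihood ratio of 10:1 in favour of bias, the posterior ranges from roughly 0.53 to 0.99 depending on the prior one starts with, which is the point being made.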
Imagine that academic performance has a really low value for predicting job performance. [...]
Sure. It is possible to construct possible worlds in which the behaviour of the academic faculty investigated in this study is rational and unbiased and sensible and good. The question is: How credible is it that our world is one of them?
If you think it is at all credible, then I invite you to show me the numbers. Tell me what you think the actual relationship is between gender, academic performance, job performance, etc. Tell me why you think the numbers you’ve suggested are credible, and why they lead to the sort of results found in this study. Because my prediction is that to get the sort of results found in this study you will need to assume numbers that are really implausible. I could, of course, be wrong; in which case, show me. But I don’t think anything is achieved by reiterating that it’s possible for the results of this study to be consistent with good and unbiased (more precisely: “biased” only in the sense of recognizing genuine relevant correlations) decisions by the faculty. We all (I hope) know that already. “Possible” is waaaaay too low a bar.
The question is: How credible is it that our world is one of them?
Making wrong arguments isn’t good even if it leads to a true conclusion. I haven’t argued that the world happens to be shaped a certain way. I argue that your arguments are wrong. LessWrong is primarily a forum for rational debate. If you argue for a position that I believe to be true but make arguments that are flawed, I will object. That’s because arguments aren’t soldiers.
On the matter of the extent of gender discrimination I don’t have a fixed opinion. My uncertainty interval is pretty large. Not narrowing your uncertainty interval because you fell for flawed arguments matters. The fact that humans are by default overconfident is well replicated.
But to come back to grades as a predictor: Google did find that academic performance is not a good predictor of job performance at Google.
Google doesn’t even ask for GPA or test scores from candidates anymore, unless someone’s a year or two out of school, because they don’t correlate at all with success at the company.
Of course Google won’t give you the relevant data the way an academic would, but Google is a company that wants to make money. It actually has a stake in hiring high-performing individuals.
While we are at it, you argue as if scientific studies nearly always replicate. We don’t live in a world where that’s true. Political debates tend to make people overconfident.
It looks to me as if that’s because you are treating them as if they are intended to be deductive inferences when in fact they are inductive ones.
At no point have I intended to argue that (e.g.) it is impossible that the results found in this study are the result of accurate rational evaluation by the faculty in question. Only that it is very unlikely. The fact that one can construct possible worlds where their behaviour is close to optimal is of rather little relevance to that.
Google did find that academic performance is no good predictor for job performance at Google.
Among people actually hired by Google. Who (1) pretty much all have very good academic performance (see e.g. this if it’s not clear why that’s relevant) and (2) will typically have been better in other respects if worse academically, in order to get hired: see e.g. this for more information.
I conjecture that, ironically, if Google measure again, they’ll find that GPA became a better predictor of job success when they stopped using it as an important metric for selecting candidates.
you argue as if scientific studies nearly always replicate
Not intentionally. I’m aware that they don’t. None the less, scientific studies are the best we have, and it’s not like there’s a shortage of studies finding evidence of the sort of sex bias we’re discussing.
None the less, scientific studies are the best we have
“Best we have” doesn’t justify a small confidence interval. If there is no good evidence available on a topic, the right thing to do is to be uncertain.
it’s not like there’s a shortage of studies finding evidence of the sort of sex bias we’re discussing
The default way to act in those situations is to form your opinions based on meta-analysis.
I conjecture that, ironically, if Google measure again, they’ll find that GPA became a better predictor of job success when they stopped using it as an important metric for selecting candidates.
You basically think that a bunch of highly paid statisticians made a very trivial error when a lot of money is at stake. How confident are you in that prediction?
If there is no good evidence available on a topic the right thing to do is to be uncertain.
I agree. (Did I say something to suggest otherwise?)
The default way [...] is to form your opinions based on meta-analysis.
Given the time and inclination to do the meta-analysis (or someone else who’s already done the work), yes. Have you perchance done it or read the work of someone else who has?
I agree. (Did I say something to suggest otherwise?)
On this topic it seems like your position is that you know that employers act irrationally and don’t hire women who would perform well.
My position is that I don’t know whether or not that’s a case. That means you have a smaller confidence interval. I consider the size of that interval unjustified.
Given the time and inclination to do the meta-analysis
In the absence of that work being done it’s not good to believe that one knows the answer.
My position is that I’ve seen an awful lot of evidence, both scientific and anecdotal, that seems best explained by supposing such irrationality. A few examples:
Another study of attitudes to hiring finding that for applicants early in their career just changing the name from female to male results in dramatically more positive assessment. (The differences were smaller with a candidate several years further into his/her career.)
A famous study by Goldberg submitted identical essays under male and female names and found that it got substantially better assessments with the male name. (I should add that this one seems to have been repeated several times, sometimes getting the same result and sometimes not. Different biases at different institutions?)
In each case, of course one can come up with explanations that don’t involve bias—as some commenters in this discussion have eagerly done. But it seems to me that the evidence is well past the point where denying the existence of sexist biases is one hell of a stretch.
I’m not really commenting on the object-level issue, just on the dubious logic of claiming that the name can’t matter if everything else is equal. In practice I’d guess it’s likely that the difference in rating is larger than justified.
Just that if you asked, ahead of time, a question like “So, what would it take to convince you beyond reasonable doubt that there’s bias favouring men over women in the academic employment market?”, the answer you’d get would likely be pretty much exactly what this study found.
Of course, there are always loopholes, just like a sufficiently ingenious creationist can always find contrived explanations for why some scientific finding is compatible with creationism. The speed of light is changing! The aftermath of Noah’s flood just happened to deposit corpses in the layers we find in the fossil record! The world was created with the appearance of great age! Similarly, we can find contrived explanations for why an identical-looking application gets such different assessments depending on whether it’s thought to come from a man or a woman. They might be really worried about maternity leave, and choose to define taking maternity leave as a variety of incompetence! There might be differences in competence between men and women that make a big difference to scientific productivity but are completely undetectable by academic testing and unmentionable by faculty! There might be really big differences that everyone conspires not to admit to the existence of! Sure, there might. And the earth might be 6000 years old.
Just that if you asked, ahead of time, a question like “So, what would it take to convince you beyond reasonable doubt that there’s bias favouring men over women in the academic employment market?”,
If this is indeed the case, then why isn’t the system approaching an equilibrium similar to the one the system reached for Asians, Irish, and Scottish Highlanders?
I don’t think I can usefully attempt to answer the question, because it isn’t perfectly clear to me (1) what sort of “equilibrium” you have in mind or (2) why you think I should “if this is indeed the case” expect the system to approach such an equilibrium. The linked article, consisting mostly of several pages of Macaulay, doesn’t do much to make either of those things clear to me.
The point is that Asians, Irish, and Scottish Highlanders were able to overcome negative stereotypes and “microaggressions”, and whatever other epicycles the SJW crowd feels like inventing, towards them. Why not blacks and women? You know, maybe there really are innate differences involved here.
The question seems like it has a false premise, namely that women and black people haven’t made progress in overcoming those things. In fact the treatment of both groups has improved tremendously over, let’s say, the last 50 years. Which is roughly what we might expect if in fact much of the difference in how they’d been treated before was due to bias.
(It’s probably also what we’d expect if the difference was not due to bias and these groups gained in political power for some reason other than having their genuine merits recognized better. So I’m not claiming this as positive evidence for that bias. But your argument, if I’ve understood it right, is that the bias theory must be wrong because if it were right then the treatment of these groups would be improving—and in fact it is improving. I’m not aware of any reason to think it’s converged to its final state.)
In fact the treatment of both groups has improved tremendously over, let’s say, the last 50 years.
I agree the “treatment” has improved. They still can’t make it in intellectually demanding occupations except by affirmative action. Let’s take the most technologically innovative part of the economy: Silicon Valley. Men massively outnumber women in technology jobs; as for race, Blacks are massively underrepresented and Asians are massively overrepresented.
They still can’t make it in intellectually demanding occupations except by affirmative action.
To avoid begging the question, that should be “don’t”.
200 years ago it was basically unthinkable for most women to have any role other than parent and housekeeper. 50 years ago it was basically unthinkable for women to have senior leadership roles or to work in the most intellectually demanding jobs. Now it is thinkable but uncommon; at least some of them appear to do pretty well but they are few in number. Prima facie, the continuing underrepresentation could be because of differences in ability distribution or personality traits or something; or because of (reduced but still remaining) prejudice; or some mixture of both.
Your argument a couple of posts back, if I understand it right, was: It must be because of differences in ability, because otherwise they’d be doing OK now just like East Asians are. So far as I can see, that argument only works if there’s some reason to think that if the past shortage of women in those roles were the result of prejudice, then by now it would be completely repaired. But I see no reason to expect that; prejudices can last a very long time. It looks to me (though I don’t have statistics; do you?) as if the current rate of change in women’s career prospects is still substantial, suggesting that if the last few decades’ changes are the result of prejudice reduction then the process isn’t yet complete and we shouldn’t assume that we are now at the endpoint of the process.
Note also that there were no people talking about “microaggressions” or for that matter much in the way of affirmative action when these groups succeeded.
That language seems to presuppose that whatever change they achieved has now stopped. As I said above, I think that’s very far from clear.
the closing of the gender gap appears to have stopped
That post seems long on anecdote and short on data.
Pages 10-11 of this document seems to show a steady decline in full-time gender pay gap from 1970 to 2010. The part-time figures are weirdly different; by eye they seem to show one downward jump circa 1974, then approximate stasis, then another downward jump circa 2005.
What it would take to show that there is bias favoring men over women would involve showing that men are more likely to be hired than women and that this imbalance in hiring rate is not justified.
It seems to me that for that to be so, there would need to be absolutely huge differences between men and women, so big that no one with any brain and any integrity would deny that men are much much much better scientists than women.
If you apply a high cutoff the difference is pretty big.
In the present case—as you would see if you looked at the study in question, which I therefore guess you haven’t—the level of ability we’re looking at (for “male” and “female” candidates) is not super-high, and in particular isn’t high enough for the sort of variance difference you have in mind to make a big difference.
These are candidates with a bachelor’s degree only, GPA of 3.2, and all the information in the application designed to make them look like decent but not stellar candidates for the job. We’re not talking about the extreme tails of the ability distribution here; the tails have already been cut off.
All the listed information is actually remarkably little. Like I said below, the most “objective” thing on your list is the GRE score, and even standardized test scores have high variance.
That’s not necessarily irrational in general. The other information on the resume does not prevent the name from also providing potentially relevant information.
I’d suggest you look up “screening off” in any text on Bayesian inference. The explanation on the wiki is not really the greatest.
But when you have information that is closer and more specific to the property you’re trying to predict, you should expect to increasingly disregard information that is further from it. Even if your prior asserts that sex predicts competence, when you have more direct measures of competence of a particular candidate, they should screen off the less-direct one in your prior.
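A toy Bayes’-theorem calculation (all numbers hypothetical, chosen only for illustration) may make the screening-off point concrete: even a prior in which sex predicts competence gets progressively overridden as the direct measure of competence becomes more informative.

```python
def p_competent(prior, p_pass_given_comp, p_pass_given_incomp):
    """Posterior P(competent | passed test), by Bayes' theorem."""
    evidence = (prior * p_pass_given_comp
                + (1 - prior) * p_pass_given_incomp)
    return prior * p_pass_given_comp / evidence

# Hypothetical priors in which sex alone predicts competence a little.
prior_m, prior_f = 0.5, 0.4

# A moderately informative test, with the same pass rates for both sexes
# (i.e. the test result is conditionally independent of sex given competence).
weak_gap = (p_competent(prior_m, 0.9, 0.2)
            - p_competent(prior_f, 0.9, 0.2))      # ~0.07, down from 0.10

# A more direct, more informative measure shrinks the gap further.
strong_gap = (p_competent(prior_m, 0.95, 0.05)
              - p_competent(prior_f, 0.95, 0.05))  # ~0.02
```

A perfectly informative measure would drive the gap to zero; the point is the direction of the effect, not the particular numbers.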
Is there evidence that the effect size of discrimination stays the same regardless of how much information an application provides?
I know what screening off is—I was saying that not all the information is screened off here. There are still other issues given the premise that names taken alone predict competence to some extent. For example, one resume may be more likely to be honest than another, and even if the resume is completely honest, reversion to the mean is likely to be larger in one case than another.
So: take a look at the paper, or at the informal summary of it to which I also linked, and then tell us whether you consider that—given all the information provided to the faculty in the application—knowing whether the candidate is male or female gives anywhere near enough further information to justify the differences in rated competence found by the researchers.
It seems to me that for that to be so, there would need to be absolutely huge differences between men and women, so big that no one with any brain and any integrity would deny that men are much much much better scientists than women. Do you think that’s the case?
I think that regardless of the actual facts, assuming the difference is counterfactually that large, it’s still very plausible that almost everyone would still deny any difference exists, due to political and cultural forces.
While I don’t think there is such a large difference, I don’t accept the argument from “people wouldn’t pretend a big difference doesn’t exist”.
I wasn’t merely arguing that if there were such a large difference everyone would admit it. I was also arguing that if there were such a large difference we’d all know it. Obviously this argument will be more persuasive to people who (like me) think it’s clear from observation that there isn’t so huge a difference between men and women, than to people who don’t.
Just by way of reminder: we’d be looking for a difference large enough that, knowing
what degree a person got from what institution
what their grade point average was
what their GRE scores are
what was written about them by a faculty member writing a letter of recommendation
what they wrote themselves in an application letter
the difference between male and female suffices to make a difference to their estimated competence of 0.7 points on a 5-point scale. That would have to be either a really really enormous difference between men and women, or a really weird difference—weird in that whatever it is somehow manages to make a big difference in competence without having any effect on academic performance, test scores, or reported faculty opinions. Which presumably would require it to be quite narrow in scope but, again, really really enormous in size.
And it seems about as obvious to me that there isn’t such a difference as that (say) there isn’t a difference of 20cm in typical heights between men and women. Not just because if there were then it would be widely admitted (maybe it would, maybe not) but because it would be obvious.
Now, of course I could be wrong. There could be such an enormous difference and I could be somehow blind to it for some weird cultural-political reason or something. But is it really too much to suggest that when
the exact same job application gets radically different evaluations depending on whether the candidate’s name is “John” or “Jennifer”
it’s reasonable to take that as strong evidence for bias in favour of men over women that isn’t simply a proportionate response to actual differences in competence? I mean, it’s just Bayes’ theorem. How likely is that outcome if people do have such bias? How likely is it if they don’t? (Not “is it possible if they don’t?”. The answer to that sort of question is almost always yes, regardless of what’s true.)
It’s not entirely clear that these are two different things. Admitting a highly politically incorrect opinion publicly and admitting it to oneself or one’s friends aren’t really completely separate. People tend to believe what they profess, and what they hear others profess.
I suspect one source of the disagreement between us may be that you’re assigning a high predictive ability to academic performance, while I don’t even assign it a very high correlation. This may be because my intuition is trained on different academic fields. I don’t have any experience with scientific lab managers (the job the study’s resumes applied for). I do have experience with programmers and other related fields, mostly below the doctoral level.
When I first read that I thought: but there is about a 20cm difference in the average heights of men and women! Is gjm arguing the opposite point from what I thought, or maybe being sarcastic?
So I checked the average height differences between the sexes, and the male:female ratio is typically between 1.07-1.09. This translates to 8-15 cm of difference. So while it’s not as much as 20cm, it’s “only” a 2x difference from my prediction. Maybe I’m just bad at translating what I see into centimeters and this difference is much more obvious to you than it is to me.
I don’t disagree with this. I just think the cultural power of “politically correct” thinking is strong enough to make people ignore truths of the magnitude of this being counterfactually wrong and stick to accepted explanations.
Maybe gjm’s System 1 automatically compensates for the difference—I know that unless I’m deliberately paying attention to people’s height I’m much less likely to notice it if a man is six feet tall than if a woman is six feet tall, and for all we know the same might apply to gjm.
I’m guessing this effect doesn’t just apply to height.
Yeah, I know, people consider my barely basic¹ cooking skills exceptional merely because I happen to have a Y chromosome. That’s male privilege for ya.
At least that’s what my System 1 tells me, and I can’t think of a way to find out whether the impostor syndrome applies short of having someone who doesn’t know my gender taste what I cook.
Academic performance is one of the things known to the faculty (and the same between the “male” and “female” conditions); it is not the only one. The relevant question is: How much predictive power does the totality of the information provided have, and conditioned on that how much predictive power does the sex of the applicant have? It looks to me as if the answers, on any account of sex differences that I find credible, are “quite a bit” and “scarcely any”.
By “academic performance” I was referring to all of these bullet points:
Which (from your summary) I understand is pretty much all of the information in the application letter.
I’m not claiming that sex differences have predictive power; I’m claiming that academic performance doesn’t have as much power as we’d like and recruiters have to look for more info.
For sure. My apologies if I somehow gave the impression of disagreeing with that. The second half of what I called the “relevant question” above is of course the real key here, and it sounds as if maybe we agree about that.
No. If the argument is more clear because you think that it supports the outcome that you prefer, you are engaging in motivated cognition. It’s an error in reasoning.
But I didn’t say, and I don’t think it’s true, that the argument is clearer “because [I] think it supports the outcome [I] prefer”. I “prefer” that outcome, in part, because it seems clear from observation that there isn’t that sort of huge difference between men and women. That is not a reasoning error, it’s straightforward inference.
So your theory is that the observed larger number of men at the upper end of any bell curve is due to sexism? And the larger number of men at the lower end of most bell curves, e.g., more men in prison, is due to… something?
Most of the data I’ve seen suggests women have lower variance; here Robin Hanson discusses some of the implications about variance in test scores.
Nope. (What did I say to make you think that?)
My position is as follows.
There may well be differences in average performance between men and women in various intellectual tasks, in either direction. Indeed, there are some specific categories of tasks where the evidence for such differences seems strong; the most famous example is probably “mental rotation”. The difference for mental rotation is large but not enormous (about 1sd); my understanding is that all other sex differences found in scientific studies are smaller, and there are differences going in both directions.
It seems unlikely to me that there’s a big general cognitive deficit on either side. I believe girls are currently doing better than boys in pretty much all subjects at school in my country nowadays; in the past it was the other way around; so whatever differences there are (in this kind of task) must be smaller than the size of difference that can be induced by cultural effects. Of course this is consistent with deficits in very specific areas, with variance differences that affect how many really stellar performers there are of each sex, etc.
There may well be differences in variance between men and women. These differences might be fairly big, but it seems unlikely to me that they’re large enough to make huge differences at “ordinary” ability levels.
Once you start looking at the tails of the distributions, I expect them to be quite far from being Gaussian or even symmetrical. There are a lot more ways for things to go badly wrong than for them to go exceptionally right, after all. So I am skeptical about inferences from differences at the “low” end to differences at the “high” end.
There are certainly a lot more men than women in prison, especially if you look specifically at crimes of violence. However, lumping this together with variations in ability seems like a wilful embracing of the halo/horns effect; it seems like much of it will come from variations in sociopathy, enjoyment of violence, and other such characteristics that needn’t go along with worse performance as (say) a scientific lab manager.
For those last two reasons, any inference that looks much like “there are more men in prison, so we should expect more men to win Nobel prizes in physics” seems extremely suspect to me. Still more “there are more men in prison, so we should expect more men to make good lab managers”.
Putting all the above together: there may be well be differences in competence between men and women; they may well be bigger at the highest levels; I wouldn’t expect the differences to be enormous except maybe at the very highest levels where variance differences can be a really big deal.
All of that is independent from the question of whether there are sexist attitudes—by which, for present purpose, I mean: whether there are systematic biases that make men get evaluated better than women relative to their actual ability, likely performance, etc. Or, for that matter, worse.
It seems to me that there is a lot of evidence that there are such sexist attitudes, generally favouring men over women. We’ve had a lot of discussion of one study which seems to me like very strong evidence for such attitudes in one domain; I posted some links to some others. There’s a pile of anecdote too, but of course the way that looks may simply reflect what anecdotes I happen to have encountered. (I think the available anecdotage is at any rate evidence that sexist attitudes in both directions exist.)
The possibility of real ability differences has some bearing on how to interpret the apparent evidence for sexist attitudes, but in at least some cases—e.g., the study we’ve discussed so much here—it seems to me that it doesn’t make much difference, because to make the evidence not be strong evidence for sexist attitudes it would be necessary for the ability differences to be (what seems to me to be) unrealistically large.
The relevant question for most practical purposes is not the statistical difference between men and women in some particular kind of ability, but the statistical difference conditional on the information usually available when hiring (or when considering promotion, or when allocating places at a university, or whatever). Nothing I have seen so far gives me reason to think that these differences are large, even though the information in question is limited and unreliable.
(To get quantitative for a moment, let’s suppose everything in sight is normally distributed. Some underlying ability: male and female both have mean 0, but s.d. 1 for men and 0.8 for women. Some measure of ability equals the actual ability level plus noise with mean 0 and s.d. 1. Actual job performance looks like underlying ability plus other factors with mean 0 and s.d. 0.5. Then conditional on measured ability being +2 (i.e., well above average but not stratospheric), mean predicted job performance is about +1.0 for male applicants (with s.d. 0.87) and +0.8 for female applicants (with s.d. 0.80), a difference of about 0.2 (male) standard deviations. Definitely not zero, but not exactly huge either and a lot smaller than the noise. I have no idea how realistic any of the numbers I’ve assumed here actually are, and would be glad to learn of credible estimates—though of course this is a toy model at best whatever numbers one plugs in.)
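Taking the women’s ability s.d. as 0.8 (an assumption chosen because it reproduces the quoted figures), the toy model’s numbers can be checked directly with the standard Gaussian conjugate-prior formulas:

```python
import math

def predicted_performance(ability_sd, measured, noise_sd=1.0, other_sd=0.5):
    """Posterior predictive of job performance given one noisy ability measure.

    ability ~ N(0, ability_sd^2); measured = ability + N(0, noise_sd^2);
    performance = ability + N(0, other_sd^2). Standard Gaussian conjugacy.
    """
    var_a = ability_sd ** 2
    shrink = var_a / (var_a + noise_sd ** 2)         # regression to the mean
    post_mean = shrink * measured                     # E[ability | measured]
    post_var = var_a * noise_sd ** 2 / (var_a + noise_sd ** 2)
    return post_mean, math.sqrt(post_var + other_sd ** 2)

male = predicted_performance(1.0, 2.0)    # ≈ (+1.00, s.d. 0.87)
female = predicted_performance(0.8, 2.0)  # ≈ (+0.78, s.d. 0.80)
```

Note that the lower-variance group is shrunk harder toward its mean, which is the reversion-to-the-mean point made elsewhere in the thread.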
Kudos for explicitly writing out your nuanced position on a high-likelihood-of-mindkill issue.
Thanks. Though that high likelihood of mindkill makes it (1) more likely that someone will try to correct my obvious stupid errors when in fact I’m right and they’re confused, and (2) more likely that someone will rightly correct my obvious stupid errors when in fact they’re right and I’m confused but I won’t believe them. Still, the best we can do is the best we can do :-).
Note that the number of people who are in jail doesn’t merely depend on how many commit crimes, it depends on how many get caught committing crimes, and that such a statistic would anticorrelate with intelligence is very nearly obvious to me.
(I agree with most of the rest of your comment.)
Yes, that’s a good point. How big the effect is depends on how the probability of getting caught varies with intelligence: I agree that it will almost always anticorrelate, but the dependence could be very strong or very weak. Anyone got any statistics on that?
I’d guess people who commit crimes but don’t get caught are very, very hard to get statistics about.
Hmm. It might be possible to indirectly get some information about them by comparing the kinds of people that get caught for premeditated crime with the kinds of people that get caught for crimes of impulse, and then adjusting for any correlation of intelligence with self-control. The latter ought to be harder to cover up.
It’s quite easy to make wrong arguments in favor of positions that are true. If you think that an argument is good just because you think its conclusion is true, it’s time to pause and reflect and look at a situation where the same structure of argument would lead to a conclusion that’s false.
Even if men and women are on average equally qualified, that doesn’t mean that a specific subset is. For a hiring manager it’s not important whether there’s causation. Correlation in the data set is enough.
I agree, that’s a very bad sign. On the other hand, there’s nothing very alarming about thinking an argument is more persuasive when you agree with its premises. And often the premises and the conclusions are related to one another. That seems to me to be exactly the situation here.
Premise: There pretty clearly isn’t an enormous cognitive difference between men and women that makes women much less competent at brainwork, so much less competent that a moderate amount of information about a person’s abilities leaves a lot of male-female difference un-screened-off.
Argument: If indeed there isn’t, then the best explanation of findings of the sort we’ve been discussing is prejudice in favour of men and against women that has substantial impact on hiring.
Conclusion: There probably is such prejudice, and it probably leads (among other things) to underrepresentation of women in many brainwork-heavy jobs.
(Note: the premise, the argument, and the conclusion are all sketchy approximations. Filling in all the details would make the above maybe 20x longer than it is.)
I find the argument somewhat persuasive. This is partly because I find the premise plausible; some people might not (e.g., because the evidence they think they have regarding the relative abilities of men and women differs from the evidence I think I have); those people will find it less persuasive.
The premise in question is not the conclusion of the argument. It is not equivalent to the conclusion of the argument. It neither implies nor is implied by the conclusion of the argument. It is, for sure, somewhat related to the conclusion—e.g., by the fact that they are premise and conclusion of a short and simple argument—and doubtless there is a correlation between believing one and believing the other. I do not find this sufficient reason to think that finding the argument more credible if one accepts the premise is any sort of cognitive error.
Perhaps I am misunderstanding your argument somehow. I confess I don’t find it perfectly clear. Would you like to make it more explicit what error you think I am committing and why you think that?
I agree and am not aware of having said or implied otherwise. One of the many modifications that would be needed to turn the argument-sketch above into something unambiguous and quantitative would be to replace “between men and women” with “between men applying for lab manager posts and women applying for lab manager posts”. If you think this makes an actual difference here, I’d be interested to see the details.
Depends on the details, of course, but mostly yes. Once again, though, I am having trouble working out what I’ve said that suggests I think otherwise.
And/or if the argument is less clear to other people because they think it supports the outcome that they don’t like they are engaging in motivated cognition.
By the same logic you could say that someone who hires people with high SRT scores engages in SRT bias. Someone who hires based on SRT scores could simply reasonably believe that people with high SRT scores are more competent.
Google’s HR department has a variety of factors on which it judges candidates. A few years afterwards they reevaluate their hiring decisions. They run a regression analysis and see which factors predict job performance at Google. They learn from that analysis and switch their hiring decision to hiring people which score highly on the factors that the regression analysis found predictive.
That’s what making rational hiring decisions looks like. In the process they found that college marks aren’t very relevant for predicting job performance. Unfortunately, being good at Fermi estimates isn’t either, so those LW people who train Fermi estimates don’t get benefits anymore when they want to get a job at Google.
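The evaluation loop described above might be sketched roughly as follows, on entirely synthetic data with made-up factor names (a real analysis would use multiple regression over many more factors):

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

random.seed(0)
n = 500
# Synthetic past hires: by construction, the interview score actually drives
# later performance and GPA barely does; no real data is involved.
interview = [random.gauss(0, 1) for _ in range(n)]
gpa = [random.gauss(0, 1) for _ in range(n)]
performance = [0.8 * i + 0.1 * g + random.gauss(0, 1)
               for i, g in zip(interview, gpa)]

# Rank each hiring factor by how well it predicted later job performance;
# future hiring then weights the predictive factors more heavily.
for name, factor in [("interview", interview), ("gpa", gpa)]:
    print(name, round(pearson(factor, performance), 2))
```

On data like this the retrospective analysis would tell the company to keep the interview score and drop GPA, which is the shape of the result attributed to Google above.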
Given current laws Google is not allowed to put values such as gender into the mix they use to make hiring decisions. That means that Google can’t make the hiring decisions that maximize predicted job performance.
The politics of the issue also make it pretty bad PR for them to publish results about the effects of a model that includes gender, if the correct value in the regression analysis would mean worse chances for women getting a job. It’s good PR for them if the correct value would mean favoring women. No big company that does regression analysis on job performance has published data showing that factoring in gender would mean hiring more women. Factoring gender into a regression analysis would mean that any bias against women in subjective competence evaluations in interviews would be canceled by that factor.
Just imagine if a big company found that by putting gender into their regression analysis they would hire more women and get better average job performance as a result. Don’t you think those companies would lobby Washington to allow them to put gender into hiring decisions? The silence on the issue speaks.
It could be that the silencing by feminists, who want to prevent the “privileged” from talking about the issue, is strong enough that rational companies don’t dare to speak about their need to change their hiring practices to hire more women by making data-driven arguments. If that’s the case, that says a lot about the concept of privilege and its problem of shutting down rational arguments.
Imagine that academic performance has a really low value for predicting job performance. People who spend a lot of time preparing for tests get better academic marks. Women spend more time than men preparing for academic tests. That means a woman of equal competence scores higher because she puts in more work. The test is no longer a strict measure of competence but a measure of effort at scoring highly on the test. In that scenario it makes sense to infer that a woman with the same test score as a man is likely less competent than the man, as long as you are hiring for “competence” and not for “putting in effort to game the test”.
If you write down the math you see that it depends on your priors for the effect size of how gender correlates with job performance.
Sure. It is possible to construct possible worlds in which the behaviour of the academic faculty investigated in this study is rational and unbiased and sensible and good. The question is: How credible is it that our world is one of them?
If you think it is at all credible, then I invite you to show me the numbers. Tell me what you think the actual relationship is between gender, academic performance, job performance, etc. Tell me why you think the numbers you’ve suggested are credible, and why they lead to the sort of results found in this study. Because my prediction is that to get the sort of results found in this study you will need to assume numbers that are really implausible. I could, of course, be wrong; in which case, show me. But I don’t think anything is achieved by reiterating that it’s possible for the results of this study to be consistent with good and unbiased (more precisely: “biased” only in the sense of recognizing genuine relevant correlations) decisions by the faculty. We all (I hope) know that already. “Possible” is waaaaay too low a bar.
Making wrong arguments isn’t good even if it leads to a true conclusion. I haven’t argued that the world happens to be shaped a certain way. I argue that your arguments are wrong. LessWrong is primarily a forum for rational debate. If you argue for a position that I believe to be true but make arguments that are flawed, I will object. That’s because arguments aren’t soldiers.
On the matter of the extent of gender discrimination I don’t have a fixed opinion. My uncertainty interval is pretty large. Not shrinking your uncertainty interval on the basis of flawed arguments matters. The fact that humans are by default overconfident is well replicated.
But to come back to grades as a predictor: Google did find that academic performance is not a good predictor of job performance at Google.
Of course Google won’t give you the relevant data as an academic does, but Google is a company that wants to make money. It actually has a stake in hiring high performing individuals.
While we are at it, you argue as if scientific studies nearly always replicate. We don’t live in a world where that’s true. Political debates tend to make people overconfident.
It looks to me as if that’s because you are treating them as if they are intended to be deductive inferences when in fact they are inductive ones.
At no point have I intended to argue that (e.g.) it is impossible that the results found in this study are the result of accurate rational evaluation by the faculty in question. Only that it is very unlikely. The fact that one can construct possible worlds where their behaviour is close to optimal is of rather little relevance to that.
Among people actually hired by Google. Who (1) pretty much all have very good academic performance (see e.g. this if it’s not clear why that’s relevant) and (2) will typically have been better in other respects if worse academically, in order to get hired: see e.g. this for more information.
I conjecture that, ironically, if Google measures again, they’ll find that GPA has become a better predictor of job success now that they’ve stopped using it as an important metric for selecting candidates.
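The statistical mechanism behind that conjecture is range restriction: if you hire on GPA, then among the people you actually hire, GPA varies much less, so its correlation with performance is attenuated. Here is a minimal simulation sketch of that effect; all the numbers (sample sizes, coefficients, the 10% hiring cutoff) are invented purely for illustration.

```python
# Illustrative simulation of range restriction: even when GPA genuinely
# predicts performance in the full applicant pool, the correlation
# *among those hired on GPA* is much weaker. Parameters are made up.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
applicants = []
for _ in range(20000):
    gpa = random.gauss(0, 1)
    performance = 0.6 * gpa + 0.8 * random.gauss(0, 1)  # GPA really matters
    applicants.append((gpa, performance))

# Hire only the top 10% by GPA.
applicants.sort(key=lambda a: a[0], reverse=True)
hired = applicants[:2000]

r_pool = pearson([g for g, _ in applicants], [p for _, p in applicants])
r_hired = pearson([g for g, _ in hired], [p for _, p in hired])
print(f"correlation in full pool: {r_pool:.2f}")
print(f"correlation among hired:  {r_hired:.2f}")
```

On these assumed parameters the pool-wide correlation is about 0.6 while the within-hired correlation drops below 0.3, which is why “GPA doesn’t predict performance among Google hires” doesn’t show that GPA is uninformative about applicants.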
Not intentionally. I’m aware that they don’t. Nonetheless, scientific studies are the best we have, and it’s not like there’s a shortage of studies finding evidence of the sort of sex bias we’re discussing.
“Best we have” doesn’t justify a small confidence interval. If there is no good evidence available on a topic, the right thing to do is to be uncertain.
The default way to act in those situations is to form your opinions based on meta-analyses.
You basically think that a bunch of highly paid statisticians made a very trivial error when a lot of money was at stake. How confident are you in that prediction?
I agree. (Did I say something to suggest otherwise?)
Given the time and inclination to do the meta-analysis (or someone else who’s already done the work), yes. Have you perchance done it or read the work of someone else who has?
Not very.
[EDITED to fix a punctuation typo]
On this topic it seems like your position is that you know that employers act irrationally and don’t hire women who would perform well. My position is that I don’t know whether or not that’s the case. That means you have a smaller confidence interval. I consider the size of that interval unjustified.
In the absence of that work being done it’s not good to believe that one knows the answer.
My position is that I’ve seen an awful lot of evidence, both scientific and anecdotal, that seems best explained by supposing such irrationality. A few examples:
The study we’ve been discussing here.
A neurobiologist transitions from female to male and is immediately treated as much more competent.
Another study of attitudes to hiring finding that for applicants early in their career just changing the name from female to male results in dramatically more positive assessment. (The differences were smaller with a candidate several years further into his/her career.)
A famous study by Goldberg submitted identical essays under male and female names and found that it got substantially better assessments with the male name. (I should add that this one seems to have been repeated several times, sometimes getting the same result and sometimes not. Different biases at different institutions?)
Auditioning orchestral players behind a screen makes women do much better relative to men.
In each case, of course one can come up with explanations that don’t involve bias—as some commenters in this discussion have eagerly done. But it seems to me that the evidence is well past the point where denying the existence of sexist biases is one hell of a stretch.
I’m not really commenting on the object-level issue, just on the dubious logic of claiming that the name can’t matter if everything else is equal. In practice I’d guess it’s likely that the difference in rating is larger than justified.
I’m not sure anyone’s quite claiming that.
Just that if you asked, ahead of time, a question like “So, what would it take to convince you beyond reasonable doubt that there’s bias favouring men over women in the academic employment market?”, the answer you’d get would likely be pretty much exactly what this study found.
Of course, there are always loopholes, just like a sufficiently ingenious creationist can always find contrived explanations for why some scientific finding is compatible with creationism. The speed of light is changing! The aftermath of Noah’s flood just happened to deposit corpses in the layers we find in the fossil record! The world was created with the appearance of great age! Similarly, we can find contrived explanations for why an identical-looking application gets such different assessments depending on whether it’s thought to come from a man or a woman. They might be really worried about maternity leave, and choose to define taking maternity leave as a variety of incompetence! There might be differences in competence between men and women that make a big difference to scientific productivity but are completely undetectable by academic testing and unmentionable by faculty! There might be really big differences that everyone conspires not to admit to the existence of! Sure, there might. And the earth might be 6000 years old.
If this is indeed the case, then why isn’t the system approaching an equilibrium similar to the one it reached for Asians, Irish, and Scottish Highlanders?
I don’t think I can usefully attempt to answer the question, because it isn’t perfectly clear to me (1) what sort of “equilibrium” you have in mind or (2) why you think I should “if this is indeed the case” expect the system to approach such an equilibrium. The linked article, consisting mostly of several pages of Macaulay, doesn’t do much to make either of those things clear to me.
Would you care to be more explicit?
The point is that Asians, Irish, and Scottish Highlanders were able to overcome negative stereotypes and “microaggressions”, and whatever other epicycles the SJW crowd feels like inventing, towards them. Why not blacks and women? You know, maybe there really are innate differences involved here.
The question seems like it has a false premise, namely that women and black people haven’t made progress in overcoming those things. In fact the treatment of both groups has improved tremendously over, let’s say, the last 50 years. Which is roughly what we might expect if in fact much of the difference in how they’d been treated before was due to bias.
(It’s probably also what we’d expect if the difference was not due to bias and these groups gained in political power for some reason other than having their genuine merits recognized better. So I’m not claiming this as positive evidence for that bias. But your argument, if I’ve understood it right, is that the bias theory must be wrong because if it were right then the treatment of these groups would be improving—and in fact it is improving. I’m not aware of any reason to think it’s converged to its final state.)
I agree the “treatment” has improved. They still can’t make it in intellectually demanding occupations except by affirmative action. Let’s take the most technologically innovative part of the economy: Silicon Valley. Men massively outnumber women in technology jobs, as for race Blacks are massively underrepresented and Asians are massively overrepresented.
To avoid begging the question, that should be “don’t”.
200 years ago it was basically unthinkable for most women to have any role other than parent and housekeeper. 50 years ago it was basically unthinkable for women to have senior leadership roles or to work in the most intellectually demanding jobs. Now it is thinkable but uncommon; at least some of them appear to do pretty well but they are few in number. Prima facie, the continuing underrepresentation could be because of differences in ability distribution or personality traits or something; or because of (reduced but still remaining) prejudice; or some mixture of both.
Your argument a couple of posts back, if I understand it right, was: It must be because of differences in ability, because otherwise they’d be doing OK now just like East Asians are. So far as I can see, that argument only works if there’s some reason to think that if the past shortage of women in those roles were the result of prejudice, then by now it would be completely repaired. But I see no reason to expect that; prejudices can last a very long time. It looks to me (though I don’t have statistics; do you?) as if the current rate of change in women’s career prospects is still substantial, suggesting that if the last few decades’ changes are the result of prejudice reduction then the process isn’t yet complete and we shouldn’t assume that we are now at the endpoint of the process.
Note also that there were no people talking about “microaggressions”, or for that matter much in the way of affirmative action, when these groups succeeded.
Also, the closing of the gender gap appears to have stopped during the last 30 years.
That language seems to presuppose that whatever change they achieved has now stopped. As I said above, I think that’s very far from clear.
That post seems long on anecdote and short on data.
Pages 10-11 of this document seems to show a steady decline in full-time gender pay gap from 1970 to 2010. The part-time figures are weirdly different; by eye they seem to show one downward jump circa 1974, then approximate stasis, then another downward jump circa 2005.
What it would take to show that there is bias favoring men over women would involve showing that men are more likely to be hired than women and that this imbalance in hiring rate is not justified.
If you apply a high cutoff the difference is pretty big.
In the present case—as you would see if you looked at the study in question, which I therefore guess you haven’t—the level of ability we’re looking at (for “male” and “female” candidates) is not super-high, and in particular isn’t high enough for the sort of variance difference you have in mind to make a big difference.
These are candidates with a bachelor’s degree only, GPA of 3.2, and all the information in the application designed to make them look like decent but not stellar candidates for the job. We’re not talking about the extreme tails of the ability distribution here; the tails have already been cut off.
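The variance-and-cutoff point can be made concrete. Even granting (hypothetically) that two groups share the same mean ability but one has somewhat higher variance, the resulting difference in tail fractions is only dramatic far out in the tail; at the moderate ability level these candidates occupy, it is small. The standard deviations and cutoffs below are invented for illustration, not estimates of any real population.

```python
# Illustrative: equal means, slightly different standard deviations.
# A variance difference produces a big between-group ratio only far
# out in the tail, not at moderate ability levels. Numbers invented.
from math import erf, sqrt

def frac_above(cutoff, mean=0.0, sd=1.0):
    """Fraction of a normal(mean, sd) population above `cutoff`."""
    z = (cutoff - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

SD_A, SD_B = 1.1, 1.0  # hypothetical 10% difference in spread
for cutoff in (0.5, 1.0, 2.0, 4.0):
    ratio = frac_above(cutoff, sd=SD_A) / frac_above(cutoff, sd=SD_B)
    print(f"cutoff {cutoff:+.1f} sd: tail-fraction ratio = {ratio:.2f}")
```

With these assumed numbers the ratio is only a few percent at a cutoff of half a standard deviation, but severalfold at four standard deviations; since the study’s candidates sit well inside the distribution, a variance story can’t do much work there.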
All the listed information is actually remarkably little. As I said below, the most “objective” thing on your list is the GRE score, and even standardized test scores have high variance.