Er, how exactly does this cause the man and the woman to get different salaries, unless they work at different companies, in different locations? And if so, then, contrary to the stipulations, they’re not doing “the exact same job”!
Maybe there is an element of randomness in every salary offer. Sometimes companies will over-offer or under-offer based on their impressions of the candidate. By applying to more places, the men have more opportunities to get lucky with high offers, which they are then likely to accept.
This seems like a stretch, a just-so story. Do you have any concrete reason to believe this to be the case?
I mean, plenty of companies in our world give variable salaries based on interview performance. Once you have that, the rest follows.
Another alternative: There could be companies that agree to match your highest competing offer. This also exists in our world and would explain the effect.
The most direct model of the problem does lead to that result, without any trickery; that seems like a concrete reason, and one you can calculate before looking at the real world.
Suppose each interview produces a Measured Competence Score (MCS), which is the candidate’s Competence Score multiplied by a random variable drawn from a normal distribution. We suppose men and women have the same Competence Score, from the assumption that they do the same work, but that men go to twice as many interviews as women because they have broader criteria for where they’re willing to work. Finally, suppose the algorithm for setting pay is simply MCS multiplied by some constant (which is indeed not directly related to gender).
It’s easy to see that a company that receives twice as many male candidates and selects the top x% of all candidates will end up with more male hires, at higher salaries, even though competence and the work done are exactly the same: each man’s accepted offer is the maximum of twice as many noisy measurements, so its expected value is higher.
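A short Monte Carlo run makes the effect concrete. Everything below is a sketch under stated assumptions: the noise scale, the interview counts, and the pay constant are illustrative choices of mine, not taken from the comment above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000          # candidates per gender
INTERVIEWS_M = 4     # men interview at twice as many places...
INTERVIEWS_W = 2     # ...as women, per the assumption above
PAY_CONSTANT = 1000  # pay = PAY_CONSTANT * accepted MCS

def accepted_mcs(n_interviews):
    # Each interview measures true competence (fixed at 1.0 for both
    # genders) times multiplicative noise; each candidate accepts the
    # highest offer, i.e. the max over their own noisy measurements.
    noise = rng.normal(loc=1.0, scale=0.2, size=(N, n_interviews))
    return noise.max(axis=1)

pay_m = PAY_CONSTANT * accepted_mcs(INTERVIEWS_M)
pay_w = PAY_CONSTANT * accepted_mcs(INTERVIEWS_W)
print(f"mean accepted pay, men:   {pay_m.mean():.0f}")
print(f"mean accepted pay, women: {pay_w.mean():.0f}")
# True competence is identical by construction, yet mean pay differs,
# because a max over more noisy draws is larger in expectation.
```

With these particular parameters the gap comes out around 8%, purely from the order-statistics effect.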
A very interesting point here is that a smarter employer who realises this bias exists can outcompete the market by correcting for it, for example by multiplying the MCS of women by a constant (calculated from the ratio of applicants). He will thus get more competent people at a given price point than his competitors. In this simple toy model, affirmative action works and makes the world more meritocratic (people are paid closer to the value they provide).
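As a sketch of that claim (again with my own illustrative distributions: lognormal true competence and the same noise model as above), one can check competence delivered per dollar at market pay, and estimate the correcting constant from the observed pay ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def hire_at_market_pay(n_interviews, pay_constant=1000, noise_sd=0.2):
    # True competence now varies across candidates; market pay is the
    # best competing offer: pay_constant * competence * max of n noises.
    competence = rng.lognormal(mean=0.0, sigma=0.3, size=N)
    noise = rng.normal(1.0, noise_sd, size=(N, n_interviews))
    return competence, pay_constant * competence * noise.max(axis=1)

cs_m, pay_m = hire_at_market_pay(4)  # men: twice the interviews
cs_w, pay_w = hire_at_market_pay(2)  # women: same competence distribution

print(f"competence per dollar, men:   {cs_m.sum() / pay_m.sum():.5f}")
print(f"competence per dollar, women: {cs_w.sum() / pay_w.sum():.5f}")
# One estimate of the correcting constant for women's MCS is the ratio
# of mean accepted pay between the groups:
print(f"implied correction constant:  {pay_m.mean() / pay_w.mean():.3f}")
```

At market prices the women’s side delivers more competence per dollar, so an employer who scales women’s MCS up by that constant outbids the market for the underpriced group while still paying less per unit of competence than competitors do for men.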
I also note that the important factors here are that interviews introduce variance into the measured competence score and that there is a disproportion in the number of applications per person by gender. A disproportion in the number of applications per gender alone does not seem to matter (e.g., in tech, if 10% of applications come from women and that accurately reflects the share of applicants, then there will be no average pay difference in the end). So affirmative action does not help with simple population disproportions, only with per-person application disproportions. In fact, this doesn’t even need to be corrected by gender: if applicants had to report how many interviews they were doing in total, the algorithm could correct for that directly, per person, and again reach an unbiased measurement of competence, as in the sketch below.
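A sketch of that gender-blind version, assuming the pay algorithm can trust the declared interview count: deflate each applicant’s best MCS by the expected maximum of that many noise draws. The deflator below is estimated by Monte Carlo, and the whole construction is mine, for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
NOISE_SD = 0.2

def expected_max_noise(n, samples=200_000):
    # Monte Carlo estimate of E[max of n noise draws]; n is the
    # applicant's declared total number of interviews.
    return rng.normal(1.0, NOISE_SD, size=(samples, n)).max(axis=1).mean()

# True competence is fixed at 1.0, so an unbiased estimator should
# average to 1.0 whatever the interview count:
for n in (1, 2, 4, 8):
    best_mcs = rng.normal(1.0, NOISE_SD, size=(100_000, n)).max(axis=1)
    corrected = best_mcs / expected_max_noise(n)
    print(f"n={n}: raw mean {best_mcs.mean():.3f}, "
          f"corrected mean {corrected.mean():.3f}")
```

The raw means drift upward with the interview count while the corrected means stay at 1.0: the same correction as before, applied per person, with no reference to gender.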