The most direct modelling of the problem does lead to that result without any trickery; that seems like a concrete reason, and one you can calculate before looking at the real world.
Suppose each interview leads to a Measured Competence Score (MCS), which is the candidate's Competence Score multiplied by a random variable drawn from a normal distribution. We suppose men and women have the same Competence Score, from the assumption that they do the same work, but that men go to twice as many interviews as women because they have more accepting criteria on where to work. Finally, suppose the algorithm for setting pay is simply MCS multiplied by some constant (which is indeed not directly related to gender).

It's easy to see that a company receiving twice as many male candidates and selecting the top x% of all candidates will end up with more male hires at higher salaries, even though competence and work done are exactly the same: a person who does more interviews gets more draws of the noise term, so their best offer is inflated more by luck.
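To make this concrete, here is a minimal Monte Carlo sketch of the toy model. All the specific numbers (noise level, interview counts, a 20% hiring cutoff, the pay constant) are illustrative assumptions, not estimates from real data.

```python
# Minimal Monte Carlo sketch of the toy model above.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 200_000                              # candidates per gender
COMPETENCE = 1.0                         # identical true competence by assumption
NOISE_SD = 0.3                           # interview noise around 1.0
PAY_CONSTANT = 50_000                    # pay = constant * MCS, gender-blind
INTERVIEWS = {"women": 1, "men": 2}      # men interview at twice as many places

# Each interview yields MCS = Competence Score * N(1, NOISE_SD).
scores = {g: COMPETENCE * rng.normal(1.0, NOISE_SD, size=(N, k))
          for g, k in INTERVIEWS.items()}

# The market hires the top 20% of all applications; a hired candidate's
# pay is set by the best offer they received (their highest MCS).
cutoff = np.quantile(np.concatenate([s.ravel() for s in scores.values()]), 0.80)

for gender, s in scores.items():
    best_offer = s.max(axis=1)           # best MCS across that person's interviews
    hired = best_offer > cutoff
    pay = PAY_CONSTANT * best_offer[hired]
    print(f"{gender}: hired {hired.mean():.1%}, mean pay {pay.mean():,.0f}")
```

With these numbers, men end up both hired more often and paid more on average, despite identical true competence.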
A very interesting point here is that a smarter employer who realises this bias exists can outcompete the market by correcting for it, for example by multiplying women's MCS by a constant (calculated from the ratio of applicants). They will thus get more competent people for a given price point than their competitors. In this simple toy model, affirmative action works and makes the world more meritocratic (people are paid closer to the value they provide).
I also note that the important factors here are that interviews lead to variance in the measured competence score and that the number of applications per person differs by gender. It does not seem to matter if there is only a disproportion in the total number of applications per gender (e.g. in tech, if 10% of applications come from women and that accurately reflects the number of applicants, then there will be no average pay difference in the end; so affirmative action does not help with simple population disproportions, only with applications-per-person disproportions). In fact, this doesn't need to be corrected by gender at all. If applicants had to report how many interviews they were doing in total, the algorithm could correct for that directly, per person, and again reach an unbiased measurement of competence.
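A sketch of that gender-blind, per-person correction, under the same illustrative assumptions as above: the best offer out of k interviews is inflated by roughly the expected maximum of k noise draws, so dividing the observed score by that factor (estimated here by simulation) gives an approximately unbiased competence estimate. With k fixed within each group, the same factor is just a constant per gender, i.e. the multiplier the smarter employer above would apply.

```python
# Sketch of the gender-blind, per-person correction: deflate each
# candidate's best observed MCS by the expected maximum of k noise
# draws, where k is the number of interviews they report doing.
import numpy as np

rng = np.random.default_rng(1)

N = 200_000
COMPETENCE = 1.0        # identical true competence, as in the toy model
NOISE_SD = 0.3

def expected_best_noise(k, sd=NOISE_SD, samples=1_000_000):
    """Monte Carlo estimate of E[max of k draws of N(1, sd)]."""
    draws = rng.normal(1.0, sd, size=(samples, k))
    return draws.max(axis=1).mean()

for gender, k in {"women": 1, "men": 2}.items():
    # Best offer a candidate receives across their k interviews.
    best_mcs = COMPETENCE * rng.normal(1.0, NOISE_SD, size=(N, k)).max(axis=1)
    corrected = best_mcs / expected_best_noise(k)   # uses k only, not gender
    print(f"{gender}: mean raw score {best_mcs.mean():.3f}, "
          f"mean corrected score {corrected.mean():.3f}")
```

The raw best scores come out inflated for the group doing more interviews, while the corrected scores recover the same true competence for both groups without the algorithm ever looking at gender.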