The real problem comes in when employers decide that they need exceptional people but can’t actually identify them. If filtering based on race were allowed, employers would use it (“the best mathematicians are disproportionately white and Asian, therefore if I hire someone white or Asian I’ll get an above-average mathematician”).
Basically, you’re right, except for the problem that humans mix up p(a|b) and p(b|a).
“The best mathematicians are disproportionately white and Asian, therefore if I hire a white or Asian I’ll get an above-average mathematician” is Bayesianly correct if race is the only thing you know about the candidates; but race isn’t all you know (and a randomly chosen white or Asian person is still very unlikely to be a decent mathematician), and the other information you have about the candidates most likely screens off most of the information that race gives you about maths skills.
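To make the base-rate point concrete, here is a minimal sketch with made-up numbers (none of them come from this thread); it only illustrates how a large p(white or Asian | top mathematician) can coexist with a tiny p(top mathematician | white or Asian):

```python
# Illustrative only: every number below is hypothetical, chosen to show how
# p(a|b) and p(b|a) can come apart, not to describe any real population.

p_top = 1e-4            # P(top mathematician) in the general population (hypothetical)
p_race = 0.6            # P(white or Asian) in the applicant pool (hypothetical)
p_race_given_top = 0.9  # P(white or Asian | top mathematician) (hypothetical)

# Bayes' theorem: P(top | white or Asian) = P(white or Asian | top) * P(top) / P(white or Asian)
p_top_given_race = p_race_given_top * p_top / p_race

print(f"P(white or Asian | top mathematician) = {p_race_given_top:.2f}")
print(f"P(top mathematician | white or Asian) = {p_top_given_race:.6f}")
# ~0.00015: conditioning on race barely moves you off the 0.0001 base rate,
# which is the point about a randomly chosen white or Asian person.
```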
Hmm, so E(the Math SAT score that X deserves | the Math SAT score that X got is 800, and X is male) is just 4 points more than E(the Math SAT score that X deserves | the Math SAT score that X got is 800, and X is female). That doesn’t sound like terribly much to me, and I’d guess there are plenty of people who, due to corrupted mindware and stuff, would treat a male who got 800 better than a female who got 800 to a much greater extent than is justified by that 4-point difference in the Bayesian posterior expected values. (Cf. the person who told whowhowho that Obama must be dumber than Bush; surely we know much more about them than their races?)
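As a hedged sketch of where a small gap like that can come from (the normal-normal shrinkage model is my assumption, and every number below is made up rather than taken from the linked comment): when the observed score is treated as the “deserved” score plus noise, the posterior expected deserved score is pulled slightly toward the group mean, so even a sizeable group-mean difference shrinks to only a few points of posterior difference when the test is reasonably reliable.

```python
# Sketch under assumed normal-normal model; all numbers are hypothetical.

def posterior_mean(observed, group_mean, var_true=100.0**2, var_noise=30.0**2):
    """E[deserved score | observed score, group] when observed = deserved + noise."""
    reliability = var_true / (var_true + var_noise)  # weight on the observation
    return (1 - reliability) * group_mean + reliability * observed

# Hypothetical group means on the Math SAT scale.
posterior_male = posterior_mean(800, group_mean=530)
posterior_female = posterior_mean(800, group_mean=500)

print(posterior_male - posterior_female)  # ~2.5 points here, not the full 30-point group gap
```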
I’m not sure this is correct, but given how they’re surrounded by spin doctors and other image manipulators, I sometimes wonder how much we really know about prominent politicians, especially when the politician in question is new and you can’t look at his record.
Ironically, this is a case where p(a|b) is in fact a good proxy for p(b|a), and the kind of filtering you’re objecting to is the correct thing to do from a Bayesian perspective.
See also: Offended by conditional probability
Read the comment I linked to, and possibly the subsequent discussion, if you’re interested in these things.