Your numerical model at the beginning looks incoherent.
Suppose there are two kinds of people, smart people and stupid people; and suppose, with wild starry-eyed optimism, that the populace is split 50-50 between them. Smart people would add enough value to a company to be worth a $100,000 salary each year, but stupid people would only be worth $40,000.
If employers could never distinguish smart from stupid, even after hiring, then they could select randomly and pay $70,000 per year (the expected value of a hire given the 50:50 split). If they can eventually distinguish, then they could pay this salary to new graduates (who have not worked before) and adjust salaries once each employee's type becomes apparent.
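The pooling wage can be checked directly from the example's figures (the variable names below are just for illustration):

```python
# Figures from the example: a 50:50 mix of workers worth
# $100,000/year (smart) and $40,000/year (stupid).
smart_value, stupid_value = 100_000, 40_000
p_smart = 0.5

# With no way to distinguish types, an employer hiring at random
# can afford to pay the expected productivity of a hire.
pooled_salary = p_smart * smart_value + (1 - p_smart) * stupid_value
print(pooled_salary)  # 70000.0
```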
Why? Consider the thought process of a smart person when deciding whether or not to take the course. She thinks “I am smart, so if I take the course, I will certainly pass. Then I will make an extra $60,000 at this job. So my costs are $50,000, and my benefits are $60,000. Sounds like a good deal.”
This seems to assume one stays on the job for only one year. In a world where employers initially pay $70,000 for a 50:50 mix of smart and stupid, a smart graduate gains only $30,000 in income for a $50,000 cost, so no one would start taking the course. But your other claims are incompatible with staying more than one year.
The stupid person, on the other hand, thinks: “As a stupid person, if I take the course, I have a 50% chance of passing and making $60,000 extra, and a 50% chance of failing and making $0 extra. My expected benefit is $30,000, but my expected cost is $50,000. I’ll stay out of school and take the $40,000 salary for non-graduates.”
This assumes that one stays in the job for only one year. But if one gets to stay in the job for two years, the benefit doubles and everyone benefits by taking the course.
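The cost-benefit comparisons above can be sketched with the example's numbers ($50,000 course cost, pass probabilities of 1 for smart and 0.5 for stupid; the helper function is hypothetical):

```python
# Expected net payoff of taking the course, under the example's assumptions.
COURSE_COST = 50_000

def net_benefit(annual_gain, pass_prob, years):
    """Expected extra income over the horizon, minus the course cost."""
    return pass_prob * annual_gain * years - COURSE_COST

# One-year horizon, premium over the $40,000 non-graduate wage:
print(net_benefit(60_000, 1.0, 1))   # smart:  10000  -> takes the course
print(net_benefit(60_000, 0.5, 1))   # stupid: -20000 -> stays out

# One-year horizon, but against the $70,000 pooling wage (the objection above):
print(net_benefit(30_000, 1.0, 1))   # smart:  -20000 -> no one starts

# Two-year horizon: the benefit doubles and both types come out ahead.
print(net_benefit(60_000, 1.0, 2))   # smart:  70000
print(net_benefit(60_000, 0.5, 2))   # stupid: 10000
```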
And in a world where everyone takes the course, those with passing scores will be two-thirds smart and one-third stupid (all of the smart half pass, but only half of the stupid half do), with an expected productivity of $80,000, not $100,000, so employers could not afford the $100,000 salary.
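The composition of the passing pool follows from the stated pass rates (all smart people pass; stupid people pass with probability 0.5):

```python
# Shares of the population that pass when everyone takes the course.
smart_passers = 0.5 * 1.0    # smart half, all pass
stupid_passers = 0.5 * 0.5   # stupid half, half pass

# Fraction of passers who are smart.
share_smart = smart_passers / (smart_passers + stupid_passers)
print(share_smart)  # 0.666... : two-thirds of passers are smart

# Expected productivity of a randomly chosen passer.
expected_productivity = share_smart * 100_000 + (1 - share_smart) * 40_000
print(round(expected_productivity))  # 80000, well below the $100,000 salary
```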
It’s not so much incoherent as contrived, and your points are valid. But is a more realistic example really that useful for the purpose it’s serving?