Grade inflation in the United States began with the Vietnam War draft. University students were exempt from the draft as long as they maintained high enough grades, so students became less willing to stretch their abilities and professors less willing to accurately report them.
The issue is that grades are trying to serve three separate purposes:
Regular feedback to students on how well they understand the material.
Personal recommendations from teachers to prospective employers/universities.
Global comparisons between students.
Administrators mostly believe grades serve the third purpose, so they advocate fudging the numbers. “Last year, our new policies implemented at Goodhart School of Excellence improved the GPA by 0.5 points! Look at how successful our students are compared to others.” Teachers, on the other hand, usually want grades to serve the first two. If we want to prevent Goodharting, we can either give teachers back their power or use other comparison systems.
This is already kind of a thing. Top universities no longer use GPA as a meaningful metric, except as a demerit for imperfect grades, and rely more on standardized test scores. There was a brief period where they tried going test-optional, but MIT quickly reversed that trend. I don’t think a standardized exam is a perfect solution: how do you compare project- or lab-based classes, like computer science and chemistry? In those cases, students could submit their work to third parties, much like the capstone project in AP Seminar & Research.
If we can get administrators to use a better (unfudgeable) comparator, I’m not actually terribly worried about whether teachers use grades to give regular feedback or recommend their students. It’s just important that the comparator is hard enough to actually show a spread, even at the very top. The number of “perfect” ACT scores has increased by 25x in the past 25 years, and I understand why from a money-making perspective, but it’s really unfortunate that there are several dozen sixth-graders who could get a 36 in any given section (maybe not the same sixth-graders for each section). How is one school supposed to show it’s better at helping these kinds of students than another school? The answer right now is competitions; in seventh grade, I (and half a dozen others) switched schools solely because the other school had won the state MATHCOUNTS competition. Word quickly gets around which schools have the best clubs, though it really is just the club, not the classes.