I don’t think this is just about status. There are also issues around choosing the metric and Goodharting. I work in software development, and it would be really nice if we could measure a single thing and know how effective engineers are, but every metric we’ve thought of is easy to game. You could announce that you’re going to start measuring lines of code written and that it won’t be used in performance reviews, but the only sane thing for an engineer to do would be to start gaming the metric immediately: lines of code is correlated with performance, and how long will management resist the temptation to use the metric?
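To make that failure mode concrete, here’s a toy sketch in Python (entirely made-up numbers and a made-up effort model, just to illustrate the decoupling): lines of code tracks real output as long as nobody optimizes for it, and stops tracking it as soon as effort shifts toward padding.

```python
# Toy model (not real data): each engineer has one unit of effort to split
# between genuine work and padding, and LOC is the measured proxy.
import random

random.seed(0)

def lines_of_code(real_output, padding):
    # Genuine output does produce code, but padding inflates LOC far more cheaply.
    return 50 * real_output + 200 * padding

skills = [random.uniform(0.2, 1.0) for _ in range(5)]  # real output per unit of work

# Before anyone optimizes the metric: all effort goes into real work,
# so LOC is simply proportional to real output.
honest = [(s, lines_of_code(s, 0.0)) for s in skills]

# Once the metric is (even implicitly) rewarded: most effort shifts to padding,
# so LOC goes up while real output goes down.
gamed = [(0.3 * s, lines_of_code(0.3 * s, 0.7)) for s in skills]

print("honest (real output, LOC):", honest)
print("gamed  (real output, LOC):", gamed)
```

The point is just that the proxy stops carrying information about the target once people have a reason to optimize it directly.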
It seems like academia has a similar problem, where publications are measurable and gameable, and if we could somehow stop measuring publications it would make the field better.
A solution is technical if it is precise enough to be goodharted, and social if it’s so cryptonormatively imprecise that it can be used to coalitionally extract value.
In medicine and education, one can improve results by improving inputs (only accepting easier patients or smarter students), so pressure to provide measurable results could translate into pressure to game them.
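A toy simulation of that selection effect (hypothetical probabilities, purely illustrative): two clinics apply the identical treatment, but the one that only admits low-severity patients reports much better outcomes.

```python
# Toy simulation (made-up probabilities): the same treatment looks better
# when intake is restricted to easier patients.
import random

random.seed(1)

def recovers(severity, treatment_boost=0.3):
    # Baseline recovery chance falls with severity; treatment adds a fixed boost.
    return random.random() < (1.0 - severity) * 0.6 + treatment_boost

patients = [random.random() for _ in range(10_000)]  # severity in [0, 1)

all_comers  = [recovers(s) for s in patients]
cherry_pick = [recovers(s) for s in patients if s < 0.3]  # only the easy cases

print("admits everyone:  %.1f%% recover" % (100 * sum(all_comers) / len(all_comers)))
print("easy intake only: %.1f%% recover" % (100 * sum(cherry_pick) / len(cherry_pick)))
```

Nothing about the quality of care changed; only the intake did.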
Then again, maybe prediction markets could help here. Or maybe those would be gamed too, since the relevant information (on patients’ health, or on students’ skills) is not publicly available.
There’s an upper limit to ‘easier patients’ though: you can’t be more than perfectly healthy and compliant, after all.