Can we hold intellectuals to similar public standards as athletes?
Professional athletes are arguably the most publicly understood meritocracy around. There are public records of thousands of different attributes for each player. When athletes stop performing well, this is discussed at length by enthusiasts, and it’s understood when they are kicked off their respective teams. The important stuff is out in the open. There’s a culture of honest, open, and candid communication around meritocratic competence and value.
This isn’t only valuable for team decisions. It also helps data scientists learn which sorts of characteristics and records correlate best with long-term success. As sufficient data is collected, whole new schools of thought emerge, and these coincide with innovative and effective strategies for future talent selection. See Moneyball or the entire field of sabermetrics.
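As a toy illustration of the sports side, here is a minimal sketch of the kind of question sabermetrics-style analysis asks: which attributes correlate with a long-term outcome? All numbers and stat names below are invented for illustration.

```python
# Toy sketch: which player stats correlate with long-term success?
# Every number and column name here is made up for illustration.
import statistics

players = [
    # (points_per_game, assists_per_game, seasons_played)
    (27.1, 7.4, 18),
    (12.3, 2.1, 6),
    (22.8, 5.0, 14),
    (8.9, 1.5, 4),
    (19.4, 6.2, 11),
]

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ppg = [p[0] for p in players]
apg = [p[1] for p in players]
longevity = [p[2] for p in players]

print("points vs. seasons played:", round(pearson(ppg, longevity), 2))
print("assists vs. seasons played:", round(pearson(apg, longevity), 2))
```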
In comparison, our standards for intellectuals are quite primitive. If I want to get a sense of just how good LeBron James is, I can look through tables and tables of organized data and metrics. If I don’t trust one metric, I have dozens of others to choose from.
However, if I want to know how much to trust and value Jonathan Haidt I’m honestly not sure what to do. Some ideas:
Read most of his work, then do a large set of Epistemic Spot Checks and more to get a sense of how correct and novel it is.
Teach myself a fair amount of Psychology, get a set of academic journal subscriptions, then read critiques and counter-critiques of his work.
Investigate his citation stats.
Read the “Reception” section of his Wikipedia page and hope that my attempt to infer his qualities from it is successful.
Use some fairly quick “gut level” heuristics to guess.
Ask my friends and hope that they did a thorough job of the above, or have discussed the issue with other friends who did.
Of course, even if I do this for Jonathan Haidt broadly, I’d really want narrow breakdowns. Maybe his old work is really great, but in the last 5 years his motives have changed. Perhaps his knowledge and discussion of some parts of Psychology is quite on point, but his meanderings into Philosophy are simplistic and overstated.[1]
This is important because evaluating intellectuals is dramatically more impactful than evaluating athletes. When I choose incorrectly, I could easily waste lots of time and money, or be dramatically misled and develop a systematically biased worldview. It also leads to incentive problems. If intellectuals recognize the public’s lack of discernment, then they have less incentive to actually do a good job, and more incentive to signal harder in uncorrelated ways.
I hear one argument: “It’s obvious which intellectuals are good and bad. Just read them a little bit.” I don’t agree with this argument. For one, Expert Political Judgment provided a fair amount of evidence for just how poorly calibrated many famous and well-esteemed intellectuals seem to be.
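To make “calibration” concrete: it is measurable. Tetlock scored experts’ stated probabilities against actual outcomes, and one standard measure for this is the Brier score (lower is better). Here is a minimal sketch with invented forecasts:

```python
# Minimal Brier score sketch: compare an intellectual's stated
# probabilities against what actually happened (1 = event occurred).
# The forecasts and outcomes below are invented for illustration.

forecasts = [0.9, 0.7, 0.8, 0.95, 0.6]   # claimed probabilities
outcomes  = [1,   0,   1,   0,    1]     # what actually happened

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0.0 is perfect; 0.25 matches always saying 50%
```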
One could imagine an organization saying “enough is enough” and setting up a list of comprehensive grades for public intellectuals on an extensive series of metrics. I imagine this would be treated with a fantastical amount of vitriol.
“What about my privacy? This is an affront to Academics you should be trying to help.”
“What if people misunderstand the metrics? They’ll incorrectly assume that some intellectuals are doing a poor job, and that could be terrible for their careers.”
“We can’t trust people to see some attempts at quantifying the impossible. Let them read the sources and make up their own minds.”
I’m sure professional athletes said the same thing when public metrics began to appear. Generally, new signals get pushback. There will be winners and losers, and the losers fight back much harder than the winners cheer. In this case the losers would likely be the least epistemically modest of the intellectuals, a particularly nasty bunch. But if signals can persist, they get accepted as part of the way things are and life moves on.
Long and comprehensive grading systems similar to those used for athletes would probably be overkill, especially to start with. Any work here would be very expensive to carry out, and it’s not obvious who would pay for it. I would expect “intellectual reviews” to get fewer hits than “TV reviews”, but for those hits to be much more impactful. I’d be excited to hear simple proposals. Perhaps it would be possible to get many of the possible benefits without facing many of the possible costs.
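To gesture at what “simple” might mean, here is a hedged sketch of a minimal public scorecard as a data structure. The metric names are my own guesses at a starting point, not a vetted rubric:

```python
# Hypothetical minimal scorecard for a public intellectual.
# The metrics and weights are illustrative guesses, not a validated rubric.
from dataclasses import dataclass

@dataclass
class Scorecard:
    name: str
    domain: str          # e.g. "moral psychology"
    period: str          # graded per time window, per the breakdowns above
    calibration: float   # 0-10: track record on checkable claims
    novelty: float       # 0-10: new ideas vs. repackaging
    transparency: float  # 0-10: cites sources, admits errors

    def overall(self) -> float:
        """Unweighted mean; real weights would require actual research."""
        return (self.calibration + self.novelty + self.transparency) / 3

card = Scorecard("Example Intellectual", "psychology", "2010-2015",
                 calibration=6.5, novelty=8.0, transparency=7.0)
print(f"{card.name} ({card.period}): {card.overall():.1f}/10")
```

Note the period field: even a simple version could honor the per-era, per-domain breakdowns mentioned earlier.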
What should count as an intellectual? It’s a fuzzy line, but I would favor an expansive definition. If someone is making public claims about important and uncertain topics and has a substantial following, their readers should have effective methods of evaluating them.
Meritocracy matters. Having good intellectuals, and making it obvious how good these intellectuals are, matters. Developing thought-out standards, rubrics, and metrics for quality is really the way to ensure good signals. There are definitely ways of doing this poorly, but the status quo is really, really bad.
[1] I’m using Jonathan Haidt because I think of him as a generally well-respected academic who has somewhat controversial views. I personally find him to be very interesting.