I think this either misunderstands what I’m saying, or gives up on the issue incredibly quickly.
You could have titled your post “Can we try harder to evaluate the quality of intellectuals?”
Instead, your phrase was “to similar public standards.”
The consequence is that you’re going to experience some “talking past each other.” Some, like me, will say that it’s transparently impossible to evaluate intellectuals with the same or similar statistical rigor as athletes. As others pointed out, this is because their work is not usually amenable to rigidly defined statistics, and when it is, the statistics are too easily Goodharted.
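To make the Goodhart worry concrete, here is a toy simulation (my own sketch with made-up numbers, not something from the original discussion): a proxy metric that correlates well with underlying quality while nobody is optimizing it loses most of that correlation once people start optimizing the metric directly.

```python
import random

# Toy illustration (an editor's sketch, not from the original comment) of the
# Goodhart worry: a proxy metric that tracks quality well under passive
# measurement loses most of its signal once people optimize the metric itself.

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation, computed from scratch to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Passive measurement: the metric is true quality plus a little noise.
quality = [random.gauss(0, 1) for _ in range(2000)]
metric_passive = [q + random.gauss(0, 0.5) for q in quality]

# Once the metric becomes a target, people also spend effort gaming it; that
# effort adds to the metric but not to quality, drowning out most of the signal.
metric_gamed = [m + random.gauss(0, 3) for m in metric_passive]

print("correlation with quality, passive:", round(correlation(quality, metric_passive), 2))
print("correlation with quality, gamed:  ", round(correlation(quality, metric_gamed), 2))
```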
The debate you seem to desire is whether we could be trying harder to statistically evaluate intellectuals. The answer there is probably yes?
But these are two different debates, and I think the wording of your original post is going to lead to two separate conversations here. You may want to clarify which one you’re trying to have.
That’s a good point; I think it’s fair here.
I was using “athletes” as a thought experiment. I do think it’s worth considering: having a bunch of clear, objective metrics could be interesting and useful, especially if introduced gradually and with the right summary statistics. However, the first steps for metrics of intellectuals would be subjective reviews, evaluations, and the like.
Things will also get more interesting as better AI and similar tools can provide stats that aren’t exactly “boring objective stats,” but aren’t quite “well-thought-out reviews” either.
I think you might enjoy getting into things like Replication Watch and similar efforts to uncover scientific fraud and push for better standards in scientific publishing. There is an effort in the scientific world to bring statistical and other tools to bear on policing papers, and entire fields, for p-hacking, publication bias, the file-drawer problem, and outright fraud. This seems to me to be the mainline effort to do what you’re talking about.
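As one concrete illustration of what those tools are policing (a hedged sketch of my own, not something claimed in the comment above), the toy simulation below shows how optional stopping, i.e., testing after every batch of data and stopping as soon as p < 0.05, inflates the false-positive rate well past the nominal 5% even when there is no real effect.

```python
import math
import random

# A minimal sketch (an editor's illustration, not part of the original comment)
# of one p-hacking mechanism, optional stopping: test after every batch of data
# and stop as soon as p < 0.05. Even with no real effect, the false-positive
# rate climbs well above the nominal 5%.

random.seed(1)

def p_value(samples):
    """Two-sided z-test of 'mean == 0' for data drawn from N(mean, 1)."""
    n = len(samples)
    z = (sum(samples) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def study_finds_effect(batches=20, batch_size=10, peek=True):
    """Run one null study; return True if it ends up reporting 'significance'."""
    data = []
    for _ in range(batches):
        data.extend(random.gauss(0, 1) for _ in range(batch_size))
        if peek and p_value(data) < 0.05:
            return True           # stop early and write it up
    return p_value(data) < 0.05   # otherwise, one test at the very end

trials = 2000
print("false positives, single test at the end:",
      sum(study_finds_effect(peek=False) for _ in range(trials)) / trials)
print("false positives, peeking after each batch:",
      sum(study_finds_effect(peek=True) for _ in range(trials)) / trials)
```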
Here on LW, Elizabeth has been doing posts on what she calls “Epistemic Spot Checks,” trying to figure out how a non-expert can quickly vet the quality of a book they’re reading without having to be an expert in the field itself. I’d recommend reading her posts in general; she’s onto something.
While I don’t think these sorts of efforts are ever going to result in the kind of crisp, objective, powerfully useful statistics that characterize sabermetrics, I suspect that just about every area of life could benefit from a little more statistical rigor. And certainly, holding intellectuals to a higher public standard is a worthy goal.