I like the thought. Though unlike sports, intellectual work seems fundamentally open-ended, and therefore doesn’t seem to allow for easy metrics. Intellectuals aren’t the ones playing the game, they’re the ones figuring out the rules of the game. I think that’s why it’s often better to focus on the ideas rather than the people.
A similar question also applies within academia. There, citation counts already serve as a metric to measure intellectual accomplishment. The goodharting of that metric probably can tell you a lot about the challenges such a system would face. What metrics do you have in mind?
In a way, this problem is just scaling up the old reputation/prestige system. You find people you respect and trust, and then you see who they respect and trust, and so on, while regularly checking for local validity, of course. Maybe some kind of social app inspired by liquid democracy/quadratic voting might work? You enter how much you trust and respect a few public intellectuals (who themselves entered how much they trust other intellectuals), and then it computes how much you can trust everyone else.
I find such a social app idea really interesting. A map that tracks which public intellectuals value each other’s contributions (possibly even divided by subject) would be a valuable tool. I guess some initial work on this could even be done without the participation of said persons, as most already identify their primary influences in their work.
Thanks! Some very quick thoughts:

Intellectuals aren’t the ones playing the game, they’re the ones figuring out the rules of the game.
This doesn’t seem true to me. There’s relatively little systematic literature from intellectuals trying to understand what structural things make for quality intellectual standards. The majority of it seems to be arguing and discussing specific orthogonal opinions. It’s true that they “are the ones” to figure out the rules of the game, but this is a small minority of them, and for these people, it’s often a side endeavor.
In a way, this problem is just scaling up the old reputation/prestige system.
Definitely. I think the process of “evaluation standardization and openness” is a repeated one across industries and sectors. There’s a lot of value to be had in understanding the wisdom of existing informal evaluation systems and scaling them into formal ones.
Maybe some kind of social app inspired by liquid democracy/quadratic voting might work?
I imagine the space of options here is quite vast. This option seems like a neat choice. Perhaps several distinct efforts could be tried.
What metrics do you have in mind?
I have some rough ideas, but I want to brainstorm on this a bit more before writing them up.
I maybe wasn’t clear about what I meant by ‘the game.’ I didn’t mean how to be a good public intellectual but rather the broader ‘game’ of coming up with new ideas and figuring things out.
One important metric I use to judge public intellectuals is whether they share my views, and start from similar assumptions. It’s obviously important to not filter too strongly on this or you’re never going to hear anything that challenges your beliefs, but it still makes sense to discount the views of people who hold beliefs you think are false. But you obviously can’t build an objective metric based on how much someone agrees with you.
The issue is that one of the most important metrics I use to quickly measure the merits of an intellectual is inherently subjective. You can’t base your system on adjudicating the truth of disputed claims.
One consideration to keep in mind though is that there might also be a social function in the informality and vagueness of many evaluation systems.

From Social Capital in Silicon Valley:

The illegibility and opacity of intra-group status was doing something really important – it created space where everyone could belong. The light of day poisons the magic. It’s a delightful paradox: a group that exists to confer social status will fall apart the minute that relative status within the group is made explicit. There’s real social value in the ambiguity: the more there is, the more people can plausibly join, before it fractures into subgroups.
There is probably a lot to be improved with current evaluation systems, but one always has to be careful with those fences.
Good points, thanks.

I think ranking systems can be very powerful (as would make sense for something I’m claiming to be important), and can be quite bad if done poorly (arguably, current uses of citations are quite poor). Being careful matters a lot.
Maybe some kind of social app inspired by liquid democracy/quadratic voting might work?
Do you think it’s wise to entrust the collective with judging the worth of intellectuals? I can think of a lot of reasons this could go wrong: cognitive biases, emotional reasoning, ignorance, Dunning–Kruger effect, politically-driven decisions… Just look at what’s happening now with cancel culture.
In general this connects to the problem of expertise. If even intellectuals have trouble understanding who among them is worthy of trust and respect, how could individuals alien to their field fare better?
If the rating was done between intellectuals, don’t you think the whole thing would be prone to conflicts of interest, with individuals tending to support their tribe / those who can benefit them / those whose power tempts them or scares them?
I am not against the idea of rating intellectual work. I’m just mistrustful of having the rating done by other humans, with biases and agendas of their own. I would be more inclined to support objective forms of rating. Forecasts are a good example.
Do you think it’s wise to entrust the collective with judging the worth of intellectuals?
The idea as described doesn’t necessitate that.
Everyone rates everyone else. This creates a web of trust.
An individual user then designates a few sources they trust. The system uses those seeds to propagate trust through the network, by a transitivity assumption.
So every individual gets custom trust ratings of everyone else, based on who they personally trust to evaluate trustworthiness.
This doesn’t directly solve the base-level problem of evaluating intellectuals, but it solves the problem of aggregating everyone’s opinions about intellectual trustworthiness, while taking into account their trustworthiness in said aggregation.
Because the aggregation doesn’t automatically include everyone’s opinion, we are not “entrusting the collective” with anything. You start the trust aggregation from trusted sources.
Unfortunately, the trust evaluations do remain entirely subjective (i.e., unlike probabilities in a prediction market, there is no objective truth which eventually comes in to decide who was right).
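For concreteness, here is a minimal sketch of that aggregation step. One natural way to implement the transitivity assumption is a personalized-PageRank-style walk over the rating graph; that choice, along with every name and number below, is an illustrative assumption rather than part of the proposal.

```python
# Hypothetical sketch: seed-based trust propagation over a web-of-trust graph.
# Each user rates a few others; a reader picks seed sources they personally
# trust; trust then spreads by transitivity, personalized-PageRank style.

from collections import defaultdict

def propagate_trust(ratings, seeds, damping=0.85, iterations=50):
    """ratings: {rater: {rated: weight}}; seeds: {trusted_source: weight}.
    Returns this reader's trust score for every user in the graph."""
    users = set(ratings) | {u for rated in ratings.values() for u in rated}
    # Normalize each rater's outgoing ratings so they sum to 1.
    out = {}
    for rater, rated in ratings.items():
        total = sum(rated.values())
        out[rater] = {u: w / total for u, w in rated.items()} if total else {}
    # Normalize the reader's seed vector.
    seed_total = sum(seeds.values())
    seed = {u: w / seed_total for u, w in seeds.items()}

    trust = dict(seed)
    for _ in range(iterations):
        nxt = defaultdict(float)
        for user, score in trust.items():
            for rated, weight in out.get(user, {}).items():
                nxt[rated] += damping * score * weight   # transitivity step
        for user, w in seed.items():
            nxt[user] += (1 - damping) * w               # keep returning to the seeds
        trust = dict(nxt)
    return {u: round(trust.get(u, 0.0), 4) for u in users}

# Example: the reader seeds trust in Bob only; Bob rates Carol highly,
# so the reader inherits some (weaker) trust in Carol, and so on down the chain.
ratings = {"bob": {"carol": 0.9, "dave": 0.1}, "carol": {"dave": 1.0}}
print(propagate_trust(ratings, seeds={"bob": 1.0}))
```

The damping term is what keeps the result anchored to the sources the individual reader chose, rather than drifting toward an undifferentiated collective average.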