It seems a little odd that the grandparent comment was about arguments from authority, but here we are talking about Grothendieck's work in pure math and Eliezer's on methods of rationality. The thing is, in neither area can an appeal to authority work. Regardless of how much G, or how much scholarship and expertise they have acquired, they both have to "win" by actually convincing ordinary people with their arguments rather than overawing them with their authority.
On the other hand, when advocating anarchist political positions or prioritizing existential risks, authority helps. Trouble is, neither math skill nor {whatever it is that EY does so well} qualifies as a credential for the needed kind of authority.
There’s a place for “argument from authority”.
The idea is that you don't, in general, have a fully articulated proof of the answer to the question at hand; you're relying on some combination of heuristics to reach your conclusion.
If you're allowed to hear other people's answers, and a bit about the people making them, then you have a set of heuristics and answers, and you have to guess what the real answer is based on these. If you stick with your original answer, you're arbitrarily picking one heuristic to trust completely, which is clearly suboptimal.
You want to discount like-minded thinking (many people, one heuristic), weight more heavily the views of people you know reached them by thinking about the problem in different ways (again, weight the heuristic, not the person), and, of course, weight more heavily the heuristics you expect to work. It's how to do this last part that we're talking about.
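As a concrete sketch of the first two parts (discounting correlated thinking, weighting diverse heuristics), here is my own made-up illustration, not anything from the original thread: treat each answer as a noisy unbiased estimate whose errors correlate when the answerers share a heuristic, and pool with the standard minimum-variance weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1). All numbers below are hypothetical.

```python
# Hypothetical illustration: pool four people's estimates, where three of
# them used the same heuristic (correlated errors) and one thought about
# the problem differently. Minimum-variance linear pooling for unbiased
# estimates with error covariance sigma: w = sigma^-1 1 / (1' sigma^-1 1).
import numpy as np

estimates = np.array([10.0, 10.4, 10.6, 13.0])  # made-up answers

# Made-up error covariance: persons 0-2 share a heuristic, person 3 doesn't.
sigma = np.array([
    [1.0, 0.9, 0.9, 0.0],
    [0.9, 1.0, 0.9, 0.0],
    [0.9, 0.9, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

ones = np.ones(len(estimates))
inv = np.linalg.inv(sigma)
weights = inv @ ones / (ones @ inv @ ones)

print(weights)              # ~[0.17, 0.17, 0.17, 0.48]: the trio shares one vote
print(weights @ estimates)  # pooled answer ~11.6, not the naive mean 11.0
```

Sticking with your original answer corresponds to a weight vector like [1, 0, 0, 0], i.e. trusting one heuristic completely, which is exactly the suboptimality described above.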
High-G people may have access to more complex heuristics than most could come up with, but what matters more is that a heuristic be free of errors that prevent it from functioning. Knowing what a heuristic has to do in order to work is more important than spending a lot of cognitive horsepower on fancy heuristics without a solid reason to trust them.
Of course, in the end, if you spot a glaring error in someone’s thinking, you don’t trust him, even if he’s an ‘authority’ (in other words: even if he has a track record of producing good heuristics, you condition on this one being bad and don’t trust the output). And of course, the deeper into the object level you are able to dive, the more information you have on which to judge the credibility of heuristics.
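The "condition on this one being bad" move can be sketched the same way (again my own made-up illustration): keep a posterior over a source's hit rate from its track record, but let a glaring object-level error screen the track record off entirely.

```python
# Hypothetical sketch: trust in one particular output of a source, given
# its track record, with a Beta(1, 1) prior over the source's hit rate.
def trust(hits: int, misses: int, glaring_error_spotted: bool) -> float:
    if glaring_error_spotted:
        # Object-level evidence about *this* output screens off the record.
        return 0.0
    # Posterior mean of the hit rate under a uniform prior.
    return (hits + 1) / (hits + misses + 2)

print(trust(95, 5, glaring_error_spotted=False))  # ~0.94: a real 'authority'
print(trust(95, 5, glaring_error_spotted=True))   # 0.0: this one is bad anyway
```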
Perhaps it has better connotations when stated as "Aumann agreement"?
Agree with this.