It appears that what distinguished Grothendieck was not high g-factor. See Jordan Ellenberg’s blog post “The capacity to be alone”.
My point is that Grothendieck exhibited very high instrumental rationality with respect to mathematics but low instrumental rationality with respect to his efforts to ensure the survival of the human race, and that something analogous could very well be the case with Eliezer.
I don’t think Eliezer would claim to be smarter than Grothendieck or Gödel or Erdős, but he could claim with some justification to be saner than them.
What evidence is there that Eliezer is saner than Grothendieck? I don’t have a strong opinion on this point, I’m just curious what you have in mind.
It should perhaps be mentioned that the few accounts of encountering Grothendieck during the last 20 years describe someone who seems to be clinically insane in the literal sense, with delusions and extreme paranoia, not just someone with less-than-stellar rationality.
Yes, I concur. But what about Grothendieck in the 1970s vs. Eliezer now? Or Gromov now vs. Eliezer now? It’s not clear to me which way such comparisons go.
Grothendieck’s magnum opus was his contributions to pure mathematics. That requires very high intelligence and a willingness to, in hackneyed terms, think outside the box; or, in LW terms, go to school wearing a clown suit.
Eliezer’s magnum opus, so far, is the Sequences. They combine a lot of pre-existing work and some of his own insights into a coherent whole that displays, I think, extraordinarily rare sanity: Pratchett’s “First Sight,” applied to a wide variety of fields. Going through accumulated human knowledge and picking out a framework that satisfies Occam’s Razor better than any other I’ve seen is why I think he’s very sane.
It seems a little odd that the grandparent comment was about arguments from authority, yet here we are talking about Grothendieck’s work in pure math and Eliezer’s on methods of rationality. The thing is, in neither area can an appeal to authority work. Regardless of how much g, or how much scholarship and expertise, they have acquired, they both have to “win” by actually convincing ordinary people with their arguments rather than overawing them with their authority.
On the other hand, when advocating anarchist political positions or prioritizing existential risks, authority helps. Trouble is, neither math skill nor {whatever it is that EY does so well} qualifies as a credential for the needed kind of authority.
There’s a place for “argument from authority”. The idea is that you don’t, in general, have fully articulated proofs of the question at hand; you’re relying on some combination of heuristics to come to your conclusion.
If you’re allowed to hear other people’s answers, and a bit about the people giving them, then you have a set of heuristics and answers, and you have to guess the real answer from these. If you stick with your original answer, you’re arbitrarily picking one heuristic to trust completely, which is clearly suboptimal.
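To make the cost of sticking with one heuristic concrete, here is a minimal sketch in Python, under a toy assumption of my own rather than anything from the thread: each person’s answer is the truth plus independent noise of the same scale.

```python
import random

TRUTH = 10.0   # the unknown quantity everyone is estimating
NOISE = 2.0    # assumed error scale of each person's heuristic

def one_round(n_people=5):
    # Each person applies an independent heuristic: truth plus noise.
    answers = [random.gauss(TRUTH, NOISE) for _ in range(n_people)]
    own = answers[0]                       # trusting only your own heuristic
    pooled = sum(answers) / len(answers)   # equal-weight pooling of all answers
    return abs(own - TRUTH), abs(pooled - TRUTH)

rounds = [one_round() for _ in range(10_000)]
print("mean error, own answer:", sum(r[0] for r in rounds) / len(rounds))
print("mean error, pooled:    ", sum(r[1] for r in rounds) / len(rounds))
```

With independent errors, the pooled estimate’s error shrinks roughly as 1/sqrt(n), which is exactly what you forfeit by trusting a single heuristic completely.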
You want to discount like-minded thinking (many people, one heuristic), give more weight to views you know were reached by thinking about the problem in different ways (again, weight the heuristic, not the person), and, of course, give more weight to heuristics that you expect to work. It’s how to do this last part that we’re talking about.
High-g people may have access to more complex heuristics than most could come up with, but what matters more is having a heuristic free of errors that would prevent it from functioning. Knowing what a heuristic has to do in order to work is more important than spending a lot of cognitive horsepower on fancy heuristics without a solid reason.
Of course, in the end, if you spot a glaring error in someone’s thinking, you don’t trust him, even if he’s an ‘authority’ (in other words: even if he has a track record of producing good heuristics, you condition on this one being bad and don’t trust the output). And of course, the deeper into the object level you are able to dive, the more information you have on which to judge the credibility of heuristics.
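As a rough sketch of the whole scheme, here is how those rules might look in code. The answers, reliability scores, and “broken” set are invented for illustration, and in practice estimating them is exactly the hard part:

```python
from collections import defaultdict

# Each answer is tagged with the heuristic behind it and a reliability score
# in (0, 1] for that heuristic; both are assumed known here for illustration.
answers = [
    {"heuristic": "base-rates",  "value": 12.0, "reliability": 0.8},
    {"heuristic": "base-rates",  "value": 11.5, "reliability": 0.8},  # like-minded
    {"heuristic": "inside-view", "value": 9.0,  "reliability": 0.6},
    {"heuristic": "authority",   "value": 20.0, "reliability": 0.9},
]
broken = {"authority"}  # a heuristic in which you spotted a glaring error

def aggregate(answers, broken):
    # Collapse like-minded answers: many people, one heuristic, one vote.
    by_heuristic = defaultdict(list)
    for a in answers:
        by_heuristic[a["heuristic"]].append(a)
    total = weight_sum = 0.0
    for name, group in by_heuristic.items():
        if name in broken:
            continue  # condition on this one being bad; ignore its output
        value = sum(a["value"] for a in group) / len(group)
        w = group[0]["reliability"]  # weight the heuristic, not the person
        total += w * value
        weight_sum += w
    return total / weight_sum

print(aggregate(answers, broken))  # weighted blend of the surviving heuristics
```

Diving deeper into the object level then amounts to revising the reliability scores, and the broken set, from inspection of the reasoning itself rather than from track records alone.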
Perhaps it has better connotations when stated as “Aumann agreement”?
Agree with this.