I think that I can save the world, not just because I’m the one who happens to be making the effort, but because I’m the only one who can make the effort.
Thinking that one person is going to save the world, and you’re him, qualifies as “an inflated sense of one’s own importance”, IMO.
First mistake: believing that one person will be saving the world. Second mistake: believing that there is likely only one person who can do it, and that he's that person.
“You think that you are potentially the greatest who has yet lived, the strongest servant of the Light, that no other is likely to take up your wand if you lay it down.”
To put the first quotation into some context, Eliezer argued that the combination of his high SAT scores and the effort he has put into studying AI places him in a unique position, one that could make the "difference between cracking the problem of intelligence in five years and cracking it in twenty-five". (Which could make a huge difference, if it saves Earth from destruction by nanotechnology, presumably coming during that interval...)
Of course, since it was written in 2000, we now know the five-year estimate was obviously wrong. And there is a Sequence about it, which explains that Friendly AI is more complicated than just any AI. (Which doesn't prove that the five-year estimate would have been correct even for just any AI.)
Most people very seriously studying AI probably have high SATs too. High IQs. High lots of things. And some likely have other unique qualities and advantages that Eliezer doesn’t.
Unique in some qualities doesn’t mean uniquely capable of the task in some timeline.
My main objection is that until it's done, I don't think people are justified in claiming to know what it will take to get it done, and therefore they aren't justified in claiming that some particular person is best able to do it, even if he is best suited to pursue one particular approach to the problem.
Hence, I conclude he is overestimating his importance, per the definition. Not that I see it as some heinous crime. He's overconfident. So what? It seems to be an ingredient of high achievement. Better to be overconfident epistemologically than underconfident instrumentally.
Private overconfidence is harmless. Public overconfidence is how cults start.
I’d say that’s, at the very least, an oversimplification; when you look at the architecture of organizations generally recognized as cults, you end up finding they share a fairly specific cluster of cultural characteristics, one that has more to do with internal organization than claims of certainty. My favorite framework for this is the amusingly named ABCDEF: though aimed at new religions in the neopagan space, it’s general enough to be applied outside it.
(Eliezer, of course, would say that every cause wants to be a cult. I think he’s being too free with the word, myself.)