So I definitely think that’s something weirdly unspoken about the argument; I would characterize it as Eliezer saying “suppose I’m right and they’re wrong; all this requires is things to be harder than people think, which is usual. Suppose instead that I’m wrong and they’re right; this requires things to be easier than people think, which is unusual.” But the equation of “people” with “Eliezer” is sort of strange; as Quintin notes, it isn’t that unusual for outside observers to overestimate difficulty, and so I wish he had centrally addressed the reference class tennis game: is the relevant expertise “getting AI systems to be capable” or “getting AI systems to do what you want”?