Thanks for the references, both of which I had seen before.
Concerning Eliezer’s response to Scott Aaronson: I agree that there’s a huge amount of uncertainty about these things and that it’s possible AGI will be developed unexpectedly, but I don’t see how this points toward AGI being likely to be developed within decades. It seems like one could have said the same thing that Eliezer is saying in 1950 or even in 1800. See Holden’s remarks about noncontingency here.
As for A Premature Word on AI, Eliezer seems to be saying that:

(1) Even though the FAI problem is incredibly difficult, it’s still worth working on because the returns attached to success would be enormous.

(2) Lots of people who have worked on AGI are mediocre.

(3) The field of AI research is not well organized.
Claim (1) might be true. I suspect that both of claims (2) and (3) are true. But by themselves these claims offer essentially no support for the idea that Eliezer is likely to be able to build a Friendly AI.