Yudkowsky addresses some of these objections in more detail in “Intelligence Explosion Microeconomics”.

Thanks. I had skimmed that paper before, but my impression was that it only briefly acknowledged my main objection regarding computational complexity, on page 4. Most of the paper involves analogies with evolution and civilization, which I don’t think are very useful: my argument is that the difficulty of designing intelligence should grow exponentially at high levels, so the difficulty of relatively low-difficulty tasks like designing human intelligence doesn’t seem that important.
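To make the complexity objection concrete, here is a toy model; the particular growth rates ($e^{kn}$ and $n^{c}$) are assumptions chosen for illustration, not anything taken from the paper. Suppose the design effort required to go from intelligence level $n$ to level $n+1$ grows exponentially, $E(n) = e^{kn}$ with $k > 0$, while the research capacity available at level $n$ grows only polynomially, $C(n) = n^{c}$. The time spent on each successive step is then

\[
T(n) \;=\; \frac{E(n)}{C(n)} \;=\; \frac{e^{kn}}{n^{c}},
\]

which grows without bound as $n$ increases, so later improvements take ever longer and the process fizzles rather than explodes. Reaching human-level intelligence corresponds to a small $n$ in this scaling, which is why its difficulty says little about the cost of the later steps.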
On page 35, Eliezer writes:
I am not aware of anyone who has defended an “intelligence fizzle” seriously and at great length.
I will read it again more thoroughly and see if there’s anything I missed.