The Impossibility of the Intelligence Explosion
I don’t agree with everything here—or even the central argument—but this post was informative for me, and I think others would benefit from it.
Things I learned:
The no-free-lunch theorem. Roughly: averaged over all possible objective functions, every optimisation algorithm performs equally well. I’m very grateful for this one; it prompted me to start learning more about optimisation.
It follows that there is no fully general intelligence. I had previously believed that finding a God’s algorithm for optimisation would be one of the most significant achievements of the century; I discovered that’s impossible.
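The flavour of the no-free-lunch result can be seen in a toy computation. This is my own sketch, not from the article: on a tiny finite domain, enumerate every possible objective function and check that two different search strategies need the same number of evaluations on average.

```python
# Toy illustration of the no-free-lunch theorem: averaged over ALL
# objective functions on a finite domain, two different fixed search
# orders perform identically. Names and setup here are my own invention.
from itertools import product

DOMAIN = [0, 1, 2]    # three candidate points
CODOMAIN = [0, 1]     # each point scores 0 or 1; finding a "1" is success

def evals_to_success(f, order):
    """Evaluations a fixed search order needs to find a 1
    (worst case: evaluate everything and fail)."""
    for steps, x in enumerate(order, start=1):
        if f[x] == 1:
            return steps
    return len(order)

def average_cost(order):
    # Enumerate every function f: DOMAIN -> CODOMAIN (2**3 = 8 of them).
    functions = list(product(CODOMAIN, repeat=len(DOMAIN)))
    return sum(evals_to_success(f, order) for f in functions) / len(functions)

print(average_cost([0, 1, 2]), average_cost([2, 1, 0]))  # 1.75 1.75
```

Each order looks smarter on some functions and dumber on others, and over the whole function space the advantages cancel exactly; no strategy dominates without assumptions about which functions you will actually face.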
Exponential growth does not imply exponential progress, because exponential growth may meet exponential bottlenecks. This was also something I hadn’t appreciated. Upgrading from a level n intelligence to a level n+1 intelligence may require more relative intelligence than upgrading from level n-1 to level n, so exponential bottlenecks can produce diminishing marginal growth of intelligence.
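A small simulation makes the bottleneck point concrete. This is my own sketch, assuming a hypothetical cost function for reaching each intelligence level: when the cost of level n grows exponentially in n, exponentially growing raw capability buys only linear growth in level; when the cost grows only polynomially, the level explodes.

```python
# Toy sketch (my own, not from the article): raw capability doubles each
# step, but each new "intelligence level" has a cost given by
# cost_of_level(n). The shape of that cost function decides whether the
# level itself grows linearly or explosively.

def levels_reached(cost_of_level, steps=10):
    capability, level, history = 1.0, 0, []
    for _ in range(steps):
        capability *= 2                      # exponential capability growth
        while capability >= cost_of_level(level + 1):
            level += 1                       # claim every level we can afford
        history.append(level)
    return history

# Exponential bottleneck: level n costs 2**n. Level grows only linearly.
print(levels_reached(lambda n: 2 ** n))   # [1, 2, 3, ..., 10]
# Polynomial bottleneck: level n costs n**2. Level grows exponentially.
print(levels_reached(lambda n: n ** 2))
```

The same exponential input produces radically different outputs, which is the crux of the claim: whether self-improvement "fooms" depends on how fast the difficulty of the next improvement grows, not just on how fast capability grows.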
The article probably seemed so pedagogically valuable to me because I hadn’t encountered these ideas before; for example, I have only just started reading the Yudkowsky-Hanson AI foom debate.