Where can I find arguments that intelligence self-amplification is not likely to quickly yield rapidly diminishing returns? I know Chalmers has a brief discussion of it in his singularity analysis article, but I’d like to see some lengthier expositions.
I asked the same question not so long ago and was pointed to http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate, which contains interesting arguments on the topic. I hope it helps you as much as it helped me.