I think this post paints a somewhat inaccurate view of the past.
The post claims that MIRI’s talk of recursive self-improvement from a seed AI came about via MIRI’s attempts to respond to claims such as “AI will never exceed human capabilities” or “Growth rates post AI will be like growth rates beforehand.” On this account, people at MIRI spoke of recursive self-improvement from a seed AI not because they thought it was a particularly likely mainline future, but because they thought it was one obvious way that AI, past a certain level of development, would exceed human capabilities and result in massively different growth rates. Thus, the post says:
The weighty conclusion of the “recursive self-improvement” meme is not “expect seed AI”. The weighty conclusion is “sufficiently smart AI will rapidly improve to heights that leave humans in the dust”.
However, I think this view of the past is almost certainly misleading, because the Singularity Institute (as MIRI was known before its rebranding) actually intended to build a seed AI.
Bringing up recursive self-improvement from a seed AI was therefore not just a rhetorical move to point out how things would eventually go nuts; it was something they actually saw as central to the future.
From the Singularity Institute Website, circa 2006, emphasis mine:
SIAI has the additional goal of fostering a broader discussion and understanding of beneficial artificial intelligence. We offer forums for Singularity discussion, coordinate Singularity-related efforts, and publish material on the Singularity. Above all, our long-term mission is direct research into Singularity technologies, specifically Friendly AI, and the direct implementation of the Singularity. We’re presently seeking funding to begin our long-term project to create recursively self-improving AI that displays true general cognition—a Singularity seed.
Similarly, in his 2011 debate with Hanson, Yudkowsky humorously describes the Singularity Institute as the “Institute for Carefully Programmed Intelligence Explosions,” and goes on to describe how he thinks the future is likely to go:
When we try to visualize how all this is likely to go down, we tend to visualize a scenario that someone else once termed “a brain in a box in a basement.” I love that phrase, so I stole it. In other words, we tend to visualize that there’s this AI programming team, a lot like the sort of wannabe AI programming teams you see nowadays, trying to create artificial general intelligence, like the artificial general intelligence projects you see nowadays. They manage to acquire some new deep insights which, combined with published insights in the general scientific community, let them go down into their basement and work in it for a while and create an AI which is smart enough to reprogram itself, and then you get an intelligence explosion.
There are other locations where you can see that the original intent of the Singularity Institute / MIRI was to build a seed AI.
Thus, I do not think MIRI spoke so much about recursive self-improvement merely as a rhetorical move to show that AI would eventually be able to exceed humans. I think they spoke about it because, at least in part, that is what they were planning to build. By leaving out this highly relevant fact, the post is likely, at best, to somewhat distort readers’ view of the past.