I get the sense that the crux here is more between fast / slow takeoffs than unipolar / multipolar scenarios.
In the case of a gradual transition into more powerful technology, what happens when the children of your analogy discover recursive self-improvement?
Even recursive self-improvement can be framed gradually. Recursive technological improvement is thousands of years old: the phenomenon of technology allowing us to build better technology is what has sustained economic growth. Recursive self-improvement is simply a very local form of recursive technological improvement.
You could imagine systems gradually getting better at recursive self-improvement. Some will improve themselves sort-of well, and those systems will pose risks. Others will improve themselves really well, and pose greater risks. But we would see the latter phenomenon coming ahead of time.
And since there’s no hard separation between recursive technological improvement and recursive self-improvement, you could imagine technological improvement getting gradually more local, until all the relevant action is from a single system improving itself. In that case, there would also be warning signs before it was too late.
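To make that continuum concrete, here is a minimal toy sketch (Python with numpy) of several systems that all draw on a shared technology base but reinvest their own capability at different rates. Nothing in it comes from the discussion above: the function name, the growth rates, the spillover term, and the “lead” metric are all made-up parameters chosen only to illustrate how a capability gap could widen gradually rather than appear in a single jump.

```python
# Toy model, not a claim about real dynamics: the global loop is
# "recursive technological improvement" (a shared base everyone benefits
# from), the local loops are "recursive self-improvement" (each system
# reinvesting its own capability at a hypothetical rate).

import numpy as np

def simulate(steps=300, base_rate=0.01, spillover=0.01,
             self_rates=(0.0, 0.005, 0.01, 0.02)):
    """Return the best system's lead over the median system at each step."""
    self_rates = np.asarray(self_rates, dtype=float)
    tech = 1.0                       # shared technology base (global loop)
    caps = np.ones_like(self_rates)  # per-system capability (local loops)
    lead = []                        # crude "warning sign": best vs. median
    for _ in range(steps):
        tech *= 1.0 + base_rate                              # shared base keeps growing
        caps = caps * (1.0 + self_rates) + spillover * tech  # self-improvement + shared gains
        lead.append(caps.max() / np.median(caps))
    return np.array(lead)

if __name__ == "__main__":
    lead = simulate()
    print(np.round(lead[::60], 2))  # lead widens over many steps, not in one jump
```

With these invented numbers the printed lead grows smoothly over hundreds of steps, which is the sense in which differential self-improvement would come with warning signs; a genuinely discontinuous takeoff would instead show the gap opening within a few steps.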
This framing really helped me think about gradual self-improvement, thanks for writing it down!
I agree with most of what you wrote. I still feel that when an AGI rewrites its own code there’s a sense of intent that hasn’t been explicitly present for the past thousands of years of technological improvement.
Agreed, you could still model Humanity as some kind of self-improving Human + Computer Colossus (cf. Tim Urban’s framing) that somehow has some agency. But it’s much less effective at improving itself, and it’s not thinking “yep, I need to invent this new science to optimize this utility function”. I agree that the threshold is “when all the relevant action is from a single system improving itself”.
“there would also be warning signs before it was too late”
And what happens then? Will we reach some kind of global consensus to stop any research in this area? How long will it take to build a safe “single system improving itself”? How will all the relevant actors behave in the meantime?
My intuition is that in the best-case scenario we reach some kind of AGI Cold War that lasts for a long time.