This framing really helped me think about gradual self-improvement, thanks for writing it down!
I agree with most of what you wrote. I still feel that an AGI rewriting its own code involves an explicit sense of intent that hasn't been present in the gradual progress of the past thousand years.
Agreed, you could still model humanity as some kind of self-improving Human + Computer Colossus (cf. Tim Urban’s framing) that somehow has agency. But that colossus is far less effective at improving itself, and it isn’t thinking “yep, I need to invent this new science to optimize this utility function”. I agree that the threshold is “when all the relevant action is from a single system improving itself”.
> there would also be warning signs before it was too late
And what happens then? Will we reach some kind of global consensus to stop any research in this area? How long will it take to build a safe “single system improving itself”? How will all the relevant actors behave in the meantime?
My intuition is that, in the best case, we end up in some kind of AGI Cold War that persists for a long time.