(I am the author)
I still like & endorse this post. When I wrote it, I hadn’t read more than the wiki articles on the subject. But afterwards I went and read three books about it (written by historians), and I think the original post held up very well to all this new info. In particular, the main critique the post got (that disease was more important than I made it sound, in a way that undermined my conclusion) seems to have been pretty wrong. (See e.g. this comment thread and these follow-up posts.)
So, why does it matter? What contribution did this post make? Well, at the time (and still now, though I think I’ve made a dent in the discourse), quite a lot of people I respect, such as people at OpenPhil, seemed to think unaligned AGI would need god-like powers to take over the world: it would need to be stronger than the rest of the world combined! I think this is based on a flawed model of how takeover/conquest works, and history contains plenty of counterexamples to that model. The conquistadors are my favorite counterexample from my limited knowledge of history. (The flawed model goes by the name of “The China Argument,” at least in my mind. You may have heard it before: China is way more capable than the most capable human, yet it can’t take over the world; therefore AGI will need to be way, way more capable than the most capable human to take over the world.)
Needless to say, this is a somewhat important crux, as illustrated by e.g. Joe Carlsmith’s report, which assigns a mere 40% credence to unaligned APS-AI taking over the world even conditional on it escaping, seeking power, and managing to cause at least a trillion dollars’ worth of damage.
(I’ve also gotten feedback from various people at OpenPhil saying that this post was helpful to them, so yay!)
I’ve since written a sequence of posts elaborating on this idea: Takeoff and Takeover in the Past and Future. Alas, I still haven’t written the capstone posts in the sequence, the posts that’ll tie it all together. I’ve got some new arguments and models to add too!
I think that if this post makes the review (and if it’s allowed), I’ll want to revise it as heavily as the rules allow, to include more analysis of the sort I’ve been doing in the sequence and plan to do in the future capstone post(s).