I’m not saying it’s the same, just that it’s similar in the ways relevant to my argument.
Questioning the notion of control is fine. I agree that colonial kings had limited control over their governors, etc., and I agree that an AGI takeover would involve the AGI having limited control over the humans it commands, at least at first.
For more of what I mean by “game over,” see this post. Basically, for purposes of timelines, what we care about is our loss of influence. Even if the AIs that wrest influence from us are themselves not in control, and later lose influence to other AIs, or whatever, it doesn’t matter much from our perspective. Moreover, it’s not total loss of influence that matters, but relative loss of influence. If some event happens that reduces our influence by 90%, i.e., makes it ten times less likely for us to achieve a given amount of improvement in the world by our lights, then for planning purposes we should focus on plans that finish before that event occurs, or ideally plans that prevent that event from occurring.
There’s a related issue of “How bad will things get, by our lights, once we lose control?” I do think that in this sense, unfortunately, what we are facing is historically (mostly) unprecedented. The Aztecs were conquered by humans, after all. In the grand scheme of things the colonizers and colonized weren’t that different, and so things turned out badly for the colonized, but not as badly as they could have. I think, unfortunately, that the divergence in values between AIs and humans is likely to be bigger than the divergence between historical colonizers and colonized.
(EDIT: Oh, and I don’t think the governors or people in the past were stupid either. And I didn’t interpret you that way; my apologies if it sounded like I did.)
I’m reminded of this:
https://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html
(And no worries.)