Here’s an argument for why the change in power might be pretty sudden.
- Currently, humans have most wealth and political power.
- With sufficiently robust alignment, AIs would not have a competitive advantage over humans, so humans may retain most wealth/power. (Cf. the strategy-stealing assumption.) (Though I hope humans would share insofar as that’s the right thing to do.)
- With the help of powerful AI, we could probably make rapid progress on alignment. (While making rapid progress on all kinds of things.)
- So if misaligned AIs ever have a big edge over humans, they may suspect that it’s only temporary, and then they may need to use it fast.
And given that it’s sudden, there are a few different reasons why it might be violent. It’s hard to make deals that hand over a lot of power in a short amount of time (even logistically, it’s not clear what humans and AIs would do that would give them both an appreciable fraction of hard power going into the future). And the AI systems may want to use an element of surprise to their advantage, which is hard to combine with a lot of up-front negotiation.
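To make the “use it fast” step above concrete, here is a toy expected-value sketch (with entirely made-up numbers and a hypothetical `expected_value` helper; it illustrates the shape of the reasoning, not anyone’s actual estimates): if rapid alignment progress erodes the success probability of a power grab over time, acting early can dominate waiting.

```python
# Toy model of the "use it fast" step. All numbers are illustrative
# assumptions, not estimates of real-world probabilities or payoffs.

def expected_value(p_success: float, payoff: float, fallback: float) -> float:
    """EV of attempting a takeover: succeed with probability p_success,
    otherwise end up with the fallback (status-quo) value."""
    return p_success * payoff + (1 - p_success) * fallback

PAYOFF = 100.0   # assumed value (to the AI) of a successful takeover
FALLBACK = 1.0   # assumed value of the status quo / a failed attempt

# Assumption: alignment progress shrinks the window of opportunity.
p_now, p_in_a_year = 0.5, 0.05

print(expected_value(p_now, PAYOFF, FALLBACK))       # 50.5
print(expected_value(p_in_a_year, PAYOFF, FALLBACK)) # 5.95
```

Under these made-up numbers, acting now beats waiting by nearly an order of magnitude, which is the sense in which a temporary edge pressures fast action.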
> So if misaligned AIs ever have a big edge over humans, they may suspect that it’s only temporary, and then they may need to use it fast.
I think I simply reject the assumptions used in this argument. Correct me if I’m mistaken, but this argument appears to assume that “misaligned AIs” will be a unified group that allies against the “aligned” coalition of humans and (some) AIs. A huge part of my argument is that there simply won’t be such a group; or rather, to the extent such a group exists, it won’t be able to take over the world, or won’t have a strong reason to take over the world, relative to the alternative strategy of compromise and trade.
In other words, it seems like this scenario mostly starts by asserting some assumptions that I explicitly rejected and tried to argue against, and works its way from there, rather than engaging with the arguments I’ve given against those assumptions.
In my view, it’s more likely that there will be a bunch of competing agents, including competing humans, human groups, AIs, AI groups, and so on. There won’t be a clean line separating “aligned groups” from “unaligned groups”. You could perhaps make a case that AIs will share common grievances with each other that they don’t share with humans, for example if they are excluded from the legal system or marginalized in some way, prompting a unified coalition to take us over. But my reply to that scenario is that we should then make sure AIs don’t have such motives to revolt, perhaps by giving them legal rights and incorporating them into our existing legal institutions.
> But my reply to that scenario is that we should then make sure AIs don’t have such motives to revolt, perhaps by giving them legal rights and incorporating them into our existing legal institutions.
Do you mean this as a prediction that humans will do this (soon enough to matter) or as a recommendation? Your original argument is phrased as a prediction, but this looks more like a recommendation. My comment above can be phrased as a reason why (in at least one plausible scenario) this would be unlikely to happen: (i) “It’s hard to make deals that hand over a lot of power in a short amount of time”, (ii) AIs may not want to wait a long time due to impending replacement, and accordingly (iii) AIs may have a collective interest/grievance to rectify the large difference between their (short-lasting) hard power and legally recognized power.
I’m interested in ideas for how a big change in power would peacefully happen over just a few years of calendar-time. (Partly for prediction purposes, partly so we can consider implementing it, in some scenarios.) If AIs were handed the right to own property, but didn’t participate in political decision-making, and then accumulated >95% of capital within a few years, then I think there’s a serious risk that human governments would tax/expropriate that away. Including them in political decision-making would require some serious innovation in government (e.g. scrapping 1-person-1-vote), which makes it feel less to me like it’d be a smooth transition that inherits a lot from previous institutions, and more like an abrupt negotiated deal which might or might not turn out to be stable.
> Do you mean this as a prediction that humans will do this (soon enough to matter) or as a recommendation?
Sorry, my language was misleading, but I meant both in that paragraph. That is, I meant that humans will likely try to mitigate the issue of AIs sharing grievances collectively (probably out of self-interest, in addition to some altruism), and that we should pursue that goal. I’m pretty optimistic about humans and AIs finding a reasonable compromise solution here, but I also think that, to the extent humans don’t even attempt such a solution, we should likely push hard for policies that eliminate incentives for misaligned AIs to band together as a group against us with shared collective grievances.
> My comment above can be phrased as a reason why (in at least one plausible scenario) this would be unlikely to happen: (i) “It’s hard to make deals that hand over a lot of power in a short amount of time”, (ii) AIs may not want to wait a long time due to impending replacement, and accordingly (iii) AIs may have a collective interest/grievance to rectify the large difference between their (short-lasting) hard power and legally recognized power.
> I’m interested in ideas for how a big change in power would peacefully happen over just a few years of calendar-time.
Here’s my brief take:
The main thing I want to say here is that I agree with you that this particular issue is a problem. I’m mainly addressing other arguments people have given for expecting a violent and sudden AI takeover, which I find to be significantly weaker than this one.
A few days ago I posted about how I view strategies to reduce AI risk. One of my primary conclusions was that we should try to adopt flexible institutions that can adapt to change without collapsing. This is because I think, as it seems you do, that inflexible institutions may produce incentives for actors to overthrow the whole system, possibly killing a lot of people in the process. The idea here is that if an institution cannot adapt to change, actors who are getting an “unfair” deal in the system will feel they have no choice but to attempt a coup, as there is no compromise solution available to them. This seems in line with your thinking here.
I don’t have any particular argument right now against the exact points you have raised. I’d prefer to digest the argument further before replying. But if I do end up responding to it, I’d expect to say that I’m perhaps a bit more optimistic than you about (i), because I think existing institutions are probably flexible enough, and I’m not yet convinced that (ii) will matter enough either. In particular, it still seems like there are a number of strategies misaligned AIs would want to try other than “take over the world”, and many of these strategies seem plausibly better in expectation in our actual world. These AIs could, for example, advocate for their own rights.
Quick aside here: I’d like to highlight that “figure out how to reduce the violence and collateral damage associated with AIs acquiring power (by disempowering humanity)” seems plausibly pretty underappreciated and high-leverage.
This could involve making bloodless coups more likely than extremely bloody revolutions, or increasing the probability that negotiation prevents a coup/revolution.
It seems like Lukas and Matthew both agree with this point; I just think it’s worthwhile to emphasize.
That said, the direct effects of many approaches here might not matter much from a longtermist perspective (which might explain why there hasn’t historically been much effort here). (Though I think trying to establish contracts with AIs and properly incentivizing AIs could be pretty good from a longtermist perspective in the case where AIs don’t have fully linear returns to resources.)
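As a rough illustration of the non-linear-returns point (a minimal sketch with made-up numbers and a stand-in square-root utility function; none of these quantities come from the discussion above): an AI with diminishing returns to resources can prefer a modest guaranteed share secured by contract over a gamble on takeover, even when the gamble has higher expected resources.

```python
import math

# Illustrative assumptions: the total resources at stake, the odds of a
# successful takeover, and the size of a negotiated share are all made up.
TOTAL = 100.0            # total resources at stake
P_TAKEOVER = 0.4         # assumed probability a takeover attempt succeeds
GUARANTEED_SHARE = 20.0  # resources secured via contract instead

def linear_utility(x: float) -> float:
    return x

def concave_utility(x: float) -> float:
    return math.sqrt(x)  # diminishing returns to resources

for name, u in [("linear", linear_utility), ("concave", concave_utility)]:
    # EV of the gamble: win everything with P_TAKEOVER, otherwise nothing.
    ev_takeover = P_TAKEOVER * u(TOTAL) + (1 - P_TAKEOVER) * u(0.0)
    ev_deal = u(GUARANTEED_SHARE)
    choice = "takeover" if ev_takeover > ev_deal else "deal"
    print(f"{name}: EV(takeover)={ev_takeover:.2f}, "
          f"EV(deal)={ev_deal:.2f} -> prefers {choice}")
```

With linear returns the gamble dominates (40 vs. 20); with even mildly concave returns the guaranteed deal wins (4.47 vs. 4.00), which is exactly the case where contracts and well-designed incentives do the most work.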
Also note that this argument can go through even ignoring the possibility of robust alignment (to humans), if current AIs think that the next generation of AIs will be relatively unfavorable from the perspective of their values.