Eventually we reach the point where we could not recover from a correlated automation failure. Under these conditions influence-seeking systems stop behaving in the intended way, since their incentives have changed—they are now more interested in controlling influence after the resulting catastrophe than in continuing to play nice with existing institutions and incentives.
I’m not sure I understand this part. The influence-seeking systems which have the most influence also have the most to lose from a catastrophe. So they’ll be incentivised to police each other and make catastrophe-avoidance mechanisms more robust.
As an analogy: we may already be past the point where we could recover from a correlated “world leader failure”: every world leader simultaneously launching a coup. But this doesn’t make such a failure very likely, unless world leaders also have strong coordination and commitment mechanisms between themselves (which are binding even after the catastrophe).
(Upvoted because I think this deserves more clarification/discussion.)
I’m not sure I understand this part. The influence-seeking systems which have the most influence also have the most to lose from a catastrophe. So they’ll be incentivised to police each other and make catastrophe-avoidance mechanisms more robust.
I’m not sure either, but I think the idea is that once influence-seeking systems gain a certain amount of influence, it may become faster or more certain for them to gain more influence by causing a catastrophe than to continue to work within existing rules and institutions. For example, they may predict that unless they do so, humans will eventually coordinate to take back the influence that humans lost; or that during such a catastrophe they can probably expropriate a lot of resources currently owned by humans, gaining much influence that way; or that humans will voluntarily hand more power to them in an attempt to use them to deal with the catastrophe.
As an analogy: we may already be past the point where we could recover from a correlated “world leader failure”: every world leader simultaneously launching a coup. But this doesn’t make such a failure very likely, unless world leaders also have strong coordination and commitment mechanisms between themselves (which are binding even after the catastrophe).
I think such a failure can happen without especially strong coordination and commitment mechanisms. Something like this happened during the Chinese Warlord Era, when many military commanders became warlords during a correlated “military commander failure”, and similar things probably happened many times throughout history. I think what’s actually preventing a “world leader failure” today is that most world leaders, especially of the rich democratic countries, don’t see any way to further their own values by launching coups in a correlated way. In other words, if they did launch such a coup, what would they do afterwards that would be better than just exercising the power they already have?
I think the idea is that once influence-seeking systems gain a certain amount of influence, it may become faster or more certain for them to gain more influence by causing a catastrophe than to continue to work within existing rules and institutions.
The key issue here is whether there will be coordination between a set of influence-seeking systems that can cause (and will benefit from) a catastrophe, even when other systems are opposing them. If we picture systems as having power comparable to what companies have now, that seems difficult. If we picture them as having power comparable to what countries have now, that seems fairly easy.
The key issue here is whether there will be coordination between a set of influence-seeking systems that can cause (and will benefit from) a catastrophe, even when other systems are opposing them.
Do you not expect this threshold to be crossed sooner or later, assuming AI alignment remains unsolved? Also, it seems like the main alternative to this scenario is that the influence-seeking systems expect to eventually gain control of most of the universe anyway (even without a “correlated automation failure”), so they don’t see a reason to “rock the boat” and try to dispossess humans of their remaining influence/power/resources, but this is almost as bad as the “correlated automation failure” scenario from an astronomical waste perspective. (I’m wondering if you’re questioning whether things will turn out badly, or questioning whether things will turn out badly this way.)
Mostly I am questioning whether things will turn out badly this way.
Do you not expect this threshold to be crossed sooner or later, assuming AI alignment remains unsolved?
Probably, but I’m pretty uncertain about this. It depends on a lot of messy details about reality, things like: how offence-defence balance scales; what proportion of powerful systems are mostly aligned; whether influence-seeking systems are risk-neutral; what self-governance structures they’ll set up; the extent to which their preferences are compatible with ours; how human-comprehensible the most important upcoming scientific advances are.