if we make sure that its power stays low enough, we can turn it off; if the agent will acquire power whenever that's the only way to achieve its goal, rather than stopping at or before some limit, then it might still acquire power and be catastrophic*, etc.
Yeah. I have the math for this kind of tradeoff worked out—stay tuned!
Though further up this comment I brought up the possibility that “power seeking behavior is the cause of catastrophe, rather than having power.”
I think this is true, actually; if another agent already has a lot of power and that isn't already catastrophic for us, its continued existence isn't that big a deal wrt the status quo. The bad stuff comes with the change in who has power.
Taking away our power is generally only incentivized because it lets the agent better achieve its own goal. The question is: why would the agent try to convince us of something, or get someone else to do something catastrophic, if it isn't trying to increase its own AU (attainable utility)?
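As a toy illustration of that last point (my own sketch, not the math alluded to above): if we approximate an agent's AU as its average achievable value over many possible goals, then an action like seizing resources is instrumentally attractive precisely because it raises that average, and that same action is what strips power from us. The `attainable_utility` function and the `power` parameter below are purely illustrative stand-ins.

```python
import random

random.seed(0)

NUM_GOALS = 1000


def attainable_utility(power: float) -> float:
    """Average best-achievable value over randomly drawn goals.

    `power` is a crude stand-in for how many resources/options the agent
    controls; more power lets it achieve a larger fraction of any goal.
    """
    total = 0.0
    for _ in range(NUM_GOALS):
        goal_difficulty = random.random()  # how hard this particular goal is
        achievable = min(1.0, power / (goal_difficulty + 1e-9))
        total += achievable
    return total / NUM_GOALS


# Status quo: the agent has modest power, humans retain most of theirs.
au_status_quo = attainable_utility(power=0.2)

# After a power grab: the agent controls far more options.
au_after_grab = attainable_utility(power=0.9)

print(f"agent AU, status quo:       {au_status_quo:.3f}")
print(f"agent AU, after power grab: {au_after_grab:.3f}")
# The grab is attractive only because it raises the agent's AU across most
# goals -- i.e., disempowering us is incentivized as a means to the agent's
# own AU, not as an end in itself.
```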