Quick aside here: I’d like to highlight that “figuring out how to reduce the violence and collateral damage associated with AIs acquiring power (by disempowering humanity)” seems plausibly pretty underappreciated and high-leverage.
This could involve making bloodless coups more likely than extremely bloody revolutions or increasing the probability of negotiation preventing a coup/revolution.
It seems like Lukas and Matthew both agree with this point; I just think it’s worth emphasizing.
That said, the direct effects of many approaches here might not matter much from a longtermist perspective, which might explain why there hasn’t historically been much effort in this area. (Though I think trying to establish contracts with AIs and properly incentivize AIs could be pretty good from a longtermist perspective in the case where AIs don’t have fully linear returns to resources.)