My pet “(AI) policy” idea for a while has been “direct recourse”: the idea that you can hedge against one party precipitating an irreversible event by giving other parties the ability to disrupt its operations at will. For instance, I could shut down your (competing) AI project if I think it’s an X-risk, with the proviso that I would have to compensate you if I was later deemed to have done so for an illegitimate reason. As long as shutting down your AI project is not itself irreversible, this system increases our ability to prevent irreversible events: I might stop an existential catastrophe, and if I shut down your project when I shouldn’t have, I just compensate you and we’re all good.
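To make the reversibility asymmetry concrete, here is a toy expected-value sketch. Every name and number in it (p_xrisk, catastrophe_cost, and so on) is hypothetical and chosen only to illustrate the logic; it ignores transaction costs and adjudication error, and assumes compensation genuinely makes you whole.

```python
# Toy model of the "direct recourse" payoff logic (all quantities hypothetical).
# I can shut down your project now; an adjudicator later rules on whether my
# reason was legitimate, and if it wasn't, I owe you compensation.

def expected_net_benefit(
    p_xrisk: float,           # my probability that your project is an X-risk
    catastrophe_cost: float,  # cost of the irreversible event if it happens
    shutdown_loss: float,     # your loss from the (reversible) shutdown
    compensation: float,      # what I owe you if my reason is ruled illegitimate
) -> float:
    """Expected benefit of exercising direct recourse, net of the
    uncompensated harm to you when I turn out to be wrong."""
    averted = p_xrisk * catastrophe_cost
    # Compensation is a transfer, so only the shortfall is an unrepaired harm.
    uncompensated = (1 - p_xrisk) * max(shutdown_loss - compensation, 0.0)
    return averted - uncompensated

# If compensation makes you whole, the max(...) term vanishes, and recourse
# is net-positive for any nonzero probability of catastrophe:
print(expected_net_benefit(0.01, 1e9, 1e6, 1e6))  # -> 10000000.0
```

The asymmetry the paragraph relies on lives in the max(...) term: a wrongful shutdown can be repaired with money afterward, while a wrongful catastrophe cannot, which is exactly why the scheme only works when the disruption itself is reversible.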