I think it’s exceedingly unlikely (<1%) that we robustly prevent anyone from {making an AI that kills everyone} without an aligned sovereign.