Right, if the ASI has Superalignment so baked in that it can’t be undone (somehow—ask the ASI to figure it out) then it couldn’t be used for offense. It would follow something like the Non-Aggression Principle.
In that scenario, OpenAI should release it onto a distributed inference blockchain before the NSA kicks in the door and seizes it.