In what way is that defensive? It involves creating and deploying a highly autonomous ASI agent into the world; if it is untrustworthy, that’s game over for everyone. I guess the idea is that it doesn’t involve breaking any current laws? Yes, I guess in that sense it’s defensive.
Right, if the ASI has Superalignment so baked in that it can’t be undone (somehow—ask the ASI to figure it out) then it couldn’t be used for offense. It would follow something like the Non-Aggression Principle.
In that scenario, OpenAI should release it onto a distributed inference blockchain before the NSA kicks in the door and seizes it.