The related thing that I do wish orgs would issue statements on is: "What are the circumstances under which it would make sense to pause unilaterally, even though all the race conditions still apply, because your work has gotten too dangerous?" That is, even if you think it's actually relatively safe to continue research and deployment now, if you're taking x-risk seriously as a concern, there should be some point at which an AGI model would be unsafe to deploy to the public, and a point at which it's unsafe even to run new training runs.
Each org should have some model of when that point likely is, and even with my cynical-political-world goggles on, I think it would be to their benefit to say so publicly.