I actually don’t know that I think this is helpful to push for now.
I do wish a “good version” of this would happen soon, but I think the version you’d be likely to get is one shaped by weird reputational concerns, where they don’t want to be seen by their investors as failing to race ahead as fast as possible (since their investors don’t understand the degree of danger involved).
(There’s also the bit where, since they’re labs pursuing AI in the first place, leadership would (in my opinion) probably just have takes on pausing that I think don’t make sense.)
And then, having written a public statement on it, they’d be more likely to stick to that statement, even if it’s nonsensical.
I do generally wish more orgs would speak more freely (even when I disagree with them), and I separately wish something about their strategic thinking process were different (though I’m not sure exactly what their thought process is at the moment, so I’m not sure how I wish it were different). But both of those seem like causal nodes further up the chain than “whether they engage publicly on this particular issue.”
The related thing I do wish orgs would issue statements on is “what are the circumstances in which it would make sense to pause unilaterally, even though all the race conditions still apply, because your work has gotten too dangerous?” I.e., even if you think it’s actually relatively safe to continue research and deployment now, if you’re taking x-risk seriously as a concern, there should be some point at which an AGI model would be unsafe to deploy to the public, and a point at which it’s unsafe even to run new training runs.
Each org should have some model of when that point likely is, and I think even with my cynical-political-world-goggles on, it should be to their benefit to say so publicly.