I am not an expert; however, I'd like to make a suggestion regarding the strategy. The issue I see with this approach is that policymakers have a very bad track record of listening to actual technical people (see environmental regulations).
Generally speaking, they will only listen when it is convenient for them (some immediate material benefit is on the table), or when there is very large popular support, in which case they will act in whatever way lets them expend the least effort they can get away with.
There is, however, one case where technical people can get their way (at times): military analysts.
Strategic analysts, to be more precise; apparently the very real threat of nuclear war is enough to actually get some things done. Nuclear weapons share some qualities with the AI systems envisioned by MIRI:
They can “end the world”
They have been successfully contained (only a small number of actors have access to them)
World-wide, industry-wide control over their development
At one point, there were serious discussions of halting development altogether
“Control” has persisted over long time periods
No rogue users (as of now)
I think military analysts could be a good target to reach out to; they are certainly more likely than policymakers to listen to and understand technical arguments, and they already have experience navigating the political world. In an ideal scenario, AI could be treated as another class of WMD, alongside nuclear, chemical, and biological weapons.
I absolutely agree that there is promise in reaching out to military analysts and explaining the national security implications to them. I very much disagree that AI has been successfully contained. The open-weights models being released currently seem to be only a couple of years behind the industry-controlled models. Thus, even if we regulate industry to get them to make their AIs behave safely, we haven't tackled the open-weights problem at all.
Halting the industrial development of AI would certainly slow it down, but it would very likely not stop development entirely.
So yes, the large-scale industrial development of AI is producing the most powerful results and is the most visible threat, but it is not the only threat. Millions of rogue users are currently training open-weights AIs on datasets of 'crime stories' demonstrating AI assistants aiding their users in committing crimes. This is part of the 'decensoring process'. Most of these users are just doing this for harmless fun, to make the AI into an interesting conversation partner. But it does have the side effect of making the model willing to help out with even dangerous projects, like helping terrorists develop weapons and plan attacks.
Seems right, thanks.