I agree that there is promise in reaching out to military analysts and explaining the national security implications to them. I strongly disagree, however, that AI is successfully contained. The open-weights models being released currently seem to be only a couple of years behind the industry-controlled models. Thus, even if we regulate industry to make their AIs behave safely, we haven’t tackled the open-weights problem at all.
Halting the industrial development of AI would certainly slow progress, but would very likely not halt development entirely.
So yes, the large-scale industrial development of AI is producing the most powerful results and is the most visible threat, but it is not the only threat. Millions of rogue users are currently fine-tuning open-weights AIs on datasets of ‘crime stories’ that depict AI assistants helping their users commit crimes. This is part of the ‘decensoring process’. Most of these users are doing this for harmless fun, to make the AI a more interesting conversation partner. But it has the side effect of making the model willing to help with even dangerous projects, such as helping terrorists develop weapons and plan attacks.