I’m really glad to see people take this problem seriously, and I’m frankly pretty disturbed that the conflict I sketched out may have ended up happening so soon after I first went public about it a month ago. I think your comment also gets big parts of the problem right, e.g. “events at the AI frontier are not something that the national security elite can just allow to unfold unattended”, which is well put.
But everything is on fire right now, and people have unfairly high standards for getting everything right the first time (which is even more unfair when the world is ending). As with learning a language, a system as obscure and complicated as this one required me to make tons of mistakes before I had something ready for presentation.
It’s an Occam’s razor violation. The simpler explanation is that the current chaos comes from human beings making mistakes. The OAI board detonating their own company does not achieve their stated goal of developing safe AGI.
If the ultimate reason here had to do with AI safety, it backfired: a diaspora from OAI spreads the knowledge around to all the competition.
As for national security, the current outcome is chaos. Why not just send the FBI to seize everything if the US government decided that AI is a security issue?
That ultimately has to be what happens. In an alternate history without a Manhattan Project, imagine GE’s energy research division in 1950 playing with weapons-grade U-235.
Eventually some researcher inside realizes you could make a really powerful bomb, not just compact nuclear power sources, and the government would have to seize all the U-235/H100s.
Finishing the analogy, the government would probably wait to act until GE causes a nuclear fizzle (a low-yield nuclear explosion), or until some other clear evidence that the danger is real becomes available.
if the US government decided that AI is a security issue
The US government considered AI a national security issue long before ChatGPT. But when it comes to American companies developing technologies of strategic interest, of course they’d prefer to work behind the scenes. If the powers that be deem that a company needs closer oversight, they would prefer the resulting changes to be viewed as purely a matter of corporate decision-making.
Whatever is actually going on with OpenAI, you can be sure that there are national security factions who care greatly about what happens to its technologies. The relevant decisions are not just business decisions.
That makes sense. It’s worth noting that the kind of AI OpenAI leads in is very different from the kind the military is most concerned with. So whoever in the government is concerned with this transition is probably less focused on the military implications and more on the economic and social impacts.
It’s also interesting, relative to government interest, that generative AI tools have thus far been deployed globally almost immediately, so the relative impact on competitive status is negligible.
Thus, to the extent the relevant government apparatus is composed of sensible individuals, we might hope that its concerns with this type of AI are similar to those of this community: economic disruption and the real, if strange, risk of permanently disempowering humanity.
Of course, hoping that the government is collectively sensible is highly questionable. But the security factions you’re addressing might be the more pragmatic and sensible parts of government.
This is the wrong way to think about this, and an even worse way to write about it.
I think this comment is closing the Overton window on extremely important concepts, in a really bad way. There probably isn’t a “deep state”; we can infer that the US natsec establishment is probably decentralized, à la lc’s Don’t take the organization chart literally, due to the structure of the human brain. I’ve argued that competent networks of people emerge stochastically based on technological advances like better lie-detection technology, whereas Gwern argued that competence mainly emerges from proximity to the core leadership and from leadership’s wavering focus.
I’ve argued pretty persuasively that there are good odds that excessively powerful people at tech companies and intelligence agencies might follow the national interest and hijack or damage the AI safety community in order to position themselves better to access AI tech.