It’s an Occam’s razor violation. The simple explanation is that the current chaos comes from human beings making mistakes. The OAI board detonating their own company does not achieve their stated goal of developing safe AGI.
If the ultimate reason here had to do with AI safety, a diaspora from OAI spreads the knowledge around to all the competition.
As for national security, the current outcomes are chaos. Why not just send the FBI to seize everything if the US government decided that AI is a security issue?
That ultimately has to be what happens. In an alternate history without a Manhattan Project, imagine GE’s energy research division in 1950 playing with weapons-grade U-235.
Eventually some researcher inside realizes you could make a really powerful bomb, not just compact nuclear power sources, and the government would have to seize all the U-235/H-100s.
Finishing the analogy, the government would probably wait to act until GE causes a nuclear fizzle (a low yield nuclear explosion) or some other clear evidence that the danger is real becomes available.
if the US government decided that AI is a security issue
The US government considered AI a national security issue long before ChatGPT. But when it comes to American companies developing technologies of strategic interest, of course they’d prefer to work behind the scenes. If the powers deem that a company needs closer oversight, they would prefer the resulting changes to be viewed as purely a matter of corporate decision-making.
Whatever is actually going on with OpenAI, you can be sure that there are national security factions who care greatly about what happens to its technologies. The relevant decisions are not just business decisions.
That makes sense. It’s worth noting that the AI OpenAI leads in is very different from the type the military is most concerned with. So whoever in the government is concerned with this transition is less likely to be worried about the military implications, and more about the economic and social impacts.
It’s also interesting, relative to government interest, that thus far generative AI tools have been immediately deployed globally, so the relative impact on competitive status is negligible.
Thus, we might actually hope that to the extent the relevant government apparatus is composed of sensible individuals, its concerns with this type of AI might actually be similar to those of this community: economic disruption and the real if strange risk of disempowering humanity permanently.
Of course, hoping that the government is collectively sensible is highly questionable. But the security factions you’re addressing might be the more pragmatic and sensible parts of government.