I have devised a conspiracy theory that this is all a planned double-cross of EAs by the deep state. EAs thought they had a governance structure that could stop a company from developing dangerous frontier AI. Instead, the board’s actions have provided an opportunity to reel in the unpredictable OpenAI and bring its research activities under the aegis of trusted government partner Microsoft, while the need to do this can be blamed on EA zealotry rendering an independent OpenAI untenable.
I am not particularly committed to this theory, but it might be an echo of the truth. As @trevor keeps saying, events at the AI frontier are not something that the national security elite can just allow to unfold unattended.
This is the wrong way to think about this, and an even worse way to write about this.
I think this comment is closing the Overton window on extremely important concepts, in a really bad way. There probably isn’t a “deep state”; we can infer that the US NatSec establishment is probably decentralized (a la lc’s Don’t take the organization chart literally) due to the structure of the human brain. I’ve argued that competent networks of people emerge stochastically based on technological advances like better lie-detection technology, whereas Gwern argued that competence mainly emerges from proximity to the core leadership and from leadership’s wavering focus.
I’ve argued pretty persuasively that there are good odds that excessively powerful people at tech companies and intelligence agencies might follow the national interest, and hijack or damage the AI safety community in order to position themselves for better access to AI tech.
I’m really glad to see people take this problem seriously, and I’m frankly pretty disturbed that the conflict I sketched out might have ended up happening so soon after I first went public about it a month ago. I think your comment also gets big parts of the problem right e.g. “events at the AI frontier are not something that the national security elite can just allow to unfold unattended” which is well-put.
But everything is on fire right now, and people have unfairly high standards for getting everything right the first time (standards which are even more unfair given that the world is ending). As with learning a language, a system as obscure and complicated as this required me to make tons of mistakes before I had something ready for presentation.
It’s an Occam’s razor violation. The simple explanation is that the current chaos comes from human beings making mistakes. The OAI board detonating their own company does not achieve their stated goal of developing safe AGI.
If the ultimate reason here had to do with AI safety, a diaspora from OAI spreads the knowledge around to all the competition.
As for national security, the current outcomes are chaos. Why not just send the FBI to seize everything if the US government decided that AI is a security issue?
That ultimately has to be what happens. In an alternate history without a Manhattan Project, imagine GE’s energy research division in 1950 playing with weapons-grade U-235.
Eventually some researcher inside realizes you could make a really powerful bomb, not just compact nuclear power sources, and the government would have to seize all the U-235/H100s.
Finishing the analogy, the government would probably wait to act until GE causes a nuclear fizzle (a low-yield nuclear explosion) or some other clear evidence that the danger is real becomes available.
The US government considered AI a national security issue long before ChatGPT. But when it comes to American companies developing technologies of strategic interest, of course they’d prefer to work behind the scenes. If the powers that be deem that a company needs closer oversight, they would prefer the resulting changes to be viewed as purely a matter of corporate decision-making.
Whatever is actually going on with OpenAI, you can be sure that there are national security factions who care greatly about what happens to its technologies. The relevant decisions are not just business decisions.
That makes sense. It’s worth noting that the kind of AI OpenAI leads in is very different from the kind the military is most concerned with. So whoever in the government is concerned with this transition is less likely to be concerned with the military implications, and more with the economic and social impacts.
It’s also interesting, relative to government interest, that generative AI tools have thus far been deployed globally immediately upon release, so the relative impact on competitive status is negligible.
Thus, we might actually hope that to the extent the relevant government apparatus is composed of sensible individuals, its concerns with this type of AI might actually be similar to those of this community: economic disruption and the real if strange risk of disempowering humanity permanently.
Of course, hoping that the government is collectively sensible is highly questionable. But the security factions you’re addressing might be the more pragmatic and sensible parts of government.
You are attributing a lot more deviousness and strategic boldness to the so-called deep state than the US government is organizationally capable of. The CIA may have tried a few things like this in banana republics but there’s just no way anybody could pull it off domestically.
This is a good point: much of the data we have comes from leaked operations in South America (e.g. via the Church Committee hearings), and CIA operations are probably much easier there than on American soil.
However, there are also different kinds of systems pointed inward which look more like normal power games, e.g. FBI informants, or lobbyists forming complex arrangements (much as their lawyer counterparts develop clever value-handshake-like agreements to settle out of court). It shouldn’t be surprising that domestic ops are more complicated and look like ordinary domestic power plays (possibly occasionally augmented by advanced technology).
The profit motive alone could motivate Microsoft execs to leverage their access to advanced technology to get a better outcome for Microsoft. I was pretty surprised by the possibility that Silicon Valley VCs alone could set up sophisticated operations, e.g. using pre-established connections to journalists to leak false information, or access to large tech companies with manipulation capabilities (e.g. Andreessen Horowitz’s access to Facebook’s manipulation research).
I don’t understand the mechanism of the double-cross here. How would they get the pro-EA and safety side to trigger the crisis? And why would the safety/EA people, who are trying to make everything more predictable and controllable, be the ones who are purged from influence?
One explanation that comes to mind is that AI already offers extremely powerful manipulation capabilities and governments are already racing to acquire these capabilities.
I’m very confused about the events that have been taking place, but one factor about which I have very little doubt is that the NSA has acquired access to smartphone operating systems and smartphone microphones throughout the OpenAI building (it’s just one building, and a really important one, so it’s also reasonably likely to have been bugged). Whether they were doing anything with that access is much less clear.
The EAs on the board have national security ties.
Seems unlikely they’d need to communicate at all. The stock market is plenty of pressure on Microsoft.
Seeing repeated theories about the “deep state” from supposed EA members is not a good look for EA, especially now.
The mundane explanation: a bunch of inexperienced people, some of whom probably should never have been on the board, severely miscalculated the consequences of their actions due to a combination of goodwill, ego, and greed.
Nothing wrong with hypothesizing about it, as long as one doesn’t over-update on it.
This does seem vastly more likely. Why would “the deep state” be anti-EA or anti-AI safety? Or organizing complex shenanigans to pursue those values?
I never attribute to malice what is explainable by foolishness.
The US NatSec community (which is probably decentralized, and not a “deep state”) has a very strong interest in accelerating AI faster than China and Russia, e.g. for use in military hardware like cruise missiles, for economic growth in an era of technological stagnation, and for defending against/counteracting/mitigating SOTA foreign influence operations, e.g. Russian botnets that use AI and user data for targeted manipulation. Current-gen AI is pretty well known to be highly valuable for these uses.
This is what makes “the super dangerous people who already badly want AI” one major hypothesis, but not at all the default explanation. Considering that the party who seems to be benefiting the most is Microsoft (which AFAIK probably has the strongest ties to the military of the big five tech companies), this is pretty clearly worth considering.
The US NatSec community doesn’t know that the US (and Britain) are with probability = .99 at least 8 years ahead of China and Russia in AI?