I’m noticing my confusion about the level of support here. Kara Swisher says that these are 505⁄700 employees, but the OpenAI publication I’m most familiar with is the autointerpretability paper, and none (!) of the core research contributors to that paper signed this letter. Why would a large fraction of the company be anti-board/pro-Sam while 0⁄6 of this team signed (discounting Henk Tillman because he seems to work for Apple instead of OpenAI)? The only authors on that paper who signed the letter are Gabriel Goh and Ilya Sutskever. So is the alignment team unusually pro-board/anti-Sam, or are the 505 just not that large a faction in the company?
[Editing to add a link to the pdf of the letter, which is how I checked who signed: https://s3.documentcloud.org/documents/24172246/letter-to-the-openai-board-google-docs.pdf ]
There is an updated list of 702 who have signed the letter (as of the time I’m writing this) here: https://www.nytimes.com/interactive/2023/11/20/technology/letter-to-the-open-ai-board.html (direct link to pdf: https://static01.nyt.com/newsgraphics/documenttools/f31ff522a5b1ad7a/9cf7eda3-full.pdf)
Nick Cammarata left OpenAI ~8 weeks ago, so he couldn’t have signed the letter.
Out of the remaining 6 core research contributors:
3⁄6 have signed it: Steven Bills, Dan Mossing, and Henk Tillman
3⁄6 have still not signed it: Leo Gao, Jeff Wu, and William Saunders
Out of the non-core research contributors:
2⁄3 signed it: Gabriel Goh and Ilya Sutskever
1⁄3 still have not signed it: Jan Leike
That being said, it looks like Jan Leike has tweeted that he thinks the board should resign: https://twitter.com/janleike/status/1726600432750125146
And that tweet was liked by Leo Gao: https://twitter.com/nabla_theta/likes
Still, it is interesting that this group is clearly underrepresented among people who have actually signed the letter.
Edit: Updated to note that Nick Cammarata is no longer at OpenAI, so he couldn’t have signed the letter. For what it’s worth, he has liked at least one tweet that called for the board to resign: https://twitter.com/nickcammarata/likes
Cammarata says he quit OA ~8 weeks ago, so he couldn’t have signed it: https://twitter.com/nickcammarata/status/1725939131736633579
Ah, nice catch, I’ll update my comment.
So it’s been falsified? Isn’t that a pretty big deal against the source, or whoever purports the letter to be 100% genuine?
I believe Nick was initially mentioned as someone who wasn’t on the letter
Apparently I read too quickly and missed the point, which the parent comment has since added explicitly.
No, the letter has not been falsified.
Just to clarify: ~700 out of ~770 OpenAI employees have signed the letter (~90%)
Out of the 10 authors of the autointerpretability paper, only 5 have signed the letter. This is much lower than the average rate. One out of the 10 is no longer at OpenAI, so couldn’t have signed it, so it makes sense to count this as 5⁄9 rather than 5⁄10. Either way, it’s still well below the average rate.
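To put a rough number on “well below the average rate”: here is a minimal back-of-the-envelope sketch (not from the original comments), assuming each of the 9 remaining authors would have signed independently at the company-wide rate of ~700/770. Under that simplifying assumption, seeing 5 or fewer signatures out of 9 is quite unlikely:

```python
from math import comb

n, k = 9, 5       # 9 paper authors still at OpenAI, 5 of whom signed
p = 700 / 770     # approximate company-wide signing rate (~0.91)

# P(at most k of n sign), assuming each author signs independently at rate p
p_at_most_k = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
print(f"P(<= {k} of {n} sign at base rate {p:.2f}) ~ {p_at_most_k:.3f}")  # well under 1%
```

Independence is of course a strong assumption for a small, closely-knit team, so this is only a sanity check on the “clearly underrepresented” observation, not evidence of anything further.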
The only evidence I’ve seen so far that this is real is Kara Swisher’s (who?) word, plus not having heard a refutation yet, and neither of those things is very reassuring given that Kara’s thread bears The Mark:
I am also confused. It would make me happy if we got some relevant information about this in the coming days.
I was annoyed by the lack of actual proof that the petition to follow Sam was signed by anyone from OpenAI; all it would really take is linking a single endorsement (or just an acknowledgement of its existence) from a signatory. So I asked around, and here is one such tweet: https://twitter.com/E0M/status/1726743918023496140
A research team’s ability to design a robust corporate structure doesn’t necessarily predict their ability to solve a hard technical problem. Maybe there’s some overlap, but machine learning and philosophy are different fields than business. Also, I suspect that the people doing the AI alignment research at OpenAI are not the same people who designed the corporate structure (but this might be wrong).
Welcome to LessWrong! Sorry for the harsh greeting. Standards of discourse are higher than other places on the internet, so quips usually aren’t well-tolerated (even if they have some element of truth).
I have devised a conspiracy theory that this is all a planned double-cross of EAs by the deep state. EAs thought they had a governance structure that could stop a company from developing dangerous frontier AI. Instead, the board’s actions have provided an opportunity to reel in the unpredictable OpenAI and bring its research activities under the aegis of trusted government partner Microsoft, while the need to do this can be blamed on EA zealotry rendering independent OpenAI untenable.
I am not particularly committed to this theory, but it might be an echo of the truth. As @trevor keeps saying, events at the AI frontier are not something that the national security elite can just allow to unfold unattended.
This is the wrong way to think about this, and an even worse way to write about this.
I think this comment is closing the Overton window on extremely important concepts, in a really bad way. There probably isn’t a “deep state”; we can infer that the US Natsec establishment is probably decentralized, a la lc’s Don’t take the organization chart literally, due to the structure of the human brain. I’ve argued that competent networks of people emerge stochastically based on technological advances like better lie-detection technology, whereas Gwern argued that competence mainly emerges based on proximity to the core leadership and leadership’s wavering focus.
I’ve argued pretty persuasively that there are good odds that excessively powerful people at tech companies and intelligence agencies might follow the national interest, and hijack or damage the AI safety community in order to get better positioned to access AI tech.
I’m really glad to see people take this problem seriously, and I’m frankly pretty disturbed that the conflict I sketched out might have ended up happening so soon after I first went public about it a month ago. I think your comment also gets big parts of the problem right e.g. “events at the AI frontier are not something that the national security elite can just allow to unfold unattended” which is well-put.
But everything is on fire right now, and people have unfairly high standards for getting everything right the first time (it is even more unfair for the world, which is ending). Like learning a language, understanding a system as obscure and complicated as this required me to make tons of mistakes before I had something ready for presentation.
It’s an Occam’s razor violation. The simple explanation is that the current chaos comes from human beings making mistakes. The OAI board detonating their own company does not achieve their stated goals of developing safe AGI.
If the ultimate reason here had to do with AI safety, a diaspora from OAI would spread the knowledge around to all the competition.
As for national security, the current outcomes are chaos. Why not just send the FBI to seize everything if the US government decided that AI is a security issue?
That ultimately has to be what happens. In an alternate history without a Manhattan Project, imagine GE’s energy research division in 1950 playing with weapons-grade U-235.
Eventually some researcher inside realizes you could make a really powerful bomb, not just compact nuclear power sources, and the government would have to seize all the U-235/H-100s.
Finishing the analogy, the government would probably wait to act until GE causes a nuclear fizzle (a low-yield nuclear explosion) or some other clear evidence that the danger is real becomes available.
The US government considered AI a national security issue long before ChatGPT. But when it comes to American companies developing technologies of strategic interest, of course they’d prefer to work behind the scenes. If the powers deem that a company needs closer oversight, they would prefer the resulting changes to be viewed as purely a matter of corporate decision-making.
Whatever is actually going on with OpenAI, you can be sure that there are national security factions who care greatly about what happens to its technologies. The relevant decisions are not just business decisions.
That makes sense. It’s worth noting that the AI OpenAI leads in is very different from the type the military is most concerned with. So whoever in the government is concerned with this transition is less likely to be focused on the military implications, and more on the economic and social impacts.
It’s also interesting, relative to government interest, that generative AI tools have thus far been deployed globally almost immediately, so the relative impact on competitive status is negligible.
Thus, we might actually hope that to the extent the relevant government apparatus is composed of sensible individuals, its concerns with this type of AI might actually be similar to those of this community: economic disruption and the real if strange risk of disempowering humanity permanently.
Of course, hoping that the government is collectively sensible is highly questionable. But the security factions you’re addressing might be the more pragmatic and sensible parts of government.
You are attributing a lot more deviousness and strategic boldness to the so-called deep state than the US government is organizationally capable of. The CIA may have tried a few things like this in banana republics but there’s just no way anybody could pull it off domestically.
This is a good point: much of the data we have comes from leaked operations in South America (e.g. the Church Committee hearings), and CIA operations are probably much easier there than on American soil.
However, there are also different kinds of systems pointed inward which look more like normal power games e.g. FBI informants, or lobbyists forming complex agreements/arrangements (like how their lawyer counterparts develop clever value-handshake-like agreements/arrangements to settle out-of-court). It shouldn’t be surprising that domestic ops are more complicated and look like ordinary domestic power plays (possibly occasionally augmented by advanced technology).
The profit motive alone could motivate Microsoft execs to leverage their access to advanced technology to get a better outcome for Microsoft. I was pretty surprised by the possibility that Silicon Valley VCs alone could potentially set up sophisticated operations, e.g. using pre-established connections to journalists to leak false information, or using access to large tech companies with manipulation capabilities (e.g. Andreessen Horowitz’s access to Facebook’s manipulation research).
I don’t understand the mechanism of the double-cross here. How would they get the pro-EA and safety side to trigger the crisis? And why would the safety/EA people, who are trying to make everything more predictable and controllable, be the ones who are purged from influence?
One explanation that comes to mind is that AI already offers extremely powerful manipulation capabilities and governments are already racing to acquire these capabilities.
I’m very confused about the events that have been taking place, but one thing I have very little doubt about is that the NSA has acquired access to smartphone operating systems and smartphone microphones throughout the OpenAI building (it’s just one building, and a really important one, so it’s reasonably likely that it has also been bugged). Whether they were doing anything with that access is much less clear.
The EAs on the board have national security ties.
Seems unlikely they’d need to communicate at all. The stock market puts plenty of pressure on Microsoft.
Seeing repeated theories about the “deep state” from supposed EA members is not a good look for EA, especially now.
The mundane explanation: a bunch of inexperienced people, some of whom probably should never have been on the board, severely miscalculated the consequences of their actions due to a combination of goodwill, ego, and greed.
Nothing wrong with hypothesizing about it, as long as one doesn’t over-update on it.
This does seem vastly more likely. Why would “the deep state” be anti-EA or anti-AI safety? Or organizing complex shenanigans to pursue those values?
I never attribute to malice what is explainable by foolishness.
The US Natsec community (which is probably decentralized and not a “deep state”) has a very strong interest in accelerating AI faster than China and Russia, e.g. for use in military hardware like cruise missiles, for economic growth in an era of technological stagnation, and for defending against/counteracting/mitigating SOTA foreign influence operations, e.g. Russian botnets that use AI and user data for targeted manipulation. Current-gen AI is pretty well known to be highly valuable for these uses.
This is what makes “the super dangerous people who already badly want AI” one major hypothesis, but not at all the default explanation. Considering that the party who seems to be benefiting the most is Microsoft (which AFAIK probably has the strongest ties to the military out of the big 5 tech companies), this is pretty clearly worth consideration.
The US NatSec community doesn’t know that the US (and Britain) are, with probability .99, at least 8 years ahead of China and Russia in AI?