Sam Altman has made many enemies in his tenure at OpenAI. One of them is Elon Musk, who feels betrayed by OpenAI and has filed (so far unsuccessful) lawsuits against the company. I previously wrote this off as Musk considering the org too “woke”, but Altman’s recent behavior has made me wonder if it was more of a personal betrayal. Altman took Musk’s money, intended for an AI safety non-profit, and is currently converting it into enormous personal equity, all the while de-emphasizing AI safety research.
Musk now has the ear of the President-elect. Vice-President-elect JD Vance is also associated with Peter Thiel, whose ties with Musk go all the way back to PayPal. Has there been any analysis on the impact this may have on OpenAI’s ongoing restructuring? What might happen if the DOJ turns hostile?
[The following was added after the initial post]
I would add that convincing Musk to take action against Altman is the highest ROI thing I can think of in terms of decreasing AI extinction risk.
Internal Tech Emails on X: “Sam Altman emails Elon Musk May 25, 2015” https://t.co/L1F5bMkqkd
In the email above, clearly stated, is a line of reasoning that has led very competent people to work extremely hard to build potentially-omnicidal machines.
Absolutely true.
But also, Altman’s actions since then are very clearly counter to the spirit of that email. I could imagine a version of this plan, executed with earnestness and attempted cooperativeness, that wasn’t nearly as harmful (though still pretty bad, probably).
Part of the problem is that “we should build it first, before the less trustworthy” is a meme that universalizes terribly.
Part of the problem is that Sam Altman was not actually sincere in the execution of that sentiment, regardless of how sincere his original intentions were.
It’s not clear to me that there was actually an option to build a $100B company with competent people around the world who would’ve been united in conditionally shutting down and unconditionally pushing for regulation. I don’t know that the culture and concepts of people who do a lot of this work in the business world would allow for such a plan to be actively worked on.
You may be right. Maybe the top talent wouldn’t have gotten on board with that mission, and so it wouldn’t have gotten top talent.
I bet Ilya would have been in for that mission, and I think a surprisingly large number of other top researchers might have been in for it as well. Obviously we’ll never know.
And I think if the founders are committed to a mission, and they reaffirm their commitment in every meeting, they can go surprisingly far in making it the culture of the org.
Maybe there’s hope there, but I’ll point out that many of the people needed to run a business (finance, legal, product, etc.) are not idealistic scientists who would be willing to have their equity become worthless.
Those people don’t get substantial equity in most businesses in the world. They generally get paid a salary and benefits in exchange for their work, and that’s about it.
I know little enough that I don’t know whether this statement is true. I would’ve guessed that in most $10B companies anyone with a title like “CFO”, “CTO”, or “COO” is paid primarily in equity, but perhaps this is mostly true of a few companies I’ve looked into more (like Amazon).
Ilya is demonstrably not in on that mission, since his step immediately after leaving OpenAI was to found an additional AGI company and thus increase x-risk.
I don’t think that’s a valid inference.
Also, Sam Altman is a pretty impressive guy. I wonder what would have happened if he had decided to try to stop humanity from building AGI, instead of trying to be the one to build it before Google.
That might very well help, yes. However, two thoughts, neither at all well thought out:
If the Trump administration does fight OpenAI, let’s hope Altman doesn’t manage to judo flip the situation like he did with the OpenAI board saga, and somehow magically end up replacing Musk or Trump in the upcoming administration...
Musk’s own track record on AI x-risk is not great. I guess he did endorse California’s SB 1047, so that’s better than OpenAI’s current position. But he helped found OpenAI, and recently founded another AI company. There’s a scenario where we just trade extinction risk from Altman’s OpenAI for extinction risk from Musk’s xAI.
Potentially a hot take, but I feel like xAI’s contributions to race dynamics (at least thus far) have been relatively trivial. I am usually skeptical of the whole “I need to start an AI company to have a seat at the table”, but I do imagine that Elon owning an AI company strengthens his voice. And I think his AI-related comms have mostly been used to (a) raise awareness about AI risk, (b) raise concerns about OpenAI/Altman, and (c) endorse SB 1047 [which he did even faster and less ambiguously than Anthropic].
The counterargument here is that maybe if xAI were in first place, Elon’s positions would shift. I find this plausible, but I also find it plausible that Musk (a) actually cares a lot about AI safety, (b) doesn’t trust the other players in the race, and (c) is more likely to use his influence to help policymakers understand AI risk than any of the other lab CEOs.
I’m sympathetic to Musk being genuinely worried about AI safety. My problem is that one of his first actions after learning about AI safety was to found OpenAI, and that hasn’t worked out very well. Not just due to Altman; even the “Open” part was a highly questionable goal. Hopefully Musk’s future actions in this area would have positive EV, but still.
I think that xAI’s contributions have been minimal so far, but that could shift. Apparently they have a very ambitious data center coming up, and are scaling up research efforts quickly. Seems very accelerate-y.
But he helped found OpenAI, and recently founded another AI company.
I think Elon’s strategy of “telling the world not to build AGI, and then going to start another AGI company himself” is much less dumb / ethically fraught than people often credit.
If Trump dies, Vance is in charge, and he’s previously espoused a bland form of e/acc.
I keep thinking: Everything depends on whether Elon and JD can be friends.
I don’t think Vance is e/acc. He has said positive things about open source, but consider that the context was specifically about censorship and political bias in contemporary LLMs (bolding mine):
There are undoubtedly risks related to AI. One of the biggest:
A partisan group of crazy people use AI to infect every part of the information economy with left wing bias. Gemini can’t produce accurate history. ChatGPT promotes genocidal concepts.
The solution is open source
If Vinod really believes AI is as dangerous as a nuclear weapon, why does ChatGPT have such an insane political bias? If you wanted to promote bipartisan efforts to regulate for safety, it’s entirely counterproductive.
Any moderate or conservative who goes along with this obvious effort to entrench insane left-wing businesses is a useful idiot.
I’m not handing out favors to industrial-scale DEI bullshit because tech people are complaining about safety.
The words I’ve bolded indicate that Vance is at least peripherally aware that the “tech people [...] complaining about safety” are a different constituency than the “DEI bullshit” he deplores. If future developments or rhetorical innovations persuade him that extinction risk is a serious concern, it seems likely that he’d be on board with “bipartisan efforts to regulate for safety.”
How would removing Sam Altman significantly reduce extinction risk? Conditional on AI alignment being hard and Doom likely, the exact identity of the Shoggoth Summoner seems immaterial.
Just as one example, OpenAI was against SB 1047, whereas Musk was for it. I’m not optimistic about regulation being enough to save us, but presumably it would help, and some AI companies like OpenAI were against even the limited regulations of SB 1047. SB 1047 also included things like whistleblower protections, and that’s the kind of thing that could help policymakers make better decisions in the future.
I would expect the issue isn’t convincing Musk to take action, but finding effective actions that Musk could take.
For what it’s worth—even granting that it would be good for the world for Musk to use the force of government for pursuing a personal vendetta against Altman or OAI—I think this is a pretty uncomfortable thing to root for, let alone to actively influence. I think this for the same reason that I think it’s uncomfortable to hope for—and immoral to contribute to—assassination of political leaders, even assuming that their assassination would be net good.
I don’t understand the reference to assassination. Presumably there are already laws on the books that outlaw trying to destroy the world (?), so it would be enough to apply those to AGI companies.
Notably, no law I know of allows you to take legal action on a hunch that someone might destroy the world just because your probability of them doing so is high, without them having taken any harmful actions (and no, building AI doesn’t count here).
What if whistleblowers and internal documents corroborated that they think what they’re doing could destroy the world?
Maybe there’s a case there, but I doubt it would get past a jury, let alone result in any guilty verdicts.
I’m quite happy for laws to be passed and enforced via the normal mechanisms. But I think it’s bad for policy and enforcement to be determined by Elon Musk’s personal vendettas. If Elon tried to defund the AI safety institute because of a personal vendetta against AI safety researchers, I would have some process concerns, and so I also have process concerns when these vendettas are directed against OAI.