It’s interesting to me that the big AI CEOs have largely conceded that AGI/ASI could be extremely dangerous (but aren’t taking sufficient action given this view IMO), as opposed to them just denying that the risk is plausible. My intuition is that the latter is more strategic if they were just trying to have license to do what they want. (For instance, my impression is that energy companies delayed climate action pretty significantly by not yielding at first on whether climate change is even a real concern.)
I guess maybe the AI folks are walking a strategic middle ground? They concede there could be some huge risk, but then also sometimes say things like ‘risk assessment should be evidence-based,’ with the implication that current concerns aren’t rigorous. And maybe that’s more strategic than either world above?
But really it seems to me like they’re probably earnest in their views about the risks (or at least once were). So political action that treats their concern as disingenuous is probably wrongheaded; it seems better to model them as ‘really concerned but facing very constrained useful actions’.
You’re not accounting for enemy action. They couldn’t have been sure, at the outset, how successful the AI Notkilleveryoneism faction would be at raising alarm, or, more generally, how blatant the risks would become to outsiders as capabilities progressed. And they have been intimately familiar with the relevant discussions, after all.
So they might’ve overcorrected, and considered that the “strategic middle ground” would be to admit the risk is plausible (but not as certain as the “doomers” say), rather than to deny it (which they might’ve expected to become a delusional-looking position in the future, so not a PR-friendly stance to take).
Or, at least, I think this could’ve been a relevant factor there.
My model is that Sam Altman regarded the EA world as a memetic threat, early on, and took actions to defuse that threat by paying lip service / taking openphil money / hiring prominent AI safety people for AI safety teams.
Like, possibly the EAs could have created a widespread vibe that building AGI is a cartoon-evil thing to do, sort of the way many people think of working for a tobacco company or an oil company.
Then, after ChatGPT, OpenAI was a much bigger fish than the EAs or the rationalists, and he began taking moves to extricate himself from them.
Sam Altman posted Machine intelligence, part 1[1] on February 25th, 2015. This is admittedly after the FLI conference in Puerto Rico, which is reportedly where Elon Musk was inspired to start OpenAI (though I can’t find a reference substantiating his interaction with Demis as the specific trigger), but there is other reporting suggesting that OpenAI was only properly conceived later in the year, and Sam Altman wasn’t at the FLI conference himself. (Also, it’d surprise me a bit if it took nearly a year, i.e. from Jan 2nd[2] to Dec 11th[3], for OpenAI to go from “conceived of” to “existing”.)
[1] The post with the famous “Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity.” quote.
[2] The FLI conference.
[3] OpenAI’s public founding.
Is this taken to be a counterpoint to my story above? I’m not sure exactly how it’s related.
Yes:
In the context of the thread, I took this to suggest that Sam Altman never had any genuine concern about x-risk from AI, or, at a minimum, that any such concern was dominated by the social maneuvering you’re describing. That seems implausible to me given that he publicly expressed concern about x-risk from AI 10 months before OpenAI was publicly founded, and possibly several months before it was even conceived.
I didn’t mean to claim that he never had any genuine concern. My guess is that he probably did have genuine concern (though not necessarily that that was his main motivation for founding OpenAI).
Just guessing, but maybe admitting the danger is strategically useful, because it may result in regulations that hurt potential competitors more. Regulations often impose fixed costs (such as paying a specialized team to produce paperwork on environmental impacts), which are manageable when you are already making millions.
I imagine someone might figure out a way to make AI much cheaper, maybe by sacrificing generality. For example (this probably doesn’t make sense, but): would it be possible to train an LLM only on Python code, as opposed to the entire internet, and produce an AI that is only a Python code autocomplete? If it could be 1000x cheaper, you could make a startup without having to build a new power plant. Imagine that you add some special sauce to the algorithm: for example, the AI always internally writes unit tests, which visibly increases the correctness of the generated code; or it is some combination of the ancient “expert system” approach with the new LLM approach, where the LLM trains the expert system and the expert system then provides feedback to the LLM. Then you would be able to sell your narrow AI even when more general AIs are available. And once you start selling it, you have income, which means you can expand the functionality.
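To make the “internally write unit tests” part concrete, here is a minimal sketch of the kind of loop I mean. generate_code and generate_tests are hypothetical placeholders for calls to whatever narrow Python-only model you trained; nothing here is a real product or API.

```python
# Minimal sketch of a "generate code, generate unit tests, only return code
# that passes its own tests" loop. generate_code and generate_tests are
# placeholders for calls to a hypothetical narrow Python-only model.

def generate_code(spec: str, attempt: int) -> str:
    """Placeholder: ask the narrow model for a candidate implementation."""
    raise NotImplementedError

def generate_tests(spec: str) -> str:
    """Placeholder: ask the narrow model for unit tests derived from the spec."""
    raise NotImplementedError

def passes_own_tests(code: str, tests: str) -> bool:
    """Run the model-written tests against the model-written code."""
    namespace: dict = {}
    try:
        exec(code, namespace)   # define the candidate function(s)
        exec(tests, namespace)  # model-written tests raise AssertionError on failure
        return True
    except Exception:
        return False

def autocomplete(spec: str, max_attempts: int = 5) -> str | None:
    """Return the first candidate that passes the model's own tests, else None."""
    tests = generate_tests(spec)
    for attempt in range(max_attempts):
        candidate = generate_code(spec, attempt)
        if passes_own_tests(candidate, tests):
            return candidate
    return None
```

The “special sauce” here is just cheap classical machinery (run the model’s own tests, retry on failure) wrapped around a small model, which is roughly why such a narrow tool could plausibly keep selling in its niche even when more general AIs are available.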
It is better to have a consensus that such things are too dangerous to leave in the hands of startups that can’t already lobby the government.
Hey, I am happy that the CEOs admit that the dangers exist. But if they are only doing it to secure their profits, it will probably warp their interpretations of what exactly the risks are and of what a good way to reduce them would be.
My sense of things is that OpenAI, at least, appears to be lobbying against regulation more than they are lobbying for it?
To me, this seems consistent with just maximizing shareholder value.
Salaries and compute are the largest expenses at big AI firms, and “being the good guys” lets you get the best people at significant discounts. To my understanding, one of the greatest early successes of OpenAI was hiring great talent for cheap because they were “the non-profit good guys who cared about safety”. Later, great people like John Schulman left OpenAI for Anthropic, with Schulman citing his “desire to deepen my focus on AI alignment”.
As for people thinking you’re a potential x-risk, the downsides seem mostly solved by “if we didn’t do it somebody less responsible would”. AI safety policy interventions could also give great moats against competition, especially for the leading firm(s). Furthermore, much of the “AI alignment research” they invest in prevents PR disasters (terrorist used ChatGPT to invent dangerous bio-weapon) and most of the “interpretability” they invest in seems pretty close to R&D which they would invest in anyway to improve capabilities.
This might sound overly pessimistic. However, it can be viewed positively: there is significant overlap between the interests of big AI firms and the AI safety community.
This is pretty different from my model of what happened with OpenAI or Anthropic—especially the latter, where the founding team left huge equity value on the table by departing (OpenAI’s equity had already appreciated something like 10x between the first MSFT funding round and EOY 2020, when they departed).
And even for Sam and OpenAI, this would seem like a kind of wild strategy for pursuing wealth, given the network and opportunities he already had pre-OpenAI?
With the change to a for-profit structure and Sam receiving equity, it seems like the strategy will pay off. However, this might be hindsight bias, or I might otherwise have too simplified a view.