I stumbled upon this post <3 years later and—wow, seems like we’re very close to some fire alarms!
Expert Opinion
The world gradually notices that each of these new leaders express at least as much concern about the risks of AI as prior researchers, and that they increasingly agree that AI will pass human levels sometime in the 2030s.
Would Hinton and Bengio tick this box?
By the late 2020s, there’s a clear enough consensus among the recognized AI experts that no major news organization is willing to deny that AI represents an important source of near-term risk.
I don’t think we’re quite at the point where no major news organization is willing to deny AI risk, but the CAIS statement was signed by a bunch of industry leaders and endorsed by leading politicians, including the UK PM. Even if there isn’t full consensus on AI x-risk, my guess is that the people “willing to deny that AI represents an important source of near-term risk” are a small minority.
Venture Capital
I find it easy to imagine that a couple of years before we get AGI, we’ll see VCs making multi-billion dollar investments in brand new AI companies, and large tech companies occasionally acquiring them for tens of billions.
SSI raised $1 billion with basically no product, whilst $4bn Inflection was (de facto) acquired by Microsoft. So this hasn’t quite yet happened, but we’re pretty close!
Corporate Budgets
Companies such as DeepMind and OpenAI might throw around compute budgets of, say, $100 billion in one year.
OpenAI/Microsoft say they’ll spend $100bn on a US data center, and DeepMind also says it will spend $100bn on AI (though not exclusively on compute). Admittedly, these are multi-year projects.
Military Arms Race / Project Apollo Analogy
Military leaders become convinced that human-level AI is 5-10 years away, given a Manhattan Project-like effort, and that it will give an important military advantage to any country that achieves it.
This likely involves widespread expert agreement that throwing lots of compute at the problem will be an important factor in succeeding.
...
[The Apollo scenario] resembles the prior one, but leading nations are more clearly peaceful, and they’re more optimistic that they will be able to copy (or otherwise benefit from?) the first AI.
A leading nation (let’s say Canada) makes a massive AI project that hires the best and the brightest, and devotes more money to compute than could reasonably be expected from any other organization that the best and brightest would be willing to work for.
We’re definitely not at this point yet for either of these scenarios. However, the recent NSM signals a trend in this direction (e.g. “It is the policy of the United States Government to enhance innovation and competition by bolstering key drivers of AI progress, such as technical talent and computational power.”).
The Wright Brothers Analogy
This one is pretty hard to assess. It essentially looks like a world where it’s very difficult to tell when “AGI” is achieved, given the challenges with actually evaluating whether AI models are human-level. Which feels like where we are today? This line feels particularly accurate:
Thus there will be a significant period when one or more AI’s are able to exceed human abilities in a moderate number of fields, and that number of fields is growing at a moderate and hard to measure pace.
I expect this scenario would produce massive confusion about whether to be alarmed.
Worms
Stuxnet-like attacks could become more common if software can productively rewrite itself
AI-enabled cyber attacks are now discussed fairly often by governments (e.g. here), though it’s unclear how much this is already happening versus predicted to happen in the future.
Most of the rest—Conspicuously Malicious AI Assistants, Free Guy, Age of Em, AI Politicians, AI’s Manipulate Politics—have not happened. No warning shot here yet.
Finally,
Turing Test Alarm, Snooze Button Edition
GPT-7 passes a Turing test, but isn’t smart enough to do anything dangerous. People interpret that as evidence that AI is safe.
This feels like pretty much where we’re at, only with GPT-4.
--
Overall, my sense is there are clear “warning shots of warning shots”—we have clear signs that warning scenarios are not far off at all (and arguably some have already happened). However, the connection between warning sign and response feels pretty random (with maybe the exception of the FLI letter + CAIS statement + Hinton’s resignation triggering a lot more attention to safety during the Spring of 2023).
I’m not sure what to conclude from this. Maybe it’s “plan for relatively mundane scenarios”, since these are the ones with the clearest evidence of triggering a response.