A better “Statement on AI Risk”?

Remember the “Statement on AI Risk,” which was signed by many experts and influenced governments? Let’s write a new, stronger statement for experts to sign:

Statement on AI Inconsistency (v1.0us):

1: ASI threatens the US (and NATO) as much as all military threats combined. Why does the US spend $800 billion/year on its military but less than $0.1 billion/year on AI alignment/safety?

2: ASI being equally dangerous isn’t an extreme opinion: the median superforecaster sees a 2.1% chance of an AI catastrophe (killing 1 in 10 people), the median AI expert sees 5%–12%, other experts see 6%, and the general public sees 5%. To justify spending 8000 times less, you must be 99.999% sure there will be no AI catastrophe, and thus 99.95% sure that you wouldn’t realize you were wrong and the majority of experts were right (if you studied the disagreement further).

3: “But military spending isn’t just for protecting NATO; it protects other countries far more likely to be invaded.” Even for those countries, the threat from ASI is not 8000 times smaller than the threat of invasion. And US foreign aid—including Ukrainian aid—is only $100 billion/year, so protecting them can’t be the real reason for military spending.

4: The real reason for the 8000-fold difference is habit, habit, and habit. Concerns about foreign invasion have decreased decade by decade, and concerns about ASI have increased year by year, but budgets have stayed at the status quo, creating a massive inconsistency between belief and behaviour.

5: Do not let humanity’s story be so heartbreaking.

We are one or two anonymous guys with zero connections, zero resources, zero experience.

We need an organization to publish it on their website and to contact the AI experts and others who might sign it. We would prefer an organization like the Future of Life Institute (which wrote the “Pause Giant AI Experiments” letter) or the Center for AI Safety (which wrote the Statement on AI Risk).

Help

We’ve sent an email to the Future of Life Institute, but our gut feeling is that they won’t reply to such an anonymous email. Does anyone here have contacts at one of these organizations? Would you be willing to help?

Of course, we’d also like to hear other critiques, advice, and edits to the statement.

Why

We feel the Statement on AI Inconsistency might accomplish more than the Statement on AI Risk, while being almost as easy to sign.

The reason it might accomplish more is that people in government cannot acknowledge the statement (and the experts who signed it), concede that it makes a decent point, and then do very little about it.

So long as the government spends a token amount on a small AI Safety Institute (AISI), they can feel they have done “enough” and that the Statement on AI Risk is out of the way. The Statement on AI Inconsistency is more “stubborn”: they cannot claim to have addressed it until they spend a nontrivial amount relative to the military budget.

On the other hand, the Statement on AI Inconsistency is almost as easy to sign, because the main difficulty of signing either statement is how crazy it sounds. But once people acknowledge the Statement on AI Risk—“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”—the Overton window has moved so much that signing the Statement on AI Inconsistency requires only a little craziness beyond the normal position. It is a small step on top of a big step.

References

“the US spend $800 billion/year on its military”

  • [1] says it’s $820 billion in 2024; $800 billion is a rounded approximation.

“less than $0.1 billion/year on AI alignment/safety”

  • The AISI is the most notable US-government-funded AI safety organization. It does not focus on ASI takeover risk, though it may partially cover other catastrophic AI risks. AISI’s budget is $10 million (i.e., $0.01 billion) according to [2]. Worldwide AI safety funding is between $0.1 billion and $0.2 billion according to [3].

“the median superforecaster sees a 2.1% chance of an AI catastrophe (killing 1 in 10 people), the median AI expert sees 5%–12%, other experts see 6%, and the general public sees 5%”

  • [4] says: median superforecaster: 2.13%; median “domain experts” (i.e., AI experts): 12%; median “non-domain experts”: 6.16%; public survey: 5%. These are predictions for 2100. However, they were made before ChatGPT was released, so it’s possible these groups now expect the same risk sooner than 2100.

  • [5] says the median AI expert sees a 5% chance of “future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species” and a 10% chance of “human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species.”
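
  • The 99.999% and 99.95% figures in point 2 of the statement can be reconstructed as follows. This is our own sketch: the ~8% benchmark for the combined military-scale threat is an illustrative assumption, and the update target is the superforecaster median above. Spending 8000 times less on an equally large threat is justified only if its probability is roughly 8000 times smaller:

    $$\frac{\$800\text{B/yr}}{\$0.1\text{B/yr}} = 8000, \qquad p_{\text{AI}} \approx \frac{8\%}{8000} = 10^{-5} = 0.001\%,$$

    i.e., 99.999% confidence in no AI catastrophe. If $q$ is the chance that studying the disagreement would move you to the superforecasters’ 2.1%, your current credence must satisfy

    $$p_{\text{AI}} \ge q \times 2.1\% \;\Longrightarrow\; q \le \frac{10^{-5}}{0.021} \approx 0.05\%,$$

    i.e., roughly 99.95% confidence that you would never make that update.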

“US foreign aid—including Ukrainian aid—is only $100 billion/year”

  • [6] says 2023 foreign aid was $62 billion, but that figure includes only $16 billion to Ukraine. [7] puts “Ukraine aid bills for FY 2023” at $60 billion. It’s unclear how these numbers fit together or overlap; our guess is that [7] includes obligations for future years.

  • [8] says that “the United States had been spending about $5.4 billion per month as a result of the war. At that spending rate, $61 billion would last for nearly a full year.” This suggests Ukraine spending will rise from $16 billion/year to somewhere around $60 billion/year, making $100 billion/year a reasonable rough estimate for total foreign aid (arithmetic sketched below).
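
  • Putting these together, our rough arithmetic (it assumes [6]’s $62 billion already contains the $16 billion Ukraine line item that the ~$60 billion run rate would replace):

    $$62 - 16 + 60 = 106 \approx 100 \text{ billion \$/year}$$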

  1. ^

    USAFacts Team. (August 1, 2024). “How much does the US spend on the military?” USAFacts. https://usafacts.org/articles/how-much-does-the-us-spend-on-the-military/

  2. ^

    Wiggers, Kyle. (October 22, 2024). “The US AI Safety Institute stands on shaky ground.” TechCrunch. https://techcrunch.com/2024/10/22/the-u-s-ai-safety-institute-stands-on-shaky-ground/

  3. ^

    McAleese, Stephen, and NunoSempere. (July 12, 2023). “An Overview of the AI Safety Funding Situation.” LessWrong. https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation/

  4. ^

    Karger, Ezra, Josh Rosenberg, Zachary Jacobs, Molly Hickman, Rose Hadshar, Kayla Gamin, and P. E. Tetlock. (August 8, 2023). “Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament.” Forecasting Research Institute. p. 259. https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/64f0a7838ccbf43b6b5ee40c/1693493128111/XPT.pdf#page=260

  5. ^

    Stein-Perlman, Zach, Benjamin Weinstein-Raun, and Katja Grace. (August 3, 2022). “2022 Expert Survey on Progress in AI.” AI Impacts. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

  6. ^
  7. ^

    Masters, Jonathan, and Will Merrow. (September 27, 2024). “How Much U.S. Aid Is Going to Ukraine?” Council on Foreign Relations. https://www.cfr.org/article/how-much-us-aid-going-ukraine

  8. ^

    Cancian, Mark, and Chris Park. (May 1, 2024). “What Is in the Ukraine Aid Package, and What Does it Mean for the Future of the War?” Center for Strategic & International Studies. https://www.csis.org/analysis/what-ukraine-aid-package-and-what-does-it-mean-future-war