Slow motion videos as AI risk intuition pumps
tl;dr: When making the case for AI as a risk to humanity, try showing people an evocative illustration of what differences in processing speed can look like, such as this video.
Over the past ~12 years of making the case for AI x-risk to various people inside and outside academia, I’ve found folks often ask for a single story of how AI “goes off the rails”. When given a plausible story, the mind just thinks of a way humanity could avoid that particular story and goes back to thinking there’s no risk, unless provided with another story, and another, and so on. Eventually this can lead to a realization that there are a lot of ways for humanity to die, and a correspondingly high level of risk, but it takes a while.
Nowadays, before getting into a bunch of specific stories, I try to say something more general, like this:
There are a ton of ways humanity could die out from the introduction of AI. I’m happy to share specific stories if necessary, but plenty of risks arise just from the fact that humans are extremely slow. Transistors can fire about 10 million times faster than human brain cells, so it’s possible we’ll eventually have digital minds operating 10 million times faster than us, meaning that from a decision-making perspective we’d look to them like stationary objects, like plants or rocks. This speed differential exists whether you imagine a single centralized AI system calling the shots or an economy of many, so it applies to a wide variety of “stories” for how the future could go. To give you a sense, here’s what humans look like when slowed down by only around 100x:
https://vimeo.com/83664407 <-- (cred to an anonymous friend for suggesting this one)
[At this point, I wait for the person I’m chatting with to watch the video.]
Now, when you try imagining things turning out fine for humanity over the course of a year, try imagining advanced AI technology running all over the world, making all kinds of decisions and taking all kinds of actions 10 million times faster than us, for 10 million subjective years. Meanwhile, there are these nearly-stationary plant-like or rock-like “human” objects around that could easily be taken apart for, say, biofuel or carbon atoms, if you could just get started building a human-disassembler. Visualizing things this way, you can start to see all the ways that a digital civilization could develop very quickly into a situation where there are no humans left alive, just as human civilization doesn’t show much regard for plants or wildlife or insects.
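For readers who want the arithmetic behind the “10 million” figures, here is a minimal back-of-envelope sketch. The firing and switching rates below are rough order-of-magnitude assumptions chosen to match the post’s claim, not precise measurements.

```python
# Back-of-envelope arithmetic behind the "10 million times faster" claim.
# Both rates are rough order-of-magnitude assumptions, not measurements.

neuron_firing_rate_hz = 100        # ~peak firing rate of a biological neuron
transistor_switch_rate_hz = 1e9    # ~switching rate of a modern transistor (~1 GHz)

speed_ratio = transistor_switch_rate_hz / neuron_firing_rate_hz
print(f"Transistor-to-neuron speed ratio: {speed_ratio:,.0f}x")  # ~10,000,000x

# Subjective time experienced over one calendar year at that full speedup.
calendar_years = 1
subjective_years = calendar_years * speed_ratio
print(f"Subjective years per calendar year: {subjective_years:,.0f}")

# For comparison, the linked video slows humans down by only ~100x.
video_slowdown = 100
print(f"The video's slowdown is {speed_ratio / video_slowdown:,.0f}x "
      f"smaller than the full ratio")
```

In other words, the ~100x slowdown in the video understates the claimed differential by a factor of about 100,000, which is part of why it works as an intuition pump rather than a literal depiction.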
I’ve found this kind of argument — including an actual 30-second pause to watch a video in the middle of the conversation — to be more persuasive than trying to tell a single, specific story, so I thought I’d share it.
I am confused about whether the videos are real and exactly how much faster AIs could be run, but I think at the very least this is a promising direction for finding grokkable bounds on how advanced AI will go.