Possible takeaways from the coronavirus pandemic for slow AI takeoff
(Cross-posted from personal blog. Summarized in Alignment Newsletter #104. Thanks to Janos Kramar for his helpful feedback on this post.)
Epistemic status: fairly speculative, would appreciate feedback
As the covid-19 pandemic unfolds, we can draw lessons from it for managing future global risks, such as other pandemics, climate change, and risks from advanced AI. In this post, I will focus on possible implications for AI risk. For a broader treatment of this question, I recommend FLI’s covid-19 page that includes expert interviews on the implications of the pandemic for other types of risks.
A key element in AI risk scenarios is the speed of takeoff—whether advanced AI is developed gradually or suddenly. Paul Christiano’s post on takeoff speeds defines slow takeoff in terms of the economic impact of AI as follows: “There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.” It argues that slow AI takeoff is more likely than fast takeoff, but is not necessarily easier to manage, since it poses different challenges, such as large-scale coordination. This post expands on this point by examining some parallels between the coronavirus pandemic and a slow takeoff scenario. The upsides of slow takeoff include the ability to learn from experience, act on warning signs, and reach a timely consensus that there is a serious problem. I would argue that the covid-19 pandemic had these properties, but most of the world’s institutions did not take advantage of them. This suggests that, unless our institutions improve, we should not expect the slow AI takeoff scenario to have a good default outcome.
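To make the quoted definition more concrete, here is a back-of-the-envelope illustration (my own, not part of Christiano’s post) of the growth rates these doubling times imply:

$$
\text{doubling in 4 years: } 2^{1/4} - 1 \approx 19\% \text{ annual growth}, \qquad \text{doubling in 1 year: } 2 - 1 = 100\% \text{ annual growth}.
$$

For comparison, the world economy currently grows at roughly 3% per year, a doubling time of over 20 years, so even the “slow” scenario involves dramatically faster change than we are used to.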
Learning from experience. In the slow takeoff scenario, general AI is expected to appear in a world that has already experienced transformative change from less advanced AI, and institutions will have a chance to learn from problems with these AI systems. An analogy can be made with learning from earlier, less “advanced” epidemics like SARS, which did not spread across the world as successfully as covid-19. While some useful lessons were learned from those outbreaks, they did not generalize well to covid-19, which had somewhat different properties from these previous pathogens (such as asymptomatic transmission and higher transmissibility). Similarly, general AI may have somewhat different properties from less advanced AI systems, which would make mitigation strategies harder to generalize.
Warning signs. In the coronavirus pandemic response, there has been a lot of variance in how successfully governments acted on warning signs. Western countries had at least a month of warning while the epidemic was spreading in China, which they could have used to stock up on PPE and build up testing capacity, but most did not do so. Experts had warned about the likelihood of a coronavirus outbreak for many years, but this did not lead most governments to stock up on medical supplies. This was a failure to take cheap preventative measures in response to advance warnings about a widely recognized risk with tangible consequences, which is not a good sign for cases where the risk is less tangible and less well understood (such as risk from general AI).
Consensus on the problem. During the covid-19 epidemic, the abundance of warning signs and past experience with previous pandemics created an opportunity for a timely consensus that there is a serious problem. However, it actually took a long time for a broad consensus to emerge—the virus was often dismissed as “overblown” and “just like the flu” as late as March 2020. A timely response to the risk required acting before there was a consensus, thus risking the appearance of overreacting to the problem. I think we can also expect this to happen with advanced AI. Similarly to the discussion of covid-19, there is an unfortunate irony where those who take a dismissive position on advanced AI risks are often seen as cautious, prudent skeptics, while those who advocate early action are portrayed as “panicking” and overreacting. The “moving goalposts” effect, where new advances in AI are dismissed as not real AI, could continue indefinitely as increasingly advanced AI systems are deployed. I would expect the “no fire alarm” hypothesis to hold in the slow takeoff scenario—there may not be a consensus on the importance of general AI until it arrives, so risks from advanced AI would continue to be seen as “overblown” until it is too late to address them.
We can hope that the transformative technological change involved in the slow takeoff scenario will also help create more competent institutions without these weaknesses. We might expect that institutions unable to adapt to the fast pace of change will be replaced by more competent ones. However, we could also see an increasingly chaotic world where institutions fail to adapt without better institutions being formed quickly enough to replace them. Success in the slow takeoff scenario depends on institutional competence and large-scale coordination. Unless more competent institutions are in place by the time general AI arrives, it is not clear to me that slow takeoff would be much safer than fast takeoff.
I generally endorse the claims made in this post and the overall analogy. Since this post was written, a few more examples have accumulated that fit the categories of slow takeoff properties discussed above.
Learning from experience
The UK procrastinated on locking down in response to the Alpha variant due to political considerations (not wanting to “cancel Christmas”), even though it was already known that timely lockdowns are much more effective than delayed ones.
Various countries (e.g. Canada and the UK) reacted to Omicron with travel bans after they already had community transmission, even though it was known that travel bans are ineffective at that stage.
Warning signs
Since there is a non-negligible possibility that covid-19 originated in a lab, the pandemic can be viewed as a warning sign about the dangers of gain-of-function research. As far as I know, this warning sign has not yet been acted upon (there is no significant new initiative to ban gain-of-function research).
I think there was some improvement in acting on warning signs for subsequent variants (e.g. I believe that measures in response to Omicron were generally taken faster than measures in response to the original strain). This gives me some hope that our institutions can get better at reacting to warning signs with practice, at least for warning signs similar to those they have encountered before. This suggests that dealing with narrow AI disasters could lead institutions to improve their ability to respond to warning signs.
Consensus on the problem
It took a long time to reach consensus on the importance of mask wearing and aerosol transmission.
There still does not seem to be widespread consensus that transmission through surfaces is insignificant, judging by the amount of effort that goes into disinfection and cleaning in various buildings that I visit.