I tend to think that the pandemic shares more properties with fast takeoff than it does with slow takeoff. Under fast takeoff, a very powerful system will spring into existence after a long period of AI being otherwise irrelevant, in a similar way to how the virus was dormant until early this year. The defining feature of slow takeoff, by contrast, is a gradual increase in abilities from AI systems all across the world.
In particular, I object to this portion of your post:
The “moving goalposts” effect, where new advances in AI are dismissed as not real AI, could continue indefinitely as increasingly advanced AI systems are deployed. I would expect the “no fire alarm” hypothesis to hold in the slow takeoff scenario—there may not be a consensus on the importance of general AI until it arrives, so risks from advanced AI would continue to be seen as “overblown” until it is too late to address them.
I’m not convinced that these parallels to COVID-19 are very informative. Compared to this pandemic, I expect the direct effects of AI to be very obvious to observers, in a similar way that the direct effects of cars are obvious to people who go outside. Under a slow takeoff, AI will already be performing a lot of important economic labor before the world “goes crazy” in the important senses. Compare to the pandemic, in which:
- It is not empirically obvious that it’s worse than a seasonal flu (we only know that it is due to careful data analysis after months of collection).
- It’s not clearly affecting everyone around you in the way that cars, computers, software, and other forms of engineering are.
- It’s considered natural, and primarily affects old people who are conventionally considered to be less worthy of concern (though people give lip service denying this).
Some good points, but on the contrary: a slow takeoff is considered safer because we have more lead time and warning shots, and the world has indeed seen many similar events and warning shots for covid. Ones that come to mind from the last two decades are swine flu, bird flu, and Ebola, and of course there have been many more throughout history.
This just isn’t that novel or surprising: billionaires like Bill Gates have been sounding the alarm, and still the supermajority of Western countries failed to take basic preventative measures. Those properties seem to apply even to the slow takeoff scenario. I feel like the fast-takeoff analogy would go through most strongly in a world where we’d just never seen this sort of pandemic before, but in reality we’ve seen many of them.
Thanks Matthew for your interesting points! I agree that it’s not clear whether the pandemic is a good analogy for slow takeoff. When I was drafting the post, I started with an analogy to a “medium” takeoff (on the time scale of months), but later updated towards the slow takeoff scenario being a better match. The pandemic response in 2020 (after covid became apparent as a threat) is most relevant for the medium takeoff analogy, while the general level of readiness for a coronavirus pandemic prior to 2020 is most relevant for the slow takeoff analogy.
I agree with Ben’s response to your comment. Covid did not spring into existence in a world where pandemics were irrelevant: there have been many recent epidemics, and experts have been sounding the alarm about the next one. You make a good point that epidemics don’t gradually increase in severity. That said, I think they have been increasing in frequency and global reach as a result of international travel, and the possibility of a virus escaping from a lab also increases the chances of encountering more powerful pathogens in the future. Overall, I agree that we can probably expect AI systems to increase in competence more gradually in a slow takeoff scenario, which is a reason for optimism.
Your objections to the parallel with covid not being taken seriously seem reasonable to me, and I’m not very confident in this analogy overall. However, one could argue that the experience with previous epidemics should have resulted in a stronger prior on pandemics being a serious threat. I think it was also clear from the outset of the covid epidemic that it was much more contagious than seasonal flu, which should have produced a further update towards it being a serious threat.
I agree that the direct economic effects of advanced AI would be obvious to observers, but I don’t think this would necessarily translate into widespread awareness that much more powerful AI systems, capable of transforming the world even further, are imminent. People are generally bad at reacting to exponential trends, as we’ve seen in the covid response. If we had general-purpose household robots in every home, I would expect some people to take the risks of general AI more seriously, and others to say “I don’t see my household robot trying to take over the world, so these concerns about general AI are overblown”. Overall, as more advanced AI systems are developed and have a large economic impact, I would expect the proportion of people who take the risks of general AI seriously to increase steadily, but I wouldn’t expect widespread consensus until relatively late in the game.