Planned summary for the Alignment Newsletter:
The COVID-19 pandemic is an example of a large risk that humanity faced. What lessons can we learn for AI alignment? This post argues that the pandemic is an example of the sort of situation we can expect in a slow takeoff scenario, since we had the opportunity to learn from experience, act on warning signs, and reach a timely consensus that there was a serious problem. However, although we could have learned from previous epidemics like SARS, we failed to generalize those lessons. Despite warning signs of a pandemic in February, many countries wasted a month when they could have been stocking up on PPE and building testing capacity. There was no consensus that COVID-19 was a problem, with articles dismissing it as no worse than the flu as late as March.
All of these problems could also happen with slow takeoff: we may fail to generalize from narrow AI systems to more general AI systems; we might not act on warning signs; and we may not believe that powerful AI is on the horizon until it is too late. The conclusion is “unless more competent institutions are in place by the time general AI arrives, it is not clear to me that slow takeoff would be much safer than fast takeoff”.
Planned opinion:
While I agree that the COVID response was worse than it could have been, I think there are several important disanalogies between the COVID-19 pandemic and a soft takeoff scenario, which I elaborate on in this comment. First, with COVID there were many novel problems, which I don’t expect with AI. Second, I expect a longer time period over which decisions can be made for AI alignment. Finally, with AI alignment, we have the option of preventing problems from ever arising, which is not really an option with pandemics. See also this post.
Thanks Rohin for covering the post in the newsletter!
The summary looks great overall. I have a minor objection to the word “narrow” here: “we may fail to generalize from narrow AI systems to more general AI systems”. When I talked about generalizing from less advanced AI systems, I didn’t specifically mean narrow AI; what I had in mind was the increasingly general AI systems we are likely to encounter on the path to AGI in a slow takeoff scenario.
For the opinion, I would agree that it’s not clear how well the COVID scenario matches the slow takeoff scenario, and that there are some important disanalogies. I disagree with some of the specific disanalogies you point out, though:
I wouldn’t say that there were many novel problems with COVID. The supply chain problem for PPE seems easy enough to anticipate and prepare for given the predicted likelihood of a global respiratory pandemic. Do you have other examples of novel problems besides the supply chain problem?
I don’t agree that we can’t prevent problems from arising with pandemics—e.g. we can decrease the interactions with wild animals that can transmit viruses to humans, and improve biosecurity standards to prevent viruses escaping from labs.
Changed narrow/general to weak/strong in the LW version of the newsletter (unfortunately the newsletter had already gone out when your comment was written).
I wouldn’t say that there were many novel problems with COVID. The supply chain problem for PPE seems easy enough to anticipate and prepare for given the predicted likelihood of a global respiratory pandemic. Do you have other examples of novel problems besides the supply chain problem?
There was some worry about supply chain problems for food. Perhaps that didn’t materialize, or it did materialize and it was solved without me noticing.
I expect that this was the first extended shelter-in-place order for most, if not all, of the US, and this led to a bunch of problems in deciding what should and shouldn’t be included in the order, how stringent to make it, etc.
More broadly, I’m not thinking of any specific problem, but the world is clearly very different now than it was during any recent epidemic (at least in the US), and I would be shocked if this did not bring with it several challenges that we did not anticipate ahead of time (perhaps someone somewhere had anticipated them, but it wasn’t widespread knowledge).
I don’t agree that we can’t prevent problems from arising with pandemics—e.g. we can decrease the interactions with wild animals that can transmit viruses to humans, and improve biosecurity standards to prevent viruses escaping from labs.
I definitely agree that we can decrease the likelihood of pandemics arising, but we can’t really hope to eliminate them altogether (with current technology). But this wasn’t really my main point, and I summarized it badly: the point was that, given that alignment is about preventing misalignment from arising, the analogous thing for pandemics would be preventing pandemics from arising; it is unclear to me whether civilization was particularly inadequate along this axis ex ante (i.e. before we knew that COVID was a thing).