Thanks Rohin for covering the post in the newsletter!
The summary looks great overall. I have a minor objection to the word “narrow” here: “we may fail to generalize from narrow AI systems to more general AI systems”. When I talked about generalizing from less advanced AI systems, I didn’t specifically mean narrow AI—what I had in mind was increasingly general AI systems we are likely to encounter on the path to AGI in a slow takeoff scenario.
For the opinion, I would agree that it’s not clear how well the COVID scenario matches the slow takeoff scenario, and that there are some important disanalogies. I disagree with some of the specific disanalogies you point out, though:
I wouldn’t say that there were many novel problems with COVID. The supply chain problem for PPE seems easy enough to predict and prepare for, given the predicted likelihood of a global respiratory pandemic. Do you have other examples of novel problems besides the supply chain problem?
I don’t agree that we can’t prevent problems from arising with pandemics; e.g., we can decrease interactions with wild animals that can transmit viruses to humans, and improve biosecurity standards to prevent viruses from escaping labs.
Changed narrow/general to weak/strong in the LW version of the newsletter (unfortunately the newsletter had already gone out when your comment was written).
I wouldn’t say that there were many novel problems with COVID. The supply chain problem for PPE seems easy enough to predict and prepare for, given the predicted likelihood of a global respiratory pandemic. Do you have other examples of novel problems besides the supply chain problem?
There was some worry about supply chain problems for food. Perhaps that didn’t materialize, or it did and was solved without my noticing.
I expect that this was the first extended shelter-in-place order for most, if not all, of the US, and this led to a bunch of problems in deciding what should and shouldn’t be included in the order, how stringent to make it, etc.
More broadly, I’m not thinking of any specific problem, but the world is clearly very different from how it was during any recent epidemic (at least in the US), and I would be shocked if this did not bring with it several challenges that we did not anticipate ahead of time (perhaps someone somewhere had anticipated them, but it wasn’t widespread knowledge).
I don’t agree that we can’t prevent problems from arising with pandemics; e.g., we can decrease interactions with wild animals that can transmit viruses to humans, and improve biosecurity standards to prevent viruses from escaping labs.
I definitely agree that we can decrease the likelihood of pandemics arising, but we can’t really hope to eliminate them altogether (with current technology). But really, I think this wasn’t my main point, and I summarized it badly: given that alignment is about preventing misalignment from arising, the analogous thing for pandemics would be preventing pandemics from arising; it’s unclear to me whether civilization was particularly inadequate along this axis ex ante (i.e. before we knew that COVID was a thing).