What do you think is the primary component? I seem to recall reading somewhere that previous experience with SARS makes a big difference. I guess my more general point is that if the good COVID responses can mostly be explained by factors that predictably won't be available to the median AI risk response, then the variance in COVID responses doesn't give much hope for a good AI risk response.
What seemed to make a difference:
1. someone with good models of what to do getting into an advisory position when the politicians freak out
2. previous experience with SARS
3. the ratio of “trust in institutions” vs. “trust in your neighbors’ wisdom”
4. raw technological capacity
5. the ability of the government to govern (i.e. execute many things on short notice)
In my view, 1. and 4. could go better than in COVID, 2. is irrelevant, and 3. and 5. seem like broad parameters which can develop in different directions. Imagine you somehow become the main advisor to the US president when the situation becomes really weird, and she follows your advice closely. My rough impression is that in most situations you would be able to move the response to be moderately sane.
it’s relatively intuitive for humans to think about the mechanics of the danger and possible countermeasures
Empirically, this often wasn’t true. Humans had mildly confused ideas about the micro-level, but often highly confused ideas about the exponential macro-dynamics. (We created a whole educational game on that, and got some feedback that for some policymakers it was the thing that finally helped them understand… a year into the pandemic.)
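To make the macro-level point concrete, here is a minimal illustrative sketch (mine, not taken from the game mentioned above), assuming a hypothetical starting count of 100 cases and a doubling time of roughly five days:

```python
# Illustrative only: why exponential macro-dynamics defeat linear intuition.
# Assumed (hypothetical) parameters: 100 initial cases, ~5-day doubling time.
cases = 100
doubling_days = 5

for day in range(0, 61, 10):
    projected = cases * 2 ** (day / doubling_days)
    print(f"day {day:2d}: ~{int(projected):,} cases")

# The outbreak looks manageable for weeks (day 0: 100, day 20: 1,600),
# then reaches ~409,600 by day 60 -- the macro-level explosion that
# micro-level reasoning about individual infections tends to miss.
```

A table like this makes visible that waiting even two weeks multiplies the problem severalfold, which is exactly the intuition that a purely micro-level framing of the danger obscures.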
previous human experiences with pandemics, including very similar ones like SARS
there are very effective countermeasures that are much easier / less costly than comparable countermeasures for AI risk, such as distributing high-quality masks to everyone and sealing one’s borders
COVID isn’t agenty and can’t fight back intelligently
potentially divisive issues in AI risk response seem to be a strict superset of politically divisive issues in COVID response (additional issues include: how to weigh very-long-term benefits against short-term costs; the sentience, moral worth, and rights of AIs; what values we want AIs to have; and who should have control of, or access to, AI)
One factor which may make governments more responsive to AI risk is that COVID wasn’t really a threat to states themselves. COVID was pretty bad for individual people, and for some businesses, but in some cases the relative power of states even grew during the pandemic. In contrast, in some scenarios it may be clear that AI is an existential risk for states as well.