I doubt that’s the primary component that makes the difference. Other countries which did mostly sensible things early are, e.g., Australia, Czechia, Vietnam, New Zealand, and Iceland.
What do you think is the primary component? I seem to recall reading somewhere that previous experience with SARS makes a big difference. I guess my more general point is that if the good COVID responses can mostly be explained by factors that predictably won’t be available to the median AI risk response, then the variance in COVID response doesn’t help to give much hope for a good AI risk response.
My main claim isn’t about what a median response would be, but something like: “the difference between the median early covid governmental response and an actually good early covid response was somewhere between 1 and 2 sigma; this suggests a bad response isn’t over-determined, and sensible responses are within human reach”.
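To make the sigma framing concrete (a back-of-the-envelope illustration, assuming response quality is roughly normally distributed across countries): the standard normal tail probabilities are

$$P(Z > 1) \approx 0.16, \qquad P(Z > 2) \approx 0.023,$$

so out of roughly 200 countries, a “+1 to +2 sigma” response would be expected from somewhere between about 5 and 30 of them, which is consistent with a short-but-nonempty list of sensible early responders like the one above.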
This seems to depend on the response to AI risk being of similar difficulty to the response to COVID. I think people who updated towards “bad response to AI risk is overdetermined” did so partly on the basis that the former is much harder. (In other words, if the median government has done this badly against COVID, what chance does it have against something much harder?) I wrote down a list of things that make COVID an easier challenge, which I now realize may be a bit of a tangent if that’s not the main thing you want to argue about, but I’ll put it down here anyway so as to not waste it.
- it’s relatively intuitive for humans to think about the mechanics of the danger and possible countermeasures
- previous human experiences with pandemics, including very similar ones like SARS
- there are very effective countermeasures that are much easier / less costly than comparable countermeasures for AI risk, such as distributing high-quality masks to everyone and sealing one’s borders
- COVID isn’t agenty and can’t fight back intelligently
- potentially divisive issues in AI risk response seem to be a strict superset of politically divisive issues in COVID response (additional issues include: how to weigh very long-term benefits against short-term costs; the sentience, moral worth, and rights of AIs; what kinds of values we want AIs to have; and who should have control of/access to AI)
I asked myself for an example of a country whose initial pandemic response was unusually poor, settled on Brazil, and found that Brazil’s average IQ was lower than I expected, at 87. So that’s one data point that supports your hypothesis.
I suspect that cultural homogeneity is at least as important.
> What do you think is the primary component? I seem to recall reading somewhere that previous experience with SARS makes a big difference. I guess my more general point is that if the good COVID responses can mostly be explained by factors that predictably won’t be available to the median AI risk response, then the variance in COVID response doesn’t help to give much hope for a good AI risk response.
What seemed to make a difference:

1. someone with good models of what to do getting into an advisory position when the politicians freak out
2. previous experience with SARS
3. the ratio of “trust in institutions” vs. “trust in your neighbors’ wisdom”
4. raw technological capacity
5. the ability of the government to govern (i.e., execute many things on short notice)
In my view, 1. and 4. could go better than in covid, 2. is irrelevant, and 3. and 5. seem like broad parameters which can develop in different directions. Imagine you somehow become the main advisor to the US president when the situation becomes really weird, and she follows your advice closely; my rough impression is that in most situations you would be able to move the response to be moderately sane.
> it’s relatively intuitive for humans to think about the mechanics of the danger and possible countermeasures
Empirically, this often wasn’t true. Humans had mildly confused ideas about the micro-level, but often highly confused ideas about the exponential macro-dynamics. (We created a whole educational game on that, and have some feedback that for some policymakers it was the thing that helped them understand… after a year into the pandemic.)
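To give a concrete sense of those macro-dynamics (a back-of-the-envelope illustration of mine, not something from the game): with a doubling time of three days, case counts grow roughly a thousandfold in a month, since

$$N(t) = N_0 \cdot 2^{t/T_d}, \qquad N(30\,\text{days}) = N_0 \cdot 2^{30/3} = 1024\,N_0.$$

Linear intuitions give no warning that a quantity can multiply by a thousand in a month.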
> previous human experiences with pandemics, including very similar ones like SARS
> there are very effective countermeasures that are much easier / less costly than comparable countermeasures for AI risk, such as distributing high-quality masks to everyone and sealing one’s borders
> COVID isn’t agenty and can’t fight back intelligently
> potentially divisive issues in AI risk response seem to be a strict superset of politically divisive issues in COVID response (additional issues include: how to weigh very long-term benefits against short-term costs; the sentience, moral worth, and rights of AIs; what kinds of values we want AIs to have; and who should have control of/access to AI)
One factor which may make governments more responsive to AI risk is that covid wasn’t exactly threatening to states. Covid was pretty bad for individual people and some businesses, but in some cases the relative power of states even grew during covid. In contrast, in some scenarios it may be clear that AI is an existential risk for states as well.