Deaths being from natural phenomena seems to be just one factor determining how strong our emotional response to a disaster is, and there are plenty of others. People seem to respond more strongly if the deaths are flashy, unexpected, instant rather than slow (both for each individual and for the duration of the disaster as a whole), could happen to anyone at any time, and are inversely correlated with age (people care much less if old people die and much more if children do). This would explain why 9/11, school shootings, or shark attacks provoke a much greater emotional response than covid, or the classic comparison of 9/11 to the flu. It would also help if the disaster were international. So a lot probably depends on the circumstances of the AI disaster.
A new and unfamiliar type of disaster could also come with fewer preconceptions that the size of the threat is bounded above by previous instances, or that we can deal with it using known tools, the way medicine and vaccines are known tools for pandemics. On the other hand, a first AI disaster could anchor people to its scale and make them treat it as an upper bound on how bad future AI disasters can get.
It also seems to me that regulating AI research would be far more actionable, easier, and less costly than putting effective measures in place to prevent future pandemics, so there should be less reluctance to act.