I heavily endorse the tone and message of this post!
I also have a sense of optimism coming from society’s endogenous response to threats like these, especially with respect to how the public response to COVID shifted from Feb → Mar 2020, and how the public response to AI and AI safety has shifted over the last 6 months (or even just the 4-5 months since ChatGPT was released in November). We could also look at the shift in response to climate change over the past 5-10 years.
Humanity does seem to have a knack for figuring things out just in the nick of time. Can’t say it’s something I’m glad to be relying on for optimism in this moment, but it has worked out in the past...
COVID and climate change are actually easy problems that only became serious or highly costly because of humanity’s irrationality and lack of coordination.
COVID: Early lockdown in China + border closures + testing/tracing stops it early, or stockpiling enough elastomeric respirators for everyone keeps health/economic damage to a minimum (e.g. making subsequent large-scale lockdowns unnecessary).
Climate change: Continued nuclear rollout (i.e. if it hadn’t stopped or slowed down decades ago) + plug-in hybrids or EVs allows the world to be mostly decarbonized at minimal cost, or, if we failed to do that, geoengineering minimizes the damage.
For me, the generalization from these two examples is that humanity is liable to incur at least 1-2 orders of magnitude more cost/damage than necessary from big risks. So if you think an optimal response to AI risk means incurring a 1% loss of expected value (from truly unpredictable accidents that happen even when one has taken all reasonable precautions), the actual response would perhaps incur a 10-100% loss.
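To make the scaling in that estimate explicit, here is a minimal sketch of the back-of-the-envelope arithmetic, taking the 1% optimal-response loss and the 10x-100x inefficiency multiplier as the assumptions stated above (not as measured quantities):

```python
# Back-of-the-envelope sketch: how an "inefficiency multiplier" scales the
# loss from a big risk. The 1% optimal loss and the 10x-100x multiplier are
# the assumptions stated in the comment above, not measured quantities.

optimal_loss = 0.01  # fraction of expected value lost under an optimal response

for multiplier in (10, 100):
    # A response this many times less efficient than optimal; losses can't
    # exceed 100% of expected value, so cap at 1.0.
    actual_loss = min(optimal_loss * multiplier, 1.0)
    print(f"{multiplier}x worse than optimal -> {actual_loss:.0%} of expected value lost")
```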
COVID and climate change are actually easy problems
I’m not sure I’d agree with that at all. Also, how are you calculating what cost is “necessary” for problems like COVID/climate change vs. what was incurred because of a “less-than-perfect” response? How are we even determining what the “perfect” response would be? We have no way of measuring the counterfactual damage from some other response to COVID; we can only (approximately) measure the damage that has happened due to our actual response.
For those reasons alone, I don’t make the same generalization you do about predicting the approximate range of damage from these types of problems.
To me, the generalization to be made is simply this: the larger an exogenous threat looms in the public consciousness, the larger the societal response to that threat becomes. And the larger the societal response to exogenous threats, the more likely we are to find some way of overcoming them: whether by hard work, miracle, chance, or whatever.
And I think there’s a case steadily building that the exogenous x-risk from AI is looming larger and larger in the public consciousness: The Overton Window widens: Examples of AI risk in the media.