One of the main bottlenecks in explaining the full gravity of the AI situation to people is that they’re already worn out from hearing about climate change, which for decades has been widely depicted as an existential risk with the full persuasive force of the environmentalist movement behind it.
Fixing this rather awful choke point could plausibly be one of the most impactful interventions here. The “Global Risk Prioritization” concept probably helps with that, but I don’t know how accessible it is. Heninger’s series analyzing the environmentalist movement was fantastic, but the fact that it came out recently rather than ten years ago suggests the “climate fatigue” problem may be understudied, and a serious evaluation of how difficult or hopeless climate fatigue actually is might yield unexpectedly hopeful results.
The notion of a new intelligent species being dangerous is actually quite intuitive, and quite different from climate change. Making the case for climate risk is more like arguing for the risks of more-or-less aligned AGI: complex, debatable, and unintuitive. One reason I like this strategy is that it does not conflate the two.
The key lesson to take from the climate crisis, per that series, is: don’t create polarization unless the decision-makers mostly sit on one side of the polarizing line. Polarization is the mind-killer.
Note that this kind of messaging, if you’re not careful, can come across as “hey, let’s work on AI x-risk instead of climate change”, which would be both counterproductive and misleading; see my discussion here.