Epistemic status: ~30% sophistry
There’s a difference between thinking something won’t work in practice and opposing it. The paper examines opposition, as in taking steps to make geoengineering less likely.
My personal suspicion is that the more plausible a climate scientist thinks geoengineering is, the more likely they are to oppose it, not the other way around. Just like climate modeling for nuclear famine isn’t actually about accurate climate modeling and finding ways to mitigate deaths from starvation (it’s about opposition to nuclear war), I suspect that a lot of global-warming research is more about opposing capitalism/technology/greed/industrial development than it is about finding practical ways to mitigate the damage.
This is because these fields are about using how bad their catastrophe is to shape policy. Mitigation efforts undercut that, so they must prevent mitigation research. There is a difference between thinking about a catastrophe as an ideological tool, in which case you avoid talking about mitigation measures and actively sabotage mitigation efforts, and thinking about it as a problem to be solved, in which case you absolutely invest in damage mitigation. Most nuclear famine and global-warming research looks like the former, while AI safety still looks like the latter.
AI alignment orgs aren’t trying to sabotage mitigation (prosaic alignment?). The people working on, for example, interpretability and value alignment might view prosaic alignment as ineffective, but they aren’t trying to prevent it from taking place. Even those who want to slow down AI development aren’t trying to stop prosaic alignment research. Despite the surface similarities, there are fundamental differences between climate change research and AI alignment research.
I vaguely remember reading a comment on LessWrong arguing that the anti-geoengineering stance is actually 3D chess to force governments to reduce carbon emissions by removing mitigation as an option. But the most likely result is just that poor nations like Bangladesh suffer humanitarian catastrophes while the developed world twiddles its thumbs. I don’t think there is any equivalent in the field of AI safety.