Realistic Cynicism In Climate Science:
Apparent paradox: the more scientists worry about “climate change”, the less they believe in geoengineering. Yet they become more inclined toward geoengineering when the impact gets personal. It actually makes rational sense.
People who are more realistic see the doom of the greenhouse gas (GHG) crisis: all-time-high CO2 production, all-time-high coal burning, the activation of tipping points such as collapsing ice shelves, accelerating ice streams, widespread burning of forests and peat, even permafrost combusting under the snow through winter, oceanic dead zones, a sixth mass extinction, and so on. They also observe that we are on track for a further rise of many degrees Celsius by 2100 in the polar regions, where global heating has the most impact. And they see the silliness of massive efforts poured into fake solutions, such as building more than 100 million cars a year that run on huge (half-ton) batteries… or turning human food into fuel for combustion.
The same realism shows cogent, honest climate scientists that there is no plausible geoengineering solution that could be deployed in the next few decades. It’s all rationally coherent.
But when these same people are told that sea level rise, drought, floods, storms, or massive fires and vegetation die-off will become catastrophic in their own neighborhood, they get desperate and ready to try anything. This “suggests that the climate experts’ support for geoengineering will increase over time, as more regions are adversely affected and more experts observe or expect damages in their home country.”
However, that would be a moral and intellectual failure, at least in the case of solar geoengineering. Only outright removal of CO2 from the atmosphere is acceptable (but we don’t have the technology at the necessary scale; we will only once we have thermonuclear fusion reactors, at which point we could simply freeze the CO2 out of the air and store it in basalt).
Epistemic status: ~30% sophistry
There’s a difference between thinking something won’t work in practice and opposing it. The paper examines opposition, as in taking steps to make geoengineering less likely.
My personal suspicion is that the more plausible a climate scientist thinks geoengineering is, the more likely they are to oppose it, not the other way around. Just like climate modeling for nuclear famine isn’t actually about accurate climate modeling and finding ways to mitigate deaths from starvation (it’s about opposition to nuclear war), I suspect that a lot of global-warming research is more about opposing capitalism/technology/greed/industrial development than it is about finding practical ways to mitigate the damage.
This is because these fields are about using how bad their catastrophe is to shape policy. Mitigation efforts undermine that, so mitigation research must be prevented. There is a difference between treating a catastrophe as an ideological tool, in which case you actively avoid talking about mitigation measures and actively sabotage mitigation efforts, and treating it as a problem to be solved, in which case you invest heavily in damage mitigation. Most nuclear famine and global warming research looks like the former, while AI safety still looks like the latter.
AI alignment orgs aren’t trying to sabotage mitigation (the closest analogue here being prosaic alignment?). The people working on, for example, interpretability and value alignment might view prosaic alignment as ineffective, but they aren’t trying to prevent it from taking place. Even those who want to slow down AI development aren’t trying to stop prosaic alignment research. Despite the surface similarities, there are fundamental differences between the fields of climate change and AI alignment research.
I vaguely remember reading a comment on LessWrong claiming that the anti-geoengineering stance is actually 3D chess: remove mitigation as an option, and governments are forced to reduce carbon emissions. But the most likely result is just that poor nations like Bangladesh suffer humanitarian catastrophes while the developed world twiddles its thumbs. I don’t think there is any equivalent in the field of AI safety.