It’s not clear to me that in the scenario you describe 10% is a better figure to use than 1%.
Presumably the main reason for estimating such figures is to decide (individually or collectively) what to do.
If 10% of current existential risk is because of the possibility that our greenhouse-gas emissions turn the earth into Venus (and if 10% of current existential risk is a large amount), then the things we might consider doing as a result include campaigning for regulations or incentives that reduce greenhouse-gas emissions, searching for technologies that do useful things with lower greenhouse-gas emissions than the ones we currently have, investigating ways of getting those gases out of the atmosphere once they’re in, and so forth.
If 10% of current existential risk is because of the possibility that we have a massive nuclear war and the resulting firestorms fill the atmosphere with particulates that lower temperatures and the remaining humans freeze to death, then the things we might consider doing as a result include campaigning for nuclear disarmament or rearmament (whichever we think will do more to reduce the likelihood of large-scale nuclear war), finding ways to reduce international tensions generally, researching weapons that are even more directly destructive and have fewer side effects, investigating ways of getting particulates out of the atmosphere after a nuclear war, and so forth.
The actions in these two cases have very little overlap. The first set are mostly concerned with changing how we affect the climate. The second set are mostly concerned with changing whether and how we get into massive wars.
For what actual purpose would it be meaningful to add the two figures together? It seems to me that if we’re asking “what existential risk arises from climate change?” we are interested in the first type of risk, and the second type wants combining with other kinds of existential risk arising from nuclear war (people killed by the actual nuclear explosions, EMP screwing up electronics we need to keep our civilization going, fallout making things super-dangerous even after the bombs have stopped going off, etc.).
I’m not certain that this analysis is right, but at the very least it seems plausible enough to me that I don’t see how it can be that “clearly” we want to use the 10% figure rather than the 1% figure in your scenario.
> If 10% of current existential risk is because of the possibility that we have a massive nuclear war and the resulting firestorms fill the atmosphere with particulates that lower temperatures and the remaining humans freeze to death, then the things we might consider doing as a result include campaigning for nuclear disarmament or rearmament (whichever we think will do more to reduce the likelihood of large-scale nuclear war), finding ways to reduce international tensions generally, researching weapons that are even more directly destructive and have fewer side effects, investigating ways of getting particulates out of the atmosphere after a nuclear war, and so forth.
In the hypothetical, 9% was the contribution of climate change to nuclear winter, not the total probability of nuclear winter. The total probability of nuclear winter could be 25%.
In that case, if we ‘solved’ climate change, the probability of nuclear winter would decrease from 25% to 16% (and the probability of first-order extinction from climate change would decrease from 1% to 0%). The total decrease in existential risk would be 10%.
I will grant you that it’s not irrelevant where the first-order effect comes from: if we somehow solved nuclear war entirely, this would make it much less urgent to solve climate change, since then the possible gain is only 1% and not 10%. But it still seems obvious to me that the number you care about when discussing climate change is 10%, because as long as we don’t magically solve nuclear war, that’s its total contribution to the one event we care about (i.e., the single category of existential risk).
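(To make the arithmetic of the hypothetical explicit, here is a minimal sketch; the variable names are mine and the numbers are just the illustrative 1%/9%/25% figures from this thread, not real estimates.)

```python
# Illustrative percentage points of total existential risk (not real estimates).
first_order_cc = 1.0          # extinction caused directly by climate change
nuclear_winter_total = 25.0   # extinction via nuclear winter, from all causes
nuclear_winter_from_cc = 9.0  # the share of that 25% attributable to climate change

# 'Solving' climate change removes its first-order risk and its contribution
# to nuclear winter, so total existential risk falls by 1 + 9 = 10 points,
# and the nuclear-winter figure drops from 25 to 16.
gain_from_solving_cc = first_order_cc + nuclear_winter_from_cc             # 10.0
remaining_nuclear_winter = nuclear_winter_total - nuclear_winter_from_cc   # 16.0

# If nuclear war were somehow solved first, the nuclear-winter channel is gone,
# and solving climate change afterwards only gains the first-order 1 point.
gain_if_war_already_solved = first_order_cc                                # 1.0

print(gain_from_solving_cc, remaining_nuclear_winter, gain_if_war_already_solved)
```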
Ah, OK, I didn’t read carefully enough: you specified that somehow “solving” climate change would reduce Pr(extinction due to nuclear winter) by 9%. I agree that in that case you’re right. But now that I understand better what scenario you’re proposing it seems like a really weird scenario to propose, because I can’t imagine what sort of real-world “solution” to climate change would have that property. Maybe the discovery of some sort of weather magic that enables us to adjust weather and climate arbitrarily would do it, but the actual things we might do that would help with climate change are all more specific and limited than that, and e.g. scarcely anything that reduces danger from global warming would help much with nuclear winter.
So I’m not sure how this (to my mind super-improbable) hypothetical scenario, where work on climate change would somehow address nuclear winter along with global warming, tells us anything about the actual world we live in, where surely that wouldn’t be the case. Am I still missing something important?
> But now that I understand better what scenario you’re proposing it seems like a really weird scenario to propose, because I can’t imagine what sort of real-world “solution” to climate change would have that property. Maybe the discovery of some sort of weather magic that enables us to adjust weather and climate arbitrarily would do it
I think the story of how mitigating climate change reduces the risk of first-order effects from nuclear war is not that it helps us survive nuclear winter, but that climate change leads to things like refugee crises, which in turn lead to worse international relations and a higher chance of nuclear weapons being used; hence mitigating c/c leads to lower chances of nuclear winter occurring.
The 1%/9% numbers were meant to illustrate the principle and not to be realistic, but if you told me something like: there’s a 0.5% contribution to x-risk from c/c via first-order effects, and a total 5% contribution to x-risk from c/c via increased risk from AI, bio-terrorism, and nuclear winter (all of which plausibly get worse under political instability), that doesn’t sound obviously unreasonable to me.
The concrete claims I’m defending are that
1. insofar as they exist, n-th order contributions to x-risk matter roughly as much as first-order contributions; and
2. it’s not obvious that they don’t exist or are not negligible.
I think those are all you need to see that the single-category framing is the correct one.
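If it helps, here is one way to write the claim down; the notation is mine and the 0.5%/5% figures are just the illustrative ones from above:

$$
\Delta P_{\text{x-risk}}(\text{c/c}) \;=\; \Delta P_{\text{first order}}(\text{c/c}) \;+\; \sum_{k \in \{\text{AI},\,\text{bio},\,\text{nuclear winter}\}} \Delta P_{k}(\text{c/c}) \;\approx\; 0.5\% + 5\% \;=\; 5.5\%,
$$

where $\Delta P_{k}(\text{c/c})$ is the amount by which solving climate change would reduce the probability of an existential catastrophe arising through channel $k$.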
OK, so it turns out I misunderstood your example in two different ways, making (in addition to the error discussed above) the rookie mistake of assuming that, when you gave nuclear war leading to nuclear winter (which surely is a variety of anthropogenic climate change), the latter was the “climate change” you meant. Oh well.
So, I do agree that if climate change contributes to existential risk indirectly in that sort of way (but we’re still talking about the same kind of climate change as we might worry about the direct effects of) then yes, that should go in the same accounting bucket as the direct effects. Yay, agreement.
(And I think we also agree that cases where other things such as nuclear war produce other kinds of climate change should not go in the same accounting bucket, even though in some sense they involve climate change.)
> So, I do agree that if climate change contributes to existential risk indirectly in that sort of way (but we’re still talking about the same kind of climate change as we might worry about the direct effects of) then yes, that should go in the same accounting bucket as the direct effects. Yay, agreement.
> (And I think we also agree that cases where other things such as nuclear war produce other kinds of climate change should not go in the same accounting bucket, even though in some sense they involve climate change.)
Yes on both.
This conversation is sort of interesting on a meta level. Turns out there were two ways my example was confusing, and neither of them occurred to me when I wrote it. Apologies for that.
I’m not sure if there’s a lesson here. Maybe something like ‘the difficulty of communicating something isn’t strictly tied to how simple the point seems to you’ (because this was kind of the issue; I thought what I was saying was simple, hence easy to understand, hence there was no need to think much about what examples to use). Or maybe the lesson is just to always think for a minimum amount of time, since one tends to underestimate the difficulty of conversation in general. In retrospect, it sure seems stupid to use nuclear winter as an example of a second-order effect of climate change, when the fact that winter and climate are connected is totally coincidental.
It’s somewhat consoling that we at least managed to resolve one misunderstanding per back-and-forth message pair.