I am generally concerned, and also think this makes me an outlier. I don’t have any specific model of what will happen.
This is a low-information belief that could definitely change in the future. However, it doesn’t seem important to figure out exactly how dangerous climate change is, because doing something about it is definitely not my comparative advantage, and I’m confident that it’s less under-prioritized and less important than dangers from AI. It’s mostly like, ‘well, the Future of Life Institute has studied this problem, they don’t seem to think we can disregard it as a contributor to existential risk, and they seem like the most reasonable authority to trust here’.
A personal quibble I have is that I’ve seen people dismiss climate change because they don’t think it poses a first-order existential risk. I think this is a confused framing that comes from asking ‘is climate change an existential risk?’ rather than ‘does climate change contribute to existential risk?’, which is the correct question because existential risk is a single category. The answer to the latter question seems to be trivially yes, and the follow-up question is just how much.
It’s mostly like, ‘well, the Future of Life Institute has studied this problem, they don’t seem to think we can disregard it as a contributor to existential risk, and they seem like the most reasonable authority to trust here’.
Woah, yeah, just let it be known that I don’t think you should trust FLI with this kind of stuff. As far as I can tell, they have pretty transparently messed up prioritization in this way a few times, emphasizing hypotheses that seem intuitively compelling but aren’t actually very likely to be true, with the explicit aim of broadening their reach and appealing to a wider audience.
Of course, you are free to make your own judgement, but because FLI is loosely affiliated with the community, I think there is a good chance others will assume that I (and others) endorse their judgement here. So I want to make it clear that I very concretely don’t endorse their judgement on topics like this.
FWIW I don’t think the FLI is that reasonable an authority here; I’m not sure why you’d defer to them.
They do a good job coordinating lots of things to happen, but I think their public statements on AI, nukes, climate change, etc., are often pretty confused or wrong. For example, their focus on lethal autonomous weapons seems confused about the problem we have with AI: it emphasizes the direct destructive capabilities of AI systems rather than the alignment problem, where we don’t understand what decisions they’re even making and so cannot, even in principle, align their intent with our own.
I’m not sure I follow your point about “is” versus “contributes to”. I don’t think I agree that it doesn’t matter whether a particular entity is itself capable of ending civilization. Nanotech, AI, synthetic biology, each have the ability to be powerful enough to end civilization before breakfast. Climate change seems like a major catastrophe but not on the same level, and so while it’s still relevant to model over multiple decades, it’s not primary in the way the others are.
Suppose it is, in fact, the case that climate change contributes 10% to existential risk. (Defined as: if we performed surgery on the world state right now that perfectly solved climate change (c/c), existential risk would go down by that much.) Suppose further that only one percentage point of that goes into scenarios where snowball effects lead to the earth becoming so hot that society grinds to a halt, and nine percentage points go into scenarios where international tensions lead to an all-out nuclear war and a subsequent winter that ends all of humanity. Would you then treat “x-risk by climate change” as 1% or 10%? My point is that it should clearly be 10%, and this answer falls out of the framing I suggest. (Whereas the ‘x-risk by or from climate change’ phrasing makes it kind of unclear.)
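To make the accounting concrete, here is a minimal sketch in Python. The numbers are the made-up ones from the hypothetical above, not estimates, and the pathway labels are just illustrative:

```python
# The 'surgery' definition of contribution: how much would P(extinction) drop
# if climate change were perfectly solved right now, summed over every causal
# pathway the drop flows through? (Illustrative numbers, not estimates.)
pathway_reductions = {
    "first-order: runaway warming grinds society to a halt": 0.01,
    "n-th order: tensions -> all-out nuclear war -> nuclear winter": 0.09,
}

contribution_of_climate_change = sum(pathway_reductions.values())
print(f"x-risk contribution of climate change: {contribution_of_climate_change:.0%}")
# -> 10%, regardless of how the total splits across pathways
```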
FWIW I don’t think the FLI is that reasonable an authority here; I’m not sure why you’d defer to them.
The ‘FLI is a reasonable authority’ belief is itself fairly low information (low enough to be moved by your comment).
Would you then treat “x-risk by climate change” as 1% or 10%? My point is that it should clearly be 10%
Thanks! Your point is well taken; I’m generally pro being specific and clear in the way that you are being.
However, I have a clever counterargument, which I will now use for devastating effect!
(...not really, but I just realized this is very much like a conversation I had earlier this week.)
I was having a conversation with Critch and Habryka at a LessWrong event, where Critch said he felt people were using the term ‘aligned’ in a very all-or-nothing way, rather than discussing its subtleties. Critch made the following analogy (I’m recounting as best I can, forgive me if I have misremembered):
Bob sees a single sketchy-looking trial in his country’s legal system and says that the justice system is unjust, and should be overthrown.
Alice replies saying that justice is a fairly subtle category with lots of edge cases and things can be more or less just, and wants Bob to acknowledge all the ways that the justice system is and isn’t just rather than using a flat term.
Critch was saying people are being like Bob with the term ‘aligned’.
Habryka replied with a different analogy:
Bob lives in a failed state surrounded by many sham trials and people going to prison for bad reasons. He says that the justice system is unjust, and should be overthrown.
Alice replies <the same>.
As I see it, in the former conversation Alice is clarifying, and in the latter I feel like Alice is obfuscating.
I think often when I use the term ‘x-risk’ I feel more like Bob in the second situation, where most people didn’t really have it on their radar that this could finish civilization, rather than just being another terrible situation we have to deal with, filled with unnecessary suffering and death. Of the two analogies, I feel like we’re closer to the second one, where I’m Bob and you’re Alice.
Returning from the analogy, the point is that there are some things that are really x-risks and directly cause human extinction, and there are other things like bad governance structures or cancel culture or climate change that are pretty bad indeed and generally make society much worse off, but are not in the category of extinction risk, and I think it muddies the category to obfuscate which things are members of it and which aren’t. In most conversations, I’m trying to use ‘x-risk’ to distinguish between things that are really bad, and things that have this ability to cause extinction, where previously no distinction was being made.
The analogy makes sense to me, and I can see both how being Bob on alignment (and in many other areas) is a failure mode, and how being Alice in some cases is a failure mode.
But I don’t think it applies to what I said.
there are some things that are really x-risks and directly cause human extinction, and there are other things like bad governance structures or cancel culture or climate change that are pretty bad indeed and generally make society much worse off
I agree, but I was postulating that climate change increases the risk of literal extinction (where everyone dies) by 10%.
The difference between x-risk and catastrophic risk (which I think is what you’re talking about) is not the same as the difference between first- and n-th-order existential risk. As far as I’m concerned, the former is very large because of future generations, but the latter is zero. I don’t care at all whether climate change kills everyone directly or via nuclear war, as long as everyone is dead.
Or was your point just that the two could be conflated?
It’s not clear to me that in the scenario you describe 10% is a better figure to use than 1%.
Presumably the main reason for estimating such figures is to decide (individually or collectively) what to do.
If 10% of current existential risk is because of the possibility that our greenhouse-gas emissions turn the earth into Venus (and if 10% of current existential risk is a large amount), then the things we might consider doing as a result include campaigning for regulations or incentives that reduce greenhouse-gas emissions, searching for technologies that do useful things with lower emissions than the ones we currently have, investigating ways of getting those gases out of the atmosphere once they’re in, and so forth.
If 10% of current existential risk is because of the possibility that we have a massive nuclear war and the resulting firestorms fill the atmosphere with particulates that lower temperatures and the remaining humans freeze to death, then the things we might consider doing as a result include campaigning for nuclear disarmament or rearmament (whichever we think will do more to reduce the likelihood of large-scale nuclear war), finding ways to reduce international tensions generally, researching weapons that are even more directly destructive and have fewer side effects, investigating ways of getting particulates out of the atmosphere after a nuclear war, and so forth.
The actions in these two cases have very little overlap. The first set are mostly concerned with changing how we affect the climate. The second set are mostly concerned with changing whether and how we get into massive wars.
For what actual purpose is it meaningful to add the two figures together? It seems to me that if we’re asking “what existential risk arises from climate change?” we are interested in the first type of risk, and the second type wants combining with other kinds of existential risk arising from nuclear war (people killed by the actual nuclear explosions, EMP screwing up electronics we need to keep our civilization going, fallout making things super-dangerous even after the bombs have stopped going off, etc.).
I’m not certain that this analysis is right, but at the very least it seems plausible enough to me that I don’t see how it can be that “clearly” we want to use the 10% figure rather than the 1% figure in your scenario.
In the hypothetical, 9% was the contribution of climate change to extinction via nuclear winter, not the total probability of that outcome. The total probability of extinction via nuclear winter could be, say, 25%.
In that case, if we ‘solved’ climate change, the probability of extinction via nuclear winter would decrease from 25% to 16% (and the probability of first-order extinction from climate change would decrease from 1% to 0%). The total decrease in existential risk would be 10%.
I will grant you that it’s not irrelevant which pathway the effect comes through: if we somehow solved nuclear war entirely, this would make it much less urgent to solve climate change, since the possible gain would then be only 1% rather than 10%. But it still seems obvious to me that the number you care about when discussing climate change is 10%, because as long as we don’t magically solve nuclear war, that is its total contribution to the one event we care about (i.e., the single category of existential risk).
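As a minimal sketch of the arithmetic above (again, all numbers are the illustrative ones from the hypothetical, not estimates):

```python
# Illustrative numbers from the hypothetical, not estimates.
p_winter_total = 0.25      # total P(extinction via nuclear winter)
cc_share_of_winter = 0.09  # the part of that attributable to climate change
p_cc_direct = 0.01         # first-order extinction risk from climate change

# While nuclear war is still on the table, solving climate change removes
# both its direct pathway and its share of the nuclear-winter pathway:
p_winter_after = p_winter_total - cc_share_of_winter  # 0.16
total_drop_now = p_cc_direct + cc_share_of_winter     # 0.10

# If nuclear war were somehow solved first, only the direct pathway remains:
total_drop_without_nukes = p_cc_direct                # 0.01

print(f"{total_drop_now:.0%} now vs. {total_drop_without_nukes:.0%} if nuclear war were already solved")
```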
Ah, OK, I didn’t read carefully enough: you specified that somehow “solving” climate change would reduce Pr(extinction due to nuclear winter) by 9%. I agree that in that case you’re right. But now that I understand better what scenario you’re proposing it seems like a really weird scenario to propose, because I can’t imagine what sort of real-world “solution” to climate change would have that property. Maybe the discovery of some sort of weather magic that enables us to adjust weather and climate arbitrarily would do it, but the actual things we might do that would help with climate change are all more specific and limited than that, and e.g. scarcely anything that reduces danger from global warming would help much with nuclear winter.
So I’m not sure how this (to my mind super-improbable) hypothetical scenario, where work on climate change would somehow address nuclear winter along with global warming, tells us anything about the actual world we live in, where surely that wouldn’t be the case.
Am I still missing something important?
But now that I understand better what scenario you’re proposing it seems like a really weird scenario to propose, because I can’t imagine what sort of real-world “solution” to climate change would have that property. Maybe the discovery of some sort of weather magic that enables us to adjust weather and climate arbitrarily would do it
I think the story of how mitigating climate change reduces the risk of first-order effects from nuclear war is not that it helps us survive nuclear winter, but that climate change leads to things like refugee crises, which in turn lead to worse international relations and a higher chance of nuclear weapons being used; hence mitigating c/c leads to a lower chance of nuclear winter occurring.
The 1%/9% numbers were meant to illustrate the principle, not to be realistic, but if you told me something like ‘there’s a 0.5% contribution to x-risk from c/c via first-order effects, and a total 5% contribution to x-risk from c/c via increased risk from AI, bio-terrorism, and nuclear winter (all of which are plausibly made worse by political instability)’, that doesn’t sound obviously unreasonable to me.
The concrete claims I’m defending are that
insofar as they exist, n-th-order contributions to x-risk matter roughly as much as first-order contributions; and
it’s not obvious that they don’t exist or that they are negligible.
I think those are all you need to see that the single-category framing is the correct one.
OK, so it turns out I misunderstood your example in two different ways, making (in addition to the error discussed above) the rookie mistake of assuming that, when you gave nuclear war leading to nuclear winter (which surely is a variety of anthropogenic climate change), the latter was the “climate change” you meant. Oh well.
So, I do agree that if climate change contributes to existential risk indirectly in that sort of way (but we’re still talking about the same kind of climate change as we might worry about the direct effects of) then yes, that should go in the same accounting bucket as the direct effects. Yay, agreement.
(And I think we also agree that cases where other things such as nuclear war produce other kinds of climate change should not go in the same accounting bucket, even though in some sense they involve climate change.)
Yes on both.
This conversation is sort of interesting on a meta level. Turns out there were two ways my example was confusing, and neither of them occurred to me when I wrote it. Apologies for that.
I’m not sure if there’s a lesson here. Maybe something like ‘the difficulty of communicating something isn’t strictly tied to how simple the point seems to you’ (because this was kind of the issue; I thought what I was saying was simple, hence easy to understand, hence there was no need to think much about which examples to use). Or maybe just: always think for a minimum amount of time, since one tends to underestimate the difficulty of conversation in general. In retrospect, it sure seems stupid to use nuclear winter as an example of a second-order effect of climate change, when the fact that winter and climate are connected is totally coincidental.
It’s somewhat consoling that we at least managed to resolve one misunderstanding per back-and-forth message pair.