Would you then treat “x-risk from climate change” as 1% or 10%? My point is that it should clearly be 10%.
Thanks! Your point is well taken, I’m generally pro being specific and clear in the way that you are being.
However, I have a clever counterargument, which I will now use for devastating effect!
(...not really, but I just realized this is very much like a conversation I had earlier this week.)
I was having a conversation with Critch and Habryka at a LessWrong event, where Critch said he felt people were using the term ‘aligned’ in a very all-or-nothing way, rather than discussing its subtleties. Critch made the following analogy (I’m recounting as best I can, forgive me if I have misremembered):
Bob sees a single sketchy-looking trial in his country’s legal system and says that the justice system is unjust, and should be overthrown.
Alice replies saying that justice is a fairly subtle category with lots of edge cases and things can be more or less just, and wants Bob to acknowledge all the ways that the justice system is and isn’t just rather than using a flat term.
Critch was saying people are being like Bob with the term ‘aligned’.
Habryka replied with a different analogy:
Bob lives in a failed state surrounded by many sham trials and people going to prison for bad reasons. He says that the justice system is unjust, and should be overthrown.
Alice replies <the same>.
As I see it, in the former conversation Alice is clarifying, and in the latter I feel like Alice is obfuscating.
I think that often when I use the term ‘x-risk’ I feel more like Bob in the second situation, where most people didn’t really have it on their radar that this could finish civilization, rather than it just being another terrible situation we have to deal with, filled with unnecessary suffering and death. Of the two analogies, I feel we’re closer to the second one: I’m Bob and you’re Alice.
Returning from the analogy, the point is that there are some things that are really x-risks and directly cause human extinction, and there are other things like bad governance structures or cancel culture or climate change that are pretty bad indeed and generally make society much worse off, but are not in the category of extinction risk, and I think it muddies the category to obfuscate which things are members of it and which aren’t. In most conversations, I’m trying to use ‘x-risk’ to distinguish between things that are really bad and things that have this ability to cause extinction, where previously no distinction was being made.
The analogy makes sense to me, and I can see both how being Bob on alignment (and many other areas) is a failure mode, and how being Alice in some cases is a failure mode.
But I don’t think it applies to what I said.
there are some things that are really x-risks and directly cause human extinction, and there are other things like bad governance structures or cancel culture or climate change that are pretty bad indeed and generally make society much worse off
I agree, but I was postulating that climate change increases the probability of literal extinction (where everyone dies) by 10%.
The difference between x-risk and catastrophic risk (which I think is what you’re talking about) is not the same as the difference between first- and n-th-order existential risk. As far as I’m concerned, the former difference is very large because of future generations, but the latter is zero. I don’t care at all whether climate change kills everyone directly or via nuclear war, as long as everyone is dead.
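To make that accounting concrete, here is a minimal sketch using the 1% vs. 10% figures from earlier in the thread (the split between pathways is assumed purely for illustration, not an estimate):

% Illustrative numbers only: the 1% direct / 9% indirect split is assumed for the sketch.
\[
P(\text{extinction attributable to climate change})
= \underbrace{P(\text{direct pathway})}_{\approx 1\%}
+ \underbrace{P(\text{indirect pathways, e.g. climate} \to \text{nuclear war} \to \text{extinction})}_{\approx 9\%}
\approx 10\%
\]

On this accounting, “x-risk from climate change” tracks the total on the left, and whether a given pathway has one step or five makes no difference to it.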
Or was your point just that the two could be conflated?