What about “Deniers”? As in, climate change deniers.
Too harsh maybe? IDK, I feel like a neutral observer presented with a conflict framed as “Doomers vs. Deniers” would not say that “deniers” was the harsher term.
I’d definitely disagree, if only because it implies a level of evidence for the doom side that’s not really there, and the evidence is a lot more balanced than in the climate case.
IMO this is the problem with Zvi’s attempted naming too: it incorrectly assumes the debate on AI is so settled that we can treat people who view AI as not an X-risk as dismissible deniers engaged in wishful thinking. We aren’t at that point, to a large extent, even for the better-argued material like the Orthogonality Thesis or Instrumental Convergence.
Having enough evidence to confidently dismiss something is very hard, much harder than people realize.
? The people viewing AI as not an X-risk are the people confidently dismissing something.
I think the evidence is really there. Again, the claim isn’t that we are definitely doomed, it’s that AGI poses an existential risk to humanity. I think it’s pretty unreasonable to disagree with that statement.
The point is that the details aren’t analogous to the climate change case, and while I don’t agree with people who dismiss AI risk, I think the evidence we have isn’t enough to claim anything more than that AI risk is real.
The details matter, and due to unique issues, it’s going to be very hard to get to the level where we can confidently say that people denying AI risk are being totally irrational.
I normally am all for charitability and humility and so forth, but I will put my foot down and say that it’s irrational (or uninformed) to disagree with this statement:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
(I say uninformed because I want to leave an escape clause for people who aren’t aware of various facts or haven’t been exposed to various arguments yet. But for people who have followed AI progress recently and/or who have heard the standard arguments for riskiness, yeah, I think it’s irrational to deny the CAIS statement.)
I think the situation is quite similar to the situation with climate change, and I’m overall not sure which is worse. What are the properties of climate change deniers that seem less reasonable to you than AI x-risk deniers?
Or more generally, what details are you thinking of?
I agree with the statement, broadly construed, so I don’t disagree here.
The key disanalogy between climate change and AI risk is the size of the evidence base for each.
For climate change, there were arguably trillions to quadrillions of data points of evidence, if not more, which is easily enough to force even very skeptical priors to update massively.
For AI, the evidence base is closer to maybe 100 data points at maximum, and arguably lower than that. This is changing, and things are getting better, but it’s quite different from the climate case, where you could call them deniers pretty matter-of-factly. This means more general priors matter, and even not-very-extreme priors wouldn’t update much on the evidence for AI doom, so AI-risk skeptics are much, much less irrational than climate deniers.
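To make the updating point concrete, here’s a minimal Bayesian sketch (the per-datum likelihood ratio $\ell$ and the counts below are illustrative assumptions of mine, and the data points are treated as independent): if each data point favors a hypothesis $H$ by a factor $\ell$, then after $n$ points the posterior odds are

$$\frac{P(H \mid D)}{P(\lnot H \mid D)} = \ell^{\,n}\,\frac{P(H)}{P(\lnot H)}.$$

Even with a very weak $\ell = 1.001$, $n = 10^{12}$ gives an odds multiplier around $e^{10^9}$, which swamps any non-dogmatic prior; with $n = 100$, the multiplier is only about $1.1$, so the prior dominates.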
If the statement is all that’s being asked for, that’s enough. The worry is when people apply climate analogies to AI without realizing the differences, and those differences are enough to alter or invalidate the conclusions being argued for.
I’m not at all sure this would actually be relevant to the rhetorical outcome, but I feel like the AI-can’t-go-wrong camp wouldn’t really accept the “Denier” label in the same way people in the AI-goes-wrong-by-default camp accept “Doomer.” Climate change deniers agree they are deniers, even if they prefer terms like skeptic among themselves.
In the case of climate change deniers, the question is whether or not climate change is real, and the thing they are denying is the mountain of measurements showing that it is. I think what is different about the can’t-go-wrong / goes-wrong-by-default dichotomy is that the question we’re arguing about is instead the direction of change; it would be as if we transmuted the climate change denier camp into a bunch of people whose response wasn’t “no it isn’t” but instead “yes, and that is great news and we need more of it.”
Naturally it is weird to imagine people tacitly accepting the Mary Sue label in the same way we accept Doomer, so cut by my own knife I suppose!
The analogy (in terms of dynamics of the debate) with climate change is not that bad: “great news and we need more” is in fact a talking point of people who prefer not acting against climate change. E.g., they would mention correlations between plant growth and CO2 concentration. That said, it would be weird to call such people climate deniers.