The point is that the details aren’t analogous to the climate change case, and while I don’t agree with people who dismiss AI risk, I think the evidence we have isn’t enough to claim anything more than that AI risk is real.
The details matter, and due to unique issues, it’s going to be very hard to reach the point where we can confidently say that denying AI risk is totally irrational.
I normally am all for charitability and humility and so forth, but I will put my foot down and say that it’s irrational (or uninformed) to disagree with this statement:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
(I say uninformed because I want to leave an escape clause for people who aren’t aware of various facts or haven’t been exposed to various arguments yet. But for people who have followed AI progress recently and/or who have heard the standard arguments for riskiness, yeah, I think it’s irrational to deny the CAIS statement.)
I think the situation is quite similar to the situation with climate change, and I’m overall not sure which is worse. What are the properties of climate change deniers that seem less reasonable to you than AI x-risk deniers?
Or more generally, what details are you thinking of?
I agree with the statement, broadly construed, so I don’t disagree here.
The key disanalogy between climate change and AI risk is the size of the evidence base behind each.
For climate change, there were arguably trillions to quadrillions of data points of evidence, if not more, which is easily enough to force even very skeptical priors to update massively.
For AI, the evidence base is closer to maybe 100 data points at maximum, and arguably lower than that. This is changing, and things are getting better, but it’s quite different from climate change, where you could call people deniers pretty matter-of-factly. This means more general priors matter, and even not-very-extreme priors wouldn’t update much on the evidence for AI doom, so AI risk deniers are much, much less irrational than climate deniers.
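To make the updating point concrete, here’s a minimal back-of-the-envelope sketch. The prior odds and likelihood ratios are made-up numbers for illustration, not estimates of anything:

```python
import math

def log10_posterior_odds(log10_prior_odds, lr_per_point, n_points):
    """Bayes: posterior odds = prior odds * (likelihood ratio)^n_points.
    Computed in log10 space so a huge n_points doesn't overflow."""
    return log10_prior_odds + n_points * math.log10(lr_per_point)

# Hypothetical skeptic starting at 1:1,000,000 odds (log10 odds = -6),
# where each data point is only weakly diagnostic (likelihood ratio 1.01).
print(log10_posterior_odds(-6, 1.01, 100))     # ~ -5.57: barely moved
print(log10_posterior_odds(-6, 1.01, 10**12))  # ~ 4.3e9: prior is irrelevant
```

With ~100 weakly diagnostic data points, the skeptic’s posterior is nearly unchanged; with trillions, even the most skeptical plausible prior gets swamped. That’s the asymmetry being claimed here.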
If the statement is all that’s being asked for, that’s enough. The worry is when people apply climate analogies to AI without realizing the differences, and those differences are enough to alter or invalidate the conclusions being argued for.