Precision beyond order-of-magnitude probably doesn’t matter. But there’s not much agreement even at the order-of-magnitude level. Is it 1% in the next 10 years or the next 80? Is it much over 1%, or closer to 0.1%? And is that conditional on other risks not materializing? Why would you give less than 1% to those other risks? (My suspicion is you don’t think civilizational collapse or a 90% reduction in population counts as existential, which is debatable on its own.)
And even if it IS the most important (but still very unlikely) risk, that doesn’t make it the one with the highest EV to work on or donate to. You need to multiply by the amount of change you think you can have.
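For what it’s worth, part of the order-of-magnitude disagreement may just be the time horizon: the same cumulative figure spread over 80 years implies a much smaller annual risk than it does over 10. A minimal sketch, assuming a constant annual hazard and independence across years (the 1% figures are just the ones from this thread, not anyone’s actual estimate):

```python
# Convert a cumulative probability of catastrophe over an N-year horizon
# into the implied constant annual probability, assuming independence
# across years (a simplification, not anyone's actual model).

def annual_risk(cumulative: float, years: int) -> float:
    """Annual probability p such that 1 - (1 - p)**years == cumulative."""
    return 1 - (1 - cumulative) ** (1 / years)

# The same "1%" headline number, over two different horizons:
print(f"1% over 10 years ~ {annual_risk(0.01, 10):.4%} per year")  # ~0.1005% per year
print(f"1% over 80 years ~ {annual_risk(0.01, 80):.4%} per year")  # ~0.0126% per year
```

Those two annual rates differ by nearly an order of magnitude on their own, which is part of why the headline numbers are hard to compare.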
Yes, “precision beyond order-of-magnitude” is probably a better way to say what I was trying to say.
I would go further and say that establishing P(doom) > 1% is sufficient to make AI the most important x-risk, because, as you point out, I don’t think there are other x-risks with over a 1% chance of causing extinction (or permanent collapse). I don’t have this argument written up, but my reasoning mostly comes from the pieces I linked, along with John Halstead’s research on the risks from climate change.
“You need to multiply by the amount of change you think you can have.”
Agreed. I don’t know of any work that addresses this question directly by trying to estimate how much different projects can reduce P(doom), but I would be very interested to read something like that. I also think P(doom) sort of contains this information, but people seem to use different definitions.
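To make the “multiply by the amount of change” point concrete, here is a minimal sketch of the expected-value comparison being described. All of the numbers and project names below are hypothetical placeholders for illustration, not estimates from this thread or any published source:

```python
# Minimal expected-value sketch: a cause with a larger P(doom) is not
# automatically the best thing to work on if your marginal effect on it
# is small. All figures below are made-up placeholders.

projects = {
    # name: (baseline risk being targeted, fraction of that risk your
    #        work/donation could plausibly remove)
    "project_A": (0.05, 0.001),  # bigger risk, harder to move
    "project_B": (0.01, 0.02),   # smaller risk, more tractable
}

for name, (risk, reduction) in projects.items():
    # Expected reduction in P(doom) attributable to the project.
    delta = risk * reduction
    print(f"{name}: expected P(doom) reduction = {delta:.5f}")

# Here project_B (0.00020) beats project_A (0.00005) despite the smaller
# headline risk, because it is assumed to be far more tractable.
```

The point of the sketch is just that the ranking can flip once tractability enters the product, which is why a raw P(doom) comparison isn’t enough on its own.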