I think the question of whether doom has a moderate or a tiny probability is action-relevant, and how and why doom is most likely to happen is even more action-relevant.
Okay, but why? You’ve provided an assertion with no argument or evidence.
Yes, because I thought the why was obvious. I still do!
If doom has tiny probability, it’s better to focus on other issues. While I can’t give you a function mapping the doom mechanism to correct actions, different mechanisms of failure often require different techniques to address them—and even if they don’t, we want to check that the technique actually addresses them.
How large does it have to be before it’s worth focusing on, in your opinion? Even for very small probabilities of doom, the expected value is extremely negative, even if you fully discount future lives and only consider present lives.
A quick guess: at about a 1 in 10,000 chance of AI doom, working on it is about as good as earning to give (ETG) to GiveWell top charities.
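For what it’s worth, here is a rough sketch of the break-even comparison that kind of threshold implies. The lifetime donation figure, the cost per life saved, and the share of doom a single career could avert are all illustrative placeholders, not numbers from this thread:

```python
# Back-of-envelope: at what point does direct work on AI doom match earning-to-give (ETG)?
# All inputs below are illustrative placeholders, not claims made in this thread.

p_doom = 1e-4                    # 1-in-10,000 chance of AI doom
people_alive = 8e9               # present lives only, no future generations
lifetime_donations = 1_000_000   # assumed ETG donations over a career, USD (placeholder)
cost_per_life = 5_000            # rough GiveWell-style cost to save a life, USD (placeholder)

lives_saved_etg = lifetime_donations / cost_per_life     # ~200 lives in expectation
expected_deaths_from_doom = p_doom * people_alive        # ~800,000 lives in expectation

# Share of the (conditional) doom outcome one career would need to avert to break even:
break_even_share = lives_saved_etg / expected_deaths_from_doom
print(f"ETG saves ~{lives_saved_etg:.0f} lives in expectation")
print(f"Break-even share of doom averted per career: {break_even_share:.3%}")  # ≈ 0.025%
```

Under those placeholder figures, the 1-in-10,000 threshold amounts to assuming one career shifts the doom outcome by a few hundredths of a percent.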
So just to check, if we run the numbers, not counting non-human life or future lives, and rounding up a bit to an even 8 billion people alive today: if we assume for the sake of argument that each person has 30 QALYs left, that’s 8 billion × 30 = 240 billion QALYs at stake with doom, and a 0.01% chance of doom represents an expected loss of 24 million QALYs. Or, if we just think in terms of people, that’s an expected loss of 800,000 people.
If we count future lives, the number gets a lot bigger. If we conservatively guess at something like 100 trillion future lives throughout the history of the future universe, with let’s say 100 QALYs each, that’s 10^16 QALYs at stake.
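Spelling the arithmetic out as a quick sanity-check sketch (the 30 QALYs per living person and 100 QALYs per future life are just the rough guesses above):

```python
# Sanity check of the back-of-envelope numbers above.
people_alive = 8e9          # roughly everyone alive today
qalys_left_each = 30        # assumed remaining QALYs per person
p_doom = 1e-4               # 0.01%, i.e. the 1-in-10,000 threshold

qalys_at_stake = people_alive * qalys_left_each    # 240 billion QALYs
expected_qalys_lost = p_doom * qalys_at_stake      # 24 million QALYs
expected_deaths = p_doom * people_alive            # 800,000 people

# Future lives, using the same rough guesses as above:
future_lives = 1e14                                # ~100 trillion
qalys_per_future_life = 100
future_qalys_at_stake = future_lives * qalys_per_future_life   # 1e16 QALYs

print(f"QALYs at stake today: {qalys_at_stake:,.0f}")
print(f"Expected QALYs lost at 0.01% doom: {expected_qalys_lost:,.0f}")
print(f"Expected deaths at 0.01% doom: {expected_deaths:,.0f}")
print(f"Future QALYs at stake: {future_qalys_at_stake:.0e}")
```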
But either way, since this is your threshold, you seem to think that, in expectation, fewer than 800,000 people will die from misaligned AI? Is that right? At what odds would you be willing to bet that fewer than 800,000 people die as a result of the development of advanced AI systems?
There are about 8 billion people, so your 24,000 QALYs should be 24,000,000.
Oh, oops, thank you! I can’t believe I made that mistake. I’ll update my comment. I thought the number seemed really low!
Gotta disagree with you on this. When the stakes are this high, even a 1% chance of doom is worth dropping everything in your life to try and help with the problem.
To paraphrase both Batman and Dick Cheney (of all people, lol, but the logic is sound): “AGI has the power to destroy the entire human race, and if we believe there’s even a 1% chance that it will, then we have to treat it as an absolute certainty.”
I don’t agree, primarily because that reasoning only holds in a vacuum. Other existential risks have more than a 1% probability, so if AI risk had only a 1% probability, we should shift focus to another x-risk.
If you can name another immediate threat with a ≥1% chance of killing everyone, then yes, we should drop everything to focus on that too.
A pandemic that kills even just 50% of the population? <0.1%
An unseen meteor? <0.1%
Climate change? 0% chance that it could kill literally everyone