Note that Friendly AI (if it works) will defeat all (or at least a lot of) x-risk. So FAI research has a good claim to being the most effective way of reducing x-risks, even the ones that aren’t AI risks. If you anticipate an intelligence explosion but aren’t worried about UFAI, then your favourite charity is probably some non-MIRI AI research lab (Google?).
You’re ignoring time. If you expect a sufficiently powerful FAI to arise, say, no earlier than a hundred years from now, and you think the coming century has significant x-risks of its own, then focusing all resources on FAI might not be a good idea.
Not to mention that if your P(AI) isn’t close to one, you probably want to be prepared for the situation in which an AI never materializes.
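To make that hedging point concrete, here is a toy calculation; every number in it is an assumption invented purely for illustration, not anyone’s actual estimate. The point is just that a long timeline and a P(AI) well below one multiply together against an all-in FAI strategy.

```python
# Purely illustrative: all numbers below are made-up assumptions,
# not survey data or anyone's actual estimates.

p_agi = 0.7                # assumed probability that AGI ever arrives
p_fai_given_agi = 0.5      # assumed probability that FAI work succeeds, given AGI
p_survive_until_agi = 0.8  # assumed probability that no other x-risk strikes first

# Probability that an all-in bet on FAI actually pays off, i.e. AGI arrives,
# the FAI work succeeds, and nothing else wipes us out in the meantime:
p_fai_bet_pays_off = p_agi * p_fai_given_agi * p_survive_until_agi
print(p_fai_bet_pays_off)  # 0.28 under these toy numbers
```

Under these (entirely arbitrary) numbers the all-in bet only pays off about a quarter of the time, which is why a P(AI) that isn’t close to one pushes toward keeping some resources on the other risks.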
As far as I remember from the LW census data, the median predicted date for an AGI intelligence explosion didn’t fall in this century, and more people considered bioengineered pandemics the most probable X-risk for this century than UFAI.
Close. Bioengineered pandemics were the GCR (global catastrophic risk, not necessarily as bad as a full-blown X-risk) most often picked as most likely, at 23% of responses. (Unfriendly AI came in third at 14%.) The median singularity year estimate on the survey was 2089 after outliers were removed.