What are the most effective charities working towards reducing biotech or pandemic x-risk? I see those mentioned here occasionally as the second most important x-risk behind AI risk, but I haven’t seen much discussion on the most effective ways to fund their prevention. Have I missed something?
Biotech x-risk is a tricky subject, since research into how to prevent it is also likely to provide more information on how to engineer biothreats. It's far from trivial to know which lines of research will decrease the risk and which will increase it. One doesn't want a 28 Days Later type situation, where a lab doing research into viruses ends up being the source of a pandemic.
Note that Friendly AI (if it works) will defeat all (or at least a lot of) x-risk. So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren’t AI risk. If you anticipate an intelligence explosion but aren’t worried about UFAI then your favourite charity is probably some non-MIRI AI research lab (Google?).
So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren’t AI risk.
You're ignoring time. If you expect a sufficiently powerful FAI to arise, say, no earlier than a hundred years from now, and you think the coming century has significant x-risks, focusing all resources on FAI might not be a good idea.
Not to mention that if your P(AI) isn’t close to one, you probably want to be prepared for the situation in which an AI never materializes.
As far as I remember from LW census data, the median date for the predicted AGI intelligence explosion didn't fall in this century, and more people considered bioengineered pandemics the most probable x-risk in this century than UFAI.
Close. Bioengineered pandemics were the GCR (global catastrophic risk — not necessarily as bad as a full-blown X-risk) most often (23% of responses) considered most likely. (Unfriendly AI came in third at 14%.) The median singularity year estimate on the survey was 2089 after outliers were removed.