Currently I don’t think existential risk charities are very appropriate for small-scale individual donations, because of the difficulty of evaluating them. I feel that donating to a long-term research charity is a recipe for either analysis-paralysis or a decision that’s ultimately arbitrary. I’ll definitely continue gathering information, and see whether I can raise my confidence in an existential risk charity enough to consider donating. I think it will take a lot of research.
For any systemic risk charity, you can give a kind of “Drake equation” that arrives at an estimated dollar-per-life based on a sequence of probability estimates. Off the top of my head, I think the global population estimate for 2050 is around 8 billion, assuming the current trend in reducing the number of people in extreme poverty continues (reducing extreme poverty reduces pop. growth). That means you have to arrive at a probability greater than 8,000,000:1 to get a cost-per-life estimate of under $1,000.
At first glance that odds ratio looks pretty generous. But it’s very difficult to have any confidence in the calculation that leads to it. How do I decide between likelihood estimates of 10^-3 and 10^-5? Both are too small for me to evaluate informally, and there are two orders of magnitude between them. Is there a page where you lay out these estimates? I’ve been assuming one exists, but I haven’t seen it yet.
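To make the sensitivity concrete, here’s a minimal sketch of that back-of-the-envelope calculation. The population figure and the two probability estimates are just the illustrative numbers from the paragraphs above, and the $1M donation size is a made-up assumption, not anyone’s actual estimate:

```python
# Minimal sketch of the "Drake equation" style cost-per-life estimate.
# All numbers are illustrative assumptions, not estimates I endorse.

POPULATION_2050 = 8e9  # rough global population assumption from above

def cost_per_life(donation, prob_reduction):
    """Cost per expected life saved, if `donation` dollars buys a
    `prob_reduction` drop in the probability of extinction."""
    expected_lives_saved = prob_reduction * POPULATION_2050
    return donation / expected_lives_saved

# The two-orders-of-magnitude problem: probability reductions of 1e-3
# vs 1e-5 (for a hypothetical $1M donation) give very different answers.
for p in (1e-3, 1e-5):
    print(f"p = {p:.0e}: ${cost_per_life(1_000_000, p):,.3f} per expected life")
```

The two estimates produce answers a factor of 100 apart, which is exactly the evaluation problem: nothing in the informal reasoning tells you which exponent to trust.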
The above calculation seems to consider only current people, and places little value on additional years of happy life for current people or on lives better than current Western standards. Nick Bostrom’s Astronomical Waste paper discusses those issues. Time-discounting isn’t enough to wipe out the effect either, since populations may expand very quickly (e.g. via brain emulations, artificial wombs, and AI teachers).
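A toy calculation shows why discounting alone doesn’t settle this; the 3% discount rate and 20% growth rate here are made-up numbers purely for illustration:

```python
# Illustrative only: if post-emulation population can grow faster than
# the discount rate, discounting alone doesn't tame the future term.
discount_rate = 0.03  # assumed annual time discount
growth_rate = 0.20    # assumed annual population growth (e.g. emulations)

value = 1.0
for year in range(50):
    value *= (1 + growth_rate) / (1 + discount_rate)
print(value)  # discounted headcount still grows ~2,000x over 50 years
```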
Gaverick Matheny’s paper “Reducing the Risk of Human Extinction” is also relevant, although it arbitrarily caps various things (like the rate of population growth) to limit the dominance of the future.
If you care about bringing future people into being, then the expected future population if we avoid existential risk is many, many orders of magnitude greater than the current population of the world and looms very large.
If you don’t care about future people then you have to grapple with the Nonidentity Problem:
Suppose that agents as a community have chosen to deplete rather than conserve certain resources. The quality of life for the persons who exist now and who will come into existence over the next two centuries will be “slightly higher” than under a conservation alternative (Parfit 1987, 362). Thereafter, however, for many centuries the quality of life would be much lower. “The great lowering of the quality of life must provide some moral reason not to choose Depletion” (p. 363). Surely we ought to have chosen conservation in some form or another instead. But at the same time depletion seems to harm no one: while distant future persons, by hypothesis, will suffer the adverse effects of the choice of depletion, it is also true that a conservation choice very probably would have changed the timing and manner of the conceptions. Future persons, in other words, owe their suffering but also their very existence to the depletion choice. Provided that that existence is worth having, we seem forced to conclude that depletion does not harm, or make things worse for, and is not otherwise “bad for,” anyone at all (p. 363).
Separately, there seems to be a typo in this paragraph of your post:
Off the top of my head, I think the global population estimate for 2050 is around 8 billion, assuming the current trend in reducing the number of people in extreme poverty continues (reducing extreme poverty reduces pop. growth). That means you have to arrive at a probability greater than 8,000,000:1 to get a cost-per-life estimate of under $1,000.
If you mean “what reduction in the probability of (immediate) extinction is equivalent, in expected lives of currently living people, to saving one life today,” then that will be near 1 in 8 billion, not 1 in 8 million. That figure is also a slight underestimate if you only care about current people, because medium-term catastrophes would kill future people who don’t yet exist, and many current people will have died by then.
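A quick sanity check of the corrected figure, carrying over the 8 billion population assumption from above:

```python
# Hedged sanity check: to match saving one life today, the required drop
# in extinction probability is roughly 1 / (number of people who would
# otherwise die), i.e. about 1 in 8 billion.
population = 8e9
print(1 / population)  # 1.25e-10, i.e. ~1 in 8 billion
print(1 / 8e6)         # 1.25e-07 -- the "8 million" figure is 1000x off
```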
Also, if you’re looking for easier-to-evaluate charities, or bigger, higher-status ones endorsed by folks such as Warren Buffett, foreign policy elites, etc., I suggest the Nuclear Threat Initiative as an existence proof of the possibility of spending on x-risk reduction. I wouldn’t recommend giving to it in particular, but it does point to the feasibility of meaningful action. Also see Martin Hellman’s work on reducing nuclear risk.
Are nukes really an x-risk?