For many utility functions, I think donating to an organisation working on decreasing existential risk would be incredibly efficient, as:
Even if we use the most conservative of [estimates of the utility of decreasing existential risk], which entirely ignores the possibility of space colonisation and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives. (Bostrom, Existential Risk Prevention as Global Priority)
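To make the quoted arithmetic explicit, here is a minimal sketch; the 10^16 figure is Bostrom's conservative lower bound from the quote, and everything else follows from it:

```python
# Bostrom's conservative lower bound on the expected loss of an
# existential catastrophe: 10^16 human lives.
expected_loss_in_lives = 10**16

# "One millionth of one percentage point" as a change in probability:
# one millionth (1e-6) of one percentage point (0.01) = 1e-8.
risk_reduction = 1e-6 * 0.01

# Expected value of that reduction, measured in lives.
expected_lives_saved = expected_loss_in_lives * risk_reduction
print(expected_lives_saved)        # 100000000.0 -> 10^8 lives
print(expected_lives_saved / 1e6)  # 100.0 -> "a hundred times the value of a million human lives"
```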
Doesn’t that fall prey to Pascal’s Mugging?
I don’t think decreasing existential risk falls prey to it, because the probability of an existential catastrophe isn’t extremely small. One survey taken at Oxford put the chance of human extinction before 2100 at roughly 19%. Determining the probability of existential catastrophe is very challenging and that statistic should be viewed skeptically, but a probability anywhere near 19% would still (as far as I can tell) prevent it from being a case of Pascal’s Mugging.
But your earlier quote says that it makes sense to reduce risk by a millionth of a percentage point because the expected value of lives saved is still large. It doesn’t propose reducing the risk from 19% to nothing; it proposes reducing the risk by a tiny amount. Only in the unlikely event that that tiny change happens to be the tipping point that prevents extinction would this reduction be beneficial; the expected value is derived by multiplying this unlikelihood by the large number of lives saved were it to be true. That sounds like Pascal’s Mugging. I agree that it wouldn’t be Pascal’s Mugging to reduce the 19% to 0, but I think that reducing it to 18.999999% is.
I see what you mean. I don’t really know enough about Pascal’s Mugging to determine whether decreasing existential risk by one millionth of one percentage point is worth it, but it’s a moot point, as it seems reasonable that existential risk could be reduced by far more than one millionth of one percentage point.
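As a closing sketch of why the achievable size of the reduction matters, here is the same expected-value calculation written as a function of the reduction; the 1e-3 figure is purely hypothetical, picked only to show the scaling:

```python
# Expected lives saved = (expected loss of a catastrophe) x (reduction in
# its probability), using Bostrom's conservative 10^16-lives lower bound.
EXPECTED_LOSS_IN_LIVES = 10**16

def expected_lives_saved(reduction_in_percentage_points: float) -> float:
    """Convert a reduction in percentage points into a probability
    change (divide by 100) and multiply by the expected loss."""
    return EXPECTED_LOSS_IN_LIVES * (reduction_in_percentage_points / 100)

# The quote's figure: one millionth of one percentage point.
print(expected_lives_saved(1e-6))  # 1e+08 lives
# A hypothetical thousand-fold larger (but still tiny) reduction.
print(expected_lives_saved(1e-3))  # 1e+11 lives
```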