and that’s a far better investment than any other philanthropic effort that you know of, so you should fund course of action X even if you think that model A is probably wrong.
This stands out as problematic, since there’s no plausible consequentialist argument for this from a steel-manned Person 1. Person 1 is both arguing for the total dominance of total utilitarian considerations in Person 2’s decision-making, and separately presenting a bogus argument about what total utilitarianism would recommend. Jennifer’s comment addresses the first prong, while the following paragraphs address the second.
If one finds oneself in a situation where one doesn’t know of any other courses of action within a few orders of magnitude of the favored option’s cost-effectiveness, that’s a sign that one is radically underinformed about the topic and should be learning more before acting. For instance, considerations about large possible future populations/astronomical waste increase the expected value of any existential risk reduction, from asteroids to nukes to bio to AI. For any specific risk there are many different ways, direct and indirect, to try to address it.
Fermi calculations like the one in your post can easily show the strong total utilitarian case (that is, the case within the particular framework of total utilitarianism) for focus on existential risk, but the case that a highly specific “course of action” addressing a specific risk is better than alternative ways to reduce existential risk is not robust to shifts of many orders of magnitude in probability. Even if I am confident in the value of, e.g., asteroid tracking, there are still numerous implementation details that could (collectively) swing cost-effectiveness by a few orders of magnitude, and I can’t avoid that problem in the fashion of your Person 1.
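To make the compounding-uncertainty point concrete, here is a minimal Fermi-style sketch in Python. All of the numbers are made-up placeholders (none come from the post or the comment above); the point is only that a handful of factors, each uncertain by one or two orders of magnitude, yield a bottom line that spans several orders of magnitude.

```python
# Fermi-style cost-effectiveness sketch for a hypothetical x-risk intervention.
# Every figure below is an illustrative placeholder, not a real estimate.

# (low, high) ranges for each factor, each spanning roughly 1-2 orders of magnitude.
factors = {
    "future_lives_at_stake":     (1e13, 1e15),  # lives preserved if the catastrophe is averted
    "baseline_risk_probability": (1e-4, 1e-2),  # chance the catastrophe occurs at all
    "relative_risk_reduction":   (1e-4, 1e-2),  # fraction of that risk this project removes
    "prob_execution_succeeds":   (0.03, 0.3),   # chance the specific plan is implemented well
}
cost = 1e7  # dollars spent on the intervention

def expected_lives_per_dollar(values):
    """Multiply the factors together and divide by the cost."""
    product = 1.0
    for v in values:
        product *= v
    return product / cost

low = expected_lives_per_dollar(v[0] for v in factors.values())
high = expected_lives_per_dollar(v[1] for v in factors.values())

print(f"low estimate:  {low:.1e} expected lives per dollar")
print(f"high estimate: {high:.1e} expected lives per dollar")
print(f"spread: {high / low:.0e}x")  # here ~1e7x, i.e. seven orders of magnitude
```

With these placeholder ranges the low and high estimates differ by a factor of roughly 10^7, which is the sense in which a comparison between two specific projects is not robust: plausible disagreements over implementation details alone can flip which one looks better.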
I agree with most of what you say here. Maybe it satisfactorily answers the questions raised in my post; I’ll spend some time brooding over this.
For instance, considerations about large possible future populations/astronomical waste increase the expected value of any existential risk reduction, from asteroids to nukes to bio to AI. For any specific risk there are many different ways, direct and indirect, to try to address it.
Here it would be good to compile a list; I myself am very much at a loss as to what the available options are.
I have such lists, but by the logic of your post it sounds like you should gather them yourself so you worry less about selection bias.
I would love to study these lists! Would you mind sending them to me? (My email: myusername@gmx.de)