Standard cost-benefit analysis on non-Venusian global warming involves (implicit or explicit) projections of climate sensitivity, technological change, economic and population growth, risks of nuclear war and other global catastrophic risks, economic damages of climate change, and more, 90 (!!!) years or even centuries into the future. There are huge areas where subjective estimates play big roles.
Right, so maybe the reference to global warming was a bad example, because there too one is dealing with vast uncertainties. Note that global warming passes the “test” (3) above.
Nonetheless, one can put reasonable probability distributions on these and conclude that there are low-hanging fruit worth plucking (as part of a big enough global x-risk reduction fund).
I’m curious about this.
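To make that concrete with a hedged sketch: one way to read “putting reasonable probability distributions on these” is as a Monte Carlo expected-value calculation over subjective inputs. The distributions and numbers below are purely illustrative placeholders of mine, not anyone’s actual estimates:

    import random

    random.seed(0)  # fix the seed so the illustration is reproducible

    def sample_cost_effectiveness():
        # All inputs are made-up placeholder distributions, not published estimates.
        p_catastrophe = min(random.lognormvariate(-3, 1.5), 1.0)      # baseline existential risk this century
        relative_reduction = min(random.lognormvariate(-9, 2), 1.0)   # fraction of that risk removed per $1M spent
        value_if_averted = 10 ** random.uniform(12, 20)               # “lives saved” equivalent if catastrophe is averted
        return p_catastrophe * relative_reduction * value_if_averted  # expected lives saved per $1M

    samples = sorted(sample_cost_effectiveness() for _ in range(100_000))
    print("median expected lives saved per $1M:", samples[len(samples) // 2])
    print("mean   expected lives saved per $1M:", sum(samples) / len(samples))

Whether the resulting numbers actually favor x-risk reduction over other uses of the money depends entirely on those subjective inputs, which is where my uncertainty (and, I suspect, most of the disagreement) lives.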
Regarding social costs and being an “odd duck”: note that Weitzman, in his widely celebrated article on uncertainty about seemingly implausible, hard-to-analyze, high-impact climate change, also calls for work on risks of AI and engineered pathogens as some of the handful of serious x-risks demanding attention.
Likewise, judge-economist Richard Posner called for preliminary work on AI extinction risk in his book Catastrophe. Philosopher John Leslie, in his book on human extinction, discussed AI risk at length. Bill Gates went out of his way to mention it as a possibility.
These are pertinent examples but I think it’s still fair to say that interest in reducing AI risks marks one as an odd duck at present and that this gives rise to an equilibrating force against successful work on preventing AI extinction risk (how large I don’t know). I can imagine this changing in the near future.
Regarding Fermi calculations, the specific argument in the post is wrong for the reasons JGWeissman mentions:
I attempted to clarify in the comments.
With respect to sacrifice/demandingness, that’s pretty orthogonal to efficacy.
What I was trying to get at here was that, partly for social reasons and partly because the inherent open-endedness and uncertainty spanning many orders of magnitude are conducive to psychological instability, the minimum sacrifice needed to consciously and usefully reduce existential risk may be too great for people to work on it effectively by design.
Regarding Pascal’s Mugging, that involves numbers much more extreme than those that show up in the area of x-risk, by many orders of magnitude.
This is true for the Eliezer/Bostrom case study, but my intuition is that the same considerations apply. Even if the best estimates of the probability that Christianity is true aren’t presently > 10^(-50), there was some point in the past when, in Europe, the best estimates of the probability that Christianity is true were higher than some of the probabilities that show up in the area of x-risk.
I guess I would say that humans are sufficiently bad at reasoning about small-probability events, when the estimates are not strongly data-driven, that acting on such estimates without having somewhat independent arguments for the same action is likely a far-mode failure. I’m particularly concerned about the Availability heuristic here.
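For a rough sense of the gap being pointed to here, the following back-of-the-envelope comparison treats the 10^(-50) figure above as a stand-in for a Pascal’s-Mugging-scale probability and a made-up 10^(-3) as a stand-in for an x-risk-style probability; both numbers are illustrative placeholders rather than anyone’s actual estimates:

    import math

    # Illustrative placeholder numbers only, not anyone's actual estimates.
    pascal_mugging_probability = 1e-50  # stand-in for a Pascal's-Mugging-scale probability
    xrisk_style_probability = 1e-3      # hypothetical small-but-not-negligible x-risk-style probability

    gap = math.log10(xrisk_style_probability / pascal_mugging_probability)
    print(f"gap: roughly {gap:.0f} orders of magnitude")  # prints roughly 47

That gap of dozens of orders of magnitude is the sense in which the two cases differ, even though my intuition is that similar far-mode worries apply to both.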
Sure, most people are not unitary total utilitarians.
I’m unclear on whether my intuition here comes from a deviation from unitary total utilitarianism or from, e.g., game-theoretic considerations that I don’t understand explicitly but which are compatible with unitary total utilitarianism.
we have strong negative associations with weak execution, which are pretty well grounded, since one can usually find something trying to do the same task more efficiently.
Agree.
That applies to x-risk as well.
Except insofar as there’s relatively little interest in x-risk and few organizations involved (again, not making a judgment about particular organizations here).
The meaningful question is: “considering the best way to reduce existential risk I can find, including investing in the creation or identification of new opportunities and holding resources in hope of finding such in the future, do I prefer it to some charity that reduces existential risk less but displays more indicators of virtue and benefits current people in the near-term in conventional ways more?”
My own intuition points me toward favoring a focus on existential risk reduction but I have uncertainty as to whether it’s right (at least for me personally) because:
(i) I’ve found thinking about existential risk reduction destabilizing on account of the poor quality of the information available and the multitude of relevant considerations. As Anna says in Making your explicit reasoning trustworthy:
Some people find their beliefs changing rapidly back and forth, based for example on the particular lines of argument they’re currently pondering, or the beliefs of those they’ve recently read or talked to. Such fluctuations are generally bad news for both the accuracy of your beliefs and the usefulness of your actions.
(ii) Most of the people who I know are not in favor of near-term overt focus on existential risk reduction. I don’t know whether this is because I have implicit knowledge that they don’t have, because they have implicit knowledge that I don’t have or because they’re motivated to be opposed to such near-term overt focus for reasons unrelated to global welfare. I lean toward thinking that the situation is some combination of the latter two of the three. I’m quite confused about this matter.
Most of the people who I know are not in favor of near-term overt focus on existential risk reduction. I don’t know whether this is because I have implicit knowledge that they don’t have, because they have implicit knowledge that I don’t have or because they’re motivated to be opposed to such near-term overt focus for reasons unrelated to global welfare. I lean toward thinking that the situation is some combination of the latter two of the three.
I think you would normally expect genuine concern about saving the world to be rare among evolved creatures. It is a problem that our ancestors rarely faced. It is also someone else’s problem.
Saving the world may make sense as a superstimulus to the human desire for a grand cause, though. Humans are attracted to such causes for reasons that appear to be primarily to do with social signalling. I think a signalling perspective makes reasonable sense of the variation in the extent to which people are interested in the area.