not so much existential risk due to asteroid strike or Venusian global warming
This is fairly true of asteroids with a short enough time horizon, but there are still uncertainties over the future costs of asteroid defense and detection, military applications of asteroid defense interceptors, and so forth.
Standard cost-benefit analysis of non-Venusian global warming involves (implicit or explicit) projections of climate sensitivity, technological change, economic and population growth, risks of nuclear war and other global catastrophic risks, economic damages of climate change, and more, 90 (!!!) years into the future or even centuries. There are huge areas where subjective estimates play big roles there.
Venusian runaway surprise climate change of the kind discussed by Martin Weitzman or Broome, the sort threatening human extinction (through means other than a generic social collapse from which we never recover), involves working with right-tail outcomes and small probabilities of big impacts, added on top of all the other social, technological, and other uncertainties. Nonetheless, one can put reasonable probability distributions on these and conclude that there are low-hanging fruit worth plucking (as part of a big enough global x-risk reduction fund). However, it can’t be done without dealing with this uncertainty.
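A minimal sketch of what “putting reasonable probability distributions on these” could look like in practice; every distribution and parameter below is invented purely for illustration and is not anyone’s actual climate or x-risk estimate:

```python
import random

# Illustrative only: the distributions and parameters are made up for this
# sketch, not actual estimates of climate or extinction risk.
random.seed(0)

N = 100_000
reductions = []
for _ in range(N):
    # Subjective, fat-tailed uncertainty over the probability of a runaway,
    # extinction-level outcome this century.
    p_runaway = min(1.0, random.lognormvariate(-9.0, 2.0))
    # Fractional reduction in that probability bought by a hypothetical
    # intervention; also highly uncertain.
    fractional_reduction = random.betavariate(2, 200)
    reductions.append(p_runaway * fractional_reduction)

expected_reduction = sum(reductions) / N
print(f"Expected absolute reduction in extinction probability: {expected_reduction:.2e}")
# The point is not the particular output, but that the calculation forces one
# to deal with the uncertainty explicitly rather than set it aside.
```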
Regarding social costs and being an “odd duck”: note that Weitzman, in his widely celebrated article on uncertainty about seemingly implausible hard-to-analyze high-impact climate change, also calls for work on risks of AI and engineered pathogens as some of the handful of serious x-risks demanding attention.
Likewise judge-economist Richard Posner called for preliminary work on AI extinction risk in his book Catastrophe. Philosopher John Leslie in his book on human extinction discussed AI risk at length. Bill Gates went out of his way to mention it as a possibility.
Regarding Fermi calculations, the specific argument in the post is wrong for the reasons JGWeissman mentions: Drake Equation style methods are the canonical way of combining independent probabilities of failure to get the correct (multiplicative) penalty for a conjunction (although one should also remember that some important conclusions are disjunctive, and one needs to take all the disjuncts into account). The conjunction fallacy involves failing to multiply penalties in that way.
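As a quick arithmetic illustration of that point, using arbitrary made-up probabilities:

```python
from math import prod

# Arbitrary illustrative probabilities for independent steps that must all
# succeed: a conjunction, Drake Equation style.
step_probs = [0.5, 0.2, 0.1]
p_conjunction = prod(step_probs)  # full multiplicative penalty: 0.01

# A conclusion reachable via several independent disjunctive routes is
# penalized far less, since it fails only if every route fails.
route_probs = [0.5, 0.2, 0.1]
p_disjunction = 1 - prod(1 - p for p in route_probs)  # 1 - 0.5*0.8*0.9 = 0.64

print(p_conjunction, p_disjunction)
```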
With respect to sacrifice/demandingness, that’s pretty orthogonal to efficacy. Whether the most effective cause is contributing to saving local children in your own community, deworming African children, paying for cleaning asteroid-watcher telescopes, or researching AI risks, one could make large or small sacrifices. GiveWell-type organizations can act as “utility monsters,” a never-ending maw to consume all luxuries, but even most efficient-charity enthusiasts are able to deal with that adequately. One can buy fuzzies and utilons and personal indulgences separately, compensating for the social weirdness of more effective causes by retaining more for other pursuits.
Regarding Pascal’s Mugging, that involves numbers much more extreme than show up in the area of x-risk, by many orders of magnitude.
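For a rough sense of that gap (figures chosen only for illustration): Pascal’s Mugging trades on payoffs like 3^^^3, which cannot even be written down, whereas the large and small numbers in x-risk arguments are ordinary powers of ten.

```python
# Knuth up-arrow illustration: 3^^3 is a power tower of three 3s,
# i.e. 3**(3**3) = 3**27, already about 7.6 trillion. 3^^^3 is a power
# tower of 3s that is 3^^3 levels tall, far beyond anything computable
# here, let alone physically meaningful.
def tetration(base: int, height: int) -> int:
    """Compute base^^height, a power tower of the given height."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(tetration(3, 3))  # 7625597484987
# By contrast, the stakes and probabilities that appear in x-risk arguments,
# however extreme they may seem, fit comfortably in scientific notation.
```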
Intuitively, the position “it doesn’t matter how well executed charity X’s activities are; since charity X is an existential risk reduction charity, charity X trumps non-existential-risk charities” is for me a reductio ad absurdum of adopting a conscious, explicit, single-minded focus on existential risk reduction.
Sure, most people are not unitary total utilitarians. I certainly am not one, although one of my motives is to avert “astronomical waste” style losses. But this is a bit of an irrelevant comparison: we have strong negative associations with weak execution, which are pretty well grounded, since one can usually find something trying to do the same task more efficiently. That applies to x-risk as well. The meaningful question is: “considering the best way to reduce existential risk I can find, including investing in the creation or identification of new opportunities and holding resources in hope of finding such in the future, do I prefer it to some charity that reduces existential risk less but displays more indicators of virtue and benefits current people in the near-term in conventional ways more?”
It feels right to me to restrict support to existential risk reduction opportunities that meet some minimal standard of being “sufficiently well-conceived and compelling,” a standard well above simply multiplying the value of ensuring human survival by a crude guess as to the probability that a given intervention will succeed.
Standard cost-benefit analysis of non-Venusian global warming involves (implicit or explicit) projections of climate sensitivity, technological change, economic and population growth, risks of nuclear war and other global catastrophic risks, economic damages of climate change, and more, 90 (!!!) years into the future or even centuries. There are huge areas where subjective estimates play big roles there.
Right, so maybe the reference to global warming was a bad example, because there too one is dealing with vast uncertainties. Note that global warming passes the “test” (3) above.
Nonetheless, one can put reasonable probability distributions on these and conclude that there are low-hanging fruit worth plucking (as part of a big enough global x-risk reduction fund).
I’m curious about this.
Regarding social costs and being an “odd duck”: note that Weitzman, in his widely celebrated article on uncertainty about seemingly implausible hard-to-analyze high-impact climate change, also calls for work on risks of AI and engineered pathogens as some of the handful of serious x-risks demanding attention.
Likewise judge-economist Richard Posner called for preliminary work on AI extinction risk in his book Catastrophe. Philosopher John Leslie in his book on human extinction discussed AI risk at length. Bill Gates went out of his way to mention it as a possibility.
These are pertinent examples but I think it’s still fair to say that interest in reducing AI risks marks one as an odd duck at present and that this gives rise to an equilibrating force against successful work on preventing AI extinction risk (how large I don’t know). I can imagine this changing in the near future.
Regarding Fermi calculations, the specific argument in the post is wrong for the reasons JGWeissman mentions:
I attempted to clarify in the comments.
With respect to sacrifice/demandingness, that’s pretty orthogonal to efficacy
What I was trying to get at here was that, partly for social reasons and partly because the inherent open-endedness and uncertainty spanning many orders of magnitude is conducive to psychological instability, the minimum sacrifice needed to consciously and usefully reduce existential risk may be too great for people to work effectively on reducing it by design.
Regarding Pascal’s Mugging, that involves numbers much more extreme than show up in the area of x-risk, by many orders of magnitude.
This is true for the Eliezer/Bostrom case study, but my intuition is that the same considerations apply. Even if the best estimates of the probability that Christianity is true aren’t presently > 10^(-50), there was some point in the past when, in Europe, the best estimates of the probability that Christianity is true were higher than some of the probabilities that show up in the area of x-risk.
I guess I would say that humans are sufficiently bad at reasoning about small-probability events, when the estimates are not strongly data-driven, that acting on such estimates without somewhat independent arguments for the same action is likely a far-mode failure. I’m particularly concerned about the availability heuristic here.
Sure, most people are not unitary total utilitarians.
I’m unclear on whether my intuition here is coming from a deviation from unitary total utilitarianism or whether my intuition is coming from, e.g. game theoretic considerations that I don’t understand explicitly but which are compatible with unitary total utilitarianism.
we have strong negative associations with weak execution, which are pretty well grounded, since one can usually find something trying to do the same task more efficiently.
Agree.
That applies to x-risk as well.
Except insofar as there’s relatively little interest in x-risk and few organizations involved (again, not making a judgment about particular organizations here).
The meaningful question is: “considering the best way to reduce existential risk I can find, including investing in the creation or identification of new opportunities and holding resources in hope of finding such in the future, do I prefer it to some charity that reduces existential risk less but displays more indicators of virtue and benefits current people in the near-term in conventional ways more?”
My own intuition points me toward favoring a focus on existential risk reduction but I have uncertainty as to whether it’s right (at least for me personally) because:
(i) I’ve found my thinking about existential risk reduction to be unstable, on account of the poor quality of the information available and the multitude of relevant considerations. As Anna says in “Making your explicit reasoning trustworthy”:
Some people find their beliefs changing rapidly back and forth, based for example on the particular lines of argument they’re currently pondering, or the beliefs of those they’ve recently read or talked to. Such fluctuations are generally bad news for both the accuracy of your beliefs and the usefulness of your actions.
(ii) Most of the people who I know are not in favor of near-term overt focus on existential risk reduction. I don’t know whether this is because I have implicit knowledge that they don’t have, because they have implicit knowledge that I don’t have or because they’re motivated to be opposed to such near-term overt focus for reasons unrelated to global welfare. I lean toward thinking that the situation is some combination of the latter two of the three. I’m quite confused about this matter.
Most of the people who I know are not in favor of near-term overt focus on existential risk reduction. I don’t know whether this is because I have implicit knowledge that they don’t have, because they have implicit knowledge that I don’t have or because they’re motivated to be opposed to such near-term overt focus for reasons unrelated to global welfare. I lean toward thinking that the situation is some combination of the latter two of the three.
I think you would normally expect genuine concern about saving the world to be rare among evolved creatures. It is a problem that our ancestors rarely faced. It is also someone else’s problem.
Saving the world may make sense as a superstimulus to the human desire for a grand cause, though. Humans are attracted to such causes for reasons that appear to be primarily to do with social signalling. I think a signalling perspective makes reasonable sense of the variation in the extent to which people are interested in the area.
Does the NTI pass your tests? Why?
Very good question. I haven’t looked closely at NTI; will let you know when I take a closer look.