I’d be very appreciative to hear if you know of someone doing more.
Over the coming months I’m going to be doing an investigation of the non-profits affiliated with the Nuclear Threat Initiative with a view toward finding x-risk reduction charities other than SIAI & FHI. I’ll report back what I learn but it may be a while.
I’m under the impression that nuclear war doesn’t pose an existential risk. Do you disagree? If so, I probably ought to make a discussion post on the subject so we don’t take this one too far off topic.
My impression is that the risk of immediate extinction due to nuclear war is very small, but that a nuclear war could cripple civilization to the point of not being able to recover enough to effect a positive singularity; it would also plausibly increase other x-risks—intuitively, nuclear war would destabilize society, and people are less likely to take safety precautions in an unstable society when developing advanced technologies than they otherwise would be. I’d give a subjective estimate of 0.1%-1% for nuclear war preventing a positive singularity.
Good question. My intended meaning was the second of the meanings you listed: “the probability of a positive singularity is 0.1%-1% lower than the probability of a positive singularity given no nuclear war.” I’d be interested to hear any thoughts you have about these things.
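For concreteness, that second reading can be sketched numerically with the law of total probability. All the figures below are made-up assumptions for illustration, not estimates from this thread:

```python
# Sketch of the intended reading: P(PS) is 0.1%-1% lower than P(PS | no NW).
# All numbers here are illustrative assumptions.
p_nw = 0.2          # assumed probability of nuclear war
p_ps_no_nw = 0.25   # assumed P(positive singularity | no nuclear war)
p_ps_nw = 0.23      # assumed P(positive singularity | nuclear war)

# Law of total probability:
p_ps = p_ps_no_nw * (1 - p_nw) + p_ps_nw * p_nw

# The gap the estimate refers to: P(PS | no NW) - P(PS),
# which equals p_nw * (p_ps_no_nw - p_ps_nw).
gap = p_ps_no_nw - p_ps
print(f"P(PS) = {p_ps:.3f}, gap = {gap:.3%}")  # gap of 0.4%, inside 0.1%-1%
```

The point of the sketch is that the quoted 0.1%-1% is a marginal gap: it already folds in the probability that a nuclear war happens at all.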
I can’t think of a mechanism through which recovery would become long-term impossible, but maybe there is one. People taking fewer safety precautions in a destabilized society does sound plausible. There are probably a number of other, similarly important effects of nuclear war on existential risk to take into account. Different technologies (IA, uploading, AGI, Friendliness philosophy) have different motivations behind them that would probably be differently affected by a nuclear war. Memes would have more time to come closer to some sort of equilibrium in various relevant groups. To the extent that there are nontrivial existential risks not depending on future technology, they would have more time to strike. Catastrophes would be more psychologically salient, or maybe the idea of future nuclear war would overshadow other kinds of catastrophe. Power would be more in the hands of those who weren’t involved in the nuclear war.
In any case, the effect of nuclear war on existential risk seems like a nontrivial question that we’d have to have a better idea about before we could decide that resources are better spent on nuclear war prevention than something else. To make things more complicated, it’s possible that preventing nuclear war would on average decrease existential risk but that a specific measure to prevent nuclear war would increase existential risk (or vice versa), because the specific kinds of nuclear war that the measure prevents are atypical.
The number and strength of reasons we see one way or the other may depend more on time people have spent searching specifically for reasons for/against than on what reasons exist. The main reason to expect an imbalance there is that nuclear war causes huge amounts of death and suffering, and so people will be motivated to rationalize that it will also be a bad thing according to this mostly independent criterion of existential risk minimization; or people may overcorrect for that effect or have other biases for thinking nuclear war would prevent existential risk. To the extent that our misgivings about failing to do enough to stop nuclear war have to do with worries that existential risk reduction may not outweigh huge present death and suffering, we’d do better to acknowledge those worries than to rationalize ourselves into thinking there’s never a conflict.
Without knowing anything about specific risk mitigation proposals, I would guess that there’s even more expected return from looking into weird, hard-to-think-about technologies like MNT than from looking into nuclear war, because less of the low-hanging fruit there would already have been picked. But more specific information could easily overrule that presumption, and some people within SingInst seem to have pretty high estimates of the return from efforts to prevent nuclear war, so who knows.
I can’t think of a mechanism through which recovery would become long-term impossible, but maybe there is one.
I have little idea of how likely it is, but a nuclear winter could seriously hamper human mobility.
Widespread radiation would further hamper human mobility.
Redeveloping preexisting infrastructure could require natural resources of an order of magnitude comparable to the infrastructure that we have today. Right now we have the efficient-market hypothesis to help out with natural-resource shortages, but upsetting the trajectory of our development could exacerbate the problem.
Note that a probability of 0.1% isn’t so large (even taking into account all of the other things that could interfere with a positive singularity).
Different technologies (IA, uploading, AGI, Friendliness philosophy) have different motivations behind them that would probably be differently affected by a nuclear war. Memes would have more time to come closer to some sort of equilibrium in various relevant groups.
Reasoning productively about the expected value of these things presently seems to me to be too difficult (but I’m open to changing my mind if you have ideas).
To the extent that there are nontrivial existential risks not depending on future technology, they would have more time to strike.
With the exception of natural-resource shortage (which I mentioned above), I doubt that this is within an order of magnitude of the significance of the other relevant factors, provided that we’re talking about a delay on the order of fewer than 100 years (maybe similarly for a delay of 1,000 years; I would have to think about it).
Catastrophes would be more psychologically salient, or maybe the idea of future nuclear war would overshadow other kinds of catastrophe.
Similarly, I doubt that this would be game-changing.
Power would be more in the hands of those who weren’t involved in the nuclear war.
These seem worthy of further contemplation—is the development of future technologies more likely to take place in Australia than in the current major powers, etc.?
In any case, the effect of nuclear war on existential risk seems like a nontrivial question that we’d have to have a better idea about before we could decide that resources are better spent on nuclear war prevention than something else.
This seems reasonable. As I mentioned, I presently attach high expected x-risk reduction to nuclear war prevention, but my confidence is sufficiently unstable at present that the value of devoting resources to gathering more information outweighs the value of donating to nuclear war reduction charities.
To make things more complicated, it’s possible that preventing nuclear war would on average decrease existential risk but that a specific measure to prevent nuclear war would increase existential risk (or vice versa), because the specific kinds of nuclear war that the measure prevents are atypical.
Yes. In the course of researching nuclear threat reduction charities I hope to learn what options are on the table.
Without knowing anything about specific risk mitigation proposals, I would guess that there’s even more expected return from looking into weird, hard-to-think-about technologies like MNT than from looking into nuclear war, because less of the low-hanging fruit there would already have been picked.
On the other hand, there may not be low-hanging fruit attached to thinking about weird, hard-to-think-about technologies like MNT. I do, however, plan on looking into the Foresight Institute.
Thanks for clarifying and I hope your research goes well. If I’m not mistaken, you can see the 0.1% calculation as the product of three things: the probability nuclear war happens, the probability that if it happens it’s such that it prevents any future positive singularities that otherwise would have happened, and the probability a positive singularity would otherwise have happened. If the first and third probabilities are, say, 1⁄5 and 1⁄4, then the answer will be 1⁄20 of the middle probability, so your 0.1%-1% answer corresponds to a 2%-20% chance that if a nuclear war happens then it’s such that it prevents any future positive singularities that would otherwise have happened. Certainly the lower end and maybe the upper end of that range seem like they could plausibly end up being close to our best estimate. But note that you have to look at the net effect after taking into account effects in both directions; I would still put substantial probability on this estimate ending up effectively negative, also. (Probabilities can’t really go negative, so the interpretation I gave above doesn’t really work, but I hope you can see what I mean.)
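The arithmetic in that decomposition is easy to check directly; the 1⁄5 and 1⁄4 below are the example figures from the comment above, and the loop solves for the middle factor implied by each end of the 0.1%-1% range:

```python
# Decomposition from the comment:
#   gap = P(NW) * P(NW prevents an otherwise-positive singularity | NW) * P(PS otherwise)
p_nw = 1 / 5            # example probability that nuclear war happens
p_ps_otherwise = 1 / 4  # example probability a positive singularity happens otherwise

# Solve for the middle factor implied by a 0.1%-1% overall gap:
for gap in (0.001, 0.01):
    p_prevents = gap / (p_nw * p_ps_otherwise)
    print(f"gap {gap:.1%} -> implied P(prevents | NW) = {p_prevents:.0%}")
# -> 2% and 20%, matching the range stated in the comment
```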
note that you have to look at the net effect after taking into account effects in both directions; I would still put substantial probability on this estimate ending up effectively negative, also.
I agree and should have been more explicit in taking this into account. However, note that if one assigns a 2:1 odds ratio for (0.1%-1% decrease in x-risk)/(same size increase in x-risk) then the expected value of preventing nuclear war doesn’t drop below 1⁄3 of what it would be if there wasn’t the possibility of nuclear war increasing x-risk: still on the same rough order of magnitude.
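The 1⁄3 figure follows from a two-point expected-value calculation; the effect size `d` below is an assumed stand-in for the 0.1%-1% estimate:

```python
# If preventing nuclear war decreases x-risk by d with probability 2/3
# (the 2:1 odds ratio above) and increases it by the same d with
# probability 1/3, the expected reduction is still d/3.
d = 0.005  # assumed effect size (0.5%, the rough middle of 0.1%-1%)

expected_reduction = (2 / 3) * d + (1 / 3) * (-d)
print(expected_reduction / d)  # one third of the one-sided value
```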
Thanks for the clarification on the estimate. Unhappy as it makes me to say it, I suspect that nuclear war or other non-existential catastrophe would overall reduce existential risk, because we’d have more time to think about existential risk mitigation while we rebuild society. However I suspect that trying to bring nuclear war about as a result of this reasoning is not a winning strategy.
Building society the first time around, we were able to take advantage of various useful natural resources such as relatively plentiful coal and (later) oil. After a nuclear war or some other civilization-wrecking catastrophe, it might be Very Difficult Indeed to rebuild without those resources at our disposal. It’s difficult enough even now, with everything basically still working nicely, to see how to wean ourselves off fossil fuels, as for various reasons many people think we should do. Now imagine trying to build a nuclear power industry or highly efficient solar cells with our existing energy infrastructure in ruins.
So it looks to me as if (1) our best prospects for long-term x-risk avoidance all involve advanced technology (space travel, AI, nanothingies, …), and (2) a major not-immediately-existential catastrophe could seriously jeopardize our prospects of ever developing such technology, so (3) such a catastrophe should be regarded as a big increase in x-risk.
I’ve heard arguments for and against “it might turn out to be too hard the second time around”. I think overall that it’s more likely than not that we would eventually succeed in rebuilding a technological society, but that’s the strongest I could put it, i.e. it’s very plausible that we would never do so.
If enough of our existing thinking survives, the thinking time that rebuilding civilization would give us might move things a little in our favour WRT AI++, MNT etc. I don’t know which side does better on this tradeoff. However I seriously doubt that trying to bring about the collapse of civilization is the most efficient way to mitigate existential risk.
Also, and I hate to be this selfish about it but there it is, if civilization ends I definitely die either way, and I’d kind of prefer not to.
Building society the first time around, we were able to take advantage of various useful natural resources such as relatively plentiful coal and (later) oil. After a nuclear war or some other civilization-wrecking catastrophe, it might be Very Difficult Indeed to rebuild without those resources at our disposal.
We have a huge mountain of coal, and will do for the next hundred years or so. Having to do without doesn’t seem very likely.
How easily accessible is that coal to people whose civilization has collapsed, taking most of the industrial machinery with it? (That’s a genuine question. Naively, it seems like the easiest-to-get-at bits would have been mined out first, leaving the harder bits. How much harder they are, and how big a problem that would be, I have no idea.)
Unhappy as it makes me to say it, I suspect that nuclear war or other non-existential catastrophe would overall reduce existential risk, because we’d have more time to think about existential risk mitigation while we rebuild society. However I suspect that trying to bring nuclear war about as a result of this reasoning is not a winning strategy.
Technical challenges? Difficulty in coordinating? Are there other candidate setbacks?
because we’d have more time to think about existential risk mitigation while we rebuild society
It may be highly unproductive to think about advanced future technologies in very much detail before there’s a credible research program on the table, on account of the search tree involving dozens of orders of magnitude. I presently believe this to be the case.
I do think that we can get better at some relevant things at present (learning how to make predictions about probable government behaviors that are as accurate as realistically possible, etc.), and that, all else being equal, we could benefit from more time thinking about these things rather than less.
However, it’s not clear to me that the time so gained would outweigh a presumed loss in clear thinking post-nuclear-war, and I currently believe that the loss would be substantially greater than the gain.
As steven0461 mentioned, “some people within SingInst seem to have pretty high estimates of the return from efforts to prevent nuclear war.” I haven’t had a chance to talk about this with them in detail, but it updates me in the direction of attaching high expected x-risk reduction to nuclear war risk reduction.
My positions on these points are very much subject to change with incoming information.
It may be highly unproductive to think about advanced future technologies in very much detail before there’s a credible research program on the table, on account of the search tree involving dozens of orders of magnitude. I presently believe this to be the case.
because we’d have more time to think about existential risk mitigation while we rebuild society
A more likely result: the religious crazies will take over, and they either don’t think existential risk can exist (because God would prevent it) or they think preventing existential risk would be blasphemy (because God ought to be allowed to destroy us). Or they even actively work to make it happen and bring about God’s judgment.
And then humanity dies, because both denying and embracing existential risk causes it to come nearer.
How easily accessible is that coal to people whose civilization has collapsed, taking most of the industrial machinery with it? (That’s a genuine question. Naively, it seems like the easiest-to-get-at bits would have been mined out first, leaving the harder bits. How much harder they are, and how big a problem that would be, I have no idea.)
It’s probably fair to say that some of the low-hanging fossil-fuel fruit has been picked.
It may be highly unproductive to think about advanced future technologies in very much detail before there’s a credible research program on the table, on account of the search tree involving dozens of orders of magnitude. I presently believe this to be the case.
How much detail is too much?