Good question. My intended meaning was the second of the meanings that you listed: “the probability of a positive singularity is 0.1%-1% lower than the probability of a positive singularity given no nuclear war.” I would be interested to hear any thoughts that you have about these things.
I can’t think of a mechanism through which recovery would become long-term impossible, but maybe there is one. People taking fewer safety precautions in a destabilized society does sound plausible. There are probably a number of other, similarly important effects of nuclear war on existential risk to take into account. Different technologies (IA, uploading, AGI, Friendliness philosophy) have different motivations behind them that would probably be differently affected by a nuclear war. Memes would have more time to come closer to some sort of equilibrium in various relevant groups. To the extent that there are nontrivial existential risks not depending on future technology, they would have more time to strike. Catastrophes would be more psychologically salient, or maybe the idea of future nuclear war would overshadow other kinds of catastrophe. Power would be more in the hands of those who weren’t involved in the nuclear war.
In any case, the effect of nuclear war on existential risk seems like a nontrivial question that we’d have to have a better idea about before we could decide that resources are better spent on nuclear war prevention than something else. To make things more complicated, it’s possible that preventing nuclear war would on average decrease existential risk but that a specific measure to prevent nuclear war would increase existential risk (or vice versa), because the specific kinds of nuclear war that the measure prevents are atypical.
The number and strength of reasons we see one way or the other may depend more on the time people have spent searching specifically for reasons for or against than on what reasons actually exist. The main reason to expect an imbalance there is that nuclear war causes huge amounts of death and suffering, so people will be motivated to rationalize that it is also a bad thing according to this mostly independent criterion of existential risk minimization; alternatively, people may overcorrect for that effect or have other biases toward thinking nuclear war would prevent existential risk. To the extent that our misgivings about failing to do enough to stop nuclear war stem from worries that existential risk reduction may not outweigh huge present death and suffering, we’d do better to acknowledge those worries than to rationalize ourselves into thinking there’s never a conflict.
Without knowing anything about specific risk mitigation proposals, I would guess that there’s even more expected return from looking into weird, hard-to-think-about technologies like MNT than from looking into nuclear war, because less of the low-hanging fruit there would already have been picked. But more specific information could easily overrule that presumption, and some people within SingInst seem to have pretty high estimates of the return from efforts to prevent nuclear war, so who knows.
> I can’t think of a mechanism through which recovery would become long-term impossible, but maybe there is one.
I have little idea of how likely it is, but a nuclear winter could seriously hamper human mobility.
Widespread radiation would further hamper human mobility.
Redeveloping preexisting infrastructure could require natural resources of an order of magnitude comparable to those consumed in building the infrastructure that we have today. Right now we have efficient markets to help out with natural resource shortages, but upsetting the trajectory of our development could exacerbate the problem.
Note that a probability of 0.1% isn’t so large (even taking into account all of the other things that could interfere with a positive singularity).
> Different technologies (IA, uploading, AGI, Friendliness philosophy) have different motivations behind them that would probably be differently affected by a nuclear war. Memes would have more time to come closer to some sort of equilibrium in various relevant groups.
Reasoning productively about the expected value of these things presently seems to me to be too difficult (but I’m open to changing my mind if you have ideas).
> To the extent that there are nontrivial existential risks not depending on future technology, they would have more time to strike.
With the exception of natural resource shortage (which I mentioned above), I doubt that this is within an order of magnitude of the significance of other relevant factors, provided that we’re talking about a delay on the order of fewer than 100 years (maybe similarly for a delay of 1,000 years; I would have to think about it).
> Catastrophes would be more psychologically salient, or maybe the idea of future nuclear war would overshadow other kinds of catastrophe.
Similarly, I doubt that this would be game-changing.
> Power would be more in the hands of those who weren’t involved in the nuclear war.
These seem worthy of further contemplation: is the development of future technologies more likely to take place in Australia than in the current major powers, etc.?
> In any case, the effect of nuclear war on existential risk seems like a nontrivial question that we’d have to have a better idea about before we could decide that resources are better spent on nuclear war prevention than something else.
This seems reasonable. As I mentioned, I presently attach high expected x-risk reduction to nuclear war prevention, but my confidence is sufficiently unstable that the value of devoting resources to gathering more information outweighs the value of donating to nuclear war reduction charities.
> To make things more complicated, it’s possible that preventing nuclear war would on average decrease existential risk but that a specific measure to prevent nuclear war would increase existential risk (or vice versa), because the specific kinds of nuclear war that the measure prevents are atypical.
Yes. In the course of researching nuclear threat reduction charities I hope to learn what options are on the table.
> Without knowing anything about specific risk mitigation proposals, I would guess that there’s even more expected return from looking into weird, hard-to-think-about technologies like MNT than from looking into nuclear war, because less of the low-hanging fruit there would already have been picked.
On the other hand, there may not be low-hanging fruit attached to thinking about weird, hard-to-think-about technologies like MNT. I do, however, plan on looking into the Foresight Institute.
Thanks for clarifying and I hope your research goes well. If I’m not mistaken, you can see the 0.1% calculation as the product of three things: the probability nuclear war happens, the probability that if it happens it’s such that it prevents any future positive singularities that otherwise would have happened, and the probability a positive singularity would otherwise have happened. If the first and third probabilities are, say, 1⁄5 and 1⁄4, then the answer will be 1⁄20 of the middle probability, so your 0.1%-1% answer corresponds to a 2%-20% chance that if a nuclear war happens then it’s such that it prevents any future positive singularities that would otherwise have happened. Certainly the lower end and maybe the upper end of that range seem like they could plausibly end up being close to our best estimate. But note that you have to look at the net effect after taking into account effects in both directions; I would still put substantial probability on this estimate ending up effectively negative, also. (Probabilities can’t really go negative, so the interpretation I gave above doesn’t really work, but I hope you can see what I mean.)
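A quick back-of-the-envelope sketch of the decomposition described above, using the illustrative 1⁄5 and 1⁄4 figures from the comment (these are not estimates of mine, just the worked example):

```python
# Decomposition: the drop in P(positive singularity) is the product of
#   P(nuclear war)
#   * P(war blocks an otherwise-reachable singularity | war)  [the "middle" term]
#   * P(positive singularity | no nuclear war)
p_war = 1 / 5                    # illustrative value from the comment
p_singularity_otherwise = 1 / 4  # illustrative value from the comment
outer = p_war * p_singularity_otherwise  # = 1/20

# Solve for the implied middle probability at each end of the 0.1%-1% range.
for drop in (0.001, 0.01):
    implied_middle = drop / outer
    print(f"drop of {drop:.1%} implies middle probability of {implied_middle:.0%}")
```

Running this recovers the 2%-20% range stated above.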
> note that you have to look at the net effect after taking into account effects in both directions; I would still put substantial probability on this estimate ending up effectively negative, also.
I agree and should have been more explicit in taking this into account. However, note that if one assigns a 2:1 odds ratio for (0.1%-1% decrease in x-risk)/(same size increase in x-risk), then the expected value of preventing nuclear war doesn’t drop below 1⁄3 of what it would be if there weren’t the possibility of nuclear war increasing x-risk: still on the same rough order of magnitude.
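The 2:1 odds-ratio point is easy to verify numerically; here d is a hypothetical effect size picked from the 0.1%-1% range purely for illustration:

```python
# If a nuclear war is twice as likely to decrease x-risk by some amount d
# as to increase it by d, the expected decrease is (2/3)*d - (1/3)*d = d/3,
# i.e. one third of the no-downside value, on the same order of magnitude.
d = 0.005           # hypothetical effect size (0.5%), for illustration only
p_decrease = 2 / 3  # 2:1 odds in favor of a decrease
p_increase = 1 / 3
expected_decrease = p_decrease * d - p_increase * d
print(expected_decrease == d / 3)  # True: exactly one third of d
```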
Thanks for your thoughtful comment.