Thanks for the clarification on the estimate. Unhappy as it makes me to say it, I suspect that nuclear war or other non-existential catastrophe would overall reduce existential risk, because we’d have more time to think about existential risk mitigation while we rebuild society. However I suspect that trying to bring nuclear war about as a result of this reasoning is not a winning strategy.
Building society the first time around, we were able to take advantage of various useful natural resources such as relatively plentiful coal and (later) oil. After a nuclear war or some other civilization-wrecking catastrophe, it might be Very Difficult Indeed to rebuild without those resources at our disposal. It’s difficult enough even now, with everything basically still working nicely, to see how to wean ourselves off fossil fuels, as for various reasons many people think we should do. Now imagine trying to build a nuclear power industry or highly efficient solar cells with our existing energy infrastructure in ruins.
So it looks to me as if (1) our best prospects for long-term x-risk avoidance all involve advanced technology (space travel, AI, nanothingies, …) and (2) a major not-immediately-existential catastrophe could seriously jeopardize our prospects of ever developing such technology, so (3) such a catastrophe should be regarded as a big increase in x-risk.
I’ve heard arguments for and against “it might turn out to be too hard the second time around”. I think overall that it’s more likely than not that we would eventually succeed in rebuilding a technological society, but that’s the strongest I can put it; i.e. it’s very plausible that we would never do so.
If enough of our existing thinking survives, the thinking time that rebuilding civilization would give us might move things a little in our favour WRT AI++, MNT etc. I don’t know which side does better on this tradeoff. However I seriously doubt that trying to bring about the collapse of civilization is the most efficient way to mitigate existential risk.
Also, and I hate to be this selfish about it but there it is, if civilization ends I definitely die either way, and I’d kind of prefer not to.
Building society the first time around, we were able to take advantage of various useful natural resources such as relatively plentiful coal and (later) oil. After a nuclear war or some other civilization-wrecking catastrophe, it might be Very Difficult Indeed to rebuild without those resources at our disposal.
We have a huge mountain of coal, and will do for the next hundred years or so. Doing without doesn’t seem very likely.
How easily accessible is that coal to people whose civilization has collapsed, taking most of the industrial machinery with it? (That’s a genuine question. Naively, it seems like the easiest-to-get-at bits would have been mined out first, leaving the harder bits. How much harder they are, and how big a problem that would be, I have no idea.)
Unhappy as it makes me to say it, I suspect that nuclear war or other non-existential catastrophe would overall reduce existential risk, because we’d have more time to think about existential risk mitigation while we rebuild society. However I suspect that trying to bring nuclear war about as a result of this reasoning is not a winning strategy.
Technical challenges? Difficulty in coordinating? Are there other candidate setbacks?
because we’d have more time to think about existential risk mitigation while we rebuild society
It may be highly unproductive to think about advanced future technologies in very much detail before there’s a credible research program on the table, because the search tree involved spans dozens of orders of magnitude. I presently believe this to be the case.
I do think that we can get better at some relevant things at present (learning how to make predictions about probable government behavior that are as accurate as realistically possible, etc.), and that, all else being equal, we could benefit from more time thinking about these things rather than less.
However, it’s not clear to me that the time so gained would outweigh the presumed loss in clear thinking after a nuclear war; I currently believe that the loss would be substantially greater than the gain.
As steven0461 mentioned, “some people within SingInst seem to have pretty high estimates of the return from efforts to prevent nuclear war.” I haven’t had a chance to talk about this with them in detail, but it updates me in the direction of attaching a high expected value to nuclear war risk reduction.
My positions on these points are very much subject to change with incoming information.
because we’d have more time to think about existential risk mitigation while we rebuild society
A more likely result: the religious crazies will take over, and they either don’t think existential risk can exist (because God would prevent it) or they think preventing existential risk would be blasphemy (because God ought to be allowed to destroy us). Or they even actively work to make it happen and bring about God’s judgment.
And then humanity dies, because both denying and embracing existential risk cause it to come nearer.
How easily accessible is that coal to people whose civilization has collapsed, taking most of the industrial machinery with it? (That’s a genuine question. Naively, it seems like the easiest-to-get-at bits would have been mined out first, leaving the harder bits. How much harder they are, and how big a problem that would be, I have no idea.)
It’s probably fair to say that some of the low-hanging fossil fuel fruit has been taken.
It may be highly unproductive to think about advanced future technologies in very much detail before there’s a credible research program on the table, because the search tree involved spans dozens of orders of magnitude. I presently believe this to be the case.
How much detail is too much?