The primary problem with nuclear war is that it isn’t obvious that humans can get back to our current tech level without the now-consumed resources (primarily fossil fuels) that we used to bootstrap ourselves up in the first place. If that’s an issue, then any event that effectively pushes the tech level much below 1900 is about the same as an existential risk; it will just take longer for something else to finish us off. There’s been some discussion on LW about how possible it is to get back to current tech levels without the non-renewables that bootstrapped us up, and there doesn’t seem to be any real consensus on the matter. This should probably be on the list of things that someone at FHI should spend some time examining.
On the other hand, future civilizations have the benefit of 20th century science unless the catastrophe also manages to destroy all physics textbooks.

Well, the textbooks need not just to survive but to be accessible, and many advanced physics textbooks depend on other math texts and the like. Most of those are widespread, though, so accessibility seems like a reasonable assumption, and it may then be possible to avoid spending as much on experimentation. (Until the 20th century, experimentation was not a massive cost for physics anyway.) A more serious issue is going to be engineering, but again, the assumption that basic textbooks will be accessible seems reasonable. The really serious issue is how much one can take advantage of economies of scale and comparative advantage in order to reach the point where one has enough free resources to do things like build solar panels.
Moreover, there’s a large amount of what may be institutional knowledge, or knowledge that simply isn’t commonly written down in textbooks. For example, China has for decades had trouble making its own high-performance jet engines (1), and during the Cold War the USSR had a lot of trouble making high-performance microchips. In both cases there were likely other problems at play in addition to technical know-how, but this suggests that for many technologies there may be more serious issues than just basic textbooks.
Yes, this is the general reason why global catastrophic risks (including global warming, global pandemics and so on) shade into existential risks.
In fact, my personal view of x-risk is that this is the most likely and worrying failure mode: some catastrophe cripples human civilization and technology, we never really recover, and then, much further in the future, we die off from some natural extinction event.
“We’re not sure if we could get back to our current tech level afterwards” isn’t an xrisk.
It’s also purely speculative. The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren’t cost-competitive. Plus there’s wind, geothermal, hydroelectric, solar and nuclear. We’re a long, long way away from the “all non-renewables are exhausted” scenario.
“We’re not sure if we could get back to our current tech level afterwards” isn’t an xrisk.
Yes it is. Right now, we can’t deal with a variety of basic x-risks that require large-scale technology to counter. Big asteroids hit every hundred million years or so, and many other disasters can easily wipe out a technologically non-advanced species. If our tech level is reduced to even the late 19th century and stays static, then civilization is simply dead and doesn’t know it until something comes along to finish it off.
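To put a rough number on that, here’s a minimal back-of-the-envelope sketch. It takes the “every hundred million years or so” figure above at face value and assumes impacts arrive as a simple Poisson process; both the rate and the model are assumptions for illustration only:

```python
import math

# Sketch: if civilization-ending impacts arrive as a Poisson process with
# an assumed mean rate of 1 per 100 million years, the chance a permanently
# static civilization sees at least one over T years is 1 - exp(-T / 1e8).
RATE_YEARS = 100_000_000  # assumed mean time between big impacts

for t in [1e6, 1e7, 1e8, 5e8]:
    p_hit = 1 - math.exp(-t / RATE_YEARS)
    print(f"Over {t:.0e} years: P(at least one big impact) ~ {p_hit:.1%}")
```

Even at that low rate, a civilization that never regains the ability to deflect asteroids is just waiting out an exponential clock.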
The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren’t cost-competitive.
The problem is exactly that: they aren’t as cost-competitive, and they have much lower EROEI (energy returned on energy invested). That makes them much less useful, and it isn’t even clear whether they can be used to climb back to our current tech level. For example, even getting an EROEI above 1 from oil shale requires a fair bit of advanced technology. Similarly, most of the remaining coal is much deeper than classical coal deposits (we’ve consumed most of the coal that was easy to get to).
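Here’s a toy illustration of why low EROEI bites (the numbers are illustrative, not measured values): since EROEI is energy returned divided by energy invested, the fraction of gross output left over for the rest of society is 1 − 1/EROEI, which collapses as EROEI approaches 1.

```python
# Toy illustration of why low EROEI matters (illustrative numbers only).
# EROEI = energy returned / energy invested, so the fraction of gross
# output left over for the rest of society is 1 - 1/EROEI.
for eroei in [50, 10, 3, 1.5, 1.1]:
    surplus = 1 - 1 / eroei
    print(f"EROEI {eroei:>4}: {surplus:.0%} of gross output is usable surplus")
```

At EROEI 50 nearly everything extracted powers the rest of the economy; at 1.1 over 90% of it goes straight back into extraction, leaving almost nothing to bootstrap with.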
Plus there’s wind, geothermal, hydroelectric, solar and nuclear. We’re a long, long way away from the “all non-renewables are exhausted” scenario.
All of these either require high tech levels to start with or have other problems. Geothermal only works in limited locations. Solar requires extremely high tech levels to even have a positive energy return. Nuclear power has similar issues, along with requiring massive processing infrastructure before economies of scale kick in. Both solar and wind have terrible trouble providing consistent power, which is important for many uses such as manufacturing; efficient batteries are one answer to that, but they also require advanced tech. It may help to keep in mind that even with the advantages we had the first time around, the vast majority of early electric companies simply failed. There’s an excellent book which discusses many of these issues, Maggie Koerth-Baker’s “Before the Lights Go Out.” It focuses on the current American electric grid, but covers a lot of this in that context.
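As a toy sketch of the intermittency problem (all numbers here are made up for illustration, not real engineering figures): try to serve a flat 1 kW load from solar that only produces during a 12-hour day, and see how much storage is needed to ride through the night.

```python
import math

# Toy model of intermittency (illustrative numbers only): a flat 1 kW load
# served by solar that produces only during a 12-hour "day", as a half-sine.
# The panel is scaled so daily energy balances, then we track the battery
# level needed to ride through the night.
HOURS = 24
demand = [1.0] * HOURS                      # constant 1 kW demand
raw = [math.sin(math.pi * (h - 6) / 12) if 6 <= h < 18 else 0.0
       for h in range(HOURS)]               # half-sine output, hours 6..18
scale = sum(demand) / sum(raw)              # match daily energy totals
solar = [scale * r for r in raw]

level, min_level = 0.0, 0.0
for h in range(HOURS):
    level += solar[h] - demand[h]           # battery charges on surplus
    min_level = min(min_level, level)

print(f"Peak panel output: {max(solar):.1f} kW (vs. 1 kW average demand)")
print(f"Battery needed to cover the shortfall: {-min_level:.1f} kWh")
```

Even in this idealized case you need panels rated at several times the average load plus hours of storage; a society without advanced manufacturing gets neither cheaply.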
Now you’re just changing the definition to try to win an argument. An xrisk is typically defined as one that, in and of itself, would result in the complete extinction of a species. If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we’d be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.
I’m not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you’ve bought into a line of political propaganda that has little relation to reality—there’s a large body of evidence that we’re nowhere near running out of fossil fuels, and the energy industry experts whose livelihoods rely on making correct predictions mostly seem to be lined up on the side of expecting abundance rather than scarcity. I don’t expect you to agree, but anyone who’s curious should be able to find both sides of this argument with a little googling.
Now you’re just changing the definition to try to win an argument.
Nick Bostrom, who seems to be one of the major thinkers about existential risk, appears to think that this justifies being discussed in the context of existential risk: http://www.nickbostrom.com/existential/risks.html . In section 5.1 of that link he writes:
The natural resources needed to sustain a high-tech civilization are being used up. If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)
Moving on, you wrote:
If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we’d be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.
So I agree we need to be careful to keep focused on existential risks as proximate causes. I had a slightly annoying discussion with someone earlier today who was arguing that “religious fanaticism” should constitute an existential risk. But in some contexts the line certainly is blurry. If, for example, a nuclear war wiped out all but 10 humans, and then they died from lack of food, I suspect you’d say that the existential risk that got them was nuclear war, not famine. In this context, the question has to be asked: if something doesn’t completely wipe out humanity, but leaves it limping along to the point where things that wouldn’t normally be existential risks could easily finish it off, should that be classified as an existential risk? Even if one doesn’t want to call it “existential risk”, it seems clear that such scenarios share the most relevant features of existential risks (e.g. they are relevant to understanding the Great Filter, likely understudied and underfunded, would still result in a tremendous loss of human value, would prevent us from traveling out among the stars, and will make us feel really stupid if we fail to prevent one and it happens).
I’m not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you’ve bought into a line of political propaganda that has little relation to reality—there’s a large body of evidence that we’re nowhere near running out of fossil fuels,
This and the rest of that paragraph suggest that you didn’t read my earlier comment that closely. Nothing in it said that we are running out of fossil fuels, or even that we are running out of fuels with EROEI above 1. There are a lot of fossil fuels left. The issue in this context is that the remaining fossil fuels take technology to harness efficiently, and while we generally have that technology, a society trying to come back from a drastic collapse may not. That’s a very different worry from any claim about us running out of fossil fuels in the near or even indefinite future.