“We’re not sure if we could get back to our current tech level afterwards” isn’t an xrisk.
Yes it is. Right now, we can’t deal with a variety of basic x-risks that require large-scale technology to address. Big asteroids hit every hundred million years or so, and many other disasters can easily wipe out a technologically non-advanced species. If our tech level is reduced to even a late 19th century level and stays static, then civilization is simply dead and doesn’t know it until something comes along to finish it off.
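As a rough back-of-envelope illustration of why a static tech level is a waiting game: taking the “every hundred million years or so” figure above purely as an order-of-magnitude rate (not a precise estimate), the cumulative chance of at least one big impact over a long stagnation window looks like this:

```python
import math

# Assumed rate: roughly one civilization-ending impact per 100 million years.
# This is just the order-of-magnitude figure from the comment above.
rate_per_year = 1 / 100_000_000

for window_years in (1_000_000, 10_000_000, 100_000_000, 1_000_000_000):
    # Poisson model: P(at least one impact) = 1 - exp(-rate * time)
    p_hit = 1 - math.exp(-rate_per_year * window_years)
    print(f"{window_years:>13,} years of stagnation -> "
          f"{p_hit:.1%} chance of at least one big impact")
```

The exact numbers don’t matter much; the point is that over geological timescales the probability grinds toward certainty, and only a civilization that keeps advancing gets the chance to do something about it.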
The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren’t cost-competitive.
The problem is exactly that: they aren’t cost-competitive, and have much lower EROEI. That makes them much less useful, and it isn’t even clear they could be used to climb back to our current tech level. For example, even getting an EROEI above 1 from oil shale requires a fair bit of advanced technology. Similarly, most of the remaining coal lies much deeper than the coal mined historically (we’ve consumed most of the coal that was easy to get to).
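To make the EROEI point concrete, here’s a toy comparison. The EROEI figures and output numbers below are hypothetical, chosen only to illustrate how quickly low-EROEI sources stop leaving a surplus for the rest of society:

```python
def net_energy(gross_output, eroei):
    """Energy left over for society after paying the extraction cost.

    EROEI = energy delivered / energy invested, so the energy invested
    is gross_output / eroei, and the surplus is what remains.
    """
    invested = gross_output / eroei
    return gross_output - invested

# Hypothetical sources, 100 units of gross output each (illustrative numbers only).
for name, eroei in [("easy 20th-century oil", 30), ("deep coal", 5),
                    ("oil shale", 1.5), ("break-even source", 1.0)]:
    surplus = net_energy(100, eroei)
    print(f"{name:>22}: EROEI {eroei:>4} -> {surplus:5.1f} units of surplus energy")
```

At EROEI 30 nearly all the output is surplus; at 1.5 a third of it is; at 1.0 the source runs society’s extraction machinery and nothing else. A rebuilding civilization mostly has access to the bottom of that list.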
Plus there’s wind, geothermal, hydroelectric, solar and nuclear. We’re a long, long way away from the “all non-renewables are exhausted” scenario.
All of these either require high tech levels to start with or have other problems. Geothermal only works in limited locations. Solar requires extremely high tech levels just to achieve a positive energy return. Nuclear power has similar issues, along with needing massive processing infrastructure before economies of scale kick in. Both solar and wind have terrible trouble providing consistent power, which is important for many uses such as manufacturing; a rough sketch of that problem is below. Efficient batteries are one answer, but they too require advanced tech. It may help to keep in mind that even with the advantages we had the first time around, the vast majority of early electric companies simply failed. There’s an excellent book which discusses many of these issues, Maggie Koerth-Baker’s “Before the Lights Go Out.” It focuses more on the current American electric grid, but in that context covers many of them.
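Here is a minimal sketch of the intermittency problem, with entirely made-up numbers, just to show where the storage requirement (and hence the “advanced batteries” dependency) comes from:

```python
# Toy illustration: a plant needing steady power vs. a source that only
# produces during part of the day. All numbers are hypothetical.

demand_kw = 100          # constant load a small factory might draw
solar_hours = 5          # hours per day the hypothetical source actually produces
solar_output_kw = 480    # output while producing, sized to cover the daily total

daily_need = demand_kw * 24                       # 2,400 kWh needed each day
daily_production = solar_output_kw * solar_hours  # 2,400 kWh produced each day

# Even with production matching demand on paper, the other 19 hours must be
# bridged by storage, which is where the advanced-tech requirement returns.
storage_needed = demand_kw * (24 - solar_hours)
print(f"daily need:                {daily_need} kWh")
print(f"daily production:          {daily_production} kWh")
print(f"storage to bridge the gap: {storage_needed} kWh per day")
```

Sizing the source to cover total daily demand doesn’t solve anything by itself; the mismatch in timing still has to be stored or backed up somehow.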
Now you’re just changing the definition to try to win an argument. An xrisk is typically defined as one that, in and of itself, would result in the complete extinction of a species. If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we’d be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.
I’m not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you’ve bought into a line of political propaganda that has little relation to reality—there’s a large body of evidence that we’re nowhere near running out of fossil fuels, and the energy industry experts whose livelihoods rely on making correct predictions mostly seem to be lined up on the side of expecting abundance rather than scarcity. I don’t expect you to agree, but anyone who’s curious should be able to find both sides of this argument with a little googling.
Now you’re just changing the definition to try to win an argument.
So Nick Bostrom, who is one of the major thinkers about existential risk, seems to think that this justifies being discussed in the context of existential risk: http://www.nickbostrom.com/existential/risks.html . In section 5.1 of that link he writes:
The natural resources needed to sustain a high-tech civilization are being used up. If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)
Moving on, you wrote:
If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we’d be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.
So I agree we need to be careful to keep existential risk focused on proximate causes. I had a slightly annoying discussion with someone earlier today who was arguing that “religious fanaticism” should constitute an existential risk. But in some contexts the line certainly is blurry. If, for example, a nuclear war wiped out all but 10 humans, and then they died from lack of food, I suspect you’d say that the existential risk that got them was nuclear war, not famine. In this context the question has to be asked: if something doesn’t completely wipe out humanity, but leaves humanity limping along to the point where things that wouldn’t normally be existential risks could easily finish it off, should that be classified as an existential risk? Even if one doesn’t want to call it “existential risk”, it seems clear that such scenarios share the most relevant features of existential risk (e.g. relevant to understanding the Great Filter, likely understudied and underfunded, would still result in a tremendous loss of human value, would prevent us from traveling out among the stars, would make us feel really stupid if we fail to prevent it and it happens, etc.).
I’m not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you’ve bought into a line of political propaganda that has little relation to reality—there’s a large body of evidence that we’re nowhere near running out of fossil fuels,
This and the rest of that paragraph seem to indicate that you didn’t read my earlier paragraph that closely. Nothing in my comment said that we were running out of fossil fuels, or even that we were running out of fuels with EROEI above 1. There are a lot of fossil fuels left. The issue in this context is that the remaining fossil fuels take technology to harness efficiently, and while we generally have that technology, a society trying to come back from a drastic collapse may not. That’s a very different worry from any claim about us running out of fossil fuels in the near or even indefinite future.