Now you’re just changing the definition to try to win an argument. An xrisk is typically defined as one that, in and of itself, would result in the complete extinction of a species. If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we’d be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.
I’m not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you’ve bought into a line of political propaganda that has little relation to reality—there’s a large body of evidence that we’re nowhere near running out of fossil fuels, and the energy industry experts whose livelihoods rely on making correct predictions mostly seem to be lined up on the side of expecting abundance rather than scarcity. I don’t expect you to agree, but anyone who’s curious should be able to find both sides of this argument with a little googling.
Now you’re just changing the definition to try to win an argument.
So Nick Bostrom, who seems to be one of the major thinkers about existential risk, thinks this justifies being discussed in the context of existential risk: http://www.nickbostrom.com/existential/risks.html. In section 5.1 of that link he writes:
The natural resources needed to sustain a high-tech civilization are being used up. If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)
Moving on, you wrote:
If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we’d be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.
So I agree we need to be careful to keep existential risk focused on proximate causes. I had a slightly annoying discussion with someone earlier today who was arguing that “religious fanaticism” should constitute an existential risk. But in some contexts the line certainly is blurry. If, for example, a nuclear war wiped out all but 10 humans, and then they died from lack of food, I suspect you’d say that the existential risk that got them was nuclear war, not famine. In this context, the question has to be asked: if something doesn’t completely wipe out humanity, but leaves it limping along to the point where things that wouldn’t normally be existential risks could easily wipe it out, should that be classified as an existential risk? Even if one doesn’t want to call that “existential risk,” it seems clear that such scenarios share the most relevant features of existential risks (e.g. they are relevant to understanding the Great Filter, are likely understudied and underfunded, would still result in a tremendous loss of human value, would prevent us from traveling out among the stars, and would make us feel really stupid if we fail to prevent them and they happen).
I’m not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you’ve bought into a line of political propaganda that has little relation to reality—there’s a large body of evidence that we’re nowhere near running out of fossil fuels,
This and the rest of that paragraph suggest that you didn’t read my earlier paragraph that closely. Nothing in my comment said that we were running out of fossil fuels, or even that we were running out of fuels with an EROEI (energy returned on energy invested) greater than 1. There are plenty of fossil fuels left. The issue in this context is that the remaining fossil fuels take technology to harness efficiently, and while we generally have that technology, a society trying to come back from a drastic collapse may not. That’s a very different worry from any claim that we are running out of fossil fuels in the near, or even indefinite, future.
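(For anyone unfamiliar with the acronym, here is a minimal sketch of what an EROEI above 1 means. The numbers are made up purely for illustration; they are not estimates for any actual fuel source.)

# EROEI = energy a source delivers divided by the energy spent extracting it.
# Illustrative numbers only, not real-world estimates.

def eroei(energy_delivered, energy_invested):
    """Energy returned on energy invested: the ratio of output to input energy."""
    return energy_delivered / energy_invested

# A hypothetical easily tapped deposit vs. a hypothetical hard-to-reach one.
print(eroei(30.0, 1.0))  # 30.0 -- strongly net-positive
print(eroei(1.2, 1.0))   # 1.2  -- barely above break-even
# Anything below 1 costs more energy to extract than it yields,
# which is the threshold that matters when ">1 EROEI" comes up.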