The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the “safety zone” between expansion of carrying capacity and population growth.
The Jevons paradox: as technology makes each unit of a natural resource more useful, total consumption of that resource tends to go up rather than down, so efficiency gains can speed up depletion instead of slowing it. (Though I’m not convinced that most environmentalists actually are opposed to all relevant technological improvements. I’ve definitely never heard any complain about solar energy research, for example.) Additionally, a safety margin maintained through an ever-increasing rate of technological advancement is brittle and seems like it should increase catastrophic risk. An analogy: “let’s not build below sea level” is more robust than “intricate dyke system vulnerable to catastrophic failure.”
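A minimal sketch of that Jevons mechanism, assuming (purely for illustration) constant-elasticity demand for the services the resource provides:

\[
R(\eta) = \frac{S(p_S)}{\eta}, \qquad p_S = \frac{p_R}{\eta}, \qquad S(p_S) \propto p_S^{-\varepsilon}
\;\;\Longrightarrow\;\; R(\eta) \propto \eta^{\,\varepsilon - 1},
\]

where \(\eta\) is efficiency (useful service per unit of resource), \(R\) is total resource use, \(p_R\) is the resource price, and \(\varepsilon\) is the price elasticity of demand for the service. If \(\varepsilon > 1\), resource use actually rises as efficiency improves, which is the paradox; if \(\varepsilon < 1\), efficiency improvements still erode some of their own savings, just without reversing them.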
I like the idea of space colonization, but it’s not clear that it’s a practical, let alone robust, way to get our eggs into more baskets.
On existential risk overall, my reading on AI has been pushing me towards the view that global warming → civilizational collapse may actually be our best hope for the future, if only it can happen fast enough to prevent the development of a superintelligence.
I like the idea of space colonization, but it’s not clear that it’s a practical, let alone robust, way to get our eggs into more baskets.
I read somewhere that to calibrate the logistics of getting everyone off Earth, you should consider how much it would cost and how long it would take to load every human onto a passenger jet and fly them all to the same continent. I wish I could find that essay. Long story short, it would take a loooot of resources. So, it probably won’t be our eggs in particular getting into more baskets, but at least the eggs of some fellow humans.
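I can’t vouch for whatever numbers that essay used, but a rough back-of-envelope (all figures below are round assumptions, not sourced) already makes the point for the much easier “same continent by passenger jet” version:

```python
# Back-of-envelope for the "fly everyone to one continent" calibration.
# All figures are rough, assumed round numbers for illustration only.

population = 8_000_000_000        # world population, order of magnitude
seats_per_flight = 400            # roughly one full wide-body jet
fleet_size = 25_000               # ballpark global commercial passenger fleet
long_haul_flights_per_day = 1     # one intercontinental trip per aircraft per day

flights_needed = population / seats_per_flight            # ~20 million flights
flights_per_day = fleet_size * long_haul_flights_per_day
days_needed = flights_needed / flights_per_day            # ~800 days

print(f"{flights_needed:,.0f} flights, ~{days_needed:,.0f} days "
      f"(~{days_needed / 365:.1f} years) with the whole fleet doing nothing else")
```

Even with every airliner on Earth flying nothing but evacuation routes, that’s a couple of years just to shuffle people between continents, and getting a person to orbit costs orders of magnitude more than a plane ticket.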
On existential risk overall, my reading on AI has been pushing me towards the view that global warming → civilizational collapse may actually be our best hope for the future, if only it can happen fast enough to prevent the development of a superintelligence.
I see two outcomes: either there are enough exploitable resources left to rebuild a technological civilization, in which case someone will get back to pursuing superintelligence, or there are not, in which case we piss away our last days throwing spears and dying of dysentery. Or maybe we evolve into non-tool-using creatures like in Vonnegut’s Galápagos. In any case, the left-hand side of the Drake equation remains at zero. Breaking out of the overshoot/collapse cycle means the risk of going out with a bang, but the alternative is the certainty of going out with a whimper.
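For reference, the Drake equation being invoked here (its left-hand side \(N\) is the number of civilizations in the galaxy whose signals might be detectable):

\[
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L,
\]

where \(R_{*}\) is the rate of star formation, \(f_p\) the fraction of stars with planets, \(n_e\) the average number of habitable planets per such star, \(f_l\), \(f_i\), \(f_c\) the fractions of those that go on to develop life, intelligence, and detectable technology, and \(L\) how long such civilizations remain detectable. The point above is that every collapse scenario keeps our contribution to \(N\) pinned at zero.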
An analogy: “let’s not build below sea level” is more robust than “intricate dyke system vulnerable to catastrophic failure.”
I am not sure that’s true. Consider a similar analogy: “let’s not develop agriculture” is more robust than “dependence on fickle weather or an intricate irrigation system”. Is that so? Not likely: you just get hit by a different set of risks. One day a lot of pale people with thundersticks appear; they kill your men and herd your women and children into reservations to die.
Given the fate of the societies which did not climb the technological tree sufficiently fast, I’d say throttling down progress sure doesn’t look like a wise choice.
Given the fate of the societies which did not climb the technological tree sufficiently fast, I’d say throttling down progress sure doesn’t look like a wise choice.
I completely agree; that’s a great point! The sixth one, to be exact.
Being willing to take on greater existential risks will definitely help in short- or medium-term competition, as your example shows (a greater risk of famine in exchange for the ability to conquer other societies). So no, I don’t think we can necessarily coordinate to avoid a “brittle” situation in which we are vulnerable to catastrophic failure. That doesn’t mean it’s not desirable.
I read somewhere that to calibrate the logistics of getting everyone off Earth, you should consider how much it would cost and how long it would take to load every human onto a passenger jet and fly them all to the same continent. I wish I could find that essay. Long story short, it would take a loooot of resources. So, it probably won’t be our eggs in particular getting into more baskets, but at least the eggs of some fellow humans.
As far as x-risk is concerned, we all have the same eggs.