How likely do you consider this scenario compared to extinction?
I have no ability or basis to make a reliable, useful prediction of this kind.
The prediction being made is implicit. Maybe I should have said “testable hypothesis”.
By “passed over” I didn’t mean “ignored”. I meant something more like, “cannot be relied on to have the intended impact at this time”. So, if a problem needs to be solved urgently, proven charities are the way to go. At this point, one necessarily assigns weights to the following mutually exclusive beliefs (a toy numerical sketch of such weighting follows the list):
1A. Exceeding the planet’s carrying capacity (in the generalized sense that doesn’t imply we know which specific resource we will overuse in a manner that kills us) is a speculative existential threat in the same category as asteroid impacts and rogue AI, because population growth is already slowing and, on present trends, will slow to zero or even reverse into decline soon enough to avert disaster.
1B. Exceeding the planet’s carrying capacity (as above) is a speculative existential threat in the same category as asteroid impacts and rogue AI, because new technology has so far always found ways to expand the planet’s carrying capacity just in time to prevent disaster, and will continue to do so indefinitely.
1C. Exceeding the planet’s carrying capacity (as above) is a speculative existential threat in the same category as asteroid impacts and rogue AI, because I have some other evidence that it is too unlikely to be worth worrying about.
2. Exceeding the planet’s carrying capacity (as above) is a sufficiently credible and immediate existential risk, but there exist proven causes that I believe are the best way to tackle the problem.
3. Exceeding the planet’s carrying capacity (as above), with the resultant collapse of civilization and possible extinction, is not preventable. The practical altruist’s goal is instead to reduce suffering as much as possible while we wait for everything to be undone by our inevitable demise.
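To make the weighting concrete, here is a minimal sketch of what assigning credences to these beliefs and comparing strategies might look like. Every number is hypothetical, purely for illustration; the only real constraints are that the credences cover mutually exclusive, exhaustive beliefs and sum to 1.

```python
# Toy expected-value comparison over the mutually exclusive beliefs above.
# All numbers are hypothetical placeholders, not credences anyone here holds.
credences = {"1A": 0.30, "1B": 0.30, "1C": 0.10, "2": 0.15, "3": 0.15}

# Assumed marginal value of a donation under each belief (arbitrary units),
# for the two strategies being debated.
value_if_fund_proven = {"1A": 1.0, "1B": 1.0, "1C": 1.0, "2": 1.0, "3": 1.0}
value_if_fund_speculative = {"1A": 0.2, "1B": 0.5, "1C": 0.2, "2": 0.8, "3": 0.0}

def expected_value(value_by_belief):
    """Credence-weighted value of a strategy across all beliefs."""
    return sum(credences[b] * value_by_belief[b] for b in credences)

assert abs(sum(credences.values()) - 1.0) < 1e-9  # beliefs exhaust the space
print("fund proven charities:     ", expected_value(value_if_fund_proven))
print("fund speculative projects: ", expected_value(value_if_fund_speculative))
```

With these placeholder numbers the proven strategy wins; the disagreement in this thread is, in effect, about what the real numbers should be.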
For my part, I’m making the implicit assumption that the value of information is lower than the value of concrete charitable outcomes, i.e. the intellectually honest person’s version of the “we should solve all problems on Earth before we go exploring the universe” argument. To be fair, I don’t think you actually said charitable experiments should be funded less than proven charities. For all I know you might privately believe the opposite: that even proven methods aren’t enough and we need to desperately expand our capabilities by funding speculative projects more (with concrete criteria for measuring outcomes, of course).
I’d put decent credence on 1A, but I don’t expect actual population decline. I’d also put decent credence on 1B, though perhaps not indefinitely. There does seem to be lots of room for further innovation in farming and resource extraction, and one could also imagine eventual colonization of other planets.
Secondly, I think you’re missing the option I most endorse:
4. Exceeding the planet’s carrying capacity (as above) is a sufficiently credible and immediate existential risk to take seriously (though perhaps not as credible or as immediate as other existential risks). However, there are no known interventions at this time to reliably improve our planet’s carrying capacity. Therefore, our best option is to try to find these innovations.
I agree with 4 to the degree that I disagree with 1B. I think there’s a good chance existing agricultural innovations are already good enough and just need to be deployed. But I don’t think funding that is the most cost-effective thing I could be doing.
Lastly, as a nitpick: I don’t think asteroid impacts and rogue AI are in the same category. Asteroid risk is actually fairly well understood, relatively speaking.
However, there are no known interventions at this time to reliably improve our planet’s carrying capacity.
True enough for the supply side. The demand-side interventions are obvious, but they are not seriously considered, or even discussed, because of religious/political/cultural stigma.
What interventions would you consider?
The final outcome obviously involves people choosing to reproduce less. The means of getting there in a way that’s broadly acceptable is the tough problem. But perhaps not the same order of difficulty as AI.
Many religions are hostile to family planning, and no mainstream ones I know of actively favor it.
People who choose to have large numbers of children have the advantage of numbers (insofar as their large-family values get passed on to their children).
Civil libertarians are uncomfortable with population control because it served as a cover for racist policies in the recent past.
Economic libertarians are uncomfortable with population control because they have come to associate the goal with intrusive government policy, which keeps them from even considering free-market means of achieving it.
Many people, maybe most, like to leave open the option of having more than replacement-level numbers of children, for emotional reasons perhaps shaped by evolution.
It’s a lot to overcome. Perhaps the first step is to separate the actual issue from the misguided solutions that have been attempted, making it less of a taboo topic for public debate. I don’t know, though. It’s easier to see the destination than how to get there.
Exceeding the planet’s carrying capacity (as above) is a sufficiently credible and immediate existential risk to take seriously (though perhaps not as credible or as immediate as other existential risks). However, there are no known interventions at this time to reliably improve our planet’s carrying capacity.
Though I fear it hypocritical to mention: perhaps you ought to give some thought to reducing consumption per individual living human instead? Particularly among those who already enjoy the largesse?
Each extra kid can completely wipe out a lifetime of responsible consumption.
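For a rough sense of scale, here is a back-of-envelope sketch of that claim using the per-activity estimates from Wynes & Nicholas (2017). Two caveats: their “one fewer child” figure attributes a share of descendants’ emissions to the parent and is contested, and the 60-year adult span is my own assumption.

```python
# Back-of-envelope CO2e arithmetic for the claim above.
# Per-year savings estimates from Wynes & Nicholas (2017), tonnes CO2e/year;
# the "one fewer child" figure is contested (it counts descendants' emissions).
SAVINGS_T_PER_YEAR = {
    "live car-free": 2.4,
    "avoid one transatlantic round trip": 1.6,
    "eat a plant-based diet": 0.8,
    "have one fewer child": 58.6,
}

ADULT_YEARS = 60  # assumed span of "a lifetime of responsible consumption"

lifetime_diligence = ADULT_YEARS * sum(
    SAVINGS_T_PER_YEAR[k]
    for k in ("live car-free", "avoid one transatlantic round trip",
              "eat a plant-based diet")
)
one_extra_child = ADULT_YEARS * SAVINGS_T_PER_YEAR["have one fewer child"]

print(f"lifetime of diligent cuts: {lifetime_diligence:.0f} t CO2e")  # ~288
print(f"one extra child:           {one_extra_child:.0f} t CO2e")    # ~3516
```

Even with generous error bars on every figure, the child term dominates by an order of magnitude.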
Yeah, I do some of that.