I would suggest responding with your points (Top 3-5, if you have too many to easily list) on why this is incredibly obviously not a problem, seeing where you get pushback if anywhere, and iterating from there. Don’t be afraid to point out “incredibly obvious” things—it might not be incredibly obvious to other people. And if you’re genuinely unsure why anyone could think this is a problem, the responses to your incredibly obvious points should give you a better idea.
We already have eight billion people. There is no immediate underpopulation crisis, and in fact there are lots of signs that we’re causing serious environmental trouble trying to support that many with the technology we’re using[1]. We’re struggling to come up with better core technologies to support even that many people, even without raising their standard of living. Maybe we will, maybe we won’t. At the moment, if there’s any population problem, it’s overpopulation.
It’s not plausible that any downward trend will continue to the point of being a real extinction threat. That’s not how selection pressure works. And even if it could happen, it would take many centuries, and the word “crisis” is totally inappropriate. You can always deal with it when and if it becomes an actual problem.[2]
There’s no intrinsic value to having more people[3], and hypothetical people who don’t exist don’t have any right to be brought into existence.
Although we don’t know how to get to the technology for a larger population, it’s much more plausible that we can tweak our existing stuff, and/or stuff that’s already starting to be built, to deal well with an older population. And if not, it’s still not unsurvivable, and it’s much more predictable than what we could have to deal with if we keep putting pressure on the environment.
The fact that we haven’t hit the most apocalyptic timelines of the most extreme predictions of the most pessimistic people in the 1970s does not mean that we don’t have serious environmental degradation going on. Note, as one example among many, that the climate is going pretty wild, and that official targets meant to prevent or slow that have never been met. And observable environmental effects may lag by decades even if you’ve passed major tipping points.
… and it’s not self-evident that extinction is even bad, depending on how it comes about.
We don’t need more people to innovate; just integrate over more time. The only real innovation “deadlines” we might have are on problems that are made worse by more population. Anyway, we’re doing a rotten job of using the innovative potential of the people we have.
All the same thoughts here. I also want to understand what the plan is if we keep growing the population. Is the idea that we keep going until we reach a higher stable number, or that we literally keep growing always? If the former, what’s the number and why? If the latter, does that mean the whole strategy is 100% dependent on us inhabiting space? And if that’s the case, shouldn’t this rather big element in the plan be made explicit?
Does the post ever mention the target of growing the population? I only recall mentions of replacement fertility.
So is the target to keep the population as it is? Has an argument been made as to why the current population is ‘correct’? Isn’t it a bit arbitrary?
It is the change that is bad, not necessarily the future total size of the population.
Edit: Maybe I should unpack that a bit. I also think more people are better, because life is good and innovation is proportional to the number of innovators, but apart from that:
A decreasing population leads to economic stagnation and an innovation slowdown. Both can be observed in Japan. South Korea, China, and Taiwan are on track to tank their populations much faster than Japan ever did. How’s that going to work out for them?
In a permanent recession, will investment dry up, killing whatever dynamism there might still be?
If the age pyramid is inverted, old people have too much political power for the country ever to reverse course and support the young toward family formation.
If you allow massive immigration to fix the labor shortage you also invite ethnic strife down the line. Almost all violent conflicts are based on having two or more ethnic groups within one country.
Will young people emigrate if they are burdened with caring for too many old people in a shrinking economy?
My view is that the progress we have observed over the last few centuries is more fragile than it seems, and it is certainly possible that we will kill it almost completely if we continue to remove or weaken many of the preconditions for it.
I replied before your edit, so a bit more:
I agree that civilisational progress is fairly fragile. But it is fragile in both directions. Climate change and resource wars seem about as likely to lead to global conflict as internecine ethnic strife to me.
I say this partly because immigration seems to me like a force for mutual cultural understanding and trade. Without it we would probably see more closed-off nations, more likely to go to war. With too much of it, however, there can be bad side effects and cultural rifts if it is not managed very wisely. Where to draw the line is no simple question.
I also want to advance the simple main idea that drives my views on this issue, which is that population growth HAS to level off eventually unless we colonise space. The side effects on the economy will equally have to be managed at one time or another.
Will they be easier to manage in the future? Or could growing populations make it even harder? Could managing a fall in population rates be easier if done more slowly?
Maybe. But I don’t feel that’s the tenor of the arguments I am hearing from rationalist and adjacent people right now.
I agree that massive population growth would also be dangerous. We have that in Africa, so I worry about it for Africa. We don’t have it anywhere else, so I don’t worry about it for any other place.
Empirically, resource wars are much less likely than internecine ethnic strife.
After we have automated much of the economy, there won’t be side effects on the economy. The trick is actually getting there.
Automating much of the economy is more than a little way off, and is highly likely to bring its own problems, which I would expect to cross-cut with all these issues. I personally doubt that (in the event humans are not sidelined altogether) advances in AI would make the demographic transition much economically easier, but I think that’s in the realm of speculation either way.
Do you think that a large population that was reducing slowly would be something Zvi, Robin Hanson, and others taking this stance would celebrate? (As opposed to what we have: a large population that is growing, but showing signs of falling relatively fast in geographical/cultural pockets.)
Currently global population growth is positive but decelerating; I guess a more gradual deceleration would be less disturbing to them? But what about if world population growth very gradually moved from positive to negative? Would they be happy with that?
I had assumed not but I am trying to understand what good looks like.
I don’t know what Zvi and Robin Hanson would celebrate, but I personally worry about fast population decline in those “geographical/cultural pockets” that are responsible for scientific and technological progress.
And I worry because I see the possibility that the decline of innovation and tech will not be as gradual as even fast population decline generally is, but that this decline will be exacerbated by the political instability and/or political sclerosis that comes from too many old people / too much immigration + a shrinking pie.
Somewhat related: this Scott comment on his Substack, in response to a reader asking “wouldn’t it be a valid longtermist goal to bring human numbers in alignment with sustainability” (which sounds to me like it’s advocating for reducing the population somehow):
I think there’s a division on whether we should be funding decreased population (to solve the environmental crisis) or increased population (to solve the fertility collapse / economic crisis).
I tend to think there is no environmental crisis related to overpopulation, in the sense that there are no current famines not related to political issues (i.e., we can grow the food and transport the food when warlords don’t prevent us from distributing it), plus the history of things like the Simon-Ehrlich wager, plus the fact that non-immigrant population is set to decline on its own everywhere except Africa, and Africa is expected to stabilize soon. I’m also concerned that “fight overpopulation charities” have a really scary history (see https://www.astralcodexten.com/p/galton-ehrlich-buck ) that makes me want to run away screaming.
So I’m not currently funding any population-reduction charities in particular, although I am funding some more generic environmental/sustainability projects. I haven’t funded any increase-population-charities yet either, mostly because I haven’t found ones I like, although some of the IVG related charities I fund might do that as a side effect.
Your factual claims here seem at least somewhat reasonable. Naively extrapolating sub-replacement fertility centuries into the future is silly. Our wealthy civilization ought to be capable of finding some way of coping with increased elder care. The current number of humans may perhaps be more than is optimal.
But your moral view is atrocious. Human extinction would be bad—very bad. Because humans are the most interesting species to arise so far. Because human striving is properly focused on the future, not just hedonistic pleasures of the moment. Because there may be a meaning to human existence that we can so far only dimly perceive.
And more humans are better, if they come at no (or small) cost to the quality of life of the existing humans. Human lives have intrinsic value, and every additional life adds value.
I do wonder if your moral views have biased your factual assessments as well.
Worrying about extinction is one thing, and we’re nowhere near that point, but does the pro-fertility case rely on the philosophical assumption that more people are better? Surely you can see how some people might not find that very compelling.
I think that various “pro-fertility” people have a variety of motivations.
But “more people are better” ought to be a belief of everyone, whether pro-fertility or not. It’s an “other things being equal” statement, of course—more people at no cost or other tradeoff is good. One can believe that and still think that fewer people would be a good idea in the current situation. But if you don’t think more people are good when there’s no tradeoff, I don’t see what moral view you can have other than nihilism or some form of extreme egoism.
BTW: I’m not ruling out an expansive definition of “people”—maybe gorillas are people, maybe some alien species are, maybe some AIs would be—but I think that’s outside the scope of the current discussion.
But “more people are better” ought to be a belief of everyone, whether pro-fertility or not. It’s an “other things being equal” statement, of course—more people at no cost or other tradeoff is good. One can believe that and still think that fewer people would be a good idea in the current situation. But if you don’t think more people are good when there’s no tradeoff, I don’t see what moral view you can have other than nihilism or some form of extreme egoism.
Do all variants of downside-focused ethics get dismissed as extreme egoism? Hard to see them as nihilistic.
I suspect clarity and consensus on the meaning of “more people at no cost or other tradeoff” to be difficult. If “more people” means more happy people preoccupied with the welfare of the least fortunate, then sure, “at no cost or other tradeoff” should suffice for practically everyone to get behind it. But that seems like quite a biased distribution for a default meaning of “more people.”
Okay, I think I see several of the cruxes here.
Here’s my understanding of your viewpoint:
“It’s utterly bizarre to worry about fertility. Lack of fertility is not going to be an x-risk anytime soon. We already have too many people, and if anything a voluntary population reduction is a good thing in the relative near term (i.e., a few decades or so). We’ve had explosive growth over the last century in terms of population, it’s already unstable, why do we want to keep going?”
In a synchronous discussion I would now pause to see if I had your view right. Because that would take too much time in an asynchronous discussion, I’ll reply to the imaginary view I have in my head, while hoping it’s not too inaccurate. Would welcome corrections.
If this view of yours seems roughly right, here’s what I think are the viewpoint differences:
I think people who worry about fertility would agree with you that fertility is not an existential threat.
I think the intrinsic value of having more people is not an important crux—it is possible to have your view on Point 3 and still worry about fertility.
I think the “fertility crisis” is more about replacement than continued increase. It is possible that many of the people who worry about fertility would also welcome still more people, but I don’t think they would consider it a crisis if we were only at replacement rates, or close to it.
I think people who care about speed of innovation don’t just care about imposed population deadlines looming, but also about quality of life—if we had invented penicillin a century earlier, many people would have lived much longer, happier lives, for example. One could frame technological progress as a moral imperative this way. I’m not sure if this is a major crux, but I think there are people with a general “More people = good” viewpoint for this reason, even ignoring population ethics. You are right that we could use the people we have better, but I don’t see this as a mutually exclusive situation.
I think the people who worry about the fertility crisis would disagree with you about Point 4. I don’t think it’s obvious that “tech to deal with an older population” is actually easier than “tech to deal with a larger population”. It might be! Might not be.
While you may not agree with these ideas, I hope I’ve presented them reasonably and accurately enough that it makes the other side merely different, rather than bizarre and impossible to understand.
I’m not so sure about point 3 being irrelevant. Without that, what is the positive reason for caring about fertility? Just the innovation rate and aging population?
Those don’t seem to explain the really extreme importance people attach to this: talking about a “crisis”, talking about really large public expenditures, talking about coercive measures, talking about people’s stated preferences for their own lives being wrong to the point where they need to be ignored or overridden, etc… I mean, those are the sorts of things that people tend to reserve for Big Issues(TM).
I get the impression that some people just really, really care about having more humans purely for the sake of having more humans. And not just up to some set optimum number, but up to the absolute maximum number they can achieve subject to whatever other constraints they may recognize. Ceteris paribus, 10⁴⁷ people is better than 10⁴⁶ people and 10⁴⁸ is better still.
That view is actually explicit in long-termist circles that are Less-Wrong-adjacent. And it’s something I absolutely cannot figure out. I’ve been in long discussions about it on here, and I still can’t get inside people’s heads about it.
I mean, I just got a comment calling me “morally atrocious” for not wanting to increase the population without limit (at least so long as it didn’t make life worse for the existing population). I think that was meant to be independent of the part about extinction; maybe I’m wrong.
I think people who care about speed of innovation don’t just care about imposed population deadlines looming, but also about quality of life
… but if you have more people around in order to get penicillin invented, you equally have more people around to suffer before penicillin is invented. That seems to be true for innovation in general. More people may mean less time before an innovation happens, but it also means more people living before that innovation. Seems like a wash in terms of the impact of almost any innovation.
The only way I can get any sense out of it at all is to think that people want the innovations within their own lifetimes, or maybe the lifetimes of their children or people they actually know. But the impacts of these interventions are so far down the road that that’s not likely to happen without essentially indefinite life extension. Which is about the last scenario where you want to be artificially increasing fertility. [1]
… and all of that makes me wonder why people who are usually pretty skeptical and analytical would get behind the innovation argument. I will have to admit that I strongly suspect motivated cognition. I have a lot of trouble believing that the natalism arises from the innovation concern, and very little trouble believing it’s the other way around.
A big part of the “bizarreness” I’m talking about is the easy assignment of importance to that kind of weak argument about what would normally be a weak concern.
I think the people who worry about the fertility crisis would disagree with you about Point 4. I don’t think it’s obvious that “tech to deal with an older population” is actually easier than “tech to deal with a larger population”. It might be! Might not be.
Well, you’re right, you can never be sure. But the other part of point 4 was that we’re probably better able to deal with failing to get better old-population technology than with failing to get large-population technology. And at least we know what the consequences of failure would be, because we’ve seen aging before.
My intuitive sense is that assistive gadgets, industrial automation, and even outright anti-aging technology, are easier than changing where all the bulk raw materials come from, or even than changing the balance of energy sources, or how much material and energy gets used. That’s even more true if you count the very real difficulties in getting people to actually adopt changes even when you know how to make them technically. But even if I’m wrong, the downside risk of an older population seems obviously more limited than that of a larger population[2].
So why would people who are often very careful about other risks want to just plunge in and create more people? Even if they do think “larger technology” is easier than “older technology”, they could also be wrong… and there’s no backup plan.
Again, it seems weird and out of character and suspiciously like the behavior you’d expect from people who intuitively felt that higher fertility, and higher population, were axiomatically good almost regardless of risk, and were coloring their factual beliefs according to that feeling. Which takes me back to not understanding why anybody would feel that way, or expect others to agree to order the world around it.
… and in fact there are people in the world, maybe not on Less Wrong, who are against life extension because it might not be compatible with high fertility. Fertility axiomatically wins for those people. And they can be very fervent about it.
Also, in the end, if you ever stop growing your population, for any reason at all, you’ll still eventually have to deal with the population getting older. So after you do the large-population technology, you’ll still eventually have to do at least some of the old-population technology.
I think this makes a lot of sense. While I think you can make the case for “fertility crisis purely as a means of preventing economic slowdown and increasing innovation” I think your arguments are good that people don’t actually often make this argument, and a lot of it does stem from “more people = good”.
But I think if you start from “more people = good”, you don’t actually have motivated reasoning as much as you suspect re: innovation argument. I think it’s more that the innovation argument actually does just work if you accept that more people = good. Because if more people = good, that means more people were good before penicillin and then are even more good afterwards, and these two don’t actually cancel each other out.
In summary, I don’t think that “more people = good” motivates the “Life is generally good to have, actually” argument—I think if anything it’s the other way around. People who think life is good tend to be more likely to think it’s a moral good to give it to others. The argument doesn’t say it’s “axiomatically good” to add more people, it’s “axiomatically good conditional on life being net positive”.
As for understanding why people might feel that way—my best argument is this.
Let’s say you could choose to give birth to a child who would be born with a terribly painful and crippling disease. Would it be a bad thing to do that? Many people would say yes.
Now, let’s say you could choose to give birth to a child who would live a happy, healthy, positive life. Would that be a good thing? It seems that, logically, if giving birth to a child who suffers is bad, giving birth to a child who enjoys life is good.
That, imo, is the best argument for being in favor of more people if you think life is positive.
Note that I don’t think this means people should be forced to have kids, or that you’re a monster for choosing not to, even if those arguments were true. You can save a life for about $5,000, after all, and raising a kid yourself takes far more resources than that. Realistically, if my vasectomy makes me a bad person, then I’m also a bad person for not donating every spare dollar to the AMF instead of merely 10%, and if that’s a “bad person” then the word has no meaning.
Your views were called “morally atrocious” because you stated that human extinction would not necessarily be bad. Seems very clear from the context in the comment frankly.
“It’s utterly bizarre to worry about fertility. Lack of fertility is not going to be an x-risk anytime soon. We already have too many people, and if anything a voluntary population reduction is a good thing in the relative near term (i.e., a few decades or so). We’ve had explosive growth over the last century in terms of population, it’s already unstable, why do we want to keep going?”
In a synchronous discussion I would now pause to see if I had your view right. Because that would take too much time in an asynchronous discussion, I’ll reply to the imaginary view I have in my head, while hoping it’s not too inaccurate. Would welcome corrections.
If this view of yours seems roughly right, here’s what I think are the viewpoint differences:
I think people who worry about fertility would agree with you that fertility is not an existential threat.
I think the intrinsic value of having more people is not an important crux—it is possible to have your view on Point 3 and still worry about fertility.
I think the “fertility crisis” is more about replacement than continued increase. It is possible that many of the people who worry about fertility would also welcome still more people, but I don’t think they would consider it a crisis if we were only at replacement rates, or close to it.
I think people who care about speed of innovation don’t just care about looming population deadlines, but also about quality of life—if we had invented penicillin a century earlier, many people would have lived much longer, happier lives, for example. One could frame technological progress as a moral imperative this way. I’m not sure if this is a major crux, but I think there are people with a general “more people = good” viewpoint for this reason, even ignoring population ethics. You are right that we could use the people we have better, but I don’t see this as a mutually exclusive situation.
I think the people who worry about the fertility crisis would disagree with you about Point 4. I don’t think it’s obvious that “tech to deal with an older population” is actually easier than “tech to deal with a larger population”. It might be! Might not be.
While you may not agree with these ideas, I hope I’ve presented them reasonably and accurately enough that it makes the other side merely different, rather than bizarre and impossible to understand.
I think your summary’s reasonable.
I’m not so sure about point 3 being irrelevant. Without that, what is the positive reason for caring about fertility? Just the innovation rate and aging population?
Those don’t seem to explain the really extreme importance people attach to this: talking about a “crisis”, talking about really large public expenditures, talking about coercive measures, talking about people’s stated preferences for their own lives being wrong to the point where they need to be ignored or overridden, etc. I mean, those are the sorts of things that people tend to reserve for Big Issues(TM).
I get the impression that some people just really, really care about having more humans purely for the sake of having more humans. And not just up to some set optimum number, but up to the absolute maximum number they can achieve subject to whatever other constraints they may recognize. Ceteris paribus, 10⁴⁷ people is better than 10⁴⁶ people and 10⁴⁸ is better still.
That view is actually explicit in long-termist circles that are Less-Wrong-adjacent. And it’s something I absolutely cannot figure out. I’ve been in long discussions about it on here, and I still can’t get inside people’s heads about it.
I mean, I just got a comment calling me “morally atrocious” for not wanting to increase the population without limit (at least so long as it didn’t make life worse for the existing population). I think that was meant to be independent of the part about extinction; maybe I’m wrong.
… but if you have more people around in order to get penicillin invented, you equally have more people around to suffer before penicillin is invented. That seems to be true for innovation in general. More people may mean less time before an innovation happens, but it also means more people living before that innovation. Seems like a wash in terms of the impact of almost any innovation.
The only way I can get any sense out of it at all is to think that people want the innovations within their own lifetimes, or maybe the lifetimes of their children or people they actually know. But the impacts of these interventions are so far down the road that that’s not likely to happen without essentially indefinite life extension. Which is about the last scenario where you want to be artificially increasing fertility. [1]
… and all of that makes me wonder why people who are usually pretty skeptical and analytical would get behind the innovation argument. I will have to admit that I strongly suspect motivated cognition. I have a lot of trouble believing that the natalism arises from the innovation concern, and very little trouble believing it’s the other way around.
A big part of the “bizarreness” I’m talking about is the easy assignment of importance to that kind of weak argument about what would normally be a weak concern.
Well, you’re right, you can never be sure. But the other part of point 4 was that we’re probably better able to deal with failing to get better old-population technology than with failing to get large-population technology. And at least we know what the consequences of failure would be, because we’ve seen aging before.
My intuitive sense is that assistive gadgets, industrial automation, and even outright anti-aging technology, are easier than changing where all the bulk raw materials come from, or even than changing the balance of energy sources, or how much material and energy gets used. That’s even more true if you count the very real difficulties in getting people to actually adopt changes even when you know how to make them technically. But even if I’m wrong, the downside risk of an older population seems obviously more limited than that of a larger population[2].
So why would people who are often very careful about other risks want to just plunge in and create more people? Even if they do think “larger technology” is easier than “older technology”, they could also be wrong… and there’s no backup plan.
Again, it seems weird and out of character and suspiciously like the behavior you’d expect from people who intuitively felt that higher fertility, and higher population, were axiomatically good almost regardless of risk, and were coloring their factual beliefs according to that feeling. Which takes me back to not understanding why anybody would feel that way, or expect others to agree to order the world around it.
… and in fact there are people in the world, maybe not on Less Wrong, who are against life extension because it might not be compatible with high fertility. Fertility axiomatically wins for those people. And they can be very fervent about it.
Also, in the end, if you ever stop growing your population, for any reason at all, you’ll still eventually have to deal with the population getting older. So after you do the large-population technology, you’ll still eventually have to do at least some of the old-population technology.
I think this makes a lot of sense. While I think you can make the case for “fertility crisis purely as a means of preventing economic slowdown and increasing innovation” I think your arguments are good that people don’t actually often make this argument, and a lot of it does stem from “more people = good”.
But I think if you start from “more people = good”, you don’t actually have motivated reasoning as much as you suspect re: innovation argument. I think it’s more that the innovation argument actually does just work if you accept that more people = good. Because if more people = good, that means having more people was good before penicillin and even better afterwards, and these two don’t actually cancel each other out.
In summary, I don’t think that “more people = good” motivates the “Life is generally good to have, actually” argument—I think if anything it’s the other way around. People who think life is good tend to be more likely to think it’s a moral good to give it to others. The argument doesn’t say it’s “axiomatically good” to add more people, it’s “axiomatically good conditional on life being net positive”.
As for understanding why people might feel that way—my best argument is this.
Let’s say you could choose to give birth to a child who would be born with a terribly painful and crippling disease. Would it be a bad thing to do that? Many people would say yes.
Now, let’s say you could choose to give birth to a child who would live a happy, healthy, positive life. Would that be a good thing? It seems that, logically, if giving birth to a child who suffers is bad, giving birth to a child who enjoys life is good.
That, imo, is the best argument for being in favor of more people if you think life is positive.
Note that I don’t think this means people should be forced to have kids or that you’re a monster for choosing not to, even if those arguments were true. You can save a life for 5k USD after all, and raising a kid yourself takes far more resources than that. Realistically, if my vasectomy makes me a bad person then I’m also a bad person for not donating every spare dollar to the AMF instead of merely 10%, and if that’s a “bad person” then the word has no meaning.
Your views were called “morally atrocious” because you stated that human extinction would not necessarily be bad. Seems very clear from the context in the comment frankly.