I roughly agree with this model, but I disagree with “all available signs right now point to us being far below the population level that would be optimal”. I don’t know why you think that the bottleneck is “the lack of enough people”.
The best explanation I know of why economic growth stopped accelerating circa the 1960s is “we don’t have enough people”. See e.g. Open Philanthropy’s explosive growth report.
In addition to this, predictions based on resource bottlenecks have not done well in the past. The model that “resource constraints are binding on how many people we can have” has consistently done badly at predicting the future over the last 200 years. Given that the model fits the past data worse unless you save it by adding an extra parameter, you should update against it in light of its bad track record.
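A rough way to make “save it by adding an extra parameter” precise is a penalized model comparison. The sketch below is purely illustrative (the log-likelihoods, parameter counts, and sample size are made-up numbers, not fits to real data); it only shows how a criterion like BIC charges a model for each parameter it needs in order to match the historical record:

```python
import math

# Hypothetical fit results for three models of the same historical growth data.
# The log-likelihoods and parameter counts are made-up, purely for illustration.
n_obs = 50  # number of historical data points
candidates = [
    ("population-bottleneck model",            -120.0, 2),
    ("resource-bottleneck model",              -135.0, 2),
    ("resource-bottleneck + rescue parameter", -121.0, 3),
]

def bic(loglik, k, n):
    """Bayesian information criterion: lower is better; each extra parameter costs log(n)."""
    return -2.0 * loglik + k * math.log(n)

for name, loglik, k in candidates:
    print(f"{name:42s} BIC = {bic(loglik, k, n_obs):6.1f}")
```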
Ask yourself whether a person thinking like you would have made accurate predictions in 1900 without knowing what would actually happen, or, even better, write down an explicit stochastic model of growth, fit it to pre-1900 data, and extrapolate it to post-1900. Which model makes better predictions? I’ve done this exercise before, and I can tell you that the models that best fit the data are the ones in which population bottlenecks are serious but resource bottlenecks are not.
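A minimal (deterministic rather than stochastic) version of that exercise might look like the sketch below: fit an exponential curve (growth rate independent of population) and a hyperbolic curve (growth accelerating with population, the usual reduced form of “more people, more ideas”) to rough pre-1900 world-population estimates, then compare the extrapolations against the post-1900 record. The data points are round-number estimates used only for illustration, and the two functional forms are crude stand-ins for the two worldviews:

```python
import numpy as np
from scipy.optimize import curve_fit

# Rough world-population estimates in billions (round numbers, illustration only).
years = np.array([1500.0, 1600.0, 1700.0, 1800.0, 1850.0, 1900.0])
pop   = np.array([0.46,   0.55,   0.60,   0.95,   1.20,   1.65])

def exponential(t, p0, r):
    """Constant growth rate: population size does not feed back into growth."""
    return p0 * np.exp(r * (t - 1500.0))

def hyperbolic(t, c, t_star):
    """Hyperbolic growth, P(t) = c / (t* - t): growth accelerates with population."""
    return c / (t_star - t)

exp_params, _ = curve_fit(exponential, years, pop, p0=[0.5, 0.003])
hyp_params, _ = curve_fit(hyperbolic, years, pop, p0=[200.0, 2030.0])

# Out-of-sample check against rough post-1900 actuals (also round numbers).
for year, actual in [(1950, 2.5), (2000, 6.1)]:
    print(f"{year}: exponential={exponential(year, *exp_params):.1f}B  "
          f"hyperbolic={hyperbolic(year, *hyp_params):.1f}B  actual~{actual}B")
```

Which functional form holds up out of sample is exactly the comparison described above; richer stochastic versions just put noise and priors around the same two structures.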
If you roughly agree with the model, can you model your uncertainty over the optimal level of population as a lognormal distribution and give me the parameters? Why do you think it’s lower than ~ 10 billion people?
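For concreteness, a specification of the kind I’m asking for could look like the sketch below. The median and spread are placeholder values chosen only to show the shape of the exercise, not a claim about where the optimum actually lies:

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical uncertainty over the optimal world population, in billions.
# Placeholder parameters: median of 40 billion, one log-sd = a factor of ~3.
median_billions = 40.0
sigma = np.log(3.0)

dist = lognorm(s=sigma, scale=median_billions)  # scipy: scale = exp(mu) = median

print("10th / 50th / 90th percentiles (billions):", np.round(dist.ppf([0.1, 0.5, 0.9]), 1))
print("P(optimum below 10 billion):", round(float(dist.cdf(10.0)), 3))
```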
First, I would be careful using the argument “models that did not work in the past won’t work in the future”. A person in 1900 would probably have failed to predict the key technologies that enabled the population growth we saw in the 20th century (the Haber–Bosch process and improved crop varieties), in the same way that we might fail to see technologies that will be developed this century and would allow even higher growth. But we shouldn’t count on those new technologies materializing: they might happen, or they might not.
Second, I think it would be a futile exercise to fit a lognormal distribution to my uncertainty: treating the world as a single entity does not really make sense in the world we live in right now. Places like Bangladesh or India would clearly be better off with a (much) smaller population, and there might be places that would be better off with more people. My impression is that, in general, most developing countries (the ones where the population is growing fastest now) would be much better off with smaller populations. So I wouldn’t really know how to estimate the “best” number, and even if I did, I would just be engaging in a pseudo-statistical analysis to give a fake sense of certainty. However, even if I cannot give you precise parameters for a lognormal, I think that if I see a person who is 1.7 m tall and weighs 300 kg, I should be allowed to say: that’s too much. I don’t know what his “correct” weight should be, but it is clearly less.
Last, I do agree (as has been argued before in this thread) that a perfectly managed planet Earth should be able to sustain a far larger population, but we are not a perfectly managed Earth and we probably never will be. Given that humans are imperfect agents with very limited coordination ability, I would prefer to live in a world where the population stabilizes at 10 billion people or fewer than in a world with 100 times more humans.
First, I would be careful using the argument “models that did not work in the past won’t work in the future”.
My exact argument is that if a particular type of model can’t accurately retrodict the past without overfitting then you should update against it when making predictions about the future. I didn’t make an absolutist claim such as “models that didn’t work in the past won’t work in the future”, just that the fact that they didn’t work in the past is an update against them.
Places like Bangladesh or India would clearly be better off with a (much) smaller population, and there might be places that would be better off with more people.
A specific country like Bangladesh would of course be better off per person if it were less populated, but that’s just because it is able to trade with the rest of the world and its current level of population is adapted to those conditions. You can’t compare the optimal level of population when you can interact with the outside world with the optimal level when you can’t; they are quite different.
I think this conversation is not fruitful so I’ll stop it here unless you can come up with specific object-level predictions about e.g. the next 10 years that we would disagree about.