I see, I think I would classify this under “values can be satisfied with a small portion of the universe” since it’s about what makes your life as an individual better in the medium term.
I think that’s a poor way to classify my view. What I said was that population growth likely causes real per-capita incomes to increase. This means that people will actually get greater control over the universe, in a material sense. Each person’s total share of GDP would decline in relative terms, but their control over their “portion of the universe” would actually increase, because the effect of greater wealth outweighs the relative decline against other people.
I am not claiming that population growth is merely good for us in the “medium term”. Instead I am saying that population growth on current margins seems good over your entire long-term future. That does not mean that population growth will always be good, irrespective of population size, but all else being equal, it seems better for you that more people (or humanish AIs who are integrated into our culture) come into existence now and begin contributing to innovation, specialization, and trade.
Moreover, we do not appear close to the point at which the marginal value flips sign, turning population growth into a negative.
but their control over their “portion of the universe” would actually increase
Yes, in the medium term. But given a very long future, it’s likely that any control so gained could eventually also be gained on a more conservative trajectory, while leaving you/your values with a bigger slice of the pie in the end. So I don’t think that gaining more control in the short run is very important—except insofar as that extra control helps you stabilize your values. On current margins it does seem plausible, I suppose, that human population growth improves value stabilization faster than it erodes your share, although I don’t think I would extend that to creating an AI population larger in size than the human one.
On current margins it does seem plausible, I suppose, that human population growth improves value stabilization faster than it erodes your share, although I don’t think I would extend that to creating an AI population larger in size than the human one.
I mean, without rapid technological progress in the coming decades, the default outcome is that I just die and my values don’t get stabilized in any meaningful sense. (I don’t care a whole lot about living through my descendants.)
In general, I think you’re probably pointing at something that might become true in the future, and I’m certainly not saying that population growth will always be selfishly valuable. But when judged from the perspective of my own life, it seems pretty straightforward that accelerating technological progress through population growth (both from humans and AIs) is net-valuable even in the face of non-trivial risks to our society’s moral and cultural values.
(On the other hand, if I shared Eliezer’s view of a >90% chance of human extinction after AGI, I’d likely favor slowing things down. Thankfully I have a more moderate view than he does on this issue.)