but their control over their “portion of the universe” would actually increase
Yes, in the medium term. But given a very long future, any control gained this way could likely also be gained eventually on a more conservative trajectory, while leaving you/your values with a bigger slice of the pie in the end. So I don’t think gaining more control in the short run is very important, except insofar as that extra control helps you stabilize your values. On current margins, I suppose it does seem plausible that human population growth improves value stabilization faster than it erodes your share, though I wouldn’t extend that to creating an AI population larger than the human one.
I mean, without rapid technological progress in the coming decades, the default outcome is that I simply die and my values never get stabilized in any meaningful sense. (I don’t care much about living on through my descendants.)
In general, I think you’re probably pointing at something that might become true in the future, and I’m certainly not saying that population growth will always be selfishly valuable. But judged from the perspective of my own life, it seems pretty straightforward that accelerating technological progress through population growth (both human and AI) is net-valuable, even in the face of non-trivial risks to our society’s moral and cultural values.
(On the other hand, if I shared Eliezer’s view of a >90% chance of human extinction after AGI, I’d likely favor slowing things down. Thankfully I have a more moderate view than he does on this issue.)