I think people sharing Yudkowsky’s position think that different humans ultimately (on reflection?) have very similar values
I agree that’s what many people believe, but this post was primarily about exploring the idea that humans do not actually have very similar values upon reflection. Joe Carlsmith wrote,
And one route to optimism about “human alignment” is to claim that most humans will converge, on reflection, to sufficiently similar values that their utility functions won’t be “fragile” relative to each other. In the light of Reason, for example, maybe Yudkowsky and my friends would come to agree about the importance of preserving boredom and reality-contact. But even setting aside problems for the notion of “reflection” at stake, and questions about who will be disposed to “reflect” in the relevant way, positing robust convergence in this respect is a strong, convenient, and thus-far-undefended empirical hypothesis – and one that, absent a defense, might prompt questions, from the atheists, about wishful thinking.
[...]
We can see this momentum as leading to a yet-deeper atheism. Yudkowsky’s humanism, at least, has some trust in human hearts, and thus, in some uncontrolled Other. But the atheism I have in mind, here, trusts only in the Self, at least as the power at stake scales – and in the limit, only in this slice of Self, the Self-Right-Now. Ultimately, indeed, this Self is the only route to a good future. Maybe the Other matters as a patient – but like God, they can’t be trusted with the wheel.
As it happens, I agree more with this yet-deeper atheism, and don’t put much faith in human values.
Another point: I don’t think that Joe was endorsing the “yet-deeper atheism”, just exploring it as a possible way of orienting. So I think that he could take the same fork in the argument, denying that humans have ultimately dissimilar values in the same way that future AI systems might.
Even so, it seems valuable to explore the implications of the idea presented in the post, even if the post author did not endorse the idea fully. I personally think the alternative view—that humans naturally converge on very similar values—is highly unlikely to be true, and as Joe wrote, seems to be a “thus-far-undefended empirical hypothesis – and one that, absent a defense, might prompt questions, from the atheists, about wishful thinking”.
In that case I’m actually kinda confused as to why you don’t think that population growth is bad. Is it that you think that your values can be fully satisfied with a relatively small portion of the universe, and you or people sharing your values will be able to bargain for enough of a share to do this?
On current margins, population growth seems selfishly good because:
1. Our best models of economic growth predict increasing returns to scale from population size, meaning that population growth makes most of us richer (see the toy sketch after this list), and
2. The negatives of cultural/value drift seem outweighed by the effect of increased per-capita incomes.
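To make (1) a bit more concrete, here is a toy numerical sketch of the increasing-returns claim. Everything in it is my own illustration: the idea elasticity of 0.3 is made up, and the model is just the standard “non-rival ideas” intuition in miniature, not a calibrated growth model.

```python
# Toy illustration of claim (1), with made-up parameters: if ideas are
# non-rival, aggregate output Y = A * L can have increasing returns to scale
# in population L, because the idea stock A itself grows with L. Here I
# assume A is proportional to L**0.3, so per-capita income y = Y / L = A
# rises as the population grows.

def relative_income_per_capita(population, baseline=1e9, idea_elasticity=0.3):
    """Per-capita income relative to a baseline population of `baseline`."""
    return (population / baseline) ** idea_elasticity

for pop in [1e9, 2e9, 1e10]:
    print(f"population {pop:.0e}: "
          f"{relative_income_per_capita(pop):.2f}x baseline per-capita income")
# population 1e+09: 1.00x baseline per-capita income
# population 2e+09: 1.23x baseline per-capita income
# population 1e+10: 2.00x baseline per-capita income
```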
Moreover, even in a Malthusian state in which the median income is at subsistence level, it is plausible that some people could have very high incomes from their material investments, and existing people (including us) have plenty of opportunities to accumulate wealth to prepare for this eventual outcome.
Another frame here is to ask: “What’s the alternative, selfishly?” Population growth accelerates technological progress, which could extend your lifespan and increase your income. A lack of population growth could thus lead to your early demise, in a state of material deprivation. Is the second scenario really better because you have greater relative power?
To defend (2), one intuition pump is to ask, “Would you prefer to live in a version of America with 1950s values but 4x greater real per-capita incomes, rather than in 2020s America?” To me, the answer is “yes”, selfishly speaking.
All of this should of course be distinguished from what you think is altruistically good. You might, for example, be something like a negative utilitarian and believe that population growth is bad because it increases overall suffering. I am sympathetic to this view, but at the same time it is hard to bring myself to let this argument overcome my selfish values.
I see, I think I would classify this under “values can be satisfied with a small portion of the universe” since it’s about what makes your life as an individual better in the medium term.
I think that’s a poor way to classify my view. What I said was that population growth likely causes real per-capita incomes to increase. This means that people will actually get greater control over the universe, in a material sense. Each person’s total share of GDP would decline in relative terms, but their control over their “portion of the universe” would actually increase, because the effect of greater wealth outweighs the relative decline against other people.
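To spell out that arithmetic with some made-up numbers (the same assumed degree of increasing returns as in the earlier sketch; nothing hinges on the exact exponent): each person’s fractional share of output falls with population, but the absolute output they command rises.

```python
# Illustrative arithmetic only, with an assumed aggregate output of L**1.3
# (increasing returns): a 10x larger population cuts each person's *share*
# of total output by 10x, yet leaves them with roughly 2x more output in
# absolute terms.

def per_person(population, baseline=1e9, returns_to_scale=1.3):
    total_output = (population / baseline) ** returns_to_scale  # normalized to 1.0 at baseline
    share_of_total = 1.0 / population                           # each person's fraction of output
    absolute_output = total_output / (population / baseline)    # per-capita output vs. baseline
    return share_of_total, absolute_output

for pop in [1e9, 1e10]:
    share, absolute = per_person(pop)
    print(f"population {pop:.0e}: share of total {share:.1e}, "
          f"per-capita output {absolute:.2f}x baseline")
# population 1e+09: share of total 1.0e-09, per-capita output 1.00x baseline
# population 1e+10: share of total 1.0e-10, per-capita output 2.00x baseline
```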
I am not claiming that population growth is merely good for us in the “medium term”. Instead I am saying that population growth on current margins seems good over your entire long-term future. That does not mean that population growth will always be good, irrespective of population size, but all else being equal, it seems better for you that more people (or humanish AIs who are integrated into our culture) come into existence now and begin contributing to innovation, specialization, and trade.
And moreover, we do not appear close to the point at which the marginal value flips its sign, turning population growth into a negative.
but their control over their “portion of the universe” would actually increase
Yes, in the medium term. But given a very long future it’s likely that any control so gained could eventually also be gained while on a more conservative trajectory, while leaving you/your values with a bigger slice of the pie in the end. So I don’t think that gaining more control in the short run is very important—except insofar as that extra control helps you stabilize your values. On current margins it does actually seem plausible that human population growth improves value stabilization faster than it erodes your share, I suppose, although I don’t think I would extend that to creating an AI population larger in size than the human one.
On current margins it does actually seem plausible that human population growth improves value stabilization faster than it erodes your share, I suppose, although I don’t think I would extend that to creating an AI population larger in size than the human one.
I mean, without rapid technological progress in the coming decades, the default outcome is that I just die and my values don’t get stabilized in any meaningful sense. (I don’t care a whole lot about living through my descendants.)
In general, I think you’re probably pointing at something that might become true in the future, and I’m certainly not saying that population growth will always be selfishly valuable. But when judged from the perspective of my own life, it seems pretty straightforward that accelerating technological progress through population growth (both from humans and AIs) is net-valuable even in the face of non-trivial risks to our society’s moral and cultural values.
(On the other hand, if I shared Eliezer’s view of a >90% chance of human extinction after AGI, I’d likely favor slowing things down. Thankfully I have a more moderate view than he does on this issue.)