To add to this, nothing mandates that the future must contain as many humans as exist now for human civilization to work. In particular, since productivity has risen so much in recent decades, humanity could probably now sustain the level of prosperity of the late twentieth century with half its current population.
And of course, the whole premise of this site is that AIs have the capability to render this entire argument moot. In principle, if an FAI can be created, we can all rest peacefully and retire, and a younger generation will no longer be required. Of course, things get complicated in practice, which is why you can’t simply argue ‘we need more children!’
Maybe we decide each couple should ideally have 1.7 children, maybe 2.3. The point still stands that some children need to be reared and it is, all things being equal, your duty to do your part in this.
> all things being equal, your duty to do your part in this.
One can ‘do their duty’ by being a good person and contributing to society, thus helping other people’s children grow up. But then, most people already try to contribute to society, and in wealthy countries (which your criticism seems to be aimed at) the system appears to be working well enough to support new children as it stands, so it’s not as though any extra effort is needed on society’s part.
By the way, just to clarify, I’m not saying people should work so that other people can have kids. All I’m saying is that this is a hole in your argument. Someone who doesn’t bear children is not necessarily a freeloader.
> One can ‘do their duty’ by being a good person and contributing to society, thus helping other people’s children grow up.
This is like saying “I’m going to evade paying taxes, I’ll just contribute to society in other ways”. This might work, you might even come out ahead, but you are prima facie being a freeloader.
I don’t see how this would be any different from any of the billion other possible game-theoretic freeloader problems.
By this logic, all the possible freeloader things that you’re doing (i.e. the contributions you’re not giving that some large number of other people are giving) are also worth consideration relative to their potential or possible value to society. Have you watered a plant today? Have you walked to work instead of using a car today? Have you saved someone’s life today? Have you taught someone something useful today? Have you marginally assisted in future scientific and technological advances today? Or all of those things this hour? This minute? No? Because there’s a lot of people out there who have, you freeloader! Sure, you might say you’re prioritizing other things, that you’re trying to contribute in other, better ways. This might work, you might even come out ahead, but you are prima facie a horrible being that causes all sorts of headaches for Game Theorists worldwide.
Now please proceed to ignore me and accuse people of freeloading on this particular problem that you think is more important than the other ones.
I have nothing against division of labor. Not everyone needs to be a farmer. But you can’t effectively farm children, so we need most people to pitch in. This is a volunteer system, very unlike growing plants. If you grow plants and sell them to me, then I’m not a freeloader. But if you raise children, I don’t pay you for them, yet I still benefit. That’s where the freeloader part comes in.
> Now please proceed to ignore me
Why? I’m assuming this is some sort of sarcasm rather than an honest request, but please clarify if this is not the case. If it was sarcasm, what was it motivated by?
The possible interpretation as sarcasm was intentional; the phrasing was meant to set up a game board favorable to me in game-theoretic terms (i.e. to create a catch-22 where I win either way, in simple terms):
If you take my request at face value and do ignore me, I’m instinctively and emotionally content in the knowledge that I’ve uselessly thrown words at someone on the Internet I disagree with, which is something I’ve done many times before and am now comfortable with. The less reasonable parts of my brain are satisfied that you’ve done as I said, that I’m in control and not socially at risk. The more reasonable parts… well, it was an acceptable expected-utility gamble that happened to result in a minor negative outcome instead of a minor positive one, and I still got my initial entertainment and mental exercise from writing the comment.
If you take it as sarcasm or as some sort of challenge and decide to engage with me in an intellectual discussion about the game-theoretic issues, the cost analyses, or even just why you think these particular issues are more relevant and important while others can be discarded, then I’ve made progress and we’re now in a part of the discussion that is more interesting (for me), where I believe we are closer to a satisfying conclusion.
Of course, I was betting that one of those two responses would be chosen, at some risk of unexpected divergence, and possibly at some cost to you in the form of an ego hit. However, unexpected divergences include this one, where you ask me about my choice of words, which I believe is of positive value. To be honest, writing the above was quite fun. Plus, you simultaneously went for one of the favorable options. Quite a success.
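The ‘favorable game board’ above is essentially a small expected-utility calculation. A toy sketch of it, with entirely made-up probabilities and payoffs (not my actual numbers, which I never quantified):

```python
# Toy expected-utility sketch of the two anticipated responses plus the
# unexpected one. All probabilities and utilities are illustrative guesses.

outcomes = {
    "ignored":        (0.45, -1),  # minor negative: words thrown into the void
    "engaged":        (0.45, +3),  # the interesting-discussion branch
    "asked_about_it": (0.10, +2),  # unexpected divergence, still positive
}

expected_utility = sum(p * u for p, u in outcomes.values())
print(expected_utility)  # positive under these assumptions
```

The exact numbers don’t matter; the point is that when both anticipated branches plus the likely divergences have non-catastrophic payoffs, the gamble has positive expected utility before it is even resolved.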
Now on to the actual topic at hand:
Various use-of-words arguments could be made regarding the volunteer versus non-volunteer aspects of human-making and other socially beneficial endeavors, but more interesting is the point about effective child-farming: effective by what metric?
My mental model of Quality-Adjusted New Humans takes a few high-quality humans arising at random among a large number of new humans, all living in a (ceteris paribus) better environment, to be far superior to a marginally higher ratio of high- to low-quality new humans in a lower-quality environment. As such, I think it is more efficient and beneficial to have experts focus on improving the environment, even at the cost of losing this small number of potential parents.
In practice, the above translates to: first-world, educated, high-quality people, such as might be expected to participate on LW, would benefit society more by focusing on creating a better society for new humans to grow up in than by adding a marginal number of high-expected-quality new humans.
Which all probably relates to my priors about the impact of nature vs. nurture, and my priors about the cost-benefit of one high-quality human versus many low-quality humans.
And on the other end of the inference chain, this leads to my conclusion that we should not recommend that LW participants and other audience members focus on producing children, with the corollary (but separate) points about where I do think they should focus their efforts.
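The quality-versus-environment trade-off above can be sketched as a toy calculation. Every weight and count below is an illustrative assumption, not a figure from this discussion; the point is only that a small across-the-board environment multiplier can outweigh a marginal bump in the high-quality fraction:

```python
# Toy quality-adjusted-new-humans model. Every parameter is an
# illustrative assumption, chosen only to show the shape of the trade-off.

def total_quality(n_children, env_multiplier, frac_high):
    """Crude additive model: per-child quality weight scaled by environment."""
    high_weight, low_weight = 1.0, 0.3  # assumed per-child quality weights
    per_child = frac_high * high_weight + (1 - frac_high) * low_weight
    return n_children * per_child * env_multiplier

# Option A: high-quality people raise children themselves ->
# slightly larger high-quality fraction, baseline environment.
option_a = total_quality(n_children=1000, env_multiplier=1.00, frac_high=0.12)

# Option B: they work on improving the environment instead ->
# slightly smaller high-quality fraction, but every child benefits.
option_b = total_quality(n_children=1000, env_multiplier=1.10, frac_high=0.10)

print(option_a, option_b)  # under these assumptions, option B comes out ahead
```

Under different priors (say, a much larger gap between the high and low weights, as someone with strongly nature-leaning priors might hold), the comparison flips, which is exactly the prior disagreement this exchange ends on.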
That was an awesome answer, which leaves me with very little to add. I’ll merely say that—as you’ve already implicitly predicted—what seems to be going on is that my nature/nurture priors are significantly different from yours and this leads us to such different conclusions.
And there’s the satisfying conclusion. Our priors differ, but we agree on the evidence and our predictions. We can now safely adjourn, or move on to a more elaborate discussion about our respective priors.
As an important data point, my wordgaming experiments rarely work out this well, but so far have retained net positive expected utility (as do, unsurprisingly, most efforts at improving social skills). I’ll bump up this tactic a few notches on my mental list.