Why? I’m assuming this is some sort of sarcasm rather than an honest request, but please clarify if this is not the case. If it was sarcasm, what was it motivated by?
The possible interpretation as sarcasm was intentional; the phrasing was meant to set up a game board favorable to me in game-theoretic terms (in simple terms, to provoke a “catch-22” where I win):
If you take my request seriously, or simply ignore me, I’m instinctively and emotionally content in the knowledge that I’ve uselessly thrown words at someone on the Internet I disagree with, which is something I’ve done many times before and am now comfortable with. The less reasonable parts of my brain are satisfied that you’ve done as I said, that I’m in control and not socially at risk. The more reasonable parts… well, it was an acceptable expected-utility gamble that happened to result in a minor negative outcome instead of a minor positive one, and I still got my initial entertainment and mental exercise from writing the comment.
If you take it as sarcasm or as some sort of challenge and decide to engage with me in an intellectual discussion about the game-theoretic issues, the cost analyses, or even just why you think these particular issues are more relevant and important and others can be discarded, then I’ve made progress and we’re now in a more interesting (for me) part of the discussion where I believe we are closer to a satisfying conclusion.
Of course, I was somewhat betting that one of those two would be chosen, at some risk of unexpected divergence, and possibly at some cost to you in the form of an ego hit or something similar. However, unexpected divergences include this one, where you asked me about my choice of words, and I believe it has positive value. To be honest, writing the above was quite fun. Plus you simultaneously went for one of the favorable options. Quite a success.
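That gamble can be sketched as a toy expected-utility calculation. Every payoff and probability below is a made-up illustrative number, not anything actually estimated:

```python
# Toy sketch of the conversational "game board" described above.
# All payoffs and probabilities are hypothetical illustrations.

payoffs = {
    "taken_literally_or_ignored": -1,  # minor negative; entertainment already banked
    "engaged_as_a_challenge": 3,       # the interesting discussion I was after
    "unexpected_divergence": 1,        # e.g. being asked about my choice of words
}

probabilities = {
    "taken_literally_or_ignored": 0.4,
    "engaged_as_a_challenge": 0.4,
    "unexpected_divergence": 0.2,
}

expected_utility = sum(probabilities[o] * payoffs[o] for o in payoffs)
print(expected_utility)  # 1.0 with these assumed numbers: a favorable gamble
```

The point of the sketch is only that the worst branch costs little while the best branch pays well, so the bet stays positive even under fairly pessimistic probabilities.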
Now on to the actual topic at hand:
Various use-of-words arguments could be made regarding the non-volunteer versus volunteer aspects of human-making and other socially beneficial endeavors, but more interesting is the point about effective children-farming: effective by what metric?
My mental model of Quality-Adjusted New Humans holds that a few high-quality humans, spawned at random within a large number of new humans all living in a ceteris paribus better environment, are far superior to a marginally higher ratio of high- to low-quality new humans in a lower-quality environment. As such, I think it’s more efficient and beneficial to have experts focus on improving the environment, even at the cost of losing this small number of potential parents.
In practice, the above translates to: first-world, educated, high-quality people, such as might be expected to participate on LW, would benefit society more by focusing on creating a better society for new humans to grow up in than by adding a marginal number of high-expected-quality-adjusted new humans.
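The Quality-Adjusted-New-Humans comparison above can be sketched as a toy calculation. The function and every number in it are hypothetical illustrations of the model’s shape, not empirical estimates:

```python
# Toy sketch of the Quality-Adjusted New Humans (QANH) comparison above.
# All numbers are hypothetical illustrations, not empirical estimates.

def qanh(population, high_quality_fraction, environment_multiplier,
         high_value=10.0, low_value=1.0):
    """Total quality-adjusted value of a cohort of new humans.

    Each human's baseline value depends on whether they are 'high quality',
    and the shared environment scales everyone's realized value.
    """
    high = population * high_quality_fraction * high_value
    low = population * (1 - high_quality_fraction) * low_value
    return (high + low) * environment_multiplier

# Scenario A: experts improve the environment; slightly fewer high-quality births.
scenario_a = qanh(population=1_000_000, high_quality_fraction=0.010,
                  environment_multiplier=1.2)

# Scenario B: experts have children instead; a marginally better quality ratio,
# but the environment stays the same.
scenario_b = qanh(population=1_000_000, high_quality_fraction=0.011,
                  environment_multiplier=1.0)

print(scenario_a > scenario_b)  # True under these assumptions
```

Because the environment multiplier applies to the whole cohort while extra high-quality births only shift a small fraction, even a modest environmental improvement dominates in this toy model; different nature/nurture priors would change the multipliers and could flip the conclusion.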
Which all probably relates to my priors about the impacts of nature versus nurture, and to my priors about the cost-benefits of one high-quality human versus many low-quality humans.
And on the other end of the inference chain, this leads to my conclusion that we should not recommend that LW participants and other audience members focus on producing children, with the corollary (but separate) points about where I do think they should focus their efforts.
That was an awesome answer, which leaves me with very little to add. I’ll merely say that—as you’ve already implicitly predicted—what seems to be going on is that my nature/nurture priors are significantly different from yours and this leads us to such different conclusions.
And there’s the satisfying conclusion. Our priors differ, but we agree on the evidence and our predictions. We can now safely adjourn or move on to a more elaborate discussion about our respective priors.
As an important data point, my wordgaming experiments rarely work out this well, but so far have retained net positive expected utility (as do, unsurprisingly, most efforts at improving social skills). I’ll bump up this tactic a few notches on my mental list.