I find it repugnant to even consider creating people with lives worse than the current average. So some resources will just have to remain unused, if that’s the condition.
What do you find repugnant about it?
Intentionally creating people less happy than I am. Think about it from the parenting perspective. Would you want to bring unhappy children into the world (your personal happiness level being the baseline), if you could predict their happiness level with certainty?
That is, your life is the least happy life worth living? If you reflectively endorse that, we ought to have a talk on how we can make your life better.
This, in conjunction with some other stuff I’ve been working on, prompted me to rethink some things about my priorities in life. Thanks!
Again, a misunderstanding. See my other reply.
It’s not clear to me that this is a misunderstanding. I think that my life is pretty dang awesome, and I would be willing to have children that are significantly less happy than I am (though, ceteris paribus, more happiness is better). If you aren’t, reaching out with friendly concern seems appropriate.
Remember, not “provided I already have children, I’m OK with them being significantly less happy than I am”, but “knowing for sure that my children will be significantly less happy than I am, I will still have children”. It may not give you pause, but it probably will to most (first-world) people.
I suspect that most first-world people are significantly less happy than many happy people on LW, and that those people on LW would still be very happy to have children who were as happy as average first-worlders, though reasonably hoping to do better.
Well… hrm.
I have evidence that if my current happiness level is the baseline, I prefer the continued existence of at least one sub-baseline-happy person (myself) to their nonexistence. That is, when I go through depressive episodes in which I am significantly less happy than I am right now, I still want to keep existing.
I suspect that generalizes, though it’s really hard to have data about other people’s happiness.
It seems to me that if I endorse that choice (which I think I do), I ought not reject creating a new person whom I would otherwise create, simply because their existence is sub-baseline-happy.
That said, it also seems to me that there’s a level of unhappiness below which I would prefer to end my existence rather than continue my existence at that level. (I go through periods of those as well, which I get through by remembering that they are transient.) I’m much more inclined to treat that level as the baseline.
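Put as a rough rule (this is just my restatement, with made-up symbols, and the threshold is obviously fuzzy):

```latex
% My restatement, not anyone's exact words; the symbols are made up.
% w    = predicted lifetime happiness of the person being created
% h_me = my own current happiness (the baseline proposed upthread)
% w_0  = the level below which I'd prefer nonexistence to continued
%        existence (the level I'm more inclined to treat as the baseline)
\[
\text{proposed upthread: create only if } w \ge h_{\mathrm{me}}
\qquad\text{vs.}\qquad
\text{my inclination: create only if } w \ge w_0, \quad w_0 \ll h_{\mathrm{me}}
\]
```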
Preferring your own continued existence at sub-baseline happiness does not contradict what I said. Creation != continued existence, as emphasized in the OP. There is a significant hysteresis between the two. You don’t want to have children less happy than you are, but you won’t kill your own unhappy children.
Agreed that creation != continued existence.
There are situations under which I would kill my own unhappy children. Indeed, there are even such situations where, were they happier, I would not kill them. However, “less happy than I am” does not describe those situations.
Looks like we agree, then.
“Intentionally creating people less happy than I am” probably isn’t the same as “creating people with lives worse than the current average”.
Why would my personal happiness level be the baseline? I’m lucky enough to have a high happiness set point, but that doesn’t mean I think everyone less happy than I am has a life that is not worth living.
Would I want to bring unhappy children into the world? Unhappy as in net negative for their life? No. Unhappy as in “less happy than average”? It depends what the average is, but quite possibly.
I’ve considered this possibility as well.
One argument that’s occurred to me is that adding the extra people in A+ might actually be harming the people in population A, because the people in population A would presumably prefer that there not be a bunch of desperately poor people who need their help kept forever out of reach, and adding the people in A+ violates that preference. Of course, the populations are not aware of each other’s existence, but it’s possible to harm someone without their knowledge: if I spread dirty rumors about someone, I’d say that I harmed them even if they never find out about it. (A concrete version of the A/A+ setup is sketched below.)
However, I am not satisfied with this argument; it feels a little too much like a rationalization to me. It might also suggest that we ought to be careful about how we reproduce, in case it turns out that there are aliens out there somewhere living lives far more fantastic than ours.
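(For concreteness: the A/A+ comparison above is the standard mere-addition setup. The welfare numbers in this sketch are made up purely for illustration; the post doesn’t give any.)

```latex
% Illustrative numbers only; the post does not specify any welfare levels.
% A:  10^6 people, each at welfare 100.
% A+: the same 10^6 people at welfare 100, plus 10^6 more at welfare 10
%     (lives barely worth living, but still positive).
\[
\text{Total}(A) = 10^6 \cdot 100 = 10^8, \qquad \text{Avg}(A) = 100
\]
\[
\text{Total}(A^{+}) = 10^6 \cdot 100 + 10^6 \cdot 10 = 1.1 \cdot 10^8, \qquad \text{Avg}(A^{+}) = 55
\]
```

The A-people’s own welfare is unchanged at 100; the worry is only about whether the added, worse-off group frustrates a preference the A-people hold.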
Instrumentally, if absolutely no interaction between the two groups is possible, not even an indirect one, then there is no way one group can harm the other.
True, but only because rumors can harm people, so the “no interaction” rule is broken.
I’m not sure about that. I don’t think most people would want rumors spread about them, even if the rumors did nothing other than make some people think worse of them (and those people never acted on those thoughts).
Similarly, it seems to me that someone who cheats on their spouse and is never caught has wronged their spouse, even if their spouse is never aware of the affair’s existence, and the cheater doesn’t spend less money or time on the spouse because of it.
Now suppose I have a strong preference to live in a universe where innocent people are never tortured for no good reason, and suppose someone in some far-off place that I can never interact with tortures an innocent person for no good reason. Haven’t my preferences been thwarted in some sense?
How do you know it is not happening right now? Since, by your assumption, there is no way to tell, you might as well assume the worst and be perpetually unhappy. I warmly recommend instrumentalism as a workable alternative.
There is no need to be unhappy over situations I can’t control. I know that awful things are happening in other countries that I have no control over, but I don’t let that make me unhappy, even though my preferences are perpetually thwarted by those things. But the fact that it doesn’t make me unhappy doesn’t change the fact that it’s not what I’d prefer.