Inspired by the relatively recent discussions of Parfit’s Repugnant Conclusion, I started to wonder how many of us actually hold that, ceteris paribus, a world with more happy people is better than a world with fewer happy people. I am not that much interested in the answer generated by the moral philosophy you endorse, but rather in the intuitive gut feeling: imagine you learn from a sufficiently trustworthy source about the existence of a previously unknown planet (1) with a billion people living on it, all of them reasonably (2) happy. Would it feel like good news (3)? Please answer the poll in the subcomments.
Explanatory notes:
(1) Assume that the planet is so distant or otherwise separated that you are certain beyond all reasonable doubt that no contact will ever be established between it and Earth. You, your descendants, or anybody else on Earth will never know anything about the new planet except the initial information that it exists and that at one point in its history it had one billion happy people.
(2) If you believe there is some level of happiness necessary for a life to be worth living, “reasonably happy” should be interpreted as being above this level. On the other hand, it should still be a human level of happiness, nothing outlandish. The level of happiness should be considered sustainable and not in conflict with the new planet’s inhabitants’ values, if this is necessary for your evaluation.
(3) That is, would it feel naturally good, similar to how you feel when you succeed in something you care about, or when you learn that a child was born to one of your friends, or that one of your relatives was cured of a serious disease? I am not interested in good feelings that appear only after intellectual reflection, as when you decide that something is good according to your adopted moral theory and then feel good about how moral you are.
In any case, if possible, try to leave aside MWI, UDT, TDT, anthropics and AI.
The reason I am asking is that I don’t terminally value other people whom I don’t directly know. I am still disturbed by learning about their suffering, and I may value them instrumentally as bearers of cultural or linguistic diversity or for other reasons, but I am not sad if I learn that, for example, fifty miners have died in an accident in Chile. I am a moral nihilist (I think that morality reduces entirely to personal preferences, community norms, or game theory, depending on the context), and thus I accept the lack of intuitive sadness as a good indicator of my values. According to LW standards, am I evil?
Upvote this if learning about the new planet full of happy people feels like good news to you.
Upvote this if learning about the new planet full of happy people doesn’t feel like good news to you.
To avoid the massive utility of knowing that another intelligent species survived the Great Filter, you might want to specify that a 93rd planet full of reasonably happy people has just been located millions of light-years away.
I think that, given our evolutionary origins, it’s quite normal to have stronger feelings for people we know personally and associate ourselves with. All this means is that humans are poor administrators of other people’s happiness without special training. You may try thinking about how you would feel if you had a button that collapsed a mine in Chile when you pushed it. Would you push it on a whim just because miners dying in Chile doesn’t necessarily make you sad, or would you suddenly feel a personal connection to those miners through the button that gives you control over their fate? What if you had to push a button every day to prevent the mine from collapsing? You might find that it isn’t so much your emotional/moral detachment from the miners in Chile as your causal detachment from their fates that reduces your emotional/moral feelings about them.
I wouldn’t push the button because of:
1. fear that my action might be discovered,
2. feeling guilty of murder,
3. other people’s suffering (the miners’ while they were dying and their relatives’ afterwards) having negative utility to me,
4. “on a whim” not sounding like a reasonable motivation,
5. fear that by doing so I would become accustomed to killing.
If the button painlessly killed people without relatives or friends and I were very certain that my pushing would remain undiscovered and there were some minimal reward for that, that would solve 1, 3 and 4. It’s more difficult to imagine what would placate my inner deontologist who cares about 2; I don’t want to stipulate memory erasing since I have no idea how I would feel after having my memory erased.
Nevertheless, if the button created new miners from scratch, I wouldn’t push it if there were some associated cost, no matter how low, assuming that I had no interest in the Chilean mining industry.
It has survived the filter so far, but for all we know it may go extinct within 200 years.
Such a civilization surviving thus far still provides a large quantity of information. In particular, it makes us think the early stages of the filter are easier, which shifts the filter’s weight onto the later stages and thus causes us to update our probability of future survival downward for both civilizations. In other words, hearing about another civilization makes us think it more likely that said civilization will go extinct soon.
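To make the direction of that update concrete, here is a minimal sketch with invented numbers (the two hypotheses, all the probabilities, and the variable names are mine, purely for illustration, not estimates of anything real):

```python
# Toy Bayesian update for the argument above. All numbers are made up.
#
# Two coarse hypotheses about where the Great Filter sits:
#   H_early: reaching civilization is very hard, surviving afterwards is easy
#   H_late:  reaching civilization is easy, surviving afterwards is very hard

prior = {"H_early": 0.5, "H_late": 0.5}

# How likely we are to observe another civilization under each hypothesis
p_observe_other = {"H_early": 0.01, "H_late": 0.5}

# How likely a young civilization is to survive long-term under each hypothesis
p_survive = {"H_early": 0.5, "H_late": 0.01}

# Bayes update on the evidence "another civilization exists"
evidence = sum(prior[h] * p_observe_other[h] for h in prior)
posterior = {h: prior[h] * p_observe_other[h] / evidence for h in prior}

survival_before = sum(prior[h] * p_survive[h] for h in prior)
survival_after = sum(posterior[h] * p_survive[h] for h in posterior)

print(f"P(we survive) before the news: {survival_before:.3f}")  # ~0.255
print(f"P(we survive) after the news:  {survival_after:.3f}")   # ~0.020
```

With these made-up numbers, the news pushes almost all the weight onto the “filter lies ahead of us” hypothesis, and the estimated survival probability for both civilizations drops accordingly.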
Anyway, even if prase didn’t mention the Great Filter in particular, given that he/she said “in any case, if possible, try to leave aside MWI, UDT, TDT, anthropics and AI”, I don’t think he/she was interested in answers involving the Great Filter, either.
(Not sure this is the best way to say what I’m trying to say, but I hope you know what I mean anyway.)
You are right.
How about someone dying from malaria because you didn’t donate $1,600 to the AMF?
I’m not sure whether I would get more utility from spending $1,600 once to save a random number of people for only a few months or years, or from focusing on a few individuals and trying to make their lives much better and longer (perhaps by offering microloans to smart people with no capital and in danger of starving). The “save a child for dollars a day” marketing seems to have more emotional appeal, given that those charities can afford to skim 90% off the top and still get donations. I should probably value 1000 lives saved for 6 months over 10 lives saved for 50 years, just because of the increasing pace of methods for saving people, like malaria eradication efforts. The expected number of those 1000 who are still alive in 50 years is probably greater than 10 if they don’t starve or die of malaria thanks to a donation.
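For what it’s worth, here is a rough back-of-the-envelope sketch of that last comparison, under the admittedly crude assumption that everyone saved then faces the same constant, independent annual survival probability (the function and every number below are mine, for illustration only):

```python
# Crude model: each person saved now has the same independent, constant
# annual survival probability from here on (ignoring age structure,
# repeated epidemics, etc.).

def expected_survivors(n_people: int, annual_survival: float, years: int) -> float:
    """Expected number still alive after `years` under the crude model."""
    return n_people * annual_survival ** years

# Break-even annual survival for 1000 people now to match 10 people
# guaranteed to be alive in 50 years: (10/1000)**(1/50) ~= 0.912
break_even = (10 / 1000) ** (1 / 50)

print(f"break-even annual survival: {break_even:.3f}")
print(expected_survivors(1000, 0.95, 50))  # ~77 expected survivors
print(expected_survivors(1000, 0.90, 50))  # ~5 expected survivors
```

Under that assumption, the 1000 beat the guaranteed 10 whenever annual survival stays above roughly 0.91; whether real post-intervention mortality is that low is exactly the empirical question.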
I have similar thoughts, though perhaps not for exactly the same reasons. It seems to me that in discussions touching on population ethics, a lot of people assume that more people is inherently better, subject to some quality-of-life considerations. It’s not obvious to me why this should be so. I can see that if you adopt a certain simple form of utilitarianism, where each person’s life is assigned a utility and total utility is the sum of all of these, then creating more positive-utility lives will always increase total utility. But I don’t think my moral utility function is constructed this way. Large populations have many benefits (economies of scale, survivability, etc.), but I don’t assign value to them beyond and independent of those benefits.
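Spelling out that simple form for concreteness (the notation is mine, not the commenter’s): assign each life i a utility u_i and let total utility be their sum; then any added life with positive utility necessarily raises the total.

$$
U_{\text{total}} = \sum_{i=1}^{n} u_i,
\qquad
u_{n+1} > 0 \;\Longrightarrow\; \sum_{i=1}^{n+1} u_i > \sum_{i=1}^{n} u_i .
$$

The commenter’s disagreement is precisely with taking this unweighted sum as the aggregation rule, not with the arithmetic.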
The premise feels mildly good to me, but I’m pretty sure some of that is positive affect bleeding over from my thoughts on alien life, survivability of sapience in the face of planet-killer events, et cetera. I’m likewise fairly sure it’s not due to the bare fact of knowing about a population that I didn’t know about before.
I don’t get the same positive associations when I think about similar scenarios closer to home, e.g. “happy self-sustaining population of ten million mole people discovered in the implausibly vast sewers of Manhattan”.
I used to have such a positive gut feeling: e.g. the idea of Earth having a population of 100 billion felt awesome. These days I think my positive gut feeling about that is much weaker.
Where exactly did you live when the idea of 100 billion people on Earth felt awesome? I suspect that feelings toward population increase are correlated with how much ‘free’ land, and on the other hand how many crowded places, one sees over the course of one’s life. There aren’t many crowded places in Finland.
In Finland, yes, though I haven’t really been anywhere substantially more crowded since then. The change in my gut feeling probably has more to do with a general shift towards negative utilitarianism.
Me neither, but 10^9 >> 50. (Okay, “I don’t terminally value other people whom I don’t directly know” is not strictly true for me, but the amount by which I terminally value them is epsilon. And epsilon times a billion is not that small.)
Karma sink.