Assume that the planet is so distant or otherwise separated that you are certain beyond all reasonable doubt that no contact will ever be established between it and Earth. You, your descendants, and everyone else on Earth will never know anything about the new planet beyond the initial information that it exists and that at one point in its history it had one billion happy people.
To avoid the massive utility of knowing that another intelligent species survived the Great Filter, you might want to specify that a 93rd planet full of reasonably happy people has just been located millions of light-years away.
The reason why I am asking is that I don’t terminally value other people whom I don’t directly know. I am still disturbed by learning about their suffering, and I may value them instrumentally as bearers of cultural or linguistic diversity or for other reasons, but I am not sad if I learn that fifty miners have died in an accident in Chile, for example. I am a moral nihilist (I think that morality reduces entirely to personal preferences, community norms, or game theory, depending on the context), and thus I accept the lack of intuitive sadness as a good indicator of my values. According to LW standards, am I evil?
I think that, given our evolutionary origins, it’s quite normal to have stronger feelings for people we know personally and associate with. All this means is that humans are poor administrators of other people’s happiness without special training.

You might try thinking about how you would feel if you had a button that collapsed a mine in Chile when pushed. Would you push it on a whim, just because miners dying in Chile doesn’t necessarily make you sad, or would you suddenly feel a personal connection to those miners through the button that put their fate in your hands? What if you had to push a button every day to prevent the mine from collapsing? You might find that it isn’t so much your emotional/moral detachment from the miners in Chile as your causal detachment from their fates that reduces your emotional/moral feelings about them.
You might try thinking about how you would feel if you had a button that collapsed a mine in Chile when pushed. Would you push it on a whim, just because miners dying in Chile doesn’t necessarily make you sad, or would you suddenly feel a personal connection to those miners through the button that put their fate in your hands?
I wouldn’t push the button because of:

1. fear that my action might be discovered,
2. guilt over committing murder,
3. other people’s suffering (the miners’ as they die and their relatives’ afterwards) having negative utility to me,
4. “on a whim” not sounding like a reasonable motivation,
5. fear that by doing so I would become accustomed to killing.
If the button painlessly killed people with no relatives or friends, if I were very certain that my pushing would remain undiscovered, and if there were some minimal reward for pushing, that would solve 1, 3 and 4. It’s more difficult to imagine what would placate my inner deontologist, who cares about 2; I don’t want to stipulate memory erasure, since I have no idea how I would feel after having my memory erased.
Nevertheless, if the button created new miners from scratch, I wouldn’t push it if there were any associated cost, no matter how low, assuming that I had no interest in the Chilean mining industry.
It has survived it so far, but for all we know it may go extinct within 200 years.

The first such civilization surviving thus far still provides a large quantity of information. In particular, it makes us think the early stages of the filter are easier, and thus causes us to update our probability of future survival downward for both civilizations. In other words, hearing about another civilization makes us think it more likely that said civilization will go extinct soon.
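To see why the update goes that way, here is a toy two-hypothesis Bayesian calculation; every number in it (the prior, the likelihoods, the survival probabilities) is invented purely for illustration:

```python
# Toy model of the Great Filter update (all numbers are made up).
# Two hypotheses about where the filter's strength lies:
#   H_early: the filter is mostly early (reaching our stage is hard,
#            but surviving into the far future is easy)
#   H_late:  the filter is mostly late (reaching our stage is easy,
#            but most civilizations at our stage soon go extinct)

# Prior: no idea which hypothesis is true.
prior = {"H_early": 0.5, "H_late": 0.5}

# P(observe another civilization at our stage | hypothesis),
# illustrative values: such an observation is far more likely
# if the early stages of the filter are easy.
likelihood = {"H_early": 0.01, "H_late": 0.5}

# P(long-term survival of a civilization | hypothesis), illustrative.
survival = {"H_early": 0.9, "H_late": 0.1}

def posterior(prior, likelihood):
    """Bayes' rule over the two hypotheses."""
    evidence = sum(prior[h] * likelihood[h] for h in prior)
    return {h: prior[h] * likelihood[h] / evidence for h in prior}

post = posterior(prior, likelihood)

p_survive_before = sum(prior[h] * survival[h] for h in prior)
p_survive_after = sum(post[h] * survival[h] for h in post)

print(f"P(H_late) before observation: {prior['H_late']:.2f}")
print(f"P(H_late) after observation:  {post['H_late']:.2f}")
print(f"P(long-term survival) before: {p_survive_before:.2f}")
print(f"P(long-term survival) after:  {p_survive_after:.2f}")
```

With these made-up numbers, the observation pushes P(H_late) from 0.5 to about 0.98 and drops the estimated long-term survival probability from 0.50 to about 0.12, which is exactly the downward update described above.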
Anyway, even if prase didn’t mention the Great Filter in particular, given that he/she said “in any case, if possible, try to leave aside MWI, UDT, TDT, anthropics and AI”, I don’t think he/she was interested in answers involving the Great Filter, either.
(Not sure this is the best way to say what I’m trying to say, but I hope you know what I mean anyway.)

You are right.
How about someone dying from malaria because you didn’t donate $1,600 to the AMF?
I’m not sure whether I would get more utility from spending $1,600 once to save a random number of people for only a few months or years, or from focusing on a few individuals and trying to make their lives much better and longer (perhaps by offering microloans to smart people with no capital who are in danger of starving). The “save a child for dollars a day” marketing seems to have more emotional appeal, because those charities can afford to skim 90% off the top and still get donations. I should probably value 1,000 lives saved for 6 months over 10 lives saved for 50 years, just because of the increasing pace of methods for saving people, such as malaria eradication efforts: the expected number of those 1,000 who are still alive in 50 years is probably greater than 10, provided a donation keeps them from starving or dying of malaria in the meantime.
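As a sanity check on that last claim, here is a toy expected-value comparison; the 5% fifty-year survival figure is invented purely for illustration, not a real epidemiological number:

```python
# Toy comparison of the two interventions above.
# All probabilities are invented for illustration only.

# Option A: 1,000 people saved now; each then faces background risk
# over 50 years, possibly reduced by future interventions
# (e.g. malaria eradication efforts).
n_saved_now = 1000
p_alive_in_50y = 0.05  # assumed per-person 50-year survival probability

# Option B: 10 people whose lives are extended by ~50 years.
n_long_term = 10
p_long_term = 1.0  # by construction, they survive the 50 years

expected_a = n_saved_now * p_alive_in_50y
expected_b = n_long_term * p_long_term

print(f"Option A, expected survivors after 50 years: {expected_a:.0f}")
print(f"Option B, expected survivors after 50 years: {expected_b:.0f}")
# Even at only a 5% survival chance each, option A yields ~50 expected
# survivors versus 10, which is the shape of the argument above.
```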