This sentence doesn’t really mean much. A dead person doesn’t have preferences or utility (zero or otherwise) when dead any more than a rock does; a dead person had preferences and utility when alive. The death of a living person (who preferred to live) reduces average utility because the living person preferred not to die, and that preference is violated!
you’d be willing to kill people for the crime of not being sufficiently happy or fulfilled
I support the right to euthanasia for people who truly prefer to be killed, e.g. because they suffer from terminal painful diseases. Do you oppose it?
Perhaps you would see it more clearly if you thought of it as maximizing the average preference utility across the timeline, rather than the average utility at a single point in time.
The death of a living person (who preferred to live) reduces average utility because the living person preferred not to die, and that preference is violated!
But after the fact, they are not alive, so they no longer count toward the average utility across all living things; by killing them you have therefore increased the average utility across all living things.
Here’s what I mean, roughly expressed. Two possible timelines.
(A)
2010 Alice loves her life (she wants to live, with continued life giving her a preference satisfaction of 7 per year). Bob merely likes his life (he wants to live, with continued life giving him a preference satisfaction of 6 per year).
2011 Alice and 2011 Bob both alive as before.
2012 Alice and 2012 Bob both alive as before.
2013 Alice and 2013 Bob both alive as before.
2010 Bob wants 2010 Bob to exist, 2011 Bob to exist, 2012 Bob to exist, 2013 Bob to exist.
2010 Alice wants 2010 Alice to exist, 2011 Alice to exist, 2012 Alice to exist, 2013 Alice to exist.
2010 average utility is therefore (4x7 + 4x6) / 2 = 26, and that also remains the average for the whole timeline.
(B)
2010 Alice and Bob same as before.
2011 Alice is alive. Bob has just been killed.
2012 Alice alive as before.
2013 Alice alive as before.
2010 average utility is: (4x7 + 1x6) / 2 = 17
2011 average utility is: 4x7 = 28
2012 average utility is: 4x7 = 28
2013 average utility is: 4x7 = 28
So Bob’s death increased the average utility indicated in the preferences of a single year. But average utility across the timeline is now (28 + 6 + 28 + 28 + 28) / 5 = 23.6
In short, the average utility of the timeline as a whole is decreased by taking out Bob.
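A minimal sketch of this arithmetic in Python, under the weighting the example assumes (every living person-year counts equally and carries that person’s whole-timeline preference satisfaction; the helper function is purely illustrative):

```python
# Sketch of the timeline arithmetic above, assuming each living person-year
# is weighted equally and carries that person's whole-timeline satisfaction.

def timeline_average(person_years):
    # person_years: one satisfaction entry per living person-year
    return sum(person_years) / len(person_years)

alice = 7 * 4   # Alice wants all four years: satisfaction 28
bob_a = 6 * 4   # timeline A: Bob lives 2010-2013: 24
bob_b = 6 * 1   # timeline B: Bob is killed after 2010: 6

avg_a = timeline_average([alice] * 4 + [bob_a] * 4)  # (4*28 + 4*24) / 8 = 26.0
avg_b = timeline_average([alice] * 4 + [bob_b])      # (4*28 + 6) / 5 = 23.6

print(avg_a, avg_b)  # 26.0 23.6 -- killing Bob lowers the timeline average
```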
You are averaging based on the population at the start of the experiment. In essence, you are counting dead people in your average, like Eliezer’s offhanded comment implied he would. Also, you are summing over the population rather than averaging.
Correcting those discrepancies, we would see (ua ⇒ “utils average”; murder happening New Year’s Day 2011):
2010 ua: (7 + 6) / 2 = 6.5
2011 ua: 7 / 1 = 7
2012 ua: 7 / 1 = 7
2013 ua: 7 / 1 = 7
The murder was a clear advantage.
Now, let’s say we are using procreation instead of murder as the interesting behavior. Let’s say each act of procreation reduces the average utility by 1, and it starts at 100 at the beginning of the experiment, with an initial population of 10.
In the first year, we can decrease the average utility by 10 in order to add one human with 99 utility. When do we stop adding humans? Well, it’s clear that the average utility in this contrived example is equal to 110 minus the total population, and the total utility is equal to the average times the population size. If we have 60 people, that means our average utility is 50, with a total of 3,000 utils. Three times as good for everyone, except half as good for our original ten people.
We maximize the utility at a population of 55 in this example (and 55 average utility) -- but that’s because we can’t add new people very efficiently. If we had a very efficient way of adding more people, we’d end up with the average utility being just barely better than death, but we’d make up for it in volume. That’s what you are suggesting we do.
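For concreteness, a small sketch of that optimization under the contrived linear model above (average utility is 110 minus the population, so total utility is n times 110 − n):

```python
# Contrived model from the example: with population n, average utility is
# 110 - n, so total utility is n * (110 - n).

def total_utility(n):
    return n * (110 - n)

best = max(range(10, 111), key=total_utility)
print(best, 110 - best, total_utility(best))  # 55 55 3025
```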
That also isn’t a universe I want to live in. Eliezer is suggesting we count dead people in our averages, nothing more. That’s sufficient to go from kill-almost-all-humans to something we can maybe live with. (Of course, if we counted their preferences, that would be a conservatizing force that we could never get rid of, which is similarly worrying, albeit not as much so. In the worst case, we could use an expanding immortal population to counter it. Still-living people can change their preferences.)
You are averaging based on the population at the start of the experiment. In essence, you are counting dead people in your average, like Eliezer’s offhanded comment implied he would
I consider every moment of living experience as of equal weight. You may call that “counting dead people” if you want, but that’s only because when considering the entire timeline I consider every living moment—given a single timeline, there’s no living people vs dead people, there’s just people living in different times. If you calculate the global population it doesn’t matter what country you live in—if you calculate the utility of a fixed timeline, it doesn’t matter what time you live in.
But the main thing I’m not sure you get is that I believe preferences are valid also when concerning the future, not just when concerning the present.
If 2014 Carl wants the state of the world to be X in 2024, that’s still a preference to be counted, even if Carl ends up dead in the meantime. That Carl severely does NOT want to be dead in 2024 means there’s a heavy disutility penalty in his 2014 utility function if he nonetheless ends up dead in 2024.
Of course, if we counted their preferences, that would be a conservatizing force that we could never get rid of
If e.g. someone wants to be buried at sea because he loves the sea, I consider it good that we bury him at sea. But if someone wants to be buried at sea only because he believes such a ritual is necessary for his soul to be resurrected by the god Poseidon, his preference is dependent on false beliefs—it doesn’t represent true terminal values, and those are the ones I’m concerned about.
If conservatism is motivated by e.g. wrong epistemic beliefs or by fear, rather than by genuinely different terminal values, it should likewise not modify our actions, provided we’re acting from an epistemically superior position (we know what they didn’t).
when considering the entire timeline I consider every living moment—given a single timeline, there’s no living people vs dead people, there’s just people living in different times. If you calculate the global population it doesn’t matter what country you live in—if you calculate the utility of a fixed timeline, it doesn’t matter what time you live in.
That’s an ingenious fix, but when I think about it I’m not sure it works. The problem is that although you are calculating the utility integrated over the timeline, the values that you are integrating are still based on a particular moment. In other words, calculating the utility of the 2014-2024 timeline by 2014 preferences might not produce the same result as calculating the utility of the 2014-2024 timeline by 2024 preferences. Worse yet, if you’re comparing two timelines and the two timelines have different 2024s in them, and you try to compare them by 2024 preferences, which timeline’s 2024 preferences do you use?
For instance, consider
timeline A: Carl is alive in 2014 and is killed soon afterwards, but two new people are born who are alive in 2024.
timeline B: Carl is alive in 2014 and in 2024, but the two people from A never existed.
If you compare the timelines by Carl’s 2014 preferences or Carl’s timeline B 2024 preferences, timeline B is better, because timeline B has a lot of utility integrated over Carl’s life.
If you compare the timelines by the other people’s timeline A 2024 preferences, timeline A is better.
It’s tempting to try to fix this argument by saying that rather than using preferences at a particular moment, you will use preferences integrated over the timeline, but if you do that in the obvious way (by weighting the preferences according to the person-hours spent with that preference), then killing someone early reduces the contribution of their preference to the integrated utility, causing a problem similar to the original one.
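A toy illustration of that circularity (the numbers are made up for the sketch):

```python
# Made-up numbers illustrating the circularity: if a preference's weight is
# the person-years its holder spends alive holding it, then killing the
# holder early shrinks the weight of his own objection to being killed.

strength = 7  # Carl's per-year preference not to be killed

weight_if_killed = strength * 1   # killed soon after 2014: ~1 year -> 7
weight_if_spared = strength * 10  # alive 2014-2024: 10 years -> 70

# The act being evaluated (killing Carl) cuts the weight of the very
# preference that condemns it by a factor of ten.
print(weight_if_killed, weight_if_spared)  # 7 70
```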
you’d be willing to kill people for the crime of not being sufficiently happy or fulfilled
I support the right to euthanasia for people who truly prefer to be killed, e.g. because they suffer from terminal painful diseases. Do you oppose it?
I believe that what dhasenan was getting at is that without the assumption that a dead person has 0 utility, you would be willing to kill people who are happy (positive utility), but just not as happy as they could be. I’m not sure how exactly this would go mathematically, but the point is that “killing a +utility person is a reduction in utility” is a vital axiom.
It’s not that they could be happier. Rather, if the average happiness is greater than my happiness, the average happiness in the population will be increased if I die (assuming the other effects of a person dying are minimal or sufficiently mitigated).
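Numerically (a tiny made-up example), removing a below-average member raises the mean even though every member is happy:

```python
# Made-up happiness values: everyone is happy (positive utility).
happiness = [10, 8, 8, 3]
print(sum(happiness) / len(happiness))  # 7.25

# The person at 3 dies; every survivor's happiness is unchanged.
survivors = [10, 8, 8]
print(sum(survivors) / len(survivors))  # ~8.67 -- the average went up
```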
but the point is that “killing a +utility person is a reduction in utility” is a vital axiom
I don’t know that we need it as an axiom rather than as a natural consequence of happy people preferring not to be killed, of us likewise preferring not to kill them, and of pretty much everyone preferring their continued lives to their deaths… The virtue of preference utilitarianism is that it takes all these preferences as input.
If preference average utilitarianism nonetheless leads to such an abominable conclusion, I’ll choose to abandon preference average utilitarianism, considering it a failed/misguided attempt at describing my sense of morality—but I’m not certain it need lead to such a conclusion at all.
I think you’re arguing against my argument against a position you don’t hold, but which I called by a term that sounds to you like your position.
Assuming you have a function that yields the utility that one person has at one particular second, what do you want to optimize for?
And maybe I should wait until I’m less than 102 degrees Fahrenheit to continue this discussion.