Average utilitarianism (which can be either hedonistic or about preferences / utility functions) is another way to avoid the repugnant conclusion. However, average utilitarianism comes with its own conclusions that most consider unacceptable. If the average life in the universe turns out to be absolutely miserable, is it a good thing if I bring into existence a child whose life will be slightly less miserable? Or, conversely, if the average life is free of suffering and full of the most intense happiness possible, would I be acting catastrophically wrongly if I brought into existence a lot of beings that constantly experience the peak of current human happiness (without ever having unfulfilled preferences either), simply because it would lower the overall average?
Another point against average utilitarianism is that it seems odd that the value of creating a new life should depend on what the rest of the universe looks like. All the conscious experiences remain the same, after all, so where does this “let’s just take the average!” come from?
More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.
This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die. This neatly resolves the question you asked Eliezer earlier in the thread, “If you prefer no monster to a happy monster why don’t you kill the monster.” The answer is that once the monster is created it always exists in a timeless sense. The only way for there to be “no monster” is for it to never exist in the first place.
That still leaves the most repugnant conclusion of naive average utilitarianism, namely that, if the average utility is ultranegative (i.e., everyone is tortured 24/7), creating someone with slightly less negative utility (i.e., they are tortured 23/7) is better than creating nobody.
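The arithmetic behind this objection can be made concrete. A toy sketch with made-up utility numbers (negative values standing for suffering):

```python
# Illustrative numbers only: utilities are in arbitrary units, negative = suffering.
existing = [-100, -100, -100]  # everyone tortured 24/7

avg_before = sum(existing) / len(existing)   # -100.0

# Create one person who is tortured slightly less.
with_new = existing + [-90]
avg_after = sum(with_new) / len(with_new)    # -97.5

# Naive average utilitarianism scores this as an improvement...
print(avg_after > avg_before)                # True
# ...even though total suffering strictly increased.
print(sum(with_new) < sum(existing))         # True
```

The average goes up precisely because the new life is less bad than the (terrible) status quo, even though nothing got better for anyone and the total amount of misery grew.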
In my view average utilitarianism is a failed attempt to capture a basic intuition, namely that a small population of high-utility people is sometimes better than a large one of low-utility people, even if the large population’s total utility is higher. “Take the average utility of the population” sounds at first like an easy and mathematically rigorous way to express that intuition, but it runs into problems once you figure out “munchkin” ways to manipulate the average, like adding moderately miserable people to a super-miserable world.
In my view we should keep the basic intuition (especially the timeless interpretation of it), but figure out a way to express it that isn’t as horrible as AU.
In that view, does someone already count as part of the average even before they are born?
I would think so. Of course, that’s not to say we know that they count… my confidence that someone who doesn’t exist once existed is likely much higher, all else being equal, than my confidence that someone who doesn’t exist is going to exist.
This should in no way be understood as endorsing the more general formulation.
Presumably, only if they get born. Although that’s tweakable.
Yes and no. Yes in that the timeless view is timeless in both directions. No in that for decision-making we can only take into account predictions of the future, not the future itself.
For intuitive purposes, consider the current political issues of climate change and economic bubbles. It might be the case that we who are now alive could have a better quality of life if we used up the natural resources and had the government inflate a massive economic bubble that wouldn’t burst until after we died. If we don’t value the welfare of possible future generations, we should do those things. If we do value the welfare of possible future generations, we should not do those things.
For technical purposes, suppose we have an AIXI-bot with a utility function that values human welfare. Examination of the AIXI definition makes it clear that the utility function is evaluated over the (predicted) total future. (Entertaining speculation: If the utility function was additive, such an optimizer might kill off those of us using more than our share of resources to ensure we stay within Earth’s carrying capacity, making it able to support a billion years of humanity; or it might enslave us to build space colonies capable of supporting unimaginable throngs of future happier humans.)
For philosophical purposes, there’s an important sense in which my brainstates change so much over the years that I can meaningfully, if not literally, say “I’m not the same person I was a decade ago”, and expect that the same will be true a decade from now. So if I want to value my future self, there’s a sense in which I necessarily must value the welfare of some only-partly-known set of possible future persons.
If I kill someone in their sleep so they don’t experience death, and nobody else is affected by it (maybe it’s a hobo or something), is that okay under the timeless view because their prior utility still “counts”?
If we’re talking preference utilitarianism, in the “timeless sense” you have drastically reduced the utility of the person, since the person (while still living) would have preferred not to be so killed; and you went against that preference.
It’s because their prior utility (their preference not to be killed) counts, that killing someone is drastically different from them not being born in the first place.
No, because they’ll be deprived of any future utility they might have otherwise received by remaining alive.
So if a person is born, has 50 utility of experiences and is then killed, the timeless view says the population had one person of 50 utility added to it by their birth.
By contrast, if they are born, have 50 utility of experiences, avoid being killed, and then have an additional 60 utility of experiences before dying of old age, the timeless view says the population had one person of 110 utility added to it by their birth.
Obviously, all other things being equal, adding someone with 110 utility is better than adding someone with 50, so killing is still bad.
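The tally in the two scenarios above can be sketched directly, using the comment’s hypothetical utility numbers:

```python
# Timeless view: everyone who ever lives counts toward the population,
# credited with the utility their whole life ends up containing.
killed_early    = 50        # born, 50 utility of experiences, then killed
dies_of_old_age = 50 + 60   # same 50, plus 60 more before a natural death

# Either way the person counts as one member of the timeless population;
# what the killing changes is how much utility that life contains.
utility_lost_by_killing = dies_of_old_age - killed_early
print(utility_lost_by_killing)   # 60
```

So killing does not shrink the timeless population; it shrinks the lifetime utility of one of its members, which is why it still comes out as bad.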
The obvious way to avoid this is to weight each person by their measure, e.g. the amount of time they spend alive.
I think total utilitarianism already does that.
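One way to see that reply: if each person is weighted by their time alive, the “average” becomes total utility divided by total person-years, so with total lifespan held fixed it ranks populations exactly as total utilitarianism does. A toy sketch with made-up numbers:

```python
# Hypothetical populations: (lifetime_utility, years_alive) for each person.
pop_a = [(50, 25), (110, 80)]
pop_b = [(40, 25), (100, 80)]

def per_person_year(pop):
    # Measure-weighted average: total utility divided by total time lived.
    total_u = sum(u for u, _ in pop)
    total_t = sum(t for _, t in pop)
    return total_u / total_t

def total_utility(pop):
    return sum(u for u, _ in pop)

# With the same total lifespan (105 person-years each), the measure-weighted
# average and the plain total rank the two populations identically.
print(per_person_year(pop_a) > per_person_year(pop_b))   # True
print(total_utility(pop_a) > total_utility(pop_b))       # True
```

This is why duration-weighting pushes the view back toward total utilitarianism rather than fixing average utilitarianism’s problems.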
Yes, that’s my point (maybe my tenses were wrong). This answer (the weighting) was meant to answer teageegeepea’s question of how exactly the timeless view handles the situation.
In real life, this would tend to make the remaining people less happy.