Hmm. Yes, it does appear that a less-happy-than-average person, presented with a device that would remove them from existence without externalities, would be compelled to use it if they were an average utilitarian whose utility function is defined in terms of subjective quality of life, regardless of the value of their experiential utility.
The problem is diminished, though not eliminated, if we use a utility function defined in terms of expected preference satisfaction (people generally prefer to continue existing), and I’m really more of a preference than a pleasure/pain utilitarian. But you can overcome that by making the gap between your own and the average preference satisfaction large enough to outweigh your preference for existing in the future. Unlikely, perhaps, but nothing in the definition of the scenario appears to forbid it.
That’s the trouble, though; for any given utility function except one dominated by an existence term, it seems possible to construct a scenario where nonexistence is preferable to existence: Utility Monsters for pleasure/pain utilitarians, et cetera. A world populated by average-type preference utilitarians with a dominant preference for existing in the future does seem immune to this problem, but I probably just haven’t thought of a sufficiently weird dilemma yet. The only saving grace is that most of the possibilities are pretty far-fetched. Have you actually found a knockdown argument, or just an area where our ethical intuitions go out of scope and stop returning good values?
I don’t think that existence is always preferable to nonexistence. A good life seems to have value, but a bad life is, ceteris paribus, not preferable to nonexistence. The problem with average utilitarianism is that a good life is declared worse than nonexistence if it is below average, but I want to preserve the value in that life, not eliminate it. In general, the solution is not to add a large existence term, but to make sure that the utility function as stated matches your actual preference for whether or not someone should exist.
Argument for what exactly? We’ve touched on a few different issues and it’s not clear to me what you mean here.
One can use a simple hack to get around this problem: treat any being that has existed but no longer does as having utility zero (for some choice of zero level). In that case, average utilitarians won’t want to bring them into existence, but won’t want to eliminate them afterwards.
There are other ways of achieving the same thing. I think that people are far too inclined to over-simplify their moral intuitions, based on over-simple mathematical models. Once your preferences are utility functions, with some decent way of dealing with copies and death, there is no further reason to expect simplicity.
I’m sympathetic to this line of thought, but setting the number to zero creates unnecessary problems. I think it would make more sense to set the utility of a dead person to the total utility they experienced over their lifetime. This has the same effect of removing the incentive to kill unhappy people, since (for instance) people who died in their 20s would normally have accumulated much less utility than people who lived to be 80. And it would remove certain counterintuitive results that zeroing produces, such as having someone who was tortured to death make the same contribution to the average as someone who died from excessive sex.
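A toy calculation makes the difference between the two conventions concrete (the utility numbers here are invented purely for illustration):

```python
# Toy comparison of two conventions for how dead people enter an
# average-utilitarian calculation. All utility numbers are invented.

def average_zeroed(living, dead_lifetime_totals):
    """Every dead person counts as 0, regardless of the life they had."""
    people = living + [0.0] * len(dead_lifetime_totals)
    return sum(people) / len(people)

def average_lifetime(living, dead_lifetime_totals):
    """Every dead person counts as the total utility of their lifetime."""
    people = living + dead_lifetime_totals
    return sum(people) / len(people)

living = [5.0, 6.0, 7.0]    # current utility of the living
dead = [80.0, -30.0]        # a long happy life vs. a life ending in torture

print(average_zeroed(living, dead))    # 3.6  -- both deaths count the same
print(average_lifetime(living, dead))  # 13.6 -- the two lives still differ
```

Under zeroing, the torture victim and the person who died after a long happy life make identical contributions to the average; under lifetime totals, their deaths move it in opposite directions.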
There are so many problems with this.
1. Relativity: there is no absolute fact about whether people’s experiences happen before or after each other, so “who has already died” is frame-dependent.
2. On another planet, a happy civilization with our values existed billions of years ago. Over the course of its existence, there were quadrillions of people. Everything we do now is almost worthless by comparison, because their quadrillions of lives swamp the average.
3. Oog, a member of the first sentient tribe, which has just come into existence, is going to be hit very hard with a club. Thag, a future member of the same tribe who does not yet exist, is going to be hit very hard with a club three times, 100 years from now. If you can prevent only one of these, the right choice depends on the number of people who live in the tribe between now and then, even though they would no longer exist.
2 is irrelevant: it doesn’t matter whether our current utility is in some absolute sense low, because we will still make the same decisions! U(A)=1, U(B)=100 gives the same outcome as U(A)=0.001, U(B)=0.1.
1 and 3 can be solved by taking average utility over all agents, past and future. But that’s irrelevant to me because… I’m not an average utilitarian :-)
I’m waiting until I can capture my moral values in some (complicated) utility function. Until then, I’m refining my position.
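The claim behind “2 is irrelevant” is that decisions depend only on the ordering of utilities, which any positive rescaling preserves. A minimal sketch (the `best_option` helper is hypothetical, not anyone’s proposed formalism):

```python
# A decision rule only ever compares utilities, so multiplying every
# utility by a positive constant (here 0.001) cannot change its output.

def best_option(utilities):
    """Pick the option with the highest utility."""
    return max(utilities, key=utilities.get)

u = {"A": 1.0, "B": 100.0}
v = {name: 0.001 * x for name, x in u.items()}  # "low" absolute utilities

print(best_option(u))  # B
print(best_option(v))  # B -- same decision either way
```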
I was thinking more of people who needed to choose between benefiting our society and benefiting the ancient society, for whom this distinction would be relevant. I guess this is mathematically equivalent to 3.
This brings back the original problem of killing people to bring up the average.
That can be dealt with in the usual way: set the utility of someone dead to zero, but keep them in the average.
It feels odd to take an average over a possibly infinite future in this manner. It might work, but I feel like how well it matches our preferences will depend on the specifics of physics.
EDIT: This also implies that a world with 10 happy immortal people and 10 happy people who die is much worse than one with just 10 immortal people. Would you agree with that and all similar statements of that type implied by this solution?
I agree with that to some extent (as in I disagree, but replace both 10s with 10 trillion and I’d agree). But I’m still firming up my intuitions at the moment.
What do you mean by “actual preference”? I can’t think of many interpretations of that comment that don’t implicitly define a utility function (making the statement tautological): even evaluating everything in terms of how well it matches our ethical intuitions constitutes a utility function, albeit one that’ll almost certainly end up being inconsistent outside a fairly narrow domain.
Our intuitions evolved to make decisions about existing entities in a world dense with externalities. I’d expect them to choke on problems that don’t deal with either one in a conventional way, and I don’t trust intuition pumps that rely on those problems.
I do mean a utility function, but one not necessarily known to the agent. If one values all good lives but wants to consider average utilitarianism, they could make average utilitarianism less different from their real utility function by adding existence terms. However, if their real utility function says that a life should exist, ceteris paribus, if it meets a certain standard of value, regardless of the value of other lives, then they are not really an average utilitarian; average utilitarianism is incompatible with that statement.
That seems logically valid, but it doesn’t tell us very much about those utility functions that we don’t already know.
Well, we must already know something about our utility functions, or it is meaningless to say that we want them to be maximized. I feel considerably more confident that I want people to live good lives than I do about any of the arguments for average utilitarianism I have seen.