at least in the strong, repugnant conclusion form of your argument
What do you mean by this? When I talked about the value of people's lives, I was referring to people's lives insofar as they have value, not implying that all lives inherently have value just by existing.
I was referring to this type of argument: http://en.wikipedia.org/wiki/Repugnant_conclusion and making unwarranted assumptions about how you would handle these cases.
Oh, the original repugnant conclusion. I thought you were just drawing an analogy to it. Anyway, I think that people only find this conclusion repugnant because of scope insensitivity.
I find it repugnant because I find it repugnant. Any population ethic that is utilitarian is as good as any other; mine is of a type that rejects the repugnant conclusion. Average utilitarianism, to pick one example, is not scope insensitive, but rejects the RC (I personally think you need to be a bit more sophisticated).
You sound a bit like Self-PA here. You do realize that it is possible to misjudge your preferences due to factual mistakes? That's what the people in Eliezer's examples of scope insensitivity were doing. I don't see how you could determine the utility of one billion happy lives just by asking a human brain how it feels about the matter (i.e. without more complex introspection, preferably involving math).
Average utilitarianism leads to the conclusion that if someone with below-average personal experiential utility (meaning the utility that they experience, rather than the utility function that describes their preferences) can be removed from the world without affecting anyone else's personal experiential utility, then this should be done. My mind can understand one person's experiences, and I think that, as long as their personal experiential utility is positive*, removing them is wrong.
* Since personal experiential utility must be integrated over time, it must have a zero, unlike the utility functions that describe preferences.
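To make the arithmetic of that objection concrete, here is a minimal sketch with invented numbers (nothing below is anyone's actual proposal): removing a below-average but still positive life raises the population average, which is exactly the conclusion being objected to.

```python
# Three people, all with positive personal experiential utility.
# The numbers are made up purely for illustration.
population = [10.0, 6.0, 2.0]

average_before = sum(population) / len(population)        # 6.0

# Remove the below-average (but still positive) person, with no externalities.
remaining = [10.0, 6.0]
average_after = sum(remaining) / len(remaining)           # 8.0

# The average goes up, so a pure average utilitarian has to call this an improvement.
print(average_before, average_after)
```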
I suspect you’ve allowed yourself to be confused by the semantics of the scenario. If you rule out externalities, removing someone from the world of the thought experiment can’t be consequentially equivalent to killing them (which leaves a mess of dangling emotional pointers, has a variety of knock-on effects, and introduces additional complications if you’re using a term for preference satisfaction, to say nothing of timeless approaches); it’s more accurately modeled with a comparison between worlds where the person in question does and doesn’t exist, Wonderful Life-style.
With that in mind, it’s not at all self-evident to me that the world where the less-satisfied-than-average individual exists is more pleasant or morally perfect than the one in which they don’t. Why not bite that bullet?
No, I was not making that confusion. I based my decision on a consideration of just that person's mental state. I find a "good" life valuable, though I don't know the specifics of what a good life is, and ceteris paribus, I prefer its existence to its nonexistence.
As evidence that I clearly differentiate killing someone from "deleting" them: I am surprised by how much emphasis Eliezer puts on preserving lives, rather than on making sure that good lives exist. Actually, thinking about that article, I am becoming less surprised that he takes this position, because he focuses on the rights of conscious beings rather than on some additional value possessed by already-existing life relative to nonexistent life.
Hmm. Yes, it does appear that a less-happy-than-average person presented with a device that would remove them from existence without externalities would be compelled to use it if they are an average utilitarian with a utility function defined in terms of subjective quality of life, regardless of the value of their experiential utility.
The problem is diminished, though not eliminated, if we use a utility function defined in terms of expected preference satisfaction (people generally prefer to continue existing), and I'm really more of a preference than a pleasure/pain utilitarian. But you can overcome that by making the gap between your preference satisfaction and the average large enough that it outweighs your preference for existing in the future. Unlikely, perhaps, but there's nothing in the definition of the scenario that appears to forbid it.
That’s the trouble, though; for any given utility function except one dominated by an existence term, it seems possible to construct a scenario where nonexistence is preferable to existence: Utility Monsters for pleasure/pain utilitarians, et cetera. A world populated by average-type preference utilitarians with a dominant preference for existing in the future does seem immune to this problem, but I probably just haven’t thought of a sufficiently weird dilemma yet. The only saving grace is that most of the possibilities are pretty far-fetched. Have you actually found a knockdown argument, or just an area where our ethical intuitions go out of scope and stop returning good values?
I don’t think that existence is always preferable to nonexistence. A good life seems to have value, but a bad life is, ceteris paribus, not preferable to nonexistence. The problem with average utilitarianism is that a good life is declared worse than nonexistence if it is below average, but I want to preserve the value in that life, not eliminate it. In general, the solution is not to add a large existence term, but to make sure that the utility function as stated matches your actual preference for whether or not someone should exist.
Argument for what exactly? We’ve touched on a few different issues and it’s not clear to me what you mean here.
One can use a simple hack to get round this problem: set any being that has existed but no longer does as having utility zero (for some zero level). In that case, average utilitarians won't want to bring them into existence, but won't want to eliminate them afterwards.
There are other ways of achieving the same thing. I think that people are far too inclined to over-simplify their moral intuitions, based on over-simple mathematical models. Once your preferences are utility functions, with some decent way of dealing with copies/death, there are no further reasons to expect simplicity.
I’m sympathetic to this line of thought, but setting the number to zero creates unnecessary problems. I think it would make more sense to set the utility of a dead person to be whatever the total amount of utility they experienced over their lifetime was. This has the same result of removing the incentive to kill unhappy people, since (for instance) people who died in their 20s would normally have much lower utility than people who lived to be 80. And it would remove certain counterintuitive results that zeroing produces, such as having someone who was tortured to death end up making the same contribution to the average as someone who died from excessive sex.
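As a toy comparison of the two bookkeeping conventions being discussed (a dead person counted as zero versus counted as their accrued lifetime total), here is a minimal sketch; `world_average` and all the numbers are invented for illustration, not taken from either proposal.

```python
def world_average(living, dead, dead_value):
    """Average utility over everyone who has ever existed.

    living     -- utilities of people alive at evaluation time
    dead       -- accrued lifetime totals of people who have died
    dead_value -- 'zero': dead people count as 0 (the first convention above)
                  'lifetime': dead people count as their lifetime total (the second)
    """
    if dead_value == 'zero':
        dead_terms = [0.0] * len(dead)
    elif dead_value == 'lifetime':
        dead_terms = list(dead)
    else:
        raise ValueError(dead_value)
    everyone = list(living) + dead_terms
    return sum(everyone) / len(everyone)

# World A: a modestly happy but below-average person stays alive.
# World B: that person is painlessly removed; their accrued total so far is 10.
living_a, dead_a = [50.0, 50.0, 20.0], []
living_b, dead_b = [50.0, 50.0], [10.0]

for convention in ('zero', 'lifetime'):
    a = world_average(living_a, dead_a, convention)
    b = world_average(living_b, dead_b, convention)
    print(convention, round(a, 2), round(b, 2), 'removal helps?', b > a)

# Under both conventions the removal lowers the average, so neither creates an
# incentive to kill the below-average person. The conventions only diverge on
# how a dead person's history enters the average: under 'zero', someone
# tortured to death and someone who died pleasantly both contribute 0; under
# 'lifetime', their very different totals are preserved.
```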
There are so many problems with this:
1. Relativity: there is no absolute fact about whether one person's experiences happen before or after another's.
2. On another planet, a happy civilization with our values existed billions of years ago. Over the course of its existence, there were quadrillions of people. Everything we do now is almost worthless by comparison, because their quadrillions of lives dominate the average.
3. Oog, a member of the first sentient tribe, which has just come into existence, is going to be hit very hard with a club. Thag, a future member of the same tribe who does not yet exist, is going to be hit very hard with a club three times, 100 years from now. If you can prevent only one of these, which one you should prevent depends on the number of people who live in the tribe between now and then, even though they would no longer exist.
2 is irrelevant: it doesn't matter whether our current utility is in some absolute sense low, because we will still make the same decisions! U(A)=1, U(B)=100 gives the same outcome as U(A)=0.001 and U(B)=0.1.
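To spell out why the rescaling is harmless: multiplying every utility by the same positive constant can never change which option an expected-utility maximizer picks. A trivial sketch (the options and numbers are invented):

```python
def best_option(utilities):
    """Return the option with the highest utility."""
    return max(utilities, key=utilities.get)

original = {'A': 1.0, 'B': 100.0}
rescaled = {option: 0.001 * u for option, u in original.items()}   # A: 0.001, B: 0.1

# The decision is the same before and after rescaling.
assert best_option(original) == best_option(rescaled) == 'B'
print(best_option(original), best_option(rescaled))
```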
1 and 3 can be solved by taking average utility over all agents, past and future. But that’s irrelevant to me because… I’m not an average utilitarian :-)
I’m waiting until I can capture my moral values in some (complicated) utility function. Until then, I’m refining my position.
I was thinking more of people who needed to choose between benefiting our society and benefiting the ancient society, for whom this distinction would be relevant. I guess this is mathematically equivalent to 3.
This brings back the original problem of killing people to bring up the average.
That can be dealt with in one of the usual ways: by setting the utility of someone dead to zero but keeping them in the average.
It feels odd to take an average over a possibly infinite future in this manner. It might work, but I feel like how well it matches our preferences will depend on the specifics of physics.
EDIT: This also implies that a world with 10 happy immortal people and 10 happy people who die is much worse than one with just 10 immortal people. Would you agree with that and all similar statements of that type implied by this solution?
I agree with that to some extent (as in I disagree, but replace both 10s with 10 trillion and I'd agree). But I'm still firming up my intuition at the moment.
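For concreteness, here is the arithmetic behind that EDIT, under the rule "average over all agents, past and future, with anyone dead counted as zero", treating a mortal person's finite happy lifetime as negligible against an infinite future; the function and numbers are invented for illustration.

```python
def timeless_average(immortal_count, mortal_count, happiness=1.0):
    """Average over every agent who ever exists; dead agents count as zero."""
    total_agents = immortal_count + mortal_count
    return (immortal_count * happiness + mortal_count * 0.0) / total_agents

world_x = timeless_average(immortal_count=10, mortal_count=0)    # 10 happy immortals
world_y = timeless_average(immortal_count=10, mortal_count=10)   # plus 10 happy mortals

# Adding 10 happy-but-mortal lives halves the score of the world.
print(world_x, world_y)    # 1.0 0.5
```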
What do you mean by “actual preference”? I can’t think of many interpretations of that comment that don’t implicitly define a utility function (making the statement tautological): even evaluating everything in terms of how well it matches our ethical intuitions constitutes a utility function, albeit one that’ll almost certainly end up being inconsistent outside a fairly narrow domain.
Our intuitions evolved to make decisions about existing entities in a world dense with externalities. I’d expect them to choke on problems that don’t deal with either one in a conventional way, and I don’t trust intuition pumps that rely on those problems.
I do mean a utility function, but one not necessarily known to the agent. If one values all good lives but wants to consider average utilitarianism, they could make average utilitarianism less different from their real utility function by adding existence terms. However, if their real utility function says that a life should exist, ceteris paribus, if it meets a certain standard of value, regardless of the value of other lives, then they are not really an average utilitarian; average utilitarianism is incompatible with that statement.
That seems logically valid, but it doesn’t tell us very much about those utility functions that we don’t already know.
Well, we must already know something about our utility functions, or it is meaningless to say that we desire for them to be maximized. I feel considerably more confident that I want people to live good lives than I feel toward any of the arguments for average utilitarianism that I have seen.
I may misjudge my preferences, but unless someone else has convincing reasons to claim they know my preferences better than me, I’m sticking with them :-)
Btw, total utilitarianism has a problem with death as well. Most total utilitarians do not consider “kill this person, and replace them with a completely different person who is happier/has easier to satisfy preferences” as an improvement. But if it’s not an improvement, then something is happening that is not captured by the standard total utility. And if total utilitarianism has to have an extra module that deals with death, I see no problem for other utility functions to have a similar module.
Do you think that Eliezer’s arguments about scope insensitivity here should have convinced the Israelis donating to sick children to reevaluate their preferences? Isn’t your average utilitarianism based on the same intuition?
I am neither a classical nor a preference utilitarian, but I am reasonably confident that my utility function is a sum over individuals, so I consider myself a total utilitarian. Ceteris paribus, I would consider the situation that you describe an improvement.
Only if they value saving more children in the first place. If the flaw is pointed out, and they fully understand the problem, and then say "actually, I care about warm fuzzies to do with saving children, not saving children per se", then they are monstrous people, but consistent.
You can’t say that people have the wrong utility by pointing out scope insensitivity, unless you can convince them that scope insensitivity is morally wrong. I think that scope insensitivity for existent humans is wrong, but fine over non-existent humans, which I don’t count as moral agents—just as normal humans aren’t worried about the scope insensitivity over the feelings of sand.
I find the repugnant conclusion repugnant. Rejecting it is, however, non-trivial, so I'm working towards an improved utility function that has more of my moral values and fewer problems.
Would that actually be the best way of getting warm fuzzies? Anyway, any set of actions is consistent with maximizing some utility function; sets of preferences are the things that can be inconsistent with utility maximization. I'm not saying that I could convince any possible being that scope insensitivity is wrong. What I do think is that those humans are not acting according to their "real" preferences, and that they would realize this if they understood Eliezer's arguments.
What moral status do you attach to humans who do not currently exist, but definitely will exist in the future?
Good luck!
Humans' real preferences aren't utility-based, not even close, and this is a big potential problem. So they have to make their preferences closer to a utility function, using some method or other. But humans should never act according to their messy "real" preferences.
Same as I do to people today. Simple heuristic: any choice that increases the utility of any agent that exists at any time is always positive; giving a dollar to somebody in two generations is good, whoever they are.
On the other hand, choices that increase or decrease the number of agents—giving birth to that person in two generations or not—are more complicated.
Thanks!
Have you seen http://meteuphoric.wordpress.com/2011/03/13/if-birth-is-worth-nothing-births-are-worth-anything/ ? It may help you notice any inconsistencies between possible utility functions and your values.
Oh yes, I’ve seen it—I think the author pointed it out to me. It’s a nice point, but it doesn’t even undermine average utilitarianism. It only undermines particularly naive “birth means nothing” arguments.
I simply take the position that “only the preferences of people currently existing at the time they have those preferences are relevant” (this means that your current preferences about what happens after you die are relevant, but not your preferences “before you were born”). That leaves a lot of flexibility...
Of course it doesn’t apply to many forms of average utilitarianism. It just struck me as a useful consistency check.