I may misjudge my preferences, but unless someone else has convincing reasons to claim they know my preferences better than me, I’m sticking with them :-)
Do you think that Eliezer’s arguments about scope insensitivity here should have convinced the Israelis donating to sick children to reevaluate their preferences? Isn’t your average utilitarianism based on the same intuition?
I am neither a classical nor a preference utilitarian, but I am reasonably confident that my utility function is a sum over individuals, so I consider myself a total utilitarian. Ceteris paribus, I would consider the situation that you describe an improvement.
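To make the contrast concrete, here is a rough formalization in my own notation (not the commenter's): a total utilitarian sums welfare over the $n$ individuals, while an average utilitarian divides that sum by $n$, so the two views can rank the same pair of worlds differently once population size changes.

$$U_{\text{total}} = \sum_{i=1}^{n} u_i, \qquad U_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} u_i$$

Adding a new person with small positive welfare $u_{n+1}$ raises $U_{\text{total}}$ by $u_{n+1}$ but lowers $U_{\text{avg}}$ whenever $u_{n+1} < U_{\text{avg}}$, which is where the total and average views come apart.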
Only if they value saving more children in the first place. If the flaw is pointed out and they fully understand the problem, and they then say “actually, I care about the warm fuzzies to do with saving children, not saving children per se”, then they are monstrous people, but consistent.
You can’t say that people have the wrong utility function by pointing out scope insensitivity, unless you can convince them that scope insensitivity is morally wrong. I think that scope insensitivity over existent humans is wrong, but fine over non-existent humans, whom I don’t count as moral agents—just as normal humans aren’t worried about scope insensitivity over the feelings of sand.
I find the repugnant conclusion repugnant. Rejecting it is, however, non-trivial, so I’m working towards an improved utility function that has more of my moral values and fewer problems.
Would that actually be the best way of getting warm fuzzies? Anyway, any set of actions is consistent with maximizing some utility function; it is sets of preferences that can be inconsistent with utility maximization. I’m not saying that I could convince any possible being that scope insensitivity is wrong. What I do think is that these humans are not acting according to their ‘real’ preferences, and that they would realize this if they understood Eliezer’s arguments.
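To unpack the asymmetry claimed here with a standard textbook example (mine, not the commenter's): any finite record of behaviour trivially maximizes the utility function that assigns 1 to the action-history actually taken and 0 to every alternative, so bare actions can always be rationalized after the fact. Preferences, by contrast, can fail to be representable at all, because a cycle has no numerical representation:

$$A \succ B, \quad B \succ C, \quad C \succ A \;\Longrightarrow\; u(A) > u(B) > u(C) > u(A),$$

which no real-valued $u$ can satisfy.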
What moral status do you attach to humans who do not currently exist, but definitely will exist in the future?
Good luck!
Human real preferences aren’t utility-based, not even close, and this is a big potential problem. So humans have to make their preferences closer to a utility function, using some method or other. But they should never act according to their messy ‘real’ preferences.
Same as I do to people today. Simple heuristic: any choice that increases the utility of an agent that exists at any time is always positive—giving a dollar to somebody in two generations is good, whoever they are.
On the other hand, choices that increase or decrease the number of agents—giving birth to that person in two generations or not—are more complicated.
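One way to spell out this heuristic, in notation of my own choosing and reading the “increased utility” clause ceteris paribus: fixed-population comparisons are settled, population-changing ones are not. If outcomes $x$ and $y$ contain the same agents (counted across all times) and

$$u_j(x) > u_j(y), \qquad u_i(x) = u_i(y) \ \text{ for all } i \neq j,$$

then $x$ counts as an improvement; when the sets of agents differ (someone is or is not born), the heuristic does not settle the comparison.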
Thanks!
Have you seen http://meteuphoric.wordpress.com/2011/03/13/if-birth-is-worth-nothing-births-are-worth-anything/ ? It may help you notice any inconsistencies between possible utility functions and your values.
Oh yes, I’ve seen it—I think the author pointed it out to me. It’s a nice point, but it doesn’t even undermine average utilitarianism. It only undermines particularly naive “birth means nothing” arguments.
I simply take the position that “only the preferences of people currently existing at the time they have those preferences are relevant” (this means that your current preferences about what happens after you die are relevant, but not your preferences “before you were born”). That leaves a lot of flexibility...
Of course it doesn’t apply to many forms of average utilitarianism. It just struck me as a useful consistency check.