Why is average utilitarianism popular among some folks here? The view doesn’t seem to be at all popular among professional population ethicists.
Don’t think it is.
What specifically do you disagree with?
I think Pablo is correct that average utilitarianism is much more popular here than among philosophers.
The words only make sense if parsed as disagreement with the claim that average utilitarianism is popular here.
Perhaps, if you mean the difference between ‘trivial’ and ‘negligible’.
I don’t like average utilitarianism, and I wasn’t even aware that most folks here did, but I still have a guess as to why.
For many people, average utilitarianism is believed to be completely unachievable. There is no way to discover people's utility functions in a way that can be averaged together. You cannot get people to honestly report their utility functions, and they can never even know them, because they have no way to normalize and figure out whether or not they actually care more than the person next to them.
However, a sufficiently advanced Friendly AI may be able to discover the true utility functions of everyone by looking into everyone's brains at the same time. This makes average utilitarianism a plausible option for a futurist, but complete nonsense for a professional population ethicist.
This is all completely a guess.
Most people here do not endorse average utilitarianism.
I thought “average utilitarianism” referred to something like “my utility function is computed by taking the average suffering and pleasure of all the people in the world”, not “I would like the utility functions of everyone to be averaged together and have that used to create a world”.
I think you are correct. That is what I meant, but I see how I misused the word “utility.” The argument translates easily: without an AI, we don’t have any way to measure suffering and pleasure.
This does not explain a preference for average utilitarianism over total utilitarianism. Avoiding the “repugnant conclusion” is probably a factor.
I didn’t even consider total utilitarianism in my response. Sorry. I think you are right about the “repugnant conclusion”.