Any criticism of average utilitarianism that claims it advocates killing unsatisfied people sounds like rigging the speedometer to me. Obviously, what an average utilitarian means is “Work to increase people’s utility, and measure this by looking at how high average utility is.” Drawing bizarre implications like “kill unsatisfied people” from that is an obvious case of mistaking the measurement method for the actual goal.
What an ethical theory like average utilitarianism is supposed to do is give you a description of what constitutes a good state of affairs. Better states of affairs are defined to be those with higher average utility. If killing people increases average utility, and killing people is still wrong, then average utilitarianism is false.
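To make the arithmetic behind the objection concrete, here is a minimal sketch (with made-up utility values, not anything from the original discussion) of how a literal average-utility objective scores the “kill unsatisfied people” move:

```python
# Toy illustration: taken literally, an average-utility objective
# rewards removing anyone whose utility is below the current mean.
# The utility values here are invented purely for the example.

def average_utility(utilities):
    return sum(utilities) / len(utilities)

population = [9.0, 7.0, 6.0, 2.0]   # one person is much worse off
print(average_utility(population))  # 6.0

# Remove the least satisfied person and recompute:
culled = population[:]
culled.remove(min(culled))
print(average_utility(culled))      # ~7.33: a "better" state of affairs

# Dropping any below-average member raises the mean, so a naive
# optimizer of this objective endorses doing so at every step.
```

The point is that nothing in the stated objective distinguishes “raise people’s utility” from “remove the people dragging the average down”; both moves score identically.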
When you’re defining something rigorously, “you know what I mean” is not an acceptable answer. This is especially obvious in the case where we’re trying to program machines to act ethically, but it’s a failing of the theory even if that’s not our goal.
You’re right. An ethical theory is flawed if it is insufficiently rigorous. But it seems to me that if it’s fairly obvious that a person means something other than a strictly literal interpretation of their theory, the response should be to Steel-Man their argument. Say, “Your theory gives some insane-seeming results when interpreted literally, but it seems to me that this is because you were insufficiently rigorous in stating it. Here’s what I think you actually meant.” That just seems to me like the charitable thing to do.
For reference, constructing a Steel Man is a different task from the one the principle of charity calls for. The charitable interpretation is the best interpretation you can reasonably make of someone’s argument. The Steel Man is, figuratively, “the strongest thing you can construct out of its corpse”; it can include quite a bit that the author didn’t intend or would even disagree with.
To add a friendly addendum:
Reading a position charitably is polite.
Making a Steel-Man of a position advances true understanding.
This is just one of many examples of situations in which politeness potentially conflicts with seeking truth.