Utilitarian ethics, while highly egalitarian in real life situations, does not place any intrinsic value on equality.
I don’t agree with that. Utilitarian ethics don’t specify how the utility function is calculated, especially how you build the aggregate function from all the individual utilities. You can very well decide to use “average * gini” or any other compound formula that factors in equality, and it will still be a utilitarian ethics.
The “how to compute the aggregate” question is, to me, one of the toughest problems left in utilitarian ethics: I don’t see any aggregate (average, sum, median, average * gini, …) which doesn’t lead to absurd results in some cases. I fear that, just as the human utility function is complicated, the aggregate function we should use is complicated, and will contain sum, average, median and gini in one form or another.
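For concreteness, here is a quick toy sketch (my own illustration, not a worked-out proposal) of a few candidate aggregates over a list of individual utilities. It treats utilities as plain numbers, which already assumes the interpersonal-comparison problem away, and it reads “average * gini” as “average * (1 − gini)”, since Gini is 0 at perfect equality:

```python
# Toy sketch: candidate ways to aggregate individual utilities into one score.
# Utilities are plain floats here, which assumes interpersonal comparability.
from statistics import mean, median

def gini(utilities):
    """Standard pairwise Gini coefficient (0 = perfect equality)."""
    n = len(utilities)
    mu = mean(utilities)
    if mu == 0:
        return 0.0
    diff_sum = sum(abs(a - b) for a in utilities for b in utilities)
    return diff_sum / (2 * n * n * mu)

def aggregate_total(utilities):
    return sum(utilities)

def aggregate_average(utilities):
    return mean(utilities)

def aggregate_median(utilities):
    return median(utilities)

def aggregate_equality_weighted(utilities):
    # "average * (1 - gini)": rewards more equal distributions.
    return mean(utilities) * (1 - gini(utilities))

population_a = [10, 10, 10, 10]   # equal, modest utilities
population_b = [40, 1, 1, 1]      # one very happy person, three miserable ones

for name, f in [("total", aggregate_total),
                ("average", aggregate_average),
                ("median", aggregate_median),
                ("average*(1-gini)", aggregate_equality_weighted)]:
    print(name, f(population_a), f(population_b))
```

Total prefers population_b, average slightly prefers population_b, while median and the equality-weighted average prefer population_a, which is exactly the kind of disagreement between aggregates I mean.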
In my opinion the toughest problem is to compare one person’s utility with another person’s utility. Doubly so if the “person” does not have to be Homo sapiens (so we can’t use neurons or hormones).
I don’t deny that it’s hard. But I think we do pretty well in our day-to-day lives by using our mind’s capacity for sympathy. I think I can safely assume that if I kicked a guy in the nuts and stole his money, his suffering from the assault and theft would outweigh the utility I got from the money (assuming I spent the money on frivolous things). I can tell this by simulating how I would react if such an event happened to me, and assuming the other guy’s mind is fairly similar to mine.
Now, I could be wrong. Maybe the guy is a masochist with a fetish for being kicked in the nuts, and he was planning on spending the money he was carrying to pay someone to do it for him. But perfect knowledge is impossible, so that’s a problem with basically any endeavor. We don’t give up on science because of all the problems with obtaining knowledge, and we shouldn’t give up on morality either. You just do the best you can.
Obviously, scaling sympathy to large populations is really hard, and attempting to project it onto alien minds is even harder. But I don’t think it’s impossible. The first idea that comes to mind would be to ask the alien mind what it wants in life, ranked by how much it wants each thing, and then map that onto a similar ranked list of what I want.
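Very roughly, the kind of mapping I have in mind looks like this, with the desire lists and intensities entirely made up for illustration:

```python
# Rough sketch of the "ask, rank, then map" idea: line up the alien's ranked
# desires against my own ranked desires and borrow my intensities for its ranks.
# Everything here (the lists, the intensities) is invented for illustration.

my_desires = [                      # my own wants, strongest first
    ("stay alive", 100),
    ("close relationships", 80),
    ("interesting work", 50),
    ("good food", 20),
]

alien_desires = [                   # what the alien reports wanting, strongest first
    "maximise surface area exposed to starlight",
    "preserve its colony's genetic archive",
    "compose gravitational-wave poetry",
    "collect iridium",
]

# Assume "its 2nd-strongest want matters to it about as much as my
# 2nd-strongest want matters to me", and so on down the lists.
alien_utilities = {
    desire: my_desires[rank][1] if rank < len(my_desires) else 0
    for rank, desire in enumerate(alien_desires)
}
print(alien_utilities)
```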
I find it difficult to sympathise with people who exhibit traits characteristic of utility monsters, and those people are usually still quite far from the thought-experiment ideal of a utility monster. I am sure that if the monster told me what it wants, I’d do my best to prevent it from happening.
I don’t agree with that. Utilitarian ethics don’t specify how the utility function is calculated, especially how you build the aggregate function from all the individual utilities.
I was referring to total and average utilitarianism, the two most common kinds.
I fear that, just as the human utility function is complicated, the aggregate function we should use is complicated, and will contain sum, average, median and gini in one form or another.
I agree completely. I think we’ll probably have to use multiple methods of aggregation and then combine the scores in some way.
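Something like this toy combination, maybe, with the weights as arbitrary placeholders rather than a claim about the right mix:

```python
# Toy weighted mix of several aggregation methods; the weights are
# placeholders, not a proposal for the "right" values.
from statistics import mean, median

def combined_score(utilities, aggregators, weights):
    """Weighted sum of several aggregate scores over the same utilities."""
    return sum(w * f(utilities) for f, w in zip(aggregators, weights))

print(combined_score([10, 10, 10, 10],
                     aggregators=[sum, mean, median],
                     weights=[0.2, 0.4, 0.4]))
```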