In my opinion the toughest problem is comparing one person’s utility with another person’s utility. Doubly so if the “person” doesn’t have to be Homo sapiens (so we can’t use neurons or hormones).
I don’t deny that it’s hard. But I think we do pretty well in our day-to-day lives by using our minds’ capacity for sympathy. I think I can safely assume that if I kicked a guy in the nuts and stole his money, his suffering from the assault and theft would outweigh the utility I’d get from the money (assuming I spent it on frivolous things). I can tell this by simulating how I would react if such a thing happened to me, and assuming the other guy’s mind is fairly similar to mine.
Now, I could be wrong. Maybe the guy is a masochist with a fetish for being kicked in the nuts, and he was planning to spend the money he was carrying on paying someone to do exactly that. But perfect knowledge is impossible, so that’s a problem with basically any endeavor. We don’t give up on science because of all the difficulties of obtaining knowledge, and we shouldn’t give up on morality either. You just do the best you can.
Obviously, scaling sympathy to large populations is really hard, and attempting to project it onto alien minds is even harder. But I don’t think it’s impossible. The first idea that comes to mind would be to ask the alien mind what things it wants in life, ranked by how much it wants them, and then map that ranking onto a similar list of what I want.
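To make that rank-mapping idea concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the particular wants, the weights, and especially the assumptions that the alien reports its preferences honestly and that rank position means roughly the same thing across very different minds.

```python
# Toy sketch: map an alien's ranked wants onto my own utility scale
# by rank position. All weights are invented, introspected numbers.

# My own ranked wants, most-wanted first, with rough utility weights.
my_wants = [
    ("health", 10.0),
    ("close relationships", 8.0),
    ("interesting work", 5.0),
    ("good food", 2.0),
]

def estimate_alien_utility(alien_wants_ranked, my_wants=my_wants):
    """Estimate utilities for an alien's wants on my scale.

    alien_wants_ranked: list of strings, most-wanted first.
    Returns a dict from alien want -> estimated utility weight.
    Wants ranked below the end of my own list decay from my
    weakest want, since there is no better anchor for them.
    """
    estimates = {}
    for rank, want in enumerate(alien_wants_ranked):
        if rank < len(my_wants):
            _, weight = my_wants[rank]
        else:
            # Past the end of my list: halve repeatedly from my last weight.
            _, last = my_wants[-1]
            weight = last / (2 ** (rank - len(my_wants) + 1))
        estimates[want] = weight
    return estimates

print(estimate_alien_utility(
    ["absorb stellar plasma", "hum at 40 Hz", "be left alone"]))
```

Note that the sketch only shifts the problem rather than solving it: the weights come from my own introspection, so the output is exactly as trustworthy as the “the other mind is fairly similar to mine” assumption the whole approach rests on.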
I find it difficult to sympathise with people who exhibit traits characteristic of utility monsters, and those people are usually still quite far from the thought-experiment ideal of a utility monster. I am sure that if the monster told me what it wants, I’d do my best to prevent it from happening.