There’s a strip of an incredibly over-the-top vulgar comic called Space Moose that gets at the same idea. These acts of kindness aren’t positive utility, even if the utility metric is based on desires, because they conflict with the desires of the stingrays or other victims. Preferences also need to be weighted somehow in preference utilitarianism, I suppose by their importance to the person. But then, hmm, anyone gets to be a utility monster just by really, really, really wanting to kill the stingrays. So yeah, there’s a problem there.
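To make the worry concrete, here’s a minimal sketch (my own toy numbers, nothing from the comic or the original post) of intensity-weighted aggregation, showing how an agent who reports an arbitrarily large intensity swamps everyone else:

```python
# Toy model of intensity-weighted preference utilitarianism.
# Each agent reports a utility for each outcome; the weights come from
# self-reported preference intensity.  All names and numbers are made up.

def aggregate(utilities, weights):
    """Weighted sum of the agents' utilities for one outcome."""
    return sum(w * u for w, u in zip(weights, utilities))

# Outcome A: leave the stingrays alone.  Outcome B: kill them.
# Three ordinary agents mildly prefer A; a fourth agent "really really"
# wants B and reports an enormous intensity.
utils_A = [1.0, 1.0, 1.0, 0.0]
utils_B = [0.0, 0.0, 0.0, 1.0]
weights = [1.0, 1.0, 1.0, 1000.0]   # unbounded self-reported intensity

print(aggregate(utils_A, weights))  # 3.0
print(aggregate(utils_B, weights))  # 1000.0 -- the "monster" wins
```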
I think I need to update, and abandon preference utilitarianism even as a useful correlate of whatever the right measure would be.
While it’s gratifying to win an argument, I’d rather not do it under false pretenses:
But then, hmm, anyone gets to be a utility monster just by really, really, really wanting to kill the stingrays.
We need a solution to the utility monster problem if we’re going to have a Friendly AI that cares about people’s desires, so it’s better to solve that problem than to give up on preference utilitarianism partly because we don’t know how to solve it. I’ve sketched proposed solutions to two types of utility monster: one where a single entity claims enormous utility, and one where a large number of entities each claim modest utility. If these putative solutions seem wrong to you, please post bugs, fixes, or alternatives as replies to those comments.
I agree that preference utilitarianism has the problem that it doesn’t free you from choosing how to weight the preferences. It also has the problem that you have to separate yourself into two parts: the part whose preference gets included in the weighted sum, and the part whose preference is the weighted sum itself. In reality there’s only one of you, so that distinction is artificial.
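For concreteness, one family of candidate patches for the single-entity monster (my own illustration, not necessarily the solution sketched in those comments) is to normalize each agent’s reported intensities, so nobody gains influence just by reporting bigger numbers. It doesn’t touch the many-modest-entities version or the deeper question of how the weights should be chosen:

```python
# Illustrative patch (not the poster's proposal): normalize each agent's
# reported utilities so every agent contributes the same total weight,
# however extreme their raw numbers are.

def normalize(agent_utils):
    """Scale one agent's utilities so their absolute values sum to 1."""
    total = sum(abs(u) for u in agent_utils) or 1.0
    return [u / total for u in agent_utils]

def aggregate(per_agent_utils):
    """Per-outcome sum of the normalized utilities across agents."""
    normalized = [normalize(u) for u in per_agent_utils]
    return [sum(agent[i] for agent in normalized)
            for i in range(len(per_agent_utils[0]))]

# Same scenario as above: outcomes (A, B); the fourth agent reports a
# huge preference for B, but normalization caps their influence.
per_agent = [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1000.0]]
print(aggregate(per_agent))  # [3.0, 1.0] -- A still wins
```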