The problem I’ve always had with the “utility monster” idea is that it misuses the information that utility functions actually encode.
In game theory or economics, a utility function is a rank ordering of more preferred states over less preferred states for a single agent (who presumably has some inputs he can adjust to steer toward the states he prefers). That’s it. There is no “global” utility function or “collective” utility measure that doesn’t run into problems as soon as individual goals conflict.
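As a toy illustration of that point (hypothetical states and made-up numbers, nothing more): two very different numeric assignments can encode exactly the same preferences for one agent, because the only information they carry is the ranking they induce.

```python
# Minimal sketch of "utility function = rank ordering for one agent".
# The states and numbers here are hypothetical, purely for illustration.

states = ["beach_day", "office_day", "dentist_visit"]

# Two different numeric assignments...
utility_v1 = {"beach_day": 10.0, "office_day": 3.0, "dentist_visit": -5.0}
utility_v2 = {"beach_day": 0.9, "office_day": 0.2, "dentist_visit": 0.1}

def ranking(utility):
    """Order the states from most preferred to least preferred."""
    return sorted(states, key=utility.get, reverse=True)

# ...encode the same preferences, because only the induced ranking matters.
assert ranking(utility_v1) == ranking(utility_v2)
print(ranking(utility_v1))  # ['beach_day', 'office_day', 'dentist_visit']
```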
Given that an agent’s utility function only encodes preferences, turning up the gain on it really, really high (meaning agent A really, reaaaally cares about all of his preferences) doesn’t mean that agents B, C, D, etc. should take A’s preferences any more or less seriously. Multiplying it by a large number is like multiplying a probability distribution or an eigenvector by a really large number: the relative proportions and the direction it points are exactly the same.
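Here is a rough sketch of that scaling point, again with made-up agents, states, and numbers: cranking agent A’s utilities up by an enormous factor changes neither A’s own ranking nor anything about agent B’s separate decision problem.

```python
# Sketch of the gain-knob point: scaling agent A's utilities by a huge
# positive constant changes neither A's own choices nor agent B's
# decision problem. All names and numbers are hypothetical.

utility_A = {"beach_day": 10.0, "office_day": 3.0, "dentist_visit": -5.0}
utility_B = {"beach_day": 1.0, "office_day": 7.0, "dentist_visit": 2.0}

def best_choice(utility):
    """Each agent simply picks the state it ranks highest."""
    return max(utility, key=utility.get)

monster_gain = 1e12  # agent A now "really reaaaally" cares
scaled_A = {state: monster_gain * u for state, u in utility_A.items()}

# A's ranking, and hence A's choice, is unchanged by the scaling...
assert best_choice(utility_A) == best_choice(scaled_A)  # both "beach_day"
# ...and B's choice never depended on A's scale in the first place.
assert best_choice(utility_B) == "office_day"
```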
Before some large number of people sacrifice their prior interests on the altar of Carethulu, there needs to be some new reason why those others (not Carethulu) should want to do so, which would imply a different utility function for them.
I think the misunderstanding here is that some of you interpret the post as a call to change your values. However, it is merely a suggestion for the implementation of values that already exist, such as utilitarian preferences.
The idea is clearly never going to be attractive to people who care exactly zero about the subjective well-being (SWB) of others. But they aren’t the target audience of effective altruism, or really of any charity.