This is true; there is no canonical way to aggregate utilities. An agent can only be a utility monster with respect to some scheme for comparing utilities between agents.
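A toy sketch of this point (everything here, the agents, numbers, and aggregation rules, is invented for illustration rather than taken from the discussion): the same pair of agents does or does not produce a utility monster depending on which comparison scheme the aggregation uses.

```python
# Two agents whose reported utilities scale very differently with resources.
# All names and numbers are hypothetical, chosen only to illustrate the point.

def monster_utility(resources):
    # Agent M claims a lot of utility per unit of resource.
    return 100.0 * resources

def ordinary_utility(resources):
    # Agent O claims much less utility per unit of resource.
    return 1.0 * resources

def best_split(aggregate, total=10):
    """Return how many of `total` resource units the aggregation gives to M."""
    return max(range(total + 1),
               key=lambda m: aggregate(monster_utility(m),
                                       ordinary_utility(total - m)))

# Scheme 1: sum the utilities as reported -> M gets everything.
print(best_split(lambda um, uo: um + uo))                      # 10

# Scheme 2: rescale each agent's utility by its maximum attainable value and
# maximize the worse-off agent -> M's advantage disappears and the split is even.
print(best_split(lambda um, uo: min(um / 1000.0, uo / 10.0)))  # 5
```

Under the first scheme M looks like a utility monster; under the second there is nothing monstrous about it, which is the sense in which "utility monster" is relative to the aggregation.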
Such a scheme is just assigning its own utility to different states of the universe; a utility monster is no more a problem for that scheme/agent than preventing 3^^^3 people from being tortured for a million years at zero cost would be.
I’m not quite sure what you mean. If you mean that any agent that cares disproportionately about a utility monster would not regret caring disproportionately about it, that is true. However, if humans propose some method of aggregating their utilities and then notice that, in practice, the procedure disproportionately favors one of them at the expense of the others, the others would likely complain that it was not a fair aggregation. So a utility monster could be a problem.
If humans propose some method of aggregating their utilities, and later notice that following that method is non-optimal, it is because the method they proposed does not match their actual values.
That’s a characteristic of the method, not of the world.
That’s right; being a utility monster is only defined with respect to an aggregation. However, the concept was invented and first discussed by people who thought there was a canonical aggregation, and as an unfortunate result the dependence on the aggregation is typically not mentioned in the definition.
I can’t resolve paradoxes that arise for people with internally inconsistent value systems. Were they afraid that the canonical aggregation would leave them personally worse off, in a way that proved they were bad (because they preferred outcomes in which they did better than they would at the global maximum of the canonical aggregation)?