Right; I can’t give you one of my utilons directly.
If the world is already in a Pareto-optimal state, then changing it to benefit the utility monster would require making someone else worse off.
What does the Pareto-optimal state look like if a Utility Monster exists?
Pareto-optimal means that no one can be made better off without making someone else worse off. It doesn’t care about how much better off anyone can be made, so the existence of a Utility Monster makes no difference to which states are Pareto-optimal. The Pareto-optimal states could range all the way from giving all the resources to the Utility Monster to giving it nothing.
So my comment was fairly trivial given the definition of Pareto-optimality; I was just trying to emphasize that there is generally a wide range of Pareto-optimal states. You can’t increase one person’s utility arbitrarily high without trading it off against someone else’s utility; you can start, but eventually you hit a Pareto-optimal state, and then you’ve got tradeoffs to make.
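A minimal sketch of that point, using made-up utility functions for a monster and a peasant splitting one unit of a divisible resource. Since both utilities are strictly increasing in each agent’s own share, every split is Pareto-optimal, no matter how steep the monster’s utility is:

```python
# Hypothetical utilities: both strictly increasing in the agent's own share.
# The "monster" gets far more utility per unit of resource than the peasant.
def u_monster(share):
    return 100.0 * share            # steep utility

def u_peasant(share):
    return share ** 0.5             # shallow, diminishing utility

# Candidate allocations: the monster gets share s of one unit, the peasant gets 1 - s.
shares = [i / 100 for i in range(101)]
points = [(u_monster(s), u_peasant(1 - s)) for s in shares]

def dominates(q, p):
    # q Pareto-dominates p if it is at least as good for both agents
    # and strictly better for at least one.
    return q[0] >= p[0] and q[1] >= p[1] and q != p

optimal = [s for s, p in zip(shares, points) if not any(dominates(q, p) for q in points)]
print(f"Pareto-optimal monster shares range from {optimal[0]} to {optimal[-1]}")
# Every split is Pareto-optimal: moving any resource in either direction
# helps one agent and hurts the other.
```

The monster changes how much utility is at stake at each split, but not which splits are Pareto-optimal.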
It looks like you are taking some kind of sum across all agents as the utility of the world; that is incompatible with the basic assumption of the utility monster as I understand it.
The utility monster is something such that, as it controls more scarce resources, the marginal utility it contributes to the world as a whole (per additional resource it controls/consumes) increases, while everything else has decreasing marginal returns.
The argument is that such a creature would receive all of the resources, and that is bad; the counterargument is that given the described setup, giving the utility monster all of the resources is good, and the fact that we intuit that it is bad is a problem with our intuition and not the math.
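A rough numerical sketch of that setup, with made-up utility functions: the monster’s utility is convex in its share (its marginal utility per unit grows with what it already holds), while the other agent’s is concave, so past some point each additional unit contributes more to the total in the monster’s hands than in anyone else’s:

```python
# Hypothetical utilities illustrating the increasing-marginal-returns reading:
# the monster's utility is convex in its share (marginal utility grows),
# everyone else's is concave (marginal utility shrinks).
def u_monster(amount):
    return amount ** 2              # increasing marginal returns

def u_other(amount):
    return amount ** 0.5            # decreasing marginal returns

TOTAL = 10.0                        # total resource to divide
STEP = 1.0                          # size of the marginal unit

for held in [0.0, 3.0, 6.0, 9.0]:
    gain_monster = u_monster(held + STEP) - u_monster(held)
    loss_other = u_other(TOTAL - held) - u_other(TOTAL - held - STEP)
    print(f"monster holds {held:4.1f}: next unit is worth {gain_monster:5.1f} to it, "
          f"but only {loss_other:4.2f} to the other agent")
# The more the monster already controls, the stronger the sum-of-utilities case
# for handing it the next unit as well; the sum is maximized at the corner
# where the monster controls everything.
```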
As far as I can tell, the definition involving increasing marginal returns was invented by some Wikipedian; Wikipedia does not cite a source for it. According to every other source, a utility monster is an agent who gets more utility from having resources than anyone else does, regardless of how the monster’s marginal value of resources changes with the amount of resources it already controls.
Either way, the argument for giving the utility monster all the resources comes from maximizing the sum of the utilities of each agent. I’m not sure what you mean by this being incompatible with the assumption of the utility monster.
Edit: Also, rereading my previous comment, I notice that I was actually not taking a sum across the utilities of all agents. Pareto-optimal does not mean maximizing such a sum; it means a state in which it is impossible to make anyone better off without making someone else worse off.
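To make that distinction concrete, here is a small sketch using hypothetical linear utilities in the spirit of the “more utility per resource than anyone else” definition: every split of the resource is Pareto-optimal, but only the corner where the monster gets everything maximizes the sum:

```python
# Hypothetical linear utilities: the monster simply gets more utility
# per unit of resource than the other agent does.
MONSTER_RATE = 10.0
OTHER_RATE = 1.0

def utilities(monster_share):
    # monster_share is the fraction of one unit of resource given to the monster.
    return MONSTER_RATE * monster_share, OTHER_RATE * (1.0 - monster_share)

splits = [i / 10 for i in range(11)]
# Every split is Pareto-optimal (shifting resource helps one agent and hurts
# the other), but the utilitarian sum singles out one split.
best = max(splits, key=lambda s: sum(utilities(s)))
print(f"sum-maximizing monster share: {best}")   # 1.0: the monster gets everything
```

So the “give it everything” conclusion comes from the summing step, not from Pareto-optimality itself.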
A +utility outcome for one agent is incomparable to a −utility outcome for a different agent at the object level. It is impossible to compare how much the utility monster gains from security to how much the peasant loses from lack of autonomy without taking a third viewpoint; this third viewpoint becomes the only agent at the meta level (or, if there are multiple agents at the first meta level, you go up again, until there is only one agent at some level of meta).
This is true; there is no canonical way to aggregate utilities. An agent can only be a utility monster with respect to some scheme for comparing utilities between agents.
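One way to make “with respect to some scheme” concrete: a weighted-sum aggregation with hypothetical, arbitrary weights. Whether the same agent ends up absorbing all the resources depends on the weights (the interpersonal comparison), not just on the agents’ own utility functions:

```python
# An agent is only a "utility monster" relative to an aggregation scheme.
# Here the scheme is a weighted sum with hypothetical, arbitrary weights.
def aggregate(monster_share, weights):
    w_monster, w_other = weights
    u_monster = 10.0 * monster_share           # the monster's own utility scale
    u_other = 1.0 * (1.0 - monster_share)      # the other agent's utility scale
    return w_monster * u_monster + w_other * u_other

splits = [i / 10 for i in range(11)]
for weights in [(1.0, 1.0), (0.05, 1.0)]:
    best = max(splits, key=lambda s: aggregate(s, weights))
    print(f"weights {weights}: the aggregation gives the monster a share of {best}")
# With equal weights the monster absorbs everything; with weights that
# normalize away its steep utility scale, it no longer dominates.
```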
Such a scheme is only measuring its own utility for different states of the universe; a utility monster is not a problem for such a scheme/agent, any more than preventing 3^^^3 people from being tortured for a million years at zero cost would be a problem.
I’m not quite sure what you mean. If you mean that any agent that cares disproportionately about a utility monster would not regret that it cares disproportionately about a utility monster, then that is true. However, if humans propose some method of aggregating their utilities, and then they notice that in practice, their procedure disproportionately favors one of them at the expense of the others, the others would likely complain that it was not a fair aggregation. So a utility monster could be a problem.
If humans propose some method of aggregating their utilities, and later notice that following that method is non-optimal, it is because the method they proposed does not match their actual values.
That’s a characteristic of the method, not of the world.
That’s right; being a utility monster is only defined with respect to an aggregation. However, the concept was invented and first talked about by people who thought there was a canonical aggregation, and as an unfortunate result, the dependence on the aggregation is typically not mentioned in the definition.
I can’t resolve paradoxes that arise for people with internally inconsistent value systems. Were they afraid that the canonical aggregation would leave them out personally, in a way that proved they were bad (because they preferred outcomes in which they did better than they would at the global maximum of the canonical aggregation)?