Bounded utility and infinite utility are different things. A utility function u from outcomes to real numbers is bounded if there is a number M such that for every outcome x, we have |u(x)| < M.
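In symbols (just restating the definition above, with X standing for the set of outcomes):

```latex
% Bounded: a single finite M caps |u| across the whole outcome set X.
\exists M \in \mathbb{R} \ \text{such that} \ \forall x \in X,\ |u(x)| < M
% "Infinite utility" would instead mean u(x) = +\infty for some outcome x,
% which is not a real number at all, so u would no longer map into the reals.
```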
I was confused, thanks. There are two ways that I can imagine having a bounded utility function: either define the function so that it has a finite bound, or only define it over a finite domain. I was only thinking about the former when I wrote that comment (and not assuming its range was limited to the reals, e.g. “infinity” was a valid utility), and so I missed the fact that the utility function could be unbounded as the result of an infinite domain.
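A minimal illustration of that second case (purely a sketch; the function is made up): every individual value is finite, yet no single bound covers the whole infinite domain.

```python
# Illustrative: a utility function over an infinite domain (the natural
# numbers) that is unbounded even though u(n) is finite for every
# individual outcome n -- no single M satisfies |u(n)| < M for all n.
def u(n: int) -> float:
    return float(n)

# Each value is finite...
assert u(10**6) < float("inf")
# ...but for any proposed bound M there is an outcome exceeding it.
M = 1000.0
assert u(int(M) + 1) > M
```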
When we talk about utility functions, we’re talking about functions that encode a rational agent’s preferences. They do not represent how happy an agent is.
First of all, was I wrong in assuming that A’s high preference for an odd number of stars puts it at a disadvantage to B in normalized utility, making B the utility monster? If not, please explain how A can become a utility monster if, e.g. A’s most important preference is having an odd number of stars and B’s most important preference is happily living forever. Doesn’t a utility monster only happen if one agent’s utility for the same things is overvalued, which normalization should prevent?
What does it mean for A and B to “have identical preferences” if in fact A has an overriding preference for an odd number of stars? I think that the maximum utility (if it exists) that an agent can achieve should be normalized against the maximum utility of other agents; otherwise the immediate result is a utility monster. It’s one thing for A to have its own high utility for something, it’s quite another for A to have arbitrarily more utility than any other agent.
Also, if A’s highest preference has no chance of being an outcome, then isn’t the solution to fix A’s utility function instead of favoring B’s achievable preferences? The other possibility is to do run-off voting on desired outcomes, so that A’s top votes always go to outcomes with an odd number of stars, but when those world states lose, the votes will run off to the outcomes that are identical except for there being an even or indeterminate number of stars, and then A’s and B’s voting preferences will be exactly the same.
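The run-off idea can be sketched as follows (a hypothetical Python illustration; the outcome names and ballots are invented for the example):

```python
from collections import Counter

def runoff_winner(ballots):
    """Instant-runoff sketch: repeatedly eliminate the outcome with the
    fewest first-choice votes, transferring each ballot to its next
    surviving choice, until some outcome holds a majority."""
    ballots = [list(b) for b in ballots]  # work on copies
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        if not tallies:
            return None
        top, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return top  # strict majority of remaining first choices
        loser = min(tallies, key=tallies.get)
        ballots = [[o for o in b if o != loser] for b in ballots]

# A ranks the odd-star variant of each world first; B is indifferent to
# star parity. When "live-forever+odd" is eliminated, A's vote runs off
# to the otherwise-identical "live-forever" outcome, and A's and B's
# effective votes coincide.
a_ballot = ["live-forever+odd", "live-forever", "status-quo"]
b_ballot = ["live-forever", "live-forever+odd", "status-quo"]
print(runoff_winner([a_ballot, b_ballot]))  # live-forever
```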
Agent utility and utilitarian utility (this renormalization/combining business) are two entirely separate things. There’s no reason the former has to impact the latter; in fact, as we can see, letting it do so causes utility monsters and such.
I can’t comment further. Every way I look at it, combining preferences (utilitarianism) is utterly incoherent. Game theory/cooperation seems the only tractable path. I don’t know the context here, though...
if A’s highest preference has no chance of being an outcome then isn’t the solution to fix A’s utility function
Solution for whom? A certainly doesn’t want you mucking around in its utility function, as that would cause it to not do good things in the universe (from its perspective).
If A knows that a preferred outcome is completely unobtainable and it knows that some utilitarian theorist is going to discount its preferences with regard to another agent, isn’t it rational to adjust its utility function? Perhaps it’s not; striving for unobtainable goals is somehow a human trait.
In pathological cases like that, sure, you can blackmail it into adjusting its post-op utility function. But only if it became convinced that that gave it a higher chance of getting the things it currently wants.
A lot of those pathological cases go away with reflectively consistent decision theories, but perhaps not that one. Don’t feel like working it out.
Ah, you’re right. B would be the utility monster. Not because A’s normalized utilities are lower, but because the intervals between them are shorter. I could go into more detail in a top-level Discussion post, but I think we’re basically in agreement here.
Also, if A’s highest preference has no chance of being an outcome then isn’t the solution to fix A’s utility function instead of favoring B’s achievable preferences?
Well, now you’re abandoning the program of normalizing utilities and averaging them, the inadequacy of which program this thought experiment was meant to demonstrate.