In humans, and perhaps all complex agents, utility is an unmeasurable abstraction about multidimensional preferences and goals. It can’t be observed, let alone summed or calculated. It CAN be modeled and estimated, and it’s fair to talk about aggregation functions of one’s estimates of utilities, or about aggregation of self-reported estimates or indications from others.
It is your own modeling choice to dislike outcomes in which some participants gain outsized influence via larger utility swings. How you normalize is a preference of yours, not an objective fact about the world.
Yes, this is indeed a preference of mine (and of other people as well), and I’m attempting to find a way of combining utilities that is as good as possible according to my preferences and other people’s (so that it can be incorporated into an AGI, for example).
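A toy sketch of the normalization point (my own construction, not something from the discussion above): with raw self-reported utilities on incomparable scales, naive summation lets the agent with the largest swings dictate the outcome, while per-agent range normalization, one possible choice among many, gives each participant equal pull. The agent names and numbers are hypothetical.

```python
# Hypothetical raw utilities over two options; the scales are not comparable.
agents = {
    "A": [1.0, 2.0],     # mildly prefers option 1
    "B": [100.0, 0.0],   # huge swing toward option 0
    "C": [3.0, 4.0],     # mildly prefers option 1
}

def range_normalize(u):
    """One normalization choice: map each agent's utilities onto [0, 1]."""
    lo, hi = min(u), max(u)
    return [(x - lo) / (hi - lo) for x in u]

def winner(utilities):
    """Option with the highest summed utility across agents."""
    totals = [sum(col) for col in zip(*utilities)]
    return totals.index(max(totals))

print(winner(agents.values()))                                # 0: B's large swing dominates
print(winner([range_normalize(u) for u in agents.values()]))  # 1: the majority's preference wins
```

Which aggregate is "right" is exactly the modeling preference under discussion: range normalization is one stipulation, variance normalization or bargaining-based schemes are others, and they can pick different winners.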