If utilons don’t automagically include everything, I don’t think they’re a useful concept. The concept of a quantified reward which includes everything is useful because it removes room for debate; a quantified reward that included mostly everything doesn’t have that property, and doesn’t seem any more useful than denominating things in $.
That makes it somewhat of an angels-on-the-head-of-a-pin issue, doesn’t it?
Maybe, but the point is to remove object-level concerns about the precise merits of the rewards and put the discussion in a situation where you are arguing purely about the abstract issue. It is a convenient way to say ‘All things being equal, and ignoring all outside factors’, encapsulated as a fictional substance.
If utilons don’t automagically include everything, I don’t think they’re a useful concept.
Utilons are the output of the utility function. Will you, then, say that a utility function which doesn’t include everything is not a useful concept?
And I’m still uncertain about the properties of utilons. What operations are defined for them? Comparison, probably, but what about addition? Multiplication by a probability? Under which transformations are they invariant?
It all feels very hand-wavy.
a situation where you are arguing purely about the abstract issue
Which, of course, often has the advantage of clarity and the disadvantage of irrelevance...
And I’m still uncertain about the properties of utilons. What operations are defined for them? Comparison, probably, but what about addition? Multiplication by a probability? Under which transformations are they invariant?
The same properties as utility functions, I would assume. Which is to say, you can compare them, take a weighted average over any probability measure, and apply a positive global affine transformation (ax+b where a>0). Generally speaking, any operation that’s covariant under a positive affine transformation should be permitted.
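To make that concrete, here is a minimal sketch (Python, with made-up outcomes, utilities, and lotteries; nothing here is anyone’s actual utility function) of the invariance claim: which of two lotteries has the higher expected utility never changes when you rescale the utility function by ax+b with a>0.

```python
# Toy utility function over made-up outcomes (illustration only).
utility = {"apple": 1.0, "pear": 3.0, "cake": 10.0}

def expected_utility(lottery, u):
    """Weighted average of utilities over a probability distribution."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

def rescaled(u, a, b):
    """Positive affine transformation of a utility function (requires a > 0)."""
    return {outcome: a * value + b for outcome, value in u.items()}

lottery_1 = {"apple": 0.5, "cake": 0.5}  # 50/50 gamble
lottery_2 = {"pear": 1.0}                # sure thing

for a, b in [(1.0, 0.0), (2.0, -5.0), (0.1, 100.0)]:
    u2 = rescaled(utility, a, b)
    prefers_1 = expected_utility(lottery_1, u2) > expected_utility(lottery_2, u2)
    print(a, b, prefers_1)  # the comparison comes out the same every time
```

A product of two utilons, say, is not preserved by such a rescaling, which is why only affine-covariant operations make the list.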
Will you, then, say that a utility function which doesn’t include everything is not a useful concept?
Yes, I think I agree. However, this is another implausible counterfactual, because the utility function is, as a concept, defined to include everything; it is the function that takes world-states and determines how much you value that world. And yes, it’s very hand-wavy, because understanding what any individual human values is not meaningfully simpler than understanding human values overall, which is one of the Big Hard Problems. When we understand the latter, the former can become less hand-wavy.
It’s no more abstract than is Bayes’ Theorem; both are in principle easy to use and incredibly useful, and in practice require implausibly thorough information about the world, or else heavy approximation.
The utility function is generally considered to map to the real numbers, so utilons are real-valued and all appropriate transformations and operations are defined on them.
the utility function is, as a concept, defined to include everything; it is the function that takes world-states and determines how much you value that world.
Some utility functions value world-states. But it’s also quite common to call a “utility function” something that shows/tells/calculates how much you value something specific.
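For example, the textbook-economics usage looks something like this toy sketch (Python, with made-up goods and numbers, using a standard Cobb-Douglas form): a utility function over quantities of specific goods, not over entire world-states.

```python
# Toy "utility of specific goods" in the textbook-economics sense
# (Cobb-Douglas form; the goods and numbers are made up for illustration).
def cobb_douglas(apples: float, oranges: float, alpha: float = 0.5) -> float:
    """Utility of a bundle of two goods; says nothing about the rest of the world."""
    return apples ** alpha * oranges ** (1 - alpha)

print(cobb_douglas(4, 1))  # 2.0
print(cobb_douglas(1, 4))  # 2.0 -- indifferent between these two bundles
print(cobb_douglas(4, 4))  # 4.0 -- more of both goods is valued more
```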
The utility function is generally considered to map to the real numbers
I am not sure of that. Utility functions often map to ranks, for example.
But it’s also quite common to call a “utility function” something that shows/tells/calculates how much you value something specific.
I’m not familiar with that usage. Could you point me to a case in which the term was used that way? Naively, if I saw that phrasing I would most likely consider it akin to a mathematical “abuse of notation”, where it actually referred to “the utility of the world in which [the thing in question] exists over the otherwise-identical world in which it did not exist”, but where the subtleties are not relevant to the example at hand and are taken as understood.
I am not sure of that. Utility functions often map to ranks, for example.
Could you provide an example of this also? In the cases where someone specifies the output of a utility function, I’ve always seen it be real or rational numbers. (Intuitively, world-states should be finite in number, like the universe, and therefore map to the rationals rather than the reals, but this isn’t important.)
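To show why I care about the distinction, here is a rough sketch (Python, made-up outcomes, ranks, and lotteries): a purely ordinal “utility function” that only outputs ranks supports comparisons, but expected values computed from ranks change under rank-preserving relabelings, so the operations discussed above need real- (or rational-) valued outputs.

```python
# Toy ordinal "utility": only the ordering of outcomes is meaningful.
# Outcomes, ranks, and lotteries are made up for illustration.
ranks = {"slum": 1, "suburb": 2, "palace": 3}
relabeled = {"slum": 1, "suburb": 2, "palace": 100}  # same ordering, different numbers

def expected_value(lottery, u):
    """Weighted average of u over a probability distribution."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

gamble = {"slum": 0.5, "palace": 0.5}
sure_thing = {"suburb": 1.0}

# Pairwise comparison of outcomes agrees under both labelings...
assert (ranks["palace"] > ranks["suburb"]) == (relabeled["palace"] > relabeled["suburb"])

# ...but the lottery comparison changes, so ranks alone can't carry expected utility.
print(expected_value(gamble, ranks) > expected_value(sure_thing, ranks))          # False (2.0 vs 2.0)
print(expected_value(gamble, relabeled) > expected_value(sure_thing, relabeled))  # True (50.5 vs 2.0)
```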
Um, Wikipedia?
That’s an example of the rank ordering, but not of the first thing I asked for.
The entire concept of utility in Wikipedia is the utility of specific goods, not of world-states.