I’m pretty strongly cribbing off the end of So8res’s MMEU rejection. Part of what I took from that piece is that precisely quantifying utilons may be noncomputable, and even if it is computable, it is currently intractable; but that doesn’t matter. We know that we almost certainly will not and possibly cannot actually be offered a precise bet in utilons, but in principle that doesn’t change the appropriate response if we were to be offered one.
So there is definitely higher potential for regret with the second bet, since losing a bunch when I could otherwise have gained a bunch would reduce my utility in that case; but for the statement ‘you will receive −90 utilons’ to be true, it would already have to include the consideration of my regret. So I should not add additional compensation for the regret; it’s factored into the problem statement.
Which boils down to me being unintuitively indifferent; even the slight uncomfortable feeling of being indifferent when intuition says I shouldn’t be is itself factored into the calculations.
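To make that concrete, here is a minimal sketch in Python; the payoffs and probabilities below are invented purely for illustration (they are not the bets actually quoted in this thread), but they show the point: once rewards are denominated in all-inclusive utilons, the decision collapses to comparing expected values, with any regret already inside the numbers.

```python
# Toy illustration with hypothetical numbers (not the thread's actual bets):
# if payoffs are stated in all-inclusive utilons, all that remains is to
# compare expected values.

def expected_utilons(bet):
    """bet is a list of (probability, utilons) pairs."""
    return sum(p * u for p, u in bet)

sure_thing = [(1.0, 10)]                # 10 utilons for certain (hypothetical)
gamble     = [(0.5, 110), (0.5, -90)]   # 50/50 between +110 and -90 utilons (hypothetical)

# Both come out to 10 expected utilons, so an expected-utility maximizer is
# indifferent; any regret on the -90 branch is, by stipulation, already
# counted inside that -90.
print(expected_utilons(sure_thing), expected_utilons(gamble))
```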
We know that we almost certainly will not and possibly cannot actually be offered a precise bet in utilons
That makes it somewhat of an angels-on-the-head-of-a-pin issue, doesn’t it?
I am not convinced that utilons automagically include everything—it seems to me they wouldn’t be consistent between different bets in that case (and, of course, each person has his own personal utilons which are not directly comparable to anyone else’s).
If utilons don’t automagically include everything, I don’t think they’re a useful concept. The concept of a quantified reward which includes everything is useful because it removes room for debate; a quantified reward that includes most things, but not everything, doesn’t have that property, and doesn’t seem any more useful than denominating things in $.
That makes it somewhat of an angels-on-the-head-of-a-pin issue, doesn’t it?
Maybe, but the point is to remove object-level concerns about the precise degree of merits of the rewards and put it in a situation where you are arguing purely about the abstract issue. It is a convenient way to say ‘All things being equal, and ignoring all outside factors’, encapsulated as a fictional substance.
If utilons don’t automagically include everything, I don’t think they’re a useful concept.
Utilons are the output of the utility function. Will you, then, say that a utility function which doesn’t include everything is not a useful concept?
And I’m still uncertain about the properties of utilons. What operations are defined for them? Comparison, probably, but what about addition? Multiplication by a probability? Under which transformations are they invariant?
It all feels very hand-wavy.
a situation where you are arguing purely about the abstract issue
Which, of course, often has the advantage of clarity and the disadvantage of irrelevance...
And I’m still uncertain about the properties of utilons. What operations are defined for them? Comparison, probably, but what about addition? Multiplication by a probability? Under which transformations are they invariant?
The same properties as utility functions, I would assume. Which is to say, you can compare them, take a weighted average over any probability measure, and apply a positive global affine transformation (ax+b where a>0). Generally speaking, any operation that’s covariant under a positive affine transformation should be permitted.
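To illustrate that claim, here is a small Python sketch with made-up outcomes and utilities: comparisons and probability-weighted averages survive any transformation u → a·u + b with a > 0, because the ordering of expected utilities is unchanged.

```python
# Sketch: expected-utility comparisons are preserved under a positive affine
# transformation u -> a*u + b with a > 0. Outcomes and numbers are arbitrary.

def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs; u: utility function."""
    return sum(p * u(x) for p, x in lottery)

u = {"apple": 3.0, "banana": 1.0, "nothing": 0.0}.get  # a toy utility function

lottery_a = [(0.6, "apple"), (0.4, "nothing")]
lottery_b = [(1.0, "banana")]

def transformed(a, b):
    return lambda x: a * u(x) + b

for a, b in [(1, 0), (2, 5), (0.1, -3)]:
    v = transformed(a, b)
    # The ordering of the two lotteries is the same for every (a, b) with a > 0.
    print(expected_utility(lottery_a, v) > expected_utility(lottery_b, v))
```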
Will you, then, say that a utility function which doesn’t include everything is not a useful concept?
Yes, I think I agree. However, this is another implausible counterfactual, because the utility function is, as a concept, defined to include everything; it is the function that takes world-states and determines how much you value that world. And yes, it’s very hand-wavy, because understanding what any individual human values is not meaningfully simpler than understanding human values overall, which is one of the Big Hard Problems. When we understand the latter, the former can become less hand-wavy.
It’s no more abstract than is Bayes’ Theorem; both are in principle easy to use and incredibly useful, and in practice require implausibly thorough information about the world, or else heavy approximation.
The utility function is generally considered to map to the real numbers, so utilons are real-valued and all appropriate transformations and operations are defined on them.
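For what it’s worth, a minimal sketch of that picture, with world-state features and weights invented purely for illustration: the function takes an entire world-state as input (so anything the agent cares about, regret included, is just another field of that input) and returns a single real number.

```python
# Sketch: a utility function as a map from whole world-states to real numbers.
# The features and weights are made up for illustration only.

from typing import NamedTuple

class WorldState(NamedTuple):
    wealth: float        # material resources
    health: float        # 0..1
    felt_regret: float   # regret is part of the world-state, not a separate term

def utility(w: WorldState) -> float:
    # An arbitrary real-valued aggregation of everything the agent cares about.
    return 0.01 * w.wealth + 50.0 * w.health - 5.0 * w.felt_regret

won  = WorldState(wealth=1000.0, health=0.9, felt_regret=0.0)
lost = WorldState(wealth=-900.0, health=0.9, felt_regret=1.0)

print(utility(won), utility(lost))  # two real numbers; nothing left to add back in
```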
the utility function is, as a concept, defined to include everything; it is the function that takes world-states and determines how much you value that world.
Some utility functions value world-states. But it’s also quite common to call a “utility function” something that shows/tells/calculates how much you value something specific.
The utility function is generally considered to map to the real numbers
I am not sure of that. Utility functions often map to ranks, for example.
But it’s also quite common to call a “utility function” something that shows/tells/calculates how much you value something specific.
I’m not familiar with that usage. Could you point me to a case where the term was used that way? Naively, if I saw that phrasing I would most likely consider it akin to a mathematical “abuse of notation”, where it actually referred to “the utility of the world in which that specific thing exists over the otherwise-identical world in which it does not exist”, but where the subtleties are not relevant to the example at hand and are taken as understood.
I am not sure of that. Utility functions often map to ranks, for example.
Could you provide an example of this also? In the cases where someone specifies the output of a utility function, I’ve always seen it be real or rational numbers. (Intuitively, world-states should be finite, like the universe, and therefore map to the rationals rather than the reals, but this isn’t important.)
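For concreteness, a small sketch contrasting the two readings, with made-up outcomes and numbers: a purely ordinal “utility function” only ranks outcomes, so probability-weighted averages of it are not meaningful, whereas a real-valued (cardinal) one supports expected values.

```python
# Sketch: ordinal (rank-valued) vs. cardinal (real-valued) utility.
# Outcomes and numbers are invented for illustration.

# Ordinal: only a ranking (1 = best). Differences between ranks carry no
# meaning, so probability-weighted averages of ranks are not meaningful.
ordinal_rank = {"coffee": 1, "tea": 2, "water": 3}

# Cardinal: real-valued utilities, so expected utility is well defined.
cardinal_u = {"coffee": 5.0, "tea": 4.0, "water": 1.0}

lottery = [(0.5, "coffee"), (0.5, "water")]
expected = sum(p * cardinal_u[x] for p, x in lottery)
print(expected)  # 3.0 -- worse than a sure cup of tea (4.0)
# With ranks alone, that comparison is undefined.
```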
Yes, I am, by definition, because the util rewards, being in utilons, must factor in everything I care about, including the potential regret.
Unless your bets don’t cash out as
and
If it means something else, then the precise wording could make the decision different.
It’s not quite the potential regret that is the issue; it is the degree of uncertainty, a.k.a. risk.
Do you happen to have any links to a coherent theory of utilons?
Um, Wikipedia?
That’s an example of the rank ordering, but not of the first thing I asked for.
The entire concept of utility in Wikipedia is the utility of specific goods, not of world-states.