Utility maximization destroys complex values by choosing the value that yields the most utility, i.e. the best cost-value ratio. One unit of utility is not discriminable from another unit of utility. All a utility maximizer can do is maximize expected utility. If it turns out that one of its complex values can be effectively realized and optimized, that value might come to outweigh all the others. This can only be countered by changing one’s utility function and reassigning utility so as to outweigh that effect, which will lead to inconsistency, or by discounting the value that threatens to outweigh all others, which will again lead to inconsistency.
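A minimal numeric sketch of the worry above, with made-up cost-value ratios (none of these numbers come from the discussion): a maximizer with a fixed budget and a linear utility over several values spends the entire budget on whichever value is cheapest to optimize, driving the others to zero.

```python
# Assumed, illustrative utility-per-unit-of-effort for a few values.
values = {
    "friendship": 1.0,
    "art":        0.8,
    "paperclips": 3.5,  # the value that turns out to be easy to optimize
}

budget = 100  # units of effort to allocate

# A linear expected-utility maximizer puts everything on the best ratio.
best = max(values, key=values.get)
allocation = {v: (budget if v == best else 0) for v in values}

print(allocation)
# {'friendship': 0, 'art': 0, 'paperclips': 100} -- the other values are
# not traded off against; they are simply outweighed entirely.
```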
Can’t your utility function look like “number of paperclips times number of funny jokes” rather than a linear combination? Then situations where you accept very little humor in exchange for loads of paperclips are much rarer.
Relevant intuition: this trade-off makes me feel sad, so it can’t be what I really want. And I hear it’s proven that wanting can only work if it involves maximizing a function over the state of the universe.
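As a rough sketch of the multiplicative suggestion above (the numbers are arbitrary assumptions, chosen only to show the shape of the trade-off): a product penalizes driving any one factor toward zero, while a linear combination does not.

```python
def additive_utility(paperclips, jokes):
    # Linear combination: each unit counts the same regardless of balance.
    return paperclips + jokes

def multiplicative_utility(paperclips, jokes):
    # Product: pushing either factor toward zero drags the total toward zero.
    return paperclips * jokes

balanced = (10, 10)     # moderate amounts of both
lopsided = (1000, 1)    # loads of paperclips, almost no humor

for name, u in [("additive", additive_utility),
                ("multiplicative", multiplicative_utility)]:
    print(name, "balanced:", u(*balanced), "lopsided:", u(*lopsided))

# The additive function prefers the lopsided world (1001 > 20); the
# multiplicative one prefers the balanced world (100 > 1000).
```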
No, it doesn’t. A utility function can be as complex as you want it to be. In fact it can be more complex than is possible to represent in the universe.
For this reason, I almost wish LW would stop talking about utility functions entirely.
That it is theoretically possible for functions to be arbitrarily complex does not seem to be a good reason to reject using a specific kind of function. Most information representation formats can be arbitrarily complex. That’s what they do.
(This is to say that while I respect your preference for not talking about utility functions, your actual reasons are probably better than “because utility functions can be arbitrarily complex”.)
Right, sorry. The reason I meant was something like “utility functions can be arbitrarily complex and in practice are extremely complex, but this is frequently ignored”, what with talk about “what utility do you assign to a firm handshake” or the like.
Edit: And while they have useful mathematical features in the abstract, they seem to become prohibitively complex when modeling the preferences of things like humans.
World states are not uniform entities but compounds of different items, different features, each adding a certain amount of utility, or weight, to the overall value of the world state. If you only consider utility preferences between world states that are not made up of all the items of your utility function, isn’t that a dramatic oversimplification?

I don’t see what is wrong with asking how you weigh firm handshakes. A world state that features firm handshakes must be different from one that doesn’t, even if the difference is tiny. So if I ask how much utility you assign to firm handshakes, I am asking how you weigh firm handshakes, how the absence of firm handshakes would affect the value of a world state. I am asking about your utility preferences between possible world states that feature firm handshakes and those that don’t.
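A toy illustration of that framing, with invented weights (they are not anyone’s actual values): the utility “assigned to” a firm handshake is just the difference in value between two otherwise identical world states, one with the handshake and one without.

```python
# Hypothetical feature weights for a world state.
WEIGHTS = {"firm_handshake": 0.1, "good_music": 5.0, "cheesecake": 3.0}

def world_utility(features):
    return sum(WEIGHTS.get(f, 0.0) for f in features)

with_handshake = {"good_music", "cheesecake", "firm_handshake"}
without_handshake = {"good_music", "cheesecake"}

# The "utility of a firm handshake" as a difference between world states.
print(round(world_utility(with_handshake) - world_utility(without_handshake), 2))  # 0.1
```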
So far as I can tell, you have it backwards—those sorts of functions form a subset of the set of utility functions.
The problem is that utility functions that are easy to think about are ridiculously simple, and produce behavior like the above “maximize one value” or “tile the universe with ‘like’ buttons”. They’re characterized by “Handshake = (5*firmness_quotient) UTILS” or “Slice of Cheesecake = 32 UTILS” or what have you.
I’m sure it’s possible to discuss utility functions without falling into these traps, but I don’t think we do that, except in the vaguest cases.
Ick. Yes. That question makes (almost) no sense.
There are very few instances in which I would ask “what utility do you assign?” regarding a concrete, non-contrived good. I tend to consider utility preferences between possible world states that could arise depending on a specific decision or event, and then only consider actual numbers if they are actually necessary for the purpose of multiplying.
I would certainly prefer to limit use of the term to those who actually understand what it means!
Exactly. Perhaps if we used a different model (or an explicitly spelled-out simplified subset of the utility functions) we could talk about such things.
Inconceivable!
But if you do not “assign utility” and only consider world states, how do you deal with novel discoveries? How does a hunter-gatherer integrate category theory into their utility function? I mean, you have to somehow weigh new items?
I just go ahead and assign value directly to “novelty” and “variety”.
Isn’t that too unspecific? The digit sequences of the various transcendental numbers can be transcribed into musical scores, or you could use cellular automata to create endless amounts of novel music. But that is not what you mean. If I asked you for a concrete example, you could only tell me something that you already expect but are not sure of, which isn’t really novel, or say that you will be able to point out novelty in retrospect. But even the latter answer has a fundamental problem: if you are able to recognize novelty in retrospect, then it is predictable what will excite you and make you label something n-o-v-e-l. In this respect what you call “novelty” is just like the creation of music from the digit sequences of transcendental numbers: uncertain, but ultimately computable. My point is that assigning value to “novelty” and “variety” cannot replace the assignment of utility to the discrete sequences that make interesting music. You have to weigh discrete items, because those that are sufficiently described by “novelty” and “variety” alone are just random noise.
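A tiny sketch of the “transcribe digit sequences into scores” idea from the comment above; the digit-to-note mapping is an arbitrary assumption, and the point is only that such output is novel in a purely mechanical sense.

```python
from math import pi

SCALE = ["C", "D", "E", "F", "G", "A", "B"]

# Map the first digits of pi onto a seven-note scale.
digits = [int(c) for c in str(pi).replace(".", "")[:16]]
melody = [SCALE[d % len(SCALE)] for d in digits]

print(" ".join(melody))
```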
Continuous random noise is quite monotonous to experience—the opposite of varied. I didn’t say that variety and novelty were my only values, just that I assign value to them. I value good music, too, as well as food and other pleasant stimuli. Diminishing returns come into play, often driven by the human mind’s facility for boredom. I view this as a value continuum rather than a set value.
In my mind, I’m picturing one of those bar graphs that show up when music is playing, except instead of music, it’s my mind and body moving throughout the day, and each bar represents my value of particular things in the world, with new bars added and old ones dying off, and… well, it’s way more complex than, “assign value K to music notes XYZ and call it done.” And several times I’ve been rebuked for using the phrase “assign value to something”, as opposed to “discover value as already-implemented by my brain”.
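A rough sketch of the diminishing-returns point mentioned above, treating boredom as a concave per-value utility; the log curve and the numbers are assumptions for illustration, not a model of anyone’s actual preferences.

```python
import math

def enjoy(amount):
    # Concave utility: each extra unit of the same good adds less than the last.
    return math.log1p(amount)

concentrated = enjoy(30)                     # all effort into one value
spread = enjoy(10) + enjoy(10) + enjoy(10)   # effort spread across three values

print(round(concentrated, 2), round(spread, 2))  # ~3.43 vs ~7.19: variety wins
```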
That does not necessarily follow. A larger plurality of values can be what yields the greatest utility.
I don’t say that it must always be so. But it can be constructed that way.