...what with talk about “what utility do you assign to a firm handshake” or the like.
World states are not uniform entities but compounds of different items, different features, each contributing a certain amount of utility, a certain weight, to the overall value of the world state. If you only consider utility preferences between whole world states, rather than over all the items your utility function is made up of, isn’t that a dramatic oversimplification? I don’t see what is wrong with asking how you weigh firm handshakes. A world state that features firm handshakes must differ from one that doesn’t, even if the difference is tiny. So when I ask how much utility you assign to firm handshakes, I am asking how you weigh firm handshakes, that is, how their absence would affect the value of a world state. I am asking about your utility preferences between possible world states that feature firm handshakes and those that don’t.
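To make that framing concrete, here is a rough Python sketch (all feature names and weights are invented for illustration, not anything proposed in this thread): the utility you “assign to firm handshakes” falls out as the difference in value between two otherwise identical world states.

```python
# Toy sketch: a world state as a set of features, and a utility function
# that sums an (invented) weight for each feature present.
FEATURE_WEIGHTS = {
    "friendship": 100.0,
    "cheesecake": 0.5,
    "firm_handshakes": 0.001,  # tiny, but nonzero
}

def utility(world_state: set[str]) -> float:
    """Value of a world state: the summed weights of its features."""
    return sum(FEATURE_WEIGHTS.get(feature, 0.0) for feature in world_state)

with_handshakes = {"friendship", "cheesecake", "firm_handshakes"}
without_handshakes = {"friendship", "cheesecake"}

# "How much utility do you assign to firm handshakes?" then just means:
print(round(utility(with_handshakes) - utility(without_handshakes), 6))  # 0.001
```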
So far as I can tell, you have it backwards—those sorts of functions form a subset of the set of utility functions.
The problem is that utility functions that are easy to think about are ridiculously simple, and produce behavior like the above “maximize one value” or “tile the universe with ‘like’ buttons”. They’re characterized by “Handshake = (5*firmness_quotient) UTILS” or “Slice of Cheesecake = 32 UTILS” or what have you.
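As a rough sketch of that distinction (again, names and numbers invented for illustration): the “easy to think about” functions score each item independently and add the scores up, while a utility function in general is just any map from whole world states to real numbers, of which the additive ones are a special case.

```python
# The "easy to think about" kind: each item has a fixed score, and a world
# state's utility is just the sum of the scores of its items.
ITEM_SCORES = {"cheesecake_slice": 32.0, "firm_handshake": 5.0}

def additive_utility(world_state: frozenset[str]) -> float:
    return sum(ITEM_SCORES.get(item, 0.0) for item in world_state)

# A utility function in general is any map from world states to reals; it
# need not decompose item by item. Here the items interact, so no fixed
# per-item score reproduces it:
def general_utility(world_state: frozenset[str]) -> float:
    value = additive_utility(world_state)
    if "cheesecake_slice" in world_state and "firm_handshake" in world_state:
        value += 10.0  # cake shared over a handshake is worth extra
    return value

state = frozenset({"cheesecake_slice", "firm_handshake"})
print(additive_utility(state), general_utility(state))  # 37.0 47.0
```

Every additive_utility-style function is also a map from states to reals, but not conversely, which is the sense in which the simple ones form a proper subset.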
I’m sure it’s possible to discuss utility functions without falling into these traps, but I don’t think we do that, except in the vaguest cases.