A utility function just maps states down to a one-dimensional spectrum of utility.
That is a simple-enough concept, and I doubt it is the source of disagreement.
The difference boils down to what the utility function is applied to. If the inputs to the utility function are “Alaska”, “Boston” and “California”, then a utilitarian representation of circular driving behaviour is impossible.
However, in practice, agents know more than just what they want. They know what they have got. Also, they know how bored they are. So, expanding the set of inputs to the utility function to include other aspects of the agent’s state provides a utilitarian resolution. This does not represent a non-standard definition or theory—it is just including more of the agent’s state in the inputs to the utility function.
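To make that concrete, here is a minimal sketch (the numbers and the boredom mechanism are made up purely for illustration, not anyone’s actual model): once “how bored am I with where I am” is part of the state, an ordinary utility maximiser will happily drive in circles.

```python
# Illustrative sketch only: with utility over cities alone, no assignment
# u(Alaska), u(Boston), u(California) can prefer Boston to Alaska, California
# to Boston, AND Alaska to California. Add "boredom with the current city"
# to the state and the circular route falls out of ordinary maximisation.

CITIES = ["Alaska", "Boston", "California"]

def utility(city, boredom):
    """Utility of being in `city`, discounted by how bored the agent is there."""
    base = {"Alaska": 1.0, "Boston": 1.0, "California": 1.0}
    return base[city] - boredom[city]

def choose_next(boredom):
    """Pick the city with the highest utility given current boredom levels."""
    return max(CITIES, key=lambda c: utility(c, boredom))

boredom = {c: 0.0 for c in CITIES}
location = "Alaska"
route = [location]
for _ in range(6):
    boredom[location] += 0.5                                      # staying put palls
    boredom = {c: max(0.0, b - 0.2) for c, b in boredom.items()}  # old boredom fades
    location = choose_next(boredom)
    route.append(location)

print(" -> ".join(route))
# Alaska -> Boston -> California -> Alaska -> Boston -> California -> Alaska
```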
I agree with the substance of everything you have just said, and maintain that the only real point on which we disagree is whether the standard technical usage of “utility function” allows the choice set to be considered as part of the state description.
Anything else you want to include, go for it. But I maintain that, while it is clearly formally possible to include the choice set in the state description, this is not part of standard usage, and therefore your objection to Cyan’s original comment (which is a well-established result based on the standard usage) was misplaced.
I have no substantive problem in principle with including choice sets in the state description; maybe the broader definition of “utility function” that encompasses this is even a “better” definition.
ETA: The last sentence of this comment previously said something like “but I’m not sure what you gain by doing so”. I thought I had managed to edit it before anyone would have seen it, but it looks like Tim’s response below was to that earlier version.
ETA2: On further reflection, I think it’s the standard definition of transitive in this context that excludes the choice set from the state description, not the definition of utility function. Which I think basically gets me to where Cyan was some time ago.
You get to model humans with a utility function, for one thing. Modelling human behaviour is a big part of the point of utilitarian models, and human decisions really do depend on the range of choices they are given, in a weird way that can’t be captured without this information.
Also, the formulation is neater. You get to write u(state), instead of u(state minus a bunch of things which are to be ignored).
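As a rough sketch of the choice-set point (the option names and numbers are made up): with u over options alone, adding a third option to the menu can never flip the choice between two options that were already there; once the menu itself is an input, it can.

```python
# Illustrative sketch only: u over options forces menu-independent choices,
# while u(option, menu) can express the kind of context effects people show
# (e.g. an option looking better when a clearly inferior "decoy" is also offered).

def u_plain(option):
    return {"small": 2.0, "large": 1.0, "decoy": 0.5}[option]

def u_menu_aware(option, menu):
    # Hypothetical context effect: "large" gains appeal when the decoy is on offer.
    bonus = 5.0 if option == "large" and "decoy" in menu else 0.0
    return u_plain(option) + bonus

def choose(menu, u):
    return max(menu, key=lambda option: u(option, menu))

print(choose(["small", "large"], lambda o, m: u_plain(o)))           # small
print(choose(["small", "large", "decoy"], lambda o, m: u_plain(o)))  # still small
print(choose(["small", "large"], u_menu_aware))                      # small
print(choose(["small", "large", "decoy"], u_menu_aware))             # large
```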
Fair enough. Unfortunately you also gain confusion from people using terms in different ways, but we seem to have made it to roughly the same place in the end.
Also, the formulation is neater. You get to write u(state), instead of u(state minus a bunch of things which are to be ignored).
This is a quibble, and I guess it kind of depends what you mean by neater, but this claim strikes me as odd. Any actual description of (state including choice set) is going to be more complicated than the corresponding description of (state excluding choice set). Indeed, I took that to be part of your original point: you can represent almost anything if you’re willing to complicate the state descriptions sufficiently.
I mean you can say that the agent’s utility function takes as its input its entire state—not some subset of it. The description of the entire state is longer, but the specification of what is included is shorter.