You get to model humans with a utility function, for one thing. Modelling human behaviour is a big part of the point of utility-based models, and human decisions really do depend on the range of choices they are given, in ways that can't be captured without this information.
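As a minimal sketch of that menu dependence (the option names and utility numbers are made up for illustration): adding a dominated decoy flips the choice between two otherwise-fixed options, which no menu-independent u(option) can reproduce, since u("A") and u("B") would be constants.

```python
def choose(menu, u):
    """Pick the option in menu that maximizes u; note u sees the whole menu."""
    return max(menu, key=lambda option: u(option, menu))

def u(option, menu):
    """Menu-dependent utility: B gains a bonus when a decoy it dominates is present."""
    base = {"A": 1.0, "B": 1.0, "B_decoy": 0.0}[option]
    bonus = 0.5 if option == "B" and "B_decoy" in menu else 0.0
    return base + bonus

print(choose(["A", "B"], u))             # "A" (first of the tied pair)
print(choose(["A", "B", "B_decoy"], u))  # "B": adding an option flips the choice
```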
Also, the formulation is neater: you get to write u(state) instead of u(state minus a bunch of things that are to be ignored).
Fair enough. Unfortunately you also gain confusion from people using terms in different ways, but we seem to have made it to roughly the same place in the end.
> Also, the formulation is neater: you get to write u(state) instead of u(state minus a bunch of things that are to be ignored).
This is a quibble, and I guess it depends on what you mean by neater, but this claim strikes me as odd. Any actual description of (state including choice set) is going to be more complicated than the corresponding description of (state excluding choice set). Indeed, I took that to be part of your original point: you can represent almost anything if you're willing to complicate the state descriptions sufficiently.
I mean you can say that the agent's utility function takes as its input the entire state, not some subset of it. The description of the entire state is longer, but the specification of what is included is shorter.
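A minimal sketch of that contrast, with made-up state fields: the whole-state u is specified in one line ("u of the entire state"), while the excluding version must additionally spell out exactly which parts of the state to strip away first.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class State:
    wealth: float
    health: float
    menu: FrozenSet[str]  # the choice set is just another part of the state

# Whole-state formulation: the specification is simply "u takes the entire state".
def u(state: State) -> float:
    return state.wealth + 2.0 * state.health + (0.5 if "decoy" in state.menu else 0.0)

# Excluding formulation: the definition must also enumerate what is to be ignored.
IGNORED_FIELDS = {"menu"}

def u_excluding(state: State) -> float:
    wealth, health = state.wealth, state.health  # everything except IGNORED_FIELDS
    return wealth + 2.0 * health
```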