The most defensible use of the term is described as Ordinal Utility, but that is a little weaker than how I commonly see it used around here. I’d summarize it as “a predictive model for how much goodness an agent will experience conditioned on some decision”. Vincent Yu has a more formal description in [this comment](http://lesswrong.com/lw/dhd/stupid_questions_open_thread_round_3/72z3).
There’s a lot of discussion about whether humans have a utility function or not, with the underlying connotation that a utility function implies consistency in decision-making, so observed inconsistency proves the lack of one. One example: Do Humans Want Things? I prefer to think of humans as having a utility function at any given point in time, but not one that’s consistent over time.
A semi-joking synonym for “I care about X” for some of us is “I have a term for X in my utility function”. Note that this (for me) implies a LOT of terms in my function, with very different coefficients that may not be constant over time.
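To make the “lots of terms with drifting coefficients” picture concrete, here’s a minimal sketch in Python. Everything in it (the term names, the weight schedule) is made up for illustration; the only point is that a fixed outcome can be ranked coherently at any single time t while still being ranked differently at different times.

```python
# A minimal sketch (my own illustration, not anything formal from the linked
# comment): utility as a weighted sum of terms, where the weights are a
# function of time. "Consistent at any given point in time, but not across
# time" then just means utility(outcome, t1) and utility(outcome, t2) can
# rank outcomes differently.

from typing import Callable, Dict

# Each term scores some feature of an outcome; the coefficients can drift.
Terms = Dict[str, Callable[[dict], float]]
Coefficients = Callable[[float], Dict[str, float]]  # time -> weight per term

def utility(outcome: dict, t: float, terms: Terms, coeffs: Coefficients) -> float:
    """Weighted sum of term scores, with time-varying weights."""
    weights = coeffs(t)
    return sum(weights[name] * score(outcome) for name, score in terms.items())

# Hypothetical terms and a hypothetical weight schedule, purely for illustration.
terms: Terms = {
    "novelty": lambda o: o.get("novelty", 0.0),
    "comfort": lambda o: o.get("comfort", 0.0),
}

def coeffs(t: float) -> Dict[str, float]:
    # Weights shift with time: the same outcome gets ranked differently later.
    return {"novelty": 2.0 - 0.1 * t, "comfort": 1.0 + 0.1 * t}

outcome = {"novelty": 0.8, "comfort": 0.3}
print(utility(outcome, t=0.0, terms=terms, coeffs=coeffs))   # early: novelty dominates
print(utility(outcome, t=20.0, terms=terms, coeffs=coeffs))  # later: comfort dominates
```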
A “utility function” as applied to humans is an abstraction, a model. And just like any model, it is subject to the George Box maxim “All models are wrong, but some are useful”.
If you are saying that your model is “humans … [have] a utility function at any given point in time, but not one that’s consistent over time”, well, how useful is this model? You can’t estimate this utility function well and it can change at any time… so what does this model give you?