I haven’t been downvoting you, for what it’s worth.
Anyway, I think our disagreement revolves around different interpretations of “desirable” in that quote (I think that definition’s a little loose, incidentally, but that doesn’t seem to be problematic here). You seem to be defining it in terms of choice: a world-state is desirable relative to another if an agent would choose it over the other given the opportunity. That’s pretty close to the thinking in economics among other disciplines, which is why I’ve been talking so much about revealed preference.
The problem is that we often choose things that turn out in retrospect to have served our needs poorly. With that in mind I’m inclined to think of terminal values as irreducible terms in a utility function: features of future world-states that have a direct impact on an agent’s well-being (a loose term, but hopefully an understandable one), and which can’t be expressed in terms of more fundamental features. (There might be more than one decomposition of values here, in which case we should prefer the simplest one.)
That’s fundamentally choice-agnostic, although elective concordance with outcomes might turn out to be such a term. Irrational risk aversion (though risk aversion can be rational, taking into account the limitations of foresight!) and other cognitive biases are features of choice, not of utility: if they worked on utility directly, we wouldn’t call them biases.
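To make that concrete, here’s a rough Python sketch of what I have in mind. The particular terms (“comfort”, “novelty”) and their weights are invented for illustration, not a proposed decomposition of anyone’s actual values; the point is just that utility is a function of world-state features alone, and a bias would have to show up as a change to the choosing step, not to the utility function.

```python
# Toy sketch, not a claim about real human values: utility depends only
# on features of the world-state; choice is a separate step, and that is
# where a bias would have to live.

def utility(state):
    # Hypothetical irreducible terminal-value terms; names and weights
    # are purely illustrative.
    return 1.0 * state["comfort"] + 0.5 * state["novelty"]

def expected_utility(lottery):
    # lottery: list of (probability, world_state) pairs
    return sum(p * utility(s) for p, s in lottery)

def choose(options):
    # An unbiased (VNM-style) chooser just maximizes expected utility.
    # A cognitive bias would be a modification here, leaving utility() alone.
    return max(options, key=expected_utility)

safe  = [(1.0, {"comfort": 5, "novelty": 0})]
risky = [(0.5, {"comfort": 0, "novelty": 12}),
         (0.5, {"comfort": 0, "novelty": 0})]

print(expected_utility(safe), expected_utility(risky))  # 5.0 vs 3.0
print(choose([safe, risky]) is safe)                    # True
```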
By way of disclaimer, though, I should probably mention that this model isn’t a perfect one when applied to humans: we don’t seem to follow the VNM axioms consistently, so we can’t be said to have utility functions in the strict sense. Some features of our cognition seem to behave similarly within certain bounds, though, and it’s those that I’m focusing on above.
Excellently put; I think that sums up our disagreement very accurately. I’m not sure risk aversion couldn’t be expressed as an irreducible term in a utility function, though. I suppose it would be more of a trait of the utility function, such as all probabilities being raised to a power greater than one, or something.
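Something like this, maybe (toy numbers, and the exponent is arbitrary): raising the probabilities to a power greater than one down-weights every uncertain outcome relative to a sure thing, so the agent turns down gambles that plain expected utility is indifferent about, even though the utilities of the outcomes themselves never change. The distorted weights also no longer sum to one, which is perhaps why it reads more naturally as a trait of the choice rule than as a terminal value.

```python
GAMMA = 2.0  # any exponent > 1 produces the effect; the value is arbitrary

def utility(outcome):
    # Leave outcome utilities untouched; for this toy example just use
    # the dollar amount directly.
    return outcome

def weighted_value(lottery, gamma=GAMMA):
    # lottery: list of (probability, outcome) pairs.
    # Decision weights are p**gamma rather than p; for gamma > 1 these
    # weights no longer sum to 1.
    return sum((p ** gamma) * utility(x) for p, x in lottery)

sure_thing = [(1.0, 50)]
coin_flip  = [(0.5, 100), (0.5, 0)]

# Plain expected utility is indifferent (50 vs 50) ...
print(sum(p * utility(x) for p, x in sure_thing))  # 50.0
print(sum(p * utility(x) for p, x in coin_flip))   # 50.0

# ... but the distorted weights prefer the sure thing (50 vs 25), which
# looks like risk aversion even though utility() never changed.
print(weighted_value(sure_thing))  # 50.0
print(weighted_value(coin_flip))   # 25.0
```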