Well, the reason I’d call it a terminal value is that if you asked people whether they would save 50 lives with 100% probability or 100 lives with 50% probability, they would tend to pick the former. When pressed why, they wouldn’t really have an explanation, other than that they value not taking risks.
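(The expected number of lives saved is the same in both cases, so whatever is driving the preference, it isn’t the expectation of saving more lives. Spelling out the trivial arithmetic:)

```python
# Expected lives saved under each option: they come out identical, so the
# usual preference for the sure thing can't come from expecting to save more lives.
sure   = [(1.0, 50)]             # save 50 lives for certain
gamble = [(0.5, 100), (0.5, 0)]  # save 100 lives with probability 0.5

expected_lives = lambda lottery: sum(p * lives for p, lives in lottery)
print(expected_lives(sure), expected_lives(gamble))  # 50.0 50.0
```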
Sure, but you could generate a scenario like that for just about any well-defined cognitive bias: it’s perilously close to the definition of bias, in fact. That doesn’t necessarily mean biases are inextricably incorporated into our value system, unless you’re defining human values purely in terms of revealed preferences—in which case why bother talking about this stuff at all?
I’m sorry for continuing this, because I feel like I’m just not getting why I’m wrong and we’re going in circles. And while I’m fairly confident that some of the downvoting is grudge-based, some of it is not, and was here before this happened.
How are you defining terminal values? EY defined them as values that “are desirable without conditioning on other consequences”. It seems to me that regardless of what the things are, if you value things you have (or sure things) more than potential future things, that would qualify as a terminal value.
I haven’t been downvoting you, for what it’s worth.
Anyway, I think our disagreement revolves around different interpretations of “desirable” in that quote (I think that definition’s a little loose, incidentally, but that doesn’t seem to be problematic here). You seem to be defining it in terms of choice: a world-state is desirable relative to another if an agent would choose it over the other, given the opportunity. That’s pretty close to the thinking in economics, among other disciplines, which is why I’ve been talking so much about revealed preference.
The problem is that we often choose things that turn out in retrospect to have served our needs poorly. With that in mind, I’m inclined to think of terminal values as irreducible terms in a utility function: features of a future world-state that have a direct impact on an agent’s well-being (a loose term, but hopefully an understandable one), and which can’t be expressed in terms of more fundamental features. (There might be more than one decomposition of values here, in which case we should prefer the simplest one.)
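To make that concrete, here’s roughly the shape I have in mind; the particular features and weights below are invented placeholders, not a claim about what anyone’s terminal values actually are:

```python
# Sketch: a utility function as a weighted sum of irreducible terms, each a
# direct feature of a future world-state. Features and weights are placeholders.
from dataclasses import dataclass

@dataclass
class WorldState:
    people_alive: int
    pain_experienced: float   # some aggregate measure, units left vague
    novelty: float            # likewise

# Each terminal value is a function of the world-state reached, not of the
# choice process that led to it.
TERMINAL_VALUES = {
    "lives":   lambda w: w.people_alive,
    "no_pain": lambda w: -w.pain_experienced,
    "novelty": lambda w: w.novelty,
}
WEIGHTS = {"lives": 1.0, "no_pain": 0.5, "novelty": 0.1}

def utility(world: WorldState) -> float:
    """Choice-agnostic: depends only on the world-state itself."""
    return sum(WEIGHTS[name] * term(world) for name, term in TERMINAL_VALUES.items())

print(utility(WorldState(people_alive=50, pain_experienced=3.0, novelty=2.0)))  # roughly 48.7
```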
That’s fundamentally choice-agnostic, although elective concordance with outcomes might turn out to be such a term. Irrational risk aversion (though risk aversion can be rational, taking into account the limitations of foresight!) and other cognitive biases are features of choice, not of utility: if they worked on utility directly, we wouldn’t call them biases.
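One way to see the distinction: give two deciders exactly the same utility function and let the bias live entirely in the choice procedure. (A toy sketch; the “certainty bonus” is an invented stand-in for whatever the bias is really doing.)

```python
# Same utility over outcomes for both deciders; the bias is a property of how
# choices get made, not of the utility itself.
def expected_utility(lottery, utility):
    return sum(p * utility(outcome) for p, outcome in lottery)

def unbiased_choice(lotteries, utility):
    return max(lotteries, key=lambda lot: expected_utility(lot, utility))

def certainty_biased_choice(lotteries, utility, bonus=10.0):
    # The bias operates at the choice step: sure things get an arbitrary bump.
    def score(lot):
        is_sure = len(lot) == 1 and lot[0][0] == 1.0
        return expected_utility(lot, utility) + (bonus if is_sure else 0.0)
    return max(lotteries, key=score)

utility = lambda lives: lives    # risk-neutral utility, shared by both deciders
sure   = [(1.0, 50)]
gamble = [(0.5, 110), (0.5, 0)]  # higher expected lives than the sure thing

print(unbiased_choice([sure, gamble], utility))          # picks the gamble (EU 55 > 50)
print(certainty_biased_choice([sure, gamble], utility))  # picks the sure thing (50 + 10 > 55)
```

Both deciders “value” exactly the same things; only one of them chooses badly by its own lights, which is what makes it a bias rather than a preference.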
By way of disclaimer, though, I should probably mention that this model isn’t a perfect one when applied to humans: we don’t seem to follow the VNM (von Neumann–Morgenstern) axioms consistently, so we can’t be said to have utility functions in the strict sense. Some features of our cognition seem to behave similarly within certain bounds, though, and it’s those that I’m focusing on above.
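The standard illustration is the Allais pattern: with the usual choices (the sure million in the first pair, the ten-percent shot at five million in the second), no assignment of utilities is consistent with both, which is the sense in which we fail the axioms. A quick check, normalizing u($0) = 0 and u($1M) = 1:

```python
# Allais-type pattern: 1A (sure $1M) preferred to 1B (10% $5M, 89% $1M, 1% $0),
# and 2B (10% $5M, 90% $0) preferred to 2A (11% $1M, 89% $0).
# With u(0)=0 and u(1M)=1, preferring 1A requires 0.11 > 0.10*x (where x = u(5M)),
# while preferring 2B requires 0.10*x > 0.11 -- so no x can satisfy both.

def consistent(x):
    prefers_1A = 1.0 > 0.10 * x + 0.89 * 1.0 + 0.01 * 0.0
    prefers_2B = 0.10 * x + 0.90 * 0.0 > 0.11 * 1.0 + 0.89 * 0.0
    return prefers_1A and prefers_2B

print(any(consistent(x / 100) for x in range(0, 1000)))  # False: no u($5M) works
```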
Excellently put; I think that sums up our disagreement very accurately. I’m not sure risk aversion couldn’t be expressed as an irreducible term in a utility function, though. I suppose it would be more of a trait of the utility function, such as all probabilities being raised to a power greater than one, or something.
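For what it’s worth, that kind of transform does reproduce the pattern from the lives example; squaring the probabilities before taking the expectation, say (purely illustrative numbers):

```python
# Weighting probabilities by p**gamma with gamma > 1 down-weights uncertain
# outcomes, which makes the sure 50 beat the 50/50 shot at 100.
def weighted_value(lottery, gamma=2.0):
    return sum((p ** gamma) * lives for p, lives in lottery)

print(weighted_value([(1.0, 50)]))            # 50.0
print(weighted_value([(0.5, 100), (0.5, 0)])) # 25.0
```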