One wouldn’t put all of one’s money into a single stock in the market, because we have decreasing marginal utility of money. If we didn’t, and all we wanted was the highest expected value (which is how we should optimize for charity), then we would put all of our money into the single stock with the highest expected return.
In other words, I don’t want to have a minimum number of lives saved—I want to maximize the number of lives I save.
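To make that concrete, here is a small numerical sketch (Python; the stocks, returns, and allocations are all made up for illustration): a pure expected-value maximizer goes all-in on the single highest-expected-return stock, while an agent with diminishing marginal utility of money (log utility here) prefers to split.

```python
import itertools
import math

wealth = 100.0

# Two hypothetical stocks, each a list of (probability, gross return) outcomes.
stock_a = [(0.5, 2.5), (0.5, 0.4)]   # risky, expected multiplier 1.45
stock_b = [(1.0, 1.05)]              # safe,  expected multiplier 1.05

def outcomes(frac_a):
    """Joint (probability, final wealth) outcomes when frac_a goes to stock A."""
    for (pa, ma), (pb, mb) in itertools.product(stock_a, stock_b):
        yield pa * pb, wealth * (frac_a * ma + (1 - frac_a) * mb)

def expected_wealth(frac_a):
    return sum(p * w for p, w in outcomes(frac_a))

def expected_log_utility(frac_a):
    return sum(p * math.log(w) for p, w in outcomes(frac_a))

allocations = [0.0, 0.5, 1.0]
print(max(allocations, key=expected_wealth))       # 1.0 -> all-in on the risky stock
print(max(allocations, key=expected_log_utility))  # 0.5 -> diversify
```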
Perhaps I didn’t express my point clearly enough. In fact, I’m certain of it. But I was more trying to express that there is some element of risk in a charity. Perhaps there is some probability that the charity is corrupt, etc., and isn’t as efficient as it’s rated. A better example is likely this:
Assuming it will succeed, the SIAI is pretty unarguably the most important charity in existence. But that’s a huge assumption, and it makes some sense to hedge against that bet and distribute money to other charities as well.
But you have no reason to be risk averse about purely altruistically motivated donations. A 50% chance to do some good is just as altruistically desirable as a 100% chance to do half as much good (ignoring changes in marginal utility or including them in “X as much good”).
I tend to agree with you. But many people are risk averse and would prefer the latter to the former, and I’m not necessarily sure you can say that’s wrong per se; it’s just a different utility function. What you can say is that that methodology leads to non-Pareto-optimal results in some cases.
It’s either “wrong” (irrational) or not purely altruistic. Of course even donations that are just mostly altruistic can do a lot of good and should be encouraged rather than the donors chastised for being selfish or irrational, but that doesn’t change the facts about what would constitute rational altruistic behavior.
I think you’re using “rationality” the way Rand did. There are plenty of ways things can be wrong without being irrational, and there are plenty of ways to be irrational without being wrong. Wrong vs. Irrational is quite often a debate about terminal values. In this case, your terminal value is maximizing good regardless of probability. In the hypothetical example, the person has a terminal value of risk aversion, which, agree or not, is a terminal value that many many humans have.
I think you’re misusing “terminal values” here. Risk aversion is a pretty stable feature of people’s revealed preferences, so I can sort of see why you’re putting it into that category; but I’d characterize it more as a heuristic people use in the absence of good intuitions about long-term behavior than as a terminal value in its own right. If you offered people the chance to become less risk-averse in a context where that demonstrably leads to improved outcomes relative to other preferences, I’d bet they’d take it, and be reflectively consistent in doing so. People try to do that in weak ways all the time, in fact; it’s a fixture of self-help literature.
Well the reason I’d call it a terminal value is that if you asked people whether they would save 50 lives with 100% probability or 100 lives with 50% probability, people would tend to pick the former. When pressed why, they wouldn’t really have an explanation, other than that they value not taking risks.
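To spell out the arithmetic in that example (a small illustrative sketch, not part of the original exchange): both options save 50 lives in expectation, and the preference for the certain option only appears once the outcomes are run through some concave, risk-averse utility function, here a square root chosen purely for illustration.

```python
import math

certain_option = [(1.0, 50)]             # 50 lives saved for sure
risky_option   = [(0.5, 100), (0.5, 0)]  # 100 lives saved with 50% probability

def expected_lives(gamble):
    return sum(p * lives for p, lives in gamble)

def expected_concave_utility(gamble):
    # Risk aversion modeled as a concave utility over lives saved.
    return sum(p * math.sqrt(lives) for p, lives in gamble)

print(expected_lives(certain_option), expected_lives(risky_option))  # 50.0 50.0
print(expected_concave_utility(certain_option))  # ~7.07
print(expected_concave_utility(risky_option))    # 5.0 -> certain option preferred
```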
Sure, but you could generate a scenario like that for just about any well-defined cognitive bias: it’s perilously close to the definition of bias, in fact. That doesn’t necessarily mean biases are inextricably incorporated into our value system, unless you’re defining human values purely in terms of revealed preferences—in which case why bother talking about this stuff at all?
I’m sorry for continuing this, because I feel like I’m just not getting why I’m wrong and we’re going in circles. And while I’m fairly confident that some of the downvoting is grudge-based, some of it is not, and was here before this happened.
How are you defining terminal values? EY defined them as values that “are desirable without conditioning on other consequences”. It seems to me that, regardless of what the things are, if you value things you have (or sure things) more than potential future things, that would qualify as a terminal value.
I haven’t been downvoting you, for what it’s worth.
Anyway, I think our disagreement revolves around different interpretations of desirable in that quote (I think that definition’s a little loose, incidentally, but that doesn’t seem to be problematic here). You seem to be defining it as based on choice: a world-state is desirable relative to another if an agent would choose it over the other given the opportunity. That’s pretty close to the thinking in economics among other disciplines, hence why I’ve been talking so much about revealed preference.
The problem is that we often choose things that turn out in retrospect to have served our needs poorly. With that in mind, I’m inclined to think of terminal values as irreducible terms in a utility function: features of future world-states that have a direct impact on an agent’s well-being (a loose term, but hopefully an understandable one), and which can’t be expressed in terms of more fundamental features. (There might be more than one decomposition of values here, in which case we should prefer the simplest one.)
That’s fundamentally choice-agnostic, although elective concordance with outcomes might turn out to be such a term. Irrational risk aversion (though risk aversion can be rational, taking into account the limitations of foresight!) and other cognitive biases are features of choice, not of utility: if they worked on utility directly, we wouldn’t call them biases.
By way of disclaimer, though, I should probably mention that this model isn’t a perfect one when applied to humans: we don’t seem to follow the VNM axioms consistently, so we can’t be said to have utility functions in the strict sense. Some features of our cognition seem to behave similarly within certain bounds, though, and it’s those that I’m focusing on above.
Excellently put; I think that sums up our disagreement very accurately. I’m not sure risk aversion couldn’t be expressed as an irreducible term in a utility function, though. I suppose it would be more of a trait of the utility function, such as all probabilities being raised to a power greater than one, or something.
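One crude way to cash that suggestion out (my reading of the comment above, with an arbitrary exponent; not an established model): apply a decision weight w(p) = p^gamma with gamma > 1 before multiplying by the outcome, so uncertain outcomes get discounted relative to certain ones even with a linear utility over lives.

```python
GAMMA = 1.5  # any exponent > 1, chosen arbitrarily for illustration

def weighted_value(gamble, gamma=GAMMA):
    # Probabilities raised to a power > 1 shrink, and shrink relatively more
    # the further they are from certainty (1 ** gamma is still 1).
    return sum((p ** gamma) * lives for p, lives in gamble)

certain_option = [(1.0, 50)]
risky_option   = [(0.5, 100), (0.5, 0)]

print(weighted_value(certain_option))  # 50.0
print(weighted_value(risky_option))    # 100 * 0.5 ** 1.5 ~ 35.4 -> certainty wins
```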
Aside from whether risk aversion can usefully be considered a terminal value, such a risk-averse terminal value cannot possibly be a purely altruistic one, because it’s only noticeably risk averse with respect to that particular donor, not with respect to the beneficiaries (unless your individual donation constitutes a significant fraction of the total funds for a particular cause).
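A rough numerical sketch of that asymmetry (all numbers hypothetical; “good delivered” is just funding times whether the risky charity works): hedging your own donation barely changes the uncertainty the beneficiaries face, because that is dominated by everyone else’s funding, but it completely changes the spread in your own contribution’s impact.

```python
OTHER_FUNDING = 1_000_000.0  # what everyone else gives to the risky charity
MY_DONATION = 100.0
P_SUCCESS = 0.5              # the risky charity does good with 50% probability
BERNOULLI_SD = (P_SUCCESS * (1 - P_SUCCESS)) ** 0.5  # 0.5

def beneficiary_sd(my_fraction_to_risky):
    """Std. dev. of total good delivered (the safe alternative is deterministic)."""
    risky_total = OTHER_FUNDING + my_fraction_to_risky * MY_DONATION
    return risky_total * BERNOULLI_SD

def my_impact_sd(my_fraction_to_risky):
    """Std. dev. of the good attributable to my own donation alone."""
    return my_fraction_to_risky * MY_DONATION * BERNOULLI_SD

print(beneficiary_sd(0.0), beneficiary_sd(1.0))  # 500000.0 vs 500050.0
print(my_impact_sd(0.0), my_impact_sd(1.0))      # 0.0 vs 50.0
```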
Fair point. I agree. At least I did once I figured out what that sentence was actually saying ;-). I was just trying to offer a potential explanation for ChrisHallquist’s actions.