The reason the DMI argument works when arguing for equality of wealth is that people are limited in their ability to get utility from their wealth, because there is only so much time in the day to spend enjoying it.
That is a reason for diminishing marginal utility, not the reason. Maybe it is even so much bigger than all the other reasons that treating it as the only reason gives you a pretty good approximation of how much marginal utility you gain from each dollar. But the fact that this particular reason does not apply to lifespans does not mean you are not allowed to be risk averse about your lifespan. In general, you do not need an excuse to be risk averse; risk aversion is perfectly compatible with expected utility theory. I think thought experiments along the lines of the one you propose make a compelling demonstration that humans are risk averse about almost everything. This is not inconsistent.
In general, you do not need an excuse to be risk averse; risk aversion is perfectly compatible with expected utility theory. I think thought experiments along the lines of the one you propose make a compelling demonstration that humans are risk averse about almost everything. This is not inconsistent.
Thank you. For some reason I thought that it was inconsistent, that there was somehow an objective way to determine how to fit probabilities into your utility function. Your comment and others have indicated to me that this is probably not the case.
That’s not quite right. A better way to put it is that probabilities are the only thing that there is an objective way to fit into a utility function. If X is worth 1 util, and Y is worth 3 utils, then a lottery that gives you X if a fair coin lands heads and Y if it lands tails is worth 2 utils.
But there is no objective way to fit time into a utility function. It is possible that a 30-year life is worth 200 utils, but a 60-year life is only worth 300 utils, instead of 400.
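Written out, with the only assumption being the one already stated (a fair coin, so each outcome has probability 1/2):

$$\mathbb{E}\!\left[U(\text{lottery})\right] = \tfrac{1}{2}\,U(X) + \tfrac{1}{2}\,U(Y) = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 3 = 2 \text{ utils}.$$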
I’m not sure I get it. What I inferred from your first comment was that it is not irrational to be averse to risky ventures, even if the probabilities seem beneficial. Or to put it another way, the Endowment Effect is not irrational. I am starting to think the Endowment Effect might be responsible for a lot of the hesitancy to engage in lifespan gambles.
But there is no objective way to fit time into a utility function. It is possible that a 30-year life is worth 200 utils, but a 60-year life is only worth 300 utils, instead of 400.
I find this idea disturbing because it might imply that once someone reaches the age of 30 you should (if you can) kill them and replace them with a new person who has the same utility function about their lifespan.
What I inferred from your first comment was that it is not irrational to be averse to risky ventures, even if the probabilities seem beneficial.
That is correct. You are not obligated to value X the same amount as a 50% chance of getting 2X, whether X is a unit of money, lifespan, or whatever. But that’s because your utility function does not have to be linear with respect to X. If you say that X is worth 1 util and 3X is worth 2 utils, that’s just another way of saying that X is just as valuable as a 50% chance of getting 3X. A utility function is just a way of encoding both the order of your preferences and your response to risk.
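To spell out that equivalence, with the additional assumption (implicit above) that getting nothing is worth 0 utils:

$$\tfrac{1}{2}\,U(\text{nothing}) + \tfrac{1}{2}\,U(3X) = \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot 2 = 1 = U(X),$$

so the sure X and the 50% chance of 3X have the same expected utility, which is exactly what indifference between them means.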
Or to put it another way, the Endowment Effect is not irrational.
No, the Endowment Effect is status quo bias, which is different from risk aversion. It changes your relative preferences when your assessment of the status quo changes, potentially making people decline a series of deals that, taken together, would leave them strictly better off, so it is still irrational. There are models of risk aversion that are completely time-symmetric (not dependent on the status quo), such as exponential discounting.
Given where this conversation is going, I should clarify that the Endowment Effect does not, strictly speaking, violate the expected utility axioms. It's just that most people have a strong intuition that temporary changes in your ownership of resources, reversed before you would even get a chance to use them, cannot possibly matter, and under that assumption the Endowment Effect is irrational.
I am starting to think the Endowment Effect might be responsible for a lot of the hesitancy to engage in lifespan gambles.
Only partially. Our risk aversion with respect to our future lifespan has very little to do with the Endowment Effect, and can be modeled by exponential discounting, which ignores the status quo entirely.
However, we also have an intuition that once a person has been created, keeping them alive is more valuable than creating them in the first place. In a sense, this is the Endowment Effect, but unlike in the case of material resources, it does not seem obvious that someone continuing to live a certain amount of time should be just as valuable as someone starting to live the same amount of time. Hence, it is possible to value 60 years of future life less than twice as much as 30 years of future life for someone who already exists, but also value creating one person who will live 60 years more than creating two people who will live 30 years each.
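Here is a minimal sketch of the kind of status-quo-ignoring model mentioned above; the discount factor of 0.97 per year is an arbitrary illustrative choice, not anything from this thread:

```python
# Exponential discounting: year t of future life is worth delta**t utils,
# with no reference to any status quo or endowment.

def discounted_value(years, delta=0.97):
    """Value of `years` future years of life under exponential discounting."""
    return sum(delta ** t for t in range(years))  # closed form: (1 - delta**years) / (1 - delta)

v30 = discounted_value(30)  # roughly 20 utils
v60 = discounted_value(60)  # roughly 28 utils, well under 2 * v30

# A fair "double your remaining 30 years or die now" gamble is therefore declined:
print(0.5 * v60 + 0.5 * 0.0 < v30)  # True
```

The concavity does all the work: because each extra year is worth a little less than the one before it, 60 future years are worth less than twice 30 future years, so risk aversion about lifespan falls out without any appeal to the Endowment Effect.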
If consequentialism were that straightforward, that repugnant conclusion might hold. Killing anyone who reaches age 30, though, would diminish the utility of everyone's lives by more than just the remaining years they'd lose, for they would also have the disutility of knowing their days were numbered (and someone would have the disutility of knowing they would have to perform the act of killing others). Also, one's life has utility to others as well as to oneself. If everyone were euthanized at age 30, parenthood would have to begin at age 12 for children to be raised for a full 18 years.