You claim to be supporting expected utility, but you talk only about money. You don’t just “stop” if your utility function is convex—you buy the option for a 50% chance at $20,000, because that chance is worth more to you than $10,000. Conversely, if your utility function is concave, you buy the other option. There is no “hiding” involved: if I am risk-averse, I definitely take the $10,000 for sure, and indeed would likely be willing to take $9,999 for sure.
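To make this concrete, here is a minimal sketch with assumed illustrative utilities (a square for the convex case and a square root for the concave one; neither is claimed to describe anyone’s actual preferences):

```python
sure, prize, p = 10_000, 20_000, 0.5   # a sure $10,000 vs. a 50% chance at $20,000

# Assumed illustrative utility functions, not anyone's actual preferences.
utilities = {
    "convex (risk-loving)":  lambda x: x ** 2,
    "concave (risk-averse)": lambda x: x ** 0.5,
}

for name, u in utilities.items():
    eu_gamble = p * u(prize) + (1 - p) * u(0)
    choice = "take the gamble" if eu_gamble > u(sure) else "take the sure $10,000"
    print(f"{name}: {choice}")
# convex: take the gamble; concave: take the sure $10,000
```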
In a world in which people are risk-averse—for very good evolutionary reasons—you won’t be able to get somebody to trade those options in both directions. This is, after all, the entire principle of insurance: you can trade risks because there is a difference between the risk aversion of an individual and that of an insurance company. In principle, yes, you could buy anti-insurance, where you stake money that you forfeit if your house burns down, but since you’re always trading with a risk-averse person, you cannot profit on average.
If you can find somebody risk-loving, they may be willing to give you the deal that allows your proof at the end to function. As it is, risk-averse dealers will not make deals that increase their risk without an improvement in expected monetary return to compensate for the lower expected utility, so the market does not work that way.
If I am risk-loving, I will buy the 50% chance of ¥20 000 for ¥10 000. I may even buy it for ¥10 001. But if it is freely tradable, I will not buy it for ¥15 000, or even ¥10 010. Similarly, if I am risk-averse and it is freely tradable, I will not sell it for ¥9 990.
These setups force your risk-loving or risk-averse behaviour into very narrow bands. There is another way of seeing the prices on these things: instead of buying this lottery, I could buy a one-millionth share of each of a million such lotteries, all independent. This I must price at around ¥10 000 under any reasonable utility function (the central limit theorem gives a very narrow distribution around that value).
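As a rough check on that central-limit claim, here is a small simulation sketch, assuming each lottery pays ¥20 000 with probability 0.5 and nothing otherwise:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1_000_000       # number of independent lotteries
payout = 20_000     # assumed: each pays ¥20,000 with probability 0.5, else ¥0

# A one-millionth share of each of n independent lotteries is worth the
# average payout across all of them.
draws = rng.binomial(1, 0.5, size=n) * payout
print(f"portfolio value: ~¥{draws.mean():,.0f}")   # very close to ¥10,000

# The standard deviation of the average is ¥10,000 / sqrt(n) = ¥10, so any
# reasonable utility function must price this portfolio at about ¥10,000.
```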
So again, if these objects have a price, they must be valued at around ¥10 000, no matter what your utility function is.
Unfortunately, there are rarely situations even in finance where a) I can buy a millionth of a million gambles and b) other people also want to do something similar (and there are still sellers of these lotteries), especially once we move into the territory of negligible or zero expected gain.
Again: unless each risk-averse person is matched with a risk-loving counterparty, the arbitrage between a linear utility function in money and a risk-averse one is simply insurance. While competition between insurers can help narrow this gap, insurers do not exist unless the arbitrage gap exists.
I’m ultimately not criticizing your math or concepts; I’m suggesting they have an important point of disconnection from their use in the world, and that we should be aware of that and not try to force our utility functions to be linear in money.
I’m not actually saying that it is vital to make our utility functions linear in money—I’m suggesting it has the same level of importance as making our preferences follow independence.
I.e. we should start moving our preferences in the direction of independence/linearity in money, maybe a little bit more than we do now, and should be aware that ultimately this is the “correct” way of doing things once transaction costs vanish.
But in practice we shouldn’t worry too much about this (unless we work in finance), as the complete absence of a human-describing utility function is a much more fundamental issue!
Fair enough. Though I still ultimately think that sentient beings will typically evolve or self-modify to be risk-averse, I suppose there is a difference between that claim and saying that there will always be insurance opportunities in the absence of transaction costs (though there may always be arbitrage/insurance between people of different risk tolerances).
I’d be interested to hear your thoughts on whether or to what degree we may be able to discern the kernel of a human utility function.
> (though there may always be arbitrage/insurance between people of different risk tolerances).
The efficient market price for increasing and decreasing risk is zero.
Easy example of increasing risk: I create two futures contracts, A, which will pay out £50 if a coin comes up heads and cost you £50 if it comes up tails, and B, where the outcomes are reversed. A and B together are exactly the same as nothing at all; sold separately, I’ve just created risk from nothing.
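A minimal sketch of that construction (the function names are mine, purely illustrative):

```python
# Contract A pays +£50 on heads and -£50 on tails; B is the mirror image.
def contract_A(heads: bool) -> int:
    return 50 if heads else -50

def contract_B(heads: bool) -> int:
    return -contract_A(heads)

for heads in (True, False):
    # Holding A and B together nets exactly zero in every state of the world,
    # yet each contract sold separately is pure risk with expected value £0.
    assert contract_A(heads) + contract_B(heads) == 0
```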
In practice there will be insurance opportunities, but the profits may be tiny.
> I’d be interested to hear your thoughts on whether or to what degree we may be able to discern the kernel of a human utility function.
I think: not at all. Our preferences are too intransitive, dependent, and inconsistent. The real question is whether we can construct a utility function that we’d find acceptable; that is much more likely.
> The efficient market price for increasing and decreasing risk is zero.
Only if you can find people with complementary attitudes toward risk. Your example does indeed create risk—but in a risk-averse world, nobody would want to buy those contracts. Insurance arises from large entities with high capital, and thus relative risk-neutrality, assuming the risk of smaller, more risk-averse entities for a price. If this market can be made efficient, the profits thus gained may be small, but insofar as all private insurance-providing organizations must remain somewhat risk-averse to survive, the profits cannot be driven to zero.
It may not even be the case that they are small—depending on the structure of the market, a sufficiently large and risk-neutral organization may be able to become something of a natural monopoly, with its size reinforcing its risk-neutrality, and any competitors having difficulty entering without being large at the outset.
Re: the human utility function, I think I agree. I’ve been interested in Eric Weinstein’s work on introducing gauge theory into preferences to make them usefully invariant, but I think you’re right that they are too deeply flawed for a utility function to be discerned naturally.
Mainly agree—but don’t forget aggregation. You can get rid of risk even if everyone is risk-averse, just by replacing “whole ownership of a few risky contracts” with “partial ownership of many risky contracts”.
In the example in this post, if I own LH and you own LT, and we are both risk-averse, we gain from trading half of our contracts to each other.
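A hedged sketch of why the half-and-half trade helps, assuming LH pays out on heads and LT on tails (the £20 000 payout and square-root utility are illustrative assumptions, not taken from the post):

```python
p, payout = 0.5, 20_000   # assumed: LH pays £20,000 on heads, LT on tails
u = lambda x: x ** 0.5    # an assumed concave (risk-averse) utility

# Before the trade I hold all of LH: £20,000 or £0 on a coin flip.
eu_before = p * u(payout) + (1 - p) * u(0)

# After swapping halves, I hold half of LH and half of LT, which together
# pay £10,000 in every outcome.
eu_after = u(payout / 2)

print(round(eu_before, 1), round(eu_after, 1))  # 70.7 vs 100.0: both holders gain
```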
If you love the Mini Cooper, you’d buy it for £20 000, but never for £200 000, no matter how much you love it. Money contracts are similar, except the arbitrage gets even stronger, as the correct price of these contracts is canonical.
Start with save scummers. All that training is bound to have generalised at least somewhat.