Unfortunately, there are rarely situations even in finance where a) I can buy a millionth of a million gambles and b) other people also want to do something similar (and there are sellers of these lotteries still), especially after we move into the territory of negligible or zero expected gain.
Again: unless each risk-averse person is matched with a risk-loving donor, the arbitrage between a linear utility function in money and a risk-averse function is simply insurance. While competition between insurers can help lessen this gap, insurers do not exist unless the arbitrage gap exists.
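To make that arbitrage gap concrete, here is a minimal sketch (assuming, hypothetically, a square-root utility function for the risk-averse agent and a risk-neutral insurer): the agent's certainty equivalent for a gamble sits below its expected value, and any premium in between leaves both sides better off.

```python
import math

# Hypothetical illustration: a risk-averse agent (sqrt utility) faces a
# 50/50 chance of losing 3600 from a wealth of 10000.
wealth, loss, p = 10_000.0, 3_600.0, 0.5

expected_loss = p * loss  # a risk-neutral insurer's break-even premium

# The agent's certainty equivalent: the sure wealth that gives the same
# utility as bearing the gamble.
eu = p * math.sqrt(wealth - loss) + (1 - p) * math.sqrt(wealth)
certainty_equivalent = eu ** 2
max_premium = wealth - certainty_equivalent  # the most the agent will pay

# The "arbitrage gap": any premium between expected_loss and max_premium
# profits the insurer in expectation while raising the agent's utility.
print(expected_loss, round(max_premium, 2))
```

With these numbers the break-even premium is 1800 while the agent would pay up to 1900; the 100 in between is exactly the gap that insurers exist to capture.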
I’m ultimately not criticizing your math or concepts; I’m suggesting they have an important point of disconnection from their use in the world, and that we should be aware of that and not try to force our utility functions to be linear in money.
I’m not actually saying that it is vital to make our utility functions linear in money—I’m suggesting it has the same level of importance as making our preferences follow independence.
I.e. we should start moving our preferences in the direction of independence/linearity in money, maybe a little bit more than we do now, and should be aware that ultimately this is the “correct” way of doing things once transaction costs vanish.
But in practice we shouldn’t worry too much about this (unless we work in finance), as the complete absence of a human-describing utility function is a much more fundamental issue!
Fair enough—though I still think sentient beings will typically evolve or self-modify to be risk-averse, I suppose there is a difference between that and saying there will always be insurance opportunities in the absence of transaction costs (though there may always be arbitrage/insurance between people of different risk tolerances).
I’d be interested to hear your thoughts on whether or to what degree we may be able to discern the kernel of a human utility function.
> (though there may always be arbitrage/insurance between people of different risk tolerances).
The efficient market price for increasing and decreasing risk is zero.
Easy example of increasing risk: I create two futures contracts. A will pay out £50 if a coin comes up heads and cost you £50 if it comes up tails; the second contract, B, has the outcomes reversed. A and B together are exactly the same as nothing at all; if sold separately, I’ve just created risk from nothing.
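A quick sketch of those paired contracts: held together they are riskless, but each alone has zero mean and nonzero variance.

```python
# Sketch of the paired contracts: A pays +50 on heads and -50 on tails;
# B is the mirror image.
def contract_a(heads: bool) -> float:
    return 50.0 if heads else -50.0

def contract_b(heads: bool) -> float:
    return -contract_a(heads)

# Held together, the portfolio is riskless: the payoff is zero either way.
for heads in (True, False):
    assert contract_a(heads) + contract_b(heads) == 0.0

# But each contract alone has zero mean and nonzero variance: risk
# created from nothing. (Mean is 0, so variance is just E[X^2].)
payoffs = [contract_a(h) for h in (True, False)]
mean = sum(payoffs) / len(payoffs)
variance = sum(x * x for x in payoffs) / len(payoffs)
print(mean, variance)  # 0.0 2500.0
```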
In practice there will be insurance opportunities; but the profits may be tiny.
> I’d be interested to hear your thoughts on whether or to what degree we may be able to discern the kernel of a human utility function.
I think, not at all. Our preferences are too intransitive, dependent, and inconsistent. The real question is whether we can construct a utility function that we’d find acceptable; that is much more likely.
> The efficient market price for increasing and decreasing risk is zero.
If you can find people with complementary attitudes toward risk. Your example does indeed create risk—but in a risk-averse world, nobody would want to buy those contracts. Insurance arises from large entities with high capital and thus high relative risk-neutrality assuming the risk of smaller, more risk-averse entities for a price. If this market can be made efficient, the profits thus gained may be small, but insofar as all private insurance-providing organizations should be risk-averse to survive, the profits cannot be driven to zero.
It may not even be the case that they are small—depending on the structure of the market, a sufficiently large and risk-neutral organization may be able to become something of a natural monopoly, with its size reinforcing its risk-neutrality, and any competitors having difficulty entering without being large at the outset.
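The size-begets-risk-neutrality point can be illustrated with a small sketch (again assuming, hypothetically, square-root utility): the risk premium an agent demands for the same fixed-stake gamble shrinks as its wealth grows, so a larger insurer can undercut a smaller one.

```python
import math

# Under sqrt utility, the premium an agent demands to bear the same
# 50/50 gamble of +/-1000 shrinks as its wealth grows.
def risk_premium(wealth: float, stake: float = 1_000.0) -> float:
    eu = 0.5 * math.sqrt(wealth + stake) + 0.5 * math.sqrt(wealth - stake)
    certainty_equivalent = eu ** 2
    return wealth - certainty_equivalent

for w in (2_000.0, 20_000.0, 2_000_000.0):
    print(w, round(risk_premium(w), 2))
```

The premium stays positive at every wealth level (the comment above about insurers needing to survive), but it falls steeply with size, which is the mechanism behind the natural-monopoly worry.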
Re: the human utility function, I think I agree. I’ve been interested in Eric Weinstein’s work on introducing gauge theory into preferences to make them usefully invariant, but I think you’re right that they are too fatally flawed to ultimately discern naturally.
Mainly agree—but don’t forget aggregation. You can get rid of risk even if everyone is risk-averse, just by replacing “whole ownership of few risky contracts” with “partial ownership of many risky contracts”.
In the example in this post, if I own LH and you own LT, and we are both risk averse, we gain from trading half our contracts to each other.
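That trade can be sketched directly (assuming, hypothetically, that LH pays 100 on heads and LT pays 100 on tails, a fair coin, and sqrt utility for both agents): swapping half our contracts converts two risky positions into two certain ones, and both expected utilities rise.

```python
import math

# Expected sqrt-utility of a position paying payoff_heads on heads and
# payoff_tails on tails, with a fair coin.
def expected_utility(payoff_heads: float, payoff_tails: float) -> float:
    return 0.5 * math.sqrt(payoff_heads) + 0.5 * math.sqrt(payoff_tails)

# Before: I hold all of LH (100 on heads, 0 on tails); you hold all of LT.
# By symmetry we each have the same expected utility.
before = expected_utility(100.0, 0.0)   # 0.5 * 10 = 5.0

# After trading half our contracts, each of us holds half of LH and half
# of LT: a certain payoff of 50 either way.
after = expected_utility(50.0, 50.0)    # sqrt(50), about 7.07

print(before, after)  # both parties' expected utility rises
```

No risk-loving counterparty is needed: aggregation alone removes the risk, since the two contracts' payoffs are perfectly anticorrelated.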