Fair enough—though I still ultimately think that sentient beings will typically evolve or self-modify to be risk-averse, I suppose there is a difference between that and saying that there will always be insurance opportunities in the absence of transaction costs (though there may always be arbitrage/insurance between people of different risk tolerances).
I’d be interested to hear your thoughts on whether or to what degree we may be able to discern the kernel of a human utility function.
The efficient market price for increasing and decreasing risk is zero.
Easy example of increasing risk: I create two futures contracts: A, which will pay out £50 if a coin comes up heads and cost you £50 if it comes up tails, and B, where the outcomes are reversed. A and B together are exactly the same as nothing at all; if sold separately, I’ve just created risk from nothing.
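A minimal Python sketch makes this concrete (the coin, payoffs, and contract names are taken from the example above): each contract alone is pure risk with zero expected value, while the pair cancels exactly.

```python
# Sketch of the two-contract example: contract A pays +£50 on heads and
# -£50 on tails; contract B reverses the payouts. Holding both is identical
# to holding nothing, but each contract alone is pure risk.
import statistics

outcomes = ["heads", "tails"]  # fair coin, each outcome equally likely

def payoff_A(flip):
    return 50 if flip == "heads" else -50

def payoff_B(flip):
    return -payoff_A(flip)  # B is A with the outcomes reversed

positions = {
    "A": payoff_A,
    "B": payoff_B,
    "A+B": lambda flip: payoff_A(flip) + payoff_B(flip),
}

for name, payoff in positions.items():
    payoffs = [payoff(flip) for flip in outcomes]
    print(name, "mean:", statistics.mean(payoffs),
          "stdev:", statistics.pstdev(payoffs))

# A    mean: 0  stdev: 50.0  (risk created from nothing)
# B    mean: 0  stdev: 50.0
# A+B  mean: 0  stdev: 0.0   (the two contracts cancel exactly)
```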
In practice there will be insurance opportunities, but the profits may be tiny.
I’d be interested to hear your thoughts on whether or to what degree we may be able to discern the kernel of a human utility function.
I think: not at all. Our preferences are too intransitive, context-dependent, and inconsistent. The real question is whether we can construct a utility function that we’d find acceptable; that is much more likely.
The efficient market price for increasing and decreasing risk is zero.
Only if you can find people with complementary attitudes toward risk. Your example does indeed create risk, but in a risk-averse world nobody would want to buy those contracts. Insurance arises when large entities with deep capital, and thus relative risk-neutrality, assume the risk of smaller, more risk-averse entities for a price. Even if this market can be made efficient, the profits so gained may be small; but insofar as every private insurance provider must itself remain risk-averse to survive, the profits cannot be driven all the way to zero.
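A toy expected-utility calculation illustrates why the gap cannot close completely (the numbers and the square-root utility are illustrative assumptions, not anything from the thread): a risk-averse agent will pay a premium above the expected loss, and that gap is the insurer’s margin.

```python
# Toy calculation: a risk-averse agent (sqrt utility, an illustrative choice)
# facing a possible loss will pay more than the expected loss for insurance;
# a risk-neutral insurer charging anything in that gap profits in expectation.
from math import sqrt

wealth, loss, p_loss = 100.0, 50.0, 0.10  # hypothetical numbers
expected_loss = p_loss * loss             # the "fair", zero-profit premium

# Expected utility without insurance, and the sure wealth it is equivalent to.
eu_uninsured = (1 - p_loss) * sqrt(wealth) + p_loss * sqrt(wealth - loss)
certainty_equivalent = eu_uninsured ** 2

max_premium = wealth - certainty_equivalent  # the most the agent will pay
print(f"expected loss:    {expected_loss:.2f}")                # 5.00
print(f"maximum premium:  {max_premium:.2f}")                  # ~5.77
print(f"insurer's margin: {max_premium - expected_loss:.2f}")  # ~0.77

# Any premium between 5.00 and ~5.77 leaves both parties better off in
# expected-utility terms, so the trade supports a strictly positive profit.
```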
It may not even be the case that those profits are small. Depending on the structure of the market, a sufficiently large and risk-neutral organization may become something of a natural monopoly, with its size reinforcing its risk-neutrality and competitors struggling to enter unless they are already large at the outset.
Re: the human utility function, I think I agree. I’ve been interested in Eric Weinstein’s work on introducing gauge theory into preferences to make them usefully invariant, but I think you’re right that our preferences are too fatally flawed for a utility function to be discerned from them directly.
Mainly agree—but don’t forget aggregation. You can get rid of risk even if everyone is risk-averse, just by replacing “whole ownership of few risky contracts” with “partial ownership of many risky contracts”.
In the example in this post, if I own LH and you own LT, and we are both risk-averse, we gain from trading half our contracts to each other.
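A short sketch under assumed payoffs (the post’s exact figures aren’t reproduced here, so suppose LH pays £100 on heads and £0 on tails, with LT the reverse): swapping half of each contract gives each party £50 in every state, which any concave utility strictly prefers.

```python
# Sketch of the aggregation point. Payoffs are assumed for illustration:
# LH pays £100 on heads and nothing on tails; LT is the reverse.
from math import sqrt

def expected_utility(payoff_heads, payoff_tails):
    # Concave (risk-averse) sqrt utility; fair coin, so each state has weight 0.5.
    return 0.5 * sqrt(payoff_heads) + 0.5 * sqrt(payoff_tails)

# Before trading: I hold all of LH, you hold all of LT.
eu_before = expected_utility(100, 0)   # = 5.0 for each of us

# After swapping halves: each holds 0.5*LH + 0.5*LT, i.e. £50 in every state.
eu_after = expected_utility(50, 50)    # = sqrt(50) ≈ 7.07

print(f"EU before: {eu_before:.2f}, EU after: {eu_after:.2f}")
# Expected money is unchanged (£50 each), but both parties' expected utility
# rises: partial ownership of many risky contracts diversifies the risk away.
```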