Consequences of arbitrage: expected cash
I prefer the movie Twelve Monkeys to Akira. I prefer Akira to David Attenborough’s Life in the Undergrowth. And I prefer David Attenborough’s Life in the Undergrowth to Twelve Monkeys.
I have intransitive preferences. But I don’t suffer from this intransitivity. Up until the moment I’m confronted by an avatar of the money pump, juggling the three DVD boxes in front of me with a greedy gleam in his eye. He’ll arbitrage me to death unless I snap out of my intransitive preferences and banish him by putting my options in order.
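To see how fast this bleeds money, here's a minimal sketch in Python (the starting cash and the ¥100 fee per swap are arbitrary assumptions):

```python
# A money pump against cyclic preferences: the agent always pays a
# small fee to swap its current DVD for the one it prefers, and so
# trades in circles until it is broke.

# preferred_over[x] is the DVD the agent prefers to x (a cycle).
preferred_over = {
    "Akira": "Twelve Monkeys",                    # Twelve Monkeys > Akira
    "Life in the Undergrowth": "Akira",           # Akira > Life in the Undergrowth
    "Twelve Monkeys": "Life in the Undergrowth",  # Life in the Undergrowth > Twelve Monkeys
}

FEE = 100        # assumed fee per swap, in yen
wealth = 10_000  # assumed starting cash
holding = "Akira"

swaps = 0
while wealth >= FEE:
    # The pumper offers the DVD the agent prefers to its current one;
    # acting on its preferences, the agent always pays up.
    holding = preferred_over[holding]
    wealth -= FEE
    swaps += 1

print(f"After {swaps} swaps the agent is broke, still holding {holding!r}.")
```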
Arbitrage, in the broadest sense, means picking up free money—money that is free because of other people’s preferences. Money pumps are a form of arbitrage, exploiting the lack of consistency, transitivity or independence in people’s preferences. In most cases, arbitrage ultimately destroys itself: people either wise up to the exploitation and get rid of their vulnerabilities, or lose all their money, leaving only players who are not vulnerable to arbitrage. The crash and burn of the Long-Term Capital Management hedge fund was due in part to the diminishing returns of their arbitrage strategies.
Most humans do not react to the possibility of being arbitraged by changing their whole preference systems. Instead they cling to their old preferences as much as possible, while keeping a keen eye out to avoid being taken advantage of. They keep their inconsistent, intransitive, dependent systems but end up behaving consistently, transitively and independently in their most common transactions.
The weaknesses of this approach are manifest. Having one system of preferences but acting as if we had another is a great strain on our poor overloaded brains. To avoid the arbitrage, we need to scan present and future deals with great keenness and insight, always on the lookout for traps. Transaction costs currently shield us from most of the negative consequences of imperfect decision theories; as those costs continue to drop, opportunities to be arbitraged will continue to rise, so we will have to be ever more vigilant. Finally, how we exit the trap of arbitrage depends on how we entered it: if my juggling avatar had started me on Life in the Undergrowth, I’d have ended up with Twelve Monkeys, and refused the next trade. If he’d started me on Twelve Monkeys, I’d have ended up with Akira. These may not have been the options I’d have settled on if I’d taken the time to sort out my preferences ahead of time.
For these reasons, it is much wiser to change our decision theory ahead of time to something that doesn’t leave us vulnerable to arbitrage, rather than clinging nominally to our old preferences.
Inconsistency or intransitivity leaves us vulnerable to a strong money pump, so both should be avoided. Violating independence leaves us vulnerable to a weak money pump, which also means giving up free money, so this should be avoided too. Along with completeness (meaning you can actually decide between options) and the technical assumption of continuity, these make up the von Neumann-Morgenstern axioms of expected utility. Thus if we want to avoid being arbitraged, we should cleave to expected utility.
But the consequences of arbitrage do not stop there.
Quick, which would you prefer: ¥10 000 with certainty, or a 50% chance of getting ¥20 000? Well, it depends on how your utility scales with cash. If it scales concavely, then you are risk averse, while if it scales convexly, then… Stop. Minus the transaction costs, those two options are worth exactly the same thing. If they are freely tradable, then you can exchange them one for one on the world market. Hence if you price the 50% contract at any value other than ¥10 000, you can be arbitraged if you act on your preferences (neglecting transaction costs). People selling contracts to you, or buying them from you, will make instant free money on the trade. Money that would be yours instead if your preferences were otherwise.
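A sketch of why any other quote is exploitable (the ¥10 000 market value is established in the addendum below; everything else is illustrative):

```python
# If a 50%-chance-of-¥20,000 contract trades freely at ¥10,000, then
# quoting any other price hands a counterparty risk-free profit: they
# simply offset your trade on the open market.

MARKET_VALUE = 10_000

def counterparty_profit(your_quote: float) -> float:
    """Risk-free profit per contract for whoever trades against your quote."""
    if your_quote > MARKET_VALUE:
        return your_quote - MARKET_VALUE  # they buy at market, sell to you
    if your_quote < MARKET_VALUE:
        return MARKET_VALUE - your_quote  # they buy from you, sell at market
    return 0.0  # only the market price leaves no free money on the table

for quote in (9_000, 10_000, 11_000):
    print(f"your quote ¥{quote:,} -> counterparty pockets ¥{counterparty_profit(quote):,.0f}")
```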
Of course, you could keep your non-linear utility, and just behave as if it were linear, because of the market price, while being risk-averse in secret… But just as before, this is cumbersome, complicated and unnecessary. Exactly as arbitrage makes you cleave to independence, it will make your utility linear in money—at least for small, freely tradable amounts.
In conclusion:
Avoiding arbitrage forces your decision theory to follow the axioms of expected utility. It further forces your utility to be linear for any small quantity of money (or any other fungible asset). Thus you will follow expected cash.
Addendum: If contracts such as L = {¥20 000 if a certain coin comes up heads/tails} were freely tradable, they would cost ¥10 000.
Proof: Let LH be the contract that gives out ¥20 000 if that coin comes up heads, and LT the contract that pays if that same coin comes up tails. LH and LT together are exactly the same as a guaranteed ¥20 000. However, individually, LH and LT are the same contract (a 50% chance of ¥20 000), so by the Law of One Price they must have the same price (you can get the same result by symmetry). Two contracts with the same price, totalling ¥20 000 together: they must individually be worth ¥10 000.
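In symbols, writing $P(\cdot)$ for price:

$$P(L_H) + P(L_T) = P(\text{guaranteed ¥}20\,000) = \text{¥}20\,000, \qquad P(L_H) = P(L_T),$$

so $P(L_H) = P(L_T) = \text{¥}10\,000$.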
You can’t even trade these evenly on the world market, and if they’re not tradable options then most people would be silly to take the latter. Even in finance, higher variance is at least supposed to imply lower prices / higher yield, though Eric Falkenstein says this is a myth.
And when they do become tradable, must people then suddenly change their preferences? (The arbitrage argument for independence, I feel, is as weak and as strong as the arbitrage argument for a utility linear in money.)
Now if I wanted to do that exact bet, a bank would put it together for me—with a hefty premium, because it’s an unusual one (maybe UK bookies would give me a better bet). This exact option is only rare because people don’t want to admit that playing the stock market is similar to playing the lottery. If I transformed the bet into something similar, except phrased in terms of hedged commodity futures, then you would easily be able to buy and sell it, and its price would be around ¥10 000.
For it is easy to show that if such 50% lotteries were generally traded, they would have to cost ¥10 000. First, take the lottery as above, along with a similar lottery that pays out ¥20 000 only if the first lottery pays nothing.
Together, these two lotteries are worth a certain ¥20 000. Yet the two lotteries are identical individually, so the Law of One Price implies they have the same price—which must then be ¥10 000.
Charging higher prices for low variance is one of the ways banks pump money from the suckers.
And casinos charge for high variance.
Indeed. Ultimately, we might prefer high or low variance, but on proper financial markets, the cost of increasing or decreasing variance is negligible.
Not so. Buying assets in order to hedge risks is one of the fundamental functions of modern financial markets, and the motivation for essentially all “interesting” financial instruments (futures, options, CDSs, etc.). Note that the prices of options are even talked about as “implied (Black-Scholes) volatility”, and in trader jargon buying or writing options can be referred to as “trading volatility”. Variance is what modern finance is all about.
You claim to be supporting expected utility, but you talk only about money. You don’t just “stop” if your utility function is convex—you buy the option for a 50% chance at ¥20 000, because that chance is worth more to you than ¥10 000. Conversely, if your utility function is concave, you buy the other option. There is no “hiding” involved: if I am risk-averse, I definitely take the ¥10 000 for sure, and indeed would likely be willing to take ¥9 999 for sure.
In a world in which people are risk-averse—for very good evolutionary reasons—you won’t be able to get somebody to trade those options in both directions. This is, after all, the entire principle of insurance, that you can trade risks because there is a difference between the risk aversion of an individual and an insurance company. In principle, yes, you could buy anti-insurance, where you pay money that you lose if your house burns down, but since you’re always trading with a risk-averse person, you cannot profit on average.
If you can find somebody risk-loving, they may be willing to give you the deal that allows your proof at the end to function. As it is, risk-averse dealers will not make deals that increase their risk with no improvement in expected monetary return (to compensate for the lower expected utility), and the market does not work that way.
If I am risk loving, I will buy the 50% of ¥20 000 for ¥10 000. I may buy it for ¥10 001. But if it is freely tradable, I will not buy it for ¥15 000, or even ¥10 010. Similarly, if I am risk averse and it is freely tradable, I will not sell it for ¥9 990.
These setups force your risk loving or aversion into very narrow bands. There is another way of seeing the prices of these things: instead of buying this lottery, I could buy one-millionth of each of a million such lotteries, all independent. This I must price at around ¥10 000 under any reasonable utility function (the central limit theorem gives a very narrow distribution around that value).
So again, if these objects have a price, they must be valued at around ¥10 000, no matter what your utility function is.
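A quick Monte Carlo check of that central-limit argument (numpy assumed; one million lotteries, each paying ¥20 000 on a fair coin):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1_000_000   # independent 50%-of-¥20,000 lotteries
TRIALS = 1_000  # simulated worlds

# Value of owning 1/N of each lottery: ¥20,000 times the fraction that pay out.
wins = rng.binomial(N, 0.5, size=TRIALS)
payoffs = 20_000 * wins / N

print(f"mean payoff: ¥{payoffs.mean():,.2f}")   # ~10,000
print(f"std deviation: ¥{payoffs.std():,.2f}")  # ~10, i.e. a 0.1% spread
```

With the payoff pinned that tightly, any utility function that is even roughly smooth must price the bundle at essentially ¥10 000.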
Unfortunately, there are rarely situations, even in finance, where a) I can buy a millionth of a million gambles and b) other people also want to do something similar (and there are still sellers of these lotteries), especially once we move into the territory of negligible or zero expected gain.
Again: unless each risk-averse person is matched with a risk-loving donor, the arbitrage between a linear utility function in money and a risk-averse function is simply insurance. While competition between insurers can help lessen this gap, insurers do not exist unless the arbitrage gap exists.
I’m ultimately not criticizing your math or concepts; I’m suggesting they have an important point of disconnection from their use in the world, and that we should be aware of that and not try to force our utility functions to be linear in money.
I’m not actually saying that it is vital to make our utility functions linear in money—I’m suggesting it has the same level of importance as making our preferences follow independence.
I.e. we should start moving our preferences in the direction of independence/linearity in money, maybe a little bit more than we do now, and should be aware that ultimately this is the “correct” way of doing things once transaction costs vanish.
But in practice we shouldn’t worry too much about this (unless we work in finance), as the complete absence of a human-describing utility function is a much more fundamental issue!
Fair enough—though I still ultimately think that sentient beings will typically evolve or self-modify to be risk-averse, I suppose there is a difference between that and saying that there will always be insurance opportunities in the absence of transaction costs (though there may always be arbitrage/insurance between people of different risk tolerances).
I’d be interested to hear your thoughts on whether or to what degree we may be able to discern the kernel of a human utility function.
The efficient market price for increasing and decreasing risk is zero.
Easy example of increasing risk: I create two futures contracts, A, which will pay out £50 if a coin comes up heads and cost you £50 if that coin comes up tails. The second contract is B, where the outcomes are reversed. A and B together are exactly the same as nothing at all; if sold separately, I’ve just created risk from nothing.
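In payoff terms, per coin state:

```python
# Payoffs of the two contracts in each state of the coin: holding both
# nets exactly zero in every state, so selling them separately is
# creating risk out of nothing.
payoff_A = {"heads": +50, "tails": -50}
payoff_B = {"heads": -50, "tails": +50}

for state in ("heads", "tails"):
    total = payoff_A[state] + payoff_B[state]
    print(f"{state}: A = {payoff_A[state]:+}, B = {payoff_B[state]:+}, A+B = {total}")
```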
In practice there will be insurance opportunities; but the profits may be tiny.
I think, not at all. Our preferences are too intransitive, dependent, and inconsistent. The real question is whether we can construct a utility function that we’d find acceptable; that is much more likely.
If you can find people with complementary attitudes toward risk. Your example does indeed create risk—but in a risk-averse world, nobody would want to buy those contracts. Insurance arises from large entities with high capital and thus high relative risk-neutrality assuming the risk of smaller, more risk-averse entities for a price. If this market can be made efficient, the profits thus gained may be small, but insofar as all private insurance-providing organizations should be risk-averse to survive, the profits cannot be driven to zero.
It may not even be the case that they are small—depending on the structure of the market, a sufficiently large and risk-neutral organization may be able to become something of a natural monopoly, with its size reinforcing its risk-neutrality, and any competitors having difficulty entering without being large at the outset.
Re: the human utility function, I think I agree. I’ve been interested in Eric Weinstein’s work on introducing gauge theory into preferences to make them usefully invariant, but I think you’re right that they are too fatally flawed to ultimately discern naturally.
Mainly agree—but don’t forget aggregation. You can get rid of risk even if everyone is risk-averse, just by replacing “whole ownership of few risky contracts” with “partial ownership of many risky contracts”.
In the example in this post, if I own LH and you own LT, and we are both risk averse, we gain from trading half our contracts to each other.
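A sketch of that gain from trade, assuming a square-root utility as a stand-in for risk aversion:

```python
from math import sqrt

u = sqrt  # assumed concave (risk-averse) utility

# Before: I hold all of LH (¥20,000 if heads, nothing if tails).
eu_before = 0.5 * u(20_000) + 0.5 * u(0)

# After swapping half my LH for half of your LT: I get ¥10,000 in
# either state, since exactly one of the two contracts always pays.
eu_after = u(10_000)

print(f"expected utility before: {eu_before:.2f}")  # ~70.71
print(f"expected utility after:  {eu_after:.2f}")   # 100.00
```

By symmetry the other party gains just as much, even though no risk-lover was involved.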
If you love the Mini Cooper, you’d buy it for £20 000, but never for £200 000, no matter how much you love it. Money contracts are similar, except the arbitrage gets even stronger, as the correct price of these contracts is canonical.
Start with save scummers. All that training is bound to have generalised at least somewhat.
This is wrong. Certainly if there were also a tradable contract for the same coin coming up tails, then both of them would have to trade for ¥10 000. But in isolation, there is nothing forcing the price of one of the contracts.
This is one of the first examples in Mark Joshi’s The Concepts and Practice of Mathematical Finance. Market prices in general work the same way—some risks can be avoided by buying other assets so that the random elements cancel out, while some can’t, and the latter is what controls the pricing of the asset.
Good point. Adjusted the phrasing in the post.
Is there a fundamental reason that some random elements can’t be cancelled out? Can’t you specifically hedge against the ultimate value of your first contract, thus nullifying any risk?
The phrasing, incidentally, is still a bit off. LH and LT are not indistinguishable contracts, since the contingencies in which they pay out are different. What you should apply the Law of One Price to is the portfolio consisting of two units of “always pay ¥10 000” versus the portfolio consisting of one unit of LH and one unit of LT. Those two portfolios behave the same in all possible worlds, and therefore must have the same price.
Whether a risk can be hedged against or not is kind of the ultimate question of all financial markets—almost all interesting instruments (futures, options, CDSs, etc.) are designed specifically to make hedging easier. Clearly some risk can’t be hedged—if Omega drops by and says “I’ll give you 10,000 iff my quantum coin comes up tails”, then that introduces some irreducible uncertainty into the system, and some speculator somewhere has to be compensated for taking it on. Of course, you can always buy insurance for the event that the coin does not come up tails, but then the person selling the insurance is taking on the risk and will want to be compensated according to their risk preferences.
On the other hand, surprisingly many risks can be hedged against. Figuring out how to hedge a risk that other people had not seen how to hedge is the basis for all clever arbitrage trades.
A particularly interesting example of this is option pricing. A put option is essentially a tool for reducing variance (by eliminating cases where you lose a lot of money because your stock decreases in value), so the price of the put option should be a direct indication of how much a risk-averse investor values the resulting decrease in variance. However, what Black and Scholes noticed was that, provided the underlying stock price changes smoothly enough (follows geometric Brownian motion), the same risk that the option allows you to eliminate can already be hedged away by shorting the right amount of stock. So the risk is hedgable, writers of the option should not be compensated for taking it on, and option prices are exactly the same as if everyone were risk-neutral.
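A one-period binomial toy version of that replication argument (all numbers are assumptions, and the risk-free rate is taken to be zero):

```python
# Replicate a put option with stock plus cash in a one-step binomial
# model; the replication cost pins down the option price with no
# reference to probabilities or risk preferences.
S0, S_up, S_down, K = 100.0, 110.0, 90.0, 100.0  # assumed prices and strike

put_up = max(K - S_up, 0.0)      # 0: put payoff if the stock rises
put_down = max(K - S_down, 0.0)  # 10: put payoff if the stock falls

# Choose delta shares and B cash so the portfolio matches the put
# in both states:  delta * S + B = put payoff.
delta = (put_up - put_down) / (S_up - S_down)  # -0.5: short half a share
B = put_up - delta * S_up                      # 55.0 in cash

for S_next, payoff in ((S_up, put_up), (S_down, put_down)):
    assert abs(delta * S_next + B - payoff) < 1e-9  # exact replication

price = delta * S0 + B
print(f"put price = {price:.2f}, whatever the up/down probabilities are")
```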
On the other hand, if the price of the underlying stock does not change smoothly—if it has random “crashes” where it suddenly jumps a lot—then the risk mitigated by the option is not hedgable, and we can no longer price the option without knowing the risk preferences of the investors. Real-life option prices do not exactly follow the Black-Scholes model (they have so-called “volatility smiles”), which indicates that in the real world, for whatever reason, the corresponding risks are actually not completely hedgable.
Interesting. I’m sure the extra risk can still be hedged or reduced (as long as each contract has an “anti-contract” that pays out exactly the reverse), but it seems this is not exactly how the market operates in practice.
Think about a farmer who will get a good harvest if the sun shines. So he can sell a contract saying “pay 10,000 if the sun shines this summer”. Someone who buys that contract and wants to hedge the risk needs to find someone who wants to sell an anti-contract: “pay 10,000 if it rains”. Maybe there is such a person on the market (mushroom pickers?), in which case the risk can be hedged. Or maybe everyone in the world is actually better off with sunshine (or at least, the total productivity of the economy will be higher), in which case the amount of sunshine is a systematic risk which cannot be hedged.
You’re right—weather (and other time-dependent events) cannot be risk-reduced in the moment.
But they can be risk-reduced over time, by aggregation. I would be willing to sell ten thousand contracts “pay 1 if it rains this year”, one for each of the next ten thousand years. I would do this if we assume the yearly rains are somewhat independent, and that I have a good estimate of their likelihood, allowing me to price the events reasonably. This, in practice, is stupid because of the ten-thousand-year delay. Alternatively, I could sell these contracts in 10 000 different locations on the planet—but then they would no longer be even approximately independent.
So there are three limitations to reducing risk through aggregation:
1) Reasonable time scale for aggregation.
2) Establishing a reasonable level of independence in the contracts.
3) Calculating the probabilities correctly.
Most of what people call “systematic risk” seems to fail one or more of these three requirements, and so can’t easily be risk-reduced through aggregation.
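A simulation of limitation 2 (numpy assumed; the 30% rain probability and contract counts are arbitrary): aggregation tames independent risks but does nothing against perfectly correlated ones.

```python
import numpy as np

rng = np.random.default_rng(1)
N, TRIALS, P_RAIN = 10_000, 2_000, 0.3

# Seller's net loss on N contracts "pay 1 if it rains", each sold at the
# fair price P_RAIN (limitation 3: the probability is assumed known).

# Independent years/locations: losses wash out (central limit theorem).
rains = rng.random((TRIALS, N)) < P_RAIN
loss_independent = rains.sum(axis=1) - N * P_RAIN

# One shared rain event deciding all N contracts: no washing out.
one_event = rng.random(TRIALS) < P_RAIN
loss_correlated = one_event * N - N * P_RAIN

print(f"independent: loss std = {loss_independent.std():7.1f}")  # ~46
print(f"correlated:  loss std = {loss_correlated.std():7.1f}")   # ~4,600
```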
Except that finding exploitable inconsistencies in other people’s preferences that haven’t yet been destroyed by some other arbitrageur actually requires a fair bit of work and/or risk.
My husband’s law professor described arbitrage as grabbing at nickels from in front of a bulldozer… The point being that you really need to know what you’re doing as an arbitrageur to make any money at all, and if you don’t, you stand to lose quite a bit.
From the point of view of the person being arbitraged, this makes no difference...
Sticking with expected utility works in theory if you have a finite number of options, can discriminate between them well enough to judge them on equal terms, and the cost (in time or whatever) of doing so is not greater than the marginal gain from the process. Here is an example I like: go to the supermarket and optimize your expected utility for breakfast cereal.
The money pump only works if your “utility function” is static, or more accurately, if your preferences update more slowly than the pumper can change the statistical trade imbalance; e.g., arbitrage doesn’t work if the person being outsourced to can also outsource.
I can take advantage of your vNM axioms if I have information about one of your preferences that you do not have (this need not be obtained illegally); as a result, sticking to them would get you money-pumped regardless.
This sounds like an even better reason to use expected utility! If you are ignorant about your preferences, then you should reduce the number of other unknowns, and hence simplify your decision theory to expected utility.
“The power’s out at my house and I need a game I can play.” “I guess you need something that doesn’t require electricity. I have just the thing: a pencil!”
Expected utility has one of the traits needed of a decision theory. As AndrewKemendo points out, it does not have all of them.
I don’t need to price (contract B, when I already have contract A) the same as (contract A, when I have nothing).
I haven’t done the math to see if this solves the problem or not – if the two willingnesses to pay have to sum to 20000, for any increasing utility function – but there must be a solution; expected utility shouldn’t ever produce circular preferences.
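The math does come out clean. If you are indifferent at each step, then u(w) = E[u(w − p_A + A)] at the first purchase, and E[u(w − p_A + A)] = u(w − p_A − p_B + 20 000) at the second (since A and B together pay ¥20 000 with certainty), so p_A + p_B = 20 000 for any increasing utility function. A numerical check, assuming a square-root utility and ¥50 000 of starting wealth:

```python
from math import sqrt

u = sqrt      # assumed risk-averse utility
W = 50_000.0  # assumed starting wealth

def root(f, lo=0.0, hi=20_000.0, tol=1e-9):
    """Bisection for f(p) = 0, with f decreasing in p."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# p_A: willingness to pay for A (¥20,000 if heads), starting from nothing.
p_A = root(lambda p: 0.5 * u(W - p + 20_000) + 0.5 * u(W - p) - u(W))

# p_B: willingness to pay for B (¥20,000 if tails), while already holding A.
# Holding A and B together is a guaranteed ¥20,000.
p_B = root(lambda p: u(W - p_A - p + 20_000)
           - (0.5 * u(W - p_A + 20_000) + 0.5 * u(W - p_A)))

print(f"p_A = {p_A:,.2f}, p_B = {p_B:,.2f}, sum = {p_A + p_B:,.2f}")
# -> p_A = 9,500.00, p_B = 10,500.00, sum = 20,000.00: no circularity.
```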