Systemic risk: a moral tale of ten insurance companies
Once upon a time...
Imagine there were ten insurance sectors, each sector being a different large risk (or possibly the same risks, in different geographical areas). All of these risks are taken to be independent.
To simplify, we assume that all the risks follow the same yearly payout distribution. The details of the distribution don't matter much for the argument, but in this toy model the payouts follow the discrete binomial distribution with n=10 and p=0.5, with millions of pounds as the unit:
This means that the probability that each sector pays out £n million each year is (0.5)^10 × 10!/(n!(10-n)!).
All these companies are bound by Solvency II-like requirements, which mandate that they have to be 99.5% sure of paying out on all their policies in a given year—or, put another way, that they fail to pay out only once in every 200 years on average. To do so, in each sector, the insurance companies have to have capital totalling £9 million available every year (the red dashed line).
Assume that each sector aims for £1 million in total expected yearly profit. Then, since the expected payout is £5 million, each sector will charge £6 million a year in premiums. They must thus maintain a capital reserve of £3 million each year (they get £6 million in premiums, and must hold a total of £9 million). They thus invest £3 million to get an expected profit of £1 million—a tidy profit!
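For anyone who wants to check these numbers, here is a minimal sketch in Python (scipy is my choice of tool here, not something the post specifies; the variable names are just illustrative):

```python
# Sketch of the single-sector numbers, in pounds millions.
from scipy.stats import binom

sector = binom(n=10, p=0.5)               # one sector's yearly payout

reserve = int(sector.ppf(0.995))          # smallest capital covering 99.5% of years -> 9
expected_payout = sector.mean()           # -> 5.0
premium = expected_payout + 1.0           # assumed £1m expected profit per sector -> 6.0
capital_held = reserve - premium          # extra capital needed beyond premiums -> 3.0

print(reserve, premium, capital_held)     # 9 6.0 3.0
```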
On average, each insurance sector goes bust once every two hundred years and has to be bailed out somehow; roughly once every hundred billion trillion years (200^10 years), all ten insurance sectors go bust at the same time. We assume this is too big to be bailed out, and there's a grand collapse of the whole insurance industry with knock-on effects throughout the economy.
But now assume that insurance companies are allowed to invest in each other’s sectors. The most efficient way of doing so is to buy equally in each of the ten sectors. The payouts across the market as a whole are now described by the discrete binomial distribution with n=100 and p=0.5:
This is a much narrower distribution (relative to its mean). In order to have enough capital to pay out 99.5% of the time, the whole industry need only keep £63 million in capital (the red dashed line). Note that this is far less than the combined capital for each sector when they were separate, which would be ten times £9 million, or £90 million (the pink dashed line). There is thus a profit-taking opportunity here (it comes from the fact that the standard deviation of X+Y is less than the standard deviation of X plus the standard deviation of Y).
If the industry still expects to make an expected profit of £1 million per sector, this comes to £10 million total. The expected payout is £50 million, so they will charge £60 million in premiums. To meet their Solvency II obligations, they still need to hold an extra £3 million in capital (since £63 million - £60 million = £3 million). However, this is now across the whole insurance industry, not just per sector.
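Again, a minimal sketch of the pooled numbers under the same assumptions (scipy again being my choice):

```python
# Sketch of the pooled-industry numbers, in pounds millions.
from scipy.stats import binom

industry = binom(n=100, p=0.5)                 # the whole market's yearly payout

pooled_reserve = int(industry.ppf(0.995))      # -> 63, the red dashed line
separate_reserve = 10 * 9                      # -> 90, the pink dashed line

premiums = industry.mean() + 10 * 1.0          # £50m expected payout + £10m profit -> 60.0
capital_held = pooled_reserve - premiums       # -> 3.0, now for the whole industry

print(pooled_reserve, separate_reserve, capital_held)   # 63 90 3.0
```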
Thus they expect profits of £10 million based on holding capital of £3 million—astronomical profits! Of course, that assumes that the insurance companies capture all the surplus from cross investing; in reality there would be competition, and a buyer surplus as well. But the general point is that there is a vast profit opportunity available from cross-investing, and thus if these investments are possible, they will be made. This conclusion is not dependent on the specific assumptions of the model, but captures the general result that insuring independent risks reduces total risk.
But note what has happened now: once every 200 years, an insurance company that has spread its investments across the ten sectors will be unable to pay out what it owes. However, every company will be following this strategy! So when one goes bust, they all go bust. Thus the complete collapse of the insurance industry is no longer a one-in-a-hundred-billion-trillion-years event, but a one-in-two-hundred-years event. The risk for each company has stayed the same (and their profits have gone up), but the systemic risk across the whole insurance industry has gone up tremendously.
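To put numbers on that, here is a sketch of the failure probabilities in the two regimes (the exact binomial tails come out a little better than the 1-in-200 regulatory bound quoted above, but the contrast between the regimes is the point):

```python
# Sketch of failure probabilities before and after cross-investment.
from scipy.stats import binom

# Before: each sector fails when its payout exceeds its £9m of capital.
p_sector_fail = binom.sf(9, 10, 0.5)       # P(payout > 9) = 1/1024
p_all_ten_fail = p_sector_fail ** 10       # ten independent sectors all failing at once

# After: every company holds the same pooled book, so the whole industry
# fails together whenever the pooled payout exceeds £63m.
p_industry_fail = binom.sf(63, 100, 0.5)   # a bit under 0.5%

print(p_sector_fail, p_all_ten_fail, p_industry_fail)
```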
...and they failed to live happily ever after for very much longer.
If I understand it correctly, this is the paradox:
How would you define optimal insurance? You cannot have 100% certainty, so let’s say that optimal insurance means “this thing cannot fail, unless literally the whole society falls apart”.
Sounds good, doesn’t it? Until you realize that this definition is equivalent to “if this fails, then literally the whole society falls apart”. Which sounds scary.
The question is how okay it is to put all your eggs in one basket, if doing so increases the expected survival of every individual egg. In addition to the straightforward “shut up and multiply”, please consider all the moral hazard this would bring. People are not good at imagining small probabilities, so if before they were okay with e.g. a 1% probability of losing one important thing, now they will become okay with a 1% probability of losing everything.
To what extent is the crosslinking necessarily a pure trade of tail risk for profit now, and to what extent an actual improvement? Is there an increased reserve requirement which still results in an increased profit but without increasing the tail risk?
Those questions depend on the specific details of the distributions, I think. And also on assumptions about how bad generalised insurance industry collapse is versus localised insurance industry collapse.
Interesting question. It is clear that the probability mass in excess of the reserves is equal in both distributions, yielding identical long-run numbers of industry defaults per year; however, the average magnitude of the unrecoverable losses is greater in the no-diversification model.
If you assume a linear cost function for the expected losses, and take the mean of the distribution past a variable reserve level, you will find a reserve level for a unified insurance agent which has the same expected loss-cost, a lower number of absolute industry-loss events, and a lower reserve requirement than the diversified case.
My Wolfram-fu fails me, but you would want to multiply the binomial PDF (or gaussian approximation) by x, and find the integral from y to 100 (or infinity) that is equal to the diverse expected loss, 1*10/200. For binomial distributions, y will be <90, so short answer, ‘yes’.
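One way to do the calculation described above, as a sketch in Python rather than Wolfram (assuming the pooled Binomial(100, 0.5) payouts and the 1×10/200 target mentioned above):

```python
# Find the smallest reserve y whose expected excess loss matches the target.
import numpy as np
from scipy.stats import binom

k = np.arange(101)
pmf = binom.pmf(k, 100, 0.5)
target = 1 * 10 / 200                      # expected unrecoverable loss per year, £m

def expected_shortfall(y):
    """Mean payout in excess of a reserve of y, in £ millions."""
    return np.sum(np.maximum(k - y, 0) * pmf)

y = next(r for r in range(101) if expected_shortfall(r) <= target)
print(y, expected_shortfall(y))            # y comes out well below 90, as claimed
```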
This looks like the Tragedy of the commons applied to risk, i.e. risk diversification is the common good. If all take a part of it (by pooling risks), the aggregate risk goes up.
Looks like you discovered diversification :-)
In your scenario each insurance company gained the benefits of diversification, but the industry overall lost it (as all companies are exactly the same now). But I don’t understand what the adjective “moral” in the title refers to.
I knew the result; it just felt good to have a clear model :-)
It becomes even clearer if, post-diversification, instead of 10 absolutely identical insurance companies you just have one. In this case that one company internalizes the benefits of diversification, while it’s obvious that the industry, now composed of a single company, has no diversification at all.
I think it’s moral in the same way as the tragedy of the commons is moral.
That may well be so, because I don’t understand what the adjective “moral” has to do with the tragedy of the commons, either.
It has to do with reasoning about good and bad outcomes, incentives, choices of action … in what way is that not moral reasoning?
If you stick your hand into the fire you’ll get burned. If you don’t, you won’t. See: “reasoning about good and bad outcomes, incentives, choices of action”. Is that moral reasoning?
Quite a lot of both traditional and philosophical moral views attribute negative value to self-destructive behavior, actually.
I don’t see anything self-destructive about sticking your hand into a fire. I’ve done it and I’m still around :-P
On a bit more serious note, you’re confusing moral reasoning itself with the subject of moral reasoning.
http://en.wikipedia.org/wiki/Antifragile
http://www.amazon.com/Antifragile-Things-That-Disorder-Incerto/dp/0812979680
Is there a mathematical solution to this problem? Could the regulation requiring a 99.5% chance each year for each individual company to meet its obligations take into account the dependency between different companies, and set the required level to target the chance of industry-wide failure? Taleb has certainly made the point that the assumption of independent failures fails when everyone is adopting the same strategy. They needn’t even be investing in each other for this to happen, just acting as fewer agents than they are.
If you have excellent models, then you could have the regulators adjust requirements as dependencies change. But we don’t have excellent models, far from it...
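Within the toy model, at least, you can sketch what such a regulation would demand (scipy again being my choice; the model's own numbers are used): the pooled industry would need a reserve large enough to match the old probability of all ten sectors failing together.

```python
# What reserve would the pooled industry need to keep the old systemic risk?
from scipy.stats import binom

# Systemic failure before cross-investment: all ten independently reserved
# sectors (each holding £9m) going bust in the same year.
p_systemic_before = binom.sf(9, 10, 0.5) ** 10

# Smallest pooled reserve with at most that probability of being exceeded.
y = 0
while binom.sf(y, 100, 0.5) > p_systemic_before:
    y += 1

print(y)   # comes out near full reserving, far above the £63m from the per-company rule
```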
I think this problem is best understood as two simpler problems:
Imagine UberInsure bought all 10 insurance companies. Then it would be easier for them to comply with the risk requirements (it takes a more extreme event for them to fail to pay off all their obligations), but the sector is no safer than it was (or, equivalently, UberInsure can take higher profits while maintaining the same nominal risk ratio, actually increasing the risk to the sector). So capital requirements need to be weighted by company size, not just a fixed percentage.
Imagine all 10 companies invest in each other. Then although they’re nominally separate companies, they’re actually acting as UberInsure; in a crisis the correlations will go to 1 and the whole sector will collapse.
Honestly, we’re already starting to address these problems in an ad-hoc, crude way: banks that are “systemically important” are a) more tightly regulated, suggesting we’re starting to recognize at least some distinction between large and small banks, and b) not allowed to own each other’s paper (or rather, not allowed to count it in their assets if they do).