TL;DR: If you want to know whether getting insurance is worth it, use the Kelly Insurance Calculator. If you want to know why or how, read on.
Note to LW readers: this is almost the entire article, except some additional maths that I couldn’t figure out how to get right in the LW editor, and margin notes. If you’re very curious, read the original article!
Misunderstandings about insurance
People online sometimes ask if they should get some insurance, and then other people say incorrect things, like
This is a philosophical question; my spouse and I differ in views.
or
Technically no insurance is ever worth its price, because if it was then no insurance companies would be able to exist in a market economy.
or
Get insurance if you need it to sleep well at night.
or
Instead of getting insurance, you should save up the premium you would have paid and get compounding market return on it. The money you end up with is on average going to be more than whatever you’ll end up claiming on the insurance.
or
If you love your children, you should get disability insurance for them.
or
You should insure only what you cannot afford to lose.
These are the things I would say in response.
It is not a philosophical question, it is a mathematical one.
Technically, some insurance is worth its price, even when the insurance company makes a profit.
Whether or not to get insurance should have nothing to do with what makes one sleep – again, it is a mathematical decision with a correct answer.
Saving up the premium instead of getting insurance is making the mistake of conflating an ensemble average with a time average.
Love does not make insurance a mathematically appropriate decision. Running the numbers does.
The last quote (“things you cannot afford to lose”) is the closest to being true, but it doesn’t define exactly what it means to afford to lose something, so it ends up recommending a decision based on vibes anyway, which is wrong.
In order to be able to make the insurance decision wisely, we need to know what the purpose of insurance really is. Most people do not know this, even when they think they do.
The purpose of insurance
The purpose of insurance is not to help us pay for things that we literally do not have enough money to pay for. It does help in that situation, but the purpose of insurance is much broader than that. What insurance does is help us avoid large drawdowns on our accumulated wealth, so that our wealth can gather compound interest faster.
Think about that. Even though insurance is an expected loss, it helps us earn more money in the long run. This comes back to the Kelly criterion, which teaches us that the compounding effects on wealth can make it worth paying a little up front to avoid a potential large loss later.
This is the hidden purpose of insurance. It’s great at protecting us against losses which we literally cannot cover with our own money, but it also protects us against losses which set our wealth back far enough that we lose out on significant compounding effects.
To determine where the threshold for large enough losses is, we need to calculate.
Computing when insurance is worth it
The Kelly criterion is not just a general idea, but a specific mathematical relationship. We can use this to determine when insurance is worth it. We need to know some numbers:
What is our current wealth W?
How much is the premium P?
Then we need to estimate the probability distribution of the bad events that could occur. In other words, for each bad event i we can think of, we estimate
What is the probability p_i that this event happens? and
If it does happen, what would be the uninsured cost c_i?
We’re going to ignore the deductible for now because it makes the equation more complicated, but we’ll get back to it. We plug these numbers into the equation for the value V of the insurance to someone in our situation:
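The equation itself did not survive the LW editor; reconstructed from the definitions above (treating “nothing bad happens” as one more event with zero cost, so that the p_i sum to 1), it should read:

V = log(W − P) − Σ_i p_i · log(W − c_i)

The first term is the log-wealth when insured (we always end the period with W − P); the second is the expected log-wealth when uninsured.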
If this number is positive, then the insurance is worth it. If it is negative, we would do better to pay the costs out of our own pockets.
Motorcycle insurance
In a concrete example, let’s say that our household wealth is $25,000, and we’ve just gotten a motorcycle with some miles on it already. Insuring this motorcycle against all repairs would cost $900 per year. We might think of two bad events:
It ends up needing expensive maintenance due to its age. We expect this to happen once in the next three years, meaning there’s roughly a 33 % probability it happens in any given year. This costs maybe $2000.
We end up riding irresponsibly and wreck it, or it gets stolen, or somehow needs to be replaced completely. Maybe there’s a 1⁄40 risk of this any given year, and it would cost $8000.
Assuming no deductible, would this be worth it? Yes! Solving the equation – or entering the parameters into the Kelly insurance calculator – we see that we should be willing to pay a premium of up to $912 in this situation. If our wealth had been $32,000 instead, the insurance would no longer have been worth it – in that situation, we should not spend more than $899 on it, but the premium offered is $900.
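For readers who want to check the numbers, here is a minimal Python sketch of the calculation (assuming event probabilities of 33 % and 2.5 % as above; a different rounding of the maintenance probability shifts the result by a few dollars):

```python
from math import exp, log

def max_premium(wealth, events):
    """Highest premium worth paying under Kelly, assuming no deductible.

    events: list of (probability, cost) pairs for the bad outcomes;
    the "nothing happens" outcome gets the leftover probability.
    """
    p_none = 1 - sum(p for p, _ in events)
    # Expected log-wealth if we stay uninsured:
    uninsured = p_none * log(wealth) + sum(p * log(wealth - c) for p, c in events)
    # Insured, we always hold wealth - premium, so break even where
    # log(wealth - premium) equals the uninsured expectation:
    return wealth - exp(uninsured)

events = [(0.33, 2_000), (0.025, 8_000)]
print(round(max_premium(25_000, events)))  # 912 -> the $900 premium is worth it
print(round(max_premium(32_000, events)))  # 899 -> at $900, skip it
```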
The effect of the deductible
In the same example as above, now set a fixed deductible of $500 for both events, and watch the value of the insurance plummet! Under those terms, we should only accept the insurance if our wealth is less than $10,000.
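A sketch of the same comparison with the deductible included (my reading of the terms: when insured, a bad event still costs you the $500 deductible on top of the premium):

```python
from math import log

def insurance_value(wealth, premium, deductible, events):
    """Expected log-wealth gain from insuring; positive means worth it."""
    p_none = 1 - sum(p for p, _ in events)
    insured = p_none * log(wealth - premium) + sum(
        p * log(wealth - premium - deductible) for p, _ in events
    )
    uninsured = p_none * log(wealth) + sum(p * log(wealth - c) for p, c in events)
    return insured - uninsured

events = [(0.33, 2_000), (0.025, 8_000)]
print(insurance_value(9_500, 900, 500, events) > 0)   # True: still worth it
print(insurance_value(10_000, 900, 500, events) > 0)  # False: no longer worth it
```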
Helicopter hovering exercise
To test your knowledge, we’ll run with one more example.
Let’s say you get the opportunity to try to hover a helicopter close to the ground, for whatever reason. There’s a real pilot next to you who will take control when you screw up (because hovering a helicopter is hard!) However, there’s a small (2 %) chance you will screw up so badly the other pilot won’t be able to recover control and you crash the helicopter. You will be fine, but if that happens you will have to pay $10,000 to repair the helicopter.
You can get insurance before you go, which will cover $6,000 of helicopter damage (so even with insurance, you have to pay $4,000 in addition to the insurance premium if you crash), but cost you $150 up front. Do you take it?
You probably know by now: it depends on your wealth! There’s a specific number of dollars in the bank you need to have to skip the insurance. Whipping out the Kelly insurance calculator, we figure it out to be $34,700. Wealthier than that? Okay, skip the insurance. Have less than that? It’s wise to take the offer up.
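A quick numerical check, bisecting for the wealth at which the insurance stops being worth it. The break-even lands close to the calculator’s figure, though the exact dollar amount can shift slightly depending on conventions about how the premium and payout interact:

```python
from math import log

def value(wealth):
    # Insured: pay the $150 premium up front; with 2% probability also
    # the $4,000 of damage the policy does not cover.
    insured = 0.98 * log(wealth - 150) + 0.02 * log(wealth - 150 - 4_000)
    # Uninsured: 2% probability of the full $10,000 repair bill.
    uninsured = 0.98 * log(wealth) + 0.02 * log(wealth - 10_000)
    return insured - uninsured

# Bisect for the break-even wealth (value is positive below it).
lo, hi = 11_000, 200_000
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if value(mid) > 0 else (lo, mid)
print(round(lo))  # break-even wealth, in the vicinity of $34,700
```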
It’s not that hard
I am surprised more people are not talking about this. Everyone goes around making insurance decisions on vibes, even though these decisions can be quite consequential and involve a lot of money. There’s just a general assumption that insurance decisions are incalculable – but the industry has been doing these calculations for at least seventy years! Are people not a little curious how they do it?
More specifically: until now, there has been no insurance calculator that actually uses the Kelly criterion. All others use loose heuristics. Who thinks that leads to better decisions?
Appendix A: Anticipated and actual criticism
I think there are two major points of disagreement possible in the description above:
The Kelly criterion is bad, and
The probability distribution of bad events is unknown.
Both of these points are technically true, but not as meaningful as their proponents seem to think.
Yes, the Kelly criterion is too aggressive for most people, who do not value maximum growth over all else. Most people want to trade off some growth against security. The correct response here is not to throw the baby out with the bathwater and ignore Kelly entirely – the correct response is to use a fractional Kelly allocation. This can be done quite easily by entering a lower wealth in the Kelly insurance calculator. See the Kelly article for more discussion on this.
The probability distribution of anything is unknown, but this is not a problem. Good forecasters estimate accurate probabilities all the time, and nearly anyone can learn to do it.
But, perhaps most fatally, the people who oppose the method suggested in this article have not yet proposed a better alternative. They tend to base their insurance decisions on one of the incorrect superstitions that opened this article.
Appendix B: How insurance companies make money
The reason all this works is that the insurance company has way more money than we do. If we enter the motorcycle example with no deductible into the Kelly insurance calculator again, and increase our wealth by a factor of ten, we see the break-even point moves down to $863. This is the point where the insurance starts being worth offering for someone with 10× our wealth!
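A quick check, using the same assumed probabilities as the motorcycle example (33 % and 2.5 %); the break-even it produces lands within a dollar or two of the quoted $863:

```python
from math import exp, log

# Break-even premium for the motorcycle example at 10x the wealth.
events = [(0.33, 2_000), (0.025, 8_000)]
wealth = 250_000
p_none = 1 - sum(p for p, _ in events)
uninsured = p_none * log(wealth) + sum(p * log(wealth - c) for p, c in events)
print(round(wealth - exp(uninsured)))  # roughly 863-864
```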
In other words, when someone with 10× our wealth meets us, and we agree on motorcycle insurance for $900, we have made a $12 profit and the insurer has made a $37 profit.
It sounds crazy, but that’s the effect of the asymmetric nature of differential capital under compounding. This is the beauty of insurance: deals are struck at premiums that profit both parties of the deal.
Appendix C: The relativity of costs
The clever reader will also see that if we set the deductible to be event-dependent, and create a virtual event for when nothing bad happens (this event has a deductible and cost of zero), a lot of the terms are similar and can be combined. Indeed, the equation can then be given as
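The combined equation also did not make it into the LW version; reconstructed from the terms above (with per-event deductible d_i and a zero-cost, zero-deductible “nothing happens” event so that the p_i sum to 1), it should be roughly:

V = Σ_i p_i · log((W − P − d_i) / (W − c_i)) = Σ_i p_i · log((1 − (P + d_i)/W) / (1 − c_i/W))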
This, perhaps, makes it clear that it is not the absolute size of the wealth that matters, but its size in proportion to the premium, deductible, and cost of events.
This is pretty useful!
I note that it assigns infinite badness to going bankrupt (e.g., if you put the cost of any event as >= your wealth, it always recommends taking the insurance). But in life, going bankrupt is not infinitely bad, and there are definitely some insurances that you don’t want to pay for even if the loss would cause you to go bankrupt. It is not immediately obvious to me how to improve the app to take this into account, other than warning the user that they’re in that situation. Anyway, still useful, but figured I’d flag it.
I think the solution to this is to add something to your wealth to account for inalienable human capital, and count costs only by how much you will actually be forced to pay. This is a good idea in general; otherwise most people with student loans or a mortgage are “in the red”, and couldn’t use this at all.
This doesn’t play very well with fractional Kelly, though.
Human capital is worth nothing after you die, though.
This is good! But note that many things we call ‘insurance’ are not only about reducing the risk of excessive drawdowns by moving risk around:
There can be a collective bargaining component. For example, health insurance generally includes a network of providers who have agreed to lower rates. Even if your bankroll were as large as the insurance company’s, this could still make taking insurance worth it for access to their negotiated rates.
An insurance company is often better suited to learn about how to avoid risks than individuals. My homeowner’s insurance company requires various things to reduce their risk: maybe I don’t know whether to check for Federal Pacific breaker panels, but my insurance company does. Title insurance companies maintain databases. Specialty insurers develop expertise in rare risks.
Insurance can surface cases where people don’t agree on how high the risk is, and force them to explicitly account for it on balance sheets.
Insurance can be a scapegoat, allowing people to set limits on otherwise very high expenses. Society (though less LW, which I think is eroding a net-positive arrangement) generally agrees that if a parent buys health insurance for their child, then if the insurance company says no to some treatment, we should perhaps blame the insurance company for being uncaring but not blame the parent for not paying out of pocket. This lets the insurance company put downward pressure on costs without individuals needing to make this kind of painful decision.
Relatedly, agreeing in advance how to handle a wide range of scenarios is difficult, and you can offload this to insurance. Maybe two people would find it challenging to agree in the moment under which circumstances it’s worth spending money on a shared pet’s health, but can agree to split the payment for pet health insurance. You can use insurance requirements instead of questioning someone else’s judgement, or as a way to turn down a risky proposition.
Annoying anecdote: I interviewed for an entry-level actuarial position recently and, when asked about the purpose of insurance, I responded with essentially the above argument (along the lines of increasing everyone’s log expectation, with kelly betting as a motivation). The reply I got was “that’s overcomplicated; the purpose of insurance is to let people avoid risk”.
By the way, I agree strongly with this post and have been trying to make my insurance decisions based on this philosophy over the past year.
I’m not sure how far in your cheek your tongue was, but I claim this is obviously wrong and I can elaborate if you weren’t kidding.
I’m confused by the calculator. I enter wealth 10,000; premium 5,000; probability 3; cost 2,500; and deductible 0. I think that means: I should pay $5000 to get insurance. 97% of the time, it doesn’t pay out and I’m down $5000. 3% of the time, a bad thing happens, and instead of paying $2500 I instead pay $0, but I’m still down $2500. That’s clearly not right. (I should never put more than 3% of my net worth on a bet that pays out 3% of the time, according to Kelly.) Not sure if the calculator is wrong or I misunderstand these numbers.
Kelly is derived under a framework that assumes bets are offered one at a time. With insurance, some of my wealth is tied up for a period of time. That changes which bets I should accept. For small fractions of my net worth and small numbers of bets that’s probably not a big deal, but I think it’s at least worth acknowledging. (This is the only attempt I’m aware of to add simultaneous bets to the Kelly framework, and I haven’t read it closely enough to understand it. But there might be others.)
There’s a related practical problem that a significant fraction of my wealth is in pensions that I’m not allowed to access for 30+ years. That’s going to affect what bets I can take, and what bets I ought to take.
I hadn’t thought of it this way before, but it feels like a useful framing.
But I do note that there are theoretical reasons to expect flood insurance to be harder to get than fire insurance. If you get caught in a flood, your whole neighborhood probably does too, but if your house catches fire it’s likely just you and maybe a handful of others. I think you need to go outside the Kelly framework to explain this.
I have a hobby horse that I think people misunderstand the justifications for Kelly, and my sense is that you do too (though I haven’t read your more detailed article about it), but it’s not really relevant to this article.
I agree with you, and I think the introduction unfortunately does major damage to what is otherwise a very interesting and valuable article about the mathematics of insurance. I can’t recommend this article to anybody, because the introduction comes right out and says: “The things you always believed about rationalists were true. We believe that emotions have no value, and to be rational it is mandatory to assign zero utility to them. There is no amount of money you should pay for emotional comfort; to do so would be making an error.” This is obviously false.
The probability should be given as 0.03 -- that might reduce your confusion!
If I understand your point correctly, I disagree. Kelly instructs us to choose the course of action that maximises log-wealth in period t+1 assuming a particular joint distribution of outcomes. This course of action can by all means be a complicated portfolio of simultaneous bets.
Of course, the insurance calculator does not offer you the interface to enter a periodful of simultaneous bets! That takes a dedicated tool. The calculator can only tell you the ROI of insurance; it does not compare this ROI to alternative, more complex portfolios which may well outperform the insurance alone.
This is where reinsurance and other non-traditional instruments of risk trading enter the picture. Your insurance company can offer flood insurance because they insure their portfolio with reinsurers, or hedge with catastrophe bonds, etc.
The net effect of the current practices of the industry is that fire insurance becomes slightly more expensive to pay for flood insurance.
I don’t think I disagree strongly with much of what you say in that article, although I admit I haven’t read it that thoroughly. It seems like you’re making three points:
Kelly is not dependent on log utility—we agree.
Simultaneous, independent bets lower the risk and applying the Kelly criterion properly to that situation results in greater allocations than the common, naive application—we agree.
If one donates one’s winnings then one’s bets no longer compound and the expected profit is a better guide than expected log wealth—we agree.
Perhaps you should make this more clear in the calculator, to avoid people mistakenly making bad choices? (Or just change it to percent. Most people are more comfortable with percentages, and the % symbol will make it unambiguous.)
Aha! Yes, that explains a lot.
I’m now curious if there’s any meaning to the result I got. Like, “how much should I pay to insure against an event that happens with 300% probability” is a wrong question. But if we take the Kelly formula and plug in 300% for the probability we get some answer, and I’m wondering if that answer has any meaning.
But when simultaneous bets are possible, the way to maximize expected log wealth won’t generally be “bet the same amounts you would have done if the bets had come one at a time” (that’s not even well specified as written), so you won’t be using the Kelly formula.
(You can argue that this is still, somehow, Kelly. But then I’d ask “what do you mean when you say this is what Kelly instructs? Is this different from simply maximizing expected log wealth? If not, why are we talking about Kelly at all instead of talking about expected log wealth?”)
It’s not just that “the insurance calculator does not offer you the interface” to handle simultaneous bets. You claim that there’s a specific mathematical relationship we can use to determine if insurance is worth it; and then you write down a mathematical formula and say that insurance is worth it if the result is positive. But this is the wrong formula to use when bets are offered simultaneously, which in the case of insurance they are.
I don’t think so? Like, in real world insurance they’re obviously important. (As I understand it, another important factor in some jurisdictions is “governments subsidize flood insurance.”) But the point I was making, that I stand behind, is
Correlated risk is important in insurance, both in theory and practice
If you talk about insurance in a Kelly framework you won’t be able to handle correlated risk.
(This isn’t a point I was trying to make and I tentatively disagree with it, but probably not worth going into.)
Kelly allocations only require taking actions that maximise the expectation of the joint distribution of log-wealth. It doesn’t matter how many bets are used to construct that joint distribution, nor when during the period they were entered.
If you don’t know at the start of the period which bets you will enter during the period, you have to make a forecast, as with anything unknown about the future. But this is not a problem within the Kelly optimisation, which assumes the joint distribution of outcomes already exists.
This is also how correlated risk is worked into a Kelly-based decision.
Simultaneous (correlated or independent) bets are only a problem in so far as we fail to construct a joint distribution of outcomes for those simultaneous bets. Which, yeah, sure, dimensionality makes itself known, but there’s no fundamental problem there that isn’t solved the same way as in the unidimensional case.
Edit: In more laymanny terms, Kelly requires that, for each potential combination of simultaneous bets you are going to enter during the period, you estimate the probability distribution of wealth outcomes (and this probability distribution should account for any correlations) after the period has passed. Given that, Kelly tells you to choose the set of bets (and sizes in each) that maximise the expected log of wealth outcomes.
Kelly is a function of actions and their associated probability distributions of outcomes. The actions can be complex compound actions such as entering simultaneous bets—Kelly does not care, as long as it gets its outcome probability distribution for each action.
Ah, my “what do you mean” may have been unclear. I think you took it as, like, “what is the thing that Kelly instructs?” But what I meant is “what do you mean when you say that Kelly instructs this?” Like, what is this “Kelly” and why do we care what it says?
That said, I do agree this is a broadly reasonable thing to be doing. I just wouldn’t use the word “Kelly”, I’d talk about “maximizing expected log money”.
But it’s not what you’re doing in the post. In the post, you say “this is how to mathematically determine if you should buy insurance”. But the formula you give assumes bets come one at a time, even though that doesn’t describe insurance.
Ah, sure. Dear child has many names. Another common name for it is “the E log X strategy” but that tends to not be as recognisable to people.
Ah, I see your point. That is true. I’d argue this isolated E log X approach is still better than vibes, but I’ll think about ways to rephrase to not make such a strong claim.
The example of flood vs fire insurance is explainable with the Kelly framework. Offering flood insurance is more like making one large bet. Offering fire insurance is more like offering many small bets. Under Kelly the insurance company needs a larger edge to justify a larger bet.
Standard disclaimer about simplified models here.
I think we’re disagreeing about terminology here, not anything substantive, so I mostly feel like shrug. But that feels to me like you’re noticing the framework is deficient, stepping outside it, figuring out what’s going on, making some adjustment, and then stepping back in.
I don’t think you can explain why you made that adjustment from inside the framework. Like, how do you explain “multiple correlated bets are similar to one bigger bet” in a framework where
Bets are offered one at a time and resolved instantly
The bets you get offered don’t depend on previous history
?
This seems to assume that 100% of claims get approved. How can the equation be modified to account for the probability of claims being denied?
I would guess lower cost insurance policies tend to come from companies with lower claim approval rates, so it seems appropriate to price into the calculator. I believe there are also softer elements in insurance costs like this that should be considered, such as customer service quality, but that’s probably out of scope for this calculator.
Fundamentally we are taking the probability-weighted expectation of log-wealth under all possible outcomes from a single set of actions, and comparing this to all other sets of actions.
The way to work in uncompensated claims is to add another term for that outcome, with the probability that the claim is unpaid and the log of wealth corresponding to both paying that cost out of pocket and fighting the insurance company about it.
A refused claim is (legally) an event that was never covered by the insurance, and is therefore irrelevant if the question is “take policy A or not at all”.
After all, if that event occurred without insurance, it is still not covered.
However, this is important to consider when comparing different policies with different amounts of coverage. Eg “comprehensive” car insurance compared with “third party, fire, and theft”.
“Rates” of unpaid claims only make sense in a situation where the law allows the insurer to breach their contracts. In that situation, the value of insurance plummets, and possibly reaches zero.
Your formula is only valid if utility = log($).
With that assumption the equation compares your utility with and without insurance. Simple!
If you had some other utility function, like utility = $, then you should make insurance decisions differently.
I think the Kelly betting stuff is a big distraction, and that people with utility=$ shouldn’t bet like that. I think the result that Kelly betting maximizes long-term $ bakes in assumptions about utility functions and is easily misunderstood—someone with utility=$ probably goes bankrupt but might become insanely rich, and is happy not to Kelly bet. (I haven’t explained this point properly, but I recall reading about this, and it’s just wrong on its face that someone with utility=$ should follow your formula.)
Is this true?
I’m still a bit confused about this point of the Kelly criterion. I thought that this is actually the way to maximize expected returns if you value money linearly, and that the log term comes from compounding gains.
And that the log utility assumption is a separate justification for the Kelly criterion, one that doesn’t take into account expected compounding returns.
I’ve written about this here. Bottom line is, if you actually value money linearly (you don’t) you should not bet according to the Kelly criterion.
From the original post:
Click the link for a more in-depth explanation
This is a synonym for “if money compounds and you want more of it at lower risk”. So in a sense, yes, but it seems confusing to phrase it in terms of utility as if the choice was arbitrary and not determined by other constraints.
No it’s not. In the real world, money compounds and I want more of it at lower risk. Also, in the real world, “utility = log($)” is false: I do not have a utility function, and if I did it would not be purely a function of money.
I agree—sorry about the sloppy wording.
What I tried to say was that “if you act like someone who maximises compounding money, you also act like someone with utility that is log-money.”
I still disagree with that.
I either think this is wrong or I don’t understand.
What do you mean by ‘maximising compounding money?’ Do you mean maximising expected wealth at some specific point in the future? Or median wealth? Are you assuming no time discounting? Or do you mean maximising the expected value of some sort of area under the curve of wealth over time?
I like this! An improvement: a lookup chart for lots of base rates of common disasters, as an intuition pump?
Curated. Insurance is a routine part of life, whether it be the car and home insurance we necessarily buy or the Amazon-offered protection one reflexively declines, the insurance we know doctors must have, businesses must have, and so on.
So it’s pretty neat when someone comes along and (compellingly) says “hey guys, you (or at least most people) are wrong about when insurance makes sense to buy, the reasons you have are wrong, here’s the formula”.
While assumptions can be questioned, e.g. the infinite badness of going bankrupt, and other factors can be raised, this is just a neat technical treatment of a very practical, everyday question. I expect that I’ll be thinking in terms of this myself when making various insurance choices. Kudos!
Here’s a puzzle about this that took me a while.
When you know the terms of the bet (what probability of winning, and what payoff is offered), the Kelly criterion spits out a fraction of your bankroll to wager. That doesn’t support the result “a poor person should want to take one side, while a rich person should want to take the other”.
So what’s going on here?
Not a correct answer: “you don’t get to choose how much to wager. The payoffs on each side are fixed, you either pay in or you don’t.” True but doesn’t solve the problem. It might be that for one party, the stakes offered are higher than the optimal amount and for the other they’re lower. It might be that one party decides they don’t want to take the bet because of that. But the parties won’t decide to take opposite sides of it.
Let’s be concrete. Stick with one bad outcome. Insurance costs $600, and there’s a 1⁄3 chance of paying out $2000. Turning this into a bet is kind of subtle. At first I assumed that meant you’re staking $600 for a 1⁄3 chance of winning… $1400? $2000? But neither of those is right.
Case by case. If you take the insurance, then 2⁄3 of the time nothing happens and you just lose $600. 1⁄3 of the time the insurance kicks in, and you’re still out $600.
If you don’t take the insurance, then 2⁄3 of the time nothing happens and you have $0. 1⁄3 of the time something goes wrong and you’re out $2000.
So the baseline is taking insurance. The bet being offered is that you can not take it. You can put up stakes of $2000 for a 2⁄3 chance of winning $600 (plus your $2000 back).
Now from the insurance company’s perspective. The baseline and the bet are swapped: if they don’t offer you insurance, then nothing happens to their bankroll. The bet is when you do take insurance.
If they offer it and you accept, then 2⁄3 of the time they’re up $600 and 1⁄3 of the time they’re out $1400. So they wager $1400 for a 2⁄3 chance of winning $600 (plus their $1400 back).
So them offering insurance and you accepting it isn’t simply modeled as “two parties taking opposite sides of a bet”.
Oh, I think that also means that section is slightly wrong. You want to take insurance if
log(W_you − P) > p · log(W_you − c) + (1 − p) · log(W_you)

and the insurance company wants to offer it if

log(W_them) < p · log(W_them + P − c) + (1 − p) · log(W_them + P).

So define

V(W) = log(W − P) − (p · log(W − c) + (1 − p) · log(W))

as you did above. Appendix B suggests that you’d take insurance if V(W_you) > 0 and they’d offer it if V(W_them) < 0. But in fact they’d offer it if V(W_them + P) < 0.
Does the answer to “should I buy insurance” change if the interest rate that you earn on your wealth is zero or even negative?
This (and much of the rest of your article) seems needlessly disdainful of people’s emotions.
Wealth does not equal happiness!
If it did, then yes, 899 < 900 so don’t buy the insurance. But in the real world, I think you’re doing normal humans a big disservice by pretending that we are all robots.
Even Mr. Spock would take human emotions into consideration when giving advice to a human.
Wealth not equaling happiness works both ways. It’s the idea of losing wealth that’s driving sleep away. In this case, the goal of buying insurance is to minimize the risk of losing wealth. The real thing that’s stopping you from sleeping is not whether you have insurance or not, it’s how likely it is that something bad happens, which will cost more than you’re comfortable losing. Having insurance is just one of the ways to minimize that—the problem is stress stemming from uncertainty, not whether you’ve bought an insurance policy.
The list of misunderstandings is a bit tongue in cheek (at least that’s how I read it). So it’s not so much disdainful of people’s emotions, as much as it’s pointing out that whether you have insurance is not the right thing to worry about—it’s much more fruitful to try to work out the probabilities of various bad things and then calculate how much you should be willing to pay to lower that risk. It’s about viewing the world through the lens of probability and deciding these things on the basis of expected value. Rather than have sleepless nights, just shut up and multiply (this is a quote, not an attack).

Even if you’re very risk averse, you should be able to just plug that into the equation and come up with some maximum insurance cost above which it’s not worth buying it. Then you just buy it (or not) and sleep the sleep of the just. The point is to actually investigate it and put some numbers on it, rather than live in stress. This is why it’s a mathematical decision with a correct answer. Though the correct answer, of course, will be subjective and depend on your utility function. It’s still a mathematical decision.
Spock is an interesting example to use, in how he’s very much not rational. Here’s a lot more on that topic.
This seems like a very handy calculator to have bookmarked.
I think I did find a bug: at the low end it's making some insane recommendations. E.g. with wealth W and a 50% chance of losing W (a 50% chance of getting wiped out), the insurance recommendation is any premium up to W. Wealth $10k and a 50% risk of a $9,999 loss: it recommends insuring for a $9,900 premium.
That log(W-P) term is shooting off towards -infinity and presumably breaking something?

Edit: As papetoast points out, this is a faithful implementation of the Kelly criterion and is not a bug. Rather, Kelly assumes that taking a loss >= wealth is infinitely bad, which is not true in an environment where debts are dischargeable in bankruptcy (and total wealth may even remain positive throughout).
There are probably corrections that would improve the model by factoring in future earnings, the degree to which the loss must be replaced immediately (or at all), and the degree to which some losses are capped.
The math is correct if you're trying to optimize log(wealth): log(10000) = 4 and log(1) = 0 (base 10), so the mean is 2 = log(100). This model assumes going bankrupt is infinitely bad, which is not an accurate assumption, but it is not a bug.
Hmm, I guess I see why other calculators have at least some additional heuristics and aren’t straight Kelly. Going bankrupt is not infinitely bad in the US. If the insured has low wealth, there’s likely a loan attached to any large asset that really complicates the math. Making W just be “household wealth” also doesn’t model “I can replace the loss next paycheck”. I’m not sure what exactly the correct notion of wealth is here, but if wealth is small compared to future earnings, and replacing the loss can be deferred, these assumptions are incorrect.
And obviously, paying $10k premium to insure a 50% chance of a $10k loss is always a mistake for all wealth levels. You’re choosing to be bankrupt in 100% of possible worlds instead of 50%.
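As a quick check that the $9,900 figure in the comment above really does follow from straight Kelly (the base of the logarithm cancels out, so natural logs work fine):

```python
import math

W, c, p = 10_000.0, 9_999.0, 0.5   # the numbers from the bug report above

# Expected log-wealth if uninsured: 0.5*log(1) + 0.5*log(10000)
e_log = p * math.log(W - c) + (1 - p) * math.log(W)

# Maximum premium under log utility; exp(e_log) = sqrt(10000) = 100
P_max = W - math.exp(e_log)
print(round(P_max))  # 9900, exactly as the calculator reports
```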
I don’t quite understand. Going with the worked out example in the post you link to:
Okay, so we’re optimizing for geometric expectation now.
Meanwhile, the same post:
So we’re … not optimising for logarithmic utility. But isn’t optimizing the geometric expectation equivalent to optimizing expected log wealth? Last time I checked, E[ln(X)] = ln(G(X)), where G(X) is the geometric expectation of X (mind you, I only used ChatGPT to confirm my intuition, so it could be wrong).
Which is fine, actually. I do care more about my geometric expected wealth than I do about my expected wealth, but that would have been a much shorter post.
Makes sense. I suppose we assume that the insurance pays out the value of the asset, leaving our wealth unchanged. So, assuming we buy the insurance, there’s no randomness in our log wealth, which is guaranteed to be log(W-P). The difference between that and our expected log wealth if we don’t buy the insurance is V. That’s why log(W-P) is positive in the formula for V, and all the terms weighted by probabilities are negative.
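That description of V can be written out directly. A sketch with invented event probabilities and losses (nothing here is from the article except the shape of the formula):

```python
import math

W = 50_000.0    # wealth (made up)
P = 300.0       # premium (made up)
events = [(0.02, 20_000.0), (0.005, 45_000.0)]  # (probability p_i, loss c_i)

p_none = 1 - sum(p for p, _ in events)  # probability nothing bad happens

# V = log(W - P) minus the probability-weighted uninsured log-wealth terms:
V = (math.log(W - P)
     - sum(p * math.log(W - c) for p, c in events)
     - p_none * math.log(W))

print(V)  # buy the insurance iff V > 0
```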
Nice write-up; it sheds some light on something I think I have been doing intuitively without quite realizing it. Particularly the impact on growth of wealth.
I was thinking that a big challenge for a lot of people is estimating the distribution, which is likely why so many people give non-technical rationales. Assessing it is hard and requires a lot of information about a lot of things; the insurance companies can do that (as suggested by another comment), but it probably overwhelms most people who buy insurance.
With that thought, I was wondering if anyone has thought of shifting the equations a bit. Rather than working up some estimate of the probability space, why not rearrange the equation so that it churns out the implied probability distribution given W, P, d_i and c_i for the break-even case? I think most people would be able to digest that: event x_i has implied probability p_i, event x_j has implied probability p_j. Then the person can consider whether those probabilities actually make sense for them and their situation.
Clearly it could not be an exhaustive listing of events, but I would think a table of the three or four main events that carry the greatest losses would be a good starting point for most people.
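For a single event, the suggestion is straightforward to carry out: rearrange the break-even condition log(W - P) = p*log(W - c) + (1 - p)*log(W) for p. A sketch with invented numbers (I use c for the loss size; the comment's d_i/c_i notation may differ):

```python
import math

def implied_probability(W: float, P: float, c: float) -> float:
    """Probability of a loss of size c at which a premium P exactly
    breaks even at wealth W, under the Kelly/log-wealth criterion."""
    return (math.log(W) - math.log(W - P)) / (math.log(W) - math.log(W - c))

# Made-up example: a $400 premium against a $15,000 loss at $40,000 wealth
p_star = implied_probability(W=40_000, P=400, c=15_000)
print(f"{p_star:.2%}")  # insure iff you believe the loss is at least this likely
```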
Indeed. In reality, the vast majority of people do not have sufficient information to make reasonable estimates of the probability of loss—and in many cases, even the size of the loss.
E.g. a landlord is required to rebuild the place, and to temporarily rehome the tenants while that’s being done, in the event of the house being destroyed by fire or flood. They’re also liable for compensation and healthcare of affected third parties, and for the legal costs of determining those figures.
They can probably calculate the rebuilding cost and a reasonable upper bound on temporary housing. But third party liability?
So, it still comes back to vibes.
It seems to me that another common and valid reason for insurance is if your utility is a nonlinear function of your wealth, but the insurance company values wealth linearly on the margin. E.g. for life insurance, the marginal value of a dollar for your kids after you die so that they can have food and housing and such is much higher than the marginal value of a dollar paid in premiums while you’re working.
Even so, at some level of wealth you’ll leave more behind by saving up the premium and having your children inherit the compound interest instead. That point is found through the Kelly criterion.
(The Kelly criterion does indeed correspond to a concave utility, but the insurance company is so wealthy that individual life-insurance payouts sit on the nearly linear early part of its utility curve, whereas for most individuals they do not.)
That is assuming you live sufficiently long. The point in life insurance is to make sure you leave something for your kids/spouse if you die soon.
It is under no such assumption! If you have sufficient wealth you will leave something even if you die early, by virtue of already having the wealth.
If it’s easier, think of it as the child guarding the parent’s money and deciding whether to place a hedging bet on their parent’s death or not—using said parent’s money. Using the same Kelly formula we’ll find there is some parental wealth at which it pays more to let it compound instead of using it to pay for premia.
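The claimed crossover is easy to see numerically. A sketch with made-up premium, loss, and probability: the value of insuring, V(W), is positive at modest wealth and turns negative once wealth is large relative to the loss:

```python
import math

def V(W: float, P: float = 600.0, c: float = 50_000.0, p: float = 0.01) -> float:
    """Gain in expected log-wealth from insuring a loss c at premium P.
    All default numbers are invented for illustration."""
    return math.log(W - P) - p * math.log(W - c) - (1 - p) * math.log(W)

print(V(60_000) > 0)      # True: worth insuring at modest wealth
print(V(2_000_000) > 0)   # False: better to let the wealth compound
```

Asymptotically V(W) behaves like (p*c - P)/W, so for a premium above the expected loss the sign must eventually flip as W grows.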
Don’t be an overly naive consequentialist about this. “Nothing” is an overstatement.
Peace of mind can absolutely be one of the things you are purchasing with an insurance contract. If your Kelly calculation says that motorcycle insurance is worth $899 a month, and costs $900 a month, but you’ll spend time worrying about not being insured if you don’t buy it, and won’t if you do, I fully expect that is worth more than $1 a month.
But do be an actual consequentialist about it. If the value of the insurance is more like $10, but the cost is $900, I doubt peace of mind about this one thing is worth $890 a month.
I don’t think your case for how insurance companies make money (Appendix B) makes sense. The insurance company does not have logarithmic discounting on wealth, so it will not be using Kelly to allocate bets. From the company’s perspective, the decision depends purely on the direct profitability of the bet: premium minus expected payout and overheads.
Separately, the claim that there is no alternative to Kelly is very weak. I guess you mean there is no formalized, mathematical alternative? Otherwise, I propose a very simple one: buy insurance if the cost of insurance is lower than the disutility of worrying about the bad outcomes it covers. This is the ‘vibes-based’ strategy you discuss at the beginning, but it is clearly superior to the Kelly calculator. In the case where Kelly says not to buy insurance, either:
you worry about the bad outcomes and that worry is more damaging than cost of insurance would be; you have lost utility
you do not worry about the bad outcomes more than cost of insurance; you would not have bought the insurance under vibes strategy either
Therefore the Kelly criterion can only be superior when it advises you to buy insurance for something you are not worried about. However, you are not likely to run these calculations for things you aren’t worried about, so these opportunities are hard to find.
Nevertheless, I’m glad the tool exists, because it might help me calibrate how much I should be worried about outcomes I can quantify.
Not true. Risk management is a huge part of many types of insurance, and that is about finding the appropriate exposure to a risk—and this exposure is found through the Kelly criterion.
This matters less in some types of insurance (e.g. life, which has stable long-term rates and rare catastrophic events) but significantly more in others (liability, natural-disaster-linked).
This is only about maximising profit for a given level of risk, it has nothing to do with specific shapes of utility functions.
The problem with your explanation lies in the way companies calculate the cost of insurance. They do not base it solely on the nominal value of potential losses; instead, they account for the real value. For instance, if there’s a potential loss of $1,000, insurance companies calculate the cost by considering its real value, including factors like the money they’ll collect and invest over time, leveraging the compounding effect. As a result, compounding does not make insurance profitable for the insured in the long run.
When evaluating insurance, the key factor to consider is the expected value of your money. No matter how you calculate it, the expected value of your money when you use insurance will always be lower than if you didn’t use it. This is because insurance is inherently a financial product designed to profit the company offering it.
However, insurance remains logical for most people because the potential impact of losing all their money—or falling into significant debt—goes far beyond the nominal value of the loss. This is due to the fact that money’s value isn’t constant; its impact on a person’s life depends on their financial situation. For someone with limited resources, losing $100 has a much greater effect on their quality of life than it would for a wealthy individual.
This disparity is why it often makes sense for an average person to insure against significant damages, even if insurance is, technically speaking, a financial loss in the long term. For individuals with smaller financial reserves, the potential damage from a significant loss outweighs the cost of insurance. Meanwhile, the insurance company is able to absorb and spread that risk across a large pool of clients, which minimizes the impact of any individual claim.
Nice article! In practice you can get positive-expected-value insurance, though, since you (in some cases) know more about your risk than the insurer does. The other way around, in some cases the insurer is better at estimating your risk and might “rip you off” if they (correctly) assume you overestimate it.
Another common case where you get positive-EV insurance is when the cost paid by you and the cost paid by the insurer, when the bad event happens, are significantly different.
For example, if you get extended phone / device insurance from a manufacturer, when the device fails you would have to pay the retail price for a new device. The manufacturer however only needs to pay the production price, which given margins can be a small fraction of the retail price. Thus the manufacturer can set a premium that is (in expectation) somewhere in between those two prices, and you both benefit.
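A sketch of that price gap with invented numbers:

```python
p_fail = 0.10       # chance the device fails in the covered period (made up)
retail = 1_000.0    # replacement cost to you, at retail price
production = 400.0  # replacement cost to the manufacturer

buyer_ev = p_fail * retail       # your expected cost uninsured
maker_ev = p_fail * production   # manufacturer's expected payout

# Any premium strictly between the two expected costs is positive-EV
# for both sides, even before any risk-aversion argument:
premium = 70.0
print(maker_ev < premium < buyer_ev)  # True
```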
In the majority of cases the insurer has more knowledge, as they can afford the time to do the research.
This is of course why any insurance market must have a great many suppliers. If there are only one or two insurers, they can easily set the premiums to make supernormal profits because they are ‘competing’ against consumers with very little knowledge, instead of other insurers who have similar knowledge levels.