Indeed. (Fun for commenters: come up with more. Asteroid impact. Banking system collapse. Massive crop failure from virus or bacteria. Antibiotic resistance....) If we treat all threats this way, we spend 10 times GDP. It’s an interesting case of framing bias. If you worry only about climate, it seems sensible to pay a pretty stiff price to avoid a small uncertain catastrophe. But if you worry about small uncertain catastrophes, you spend all you have and more, and it’s not clear that climate is the highest on the list.
This seems like a very strange thing for an economics professor to say.
Suppose we make an isomorphic argument:
“Of course, one can buy insurance against, say, a car crash. Shouldn’t we pay a bit more for car insurance now, though the best guess is that it’s not a worthwhile investment, as insurance against such tail risks? But the problem is, we could buy insurance against our house burning down, homeowner’s insurance against being robbed, our iPod breaking, our husband dying (or our wife, or our children, or our pets), travel insurance for our upcoming vacation, insurance against losing our job, catastrophic health insurance, legal liability insurance, longevity insurance… (Commenters, have fun listing others.) But if we treat risks this way, we’ll wind up spending 10 times our annual income on insuring against risks! It’s an interesting case of framing bias: it may sound rational to insure against a house fire or a car crash or an income-earner dying unexpectedly, but if you worry about every insurable risk, you spend all you have and more, so it’s not clear that car crash insurance is highest on the list.”
Doesn’t sound quite so clever that time around, does it? But all I did was take the framework of his argument: “if one invests in insurance against X, then because there are also risks Y, Z, A, B, C, which are equally rational to insure against, one will also insure against risks Y...C; and if one insured against all those risks, one would wind up broke; any investment where one winds up broke is a bad investment; QED, investing in insurance against X is a bad investment” and substitute in different, uncontroversial forms of insurance and risk.
What makes the difference? Why does his framing seem so different from my framing?
Well, it could be that the argument is fallacious in equating risks. Maybe banking system collapse has a different risk from crop collapse, which has a different risk from asteroid impact, and really, we don’t care about some of them and so we would not invest in those, leaving us not bankrupt but optimally insured. In which case his argument boils down to ‘we should invest in climate change protection iff it’s the investment with the highest marginal returns’, which is boringly obvious and not insightful at all, because it means that all we need to have is the object-level discussion about where climate change belongs on “the list”, and there is no meta-level objection to investing against existential risks at all, contrary to how the post presents itself.
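To make the ‘highest marginal returns’ point concrete, here is a toy sketch in Python (all names and numbers are invented for illustration) of what working through “the list” would actually look like: estimate the expected loss averted per dollar for each threat and fund down the ranking until the budget runs out, rather than funding everything or funding nothing.

```python
# Toy illustration (hypothetical numbers): ranking risk-mitigation options by
# marginal return instead of funding all of them or none of them.

# Each option: (name, cost, expected_loss_averted) -- all figures invented.
options = [
    ("climate mitigation",    3.0, 9.0),
    ("asteroid detection",    0.5, 2.5),
    ("pandemic preparedness", 1.0, 6.0),
    ("banking system reform", 2.0, 3.0),
]

budget = 4.0  # total we are willing to spend

# Crude greedy heuristic: sort by benefit per unit cost and fund in order.
# (Not an exact knapsack solution, but enough to show the ranking idea.)
ranked = sorted(options, key=lambda o: o[2] / o[1], reverse=True)

spent = 0.0
for name, cost, benefit in ranked:
    if spent + cost <= budget:
        spent += cost
        print(f"fund {name}: cost {cost}, expected loss averted {benefit}")
    else:
        print(f"skip {name}: over budget")
```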
You are treating “investing in preventing X” as the same thing as “insuring against X.” They are not the same thing. And they are doubly not the same thing on a society-wide level.
Insurance typically functions to distribute risk, not reduce it. If I get insurance against a house fire, my house is just as likely to burn down as it was before. However, the risk of a house fire is now shared between me and the insurance company. As Lumifer points out, trying to make your house fire-proof (or prevent any of the other risks you list) really would be ruinously expensive.
For threats to civilisation as a whole, there is no-one outside of the planet with whom we can share the risk. Therefore it is not sensible to talk about insurance for them, except in a metaphorical sense.
You are treating “investing in preventing X” as the same thing as “insuring against X.” They are not the same thing. And they are doubly not the same thing on a society-wide level.
Fair enough, certainly one can draw a distinction between spreading risks around and reducing risks; even though in practice the distinction is a bit muddled, inasmuch as insurance companies invest heavily in reducing net risk by fighting moral hazard, funding prevention research, establishing industry-wide codes, and withholding insurance unless best practices are implemented.
So go back to my isomorphic argument, and for every mention of insurance, replace it with some personal action that reduces the risk, e.g. for ‘health insurance’, swap in ‘exercise’ or ‘caloric restriction’ or ‘daily wine consumption’.
Does this instantly rescue Cochrane’s argument and make the isomorphism sound equally sensible? “You shouldn’t try to quit eating so much junk food because while that reduces your health risks, there are so many risks you could be reducing that it makes no sense to try to reduce all of them and hence, by the fallacy of division, no sense to try to reduce any of them!”
As Lumifer points out, trying to make your house fire-proof (or prevent any of the other risks you list) really would be ruinously expensive.
So you resolve Cochrane’s argument by denying the equality of the risks.
I think you’re misreading Cochrane. He approvingly quotes Pindyck who says “society cannot afford to respond strongly to all those threats” and points out that picking which ones to respond to is hard. Notably, Cochrane says “I’m not convinced our political system is ready to do a very good job of prioritizing outsize expenditures on small ambiguous-probability events.”
All that doesn’t necessarily imply that you should do nothing—just that selecting the low-probability threats to respond to is not trivial and that our current sociopolitical system is likely to make a mess out of it. Both of these assertions sound true to me.
The difference is that one set of risks is insurable and the other is not.
An insurable risk is one which can be mitigated through diversification. You can insure your house against fire only because there are thousands of other people also insuring their houses against fire. One consequence is that insurance is cheaper than an individual guarantee: it would cost much more to make your specific house entirely fireproof.
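As an aside, the diversification point is easy to illustrate with a quick simulation (the numbers below are hypothetical): a single homeowner’s fire loss is all-or-nothing, but the average loss across a large pool of independent houses stays close to its expectation, which is why a modest premium can cover it while fireproofing one specific house cannot be made similarly cheap.

```python
# Toy simulation (hypothetical numbers) of why pooling independent risks works:
# one house's fire loss is all-or-nothing, but the pooled per-policy loss
# across many houses concentrates near its expectation.
import random

random.seed(0)
p_fire, loss, n_houses, n_trials = 0.001, 300_000, 10_000, 200

per_policy_costs = []
for _ in range(n_trials):
    fires = sum(random.random() < p_fire for _ in range(n_houses))
    per_policy_costs.append(fires * loss / n_houses)

expected = p_fire * loss  # 300 per house per year
print(f"expected cost per policy: {expected:.0f}")
print(f"simulated per-policy cost: min {min(per_policy_costs):.0f}, "
      f"max {max(per_policy_costs):.0f}")
# An uninsured house loses either 0 or 300,000 in a given year; the pool's
# per-policy cost stays near ~300, which is what makes a premium feasible.
```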
The other difference (and that one goes against Cochrane) is that normal insurable risks are survivable (and so you can assign certain economic value / utility / etc. to outcomes) while existential risks are not—the value/utility of the bad outcome is negative infinity.
(admittedly, I just skimmed the blog post, so I can be easily convinced my tentative position here is wrong)
I’m not sure I see any difference between your proposed isomorphic argument and his argument.
Assuming our level of certainty about risks we can insure against is the same as our level of (un)certainty about existential risks, and assuming the “spending 10 times our annual income” is accurate for both...the arguments sound exactly as “clever” as each other.
I also am not sure I agree with the “boringly obvious and not insightful at all” part. Or rather, I agree that it should be boringly obvious, but given our current obsession with climate change, is it boringly obvious to most people? Or rather, I suppose, the real question is do most people need the question phrased to them in this way to see it?
I guess what I’m saying is that it doesn’t seem implausible to me that if you asked a representative sample of people if climate change protection was important to invest in, they would say yes and vote for that. And then if you made the boringly obvious argument about determining where it belongs on the list of important things, they’d also say yes and vote for that.
I’m not sure I see any difference between your proposed isomorphic argument and his argument.
Good, then my isomorphism succeeded. Typically, people try to deny that the underlying logic is the same.
the arguments sound exactly as “clever” as each other.
They do? So if you agree that things like car or health or house insurance are irrational, did you run out and cancel every form of insurance you have and advise your family and friends to cancel their insurance too?
I guess what I’m saying is that it doesn’t seem implausible to me that if you asked a representative sample of people if climate change protection was important to invest in, they would say yes and vote for that. And then if you made the boringly obvious argument about determining where it belongs on the list of important things, they’d also say yes and vote for that.
But note that thinking climate change is a big enough risk to invest against has nothing at all to do with his little argument about ‘oh there are so many risks, what are we to do, we can’t consume insurance against them all’. Pointing out that there are a lot of options cannot be an argument against picking a subset of options; here’s another version: “this restaurant offers 20 forms of cheesecake for dessert, but if I ordered a slice of one, then why not order all 20? But then I would run out of cash and be arrested and get fat too! So it seems rational to not order any cheesecake at all.” Why not just order 1 or 2 of the slices you like best… Arguing about whether you like the strawberry cheesecake better than the chocolate is a completely different argument which has nothing to do with there being 20 possible slices rather than, say, 5.
Good, then my isomorphism succeeded. Typically, people try to deny that the underlying logic is the same.
Not in the way I think you think.
They do? So if you agree that things like car or health or house insurance are irrational, did you run out and cancel every form of insurance you have and advise your family and friends to cancel their insurance too?
No, because we can quantify the risks and costs of those things and make good decisions about their worth.
In other words, if I assume that you intended, for the sake of your argument, that we have the same amount of knowledge about insurance as we do about these existential risks, then the two arguments seem exactly as clever as each other: neither is terribly clever because they both point out that we need more information and well...duh. (However, see my argument about just how obvious “duh” things actually are.)
If I don’t assume that you intended, for the sake of your isomorphism, that we have the same amount of knowledge about insurance as we do about these existential risks, then the two arguments aren’t so isomorphic.
But note that thinking climate change is a big enough risk to invest against has nothing at all to do with his little argument about ‘oh there are so many risks, what are we to do, we can’t consume insurance against them all’.
If this is the argument Cochrane is endorsing, I don’t support it, but that’s not exactly what I got out of his post. Lumifer’s reading is closer to what I got.