“Does anyone have a clear example to give of a time/space where overconfidence seems to them to be doing a lot of harm?” I would say making investments in general (I am a professional investment analyst). This is an area where lots of people are making decisions under uncertainty, and overconfidence can cost everyone a lot of money.
One example would be bank risk modelling pre-2008: ‘our VaR model says that 99.9% of the time we won’t lose more than X’, therefore this bank is well-capitalised. Everyone was overconfident that the models were correct; they weren’t, and chaos ensued. (I remember the risk manager of one bank, Goldman Sachs I think, bewailing that they had just experienced a 25-standard-deviation event, which is basically impossible. No mate, your models were wrong, and you should have known better, because financial systems have crises every decade or two.)
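A quick back-of-the-envelope check (my own sketch, assuming the Gaussian tails a textbook VaR model takes for granted) of just how impossible a 25-standard-deviation move is on the model’s own terms:

```python
import math

# Tail probability of a 25-sigma daily move under a normal distribution,
# i.e. under the return assumptions a textbook VaR model makes.
sigmas = 25
p = 0.5 * math.erfc(sigmas / math.sqrt(2))  # P(Z > 25) for a standard normal
print(f"P(daily move > {sigmas} sigma) ~ {p:.0e}")  # roughly 3e-138

# Even with one trading day every day since the Big Bang (~5e12 days),
# the expected number of such moves is still effectively zero.
days = 13.8e9 * 365.25
print(f"Expected occurrences over {days:.1e} days: {p * days:.1e}")
```

So when a 25-sigma move actually turns up, the right conclusion is not ‘we were astonishingly unlucky’ but ‘the Gaussian tail was the wrong model’.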
Speaking from personal experience, I’d say a frequent failure mode is excessive belief in modelling. Sometimes it comes from the model-builder: ‘this model is the best it can be, I’ve spent lots of time and effort tinkering with it, therefore it must be right’. Sometimes the model-builder understands that the model is flawed but is willing to overstate their confidence in the results, and/or the person receiving the communication doesn’t want to hear about the uncertainty.
While my personal experience is mostly around people (including myself) building financial models, I suggest that people building any model of a dynamic system that is not fully understood are likely to suffer the same failure mode: at some point down the line someone gets very overconfident and starts thinking that the model is right, or at least everyone forgets to explore the possibility that the model is wrong. When those models are used to make decisions with real-life consequences (think epidemiology models in 2020), there is a risk of getting things very wrong once people start acting as if the model is the reality.
Which brings me on to my second example, which will be more controversial than the first, so sorry about that. In March 2020, Imperial College released a model predicting an extraordinary death toll if countries didn’t lock down to control Covid. I can’t speak to Imperial’s internal calibration, but the communication to politicians and the public certainly seems to have suffered from overconfidence. The forecasts of a very high death toll pushed governments around the world, including the UK (where I live), into strict lockdowns. Remember that lockdowns themselves are very damaging: mass deprivation of liberty, mass unemployment, stoking a mental health crisis, depriving children of education—the harms caused by lockdowns will still be with us for decades to come. You need a really strong reason to impose one.
And yet, the one counterfactual we have, Sweden, suggests that Imperial College’s model was wrong by an order of magnitude. When the model was applied to Sweden (first link below), it suggested a death toll of 96,000 by 1 July 2020 with no mitigation, or roughly half that with more aggressive social distancing. Actual reported Covid deaths in Sweden by 1 July were around 5,500 (second link below).
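To put rough numbers on ‘an order of magnitude’, here is the simple arithmetic (a sketch using only the figures quoted above, as reported in the two links below):

```python
# Simple arithmetic on the figures quoted above.
predicted_no_mitigation = 96_000   # modelled Swedish deaths by 1 July 2020, no mitigation
predicted_mitigated = 96_000 // 2  # roughly half that with more aggressive social distancing
actual_reported = 5_500            # reported Swedish Covid deaths by 1 July 2020

print(f"No-mitigation scenario vs actual: ~{predicted_no_mitigation / actual_reported:.0f}x")  # ~17x
print(f"Mitigated scenario vs actual: ~{predicted_mitigated / actual_reported:.0f}x")          # ~9x
```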
So it’s my contention—and I’m aware it’s a controversial view—that overconfidence in the output of an epidemiological model has resulted in strict lockdowns which are a disaster for human welfare and which in themselves do far more harm than they prevent. (This is not an argument for doing nothing: it’s an argument for carefully calibrating a response to try and save the most lives for the least collateral damage.)
Imperial model applied to Sweden: https://www.medrxiv.org/content/10.1101/2020.04.11.20062133v1.full.pdf
Covid deaths in Sweden by date: https://www.statista.com/statistics/1105753/cumulative-coronavirus-deaths-in-sweden/
Hey! Thanks. I notice you’re a brand new commenter and I wanted to say this was a great first (actually second) comment. Both your examples were on-point and detailed. Your second one, FYI, seems quite likely to me too. (A friend of mine interacted with epidemiological modeling at many places early in the pandemic – and I have heard many horror stories from them about the modeling that was being used to advise governments.)
I’ll leave an on-topic reply tomorrow, just wanted to say thanks for the solid comment.
Thank you!