Does anyone have a clear example to give of a time/space where overconfidence seems to them to be doing a lot of harm?
I feel that when I query myself for situations to apply this advice, or situations where I feel I’ve seen others apply the norms recommended here, it mostly points in directions I don’t want: be less confident about things, make fewer bold claims, make sure not to make confident statements that turn out to be false.
I feel like the virtues I would like many people to live out are about trusting themselves and taking on more risk: take stronger bets on your ideas, make more bold claims, spend more time defending unlikely/niche ideas in the public sphere, make surprising predictions (to allow yourself to be falsified). Ask yourself not “was everything I said always precisely communicated with the exact right level of probability” but “did we get closer to reality or further away”. This helps move the discourse forward, be it between close collaborators or in the public sphere.
I think it’s a cost if you can’t always take every sentence and assume it represents the person’s reflectively endorsed confidence, but on the margin I think it’s much worse if people have nothing interesting to say. Proper scoring rules don’t incentivize getting the important questions right.
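As an aside, here is a toy sketch of that last point (the forecasters and numbers are made up), using the Brier score as the proper scoring rule: the score rewards calibration on each question separately, and nothing in it asks whether the question was worth answering.

```python
def brier(p, outcome):
    """Brier score for one binary forecast: (p - outcome)^2, lower is better."""
    return (p - outcome) ** 2

# Forecaster A: ten safe, unimportant predictions, all correct.
safe = [brier(0.99, 1) for _ in range(10)]

# Forecaster B: nine safe predictions plus one bold, important claim that turns out wrong.
bold = [brier(0.99, 1) for _ in range(9)] + [brier(0.8, 0)]

print(sum(safe) / len(safe))   # ~0.0001 -- A looks nearly perfect
print(sum(bold) / len(bold))   # ~0.064  -- B is penalised for the one risky call
# The score never distinguishes trivial questions from important ones.
```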
——
I don’t want to dodge the ethical claims in the OP, which frames the topic of overconfidence as an ethical question around deception. As a matter of deontology, I agree that deceptive sentences are unethical. From a virtue ethics standpoint, I think that if you follow the virtue of moving the conversation closer to reality, and learn to notice and resist in yourself the attraction toward deception, then you’ll be on the right track morally, and in most cases you do not need to police your individual statements to the degree the OP recommends. Virtue ethics cannot always be as precise as deontology (which itself is not as precise as utilitarianism), so I acknowledge that my recommendations cannot always save someone who is living a life of epistemological sin, but overall I will follow the virtues rather than follow the deontology like someone scared he is constantly committing (or attempting to commit) crimes.
When I ruminate on trying to apply the norm ‘Overconfidence is Deceit’, I think of two example cases. The first is people feeling epistemically helpless, like they don’t know how to think or what’s true, and looking for some hard guardrails so that they never take a single wrong step. Sometimes this is right, but more often I think people’s fear and anxiety are not reflective of reality, and they should take the risk that they might be totally overconfident. And if they do suspect they are acting unethically, they should stop, drop, and catch fire, and decide whether to reorganize themselves in a fundamental way, rather than putting a non-trivial tax on all further thoughts.
The second case I have in mind is people feeling helpless about the sanity waterline being so low. “Why can’t everyone stop saying such dumb things all the time!” I think trying to stop other people from saying wrong things is a thing you do when you’re spending too much time around people you think are dumb, and I recommend fixing that more directly by changing your social circles. For me and the people close to me, I would often rather they take on epistemic practices motivated by getting important questions right. Questions more like “How does a physicist figure out a new fundamental law?” rather than “How would a random person stop themselves from becoming a crackpot who believed they’d invented perpetual motion?”. That tends to lead to things like “get good at Fermi estimates” and “make lots of predictions” and “go away from everyone and think for yourself for a couple of years”, more so than things like “make sure all your sentences never miscommunicate their confidence, to the point of immorality and disgust”.
I guess this is the age-old debate that Eliezer discusses in Inadequate Equilibria, and I tend to take his side of it. I am concerned that people who talk about overconfidence all the time aren’t primarily motivated by trying to figure out new and important truths, but are mostly trying to add guardrails out of fear of themselves/everyone else falling off. I guess I mostly don’t share the spirit of the OP and won’t be installing the recommended mental subroutine.
Does anyone have a clear example to give of a time/space where overconfidence seems to them to be doing a lot of harm? I would say making investments in general (I am a professional investment analyst.) This is an area where lots of people are making decisions under uncertainty, and overconfidence can cost everyone a lot of money.
One example would be bank risk modelling pre-2008: ‘our VaR model says that 99.9% of the time we won’t lose more than X, therefore this bank is well-capitalised’. Everyone was overconfident that the models were correct, they weren’t, and chaos ensued. (I remember the risk manager of one bank (Goldman Sachs, I think) bewailing that they had just experienced a 26-standard-deviation event, which is basically impossible. No mate, your models were wrong, and you should have known better, because financial systems have crises every decade or two.)
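As a toy illustration of how this goes wrong (not a model of any real bank’s book; the fat-tailed distribution and volatility here are made up for the sketch): fit a normal distribution to returns, quote a 99.9% one-day VaR, and then count how often a heavy-tailed ‘reality’ blows through it.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Reality": fat-tailed daily returns (Student-t, 3 degrees of freedom, ~1.7% daily vol).
returns = rng.standard_t(df=3, size=250_000) * 0.01

# The risk model: assume returns are normal with the observed mean and standard deviation.
mu, sigma = returns.mean(), returns.std()
var_999 = mu - 3.09 * sigma   # 3.09 is roughly the 99.9th percentile of a standard normal

# The model promises losses beyond VaR on only 0.1% of days. Reality:
exceedances = (returns < var_999).mean()
print(f"Days worse than the model's 99.9% VaR: {exceedances:.2%} (model said 0.10%)")
# With heavy tails this comes out several times higher than 0.1%, and the worst days
# look like "impossible" many-sigma events under the normal assumption.
```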
Speaking from personal experience, I’d say a frequent failure-mode is excessive belief in modelling. Sometimes it comes from the model-builder: ‘this model is the best model it can be, I’ve spent lots of time and effort tinkering with it, therefore the model must be right’. Sometimes it’s because the model-builder understands that the model is flawed, but is willing to overstate their confidence in the results, and/or the person receiving the communication doesn’t want to listen to that uncertainty.
While my personal experience is mostly around people (including myself) building financial models, I suggest that people building any model of a dynamic system that is not fully understood are likely to suffer the same failure-mode: at some point down the line someone gets very overconfident and starts thinking that the model is right, or at least everyone forgets to explore the possibility that the model is wrong. When those models are used to make decisions with real-life consequences (think epidemiology models in 2020), there is a risk of getting things very wrong when people start acting as if the model is the reality.
Which brings me on to my second example, which will be more controversial than the first one, so sorry about that. In March 2020, Imperial College released a model predicting an extraordinary death toll if countries didn’t lock down to control Covid. I can’t speak to Imperial’s internal calibration, but the communication to politicians and the public definitely seems to have suffered from overconfidence. The forecasts of a very high death toll pushed governments around the world, including the UK (where I live), into strict lockdowns. Remember that lockdowns themselves are very damaging: mass deprivation of liberty, mass unemployment, stoking a mental health crisis, depriving children of education. The harms caused by lockdowns will still be with us for decades to come. You need a really strong reason to impose one.
And yet, the one counterfactual we have, Sweden, suggests that Imperial College’s model was wrong by an order of magnitude. When the model was applied to Sweden (link below), it suggested a death toll of 96,000 by 1 July 2020 with no mitigation, or half that level with more aggressive social distancing. Actual reported Covid deaths in Sweden by 1 July were 5,500 (second link below).
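Just to put the arithmetic on ‘an order of magnitude’ in one place, using the figures above:

```python
predicted_no_mitigation = 96_000   # model's projection for Sweden by 1 July 2020
predicted_aggressive = 96_000 / 2  # "half that level" with more aggressive distancing
actual_reported = 5_500            # reported Covid deaths in Sweden by 1 July 2020

print(predicted_no_mitigation / actual_reported)  # ~17x
print(predicted_aggressive / actual_reported)     # ~9x
```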
So it’s my contention—and I’m aware it’s a controversial view—that overconfidence in the output of an epidemiological model has resulted in strict lockdowns which are a disaster for human welfare and which in themselves do far more harm than they prevent. (This is not an argument for doing nothing: it’s an argument for carefully calibrating a response to try and save the most lives for the least collateral damage.)
Imperial model applied to Sweden: https://www.medrxiv.org/content/10.1101/2020.04.11.20062133v1.full.pdf
Covid deaths in Sweden by date: https://www.statista.com/statistics/1105753/cumulative-coronavirus-deaths-in-sweden/
Hey! Thanks. I notice you’re a brand new commenter and I wanted to say this was a great first (actually second) comment. Both your examples were on-point and detailed. FYI, your second one seems quite likely to me too. (A friend of mine interacted with epidemiological modeling at many places early in the pandemic, and I have heard many horror stories from them about the modeling that was being used to advise governments.)
I’ll leave an on-topic reply tomorrow, just wanted to say thanks for the solid comment.
Thank you!
I was thinking about this a little more, and I think that the difference in our perspectives is that you approached the topic from the point of view of individual psychology, while I (perhaps wrongly) interpreted Duncan’s original post as being about group decision-making. From an individual point of view, I get where you’re coming from, and I would agree that many people need to be more confident rather than less.
But applied to group decision-making, I think the situation is very different. I’ll admit I don’t have hard data on this, but from life experience and anecdotes of others, I would support the claim that most groups are too swayed by the apparent confidence of the person presenting a recommendation/pitch/whatever, and therefore that most groups make sub-optimal decisions because of it. (I think this is also why Duncan somewhat elides the difference between individuals who are genuinely over-confident about their beliefs, and individuals who are deliberately projecting overconfidence: from the point of view of the group listening to them, it looks the same.)
Since groups make a very large number of decisions (in business contexts, in NGOs, in academic research, in regulatory contexts...) I think this is a widespread problem and it’s useful to ask ourselves how to reduce the bias toward over-confidence in group decision-making.
Almost everyone’s response to COVID, including institutions, to the tune of many preventable deaths.
Almost everything produced by the red tribe in 2020, to the tune of significant damage to the social fabric.
Thanks for the examples! Those two sound like the second case I had in mind.