The Importance of Saying “Oops”
I just finished reading a history of Enron’s downfall, The Smartest Guys in the Room, which hereby wins my award for “Least Appropriate Book Title.”
An unsurprising feature of Enron’s slow rot and abrupt collapse was that the executive players never admitted to having made a large mistake. When catastrophe #247 grew to such an extent that it required an actual policy change, they would say, “Too bad that didn’t work out—it was such a good idea—how are we going to hide the problem on our balance sheet?” As opposed to, “It now seems obvious in retrospect that it was a mistake from the beginning.” As opposed to, “I’ve been stupid.” There was never a watershed moment, a moment of humbling realization, of acknowledging a fundamental problem. After the bankruptcy, Jeff Skilling, the former COO and brief CEO of Enron, declined his own lawyers’ advice to take the Fifth Amendment; he testified before Congress that Enron had been a great company.
Not every change is an improvement, but every improvement is necessarily a change. If we only admit small local errors, we will only make small local changes. The motivation for a big change comes from acknowledging a big mistake.
As a child I was raised on equal parts science and science fiction, and from Heinlein to Feynman I learned the tropes of Traditional Rationality: theories must be bold and expose themselves to falsification; be willing to commit the heroic sacrifice of giving up your own ideas when confronted with contrary evidence; play nice in your arguments; try not to deceive yourself; and other fuzzy verbalisms.
A traditional rationalist upbringing tries to produce arguers who will concede to contrary evidence eventually—there should be some mountain of evidence sufficient to move you. This is not trivial; it distinguishes science from religion. But there is less focus on speed, on giving up the fight as quickly as possible, integrating evidence efficiently so that it only takes a minimum of contrary evidence to destroy your cherished belief.
I was raised in Traditional Rationality, and thought myself quite the rationalist. I switched to Bayescraft (Laplace / Jaynes / Tversky / Kahneman) in the aftermath of . . . well, it’s a long story. Roughly, I switched because I realized that Traditional Rationality’s fuzzy verbal tropes had been insufficient to prevent me from making a large mistake.
After I had finally and fully admitted my mistake, I looked back upon the path that had led me to my Awful Realization. And I saw that I had made a series of small concessions, minimal concessions, grudgingly conceding each millimeter of ground, realizing as little as possible of my mistake on each occasion, admitting failure only in small tolerable nibbles. I could have moved so much faster, I realized, if I had simply screamed “OOPS!”
And I thought: I must raise the level of my game.
There is a powerful advantage to admitting you have made a large mistake. It’s painful. It can also change your whole life.
It is important to have the watershed moment, the moment of humbling realization. To acknowledge a fundamental problem, not divide it into palatable bite-size mistakes.
Do not indulge in drama and become proud of admitting errors. It is surely superior to get it right the first time. But if you do make an error, better by far to see it all at once. Even hedonically, it is better to take one large loss than many small ones. The alternative is stretching out the battle with yourself over years. The alternative is Enron.
Since then I have watched others making their own series of minimal concessions, grudgingly conceding each millimeter of ground; never confessing a global mistake where a local one will do; always learning as little as possible from each error. What they could fix in one fell swoop voluntarily, they transform into tiny local patches they must be argued into. Never do they say, after confessing one mistake, I’ve been a fool. They do their best to minimize their embarrassment by saying I was right in principle, or It could have worked, or I still want to embrace the true essence of whatever-I’m-attached-to. Defending their pride in this passing moment, they ensure they will again make the same mistake, and again need to defend their pride.
Better to swallow the entire bitter pill in one terrible gulp.
It really is the hardest thing in life for people to decide when to cut their losses.
When you have time and effort invested, it is so difficult to finally decide that ‘enough is enough’ and stop following that path.
Take simple queuing at the bank or checkout or waiting for a bus. After you know you should quit, you feel that joining a new queue means ‘losing’ all the time already invested in the slower queue.
From financial investments to marriage it is all the same problem. Do I carry on or write it off and try something else? It is easy to say, but really, really difficult to do.
Great post.
It would be very useful if we could have some idea of how often people typically make big mistakes. Then it would be suspicious if we had gone on much longer than that while believing we had not made such a mistake.
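To make that concrete, here is a toy calculation; the base rate below is invented purely for illustration, not an estimate of any real population:

```python
import math

# Toy model with made-up numbers: suppose "big mistakes" arrive independently
# at an average rate of one every five years (a pure assumption).
rate_per_year = 1 / 5
years_without_admitting_one = 20

# Poisson model: probability of genuinely making zero big mistakes in that span.
p_truly_mistake_free = math.exp(-rate_per_year * years_without_admitting_one)
print(f"{p_truly_mistake_free:.3f}")  # ~0.018
```

Under those made-up numbers, going twenty years while believing you have made no big mistake is itself roughly 2%-surprising, which suggests the problem is in the noticing rather than in the record.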
I think the title of the book was supposed to be ironic: they thought of themselves as the “smartest guys in the room” but their carefully constructed house of cards eventually collapsed.
I know of at least one decision I made that turned out poorly: my choice of college major. I do not know if it was the best decision to have made based on the information I had at the time.
My experience was this:
I was considering which of two majors would be better. I was advised to choose one of them. I chose that one.
In my second year of college, I began taking courses specific to that major. I did not like them. I was informed that those courses are, in fact, awful, everyone hates them, but they cover necessary material for the better, more advanced courses, and that once I started to take the upper level courses, I would like the major more than I currently did. I accepted this advice, and did not change majors.
I then began to take higher level courses. As it turned out, they were about as bad as the lower level courses. It took me a while to realize this. By the time I thought it was likely that I was in the wrong major, however, I had taken many courses that could not be easily transferred to any other field. I decided that, as the courses I had already taken were a sunk cost, it would be better to put up with a few more awful courses and then graduate than to transfer to another major and start my college degree from scratch.
In the end, it took me six years to graduate. I graduated in May 2006, worked during the summer of 2006 in a temporary position I did not like, and have been happily job-free ever since (much to the dismay of my parents).
In hindsight, I realize that I would be more satisfied if I had chosen a different major, but I do not know at what point I ought to have known that I should have chosen a different major.
I suggest, by your second year. I expect that you hated those classes more than most of the people who continue in that major, and by checking with third-year students, you could have discovered this fact.
In my second year, I took classes that everybody traditionally hated and grumbled about. And I dutifully grumbled about them along with the rest. But secretly, I liked them. (There was much to complain about, but in the end, they were interesting.) And when second-year students asked me about them in my third year, I believe that I admitted this. (Ironically, I changed majors in my third year anyway, but for different reasons, and with little waste.)
Well, this is just a hypothesis, but I wonder (if you even notice this comment three years late) if it seems reasonable to you.
One fascinating thing about Enron is that they found a way to corrupt their own standards. They were huge fans of marking assets to market—which could have averted both savings and loan crises, kept Executive Life from collapsing, averted the Japanese banking crisis, etc. On top of that, they loved incentive-based compensation.
This all fell apart when some Enron traders became the market: if you get paid based on prices, and you set prices, the rest of the Enron story is inevitable.
Apropos: http://www.somethinghappens.net/d/20070801.html
brilliant post; BillK’s comment, even better :)
cheers, eokyere
Why don’t you have to take into account the prior probability of the large mistake that occurred? Of course, you might be biased and believe it to be smaller than it truly is, in which case there should be a whoops moment (your mistake was overconfidence), but clearly there must also be cases in which there was a small prior probability of a big mistake. Shouldn’t we not judge these cases by only examining the outcome?
What’s the heuristic supposed to be, here? Taking your medicine early and going for the big change sounds better, in principle, but I think it amounts to more fuzziness. The first small bit of evidence that you made a mistake may or may not actually relate to an error. There are false positives as well as false negatives. At some point the evidence overwhelms your own risk tolerance (as it relates to future costs, the historical ones are sunk) and you change your mind....or not. It isn’t clear to me that you minimize your costs by jumping to the conclusion that you were wrong any more than you do so by clinging to the idea that you remain correct.
It would be possible to recognize large mistakes too early, rather than too late. Can anyone here think of any case where they’ve ever seen a human being do that?
I admit I don’t remember any case of people publicly affirming, out of the blue: “I am damn wrong, big time!”.
No homeopath waking up on a regular Tuesday and declaring: “Gee, I’m sorry, I’ve read an article and realized I’ve been medicating people with these placebo ‘Styrofoam’ little balls for the last five years, and now I see that this was insane. I am publicly apologizing to my patients. I quit. But I am happy that I can see it now, and can still become a specialist in something else.”
Confirmation bias seems to grow stronger the more the time passes, and the more public their opinion gets.
(I know this is an ooooold post, but since I thought about it, why not reply?)
All the time. Generally when it’s something they don’t want to do and are looking for reasons to stop rather than reasons to continue. At that point small incongruities are automatically taken as evidence that the whole system is flawed.
I’ve done it. I’ve zig-zagged on at least three things where, if I’d had a higher change-my-mind threshold, I wouldn’t’ve. Though I suppose each of those instances was due to catastrophic forgetting, and not actually reasoned arguments.
Sure. Assuming you desire a long-term romantic relationship: if you end every romantic relationship that you see as most likely insufficiently desirable for the long term, there is a good chance you will not develop a good enough grasp of relationship etiquette, skills, and problem solving to satisfy a candidate whom you would deem a sufficiently desirable long-term companion. That behavior wouldn’t strictly prevent you from finding someone who would put up with your lack of knowledge, but it sure would have a non-negligible probability of doing so.
Eliezer, not bothering to go after a goal may fall into that category. For example, it’s reasonable to choose to live an average life, because one is probably mistaken if one thinks one is likely to have strongly positively deviant outcomes in life, such as becoming a billionaire, or procreating with a 1 in a million beauty, or winning a nobel prize for one’s academic contributions, or becoming an A list celebrity. So one may choose never to invest in going after these goals, and devote the balance of one’s time and energy to optimizing one’s odds of maintaining a median existence, in terms of achievements and experiences. I could name people who seem to be doing that, but you’ve never heard of them.
This is a case of the availability heuristic, if I understand what you’re saying: people who screw up and only admit it at the end get into the news; those who admit big mistakes early do not.
Sounds reasonable. But this example is not really about mistakes, just an adjustment of ambitions and expectations, which might even be subconscious. It’s not really about right or wrong, as I see it.
Eliezer,
Not so much recognizing mistakes too early. Rather, mistakenly seeing a mistake where there isn’t one. False positives abound.
BillK said:
“It really is the hardest thing in life for people to decide when to cut their losses.”
No it’s not. All you have to do is to periodically pretend that you were magically teleported into your current situation. Anything else is the sunk cost fallacy.
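A minimal sketch of that heuristic applied to BillK’s queue example, with hypothetical numbers; the point is simply that time already spent never enters the comparison:

```python
# Hypothetical queue decision using the "teleportation test":
# compare only the forward-looking waits, as if you had just appeared here.

def should_switch(minutes_left_in_this_queue, minutes_expected_in_other_queue):
    """Return True if switching queues is expected to finish sooner.

    Minutes already spent waiting are a sunk cost and deliberately
    never appear in this function.
    """
    return minutes_expected_in_other_queue < minutes_left_in_this_queue

# Thirty minutes already wasted here are irrelevant; only 12 vs. 7 matters.
print(should_switch(minutes_left_in_this_queue=12, minutes_expected_in_other_queue=7))  # True
```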
Excellent heuristic, John!
I don’t think the “oops” situation will show up early under all circumstances. If it’s a situation where we’ve been before, tasting success and failure, we could sense its symptoms and diagnose early. But if it’s a new venture and we are passionate about it, we’ll give it a longer rope hoping for the best.
That said, the Enron collapse had many dimensions, not just the non-admission of a mistake or not having the nerve to capitulate. The perpetrators walked into it with eyes wide open, perhaps relying on the “greater fool” theory, which goes: there will always be a greater fool to whom you can palm off your bad bets. Had they been lucky, they would’ve even gotten away with it all—perhaps selling off to a private equity fund just like Sam Zell did (with Equity Office Trust) to Blackstone, sensing the madness of subprime early on, and Schwarzman going public and cashing out...
Wouldn’t you agree, Eliezer...?
John wrote: All you have to do is to periodically pretend that you were magically teleported into your current situation. Anything else is the sunk cost fallacy.
Yup! I agree. That’s all you have to do.
Try explaining that to your wife, when you decide that you didn’t really want to have a mortgage, two kids to bring up, a job with no prospects, etc. and that the sunk cost fallacy means that you are going off to California with the blonde cheerleader down the road.
You’ll probably find it real easy. ;)
BillK, the external consequences of that are the same whether you were just teleported into the situation or got into it yourself. Though I agree that the internal consequences may be different, if you truly did, in the past, give your word.
Krishna, of course the Enron collapse was complicated, but—at least according to the book I read—they were drinking their own Kool-Aid.
Eliezer, I agree completely.
That is exactly the point of my original remark. Knowing that you should cut your losses due to a previous mistaken decision is not the difficult part. Deciding to actually do it and face the consequences is probably one of the hardest things people have to face in their life. It affects people personally because they have to admit that they have spent possibly many years on the wrong path. In some cases it will also have serious consequences on their friends and family.
Some people find it so hard to do that they will commit suicide in preference.
Well said, BillK. I agree with you that not many are endowed with the faculties to spot the rot early on and the eventual courage to walk down the road with the blonde cheerleader. The deeper we are mired, the harder it gets to come out. I could tangent it to a space travel metaphor. A slew of peripheral factors gets built around the ‘wrong’ that becomes the core, to which our existence (including that of our family) gravitates, and extrication would call for an intense escape velocity, impossible at a time when we are nearly out of gas and don’t even have cruising speed. Even if we manage that, somehow, somehow, we’ll find ourselves propelled into a space, a new environment that offers neither gravity nor any kind of support of the sort we’ve grown so used to, where we just have to stay suspended, not knowing where to go… It’s this vision of uncertainty that holds us back from abject confession, tempting us to weigh cost against benefit, and in the near term the scale mostly leans towards cost (shame-induced fear, the looming prospect of sudden loss of self-esteem) rather than benefit (lightness of conscience, a clean slate, scope for a restart). There is also a grail of hope that there’ll-soon-be-a-way-out, which tempts us to let our folly lie buried deep within till we choose to turn the spigot on, at an hour when we have the advantage and to an audience that will laud our triumph rather than laugh at our misjudgment.
Hope we are one here.
A few years ago I realized I’d been totally wrong for the past two decades—my entire professional life. It felt like my life had been wasted, because, in fact, it had been; none of the skills, none of the knowledge I possessed applied to my new situation. The solid steel building I thought I’d built was turned to sand and blown away in one terrible instant.
Now after years of previously unimaginable success following that epiphany, it all seems like a bad dream, like that part of my life really never happened. The current me is unable to see into the mind that made those mistakes. It was someone else. That guy could never have achieved what I have now.
I do not fully understand this last sentence. Specifically, why does a big mistake imply that a big change needs to be made? Perhaps I am not understanding the difference between a small mistake and a big mistake.
In other words, is a mistake’s “bigness” dependent on how much change is required to undo/fix/prevent the mistake? Or is a mistake’s bigness dependent on the effects of the mistake?
A simple mathematical error can cause planes to drop out of the sky for lack of fuel. I instinctively say this is a big mistake with a small change required to fix it.
I think the minor differences between admitting mistakes that require big changes and admitting mistakes that have large consequences trigger different defense mechanisms. Namely, the former is laziness and the latter is shame. Knowing where the resistance comes from is useful when overcoming the resistance. It is also possible to notice the resistance before you realize that a mistake has been made. Consciously acknowledging a mistake can only happen after you begin to realize that a mistake may have been made.
The big examples you use seem to point toward large, shameful confessions, which makes perfect sense. They seem to qualify for both big effects and big changes required to fix them. These probably trigger whatever resistances would have applied to one category or the other. Which is probably why a post like this is such a useful shot in the arm to make one sit down and self-reflect for a few minutes.
All of which is to say, “Yeah, I agree,” with paragraphs instead of three words.
That being said, I don’t understand why the motivation for a big change comes from acknowledging a big mistake. Wouldn’t it be the other way around? If I want to make better, bigger changes, I could start by acknowledging big mistakes. The motivation for the acknowledgement comes from the desire to improve. If acknowledging errors provides motivation to change, then I would argue that the change is based on guilt or dissatisfaction. The goal should be positive change, not alleviating pressure on your ego. If acknowledging mistakes is in the way of the higher goal, then throw your ego in the fire and improve.
Is this a possible explanation or corollary to the sunk-costs fallacy of economics?
If anyone’s feeling nervous about using this advice in day-to-day situations (in other words, I won’t even try to tackle BillK’s example of weddings, children, and mortgages), here are 2 positive experiences I’ve had recently.
At work, after spending >1.5 hrs digging into a problem, I called a coworker for help. We spent an additional 20+ minutes digging into it to no avail. Finally, I realized that I had typed the year wrong, repeatedly used the words “oops” and “mistake”, and the call ended shortly thereafter. Now that it’s been a few days, I can confidently say that there were no negative repercussions from that. Instead, I was able to complete the task quickly thereafter.
At home, a family member and I were trying to do something new that we had learned by watching videos and reading written instructions. I skipped about half the steps and after saying that I’d been a fool, we were able to start over and finish after a few minutes. The family member did not say anything negative about my mistake afterwards.
And then there are the legions of people who do not admit to even the tiniest mistake. To these people, incongruent information is to be ignored at all costs. And I do mean all costs: when my unvaccinated uncle died of Covid, my unvaccinated dad did not consider this to be evidence that Covid was dangerous, because my uncle also showed signs of having had a stroke around the same time, and we can be 100% certain this was the sole reason he was put on a ventilator and died. (Of course, this is not how he phrased it; he seems to have an extreme self-blinding technique, such that if a stroke could have killed his brother, there is nothing more to say or think about the matter and We Will Not Discuss It Further.) It did not sway him, either, when his favorite anti-vax pastor Marcus Lamb died of Covid, though he had no other cause of death to propose.
I think this type of person is among the most popular and extreme in politics. And their followers, such as my dad, do the same thing.
But they never admit it. They may even use the language of changing their mind: “I was wrong… it turns out the conspiracy is even bigger than I thought!” And I think a lot of people who can change their mind get roped in by those who can’t. Myself, for instance: my religion taught me it was important to tell the truth, but eventually I found out that key information was hidden from me, filtered out by leaders who taught “tell the truth” and “choose the right”. The hypocrisy was not obvious, and it took me far too long to detect it.
I’m so glad there’s a corner of the internet for people who can change their minds quicker than scientists, even if the information comes from the “wrong” side. Like when a climate science denier told me CO2’s effect decreases logarithmically, and within a day or two I figured out he was right. Some more recent flip-flops of mine: Covid origin (natural origin ⇒ likely lab leak ⇒ natural origin); Russia’s invasion of Ukraine (Kyiv will fall ⇒ Russia’s losing ⇒ stalemate).
But it’s not enough; we need to scale rationality up. Eliezer mainly preached individual rationality, with “rationality dojos” and such, but figuring out the truth is very hard in a media environment where nearly two thirds of everybody gives up each centimetre of ground grudgingly, and the other third won’t give up even a single millimetre of ground (at least not until the rest of the tribe has given up a few metres first). And maybe it’s worse, maybe it’s half-and-half. In this environment it’s often a lot of work even for aspiring rationalists to figure out a poor approximation of the truth. I think we can do better and I’ve been wanting to propose a technological solution, but after seven months no one has upvoted or even tried to criticize my idea.
Not knocking your idea, but usually when you want to complain that “no one has upvoted me” it’s good to think again whether you really want to blame other people.
I can guess at a reason why people may not have read that post you linked. I found it long-winded, like a page out of your diary where you’re still developing the idea, thinking aloud by writing—which is excellent to do, but it doesn’t seem like something you wrote from the start for other people to read, so it’s hard to follow. At least, I’m still puzzled about what you wanted to put forward in it.
Interesting that you seem to see rationality (as opposed to traditional rationality) as a more effective and efficient version of seeking the truth (~epistemic rationality). In that sense, it does seem somewhat similar to doing what EA is trying to do for altruism.
Another way to frame this: correct for biases in your sensitivity to new information.
Enron was too insensitive to new information. It biased itself towards insensitivity by rewarding those who stuck to the party line.
Conversely, a founder who gives up after hearing a few ’no’s from investors is likely too sensitive to new information. They’re biased in the opposite direction: it’s often easier to give up than to trudge on.
Eliezer’s point is that most of us are too insensitive to new information because it’s painful to admit that we were wrong. I can agree with this, but it’s also not a universal truth because there are times where it’s painful to admit that we were right. The universal truth is that it’s good to correct for biases in your sensitivity to new information.
Examples:
Alice moves to Examplestan, where everyone believes that 3+2=7. Alice should be less sensitive to new information on the sum of 3 and 2 because it’s easier to conform.
Bob is a fervent ateapotist (someone who doesn’t believe that there’s a teapot between Earth and Mars), has mostly ateapotist friends, and heads up the local ateapotist club. If NASA publishes new images of a teapot between Earth and Mars, Bob should be more sensitive to those images than he’s inclined to be because it’s easier to stick with what he already believes (and retain his friends) than to discard what he believes (and lose his friends).
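A minimal sketch of what “correcting for a bias in sensitivity” could look like, as a plain Bayesian update with a damping knob; the prior, the likelihoods, and the damping factor are all invented for illustration:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false, damping=1.0):
    """One Bayesian update on a piece of evidence.

    damping < 1 models under-reaction to the likelihood ratio
    (Bob clinging to ateapotism); damping = 1 is the unbiased update.
    """
    likelihood_ratio = (p_evidence_if_true / p_evidence_if_false) ** damping
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Invented numbers for Bob: 1% prior on the teapot, and images that are
# 20x more likely if the teapot is real than if it is not.
warranted = posterior(0.01, 0.8, 0.04)                  # ~0.17
bobs_damped = posterior(0.01, 0.8, 0.04, damping=0.3)   # ~0.02

print(f"warranted posterior: {warranted:.2f}")
print(f"Bob's under-reaction: {bobs_damped:.2f}")
```

If Bob knows he tends to damp evidence that threatens his friendships, the correction is simply to push his update back toward the undamped figure; Alice’s correction runs in the opposite direction.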
I rewrote this as lyrics and fed it into Udio for 5 hours until it gave me this. I think music helps internalize rationalist skills.
Scam or get scammed. While I completely agree that it’s important we all have the humility to admit when we are wrong, I don’t think it has much to do with being smart.
I hope I’ve understood you correctly here, but you seem to be suggesting they aren’t smart because smart people admit to being wrong, and the Enron execs more or less never did admit their ‘mistakes’. So the title is “least appropriate” because it characterizes them as “smart”.
First, I don’t believe that being smart has anything to do with admitting when one is wrong. Happy to offer some examples.
Next, the author is saying they were smart because they managed to build an empire based on smoke and mirrors without anyone being able to catch them in their lies for such a long time. If the traders who were out there finding investors and closing deals had been more intelligent, someone would’ve blown the whistle and put a stop to it all. If the regulators and business partners had figured out when deals fell apart for reasons other than market unpredictability, they would’ve surely gone after Enron on day 1. Instead, they made hundreds of millions before anyone caught on. This was what made them “smart”—these guys made the entire financial sector look like dummies.
A commenter below, @Doug_S., said: “I think the title of the book was supposed to be ironic: they thought of themselves as the ‘smartest guys in the room’ but their carefully constructed house of cards eventually collapsed.”
Along the same lines as my above thinking, I definitely do not think the title is meant to be ironic. They are, however, dirtbags and con-men. But the Enron saga isn’t something that any average Joe could pull off: make hundreds of millions in personal wealth, scam the financial giants and even earn their respect. We have government agencies who have the single mission to prevent stuff like this (FTC, SEC,...). Definitely requires a bit of intellect and ability to stay two steps ahead of everyone else.