The Mistake Script
Here on Less Wrong, we have hopefully developed our ability to spot mistaken arguments. Suppose you’re reading an article and you encounter a fallacy. What do you do? Consider the following script:
1. Reread the argument to determine whether it’s really an error. (If not, resume reading.)
2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
3. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don’t.
This script seems intuitively correct, and many people follow a close approximation of it. However, following this script is very bad, because the judgement in step (3) is tainted: you are more likely to continue reading the article if you agree with its conclusion than if you don’t. If you disagreed with the article, then you were also more likely to have spotted the mistake in the first place. These two biases can cause you to unknowingly avoid reading anything you disagree with, which makes you strongly resist changing your beliefs. Long articles almost always include some bad arguments, even when their conclusion is correct. We can greatly improve this script with an explicit countermeasure:
1. Reread the argument to determine whether it’s really an error. (If not, resume reading.)
2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
3. Decide whether you agree with the article’s conclusion. If you are sure you do, stop reading. If you aren’t sure what the conclusion is or aren’t sure you agree with it, continue.
4. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don’t.
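The control flow of this amended script can be sketched as code. This is only a toy model: the function and argument names are invented for illustration, and each boolean stands in for a judgment that only the reader can actually make.

```python
def on_error_found(really_an_error, error_is_relevant,
                   sure_you_agree, rest_worth_reading):
    """Toy control flow for the amended reading script.

    Each argument is a yes/no judgment supplied by the reader;
    all names here are invented for illustration.
    """
    if not really_an_error:
        return "resume reading"   # step 1: on rereading, not actually an error
    if not error_is_relevant:
        return "resume reading"   # step 2: error doesn't touch the article's point
    if sure_you_agree:
        return "stop reading"     # step 3: the countermeasure against confirmation bias
    if rest_worth_reading:
        return "resume reading"   # step 4: worth finishing despite the error
    return "stop reading"
```

The point the sketch makes visible is that the agreement check (step 3) fires before the tainted worth-it judgment (step 4) ever runs.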
This extra step protects us from confirmation bias and the “echo chamber” effect. We might try adding more steps, to reduce bias even further:
1. Reread the argument to determine whether it’s really an error. (If not, resume reading.)
2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
3. Attempt to generate other arguments which could substitute for the faulty one. If you produce a valid one, resume reading.
4. Decide whether you agree with the article’s conclusion. (If you are sure you do, stop reading. If you aren’t sure what the conclusion is or aren’t sure you agree with it, continue.)
5. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don’t.
While seemingly valid, this extra step would be bad, because the associated cost is too high. Generating arguments takes much more time and mental effort than evaluating someone else’s, so you will always be tempted to skip this step. If you were to use any means to force yourself to include it when you invoke the script, then you would instead bias yourself against invoking the script in the first place, and let errors slide.
Finding an error in someone else’s argument shouldn’t cost you much time. Dealing with a mistake in an argument you wrote yourself, on the other hand, is more involved. Suppose you catch yourself writing, saying, or thinking an argument that you know is invalid. What do you do about it? Here is my script:
1. If you caught the problem immediately when you first generated the bad argument, things are working as they should, so skip this script.
2. Check your emotional reaction to the conclusion of the bad argument. If you want it to be true, then you have caught yourself rationalizing. Run a script for that.
3. Give yourself an imaginary gold star for having recognized your mistake. If you feel bad about having made the mistake in the first place, give yourself enough additional gold stars to counter this feeling.
4. Name the heuristic or fallacy you used (surface similarity, overgeneralization, ad hominem, non sequitur, etc.).
5. Estimate how often the named heuristic or fallacy has led you astray. If the answer is more often than you think is acceptable, note it, so you can think about how to counter that bias later.
6. Generate other conclusions which you have used this same argument to support in the past, if any. Note them, to reevaluate later.
A good script provides a checklist of things to think about, plus guidance on how long to think about each, and what state to be in while doing so. When evaluating our own mistakes, emotional state is important; if acknowledging that we’ve made a mistake causes us to feel bad, then we simply won’t acknowledge our mistakes, hence step (3) in this procedure.
Thinking accurately is more complicated than just following scripts, but script-following is a major part of how the mind works. If left alone, the mind will generate its own scripts for common occurrences, but they probably won’t be optimal. The scripts we use for error handling filter the information we receive and regulate all other beliefs; they are too important to leave to chance. What other anti-bias countermeasures could we add? What other scripts do we follow that could be improved?
If I really sat down and worked out my script there’d be new bits scribbled in different colour inks every day I read this blog. It would end up as a grotesque flow diagram with nested lists based on different outcomes to other lists.
I don’t think consciously keeping track of even a six-point formal script each time I read an article would work out. I’d have less brain bandwidth left to actually think about it, and I’d be less likely to notice things not on the list (no usable script will cover everything).
I do think it’s a good idea to occasionally figure out what script you’re implicitly following (kind of like you have) and look at tweaking it when it leads you astray, then consciously paying attention to this tweak till it becomes second nature. But not the whole list!
I tried formalizing everything, ended up with a grotesque and incomplete flowchart, and decided to make the formalized procedure less precise by hiding all that complexity behind the word “decide” in the last step. I believe the actual procedure which implements that process is hard-wired, and is something like:
1. Generate reasons for and against an action, and a weight for each.
2. Compute the total weights of the reasons for and against.
3. Compare the difference between the weights to a threshold. Compare the ratio between the weights to a different threshold. If both thresholds are met, decide in favor. If neither threshold is met, decide against. Otherwise go back to generating reasons.
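That hard-wired loop can be sketched as a toy model. Everything here is a stand-in rather than a claim about actual cognition: the thresholds, the weights, and the `generate_reason` callable are all invented for illustration.

```python
def decide(generate_reason, diff_threshold=2.0, ratio_threshold=1.5,
           max_rounds=100):
    """Toy model of the hypothesized decision loop.

    generate_reason: callable returning (weight, in_favor) pairs;
    a stand-in for whatever the mind's reason-generator actually is.
    Accumulates weighted reasons until both thresholds are met
    (decide in favor), neither is met (decide against), or we give up.
    """
    weight_for = weight_against = 0.0
    for _ in range(max_rounds):
        weight, in_favor = generate_reason()
        if in_favor:
            weight_for += weight
        else:
            weight_against += weight
        diff_met = (weight_for - weight_against) >= diff_threshold
        ratio_met = (weight_against == 0
                     or weight_for / weight_against >= ratio_threshold)
        if diff_met and ratio_met:
            return True    # both thresholds met: decide in favor
        if not diff_met and not ratio_met:
            return False   # neither met: decide against
        # otherwise the thresholds disagree: go back to generating reasons
    return False
```

Because the first step is a memory-like lookup, whatever reasons happen to be generated first (the priming effect mentioned below) can settle the loop before contrary reasons ever get counted.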
The first step (generating reasons) is sort of like exemplar selection and sort of like memory lookup, and is therefore greatly influenced by priming certain concepts beforehand.
I’m quite impressed by the post. I think it’s potentially valuable for helping to decode how we think about reading articles, and that using it sometimes would be a useful exercise. I’m just not convinced about forming a habit of keeping such a script at the front of my mind whenever I encounter a fallacy.
Would you claim that by reading this blog and maintaining an informal and “grotesque” flow diagram in your head, you are growing more rational?
As I said, I wouldn’t want to consciously do such a thing. But our brains are a messy spaghetti-coded hack, with vital systems inherited from reptiles, written by a blind idiot, and not exactly built for objectively debating how to override nature and de-bias themselves. An informal and grotesque flow diagram is what I’d expect to get if I tried to formalize what my mind was really up to when trying to rationally judge and study these posts.
I agree with this post completely.
However, you didn’t answer my question. I asked “Would you claim that you are growing more rational?”
Ah. Yes, I’m pretty sure this blog and her older sister have made me more rational since I clumsily stumbled into the latter, and that’s probably involved the informal flow diagrams in my head becoming a little less grotesque through maintenance.
1. Reread the argument to determine whether it’s really an error. (If not, resume reading.)
2. Verbalise my instinctive knowledge of what the error is, forming an argument against it.
3. Observe all the nuances uncovered during step 2. Consider whether the argument may be merely poorly expressed.
4. Decide whether I feel like arguing about it (if not, resume reading).
5. Type.
That isn’t a script I give myself. That’s merely a self observation of what my apparent script is. I’m relatively content with it so I’ll let it be.
(Note, steps 1-4 are optional.)
I don’t generally stop reading because of a single fallacy or error in an article. I might in a maths proof, but not in an article. It usually takes a series of a few of them to alter my reading pattern at all. Even then, as often as not I’ll finish reading, now taking joy from spotting all the ludicrous errors.
Stopping reading is probably a mistake; authors often save their most insightful arguments for last.
I don’t think I agree with step 3 in the second script (step 4 in the third script). I think that would create a bias against understanding the intricacies of arguments that you agree with, which I’m not comfortable with. Maybe you could just restate it as “If you aren’t sure that you agree with the statement, continue reading” or something to that effect.
Edited to add “If you aren’t sure what the conclusion is or aren’t sure you agree with it, continue.” The case where you aren’t sure whether you agree was meant to be excluded by “If you are sure you do”, but that wasn’t very clear. The case where you aren’t sure what the conclusion is wasn’t mentioned at all, and it’s an important one, since many good articles take a while to get to the point, or cover a broad range of points, and shouldn’t be aborted early.
Hm. Well, I was thinking in general that you can come to the same conclusion by more than one route, and it could be important to see how other people do it. For example, I hold now some libertarian-style beliefs that I held when I was a teenager, but the framework that those beliefs sit in is completely different. “Free trade is good because of comparative advantage and economic reasoning” is different from “Free trade is good because people shouldn’t be restricted in who they can sell their goods to!” by a wide margin.
In fact, there have been situations where I’ve changed my mind to the other side of an issue by reading something whose conclusion I agreed with, because I would see flaws in their arguments, try to overlay my own arguments, and find that the same flaws existed in both, leading me to change my beliefs.
Maybe we agree, though, and what you mean by “conclusions” is what I mean by “conclusions and reasoning.”