The Mistake Script
Here on Less Wrong, we have hopefully developed our ability to spot mistaken arguments. Suppose you’re reading an article and you encounter a fallacy. What do you do? Consider the following script:
1. Reread the argument to determine whether it’s really an error. (If not, resume reading.)
2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
3. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don’t.
This script seems intuitively correct, and many people follow a close approximation of it. However, following this script is very bad, because the judgement in step (3) is tainted: you are more likely to continue reading the article if you agree with its conclusion than if you don’t. If you disagreed with the article, then you were also more likely to have spotted the mistake in the first place. These two biases can cause you to unknowingly avoid reading anything you disagree with, which makes you strongly resist changing your beliefs. Long articles almost always include some bad arguments, even when their conclusion is correct. We can greatly improve this script with an explicit countermeasure:
1. Reread the argument to determine whether it’s really an error. (If not, resume reading.)
2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
3. Decide whether you agree with the article’s conclusion. If you are sure you do, stop reading. If you aren’t sure what the conclusion is or aren’t sure you agree with it, continue.
4. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don’t.
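As a concrete (and admittedly tongue-in-cheek) illustration, here is a minimal sketch of this four-step script in Python. The function and parameter names are my own, and each boolean stands in for a subjective judgment the reader makes; the only thing the code captures is the order of the checks.

```python
# Minimal sketch of the improved reading script. Each parameter is a
# stand-in for a subjective judgment; the names are illustrative, not
# part of any real API.

def handle_suspected_error(
    really_an_error: bool,       # step 1: on rereading, is it actually an error?
    relevant_to_point: bool,     # step 2: does the error matter to the article's point?
    sure_you_agree: bool,        # step 3: are you sure you agree with the conclusion?
    worth_reading_anyway: bool,  # step 4: is the rest worth reading despite the error?
) -> str:
    if not really_an_error:
        return "resume reading"
    if not relevant_to_point:
        return "resume reading"
    # The countermeasure: check agreement *before* the tainted cost-benefit
    # judgment. If you are sure you already agree, stop reading rather than
    # letting agreement carry you past the error.
    if sure_you_agree:
        return "stop reading"
    return "resume reading" if worth_reading_anyway else "stop reading"

# Example: a relevant error in an article you disagree with, but whose
# remainder still looks worth your time.
print(handle_suspected_error(True, True, False, True))  # -> resume reading
```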
This extra step protects us from confirmation bias and the “echo chamber” effect. We might try adding more steps, to reduce bias even further:
1. Reread the argument to determine whether it’s really an error. (If not, resume reading.)
2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
3. Attempt to generate other arguments which could substitute for the faulty one. If you produce a valid one, resume reading.
4. Decide whether you agree with the article’s conclusion. (If you are sure you do, stop reading. If you aren’t sure what the conclusion is or aren’t sure you agree with it, continue.)
5. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don’t.
While this extra step seems reasonable, adding it would be a mistake, because its cost is too high. Generating arguments takes much more time and mental effort than evaluating someone else’s, so you will always be tempted to skip this step. And if you somehow forced yourself to include it whenever you invoke the script, you would instead bias yourself against invoking the script at all, and let errors slide.
Finding an error in someone else’s argument shouldn’t cost you much time. Dealing with a mistake in an argument you wrote yourself, on the other hand, is more involved. Suppose you catch yourself writing, saying, or thinking an argument that you know is invalid. What do you do about it? Here is my script:
1. If you caught the problem immediately when you first generated the bad argument, things are working as they should, so skip this script.
2. Check your emotional reaction to the conclusion of the bad argument. If you want it to be true, then you have caught yourself rationalizing. Run a script for that.
3. Give yourself an imaginary gold star for having recognized your mistake. If you feel bad about having made the mistake in the first place, give yourself enough additional gold stars to counter this feeling.
4. Name the heuristic or fallacy you used (surface similarity, overgeneralization, ad hominem, non sequitur, etc.).
5. Estimate how often the named heuristic or fallacy has led you astray. If the answer is more often than you think is acceptable, note it, so you can think about how to counter that bias later.
6. Recall any other conclusions you have supported with this same argument in the past. Note them, to reevaluate later.
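For what it’s worth, here is a rough sketch of this checklist in the same style. The parameters and the returned “notes” are my own illustrative bookkeeping, not anything the script itself requires, and the early hand-off in step 2 is just one reading of that step.

```python
# Rough sketch of the self-mistake script. In practice these inputs are
# judgments and the "notes" are mental notes, not data you actually collect.

def handle_own_mistake(
    caught_immediately: bool,    # step 1
    want_conclusion_true: bool,  # step 2
    fallacy_name: str,           # step 4: e.g. "overgeneralization"
    times_led_astray: int,       # step 5: rough estimate
    acceptable_count: int,       # step 5: your tolerance for that fallacy
    other_conclusions: list,     # step 6: past conclusions built on the same argument
) -> list:
    notes = []
    if caught_immediately:
        return notes  # step 1: things are working; nothing more to do
    if want_conclusion_true:
        # step 2: you were rationalizing; hand off to a rationalization script
        return ["run the rationalization script"]
    notes.append("gold star for noticing the mistake")  # step 3
    if times_led_astray > acceptable_count:             # steps 4-5
        notes.append(f"recurring bias to counter later: {fallacy_name}")
    for conclusion in other_conclusions:                 # step 6
        notes.append(f"reevaluate later: {conclusion}")
    return notes
```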
A good script provides a checklist of things to think about, plus guidance on how long to think about each, and what state to be in while doing so. When evaluating our own mistakes, emotional state is important; if acknowledging that we’ve made a mistake causes us to feel bad, then we simply won’t acknowledge our mistakes, hence step (3) in this procedure.
Thinking accurately is more complicated than just following scripts, but script-following is a major part of how the mind works. Left to itself, the mind will generate its own scripts for common occurrences, but they probably won’t be optimal. The scripts we use for error handling filter the information we receive and regulate all our other beliefs; they are too important to leave to chance. What other anti-bias countermeasures could we add? What other scripts do we follow that could be improved?