This detachment itself seems to help accuracy; I was struck by a psychology study demonstrating that not only are people better at falsifying theories put forth by other people, they are also better at falsifying their own theories when pretending those theories are held by an imaginary friend!
I think we’ve just derived a new heuristic. Pretend that your beliefs are held by your imaginary friend.
I agree. When I first read the essay, I said to myself: so that’s why ‘rubber-duck debugging’ works!
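For anyone who hasn’t met the term: rubber-duck debugging is the practice of explaining your code line by line to an inert listener, such as a rubber duck, and letting the act of articulation expose the flaw. A minimal sketch of how the narration catches a bug (the function and the bug here are hypothetical, purely for illustration):

    def average(scores):
        total = 0
        # Narrating to the duck: "I loop over every index in the list..."
        # Said aloud, it becomes obvious that range(len(scores) - 1)
        # stops one index short and silently drops the last score.
        for i in range(len(scores) - 1):  # bug: should be range(len(scores))
            total += scores[i]
        return total / len(scores)

The claim “this loop visits every element” gets treated as someone else’s belief the moment you have to justify it out loud, which looks like the same mechanism as the imaginary-friend heuristic.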
Near vs Far: http://www.telegraph.co.uk/finance/businessclub/8527500/Daniel-H-Pink-employees-are-faster-and-more-creative-when-solving-other-peoples-problems.html
An explanation of why this works.
Short version: suppose that reasoning in the sense of “consciously studying the premises of conclusions and evaluating them, as well as generating consciously understood chains of inference” evolved mainly to persuade others of your views. Then it’s only to be expected that, by default, we study and generate theories at a superficial level, because there’s no reason to waste time evaluating our conscious justifications if they aren’t going to be used for anything. If we do expect them to be subjected to closer scrutiny by outsiders, then we’re much more likely to actually inspect the justifications for flaws, so that we’ll know how to counter any objections others will bring up.
An exercise we ran at minicamp, which seemed valuable but requires a partner, is to take a position and argue for it for some time. Then, at some interval, you switch and argue against the position (while your partner defends it). I used this once at work but haven’t had a chance to since. The suggestion to swap sides mid-argument surprised the two participants, but it did lead to a more effective discussion.
The exercise sometimes felt forced if the topic was artificial and the discussion veered too far off course, or if one side was simply convinced and felt that further defense for its own sake was unproductive.
Still, it’s a riff on this theme.
Does falsification improve if I imagine they are the beliefs of the imaginary friend of an imaginary friend?