The question of whether trying to consistently adopt meta-reasoning position A will raise the percentage of time you’re correct, compared with meta-reasoning position B, is often a difficult one.
When someone uses a disliked heuristic to produce a wrong result, the temptation is to pronounce the heuristic “toxic”. When someone uses a favored heuristic to produce a wrong result, the temptation is to shrug and say “there is no safe harbor for a rationalist”, or “such a person is biased, stupid, and beyond help; he would have gotten to the wrong conclusion anyway, no matter what his meta-reasoning position was. The idiot reasoner, rather than my beautiful heuristic, has to be discarded.” In the absence of hard data, consensus seems difficult. The problem is exacerbated when a novel meta-reasoning argument is brought up in the middle of a debate over a separate disagreement; the opposing sides then have even more temptation to “dig in” to separate meta-reasoning positions.