Hello.
I found LessWrong after reading HPMoR. I think I woke up as a rationalist when I realised that in my everyday reasoning I always judged from the bottom line without considering any third alternatives, and started to think about what to do with that. I am currently trying to stop my mind from aimlessly and uselessly wandering from one topic to another. I registered on LessWrong after I started to question why I believe rationality works, ran into a problem, and thought I could get some help here. The problem is expressed in the following text (I am happy to move it from the welcome board to any other suitable one if needed):
John was reading a book called “Rationality: From AI to Zombies” and thought: “Well, I am advised to doubt my beliefs, as some of them may turn out to be wrong.” So it occurred to John to try to doubt the following statement: “Extraordinary claims require extraordinary evidence.” But that was impossible to doubt, as the statement was a straightforward implication of theorem X of probability theory, which John, as a mathematician, knew to be correct. After a while a wild thought ran through his mind: “What if, every time a person looks at the proof of theorem X, the Dark Lords of the Matrix alter that person’s perception to make the proof look correct, while in fact it contains a mistake and the theorem is wrong?” But John didn’t even consider the idea seriously, because such an extraordinary claim would definitely require extraordinary evidence.
Fifteen minutes later, John spontaneously considered the following hypothetical situation. He visualized a religious person, Jane, reading a book called “Rationality: From AI to Zombies”. After reading for some time, Jane thinks that she should try to doubt her belief in Zeus. But that is clearly an impossible action, as the existence of Zeus is confirmed in the Sacred Book of Lightning, which, as Jane knows, contains only Ultimate and Absolute Truth. After a while a wild thought runs through her mind: “What if the Sacred Book of Lightning actually consists of lies?” But Jane doesn’t even consider the idea seriously, because the Book was surely written by Zeus himself, who never lies.
From this hypothetical situation John concluded that if he couldn’t doubt B because he believed A, and couldn’t doubt A because he believed B, he had better try to doubt A and B simultaneously; he would be cheating otherwise. So he attempted to simultaneously doubt the statements “Extraordinary claims require extraordinary evidence” and “Theorem X is proved correctly”.
He attempted to do so, and succeeded, and then spent some more time considering Jane’s position before settling his doubt. Jane justifies her set of beliefs by Faith. Faith is certainly an implication of her beliefs (the ones about the reliability of the Sacred Book), and Faith certainly belongs to the meta level of her thinking, governing her object-level ideas about the existence of Zeus.
So John generalized: if some meta-level process controlled his thoughts, and that process was implied by the very thought he was currently doubting, it would be wise to suspend the process for the duration of the doubting. Not following this rule could leave him clinging to beliefs which, from the outside, looked as ridiculous as Jane’s religion. John searched through the meta level controlling his thoughts. He was horrified to realize that Bayesian reasoning itself fit the criteria: it definitely organized his thought process, and its correctness was implied by the very theorem X he was currently doubting. So he sat there, with his belief unsettled and no idea how to settle it correctly. After all, even if he came up with some idea, how could he know it wasn’t the worst idea ever, deliberately planted in him by the Dark Lords of the Matrix? He didn’t allow himself to dismiss this nonsense with “Extraordinary claims require extraordinary evidence”, because then he would fail to doubt that very statement, and there would be no point in this whole crisis of faith he had deliberately inflicted on himself…
Jane, in whose imagination the whole story took place, yawned and closed the book called “Rationality: From AI to Zombies” lying in front of her. If learning rationality was going to make her doubt herself out of rationality, why would she even bother trying? She was comfortable with her belief in Zeus, and the only theory which could point out her mistakes had apparently ended in self-annihilation. Or, in short: who would believe anyone who says, “We have evidence that considering evidence leads you to truth, therefore it is true that considering evidence leads you to truth”?
Well, it sounds right. But what rationalist mistake was made in the situation described, and how can it be fixed? My first idea was that there are things we shouldn’t doubt… but that is rather dogmatic and feels wrong. So perhaps it should be: “Before doubting X, think about what you will become if you succeed, and take that into account before actually trying to doubt X.” But this still implies “there are cases when you shouldn’t doubt”, which is still suspicious and doesn’t sound “rational”. I mean, it doesn’t sound like making the map reflect the territory.
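For what it’s worth, the “extraordinary claims” statement John treats as untouchable is usually derived from Bayes’ theorem in odds form: posterior odds = prior odds × likelihood ratio. Here is a minimal numerical sketch of that derivation (the specific numbers are invented purely for illustration):

```python
def posterior_probability(prior, likelihood_ratio):
    """Bayesian update in odds form.

    prior:            P(H), the probability of the hypothesis before the evidence
    likelihood_ratio: P(E|H) / P(E|not H), how much more likely the evidence is
                      if the hypothesis is true than if it is false
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# An ordinary claim (prior 0.5) with moderately strong evidence (ratio 10)
# becomes quite credible: 10/11, about 0.91.
print(posterior_probability(prior=0.5, likelihood_ratio=10))

# An extraordinary claim (prior one in a million) with the same evidence
# stays negligible, around 0.00001.
print(posterior_probability(prior=1e-6, likelihood_ratio=10))
```

With a tiny prior, ordinary evidence barely moves the posterior; only an enormous likelihood ratio, i.e. extraordinary evidence, can make the claim credible. (Of course, this sketch itself leans on the probability axioms John was trying to doubt, which is exactly the circularity the story is about.)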