The Proper Use of Doubt
Once, when I was holding forth upon the Way, I remarked upon how most organized belief systems exist to flee from doubt. A listener replied to me that the Jesuits must be immune from this criticism, because they practice organized doubt: their novices, he said, are told to doubt Christianity; doubt the existence of God; doubt if their calling is real; doubt that they are suitable for perpetual vows of chastity and poverty. And I said: Ah, but they’re supposed to overcome these doubts, right? He said: No, they are to doubt that perhaps their doubts may grow and become stronger.
Googling failed to confirm or refute these allegations. But I find this scenario fascinating, worthy of discussion, regardless of whether it is true or false of Jesuits. If the Jesuits practiced deliberate doubt, as described above, would they therefore be virtuous as rationalists?
I think I have to concede that the Jesuits, in the (possibly hypothetical) scenario above, would not properly be described as “fleeing from doubt.” But the (possibly hypothetical) conduct still strikes me as highly suspicious. To a truly virtuous rationalist, doubt should not be scary. The conduct described above sounds to me like a program of desensitization for something very scary, like exposing an arachnophobe to spiders under carefully controlled conditions.
But even so, they are encouraging their novices to doubt—right? Does it matter if their reasons are flawed? Is this not still a worthy deed unto a rationalist?
All curiosity seeks to annihilate itself; there is no curiosity that does not want an answer. But if you obtain an answer, if you satisfy your curiosity, then the glorious mystery will no longer be mysterious.
In the same way, every doubt exists in order to annihilate some particular belief. If a doubt fails to destroy its target, the doubt has died unfulfilled—but that is still a resolution, an ending, albeit a sadder one. A doubt that neither destroys itself nor destroys its target might as well have never existed at all. It is the resolution of doubts, not the mere act of doubting, which drives the ratchet of rationality forward.
Every improvement is a change, but not every change is an improvement. Every rationalist doubts, but not all doubts are rational. Wearing doubts doesn’t make you a rationalist any more than wearing a white medical lab coat makes you a doctor.
A rational doubt comes into existence for a specific reason—you have some specific justification to suspect the belief is wrong. This reason, in turn, implies an avenue of investigation which will either destroy the targeted belief or destroy the doubt. This holds even for highly abstract doubts, like: “I wonder if there might be a simpler hypothesis which also explains this data.” In this case you investigate by trying to think of simpler hypotheses. As this search continues longer and longer without fruit, you will think it less and less likely that the next increment of computation will be the one to succeed. Eventually the cost of searching will exceed the expected benefit, and you’ll stop searching. At which point you can no longer claim to be usefully doubting. A doubt that is not investigated might as well not exist. Every doubt exists to destroy itself, one way or the other. An unresolved doubt is a null-op; it does not turn the wheel, neither forward nor back.
If you really believe a religion (and don’t just believe in it), then why would you tell your novices to consider doubts that must die unfulfilled? It would be like telling physics students to agonize over whether the twentieth-century revolution might have been a mistake, and that Newtonian mechanics was correct all along. If you don’t really doubt something, why would you pretend that you do?
Because we all want to be seen as rational—and doubting is widely believed to be a virtue of a rationalist. But it is not widely understood that you need a particular reason to doubt, or that an unresolved doubt is a null-op. Instead people think it’s about modesty, a submissive demeanor, maintaining the tribal status hierarchy—almost exactly the same problem as with humility, on which I have previously written. Making a great public display of doubt to convince yourself that you are a rationalist will do around as much good as wearing a lab coat.
To avoid merely professing doubts,1 remember:
A rational doubt exists to destroy its target belief, and if it does not destroy its target it dies unfulfilled.
A rational doubt arises from some specific reason the belief might be wrong.
An unresolved doubt is a null-op.
An uninvestigated doubt might as well not exist.
You should not be proud of mere doubting, although you can justly be proud when you have just finished tearing a cherished belief to shreds.
Though it may take courage to face your doubts, never forget that to an ideal mind doubt would not be scary in the first place.
1See “Professing and Cheering” in Map and Territory.
This is a good post, and it suggests a whole series of similar posts: take each of the cues people treat as signs of rationality, and dig deeper to look for when exactly rational people would or would not display those signs. Watch out for people proud to display the cue even when rational people would not have it.
“—An unresolved doubt is a null-op.
An uninvestigated doubt might as well not exist.”
Perhaps we’re just using words differently, but I’m not sure I agree with either of these. I would have thought that recognising valid doubts would be useful in making decisions, even when the information necessary to resolve such doubts with certainty may not be available; and in some cases, the gains in terms of improved decision-making may not be worth the cost of investigating and resolving the doubt.
I think I’m using “doubt” as almost coextensive with “uncertainty”, and I’m not entirely sure what else it would mean, but do you mean something else?
A great post (in a series of great recent posts from Eliezer), and so far the comments on this post are very strong too.
PS I love this line for the double scoop of transparency: “Making a great public display of doubt to convince yourself that you are a rationalist, will do around as much good as wearing a lab coat.”
Good point, conchis. By “doubt” I don’t mean assigning a probability other than 1; all probabilities are like that, in my book. If I’m pretty sure that a coinflip is fair, I don’t say I “doubt” whether the result will be heads or tails—it doesn’t feel the same as doubting whether it’s possible to revive a cryonics patient.
It seems to me that the word “doubt” could refer to two different things. First, it could be descriptive, an emotion that human beings sometimes feel, for example what kids feel when they start to wonder whether Santa Claus really exists. Second, “doubt” could have a pure mathematical meaning: an ideal Bayesian seeing a probabilistic opportunity to destroy a belief (downgrade its probability) by following a path of investigation. A human rationalist’s Type-1 ‘doubts’ should also qualify as Type-2 ‘doubts’.
HA, what do you mean by “transparency”?
Eliezer,
http://www.mja.com.au/public/issues/174_07_020401/mvdw/mvdw.html
Particularly scary sentence:
“And yet, the practice of medicine involves more than its subservience to evidence or science. It also involves issues such as the meaning of service and feelings of professional pride.”
Hopefully: Heh. I have sometimes thought that all professional lectures on rationality should be delivered while wearing a clown suit, to prevent the audience from confusing seriousness with solemnity.
But I still don’t know what you mean by “transparency”; you’ve used it before, and I can think of more than one possible meaning for it.
I think the phrasing that ‘Jesuit novices are told to doubt everything’ is a loose interpretation of the very demanding initial two year training period that novices have to undergo. The Jesuits want only the best people and try hard to weed out the weaker applicants, so that only the most dedicated survive the initial training period.
The Catholic Encyclopaedia describes the Jesuit training here: http://www.newadvent.org/cathen/14081a.htm (Note: their article may be biased. :) )
Quote: the novice is trained diligently in the meditative study of the truths of religion, in the habit of self-knowledge, in the constant scrutiny of his motives and of the actions inspired by them, in the correction of every form of self-deceit, illusion, plausible pretext, and in the education of his will, particularly in making choice of what seems best after careful deliberation and without self-seeking. Deeds, not words, are insisted upon as proof of genuine service, and a mechanical, emotional, or fanciful piety is not tolerated.
If the applicant survives for two years, then his real training begins.
BillK
Eliezer, I’m using transparency to mean people who wear lab coats, or who make great public displays of doubt, being open and honest with themselves and others about why they’re doing so. I think it’s a standard usage of the word.
Eliezer, I still don’t think the definition of doubt you posit in your comment accounts well for its usage in your post. There are a number of topics that I have only cursory knowledge on, and positions on ideas in those topics that I judge as having only a limited probability of truth. Most of these topics are on the periphery of my awareness most of the time, and only have a very limited usefulness to me, and are not very interesting. While I could easily investigate these matters in more depth and come to a more well-considered position on many ideas, I have no motivation to. So do I have doubts about these? According to your definition, yes, but that doesn’t quite work. If the topics were to become important to me for some reason (whether because they were necessary to solve some problem I were working on or just because I became interested in them) then I would have doubts, and I would fill out my knowledge to get rid of them.
Maybe all your definition needs is a proviso that the utility of the increased certainty about a topic (which would be determined by the goal system of the intelligence, and might be technically arational) must outweigh the cost of the line of inquiry suggested by the doubt. More precisely, that the utility of resolving the doubt be higher than the utility of saving resources for other pursuits.
Eliezer, I think I’m starting to see what you’re getting at a little more with your second definition (“an ideal Bayesian seeing a probabilistic opportunity to destroy a belief (downgrade its probability) by following a path of investigation”). But I’m still not entirely sure whether I would agree with the two points in your post that I originally took issue with.
If I come up with a reason to doubt the probability I previously assigned to some outcome, then, because (as an ideal Bayesian) I shouldn’t expect to change the probability assigned to something as a result of new evidence, I should presumably revise my probability estimate down immediately—before seeing any evidence at all. But once that’s done, whether or not the doubt warrants further investigation, or still needs in some sense to be resolved still seems an open issue. To be honest, I’m not even entirely sure what “resolution” would mean in this context any more. (Unless, perhaps, you simply mean the initial downgrading of probability?)
Conchis, the path of investigation would have a high probability of changing your current estimate up or down (by the same weighted amount), so your current estimate can still be your best guess as an ideal Bayesian while still making the investigation likely to change it to be more accurate.
pdf23ds: Of course, but I interpreted Eliezer as having in mind an asymmetrical (and, I might add, intuitive) definition of doubt, that placed a higher probability on downgrading. I might have misunderstood him though.
Conchis, I didn’t get the impression that a doubt more often downgrades than it upgrades, since you can just as easily doubt a low estimation as a high one, and since you can express any probability of X greater than .5 as one of ~X under .5. Someone can just as easily doubt their atheism as they can their theism.
But the doubts could be asymmetrical. What about a path of investigation (POI) that had a 10% chance of increasing you estimation by .3 and a 90% chance of lowering it by .03? (Those numbers might not actually be balanced, but I think you get the idea.)
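A quick numeric sketch of the constraint such asymmetric paths must satisfy. By conservation of expected evidence, a coherent Bayesian’s probability-weighted average over possible posteriors must equal the prior, so a small chance of a large upgrade has to be offset by a large chance of a proportionally small downgrade. The numbers below are illustrative assumptions, not anyone’s actual estimates from this thread.

```python
# Sketch of conservation of expected evidence: the expected posterior,
# averaged over the possible results of an investigation, equals the prior.

def expected_posterior(outcomes):
    """outcomes: list of (probability of that result, posterior given it)."""
    return sum(p * posterior for p, posterior in outcomes)

prior = 0.30

# An asymmetric path of investigation: a 10% chance of a large upgrade
# (+0.27) is exactly offset by a 90% chance of a small downgrade (-0.03).
outcomes = [(0.10, prior + 0.27), (0.90, prior - 0.03)]

assert abs(expected_posterior(outcomes) - prior) < 1e-12
```

If the two branches did not balance this way (say, 10% of +0.3 against 90% of −0.03), you could profit just by anticipating the investigation, which is exactly what coherence forbids.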
pdf23ds, I’m not sure we’re really disagreeing about anything here. I would naturally define a doubt in exactly the way you seem to suggest. But if you use it that way, then the two points of Eliezer’s I took issue with in my first comment don’t seem to follow. I took Eliezer’s response as attempting to find an alternative definition on which they did follow, and then pointed out that the alternative definition he seemed to be offering didn’t make sense. Maybe I misunderstood something along the way here, but I’m certainly not arguing for that definition myself.
‘No, they are to doubt that perhaps their doubts may grow and become stronger.’
This establishes a rule for using doubt as a bias against any future information that might increase preexisting doubts. It is a bias because the same rule of doubt is not applied to anything that might increase belief, or that might leave the current assessment of doubt and belief unchanged.
Having imposed a rule of doubt for only this one ‘perhaps’, the rule builds in a default increase in the amount of information required to overcome that particular ‘perhaps’. That increased amount of information is the amount needed to overcome the imposed ‘doubt’.
That this amount is established by rule conveys the requirement of ‘faith’ because it replaces a methodology that is falsifiable with one that is not.
Answers are not as important as questions: you can’t answer an unasked question (until its answer provokes you to ask it).
Questions have no value at all if you’re not actively seeking answers.
And an unfortunate amount of philosophy seems to be seeking ways to put questions such that the answers are hard to find and the questioner seems wise. A symptom of this is if the philosopher is offended by having someone give a straight answer to the questions they pose rather than acknowledge a paradox.
I think that unresolved doubt can and does serve a purpose. I think that becoming more comfortable with uncertainty, and refusing to come to a conclusion to avoid the uncomfortable feeling that comes with uncertainty, is valuable. I think that staying in a state of “I don’t know” can be psychologically tougher than coming to a conclusion.
Sometimes I see people, or catch myself, jumping to conclusions in order to have a resolution. I’ve had to train myself to stay in the unresolved state longer, in order to eventually end up having a better answer. That does not necessarily mean seeking that answer right away, and sometimes the path to finding such an answer is not clear.
I don’t agree with you that “A doubt that is not investigated might as well not exist. Every doubt exists to destroy itself, one way or the other. An unresolved doubt is a null-op; it does not turn the wheel, neither forward nor back.”
I think that a doubt that is not investigated still serves as a placeholder in one’s mind, a space carved out for uncertainty, so that if and when new evidence comes in, there is somewhere for a new model that includes it to take shape.
Just adding a view. Seems that one might connect the desire to eliminate the doubt and the problem of confirmation bias. I think it highly rational to accept that we do have limited knowledge and so all conclusions, outside some (narrow?) contexts, must be suspect at all times.
Pick any “fact” you claim to know (for instance, that you know how to drive a car) and then start digging into just what you would need to really know to make that claim 100% true. Do you actually know all that information, or do you just manage to get by and avoid causing accidents?
So little in our world is independent from everything else so when we start pulling one thread....
This seems to be popular opinion in the comments, and I’m inclined to agree that doubt can still be useful even if not investigated further. Yudkowsky pointed out above that the word “doubt” seems to have 2 meanings. It can refer either to an emotional state (such as the emotions a child feels when doubting Santa), or to a mathematical uncertainty (when you’re not sure your conclusions are statistically significant).
In both cases, I can think of counterexamples where merely doubting, without having the opportunity to act on those doubts, still proves useful. In the mathematical sense, doubting provides an upper bound on how much you would trust a possibly erroneous conclusion without investigating it. The emotional aspect cements this knowledge in your mind, and makes it come to mind much more easily if it is needed in the future.
Perhaps doubting can best be thought of as having diminishing returns. The first time you think to doubt a statement, it is tested, and if it has no obvious flaws one can assign it a higher probability than one which hasn’t been doubted. Additional thought returns less and less additional certainty, since it is less and less likely to disprove the statement. Eventually, the only value left is as a marker. Even then, the purpose of a red flag is to point out something that is actually uncertain, so the total value of a lingering doubt should go to zero if investigated forever.
Very well written. I just wanted to confirm something: I was under the impression that since nothing has 100% certainty, nothing can have 0% uncertainty; you can get closer and closer, but you can never actually reach it. If I’m wrong or misunderstanding this, I would appreciate it if someone would correct me. Thanks.
That’s my understanding as well. I was trying to say that, if you were to formalize all this mathematically, and took the limit as number of Bayesian updates n went to infinity, uncertainty would go to zero.
Since we don’t have infinite time to do an infinite number of updates, in practice there is always some level of uncertainty > 0%.
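A minimal sketch of that point, using exact rational arithmetic and an assumed (illustrative) likelihood ratio of 4:1 per observation: finitely many Bayesian updates drive the posterior toward certainty without ever reaching it.

```python
from fractions import Fraction

# Each observation multiplies the odds by an assumed likelihood ratio of 4:1
# in favor of the hypothesis. Exact rationals avoid floating-point rounding.
p = Fraction(1, 2)  # prior probability
for _ in range(20):
    odds = p / (1 - p) * 4  # posterior odds after one more observation
    p = odds / (1 + odds)   # convert back to a probability

assert p < 1                    # never exactly certain after finite evidence
assert p > Fraction(999, 1000)  # but arbitrarily close with enough updates
```

With floating-point probabilities the final value would round to 1.0 after enough updates, which is why the sketch uses `Fraction`; the underlying mathematical posterior stays strictly below 1 no matter how many finite updates you perform.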
There are some forms of doubt that you can easily reduce simply by adding more observations, but not all. Seeing an infinite number of white swans doesn’t help you completely rule out the black one.
MarsColony_in10years: Yeah, thanks. Sorry about the nitpicking.
ChristianKl: I think an infinite number would allow you to rule out the possibility (of a black swan that is). I thought that the problem was simply that we could never get an infinite number of them, but then again: I’m not certain.
To the extent that the phrase “an infinite number” makes sense, you can see an infinite number of white swans without seeing a black swan.
Hi,
Interesting post, but I think you forgot to account for the time issue: “An uninvestigated doubt might as well not exist” is true if the doubt stays uninvestigated forever. But if it is uninvestigated for now (even for a period of several years), the mere fact that there is a doubt means the probability of investigating later on is higher than if there were no doubt at all (P(Investigate_in_10years) and P(Doubt_now) are not independent).
For example, the fact that right now I’m doubtful about cryonics means that I may investigate the issue later on, with a higher probability than my friends who say “cryonics is nonsense.”
So I would reformulate it as:
A doubt that stays uninvestigated forever might as well not exist.
An uninvestigated doubt is only useful insofar as it may drive you to investigate in the future.
Are you sure it will do that much? Wearing a lab coat actually does increase perceptiveness (you can safely ignore everything before the experimental overview). If signaling rationality actually increases rationality, it may be useful after all. Considering that both the outward behaviors and the clothing operate through the same mechanism, namely associating oneself with that image, it’s likely that a great public display of doubt is indeed as useful as wearing a lab coat.
Instead of taking questions and asking them, take questions and identify the situations where the doubt arises, then avoid those curiosity generators, as an alternative route to curiosity stopping.
Not sure if this is the proper place to say this but your first link is broken.
http://www.yudkowsky.net/virtues/ → http://www.yudkowsky.net/rational/virtues/
Remember how, in another post, you argued that a rationalist should be able to recover his knowledge if it were taken away? I believe this is a similar approach to the one taken by these hypothetical Jesuits. In fact, I see two possible ways to explain such behavior: one could ask a physics student whether Newtonian physics were not the absolute best if one expected the student to discover relativity by themselves. Likewise, I guess the hypothetical Jesuits could want two separate benefits out of this:
- Ensuring the student is devoted/fanatic enough to join the tribe.
- Teaching the student to discover core beliefs of their faith by themselves, both reinforcing those beliefs and assuring their correctness.
So I grew up around Jesuits and, while I obviously can’t speak for all of them, I’d say that they probably qualify as proto-rationalists, if not rationalists. To the point where a large portion of other Christian sects denounce them as atheists because they refuse to wallow in mysticism like everyone else.
A core principle of the Jesuit philosophy is that God gave us our intellect specifically so that we could come to better understand him. You won’t find them trying to quibble about “micro” vs “macro” evolution or any of the other silliness that other groups use as a membership badge and try to talk in circles around. They do still believe that there is a super-natural world beyond our ability to directly observe, but everything about this world must be logically consistent and any apparent inconsistency is a flaw in your own understanding, not a flaw in the world or a “divine mystery”.
They are trained to draw a hard line between what they believe and what they know, and to treat any perceived inconsistency between the two as a reason to probe deeper until it makes sense. And any fellow Christian who gives the appearance of engaging in “belief in belief”? They’ll tear him a new one just as fast as Yudkowsky would, if not faster. They have his lack of tolerance for it, coupled with encyclopedic knowledge not only of the Bible’s contents, but also generally of practically every work by every significant Christian and major pagan philosopher before or since.
I suppose a good way to explain the fundamental difference is that where most Christian sects believe that certain things are true because they are in the Bible, the Jesuits would say that the stories in the Bible were selected because they teach a fundamental truth or two. Were it not for the weight of Catholic tradition, I strongly suspect many Jesuits would be in favor of continuing to add to the anthology that is the Bible as we develop better stories for teaching the desired lessons. Or, at least, developing an updated one that would make sense to a modern reader without having to spend decades studying all the cultural context necessary to understand what’s going on. I first heard the observation that “The Lord of the Rings is a fundamentally Christian story and worldview, just dressed up in different mythology” from a Jesuit for example.
Definitely interesting people and nearly always worth developing a relationship with when you can. And while they’ll try to convert you, they’ll do it by presenting logical arguments, not by shouting and hitting you with a large book. They’ll take what they consider to be the core lessons and principles of Christianity and recompute how to explain them couched in your own world view. And if you end up agreeing on everything but the mythology? Well that’s good enough.
I’m instantly thinking about politics: there are many cases where you cannot build a clear model of what exactly is going on in a given government, and the information is not only non-transparent but also degraded by noise, intentional and otherwise. I think it’s reasonable to maintain the feeling of doubt while ruminating on such topics.